text (string, 87 to 777k chars) | meta.hexsha (string, 40) | meta.size (int64, 682 to 1.05M) | meta.ext (1 class) | meta.lang (1 class) | meta.max_stars_repo_path (string, 8 to 226) | meta.max_stars_repo_name (string, 8 to 109) | meta.max_stars_repo_head_hexsha (string, 40) | meta.max_stars_repo_licenses (list, 1 to 5) | meta.max_stars_count (int64, 1 to 23.9k, nullable) | meta.max_stars_repo_stars_event_min_datetime (string, 24, nullable) | meta.max_stars_repo_stars_event_max_datetime (string, 24, nullable) | meta.max_issues_repo_path (string, 8 to 226) | meta.max_issues_repo_name (string, 8 to 109) | meta.max_issues_repo_head_hexsha (string, 40) | meta.max_issues_repo_licenses (list, 1 to 5) | meta.max_issues_count (int64, 1 to 15.1k, nullable) | meta.max_issues_repo_issues_event_min_datetime (string, 24, nullable) | meta.max_issues_repo_issues_event_max_datetime (string, 24, nullable) | meta.max_forks_repo_path (string, 8 to 226) | meta.max_forks_repo_name (string, 8 to 109) | meta.max_forks_repo_head_hexsha (string, 40) | meta.max_forks_repo_licenses (list, 1 to 5) | meta.max_forks_count (int64, 1 to 6.05k, nullable) | meta.max_forks_repo_forks_event_min_datetime (string, 24, nullable) | meta.max_forks_repo_forks_event_max_datetime (string, 24, nullable) | meta.avg_line_length (float64, 15.5 to 967k) | meta.max_line_length (int64, 42 to 993k) | meta.alphanum_fraction (float64, 0.08 to 0.97) | meta.converted (bool) | meta.num_tokens (int64, 33 to 431k) | meta.lm_name (1 class) | meta.lm_label (3 classes) | meta.lm_q1_score (float64, 0.56 to 0.98) | meta.lm_q2_score (float64, 0.55 to 0.97) | meta.lm_q1q2_score (float64, 0.5 to 0.93) | text_lang (53 classes) | text_lang_conf (float64, 0.03 to 1) | label (float64, 0 to 1) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
# Module 1: Setting up the problem
Before we begin, import SimPEG into the IPython notebook as follows:
```
from SimPEG import *
from IPython.html.widgets import interactive
```
Efficiency Warning: Interpolation will be slow, use setup.py!
python setup.py build_ext --inplace
**Introduction**<br>
Every geophysical survey consists of a similar basic framework. An energy source is delivered into the earth, and an array of receivers picks up a signal at the surface...
<hr>
**Module summary:** <br>
(1) Start with an expression that relates a kernel function with the continuous distribution of a physical property. <br>
(2) Discretize this expression. <br>
(3) Define a mesh that organizes the data. <br>
(4) Build up the matrix equation $d = Gm$. <br>
<hr>
**Step 1: Physical property distribution and the kernel function.** <br>
1.1 define kernel function (...add description) <br>
1.2 define the model. (...add description) <br>
Each datum can be expressed as the inner product of the kernel function and the model:<br>
*--the integral needs a physical description--*
\begin{equation}d_j = \int_a^b g(x) m(x) dx \end{equation}
...each data point is a measure of a physical property of the entire earth at that point. Consequently it is the sum total (and is therefore the integral) of the influence of the material property specified by the model,
<hr>
**Step 2: Discretize the function for each datum.** <br>
\begin{equation}d_i = \sum_{j=1}^N g_i (x_j) m_j \Delta x\end{equation}<br>
We can then gather all the data (the $d_i$'s) into a column vector (and say we have $M$ data). Similarly, we assemble each kernel function as a row in a matrix, $\widetilde{G}$, and our model parameters $m$ into a column vector of length $N$. For the spacing increments, we require that we obtain a single output for each $\Delta x$ in the kernel function. It follows that this is best represented as a diagonal matrix, with a $\Delta x$ on every diagonal entry. Combining all of the aforementioned yields the following vector equation:<br>
\begin{equation}
\left[
\begin{array}{c}
d_1\\
d_2\\
\vdots\\
d_M
\end{array}
\right]
=
\begin{bmatrix}
g_1(x_1) & g_1(x_2) & g_1(x_3) & \dots & g_1(x_N) \\
g_2(x_1) & g_2(x_2) & g_2(x_3) & \dots & g_2(x_N) \\
\vdots & \vdots & \vdots & \vdots & \vdots \\
g_M(x_1) & g_M(x_2) & g_M(x_3) & \dots & g_M(x_N) \\
\end{bmatrix}
\begin{bmatrix}
\Delta x_1 & 0 & \dots &0 \\
0 & \Delta x_2 & \dots & 0 \\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \dots & \Delta x_N \\
\end{bmatrix}
\left[
\begin{array}{c}
m_1\\
m_2\\
\vdots\\
m_N
\end{array}
\right]
\end{equation} <br>
Or put more succinctly: $d = \widetilde{G} diag(\Delta x) m$.
Next, set up "toy problem" with $d_1$, $d_2$ and $j=1:5$.
<hr>
**Step 3: Define a mesh.**<br>
While the previous section describes how the data is converted from a continuous function in physical space into discrete data "chunks," what it does not address is the manner in which it is to be represented on a computer so that it is available for processing. <br>
3.1 The physical property of interest lies in the cell centers. (...add sketch) <br>
3.2 The kernel functions reside on the nodes (...add sketch) <br>
<hr>
**Step 4: Build up the matrix equation $d=Gm$.** <br>
Note again that the kernel functions "live" on the nodes and the model values in the cell centers. To achieve the inner product of the kernel function with the model parameters we need to reformulate the $G$ matrix to obtain values for the kernel functions at the cell centers.<br>
4.1 Define the matrix $G_n$ (as in "n" for "nodes") as the matrix containing the values for the kernel functions on the nodes. Given that we have $M$ data points and seek $N$ model parameters, it follows that the dimensions of $G_n$ will be $M \times (N+1)$. Schematically, $G_n$ will appear like this: <br> <br>
\begin{equation}
G_n =
\begin{bmatrix}
g_{n_1}^1 & g_{n_2}^1 & g_{n_3}^1 & \dots & g_{n_{N+1}}^1 \\
g_{n_1}^2 & g_{n_2}^2 & g_{n_3}^2 & \dots & g_{n_{N+1}}^2 \\
\vdots & \vdots & \vdots & \vdots & \vdots \\
g_{n_1}^M & g_{n_2}^M & g_{n_3}^M & \dots & g_{n_{N+1}}^M
\end{bmatrix}
\end{equation} <br>
In order to evaluate the kernel functions at the cell centers, we will employ the trapezoidal rule: the value at each cell center is taken as the average of the kernel function values at the two adjacent nodes. <br>
```
import numpy as np
```
```
m=5
n=6
w=n-1
s = (m,n)
A = np.zeros(s)   # averaging matrix: rows are cell centers, columns are nodes
for i in range(m):
    j=i
    k=i+1
    A[i,j] = 0.5
    A[i,k] = 0.5
print(A)
```
[[ 0.5 0.5 0. 0. 0. 0. ]
[ 0. 0.5 0.5 0. 0. 0. ]
[ 0. 0. 0.5 0.5 0. 0. ]
[ 0. 0. 0. 0.5 0.5 0. ]
[ 0. 0. 0. 0. 0.5 0.5]]
```
g = lambda x, k, p, q: np.exp(-p*k*x)*np.cos(2*np.pi*q*k*x)
x=np.linspace(0,1,6)
#x = np.array([0., 0.2, 0.4, 0.6, 0.8, 1.])
p = 0.01
q = 0.1
k = np.array([1, 2, 3, 4, 5, 6])
Gn = np.zeros((len(x), len(k)))
for i in range(len(k)):
    f = g(x,k[i],p,q)
    Gn[:,i] = f
    #print f
print(Gn)
```
[[ 1. 1. 1. 1. 1. 1. ]
[ 0.99013245 0.96471657 0.92421453 0.86932419 0.80096714 0.72027328]
[ 0.96471657 0.86932419 0.72027328 0.52732179 0.30289805 0.06130149]
[ 0.92421453 0.72027328 0.41818383 0.06130149 -0.29988416 -0.61488486]
[ 0.86932419 0.52732179 0.06130149 -0.41237005 -0.77729498 -0.94561804]
[ 0.80096714 0.30289805 -0.29988416 -0.77729498 -0.95122942 -0.76190351]]
```
```
```
f = lambda x, y : x + y
f(1,1)
```
2
```
# make the delta x array
deltax = 0.2*np.ones(m)
V = np.diag(deltax)
print(V)
```
[[ 0.2 0. 0. 0. 0. ]
[ 0. 0.2 0. 0. 0. ]
[ 0. 0. 0.2 0. 0. ]
[ 0. 0. 0. 0.2 0. ]
[ 0. 0. 0. 0. 0.2]]
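The draft stops before assembling the final product. A minimal sketch of the remaining step (using hypothetical model values; the name `m_model` is chosen here to avoid clashing with the integer `m` defined above) is:
```
# hypothetical model values on the five cell centers
m_model = np.array([0.02, 0.05, 0.09, 0.07, 0.04])
# d = (A Gn)^T diag(delta x) m, one datum per kernel function
d = np.dot(np.dot(np.dot(A, Gn).T, V), m_model)
print(d)
```
With the six kernel functions defined above, this produces six data values.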
```
```
| f48bab604fde4d9d093cc731b13f41772d91afdc | 10,512 | ipynb | Jupyter Notebook | .ipynb_checkpoints/Module 1-checkpoint.ipynb | jokulhaup/directed_studies | 99f0e6e8cc7010d34db1d9bc37988e4944f66826 | ["MIT"] | 2 | 2017-10-08T02:10:35.000Z | 2017-10-18T17:49:21.000Z | .ipynb_checkpoints/Module 1-checkpoint.ipynb | jokulhaup/directed_studies | 99f0e6e8cc7010d34db1d9bc37988e4944f66826 | ["MIT"] | null | null | null | .ipynb_checkpoints/Module 1-checkpoint.ipynb | jokulhaup/directed_studies | 99f0e6e8cc7010d34db1d9bc37988e4944f66826 | ["MIT"] | 3 | 2016-09-01T20:38:20.000Z | 2020-05-13T22:19:16.000Z | 31.473054 | 550 | 0.475171 | true | 2,135 | Qwen/Qwen-72B | 1. YES 2. YES | 0.909907 | 0.849971 | 0.773395 | __label__eng_Latn | 0.93898 | 0.635187 |
# 6. Internal dynamic factor
Based on:
[1] ISO 6336-1:2006 Calculation of load capacity of spur and helical gears -- Part 1: Basic principles, introduction and general influence factors
```python
from sympy import *
from matplotlib import pyplot
from numpy import arange
init_printing()
def symb(x, y, z = ''):
    return symbols('{0}_{1}{2}'.format(x, y, z), type = float)
```
### 6.4.2 Method B -- Factor $K_{v-B}$
This method is not recommended if:
\begin{align}
\frac{v z_1}{100} \sqrt{u^2/(1 + u^2)} < 3 \mathrm{m/s}
\end{align}
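As a quick numeric illustration of this criterion (the values for $v$, $z_1$ and $u$ below are hypothetical, not taken from the standard):
```python
from math import sqrt

v  = 10.0   # pitch line velocity, m/s (hypothetical)
z1 = 17     # number of pinion teeth (hypothetical)
u  = 4.0    # gear ratio (hypothetical)

ratio = v*z1/100*sqrt(u**2/(1 + u**2))
print(ratio)      # ~1.65 m/s for these numbers
print(ratio < 3)  # True -> Method B is not recommended for this (hypothetical) gear set
```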
## 6.5 Determination using Method B
### 6.5.3 Resonance running speed:
It is given by:
```python
u, z1, cga, mRed = symbols('u z_1 c_gamma_alpha m_red')
eq6 = 30000/(pi*z1)*sqrt(cga/mRed) # n_E1
eq6
```
where $m_{red}$ is the reduced (relative) mass of a gear pair, i.e. the mass per face width of each gear, referred to its base radius or to the line of action. It is equal to:
```python
eq7 = symb('m',1,'^*')*symb('m',2,'^*')/(symb('m',1,'^*') + symb('m',2,'^*'))
eq7
```
The resonance ratio is the ratio of pinion speed to resonance speed, and it is given by:
```python
simplify(symb('n', 1)/eq6)
```
```python
from numpy import array  # 'array' was not imported in the first cell

NS = lambda x: (0.85 if (x >= 100) else 0.5 + 0.35*sqrt(x/100))
X = arange(0, 200.0)
Y = array([float(NS(x)) for x in X])  # cast the sympy values to plain floats
pyplot.plot(X, Y)                     # 'plotly' is undefined here; matplotlib's pyplot is used instead
```
### 6.5.9
Valid for gear pairs with external teeth: spur, single and double helical.
```python
dm1, db1, rho1, q1, rho2, q2, rho = symbols('d_m1 d_b1 rho_1 q_1 rho_2 q_2 rho')
eq30r = dm1**2/(1/(rho1*(1 - q1**4)) + 1/(rho2*(1 - q2**4)*u**2))
eq30l = (pi/8)*(dm1/db1)**2
eq30 = eq30l*eq30r
eq30
```
for pinions and wheels of solid construction: $q_1 = q_2 = 0$.
```python
eq30 = eq30.subs([(q1, 0), (q2,0)])
simplify(eq30)
```
Considering that the gears have the same density, leads to:
```python
eq30 = eq30.subs([(rho1, rho), (rho2, rho)])
simplify(eq30)
```
Notice that although this expression represents the reduced mass of a gear pair, it does not have units of mass.
```python
```
| 74dfab5f15cd456569d2b848e4d6c70db4d0b02f | 25,327 | ipynb | Jupyter Notebook | notes/.ipynb_checkpoints/internal_dynamic_factor-checkpoint.ipynb | gfsReboucas/Drivetrain-python | 90cc8a0b26fa6dd851a8ddaaf321f5ae9f5cf431 | ["MIT"] | 1 | 2020-10-17T13:43:01.000Z | 2020-10-17T13:43:01.000Z | notes/.ipynb_checkpoints/internal_dynamic_factor-checkpoint.ipynb | gfsReboucas/Drivetrain-python | 90cc8a0b26fa6dd851a8ddaaf321f5ae9f5cf431 | ["MIT"] | null | null | null | notes/.ipynb_checkpoints/internal_dynamic_factor-checkpoint.ipynb | gfsReboucas/Drivetrain-python | 90cc8a0b26fa6dd851a8ddaaf321f5ae9f5cf431 | ["MIT"] | null | null | null | 78.900312 | 3,268 | 0.772417 | true | 718 | Qwen/Qwen-72B | 1. YES 2. YES | 0.91118 | 0.839734 | 0.765149 | __label__eng_Latn | 0.912823 | 0.616029 |
## Exercise 1 : LDA Classification
From the last assignment, we have a basic understanding of how LDA works. Here, we want to use LDA on a practical example and see how it can help us in the classification process. Besides dimensionality reduction, LDA provides us with information about how important the new axes are for class discrimination, and it offers a great way to show us whether higher dimensional data is actually linearly separable.
We are going to work with the Statlog (Vehicle Silhouettes) dataset throughout this exercise. It classifies four types of vehicles (OPEL, SAAB, BUS, VAN) based on features extracted from the corresponding 2D silhouette from an image of the vehicle (but only the silhouette metrics are present in the dataset). In total, it measures k = 18 features for n = 846 observations, each assigned to one of the four classes. The dataset is also summarized in Table 1.
There are two additional files accompanying this exercise: train.csv, which contains 752 observations to be used for training, and test.csv, which contains 94 observations extracted to test the classifier.
Our goal is to test a classifier on this dataset and to use LDA as a preprocessing step. The general idea is to reduce the dimensions with LDA and pass the resulting projections to a classifier. For the latter, we are using an SVM.
In order to use LDA here, we first need to generalize the scatter matrices to C-class problems since the approach we have discussed so far is designed for a two-class problem only. The within-class scatter matrix is easily extended by just adding up the covariance matrices of all classes
\begin{equation} S_W = \Sigma = \sum_{i=1}^C {\Sigma}_i \end{equation}
For the between-class scatter matrix, however, we need a new concept since we now have multiple mean vectors $\mu_i$ (one for each class). The idea is to compare the mean of each class with the overall mean
\begin{equation} \mu = \frac{1}{C} \sum_{i=1}^C\mu_i \end{equation}
of all observations (cf. Figure 1). Mathematically, we end up with something like
\begin{equation} S_B = \sum_{i=1}^C N_i(\mu_i - \mu)(\mu_i - \mu)^T \end{equation}
where we sum up the scatter of each class mean around the overall mean, weighting each matrix with the number $N_i$ of points in the class. This incorporates a weighting process where classes with more points contribute more to the resulting matrix.
1. [Pen and Paper] In the previous assignment, the matrix $S_B$ had rank 1 as there was only a single linearly independent vector in $S_B$. We now want to clarify how this situation changes when we use the general definition of Equation 3 which includes $C$ classes. To answer this question, we need to find the number of linearly independent vectors of $S_B$. Here, the main ingredients of $S_B$ are the difference vectors $\mu_i - \mu$ (defined as column vectors). For simplicity, we discard the other terms and factors as they do not provide new information to our vector system. Then, we can define a new matrix which contains all the information.
\begin{equation} A = (\mu_1 - \mu \hspace{0.5cm} \mu_2 - \mu \hspace{0.5cm} ... \hspace{0.5cm}\mu_C - \mu) \in \mathbb{R}^{k*C} \end{equation} <br/>
What is the highest possible rank of the matrix A? Hint: look at the column vectors of A
and Equation 2.
2. [Python] Let's start with the implementation. We first want to write a function LDA(data, labels) which performs LDA on a labelled dataset (arbitrary dimension and class size); a minimal sketch of such a function is given after this exercise list. Similar to principal component analysis (covered later), this function should return three values:
$\hspace{0.5cm}$*The coefficients of the new coordinate system. This is a matrix with the eigenvectors in the columns.*<br/>
$\hspace{0.5cm}$*The projections of the data points relative to the new coordinate systems.*<br/>
$\hspace{0.5cm}$*The contribution of each new axis to the class separability. This is called the latent and corresponds to the eigenvalues.*<br/><br/>
To compute these variables, we implement the procedure of the last assignment with the extensions to deal with multiple classes.
a) Calculate the two scatter matrices $S_W$ and $S_B$ according to Equation 1 and Equation 3 by iterating over all classes and summing up the individual scatter matrices. Useful functions: np.mean, np.cov, np.outer.
b) Set up the matrix $S_W^{-1} S_B$. Use the (Moore-Penrose) pseudo-inverse (np.linalg.pinv) instead of the normal matrix inverse to improve the stability of your function.
c) Calculate the eigenvalues and corresponding eigenvectors of the matrix $S_W^{-1} S_B$ and sort the result in descending order of the eigenvalues. You can use the np.linalg.eig function, but you have to do the sorting yourself.
d) With the help of the eigenvectors $u_i$ stored in the columns of the matrix $U$, compute the projections $\tilde{x}$ for each data point $x$ (stored in the rows of $X \in \mathbb{R}^{n \times d}$)
\begin{equation} \tilde{X} = X U \end{equation}
3. [Python] Write a function which loads the content of one of the *.csv files into your Python program. The last column of the array contains the class label of each observation (encoded as an integer, cf. Table 1). Store both the data and the labels in a Numpy array. Hint: set the data types to np.float64.
4. [Python] Training phase: use the content of the train.csv file in the following sub- tasks.
a) The eigenvectors $u_i$ are the axes of the new coordinate system designed for class discrimination and the eigenvalues $\lambda_i$ denote how important each axis actually is to discriminate the data. This is very useful to specify the dimension of our new feature space. Print the eigenvalues as “percentage explained”, i.e. what is the contribution of each eigenvalue relative to the available class discrimination
\begin{equation} \tilde{\lambda}_i = |\frac{\lambda_i}{\sum_{j=1}^k \lambda_j}|\end{equation}
b) Based on the previous result, choose the dimensionality $d < C - 1 = 3$ of the new feature space, i.e. how many axes you want to use to describe your data in a class discriminative way.
c) Train an SVM classifier with the $d$ components of the projected data points $\tilde{x}$. In order to reduce the values to real numbers, you can use (here) the sum of both parts $\mathrm{re}(\tilde{x}) + \mathrm{im}(\tilde{x})$. For training, the svm object from the sklearn package with default parameters is suitable for the job (but you can experiment with different kernels if you like).
d) Remember that in its raw form the dataset contains 18 different features. If you had to decide about the separability of the data, you would be left with a hard job since there is no direct way to visualize the data. Luckily, you chose d to be much smaller.
Create a plot of the data you used to train the classifier, i.e. the corresponding projections, as well as the resulting decision surfaces of the SVM. Figure 2 shows a possible solution and the following list contains some general recommendations:
+ Write a function as you need the same plot again later in the testing phase.
+ For the decision surfaces, it is helpful to generate a meshgrid of the relevant area and predict a label for each point in the grid. Then, the result can be visualized with the contour and contourf functions.
+ It is possible to combine multiple scatter plots. This is useful to plot all projections of one class at a time.
5. [Python] Testing phase: use the content of the test.csv file in the following subtasks.
a) Project all your test data to your d-dimensional LDA feature space.
b) Predict labels for the test data with the help of the predict function.
c) Since we also have labels for our test data, we can compare the predictions. A useful tool here is a confusion matrix where each entry $c_{ij}$ denotes how many observations from class $i$ are classified to class $j$. The diagonal elements contain, therefore, the number of correctly classified elements for each class $i$. The function confusion_matrix from sklearn.metrics is suitable for this job.
d) Plot the test projections together with the decision surfaces of the SVM (similar to the training phase).
e) Explain why the confusion matrix is in coherence with the result of Figure 2.
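A minimal NumPy sketch of the `LDA(data, labels)` routine asked for in task 2 (an illustration only, not the official solution of the assignment) could look as follows:
```python
import numpy as np

def LDA(data, labels):
    """LDA for an arbitrary number of classes.

    Returns (coefficients, projections, latent): eigenvectors in the columns,
    the projected data points, and the eigenvalues sorted in descending order.
    """
    classes = np.unique(labels)
    means = {c: data[labels == c].mean(axis=0) for c in classes}
    mu = np.mean(list(means.values()), axis=0)        # overall mean (Equation 2)
    k = data.shape[1]
    Sw = np.zeros((k, k))                             # within-class scatter (Equation 1)
    Sb = np.zeros((k, k))                             # between-class scatter (Equation 3)
    for c in classes:
        Xc = data[labels == c]
        Sw += np.cov(Xc, rowvar=False)
        diff = means[c] - mu
        Sb += Xc.shape[0] * np.outer(diff, diff)
    # eigen-decomposition of pinv(Sw) Sb, sorted by descending eigenvalue
    eigval, eigvec = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(eigval.real)[::-1]
    coeff = eigvec[:, order]
    latent = eigval[order]
    projections = data @ coeff                        # project onto the new axes
    return coeff, projections, latent
```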
| a7c218908deef46d674c5d895b14d809272ae246 | 9,713 | ipynb | Jupyter Notebook | 07_lda_classification/lda_classification.ipynb | jhinga-la-la/pattern-recognition-course | 7ad4f70b2c427f3c37f59f47768b90371873823c | ["Apache-2.0"] | null | null | null | 07_lda_classification/lda_classification.ipynb | jhinga-la-la/pattern-recognition-course | 7ad4f70b2c427f3c37f59f47768b90371873823c | ["Apache-2.0"] | null | null | null | 07_lda_classification/lda_classification.ipynb | jhinga-la-la/pattern-recognition-course | 7ad4f70b2c427f3c37f59f47768b90371873823c | ["Apache-2.0"] | null | null | null | 74.145038 | 636 | 0.695357 | true | 1,918 | Qwen/Qwen-72B | 1. YES 2. YES | 0.863392 | 0.861538 | 0.743845 | __label__eng_Latn | 0.999611 | 0.566533 |
# Module 1: Setting up the problem
### Introduction
Geophysical surveys consist of a similar basic framework. An energy source is delivered into the earth, which can be natural (for example, the Earth's magnetic field) or human-made (current in the ground, acoustic wave energy, etc.), and this stimulates a response according to the variation in physical properties in the subsurface. At the surface, receivers pick up a signal and record this as data. <br>
<br>
The goal of inversion is to find a model of the physical property distribution in the earth that produced the data. This is a difficult process because (1) information about a physical property for each datum is encoded in a complex way, and (2) we have a finite amount of data and cannot represent the physical property distribution everywhere. <br>
Inversion is a multistep process, often represented as a workflow *insert workflow image here*. The goal of this module is to cover the first section of the workflow, which discretizes the data and places the values of the functions onto a mesh. This will be done using the following steps:<br>
(1) Start with an expression that relates a kernel function with the continuous distribution of a physical property. <br>
(2) Discretize this expression, and introduce a simple example problem to illustrate the mathematics in detail. <br>
(3) Define a mesh that organizes the data. <br>
(4) Build up the matrix equation $d = Gm$. <br>
(5) Generalize the form of the problem from the example. <br>
(6) Implement the example problem in Python as a forward problem.
But first, here are some fundamental definitions:<br>
The general mathematical description of the inverse problem can be written as follows:<br>
\begin{equation} F_j[m]= d_j +n_j \quad \text{for} \quad j=1,..,N \; \text{where}\end{equation} <br>
<ul>
<li>$\bf{F}$ is a foward modeling operator. $\bf{F}$ incorporates the survey design and the physics of the problem. </li>
<li>$\bf{m}$ is a generic symbol for a physical property distibution.</li>
<li>$\bf{d}$ represents the observed data, (sometimes also represented as $\bf{d^{obs}}$).</li>
<li>$\bf{n}$ is a term that represents additive noise.</li>
</ul> <br>
This is, of course, the most general formulation of the problem. In this module we will consider the simplest case, which is (a) one dimensional (this can be likened to a survey that varies as a function of depth only) and (b) linear, in which our forward modeling operator becomes a matrix **G** in the matrix equation: <br>
\begin{equation}d = Gm\end{equation}
### Step 1: Physical property distribution and the kernel function.
Each datum ($d_i$) collected is a volumetric response, that is to say, every datum measures the response of the whole volume (within the range of the system); it records the superposition of effects caused by all the material in the ground, and is therefore naturally represented as an integral. A "kernel function" or "sensitivity function", $g(x,y,z)$, shows how a datum is affected by all the subsurface. It describes the physics of the problem. The model, $m(x,y,z)$, represents the distribution of a physical property in the volume. Since each datum measures the response of the kernel function with the physical property distribution in the volume, for a continuous medium we can express this relationship as the inner product of the kernel function and the model. In the one dimensional case the expression for the *ith* datum is written as:<br>
\begin{equation}d_i = \int_a^b g_i(x) m(x) dx \end{equation} <br>
where again:
\begin{equation}
d :=\text{measured data} \\
g :=\text{kernel function} \\
m :=\text{physical property}\\
\end{equation}
### Step 2: Discretize the expression
While the integral expression describes the inner product in a continuum, it is not possible to maintain this representation in our data space, so the above equation must be discretized. Let us consider the case where we have $N$ data, and as we build up the matrix equation, let $i$ be the rows and $j$ be the columns of our matrix: <br>
\begin{equation}d_i = \int_a^b g_i(x) m(x) dx \; \Rightarrow \; \sum_{j=1}^N g_i (x_j) m_j \Delta x\end{equation} <br>
**A "Toy Problem"** <br>
Consider a simple case where we have a one dimensional, linear problem that will generate two data points, $d_1$ and $d_2$, from five physical property values <br> ($m_1, m_2, m_3, m_4, m_5$), and these two data points are generated using two kernel functions $g_1$ and $g_2$. Further, let us assume that our domain of interest lies on the interval [0,1]. The equation above is then expressed as the following two equations, one for each datum:<br>
\begin{equation}d_1 = \int_0^1 g_1(x) m(x) dx \; \Rightarrow \; \sum_{j=1}^N g_1 (x_j) m_j \Delta x\end{equation} <br>
\begin{equation}d_2 = \int_0^1 g_2(x) m(x) dx \; \Rightarrow \; \sum_{j=1}^N g_2 (x_j) m_j \Delta x\end{equation} <br>
Given that our problem is small by design, it is instructive to visualize the summation notation as matrix-vector products. Doing so yields the following expressions:<br>
(a) Data. We can collect our two data points into a column vector. Our data in vector notation is given by:<br>
\begin{equation}
d = \left[
\begin{array}{c}
d_1\\
d_2
\end{array}
\right]
\end{equation}<br>
(b) The x-spacing. The x-spacings, $\Delta x$, are represented by a diagonal matrix. Here we are in a one dimensional case, but in general our data will reside in a volume, so let this matrix be represented as $V$. In the most general situation, the x-spacings need not be equal, and there can be significant variation in the distances within a grid, depending on the amount of resolution one desires at a particular location. For the moment, let's ignore this complexity and assume equal spacing in our grid. Then let $V=diag(\Delta x)$, and given the dimensions of our problem, $V$ appears as follows:<br>
\begin{equation}
V=\begin{bmatrix}
\Delta x &0 &0 &0 &0 \\
0 &\Delta x &0 &0 &0 \\
0 &0 &\Delta x &0 &0 \\
0 &0 &0 &\Delta x &0 \\
0 &0 &0 &0 &\Delta x\\
\end{bmatrix}
\end{equation} <br>
(c) The model. The
model, $m$, is a column vector with five rows:
\begin{equation}
m=
\left[
\begin{array}{c}
m_1\\
m_2\\
m_3\\
m_4\\
m_5
\end{array}
\right]
\end{equation} <br>
(d) The kernel functions. Recall that a kernel function shows how a datum is affected by all the subsurface. The full expression for the kernel function will be developed further in the next sections, but for the moment, let us put each kernel function on a row of a matrix, such that $g_1$ will be on row 1, and $g_2$ on row 2, and define this matrix as $\widetilde{G}$. This will yield the matrix:
\begin{equation}
\widetilde{G}=
\begin{bmatrix}
g_{11} &g_{12} &g_{13} &g_{14} &g_{15}\\
g_{21} &g_{22} &g_{23} &g_{24} &g_{25}
\end{bmatrix}
\end{equation}
Using the above, the assembled matrix vector forms for these equations becomes:<br>
\begin{equation}
\left[
\begin{array}{c}
d_1\\
d_2\\
\end{array}
\right]
=
\begin{bmatrix}
g_{11} &g_{12} &g_{13} &g_{14} &g_{15}\\
g_{21} &g_{22} &g_{23} &g_{24} &g_{25}
\end{bmatrix}
\begin{bmatrix}
\Delta x &0 &0 &0 &0 \\
0 &\Delta x &0 &0 &0 \\
0 &0 &\Delta x &0 &0 \\
0 &0 &0 &\Delta x &0 \\
0 &0 &0 &0 &\Delta x\\
\end{bmatrix}
\left[
\begin{array}{c}
m_1\\
m_2\\
m_3\\
m_4\\
m_5
\end{array}
\right]
\end{equation}
Or to put it succinctly, $d = \widetilde{G} \, diag(\Delta x)\, m$.
### Step 3: Set up the mesh
Now that the data has been discretized, let's look at where the data is to be placed in our data space. First, subdivide the one dimensional domain into cell centers and nodes, with spacings of $\Delta x$: <br>
For convenience (and for applications that will be discussed in later modules), the model values reside in the cell centers, while the values for the kernel functions reside on the nodes. <br>
<br>
Note that the kernel function in our example (represented above) has six values and the model values are five; moreover, the x-coordinates where $g(x) $ and $m(x)$ are evaluated are not coincident, and this leads to a complication when we want to perform the inner product of **m** and **g**. What is needed is a way to evaluate the kernel functions at the cell centers. To do this, we employ the trapezoidal rule for approximating integration. Simply put, to obtain the values of the kernel function on the cell centers, take the average of the kernel function values on the adjacent nodes. Let us then define two kernel function matrices, one with values evaluated on the cell centers, $G_c$ (represented by the black circles below), and another with the values evaluated on the nodes, $G_n$ (in white).
The relationship between $G_c $ and $G_n$ is an "averaging matrix", $A_v$, such that $G_c = A_v G_n$. Putting each kernel function in the columns for both $G_c$ and $G_n$, and using again the dimensions of our toy problem, this relation appears as follows:
\begin{equation}
\begin{bmatrix}
g_{c1}(x_1) & g_{c2}(x_1) \\
g_{c1}(x_2) & g_{c2}(x_2) \\
g_{c1}(x_3) & g_{c2}(x_3) \\
g_{c1}(x_4) & g_{c2}(x_4) \\
g_{c1}(x_5) & g_{c2}(x_5) \\
\end{bmatrix}
=
\frac{1}{2}
\begin{bmatrix}
1 & 1 & 0 & 0 & 0 & 0\\
0 & 1 & 1 & 0 & 0 & 0\\
0 & 0 & 1 & 1 & 0 & 0\\
0 & 0 & 0 & 1 & 1 & 0\\
0 & 0 & 0 & 0 & 1 & 1\\
\end{bmatrix}
\begin{bmatrix}
g_{n1}(x_1) & g_{n2}(x_1) \\
g_{n1}(x_2) & g_{n2}(x_2) \\
g_{n1}(x_3) & g_{n2}(x_3) \\
g_{n1}(x_4) & g_{n2}(x_4) \\
g_{n1}(x_5) & g_{n2}(x_5) \\
g_{n1}(x_6) & g_{n2}(x_6) \\
\end{bmatrix}
\end{equation} <br>
Meanwhile, the relationship between $G_c$ and $\widetilde{G}$ in step 2 is such that $G_c = \widetilde{G}^T$.
### Step 4: Build up the matrix equation $d=Gm$¶
We now have all the required building blocks to assemble the matrix equation. At the end of step 2 we arrived at $d = \widetilde{G} \, diag(\Delta x)\, m$, and from the previous step we have $G_c = \widetilde{G}^T = A_v G_n$. Put more orderly, $\widetilde{G} = (A_v G_n)^T$, so substituting this into our expression gives:
\begin{equation}
d = (A_v G_n)^T \, diag(\Delta x)\, m
\end{equation}<br>
If we group matrices together and let $G = (A_v G_n)^T diag(\Delta x)$ then we arrive at $d=Gm$, our desired form. Referring again to our example, this would appear as follows:
\begin{equation}
\left[
\begin{array}{c}
d_1\\
d_2\\
\end{array}
\right]
=
\frac{1}{2}
\left(
\begin{bmatrix}
1 & 1 & 0 & 0 & 0 & 0\\
0 & 1 & 1 & 0 & 0 & 0\\
0 & 0 & 1 & 1 & 0 & 0\\
0 & 0 & 0 & 1 & 1 & 0\\
0 & 0 & 0 & 0 & 1 & 1\\
\end{bmatrix}
\begin{bmatrix}
g_{n1}(x_1) & g_{n2}(x_1) \\
g_{n1}(x_2) & g_{n2}(x_2) \\
g_{n1}(x_3) & g_{n2}(x_3) \\
g_{n1}(x_4) & g_{n2}(x_4) \\
g_{n1}(x_5) & g_{n2}(x_5) \\
g_{n1}(x_6) & g_{n2}(x_6) \\
\end{bmatrix}
\right)^T
\begin{bmatrix}
\Delta x &0 &0 &0 &0 \\
0 &\Delta x &0 &0 &0 \\
0 &0 &\Delta x &0 &0 \\
0 &0 &0 &\Delta x &0 \\
0 &0 &0 &0 &\Delta x\\
\end{bmatrix}
\left[
\begin{array}{c}
m_1\\
m_2\\
m_3\\
m_4\\
m_5
\end{array}
\right]
\end{equation} <br>
### Step 5: Generalize the form of the problem
In the above case we had two data points and five model values. We can generalize this to larger data sets easily. In the case where we have $M$ measured data and $N$ model values, we obtain a matrix equation of the following dimensions:
\begin{equation}
(M \times 1) = [(N \times (N+1)) \; ((N+1) \times M)]^T (N \times N) (N \times 1)\\
d \qquad = \qquad \qquad \qquad \quad [A_v G_n]^T \qquad \qquad diag(\Delta x) \quad m
\end{equation}
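As a sketch of this general case (an illustration only, not part of the SimPEG package), the whole forward operation can be wrapped into one small function:
```
import numpy as np

def forward(Gn, dx, m):
    # Forward model d = (Av Gn)^T diag(dx) m for the 1D problem.
    # Gn : (N+1) x M array of kernel values on the nodes
    # dx : length-N array of cell widths
    # m  : length-N array of model values on the cell centers
    N = len(m)
    Av = np.zeros((N, N + 1))            # averaging matrix: nodes -> cell centers
    for i in range(N):
        Av[i, i] = 0.5
        Av[i, i + 1] = 0.5
    return np.dot(np.dot(np.dot(Av, Gn).T, np.diag(dx)), m)
```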
### Step 6: Implement the example problem in Python.
Next, we will build up our matrices one by one in Python. First import the SimPEG and numpy packages:
```
from SimPEG import *
from IPython.html.widgets import interactive
import numpy as np
```
Let's start by formulating the forward problem, that is, let us assume that we have our model values $m$ and seek to generate synthetic data $d$. Recall that we are building up the matrix equation: $d = (A_v G_n)^T \, diag(\Delta x)\, m$, so let's start with the right hand side and go over each matrix, one by one. <br>
(1) The $A_v$ matrix. We can build the "averaging matrix" as follows:
```
N=5       # Recall, N is the number of model values in m.
n=N+1     # Define n as the N+1 dimension of the matrix
w=n-1     # Define w as the n-1 dimension of the matrix
s = (N,n) # Store the matrix dimensions
Av = np.zeros(s) # Create a matrix of zeros of the correct dimensions
# and fill in the elements using the loop below (note the 1/2 is included in here).
for i in range(N):
    j=i
    k=i+1
    Av[i,j] = 0.5
    Av[i,k] = 0.5
print(Av)
```
[[ 0.5 0.5 0. 0. 0. 0. ]
[ 0. 0.5 0.5 0. 0. 0. ]
[ 0. 0. 0.5 0.5 0. 0. ]
[ 0. 0. 0. 0.5 0.5 0. ]
[ 0. 0. 0. 0. 0.5 0.5]]
(2) The kernel matrix on the nodes, $G_n$. To build this matrix, we must first define a sensitivity function that describes some physical phenomenon. For the sake of our example, even though it has no direct, physical meaning, let us define our kernel function for the toy problem as follows:<br><br>
\begin{equation}
g_j(x) = e^{-jpx} \cos(2 \pi j q x)
\end{equation} <br>
```
g = lambda x, k, p, q: np.exp(-p*k*x)*np.cos(2*np.pi*q*k*x) # create an anonymous function as immediately above
x=np.linspace(0,1,6) # define the nodes of our x-array
#x = np.array([0., 0.2, 0.4, 0.6, 0.8, 1.])
p = 0.01 # Set values for p, q, j
q = 0.1
j = np.array([1, 2])
Gn = np.zeros((len(x), len(j))) # preallocate a matrix Gn, and evaluate functions in loop below.
for i in range(len(j)):
    f = g(x,j[i],p,q) # evaluate kernel j[i] on the nodes (the original called k[i], which is undefined)
    Gn[:,i] = f
    #print f
print(Gn)
```
[[ 1. 1. ]
[ 0.99013245 0.96471657]
[ 0.96471657 0.86932419]
[ 0.92421453 0.72027328]
[ 0.86932419 0.52732179]
[ 0.80096714 0.30289805]]
(3) The volume matrix $V$. This simply consists of making an $N \times N$ array of x-spacings, $\Delta x.$
```
# make the delta x array
Deltax = 0.2*np.ones(N) # set x-spacings
V = np.diag(Deltax) # create diagonal matrix
print(V)
```
[[ 0.2 0. 0. 0. 0. ]
[ 0. 0.2 0. 0. 0. ]
[ 0. 0. 0.2 0. 0. ]
[ 0. 0. 0. 0.2 0. ]
[ 0. 0. 0. 0. 0.2]]
(4) Input the model values, $m$. Given that we are making a forward problem, we assume these values as given, so we input fictitious values:
```
m = np.array([0.02, 0.05, 0.09, 0.07, 0.04])
print(m)
```
[ 0.02 0.05 0.09 0.07 0.04]
(5) Generate our two data values. The remaining step is simply to perform the matrix-vector multiplication:
```
d = np.dot(np.dot(np.transpose(np.dot(Av, Gn)), V),m)
print(d)
```
[ 0.04999083 0.03946006]
```
# Put the x array on cell centers as follows
x = np.array([0., 0.2, 0.4, 0.6, 0.8, 1.])
xc = 0.5*(x[1:] + x[0:-1])
print(x)
print(xc)
```
[ 0. 0.2 0.4 0.6 0.8 1. ]
[ 0.1 0.3 0.5 0.7 0.9]
This is just an extra cell with more equations if needed:<br>
\begin{equation}
\left[
\begin{array}{c}
d_1\\
d_2\\
\end{array}
\right]
=
\frac{1}{2}
\left(
\begin{bmatrix}
g_{n1}(x_1) + g_{n1}(x_2) & g_{n2}(x_1) + g_{n2}(x_2) \\
g_{n1}(x_2) + g_{n1}(x_3) & g_{n2}(x_2) + g_{n2}(x_3) \\
g_{n1}(x_3) + g_{n1}(x_4) & g_{n2}(x_3) + g_{n2}(x_4) \\
g_{n1}(x_4) + g_{n1}(x_5) & g_{n2}(x_4) + g_{n2}(x_5) \\
g_{n1}(x_5) + g_{n1}(x_6) & g_{n2}(x_5) + g_{n2}(x_6) \\
\end{bmatrix}
\right)^T
\begin{bmatrix}
\Delta x &0 &0 &0 &0 \\
0 &\Delta x &0 &0 &0 \\
0 &0 &\Delta x &0 &0 \\
0 &0 &0 &\Delta x &0 \\
0 &0 &0 &0 &\Delta x\\
\end{bmatrix}
\left[
\begin{array}{c}
m_1\\
m_2\\
m_3\\
m_4\\
m_5
\end{array}
\right]
\end{equation} <br>
| f9627d936c90278f54f1a4a3f779aa0bab540f8e | 24,825 | ipynb | Jupyter Notebook | .ipynb_checkpoints/Mod 1-Copy1-checkpoint.ipynb | jokulhaup/directed_studies | 99f0e6e8cc7010d34db1d9bc37988e4944f66826 | ["MIT"] | 2 | 2017-10-08T02:10:35.000Z | 2017-10-18T17:49:21.000Z | .ipynb_checkpoints/Mod 1-Copy1-checkpoint.ipynb | jokulhaup/directed_studies | 99f0e6e8cc7010d34db1d9bc37988e4944f66826 | ["MIT"] | null | null | null | .ipynb_checkpoints/Mod 1-Copy1-checkpoint.ipynb | jokulhaup/directed_studies | 99f0e6e8cc7010d34db1d9bc37988e4944f66826 | ["MIT"] | 3 | 2016-09-01T20:38:20.000Z | 2020-05-13T22:19:16.000Z | 39.342314 | 867 | 0.50566 | true | 5,590 | Qwen/Qwen-72B | 1. YES 2. YES | 0.899121 | 0.845942 | 0.760605 | __label__eng_Latn | 0.990223 | 0.605472 |
# Deutsch-Jozsa algorithm (overview)
We explain the Deutsch-Jozsa algorithm, a generalization of the Deutsch algorithm.
The Deutsch-Jozsa algorithm considers a function $f$ that can take any of the $2^n$ inputs from 00...000 to 11...111 and assumes that exactly one of the following conditions holds.
1. $f(x)$ is the same for every input.
That is, $f(x)=0$ for all $x$, or $f(x)=1$ for all $x$.
2. $f(x)$ differs on half of the inputs.
That is, $f(x)=0$ for $2^{n-1}$ of the $x$ and $f(x)=1$ for the remaining $x$.
The algorithm determines whether the oracle is of type 1. or 2. above.
Let us consider a concrete circuit.
We assume the input register consists of $n$ qubits prepared in $\lvert 0\rangle$.
Now let us follow each state in turn.
$$
\begin{align}
\lvert \psi_1\rangle &= \biggl(\otimes^n H\lvert 0\rangle \biggr)\otimes H \lvert 1\rangle \\
&= \frac{1}{ \sqrt{2^{n+1}} } \sum^{2^n}_{x\in \{ 0, 1 \}^n} \bigl( \lvert x\rangle \otimes (\lvert 0\rangle - \lvert 1\rangle) \bigr) \\
&= \frac{1}{ \sqrt{2^{n+1}} } \sum^{2^n}_{x\in \{ 0, 1 \}^n} \lvert x\rangle \otimes \lvert 0\rangle - \frac{1}{ \sqrt{2^{n+1}} } \sum^{2^n}_{x\in \{ 0, 1 \}^n} \lvert x\rangle \otimes \lvert 1\rangle \\
\end{align}
$$
Looking at the $n$-th bit, we see that swapping $\lvert 0\rangle$ and $\lvert 1\rangle$ flips the overall sign.
Next, let us consider $\psi_2$.
Since $f(x)$ acts on the $n$-th bit, we have
$$
f(x) = 0 \to \text{the } n\text{-th bit is left unchanged}
$$
$$
f(x) = 1 \to \text{the } n\text{-th bit is flipped.}
$$
Splitting the sum in each term according to $f(x) = 0, 1$ gives
$$
\frac{1}{ \sqrt{2^{n+1}} } \sum^{2^n}_{x\in \{ 0, 1 \}^n} \lvert x\rangle \otimes \lvert 0\rangle \xrightarrow{U_f} \frac{1}{ \sqrt{2^{n+1}} } \sum_{f(x)=0} \lvert x\rangle \otimes \lvert 0\rangle + \frac{1}{ \sqrt{2^{n+1}} } \sum_{f(x)=1} \lvert x\rangle \otimes \lvert 1\rangle
$$
$$
\frac{1}{ \sqrt{2^{n+1}} } \sum^{2^n}_{x\in \{ 0, 1 \}^n} \lvert x\rangle \otimes \lvert 1\rangle \xrightarrow{U_f} \frac{1}{ \sqrt{2^{n+1}} } \sum_{f(x)=0} \lvert x\rangle \otimes \lvert 1\rangle + \frac{1}{ \sqrt{2^{n+1}} } \sum_{f(x)=1} \lvert x\rangle \otimes \lvert 0\rangle
$$
We can see that the bit is flipped when $f(x)=1$.
Putting this together,
$$
\lvert \psi_2\rangle = \frac{1}{\sqrt{2^n}} \biggl( \sum_{f(x)=0}\lvert x\rangle - \sum_{f(x)=1}\lvert x\rangle \biggr) \otimes \frac{1}{\sqrt{2}} (\lvert 0\rangle - \lvert 1\rangle)
$$
"1.全ての入力で $f(x)$ が同じ" のときは $f(x)=0,1$ のどちらかのシグマが消えるので
$$
\begin{align}
\lvert \psi_2 \rangle &= \pm \frac{1}{ \sqrt{2^{n}} } \sum^{2^n}_{x\in \{ 0, 1 \}^n} \lvert x\rangle \otimes \frac{1}{\sqrt{2}} (\lvert 0\rangle - \lvert 1\rangle) \\
&= \pm \biggl( \otimes^{n} H \lvert 0\rangle \biggr) \otimes H \lvert 1\rangle
\end{align}
$$
Therefore, $\lvert \psi_3 \rangle$ becomes:
$$
\lvert \psi_3 \rangle = \pm \lvert 00...00 \rangle \otimes H\lvert 1 \rangle
$$
In this case, measuring the first $n$ qubits always yields all "0".
Now consider case 2. ($f(x)$ differs on half of the inputs). Since the sign depends on $f(x)$,
$$
\lvert \psi_2 \rangle = \frac{1}{ \sqrt{2^{n}} } \sum^{2^n}_{x\in \{ 0, 1 \}^n} (-1)^{f(x)} \lvert x\rangle \otimes \frac{1}{\sqrt{2}} (\lvert 0\rangle - \lvert 1\rangle)
$$
$$
\begin{align}
\lvert \psi_3 \rangle &= \frac{1}{ \sqrt{2^{n}} } \sum^{2^n}_{x\in \{ 0, 1 \}^n} (-1)^{f(x)} (\otimes^nH) \lvert x\rangle \otimes \frac{1}{\sqrt{2}} (\lvert 0\rangle - \lvert 1\rangle) \\
&= \frac{1}{ 2^{n} } \sum^{2^n}_{x\in \{ 0, 1 \}^n} (-1)^{f(x)} \bigl( \sum^{2^n-1}_{y=0}(-1)^{x\cdot y}\lvert y \rangle \bigr) \otimes \frac{1}{\sqrt{2}} (\lvert 0\rangle - \lvert 1\rangle)
\end{align}
$$
Here, consider the probability $\mathrm{Pr}(0)$ that measuring the first $n$ qubits yields all "0".
This is the squared absolute value of the amplitude of the state $\lvert y \rangle$ with $y=0$.
Since $x\cdot y = 0$ when $y=0$,
$$
\mathrm{Pr}(0) = \biggl| \frac{1}{2^n} \sum^{2^n}_{x\in \{ 0, 1 \}^n} (-1)^{f(x)} \biggr|^2
$$
When $f(x)$ differs on half of the inputs, all terms cancel, so $\mathrm{Pr}(0) = 0$.
Therefore, the measurement of the first $n$ qubits never yields all "0".
From the above, we can distinguish the two oracle types by whether the measurement of qubits 0 through $n-1$ is all $0$ or not.
Let us implement this with blueqat.
```python
from blueqat import Circuit
import numpy as np
```
We implement the oracles for cases 1. and 2.
For case 1., the $n$-th bit is flipped regardless of bits 0 through $n-1$.
For case 2., we prepare one $CX$ gate per bit, with bits 0 through $n-1$ as the control bits and the $n$-th bit as the common target.
Then the $n$-th bit is flipped for half of the possible values of bits 0 through $n-1$ and left unchanged for the other half.
```python
def oracle_1(c):
    n = c.n_qubits
    c.x[n-1]

def oracle_2(c):
    n = c.n_qubits
    for i in range(n-1):
        c.cx[i, n-1]
```
Below is the algorithm itself.
First, we decide at random whether the oracle is of type 1. or 2. This is the answer we want to recover.
Next, we identify the oracle with the Deutsch-Jozsa algorithm.
Finally, we check whether the identification agrees with the oracle chosen at random beforehand.
```python
n = 4
c = Circuit(n + 1)
c.x[n].h[:]

if np.random.rand() > 0.5:
    oracle_1(c)
    oracle = "f(x) = 1 for all x."
else:
    oracle_2(c)
    oracle = "f(x) = 0 for half x and f(x) = 1 for others."

c.h[:]
res = c.m[:].run(shots = 1000)
print("Oracle:", oracle)
print("Results of quantum circuit:", res)

if [arr[:n] for arr in res.keys()] == ['0'*n] and oracle == "f(x) = 1 for all x.":
    print("OK")
elif [arr[:n] for arr in res.keys()] != ['0'*n] and oracle == "f(x) = 0 for half x and f(x) = 1 for others.":
    print("OK")
else:
    print("incorrect")
```
Oracle: f(x) = 1 for all x.
Results of quantum circuit: Counter({'00001': 1000})
OK
Thus the Deutsch-Jozsa algorithm correctly identified the oracle.
```python
```
| a595fee14c9a71a243b4ba84810bb14425694bb3 | 8,509 | ipynb | Jupyter Notebook | tutorial-ja/101_deutsch-jozsa_ja.ipynb | ssmi1975/Blueqat-tutorials | f2962a7eda733568d228cb1ebbcd2c2f409f84cb | ["Apache-2.0"] | 1 | 2022-02-09T02:10:48.000Z | 2022-02-09T02:10:48.000Z | tutorial-ja/101_deutsch-jozsa_ja.ipynb | ssmi1975/Blueqat-tutorials | f2962a7eda733568d228cb1ebbcd2c2f409f84cb | ["Apache-2.0"] | null | null | null | tutorial-ja/101_deutsch-jozsa_ja.ipynb | ssmi1975/Blueqat-tutorials | f2962a7eda733568d228cb1ebbcd2c2f409f84cb | ["Apache-2.0"] | null | null | null | 29.040956 | 316 | 0.477377 | true | 2,749 | Qwen/Qwen-72B | 1. YES 2. YES | 0.874077 | 0.715424 | 0.625336 | __label__yue_Hant | 0.359524 | 0.291195 |
```python
# stan implementation
import pystan
%pylab inline
from scipy.special import polygamma as pg
```
Populating the interactive namespace from numpy and matplotlib
Bad key "axes.color_cycle" on line 250 in
/home/matus/Desktop/matustools/matplotlibrc.
You probably need to get an updated matplotlibrc file from
https://github.com/matplotlib/matplotlib/blob/v3.2.1/matplotlibrc.template
or from the matplotlib source distribution
# Notation
$Y$ generic random variable
$U$ latent random variable
$V$ residual random variable
$X$ predictor
### Parameters
$\eta$ and $\nu$ generic parameters
$\mu=E[Y]$ mean parameter
$\gamma=E[\log Y]$ geometric mean parameter
$\sigma^2=E[(Y-\mu)^2]$ standard deviation parameter
$Y=\alpha+U$ shift parameter
$Y= U/\theta$ scale parameter
$Y= U \lambda$ inverse-scale (rate) parameter
$Y=e^{-\tau} U$ log-scale parameter
$Y=U^\kappa$ exponent parameter
$Y=f(U,\rho)$ shape parameter
$Y=\alpha + \beta X$ linear predictor
$\psi$ digamma function
$\pi$ pi number
$\phi$ measurement scale
$\delta$ dirac function
$\zeta,\epsilon,\varepsilon,\vartheta,\iota,\xi,\varpi,\varrho,\varsigma,\varphi,\chi,\omega$
## Gamma distribution
Paremeters $\eta$ and $\nu$ are orthogonal if
$$\operatorname{E}_Y
\left[
\frac{\partial \log f(Y;\eta,\nu)}{\partial\eta \ \partial\nu}
\right]=0$$
The probability density function of Gamma distribution parametrized by shape parameter $\rho$ and scale parameter $\theta$ is
$$f(Y=y;\rho,\theta)=\frac{1}{\Gamma(\rho) \theta^\rho} y^{\rho - 1} e^{-\frac{y}{\theta}}$$
with Fisher information
$$I_{\rho \theta} = \begin{pmatrix}
\psi'(\rho) & \theta^{-1} \\
\theta^{-1} & \rho \theta^{-2} \end{pmatrix} $$
Consider a parametrization in terms of the logarithm of the geometric mean $\gamma=E[\log Y]=\psi(\rho)+\log \theta$ and the log-scale $\tau=\log(\theta)$, where $\psi$ is the digamma function. Then the logarithm of the density function parametrized by $\gamma$ and $\tau$ is
$$\log f(Y=y;\gamma,\tau)=-\log\Gamma(\omega(\gamma-\tau)) -\tau\, \omega(\gamma-\tau) + (\omega(\gamma-\tau)-1)\log y- y e^{-\tau}$$
where we use $\omega$ to denote the inverse digamma function. By $\omega'(y)$ and $\omega''(y)$ we denote the first and second derivatives of the inverse digamma function with respect to $y$. Next, we compute the first derivative of the log-density with respect to $\gamma$:
$$\begin{align} \frac{\partial}{\partial\gamma}\log f(Y;\gamma,\tau) &= -\psi(\omega(\gamma-\tau)) \omega'(\gamma-\tau)-\tau \omega'(\gamma-\tau) + \omega'(\gamma-\tau) \log y \\
&= -(\gamma-\tau) \omega'(\gamma-\tau)-\tau \omega'(\gamma-\tau) + \omega'(\gamma-\tau) \log y \\
&= (\log y - \gamma)\omega'(\gamma -\tau)\end{align}$$
Next we obtain derivative with respect to $\gamma$ and $\tau$:
$$\begin{align} \frac{\partial}{\partial\gamma \partial\tau}\log f(Y;\gamma,\tau) &= \frac{\partial}{\partial\tau}\left[(\log y - \gamma)\omega'(\gamma -\tau)\right]\\
&= (\gamma-\log y)\omega''(\gamma-\tau)
\end{align}$$
Finally, compute the expectation
$$\begin{align} \operatorname{E}_Y
\left[
\frac{\partial \log f(Y;\tau,\gamma)}{\partial\tau\ \partial\gamma}
\right]&= \operatorname{E}\left[\omega''(\gamma-\tau)(\gamma-\log y)\right] \\
&=\omega''(\gamma-\tau)(\gamma-\operatorname{E}[\log y])\\
&=\omega''(\gamma-\tau)(\gamma-\gamma)\\
&=0
\end{align}$$
Note that $\operatorname{E}[\log y]$ is the logarithm of geometric mean and hence $\operatorname{E}[\log y]=\gamma$
$$I_{\gamma \tau} = \begin{pmatrix}
\omega'(\gamma-\tau) & 0\\
0 & \omega(\gamma-\tau)-\omega'(\gamma-\tau)\end{pmatrix} $$
$$I_{\rho \tau} = \begin{pmatrix}
\psi'(\rho) & 1 \\
1 & \rho \end{pmatrix} $$
$$I_{\rho, \tau+\log \rho} = \begin{pmatrix}
\psi'(\rho)-1/\rho & 0 \\
0 & \rho \end{pmatrix} $$
$$I_{\psi(\rho), \tau} = \begin{pmatrix}
\psi'(\rho)^{-1} & \psi'(\rho)^{-1} \\
\psi'(\rho)^{-1} & \rho \end{pmatrix} $$
$$I_{\psi(\rho)+\tau, \tau} = \begin{pmatrix}
\psi'(\rho)^{-1} & 0 \\
0& \rho-\psi'(\rho)^{-1} \end{pmatrix} $$
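The $(\gamma,\tau)$ parametrization above and the Stan code below both rely on the inverse digamma function $\omega$. A minimal NumPy sketch of it (my own illustration, using the same Newton iteration as the Stan `invdigamma` functions further down) is:
```python
import numpy as np
from scipy.special import digamma, polygamma

def invdigamma(x, tol=1e-12, max_iter=100):
    """Inverse digamma omega(x): solve digamma(y) = x by Newton iteration."""
    x = np.asarray(x, dtype=float)
    # standard piecewise start value
    y = np.where(x >= -2.22, np.exp(x) + 0.5, -1.0/(x - digamma(1.0)))
    for _ in range(max_iter):
        step = (digamma(y) - x)/polygamma(1, y)
        y = y - step
        if np.max(np.abs(step)) < tol:
            break
    return y

# sanity check: digamma(omega(x)) should reproduce x
xs = np.array([-5.0, -1.0, 0.0, 2.0, 10.0])
print(digamma(invdigamma(xs)) - xs)
```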
```python
model = """
data {
int<lower=0> N; //nr subjects
real<lower=0> k;
real<lower=0> t;
}generated quantities{
real<lower=0> y;
y=gamma_rng(k,1/t);
}
"""
smGammaGen = pystan.StanModel(model_code=model)
```
INFO:pystan:COMPILING THE C++ CODE FOR MODEL anon_model_56e71184937d3586a66635dbe41869eb NOW.
```python
model = """
data {
int<lower=0> N; //nr subjects
real<lower=0> y[N];
}parameters{
real<lower=0> k;
real<lower=0> t;
}model{
for (n in 1:N)
y[n]~gamma(k,1/t);
}
"""
smGamma = pystan.StanModel(model_code=model)
```
INFO:pystan:COMPILING THE C++ CODE FOR MODEL anon_model_b8b2a451eabaf76e034fac0e9eb791d9 NOW.
```python
N=1000
fit=smGammaGen.sampling(data={'N':N,'k':10,'t':np.exp(-10)},
chains=1,n_jobs=1,seed=1,thin=1,iter=N,warmup=0,algorithm="Fixed_param")
w=fit.extract()
y=w['y']
print(y.shape)
fit=smGamma.sampling(data={'N':N,'y':y},
chains=4,n_jobs=4,seed=1,thin=2,iter=2000,warmup=1000)
print(fit)
w=fit.extract()
t=np.log(w['t'])
g=pg(0,w['k'])+t
w=fit.extract()
plt.plot(g,t,'.')
np.corrcoef(g,t)[0,1]
```
```python
invgammafun='''functions{
vector invdigamma(vector x){
vector[num_elements(x)] y; vector[num_elements(x)] L;
for (i in 1:num_elements(x)){
if (x[i]==digamma(1)){
y[i]=1;
}else{ if (x[i]>=-2.22){
y[i]=(exp(x[i])+0.5);
}else{
y[i]=1/(x[i]-digamma(1));
}}}
L=digamma(y)-x;
while (min(L)>10^-12){
y=y-L ./trigamma(y);
L=digamma(y)-x;
}
return y;}
real invdigammaR(real x){
real y; real L;
if (x==digamma(1)){
y=1;
}else{ if (x>=-2.22){
y=(exp(x)+0.5);
}else{
y=1/(x-digamma(1));
}}
L=digamma(y)-x;
while (abs(L)>1e-5){
y=y-L ./trigamma(y);
L=digamma(y)-x;
}
return y;
}} '''
model = """
data {
int<lower=0> N; //nr subjects
real<lower=0> y[N];
}parameters{
real<lower=-100,upper=100> g;
real<lower=-100,upper=100> t;
}transformed parameters{
real k;
k=invdigammaR(g-t);
}model{
for (n in 1:N)
y[n]~gamma(k,exp(-t));
}
"""
smGammaGeom = pystan.StanModel(model_code=invgammafun+model)
```
INFO:pystan:COMPILING THE C++ CODE FOR MODEL anon_model_73d022c146ed724ba4b8072e7a8ef172 NOW.
```python
N=10
fit=smGammaGen.sampling(data={'N':N,'k':1,'t':np.exp(0)},
chains=1,n_jobs=1,seed=1,thin=1,iter=N,warmup=0,algorithm="Fixed_param")
w=fit.extract()
y=w['y']
fit=smGammaGeom.sampling(data={'N':N,'y':y},
chains=4,n_jobs=4,seed=2,thin=1,iter=500,warmup=200)
#control={'adapt_delta':0.99})
print(fit)
```
```python
w=fit.extract()
#plt.plot(pg(0,w['k']),w['g']-w['t'],'.')
#np.max(np.abs(pg(0,w['k'])-w['g']+w['t']))
plt.plot(w['g'],w['t'],'.')
```
# Hierarchical parameter recovery
```python
model = """
data {
int<lower=0> N; //nr subjects
int<lower=0> M;
real gm;
real gs;
real t;
}generated quantities{
real g[N];
real<lower=0> y[N,M];
for (n in 1:N){
g[n]=normal_rng(gm,gs);
for (m in 1:M){
y[n,m]=gamma_rng(invdigammaR(g[n]-t),exp(t));
}}}
"""
smGammaGen = pystan.StanModel(model_code=invgammafun+model)
```
INFO:pystan:COMPILING THE C++ CODE FOR MODEL anon_model_0f1c4cd5a7a2dd5bfc3c53aaa88ff7a5 NOW.
```python
N=10;M=20
fit=smGammaGen.sampling(data={'N':N,'M':M,'gm':5,'gs':2,'t':1},
chains=4,n_jobs=4,seed=1,thin=1,iter=30,warmup=0,algorithm="Fixed_param")
w=fit.extract()
y=w['y'][0,:,:]
print(y.shape)
```
(10, 20)
```python
model = """
data {
int<lower=0> N; //nr subjects
int<lower=0> M;
real<lower=0> y[N,M];
}parameters{
real g[N];
real gm;
real<lower=0> gs;
real t;
}model{
for (n in 1:N){
g[n]~normal(gm,gs);
for (m in 1:M){
y[n,m]~gamma(invdigammaR(g[n]-t),exp(t));
}}}
"""
smGamma = pystan.StanModel(model_code=invgammafun+model)
```
INFO:pystan:COMPILING THE C++ CODE FOR MODEL anon_model_f6a8758085290b91fe0fd36fc35263d5 NOW.
```python
fit=smGamma.sampling(data={'N':N,'M':M,'y':y},
chains=4,n_jobs=4,seed=2,thin=1,iter=1000,warmup=500)
print(fit)
```
```python
%pylab inline
plt.plot(w['gm'])
```
## Weibull distribution
$$f(y)=\frac{\kappa}{y}\left(\frac{y}{\theta}\right)^{\kappa}e^{-\left(\frac{y}{\theta} \right)^\kappa}$$
$$I_{\theta \kappa} = \begin{pmatrix}
\frac{\kappa^2}{\theta^2} & -\frac{\psi(2)}{\theta}\\
. & \frac{1}{\kappa^2}\left(\psi'(1)+\psi(2)^2\right)\end{pmatrix} $$
$E[\log Y]= \log \theta + \psi(1)/\kappa$
$E[Y^s]=\theta^s \Gamma(1+s/\kappa)$
$E[Y^\kappa]=\theta^\kappa $
$\mathrm{Var}[\log Y]=\psi'(1)/\kappa^2$
$E[(Y/\theta)^\kappa]=1$
$\mathrm{Var}[(Y/\theta)^\kappa]=1$
$E[\log (Y/\theta^\kappa)]= \psi(1)$
$E[\log^2 (Y/\theta^\kappa)]= \psi'(1)+\psi(1)^2$
$E[(Y/\theta)^\kappa \log (Y/\theta)^\kappa ]= \psi(2)= \psi(1)+1$
$E[(Y/\theta)^\kappa \log^2(Y/\theta)^\kappa ]= \psi'(2)+\psi(2)^2$
$$I_{\tau \kappa} = \begin{pmatrix}
\kappa^2 & - \psi(2)\\
. & \frac{1}{\kappa^2}\left(\psi'(1)+\psi(2)^2\right)\end{pmatrix} $$
$\tau=\log \theta$
$r_{\tau \kappa}=\psi(2)/\sqrt{\psi'(1)+\psi(2)^2}=0.31$
This is orthogonal parametrization
$$\kappa= \frac{1}{\xi-H \tau}$$
$$\xi=\frac{1}{\kappa}+H \tau $$
$H=\frac{\psi(2)}{\psi'(1)+\psi(2)^2}=0.232$
$$I_{\tau \xi} = \frac{H}{(\xi-\tau)^{2}} \begin{pmatrix}
\left(1+\frac{\psi(2)^2}{\psi'(1)}\right)^{-1} &0 \\
. & \left(1+\frac{\psi'(1)}{\psi(2)^2}\right)^{-1} \end{pmatrix} $$
$$I_{\tau \kappa} = \begin{pmatrix}
\kappa^2 & - \psi(2)\\
. & \frac{1}{\kappa^2}\left(\psi'(1)+\psi(2)^2\right)\end{pmatrix} $$
$$I_{\tau,1/\kappa} =\kappa^{2} \begin{pmatrix}
1 & \psi(2)\\
. & \psi'(1)+\psi(2)^2\end{pmatrix} $$
$$I_{\tau,1/\kappa-H\tau} =\kappa^{2} \begin{pmatrix}
1-\psi(2) H& 0\\
. & \psi'(1)+\psi(2)^2\end{pmatrix} = \kappa^{2} \begin{pmatrix}
0.902 & 0\\
. & 1.824\end{pmatrix}$$
$$I_{\tau,H\kappa} =\begin{pmatrix}
\kappa^{2} & \psi(2) H\\
. & \kappa^{-2} \psi(2) H\end{pmatrix} $$
$$I_{\tau,1/(H\kappa)} =\kappa^{2} H^2 \begin{pmatrix}
1 & \psi(2) H\\
. & \psi(2) H\end{pmatrix} $$
$$I_{\tau,1/(H\kappa)+\tau} =\kappa^{2} H^2 \begin{pmatrix}
1-\psi(2) H& 0\\
. & \psi(2) H\end{pmatrix} \\= \kappa^{2} H^2
\begin{pmatrix}
\left(1+\frac{\psi(2)^2}{\psi'(1)}\right)^{-1} &0 \\
. & \left(1+\frac{\psi'(1)}{\psi(2)^2}\right)^{-1} \end{pmatrix}= \kappa^{2} H^2 \begin{pmatrix}
0.902 & 0\\
. & 0.098\end{pmatrix}$$
$$I_{\tau,\epsilon} =(\epsilon-\tau)^2\begin{pmatrix}
\left(1+\frac{\psi(2)^2}{\psi'(1)}\right)^{-1} &0 \\
. & \left(1+\frac{\psi'(1)}{\psi(2)^2}\right)^{-1} \end{pmatrix}$$
Orthogonal from Cox and Reid (1987)
$\epsilon= \exp(\log \theta + \psi(2)/\kappa)=\exp(1/\kappa)\exp(E[\log Y])=\exp E[(Y/\theta)^\kappa \log Y]$
$\theta= \epsilon \exp(-\psi(2)/\kappa)$
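A quick numeric check of the constants used above (this cell is an added illustration; it also defines `H` for the cell below):
```python
import numpy as np
from scipy.special import polygamma as pg   # same alias as in the first cell

psi2  = pg(0, 2)                    # psi(2) = 1 - Euler-Mascheroni constant
dpsi1 = pg(1, 1)                    # psi'(1) = pi^2/6
H = psi2/(dpsi1 + psi2**2)          # ~0.232
r = psi2/np.sqrt(dpsi1 + psi2**2)   # r_{tau kappa} ~ 0.31
print(H, 1/H, r)
```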
```python
1/H
```
4.313501020391736
$$J_{a/H,b}=\begin{pmatrix} H &0 \\0 & 1 \end{pmatrix}$$
$$J^T \begin{pmatrix} H^{-2} A & H^{-1} B \\ H^{-1} B & C \end{pmatrix} J= \begin{pmatrix} A &B \\B & C\end{pmatrix}$$
$H=B/A$
$$J^T \begin{pmatrix} A & B \\ B & C \end{pmatrix} J= \begin{pmatrix} B^2/A &B^2/A \\B^2/A & C\end{pmatrix}$$
$$J_{a+b,b}=\begin{pmatrix} 1 &-1 \\0 & 1 \end{pmatrix}$$
$$J^T\begin{pmatrix} A &A \\A & B \end{pmatrix} J= \begin{pmatrix} A &0 \\0 & B-A\end{pmatrix}$$
$$J_{\log a,b}=\begin{pmatrix} e^a &0 \\0 & 1 \end{pmatrix}$$
$$J_{\log a,b}^T \begin{pmatrix} e^{-2a} A & e^{-a} B \\e^{-a} B & C \end{pmatrix} J_{\log a,b}= \begin{pmatrix} A &B \\B & C\end{pmatrix}$$
$$J_{e^a,b}=\begin{pmatrix} 1/a &0 \\0 & 1 \end{pmatrix}$$
$$J^T \begin{pmatrix} a^2 A & a B \\ a B & C \end{pmatrix} J= \begin{pmatrix} A &B \\B & C\end{pmatrix}$$
$$J_{a^{-1},b}=\begin{pmatrix} -a^{2} &0 \\0 & 1 \end{pmatrix}$$
$$J^T \begin{pmatrix} a^{-4} A & a^{-2} B \\ a^{-2} B & C \end{pmatrix} J= \begin{pmatrix} A &-B \\-B & C\end{pmatrix}$$
```python
pg(1,1)
```
array(1.64493407)
#### old stuff
$$\mathrm{Cov}(\gamma,\phi)=J_{12}J_{11}\frac{\kappa^2}{\theta^2}+J_{21}J_{22}\frac{1}{\kappa^2}\left(\psi'(2)+\psi(2)^2+1\right)-\frac{\psi(2)}{\theta}(J_{12}J_{11}+J_{21}J_{22}+J_{21}J_{12}+J_{11}J_{22})$$
$\theta=e^\phi$
$J_{11}=\frac{\partial \theta}{\partial \phi}=e^\phi=\theta$
$J_{12}=\frac{\partial \theta}{\partial \gamma}=0$
$$\mathrm{Cov}(\gamma,\phi)=J_{21}J_{22}\frac{1}{\kappa^2}\left(\psi'(2)+\psi(2)^2+1\right)-\frac{\psi(2)}{J_{11}}(J_{21}J_{22}+J_{11}J_{22})$$
$$\mathrm{Cov}(\gamma,\phi)=J_{22}\left(J_{21}\frac{1}{\kappa^2}\left(\psi'(2)+\psi(2)^2+1\right)-\psi(2)(J_{21}/J_{11}+1)\right)\\
= J_{21}J_{22}\frac{\psi(2)}{\kappa^2}\left(
\frac{\psi'(2)+\psi(2)^2+1}{\psi(2)}-\kappa^2\left(\frac{\partial \phi}{\partial \kappa}+e^{-\phi}\right)\right)\\
= J_{21}J_{22}\psi(2)\left(
\frac{\psi'(2)+\psi(2)^2+1}{\kappa^2\psi(2)}-e^{-\phi}-\frac{\partial \phi}{\partial \kappa}\right)
$$
$\gamma=-\phi- \frac{\psi'(2)+\psi(2)^2+1}{\kappa \psi(2)}$
$\frac{\partial \gamma}{\partial \kappa}= -\frac{\psi'(2)+\psi(2)^2+1}{\kappa^2 \psi(2)}$
$\kappa=-\frac{\psi'(2)+\psi(2)^2+1}{(\gamma+\phi) \psi(2)}$
$\frac{\partial \kappa}{\partial \phi}= \frac{\psi'(2)+\psi(2)^2+1}{(\gamma+\phi)^2 \psi(2)}$
$$\mathrm{Cov}(\gamma,\phi)= J_{21}J_{22}\psi(2)\left(
\frac{(\gamma+\phi)^2 \psi(2)}{\psi'(2)+\psi(2)^2+1}-e^{-\phi}- \frac{(\gamma+\phi)^2 \psi(2)}{\psi'(2)+\psi(2)^2+1} \right)
$$
$c \mathrm{Ei}(\frac{c}{\kappa})-e^\frac{c}{\kappa}(e^{-\phi}+\kappa)=k$
```python
model = """
data {
int<lower=0> N; //nr subjects
vector<lower=0>[N] y;
}parameters {
real<lower=0> k;
real<lower=0> t;
}model {
y~weibull(k,t);
}
"""
smWeibull = pystan.StanModel(model_code=model)
model = """
data {
int<lower=0> N; //nr subjects
vector<lower=0>[N] y;
}parameters {
real t;
real e;
}model {
y~weibull(4.313501020391736/(e-t),exp(t));
}
"""
smWeibullE = pystan.StanModel(model_code=model)
```
INFO:pystan:COMPILING THE C++ CODE FOR MODEL anon_model_dd507d30f0dad57573a6224c4c01ef6c NOW.
```python
model = """
data {
int<lower=0> N;
int<lower=0> M;
vector<lower=0>[M] y[N];
}parameters {
real lnk[N];
real lnt[N];
real km;real tm;
real<lower=0> ks;
real<lower=0> ts;
}model {
lnk~normal(km,ks);
lnt~normal(tm,ts);
for (n in 1:N)
y[n]~weibull(exp(lnk[n]),exp(lnt[n]));
}
"""
#smWeibullH = pystan.StanModel(model_code=model)
model = """
data {
int<lower=0> N;
int<lower=0> M;
vector<lower=0>[M] y[N];
}parameters {
real<lower=0> lne[N];
real lnt[N];
real em;real tm;
real<lower=0> es;
real<lower=0> ts;
}model {
lne~normal(em,es);
lnt~normal(tm,ts);
for (n in 1:N)
y[n]~weibull(4.313501020391736/(lne[n]),exp(lnt[n]));
}
"""
smWeibullEH = pystan.StanModel(model_code=model)
```
INFO:pystan:COMPILING THE C++ CODE FOR MODEL anon_model_3d99857cd395ed26afb9bdf6f6f112d9 NOW.
```python
from scipy.special import polygamma  # only the alias 'pg' was imported above
print(polygamma(0,1))
print(polygamma(0,2))
print(polygamma(1,1))
print(polygamma(1,2))
print(polygamma(1,1)**2)
print(polygamma(1,2)**2)
```
-0.5772156649015329
0.42278433509846713
1.6449340668482266
0.6449340668482266
2.7058080842778462
0.4159399505813928
```python
ts=[-10,-1,0,1,10]
k=1
for t in ts:
    plt.subplot(2,3,k);k+=1
    e=np.linspace(-10,10,101)
    plt.plot(e,4.313501020391736/(e-t))
```
```python
```
```python
fit.get_adaptation_info()
```
['# Adaptation terminated\n# Step size = 0.889853\n# Diagonal elements of inverse mass matrix:\n# 0.0492589, 0.380148\n',
'# Adaptation terminated\n# Step size = 0.791963\n# Diagonal elements of inverse mass matrix:\n# 0.0610601, 0.670636\n',
'# Adaptation terminated\n# Step size = 0.964206\n# Diagonal elements of inverse mass matrix:\n# 0.0462998, 0.622056\n',
'# Adaptation terminated\n# Step size = 0.79833\n# Diagonal elements of inverse mass matrix:\n# 0.0554586, 0.539706\n',
'# Adaptation terminated\n# Step size = 0.840287\n# Diagonal elements of inverse mass matrix:\n# 0.0620329, 0.380415\n',
'# Adaptation terminated\n# Step size = 0.708001\n# Diagonal elements of inverse mass matrix:\n# 0.0556485, 0.39183\n']
```python
from scipy import stats
def prs(x):
    ts= x.rsplit('\n#')
    out=[ts[1].rsplit('=')[1]]
    out.extend(ts[3][:-2].rsplit(','))
    return out
def computeConvergence(ms,data,reps=50):
    from time import time
    D=[[],[]]
    R=[[],[]]
    for sd in range(reps):
        print(sd)
        for m in range(len(ms)):
            sm=ms[m]
            t0=time()
            try:
                fit=sm.sampling(data=data,chains=6,n_jobs=6,
                    seed=1,thin=1,iter=1000,warmup=500)
                D[m].append(time()-t0)
                nfo=list(map(prs,fit.get_adaptation_info()) )
                R[m].append(nfo)
            except:
                D[m].append(np.nan)
                R[m].append(np.zeros((6,3))*np.nan)
    D=np.array(D)
    #R=np.float32(R)
    print(np.mean(D,1))
    return D, R
t=-1;e=1
k=4.313501020391736/(e-t)
print('k= ',k)
temp={'y':stats.weibull_min.rvs(k,0,np.exp(t),size=100),'N':100}
#D,R=computeConvergence([smWeibull, smWeibullE])
```
k= 2.156750510195868
# Hierarchical Weibull
```python
N=20
M=50
e=np.random.randn(N)*1+2
t=np.random.randn(N)*1+1
#t=-1;e=1
k=4.313501020391736/(np.abs(e-t))
#print('k= ',k)
data={'y':stats.weibull_min.rvs(k,0,np.exp(t),size=(M,N)).T,'N':N,'M':M}
ms=[smWeibullH, smWeibullEH]
D,R=computeConvergence(ms,data,reps=50)
```
0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
[2.06550846 3.78643272]
```python
D
```
array([[1.03845024, 0.94294357, 1.01578379, 0.99120307, 1.02925539,
1.01027203, 1.05439496, 1.0273838 , 1.03560948, 1.03058553,
1.05739832, 1.05801606, 1.06082344, 1.05408597, 1.07381892,
1.04979825, 1.0801661 , 1.05875039, 1.0076158 , 1.02997637,
1.04445195, 0.97521567, 1.12239099, 0.96830678, 1.11979413,
0.96245241, 1.05582952, 0.98207235, 1.06741858, 0.98894548,
1.03653193, 0.9653337 , 1.05480242, 1.03396964, 1.0364089 ,
0.97402525, 1.01951122, 1.06045532, 1.08821511, 1.0336051 ,
1.05263734, 1.06267476, 1.11014342, 1.03312802, 1.05125666,
1.01969242, 0.99545646, 1.03181911, 1.04430819, 1.05982757],
[ nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan]])
# Information matrix generalized gamma
$$f(y)=\frac{\kappa}{y \Gamma(\rho)}\left(\frac{y}{\theta}\right)^{\kappa \rho}e^{-\left(\frac{y}{\theta} \right)^\kappa}$$
$$\log f(y)=\log \kappa- \log y -\log \Gamma(\rho) +\kappa \rho \log y - \kappa \rho \log \theta -\left(\frac{y}{\theta} \right)^\kappa$$
$$I_{\rho \theta \kappa} = \begin{pmatrix} \psi'(\rho) & \frac{\kappa}{\theta} &- \frac{\psi(\rho)}{\kappa} \\
. & \frac{\rho \kappa^2}{\theta^2} & -\frac{\rho}{\theta}\psi(\rho+1)\\
. & . & \frac{\rho}{\kappa^2}\left(\psi'(\rho+1)+\psi(\rho+1)^2+\frac{1}{\rho}\right)\end{pmatrix} $$
$\rho (\psi'(\rho+1)+\psi(\rho+1)^2+\frac{1}{\rho})= \rho \psi'(\rho)+\rho \psi(\rho)^2 + 2\psi(\rho) +1$
$E[\log Y]= \log \theta + \psi(\rho)/\kappa$
$E[Y^s]=\theta^s \Gamma(\rho+s/\kappa)/\Gamma(\rho)$
$E[Y^\kappa]=\theta^\kappa \rho$
$E[Y^\kappa \log Y ]=\theta^\kappa \rho (\log \theta + \psi(\rho+1)/\kappa)= \theta^\kappa (\rho \log \theta + \rho \psi(\rho)/\kappa+1/\kappa)$
$E[\log^2 Y]= \log^2 \theta + 2 \log \theta \psi(\rho)/\kappa+(\psi'(\rho)+\psi(\rho)^2)/\kappa^2$
$E[Y^\kappa \log^2 Y]= \theta^\kappa \rho (\log^2 \theta + 2 \log \theta \psi(\rho+1)/\kappa+(\psi'(\rho+1)+\psi(\rho+1)^2)/\kappa^2)$
$E[Y^{2\kappa} \log^2 Y]= \theta^{2\kappa} (\rho+1) (\log^2 \theta + 2 \log \theta \psi(\rho+2)/\kappa+(\psi'(\rho+2)+\psi(\rho+2)^2)/\kappa^2)$
$\mathrm{Var}[\log Y]=\psi'(\rho)/\kappa^2$
$E[(Y/\theta)^\kappa]=\rho$
$\mathrm{Var}[(Y/\theta)^\kappa]=\rho$
$E[\log (Y/\theta)^\kappa]= \psi(\rho)$
$E[\log^2 (Y/\theta)^\kappa]= \psi'(\rho)+\psi(\rho)^2$
$E[(Y/\theta)^\kappa \log (Y/\theta)^\kappa ]= \rho \psi(\rho+1)= \rho \psi(\rho)+1$
$E[(Y/\theta)^\kappa \log^2(Y/\theta)^\kappa ]= \rho (\psi'(\rho+1)+\psi(\rho+1)^2)$
$$I_{\rho \tau \kappa} = \begin{pmatrix} \psi'(\rho) & \kappa &- \frac{\psi(\rho)}{\kappa} \\
. & \rho \kappa^2 & -\rho\psi(\rho+1)\\
. & . & \frac{\rho}{\kappa^2}\left(\psi'(\rho+1)+\psi(\rho+1)^2+\frac{1}{\rho}\right)\end{pmatrix} $$
$$I_{\rho, \tau, \log \kappa} = \begin{pmatrix} \psi'(\rho) & \kappa &- \psi(\rho) \\
. & \rho \kappa^2 & -\kappa\rho\psi(\rho+1)\\
. & . & \rho \left(\psi'(\rho+1)+\psi(\rho+1)^2+\frac{1}{\rho}\right)\end{pmatrix} $$
$$I_{\rho \tau,1/\kappa} = \begin{pmatrix} \psi'(\rho) & \kappa &- \kappa\psi(\rho) \\
. & \rho \kappa^2 & -\kappa^2 \rho A\\
. & . & \kappa^2 \rho B\end{pmatrix} $$
$$I_{\rho \tau,B/(A \kappa)} = \begin{pmatrix} \psi'(\rho) & \kappa &- \kappa\psi(\rho)A/B \\
. & \rho \kappa^2 & -\kappa^2 \rho A^2/B\\
. & . & \kappa^2 \rho A^2/B\end{pmatrix} $$
$$I_{\rho \tau,B/(A \kappa)-\tau} = \begin{pmatrix} \psi'(\rho) & \kappa-\kappa\psi(\rho)A/B &- \kappa\psi(\rho)A/B \\
. & \rho \kappa^2 & 0\\
. & . & \kappa^2 \rho A^2/B-\rho \kappa^2\end{pmatrix} $$
with $A=\psi(\rho+1)$
and $B=\psi'(\rho+1)+\psi(\rho+1)^2+\frac{1}{\rho}$
$\gamma=\tau+\psi(\rho)/\kappa$
$\rho=\omega(\kappa(\gamma-\tau))=\omega$
$$J=\begin{pmatrix}\kappa \omega' &-\kappa \omega' & (\gamma-\tau)\omega'\\ 0&1 &0 \\ 0& 0& 1 \end{pmatrix}$$
$$I_{\gamma \tau \kappa} = J^T\begin{pmatrix} \frac{1}{\omega'} & \kappa &-(\gamma-\tau) \\
. & \omega \kappa^2 & -(\gamma-\tau)\omega-1\\
. & . & \frac{R}{\kappa^2}\end{pmatrix} J $$
$$I_{\gamma \tau \kappa} = \begin{pmatrix} \kappa^2\omega' &0&0 \\
. & \kappa^2(\omega -\omega')& (\gamma-\tau)(\kappa\omega'-\omega)-1\\
. & . & \frac{R}{\kappa^2}-(\gamma-\tau)^2\omega'\end{pmatrix} $$
with $R=\frac{\omega}{\omega'} +\omega \kappa^2 (\gamma-\tau)^2 + 2\kappa (\gamma-\tau)+1$
## Simplified Gamma
$$f(y;\rho)=\frac{ y^{\rho-1} e^{-y}}{\Gamma(\rho)}$$
$$\log f(y;\rho)=\rho \log y -\log y -y-\log \Gamma(\rho)$$
$\Gamma(z+1) = \int_0^\infty x^{z} e^{-x}\, dx$
$\Gamma(z+1)/\Gamma(z)=z$
$\frac{d^n}{dx^n}\Gamma(x) = \int_0^\infty t^{x-1} e^{-t} (\ln t)^n \, dt$
$\psi(x)=\log(\Gamma(x))'=\Gamma'(x)/\Gamma(x)$
$E[Y]= \int_0^\infty y^{\rho} e^{-y}\, dy / \Gamma(\rho)= \Gamma(\rho+1)/ \Gamma(\rho)=\rho$
$E[Y^s]=\Gamma(\rho+s)/ \Gamma(\rho)$
$\mathrm{Var}[Y]=E[Y^2]-E[Y]^2=\rho(\rho+1)-\rho^2=\rho$
$E[\log Y]=\Gamma'(\rho)/\Gamma(\rho)=\psi(\rho)$
$E[Y \log Y]=\Gamma'(\rho+1)/\Gamma(\rho)= \rho \psi(\rho+1)= \rho \psi(\rho)+1$
$E[1/Y]= \Gamma(\rho-1)/ \Gamma(\rho)=1/(\rho-1)$
$\mathrm{Var}[1/Y]=E[1/Y^2]-E[1/Y]^2=\frac{1}{(\rho-2)(\rho-1)^2}$
$E[\log^2 Y]=\Gamma''(\rho)/\Gamma(\rho)=\psi'(\rho)+\psi(\rho)^2$
use $\psi'(x)=(\Gamma'(x)/\Gamma(x))'=\Gamma''(x)/\Gamma(x)-(\Gamma'(x)/\Gamma(x))^2$
$E[Y \log^2 Y]=\Gamma''(\rho+1)/\Gamma(\rho)=\rho(\psi'(\rho+1)+\psi(\rho+1)^2)=\rho\psi'(\rho)+\rho\psi(\rho)^2+2\psi(\rho)$
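These identities are easy to spot-check numerically; a small illustrative aside (the value of $\rho$, the sample size and the seed below are arbitrary choices, not part of the derivation):
```python
import numpy as np
from scipy import stats
from scipy.special import digamma, polygamma

rho = 3.7
y = stats.gamma.rvs(rho, size=2_000_000, random_state=0)
print(y.mean(), rho)                              # E[Y] = rho
print(np.log(y).mean(), digamma(rho))             # E[log Y] = psi(rho)
print(np.log(y).var(), polygamma(1, rho))         # Var[log Y] = psi'(rho)
print((y*np.log(y)).mean(), rho*digamma(rho) + 1) # E[Y log Y] = rho psi(rho) + 1
```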
# Gengamma with $\theta=1$
$$f(y)=\frac{\kappa}{y \Gamma(\rho)}y^{\kappa \rho} e^{-y^\kappa}$$
$$\log f(y)=\log \kappa- \log y -\log \Gamma(\rho) +\kappa \rho \log y -y^\kappa$$
$$I_{\rho \kappa} = \begin{pmatrix} \psi'(\rho) & - \frac{\psi(\rho)}{\kappa} \\
. & \frac{\rho}{\kappa^2}\left(\psi'(\rho+1)+\psi(\rho+1)^2+\frac{1}{\rho}\right)\end{pmatrix} $$
$$I_{\rho \log\kappa} = \begin{pmatrix} \psi'(\rho) & - \psi(\rho) \\
. & \rho\psi'(\rho)+\rho\psi(\rho)^2+2\psi(\rho)+1\end{pmatrix} $$
$\gamma=\psi(\rho)/\kappa$
$\rho=\omega(\gamma \kappa)$
$1=d \psi(\omega(\gamma))/d \gamma$
$$I_{\gamma \kappa} = \begin{pmatrix} \kappa^2 \omega(\gamma\kappa) & 0 \\
. & \frac{\omega(\gamma\kappa)}{\kappa\omega'(\gamma\kappa)}+ \omega(\gamma\kappa)\gamma^2\kappa+2\gamma+\frac{1}{\kappa^2}\end{pmatrix} $$
$$I_{\gamma \kappa} = \begin{pmatrix} \kappa^2 \omega(\gamma\kappa) & 0 \\
. & \kappa^{-1}E[Y\log^2 Y]+\frac{1}{\kappa^2}\end{pmatrix} $$
TODO check the last result by transformation of $I_{\rho \kappa}$
The parameters $\gamma$ and $\kappa$ are orthogonal with this parametrization, since the off-diagonal entry of $I_{\gamma \kappa}$ is zero.
```python
import pystan
ggcode='''functions{
//' Naive implementation of the generalized Gamma density.
//' @param x Value to evaluate density at.
//' @param k Shape parameter.
//' @param b Scale parameter.
//' @param q Tail (power) parameter.
real gengamma_pdf(real x, real k, real b, real q) {
real d;
d = q/(b*tgamma(k))*(x/b)^(k*q-1) * exp(-(x/b)^q);
return d;
}
real gengamma_lpdf(real x, real k, real b, real q) {
real d;
d = log(q) - log(b) - lgamma(k) +
(k*q-1)*(log(x) - log(b)) - (x/b)^q;
return d;
}
real generalized_gamma_cdf(real x, real k, real b, real q) {
real d;
d = gamma_p(k, (x/b)^q);
return d;
}
real generalized_gamma_lcdf(real x, real k, real b, real q) {
real d;
d = log(generalized_gamma_cdf(x, k, b, q));
return d;
}}'''
model = """
data {
int<lower=0> N; //nr subjects
vector<lower=0>[N] yLT;
}parameters {
real k;
//real b;
real q;
}model {
for (n in 1:N)
yLT[n]~gengamma(exp(k),exp(0),exp(q));
}
"""
smGengamma = pystan.StanModel(model_code=ggcode+model)
```
INFO:pystan:COMPILING THE C++ CODE FOR MODEL anon_model_50f847b44dc31effb899e4ef907a37a6 NOW.
```python
from scipy import stats
x=np.linspace(0,10,101)[1:]
#k,q,0,b
k=2;b=1;q=3;
plt.plot(x,stats.gengamma.pdf(x,k,q,0,b))
temp={'yLT':stats.gengamma.rvs(k,q,0,b,size=100),'N':100}
fit=smGengamma.sampling(data=temp,chains=6,n_jobs=6,
seed=1,thin=1,iter=10000,warmup=500)
print(fit)
w=fit.extract()
p=np.exp(w['k'])
#b=np.exp(w['b'])
H=(pg(1,p+1)+np.square(pg(0,p+1))+1/p)/pg(0,p+1)
e=H/np.exp(w['q'])+1
plt.figure()
plt.plot(p,e,'.')
np.corrcoef(p,e)[0,1]
```
```python
from scipy.special import gamma, digamma,polygamma
plt.figure(figsize=(12,4))
g=np.log(b)+digamma(k)/q
c=(polygamma(1,k+1)+polygamma(0,k+1)**2+1/k)*q/polygamma(0,k+1)+np.log(b)
q1=g
q2=np.log(b)
q3=c
#*np.exp(-a)+q2
plt.subplot(1,3,1)
plt.plot(q1,q2,'.')
plt.title(np.corrcoef(q1,q2)[0,1])
plt.subplot(1,3,2)
plt.plot(q1,q3,'.')
plt.title(np.corrcoef(q1,q3)[0,1])
plt.ylim([-1000,1000])
plt.subplot(1,3,3)
plt.plot(q2,q3,'.')
plt.title(np.corrcoef(q2,q3)[0,1]);
plt.ylim([-50,50])
```
# Beta distribution
Parameters $\alpha$ and $\beta$ are orthogonal if
$$\operatorname{E}_X
\left[
\frac{\partial^2 \log f(X;\alpha,\beta)}{\partial\alpha \, \partial\beta}
\right]=0$$
The probability density function of Beta distribution parametrized by shape parameters $\alpha$ and $\beta$ is
$$f(X=x;\alpha,\beta)=\frac{ x^{\alpha-1}(1-x)^{\beta-1}}{B(\alpha,\beta)}$$
Consider parametrization in terms of logarithm of geometric mean $E[\log X]=\gamma=\psi(\alpha)-\psi(\alpha+\beta)$ and the logarithm of geometric mean of $1-X$: $E[\log (1-X)]=\phi=\psi(\beta)-\psi(\alpha+\beta)$
Then the fisher information matrix of the distribution parametrized by shape parameters is
$$I_{\alpha,\beta}=\begin{pmatrix}\psi'(\alpha)-\psi'(\alpha+\beta) & -\psi'(\alpha+\beta)\\
-\psi'(\alpha+\beta) & \psi'(\beta)-\psi'(\alpha+\beta)
\end{pmatrix}$$
Fisher information matrix when parametrized by $\gamma$ and $\phi$ is
$$I_{\gamma,\phi}=J^\mathrm{T} I_{\alpha,\beta} J$$
Where $J$ is the Jacobian matrix defined as
$$J=\begin{pmatrix}\frac{\partial \alpha}{\partial \gamma} & \frac{\partial \alpha}{\partial \phi}\\
\frac{\partial \beta}{\partial \gamma} & \frac{\partial \beta}{\partial \phi}
\end{pmatrix}$$
Note that $I_{\alpha,\beta}$ can be written as:
$$I_{\alpha,\beta}=\begin{pmatrix}\frac{\partial \gamma}{\partial \alpha} & \frac{\partial \phi}{\partial \alpha} \\ \frac{\partial \gamma}{\partial \beta} & \frac{\partial \phi}{\partial \beta}
\end{pmatrix}$$
$$\mathrm{Cov}(\gamma,\phi)=J_{12}J_{11}\psi'(\alpha)+J_{21}J_{22}\psi'(\beta)-\psi'(\alpha+\beta)(J_{12}J_{11}+J_{21}J_{22}+J_{21}J_{12}+J_{11}J_{22})$$
$\gamma=\psi(\alpha)-\psi(\alpha+\beta)$
$\phi=\psi(\beta)-\psi(\alpha+\beta)$
$\gamma-\phi=\psi(\alpha)-\psi(\beta)$
$\alpha=\omega(\psi(\beta)-\phi)-\beta$
$\beta=\omega(\psi(\alpha)-\gamma)-\alpha$
$$\gamma=\frac{\partial \log \mathrm{B}(\alpha,\beta)}{\partial \alpha}=\frac{\partial \log \Gamma(\alpha)}{\partial \alpha}-\frac{\partial \log \Gamma(\alpha+\beta)}{\partial \alpha}$$
$$\phi=\frac{\partial \log \mathrm{B}(\alpha,\beta)}{\partial \beta}=\frac{\partial \log \Gamma(\beta)}{\partial \beta}-\frac{\partial \log \Gamma(\alpha+\beta)}{\partial \beta}$$
$\psi'(\alpha)=\psi'(\alpha+\beta)\frac{\partial \beta}{\partial \alpha} -\frac{1}{J_{11}}$
$\psi'(\beta)=\psi'(\alpha+\beta)\frac{\partial \alpha}{\partial \beta} -\frac{1}{J_{22}}$
$I_{\alpha,\beta}=\begin{pmatrix}A+C & C \\ C & B+C
\end{pmatrix}$
$J^{-1}=\begin{pmatrix}A+C & C \\ C & B+C \end{pmatrix}$
$I_{\gamma,\phi}=J^\mathrm{T} I_{\alpha,\beta} J= J^\mathrm{T} J^{-1} J=J$
$$J=\frac{1}{AB+BC+AC}\begin{pmatrix}B+C & -C \\ -C & A+C \end{pmatrix}
= \begin{pmatrix}\frac{1}{A+\frac{BC}{B+C}} & -\frac{1}{A+B+\frac{AB}{C}} \\ -\frac{1}{A+B+\frac{AB}{C}} & \frac{1}{B+\frac{AC}{A+C}} \end{pmatrix}$$
$$J_{11}=(A+C)^{-1}$$
$$J_{12}=J_{21}= C^{-1}$$
$$J_{22}=-(B+C)^{-1}$$
$$\frac{J_{11}J_{22}}{J_{12}J_{21}}=1$$
$$\frac{-C^2}{(A+C)(B+C)}=1$$
$$\mathrm{Cov}(\gamma,\phi)=J_{12}J_{11}A+J_{21}J_{22}B+C(J_{12}J_{11}+J_{21}J_{22}+J_{21}J_{12}+J_{11}J_{22})$$
$$\mathrm{Cov}(\gamma,\phi)=\frac{A}{C(A+C)}-\frac{B}{C(B+C)}+\frac{1}{A+C}-\frac{1}{B+C} +\frac{1}{C} +\frac{1}{C}\frac{-C^2}{(A+C)(B+C)}
= \frac{1}{C}\left(\frac{A}{A+C}-\frac{B}{B+C}+\frac{C}{A+C}-\frac{C}{B+C} +1 +1\right)
= \frac{2}{C}$$
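As a quick numerical cross-check of the $I_{\alpha,\beta}$ matrix above (an added illustration; the test values $\alpha=2.3$, $\beta=4.1$ are arbitrary): the Fisher information equals the covariance matrix of the score, which can be estimated by simulation.
```python
import numpy as np
from scipy import stats
from scipy.special import digamma, polygamma

a, b = 2.3, 4.1
x = stats.beta.rvs(a, b, size=1_000_000, random_state=1)
# score components: d log f / d alpha and d log f / d beta
sa = np.log(x) - (digamma(a) - digamma(a + b))
sb = np.log1p(-x) - (digamma(b) - digamma(a + b))
I_emp = np.cov(np.vstack([sa, sb]))
I_theo = np.array([[polygamma(1, a) - polygamma(1, a + b), -polygamma(1, a + b)],
                   [-polygamma(1, a + b), polygamma(1, b) - polygamma(1, a + b)]])
print(I_emp)
print(I_theo)
```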
```python
import pystan
model = """
data {
int<lower=0> N; //nr subjects
vector<lower=0>[N] yLT;
}parameters {
real a;
real b;
}model {
for (n in 1:N)
yLT[n]~beta(exp(a),exp(b));
}
"""
smBeta = pystan.StanModel(model_code=model)
```
INFO:pystan:COMPILING THE C++ CODE FOR MODEL anon_model_bd01f563b8035622229a042f898734bd NOW.
```python
from scipy import stats
x=np.linspace(0,1,101)[1:]
plt.plot(x,stats.beta.pdf(x,4,15,0,1))
temp={'yLT':stats.beta.rvs(4,15,0,1,size=100),'N':100}
fit=smBeta.sampling(data=temp,chains=6,n_jobs=6,
seed=1,thin=4,iter=55000,warmup=5000)
print(fit)
w=fit.extract()
a=np.exp(w['a'])
b=np.exp(w['b'])
```
```python
from scipy.special import gamma, digamma,polygamma,beta
plt.figure(figsize=(12,12))
gA=digamma(a)-digamma(a+b)
gB=digamma(b)-digamma(a+b)
tA=polygamma(1,a)-polygamma(1,a+b)
var=a*b/np.square(a+b)/(a+b+1)
ex=a/(a+b)
q1=ex
q2=var
#q2=g
#k=np.exp(a)
#l=np.exp(b)
#q1=np.log(np.square(k)*digamma(2)+digamma(1))/(2*digamma(2))-g/(polygamma(1,1)+1)
plt.plot(q1,q2,'.')
#plt.ylim([0,1])
#plt.xlim([0,1])
np.corrcoef(q1,q2)[0,1]
```
## Wald distribution Fisher information
$$f(x)=\frac{\alpha}{\sigma \sqrt{2 \pi x^3}}\exp\left(-\frac{(\nu x-\alpha)^2}{2 \sigma^2 x}\right)$$
$E[X]=\alpha/\nu$
$E[1/X]=\nu/\alpha +\sigma^2/\alpha^2$
$$I_{\alpha \sigma \nu} = \begin{pmatrix} \frac{2}{\alpha^2}+\frac{\nu}{\sigma^2 \alpha} & \frac{2}{\sigma \alpha} & \frac{1}{\sigma}\\
. & \frac{1}{\sigma^2} &0\\
. & . & \frac{\alpha}{\sigma^2 \nu}\end{pmatrix} $$
$$I_{\log \alpha,\log \sigma \nu} = \begin{pmatrix} 2 \sigma+\frac{\nu \alpha}{\sigma} & 2 & \frac{1}{\sigma}\\
. & 1 &0\\
. & . & \frac{\alpha}{\sigma^2 \nu}\end{pmatrix} $$
```python
1/(1+pg(0,2)**2/pg(1,1))
```
0.9019858038517234
```python
pg(0,2)
```
array(0.42278434)
```python
from scipy.special import gamma
```
```python
gamma(1)
```
1.0
```python
```
*[End of Statformulas.ipynb, from simkovic/matustools (MIT license)]*
```python
# Add graph and math features
# 그래프, 수학 기능 추가
import pylab as py
# scipy.optimize.newton()
import scipy.optimize as so
```
```python
# symbolic processor
# 기호처리기
import sympy as sym
import sympy.utilities as su
sym.init_printing()
```
# 복소근과 뉴튼 랩슨법<br>Newton Raphson Method and Complex Roots
## A polynomial with complex roots<br>복소근을 갖는 다항식 예
The following video is about applying Newton Raphson method to find complex roots. (26m05s)<br>
아래 비디오는 뉴튼 랩슨법으로 복소근을 찾는 경우에 관한 것이다. (26m05s)
[](https://youtu.be/-RdOwhmqP5s)
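Recall the update rule itself, $z_{n+1} = z_n - P(z_n)/P'(z_n)$, which works unchanged for complex $z$. A minimal hand-rolled sketch (added for illustration only; the function name and the $z^2+1$ example are ours, and the notebook below uses `scipy.optimize.newton` instead):
```python
def my_newton(f, dfdz, z0, tol=1e-12, max_iter=100):
    """Plain Newton-Raphson iteration, valid for complex z."""
    z = z0
    for _ in range(max_iter):
        step = f(z) / dfdz(z)
        z = z - step                 # z_{n+1} = z_n - f(z_n)/f'(z_n)
        if abs(step) < tol:
            break
    return z

# example: starting in the upper half plane converges to the complex root +1j of z**2 + 1
print(my_newton(lambda z: z**2 + 1, lambda z: 2*z, 0.5 + 0.5j))
```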
```python
z = sym.symbols('z', complex=True)
```
From the video, let's think about the following polynomial.<br>
영상의 여러 다항식 가운데 다음을 생각해 보자.
```python
P_z = z ** 5 + z ** 2 - z + 1
P_z
```
Its derivative would be as follows.<br>
그 미분은 다음과 같을 것이다.
```python
dP_dz = P_z.diff(z)
dP_dz
```
Let's make python functions.<br>파이썬 함수를 생성해 보자.
```python
p = su.lambdify(z, P_z)
```
```python
assert p(-1) == P_z.subs({z:-1})
```
```python
dp_dz = su.lambdify(z, dP_dz)
```
```python
assert dp_dz(-1) == dP_dz.subs({z:-1})
```
Let's visualize the complex function `p(z)`.<br>
해당 복소 함수 `p(z)` 를 시각화 해 보자.
```python
x = py.linspace(-3, 3, 150+1)
y = py.linspace(-2, 2, 100+1)
X, Y = py.meshgrid(x, y)
Z = X + Y * 1.0j
P = p(Z)
```
```python
cmap = "viridis"
levels = 32
```
Let's take a look at the real and imaginary parts of $P(z)$.<br>$P(z)$ 의 실수부와 허수부를 살펴보자.
```python
fig, ax = py.subplots(1, 2, figsize=(20, 5))
c0 = ax[0].pcolor(X, Y, P.real, cmap=cmap)
py.colorbar(c0, ax=ax[0])
ax[0].contour(X, Y, P.real, cmap="jet", levels=levels)
ax[0].axis("equal")
py.xlabel("$real(z)$")
py.ylabel("$imag(z)$")
py.title(r"$ real\left( P(z) \right) $");
c1 = ax[1].pcolor(X, Y, P.imag, cmap=cmap)
py.colorbar(c1, ax=ax[1])
ax[1].contour(X, Y, P.imag, cmap="jet", levels=levels)
ax[1].axis("equal");
py.xlabel("$real(z)$")
py.ylabel("$imag(z)$")
py.title(r"$ imag\left( P(z) \right) $");
```
What about the absolute values of $P(z)$?<br>$P(z)$ 의 절대값은 어떤가?
```python
log_abs_P = py.log(abs(P))
```
```python
fig, ax = py.subplots(figsize=(18, 10))
c_abs = ax.pcolor(X, Y, log_abs_P, cmap=cmap)
py.colorbar(c_abs, ax=ax)
ax.contour(X, Y, log_abs_P, cmap="jet", levels=levels)
ax.axis("equal");
py.xlabel("$real(z)$")
py.ylabel("$imag(z)$")
py.title(r"$ \left| P(z) \right| $");
```
## Finding complex roots using Newton-Raphson method<br>복소근을 뉴튼랩슨법으로 찾기
```python
class LogAttempts():
def __init__(self):
self.z_dict = {}
self.z_list = []
    def init(self, z0:complex):
        # start a fresh attempt log for this initial guess; z_list aliases the dict entry
        self.z_list = self.z_dict[z0] = []
def f(self, z:complex) -> complex:
self.z_list.append(z)
return p(z)
```
```python
logger = LogAttempts()
```
### Various initial conditions<br>다양한 초기 조건
```python
for z_initial in (-2.0 + 1.0j, -1.0 + 1.0j, 2.0j, 2.0 + 1.0j):
logger.init(z_initial)
f_z = logger.f
root = so.newton(f_z, z_initial, fprime=dp_dz)
```
```python
fig, ax = py.subplots(figsize=(18, 10))
c_abs = ax.pcolor(X, Y, log_abs_P, cmap=cmap, alpha=0.75)
py.colorbar(c_abs, ax=ax)
ax.contour(X, Y, log_abs_P, cmap="jet", levels=levels)
for z_initial in logger.z_dict:
z_array = py.array(logger.z_dict[z_initial])
z_real = z_array.real
z_imag = z_array.imag
ax.plot(z_real, z_imag, '.-', label=f"{z_initial}")
ax.axis("equal");
ax.legend(loc=0)
py.xlabel("$real(z)$")
py.ylabel("$imag(z)$")
py.title(r"$ \left| P(z) \right| $");
```
## Another Example<br>다른 예
```python
r = 10.0
```
```python
Q_z = - (r * r - z * z) ** 0.5 + r * 0.5
Q_z
```
Its derivative would be as follows.<br>
그 미분은 다음과 같을 것이다.
```python
dQ_dz = Q_z.diff(z)
dQ_dz
```
```python
q = su.lambdify(z, Q_z)
```
```python
dq_dz = su.lambdify(z, dQ_dz)
```
```python
x = py.linspace(-r*2, r*2, 100+1)
y = py.linspace(-r, r, 100+1)
X, Y = py.meshgrid(x, y)
Z = X + Y * 1.0j
Q = q(Z)
```
```python
log_abs_Q = py.log(abs(Q))
```
```python
fig, ax = py.subplots(figsize=(18, 8))
c_abs = ax.pcolor(X, Y, log_abs_Q, cmap=cmap)
py.colorbar(c_abs, ax=ax)
ax.contour(X, Y, log_abs_Q, cmap="jet", levels=levels)
ax.axis("equal");
py.xlabel("$real(z)$")
py.ylabel("$imag(z)$")
py.title(r"$ \left| Q(z) \right| $");
```
```python
class LogAttemptsQ(LogAttempts):
def __init__(self):
# Understanding Python super() with __init__() methods, https://stackoverflow.com/questions/576169
super(LogAttemptsQ, self).__init__()
def f(self, z:complex) -> complex:
self.z_list.append(z)
result = q(z)
assert 100 > abs(result), f"abs(result) = {abs(result)}"
return result
```
```python
logger_q = LogAttemptsQ()
```
```python
for z_initial in (-1.5 * r + 0.01j, -0.5 * r + 1.0j, 0.5 * r + 1.0j):
logger_q.init(z_initial)
f_z = logger_q.f
try:
root = so.newton(f_z, z_initial, fprime=dq_dz)
except RuntimeError as e:
print(e)
```
```python
fig, ax = py.subplots(figsize=(18, 8))
c_abs = ax.pcolor(X, Y, log_abs_Q, cmap=cmap, alpha=0.75)
py.colorbar(c_abs, ax=ax)
ax.contour(X, Y, log_abs_Q, cmap="jet", levels=levels)
for z_initial in logger_q.z_dict:
z_array = py.array(logger_q.z_dict[z_initial])
z_real = z_array.real
z_imag = z_array.imag
ax.plot(z_real, z_imag, '.-', label=f"{z_initial}")
ax.axis("equal");
ax.legend(loc=0)
py.xlabel("$real(z)$")
py.ylabel("$imag(z)$")
py.title(r"$ \left| Q(z) \right| $");
```
## Final Bell<br>마지막 종
```python
# stackoverfow.com/a/24634221
import os
os.system("printf '\a'");
```
```python
```
*[End of 10_root_finding/45_newton_raphson_complex.ipynb, from kangwon-naver/nmisp (BSD-3-Clause license)]*
```python
import time
import random
from typing import List
import sympy
import math
import string
import types
```
```python
sympy.init_printing()
```
```python
def quick_sort(collection: list) -> list:
if len(collection) < 2:
return collection
pivot = collection.pop()
greater: List[int] = []
lesser: List[int] = []
for element in collection:
(greater if element > pivot else lesser).append(element)
return quick_sort(lesser) + [pivot] + quick_sort(greater)
```
```python
user_input = input("input numbers seperated by a comma:\n").strip()
unsorted = user_input.split(',')
print(quick_sort(unsorted))
```
['']
```python
unsorted = list(range(1000000))
random.shuffle(unsorted)  # shuffles in place; random.shuffle returns None
```
```python
n = sympy.Symbol('n')
expr = sympy.solve(8 * sympy.functions.log(n, 2) - n)
expr
```
```python
expr[0].evalf()
```
```python
f_1 = 8 * sympy.functions.ln(n)
f_2 = n
sympy.plot(f_1, f_2, (n,0.1,100))
```
```python
f_1 = (n**2 * 100)
f_2 = (2**n)
p = sympy.plot(f_1, f_2, (n,0,100), show=False, legend=True)
p[0].label = '$100 * n**2$'
p[0].line_color = 'red'
p[1].label = '$2**n$'
p.show()
```
```python
f_1 = sympy.log(n)
f_2 = sympy.sqrt(n)
f_3 = n
f_4 = n * sympy.log(n)
f_5 = n**2
f_6 = n**3
f_7 = 2**n
f_8 = sympy.factorial(n)
```
```python
t0 = 1000 # 1000 milliseconds
```
```python
r_1 = sympy.solve(f_1 - t0)
r_2 = sympy.solve(f_2 - t0)
r_3 = sympy.solve(f_3 - t0)
r_4 = sympy.solve(f_4 - t0)
r_5 = sympy.solve(f_5 - t0)
r_6 = sympy.solve(f_6 - t0)
r_7 = sympy.solve(f_7 - t0)
```
```python
r_8 = sympy.solve(f_8 - t0)
```
```python
print(r_1, r_2, r_3, r_4, r_5, r_6, r_7)
```
[exp(1000)] [1000000] [1000] [exp(LambertW(1000))] [-10*sqrt(10), 10*sqrt(10)] [10, -5 - 5*sqrt(3)*I, -5 + 5*sqrt(3)*I] [log(10**(3/log(2)))]
```python
# c_01 = random.sample(range(-100, 100), 10)
c_01 = random.choices(string.ascii_letters + string.digits, k=10)
print(c_01)
for i, item in enumerate(c_01):
for j in range(i)[::-1]:
if item < c_01[j]:
c_01[j+1] = c_01[j]
c_01[j] = item
else:
break
print(c_01)
```
['M', 'X', 'L', '8', 'D', 'i', 'g', 'S', 'b', 'N']
['8', 'D', 'L', 'M', 'N', 'S', 'X', 'b', 'g', 'i']
```python
A = random.sample(range(-100, 100), 10)
print(A)
# A = random.choices(string.ascii_letters + string.digits, k=10)
for j, key in enumerate(A):
    if j == 0:
        continue  # the first element is already "sorted"
    i = j - 1
    while i >= 0 and A[i] > key:  # also compare against A[0]
        A[i+1] = A[i]
        i = i - 1
    A[i+1] = key
print(A)
```
```python
def fib(n: int) -> int:
if n <= 0: return 0
    if n == 1 or n == 2: return 1
prev = curr= 1
for i in range(3, n+1):
sum = prev + curr
prev = curr
curr = sum
return curr
```
```python
fib(4)
```
3
```python
def fib_recursive(n: int) -> int:
if n <= 2: return 1
return fib_recursive(n-1) + fib_recursive(n-2)
```
```python
fib_recursive(6)
```
8
```python
def coinChange(coins: List[int], amount: int) -> int:
    # results has amount+1 entries; entry i starts at i, since amount i needs at most i coins of value 1
    results = list(range(amount+1))
    for i, r in enumerate(results): # compute the minimum number of coins for each amount, starting from 0
        if i == 0: continue # start from amount == 1
        for coin in coins:
            remains = i - coin
            if remains < 0: continue
            results[i] = min(results[i], 1 + results[remains])
    return results[amount]
```
```python
for amount in range(12):
print("{}, {}".format(amount, coinChange([1,2,5], amount)))
```
0, 0
1, 1
2, 1
3, 2
4, 2
5, 1
6, 2
7, 2
8, 3
9, 3
10, 2
11, 3
```python
def coinChange(coins: List[int], amount: int) -> int:
if amount < 0: return -1
dp = list(range(amount+1))
for i in range(amount+1):
        result = [dp[i]]   # dp[i] starts at i, a valid upper bound using only coins of value 1
        for coin in coins:
            if i - coin >= 0:
                result.append(1 + dp[i - coin])
        dp[i] = min(result)
    return dp[amount]
```
*[End of sorts/my_algo.ipynb, from wuchenchen/Python (MIT license)]*
# CHEM 1000 - Spring 2022
Prof. Geoffrey Hutchison, University of Pittsburgh
## Graded Homework 6
For this homework, we'll focus on:
- integrals in 2D polar and 3D spherical space
- probability (including integrating continuous distributions)
---
As a reminder, you do not need to use Python to solve the problems. If you want, you can use other methods, just put your answers in the appropriate places.
To turn in, either download as Notebook (.ipynb) or Print to PDF and upload to Gradescope.
Make sure you fill in any place that says YOUR CODE HERE or "YOUR ANSWER HERE", as well as your name and collaborators (i.e., anyone you discussed this with) below:
```python
NAME = ""
COLLABORATORS = ""
```
### Cartesian to Spherical Integrals
Consider the Cartesian integral:
$$
\iiint z^{2} d x d y d z
$$
Evaluation is fairly simple over a rectangular region, but if we wish to evaluate across a spherical volume, the limits of integration become messy.
Instead, we can transform $z^2$, resulting in the integral:
$$
\int_{0}^{a} \int_{0}^{\pi} \int_{0}^{2 \pi}(r \cos \theta)^{2} r^{2} \sin \theta d \varphi d \theta d r
$$
Integrate across a sphere of size "a"
```python
from sympy import init_session
init_session()
```
```python
a, r, theta, phi = symbols("a r theta phi")
# technically, we're integrating psi**2 but let's not worry about that now
# z => (r*cos(theta)) in spherical coordinates
f = (r*cos(theta))**2
integrate(# something)
```
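One possible way to fill in the blank, shown only as a sketch of the setup (declaring `a` positive and the stated result are our additions, not a required answer):
```python
# a minimal sketch, assuming the integrand and limits described above
a, r, theta, phi = symbols("a r theta phi", positive=True)
f = (r*cos(theta))**2                                   # z**2 in spherical coordinates
integrate(f * r**2 * sin(theta), (phi, 0, 2*pi), (theta, 0, pi), (r, 0, a))
# this evaluates to 4*pi*a**5/15
```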
### Normalizing
We often wish to normalize functions. Calculate a normalization constant $N^2$ for the following integral (e.g., evaluate it, set it equal to one, etc.)
$$
\int_{0}^{\infty} \int_{0}^{\pi} \int_{0}^{2 \pi} N^2 \mathrm{e}^{-2 r} \cos ^{2} \theta r^{2} \sin \theta d \varphi d \theta d r
$$
(This is related to a $2p_z$ hydrogen atomic orbital.)
```python
# if you have an error, make sure you run the cell above this
N, r, theta, phi = symbols("N r theta phi")
# technically, we're integrating psi**2 but let's not worry about that now
f = N**2 * exp(-2*r) * cos(theta)**2
integrate(# something)
```
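A sketch of how the normalization could be carried out (illustrative only; solving this way assumes `N` is declared positive):
```python
# minimal sketch: integrate over all space, set the result equal to 1, solve for N
N, r, theta, phi = symbols("N r theta phi", positive=True)
f = N**2 * exp(-2*r) * cos(theta)**2
total = integrate(f * r**2 * sin(theta), (phi, 0, 2*pi), (theta, 0, pi), (r, 0, oo))
solve(Eq(total, 1), N)     # gives N = sqrt(3/pi)
```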
### Gaussian Distribution and Probability
You may have heard much about the Gaussian "normal" distribution ("Bell curve").
If we flip a coin enough, the binomial distribution becomes essentially continuous. Moreover, the "law of large numbers" (i.e., the [central limit theorem](https://en.wikipedia.org/wiki/Central_limit_theorem)) indicates that by taking the enough data points, the sum - and average will tend towards a Gaussian distribution.
In short, even if our underlying data comes from some different distribution, the average will become normal (e.g. consider the average velocities in an ideal gas - a mole is a big number).
We will spend some time on the Gaussian distribution.
- mean $x_0$
- standard deviation $\sigma$, variance $\sigma^2$
A normalized Gaussian distribution is then:
$$
p(x) =\frac{1}{\sqrt{2 \pi \sigma^{2}}} \exp \left[-\frac{\left(x-x_{0}\right)^{2}}{2 \sigma^{2}}\right]
$$
If the mean $x_0 = 0$, what fraction of data will fall:
- within $\pm 0.5\sigma$
- within one standard deviation
- within $\pm 1.5 \sigma$
- within $\pm 2.0 \sigma$
(In other words, change the limits of integration on the probability function.)
```python
sigma = symbols('sigma')
p = (1/sqrt(2*pi*sigma**2))*exp((-x**2)/(2*sigma**2))
simplify(integrate(p, (x, -0.5*sigma, +0.5*sigma)))
```
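The remaining fractions come from changing the integration limits; one compact way to tabulate them (a sketch, setting $\sigma = 1$ since the enclosed fraction depends only on the multiple of $\sigma$):
```python
# minimal sketch: enclosed probability for several multiples of sigma
p1 = (1/sqrt(2*pi))*exp(-x**2/2)          # sigma = 1 without loss of generality
for n in [0.5, 1.0, 1.5, 2.0]:
    print(n, integrate(p1, (x, -n, n)).evalf())
```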
### If we want to include "error bars" with 95% confidence, what intervals do we use?
YOUR ANSWER HERE
What about 99% confidence intervals?
|
1ce0df6b99e963f4fc402df420d611e96b0bf2a1
| 8,669 |
ipynb
|
Jupyter Notebook
|
homework/ps6/ps6.ipynb
|
ghutchis/chem1000
|
07a7eac20cc04ee9a1bdb98339fbd5653a02a38d
|
[
"CC-BY-4.0"
] | 12 |
2020-06-23T18:44:37.000Z
|
2022-03-14T10:13:05.000Z
|
homework/ps6/ps6.ipynb
|
ghutchis/chem1000
|
07a7eac20cc04ee9a1bdb98339fbd5653a02a38d
|
[
"CC-BY-4.0"
] | null | null | null |
homework/ps6/ps6.ipynb
|
ghutchis/chem1000
|
07a7eac20cc04ee9a1bdb98339fbd5653a02a38d
|
[
"CC-BY-4.0"
] | 4 |
2021-07-29T10:45:23.000Z
|
2021-10-16T09:51:00.000Z
| 27.520635 | 334 | 0.560272 | true | 977 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.904651 | 0.909907 | 0.823148 |
__label__eng_Latn
| 0.980534 | 0.750781 |
# Cart-pole swing-up problem: interactive demonstration
Hello and welcome. This is a Jupyter Notebook, a kind of document that can alternate between static content, like text and images, and executable cells of code.
This document illustrates the Cart-pole swing-up test case of the paper: "Collocation Methods for Second Order Systems", submitted to RSS 2022.
In order to run the cells of code, you can select the cell and click on the small "play" button in the bar above or press shift+enter. Alternatively, you can select the option "run -> run all cells" in order to run all the code in order. Beware that some cells can take several minutes!
All of the code used in this example is open-source and free to use.
[SymPy](https://www.sympy.org/en/index.html) is used for Symbolic formulation and manipulation of the problem.
[Numpy](https://numpy.org/) is used for numerical arrays and operations.
[CasADI](https://web.casadi.org/) is used for optimization.
[Optibot](https://github.com/AunSiro/optibot) is the name of the package where we are compiling our code. We aim to produce a toolbox for Optimal Control Problems, focused on robotics, including a high level, readable and clean interface between the prior three packages.
## Package imports
```python
import numpy as np
import matplotlib.pyplot as plt
```
```python
from sympy import (symbols, simplify)
from sympy.physics.mechanics import dynamicsymbols, init_vprinting
from sympy.physics.mechanics import Lagrangian, ReferenceFrame, Point, Particle,inertia, RigidBody
```
```python
from optibot.symbolic import lagrange, diff_to_symb, SimpLagrangesMethod
from optibot.numpy import unpack
```
```python
from functools import lru_cache
```
```python
#SymPy vector-like latex rendering inizialization:
init_vprinting()
```
## Symbolic Problem Modelling
The first step is to model our problem taking advantage of the high level object syntax of the mechanics module in SymPy
```python
# Creating symbols and dynamic symbols
m0, m1, l, t, g = symbols('m_0 m_1 l t g')
q0, q1 = dynamicsymbols('q_0 q_1')
```
```python
# Definition of the physics system
N_in = ReferenceFrame('N')
pN = Point('N*')
pN.set_vel(N_in, 0)
P0 = pN.locatenew('P0', q0 * N_in.x)
P0.set_vel(N_in, q0.diff(t) * N_in.x)
cart_part = Particle('CartPart', P0, m0)
cart_part.potential_energy = m0 * g * P0.pos_from(pN).dot(N_in.y)
N1 = N_in.orientnew('N1', 'Axis', [q1, N_in.z])
P1 = P0.locatenew('P1', -l*N1.y)
P1.set_vel(N_in, P1.pos_from(pN).dt(N_in))
pend_part = Particle('PendPart', P1, m1)
pend_part.potential_energy = m1 * g * P1.pos_from(pN).dot(N_in.y)
```
```python
#Computing the Lagrangian
Lag_simp = Lagrangian(N_in, cart_part, pend_part)
Lag_simp
```
```python
# Defining the control forces and external actions, and applying them to our system
u0, u1 = symbols('u_0, u_1')
FL = [(P0, u0 * N_in.x)]#, (N1, u1 * N_in.z)]
LM_small = SimpLagrangesMethod(Lag_simp, [q0, q1], forcelist=FL, frame=N_in)
```
```python
# Generating the dynamic equations
LM_small.form_lagranges_equations()
RHS_small = LM_small.rhs
RHS_small
```
### Scheme definitions
Each scheme is defined here as a function that must be equal to zero at each interval.
Note that functions that contain "mod" in the name are those we define as "second order",
and use separate conditions for q and v.
Schemes that contain "parab" in the name are versions of Hermite Simpson that allow
or $U_c$ to be a free parameter. It is passed to the function through the
"scheme_params" argument.
If you wish to define your own schemes, do it here.
Be careful to respect the function structure: either
restriction(x, x_n, u, u_n, F, dt, params) = 0
or
restriction(x, x_n, u, u_n, F, dt, params, scheme_params) = 0
```python
from optibot.schemes import index_div
from copy import copy
def euler_restr(x, x_n, u, u_n, F, dt, params):
return x_n - (x + dt * F(x, u, params))
def trapz_restr(x, x_n, u, u_n, F, dt, params):
f = F(x, u, params)
f_n = F(x_n, u_n, params)
return x_n - (x + dt / 2 * (f + f_n))
def trapz_mod_restr(x, x_n, u, u_n, F, dt, params):
res = copy(x)
first_ind, last_ind = index_div(x)
q = x[first_ind]
v = x[last_ind]
f = F(x, u, params)[last_ind]
f_n = F(x_n, u_n, params)[last_ind]
res[last_ind] = v + dt / 2 * (f + f_n)
res[first_ind] = q + dt * v + dt ** 2 / 6 * (f_n + 2 * f)
return x_n - res
def hs_restr(x, x_n, u, u_n, F, dt, params):
f = F(x, u, params)
f_n = F(x_n, u_n, params)
x_c = (x + x_n) / 2 + dt / 8 * (f - f_n)
u_c = (u + u_n) / 2
f_c = F(x_c, u_c, params)
return x + dt / 6 * (f + 4 * f_c + f_n) - x_n
def hs_mod_restr(x, x_n, u, u_n, F, dt, params):
x_c = copy(x)
res = copy(x)
first_ind, last_ind = index_div(x)
f = F(x, u, params)[last_ind]
f_n = F(x_n, u_n, params)[last_ind]
q = x[first_ind]
v = x[last_ind]
q_n = x_n[first_ind]
v_n = x_n[last_ind]
u_c = (u + u_n) / 2
q_c = q + dt / 32 * (13 * v + 3 * v_n) + dt**2 / 192 * (11 * f - 5 * f_n)
v_c = (v + v_n) / 2 + dt / 8 * (f - f_n)
x_c[first_ind] = q_c
x_c[last_ind] = v_c
f_c = F(x_c, u_c, params)[last_ind]
res[last_ind] = v + dt / 6 * (f + 4 * f_c + f_n)
res[first_ind] = q + dt * v + dt ** 2 / 6 * (f + 2 * f_c)
return x_n - res
def hs_parab_restr(x, x_n, u, u_n, F, dt, params, scheme_params):
f = F(x, u, params)
f_n = F(x_n, u_n, params)
x_c = (x + x_n) / 2 + dt / 8 * (f - f_n)
u_c = scheme_params
f_c = F(x_c, u_c, params)
return x + dt / 6 * (f + 4 * f_c + f_n) - x_n
def hs_mod_parab_restr(x, x_n, u, u_n, F, dt, params, scheme_params):
x_c = copy(x)
res = copy(x)
first_ind, last_ind = index_div(x)
f = F(x, u, params)[last_ind]
f_n = F(x_n, u_n, params)[last_ind]
q = x[first_ind]
v = x[last_ind]
q_n = x_n[first_ind]
v_n = x_n[last_ind]
u_c = scheme_params
q_c = q + dt / 32 * (13 * v + 3 * v_n) + dt**2 / 192 * (11 * f - 5 * f_n)
v_c = (v + v_n) / 2 + dt / 8 * (f - f_n)
x_c[first_ind] = q_c
x_c[last_ind] = v_c
f_c = F(x_c, u_c, params)[last_ind]
res[last_ind] = v + dt / 6 * (f + 4 * f_c + f_n)
res[first_ind] = q + dt * v + dt ** 2 / 6 * (f + 2 * f_c)
return x_n - res
```
### Casadi optimization
We have generated the system equations symbolically. Now, we translate them to CasADi objects in order to perform the optimization.
```python
#Numerical values of the parameters
m0_n, m1_n = [1., 0.3]
l_n = 0.5
g_n = 9.81
params = [g_n, l_n, m0_n, m1_n]
```
```python
#Package imports
import casadi as cas
from optibot.casadi import rhs_to_casadi_function, restriction2casadi
```
```python
# Translating the Sympy Expression into a CasADi function
F_cas_simp = rhs_to_casadi_function(RHS_small[2:], 2)
```
```python
def gen_ini_guess(N = 25, ini_guess = 'lin'):
'''
Generates an initial guess for the Cartpole problem of N intervals.
'''
if ini_guess == 'zero':
x_init_guess = np.zeros([N+1,4])
elif ini_guess == 'lin':
def_q1 = np.linspace(0,1,N+1)
def_q2 = np.linspace(0,np.pi,N+1)
def_v1 = np.zeros(N+1)
def_v2 = np.zeros(N+1)
x_init_guess = np.array([def_q1, def_q2, def_v1, def_v2]).T
return x_init_guess
```
```python
import time
def chrono_solve(opti, solve_repetitions):
'''
Calls the solver a certain amount of times and returns the last solution
obtained and the average computing time
'''
cput0 = time.time()
for ii in range(solve_repetitions):
sol = opti.solve()
cput1 = time.time()
cpudt = (cput1-cput0)/solve_repetitions
return sol, cpudt
```
```python
#@lru_cache
def casadi_cartpole(N = 25, scheme = 'euler', ini_guess = 'lin', solve_repetitions = 1, t_end = 2):
opti = cas.Opti()
p_opts = {"expand":True,'ipopt.print_level':0, 'print_time':0}
s_opts = {"max_iter": 10000, 'tol': 1e-26}
opti.solver("ipopt",p_opts,
s_opts)
restr_schemes = {
'euler': euler_restr, # Euler scheme
'trapz': trapz_restr, # Trapezoidal Scheme
'trapz_mod' : trapz_mod_restr, # Second Order Trapezoidal Scheme
'hs': hs_restr, # Hermite Simpson Scheme, assuming that each Uc is the central value
'hs_mod': hs_mod_restr, # Second Order Hermite Simpson Scheme, assuming that each Uc is the central value
'hs_parab': hs_parab_restr, # Hermite Simpson Scheme, with Uc as a free problem parameter
'hs_mod_parab': hs_mod_parab_restr # Second Order Hermite Simpson Scheme, with Uc as a free problem parameter
#'your scheme name here': your_scheme_function_here
}
f_restr = restr_schemes[scheme]
    # parab is a boolean variable that controls whether the central points of U are free decision variables
if scheme in ['hs_parab', 'hs_mod_parab']:
parab = True
else:
parab = False
# Creating problem structure
X = opti.variable(N+1,4)
U = opti.variable(N+1)
if parab:
U_c = opti.variable(N)
T = opti.parameter()
u_m = opti.parameter()
Params = opti.parameter(4)
# Defining the problem cost to minimize (integral of u^2)
cost = (cas.sum1(U[:]**2)+cas.sum1(U[1:-1]**2))/N
if parab:
cost = (4*cas.sum1(U_c[:]**2) + cas.sum1(U[:]**2)+cas.sum1(U[1:-1]**2))/(3*N)
opti.minimize(cost)
# Initial and final conditions
opti.subject_to(X[0,:].T == [0, 0, 0, 0])
opti.subject_to(X[-1,:].T == [1, np.pi, 0, 0])
# Translating the scheme restriction function into a CasADi function
if parab:
restriction = restriction2casadi(f_restr, F_cas_simp, 2, 1, 4, 1)
else:
restriction = restriction2casadi(f_restr, F_cas_simp, 2, 1, 4)
# Appliying restrictions and action boundaries
for ii in range(N):
if parab:
opti.subject_to(restriction(X[ii,:], X[ii+1,:], U[ii,:], U[ii+1],T/N, Params, U_c[ii])==0)
opti.subject_to(opti.bounded(-u_m, U_c[ii,:] ,u_m))
else:
opti.subject_to(restriction(X[ii,:], X[ii+1,:], U[ii,:], U[ii+1,:],T/N, Params)==0)
opti.subject_to(opti.bounded(-u_m,U[ii,:],u_m))
opti.subject_to(opti.bounded(-u_m,U[-1, :],u_m))
# Setting parameters to their numeric values
opti.set_value(T, t_end)
max_f = 20.0
opti.set_value(u_m, max_f)
m0_n, m1_n = [1., 0.3]
l_n = 0.5
g_n = 9.81
opti.set_value(Params, [g_n, l_n, m0_n, m1_n])
# Setting the initialization values
if ini_guess in ['zero', 'lin']:
opti.set_initial(X, gen_ini_guess(N, ini_guess))
elif type(ini_guess) == list:
opti.set_initial(X, ini_guess[0])
opti.set_initial(U, ini_guess[1])
if parab:
opti.set_initial(U_c, ini_guess[2])
else:
raise TypeError('initial guess not understood')
# Solve
sol, cpudt = chrono_solve(opti, solve_repetitions)
err_count = None
sol_cost = sol.value(cost)
xx_simp = sol.value(X)
uu_simp = sol.value(U)
if parab:
uu_c = sol.value(U_c)
else:
uu_c = None
# Return data
return xx_simp, uu_simp, uu_c, cpudt, err_count, sol_cost
```
Let's try to solve the problem for 25 points and the 2nd order Hermite Simpson
```python
from optibot.schemes import interpolated_array, interpolated_array_derivative
from optibot.analysis import dynamic_error
from optibot.numpy import RHS2numpy
```
```python
F_nump = RHS2numpy(RHS_small, 2)
```
```python
scheme = 'hs_mod_parab'
N = 25
xx, uu, uu_c, cpudt, _, cost = casadi_cartpole(N, scheme, 'lin', 1)
xx_interp, uu_interp = interpolated_array(
X = xx,
U = uu,
F = F_nump,
h = 2/N,
t_array = np.linspace(0, 2, 2000),
params = params,
scheme = "hs_parab",
u_scheme = 'parab',
scheme_params = {'u_c' : uu_c}
)
plt.figure(figsize=[16,8])
plt.plot(np.linspace(0,2,N+1),uu[:], 'o',label = '$u_k$ points')
plt.plot(np.linspace(0,2,2*N+1)[1::2],uu_c, 'o',label = '$u_c$ points')
plt.plot(np.linspace(0,2,2000),uu_interp, label = 'interpolation')
plt.grid()
plt.legend()
plt.title('Cart-pole U(t) for 2nd order Hermite Simpson with N = 25')
labels = ['q1','q2','v1','v2']
for ii in range(4):
plt.figure(figsize=[16,10])
plt.plot(np.linspace(0,2,N+1),xx[:,ii], 'o',label = f'${labels[ii]}_k$ points')
plt.plot(np.linspace(0,2,2000),xx_interp[:,ii], label = 'interpolation')
plt.grid()
plt.legend()
plt.title(f'Cart-pole {labels[ii]}(t) for 2nd order Hermite Simpson with N = 25')
```
## Systematic comparison of schemes for different values of N
Now let's solve the problem with different methods.
### Caution!
Executing the next cell may require some time!
```python
schemes = ['hs_parab', 'hs_mod_parab', 'trapz', 'trapz_mod'] #If you defined a custom function, name your scheme here
initials = ['lin']
solve_repetitions = 30 #Increase this number to get more reliable values of execution times
N_arr = [20, 25, 30, 40, 50, 60]# You can increase the numbers here, but it will take more time
results = {}
for scheme in schemes:
for init in initials:
key = scheme + '_' + init
print('Problem:', key)
results[key] = {'N_arr':N_arr}
for N in N_arr:
print(f'\tN = {N}')
xx, uu, uu_c, cpudt, _, cost = casadi_cartpole(N, scheme, init, solve_repetitions)
results[key][N] = {
'x': xx,
'u': uu,
'u_c': uu_c,
'cpudt': cpudt,
'cost': cost,
}
```
```python
#Calculating the number of collocation points
for scheme in results.keys():
if 'hs' in scheme:
n_coll = np.array(results[scheme]['N_arr'])*2-1
results[scheme]['N_coll_arr'] = n_coll
else:
results[scheme]['N_coll_arr'] = results[scheme]['N_arr']
```
## Dynamic Error
Now we can compute the dynamic errors for each case
```python
def total_state_error(t_arr, dyn_err):
errors = np.trapz(np.abs(dyn_err), t_arr, axis=0)
return errors
```
```python
schemes = ['hs_parab', 'hs_mod_parab', 'trapz', 'trapz_mod']
initials = ['lin']#, 'funcs']
n_interp = 4000
for scheme in schemes:
for init in initials:
key = scheme + '_' + init
print('Problem:', key)
N_arr = results[key]['N_arr']
for N in N_arr:
print(f'\tN = {N}')
if 'parab' in scheme:
u_scheme = 'parab'
else:
u_scheme = 'lin'
dyn_err_q, dyn_err_v, dyn_err_2_a, dyn_err_2_b = dynamic_error(
results[key][N]['x'],
results[key][N]['u'],
2,
params,
F_nump,
scheme = scheme,
u_scheme= u_scheme,
scheme_params={'u_c':results[key][N]['u_c']},
n_interp = n_interp)
t_arr = np.linspace(0,2, n_interp)
tot_dyn_err_q = total_state_error(t_arr, dyn_err_q)
tot_dyn_err_v = total_state_error(t_arr, dyn_err_v)
tot_dyn_err_2_a = total_state_error(t_arr, dyn_err_2_a)
tot_dyn_err_2_b = total_state_error(t_arr, dyn_err_2_b)
results[key][N]['err_q_int'] = dyn_err_q
results[key][N]['err_v_int'] = dyn_err_v
results[key][N]['err_2_a_int'] = dyn_err_2_a
results[key][N]['err_2_b_int'] = dyn_err_2_b
results[key][N]['err_q'] = tot_dyn_err_q
results[key][N]['err_v'] = tot_dyn_err_v
results[key][N]['err_2_a'] = tot_dyn_err_2_a
results[key][N]['err_2_b'] = tot_dyn_err_2_b
```
```python
for scheme in schemes:
for init in initials:
key = scheme + '_' + init
print('Problem:', key)
N_arr = results[key]['N_arr']
err_q_acum = []
err_v_acum = []
err_2_a_acum = []
err_2_b_acum = []
cpudt = []
for N in N_arr:
err_q_acum.append(results[key][N]['err_q'])
err_v_acum.append(results[key][N]['err_v'])
err_2_a_acum.append(results[key][N]['err_2_a'])
err_2_b_acum.append(results[key][N]['err_2_b'])
cpudt.append(results[key][N]['cpudt'])
results[key]['err_q_acum'] = np.array(err_q_acum, dtype = float)
results[key]['err_v_acum'] = np.array(err_v_acum, dtype = float)
results[key]['err_2_a_acum'] = np.array(err_2_a_acum, dtype = float)
results[key]['err_2_b_acum'] = np.array(err_2_b_acum, dtype = float)
results[key]['cpudt'] = np.array(cpudt, dtype = float)
```
```python
#Plotting parameters
plt.rcParams.update({'font.size': 12})
oct_fig_size = [15,10]
```
```python
sch = [['hs_parab','hs_mod_parab'],['trapz', 'trapz_mod']]
tit = [['Hermite Simpson','2nd order Hermite Simpson'],['Trapezoidal', '2nd order Trapezoidal']]
colors = [f'C{ii}' for ii in [1,0,2,3]]
n_int = len(t_arr)
N_hh = [25,50]
for hh in range(2):
schemes = sch[hh]
titles = tit[hh]
N = N_hh[hh]
interv_n = (N * t_arr)/2
for ii in range(2):
plt.figure(figsize=oct_fig_size)
for kk in range(len(schemes)):
scheme = schemes[kk]
key = scheme + '_lin'
cut_p = 0
for ll in range(1,N+1):
jj = np.searchsorted(interv_n, ll)
plt.plot(t_arr[cut_p:jj],results[key][N]['err_q_int'][cut_p:jj,ii], '-', c = colors[2*hh+kk], label = titles[kk] if cut_p == 0 else None)
cut_p = jj
plt.plot(np.linspace(0,2,N+1), np.zeros(N+1), 'ok', label = 'knot & collocation points')
if hh == 0:
plt.plot(np.linspace(0,2,2*N+1)[1::2], np.zeros(N), 'ow', markeredgecolor='k', label = 'collocation points')
plt.legend()
plt.grid()
plt.title(r'First order dynamic error $\varepsilon^{[1]}_{q_'+f'{ii+1}}}$, {titles[0]} schemes, N = {N}')
plt.xlabel('Time(s)')
units = 'm/s' if ii == 0 else'rad/s'
plt.ylabel(f'Dynamic error $({units})$')
plt.tight_layout(pad = 0.0)
sch_type = titles[0].replace(' ','_')
# If you are running the notebook locally and want to save the plots,
# uncomment the next line
#plt.savefig(f'Cartpole_First_Order_Dynamic_Error_q_{ii+1}_{sch_type}_schemes_N_{N}.eps', format='eps')
```
```python
sch = [['hs_parab','hs_mod_parab'],['trapz', 'trapz_mod']]
tit = [['Hermite Simpson','2nd order Hermite Simpson'],['Trapezoidal', '2nd order Trapezoidal']]
colors = [f'C{ii}' for ii in [1,0,2,3]]
n_int = len(t_arr)
N_hh = [25,50]
for hh in range(2):
schemes = sch[hh]
titles = tit[hh]
N = N_hh[hh]
interv_n = (N * t_arr)/2
for ii in range(2):
plt.figure(figsize=oct_fig_size)
for kk in range(len(schemes)):
scheme = schemes[kk]
key = scheme + '_lin'
cut_p = 0
for ll in range(1,N+1):
jj = np.searchsorted(interv_n, ll)
plt.plot(t_arr[cut_p:jj],results[key][N]['err_2_b_int'][cut_p:jj,ii], '-', c = colors[2*hh+kk], label = titles[kk] if cut_p == 0 else None)
cut_p = jj
plt.plot(np.linspace(0,2,N+1), np.zeros(N+1), 'ok', label = 'knot & collocation points')
if hh == 0:
plt.plot(np.linspace(0,2,2*N+1)[1::2], np.zeros(N), 'ow', markeredgecolor='k', label = 'collocation points')
plt.legend()
plt.grid()
#plt.ylim([-0.00022, 0.00022])
plt.title(r'Second order dynamic error $\varepsilon^{[2]}_{q_'+f'{ii+1}}}$, {titles[0]} schemes, N = {N}')
plt.xlabel('Time(s)')
units = 'm/s^2' if ii == 0 else'rad/s^2'
plt.ylabel(f'Dynamic error $({units})$')
plt.tight_layout(pad = 0.0)
sch_type = titles[0].replace(' ','_')
# If you are running the notebook locally and want to save the plots,
# uncomment the next line
#plt.savefig(f'Cartpole_Second_Order_Dynamic_Error_q_{ii+1}_{sch_type}_schemes_N_{N}.eps', format='eps')
```
```python
schemes_graph = ['hs_mod_parab', 'hs_parab', 'trapz', 'trapz_mod']
titles = ['2nd order Hermite Simpson', 'Hermite Simpson','Trapezoidal', '2nd order Trapezoidal']
colors = [f'C{ii}' for ii in range(9)]
data_array = ['err_q_acum','err_v_acum','err_2_b_acum','cpudt']
initial = 'lin'
data_key = data_array[2]
for qq in range(2):
plt.figure(figsize=[10,6])
plt.title(f'Second order dynamic error $E^{{[2]}}_{{q_{qq+1}}}$')
for ii in [2,3,1,0]:
scheme = schemes_graph[ii]
key = scheme + '_' + initial
print('Problem:', key)
N_arr = results[key]['N_arr']
if len(results[key][data_key].shape) == 1:
plt.plot(N_arr,results[key][data_key], marker = 'o', c = f'C{ii}',label = titles[ii])
else:
plt.plot(N_arr,results[key][data_key][:,qq], marker = 'o', c = f'C{ii}',label = titles[ii])
plt.yscale('log')
plt.xlabel('Number of intervals')
plt.grid()
plt.legend()
units = 'm/s' if qq == 0 else'rad/s'
plt.ylabel(f'Dynamic error $({units})$')
plt.tight_layout(pad = 0.0)
# If you are running the notebook locally and want to save the plots,
# uncomment the next line
#plt.savefig(f'Cartpole_Integrated_Second_Order_Dynamic_Error_q_{qq+1}_vs_N.eps', format='eps')
```
```python
schemes = ['hs_mod_parab','hs_parab', 'trapz', 'trapz_mod']
titles = ['2nd order Hermite Simpson', 'Hermite Simpson','Trapezoidal', '2nd order Trapezoidal']
plt.figure(figsize=[10,6])
for ii in [2,3,1,0]:
key = schemes[ii] + '_lin'
plt.plot(results[key]['N_arr'], results[key][f'cpudt'], marker = 'o', c = f'C{ii}',label = titles[ii])
plt.grid()
plt.legend()
plt.title('Optimization time')
plt.xlabel('Number of intervals')
plt.ylabel('Time (s)')
plt.tight_layout(pad = 0.0)
# If you are running the notebook locally and want to save the plots,
# uncomment the next line
#plt.savefig(f'Cartpole_optimization_time_vs_interval_number.eps', format='eps')
```
```python
# Here we print the data shown in Table II of the paper
for scheme in ['hs_mod_parab', 'hs_parab', 'trapz', 'trapz_mod']:
key = scheme + '_lin'
for N in [25,50]:#results[key]['N_arr']:
print('scheme:', scheme, 'N:', N,'\n\ttime:', results[key][N][f'cpudt'],
'\n\tErr 1:', results[key][N]['err_q'], '\n\tErr 2:', results[key][N]['err_2_b'])
```
## Animation
```python
from matplotlib import animation, rc
import matplotlib.patches as patches
from matplotlib.transforms import Affine2D
from IPython.display import HTML
import matplotlib
matplotlib.rcParams['animation.embed_limit'] = 200
```
```python
def create_anim(X, U, params):
[g_n, l_n, m0_n, m1_n] = params
N = X.shape[0]
fig, ax = plt.subplots()
y_scale = 1
min_x_cart = np.min(X[:,0])
max_x_cart = np.max(X[:,0])
cart_displ = max_x_cart-min_x_cart
size_x = 2*y_scale + cart_displ
size_y = 2*y_scale
draw_width = 14
draw_height = draw_width / size_x * size_y
x_0 = X[:,0]
y_0 = np.zeros_like(x_0)
x_1 = x_0 + l_n*np.sin(X[:,1])
y_1 = y_0 - l_n*np.cos(X[:,1])
x_cm = (m0_n * x_0 + m1_n * x_1)/(m0_n + m1_n)
y_cm = (m0_n * y_0 + m1_n * y_1)/(m0_n + m1_n)
fig.set_dpi(72)
fig.set_size_inches([draw_width,draw_height])
ax.set_xlim(( min_x_cart-y_scale, max_x_cart+y_scale))
ax.set_ylim(( -y_scale, y_scale))
#circle1 = plt.Circle((0, 0), l_n, color='b', ls = ":", fill=False)
#ax.add_artist(circle1)
ax.plot([min_x_cart - l_n, max_x_cart + l_n], [0,0], 'k', lw=1, ls = ':')
line1, = ax.plot([], [], lw=2)
line3, = ax.plot([], [], 'k', lw=1, ls = ':')
#line_cm, = ax.plot([], [], 'g', lw=1, ls = ':')
point0, = ax.plot([], [], marker='s', markersize=10, color="k")
point1, = ax.plot([], [], marker='o', markersize=7, color="red")
#point_cm, = ax.plot([], [], marker='o', markersize=10, color="green")
u_max = max(np.max(np.abs(U[:])),1e-15)
arrow_w = 0.1*l_n
arrow_l = 0.7*l_n
u_arrow = patches.Arrow(0, 0, 0, -arrow_l, color = 'gray',width = arrow_w)
ax.add_patch(u_arrow)
print_vars = [X[:,0], X[:,1], U[:], np.linspace(0, N-1, N, dtype=int)]
print_var_names = ['q_0', 'q_1', 'u_0', 'step']
texts = []
ii = 0.8
for arr in print_vars:
texts.append(ax.text(-0.8, ii, "", fontsize = 12))
ii -= 0.2*l_n
xx_interpolated, uu_interpolated = interpolated_array(
X,
U,
F = F_nump,
h = 2/(N-1),
t_array = np.linspace(0, 2, 5*(N-1)+1),
params = params,
scheme = 'hs_mod_parab',
u_scheme = 'parab',
scheme_params = {'u_c' : results['hs_mod_parab_lin'][N-1]['u_c']}
)
x_0_interp = xx_interpolated[:,0]
y_0_interp = np.zeros_like(x_0_interp)
x_1_interp = x_0_interp + l_n*np.sin(xx_interpolated[:,1])
y_1_interp = y_0_interp - l_n*np.cos(xx_interpolated[:,1])
def init():
line1.set_data([], [])
line3.set_data([], [])
#line_cm.set_data([], [])
point1.set_data([], [])
#circle1.center = (0, 0)
return (line1,)
def animate(i):
#circle1.center = (x_0[i], y_0[i])
point0.set_data(x_0[i], y_0[i])
line1.set_data([x_0[i], x_1[i]], [y_0[i], y_1[i]])
point1.set_data(x_1[i], y_1[i])
#point_cm.set_data(x_cm[i], y_cm[i])
line3.set_data(x_1_interp[:5*i+1], y_1_interp[:5*i+1])
#line_cm.set_data(x_cm[:i], y_cm[:i])
trans = Affine2D()
u_arrow._patch_transform = trans.scale(U[i] * arrow_l / u_max, arrow_w).translate(x_0[i],0)
for ii in range(len(texts)):
text = texts[ii]
name = print_var_names[ii]
arr = print_vars[ii]
if name == 'step':
text.set_text("$step$ = " + str(arr[i]))
else:
text.set_text("$" + name + "$ = %.3f" % arr[i])
return (line1,u_arrow)
frame_indices = np.concatenate((np.zeros(10, dtype=int), np.arange(0, N, 1), np.ones(15, dtype=int)*(N-1)))
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=frame_indices, interval=20,
blit=True)
return anim
```
```python
anim = create_anim(results['hs_parab_lin'][25]['x'], results['hs_parab_lin'][25]['u'], params)
```
```python
HTML(anim.to_jshtml())
```
```python
f = r"cartpole_animation.mp4"
writervideo = animation.FFMpegWriter(fps=12)
# If you are running the notebook locally and want to save the animation,
# uncomment the next line
#anim.save(f, writer=writervideo)
```
```python
```
```python
```
```python
```
*[End of Cartpole-demo.ipynb, from AunSiro/Second-Order-Schemes (MIT license)]*
# Lecture 18 - Intro to data science (https://bit.ly/intro_python_18)
Today we're going to look at doing simple machine learning with Python, as an intro to very basic data science.
The idea is not to give you a full knowledge of any single package or technique, rather to give you a sense for what is possible.
To keep things simple, we're going to start by looking at **one variable linear regression**. This is the simplest form of machine learning we can think of.
For data, we're going to look at an archival database of breast tumor data to try to find variables that predict malignancy.
#### ORIGINAL DATASET http://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/
**As a bonus: At the end of this lecture are some notes on three core Python data science libraries that are widely used: Numpy, Matplotlib and Pandas. These will not be covered in class due to time, but please play with them!**
# Wisconsin Breast Cancer Database
1. Number of Instances: 699 (as of 15 July 1992 (yeah, it's old))
2. Number of Attributes: 10 plus the class attribute
3. Attribute Information: (class attribute has been moved to last column)
Attribute Domain
1. Sample code number id number
2. Clump Thickness 1 - 10
3. Uniformity of Cell Size 1 - 10
4. Uniformity of Cell Shape 1 - 10
5. Marginal Adhesion 1 - 10
6. Single Epithelial Cell Size 1 - 10
7. Bare Nuclei 1 - 10
8. Bland Chromatin 1 - 10
9. Normal Nucleoli 1 - 10
10. Mitoses 1 - 10
11. Class: (2 for benign, 4 for malignant)
4. Missing attribute values: 16
There are 16 instances in Groups 1 to 6 that contain a single missing
(i.e., unavailable) attribute value, now denoted by "?".
5. Class distribution:
Benign: 458 (65.5%)
Malignant: 241 (34.5%)
# Load the data
First copy the data from the internet to a local file.
The data is in a comma separated value (csv) file:
> **Year,Make,Model,Description,Price<br />
> 1997,Ford,E350,"ac, abs, moon",3000.00<br />
> 1999,Chevy,Venture Extended Edition,"",4900.00<br />
> 1999,Chevy,"Venture Extended Edition, Very Large",5000.00<br />**
* In a CSV, Each line gives a comma separated sequence of N text strings
* CSV files (and tab seperated value (TSV) files) are formats for 2d tables
* they can also generally be loaded by spreadsheets
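For instance, such a file could be parsed directly with the standard-library `csv` module (a small illustration; `cars.csv` is a hypothetical file containing the car table above, and the lecture itself uses Pandas below):
```python
import csv

with open('cars.csv') as f:
    for row in csv.reader(f):   # each row comes back as a list of strings
        print(row)
```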
```python
# Copy the file from the internet
import urllib.request
url = "https://raw.githubusercontent.com/benedictpaten/intro_python/main/lecture_notebooks/data/breast-cancer-wisconsin.data.csv"
cancer_data_file = 'cancer_data.csv'
urllib.request.urlretrieve(url, cancer_data_file) # This function copies the thing the url points at into
# a local file copy
```
('cancer_data.csv', <http.client.HTTPMessage at 0x166d20c40>)
**Import the Pandas and Numpy modules**
* Numpy is an "array" and "matrix" library, used to represent large collections of data efficiently in Python
* Pandas builds on Numpy to provide a "spreadsheet like" package for manipulating tables of data and performing statistical analyses.
* For more details see the appendices at the end of the notebook and the linked tutorials.
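As a tiny illustration of the difference between the two (the array and column names here are made up):
```python
import numpy as np
import pandas as pd

arr = np.array([[1, 2], [3, 4]])               # a plain 2x2 Numpy array
table = pd.DataFrame(arr, columns=["a", "b"])  # the same data as a labeled Pandas table
print(arr.mean())        # Numpy: numerical operations on the whole array
print(table["b"].sum())  # Pandas: spreadsheet-style access by column name
```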
```python
# These are two key libraries we need for representing the data in Python
import numpy as np
import pandas as pd
```
Next, load the data into Python using Pandas.
```python
# Load the data as a Pandas dataframe
# Read input
df = pd.read_csv(cancer_data_file)
df.head(5) # Head just shows us the first 5 rows
```
|   | id | clump-thickness | uniformity-of-cell-size | uniformity-of-cell-shape | marginal-adhesion | single-epithelial-cell-size | bare-nuclei | bland-chromatin | normal-nucleoli | mitoses | class |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 1000025 | 5 | 1 | 1 | 1 | 2 | 1 | 3 | 1 | 1 | 2 |
| 1 | 1002945 | 5 | 4 | 4 | 5 | 7 | 10 | 3 | 2 | 1 | 2 |
| 2 | 1015425 | 3 | 1 | 1 | 1 | 2 | 2 | 3 | 1 | 1 | 2 |
| 3 | 1016277 | 6 | 8 | 8 | 1 | 3 | 4 | 3 | 7 | 1 | 2 |
| 4 | 1017023 | 4 | 1 | 1 | 3 | 2 | 1 | 3 | 1 | 1 | 2 |
This process reads the CSV table data into a Pandas "data frame", abbreviated "df".
With df.head(5) we print the first five rows to illustrate the nature of the data.
# Preprocess The Data
In data science and machine learning, much of the challenge is frequently in preprocessing the data into a format that is amenable to the algorithms used.
Here we don't need to do too much: we just re-encode the values so that each column is zero-based and dense.
We use scikit-learn, a popular Python machine learning package to do the preprocessing. See: https://scikit-learn.org/stable/modules/preprocessing.html
```python
# Preprocess the data so we can use it for regression. Especially the class values.
# Access function/method docstrings in jupyter via '?'. Ex: preprocessing.LabelEncoder?
from sklearn import preprocessing
encoder = preprocessing.LabelEncoder()
for col in df.columns: # For each column in the data frame
df[col] = encoder.fit_transform(df[col]) # Transform the series so is zero based
# and dense
df.head(5)
```
|   | id  | clump-thickness | uniformity-of-cell-size | uniformity-of-cell-shape | marginal-adhesion | single-epithelial-cell-size | bare-nuclei | bland-chromatin | normal-nucleoli | mitoses | class |
|---|-----|---|---|---|---|---|---|---|---|---|---|
| 0 | 172 | 4 | 0 | 0 | 0 | 1 | 0 | 2 | 0 | 0 | 0 |
| 1 | 175 | 4 | 3 | 3 | 4 | 6 | 1 | 2 | 1 | 0 | 0 |
| 2 | 176 | 2 | 0 | 0 | 0 | 1 | 2 | 2 | 0 | 0 | 0 |
| 3 | 177 | 5 | 7 | 7 | 0 | 2 | 4 | 2 | 6 | 0 | 0 |
| 4 | 179 | 3 | 0 | 0 | 2 | 1 | 0 | 2 | 0 | 0 | 0 |
# Linear Regression w/One Variable
To start, we suppose we have a set of pairs:
\begin{equation}
(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)
\end{equation}
The task is to try to predict $y_i$ given $x_i$. Call our prediction $y'_i$; in linear regression we use the following simple linear equation for our prediction:
\begin{equation}
y'_i = w*x_i+b
\end{equation}
In this picture the pairs:
\begin{equation}
(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)
\end{equation}
are the blue dots and our prediction:
\begin{equation}
y'_i = w*x_i+b
\end{equation}
is the red line
In Python we encode the prediction as:
```python
def predict(x, w, b):
return x*w + b
```
The task of machine learning here is to learn the parameters of the model, that is $w$ and $b$. To do this we need some way of judging how good a particular choice of parameters is - a "cost function".
# Cost function / loss function / risk function
The mean squared error (MSE) is a measure of the quality of an estimator—it is always non-negative, and values closer to zero are better.
To judge our parameters we use MSE:
\begin{equation}
MSE = \frac{1}{n} \sum_{i=1}^n (y_i - (wx_i+b))^2
\end{equation}
We can visualize this error calculation as shown.
The error is proportional to the sum of the squared lengths of the green lines:
In Python:
```python
def cost_function(x, y, w, b):
n = len(x)
total_error = 0.0
for i in range(n):
total_error += (y[i] - (w*x[i] + b))**2
return total_error / n
```
# Gradient descent
Having defined a model and a cost function, the next thing to do is define a learning method which can find good parameters for the model.
For the case of simple linear regression we could calculate the optimum parameters analytically, but here we choose to use gradient descent: an iterative method that can be used to optimize a large class of problems, many of which cannot be solved exactly.
In Gradient Descent the idea is to use the derivative (rate of change) of the cost function to iteratively search for the point where the parameters minimize the MSE:
(In the picture J(w) is the cost function and we traverse the cost function for the w parameter by iteratively moving toward the global minimum)
There are two parameters (coefficients) in our cost function we can control: weight $w$ and bias $b$. Since we need to consider the impact each one has on the final prediction, we use partial derivatives.
Recall the MSE cost function (here called $f$):
\begin{equation}
f(w,b)= \frac{1}{n} \sum_{i=1}^{n} (y_i - (wx_i+b))^2
\end{equation}
First lets find the partial derivative for $w$:
\begin{equation}
\frac{\partial f}{\partial w} = \frac{1}{n} \sum_{i=1}^n -2x_i(y_i - (wx_i+b))
\end{equation}
And the partial derivative for $b$:
\begin{equation}
\frac{\partial f}{\partial b} = \frac{1}{n} \sum_{i=1}^n -2(y_i - (wx_i+b))
\end{equation}
Suppose we have estimates of $w$ and $b$; the gradient descent method computes new estimates, $w'$ and $b'$, as follows:
\begin{equation}
w' = w - l * \frac{\partial f}{\partial w}
\end{equation}
And similarly for $b$:
\begin{equation}
b' = b - l * \frac{\partial f}{\partial b}
\end{equation}
Where $l$ is the "learning rate" hyperparameter (a parameter of the learning algorithm) that dictates the speed at which the model learns - moves along the slope.
Given these update equations, we can express them in Python:
```python
def update_weights(x, y, w, b, learning_rate):
weight_deriv = 0
bias_deriv = 0
n = len(x)
for i in range(n):
# Calculate partial derivatives
# -2x(y - (mx + b))
weight_deriv += -2*x[i] * (y[i] - (w*x[i] + b))
# -2(y - (mx + b))
bias_deriv += -2*(y[i] - (w*x[i] + b))
# We subtract because the derivatives point in direction of steepest ascent
w -= (weight_deriv / n) * learning_rate
b -= (bias_deriv / n) * learning_rate
return w, b
```
Note: making the learning rate too large will stop the model converging, because the jumps between parameter estimates will overshoot and hop around the optimal values.
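To see this for yourself, here is a small illustration (not part of the original lecture) on a toy dataset, using the functions defined above. The toy values and learning rates are arbitrary choices for the demonstration; with a large rate the cost typically grows instead of shrinking.

```python
# Toy data following y = 2x + 1 exactly
toy_x = [0.0, 1.0, 2.0, 3.0]
toy_y = [1.0, 3.0, 5.0, 7.0]
w_toy, b_toy = 0.0, 0.0
for step in range(5):
    w_toy, b_toy = update_weights(toy_x, toy_y, w_toy, b_toy, learning_rate=1.5)
    print("step", step, "cost", cost_function(toy_x, toy_y, w_toy, b_toy))
# Re-running with learning_rate=0.05 instead shows the cost steadily decreasing.
```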
# Train loop
We now have a simple model, cost function and associated learning algorithm. We can put these together to train the model iteratively:
```python
def train_model(x, y, w, b, learning_rate, epochs):
cost_history = []
for i in range(epochs):
w,b = update_weights(x, y, w, b, learning_rate)
#Calculate cost for auditing purposes
cost = cost_function(x, y, w, b)
cost_history.append(cost)
# Log Progress
if (i+1) % 20 == 0:
print("Epochs: ", str(i+1), " cost: ", str(cost))
return w, b, cost_history
```
An epoch is a cycle of learning from the data; the learning rate and number of epochs are the hyperparameters of this algorithm.
# Run the training
Let's test this all out:
```python
x = df['uniformity-of-cell-size']
y = df['class'] # Benign / malignant
w, b, cost_history = train_model(x, y, 0, 0, 0.02, 500) # Start with w and b as 0, 0
print(w, b)
```
Epochs: 20 cost: 0.07538027961136255
Epochs: 40 cost: 0.07498926619585158
Epochs: 60 cost: 0.07485402644136209
Epochs: 80 cost: 0.07480725108668539
Epochs: 100 cost: 0.07479107290140467
Epochs: 120 cost: 0.07478547735475996
Epochs: 140 cost: 0.07478354202383664
Epochs: 160 cost: 0.0747828726512064
Epochs: 180 cost: 0.07478264113538957
Epochs: 200 cost: 0.07478256106104562
Epochs: 220 cost: 0.07478253336574431
Epochs: 240 cost: 0.07478252378677425
Epochs: 260 cost: 0.07478252047369835
Epochs: 280 cost: 0.07478251932780343
Epochs: 300 cost: 0.07478251893147433
Epochs: 320 cost: 0.07478251879439674
Epochs: 340 cost: 0.0747825187469841
Epochs: 360 cost: 0.07478251873058646
Epochs: 380 cost: 0.07478251872491373
Epochs: 400 cost: 0.07478251872295176
Epochs: 420 cost: 0.0747825187222749
Epochs: 440 cost: 0.07478251872203892
Epochs: 460 cost: 0.07478251872195751
Epochs: 480 cost: 0.07478251872193077
Epochs: 500 cost: 0.07478251872191918
0.12748809923704618 0.07265767658062813
We can see that the model converges towards better parameters!
How do we judge how useful the model is in practice?
# Predictive separation index (PSI) as score
We can use the Predictive Separation Index (PSI) as a measure of the strength of a predictor. The equation is:
\begin{equation}
PSI ( x ) = [ \textrm{mean } y'_i \textrm{ when } y_i = 1 ] - [ \textrm{mean } y'_i \textrm{ when } y_i = 0 ] \, .
\end{equation}
We want PSI(x) to be close to 1: the first term should be close to 1 and the second term should be close to 0.
In Python:
```python
def get_score(x, y, w, b):
preds_0 = []
preds_1 = []
for i in range(len(x)):
p = predict(x[i], w, b)
if y[i] == 0:
preds_0.append(p)
else:
preds_1.append(p)
if len(preds_0) != 0:
score = (sum(preds_1) / len(preds_1) - sum(preds_0) / len(preds_0))
else:
score = (sum(preds_1) / len(preds_1) - 0)
return preds_0, preds_1, score
```
```python
preds_0, preds_1, score = get_score(x, y, w, b)
print("PSI: ", score)
```
PSI: 0.6689665943993244
# The prediction distribution plot
A better way to look at our prediction is to look at the distribution of $y'_i$ we get for each of the classes. For $y_i = 0$ the $y'_i$ should be close to 0. Conversely, $y'_i$ should be close to 1 when $y_i = 1$. This is a visual way of seeing the strength of a predictor.
```python
%matplotlib inline
# The above is required to display matplotlib in jupyter
import matplotlib.pyplot as plt
# The y=0 class
n, bins, patches = plt.hist(preds_0, bins=100, density=1, cumulative=0)
plt.title('Predictive distribution for class y=0')
plt.show()
```
```python
# The y=1 class
n, bins, patches = plt.hist(preds_1, bins=100, density=1, cumulative=0)
plt.title('Predictive distribution for class y=1')
plt.show()
```
## Use a library to do the same thing! - SciKit Learn
We can use SciKit Learn (sklearn) to do the same thing with a couple of lines of code:
```python
x = df[['uniformity-of-cell-size']]
y = df['class']
from sklearn import datasets, linear_model
lm_model = linear_model.LinearRegression(fit_intercept=True, copy_X=True, n_jobs=1)
lm_model.fit(x, y)
print("sklearn, w, b, score: ", round(lm_model.coef_[0], 6), round(lm_model.intercept_, 6), round(lm_model.score(x,y), 6))
print("manualr, w, b, score: ", round(w, 6) , round(b, 6), round(score, 6))
# So, we have learned the same co-efficents (or, almost the same) through a library.
```
sklearn, w, b, score: 0.127488 0.072658 0.668967
manual,  w, b, score:  0.127488 0.072658 0.668967
# Accuracy and Classification
We have created a "regression model", which predicts a continuous value. However, what we actually want for this task is a classifier - a model which predicts either "true" (malignant) or "false" (benign). We can make a classifier from our regression model in the simplest way possible: by picking a threshold.
From the above plots it looks like 0.2 is a good cutoff, so let's pick that and look at the accuracy of the model, i.e. how accurate the model is when we say $y'_i > 0.2$ is "true" and otherwise "false":
```python
y_pred = lm_model.predict(x)
y_pred = [1 if p > 0.2 else 0 for p in y_pred]
from sklearn.metrics import accuracy_score
accuracy_score(y, y_pred)
```
0.882689556509299
Not bad! However, there is a lot more to this - we haven't considered:
* Integrating multiple variables
* Switching models - e.g. logistic rather than linear regression is generally used for classification tasks (a sketch follows below)
* Splitting out test data from our training data (also sketched below)
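As a hedged sketch (not part of the original lecture) of what those last two points might look like with scikit-learn, using the same `df` as above; the 0.25 test fraction and the random_state are arbitrary choices for illustration:

```python
# Hold out a test set, fit a logistic-regression classifier,
# and score it on data the model has never seen.
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X = df[['uniformity-of-cell-size']]
y = df['class']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = LogisticRegression()
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```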
Take a look at the SciKit Learn tutorial if you'd like to dig in more.
# Summary / Lectures Wrap
Okay, we've covered a lot of concepts and shown how we can code from first principles a machine learning algorithm to predict malignancy!
Obviously, we've barely scratched the surface, but you should know that you're now much closer to doing real data science yourself than you were 10 weeks ago.
We've now completed all the lectures - well done and thanks for listening, I appreciate it :)
Next lecture we'll reserve for a revision session.
# Numpy, Pandas, Sklearn and Matplotlib Reading (this won't be on the exam!)
* Browse through the NumPy tutorial here: https://docs.scipy.org/doc/numpy/user/quickstart.html
* Browse through the Pyplot tutorial here:
https://matplotlib.org/tutorials/introductory/pyplot.html#sphx-glr-tutorials-introductory-pyplot-py
* Browse the Pandas 10 min tutorial: https://pandas.pydata.org/pandas-docs/stable/getting_started/10min.html#min
* Browse through the SciKit Learn tutorial here: https://scikit-learn.org/stable/tutorial/basic/tutorial.html#learning-and-predicting
# Homework
* Zybooks Reading 18
# Appendix - Exploring Numpy, Matplotlib and Pandas
# NumPy Arrays
* NumPy is a Python library for efficiently representing multi-dimensional arrays (e.g. matrices, tensors).
* Pretty much all ML and visualization in Python relies on these array types, or things derived from them.
So far we've seen multi-dimensional arrays using Python lists, e.g.:
```python
# Consider a 2D matrix represented using Python lists
a = [[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14]]
a[1][0]
```
5
* The problem is that Python lists are general purpose and flexible and as a result both memory inefficient and really slow.
* This is not a big deal with small data, but when dealing with large amounts of data this becomes of key concern.
* NumPy provides fast, memory-efficient, array-backed multi-dimensional arrays.
We can directly convert to a NumPy 2D array as follows:
```python
import numpy as np # Import the numpy library using the
# common abbreviation np
a = np.array(a) # Replace the list-of-lists with a numpy 2D array
a
```
array([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14]])
```python
type(a)
```
numpy.ndarray
```python
# We can address this matrix two ways:
print("The first way", a[1][0]) # This looks familiar
print("The second way", a[1,0]) # This does not look familiar but is shorter and more efficient
```
The first way 5
The second way 5
So what is going on behind the scenes?
* A NumPy array is represented as a contiguous block of memory in which each value is stored in place (not as a reference to a separate object), with the elements arranged into a linear ordering, e.g.:
* As a consequence, arrays have a "data type" corresponding to the way memory is represented for each value stored:
```python
# We can find out the data type of a NumPy array using the dtype
# variable:
a.dtype # NumPy stores each number as an integer, each using 64 bits
```
dtype('int64')
If we create an array using floats then the data type will be different:
```python
a = np.array([ 1.0, 2, 4 ])
print(a) # All numbers in the array will be represented as a float
print(a.dtype) # The type is float, again each with 64 bits.
```
[1. 2. 4.]
float64
The dimensions of a numpy array are stored using the shape and ndim attributes:
```python
print(a.shape) # The shape of the array (here it has one dimension)
```
(3,)
```python
print(a.ndim) # The number of dimensions in the array
```
1
```python
# If we make the array multi-dimensional
a = np.array([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14]])
print("The dimensions of the array:", a.shape)
print("The number of dimensions", a.ndim)
```
The dimensions of the array: (3, 5)
The number of dimensions 2
The size of a numpy array is the size attribute:
```python
print("To get the size of the first dimension use len()", len(a))
print("The total size of the array is the .size attribute", a.size)
```
To get the size of the first dimension use len() 3
The total size of the array is the .size attribute 15
Building numpy arrays from lists defeats the purpose of trying to save memory.
To make large numeric arrays use zeros(), ones(), empty() or arange()
```python
a = np.zeros((3, 5, 6))
print("Shape:", a.shape)
print("Type", a.dtype) # The default type is 64bit float
print("The array\n", a)
```
Shape: (3, 5, 6)
Type float64
The array
[[[0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0.]]
[[0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0.]]
[[0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0.]]]
```python
a = np.ones((2, 5)) # To get an array containing ones
a
```
array([[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1.]])
```python
a = np.empty((3, 2, 5)) # To get an array with uninitialized memory (faster)
a # Note the values have no defined value in this case
```
array([[[0.00000000e+000, 9.88131292e-324, 0.00000000e+000,
0.00000000e+000, 4.83424335e-277],
[1.16095484e-028, 1.06398206e+248, 3.03772881e-067,
1.07548461e+272, 5.03032220e+180]],
[[9.30860811e+199, 4.63461826e+228, 2.26249033e+137,
9.00495205e+130, 8.01760059e-096],
[8.95393586e-096, 1.81148490e-152, 9.89069227e-096,
1.08266064e-095, 1.19683021e+141]],
[[6.01347002e-154, 8.04868985e-096, 8.98502512e-096,
1.76539239e+137, 7.79145813e+140],
[6.48224637e+170, 3.67145870e+228, 1.13908484e-071,
1.95714674e-305, 0.00000000e+000]]])
```python
a = np.arange(15)
a
```
array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14])
```python
a = np.arange(0.0, 3.0, 0.2) # You can do ranges in floating point!
a
```
array([0. , 0.2, 0.4, 0.6, 0.8, 1. , 1.2, 1.4, 1.6, 1.8, 2. , 2.2, 2.4,
2.6, 2.8])
A neat feature of a numpy array is that you can reshape its dimensions with reshape():
```python
a = a.reshape(3, 5)
print(a)
```
[[0. 0.2 0.4 0.6 0.8]
[1. 1.2 1.4 1.6 1.8]
[2. 2.2 2.4 2.6 2.8]]
# NumPy Slices
NumPy slicing on multi-dimensional arrays lets you quickly get slices of your data:
```python
a # All of a
```
array([[0. , 0.2, 0.4, 0.6, 0.8],
[1. , 1.2, 1.4, 1.6, 1.8],
[2. , 2.2, 2.4, 2.6, 2.8]])
```python
a[:,2] # Just the third element along the 2nd dimension (i.e. the third column) of the array.
```
array([0.4, 1.4, 2.4])
```python
a[1,2:] # Just the 3rd and subsequent elements of the 2nd element
#of the 1st dimension of the array.
```
array([1.4, 1.6, 1.8])
# NumPy Iterators
```python
for i in a: # By default NumPy iterates over rows
print(i)
```
[0. 0.2 0.4 0.6 0.8]
[1. 1.2 1.4 1.6 1.8]
[2. 2.2 2.4 2.6 2.8]
```python
for i in a.flat: # But you can iterate over all the elements using flat
print(i)
```
0.0
0.2
0.4
0.6000000000000001
0.8
1.0
1.2000000000000002
1.4000000000000001
1.6
1.8
2.0
2.2
2.4000000000000004
2.6
2.8000000000000003
# NumPy Builtin Functions
NumPy lets you do math with vectors.
```python
a = np.arange(10)
print(a)
```
[0 1 2 3 4 5 6 7 8 9]
```python
np.exp(a) # Calculate e**x for each entry x in the array
```
array([1.00000000e+00, 2.71828183e+00, 7.38905610e+00, 2.00855369e+01,
5.45981500e+01, 1.48413159e+02, 4.03428793e+02, 1.09663316e+03,
2.98095799e+03, 8.10308393e+03])
```python
np.sqrt(a) # Calculate sqrt(x) for each entry x in the array
```
array([0. , 1. , 1.41421356, 1.73205081, 2. ,
2.23606798, 2.44948974, 2.64575131, 2.82842712, 3. ])
NumPy has lots of builtin math functions!
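A few more examples, added here purely as an illustration, of builtin reductions and element-wise functions:

```python
a = np.arange(10)
print(a.sum(), a.mean(), a.max())  # 45 4.5 9
print(np.sin(a))                   # Element-wise sine of each entry
```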
# NumPy Operator Overloading
```python
a = np.arange(10)
a + 10 # Add ten to each value
```
array([10, 11, 12, 13, 14, 15, 16, 17, 18, 19])
```python
a**2 # Make square of each number
```
array([ 0, 1, 4, 9, 16, 25, 36, 49, 64, 81])
etc... each of these operators makes a new array.
To edit the array in place use the augmented assignment (in-place) operators:
```python
a + 10
print(a) # a is not changed
```
[0 1 2 3 4 5 6 7 8 9]
```python
a += 10 # a is now changed in place.
print(a)
```
[10 11 12 13 14 15 16 17 18 19]
```python
a **= 2 #ditto
print(a)
```
[100 121 144 169 196 225 256 289 324 361]
# Linear Algebra Functions
```python
a = np.array([ [1, 2, 3], [4, 5, 6], [ 7, 8, 9]])
print(a)
```
[[1 2 3]
[4 5 6]
[7 8 9]]
```python
a.transpose() # Compute the transpose (rows become columns and vice versa)
```
array([[1, 4, 7],
[2, 5, 8],
[3, 6, 9]])
```python
a @ a # This is matrix multiplication
```
array([[ 30, 36, 42],
[ 66, 81, 96],
[102, 126, 150]])
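A couple more `np.linalg` helpers, added here as a small illustration (the matrix and vector values are arbitrary):

```python
m = np.array([[3.0, 1.0],
              [1.0, 2.0]])
v = np.array([9.0, 8.0])
print(np.linalg.solve(m, v))  # Solve the linear system m @ x = v; gives [2. 3.]
print(np.linalg.det(m))       # Determinant: 3*2 - 1*1 = 5.0
```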
# Plotting with MatPlotLib
MatPlotLib is a Python plotting library that works with NumPy to make super nice plots.
It is extremely customizable and can create very fancy plots.
Its pyplot module provides a MATLAB-like interface to the library for making plots.
We can use NumPy and MatPlotLib to create complex data visualization. Here we'll just look at histograms and graphing functions (like on a graphical calculator):
# Histograms
```python
# Build a vector of 100,000 samples from a normal distribution with standard deviation 0.5 (variance 0.5**2) and mean 2
mean, variance_sqrt = 2, 0.5
v = np.random.normal(mean, variance_sqrt, 100000) # Random module allows you to create
# arrays of random numbers drawn from a given distribution.
print(v[:100]) # First 100 members of array
```
[3.05860056 2.47094643 2.52816327 1.06051309 1.51347561 2.07549409
2.67532159 2.11362191 2.27810538 1.60530092 2.79430517 2.55384148
2.71733437 1.66242764 1.93892845 1.39116753 2.56617408 1.76097175
1.76275852 2.09832228 2.6808698 1.37928547 2.07241814 1.25749651
1.59688044 2.07152009 1.44225236 2.68596062 2.47293694 1.96695516
2.10813339 1.63712879 2.1365555 1.26180979 1.83714402 1.54215922
1.89239566 2.3143319 2.42369037 2.19719057 1.79300188 1.64746831
1.76278447 2.75239863 2.21301337 2.3131599 2.48666909 2.00507997
1.61474314 1.36147258 2.26238744 1.84012176 0.86990009 1.69078918
2.15750101 2.27658333 2.17649476 1.30026459 1.62300265 1.38821459
2.150771 1.46014625 2.55485899 3.05612462 1.50334031 1.21879721
2.34316225 1.4618033 1.32275568 0.54912742 1.35766123 2.29520379
2.06355796 2.21548564 1.38581955 2.3951399 1.88086719 1.68092591
2.38628586 1.52334477 1.11916155 2.89121608 2.80126514 2.46808187
2.1362822 2.3105049 1.21183951 2.2169369 1.79903039 2.44571742
2.45988852 2.11475738 2.31968988 1.530576 2.39368022 2.05532901
2.50659493 2.02117424 1.56076555 1.87058332]
```python
import matplotlib.pyplot as plt # Import pyplot
# Plot a histogram with 50 bins
plt.hist(v, bins=50) # pyplot call to create a histogram with 50 bins
plt.show() # This call is what causes the plot to create the graphic
```
```python
# Plot a normalized histogram with 50 bins
plt.hist(v, bins=50, density=1) # Density argument normalizes histogram
plt.show()
```
```python
# Plot a normalized histogram with 50 bins and label axes, title, etc.
plt.hist(v, bins=50, density=1) # Density argument normalizes histogram
plt.ylabel('density')
plt.xlabel('x')
plt.title('A Histogram')
plt.show()
```
# Graphing Functions
This allows us to introduce the basic plot command:
```python
# If you just give it one array (list or numpy array), it assumes that these are the y values
plt.plot([1, 2, 3, 4], "ro") # "ro" makes pyplot create red points
plt.show()
```
```python
plt.plot([5, 10, 15, 20], [2, 4, 8, 16], 'ro') # If you give it two arrays, it treats
# each pair of elements of the two lists, e.g. (5, 2), (10, 4), etc. as points on the 2D plane
plt.show()
```
We can use this to graph functions really easily:
```python
# evenly sampled points from 0 to 5 and 0.2 intervals
t = np.arange(0., 5., 0.2)
# blue squares
plt.plot(t, t**2, 'bs')
plt.show()
```
If we give it multiple pairs of arrays we can show multiple lines on the same plot:
```python
# evenly sampled time at 200ms intervals
t = np.arange(0., 5., 0.2)
# red dashes, blue squares and green triangles
plt.plot(t, t, 'r--', t, t**2, 'bs', t, t**3, 'g^')
plt.show()
```
# Pandas Series
The basic array object in Pandas is the series object; it is very NumPy-array-like, and most of the stuff we showed for NumPy arrays works with series:
```python
import numpy as np
import pandas as pd # Standard pandas abbreviation to pd
s = pd.Series(np.arange(5))
s
```
0 0
1 1
2 2
3 3
4 4
dtype: int64
Under the hood it is organized like NumPy, and it is mostly compatible with, and behaves like, a NumPy array.
However, it is not a multi-dimensional object:
```python
a = np.array([ [ 3, 4, 5], [ 6, 7, 8 ]])
print(a)
s = pd.Series(a) # This doesn't work, because Pandas series are one dimensional
print(s)
```
# Data Frames
* Pandas' Data Frames are 2D "tables".
* Each column is a series
* Think of them like a spreadsheet, with different columns having different types, etc.:
```python
a = np.random.randn(6, 4) # Create a matrix of random numbers with 6 rows and 4 columns
print("The numpy array", a)
df = pd.DataFrame(a)
print("The corresponding data frame\n", df)
```
```python
df.columns=list('ABCD') # We can name the columns
df # If you don't use print, it makes this nice table format in the notebook:
```
```python
# We can also name the rows (here we use dates as an example)
dates = pd.date_range('20190501', periods=6) # Make a list of dates as a Pandas series
print("The list of dates\n", dates) #
df.index=dates # Now give the rows of the matrix these dates
print("\nThe data frame with dates naming the rows:")
df
```
```python
# We could have done this all in one line:
df = pd.DataFrame(a, index=dates, columns=list("ABCD"))
df
```
# Data Frames w/Heterogenous Data
Data frames can also contain heterogeneous types, for example by construction with a dictionary:
```python
# Here the series are put in a dictionary and the column indices are the keys
df2 = pd.DataFrame({'A': 1.,
'B': pd.Timestamp('20130102'),
'C': pd.Series(1, index=list(range(4)), dtype='float32'),
'D': np.array([3] * 4, dtype='int32'),
'E': pd.Categorical(["test", "train", "test", "train"]),
'F': 'foo'})
df2
```
```python
df2.dtypes # The types of the different columns
```
# Data Frame Selection, Slicing and Filter
Data frames are easy to subselect by row or column, allowing you to get subsets of the data easily.
To select by column name use the square bracket notation:
```python
df
```
```python
df['A'] # Select the 'A' column
```
To select by row, use square brackets and slice notation:
```python
df[0:2] # First two rows
```
To select rows and columns using indexes use the iloc index:
```python
df.iloc[:, 1:3] # Selects all the rows and the 2nd and 3rd columns
```
To select rows by row and column names use the loc index:
```python
df.loc['20190502':'20190505', ['A', 'D']]
```
To filter the rows by a column attribute:
```python
df[df.A > 0]
```
Finally, to convert back to a NumPy array use .to_numpy(), which then allows you to slice using NumPy array rules:
```python
df.to_numpy() # Convert to a numpy array
```
```python
df2.to_numpy()
```
# Data Frame Sorting
It's easy to sort data frames:
```python
# Sort the column ordering
df.sort_index(axis=1, ascending=False)
```
```python
# Sort the rows by value, keying on "B"
df.sort_values(by='B')
```
# Data Frame Stats
```python
df.mean() # get the mean of each column
```
```python
df.median() # get the median of each column
```
```python
df.describe() # Calculate summary stats
```
```python
# You can even devise your own functions and apply them to each column
def total_range(x):
return x.max() - x.min()
df.apply(total_range) # This is an example of passing around functions as pointers
```
# Grouping
If you have multiple rows with the same value, it's easy to group by them:
```python
df2 # df2 has a test/train split, described by column "E"
```
```python
df2.groupby("E").sum() # This groups by the values of column E and then
# adds up the values which can be summed (dates and strings can't be summed)
```
# Data Frames and CSVs
It is easy to convert comma separated files into a data frame, and to write to other popular formats (JSON, HDF5, etc.): (see https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#csv-text-files)
```python
df2
```
```python
df2.to_csv('out.csv') # Phew, that was easy - write the df to the file out.csv in CSV format
```
```python
# as a reminder, df2 looks like this:
df2
```
```python
# Let's just inspect the file real quick:
with open("out.csv") as f:
for i, l in enumerate(f):
print("Line:", i, "is: ", l, end="")
```
```python
df3 = pd.read_csv('out.csv') # As simple as
df3 # But note that the row index is all messed up..
```
```python
# Fix the row index column by telling it to use column 0 as the row index:
df3 = pd.read_csv('out.csv', index_col=0) # Read the csv, telling it to use the first
                                          # column as the row index
df3 # Looks good
```
```python
# The one thing to bear in mind, the types of the elements in the table are not necessarily
# preserved:
print("Before writing\n", df2.dtypes)
print("After writing\n", df3.dtypes)
```
# Data Frames Plotting
```python
df
```
```python
df.plot() # By default makes a line graph of each column
```
```python
df2.plot() # Doesn't work - too many different series types
```
```python
df2
```
```python
df2['A'].plot()
```
# Optimizer tweaks
```python
%load_ext autoreload
%autoreload 2
%matplotlib inline
```
```python
#export
from exp.nb_08 import *
```
## Imagenette data
We grab the data from the previous notebook.
```python
path = datasets.untar_data(datasets.URLs.IMAGENETTE_160)
```
```python
tfms = [make_rgb, ResizeFixed(128), to_byte_tensor, to_float_tensor]
bs=128
il = ImageList.from_files(path, tfms=tfms)
sd = SplitData.split_by_func(il, partial(grandparent_splitter, valid_name='val'))
ll = label_by_func(sd, parent_labeler, proc_y=CategoryProcessor())
data = ll.to_databunch(bs, c_in=3, c_out=10, num_workers=4)
```
Then a model:
```python
nfs = [32,64,128,256]
```
```python
cbfs = [partial(AvgStatsCallback,accuracy), CudaCallback,
partial(BatchTransformXCallback, norm_imagenette)]
```
This is the baseline of training with vanilla SGD.
```python
learn,run = get_learn_run(nfs, data, 0.4, conv_layer, cbs=cbfs)
```
```python
run.fit(1, learn)
```
train: [1.7737157604796805, tensor(0.3847, device='cuda:0')]
valid: [1.4919990234375, tensor(0.4820, device='cuda:0')]
## Refining the optimizer
In PyTorch, the base optimizer in `torch.optim` is just a dictionary that stores the hyper-parameters and references to the parameters of the model we want to train in parameter groups (different groups can have different learning rates/momentum/weight decay... which is what lets us do discriminative learning rates).
It contains a method `step` that will update our parameters with the gradients and a method `zero_grad` to detach and zero the gradients of all our parameters.
We build the equivalent from scratch, only ours will be more flexible. In our implementation, the step function loops over all the parameters to execute the step using stepper functions that we have to provide when initializing the optimizer.
```python
class Optimizer():
def __init__(self, params, steppers, **defaults):
# might be a generator
self.param_groups = list(params)
# ensure params is a list of lists
if not isinstance(self.param_groups[0], list): self.param_groups = [self.param_groups]
# this makes it so that there are hyper-parameters for each parameter group.
# start out here all being the same but different objects for reference and can change
# (for learning rate annealing, etc.)
# the Scheduler is what goes through and actually changes these
self.hypers = [{**defaults} for p in self.param_groups]
print("self.hypers!!", self.hypers)
self.steppers = listify(steppers)
print("self.steppers!!", self.steppers)
def grad_params(self):
# convenience function for getting all the parameters in all groups -- used to zero them all out in zero_grad
res = [(p,hyper) for pg,hyper in zip(self.param_groups,self.hypers)
for p in pg if p.grad is not None]
# print("grad_params!!", res)
return res
def zero_grad(self):
for p,hyper in self.grad_params():
p.grad.detach_()
p.grad.zero_()
def step(self):
for p,hyper in self.grad_params(): compose(p, self.steppers, **hyper)
```
To do basic SGD, this is what a step looks like:
```python
#export
def sgd_step(p, lr, **kwargs):
# add_ in pytorch multiplies p.grad.data by the scalar -lr and then adds the result to p.data in place
p.data.add_(-lr, p.grad.data)
return p
```
```python
opt_func = partial(Optimizer, steppers=[sgd_step])
```
```python
opt_func()
```
Now that we have changed the optimizer, we will need to adjust the callbacks that were using properties from the PyTorch optimizer: in particular the hyper-parameters are in the list of dictionaries `opt.hypers` (PyTorch has everything in the list of param groups).
```python
#export
class Recorder(Callback):
def begin_fit(self): self.lrs,self.losses = [],[]
def after_batch(self):
if not self.in_train: return
self.lrs.append(self.opt.hypers[-1]['lr'])
self.losses.append(self.loss.detach().cpu())
def plot_lr (self): plt.plot(self.lrs)
def plot_loss(self): plt.plot(self.losses)
def plot(self, skip_last=0):
losses = [o.item() for o in self.losses]
n = len(losses)-skip_last
plt.xscale('log')
plt.plot(self.lrs[:n], losses[:n])
class ParamScheduler(Callback):
_order=1
def __init__(self, pname, sched_funcs):
self.pname,self.sched_funcs = pname,sched_funcs
def begin_fit(self):
if not isinstance(self.sched_funcs, (list,tuple)):
self.sched_funcs = [self.sched_funcs] * len(self.opt.param_groups)
def set_param(self):
for f,h in zip(self.sched_funcs,self.opt.hypers):
h[self.pname] = f(self.n_epochs/self.epochs)
def begin_batch(self):
if self.in_train: self.set_param()
class LR_Find(Callback):
_order=1
def __init__(self, max_iter=100, min_lr=1e-6, max_lr=10):
self.max_iter,self.min_lr,self.max_lr = max_iter,min_lr,max_lr
self.best_loss = 1e9
def begin_batch(self):
if not self.in_train: return
pos = self.n_iter/self.max_iter
lr = self.min_lr * (self.max_lr/self.min_lr) ** pos
for pg in self.opt.hypers: pg['lr'] = lr
def after_step(self):
if self.n_iter>=self.max_iter or self.loss>self.best_loss*10:
raise CancelTrainException()
if self.loss < self.best_loss: self.best_loss = self.loss
```
So let's check we didn't break anything and that recorder and param scheduler work properly.
```python
sched = combine_scheds([0.3, 0.7], [sched_cos(0.3, 0.6), sched_cos(0.6, 0.2)])
```
```python
cbfs = [partial(AvgStatsCallback,accuracy),
CudaCallback, Recorder,
partial(ParamScheduler, 'lr', sched)]
```
```python
learn,run = get_learn_run(nfs, data, 0.4, conv_layer, cbs=cbfs, opt_func=opt_func)
```
self.hypers!! [{'lr': 0.4}]
self.steppers!! [<function sgd_step at 0x7f51e02af488>]
```python
%time run.fit(1, learn)
```
train: [1.7521567080618892, tensor(0.3982, device='cuda:0')]
valid: [1.335036376953125, tensor(0.5580, device='cuda:0')]
CPU times: user 5.03 s, sys: 2.28 s, total: 7.31 s
Wall time: 7.97 s
```python
run.recorder.plot_loss()
```
```python
run.recorder.plot_lr()
```
## Weight decay
By letting our model learn arbitrarily large parameters, it might fit all the data points in the training set with an over-complex function that has very sharp changes, which will lead to overfitting.
Weight decay comes from the idea of L2 regularization, which consists in adding to your loss function the sum of all the weights squared. Why do that? Because when we compute the gradients, it will add a contribution to them that will encourage the weights to be as small as possible.
Limiting our weights from growing too much is going to hinder the training of the model, but it will yield a state where it generalizes better. Going back to the theory a little bit, weight decay (or just `wd`) is a parameter that controls that sum of squares we add to our loss:
``` python
loss_with_wd = loss + (wd/2) * (weights**2).sum()
```
In practice though, it would be very inefficient (and maybe numerically unstable) to compute that big sum and add it to the loss. If you remember a little bit of high school math, you should know that the derivative of `p**2` with respect to `p` is simply `2*p`, so adding that big sum to our loss is exactly the same as doing
``` python
weight.grad += wd * weight
```
for every weight in our model, which is equivalent to (in the case of vanilla SGD) updating the parameters
with
``` python
weight = weight - lr*(weight.grad + wd*weight)
```
This last formula explains why the name of this technique is weight decay, as each weight is decayed by a factor `lr * wd`.
This only works for standard SGD, as we have seen that with momentum, RMSProp or in Adam, the update has some additional formulas around the gradient. In those cases, the formula that comes from L2 regularization:
``` python
weight.grad += wd * weight
```
is different than weight decay
``` python
new_weight = weight - lr * weight.grad - lr * wd * weight
```
Most libraries use the first one, but as it was pointed out in [Decoupled Weight Decay Regularization](https://arxiv.org/pdf/1711.05101.pdf) by Ilya Loshchilov and Frank Hutter, it is better to use the second one with the Adam optimizer, which is why fastai made it its default.
Weight decay is subtracting `lr*wd*weight` from the weights. We need this function to have an attribute `_defaults` so that we are sure there is a hyper-parameter of the same name in our `Optimizer`.
```python
#export
def weight_decay(p, lr, wd, **kwargs):
p.data.mul_(1 - lr*wd)
return p
weight_decay._defaults = dict(wd=0.)
```
L2 regularization is adding `wd*weight` to the gradients.
```python
#export
def l2_reg(p, lr, wd, **kwargs):
p.grad.data.add_(wd, p.data)
return p
l2_reg._defaults = dict(wd=0.)
```
Let's allow steppers to add to our `defaults` (which are the default values of all the hyper-parameters). This helper function goes through `os`, applies `f` to each element, and adds the resulting key/values to `dest` whenever there is no key of the same name already.
```python
#export
def maybe_update(os, dest, f):
for o in os:
for k,v in f(o).items():
# only updating if the key is not there at all
if k not in dest: dest[k] = v
def get_defaults(d): return getattr(d,'_defaults',{})
```
This is the same as before; we just take the default values of the steppers when none are provided in the kwargs.
```python
#export
class Optimizer():
def __init__(self, params, steppers, **defaults):
self.steppers = listify(steppers)
# will update defaults that have not been assigned with the defaults from the stepper defaults
maybe_update(self.steppers, defaults, get_defaults)
# might be a generator
self.param_groups = list(params)
# ensure params is a list of lists
if not isinstance(self.param_groups[0], list): self.param_groups = [self.param_groups]
self.hypers = [{**defaults} for p in self.param_groups]
def grad_params(self):
return [(p,hyper) for pg,hyper in zip(self.param_groups,self.hypers)
for p in pg if p.grad is not None]
def zero_grad(self):
for p,hyper in self.grad_params():
p.grad.detach_()
p.grad.zero_()
def step(self):
for p,hyper in self.grad_params(): compose(p, self.steppers, **hyper)
```
```python
#export
sgd_opt = partial(Optimizer, steppers=[weight_decay, sgd_step])
```
```python
learn,run = get_learn_run(nfs, data, 0.4, conv_layer, cbs=cbfs, opt_func=sgd_opt)
```
Before trying to train, let's check the behavior works as intended: when we don't provide a value for `wd`, we pull the corresponding default from `weight_decay`.
```python
model = learn.model
```
```python
opt = sgd_opt(model.parameters(), lr=0.1)
test_eq(opt.hypers[0]['wd'], 0.)
test_eq(opt.hypers[0]['lr'], 0.1)
```
But if we provide a value, it overrides the default.
```python
opt = sgd_opt(model.parameters(), lr=0.1, wd=1e-4)
test_eq(opt.hypers[0]['wd'], 1e-4)
test_eq(opt.hypers[0]['lr'], 0.1)
```
Now let's fit.
```python
cbfs = [partial(AvgStatsCallback,accuracy), CudaCallback]
```
```python
learn,run = get_learn_run(nfs, data, 0.3, conv_layer, cbs=cbfs, opt_func=partial(sgd_opt, wd=0.01))
```
```python
run.fit(1, learn)
```
train: [1.8099562720548317, tensor(0.3771, device='cuda:0')]
valid: [1.860242919921875, tensor(0.3520, device='cuda:0')]
This is already better than the baseline!
## With momentum
Momentum requires us to add some state. We need to save the moving average of the gradients to be able to do the step, and store this inside the optimizer state. To do this, we introduce statistics. Statistics are objects with two methods:
- `init_state`, that returns the initial state (a tensor of 0. for the moving average of gradients)
- `update`, that updates the state with the new gradient value
We also read the `_defaults` values of those objects, to allow them to provide default values to hyper-parameters.
```python
#export
class StatefulOptimizer(Optimizer):
def __init__(self, params, steppers, stats=None, **defaults):
self.stats = listify(stats)
maybe_update(self.stats, defaults, get_defaults)
super().__init__(params, steppers, **defaults)
self.state = {}
def step(self):
for p,hyper in self.grad_params():
if p not in self.state:
#Create a state for p and call all the statistics to initialize it.
# we track state for EACH PARAMETER
self.state[p] = {}
maybe_update(self.stats, self.state[p], lambda o: o.init_state(p))
state = self.state[p]
for stat in self.stats: state = stat.update(p, state, **hyper)
compose(p, self.steppers, **state, **hyper)
self.state[p] = state
```
```python
#export
class Stat():
_defaults = {}
def init_state(self, p): raise NotImplementedError
def update(self, p, state, **kwargs): raise NotImplementedError
```
Here is an example of `Stat`:
```python
class AverageGrad(Stat):
_defaults = dict(mom=0.9)
def init_state(self, p): return {'grad_avg': torch.zeros_like(p.grad.data)}
def update(self, p, state, mom, **kwargs):
state['grad_avg'].mul_(mom).add_(p.grad.data)
return state
```
Then we add the momentum step (instead of using the gradients to perform the step, we use the average).
```python
#export
def momentum_step(p, lr, grad_avg, **kwargs):
p.data.add_(-lr, grad_avg)
return p
```
```python
sgd_mom_opt = partial(StatefulOptimizer, steppers=[momentum_step,weight_decay],
stats=AverageGrad(), wd=0.01)
```
```python
learn,run = get_learn_run(nfs, data, 0.3, conv_layer, cbs=cbfs, opt_func=sgd_mom_opt)
```
```python
run.fit(1, learn)
```
train: [1.7859231704184118, tensor(0.3843, device='cuda:0')]
valid: [1.96252978515625, tensor(0.3720, device='cuda:0')]
### Momentum experiments
What does momentum do to the gradients exactly? Let's do some plots to find out!
```python
x = torch.linspace(-4, 4, 200)
y = torch.randn(200) + 0.3
betas = [0.5, 0.7, 0.9, 0.99]
```
```python
def plot_mom(f):
_,axs = plt.subplots(2,2, figsize=(12,8))
for beta,ax in zip(betas, axs.flatten()):
ax.plot(y, linestyle='None', marker='.')
avg,res = None,[]
for i,yi in enumerate(y):
avg,p = f(avg, beta, yi, i)
res.append(p)
ax.plot(res, color='red')
ax.set_title(f'beta={beta}')
```
This is the regular momentum.
```python
def mom1(avg, beta, yi, i):
if avg is None: avg=yi
res = beta*avg + yi
return res,res
plot_mom(mom1)
```
As we can see, with a too high value, it may go way too high with no way to change its course.
Another way to smooth noisy data is to do an exponentially weighted moving average. In this case, there is a dampening of (1-beta) in front of the new value, which is less trusted than the current average.
### This is lerp in pytorch speak. It dampens the new item. In regular momentum we have beta*avg + yi
```python
def ewma(v1, v2, beta): return beta*v1 + (1-beta)*v2
```
```python
def mom2(avg, beta, yi, i):
if avg is None: avg=yi
avg = ewma(avg, yi, beta)
return avg, avg
plot_mom(mom2)
```
We can see it gets to a zero-constant when the data is purely random. If the data has a certain shape, it will get that shape (with some delay for high beta).
```python
y = 1 - (x/3) ** 2 + torch.randn(200) * 0.1
```
```python
y[0]=0.5
```
```python
plot_mom(mom2)
```
### Debiasing is here to correct the wrong information we may have in the very first batch. The debias term corresponds to the sum of the coefficients in our moving average. At the time step i, our average is:
$\begin{align*}
avg_{i} &= \beta\ avg_{i-1} + (1-\beta)\ v_{i} = \beta\ (\beta\ avg_{i-2} + (1-\beta)\ v_{i-1}) + (1-\beta)\ v_{i} \\
&= \beta^{2}\ avg_{i-2} + (1-\beta)\ \beta\ v_{i-1} + (1-\beta)\ v_{i} \\
&= \beta^{3}\ avg_{i-3} + (1-\beta)\ \beta^{2}\ v_{i-2} + (1-\beta)\ \beta\ v_{i-1} + (1-\beta)\ v_{i} \\
&\vdots \\
&= (1-\beta)\ \beta^{i}\ v_{0} + (1-\beta)\ \beta^{i-1}\ v_{1} + \cdots + (1-\beta)\ \beta^{2}\ v_{i-2} + (1-\beta)\ \beta\ v_{i-1} + (1-\beta)\ v_{i}
\end{align*}$
and so the sum of the coefficients is
$\begin{align*}
S &=(1-\beta)\ \beta^{i} + (1-\beta)\ \beta^{i-1} + \cdots + (1-\beta)\ \beta^{2} + (1-\beta)\ \beta + (1-\beta) \\
&= (\beta^{i} - \beta^{i+1}) + (\beta^{i-1} - \beta^{i}) + \cdots + (\beta^{2} - \beta^{3}) + (\beta - \beta^{2}) + (1-\beta) \\
&= 1 - \beta^{i+1}
\end{align*}$
since all the other terms cancel out each other.
By dividing by this term, we make our moving average a true average (in the sense that all the coefficients we used for the average sum up to 1).
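As a quick numerical sanity check of that derivation (added here for illustration; the values of beta and i are arbitrary):

```python
beta, i = 0.9, 10
coeffs = [(1 - beta) * beta**k for k in range(i + 1)]  # one coefficient per value in the average
print(sum(coeffs), 1 - beta**(i + 1))                  # both print the same number (~0.6862)
```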
```python
def mom3(avg, beta, yi, i):
if avg is None: avg=0
avg = ewma(avg, yi, beta)
# key to debiasing is this divide step
return avg, avg/(1-beta**(i+1))
plot_mom(mom3)
```
## Adam and friends
In Adam, we use the gradient averages but with dampening (not like in SGD with momentum), so let's add this to the `AverageGrad` class.
```python
dict(somekey=0.9)
```
{'somekey': 0.9}
```python
#export
class AverageGrad(Stat):
_defaults = dict(mom=0.9)
def __init__(self, dampening:bool=False): self.dampening=dampening
def init_state(self, p): return {'grad_avg': torch.zeros_like(p.grad.data)}
def update(self, p, state, mom, **kwargs):
state['mom_damp'] = 1-mom if self.dampening else 1.
state['grad_avg'].mul_(mom).add_(state['mom_damp'], p.grad.data)
return state
```
We also need to track the moving average of the gradients squared.
```python
#export
class AverageSqrGrad(Stat):
_defaults = dict(sqr_mom=0.99)
def __init__(self, dampening:bool=True): self.dampening=dampening
def init_state(self, p): return {'sqr_avg': torch.zeros_like(p.grad.data)}
def update(self, p, state, sqr_mom, **kwargs):
state['sqr_damp'] = 1-sqr_mom if self.dampening else 1.
state['sqr_avg'].mul_(sqr_mom).addcmul_(state['sqr_damp'], p.grad.data, p.grad.data)
return state
```
We will also need the number of steps done during training for the debiasing.
```python
#export
class StepCount(Stat):
def init_state(self, p): return {'step': 0}
def update(self, p, state, **kwargs):
state['step'] += 1
return state
```
This helper function computes the debias term. If we use dampening, `damp = 1 - mom` and we get the same result as before. If we don't use dampening (`damp = 1`), we will need to divide by `1 - mom` because that term is missing everywhere.
```python
#export
def debias(mom, damp, step): return damp * (1 - mom**step) / (1-mom)
```
Then the Adam step is just the following:
```python
#export
def adam_step(p, lr, mom, mom_damp, step, sqr_mom, sqr_damp, grad_avg, sqr_avg, eps, **kwargs):
debias1 = debias(mom, mom_damp, step)
debias2 = debias(sqr_mom, sqr_damp, step)
p.data.addcdiv_(-lr / debias1, grad_avg, (sqr_avg/debias2).sqrt() + eps)
return p
adam_step._defaults = dict(eps=1e-5)
```
```python
#export
def adam_opt(xtra_step=None, **kwargs):
return partial(StatefulOptimizer, steppers=[adam_step,weight_decay]+listify(xtra_step),
stats=[AverageGrad(dampening=True), AverageSqrGrad(), StepCount()], **kwargs)
```
```python
learn,run = get_learn_run(nfs, data, 0.001, conv_layer, cbs=cbfs, opt_func=adam_opt())
```
```python
run.fit(3, learn)
```
train: [1.717051505302854, tensor(0.4114, device='cuda:0')]
valid: [1.3037923583984374, tensor(0.5480, device='cuda:0')]
train: [1.2101377093997208, tensor(0.6031, device='cuda:0')]
valid: [1.061533203125, tensor(0.6660, device='cuda:0')]
train: [0.9421420422047077, tensor(0.6971, device='cuda:0')]
valid: [1.033464599609375, tensor(0.6620, device='cuda:0')]
## LAMB
It's then super easy to implement a new optimizer. This is LAMB from a [very recent paper](https://arxiv.org/pdf/1904.00962.pdf):
$\begin{align}
g_{t}^{l} &= \nabla L(w_{t-1}^{l}, x_{t}) \\
m_{t}^{l} &= \beta_{1} m_{t-1}^{l} + (1-\beta_{1}) g_{t}^{l} \\
v_{t}^{l} &= \beta_{2} v_{t-1}^{l} + (1-\beta_{2}) g_{t}^{l} \odot g_{t}^{l} \\
m_{t}^{l} &= m_{t}^{l} / (1 - \beta_{1}^{t}) \\
v_{t}^{l} &= v_{t}^{l} / (1 - \beta_{2}^{t}) \\
r_{1} &= \|w_{t-1}^{l}\|_{2} \\
s_{t}^{l} &= \frac{m_{t}^{l}}{\sqrt{v_{t}^{l} + \epsilon}} + \lambda w_{t-1}^{l} \\
r_{2} &= \| s_{t}^{l} \|_{2} \\
\eta^{l} &= \eta * r_{1}/r_{2} \\
w_{t}^{l} &= w_{t}^{l-1} - \eta_{l} * s_{t}^{l} \\
\end{align}$
```python
def lamb_step(p, lr, mom, mom_damp, step, sqr_mom, sqr_damp, grad_avg, sqr_avg, eps, wd, **kwargs):
debias1 = debias(mom, mom_damp, step)
debias2 = debias(sqr_mom, sqr_damp, step)
r1 = p.data.pow(2).mean().sqrt()
step = (grad_avg/debias1) / ((sqr_avg/debias2).sqrt()+eps) + wd*p.data
r2 = step.pow(2).mean().sqrt()
p.data.add_(-lr * min(r1/r2,10), step)
return p
lamb_step._defaults = dict(eps=1e-6, wd=0.)
```
```python
lamb = partial(StatefulOptimizer, steppers=lamb_step, stats=[AverageGrad(dampening=True), AverageSqrGrad(), StepCount()])
```
```python
learn,run = get_learn_run(nfs, data, 0.003, conv_layer, cbs=cbfs, opt_func=lamb)
```
```python
run.fit(3, learn)
```
train: [1.846915534841787, tensor(0.3682, device='cuda:0')]
valid: [1.482430908203125, tensor(0.5140, device='cuda:0')]
train: [1.3096460502462386, tensor(0.5643, device='cuda:0')]
valid: [1.1855213623046874, tensor(0.6220, device='cuda:0')]
train: [1.0266159859963937, tensor(0.6672, device='cuda:0')]
valid: [1.011562744140625, tensor(0.6740, device='cuda:0')]
Other recent variants of optimizers:
- [Large Batch Training of Convolutional Networks](https://arxiv.org/abs/1708.03888) (LARS also uses weight statistics, not just gradient statistics. Can you add that to this class? A rough sketch follows below.)
- [Adafactor: Adaptive Learning Rates with Sublinear Memory Cost](https://arxiv.org/abs/1804.04235) (Adafactor combines stats over multiple sets of axes)
- [Adaptive Gradient Methods with Dynamic Bound of Learning Rate](https://arxiv.org/abs/1902.09843)
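As a starting point for that exercise, here is a minimal, unverified sketch of a LARS-style step written against the `StatefulOptimizer` framework above, following the same pattern as `lamb_step`. The exact scaling rule, the norm-like statistic and the `eps` default are assumptions for illustration, not the paper's precise formulation:

```python
def lars_step(p, lr, grad_avg, wd, eps, **kwargs):
    # Weight statistic: a norm-like measure of the parameter tensor
    r1 = p.data.pow(2).mean().sqrt()
    # Gradient statistic plus weight decay, as in the LAMB step above
    step = grad_avg + wd * p.data
    r2 = step.pow(2).mean().sqrt()
    # Layer-wise adaptive scaling of the learning rate
    local_lr = lr * r1 / (r2 + eps) if r1 > 0 else lr
    p.data.add_(-local_lr * step)
    return p
lars_step._defaults = dict(eps=1e-8, wd=0.)

lars = partial(StatefulOptimizer, steppers=lars_step, stats=AverageGrad())
```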
## Export
```python
!python notebook2script.py 09_optimizers.ipynb
```
Converted 09_optimizers.ipynb to exp/nb_09.py
```python
```
## Calculation of exponent function using Maclaurin Series for x = 1
\begin{align}
e^x = \sum\limits_{n=0}^{\infty}\frac{x^n}{n!}
\end{align}
```python
# importing dependency functions
from math import exp as ideal_exp
from matplotlib import pyplot as plt
# initial guess for iteration number
iter_num = 25
# implementation of factorial function
def custom_factorial(n):
# base case to stop recursion
if n < 1: return 1
# general case to compute factorial
return n * custom_factorial(n-1)
# implementation of power function
def custom_power(base, degree):
# base case to stop recursion
if degree < 1: return 1
# general case to compute power
return base * custom_power(base, degree - 1)
# recursive implementation of exponential function
def custom_exp_recursive(x, counter=0, limit = iter_num):
# base case to stop recursion
if counter == limit - 1: return custom_power(x, counter)/custom_factorial(counter)
# general case to compute exponent
return custom_power(x, counter) / custom_factorial(counter) + custom_exp_recursive(x, counter + 1, limit)
# loop implementation of exponential function to analyze error function
def custom_exp_analysis(x, iter_num=iter_num):
# computing math library exponent as reference
ideal = ideal_exp(x)
# initialization of final result and error vector
result = 0
error_vec = [ideal]
# Maclaurin iteration loop
for i in range(iter_num):
result += custom_power(x, i)/custom_factorial(i)
error_vec.append((ideal - result) / ideal)
return result, error_vec
print("Calculated exponent value is {} using {} iterations of Maclaurin series".format(custom_exp_recursive(1), iter_num))
```
Calculated exponent value is 2.718281828459045 using 25 iterations of Maclaurin series
## True relative error analysis.
After each iteration term is added, the relative error is computed and plotted
```python
# plotting true error function
plt.plot(custom_exp_analysis(1)[1])
plt.ylabel('True error')
plt.xlabel('# of iterations')
plt.grid(color='b', linestyle='--', linewidth=0.5)
plt.show()
```
## Precision test
Suppose that $10^{-15}$ is the required precision for our custom exponent function. How many Maclaurin terms are needed in order to reach the given precision?
```python
# zoom plotting to explore precision
plt.plot(custom_exp_analysis(1)[1])
plt.ylabel('True relative error')
plt.xlabel('# of iterations')
plt.xlim(15, 25)
plt.ylim(-1e-15, 1e-15)
plt.grid(color='b', linestyle='--', linewidth=0.5)
plt.show()
```
From the plot above it is clearly seen that 17 iterations are enough to reach e-15 precision, since the error function falls within the e-15 precision boundaries.
## Truncation error test
After the 19th iteration, the result is not improving anymore. Why?
Increasing the number of iterations no longer has any effect on the error function. To understand the cause, let's analyze what happens at the 19th cycle. The 19th Maclaurin term is calculated as 1/19! and added to the global result. Let's see what we get by dividing 1 by 19!.
```python
from decimal import Decimal
print(Decimal(1/custom_factorial(19)))
```
8.2206352466243294955370400408296422011147285715637438030523043153152684681117534637451171875E-18
It is not 0, so the cause is not this term being rounded to zero.
```python
small_term = 1/custom_factorial(19)
added_something = 0.99 + small_term
print(Decimal(added_something))
```
0.9899999999999999911182158029987476766109466552734375
Something strange happens when we add the 19th Maclaurin term to 0.99: the tiny term is lost, and the printed value is even slightly less than 0.99 (because 0.99 itself cannot be represented exactly in binary).
```python
added_something = 2599999999999999990000.999999 + small_term
print(Decimal(added_something))
```
2600000000000000000000
Here even a difference of about 10000 is rounded away, producing a number with no fractional part.
```python
print(type(small_term))
```
<class 'float'>
In Python, floating-point numbers carry 53 bits of precision (IEEE 754 double precision).
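A quick way to see that limit (added here as an illustration):

```python
import sys
print(sys.float_info.mant_dig)   # 53 significand bits per float
print(2.0**53 + 1 == 2.0**53)    # True: above 2**53 the +1 can no longer be represented
```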
### Precision tests using x = 2 and x = 10
```python
# plotting true error function
plt.plot(custom_exp_analysis(2)[1])
plt.ylabel('True relative error')
plt.xlabel('# of iterations')
plt.grid(color='b', linestyle='--', linewidth=0.5)
plt.show()
```
```python
# zoom plotting to explore precision
plt.plot(custom_exp_analysis(2, iter_num=25)[1])
plt.ylabel('True relative error')
plt.xlabel('# of iterations')
plt.xlim(15, 25)
plt.ylim(-1e-15, 1e-15)
plt.grid(color='b', linestyle='--', linewidth=0.5)
plt.show()
```
This time, we could reach e-15 accuracy in 22 iterations.
```python
# plotting true error function
plt.plot(custom_exp_analysis(10)[1])
plt.ylabel('True relative error')
plt.xlabel('# of iterations')
plt.grid(color='b', linestyle='--', linewidth=0.5)
plt.show()
```
```python
# zoom plotting to explore precision
plt.plot(custom_exp_analysis(10, iter_num=50)[1])
plt.ylabel('True error')
plt.xlabel('# of iterations')
plt.xlim(40, 50)
plt.ylim(-1e-15, 1e-15)
plt.grid(color='b', linestyle='--', linewidth=0.5)
plt.show()
```
In the case x = 10, the error function converges to $10^{-15}$ precision at about the 45th Maclaurin iteration.
### Conclusion about approximations
Most decimal fractions cannot be represented exactly as binary fractions. Therefore, in general, the decimal floating-point numbers we type are only approximated by the binary floating-point numbers actually stored in the machine. For instance, the fraction 1/10 has the repeating binary expansion 0.0001100110011001100110011001100110011001100110011..., which would need infinitely many bits; the hardware double keeps only the first 53 significant bits and rounds the rest. This truncation of the binary expansion is the reason for the strange behaviour of the floating-point additions above.
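A small illustration of this point (a sketch using only the standard library `decimal` module): printing the exact value stored for 0.1 exposes the truncated binary approximation.
```python
from decimal import Decimal

# the double closest to 1/10 is slightly larger than 0.1
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# a classic consequence: the stored approximations do not add up exactly
print(0.1 + 0.2 == 0.3)  # False
```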
|
2f621e33228403607ef161db5eaed5352f4920a7
| 110,190 |
ipynb
|
Jupyter Notebook
|
exponent.ipynb
|
BatyaGG/numerical_methods
|
40036c07ed4db2fb03fe0d188feeb440aa260ce2
|
[
"MIT"
] | 1 |
2018-06-23T12:19:55.000Z
|
2018-06-23T12:19:55.000Z
|
exponent.ipynb
|
BatyaGG/numerical_methods
|
40036c07ed4db2fb03fe0d188feeb440aa260ce2
|
[
"MIT"
] | null | null | null |
exponent.ipynb
|
BatyaGG/numerical_methods
|
40036c07ed4db2fb03fe0d188feeb440aa260ce2
|
[
"MIT"
] | null | null | null | 256.853147 | 18,092 | 0.927834 | true | 1,455 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.942507 | 0.930458 | 0.876963 |
__label__eng_Latn
| 0.959154 | 0.875813 |
###### Content provided under a Creative Commons Attribution license, CC-BY 4.0; code under MIT license. (c)2014 Lorena A. Barba, Olivier Mesnard. Thanks: NSF for support via CAREER award #1149784.
[@LorenaABarba](https://twitter.com/LorenaABarba)
##### Version 0.4 -- April 2015
# Source panel method
We are now getting close to the finish line with *AeroPython*! Our first few lessons introduced the fundamental flow solutions of potential flow, and we quickly learned that using our superposition powers we could get some useful results in aerodynamics.
The superposition of a [doublet](http://nbviewer.ipython.org/urls/github.com/barbagroup/AeroPython/blob/master/lessons/03_Lesson03_doublet.ipynb) and a free stream gave the flow around a circular cylinder, and we learned about the *D'Alembert paradox*: the result of zero drag for potential flow around a cylinder. Adding a [vortex](http://nbviewer.ipython.org/urls/github.com/barbagroup/AeroPython/blob/master/lessons/06_Lesson06_vortexLift.ipynb) at the center of the cylinder, we learned about lift and the *Kutta-Joukowski theorem* stating that lift is proportional to circulation: $L=\rho U \Gamma$. A most important result!
Adding together fundamental solutions of potential flow and seeing what we get when interpreting a dividing streamline as a solid body is often called an *indirect method*. This method goes all the way back to Rankine in 1871! But its applicability is limited because we can't stipulate a geometry and find the flow associated to it.
In [Lesson 9](http://nbviewer.ipython.org/urls/github.com/barbagroup/AeroPython/blob/master/lessons/09_Lesson09_flowOverCylinder.ipynb), we learned that it is possible to stipulate first the geometry, and then solve for the source strengths on a panel discretization of the body that makes the flow tangent at the boundary. This is called a *direct method* and it took off in the 1960s with the work of Hess and Smith at Douglas Aircraft Company.
A set of panels (line segments in 2D) can represent the surface of any solid body immersed in a potential flow by making the source-sheet strengths such that the normal velocity at each panel is equal to zero. This is a very powerful idea! But you should realize that all the panel strengths are coupled to each other, which is why we end up with a linear system of equations.
For an arbitrary geometry, we need to build a set of panels according to some points that define the geometry. In this lesson, we will read from a file a geometry definition corresponding to a **NACA0012 airfoil**, create a set of panels, and solve for the source-sheet strengths to get flow around the airfoil.
*Make sure you have studied [Lesson 9](http://nbviewer.ipython.org/github/barbagroup/AeroPython/blob/master/lessons/09_Lesson09_flowOverCylinder.ipynb) carefully before proceeding!* We will not repeat the full mathematical formulation in this notebook, so refer back as needed.
First, load our favorite Python libraries, and the `integrate` module from SciPy:
```
import math
import numpy
from scipy import integrate
from matplotlib import pyplot
```
Next, we read the body geometry from a file using the NumPy function [`loadtxt()`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.loadtxt.html). The file comes from the [Airfoil Tools](http://airfoiltools.com/airfoil/details?airfoil=n0012-il) website and it contains a set of coordinates for the standard NACA0012 symmetric profile. We saved the file in the `resources` folder and load it from our local copy.
The geometry points get loaded into one NumPy array, so we separate the data into two arrays: `x,y` (for better code readability). The subsequent code will plot the geometry of the airfoil.
```
# reads the geometry from a data file
with open ('./resources/naca0012.dat') as file_name:
x, y = numpy.loadtxt(file_name, dtype=float, delimiter='\t', unpack=True)
# plots the geometry
%matplotlib inline
val_x, val_y = 0.1, 0.2
x_min, x_max = x.min(), x.max()
y_min, y_max = y.min(), y.max()
x_start, x_end = x_min-val_x*(x_max-x_min), x_max+val_x*(x_max-x_min)
y_start, y_end = y_min-val_y*(y_max-y_min), y_max+val_y*(y_max-y_min)
size = 10
pyplot.figure(figsize=(size, (y_end-y_start)/(x_end-x_start)*size))
pyplot.grid(True)
pyplot.xlabel('x', fontsize=16)
pyplot.ylabel('y', fontsize=16)
pyplot.xlim(x_start, x_end)
pyplot.ylim(y_start, y_end)
pyplot.plot(x, y, color='k', linestyle='-', linewidth=2);
```
## Discretization into panels
Like in [Lesson 9](http://nbviewer.ipython.org/urls/github.com/barbagroup/AeroPython/blob/master/lessons/09_Lesson09_flowOverCylinder.ipynb), we will create a discretization of the body geometry into panels (line segments in 2D). A panel's attributes are: its starting point, end point and mid-point, its length and its orientation. See the following figure for the nomenclature used in the code and equations below.
We can modify the `Panel` class from our previous notebook slightly, to work better for our study of flow over an airfoil. The only difference is that we identify points on the top or bottom surfaces with the words `upper` and `lower`, which is only used later for plotting results with different colors for the top and bottom surfaces of the profile.
```
class Panel:
"""Contains information related to a panel."""
def __init__(self, xa, ya, xb, yb):
"""Creates a panel.
Arguments
---------
xa, ya -- Cartesian coordinates of the first end-point.
xb, yb -- Cartesian coordinates of the second end-point.
"""
self.xa, self.ya = xa, ya
self.xb, self.yb = xb, yb
self.xc, self.yc = (xa+xb)/2, (ya+yb)/2 # control-point (center-point)
self.length = math.sqrt((xb-xa)**2+(yb-ya)**2) # length of the panel
# orientation of the panel (angle between x-axis and panel's normal)
if xb-xa <= 0.:
self.beta = math.acos((yb-ya)/self.length)
elif xb-xa > 0.:
self.beta = math.pi + math.acos(-(yb-ya)/self.length)
# location of the panel
if self.beta <= math.pi:
self.loc = 'upper'
else:
self.loc = 'lower'
self.sigma = 0. # source strength
self.vt = 0. # tangential velocity
self.cp = 0. # pressure coefficient
```
For the circular cylinder, the discretization into panels was really easy. This is the part that gets more complicated when you want to compute the flow around a general geometry, while the solution part is effectively the same as in [Lesson 9](http://nbviewer.ipython.org/urls/github.com/barbagroup/AeroPython/blob/master/lessons/09_Lesson09_flowOverCylinder.ipynb).
The function below will create the panels from the geometry data that was read from a file. It is better to have small panels near the leading edge and the trailing edge, where the curvature is large. One method to get a non-uniform distribution around the airfoil is to first discretize a circle with diameter equal to the airfoil's chord, with the leading edge and trailing edge touching the circle at a node, as shown in the following sketch.
Then, we store the $x$-coordinates of the circle points, `x_circle`, which will also be the $x$-coordinates of the panel nodes, `x`, and project the $y$-coordinates of the circle points onto the airfoil by interpolation. We end up with a node distribution on the airfoil that is refined near the leading edge and the trailing edge. It will look like this:
With the discretization method just described, the function `definePanels()` returns an array of objects, each an instance of the class `Panel` and containing all information about a panel, given the desired number of panels and the set of body coordinates.
A few remarks about the implementation of the function `definePanels()`:
* we just need to compute the $x$-coordinates of the circle (`x_circle`) since the $y$-coordinates of the panel nodes will be computed by interpolation;
* we create a circle with `N+1` points, but the first and last points coincide;
* we extend our NumPy arrays by adding an extra value that is equal to the first one; thus we don't have to do anything special with the value `x[i+1]` in the different loops;
* the *while*-loop is used to find two consecutive points, (`x[I]`,`y[I]`) and (`x[I+1]`,`y[I+1]`), on the foil such that the interval [`x[I]`,`x[I+1]`] contains the value `x_ends[i]`; we use the keyword `break` to get out of the loop;
* once the two points have been identified, the value `y_ends[i]` is computed by interpolation.
```
def define_panels(x, y, N=40):
"""Discretizes the geometry into panels using the 'cosine' method.
Arguments
---------
x, y -- Cartesian coordinates of the geometry (1D arrays).
N - number of panels (default 40).
Returns
-------
panels -- Numpy array of panels.
"""
R = (x.max()-x.min())/2 # radius of the circle
x_center = (x.max()+x.min())/2 # x-coord of the center
x_circle = x_center + R*numpy.cos(numpy.linspace(0, 2*math.pi, N+1)) # x-coord of the circle points
x_ends = numpy.copy(x_circle) # projection of the x-coord on the surface
y_ends = numpy.empty_like(x_ends) # initialization of the y-coord Numpy array
x, y = numpy.append(x, x[0]), numpy.append(y, y[0]) # extend arrays using numpy.append
# computes the y-coordinate of end-points
I = 0
for i in range(N):
while I < len(x)-1:
if (x[I] <= x_ends[i] <= x[I+1]) or (x[I+1] <= x_ends[i] <= x[I]):
break
else:
I += 1
a = (y[I+1]-y[I])/(x[I+1]-x[I])
b = y[I+1] - a*x[I+1]
y_ends[i] = a*x_ends[i] + b
y_ends[N] = y_ends[0]
panels = numpy.empty(N, dtype=object)
for i in range(N):
panels[i] = Panel(x_ends[i], y_ends[i], x_ends[i+1], y_ends[i+1])
return panels
```
Now we can use this function, calling it with a desired number of panels whenever we execute the cell below. We also plot the resulting geometry.
```
N = 40 # number of panels
panels = define_panels(x, y, N) # discretizes the geometry into panels
# plots the geometry and the panels
val_x, val_y = 0.1, 0.2
x_min, x_max = min(panel.xa for panel in panels), max(panel.xa for panel in panels)
y_min, y_max = min(panel.ya for panel in panels), max(panel.ya for panel in panels)
x_start, x_end = x_min-val_x*(x_max-x_min), x_max+val_x*(x_max-x_min)
y_start, y_end = y_min-val_y*(y_max-y_min), y_max+val_y*(y_max-y_min)
size = 10
pyplot.figure(figsize=(size, (y_end-y_start)/(x_end-x_start)*size))
pyplot.grid(True)
pyplot.xlabel('x', fontsize=16)
pyplot.ylabel('y', fontsize=16)
pyplot.xlim(x_start, x_end)
pyplot.ylim(y_start, y_end)
pyplot.plot(x, y, color='k', linestyle='-', linewidth=2)
pyplot.plot(numpy.append([panel.xa for panel in panels], panels[0].xa),
numpy.append([panel.ya for panel in panels], panels[0].ya),
linestyle='-', linewidth=1, marker='o', markersize=6, color='#CD2305');
```
## Freestream conditions
The NACA0012 airfoil will be immersed in a uniform flow with velocity $U_\infty$ and an angle of attack $\alpha=0$. Even though it may seem like overkill to create a class for the freestream, we'll do it anyway. When creating a class, one usually expects to also create several instances of its objects. Here, we just have one freestream, so why define a class? Well, it makes the code more readable and does not block the programmer from using the variable names `u_inf` and `alpha` for something else outside of the class.
Also, every time we need the freestream condition as input to a function, we will just have to pass the object as an argument and not all the attributes of the freestream.
```
class Freestream:
"""Freestream conditions."""
def __init__(self, u_inf=1.0, alpha=0.0):
"""Sets the freestream conditions.
Arguments
---------
u_inf -- Farfield speed (default 1.0).
alpha -- Angle of attack in degrees (default 0.0).
"""
self.u_inf = u_inf
self.alpha = alpha*math.pi/180 # degrees --> radians
```
```
# defines and creates the object freestream
u_inf = 1.0    # freestream speed
alpha = 0.0 # angle of attack (in degrees)
freestream = Freestream(u_inf, alpha) # instantiation of the object freestream
```
## Flow tangency boundary condition
Enforcing the flow-tangency condition on each *control point* approximately makes the body geometry correspond to a dividing streamline (and the approximation improves if we represent the body with more and more panels). So, for each panel $i$, we make $u_n=0$ at $(x_{c_i},y_{c_i})$, which leads to the equation derived in the previous lesson:
\begin{equation}
u_{n_i} = \frac{\partial}{\partial n_i}\left\lbrace \phi\left(x_{c_i},y_{c_i}\right) \right\rbrace = 0
\end{equation}
i.e.
\begin{equation}
0 = U_\infty \cos\beta_i + \frac{\sigma_i}{2} + \sum_{j=1,j\neq i}^{N_p} \frac{\sigma_j}{2\pi} \int \frac{
\left(x_{c_i}-x_j(s_j)\right) \cos\beta_i
+ \left(y_{c_i}-y_j(s_j)\right) \sin\beta_i
}
{\left(x_{c_i}-x_j(s)\right)^2 + \left(y_{c_i}-y_j(s)\right)^2} {\rm d}s_j
\end{equation}
In the equation above, we calculate the derivative of the potential in the normal direction to enforce the flow tangency condition on each panel. But later, we will have to calculate the derivative in the tangential direction to compute the surface pressure coefficient. And, when we are interested in plotting the velocity field onto a mesh, we will have to calculate the derivative in the $x$- and $y$-direction.
Therefore the function below is similar to the one implemented in [Lesson 9](http://nbviewer.ipython.org/github/barbagroup/AeroPython/blob/master/lessons/09_Lesson09_flowOverCylinder.ipynb) to obtain the integrals along each panel, but we've generalized it to adapt to the direction of derivation (by means of two new arguments, `dxdz` and `dydz`, which respectively represent the value of $\frac{\partial x_{c_i}}{\partial z_i}$ and $\frac{\partial y_{c_i}}{\partial z_i}$, $z_i$ being the desired direction).
Moreover, the function is also more general in the sense of allowing any evaluation point, not just a control point on a panel (the argument `p_i` has been replaced by the coordinates `x` and `y` of the control-point, and `p_j` has been replaced with `panel`).
```
def integral(x, y, panel, dxdz, dydz):
"""Evaluates the contribution of a panel at one point.
Arguments
---------
x, y -- Cartesian coordinates of the point.
panel -- panel which contribution is evaluated.
dxdz -- derivative of x in the z-direction.
dydz -- derivative of y in the z-direction.
Returns
-------
Integral over the panel of the influence at one point.
"""
def func(s):
return ( ((x - (panel.xa - math.sin(panel.beta)*s))*dxdz
+(y - (panel.ya + math.cos(panel.beta)*s))*dydz)
/ ((x - (panel.xa - math.sin(panel.beta)*s))**2
+(y - (panel.ya + math.cos(panel.beta)*s))**2) )
return integrate.quad(lambda s:func(s), 0., panel.length)[0]
```
## Building the linear system
Here, we build and solve the linear system of equations of the form
\begin{equation}[A][\sigma] = [b].\end{equation}
In building the matrix, below, we call the `integral()` function with the correct values for the last parameters: $\cos \beta_i$ and $\sin\beta_i$, corresponding to a derivative in the normal direction.
Finally, we use `linalg.solve()` from NumPy to solve the system and find the strength of each panel.
```
def build_matrix(panels):
"""Builds the source matrix.
Arguments
---------
panels -- array of panels.
Returns
-------
A -- NxN matrix (N is the number of panels).
"""
N = len(panels)
A = numpy.empty((N, N), dtype=float)
numpy.fill_diagonal(A, 0.5)
for i, p_i in enumerate(panels):
for j, p_j in enumerate(panels):
if i != j:
A[i,j] = 0.5/math.pi*integral(p_i.xc, p_i.yc, p_j, math.cos(p_i.beta), math.sin(p_i.beta))
return A
def build_rhs(panels, freestream):
"""Builds the RHS of the linear system.
Arguments
---------
panels -- array of panels.
freestream -- farfield conditions.
Returns
-------
    b -- 1D array (Nx1, N is the number of panels).
"""
b = numpy.empty(len(panels), dtype=float)
for i, panel in enumerate(panels):
b[i] = -freestream.u_inf * math.cos(freestream.alpha - panel.beta)
return b
```
```
A = build_matrix(panels) # computes the singularity matrix
b = build_rhs(panels, freestream) # computes the freestream RHS
```
```
# solves the linear system
sigma = numpy.linalg.solve(A, b)
for i, panel in enumerate(panels):
panel.sigma = sigma[i]
```
## Surface pressure coefficient
From Bernoulli's equation, the pressure coefficient on the $i$-th panel is
\begin{equation}C_{p_i} = 1-\left(\frac{u_{t_i}}{U_\infty}\right)^2
\end{equation}
where $u_{t_i}$ is the tangential component of the velocity at the center point of the $i$-th panel,
\begin{equation}
u_{t_i} = -U_\infty \sin\beta_i + \sum_{j=1}^{N_p} \frac{\sigma_j}{2\pi} \int \frac{
\left(x_{c_i}-x_j(s_j)\right) \frac{\partial x_{c_i}}{\partial t_i}
+ \left(y_{c_i}-y_j(s_j)\right) \frac{\partial y_{c_i}}{\partial t_i}
}
{\left(x_{c_i}-x_j(s)\right)^2 + \left(y_{c_i}-y_j(s)\right)^2} {\rm d}s_j
\end{equation}
with
\begin{equation}
\frac{\partial x_{c_i}}{\partial t_i} = -\sin\beta_i \quad\text{and} \quad \frac{\partial y_{c_i}}{\partial t_i} = \cos\beta_i
\end{equation}
Notice that below we call the function `integral()` with different arguments: $-\sin\beta_i$ and $\cos\beta_i$ to get the derivation in the tangential direction.
```
def get_tangential_velocity(panels, freestream):
"""Computes the tangential velocity on the surface.
Arguments
---------
panels -- array of panels.
freestream -- farfield conditions.
"""
N = len(panels)
A = numpy.empty((N, N), dtype=float)
numpy.fill_diagonal(A, 0.0)
for i, p_i in enumerate(panels):
for j, p_j in enumerate(panels):
if i != j:
A[i,j] = 0.5/math.pi*integral(p_i.xc, p_i.yc, p_j, -math.sin(p_i.beta), math.cos(p_i.beta))
b = freestream.u_inf * numpy.sin([freestream.alpha - panel.beta for panel in panels])
sigma = numpy.array([panel.sigma for panel in panels])
vt = numpy.dot(A, sigma) + b
for i, panel in enumerate(panels):
panel.vt = vt[i]
```
```
# computes the tangential velocity at the center-point of each panel
get_tangential_velocity(panels, freestream)
```
```
def get_pressure_coefficient(panels, freestream):
"""Computes the surface pressure coefficients.
Arguments
---------
panels -- array of panels.
freestream -- farfield conditions.
"""
for panel in panels:
panel.cp = 1.0 - (panel.vt/freestream.u_inf)**2
```
```
# computes the surface pressure coefficients
get_pressure_coefficient(panels, freestream)
```
### Theoretical solution
There is a classical method to obtain the theoretical characteristics of airfoils, known as *Theodorsen's method*. It uses the Joukowski transformation but is able to deal with any airfoil by an additional transformation between a "near circle" and a circle. The method is hairy indeed! But the resulting values of pressure coefficient are provided for some airfoils in table form in the 1945 [NACA Report No.824](http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19930090976.pdf), available from the NASA web server (see p. 71).
The values of $(u/U_{\infty})^2$ are given for several stations along the chord length. We transcribed them here, saving them into an array:
```
voverVsquared=numpy.array([0, 0.64, 1.01, 1.241, 1.378, 1.402, 1.411, 1.411, 1.399, 1.378, 1.35, 1.288, 1.228, 1.166, 1.109, 1.044, 0.956, 0.906, 0])
print(voverVsquared)
```
[ 0. 0.64 1.01 1.241 1.378 1.402 1.411 1.411 1.399 1.378
1.35 1.288 1.228 1.166 1.109 1.044 0.956 0.906 0. ]
```
xtheo=numpy.array([0, 0.5, 1.25, 2.5, 5.0, 7.5, 10, 15, 20, 25, 30, 40, 50, 60, 70, 80, 90, 95, 100])
xtheo = xtheo/100
print(xtheo)
```
[ 0. 0.005 0.0125 0.025 0.05 0.075 0.1 0.15 0.2 0.25
0.3 0.4 0.5 0.6 0.7 0.8 0.9 0.95 1. ]
### And plot the result!
We will use the values from the NACA report (also given in the book by Abbot and von Doenhoff, ["Theory of Wing Sections,"](http://books.google.com/books/about/Theory_of_Wing_Sections_Including_a_Summ.html?id=DPZYUGNyuboC) 1949) to visually compare the pressure distribution with the result of our source panel method. Let's see how it looks!
```
# plots the surface pressure coefficient
val_x, val_y = 0.1, 0.2
x_min, x_max = min( panel.xa for panel in panels ), max( panel.xa for panel in panels )
cp_min, cp_max = min( panel.cp for panel in panels ), max( panel.cp for panel in panels )
x_start, x_end = x_min-val_x*(x_max-x_min), x_max+val_x*(x_max-x_min)
y_start, y_end = cp_min-val_y*(cp_max-cp_min), cp_max+val_y*(cp_max-cp_min)
pyplot.figure(figsize=(10, 6))
pyplot.grid(True)
pyplot.xlabel('x', fontsize=16)
pyplot.ylabel('$C_p$', fontsize=16)
pyplot.plot([panel.xc for panel in panels if panel.loc == 'upper'],
[panel.cp for panel in panels if panel.loc == 'upper'],
color='r', linewidth=1, marker='x', markersize=8)
pyplot.plot([panel.xc for panel in panels if panel.loc == 'lower'],
[panel.cp for panel in panels if panel.loc == 'lower'],
color='b', linewidth=0, marker='d', markersize=6)
pyplot.plot(xtheo, 1-voverVsquared, color='k', linestyle='--',linewidth=2)
pyplot.legend(['upper', 'lower'], loc='best', prop={'size':14})
pyplot.xlim(x_start, x_end)
pyplot.ylim(y_start, y_end)
pyplot.gca().invert_yaxis()
pyplot.title('Number of panels : %d' % N);
```
That looks pretty good! The only place where the panel method doesn't quite match the tabulated data from Theodorsen's method is at the trailing edge. But note that the flow-tangency boundary condition in the panel method is applied at the control point of the panel (not at the endpoints), so this discrepancy is not surprising.
##### Accuracy check
For a closed body, the sum of all the source strengths must be zero. If not, it means the body would be adding or absorbing mass from the flow! Therefore, we should have
$$\sum_{j=1}^{N} \sigma_j l_j = 0$$
where $l_j$ is the length of the $j^{\text{th}}$ panel.
With this, we can get an idea of the accuracy of the source panel method.
```
# calculates the accuracy
accuracy = sum([panel.sigma*panel.length for panel in panels])
print('--> sum of source/sink strengths:', accuracy)
```
--> sum of source/sink strengths: 0.00461703117528
## Streamlines onto a mesh grid
To get a streamline plot, we have to create a mesh (like we've done in all *AeroPython* lessons!) and compute the velocity field onto it. Knowing the strength of every panel, we find the $x$-component of the velocity by taking derivative of the velocity potential in the $x$-direction, and the $y$-component by taking derivative in the $y$-direction:
$$u\left(x,y\right) = \frac{\partial}{\partial x}\left\lbrace \phi\left(x,y\right) \right\rbrace$$
$$v\left(x,y\right) = \frac{\partial}{\partial y}\left\lbrace \phi\left(x,y\right) \right\rbrace$$
Notice that here we call the function `integral()` with $1,0$ as the final arguments when calculating the derivatives in the $x$-direction, and $0,1$ for the derivatives in the $y$-direction.
```
def get_velocity_field(panels, freestream, X, Y):
"""Returns the velocity field.
Arguments
---------
panels -- array of panels.
freestream -- farfield conditions.
X, Y -- mesh grid.
"""
Nx, Ny = X.shape
u, v = numpy.empty((Nx, Ny), dtype=float), numpy.empty((Nx, Ny), dtype=float)
    for i in range(Nx):
        for j in range(Ny):
u[i,j] = freestream.u_inf*math.cos(freestream.alpha)\
+ 0.5/math.pi*sum([p.sigma*integral(X[i,j], Y[i,j], p, 1, 0) for p in panels])
v[i,j] = freestream.u_inf*math.sin(freestream.alpha)\
+ 0.5/math.pi*sum([p.sigma*integral(X[i,j], Y[i,j], p, 0, 1) for p in panels])
return u, v
```
```
# defines a mesh grid
Nx, Ny = 20, 20 # number of points in the x and y directions
val_x, val_y = 1.0, 2.0
x_min, x_max = min( panel.xa for panel in panels ), max( panel.xa for panel in panels )
y_min, y_max = min( panel.ya for panel in panels ), max( panel.ya for panel in panels )
x_start, x_end = x_min-val_x*(x_max-x_min), x_max+val_x*(x_max-x_min)
y_start, y_end = y_min-val_y*(y_max-y_min), y_max+val_y*(y_max-y_min)
X, Y = numpy.meshgrid(numpy.linspace(x_start, x_end, Nx), numpy.linspace(y_start, y_end, Ny))
# computes the velocity field on the mesh grid
u, v = get_velocity_field(panels, freestream, X, Y)
```
```
# plots the velocity field
size=10
pyplot.figure(figsize=(size, (y_end-y_start)/(x_end-x_start)*size))
pyplot.xlabel('x', fontsize=16)
pyplot.ylabel('y', fontsize=16)
pyplot.streamplot(X, Y, u, v, density=1, linewidth=1, arrowsize=1, arrowstyle='->')
pyplot.fill([panel.xc for panel in panels],
[panel.yc for panel in panels],
color='k', linestyle='solid', linewidth=2, zorder=2)
pyplot.xlim(x_start, x_end)
pyplot.ylim(y_start, y_end)
pyplot.title('Streamlines around a NACA 0012 airfoil, AoA = %.1f' % alpha);
```
We can now calculate the pressure coefficient. In Lesson 9, we computed the pressure coefficient on the surface of the circular cylinder. That was useful because we have an analytical solution for the surface pressure on a cylinder in potential flow. For an airfoil, we are interested to see how the pressure looks all around it, and we make a contour plot in the flow domain.
```
# computes the pressure field
cp = 1.0 - (u**2+v**2)/freestream.u_inf**2
# plots the pressure field
size=12
pyplot.figure(figsize=(1.1*size, (y_end-y_start)/(x_end-x_start)*size))
pyplot.xlabel('x', fontsize=16)
pyplot.ylabel('y', fontsize=16)
contf = pyplot.contourf(X, Y, cp, levels=numpy.linspace(-2.0, 1.0, 100), extend='both')
cbar = pyplot.colorbar(contf)
cbar.set_label('$C_p$', fontsize=16)
cbar.set_ticks([-2.0, -1.0, 0.0, 1.0])
pyplot.fill([panel.xc for panel in panels],
[panel.yc for panel in panels],
color='k', linestyle='solid', linewidth=2, zorder=2)
pyplot.xlim(x_start, x_end)
pyplot.ylim(y_start, y_end)
pyplot.title('Contour of pressure field');
```
### Final words
We've learned to use a source-sheet to represent any solid body: first a [circular cylinder](http://nbviewer.ipython.org/github/barbagroup/AeroPython/blob/master/lessons/09_Lesson09_flowOverCylinder.ipynb) (which we knew we could get by superposing a doublet and a freestream), and now an airfoil.
But what is the feature of airfoils that makes them interesting? Well, the fact that we can use them to generate lift and make things that fly, of course! But what do we need to generate lift? Think, think ... what is it?
## References
1. [Airfoil Tools](http://airfoiltools.com/index), website providing airfoil data.
1. Ira Herbert Abbott, Albert Edward Von Doenhoff and Louis S. Stivers, Jr. (1945), "Summary of Airfoil Data," NACA Report No.824, [PDF on the NASA web server](http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19930090976.pdf) (see p. 71)
1. Ira Herbert Abbott, Albert Edward Von Doenhoff, "Theory of Wing Sections, Including a Summary of Airfoil Data" (1949), Dover Press.
A further reference on Theodorsen's method is:
* Roland Schinzinger, Patricio A. A. Laura (1991), "Conformal Mapping: Methods and Applications." Dover edition in 2003. [Read on Google Books](https://books.google.com/books?id=qe-7AQAAQBAJ&lpg=PA128&ots=wbg0jLlqq5&dq=method%20theodorsen&pg=PA128#v=onepage&q=%22method%20of%20theodorsen%20and%20garrick%22&f=false)
---
###### Please ignore the cell below. It just loads our style for the notebook.
```
from IPython.core.display import HTML
def css_styling():
styles = open('../styles/custom.css', 'r').read()
return HTML(styles)
css_styling()
```
|
93931aa6d28e9d3fbaf9f605e4d26ed1ad449b27
| 176,488 |
ipynb
|
Jupyter Notebook
|
lessons/10_Lesson10_sourcePanelMethod.ipynb
|
cpop-fr/AeroPython
|
5b4a6f15ff2d6e49ad6ffbce0ad7ea72f15af451
|
[
"CC-BY-4.0"
] | null | null | null |
lessons/10_Lesson10_sourcePanelMethod.ipynb
|
cpop-fr/AeroPython
|
5b4a6f15ff2d6e49ad6ffbce0ad7ea72f15af451
|
[
"CC-BY-4.0"
] | null | null | null |
lessons/10_Lesson10_sourcePanelMethod.ipynb
|
cpop-fr/AeroPython
|
5b4a6f15ff2d6e49ad6ffbce0ad7ea72f15af451
|
[
"CC-BY-4.0"
] | null | null | null | 148.184719 | 42,097 | 0.838607 | true | 8,864 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.893309 | 0.835484 | 0.746345 |
__label__eng_Latn
| 0.967478 | 0.572342 |
# Immersed Boundary Method
---
### Author: Marin Lauber
```python
import numpy as np
import matplotlib.pyplot as plt
import NSsolver as ns
try:
plt.style.use("jupyter")
except OSError:
print("Delaut syle in use")
```
Charles S. Peskin (1972) developed the immersed boundary method (IBM) to tackle the problem of blood flow around heart valves. Dirac delta-function source terms impose the kinematic boundary condition in the fluid by regularizing a force density onto the background mesh
\begin{equation}
\begin{split}
&\frac{\partial \vec{u}}{\partial t} + (\vec{u}\cdot\nabla)\vec{u} = -\nabla p + \frac{1}{Re}\nabla^2\vec{u} + \int\vec{f}(\vec{\xi}(s, t))\delta_h(\vec{\xi} - \vec{x}) \text{ d}\vec{x},\\
&\nabla\cdot\vec{u} = 0,\\
&\vec{u}(\vec{\xi}(s, t)) = \int_{\vec{x}}\vec{u}(\vec{x})\delta(\vec{x} - \vec{\xi} ) \text{ d}\vec{x} = \vec{u}_B(\vec{\xi}(s, t)).
\end{split}
\end{equation}
A constitutive relationship, Hooke's law, is used to derive this force density based on the motion of the filament, which is convected by the fluid. Different discrete Dirac delta functions can be used within the IBM, for example the three-point delta function
\begin{equation}
\delta_h(r) = \begin{cases}
\frac{1}{6\Delta r}\left[ 5-3\frac{|r|}{\Delta r} -\sqrt{-3\left(1-\frac{|r|}{\Delta r}\right)^2 +1}\right] & \text{for } 0.5\Delta r \le |r| \le 1.5\Delta r\\
\frac{1}{3\Delta r}\left[ 1+\sqrt{-3\left(\frac{r}{\Delta r}\right)^2 +1}\right] & \text{for } |r| \le 0.5\Delta r\\
0 & \text{else}
\end{cases}
\end{equation}
or the 3-point kernel (used in the following)
\begin{equation}
\delta_h(r) = \begin{cases}
\frac{3}{4} - r^2  & \text{for } |r| \le 0.5\\
\frac{1}{2}\left(2.25-3|r|+r^2\right) & \text{for } 0.5 < |r| \le 1.5\\
0 & \text{else}
\end{cases}
\end{equation}
```python
def kernel(d):
return np.where(abs(d)<=.5, .75-d**2, np.where(abs(d)<=1.5,.5*(2.25-3*abs(d)+d**2), 0))
def Ic(x, X, dx, f):
# interpolation operator
return np.sum(f*kernel((X - x)/dx))
def Sc(x, X, F, ds=1):
# spreading operator
return kernel((X - x)/dx)*ds*F
def f(x, X, u, V, dx, dt, rho):
# interpolate to Lagrangian point
Ur = Ic(x, X, dx, u)
# compute Lagrangian force density
F = rho/dt*(V - Ur)
return Sc(x, X, F) # extrapolate to Eulerian points
```
```python
d = np.linspace(-3,3,256)
plt.plot(d, kernel(d));
```
We use a fractional-step algorithm to solve the following system of equations.
\begin{split}
u^* &= r_{\Delta t}(u^n) + \Delta t \mathcal{S}_c[\xi - x](\mathcal{I}_c[\xi-x](u^n)),\\
\nabla\cdot(\nabla p) &= \nabla\cdot u^*,\\
u^{n+1} &= u^* - \frac{\Delta t}{\rho}\nabla p.
\end{split}
where $\mathcal{S}_c$ and $\mathcal{I}_c$ are the spreading and interpolation operators.
```python
def update(x, X, u0, V, dx, dt=1, rho=1, Nsteps=1):
u_n = u0
for i in range(Nsteps):
# first step
u_1 = u_n + dt*ns.r(u_n)
u_star = u_1 + dt*f(x, X, u_1, V, dx, dt, rho)
sigma = ns.div(u_star, dx)
p = ns.solve_pressure(np.ones_like(sigma), sigma, dx, verbose=True)
u_n = u_star - dt/rho*(ns.grad(p, dx))
return u_star, sigma, p, u_n
```
```python
N = 32
x, dx = np.linspace(-1, 1, N, retstep=True)
xs = x + 0.5*dx # scalar cell-centred values
X = 0.0
V = 1
u0 = np.zeros_like(x)
dt = 1.
us, sigma, p, u_n = update(x, X, u0, V, dx, dt)
print("Interface at X: %.2f" % X)
print(r"L inf: %.3e" % np.max(np.abs(u_n - V)))
```
Jacobi solver:
res0: 3.425e-01
res: 3.390e-11
iter: 548
Interface at X: 0.00
L inf: 9.688e-01
```python
ns.draw_results(x, xs, X, u0, u_n, p, sigma)
```
Clearly, the correct pressure is not obtained with a single step of the fractional-step method. In fact, any number of sub-iterations of this system fails to capture the correct pressure (and thus the solution).
|
29bb1f98988c03eb897bfdd9a77856803b299c61
| 53,838 |
ipynb
|
Jupyter Notebook
|
1D-Piston/Immersed-Boundary-Method.ipynb
|
marinlauber/FlexibleSheets
|
487b035a5aea4a0f4cf5aa49c3eab5cb238aa1f7
|
[
"MIT"
] | null | null | null |
1D-Piston/Immersed-Boundary-Method.ipynb
|
marinlauber/FlexibleSheets
|
487b035a5aea4a0f4cf5aa49c3eab5cb238aa1f7
|
[
"MIT"
] | null | null | null |
1D-Piston/Immersed-Boundary-Method.ipynb
|
marinlauber/FlexibleSheets
|
487b035a5aea4a0f4cf5aa49c3eab5cb238aa1f7
|
[
"MIT"
] | 2 |
2020-12-18T18:57:16.000Z
|
2022-03-04T06:58:09.000Z
| 229.097872 | 29,040 | 0.905773 | true | 1,419 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.903294 | 0.79053 | 0.714081 |
__label__eng_Latn
| 0.598078 | 0.497382 |
<a href="https://colab.research.google.com/github/nickwotton/MQP2019/blob/master/Nick/Copy_of_linearfunction01.ipynb" target="_parent"></a>
# Attempt to Improve Solving a Linear Function using a Nueral Network
Given code that uses a neural network to fit a linear function, try to optimize the code to get a better fit, i.e. so that the true and predicted data points completely overlap on the plot.
```python
import torch
import torch.nn as nn
import numpy as np
import torch.nn.functional as F
import matplotlib.pyplot as plt
```
## Define the Function
Here we define our function $f(x)=ax+b$ with coefficient $a=1$ and intercept term $b=2$.
Then we test the equation with a test value of 2.
```python
# target function
a = 1.
b = 2.
f = lambda x: a*x+b
#test
f(2.)
```
4.0
## Create Model
Next, we create the neural network model. First we set the input and output dimensions with variables, then we code the model and vary the internal dimensions to attempt to improve the fit. At this level, this is essentially a simple linear algebra exercise:
If we have input $x$, internal parameters $a,b$, and solution $f(x)$ then in the one-dimensional case we have:
\begin{equation}
\left(
a_{1}x+b_{1}
\right)
a_{2} + b_{2}
= f(x)
\end{equation}
However, we want to get a better estimate for the true equation. So we increase the interior dimension which corresponds to the number of neurons inside the network. For example, we raised the inner dimension to 3. In matrix form we have:
\begin{equation}
\left(
\begin{bmatrix} x \end{bmatrix}
\begin{bmatrix} a_{1} & a_{2} & a_{3} \end{bmatrix}
+
\begin{bmatrix} b_{1} & b_{2} & b_{3} \end{bmatrix}
\right)
\begin{bmatrix} a_{4} \\ a_{5} \\ a_{6} \end{bmatrix}
+
\begin{bmatrix} b_{4} \\ \end{bmatrix}
=
\begin{bmatrix} f(x) \end{bmatrix}
\end{equation}
What we discovered here is that the ReLU was slowing down the training, and since our target function is linear, we can simply remove it.
Additionally, we discerned that the higher the inner dimension, that is, the more nodes in each layer, the smaller the error and the better the performance.
```python
#model
#nn.Linear
in_dim = 1
out_dim = 1
model = nn.Sequential(
nn.Linear(in_dim, 30),
# nn.ReLU(),
nn.Linear(30, out_dim)
)
```
Here we define the Loss function as the Mean Squared Error(MSE).
Note that by doing so, we are essentially 'cheating' the system. In most applications, we would not know the function $f$ so we would be unable to find the MSE.
```python
#loss function
criterion = nn.MSELoss()
```
Next we choose a learning rate and a method for learning. The learning rate is the percent of the data that is accepted in each iteration. The Methods we tried were SGD and Adam.
```python
#optimizer
learning_rate = 0.001
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
```
## Train the Model
First we create the training data. This is a batch of random points that we pass through our function $f$.
```python
#training data
batch_size = 1000
x_train = torch.randn(batch_size, 1)
y_train = f(x_train)
```
Once we have the training data, we pass this collection of inputs and solutions into the model. With each iteration we calculate the loss and attempt to optimize the model to further reduce the loss.
In this code we print out the loss every 500 iterations.
```python
# Train the model
num_epochs = 10000
for epoch in range(num_epochs):
# Forward pass
outputs = model(x_train)
loss = criterion(outputs, y_train)
# Backward and optimize
optimizer.zero_grad()
loss.backward()
optimizer.step()
if (epoch+1) % 500 == 0:
print ('Epoch [{}/{}], Loss: {:.4f}'.format(epoch+1,
num_epochs, loss.item()))
```
Epoch [500/10000], Loss: 0.0000
Epoch [1000/10000], Loss: 0.0000
Epoch [1500/10000], Loss: 0.0000
Epoch [2000/10000], Loss: 0.0000
Epoch [2500/10000], Loss: 0.0000
Epoch [3000/10000], Loss: 0.0000
Epoch [3500/10000], Loss: 0.0000
Epoch [4000/10000], Loss: 0.0000
Epoch [4500/10000], Loss: 0.0000
Epoch [5000/10000], Loss: 0.0000
Epoch [5500/10000], Loss: 0.0000
Epoch [6000/10000], Loss: 0.0000
Epoch [6500/10000], Loss: 0.0000
Epoch [7000/10000], Loss: 0.0000
Epoch [7500/10000], Loss: 0.0000
Epoch [8000/10000], Loss: 0.0000
Epoch [8500/10000], Loss: 0.0000
Epoch [9000/10000], Loss: 0.0000
Epoch [9500/10000], Loss: 0.0000
Epoch [10000/10000], Loss: 0.0000
## Testing the Model
Now that we have a trained model with low loss, we want to attempt to replicate the function. To do this we get another random sample of numbers. This sample is passed into both our function $f$ and the model.
We then graph both sets of points on a scatter plot. Since the model is highly accurate now, the two sets of points completely overlap.
```python
#test
x_ = torch.randn(50,1)
y_ = f(x_)
plt.scatter(x_.detach().numpy(), y_.detach().numpy(), label='true')
y_pred = model(x_)
plt.scatter(x_.detach().numpy(), y_pred.detach().numpy(), label='pred')
plt.legend()
```
__Todo__
- Improve the above code.
- train a model to $f(x) = x^2 + 1$
- train a model to
$$f(x) = \begin{bmatrix}
1 & 1 \\
0 & 1
\end{bmatrix} x +
\begin{bmatrix}
1 \\
0
\end{bmatrix}$$
|
07dd1b9f05749e7558944ce5181ea0f23d4b42c1
| 20,917 |
ipynb
|
Jupyter Notebook
|
Nick/Copy_of_linearfunction01.ipynb
|
xulisong1/MQP2019
|
c0fb22fd5a6ea23d579493d591b08f94375c07b8
|
[
"MIT"
] | null | null | null |
Nick/Copy_of_linearfunction01.ipynb
|
xulisong1/MQP2019
|
c0fb22fd5a6ea23d579493d591b08f94375c07b8
|
[
"MIT"
] | null | null | null |
Nick/Copy_of_linearfunction01.ipynb
|
xulisong1/MQP2019
|
c0fb22fd5a6ea23d579493d591b08f94375c07b8
|
[
"MIT"
] | null | null | null | 48.531323 | 8,178 | 0.659703 | true | 1,593 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.896251 | 0.766294 | 0.686792 |
__label__eng_Latn
| 0.959372 | 0.433979 |
# A Cournot competition model with product differentiation
## Model Project
### Group: Anders&Frederik
#### Group members: Frederik Andresen, rjv586. Anders Meelby, zpw286.
**The model**
Consider two firms who compete in the same market i.e. a duopoly. The market is characterized by Cournot competetion:
- The firms are competing in quantities they produce;
- The firms decide the quantities independently of each other and simultaneously;
- The firms do not coorperate (no collusion);
- The firms have market power why each firm's output decision affects the price of each firm's good;
- The firms are economically rational and act strategically to maximize their profit given their competitors' decisions.
**Demand/price**
We define the inverse demand (price) for firm $i$ in such a market:
$p_i=p(x_i,x_j)=1-x_i-b\cdot x_j,$ for each $i\neq j \in \{1,2\},$
where $x_1$ is production by firm 1, $x_2$ is production by firm 2, and $b\in[0,1]$ is a constant determining the degree of substitutability between the two goods. Letting $b=1$ makes the goods perfect substitutes, $0<b<1$ makes the goods differentiated, and $b=0$ makes the demands independent.
```python
# 1.1 Define price/demand
def demand(x1,x2,b):
return 1-x1-b*x2
```
**Cost**
Both firms have cost functions $C_i(x_i)=c_i x_i$ for $i=1,2$, such that the marginal costs are constant, $\dfrac{\partial C_i(x_i)}{\partial x_i}=c_i.$
```python
# 1.2 Define costs
def cost(x,c): # cost is zero when output is zero
if x == 0:
cost = 0
else:
cost = c*x
return cost
```
**Profit**
The profit of firm $i$ can be stated as the following:
$\pi_i(x_i,x_j) = p_i(x_i,x_j)x_i-C(x_i)$ for each $i\neq j \in \{1,2\}$
```python
# 1.3 Define profits
def profit(x1,x2,c1,b):
return demand(x1,x2,b)*x1 - cost(x1,c1)
```
**Best response**
Firm $i$ chooses its quantity, $x_i$, so as to maximize profits, taking $x_j$ as given. $(x_i^*,x_j^*)$ is therefore a Nash equilibrium if $x_i^*=\underset{x_i}{\textrm{arg max}}\ \pi_i(x_i,x_j^*)$ for each $i\neq j \in \{1,2\}$.
If the profit is concave in output, the Nash equilibrium $(x_i^*,x_j^*)$ is the solution to $\frac{\partial \pi_i(x_i,x_j)}{\partial x_i}\big|_{x_i=x_i^*,x_j=x_j^*} = 0$.
Isolating the profit maximizing quantity $x_i^*$ yields the best response function, $x_i^*=R_{x_i}(x_j^*)$.
To define this function, we use SciPy's brute-force optimizer, which by default solves a minimization problem. Thus, if we want to maximize the profit of firm i, $\pi_i(x_i,x_j)$, we need to minimize $-\pi_i(x_i,x_j)$.
The method evaluates $-\pi_i(x_i,x_j)$ over a grid of $x_i$ values in the given range in order to find the global minimum. We restrict the range to 0 to 1 for our results to be well-defined.
The best response functions:
```python
from scipy import optimize
# 1.4 Define best response function of both firms using lambda and optimize.brute
def reaction(x2,c,b):
x1 = optimize.brute(lambda x: -profit(x,x2,c,b),
ranges=((0,1),), #x1,x2 can take on values between 0 and 1.
finish=optimize.fmin)
return x1[0]
```
**Finding the equilibrium by numerical optimization**
Having defined the best response function as a function of the other firm's choice of quantity and the parameters, we need to set up a system of equations which ensures that both firms are responding optimally given the other firm's choice of quantity. We define two vectors in order to satisfy this condition.
The first vector, $x^* = \begin{pmatrix}x_1^* \\x_2^* \end{pmatrix}$, is the Nash equilibrium, and the second contains each firm's reaction function given the
other firm's choice of quantity, $f(x^*) = \begin{pmatrix}r_1(x_2^*) \\r_2(x_1^*) \end{pmatrix}$.
The Nash equilibrium, $x^*$, must satisfy $x^* = f(x^*) \Leftrightarrow x^* - f(x^*) = 0$.
This is just another way of stating that both firms are responding optimally and therefore have no incentive to deviate. We define a function, vector_reaction, which is passed a vector of quantities as well as a set of parameters and returns the residual $x - f(x)$; the solver keeps changing the quantities until both firms have maximized profits, i.e. when $x^* - f(x^*) = 0$.
The equilibrium condition in vectorized form (array):
```python
from numpy import array
# 1.5 Define best responses as an array
def vector_reaction(x,param): # vector param = (b,c1,c2)
return array(x)-array([reaction(x[1],param[1],param[0]), #x1 = reaction(x2,c1,b)
reaction(x[0],param[2],param[0])]) #x2 = reaction(x1,c2,b)
```
We can solve for the profit-maximizing quantities using SciPy's "fsolve", which solves the vectorized system of equations given an initial guess and the parameter values.
The first case is the one with perfect substitutes or homogeneous products (b=1), the next is imperfect substitutes (b=0.5) and the last is independent goods (b=0), where each firm is effectively a monopolist.
Finally, we print the Nash equilibrium quantity, price and profit.
```python
from IPython.display import Markdown, display #Creating Markdown-like printing for better visuals
def printmd(string):
display(Markdown(string))
b_list=[1,0.5,0]
x0_list=[[0.3,0.3],[0.6,0.6],[0.5,0.5]]
titles =['**Perfect substitutes (b=1)**','**Imperfect substitutes (b=0.5)**','**Independent goods (b=0)**']
# 1.6 Print results by looping through the above lists
for x0, b, t in zip(x0_list, b_list, titles):
x_opt = optimize.fsolve(vector_reaction, x0, args = ([b,0,0]))
printmd(t)
printmd(f'The Nash equilibrium quantity for both firms, $x^* =[x_1^*,x_2^*]$ = {x_opt}')
printmd(f'The Nash equilibrium price for firm 1 and 2, $p^*$= {demand(x_opt[0],x_opt[1],b)}')
printmd(f'The Nash equilibrium profit for firm 1 and 2, $\pi^*$= {profit(x_opt[0],x_opt[1],0,b)}')
```
**Perfect substitutes (b=1)**
The Nash equilibrium quantity for both firms, $x^* =[x_1^*,x_2^*]$ = [0.33332648 0.33332648]
The Nash equilibrium price for firm 1 and 2, $p^*$= 0.3333470347232139
The Nash equilibrium profit for firm 1 and 2, $\pi^*$= 0.11111339458222716
**Imperfect substitutes (b=0.5)**
The Nash equilibrium quantity for both firms, $x^* =[x_1^*,x_2^*]$ = [0.4 0.4]
The Nash equilibrium price for firm 1 and 2, $p^*$= 0.39999999997022395
The Nash equilibrium profit for firm 1 and 2, $\pi^*$= 0.15999999999602987
**Independent goods (b=0)**
The Nash equilibrium quantity for both firms, $x^* =[x_1^*,x_2^*]$ = [0.49995888 0.49995888]
The Nash equilibrium price for firm 1 and 2, $p^*$= 0.5000411184210527
The Nash equilibrium profit for firm 1 and 2, $\pi^*$= 0.24999999830927547
Each firm sets a higher output (and charges a higher price) at the monopoly outcome (b=0) with independent products than if products are homogeneous (b=1).
Generally, the equilibrium quantity, -price and profit increase as the products become more differentiated.
The analytical solution is symmetric as well (c=0): $x^*=\dfrac{1}{2+b}=p^*$, and $\pi^*=p^*x^*=\dfrac{1}{(2+b)^2}$,
which checks out against the results above.
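As a quick sanity check (a sketch reusing the `vector_reaction` function and the `optimize` module defined above, with zero marginal costs), we can compare the numerical fsolve solution with the analytical expressions:
```python
# 1.7 Compare the numerical fsolve solution with the analytical solution (c1 = c2 = 0)
for b in [1, 0.5, 0]:
    x_num = optimize.fsolve(vector_reaction, [0.5, 0.5], args = ([b,0,0]))
    x_ana = 1/(2+b)          # analytical quantity (= price) with zero marginal costs
    pi_ana = 1/(2+b)**2      # analytical profit with zero marginal costs
    print(f'b={b}: numerical x* = {x_num[0]:.4f}, analytical x* = {x_ana:.4f}, analytical pi* = {pi_ana:.4f}')
```
The numerical values should match the analytical ones up to the tolerance of the brute-force grid and fsolve.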
**Visualizing the best response functions**
We plot the best response functions for the two firms and visualize the equilibrium quantities with three different parameter values of b.
```python
import sympy as sy
import matplotlib.pyplot as plt
import numpy as np
plt.style.use('seaborn-poster')
# 2.1 Create list of three parameter values and figure titles
b_list = [1, 0.5, 0.001]
title_list = ["The best response functions in quantity-space with perfect substitutes (b=1)", "The best response functions in quantity-space with imperfect substitutes (b=0.5)","The best response functions in quantity-space with close complements (b=0.001)"]
# 2.2 Two loops through two lists for the parameter b and figure titles, respectively
for b, t in zip(b_list,title_list):
# 2.3 Define array of quantities and storage vectors
N=1000
q_vec = np.linspace(0, 1, N)
q1_vec = np.empty(N)
q2_vec = np.empty(N)
# 2.4 Loop through the 1000 quantities in q_vec and get R_i(x_j) for each firm
for i,x2 in enumerate(q_vec):
cord1 = q1_vec[i] = reaction(x2,0,b)
for i,x1 in enumerate(q_vec):
cord2 = q2_vec[i] = reaction(x1,0,b)
# 2.5 Plot equilibrium quantity given b
plt.title(t)
plt.xlabel('$x_2$')
plt.ylabel('$x_1$')
plt.plot(q1_vec, q_vec, '-', label='$R_1(x_2)$')
plt.plot(q_vec, q2_vec, '-', label='$R_2(x_1)$')
plt.legend(loc='best')
plt.xlim(0,1)
plt.ylim(0,1)
p = np.argwhere(np.diff(np.sign(q_vec - q1_vec))).flatten()
plt.plot(q_vec[p], q_vec[p], 'ro')
printmd(f'Intersection, $x^*$=({q_vec[p]},{q_vec[p]})')
plt.show()
```
In the first two graphs both best responses are decreasing in the rival firm's quantity: if one firm raises its output it is optimal for the other firm to reduce its output ("strategic substitutes").
When the goods are (nearly) independent (b close to 0), the optimal output is (nearly) independent of the rival firm's output.
**Plotting the parameter $b$ against equilibrium quantity, -price and profit.**
* We initially define a set of storage arrays for each variable to be used with a length of N=1000.
* We loop through each value of b in an equal spaced array using enumerate.
* For each b in the loop, we solve for the equilibrium quantity for either firm using Scipy's optimize.solve.
* We define the resulting eq. price and -profit.
* We find that the equilibrium quantity, -price and profit decrease as the products become less differentiated.
```python
# 3.1 Define data for b and storage vectors
b_vec = np.linspace(0, 1, N)
x1_vec = np.empty(N)
x2_vec = np.empty(N)
price_vec = np.empty(N)
pi_vec = np.empty(N)
# 3.2 Define solver from the vectorized equilibrium condition for b to loop through
def solver(b):
return optimize.fsolve(vector_reaction, [0.5,0.5], args = ([b,0,0]))
# 3.3 Let b loop through the solver and get optimal quantity, -price and -profit of firm 1 for each b (firm 2 is identical by symmetry)
for i,b in enumerate(b_vec):
cord = solver(b)
x1_vec[i] = cord[0]
x2_vec[i] = cord[1]
price_vec[i] = demand(x1_vec[i],x2_vec[i],b)
pi_vec[i] = profit(x1_vec[i],x2_vec[i],0,b)
# 3.4 Plot equilibrium quantity given b
fig, (ax_0, ax_1, ax_2) = plt.subplots(3, sharex=True)
ax_0.plot(b_vec,x1_vec)
ax_0.set_title('Equilibrium quantity given b')
ax_0.set_ylabel('$x^\star$')
ax_0.grid(True)
# 3.5 Plot equilibrium price given b
ax_1.plot(b_vec,price_vec)
ax_1.set_title('Equilibrium price given b')
ax_1.set_ylabel('$p^\star$')
ax_1.grid(True)
# 3.5 Plot equilibrium profit given b
ax_2.plot(b_vec,pi_vec)
ax_2.set_title('Equilibrium profit given b')
ax_2.set_xlabel('$b$')
ax_2.set_ylabel('$\pi^\star$')
ax_2.grid(True)
plt.show()
```
The equilibrium quantity and price are equal as previously stated.
**Conclusion**
The equilibrium quantity, -price and profit decrease as the products become less differentiated.
Thus, having two monopolists with heterogeneous products is preferable to a duopoly with homogeneous products from a social welfare point-of-view.
|
95542eca9c94c490afa381c15b8f9e4d4df03676
| 197,756 |
ipynb
|
Jupyter Notebook
|
modelproject/model_project.ipynb
|
NumEconCopenhagen/projects-2020-anders-frederik
|
1e0b4b89c65c11c99a8ceaf6c49984667c02f1e8
|
[
"MIT"
] | null | null | null |
modelproject/model_project.ipynb
|
NumEconCopenhagen/projects-2020-anders-frederik
|
1e0b4b89c65c11c99a8ceaf6c49984667c02f1e8
|
[
"MIT"
] | 8 |
2020-04-18T13:06:58.000Z
|
2020-05-12T15:03:09.000Z
|
modelproject/model_project.ipynb
|
NumEconCopenhagen/projects-2020-anders-frederik
|
1e0b4b89c65c11c99a8ceaf6c49984667c02f1e8
|
[
"MIT"
] | 1 |
2020-04-19T09:34:52.000Z
|
2020-04-19T09:34:52.000Z
| 325.256579 | 62,236 | 0.923942 | true | 3,251 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.901921 | 0.859664 | 0.775349 |
__label__eng_Latn
| 0.986862 | 0.639727 |
```python
# General import
import numpy as np
import scipy.sparse as sparse
import time
import matplotlib.pyplot as plt
```
```python
# pyMPC import
from pyMPC.mpc import MPCController
```
## System dynamics ##
Point mass $M=2\; \text{Kg}$ subject to an input force $F_{ext}$ and viscous friction with coefficient $b = 0.3\;N \cdot \frac{s}{m}$.
\begin{equation}
\begin{split}
\dot p &= v\\
\dot v &= -\frac{b}{M}v + \frac{1}{M}F_{ext}
\end{split}
\end{equation}
System equations discretized with sampling time $T_s = 0.2~\text{s}$.
```python
# MPC system matrices #
Ts = 0.2 # sampling time (s)
M = 2 # mass (Kg)
b = 0.3 # friction coefficient (N*s/m)
# MPC model dynamics: x_k+1 = Ad*x_k + Bd*u_k
Ad = sparse.csc_matrix([
[1.0, Ts],
[0, 1.0 -b/M*Ts]
])
Bd = sparse.csc_matrix([
[0.0],
[Ts/M]])
# Continuous-time system matrices (just for reference, not used)
Ac = np.array([
[0.0, 1.0],
[0, -b/M]]
)
Bc = np.array([
[0.0],
[1/M]
])
```
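The discrete-time matrices above are consistent with a simple forward-Euler discretization of the continuous-time model. The check below is a sketch (not part of the original example) and assumes the cell above has already been run:
```python
# forward Euler: x_{k+1} = x_k + Ts*(Ac x_k + Bc u_k) = (I + Ts*Ac) x_k + (Ts*Bc) u_k
Ad_euler = np.eye(2) + Ts*Ac
Bd_euler = Ts*Bc
print(np.allclose(Ad.toarray(), Ad_euler))  # True
print(np.allclose(Bd.toarray(), Bd_euler))  # True
```
This only checks the matrices stated above; the MPC controller itself uses Ad and Bd directly.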
```python
# MPC reference input and states (set-points)
pref = 7.0
vref = 0.0
xref = np.array([pref, vref]) # reference state
uref = np.array([0.0]) # reference input
uminus1 = np.array([0.0]) # input at time step negative one - used to penalize the first delta u at time instant 0. Could be the same as uref.
```
```python
# MPC constraints
xmin = np.array([-100.0, -100.0])
xmax = np.array([100.0, 100.0])
umin = np.array([-1.2])
umax = np.array([1.2])
# Constraints input variation with respect to previous sample
Dumin = np.array([-2e-1])
Dumax = np.array([2e-1])
# MPC objective function weights
Qx = sparse.diags([0.5, 0.1]) # Quadratic cost for states x0, x1, ..., x_N-1
QxN = sparse.diags([0.5, 0.1]) # Quadratic cost for xN
Qu = 2.0 * sparse.eye(1) # Quadratic cost for u0, u1, ...., u_N-1
QDu = 10.0 * sparse.eye(1) # Quadratic cost for Du0, Du1, ...., Du_N-1
```
```python
# Initial state
x0 = np.array([0.1, 0.2]) # initial state
# Prediction horizon
Np = 20
```
```python
# Initialize and setup MPC controller
K = MPCController(Ad,Bd,Np=Np, x0=x0,xref=xref,uminus1=uminus1,
Qx=Qx, QxN=QxN, Qu=Qu,QDu=QDu,
xmin=xmin,xmax=xmax,umin=umin,umax=umax,Dumin=Dumin,Dumax=Dumax)
K.setup() # this initializes the QP problem for the first step
```
```python
# Simulate in closed loop. Use MPC model as real system
[nx, nu] = Bd.shape # number of states and number or inputs
len_sim = 20 # simulation length (s)
nsim = int(len_sim/Ts) # simulation length(timesteps)
xsim = np.zeros((nsim,nx))
usim = np.zeros((nsim,nu))
tsim = np.arange(0,nsim)*Ts
time_start = time.time()
xstep = x0
uMPC = uminus1
for i in range(nsim):
xsim[i,:] = xstep
# MPC update and step. Could be in just one function call
K.update(xstep, uMPC) # update with measurement
uMPC = K.output() # MPC step (u_k value)
usim[i,:] = uMPC
xstep = Ad.dot(xstep) + Bd.dot(uMPC) # Real system step (x_k+1 value)
time_sim = time.time() - time_start
```
```python
# Plot results
fig,axes = plt.subplots(3,1, figsize=(10,10))
axes[0].plot(tsim, xsim[:,0], "k", label='p')
axes[0].plot(tsim, xref[0]*np.ones(np.shape(tsim)), "r--", label="pref")
axes[0].set_title("Position (m)")
axes[1].plot(tsim, xsim[:,1], label="v")
axes[1].plot(tsim, xref[1]*np.ones(np.shape(tsim)), "r--", label="vref")
axes[1].set_title("Velocity (m/s)")
axes[2].plot(tsim, usim[:,0], label="u")
axes[2].plot(tsim, uref*np.ones(np.shape(tsim)), "r--", label="uref")
axes[2].set_title("Force (N)")
for ax in axes:
ax.grid(True)
ax.legend()
```
|
6f9f96dc82f52a3d89003a0b84fcf3fdd9941f4c
| 59,138 |
ipynb
|
Jupyter Notebook
|
examples/example_point_mass.ipynb
|
forgi86/pyMPC
|
291db149554767a035fcb01df3fed7a6b3fe60e4
|
[
"MIT"
] | 84 |
2019-05-28T09:27:37.000Z
|
2022-03-31T08:38:23.000Z
|
examples/example_point_mass.ipynb
|
passion4energy/pyMPC
|
4b004ba707dab49cd36d96a3575b8593c870a904
|
[
"MIT"
] | 2 |
2020-04-17T00:03:27.000Z
|
2021-01-30T11:35:58.000Z
|
examples/example_point_mass.ipynb
|
passion4energy/pyMPC
|
4b004ba707dab49cd36d96a3575b8593c870a904
|
[
"MIT"
] | 20 |
2019-10-13T13:50:16.000Z
|
2022-03-31T08:38:25.000Z
| 238.459677 | 52,688 | 0.910413 | true | 1,261 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.851953 | 0.815232 | 0.69454 |
__label__eng_Latn
| 0.570297 | 0.45198 |
>>> Work in Progress
#### Outline
- Perceptron
- Exponential Family
- Generalized Linear Models(GLM)
- Softmax Regression(Multiclass classification)
### Logistic Regression (Recap)
- Logistic Regression uses sigmoid function
- its input $z$ ranges from $-\infty$ to $\infty$, while the output $g(z)$ ranges from 0 to 1, which can be interpreted as a probability
> $$ g(z) = \frac{1}{1+e^{-z}} $$
- At z=0, g(z) = 0.5
- As z tends to $-\infty$, g converges to 0
- As z tends to $\infty$, g converges to 1
- variant of this is perceptron
### Perceptron
- is not used widely because it does not have a probabilistic interpretation
- is taught for historical reasons
- logistic regression is a softer version of perceptron
> $$\begin{equation}
g(z) =
\begin{cases}
1 & \text{if z $\ge$ 0}\\
0 & \text{if z $\lt$ 0}
\end{cases}
\end{equation}$$
- hypothesis function
> $h_{\theta}(x) = g(\theta^{T}x)$
$\tiny{\text{YouTube-Stanford-CS229-Andrew Ng/Anand Avati}}$
> $\theta_{j} := \theta_{j} + \alpha(y^{(i)} - h_{\theta}(x^{(i)}))x_{j}^{(i)}$
- In this equation $(y^{(i)} - h_{\theta}(x^{(i)}))$ is scalar, because $y^{(i)}$ is either 0/1 and so will be $h_{\theta}(x^{(i)})$
- So the result can be either
$\begin{equation}
=
\begin{cases}
0 & \text{if algorithm got it right}\\
+1 & \text{if wrong $y^{(i)} = 1$}\\
-1 & \text{if wrong $y^{(i)} = 0$}
\end{cases}
\end{equation}$
- A result of 0 means the example is already correctly classified, so you do nothing
- A result of +1/-1 means the example is misclassified, and you either add or subtract a small multiple of the example ($\alpha x_{j}^{(i)}$)
- This will shift the decision boundary correctly
$\tiny{\text{YouTube-Stanford-CS229-Andrew Ng/Anand Avati}}$
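A minimal NumPy sketch of the perceptron update above (illustrative only; the variable names and learning rate are assumptions, not from the lecture):
```python
import numpy as np

def perceptron_step(theta, x_i, y_i, alpha=0.1):
    """One update: theta := theta + alpha * (y - h_theta(x)) * x."""
    h = 1.0 if theta @ x_i >= 0 else 0.0      # threshold hypothesis g(theta^T x)
    return theta + alpha * (y_i - h) * x_i    # no change when the example is classified correctly

# toy usage: two features plus an intercept feature
theta = np.zeros(3)
x_i = np.array([1.0, 0.5, -1.2])
y_i = 1.0
theta = perceptron_step(theta, x_i, y_i)
```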
### Exponential Family
- its a class of probability distribution
- they are closely related to GLM
<br>
- PDF:
> $P(y;\eta) = b(y) \text{exp}[\eta^{T}T(y) - a(\eta)]$
- $y$: the data - the exponential family is used to model the distribution of the output $y$
- $\eta$: natural parameter
- $T(y)$: sufficient statistics, cannot involve $\eta$
- b(y): base measure, cannot involve $\eta$
- $a(\eta)$: log partition function
> $P(y;\eta) = \frac{b(y) e^{(\eta^{T}T(y))}}{e^{a(\eta)}}$
- $a(\eta)$: is also called normalizing constant of probability distribution
#### Types of exponential family
##### Bernoulli distribution
- Bernoulli distribution belongs to the exponential family
- PDF
>$P(y; \phi) $
>$= \phi^{y}(1-\phi)^{1-y}$
>$ = exp(log(\phi^{y}(1-\phi)^{1-y}))$
>$ = exp[log(\frac{\phi}{1-\phi})y + log(1-\phi)]$
- where
>$b(y) = 1$
>$T(y) = y$
>$\eta = log(\frac{\phi}{1-\phi}) \Rightarrow \phi = \frac{1}{1+e^{-\eta}}$
>$a(\eta) = -log(1-\phi) = -log(1-\frac{1}{1+e^{-\eta} }) = log(1+e^{\eta})$
- We are linking the canonical parameters to natural parameters here
##### Gaussian distribution
- Gaussian distribution - with fixed variance
- Assume $\sigma^{2} = 1$
- PDF
>$P(y; \mu)$
>$= \frac{1}{\sqrt(2\pi)}exp(-\frac{(y-\mu)^{2}}{2})$
>$ = \frac{1}{\sqrt(2\pi)} e^{-\frac{y^{2}}{2}}exp(\mu y - \frac{1}{2}\mu ^{2}) $
- where
>$b(y) = \frac{1}{\sqrt(2\pi)} e^{-\frac{y^{2}}{2}}$
>$T(y) = y$
>$\eta = \mu$
>$a(\eta) = \frac{\mu^{2}}{2} = \frac{\eta^{2}}{2}$
##### Other distributions:
- How do you decide which distribution to use?
- The task in reality tells/influences you which distribution to use
- Real: Gaussian
- regression - predict house prices
- Binary: Bernoulli
- classification
- Integer Count: Poisson
- number of visitors to a web page
- Real positive: Gamma, Exponential
- Prob distribution over other distribution: Beta, Dirichlet (used mostly in Bayesian statistics)
#### Properties of Exponential family:
* The log-likelihood is concave in $\eta$, so performing maximum likelihood estimation (MLE) wrt $\eta$ is a concave optimization problem
* NLL is convex (negative log likelihood)
* $E[y;\eta] = \frac{\partial}{\partial \eta} a(\eta)$
* $Var[y;\eta] = \frac{\partial^{2}}{\partial \eta^{2}} a(\eta)$
* Generally, to calculate distribution properties (mean and variance) you need to integrate; in the exponential family you just differentiate $a(\eta)$ (see the Bernoulli check below)
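As a quick check of these properties for the Bernoulli member derived above, where $a(\eta) = \log(1+e^{\eta})$:
> $E[y;\eta] = \frac{\partial}{\partial \eta} \log(1+e^{\eta}) = \frac{e^{\eta}}{1+e^{\eta}} = \frac{1}{1+e^{-\eta}} = \phi$
> $Var[y;\eta] = \frac{\partial^{2}}{\partial \eta^{2}} a(\eta) = \phi(1-\phi)$
which are exactly the mean and variance of a Bernoulli($\phi$) variable.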
### Generalized Linear Models (GLM)
A natural extension of the exponential family to include covariates/input features. Powerful models can be built by using this.
<br>
Assumption/Design choices (to move from exponential family to GLM):
i) $y | x; \theta \sim $ Exponential family$(\eta) $
ii) $\eta = \theta^{T}x $ where $ \theta \in \mathbb R^{n}, x \in \mathbb R^{n}$
iii) At test time: Output E[y|x; $\theta$]
- $h_{\theta}(x) = E[y|x; \theta]$ - is the hypothesis function
- if we plugin exponential family as Gaussian, the hypothesis will turn out to be Gaussian hypothesis of linear regression
- if we plugin exponential family as Bernoulli, the hypothesis will turn out to be Bernoulli hypothesis of logistic regression
<br>
- One way to visualize this is (as in figure below)
- there is a model (linear model here)
- given x, there is a learnable parameter $\theta$, and $\theta^{T}x$ will give you a parameter $\eta$
- there is a distribution
- the distribution is a member of exponential family and parameter for this distribution is output of linear model
- we choose exponential family based on the data that we have (classification problem, regression problem, or other)
- we will choose appropriate b, a and T based on distribution of your choice
- expectation
- During test time
- $E[y; \eta] = E[y; \theta^{T}x] = h_{\theta}(x)$ - this is the hypothesis function
- Caveat:
- the parameter that we are learning during gradient descent is $\theta$
    - we don't learn any parameter of the exponential family itself, e.g., $\mu, \sigma^{2}, \eta$
- we learn $\theta$, that is part of model and not part of distribution
- During train time
    - we perform gradient ascent/descent on the log-likelihood of $y$, with the natural parameter reparameterized by the linear model ($\eta = \theta^{T}x$)
- the gradient ascent is done by taking gradients on $\theta$
- Question
- Are we training $\theta$ to predict the parameter of exponential family distribution whose mean is our prediction for y
- True
$\tiny{\text{YouTube-Stanford-CS229-Andrew Ng/Anand Avati}}$
- This is how GLMs are an extension of exponential families: you reparameterize the natural parameter with the linear model and you get a GLM.
#### GLM training
- At train time, we perform maximum likelihood over the log probability of y with respect to $\theta$
- Learning update rule
- plugin appropriate $h_\theta(x)$ depending on the choice of distribution and you can start learning
- the __learning update rule__ is the __same__ for all GLMs, for classification or regression, just the $h_{\theta}$ varies.
> $ \theta _{j} := \theta _{j} - \alpha (h_{\theta}(x^{(i)}) - y^{(i)}).x_{j}^{(i)} $
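A minimal sketch of this shared update, where only the hypothesis $h_{\theta}$ changes with the chosen distribution (the function names and learning rate are illustrative assumptions):
```python
import numpy as np

def sgd_step(theta, x_i, y_i, h, alpha=0.01):
    """One GLM stochastic-gradient step; h(theta, x) is the canonical hypothesis."""
    return theta - alpha * (h(theta, x_i) - y_i) * x_i

h_linear   = lambda theta, x: theta @ x                            # Gaussian  -> linear regression
h_logistic = lambda theta, x: 1.0 / (1.0 + np.exp(-(theta @ x)))   # Bernoulli -> logistic regression
h_poisson  = lambda theta, x: np.exp(theta @ x)                    # Poisson   -> count regression

theta = np.zeros(3)
theta = sgd_step(theta, np.array([1.0, 0.2, -0.5]), 1.0, h_logistic)
```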
#### Terminology
- $\eta$ - natural parameter
- $\mu = E[y_{i}; \eta] = g(\eta)$ - natural parameter to the mean of the function - canonical response function
- $\eta = g^{-1}(\mu)$ - canonical link function
- $g(\eta) = \frac{\partial}{\partial \eta}a(\eta)$
<br>
- There are 3 types of parameterization being used here:
i) Model parameter - $\theta$ - this is the only parameter that is learned
ii) Natural parameter - $\eta$
iii) Canonical parameter
- $\phi$ - Bernoulli
- $\mu, \sigma^{2}$ - Gaussian
- $\lambda$ - Poisson
<br>
- How are they linked
- Model parameter and Natural parameter are linked by design choice ($\theta^{T}x$)
- g links natural parameter to canonical parameter
- $g^{-1}$ links canonical parameter to natural parameter
<br>
- Logistic Regression
> $h_{\theta}(x) = E[y|x;\theta] = \phi = \frac{1}{1+e^{-\eta}} = \frac{1}{1+e^{-\theta^{T}x}} $
- Are GLM used for classification/regression
- depends on choice of distribution
- GLM just a general way of model data which can be binary, real, exponential or others
<br>
- Assumptions
- Regression
- At every given x, there is a y given x which is Gaussian and is parameterized by $\theta^{T}x$ as mean
- The assumption is there was a Gaussian distribution and you sampled the value from this Gaussian distribution
- We assume that the data was generated as above and we will work backward to find $\theta$, which will give us boundary condition
$\tiny{\text{YouTube-Stanford-CS229-Andrew Ng/Anand Avati}}$
### Softmax Regression
- Cross entropy minimization
- Multiclass classification
- can work over thousand of classes
- Type of GLM
- one vector per class
- generalization of logistic regression, with different set of parameters per class
- Goal is to:
- Start from this data, learn a model that can given a new data point, make a prediction of its class
- Notation
- k - # of classes
- $x^{(i)} \in \mathbb R^{n}$
  - Label: $y \in \{0,1\}^{k}$ (a one-hot vector)
- For example: [0, 0, 0, 1, 0] - assuming there are 5 class here
- label is a vector which indicates which class the x corresponds to
    - each element in the vector corresponds to a class
    - exactly one element of the label vector is 1
- Each class has its own set of parameters
- $\theta_{class} \in \mathbb R^{n}$
$\tiny{\text{YouTube-Stanford-CS229-Andrew Ng/Anand Avati}}$
- Example
* For a given x, $\theta_{i}^{T}x$ (logit space) will have a range of $-\infty$ to $\infty$
* goal is to get probability distribution over classes
    * in order to do that, we exponentiate the logits ($exp(\theta_{i}^{T}x)$), which makes everything positive
* then we normalize this, by the sum of all $\frac{e^{\theta_{i}^{T}x}}{\sum_{all\space classes} e^{\theta_{i}^{T}x}}$
* this gives us a probability distribution $\hat{p}(y)$ over all the classes
* minimize the distance between true label and learned label distribution using cross entropy
$\tiny{\text{YouTube-Stanford-CS229-Andrew Ng/Anand Avati}}$
* minimize the distance between these two distributions or minimize the cross entropy ($p, \hat p $)
> Cross entropy($p, \hat p$)
> $ = -\sum\limits_{y \in \{\Delta, \square, \circ \} } p(y) \text{log}\hat p(y)$
> $ = - log\space\hat p(y_{\Delta})$ - associated class here is triangle
> $ = - log \frac{e^{\theta_{\Delta}^{T}x}}{\sum_{c \in \{\Delta, \square, \circ \}} e^{\theta_{c}^{T}x}}$
    * treat this as a loss and apply gradient descent wrt the parameters (see the sketch below)
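A small NumPy sketch of the softmax probabilities and cross-entropy gradient described above (illustrative; one parameter vector per class stacked as rows of `Theta`):
```python
import numpy as np

def softmax_cross_entropy(Theta, x, y_onehot):
    """Theta: (k, n), one row per class; x: (n,); y_onehot: (k,)."""
    logits = Theta @ x                        # theta_c^T x for every class c
    logits = logits - logits.max()            # shift for numerical stability
    p_hat = np.exp(logits) / np.exp(logits).sum()
    loss = -np.sum(y_onehot * np.log(p_hat))  # = -log p_hat[true class]
    grad = np.outer(p_hat - y_onehot, x)      # gradient of the loss w.r.t. Theta
    return loss, grad

Theta = np.zeros((3, 4))                      # 3 classes, 4 features
x = np.array([1.0, 0.3, -0.7, 2.0])
y = np.array([0.0, 1.0, 0.0])                 # one-hot label
loss, grad = softmax_cross_entropy(Theta, x, y)
```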
|
2666fdacc5e0b736612814ce4656c4ded0a536e9
| 15,334 |
ipynb
|
Jupyter Notebook
|
cs229_ml/lec04-Perceptron-GLM.ipynb
|
chandrabsingh/learnings
|
a3f507bbbf46582ce5a64991983dfc0759db0af5
|
[
"MIT"
] | null | null | null |
cs229_ml/lec04-Perceptron-GLM.ipynb
|
chandrabsingh/learnings
|
a3f507bbbf46582ce5a64991983dfc0759db0af5
|
[
"MIT"
] | null | null | null |
cs229_ml/lec04-Perceptron-GLM.ipynb
|
chandrabsingh/learnings
|
a3f507bbbf46582ce5a64991983dfc0759db0af5
|
[
"MIT"
] | null | null | null | 40.459103 | 156 | 0.548128 | true | 3,115 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.872347 | 0.875787 | 0.76399 |
__label__eng_Latn
| 0.9835 | 0.613338 |
```python
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from chmp.ds import mpl_set, get_color_cycle
```
```python
# helper for gradient checking
def approximate_gradient(x, func, eps=1e-5):
res = np.zeros(x.size)
for i in range(x.size):
d = np.zeros(x.size)
d[i] = eps
res[i] = (func(x + d) - func(x - d)) / (2 * eps)
return res
```
# Linear models for regression
## Linear Basis Function Models
### The Bias Variance Decomposition
## Bayesian linear regression
Prior for $L_q$ regularization in 1 dimension (p. 156):
$$
p(w|\alpha) =
\frac{q}{2} \left( \frac{\alpha}{2} \right)^{1/q} \frac{1}{\Gamma(1/q)}
\exp \left( -\frac{\alpha}{2} |w|^q \right)
$$
### Gaussian likelihood + known variance
$$
\begin{align}
&\text{likelihood} &\;
&p(t|\phi, w, \beta) = \mathcal{N}(t|w^T \phi, \beta^{-1})
\\
&\text{prior} &\;
&p(w|m_0, \alpha) = \mathcal{N}(w|m_0, \alpha^{-1})
\\
&\text{posterior} &\;
&p(w|\{t, \phi\}) = \mathcal{N}(w|m_N, S_N)
\\
&\; &\;
&m_N = S_N \left( \alpha m_0 + \beta \Phi^t T \right)
\\
&\; &\;
&S_N^{-1} = \alpha + \beta \Phi^T \Phi
\\
&\text{predictive} &\;
&p(t|\phi, \{y, \phi\}) = \mathcal{N}(t|m_N^T \phi, \sigma_N^2)
\\
&\; &\;
&\sigma_N^2 = \beta^{-1} + \phi^T S_N \phi
\end{align}
$$
Note, these equations are also valid if $\alpha$ is a matrix.
Note, notation is unified with Sec. 3.5 ($S_0^{-1} \rightarrow \alpha$).
```python
# TODO: check for inverse of alpha? does the definition match up?
def update_posterior(params, observed, beta):
m, s = params
x, y = observed
s_new = np.linalg.inv(np.linalg.inv(s) + beta * x.T @ x)
m_new = ((m @ np.linalg.inv(s) + beta * y.T @ x) @ s_new).reshape(-1)
return m_new, s_new
def eval_predictive(x, params, beta):
    m, s = params
    return (
        x @ m,
        # TODO: optimize the diag part
        1.0 / beta + np.sum((x @ s) * x, axis=1)
    )
def empirical_bayes(observed, alpha, beta):
    x, y = observed
    y_flat = y.reshape(-1)
    N, M = x.shape
    lam0 = np.linalg.eigvalsh(x.T @ x)  # eigenvalues of Phi^T Phi; Bishop's lambda_i = beta * these
    for _ in range(10):
        m_0, s_0 = np.zeros(M), np.diag([1 / alpha] * M)
        m_N, s_N = update_posterior((m_0, s_0), observed, beta)
        lam = beta * lam0
        gamma = np.sum(lam / (lam + alpha))
        alpha = gamma / (m_N.T @ m_N)
        beta = (N - gamma) / np.sum((y_flat - x @ m_N) ** 2.0)  # from 1/beta = sum(...) / (N - gamma)
    return (m_N, s_N), alpha, beta
def plot_gaussian2d(m, s, extent, bins=(10, 10)):
# TODO: normalize s
x = np.linspace(extent[0][0], extent[0][1], bins[0])
y = np.linspace(extent[1][0], extent[1][1], bins[1])
x, y = np.meshgrid(x, y)
m_x, m_y = m
    (t_xx, t_xy), (t_yx, t_yy) = np.linalg.inv(s)
norm = 1 / (2.0 * np.pi * np.linalg.det(s) ** 0.5)
p = norm * np.exp(-0.5 * (
t_xx * (x - m_x) * (x - m_x) +
t_xy * (x - m_x) * (y - m_y) +
t_yx * (y - m_y) * (x - m_x) +
t_yy * (y - m_y) * (y - m_y)
))
p = p.astype('complex128').real
plt.imshow(
p,
extent=[extent[0][0], extent[0][1], extent[1][0], extent[1][1]],
origin='lower',
aspect='auto',
)
```
```python
# Example (see fig. 3.7)
# beta = 25, alpha = 1 / 2.0
w_true = np.asarray([-0.3, 0.5]).reshape((2, 1))
np.random.seed(241)
n_samples = 20
x_obs = np.stack([np.ones(n_samples), np.random.uniform(low=-1, high=+1, size=n_samples)])
x_obs = x_obs.T
y_obs = np.random.normal(x_obs @ w_true, scale=0.2)
x_eval = np.stack([np.ones(100), np.linspace(-1, 1, 100)])
x_eval = x_eval.T
```
```python
beta = 25
alpha = 1 / 2.0
params_0 = np.zeros(2), np.diag([1 / alpha, 1 / alpha])
m_N, s_N = update_posterior(params_0, (x_obs, y_obs), beta)
p_m, p_s = eval_predictive(x_eval, (m_N, s_N), beta)
(m_emp, s_emp), alpha_emp, beta_emp = empirical_bayes((x_obs, y_obs), alpha, beta)
```
```python
c0, c1, c2 = get_color_cycle(3)
plt.figure(figsize=(16, 4))
plt.subplot(1, 3, 1)
plot_gaussian2d(m_N, s_N, extent=[[-0.75, +0.25], [-0.25, +0.75]], bins=(50, 50))
plt.plot([w_true[0]], [w_true[1]], 'wo')
mpl_set(xlabel='w_0', ylabel='w_1', colorbar=True, title='Posterior fixed hyperparameters')
plt.subplot(1, 3, 2)
plot_gaussian2d(m_emp, s_emp, extent=[[-0.75, +0.25], [-0.25, +0.75]], bins=(50, 50))
plt.plot([w_true[0]], [w_true[1]], 'wo')
mpl_set(xlabel='w_0', ylabel='w_1', colorbar=True, title='Posterior empirical Bayes')
plt.subplot(1, 3, 3)
plt.scatter(x_obs[:, 1], y_obs[:, 0], color=c0, label='data')
plt.plot(x_eval[:, 1], x_eval @ w_true, color=c1, label='truth')
plt.plot(x_eval[:, 1], p_m, label='predictive', color=c2)
plt.fill_between(x_eval[:, 1], p_m - p_s ** 0.5, p_m + p_s ** 0.5, color=c2, alpha=0.2)
mpl_set(xlabel='x', ylabel='y', legend=True)
pass
```
## Bayesian model comparison
Assume multiple models $\mathcal{M}_i$ under investigation. The posterior for the model is given by
$$
\begin{align}
\color{red}{p(\mathcal{M}_i|\mathcal{D})} &=
\frac{P(\mathcal{M}_i) \color{blue}{p(\mathcal{D}|\mathcal{M}_i)}}{p(\mathcal{D})}
&\; &\;
\\
p(t|x, \mathcal{D}) &= \sum_i p(t|x, \mathcal{M}_i, \mathcal{D}) \color{red}{p(\mathcal{M}_i|\mathcal{D})}
&\; &\;
\\
\log \color{blue}{p(\mathcal{D}|\mathcal{M}_i)} &=
\log \int \mathrm{d}w\; p(\mathcal{D}|w, \mathcal{M}_i) p(w|\mathcal{M}_i)
&\; &\;
\\
&\approx \log \left[ p(\mathcal{D}|w_\mathrm{MAP}, \mathcal{M}_i)\,
p(w_\mathrm{MAP}|\mathcal{M}_i)\, \Delta w_\mathrm{posterior} \right]
&\; &\text{strongly peaked posterior}
\\
&\approx \log p(\mathcal{D}|w_\mathrm{MAP}, \mathcal{M}_i) +
\log \frac{\Delta w_\mathrm{posterior} }{\Delta w_\mathrm{prior}}
&\; &\text{flat prior}
\\
&\approx \log p(\mathcal{D}|w_\mathrm{MAP}, \mathcal{M}_i) +
M \log \frac{\Delta w_\mathrm{posterior} }{\Delta w_\mathrm{prior}}
&\; &\text{for M parameters}
\end{align}
$$
For a flat model prior $p(\mathcal{M}_i) = \mathrm{const}$, the only relevant quantity is the model evidence (or marginal likelihood) $\color{blue}{p(\mathcal{D}|\mathcal{M}_i)}$.
Note that, in the approximation above, the model evidence is the sum between a likelihood term (how good the data is fitted) and a complexity term (how many parameters are included). This form favors a model that is complex enough to fit the data, but not too complex.
When approximating the predictive distribution by the most likely model, one selects the correct model on average, as can be seen as follows: assume the correct model is $\mathcal{M}^\star$, then the average of the log model evidence is equal to
$$
\mathbb{E}_\mathcal{D} \log p(\mathcal{D}|\mathcal{M}_i) =
-\int \mathrm{d} \mathcal{D}\; p(\mathcal{D}|\mathcal{M}^\star)
\log \frac{p(\mathcal{D}|\mathcal{M}^\star)}{p(\mathcal{D}|\mathcal{M}_i)} + \mathrm{const}
$$
I.e., it equals minus the KL divergence to the correct model (plus a constant), which is maximized for the correct model choice.
## The Evidence Approximation
Marginalize over some parameters and choose the maximum likelihood value for the other parameters. Also called Empirical Bayes. For example: marginalize over the weights, but determine the scale parameters by maximizing the marginal likelihood.
$$
p(\alpha, \beta|\mathcal{D}) \propto p(\mathcal{D}| \alpha, \beta) p(\alpha, \beta)
$$
Assume flat prior and search for $\alpha, \beta$ that maximize the marginal data likelihood
$$
p(\mathcal{D}| \alpha, \beta) =
\int\mathrm{d}w\; p(\mathcal{D}| w, \beta)\,p(w|\alpha)
$$
Note, that both $\alpha$ and $\beta$ are scalar here.
The maximum is given by
$$
\begin{align}
\gamma &= \sum_i \frac{\lambda_i}{\lambda_i + \alpha} \\
\alpha &= \frac{\gamma}{m_N^T m_N} \\
\beta^{-1} &= \frac{1}{N - \gamma} \sum_n \left( t_n - m_N^T \phi_n \right)^2
\end{align}
$$
In principle it would also be possible to first marginalize out the scale parameters and then approximate in the weight variables. However, empirically this leads to poorer results.
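As a sketch (not part of the original notebook), the marginal likelihood itself can be evaluated in closed form for this Gaussian model — the standard PRML ch. 3 expression, assuming the zero-mean prior used above — and reused with the quantities defined in the earlier cells:
```python
def log_evidence(observed, alpha, beta):
    """log p(t | alpha, beta) for the Bayesian linear model with a zero-mean prior."""
    x, y = observed
    y = y.reshape(-1)
    N, M = x.shape
    A = alpha * np.eye(M) + beta * x.T @ x        # posterior precision S_N^{-1}
    m_N = beta * np.linalg.solve(A, x.T @ y)      # posterior mean
    E_mN = 0.5 * beta * np.sum((y - x @ m_N) ** 2) + 0.5 * alpha * m_N @ m_N
    return (0.5 * M * np.log(alpha) + 0.5 * N * np.log(beta)
            - E_mN - 0.5 * np.linalg.slogdet(A)[1] - 0.5 * N * np.log(2 * np.pi))

# e.g. compare fixed hyperparameters with the empirical-Bayes estimates:
# log_evidence((x_obs, y_obs), 1 / 2.0, 25), log_evidence((x_obs, y_obs), alpha_emp, beta_emp)
```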
## Limitations of Fixed Basis Functions
```python
```
|
843683d65d333db52d4729f355d7b4bee4f41cbf
| 56,429 |
ipynb
|
Jupyter Notebook
|
BuildingBlocks/Bishop_Notes_03.ipynb
|
chmp/misc-exp
|
2edc2ed598eb59f4ccb426e7a5c1a23343a6974b
|
[
"MIT"
] | 6 |
2017-10-31T20:54:37.000Z
|
2020-10-23T19:03:00.000Z
|
BuildingBlocks/Bishop_Notes_03.ipynb
|
chmp/misc-exp
|
2edc2ed598eb59f4ccb426e7a5c1a23343a6974b
|
[
"MIT"
] | 7 |
2020-03-24T16:14:34.000Z
|
2021-03-18T20:51:37.000Z
|
BuildingBlocks/Bishop_Notes_03.ipynb
|
chmp/misc-exp
|
2edc2ed598eb59f4ccb426e7a5c1a23343a6974b
|
[
"MIT"
] | 1 |
2019-07-29T07:55:49.000Z
|
2019-07-29T07:55:49.000Z
| 131.536131 | 42,816 | 0.850945 | true | 2,864 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.849971 | 0.782662 | 0.665241 |
__label__eng_Latn
| 0.667948 | 0.383908 |
# Divorce rates and their relationship with Marriage rate and Median Age Marriage
```R
# load data and copy
library(rethinking)
options(mc.cores = parallel::detectCores())
data(WaffleDivorce)
d <- WaffleDivorce
# standardize variables
d$A <- scale( d$MedianAgeMarriage )
d$D <- scale( d$Divorce )
```
Loading required package: rstan
Loading required package: StanHeaders
Loading required package: ggplot2
rstan (Version 2.21.2, GitRev: 2e1f913d3ca3)
For execution on a local, multicore CPU with excess RAM we recommend calling
options(mc.cores = parallel::detectCores()).
To avoid recompilation of unchanged Stan programs, we recommend calling
rstan_options(auto_write = TRUE)
Do not specify '-march=native' in 'LOCAL_CPPFLAGS' or a Makevars file
Loading required package: parallel
rethinking (Version 2.13)
Attaching package: 'rethinking'
The following object is masked from 'package:stats':
rstudent
```R
sd( d$MedianAgeMarriage )
```
1.24363030138808
```R
m5.1 <- quap(
alist(
D ~ dnorm( mu , sigma ) ,
mu <- a + bA * A ,
a ~ dnorm( 0 , 0.2 ) ,
bA ~ dnorm( 0 , 0.5 ) ,
sigma ~ dexp( 1 )
) , data = d )
```
## Prior Predictive Simulation
```R
set.seed(10)
prior <- extract.prior( m5.1 )
mu <- link( m5.1 , post=prior , data=list( A=c(-2,2) ) )
plot( NULL , xlim=c(-2,2) , ylim=c(-2,2) )
for ( i in 1:50 ) lines( c(-2,2) , mu[i,] , col=col.alpha("black",0.4) )
```
## Divorce vs Median Age Marriage
```R
# compute percentile interval of mean
A_seq <- seq( from=-3 , to=3.2 , length.out=30 )
mu <- link( m5.1 , data=list(A=A_seq) )
mu.mean <- apply( mu , 2, mean )
mu.PI <- apply( mu , 2 , PI )
# plot it all
plot( D ~ A , data=d , col=rangi2 )
lines( A_seq , mu.mean , lwd=2 )
shade( mu.PI , A_seq )
```
```R
d$M <- scale( d$Marriage )
m5.2 <- quap(
alist(
D ~ dnorm( mu , sigma ) ,
mu <- a + bM * M ,
a ~ dnorm( 0 , 0.2 ) ,
bM ~ dnorm( 0 , 0.5 ) ,
sigma ~ dexp( 1 )
) , data = d )
```
```R
# compute percentile interval of mean
A_seq <- seq( from=-3 , to=3.2 , length.out=30 )
mu <- link( m5.2 , data=list(M=A_seq) )
mu.mean <- apply( mu , 2, mean )
mu.PI <- apply( mu , 2 , PI )
# plot it all
plot( D ~ M , data=d , col=rangi2 )
lines( A_seq , mu.mean , lwd=2 )
shade( mu.PI , A_seq )
```
# Directed Acyclic Graph
```R
library(dagitty)
dag5.1 <- dagitty( "dag {
A -> D
A -> M
M -> D
}")
coordinates(dag5.1) <- list( x=c(A=0,D=1,M=2) , y=c(A=0,D=1,M=0) )
drawdag( dag5.1 )
```
## Another DAG with a different causal implication
```R
DMA_dag2 <- dagitty('dag{ D <- A -> M }')
coordinates(DMA_dag2) <- list( x=c(A=0,D=1,M=2) , y=c(A=0,D=1,M=0) )
drawdag(DMA_dag2)
```
```R
impliedConditionalIndependencies( DMA_dag2 )
```
D _||_ M | A
### The above relation translates to: D is independent of M, conditional on A
## The following is a multiple regression :-
```R
m5.3 <- quap(
alist(
D ~ dnorm( mu , sigma ) ,
mu <- a + bM*M + bA*A ,
a ~ dnorm( 0 , 0.2 ) ,
bM ~ dnorm( 0 , 0.5 ) ,
bA ~ dnorm( 0 , 0.5 ) ,
sigma ~ dexp( 1 )
) , data = d )
precis( m5.3 )
```
<table>
<caption>A precis: 4 × 4</caption>
<thead>
<tr><th></th><th scope=col>mean</th><th scope=col>sd</th><th scope=col>5.5%</th><th scope=col>94.5%</th></tr>
<tr><th></th><th scope=col><dbl></th><th scope=col><dbl></th><th scope=col><dbl></th><th scope=col><dbl></th></tr>
</thead>
<tbody>
<tr><th scope=row>a</th><td> 6.145791e-06</td><td>0.09707503</td><td>-0.1551385</td><td> 0.1551508</td></tr>
<tr><th scope=row>bM</th><td>-6.539899e-02</td><td>0.15077111</td><td>-0.3063603</td><td> 0.1755624</td></tr>
<tr><th scope=row>bA</th><td>-6.135264e-01</td><td>0.15098166</td><td>-0.8548242</td><td>-0.3722285</td></tr>
<tr><th scope=row>sigma</th><td> 7.851074e-01</td><td>0.07784077</td><td> 0.6607028</td><td> 0.9095120</td></tr>
</tbody>
</table>
```R
plot( coeftab(m5.1,m5.2,m5.3), par=c("bA","bM") )
```
```R
# call link without specifying new data
# so it uses original data
mu <- link( m5.3 )
```
```R
# summarize samples across cases
mu_mean <- apply( mu , 2 , mean )
mu_PI <- apply( mu , 2 , PI )
```
```R
mu_mean
```
<ol class=list-inline><li>0.368354486649211</li><li>0.317846556171417</li><li>0.120172279734276</li><li>0.754594505305712</li><li>-0.35210242265142</li><li>0.114402384795662</li><li>-0.712026970344305</li><li>-0.322345564338635</li><li>-1.75748948800128</li><li>-0.118798644892876</li><li>0.0399074909436967</li><li>-0.501194958425354</li><li>1.30713399648199</li><li>-0.430038703852553</li><li>0.178068050755266</li><li>0.296693166231551</li><li>0.483538214776328</li><li>0.580402126397884</li><li>0.0657178960624549</li><li>-0.0585743662824406</li><li>-0.5847983864951</li><li>-1.13328867640735</li><li>-0.110195176519957</li><li>-0.0402545497768813</li><li>0.137379216480115</li><li>0.248008677498342</li><li>0.200437068524857</li><li>0.329386346048645</li><li>-0.310805774461407</li><li>-0.721743318476945</li><li>0.118451586059692</li><li>-1.10120331050511</li><li>0.167743888707763</li><li>0.256509397801258</li><li>-0.0677856485702234</li><li>0.750040238196824</li><li>0.04567738588231</li><li>-0.438034358310611</li><li>-0.971646219066463</li><li>-0.137726275313299</li><li>0.222198272379584</li><li>0.431412338693953</li><li>0.395277771527691</li><li>1.19245533419973</li><li>-0.354935996085724</li><li>-0.179022923503312</li><li>0.0519523466657839</li><li>0.481817521101744</li><li>-0.072947729593975</li><li>0.729896979946677</li></ol>
```R
mu_PI
```
<table>
<caption>A matrix: 2 × 50 of type dbl</caption>
<tbody>
<tr><th scope=row>5%</th><td>0.1562013</td><td>-0.01750389</td><td>-0.04314372</td><td>0.4147012</td><td>-0.5525603</td><td>-0.1328497</td><td>-0.9793711</td><td>-0.648136479</td><td>-2.433501</td><td>-0.3490095</td><td>...</td><td>0.0448409</td><td>0.1612790</td><td>0.1981170</td><td>0.7121631</td><td>-0.5910848</td><td>-0.365989353</td><td>-0.1278724</td><td>0.2605910</td><td>-0.3022358</td><td>0.188137</td></tr>
<tr><th scope=row>94%</th><td>0.5671654</td><td> 0.64324070</td><td> 0.28030955</td><td>1.0857921</td><td>-0.1537885</td><td> 0.3458035</td><td>-0.4343818</td><td>-0.006013426</td><td>-1.127705</td><td> 0.0894809</td><td>...</td><td>0.3966038</td><td>0.6773873</td><td>0.5886995</td><td>1.6518693</td><td>-0.1422689</td><td>-0.004714744</td><td> 0.2183203</td><td>0.6917975</td><td> 0.1341940</td><td>1.254396</td></tr>
</tbody>
</table>
```R
D_sim <- sim( m5.3 , n=1e4 )
```
```R
plot( mu_mean ~ d$D , col=rangi2 , ylim=range(mu_PI) ,
xlab="Observed divorce" , ylab="Predicted divorce" )
abline( a=0 , b=1 , lty=2 )
for ( i in 1:nrow(d) ) lines( rep(d$D[i],2) , mu_PI[,i] , col=rangi2 )
identify( x=d$D , y=mu_mean , labels=d$Loc )
```
```R
N <- 100 # number of cases
x_real <- rnorm( N ) # x_real as Gaussian with mean 0 and stddev 1
x_spur <- rnorm( N , x_real ) # x_spur as Gaussian with mean=x_real
y <- rnorm( N , x_real ) # y as Gaussian with mean=x_real
d <- data.frame(y,x_real,x_spur) # bind all together in data frame
```
```R
x_spur
```
<ol class=list-inline><li>-1.35205556179838</li><li>0.692630018489859</li><li>2.38970488462669</li><li>-0.277785286445208</li><li>2.20878196395475</li><li>-0.623157211371598</li><li>-0.691301777675285</li><li>-0.00841657520744366</li><li>-2.1068966619116</li><li>-2.475596402338</li><li>3.77752924711212</li><li>0.0869765810510864</li><li>-2.63222491112067</li><li>-2.72039080437196</li><li>-0.663492280904649</li><li>1.92060795532126</li><li>0.335483979433785</li><li>-0.282782624992253</li><li>1.01393662342351</li><li>2.97442700466915</li><li>0.901401246933798</li><li>-2.05511179248847</li><li>0.765853777414379</li><li>-1.4979684583892</li><li>0.0813784389131222</li><li>-1.31675350397186</li><li>0.70254161891636</li><li>0.574591348151105</li><li>2.8984309960213</li><li>-1.55120387921632</li><li>0.624935822546091</li><li>0.588252047530261</li><li>1.32475039281867</li><li>-1.1329325267783</li><li>0.157429074311472</li><li>0.0186486883598361</li><li>-0.417081802417184</li><li>-1.72451659683524</li><li>1.93998990465857</li><li>0.243119622903495</li><li>-0.343638634450165</li><li>2.39354237507181</li><li>0.5500243128989</li><li>-1.18004489026227</li><li>-0.871716808627928</li><li>0.859613671896576</li><li>0.033162777666438</li><li>2.40866859364035</li><li>1.23765568960264</li><li>-1.31312755104094</li><li>0.0405348814554642</li><li>0.433098259672419</li><li>0.575605324917927</li><li>0.0515943799359837</li><li>-1.39389954009737</li><li>-0.239471665560266</li><li>-1.09359506382444</li><li>1.12313077723405</li><li>1.19603301999564</li><li>3.18237860849344</li><li>3.01756658487018</li><li>-0.101827794104095</li><li>0.00973837101934016</li><li>-0.89691510430296</li><li>0.799666659022323</li><li>-0.966017873812466</li><li>0.72400599076021</li><li>2.34177963098057</li><li>-0.168071738321592</li><li>-2.0217800588304</li><li>-1.93221396839229</li><li>3.3250880218069</li><li>-0.728398228232703</li><li>0.267828899677299</li><li>-0.656170322463872</li><li>-0.0455085821980248</li><li>-0.719921621345485</li><li>-1.16773318714732</li><li>1.77104102706098</li><li>-0.490194900479153</li><li>-0.219563777817322</li><li>-0.118072362754051</li><li>-0.503507180341146</li><li>1.27167119844878</li><li>-0.57045068299662</li><li>2.08256023802364</li><li>-1.66558305864451</li><li>1.6780100832135</li><li>0.285611914944682</li><li>-1.38461374984414</li><li>1.18848395567335</li><li>1.03807732050792</li><li>1.28997891579462</li><li>0.727116348329177</li><li>1.00115597456983</li><li>4.82655855587325</li><li>-0.412610227194</li><li>-1.96255662668122</li><li>1.58977310437138</li><li>0.869737378417309</li></ol>
```R
x_real
```
<ol class=list-inline><li>0.46504871558418</li><li>0.104996326852726</li><li>0.591889277022188</li><li>0.456121018594855</li><li>1.13802392123417</li><li>-1.21721042748313</li><li>0.600311221961509</li><li>-1.10136482956525</li><li>-0.883278574282695</li><li>-0.969457867094617</li><li>2.21120726655929</li><li>0.781330749204854</li><li>-1.1628090726602</li><li>0.0164813378689372</li><li>-0.772072563770278</li><li>1.31261616585865</li><li>-1.30097706125507</li><li>1.89685450984775</li><li>1.55009918834605</li><li>1.51939375074762</li><li>-0.331432676279169</li><li>-1.66110778416189</li><li>0.712376193541223</li><li>-2.08962097413881</li><li>-0.052980535634609</li><li>-1.63824858304232</li><li>-0.352844033152804</li><li>0.690809926272784</li><li>1.3520587482415</li><li>-0.569337959990894</li><li>2.07566147817585</li><li>0.0401866561368483</li><li>0.0521689815121645</li><li>-0.957344116337265</li><li>0.000424660039592618</li><li>-0.574711312557669</li><li>-0.391885585632191</li><li>0.286070510211621</li><li>1.25198155374721</li><li>-0.429467160199921</li><li>-0.757008186470816</li><li>0.671992175500776</li><li>0.607393564444157</li><li>-0.0771225890925739</li><li>0.150965323857022</li><li>-0.452397772119638</li><li>-0.0634736510063597</li><li>-0.513526218325328</li><li>0.871929643061742</li><li>-0.963613924115809</li><li>0.354298983855878</li><li>-0.184439846755855</li><li>-0.143818257682809</li><li>1.38981428343578</li><li>-0.314723907597331</li><li>-1.36336930310008</li><li>0.558470264126142</li><li>-0.970500150066699</li><li>-0.00744609215997565</li><li>0.626770295873479</li><li>2.33889939043286</li><li>-0.176170200171509</li><li>-0.753746568899285</li><li>0.0674839915000366</li><li>-0.119438611501735</li><li>-0.742946582232449</li><li>0.306037345940029</li><li>1.45547127444296</li><li>0.4504564311188</li><li>-0.414867966712541</li><li>-1.2709710013827</li><li>1.49866852600129</li><li>0.351263269924126</li><li>-0.0141930404161029</li><li>-0.622948360782438</li><li>0.687860010292194</li><li>-0.618357566694332</li><li>-1.45181357454632</li><li>1.59663329619412</li><li>0.296462825072267</li><li>-0.407069732214744</li><li>0.617317395587624</li><li>-0.440971688864465</li><li>0.309480565407576</li><li>-0.404878083834447</li><li>-0.371848684746369</li><li>-1.32555213028749</li><li>1.52328907325857</li><li>0.355501079526269</li><li>0.568934108221817</li><li>0.680067387674436</li><li>0.219846401737705</li><li>1.34839467338858</li><li>0.0546604981055179</li><li>0.879076083934329</li><li>3.43648934510459</li><li>-0.726564047459416</li><li>-0.660313922161985</li><li>0.244668789049531</li><li>0.333072383646635</li></ol>
```R
d
```
<table>
<caption>A data.frame: 100 × 3</caption>
<thead>
<tr><th scope=col>y</th><th scope=col>x_real</th><th scope=col>x_spur</th></tr>
<tr><th scope=col><dbl></th><th scope=col><dbl></th><th scope=col><dbl></th></tr>
</thead>
<tbody>
<tr><td> 1.23837003</td><td> 0.46504872</td><td>-1.352055562</td></tr>
<tr><td> 0.21014531</td><td> 0.10499633</td><td> 0.692630018</td></tr>
<tr><td> 0.75867059</td><td> 0.59188928</td><td> 2.389704885</td></tr>
<tr><td> 1.20829338</td><td> 0.45612102</td><td>-0.277785286</td></tr>
<tr><td> 2.85487807</td><td> 1.13802392</td><td> 2.208781964</td></tr>
<tr><td>-2.23129968</td><td>-1.21721043</td><td>-0.623157211</td></tr>
<tr><td> 0.38395229</td><td> 0.60031122</td><td>-0.691301778</td></tr>
<tr><td>-1.37574356</td><td>-1.10136483</td><td>-0.008416575</td></tr>
<tr><td>-0.92992473</td><td>-0.88327857</td><td>-2.106896662</td></tr>
<tr><td>-0.09839109</td><td>-0.96945787</td><td>-2.475596402</td></tr>
<tr><td> 0.55906662</td><td> 2.21120727</td><td> 3.777529247</td></tr>
<tr><td> 2.30905325</td><td> 0.78133075</td><td> 0.086976581</td></tr>
<tr><td>-0.36630367</td><td>-1.16280907</td><td>-2.632224911</td></tr>
<tr><td>-1.32368322</td><td> 0.01648134</td><td>-2.720390804</td></tr>
<tr><td>-2.17579557</td><td>-0.77207256</td><td>-0.663492281</td></tr>
<tr><td> 0.31392467</td><td> 1.31261617</td><td> 1.920607955</td></tr>
<tr><td>-2.40565252</td><td>-1.30097706</td><td> 0.335483979</td></tr>
<tr><td> 2.74501440</td><td> 1.89685451</td><td>-0.282782625</td></tr>
<tr><td>-0.48731314</td><td> 1.55009919</td><td> 1.013936623</td></tr>
<tr><td> 1.31195999</td><td> 1.51939375</td><td> 2.974427005</td></tr>
<tr><td>-0.32518737</td><td>-0.33143268</td><td> 0.901401247</td></tr>
<tr><td>-1.55476035</td><td>-1.66110778</td><td>-2.055111792</td></tr>
<tr><td>-0.65982321</td><td> 0.71237619</td><td> 0.765853777</td></tr>
<tr><td>-2.78895348</td><td>-2.08962097</td><td>-1.497968458</td></tr>
<tr><td> 0.06102915</td><td>-0.05298054</td><td> 0.081378439</td></tr>
<tr><td>-3.06322507</td><td>-1.63824858</td><td>-1.316753504</td></tr>
<tr><td> 0.38972830</td><td>-0.35284403</td><td> 0.702541619</td></tr>
<tr><td> 0.34871202</td><td> 0.69080993</td><td> 0.574591348</td></tr>
<tr><td>-0.68836123</td><td> 1.35205875</td><td> 2.898430996</td></tr>
<tr><td>-1.61809608</td><td>-0.56933796</td><td>-1.551203879</td></tr>
<tr><td>...</td><td>...</td><td>...</td></tr>
<tr><td> 0.06283928</td><td>-1.27097100</td><td>-1.93221397</td></tr>
<tr><td> 2.24122861</td><td> 1.49866853</td><td> 3.32508802</td></tr>
<tr><td>-0.90951450</td><td> 0.35126327</td><td>-0.72839823</td></tr>
<tr><td>-1.18210173</td><td>-0.01419304</td><td> 0.26782890</td></tr>
<tr><td>-2.73401509</td><td>-0.62294836</td><td>-0.65617032</td></tr>
<tr><td> 1.54120743</td><td> 0.68786001</td><td>-0.04550858</td></tr>
<tr><td> 0.55025982</td><td>-0.61835757</td><td>-0.71992162</td></tr>
<tr><td>-1.62260770</td><td>-1.45181357</td><td>-1.16773319</td></tr>
<tr><td> 2.02855006</td><td> 1.59663330</td><td> 1.77104103</td></tr>
<tr><td> 0.99111728</td><td> 0.29646283</td><td>-0.49019490</td></tr>
<tr><td>-1.74503457</td><td>-0.40706973</td><td>-0.21956378</td></tr>
<tr><td> 0.91623296</td><td> 0.61731740</td><td>-0.11807236</td></tr>
<tr><td> 0.88907842</td><td>-0.44097169</td><td>-0.50350718</td></tr>
<tr><td> 2.17097253</td><td> 0.30948057</td><td> 1.27167120</td></tr>
<tr><td> 0.55394043</td><td>-0.40487808</td><td>-0.57045068</td></tr>
<tr><td>-1.15472825</td><td>-0.37184868</td><td> 2.08256024</td></tr>
<tr><td>-0.16322237</td><td>-1.32555213</td><td>-1.66558306</td></tr>
<tr><td> 1.61534860</td><td> 1.52328907</td><td> 1.67801008</td></tr>
<tr><td> 0.32711912</td><td> 0.35550108</td><td> 0.28561191</td></tr>
<tr><td>-0.38844307</td><td> 0.56893411</td><td>-1.38461375</td></tr>
<tr><td> 1.24643241</td><td> 0.68006739</td><td> 1.18848396</td></tr>
<tr><td>-0.26187694</td><td> 0.21984640</td><td> 1.03807732</td></tr>
<tr><td> 1.72864891</td><td> 1.34839467</td><td> 1.28997892</td></tr>
<tr><td> 0.72428338</td><td> 0.05466050</td><td> 0.72711635</td></tr>
<tr><td>-0.03436937</td><td> 0.87907608</td><td> 1.00115597</td></tr>
<tr><td> 3.40677569</td><td> 3.43648935</td><td> 4.82655856</td></tr>
<tr><td>-0.45170983</td><td>-0.72656405</td><td>-0.41261023</td></tr>
<tr><td> 1.68005195</td><td>-0.66031392</td><td>-1.96255663</td></tr>
<tr><td> 0.37582248</td><td> 0.24466879</td><td> 1.58977310</td></tr>
<tr><td>-0.16967498</td><td> 0.33307238</td><td> 0.86973738</td></tr>
</tbody>
</table>
```R
pairs(d)
```
# Counterfactual Plots
## Considering the below DAG we try to simulate the effect of A on D while also including the effect A has on M
```R
drawdag( dag5.1 )
```
```R
data(WaffleDivorce)
d <- list()
d$A <- standardize( WaffleDivorce$MedianAgeMarriage )
d$D <- standardize( WaffleDivorce$Divorce )
d$M <- standardize( WaffleDivorce$Marriage )
m5.3_A <- quap(
alist(
## A -> D <- M
D ~ dnorm( mu , sigma ) ,
mu <- a + bM*M + bA*A ,
a ~ dnorm( 0 , 0.2 ) ,
bM ~ dnorm( 0 , 0.5 ) ,
bA ~ dnorm( 0 , 0.5 ) ,
sigma ~ dexp( 1 ),
## A -> M
M ~ dnorm( mu_M , sigma_M ),
mu_M <- aM + bAM*A,
aM ~ dnorm( 0 , 0.2 ),
bAM ~ dnorm( 0 , 0.5 ),
sigma_M ~ dexp( 1 )
) , data = d )
```
```R
A_seq <- seq( from=-2 , to=2 , length.out=30 )
```
```R
# prep data
sim_dat <- data.frame( A=A_seq )
```
## Given the data(A_seq) we simulate M first and later use this A_seq,M to generate D
```R
# simulate M and then D, using A_seq
s <- sim( m5.3_A , data=sim_dat , vars=c("M","D") )
```
```R
# display counterfactual predictions
plot( sim_dat$A , colMeans(s$D) , ylim=c(-2,2) , type="l" ,
xlab="manipulated A" , ylab="counterfactual D" )
shade( apply(s$D,2,PI) , sim_dat$A )
mtext( "Total counterfactual effect of A on D" )
```
```R
plot( sim_dat$A , colMeans(s$M) , ylim=c(-2,2) , type="l" ,
xlab="manipulated A" , ylab="counterfactual M" )
shade( apply(s$M,2,PI) , sim_dat$A )
mtext( "Total counterfactual effect of A on M" )
```
## The above counterfactual plots show that A directly influences both D and M, as assumed by our model
## What happens when we manipulate M directly, so that A no longer influences M?
1. Manipulating M breaks the arrow A -> M; D is still affected through A -> D and M -> D
2. The code below simulates **only D**, holding A = 0 (an average state in the US)
```R
# simulate D, using A_seq
sim_dat <- data.frame( M=seq(from=-2,to=2,length.out=30) , A=0 )
s <- sim( m5.3_A , data=sim_dat , vars="D")
```
```R
plot( sim_dat$M , colMeans(s) , ylim=c(-2,2) , type="l" ,
xlab="manipulated M" , ylab="counterfactual D" )
shade( apply(s,2,PI) , sim_dat$M )
mtext( "Total counterfactual effect of M on D" )
```
### The above plot shows that M does not have a significant effect on D
# Spurious Waffles
```R
library(rethinking)
data(milk)
d <- milk
str(d)
```
Loading required package: rstan
Loading required package: StanHeaders
Loading required package: ggplot2
rstan (Version 2.21.2, GitRev: 2e1f913d3ca3)
For execution on a local, multicore CPU with excess RAM we recommend calling
options(mc.cores = parallel::detectCores()).
To avoid recompilation of unchanged Stan programs, we recommend calling
rstan_options(auto_write = TRUE)
Do not specify '-march=native' in 'LOCAL_CPPFLAGS' or a Makevars file
Loading required package: parallel
rethinking (Version 2.13)
Attaching package: 'rethinking'
The following object is masked from 'package:stats':
rstudent
'data.frame': 29 obs. of 8 variables:
$ clade : Factor w/ 4 levels "Ape","New World Monkey",..: 4 4 4 4 4 2 2 2 2 2 ...
$ species : Factor w/ 29 levels "A palliata","Alouatta seniculus",..: 11 8 9 10 16 2 1 6 28 27 ...
$ kcal.per.g : num 0.49 0.51 0.46 0.48 0.6 0.47 0.56 0.89 0.91 0.92 ...
$ perc.fat : num 16.6 19.3 14.1 14.9 27.3 ...
$ perc.protein : num 15.4 16.9 16.9 13.2 19.5 ...
$ perc.lactose : num 68 63.8 69 71.9 53.2 ...
$ mass : num 1.95 2.09 2.51 1.62 2.19 5.25 5.37 2.51 0.71 0.68 ...
$ neocortex.perc: num 55.2 NA NA NA NA ...
```R
d$K <- scale( d$kcal.per.g )
d$N <- scale( d$neocortex.perc )
d$M <- scale( log(d$mass) )
```
```R
m5.5_draft <- quap(
alist(
K ~ dnorm( mu , sigma ) ,
mu <- a + bN*N ,
a ~ dnorm( 0 , 1 ) ,
bN ~ dnorm( 0 , 1 ) ,
sigma ~ dexp( 1 )
) , data=d )
```
```R
d$neocortex.perc
```
<ol class=list-inline><li>55.16</li><li><NA></li><li><NA></li><li><NA></li><li><NA></li><li>64.54</li><li>64.54</li><li>67.64</li><li><NA></li><li>68.85</li><li>58.85</li><li>61.69</li><li>60.32</li><li><NA></li><li><NA></li><li>69.97</li><li><NA></li><li>70.41</li><li><NA></li><li>73.4</li><li><NA></li><li>67.53</li><li><NA></li><li>71.26</li><li>72.6</li><li><NA></li><li>70.24</li><li>76.3</li><li>75.49</li></ol>
```R
dcc <- d[ complete.cases(d$K,d$N,d$M) , ]
```
```R
m5.5_draft <- quap(
alist(
K ~ dnorm( mu , sigma ) ,
mu <- a + bN*N ,
a ~ dnorm( 0 , 1 ) ,
bN ~ dnorm( 0 , 1 ) ,
sigma ~ dexp( 1 )
) , data=dcc )
```
```R
prior <- extract.prior( m5.5_draft )
xseq <- c(-2,2)
mu <- link( m5.5_draft , post=prior , data=list(N=xseq) )
plot( NULL , xlim=xseq , ylim=xseq )
for ( i in 1:50 ) lines( xseq , mu[i,] , col=col.alpha("black",0.3) )
```
## Impossible priors
## Tightening the priors so that we have reasonable values
```R
m5.5 <- quap(
alist(
K ~ dnorm( mu , sigma ) ,
mu <- a + bN*N ,
a ~ dnorm( 0 , 0.2 ) ,
bN ~ dnorm( 0 , 0.5 ) ,
sigma ~ dexp( 1 )
) , data=dcc )
```
```R
prior <- extract.prior( m5.5 )
xseq <- c(-2,2)
mu <- link( m5.5_draft , post=prior , data=list(N=xseq) )
plot( NULL , xlim=xseq , ylim=xseq )
for ( i in 1:50 ) lines( xseq , mu[i,] , col=col.alpha("black",0.3) )
```
```R
precis( m5.5 )
```
<table>
<caption>A precis: 3 × 4</caption>
<thead>
<tr><th></th><th scope=col>mean</th><th scope=col>sd</th><th scope=col>5.5%</th><th scope=col>94.5%</th></tr>
<tr><th></th><th scope=col><dbl></th><th scope=col><dbl></th><th scope=col><dbl></th><th scope=col><dbl></th></tr>
</thead>
<tbody>
<tr><th scope=row>a</th><td>0.03993997</td><td>0.1544908</td><td>-0.2069662</td><td>0.2868461</td></tr>
<tr><th scope=row>bN</th><td>0.13323493</td><td>0.2237469</td><td>-0.2243559</td><td>0.4908258</td></tr>
<tr><th scope=row>sigma</th><td>0.99982070</td><td>0.1647082</td><td> 0.7365852</td><td>1.2630562</td></tr>
</tbody>
</table>
```R
xseq <- seq( from=min(dcc$N)-0.15 , to=max(dcc$N)+0.15 , length.out=30 )
mu <- link( m5.5 , data=list(N=xseq) )
mu_mean <- apply(mu,2,mean)
mu_PI <- apply(mu,2,PI)
plot( K ~ N , data=dcc )
lines( xseq , mu_mean , lwd=2 )
shade( mu_PI , xseq )
```
```R
m5.6 <- quap(
alist(
K ~ dnorm( mu , sigma ) ,
mu <- a + bM*M ,
a ~ dnorm( 0 , 0.2 ) ,
bM ~ dnorm( 0 , 0.5 ) ,
sigma ~ dexp( 1 )
) , data=dcc )
precis(m5.6)
```
<table>
<caption>A precis: 3 × 4</caption>
<thead>
<tr><th></th><th scope=col>mean</th><th scope=col>sd</th><th scope=col>5.5%</th><th scope=col>94.5%</th></tr>
<tr><th></th><th scope=col><dbl></th><th scope=col><dbl></th><th scope=col><dbl></th><th scope=col><dbl></th></tr>
</thead>
<tbody>
<tr><th scope=row>a</th><td> 0.04639191</td><td>0.1512784</td><td>-0.1953801</td><td>0.28816396</td></tr>
<tr><th scope=row>bM</th><td>-0.28249661</td><td>0.1928751</td><td>-0.5907483</td><td>0.02575509</td></tr>
<tr><th scope=row>sigma</th><td> 0.94923665</td><td>0.1570432</td><td> 0.6982513</td><td>1.20022199</td></tr>
</tbody>
</table>
```R
xseq <- seq( from=min(dcc$M)-0.15 , to=max(dcc$M)+0.15 , length.out=30 )
mu <- link( m5.6 , data=list(M=xseq) )
mu_mean <- apply(mu,2,mean)
mu_PI <- apply(mu,2,PI)
plot( K ~ M , data=dcc )
lines( xseq , mu_mean , lwd=2 )
shade( mu_PI , xseq )
```
```R
m5.7 <- quap(
alist(
K ~ dnorm( mu , sigma ) ,
mu <- a + bN*N + bM*M ,
a ~ dnorm( 0 , 0.2 ) ,
bN ~ dnorm( 0 , 0.5 ) ,
bM ~ dnorm( 0 , 0.5 ) ,
sigma ~ dexp( 1 )
) , data=dcc )
precis(m5.7)
```
<table>
<caption>A precis: 4 × 4</caption>
<thead>
<tr><th></th><th scope=col>mean</th><th scope=col>sd</th><th scope=col>5.5%</th><th scope=col>94.5%</th></tr>
<tr><th></th><th scope=col><dbl></th><th scope=col><dbl></th><th scope=col><dbl></th><th scope=col><dbl></th></tr>
</thead>
<tbody>
<tr><th scope=row>a</th><td> 0.06799076</td><td>0.1340005</td><td>-0.1461680</td><td> 0.2821495</td></tr>
<tr><th scope=row>bN</th><td> 0.67510201</td><td>0.2483044</td><td> 0.2782636</td><td> 1.0719405</td></tr>
<tr><th scope=row>bM</th><td>-0.70298796</td><td>0.2207911</td><td>-1.0558547</td><td>-0.3501212</td></tr>
<tr><th scope=row>sigma</th><td> 0.73803237</td><td>0.1324698</td><td> 0.5263200</td><td> 0.9497447</td></tr>
</tbody>
</table>
```R
plot( coeftab( m5.5 , m5.6 , m5.7 ) , pars=c("bM","bN") )
```
## Including both M and N in the same model pushes each of their associations with the outcome farther from zero
```R
pairs( ~K + M + N ,
dcc )
```
## Counterfactual with N = 0
```R
xseq <- seq( from=min(dcc$M)-0.15 , to=max(dcc$M)+0.15 , length.out=30 )
mu <- link( m5.7 , data=data.frame( M=xseq , N=0 ) )
mu_mean <- apply(mu,2,mean)
mu_PI <- apply(mu,2,PI)
plot( NULL , xlim=range(dcc$M) , ylim=range(dcc$K) )
lines( xseq , mu_mean , lwd=2 )
shade( mu_PI , xseq )
```
## Counterfactual with M = 0
```R
xseq <- seq( from=min(dcc$N)-0.15 , to=max(dcc$N)+0.15 , length.out=30 )
mu <- link( m5.7 , data=data.frame( N=xseq , M=0 ) )
mu_mean <- apply(mu,2,mean)
mu_PI <- apply(mu,2,PI)
plot( NULL , xlim=range(dcc$N) , ylim=range(dcc$K) )
lines( xseq , mu_mean , lwd=2 )
shade( mu_PI , xseq )
```
## So it is clear that M and N, when included together, each show a stronger effect
# Index Variables
### Using dummy variables for categorical data does not help with setting sensible priors, so we use index variables instead
```R
data(Howell1)
d <- Howell1
str(d)
```
'data.frame': 544 obs. of 4 variables:
$ height: num 152 140 137 157 145 ...
$ weight: num 47.8 36.5 31.9 53 41.3 ...
$ age : num 63 63 65 41 51 35 32 27 19 54 ...
$ male : int 1 0 0 1 0 1 0 1 0 1 ...
```R
mu_female <- rnorm(1e4,178,20)
mu_male <- rnorm(1e4,178,20) + rnorm(1e4,0,10)
precis( data.frame( mu_female , mu_male ) )
```
<table>
<caption>A precis: 2 × 5</caption>
<thead>
<tr><th></th><th scope=col>mean</th><th scope=col>sd</th><th scope=col>5.5%</th><th scope=col>94.5%</th><th scope=col>histogram</th></tr>
<tr><th></th><th scope=col><dbl></th><th scope=col><dbl></th><th scope=col><dbl></th><th scope=col><dbl></th><th scope=col><chr></th></tr>
</thead>
<tbody>
<tr><th scope=row>mu_female</th><td>177.6473</td><td>19.95584</td><td>145.6448</td><td>209.5282</td><td><span style=white-space:pre-wrap><U+2581><U+2581><U+2583><U+2587><U+2587><U+2582><U+2581><U+2581> </span></td></tr>
<tr><th scope=row>mu_male</th><td>178.2149</td><td>22.19615</td><td>142.7647</td><td>213.3100</td><td><U+2581><U+2581><U+2581><U+2583><U+2587><U+2587><U+2583><U+2581><U+2581><U+2581></td></tr>
</tbody>
</table>
### The prior predictive simulation clearly shows that male height has more prior spread, even though there is no prior reason for that
\begin{align}
h_i &\sim \text{Normal}(\mu_i,\sigma)\\
\mu_i &= \alpha + \beta_m m_i
\end{align}
Since $\alpha$ represents the average female height (when $m_i$ = 0), while male height ($m_i$ = 1) depends on two parameters ($\alpha + \beta_m$), the male prior picks up extra uncertainty even though there is no prior evidence for that
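Concretely, with independent priors the implied prior variance for male height under the indicator-variable parameterization is
\begin{align}
\text{Var}(\alpha + \beta_m) = \text{Var}(\alpha) + \text{Var}(\beta_m) = 20^2 + 10^2 = 500, \qquad \sqrt{500} \approx 22.4
\end{align}
which matches the wider spread ($\approx 22.2$) seen for `mu_male` in the prior predictive simulation above.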
```R
d$sex <- ifelse( d$male==1 , 2 , 1 )
str( d$sex )
```
num [1:544] 2 1 1 2 1 2 1 2 1 2 ...
```R
m5.8 <- quap(
alist(
height ~ dnorm( mu , sigma ) ,
mu <- a[sex] ,
a[sex] ~ dnorm( 178 , 20 ) ,
sigma ~ dunif( 0 , 50 )
) , data=d )
precis( m5.8 , depth=2 )
```
<table>
<caption>A precis: 3 × 4</caption>
<thead>
<tr><th></th><th scope=col>mean</th><th scope=col>sd</th><th scope=col>5.5%</th><th scope=col>94.5%</th></tr>
<tr><th></th><th scope=col><dbl></th><th scope=col><dbl></th><th scope=col><dbl></th><th scope=col><dbl></th></tr>
</thead>
<tbody>
<tr><th scope=row>a[1]</th><td>134.91059</td><td>1.6069414</td><td>132.3424</td><td>137.47879</td></tr>
<tr><th scope=row>a[2]</th><td>142.57873</td><td>1.6974815</td><td>139.8658</td><td>145.29164</td></tr>
<tr><th scope=row>sigma</th><td> 27.31009</td><td>0.8280521</td><td> 25.9867</td><td> 28.63348</td></tr>
</tbody>
</table>
```R
post <- extract.samples(m5.8)
post$diff_fm <- post$a[,1] - post$a[,2]
precis( post , depth=2 )
```
<table>
<caption>A precis: 4 × 5</caption>
<thead>
<tr><th></th><th scope=col>mean</th><th scope=col>sd</th><th scope=col>5.5%</th><th scope=col>94.5%</th><th scope=col>histogram</th></tr>
<tr><th></th><th scope=col><dbl></th><th scope=col><dbl></th><th scope=col><dbl></th><th scope=col><dbl></th><th scope=col><chr></th></tr>
</thead>
<tbody>
<tr><th scope=row>sigma</th><td> 27.321169</td><td>0.819567</td><td> 26.01392</td><td> 28.619513</td><td><U+2581><U+2581><U+2581><U+2581><U+2583><U+2587><U+2587><U+2587><U+2583><U+2582><U+2581><U+2581><U+2581><U+2581></td></tr>
<tr><th scope=row>a[1]</th><td>134.913639</td><td>1.619593</td><td>132.27227</td><td>137.503628</td><td><span style=white-space:pre-wrap><U+2581><U+2581><U+2581><U+2581><U+2582><U+2585><U+2587><U+2587><U+2585><U+2582><U+2581><U+2581><U+2581> </span></td></tr>
<tr><th scope=row>a[2]</th><td>142.546916</td><td>1.706710</td><td>139.82388</td><td>145.276203</td><td><U+2581><U+2581><U+2581><U+2582><U+2583><U+2587><U+2587><U+2587><U+2583><U+2582><U+2581><U+2581><U+2581><U+2581></td></tr>
<tr><th scope=row>diff_fm</th><td> -7.633278</td><td>2.349692</td><td>-11.40781</td><td> -3.917746</td><td><span style=white-space:pre-wrap><U+2581><U+2581><U+2581><U+2582><U+2587><U+2587><U+2583><U+2581><U+2581><U+2581> </span></td></tr>
</tbody>
</table>
### So this variable diff_fm is called the Contrast
```R
data(milk)
d <- milk
unique(d$clade)
```
<ol class=list-inline><li>Strepsirrhine</li><li>New World Monkey</li><li>Old World Monkey</li><li>Ape</li></ol>
<details>
<summary style=display:list-item;cursor:pointer>
<strong>Levels</strong>:
</summary>
<ol class=list-inline><li>'Ape'</li><li>'New World Monkey'</li><li>'Old World Monkey'</li><li>'Strepsirrhine'</li></ol>
</details>
```R
d$clade_id <- as.integer( d$clade )
d$clade_id
```
<ol class=list-inline><li>4</li><li>4</li><li>4</li><li>4</li><li>4</li><li>2</li><li>2</li><li>2</li><li>2</li><li>2</li><li>2</li><li>2</li><li>2</li><li>2</li><li>3</li><li>3</li><li>3</li><li>3</li><li>3</li><li>3</li><li>1</li><li>1</li><li>1</li><li>1</li><li>1</li><li>1</li><li>1</li><li>1</li><li>1</li></ol>
```R
d$K <- scale( d$kcal.per.g )
m5.9 <- quap(
alist(
K ~ dnorm( mu , sigma ),
mu <- a[clade_id],
a[clade_id] ~ dnorm( 0 , 0.5 ),
sigma ~ dexp( 1 )
) , data=d )
labels <- paste( "a[" , 1:4 , "]:" , levels(d$clade) , sep="" )
plot( precis( m5.9 , depth=2 , pars="a" ) , labels=labels ,
xlab="expected kcal (std)" )
```
### The above example shows how the idea of index variables scales up to more categories
And when you want the difference between categories, you need to compute the contrast from samples of the posterior distribution
```R
```
|
05af6c8344b00922449750019985506438594d3f
| 412,536 |
ipynb
|
Jupyter Notebook
|
The Many Variables & The Spurious Waffles.ipynb
|
GodEater8042/Statistical-Rethinking-Jupyter-R
|
ba305082b8fb24cefc43d02208de361e5adade3e
|
[
"MIT"
] | null | null | null |
The Many Variables & The Spurious Waffles.ipynb
|
GodEater8042/Statistical-Rethinking-Jupyter-R
|
ba305082b8fb24cefc43d02208de361e5adade3e
|
[
"MIT"
] | null | null | null |
The Many Variables & The Spurious Waffles.ipynb
|
GodEater8042/Statistical-Rethinking-Jupyter-R
|
ba305082b8fb24cefc43d02208de361e5adade3e
|
[
"MIT"
] | null | null | null | 138.203015 | 42,506 | 0.841883 | true | 14,332 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.863392 | 0.875787 | 0.756147 |
__label__yue_Hant
| 0.169327 | 0.595115 |
```python
import numpy as np
from scipy.integrate import odeint
from mayavi import mlab  # used by the plotting helpers below
from sympy import symbols,sqrt,sech,Rational,lambdify,Matrix,exp,cosh,cse,simplify,cos,sin
from sympy.vector import CoordSysCartesian
#from theano.scalar.basic_sympy import SymPyCCode
#from theano import function
#from theano.scalar import floats
from IRI import *
#from Symbolic import *
from scipy.integrate import simps
from ENUFrame import ENU
import astropy.coordinates as ac
import astropy.units as au
import astropy.time as at
from time import time as tictoc
from TricubicInterpolation import TriCubic
import math
import pp
from RadioArray import RadioArray
class Fermat(object):
def __init__(self,neTCI=None,frequency = 120e6,type='s'):
self.type = type
self.frequency = frequency#Hz
if neTCI is not None:
self.ne2n(neTCI)
return
def loadFunc(self,file):
'''Load the model given in `file`'''
data = np.load(file)
if 'ne' in data.keys():
ne = data['ne']
xvec = data['xvec']
yvec = data['yvec']
zvec = data['zvec']
self.ne2n(TriCubic(xvec,yvec,zvec,ne))
return
if 'n' in data.keys():
            n = data['n']
xvec = data['xvec']
yvec = data['yvec']
zvec = data['zvec']
self.n2ne(TriCubic(xvec,yvec,zvec,n))
return
def saveFunc(self,file):
np.savez(file,xvec=self.nTCI.xvec,yvec=self.nTCI.yvec,zvec=self.nTCI.zvec,n=self.nTCI.m,ne=self.neTCI.m)
def ne2n(self,neTCI):
'''Analytically turn electron density to refractive index. Assume ne in m^-3'''
self.neTCI = neTCI
#copy object
self.nTCI = neTCI.copy(default=1.)
#inplace change to refractive index
self.nTCI.m *= -8.980**2/self.frequency**2
self.nTCI.m += 1.
self.nTCI.m = np.sqrt(self.nTCI.m)
#wp = 5.63e4*np.sqrt(ne/1e6)/2pi#Hz^2 m^3 lightman p 226
return self.nTCI
def n2ne(self,nTCI):
"""Get electron density in m^-3 from refractive index"""
self.nTCI = nTCI
#convert to
self.neTCI = nTCI.copy()
self.neTCI.m *= -self.neTCI.m
self.neTCI.m += 1.
self.neTCI.m *= self.frequency**2/8.980**2
#wp = 5.63e4*np.sqrt(ne/1e6)/2pi#Hz^2 m^3 lightman p 226
return self.neTCI
def eulerODE(self,y,t,*args):
'''return pxdot,pydot,pzdot,xdot,ydot,zdot,sdot'''
#print(y)
px,py,pz,x,y,z,s = y
#n,nx,ny,nz,nxy,nxz,nyz,nxyz = self.nTCI.interp(x,y,z,doDiff=True)
#ne,nex,ney,nez,nexy,nexz,neyz,nexyz = self.neTCI.interp(x,y,z,doDiff=True)
#A = - 8.98**2/self.frequency**2
#n = math.sqrt(1. + A*ne)
#ndot = A/(2.*n)
#nx = ndot * nex
#ny = ndot * ney
#nz = ndot * nez
#print(n)
n,nx,ny,nz = 1.,0,0,0
#if (n>1):
# print(x,y,z,n)
if self.type == 'z':
sdot = n / pz
pxdot = nx*n/pz
pydot = ny*n/pz
pzdot = nz*n/pz
xdot = px / pz
ydot = py / pz
zdot = 1.
if self.type == 's':
sdot = 1.
pxdot = nx
pydot = ny
pzdot = nz
xdot = px / n
ydot = py / n
zdot = pz / n
return [pxdot,pydot,pzdot,xdot,ydot,zdot,sdot]
def jacODE(self,y,t,*args):
'''return d ydot / d y, with derivatives down column for speed'''
px,py,pz,x,y,z,s = y
#n,nx,ny,nz,nxy,nxz,nyz,nxyz = self.nTCI.interp(x,y,z,doDiff=True)
nxx,nyy,nzz = 0.,0.,0.
n,nx,ny,nz,nxy,nxz,nyz = 1.,0,0,0,0,0,0
#ne,nex,ney,nez,nexy,nexz,neyz,nexyz = self.neTCI.interp(x,y,z,doDiff=True)
#A = - 8.98**2/self.frequency**2
#n = math.sqrt(1. + A*ne)
#ndot = A/(2.*n)
#nx = ndot * nex
#ny = ndot * ney
#nz = ndot * nez
#ndotdot = -(A * ndot)/(2. * n**2)
#nxy = ndotdot * nex*ney + ndot * nexy
#nxz = ndotdot * nex * nez + ndot * nexz
#nyz = ndotdot * ney * nez + ndot * neyz
#if (n>1):
# print(x,y,z,n)
if self.type == 'z':
x0 = n
x1 = nx
x2 = pz**(-2)
x3 = x0*x2
x4 = 1./pz
x5 = ny
x6 = x4*(x0*nxy + x1*x5)
x7 = nz
x8 = x4*(x0*nxz + x1*x7)
x9 = x4*(x0*nyz + x5*x7)
jac = np.array([[ 0, 0, -x1*x3, x4*(x0*nxx + x1**2),x6, x8, 0.],
[ 0, 0, -x3*x5,x6, x4*(x0*nyy + x5**2), x9, 0.],
[ 0, 0, -x3*x7,x8, x9, x4*(x0*nzz + x7**2), 0.],
[x4, 0, -px*x2, 0, 0, 0, 0.],
[ 0, x4, -py*x2, 0, 0, 0, 0.],
[ 0, 0, 0, 0, 0, 0, 0.],
[ 0, 0,-x3,x1*x4, x4*x5, x4*x7, 0.]])
if self.type == 's':
x0 = n
x1 = nxy
x2 = nxz
x3 = nyz
x4 = 1./x0
x5 = nx
x6 = x0**(-2)
x7 = px*x6
x8 = ny
x9 = nz
x10 = py*x6
x11 = pz*x6
jac = np.array([[ 0, 0, 0, nxx, x1, x2, 0.],
[ 0, 0, 0, x1, nyy, x3, 0.],
[ 0, 0, 0, x2, x3, nzz, 0.],
[x4, 0, 0, -x5*x7, -x7*x8, -x7*x9, 0.],
[ 0, x4, 0, -x10*x5, -x10*x8, -x10*x9, 0.],
[ 0, 0, x4, -x11*x5, -x11*x8, -x11*x9, 0.],
[ 0, 0, 0, 0, 0, 0, 0.]])
return jac
def integrateRay(self,X0,direction,tmax,time = 0,N=100):
        '''Integrate a ray from X0 in the given initial direction (Cartesian coordinates) until tmax'''
direction /= np.linalg.norm(direction)
x0,y0,z0 = X0
xdot0,ydot0,zdot0 = direction
sdot = np.sqrt(xdot0**2 + ydot0**2 + zdot0**2)
px0 = xdot0/sdot
py0 = ydot0/sdot
pz0 = zdot0/sdot
init = [px0,py0,pz0,x0,y0,z0,0]
if self.type == 'z':
tarray = np.linspace(z0,tmax,N)
if self.type == 's':
tarray = np.linspace(0,tmax,N)
#print("Integrating at {0} from {1} in direction {2} until {3}".format(time,X0,direction,tmax))
#print(init)
#print("Integrating from {0} in direction {1} until {2}".format(x0,directions,tmax))
Y,info = odeint(self.eulerODE, init, tarray, args=(time,),Dfun = self.jacODE, col_deriv = True, full_output=1)
#print(info['hu'].shape,np.sum(info['hu']),info['hu'])
#print(Y)
x = Y[:,3]
y = Y[:,4]
z = Y[:,5]
s = Y[:,6]
return x,y,z,s
def plotWavefront(neTCI,rays,save=False,animate=False):
xmin = neTCI.xvec[0]
xmax = neTCI.xvec[-1]
ymin = neTCI.yvec[0]
ymax = neTCI.yvec[-1]
zmin = neTCI.zvec[0]
zmax = neTCI.zvec[-1]
X,Y,Z = np.mgrid[xmin:xmax:len(neTCI.xvec)*1j,
ymin:ymax:len(neTCI.yvec)*1j,
zmin:zmax:len(neTCI.zvec)*1j]
#reshape array
data = neTCI.getShapedArray()
print(np.mean(data),np.max(data),np.min(data))
l = mlab.pipeline.volume(mlab.pipeline.scalar_field(X,Y,Z,data))#,vmin=min, vmax=min + .5*(max-min))
l._volume_property.scalar_opacity_unit_distance = min((xmax-xmin)/4.,(ymax-ymin)/4.,(zmax-zmin)/4.)
l._volume_property.shade = False
mlab.contour3d(X,Y,Z,data,contours=5,opacity=0.2)
mlab.colorbar()
def getWave(rays,idx):
xs = np.zeros(len(rays))
ys = np.zeros(len(rays))
zs = np.zeros(len(rays))
ridx = 0
while ridx < len(rays):
xs[ridx] = rays[ridx]['x'][idx]
ys[ridx] = rays[ridx]['y'][idx]
zs[ridx] = rays[ridx]['z'][idx]
ridx += 1
return xs,ys,zs
if rays is not None:
for datumIdx in rays.keys():
ray = rays[datumIdx]
mlab.plot3d(ray["x"],ray["y"],ray["z"],tube_radius=0.25)
if animate:
plt = mlab.points3d(*getWave(rays,0),color=(1,0,0),scale_mode='vector', scale_factor=10.)
#mlab.move(-200,0,0)
view = mlab.view()
@mlab.animate(delay=100)
def anim():
nt = len(rays[0]["s"])
f = mlab.gcf()
save = False
while True:
i = 0
while i < nt:
#print("updating scene")
xs,ys,zs = getWave(rays,i)
plt.mlab_source.set(x=xs,y=ys,z=zs)
#mlab.view(*view)
if save:
#mlab.view(*view)
mlab.savefig('figs/wavefronts/wavefront_{0:04d}.png'.format(i))#,magnification = 2)#size=(1920,1080))
#f.scene.render()
i += 1
yield
save = False
anim()
mlab.show()
if save and rays is not None:
return
import os
os.system('ffmpeg -r 10 -f image2 -s 1900x1080 -i figs/wavefronts/wavefront_%04d.png -vcodec libx264 -crf 25 -pix_fmt yuv420p figs/wavefronts/wavefront.mp4')
def plotModel(neTCI,save=False):
'''Plot the model contained in a tricubic interpolator (a convenient container for one)'''
plotWavefront(neTCI,None,save=save)
def createPrioriModel(iri = None,L_ne=15.):
if iri is None:
iri = IriModel()
xmin = -200.
xmax = 200.
ymin = -200.
ymax = 200.
zmin = -10.
zmax = 3000.
eastVec = np.linspace(xmin,xmax,int(np.ceil((xmax-xmin)/L_ne)))
northVec = np.linspace(ymin,ymax,int(np.ceil((ymax-ymin)/L_ne)))
upVec = np.linspace(zmin,zmax,int(np.ceil((zmax-zmin)/L_ne)))
E,N,U = np.meshgrid(eastVec,northVec,upVec,indexing='ij')
#get the points in ITRS frame
points = ac.SkyCoord(E.flatten()*au.km,N.flatten()*au.km,U.flatten()*au.km,frame=iri.enu).transform_to('itrs').cartesian.xyz.to(au.km).value
X = points[0,:].reshape(E.shape)
Y = points[1,:].reshape(N.shape)
Z = points[2,:].reshape(U.shape)
#X,Y,Z = np.meshgrid(points[0,:],points[1,:],points[2,:],indexing='ij')
##generate cartesian grid in ITRS frame
#Nx = int(np.ceil((np.max(points[0,:]) - np.min(points[0,:]))/30.))
#Ny = int(np.ceil((np.max(points[1,:]) - np.min(points[1,:]))/30.))
#Nz = int(np.ceil((np.max(points[2,:]) - np.min(points[2,:]))/30.))
#xvec = np.linspace(np.min(points[0,:]),np.max(points[0,:]),Nx)
#yvec = np.linspace(np.min(points[1,:]),np.max(points[1,:]),Ny)
#zvec = np.linspace(np.min(points[2,:]),np.max(points[2,:]),Nz)
#ij indexing - mgrid is that way by default
#X,Y,Z = np.mgrid[np.min(points[0,:]):np.max(points[0,:]):1j*Nx,
# np.min(points[1,:]):np.max(points[1,:]):1j*Ny,
# np.min(points[2,:]):np.max(points[2,:]):1j*Nz]
#X,Y,Z = np.meshgrid(xvec,yvec,zvec,indexing='ij')
#Get values at points
ne = iri.evaluate(X,Y,Z)
print("created an a priori cube of shape: {0}".format(ne.shape))
return eastVec,northVec,upVec,ne
def perturbModel(eastVec,northVec,upVec,ne,loc,width,amp):
nePert = ne.copy()
E,N,U = np.meshgrid(eastVec,northVec,upVec,indexing='ij')
for l,w,a in zip(loc,width,amp):
print("Adding amp:{0:1.2e} at: {1} scale:{2:0.2f}".format(a,l,w))
nePert += a*np.exp(-((E-l[0])**2 + (N-l[1])**2 + (U-l[2])**2)/w**2)
return nePert
def testSweep():
'''Test the full system.'''
# The priori ionosphere
iri = IriModel()
print("Creating priori model")
eastVec,northVec,upVec,nePriori = createPrioriModel(iri)
print("Creating perturbed model")
nePert = perturbModel(eastVec,northVec,upVec,nePriori,([0,0,200.],),(40.,),(1e12,))
print("creating TCI object")
neTCI = TriCubic(eastVec,northVec,upVec,nePert)
print("creating fermat object")
f = Fermat(neTCI = neTCI,type = 's')
### test interpolation in both
for i in range(0):
x = np.random.uniform(low=eastVec[0],high=eastVec[-1])
y = np.random.uniform(low=northVec[0],high=northVec[-1])
z = np.random.uniform(low=upVec[0],high=upVec[-1])
n_,nx_,ny_,nz_,nxy_,nxz_,nyz_,nxyz_ = f.nTCI.interp(x,y,z,doDiff=True)
ne,nex,ney,nez,nexy,nexz,neyz,nexyz = f.neTCI.interp(x,y,z,doDiff=True)
A = - 8.98**2/f.frequency**2
n = math.sqrt(1. + A*ne)
ndot = A/(2.*n)
nx = ndot * nex
ny = ndot * ney
nz = ndot * nez
ndotdot = -(A * ndot)/(2. * n**2)
nxy = ndotdot * nex*ney + ndot * nexy
nxz = ndotdot * nex * nez + ndot * nexz
nyz = ndotdot * ney * nez + ndot * neyz
print(x,y,z)
print(n,n_)
print(nx,nx_)
print(nxy,nxy_)
print("min and max n:",np.min(f.nTCI.m),np.max(f.nTCI.m))
theta = np.linspace(-np.pi/15.,np.pi/15.,25)
#phi = np.linspace(0,2*np.pi,6)
rays = []
origin = ac.ITRS(iri.enu.location).transform_to(iri.enu).cartesian.xyz.to(au.km).value
print(origin)
rayIdx = 0
t1 = tictoc()
for t in theta:
for p in theta:
#print("integrating ray: {0}".format(rayIdx))
direction = ac.SkyCoord(np.sin(t),
np.sin(p),
1.,frame=iri.enu).cartesian.xyz.value#.transform_to('itrs').cartesian.xyz.value
x,y,z,s = f.integrateRay(origin,direction,1000,time=0.)
rayIdx += 1
rays.append({'x':x,'y':y,'z':z,'s':s})
print("time per ray:",(tictoc()-t1)/len(rays))
#print(rays)
plotWavefront(neTCI,rays,save=False)
#plotWavefront(f.nFunc.subs({'t':0}),rays,*getSolitonCube(sol),save = False)
#plotFuncCube(f.nFunc.subs({'t':0}), *getSolitonCube(sol),rays=rays)
def plot_dtec(Nant,directions,dtec,title='',subAnt=None):
def getDatumIdx(antIdx,dirIdx,timeIdx,numDirections,numTimes):
'''standardizes indexing'''
idx = antIdx*numDirections*numTimes + dirIdx*numTimes + timeIdx
return idx
vmin = np.min(dtec)
vmax = np.max(dtec)
#data -= np.min(dtec)
#data /= np.max(dtec)
Nperaxis = int(np.ceil(np.sqrt(Nant)))
import pylab as plt
cm = plt.cm.get_cmap('RdYlBu')
f = plt.figure(figsize=(22,17))
#f,ax = plt.subplots(int(np.ceil(np.sqrt(numAntennas))),int(np.ceil(np.sqrt(numAntennas))))
for antIdx in range(Nant):
ax = plt.subplot(Nperaxis,Nperaxis,antIdx+1)
ax.set_title("Antenna {}".format(antIdx))
for dirIdx in range(len(directions)):
datumIdx = getDatumIdx(antIdx,dirIdx,0,len(directions),1)
if subAnt is not None:
datumIdx0 = getDatumIdx(subAnt,dirIdx,0,len(directions),1)
sc=ax.scatter(directions[dirIdx,0],directions[dirIdx,1],c=dtec[datumIdx]-dtec[datumIdx0],s=20**2,vmin=vmin,vmax=vmax,cmap=cm)
else:
sc=ax.scatter(directions[dirIdx,0],directions[dirIdx,1],c=dtec[datumIdx],s=20**2,vmin=vmin,vmax=vmax,cmap=cm)
plt.colorbar(sc)
if title != "":
f.savefig("figs/dtec/{}.png".format(title),format='png')
#plt.show()
def SimulatedDataInversion(numThreads = 1,noise=None):
'''Test the full system.'''
def getDatumIdx(antIdx,dirIdx,timeIdx,numDirections,numTimes):
'''standardizes indexing'''
idx = antIdx*numDirections*numTimes + dirIdx*numTimes + timeIdx
return idx
def reverseDatumIdx(datumIdx,numTimes,numDirections):
'''Reverse standardized indexing'''
timeIdx = datumIdx % numTimes
dirIdx = (datumIdx // numTimes) % numDirections
antIdx = datumIdx // (numTimes*numDirections)
return antIdx, dirIdx, timeIdx
def datumDicts2array(datumDicts):
'''Given a tuple of dicts, where each dict maps datumIdx:value,
convert them into a single array whose index gives the ordering'''
N = 0
for datumDict in datumDicts:
N += len(datumDict)
array = np.zeros(N,dtype=np.double)
for datumDict in datumDicts:
for datumIdx in datumDict.keys():#ordering set by datumIdx function 1-to-1
array[datumIdx] = datumDict[datumIdx]
return array
raylength = 2000.
print("Using lofar array")
radioArray = RadioArray(arrayFile='arrays/lofar.hba.antenna.cfg')
timestamp = '2017-02-7T15:37:00.000'
timeIdx = 0#one time stamp for now
numTimes = 1
time = at.Time(timestamp,format='isot',scale='tai')
enu = ENU(obstime=time,location=radioArray.getCenter().earth_location)
phase = ac.SkyCoord(east=0,north=0,up=1,frame=enu).transform_to(ac.ITRS(obstime=time)).transform_to('icrs')#straight up for now
dec = phase.dec.rad
ra = phase.ra.rad
print("Simulating observation on {0}: {1}".format(time.isot,phase))
stations = radioArray.locs.transform_to(enu).cartesian.xyz.to(au.km).value.transpose()
stations = stations[50:60,:]
Nant = stations.shape[0]
print("Using {0} stations".format(Nant))
print(stations)
#stations = np.random.multivariate_normal([0,0,0],[[20**2,0,0],[0,20**2,0],[0,0,0.01**2]],Nant)
#stations = np.array([[0,0,0],[20,0,0]])
Ndir = 5
fov = radioArray.getFov()#radians
print("Creating {0} directions in FOV of {1}".format(Ndir,fov))
directions = np.random.multivariate_normal([ra,dec],[[(fov/2.)**2,0],[0,(fov/2.)**2]],Ndir)
print(directions)
directions = ac.SkyCoord(directions[:,0]*au.radian,directions[:,1]*au.radian,frame='icrs').transform_to(enu).cartesian.xyz.value.transpose()
#print(directions)
print("Setting up tri cubic interpolator")
L_ne = 15.
# The priori ionosphere
iri = IriModel()
print("Creating priori model")
eastVec,northVec,upVec,nePriori = createPrioriModel(iri,L_ne)
print("Creating perturbed model")
nePert = perturbModel(eastVec,northVec,upVec,nePriori,([0,0,200.],),(40.,),(1e9,))
print("Creating TCI object")
neTCI = TriCubic(eastVec,northVec,upVec,nePert)
neTCIModel = TriCubic(eastVec,northVec,upVec,nePriori)
TCI = TriCubic(eastVec,northVec,upVec,np.zeros_like(nePert))
print("Creating fermat object - based on a priori (second order corrections require iterating this)")
f = Fermat(neTCI = neTCIModel,type = 's')
print("Integrating rays with fermats principle")
t1 = tictoc()
rays = {}
for antIdx in range(Nant):
for dirIdx in range(Ndir):
datumIdx = getDatumIdx(antIdx,dirIdx,timeIdx,Ndir,numTimes)
#print(antIdx,dirIdx,timeIdx,datumIdx)
origin = stations[antIdx,:]#ENU frame, later use UVW frame
direction = directions[dirIdx,:]
x,y,z,s = f.integrateRay(origin,direction,raylength,time=0.)
rays[datumIdx] = {'x':x,'y':y,'z':z,'s':s}
Nd = len(rays)
print("Time (total/per ray): {0:0.2f} / {1:0.2e} s".format(tictoc()-t1,(tictoc()-t1)/Nd))
TCI.m = neTCI.m - neTCIModel.m
TCI.clearCache()
#plotWavefront(TCI,rays,save=False,animate=False)
print("Setting up ray chunks for {0} threads".format(numThreads))
#split up rays
raypack = {i:{} for i in range(numThreads)}
c = 0
for datumIdx in rays.keys():
raypack[c%numThreads][datumIdx] = rays[datumIdx]
c += 1
def ppForwardEquation(rays,TCI,mu,Kmu,rho,Krho,numTimes,numDirections):
dtec, rho, Krho = ParallelInversionProducts.forwardEquations(rays,TCI,mu,Kmu,rho,Krho,numTimes,numDirections)
return dtec, rho, Krho
def ppPrimaryInversionSteps(dtec,rays,TCI,mu,Kmu,rho,Krho,muprior,rhoprior,sigma_ne,L_ne,sigma_rho,numTimes,numDirections,priorFlag=True):
G, CmGt, ddGdmpm = ParallelInversionProducts.primaryInversionSteps(dtec,rays,TCI,mu,Kmu,rho,Krho,muprior,rhoprior,sigma_ne,L_ne,sigma_rho,numTimes,numDirections,priorFlag=True)
return G, CmGt, ddGdmpm
def ppSecondaryInversionSteps(rays, G, CmGt, TCI, sigma_rho, Cd,numTimes,numDirections):
S = ParallelInversionProducts.secondaryInversionSteps(rays, G, CmGt, TCI, sigma_rho, Cd,numTimes,numDirections)
return S
jobs = {}
job_server = pp.Server(numThreads, ppservers=())
print("Creating dTec simulated data")
job = job_server.submit(ppForwardEquation,
args=(rays,TCI,np.log(neTCI.m/np.mean(neTCI.m)),np.mean(neTCI.m),None,None,numTimes,Ndir),
depfuncs=(),
modules=('ParallelInversionProducts',))
jobs['dtecSim'] = job
job = job_server.submit(ppForwardEquation,
args=(rays,TCI,np.log(neTCIModel.m/np.mean(neTCIModel.m)),np.mean(neTCIModel.m),None,None,numTimes,Ndir),
depfuncs=(),
modules=('ParallelInversionProducts',))
jobs['dtecModel'] = job
dtecSim,rhoSim0, KrhoSim0 = jobs['dtecSim']()
dobs = datumDicts2array((dtecSim,))
#print("dobs: {0}".format(dobs))
if noise is not None:
print("Adding {0:0.2f}-sigma noise to simulated dtec".format(noise))
dtecStd = np.std(dobs)
dobs += np.random.normal(loc=0,scale=dtecStd*noise,size=np.size(dobs))
#print("dobs: {0}".format(dobs))
dtecModel,rhoModel0,KrhoModel0 = jobs['dtecModel']()
g = datumDicts2array((dtecModel,))
#print("g: {0}".format(g))
job_server.print_stats()
job_server.destroy()
subAnt = None
plot_dtec(Nant,directions,dobs,title='sim_dtec',subAnt=subAnt)
plot_dtec(Nant,directions,g,title='model_dtec',subAnt=subAnt)
plot_dtec(Nant,directions,dobs-g,title='sim-mod_dtec',subAnt=subAnt)
print("Setting up inversion with parameters:")
print("Number of rays: {0}".format(Nd))
print("Forward equation: g(m) = int_R^i (K_mu * EXP[mu(x)] - K_rho * EXP[rho])/TECU ds")
#gaussian process assumption, d = g + G.dm -> Cd = Gt.Cm.G (not sure)
Cd = np.eye(Nd)*np.std(dobs)
print("<Diag(Cd)> = {0:0.2e}".format(np.mean(np.diag(Cd))))
print("a priori model is IRI")
print("Define: mu(x) = LOG[ne(x) / K_mu]")
Kmu = np.mean(neTCIModel.m)
mu = np.log(neTCIModel.m/Kmu)
muPrior = mu.copy()
print("K_mu = {0:0.2e}".format(Kmu))
#spatial-ergodic assumption
sigma_ne = np.std(neTCIModel.m)
print("Coherence scale: L_ne = {0:0.2e}".format(L_ne))
print("C_ne = ({0:0.2e})**2 EXP[-|x1 - x2| / {1:0.1f}]".format(sigma_ne,L_ne))
print("Define: rho = LOG[TEC_0 / K_rho / S]")
Krho = KrhoModel0
rho = rhoModel0
rhoPrior = rho.copy()
sigma_TEC = np.std(g*1e13)
sigma_rho = np.sqrt(np.log(1+(sigma_TEC/Krho/raylength)**2))
print("K_rho = {0:0.2e}".format(Krho))
print("a priori rho (reference TEC): {0}".format(rho))
print("sigma_rho = {0:0.2e}".format(sigma_rho))
#inversion steps
iter = 0
residuals = np.inf
while residuals > 1e-10:
print("Performing iteration: {0}".format(iter))
print("Performing primary inversion steps on {0}".format(numThreads))
job_server = pp.Server(numThreads, ppservers=())
for i in range(numThreads):
job = job_server.submit(ppPrimaryInversionSteps,
args=(dtecSim,raypack[i],TCI,mu,Kmu,rho,Krho,muPrior,rhoPrior,sigma_ne,L_ne,sigma_rho,numTimes,Ndir,True),
depfuncs=(),
modules=('ParallelInversionProducts',))
jobs['ppPrimaryInversionSteps_{0}'.format(i)] = job
G,CmGt,ddGdmpm = {},{},{}
for i in range(numThreads):
G_, CmGt_, ddGdmpm_ = jobs['ppPrimaryInversionSteps_{0}'.format(i)]()
#print(G_, CmGt_, ddGdmpm_)
G.update(G_)
CmGt.update(CmGt_)
ddGdmpm.update(ddGdmpm_)
job_server.print_stats()
job_server.destroy()
print("Performing secondary inversion steps")
job_server = pp.Server(numThreads, ppservers=())
for i in range(numThreads):
job = job_server.submit(ppSecondaryInversionSteps,
args=(raypack[i], G, CmGt, TCI, sigma_rho, Cd,numTimes,Ndir),
depfuncs=(),
modules=('ParallelInversionProducts',))
jobs['ppSecondaryInversionSteps_{0}'.format(i)] = job
S = np.zeros([Nd,Nd],dtype=np.double)
for i in range(numThreads):
S_ = jobs['ppSecondaryInversionSteps_{0}'.format(i)]()
S += S_
print("Inverting S")
T = np.linalg.pinv(S)
if False:
import pylab as plt
ax = plt.subplot(121)
p1 = ax.imshow(S)
plt.colorbar(p1)
ax = plt.subplot(122)
p2 = ax.imshow(T)
plt.colorbar(p2)
print("S:",S)
print("T:",T)
#plt.show()
job_server.print_stats()
job_server.destroy()
# dm = (mp-m) + CmGt.T.ddGdmpm
ddGdmpmArray = datumDicts2array([ddGdmpm])
TddGdmpmArray = T.dot(ddGdmpmArray)
CmGtArray = np.zeros([np.size(mu)+np.size(rho),Nd])
for i in range(Nd):
CmGtArray[:np.size(mu),i] = CmGt[i][0]
CmGtArray[np.size(mu):,i] = CmGt[i][1]
dm = CmGtArray.dot(TddGdmpmArray)
dmu = (muPrior - mu) + dm[:np.size(mu)]
drho = (rhoPrior - rho) + dm[np.size(mu):]
residuals = np.sum(dmu**2) / np.sum(mu**2) + np.sum(drho**2) / np.sum(rho**2)
print("Residual:",residuals)
print("Incrementing mu and rho")
print("dmlog:",dmu)
print("drho:",drho)
mu += dmu
rho += drho
iter += 1
print('Finished inversion with {0} iterations'.format(iter))
#print(rays)
TCI.m = Kmu*np.exp(mu) - neTCIModel.m
TCI.clearCache()
plotWavefront(TCI,rays,save=False)
#plotWavefront(f.nFunc.subs({'t':0}),rays,*getSolitonCube(sol),save = False)
#plotFuncCube(f.nFunc.subs({'t':0}), *getSolitonCube(sol),rays=rays)
def LMSol(G,mprior,Cd,Cm,dobs,mu=1.,octTree=None):
"""Assume the frechet derivative is,
G(x) = exp"""
import pylab as plt
K = np.mean(mprior)
mlog = np.log(mprior/K)
Cm_log = transformCov2Log(Cm,K)#np.log(1. + Cm/K**2)#transformCov2Log(Cm,mprior)
#Cdinv = np.linalg.pinv(Cd)
if octTree is not None:
voxels = getAllDecendants(octTree)
scale = np.zeros(np.size(mprior))
i = 0
while i < np.size(mprior):
scale[i] = voxels[i].volume**(1./3.)
i+= 1
C = np.sum(G,axis=0)/scale
C = C/float(np.max(C))
C[C==0] = np.min(C[C>0])/2.
else:
C = np.sum(G>0,axis=0)
plt.hist(C)
plt.show()
C = C/float(np.max(C))
C[C==0] = np.min(C[C>0])/2.
#C = np.sum(G,axis=0)
#C = C/np.max(C)
res = 1
iter = 0
while res > 1e-6 and iter < 10000:
#forward transform
#print(mlog)
mForward = K*np.exp(mlog)
g = G.dot(mForward)
J = G*mForward
#residuals g - dobs -> -dm
res = g - dobs
#A1 = J.transpose().dot(Cdinv)
#Cmlog_inv = A1.dot(J) + mu*Cm_log
#dm,resi,rank,s = np.linalg.lstsq(Cmlog_inv,A1.dot(res))
#S = mu Cd + J.Cm.J^t
#S = int Ri Rj k^2 exp(m(x) + m(x')) sigma^2 exp(-|x-x'|/L) + Cd
#K int dV Cm(x,x') J(x') del(i)
P1 = Cm_log.dot(J.transpose())
smooth = np.linalg.pinv(mu*Cd + J.dot(P1))
dm = P1.dot(smooth).dot(res)
res = np.sum(dm**2)/np.sum(mlog**2)
print("Iter-{0} res: {1}".format(iter,res))
#per-cell learning rate proportional to ray coverage of the cells
#print(dm)
mlog -= dm*C
iter += 1
CmlogPost = Cm_log - P1.dot(smooth).dot(P1.transpose())
cmlin = transformCov2Linear(CmlogPost,K)
#print(CmlogPost)
#mMl,cmlin = metropolisPosteriorCovariance(G,dobs,Cd,CmlogPost,mlog,K)
#print(mMl - K*np.exp(mlog))
#print(transformCov2Linear(CmlogPost,K) - cmlin)
return K*np.exp(mlog), cmlin
if __name__=='__main__':
np.random.seed(1234)
#testSquare()
#testSweep()
SimulatedDataInversion(4,noise=None)
#testThreadedFermat()
#testSmoothify()
#testcseLam()
```
Using lofar array
WARNING: Tried to get polar motions for times after IERS data is valid. Defaulting to polar motion from the 50-yr mean for those.
If you need enough precision such that this matters (~<10 arcsec), you can
download the latest IERS predictions by running:
>>> from astropy.utils.data import download_file
>>> from astropy.utils import iers
>>> iers.IERS.iers_table = iers.IERS_A.open(download_file(iers.IERS_A_URL, cache=True))
[astropy.coordinates.builtin_frames.utils]
Simulating observation on 2017-02-07T15:37:00.000: <SkyCoord (ICRS): (ra, dec) in deg
(18.78277889, 52.82321826)>
Using 10 stations
[[ 5.19440041e+00 -2.72439778e+01 -3.70815110e-02]
[ 2.18090006e+00 -6.49589411e+01 -2.99629857e-01]
[ -4.72236748e+00 -1.69407482e+00 2.11281777e-02]
[ -6.76306945e+00 -2.69925381e+00 1.98400188e-02]
[ -1.08513739e+01 -1.24399002e+01 4.84456806e-03]
[ -4.74886384e+01 -1.64321721e+01 -1.88068457e-01]
[ -6.14736033e+00 1.15285624e+01 1.36414320e-02]
[ -3.91072222e+00 1.97198299e+01 -2.15157932e-02]
[ -3.26139435e+01 7.38267739e+00 -6.89811048e-02]
[ 5.75474767e-01 3.30397868e+00 1.41152971e-02]]
Creating 5 directions in FOV of 0.0698131700798
[[ 0.34427753 0.88036619]
[ 0.37783224 0.91102547]
[ 0.30266804 0.95290691]
[ 0.35782663 0.89972022]
[ 0.32836924 0.84365461]]
Setting up tri cubic interpolator
Generated IRI symbolic function with 9 params
Creating priori model
created an a priori cube of shape: (27L, 27L, 201L)
Creating perturbed model
Adding amp:1.00e+09 at: [0, 0, 200.0] scale:40.00
Creating TCI object
Creating fermat object - based on a priori (second order corrections require iterating this)
Integrating rays with fermats principle
Time (total/per ray): 0.01 / 1.20e-04 s
Setting up ray chunks for 4 threads
Creating dTec simulated data
Job execution statistics:
job count | % of all jobs | job time sum | time per job | job server
```python
import numpy as np
1./np.tan(0.5*np.pi/180.) * 1
```
114.58865012930961
```python
```
*Source: src/ionotomo/notebooks/FermatPrincipleTricubic.ipynb from Joshuaalbert/IonoTomo (Apache-2.0)*
## Histograms of Oriented Gradients (HOG)
As we saw with the ORB algorithm, we can use keypoints in images to do keypoint-based matching to detect objects in images. These types of algorithms work well when you want to detect objects that have a lot of consistent internal features that are not affected by the background. For example, they work well for facial detection because faces have many consistent internal features that are not affected by the image background, such as the eyes, nose, and mouth. However, these types of algorithms don't work so well when attempting more general object recognition, for example, pedestrian detection in images. The reason is that people don't have consistent internal features the way faces do, because the body shape and style of every person is different (see Fig. 1).
<br>
<figure>
<figcaption style = "text-align:left; font-style:italic">Fig. 1. - Pedestrians.</figcaption>
</figure>
<br>
One option is to try to detect pedestrians by their contours instead. Detecting objects in images by their contours (boundaries) is very challenging because of the varying contrast between the background and the foreground. For example, suppose you wanted to detect a pedestrian who is walking in front of a white building and is wearing a white coat and black pants (see Fig. 2). Since the background of the image is mostly white, the black pants will have very high contrast, but the coat, since it is white as well, will have very low contrast. In this case, detecting the edges of the pants will be easy, but detecting the edges of the coat will be very difficult. This is where **HOG** comes in. HOG stands for **Histograms of Oriented Gradients** and it was first introduced by Navneet Dalal and Bill Triggs in 2005.
<br>
<figure>
<figcaption style = "text-align:left; font-style:italic">Fig. 2. - High and Low Contrast.</figcaption>
</figure>
<br>
The HOG algorithm works by creating histograms of the distribution of gradient orientations in an image and then normalizing them in a very special way. This special normalization is what makes HOG so effective at detecting the edges of objects even in cases where the contrast is very low. These normalized histograms are put together into a feature vector, known as the HOG descriptor, that can be used to train a machine learning algorithm, such as a Support Vector Machine (SVM), to detect objects in images based on their boundaries (edges). Due to its great success and reliability, HOG has become one of the most widely used algorithms in computer vision for object detection.
In this notebook, you will learn:
* How the HOG algorithm works
* How to use OpenCV to create a HOG descriptor
* How to visualize the HOG descriptor.
# The HOG Algorithm
As its name suggests, the HOG algorithm is based on creating histograms from the orientations of image gradients. It is implemented in a series of steps:
1. Given the image of a particular object, set a detection window (region of interest) that covers the entire object in the image (see Fig. 3).
2. Calculate the magnitude and direction of the gradient for each individual pixel in the detection window.
3. Divide the detection window into connected *cells* of pixels, with all cells being of the same size (see Fig. 3). The size of the cells is a free parameter and it is usually chosen so as to match the scale of the features to be detected. For example, in a 64 x 128 pixel detection window, square cells 6 to 8 pixels wide are suitable for detecting human limbs.
4. Create a Histogram for each cell, by first grouping the gradient directions of all pixels in each cell into a particular number of orientation (angular) bins; and then adding up the gradient magnitudes of the gradients in each angular bin (see Fig. 3). The number of bins in the histogram is a free parameter and it is usually set to 9 angular bins.
5. Group adjacent cells into *blocks* (see Fig. 3). The number of cells in each block is a free parameter and all blocks must be of the same size. The distance between each block (known as the stride) is a free parameter but it is usually set to half the block size, in which case you will get overlapping blocks (*see video below*). The HOG algorithm has been shown empirically to work better with overlapping blocks.
6. Use the cells contained within each block to normalize the cell histograms in that block (see Fig. 3). If you have overlapping blocks this means that most cells will be normalized with respect to different blocks (*see video below*). Therefore, the same cell may have several different normalizations.
7. Collect all the normalized histograms from all the blocks into a single feature vector called the HOG descriptor.
8. Use the resulting HOG descriptors from many images of the same type of object to train a machine learning algorithm, such as an SVM, to detect those type of objects in images. For example, you could use the HOG descriptors from many images of pedestrians to train an SVM to detect pedestrians in images. The training is done with both positive and negative examples of the object you want to detect in the image.
9. Once the SVM has been trained, a sliding window approach is used to try to detect and locate objects in images. Detecting an object in the image entails finding the part of the image that looks similar to the HOG pattern learned by the SVM.
<br>
<figure>
<figcaption style = "text-align:left; font-style:italic">Fig. 3. - HOG Diagram.</figcaption>
</figure>
<br>
<figure>
<figcaption style = "text-align:left; font-style:italic">Vid. 1. - HOG Animation.</figcaption>
</figure>
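As a rough illustration of step 2 above (this sketch is not part of the original notebook), the per-pixel gradient magnitudes and unsigned orientations can be computed with simple Sobel filters. The image path is the triangle-tile image used later in this notebook and is otherwise a placeholder; OpenCV's `HOGDescriptor`, used further down, performs these steps internally.
```python
import cv2
import numpy as np

# Load a gray scale image (placeholder path) and work in float32
gray = cv2.imread('./images/triangle_tile.jpeg', cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Step 2: gradients in x and y using 1-D derivative (Sobel) filters
gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=1)
gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=1)

# Per-pixel gradient magnitude and direction in degrees
magnitude, angle = cv2.cartToPolar(gx, gy, angleInDegrees=True)

# HOG uses unsigned gradients, so fold the angles into the range [0, 180)
angle = angle % 180

print('magnitude shape:', magnitude.shape)
print('angle range: [{:.1f}, {:.1f}) degrees'.format(angle.min(), angle.max()))
```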
# Why The HOG Algorithm Works
As we learned above, HOG creates histograms by adding the magnitude of the gradients in particular orientations in localized portions of the image called *cells*. By doing this we guarantee that stronger gradients will contribute more to the magnitude of their respective angular bin, while the effects of weak and randomly oriented gradients resulting from noise are minimized. In this manner the histograms tell us the dominant gradient orientation of each cell.
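As an illustrative, simplified example of this, the sketch below builds a magnitude-weighted, 9-bin orientation histogram for a single hypothetical cell with NumPy. Unlike real HOG it assigns each pixel to exactly one bin rather than splitting the vote between neighbouring bins, and the pixel values are random placeholders.
```python
import numpy as np

# Hypothetical per-pixel magnitudes and unsigned angles for one 8 x 8 cell
np.random.seed(0)
cell_magnitude = np.random.rand(8, 8)
cell_angle = np.random.rand(8, 8) * 180.0

# 9 angular bins covering [0, 180); each pixel votes with its gradient magnitude
hist, bin_edges = np.histogram(cell_angle,
                               bins=9,
                               range=(0.0, 180.0),
                               weights=cell_magnitude)
print(bin_edges)
print(hist)  # the peak of this histogram is the cell's dominant orientation
```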
### Dealing with contrast
Now, the magnitude of the dominant orientation can vary widely due to variations in local illumination and the contrast between the background and the foreground.
To account for the background-foreground contrast differences, the HOG algorithm tries to detect edges locally. In order to do this, it defines groups of cells, called **blocks**, and normalizes the histograms using this local group of cells. By normalizing locally, the HOG algorithm can detect the edges in each block very reliably; this is called **block normalization**.
In addition to using block normalization, the HOG algorithm also uses overlapping blocks to increase its performance. By using overlapping blocks, each cell contributes several independent components to the final HOG descriptor, where each component corresponds to a cell being normalized with respect to a different block. This may seem redundant, but it has been shown empirically that by normalizing each cell several times with respect to different local blocks, the performance of the HOG algorithm increases dramatically.
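To make block normalization concrete, here is a minimal NumPy sketch of the L2-Hys scheme described above, applied to the concatenated cell histograms of a single block. The block values, clipping threshold, and epsilon are illustrative assumptions, not values taken from OpenCV.
```python
import numpy as np

def l2_hys_normalize(block_vector, clip=0.2, eps=1e-5):
    """L2-normalize a block's concatenated cell histograms, clip large values,
    then renormalize (the 'L2-Hys' scheme used by HOG)."""
    v = block_vector / np.sqrt(np.sum(block_vector ** 2) + eps ** 2)  # L2 norm
    v = np.minimum(v, clip)                                           # clip at threshold
    v = v / np.sqrt(np.sum(v ** 2) + eps ** 2)                        # renormalize
    return v

# Example: a block of 2 x 2 cells with 9-bin histograms -> 36 values (made-up numbers)
block = np.random.rand(36) * 100.
normalized = l2_hys_normalize(block)
print(np.linalg.norm(normalized))  # close to 1 after renormalization
```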
### Loading Images and Importing Resources
The first step in building our HOG descriptor is to load the required packages into Python and to load our image.
We start by using OpenCV to load an image of a triangle tile. Since the `cv2.imread()` function loads images as BGR, we will convert our image to RGB so we can display it with the correct colors. As usual, we will convert our BGR image to Gray Scale for analysis.
```python
import cv2
import numpy as np
import matplotlib.pyplot as plt
# Set the default figure size
plt.rcParams['figure.figsize'] = [17.0, 7.0]
# Load the image
image = cv2.imread('./images/triangle_tile.jpeg')
# Convert the original image to RGB
original_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# Convert the original image to gray scale
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Print the shape of the original and gray scale images
print('The original image has shape: ', original_image.shape)
print('The gray scale image has shape: ', gray_image.shape)
# Display the images
plt.subplot(121)
plt.imshow(original_image)
plt.title('Original Image')
plt.subplot(122)
plt.imshow(gray_image, cmap='gray')
plt.title('Gray Scale Image')
plt.show()
```
# Creating The HOG Descriptor
We will be using OpenCV's `HOGDescriptor` class to create the HOG descriptor. The parameters of the HOG descriptor are set up using the `HOGDescriptor()` function. The parameters of the `HOGDescriptor()` function and their default values are given below:
`cv2.HOGDescriptor(win_size = (64, 128),
block_size = (16, 16),
block_stride = (8, 8),
cell_size = (8, 8),
nbins = 9,
win_sigma = DEFAULT_WIN_SIGMA,
threshold_L2hys = 0.2,
gamma_correction = true,
nlevels = DEFAULT_NLEVELS)`
Parameters:
* **win_size** – *Size*
Size of detection window in pixels (*width, height*). Defines the region of interest. Must be an integer multiple of cell size.
* **block_size** – *Size*
Block size in pixels (*width, height*). Defines how many cells are in each block. Must be an integer multiple of cell size and it must be smaller than the detection window. The smaller the block the finer detail you will get.
* **block_stride** – *Size*
Block stride in pixels (*horizontal, vertical*). It must be an integer multiple of cell size. The `block_stride` defines the distance between adjacent blocks, for example, 8 pixels horizontally and 8 pixels vertically. A longer `block_stride` makes the algorithm run faster (because fewer blocks are evaluated) but the algorithm may not perform as well.
* **cell_size** – *Size*
Cell size in pixels (*width, height*). Determines the size of your cell. The smaller the cell, the finer the detail you will get.
* **nbins** – *int*
Number of bins for the histograms. Determines the number of angular bins used to make the histograms. With more bins you capture more gradient directions. HOG uses unsigned gradients, so the angular bins will have values between 0 and 180 degrees.
* **win_sigma** – *double*
Gaussian smoothing window parameter. The performance of the HOG algorithm can be improved by smoothing the pixels near the edges of the blocks by applying a Gaussian spatial window to each pixel before computing the histograms.
* **threshold_L2hys** – *double*
L2-Hys (Lowe-style clipped L2 norm) normalization method shrinkage. The L2-Hys method is used to normalize the blocks and it consists of an L2-norm followed by clipping and a renormalization. The clipping limits the maximum value of the descriptor vector for each block to have the value of the given threshold (0.2 by default). After the clipping the descriptor vector is renormalized as described in *IJCV*, 60(2):91-110, 2004.
* **gamma_correction** – *bool*
Flag to specify whether the gamma correction preprocessing is required or not. Performing gamma correction slightly increases the performance of the HOG algorithm.
* **nlevels** – *int*
Maximum number of detection window increases.
As we can see, the `cv2.HOGDescriptor()` function supports a wide range of parameters. The first few arguments (`block_size, block_stride, cell_size`, and `nbins`) are probably the ones you are most likely to change. The other parameters can be safely left at their default values and you will get good results.
In the code below, we will use the `cv2.HOGDescriptor()` function to set the cell size, block size, block stride, and the number of bins for the histograms of the HOG descriptor. We will then use the `.compute(image)` method to compute the HOG descriptor (feature vector) for the given `image`.
```python
# Specify the parameters for our HOG descriptor
# Cell Size in pixels (width, height). Must be smaller than the size of the detection window
# and must be chosen so that the resulting Block Size is smaller than the detection window.
cell_size = (6, 6)
# Number of cells per block in each direction (x, y). Must be chosen so that the resulting
# Block Size is smaller than the detection window
num_cells_per_block = (2, 2)
# Block Size in pixels (width, height). Must be an integer multiple of Cell Size.
# The Block Size must be smaller than the detection window
block_size = (num_cells_per_block[0] * cell_size[0],
num_cells_per_block[1] * cell_size[1])
# Calculate the number of cells that fit in our image in the x and y directions
x_cells = gray_image.shape[1] // cell_size[0]
y_cells = gray_image.shape[0] // cell_size[1]
# Horizontal distance between blocks in units of Cell Size. Must be an integer and it must
# be set such that (x_cells - num_cells_per_block[0]) / h_stride = integer.
h_stride = 1
# Vertical distance between blocks in units of Cell Size. Must be an integer and it must
# be set such that (y_cells - num_cells_per_block[1]) / v_stride = integer.
v_stride = 1
# Block Stride in pixels (horizontal, vertical). Must be an integer multiple of Cell Size
block_stride = (cell_size[0] * h_stride, cell_size[1] * v_stride)
# Number of gradient orientation bins
num_bins = 9
# Specify the size of the detection window (Region of Interest) in pixels (width, height).
# It must be an integer multiple of Cell Size and it must cover the entire image. Because
# the detection window must be an integer multiple of cell size, depending on the size of
# your cells, the resulting detection window might be slightly smaller than the image.
# This is perfectly ok.
win_size = (x_cells * cell_size[0] , y_cells * cell_size[1])
# Print the shape of the gray scale image for reference
print('\nThe gray scale image has shape: ', gray_image.shape)
print()
# Print the parameters of our HOG descriptor
print('HOG Descriptor Parameters:\n')
print('Window Size:', win_size)
print('Cell Size:', cell_size)
print('Block Size:', block_size)
print('Block Stride:', block_stride)
print('Number of Bins:', num_bins)
print()
# Set the parameters of the HOG descriptor using the variables defined above
hog = cv2.HOGDescriptor(win_size, block_size, block_stride, cell_size, num_bins)
# Compute the HOG Descriptor for the gray scale image
hog_descriptor = hog.compute(gray_image)
```
The gray scale image has shape: (250, 250)
HOG Descriptor Parameters:
Window Size: (246, 246)
Cell Size: (6, 6)
Block Size: (12, 12)
Block Stride: (6, 6)
Number of Bins: 9
# Number of Elements In The HOG Descriptor
The resulting HOG Descriptor (feature vector), contains the normalized histograms from all cells from all blocks in the detection window concatenated in one long vector. Therefore, the size of the HOG feature vector will be given by the total number of blocks in the detection window, multiplied by the number of cells per block, times the number of orientation bins:
<span class="mathquill">
\begin{equation}
\mbox{total_elements} = (\mbox{total_number_of_blocks})\mbox{ } \times \mbox{ } (\mbox{number_cells_per_block})\mbox{ } \times \mbox{ } (\mbox{number_of_bins})
\end{equation}
</span>
If we don’t have overlapping blocks (*i.e.* the `block_stride` equals the `block_size`), the total number of blocks can be easily calculated by dividing the size of the detection window by the block size. However, in the general case we have to take into account the fact that we have overlapping blocks. To find the total number of blocks in the general case (*i.e.* for any `block_stride` and `block_size`), we can use the formula given below:
<span class="mathquill">
\begin{equation}
\mbox{Total}_i = \left( \frac{\mbox{block_size}_i}{\mbox{block_stride}_i} \right)\left( \frac{\mbox{window_size}_i}{\mbox{block_size}_i} \right) - \left [\left( \frac{\mbox{block_size}_i}{\mbox{block_stride}_i} \right) - 1 \right]; \mbox{ for } i = x,y
\end{equation}
</span>
Where <span class="mathquill">Total$_x$</span> is the total number of blocks along the width of the detection window, and <span class="mathquill">Total$_y$</span> is the total number of blocks along the height of the detection window. This formula for <span class="mathquill">Total$_x$</span> and <span class="mathquill">Total$_y$</span> takes into account the extra blocks that result from overlapping. After calculating <span class="mathquill">Total$_x$</span> and <span class="mathquill">Total$_y$</span>, we can get the total number of blocks in the detection window by multiplying <span class="mathquill">Total$_x$ $\times$ Total$_y$</span>. The above formula can be simplified considerably because the `block_size`, `block_stride`, and `window_size` are all defined in terms of the `cell_size`. By making all the appropriate substitutions and cancellations the above formula reduces to:
<span class="mathquill">
\begin{equation}
\mbox{Total}_i = \left(\frac{\mbox{cells}_i - \mbox{num_cells_per_block}_i}{N_i}\right) + 1\mbox{ }; \mbox{ for } i = x,y
\end{equation}
</span>
Where <span class="mathquill">cells$_x$</span> is the total number of cells along the width of the detection window, and <span class="mathquill">cells$_y$</span>, is the total number of cells along the height of the detection window. And <span class="mathquill">$N_x$</span> is the horizontal block stride in units of `cell_size` and <span class="mathquill">$N_y$</span> is the vertical block stride in units of `cell_size`.
Let's calculate what the number of elements for the HOG feature vector should be and check that it matches the shape of the HOG Descriptor calculated above.
```python
# Calculate the total number of blocks along the width of the detection window
tot_bx = np.uint32(((x_cells - num_cells_per_block[0]) / h_stride) + 1)
# Calculate the total number of blocks along the height of the detection window
tot_by = np.uint32(((y_cells - num_cells_per_block[1]) / v_stride) + 1)
# Calculate the total number of elements in the feature vector
tot_els = (tot_bx) * (tot_by) * num_cells_per_block[0] * num_cells_per_block[1] * num_bins
# Print the total number of elements the HOG feature vector should have
print('\nThe total number of elements in the HOG Feature Vector should be: ',
tot_bx, 'x',
tot_by, 'x',
num_cells_per_block[0], 'x',
num_cells_per_block[1], 'x',
num_bins, '=',
tot_els)
# Print the shape of the HOG Descriptor to see that it matches the above
print('\nThe HOG Descriptor has shape:', hog_descriptor.shape)
print()
```
The total number of elements in the HOG Feature Vector should be: 40 x 40 x 2 x 2 x 9 = 57600
The HOG Descriptor has shape: (57600, 1)
# Visualizing The HOG Descriptor
We can visualize the HOG Descriptor by plotting the histogram associated with each cell as a collection of vectors. To do this, we will plot each bin in the histogram as a single vector whose magnitude is given by the height of the bin and its orientation is given by the angular bin that its associated with. Since any given cell might have multiple histograms associated with it, due to the overlapping blocks, we will choose to average all the histograms for each cell to produce a single histogram for each cell.
OpenCV has no easy way to visualize the HOG Descriptor, so we have to do some manipulation first in order to visualize it. We will start by reshaping the HOG Descriptor in order to make our calculations easier. We will then compute the average histogram of each cell and finally we will convert the histogram bins into vectors. Once we have the vectors, we plot the corresponding vectors for each cell in an image.
The code below produces an interactive plot so that you can interact with the figure. The figure contains:
* the grayscale image,
* the HOG Descriptor (feature vector),
* a zoomed-in portion of the HOG Descriptor, and
* the histogram of the selected cell.
**You can click anywhere on the gray scale image or the HOG Descriptor image to select a particular cell**. Once you click on either image a *magenta* rectangle will appear showing the cell you selected. The Zoom Window will show you a zoomed in version of the HOG descriptor around the selected cell; and the histogram plot will show you the corresponding histogram for the selected cell. The interactive window also has buttons at the bottom that allow for other functionality, such as panning, and giving you the option to save the figure if desired. The home button returns the figure to its default value.
**NOTE**: If you are running this notebook in the Udacity workspace, there is around a 2 second lag in the interactive plot. This means that if you click in the image to zoom in, it will take about 2 seconds for the plot to refresh.
```python
%matplotlib notebook
import copy
import matplotlib.patches as patches
# Set the default figure size
plt.rcParams['figure.figsize'] = [9.8, 9]
# Reshape the feature vector to [blocks_y, blocks_x, num_cells_per_block_x, num_cells_per_block_y, num_bins].
# The blocks_x and blocks_y will be transposed so that the first index (blocks_y) refers to the row number
# and the second index to the column number. This will be useful later when we plot the feature vector, so
# that the feature vector indexing matches the image indexing.
hog_descriptor_reshaped = hog_descriptor.reshape(tot_bx,
tot_by,
num_cells_per_block[0],
num_cells_per_block[1],
num_bins).transpose((1, 0, 2, 3, 4))
# Print the shape of the feature vector for reference
print('The feature vector has shape:', hog_descriptor.shape)
# Print the reshaped feature vector
print('The reshaped feature vector has shape:', hog_descriptor_reshaped.shape)
# Create an array that will hold the average gradients for each cell
ave_grad = np.zeros((y_cells, x_cells, num_bins))
# Print the shape of the ave_grad array for reference
print('The average gradient array has shape: ', ave_grad.shape)
# Create an array that will count the number of histograms per cell
hist_counter = np.zeros((y_cells, x_cells, 1))
# Add up all the histograms for each cell and count the number of histograms per cell
for i in range (num_cells_per_block[0]):
for j in range(num_cells_per_block[1]):
ave_grad[i:tot_by + i,
j:tot_bx + j] += hog_descriptor_reshaped[:, :, i, j, :]
hist_counter[i:tot_by + i,
j:tot_bx + j] += 1
# Calculate the average gradient for each cell
ave_grad /= hist_counter
# Calculate the total number of vectors we have in all the cells.
len_vecs = ave_grad.shape[0] * ave_grad.shape[1] * ave_grad.shape[2]
# Create an array of num_bins angles equally spaced between 0 and 180 degrees (in radians).
deg = np.linspace(0, np.pi, num_bins, endpoint = False)
# Each cell will have a histogram with num_bins. For each cell, plot each bin as a vector (with its magnitude
# equal to the height of the bin in the histogram, and its angle corresponding to the bin in the histogram).
# To do this, create rank 1 arrays that will hold the (x,y)-coordinate of all the vectors in all the cells in the
# image. Also, create the rank 1 arrays that will hold all the (U,V)-components of all the vectors in all the
# cells in the image. Create the arrays that will hold all the vector positons and components.
U = np.zeros((len_vecs))
V = np.zeros((len_vecs))
X = np.zeros((len_vecs))
Y = np.zeros((len_vecs))
# Set the counter to zero
counter = 0
# Use the cosine and sine functions to calculate the vector components (U,V) from their magnitudes. Remember the
# cosine and sine functions take angles in radians. Calculate the vector positions and magnitudes from the
# average gradient array
for i in range(ave_grad.shape[0]):
for j in range(ave_grad.shape[1]):
for k in range(ave_grad.shape[2]):
U[counter] = ave_grad[i,j,k] * np.cos(deg[k])
V[counter] = ave_grad[i,j,k] * np.sin(deg[k])
X[counter] = (cell_size[0] / 2) + (cell_size[0] * i)
Y[counter] = (cell_size[1] / 2) + (cell_size[1] * j)
counter = counter + 1
# Create the bins in degrees to plot our histogram.
angle_axis = np.linspace(0, 180, num_bins, endpoint = False)
angle_axis += ((angle_axis[1] - angle_axis[0]) / 2)
# Create a figure with 4 subplots arranged in 2 x 2
fig, ((a,b),(c,d)) = plt.subplots(2,2)
# Set the title of each subplot
a.set(title = 'Gray Scale Image\n(Click to Zoom)')
b.set(title = 'HOG Descriptor\n(Click to Zoom)')
c.set(title = 'Zoom Window', xlim = (0, 18), ylim = (0, 18), autoscale_on = False)
d.set(title = 'Histogram of Gradients')
# Plot the gray scale image
a.imshow(gray_image, cmap = 'gray')
a.set_aspect(aspect = 1)
# Plot the feature vector (HOG Descriptor)
b.quiver(Y, X, U, V, color = 'white', headwidth = 0, headlength = 0, scale_units = 'inches', scale = 5)
b.invert_yaxis()
b.set_aspect(aspect = 1)
b.set_facecolor('black')
# Define function for interactive zoom
def onpress(event):
#Unless the left mouse button is pressed do nothing
if event.button != 1:
return
# Only accept clicks for subplots a and b
if event.inaxes in [a, b]:
# Get mouse click coordinates
x, y = event.xdata, event.ydata
# Select the cell closest to the mouse click coordinates
cell_num_x = np.uint32(x / cell_size[0])
cell_num_y = np.uint32(y / cell_size[1])
# Set the edge coordinates of the rectangle patch
edgex = x - (x % cell_size[0])
edgey = y - (y % cell_size[1])
# Create a rectangle patch that matches the cell selected above
rect = patches.Rectangle((edgex, edgey),
cell_size[0], cell_size[1],
linewidth = 1,
edgecolor = 'magenta',
facecolor='none')
# A single patch can only be used in a single plot. Create copies
# of the patch to use in the other subplots
rect2 = copy.copy(rect)
rect3 = copy.copy(rect)
# Update all subplots
a.clear()
a.set(title = 'Gray Scale Image\n(Click to Zoom)')
a.imshow(gray_image, cmap = 'gray')
a.set_aspect(aspect = 1)
a.add_patch(rect)
b.clear()
b.set(title = 'HOG Descriptor\n(Click to Zoom)')
b.quiver(Y, X, U, V, color = 'white', headwidth = 0, headlength = 0, scale_units = 'inches', scale = 5)
b.invert_yaxis()
b.set_aspect(aspect = 1)
b.set_facecolor('black')
b.add_patch(rect2)
c.clear()
c.set(title = 'Zoom Window')
c.quiver(Y, X, U, V, color = 'white', headwidth = 0, headlength = 0, scale_units = 'inches', scale = 1)
c.set_xlim(edgex - cell_size[0], edgex + (2 * cell_size[0]))
c.set_ylim(edgey - cell_size[1], edgey + (2 * cell_size[1]))
c.invert_yaxis()
c.set_aspect(aspect = 1)
c.set_facecolor('black')
c.add_patch(rect3)
d.clear()
d.set(title = 'Histogram of Gradients')
d.grid()
d.set_xlim(0, 180)
d.set_xticks(angle_axis)
d.set_xlabel('Angle')
d.bar(angle_axis,
ave_grad[cell_num_y, cell_num_x, :],
180 // num_bins,
align = 'center',
alpha = 0.5,
linewidth = 1.2,
edgecolor = 'k')
fig.canvas.draw()
# Create a connection between the figure and the mouse click
fig.canvas.mpl_connect('button_press_event', onpress)
plt.show()
```
The feature vector has shape: (57600, 1)
The reshaped feature vector has shape: (40, 40, 2, 2, 9)
The average gradient array has shape: (41, 41, 9)
<IPython.core.display.Javascript object>
# Understanding The Histograms
Let's take a look at a couple of snapshots of the above figure to see if the histograms for the selected cell make sense. Let's start by looking at a cell that is inside a triangle and not near an edge:
<br>
<figure>
<figcaption style = "text-align:center; font-style:italic">Fig. 4. - Histograms Inside a Triangle.</figcaption>
</figure>
<br>
In this case, since the triangle is nearly all the same color, there shouldn't be any dominant gradient in the selected cell. As we can clearly see in the Zoom Window and the histogram, this is indeed the case. We have many gradients, but none of them clearly dominates over the others.
Now let’s take a look at a cell that is near a horizontal edge:
<br>
<figure>
<figcaption style = "text-align:center; font-style:italic">Fig. 5. - Histograms Near a Horizontal Edge.</figcaption>
</figure>
<br>
Remember that edges are areas of an image where the intensity changes abruptly. In these cases, we will have a high intensity gradient in some particular direction. This is exactly what we see in the corresponding histogram and Zoom Window for the selected cell. In the Zoom Window, we can see that the dominant gradient is pointing up, almost at 90 degrees, since that’s the direction in which there is a sharp change in intensity. Therefore, we should expect to see the 90-degree bin in the histogram to dominate strongly over the others. This is in fact what we see.
Now let’s take a look at a cell that is near a vertical edge:
<br>
<figure>
<figcaption style = "text-align:center; font-style:italic">Fig. 6. - Histograms Near a Vertical Edge.</figcaption>
</figure>
<br>
In this case we expect the dominant gradient in the cell to be horizontal, close to 180 degrees, since that’s the direction in which there is a sharp change in intensity. Therefore, we should expect to see the 170-degree bin in the histogram to dominate strongly over the others. This is what we see in the histogram but we also see that there is another dominant gradient in the cell, namely the one in the 10-degree bin. The reason for this, is because the HOG algorithm is using unsigned gradients, which means 0 degrees and 180 degrees are considered the same. Therefore, when the histograms are being created, angles between 160 and 180 degrees, contribute proportionally to both the 10-degree bin and the 170-degree bin. This results in there being two dominant gradients in the cell near the vertical edge instead of just one.
To conclude let’s take a look at a cell that is near a diagonal edge.
<br>
<figure>
<figcaption style = "text-align:center; font-style:italic">Fig. 7. - Histograms Near a Diagonal Edge.</figcaption>
</figure>
<br>
To understand what we are seeing, let’s first remember that gradients have an *x*-component and a *y*-component, just like vectors. Therefore, the resulting orientation of a gradient is given by the vector sum of its components. For this reason, on vertical edges the gradients are horizontal, because they only have an *x*-component, as we saw in Figure 6, while on horizontal edges the gradients are vertical, because they only have a *y*-component, as we saw in Figure 5. Consequently, on diagonal edges, the gradients are also going to be diagonal because both the *x* and *y* components are non-zero. Since the diagonal edges in the image are close to 45 degrees, we should expect to see a dominant gradient orientation in the 50-degree bin. This is in fact what we see in the histogram but, just like in Figure 6, we see there are two dominant gradients instead of just one. The reason for this is that when the histograms are being created, angles that are near the boundaries of bins contribute proportionally to the adjacent bins. For example, a gradient with an angle of 40 degrees is right in the middle of the 30-degree and 50-degree bins. Therefore, the magnitude of the gradient is split evenly between the 30-degree and 50-degree bins. This results in there being two dominant gradients in the cell near the diagonal edge instead of just one.
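The proportional splitting between adjacent angular bins described above can be sketched in a few lines. The helper below is purely illustrative (it is not how OpenCV implements it internally); it distributes a single gradient's magnitude between the two nearest unsigned-angle bin centers, which for 9 bins sit at 10, 30, ..., 170 degrees, and the input numbers are made up.
```python
import numpy as np

def split_into_bins(angle_deg, magnitude, num_bins=9):
    """Distribute a gradient's magnitude between the two adjacent unsigned-angle bins."""
    bin_width = 180.0 / num_bins                                   # 20 degrees per bin
    centers = bin_width / 2 + bin_width * np.arange(num_bins)      # 10, 30, ..., 170
    hist = np.zeros(num_bins)
    # index of the bin whose center lies just below (or at) the angle, with wrap-around
    lo = int(np.floor((angle_deg - bin_width / 2) / bin_width)) % num_bins
    hi = (lo + 1) % num_bins
    # fraction of the way from the lower bin center to the upper bin center
    frac = ((angle_deg - bin_width / 2) / bin_width) % 1.0
    hist[lo] += magnitude * (1 - frac)
    hist[hi] += magnitude * frac
    return centers, hist

# A 40-degree gradient sits halfway between the 30- and 50-degree bin centers,
# so its magnitude is split evenly between them.
centers, hist = split_into_bins(40.0, magnitude=1.0)
print(dict(zip(centers, hist)))
```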
Now that you know how HOG is implemented, in the workspace you will find a notebook named *Examples*. In there, you will be able to set your own parameters for the HOG descriptor for various images. Have fun!
*Source: 1_4_Feature_Vectors/3_1. HOG.ipynb from georgiagn/CVND_Exercises (MIT)*
```python
%matplotlib inline
import numpy as np
import pylab as plt
import pandas as pd
from sklearn import svm
from sklearn.metrics import classification_report,confusion_matrix,accuracy_score
```
```python
np.random.seed(0)
X = np.r_[np.random.randn(20, 2) - [2, 2], np.random.randn(20, 2) + [2, 2]]
Y = [0] * 20 + [1] * 20
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.coolwarm, s=25)
```
```python
fig, ax = plt.subplots()
clf2 = svm.LinearSVC(C=1).fit(X, Y)
# get the separating hyperplane
w = clf2.coef_[0]
a = -w[0] / w[1]
xx = np.linspace(-5, 5)
yy = a * xx - (clf2.intercept_[0]) / w[1]
# create a mesh to plot in
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx2, yy2 = np.meshgrid(np.arange(x_min, x_max, .2),
np.arange(y_min, y_max, .2))
Z = clf2.predict(np.c_[xx2.ravel(), yy2.ravel()])
Z = Z.reshape(xx2.shape)
ax.contourf(xx2, yy2, Z, cmap=plt.cm.coolwarm, alpha=0.3)
ax.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.coolwarm, s=25)
ax.plot(xx,yy)
ax.axis([x_min, x_max,y_min, y_max])
plt.show()
```
```python
def make_region(X,Y,clf,ax):
'''Plot the decision regions of a fitted classifier over a scatter of (X, Y).
Note: a new figure and axes are created here, so the ax argument passed in is ignored.'''
fig, ax = plt.subplots()
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx2, yy2 = np.meshgrid(np.arange(x_min, x_max, .2),
np.arange(y_min, y_max, .2))
Z = clf.predict(np.c_[xx2.ravel(), yy2.ravel()])
Z = Z.reshape(xx2.shape)
ax.contourf(xx2, yy2, Z, cmap=plt.cm.coolwarm, alpha=0.3)
ax.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.coolwarm, s=25)
ax.axis([x_min, x_max,y_min, y_max])
```
**Supervised learning**:
- Linear models (Ridge, Lasso, Elastic Net, ...)
- Support Vector Machines
- Tree-based methods (Random Forests, Bagging, GBRT, ...)
- Nearest neighbors
- Neural networks (basics)
**Unsupervised learning**:
- Clustering (KMeans, Ward, ...)
- Outlier detection
## Accuracy and precision
1. precision:
The fraction of relevant instances among the retrieved instances.
2. recall:
The fraction of relevant instances that have been retrieved over the total amount of relevant instances.
3. F-score:
The harmonic mean of precision and recall.
* The precision is the ratio tp / (tp + fp) where tp is the number of true positives and fp the number of false positives. The precision is intuitively the ability of the classifier not to label as positive a sample that is negative.
* The recall is the ratio tp / (tp + fn) where tp is the number of true positives and fn the number of false negatives. The recall is intuitively the ability of the classifier to find all the positive samples.
* The F-beta score can be interpreted as a weighted harmonic mean of the precision and recall, where an F-beta score reaches its best value at 1 and worst score at 0. The F-beta score weights recall more than precision by a factor of beta. beta == 1.0 means recall and precision are equally important.
* The support is the number of occurrences of each class in y_true.
Take a look at [HERE](https://en.wikipedia.org/wiki/F1_score) or [HERE](https://en.wikipedia.org/wiki/Precision_and_recall).
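As a quick, self-contained illustration of these definitions (not part of the original notebook), the sketch below computes precision, recall, and F1 for the positive class directly from true-positive, false-positive, and false-negative counts and checks the result against `sklearn.metrics`; the label vectors are made up.
```python
import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score

# Made-up binary labels for illustration
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 0])

tp = np.sum((y_pred == 1) & (y_true == 1))   # true positives
fp = np.sum((y_pred == 1) & (y_true == 0))   # false positives
fn = np.sum((y_pred == 0) & (y_true == 1))   # false negatives

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

print(precision, recall, f1)
# Same numbers from scikit-learn
print(precision_score(y_true, y_pred), recall_score(y_true, y_pred), f1_score(y_true, y_pred))
```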
```python
# Generate data
from sklearn.datasets import make_blobs,make_circles,make_moons
X, y = make_blobs(n_samples=1000, centers=2,
cluster_std=1.5,
center_box=(-4.0, 4.0))
# X, y = make_circles(n_samples=1000, noise=.1, factor=.5)
# X,y = make_moons(n_samples=1000, noise=.2)
plt.scatter(X[:,0],X[:,1],c=y,)
```
```python
print(X[:3])
```
[[ 3.3168855 0.8283272 ]
[-2.64062115 -0.47053114]
[ 3.10295584 0.86800108]]
```python
print(y[:3])
```
[1 0 1]
```python
# X is a 2 dimensional array, with 1000 rows and 2 columns
print(X.shape)
# y is a vector of 1000 elements
print(y.shape)
```
(1000, 2)
(1000,)
```python
X_train, y_train = X[:700], y[:700]
X_test, y_test = X[700:], y[700:]
```
```python
fig,(ax1,ax2) = plt.subplots(1,2,figsize=(14,7))
ax1.scatter(X_train[:,0],X_train[:,1],c=y_train,)
ax2.scatter(X_test[:,0],X_test[:,1],c=y_test,)
```
## K-Nearest Neighbours
```python
# K-Nearest Neighbours
from sklearn.neighbors import KNeighborsClassifier
Model = KNeighborsClassifier(n_neighbors=8)
Model.fit(X_train, y_train)
y_pred = Model.predict(X_test)
# Summary of the predictions made by the classifier
print(classification_report(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
# Accuracy score
print('accuracy is',accuracy_score(y_pred,y_test))
```
precision recall f1-score support
0 0.98 0.99 0.98 155
1 0.99 0.98 0.98 145
micro avg 0.98 0.98 0.98 300
macro avg 0.98 0.98 0.98 300
weighted avg 0.98 0.98 0.98 300
[[153 2]
[ 3 142]]
accuracy is 0.9833333333333333
```python
# Compute (approximate) class probabilities
print(Model.predict_proba(X_test[:5]))
```
[[1. 0.]
[0. 1.]
[1. 0.]
[0. 1.]
[0. 1.]]
```python
make_region(X_test,y_test,Model,ax)
```
## Radius Neighbors Classifier
```python
from sklearn.neighbors import RadiusNeighborsClassifier
Model=RadiusNeighborsClassifier(radius=8.0)
Model.fit(X_train,y_train)
y_pred=Model.predict(X_test)
#summary of the predictions made by the classifier
print(classification_report(y_test,y_pred))
print(confusion_matrix(y_test,y_pred))
# Accuracy score
print('accuracy is ', accuracy_score(y_test,y_pred))
```
precision recall f1-score support
0 1.00 0.91 0.95 155
1 0.91 1.00 0.95 145
micro avg 0.95 0.95 0.95 300
macro avg 0.96 0.95 0.95 300
weighted avg 0.96 0.95 0.95 300
[[141 14]
[ 0 145]]
accuracy is 0.9533333333333334
```python
make_region(X_test,y_test,Model,ax)
```
## Naive Bayes
\begin{align}\begin{aligned}P(y \mid x_1, \dots, x_n) \propto P(y) \prod_{i=1}^{n} P(x_i \mid y)\\\Downarrow\\\hat{y} = \arg\max_y P(y) \prod_{i=1}^{n} P(x_i \mid y),\end{aligned}\end{align}
```python
# Naive Bayes
from sklearn.naive_bayes import GaussianNB
Model = GaussianNB()
Model.fit(X_train, y_train)
y_pred = Model.predict(X_test)
# Summary of the predictions made by the classifier
print(classification_report(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
# Accuracy score
print('accuracy is',accuracy_score(y_pred,y_test))
```
precision recall f1-score support
0 0.99 0.99 0.99 155
1 0.99 0.99 0.99 145
micro avg 0.99 0.99 0.99 300
macro avg 0.99 0.99 0.99 300
weighted avg 0.99 0.99 0.99 300
[[153 2]
[ 2 143]]
accuracy is 0.9866666666666667
```python
make_region(X_test,y_test,Model,ax)
```
## SVM
```python
# Support Vector Machine
from sklearn.svm import SVC
Model = SVC(kernel='linear')
# Model = svm.LinearSVC(C=1)
Model.fit(X_train, y_train)
y_pred = Model.predict(X_test)
# Summary of the predictions made by the classifier
print(classification_report(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
# Accuracy score
print('accuracy is',accuracy_score(y_pred,y_test))
```
precision recall f1-score support
0 0.99 0.99 0.99 155
1 0.99 0.99 0.99 145
micro avg 0.99 0.99 0.99 300
macro avg 0.99 0.99 0.99 300
weighted avg 0.99 0.99 0.99 300
[[153 2]
[ 2 143]]
accuracy is 0.9866666666666667
```python
make_region(X_test,y_test,Model,ax)
```
## Decision Tree
```python
# Decision Tree's
from sklearn.tree import DecisionTreeClassifier
Model = DecisionTreeClassifier()
Model.fit(X_train, y_train)
y_pred = Model.predict(X_test)
# Summary of the predictions made by the classifier
print(classification_report(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
# Accuracy score
print('accuracy is',accuracy_score(y_pred,y_test))
```
precision recall f1-score support
0 0.98 0.98 0.98 155
1 0.98 0.98 0.98 145
micro avg 0.98 0.98 0.98 300
macro avg 0.98 0.98 0.98 300
weighted avg 0.98 0.98 0.98 300
[[152 3]
[ 3 142]]
accuracy is 0.98
```python
y_pred = Model.predict(X_train)
# Summary of the predictions made by the classifier
print(classification_report(y_train, y_pred))
```
precision recall f1-score support
0 1.00 1.00 1.00 345
1 1.00 1.00 1.00 355
micro avg 1.00 1.00 1.00 700
macro avg 1.00 1.00 1.00 700
weighted avg 1.00 1.00 1.00 700
```python
make_region(X_train,y_train,Model,ax)
```
```python
make_region(X_test,y_test,Model,ax)
```
## RandomForest
```python
from sklearn.ensemble import RandomForestClassifier
Model=RandomForestClassifier(max_depth=2)
Model.fit(X_train,y_train)
y_pred=Model.predict(X_test)
print(classification_report(y_test,y_pred))
print(confusion_matrix(y_pred,y_test))
#Accuracy Score
print('accuracy is ',accuracy_score(y_pred,y_test))
```
precision recall f1-score support
0 0.99 0.98 0.99 155
1 0.98 0.99 0.99 145
micro avg 0.99 0.99 0.99 300
macro avg 0.99 0.99 0.99 300
weighted avg 0.99 0.99 0.99 300
[[152 1]
[ 3 144]]
accuracy is 0.9866666666666667
/home/gf/packages/anaconda3/lib/python3.6/site-packages/sklearn/ensemble/forest.py:246: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22.
"10 in version 0.20 to 100 in 0.22.", FutureWarning)
```python
# def visualize_classifier(model, X, y, ax=None, cmap='rainbow'):
# ax = ax or plt.gca()
# # Plot the training points
# ax.scatter(X[:, 0], X[:, 1], c=y, s=30, cmap=cmap,
# clim=(y.min(), y.max()), zorder=3)
# ax.axis('tight')
# ax.axis('off')
# xlim = ax.get_xlim()
# ylim = ax.get_ylim()
# # fit the estimator
# model.fit(X, y)
# xx, yy = np.meshgrid(np.linspace(*xlim, num=200),
# np.linspace(*ylim, num=200))
# Z = model.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
# # Create a color plot with the results
# n_classes = len(np.unique(y))
# contours = ax.contourf(xx, yy, Z, alpha=0.3,
# levels=np.arange(n_classes + 1) - 0.5,
# cmap=cmap,
# zorder=1)
# ax.set(xlim=xlim, ylim=ylim)
```
```python
make_region(X_test,y_test,Model,ax)
```
```python
Model.feature_importances_
```
array([0.76430017, 0.23569983])
## Neural network
```python
from sklearn.neural_network import MLPClassifier
Model=MLPClassifier()
Model.fit(X_train,y_train)
y_pred=Model.predict(X_test)
# Summary of the predictions
print(classification_report(y_test,y_pred))
print(confusion_matrix(y_test,y_pred))
#Accuracy Score
print('accuracy is ',accuracy_score(y_pred,y_test))
```
```python
make_region(X_test,y_test,Model,ax)
```
# Realization of Recursive Filters
*This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing. Please direct questions and suggestions to [Sascha.Spors@uni-rostock.de](mailto:Sascha.Spors@uni-rostock.de).*
## Cascaded Structures
The realization of recursive filters with a high order may be subject to numerical issues. For instance, when the coefficients span a wide amplitude range, their quantization may require a small quantization step or may impose a large relative error for small coefficients. The basic concept of cascaded structures is to decompose a high order filter into a cascade of lower order filters, typically first and second order recursive filters.
### Decomposition into Second-Order Sections
The rational transfer function $H(z)$ of a linear time-invariant (LTI) recursive system can be [expressed by its zeros and poles](introduction.ipynb#Transfer-Function) as
\begin{equation}
H(z) = \frac{b_M}{a_N} \cdot \frac{\prod_{\mu=1}^{P} (z - z_{0\mu})^{m_\mu}}{\prod_{\nu=1}^{Q} (z - z_{\infty\nu})^{n_\nu}}
\end{equation}
where $z_{0\mu}$ and $z_{\infty\nu}$ denote the $\mu$-th zero and $\nu$-th pole of degree $m_\mu$ and $n_\nu$ of $H(z)$, respectively. The total number of zeros and poles is denoted by $P$ and $Q$.
The poles and zeros of a real-valued filter $h[k] \in \mathbb{R}$ are either single real valued or conjugate complex pairs. This motivates to split the transfer function into
* first order filters constructed from a single pole and zero
* second order filters constructed from a pair of conjugated complex poles and zeros
Decomposing the transfer function into these two types by grouping the poles and zeros into single poles/zeros and conjugate complex pairs of poles/zeros results in
\begin{equation}
H(z) = K \cdot \prod_{\eta=1}^{S_1} \frac{(z - z_{0\eta})}{(z - z_{\infty\eta})}
\cdot \prod_{\eta=1}^{S_2} \frac{(z - z_{0\eta}) (z - z_{0\eta}^*)} {(z - z_{\infty\eta})(z - z_{\infty\eta}^*)}
\end{equation}
where $K$ denotes a constant and $S_1 + 2 S_2 = N$ with $N$ denoting the order of the system. The cascade of two systems results in a multiplication of their transfer functions. Above decomposition represents a cascade of first- and second-order recursive systems. The former can be treated as a special case of second-order recursive systems. The decomposition is therefore known as decomposition into second-order sections (SOSs) or [biquad filters](https://en.wikipedia.org/wiki/Digital_biquad_filter). Using a cascade of SOSs the transfer function of the recursive system can be rewritten as
\begin{equation}
H(z) = \prod_{\mu=1}^{S} \frac{b_{0, \mu} + b_{1, \mu} \, z^{-1} + b_{2, \mu} \, z^{-2}}{1 + a_{1, \mu} \, z^{-1} + a_{2, \mu} \, z^{-2}}
\end{equation}
where $S = \lceil \frac{N}{2} \rceil$ denotes the total number of SOSs. These results state that any real valued system of order $N > 2$ can be decomposed into SOSs. This has a number of benefits
* quantization effects can be reduced by sensible grouping of poles/zeros, e.g. such that the spanned amplitude range of the filter coefficients is limited
* A SOS may be extended by a gain factor to further reduce quantization effects by normalization of the coefficients
* efficient and numerically stable SOSs serve as generic building blocks for higher-order recursive filters
### Example - Cascaded second-order section realization of a lowpass
The following example illustrates the decomposition of a higher-order recursive Butterworth lowpass filter into a cascade of second-order sections.
```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.markers import MarkerStyle
from matplotlib.patches import Circle
import scipy.signal as sig
N = 9 # order of recursive filter
def zplane(z, p, title='Poles and Zeros'):
"Plots zero and pole locations in the complex z-plane"
ax = plt.gca()
ax.plot(np.real(z), np.imag(z), 'bo', fillstyle='none', ms=10)
ax.plot(np.real(p), np.imag(p), 'rx', fillstyle='none', ms=10)
unit_circle = Circle((0, 0), radius=1, fill=False,
color='black', ls='solid', alpha=0.9)
ax.add_patch(unit_circle)
ax.axvline(0, color='0.7')
ax.axhline(0, color='0.7')
plt.title(title)
plt.xlabel(r'Re{$z$}')
plt.ylabel(r'Im{$z$}')
plt.axis('equal')
plt.xlim((-2, 2))
plt.ylim((-2, 2))
plt.grid()
# design filter
b, a = sig.butter(N, 0.2)
# decomposition into SOS
sos = sig.tf2sos(b, a, pairing='nearest')
# print filter coefficients
print('Coefficients of the recursive part \n')
print(['%1.2f' % ai for ai in a])
print('\n')
print('Coefficients of the recursive part of the individual SOS \n')
print('Section \t a1 \t\t a2')
for n in range(sos.shape[0]):
print('%d \t\t %1.5f \t %1.5f' % (n, sos[n, 4], sos[n, 5]))
# plot pole and zero locations
plt.figure(figsize=(5, 5))
zplane(np.roots(b), np.roots(a), 'Poles and Zeros - Overall')
plt.figure(figsize=(10, 7))
for n in range(sos.shape[0]):
plt.subplot(231+n)
zplane(np.roots(sos[n, 0:3]), np.roots(sos[n, 3:6]),
title='Poles and Zeros - Section %d' % n)
plt.tight_layout()
# compute and plot frequency response of sections
plt.figure(figsize=(10, 5))
for n in range(sos.shape[0]):
Om, H = sig.freqz(sos[n, 0:3], sos[n, 3:6])
plt.plot(Om, 20*np.log10(np.abs(H)), label=r'Section %d' % n)
plt.xlabel(r'$\Omega$')
plt.ylabel(r'$|H_n(e^{j \Omega})|$ in dB')
plt.legend()
plt.grid()
```
Coefficients of the recursive part
['1.00', '-5.39', '13.38', '-19.96', '19.62', '-13.14', '5.97', '-1.78', '0.31', '-0.02']
Coefficients of the recursive part of the individual SOS
Section a1 a2
0 -0.50953 0.00000
1 -1.04232 0.28838
2 -1.11568 0.37905
3 -1.25052 0.54572
4 -1.46818 0.81477
**Exercise**
* What amplitude range is spanned by the filter coefficients?
* What amplitude range is spanned by the SOS coefficients?
* Change the pole/zero grouping strategy from `pairing='nearest'` to `pairing='keep_odd'`. What changes?
* Increase the order `N` of the filter. What changes?
Solution: Inspecting both the coefficients of the recursive part of the original filter and of the individual SOS reveals that the spanned amplitude range is lower for the latter. The choice of the pole/zero grouping strategy influences the locations of the poles/zeros in the individual SOS, the spanned amplitude range of their coefficients and the transfer functions of the individual sections. The total number of SOS scales with the order of the original filter.
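As an additional check, one can verify that filtering a signal through the cascade of second-order sections matches the direct realization of $H(z)$; a minimal sketch using `scipy.signal.sosfilt`:
```python
import numpy as np
import scipy.signal as sig

# same Butterworth lowpass as above and its SOS decomposition
b, a = sig.butter(9, 0.2)
sos = sig.tf2sos(b, a)

# filter a random test signal with both realizations
x = np.random.normal(size=1024)
y_tf = sig.lfilter(b, a, x)    # direct realization of H(z)
y_sos = sig.sosfilt(sos, x)    # cascade of second-order sections

# the maximum deviation should be on the order of numerical round-off
print(np.max(np.abs(y_tf - y_sos)))
```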
**Copyright**
This notebook is provided as [Open Educational Resource](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Sascha Spors, Digital Signal Processing - Lecture notes featuring computational examples*.
```python
import numpy as np
import sympy as sy
import control.matlab as cm
```
```python
s,z = sy.symbols('s,z', real=False)
h,t = sy.symbols('h,t', real=True, positive=True)
```
```python
G = (s+1)/(s+2)
Ya = sy.apart(G/s**2)
```
```python
ya = sy.inverse_laplace_transform(Ya, s, t)
print(sy.pretty_print(ya))
print(sy.latex(ya))
```
-2⋅t
t 1 ℯ
─ + ─ - ─────
2 4 4
None
\frac{t}{2} + \frac{1}{4} - \frac{1}{4 e^{2 t}}
```python
num= 2*h*(z-sy.exp(-2*h)) + (z-1)*(z-sy.exp(-2*h)) - (z-1)**2
```
```python
print(sy.latex(sy.collect(sy.simplify(num), z)))
print(sy.pretty_print(sy.collect(sy.simplify(num), z)))
```
- \frac{2 h}{e^{2 h}} + z \left(2 h + 1 - e^{- 2 h}\right) - 1 + e^{- 2 h}
-2⋅h ⎛ -2⋅h⎞ -2⋅h
- 2⋅h⋅ℯ + z⋅⎝2⋅h + 1 - ℯ ⎠ - 1 + ℯ
None
```python
cm.c2d??
```
```python
import math
import numpy as np;
import matplotlib.pyplot as plt
```
```python
def inf_n(z, a):
return 1-(9*a)/(8*z)+math.pow(a,3)/(2*math.pow(z,3))-math.pow(a,5)/(8*math.pow(z,5))
def inf_t(z, a):
return 1-(9*a)/(16*z)+2*math.pow(a,3)/(16*math.pow(z,3))-math.pow(a,5)/(16*math.pow(z,5))
def channel_n(z,a,L):
dom = 12
term1 = 0
term2 = 0
for n in range(0,dom):
term1 = term1 + math.pow(-1,n)*(1/inf_n(n*L+z,a) -1)
for n in range(0,dom):
term2 = term2 + math.pow(-1,n)*(1/inf_n((n+1)*L-z,a) -1)
return 1/(term1+term2+1)
channel_n_vec = np.vectorize(channel_n)
def channel_t(z,a,L):
dom = 12
term1 = 0
term2 = 0
for n in range(0,dom):
term1 = term1 + math.pow(-1,n)*(1/(inf_t(z,a)*(n*L+a)) -1)
for n in range(0,dom):
term2 = term2 + math.pow(-1,n)*(1/(inf_t(z,a)*((n+1)*L-a)) -1)
return 1/(term1+term2+1)
channel_t_vec = np.vectorize(channel_t)
def fit_wall_mob(h, p1, p2, p3, p4, p5, p6, p7):
return p1 + p2/pow(p3 + h,5) - p4/pow(p5 + h,3) - p6/(p7 + h)
# Note: inf_n, inf_t, channel_n and channel_t are redefined below; the later
# definitions (truncated after dom = 8 image terms) override the ones above.
def inf_n(z, a):
return 1-(9*a)/(8*z)+math.pow(a,3)/(2*math.pow(z,3))-math.pow(a,5)/(8*math.pow(z,5))
def inf_t(z, a):
return 1-(9*a)/(16*z)+math.pow(a,3)/(8*math.pow(z,3))-math.pow(a,5)/(16*math.pow(z,5))
def channel_n(z,a,L):
dom = 8
term1 = 0
term2 = 0
for n in range(0,dom):
term1 = term1 + math.pow(-1,n)*(1/inf_n(n*L+z,a) -1)
for n in range(0,dom):
term2 = term2 + math.pow(-1,n)*(1/inf_n((n+1)*L-z,a) -1)
return 1/(term1+term2+1)
def channel_t(z,a,L):
dom = 8
term1 = 0
term2 = 0
for n in range(0,dom):
term1 = term1 + math.pow(-1,n)*(1/inf_t(n*L+z,a) -1)
for n in range(0,dom):
term2 = term2 + math.pow(-1,n)*(1/inf_t((n+1)*L-z,a) -1)
return 1/(term1+term2+1)
channel_n_vec = np.vectorize(channel_n)
channel_t_vec = np.vectorize(channel_t)
```
```python
L=19.125
a=1
x=np.linspace(1,10,50)
y=channel_n_vec(x,a,L)
plt.plot(x,y)
```
```python
L=9.56
x=np.linspace(1,4,50)
y=channel_t_vec(x,1,L)
plt.plot(x,y)
```
```python
L=3e-7
totalRad=1.56875e-08
wetRad=3.1375e-08
dryRad=3.1375e-08
x=0.5e-7
totalm=channel_n_vec(x,totalRad,L)
wetm=channel_n_vec(x,wetRad,L)
drym = (totalm/totalRad - wetm/wetRad)*dryRad
print(totalm)
print(wetm)
print(drym)
```
0.6570385461720548
0.40081817918271434
0.9132589131613952
```python
```
```python
from scipy.optimize import curve_fit
def fit_wall_mob(h,p1, p2, p3, p4, p5, p6, p7):
return p1 + p2/pow(p3 + h,5) - p4/pow(p5 + h,3) - p6/(p7 + h)
L=3e-7
totalRad=1.56875e-08
xpos=np.array([0.5*L/totalRad,0.25*L/totalRad,0.125*L/totalRad,0.0625*L/totalRad,0.03125*L/totalRad,0.015625*L/totalRad,0.0078125*L/totalRad,0])
wetRad=3.1375e-08
xpos50=np.array([0.5*L/wetRad,0.25*L/wetRad,0.125*L/wetRad,0.0625*L/wetRad,0.03125*L/wetRad,0.015625*L/wetRad,0.0078125*L/wetRad,0])
tmob=np.array([0.8955,0.8642,0.7653,0.5755,0.2764,0.07962,0.01990,0])
nmob=np.array([0.8479,0.7659,0.5708,0.3141,0.1232,0.03737,0.01060,0])
#tmob50=np.array([0.8981,0.8605,0.7711,0.5709,0.2795,0.07817,0.02066,0])
#nmob50=np.array([0.8480,0.7658,0.5710,0.3140,0.1232,0.03737,0.01071,0])
tmob50=np.array([0.8041,0.7418,0.5674,0.2741,0.07903,0.01976,0.004939,0])
nmob50=np.array([0.7040,0.5645,0.3134,0.1231,0.03734,0.01059,0.002872,0])
poptt, pcovt = curve_fit(fit_wall_mob, xpos, tmob, p0=[1,1,1,16,4,1,1],maxfev=10000000)
poptn, pcovn = curve_fit(fit_wall_mob, xpos, nmob, p0=[1,1,1,16,4,1,1],maxfev=10000000)
poptt50, pcovt50 = curve_fit(fit_wall_mob, xpos50, tmob50, p0=[1,1,1,16,4,1,1],maxfev=10000000)
poptn50, pcovn50 = curve_fit(fit_wall_mob, xpos50, nmob50, p0=[1,1,1,16,4,1,1],maxfev=10000000)
```
```python
poptn50
```
array([-3.11038612e+01, 7.24698118e+02, 2.59720646e+00, 4.13848704e+01,
1.79959078e+00, -3.20502020e+04, 9.99299583e+02])
```python
plt.rcParams['mathtext.fontset'] = 'cm'
#plt.rc('text', usetex=True)
plt.rc('xtick', labelsize=13)
plt.rc('ytick', labelsize=13)
plt.rc('axes', linewidth=1.5)
fig = plt.figure(figsize=[21,10])
plt.subplots_adjust(wspace=0.20)
#xf1=np.linspace(0,2,50)
#f0=fit_wall_mob(xf1,*poptt15r)
#f1=fit_wall_mob(xf1,*poptt3)
#f2=fit_wall_mob(xf1,*poptt6)
#f3=fit_wall_mob(xf1,*poptn15)
#f4=fit_wall_mob(xf1,*poptn3)
#f5=fit_wall_mob(xf1,*poptn6)
ax1 = fig.add_subplot(1,2,1)
ax1.set_yscale('log')
#ax1.plot(xf1,f0,color='black', linewidth=2, label='1.5nm fit')
#ax1.plot(xf1,f2,color='red', linestyle='dashed',linewidth=3.5, label='6nm fit')
ax1.scatter(xpos,tmob,s=150,color='black',facecolors='none',marker='o', linewidths=2, label='100\% wet')
ax1.scatter(xpos50,tmob50,s=150,color='red',marker='x', linewidths=3, label='50\% wet')
ax1.set_xlabel(r'$\tilde{y}/a_t$', fontsize=30)
ax1.set_ylabel(r'$\gamma_{\parallel}$', fontsize=30,rotation=0, labelpad=15)
ax1.set_ylim([0.009,1.05])
ax1.tick_params(labelsize=20)
ax1.legend(loc=(0.65,0.04),fontsize=20)
ax1.xaxis.set_tick_params(width=1.5)
ax1.yaxis.set_tick_params(width=1.5)
ax2 = fig.add_subplot(1,2,2)
ax2.set_yscale('log')
#ax2.plot(xf1,f3,color='black', linewidth=2, label='1.5nm fit')
#ax2.plot(xf1,f5,color='red', linestyle='dashed',linewidth=3.5, label='6nm fit')
ax2.scatter(xpos,nmob,s=150,color='black',facecolors='none',marker='o', linewidths=2, label='100\% wet')
ax2.scatter(xpos50,nmob50,s=150,color='red',marker='x', linewidths=3, label='50\% wet')
ax2.set_xlabel(r'$\tilde{y}/a_t$', fontsize=30)
ax2.set_ylabel(r'$\gamma_{\bot}$', fontsize=30,rotation=0, labelpad=15)
ax2.set_ylim([0.005,1.05])
ax2.tick_params(labelsize=20)
ax2.legend(loc=(0.65,0.04),fontsize=20)
ax2.xaxis.set_tick_params(width=1.5)
ax2.yaxis.set_tick_params(width=1.5)
fig.savefig("mob3.pdf", bbox_inches = 'tight',
pad_inches = 0.05)
```
```python
#1.5nm
L=1.5e-7
wetRad=1.56875e-08
xpos15r=np.array([0.5*L/wetRad,0.25*L/wetRad,0.125*L/wetRad,0.0625*L/wetRad,0.03125*L/wetRad,0.015625*L/wetRad,0.0078125*L/wetRad,0])
tmob15r=np.array([0.7935,0.7358,0.5653,0.2734,0.07888,0.01972,0.004930,0])
nmob15r=np.array([0.7040,0.5645,0.3134,0.1231,0.03734,0.01059,0.002872,0])
poptt15r, pcovt15r = curve_fit(fit_wall_mob, xpos15r, tmob15r, p0=[1,1,1,16,4,1,1],maxfev=10000000)
poptn15r, pcovn15r = curve_fit(fit_wall_mob, xpos15r, nmob15r, p0=[1,1,1,16,4,1,1],maxfev=10000000)
xpos15=np.array([2,1,0.5,0.25,0.125,0.0635,0])
tmob15=np.array([0.7081,0.4923,0.2100 ,0.05522,0.01381 ,0.003563,0])
nmob15=np.array([0.4986,0.2580,0.09141,0.02714,0.007602,0.002104,0])
poptt15, pcovt15 = curve_fit(fit_wall_mob, xpos15, tmob15, p0=[1,1,1,16,4,1,1],maxfev=10000000)
poptn15, pcovn15 = curve_fit(fit_wall_mob, xpos15, nmob15, p0=[1,1,1,16,4,1,1],maxfev=10000000)
#3nm
xpos=np.array([2,1,0.5,0.25,0.125,0.0635,0])
tmob3=np.array([0.7310,0.4997,0.2120 ,0.05574,0.01393 ,0.003596,0])
nmob3=np.array([0.5022,0.2584,0.09148,0.02716,0.007605,0.002105,0])
poptt3, pcovt3 = curve_fit(fit_wall_mob, xpos, tmob3, p0=[1,1,1,16,4,1,1],maxfev=10000000)
poptn3, pcovn3 = curve_fit(fit_wall_mob, xpos, nmob3, p0=[1,1,1,16,4,1,1],maxfev=10000000)
#6nm
xpos=np.array([2,1,0.5,0.25,0.125,0.0635,0])
tmob6=np.array([0.7349,0.5008,0.2123 ,0.05581,0.01395 ,0.003601,0])
nmob6=np.array([0.5023,0.2584,0.09149,0.02716,0.007605,0.002105,0])
poptt6, pcovt6 = curve_fit(fit_wall_mob, xpos, tmob6, p0=[1,1,1,16,4,1,1],maxfev=10000000)
poptn6, pcovn6 = curve_fit(fit_wall_mob, xpos, nmob6, p0=[1,1,1,16,4,1,1],maxfev=10000000)
```
/usr/local/lib/python3.8/dist-packages/scipy/optimize/minpack.py:833: OptimizeWarning: Covariance of the parameters could not be estimated
warnings.warn('Covariance of the parameters could not be estimated',
```python
print(poptn15r)
```
[-3.11038612e+01 7.24698118e+02 2.59720646e+00 4.13848704e+01
1.79959078e+00 -3.20502020e+04 9.99299583e+02]
```python
plt.rcParams['mathtext.fontset'] = 'cm'
#plt.rc('text', usetex=True)
plt.rc('xtick', labelsize=13)
plt.rc('ytick', labelsize=13)
plt.rc('axes', linewidth=1.5)
fig = plt.figure(figsize=[21,10])
plt.subplots_adjust(wspace=0.20)
xf1=np.linspace(0,2,50)
f0=fit_wall_mob(xf1,*poptt15r)
f1=fit_wall_mob(xf1,*poptt3)
f2=fit_wall_mob(xf1,*poptt6)
f3=fit_wall_mob(xf1,*poptn15)
f4=fit_wall_mob(xf1,*poptn3)
f5=fit_wall_mob(xf1,*poptn6)
ax1 = fig.add_subplot(1,2,1)
ax1.set_yscale('log')
ax1.plot(xf1,f0,color='black', linewidth=2, label='1.5nm fit')
ax1.plot(xf1,f2,color='red', linestyle='dashed',linewidth=3.5, label='6nm fit')
ax1.scatter(xpos15,tmob15,s=150,color='black',facecolors='none',marker='o', linewidths=2, label='1.5nm data')
ax1.scatter(xpos,tmob6,s=150,color='red',marker='x', linewidths=3, label='6nm data')
ax1.set_xlabel(r'$\tilde{y}/a_t = \tilde{y}/a_w$', fontsize=30)
ax1.set_ylabel(r'$\gamma_{\parallel}\quad$', fontsize=30,rotation=0, labelpad=15)
ax1.set_ylim([0.009,1.05])
ax1.tick_params(labelsize=20)
ax1.legend(loc=(0.65,0.04),fontsize=20)
ax1.xaxis.set_tick_params(width=1.5)
ax1.yaxis.set_tick_params(width=1.5)
ax2 = fig.add_subplot(1,2,2)
ax2.set_yscale('log')
ax2.plot(xf1,f3,color='black', linewidth=2, label='1.5nm fit')
ax2.plot(xf1,f5,color='red', linestyle='dashed',linewidth=3.5, label='6nm fit')
ax2.scatter(xpos15,nmob15,s=150,color='black',facecolors='none',marker='o', linewidths=2, label='1.5nm data')
ax2.scatter(xpos,nmob6,s=150,color='red',marker='x', linewidths=3, label='6nm data')
ax2.set_xlabel(r'$\tilde{y}/a_t = \tilde{y}/a_w$', fontsize=30)
ax2.set_ylabel(r'$\gamma_{\bot}\quad$', fontsize=30,rotation=0, labelpad=15)
ax2.set_ylim([0.005,1.05])
ax2.tick_params(labelsize=20)
ax2.legend(loc=(0.65,0.04),fontsize=20)
ax2.xaxis.set_tick_params(width=1.5)
ax2.yaxis.set_tick_params(width=1.5)
fig.savefig("mob2.pdf", bbox_inches = 'tight',
pad_inches = 0.05)
```
```python
fit_wall_mob(0,*poptt)
```
-0.0007420656601707654
```python
plt.rcParams['mathtext.fontset'] = 'cm'
#plt.rc('text', usetex=True)
plt.rc('xtick', labelsize=13)
plt.rc('ytick', labelsize=13)
plt.rc('axes', linewidth=1.5)
fig = plt.figure(figsize=[21,10])
plt.subplots_adjust(wspace=0.20)
L=19.125
a=1
x1=np.linspace(1,9.56,50)
y1=channel_t_vec(x1,a,L)
xf1=np.linspace(0,9.56,1000000)
f1=fit_wall_mob(xf1,*poptt)
x2=np.linspace(1,9.56,50)
y2=channel_n_vec(x2,a,L)
xf2=np.linspace(0,9.56,100000)
f2=fit_wall_mob(xf2,*poptn)
ax1 = fig.add_subplot(1,2,1)
ax1.set_yscale('log')
ax1.plot(x1,y1,color='red', linestyle='dashed',linewidth=3.5, label='Theory')
ax1.plot(xf1,f1,color='black', linewidth=2, label='Fit')
ax1.scatter(xpos,tmob,s=150,color='black',facecolors='none',marker='o', linewidths=2, label='Measured')
ax1.set_xlabel(r'$\tilde{y}/a_t = \tilde{y}/a_w$', fontsize=30)
ax1.set_ylabel(r'$\gamma_{\parallel}$', fontsize=30,rotation=0, labelpad=15)
ax1.set_ylim([0.018,1.05])
ax1.tick_params(labelsize=20)
ax1.legend(loc=(0.65,0.04),fontsize=20)
ax1.xaxis.set_tick_params(width=1.5)
ax1.yaxis.set_tick_params(width=1.5)
ax2 = fig.add_subplot(1,2,2)
ax2.set_yscale('log')
ax2.plot(x2,y2,color='red', linestyle='dashed',linewidth=3.5, label='Theory')
ax2.plot(xf2,f2,color='black', linewidth=2, label='Fit')
ax2.scatter(xpos,nmob,s=150,color='black',facecolors='none',marker='o', linewidths=2, label='Measured')
ax2.set_xlabel(r'$\tilde{y}/a_t = \tilde{y}/a_w$', fontsize=30)
ax2.set_ylabel(r'$\gamma_{\bot}$', fontsize=30,rotation=0, labelpad=20)
ax2.set_ylim([0.009,1.05])
ax2.tick_params(labelsize=20)
ax2.legend(loc=(0.65,0.04),fontsize=20)
ax2.xaxis.set_tick_params(width=1.5)
ax2.yaxis.set_tick_params(width=1.5)
fig.savefig("mob1.pdf", bbox_inches = 'tight',
pad_inches = 0.05)
```
```python
kB=1.38064852e-16
eta=1e-2
T=300
totalDiff_small=1.331e-05
dx=3e-07/24
wetRad=dx*1.255 # pkernel=4
a=kB*T/(totalDiff_small*eta*math.pi*6.0)
print(3e-07/a)
```
18.171720440165384
```python
fig.savefig("test.pdf", bbox_inches = 'tight',
pad_inches = 0.05)
```
```python
from sympy import *
z = Symbol('z')
p1 = Symbol('p1')
p2 = Symbol('p2')
p3 = Symbol('p3')
p4 = Symbol('p4')
p5 = Symbol('p5')
p6 = Symbol('p6')
p7 = Symbol('p7')
aa = Symbol('aa')
diff(p1 + p2/pow(p3 + z/aa,5) - p4/pow(p5 + z/aa,3) - p6/(p7 + z/aa), z)
```
$\displaystyle - \frac{5 p_{2}}{aa \left(p_{3} + \frac{z}{aa}\right)^{6}} + \frac{3 p_{4}}{aa \left(p_{5} + \frac{z}{aa}\right)^{4}} + \frac{p_{6}}{aa \left(p_{7} + \frac{z}{aa}\right)^{2}}$
# Linear and Quadratic Discriminant Analysis
## Linear Discriminant Analysis
### Classifying with Bayes' Theorem
In a previous chapter we discussed logistic regression for the case of two response classes (e.g. 0 and 1). It models the conditional probability $\Pr(Y=k|X=x)$ directly through the use of the Sigmoid function. In this chapter we discuss an alternative approach that models the distribution of the predictors $X$ separately for each response class (i.e. given $Y$), and then uses Bayes' theorem to transform these into conditional probabilities for $\Pr(Y=k|X=x)$. Its main advantage compared to logistic regression is that, if classes are well separated, the parameter estimates for logistic regression tend to be unstable, whereas linear discriminant analysis (LDA) does not suffer from this problem. Beyond that, LDA is a popular algorithm for multiple-class classification (i.e. the response has more than two classes, for example buy/hold/sell etc.) where logistic regression is not used that often (James et al. (2013)).
LDA assigns an object to class $k$ for which the computed probability is highest. These probabilities are calculated using Bayes' rule which states that
\begin{equation}
\begin{aligned}
\underbrace{\Pr(Y=k|X)}_{\text{posterior probability}} &= \frac{\overbrace{\Pr(X|Y=k)}^{\text{conditional probability}} \quad \overbrace{\Pr(k)}^{\text{prior probability}}}{\underbrace{\Pr(X)}_{\text{evidence}}} \\[3ex]
&= \frac{\Pr(X|Y=k) \Pr(k)}{\sum_{\ell=1}^K \Pr(X|Y=\ell) \Pr(\ell)}
\end{aligned}
\end{equation}
Above, $\Pr(k)$ is simply the prior probability of class $k$ (with $\sum_{k=1}^K \Pr(k) = 1$) that a randomly chosen observation is drawn from the $k$th class. $\Pr(X|Y=k)$ on the other hand is the class conditional density of $X$ in class $Y=k$. Following the notation in (Friedman et al. (2001)), we denote $\Pr(X|Y=k) \equiv f_k(x)$ to indicate that it is a density function. LDA's decision rule thus classifies an observation into class $k$ if
\begin{align}
\Pr(Y=k|X) &> \Pr(Y=j|X) \qquad \forall j \neq k \nonumber \\
\end{align}
or, if we substitute both sides of the inequality with Bayes' rule:
\begin{align}
\frac{f_k(x) \Pr(k)}{\sum_{\ell=1}^K f_\ell(x) \Pr(\ell)} &> \frac{f_j(x) \Pr(j)}{\sum_{\ell=1}^K f_\ell(x) \Pr(\ell)} \qquad \forall j \neq k \nonumber
\end{align}
The evidence term (denominator in above equation) can be omitted from the decision rule because it is merely a scaling factor (Raschka (2014)). This then yields the following simple decision boundary:
\begin{align}
f_k(x) \Pr(k) &> f_j(x) \Pr(j) \qquad \forall j \neq k
\end{align}
This expression can also be written as
\begin{equation}
\delta_k(x) = \arg \max_k \; f_k(x) \Pr(k)
\end{equation}
### Bayes Decision Rule in LDA with One Feature
Suppose that $f_k(x)$ follows a Gaussian distribution. For the one-dimensional setting, that is if we have just one feature $p=1$, the normal density takes the well known form
\begin{equation}
f_k(x) = \frac{1}{\sqrt{2\pi \sigma_k^2}} \, \exp\left( - \frac{(x - \mu_k)^2}{2 \sigma_k^2} \right)
\end{equation}
where $\mu_k$ and $\sigma_k^2$ are the mean and variance for the $k$th class, respectively. For the moment let us also assume that there is a shared variance term across all $K$ classes, i.e. $\sigma_1^2 = \sigma_2^2 = \ldots = \sigma_K^2$. Then plugging the normal distribution into our maximization problem, taking the log and doing some algebra - see the appendix of the script for detailed steps - we find that an observation is assigned to class $k$ for which $\delta_k(x)$ is greatest:
\begin{equation}
\delta_k(x) = \arg \max_k \left[\frac{x \mu_k}{\sigma^2} - \frac{\mu_k^2}{2\sigma^2} + \ln(\Pr(k)) \right]
\end{equation}
The figure below shows how LDA classifies data based on the above result. In the left subplot we see two separate normal densities representing a situation with two classes ($K \in \{\text{blue}, \text{green}\}$). $\Pr(k=\text{blue}) = \Pr(k=\text{green}) = 0.5$, i.e. both prior probabilities are equal. Both densities have the same variance $\sigma_1^2 = \sigma_2^2 = 1$ but different location parameters, $\mu_1 = -1.25$ and $\mu_2 = 1.25$.
If we knew these parameters, LDA's decision boundary would be drawn exactly at zero (dashed line). If $\Pr(k=\text{blue}) > \Pr(k=\text{green})$, Bayes' decision boundary would move to the right; if $\Pr(k=\text{blue}) < \Pr(k=\text{green})$, to the left. There is some overlapping area leading to some uncertainty, but overall the error rate is minimized. In reality, however, we do not know the true location and scale parameters and hence have to estimate them - which we will discuss momentarily. The right plot displays histograms of 50 randomly drawn observations from the aforementioned normal distributions. Given this data, LDA estimates $\mu_k, \sigma^2$ and uses $\delta_k(x)$ to draw the decision boundary (solid vertical line). Data points to the left belong to the blue class, all others to the green class. The dashed vertical line again displays the optimal decision boundary. Because we don't know the true location and scale parameters, LDA relies on estimates. This introduces inaccuracy that shrinks the larger the data sample is (assuming our normality assumption is correct).
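A small numerical check of this rule, using the parameters from the figure ($\mu_1 = -1.25$, $\mu_2 = 1.25$, $\sigma^2 = 1$, equal priors), confirms that the assigned class switches exactly at $x = 0$; a minimal sketch:
```python
import numpy as np

def delta(x, mu, sigma2, prior):
    # one-dimensional LDA discriminant score
    return x * mu / sigma2 - mu**2 / (2 * sigma2) + np.log(prior)

mu_blue, mu_green, sigma2, prior = -1.25, 1.25, 1.0, 0.5

# at x = 0 both scores coincide, i.e. this is the decision boundary
for x in [-1.0, 0.0, 1.0]:
    d_blue = delta(x, mu_blue, sigma2, prior)
    d_green = delta(x, mu_green, sigma2, prior)
    label = 'blue' if d_blue > d_green else 'green'
    print('x = {0:+.2f}: delta_blue = {1: .3f}, delta_green = {2: .3f} -> {3}'.format(
        x, d_blue, d_green, label))
```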
### Assumptions and Parameter Estimation
So far we have discussed how LDA draws its decision boundary with the help of Bayes' rule and under the assumption that the features follow a normal distribution. But in order to follow through with our classification task, estimates for $\Pr(k), \mu_k$, and $\sigma^2$ are required. Estimating the prior probability $\Pr(k)$ is straightforward: we simply compute the fraction of training observations that belong to the $k$th class: $\hat{\Pr}(k) = n_k / n$, where $n_k$ represents the count of samples from class $k$ and $n$ the count of all samples. The location parameter $\mu_k$ is estimated using the average of the training observations of the $k$th class, and $\sigma^2$, the scale parameter, is the weighted average of the sample variances of the $K$ classes. (Note that Friedman et al. (2001) and James et al. (2013) both use a bias-corrected version of $\hat{\sigma}^2$ (and $\hat{\Sigma}$ for the case of $p>1$) by dividing the summed terms by $n-K$ instead of $n$. The formula given here uses an uncorrected estimate of $\sigma$ and in that follows `Sklearn`'s implementation.)
\begin{align}
\hat{\mu}_k &= \frac{1}{n_k} \sum_{i:y_i=k} x_i \\
\hat{\sigma}^2 &= \frac{1}{n} \sum_{k=1}^K \sum_{i:y_i=k} (x_i - \hat{\mu}_k)^2
\end{align}
Given the assumption of normality and given these estimates for location and scale we are able to establish a decision rule that assigns each new data point to the class for which $\delta_k(x)$ is highest.
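A minimal sketch of how these estimates can be computed with `numpy`, using a tiny made-up one-dimensional training set:
```python
import numpy as np

# tiny made-up one-dimensional training set with two classes (0 and 1)
x = np.array([-2.1, -1.3, -0.4, 0.9, 1.8, 2.5])
y = np.array([0, 0, 0, 1, 1, 1])

n = len(y)
classes = np.unique(y)

priors = np.array([np.mean(y == k) for k in classes])       # Pr(k) = n_k / n
means = np.array([x[y == k].mean() for k in classes])       # mu_k
sigma2 = sum(((x[y == k] - x[y == k].mean())**2).sum()
             for k in classes) / n                           # pooled variance

print('priors:', priors)
print('means: ', means)
print('sigma2:', sigma2)
```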
### Bayes Decision Rule in LDA with More Than One Feature
Above we have used the one-dimensional case with one predictor to introduce how LDA classifies an observation. Now we extend the classifier to work with multiple features ($p>1$). Again we assume that $X = (X_1, X_2, \ldots, X_p)$ is drawn from a (multivariate) normal distribution with a class specific mean vector $\mu_k$ of length $p$ and common covariance matrix $\Sigma$ of dimension $p \times p$. This is expressed as $X \sim N(\mu, \Sigma)$. The multivariate Gaussian density is defined as
\begin{equation}
f_k(x) = \frac{1}{(2\pi)^{p/2}|\Sigma|^{1/2}} \, \exp \left( -\frac{1}{2} (x-\mu_k)^T \Sigma^{-1} (x-\mu_k) \right)
\end{equation}
As before we plug this expression into our maximization problem, take the logarithm and perform a little bit of algebra (for the interested reader, the different steps are shown in the appendix of the script). This yields the following Bayes classifier rule for LDA, based on which an observation $X=x$ is assigned to the class for which $\delta_k(x)$ is largest.
\begin{equation}
\delta_k(x) = \arg \max_k \left[x^T \Sigma^{-1} \mu_k - \frac{1}{2} \mu_k^T\Sigma^{-1}\mu_k + \ln(\Pr(k))\right]
\end{equation}
The estimates for $\Pr(k), \mu_k$, and $\Sigma$ follow again the same approach as in the case of only one predictor.
The next figure plots LDA's Bayes decision boundary for a random training set with two features $X_1, X_2$. The colors indicate the binary response with blue circles indicating customers who accepted a product offer and green circles representing those who declined it. The bivariate normal contours (ellipses) represent iso-lines with the same probabilities. LDA uses Bayes' decision rule discussed above to classify any new data point into class $k$.
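Before turning to scikit-learn, the following minimal sketch evaluates the multivariate decision rule directly with `numpy`, using purely hypothetical values for $\mu_k$, $\Sigma$ and $\Pr(k)$:
```python
import numpy as np

# hypothetical estimates for a problem with p = 2 features and K = 2 classes
mu = [np.array([-1.0, 0.5]), np.array([1.2, -0.3])]   # class mean vectors mu_k
Sigma = np.array([[1.0, 0.3],
                  [0.3, 2.0]])                         # common covariance matrix
priors = [0.7, 0.3]                                    # prior probabilities Pr(k)

Sigma_inv = np.linalg.inv(Sigma)

def delta(x, k):
    # linear discriminant score delta_k(x)
    return x @ Sigma_inv @ mu[k] - 0.5 * mu[k] @ Sigma_inv @ mu[k] + np.log(priors[k])

x_new = np.array([0.2, 0.1])
scores = [delta(x_new, k) for k in range(2)]
print('scores:', scores, '-> assigned class:', int(np.argmax(scores)))
```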
### LDA in Python
#### Setup
We will apply LDA in Python with the functions that are provided through the `sklearn` package and the 'Default' data set we used to introduce logistic regression in a previous chapter. `Sklearn`, short for Scikit-learn, is a key resource for clustering, classification or regression algorithms in machine learning. It offers an abundant variety of functions and functionalities and is actively developed by a large community.
`Sklearn` is one of the most extensive package in Python with hundreds, if not thousands, of functions. It is good practice to not load the full library as we did for example with `numpy` but to only load those functions that are needed to run your task at hand. This saves computer memory and with that improves the efficiency of your algorithm, especially if you are using your household PC to run it on larger data sets.
```python
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn import metrics
plt.rcParams['font.size'] = 14
plt.style.use('seaborn-whitegrid')
```
```python
# Default data set is not available online. Data was extracted from R package "ISLR"
df = pd.read_csv('Data/Default.csv', sep=',')
# Factorize 'No' and 'Yes' in columns 'default' and 'student'
df['defaultFac'] = df.default.factorize()[0]
df['studentFac'] = df.student.factorize()[0]
df.head(5)
```
      default student      balance        income  defaultFac  studentFac
    0      No      No   729.526495  44361.625074           0           0
    1      No     Yes   817.180407  12106.134700           0           1
    2      No      No  1073.549164  31767.138947           0           0
    3      No      No   529.250605  35704.493935           0           0
    4      No      No   785.655883  38463.495879           0           0
```python
# Assign data to feature matrix X_train and response vector y_train
X_train = df[['balance', 'income', 'studentFac']]
y_train = df.defaultFac
```
#### LDA Classifier Object & Fit
Now we are in a position to run the LDA classifier. This, as you can see from the three lines below, is as easy as it gets.
```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
# Create LDA object and run classifier
lda = LDA(solver='lsqr')
lda = lda.fit(X_train, y_train)
lda
```
LinearDiscriminantAnalysis(n_components=None, priors=None, shrinkage=None,
solver='lsqr', store_covariance=False, tol=0.0001)
The parameter `solver='lsqr'` specifies the method by which the covariance matrix is estimated. `lsqr` follows the approach introduced in the preceding subsection. Others such as `svd` or `eigen` are available. See [Scikit-learn's guide](http://scikit-learn.org/stable/modules/lda_qda.html#estimation-algorithms) or the [function description](http://scikit-learn.org/stable/modules/generated/sklearn.discriminant_analysis.LinearDiscriminantAnalysis.html).
Every function in `sklearn` has different attributes and methods. `Sklearn`s convention is to store anything that is derived from the data in attributes that end with a trailing underscore. That is to separate them from parameters that are set by the user (Mueller and Guido (2017)). For example the estimated covariance matrix can be printed with this command.
```python
lda.covariance_
```
array([[ 2.05277550e+05, -9.37165654e+05, 4.21453998e+01],
[-9.37165654e+05, 1.77777941e+08, -4.57857337e+03],
[ 4.21453998e+01, -4.57857337e+03, 2.07468022e-01]])
In a Jupyter notebook, to see all options you can simply type `lda.` and hit tab.
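Other fitted quantities are stored in the same way; for example, the estimated priors, class means and the coefficients of the linear decision function can be inspected as follows:
```python
# a few more attributes estimated from the training data (trailing underscore)
print('classes:  ', lda.classes_)    # class labels seen during fit
print('priors:   ', lda.priors_)     # estimated prior probabilities Pr(k)
print('means:    ', lda.means_)      # class-wise feature means mu_k
print('coef:     ', lda.coef_)       # weights of the linear decision function
print('intercept:', lda.intercept_)
```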
#### LDA Performance
Here are some basic metrics on how the LDA classifier performed on the training data.
```python
print('default-rate: {0: .4f}'.format(np.sum(y_train)/len(y_train)))
print('score: {0: .4f}'.format(lda.score(X_train, y_train)))
print('error-rate: {0: .4f}'.format(1-lda.score(X_train, y_train)))
```
default-rate: 0.0333
score: 0.9725
error-rate: 0.0275
Overall, 3.33% of all observations defaulted. If we would simply label each entry as 'non-default' we would have an error rate of this magnitude. So, in comparison to this *naive* classifier, LDA seems to have some skill in predicting the default.
> **IMPORTANT NOTE: In order to be in line with James et al. (2015), the textbook for this course, we have not performed any train/test split of the data. Therefore we will use the same full matrix `X_train` and response vector `y_train` as test data. Performance metrics might be applied to both test and training data but in the end the results on the test set are those that we are ultimately interested in. To drive this point home, I have relabeled the `X_train` and `y_train` to `X_test`, `y_test`. Nevertheless, be aware that in this unique case it is the same data!**
Let us print the confusion matrix introduced in the previous chapter to see the class-wise performance. For reference the confusion matrix is also printed as `DataFrame` but moving forward be sure to know that row represent the true values and columns predicted labels.
```python
# Relabel variables as discussed
X_test = X_train
y_test = y_train
# Predict labels
y_pred = lda.predict(X_test)
# Sklearn's confusion matrix
print(metrics.confusion_matrix(y_test, y_pred))
# Manual confusion matrix as pandas DataFrame
confm = pd.DataFrame({'Predicted default status': y_pred,
'True default status': y_test})
confm.replace(to_replace={0:'No', 1:'Yes'}, inplace=True)
print(confm.groupby(['True default status','Predicted default status']).size().unstack('Predicted default status'))
```
[[9645 22]
[ 253 80]]
Predicted default status No Yes
True default status
No 9645 22
Yes 253 80
The confusion matrix tells us that for the non-defaulters, LDA only misclassified 22 of them. This is an excellent rate. However, out of the 333 (=253 + 80) people who actually defaulted, LDA classified only 80 correctly. This means our classifier missed out on 76.0% of those who actually defaulted! For a credit card applicant with a bad credit score this is good news. For a credit card company, not so much.
#### Varying the Threshold Levels
Why does LDA miss all these 'defaulters'? Implicitly, Bayes classifier minimizes the **overall** error rate, meaning that it yields the smallest possible total number of misclassified observations - irrespective of the class-specific error rate. Bayes classifier works by assigning an observation to class 'default' for which the posterior probability $Pr(\text{default = Yes}|X=x) > 0.5$. For a credit card company who seeks to have as few defaults as possible, this threshold might be too large. Instead, such a company might decide to label any customer with a posterior probability of default above 20% to the 'default' class ($Pr(\text{default = Yes}|X=x) > 0.2$). Let us investigate how the results in such a case would look like.
```python
# Calculated posterior probabilities
posteriors = lda.predict_proba(X_test)
posteriors[:5, :]
```
array([[0.996778 , 0.003222 ],
[0.99731184, 0.00268816],
[0.98529382, 0.01470618],
[0.99881647, 0.00118353],
[0.99597848, 0.00402152]])
The function `lda.predict_proba()` provides the posterior probabilities of $\Pr(\text{default = 0}|X=x)$ in the first column and $\Pr(\text{default = 1}|X=x)$ in the second. The latter column is what we are interested in. Out of convenience we use `sklearn`'s `binarize` function to classify all probabilities above the threshold of 0.2 as 1 (=default) and generate the confusion matrix.
```python
from sklearn.preprocessing import binarize
# Set threshold and get classes
thresh = 0.2
y_pred020 = binarize([posteriors[:, 1]], threshold=thresh)[0]
# new confusion matrix (threshold of 0.2)
print(metrics.confusion_matrix(y_test, y_pred020))
```
[[9435 232]
[ 140 193]]
Now LDA misclassifies only 140 out of 333 defaults, or 42.0%. Thats a sharp improvement over the 76.0% from before. But this comes at a price: Before, of those who did not default LDA mislabeled only 22 (or 0.2%) incorrectly. This number increased now to 232 (or 2.4%). Combined, the total error rate increased from 2.75% to 3.72%. For a credit card company, this might be a price they are willing to pay to have a more accurate identification of individuals who default.
The code snippet below calculates and plots the overall error rate, the proportion of missed defaulting customers and the fraction of errors among the non-defaulting customers as a function of the threshold value for the posterior probability that is used to assign classes.
```python
# Array of thresholds
thresh = np.linspace(0, 0.5, num=100)
er = [] # Total error rate
der = [] # Defaults error rate
nder = [] # Non-Defaults error rate
for t in thresh:
# Sort/arrange data
    y_pred_class = binarize([posteriors[:, 1]], threshold=t)[0]
confm = metrics.confusion_matrix(y_test, y_pred_class)
# Calculate error rates
er = np.append(er, (confm[0, 1] + confm[1, 0]) / len(posteriors))
der = np.append(der, confm[1, 0] / np.sum(confm[1, :]))
nder = np.append(nder, confm[0, 1] / np.sum(confm[0, :]))
```
```python
# Plot
plt.figure(figsize=(12, 6))
plt.plot(thresh, er, label='Total error rate')
plt.plot(thresh, der, label='Missed defaults')
plt.plot(thresh, nder, label='Missed non-defaults')
plt.xlim(0, 0.5)
plt.xlabel('Threshold')
plt.ylabel('Error Rate')
plt.legend();
```
How do we know what threshold value is best? Unfortunately there's no formula for it. "Such a decision must be based on *domain knowledge*, such as detailed information about costs associated with defaults" (James et al. (2013, p.147)) and it will always be a **trade-off: if we increase the threshold we reduce the missed non-defaults but at the same time increase the missed defaults**.
## Performance Metrics
This is now the perfect opportunity to refresh our memory on a few classification performance measures introduced in the previous chapters and to add a few more to have a full bag of performance metrics. The following overview of the four possible prediction outcomes will help in doing this.
We will use the following abbreviations ([Markham (2016)](https://github.com/justmarkham/scikit-learn-videos/blob/master/09_classification_metrics.ipynb)):
* True Positives (TP): correctly predicted defaults
* True Negatives (TN): correctly predicted non-defaults
* False Positives (FP): incorrectly predicted defaults ("Type I error")
* False Negatives (FN): incorrectly predicted non-defaults ("Type II error")
```python
# Assign confusion matrix values to variables
confm = metrics.confusion_matrix(y_test, y_pred)
print(confm)
TP = confm[1, 1] # True positives
TN = confm[0, 0] # True negatives
FP = confm[0, 1] # False positives
FN = confm[1, 0] # False negatives
```
[[9645 22]
[ 253 80]]
So far we've encountered the following performance metrics:
* Score
* Error rate
* Sensitivity and
* Specificity.
We briefly recapture their meaning, how they are calculated and how to call them in Scikit-learn. We will make use of the functions in the `metrics` sublibrary of `sklearn`
#### Score
* Score = (TN + TP) / (TN + TP + FP + FN)
* Fraction of (overall) correctly predicted classes
```python
print((TN + TP) / (TN + TP + FP + FN))
print(metrics.accuracy_score(y_test, y_pred))
print(lda.score(X_test, y_test))
```
0.9725
0.9725
0.9725
#### Error rate
* Error rate = 1 - Score or Error rate = (FP + FN) / (TN + TP + FP + FN)
* Fraction of (overall) incorrectly predicted classes
* Also known as "Misclassification Rate"
```python
print((FP + FN) / (TN + TP + FP + FN))
print(1 - metrics.accuracy_score(y_test, y_pred))
print(1 - lda.score(X_test, y_test))
```
0.0275
0.02749999999999997
0.02749999999999997
#### Specificity
* Specificity = TN / (TN + FP)
* Fraction of correctly predicted negatives (e.g. 'non-defaults')
```python
print(TN / (TN + FP))
```
0.9977242164063308
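There is no dedicated specificity function in `sklearn.metrics`, but since specificity is simply the recall of the negative class, it can also be obtained like this:
```python
# specificity = recall of the negative class (label 0, 'non-default')
print(metrics.recall_score(y_test, y_pred, pos_label=0))
```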
#### Sensitivity or Recall
* Sensitivity = TP / (TP + FN)
* Fraction of correctly predicted 'positives' (e.g. 'defaults'). Basically asks the question: "When the actual value is positive, how often is the prediction correct?"
* Also known as **True positive rate**
* Counterpart to *Precision*
```python
print(TP / (TP + FN))
print(metrics.recall_score(y_test, y_pred))
```
0.24024024024024024
0.24024024024024024
The above four classification performance metrics we already encountered. There are two more metrics we want to cover: **Precision** and the **F-Score**.
#### Precision
* Precision = TP / (TP + FP)
* Refers to the accuracy of a positive ('default') prediction. Basically asks the question: "When a positive value is predicted, how often is the prediction correct?"
* Counterpart to *Recall*
```python
print(TP / (TP + FP))
print(metrics.precision_score(y_test, y_pred))
```
0.7843137254901961
0.7843137254901961
#### F-Score
Van Rijsbergen (1979) introduced a measure that is still widely used to evaluate the accuracy of predictions in two-class (binary) classification problems: the F-Score. It combines Precision and Recall (aka Sensitivity) in one metric and tells us something about the relation between the data's positive labels and those assigned by a classifier. It is a single measure of a classification procedure's usefulness and, as a general rule, the higher the F-Score, the better the predictive power of the classification procedure. It is defined as:
\begin{align}
F_{\beta} &= \frac{(1 + \beta^2) \cdot \text{precision} \cdot \text{recall}}{\beta^2 \cdot \text{precision} + \text{recall}} \\[2ex]
&= \frac{(1+\beta^2) \cdot TP}{(1+\beta^2) \cdot TP + \beta^2 \cdot FN + FP}
\end{align}
This measure employs a parameter $\beta$ that captures a user's preference (Guggenbuehler (2015)). The most common value for $\beta$ is 1. This $F_1$-score weights both precision and recall evenly (simple harmonic mean). In rare cases the $F_2$-score is used, which puts twice as much weight on recall as on precision (Hripcsak and Rotschild (2005)).
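For values of $\beta$ other than 1, `sklearn` provides `metrics.fbeta_score`; for example, an $F_2$-score for our predictions could be computed as follows:
```python
# F-beta score with beta = 2 weights recall twice as heavily as precision
print(metrics.fbeta_score(y_test, y_pred, beta=2))
```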
```python
print(metrics.confusion_matrix(y_test, y_pred))
print(metrics.f1_score(y_test, y_pred))
print(((1+1**2) * TP)/((1+1**2) * TP + FN + FP))
print(metrics.classification_report(y_test, y_pred))
```
[[9645 22]
[ 253 80]]
0.367816091954023
0.367816091954023
precision recall f1-score support
0 0.97 1.00 0.99 9667
1 0.78 0.24 0.37 333
accuracy 0.97 10000
macro avg 0.88 0.62 0.68 10000
weighted avg 0.97 0.97 0.97 10000
Let us compare this to the situation where we set the posterior probability threshold for 'default' at 20%.
```python
# Confusion matrix & clf-report for cut-off
# value Pr(default=yes | X = x) > 0.20
print(metrics.confusion_matrix(y_test, y_pred020))
print(metrics.classification_report(y_test, y_pred020))
```
[[9435 232]
[ 140 193]]
precision recall f1-score support
0 0.99 0.98 0.98 9667
1 0.45 0.58 0.51 333
accuracy 0.96 10000
macro avg 0.72 0.78 0.74 10000
weighted avg 0.97 0.96 0.96 10000
We see that by reducing the cut-off level from $\Pr(\text{default} = 1| X=x) > 0.5$ to $\Pr(\text{default} = 1| X=x) > 0.2$ precision decreases but recall improves. This changes the $F_1$-score.
Does this mean that a threshold of 20% is more appropriate? In general, one could argue for a 'yes'. Yet, as mentioned before, this boils down to domain knowledge. Where the $F_1$-score is of help, together with the other metrics introduced, is when we compare models against each other and want to determine which one performed best. For example if we compare results from logistic regression and LDA (and both used a cut-off level of 50%) the F-score would suggest that the one with the higher value performed better.
For a summary on performance metrics the following two ressources are recommended:
* For the interested reader an excellent and easily accessible summary on performance metrics is the article by [Sokolova and Lapalme (2009)](http://rali.iro.umontreal.ca/rali/sites/default/files/publis/SokolovaLapalme-JIPM09.pdf).
* For further details and examples please also consider the [scikit-learn discription](https://scikit-learn.org/stable/modules/model_evaluation.html#precision-recall-f-measure-metrics).
## Precision-Recall Curve
If one is interested in understanding how precision and recall vary across different threshold levels, there is a function to do this.
```python
# Extract data displayed in above plot
precision, recall, threshold = metrics.precision_recall_curve(y_test, posteriors[:, 1])
print('Precision: ', precision)
print('Recall: ', recall)
print('Threshold: ', threshold)
```
Precision: [0.05540765 0.05525046 0.05525965 ... 1. 1. 1. ]
Recall: [1. 0.996997 0.996997 ... 0.00600601 0.003003 0. ]
Threshold: [0.00227206 0.00227354 0.00227411 ... 0.92183985 0.93353321 0.94256071]
This can easily be visualized - done in the next code snippet. We also add some more information to the plot by displaying the *Average Precision (AP)* and the *Area under the Curve (AUC)*. The former summarizes the plot in that it calculates the weighted mean of precisions achieved at each threshold, with the increase in recall from the previous threshold used as the weight [see further description here](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.average_precision_score.html#sklearn.metrics.average_precision_score). The latter calculates the area under the curve using the trapezoidal rule. Notice that ideally the function hugs the top-right corner.
```python
# Calculate the average precisions score
y_dec_bry = lda.decision_function(X_test)
average_precision = metrics.average_precision_score(y_test, y_dec_bry)
# Calculate AUC
prec_recall_auc = metrics.auc(recall, precision)
# Plot Precision/Recall variations given different
# levels of thresholds
plt.plot(recall, precision)
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.ylim([0.0, 1.05])
plt.xlim([0.0, 1.0])
plt.title('2-class Precision-Recall curve: \n AP={0:0.2f} / AUC={1:0.2f}'.format(
average_precision, prec_recall_auc));
```
## ROC Curve
Having introduced the major performance measures, let us now discuss the so-called ROC curve (short for "receiver operating characteristics"). This is a very popular way of visualizing the performance of binary classifiers. Its origins are in signal detection theory during WWII (Flaach (2017)) but it has since found application in medical decision making and machine learning ([Fawcett (2006)](http://people.inf.elte.hu/kiss/13dwhdm/roc.pdf)). ROC investigates the relationship between sensitivity and specificity of a binary classifier. Sensitivity (or true positive rate) measures the proportion of positives (defaults) correctly classified. Specificity (or true negative rate) measures the proportion of negatives (non-defaults) correctly classified.
Above we calculated that if we use $\Pr(\text{default = Yes}|X=x) > 0.5$ to classify posterior probabilities as defaults, LDA achieves its lowest overall error rate but misses 76.0% of the customers who actually defaulted. By decreasing the threshold to 0.2 we improved the accuracy of detecting defaults but this came at the cost of a higher overall error rate. This was the trade-off we faced. The ROC curve serves to visualize a variation of this trade-off. It varies the cut-off threshold from 0 to 1 and calculates for each threshold the true positive rate (aka sensitivity) and false positive rate (equals 1 - specificity). These values are then plotted with the former on the vertical and the latter on the horizontal axis.
Though this might feel a bit abstract if one is not familiar with all these technical terms, the interpretation is fortunately fairly simple. The ideal ROC curve will hug the top left corner. In that case, the area under the curve (AUC) is biggest. The bigger the AUC, the better the classifier. A perfect classifier has an AUC of 1.
Here's how we calculate the ROC numbers, the corresponding area under the curve and how we plot it.
```python
# Compute ROC curve and ROC area (AUC) for each class
fpr, tpr, thresholds = metrics.roc_curve(y_test, posteriors[:, 1])
roc_auc = metrics.auc(fpr, tpr)
```
```python
plt.figure(figsize=(6, 6))
plt.plot(fpr, tpr, lw=2, label='ROC curve (area = {0: 0.2f})'.format(roc_auc))
plt.plot([0, 1], [0, 1], lw=2, c = 'k', linestyle='--')
plt.xlim([-0.01, 1.0])
plt.ylim([-0.01, 1.01])
plt.xlabel('False Positive Rate (1 - Specificity)')
plt.ylabel('True Positive Rate (Sensitivity)')
plt.title('Receiver operating characteristic (ROC)', fontweight='bold', fontsize=18)
plt.legend(loc="lower right");
```
An AUC value of 0.95 is close to the maximum of 1 and should be deemed very good. The dashed black line puts this in perspective: it represents the "no information" classifier; this is what we would expect if the probability of default is not associated with 'student' status and 'balance'. Such a classifier, that performs no better than chance, is expected to have an AUC of 0.5.
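As a quick sanity check, we can reproduce this baseline with Scikit-learn's `DummyClassifier`, which ignores the features entirely (a minimal sketch, not part of the analysis above):
```python
# Minimal sketch: a "no information" baseline that ignores the features and
# always predicts the prior class probabilities; its ROC curve is the diagonal.
from sklearn.dummy import DummyClassifier

dummy = DummyClassifier(strategy='prior').fit(X_train, y_train)
fpr_d, tpr_d, _ = metrics.roc_curve(y_test, dummy.predict_proba(X_test)[:, 1])
print('No-information AUC: {0:0.2f}'.format(metrics.auc(fpr_d, tpr_d)))
```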
## Quadratic Discriminant Analysis
### Underlying Assumptions
For LDA we assume that observations within each class are drawn from a multivariate normal distribution with a class-specific mean vector and a common covariance matrix: $X \sim N(\mu_k, \Sigma)$. Quadratic discriminant analysis (QDA) relaxes these assumptions somewhat. The basic assumption is still that the observations follow a multivariate normal distribution, however, QDA allows for class-specific means and covariance matrices: $X \sim N(\mu_k, \Sigma_k)$, where $\Sigma_k$ is a covariance matrix for the $k$th class. With that, the Bayes classifier assigns an observation to the class for which
\begin{equation}
\delta_k(x) = \arg \max_k \; - \frac{1}{2} \ln(|\Sigma_k|) - \frac{1}{2} (x - \mu_k)^T \Sigma_k^{-1} (x - \mu_k) + \ln(\Pr(k))
\end{equation}
is highest. For a derivation of this result see again the appendix of the script. As was the case for LDA, the parameters $\mu_k$, $\Sigma_k$ and $\Pr(k)$ are again estimated from the training data with the same formulas introduced in this notebook.
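To make the expression concrete, here is a small standalone sketch that evaluates $\delta_k(x)$ with parameters estimated from the training data. It uses plain NumPy (no regularization), so the numbers may differ slightly from Scikit-learn's QDA implementation introduced below:
```python
# Illustrative sketch of the QDA discriminant function (not the sklearn implementation)
import numpy as np

def qda_discriminant(x, mu_k, sigma_k, prior_k):
    # delta_k(x) = -1/2 ln|Sigma_k| - 1/2 (x - mu_k)^T Sigma_k^{-1} (x - mu_k) + ln Pr(k)
    diff = x - mu_k
    return (-0.5 * np.log(np.linalg.det(sigma_k))
            - 0.5 * diff @ np.linalg.inv(sigma_k) @ diff
            + np.log(prior_k))

X_np, y_np = np.asarray(X_train), np.asarray(y_train)
x0 = np.asarray(X_test)[0]                      # a single test observation
classes = np.unique(y_np)
deltas = [qda_discriminant(x0,
                           X_np[y_np == k].mean(axis=0),
                           np.cov(X_np[y_np == k], rowvar=False),
                           np.mean(y_np == k))
          for k in classes]
print('Predicted class:', classes[np.argmax(deltas)])
```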
The figure below depicts both LDA and QDA. Both classifiers were trained on the same data. Due to the different variability of the two classes, the QDA algorithm seems to perform slightly better in this case.
Under what circumstances should we prefer QDA over LDA? As always, there's no straight answer to this question. Obviously, performance should be king. However, it is said that LDA tends to be a better bet than QDA if the training set is small. In contrast, if the training set is very large or the assumption of a common covariance matrix is clearly incorrect, then QDA is recommended. Beyond that, we have to keep in mind that QDA estimates $K p(p+1)/2$ covariance parameters. So if the number of features $p$ is large, QDA might take some time to process (James et al. (2013)).
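For a rough sense of scale, the following snippet simply counts the covariance parameters each method estimates on our feature set (means and priors come on top):
```python
# Covariance parameters estimated by LDA (one shared matrix) vs. QDA (one per class)
p = X_train.shape[1]   # number of features
K = 2                  # number of classes (default / no default)
print('LDA:', p * (p + 1) // 2)
print('QDA:', K * p * (p + 1) // 2)
```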
> **Naive Bayes**
> Naive Bayes is the name for a family of popular ML algorithms that are often used in text mining. Text mining is a field of ML that deals with extracting quantitative information from text. A simple example of it is the analysis of Twitter feeds in order to predict stock market reactions. There exist different variations of Naive Bayes applications. One is called 'Gaussian Naive Bayes' and works similarly to QDA - with the exception that contrary to QDA the covariance matrices $\Sigma$ are assumed to be diagonal. This means $\Sigma_k$ only contains the variances of the different features for class $k$. Its covariance terms (the off-diagonal elements) are assumed to be zero. Because of its popularity, Naive Bayes is well documented in text books and on the web. A good starting point is Scikit-learn's tutorial on [Naive Bayes](http://scikit-learn.org/stable/modules/naive_bayes.html), Collins (2013) or Russell and Norvig (2009, p.499). To apply the algorithm in Python you want to use `sklearn.naive_bayes.GaussianNB()` or (for text mining preferably) `sklearn.naive_bayes.MultinomialNB()`.
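For comparison, here is a minimal sketch of Gaussian Naive Bayes applied to the same data; it is only meant to illustrate the API, not as a tuned benchmark:
```python
# Gaussian Naive Bayes on the same train/test split, for a rough comparison with LDA/QDA
from sklearn.naive_bayes import GaussianNB

gnb = GaussianNB().fit(X_train, y_train)
print(metrics.confusion_matrix(y_test, gnb.predict(X_test)))
print(gnb.score(X_test, y_test))
```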
### QDA in Python
The application of QDA follows the one detailed for LDA. Therefore we let the code speak for itself.
```python
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis as QDA
# Run qda on training data
qda = QDA().fit(X_train, y_train)
qda
```
QuadraticDiscriminantAnalysis(priors=None, reg_param=0.0,
store_covariance=False, tol=0.0001)
```python
# Predict classes for qda
y_pred_qda = qda.predict(X_test)
posteriors_qda = qda.predict_proba(X_test)[:, 1]
# Print performance metrics
print(metrics.confusion_matrix(y_test, y_pred_qda))
print(qda.score(X_test, y_test))
print(metrics.classification_report(y_test, y_pred_qda))
```
[[9636 31]
[ 239 94]]
0.973
precision recall f1-score support
0 0.98 1.00 0.99 9667
1 0.75 0.28 0.41 333
accuracy 0.97 10000
macro avg 0.86 0.64 0.70 10000
weighted avg 0.97 0.97 0.97 10000
The performance seems to be slightly better than with LDA. Let's plot the ROC curve for both LDA and QDA.
```python
# Compute ROC curve and ROC area (AUC) for each class
fpr_qda, tpr_qda, _ = metrics.roc_curve(y_test, posteriors_qda)
roc_auc_qda = metrics.auc(fpr_qda, tpr_qda)
```
```python
plt.figure(figsize=(6, 6))
plt.plot(fpr, tpr, lw=2, label='LDA ROC (AUC = {0: 0.2f})'.format(roc_auc))
plt.plot(fpr_qda, tpr_qda, lw=2, label='QDA ROC (AUC = {0: 0.2f})'.format(roc_auc_qda))
plt.plot([0, 1], [0, 1], lw=2, c = 'k', linestyle='--')
plt.xlim([-0.01, 1.0])
plt.ylim([-0.01, 1.01])
plt.xlabel('False Positive Rate (1 - Specificity)')
plt.ylabel('True Positive Rate (Sensitivity)')
plt.title('ROC Curve', fontweight='bold', fontsize=18)
plt.legend(loc="lower right");
```
With respect to Sensitivity (Recall) and Specificity, LDA and QDA perform virtually identically. Nevertheless, one might give the edge here to QDA because of its slightly better Recall and $F_1$-Score.
## Reality and the Gaussian Assumption for LDA & QDA
Despite the rather strict assumptions regarding normal distribution, LDA and QDA perform well on an amazingly large and diverse set of classification tasks. Friedman et al. (2001, p. 111) put it this way:
> "*Both techniques are widely used, and entire books are devoted to LDA. It seems that whatever exotic tools are the rage of the day, we should always have available these two simple tools. The question arises why LDA and QDA have such a good track record. The reason is not likely to be that the data are approximately Gaussian, and in addition for LDA that the covariances are approximately equal. More likely a reason is that the data can only support simple decision boundaries such as linear or quadratic, and the estimates provided via the Gaussian models are stable. This is a bias variance tradeoff - we can put up with the bias of a linear decision boundary because it can be estimated with much lower variance than more exotic alternatives. This argument is less believable for QDA, since it can have many parameters itself - although perhaps fewer than the non-parametric alternatives.*"
Whether LDA or QDA should be applied to categorical/binary features warrants a separate note. It is true that discriminant analysis was designed for continuous features (Ronald A. Fisher (1936)) where the underlying assumption is that the values are normally distributed. However, as the above quote shows, studies have proven the robustness of the model even in light of violations of the rather rigid normality assumption. This is not only true for continuous features but also for categorical/binary features. For more details see Huberty et al. (1986). It follows that applying LDA and QDA is possible, though the user should carefully check the output. We will discuss appropriate cross validation methods to do so in the next chapter.
# Further Resources
In writing this notebook, many resources were consulted. For internet resources the links are provided within the text above and will therefore not be listed again. Beyond these links, the following resources were consulted and are recommended as further reading on the discussed topics:
* Collins, Michael, 2013, The Naive Bayes Model, Maximum-Likelihood Estimation, and the EM Algorithm, Technical report, Columbia University, New York.
* Fawcett, Tom, 2006, An introduction to ROC analysis, *Pattern Recognition Letters* 27, 861–874.
* Fisher, Ronald A., 1936, The Use of Multiple Measurements in Taxonomic Problems, *Annals of Human Genetics* 7, 179-188.
* Flach, Peter A., 2017, ROC Analysis, in Claude Sammut, and Geoffrey I. Webb, eds., *Encyclopedia of Machine Learning and Data Mining*, 1109–1116 (Springer Science & Business Media, New York, NY).
* Friedman, Jerome, Trevor Hastie, and Robert Tibshirani, 2001, *The Elements of Statistical Learning* (Springer, New York, NY).
* Guggenbuehler, Jan P., 2015, Predicting Net New Money Using Machine Learning Algorithms and Newspaper Articles, Technical report, University of Zurich, Zurich.
* James, Gareth, Daniela Witten, Trevor Hastie, and Robert Tibshirani, 2013, *An Introduction to Statistical Learning: With Applications in R* (Springer Science & Business Media, New York, NY).
* Jobson, J. David, and Bob Korkie, 1980, Estimation for Markowitz Efficient Portfolios, *Journal of the American Statistical Association* 75, 544–554.
* Hripcsak, George, and Adam S Rothschild, 2005, Agreement, the F-measure, and Reliability in Information Retrieval, *Journal of the American Medical Informatics Association* 12, 296–298.
* Huberty, Carl J., Joseph M. Wisenbaker, Jerry D. Smith, and Janet C. Smith, 1986, Using Categorical Variables in Discriminant Analysis, *Multivariate Behavioral Research* 21, 479-496.
* Ledoit, Olivier, and Michael Wolf, 2004, Honey, i shrunk the sample covariance matrix, *The Journal of Portfolio Management* 30, 110–119.
* Müller, Andreas C., and Sarah Guido, 2017, *Introduction to Machine Learning with Python* (O’Reilly Media, Sebastopol, CA).
* Raschka, Sebastian, 2014, Naive Bayes and Text Classification I - Introduction and Theory from website, http://sebastianraschka.com/Articles/2014_naive_bayes_1.html, 08/31/2017
* Russell, Stuart, and Peter Norvig, 2009, *Artificial Intelligence: A Modern Approach* (Prentice Hall Press, Upper Saddle River, NJ).
* Sokolova, Marina, and Guy Lapalme, 2009, A systematic analysis of performance measures for classification tasks, *Information Processing & Management* 45, 427–437.
* Van Rijsbergen, Cornelis Joost, 1979, *Information Retrieval* (Butterworths, London).
# Addendum
## predict, predict_proba, and decision_function
Let us quickly discuss the difference between the
* `classifier.predict()`,
* `classifier.predict_proba()`, and
* `classifier.decision_function()`.
`classifier.predict()` we already know: it simply predicts the label given the trained classifier and a feature matrix X (preferably a test set).
```python
lda.predict(X_test)[:10]
```
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=int64)
`classifier.predict_proba()` we have also introduced above: it provides probabilities of $\Pr(y = 0|X=x)$ in the first column and $\Pr(y = 1|X=x)$ in the second.
```python
lda.predict_proba(X_test)[:10]
```
array([[9.96778003e-01, 3.22199657e-03],
[9.97311835e-01, 2.68816484e-03],
[9.85293824e-01, 1.47061759e-02],
[9.98816468e-01, 1.18353192e-03],
[9.95978476e-01, 4.02152411e-03],
[9.95793515e-01, 4.20648549e-03],
[9.95593550e-01, 4.40644998e-03],
[9.97315195e-01, 2.68480456e-03],
[9.77081674e-01, 2.29183256e-02],
[9.99906164e-01, 9.38359460e-05]])
Finally, `classifier.decision_function()` predicts confidence scores given the feature matrix. The confidence score of a sample is the signed distance of that sample to the decision hyperplane. What this exactly means should become clearer once we have discussed the support vector classifier (SVC).
```python
lda.decision_function(X_test)[:10]
```
array([-5.73452686, -5.91620475, -4.20467236, -6.73806793, -5.51206468,
-5.46691242, -5.42026972, -5.91745893, -3.75263341, -9.27386872])
## ROC & Precision-Recall Curve in Sklearn Version 0.22.1
Starting with Scikit-learn version 0.22.1 the plotting of the ROC and Precision-Recall curve was integrated into Scikit-learn and there are now functions available to cut the plotting work a bit short. Below are two code snippets that show how to do it.
```python
# Plot Precision-Recall Curve
disp = metrics.plot_precision_recall_curve(lda, X_test, y_test);
```
```python
disp = metrics.plot_roc_curve(lda, X_test, y_test);
```
Notice that you can also overlay ROCs from multiple models. [See this example](https://scikit-learn.org/stable/auto_examples/plot_roc_curve_visualization_api.html)
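A minimal sketch of such an overlay, reusing the `lda` and `qda` models fitted above, could look like this:
```python
# Overlay the ROC curves of two fitted models on the same axes (Scikit-learn >= 0.22)
ax = plt.gca()
metrics.plot_roc_curve(lda, X_test, y_test, ax=ax)
metrics.plot_roc_curve(qda, X_test, y_test, ax=ax);
```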
# Bayesian Inference in the Poisson Generalized Linear Model
**References:**
- Chapter 16 of BDA3 contains background material on generalized linear models.
- Chapter 7.1 of BDA3 introduces notation for model evaluation based on predictive log likelihoods.
## The Poisson GLM
The Poisson distribution is a common model for count data with a single parameter $\lambda \in \mathbb{R}_+$. Its pmf is,
\begin{align}
\Pr(y \mid \lambda) &= \frac{1}{y!} e^{-\lambda} \lambda^y,
\end{align}
for $y \in \mathbb{N}$. Its mean and variance are both equal to $\lambda$.
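As a quick numerical sanity check (not part of the model itself), SciPy's `poisson` object reproduces the pmf and the mean/variance property for an arbitrary $\lambda$:
```python
# Quick check of the Poisson pmf and its mean/variance (lambda chosen arbitrarily)
from scipy import stats

lam = 3.5
rv = stats.poisson(lam)
print(rv.pmf(2))             # Pr(y = 2 | lambda)
print(rv.mean(), rv.var())   # both equal lambda
```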
Suppose we have count observations $y_n \in \mathbb{N}$ along with covariates $x_n \in \mathbb{R}^P$. We construct a Poisson GLM by modeling the expected value as,
\begin{align}
\mathbb{E}[y_n \mid x_n] = f(w^\top x_n),
\end{align}
with $w \in \mathbb{R}^P$ and $f: \mathbb{R} \to \mathbb{R}_+$ is the mean function. The _canonical mean function_ is $f(a) = e^a$; equivalently, the canonical _link_ function is the logarithm.
We assume a Gaussian prior on the weights $w$:
$$
w \sim \mathcal{N}(0, \sigma^2 I),
$$
where $\sigma^2 I$ is the covariance matrix.
Derive the log joint probability,
\begin{align}
\mathcal{L}(w) &\triangleq \log p(\{y_n\}_{n=1}^N, w \mid \{x_n\}_{n=1}^N, \sigma^2) \\
&= \log p(w\mid\sigma^2) + \sum_{n=1}^N\log p(y_n \mid w^Tx_n)
\end{align}
If we replace $\lambda = \exp(w^Tx_n)$ in the Poisson pmf we get
\begin{align}
\mathcal{L}(w) &= \log\mathcal{N}(w\mid 0, \sigma^2\mathbf{I})+\sum_{n=1}^N\log\left(\frac{1}{y_n!}\exp\left\{\left \langle y_n, w^Tx_n \right \rangle-\exp(w^Tx_n)\right\}\right)\\
&= \log\mathcal{N}(w\mid 0, \sigma^2\mathbf{I}) + \sum_{n=1}^N\left(\left \langle y_n, w^Tx_n \right \rangle - \exp(w^Tx_n) - \log (y_n!)\right)\\
&= -\frac{w^Tw}{2\sigma^2} + \left \langle \sum_{n=1}^Ny_nx_n, w \right \rangle - \sum_{n=1}^N\exp(w^Tx_n) + C
\end{align}
\begin{align}
\nabla_w \mathcal{L}(w) &= -\nabla_w\frac{w^Tw}{2\sigma^2} + \nabla_w\left \langle \sum_{n=1}^Ny_nx_n, w \right \rangle - \nabla_w\sum_{n=1}^N\exp(w^Tx_n)\\
&= -\frac{w}{\sigma^2}+\sum_{n=1}^Ny_nx_n-\sum_{n=1}^Nx_n\exp(w^Tx_n)
\end{align}
\begin{align}
\nabla^2_w \mathcal{L}(w) &= -\nabla_w\frac{w}{\sigma^2}+\nabla_w\sum_{n=1}^Ny_nx_n-\nabla_w\sum_{n=1}^Nx_n\exp(w^Tx_n)\\
&= -\frac{I}{\sigma^2}+0-\sum_{n=1}^Nx_n\nabla_w\exp(w^Tx_n)\\
&= -\frac{I}{\sigma^2}-\sum_{n=1}^Nx_nx_n^T\exp(w^Tx_n)
\end{align}
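The quantities derived above translate directly into code. The project below uses a `PoissonGLM` class for this; the following standalone sketch is only meant to make the formulas concrete (additive constants are dropped from the log joint):
```python
# Standalone NumPy sketch of the log joint, gradient and Hessian derived above
import numpy as np

def log_joint(w, X, y, sigma2):
    eta = X @ w
    return -w @ w / (2 * sigma2) + y @ eta - np.sum(np.exp(eta))

def gradient(w, X, y, sigma2):
    return -w / sigma2 + X.T @ (y - np.exp(X @ w))

def hessian(w, X, y, sigma2):
    lam = np.exp(X @ w)
    return -np.eye(len(w)) / sigma2 - (X * lam[:, None]).T @ X
```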
## Maximum a posteriori (MAP) Estimation
### Optimize to find the posterior mode
```python
import sys
sys.path.append('../')
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from src.d01_data.dengue_data_api import DengueDataApi
from src.d04_modeling.poisson_glm import PoissonGLM
```
```python
dda = DengueDataApi()
x_train, x_validate, y_train, y_validate = dda.split_data()
axs0 = sns.heatmap(y_train.unstack('weekofyear'))
```
```python
sigma2 = 1.
poisson_glm1 = PoissonGLM(x_train=x_train, y_train=y_train, sigma2=sigma2)
```
```python
_, w_hist = poisson_glm1.compute_posterior_mode()
w_hist_df = pd.DataFrame(w_hist, columns=x_train.columns)
w_hist_df['log_joint'] = w_hist_df.apply(lambda w: poisson_glm1.log_joint(y_train.values.reshape((-1,1)),
x_train.values, 1.,
w.values, sigma2), axis=1)
w_hist_df.name = 'iter'
axs1 = sns.lineplot(data=w_hist_df.iloc[1:].reset_index(), x="index", y="log_joint")
```
## Laplace Approximation
### Approximate the covariance at the mode
Solve for $\Sigma_{\mathsf{MAP}} = -[\nabla^2 \mathcal{L}(w_{\mathsf{MAP}})]^{-1}$. Plot the covariance matrix (e.g. with `imshow`). Don't forget to add a colorbar and labels.
```python
poisson_glm1.compute_laplace_approximation()
cov_map = poisson_glm1.get_cov_map()
cov_map_df = pd.DataFrame(cov_map, index=x_train.columns, columns=x_train.columns)
axs2 = sns.heatmap(cov_map_df)
```
# Posterior of the weights
Plot the posterior mean of the weights for features $x_n$ (i.e. not including the bias term). Also plot 95% credible intervals around the mean by using two standard deviations of the marginal distribution of the weights. Note the diagonal of $\Sigma_{\mathsf{MAP}}$ gives the marginal variance of the posterior.
```python
w_map = poisson_glm1.get_w_map()
var_map = np.diagonal(cov_map)
fig, ax = plt.subplots()
n = len(var_map)
ax.errorbar(range(n), w_map, yerr=1.96*np.sqrt(var_map/n), marker='o', linestyle='none')
ax.set_xlabel('Covariate')
ax.set_xticks(range(n))
ax.set_xticklabels(x_train.columns, rotation='vertical')
ax.set_ylabel('$w$')
plt.show()
```
## Model validation
### Approximate the posterior predictive distribution of the rates
We can draw many samples $w^{(s)}$ from the Laplace approximation of the posterior $p(w \mid \{x_n, y_n\})$. Use those samples to approximate the posterior predictive distribution on the **test** dataset,
\begin{align}
p(y_{n'}=k \mid x_{n'}, \{x_n, y_n\}_{n=1}^N) &=
\int p(y_{n'} \mid w, x_{n'}) \, p(w \mid \{x_n, y_n\}_{n=1}^N) \, \mathrm{d} w \\
&\approx \frac{1}{S} \sum_{s=1}^S p(y_{n'}=k \mid w^{(s)}, x_{n'})
\end{align}
where
\begin{align}
w^{(s)} &\sim p(w \mid \{x_n, y_n\}_{n=1}^N) \\
&\approx \mathcal{N}(w \mid w_{\mathsf{MAP}}, \Sigma_{\mathsf{MAP}})
\end{align}
Visualize the posterior predictive distribution as a $K \times N_{\mathsf{test}}$ array where each row corresponds to a possible count $k\in \{0,\ldots, K\}$. You can set $K=5$ for this problem. **Only show the first 100 columns (time bins), otherwise it's hard to see changes in the rate.**
Overlay the actual counts for the test dataset.
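A minimal Monte Carlo sketch of this approximation is shown below; the `validate_model` helper used afterwards wraps the evaluation, and the handling of any bias term inside the `PoissonGLM` class is glossed over here:
```python
# Illustrative Monte Carlo approximation of the posterior predictive distribution
import numpy as np
from scipy import stats

S, K = 1000, 5
w_samples = np.random.multivariate_normal(np.ravel(poisson_glm1.get_w_map()),
                                          poisson_glm1.get_cov_map(), size=S)
rates = np.exp(x_validate.values @ w_samples.T)               # N_test x S
post_pred = np.stack([stats.poisson.pmf(k, rates).mean(axis=1)
                      for k in range(K + 1)])                 # (K+1) x N_test
```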
```python
log_joint, mae, e = poisson_glm1.validate_model(x_validate=x_validate, y_validate=y_validate)
print("Log Joint Probability: %.2f" % log_joint)
print("MAE: %.6f" % mae)
```
## Singular Value Decomposition
```python
x_train, x_validate, y_train, y_validate = dda.split_data()
u, s, vh = np.linalg.svd(x_train, full_matrices=True)
num_components = 4
new_features = ["pc%i" % i for i in range(num_components)]
z_train = pd.DataFrame(np.dot(x_train, vh[:num_components, :].T), columns=new_features, index=x_train.index)
z_validate = pd.DataFrame(np.dot(x_validate, vh[:num_components, :].T), columns=new_features, index=x_validate.index)
sigma2 = 1.
poisson_glm2 = PoissonGLM(x_train=z_train, y_train=y_train, sigma2=sigma2)
poisson_glm2.compute_laplace_approximation()
cov_map = poisson_glm2.get_cov_map()
cov_map_df = pd.DataFrame(cov_map, index=z_train.columns, columns=z_train.columns)
axs2 = sns.heatmap(cov_map_df)
```
```python
w_map = poisson_glm2.get_w_map()
var_map = np.diagonal(cov_map)
fig, ax = plt.subplots()
n = len(var_map)
ax.errorbar(range(n), w_map, yerr=1.96*np.sqrt(var_map), marker='o', linestyle='none')
ax.set_xlabel('Covariate')
ax.set_xticks(range(n))
ax.set_xticklabels(z_train.columns, rotation='vertical')
ax.set_ylabel('$w$')
plt.show()
```
```python
log_joint, mae, e = poisson_glm2.validate_model(x_validate=z_validate, y_validate=y_validate)
print("Log Joint Probability: %.2f" % log_joint)
print("MAE: %.6f" % mae)
```
```python
```
<link rel="stylesheet" href="../../styles/theme_style.css">
<!--link rel="stylesheet" href="../../styles/header_style.css"-->
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0/css/font-awesome.min.css">
<table width="100%">
<tr>
<td id="image_td" width="15%" class="header_image_color_6"><div id="image_img" class="header_image_6"></div></td>
<td class="header_text"> Parameter Extraction - Temporal/Statistical Parameters </td>
</tr>
</table>
<div id="flex-container">
<div id="diff_level" class="flex-item">
<strong>Difficulty Level:</strong> <span class="fa fa-star checked"></span>
<span class="fa fa-star checked"></span>
<span class="fa fa-star"></span>
<span class="fa fa-star"></span>
<span class="fa fa-star"></span>
</div>
<div id="tag" class="flex-item-tag">
<span id="tag_list">
<table id="tag_list_table">
<tr>
<td class="shield_left">Tags</td>
<td class="shield_right" id="tags">extract|statistics|temporal signals</td>
</tr>
</table>
</span>
<!-- [OR] Visit https://img.shields.io in order to create a tag badge-->
</div>
</div>
When we are working with a signal recording, it is important to perform a preliminary analysis to understand how it behaves and obtain its main characteristics. One of the most prevalent ways to do this is by extracting descriptive parameters. There are different ways to approach this, and this notebook will cover some of the most common statistical and temporal parameters that can be used to analyze time series.
The use of these descriptive characteristics will help us decide the most effective strategy to follow during data preparation, model selection and model tuning, depending on the intended use of the data.
**List of statistical parameters:**
+ Maximum, minimum and range
+ Mean
+ Mode
+ Median and quantiles
+ Variance and standard deviation
+ Skewness and kurtosis
**List of temporal parameters:**
+ Autocorrelation
+ Stationarity
+ Seasonal decomposition
<p class="steps">1 - Importation of the needed packages</p>
```python
# biosignalsnotebooks own package for loading and plotting the acquired data
import biosignalsnotebooks as bsnb
# Scientific packages
import numpy
import math
import scipy.integrate as integrate
import scipy.stats as stats
import statsmodels.tsa.stattools as stattools
import statsmodels.tsa.seasonal as seasonal
import pandas as pd
```
```python
# Base packages used in OpenSignals Tools Notebooks for plotting data
from bokeh.plotting import figure, output_file, show, curdoc
from bokeh.io import output_notebook
from bokeh.layouts import column
from bokeh.models import ColumnDataSource, Plot, LinearAxis, BoxAnnotation, Arrow, VeeHead, LinearAxis, Range1d
output_notebook(hide_banner=True)
```
<p class="steps">2 - Load of sample signal data</p>
```python
# Load of data
data, header = bsnb.load("../../signal_samples/ecg_20_sec_100_Hz.h5", get_header=True)
channel = list(data.keys())[0]
print(header)
```
{'channels': array([1]), 'comments': '', 'date': b'2018-9-28', 'device': 'biosignalsplux', 'device connection': b'BTH00:07:80:D8:A7:F9', 'device name': b'00:07:80:D8:A7:F9', 'digital IO': array([0, 1]), 'firmware version': 773, 'resolution': array([16]), 'sampling rate': 100, 'sync interval': 2, 'time': b'14:36:39.886', 'sensor': [b'ECG'], 'column labels': {1: 'channel_1'}}
<p class="steps">3 - Storage of sampling frequency and acquired data inside variables</p>
Since in this case we know that the original signal is a ECG recording, we can also convert its units to mV following the method explained in the <a href="https://www.biosignalsplux.com/notebooks/Categories/Pre-Process/unit_conversion_ecg_rev.php">ECG Sensor - Unit Conversion </a> notebook.
```python
# Sampling frequency and acquired data
fs = header["sampling rate"]
# Signal Samples
signal_raw = data[channel]
time = numpy.linspace(0, len(signal_raw) / fs, len(signal_raw))
# Let's convert the signal's units, since we know it is a ECG signal
vcc = 3000 # mV
gain = 1000
ch = "CH1" # Channel
sr = header['sampling rate'] # Sampling rate
resolution = header['resolution'] # Resolution (number of available bits)
signal = (((numpy.array(signal_raw) / 2**resolution) - 0.5) * vcc) / gain
# Let's plot our raw signal
p = figure(plot_width=1000, plot_height=200)
p.ygrid.grid_line_alpha=0.5
# add a circle renderer with x and y coordinates, size, color, and alpha
p.line(time, signal)
show(p) # show the results
```
<div class="bk-root" id="e71a33ca-24e0-4851-8afe-d99581d60054" data-root-id="1001"></div>
<p class="steps">4 - Extraction of statistical parameters in a time signal</p>
A fundamental task in many statistical analyses is to characterize the location (a central value that best describes the data) and variability (or spread) of a dataset, in order to describe and understand it better. These descriptors are independent of the domain of the signal and can be used to describe any type of information or dataset. Because they are simple and easy to calculate, they are widely used during statistical analyses.
<p class="steps">4.1 - Minimum, maximum and range values</p>
The minimum and maximum values are, respectively, the lowest and highest power values across the signal duration. There may be more than one occurrence of the minimum and maximum value across the time series.
The range is the difference between the maximum and the minimum value.
```python
# Let's calculate the minimum, maximum and range of our signal
max_value = max(signal)
min_value = min(signal)
signal_range = max_value - min_value
print("Maximum: ", max_value, "\nMinimum: ", min_value, "\nRange (maximum - minimum): ", signal_range)
```
Maximum: 1.10101318359375
Minimum: -0.170013427734375
Range (maximum - minimum): 1.271026611328125
```python
# Let's draw this values in our signal
index_max = numpy.where(signal == max_value)[0]
index_min = numpy.where(signal == min_value)[0]
# Let's plot our raw signal
p_maxmin = figure(plot_width=1000, plot_height=200)
p_maxmin.ygrid.grid_line_alpha=0.5
p_maxmin.line(time, signal)
p_maxmin.circle([time[i] for i in index_max],[max_value for i in range(len(index_max))], size=15, line_color="navy", fill_color="orange", fill_alpha=0.5, legend_label='Maximum')
p_maxmin.circle([time[i] for i in index_min],[min_value for i in range(len(index_min))], size=15, line_color="orange", fill_color="navy", fill_alpha=0.5, legend_label='Minimum')
show(p_maxmin)
```
<div class="bk-root" id="b5c84c55-e269-460a-8cb6-0d52fb517cb4" data-root-id="1101"></div>
<p class="steps">4.2 - Mean</p>
The arithmetic mean ($\bar{x}$) of a temporal data set represents the average power value across the signal duration.
\begin{align}
\bar{x}=\frac{1}{n}\sum_{i=1}^n x_i=\frac{x_1+x_2+\cdots+x_n}{n} \\
\end{align}
With $x_1, x_2, \ldots, x_n$ being the values in the dataset and $n$ the number of elements in the dataset.
```python
# Let's calculate the mean value of our signal
mean = numpy.mean(signal)
print('Mean:', mean)
```
Mean: 0.00021564978679627862
```python
# Let's draw the mean line in our plot
p.line(time, mean, line_alpha=0.5, color="firebrick", line_width=3, legend_label='Mean')
show(p)
```
<div class="bk-root" id="dec52d3a-664c-402a-8c27-a794f7fe0141" data-root-id="1001"></div>
<p class="steps">4.3 - Mode</p>
The mode represents the power value with the highest number of occurrences in the dataset across the signal duration.
There may be more than one mode, if two or more different values have the same number of occurrences.
```python
# Let's find the mode (or modes) in our signal
mode, occurrences = stats.mode(signal)
print('Mode: ', mode, '. Occurrences: ', occurrences)
```
Mode: [-0.04655457] . Occurrences: [9]
```python
# Let's print the mode values in a separate new graph
index_mode = numpy.where(signal == mode)
p_modes = figure(plot_width=1000, plot_height=200)
p_modes.line(time, signal)
p_modes.ygrid.grid_line_alpha=0.5
index_aux = 0
for i in index_mode:
p_modes.line(time, mode[index_aux], color="green", line_width=3, line_alpha=0.5) # we can draw the line marking the mode value in the y-axis
p_modes.circle([time[idx] for idx in i], mode[index_aux],
size=10, line_color="green", fill_color="green", fill_alpha=0.5) # we can also mark the points where the function has the mode value
index_aux += 1
show(p_modes)
```
<div class="bk-root" id="2f5b0f40-27c7-4807-bb4c-44133079a78f" data-root-id="1423"></div>
<p class="steps">4.4 - Median value and quantiles</p>
The median is the value that separates the top half from the lower half in a dataset. It is a popular summary statistic because it is simple to calculate and gives a measure that is more robust than the mean in the presence of outlier values.
[//]: <> (\begin{align})
[//]: <> (\mathrm{median}(a) = \frac{a_{\lfloor(\#x+1) \div 2\rfloor} + a_{\lceil(\#x+1) \div 2\rceil}}{2} \\)
[//]: <> (\end{align})
The median is also known as the 2-quantile because it marks the middle point of the dataset. However, other <a href="https://en.wikipedia.org/wiki/Quantile">quantiles </a> can also be explored.
For example, 4-quantiles (or quartiles) divide the dataset in four sections:
- the first quartile (Q1), also known as the 25th percentile, splits off the lowest 25% of data from the highest 75%.
- the second quartile (Q2), also known as the 50th percentile, is the same as the median.
- the third quartile (Q3), also known as the 75th percentile splits off the lowest 75% of data from the highest 25%.
```python
# Let's calculate the median, 25 and 75 quartiles
median = numpy.median(signal) # 50% of the values of the signal will be lower than the median or 50th quantile.
Q1 = numpy.quantile(signal, 0.25) # 25% of the values of the signal will be lower than the 25th quantile
Q3 = numpy.quantile(signal, 0.75) # 75% of the values of the signal will be lower than the 75th quantile
print('Mean: ', mean)
print('| Q1 |\t\t\t| Q2 (Median) |\t\t\t| Q3 |\n', Q1, '\t', median, '\t\t', Q3)
```
Mean: 0.00021564978679627862
| Q1 | | Q2 (Median) | | Q3 |
-0.0703582763671875 -0.0411529541015625 0.026275634765625
```python
# Let's see them in a separated new graph to avoid cluttering the main one
p_quartiles = figure(plot_width=1000, plot_height=200)
p_quartiles.line(time, signal)
p_quartiles.ygrid.grid_line_alpha=0.5
# Redrawing the mean line...
p_quartiles.line(time, mean, color="red", line_width=2, legend_label='Mean')
p_quartiles.line(time, Q3, color="yellow", line_width=2, line_dash='dashed', legend_label='Q3')
p_quartiles.line(time, median, color="deeppink", line_width=2, line_dash='dashed', legend_label='Q2 (Median)')
p_quartiles.line(time, Q1, color="cyan", line_width=2, line_dash='dashed', legend_label='Q1')
p_quartiles.legend.location = "top_left"
p_quartiles.y_range = Range1d(-0.15, 0.15)
p_quartiles.x_range = Range1d(-1, 6)
show(p_quartiles)
```
<div class="bk-root" id="c2015ce5-f531-45ea-a735-e551e379c193" data-root-id="1585"></div>
In this example we can observe that the distance between Q3 and Q2 is bigger than the distance between Q2 and Q1.
Quartiles give us an idea of how the signal's values are distributed.
Even though the signal has extreme values (peaks), their frequency of occurrence is low, and that is why the distance between the Q3 line and the top of the graph is much bigger than the distance between any of the other quartiles.
<p class="steps">4.5 - Standard deviation and variance</p>
Both the standard deviation and the variance are statistics determined by using the mean, and used to measure the degree of dispersion in a dataset. Informally, they measure how close a group of numbers tend to be to the mean.
The variance ($\sigma^2$) is defined as the average of the squared differences from the mean of the dataset. It is obtained by calculating the difference between each point of the dataset and the mean, squaring and averaging the results. Squaring the distance from the mean gives greater weight to values that are further from the mean.
However, because of the squaring, the variance is no longer in the same measurement units as the dataset. This makes measuring variance a bit difficult, so many times the standard deviation is used instead.
The standard deviation ($\sigma$) is the square root of the variance. A low standard deviation value indicates that the values tend to be more concentrated and closer to the mean, whilst a high value indicates that the values tend to spread out over a wider range.
\begin{align}
\sigma = \sqrt{\frac{1}{N-1}\sum_{i=1}^N (x_i - \bar{x})^2 }, \\
\end{align}
```python
# Let's calculate the standard deviation and the variance
std = numpy.std(signal)
variance = numpy.var(signal)
print('Mean: ', mean)
print('Standard deviation: ', std, '\nVariance: ', variance)
```
Mean: 0.00021564978679627862
Standard deviation: 0.16587464942332372
Variance: 0.02751439932131055
```python
# Let's draw the std line in our graph
p.line(x=time, y=mean+std, line_color='orange', line_dash='dashed', line_width=2, legend_label='Standard deviation')
p.line(x=time, y=mean-std, line_color='orange', line_dash='dashed', line_width=2)
show(p)
```
<div class="bk-root" id="c0ee57c2-d8c0-4d37-a0a6-11c34e397c53" data-root-id="1001"></div>
<p class="steps">4.6 - Skewness and kurtosis</p>
The skewness is a measure of symmetry (or the lack thereof) in a dataset. A dataset is symmetric if it looks the same to the left and right of the center point. A normal distribution has a skewness of 0.
- <strong>Negative (<0) skew value</strong>: longer left tail; distribution concentrates on the right.
- <strong>Positive (>=0) skew value</strong>: longer right tail; distribution concentrates on the left.
Kurtosis indicates if the dataset is heavy-tailed or light-tailed in comparison to a normal distribution. Datasets with high kurtosis ('heavy-tailed') tend to have outliers. The lower the kurtosis value, the less common it is to find outliers in a dataset.
```python
# Let's calculate the skewness and kurtosis
skewness = stats.skew(signal)
kurtosis = stats.kurtosis(signal)
print('Skewness: ', skewness, '\nKurtosis: ', kurtosis)
```
Skewness: 4.271322272366648
Kurtosis: 21.250186573425474
```python
arr_hist, edges = numpy.histogram(signal, bins = int(180/5))
# Put the information in a dataframe
delays = pd.DataFrame({'arr_delay': arr_hist, 'left': edges[:-1], 'right': edges[1:]})
p_histogram = figure(plot_height = 250, plot_width = 1000, x_axis_label="mV", y_axis_label="bins")
# Add a quad glyph
p_histogram.quad(bottom=0, top=delays['arr_delay'],
left=delays['left'], right=delays['right'],
fill_color='red', line_color='black')
# Show the plot
show(p_histogram)
```
<div class="bk-root" id="cbd472d8-f4ea-4fba-9693-bed66abc0bb6" data-root-id="2135"></div>
The histogram shows clearly that the time series is right tailed, which is consistent with the positive skew value obtained.
Some outliers are also present on the right side of the histogram, around the 1 mV value. This is consistent with the kurtosis result: as we previously mentioned in the analysis of the quartiles, the highest peaks of the heartbeat cycle appear throughout the signal, although with a low frequency.
<p class="steps">5 - Temporal parameters</p>
Parameters for analyzing time series and extracting meaningful statistics or other characteristics of the signal.
These are parameters specific to data series located in the time domain. They extract relevant and descriptive information about a dataset by virtue of its temporal nature, which should be taken into account when analyzing temporal series.
<p class="steps">5.1 - Autocorrelation</p>
Autocorrelation, or serial correlation, is often used in time domain signals. It is the relationship of a signal with a delayed copy of itself as a function of delay. The analysis of autocorrelation finds repeating patterns, such as the presence of a periodic signal obscured by noise.
The ACF (autocorrelation function) shows the correlation between points separated by various time lags. So, in other words, it is the degree of association between points based on how many time steps apart they are. Normally, the autocorrelation function falls towards 0 as points become more separated, because the bigger the separation, the less correlation between the points. This is not a rule, but it is the most typical scenario.
```python
# Let's obtain the autocorrelation function for our signal
autocorrelation_function = stattools.acf(signal, fft=True, nlags=round(len(signal)*0.75))
```
```python
def get_autocorrelation_plot_params(series):
n = len(series)
data = numpy.asarray(series)
mean = numpy.mean(data)
c0 = numpy.sum((data - mean) ** 2) / float(n)
def r(h):
return ((data[:n - h] - mean) *
(data[h:] - mean)).sum() / float(n) / c0
x = numpy.arange(n) + 1
y = list(map(r, x))
z95 = 1.959963984540054 # confidence interval 95%
z99 = 2.5758293035489004 # confidence interval 99%
return n, x, y, z95, z99
n, x, y, z95, z99 = get_autocorrelation_plot_params(autocorrelation_function)
x = x/100
auto_correlation_plot2 = figure(title='Time Series Auto-Correlation', plot_width=1000,
plot_height=400, x_axis_label="Lag (s)", y_axis_label="Autocorrelation")
auto_correlation_plot2.line(x, y=z99 / numpy.sqrt(n), line_dash='dashed', line_color='red', legend_label='99% confidence band')
auto_correlation_plot2.line(x, y=z95 / numpy.sqrt(n), line_color='red', legend_label='95% confidence band')
auto_correlation_plot2.line(x, y=0.0, line_color='black')
auto_correlation_plot2.line(x, y=-z95 / numpy.sqrt(n), line_color='red')
auto_correlation_plot2.line(x, y=-z99 / numpy.sqrt(n), line_dash='dashed', line_color='red')
auto_correlation_plot2.line(x, y, line_width=1)
#auto_correlation_plot2.circle(x, y, fill_color="white", size=8) # optional
curdoc().add_root(column(auto_correlation_plot2))
show(auto_correlation_plot2)
```
<div class="bk-root" id="1fcb69c7-8d4e-468e-b99c-185a9b3e2fe9" data-root-id="2293"></div>
As expected, the ACF tends to 0 the longer the distance between the points. However, the analysis highlights the presence of a periodic pattern, with high correlation, characterized by the succession of a negative, positive and negative peak with a period of less than 1 second.
In this case, since this signal is an ECG recording, it makes sense that there is a periodicity associated with the heartbeat.
<p class="steps">5.2 - Stationarity</p>
A time series is stationary when its statistic properties, such as the mean and variance, do not change over time.
The stationarity of a dataset can usually be predicted by looking at the signal's plot, histogram or ACF. It can also be checked by calculating the mean and variance of the signal in different time intervals; if the results are similar then the signal is probably stationary. There are also statistical tests to determine if a time series is stationary, for instance the Dickey-Fuller test or the KPSS (Kwiatkowski-Phillips-Schmidt-Shin) test.
```python
# Let's try to check the stationarity of our signal
print('Length of the signal: ', len(signal))
# We can measure the mean and variance of different intervals of the signal and see if they are different
mean_1 = numpy.mean(signal[1:300])
mean_2 = numpy.mean(signal[800:1100])
mean_3 = numpy.mean(signal[1600:1900])
var_1 = numpy.var(signal[1:300])
var_2 = numpy.var(signal[800:1100])
var_3 = numpy.var(signal[1600:1900])
print('Means: ', mean_1, mean_2, mean_3)
print('Variances: ', var_1, var_2, var_3, '\n')
# Let's try the Dickey-Fuller test
from statsmodels.tsa.stattools import adfuller
adf_test = adfuller(signal)
# Let's try the KPSS test
from statsmodels.tsa.stattools import kpss
kpsstest = kpss(signal, regression='c')
```
Length of the signal: 1965
Means: -0.0026015980187865805 0.006810760498046875 -0.00485137939453125
Variances: 0.0260274864826267 0.03005196664656978 0.025713346455339344
c:\users\gui_s\appdata\local\programs\python\python37-32\lib\site-packages\statsmodels\tsa\stattools.py:1685: FutureWarning: The behavior of using lags=None will change in the next release. Currently lags=None is the same as lags='legacy', and so a sample-size lag length is used. After the next release, the default will change to be the same as lags='auto' which uses an automatic lag length selection method. To silence this warning, either use 'auto' or 'legacy'
warn(msg, FutureWarning)
c:\users\gui_s\appdata\local\programs\python\python37-32\lib\site-packages\statsmodels\tsa\stattools.py:1710: InterpolationWarning: p-value is greater than the indicated p-value
warn("p-value is greater than the indicated p-value", InterpolationWarning)
```python
# Using pandas Series, we can print the ADF results in a readable way.
print ('Results of ADF Test:')
dfoutput = pd.Series(adf_test[0:4], index=['Test Statistic','p-value','#Lags Used','Number of Observations Used'])
for key,value in adf_test[4].items():
dfoutput['Critical Value (%s)'%key] = value
print (dfoutput)
# Using pandas Series, we can print the KPSS results in a readable way.
print ('\nResults of KPSS Test:')
kpss_output = pd.Series(kpsstest[0:3], index=['Test Statistic','p-value','Lags Used'])
for key,value in kpsstest[3].items():
kpss_output['Critical Value (%s)'%key] = value
print (kpss_output)
```
Results of ADF Test:
Test Statistic -9.868093e+00
p-value 4.063043e-17
#Lags Used 2.600000e+01
Number of Observations Used 1.938000e+03
Critical Value (1%) -3.433729e+00
Critical Value (5%) -2.863033e+00
Critical Value (10%) -2.567565e+00
dtype: float64
Results of KPSS Test:
Test Statistic 0.014125
p-value 0.100000
Lags Used 26.000000
Critical Value (10%) 0.347000
Critical Value (5%) 0.463000
Critical Value (2.5%) 0.574000
Critical Value (1%) 0.739000
dtype: float64
Although the means remain pretty similar in the three chosen intervals, the variances do show some significant changes.
Let's analyze the results of the statistical tests:
- The Dickey-Fuller test returned a very small p-value. Since this value is much smaller than 0.05 (95% confidence interval) and 0.01 (99% confidence interval), we can reject the null hypothesis of non-stationarity. This test deems the signal as stationary.
- In the case of the KPSS test, the results have to be interpreted in the opposite way: a high p-value would point towards a stationary signal. In our case, however, the p-value is 0.1, which means that the signal may be non-stationary.
So we got opposite results in each test. What does this mean?
In reality, there is more than one type of non-stationarity. As a short introduction, there are three possible types of signals based on stationarity:
- <strong>Strict stationary</strong>: signals where the mean, variance and covariance do not vary with time. The aim is to convert non-stationary series into strict stationary series for making predictions.
- <strong>Trend stationary</strong>: a time series that can be made strict stationary by removing its trend. KPSS is a trend stationarity test.
- <strong>Difference stationary</strong>: a time series that can be made strict stationary by differencing. ADF is a difference stationarity test.
Therefore, the results of the KPSS and ADF tests are useful to test whether a signal is stationary, trend stationary or difference stationary:
- If both tests conclude that the series is not stationary: the series is not stationary.
- If both tests conclude that the series is stationary: the series is stationary.
- If KPSS = stationary and ADF = not stationary: the signal is trend stationary.
- If KPSS = not stationary and ADF = stationary: the signal is difference stationary.
So in this case our signal would fall under the difference stationarity definition. To understand more about signal stationarity, there are some interesting <a href="https://www.analyticsvidhya.com/blog/2018/09/non-stationary-time-series-python/">online resources </a> that go deeper into how to transform non-stationary signals and different ways to handle them.
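The decision table above can be condensed into a small helper. This is only a sketch: the 5% significance level is a conventional choice, and recall that the KPSS p-value reported above is truncated (see the warning), so borderline cases deserve a closer look.
```python
# Sketch of the ADF/KPSS decision table (alpha is a conventional 5% threshold)
def stationarity_type(adf_pvalue, kpss_pvalue, alpha=0.05):
    adf_stationary = adf_pvalue < alpha     # ADF: null hypothesis = non-stationary
    kpss_stationary = kpss_pvalue >= alpha  # KPSS: null hypothesis = stationary
    if adf_stationary and kpss_stationary:
        return 'stationary'
    if adf_stationary:
        return 'difference stationary'
    if kpss_stationary:
        return 'trend stationary'
    return 'non-stationary'
```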
<p class="steps">5.3 - Seasonality decomposition</p>
The seasonality decomposition of a signal allows dividing the main time series into different components, each of which has a specific characteristic. This decomposition can be done following two different models:
- <strong>additive</strong>, for linear distributions, where the time signal is decomposed following the formula: $Signal=trend+seasonality+residue$
- <strong>multiplicative</strong>, for exponential series, where the time signal is decomposed following the formula: $Signal=trend*seasonality*residue$
The resulting components are the following:
- <strong>Trend</strong>: increasing or decreasing tendency in the time series.
- <strong>Seasonality</strong>: the repeating short-term cycles in the series.
- <strong>Residue</strong>: random noise present in the signal that cannot be attributed to trend or seasonality
```python
# Let's decompose our signal into trend, seasonal and residual components using seasonal decomposition
signal = numpy.array(signal) - numpy.mean(signal)
filter_signal_1 = bsnb.lowpass(signal, f=40, order=1, fs=sr)
filter_signal_2 = bsnb.lowpass(filter_signal_1, f=40, order=3, fs=sr)
results = seasonal.seasonal_decompose(filter_signal_2, model='additive', freq=sr, extrapolate_trend='freq')
```
```python
decomposition_plot2 = figure(title='Seasonal decomposition - Observed and trend', plot_width=1200, plot_height=200)
decomposition_plot2.line(numpy.arange(0,len(results.observed)), results.observed, legend_label='Observed')
decomposition_plot2.line(numpy.arange(0,len(results.trend)), results.trend, line_color='red', legend_label='Trend')
show(decomposition_plot2)
decomposition_plot3 = figure(title='Seasonal decomposition - Seasonality and residuals', plot_width=1200, plot_height=200)
decomposition_plot3.line(numpy.arange(0,len(results.seasonal)), results.seasonal, line_color='orange', legend_label='Seasonality')
decomposition_plot3.line(numpy.arange(0,len(results.resid)), results.resid, line_color='green', legend_label='Residuals')
show(decomposition_plot3)
```
<div class="bk-root" id="cb5e2658-6773-4cd5-92a3-44fd45bc1e61" data-root-id="2671"></div>
<div class="bk-root" id="c42c09ba-c628-4772-b770-d325f9cfa0ba" data-root-id="2905"></div>
The decomposition in this case is unable to separate the cyclic element (the heartbeat) from the signal. That is why the most significant part of the signal ends up as residues. This is not a good result, and it is probably caused by the naive approach of the seasonal_decompose function in the statsmodels package.
There are more complex and robust approaches to this problem, like Loess or STL decomposition, which would however require a more in-depth explanation.
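As a pointer, the following sketch runs STL from statsmodels (available from version 0.11 onwards); the `period=sr` choice is only an illustrative guess for the length of one heartbeat cycle and would need tuning:
```python
# Hedged sketch: STL decomposition of the filtered signal (requires statsmodels >= 0.11)
from statsmodels.tsa.seasonal import STL

stl_result = STL(pd.Series(filter_signal_2), period=sr, robust=True).fit()
stl_result.plot();
```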
<span class="color6">**Auxiliary Code Segment (should not be replicated by the user)**</span>
```python
bsnb.css_style_apply()
```
.................... CSS Style Applied to Jupyter Notebook .........................
min-height: 100px !important;
}
.header_image_5::before {
content: url(../../images/icons/virtual_pixel.png) !important;
}
.header_image_color_6{
border-right: 2pt solid #DEEBB9 !important;
}
.header_image_6{
background: url("../../images/icons/Extract.png") !important;
background-repeat: no-repeat !important;
background-size: 60% !important;
background-position: center !important;
min-height: 100px !important;
}
.header_image_6::before {
content: url(../../images/icons/virtual_pixel.png) !important;
}
.header_image_color_7{
border-right: 2pt solid #E84D0E !important;
}
.header_image_7{
background: url("../../images/icons/Train_And_Classify.png") !important;
background-repeat: no-repeat !important;
background-size: 60% !important;
background-position: center !important;
min-height: 100px !important;
}
.header_image_7::before {
content: url(../../images/icons/virtual_pixel.png) !important;
}
.header_image_color_8{
border-right: 2pt solid #F18F69 !important;
}
.header_image_8{
background: url("../../images/icons/Understand.png") !important;
background-repeat: no-repeat !important;
background-size: 60% !important;
background-position: center !important;
min-height: 100px !important;
}
.header_image_8::before {
content: url(../../images/icons/virtual_pixel.png) !important;
}
.header_image_color_12{
border-right: 2pt solid #F8D0BE !important;
}
.header_image_12{
background: url("../../images/icons/Evaluate.png") !important;
background-repeat: no-repeat !important;
background-size: 60% !important;
background-position: center !important;
min-height: 100px !important;
}
.header_image_12::before {
content: url(../../images/icons/virtual_pixel.png) !important;
}
.header_image_color_13{
border-right: 2pt solid #FDC400 !important;
}
.header_image_13{
background: url("../../images/icons/Install.png") !important;
background-repeat: no-repeat !important;
background-size: 60% !important;
background-position: center !important;
min-height: 100px !important;
}
.header_image_13::before {
content: url(../../images/icons/virtual_pixel.png) !important;
}
.header_image_color_14{
border-right: 2pt solid #FDDC69 !important;
}
.header_image_14{
background: url("../../images/icons/Connect.png") !important;
background-repeat: no-repeat !important;
background-size: 60% !important;
background-position: center !important;
min-height: 100px !important;
}
.header_image_14::before {
content: url(../../images/icons/virtual_pixel.png) !important;
}
.header_image_color_15{
border-right: 2pt solid #BFBFBF !important;
}
.header_image_15{
background: url("../../images/icons/Other.png") !important;
background-repeat: no-repeat !important;
background-size: 60% !important;
background-position: center !important;
min-height: 100px !important;
}
.header_image_15::before {
content: url(../../images/icons/virtual_pixel.png) !important;
}
.header_gradient {
background-image: linear-gradient(to right, rgba(0, 158, 227, 0.6) , white) !important;
color: white !important;
}
.border_gradient {
background-image:linear-gradient(to right, rgba(0, 158, 227, 0.6) , white) !important;
background-size:100% 10% !important;
background-position:0 100% !important;
background-repeat:no-repeat !important;
}
.border_pre_gradient {
background-image:linear-gradient(to right, rgba(0, 158, 227, 0.6), rgba(0, 158, 227, 0.6)) !important;
background-size:100% 10% !important;
background-position:0 100% !important;
background-repeat:no-repeat !important;
}
.signal_samples_header {
text-align: center !important;
font-size: 14px !important;
font-weight: bold !important;
border-bottom: solid 3px #58585A !important;
}
.signal_samples_info_keys {
text-align: center !important;
background-color: #F1F1F1 !important;
border-bottom: solid 1px white !important;
}
.signal_samples_info_values {
text-align: center !important;
border-bottom: solid 1px #F1F1F1 !important;
}
.group_by_header {
text-align:left !important;
border-bottom:solid 3px #FDC400 !important;
}
.group_by_header_grey {
border-bottom:solid 3px #58585A !important;
}
.video {
margin-left:auto !important;
margin-right:auto !important;
display:block !important;
width:80% !important;
box-shadow: 5px 10px #58585A !important;
}
hr{
border-top: 2px solid #58585A !important;
margin-top: 8px !important;
margin-bottom: 8px !important;
}
.checked {
color: #FDC400 !important;
}
#flex-container {
display: flex !important;
flex-direction: row !important;
width: 100% !important;
margin: 0 auto !important;
}
.flex-item-tag {
text-align: left !important;
margin-left: 50px !important;
}
/* Shield Icons Segment */
.shield_left{
background-color: #58585A !important;
color: white !important;
}
.shield_right{
background-color: #009EE3 !important;
color: white !important;
/*opacity: 0.4 !important;*/
text-align: left !important;
white-space: nowwrap !important;
}
#diff_level {
padding-top: 3px !important;
}
/* Step Paragraphs Segment */
.steps{
font-weight: bold !important;
}
.substeps{
font-weight: bold !important;
text-indent: 10px !important;
}
/* Notebook Icon Class */
.file_icon{
background: url("../../images/icons/Notebook.png") !important;
background-repeat: no-repeat !important;
background-size: 10% !important;
background-position: center !important;
min-height: 20px !important;
}
.file_icon::before {
content: url("../../images/icons/virtual_pixel_small.png") !important;
}
.not_active {
pointer-events: none !important;
cursor: default !important;
text-decoration: none !important;
color: black !important;
}
.not_active_img {
opacity: 0.30 !important;
}
.header_buttons{
text-align: left !important;
width: 10% !important;
}
.header_icons{
text-align:left !important;
width: 5% !important;
}
.header_logo{
border-left: solid 2pt #009EE3 !important;
width: 15% !important;
}
.footer_logo{
border-right: solid 3px #009EE3 !important;
width: 20% !important;
}
.header_main_files{
width: 20% !important;
}
<style>
</div>
```python
```
|
f2f5a57d09aaf652ef3d5ac653e68dd8a41fcc15
| 902,942 |
ipynb
|
Jupyter Notebook
|
biosignalsnotebooks_notebooks/unpublished_notebooks/Pre-Process/temporal_statistical_parameters.ipynb
|
csavur/biosignalsnotebooks
|
c99596741a854c58bdefb429906023ac48ddc3b7
|
[
"MIT"
] | 1 |
2020-06-26T05:05:11.000Z
|
2020-06-26T05:05:11.000Z
|
biosignalsnotebooks_notebooks/unpublished_notebooks/Pre-Process/temporal_statistical_parameters.ipynb
|
csavur/biosignalsnotebooks
|
c99596741a854c58bdefb429906023ac48ddc3b7
|
[
"MIT"
] | null | null | null |
biosignalsnotebooks_notebooks/unpublished_notebooks/Pre-Process/temporal_statistical_parameters.ipynb
|
csavur/biosignalsnotebooks
|
c99596741a854c58bdefb429906023ac48ddc3b7
|
[
"MIT"
] | null | null | null | 279.462086 | 136,349 | 0.861279 | true | 12,626 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.795658 | 0.810479 | 0.644864 |
__label__eng_Latn
| 0.837905 | 0.336566 |
# Five-Link biped walking loop problem: interactive demonstration
Hello and welcome. This is a Jupyter Notebook, a kind of document that can alternate between static content, like text and images, and executable cells of code.
This document illustrates the five-link biped walking loop test case of the paper: "Collocation Methods for Second Order Systems", submitted to RSS 2022.
In order to run the cells of code, you can select a cell and click on the small "play" button in the bar above or press Shift+Enter. Alternatively, you can select "Run" -> "Run All Cells" to run all the code in order. Beware that some cells can take several minutes!
All of the code used in this example is open-source and free to use.
[SymPy](https://www.sympy.org/en/index.html) is used for Symbolic formulation and manipulation of the problem.
[Numpy](https://numpy.org/) is used for numerical arrays and operations.
[CasADI](https://web.casadi.org/) is used for optimization.
[Optibot](https://github.com/AunSiro/optibot) is the name of the package where we are compiling our code. We aim to produce a toolbox for Optimal Control Problems, focused on robotics, including a high level, readable and clean interface between the prior three packages.
## Package imports
```python
import numpy as np
import matplotlib.pyplot as plt
```
```python
from sympy import (symbols, simplify)
from sympy.physics.mechanics import dynamicsymbols, init_vprinting
from sympy.physics.mechanics import Lagrangian, ReferenceFrame, Point, Particle,inertia, RigidBody, angular_momentum
```
```python
from optibot.symbolic import lagrange, diff_to_symb, SimpLagrangesMethod
from optibot.numpy import unpack
```
```python
# SymPy vector-like LaTeX rendering initialization:
init_vprinting()
```
## Symbolic Problem Modelling
The first step is to model our problem taking advantage of the high level object syntax of the mechanics module in SymPy
```python
# Creating symbols and dynamic symbols
m0, m1, m2, m3, m4, l0, l1, l2, l3, l4, t, g = symbols('m_0:5 l_0:5 t g')
I0, I1, I2, I3, I4, d0, d1, d2, d3, d4 = symbols('I_0:5 d_0:5')
q0, q1, q2, q3, q4 = dynamicsymbols('q_0:5')
m0, m1, m2, m3, m4, l0, l1, l2, l3, l4, t, g, I0, I1, I2, I3, I4, d0, d1, d2, d3, d4, q0, q1, q2, q3, q4
```
```python
# Definition of the physics system
N_in = ReferenceFrame('N')
P0 = Point('P0')
P0.set_vel(N_in, 0)
N0 = N_in.orientnew('N0', 'Axis', [q0, N_in.z])
P1 = P0.locatenew('P1', l0 * N0.y)
P1.set_vel(N_in, P1.pos_from(P0).dt(N_in))
CM0 = P0.locatenew('CM0', (l0-d0) * N0.y)
CM0.set_vel(N_in, CM0.pos_from(P0).dt(N_in))
I_0 = inertia(N0, 0, 0, I0)
body0 = RigidBody('Stance_Tibia', CM0, N0, m0, (I_0,CM0))
body0.potential_energy = m0 * g * CM0.pos_from(P0).dot(N_in.y)
N1 = N_in.orientnew('N1', 'Axis', [q1, N_in.z])
P2 = P1.locatenew('P2', l1 * N1.y)
P2.set_vel(N_in, P2.pos_from(P0).dt(N_in))
CM1 = P1.locatenew('CM1', (l1-d1) * N1.y)
CM1.set_vel(N_in, CM1.pos_from(P0).dt(N_in))
I_1 = inertia(N1, 0, 0, I1)
body1 = RigidBody('Stance_Femur', CM1, N1, m1, (I_1,CM1))
body1.potential_energy = m1 * g * CM1.pos_from(P0).dot(N_in.y)
N2 = N_in.orientnew('N2', 'Axis', [q2, N_in.z])
P3 = P2.locatenew('P3', l2 * N2.y)
P3.set_vel(N_in, P3.pos_from(P0).dt(N_in))
CM2 = P2.locatenew('CM2', d2 * N2.y)
CM2.set_vel(N_in, CM2.pos_from(P0).dt(N_in))
I_2 = inertia(N2, 0, 0, I2)
body2 = RigidBody('Torso', CM2, N2, m2, (I_2,CM2))
body2.potential_energy = m2 * g * CM2.pos_from(P0).dot(N_in.y)
N3 = N_in.orientnew('N3', 'Axis', [q3, N_in.z])
P4 = P2.locatenew('P4', -l3 * N3.y)
P4.set_vel(N_in, P4.pos_from(P0).dt(N_in))
CM3 = P2.locatenew('CM3', -d3 * N3.y)
CM3.set_vel(N_in, CM3.pos_from(P0).dt(N_in))
I_3 = inertia(N3, 0, 0, I3)
body3 = RigidBody('Swing_Femur', CM3, N3, m3, (I_3,CM3))
body3.potential_energy = m3 * g * CM3.pos_from(P0).dot(N_in.y)
N4 = N_in.orientnew('N4', 'Axis', [q4, N_in.z])
P5 = P4.locatenew('P5', -l4 * N4.y)
P5.set_vel(N_in, P5.pos_from(P0).dt(N_in))
CM4 = P4.locatenew('CM4', -d4 * N4.y)
CM4.set_vel(N_in, CM4.pos_from(P0).dt(N_in))
I_4 = inertia(N4, 0, 0, I4)
body4 = RigidBody('Swing_Tibia', CM4, N4, m4, (I_4,CM4))
body4.potential_energy = m4 * g * CM4.pos_from(P0).dot(N_in.y)
```
```python
#Computing the Lagrangian
Lag_simp = Lagrangian(N_in, body0, body1, body2, body3, body4)
Lag_simp
```
```python
from optibot.symbolic import ImplicitLagrangesMethod
```
```python
# Defining the control forces and external actions, and applying them to our system
u0, u1, u2, u3, u4 = symbols('u_:5')
FL = [
(N0, (u0-u1) * N_in.z),
(N1, (u1-u2) * N_in.z),
(N2, (u2-u3) * N_in.z),
(N3, (u3-u4) * N_in.z),
(N4, u4 * N_in.z)
]
LM_small = ImplicitLagrangesMethod(Lag_simp, [q0, q1, q2, q3, q4], forcelist=FL, frame=N_in)
```
```python
# Generating the dynamic equations
LM_small.form_lagranges_equations()
```
```python
impl_x = LM_small.implicit_dynamics_x
```
```python
impl_q = LM_small.implicit_dynamics_q
```
### Generating auxiliar functions
Later in the problem we will need some auxiliary expressions derived from the model. Here we generate them as symbolic expressions and then convert them to numerical functions.
```python
import casadi as cas
from sympy import lambdify
from optibot.casadi import implicit_dynamic_q_to_casadi_function, implicit_dynamic_x_to_casadi_function, sympy2casadi
from optibot.symbolic import find_arguments, diff_to_symb_expr
from sympy.physics.mechanics import kinetic_energy, potential_energy
```
```python
imp_dyn_x_f_cas = implicit_dynamic_x_to_casadi_function(impl_x, list(dynamicsymbols('x_0:10')), verbose=True)
```
```python
imp_dyn_q_f_cas = implicit_dynamic_q_to_casadi_function(impl_q, list(LM_small.q), verbose=True)
```
```python
imp_dyn_x_f_cas
```
```python
feet_x = P5.pos_from(P0).dot(N_in.x)
feet_x = diff_to_symb_expr(feet_x)
feet_x
```
```python
feet_y = P5.pos_from(P0).dot(N_in.y)
feet_y = diff_to_symb_expr(feet_y)
feet_y
```
```python
feet_y_vel = P5.vel(N_in).dot(N_in.y) #pos_from(P0).dot(N_in.y)
feet_y_vel = diff_to_symb_expr(feet_y_vel)
feet_y_vel
```
```python
cm_pos = m0*CM0.pos_from(P0)
cm_pos += m1*CM1.pos_from(P0)
cm_pos += m2*CM2.pos_from(P0)
cm_pos += m3*CM3.pos_from(P0)
cm_pos += m4*CM4.pos_from(P0)
cm_pos = cm_pos/(m0+m1+m2+m3+m4)
sys_CM = P0.locatenew('Sys_CM', cm_pos)
sys_CM_x = simplify(sys_CM.pos_from(P0).dot(N_in.x))
sys_CM_y = simplify(sys_CM.pos_from(P0).dot(N_in.y))
```
```python
sym_x = dynamicsymbols('q_0:5')
sym_x = sym_x + [ii.diff() for ii in sym_x]
sym_x = [diff_to_symb(ii) for ii in sym_x]
sym_params = list(symbols('I_0:5 d_0:5 g l_0:2 l_3 m_0:5'))
sym_add_params = [symbols('l_4'),]
sym_vars = sym_x + sym_params + sym_add_params
print(len(sym_vars), sym_vars)
```
```python
cas_x_args = cas.MX.sym("x", len(sym_x))
cas_params = cas.MX.sym("p", len(sym_params))
cas_add_params = cas.MX.sym("p_add", len(sym_add_params))
cas_all_vars = [cas_x_args[ii] for ii in range(len(sym_x))]
cas_all_vars += [cas_params[ii] for ii in range(len(sym_params))]
cas_all_vars += [cas_add_params[ii] for ii in range(len(sym_add_params))]
print(len(cas_all_vars), cas_all_vars)
```
```python
_cas_expr_temp_x = sympy2casadi(feet_x, sym_vars, cas_all_vars)
feet_x_cas = cas.Function(
"Feet_x",
[cas_x_args, cas_params, cas_add_params],
[_cas_expr_temp_x,],
["x", "params", "additional_params"],
["feet_x_position"],
)
```
```python
_cas_expr_temp_y = sympy2casadi(feet_y, sym_vars, cas_all_vars)
feet_y_cas = cas.Function(
"Feet_y",
[cas_x_args, cas_params, cas_add_params],
[_cas_expr_temp_y,],
["x", "params", "additional_params"],
["feet_y_position"],
)
```
```python
_cas_expr_temp_y_vel = sympy2casadi(feet_y_vel, sym_vars, cas_all_vars)
feet_y_vel_cas = cas.Function(
"Feet_y_vel",
[cas_x_args, cas_params, cas_add_params],
[_cas_expr_temp_y_vel,],
["x", "params", "additional_params"],
["feet_y_speed"],
)
```
```python
def simetric_cond_casadi(n = 5):
x1 = cas.MX.sym('x_1', 2*n)
x2 = cas.MX.sym('x_2', 2*n)
cond = [x1[ii] - x2[n-1-ii] for ii in range(n)]
cas_funcs = cas.horzcat(*cond)
return cas.Function(
"Sim_cond",
[x1, x2],
[cas_funcs,],
["x_1", "x2"],
["residue"],
)
```
```python
simetric_5_links = simetric_cond_casadi(5)
```
Creating and symbolically simplifying the expressions for the heel impact may require some time, but it allows for a faster problem formulation later on.
```python
bodies = [body0, body1, body2, body3, body4]
points_right = [P0, P1, P2, P2, P4]
points_left = [P5, P4, P2, P2, P1]
subs_key = list(zip(dynamicsymbols('q_0:5'),dynamicsymbols('q_p_0:5')))
impact_eqs = []
for ii in range(5):
print('calculating eq', ii)
print('\tleft side')
left_side = angular_momentum(points_left[ii], N_in, *bodies[:5-ii]).dot(N_in.z)
left_side = simplify(left_side)
print('\tright side')
right_side = angular_momentum(points_right[ii], N_in, *bodies[ii:]).dot(N_in.z)
right_side = simplify(right_side).subs(subs_key)
impact_eqs.append(left_side-right_side)
#impact_eqs
```
```python
def impact_cond_casadi(eqs, x1_sym, x2_sym, sym_params, sym_add_params):
x1_sym = [diff_to_symb(ii) for ii in x1_sym]
x2_sym = [diff_to_symb(ii) for ii in x2_sym]
eqs = [diff_to_symb_expr(ii) for ii in eqs]
all_vars = x1_sym + x2_sym + sym_params + sym_add_params
n = len(x1_sym)
cas_x1 = cas.MX.sym('x_1', n)
cas_x2 = cas.MX.sym('x_2', n)
cas_params = cas.MX.sym("p", len(sym_params))
cas_add_params = cas.MX.sym("p_add", len(sym_add_params))
cas_all_vars = [cas_x1[ii] for ii in range(n)]
cas_all_vars += [cas_x2[ii] for ii in range(n)]
cas_all_vars += [cas_params[ii] for ii in range(len(sym_params))]
cas_all_vars += [cas_add_params[ii] for ii in range(len(sym_add_params))]
cas_funcs = []
for function in eqs:
cas_funcs.append(sympy2casadi(function, all_vars, cas_all_vars))
cas_funcs = cas.horzcat(*cas_funcs)
return cas.Function(
"Sim_cond",
[cas_x1, cas_x2, cas_params, cas_add_params],
[cas_funcs,],
["x_1", "x2", 'params', 'additional_params'],
["residue"],
)
```
```python
sym_x = dynamicsymbols('q_0:5')
sym_x = sym_x + [ii.diff() for ii in sym_x]
subs_key = list(zip(dynamicsymbols('q_0:5'),dynamicsymbols('q_p_0:5')))
sym_x_2 = [ii.subs(subs_key) for ii in sym_x]
impact_cond_cas_f = impact_cond_casadi(impact_eqs, sym_x, sym_x_2, sym_params, sym_add_params)
```
```python
sys_cm_np = lambdify([sym_x, sym_params], [sys_CM_x, sys_CM_y],'numpy')
```
```python
ang_mom_p0 = angular_momentum(P0, N_in, *bodies).dot(N_in.z)
ang_mom_p0_np = lambdify([sym_x, sym_params], ang_mom_p0,'numpy')
```
```python
ang_mom_p5 = angular_momentum(P5, N_in, *bodies).dot(N_in.z)
ang_mom_p5_np = lambdify([sym_x, sym_params, sym_add_params], ang_mom_p5,'numpy')
```
```python
P5_static = P5.locatenew('P5_static', 0 * N_in.y)
P5_static.set_vel(N_in, 0 * N_in.y)
```
```python
ang_mom_p5_static = angular_momentum(P5_static, N_in, *bodies).dot(N_in.z)
ang_mom_p5_static_np = lambdify([sym_x, sym_params, sym_add_params], ang_mom_p5_static,'numpy')
```
```python
angular_momentum(P0, N_in, bodies[0]).dot(N_in.z)
```
```python
system_energy = potential_energy(*bodies) + kinetic_energy(N_in, *bodies)
```
```python
system_energy_np = lambdify([sym_x, sym_params], system_energy,'numpy')
```
```python
mass_matrix_np = lambdify([sym_x, sym_params], LM_small.mass_matrix,'numpy')
```
```python
sym_u = symbols('u_:5')
F_impl_np = lambdify([sym_x, sym_u, sym_params], LM_small.forcing,'numpy')
```
### Scheme definitions
Each scheme is defined here as a function that must be equal to zero at each interval.
Note that functions that contain "mod" in the name are those we define as "second order",
and use separate conditions for q and v.
Note that we will operate this problem without combining the scheme equations $F(q_k, q_{k+1}, q'_k, q'_{k+1}, q''_k, q''_{k+1}, SchemeParams) = 0$ and the dynamics equations $H(q, q', q'', u, params) = 0$ imposed at the collocation points. This approach allows us to solve the problem without inverting the mass matrix.
If you wish to define your own schemes, do it here.
Be careful to respect the function structure:
restriction(x, x_n, a, a_n, dt, scheme_params) = 0
```python
from optibot.schemes import index_div
from copy import copy
def euler_accel_restr(x, x_n, a, a_n, dt, scheme_params):
first_ind, last_ind = index_div(x)
x_d = copy(x)
x_d[first_ind] = x[last_ind]
x_d[last_ind] = a
return x_n - (x + dt * x_d)
def trapz_accel_restr(x, x_n, a, a_n, dt, scheme_params):
first_ind, last_ind = index_div(x)
x_d = copy(x)
x_d[first_ind] = x[last_ind]
x_d[last_ind] = a
x_d_n = copy(x)
x_d_n[first_ind] = x_n[last_ind]
x_d_n[last_ind] = a_n
return x_n - (x + dt / 2 * (x_d + x_d_n))
def trapz_mod_accel_restr(x, x_n, a, a_n, dt, scheme_params):
res = copy(x)
first_ind, last_ind = index_div(x)
res[last_ind] = x[last_ind] + dt / 2 * (a + a_n)
res[first_ind] = x[first_ind] + dt * x[last_ind] + dt ** 2 / 6 * (a_n + 2 * a)
return x_n - res
def hs_half_x(x, x_n, x_d, x_d_n, dt):
x_c = (x + x_n) / 2 + dt / 8 * (x_d - x_d_n)
return x_c
def hs_accel_restr(x, x_n, a, a_n, dt, scheme_params):
a_c = scheme_params
first_ind, last_ind = index_div(x)
x_d = copy(x)
x_d[first_ind] = x[last_ind]
x_d[last_ind] = a
x_d_n = copy(x)
x_d_n[first_ind] = x_n[last_ind]
x_d_n[last_ind] = a_n
x_c = hs_half_x(x, x_n, x_d, x_d_n, dt)
x_d_c = copy(x)
x_d_c[first_ind] = x_c[last_ind]
x_d_c[last_ind] = a_c
return x + dt / 6 * (x_d + 4 * x_d_c + x_d_n) - x_n
def hs_mod_half_x(x, x_n, a, a_n, dt):
x_c = copy(x)
first_ind, last_ind = index_div(x)
q = x[first_ind]
v = x[last_ind]
q_n = x_n[first_ind]
v_n = x_n[last_ind]
q_c = q + dt / 32 * (13 * v + 3 * v_n) + dt**2 / 192 * (11 * a - 5 * a_n)
v_c = (v + v_n) / 2 + dt / 8 * (a - a_n)
x_c[first_ind] = q_c
x_c[last_ind] = v_c
return x_c
def hs_mod_accel_restr(x, x_n, a, a_n, dt, scheme_params):
a_c = scheme_params.T
res = copy(x)
first_ind, last_ind = index_div(x)
q = x[first_ind]
v = x[last_ind]
res[last_ind] = v + dt / 6 * (a + 4 * a_c + a_n)
res[first_ind] = q + dt * v + dt ** 2 / 6 * (a + 2 * a_c)
return x_n - res
```
### Casadi optimization
We have generated the system equations symbolically. Now we translate them to CasADi objects in order to perform the optimization.
```python
from optibot.casadi import accelrestriction2casadi
#from optibot.schemes import (euler_accel_restr, trapz_accel_restr, trapz_mod_accel_restr,
# hs_mod_accel_restr, hs_accel_restr, hs_half_x)
```
```python
# Numerical values of the parameters
I_0_n, I_1_n, I_2_n, I_3_n, I_4_n = 0.93, 1.08, 2.22, 1.08, 0.93
d_0_n, d_1_n, d_2_n, d_3_n, d_4_n = 0.128, 0.163, 0.2, 0.163, 0.128
g_n = 9.81
l_0_n, l_1_n, l_2_n, l_3_n, l_4_n = 0.4, 0.4, 0.625, 0.4, 0.4
m_0_n, m_1_n, m_2_n, m_3_n, m_4_n = 3.2, 6.8, 20, 6.8, 3.2
params = [
I_0_n, I_1_n, I_2_n, I_3_n, I_4_n,
d_0_n, d_1_n, d_2_n, d_3_n, d_4_n,
g_n,
l_0_n, l_1_n, l_3_n,
m_0_n, m_1_n, m_2_n, m_3_n, m_4_n
]
additional_params = [l_4_n,]
```
```python
opti = cas.Opti()
p_opts = {}#{"expand":True,'ipopt.print_level':0, 'print_time':0}
s_opts = {}#{"max_iter": 10000, 'tol': 1e-26}#, 'linear_solver' : "MA27"}
opti.solver("ipopt",p_opts,
s_opts)
```
```python
N = 25
X = opti.variable(N+1,10)
X_dot = opti.variable(N+1,10)
U = opti.variable(N+1,5)
U_c = opti.variable(N,5)
X_c = opti.variable(N,10)
X_dot_c = opti.variable(N,10)
```
```python
T = opti.parameter()
u_m = opti.parameter()
Params_opti = opti.parameter(len(params))
Add_params_opti = opti.parameter(len(additional_params))
D = opti.parameter()
```
```python
# Definition of the cost function
#cost = cas.sum2((cas.sum1(U[:,:]**2)+cas.sum1(U[1:-1,:]**2))/N)
cost = cas.sum2((4*cas.sum1(U_c[:,:]**2) + cas.sum1(U[:,:]**2)+cas.sum1(U[1:-1,:]**2))/(3*N))
#cost = cas.sum2(cas.sum1(U**2))
opti.minimize(cost)
```
```python
#Periodic gait constraint:
opti.subject_to(simetric_5_links(X[0,:], X[-1,:]) == 0)
opti.subject_to(impact_cond_cas_f(X[-1,:], X[0,:], Params_opti, Add_params_opti) == 0)
```
```python
#Step size constraint:
opti.subject_to(feet_x_cas(X[-1,:], Params_opti, Add_params_opti) == D)
opti.subject_to(feet_y_cas(X[-1,:], Params_opti, Add_params_opti) == 0)
```
```python
#Small Feet Conditions:
opti.subject_to(U[:,0] == 0)
opti.subject_to(U_c[:,0] == 0)
opti.subject_to(feet_y_vel_cas(X[0,:], Params_opti, Add_params_opti)>0)
opti.subject_to(feet_y_vel_cas(X[-1,:], Params_opti, Add_params_opti)<0)
```
```python
#Feet over ground Restrictions:
for ii in range(1,N):
opti.subject_to(feet_y_cas(X[ii,:], Params_opti, Add_params_opti) > 0)
```
```python
#Dynamics Constraints:
for ii in range(N+1):
opti.subject_to(imp_dyn_x_f_cas(X[ii,:], X_dot[ii,:], U[ii,:], [], Params_opti) == 0)
for ii in range(N):
opti.subject_to(X_c[ii,:] == hs_mod_half_x(X[ii,:], X[ii+1,:], X_dot[ii,5:], X_dot[ii+1,5:], T/N))
opti.subject_to(imp_dyn_x_f_cas(X_c[ii,:], X_dot_c[ii,:], U_c[ii,:], [], Params_opti) == 0)
```
```python
#Scheme Constraints
#cas_accel_restr = accelrestriction2casadi(trapz_mod_accel_restr, 5)
cas_accel_restr = accelrestriction2casadi(hs_mod_accel_restr, 5, 5)
for ii in range(N):
opti.subject_to(cas_accel_restr(X[ii,:], X[ii+1,:], X_dot[ii, 5:], X_dot[ii+1, 5:],T/N, X_dot_c[ii,5:]) == 0)
```
```python
opti.set_value(T, 0.7)#0.7
opti.set_value(D, 0.5)
```
```python
opti.set_value(Params_opti, params)
opti.set_value(Add_params_opti, additional_params)
```
```python
q_0_guess = np.array([-0.3, 0.7, 0, -0.5, -0.6])
q_1_guess = q_0_guess[::-1]
s_arr = np.linspace(0, 1, N+1)
q_guess = np.expand_dims(q_0_guess,0)+ np.expand_dims(s_arr,1)*np.expand_dims((q_1_guess - q_0_guess),0)
q_dot_guess = (q_1_guess - q_0_guess) * np.ones([N+1,1])/opti.value(T)
```
```python
opti.set_initial(X[:,:5], q_guess)
opti.set_initial(X[:,5:], q_dot_guess)
opti.set_initial(X_c[:,:5], (q_guess[:-1,:]+q_guess[1:,:])/2)
opti.set_initial(X_c[:,5:], q_dot_guess[:-1,:])
opti.set_initial(X_dot[:,:5], q_dot_guess)
opti.set_initial(X_dot[:,5:], 0)
opti.set_initial(X_dot_c[:,:5], q_dot_guess[:-1,:])
opti.set_initial(X_dot_c[:,5:], 0)
opti.set_initial(U, 0)
opti.set_initial(U_c, 0)
```
```python
sol = opti.solve()
```
```python
U_sol = sol.value(U)
U_c_sol = sol.value(U_c)
X_sol = sol.value(X)
X_c_sol = sol.value(X_c)
X_dot_sol = sol.value(X_dot)
X_dot_c_sol = sol.value(X_dot_c)
T_sol = sol.value(T)
T_sol_arr = np.linspace(0, T_sol, N+1)
T_c_arr = (T_sol_arr[:-1]+T_sol_arr[1:])/2
```
```python
plt.figure(figsize=[14,10])
labels = ['stance ankle', 'stance knee', 'stance hip', 'swing hip', 'swing knee']
for ii in range(5):
plt.plot(T_sol_arr,U_sol[:,ii], marker = 'o', label = labels[ii] + ' u_k')
plt.plot(T_c_arr,U_c_sol[:,ii], 'o', label = labels[ii] + ' u_c')
plt.grid()
plt.legend()
plt.title('u(t)')
```
```python
plt.figure(figsize=[14,10])
labels= ['stance tibia', 'stance femur', 'torso', 'swing femur', 'swing tibia']
for ii in range(5):
plt.plot(T_sol_arr, X_sol[:,ii], marker = 'o', label = labels[ii] + ' q_k')
plt.plot(T_c_arr,X_c_sol[:,ii], 'o', label = labels[ii] + ' q_c')
plt.grid()
plt.legend()
plt.title('q(t)')
```
```python
def chain_to_draw(x,params):
[
I_0_n, I_1_n, I_2_n, I_3_n, I_4_n,
d_0_n, d_1_n, d_2_n, d_3_n, d_4_n,
g_n,
l_0_n, l_1_n, l_3_n,
m_0_n, m_1_n, m_2_n, m_3_n, m_4_n
] = params
points_x = [0, ]
points_y = [0, ]
points_x.append(points_x[-1] - l_0_n*np.sin(x[0]))
points_x.append(points_x[-1] - l_1_n*np.sin(x[1]))
points_x.append(points_x[-1] - l_2_n*np.sin(x[2]))
points_x.append(points_x[-2])
points_x.append(points_x[-1] + l_3_n*np.sin(x[3]))
points_x.append(points_x[-1] + l_4_n*np.sin(x[4]))
points_y.append(points_y[-1] + l_0_n*np.cos(x[0]))
points_y.append(points_y[-1] + l_1_n*np.cos(x[1]))
points_y.append(points_y[-1] + l_2_n*np.cos(x[2]))
points_y.append(points_y[-2])
points_y.append(points_y[-1] - l_3_n*np.cos(x[3]))
points_y.append(points_y[-1] - l_4_n*np.cos(x[4]))
return points_x, points_y
```
```python
points_x, points_y = chain_to_draw(X_sol[0], params)
plt.figure(figsize=[15,15])
plt.grid()
for ii in range(0, N, 1):
points_x, points_y = chain_to_draw(X_sol[ii], params)
plt.plot(points_x, points_y, lw=1, color = plt.cm.viridis(ii/N))
plt.gca().set_aspect('equal')
```
```python
total_mass = m_0_n + m_1_n + m_2_n + m_3_n + m_4_n
ang_mom_arr = [ang_mom_p0_np(X_sol[ii,:],params) for ii in range(N+1)]
ang_mom_swing_foot_arr = [ang_mom_p5_np(X_sol[ii,:],params, additional_params) for ii in range(N+1)]
ang_mom_swing_foot_static_arr = [ang_mom_p5_static_np(X_sol[ii,:],params, additional_params) for ii in range(N+1)]
cm_torque_arr = [total_mass * -g_n * sys_cm_np(X_sol[ii,:], params)[0] for ii in range(N+1)]
ang_mom_arr_deriv = np.gradient(ang_mom_arr, T_sol_arr)
```
```python
from optibot.schemes import interpolated_array, interpolated_array_derivative
from optibot.analysis import dynamic_error_implicit
```
## Systematic comparison of schemes for different values of N
Now let's solve the problem with different methods.
### Caution!
Executing the next cell may require some time!
```python
def q_init(N):
q_0_guess = np.array([-0.3, 0.7, 0, -0.5, -0.6])
q_1_guess = q_0_guess[::-1]
s_arr = np.linspace(0, 1, N+1)
q_guess = np.expand_dims(q_0_guess,0)+ np.expand_dims(s_arr,1)*np.expand_dims((q_1_guess - q_0_guess),0)
q_dot_guess = (q_1_guess - q_0_guess) * np.ones([N+1,1])/opti.value(T)
return q_guess, q_dot_guess
```
```python
import time
def chrono_solve(opti, solve_repetitions):
cput0 = time.time()
for ii in range(solve_repetitions):
sol = opti.solve()
cput1 = time.time()
cpudt = (cput1-cput0)/solve_repetitions
return sol, cpudt
```
```python
def casadi_biped(N = 25, scheme = "trapz", solve_repetitions = 1, t_end = 0.7, step_length = 0.5):
opti = cas.Opti()
p_opts = {"expand":True,'ipopt.print_level':0, 'print_time':0}
s_opts = {"max_iter": 10000, 'tol': 1e-26}#, 'linear_solver' : "MA27"}
opti.solver("ipopt",p_opts,
s_opts)
restr_schemes = {
'trapz': trapz_accel_restr,
'trapz_mod' : trapz_mod_accel_restr,
'hs': hs_accel_restr,
'hs_mod': hs_mod_accel_restr,
'hs_parab': hs_accel_restr,
'hs_mod_parab': hs_mod_accel_restr
}
f_restr = restr_schemes[scheme]
X = opti.variable(N+1,10)
X_dot = opti.variable(N+1,10)
U = opti.variable(N+1,5)
if 'hs' in scheme:
U_c = opti.variable(N,5)
X_c = opti.variable(N,10)
X_dot_c = opti.variable(N,10)
T = opti.parameter()
u_m = opti.parameter()
Params_opti = opti.parameter(len(params))
Add_params_opti = opti.parameter(len(additional_params))
D = opti.parameter()
# Cost
if 'parab' in scheme:
cost = cas.sum2((4*cas.sum1(U_c[:,:]**2) + cas.sum1(U[:,:]**2)+cas.sum1(U[1:-1,:]**2))/(3*N))
else:
cost = cas.sum2((cas.sum1(U[:,:]**2)+cas.sum1(U[1:-1,:]**2))/N)
#cost = cas.sum2(cas.sum1(U**2))
opti.minimize(cost)
#Periodic gait constraint:
opti.subject_to(simetric_5_links(X[0,:], X[-1,:]) == 0)
opti.subject_to(impact_cond_cas_f(X[-1,:], X[0,:], Params_opti, Add_params_opti) == 0)
#Step size constraint:
opti.subject_to(feet_x_cas(X[-1,:], Params_opti, Add_params_opti) == D)
opti.subject_to(feet_y_cas(X[-1,:], Params_opti, Add_params_opti) == 0)
#Small Feet Conditions:
opti.subject_to(U[:,0] == 0)
opti.subject_to(feet_y_vel_cas(X[0,:], Params_opti, Add_params_opti)>0)
opti.subject_to(feet_y_vel_cas(X[-1,:], Params_opti, Add_params_opti)<0)
if 'hs' in scheme:
opti.subject_to(U_c[:,0] == 0)
#Feet over ground Restrictions:
for ii in range(1,N):
opti.subject_to(feet_y_cas(X[ii,:], Params_opti, Add_params_opti) > 0)
#Dynamics Constraints:
for ii in range(N+1):
opti.subject_to(imp_dyn_x_f_cas(X[ii,:], X_dot[ii,:], U[ii,:], [], Params_opti) == 0)
if 'hs' in scheme:
for ii in range(N):
opti.subject_to(X_c[ii,:] == hs_half_x(X[ii,:], X[ii+1,:], X_dot[ii,:], X_dot[ii+1,:], T/N))
opti.subject_to(imp_dyn_x_f_cas(X_c[ii,:], X_dot_c[ii,:], U_c[ii,:], [], Params_opti) == 0)
if 'parab' not in scheme:
for ii in range(N):
opti.subject_to(U_c[ii,:] == (U[ii,:]+U[ii+1,:])/2)
#Scheme Constraints
if 'hs' in scheme:
cas_accel_restr = accelrestriction2casadi(f_restr, 5, 5)
for ii in range(N):
opti.subject_to(cas_accel_restr(X[ii,:], X[ii+1,:], X_dot[ii, 5:], X_dot[ii+1, 5:],T/N, X_dot_c[ii,5:]) == 0)
else:
cas_accel_restr = accelrestriction2casadi(f_restr, 5)
for ii in range(N):
opti.subject_to(cas_accel_restr(X[ii,:], X[ii+1,:], X_dot[ii, 5:], X_dot[ii+1, 5:],T/N, []) == 0)
opti.set_value(T, t_end)#0.7
opti.set_value(D, step_length)#0.5
opti.set_value(Params_opti, params)
opti.set_value(Add_params_opti, additional_params)
q_guess, q_dot_guess = q_init(N)
opti.set_initial(X[:,:5], q_guess)
opti.set_initial(X[:,5:], q_dot_guess)
opti.set_initial(X_dot[:,:5], q_dot_guess)
opti.set_initial(X_dot[:,5:], 0)
opti.set_initial(U, 0)
if 'hs' in scheme:
opti.set_initial(X_c[:,:5], (q_guess[:-1,:]+q_guess[1:,:])/2)
opti.set_initial(X_c[:,5:], q_dot_guess[:-1,:])
opti.set_initial(X_dot_c[:,:5], q_dot_guess[:-1,:])
opti.set_initial(X_dot_c[:,5:], 0)
opti.set_initial(U_c, 0)
sol, cpudt = chrono_solve(opti, solve_repetitions)
U_sol = sol.value(U)
X_sol = sol.value(X)
X_dot_sol = sol.value(X_dot)
T_sol = sol.value(T)
T_sol_arr = np.linspace(0, T_sol, N+1)
T_c_arr = (T_sol_arr[:-1]+T_sol_arr[1:])/2
cost_sol = sol.value(cost)
if 'hs' in scheme:
U_c_sol = sol.value(U_c)
X_c_sol = sol.value(X_c)
X_dot_c_sol = sol.value(X_dot_c)
else:
U_c_sol = None
X_c_sol = None
X_dot_c_sol = None
return{
'u':U_sol,
'x':X_sol,
'x_dot':X_dot_sol,
't':T_sol,
't_array':T_sol_arr,
't_c_array': T_c_arr,
'cpudt':cpudt,
'u_c':U_c_sol,
'x_c':X_c_sol,
'x_dot_c':X_dot_c_sol,
'cost':cost_sol
}
```
```python
schemes = ['hs_parab', 'hs_mod_parab','trapz', 'trapz_mod']
solve_repetitions = 3
N_arr = [20, 25, 30, 40, 50, 60]
results = {}
for scheme in schemes:
key = scheme
print('Problem:', key)
results[key] = {'N_arr':N_arr}
for N in N_arr:
print(f'\tN = {N}')
results[key][N] = casadi_biped(
N = N,
scheme = scheme,
solve_repetitions = solve_repetitions,
t_end = 0.7,
step_length = 0.5)
```
### Calculating dynamic errors for each case
Caution! May take several seconds to run!
```python
schemes = ['hs_parab', 'hs_mod_parab','trapz', 'trapz_mod']
n_graph = 2000 # A higher number here will provide more exact results but take longer to run
t_arr = np.linspace(0,0.7,n_graph)
for scheme in schemes:
key = scheme
if 'parab' in scheme:
u_scheme = 'parab'
else:
u_scheme = 'lin'
print('Problem:', key)
N_arr = results[key]['N_arr']
for N in N_arr:
print(f'\tN = {N}')
dyn_err_q, dyn_err_v, _, dyn_err_2 = dynamic_error_implicit(
x_arr=results[key][N]['x'],
u_arr=results[key][N]['u'],
t_end=results[key][N]['t'],
params = params,
F_impl = F_impl_np,
M = mass_matrix_np,
scheme = scheme,
u_scheme = u_scheme,
scheme_params={'u_c':results[key][N]['u_c'],
'x_dot_c': results[key][N]['x_dot_c'],
'x_c': results[key][N]['x_c']},
n_interp= n_graph)
results[key][N]['dyn_err_q'] = dyn_err_q
results[key][N]['dyn_err_v'] = dyn_err_v
results[key][N]['dyn_err_2'] = dyn_err_2
```
```python
# Plot settings
plt.rcParams.update({'font.size': 15})
oct_fig_size = [10,6]
```
```python
schemes = ['hs_parab','hs_mod_parab', 'trapz', 'trapz_mod']
titles = ['Hermite Simpson','2nd order Hermite Simpson', 'Trapezoidal', '2nd order Trapezoidal']
colors = ['b', 'orange', 'g', 'r', 'purple']
n_int = len(t_arr)
N = 25
interv_n = (N * t_arr)/results[scheme][N]['t']
for kk in range(len(schemes)):
scheme = schemes[kk]
plt.figure(figsize=[14,8])
for ii in range(5):
cut_p = 0
for ll in range(1,N+1):
jj = np.searchsorted(interv_n, ll)
plt.plot(t_arr[cut_p:jj],results[scheme][N]['dyn_err_q'][cut_p:jj,ii], '-', c = colors[ii], label = f'$q_{ii+1}$' if cut_p == 0 else None)
cut_p = jj
plt.plot(np.linspace(0,results[scheme][N]['t'],N+1), np.zeros(N+1), 'ok')
plt.legend()
plt.grid()
if kk == 1:
plt.ylim([-0.00001, 0.00001])
elif kk == 3:
plt.ylim([-0.001, 0.001])
plt.title(r'First order dynamic error $\varepsilon^{[1]}_{q_i}$,'+f' {titles[kk]} scheme')
plt.xlabel('Time(s)')
plt.ylabel('Dynamic error $(rad/s)$')
plt.tight_layout(pad = 0.0)
sch_type = titles[kk].replace(' ','_')
# If you are running the notebook locally and want to save the plots,
# uncomment the next line
#plt.savefig(f'5_link_First_Order_Dynamic_Error_{sch_type}_scheme.eps', format='eps')
```
```python
schemes = ['hs_parab','hs_mod_parab', 'trapz', 'trapz_mod']
titles = ['Hermite Simpson','2nd order Hermite Simpson', 'Trapezoidal', '2nd order Trapezoidal']
colors = ['b', 'orange', 'g', 'r', 'purple']
n_int = len(t_arr)
N = 25
interv_n = (N * t_arr)/results[scheme][N]['t']
for kk in range(len(schemes)):
scheme = schemes[kk]
plt.figure(figsize=[14,8])
for ii in range(5):
cut_p = 0
for ll in range(1,N+1):
jj = np.searchsorted(interv_n, ll)
plt.plot(t_arr[cut_p:jj],results[scheme][N]['dyn_err_2'][cut_p:jj,ii], '-', c = colors[ii], label = f'$q_{ii+1}$' if cut_p == 0 else None)
cut_p = jj
plt.plot(results[scheme][N]['t_array'], np.zeros(N+1), 'ok', label = 'knot & collocation points')
if 'hs' in scheme:
plt.plot(results[scheme][N]['t_c_array'], np.zeros(N), 'ow', markeredgecolor='b', label = 'collocation points')
plt.ylim([-0.08, 0.08])
else:
plt.ylim([-1.75, 1.75])
plt.legend()
plt.grid()
plt.title(r'Second order dynamic error $\varepsilon^{{[2]}}_{{q_i}}$,'+f' {titles[kk]} scheme')
plt.xlabel('Time(s)')
plt.ylabel('Dynamic error $(rad/s^2)$')
plt.tight_layout(pad = 0.0)
sch_type = titles[kk].replace(' ','_')
# If you are running the notebook locally and want to save the plots,
# uncomment the next line
#plt.savefig(f'5_link_Second_Order_Dynamic_Error_{sch_type}_scheme.eps', format='eps')
```
```python
def arr_mod(x):
x_1 = np.sum(x*x, axis=1)
return np.sqrt(x_1)
def arr_sum(x):
return np.sum(np.abs(x), axis = 1)
def arr_max(x):
return np.max(np.abs(x), axis = 1)
```
```python
schemes = ['hs_mod_parab','hs_parab']#, 'trapz', 'trapz_mod']
titles = ['2nd order Hermite Simpson','Hermite Simpson']#, 'Trapezoidal', 'Modified Trapezoidal']
colors = ['b', 'orange', 'g', 'r', 'purple']
funcs = [arr_sum,]#arr_mod, arr_max
#func_tittles = ['Module of', 'Sum of absolute', 'Maximum of absolute']
y_max_list = [0.12, 0.2, 0.09]
n_int = len(t_arr)
N = 25
interv_n = (N * t_arr)/results[scheme][N]['t']
for ii in range(1):
plt.figure(figsize=oct_fig_size)
for kk in [1,0]:
scheme = schemes[kk]
cut_p = 0
for ll in range(1,N+1):
jj = np.searchsorted(interv_n, ll)
y_plot = funcs[ii](results[scheme][N]['dyn_err_2'])
plt.plot(t_arr[cut_p:jj],y_plot[cut_p:jj], '-', c = f'C{kk}', label = titles[kk] if cut_p == 0 else None)
cut_p = jj
plt.plot(results[scheme][N]['t_array'], np.zeros(N+1), 'ok', label = 'knot & collocation points')
plt.plot(results[scheme][N]['t_c_array'], np.zeros(N), 'ow', markeredgecolor='k', label = 'collocation points')
plt.legend()
plt.grid()
plt.ylim([-0.01,y_max_list[ii]])
plt.title(r'Second order dynamic error $\varepsilon^{[2]}$,'+f' N = {N}')
plt.xlabel('Time(s)')
plt.ylabel('Dynamic error $(rad/s^2)$')
plt.tight_layout(pad = 0.0)
# If you are running the notebook locally and want to save the plots,
# uncomment the next line
#plt.savefig(f'5_link_HS_N{N}_second_order_dynamic_error.eps', format='eps')
```
```python
schemes = ['trapz', 'trapz_mod']
titles = ['Trapezoidal', '2nd order Trapezoidal']
funcs = [arr_sum,]#arr_mod, arr_max
y_max_list = [0.12, 0.2, 0.09]
n_int = len(t_arr)
N = 50
interv_n = (N * t_arr)/results[scheme][N]['t']
for ii in range(1):
plt.figure(figsize=oct_fig_size)
for kk in range(2):
scheme = schemes[kk]
cut_p = 0
for ll in range(1,N+1):
jj = np.searchsorted(interv_n, ll)
y_plot = funcs[ii](results[scheme][N]['dyn_err_2'])
plt.plot(t_arr[cut_p:jj],y_plot[cut_p:jj], '-', c = f'C{kk+2}', label = titles[kk] if cut_p == 0 else None)
cut_p = jj
plt.plot(results[scheme][N]['t_array'], np.zeros(N+1), 'ok', label = 'knot & collocation points')
plt.legend()
plt.grid()
plt.title(r'Second order dynamic error $\varepsilon^{[2]}$,'+f' N = {N}')
plt.xlabel('Time(s)')
plt.ylabel('Dynamic error $(rad/s^2)$')
plt.tight_layout(pad = 0.0)
# If you are running the notebook locally and want to save the plots,
# uncomment the next line
#plt.savefig(f'5_link_Trapezoidal_N{N}_second_order_dynamic_error.eps', format='eps')
```
```python
def total_state_error(t_arr, dyn_err):
errors = np.trapz(np.abs(dyn_err), t_arr, axis=0)
return errors
```
```python
schemes = ['hs_parab', 'hs_mod_parab','trapz', 'trapz_mod']
N_arr = [10,15,20,25,30,40,50,75,100,150]
t_arr = np.linspace(0,0.7,n_graph)
for scheme in schemes:
key = scheme
print('Problem:', key)
N_arr = results[key]['N_arr']
for N in N_arr:
print(f'\tN = {N}')
for letter in 'qv2':
results[key][N][f'integ_dyn_err_{letter}']= total_state_error(t_arr, results[scheme][N][f'dyn_err_{letter}'])
results[key][N][f'module_dyn_err_{letter}']= np.sqrt(np.dot(results[key][N][f'integ_dyn_err_{letter}'], results[key][N][f'integ_dyn_err_{letter}']))
results[key][N][f'sum_dyn_err_{letter}'] = np.sum(results[key][N][f'integ_dyn_err_{letter}'])
```
```python
for scheme in schemes:
key = scheme
print('Problem:', key)
N_arr = results[key]['N_arr']
for letter in 'qv2':
list_mod = []
list_sum = []
for N in N_arr:
#print(f'\tN = {N}')
list_mod.append(results[key][N][f'module_dyn_err_{letter}'])
list_sum.append(results[key][N][f'sum_dyn_err_{letter}'])
results[key][f'module_dyn_err_{letter}_array'] = np.array(list_mod)
results[key][f'sum_dyn_err_{letter}_array'] = np.array(list_sum)
```
```python
# For each scheme, the number of collocation points can be obtained
for scheme in results.keys():
if 'hs' in scheme:
n_coll = np.array(results[scheme]['N_arr'])*2-1
results[scheme]['N_coll_arr'] = n_coll
else:
results[scheme]['N_coll_arr'] = results[scheme]['N_arr']
```
```python
schemes = ['hs_mod_parab','hs_parab', 'trapz', 'trapz_mod']
titles = ['2nd order Hermite Simpson', 'Hermite Simpson','Trapezoidal', '2nd order Trapezoidal']
plt.figure(figsize=oct_fig_size)
for ii in [2,3,1,0]:
key = schemes[ii]
plt.plot(results[key]['N_arr'], results[key][f'sum_dyn_err_2_array'], marker = 'o', c = f'C{ii}',label = titles[ii])
plt.grid()
plt.legend()
plt.yscale('log')
plt.title('Second order dynamic error $E^{[2]}$')
plt.xlabel('Number of intervals')
plt.ylabel('Dynamic error ($rad/s$)')
plt.tight_layout(pad = 0.0)
# If you are running the notebook locally and want to save the plots,
# uncomment the next line
#plt.savefig(f'5_link_Sum_second_order_dynamic_error_vs_interval_number.eps', format='eps')
```
```python
for scheme in schemes:
key = scheme
print('Problem:', key)
N_arr = results[key]['N_arr']
list_cpudt = []
for N in N_arr:
#print(f'\tN = {N}')
list_cpudt.append(results[key][N]['cpudt'])
results[key]['cpudt_array'] = np.array(list_cpudt)
```
```python
schemes = ['hs_mod_parab','hs_parab', 'trapz', 'trapz_mod']
titles = ['2nd order Hermite Simpson', 'Hermite Simpson','Trapezoidal', '2nd order Trapezoidal']
plt.figure(figsize=oct_fig_size)
for ii in [2,3,1,0]:
key = schemes[ii]
plt.plot(results[key]['N_arr'], results[key][f'cpudt_array'], marker = 'o', c = f'C{ii}', label = titles[ii])
plt.grid()
plt.legend()
#plt.yscale('log')
plt.title('Optimization time')
plt.xlabel('Number of intervals')
plt.ylabel('Time (s)')
plt.tight_layout(pad = 0.0)
# If you are running the notebook locally and want to save the plots,
# uncomment the next line
#plt.savefig(f'5_link_optimization_vs_interval_number.eps', format='eps')
```
```python
# Here we print the data shown in Table III of the paper
for scheme in ['hs_mod_parab', 'hs_parab', 'trapz', 'trapz_mod']:
key = scheme
for N in [25,50]:#results[key]['N_arr']:
print('scheme:', scheme, 'N:', N,'\n\ttime:', results[key][N][f'cpudt'],
'\n\tErr 1:', results[key][N]['sum_dyn_err_q'], '\n\tErr 2:', results[key][N]['sum_dyn_err_2'])
```
## Animation
```python
from matplotlib import animation, rc
import matplotlib.patches as patches
from matplotlib.transforms import Affine2D
from IPython.display import HTML
import matplotlib
```
```python
matplotlib.rcParams['animation.embed_limit'] = 200
```
```python
def body_tray(X, params):
res = []
for ii in range(X.shape[0]):
res.append(list(chain_to_draw(X[ii,:], params)))
return np.array(res)
```
```python
def loop_body_tray(X, params):
point_tray = body_tray(X, params)
point_tray_loop = np.append(
point_tray,
np.expand_dims(
np.array(list(chain_to_draw(X[0,[4,3,2,1,0,5,6,7,8,9]],params)))
,0),
0)
return point_tray_loop
```
```python
def mod_sum(iterable, start):
for element in iterable:
start += element
return start
```
```python
def create_anim(X, U, params, n_loops = 1):
[
I_0_n, I_1_n, I_2_n, I_3_n, I_4_n,
d_0_n, d_1_n, d_2_n, d_3_n, d_4_n,
g_n,
l_0_n, l_1_n, l_3_n,
m_0_n, m_1_n, m_2_n, m_3_n, m_4_n
] = params
N = X.shape[0]
fig, ax = plt.subplots()
draw_width = 14
draw_height = 14
fig.set_dpi(72)
fig.set_size_inches([draw_width,draw_height])
ax.set_xlim(( -1, 1))
ax.set_ylim(( -0.2, 1.8))
body, = ax.plot([], [], lw=4, ms = 12, marker = 'o')
trail, = ax.plot([], [], lw=1, color = 'k')
old_trail, = ax.plot([], [], lw=1, color = 'k')
next_trail, = ax.plot([], [], lw=1, color = 'k')
point_tray = body_tray(X, params)
point_tray_loop = loop_body_tray(X, params)
#sys_cm_point, = ax.plot([], [], 'go', ms=12)
#line_sys_cm, = ax.plot([], [], 'k:', lw=1)
print_vars = [X[:,ii] for ii in range(5)]+[np.linspace(0, N-1, N, dtype=int)]
print_var_names = [f'q_{ii}' for ii in range(5)]+['step']
texts = []
ii = 0.8
for arr in print_vars:
texts.append(ax.text(-0.8, ii, "", fontsize = 12))
ii -= 0.2
ax.grid()
def init():
body.set_data([], [])
trail.set_data(point_tray_loop[0,0,-1], point_tray_loop[0,1,-1])
old_trail.set_data(point_tray_loop[:,0,-1]-0.5, point_tray_loop[:,1,-1])
#next_trail.set_data(point_tray_loop[:,0,-1]+0.5, point_tray_loop[:,1,-1])
#sys_cm_point.set_data([], [])
#line_sys_cm.set_data([], [])
return (body,)
def animate(i):
margin_x = -0.25 + i * 0.5/N
trail.set_data(point_tray_loop[0:i+1,0,-1], point_tray_loop[0:i+1,1,-1])
#sys_cm_coords = sys_cm_np(X[i,:], params)
#sys_cm_point.set_data(sys_cm_coords)
#line_sys_cm.set_data([0, sys_cm_coords[0]], [0, sys_cm_coords[1]])
ax.set_xlim(( -1+ margin_x, 1+ margin_x))
points_x, points_y = point_tray[i,:,:]
body.set_data(points_x, points_y)
for ii in range(len(texts)):
text = texts[ii]
name = print_var_names[ii]
arr = print_vars[ii]
text.set_position((-0.9 + margin_x, 1.7 - 0.05*ii))
if name == 'step':
text.set_text("$step$ = " + str(arr[i]))
else:
text.set_text("$" + name + "$ = %.3f" % arr[i])
return (body,)
iterable_frames = mod_sum([[jj for jj in range(N)]for kk in range(n_loops)], start = [])
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=iterable_frames, interval=20,
blit=True)
return anim
```
```python
anim = create_anim(results['hs_mod_parab'][25]['x'][:-1,:],results['hs_mod_parab'][25]['u'], params, 4)
```
```python
HTML(anim.to_jshtml())
```
```python
f = r"biped_animation.mp4"
writervideo = animation.FFMpegWriter(fps=25//0.7)
# If you are running the notebook locally and want to save the animation,
# uncomment the next line
#anim.save(f, writer=writervideo)
```
```python
```
## Lecture topic 5:
## Ordinary and partial differential equations
```python
from lecture_utils import *
```
_This is the first part of the lecture material and should enable you to solve exercises 5.1, 5.2 and 5.3._
#### What are differential equations?
A differential equation is an equation that contains, in addition to variables and functions of these variables, derivatives of these functions.
- Example
$$
\frac{\mathrm{d}^2x(t)}{\mathrm{d}t^2}-\mu(1-x(t)^2)\frac{\mathrm{d}x(t)}{\mathrm{d}t} +\omega^2x(t) = 0
$$
The solution to this differential equation is the function $x(t)$ that satisfies the differential equation when $x(t)$ and its derivatives are substituted into the equation.
### Types of differential equation
#### Ordinary differential equations
An ordinary differential equation (ODE) has only one independent variable, let's call it $t$, and contains only derivatives with respect to $t$.
- Example
$$
tx^5\frac{\mathrm{d}x}{\mathrm{d}t} = \sin(t)
$$
For ODEs, we will stick with $t$ as the independent variable, because for many problems time $t$ is indeed the independent variable.
#### Partial differential equations
Partial differential equations have several independent variables and contain partial derivatives with respect to these variables.
- Example
$$
\frac{\partial^2z}{\partial x \partial y} = xyz\frac{\partial z}{\partial x}\frac{\partial z}{\partial y}
$$
### Order of ordinary differential equations
The order $n$ of an ODE is defined by the highest order derivative $\frac{\mathrm{d}^n x}{\mathrm{d}t^n}$.
#### First-order differential equations
The general form of a first-order ODE with one variable is
$$
\frac{\mathrm{d}x}{\mathrm{d}t} = f(x,t)
$$
#### Second-order differential equations
The general form of a second-order ODE with one variable is
$$
\frac{\mathrm{d}^2x}{\mathrm{d}t^2} = f\left(x,\frac{\mathrm{d}x}{\mathrm{d}t},t\right)
$$
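As a side remark: any higher-order ODE can be rewritten as a system of coupled first-order ODEs by introducing the derivatives as additional variables. Defining $v = \frac{\mathrm{d}x}{\mathrm{d}t}$ turns the general second-order ODE above into the pair
$$
\frac{\mathrm{d}x}{\mathrm{d}t} = v, \qquad \frac{\mathrm{d}v}{\mathrm{d}t} = f\left(x,v,t\right)
$$
This is why the numerical methods introduced below for first-order equations can also be applied to higher-order problems.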
### Examples for ordinary differential equations in chemistry and physics:
- chemical reaction kinetics
$$
A + B \rightarrow \mathrm{Products} \qquad e.g. \qquad CH_3CH_2Br + OH^- \rightarrow CH_3CH_2OH + Br^-
$$
with the rate equation
$$
-\frac{\mathrm{d}c_A}{\mathrm{d}t} = - \frac{\mathrm{d}c_B}{\mathrm{d}t} = k c_A c_B
$$
- Harmonic oscillator
$$
\frac{\mathrm{d}^2x}{\mathrm{d}t^2} = -\omega^2 x
$$
- Lorenz equations
$$
{\mathrm{d} x\over\mathrm{d} t} = \sigma(y-x),\qquad
{\mathrm{d} y\over\mathrm{d} t} = rx - y - xz,\qquad
{\mathrm{d} z\over\mathrm{d} t} = xy - bz,
$$
where $\sigma$, $r$ and $b$ are constants
- Newton's equation of motion
$$
\frac{\mathrm{d}^2\mathbf{r}_i}{\mathrm{d}t^2} = \frac{\mathbf{F}_i}{m_i}
$$
where $\mathbf{r}_i = (x,y,z)$, $m_i$, and $\mathbf{F}_i$ are the position, mass and force acting on particle $i$.
<b> Chemical reaction kinetics </b>:
Chemical kinetics or reaction kinetics investigates the speed of a chemical reaction, i.e., how fast the concentration of a reactant changes with time. The reactions are due to collisions of the reactant species. The frequency with which the molecules or ions collide depends, in addition to thermodynamic variables such as the temperature, on their concentrations. Here, $c_A$ and $c_B$ are the concentrations of the reactants $A$ and $B$. Above you see a second-order reaction, i.e., the reaction rate depends on the concentrations $c_A$ and $c_B$ of the reactants $A$ and $B$, and the reaction happens in one step. The order of a reaction is not defined by the stoichiometry of the reaction: the reaction could happen in several steps, intermediate products could form, and the rate-determining step could have a different order. Studying reaction rates is therefore important to identify the reaction mechanism. A typical example of a second-order reaction is the S$_N$2 reaction shown above. In the exercise you will solve the rate equation for a first-order reaction.
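As a minimal illustration, the rate equation can also be integrated symbolically. For the special case of equal concentrations, $c_A = c_B = c$, the rate law reduces to $\frac{\mathrm{d}c}{\mathrm{d}t} = -k c^2$; the short SymPy sketch below is one possible way to obtain the integrated rate law.
```python
from sympy import symbols, Function, Eq, dsolve

t, k, c0 = symbols('t k c_0', positive=True)
c = Function('c')

# Second-order rate law for c_A = c_B = c: dc/dt = -k*c**2
rate_eq = Eq(c(t).diff(t), -k * c(t)**2)

# Integrate with the initial condition c(0) = c0
sol = dsolve(rate_eq, c(t), ics={c(0): c0})
print(sol)  # analytic result: c(t) = c0/(1 + k*c0*t)
```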
<b>Lorenz equations</b>:
These equations were first studied by Edward Lorenz in 1963, who
derived them from a simplified model of weather patterns. The
reason for their fame is that they were one of the first incontrovertible
examples of "deterministic chaos", the occurrence of apparently
random motion even though there is no randomness built into the equations.
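Although we will write our own integrators below, the Lorenz system is also a convenient test case for library solvers. The sketch below uses `scipy.integrate.solve_ivp` with the classic parameter values $\sigma=10$, $r=28$, $b=8/3$ (these particular values are chosen here only for illustration).
```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, state, sigma=10.0, r=28.0, b=8.0/3.0):
    """Right-hand side of the Lorenz equations."""
    x, y, z = state
    return [sigma * (y - x), r * x - y - x * z, x * y - b * z]

# Integrate from t = 0 to t = 50, starting close to (but not at) the origin
sol = solve_ivp(lorenz, (0.0, 50.0), [0.0, 1.0, 0.0],
                dense_output=True, rtol=1e-8, atol=1e-10)

t_plot = np.linspace(0.0, 50.0, 5000)
x, y, z = sol.sol(t_plot)   # evaluate the dense solution on a fine grid
print(x[:5])
```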
<b> Equation of motion </b>:
These equations are solved, e.g., in astrophysics to calculate the orbits of planets etc. They are also relevant in statistical physics for molecular dynamics simulations.
### Example for equations of motions: molecular dynamics
- Realm of statistical physics: generation of trajectories to analyze the movement of atoms and molecules
- Applied for systems, where sampling of phase space is necessary, e.g., liquids
- Trajectories are obtained by numerically solving Newton's equations of motion
- Example: water film on top of Pt(111) surface
```python
play_H2O_on_Pt()
```
Molecular dynamics is a computer simulation method for analyzing the physical movements of atoms and molecules.
The simulations are run for a certain amount of time, in the example above for a few picoseconds. You can then display the "evolution" of the system. From molecular dynamics we can study the structure of molecular systems, but also obtain properties such as diffusion coefficients. With more advanced techniques, we can also study reactions, e.g., dissociation reactions.
The system above shows a liquid water film (oxygen atoms in red, hydrogen atoms in white) on top of a platinum surface (Pt atoms in brown). The purpose of this simulation was to study the structure of the water on Pt(111). One can observe that a very dense first adsorption layer forms.
### Example for partial differential equations:
- Laplace equation
- Poisson equation
- Maxwell's equation
- Schrödinger equation
- ...
### Solving ordinary differential equations analytically
ODEs can be solved analytically if the variables can be separated. An example for such an equation is the following linear ODE
$$
\frac{\mathrm{d}x}{\mathrm{d}t} = \frac{2x}{t}
$$
- Separation of variables:
$$
\frac{\mathrm{d}x}{x} = 2 \frac{\mathrm{d}t}{t}
$$
- Integration of both sides
$$
\int \frac{\mathrm{d}x}{x} = \int 2\, \frac{\mathrm{d}t}{t} \Rightarrow \ln x = 2 \ln t + c
$$
- Setting the integration constant to $c = \ln k$
$$
\ln x = \ln(t^2) + \ln k
$$
- After exponentiating both sides, the solution is then
$$
x(t) = kt^2
$$
- If we have an initial condition fixing $x$ at some time $t$, we can also determine $k$.
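If you want to double-check this result, the same separable equation can be handed to SymPy; the cell below is only a quick verification sketch.
```python
from sympy import symbols, Function, Eq, dsolve

t = symbols('t', positive=True)
x = Function('x')

# Separable ODE from above: dx/dt = 2x/t
ode = Eq(x(t).diff(t), 2 * x(t) / t)

print(dsolve(ode, x(t)))  # Eq(x(t), C1*t**2), i.e. x(t) = k*t**2 as derived by hand
```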
### Numerical solution of ordinary differential equations
Let's assume we have instead
$$
\frac{\mathrm{d}x}{\mathrm{d}t} = \frac{2x}{t} + \frac{3x^2}{t^3}
$$
- no longer separable
- nonlinear equation, i.e., powers or other non-linear functions of the dependent variable $x$ appear
- nonlinear equations can rarely be solved analytically $\rightarrow$ <b> solve it numerically </b>
<b> Comment </b>:
The definition of a linear first-order ODE is
$$
\frac{\mathrm{d}x(t)}{\mathrm{d}t} + f(t) x(t)= g(t)
$$
The definition of a linear $n$-th order ODE is
$$
\frac{\mathrm{d}^nx(t)}{\mathrm{d}t^n} + a_{n-1}(t)\frac{\mathrm{d}^{n-1}x(t)}{\mathrm{d}t^{n-1}} + \cdots + a_2(t)\frac{\mathrm{d}^2x(t)}{\mathrm{d}t^2} + a_1(t)\frac{\mathrm{d}x(t)}{\mathrm{d}t} + a_0(t) x(t) = b(t)
$$
### Methods for numerical solutions of ODEs
Methods covered in this lecture
- Euler method
- Runge-Kutta methods
- Leapfrog
- Verlet
### Euler's method
If we start from the general form of the first-order ODE
$$
\frac{\mathrm{d}x}{\mathrm{d}t} = f(x,t)
$$
and have an initial condition that fixes the value of $x$ for some $t$, then we can write the value of $x$ a short interval $h$ later using a Taylor expansion:
$$
\begin{align}
x(t+h) &= x(t) + h\frac{\mathrm{d}x}{\mathrm{d}t} + \frac{1}{2}h^2\frac{\mathrm{d}^2x}{\mathrm{d}t^2} +\cdots\\
&= x(t) + h\frac{\mathrm{d}x}{\mathrm{d}t} + \mathcal{O}(h^2)
\end{align}
$$
where $\mathcal{O}(h^2)$ denotes all terms of order $h^2$ and higher.
If $h$ is small, then $h^2$ is very small, so we can neglect the terms in $h^2$ and get
$$
x(t+h) = x(t) + hf(x,t)
$$
<b> Procedure </b>:
- start at time $t$, where value $x$ is known
- calculate $x$ at short time later, i.e. at $t+h$
- repeat and calculate $x$ at $t+2h$
- continue until $t=t_{\mathrm{end}}$
If you are given a differential equation for $x$ and an initial condition at $t=a$, and asked to make a graph of $x(t)$ for values of $t$ from $a$ to $b$, divide the interval from $a$ to $b$ into steps of size $h$ and use $x(t+h) = x(t) + hf(x,t)$ repeatedly, then plot the results. This method is named after its inventor, Leonhard Euler.
### Example for Euler's method
Let's use Euler's method to solve the differential equation
$$
\frac{\mathrm{d}x(t)}{\mathrm{d}t} = (\cos t) x(t)
$$
with the initial condition $x=1$ at $t=0$ and we want to perform the calculation from $t=0$ to $t=10$. This is a first-order linear equation, which can be also solved analytically. With the given initial condition, the analytic solution is
$$
x(t) = e^{\sin(t)}
$$
```python
from numpy import cos, sin, exp,arange
from matplotlib import pyplot as plt
""" Define function """
def f(x,t):
return cos(t) * x
"""Initial values and parameters"""
a = 0.0 # Start of the interval
b = 10.0 # End of the interval
N = 1000 # Number of steps
h = (b-a)/N # Step size
print ('Step size', h)
x = 1.0 # Initial condition
tpoints = arange(a,b,h)
xpoints_euler =[]
xpoints_analytic =[]
""" Solve with Euler """
for t in tpoints:
xpoints_analytic.append(exp(sin(t)))
xpoints_euler.append(x)
x += h*f(x,t)
""" Plot results """
plt.rc('font', size=16)
plt.plot(tpoints,xpoints_euler,label='Euler',linewidth=3.0)
plt.plot(tpoints,xpoints_analytic,label='Analytic',)
plt.xlabel('t')
plt.ylabel('x')
plt.legend()
```
Since we can solve this ODE analytically, we can compare the accuracy of the Euler method to the exact solution. Play with the step size $h$ (i.e. with $N$) to see when the Euler solution converges to the exact one. You will find that the error decreases when decreasing the step size.
### Error for Euler's method
- neglecting $h^2$ and higher-order terms $\rightarrow$ error of a single step is $\mathcal{O}(h^2)$
- with Euler we don't take one step, but many.
- total number of steps $N = (b- a)/h$, where solution is calculated from $t = a$ to $t =b$
- accumulative error:
$$
\begin{align}
\sum_{k=0}^{N-1}\frac{1}{2}h^2\left(\frac{\mathrm{d}^2x}{\mathrm{d}t^2}\right)_{t=t_k} &= \frac{1}{2}h \sum_{k=0}^{N-1}h\left(\frac{\mathrm{d}f}{\mathrm{d}t}\right)_{t=t_k}\\
&\approx \frac{1}{2}h \int_a^b \left(\frac{\mathrm{d}f}{\mathrm{d}t}\right)\mathrm{d}t = \frac{1}{2}h \left[f(x(b),b)-f(x(a),a)\right]
\end{align}
$$
<b> Total error is $\mathcal{O}(h)$</b> !!
The accumulative error is linear in $h$, which means the total error goes down by a factor of two when we make $h$ half as large. Making the calculation more accurate therefore takes proportionally longer. This might not be a problem for our example, but for the simulation of the water film on top of a platinum slab shown above (where the interactions are described at the quantum-mechanical level) this would add weeks of additional computation time.
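As a quick numerical check of this scaling (an illustrative sketch, not part of the original lecture code), we can reuse the example from above and compute the error at $t=b$ for several step sizes; halving $h$ should roughly halve the error:
```python
from numpy import cos, sin, exp, arange

def f(x, t):
    return cos(t) * x

a, b = 0.0, 10.0
for N in [125, 250, 500, 1000]:
    h = (b - a)/N
    x = 1.0
    for t in arange(a, b, h):
        x += h*f(x, t)
    print(N, abs(x - exp(sin(b))))   # error roughly halves when h is halved
```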
### Impact of error propagation
<b> Example:</b> Earth (blue) + moon (gray) orbiting around sun (yellow)
- Equations of motion solved by Euler method (= "Lazy man" method)
- yields qualitatively wrong results
- <i>Video by Miguel Caro (Advanced Statistical Physics course at Aalto), see also https://youtu.be/nHAZGkKn1-g </i>
```python
play_lazy_man()
```
In the video you see a comparison between approximate and "exact" trajectories (where the "exact" results are obtained with a better method and a very small time step $h$). We see in this simulation that the Euler method performs terribly. The accumulated error leads to completely nonsensical behavior. Our Moon-like object shoots away into an unstable orbit.
### Can we improve Euler's method?
<b> Idea </b>: We could keep the order $h^2$ in the Taylor expansion. The $h^2$ term is
$$
\frac{1}{2}h^2\frac{\mathrm{d}^2x}{\mathrm{d}t^2} = \frac{1}{2}h^2 \frac{\mathrm{d}f}{\mathrm{d}t}
$$
- would yield more accurate expression
- problem: we need an explicit expression of $f$ to calculate $\mathrm{d}f/\mathrm{d}t$ $\rightarrow$ not always the case
Conclusion: don't use Euler, there are better methods, e.g., Runge-Kutta
Side note: Euler's method is not completely useless, it will become relevant again for partial differential equations.
### Runge Kutta method
- actually set of methods
- second-order and fourth-order Runge-Kutta methods
- technically, Euler is a first-order Runge Kutta
- increasing accuracy with increasing order
### Second-order Runge Kutta (RK2) method
(Figure from "Computational Physics" by Marc Newman.)
Euler's method and the RK2 method: Euler's method is equivalent to taking the slope $\mathrm{d}x/\mathrm{d}t$ at time $t$ and extrapolating it into the future to time $t+h$. A better approximation is to perform the extrapolation using the slope at time $t + \frac{1}{2}h$. This is the idea of the RK2 method.
<b>Rationale</b>: The curve represents the true form of $x(t)$, which we are trying to calculate. From $\mathrm{d}x/\mathrm{d}t = f(x,t)$ we know that the slope of the solution is equal to the function $f(x,t)$. Given the value of $x$ at time $t$ we can calculate the slope at that point, as shown in the figure. Then we extrapolate that slope to time $t+h$ $\rightarrow$ we get $x(t+h)$ (Euler's method). This would work perfectly if the curve were a straight line, but not if it is curved. Better: calculate the slope at $t+\frac{1}{2}h$. This is what we do in RK2, which is the reason why it is also called the "midpoint method".
### Derivation of the RK2 equations
- First step: Taylor expansion around $t+\frac{1}{2}h$ to get value of $x(t+h)$.
$$
x(t+h) = x\left(t+\frac{1}{2}h\right) + \frac{1}{2}h\left(\frac{\mathrm{d}x}{\mathrm{d}t}\right)_{t+\frac{1}{2}h} + \frac{1}{8}h^2\left(\frac{\mathrm{d}^2x}{\mathrm{d}t^2}\right)_{t+\frac{1}{2}h} + \mathcal{O}(h^3)
$$
- Second step: Taylor expansion around $t+\frac{1}{2}h$ to get value of $x(t)$.
$$
x(t) = x\left(t+\frac{1}{2}h\right) - \frac{1}{2}h\left(\frac{\mathrm{d}x}{\mathrm{d}t}\right)_{t+\frac{1}{2}h} + \frac{1}{8}h^2\left(\frac{\mathrm{d}^2x}{\mathrm{d}t^2}\right)_{t+\frac{1}{2}h} + \mathcal{O}(h^3)
$$
- Third step: Subtract the second from the first expression
$$
\begin{align}
x(t+h) &= x(t) + h\left(\frac{\mathrm{d}x}{\mathrm{d}t}\right)_{t+\frac{1}{2}h} + \mathcal{O}(h^3)\\
&= x(t) + hf\left(x\left(t+\frac{1}{2}h\right),t+\frac{1}{2}h\right) + \mathcal{O}(h^3)
\end{align}
$$
The error term is now $\mathcal{O}(h^3)$! This is nice because our method will be more accurate. Problem: we don't have $x\left(t+\frac{1}{2}h\right)$, only $x(t)$ 🤔
- Fourth step: use Euler to approximate $x\left(t+\frac{1}{2}h\right)$
$$
x\left(t+\frac{1}{2}h\right) = x(t) + \frac{1}{2}h f(x,t)
$$
and then substitute it into the equation above
### Working equations for RK2
$$
\begin{align}
k_1 &= h f(x,t) \\
k_2 &= hf\left(x+\frac{1}{2}k_1,t+\frac{1}{2}h\right)\\
x(t+h) &= x(t) + k_2
\end{align}
$$
### Example for RK2 method
We want to solve again
$$
\frac{\mathrm{d}x(t)}{\mathrm{d}t} = (\cos t) x(t)
$$
with the initial condition $x=1$ at $t=0$.
```python
from numpy import cos, sin, exp,arange
from matplotlib import pyplot as plt
""" Define function """
def f(x,t):
return cos(t) * x
"""Initial values and parameters"""
a = 0.0 # Start of the interval
b = 10.0 # End of the interval
N = 50 # Number of steps
h = (b-a)/N # Step size
print ('Step size', h)
x = 1.0 # Initial condition
tpoints = arange(a,b,h)
xpoints_rk2 =[]
xpoints_analytic =[]
""" Solve with RK2 """
for t in tpoints:
xpoints_analytic.append(exp(sin(t)))
xpoints_rk2.append(x)
k1 = h*f(x,t)
k2 = h*f(x+0.5*k1,t+0.5*h)
x += k2
""" Plot results """
plt.rc('font', size=16)
plt.plot(tpoints,xpoints_rk2,label='RK2',linewidth=3.0)
plt.plot(tpoints,xpoints_analytic,label='Analytic',)
plt.xlabel('t')
plt.ylabel('x')
plt.legend()
```
Play with the variable $N$, which controls the step size $h$. Try $N = 10, 20, 50, 100$. How does the result compare to Euler's method? You should find that you can use much larger step sizes with RK2 than with Euler's method.
### Error of the RK2 method
- Does approximating $x(t+0.5h)$ by Euler's method increase the scaling of the error?
$\rightarrow$ No! Check by expanding $f(x+0.5k_1,t+0.5h)$ in its first argument around $x(t+0.5h)$
- RK2 has an error of order $\mathcal{O}(h^3)$ for a single step
- accumulative error is of order $\mathcal{O}(h^2)$
You can also "measure" the accumulative error by computing the error between analytic and numerical method at time $t=b$ for different steps sizes $h$. A power-law fit yields then the order of the error. You will do this in exercise 5.1 for the fourth-order Runge-Kutta method. If you are motivated, you can also test it for RK2.
### Fourth-order Runge Kutta (RK4) method
We can perform similar Taylor expansions around various points and then take linear combinations of them, arranging for the terms in $h^3$, $h^4$, etc. to cancel. In this way, we can get increasingly accurate solvers.
- equations get more complicated with higher order
- very common: fourth-order rule
- RK4 considered good balance between high accuracy and complexity of equation
### Working equations for RK4
$$
\begin{align}
k_1 &= h f(x,t) \\
k_2 &= hf\left(x+\frac{1}{2}k_1,t+\frac{1}{2}h\right)\\
k_3 &= hf\left(x+\frac{1}{2}k_2,t+\frac{1}{2}h\right)\\
k_4 &= hf\left(x+k_3,t+h\right)\\
x(t+h) &= x(t) + \frac{1}{6}(k_1 + 2k_2 + 2k_3 + k_4)
\end{align}
$$
### Example for RK4 method
We want to solve again
$$
\frac{\mathrm{d}x(t)}{\mathrm{d}t} = (\cos t) x(t)
$$
with the initial condition $x=1$ at $t=0$.
```python
from numpy import cos, sin, exp,arange
from matplotlib import pyplot as plt
""" Define function """
def f(x,t):
return cos(t) * x
"""Initial values and parameters"""
a = 0.0 # Start of the interval
b = 10.0 # End of the interval
N = 50 # Number of steps
h = (b-a)/N # Step size
print ('Step size', h)
x = 1.0 # Initial condition
tpoints = arange(a,b,h)
xpoints_rk4 =[]
xpoints_analytic =[]
""" Solve with RK4 """
for t in tpoints:
xpoints_analytic.append(exp(sin(t)))
xpoints_rk4.append(x)
k1 = h*f(x,t)
k2 = h*f(x+0.5*k1,t+0.5*h)
k3 = h*f(x+0.5*k2,t+0.5*h)
k4 = h*f(x+k3,t+h)
x += (k1+2*k2+2*k3+k4)/6
""" Plot results """
plt.rc('font', size=16)
plt.plot(tpoints,xpoints_rk4,label='RK4',linewidth=3.0)
plt.plot(tpoints,xpoints_analytic,label='Analytic',)
plt.xlabel('t')
plt.ylabel('x')
plt.legend()
```
Play again with the variable $N$, which controls the step size $h$. What do you find?
### Error of the RK4 method
- RK4 has an error of order $\mathcal{O}(h^5)$ for a single step
- accumulative error is of order $\mathcal{O}(h^4)$
- RK4 is often method of choice for many problems
- there are exceptions, where other solvers are better suited. We will discuss them later
So far we have only applied the Euler and RK methods to first-order ODEs with one dependent variable.
Before discussing other numerical solvers for ODEs, we learn how to
- use these solvers for ODEs with more than one variable
- apply them to second-order ODEs
### Simultaneous differential equations
- So far we only looked at ODEs with one <i>dependent</i> variable $x$.
- We often have more than one $\rightarrow$ simultaneous differential equations
- Example:
$$
\frac{\mathrm{d}x(t)}{\mathrm{d}t} = x(t)y(t) -x(t), \qquad \frac{\mathrm{d}y(t)}{\mathrm{d}t} = y(t) -x(t)y(t) + \sin^2(\omega t)
$$
- there is still only one(!) <i>independent</i> variable $t$
These equations are still ODEs, not partial differential equations.
The general form for two simultaneous ODEs is
$$
\frac{\mathrm{d}x(t)}{\mathrm{d}t} = f_x(x,y,t), \qquad \frac{\mathrm{d}y(t)}{\mathrm{d}t} = f_y(x,y,t)
$$
where $f_x$ and $f_y$ are general, possibly nonlinear functions of $x$, $y$ and $t$. This can be also written in vector form for an arbitrary number of variables
$$
\frac{\mathrm{d}\mathbf{r}}{\mathrm{d}t} = \mathbf{f}(\mathbf{r},t)
$$
where $\mathbf{r} = (x,y,...)$ and $\mathbf{f}$ is a vector of functions $\mathbf{f}(\mathbf{r},t) = (f_x(\mathbf{r},t),f_y(\mathbf{r},t)...)$
### Numerical solution of simultaneous differential equations
The working equations for, e.g., Euler's method and the RK4 method can be straightforwardly extended to the multi-variable case
- Euler's method
$$
\mathbf{r}(t+h) = \mathbf{r}(t) + h\mathbf{f}(\mathbf{r},t)
$$
- RK4 method
$$
\begin{align}
\mathbf{k}_1 &= h \mathbf{f}(\mathbf{r},t) \\
\mathbf{k_2} &= h\mathbf{f}\left(\mathbf{r}+\frac{1}{2}\mathbf{k}_1,t+\frac{1}{2}h\right)\\
\mathbf{k_3} &= h\mathbf{f}\left(\mathbf{r}+\frac{1}{2}\mathbf{k}_2,t+\frac{1}{2}h\right)\\
\mathbf{k_4} &= h\mathbf{f}\left(\mathbf{r}+\mathbf{k}_3,t+h\right)\\
\mathbf{r}(t+h) &= \mathbf{r}(t) + \frac{1}{6}(\mathbf{k}_1 + 2\mathbf{k}_2 + 2\mathbf{k}_3 + \mathbf{k}_4)
\end{align}
$$
### Example for simultaneous ODEs
Let's try to find the solution with RK4 for
$$
\frac{\mathrm{d}x}{\mathrm{d}t} = xy -x, \qquad \frac{\mathrm{d}y}{\mathrm{d}t} = y -xy + \sin^2(\omega t)
$$
for $t=0$ to $t = 10$ and $\omega = 1$ with initial conditions $x=y=1$ at $t=0$
```python
from numpy import sin, array, arange
from matplotlib import pyplot as plt
"""Initial values and parameters"""
a = 0.0 # Start of the interval
b = 10.0 # End of the interval
N = 100 # Number of steps
h = (b-a)/N # Step size
r = array([1.0,1.0]) # Initial condition: r[0] = x = 1 and r[1] = y = 1
""" Define function """
def f(r,t):
x = r[0]
y = r[1]
fx = x*y - x
fy = y - x*y + sin(t)**2
return array([fx,fy],float)
tpoints = arange(a,b,h)
xpoints = []
ypoints = []
""" Solve with RK4 """
for t in tpoints:
xpoints.append(r[0])
ypoints.append(r[1])
k1 = h*f(r,t)
k2 = h*f(r+0.5*k1,t+0.5*h)
k3 = h*f(r+0.5*k2,t+0.5*h)
k4 = h*f(r+k3,t+h)
r += (k1+2*k2+2*k3+k4)/6
""" Plot results """
plt.rc('font', size=16)
plt.plot(tpoints,xpoints,label='x',linewidth=3.0)
plt.plot(tpoints,ypoints,label='y',linewidth=3.0)
plt.xlabel('t')
plt.ylabel('x')
plt.legend()
```
Note that the program is almost identical to the RK4 program above, except for the definition of the function $\mathbf{f}(\mathbf{r},t)$, which is slightly more involved.
### Solving second-order ODEs numerically
Second-order ODEs have the general form of
$$
\frac{\mathrm{d}^2x}{\mathrm{d}t^2} = f\left(x,\frac{\mathrm{d}x}{\mathrm{d}t},t \right)
$$
Solving these equations numerically is actually quite simple due to the following trick. We can define a new quantity $y$ and rewrite the 2nd-order ODE
$$
\frac{\mathrm{d}x}{\mathrm{d}t} = y, \qquad \frac{\mathrm{d}y}{\mathrm{d}t} = f(x,y,t)
$$
<b> Comment: </b> First-order equations are quite rare in physics. Many ODEs are of second order or higher. For higher orders we can apply a similar trick. For example, for a third-order equation we would define two additional variables and obtain three simultaneous first-order equations, as illustrated below.
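For instance, a third-order equation
$$
\frac{\mathrm{d}^3x}{\mathrm{d}t^3} = f\left(x,\frac{\mathrm{d}x}{\mathrm{d}t},\frac{\mathrm{d}^2x}{\mathrm{d}t^2},t\right)
$$
becomes, with two additional variables $y$ and $z$,
$$
\frac{\mathrm{d}x}{\mathrm{d}t} = y, \qquad \frac{\mathrm{d}y}{\mathrm{d}t} = z, \qquad \frac{\mathrm{d}z}{\mathrm{d}t} = f(x,y,z,t)
$$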
### Example 1:
We have
$$
\frac{\mathrm{d}^2x}{\mathrm{d}t^2} = 2 \frac{\mathrm{d}x}{\mathrm{d}t} - x^3\mathrm{e}^{4t}
$$
Now we apply our trick
$$
\frac{\mathrm{d}x}{\mathrm{d}t} = y, \qquad \frac{\mathrm{d}y}{\mathrm{d}t} = 2 y - x^3\mathrm{e}^{4t}
$$
We have now two simultaneous first-order ODEs, which we can solve as described above.
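As a small illustration (a sketch, not part of the original lecture code), the corresponding function $\mathbf{f}(\mathbf{r},t)$ for the RK4 solver from above would be:
```python
from numpy import array, exp

def f(r, t):
    x, y = r[0], r[1]
    fx = y
    fy = 2*y - x**3*exp(4*t)
    return array([fx, fy], float)
```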
### Example 2: Nonlinear pendulum
Consider a pendulum with an arm of length $l$ holding a bob of mass $m$:
We ignore friction and assume the arm is massless. The differential equation for the pendulum has the form:
$$
\frac{\mathrm{d}^2\theta}{\mathrm{d}t^2} = - \frac{g}{l}\sin(\theta)
$$
Transform into two first-order equations
$$
\begin{align}
\frac{\mathrm{d}\theta}{\mathrm{d}t} & = \omega \\
\frac{\mathrm{d}\omega}{\mathrm{d}t} &= - \frac{g}{l}\sin(\theta)
\end{align}
$$
The second-order ODE is not easily solved analytically (a small-angle approximation is possible, though). However, solving it numerically on the computer is straightforward.
```python
from numpy import sin, array
g = 9.81
l = 0.1 # Length of arm is 10 cm
def f(r,t):
theta = r[0]
omega = r[1]
ftheta = omega
fomega = -(g/l)*sin(theta)
return array([ftheta,fomega],float)
```
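For completeness, a minimal RK4 driver for this pendulum could look as follows (a sketch; the initial angle, time span, and step size are illustrative choices, not values given in the lecture):
```python
from numpy import array, arange, pi
from matplotlib import pyplot as plt

a, b, N = 0.0, 5.0, 1000
h = (b - a)/N
r = array([pi/2, 0.0], float)     # start at theta = 90 degrees, omega = 0

tpoints = arange(a, b, h)
thetapoints = []
for t in tpoints:
    thetapoints.append(r[0])
    k1 = h*f(r, t)
    k2 = h*f(r + 0.5*k1, t + 0.5*h)
    k3 = h*f(r + 0.5*k2, t + 0.5*h)
    k4 = h*f(r + k3, t + h)
    r += (k1 + 2*k2 + 2*k3 + k4)/6

plt.plot(tpoints, thetapoints)
plt.xlabel('t'); plt.ylabel(r'$\theta$')
```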
### Solving simultaneous second-order ODEs
The trick works also for simultaneous second-order ODEs, e.g., Newton's equation of motion. The general form
is
$$
\frac{\mathrm{d}^2\mathbf{r}}{\mathrm{d}t^2} = \mathbf{f}\left(\mathbf{r},\frac{\mathrm{d}\mathbf{r}}{\mathrm{d}t},t\right)
$$
which can be transformed to
$$
\frac{\mathrm{d}\mathbf{r}}{\mathrm{d}t} = \mathbf{s}, \qquad \frac{\mathrm{d}\mathbf{s}}{\mathrm{d}t} = \mathbf{f}\left(\mathbf{r},\mathbf{s},t\right)
$$
Let's say we start with three simultaneous equations, e.g., Newton's equations of motion, where we have $\mathbf{r} = (x,y,z)$; after applying our trick we end up with six simultaneous first-order equations.
#### Other integrators
- Leapfrog
- Verlet
- (Bulirsch-Stoer)
$\rightarrow$ less widely used than Runge-Kutta methods, but popular for solving Newton's equations of motion
Runge-Kutta methods, in particular RK4, are widely used, but other integrators are more suitable for certain problems. For example, Runge-Kutta methods are rarely used for molecular dynamics. Leapfrog and Verlet are much more common for these calculations. The reasons will be explored in the next lecture.
### Leapfrog method
Scheme comparing RK2 and leapfrog (Figure adapted from "Computational Physics" by Mark Newman)
- RK2:
$$
\begin {align}
x(t+h) &= x(t) + hf\left(x\left(t+\frac{1}{2}h\right),t+\frac{1}{2}h\right)\\
x\left(t+\frac{1}{2}h\right) &= x(t) + \frac{1}{2}hf(x,t)
\end{align}
$$
- Leapfrog (for step > 1):
$$
\begin {align}
x\left(t+\frac{3}{2}h\right) &= x\left(t+\frac{1}{2}h\right) + hf(x(t+h),t+h)\\
x\left(t+2h\right) &= x(t+h) + hf\left(x\left(t+\frac{3}{2}h\right),t+\frac{3}{2}h\right)
\end{align}
$$
The figure above shows a graphical representation of the RK2 method. At each step we calculate the solution at the midpoint and then use this solution to calculate $x(t+h)$.
The leapfrog method is a variant of RK2. We start as with RK2 and make a half step followed by a full step, as depicted in the figure. The difference comes in the next step. Rather than calculating the next midpoint, $x\left(t+\frac{3}{2}h\right)$, from $x(t+h)$, we calculate it from the previous midpoint, which is $x\left(t+\frac{1}{2}h\right)$. Graphically, each step is "leaping" (like a frog) over the previous one. That is where the name is coming from.
### Working equations for Leapfrog method
$$
\begin{align}
x\left(t+h\right) &= x(t) + hf\left(x\left(t+\frac{1}{2}h\right),t+\frac{1}{2}h\right)\\
x\left(t+\frac{3}{2}h\right) &= x\left(t+\frac{1}{2}h\right) + hf(x(t+h),t+h)\\
\end{align}
$$
<b> Comment </b> Your task in exercise 5.3 is to figure out how to implement the Leapfrog method in a computer program. A few hints: Set the initial value for $x(t)$ given by the initial conditions. Then calculate the value for the second point (i.e. the first midpoint, which is $x(t+0.5h)$) as in RK2. You can then use it to calculate $x(t+h)$, which you can then again use to calculate $x(t+1.5h)$. Iterate until $t_{\mathrm{end}}$.
### Working equations for Leapfrog method for simultaneous ODEs
The extension of the formalism is, as for Euler's and RK methods, straightforward
$$
\begin{align}
\mathbf{r}\left(t+h\right) &= \mathbf{r}(t) + h\mathbf{f}\left(\mathbf{r}\left(t+\frac{1}{2}h\right),t+\frac{1}{2}h\right)\\
\mathbf{r}\left(t+\frac{3}{2}h\right) &= \mathbf{r}\left(t+\frac{1}{2}h\right) + h\mathbf{f}(\mathbf{r}(t+h),t+h)\\
\end{align}
$$
|
ad6a20f90f0dfefc4ca5d7d979c64f2940d0d77d
| 163,370 |
ipynb
|
Jupyter Notebook
|
Lecture 5 - Differential equations/lecture_topic5_differential_eq_part1.ipynb
|
hlappal/comp-phys
|
8d78a459bc5849ddf5c6c21d484503136bccccbd
|
[
"MIT"
] | null | null | null |
Lecture 5 - Differential equations/lecture_topic5_differential_eq_part1.ipynb
|
hlappal/comp-phys
|
8d78a459bc5849ddf5c6c21d484503136bccccbd
|
[
"MIT"
] | null | null | null |
Lecture 5 - Differential equations/lecture_topic5_differential_eq_part1.ipynb
|
hlappal/comp-phys
|
8d78a459bc5849ddf5c6c21d484503136bccccbd
|
[
"MIT"
] | null | null | null | 95.426402 | 28,424 | 0.831083 | true | 8,993 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.819893 | 0.7773 | 0.637303 |
__label__eng_Latn
| 0.987227 | 0.318999 |
```python
from sympy import *
from sympy.abc import r,x,y,z
from scipy.integrate import quad, nquad
import matplotlib.pyplot as plt
%matplotlib inline
init_printing()
```
# Energy of the Hydrogen Atom
The variational principle states that a trial wavefunction will have an energy greater than or equal to the ground state energy.
$$\frac{\int \psi H \psi}{ \int \psi^2} \ge E_0$$
First consider the hydrogen atom. Let us use a trial wavefunction that is not the exact ground state.
```python
beta = Symbol('beta')
R_T = exp(-r - beta*r*r)
R_T
```
The Hamiltonian for this system is
$$-\frac{1}{2} \nabla^2 - \frac{1}{r}$$
The first term is the kinetic energy of the electron, and the second term is the Coulomb attraction between the electron and proton.
The first step is to compute the derivative of the trial wavefunction in spherical coordinates
```python
def del_spherical(e, r):
"""Compute Laplacian for expression e with respect to symbol r.
Currently works only with radial dependence"""
t1 = r*r*diff(e, r)
t2 = diff(t1, r)/(r*r)
return simplify(t2)
```
```python
del1 = del_spherical(R_T, r)
```
Construct $\psi H \psi$
```python
H = -1/S(2) * R_T * del1 - R_T*R_T/r
```
```python
simplify(H)
```
The integration occurs in 3D over the electron coordinates. Because the integrand only has a dependence on $r$, it can be converted to spherical coordinates, and reduced to a 1D integral over $r$. (There should be an additional factor of $4 \pi$, but it will cancel since it occurs in the numerator and denominator)
```python
h1 = simplify(r*r*H)
```
Substitute a concrete value for $\beta$.
```python
h2 = h1.subs(beta, 0.1)
```
Perform the integral
```python
num = integrate(h2, (r, 0, oo)).evalf()
num
```
Also construct and integrate the denominator (the normalization).
```python
norm1 = r*r*R_T*R_T
norm2 = norm1.subs(beta, 0.1)
norm3 = simplify(norm2)
```
```python
denom = integrate(norm3, (r, 0, oo)).evalf()
simplify(denom).evalf()
```
```python
E = num/denom
E
```
And, as expected, the energy is greater than the exact ground state energy of -0.5 Hartree.
## Find the minimum energy
Collect all the steps for computing the energy into a single function. Even though this particular integral could be done symbolically, use numerical integration instead.
```python
def compute_energy(R_T, beta_val):
"""Energy given a value for beta"""
# Normalization integrand (denominator)
norm1 = r*r*R_T*R_T
norm2 = norm1.subs(beta, beta_val)
norm3 = simplify(norm2)
# Integrand for the numerator
del1 = del_spherical(R_T, r)
# Construct psi * H * psi
H = -1/S(2) * R_T * del1 - R_T*R_T/r
h1 = simplify(r*r*H)
h2 = h1.subs(beta, beta_val)
lim = 20.0
denom_func = lambdify([r], norm3)
denom_res = quad(denom_func, 0.0, lim)
num_func = lambdify([r], h2)
num_res = quad(num_func, 0.0, lim)
e = num_res[0]/denom_res[0]
return e
```
Now the energy can be computed vs. $\beta$, and we can find the minimum energy. In this case, the minimum occurs at $\beta = 0$, which we know is the exact wavefunction for the hydrogen atom.
```python
energies = []
betas = []
for i in range(10):
beta_val = i*.01
e = compute_energy(R_T, beta_val)
betas.append(beta_val)
energies.append(e)
plt.plot(betas, energies)
```
```python
```
|
2166ce9847e281a2a483c8efa4b2e05cb3914019
| 32,621 |
ipynb
|
Jupyter Notebook
|
Variational/Variational_Hydrogen.ipynb
|
QMCPACK/qmc_algorithms
|
015fd1973e94f98662149418adc6b06dcd78946d
|
[
"MIT"
] | 3 |
2018-02-06T06:15:19.000Z
|
2019-11-26T23:54:53.000Z
|
Variational/Variational_Hydrogen.ipynb
|
chrinide/qmc_algorithms
|
015fd1973e94f98662149418adc6b06dcd78946d
|
[
"MIT"
] | 1 |
2017-03-23T17:17:04.000Z
|
2017-03-23T17:17:04.000Z
|
Variational/Variational_Hydrogen.ipynb
|
chrinide/qmc_algorithms
|
015fd1973e94f98662149418adc6b06dcd78946d
|
[
"MIT"
] | 4 |
2016-06-30T21:29:32.000Z
|
2019-10-22T16:10:03.000Z
| 78.604819 | 12,516 | 0.824346 | true | 982 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.91611 | 0.874077 | 0.800751 |
__label__eng_Latn
| 0.971514 | 0.698744 |
## Variational Inference: Ising Model
This notebook focuses on Variational Inference (VI) for the Ising model in application to binary image de-noising. The Ising model is an example of a Markov Random Field (MRF) and it originated from statistical physics. The Ising model assumes that we have a grid of nodes, where each node can be in one of two states. In the case of binary images, you can think of each node as being a pixel with a black or white color. The state of each node depends on the neighboring nodes through interaction potentials. In the case of images, this translates to a smoothness constraint, i.e. a pixel prefers to be of the same color as the neighboring pixels. In the image denoising problem, we assume that we have a 2-D grid of noisy pixel observations of an underlying true image and we would like to recover the true image. Thus, we can model the image as a grid:
In the figure above, the shaded nodes are the noisy observations $y_i$ of binary latent variables $x_i \in \{-1, +1\}$. We can write down the joint distribution as follows:
\begin{equation}
p(x,y) = p(x)p(y|x) = \prod_{(s,t)\in E} \Psi_{st}(x_s, x_t) \prod_{i=1}^{n}p(y_i|x_i) = \prod_{(s,t)\in E} \exp \{x_s w_{st} x_t \} \prod_{i=1}^{N} N(y_i|x_i, \sigma^2)
\end{equation}
where the interaction potentials are represented by $\Psi_{st}$ for every pair of nodes $x_s$ and $x_t$ in a set of edges $E$ and the observations $y_i$ are Gaussian with mean $x_i$ and variance $\sigma^2$. Here, $w_{st}$ is the coupling strength and assumed to be constant and equal to $J>0$ indicating a preference for the same state as neighbors (i.e. potential $\Psi(x_s, x_t) = \exp\{x_s J x_t\}$ is higher when $x_s$ and $x_t$ are both either $+1$ or $-1$).
The basic idea behind variational inference is to choose an approximating disribution $q(x)$ which is close to the original distribution $p(x)$ where the distance is measured by KL divergence:
\begin{equation}
KL(q||p) = \sum_x q(x) \log \frac{q(x)}{p(x)}
\end{equation}
This makes inference into an optimization problem in which the objective is to minimize KL divergence or maximize the Evidence Lower BOund (ELBO). We can derive the ELBO as follows:
\begin{equation}
\log p(y) = \log \sum_{x} p(x,y) = \log \sum_x \frac{q(x)}{q(x)}p(x,y) = \log E_{q(x)}\big[\frac{p(x,y)}{q(x)} \big] \geq E_{q(x)}\big[\log \frac{p(x,y)}{q(x)} \big] = E_{q(x)}\big[\log p(x,y) \big] - E_{q(x)}\big[\log q(x) \big]
\end{equation}
In application to the Ising model, we have:
\begin{equation}
\mathrm{ELBO} = E_{q(x)}\big[\log p(x,y) \big] - E_{q(x)}\big[\log q(x) \big] = E_{q(x)}\big[\sum_{(s,t)\in E}x_s w_{st}x_t + \sum_{i=1}^{n} \log N(x_i, \sigma^2) \big] - \sum_{i=1}^{n} E_{q_i(x)}\big[\log q_i(x) \big]
\end{equation}
In *mean-field* variational inference, we assume a *fully-factored* approximation q(x):
\begin{equation}
q(x) = \prod_{i=1}^{n} q(x_i; \mu_i)
\end{equation}
It can be shown [1] that $q(x_i;\mu_i)$ that minimizes the KL divergence is given by:
\begin{equation}
q_i(x_i) = \frac{1}{Z_i}\exp \big[E_{-q_i}\{\log p(x) \} \big]
\end{equation}
where $E_{-q_i}$ denotes an expectation over every $q_j$ except for $j=i$. To compute $q_i(x_i)$, we only care about the terms that involve $x_i$, i.e. we can isolate them as follows:
\begin{equation}
E_{-q_i}\{\log p(x)\} = E_{-q_i}\{x_i \sum_{j\in N(i)} w_{ij}x_j + \log N(x_i,\sigma^2) + \mathrm{const} \} = x_i \sum_{j\in N(i)}J\times \mu_j + \log N(x_i, \sigma^2) + \mathrm{const}
\end{equation}
where $N(i)$ denotes the neighbors of node $i$ and $\mu_j$ is the mean of a binary random variable:
\begin{equation}
\mu_j = E_{q_j}[x_j] = q_j(x_j=+1)\times (+1) + q_j(x_j=-1)\times (-1)
\end{equation}
In order to compute this mean, we need to know the values of $q_j(x_j=+1)$ and $q_j(x_j=-1)$. Let $m_i = \sum_{j\in N(i)} w_{ij}\mu_j$ be the mean value of neighbors and let $L_{i}^{+} = N(x_i=+1; \sigma^2)$ and $L_{i}^{-} = N(x_i=-1; \sigma^2)$, then we can compute the mean as follows:
\begin{equation}
q_i(x_i=+1) = \frac{\exp\{m_i + L_{i}^{+}\}}{\exp\{m_i + L_{i}^{+}\} + \exp\{-m_i + L_{i}^{-}\}} = \frac{1}{1+\exp\{-2m_i+L_{i}^{-}-L_{i}^{+}\}} = \frac{1}{1+\exp\{-2 a_i\}} = \sigma(2a_i)
\end{equation}
\begin{equation}
q_i(x_i=-1) = 1 - q_i(x_i=+1) = 1 - \sigma(2a_i) = \sigma(-2a_i)
\end{equation}
\begin{equation}
\mu_i = E_{q_i}[x_i] = \sigma(2a_i) - \sigma(-2a_i) = \tanh(a_i)
\end{equation}
where $a_i = m_i + 1/2\big(L_{i}^{+} - L_{i}^{-}\big)$. In other words, our mean-field variational updates of the parameters $\mu_i$ at iteration $k$ are computed as follows:
\begin{equation}
\mu_{i}^{(k)} = \tanh \bigg(\sum_{j\in N(i)}w_{ij}\mu_{j}^{(k-1)} + \frac{1}{2}\bigg[\log \frac{N(x_i=+1, \sigma^2)}{N(x_i=-1, \sigma^2)} \bigg] \bigg) \times \lambda + (1-\lambda)\times \mu_{i}^{(k-1)}
\end{equation}
where we added a learning rate parameter $\lambda$. The figure below shows the parametric form of our mean-field approximation of the Ising model:
Now that we derived the variational updates and the ELBO, let's implement this in Python in application to binary image denoising!
```python
%matplotlib inline
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from PIL import Image
from tqdm import tqdm
from scipy.special import expit as sigmoid
from scipy.stats import multivariate_normal
np.random.seed(0)
sns.set_style('whitegrid')
```
Let's load a grayscale (single channel) image, add Gaussian noise and binarize it based on mean threshold. We can then define variational inference parameters such as the coupling strength, noise level, smoothing rate and max number of iterations:
```python
#load data
print "loading data..."
data = Image.open('./figures/bayes.bmp')
img = np.double(data)
img_mean = np.mean(img)
img_binary = +1*(img>img_mean) + -1*(img 0$, we were able to find the mean parameters for our approximating distribution $q_i(x_i)$ that maximized the ELBO objective and resulted in mostly denoised image. We can visualize the ELBO objective as a function of iterations as follows:
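The original cell also added Gaussian noise to the binary image and ran the mean-field update loop, but that part did not survive; below is a minimal sketch of such a loop that fills the `ELBO` and `Hx_mean` lists plotted in the following cells. The values of `J`, `sigma`, `rate`, and `max_iter` are illustrative assumptions, not the original settings, and the ELBO is only a crude estimate (constant terms dropped).
```python
# Minimal mean-field sketch (J, sigma, rate, max_iter are illustrative assumptions)
J, sigma, rate, max_iter = 1.0, 2.0, 0.5, 15

y = img_binary + sigma*np.random.randn(*img_binary.shape)  # noisy observations
rows, cols = y.shape
mu = np.zeros((rows, cols))            # mean-field means mu_i = E_q[x_i]

# half log-likelihood ratio 0.5*(L+ - L-); for the Gaussian likelihood this is y/sigma^2
logodds = y / sigma**2

ELBO, Hx_mean = [], []
for it in range(max_iter):
    # m_i = sum_{j in N(i)} J*mu_j, with zero padding at the image borders
    m = np.zeros((rows, cols))
    m[1:, :] += J*mu[:-1, :]
    m[:-1, :] += J*mu[1:, :]
    m[:, 1:] += J*mu[:, :-1]
    m[:, :-1] += J*mu[:, 1:]
    a = m + logodds
    mu = (1 - rate)*mu + rate*np.tanh(a)       # damped mean-field update
    qp = sigmoid(2*a)                           # q_i(x_i = +1)
    Hx = -qp*np.log(qp + 1e-10) - (1 - qp)*np.log(1 - qp + 1e-10)
    Hx_mean.append(Hx.mean())
    # crude ELBO estimate: coupling term + expected log-likelihood + entropy
    pairwise = J*np.sum(mu[:-1, :]*mu[1:, :]) + J*np.sum(mu[:, :-1]*mu[:, 1:])
    loglik = np.sum(qp*(-(y - 1)**2) + (1 - qp)*(-(y + 1)**2))/(2*sigma**2)
    ELBO.append(pairwise + loglik + np.sum(Hx))

plt.figure(figsize=(8, 4))
plt.subplot(121); plt.imshow(y, cmap='gray'); plt.title('noisy observations')
plt.subplot(122); plt.imshow(mu, cmap='gray'); plt.title('mean-field means')
```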
```python
plt.figure()
plt.plot(ELBO, color='b', lw=2.0, label='ELBO')
plt.title('Variational Inference for Ising Model')
plt.xlabel('iterations'); plt.ylabel('ELBO objective')
plt.legend(loc='upper left')
plt.savefig('./figures/ising_vi_elbo.png')
```
Notice that the ELBO is monotonically increasing and flattening out after about 10 iterations. To get further insight into de-noising, we can plot the average entropy $\frac{1}{n}\sum_{i=1}^{n}H_q(x_i)$. We expect early entropy to be high due to random initialization, however, as the number of iterations increases, mean-field updates converge on binary values of $x_i$ that are consistent with observations and the neighbors resulting in a decrease in average entropy:
```python
plt.figure()
plt.plot(Hx_mean, color='b', lw=2.0, label='Avg Entropy')
plt.title('Variational Inference for Ising Model')
plt.xlabel('iterations'); plt.ylabel('average entropy')
plt.legend(loc="upper right")
plt.savefig('./figures/ising_vi_avg_entropy.png')
```
The 2-D Ising model can be extended in multiple ways, for example: 3-D grids and K-states per node (aka Potts model).
### References
[1] K. Murphy, "Machine Learning: A Probabilistic Perspective", The MIT Press, 2012
[2] E. Sudderth, "CS242: Probabilistic Graphical Models", http://cs.brown.edu/courses/cs242/lectures/
|
de844bc9d4ca7bf7f4e623f57b914272e009871d
| 503,997 |
ipynb
|
Jupyter Notebook
|
chp02/mean_field_mrf.ipynb
|
gerket/experiments_with_python
|
5dd6dbd69deaaa318bfa7d2c3c9f7fae6220c460
|
[
"MIT"
] | 382 |
2017-08-22T13:14:54.000Z
|
2022-03-28T17:56:59.000Z
|
chp02/mean_field_mrf.ipynb
|
gerket/experiments_with_python
|
5dd6dbd69deaaa318bfa7d2c3c9f7fae6220c460
|
[
"MIT"
] | 4 |
2017-07-31T00:52:36.000Z
|
2018-10-01T14:29:51.000Z
|
chp02/mean_field_mrf.ipynb
|
gerket/experiments_with_python
|
5dd6dbd69deaaa318bfa7d2c3c9f7fae6220c460
|
[
"MIT"
] | 280 |
2017-08-23T08:08:32.000Z
|
2022-03-09T07:04:01.000Z
| 936.797398 | 380,880 | 0.942837 | true | 2,382 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.935347 | 0.877477 | 0.820745 |
__label__eng_Latn
| 0.95954 | 0.745198 |
# !!! D . R . A . F . T !!!
# Lightness
[Lightness](http://en.wikipedia.org/wiki/Lightness) is defined as the brightness of an area judged relative to the brightness of a similarly illuminated area that appears to be white or highly transmitting. <a name="back_reference_1"></a><a href="#reference_1">[1]</a>
[Colour](https://github.com/colour-science/colour/) defines the following *Lightness* computation methods:
```python
import colour
colour.utilities.filter_warnings(True, False)
sorted(colour.LIGHTNESS_METHODS.keys())
```
[u'CIE 1976',
u'Fairchild 2010',
u'Fairchild 2011',
u'Glasser 1958',
u'Lstar1976',
u'Wyszecki 1963']
> Note: *'Lstar1976'* is a convenient aliases for *'CIE 1976'*.
## Glasser, Mckinney, Reilly and Schnelle (1958) Method
Glasser, Mckinney, Reilly and Schnelle (1958) described a visually uniform colour coordinate system close to *Adams* chromatic-value system but where the quintic-parabola function has been replaced with a cube-root function: the *Cube-Root Color Coordinate System*.
*Lightness* $L$ in the *Cube-Root Color Coordinate System* is calculated as follows: <a name="back_reference_2"></a><a href="#reference_2">[2]</a>
$$
\begin{equation}
L=25.29Y^{1/3}-18.38
\end{equation}
$$
where $Y$ defines the *luminance* in domain [0, 100].
The *colour.lightness_Glasser1958* definition is used to compute *Lightness* $L$:
```python
colour.colorimetry.lightness_Glasser1958(10.08)
```
36.250562645752595
> Note: Input *luminance* $Y$ is in domain [0, 100], output *Lightness* $L$ is in domain [0, 100].
The *colour.lightness* definition is implemented as a wrapper for various lightness computation methods:
```python
colour.lightness(10.08, method='Glasser 1958')
```
36.250562645752595
```python
%matplotlib inline
```
```python
from colour.plotting import *
colour_plotting_defaults()
# Plotting the "Glasser (1958)" "Lightness" function.
single_lightness_function_plot('Glasser 1958')
```
## Wyszecki (1963) Method
Wyszecki (1963) recommended the following cube root function to compute *Lightness* $W$ as a function of the luminance factor $Y$ within the practically important range of $1.0\%<Y<98\%$: <a name="back_reference_3"></a><a href="#reference_3">[3]</a>
$$
\begin{equation}
W=25Y^{1/3}-17
\end{equation}
$$
The *colour.lightness_Wyszecki1963* definition is used to compute *Lightness* $W$:
```python
colour.colorimetry.lightness_Wyszecki1963(10.08)
```
37.004114912764535
> Note: Input *luminance* $Y$ is in domain [0, 100], output *Lightness* $W$ is in domain [0, 100].
Using the *colour.lightness* wrapper definition:
```python
colour.lightness(10.08, method='Wyszecki 1963')
```
37.004114912764535
```python
# Plotting the "Wyszecki (1963)" "Lightness" function.
single_lightness_function_plot('Wyszecki 1963')
```
## CIE 1976 Method
The *CIE $L^*a^*b^*$* approximately uniform colourspace defined in 1976 computes the *Lightness* $L^*$ quantity as follows: <a name="back_reference_4"></a><a href="#reference_4">[4]</a>
$$
\begin{equation}
L^*=\begin{cases}116\biggl(\cfrac{Y}{Y_n}\biggr)^{1/3}-16 & for\ \cfrac{Y}{Y_n}>\epsilon\\
\kappa\,\biggl(\cfrac{Y}{Y_n}\biggr) & for\ \cfrac{Y}{Y_n}\leq\epsilon
\end{cases}
\end{equation}
$$
where $Y_n$ is the reference white *luminance*.
with
$$
\begin{equation}
\begin{aligned}
\epsilon&\ =\begin{cases}0.008856 & Actual\ CIE\ Standard\\
216\ /\ 24389 & Intent\ of\ the\ CIE\ Standard
\end{cases}\\
\kappa&\ =\begin{cases}903.3 & Actual\ CIE\ Standard\\
24389\ /\ 27 & Intent\ of\ the\ CIE\ Standard
\end{cases}
\end{aligned}
\end{equation}
$$
The original $\epsilon$ and $\kappa$ constants values have been shown to exhibit discontinuity at the junction point of the two functions grafted together to create the *Lightness* $L^*$ function. <a name="back_reference_5"></a><a href="#reference_5">[5]</a>
[Colour](https://github.com/colour-science/colour/) uses the rational values instead of the decimal values for these constants.
> See Also: The [CIE $L^*a^*b^*$ Colourspace](../models/cie_lab.ipynb) notebook for in-depth informations about the *CIE $L^*a^*b^*$* colourspace.
The *colour.lightness_CIE1976* definition is used to compute *Lightness* $L^*$:
```python
colour.colorimetry.lightness_CIE1976(10.08)
```
37.985629097653039
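For comparison, here is a direct NumPy implementation of the piecewise definition above using the rational constants (a sketch for illustration only; it is not part of the Colour API):
```python
import numpy as np

def lightness_CIE1976_manual(Y, Y_n=100.0):
    """CIE 1976 Lightness L* from luminance Y, both in domain [0, 100]."""
    epsilon = 216.0 / 24389.0
    kappa = 24389.0 / 27.0
    ratio = np.asarray(Y) / Y_n
    return np.where(ratio > epsilon,
                    116.0 * ratio ** (1.0 / 3.0) - 16.0,
                    kappa * ratio)

lightness_CIE1976_manual(10.08)   # ~37.9856, matching the value above
```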
> Note: Input *luminance* $Y$ and $Y_n$ are in domain [0, 100], output *Lightness* $L^*$ is in domain [0, 100].
Using the *colour.lightness* wrapper definition:
```python
colour.lightness(10.08)
```
37.985629097653039
```python
colour.lightness(10.08, method='CIE 1976', Y_n=95)
```
38.916598757092821
```python
colour.lightness(10.08, method='Lstar1976', Y_n=95)
```
38.916598757092821
```python
# Plotting the "CIE 1976" "Lightness" function.
single_lightness_function_plot('CIE 1976')
```
```python
# Plotting multiple "Lightness" functions for comparison.
multi_lightness_function_plot(['CIE 1976', 'Glasser 1958'])
```
## Fairchild and Wyble (2010) Method
```python
colour.colorimetry.lightness_Fairchild2010(10.08 / 100, 1.836)
```
24.902290269546651
```python
colour.lightness(10.08 / 100, method='Fairchild 2010', epsilon=1.836)
```
24.902290269546651
```python
# Plotting the "Fairchild and Wyble (2010)" "Lightness" function.
single_lightness_function_plot('Fairchild 2010')
```
```python
# Plotting multiple "Lightness" functions for comparison.
multi_lightness_function_plot(['CIE 1976', 'Fairchild 2010'])
```
## Fairchild and Chen (2011) Method
```python
colour.colorimetry.lightness_Fairchild2011(10.08 / 100, 0.710)
```
26.459509817572265
```python
colour.lightness(10.08 / 100, method='Fairchild 2011', epsilon=0.710)
```
26.459509817572265
```python
# Plotting the "Fairchild and Chen (2011)" "Lightness" function.
single_lightness_function_plot('Fairchild 2011')
```
```python
# Plotting multiple "Lightness" functions for comparison.
multi_lightness_function_plot(['CIE 1976', 'Fairchild 2011'])
```
## Bibliography
1. <a href="#back_reference_1">^<a> <a name="reference_1"></a>CIE. (n.d.). 117-680 lightness (of a related colour). Retrieved July 09, 2014, from http://eilv.cie.co.at/term/680
2. <a href="#back_reference_2">^<a> <a name="reference_2"></a>Glasser, L. G., McKinney, A. H., Reilly, C. D., & Schnelle, P. D. (1958). Cube-Root Color Coordinate System. J. Opt. Soc. Am., 48(10), 736–740. doi:10.1364/JOSA.48.000736
3. <a href="#back_reference_3">^<a> <a name="reference_3"></a>Wyszecki, G. (1963). Proposal for a New Color-Difference Formula. J. Opt. Soc. Am., 53(11), 1318–1319. doi:10.1364/JOSA.53.001318
4. <a href="#back_reference_4">^<a> <a name="reference_4"></a>Wyszecki, G., & Stiles, W. S. (2000). CIE 1976 (L\*u\*v\*)-Space and Color-Difference Formula. In *Color Science: Concepts and Methods, Quantitative Data and Formulae* (p. 167). Wiley. ISBN:978-0471399186
5. <a href="#back_reference_5">^<a> <a name="reference_5"></a>Lindbloom, B. (2003). A Continuity Study of the CIE L* Function. Retrieved February 24, 2014, from http://brucelindbloom.com/LContinuity.html
|
17f2f0656a1b237f979e611f42635ae50ee86136
| 433,163 |
ipynb
|
Jupyter Notebook
|
notebooks/colorimetry/lightness.ipynb
|
Legendin/colour-notebooks
|
357b64e60e24468c88a7d6789003a6283c809c01
|
[
"BSD-3-Clause"
] | null | null | null |
notebooks/colorimetry/lightness.ipynb
|
Legendin/colour-notebooks
|
357b64e60e24468c88a7d6789003a6283c809c01
|
[
"BSD-3-Clause"
] | null | null | null |
notebooks/colorimetry/lightness.ipynb
|
Legendin/colour-notebooks
|
357b64e60e24468c88a7d6789003a6283c809c01
|
[
"BSD-3-Clause"
] | null | null | null | 657.30349 | 68,126 | 0.939674 | true | 2,345 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.884039 | 0.812867 | 0.718607 |
__label__eng_Latn
| 0.620185 | 0.507896 |
```python
# In mathematics, the exponential integral Ei is a special function on the complex plane.
# It is defined as one particular definite integral of the ratio between an exponential function and its argument.
from sympy import *
from sympy import E
from sympy.abc import x,omega,u,m,g
f = lambda x: E**(E**x)
expr = f(x)
Eq(omega,f(x))
```
$\displaystyle \omega = e^{e^{x}}$
```python
dexpr = Derivative(expr)
Eq(dexpr,dexpr.doit())
```
$\displaystyle \frac{d}{d x} e^{e^{x}} = e^{x} e^{e^{x}}$
```python
dexprs = Eq(dexpr,dexpr.doit()).rhs.simplify()
Eq(dexpr.doit(),dexprs)
```
$\displaystyle e^{x} e^{e^{x}} = e^{x + e^{x}}$
```python
Eq(Integral(expr),Derivative(expr)) # is this true in terms of density? (a very thin string = large object in essence)
# trying to show that derivative/integral are inverse operations
```
$\displaystyle \int e^{e^{x}}\, dx = \frac{d}{d x} e^{e^{x}}$
```python
# if the above is true then the below expression must also be true
Eq(Integral(expr),dexprs)
#https://en.wikipedia.org/wiki/Exponential_integral
```
$\displaystyle \int e^{e^{x}}\, dx = e^{x + e^{x}}$
```python
Eq(Integral(expr),Integral(expr).doit()) # .doit() is a method that evaluate objects that are not evaluated by default.
```
$\displaystyle \int e^{e^{x}}\, dx = \operatorname{Ei}{\left(e^{x} \right)}$
```python
# therefore
Eq(Integral(expr).doit(),dexprs) # reads as "the exponential integral of e**x equals e**(x+e**x)"
```
$\displaystyle \operatorname{Ei}{\left(e^{x} \right)} = e^{x + e^{x}}$
```python
# and
Eq(Derivative(Integral(expr).doit()),Derivative(Integral(expr).doit()).doit())
```
$\displaystyle \frac{d}{d x} \operatorname{Ei}{\left(e^{x} \right)} = e^{e^{x}}$
```python
# here we succeeded in calculating some definite exponential integrals. The calculator got too slow around m>4
u = Integral(Ei(exp(x)),(x,0,m))
Eq(g,u)
```
$\displaystyle g = \int\limits_{0}^{m} \operatorname{Ei}{\left(e^{x} \right)}\, dx$
```python
import timeit # This returns the time it takes to execute the main statement a number of times, measured in seconds as a float.
values = []
total_time = []
from collections import defaultdict
Points = defaultdict(list)
for i in range (1,7):
start0 = timeit.default_timer()
print(u.subs(m,i).evalf(),"-------------------------------------------------------------when m = ",i)
stop0 = timeit.default_timer()
values.append(Integral(Ei(exp(x)),(x,0,i)).evalf())
print(' Compute Time: ', stop0 - start0,'Seconds')
print(' ')
total_time.append(stop0 - start0)
Points[total_time[i-1]].append(i)
print(' ')
print(' ')
print(" Average Speed =",sum(total_time)/len(values),"Seconds ")
print(' ')
print(' ')
print(' ')
print(Points)
```
4.15775030018700 -------------------------------------------------------------when m = 1
Compute Time: 0.4002557999999681 Seconds
58.4176824523972 -------------------------------------------------------------when m = 2
Compute Time: 0.6176943000000392 Seconds
1552871.13084787 -------------------------------------------------------------when m = 3
Compute Time: 0.8336731999999074 Seconds
1.82897127952040e+20 -------------------------------------------------------------when m = 4
Compute Time: 6.680456499999991 Seconds
1.32124004569319e+60 -------------------------------------------------------------when m = 5
Compute Time: 17.255297300000052 Seconds
0.e+170 -------------------------------------------------------------when m = 6
Compute Time: 14.982998400000042 Seconds
Average Speed = 6.795062583333333 Seconds
defaultdict(<class 'list'>, {0.4002557999999681: [1], 0.6176943000000392: [2], 0.8336731999999074: [3], 6.680456499999991: [4], 17.255297300000052: [5], 14.982998400000042: [6]})
```python
from IPython.display import Image
from IPython.core.display import HTML
from sympy import *
Image(url= "https://i.imgur.com/8HOr9wu.png")
```
|
ce1fa9b57a77383a7d08a37e2e71d1394d2728e9
| 10,159 |
ipynb
|
Jupyter Notebook
|
Personal_Projects/Exponential_Integrals/Exponential Integrals Clocktested.ipynb
|
NSC9/Sample_of_Work
|
8f8160fbf0aa4fd514d4a5046668a194997aade6
|
[
"MIT"
] | null | null | null |
Personal_Projects/Exponential_Integrals/Exponential Integrals Clocktested.ipynb
|
NSC9/Sample_of_Work
|
8f8160fbf0aa4fd514d4a5046668a194997aade6
|
[
"MIT"
] | null | null | null |
Personal_Projects/Exponential_Integrals/Exponential Integrals Clocktested.ipynb
|
NSC9/Sample_of_Work
|
8f8160fbf0aa4fd514d4a5046668a194997aade6
|
[
"MIT"
] | null | null | null | 28.94302 | 188 | 0.423959 | true | 1,207 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.90599 | 0.731059 | 0.662332 |
__label__eng_Latn
| 0.503814 | 0.377149 |
# Optimizer tweaks
```python
%load_ext autoreload
%autoreload 2
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
%matplotlib inline
```
```python
#export
from exp.nb_08 import *
```
```python
listify??
```
## Imagenette data
We grab the data from the previous notebook.
```python
path = datasets.untar_data(datasets.URLs.IMAGENETTE_160)
```
```python
tfms = [make_rgb, ResizeFixed(128), to_byte_tensor, to_float_tensor]
bs=128
il = ImageList.from_files(path, tfms=tfms)
sd = SplitData.split_by_func(il, partial(grandparent_splitter, valid_name='val'))
ll = label_by_func(sd, parent_labeler, proc_y=CategoryProcessor())
data = ll.to_databunch(bs, c_in=3, c_out=10, num_workers=4) # RGB 3 in_channels, 10 output channels, 10 object classes
```
Then a model:
```python
nfs = [32,64,128,256]
```
```python
cbfs = [partial(AvgStatsCallback,accuracy), CudaCallback,
partial(BatchTransformXCallback, norm_imagenette)]
```
This is the baseline of training with vanilla SGD.
`get_learn_run?` shows its signature: `get_learn_run(nfs, data, lr, layer, cbs=None, opt_func=None, **kwargs)`
```python
learn,run = get_learn_run(nfs, data, 0.4, conv_layer, cbs=cbfs)
```
The `learn` object has `data`, `loss_func`, `opt`, and `model` attributes.
```python
# learn.model
learn.loss_func
```
<function torch.nn.functional.cross_entropy(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean')>
```python
run.fit(1, learn)
```
train: [1.7890693163971614, tensor(0.3789, device='cuda:0')]
valid: [1.8528590087890624, tensor(0.4120, device='cuda:0')]
## Refining the optimizer
In PyTorch, the base optimizer in `torch.optim` is just a dictionary that stores the hyper-parameters and references to the parameters of the model we want to train in parameter groups (different groups can have different learning rates/momentum/weight decay... which is what lets us do discriminative learning rates).
It contains a method `step` that will update our parameters with the gradients and a method `zero_grad` to detach and zero the gradients of all our parameters.
We build the equivalent from scratch, only ours will be more flexible. In our implementation, the step function loops over all the parameters to execute the step using stepper functions that we have to provide when initializing the optimizer.
```python
class Optimizer():
def __init__(self, params, steppers, **defaults):
# might be a generator
self.param_groups = list(params)
# ensure params is a list of lists
if not isinstance(self.param_groups[0], list): self.param_groups = [self.param_groups]
self.hypers = [{**defaults} for p in self.param_groups]
self.steppers = listify(steppers)
def grad_params(self):
return [(p,hyper) for pg,hyper in zip(self.param_groups,self.hypers)
for p in pg if p.grad is not None]
def zero_grad(self):
for p,hyper in self.grad_params():
p.grad.detach_()
p.grad.zero_()
def step(self):
for p,hyper in self.grad_params(): compose(p, self.steppers, **hyper)
```
```python
compose??
```
To do basic SGD, this is what a step looks like:
```python
a.add??
```
```python
a = tensor([1,2,3])
a.add_(2, 10) # adds 2*10 = 20 to each element: add_(scalar, other) computes a += scalar*other
```
tensor([21, 22, 23])
```python
#export
def sgd_step(p, lr, **kwargs):
p.data.add_(-lr, p.grad.data)
return p
```
```python
opt_func = partial(Optimizer, steppers=[sgd_step])
```
Now that we have changed the optimizer, we will need to adjust the callbacks that were using properties from the PyTorch optimizer: in particular the hyper-parameters are in the list of dictionaries `opt.hypers` (PyTorch has everything in the the list of param groups).
```python
#export
class Recorder(Callback):
def begin_fit(self): self.lrs,self.losses = [],[]
def after_batch(self):
if not self.in_train: return
self.lrs.append(self.opt.hypers[-1]['lr'])
self.losses.append(self.loss.detach().cpu())
def plot_lr (self): plt.plot(self.lrs)
def plot_loss(self): plt.plot(self.losses)
def plot(self, skip_last=0):
losses = [o.item() for o in self.losses]
n = len(losses)-skip_last
plt.xscale('log')
plt.plot(self.lrs[:n], losses[:n])
class ParamScheduler(Callback):
_order=1
def __init__(self, pname, sched_funcs):
self.pname,self.sched_funcs = pname,listify(sched_funcs)
def begin_batch(self):
if not self.in_train: return
fs = self.sched_funcs
if len(fs)==1: fs = fs*len(self.opt.param_groups)
pos = self.n_epochs/self.epochs
for f,h in zip(fs,self.opt.hypers): h[self.pname] = f(pos)
class LR_Find(Callback):
_order=1
def __init__(self, max_iter=100, min_lr=1e-6, max_lr=10):
self.max_iter,self.min_lr,self.max_lr = max_iter,min_lr,max_lr
self.best_loss = 1e9
def begin_batch(self):
if not self.in_train: return
pos = self.n_iter/self.max_iter
lr = self.min_lr * (self.max_lr/self.min_lr) ** pos
for pg in self.opt.hypers: pg['lr'] = lr
def after_step(self):
if self.n_iter>=self.max_iter or self.loss>self.best_loss*10:
raise CancelTrainException()
if self.loss < self.best_loss: self.best_loss = self.loss
```
So let's check we didn't break anything and that recorder and param scheduler work properly.
```python
sched = combine_scheds([0.3, 0.7], [sched_cos(0.3, 0.6), sched_cos(0.6, 0.2)])
```
```python
cbfs = [partial(AvgStatsCallback,accuracy),
CudaCallback, Recorder,
partial(ParamScheduler, 'lr', sched)]
```
```python
learn,run = get_learn_run(nfs, data, 0.4, conv_layer, cbs=cbfs, opt_func=opt_func)
```
```python
%time run.fit(1, learn)
```
train: [1.7651078626686831, tensor(0.3931, device='cuda:0')]
valid: [1.3783167724609375, tensor(0.5440, device='cuda:0')]
CPU times: user 4.74 s, sys: 1.37 s, total: 6.11 s
Wall time: 11.2 s
```python
run.recorder.plot_loss()
```
```python
run.recorder.plot_lr()
```
## Weight decay
By letting our model learn arbitrarily large parameters, it might fit all the data points in the training set with an over-complex function that has very sharp changes, which will lead to overfitting.
Weight decay comes from the idea of L2 regularization, which consists in adding to your loss function the sum of all the weights squared. Why do that? Because when we compute the gradients, it will add a contribution to them that will encourage the weights to be as small as possible.
Preventing our weights from growing too much is going to hinder the training of the model, but it will yield a state where it generalizes better. Going back to the theory a little bit, weight decay (or just `wd`) is a parameter that controls that sum of squares we add to our loss:
``` python
loss_with_wd = loss + (wd/2) * (weights**2).sum()
```
In practice though, it would be very inefficient (and maybe numerically unstable) to compute that big sum and add it to the loss. If you remember a little bit of high school math, the derivative of `p**2` with respect to `p` is `2*p`. So adding that big sum to our loss is exactly the same as doing:
``` python
weight.grad += wd * weight
```
for every weight in our model, which in the case of vanilla SGD is equivalent to updating the parameters with:
``` python
weight = weight - lr*(weight.grad + wd*weight)
```
This technique is called "weight decay", as each weight is decayed by a factor `lr * wd`, as it's shown in this last formula.
This only works for standard SGD, as we have seen that with momentum, RMSProp and Adam, the update has some additional formulas around the gradient. In those cases, the formula that comes from L2 regularization:
``` python
weight.grad += wd * weight
```
is different than weight decay
``` python
new_weight = weight - lr * weight.grad - lr * wd * weight
```
Most libraries use the first one, but as it was pointed out in [Decoupled Weight Regularization](https://arxiv.org/pdf/1711.05101.pdf) by Ilya Loshchilov and Frank Hutter, it is better to use the second one with the Adam optimizer, which is why fastai made it its default.
Weight decay is subtracting `lr*wd*weight` from the weights. We need this function to have an attribute `_defaults` so that we are sure there is a hyper-parameter of the same name in our `Optimizer`.
```python
#export
def weight_decay(p, lr, wd, **kwargs):
p.data.mul_(1 - lr*wd)
return p
weight_decay._defaults = dict(wd=0.)
```
L2 regularization is adding `wd*weight` to the gradients.
```python
#export
def l2_reg(p, lr, wd, **kwargs):
p.grad.data.add_(wd, p.data)
return p
l2_reg._defaults = dict(wd=0.)
```
Let's allow steppers to add to our `defaults` (which are the default values of all the hyper-parameters). This helper function adds to `dest` the key/values it finds while going through `os` and applying `f`, for any key not already present in `dest`.
```python
#export
def maybe_update(os, dest, f):
for o in os:
for k,v in f(o).items():
if k not in dest: dest[k] = v
def get_defaults(d): return getattr(d,'_defaults',{})
```
This is the same as before, we just take the default values of the steppers when none are provided in the kwargs.
```python
#export
class Optimizer():
def __init__(self, params, steppers, **defaults):
self.steppers = listify(steppers)
maybe_update(self.steppers, defaults, get_defaults)
# might be a generator
self.param_groups = list(params)
# ensure params is a list of lists
if not isinstance(self.param_groups[0], list): self.param_groups = [self.param_groups]
self.hypers = [{**defaults} for p in self.param_groups]
def grad_params(self):
return [(p,hyper) for pg,hyper in zip(self.param_groups,self.hypers)
for p in pg if p.grad is not None]
def zero_grad(self):
for p,hyper in self.grad_params():
p.grad.detach_()
p.grad.zero_()
def step(self):
for p,hyper in self.grad_params(): compose(p, self.steppers, **hyper)
```
```python
#export
sgd_opt = partial(Optimizer, steppers=[weight_decay, sgd_step])
```
```python
learn,run = get_learn_run(nfs, data, 0.4, conv_layer, cbs=cbfs, opt_func=sgd_opt)
```
Before trying to train, let's check the behavior works as intended: when we don't provide a value for `wd`, we pull the corresponding default from `weight_decay`.
```python
model = learn.model
```
```python
opt = sgd_opt(model.parameters(), lr=0.1)
test_eq(opt.hypers[0]['wd'], 0.)
test_eq(opt.hypers[0]['lr'], 0.1)
```
But if we provide a value, it overrides the default.
```python
opt = sgd_opt(model.parameters(), lr=0.1, wd=1e-4)
test_eq(opt.hypers[0]['wd'], 1e-4)
test_eq(opt.hypers[0]['lr'], 0.1)
```
Now let's fit.
```python
cbfs = [partial(AvgStatsCallback,accuracy), CudaCallback]
```
```python
learn,run = get_learn_run(nfs, data, 0.3, conv_layer, cbs=cbfs, opt_func=partial(sgd_opt, wd=0.01))
```
```python
run.fit(1, learn)
```
train: [1.8041583961047774, tensor(0.3763, device='cuda:0')]
valid: [1.9106278076171874, tensor(0.3480, device='cuda:0')]
This is already better than the baseline!
## With momentum
Momentum requires adding some state. We need to save the moving average of the gradients to be able to do the step and store this inside the optimizer state. To do this, we introduce statistics. Statistics are objects with two methods:
- `init_state`, that returns the initial state (a tensor of 0. for the moving average of gradients)
- `update`, that updates the state with the new gradient value
We also read the `_defaults` values of those objects, to allow them to provide default values to hyper-parameters.
```python
#export
class StatefulOptimizer(Optimizer):
def __init__(self, params, steppers, stats=None, **defaults):
self.stats = listify(stats)
maybe_update(self.stats, defaults, get_defaults)
super().__init__(params, steppers, **defaults)
self.state = {}
def step(self):
for p,hyper in self.grad_params():
if p not in self.state:
#Create a state for p and call all the statistics to initialize it.
self.state[p] = {}
maybe_update(self.stats, self.state[p], lambda o: o.init_state(p))
state = self.state[p]
for stat in self.stats: state = stat.update(p, state, **hyper)
compose(p, self.steppers, **state, **hyper)
self.state[p] = state
```
```python
#export
class Stat():
_defaults = {}
def init_state(self, p): raise NotImplementedError
def update(self, p, state, **kwargs): raise NotImplementedError
```
Here is an example of `Stat`:
```python
class AverageGrad(Stat):
_defaults = dict(mom=0.9)
def init_state(self, p): return {'grad_avg': torch.zeros_like(p.grad.data)}
def update(self, p, state, mom, **kwargs):
state['grad_avg'].mul_(mom).add_(p.grad.data)
return state
```
```python
a = tensor([[1,2,3], [4,5, 6]])
torch.zeros_like(a)
torch.zeros(3,4)
```
tensor([[0, 0, 0],
[0, 0, 0]])
tensor([[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.]])
Then we add the momentum step (instead of using the gradients to perform the step, we use the average).
```python
#export
def momentum_step(p, lr, grad_avg, **kwargs):
p.data.add_(-lr, grad_avg)
return p
```
```python
sgd_mom_opt = partial(StatefulOptimizer, steppers=[momentum_step,weight_decay],
stats=AverageGrad(), wd=0.01)
```
```python
learn,run = get_learn_run(nfs, data, 0.3, conv_layer, cbs=cbfs, opt_func=sgd_mom_opt)
```
```python
run.fit(1, learn)
```
train: [1.749207631698852, tensor(0.3990, device='cuda:0')]
valid: [1.9519072265625, tensor(0.3440, device='cuda:0')]
### Momentum experiments
What does momentum do to the gradients exactly? Let's do some plots to find out!
```python
x = torch.linspace(-4, 4, 200)
y = torch.randn(200) + 0.3
betas = [0.5, 0.7, 0.9, 0.99]
x.mean();y.mean()
```
tensor(-5.4836e-08)
tensor(0.2762)
```python
def plot_mom(f):
_,axs = plt.subplots(2,2, figsize=(12,8))
for beta,ax in zip(betas, axs.flatten()):
ax.plot(y, linestyle='None', marker='.')
avg,res = None,[]
for i,yi in enumerate(y):
avg,p = f(avg, beta, yi, i)
res.append(p)
ax.plot(res, color='red')
ax.set_title(f'beta={beta}')
```
This is the regular momentum.
```python
def mom1(avg, beta, yi, i):
if avg is None: avg=yi
res = beta*avg + yi
return res,res
plot_mom(mom1)
```
As we can see, with too high a value of beta the running value can shoot way up, with no way to change its course.
Another way to smooth noisy data is to do an exponentially weighted moving average. In this case, there is a dampening of (1-beta) in front of the new value, which is less trusted than the current average. We'll define `lin_comb` (*linear combination*) to make this easier (note that in the lesson this was named `ewma`).
```python
#export
def lin_comb(v1, v2, beta): return beta*v1 + (1-beta)*v2
```
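To make the difference concrete, here is a minimal numeric sketch (the constant signal of 1.0 and beta=0.9 are arbitrary choices, not from the lesson): plain momentum accumulates towards 1/(1-beta) = 10, while the exponentially weighted average stays at the signal value.
```python
# Minimal sketch comparing the two update rules on a constant signal of 1.0
avg_mom, avg_ewma = 0., 0.
for _ in range(100):
    avg_mom  = 0.9*avg_mom + 1.0              # mom1-style accumulation
    avg_ewma = lin_comb(avg_ewma, 1.0, 0.9)   # mom2-style exponentially weighted average
avg_mom, avg_ewma   # -> roughly (10.0, 1.0)
```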
```python
def mom2(avg, beta, yi, i):
if avg is None: avg=yi
avg = lin_comb(avg, yi, beta)
return avg, avg
plot_mom(mom2)
```
We can see it settles to an almost constant value when the data is purely random. If the data has a certain shape, the average will follow that shape (with some delay for high beta).
```python
y = 1 - (x/3) ** 2 + torch.randn(200) * 0.1
```
```python
y[0]=0.5
```
```python
plot_mom(mom2)
```
Debiasing is here to correct the wrong information we may have in the very first batch. The debias term corresponds to the sum of the coefficients in our moving average. At time step $i$, our average is:
$\begin{align*}
avg_{i} &= \beta\ avg_{i-1} + (1-\beta)\ v_{i} = \beta\ (\beta\ avg_{i-2} + (1-\beta)\ v_{i-1}) + (1-\beta)\ v_{i} \\
&= \beta^{2}\ avg_{i-2} + (1-\beta)\ \beta\ v_{i-1} + (1-\beta)\ v_{i} \\
&= \beta^{3}\ avg_{i-3} + (1-\beta)\ \beta^{2}\ v_{i-2} + (1-\beta)\ \beta\ v_{i-1} + (1-\beta)\ v_{i} \\
&\vdots \\
&= (1-\beta)\ \beta^{i}\ v_{0} + (1-\beta)\ \beta^{i-1}\ v_{1} + \cdots + (1-\beta)\ \beta^{2}\ v_{i-2} + (1-\beta)\ \beta\ v_{i-1} + (1-\beta)\ v_{i}
\end{align*}$
and so the sum of the coefficients is
$\begin{align*}
S &=(1-\beta)\ \beta^{i} + (1-\beta)\ \beta^{i-1} + \cdots + (1-\beta)\ \beta^{2} + (1-\beta)\ \beta + (1-\beta) \\
&= (\beta^{i} - \beta^{i+1}) + (\beta^{i-1} - \beta^{i}) + \cdots + (\beta^{2} - \beta^{3}) + (\beta - \beta^{2}) + (1-\beta) \\
&= 1 - \beta^{i+1}
\end{align*}$
since all the other terms cancel out each other.
By dividing by this term, we make our moving average a true average (in the sense that all the coefficients we used for the average sum up to 1).
```python
def mom3(avg, beta, yi, i):
if avg is None: avg=0
avg = lin_comb(avg, yi, beta)
return avg, avg/(1-beta**(i+1))
plot_mom(mom3)
```
## Adam and friends
In Adam, we use the gradient averages but with dampening (not like in SGD with momentum), so let's add this to the `AverageGrad` class.
```python
#export
class AverageGrad(Stat):
_defaults = dict(mom=0.9)
def __init__(self, dampening:bool=False): self.dampening=dampening
def init_state(self, p): return {'grad_avg': torch.zeros_like(p.grad.data)}
def update(self, p, state, mom, **kwargs):
state['mom_damp'] = 1-mom if self.dampening else 1.
state['grad_avg'].mul_(mom).add_(state['mom_damp'], p.grad.data)
return state
```
We also need to track the moving average of the gradients squared.
```python
# a.addcmul??
a = tensor([[1., 2., 3,], [4., 5, 6]])
a.addcmul(.2, tensor([1.]),tensor([2.]))
```
tensor([[1.4000, 2.4000, 3.4000],
[4.4000, 5.4000, 6.4000]])
```python
#export
class AverageSqrGrad(Stat):
_defaults = dict(sqr_mom=0.99)
def __init__(self, dampening:bool=True): self.dampening=dampening
def init_state(self, p): return {'sqr_avg': torch.zeros_like(p.grad.data)}
def update(self, p, state, sqr_mom, **kwargs):
state['sqr_damp'] = 1-sqr_mom if self.dampening else 1.
state['sqr_avg'].mul_(sqr_mom).addcmul_(state['sqr_damp'], p.grad.data, p.grad.data)
return state
```
We will also need the number of steps done during training for the debiasing.
```python
#export
class StepCount(Stat):
def init_state(self, p): return {'step': 0}
def update(self, p, state, **kwargs):
state['step'] += 1
return state
```
This helper function computes the debias term. If we use dampening, `damp = 1 - mom` and we get the same result as before. If we don't use dampening (`damp = 1`), we need to divide by `1 - mom` because that term is missing everywhere.
```python
#export
def debias(mom, damp, step): return damp * (1 - mom**step) / (1-mom)
```
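As a quick sanity check (the gradient value 2.0 below is arbitrary): after a single step the debiased average recovers the gradient exactly, with or without dampening.
```python
g, mom = 2.0, 0.9
avg_damp   = (1-mom) * g    # accumulated with dampening after step 1
avg_nodamp = g              # accumulated without dampening after step 1
avg_damp / debias(mom, 1-mom, 1), avg_nodamp / debias(mom, 1., 1)   # -> (2.0, 2.0)
```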
Then the Adam step is just the following:
```python
#export
def adam_step(p, lr, mom, mom_damp, step, sqr_mom, sqr_damp, grad_avg, sqr_avg, eps, **kwargs):
debias1 = debias(mom, mom_damp, step)
debias2 = debias(sqr_mom, sqr_damp, step)
p.data.addcdiv_(-lr / debias1, grad_avg, (sqr_avg/debias2).sqrt() + eps)
return p
adam_step._defaults = dict(eps=1e-5)
```
```python
#export
def adam_opt(xtra_step=None, **kwargs):
return partial(StatefulOptimizer, steppers=[adam_step,weight_decay]+listify(xtra_step),
stats=[AverageGrad(dampening=True), AverageSqrGrad(), StepCount()], **kwargs)
```
```python
learn,run = get_learn_run(nfs, data, 0.001, conv_layer, cbs=cbfs, opt_func=adam_opt())
```
```python
run.fit(3, learn)
```
train: [1.7600864500930666, tensor(0.3937, device='cuda:0')]
valid: [1.445366455078125, tensor(0.5060, device='cuda:0')]
train: [1.2675379839944936, tensor(0.5829, device='cuda:0')]
valid: [1.181259033203125, tensor(0.6120, device='cuda:0')]
train: [0.9869430397617108, tensor(0.6824, device='cuda:0')]
valid: [1.04019091796875, tensor(0.6540, device='cuda:0')]
## LAMB
It's then super easy to implement a new optimizer. This is LAMB from a [very recent paper](https://arxiv.org/pdf/1904.00962.pdf):
$\begin{align}
g_{t}^{l} &= \nabla L(w_{t-1}^{l}, x_{t}) \\
m_{t}^{l} &= \beta_{1} m_{t-1}^{l} + (1-\beta_{1}) g_{t}^{l} \\
v_{t}^{l} &= \beta_{2} v_{t-1}^{l} + (1-\beta_{2}) g_{t}^{l} \odot g_{t}^{l} \\
m_{t}^{l} &= m_{t}^{l} / (1 - \beta_{1}^{t}) \\
v_{t}^{l} &= v_{t}^{l} / (1 - \beta_{2}^{t}) \\
r_{1} &= \|w_{t-1}^{l}\|_{2} \\
s_{t}^{l} &= \frac{m_{t}^{l}}{\sqrt{v_{t}^{l} + \epsilon}} + \lambda w_{t-1}^{l} \\
r_{2} &= \| s_{t}^{l} \|_{2} \\
\eta^{l} &= \eta * r_{1}/r_{2} \\
w_{t}^{l} &= w_{t}^{l-1} - \eta_{l} * s_{t}^{l} \\
\end{align}$
```python
def lamb_step(p, lr, mom, mom_damp, step, sqr_mom, sqr_damp, grad_avg, sqr_avg, eps, wd, **kwargs):
debias1 = debias(mom, mom_damp, step)
debias2 = debias(sqr_mom, sqr_damp, step)
r1 = p.data.pow(2).mean().sqrt()
step = (grad_avg/debias1) / ((sqr_avg/debias2).sqrt()+eps) + wd*p.data
r2 = step.pow(2).mean().sqrt()
p.data.add_(-lr * min(r1/r2,10), step)
return p
lamb_step._defaults = dict(eps=1e-6, wd=0.)
```
```python
dict(a=3, b='c')
```
{'a': 3, 'b': 'c'}
```python
lamb = partial(StatefulOptimizer, steppers=lamb_step, stats=[AverageGrad(dampening=True), AverageSqrGrad(), StepCount()])
```
```python
learn,run = get_learn_run(nfs, data, 0.003, conv_layer, cbs=cbfs, opt_func=lamb)
```
```python
run.fit(3, learn)
```
train: [1.8728660132619823, tensor(0.3419, device='cuda:0')]
valid: [1.484662841796875, tensor(0.4860, device='cuda:0')]
train: [1.3553239394291918, tensor(0.5506, device='cuda:0')]
valid: [1.537447021484375, tensor(0.4840, device='cuda:0')]
train: [1.0624497101364976, tensor(0.6537, device='cuda:0')]
valid: [1.11101806640625, tensor(0.6460, device='cuda:0')]
Other recent variants of optimizers:
- [Large Batch Training of Convolutional Networks](https://arxiv.org/abs/1708.03888) (LARS also uses weight statistics, not just gradient statistics. Can you add that to this class?)
- [Adafactor: Adaptive Learning Rates with Sublinear Memory Cost](https://arxiv.org/abs/1804.04235) (Adafactor combines stats over multiple sets of axes)
- [Adaptive Gradient Methods with Dynamic Bound of Learning Rate](https://arxiv.org/abs/1902.09843)
## Export
```python
!python notebook2script.py 09_optimizers.ipynb
```
```python
```
# Sampled Softmax
For classification and prediction problems a typical criterion function is cross-entropy with softmax. If the number of output classes is high the computation of this criterion and the corresponding gradients could be quite costly. Sampled Softmax is a heuristic to speed up training in these cases. (see: [Adaptive Importance Sampling to Accelerate Training of a Neural Probabilistic Language Model](http://www.iro.umontreal.ca/~lisa/pointeurs/importance_samplingIEEEtnn.pdf), [Exploring the Limits of Language Modeling](https://arxiv.org/pdf/1602.02410v1.pdf), [What is Candidate Sampling](https://www.tensorflow.org/extras/candidate_sampling.pdf))
#### Select the notebook runtime environment devices / settings
Before we dive into the details we run some setup that is required for automated testing of this notebook.
```python
import os
import cntk as C
# Select the right target device when this notebook is being tested:
if 'TEST_DEVICE' in os.environ:
if os.environ['TEST_DEVICE'] == 'cpu':
C.device.try_set_default_device(C.device.cpu())
else:
C.device.try_set_default_device(C.device.gpu(0))
```
## Basics
The softmax function is used in neural networks if we want to interpret the network output as a probability distribution over a set of classes $C$ with $|C|=N_C$.
Softmax maps an $N_C$-dimensional vector $z$, which has unrestricted values, to an $N_C$ dimensional vector $p$ with non-negative values that sum up to 1 so that they can be interpreted as probabilities. More precisely:
$$
\begin{align}
p_i &= softmax(z, i)\\
&= \frac{exp(z_i)}{\sum_{k\in C} exp(z_k)}\\
\end{align}
$$
In what follows we assume that the input $z$ to the softmax is computed from some hidden vector $h$ of dimension $N_h$ in a specific way, namely:
$$ z = W h + b $$
where $W$ is a learnable weight matrix of dimension $(N_c, N_h)$ and $b$ is a learnable bias vector.
We restrict ourselves to this specific choice of $z$ because it helps in implementing an efficient sampled softmax.
In a typical use-case like for example a recurrent language model, the hidden vector $h$ would be the output of the recurrent layers and $C$ would be the set of words to predict.
As a training criterion, we use cross-entropy which is a function of the expected (true) class $t\in C$ and the probability predicted for it:
$$cross\_entropy := -log(p_t)$$
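As a small illustration outside of CNTK (a plain NumPy sketch; the scores and true class below are made up), this is what the softmax and the cross-entropy of the true class look like numerically:
```python
import numpy as np

def softmax(z):
    z = z - np.max(z)          # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

z = np.array([2.0, 1.0, 0.1])  # unnormalized scores z = W h + b for 3 classes
t = 0                          # index of the true class
p = softmax(z)
print(p, -np.log(p[t]))        # class probabilities and cross-entropy -log(p_t)
```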
## Sampled Softmax from the outside
For the normal softmax the CNTK Python-api provides the function [cross_entropy_with_softmax](https://cntk.ai/pythondocs/cntk.ops.html?highlight=softmax#cntk.ops.cross_entropy_with_softmax). This takes as input the $N_C$-dimensional vector $z$. As mentioned for our sampled softmax implementation we assume that this z is computed by $ z = W h + b $. In sampled softmax this has to be part of the whole implementation of the criterion.
Below we show the code for `cross_entropy_with_sampled_softmax_and_embedding`. Let’s look at the signature first.
One fundamental difference to the corresponding function in the Python-api (`cross_entropy_with_softmax`) is that in the Python api function the input corresponds to $z$ and must have the same dimension as the target vector, while in `cross_entropy_with_sampled_softmax_and_embedding` the input corresponds to our hidden vector $h$ and can have any dimension (hidden_dim).
Actually, hidden_dim will be typically much lower than the dimension of the target vector.
We also have some additional parameters `num_samples, sampling_weights, allow_duplicates` that control the random sampling.
Another difference to the api function is that we return a triple (z, cross_entropy_on_samples, error_on_samples).
We will come back to the details of the implementation below.
```python
from __future__ import print_function
from __future__ import division
# Creates a subgraph computing cross-entropy with sampled softmax.
def cross_entropy_with_sampled_softmax_and_embedding(
hidden_vector, # Node providing hidden input
target_vector, # Node providing the expected labels (as sparse vectors)
num_classes, # Number of classes
hidden_dim, # Dimension of the hidden vector
num_samples, # Number of samples to use for sampled softmax
sampling_weights, # Node providing weights to be used for the weighted sampling
allow_duplicates = True, # Boolean flag to control whether to use sampling with replacemement
# (allow_duplicates == True) or without replacement.
):
# define the parameters learnable parameters
b = C.Parameter(shape = (num_classes, 1), init = 0)
W = C.Parameter(shape = (num_classes, hidden_dim), init = C.glorot_uniform())
# Define the node that generates a set of random samples per minibatch
# Sparse matrix (num_samples * num_classes)
sample_selector = C.random_sample(sampling_weights, num_samples, allow_duplicates)
    # For each of the samples we also need the probability that it is in the sampled set.
inclusion_probs = C.random_sample_inclusion_frequency(sampling_weights, num_samples, allow_duplicates) # dense row [1 * vocab_size]
log_prior = C.log(inclusion_probs) # dense row [1 * num_classes]
    # Create a submatrix W_sampled of the weights W
W_sampled = C.times(sample_selector, W) # [num_samples * hidden_dim]
z_sampled = C.times_transpose(W_sampled, hidden_vector) + C.times(sample_selector, b) - C.times_transpose (sample_selector, log_prior)# [num_samples]
# Getting the weight vector for the true label. Dimension hidden_dim
W_target = C.times(target_vector, W) # [1 * hidden_dim]
z_target = C.times_transpose(W_target, hidden_vector) + C.times(target_vector, b) - C.times_transpose(target_vector, log_prior) # [1]
z_reduced = C.reduce_log_sum_exp(z_sampled)
# Compute the cross entropy that is used for training.
    # We don't check whether any of the classes in the random samples coincides with the true label, so it might
    # happen that the true class is counted
    # twice in the normalising denominator of sampled softmax.
cross_entropy_on_samples = C.log_add_exp(z_target, z_reduced) - z_target
# For applying the model we also output a node providing the input for the full softmax
z = C.times_transpose(W, hidden_vector) + b
z = C.reshape(z, shape = (num_classes))
zSMax = C.reduce_max(z_sampled)
error_on_samples = C.less(z_target, zSMax)
return (z, cross_entropy_on_samples, error_on_samples)
```
To give a better idea of what the inputs and outputs are and how this all differs from the normal softmax we give below a corresponding function using normal softmax:
```python
# Creates subgraph computing cross-entropy with (full) softmax.
def cross_entropy_with_softmax_and_embedding(
hidden_vector, # Node providing hidden input
target_vector, # Node providing the expected labels (as sparse vectors)
num_classes, # Number of classes
hidden_dim # Dimension of the hidden vector
):
# Setup bias and weights
b = C.Parameter(shape = (num_classes, 1), init = 0)
W = C.Parameter(shape = (num_classes, hidden_dim), init = C.glorot_uniform())
z = C.reshape( C.times_transpose(W, hidden_vector) + b, (1, num_classes))
# Use cross_entropy_with_softmax
cross_entropy = C.cross_entropy_with_softmax(z, target_vector)
zMax = C.reduce_max(z)
zT = C.times_transpose(z, target_vector)
error_on_samples = C.less(zT, zMax)
return (z, cross_entropy, error_on_samples)
```
As you can see the main differences to the api function `cross_entropy_with_softmax` are:
* We include the mapping $ z = W h + b $ into the function.
* We return a triple (z, cross_entropy, error_on_samples) instead of just returning the cross entropy.
## A toy example
To explain how to integrate sampled softmax let us look at a toy example. In this toy example we first transform one-hot input vectors via some random projection into a lower dimensional vector $h$. The modeling task is to reverse this mapping using (sampled) softmax. Well, as already said this is a toy example.
```python
import numpy as np
from math import log, exp, sqrt
from cntk.logging import ProgressPrinter
import timeit
# A class with all parameters
class Param:
# Learning parameters
learning_rate = 0.03
minibatch_size = 100
num_minbatches = 100
test_set_size = 1000
momentum_time_constant = 5 * minibatch_size
reporting_interval = 10
allow_duplicates = False
# Parameters for sampled softmax
use_sampled_softmax = True
use_sparse = True
softmax_sample_size = 10
# Details of data and model
num_classes = 50
hidden_dim = 10
data_sampling_distribution = lambda: np.repeat(1.0 / Param.num_classes, Param.num_classes)
softmax_sampling_weights = lambda: np.repeat(1.0 / Param.num_classes, Param.num_classes)
# Creates random one-hot vectors of dimension 'num_classes'.
# Returns a tuple with a list of one-hot vectors, and list with the indices they encode.
def get_random_one_hot_data(num_vectors):
indices = np.random.choice(
range(Param.num_classes),
size=num_vectors,
p = data_sampling_distribution()).reshape((1, num_vectors))
list_of_vectors = C.Value.one_hot(indices, Param.num_classes)
return (list_of_vectors, indices.flatten())
# Create a network that:
# * Transforms the input one hot-vectors with a constant random embedding
# * Applies a linear decoding with parameters we want to learn
def create_model(labels):
# random projection matrix
random_data = np.random.normal(scale = sqrt(1.0/Param.hidden_dim), size=(Param.num_classes, Param.hidden_dim)).astype(np.float32)
random_matrix = C.constant(shape = (Param.num_classes, Param.hidden_dim), value = random_data)
h = C.times(labels, random_matrix)
# Connect the latent output to (sampled/full) softmax.
if Param.use_sampled_softmax:
sampling_weights = np.asarray(softmax_sampling_weights(), dtype=np.float32)
sampling_weights.reshape((1, Param.num_classes))
softmax_input, ce, errs = cross_entropy_with_sampled_softmax_and_embedding(
h,
labels,
Param.num_classes,
Param.hidden_dim,
Param.softmax_sample_size,
softmax_sampling_weights(),
Param.allow_duplicates)
else:
softmax_input, ce, errs = cross_entropy_with_softmax_and_embedding(
h,
labels,
Param.num_classes,
Param.hidden_dim)
return softmax_input, ce, errs
def train(do_print_progress):
labels = C.input_variable(shape = Param.num_classes, is_sparse = Param.use_sparse)
z, cross_entropy, errs = create_model(labels)
# Setup the trainer
learning_rate_schedule = C.learning_rate_schedule(Param.learning_rate, C.UnitType.sample)
momentum_schedule = C.momentum_as_time_constant_schedule(Param.momentum_time_constant)
learner = C.momentum_sgd(z.parameters, learning_rate_schedule, momentum_schedule, True)
progress_writers = None
if do_print_progress:
progress_writers = [ProgressPrinter(freq=Param.reporting_interval, tag='Training')]
trainer = C.Trainer(z, (cross_entropy, errs), learner, progress_writers)
minbatch = 0
average_cross_entropy = compute_average_cross_entropy(z)
minbatch_data = [0] # store minibatch values
cross_entropy_data = [average_cross_entropy] # store cross_entropy values
# Run training
t_total= 0
# Run training
for minbatch in range(1,Param.num_minbatches):
# Specify the mapping of input variables in the model to actual minibatch data to be trained with
label_data, indices = get_random_one_hot_data(Param.minibatch_size)
arguments = ({labels : label_data})
# If do_print_progress is True, this will automatically print the progress using ProgressPrinter
# The printed loss numbers are computed using the sampled softmax criterion
t_start = timeit.default_timer()
trainer.train_minibatch(arguments)
t_end = timeit.default_timer()
t_delta = t_end - t_start
samples_per_second = Param.minibatch_size / t_delta
# We ignore the time measurements of the first two minibatches
if minbatch > 2:
t_total += t_delta
# For comparison also print result using the full criterion
if minbatch % Param.reporting_interval == int(Param.reporting_interval/2):
# memorize the progress data for plotting
average_cross_entropy = compute_average_cross_entropy(z)
minbatch_data.append(minbatch)
cross_entropy_data.append(average_cross_entropy)
if do_print_progress:
print("\nMinbatch=%d Cross-entropy from full softmax = %.3f perplexity = %.3f samples/s = %.1f"
% (minbatch, average_cross_entropy, exp(average_cross_entropy), samples_per_second))
# Number of samples we measured. First two minbatches were ignored
samples_measured = Param.minibatch_size * (Param.num_minbatches - 2)
overall_samples_per_second = samples_measured / t_total
return (minbatch_data, cross_entropy_data, overall_samples_per_second)
def compute_average_cross_entropy(softmax_input):
vectors, indices = get_random_one_hot_data(Param.test_set_size)
total_cross_entropy = 0.0
arguments = (vectors)
z = softmax_input.eval(arguments).reshape(Param.test_set_size, Param.num_classes)
for i in range(len(indices)):
log_p = log_softmax(z[i], indices[i])
total_cross_entropy -= log_p
return total_cross_entropy / len(indices)
# Computes log(softmax(z,index)) for a one-dimensional numpy array z in an numerically stable way.
def log_softmax(z, # numpy array
index # index into the array
):
max_z = np.max(z)
return z[index] - max_z - log(np.sum(np.exp(z - max_z)))
np.random.seed(1)
print("start...")
train(do_print_progress = True)
print("done.")
```
start...
Minbatch=5 Cross-entropy from full softmax = 3.843 perplexity = 46.645 samples/s = 15268.3
Minibatch[ 1- 10]: loss = 2.321515 * 1000, metric = 78.90% * 1000;
Minbatch=15 Cross-entropy from full softmax = 3.483 perplexity = 32.561 samples/s = 16016.7
Minibatch[ 11- 20]: loss = 2.042852 * 1000, metric = 61.10% * 1000;
Minbatch=25 Cross-entropy from full softmax = 3.061 perplexity = 21.339 samples/s = 16125.2
Minibatch[ 21- 30]: loss = 1.701563 * 1000, metric = 38.80% * 1000;
Minbatch=35 Cross-entropy from full softmax = 2.781 perplexity = 16.142 samples/s = 17111.2
Minibatch[ 31- 40]: loss = 1.435229 * 1000, metric = 17.20% * 1000;
Minbatch=45 Cross-entropy from full softmax = 2.452 perplexity = 11.609 samples/s = 17150.7
Minibatch[ 41- 50]: loss = 1.268124 * 1000, metric = 14.20% * 1000;
Minbatch=55 Cross-entropy from full softmax = 2.262 perplexity = 9.602 samples/s = 16039.8
Minibatch[ 51- 60]: loss = 1.052258 * 1000, metric = 7.70% * 1000;
Minbatch=65 Cross-entropy from full softmax = 2.092 perplexity = 8.100 samples/s = 16275.1
Minibatch[ 61- 70]: loss = 0.979096 * 1000, metric = 10.10% * 1000;
Minbatch=75 Cross-entropy from full softmax = 1.924 perplexity = 6.850 samples/s = 16244.5
Minibatch[ 71- 80]: loss = 0.908541 * 1000, metric = 9.50% * 1000;
Minbatch=85 Cross-entropy from full softmax = 1.751 perplexity = 5.761 samples/s = 15610.1
Minibatch[ 81- 90]: loss = 0.825795 * 1000, metric = 5.10% * 1000;
Minbatch=95 Cross-entropy from full softmax = 1.660 perplexity = 5.261 samples/s = 13370.0
done.
In the above code we use two different methods to report training progress:
1. Using a function that computes the average cross entropy on full softmax.
2. Using the built-in ProgressPrinter
ProgressPrinter reports how the value of the training criterion changes over time.
In our case the training criterion is cross-entropy from **sampled** softmax.
The same is true for the error rate computed by progress printer, this is computed only for true-class vs sampled-classes and will therefore underestimate the true error rate.
Therefore while ProgressPrinter already gives us some idea how training goes on, if we want to compare the behavior for different sampling strategies (sample size, sampling weights, ...) we should not rely on numbers that are computed only using the sampled subset of classes.
## Importance sampling
Often we don't have a uniform distribution for the classes on the output side. The typical example is words as output classes, where e.g. 'the' will be much more frequent than most others.
In such cases one often uses a non-uniform distribution for drawing the samples in sampled softmax, increasing the sampling weight for the frequent classes. This is also called importance sampling.
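As a rough NumPy illustration of what a weighted sampler does (this is not the CNTK `random_sample` op; the Zipf-like weights below just mirror the ones used later in this notebook), most of the draws land on the frequent, low-index classes:
```python
import numpy as np

weights = np.array([1.0/(i+5) for i in range(50)])   # Zipf-like class weights
probs = weights / weights.sum()
np.random.seed(0)
print(np.sort(np.random.choice(50, size=10, replace=False, p=probs)))
```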
In our example the sampling distribution is controlled by the weight array `softmax_sampling_weights`.
As an example let's look at the case where the classes are distributed according to a Zipfian distribution like:
$$
p[i] \propto \frac{1}{i+5},
$$
which is also the distribution we use below to generate the class data.
How does the training behavior change if we switch from uniform sampling to sampling with the Zipfian distribution in sampled softmax?
```python
# We want to plot the data
import matplotlib.pyplot as plt
%matplotlib inline
# Define weights of zipfian distribution
def zipf(index):
return 1.0 / (index + 5)
# Use zipifian distribution for the classes
def zipf_sampling_weights():
return np.asarray([ zipf(i) for i in range(Param.num_classes)], dtype=np.float32)
data_sampling_distribution = lambda: zipf_sampling_weights() / np.sum(zipf_sampling_weights())
print("start...")
# Train using uniform sampling (like before)
np.random.seed(1)
softmax_sampling_weights = lambda: np.repeat(1.0/Param.num_classes, Param.num_classes)
minibatch_data, cross_entropy_data, _ = train(do_print_progress = False)
# Train using importance sampling
np.random.seed(1)
softmax_sampling_weights = zipf_sampling_weights
minibatch_data2, cross_entropy_data2, _ = train(do_print_progress = False)
plt.plot(minibatch_data, cross_entropy_data, 'r--',minibatch_data, cross_entropy_data2, 'b--')
plt.xlabel('number of mini-batches')
plt.ylabel('cross entropy')
plt.show()
```
In the example above we compare uniform sampling (red) vs sampling with the same distribution the classes have (blue).
You will need to experiment to find the best settings for all the softmax parameters.
## What speedups to expect?
The speed difference between full softmax and sampled softmax in terms of training instances depends strongly on the concrete settings, namely
* Number of classes. Typically the speed-up will increase the more output classes you have.
* Number of samples used in sampled softmax
* Dimension of hidden layer input
* Minibatch size
* Hardware
Also you need to test how much you can reduce sample size without degradation of the result.
```python
print("start...")
# Reset parameters
class Param:
# Learning parameters
learning_rate = 0.03
minibatch_size = 8
num_minbatches = 100
    test_set_size = 1 # we are only interested in speed
momentum_time_constant = 5 * minibatch_size
reporting_interval = 1000000 # Switch off reporting to speed up
allow_duplicates = False
# Parameters for sampled softmax
use_sampled_softmax = True
use_sparse = True
softmax_sample_size = 10
# Details of data and model
num_classes = 50000
hidden_dim = 10
data_sampling_distribution = lambda: np.repeat(1.0 / Param.num_classes, Param.num_classes)
softmax_sampling_weights = lambda: np.repeat(1.0 / Param.num_classes, Param.num_classes)
sample_sizes = [5, 10, 100, 1000]
speed_with_sampled_softmax = []
# Get the speed with sampled softmax for different sizes
for sample_size in sample_sizes:
print("Measuring speed of sampled softmax for sample size %d ..." % (sample_size))
Param.use_sampled_softmax = True
Param.softmax_sample_size = sample_size
_, _, samples_per_second = train(do_print_progress = False)
speed_with_sampled_softmax.append(samples_per_second)
# Get the speed with full softmax
Param.use_sampled_softmax = False
print("Measuring speed of full softmax ...")
_, _, samples_per_second = train(do_print_progress = False)
speed_without_sampled_softmax = np.repeat(samples_per_second, len(sample_sizes))
# Plot the speed of sampled softmax (blue) as a function of sample sizes
# and compare it to the speed with full softmax (red).
plt.plot(sample_sizes, speed_without_sampled_softmax, 'r--',sample_sizes, speed_with_sampled_softmax, 'b--')
plt.xlabel('softmax sample size')
plt.ylabel('speed: instances / second')
plt.title("Speed 'sampled softmax' (blue) vs. 'full softmax' (red)")
plt.ylim(ymin=0)
plt.show()
```
# Hopf Bifurcation: The Emergence of Limit-cycle Dynamics
*Cem Özen*, May 2017.
A *Hopf bifurcation* is a critical point in which a periodic orbit appears or disappears through a local change in the stability of a fixed point in a dynamical system as one of the system parameters is varied. Hopf bifurcations occur in many of the well-known dynamical systems such as the Lotka-Volterra model, the Lorenz model, the Selkov model of glycolysis, the Belousov-Zhabotinsky reaction model, and the Hodgkin-Huxley model for nerve membrane.
In this notebook, I will consider a system of chemical reactions known by the name *Brusselator* in literature (see: https://en.wikipedia.org/wiki/Brusselator for more information) as a model for Hopf bifurcations. The Brusselator reactions are given by
$A \longrightarrow X$ <br>
$2X + Y\longrightarrow 3X$ <br>
$B + X \longrightarrow Y + D$ <br>
$X \longrightarrow E$ <br>
For the sake of simplicity, we will assume that the reaction constants of all these reactions are unity (i.e. in all the reactions, $k=1$ ). Furthermore let's assume that the reactant concentrations $A$ and $B$ are so large that they remain constant. Therefore, only $X$ and $Y$ concentrations will be dynamical.
The rate equations for $X$ and $Y$ are then given by <br>
$$
\begin{eqnarray}
\dot{X} & = & A + X^2Y - BX - X, \\
\dot{Y} & = & BX - X^2Y
\end{eqnarray}
$$
The X-nullcline and the Y-nullcline are given by the conditions of $0 = A + X^2Y - BX - X$ and $0 = BX - X^2Y$ respectively. From these equations, we obtain:
$$
\begin{eqnarray}
Y(X) & = & \frac{-A + X(B+1)}{X^2}, & \quad \textrm{(X-nullcline)} \\
Y(X) & = & \frac{B}{X}, & \quad \textrm{(Y-nullcline)}
\end{eqnarray}
$$
In this notebook, I will also demonstrate how one can perform symbolical computations using Python's `SymPy` library. We also need extra Jupyter Notebook functionality to render a nice display of the resulting equations. (Notice that we are using LaTeX in typesetting this document, particularly for the purpose of producing nice-looking equations).
```python
import numpy as np
from numpy.linalg import eig
from scipy import integrate
import sympy
from IPython.display import display, Math, Latex
import matplotlib.pyplot as plt
sympy.init_printing(use_latex='mathjax')
%matplotlib inline
```
Let's obtain the nullcline equations using `SymPy`:
```python
X, Y, A, B = sympy.symbols('X Y A B') # you need to introduce the symbols first
# let's get the X-nullcline as a function of X:
sympy.solve(sympy.Eq(A + X**2 * Y - B * X - X, 0), Y)
```
$$\left [ \frac{1}{X^{2}} \left(- A + X \left(B + 1\right)\right)\right ]$$
```python
# let's get the Y-nullcline as a function of X:
sympy.solve(sympy.Eq(B * X - X**2 * Y, 0), Y)
```
$$\left [ \frac{B}{X}\right ]$$
Now let's find the fixed points ($X^*, Y^*$) of this 2-D system (there is only one, actually). The fixed point is given by the simultaneous solution of the X-nullcline and Y-nullcline equations, therefore
$$ (X^*, Y^*) = (A, \frac{B}{A}) $$
For the sake of using `SymPy`, let's obtain this solution once again:
```python
# Solve the system of equations defined by the X-nullcline and Y-nullcline with respect to X and Y:
sympy.solve([A + X**2 * Y - B * X - X, B * X - X**2 * Y], [X, Y])
```
$$\left [ \left ( A, \quad \frac{B}{A}\right )\right ]$$
Now, a bifurcation analysis of the Brusselator model requires us to keep track of the local stability of its fixed point. According to *linearized stability analysis*, this can be done by evaluating the Jacobian matrix at the fixed point. <br>
The Jacobian matrix at the fixed point is given by:
$$
\begin{eqnarray}
J & = & \left\vert\matrix{{\partial f \over \partial x} & {\partial f\over \partial y} \cr
{\partial g \over \partial x} & {\partial g\over \partial y}
}\right\vert_{(X^*, Y^*)} \\
& = & \left\vert\matrix{ -B + 2XY - 1 & X^2 \cr
B - 2XY & -X^2
}\right\vert_{(X^*, Y^*)} \\
& = & \left\vert\matrix{ B - 1 & A^2 \cr
-B & -A^2
}\right\vert
\end{eqnarray}
$$
This result can also be obtained very easily using `SymPy`:
```python
# define the Brusselator dynamical system as a SymPy matrix
brusselator = sympy.Matrix([A + X**2 * Y - B * X - X, B * X - X**2 * Y])
```
```python
# Jacobian matrix with respect to X and Y
J = brusselator.jacobian([X, Y])
J
```
$$\left[\begin{matrix}- B + 2 X Y - 1 & X^{2}\\B - 2 X Y & - X^{2}\end{matrix}\right]$$
```python
# Jacobian matrix evaluated at the coordinates of the fixed point
J_at_fp = J.subs({X:A, Y:B/A}) # substitute X with A and Y with B/A
J_at_fp
```
$$\left[\begin{matrix}B - 1 & A^{2}\\- B & - A^{2}\end{matrix}\right]$$
A limit-cycle can emerge in a 2-dimensional, attractive dynamical system if the fixed point of the system goes unstable. In this case, trajectories must be pulled by a limit cycle. (According to the Poincare-Bendixson theorem, a 2-dimensional system cannot have strange attractors). In this case, the Hopf bifurcation is called a *supercritical Hopf bifurcation*, because limit cycle is stable.
In the following, we will see how the stable fixed point (spiral) of the Brusselator goes unstable, giving rise to a limit cycle in turn. Conditions for the stability are determined by the trace and the determinant of the Jacobian. So let's evaluate them:
```python
Delta = J_at_fp.det() # determinant of the Jacobian
Delta.simplify()
```
$$A^{2}$$
```python
tau = J_at_fp.trace() # trace of the Jacobian
tau
```
$$- A^{2} + B - 1$$
To have an unstable spiral we need:
$$
\begin{eqnarray}
\tau & > & 0 \quad \Rightarrow \quad & B > A^2 + 1 \quad \textrm{required} \\
\Delta & > & 0 \quad {} \quad & \textrm{automatically satisfied} \\
\tau^2 & < & 4 \Delta \quad {} \quad & \textrm{automatically satisfied}
\end{eqnarray}
$$
The second and third conditions were satisfied because of the first condition, automatically. Therefore we need to have:
$$ B > A^2 + 1 $$ for limit cycles.
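As a quick symbolic cross-check (reusing `J_at_fp` from above): when the eigenvalues of the Jacobian form a complex pair, their real part is $\tau/2 = (B - A^2 - 1)/2$, which changes sign exactly at $B = A^2 + 1$.
```python
# Symbolic eigenvalues of the Jacobian at the fixed point, as functions of A and B
list(J_at_fp.eigenvals().keys())
```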
## Birth of A Limit Cycle: Hopf Bifurcation
### Numerical Simulation of the Brusselator System
In the following, I perform a numerical simulation of the (supercritical) Hopf bifurcation in the Brusselator system by varying the parameter $B$ while the value of $A=1$.
```python
# Brusselator System:
def dX_dt(A, B, X, t):
x, y = X[0], X[1]
return np.array([A + x**2 * y - B * x -x,
B * x - x**2 * y])
T = 50 * np.pi # simulation time
dt = 0.01 # integration time step
# time steps to be used in integration of the Brusselator system
t=np.arange(0, T, dt)
# create a canvas and 3 subplots; we will use each one for a different choice of A and B parameters
fig, ax = plt.subplots(1, 3)
fig.set_size_inches(13,5)
def plotter(A, B, ax):
"""
This function draws a phase portrait by assigning a vector characterizing how the concentrations
change at a given value of X and Y. It also draws a couple of example trajectories.
"""
    # Draw direction fields using matplotlib's quiver function, similar to what we did
# in class but qualitatively
xmin, xmax = 0, 5 # min and max values of x axis in the plot
ymin, ymax = 0, 5 # min and max values of y axis in the plot
x = np.linspace(xmin, xmax, 10) # divide x axis to intervals
y = np.linspace(ymin, ymax, 10) # divide y axis to intervals
X1 , Y1 = np.meshgrid(x, y) # from these intervals create a grid
DX1, DY1 = dX_dt(A, B, [X1, Y1], t) # compute rate of change of the concentrations on grid points
M = (np.hypot(DX1, DY1)) # norm of the rate of changes
M[ M == 0] = 1. # prevention against divisions by zero
DX1 /= M # we normalize the direction field vector (each has unit length now)
DY1 /= M # we normalize the direction field vector (each has unit length now)
Q = ax.quiver(X1, Y1, DX1, DY1, M, pivot='mid', cmap=plt.cm.jet)
num_traj = 5 # number of trajectories
# choose several initial points (x_i, y_j), for i and j chosen as in linspace calls below
X0 = np.asarray(list(zip(np.linspace(xmin, xmax, num_traj), np.linspace(ymin, ymax, num_traj))))
vcolors = plt.cm.jet_r(np.linspace(0., 1., num_traj)) # colors for each trajectory
# integrate the Brusellator ODE's using all initial points to produce corresponding trajectories
X = np.asarray([integrate.odeint(lambda x, t: dX_dt(A, B, x, t), X0i,t)
for X0i in X0])
# plot the trajectories we obtained above
for i in range(num_traj):
x, y = X[i, :, :].T # x and y histories for trajectory i
ax.plot(x, y, '-', c=vcolors[i], lw=2)
# set limits, put labels etc..
ax.set_xlim(xmin=xmin, xmax=xmax)
ax.set_ylim(ymin=ymin, ymax=ymax)
ax.set_xlabel("X", fontsize = 20)
ax.set_ylabel("Y", fontsize = 20)
ax.annotate("A={}, B={}".format(A, B), xy = (0.4, 0.9), xycoords = 'axes fraction', fontsize = 20, color = "k")
# Now let's prepare plots for the following choices of A and B:
plotter(A=1, B=1, ax=ax[0])
plotter(A=1, B=2, ax=ax[1])
plotter(A=1, B=3, ax=ax[2])
```
Above you see how a limit cycle can be created in a dynamical system, as one of the system parameters is varied.
Here we have kept $A=1$ but varied $B$ from 1 to 3. Note that $B=2$ is the borderline value, marking the change in the stability of the fixed point. For $B<2$ the fixed point is stable, but as we cross the value $B=2$ it becomes unstable and a limit cycle is born. This phenomenon is an example of a *Hopf bifurcation*.
On the leftmost panel we have a stable spiral. Technically, this means that the Jacobian at the fixed point has two complex eigenvalues (a complex conjugate pair). The fact that the eigenvalues are complex is responsible for the spiralling effect. In stable spirals, the real part of the eigenvalues is negative, which is why these spiralling solutions decay, that is, trajectories nearby fall onto the fixed point. As the bifurcation parameter (here $B$) varies, the real part of the complex eigenvalues increases, reaches zero at a certain value of $B$, and keeps growing on the positive side. If the real part of the eigenvalues is positive, the fixed point is an unstable spiral; trajectories nearby are pushed out of the fixed point (see the rightmost panel and plot below). Since this 2-D dynamical system is attractive, by the Poincare-Bendixson theorem, the emergence of the unstable spiral accompanies the birth of a limit cycle. Notice that the panel in the middle is the borderline case between the stable and unstable spirals: there the real part of the eigenvalues is exactly zero (see plots below); linearized stability analysis falsely predicts a neutral oscillation (i.e. a center) at $B=2$---due to purely imaginary eigenvalues. However, the fixed point is still a stable spiral then.
### Eigenvalues of the Jacobian
```python
# Eigenvalues of the Jacobian at A=1, B=1 (fixed point is stable spiral)
J_numeric = np.asarray(J_at_fp.evalf(subs={A:1, B:1})).astype(np.float64)
w, _ = eig(J_numeric)
w
```
array([-0.5+0.8660254j, -0.5-0.8660254j])
```python
# Eigenvalues of the Jacobian at A=1, B=3 (fixed point is unstable spiral)
J_numeric = np.asarray(J_at_fp.evalf(subs={A:1, B:3})).astype(np.float64)
w, _ = eig(J_numeric)
w
```
array([ 0.5+0.8660254j, 0.5-0.8660254j])
Let's prepare plots showing how the real and imaginary parts of the eigenvalues change as $B$ is varied.
```python
from numpy.linalg import eig
a = 1
eigen_real, eigen_imag = [], []
B_vals = np.linspace(1, 3, 20)
for b in B_vals:
J_numeric = np.asarray(J_at_fp.evalf(subs={A:a, B:b})).astype(np.float64)
w, _ = eig(J_numeric)
eigen_real.append(w[0].real)
eigen_imag.append(abs(w[0].imag))
eigen_real = np.asanyarray(eigen_real)
eigen_imag = np.asarray(eigen_imag)
```
```python
fig, ax = plt.subplots(1, 2)
fig.set_size_inches(10,5)
fig.subplots_adjust(wspace=0.5)
ax[0].axhline(y=0, c="k", ls="dashed")
ax[0].plot(B_vals, eigen_real)
ax[0].set_ylabel(r"$\mathfrak{Re}(\lambda)$", fontsize = 20)
ax[0].set_xlabel(r"$B$", fontsize = 20)
ax[1].set_ylabel(r"$|\mathfrak{Im}(\lambda)|$", fontsize = 20)
ax[1].set_xlabel(r"$B$", fontsize = 20)
ax[1].plot(B_vals, eigen_imag)
```
A Hopf bifurcation is only one type of bifurcation, albeit a very important one for physical and biological systems. There are other types of bifurcation in which one can create or destroy fixed points or alter their properties in different ways than a Hopf bifurcation does. If you are curious, I suggest you perform your own numerical experiments by playing with the values of $A$, $B$, or both.
### An Animation of the Hopf Bifurcation
```python
from matplotlib import animation, rc
from IPython.display import HTML
# Brusselator System:
def dX_dt(A, B, X, t):
x, y = X[0], X[1]
return np.array([A + x**2 * y - B * x -x, B * x - x**2 * y])
T = 50 * np.pi # simulation time
dt = 0.01 # integration time step
# time steps to be used in integration of the Brusselator system
t=np.arange(0, T, dt)
num_traj = 5 # number of trajectories
xmin, xmax = 0, 5 # min and max values of x axis in the plot
ymin, ymax = 0, 5 # min and max values of y axis in the plot
A = 1. # we will keep A parameter constant
# vary B parameter
Bmin, Bmax, numB = 1., 3., 100 # min, max, number of steps for varying B
Bvals = np.linspace(Bmin, Bmax, numB)
# set up the figure, the axis, and the plot element we want to animate
fig = plt.figure()
fig.set_size_inches(8,8)
ax = plt.axes(xlim=(xmin, xmax), ylim=(ymin, ymax))
ax.set_ylabel("Y", fontsize = 20)
ax.set_xlabel("X", fontsize = 20)
# choose a set of initial points for our trajectories (in each frame we will use the same set)
X0 = list(zip(np.linspace(xmin, xmax, num_traj), np.linspace(ymin, ymax, num_traj)))
# choose a color set for our trajectories
vcolors = plt.cm.jet_r(np.linspace(0., 1., num_traj))
# prepare the mesh grid
x = np.linspace(xmin, xmax, 15) # divide x axis to intervals
y = np.linspace(ymin, ymax, 15) # divide y axis to intervals
X1 , Y1 = np.meshgrid(x, y) # from these intervals create a grid
# set up the lines, the quiver and the text object
lines = [ax.plot([], [], [], '-', c=c, lw=2)[0] for c in vcolors]
Q = ax.quiver(X1, Y1, [], [], [], pivot='mid', cmap=plt.cm.jet)
text = ax.text(0.02, 0.95, '', fontsize=20, transform=ax.transAxes)
# initialization function: plot the background of each frame. Needs to return each object to be updated
def init():
for line in lines:
line.set_data([], [])
Q.set_UVC([], [], [])
text.set_text("")
return Q, lines, text
# animation function. This is called sequentially
def animate(i):
B = Bvals[i]
DX1, DY1 = dX_dt(A, B, [X1, Y1], t) # compute rate of change of the concentrations on grid points
M = (np.hypot(DX1, DY1)) # norm of the rate of changes
M[ M == 0] = 1. # prevention against divisions by zero
DX1 /= M # we normalize the direction field vector (each has unit length now)
DY1 /= M # we normalize the direction field vector (each has unit length now)
Q.set_UVC(DX1, DY1, M)
# integrate the Brusellator ODE's for the set of trajectories, store them in X
for line, X0i in zip(lines, X0):
X = integrate.odeint(lambda x, t: dX_dt(A, B, x, t), X0i,t)
x, y = X.T # get x and y for current trajectory
line.set_data(x, y)
text.set_text("A={:.2f}, B={:.2f}".format(A, B))
return Q, lines, text
# call the animator. blit=True means only re-draw the parts that have changed.
anim = animation.FuncAnimation(fig, animate, init_func=init, frames=100, interval=30, blit=False)
# instantiate the animator.
#anim = animation.FuncAnimation(fig, animate, init_func=init, frames=1000, interval=200, blit=True)
#HTML(anim.to_html5_video())
rc('animation', html='html5')
plt.close()
anim
```
In the animation above, we see how the direction field gets modified as $B$ is varied. Also shown are several trajectories initialized at various points (I have chosen them on the $Y=X$ line here).
## Notes:
Should you encounter difficulty in running the embedded animation, try to launch Jupyter Notebook using the command:<br>
`jupyter notebook --NotebookApp.iopub_data_rate_limit=10000000000`
## References:
1) Strogatz, S.H (2015). *Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering, Second Edition*, Boulder, USA: Westview Press. <br>
2) https://en.wikipedia.org/wiki/Brusselator
# Immersed Interface Method
---
### Author: Marin Lauber
```python
import numpy as np
import matplotlib.pyplot as plt
import NSsolver as ns
try:
plt.style.use("jupyter")
except OSError:
    print("Using default plotting style")
```
The Immersed Interface Method (IIM) was initially developed for elliptic equations of the form
\begin{equation}
\nabla\cdot(\beta(x)\nabla u(x)) + \kappa(x) u(x) = f(x)
\end{equation}
in a bounded domain $\Omega$, where $u$, $\beta$, $\kappa$ are potentially discontinuous across the interface located at $\alpha$. This type of problem is especially relevant for bi-material problems, such as multiphase flow, and even fluid-structure interaction, where an interface is immersed in a fluid. The codimension of the interface located at $\alpha$ is one, that is, it has one dimension less than the domain $\Omega$.
## 1 Dimensional Problem
Broadly, IIM aims at writing a finite difference approximation of a 1D version of (1)
\begin{equation}
(\beta u_x)_x + \kappa u = f
\end{equation}
as
\begin{equation}
\gamma_{i,1}u_{i-1} + \gamma_{i,2}u_{i} + \gamma_{i,3}u_{i+1} + \kappa_i u_i = f_i + C_i
\end{equation}
where $\gamma_{i,k}$ are coefficients that ensure global $O(h^2)$ accuracy of the numerical discretization. The term $C_i$ is a correction term that accounts for the jump in the variables at $\alpha$. If the point $\alpha$ falls in between two grid points, say $x_{j} < \alpha < x_{j+1}$, then $\forall i\neq j, j+1$ the $\gamma_{j,k}$ take the standard form
\begin{split}
\gamma_{j,1} &= \frac{\beta_{j-1/2}}{h^2}\\
\gamma_{j,2} &= \frac{-(\beta_{j-1/2}+\beta_{j+1/2})}{h^2}\\
\gamma_{j,3} &=\frac{\beta_{j+1/2}}{h^2}\\
C_i &= 0.
\end{split}
The local truncation error of this scheme is
\begin{equation}
T_i = \gamma_{i,1}u(x_{i-1}) + \gamma_{i,2}u(x_{i}) + \gamma_{i,3}u(x_{i+1}) + \kappa_i u(x_i) - f_i = O(h^2)
\end{equation}
To derive formulas of the type (2) valid at $i = j, j+1$ we only need local $O(h)$ accuracy, as only two grid points are involved. Because the underlying function is not smooth, we cannot Taylor expand across the interface; we must do it from each side of it. Taylor expanding for the three neighbouring grid points of the interface, which is located such that $x_{j} < \alpha < x_{j+1}$, gives
\begin{split}
&u_{j-1} = u^- + (x_{j-1} - \alpha)u^-_x + \frac{1}{2}(x_{j-1} - \alpha)^2u^-_{xx} +O(h^3) \\
&u_{j} = u^- + (x_{j} - \alpha)u^-_x + \frac{1}{2}(x_{j} - \alpha)^2u^-_{xx} +O(h^3)\\
&u_{j+1} = u^+ + (x_{j+1} - \alpha)u^+_x + \frac{1}{2}(x_{j+1} - \alpha)^2u^+_{xx} +O(h^3)\\
\end{split}
where
\begin{equation}
u^- = \lim_{x\to \alpha^-}u(x) \qquad\qquad u^+ = \lim_{x\to \alpha^+}u(x).
\end{equation}
We also need $O(h)$ approximations to the remaining terms in (...); those are obtained as
\begin{equation}
k_ju_{j} = k(\alpha) u^-(\alpha) = O(h) \qquad\qquad f_j = f(\alpha) = O(h).
\end{equation}
From the known jump conditions in $u$ and $\beta u_x$ we can get relationships between $u^-$ and $u^+$, and so on:
\begin{equation}
u^+ = u^- +\hat{C} \qquad\qquad u_x^+ = (\beta^-u^-_x +C)/\beta^+.
\end{equation}
Since we assumed that $f$ is continuous, $(\beta u_x)_x + \kappa u$ must also be. This gives us an expression for $u_{xx}^+$
\begin{equation}
u_{xx}^+ = \frac{1}{\beta^+}\left(\beta^-u_{xx}^- + \left(\beta_x^- - \frac{\beta_x^+\beta^-}{\beta^+}\right)u_x^- - \frac{\beta_x^+}{\beta^+}C- \kappa\hat{C}\right).
\end{equation}
Finally we can get an approximation for $f(\alpha)$ from
\begin{equation}
\beta_x^-u_x^- + \beta^-u_{xx}^- + \kappa(\alpha)u^- = f(\alpha).
\end{equation}
Substituting all those expressions into our truncation error expression gives four relationships for a local $O(h)$ approximation of the equation near the discontinuity
\begin{split}
&\gamma_{j,1} + \gamma_{j,2} + \gamma_{j,3} = 0,\\
&(x_{j-1}- \alpha)\gamma_{j,1} + (x_{j}- \alpha)\gamma_{j,2} + \left\{\frac{\beta^-}{\beta^+}(x_{j+1}-\alpha) + \left(\frac{\beta_x^-}{\beta^+} - \frac{\beta_x^+\beta^-}{(\beta^+)^2}\right)\frac{(x_{j+1}-\alpha)^2}{2}\right\}\gamma_{j,3} = \beta_x^+,\\
&\frac{(x_{j-1}-\alpha)^2}{2}\gamma_{j,1} + \frac{(x_{j}-\alpha)^2}{2}\gamma_{j,2} + \frac{(x_{j+1}-\alpha)^2\beta^-}{2\beta^+}\gamma_{j,3} = \beta^-,\\
&C_j = \gamma_{j,3}\left\{\hat{C} + (x_{j+1}-\alpha)\frac{C}{\beta^+} - \frac{1}{2}(x_{j+1}-\alpha)^2\left(\frac{\beta_x^+C}{(\beta^+)^2}-\kappa\frac{\hat{C}}{\beta^+}\right)\right\}.\\
\end{split}
In the context of an immersed structural interface (a membrane) these sets of coefficients are much simpler to obtain, because $\beta$ is continuous across the interface (in this case $\beta$ is the density). We also have $\kappa=0$ and as such only relationship (1) is kept. The two following ones disappear and the last relationship becomes
\begin{equation}
C_j = \frac{1}{h^2}(x_{j+1}-\alpha)C + \frac{\beta}{h^2}\hat{C}
\end{equation}
which is a discretized version of
\begin{equation}
\beta u''(x) = f(x) + C\delta(x-\alpha) + \hat{C}\delta'(x-\alpha).
\end{equation}
For our 1D piston example, the pressure gradient must be
\begin{equation}
p_x = cst. = V
\end{equation}
such that the jump in $\beta p_x = C = 0$. This leaves us with simply
\begin{equation}
C_j = \hat{C}\beta\delta_h'(x-\alpha)
\end{equation}
where $\hat{C}$ is the pressure jump at the interface. The algorithm that is solved using Chorin's projection method is
\begin{split}
u^* = r_{\Delta t}(u_0) \\
\nabla\cdot(\frac{\Delta t}{\rho}\nabla p) = \nabla\cdot u^* + \hat{C}\beta\delta_h'(x-\alpha)\\
u^{n+1} = u^* - \frac{\Delta t}{\rho}\nabla p + \Delta t\hat{C}\beta\delta_h(x-\alpha)
\end{split}
```python
def update(C_h, x, u, V, X, dx, dt, t):
# predict velocity
u_star = u + dt*ns.r(u, dx)
# get pressure
sigma = ns.div(u_star, dx) + C_h*ns.div(ns.kernel((x-X)/dx), dx)
p = ns.solve_pressure(np.ones_like(sigma), sigma, dx, verbose=True)
# correct
u_n = u_star - dt*ns.grad(p, dx) + dt*C_h*ns.kernel((x-X)/dx)
return sigma, p, u_n
```
```python
N = 32
x, dx = np.linspace(-1, 1, N, retstep=True)
xs = x +0.5*dx
X = 0.
V = 1
u0 = np.zeros_like(x)
dt = 1.
# pressure jump is mass of fluid in pipe, in 'cell' units
ch = N*V/dt
print('Pressure jump is: %.3f' % ch)
sigma, p, u_n = update(ch, x, u0, V, X, dx, 1, 1)
print("Interface at X: %.2f" % X)
print(r"L inf: %.3e" % np.max(np.abs(u_n - V)))
```
Pressure jump is: 32.000
Jacobi solver:
res0: 7.914e+00
res: 7.742e-10
iter: 557
Interface at X: 0.00
L inf: 2.006e-09
```python
ns.draw_results(x, xs, X, u0, u_n, p, sigma)
```
This recovers the exact solution, but we have to pass it the correct pressure jump, which usually is what we are trying to solve for.
## IIM-Feedback Forcing
Because this pressure jump supplied to the equation is usually not known a priori, other immersed interface methods have been developed; see _AN IMMERSED INTERFACE METHOD FOR INCOMPRESSIBLE NAVIER–STOKES EQUATIONS_.
Here the idea is to use a Peskin-type forcing to impose the boundary condition, but instead of applying it entirely to the momentum equation, the tangential and normal components are split: the former is applied to the momentum equation while the latter is applied to the pressure Poisson equation
\begin{split}
u^* = r_{\Delta t}(u_0) + F_1 \\
\nabla\cdot(\frac{\Delta t}{\rho}\nabla p) = \nabla\cdot u^* + \nabla\cdot F_2\\
u^{n+1} = u^* - \frac{\Delta t}{\rho}\nabla p + F_2
\end{split}
\begin{split}
F1 = F_\tau\\
F2 = F_n
\end{split}
\begin{equation}
f_n(s,t ) = [p](s,t )
\end{equation}
\begin{equation}
F(x, t) = \int_{s_0}^{s_1}f(s, t)\delta(x - X(s, t))\text{ d}s
\end{equation}
\begin{equation}
f(s, t) = \kappa\left[\xi(s, t) - \chi(s, t)\right]+ \eta\left[W(s, t) - U(s, t)\right]
\end{equation}
\begin{equation}
\nabla\cdot(\frac{\Delta t}{\rho}\nabla p) = \nabla\cdot u^* + C_{ij}
\end{equation}
\begin{equation}
C_{ij} = (\nabla\cdot B)_{ij} = \frac{B^1_{i+1/2,j} - B^1_{i-1/2,j}}{\Delta x} + \frac{B^2_{i,j+1/2} - B^2_{i,j-1/2}}{\Delta y}
\end{equation}
\begin{equation}
B^1_{i-1/2,j} = \frac{[p]_{i-1/2,j}}{\Delta x}
\end{equation}
```python
def kernel(d, e=2):
return np.where(abs(d)<e, 0.5*(1+np.cos(np.pi*d/e))/e, 0)
```
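A quick sanity check (not part of the original derivation, and reusing the `kernel` defined above): sampled on integer grid offsets the nonzero weights are 0.25, 0.5, 0.25, which sum to one, so the cosine kernel behaves as a discrete delta function.
```python
d = np.arange(-3, 4)
print(kernel(d), kernel(d).sum())   # weights sum to 1.0
```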
```python
def update(x, u, V, X, dx, dt, t, eta=10):
# predict velocity
u_star = u + dt*ns.r(u, dx)
# get pressure
d = (x-X)/dx
F2 = kernel(d)*eta*(kernel(d)*u - V)
sigma = ns.div(u_star, dx) + ns.div(F2, dx)
p = ns.solve_pressure(np.ones_like(sigma), sigma, dx, True)
# correct
u_n = u_star - dt*ns.grad(p, dx) + dt*F2
return F2, p, u_n
```
```python
N = 32
x, dx = np.linspace(-1, 1, N, retstep=True)
xs = x +0.5*dx
X = 0.234
V = 1
u0 = np.zeros_like(x)
dt = 1.
sigma, p, u_n = update(x, u0, V, X, dx, 1, 1, 1)
print("Interface at X: %.2f" % X)
print(r"L inf: %.3e" % np.max(np.abs(u_n - V)))
```
Interface at X: 0.23
L inf: 1.495e+00
```python
ns.draw_results(x, xs, X, u0, u_n, p, sigma)
```
# Linear programming with scipy
See https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.linprog.html
```python
import numpy as np
import scipy.optimize
```
Problem examples:
- http://people.brunel.ac.uk/~mastjjb/jeb/or/morelp.html
## Scipy's syntax
Example for a problem of 2 dimensions:
$$
\begin{align}
\min_{x_1,x_2} & \quad \color{\red}{c_1} x_1 + \color{\red}{c_2} x_2 \\
\text{s.t.} & \quad \color{\orange}{A_{1,1}} x_1 + \color{\orange}{A_{1,2}} x_2 \leq \color{\green}{b_1} \\
& \quad \color{\orange}{A_{2,1}} x_1 + \color{\orange}{A_{2,2}} x_2 \leq \color{\green}{b_2} \\
& \quad \color{\purple}{B_{1,1}} \leq x_1 \leq \color{\purple}{B_{1,2}} \\
& \quad \color{\purple}{B_{2,1}} \leq x_2 \leq \color{\purple}{B_{2,2}} \\
\end{align}
$$
$$
\color{\red}{
\boldsymbol{c} = \begin{pmatrix}
c_1 \\
c_2
\end{pmatrix}
}
\quad
\color{\orange}{
\boldsymbol{A} = \begin{pmatrix}
A_{1,1} & A_{1,2} \\
A_{2,1} & A_{2,2}
\end{pmatrix}
}
\quad
\color{\green}{
\boldsymbol{b} = \begin{pmatrix}
b_1 \\
b_2
\end{pmatrix}
}
\quad
\color{\purple}{
\boldsymbol{B} = \begin{pmatrix}
B_{1,1} & B_{1,2} \\
B_{2,1} & B_{2,2}
\end{pmatrix}
}
$$
$$
\text{scipy.optimize.linprog}(\color{\red}{\boldsymbol{c}}, ~ \color{\orange}{\boldsymbol{A}}, ~ \color{\green}{\boldsymbol{b}}, ~ \color{\purple}{\boldsymbol{B}})
$$
## Example 1
$$
\begin{align}
\min_{x_0,x_1} & \quad -x_0 + 4 x_1 \\
\text{s.t.} & \quad -3 x_0 + x_1 \leq 6 \\
& \quad -x_0 - 2 x_1 \geq -4 \\
& \quad x_1 \geq -3
\end{align}
$$
```python
# Coefficients of the linear objective function to be minimized
c = [-1, 4]
# 2-D array which, when matrix-multiplied by x, gives the values of the upper-bound inequality constraints at x.
A = [[-3, 1],
[ 1, 2]]
# 1-D array of values representing the upper-bound of each inequality constraint (row) in A.
b = [6, 4]
# Sequence of (min, max) pairs for each element in x, defining the bounds on that parameter.
# Use None for one of min or max when there is no bound in that direction.
# By default bounds are (0, None) (non-negative).
# If a sequence containing a single tuple is provided, then min and max will be applied to all variables in the problem.
x0_bounds = (None, None)
x1_bounds = (-3, None)
bounds = (x0_bounds,x1_bounds)
scipy.optimize.linprog(c, A_ub=A, b_ub=b, bounds=bounds)
```
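The call above returns a `scipy.optimize.OptimizeResult` object. As a minimal illustration (the variable name `res` is just for this sketch), we can capture it and inspect the solution and the optimal objective value:
```python
# Capture the result object to inspect the solution programmatically
res = scipy.optimize.linprog(c, A_ub=A, b_ub=b, bounds=bounds)
print(res.status, res.message)  # status 0 means the optimization terminated successfully
print(res.x)                    # optimal values of x_0 and x_1
print(res.fun)                  # optimal (minimal) value of the objective
```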
## The carpenter problem
```python
import numpy as np
# Coefficients of the linear objective function to be minimized
c = np.array([3, 5])
# 2-D array which, when matrix-multiplied by x, gives the values of the upper-bound inequality constraints at x.
A_ub = np.array([[3, 2],
[1, 2],
[5, 4]])
# 1-D array of values representing the upper-bound of each inequality constraint (row) in A_ub.
b_ub = np.array([700, 500, 1500])
# Sequence of (min, max) pairs for each element in x, defining the bounds on that parameter.
# Use None for one of min or max when there is no bound in that direction.
# By default bounds are (0, None) (non-negative).
# If a sequence containing a single tuple is provided, then min and max will be applied to all variables in the problem.
bounds = ((0, None), (0, None))
scipy.optimize.linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
```
The optimal solution is obtained for $x_1=100$ and $x_2=200$, with a gain of 1300.
## Exercise 10.1 (search)
We want to find the largest and smallest values in a long list of numbers. Implement
two algorithms, based on:
1. Iterating over the list entries; and
1. First applying a built-in sort operation to the list.
Encapsulate each algorithm in a function. To create lists of numbers for testing use, for example:
```python
x = np.random.rand(1000)
```
### Solution
We first create the list of random numbers
```python
import numpy as np
x = np.random.rand(1000)
```
#### Approach 1
```python
def min_max1(x):
# YOUR CODE HERE
raise NotImplementedError()
return x_min, x_max
print(min_max1(x))
```
#### Approach 2
```python
def min_max2(x):
# YOUR CODE HERE
raise NotImplementedError()
print(min_max2(x))
```
```python
assert min_max1(x) == min_max2(x)
```
In practice, we would use the NumPy functions:
```python
print(np.min(x), np.max(x))
```
## Exercise 10.2 (Newton's method for root finding)
### Background
Newton's method can be used to find a root $x$ of a function $f(x)$ such that
$$
f(x) = 0
$$
A Taylor series expansion of $f$ about $x_{i}$ reads:
$$
f(x_{i+1}) = f(x_{i}) + \left. f^{\prime} \right|_{x_{i}} (x_{i+1} - x_{i}) + O((x_{i+1} - x_{i})^{2})
$$
If we neglect the higher-order terms and set $f(x_{i+1})$ to zero, we have Newton's method:
\begin{align}
x_{i + 1} &= - \frac{f(x_{i})}{f^{\prime}(x_{i})} + x_{i}
\\
x_{i} &\leftarrow x_{i+1}
\end{align}
In Newton's method, the above is applied iteratively until $\left|f(x_{i + 1})\right|$ is below a tolerance value.
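As an illustration of the bare iteration (kept separate from the exercise below; the test function $f(x)=x^2-2$ and all names here are just for this sketch), one Newton loop might look like:
```python
# Minimal sketch of the Newton iteration for f(x) = x^2 - 2 (root at sqrt(2))
x0, tol, max_it = 1.0, 1e-10, 50
x = x0
for it in range(max_it):
    f_val = x**2 - 2.0          # f(x_i)
    if abs(f_val) < tol:
        break
    df_val = 2.0*x              # f'(x_i)
    x = x - f_val/df_val        # Newton update
print(x, it)                     # approximately 1.41421356, reached in a few iterations
```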
### Task
Develop an implementation of Newton's method, with the following three functions in your implementation:
```python
def newton(f, df, x0, tol, max_it):
# Implement here
return x1 # return root
```
where `x0` is the initial guess, `tol` is the stopping tolerance, `max_it` is the maximum number
of iterations, and
```python
def f(x):
# Evaluate function at x and return value
def df(x):
# Evaluate df/dx at x and return value
```
Your implementation should raise an exception if the maximum number of iterations (`max_it`)
is exceeded.
Use your program to find the roots of:
$$
f(x) = \tan(x) - 2x
$$
between $-\pi/2$ and $\pi/2$. Plot $f(x)$ and $f^{\prime}(x)$ on the same graph,
and show the roots computed by Newton's method.
Newton's method can be sensitive to the starting value. Make sure you find the root around $x = 1.2$. What happens if you start at $x = 0.9$? It may help to add a print statement in the iteration loop, showing $x$ and $f$ at each iteration.
### Extension (optional)
For a complicated function we might not know how to compute the derivative, or it may be very complicated
to evaluate. Write a function that computes the *numerical derivative* of $f(x)$ by evaluating
$(f(x + dx) - f(x - dx)) / (2dx)$, where $dx$ is small. How should you choose $dx$?
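For illustration only (the helper name below is not part of the exercise template), a central-difference derivative can be written as:
```python
def numerical_derivative(f, x, dx=1e-7):
    """Central-difference estimate of df/dx at x."""
    # dx must balance truncation error (dx too large) against floating-point
    # round-off (dx too small); values around 1e-6 to 1e-8 usually work well
    # in double precision.
    return (f(x + dx) - f(x - dx))/(2.0*dx)
```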
### Solution
We first implement a Newton solver function:
```python
import numpy as np
def newton(f, df, x, tol=1e-8, max_it=20):
"""Find root of equation defined by function f(x) where df(x) is
    first derivative and x is the initial guess. Optional arguments tol
(tolerance) and max_it (maximum number of iterations)"""
# YOUR CODE HERE
raise NotImplementedError()
```
We now provide implementations of `f` and `df`, and find the roots:
```python
def f(x):
# YOUR CODE HERE
raise NotImplementedError()
def df(x):
# YOUR CODE HERE
raise NotImplementedError()
```
```python
# YOUR CODE HERE
raise NotImplementedError()
```
We can visualise the result:
```python
%matplotlib inline
import matplotlib.pyplot as plt
# Plot f and df/dx
x = np.linspace(-1.5, 1.5, 100)
plt.plot(x, f(x), label='$f(x)$')
plt.plot(x, df(x), label="$f^{\prime}(x)$")
# Add location of roots to plot
# YOUR CODE HERE
raise NotImplementedError()
plt.show()
```
For the extension, we can replace the function `df(x)` with a new version
```python
def df(x):
# Try changing dx to 1e-15 or smaller
dx = 1e-9
# YOUR CODE HERE
raise NotImplementedError()
```
```python
# Find roots near -1.2, 0.1, and 1.2
xroots = np.array((newton(f, df, -1.2),
newton(f, df, 0.1),
newton(f, df, 1.2)))
assert np.isclose(xroots, [-1.16556119e+00, 2.08575213e-10, 1.16556119e+00]).all()
```
```python
# Plot f, f' and roots
# YOUR CODE HERE
raise NotImplementedError()
```
In practice, we could use the Newton function `scipy.optimize.newton` from SciPy (http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.newton.html) rather than implementing our own function.
## Exercise 10.3 (optional, low pass image filter)
Images files can be loaded and displayed with Matplotlib. An imported image is stored as a
three-dimensional NumPy array of floats. The shape of the array is `[0:nx, 0:ny, 0:3]`.
where `nx` is the number of pixels in the $x$-direction, `ny` is the number of pixels in the $y$-direction,
and the third axis is for the colour component (RGB: red, green and blue) intensity. See http://matplotlib.org/users/image_tutorial.html for more background.
Below we fetch an image and display it:
```python
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
# Import image
img = mpimg.imread('https://raw.githubusercontent.com/matplotlib/matplotlib.github.com/master/_images/stinkbug.png')
# Check type and shape
print(type(img))
print("Image array shape: {}".format(img.shape))
# Display image
plt.imshow(img);
```
The task is to write a *function* that applies a particular low-pass filter algorithm to an image array
and returns the filtered image. With this particular filter, the value of a pixel in the filtered image
is equal to the average value of the four neighbouring pixels in the original image. For the `[i, j, :]` pixel,
the neighbours are `[i, j+1, :]`, `[i, j-1, :]`, `[i+1, j, :]` and `[i-1, j, :]`.
Run the filter algorithm multiple times on the above image to explore the effect of the filter.
*Hint*: To create a NumPy array of zeros, `B`, with the same shape as array `A`, use:
```python
import numpy as np
B = np.zeros_like(A)
```
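A minimal sketch of one possible filter function is given below (the function name and the handling of the image border, which is simply left unfiltered here, are choices made for this sketch):
```python
def low_pass(A):
    """Return a filtered copy of image array A using the 4-neighbour average."""
    B = np.zeros_like(A)
    B[:] = A  # keep border pixels unchanged
    # each interior pixel becomes the average of its four neighbours
    B[1:-1, 1:-1, :] = (A[1:-1, 2:, :] + A[1:-1, :-2, :]
                        + A[2:, 1:-1, :] + A[:-2, 1:-1, :])/4.0
    return B
```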
```python
# YOUR CODE HERE
raise NotImplementedError()
```
```python
import sympy as sm
sm.init_printing()
from pchem import solve
```
```python
# Ideal gas law written as P*V - n*R*T = 0 with R = 0.083145 L·bar/(mol·K),
# so P is in bar, V in litres, T in kelvin, and the solved n is in moles
P, V, n, R, T = sm.symbols('P V n R T', positive=True)
subs = dict(
P=2,
V=0.1,
R=0.083145,
T=275,
n=1,
)
gas_law = P*V - n * R *T
n1 = solve(gas_law, n, subs)
```
```python
R_J = 8.3145  # gas constant in J/(mol*K)
n1 * 5/2*R_J*(550-275)
```
```python
sm.diff(5/2*R*T, T)
```
```python
```
<p align="center">
</p>
## Data Analytics
### Basic Bivariate Statistics in Python
#### Michael Pyrcz, Associate Professor, University of Texas at Austin
##### [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1)
### Data Analytics: Basic Bivariate Statistics
Here's a demonstration of calculation of bivariate statistics in Python. This demonstration is part of the resources that I include for my courses in Spatial / Subsurface Data Analytics and Geostatistics at the Cockrell School of Engineering and Jackson School of Geosciences at the University of Texas at Austin.
We will cover the following statistics:
#### Bivariate Statistics
* Covariances
* Pearson Product Momment Correlation Coefficient
* Spearman Rank Correlation Coefficient
I have a lecture on these bivariate statistics available on [YouTube](https://www.youtube.com/watch?v=wZwYEDqB4A4&list=PLG19vXLQHvSB-D4XKYieEku9GQMQyAzjJ&index=21&t=0s).
#### Getting Started
Here's the steps to get setup in Python with the GeostatsPy package:
1. Install Anaconda 3 on your machine (https://www.anaconda.com/download/).
2. From Anaconda Navigator (within Anaconda3 group), go to the environment tab, click on base (root) green arrow and open a terminal.
3. In the terminal type: pip install geostatspy.
4. Open Jupyter and in the top block get started by copy and pasting the code block below from this Jupyter Notebook to start using the geostatspy functionality.
You will need to copy the data file to your working directory. The dataset is available on my GitHub account in my GeoDataSets repository at:
* Tabular data - [2D_MV_200wells.csv](https://github.com/GeostatsGuy/GeoDataSets/blob/master/2D_MV_200wells.csv)
#### Importing Packages
We will need some standard packages. These should have been installed with Anaconda 3.
```python
import numpy as np # ndarrys for gridded data
import pandas as pd # DataFrames for tabular data
import os # set working directory, run executables
import matplotlib.pyplot as plt # plotting
from scipy.stats import pearsonr # Pearson product moment correlation
from scipy.stats import spearmanr # spearman rank correlation
```
#### Set the Working Directory
I always like to do this so I don't lose files and to simplify subsequent read and writes (avoid including the full address each time). Set this to your working directory, with the above mentioned data file.
```python
os.chdir("d:/PGE383") # set the working directory
```
#### Loading Data
Let's load the provided multivariate, spatial dataset. '2D_MV_200wells.csv' is available at https://github.com/GeostatsGuy/GeoDataSets. It is a comma delimited file with X and Y coordinates,facies 1 and 2 (1 is sandstone and 2 interbedded sand and mudstone), porosity (fraction), permeability (mDarcy) and acoustic impedance (kg/m2s*10^6). We load it with the pandas 'read_csv' function into a data frame we called 'df' and then preview it by printing a slice and by utilizing the 'head' DataFrame member function (with a nice and clean format, see below).
```python
df = pd.read_csv("2D_MV_200wells.csv") # read a .csv file in as a DataFrame
#print(df.iloc[0:5,:])                                        # display first 5 samples in the table as a preview
df.head() # we could also use this command for a table preview
```
|   | X | Y | facies_threshold_0.3 | porosity | permeability | acoustic_impedance |
|---|---|---|---|---|---|---|
| 0 | 565 | 1485 | 1 | 0.1184 | 6.170 | 2.009 |
| 1 | 2585 | 1185 | 1 | 0.1566 | 6.275 | 2.864 |
| 2 | 2065 | 2865 | 2 | 0.1920 | 92.297 | 3.524 |
| 3 | 3575 | 2655 | 1 | 0.1621 | 9.048 | 2.157 |
| 4 | 1835 | 35 | 1 | 0.1766 | 7.123 | 3.979 |
Let's extract two of the features, porosity and permeability, into 1D ndarrays and do our statistics on them.
* then we can use NumPy's statistics methods
```python
por = df['porosity'].values
perm = df['permeability'].values
```
Let's take a quick look at the data. It is always a good idea to visualize data before we calculate statistics.
* check for linearity with respect to the related assumptions for each measure
* check for outliers
```python
plt.scatter(por,perm,c = 'red',alpha=0.2, s = 20,edgecolors = 'black')
plt.xlabel('Porosity (fraction)'); plt.ylabel('Permeability (mD)')
plt.title('Permeability vs. Porosity'); plt.grid()
```
Now let's go through all the bivariate statistics listed above one-by-one.
#### Bivariate Statistics
We will cover a variety of measures of correlation.
##### Covariance
\begin{equation}
C_{X,Y} = \frac{\sum^n_{i=1}(x_i - \overline{x})(y_i - \overline{y})}{(n-1)}
\end{equation}
We can use NumPy to calculate the covariance matrix including:
* sample variances on the diagonal
* sample covariance on the off-diagonal
We must put the two features into a $2 \times n$ array.
```python
cov_matrix = np.cov(np.array([por,perm]))
print('Porosity sample variance is ' + str(round(cov_matrix[0,0],8)))
print('Permeability sample variance is ' + str(round(cov_matrix[1,1],2)))
print('Sample covariance of permeability and porosity is ' + str(round(cov_matrix[0,1],2)))
```
Porosity sample variance is 0.00108557
Permeability sample variance is 4156.4
Sample covariance of permeability and porosity is 1.07
##### Pearson Product Moment Correlation Coefficient
\begin{equation}
C_{X,Y} = \frac{\sum^n_{i=1}(x_i - \overline{x})(y_i - \overline{y})}{(n-1)\sigma_X \sigma_Y}
\end{equation}
We can use NumPy to calculate the correlation matrix including:
* standardized sample variances (equal to 1.0) on the diagonal
* sample correlation on the off-diagonal
We must put the two features into a $2 \times n$ array.
```python
corr_matrix = np.corrcoef(np.array([por,perm]))
print('Standardized porosity sample variance is ' + str(round(corr_matrix[0,0],8)))
print('Standardized permeability sample variance is ' + str(round(corr_matrix[1,1],2)))
print('Sample Pearson product moment correlation coefficient of permeability and porosity is ' + str(round(corr_matrix[0,1],2)))
```
Standardized porosity sample variance is 1.0
Standardized permeability sample variance is 1.0
Sample Pearson product moment correlation coefficient of permeability and porosity is 0.51
We can use Scipy to calculate the product moment correlation between 2 1D, paired arrays.
* we also get the p-value for the significance of the measure.
```python
corr, corr_p_value = pearsonr(por,perm)
print('Sample Pearson product moment correlation coefficient of permeability and porosity is ' + str(round(corr,2)))
print('Sample Pearson product moment correlation coefficient of permeability and porosity p-value is ' + str(round(corr_p_value,4)))
```
Sample Pearson product moment correlation coefficient of permeability and porosity is 0.51
Sample Pearson product moment correlation coefficient of permeability and porosity p-value is 0.0
##### Spearman Rank Correlation Coefficient
\begin{equation}
\rho_{R_X,R_Y} = \frac{\sum^n_{i=1}(R_{x_i} - \overline{R_x})(R_{y_i} - \overline{R_y})}{(n-1)\sigma_{R_X} \sigma_{R_Y}}
\end{equation}
We can use Scipy to calculate the rank correlation between 2 1D, paired arrays.
* we also get the p-value for the significance of the measure.
```python
rank_corr = spearmanr(por,perm)[0]
rank_corr_p_value = spearmanr(por,perm)[1]
print('Sample Spearman rank correlation coefficient of permeability and porosity is ' + str(round(rank_corr,2)))
print('Sample Spearman rank correlation coefficient of permeability and porosity p-value is ' + str(round(rank_corr_p_value,4)))
```
Sample Spearman rank correlation coefficient of permeability and porosity is 0.82
Sample Spearman rank correlation coefficient of permeability and porosity p-value is 0.0
#### Comments
This was a basic demonstration of bivariate statistics in Python.
I have other demonstrations on the basics of working with DataFrames, ndarrays, univariate statistics, plotting data, declustering, data transformations, trend modeling and many other workflows available at [Python Demos](https://github.com/GeostatsGuy/PythonNumericalDemos) and a Python package for data analytics and geostatistics at [GeostatsPy](https://github.com/GeostatsGuy/GeostatsPy).
I hope this was helpful,
*Michael*
#### The Author:
### Michael Pyrcz, Associate Professor, University of Texas at Austin
*Novel Data Analytics, Geostatistics and Machine Learning Subsurface Solutions*
With over 17 years of experience in subsurface consulting, research and development, Michael has returned to academia driven by his passion for teaching and enthusiasm for enhancing engineers' and geoscientists' impact in subsurface resource development.
For more about Michael check out these links:
#### [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1)
#### Want to Work Together?
I hope this content is helpful to those that want to learn more about subsurface modeling, data analytics and machine learning. Students and working professionals are welcome to participate.
* Want to invite me to visit your company for training, mentoring, project review, workflow design and / or consulting? I'd be happy to drop by and work with you!
* Interested in partnering, supporting my graduate student research or my Subsurface Data Analytics and Machine Learning consortium (co-PIs including Profs. Foster, Torres-Verdin and van Oort)? My research combines data analytics, stochastic modeling and machine learning theory with practice to develop novel methods and workflows to add value. We are solving challenging subsurface problems!
* I can be reached at mpyrcz@austin.utexas.edu.
I'm always happy to discuss,
*Michael*
Michael Pyrcz, Ph.D., P.Eng. Associate Professor The Hildebrand Department of Petroleum and Geosystems Engineering, Bureau of Economic Geology, The Jackson School of Geosciences, The University of Texas at Austin
#### More Resources Available at: [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1)
```python
```
# ***Introduction to Radar Using Python and MATLAB***
## Andy Harrison - Copyright (C) 2019 Artech House
<br/>
# Stratified Sphere Radar Cross Section
***
Referring to Section 7.4.1.5, Mie gives the exact solution for scattering from a sphere. The solution is composed of vector wave functions defined in a spherical coordinate system. The terms of the Mie series are obtained from boundary value techniques. Therefore, the Mie formulation may be employed regardless of the composition of the sphere. To calculate the radar cross section of a sphere, use the Mie formulation along with far field approximations to give (Equations 7.42 and 7.43)
\begin{align}
S_1(\theta_o) &= \sum\limits_{n=1}^\infty(j)^{n+1}\Big[A_n\frac{P_n^1(\cos\theta_o)}{\sin\theta_o} - jB_n\frac{d}{d\theta_o}P_n^1(\cos\theta_o)\Big], \\ \nonumber \\
S_2(\theta_o) &= \sum\limits_{n=1}^\infty(j)^{n+1}\Big[A_n\frac{d}{d\theta_o}\frac{P_n^1(\cos\theta_o)}{\sin\theta_o} - jB_nP_n^1(\cos\theta_o)\Big],
\end{align}
where $P_n^1$ is the associated Legendre polynomial and may be calculated using the SciPy implementation ***scipy.special.lpmn(m, n, z)***. $S_1(\theta_o)$ and $S_2(\theta_o)$ are the complex far-field scattered radiation values for the $\hat{\theta}$ and $\hat{\phi}$ directions. The radar cross section for the $\hat{\theta}$ and $\hat{\phi}$ polarization states is then found to be (Equations 7.44 and 7.45)
\begin{align}\label{eq:rcs_mie}
\sigma_\theta &= \frac{4\pi}{k_0^2}S_1(\theta_o)\cos^2(\phi_0) \hspace{0.5in} \text{(m}^2\text{)}, \\ \nonumber \\
\sigma_\phi &= \frac{4\pi}{k_0^2}S_2(\theta_o)\sin^2(\phi_0) \hspace{0.5in} \text{(m}^2\text{)}.
\end{align}
For the $N$-layer concentric sphere, use the Mie coefficients of the following form (Equations 7.46 and 7.47)
\begin{align}\label{eq:mie_coefficients_layered}
A_n = -(j)^n &\frac{2n+1}{n(n+1)}\frac{k_0a_0J_n(k_0a_0) + jZ_n(k_0a_0)\left(k_0a_0J_n^\prime(k_0a_0)\right)}{k_0a_0H_n(k_0a_0) + jZ_n(k_0a_0)\left(k_0a_0H_n^\prime(k_0a_0)\right)}, \\ \nonumber \\
B_n = (j)^n &\frac{2n+1}{n(n+1)}\frac{k_0a_0J_n(k_0a_0) + jY_n(k_0a_0)\left(k_0a_0J_n^\prime(k_0a_0)\right)}{k_0a_0H_n(k_0a_0) + jY_n(k_0a_0)\left(k_0a_0H_n^\prime(k_0a_0)\right)}.
\end{align}
Ruck et al. showed that the modal surface impedance and admittance can be derived using an iterative technique similar to the method used for transmission lines. To begin, the impedance at the interface between the core and the first layer, $Z_n^N$, is determined independently. Then, the impedance at the second interface, $Z_n^{N-1}$, is determined from $Z_n^N$. This process continues until the impedance at the outermost surface, $Z_n^0$, is found. Then $Z_n(k_0a_0) = j(Z_n^0/\eta)$. Following the same process for the admittance, $Y_n(k_0a_0)$ may also be calculated.
The impedance and admittance are used in the Mie coefficients of (7.46) for the scattering radiation calculation in (7.42). Finally, the radar cross section is obtained from (7.44).
***
Begin by getting the library path
```python
import lib_path
```
Set the operating frequency (Hz), the radii (m), the relative permeabilities, the relative permittivities, the number of modes and the flag for perfectly conducting core
```python
frequency = 1e9
radius = [1.0, 1.25]
mu_r = [1.0, 1.0]
eps_r = [1.0, 4.0]
number_of_modes = 60
pec = True
```
Size the ordered arrays
```python
from numpy import ones
nr = len(radius)
mu = ones(nr + 1)
eps = ones(nr + 1)
ra = ones(nr)
```
Set up the parameters in the correct order
```python
i = 0
for r in radius:
ra[nr - 1 - i] = float(r)
i += 1
i = 0
for m, e in zip(mu_r, eps_r):
mu[nr - i] = float(m)
eps[nr - i] = float(e)
i += 1
```
Set the observation angles (radians) using the `linspace` routine from `scipy`
```python
from numpy import linspace
from scipy.constants import pi
observation_angle = linspace(0, pi, 721)
```
Calculate the coefficients for the sphere
```python
from Libs.rcs.stratified_sphere import coefficients
An, Bn = coefficients(frequency, eps, mu, ra, number_of_modes, pec)
```
Calculate the radar cross section (m^2) for the stratified sphere
```python
from Libs.rcs.stratified_sphere import radar_cross_section
from numpy import array
et = array([radar_cross_section(frequency, oa, 0, An, Bn) for oa in observation_angle])
ep = array([radar_cross_section(frequency, oa, 0.5 * pi, An, Bn) for oa in observation_angle])
```
Display the radar cross section (dBsm) for the stratified sphere using the `matplotlib` routines
```python
from matplotlib import pyplot as plt
from numpy import log10, degrees
# Set the figure size
plt.rcParams["figure.figsize"] = (15, 10)
# Display the results
plt.plot(degrees(observation_angle), 20.0 * log10(abs(ep[:, 1])), '', label='TE')
plt.plot(degrees(observation_angle), 20.0 * log10(abs(et[:, 0])), '--', label='TM')
# Set the plot title and labels
plt.title('RCS vs Bistatic Angle', size=14)
plt.ylabel('RCS (dBsm)', size=12)
plt.xlabel('Observation Angle (deg)', size=12)
plt.ylim(min(20.0 * log10(abs(et[:,0]))) - 3, max(20.0 * log10(abs(et[:,0]))) + 3)
# Set the tick label size
plt.tick_params(labelsize=12)
# Turn on the grid
plt.grid(linestyle=':', linewidth=0.5)
# Set the legend
plt.legend(loc='upper left', prop={'size': 10})
```
```python
```
<h3>Mathematical Simulation 2018</h3>
<div style="background-color:#0099cc;">
<font color = white>
<ul>
<li>Lázaro Alonso </li>
<li>Email: `alonsosilva@iteso.mx, lazarus.alon@gmail.com`</li>
</ul>
</font>
</div>
<!--NAVIGATION-->
< [git GitHub tutorial 2](Clase2_GitTutorial2.ipynb) | [Guide](Clase0_GuiaSimulacionM.ipynb) | [Linear Programming](Clase5_ProgramacionLineal.ipynb) >
___
### Optimization of differentiable scalar functions with `SymPy`
> - Optimization yields elegant solutions both in theory and in certain applications.
> - Optimization theory uses elements starting from elementary calculus and basic linear algebra, and then extends to functional and convex analysis.
> - Applications of optimization involve science, engineering, economics, finance, and industry.
> - The wide and growing use of optimization makes it essential for students and professionals in any branch of science and technology.
**Reference:**
- http://www.math.uwaterloo.ca/~hwolkowi//henry/reports/talks.d/t06talks.d/06msribirs.d/optimportance.shtml
Some applications are:
1. Engineering
 - Finding the equilibrium composition of a mixture of different atoms.
 - Path planning for a robot (or unmanned aerial vehicle).
 - Planning the optimal workforce for a construction site or production plant.
2. Optimal allocation of resources.
 - Allocation of flight routes.
 - Finding an optimal diet.
 - Optimal route planning.
3. Financial optimization
 - Risk management.
 - Investment portfolios.
In this class we will look at basic aspects of optimization. Specifically, we will see how to obtain the maxima and minima of a scalar function of one variable (as in differential calculus).
___
## 0. Libraries we will use
As we said in the first class, `python` is the (high-level) programming language. However, `python` only has a few primitive commands, and to make it easier to use in our engineering simulation activities, other people have already written certain libraries for us.
### 0.1 `NumPy`
`NumPy` (Numerical Python) is the fundamental library for scientific (numerical) computing with `Python`. It contains, among other things:
- a very powerful N-dimensional array object
- sophisticated functions
- linear algebra, Fourier transform, and random number functions.
For these reasons, `NumPy` is widely used in the scientific and engineering community (because of how it handles vector quantities). It is likewise used to store data. For our purposes, it can be used freely.
**Reference:**
- http://www.numpy.org/
`NumPy` is already included in the standard Anaconda installation by default. To start using it, we just need to import it:
```python
# import the numpy library
import numpy as np
```
### 0.2 `SymPy`
`SymPy` (Symbolic Python) is a `Python` library for symbolic mathematics. Its goal is to become a full-featured computer algebra system, while keeping the code as simple as possible so that it stays understandable.
**Reference:**
- http://www.sympy.org/en/index.html
`SymPy` is already included in the standard Anaconda installation by default. To start using it, we just need to import it:
```python
# import the sympy library
import sympy as sym
# print output in latex format
sym.init_printing(use_latex='mathjax')
```
The ability to print in LaTeX format that `SymPy` gives us through the `mathjax` project makes `SymPy` a very attractive tool...
Note that `SymPy` and `NumPy` have functions with the same name, but they take different data types...
We will explain the use of the syntax `from numpy import *` and its dangers (not recommended).
```python
# differences between sympy and numpy functions
sym.var('x')
sym.sin(x)*np.sin(3)
```
$$0.141120008059867 \sin{\left (x \right )}$$
```python
# import with * and see what happens
# from numpy import *
# from sympy import *
# Not recommended
```
### 0.3 `PyPlot` from `matplotlib`
The `PyPlot` module of the `matplotlib` library contains functions that allow us to generate a large number of plots quickly. The functions in this module are named the same as the plotting functions in `Matlab`.
**Reference:**
- https://matplotlib.org/api/pyplot_summary.html
```python
# import matplotlib.pyplot
import matplotlib.pyplot as plt
# command so that plots are displayed inline in the notebook
%matplotlib inline
```
```python
### Style, plots
```
```python
```
Now that we have reviewed all the libraries we will use, let's start with the class itself...
___
We base all our results on the following theorems:
## 1. Fermat's theorem (analysis)
If a function $f(x)$ attains a local maximum or minimum at $x=c$, and if the derivative $f'(c)$ exists at the point $c$, then $f'(c) = 0$.
### Example
We know that the function $f(x)=x^2$ has a global minimum at $x=0$, since
$$f(x)=x^2\geq0,\qquad\text{and}\qquad f(x)=x^2=0 \qquad\text{if and only if}\qquad x=0.$$
```python
# declare the real variable x
#t = sym.Symbol('x')
#t
sym.var('x', real = True)
```
$$x$$
```python
# now declare f=x^2 and display it
f = x**2
f
```
$$x^{2}$$
```python
# differentiate f with respect to x and display
df = sym.diff(f, x)
df
```
$$2 x$$
```python
# solve f'(x)=0 and display the solutions
xc = sym.solve(df, x)
xc
```
$$\left [ 0\right ]$$
Let's look at the graph...
```python
# convert f into a function that can be evaluated numerically
# (the lambdify function from the sympy library)
f_num = sym.lambdify([x], f, 'numpy')
x_vec = np.linspace(-5, 5, 100)
# plot
plt.plot(x_vec, f_num(x_vec))
plt.xlabel('$x$')
plt.ylabel('$x^2$')
plt.show()
```
See the differences between f and f_num
```python
# try evaluating f and f_num
```
**Another way to do the above**
The concept of a function...
```python
def f(x):
return x**2
```
```python
f(x)
```
$$x^{2}$$
```python
f(np.array([5.124, 2.5436]))
```
array([ 26.255376 , 6.46990096])
```python
df = sym.diff(f(x), x)
df
```
$$2 x$$
```python
x_c = sym.solve(df, x)
x_c[0]
```
$$0$$
```python
# plot
plt.plot(x_vec, f(x_vec))
plt.xlabel('$x$')
plt.ylabel('$x^2$')
plt.show()
```
The converse of the above theorem is not true.
### Activity
Consider $g(x)=x^3$.
- Using `sympy`, show that $g'(0)=0$.
- However, rule out $x=0$ being an extremum of $g(x)$ by looking at its **graph**.
```python
def g(x):
return x**3
```
```python
dg = sym.diff(g(x), x)
dg
```
$$3 x^{2}$$
```python
puntos_criticos = sym.solve(dg, x)
puntos_criticos
```
$$\left [ 0\right ]$$
```python
dg_eval0 = dg.subs(x, puntos_criticos[0])
dg_eval0
```
$$0$$
```python
# plot
plt.plot(x_vec, g(x_vec))
plt.xlabel('$x$')
plt.ylabel('$x^3$')
plt.show()
```
## 2. Second derivative test
Let $f(x)$ be a function such that $f'(c)=0$ and whose second derivative exists on an open interval containing $c$.
- If $f''(c)>0$, then $f(c)$ is a relative minimum.
- If $f''(c)<0$, then $f(c)$ is a relative maximum.
- If $f''(c)=0$, the test is inconclusive.
### Example
Show, using `sympy`, that the function $f(x)=x^2$ has a relative minimum at $x=0$.
We already saw that $f'(0)=0$. Note that:
```python
f = x**2
#d2f = sym.diff(f, x, x)
d2f = sym.diff(f, x, 2)
d2f
```
```python
d2f>0
```
Therefore, by the second derivative test, $f(0)=0$ is a relative minimum (in fact, the global minimum).
### Example
What happens with $g(x)=x^3$ when we try to apply the second derivative test? (use `sympy`).
```python
g(x)
```
```python
d2g = sym.diff(g(x), x, x)
d2g
```
```python
d2g.subs(x, 0)
```
### Activity
What happens with $h(x)=x^4$ when we try to apply the second derivative test?
```python
def h(x):
return x**4
```
```python
dh = sym.diff(h(x), x, 1)
dh
```
```python
p_c_h = sym.solve(dh, x)
p_c_h
```
```python
d2h = sym.diff(h(x), x, 2)
d2h
```
```python
d2h.subs(x, p_c_h[0])
```
Since the second derivative at the critical point is zero, the second derivative test is inconclusive.
## 3. Method for finding the absolute extrema of a continuous function y=f(x) on [a,b]
- Find all the critical values $c_1, c_2, c_3, \dots, c_n$ in $(a,b)$.
- Evaluate $f$ at all the critical values and at the endpoints $x=a$ and $x=b$.
- The largest and the smallest of the values in the list $f(a), f(b), f(c_1), f(c_2), \dots, f(c_n)$ are, respectively, the absolute maximum and the absolute minimum of f on the interval [a,b].
### Example
Find the absolute extrema of $f(x)=x^2-6x$ on $\left[0,5\right]$.
We obtain the critical points of $f$ on $\left[0,5\right]$:
```python
f = x**2-6*x
f
```
```python
df = sym.diff(f, x)
df
```
```python
x_c = sym.solve(df, x)
x_c
```
We evaluate $f$ at the endpoints and at the critical points:
```python
f.subs(x, 0), f.subs(x, 5), f.subs(x, x_c[0])
```
We conclude that the absolute maximum of $f$ on $\left[0,5\right]$ is $0$ and is attained at $x=0$, and that the absolute minimum is $-9$ and is attained at $x=3$.
```python
f_num = sym.lambdify([x], f, 'numpy')
x_vec = np.linspace(0, 5, 100)
plt.figure(figsize=(8,6))
plt.plot(x_vec, f_num(x_vec), 'k--', label = '$y=f(x)$')
plt.plot([0], [0], '*r', label = '$(0,0=\max_{0\leq x\leq 5} f(x))$')
plt.plot([3], [-9], '*b', label = '$(3,-9=\min_{0\leq x\leq 5} f(x))$')
plt.legend(loc='best')
plt.xlabel('x')
plt.grid()
plt.show()
```
### Activity
Find the absolute extreme values of $h(x)=x^3-3x$ on $\left[-2.2,1.8\right]$, using `sympy`. Show them on a graph.
```python
sym.var('x', real = True)
```
```python
def h(x):
return x**3-3*x
```
```python
dh = sym.diff(h(x), x)
dh
```
```python
p_c = sym.solve(dh, x)
p_c
```
```python
h(-1), h(1), h(-2.2), h(1.8)
```
```python
x_vec = np.linspace(-2.2, 1.8, 50)
plt.figure(figsize=(7,5))
plt.plot(x_vec, h(x_vec), 'b:', label = '$y=x^3-3x$')
plt.plot([-2.2], [h(-2.2)], 'mo', label = '(-2.2,-4.048)')
plt.plot([-1], [h(-1)], 'yo', label = '(-1,2)')
plt.legend(loc='best')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.grid()
plt.show()
```
### In several variables...
The procedure is analogous.
If a function $f:\mathbb{R}^n\to\mathbb{R}$ attains a local maximum or minimum at $\boldsymbol{x}=\boldsymbol{c}\in\mathbb{R}^n$, and $f$ is differentiable at the point $\boldsymbol{x}=\boldsymbol{c}$, then $\left.\frac{\partial f}{\partial \boldsymbol{x}}\right|_{\boldsymbol{x}=\boldsymbol{c}}=\boldsymbol{0}$ (all the partial derivatives at the point $\boldsymbol{x}=\boldsymbol{c}$ are zero).
**Second derivative test:** to see whether it is a maximum or a minimum, take the matrix of second derivatives (the Hessian) and check whether it is negative or positive definite, respectively.
If the problem is restricted to a certain region, there are specific techniques. The most general, but also the most complex, is the method of **Lagrange multipliers**, sketched briefly below.
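As a small illustration of the idea (the particular constrained problem below, minimizing $x^2+y^2$ subject to $x+y=1$, is chosen only for this sketch):
```python
# Sketch: Lagrange multipliers with SymPy for min x^2 + y^2 subject to x + y = 1
sym.var('x y lam', real=True)
f = x**2 + y**2
g = x + y - 1
L = f - lam*g
# stationarity of the Lagrangian in x, y, and the multiplier recovers the constraint
sol = sym.solve([sym.diff(L, x), sym.diff(L, y), sym.diff(L, lam)], [x, y, lam], dict=True)
sol  # expected: x = y = 1/2 with multiplier lam = 1
```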
```python
sym.var('x y')
x, y
```
```python
def f(x, y):
return x**2 + y**2
```
```python
dfx = sym.diff(f(x,y), x)
dfy = sym.diff(f(x,y), y)
dfx, dfy
```
```python
xy_c = sym.solve([dfx, dfy], [x, y])
xy_c
```
```python
x_c, y_c = xy_c[x], xy_c[y]
x_c, y_c
```
```python
d2fx = sym.diff(f(x,y), x, 2)
d2fy = sym.diff(f(x,y), y, 2)
dfxy = sym.diff(f(x,y), x, y)
Jf = sym.Matrix([[d2fx, dfxy], [dfxy, d2fy]])
Jf.eigenvals()
```
```python
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
```
```python
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
x = np.linspace(-2, 2, 100)
y = x
X, Y = np.meshgrid(x, y)
ax.plot_surface(X, Y, f(X, Y))
ax.plot([x_c], [y_c], [f(x_c,y_c)], '*r');
```
### Homework.
1. In pairs, you will do a collaborative project.
2. Introduction to numpy. Obtain the absolute maxima and minima, using `sympy`, of the given functions on the given intervals, and plot each function on that interval, marking the absolute maximum and minimum points.
**Set the due date for the homework**
___
<!--NAVIGATION-->
< [git GitHub tutorial 2](Clase2_GitTutorial2.ipynb) | [Guide](Clase0_GuiaSimulacionM.ipynb) | [Linear Programming](Clase5_ProgramacionLineal.ipynb) >
<footer id="attribution" style="float:right; color:#808080; background:#fff;">
Created with Jupyter by Lázaro Alonso.
<Strong> Copyright: </Strong> Public Domain as in [CC](https://creativecommons.org/licenses/by/2.0/) (Except where otherwise noted)
</footer>
## Classical Mechanics - Week 9
### Last Week:
- We saw how a potential can be used to analyze a system
- Gained experience with plotting and integrating in Python
### This Week:
- We will study harmonic oscillations using packages
- Further develope our analysis skills
- Gain more experience wtih sympy
```python
# Let's import packages, as usual
import numpy as np
import matplotlib.pyplot as plt
import sympy as sym
sym.init_printing(use_unicode=True)
```
Let's analyze a spring using sympy. It will have mass $m$, spring constant $k$, angular frequency $\omega_0$, initial position $x_0$, and initial velocity $v_0$.
The motion of this harmonic oscillator is described by the equation:
eq 1.) $m\ddot{x} = -kx$
This can be solved as
eq 2.) $x(t) = A\cos(\omega_0 t - \delta)$, $\qquad \omega_0 = \sqrt{\dfrac{k}{m}}$
Use SymPy below to plot this function. Set $A=2$, $\omega_0 = \pi/2$ and $\delta = \pi/4$.
(Refer back to ***Notebook 7*** if you need to review plotting with SymPy.)
```python
# Plot for equation 2 here
```
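One possible way to do this (the symbol name `t` and the plotting range are choices made for this sketch) is:
```python
# A possible sketch: plot x(t) = A*cos(w0*t - delta) with SymPy
t = sym.Symbol('t', positive=True)
A, w0, delta = 2, sym.pi/2, sym.pi/4
x_of_t = A*sym.cos(w0*t - delta)
sym.plot(x_of_t, (t, 0, 10), xlabel='$t$', ylabel='$x(t)$');
```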
## Q1.) Calculate analytically the initial conditions, $x_0$ and $v_0$, and the period of the motion for the given constants. Is your plot consistent with these values?
✅ Double click this cell, erase its content, and put your answer to the above question here.
#### Now let's make plots for underdamped, critically-damped, and overdamped harmonic oscillators.
Below are the general equations for these oscillators:
- Underdamped, $\beta < \omega_0$ :
eq 3.) $x(t) = A e^{-\beta t}cos(\omega ' t) + B e^{-\beta t}sin(\omega ' t)$ , $\omega ' = \sqrt{\omega_0^2 - \beta^2}$
___________________________________
- Critically-damped, $\beta = \omega_0$:
eq 4.) $x(t) = Ae^{-\beta t} + B t e^{-\beta t}$
___________________________________
- Overdamped, $\beta > \omega_0$:
eq 5.) $x(t) = Ae^{-\left(\beta + \sqrt{\beta^2 - \omega_0^2}\right)t} + Be^{-\left(\beta - \sqrt{\beta^2 - \omega_0^2}\right)t}$
_______________________
In the cells below use SymPy to create the Position vs Time plots for these three oscillators.
Use $\omega_0=\pi/2$ as before, and then choose an appropriate value of $\beta$ for the three different damped oscillator solutions. Play around with the variables, $A$, $B$, and $\beta$, to see how different values affect the motion and if this agrees with your intuition.
```python
# Put your code for graphing Underdamped here
```
```python
# Put your code for graphing Critical here
```
```python
# Put your code for graphing Overdamped here
```
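For reference, here is one compact way to visualize all three cases with NumPy (the amplitudes $A=B=1$ and the damping values are just example choices; $\beta$ is taken below, equal to, and above $\omega_0=\pi/2$, respectively):
```python
# Sketch: position vs time for the three damping regimes
w0 = np.pi/2
t = np.linspace(0, 10, 500)
A = B = 1.0
beta_under, beta_crit, beta_over = 0.2*w0, w0, 2.0*w0
w_prime = np.sqrt(w0**2 - beta_under**2)
x_under = np.exp(-beta_under*t)*(A*np.cos(w_prime*t) + B*np.sin(w_prime*t))   # eq 3
x_crit = (A + B*t)*np.exp(-beta_crit*t)                                       # eq 4
root = np.sqrt(beta_over**2 - w0**2)
x_over = A*np.exp(-(beta_over + root)*t) + B*np.exp(-(beta_over - root)*t)    # eq 5
plt.plot(t, x_under, label='underdamped')
plt.plot(t, x_crit, label='critically damped')
plt.plot(t, x_over, label='overdamped')
plt.xlabel('t'); plt.ylabel('x(t)'); plt.legend(); plt.show()
```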
## Q2.) How would you compare the 3 different oscillators?
✅ Double click this cell, erase its content, and put your answer to the above question here.
# Here's another simple harmonic system, the pendulum.
The equation of motion for the pendulum is:
eq 6.) $ml\dfrac{d^2\theta}{dt^2} + mg \sin(\theta) = 0$, where $v=l\dfrac{d\theta}{dt}$ and $a=l\dfrac{d^2\theta}{dt^2}$
In the small angle approximation $\sin\theta\approx\theta$, so this can be written:
eq 7.) $\dfrac{d^2\theta}{dt^2} = -\dfrac{g}{l}\theta$
We then find the period of the pendulum to be $T = \dfrac{2\pi}{\sqrt{g/l}}$ and the angle at any given time
(if released from rest) is given by
$\theta = \theta_0\cos{\left(\sqrt{\dfrac{g}{l}} t\right)}$.
Let's use Euler's Forward method to solve equation (7) for the motion of the pendulum in the small angle approximation, and compare to the analytic solution.
First, let's graph the analytic solution for $\theta$. Go ahead and graph using either sympy, or the other method we have used, utilizing these variables:
- $t:(0s,50s)$
- $\theta_0 = 0.5$ radians
- $l = 40$ meters
```python
# Plot the analytic solution here
```
Now, use Euler's Forward method to obtain a plot of $\theta$ as a function of time $t$ (in the small angle approximation). Use eq (7) to calculate $\ddot{\theta}$ at each time step.
Try varying the time step size to see how it affects the Euler's method solution.
```python
# Perform Euler's Method Here
```
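One possible sketch of the iteration (the step size, array names, and value of $g$ used here are assumptions made for this sketch, not part of the exercise template) is:
```python
# Minimal sketch: forward Euler for the small-angle pendulum, eq (7)
g, l = 9.81, 40.0           # m/s^2, m (g is an assumed standard value)
theta0, omega0 = 0.5, 0.0   # initial angle (rad) and angular velocity (rad/s)
dt = 0.01                   # time step (s); try larger values to see the error grow
t = np.arange(0, 50, dt)
theta = np.zeros(len(t))
omega = np.zeros(len(t))
theta[0], omega[0] = theta0, omega0
for i in range(len(t) - 1):
    # small angle approximation: theta_ddot = -(g/l) * theta
    omega[i+1] = omega[i] - (g/l)*theta[i]*dt
    theta[i+1] = theta[i] + omega[i]*dt
plt.plot(t, theta, label="Euler (small angle)")
plt.plot(t, theta0*np.cos(np.sqrt(g/l)*t), '--', label="analytic")
plt.xlabel("t (s)"); plt.ylabel(r"$\theta$ (rad)"); plt.legend(); plt.show()
```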
You should have found that if you chose the time step size small enough, then the Euler's method solution was
indistinguishable from the analytic solution.
We can now trivially modify this, to solve for the pendulum **exactly**, without using the small angle approximation.
The exact equation for the acceleration is
eq 8.) $\dfrac{d^2\theta}{dt^2} = -\dfrac{g}{l}\sin\theta$.
Modify your Euler's Forward method calculation to use eq (8) to calculate $\ddot{\theta}$ at each time step in the cell below.
```python
```
# Q3.) What time step size did you use to find agreement between Euler's method and the analytic solution (in the small angle approximation)? How did the exact solution differ from the small angle approximation?
✅ Double click this cell, erase its content, and put your answer to the above question here.
### Now let's do something fun:
In class we found that the 2-dimensional anisotropic harmonic motion can be solved as
eq 8a.) $x(t) = A_x \cos(\omega_xt)$
eq 8b.) $y(t) = A_y \cos(\omega_yt - \delta)$
If $\dfrac{\omega_x}{\omega_y}$ is a rational number (*i.e.,* a ratio of two integers), then the trajectory repeats itself after some amount of time. The plots of $x$ vs $y$ in this case are called Lissajous figures (after the French physicist Jules Lissajous). If $\dfrac{\omega_x}{\omega_y}$ is not a rational number, then the trajectory does not repeat itself, but it still shows some very interesting behavior.
Let's make some x vs y plots below for the 2-d anisotropic oscillator.
First, recreate the plots in Figure 5.9 of Taylor. (Hint: Let $A_x=A_y$. For the left plot of Figure 5.9, let $\delta=\pi/4$ and for the right plot, let $\delta=0$.)
Next, try other rational values of $\dfrac{\omega_x}{\omega_y}$ such as 5/6, 19/15, etc, and using different phase angles $\delta$.
Finally, for non-rational $\dfrac{\omega_x}{\omega_y}$, what does the trajectory plot look like if you let the length of time to be arbitrarily long?
\[For these parametric plots, it is preferable to use our original plotting method, *i.e.* using `plt.plot()`, as introduced in ***Notebook 1***.\]
```python
# Plot the Lissajous curves here
```
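A minimal sketch (the amplitudes, frequencies, and phase below are example choices) of one such parametric plot:
```python
# Sketch: one Lissajous figure from eqs (8a) and (8b)
Ax = Ay = 1.0
wx, wy = 2.0, 1.0            # try rational and irrational ratios wx/wy
delta = np.pi/4
t = np.linspace(0, 50, 5000)
x = Ax*np.cos(wx*t)
y = Ay*np.cos(wy*t - delta)
plt.plot(x, y)
plt.xlabel("x"); plt.ylabel("y"); plt.axis("equal"); plt.show()
```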
# Q4.) What are some observations you make as you play with the variables? What happens for non-rational $\omega_x/\omega_y$ if you let the oscillator run for a long time?
✅ Double click this cell, erase its content, and put your answer to the above question here.
# Notebook Wrap-up.
Run the cell below and copy-paste your answers into their corresponding cells.
```python
from IPython.display import HTML
HTML(
"""
"""
)
```
# Well that's that, another Notebook! It's now been 10 weeks of class
You've been given lots of computational and programing tools these past few months. These past two weeks have been practicing these tools and hopefully you are understanding how some of these pieces add up. Play around with the code and see how it affects our systems of equations. Solve the Schrodinger Equation for the Helium atom. Figure out the unifying theory. The future is limitless!
# The Bayesian Bootstrap Is Not a Free Lunch
Some recent work has suggested that we can solve computationally difficult, multi-modal Bayesian posterior calculations with optimization and bootstrap sampling. There are many variations such methods; for shorthand I will simply refer to them collectively as Bayesian bootstrap (BB) methods. It is true that BB can be practically quite informative, and, in certain problems, much more useful than any other out-of-the-box Bayesian methods which fail utterly to explore multiple modes. Nevertheless, I will argue that BB methods do not, in general, produce an accurate approximation of a Bayesian posterior.
I will argue that when we are actually interested in a particular multi-modal parametric posterior but simply cannot sample from it, then BB methods will not, in general, approximate the desired posterior. Of course, the BB is the right thing to do when we actually do have a proper Dirichlet process prior on a space of data distributions, and are interested in the output of a particular optimization problem as a posterior quantity. But naively replacing a parametric Bayesian posterior quantity with a BB quantity risks (a) misinterpreting what is meant by an optimum and (b) misinterpreting what is meant by uncertainty. I stress that I do not mean to argue that the BB is not useful. A flawed answer can better than no answer at all. But a flawed answer is even more useful when its potential shortcomings are well understood, and my aim is to clearly and intuitively illustrate these potential shortcomings.
### Model 1
I will begin with a toy model which is simple and contrived. Its simplicity is such that we can clearly see what the right Bayesian answer should be, and how the BB fails to replicate it. I will argue in future, less simple examples that its contrivance is not so severe, and that the failure of the BB in this simple model illustrate its potential failures in more complex cases.
Suppose we observe a vector of scalar, real-valued data $x=(x_1,...,x_N)$, drawn from the following model:
$$
\begin{align}
p(\tau) &= \mathrm{Gamma}(\tau | 1, 1)\\
p(x | \tau) &= \prod_{n=1}^N \mathcal{N}(x_n | \tau, 1).
\end{align}
$$
The unkown mean parameter $\tau$ is positive, and all other aspects of the data generating process are known. As stated, this is a simple problem. Now, let us suppose that we are actually interested in a quantity $\theta \in \mathbb{R}$, given by
$$
\begin{align}
z &\in \{-1, 1\}\\
p(z=1) &= 0.8\\
\theta &= z \sqrt{\tau}.
\end{align}
$$
Obviously, $|\theta| = \sqrt{\tau}$, and $p(\theta > 0 | x) = 0.8$ because the data $x$ is independent of $z$, the sign of $\theta$.
Effectively, the sign of $\theta$ is identified only by the prior, and the posterior $p(\theta \vert x)$ has both a positive mode and a negative mode, which it occupies with respective probabilities $0.8$ and $0.2$. We can sample from the posterior of $p(\theta | x)$ by sampling from $p(\tau | x)$, drawing $z$ from its prior, and setting $\theta = z \sqrt{\tau}$.
Can the BB reproduce a reasonable approximation of the posterior $p(\theta | x)$ in this trivial case? Below, I will show that it cannot. The reason is twofold and general. Recall that the probability of a draw lying near a particular mode is determined by the posterior mass under the mode. For example, in the present case, $p(\theta > 0 | x) = \int_{0}^{\infty} p(\theta | x) d\theta = 0.8$. However, in general,
- The frequency with which an optimum finds a mode is not, in general, determined by the relative mass of a mode and
- The variability of an optimum induced by the bootstrap is not, in general, determined by the relative mass of a mode.
```R
library(tidyverse)
library(rstan)
library(gridExtra)
library(numDeriv)
library(repr)
# Change plot size to 4 x 3
options(repr.plot.width=7, repr.plot.height=4)
options(mc.cores=4)
rstan_options(auto_write=TRUE)
setwd("/home/rgiordan/Documents/git_repos/variational_bayes/bayesian_bootstrap")
```
── Attaching packages ─────────────────────────────────────── tidyverse 1.2.1 ──
✔ ggplot2 3.1.0 ✔ purrr 0.2.5
✔ tibble 1.4.2 ✔ dplyr 0.7.8
✔ tidyr 0.8.1 ✔ stringr 1.3.1
✔ readr 1.1.1 ✔ forcats 0.3.0
── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
✖ dplyr::filter() masks stats::filter()
✖ dplyr::lag() masks stats::lag()
Loading required package: StanHeaders
rstan (Version 2.18.2, GitRev: 2e1f913d3ca3)
For execution on a local, multicore CPU with excess RAM we recommend calling
options(mc.cores = parallel::detectCores()).
To avoid recompilation of unchanged Stan programs, we recommend calling
rstan_options(auto_write = TRUE)
Attaching package: ‘rstan’
The following object is masked from ‘package:tidyr’:
extract
Attaching package: ‘gridExtra’
The following object is masked from ‘package:dplyr’:
combine
```R
EvalLogPost <- function(theta, fit, z_prob) {
upars <- unconstrain_pars(fit, list(tau=theta ^ 2))
lp <- log_prob(fit, upars) + ifelse(theta > 0, log(z_prob), log(1 - z_prob))
}
GetThetaGridLp <- function(fit, z_prob, theta_max=NA) {
draws <- extract(fit)
if (!is.finite(theta_max)) {
theta_max <- 1.3 * max(abs(draws$theta))
}
theta_grid <- seq(-1 * theta_max, theta_max, length.out=1000)
lp <- sapply(theta_grid, EvalLogPost, fit, z_prob)
return(data.frame(theta=theta_grid, lp=lp))
}
DrawAndFit <- function(N, z_prob, tau_prior_shape=1, tau_prior_rate=1, x_sd=1) {
tau_true <- rgamma(1, shape=tau_prior_shape, rate=tau_prior_rate)
x <- rnorm(N, mean=tau_true, sd=x_sd)
# Also save tau_true even though Stan doesn't use it.
bb_data <- list(x=x, N=N, prob=z_prob, x_sd=x_sd,
tau_prior_shape=tau_prior_shape, tau_prior_rate=tau_prior_rate,
tau_true=tau_true)
# The Stan model will exhibit some divergent transitions near 0, but that's
# not a real problem here.
fit <- stan(file='multimodal_v1.stan', data=bb_data)
return(list(fit=fit, bb_data=bb_data))
}
```
Let's draw a dataset and look at the posterior of $\theta$.
```R
z_prob <- 0.8
initial_fit <- DrawAndFit(N=50, z_prob=z_prob)
fit <- initial_fit$fit
```
hash mismatch so recompiling; make sure Stan code ends with a blank line
```R
draws <- extract(fit)
cat("P(\\theta > 0) \\approx ", mean(draws$theta > 0), "\n")
theta_lp <- GetThetaGridLp(fit, z_prob)
grid.arrange(
ggplot(theta_lp) +
geom_line(aes(x=theta, y=lp)) +
ggtitle("Log posterior")
,
ggplot(theta_lp) +
geom_line(aes(x=theta, y=exp(lp))) +
ggtitle("Posterior")
,
ggplot() +
geom_histogram(aes(x=draws$theta)) +
ggtitle("Draws")
, ncol=3
)
```
Now, let's see whether the Bayesian bootstrap will reproduce $p(\theta > 0 | x) = 0.8$.
First of all, note that reasonable optimizers (e.g., well-conditioned gradient ascent) will find the positive mode if the initial value is positive, and the negative mode if the initial value is negative. So, irrespective of the noise induced by the bootstrap, the BB estimate of $P(\theta > 0)$ will be precisely the probability that the initial value for optimization is positive. With uniform random restarts, this probability will be $0.5$, not $0.8$.
A reasonable question is whether drawing bootstrap samples of $x$ can overcome this obvious shortcoming. Let's take a look.
```R
DrawBootstrapSample <- function(x) {
return(x[sample.int(length(x), replace=TRUE)])
}
GetBootstrapFit <- function(initial_fit) {
x_boot <- DrawBootstrapSample(initial_fit$bb_data$x)
bb_data_boot <- initial_fit$bb_data
bb_data_boot$x <- x_boot
# We only need the fit object, not the draws, so set samples to be very low and
# suppress output and warnings.
options(warn=-1)
invisible(capture.output(
fit_boot <- stan(file='multimodal_v1.stan',
data=bb_data_boot, iter=1, chains=1)))
options(warn=0)
return(list(x_boot=x_boot, fit_boot=fit_boot))
}
```
```R
theta_max <- 2 * max(abs(draws$theta))
mean_abs_theta <- mean(abs(draws$theta))
boot_lp_df <- data.frame()
for (b in 1:10) {
print(b)
boot_fit <- GetBootstrapFit(initial_fit)
theta_lp <- GetThetaGridLp(boot_fit$fit, z_prob, theta_max)
boot_lp_df <- bind_rows(boot_lp_df, theta_lp %>% mutate(b=as.character(b)))
}
```
[1] 1
[1] 2
[1] 3
[1] 4
[1] 5
[1] 6
[1] 7
[1] 8
[1] 9
[1] 10
As we can see, the bootstrap variability is never such that the lower mode is larger than the higher mode. Consequently, if our optimization method finds the global optimum rather than a local optimum, it will estimate $p(\theta > 0 | x) = 1$.
```R
boot_lp_df <-
boot_lp_df %>%
group_by(b) %>%
mutate(lp_norm=lp - log(sum(exp(lp)))) # Were this to be a density you would need the delta x
ggplot(boot_lp_df) +
geom_line(aes(x=theta, y=exp(lp_norm), group=b), alpha=0.2) +
xlim(-2.5, 2.5)
```
Let's check this formally by actually optimizing a number of bootstrap samples.
```R
GetMAP <- function(fit, init_val=NA, width=NA, pos=TRUE) {
# init_val and width are in the log space.
x_sign <- (sign(pos) * 2 - 1)
Objective <- function(log_x) {
x <- x_sign * exp(log_x)
# The log probability doesn't matter because we're only finding the optimum in
# one of the two modes.
return(-1 * EvalLogPost(x, fit, 0.5))
}
if (!is.finite(init_val)) {
init_val <- log(mean(sqrt(extract(fit, "tau")$tau)))
}
if (!is.finite(width)) {
width <- 4 * sd(log(sqrt(extract(fit, "tau")$tau)))
}
opt <- optim(init_val, Objective, method="Brent",
upper=init_val + width, lower=init_val - width)
opt$par_theta <- x_sign * exp(opt$par)
return(opt)
}
GetBootstrapResult <- function(fit_boot, z_prob, init_val=NA, width=NA) {
boot_opt <- GetMAP(fit_boot, pos=TRUE, init_val=init_val, width=width)
lp_neg <- EvalLogPost(-1 * boot_opt$par_theta, fit_boot, z_prob)
lp_pos <- EvalLogPost(boot_opt$par_theta, fit_boot, z_prob)
lp_hess_neg <- hessian(
function(x) EvalLogPost(x, fit_boot, z_prob),
-1 * boot_opt$par_theta)[1, 1]
lp_hess_pos <- hessian(
function(x) EvalLogPost(x, fit_boot, z_prob),
boot_opt$par_theta)[1, 1]
return(data.frame(
theta_opt=ifelse(lp_pos > lp_neg, 1, -1) * boot_opt$par_theta,
lp_pos=lp_pos,
lp_neg=lp_neg,
lp_hess_neg=lp_hess_neg,
lp_hess_pos=lp_hess_pos
))
}
```
```R
init_val <- log(mean(sqrt(draws$tau)))
width <- 4 * sd(log(sqrt(draws$tau)))
B <- 50
boot_df <- data.frame()
boot_time <- Sys.time()
for (b in 1:B) {
fit_boot <- GetBootstrapFit(initial_fit)$fit_boot
boot_df <- bind_rows(
boot_df,
GetBootstrapResult(fit_boot, z_prob, init_val=init_val, width=width))
}
boot_time <- Sys.time() - boot_time
```
```R
cat("BB estimate of P(\\thetasign(boot_df$theta_opt): ", mean(boot_df$theta_opt > 0), "\n")
```
BB estimate of P(\theta > 0 | x): 1
### Summary
In summary, there are two flavors of the BB: one which finds a local optimum, and one which finds a global optimum. The former estimates $p(\theta > 0 | x) \approx 0.5$, and the latter estimates $p(\theta > 0 | x) \approx 1.0$. Neither is a good estimate of the true $p(\theta > 0 | x) = 0.8$.
What has gone wrong? The value chosen by the BB depends on two things:
- The domain of attraction and the distribution of the starting point
- The variability of the height of the optima under data resampling.
Neither of these quantities necessarily has anything to do with the real posterior probability, which is the mass under the mode.
### Model
For reference, here is a dump of the Stan model.
```R
cat(scan("multimodal_v1.stan", what="character", sep="\n"), sep="\n")
```
data {
int<lower=0> N;
real x[N];
real<lower=0> x_sd;
real<lower=0, upper=1> prob;
real<lower=0> tau_prior_shape;
real<lower=0> tau_prior_rate;
}
parameters {
real<lower=0> tau;
}
model {
tau ~ gamma(tau_prior_shape, tau_prior_rate);
x ~ normal(tau, x_sd);
}
generated quantities {
real theta;
int z;
z = 2 * bernoulli_rng(prob) - 1;
theta = z * sqrt(tau);
}
*Source notebook: assets/post_assets/bayesian_bootstrap_v1.ipynb (rgiordan/rgiordan.github.io, Apache-2.0).*
# Variational Principle using Symbolic Mathematics in Python
## 1. Introduction
The variational principle tells us that we can use a trial wavefunction to solve the Schrodinger equation using the following theorem:
$${{\int {{\Psi ^*}\hat H{\rm{ }}\Psi } d\tau } \over {\int {{\Psi ^*}\Psi } d\tau }} \ge {E_0}$$
We will use SymPy to solve the particle-in-a-box problem by guessing a trial wavefunction and applying the variational principle.
```python
import sympy as sym
```
This exercise is a bit more self-guided than the other notebooks we have done. One of the most useful things you can do is **open last week's notebook to remember the commands in sympy**. Also, remember that google is your friend:
1. [Sympy tutorial](https://docs.sympy.org/latest/tutorial/index.html)
2. [Stack Overflow](https://stackoverflow.com/search?q=sympy+)
3. [Stack Exchange](https://stackexchange.com/)
## 2. Particle in a box
The wave function that we pick for a particle in a box needs to have the following properties
1. single valued
1. normalizable
1. function and its first derivative are continuous
1. boundary condition that the wave function goes to zero at the ends of the box
*Figure: particle in a box. (a) is a classical particle; red is the real part and blue the imaginary part of the wavefunction.*
This particle only experiences kinetic energy inside the box, so the Hamiltonian for this system is
$$\hat H = {{ - {\hbar ^2}} \over {2m}}{{{d^2}} \over {d{x^2}}} + \left\{ {\matrix{{V(x) = 0} & {0 < x < a} \cr {V(x) = \infty } & {x < 0\text{ }{\rm{ or}}\;x > a} \cr } } \right.$$
For our purposes, that means we can consider the Hamiltonian to be
$$\hat H = {{ - {\hbar ^2}} \over {2m}}{{{d^2}} \over {d{x^2}}}$$
as long as we keep the limits of integration to be $(0,a)$
### 2.1 Trial Wave function
Although the particle in box has a well known solution
[https://en.wikipedia.org/wiki/Particle_in_a_box](https://en.wikipedia.org/wiki/Particle_in_a_box)
(or check your favorite pchem book)
We are going to guess a trial wave function:
$$\Phi (x) = \left( {{x \over a} - {{{x^3}} \over {a^3}}} \right) + \alpha \left( {{{{x^5}} \over {{a^5}}} - {1 \over 2}\left( {{{{x^7}} \over {{a^7}}} + {{{x^7}} \over {{a^7}}}} \right)} \right)$$
### 2.2 Exercise: Variational Theorem
We are going to follow the following plan:
1. Solve for the energy of the trial wave function above
$${E_{trial}} = {{\int\limits_0^a {\Phi (x){{ - {\hbar ^2}} \over {2m}}{{{d^2}} \over {d{x^2}}}\Phi (x)dx} } \over {\int\limits_0^a {\Phi {{(x)}^2}dx} }}$$
Your answer will be a function of $m$, $a$, and $\alpha$. We will use $\alpha$ as the parameter we vary to minimize the energy and make a new trial wave function.
2. Minimize the trial energy
We will use a first derivative of the trial energy $${d \over {d\alpha }}{E_{trial}}(\alpha )$$ to find the value of $\alpha$ that gives you the lowest energy
3. Plot your new wavefunction compared to the ground state particle in a box: $${\psi _{true}}(x) = {\left( {{2 \over a}} \right)^{1/2}}\sin {{n\pi x} \over a}$$ Plot as a function of $x/a$ from $0$ to $1$. Assuming this has $m=m_e$, and $a=a_0$ use atomic (theorist) units to plot the function.
4. Compare your trial energy to the actual energy (using atomic units)
$${E_{true}}(n = 1) = {{{\hbar ^2}{\pi ^2}} \over {2m{a^2}}}$$
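Before working through the cells below, here is a minimal, self-contained sketch of the same variational procedure using a simpler, hypothetical trial function $\phi(x) = x(a-x) + \alpha\,x^2(a-x)^2$ (not the one assigned above), in units where $\hbar = m = 1$. It illustrates steps 1 and 2 of the plan above:

```python
import sympy as sym

x, a = sym.symbols('x a', positive=True)
alpha = sym.symbols('alpha', real=True)

# Hypothetical trial function satisfying phi(0) = phi(a) = 0
phi = x*(a - x) + alpha*x**2*(a - x)**2

# Step 1: variational energy  E(alpha) = <phi|H|phi> / <phi|phi>,  H = -(1/2) d^2/dx^2
numerator = sym.integrate(phi*(-sym.Rational(1, 2))*sym.diff(phi, x, 2), (x, 0, a))
denominator = sym.integrate(phi**2, (x, 0, a))
E_trial = sym.simplify(numerator/denominator)

# Step 2: minimize over alpha (set a = 1 for concreteness)
E1 = E_trial.subs(a, 1)
crit = sym.solve(sym.Eq(sym.diff(E1, alpha), 0), alpha)
real_crit = [sym.re(sym.N(c)) for c in crit if abs(sym.im(sym.N(c))) < 1e-9]
best = min((E1.subs(alpha, c) for c in real_crit), key=lambda e: float(sym.N(e)))
print(sym.N(best), "vs exact ground state", sym.N(sym.pi**2/2))
```

The optimized energy should land slightly above the exact ground-state value $\pi^2/2 \approx 4.93$, as the variational theorem guarantees.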
```python
import sympy as sym
a,x,m=sym.symbols('a,x,m')
sym.Rational(1/2)
from sympy.physics.units import hbar
sym.init_printing()
Phi,phi,alpha,hbar=sym.symbols("Phi,phi,alpha,hbar")
alpha,phi,Phi,hbar
```
```python
phi = ((x/a)-(x**3/a**3)+ alpha*((x**5/a**5)-1/2*(x**9/a**9+x**9/a**9)))  # trial wavefunction
num1 = (phi*sym.Rational(-1/2))          # phi times -hbar^2/(2m) with hbar = m = 1
num2 = sym.diff(sym.diff(phi,x),x)       # d^2 phi / dx^2
den1 = (phi**2)                          # integrand of the normalization integral
```
```python
Numerator = sym.integrate((num1*num2),(x,0,a))
```
```python
Denominator = sym.integrate((den1),(x,0,a))
```
```python
Numerator/Denominator
```
```python
E_trial = (Numerator/Denominator).subs(a,1)
E_trial
```
```python
sym.solveset(sym.diff(E_trial,alpha))
```
```python
E_trial.subs(alpha, -0.257747652303185)
```
```python
E_trial.subs(alpha, -4.73345207773701)
```
```python
Newphi = phi.subs(alpha, 5.02467668289965)
Newphi
```
```python
Newphi2 = Newphi.subs(a,1)
Newphi2
```
```python
sym.plot(Newphi2,(x,0,1))
```
```python
sym.plot(sym.sqrt(2)*sym.sin(sym.pi*x),(x,0,1))
```
```python
sym.plot(sym.sqrt(2)*sym.sin(sym.pi*x),Newphi2,(x,0,1))
```
```python
E_true = sym.pi**2/(2*m)
E_true
```
Your descriptions/explanations here
### 2.3 Exercise: New trial wavefunction
Determine the minimum energy of the particle in a box using a new trial wavefunction $$x^\alpha(x-a)^\alpha$$
1. Find the minimum energy, $E_{trial}$
2. Plot the new trial wavefunction and compare it to the true solution and the wavefunction you found above
3. Compare your new energy to the trial energy you found above
4. Which wavefunction is better? How do you know?
```python
#Plug in 1 for a, after you solve for values, first find the first derivative
#After you solve both plug them into E trial
```
```python
phhi = (x**alpha)*(x-a)**alpha
phhi
```
```python
num11 = (phhi*sym.Rational(1/2))
num21 = sym.diff(sym.diff(phhi,x),x)
den11 = (phhi**2)
Numerator2 = sym.integrate((num11*num21),(x,0,a))
Numerator2
```
```python
Denominator2 = sym.integrate((den11),(x,0,a))
Denominator2
```
$\displaystyle \frac{a a^{4 \alpha} e^{2 i \pi \alpha} \Gamma\left(2 \alpha + 1\right) {{}_{2}F_{1}\left(\begin{matrix} - 2 \alpha, 2 \alpha + 1 \\ 2 \alpha + 2 \end{matrix}\middle| {1} \right)}}{\Gamma\left(2 \alpha + 2\right)}$
```python
E_trial2 = (Numerator2/Denominator2).subs(a,1)
E_trial2
```
$\displaystyle \frac{\alpha e^{- 2 i \pi \alpha} \Gamma\left(2 \alpha + 2\right) \int\limits_{0}^{1} \frac{x^{2 \alpha} \left(x - 1\right)^{2 \alpha} \left(4 \alpha x^{2} - 4 \alpha x + \alpha - 2 x^{2} + 2 x - 1\right)}{x^{2} \left(x - 1\right)^{2}}\, dx}{2 \Gamma\left(2 \alpha + 1\right) {{}_{2}F_{1}\left(\begin{matrix} - 2 \alpha, 2 \alpha + 1 \\ 2 \alpha + 2 \end{matrix}\middle| {1} \right)}}$
```python
sym.solveset(sym.diff(E_trial2,alpha))
```
$\displaystyle \left\{\alpha \mid \alpha \in \mathbb{C} \wedge \left(- 2 \alpha {{}_{2}F_{1}\left(\begin{matrix} - 2 \alpha, 2 \alpha + 1 \\ 2 \alpha + 2 \end{matrix}\middle| {1} \right)} \operatorname{polygamma}{\left(0,2 \alpha + 1 \right)} \int\limits_{0}^{1} \frac{x^{2 \alpha} \left(x - 1\right)^{2 \alpha} \left(4 \alpha x^{2} - 4 \alpha x + \alpha - 2 x^{2} + 2 x - 1\right)}{x^{2} \left(x - 1\right)^{2}}\, dx + 2 \alpha {{}_{2}F_{1}\left(\begin{matrix} - 2 \alpha, 2 \alpha + 1 \\ 2 \alpha + 2 \end{matrix}\middle| {1} \right)} \operatorname{polygamma}{\left(0,2 \alpha + 2 \right)} \int\limits_{0}^{1} \frac{x^{2 \alpha} \left(x - 1\right)^{2 \alpha} \left(4 \alpha x^{2} - 4 \alpha x + \alpha - 2 x^{2} + 2 x - 1\right)}{x^{2} \left(x - 1\right)^{2}}\, dx + \alpha {{}_{2}F_{1}\left(\begin{matrix} - 2 \alpha, 2 \alpha + 1 \\ 2 \alpha + 2 \end{matrix}\middle| {1} \right)} \int\limits_{0}^{1} \frac{x^{2 \alpha} \left(x - 1\right)^{2 \alpha} \left(4 x^{2} - 4 x + 2 \left(4 \alpha x^{2} - 4 \alpha x + \alpha - 2 x^{2} + 2 x - 1\right) \log{\left(x \right)} + 2 \left(4 \alpha x^{2} - 4 \alpha x + \alpha - 2 x^{2} + 2 x - 1\right) \log{\left(x - 1 \right)} + 1\right)}{x^{2} \left(x - 1\right)^{2}}\, dx - 2 i \pi \alpha {{}_{2}F_{1}\left(\begin{matrix} - 2 \alpha, 2 \alpha + 1 \\ 2 \alpha + 2 \end{matrix}\middle| {1} \right)} \int\limits_{0}^{1} \frac{x^{2 \alpha} \left(x - 1\right)^{2 \alpha} \left(4 \alpha x^{2} - 4 \alpha x + \alpha - 2 x^{2} + 2 x - 1\right)}{x^{2} \left(x - 1\right)^{2}}\, dx - \alpha \frac{d}{d \alpha} {{}_{2}F_{1}\left(\begin{matrix} - 2 \alpha, 2 \alpha + 1 \\ 2 \alpha + 2 \end{matrix}\middle| {1} \right)} \int\limits_{0}^{1} \frac{x^{2 \alpha} \left(x - 1\right)^{2 \alpha} \left(4 \alpha x^{2} - 4 \alpha x + \alpha - 2 x^{2} + 2 x - 1\right)}{x^{2} \left(x - 1\right)^{2}}\, dx + {{}_{2}F_{1}\left(\begin{matrix} - 2 \alpha, 2 \alpha + 1 \\ 2 \alpha + 2 \end{matrix}\middle| {1} \right)} \int\limits_{0}^{1} \frac{x^{2 \alpha} \left(x - 1\right)^{2 \alpha} \left(4 \alpha x^{2} - 4 \alpha x + \alpha - 2 x^{2} + 2 x - 1\right)}{x^{2} \left(x - 1\right)^{2}}\, dx\right) \Gamma\left(2 \left(\alpha + 1\right)\right) = 0 \right\} \setminus \left\{\alpha \mid \alpha \in \mathbb{C} \wedge e^{2 i \pi \alpha} \Gamma\left(2 \alpha + 1\right) {{{}_{2}F_{1}\left(\begin{matrix} - 2 \alpha, 2 \alpha + 1 \\ 2 \alpha + 2 \end{matrix}\middle| {1} \right)}}^{2} = 0 \right\}$
Your descriptions/explanations here
### 2.4 Exercise: Design your own wavefunction!
**Now you get to make your own wavefunction!**
The only guidance I would give you is that it make sense mathematically and that it include $\alpha$ so that you can minimize the energy.
Remember that $a$ and $x$ are both length units, and that trigonometric, logarithmic, and exponential functions are all unitless
Using your new wavefunction:
1. Find the minimum energy, $E_{trial}$
2. Plot the new trial wavefunction and compare it to the true solution and the wavefunction you found above
3. Compare your new energy to the trial energy you found above
4. Which wavefunction is better? How do you know?
```python
# Your code here
```
Your descriptions/explanations here
# Reading Homework
Read the following sections in Kramer
- 4.2.3 Born-Oppenheimer approximation
- 4.3.2 Secular equation
- All of 4.5
For each subsection
- write down the subchapter name
- what was the most important idea
- draw an idea digram of the main idea
**Make sure to upload this to the assignment repository**
Example idea diagram:
```python
```
*Source notebook: variational-principle.ipynb (sju-chem264-2019/10-3-19-lecture-deannapatti, MIT).*
# Two Degree-of-Freedom four well Potential
## Introduction and Development of the Problem
In this chapter we continue the study of Collins et al. {% cite collins2011 --file SNreac %} by considering the phase space structures that govern different reaction pathways and we then consider the influence of symmetry breaking, bifurcation, and energy on these phase space reaction pathways. In order to reveal the phase space geometry we take advantage of lagrangian descriptors, which will allow us to reconstruct the phase space structures that govern the chemical reaction processes.
We focus our attention on the Hamiltonian dynamics that takes place in a PES with four wells as a basic model for isomerization dynamics. Each potential well corresponds to an equilibrium configuration of the molecule under study, and wells are separated by saddle critical points, which represent transition states. In this setup, we understand that reaction occurs when an initial condition that starts for instance in one of the wells of the PES is capable of evolving to a neighboring well by crossing the saddle critical point that connects both wells.
Consider the Hamiltonian:
\begin{equation}
H(x,y,p_x,p_y) = \frac{1}{2}\left(p_x^2 + p_y^2\right) + V(x,y)
\label{hamiltonian}
\end{equation}
where we have supposed for simplicity that the mass in each DoF is $m_x = m_y = 1$. The PES is defined as follows:
\begin{equation}\label{pes_model}
V(x,y) = x^4 - \alpha x^2 - \delta x + y^4 - y^2 + \beta x^2 y^2
\end{equation}
and $\delta$ represents the asymmetry parameter in the double well potential of the $x$ DoF. The dynamics of the Hamiltonian in Eq. \eqref{hamiltonian} is described by Hamilton's equations of motion:
\begin{equation}
\begin{cases}
\dot{x} = \dfrac{\partial H}{\partial p_x} = p_x \\[.4cm]
\dot{y} = \dfrac{\partial H}{\partial p_y} = p_y \\[.4cm]
\dot{p}_x = -\dfrac{\partial H}{\partial x} = -\dfrac{\partial V}{\partial x} = -4 x^3 + 2 \alpha x + \delta - 2 \beta x y^2 \\[.4cm]
\dot{p}_y = -\dfrac{\partial H}{\partial y} = -\dfrac{\partial V}{\partial y} = -4 y^3 + 2 y - 2 \beta x^2 y
\end{cases}
\label{ham_eqs}
\end{equation}
Now we describe the dynamics of the Hamiltonian in Eq. \eqref{hamiltonian} in the case where both DoF are uncoupled, so that the coupling strength is $\beta = 0$. We start our analysis by focusing on the symmetric Hamiltonian with $\delta = 0$, and later we will move on to discuss the asymmetric case.
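As a concrete illustration (not part of the original analysis), the following minimal sketch integrates Hamilton's equations \eqref{ham_eqs} numerically for the symmetric, uncoupled case; the parameter values and the initial condition are illustrative choices:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter choices: symmetric (delta = 0), uncoupled (beta = 0) case
alpha, beta, delta = 1.0, 0.0, 0.0

def V(x, y):
    """Four-well potential energy surface of Eq. (pes_model)."""
    return x**4 - alpha*x**2 - delta*x + y**4 - y**2 + beta*x**2*y**2

def hamilton_eqs(t, s):
    """Right-hand side of Hamilton's equations for the state s = (x, y, px, py)."""
    x, y, px, py = s
    dVdx = 4*x**3 - 2*alpha*x - delta + 2*beta*x*y**2
    dVdy = 4*y**3 - 2*y + 2*beta*x**2*y
    return [px, py, -dVdx, -dVdy]

# A trajectory started in the lower-left well with a small momentum kick
s0 = [-np.sqrt(2)/2, -np.sqrt(2)/2, 0.3, 0.0]
sol = solve_ivp(hamilton_eqs, (0, 50), s0, rtol=1e-10, atol=1e-10)

# Sanity check: the total energy should be conserved along the trajectory
E = 0.5*(sol.y[2]**2 + sol.y[3]**2) + V(sol.y[0], sol.y[1])
print("max energy drift:", np.max(np.abs(E - E[0])))
```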
## Revealing Phase Space Structure: Symmetric and Uncoupled System
<a id="fig:PES_isom_rout"></a>
<figcaption style="text-align:center;font-size:14px"><b>fig:1 </b><em> Potential energy surface in Eq. \eqref{pes_model} for the symmetric uncoupled case.</em></figcaption><hr>
Consider the symmetric and uncoupled system with energy $H_0$. Since the system is conservative, dynamics is constrained to the three-dimensional energy hypersurface:
\begin{equation}
\mathcal{S}(H_0) = \left\{ (x,y,p_x,p_y) \in \mathbb{R}^4 \; | \; H_0 = \frac{1}{2}\left(p_x^2+p_y^2\right) + x^4 - x^2 + y^4 - y^2 \right\}
\end{equation}
where the total energy can be split between both DoF to yield:
\begin{equation}\label{eqsymun}
H(x,y,p_x,p_y) = H_{x}(x,p_x) + H_{y}(y,p_y)
\end{equation}
so that the partitioned energy is:
\begin{equation}
H_{x}(x,p_x) = \frac{1}{2} \, p_x^2 + W(x) \quad,\quad H_{y}(y,p_y) = \frac{1}{2} \, p_y^2 + W(y)
\end{equation}
and the potential in each DoF is a double-well in the form:
\begin{equation}
W(z) = z^4 - z^2
\label{1D_potSymm}
\end{equation}
Therefore, the symmetric and uncoupled system is integrable, as the energy in each DOF is conserved, and thus we have two independent constants of the motion, implying that all motion is regular and no chaotic dynamics is allowed. The Hill's region, which has its origins in Celestial Mechanics, is defined as the intersection of the energy hypersurface with configuration space, that is:
\begin{equation}
\mathcal{C}(H_0) = \left\{ (x,y,p_x,p_y) \in \mathbb{R}^4 \; | \; x^4 - x^2 + y^4 - y^2 \leq H_0 \right\}
\end{equation}
This region determines the energetically allowed configurations for the DoFs of the system, and all the points outside the Hill's region are energetically forbidden. The boundary of the Hill's region, $\partial \mathcal{C}(H_0)$, is known in the literature as the zero velocity curve and corresponds to phase space points for which kinetic energy is zero.
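As an aside, the Hill's region and the zero velocity curve are easy to visualize numerically; this is a small sketch for the symmetric, uncoupled PES at an illustrative energy:

```python
import numpy as np
import matplotlib.pyplot as plt

H0 = -0.2                                   # illustrative energy level
x = np.linspace(-1.3, 1.3, 400)
y = np.linspace(-1.3, 1.3, 400)
X, Y = np.meshgrid(x, y)
V = X**4 - X**2 + Y**4 - Y**2               # symmetric, uncoupled PES

plt.contourf(X, Y, (V <= H0).astype(float), levels=[0.5, 1.5], colors=["lightgray"])  # Hill's region
plt.contour(X, Y, V, levels=[H0], colors="k")                                         # zero velocity curve
plt.gca().set_aspect("equal")
plt.xlabel("$x$"); plt.ylabel("$y$")
plt.title("Hill's region and zero velocity curve, $H_0 = -0.2$")
plt.show()
```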
In order to determine the phase space structures that characterize isomerization dynamics, we need to look first at the equilibrium points of Hamilton's equations. A phase space point $\mathbf{x}_e = (x_e,y_e,p_{x,e},p_{y,e})$ is an equilibrium of Eq. \eqref{ham_eqs} when it satisfies $p_{x,e} = p_{y,e} = 0$ and $\nabla V(x_e,y_e) = \mathbf{0}$. The local stability of this stationary point is determined by the eigenvalues of the Jacobian matrix obtained by linearizing Hamilton's equations in its neighborhood. In the symmetric and uncoupled system, we have 9 equilibrium points with configuration coordinates and energies given below:
- Four potential wells at the points $ \left(\pm\sqrt{2}/2,\pm\sqrt{2}/2\right) $ with energies $ V(\pm\sqrt{2}/2,\pm\sqrt{2}/2) = -1/2 $.
- Four index-1 saddles located at $ (\pm\sqrt{2}/2,0)$, $(0,\pm\sqrt{2}/2) $ with energies $ V(\pm\sqrt{2}/2,0) = V(0,\pm\sqrt{2}/2) = -1/4 $.
- One index-2 saddle at the origin with energy $ V(0,0) = 0 $.
Potential wells of a PES correspond to center-stability equilibrium points of the Hamiltonian, and are characterized by the fact that the Hessian matrix of the PES evaluated at these critical points, denoted by $\text{Hess}_V$, has a pair of real and positive eigenvalues. This results in the Jacobian of the linearization having two pairs of complex (and purely imaginary) eigenvalues, which yields quasiperiodic motion in their neighborhood. In the context of chemical reaction dynamics, potential wells correspond to stable isomer configurations of the molecule under study. On the other hand, index-1 saddles of the PES are identified with saddle points of the potential energy landscape, and the associated phase space equilibrium point is of saddle-center stability type. This means that $\text{Hess}_V$ has two real eigenvalues of different sign, which is equivalent to the Jacobian having a pair of real eigenvalues of opposite sign and a pair of purely imaginary eigenvalues. Geometrically, an index-1 saddle is a local maximum in one direction and a local minimum in a perpendicular direction, as seen in the normal coordinates associated to the eigenvectors. Moreover, the eigenvector pointing in the maximum direction can be taken as a local approximation to define the reaction coordinate. The role that the index-1 saddles have in the model PES for isomerization that we consider is to connect pairs of wells, and therefore, they control access of phase space trajectories from well to well. From the perspective of chemical reactions, index-1 saddles are identified with transition state configurations of the given molecule, and, as we will see shortly, the phase space structures they give rise to are essential for the accurate computation of chemical reaction rates. Finally, index-2 saddles of a two-dimensional PES have the geometrical shape of a local maximum, i.e. a 'hilltop', which is characterized by the fact that the Hessian of the PES has two real and negative eigenvalues. This implies that when we linearize Hamilton's equations about this type of equilibrium point, the Jacobian yields two pairs of real eigenvalues, each pair having opposite signs. Thus, their stability is of saddle-saddle type, which has the dynamical effect of deflecting incoming trajectories in the neighborhood of an index-2 saddle at an exponential rate.
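These stability statements are straightforward to verify numerically; the following sketch evaluates the Hessian of the symmetric, uncoupled PES at one representative critical point of each type:

```python
import numpy as np

def hessian_V(xc, yc, alpha=1.0, beta=0.0):
    """Hessian of V(x,y) = x^4 - alpha x^2 + y^4 - y^2 + beta x^2 y^2 at (xc, yc)."""
    return np.array([[12*xc**2 - 2*alpha + 2*beta*yc**2, 4*beta*xc*yc],
                     [4*beta*xc*yc, 12*yc**2 - 2 + 2*beta*xc**2]])

c = np.sqrt(2)/2
points = {"well": (c, c), "index-1 saddle": (c, 0.0), "index-2 saddle": (0.0, 0.0)}
for name, (xc, yc) in points.items():
    evals = np.linalg.eigvalsh(hessian_V(xc, yc))
    print(f"{name:15s} Hess_V eigenvalues: {evals}")
```

The well gives two positive eigenvalues, the index-1 saddle one positive and one negative, and the index-2 saddle two negative eigenvalues, as described above.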
At this point, it is important to define what we mean for a trajectory to be trapped in one of the wells of the PES. From the energy landscape displayed in Fig. [fig:1](#fig:PES_isom_rout), this concept can be determined by studying the change in the sign of the configuration coordinates $x$ or $y$. For example, if we focus on the lower-left well, we can say that a trajectory is trapped in that well when the configuration coordinates of both $x$ and $y$ DoF remain negative during its evolution. Moreover, in this setup we can identify *reactive* trajectories as those that move from one well to another along their evolution. Since we are dealing in this case with a symmetric PES with respect to the origin, and there is one well in each quadrant, reaction would imply a change in the sign of one of the configuration coordinates of the trajectory. Notice that two types of reactive trajectories are possible when considering an initial condition starting on the lower-left well whose potential destination is the upper-right well. First, we could have sequential isomerization which implies transition from the lower-left well to an adjacent well through the phase space bottleneck region in the neighborhood of the index-1 saddle point that sits between both wells. This situation is only possible given that the system has enough energy to surmount the energy barrier (the energy of the index-1 saddle). A second alternative for reaching the upper-right well from the lower-left well is that trajectories cross directly through the index-2 saddle located at the origin (the hilltop of the PES), which is known as concerted isomerization. For an illustration of these two isomerization processes refer to Fig. [fig:1](#fig:PES_isom_rout).
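The sign-change criterion for trapping described above can be applied directly to a numerically integrated trajectory; here is a small sketch (the helper name is ours):

```python
import numpy as np

def is_trapped(xs, ys):
    """True if the trajectory never changes the sign of x or y, i.e. it stays
    in the quadrant (well) of its initial condition."""
    xs, ys = np.asarray(xs), np.asarray(ys)
    return bool(np.all(np.sign(xs) == np.sign(xs[0])) and
                np.all(np.sign(ys) == np.sign(ys[0])))

# e.g. with the solve_ivp solution from the earlier sketch:
# print("trapped:", is_trapped(sol.y[0], sol.y[1]))
```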
We describe next the isomerization dynamics for the symmetric and uncoupled Hamiltonian system in terms of the phase space geometrical structures associated to the index-1 and index-2 saddles present in the model PES. We begin our discussion by fixing a value for the total energy of the system $H_0$. As we have mentioned earlier, this energy is distributed among both DoF, so that we can write $H_0 = H_{x,0} + H_{y,0}$ where $H_{x,0} $ and $H_{y,0}$ are the energies in the $x$ and $y$ DoFs respectively. We discuss some of the different cases in terms of the energy below:
- **Energy Level** $(-1/2 \leq H_0 \leq -1/4)$: For this case the wells of the PES are isolated from each other since the energy of the system is below that of the index-1 saddles that interconnect the wells. Therefore, motion of an initial condition that starts in one of the wells will remain trapped in that well forever, displaying librational quasiperiodic motion.
- **Energy Level** $(-1/4 < H_0 \leq 0)$: In this situation the energy is above that of any of the index-1 saddles in the PES, but below the energy of the index-2 saddle at the origin. Therefore, all potential wells are connected and isomerization can take place. This allows the transit of trajectories from well to well through the phase space bottlenecks that open in the neighborhood of the equilibrium points associated to index-1 saddles. However, the index-2 saddle region of the PES is energetically forbidden, and hence, only sequential isomerization is allowed. Given that the system is completely symmetric in this case, in order to describe the phase space structures that govern isomerization dynamics and characterize the bottleneck regions in the vicinity of the index-1 saddles of the PES, we focus our analysis on the equilibrium point $(0,\sqrt{2}/2,0,0)$ associated to the upper index-1 saddle, separating the upper-left and upper-right wells. This stationary point has saddle-center stability.
## Implications for Reaction Dynamics: Symmetric and Uncoupled System
We will introduce now the dynamical concepts. The phase space dividing surface separating the upper-left from the upper-right well regions of the PES is also defined in this situation by intersecting the energy surface with the slice $x = 0$, that is:
\begin{equation}\label{ds_sym_unc}
\mathcal{D}\left(H_0\right) = \mathcal{S}\left(H_0\right) \cap \lbrace x = 0 \rbrace = \left\{\left(x,y,p_x,p_y\right) \in \mathbb{R}^4 \; \bigg| \; H_0 = \frac{1}{2}\left(p_x^2+p_y^2\right) + y^4 - y^2 \;,\; x = 0 \right\}
\end{equation}
which is a non-invariant surface with the local non-recrossing property. Moreover, it has the topology of a sphere $S^2$ with two hemispheres, known as the forward and backward dividing surface. They are given by:
\begin{equation}
\begin{split}
\mathcal{D}_{f}(H_0) &= \left\{\left(x,y,p_x,p_y\right) \in \mathbb{R}^4 \; \bigg| \; H_0 = \frac{1}{2}\left(p_x^2+p_y^2\right) + y^4 - y^2 \;,\; x = 0 \;,\; p_x > 0 \right\} \\
\mathcal{D}_{b}(H_0) &= \left\{\left(x,y,p_x,p_y\right) \in \mathbb{R}^4 \; \bigg| \; H_0 = \frac{1}{2}\left(p_x^2+p_y^2\right) + y^4 - y^2 \;,\; x = 0 \;,\; p_x < 0 \right\}
\end{split}
\end{equation}
The two hemispheres meet at the equator, which is a NHIM (or an UPO), with the form:
\begin{equation}
\mathcal{N}(H_0) = \left\{\left(x,y,p_y,p_y\right) \in \mathbb{R}^4 \; \bigg| \; H_0 = \frac{p_y^2}{2} + y^4 - y^2 \;,\; x = p_x = 0 \right\}
\end{equation}
and has the topology of a circle $S^1$. The NHIM has stable and unstable manifolds:
\begin{equation}
\mathcal{W}^{u}(H_0) = \mathcal{W}^{s}(H_0) = \left\{\left(x,y,p_y,p_y\right) \in \mathbb{R}^4 \; \bigg| \; H_y = \frac{p_y^2}{2} + y^4 - y^2 = H_0 \;,\; H_x = \frac{p_x^2}{2} + x^4 - x^2 = 0 \right\}
\end{equation}
and topologically they have the structure of $S^1 \times \mathbb{R}$, representing *tube* or *cylindrical manifolds*. Observe that for the symmetric and uncoupled case that we are analyzing they have a homoclinic structure in phase space. All these relevant phase space structures that are responsible for the reaction mechanisms in phase space through the bottleneck that connects the upper-left and upper-right wells of the PES are depicted in Fig. [fig:2](#fig:phasePort_1DoF_symm).
<a id="fig:phasePort_1DoF_symm"></a>
<figcaption style="text-align:center;font-size:14px"><b>fig:2 </b><em> A) Dividing surface described in Eq. \eqref{ds_sym_unc} for the symmetric uncoupled Hamiltonian system with energy $H_0 = -0.15$ in the neighborhood of the equilibrium point $\mathbf{x}_e = (0,\sqrt{2}/2,0,0)$. B) Symmetric double well potential given in Eq. \eqref{1D_potSymm}. C) and D) depict the phase portraits for the $x-p_x$ and $y-p_y$ planes respectively. We have marked with a magenta line the dividing surface $x = 0$ that separates the upper-left and upper-right wells of the PES.</em></figcaption><hr>
<a id="fig:LD_sym_h_neg"></a>
<figcaption style="text-align:center;font-size:14px"><b>fig:3 </b><em> Phase space structures at energy $H = -0.2$. A) LDs calculated using the p-norm definition, with $p = 1/2$, integration time $\tau = 5$ on the phase space slice $y = -1/\sqrt{2}$ for the symmetric uncoupled Hamiltonian in Eq. (\ref{eqsymun}). B) Dynamical evolution of the initial conditions selected in panel A.</em></figcaption><hr>
<a id="fig:LD_pot_neg"></a>
<figcaption style="text-align:center;font-size:14px"><b>fig:4 </b><em> Potential energy surface for the symmetric uncoupled Hamiltonian in Eq. (\ref{eqsymun}) at energy $H = -0.2$.</em></figcaption><hr>
We will describe now the phase space dynamics. Since the Hamiltonian is separable, the energy of the system can be naturally partitioned between the DoFs, that is, $H_{0}=H_{x,0}+H_{y,0}$. By looking at the phase portraits in Fig. [fig:2](#fig:phasePort_1DoF_symm) we will discuss the different cases:
1. $h_x \, , \, h_y < 0$: Since these energy levels correspond to trajectories inside the homoclinic curves in the phase portraits (C, D) depicted in Fig. [fig:2](#fig:phasePort_1DoF_symm), neither $x$ nor $y$ will change sign, and this means that trajectories will be trapped in the lower-right well.
2. $h_x < 0 \, , \, h_y > 0$: Since the energy in the $y$ DoF is positive, this means we are outside the separatrix in the $y-p_y$ plane, and thus the trajectory will be periodic in $y$ and the sign of $y$ changes along the trajectory. Moreover, since the energy in $x$ is negative, this gives that we are inside the separatrix in the $x-p_x$ plane, corresponding to $x$ not changing sign along the trajectory evolution. Consequently, we will have trajectories that move through the bottleneck corresponding to the right index-1 back and forth between the lower-right and upper-right wells.
3. $h_x > 0 \, , \, h_y < 0$: Since the energy in the $x$ DoF is positive, this means we are outside the separatrix in the $x-p_x$ plane, and thus the trajectory will be periodic in $x$ and the sign of $x$ changes along the trajectory. Moreover, since the energy in $y$ is negative, this gives that we are inside the separatrix in the $y-p_y$ plane, corresponding to $y$ not changing sign along the trajectory evolution. Consequently, we will have trajectories that move through the bottleneck corresponding to the bottom index-1 back and forth between the lower-right and lower-left wells.
4. $h_x = 0 \, , \, h_y < 0$: Since the energy in the $x$ DoF is zero, this means we are on the separatrix in the $x-p_x$ plane, and thus the trajectory will not change sign in $x$ and it will asymptotically approach $x = 0$. Moreover, since the energy in $y$ is negative, this gives that we are inside the separatrix in the $y-p_y$ plane, corresponding to $y$ not changing sign along the trajectory evolution. Consequently, we will have trajectories that evolve on the spherical cylinder of the bottom index-1 that will asymptotically approach the UPO.
5. $h_x < 0 \, , \, h_y = 0$: Since the energy in the $y$ DoF is zero, this means we are on the separatrix in the $y-p_y$ plane, and thus the trajectory will not change sign in $y$ and it will asymptotically approach $y = 0$. Moreover, since the energy in $x$ is negative, this gives that we are inside the separatrix in the $x-p_x$ plane, corresponding to $x$ not changing sign along the trajectory evolution. Consequently, we will have trajectories that evolve on the spherical cylinder of the right index-1 that will asymptotically approach the UPO.
So far we have discussed the nature of the trajectories for all the different cases of the energy partition between the two DoFs. At this point we explore the dynamics revealed by the Lagrangian descriptors (LDs) for the same cases and conclude that LDs recover the same phase space structure as in Fig. [fig:2](#fig:phasePort_1DoF_symm). In Fig. [fig:3](#fig:LD_sym_h_neg) we have calculated LDs using a small integration time $\tau = 5$, on the phase space slice $y = -1/\sqrt{2}$, for total energy $H = -0.2$. This energy is above the energy of all four index-1 saddles and below the energy of the index-2 saddle. Therefore all the wells of the potential are connected and reaction can take place by crossing the open bottleneck, which is the NHIM (UPO) together with its stable and unstable manifolds (spherical cylinders), which coincide in this case, passing from $x>0$ to $x<0$ (or from $y>0$ to $y<0$) or vice versa. These geometrical structures act as a 'reactive highway' allowing the system to transit from well to well, which would correspond to a given molecule undergoing an isomerization reaction. Remember that, since the chosen energy is below the energy of the index-2 saddle but above the energy of all four index-1 saddles, the system can exhibit sequential isomerization, but the region in the neighborhood of the index-2 saddle is forbidden and hence concerted isomerization cannot occur. In particular, in Fig. [fig:3](#fig:LD_sym_h_neg) A) we notice that the spherical cylinders of the left and right index-1 saddles do not intersect the cylinders of the bottom index-1 saddle, and that the phase space consists of three regions:
1. The spherical cylinder of the index-1 saddle on the right. If we pick an initial condition inside this region, the trajectory in configuration space will be reactive and it will move through the bottleneck (UPO) corresponding to the right index-1 saddle, back and forth between the lower-right and upper-right wells (magenta). This is because the $y$-coordinate changes sign while the $x$-coordinate does not: we are outside the separatrix in the $y-p_{y}$ plane, so the trajectory is periodic there, and inside the separatrix in the $x-p_{x}$ plane, so the trajectory does not change sign in this direction. We should mention here that the boundary of the cylinders is the energy boundary.
2. The spherical cylinder of the index-1 saddle at the bottom. If we pick any initial condition inside this cylinder, the trajectory will be reactive and it will move through the bottleneck (UPO) corresponding to the bottom index-1 saddle, back and forth between the lower-right and lower-left wells (red). This is because the $x$-coordinate changes sign while the $y$-coordinate does not: we are outside the separatrix in the $x-p_{x}$ plane, so the trajectory is periodic there, and inside the separatrix in the $y-p_{y}$ plane, so the trajectory does not change sign in this direction.
3. The region inside the homoclinic curve. If we pick an initial condition inside this region, the trajectory will be trapped in the well (cyan) and will never escape. In this case both DoFs are inside their separatrices and hence neither coordinate changes sign.
To summarize, in this uncoupled case the stable and unstable manifolds of the UPOs of the index-1 saddles of the PES do not give rise to heteroclinic intersections, so that the dynamics of the system either remains trapped in the lower-left well or exhibits sequential isomerization to the lower-right or upper-right well.
The potential energy surface for the symmetric case and for total energy $H = -0.2$, which is above the energy of all the index-1 saddles and below the energy of the index-2 saddle, is depicted in Fig. [fig:4](#fig:LD_pot_neg). There we can see the reactive trajectories (red and magenta) inside the spherical cylinders, which are impenetrable barriers on the constant energy surface that separate reactive from non-reactive trajectories in phase space, and the trapped trajectories (cyan) in one of the wells. Hence the cylinders determine the initial conditions that can pass through the bottleneck as they evolve in time. The method of LDs also detects the unstable periodic orbits (UPOs) of all the index-1 saddles, which are normally hyperbolic invariant manifolds (NHIMs). In short, the method of LDs recovers all the phase space structures described above.
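For readers who wish to reproduce plots like those in Fig. [fig:3](#fig:LD_sym_h_neg), here is a minimal sketch of the p-norm Lagrangian descriptor, $M_p(\mathbf{x}_0;\tau)=\int_{-\tau}^{\tau}\sum_i \vert\dot{x}_i(t)\vert^p\,dt$, evaluated on the slice $y = -1/\sqrt{2}$ with $p_y \geq 0$ fixed by energy conservation. The grid resolution and tolerances are illustrative choices and the computation is deliberately unoptimized:

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, delta = 1.0, 0.0, 0.0          # symmetric, uncoupled case

def V(x, y):
    return x**4 - alpha*x**2 - delta*x + y**4 - y**2 + beta*x**2*y**2

def field(s):
    x, y, px, py = s
    return np.array([px, py,
                     -(4*x**3 - 2*alpha*x - delta + 2*beta*x*y**2),
                     -(4*y**3 - 2*y + 2*beta*x**2*y)])

def lagrangian_descriptor(s0, tau=5.0, p=0.5):
    """Forward plus backward p-norm LD of the initial condition s0."""
    def rhs(t, se, direction):
        ds = field(se[:4])
        return np.append(direction*ds, np.sum(np.abs(ds)**p))
    M = 0.0
    for direction in (+1.0, -1.0):           # forward and backward pieces
        sol = solve_ivp(rhs, (0.0, tau), np.append(s0, 0.0),
                        args=(direction,), rtol=1e-8, atol=1e-8)
        M += sol.y[4, -1]
    return M

# LD values on the slice y = -1/sqrt(2) at total energy H0 = -0.2 (py >= 0 branch)
H0, y0 = -0.2, -np.sqrt(2)/2
xs = np.linspace(-1.2, 1.2, 40)
pxs = np.linspace(-1.0, 1.0, 40)
LD = np.full((len(pxs), len(xs)), np.nan)
for i, px in enumerate(pxs):
    for j, x in enumerate(xs):
        py2 = 2.0*(H0 - V(x, y0)) - px**2
        if py2 >= 0.0:                        # point lies on the energy surface
            LD[i, j] = lagrangian_descriptor([x, y0, px, np.sqrt(py2)])
```

Plotting `LD`, e.g. with `plt.pcolormesh(xs, pxs, LD)`, should reveal the singular features along the stable and unstable manifolds described above.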
## Revealing Phase Space Structures: Asymmetric and Uncoupled Dynamics
We explore now the dynamics of the asymmetric and uncoupled system. Consider that the system has energy $H_0$, and since the DoFs are separable, we can decompose the total energy as $H_0 = H_{x,0} + H_{y,0}$. Moreover, the energy in each DoF can be written as:
\begin{equation}\label{eqasymun}
H_{x}(x,p_x) = \frac{1}{2} \, p_x^2 + U(x) \quad,\quad H_{y}(y,p_y) = \frac{1}{2} \, p_y^2 + W(y)
\end{equation}
where the potential in the $x$ DoF is an asymmetric double-well in the form:
\begin{equation}\label{potasymun}
U(x) = x^4 - x^2 - \delta x
\end{equation}
and $\delta$ represents the asymmetry parameter. The potential energy $W(y)$ of the $y$ DoF is a symmetric double well given by Eq. \eqref{1D_potSymm}. Energy conservation implies that phase space dynamics occurs in the three-dimensional energy hypersurface:
\begin{equation}
\mathcal{S}(H_0) = \left\{ (x,y,p_x,p_y) \in \mathbb{R}^4 \; | \; H_0 = \frac{1}{2}\left(p_x^2+p_y^2\right) + x^4 - x^2 - \delta x + y^4 - y^2 \right\}
\end{equation}
We concentrate on describing the phase space structures that determine transport between the upper-left and upper-right well regions of the PES for values of the asymmetry parameter below that corresponding to the saddle-node bifurcation. The dividing surface separating both wells is defined in this situation by intersecting the energy surface with the slice $x = x_s$, and the value of $x_s$ is that of $x_2$ in Eq. \eqref{roots_cubic_asymm}, that is:
\begin{equation}
\mathcal{D}\left(H_0\right) = \mathcal{S}\left(H_0\right) \cap \lbrace x = x_{s} \rbrace = \left\{(x,y,p_x,p_y) \in \mathbb{R}^4 \; \bigg| \; H_0 = \frac{1}{2}\left(p_x^2+p_y^2\right) + y^4 - y^2 - \frac{x_s}{2} \left(x_s + \frac{3\delta}{2}\right) \right\}
\label{ds_asym_unc}
\end{equation}
where we have used that the potential energy $H_{x,s} = U(x_s) = x_s^4 - x_s^2 - \delta x_s = -x_s^2/2 - (3\delta/4) \, x_s$, a property that follows from the fact that $x_s$ is a critical point of $U(x)$. The dividing surface is non-invariant and has the local non-recrossing property. Moreover, it has the topology of a sphere $S^2$ with two hemispheres, the forward and backward dividing surfaces, given by:
\begin{equation}
\begin{split}
\mathcal{D}_{f}(H_0) &= \left\{(x,y,p_x,p_y) \in \mathbb{R}^4 \; \bigg| \; H_0 = \frac{1}{2}\left(p_x^2+p_y^2\right) + y^4 - y^2 - \frac{x_s}{2} \left(x_s + \frac{3\delta}{2}\right) \;,\; p_x > 0 \right\} \\
\mathcal{D}_{b}(H_0) &= \left\{(x,y,p_x,p_y) \in \mathbb{R}^4 \; \bigg| \; H_0 = \frac{1}{2}\left(p_x^2+p_y^2\right) + y^4 - y^2 - \frac{x_s}{2} \left(x_s + \frac{3\delta}{2}\right) \;,\; p_x < 0 \right\}
\end{split}
\end{equation}
The two hemispheres meet at the equator, which is a NHIM (or an UPO) described by:
\begin{equation}
\mathcal{N}(H_0) = \left\{(x,y,p_x,p_y) \in \mathbb{R}^4 \; \bigg| \; H_0 = \frac{1}{2} \, p_y^2 + y^4 - y^2 - \frac{x_s}{2} \left(x_s + \frac{3\delta}{2}\right) \;,\; p_x = 0 \right\}
\end{equation}
with the topology of a circle $S^1$. The NHIM has stable and unstable manifolds:
\begin{equation}
\mathcal{W}^{u}(H_0) = \mathcal{W}^{s}(H_0) = \left\{\left(x,y,p_y,p_y\right) \in \mathbb{R}^4 \; \bigg| \; H_{y,0} = \frac{p_y^2}{2} + y^4 - y^2 \;,\; H_{x,s} = \frac{p_x^2}{2} + x^4 - x^2 - \delta x \right\}
\end{equation}
and topologically they have the structure of $S^1 \times \mathbb{R}$, representing *tube* or *cylindrical manifolds*. Observe that for the asymmetric and uncoupled case that we are analyzing they have a homoclinic structure in phase space. All these relevant phase space structures are responsible for the reaction mechanisms in phase space through the bottleneck that connects the upper-left and upper-right wells of the PES.
The equilibrium points $\mathbf{x}_e = (x_e,y_e,p_x^e,p_y^e)$ satisfy the equations:
\begin{equation}
p_x^e = p_y^e = 0 \quad , \quad -4 y_e^3 + 2 y_e = 0 \quad , \quad 4 x_e^3 - 2 \alpha x_e - \delta = 0 \;.
\end{equation}
Solving for the $y$ DoF we get $y_e \in \lbrace -\sqrt{2}/2,0,\sqrt{2}/2 \rbrace$, and the $x$ DoF gives a cubic polynomial $f(x) = 4x^3 - 2 \alpha x - \delta$ whose roots can be obtained analytically using the formulas in {% cite brizzard2015 --file SNreac %}. If we take $\gamma$ such that $f^{\prime\prime}(\gamma) = 0$ and define:
\begin{equation}
\mu = \sqrt{-\frac{1}{3}f^{\prime}(\gamma)} \quad,\quad \phi = \arccos\left(-\dfrac{f(\gamma)}{\mu^3}\right)
\label{cubic_rules}
\end{equation}
then for this cubic polynomial we have that $\gamma = 0$ and therefore:
\begin{equation}
\mu = \sqrt{\dfrac{2\alpha}{3}} \quad,\quad \phi = \arccos \left( \left(\dfrac{3}{2\alpha} \right)^{3/2} \delta \right)
\end{equation}
The roots of the cubic polynomial have the form:
\begin{equation}
\begin{cases}
x_1 = \gamma + \mu \cos\left(\dfrac{\phi}{3}\right) = \mu \cos\left(\dfrac{\phi}{3}\right) \\[.3cm]
x_{2,3} = \gamma - \mu \cos\left(\dfrac{\pi \pm \phi}{3}\right) = - \mu \cos\left(\dfrac{\pi \pm \phi}{3}\right) = -\dfrac{x_1}{2} \pm \dfrac{\mu\sqrt{3}}{2} \sin \left(\dfrac{\phi}{3}\right)
\end{cases}
\label{roots_cubic_asymm}
\end{equation}
Observe that the two roots $x_{2,3}$ merge into one, i.e. $x_{2,3} = -x_1/2$, when $\phi = 0$ and this occurs for:
\begin{equation}
\left(\dfrac{3}{2\alpha} \right)^{3/2} \delta = 1 \quad \Leftrightarrow \quad \delta_c(\alpha) = \left(\dfrac{2\alpha}{3} \right)^{3/2}
\label{crit_asym_uncoup}
\end{equation}
Consequently, there is a bifurcation (saddle-node) in the geometry of the potential energy corresponding to the $x$ DoF as the value of the asymmetry parameter $\delta$ is varied. If $\delta \in [0,\delta_c)$, then we have three real roots. At the critical values $\delta = \delta_c$, $x_2$ and $x_3$ merge and thus the cubic polynomial has two real roots $x_1 = \mu$ and $x_2 = x_3 = -\mu / 2 = -\sqrt{\alpha/6}$. Moreover, for values of the asymmetry parameter $\delta > \delta_c$ only the root $x_1$ remains. In summary, when $0 \leq \delta < \delta_c$, the PES has 9 critical points, for $\delta = \delta_c$ only 6 critical points remain, since in each line $y = -\sqrt{2}/2, \, 0, \, -\sqrt{2}/2$ two roots merge (we have three simultaneous saddle-node bifurcations). Finally, for $\delta > \delta_c$ we are left with 3 critical points of the PES. We give below the energies of the critical points of the PES:
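A quick numerical check of this root structure (here with $\alpha = 1$, so $\delta_c = (2/3)^{3/2} \approx 0.544$):

```python
import numpy as np

alpha = 1.0
delta_c = (2.0*alpha/3.0)**1.5               # critical asymmetry, Eq. (crit_asym_uncoup)
for delta in (0.2, delta_c, 0.8):
    roots = np.roots([4.0, 0.0, -2.0*alpha, -delta])   # 4x^3 - 2 alpha x - delta = 0
    real = np.sort(roots[np.abs(roots.imag) < 1e-6].real)
    print(f"delta = {delta:.4f}  ->  real roots: {real}")
```

Below the critical value there are three real roots, at the critical value two of them merge, and above it only one real root survives.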
\begin{eqnarray}
(x_k,0) \quad ,& \quad &V(x_k,0) = -\dfrac{x_k}{2} \left(\alpha x_k + \dfrac{3\delta}{2}\right) \;,\quad k \in \lbrace 1,2,3 \rbrace \\[.2cm]
\left(x_k,\pm \sqrt{2}/2\right) \quad ,& \quad &V(x_k,\pm \sqrt{2}/2) = -\dfrac{x_k}{2} \left(\alpha x_k + \dfrac{3}{2}\delta \right) - \frac{1}{4} \;,\quad k \in \lbrace 1,2,3 \rbrace
\end{eqnarray}
In particular, the energies of the critical points where the bifurcations take place is:
\begin{equation}\label{energyeq}
V(-\sqrt{\alpha/6},0) = \frac{\alpha^2}{12} \quad.\quad V(-\sqrt{\alpha/6},\pm \sqrt{2}/2) = \frac{\alpha^2-3}{12}
\end{equation}
In Fig. [fig:5](#fig:asym_pot_dif_delta) we show the potential energy surface for different values of $\delta$, and we can see that the critical points change with $\delta$. The first row depicts the surface for $\delta = 0.2 < \delta_{c}$. In this case the PES has four minima (wells), four index-1 saddles and one index-2 saddle point ('hilltop'); notice that the locations of the critical points differ from the symmetric case and the index-2 saddle is no longer at the origin $(0,0)$. Another difference with respect to the symmetric case is that the energies have changed, so the index-1 saddles no longer have equal energies. When $\delta = \delta_{c}$ (second row) the PES has 6 critical points; we can see the creation of a cusp, and the three critical points in the middle of panel B have collided simultaneously (three saddle-node bifurcations) with the three critical points on the left of panel B. In the last row we can see that after the bifurcation occurs, for $\delta = 0.8$, the PES has only three critical points left: one index-1 saddle and the two wells on the right.
## Implications for Reaction Dynamics: Asymmetric and Uncoupled Dynamics
In Fig. [fig:6](#fig:LD_delta_02) we calculate LDs for the fixed value of the asymmetric parameter $\delta = 0.2$ and for different energies. In particular in this figure we have calculated LDs for an integration time of $\tau = 5$ in the phase space slice $y = -1/\sqrt{2}$. We discuss the phase space structures for different energy values. For instance, in Fig. [fig:6](#fig:LD_delta_02) A) we use an energy value of $H = -0.2$ which is below the energy of the left index-1 saddle and below the energy of the index-2 saddle. Hence, the bottleneck that connects the lower-left and upper-left wells is still closed. The only option for trajectories that start on the lower-left well is to be trapped in that well or to evolve to the lower-right well by crossing the bottom index-1 saddle. In panel B) we use an energy of $H=-0.1$ which is above the energy of the left index-1 and still below the energy of index-2 saddle. In this case trajectories starting on the lower-left well have three different types of motion. They can be trapped in the well forever, or they can move to the lower-right or upper-left wells, since the phase space bottleneck connecting the wells is now open. Therefore, trajectories can move between the wells and the system would show sequential isomerization.
In Fig. [fig:7](#fig:LD_delta_crit) we illustrate the system's behavior using LDs with $p = 1/2$ and integration time $\tau = 10$ for the asymmetric and uncoupled case $\delta = \delta_{c}$, and for energies chosen with reference to the critical-point energies in Eq. \eqref{energyeq} at the bifurcation points; in particular we choose one energy $H = (\alpha^{2}-3)/12 = -1/6$ and one energy in the range $(\alpha^{2}-3)/12 < H < \alpha^{2}/12$, namely $H = -1/12$. The magenta curve represents the energy boundary. For this value of $\delta$, the critical value, the three simultaneous saddle-node bifurcations have just occurred and there are four wells and two index-1 saddles left. A) The energy of the system is $H = -1/6$ and corresponds to the phase space slice $y = -1/\sqrt{2}$. In this case we have two regions: one inside the spherical cylinder of the index-1 saddle on the right, and one outside of this cylinder, which is the trapping region. We choose an initial condition in each of these regions and in panel B we show their time evolution projected onto configuration space. The trajectory of any initial condition inside the spherical cylinder of the index-1 saddle (red) will remain inside the cylinder and move between the lower and upper wells, whereas any trajectory starting outside of the cylinder will remain trapped in the well (blue); its coordinates do not change sign. C) The energy of the system is $H = -1/12$ and corresponds to the phase space slice $y = -1/\sqrt{2}$. In this case there are three regions: one inside the spherical cylinder of the index-1 saddle, one inside the spherical cylinder of the parabolic point on the line $x = x_{c}$, and one outside of these. D) The phase space slice $p_y = 0$. Here we can see how the initial conditions from panel C evolve. The trajectory inside the parabolic spherical cylinder (cyan) will remain inside that cylinder, exhibiting horizontal trapped motion. The red initial condition is trapped in the well. Finally, the blue initial condition moves up and down between the two wells (sequential isomerization).
<a id="fig:asym_pot_dif_delta"></a>
<figcaption style="text-align:center;font-size:14px"><b>fig:5 </b><em> Potential energy surface on the left column and configuration space on the right column for the asymmetric uncoupled potential in Eq. (\ref{potasymun}) for different values of the asymmetric parameter $\delta$. A) and B) $\delta = 0.2 < \delta_c$, C) and D) $\delta = \delta_c$, E) and F) $\delta = 0.8 > \delta_c$.</em></figcaption><hr>
<a id="fig:LD_delta_02"></a>
<figcaption style="text-align:center;font-size:14px"><b>fig:6 </b><em> LDs calculated using $p = 1/2$ and $\tau = 5$ for the asymmetric uncoupled Hamiltonian in Eq. (\ref{eqasymun}) for $\delta = 0.2$. Energies are: A) $H = -0.2$. B) $H = -0.1$.</em></figcaption><hr>
<a id="fig:LD_delta_crit"></a>
<figcaption style="text-align:center;font-size:14px"><b>fig:7 </b><em> LDs calculated using $p = 1/2$ for $\tau = 10$ in the asymmetric and uncoupled Hamiltonian in Eq. (\ref{eqasymun}) for $\delta = \delta_c$. The left column corresponds to the phase space slice $y = -1/\sqrt{2}$ and the right column is for $p_x = 0$, except panel D) that is for $p_y = 0$. The energy of the system is: A) and B) $H = -1/6$; C) and D) $H = -1/12$ and $\tau = 10$.</em></figcaption><hr>
# References
{% bibliography --file SNreac --cited %}
*Source notebook: content/act2/four_well_morse/four_well_morse-jekyll.ipynb (champsproject/chem_react_dyn, CC-BY-4.0).*
```python
import sympy as sm
```
## Depth
```python
u1, u2, r, k, mu_b, d = sm.symbols("u1, u2, r, k, mu_b, d", real=True)
mu = sm.sqrt(1 - r ** 2)                       # mu at projected radius r
ir = 1 - u1 * (1 - mu) - u2 * (1 - mu) ** 2    # quadratic limb-darkening intensity profile
```
```python
f0 = sm.simplify(sm.integrate(2 * sm.pi * r * ir, (r, 0, 1)))
f0
```
```python
df = sm.pi * k ** 2 * ir.subs([(r, sm.sqrt(1 - mu_b ** 2))])
df
```
```python
delta = sm.simplify(df / f0)
delta
```
```python
sm.solve(sm.Eq(delta, d), k ** 2)[0]
```
## Duration
```python
# sin2phi = sin^2(pi * tau / P)
k, b, a, sin2phi, tau, P = sm.symbols("k, b, a, sin2phi, tau, P", real=True)
cos2i = (b / a) ** 2
sin2i = 1 - cos2i
val = sm.simplify(((1 + k) ** 2 - b ** 2) / (a ** 2 * sin2i))
val
```
```python
a2_res = sm.solve(sm.Eq(sin2phi, val), a ** 2)[0]
a2_res
```
```python
sm.simplify(a2_res.subs([(sin2phi, sm.sin(sm.pi * tau / P) ** 2)]))
```
```python
sm.simplify(a2_res - ((1 + k) ** 2 - b ** 2 * (1 - sin2phi)) / sin2phi)
```
```python
sm.simplify(sm.diff(a2_res.subs([(sin2phi, sm.sin(sm.pi * tau / P) ** 2)]), tau))
```
```python
```
*Source notebook: paper/figures/depth-and-duration.ipynb (exoplanet-dev/tess.world, MIT).*
# Solving Linear Systems
```python
import numpy as np
import matplotlib.pyplot as plt
import scipy.linalg as la
%matplotlib inline
```
## Linear Systems
A [linear system of equations](https://en.wikipedia.org/wiki/System_of_linear_equations) is a collection of linear equations
\begin{align}
a_{0,0}x_0 + a_{0,1}x_1 + \cdots + a_{0,n}x_n & = b_0 \\\
a_{1,0}x_0 + a_{1,1}x_1 + \cdots + a_{1,n}x_n & = b_1 \\\
& \vdots \\\
a_{m,0}x_0 + a_{m,1}x_1 + \cdots + a_{m,n}x_n & = b_m \\\
\end{align}
In matrix notation, a linear system is $A \mathbf{x}= \mathbf{b}$ where
$$
A = \begin{bmatrix}
a_{0,0} & a_{0,1} & \cdots & a_{0,n} \\\
a_{1,0} & a_{1,1} & \cdots & a_{1,n} \\\
\vdots & & & \vdots \\\
a_{m,0} & a_{m,1} & \cdots & a_{m,n} \\\
\end{bmatrix}
\ \ , \ \
\mathbf{x} = \begin{bmatrix}
x_0 \\\ x_1 \\\ \vdots \\\ x_n
\end{bmatrix}
\ \ , \ \
\mathbf{b} = \begin{bmatrix}
b_0 \\\ b_1 \\\ \vdots \\\ b_m
\end{bmatrix}
$$
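For example, a small $3 \times 3$ system in this notation can be written down and solved directly with SciPy (the rest of the notebook builds up the elimination steps that make this work):

```python
import numpy as np
import scipy.linalg as la

A = np.array([[2., 1., -1.],
              [1., 3., 2.],
              [1., 0., 0.]])
b = np.array([1., 2., 3.])
x = la.solve(A, b)
print(x)
print(np.allclose(A @ x, b))   # verify that A x = b
```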
## Gaussian elimination
The general procedure to solve a linear system of equation is called [Gaussian elimination](https://en.wikipedia.org/wiki/Gaussian_elimination). The idea is to perform elementary row operations to reduce the system to its row echelon form and then solve.
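The rest of this notebook builds that procedure out of elementary row operations; for reference, here is a compact sketch of the whole algorithm (forward elimination with partial pivoting followed by back substitution):

```python
import numpy as np

def gaussian_elimination(A, b):
    """Solve A x = b for square A by row reduction and back substitution."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))   # partial pivoting: largest pivot in column k
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]             # multiplier for the elementary row operation
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):            # back substitution
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x
```

A quick check: `gaussian_elimination(A, b)` should agree with `la.solve(A, b)` for any nonsingular `A`.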
### Elementary Row Operations
[Elementary row operations](https://en.wikipedia.org/wiki/Elementary_matrix#Elementary_row_operations) include:
1. Add $k$ times row $j$ to row $i$.
2. Multiply row $i$ by scalar $k$.
3. Switch rows $i$ and $j$.
Each of the elementary row operations is the result of matrix multiplication by an elementary matrix (on the left).
To add $k$ times row $j$ to row $i$ in a matrix $A$, we multiply $A$ by the matrix $E$ where $E$ is equal to the identity matrix except the $i,j$ entry is $E_{i,j} = k$. For example, if $A$ is 3 by 3 and we want to add 3 times row 2 to row 0 (using 0 indexing) then
$$
E_1 = \begin{bmatrix}
1 & 0 & 3 \\\
0 & 1 & 0 \\\
0 & 0 & 1
\end{bmatrix}
$$
Let's verify the calculation:
```python
A = np.array([[1,1,2],[-1,3,1],[0,5,2]])
print(A)
```
[[ 1 1 2]
[-1 3 1]
[ 0 5 2]]
```python
E1 = np.array([[1,0,3],[0,1,0],[0,0,1]])
print(E1)
```
[[1 0 3]
[0 1 0]
[0 0 1]]
```python
E1 @ A
```
array([[ 1, 16, 8],
[-1, 3, 1],
[ 0, 5, 2]])
To multiply row $i$ by $k$ in a matrix $A$, we multiply $A$ by the matrix $E$ where $E$ is equal to the identity matrix except the $i,i$ entry is $E_{i,i} = k$. For example, if $A$ is 3 by 3 and we want to multiply row 1 by -2 then
$$
E_2 = \begin{bmatrix}
1 & 0 & 0 \\\
0 & -2 & 0 \\\
0 & 0 & 1
\end{bmatrix}
$$
Let's verify the calculation:
```python
E2 = np.array([[1,0,0],[0,-2,0],[0,0,1]])
print(E2)
```
[[ 1 0 0]
[ 0 -2 0]
[ 0 0 1]]
```python
E2 @ A
```
array([[ 1, 1, 2],
[ 2, -6, -2],
[ 0, 5, 2]])
Finally, to switch row $i$ and row $j$ in a matrix $A$, we multiply $A$ by the matrix $E$ where $E$ is equal to the identity matrix except $E_{i,i} = 0$, $E_{j,j} = 0$, $E_{i,j} = 1$ and $E_{j,i} = 1$. For example, if $A$ is 3 by 3 and we want to switch row 1 and row 2 then
$$
E_3 = \begin{bmatrix}
1 & 0 & 0 \\\
0 & 0 & 1 \\\
0 & 1 & 0
\end{bmatrix}
$$
Let's verify the calculation:
```python
E3 = np.array([[1,0,0],[0,0,1],[0,1,0]])
print(E3)
```
[[1 0 0]
[0 0 1]
[0 1 0]]
```python
E3 @ A
```
array([[ 1, 1, 2],
[ 0, 5, 2],
[-1, 3, 1]])
### Implementation
Let's write functions to implement the elementary row operations. First of all, let's write a function called `add_row` which takes input parameters $A$, $k$, $i$ and $j$ and returns the NumPy array resulting from adding $k$ times row $j$ to row $i$ in the matrix $A$. If $i=j$, then let's say that the function scales row $i$ by $k+1$ since this would be the result of $k$ times row $i$ added to row $i$.
```python
def add_row(A,k,i,j):
"Add k times row j to row i in matrix A."
n = A.shape[0]
E = np.eye(n)
if i == j:
E[i,i] = k + 1
else:
E[i,j] = k
return E @ A
```
Let's test our function:
```python
M = np.array([[1,1],[3,2]])
print(M)
```
[[1 1]
[3 2]]
```python
add_row(M,2,0,1)
```
array([[7., 5.],
[3., 2.]])
```python
add_row(M,3,1,1)
```
array([[ 1., 1.],
[12., 8.]])
Let's write a function called `scale_row` which takes 3 input parameters $A$, $k$, and $i$ and returns the matrix that results from multiplying row $i$ of the matrix $A$ by $k$.
```python
def scale_row(A,k,i):
"Multiply row i by k in matrix A."
n = A.shape[0]
E = np.eye(n)
E[i,i] = k
return E @ A
```
```python
M = np.array([[3,1],[-2,7]])
print(M)
```
[[ 3 1]
[-2 7]]
```python
scale_row(M,3,1)
```
array([[ 3., 1.],
[-6., 21.]])
```python
A = np.array([[1,1,1],[1,-1,0]])
print(A)
```
[[ 1 1 1]
[ 1 -1 0]]
```python
scale_row(A,5,1)
```
array([[ 1., 1., 1.],
[ 5., -5., 0.]])
Let's write a function called `switch_rows` which takes 3 input parameters $A$, $i$ and $j$ and returns the matrix that results from switching rows $i$ and $j$ in the matrix $A$.
```python
def switch_rows(A,i,j):
"Switch rows i and j in matrix A."
n = A.shape[0]
E = np.eye(n)
E[i,i] = 0
E[j,j] = 0
E[i,j] = 1
E[j,i] = 1
return E @ A
```
```python
A = np.array([[1,1,1],[1,-1,0]])
print(A)
```
[[ 1 1 1]
[ 1 -1 0]]
```python
switch_rows(A,0,1)
```
array([[ 1., -1., 0.],
[ 1., 1., 1.]])
## Examples
### Find the Inverse
Let's apply our functions to the augmented matrix $[M \ | \ I]$ to find the inverse of the matrix $M$:
```python
M = np.array([[5,4,2],[-1,2,1],[1,1,1]])
print(M)
```
[[ 5 4 2]
[-1 2 1]
[ 1 1 1]]
```python
A = np.hstack([M,np.eye(3)])
print(A)
```
[[ 5. 4. 2. 1. 0. 0.]
[-1. 2. 1. 0. 1. 0.]
[ 1. 1. 1. 0. 0. 1.]]
```python
A1 = switch_rows(A,0,2)
print(A1)
```
[[ 1. 1. 1. 0. 0. 1.]
[-1. 2. 1. 0. 1. 0.]
[ 5. 4. 2. 1. 0. 0.]]
```python
A2 = add_row(A1,1,1,0)
print(A2)
```
[[1. 1. 1. 0. 0. 1.]
[0. 3. 2. 0. 1. 1.]
[5. 4. 2. 1. 0. 0.]]
```python
A3 = add_row(A2,-5,2,0)
print(A3)
```
[[ 1. 1. 1. 0. 0. 1.]
[ 0. 3. 2. 0. 1. 1.]
[ 0. -1. -3. 1. 0. -5.]]
```python
A4 = switch_rows(A3,1,2)
print(A4)
```
[[ 1. 1. 1. 0. 0. 1.]
[ 0. -1. -3. 1. 0. -5.]
[ 0. 3. 2. 0. 1. 1.]]
```python
A5 = scale_row(A4,-1,1)
print(A5)
```
[[ 1. 1. 1. 0. 0. 1.]
[ 0. 1. 3. -1. 0. 5.]
[ 0. 3. 2. 0. 1. 1.]]
```python
A6 = add_row(A5,-3,2,1)
print(A6)
```
[[ 1. 1. 1. 0. 0. 1.]
[ 0. 1. 3. -1. 0. 5.]
[ 0. 0. -7. 3. 1. -14.]]
```python
A7 = scale_row(A6,-1/7,2)
print(A7)
```
[[ 1. 1. 1. 0. 0. 1. ]
[ 0. 1. 3. -1. 0. 5. ]
[ 0. 0. 1. -0.42857143 -0.14285714 2. ]]
```python
A8 = add_row(A7,-3,1,2)
print(A8)
```
[[ 1. 1. 1. 0. 0. 1. ]
[ 0. 1. 0. 0.28571429 0.42857143 -1. ]
[ 0. 0. 1. -0.42857143 -0.14285714 2. ]]
```python
A9 = add_row(A8,-1,0,2)
print(A9)
```
[[ 1. 1. 0. 0.42857143 0.14285714 -1. ]
[ 0. 1. 0. 0.28571429 0.42857143 -1. ]
[ 0. 0. 1. -0.42857143 -0.14285714 2. ]]
```python
A10 = add_row(A9,-1,0,1)
print(A10)
```
[[ 1. 0. 0. 0.14285714 -0.28571429 0. ]
[ 0. 1. 0. 0.28571429 0.42857143 -1. ]
[ 0. 0. 1. -0.42857143 -0.14285714 2. ]]
Let's verify that we found the inverse $M^{-1}$ correctly:
```python
Minv = A10[:,3:]
print(Minv)
```
[[ 0.14285714 -0.28571429 0. ]
[ 0.28571429 0.42857143 -1. ]
[-0.42857143 -0.14285714 2. ]]
```python
result = Minv @ M
print(result)
```
[[ 1.00000000e+00 4.44089210e-16 2.22044605e-16]
[-6.66133815e-16 1.00000000e+00 -2.22044605e-16]
[ 0.00000000e+00 0.00000000e+00 1.00000000e+00]]
Success! We can see the result more clearly if we round to 15 decimal places:
```python
np.round(result,15)
```
array([[ 1.e+00, 0.e+00, 0.e+00],
[-1.e-15, 1.e+00, -0.e+00],
[ 0.e+00, 0.e+00, 1.e+00]])
### Solve a System
Let's use our functions to perform Gaussian elimination and solve a linear system of equations $A \mathbf{x} = \mathbf{b}$.
```python
A = np.array([[6,15,1],[8,7,12],[2,7,8]])
print(A)
```
[[ 6 15 1]
[ 8 7 12]
[ 2 7 8]]
```python
b = np.array([[2],[14],[10]])
print(b)
```
[[ 2]
[14]
[10]]
Form the augmented matrix $M$:
```python
M = np.hstack([A,b])
print(M)
```
[[ 6 15 1 2]
[ 8 7 12 14]
[ 2 7 8 10]]
Perform row operations:
```python
M1 = scale_row(M,1/6,0)
print(M1)
```
[[ 1. 2.5 0.16666667 0.33333333]
[ 8. 7. 12. 14. ]
[ 2. 7. 8. 10. ]]
```python
M2 = add_row(M1,-8,1,0)
print(M2)
```
[[ 1. 2.5 0.16666667 0.33333333]
[ 0. -13. 10.66666667 11.33333333]
[ 2. 7. 8. 10. ]]
```python
M3 = add_row(M2,-2,2,0)
print(M3)
```
[[ 1. 2.5 0.16666667 0.33333333]
[ 0. -13. 10.66666667 11.33333333]
[ 0. 2. 7.66666667 9.33333333]]
```python
M4 = scale_row(M3,-1/13,1)
print(M4)
```
[[ 1. 2.5 0.16666667 0.33333333]
[ 0. 1. -0.82051282 -0.87179487]
[ 0. 2. 7.66666667 9.33333333]]
```python
M5 = add_row(M4,-2,2,1)
print(M5)
```
[[ 1. 2.5 0.16666667 0.33333333]
[ 0. 1. -0.82051282 -0.87179487]
[ 0. 0. 9.30769231 11.07692308]]
```python
M6 = scale_row(M5,1/M5[2,2],2)
print(M6)
```
[[ 1. 2.5 0.16666667 0.33333333]
[ 0. 1. -0.82051282 -0.87179487]
[ 0. 0. 1. 1.19008264]]
```python
M7 = add_row(M6,-M6[1,2],1,2)
print(M7)
```
[[1. 2.5 0.16666667 0.33333333]
[0. 1. 0. 0.1046832 ]
[0. 0. 1. 1.19008264]]
```python
M8 = add_row(M7,-M7[0,2],0,2)
print(M8)
```
[[1. 2.5 0. 0.13498623]
[0. 1. 0. 0.1046832 ]
[0. 0. 1. 1.19008264]]
```python
M9 = add_row(M8,-M8[0,1],0,1)
print(M9)
```
[[ 1. 0. 0. -0.12672176]
[ 0. 1. 0. 0.1046832 ]
[ 0. 0. 1. 1.19008264]]
Success! The solution of $Ax=b$ is
```python
x = M9[:,3].reshape(3,1)
print(x)
```
[[-0.12672176]
[ 0.1046832 ]
[ 1.19008264]]
Or, we can do it the easy way...
```python
x = la.solve(A,b)
print(x)
```
[[-0.12672176]
[ 0.1046832 ]
[ 1.19008264]]
## `scipy.linalg.solve`
We are mostly interested in linear systems $A \mathbf{x} = \mathbf{b}$ where there is a unique solution $\mathbf{x}$. This is the case when $A$ is a square matrix ($m=n$) and $\mathrm{det}(A) \not= 0$. To solve such a system, we can use the function [`scipy.linalg.solve`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.solve.html).
The function returns a solution of the system of equations $A \mathbf{x} = \mathbf{b}$. For example:
```python
A = np.array([[1,1],[1,-1]])
print(A)
```
[[ 1 1]
[ 1 -1]]
```python
b1 = np.array([2,0])
print(b1)
```
[2 0]
And solve:
```python
x1 = la.solve(A,b1)
print(x1)
```
[1. 1.]
Note that the output $\mathbf{x}$ is returned as a 1D NumPy array when the vector $\mathbf{b}$ (the right hand side) is entered as a 1D NumPy array. If we input $\mathbf{b}$ as a 2D NumPy array, then the output is a 2D NumPy array. For example:
```python
A = np.array([[1,1],[1,-1]])
b2 = np.array([2,0]).reshape(2,1)
x2 = la.solve(A,b2)
print(x2)
```
[[1.]
[1.]]
Finally, if the right hand side $\mathbf{b}$ is a matrix, then the output is a matrix of the same size: each column of the output solves $A \mathbf{x} = \mathbf{b}_j$ for the corresponding column $\mathbf{b}_j$ of $\mathbf{b}$. For example:
```python
A = np.array([[1,1],[1,-1]])
b3 = np.array([[2,2],[0,1]])
x3 = la.solve(A,b3)
print(x3)
```
[[1. 1.5]
[1. 0.5]]
### Simple Example
Let's compute the solution of the system of equations
\begin{align}
2x + y &= 1 \\\
x + y &= -1
\end{align}
Create the matrix of coefficients:
```python
A = np.array([[2,1],[1,1]])
print(A)
```
[[2 1]
[1 1]]
And the vector $\mathbf{b}$:
```python
b = np.array([1,-1]).reshape(2,1)
print(b)
```
[[ 1]
[-1]]
And solve:
```python
x = la.solve(A,b)
print(x)
```
[[ 2.]
[-3.]]
We can verify the solution by computing the inverse of $A$:
```python
Ainv = la.inv(A)
print(Ainv)
```
[[ 1. -1.]
[-1. 2.]]
And multiply $A^{-1} \mathbf{b}$ to solve for $\mathbf{x}$:
```python
x = Ainv @ b
print(x)
```
[[ 2.]
[-3.]]
We get the same result. Success!
### Inverse or Solve
It's a bad idea to use the inverse $A^{-1}$ to solve $A \mathbf{x} = \mathbf{b}$ if $A$ is large. It's too computationally expensive. Let's create a large random matrix $A$ and vector $\mathbf{b}$ and compute the solution $\mathbf{x}$ in 2 ways:
```python
N = 1000
A = np.random.rand(N,N)
b = np.random.rand(N,1)
```
Check the first few entries of $A$:
```python
A[:3,:3]
```
array([[0.35754719, 0.63135432, 0.6572258 ],
[0.18450506, 0.14639832, 0.23528745],
[0.27576474, 0.46264005, 0.26589724]])
And for $\mathbf{b}$:
```python
b[:4,:]
```
array([[0.82726751],
[0.96946096],
[0.31351176],
[0.63757837]])
Now we compare the speed of `scipy.linalg.solve` with `scipy.linalg.inv`:
```python
%%timeit
x = la.solve(A,b)
```
2.77 s ± 509 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```python
%%timeit
x = la.inv(A) @ b
```
4.46 s ± 2.04 s per loop (mean ± std. dev. of 7 runs, 1 loop each)
Solving with `scipy.linalg.solve` is clearly faster (here by a factor of roughly 1.6), and it never forms the inverse explicitly.
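One reason for the gap is that `scipy.linalg.solve` computes an LU factorization of $A$ and then performs two cheap triangular solves, while forming $A^{-1}$ explicitly does noticeably more work and tends to be less accurate numerically. If you need to solve against many right-hand sides with the same $A$, a common pattern is to factor once and reuse the factorization. Here is a minimal sketch using `scipy.linalg.lu_factor`/`lu_solve` with the `A` and `b` defined above (the variable names are otherwise arbitrary):

```python
import scipy.linalg as la

lu, piv = la.lu_factor(A)      # factor A once
x = la.lu_solve((lu, piv), b)  # each subsequent solve reuses the factorization
```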
[record metadata | Python/3. Computational Sciences and Mathematics/Linear Algebra/Solving Systems of Linear Equations.ipynb | okara83/Becoming-a-Data-Scientist | MIT]
```python
from sympy.physics.units import *
from sympy import *
# Rounding:
import decimal
from decimal import Decimal as DX
def iso_round(obj, pv, rounding=decimal.ROUND_HALF_EVEN):
import sympy
"""
Rounding acc. to DIN EN ISO 80000-1:2013-08
place value = Rundestellenwert
"""
assert pv in set([
# place value # round to:
100, # 3rd last digit before decimal
10, # 2nd last
1, # last
0.1, # 1st digit after decimal
0.01, # 2nd
0.001, # 3rd
0.0001, # 4th
0.00001, # 5th
0.000001, # 6th
0.0000001, # 7th
0.00000001, # 8th
0.000000001, # 9th
0.0000000001, # 10th
])
try:
tmp = DX(str(float(obj)))
obj = tmp.quantize(DX(str(pv)), rounding=rounding)
except:
for i in range(len(obj)):
tmp = DX(str(float(obj[i])))
obj[i] = tmp.quantize(DX(str(pv)), rounding=rounding)
return obj
# LateX:
kwargs = {}
kwargs["mat_str"] = "bmatrix"
kwargs["mat_delim"] = ""
# kwargs["symbol_names"] = {FB: "F^{\mathsf B}", }
# Units:
(k, M, G ) = ( 10**3, 10**6, 10**9 )
(mm, cm, deg) = ( m/1000, m/100, pi/180)
Newton = kg*m/s**2
Pa = Newton/m**2
MPa = M*Pa
GPa = G*Pa
kN = k*Newton
# ---
c, mass = var("c, mass", positive=True)
K = Matrix([
[2*c, -c],
[-c, c],
])
M = Matrix([
[3*mass/2, 0],
[0, mass],
])
EI = 210 *GPa * 5100 *cm**4
l = S(45)/10 *m
sub_list = [
(mass, 1000*kg ),
(c, 24*EI/l**3 ),
]
# ξ = λ²
xi = var("xi")
w = var("omega")
A = K + xi*M
pprint("\nCharacteristic equation:")
eq = Eq(det(A), 0)
pprint(eq)
sol_xi = solve(eq,xi)
w2, w3 = var("w2, w3")
w = Matrix([w2, w3])
zero = Matrix([0,0])
for i in range(len(sol_xi)):
pprint("\n\nEigenvalue:")
xii = sol_xi[i]
pprint(xii)
Ai = A.subs(xi,xii)
eq = Eq(Ai*w,zero)
sol = solve(eq, w2)
pprint("\nEigenvector:")
pprint(sol)
# Omega 1:
pprint("\nw1 / (1/s):")
w1 = sqrt(2*c/mass)
tmp = w1.subs(sub_list)
tmp /= (1/s)
tmp = iso_round(tmp, 0.1)
pprint(tmp)
w = w1.subs(sub_list)
w_in_Hz = w / (1/s)
pprint("\nPeriod T1 / s:")
T = 2*pi/w
T_in_s = T / s
T_in_s = N(T_in_s,20)
T_in_s = float(T_in_s)
tmp = T_in_s
tmp = iso_round(tmp,0.001)
pprint(tmp)
from pylab import *
from numpy import linspace
t_in_s = linspace(0,T_in_s,100)
wt = w_in_Hz * t_in_s
cos_wt = array([cos(float(x)) for x in wt])
w2_in_mm = -10 * cos_wt
w3_in_mm = +10 * cos_wt
plt.axis()
plt.grid()
plt.plot(t_in_s, w2_in_mm, "b-", label=r"$w2\,\, / \,\, \mathrm{mm}$")
plt.plot(t_in_s, w3_in_mm, "r--", label=r"$w3\,\, / \,\, \mathrm{mm}$")
plt.xlabel(r"$t \,\, / \,\, \mathrm{s}$")
plt.legend()
plt.savefig('2dofs_motion.svg', transparent=True)
plt.show()
# Characteristic equation:
# 2 ⎛ 3⋅mass⋅ξ⎞
# - c + (c + mass⋅ξ)⋅⎜2⋅c + ────────⎟ = 0
# ⎝ 2 ⎠
#
#
# Eigenvalue:
# -2⋅c
# ─────
# mass
#
# Eigenvector:
# {w₂: -w₃}
#
#
# Eigenvalue:
# -c
# ──────
# 3⋅mass
#
# Eigenvector:
# ⎧ 2⋅w₃⎫
# ⎨w₂: ────⎬
# ⎩ 3 ⎭
#
# w1 / (1/s):
# 75.1
#
# Period T1 / s:
# 0.084
```
[record metadata | ipynb/TM_3/5_SL/Modal/2_DOFs/Beam/2dofs_cc.ipynb | kassbohm/tm-snippets | MIT]
last edited by Claire Valva on May 13, 2019, with update and cleanup on June 24, 2019
# Test ENSO simulations and plotting
```python
# import packages
import numpy as np
from scipy.fftpack import fft, ifft, fftfreq, fftshift, ifftshift
import scipy.integrate as sciint
import pandas as pd
from math import pi
from sympy import solve, Poly, Eq, Function, exp, re, im
from scipy.optimize import fsolve
from decimal import Decimal
import pickle
import time
import random
import multiprocessing as mp
from joblib import Parallel, delayed
import numpy as np
from scipy.signal import get_window, csd
from scipy.signal.windows import hann, hanning, nuttall, flattop
from scipy.fftpack import fft, ifft, fftfreq, fftshift, ifftshift
#import matplotlib
#matplotlib.use('Agg')
import matplotlib.pyplot as plt
import scipy.integrate as sciint
import pandas as pd
import datetime
import matplotlib.cm as cm
from math import pi
import matplotlib.ticker as tck
import datetime
from sympy import solve, Poly, Eq, Function, exp, re, im
from netCDF4 import Dataset, num2date # This is to read .nc files and time array
from scipy.optimize import fsolve
from decimal import Decimal
import pickle
import multiprocessing as mp
from joblib import Parallel, delayed
import matplotlib.colors as colors
from seaborn import cubehelix_palette #for contour plot colors
import seaborn as sns
from decimal import Decimal
import numpy.ma as ma
import random
#flatten season for plotting
flatten = lambda l: [item for sublist in l for item in sublist]
```
```python
import scipy.stats as spyst
```
```python
from os import walk
oldf = []
for (dirpath, dirnames, filenames) in walk('/scratch/midway2/clairev/enso_spectra/'):
oldf.extend(filenames)
break
f = []
for named in oldf:
if named[0:15] == "spectra_enso_02":
f.append(named)
```
```python
```
```python
def solve_f(X, Zofkt):
# function to solve f coeff equation for trend analysis
x,y = X
f = Zofkt - x*np.exp(1j*y)
return [np.real(f), np.imag(f)]
def real_f(X,Zofkt):
# function to wrap solve_f so that it can be used with fsolve
x,y = X
z = [x+0j,y+0j]
actual_f = solve_f(z, Zofkt)
return(actual_f)
def fwithZ(entry):
answers = fsolve(real_f, np.array([0,0]), args = entry)
return answers
# get function to generate random coeffs
def entry_fft(amp, phase = random.uniform(0, 2*pi)):
# takes amplitude and phase to give corresponding fourier coeff
entry = amp*np.exp(1j*phase)
return entry
# write functions to make a longer ifft
def ext_row(row, n):
ext_f = np.zeros(((len(row) - 1) * n + 1,), dtype="complex128")
ext_f[::n] = row * n
return ext_f
def ext_ifft_new(n, input_array):
# add the zeros onto each end
ext_f = [ext_row(entry,n) for entry in input_array]
    # compensate for the ifft normalization after extending the array length
olddim = len(input_array[5])
newdim = len(ext_f[0])
mult = newdim/olddim
ext_f = np.multiply(ext_f, mult)
adjusted_tested = np.fft.ifft2(ext_f)
return adjusted_tested
```
```python
season_titles = ["Winter", "Spring", "Summer", "Fall"]
seasons = ["winter", "spring", "summer", "fall"]
# flatten season for plotting
flatten = lambda l: [item for sublist in l for item in sublist]
named = f[0]
```
```python
#file_name = "/scratch/midway2/clairev/enso_spectra/averaged/01_enso_avg_" + str(named[16:21])
#file_pickle = open(file_name, "rb")
#d2_touse, d2_seasons, d2_averages = pickle.load(file_pickle)
```
```python
ens = ["nino", "nina", "neutral"]
d2_names = [enso + " " + part for part in seasons for enso in ens]
d2_names
```
['ninowinter',
'ninawinter',
'neutralwinter',
'ninospring',
'ninaspring',
'neutralspring',
'ninosummer',
'ninasummer',
'neutralsummer',
'ninofall',
'ninafall',
'neutralfall']
```python
name = "01_enso_36.0N424"
name[8:13]
```
'36.0N'
```python
file_name = "/scratch/midway2/clairev/enso_sims/01_enso_36.0N424"
file_pickle = open(file_name, "rb")
pickled = pickle.load(file_pickle)
```
```python
flat_sims = [flatten(entry[0]) for entry in pickled]
```
```python
#make lists of el nino/regular/la nina years
nino = [1980,1983,1987,1988,1992,
1995,1998,2003,2007,2010]
neutral = [1979,1981,1982,1984,1985,1986,1990,
1991,1993,1994,1996,1997,2001,2002,
2004,2005,2006,2009,2013,2014,2015,2016]
nina = [1989,1999,2000,2008,2011,2012]
```
```python
len_all = 38.0
nina_per = len(nina)/len_all
nino_per = len(nino)/len_all
neutral_per = len(neutral)/len_all
all_pers = [nina_per, nino_per, neutral_per]
```
```python
all_pers
```
[0.15789473684210525, 0.2631578947368421, 0.5789473684210527]
```python
# now plot them
# weight them by years percentrage when plotting together
for j in range(4):
plt.clf();
plt.figure(figsize=(15, 5));
for k in range(3):
#print("hi")
plt.hist(x = np.real(flat_sims[j*3 + k]), bins = 100, density = True, alpha = 0.5, label = d2_names[j*3 + k])
plt.ylabel("density")
plt.legend()
plt.xlabel("geopotential height")
plt.show()
```
```python
# sort them into each season
phase_all = [[[[fwithZ(entry) for entry in sublist]
for sublist in year]
for year in season]
for season in d2_seasons]
# sort them into each season
amps_all = [[[[entry[0] for entry in sublist]
for sublist in year]
for year in season]
for season in phase_all]
ps_all = [[[[entry[1] % (2 * np.pi) for entry in sublist]
for sublist in year]
for year in season]
for season in phase_all]
# adjust for winter averaging
# TO DO: come up with better procedure rather
# current: chopping off edges to make the same length for averaging
norml = 359
longl = 364
def padded(to_pad, index):
length = len(to_pad)
if index == 0:
zeros = longl - length
to_pad = list(to_pad)
for i in range(zeros):
to_pad.append(0)
return to_pad
else:
return to_pad
#pad rows with zeros to account for leap year
season_amps_adj = [[[padded(row, 0)
for row in entry]
for entry in amps_all[i]]
for i in range(len(amps_all))]
#pad rows with zeros to account for leap year
season_phase_adj = [[[padded(row, 0)
for row in entry]
for entry in ps_all[i]]
for i in range(len(ps_all))]
#get average amplitude for each season
avg_amps = [np.average(season, axis = 0)
for season in season_amps_adj]
#get std amplitude for each season
std_amps = [np.std(season, axis = 0)
for season in season_amps_adj]
#get average phases for each season
avg_phase = [spyst.circmean(season, axis = 0)
for season in season_phase_adj]
#get std phases for each season
std_phase = [spyst.circstd(season, axis = 0)
for season in season_phase_adj]
import pickle
file_name2 = "/scratch/midway2/clairev/enso_spectra/averaged/01_enso_avg_" + str(named[16:21])
file_pickle = open(file_name2,'wb')
pickle.dump([avg_amps,std_amps,avg_phase,std_phase],file_pickle)
file_pickle.close()
```
```python
# get function to generate random coeffs
def entry_fft(amp,std, phase, stdphase):
# takes amplitude and phase to give corresponding fourier coeff
amp_new = np.random.normal(loc = amp, scale = std)
phase_new = np.random.normal(loc = phase, scale = stdphase)
entry = amp_new*np.exp(1j*phase_new)
return entry
# write functions to make a longer ifft
def ext_row(row, n):
ext_f = np.zeros(((len(row) - 1) * n + 1,), dtype="complex128")
ext_f[::n] = row * n
return ext_f
def ext_ifft_new(n, input_array):
# add the zeros onto each end
ext_f = [ext_row(entry,n) for entry in input_array]
    # compensate for the ifft normalization after extending the array length
olddim = len(input_array[5])
newdim = len(ext_f[0])
mult = newdim/olddim
# ext_f = np.multiply(mult, ext_f)
adjusted_tested = np.fft.ifft2(ext_f)
return adjusted_tested
def combined(amps,stds,phases,stdphases, length):
# combines generation of random phase with inverse transform
newarray = [[entry_fft(amp = amps[wave][timed],
std = stds[wave][timed],
phase = phases[wave][timed], stdphase = stdphases[wave][timed])
for timed in range(len(amps[wave]))]
for wave in range(len(amps))]
newarray = [np.array(leaf) for leaf in newarray]
iffted = ext_ifft_new(length, newarray)
return iffted
def repeater(season, stds, phases,stdphases, length, times):
# repeats the phase creation and inverse transform
newarray = [combined(season, stds, phases,stdphases,length) for leaf in range(times)]
return(newarray)
# set lims
runlen = 75
runtimes = 1
repeattimes = 20
listed_parts = []
def repeater_2(amps,stds, phases,stdphases, length, times):
#do procedure
repeated_comp = [repeater(amps[i],stds[i], phases[i], stdphases[i], length, times)
for i in range(len(amps))]
#output.put(repeated_comp)
#listed_parts.append(repeated_comp)
import pickle
file_name2 = "/scratch/midway2/clairev/enso_sims/01_enso_" + str(named[16:21]) + str(random.randint(1,1000))
file_pickle = open(file_name2,'wb')
pickle.dump(repeated_comp,file_pickle)
file_pickle.close()
return repeated_comp
toplot = repeater_2(avg_amps, std_amps, avg_phase, std_phase, runlen, runtimes)
```
[record metadata | lin-assumption-2/enso_rep_test.ipynb | clairevalva/wavy-sims | MIT]
# Linear Gaussian filtering and smoothing
Provided are two examples of linear state-space models on which one can perform Bayesian filtering and smoothing in order to obtain
a posterior distribution over a latent state trajectory based on noisy observations.
In order to understand the theory behind these methods in detail we refer to [1] and [2].
We provide examples for two different types of state-space model:
1. [Linear, Discrete State-Space Model](#1.-Linear-Discrete-State-Space-Model:-Car-Tracking): Car Tracking
2. [Linear, Continuous-Discrete State-Space Model](#2.-Linear-Continuous-Discrete-State-Space-Model:-Ornstein-Uhlenbeck-Process): The Ornstein-Uhlenbeck Process
**References**:
> [1] Särkkä, Simo, and Solin, Arno. Applied Stochastic Differential Equations. Cambridge University Press, 2019.
>
> [2] Särkkä, Simo. Bayesian Filtering and Smoothing. Cambridge University Press, 2013.
```python
import numpy as np
import probnum as pn
from probnum import filtsmooth, randvars, statespace, randprocs
from probnum.problems import TimeSeriesRegressionProblem
```
```python
rng = np.random.default_rng(seed=123)
```
```python
# Make inline plots vector graphics instead of raster graphics
%matplotlib inline
from IPython.display import set_matplotlib_formats
set_matplotlib_formats("pdf", "svg")
# Plotting
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
plt.style.use("../../probnum.mplstyle")
```
## 1. **Linear Discrete** State-Space Model: Car Tracking
---
We begin showcasing the arguably most simple case in which we consider the following state-space model. Consider matrices $A \in \mathbb{R}^{d \times d}$ and $H \in \mathbb{R}^{m \times d}$ where $d$ is the state dimension and $m$ is the dimension of the measurements. Then we define the dynamics and the measurement model as follows:
For $k = 1, \dots, K$ and $x_0 \sim \mathcal{N}(\mu_0, \Sigma_0)$:
$$
\begin{align}
\boldsymbol{x}_k &\sim \mathcal{N}(\boldsymbol{A} \, \boldsymbol{x}_{k-1}, \boldsymbol{Q}) \\
\boldsymbol{y}_k &\sim \mathcal{N}(\boldsymbol{H} \, \boldsymbol{x}_k, \boldsymbol{R})
\end{align}
$$
This defines a dynamics model in which each state $\boldsymbol{x}_k$ in a **discrete** sequence of states arises from a linear projection of the previous state $\boldsymbol{x}_{k-1}$, corrupted by additive Gaussian noise with **process noise** covariance matrix $Q$.
Similarly, the measurements $\boldsymbol{y}_k$ are assumed to be linear projections of the latent state under additive Gaussian noise according to a **measurement noise** covariance $R$.
In the following example we consider projections and covariances that are constant over the state and measurement trajectories (linear time invariant, or **LTI**). Note that this can be generalized to a linear time-varying state-space model, as well. Then $A$ is a function $A: \mathbb{T} \rightarrow \mathbb{R}^{d \times d}$ and $H$ is a function $H: \mathbb{T} \rightarrow \mathbb{R}^{m \times d}$ where $\mathbb{T}$ is the "time dimension".
In other words, here, every relationship is linear and every distribution is a Gaussian distribution.
Under these simplifying assumptions it is possible to obtain a filtering posterior distribution over the state trajectory $(\boldsymbol{x}_k)_{k=1}^{K}$ by using a
**Kalman Filter**. The example is taken from Example 3.6 in [2].
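For reference, the Kalman filter alternates between a *prediction* step and an *update* step. For the LTI model above, with filtering mean $\boldsymbol{m}_k$ and covariance $\boldsymbol{P}_k$, these take the standard form (see e.g. [2]); `probnum` carries these computations out internally when we call the filter below:

$$
\begin{align}
\boldsymbol{m}_k^- &= \boldsymbol{A} \, \boldsymbol{m}_{k-1}, &
\boldsymbol{P}_k^- &= \boldsymbol{A} \, \boldsymbol{P}_{k-1} \boldsymbol{A}^\top + \boldsymbol{Q}, \\
\boldsymbol{K}_k &= \boldsymbol{P}_k^- \boldsymbol{H}^\top \left( \boldsymbol{H} \boldsymbol{P}_k^- \boldsymbol{H}^\top + \boldsymbol{R} \right)^{-1}, \\
\boldsymbol{m}_k &= \boldsymbol{m}_k^- + \boldsymbol{K}_k \left( \boldsymbol{y}_k - \boldsymbol{H} \boldsymbol{m}_k^- \right), &
\boldsymbol{P}_k &= \left( \boldsymbol{I} - \boldsymbol{K}_k \boldsymbol{H} \right) \boldsymbol{P}_k^-.
\end{align}
$$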
### Define State-Space Model
#### I. Discrete Dynamics Model: Linear, Time-Invariant, Gaussian Transitions
```python
state_dim = 4
observation_dim = 2
```
```python
delta_t = 0.2
# Define linear transition operator
dynamics_transition_matrix = np.eye(state_dim) + delta_t * np.diag(np.ones(2), 2)
# Define process noise (covariance) matrix
process_noise_matrix = (
np.diag(np.array([delta_t ** 3 / 3, delta_t ** 3 / 3, delta_t, delta_t]))
+ np.diag(np.array([delta_t ** 2 / 2, delta_t ** 2 / 2]), 2)
+ np.diag(np.array([delta_t ** 2 / 2, delta_t ** 2 / 2]), -2)
)
```
To create a discrete, LTI Gaussian dynamics model, `probnum` provides the `DiscreteLTIGaussian` class that takes
- `state_trans_mat` : the linear transition matrix (above: $A$)
- `shift_vec` : a force vector for _affine_ transformations of the state (here: zero)
- `proc_noise_cov_mat` : the covariance matrix for the Gaussian process noise
```python
# Create discrete, Linear Time-Invariant Gaussian dynamics model
dynamics_model = statespace.DiscreteLTIGaussian(
state_trans_mat=dynamics_transition_matrix,
shift_vec=np.zeros(state_dim),
proc_noise_cov_mat=process_noise_matrix,
)
```
#### II. Discrete Measurement Model: Linear, Time-Invariant, Gaussian Measurements
```python
measurement_marginal_variance = 0.5
measurement_matrix = np.eye(observation_dim, state_dim)
measurement_noise_matrix = measurement_marginal_variance * np.eye(observation_dim)
```
```python
measurement_model = statespace.DiscreteLTIGaussian(
state_trans_mat=measurement_matrix,
shift_vec=np.zeros(observation_dim),
proc_noise_cov_mat=measurement_noise_matrix,
)
```
#### III. Initial State Random Variable
```python
mu_0 = np.zeros(state_dim)
sigma_0 = 0.5 * measurement_marginal_variance * np.eye(state_dim)
initial_state_rv = randvars.Normal(mean=mu_0, cov=sigma_0)
```
```python
prior_process = randprocs.MarkovProcess(
transition=dynamics_model, initrv=initial_state_rv, initarg=0.0
)
```
### Generate Data for the State-Space Model
`statespace.generate_samples()` is used to sample both latent states and noisy observations from the specified state space model.
```python
time_grid = np.arange(0.0, 10.0, step=delta_t)
```
```python
latent_states, observations = statespace.generate_samples(
rng=rng,
dynmod=dynamics_model,
measmod=measurement_model,
initrv=initial_state_rv,
times=time_grid,
)
```
```python
regression_problem = TimeSeriesRegressionProblem(
observations=observations,
locations=time_grid,
measurement_models=[measurement_model] * len(time_grid),
)
```
### Kalman Filtering
#### I. Kalman Filter
```python
kalman_filter = filtsmooth.Kalman(prior_process)
```
#### II. Perform Kalman Filtering + Rauch-Tung-Striebel Smoothing
```python
state_posterior, _ = kalman_filter.filtsmooth(regression_problem)
```
The method `filtsmooth` returns a `KalmanPosterior` object which provides convenience functions for e.g. sampling and interpolation.
We can also extract the just computed posterior smoothing state variables by querying the `.state_rvs` property.
This yields a list of Gaussian Random Variables from which we can extract the statistics in order to visualize them.
```python
grid = state_posterior.locations
posterior_state_rvs = (
state_posterior.states
) # List of <num_time_points> Normal Random Variables
posterior_state_means = posterior_state_rvs.mean # Shape: (num_time_points, state_dim)
posterior_state_covs = (
posterior_state_rvs.cov
) # Shape: (num_time_points, state_dim, state_dim)
```
### Visualize Results
```python
state_fig = plt.figure()
state_fig_gs = gridspec.GridSpec(ncols=2, nrows=2, figure=state_fig)
ax_00 = state_fig.add_subplot(state_fig_gs[0, 0])
ax_01 = state_fig.add_subplot(state_fig_gs[0, 1])
ax_10 = state_fig.add_subplot(state_fig_gs[1, 0])
ax_11 = state_fig.add_subplot(state_fig_gs[1, 1])
# Plot means
mu_x_1, mu_x_2, mu_x_3, mu_x_4 = [posterior_state_means[:, i] for i in range(state_dim)]
ax_00.plot(grid, mu_x_1, label="posterior mean")
ax_01.plot(grid, mu_x_2)
ax_10.plot(grid, mu_x_3)
ax_11.plot(grid, mu_x_4)
# Plot marginal standard deviations
std_x_1, std_x_2, std_x_3, std_x_4 = [
np.sqrt(posterior_state_covs[:, i, i]) for i in range(state_dim)
]
ax_00.fill_between(
grid,
mu_x_1 - 1.96 * std_x_1,
mu_x_1 + 1.96 * std_x_1,
alpha=0.2,
label="1.96 marginal stddev",
)
ax_01.fill_between(grid, mu_x_2 - 1.96 * std_x_2, mu_x_2 + 1.96 * std_x_2, alpha=0.2)
ax_10.fill_between(grid, mu_x_3 - 1.96 * std_x_3, mu_x_3 + 1.96 * std_x_3, alpha=0.2)
ax_11.fill_between(grid, mu_x_4 - 1.96 * std_x_4, mu_x_4 + 1.96 * std_x_4, alpha=0.2)
# Plot groundtruth
obs_x_1, obs_x_2 = [observations[:, i] for i in range(observation_dim)]
ax_00.scatter(time_grid, obs_x_1, marker=".", label="measurements")
ax_01.scatter(time_grid, obs_x_2, marker=".")
# Add labels etc.
ax_00.set_xlabel("t")
ax_01.set_xlabel("t")
ax_10.set_xlabel("t")
ax_11.set_xlabel("t")
ax_00.set_title(r"$x_1$")
ax_01.set_title(r"$x_2$")
ax_10.set_title(r"$x_3$")
ax_11.set_title(r"$x_4$")
handles, labels = ax_00.get_legend_handles_labels()
state_fig.legend(handles, labels, loc="center left", bbox_to_anchor=(1, 0.5))
state_fig.tight_layout()
```
## 2. **Linear Continuous-Discrete** State-Space Model: Ornstein-Uhlenbeck Process
---
Now let's have a look at **continuous** dynamics. We assume that there is a continuous process that defines the dynamics of our latent space, from which we collect discrete linear-Gaussian measurements (as above). Only the dynamics model becomes continuous. In particular, we formulate the dynamics as a stochastic process in terms of a linear time-invariant stochastic differential equation (LTISDE). We refer to [1] for more details.
Consider matrices $\boldsymbol{F} \in \mathbb{R}^{d \times d}$, $\boldsymbol{L} \in \mathbb{R}^{s \times d}$ and $H \in \mathbb{R}^{m \times d}$ where $d$ is the state dimension and $m$ is the dimension of the measurements.
We define the following **continuous-discrete** state-space model:
Let $x(t_0) \sim \mathcal{N}(\mu_0, \Sigma_0)$.
$$
\begin{align}
d\boldsymbol{x} &= \boldsymbol{F} \, \boldsymbol{x} \, dt + \boldsymbol{L} \, d \boldsymbol{\omega} \\
\boldsymbol{y}_k &\sim \mathcal{N}(\boldsymbol{H} \, \boldsymbol{x}(t_k), \boldsymbol{R}), \qquad k = 1, \dots, K
\end{align}
$$
where $\boldsymbol{\omega} \in \mathbb{R}^s$ denotes a vector of driving forces (often Brownian Motion).
Note that this can be generalized to a linear time-varying state-space model, as well. Then $\boldsymbol{F}$ is a function $\mathbb{T} \rightarrow \mathbb{R}^{d \times d}$,
$\boldsymbol{L}$ is a function $\mathbb{T} \rightarrow \mathbb{R}^{s \times d}$, and $H$ is a function $\mathbb{T} \rightarrow \mathbb{R}^{m \times d}$ where $\mathbb{T}$ is the "time dimension". In the following example, however, we consider an LTI SDE, namely the Ornstein-Uhlenbeck process, from which we observe discrete linear Gaussian measurements.
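As a side note (a standard result for the OU process, not specific to `probnum`): for the scalar SDE $dx = -\theta x \, dt + \sigma \, d\omega$ used below, the transition density between two time points is exactly Gaussian,

$$
x(t + \Delta t) \mid x(t) \sim \mathcal{N}\!\left( e^{-\theta \Delta t} \, x(t), \; \frac{\sigma^2}{2\theta} \left( 1 - e^{-2\theta \Delta t} \right) \right),
$$

which is why Kalman filtering applies here without any approximation of the dynamics.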
### Define State-Space Model
#### I. Continuous Dynamics Model: Linear, Time-Invariant Stochastic Differential Equation (LTISDE)
```python
state_dim = 1
observation_dim = 1
```
```python
delta_t = 0.2
# Define Linear, time-invariant Stochastic Differential Equation that models
# the (scalar) Ornstein-Uhlenbeck Process
drift_constant = 0.21
dispersion_constant = np.sqrt(0.5)
drift = -drift_constant * np.eye(state_dim)
force = np.zeros(state_dim)
dispersion = dispersion_constant * np.eye(state_dim)
```
The _continuous_ counterpart to the discrete LTI Gaussian model from above is provided via the `LTISDE` class.
It is initialized by the state space components
- `driftmat` : the drift matrix $\boldsymbol{F}$
- `forcevec` : a force vector that is added to the state (note that this is **not** $\boldsymbol{\omega}$.) Here: zero.
- `dispmat` : the dispersion matrix $\boldsymbol{L}$
```python
# Create dynamics model
dynamics_model = statespace.LTISDE(
driftmat=drift,
forcevec=force,
dispmat=dispersion,
)
```
#### II. Discrete Measurement Model: Linear, Time-Invariant Gaussian Measurements
```python
measurement_marginal_variance = 0.1
measurement_matrix = np.eye(observation_dim, state_dim)
measurement_noise_matrix = measurement_marginal_variance * np.eye(observation_dim)
```
As above, the measurement model is discrete, LTI Gaussian. Only the dynamics are continuous (i.e. continuous-discrete).
```python
measurement_model = statespace.DiscreteLTIGaussian(
state_trans_mat=measurement_matrix,
shift_vec=np.zeros(observation_dim),
proc_noise_cov_mat=measurement_noise_matrix,
)
```
#### III. Initial State Random Variable
```python
mu_0 = 10.0 * np.ones(state_dim)
sigma_0 = np.eye(state_dim)
initial_state_rv = randvars.Normal(mean=mu_0, cov=sigma_0)
```
```python
prior_process = randprocs.MarkovProcess(
transition=dynamics_model, initrv=initial_state_rv, initarg=0.0
)
```
### Generate Data for the State-Space Model
`statespace.generate_samples()` is used to sample both latent states and noisy observations from the specified state space model.
```python
time_grid = np.arange(0.0, 10.0, step=delta_t)
```
```python
latent_states, observations = statespace.generate_samples(
rng=rng,
dynmod=dynamics_model,
measmod=measurement_model,
initrv=initial_state_rv,
times=time_grid,
)
```
```python
regression_problem = TimeSeriesRegressionProblem(
observations=observations,
locations=time_grid,
measurement_models=[measurement_model] * len(time_grid),
)
```
### Kalman Filtering
In fact, since we still consider a **linear** model, we can apply Kalman Filtering in this case again.
According to Section 10 in [1], the moments of the filtering posterior in the continuous-discrete case are solutions to linear differential equations, which `probnum` solves for us when invoking the `<Kalman_object>.filtsmooth(...)` method.
#### I. Kalman Filter
```python
kalman_filter = filtsmooth.Kalman(prior_process)
```
#### II. Perform Kalman Filtering + Rauch-Tung-Striebel Smoothing
```python
state_posterior, _ = kalman_filter.filtsmooth(regression_problem)
```
The method `filtsmooth` returns a `KalmanPosterior` object which provides convenience functions for e.g. sampling and prediction.
We can also extract the just computed posterior smoothing state variables by querying the `.state_rvs` property.
This yields a list of Gaussian Random Variables from which we can extract the statistics in order to visualize them.
```python
grid = np.linspace(0, 11, 500)
posterior_state_rvs = state_posterior(
grid
) # List of <num_time_points> Normal Random Variables
posterior_state_means = posterior_state_rvs.mean.squeeze() # Shape: (num_time_points, )
posterior_state_covs = posterior_state_rvs.cov # Shape: (num_time_points, )
samples = state_posterior.sample(rng=rng, size=3, t=grid)
```
### Visualize Results
```python
state_fig = plt.figure()
ax = state_fig.add_subplot()
# Plot means
ax.plot(grid, posterior_state_means, label="posterior mean")
# Plot samples
for smp in samples:
ax.plot(
grid,
smp[:, 0],
color="gray",
alpha=0.75,
linewidth=1,
linestyle="dashed",
label="sample",
)
# Plot marginal standard deviations
std_x = np.sqrt(np.abs(posterior_state_covs)).squeeze()
ax.fill_between(
grid,
posterior_state_means - 1.96 * std_x,
posterior_state_means + 1.96 * std_x,
alpha=0.2,
label="1.96 marginal stddev",
)
ax.scatter(time_grid, observations, marker=".", label="measurements")
# Add labels etc.
ax.set_xlabel("t")
ax.set_title(r"$x$")
# These two lines just remove duplicate labels (caused by the samples) from the legend
handles, labels = ax.get_legend_handles_labels()
by_label = dict(zip(labels, handles))
ax.legend(
by_label.values(), by_label.keys(), loc="center left", bbox_to_anchor=(1, 0.5)
)
state_fig.tight_layout()
```
```python
```
[record metadata | docs/source/tutorials/filtsmooth/linear_gaussian_filtering_smoothing.ipynb | christopheroates/probnum | MIT]
```python
# import stuff from sympy
from sympy import *
import random
import numpy as np
# Visualization
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
# x, y, z, t = symbols('x y z t')
# k, m, n = symbols('k m n', integer=True)
# f, g, h = symbols('f g h', cls=Function)
```
```python
# THIS IS WRONG - SOMEHOW !!! DO NOT USE - gives transpose?
def rs_element(X, r, s, h):
"""
Function that computes the r,s element of the matrix L for triangular kernel
using the Nadaraya-Watson estimator, where \hat{Y} = LY
X: input data predictors
r: row r
s: column s
h: bandwidth
"""
n = len(X)
numerator = h - abs(X[r] - X[s])
sum = 0
for i in range(n):
sum += abs(X[r] - X[i])
denominator = n*h - sum
print(denominator)
return(numerator/denominator)
```
```python
def triangular_kernel(x, xi, h):
"""
Function for triangular kernel
"""
if np.abs(x-xi)<h:
return 1 - np.abs(x-xi)/h
else:
return 0
def triangular_weights(X, i, h):
"""
Function that computes the weights for the triangular kernel for a given X list.
This is the same as the column i of projection matrix L.
w_i(x) = K[(x-xi)/h]/sum{K[(x-xj)/h]}
"""
result = []
for x in X:
numerator = triangular_kernel(x, X[i], h)
denominator = 0
for xj in X:
denominator += triangular_kernel(x, xj, h)
result.append(numerator/denominator)
return(np.array(result))
def denominator_list(X, h):
"""
Function that computes the common denominators in the matrix L for triangular kernel
using the Nadaraya-Watson estimator, where \hat{Y} = LY
X: input data predictors
h: bandwidth
"""
n = len(X)
result = []
for xj in X:
result.append(n*h - np.sum(np.abs(X-xj)))
return(np.array(result))
#denominator = n*h - sum
#print(denominator)
#return(numerator/denominator)
def build_L_matrix(X, h):
"""
Function that builds the smoothing L matrix for triangular kernel using
the Nadaraya-Watson estimator, where \hat{Y} = LY
X: input data predictors
h: bandwidth
"""
n = len(X)
L = np.zeros((n,n))
den_list = denominator_list(X, h)
for s in range(n):
L[:,s] = triangular_weights(X, s, h)
return(L)
```
```python
X = np.array([3, 4.3, 6, 7, 9.1, 10.3])
h = 2
L = build_L_matrix(X, h)
print(L)
```
[[0.74074074 0.25925926 0. 0. 0. 0. ]
[0.23333333 0.66666667 0.1 0. 0. 0. ]
[0. 0.09090909 0.60606061 0.3030303 0. 0. ]
[0. 0. 0.33333333 0.66666667 0. 0. ]
[0. 0. 0. 0. 0.71428571 0.28571429]
[0. 0. 0. 0. 0.28571429 0.71428571]]
```python
# "pretty" print using sympy
Matrix(np.around(L, 3))
latex(Matrix(np.around(L, 3))) # for LaTeX output
```
'\\left[\\begin{matrix}0.741 & 0.259 & 0.0 & 0.0 & 0.0 & 0.0\\\\0.233 & 0.667 & 0.1 & 0.0 & 0.0 & 0.0\\\\0.0 & 0.091 & 0.606 & 0.303 & 0.0 & 0.0\\\\0.0 & 0.0 & 0.333 & 0.667 & 0.0 & 0.0\\\\0.0 & 0.0 & 0.0 & 0.0 & 0.714 & 0.286\\\\0.0 & 0.0 & 0.0 & 0.0 & 0.286 & 0.714\\end{matrix}\\right]'
## 1.2 Computing prediction
```python
Y = np.array([0, 1, 2, 2, 4, 3])
```
```python
Y_hat = L@Y
```
```python
Y_hat
```
array([0.25925926, 0.86666667, 1.90909091, 2. , 3.71428571,
3.28571429])
```python
# "pretty" print using sympy
Matrix(np.around(Y_hat, 3))
latex(Matrix(np.around(Y_hat, 3))) # for LaTeX output
```
'\\left[\\begin{matrix}0.259\\\\0.867\\\\1.909\\\\2.0\\\\3.714\\\\3.286\\end{matrix}\\right]'
```python
# Plotting prediction
plt.figure(figsize=(10,8))
plt.plot(X, Y, '*', label="Observation")
plt.plot(X, Y_hat, 'x', label="Prediction")
plt.xlabel("X")
plt.ylabel("Y")
plt.legend()
```
## 1. 3 Computing MSE by LOOCV and GCV
The idea here is to compute the MSE by LOOCV in a "manual" way and then extend it using the Generalized Cross Validation method/formula.
$$
{\displaystyle \operatorname {MSE} ={\frac {1}{n}}\sum_{i=1}^{n}(Y_{i}-{\hat {Y_{i}}})^{2}.}
$$
```python
n = len(X) # size of data
# choosing a random index to leave out
leave_i = random.randint(0, n-1) # index to leave out
Y_loocv = np.delete(Y, leave_i)
# Computing model with new data
X_loocv = np.delete(X, leave_i)
n_loocv = len(X)-1
h = 2
L_loocv = build_L_matrix(X_loocv, h)
# prediction
Y_hat_loocv = L_loocv@Y_loocv
# performing MSE computation
print(np.sum((Y_loocv-Y_hat_loocv)**2))
MSE_loocv = np.sum((Y_loocv-Y_hat_loocv)**2)/n_loocv
```
0.093257604099355
```python
MSE_loocv
```
0.018651520819871002
```python
L_loocv
```
array([[0.74074074, 0.25925926, 0. , 0. , 0. ],
[0.23333333, 0.66666667, 0.1 , 0. , 0. ],
[0. , 0.09090909, 0.60606061, 0.3030303 , 0. ],
[0. , 0. , 0.33333333, 0.66666667, 0. ],
[0. , 0. , 0. , 0. , 1. ]])
```python
leave_i
```
4
```python
np.sqrt(MSE_loocv)
```
0.13657057084112595
BUT, there's another _shortcut_ to compute the MSE for the LOOCV case, as
$$
{\displaystyle \operatorname {MSE} ={\frac {1}{n}}\sum_{i=1}^{n}\left(\frac{Y_{i}-{\hat {Y_{i}}}}{1-h_{ii}}\right)^{2}.}
$$
Where $h_{ii}$ is the **leverage**, which is the diagonal element of the projection matrix $L$.
Let's see if this indeed gives the same result.
```python
# performing MSE computation
n = len(X)
MSE_loocv_2 = np.sum(((Y-Y_hat)/(1-L[leave_i, leave_i]))**2)/n
```
```python
print(MSE_loocv_2)
```
0.5237342750361837
```python
MSE_loocv_2 - MSE_loocv
```
0.5050827542163128
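Note that the shortcut as coded above applies the leverage of a single, randomly chosen observation to every residual, which explains the discrepancy. A minimal sketch of the standard version, which uses the full diagonal of $L$ (only `L`, `Y` and `Y_hat` from the cells above are assumed):

```python
# LOOCV shortcut using each observation's own leverage h_ii = L[i, i]
leverages = np.diag(L)
MSE_loocv_diag = np.mean(((Y - Y_hat) / (1 - leverages))**2)
print(MSE_loocv_diag)
```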
### GCV
The GCV is computed as
$$
{\displaystyle \operatorname {MSE_{GCV}} ={\frac {1}{n}}\sum_{i=1}^{n}\frac{\left(Y_{i}-{\hat {Y_{i}}}\right)^{2}}{\left(1-v/n\right)^{2}}.}
$$
Where $v=\text{Tr}(L)$
```python
v = L.trace()
MSE_gcv = (np.sum((Y-Y_hat)**2)/(1-v/n)**2)/n
```
```python
MSE_gcv
```
0.4302881332603392
```python
MSE_loocv_2 - MSE_gcv
```
0.2491508123441171
## 1.4
Trying to see $h$ dependance, is $h=2.8$ better?
```python
hlist = np.arange(1.5,3,0.01)
MSE_gcv_list = []
MSE_loocv_list = []
for hi in hlist:
L = build_L_matrix(X, hi)
Y_hat = L@Y
v = L.trace()
MSE_gcv_list.append((np.sum((Y-Y_hat)**2)/(1-v/n)**2)/n)
leave_i = random.randint(0, n-1) # index to leave out
MSE_loocv_list.append(np.sum(((Y-Y_hat)/(1-L[leave_i, leave_i]))**2)/n)
# Plotting
plt.figure(figsize=(10,8))
plt.plot(hlist, MSE_gcv_list, label="GCV")
plt.plot(hlist, MSE_loocv_list, '-o', label="Leverage")
plt.xlabel("h")
plt.ylabel("MSE")
plt.legend()
```
According to this, and considering that the leverage-based estimate is very noisy, GCV says that we get a lower error for $h=2.0$ than for $h=2.8$. The leverage formula does give a better result for $h=2.8$, but it is highly variable (it depends on which point is randomly left out), so we do not trust it.
```python
```
[record metadata | Rcode/homework2/problema1.ipynb | ijpulidos/statlearn | MIT]
# Variational Principle using Symbolic Mathematics in Python
## 1. Introduction
The variational principle tells us that we can use a trial wavefunction to solve the Schrodinger equation using the following theorem:
$${{\int {{\Psi ^*}\hat H{\rm{ }}\Psi } d\tau } \over {\int {{\Psi ^*}\Psi } d\tau }} \ge {E_0}$$
We will use Sympy to solve the particle in a box problem by guessing a trial wavefunction using variational principle
```python
import sympy as sym
```
This exercise is a bit more self-guided than the other notebooks we have done. One of the most useful things you can do is **open last week's notebook to remember the commands in sympy**. Also, remember that google is your friend:
1. [Sympy tutorial](https://docs.sympy.org/latest/tutorial/index.html)
2. [Stack Overflow](https://stackoverflow.com/search?q=sympy+)
3. [Stack Exchange](https://stackexchange.com/)
## 2. Particle in a box
The wave function that we pick for a particle in a box needs to have the following properties
1. single valued
1. normalizable
1. function and its first derivative are continuous
1. boundary condition that the wave function goes to zero at the ends of the box
Particle in a box: a is a classical particle, red is real part, blue is imaginary part.
This particle only expericnes kinetic energy between the box, so the Hamiltonian for this system is
$$\hat H = {{ - {\hbar ^2}} \over {2m}}{{{d^2}} \over {d{x^2}}} + \left\{ {\matrix{{V(x) = 0} & {0 < x < a} \cr {V(x) = \infty } & {x < 0\text{ }{\rm{ or}}\;x > a} \cr } } \right.$$
For our purposes, that means we can consider the Hamiltonian to be
$$\hat H = {{ - {\hbar ^2}} \over {2m}}{{{d^2}} \over {d{x^2}}}$$
as long as we keep the limits of integration to be $(0,a)$
### 2.1 Trial Wave function
Although the particle in box has a well known solution
[https://en.wikipedia.org/wiki/Particle_in_a_box](https://en.wikipedia.org/wiki/Particle_in_a_box)
(or check your favorite pchem book)
We are going to guess a trial wave function:
$$\Phi (x) = \left( \frac{x}{a} - \frac{x^3}{a^3} \right) + \alpha \left( \frac{x^5}{a^5} - \frac{1}{2}\left( \frac{x^7}{a^7} + \frac{x^9}{a^9} \right) \right)$$
### 2.2 Exercise: Variational Theorem
We are going to follow the following plan:
1. Solve for the energy of the trial wave function above
$${E_{trial}} = {{\int\limits_0^a {\Phi (x){{ - {\hbar ^2}} \over {2m}}{{{d^2}} \over {d{x^2}}}\Phi (x)dx} } \over {\int\limits_0^a {\Phi {{(x)}^2}dx} }}$$
Your answer will be a function of $ m,a,\text{and } \alpha$ We will use $\alpha$ as the parameter we vary to minimize the energy and make a new trial wave function.
2. Minimize the trial energy
We will use a first derivative of the trial energy $${d \over {d\alpha }}{E_{trial}}(\alpha )$$ to find the value of $\alpha$ that gives you the lowest energy
3. Plot your new wavefunction compared to the ground state particle in a box: $${\psi _{true}}(x) = {\left( {{2 \over a}} \right)^{1/2}}\sin {{n\pi x} \over a}$$ Plot as a function of $x/a$ from $0$ to $1$. Assuming this has $m=m_e$, and $a=a_0$ use atomic (theorist) units to plot the function.
4. Compare your trial energy to the actual energy (using atomic units)
$${E_{true}}(n = 1) = {{{\hbar ^2}{\pi ^2}} \over {2m{a^2}}}$$
```python
# Your code here
# 1)
import sympy as sym
from sympy import init_printing
from sympy import *
from sympy.physics.units import gravitational_constant, hbar
x, a, m = symbols("x, a, m")
alpha, hbar = symbols("alpha, hbar")
Phi = symbols('Phi')
Phi = ((x/a)-(x**3/a**3))+alpha*((x**5/a**5)-((1/2)*((x**7/a**7)+(x**9/a**9))))
expr_num = Phi*(-hbar**2/(2*m))*sym.diff(sym.diff(Phi, x), x)
expr_den = Phi**2
expr_num_int = sym.integrate(expr_num, (x, 0, a))
expr_den_int = sym.integrate(expr_den, (x, 0, a))
expr_Etrial = expr_num_int/expr_den_int
expr_Etrial
# 2)
min_Etrial = sym.diff(expr_Etrial, alpha)
min_Etrial
# 3
```
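A possible continuation of steps 2-4 (a sketch, not a definitive solution): differentiate the trial energy with respect to $\alpha$, solve for the stationary points, and compare the lowest resulting energy with $E_{true}$ in atomic units ($\hbar = m = a = 1$), for which $E_{true} = \pi^2/2$.

```python
# Sketch: minimize E_trial over alpha (assumes expr_Etrial from the cell above).
dE_dalpha = sym.diff(expr_Etrial, alpha)
alpha_candidates = sym.solve(sym.Eq(dE_dalpha, 0), alpha)

# Evaluate the trial energy at each stationary point in atomic units
# (hbar = m = a = 1) and keep the lowest real value.
energies = [expr_Etrial.subs({alpha: val, hbar: 1, m: 1, a: 1}) for val in alpha_candidates]
real_energies = [sym.N(E_val) for E_val in energies if sym.N(E_val).is_real]
E_min = min(real_energies)

E_true = sym.pi**2 / 2
print(E_min, sym.N(E_true))  # variational theorem: E_min >= E_true
```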
Your descriptions/explanations here
### 2.3 Exercise: New trial wavefunction
Determine the minimum energy of the particle in a box using a new trial wavefunction $$x^\alpha(x-a)^\alpha$$
1. Find the minimum energy, $E_{trial}$
2. Plot the new trial wavefunction and compare it to the true solution and the wavefunction you found above
3. Compare you new energy to the trial energy you found above
4. Which wavefunction is better? How do you know?
```python
# Your code here
```
Your descriptions/explanations here
### 2.4 Exercise: Design your own wavefunction!
**Now you get to make your own wavefunction!**
The only guidance I would give you is that it make sense mathematically and that it include $\alpha$ so that you can minimize the energy.
Remember that $a$ and $x$ are both length units, and that trigonometric, logarithmic, and exponential functions are all unitless
Using your new wavefunction:
1. Find the minimum energy, $E_{trial}$
2. Plot the new trial wavefunction and compare it to the true solution and the wavefunction you found above
3. Compare you new energy to the trial energy you found above
4. Which wavefunction is better? How do you know?
```python
# Your code here
import sympy as sym
```
Your descriptions/explanations here
# Reading Homework
Read the following sections in Kramer
- 4.2.3 Born-Oppenheimer approximation
- 4.3.2 Secular equation
- All of 4.5
For each subsection
- write down the subchapter name
- what was the most important idea
- draw an idea digram of the main idea
**Make sure to upload this to the assignment repository**
Example idea diagram:
```python
import sympy as sym
sym.init_printing()
tau = sym.symbols("tau")
sa_conj, sb_conj, ca_conj, cb_conj = sym.symbols("1s_a^*, 1s_b^*, c_a^*, c_b^*")
sa, ca, sb, cb = sym.symbols("1s_a, c_a, 1s_b, c_b")
Hel = sym.symbols("H_el")
Haa, Hbb, Hab, Hba = sym.symbols("H_aa, H_bb, H_ab, H_ba")
Haa = sym.integrate(sa_conj*Hel*sa, tau)
Hbb = sym.integrate(sb_conj*Hel*sb, tau)
Hab = sym.integrate(sa_conj*Hel*sb, tau)
Hba = sym.integrate(sb_conj*Hel*sa, tau)
```
True
```python
import sympy as sym
from sympy.solvers import solve
E = sym.symbols('E')
tau = sym.symbols("tau")
sa_conj, ca_conj = sym.symbols("1s_a^*, c_a^*")
sa, ca, sb, cb = sym.symbols("1s_a, c_a, 1s_b, c_b")
Haa, Hbb, Hab, Hba, Sab, Sba = sym.symbols('H_aa, H_bb, H_ab, H_ba, S_ab, S_ba')
expr_E = solve(Haa*Hbb-Haa*E-Hbb*E+E**2-Hab*Hba+Hab*E*Sba+Hba*E*Sab-E*Sab*E*Sba, E)
En = expr_E
En
```
```python
```
```python
```
[record metadata | variational-principle.ipynb | sju-chem264-2019/10-3-19-lecture-justyn-cespedes | MIT]
<a href="https://colab.research.google.com/github/martin-fabbri/colab-notebooks/blob/master/deeplearning.ai/nlp/c2_w4_assignment.ipynb" target="_parent"></a>
# Assignment 4: Word Embeddings
Welcome to the fourth (and last) programming assignment of Course 2!
In this assignment, you will practice how to compute word embeddings and use them for sentiment analysis.
- To implement sentiment analysis, you can go beyond counting the number of positive words and negative words.
- You can find a way to represent each word numerically, by a vector.
- The vector could then represent syntactic (i.e. parts of speech) and semantic (i.e. meaning) structures.
In this assignment, you will explore a classic way of generating word embeddings or representations.
- You will implement a famous model called the continuous bag of words (CBOW) model.
By completing this assignment you will:
- Train word vectors from scratch.
- Learn how to create batches of data.
- Understand how backpropagation works.
- Plot and visualize your learned word vectors.
Knowing how to train these models will give you a better understanding of word vectors, which are building blocks to many applications in natural language processing.
## Outline
- [1 The Continuous bag of words model](#1)
- [2 Training the Model](#2)
- [2.0 Initialize the model](#2)
- [Exercise 01](#ex-01)
- [2.1 Softmax Function](#2.1)
- [Exercise 02](#ex-02)
- [2.2 Forward Propagation](#2.2)
- [Exercise 03](#ex-03)
- [2.3 Cost Function](#2.3)
- [2.4 Backproagation](#2.4)
- [Exercise 04](#ex-04)
- [2.5 Gradient Descent](#2.5)
- [Exercise 05](#ex-05)
- [3 Visualizing the word vectors](#3)
<a name='1'></a>
# 1. The Continuous bag of words model
Let's take a look at the following sentence:
>**'I am happy because I am learning'**.
- In continuous bag of words (CBOW) modeling, we try to predict the center word given a few context words (the words around the center word).
- For example, if you were to choose a context half-size of say $C = 2$, then you would try to predict the word **happy** given the context that includes 2 words before and 2 words after the center word:
> $C$ words before: [I, am]
> $C$ words after: [because, I]
- In other words:
$$context = [I,am, because, I]$$
$$target = happy$$
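For instance, here is a minimal sketch of how such (context, center) pairs could be enumerated with a sliding window; the helper below is illustrative only and is not part of the utilities provided with the assignment:

```python
def context_center_pairs(tokens, C=2):
    """Yield (context words, center word) pairs for a context half-size C."""
    for i in range(C, len(tokens) - C):
        context = tokens[i - C:i] + tokens[i + 1:i + C + 1]
        yield context, tokens[i]

sentence = ['i', 'am', 'happy', 'because', 'i', 'am', 'learning']
for context, center in context_center_pairs(sentence, C=2):
    print(context, '->', center)
# first pair: ['i', 'am', 'because', 'i'] -> happy
```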
The structure of your model will look like this:
Where $\bar x$ is the average of all the one hot vectors of the context words.
Once you have encoded all the context words, you can use $\bar x$ as the input to your model.
The architecture you will be implementing is as follows:
\begin{align}
h &= W_1 \ X + b_1 \tag{1} \\
a &= ReLU(h) \tag{2} \\
z &= W_2 \ a + b_2 \tag{3} \\
\hat y &= softmax(z) \tag{4} \\
\end{align}
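To make the four equations concrete before the exercises, here is a small, self-contained numpy sketch of the forward pass. The dimensions and variable names are made up for illustration and are not the assignment's API:

```python
import numpy as np

V, N = 5, 3                                 # toy vocabulary size and embedding size
rng = np.random.default_rng(0)

W1, b1 = rng.normal(size=(N, V)), np.zeros((N, 1))   # parameters of eq. (1)
W2, b2 = rng.normal(size=(V, N)), np.zeros((V, 1))   # parameters of eq. (3)

x_bar = np.full((V, 1), 1 / V)              # stand-in for the averaged one-hot context vector

h = W1 @ x_bar + b1                         # eq. (1)
a = np.maximum(0, h)                        # eq. (2): ReLU
z = W2 @ a + b2                             # eq. (3)
y_hat = np.exp(z) / np.sum(np.exp(z))       # eq. (4): softmax over the vocabulary

print(y_hat.sum())                          # probabilities sum to 1
```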
```
%%capture
!wget https://raw.githubusercontent.com/martin-fabbri/colab-notebooks/master/deeplearning.ai/nlp/datasets/shakespeare.txt
```
```
from collections import Counter, defaultdict
import nltk
import re
import numpy as np
from scipy import linalg
from nltk.tokenize import word_tokenize
```
```
#@markdown Utility functions
#@markdown - sigmoid(z)
#@markdown - get_idx(words, word2Ind)
#@markdown - pack_idx_with_frequency(context_words, word2Ind)
#@markdown - get_vectors(data, word2Ind, V, C)
#@markdown - get_batches(data, word2Ind, V, C, batch_size)
#@markdown - compute_pca(data, n_components=2)
#@markdown - get_dict(data)
def sigmoid(z):
# sigmoid function
return 1.0/(1.0+np.exp(-z))
def get_idx(words, word2Ind):
idx = []
for word in words:
idx = idx + [word2Ind[word]]
return idx
def pack_idx_with_frequency(context_words, word2Ind):
freq_dict = defaultdict(int)
for word in context_words:
freq_dict[word] += 1
idxs = get_idx(context_words, word2Ind)
packed = []
for i in range(len(idxs)):
idx = idxs[i]
freq = freq_dict[context_words[i]]
packed.append((idx, freq))
return packed
def get_vectors(data, word2Ind, V, C):
i = C
while True:
y = np.zeros(V)
x = np.zeros(V)
center_word = data[i]
y[word2Ind[center_word]] = 1
context_words = data[(i - C):i] + data[(i+1):(i+C+1)]
num_ctx_words = len(context_words)
for idx, freq in pack_idx_with_frequency(context_words, word2Ind):
x[idx] = freq/num_ctx_words
yield x, y
i += 1
if i >= len(data):
print('i is being set to 0')
i = 0
def get_batches(data, word2Ind, V, C, batch_size):
    batch_x = []
    batch_y = []
    for x, y in get_vectors(data, word2Ind, V, C):
        batch_x.append(x)
        batch_y.append(y)
        if len(batch_x) == batch_size:
            # the batch is full: emit it with examples as columns, then start a new one
            yield np.array(batch_x).T, np.array(batch_y).T
            batch_x = []
            batch_y = []
def compute_pca(data, n_components=2):
"""
Input:
data: of dimension (m,n) where each row corresponds to a word vector
n_components: Number of components you want to keep.
Output:
X_reduced: data transformed in 2 dims/columns + regenerated original data
pass in: data as 2D NumPy array
"""
m, n = data.shape
### START CODE HERE ###
# mean center the data
data -= data.mean(axis=0)
# calculate the covariance matrix
R = np.cov(data, rowvar=False)
# calculate eigenvectors & eigenvalues of the covariance matrix
# use 'eigh' rather than 'eig' since R is symmetric,
# the performance gain is substantial
evals, evecs = linalg.eigh(R)
# sort eigenvalue in decreasing order
# this returns the corresponding indices of evals and evecs
idx = np.argsort(evals)[::-1]
evecs = evecs[:, idx]
# sort eigenvectors according to same index
evals = evals[idx]
# select the first n eigenvectors (n is desired dimension
# of rescaled data array, or dims_rescaled_data)
evecs = evecs[:, :n_components]
### END CODE HERE ###
return np.dot(evecs.T, data.T).T
def get_dict(data):
"""
Input:
K: the number of negative samples
data: the data you want to pull from
indices: a list of word indices
Output:
word_dict: a dictionary with the weighted probabilities of each word
word2Ind: returns dictionary mapping the word to its index
Ind2Word: returns dictionary mapping the index to its word
"""
#
# words = nltk.word_tokenize(data)
words = sorted(list(set(data)))
n = len(words)
idx = 0
# return these correctly
word2Ind = {}
Ind2word = {}
for k in words:
word2Ind[k] = idx
Ind2word[idx] = k
idx += 1
return word2Ind, Ind2word
```
```
# Download sentence tokenizer
nltk.data.path.append('.')
nltk.download('punkt')
```
[nltk_data] Downloading package punkt to /root/nltk_data...
[nltk_data] Package punkt is already up-to-date!
True
```
with open("shakespeare.txt") as f:
data = f.read() # Read in the data
data = re.sub(r"[,!?;-]", ".", data)  # Punctuation is replaced by "."
data = nltk.word_tokenize(data) # Tokenize string to words
data = [
ch.lower() for ch in data if ch.isalpha() or ch == "."
] # Lower case and drop non-alphabetical tokens
print("Number of tokens:", len(data), "\n", data[:15])
```
Number of tokens: 60933
['o', 'for', 'a', 'muse', 'of', 'fire', '.', 'that', 'would', 'ascend', 'the', 'brightest', 'heaven', 'of', 'invention']
```
# Compute the frequency distribution of the words in the dataset (vocabulary)
fdist = nltk.FreqDist(word for word in data)
print("Size of vocabulary: ", len(fdist))
print(
"Most frequent tokens: ", fdist.most_common(20)
) # print the 20 most frequent words and their freq.
```
Size of vocabulary: 5772
Most frequent tokens: [('.', 9630), ('the', 1521), ('and', 1394), ('i', 1252), ('to', 1159), ('of', 1093), ('my', 857), ('that', 781), ('in', 770), ('you', 748), ('a', 742), ('is', 630), ('not', 559), ('for', 467), ('it', 460), ('with', 441), ('his', 434), ('but', 417), ('me', 417), ('your', 397)]
#### Mapping words to indices and indices to words
We provide a helper function to create a dictionary that maps words to indices and indices to words.
```
# get_dict creates two dictionaries, converting words to indices and viceversa.
word2Ind, Ind2word = get_dict(data)
V = len(word2Ind)
print("Size of vocabulary: ", V)
```
Size of vocabulary: 5772
```
# example of word to index mapping
print("Index of the word 'king' : ",word2Ind['king'] )
print("Word which has index 2743: ",Ind2word[2743] )
```
Index of the word 'king' : 2743
Word which has index 2743: king
<a name='2'></a>
# 2 Training the Model
### Initializing the model
You will now initialize two matrices and two vectors.
- The first matrix ($W_1$) is of dimension $N \times V$, where $V$ is the number of words in your vocabulary and $N$ is the dimension of your word vector.
- The second matrix ($W_2$) is of dimension $V \times N$.
- Vector $b_1$ has dimensions $N\times 1$
- Vector $b_2$ has dimensions $V\times 1$.
- $b_1$ and $b_2$ are the bias vectors of the linear layers from matrices $W_1$ and $W_2$.
The overall structure of the model will look as in Figure 1, but at this stage we are just initializing the parameters.
<a name='ex-01'></a>
### Exercise 01
Please use [numpy.random.rand](https://numpy.org/doc/stable/reference/random/generated/numpy.random.rand.html) to generate matrices that are initialized with random values from a uniform distribution, ranging between 0 and 1.
**Note:** In the next cell you will encounter a random seed. Please **DO NOT** modify this seed so your solution can be tested correctly.
```
# UNQ_C1 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: initialize_model
def initialize_model(N,V, random_seed=1):
'''
Inputs:
N: dimension of hidden vector
V: dimension of vocabulary
random_seed: random seed for consistent results in the unit tests
Outputs:
W1, W2, b1, b2: initialized weights and biases
'''
np.random.seed(random_seed)
### START CODE HERE (Replace instances of 'None' with your code) ###
# W1 has shape (N,V)
W1 = np.random.rand(N, V)
# W2 has shape (V,N)
W2 = np.random.rand(V, N)
# b1 has shape (N,1)
b1 = np.random.rand(N, 1)
# b2 has shape (V,1)
b2 = np.random.rand(V, 1)
### END CODE HERE ###
return W1, W2, b1, b2
```
```
# Test your function example.
tmp_N = 4
tmp_V = 10
tmp_W1, tmp_W2, tmp_b1, tmp_b2 = initialize_model(tmp_N,tmp_V)
assert tmp_W1.shape == ((tmp_N,tmp_V))
assert tmp_W2.shape == ((tmp_V,tmp_N))
print(f"tmp_W1.shape: {tmp_W1.shape}")
print(f"tmp_W2.shape: {tmp_W2.shape}")
print(f"tmp_b1.shape: {tmp_b1.shape}")
print(f"tmp_b2.shape: {tmp_b2.shape}")
```
tmp_W1.shape: (4, 10)
tmp_W2.shape: (10, 4)
tmp_b1.shape: (4, 1)
tmp_b2.shape: (10, 1)
##### Expected Output
```CPP
tmp_W1.shape: (4, 10)
tmp_W2.shape: (10, 4)
tmp_b1.shape: (4, 1)
tmp_b2.shape: (10, 1)
```
<a name='2.1'></a>
### 2.1 Softmax
Before we can start training the model, we need to implement the softmax function as defined in equation 5:
<br>
$$ \text{softmax}(z_i) = \frac{e^{z_i} }{\sum_{j=0}^{V-1} e^{z_j} } \tag{5} $$
- Array indexing in code starts at 0.
- $V$ is the number of words in the vocabulary (which is also the number of rows of $z$).
- $i$ goes from 0 to |V| - 1.
<a name='ex-02'></a>
### Exercise 02
**Instructions**: Implement the softmax function below.
- Assume that the input $z$ to `softmax` is a 2D array
- Each training example is represented by a column of shape (V, 1) in this 2D array.
- There may be more than one column in the 2D array, because you can put in a batch of examples to increase efficiency. Let's call the batch size lowercase $m$, so the $z$ array has shape (V, m)
- When taking the sum over $j=0 \cdots V-1$ in the denominator, take the sum for each column (each example) separately (a quick check of this is shown right after the hints below).
Please use
- numpy.exp
- numpy.sum (set the axis so that you take the sum of each column in z)
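As a quick sanity check (not part of the graded cell), summing with `axis=0` collapses each column, which is exactly the per-example denominator that equation (5) needs when each column of $z$ is one example:
```
import numpy as np

z = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(np.sum(np.exp(z), axis=0))  # one sum per column, shape (2,)
```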
```
# UNQ_C2 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: softmax
def softmax(z):
'''
Inputs:
z: output scores from the hidden layer [shape (V, m)] where m = #batch
Outputs:
yhat: prediction (estimate of y)
'''
### START CODE HERE (Replace instances of 'None' with your own code) ###
# Calculate yhat (softmax)
z_exp = np.exp(z)
z_exp_sum = np.sum(z_exp, axis=0)
yhat = z_exp / z_exp_sum
### END CODE HERE ###
return yhat
```
```
# Test the function
tmp = np.array([[1,2,3],
[1,1,1]
])
tmp_sm = softmax(tmp)
display(tmp_sm)
```
array([[0.5 , 0.73105858, 0.88079708],
[0.5 , 0.26894142, 0.11920292]])
##### Expected Ouput
```CPP
array([[0.5 , 0.73105858, 0.88079708],
[0.5 , 0.26894142, 0.11920292]])
```
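The version above is all the assignment needs, but `np.exp` can overflow for very large scores. As a side note (not required by the grader), a numerically safer variant subtracts the column-wise maximum before exponentiating, which leaves the softmax values unchanged:
```
import numpy as np

def softmax_stable(z):
    # shifting each column by its max does not change softmax but avoids overflow
    z_shifted = z - np.max(z, axis=0, keepdims=True)
    z_exp = np.exp(z_shifted)
    return z_exp / np.sum(z_exp, axis=0, keepdims=True)

print(softmax_stable(np.array([[1, 2, 3], [1, 1, 1]])))  # matches the output above
```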
<a name='2.2'></a>
### 2.2 Forward propagation
<a name='ex-03'></a>
### Exercise 03
Implement the forward propagation $z$ according to equations (1) to (3). <br>
\begin{align}
h &= W_1 \ X + b_1 \tag{1} \\
a &= ReLU(h) \tag{2} \\
z &= W_2 \ a + b_2 \tag{3} \\
\end{align}
For that, you will use as activation the Rectified Linear Unit (ReLU) given by:
$$f(h)=\max (0,h) \tag{6}$$
<details>
<summary>
<font size="3" color="darkgreen"><b>Hints</b></font>
</summary>
<p>
<ul>
<li>You can use numpy.maximum(x1,x2) to get the maximum of two values</li>
<li>Use numpy.dot(A,B) to matrix multiply A and B</li>
</ul>
</p>
</details>
```
# UNQ_C3 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: forward_prop
def forward_prop(x, W1, W2, b1, b2):
'''
Inputs:
x: average one hot vector for the context
W1, W2, b1, b2: matrices and biases to be learned
Outputs:
z: output score vector
'''
### START CODE HERE (Replace instances of 'None' with your own code) ###
# Calculate h
h = np.dot(W1, x) + b1
# Apply the relu on h (store result in h)
h = np.maximum(0, h)
# Calculate z
z = np.dot(W2, h) + b2
### END CODE HERE ###
return z, h
```
```
# Test the function
# Create some inputs
tmp_N = 2
tmp_V = 3
tmp_x = np.array([[0,1,0]]).T
tmp_W1, tmp_W2, tmp_b1, tmp_b2 = initialize_model(N=tmp_N,V=tmp_V, random_seed=1)
print(f"x has shape {tmp_x.shape}")
print(f"N is {tmp_N} and vocabulary size V is {tmp_V}")
# call function
tmp_z, tmp_h = forward_prop(tmp_x, tmp_W1, tmp_W2, tmp_b1, tmp_b2)
print("call forward_prop")
print()
# Look at output
print(f"z has shape {tmp_z.shape}")
print("z has values:")
print(tmp_z)
print()
print(f"h has shape {tmp_h.shape}")
print("h has values:")
print(tmp_h)
```
x has shape (3, 1)
N is 2 and vocabulary size V is 3
call forward_prop
z has shape (3, 1)
z has values:
[[0.55379268]
[1.58960774]
[1.50722933]]
h has shape (2, 1)
h has values:
[[0.92477674]
[1.02487333]]
##### Expected output
```CPP
x has shape (3, 1)
N is 2 and vocabulary size V is 3
call forward_prop
z has shape (3, 1)
z has values:
[[0.55379268]
[1.58960774]
[1.50722933]]
h has shape (2, 1)
h has values:
[[0.92477674]
[1.02487333]]
```
<a name='2.3'></a>
## 2.3 Cost function
- We have implemented the *cross-entropy* cost function for you.
```
# compute_cost: cross-entropy cost function
def compute_cost(y, yhat, batch_size):
# cost function
logprobs = np.multiply(np.log(yhat),y) + np.multiply(np.log(1 - yhat), 1 - y)
cost = - 1/batch_size * np.sum(logprobs)
cost = np.squeeze(cost)
return cost
```
```
# Test the function
tmp_C = 2
tmp_N = 50
tmp_batch_size = 4
tmp_word2Ind, tmp_Ind2word = get_dict(data)
tmp_V = len(word2Ind)
tmp_x, tmp_y = next(get_batches(data, tmp_word2Ind, tmp_V,tmp_C, tmp_batch_size))
print(f"tmp_x.shape {tmp_x.shape}")
print(f"tmp_y.shape {tmp_y.shape}")
tmp_W1, tmp_W2, tmp_b1, tmp_b2 = initialize_model(tmp_N,tmp_V)
print(f"tmp_W1.shape {tmp_W1.shape}")
print(f"tmp_W2.shape {tmp_W2.shape}")
print(f"tmp_b1.shape {tmp_b1.shape}")
print(f"tmp_b2.shape {tmp_b2.shape}")
tmp_z, tmp_h = forward_prop(tmp_x, tmp_W1, tmp_W2, tmp_b1, tmp_b2)
print(f"tmp_z.shape: {tmp_z.shape}")
print(f"tmp_h.shape: {tmp_h.shape}")
tmp_yhat = softmax(tmp_z)
print(f"tmp_yhat.shape: {tmp_yhat.shape}")
tmp_cost = compute_cost(tmp_y, tmp_yhat, tmp_batch_size)
print("call compute_cost")
print(f"tmp_cost {tmp_cost:.4f}")
```
tmp_x.shape (5772, 4)
tmp_y.shape (5772, 4)
tmp_W1.shape (50, 5772)
tmp_W2.shape (5772, 50)
tmp_b1.shape (50, 1)
tmp_b2.shape (5772, 1)
tmp_z.shape: (5772, 4)
tmp_h.shape: (50, 4)
tmp_yhat.shape: (5772, 4)
call compute_cost
tmp_cost 12.9825
##### Expected output
```CPP
tmp_x.shape (5778, 4)
tmp_y.shape (5778, 4)
tmp_W1.shape (50, 5778)
tmp_W2.shape (5778, 50)
tmp_b1.shape (50, 1)
tmp_b2.shape (5778, 1)
tmp_z.shape: (5778, 4)
tmp_h.shape: (50, 4)
tmp_yhat.shape: (5778, 4)
call compute_cost
tmp_cost 9.9560
```
<a name='2.4'></a>
## 2.4 Training the Model - Backpropagation
<a name='ex-04'></a>
### Exercise 04
Now that you have understood how the CBOW model works, you will train it. <br>
You created a function for the forward propagation. Now you will implement a function that computes the gradients to backpropagate the errors.
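For reference, the gradients computed in the graded cell below can be written as follows, where $m$ is the batch size, the columns of $X$, $Y$, $\hat Y$ and $H$ are the examples of the batch, and $\mathbf 1$ is a column vector of ones used to sum over the batch:
\begin{align}
\frac{\partial J}{\partial W_1} &= \frac{1}{m}\,ReLU\left(W_2^\top(\hat Y - Y)\right)X^\top \tag{7}\\
\frac{\partial J}{\partial W_2} &= \frac{1}{m}(\hat Y - Y)\,H^\top \tag{8}\\
\frac{\partial J}{\partial b_1} &= \frac{1}{m}\,ReLU\left(W_2^\top(\hat Y - Y)\right)\mathbf 1 \tag{9}\\
\frac{\partial J}{\partial b_2} &= \frac{1}{m}(\hat Y - Y)\,\mathbf 1 \tag{10}
\end{align}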
```
# UNQ_C4 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: back_prop
def back_prop(x, yhat, y, h, W1, W2, b1, b2, batch_size):
'''
Inputs:
x: average one hot vector for the context
yhat: prediction (estimate of y)
y: target vector
h: hidden vector (see eq. 1)
W1, W2, b1, b2: matrices and biases
batch_size: batch size
Outputs:
grad_W1, grad_W2, grad_b1, grad_b2: gradients of matrices and biases
'''
    ### START CODE HERE (Replace instances of 'None' with your code) ###
# Compute l1 as W2^T (Yhat - Y)
# Re-use it whenever you see W2^T (Yhat - Y) used to compute a gradient
l1 = np.dot(W2.T, yhat - y)
# Apply relu to l1
l1 = np.maximum(0, l1)
# Compute the gradient of W1
grad_W1 = 1/batch_size * np.dot(l1, x.T)
# Compute the gradient of W2
grad_W2 = 1/batch_size * np.dot(yhat - y, h.T)
# Compute the gradient of b1
grad_b1 = 1/batch_size * np.sum(l1, axis=1, keepdims=True)
# Compute the gradient of b2
grad_b2 = 1/batch_size * np.sum(yhat - y, axis=1, keepdims=True)
### END CODE HERE ###
return grad_W1, grad_W2, grad_b1, grad_b2
```
```
# Test the function
tmp_C = 2
tmp_N = 50
tmp_batch_size = 4
tmp_word2Ind, tmp_Ind2word = get_dict(data)
tmp_V = len(word2Ind)
# get a batch of data
tmp_x, tmp_y = next(get_batches(data, tmp_word2Ind, tmp_V,tmp_C, tmp_batch_size))
print("get a batch of data")
print(f"tmp_x.shape {tmp_x.shape}")
print(f"tmp_y.shape {tmp_y.shape}")
print()
print("Initialize weights and biases")
tmp_W1, tmp_W2, tmp_b1, tmp_b2 = initialize_model(tmp_N,tmp_V)
print(f"tmp_W1.shape {tmp_W1.shape}")
print(f"tmp_W2.shape {tmp_W2.shape}")
print(f"tmp_b1.shape {tmp_b1.shape}")
print(f"tmp_b2.shape {tmp_b2.shape}")
print()
print("Forwad prop to get z and h")
tmp_z, tmp_h = forward_prop(tmp_x, tmp_W1, tmp_W2, tmp_b1, tmp_b2)
print(f"tmp_z.shape: {tmp_z.shape}")
print(f"tmp_h.shape: {tmp_h.shape}")
print()
print("Get yhat by calling softmax")
tmp_yhat = softmax(tmp_z)
print(f"tmp_yhat.shape: {tmp_yhat.shape}")
tmp_m = (2*tmp_C)
tmp_grad_W1, tmp_grad_W2, tmp_grad_b1, tmp_grad_b2 = back_prop(tmp_x, tmp_yhat, tmp_y, tmp_h, tmp_W1, tmp_W2, tmp_b1, tmp_b2, tmp_batch_size)
print()
print("call back_prop")
print(f"tmp_grad_W1.shape {tmp_grad_W1.shape}")
print(f"tmp_grad_W2.shape {tmp_grad_W2.shape}")
print(f"tmp_grad_b1.shape {tmp_grad_b1.shape}")
print(f"tmp_grad_b2.shape {tmp_grad_b2.shape}")
```
get a batch of data
tmp_x.shape (5772, 4)
tmp_y.shape (5772, 4)
Initialize weights and biases
tmp_W1.shape (50, 5772)
tmp_W2.shape (5772, 50)
tmp_b1.shape (50, 1)
tmp_b2.shape (5772, 1)
Forward prop to get z and h
tmp_z.shape: (5772, 4)
tmp_h.shape: (50, 4)
Get yhat by calling softmax
tmp_yhat.shape: (5772, 4)
call back_prop
tmp_grad_W1.shape (50, 5772)
tmp_grad_W2.shape (5772, 50)
tmp_grad_b1.shape (50, 1)
tmp_grad_b2.shape (5772, 1)
<a name='2.5'></a>
## 2.5 Gradient Descent
<a name='ex-05'></a>
### Exercise 05
Now that you have implemented a function to compute the gradients, you will implement batch gradient descent over your training set.
**Hint:** For that, you will use `initialize_model` and the `back_prop` functions which you just created (and the `compute_cost` function). You can also use the provided `get_batches` helper function:
```for x, y in get_batches(data, word2Ind, V, C, batch_size):```
```...```
Also: print the cost after each batch is processed (use batch size = 128)
```
# UNQ_C5 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: gradient_descent
def gradient_descent(data, word2Ind, N, V, num_iters, alpha=0.03):
'''
This is the gradient_descent function
Inputs:
data: text
word2Ind: words to Indices
N: dimension of hidden vector
V: dimension of vocabulary
num_iters: number of iterations
Outputs:
W1, W2, b1, b2: updated matrices and biases
'''
W1, W2, b1, b2 = initialize_model(N,V, random_seed=282)
batch_size = 128
iters = 0
C = 2
for x, y in get_batches(data, word2Ind, V, C, batch_size):
### START CODE HERE (Replace instances of 'None' with your own code) ###
# Get z and h
z, h = forward_prop(x, W1, W2, b1, b2)
# Get yhat
yhat = softmax(z)
# Get cost
cost = compute_cost(y, yhat, batch_size)
if ( (iters+1) % 10 == 0):
print(f"iters: {iters + 1} cost: {cost:.6f}")
# Get gradients
grad_W1, grad_W2, grad_b1, grad_b2 = back_prop(x, yhat, y, h, W1, W2, b1, b2, batch_size)
# Update weights and biases
W1 = W1 - alpha * grad_W1
W2 = W2 - alpha * grad_W2
b1 = b1 - alpha * grad_b1
b2 = b2 - alpha * grad_b2
### END CODE HERE ###
iters += 1
if iters == num_iters:
break
if iters % 100 == 0:
alpha *= 0.66
return W1, W2, b1, b2
```
```
# test your function
C = 2
N = 50
word2Ind, Ind2word = get_dict(data)
V = len(word2Ind)
num_iters = 150
print("Call gradient_descent")
W1, W2, b1, b2 = gradient_descent(data, word2Ind, N, V, num_iters)
```
Call gradient_descent
iters: 10 cost: 0.543259
iters: 20 cost: 0.105205
iters: 30 cost: 0.057815
iters: 40 cost: 0.039841
iters: 50 cost: 0.030391
iters: 60 cost: 0.024565
iters: 70 cost: 0.020614
iters: 80 cost: 0.017758
iters: 90 cost: 0.015598
iters: 100 cost: 0.013906
iters: 110 cost: 0.012934
iters: 120 cost: 0.012129
iters: 130 cost: 0.011417
iters: 140 cost: 0.010785
iters: 150 cost: 0.010219
##### Expected Output
```CPP
iters: 10 cost: 0.789141
iters: 20 cost: 0.105543
iters: 30 cost: 0.056008
iters: 40 cost: 0.038101
iters: 50 cost: 0.028868
iters: 60 cost: 0.023237
iters: 70 cost: 0.019444
iters: 80 cost: 0.016716
iters: 90 cost: 0.014660
iters: 100 cost: 0.013054
iters: 110 cost: 0.012133
iters: 120 cost: 0.011370
iters: 130 cost: 0.010698
iters: 140 cost: 0.010100
iters: 150 cost: 0.009566
```
Your numbers may differ a bit depending on which version of Python you're using.
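The outline also lists visualizing the word vectors. A minimal sketch of how this could be done with the helpers above (the word list and the choice of averaging $W_1^\top$ and $W_2$ into one embedding per word are illustrative assumptions, not prescribed by the grader):
```
from matplotlib import pyplot as plt

# one common choice: average the input and output embeddings, one row per word
embs = (W1.T + W2) / 2.0  # shape (V, N)

candidates = ['king', 'queen', 'lord', 'man', 'woman', 'happy', 'sad']
words = [w for w in candidates if w in word2Ind]  # keep only words in the vocabulary
idx = [word2Ind[w] for w in words]

result = compute_pca(embs[idx, :], 2)  # project the selected vectors to 2 dimensions
plt.scatter(result[:, 0], result[:, 1])
for i, w in enumerate(words):
    plt.annotate(w, xy=(result[i, 0], result[i, 1]))
plt.show()
```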
This notebook presents examples of using shiroin, a Python library for proving inequalities of multivariate polynomials.
First we need to load the packages.
```python
from sympy import *
from shiroin import *
from IPython.display import Latex
shiro.seed=1
shiro.display=lambda x:display(Latex(x))
```
`shiro.seed=1` sets a seed for the proving functions. If you don't set it, you may get a slightly different proof each time you run a function. This line is here only for the sake of reproducibility.
The next line provides a nicer display of proofs, i.e. formulas will be shown instead of LaTeX code of these formulas. Note that this works on Jupyter, but not on the git page.
Now let's make some proofs. We will use problems from https://www.imomath.com/index.php?options=593&lmm=0.
#### Problem 1
Prove the inequality $a^2+b^2+c^2\ge ab+bc+ca$, if $a,b,c$ are real numbers.
Function `prove` tries to prove that a given formula is nonnegative, **assuming all variables are nonnegative**. In this case the nonnegativity assumption is not a problem, since all powers on the left side are even, so if $|a|^2+|b|^2+|c|^2 \ge |ab|+|ac|+|bc|,$ then $a^2+b^2+c^2= |a|^2+|b|^2+|c|^2 \ge |ab|+|ac|+|bc| \ge ab+ac+bc$.
```python
prove('(a^2+b^2+c^2-a*b-a*c-b*c)')
```
numerator: $a^{2} - a b - a c + b^{2} - b c + c^{2}$
denominator: $1$
status: 0
From weighted AM-GM inequality:
Program couldn't find a solution with integer coefficients. Try to multiple the formula by some integer and run this function again.
$$ a b+a c+b c \le a^{2}+b^{2}+c^{2} $$
-1
The `prove` function prints several things. The first two lines give us the formula after expanding it. The next one is the status, which is the return status of the first call to ```scipy.optimize.linprog```. Possible outputs and their explanations are
* 0 - found a proof with real coefficients,
* 1 - need more time,
* 2 - function didn't find a proof,
* 3,4 - loss of precision (which may happen if it has to work with big numbers).
Then we've got a hint. So let's use it!
```python
prove('(a^2+b^2+c^2-a*b-a*c-b*c)*2')
```
numerator: $2 a^{2} - 2 a b - 2 a c + 2 b^{2} - 2 b c + 2 c^{2}$
denominator: $1$
status: 0
From weighted AM-GM inequality:
$$2 a b \le a^{2}+b^{2}$$
$$2 a c \le a^{2}+c^{2}$$
$$2 b c \le b^{2}+c^{2}$$
$$ 0 \le 0 $$
The sum of all inequalities gives us a proof of the inequality.
0
#### Problem 2
Find all real numbers such that $a^2+b^2+c^2+d^2=a(b+c+d)$.
At first glance it doesn't look like an inequality problem, but actually it is one. If you try to calculate both sides for different values, you can see that the left side of the equation is never less than the right one. So let's try
```python
prove('a^2+b^2+c^2+d^2-a*(b+c+d)')
```
numerator: $a^{2} - a b - a c - a d + b^{2} + c^{2} + d^{2}$
denominator: $1$
status: 2
Program couldn't find any proof.
$$ a b+a c+a d \le a^{2}+b^{2}+c^{2}+d^{2} $$
2
This time `prove` didn't find a proof. But that doesn't mean that the inequality is not true! `prove` accepts a list of $n$ positive values, where $n$ is the number of variables in the formula. The list of values should correspond to the list of variables in alphabetical order. Here are a few tips on how to choose the right values.
1. Consider a function $pos(values)$ which is the sum of the positive addends in the formula after substitution of values to variables. Analogically, let $neg(values)$ be the sum of the negative addends. We should choose such values for which $quotient=pos(values)/|neg(values)|$ is small.
2. The symmetry group of the formula splits set of variables into orbits. Using the same values for variables in one orbit is recommended. In particular, if the symmetry group of the formula is transitive (for example, when the formula is cyclic), then all values (probably) should be the same.
3. If the formula is homogeneous, then $values=(a_1,a_2,...,a_n)$ provides a proof iff $values=(ka_1,ka_2,...,ka_n)$ provides a proof for any $k\in Q_+$ (as long as you don't run into overflow errors).
In the formula above $b,c,d$ are in one orbit and the formula is homogeneous, so let's try $a=2$ and $b=c=d=1$.
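Following tip 1, let's check these values first:
$$pos = a^2+b^2+c^2+d^2 = 4+1+1+1 = 7,\qquad |neg| = a(b+c+d) = 6,\qquad quotient = \tfrac{7}{6} > 1,$$
so these values are a reasonable starting point.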
```python
prove('a^2+b^2+c^2+d^2-a*(b+c+d)','2,1,1,1')
```
Substitute $a\to 2 e$
numerator: $b^{2} - 2 b e + c^{2} - 2 c e + d^{2} - 2 d e + 4 e^{2}$
denominator: $1$
status: 0
From weighted AM-GM inequality:
$$2 b e \le b^{2}+e^{2}$$
$$2 c e \le c^{2}+e^{2}$$
$$2 d e \le d^{2}+e^{2}$$
$$ 0 \le e^{2} $$
The sum of all inequalities gives us a proof of the inequality.
0
The function makes the substitution $a\to 2e$ and tries to prove the new inequality. This time it succeeded. Moreover, if the starting formula is equal to 0, then all these inequalities have to be equalities, so $e^2=0$ and consequently $a=0$. We can also try a slightly smaller value for $a$.
```python
prove('a^2+b^2+c^2+d^2-a*(b+c+d)','7/4,1,1,1')
```
Substitute $a\to \frac{7 f}{4}$
numerator: $16 b^{2} - 28 b f + 16 c^{2} - 28 c f + 16 d^{2} - 28 d f + 49 f^{2}$
denominator: $16$
status: 0
From weighted AM-GM inequality:
$$28 b f \le 14 b^{2}+14 f^{2}$$
$$28 c f \le 14 c^{2}+14 f^{2}$$
$$28 d f \le 14 d^{2}+14 f^{2}$$
$$ 0 \le 2 b^{2}+2 c^{2}+2 d^{2}+7 f^{2} $$
The sum of all inequalities gives us a proof of the inequality.
0
Now we can see that if $a^2+b^2+c^2+d^2-a(b+c+d)=0$, then $7f^2+2b^2+2c^2+2d^2=0$ and consequently $a=b=c=d=0$. Note that the inequality is proved only for positive numbers (which, by continuity, can be extended to nonnegative numbers). But using a similar argument to the one in the previous problem, if $(a,b,c,d)=(x,y,z,t)$ is a solution of $a^2+b^2+c^2+d^2-a(b+c+d)=0$, then $(a,b,c,d)=(|x|,|y|,|z|,|t|)$ is a solution, too. Since the only nonnegative solution is $(0,0,0,0)$, it is the only solution.
It is worth noting that this time `prove` used $f$ as the new variable instead of $e$. If you want to start a new proof and you don't care about collisions with variables from previous proofs, you can use the `newproof` function, which clears the set of used variables.
We can also use the `findvalues` function to find values for the proof more automatically. It looks for a (local) minimum of the $quotient$ value defined above.
```python
formula=S('a^2+b^2+c^2+d^2-a*(b+c+d)')
numvalues=findvalues(formula)
numvalues
```
Optimization terminated successfully.
Current function value: 1.154701
Iterations: 68
Function evaluations: 127
(1.4339109663193974,
0.8278441585048405,
0.8279027492686561,
0.8278930696996669)
If the $quotient$ value were less than 1, that would mean that the formula is negative for the given values. If $quotient$ were equal to 1, then we would have to choose exactly these values (or other values for which the $quotient$ is equal to 1). But here $quotient$ is greater than 1, so we can take a point near it and (probably) still have a proof. The values given to the `prove` function must not be floating point numbers, so we can rationalize them.
```python
values=nsimplify(numvalues,tolerance=0.1,rational=True)
values
```
$\displaystyle \left( \frac{10}{7}, \ \frac{5}{6}, \ \frac{5}{6}, \ \frac{5}{6}\right)$
```python
newproof()
prove(formula,values)
```
Substitute $a\to \frac{10 e}{7}$
Substitute $b\to \frac{5 f}{6}$
Substitute $c\to \frac{5 g}{6}$
Substitute $d\to \frac{5 h}{6}$
numerator: $3600 e^{2} - 2100 e f - 2100 e g - 2100 e h + 1225 f^{2} + 1225 g^{2} + 1225 h^{2}$
denominator: $1764$
status: 0
From weighted AM-GM inequality:
$$2100 e f \le 1050 e^{2}+1050 f^{2}$$
$$2100 e g \le 1050 e^{2}+1050 g^{2}$$
$$2100 e h \le 1050 e^{2}+1050 h^{2}$$
$$ 0 \le 450 e^{2}+175 f^{2}+175 g^{2}+175 h^{2} $$
The sum of all inequalities gives us a proof of the inequality.
0
If you set a bigger tolerance, the values will have smaller numerators and denominators, so the coefficients in the proof will be smaller, too. But if the tolerance is too big, a proof will not be found.
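For instance (purely illustrative tolerances, continuing from the `numvalues` computed above):
```python
# a coarser tolerance gives simpler fractions, a finer one stays closer to the optimum
print(nsimplify(numvalues, tolerance=0.2, rational=True))
print(nsimplify(numvalues, tolerance=0.01, rational=True))
```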
Let's skip problem 3 for now and solve problem 4 instead.
#### Problem 4
If $x$ and $y$ are two positive numbers less than 1, prove that
$$\frac{1}{1-x^2}+\frac{1}{1-y^2}\ge \frac{2}{1-xy}.$$
```python
prove('1/(1-x^2)+1/(1-y^2)-2/(1-x*y)')
```
numerator: $- x^{3} y + 2 x^{2} y^{2} - x^{2} - x y^{3} + 2 x y - y^{2}$
denominator: $x^{3} y^{3} - x^{3} y - x^{2} y^{2} + x^{2} - x y^{3} + x y + y^{2} - 1$
status: 2
Program couldn't find any proof.
$$ x^{3} y+x^{2}+x y^{3}+y^{2} \le 2 x^{2} y^{2}+2 x y $$
It looks like the formula is symmetric. You can assume without loss of generality that x >= y. Try
prove(makesubs(S("-x**3*y + 2*x**2*y**2 - x**2 - x*y**3 + 2*x*y - y**2"),[('y', 'oo')])
2
`prove` assumes that the formula is well-defined if all variables are positive, so it doesn't have to analyze the denominator (except for choosing the right sign). In this case that assumption does not hold, since if $x=1$, then $1-x^2=0$. Also, the denominator is equal to $(x^2-1)(y^2-1)(xy-1)$, which is negative for $x,y\in (0,1)$. So we need to make a substitution after which the new variables can take all positive values, not just those inside the $(0,1)$ interval.
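Indeed, the substitution that `makesubs` generates below,
$$x\to 1-\frac{1}{a+1}=\frac{a}{a+1},$$
maps $a\in(0,\infty)$ onto $x\in(0,1)$, so the positivity assumption on the new variable $a$ encodes exactly the constraint $0<x<1$.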
We will use the function `makesubs` to generate these substitutions. It has three basic parameters: `formula`, `intervals` and `values`. `intervals` are the current limitations of the variables, and `values` are values of the variables for which the $quotient$ of `formula` is small. `values` should lie inside the corresponding `intervals`. This argument is optional, but it's better to use it.
Let's go back to our problem. If $x=y$, then $\frac{1}{1-x^2}+\frac{1}{1-y^2} = \frac{2}{1-xy}$, so the formula attains its minimum value (zero) there. So let `values=(1/2,1/2)` (**warning: do not use decimal points**, e.g. do not write '0.5,0.5').
```python
newproof()
newformula,newvalues=makesubs('1/(1-x^2)+1/(1-y^2)-2/(1-x*y)','[0,1],[0,1]','1/2,1/2')
prove(newformula*3,newvalues)
```
Substitute $x\to 1 - \frac{1}{a + 1}$
Substitute $y\to 1 - \frac{1}{b + 1}$
numerator: $6 a^{3} b + 3 a^{3} - 12 a^{2} b^{2} - 3 a^{2} b + 3 a^{2} + 6 a b^{3} - 3 a b^{2} - 6 a b + 3 b^{3} + 3 b^{2}$
denominator: $4 a^{2} b + 2 a^{2} + 4 a b^{2} + 8 a b + 3 a + 2 b^{2} + 3 b + 1$
status: 0
From weighted AM-GM inequality:
$$12 a^{2} b^{2} \le 6 a^{3} b+6 a b^{3}$$
$$3 a^{2} b \le 2 a^{3}+b^{3}$$
$$3 a b^{2} \le a^{3}+2 b^{3}$$
$$6 a b \le 3 a^{2}+3 b^{2}$$
$$ 0 \le 0 $$
The sum of all inequalities gives us a proof of the inequality.
0
Now let's get back to problem 3.
#### Problem 3
If $a,b,c$ are positive real numbers that satisfy $a^2+b^2+c^2=1$, find the minimal value of
$$\frac{a^2b^2}{c^2}+\frac{b^2c^2}{a^2}+\frac{c^2a^2}{b^2}$$
The problem is equivalent to finding the minimum of $xy/z+yz/x+zx/y$ assuming $x+y+z=1$ and $x,y,z>0$ (substitute $x=a^2$, $y=b^2$, $z=c^2$). The first idea is to suppose that the minimum is reached when $x=y=z$. In that case, $x=y=z=1/3$ and the formula is equal to 1. Now we can substitute $z\to 1-x-y$. The constraints for the variables are $x>0$, $y>0$, $x+y<1$. We can rewrite them as $x \in (0,1-y)$, $y \in (0,1)$. These two conditions have two important properties:
* constraints for variables are written as intervals,
* there are no "backwards dependencies", i.e. there is no $x$ in the interval of $y$.
If these two conditions hold, then you can use `makesubs` function.
```python
newproof()
formula=Sm('xy/z+yz/x+zx/y-1').subs('z',S('1-x-y'))
newformula,values=makesubs(formula,'[0,1-y],[0,1]','1/3,1/3')
prove(newformula,values)
```
Substitute $x\to - y + 1 + \frac{y - 1}{a + 1}$
Substitute $y\to 1 - \frac{1}{b + 1}$
Substitute $b\to \frac{c}{2}$
numerator: $a^{4} c^{2} + a^{3} c^{2} - 2 a^{3} c - 4 a^{2} c + 4 a^{2} + a c^{2} - 2 a c + c^{2}$
denominator: $a^{3} c^{2} + 2 a^{3} c + 2 a^{2} c^{2} + 4 a^{2} c + a c^{2} + 2 a c$
status: 0
From weighted AM-GM inequality:
$$2 a^{3} c \le a^{4} c^{2}+a^{2}$$
$$4 a^{2} c \le a^{3} c^{2}+2 a^{2}+a c^{2}$$
$$2 a c \le a^{2}+c^{2}$$
$$ 0 \le 0 $$
The sum of all inequalities gives us a proof of the inequality.
0
The proof was found, so the assumption that 1 is the minimum of `xy/z+yz/x+zx/y` was correct.
Functions `S` and `Sm` create a SymPy object from a string. The only difference is that `Sm` assumes there are no multi-letter variables and adds a multiplication sign between every two terms that have no operator between them, so the object `Sm('xy/z+yz/x+zx/y')` has 3 variables `x,y,z`, while `S('xy/z+yz/x+zx/y')` has 6 variables `x,y,z,xy,yz,zx`.
As you may have noticed, formulas are often cyclic or symmetric. Therefore you can use the `cyclize` or `symmetrize` functions to reduce the length of the written formula. Here are a few commands which all do the same thing.
```python
prove('(a^2+b^2+c^2-a*b-a*c-b*c)*2')
#prove(S('(a^2+b^2+c^2-a*b-a*c-b*c)*2'))
#prove(Sm('2(a^2+b^2+c^2-ab-ac-bc)'))
#prove(cyclize('2*a^2-2*a*b'))
#prove(symmetrize('a^2-a*b'))
```
numerator: $2 a^{2} - 2 a b - 2 a c + 2 b^{2} - 2 b c + 2 c^{2}$
denominator: $1$
status: 0
From weighted AM-GM inequality:
$$2 a b \le a^{2}+b^{2}$$
$$2 a c \le a^{2}+c^{2}$$
$$2 b c \le b^{2}+c^{2}$$
$$ 0 \le 0 $$
The sum of all inequalities gives us a proof of the inequality.
0
Now look at the formula $(x-1)^4$. It's quite obvious that it's nonnegative, but `prove` fails to show this!
```python
prove('(x-1)^4')
```
numerator: $x^{4} - 4 x^{3} + 6 x^{2} - 4 x + 1$
denominator: $1$
status: 2
Program couldn't find any proof.
$$ 4 x^{3}+4 x \le x^{4}+6 x^{2}+1 $$
2
But there is a relatively simple method to generate a proof using this library. We will make two proofs: one for $x\in (1,\infty)$ and a second one for $x\in(-\infty,1)$.
```python
newproof()
prove(makesubs('(x-1)^4','(1,oo)'))
```
Substitute $x\to a + 1$
numerator: $a^{4}$
denominator: $1$
status: 0
$$ 0 \le a^{4} $$
The sum of all inequalities gives us a proof of the inequality.
0
```python
newproof()
prove(makesubs('(x-1)^4','(-oo,1)'))
```
Substitute $x\to 1 - a$
numerator: $a^{4}$
denominator: $1$
status: 0
$$ 0 \le a^{4} $$
The sum of all inequalities gives us a proof of the inequality.
0
Now let's move on to problem 10.
#### Problem 10
If $a,b,c,d>0$, prove that
$$\frac a{b+c}+\frac b{c+d}+ \frac c{d+a}+ \frac d{a+b}\geq 2.$$
Let's try a simple approach.
```python
formula=cyclize('a/(b+c)',variables='a,b,c,d')-2
formula
```
$\displaystyle \frac{a}{b + c} + \frac{b}{c + d} + \frac{c}{a + d} + \frac{d}{a + b} - 2$
```python
prove(formula)
```
numerator: $a^{3} c + a^{3} d + a^{2} b^{2} - a^{2} b d - 2 a^{2} c^{2} - a^{2} c d + a^{2} d^{2} + a b^{3} - a b^{2} c - a b^{2} d - a b c^{2} + a c^{3} - a c d^{2} + b^{3} d + b^{2} c^{2} - 2 b^{2} d^{2} + b c^{3} - b c^{2} d - b c d^{2} + b d^{3} + c^{2} d^{2} + c d^{3}$
denominator: $a^{2} b c + a^{2} b d + a^{2} c^{2} + a^{2} c d + a b^{2} c + a b^{2} d + a b c^{2} + 2 a b c d + a b d^{2} + a c^{2} d + a c d^{2} + b^{2} c d + b^{2} d^{2} + b c^{2} d + b c d^{2}$
status: 2
Program couldn't find any proof.
$$ a^{2} b d+2 a^{2} c^{2}+a^{2} c d+a b^{2} c+a b^{2} d+a b c^{2}+a c d^{2}+2 b^{2} d^{2}+b c^{2} d+b c d^{2} \le a^{3} c+a^{3} d+a^{2} b^{2}+a^{2} d^{2}+a b^{3}+a c^{3}+b^{3} d+b^{2} c^{2}+b c^{3}+b d^{3}+c^{2} d^{2}+c d^{3} $$
2
This problem, like the previous one, can be solved by splitting the domain of the variables into several subdomains. But we can also use the symmetry of this inequality. For example, without loss of generality we can assume that $a\ge c$ and $b\ge d$, so $a\in [c,\infty)$, $b\in [d,\infty)$.
```python
newproof()
prove(makesubs(formula,'[c,oo],[d,oo]'))
```
Substitute $a\to c + e$
Substitute $b\to d + f$
numerator: $c^{2} e^{2} - c^{2} e f + c^{2} f^{2} + 2 c d e^{2} + 2 c d f^{2} + c e^{3} + c e f^{2} + c f^{3} + d^{2} e^{2} + d^{2} e f + d^{2} f^{2} + d e^{3} + d e^{2} f + 2 d e f^{2} + d f^{3} + e^{2} f^{2} + e f^{3}$
denominator: $c^{4} + 4 c^{3} d + 2 c^{3} e + 2 c^{3} f + 6 c^{2} d^{2} + 6 c^{2} d e + 6 c^{2} d f + c^{2} e^{2} + 3 c^{2} e f + c^{2} f^{2} + 4 c d^{3} + 6 c d^{2} e + 6 c d^{2} f + 2 c d e^{2} + 6 c d e f + 2 c d f^{2} + c e^{2} f + c e f^{2} + d^{4} + 2 d^{3} e + 2 d^{3} f + d^{2} e^{2} + 3 d^{2} e f + d^{2} f^{2} + d e^{2} f + d e f^{2}$
status: 0
From weighted AM-GM inequality:
Program couldn't find a solution with integer coefficients. Try to multiple the formula by some integer and run this function again.
$$ c^{2} e f \le c^{2} e^{2}+c^{2} f^{2}+2 c d e^{2}+2 c d f^{2}+c e^{3}+c e f^{2}+c f^{3}+d^{2} e^{2}+d^{2} e f+d^{2} f^{2}+d e^{3}+d e^{2} f+2 d e f^{2}+d f^{3}+e^{2} f^{2}+e f^{3} $$
-1
```python
newproof()
prove(makesubs(formula,'[c,oo],[d,oo]')*2)
```
Substitute $a\to c + e$
Substitute $b\to d + f$
numerator: $2 c^{2} e^{2} - 2 c^{2} e f + 2 c^{2} f^{2} + 4 c d e^{2} + 4 c d f^{2} + 2 c e^{3} + 2 c e f^{2} + 2 c f^{3} + 2 d^{2} e^{2} + 2 d^{2} e f + 2 d^{2} f^{2} + 2 d e^{3} + 2 d e^{2} f + 4 d e f^{2} + 2 d f^{3} + 2 e^{2} f^{2} + 2 e f^{3}$
denominator: $c^{4} + 4 c^{3} d + 2 c^{3} e + 2 c^{3} f + 6 c^{2} d^{2} + 6 c^{2} d e + 6 c^{2} d f + c^{2} e^{2} + 3 c^{2} e f + c^{2} f^{2} + 4 c d^{3} + 6 c d^{2} e + 6 c d^{2} f + 2 c d e^{2} + 6 c d e f + 2 c d f^{2} + c e^{2} f + c e f^{2} + d^{4} + 2 d^{3} e + 2 d^{3} f + d^{2} e^{2} + 3 d^{2} e f + d^{2} f^{2} + d e^{2} f + d e f^{2}$
status: 0
From weighted AM-GM inequality:
$$2 c^{2} e f \le c^{2} e^{2}+c^{2} f^{2}$$
$$ 0 \le c^{2} e^{2}+c^{2} f^{2}+4 c d e^{2}+4 c d f^{2}+2 c e^{3}+2 c e f^{2}+2 c f^{3}+2 d^{2} e^{2}+2 d^{2} e f+2 d^{2} f^{2}+2 d e^{3}+2 d e^{2} f+4 d e f^{2}+2 d f^{3}+2 e^{2} f^{2}+2 e f^{3} $$
The sum of all inequalities gives us a proof of the inequality.
0
It's a good idea to use intervals that are unbounded from one side (i.e. those which contain $\pm\infty$). In this problem we could assume that $a\in (0,c]$, $b\in (0,d]$ as well. But as you can see, in this case the proof is several times longer.
```python
newproof()
prove(makesubs(formula,'[0,c],[0,d]')*2)
```
Substitute $a\to c - \frac{c}{e + 1}$
Substitute $b\to d - \frac{d}{f + 1}$
numerator: $2 c^{4} e f^{3} + 6 c^{4} e f^{2} + 6 c^{4} e f + 2 c^{4} e - 2 c^{3} d e^{2} f^{2} - 4 c^{3} d e^{2} f - 2 c^{3} d e^{2} + 4 c^{3} d e f^{3} + 8 c^{3} d e f^{2} + 4 c^{3} d e f + 2 c^{3} d f^{3} + 4 c^{3} d f^{2} + 2 c^{3} d f + 2 c^{2} d^{2} e^{3} f + 2 c^{2} d^{2} e^{3} + 4 c^{2} d^{2} e^{2} f + 4 c^{2} d^{2} e^{2} + 2 c^{2} d^{2} e f^{3} + 4 c^{2} d^{2} e f^{2} + 6 c^{2} d^{2} e f + 4 c^{2} d^{2} e + 2 c^{2} d^{2} f^{3} + 4 c^{2} d^{2} f^{2} + 4 c^{2} d^{2} f + 2 c^{2} d^{2} + 4 c d^{3} e^{3} f + 2 c d^{3} e^{3} + 2 c d^{3} e^{2} f^{2} + 12 c d^{3} e^{2} f + 6 c d^{3} e^{2} + 4 c d^{3} e f^{2} + 12 c d^{3} e f + 6 c d^{3} e + 2 c d^{3} f^{2} + 4 c d^{3} f + 2 c d^{3} + 2 d^{4} e^{3} f + 6 d^{4} e^{2} f + 6 d^{4} e f + 2 d^{4} f$
denominator: $c^{4} e^{3} f^{3} + 3 c^{4} e^{3} f^{2} + 3 c^{4} e^{3} f + c^{4} e^{3} + c^{4} e^{2} f^{3} + 3 c^{4} e^{2} f^{2} + 3 c^{4} e^{2} f + c^{4} e^{2} + 4 c^{3} d e^{3} f^{3} + 10 c^{3} d e^{3} f^{2} + 8 c^{3} d e^{3} f + 2 c^{3} d e^{3} + 6 c^{3} d e^{2} f^{3} + 15 c^{3} d e^{2} f^{2} + 12 c^{3} d e^{2} f + 3 c^{3} d e^{2} + 2 c^{3} d e f^{3} + 5 c^{3} d e f^{2} + 4 c^{3} d e f + c^{3} d e + 6 c^{2} d^{2} e^{3} f^{3} + 12 c^{2} d^{2} e^{3} f^{2} + 7 c^{2} d^{2} e^{3} f + c^{2} d^{2} e^{3} + 12 c^{2} d^{2} e^{2} f^{3} + 24 c^{2} d^{2} e^{2} f^{2} + 14 c^{2} d^{2} e^{2} f + 2 c^{2} d^{2} e^{2} + 7 c^{2} d^{2} e f^{3} + 14 c^{2} d^{2} e f^{2} + 8 c^{2} d^{2} e f + c^{2} d^{2} e + c^{2} d^{2} f^{3} + 2 c^{2} d^{2} f^{2} + c^{2} d^{2} f + 4 c d^{3} e^{3} f^{3} + 6 c d^{3} e^{3} f^{2} + 2 c d^{3} e^{3} f + 10 c d^{3} e^{2} f^{3} + 15 c d^{3} e^{2} f^{2} + 5 c d^{3} e^{2} f + 8 c d^{3} e f^{3} + 12 c d^{3} e f^{2} + 4 c d^{3} e f + 2 c d^{3} f^{3} + 3 c d^{3} f^{2} + c d^{3} f + d^{4} e^{3} f^{3} + d^{4} e^{3} f^{2} + 3 d^{4} e^{2} f^{3} + 3 d^{4} e^{2} f^{2} + 3 d^{4} e f^{3} + 3 d^{4} e f^{2} + d^{4} f^{3} + d^{4} f^{2}$
status: 0
From weighted AM-GM inequality:
$$2 c^{3} d e^{2} f^{2} \le c^{4} e f^{3}+c^{2} d^{2} e^{3} f$$
$$2 c^{3} d e^{2} \le c^{4} e+c^{2} d^{2} e^{3}$$
$$4 c^{3} d e^{2} f \le c^{4} e f^{3}+c^{4} e+c^{2} d^{2} e^{3} f+c^{2} d^{2} e^{3}$$
$$ 0 \le 6 c^{4} e f^{2}+6 c^{4} e f+4 c^{3} d e f^{3}+8 c^{3} d e f^{2}+4 c^{3} d e f+2 c^{3} d f^{3}+4 c^{3} d f^{2}+2 c^{3} d f+4 c^{2} d^{2} e^{2} f+4 c^{2} d^{2} e^{2}+2 c^{2} d^{2} e f^{3}+4 c^{2} d^{2} e f^{2}+6 c^{2} d^{2} e f+4 c^{2} d^{2} e+2 c^{2} d^{2} f^{3}+4 c^{2} d^{2} f^{2}+4 c^{2} d^{2} f+2 c^{2} d^{2}+4 c d^{3} e^{3} f+2 c d^{3} e^{3}+2 c d^{3} e^{2} f^{2}+12 c d^{3} e^{2} f+6 c d^{3} e^{2}+4 c d^{3} e f^{2}+12 c d^{3} e f+6 c d^{3} e+2 c d^{3} f^{2}+4 c d^{3} f+2 c d^{3}+2 d^{4} e^{3} f+6 d^{4} e^{2} f+6 d^{4} e f+2 d^{4} f $$
The sum of all inequalities gives us a proof of the inequality.
0
The `powerprove` function is a shortcut for splitting the domain $R_+^n$ into several subdomains and proving the inequality on each of them. This function uses $2^n$ $n$-dimensional intervals with a common point (by default $(1,1,\dots,1)$), where $n$ is the number of variables. Here are two examples of using it. As you can see, the proofs are very long.
```python
newproof()
#this is equivalent to
#prove(makesubs('(x-1)^4','[1,oo]'))
#prove(makesubs('(x-1)^4','[1,0]'))
#you can write ends of interval in any order, so [1,0] is the same as [0,1]
#but the substitution is slightly simpler when 0 is on the right side
powerprove('(x-1)^4')
```
numerator: $x^{4} - 4 x^{3} + 6 x^{2} - 4 x + 1$
denominator: $1$
_______________________
Substitute $x\to 1+a$
Numerator after substitutions: $a^{4}$
status: 0
$$ 0 \le a^{4} $$
The sum of all inequalities gives us a proof of the inequality.
_______________________
Substitute $x\to 1/(1+b)$
Numerator after substitutions: $b^{4}$
status: 0
$$ 0 \le b^{4} $$
The sum of all inequalities gives us a proof of the inequality.
Counter({0: 2})
```python
newproof()
formula=Sm('-(3a+2b+c)(2a^3+3b^2+6c+1)+(4a+4b+4c)(a^4+b^3+c^2+3)')
#this is equivalent to
#prove(makesubs(formula,'[1,oo],[1,oo],[1,oo]'))
#prove(makesubs(formula,'[1,0],[1,oo],[1,oo]'))
#prove(makesubs(formula,'[1,oo],[1,0],[1,oo]'))
#prove(makesubs(formula,'[1,0],[1,0],[1,oo]'))
#prove(makesubs(formula,'[1,oo],[1,oo],[1,0]'))
#prove(makesubs(formula,'[1,0],[1,oo],[1,0]'))
#prove(makesubs(formula,'[1,oo],[1,0],[1,0]'))
#prove(makesubs(formula,'[1,0],[1,0],[1,0]'))
powerprove(formula)
```
numerator: $4 a^{5} + 4 a^{4} b + 4 a^{4} c - 6 a^{4} - 4 a^{3} b - 2 a^{3} c + 4 a b^{3} - 9 a b^{2} + 4 a c^{2} - 18 a c + 9 a + 4 b^{4} + 4 b^{3} c - 6 b^{3} - 3 b^{2} c + 4 b c^{2} - 12 b c + 10 b + 4 c^{3} - 6 c^{2} + 11 c$
denominator: $1$
_______________________
Substitute $a\to 1+d,b\to 1+e,c\to 1+f$
Numerator after substitutions: $4 d^{5} + 4 d^{4} e + 4 d^{4} f + 22 d^{4} + 12 d^{3} e + 14 d^{3} f + 42 d^{3} + 12 d^{2} e + 18 d^{2} f + 34 d^{2} + 4 d e^{3} + 3 d e^{2} - 2 d e + 4 d f^{2} + 4 e^{4} + 4 e^{3} f + 18 e^{3} + 9 e^{2} f + 18 e^{2} + 4 e f^{2} + 2 e f + 4 f^{3} + 14 f^{2}$
status: 0
From weighted AM-GM inequality:
$$2 d e \le d^{2}+e^{2}$$
$$ 0 \le 4 d^{5}+4 d^{4} e+4 d^{4} f+22 d^{4}+12 d^{3} e+14 d^{3} f+42 d^{3}+12 d^{2} e+18 d^{2} f+33 d^{2}+4 d e^{3}+3 d e^{2}+4 d f^{2}+4 e^{4}+4 e^{3} f+18 e^{3}+9 e^{2} f+17 e^{2}+4 e f^{2}+2 e f+4 f^{3}+14 f^{2} $$
The sum of all inequalities gives us a proof of the inequality.
_______________________
Substitute $a\to 1/(1+g),b\to 1+h,c\to 1+i$
Numerator after substitutions: $4 g^{5} h^{4} + 4 g^{5} h^{3} i + 14 g^{5} h^{3} + 9 g^{5} h^{2} i + 15 g^{5} h^{2} + 4 g^{5} h i^{2} + 2 g^{5} h i + 6 g^{5} h + 4 g^{5} i^{3} + 10 g^{5} i^{2} + 8 g^{5} i + 10 g^{5} + 20 g^{4} h^{4} + 20 g^{4} h^{3} i + 74 g^{4} h^{3} + 45 g^{4} h^{2} i + 78 g^{4} h^{2} + 20 g^{4} h i^{2} + 10 g^{4} h i + 24 g^{4} h + 20 g^{4} i^{3} + 54 g^{4} i^{2} + 30 g^{4} i + 40 g^{4} + 40 g^{3} h^{4} + 40 g^{3} h^{3} i + 156 g^{3} h^{3} + 90 g^{3} h^{2} i + 162 g^{3} h^{2} + 40 g^{3} h i^{2} + 20 g^{3} h i + 36 g^{3} h + 40 g^{3} i^{3} + 116 g^{3} i^{2} + 40 g^{3} i + 60 g^{3} + 40 g^{2} h^{4} + 40 g^{2} h^{3} i + 164 g^{2} h^{3} + 90 g^{2} h^{2} i + 168 g^{2} h^{2} + 40 g^{2} h i^{2} + 20 g^{2} h i + 20 g^{2} h + 40 g^{2} i^{3} + 124 g^{2} i^{2} + 18 g^{2} i + 34 g^{2} + 20 g h^{4} + 20 g h^{3} i + 86 g h^{3} + 45 g h^{2} i + 87 g h^{2} + 20 g h i^{2} + 10 g h i + 2 g h + 20 g i^{3} + 66 g i^{2} + 4 h^{4} + 4 h^{3} i + 18 h^{3} + 9 h^{2} i + 18 h^{2} + 4 h i^{2} + 2 h i + 4 i^{3} + 14 i^{2}$
status: 0
$$ 0 \le 4 g^{5} h^{4}+4 g^{5} h^{3} i+14 g^{5} h^{3}+9 g^{5} h^{2} i+15 g^{5} h^{2}+4 g^{5} h i^{2}+2 g^{5} h i+6 g^{5} h+4 g^{5} i^{3}+10 g^{5} i^{2}+8 g^{5} i+10 g^{5}+20 g^{4} h^{4}+20 g^{4} h^{3} i+74 g^{4} h^{3}+45 g^{4} h^{2} i+78 g^{4} h^{2}+20 g^{4} h i^{2}+10 g^{4} h i+24 g^{4} h+20 g^{4} i^{3}+54 g^{4} i^{2}+30 g^{4} i+40 g^{4}+40 g^{3} h^{4}+40 g^{3} h^{3} i+156 g^{3} h^{3}+90 g^{3} h^{2} i+162 g^{3} h^{2}+40 g^{3} h i^{2}+20 g^{3} h i+36 g^{3} h+40 g^{3} i^{3}+116 g^{3} i^{2}+40 g^{3} i+60 g^{3}+40 g^{2} h^{4}+40 g^{2} h^{3} i+164 g^{2} h^{3}+90 g^{2} h^{2} i+168 g^{2} h^{2}+40 g^{2} h i^{2}+20 g^{2} h i+20 g^{2} h+40 g^{2} i^{3}+124 g^{2} i^{2}+18 g^{2} i+34 g^{2}+20 g h^{4}+20 g h^{3} i+86 g h^{3}+45 g h^{2} i+87 g h^{2}+20 g h i^{2}+10 g h i+2 g h+20 g i^{3}+66 g i^{2}+4 h^{4}+4 h^{3} i+18 h^{3}+9 h^{2} i+18 h^{2}+4 h i^{2}+2 h i+4 i^{3}+14 i^{2} $$
The sum of all inequalities gives us a proof of the inequality.
_______________________
Substitute $a\to 1+j,b\to 1/(1+k),c\to 1+l$
Numerator after substitutions: $4 j^{5} k^{4} + 16 j^{5} k^{3} + 24 j^{5} k^{2} + 16 j^{5} k + 4 j^{5} + 4 j^{4} k^{4} l + 18 j^{4} k^{4} + 16 j^{4} k^{3} l + 76 j^{4} k^{3} + 24 j^{4} k^{2} l + 120 j^{4} k^{2} + 16 j^{4} k l + 84 j^{4} k + 4 j^{4} l + 22 j^{4} + 14 j^{3} k^{4} l + 30 j^{3} k^{4} + 56 j^{3} k^{3} l + 132 j^{3} k^{3} + 84 j^{3} k^{2} l + 216 j^{3} k^{2} + 56 j^{3} k l + 156 j^{3} k + 14 j^{3} l + 42 j^{3} + 18 j^{2} k^{4} l + 22 j^{2} k^{4} + 72 j^{2} k^{3} l + 100 j^{2} k^{3} + 108 j^{2} k^{2} l + 168 j^{2} k^{2} + 72 j^{2} k l + 124 j^{2} k + 18 j^{2} l + 34 j^{2} + 4 j k^{4} l^{2} + j k^{4} + 16 j k^{3} l^{2} + 8 j k^{3} + 24 j k^{2} l^{2} + 9 j k^{2} + 16 j k l^{2} + 2 j k + 4 j l^{2} + 4 k^{4} l^{3} + 10 k^{4} l^{2} + 3 k^{4} l + 4 k^{4} + 16 k^{3} l^{3} + 44 k^{3} l^{2} + 8 k^{3} l + 18 k^{3} + 24 k^{2} l^{3} + 72 k^{2} l^{2} + 3 k^{2} l + 18 k^{2} + 16 k l^{3} + 52 k l^{2} - 2 k l + 4 l^{3} + 14 l^{2}$
status: 0
From weighted AM-GM inequality:
$$2 k l \le k^{2}+l^{2}$$
$$ 0 \le 4 j^{5} k^{4}+16 j^{5} k^{3}+24 j^{5} k^{2}+16 j^{5} k+4 j^{5}+4 j^{4} k^{4} l+18 j^{4} k^{4}+16 j^{4} k^{3} l+76 j^{4} k^{3}+24 j^{4} k^{2} l+120 j^{4} k^{2}+16 j^{4} k l+84 j^{4} k+4 j^{4} l+22 j^{4}+14 j^{3} k^{4} l+30 j^{3} k^{4}+56 j^{3} k^{3} l+132 j^{3} k^{3}+84 j^{3} k^{2} l+216 j^{3} k^{2}+56 j^{3} k l+156 j^{3} k+14 j^{3} l+42 j^{3}+18 j^{2} k^{4} l+22 j^{2} k^{4}+72 j^{2} k^{3} l+100 j^{2} k^{3}+108 j^{2} k^{2} l+168 j^{2} k^{2}+72 j^{2} k l+124 j^{2} k+18 j^{2} l+34 j^{2}+4 j k^{4} l^{2}+j k^{4}+16 j k^{3} l^{2}+8 j k^{3}+24 j k^{2} l^{2}+9 j k^{2}+16 j k l^{2}+2 j k+4 j l^{2}+4 k^{4} l^{3}+10 k^{4} l^{2}+3 k^{4} l+4 k^{4}+16 k^{3} l^{3}+44 k^{3} l^{2}+8 k^{3} l+18 k^{3}+24 k^{2} l^{3}+72 k^{2} l^{2}+3 k^{2} l+17 k^{2}+16 k l^{3}+52 k l^{2}+4 l^{3}+13 l^{2} $$
The sum of all inequalities gives us a proof of the inequality.
_______________________
Substitute $a\to 1/(1+m),b\to 1/(1+n),c\to 1+o$
Numerator after substitutions: $4 m^{5} n^{4} o^{3} + 6 m^{5} n^{4} o^{2} + 11 m^{5} n^{4} o + 9 m^{5} n^{4} + 16 m^{5} n^{3} o^{3} + 28 m^{5} n^{3} o^{2} + 40 m^{5} n^{3} o + 38 m^{5} n^{3} + 24 m^{5} n^{2} o^{3} + 48 m^{5} n^{2} o^{2} + 51 m^{5} n^{2} o + 57 m^{5} n^{2} + 16 m^{5} n o^{3} + 36 m^{5} n o^{2} + 30 m^{5} n o + 34 m^{5} n + 4 m^{5} o^{3} + 10 m^{5} o^{2} + 8 m^{5} o + 10 m^{5} + 20 m^{4} n^{4} o^{3} + 34 m^{4} n^{4} o^{2} + 45 m^{4} n^{4} o + 40 m^{4} n^{4} + 80 m^{4} n^{3} o^{3} + 156 m^{4} n^{3} o^{2} + 160 m^{4} n^{3} o + 170 m^{4} n^{3} + 120 m^{4} n^{2} o^{3} + 264 m^{4} n^{2} o^{2} + 195 m^{4} n^{2} o + 246 m^{4} n^{2} + 80 m^{4} n o^{3} + 196 m^{4} n o^{2} + 110 m^{4} n o + 136 m^{4} n + 20 m^{4} o^{3} + 54 m^{4} o^{2} + 30 m^{4} o + 40 m^{4} + 40 m^{3} n^{4} o^{3} + 76 m^{3} n^{4} o^{2} + 70 m^{3} n^{4} o + 70 m^{3} n^{4} + 160 m^{3} n^{3} o^{3} + 344 m^{3} n^{3} o^{2} + 240 m^{3} n^{3} o + 300 m^{3} n^{3} + 240 m^{3} n^{2} o^{3} + 576 m^{3} n^{2} o^{2} + 270 m^{3} n^{2} o + 414 m^{3} n^{2} + 160 m^{3} n o^{3} + 424 m^{3} n o^{2} + 140 m^{3} n o + 204 m^{3} n + 40 m^{3} o^{3} + 116 m^{3} o^{2} + 40 m^{3} o + 60 m^{3} + 40 m^{2} n^{4} o^{3} + 84 m^{2} n^{4} o^{2} + 48 m^{2} n^{4} o + 58 m^{2} n^{4} + 160 m^{2} n^{3} o^{3} + 376 m^{2} n^{3} o^{2} + 152 m^{2} n^{3} o + 248 m^{2} n^{3} + 240 m^{2} n^{2} o^{3} + 624 m^{2} n^{2} o^{2} + 138 m^{2} n^{2} o + 312 m^{2} n^{2} + 160 m^{2} n o^{3} + 456 m^{2} n o^{2} + 52 m^{2} n o + 116 m^{2} n + 40 m^{2} o^{3} + 124 m^{2} o^{2} + 18 m^{2} o + 34 m^{2} + 20 m n^{4} o^{3} + 46 m n^{4} o^{2} + 15 m n^{4} o + 19 m n^{4} + 80 m n^{3} o^{3} + 204 m n^{3} o^{2} + 40 m n^{3} o + 82 m n^{3} + 120 m n^{2} o^{3} + 336 m n^{2} o^{2} + 15 m n^{2} o + 81 m n^{2} + 80 m n o^{3} + 244 m n o^{2} - 10 m n o - 2 m n + 20 m o^{3} + 66 m o^{2} + 4 n^{4} o^{3} + 10 n^{4} o^{2} + 3 n^{4} o + 4 n^{4} + 16 n^{3} o^{3} + 44 n^{3} o^{2} + 8 n^{3} o + 18 n^{3} + 24 n^{2} o^{3} + 72 n^{2} o^{2} + 3 n^{2} o + 18 n^{2} + 16 n o^{3} + 52 n o^{2} - 2 n o + 4 o^{3} + 14 o^{2}$
status: 0
From weighted AM-GM inequality:
$$2 m n \le m^{2}+n^{2}$$
$$2 n o \le n^{2}+o^{2}$$
$$10 m n o \le 2 m^{2}+2 m n^{2} o+4 m o^{2}+2 n^{3}$$
$$ 0 \le 4 m^{5} n^{4} o^{3}+6 m^{5} n^{4} o^{2}+11 m^{5} n^{4} o+9 m^{5} n^{4}+16 m^{5} n^{3} o^{3}+28 m^{5} n^{3} o^{2}+40 m^{5} n^{3} o+38 m^{5} n^{3}+24 m^{5} n^{2} o^{3}+48 m^{5} n^{2} o^{2}+51 m^{5} n^{2} o+57 m^{5} n^{2}+16 m^{5} n o^{3}+36 m^{5} n o^{2}+30 m^{5} n o+34 m^{5} n+4 m^{5} o^{3}+10 m^{5} o^{2}+8 m^{5} o+10 m^{5}+20 m^{4} n^{4} o^{3}+34 m^{4} n^{4} o^{2}+45 m^{4} n^{4} o+40 m^{4} n^{4}+80 m^{4} n^{3} o^{3}+156 m^{4} n^{3} o^{2}+160 m^{4} n^{3} o+170 m^{4} n^{3}+120 m^{4} n^{2} o^{3}+264 m^{4} n^{2} o^{2}+195 m^{4} n^{2} o+246 m^{4} n^{2}+80 m^{4} n o^{3}+196 m^{4} n o^{2}+110 m^{4} n o+136 m^{4} n+20 m^{4} o^{3}+54 m^{4} o^{2}+30 m^{4} o+40 m^{4}+40 m^{3} n^{4} o^{3}+76 m^{3} n^{4} o^{2}+70 m^{3} n^{4} o+70 m^{3} n^{4}+160 m^{3} n^{3} o^{3}+344 m^{3} n^{3} o^{2}+240 m^{3} n^{3} o+300 m^{3} n^{3}+240 m^{3} n^{2} o^{3}+576 m^{3} n^{2} o^{2}+270 m^{3} n^{2} o+414 m^{3} n^{2}+160 m^{3} n o^{3}+424 m^{3} n o^{2}+140 m^{3} n o+204 m^{3} n+40 m^{3} o^{3}+116 m^{3} o^{2}+40 m^{3} o+60 m^{3}+40 m^{2} n^{4} o^{3}+84 m^{2} n^{4} o^{2}+48 m^{2} n^{4} o+58 m^{2} n^{4}+160 m^{2} n^{3} o^{3}+376 m^{2} n^{3} o^{2}+152 m^{2} n^{3} o+248 m^{2} n^{3}+240 m^{2} n^{2} o^{3}+624 m^{2} n^{2} o^{2}+138 m^{2} n^{2} o+312 m^{2} n^{2}+160 m^{2} n o^{3}+456 m^{2} n o^{2}+52 m^{2} n o+116 m^{2} n+40 m^{2} o^{3}+124 m^{2} o^{2}+18 m^{2} o+31 m^{2}+20 m n^{4} o^{3}+46 m n^{4} o^{2}+15 m n^{4} o+19 m n^{4}+80 m n^{3} o^{3}+204 m n^{3} o^{2}+40 m n^{3} o+82 m n^{3}+120 m n^{2} o^{3}+336 m n^{2} o^{2}+13 m n^{2} o+81 m n^{2}+80 m n o^{3}+244 m n o^{2}+20 m o^{3}+62 m o^{2}+4 n^{4} o^{3}+10 n^{4} o^{2}+3 n^{4} o+4 n^{4}+16 n^{3} o^{3}+44 n^{3} o^{2}+8 n^{3} o+16 n^{3}+24 n^{2} o^{3}+72 n^{2} o^{2}+3 n^{2} o+16 n^{2}+16 n o^{3}+52 n o^{2}+4 o^{3}+13 o^{2} $$
The sum of all inequalities gives us a proof of the inequality.
_______________________
Substitute $a\to 1+p,b\to 1+q,c\to 1/(1+r)$
Numerator after substitutions: $4 p^{5} r^{3} + 12 p^{5} r^{2} + 12 p^{5} r + 4 p^{5} + 4 p^{4} q r^{3} + 12 p^{4} q r^{2} + 12 p^{4} q r + 4 p^{4} q + 18 p^{4} r^{3} + 58 p^{4} r^{2} + 62 p^{4} r + 22 p^{4} + 12 p^{3} q r^{3} + 36 p^{3} q r^{2} + 36 p^{3} q r + 12 p^{3} q + 28 p^{3} r^{3} + 98 p^{3} r^{2} + 112 p^{3} r + 42 p^{3} + 12 p^{2} q r^{3} + 36 p^{2} q r^{2} + 36 p^{2} q r + 12 p^{2} q + 16 p^{2} r^{3} + 66 p^{2} r^{2} + 84 p^{2} r + 34 p^{2} + 4 p q^{3} r^{3} + 12 p q^{3} r^{2} + 12 p q^{3} r + 4 p q^{3} + 3 p q^{2} r^{3} + 9 p q^{2} r^{2} + 9 p q^{2} r + 3 p q^{2} - 2 p q r^{3} - 6 p q r^{2} - 6 p q r - 2 p q + 4 p r^{3} + 4 p r^{2} + 4 q^{4} r^{3} + 12 q^{4} r^{2} + 12 q^{4} r + 4 q^{4} + 14 q^{3} r^{3} + 46 q^{3} r^{2} + 50 q^{3} r + 18 q^{3} + 9 q^{2} r^{3} + 36 q^{2} r^{2} + 45 q^{2} r + 18 q^{2} + 2 q r^{3} - 2 q r + 10 r^{3} + 14 r^{2}$
status: 0
From weighted AM-GM inequality:
$$2 p q r^{3} \le p^{2} q r^{3}+q r^{3}$$
$$2 p q \le p^{2}+q^{2}$$
$$2 q r \le q^{2}+r^{2}$$
$$6 p q r \le 2 p^{3} q+2 q^{2} r+2 r^{2}$$
$$6 p q r^{2} \le 2 p q^{2} r^{3}+p q^{2}+3 p r^{2}$$
$$ 0 \le 4 p^{5} r^{3}+12 p^{5} r^{2}+12 p^{5} r+4 p^{5}+4 p^{4} q r^{3}+12 p^{4} q r^{2}+12 p^{4} q r+4 p^{4} q+18 p^{4} r^{3}+58 p^{4} r^{2}+62 p^{4} r+22 p^{4}+12 p^{3} q r^{3}+36 p^{3} q r^{2}+36 p^{3} q r+10 p^{3} q+28 p^{3} r^{3}+98 p^{3} r^{2}+112 p^{3} r+42 p^{3}+11 p^{2} q r^{3}+36 p^{2} q r^{2}+36 p^{2} q r+12 p^{2} q+16 p^{2} r^{3}+66 p^{2} r^{2}+84 p^{2} r+33 p^{2}+4 p q^{3} r^{3}+12 p q^{3} r^{2}+12 p q^{3} r+4 p q^{3}+p q^{2} r^{3}+9 p q^{2} r^{2}+9 p q^{2} r+2 p q^{2}+4 p r^{3}+p r^{2}+4 q^{4} r^{3}+12 q^{4} r^{2}+12 q^{4} r+4 q^{4}+14 q^{3} r^{3}+46 q^{3} r^{2}+50 q^{3} r+18 q^{3}+9 q^{2} r^{3}+36 q^{2} r^{2}+43 q^{2} r+16 q^{2}+q r^{3}+10 r^{3}+11 r^{2} $$
The sum of all inequalities gives us a proof of the inequality.
_______________________
Substitute $a\to 1/(1+s),b\to 1+t,c\to 1/(1+u)$
Numerator after substitutions: $4 s^{5} t^{4} u^{3} + 12 s^{5} t^{4} u^{2} + 12 s^{5} t^{4} u + 4 s^{5} t^{4} + 10 s^{5} t^{3} u^{3} + 34 s^{5} t^{3} u^{2} + 38 s^{5} t^{3} u + 14 s^{5} t^{3} + 6 s^{5} t^{2} u^{3} + 27 s^{5} t^{2} u^{2} + 36 s^{5} t^{2} u + 15 s^{5} t^{2} + 8 s^{5} t u^{3} + 18 s^{5} t u^{2} + 16 s^{5} t u + 6 s^{5} t + 8 s^{5} u^{3} + 24 s^{5} u^{2} + 22 s^{5} u + 10 s^{5} + 20 s^{4} t^{4} u^{3} + 60 s^{4} t^{4} u^{2} + 60 s^{4} t^{4} u + 20 s^{4} t^{4} + 54 s^{4} t^{3} u^{3} + 182 s^{4} t^{3} u^{2} + 202 s^{4} t^{3} u + 74 s^{4} t^{3} + 33 s^{4} t^{2} u^{3} + 144 s^{4} t^{2} u^{2} + 189 s^{4} t^{2} u + 78 s^{4} t^{2} + 34 s^{4} t u^{3} + 72 s^{4} t u^{2} + 62 s^{4} t u + 24 s^{4} t + 44 s^{4} u^{3} + 114 s^{4} u^{2} + 90 s^{4} u + 40 s^{4} + 40 s^{3} t^{4} u^{3} + 120 s^{3} t^{4} u^{2} + 120 s^{3} t^{4} u + 40 s^{3} t^{4} + 116 s^{3} t^{3} u^{3} + 388 s^{3} t^{3} u^{2} + 428 s^{3} t^{3} u + 156 s^{3} t^{3} + 72 s^{3} t^{2} u^{3} + 306 s^{3} t^{2} u^{2} + 396 s^{3} t^{2} u + 162 s^{3} t^{2} + 56 s^{3} t u^{3} + 108 s^{3} t u^{2} + 88 s^{3} t u + 36 s^{3} t + 96 s^{3} u^{3} + 216 s^{3} u^{2} + 140 s^{3} u + 60 s^{3} + 40 s^{2} t^{4} u^{3} + 120 s^{2} t^{4} u^{2} + 120 s^{2} t^{4} u + 40 s^{2} t^{4} + 124 s^{2} t^{3} u^{3} + 412 s^{2} t^{3} u^{2} + 452 s^{2} t^{3} u + 164 s^{2} t^{3} + 78 s^{2} t^{2} u^{3} + 324 s^{2} t^{2} u^{2} + 414 s^{2} t^{2} u + 168 s^{2} t^{2} + 40 s^{2} t u^{3} + 60 s^{2} t u^{2} + 40 s^{2} t u + 20 s^{2} t + 100 s^{2} u^{3} + 190 s^{2} u^{2} + 84 s^{2} u + 34 s^{2} + 20 s t^{4} u^{3} + 60 s t^{4} u^{2} + 60 s t^{4} u + 20 s t^{4} + 66 s t^{3} u^{3} + 218 s t^{3} u^{2} + 238 s t^{3} u + 86 s t^{3} + 42 s t^{2} u^{3} + 171 s t^{2} u^{2} + 216 s t^{2} u + 87 s t^{2} + 12 s t u^{3} + 6 s t u^{2} - 4 s t u + 2 s t + 46 s u^{3} + 66 s u^{2} + 4 t^{4} u^{3} + 12 t^{4} u^{2} + 12 t^{4} u + 4 t^{4} + 14 t^{3} u^{3} + 46 t^{3} u^{2} + 50 t^{3} u + 18 t^{3} + 9 t^{2} u^{3} + 36 t^{2} u^{2} + 45 t^{2} u + 18 t^{2} + 2 t u^{3} - 2 t u + 10 u^{3} + 14 u^{2}$
status: 0
From weighted AM-GM inequality:
$$4 s t u \le s^{2} u^{2}+2 s t^{2}+u^{2}$$
$$2 t u \le t^{2}+u^{2}$$
$$ 0 \le 4 s^{5} t^{4} u^{3}+12 s^{5} t^{4} u^{2}+12 s^{5} t^{4} u+4 s^{5} t^{4}+10 s^{5} t^{3} u^{3}+34 s^{5} t^{3} u^{2}+38 s^{5} t^{3} u+14 s^{5} t^{3}+6 s^{5} t^{2} u^{3}+27 s^{5} t^{2} u^{2}+36 s^{5} t^{2} u+15 s^{5} t^{2}+8 s^{5} t u^{3}+18 s^{5} t u^{2}+16 s^{5} t u+6 s^{5} t+8 s^{5} u^{3}+24 s^{5} u^{2}+22 s^{5} u+10 s^{5}+20 s^{4} t^{4} u^{3}+60 s^{4} t^{4} u^{2}+60 s^{4} t^{4} u+20 s^{4} t^{4}+54 s^{4} t^{3} u^{3}+182 s^{4} t^{3} u^{2}+202 s^{4} t^{3} u+74 s^{4} t^{3}+33 s^{4} t^{2} u^{3}+144 s^{4} t^{2} u^{2}+189 s^{4} t^{2} u+78 s^{4} t^{2}+34 s^{4} t u^{3}+72 s^{4} t u^{2}+62 s^{4} t u+24 s^{4} t+44 s^{4} u^{3}+114 s^{4} u^{2}+90 s^{4} u+40 s^{4}+40 s^{3} t^{4} u^{3}+120 s^{3} t^{4} u^{2}+120 s^{3} t^{4} u+40 s^{3} t^{4}+116 s^{3} t^{3} u^{3}+388 s^{3} t^{3} u^{2}+428 s^{3} t^{3} u+156 s^{3} t^{3}+72 s^{3} t^{2} u^{3}+306 s^{3} t^{2} u^{2}+396 s^{3} t^{2} u+162 s^{3} t^{2}+56 s^{3} t u^{3}+108 s^{3} t u^{2}+88 s^{3} t u+36 s^{3} t+96 s^{3} u^{3}+216 s^{3} u^{2}+140 s^{3} u+60 s^{3}+40 s^{2} t^{4} u^{3}+120 s^{2} t^{4} u^{2}+120 s^{2} t^{4} u+40 s^{2} t^{4}+124 s^{2} t^{3} u^{3}+412 s^{2} t^{3} u^{2}+452 s^{2} t^{3} u+164 s^{2} t^{3}+78 s^{2} t^{2} u^{3}+324 s^{2} t^{2} u^{2}+414 s^{2} t^{2} u+168 s^{2} t^{2}+40 s^{2} t u^{3}+60 s^{2} t u^{2}+40 s^{2} t u+20 s^{2} t+100 s^{2} u^{3}+189 s^{2} u^{2}+84 s^{2} u+34 s^{2}+20 s t^{4} u^{3}+60 s t^{4} u^{2}+60 s t^{4} u+20 s t^{4}+66 s t^{3} u^{3}+218 s t^{3} u^{2}+238 s t^{3} u+86 s t^{3}+42 s t^{2} u^{3}+171 s t^{2} u^{2}+216 s t^{2} u+85 s t^{2}+12 s t u^{3}+6 s t u^{2}+2 s t+46 s u^{3}+66 s u^{2}+4 t^{4} u^{3}+12 t^{4} u^{2}+12 t^{4} u+4 t^{4}+14 t^{3} u^{3}+46 t^{3} u^{2}+50 t^{3} u+18 t^{3}+9 t^{2} u^{3}+36 t^{2} u^{2}+45 t^{2} u+17 t^{2}+2 t u^{3}+10 u^{3}+12 u^{2} $$
The sum of all inequalities gives us a proof of the inequality.
_______________________
Substitute $a\to 1+v,b\to 1/(1+w),c\to 1/(1+x)$
Numerator after substitutions: $4 v^{5} w^{4} x^{3} + 12 v^{5} w^{4} x^{2} + 12 v^{5} w^{4} x + 4 v^{5} w^{4} + 16 v^{5} w^{3} x^{3} + 48 v^{5} w^{3} x^{2} + 48 v^{5} w^{3} x + 16 v^{5} w^{3} + 24 v^{5} w^{2} x^{3} + 72 v^{5} w^{2} x^{2} + 72 v^{5} w^{2} x + 24 v^{5} w^{2} + 16 v^{5} w x^{3} + 48 v^{5} w x^{2} + 48 v^{5} w x + 16 v^{5} w + 4 v^{5} x^{3} + 12 v^{5} x^{2} + 12 v^{5} x + 4 v^{5} + 14 v^{4} w^{4} x^{3} + 46 v^{4} w^{4} x^{2} + 50 v^{4} w^{4} x + 18 v^{4} w^{4} + 60 v^{4} w^{3} x^{3} + 196 v^{4} w^{3} x^{2} + 212 v^{4} w^{3} x + 76 v^{4} w^{3} + 96 v^{4} w^{2} x^{3} + 312 v^{4} w^{2} x^{2} + 336 v^{4} w^{2} x + 120 v^{4} w^{2} + 68 v^{4} w x^{3} + 220 v^{4} w x^{2} + 236 v^{4} w x + 84 v^{4} w + 18 v^{4} x^{3} + 58 v^{4} x^{2} + 62 v^{4} x + 22 v^{4} + 16 v^{3} w^{4} x^{3} + 62 v^{3} w^{4} x^{2} + 76 v^{3} w^{4} x + 30 v^{3} w^{4} + 76 v^{3} w^{3} x^{3} + 284 v^{3} w^{3} x^{2} + 340 v^{3} w^{3} x + 132 v^{3} w^{3} + 132 v^{3} w^{2} x^{3} + 480 v^{3} w^{2} x^{2} + 564 v^{3} w^{2} x + 216 v^{3} w^{2} + 100 v^{3} w x^{3} + 356 v^{3} w x^{2} + 412 v^{3} w x + 156 v^{3} w + 28 v^{3} x^{3} + 98 v^{3} x^{2} + 112 v^{3} x + 42 v^{3} + 4 v^{2} w^{4} x^{3} + 30 v^{2} w^{4} x^{2} + 48 v^{2} w^{4} x + 22 v^{2} w^{4} + 28 v^{2} w^{3} x^{3} + 156 v^{2} w^{3} x^{2} + 228 v^{2} w^{3} x + 100 v^{2} w^{3} + 60 v^{2} w^{2} x^{3} + 288 v^{2} w^{2} x^{2} + 396 v^{2} w^{2} x + 168 v^{2} w^{2} + 52 v^{2} w x^{3} + 228 v^{2} w x^{2} + 300 v^{2} w x + 124 v^{2} w + 16 v^{2} x^{3} + 66 v^{2} x^{2} + 84 v^{2} x + 34 v^{2} + 5 v w^{4} x^{3} + 7 v w^{4} x^{2} + 3 v w^{4} x + v w^{4} + 24 v w^{3} x^{3} + 40 v w^{3} x^{2} + 24 v w^{3} x + 8 v w^{3} + 33 v w^{2} x^{3} + 51 v w^{2} x^{2} + 27 v w^{2} x + 9 v w^{2} + 18 v w x^{3} + 22 v w x^{2} + 6 v w x + 2 v w + 4 v x^{3} + 4 v x^{2} + 7 w^{4} x^{3} + 16 w^{4} x^{2} + 9 w^{4} x + 4 w^{4} + 38 w^{3} x^{3} + 82 w^{3} x^{2} + 46 w^{3} x + 18 w^{3} + 63 w^{2} x^{3} + 120 w^{2} x^{2} + 51 w^{2} x + 18 w^{2} + 38 w x^{3} + 56 w x^{2} + 2 w x + 10 x^{3} + 14 x^{2}$
status: 0
$$ 0 \le 4 v^{5} w^{4} x^{3}+16 v^{5} w^{3} x^{3}+24 v^{5} w^{2} x^{3}+16 v^{5} w x^{3}+4 v^{5} x^{3}+14 v^{4} w^{4} x^{3}+60 v^{4} w^{3} x^{3}+96 v^{4} w^{2} x^{3}+68 v^{4} w x^{3}+18 v^{4} x^{3}+16 v^{3} w^{4} x^{3}+76 v^{3} w^{3} x^{3}+132 v^{3} w^{2} x^{3}+100 v^{3} w x^{3}+28 v^{3} x^{3}+4 v^{2} w^{4} x^{3}+28 v^{2} w^{3} x^{3}+60 v^{2} w^{2} x^{3}+52 v^{2} w x^{3}+16 v^{2} x^{3}+5 v w^{4} x^{3}+24 v w^{3} x^{3}+33 v w^{2} x^{3}+18 v w x^{3}+4 v x^{3}+7 w^{4} x^{3}+38 w^{3} x^{3}+63 w^{2} x^{3}+38 w x^{3}+10 x^{3}+12 v^{5} w^{4} x^{2}+48 v^{5} w^{3} x^{2}+72 v^{5} w^{2} x^{2}+48 v^{5} w x^{2}+12 v^{5} x^{2}+46 v^{4} w^{4} x^{2}+196 v^{4} w^{3} x^{2}+312 v^{4} w^{2} x^{2}+220 v^{4} w x^{2}+58 v^{4} x^{2}+62 v^{3} w^{4} x^{2}+284 v^{3} w^{3} x^{2}+480 v^{3} w^{2} x^{2}+356 v^{3} w x^{2}+98 v^{3} x^{2}+30 v^{2} w^{4} x^{2}+156 v^{2} w^{3} x^{2}+288 v^{2} w^{2} x^{2}+228 v^{2} w x^{2}+66 v^{2} x^{2}+7 v w^{4} x^{2}+40 v w^{3} x^{2}+51 v w^{2} x^{2}+22 v w x^{2}+4 v x^{2}+16 w^{4} x^{2}+82 w^{3} x^{2}+120 w^{2} x^{2}+56 w x^{2}+14 x^{2}+12 v^{5} w^{4} x+48 v^{5} w^{3} x+72 v^{5} w^{2} x+48 v^{5} w x+12 v^{5} x+50 v^{4} w^{4} x+212 v^{4} w^{3} x+336 v^{4} w^{2} x+236 v^{4} w x+62 v^{4} x+76 v^{3} w^{4} x+340 v^{3} w^{3} x+564 v^{3} w^{2} x+412 v^{3} w x+112 v^{3} x+48 v^{2} w^{4} x+228 v^{2} w^{3} x+396 v^{2} w^{2} x+300 v^{2} w x+84 v^{2} x+3 v w^{4} x+24 v w^{3} x+27 v w^{2} x+6 v w x+9 w^{4} x+46 w^{3} x+51 w^{2} x+2 w x+4 v^{5} w^{4}+16 v^{5} w^{3}+24 v^{5} w^{2}+16 v^{5} w+4 v^{5}+18 v^{4} w^{4}+76 v^{4} w^{3}+120 v^{4} w^{2}+84 v^{4} w+22 v^{4}+30 v^{3} w^{4}+132 v^{3} w^{3}+216 v^{3} w^{2}+156 v^{3} w+42 v^{3}+22 v^{2} w^{4}+100 v^{2} w^{3}+168 v^{2} w^{2}+124 v^{2} w+34 v^{2}+v w^{4}+8 v w^{3}+9 v w^{2}+2 v w+4 w^{4}+18 w^{3}+18 w^{2} $$
The sum of all inequalities gives us a proof of the inequality.
_______________________
Substitute $a\to 1/(1+y),b\to 1/(1+z),c\to 1/(1+a_{1})$
Numerator after substitutions: $10 a_{1}^{3} y^{5} z^{3} + 30 a_{1}^{3} y^{5} z^{2} + 24 a_{1}^{3} y^{5} z + 8 a_{1}^{3} y^{5} + 9 a_{1}^{3} y^{4} z^{4} + 86 a_{1}^{3} y^{4} z^{3} + 195 a_{1}^{3} y^{4} z^{2} + 142 a_{1}^{3} y^{4} z + 44 a_{1}^{3} y^{4} + 36 a_{1}^{3} y^{3} z^{4} + 244 a_{1}^{3} y^{3} z^{3} + 480 a_{1}^{3} y^{3} z^{2} + 328 a_{1}^{3} y^{3} z + 96 a_{1}^{3} y^{3} + 54 a_{1}^{3} y^{2} z^{4} + 312 a_{1}^{3} y^{2} z^{3} + 558 a_{1}^{3} y^{2} z^{2} + 360 a_{1}^{3} y^{2} z + 100 a_{1}^{3} y^{2} + 30 a_{1}^{3} y z^{4} + 166 a_{1}^{3} y z^{3} + 282 a_{1}^{3} y z^{2} + 172 a_{1}^{3} y z + 46 a_{1}^{3} y + 7 a_{1}^{3} z^{4} + 38 a_{1}^{3} z^{3} + 63 a_{1}^{3} z^{2} + 38 a_{1}^{3} z + 10 a_{1}^{3} + 11 a_{1}^{2} y^{5} z^{4} + 62 a_{1}^{2} y^{5} z^{3} + 117 a_{1}^{2} y^{5} z^{2} + 78 a_{1}^{2} y^{5} z + 24 a_{1}^{2} y^{5} + 64 a_{1}^{2} y^{4} z^{4} + 346 a_{1}^{2} y^{4} z^{3} + 612 a_{1}^{2} y^{4} z^{2} + 384 a_{1}^{2} y^{4} z + 114 a_{1}^{2} y^{4} + 146 a_{1}^{2} y^{3} z^{4} + 764 a_{1}^{2} y^{3} z^{3} + 1278 a_{1}^{2} y^{3} z^{2} + 756 a_{1}^{2} y^{3} z + 216 a_{1}^{2} y^{3} + 162 a_{1}^{2} y^{2} z^{4} + 816 a_{1}^{2} y^{2} z^{3} + 1284 a_{1}^{2} y^{2} z^{2} + 700 a_{1}^{2} y^{2} z + 190 a_{1}^{2} y^{2} + 73 a_{1}^{2} y z^{4} + 370 a_{1}^{2} y z^{3} + 549 a_{1}^{2} y z^{2} + 258 a_{1}^{2} y z + 66 a_{1}^{2} y + 16 a_{1}^{2} z^{4} + 82 a_{1}^{2} z^{3} + 120 a_{1}^{2} z^{2} + 56 a_{1}^{2} z + 14 a_{1}^{2} + 16 a_{1} y^{5} z^{4} + 74 a_{1} y^{5} z^{3} + 120 a_{1} y^{5} z^{2} + 72 a_{1} y^{5} z + 22 a_{1} y^{5} + 75 a_{1} y^{4} z^{4} + 350 a_{1} y^{4} z^{3} + 543 a_{1} y^{4} z^{2} + 298 a_{1} y^{4} z + 90 a_{1} y^{4} + 140 a_{1} y^{3} z^{4} + 660 a_{1} y^{3} z^{3} + 972 a_{1} y^{3} z^{2} + 472 a_{1} y^{3} z + 140 a_{1} y^{3} + 126 a_{1} y^{2} z^{4} + 592 a_{1} y^{2} z^{3} + 798 a_{1} y^{2} z^{2} + 296 a_{1} y^{2} z + 84 a_{1} y^{2} + 42 a_{1} y z^{4} + 206 a_{1} y z^{3} + 228 a_{1} y z^{2} + 4 a_{1} y z + 9 a_{1} z^{4} + 46 a_{1} z^{3} + 51 a_{1} z^{2} + 2 a_{1} z + 9 y^{5} z^{4} + 38 y^{5} z^{3} + 57 y^{5} z^{2} + 34 y^{5} z + 10 y^{5} + 40 y^{4} z^{4} + 170 y^{4} z^{3} + 246 y^{4} z^{2} + 136 y^{4} z + 40 y^{4} + 70 y^{3} z^{4} + 300 y^{3} z^{3} + 414 y^{3} z^{2} + 204 y^{3} z + 60 y^{3} + 58 y^{2} z^{4} + 248 y^{2} z^{3} + 312 y^{2} z^{2} + 116 y^{2} z + 34 y^{2} + 19 y z^{4} + 82 y z^{3} + 81 y z^{2} - 2 y z + 4 z^{4} + 18 z^{3} + 18 z^{2}$
status: 0
From weighted AM-GM inequality:
$$2 y z \le y^{2}+z^{2}$$
$$ 0 \le 11 a_{1}^{2} y^{5} z^{4}+16 a_{1} y^{5} z^{4}+9 y^{5} z^{4}+10 a_{1}^{3} y^{5} z^{3}+62 a_{1}^{2} y^{5} z^{3}+74 a_{1} y^{5} z^{3}+38 y^{5} z^{3}+30 a_{1}^{3} y^{5} z^{2}+117 a_{1}^{2} y^{5} z^{2}+120 a_{1} y^{5} z^{2}+57 y^{5} z^{2}+24 a_{1}^{3} y^{5} z+78 a_{1}^{2} y^{5} z+72 a_{1} y^{5} z+34 y^{5} z+8 a_{1}^{3} y^{5}+24 a_{1}^{2} y^{5}+22 a_{1} y^{5}+10 y^{5}+9 a_{1}^{3} y^{4} z^{4}+64 a_{1}^{2} y^{4} z^{4}+75 a_{1} y^{4} z^{4}+40 y^{4} z^{4}+86 a_{1}^{3} y^{4} z^{3}+346 a_{1}^{2} y^{4} z^{3}+350 a_{1} y^{4} z^{3}+170 y^{4} z^{3}+195 a_{1}^{3} y^{4} z^{2}+612 a_{1}^{2} y^{4} z^{2}+543 a_{1} y^{4} z^{2}+246 y^{4} z^{2}+142 a_{1}^{3} y^{4} z+384 a_{1}^{2} y^{4} z+298 a_{1} y^{4} z+136 y^{4} z+44 a_{1}^{3} y^{4}+114 a_{1}^{2} y^{4}+90 a_{1} y^{4}+40 y^{4}+36 a_{1}^{3} y^{3} z^{4}+146 a_{1}^{2} y^{3} z^{4}+140 a_{1} y^{3} z^{4}+70 y^{3} z^{4}+244 a_{1}^{3} y^{3} z^{3}+764 a_{1}^{2} y^{3} z^{3}+660 a_{1} y^{3} z^{3}+300 y^{3} z^{3}+480 a_{1}^{3} y^{3} z^{2}+1278 a_{1}^{2} y^{3} z^{2}+972 a_{1} y^{3} z^{2}+414 y^{3} z^{2}+328 a_{1}^{3} y^{3} z+756 a_{1}^{2} y^{3} z+472 a_{1} y^{3} z+204 y^{3} z+96 a_{1}^{3} y^{3}+216 a_{1}^{2} y^{3}+140 a_{1} y^{3}+60 y^{3}+54 a_{1}^{3} y^{2} z^{4}+162 a_{1}^{2} y^{2} z^{4}+126 a_{1} y^{2} z^{4}+58 y^{2} z^{4}+312 a_{1}^{3} y^{2} z^{3}+816 a_{1}^{2} y^{2} z^{3}+592 a_{1} y^{2} z^{3}+248 y^{2} z^{3}+558 a_{1}^{3} y^{2} z^{2}+1284 a_{1}^{2} y^{2} z^{2}+798 a_{1} y^{2} z^{2}+312 y^{2} z^{2}+360 a_{1}^{3} y^{2} z+700 a_{1}^{2} y^{2} z+296 a_{1} y^{2} z+116 y^{2} z+100 a_{1}^{3} y^{2}+190 a_{1}^{2} y^{2}+84 a_{1} y^{2}+33 y^{2}+30 a_{1}^{3} y z^{4}+73 a_{1}^{2} y z^{4}+42 a_{1} y z^{4}+19 y z^{4}+166 a_{1}^{3} y z^{3}+370 a_{1}^{2} y z^{3}+206 a_{1} y z^{3}+82 y z^{3}+282 a_{1}^{3} y z^{2}+549 a_{1}^{2} y z^{2}+228 a_{1} y z^{2}+81 y z^{2}+172 a_{1}^{3} y z+258 a_{1}^{2} y z+4 a_{1} y z+46 a_{1}^{3} y+66 a_{1}^{2} y+7 a_{1}^{3} z^{4}+16 a_{1}^{2} z^{4}+9 a_{1} z^{4}+4 z^{4}+38 a_{1}^{3} z^{3}+82 a_{1}^{2} z^{3}+46 a_{1} z^{3}+18 z^{3}+63 a_{1}^{3} z^{2}+120 a_{1}^{2} z^{2}+51 a_{1} z^{2}+17 z^{2}+38 a_{1}^{3} z+56 a_{1}^{2} z+2 a_{1} z+10 a_{1}^{3}+14 a_{1}^{2} $$
The sum of all inequalities gives us a proof of the inequality.
Counter({0: 8})
Now let's take a look at a slightly different kind of problem.
#### Problem
Let $f:R^3\to R$ be a convex function. Prove that
$$f(1,2,3)+f(2,3,1)+f(3,1,2)\le f(4,3,-1)+f(3,-1,4)+f(-1,4,3).$$
To create a proof, we will use the `provef` function. It assumes that $f$ is convex and nonnegative and then tries to find a proof. However, if the last inequality is $0\le 0$, then the proof works for any convex function.
```python
provef('(-f(1,2,3)-f(2,3,1)-f(3,1,2)+f(4,3,-1)+f(3,-1,4)+f(-1,4,3))*21')
```
numerator: $21 f{\left(-1,4,3 \right)} - 21 f{\left(1,2,3 \right)} - 21 f{\left(2,3,1 \right)} + 21 f{\left(3,-1,4 \right)} - 21 f{\left(3,1,2 \right)} + 21 f{\left(4,3,-1 \right)}$
denominator: $1$
status: 0
From Jensen inequality:
$$21f(1, 2, 3) \le 11f(-1, 4, 3)+8f(3, -1, 4)+2f(4, 3, -1)$$
$$21f(2, 3, 1) \le 8f(-1, 4, 3)+2f(3, -1, 4)+11f(4, 3, -1)$$
$$21f(3, 1, 2) \le 2f(-1, 4, 3)+11f(3, -1, 4)+8f(4, 3, -1)$$
$$ 0 \le 0 $$
The sum of all inequalities gives us a proof of the inequality.
0
Let's try to solve problem 6 from the finals of the LXIII Polish Mathematical Olympiad. It was one of the hardest inequalities in the history of this contest, solved by only 3 finalists.
#### Problem
Prove the inequality
$$\left(\frac{a - b}{c}\right)^2 + \left(\frac{b - c}{a}\right)^2 + \left(\frac{c - a}{b}\right)^2\ge 2 \sqrt{2} \left(\frac{a - b}{c} + \frac{b - c}{a}+ \frac{c-a}{b}\right)$$
for any positive numbers $a,b,c$.
The first observation is that the formula is cyclic, so without loss of generality we may assume that $a\ge b,c$. We can go a step further and divide it into two cases: $a\ge b\ge c$ and $a\ge c\ge b$.
```python
shiro.display=lambda x:None #turn off printing of proofs
newproof()
formula=cyclize('((a-b)/c)^2-2*sqrt(2)*(a-b)/c')
```
```python
formula1=makesubs(formula,'[b,oo],[c,oo]',variables='a,b') #a>=b>=c
prove(formula1)
```
2
```python
formula2=makesubs(formula,'[c,oo],[b,oo]',variables='a,c') #a>=c>=b
prove(formula2)
```
0
So the case $a\ge c\ge b$ is done, but $a\ge b\ge c$ is not. But maybe we can adjust the values.
```python
values=findvalues(formula1)
values
```
Optimization terminated successfully.
Current function value: 1.000000
Iterations: 137
Function evaluations: 249
(1.7908873553542452e-10, 2.5326984818340415e-10, 7.129450063690368)
The first and second values are approximately equal to 0, so we can try to replace 0 with 1.
```python
prove(formula1,values='1,1,7')
```
2
The key observation is that `formula1` is homogeneous, so we can rescale the values.
```python
newvalues=(1,values[1]/values[0],values[2]/values[0])
newvalues
```
(1, 1.4142142855953455, 39809595184.05965)
```python
newvalues[1]**2
```
2.000002045581953
Now the third value is very big. Technically we could use it, but it would run into an overflow error, so we will use 1 instead. The second value is very close to $\sqrt{2}$, so that will be our next try.
```python
prove(formula1,values='1,sqrt(2),1')
```
0
Putting all the code together, we get the following proof.
```python
newproof()
shiro.display=lambda x:display(Latex(x)) #turn on printing proofs
formula=cyclize('((a-b)/c)^2-2*sqrt(2)*(a-b)/c')
display(Latex('Case $a\ge c\ge b$'))
formula1=makesubs(formula,'[c,oo],[b,oo]',variables='a,c,b')
prove(formula1)
display(Latex('Case $a\ge b\ge c$'))
formula2=makesubs(formula,'[b,oo],[c,oo]')
prove(formula2,values='1,2**(1/2),1')
```
Case $a\ge c\ge b$
Substitute $a\to c + d$
Substitute $c\to b + e$
numerator: $2 b^{4} d^{2} + 2 b^{4} d e + 2 b^{4} e^{2} + 4 b^{3} d^{3} + 2 \sqrt{2} b^{3} d^{2} e + 10 b^{3} d^{2} e + 2 \sqrt{2} b^{3} d e^{2} + 6 b^{3} d e^{2} + 4 b^{3} e^{3} + 2 b^{2} d^{4} + 2 \sqrt{2} b^{2} d^{3} e + 10 b^{2} d^{3} e + 6 \sqrt{2} b^{2} d^{2} e^{2} + 12 b^{2} d^{2} e^{2} + 4 b^{2} d e^{3} + 4 \sqrt{2} b^{2} d e^{3} + 2 b^{2} e^{4} + 2 b d^{4} e + 2 \sqrt{2} b d^{3} e^{2} + 6 b d^{3} e^{2} + 4 b d^{2} e^{3} + 4 \sqrt{2} b d^{2} e^{3} + 2 \sqrt{2} b d e^{4} + d^{4} e^{2} + 2 d^{3} e^{3} + d^{2} e^{4}$
denominator: $b^{6} + 2 b^{5} d + 4 b^{5} e + b^{4} d^{2} + 6 b^{4} d e + 6 b^{4} e^{2} + 2 b^{3} d^{2} e + 6 b^{3} d e^{2} + 4 b^{3} e^{3} + b^{2} d^{2} e^{2} + 2 b^{2} d e^{3} + b^{2} e^{4}$
status: 0
$$ 0 \le 2 b^{4} d^{2}+2 b^{4} d e+2 b^{4} e^{2}+4 b^{3} d^{3}+2 \sqrt{2} b^{3} d^{2} e+10 b^{3} d^{2} e+2 \sqrt{2} b^{3} d e^{2}+6 b^{3} d e^{2}+4 b^{3} e^{3}+2 b^{2} d^{4}+2 \sqrt{2} b^{2} d^{3} e+10 b^{2} d^{3} e+6 \sqrt{2} b^{2} d^{2} e^{2}+12 b^{2} d^{2} e^{2}+4 \sqrt{2} b^{2} d e^{3}+4 b^{2} d e^{3}+2 b^{2} e^{4}+2 b d^{4} e+2 \sqrt{2} b d^{3} e^{2}+6 b d^{3} e^{2}+4 \sqrt{2} b d^{2} e^{3}+4 b d^{2} e^{3}+2 \sqrt{2} b d e^{4}+d^{4} e^{2}+2 d^{3} e^{3}+d^{2} e^{4} $$
The sum of all inequalities gives us a proof of the inequality.
Case $a\ge b\ge c$
Substitute $a\to b + f$
Substitute $b\to c + g$
Substitute $f\to \sqrt{2} h$
numerator: $2 c^{4} g^{2} + 2 \sqrt{2} c^{4} g h + 4 c^{4} h^{2} + 4 c^{3} g^{3} - 4 c^{3} g^{2} h + 6 \sqrt{2} c^{3} g^{2} h - 4 \sqrt{2} c^{3} g h^{2} + 20 c^{3} g h^{2} + 8 \sqrt{2} c^{3} h^{3} + 2 c^{2} g^{4} - 8 c^{2} g^{3} h + 4 \sqrt{2} c^{2} g^{3} h - 12 \sqrt{2} c^{2} g^{2} h^{2} + 24 c^{2} g^{2} h^{2} - 8 c^{2} g h^{3} + 20 \sqrt{2} c^{2} g h^{3} + 8 c^{2} h^{4} - 4 c g^{4} h - 8 \sqrt{2} c g^{3} h^{2} + 8 c g^{3} h^{2} - 8 c g^{2} h^{3} + 12 \sqrt{2} c g^{2} h^{3} + 8 c g h^{4} + 2 g^{4} h^{2} + 4 \sqrt{2} g^{3} h^{3} + 4 g^{2} h^{4}$
denominator: $c^{6} + 4 c^{5} g + 2 \sqrt{2} c^{5} h + 6 c^{4} g^{2} + 6 \sqrt{2} c^{4} g h + 2 c^{4} h^{2} + 4 c^{3} g^{3} + 6 \sqrt{2} c^{3} g^{2} h + 4 c^{3} g h^{2} + c^{2} g^{4} + 2 \sqrt{2} c^{2} g^{3} h + 2 c^{2} g^{2} h^{2}$
status: 0
From weighted AM-GM inequality:
$$4 c^{3} g^{2} h \le 2 c^{4} g^{2}+2 c^{2} g^{2} h^{2}$$
$$4 \sqrt{2} c^{3} g h^{2} \le 2 \sqrt{2} c^{4} g h+\sqrt{2} c^{3} h^{3}+\sqrt{2} c g^{2} h^{3}$$
$$8 c^{2} g^{3} h \le 4 c^{3} g^{3}+4 c g^{3} h^{2}$$
$$12 \sqrt{2} c^{2} g^{2} h^{2} \le 6 \sqrt{2} c^{3} g^{2} h+6 \sqrt{2} c g^{2} h^{3}$$
$$8 c^{2} g h^{3} \le 4 c^{3} g h^{2}+2 c^{2} h^{4}+2 g^{2} h^{4}$$
$$4 c g^{4} h \le 2 c^{2} g^{4}+2 g^{4} h^{2}$$
$$8 \sqrt{2} c g^{3} h^{2} \le 4 \sqrt{2} c^{2} g^{3} h+4 \sqrt{2} g^{3} h^{3}$$
$$8 c g^{2} h^{3} \le 4 c g^{3} h^{2}+4 c g h^{4}$$
$$ 0 \le 4 c^{4} h^{2}+16 c^{3} g h^{2}+7 \sqrt{2} c^{3} h^{3}+22 c^{2} g^{2} h^{2}+20 \sqrt{2} c^{2} g h^{3}+6 c^{2} h^{4}+5 \sqrt{2} c g^{2} h^{3}+4 c g h^{4}+2 g^{2} h^{4} $$
The sum of all inequalities gives us a proof of the inequality.
0
<a href="https://colab.research.google.com/github/engdorm/semi-supervised-pytorch/blob/master/examples/notebooks/Deep Generative Model.ipynb" target="_parent"></a>
```python
# Imports
import torch
cuda = torch.cuda.is_available()
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import sys
sys.path.append("../../semi-supervised")
```
# Deep Generative Model
In this notebook we show how you can use the deep "generative model for semi-supervised learning" as presented in [[Kingma 2014]](https://arxiv.org/abs/1406.5298). The paper posits three different models, though we are just interested in two of these: the M2 model and the M1+M2 model.
The M1 model is just a variational autoencoder, so we refer to the previous notebook for more information on this. The M2 model however is an extension to the VAE to include label information for a semi-supervised objective. The structure is shown below (left: inference model, right: generative model).
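For reference (and consistent with the ELBO derivations further below), the generative and inference models of M2 factorize as:

$$
p(x, y, z) = p(x \mid y, z)\, p(y)\, p(z), \qquad q(z, y \mid x) = q(z \mid x, y)\, q(y \mid x)
$$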
The point of the generative model is to separate the partially observed label information $y$ from the latent variable $z$ in order to learn a representation that separates these two variables. We can use this model for semi-supervised learning as the inference model must also infer the label from the data $x$ along with the latent variable $z$.
```python
from models import DeepGenerativeModel, StackedDeepGenerativeModel
y_dim = 10
z_dim = 32
h_dim = [256, 128]
model = DeepGenerativeModel([784, y_dim, z_dim, h_dim])
model
```
DeepGenerativeModel(
(encoder): Encoder(
(hidden): ModuleList(
(0): Linear(in_features=794, out_features=256)
(1): Linear(in_features=256, out_features=128)
)
(sample): GaussianSample(
(mu): Linear(in_features=128, out_features=32)
(log_var): Linear(in_features=128, out_features=32)
)
)
(decoder): Decoder(
(hidden): ModuleList(
(0): Linear(in_features=42, out_features=128)
(1): Linear(in_features=128, out_features=256)
)
(reconstruction): Linear(in_features=256, out_features=784)
(output_activation): Sigmoid()
)
(classifier): Classifier(
(dense): Linear(in_features=784, out_features=256)
(logits): Linear(in_features=256, out_features=10)
)
)
```python
print(model.encoder.hidden[0])
print(model.decoder.hidden[0])
```
Linear(in_features=794, out_features=256)
Linear(in_features=42, out_features=128)
Notice how there's now a classifier associated with the model. This classifier is just a simple model whose hidden layer has the same size as the first encoder layer. We also have a larger input space on both the encoder and decoder to make room for the label information, in this case 10 labels.
## Training
Recall the ELBO from the VAE formulation; we want to construct a similar ELBO when we include labelled data $y$. In the case that we have labels, the ELBO has a simple formulation that is similar to the one for the VAE. The difference here is that we must also have a prior over labels $p(y)$, which we choose to be uniform over the different classes.
\begin{align}
\log p(x, y) &= \log \int q(z|x, y) \frac{p(x, y, z)}{q(z|xy)} \ dz \geq \int q(z|x, y) \log \frac{p(x, y, z)}{q(z|xy)} \ dz\\
&= \int q(z|x, y) [ \log p(x|z,y) + \log p(y) ] \ dz + \int q(z|x, y) \log \frac{p(z)}{q(z|xy)} \ dz\\
&= \mathbb{E}_{q(z|x, y)} [ \log p(x|z,y) + \log p(y) ] - KL(p(z)||q(z|xy)) = - \mathcal{L}(x, y)
\end{align}
In the case when the labels are not observed, we can instead integrate over all of the labels to achieve the same effect.
\begin{align}
\log p(x) &= \log \sum_{y} \int q(z,y|x) \frac{p(x, y, z)}{q(z,y|x)} \ dz \geq \sum_{y} q(y|x) \int q(z|x, y) \log \frac{p(x, y, z)}{q(z,y|x)} \ dz\\
&= \sum_{y} q(y|x) \int q(z|x, y) \log \frac{p(x, y, z)}{q(z,y|x)} \ dz + \sum_{y} q(y|x) \log q(y|x) \int q(z|x, y) \ dz\\
&= \sum_{y} q(y|x) (- \mathcal{L}(x,y)) + \mathcal{H}(q(y|x)) = - \mathcal{U}(x)
\end{align}
Notice how in both cases we need to compute the labelled bound, but in the unlabelled case we need to do it $n$ times where $n$ is the number of classes. In this model, we do not learn directly from the labelled class, as there is no cross entropy term between $y$ and our model output $q(y|x)$. We therefore add an auxiliary loss to arrive at the final loss objective.
$$\mathcal{J}^{\alpha} = \sum_{(x_l, y_l)}\mathcal{L}(x_l, y_l) + \alpha \cdot \mathbb{E}_{x_l, y_l}[- \log q(y_l|x_l)] + \sum_{(x_u)}\mathcal{U}(x_u)$$
Where $l, u$ denote labelled and unlabelled data respectively, and $\alpha$ is a hyperparameter that controls how much weight is put on the labelled data.
```python
from datautils import get_mnist
# Only use 10 labelled examples per class
# The rest of the data is unlabelled.
labelled, unlabelled, validation = get_mnist(location="./", batch_size=64, labels_per_class=10)
alpha = 0.1 * len(unlabelled) / len(labelled)
def binary_cross_entropy(r, x):
return -torch.sum(x * torch.log(r + 1e-8) + (1 - x) * torch.log(1 - r + 1e-8), dim=-1)
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4, betas=(0.9, 0.999))
```
```python
from itertools import cycle
from inference import SVI, ImportanceWeightedSampler
# You can use importance weighted samples [Burda, 2015] to get a better estimate
# on the log-likelihood.
sampler = ImportanceWeightedSampler(mc=1, iw=1)
if cuda: model = model.cuda()
elbo = SVI(model, likelihood=binary_cross_entropy, sampler=sampler)
```
The library conveniently provides the `SVI` method, which does all of the work of calculating the lower bound for both labelled and unlabelled data, depending on whether the label is given. It also handles the enumeration over all labels needed for the unlabelled bound.
Remember that the labels have to be in a *one-hot encoded* format in order to work with SVI.
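As a minimal illustration (a hypothetical helper; the `get_mnist` loader used above is assumed to already return labels in this format), integer class labels can be one-hot encoded with plain PyTorch like this:

```python
import torch

def to_onehot(labels, n_classes=10):
    # labels: LongTensor of shape (batch,) with values in [0, n_classes)
    # returns: FloatTensor of shape (batch, n_classes) with a single 1 per row
    return torch.eye(n_classes)[labels]

y = to_onehot(torch.tensor([3, 0, 7]))  # rows with a 1 at positions 3, 0 and 7
```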
```python
from torch.autograd import Variable
for epoch in range(10):
model.train()
total_loss, accuracy = (0, 0)
for (x, y), (u, _) in zip(cycle(labelled), unlabelled):
# Wrap in variables
x, y, u = Variable(x), Variable(y), Variable(u)
if cuda:
# They need to be on the same device and be synchronized.
x, y = x.cuda(device=0), y.cuda(device=0)
u = u.cuda(device=0)
L = -elbo(x, y)
U = -elbo(u)
# Add auxiliary classification loss q(y|x)
logits = model.classify(x)
# Regular cross entropy
classication_loss = torch.sum(y * torch.log(logits + 1e-8), dim=1).mean()
J_alpha = L - alpha * classication_loss + U
J_alpha.backward()
optimizer.step()
optimizer.zero_grad()
total_loss += J_alpha.data[0]
accuracy += torch.mean((torch.max(logits, 1)[1].data == torch.max(y, 1)[1].data).float())
if epoch % 1 == 0:
model.eval()
m = len(unlabelled)
print("Epoch: {}".format(epoch))
print("[Train]\t\t J_a: {:.2f}, accuracy: {:.2f}".format(total_loss / m, accuracy / m))
total_loss, accuracy = (0, 0)
for x, y in validation:
x, y = Variable(x), Variable(y)
if cuda:
x, y = x.cuda(device=0), y.cuda(device=0)
L = -elbo(x, y)
U = -elbo(x)
logits = model.classify(x)
classication_loss = -torch.sum(y * torch.log(logits + 1e-8), dim=1).mean()
J_alpha = L + alpha * classication_loss + U
total_loss += J_alpha.data[0]
_, pred_idx = torch.max(logits, 1)
_, lab_idx = torch.max(y, 1)
accuracy += torch.mean((torch.max(logits, 1)[1].data == torch.max(y, 1)[1].data).float())
m = len(validation)
print("[Validation]\t J_a: {:.2f}, accuracy: {:.2f}".format(total_loss / m, accuracy / m))
```
## Conditional generation
When the model is done training you can generate samples conditionally, given some normally distributed noise $z$ and a label $y$.
*The model below has only trained for 10 iterations, so the performance is not representative*.
```python
from utils import onehot
model.eval()
z = Variable(torch.randn(16, 32))
# Generate a batch of 7s
y = Variable(onehot(10)(7).repeat(16, 1))
x_mu = model.sample(z, y)
```
```python
f, axarr = plt.subplots(1, 16, figsize=(18, 12))
samples = x_mu.data.view(-1, 28, 28).numpy()
for i, ax in enumerate(axarr.flat):
ax.imshow(samples[i])
ax.axis("off")
```
## Stacked Deep Generative Model
The M1+M2 model, also described in the same paper, is an M1 model (VAE) with an M2 model stacked on top of it. That means that we first train a VAE end-to-end on the given dataset, then we use the learned encoder as a feature extractor and feed the data transformed by the M1 encoder into the M2 model.
The approach is somewhat similar to restricted Boltzmann machines (RBMs) in the sense that we perform a layerwise training of the whole model by first training a level-1 feature extractor and stacking another model on top of it. The stacked model is therefore also more modular, but cannot be trained end-to-end, which is a downside.
```python
from models import VariationalAutoencoder, StackedDeepGenerativeModel
features = VariationalAutoencoder([784, z_dim, h_dim])
model = StackedDeepGenerativeModel([784, y_dim, z_dim, h_dim], features)
```
Typically, you would want to load a pretrained feature extractor VAE instead.
```python
features = torch.load("./your-pretrained-vae.pt")
```
```python
from torch.autograd import Variable
for epoch in range(10):
model.train()
total_loss, accuracy = (0, 0)
for (x, y), (u, _) in zip(cycle(labelled), unlabelled):
# Wrap in variables
x, y, u = Variable(x), Variable(y), Variable(u)
if cuda:
# They need to be on the same device and be synchronized.
x, y = x.cuda(device=0), y.cuda(device=0)
u = u.cuda(device=0)
L = -elbo(x, y)
U = -elbo(u)
# Add auxiliary classification loss q(y|x)
logits = model.classify(x)
# Regular cross entropy
classication_loss = torch.sum(y * torch.log(logits + 1e-8), dim=1).mean()
J_alpha = L - alpha * classication_loss + U
J_alpha.backward()
optimizer.step()
optimizer.zero_grad()
total_loss += J_alpha.data[0]
accuracy += torch.mean((torch.max(logits, 1)[1].data == torch.max(y, 1)[1].data).float())
if epoch % 1 == 0:
model.eval()
m = len(unlabelled)
print("Epoch: {}".format(epoch))
print("[Train]\t\t J_a: {:.2f}, accuracy: {:.2f}".format(total_loss / m, accuracy / m))
total_loss, accuracy = (0, 0)
for x, y in validation:
x, y = Variable(x), Variable(y)
if cuda:
x, y = x.cuda(device=0), y.cuda(device=0)
L = -elbo(x, y)
U = -elbo(x)
logits = model.classify(x)
classication_loss = -torch.sum(y * torch.log(logits + 1e-8), dim=1).mean()
J_alpha = L + alpha * classication_loss + U
total_loss += J_alpha.data[0]
_, pred_idx = torch.max(logits, 1)
_, lab_idx = torch.max(y, 1)
accuracy += torch.mean((torch.max(logits, 1)[1].data == torch.max(y, 1)[1].data).float())
m = len(validation)
print("[Validation]\t J_a: {:.2f}, accuracy: {:.2f}".format(total_loss / m, accuracy / m))
```
## Additional tips
You can replace the built-in classifier with your own. For example, if you want a CNN as a classifier you can do the following.
```python
import torch.nn as nn
import torch.nn.functional as F
class ConvolutionalClassifier(nn.Module):
def __init__(self):
super(ConvolutionalClassifier, self).__init__()
self.conv1 = nn.Conv2d(1, 64, kernel_size=3)
self.conv2 = nn.Conv2d(64, 32, kernel_size=3)
self.pool = nn.MaxPool2d(kernel_size=4)
size = int((28 - 3) + 1)//4
size = int((size - 3) + 1)//4
self.fc1 = nn.Linear(32*size**2, 50)
self.fc2 = nn.Linear(50, 10)
def forward(self, x):
batch, *_ = x.size()
x = x.view(-1, 1, 28, 28)
x = F.relu(self.conv1(x))
x = self.pool(x)
x = F.relu(self.conv2(x))
x = self.pool(x)
x = F.relu(x.view(batch, -1))
x = self.fc1(x)
x = self.fc2(x)
return F.softmax(x, dim=-1)
classifier = ConvolutionalClassifier()
model.classifier = classifier
```
```python
model
```
DeepGenerativeModel(
(encoder): Encoder(
(hidden): ModuleList(
(0): Linear(in_features=794, out_features=256)
(1): Linear(in_features=256, out_features=128)
)
(sample): GaussianSample(
(mu): Linear(in_features=128, out_features=32)
(log_var): Linear(in_features=128, out_features=32)
)
)
(decoder): Decoder(
(hidden): ModuleList(
(0): Linear(in_features=42, out_features=128)
(1): Linear(in_features=128, out_features=256)
)
(reconstruction): Linear(in_features=256, out_features=784)
(output_activation): Sigmoid()
)
(classifier): ConvolutionalClassifier(
(conv1): Conv2d (1, 64, kernel_size=(3, 3), stride=(1, 1))
(conv2): Conv2d (64, 32, kernel_size=(3, 3), stride=(1, 1))
(pool): MaxPool2d(kernel_size=(4, 4), stride=(4, 4), dilation=(1, 1))
(fc1): Linear(in_features=32, out_features=50)
(fc2): Linear(in_features=50, out_features=10)
)
)
```python
```
# Problem set 7: Solving the consumer problem with income risk
```python
import numpy as np
import scipy as sp
from scipy import linalg
from scipy import optimize
from scipy import interpolate
import sympy as sm
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
```
# Tasks
```python
sm.init_printing(use_unicode=True)
```
## Optimization problem I
Consider the function
$$
f(\boldsymbol{x}) = f(x_1,x_2) = (x_1^2 - x_1x_2 + x_2^2)^2
$$
Define it in **sympy** by:
```python
x1 = sm.symbols('x_1')
x2 = sm.symbols('x_2')
f = (x1**2 - x1*x2 + x2**2)**2
f
```
The **Jacobian** is
```python
f1 = sm.diff(f,x1)
f2 = sm.diff(f,x2)
sm.Matrix([f1,f2])
```
The **Hessian** is
```python
f11 = sm.diff(f,x1,x1)
f12 = sm.diff(f,x1,x2)
f21 = sm.diff(f,x2,x1)
f22 = sm.diff(f,x2,x2)
sm.Matrix([[f11,f12],[f21,f22]])
```
**Question A:** Create a 3D plot and a contour plot of $f(x_1,x_2)$ such as those in the answer below.
```python
_f = sm.lambdify((x1,x2),f)
# write your code here
```
**Answer:**
```python
# a. grids
x1_vec = np.linspace(-2,2,500)
x2_vec = np.linspace(-2,2,500)
x1_grid,x2_grid = np.meshgrid(x1_vec,x2_vec,indexing='ij')
f_grid = _f(x1_grid,x2_grid)
# b. main
fig = plt.figure()
ax = fig.add_subplot(1,1,1,projection='3d')
cs = ax.plot_surface(x1_grid,x2_grid,f_grid,cmap=cm.jet)
# c. add labels
ax.set_xlabel('$x_1$')
ax.set_ylabel('$x_2$')
ax.set_zlabel('$f$')
# d. invert xaxis
ax.invert_xaxis()
# e. add colorbar
fig.colorbar(cs);
```
```python
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
levels = np.sort([j*10**(-i) for i in [-1,0,1,2,3,4] for j in [0.5,1,1.5]])
cs = ax.contour(x1_grid,x2_grid,f_grid,levels=levels,cmap=cm.jet)
fig.colorbar(cs);
```
**Question B:** Construct python functions for the jacobian and the hessian.
```python
f_python = lambda x: _f(x[0],x[1])
# write your code here
```
**Answer:**
```python
_f1 = sm.lambdify((x1,x2),f1)
_f2 = sm.lambdify((x1,x2),f2)
_f11 = sm.lambdify((x1,x2),f11)
_f12 = sm.lambdify((x1,x2),f12)
_f21 = sm.lambdify((x1,x2),f21)
_f22 = sm.lambdify((x1,x2),f22)
def f_jac(x):
return np.array([_f1(x[0],x[1]),_f2(x[0],x[1])])
def f_hess(x):
row1 = [_f11(x[0],x[1]),_f12(x[0],x[1])]
row2 = [_f21(x[0],x[1]),_f22(x[0],x[1])]
return np.array([row1,row2])
```
**Question C:** Minimize $f(x_1,x_2)$ using respectively
1. Nelder-Mead,
2. BFGS without analytical jacobian,
3. BFGS with analytical jacobian, and
4. Newton-CG with analytical jacobian and hessian
Compare the results and discuss which optimizer you prefer.
**Optional:** If you wish, you can use the functions defined in the hidden cells below to also track how the optimizers converge to the solution.
```python
def collect(x):
# globals used to keep track across iterations
global evals # set evals = 0 before calling optimizer
global x0
global x1s
global x2s
global fs
# a. initialize list
if evals == 0:
x1s = [x0[0]]
x2s = [x0[1]]
fs = [f_python(x0)]
# b. append trial values
x1s.append(x[0])
x2s.append(x[1])
fs.append(f_python(x))
# c. increment number of evaluations
evals += 1
```
```python
def contour():
global evals
global x1s
global x2s
global fs
# a. contour plot
fig = plt.figure(figsize=(10,4))
ax = fig.add_subplot(1,2,1)
levels = np.sort([j*10**(-i) for i in [-1,0,1,2,3,4] for j in [0.5,1,1.5]])
cs = ax.contour(x1_grid,x2_grid,f_grid,levels=levels,cmap=cm.jet)
fig.colorbar(cs)
ax.plot(x1s,x2s,'-o',ms=4,color='black')
ax.set_xlabel('$x_1$')
ax.set_ylabel('$x_2$')
# b. function value
ax = fig.add_subplot(1,2,2)
ax.plot(np.arange(evals+1),fs,'-o',ms=4,color='black')
ax.set_xlabel('iteration')
ax.set_ylabel('function value')
```
```python
x0 = [-2,-1] # suggested initial guess
# write your code here
```
**Answer:**
```python
print('Nelder-Mead:')
evals = 0
result = optimize.minimize(f_python,x0,method='Nelder-Mead',callback=collect,options={'disp':True})
contour()
```
```python
print('BFGS without analytical gradient:')
evals = 0
result = optimize.minimize(f_python,x0,method='BFGS',callback=collect,options={'disp':True})
contour()
```
```python
print('BFGS with analytical gradient:')
evals = 0
result = optimize.minimize(f_python,x0,jac=f_jac,method='BFGS',callback=collect,options={'disp':True})
contour()
```
```python
print('Newton-CG with analytical gradient and hessian:')
evals = 0
result = optimize.minimize(f_python,x0,jac=f_jac,hess=f_hess,method='Newton-CG',callback=collect,options={'disp':True})
contour()
```
## Optimization problem II
Consider the function
$$
f(x_1,x_2) = (4-2.1x_1^2 + \frac{x_1^4}{3})x_1^2 + x_1x_2 + (4x_2^2 - 4)x_2^2
$$
Define it in **sympy** by:
```python
x1 = sm.symbols('x_1')
x2 = sm.symbols('x_2')
f = (4-2.1*x1**2 + (x1**4)/3)*x1**2 + x1*x2 + (4*x2**2 - 4)*x2**2
_f = sm.lambdify((x1,x2),f)
f
```
Create **3D plot**:
```python
# a. grids
x1_vec = np.linspace(-2,2,500)
x2_vec = np.linspace(-1,1,500)
x1_grid,x2_grid = np.meshgrid(x1_vec,x2_vec,indexing='ij')
f_grid = _f(x1_grid,x2_grid)
# b. main
fig = plt.figure()
ax = fig.add_subplot(1,1,1,projection='3d')
cs = ax.plot_surface(x1_grid,x2_grid,f_grid,cmap=cm.jet)
# c. add labels
ax.set_xlabel('$x_1$')
ax.set_ylabel('$x_2$')
ax.set_zlabel('$f$')
# d. invert xaxis
ax.invert_xaxis()
# e. remove background
ax.xaxis.pane.fill = False
ax.yaxis.pane.fill = False
ax.zaxis.pane.fill = False
# f. add colorbar
fig.colorbar(cs);
```
**Question A:** Find the minimum of the function starting from each of the suggested initial values below. Print the first 20 solutions, and afterwards print only those solutions that improve on the best value found so far. Save the solutions and associated function values in `xs` and `fs`.
```python
# a. python function for f
f_python = lambda x: _f(x[0],x[1])
# b. initial guesses
np.random.seed(1986)
K = 1000
x0s = np.empty((K,2))
x0s[:,0] = -2 + 4*np.random.uniform(size=K)
x0s[:,1] = -1 + 2*np.random.uniform(size=K)
# c. solutions and associated values
xs = np.empty((K,2))
fs = np.empty(K)
# write your code here
```
**Answer:**
```python
fopt = np.inf
xopt = np.nan
for i,x0 in enumerate(x0s):
# a. optimize
result = optimize.minimize(f_python,x0,method='BFGS')
xs[i,:] = result.x
fs[i] = result.fun
# b. print first 20 or if better than seen yet
if i < 20 or fs[i] < fopt: # plot 20 first or if improving
if fs[i] < fopt:
xopt = xs[i,:]
fopt = fs[i]
        print(f'{i:4d}: x0 = ({x0[0]:6.2f},{x0[1]:6.2f})',end='')
print(f' -> converged at ({xs[i][0]:6.2f},{xs[i][1]:6.2f}) with f = {fs[i]:.12f}')
# best solution
print(f'\nbest solution:\n x = ({xopt[0]:6.2f},{xopt[1]:6.2f}) -> f = {fopt:.12f}')
```
0: x0 = ( 0.28, 0.28) -> converged at ( 0.09, -0.71) with f = -1.031628453485
1: x0 = ( -1.69, -1.69) -> converged at ( -1.70, 0.80) with f = -0.215463824384
2: x0 = ( 0.43, 0.43) -> converged at ( 0.09, -0.71) with f = -1.031628453490
3: x0 = ( 1.59, 1.59) -> converged at ( 1.70, -0.80) with f = -0.215463824384
4: x0 = ( 0.18, 0.18) -> converged at ( 0.09, -0.71) with f = -1.031628453490
5: x0 = ( 0.81, 0.81) -> converged at ( -0.09, 0.71) with f = -1.031628453490
6: x0 = ( -0.46, -0.46) -> converged at ( -0.09, 0.71) with f = -1.031628453487
7: x0 = ( 0.61, 0.61) -> converged at ( 0.09, -0.71) with f = -1.031628453490
8: x0 = ( 0.76, 0.76) -> converged at ( -0.09, 0.71) with f = -1.031628453484
9: x0 = ( 0.87, 0.87) -> converged at ( -0.09, 0.71) with f = -1.031628453490
10: x0 = ( 0.76, 0.76) -> converged at ( -0.09, 0.71) with f = -1.031628453489
11: x0 = ( 1.23, 1.23) -> converged at ( 1.70, -0.80) with f = -0.215463824384
12: x0 = ( -0.87, -0.87) -> converged at ( 0.09, -0.71) with f = -1.031628453486
13: x0 = ( 1.03, 1.03) -> converged at ( 0.09, -0.71) with f = -1.031628453490
14: x0 = ( -0.77, -0.77) -> converged at ( -0.09, 0.71) with f = -1.031628453490
15: x0 = ( -0.25, -0.25) -> converged at ( 0.09, -0.71) with f = -1.031628453490
16: x0 = ( 0.21, 0.21) -> converged at ( 0.09, -0.71) with f = -1.031628453488
17: x0 = ( -0.27, -0.27) -> converged at ( 0.09, -0.71) with f = -1.031628453490
18: x0 = ( 0.31, 0.31) -> converged at ( -0.09, 0.71) with f = -1.031628453490
19: x0 = ( 1.57, 1.57) -> converged at ( 1.61, 0.57) with f = 2.104250310311
24: x0 = ( -0.14, -0.14) -> converged at ( 0.09, -0.71) with f = -1.031628453490
27: x0 = ( -0.13, -0.13) -> converged at ( 0.09, -0.71) with f = -1.031628453490
155: x0 = ( 0.60, 0.60) -> converged at ( 0.09, -0.71) with f = -1.031628453490
best solution:
x = ( 0.09, -0.71) -> f = -1.031628453490
**Question B:** Create a 3D scatter plot of where the optimizer converges, and color the dots by the associated function values.
```python
# write your code here
```
**Answer:**
```python
# a. main
fig = plt.figure()
ax = fig.add_subplot(1,1,1,projection='3d')
cs = ax.scatter(xs[:,0],xs[:,1],fs,c=fs);
# b. add labels
ax.set_xlabel('$x_1$')
ax.set_ylabel('$x_2$')
ax.set_zlabel('$f$')
# c. invert xaxis
ax.invert_xaxis()
# d. colorbar
fig.colorbar(cs);
```
**Question C:** Plot the function values at the solutions as a function of the starting values.
```python
# write your code here
```
**Answer:**
```python
# a. main
fig = plt.figure()
ax = fig.add_subplot(1,1,1,projection='3d')
cs = ax.scatter(x0s[:,0],x0s[:,1],fs,c=fs);
# b. add labels
ax.set_xlabel('$x_1$')
ax.set_ylabel('$x_2$')
ax.set_zlabel('$f$')
# c. invert xaxis
ax.invert_xaxis()
# d. colorbar
fig.colorbar(cs);
```
```python
sm.init_printing(pretty_printing=False)
```
# Problem: Solve the consumer problem with income risk I
Define the following **variables** and **parameters**:
* $m_t$ is cash-on-hand in period $t$
* $c_t$ is consumption in period $t$
* $y_t$ is income in period $t$
* $\Delta \in (0,1)$ is income risk
* $r$ is the interest rate
* $\beta > 0$, $\rho > 1$, $\nu > 0 $, $\kappa > 0$, $\xi > 0$ are utility parameters
In the **second period** the household solves:
$$
\begin{aligned}
v_{2}(m_{2}) &= \max_{c_{2}}\frac{c_{2}^{1-\rho}}{1-\rho}+\nu\frac{(m_{2}-c_{2}+\kappa)^{1-\rho}}{1-\rho} \\
\text{s.t.} \\
c_{2} & \in [0,m_{2}]
\end{aligned}
$$
In the **first period** the household solves:
$$
\begin{aligned}
v_{1}(m_{1}) & =
\max_{c_{1}}\frac{c_{1}^{1-\rho}}{1-\rho}+\beta\mathbb{E}_{1}\left[v_2(m_2)\right] \\
\text{s.t.} \\
m_2 &= (1+r)(m_{1}-c_{1})+y_{2} \\
y_{2} &= \begin{cases}
1-\Delta & \text{with prob. }0.5\\
1+\Delta & \text{with prob. }0.5
\end{cases}\\
c_{1} & \in [0,m_{1}]\\
\end{aligned}
$$
The **basic functions** are:
```python
def utility(c,rho):
return c**(1-rho)/(1-rho)
def bequest(m,c,nu,kappa,rho):
return nu*(m-c+kappa)**(1-rho)/(1-rho)
def v2(c2,m2,rho,nu,kappa):
return utility(c2,rho) + bequest(m2,c2,nu,kappa,rho)
def v1(c1,m1,rho,beta,r,Delta,v2_interp):
# a. v2 value, if low income
m2_low = (1+r)*(m1-c1) + 1-Delta
v2_low = v2_interp([m2_low])[0]
# b. v2 value, if high income
m2_high = (1+r)*(m1-c1) + 1+Delta
v2_high = v2_interp([m2_high])[0]
# c. expected v2 value
v2 = 0.5*v2_low + 0.5*v2_high
# d. total value
return utility(c1,rho) + beta*v2
```
The **solution functions** are:
```python
def solve_period_2(rho,nu,kappa,Delta):
# a. grids
m2_vec = np.linspace(1e-8,5,500)
v2_vec = np.empty(500)
c2_vec = np.empty(500)
# b. solve for each m2 in grid
for i,m2 in enumerate(m2_vec):
# i. objective
obj = lambda c2: -v2(c2,m2,rho,nu,kappa)
# ii. initial value (consume half)
x0 = m2/2
# iii. optimizer
result = optimize.minimize_scalar(obj,x0,method='bounded',bounds=[1e-8,m2])
# iv. save
v2_vec[i] = -result.fun
c2_vec[i] = result.x
return m2_vec,v2_vec,c2_vec
def solve_period_1(rho,beta,r,Delta,v1,v2_interp):
# a. grids
m1_vec = np.linspace(1e-8,4,100)
v1_vec = np.empty(100)
c1_vec = np.empty(100)
# b. solve for each m1 in grid
for i,m1 in enumerate(m1_vec):
# i. objective
obj = lambda c1: -v1(c1,m1,rho,beta,r,Delta,v2_interp)
# ii. initial guess (consume half)
x0 = m1*1/2
# iii. optimize
result = optimize.minimize_scalar(obj,x0,method='bounded',bounds=[1e-8,m1])
# iv. save
v1_vec[i] = -result.fun
c1_vec[i] = result.x
return m1_vec,v1_vec,c1_vec
```
**Question A:** Find optimal consumption in the first period as a function of cash-on-hand, and plot it.
```python
rho = 8
kappa = 0.5
nu = 0.1
r = 0.04
beta = 0.94
Delta = 0.5
# b. solve
# write your code here
# c. plot
# write your code here
```
**Answer:**
```python
# b. solve
def solve(rho,beta,r,Delta,nu,kappa,v1):
# a. solve period 2
m2_vec,v2_vec,c2_vec = solve_period_2(rho,nu,kappa,Delta)
# b. construct interpolator
v2_interp = interpolate.RegularGridInterpolator((m2_vec,), v2_vec,
bounds_error=False,fill_value=None)
# b. solve period 1
m1_vec,v1_vec,c1_vec = solve_period_1(rho,beta,r,Delta,v1,v2_interp)
return m1_vec,c1_vec
m1_vec,c1_vec = solve(rho,beta,r,Delta,nu,kappa,v1)
# c. plot
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.plot(m1_vec,c1_vec)
ax.set_xlabel('$m_1$')
ax.set_ylabel('$c_1$')
ax.set_title('consumption function in period 1')
ax.set_xlim([0,4])
ax.set_ylim([0,2.5]);
```
**Question B:** Find optimal consumption in the first period as a function of cash-on-hand, and plot it, assuming that
$$
y_{2} = \begin{cases}
1-\sqrt{\Delta} & \text{with prob. }0.1\\
1-\Delta & \text{with prob. }0.4\\
1+\Delta & \text{with prob. }0.4\\
1+\sqrt{\Delta} & \text{with prob. }0.1
\end{cases}
$$
which adds some low-probability tail events but does not change mean income. Give an interpretation of the change in the consumption function.
```python
# write your code here
```
**Answer:**
```python
def v1_alt(c1,m1,rho,beta,r,Delta,v2_interp):
# a. expected v2 value
Ra = (1+r)*(m1-c1)
v2 = 0
y2s = [1-np.sqrt(Delta),1-Delta,1+Delta,1+np.sqrt(Delta)]
probs = [0.1,0.4,0.4,0.1]
for y2,prob in zip(y2s,probs):
m2 = Ra + y2
v2 += prob*v2_interp([m2])[0]
# b. total value
return utility(c1,rho) + beta*v2
m1_vec_alt,c1_vec_alt = solve(rho,beta,r,Delta,nu,kappa,v1_alt)
# plot
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.plot(m1_vec,c1_vec,label='original')
ax.plot(m1_vec_alt,c1_vec_alt,label='new')
ax.legend(loc='upper left')
ax.set_xlabel('$m_1$')
ax.set_ylabel('$c_1$')
ax.set_title('consumption function in period 1')
ax.set_xlim([0,4])
ax.set_ylim([0,2.5]);
```
# Problem: Solve the consumer problem with income risk II
Define the following **variables** and **parameters**:
* $m_t$ is cash-on-hand in period $t$
* $c_t$ is non-durable consumption in period $t$
* $d_t$ is durable consumption in period $t$ (only adjustable in period 1)
* $y_t$ is income in period $t$
* $\Delta \in (0,1)$ is income risk
* $r$ is the interest rate
* $\beta > 0$, $\rho > 1$, $\alpha \in (0,1)$, $\nu > 0 $, $\kappa > 0$, $\xi > 0$ are utility parameters
In the **second period** the household solves:
$$
\begin{aligned}
v_{2}(m_{2},d_{2}) &= \max_{c_{2}}\frac{c_{2}^{1-\rho}}{1-\rho}+\alpha\frac{d_{2}^{1-\rho}}{1-\rho}+\nu\frac{(m_{2}+d_{2}-c_{2}+\kappa)^{1-\rho}}{1-\rho} \\
\text{s.t.} \\
c_{2} & \in [0,m_{2}]
\end{aligned}
$$
In the **first period** the household solves:
$$
\begin{aligned}
v_{1}(m_{1}) &= \max_{c_{1},d_{1}}\frac{c_{1}^{1-\rho}}{1-\rho}+\alpha\frac{d_{1}^{1-\rho}}{1-\rho}+\beta\mathbb{E}_{1}\left[v_2(m_2,d_2)\right] \\
\text{s.t.} \\
m_2 &= (1+r)(m_{1}-c_{1}-d_{1})+y_{2} \\
y_{2} &= \begin{cases}
1-\Delta & \text{with prob. }0.5\\
1+\Delta & \text{with prob. }0.5
\end{cases}\\
c_{1}+d_{1} & \in [0,m_{1}]\\
\end{aligned}
$$
Choose **parameters**:
```python
rho = 2
alpha = 0.1
kappa = 0.5
nu = 0.1
r = 0.04
beta = 0.94
Delta = 0.5
# b. solve
# write your code here
# c. plot
# write your code here
```
The **basic functions** are:
```python
def utility(c,d,alpha,rho):
return c**(1-rho)/(1-rho) + alpha*d**(1-rho)/(1-rho)
def bequest(m,c,d,nu,kappa,rho):
return nu*(m+d-c+kappa)**(1-rho)/(1-rho)
def v2(c2,d2,m2,alpha,rho,nu,kappa):
return utility(c2,d2,alpha,rho) + bequest(m2,c2,d2,nu,kappa,rho)
def v1(c1,d1,m1,alpha,rho,beta,r,Delta,v2_interp):
# a. v2 value, if low income
m2_low = (1+r)*(m1-c1-d1) + 1-Delta
v2_low = v2_interp([m2_low,d1])[0]
# b. v2 value, if high income
m2_high = (1+r)*(m1-c1-d1) + 1+Delta
v2_high = v2_interp([m2_high,d1])[0]
# c. expected v2 value
v2 = 0.5*v2_low + 0.5*v2_high
# d. total value
return utility(c1,d1,alpha,rho) + beta*v2
```
The **solution function for period 2** is:
```python
def solve_period_2(alpha,rho,nu,kappa,Delta):
# a. grids
m2_vec = np.linspace(1e-8,5,200)
d2_vec = np.linspace(1e-8,5,200)
v2_grid = np.empty((200,200))
c2_grid = np.empty((200,200))
# b. solve for each m2 in grid
for i,m2 in enumerate(m2_vec):
for j,d2 in enumerate(d2_vec):
# i. objective
obj = lambda c2: -v2(c2,d2,m2,alpha,rho,nu,kappa)
# ii. initial value (consume half)
x0 = m2/2
# iii. optimizer
result = optimize.minimize_scalar(obj,x0,method='bounded',bounds=[1e-8,m2])
# iv. save
v2_grid[i,j] = -result.fun
c2_grid[i,j] = result.x
return m2_vec,d2_vec,v2_grid,c2_grid
```
**Question A:** Solve for consumption in period 2 and plot the consumption function.
```python
# write your code here
```
**Answer:**
```python
# a. solve
m2_vec,d2_vec,v2_grid,c2_grid = solve_period_2(alpha,rho,nu,kappa,Delta)
# b. grids
m2_grid,d2_grid = np.meshgrid(m2_vec,d2_vec,indexing='ij')
# c. main
fig = plt.figure()
ax = fig.add_subplot(1,1,1,projection='3d')
cs = ax.plot_surface(m2_grid,d2_grid,c2_grid,cmap=cm.jet)
# d. add labels
ax.set_xlabel('$m_2$')
ax.set_ylabel('$d_2$')
ax.set_zlabel('$c_2$')
# e. invert xaxis
ax.invert_xaxis()
# f. add colorbar
fig.colorbar(cs);
```
**Question B:** Find optimal consumption and choices of durables in the first period as a function of cash-on-hand and plot it.
```python
# write your code here
```
**Answer:**
```python
# a. define solve function
def solve_period_1(alpha,rho,beta,r,Delta,v1,v2_interp):
# a. grids
m1_vec = np.linspace(1e-4,4,100)
v1_vec = np.empty(100)
c1_vec = np.empty(100)
d1_vec = np.empty(100)
# b. solve for each m1 in grid
for i,m1 in enumerate(m1_vec):
# i. objective
obj = lambda x: -v1(x[0],x[1],m1,alpha,rho,beta,r,Delta,v2_interp)
# ii. initial guess
x0 = [m1*1/3,m1*1/3]
        # iii. bounds and constraints
bound = (1e-8,m1-1e-8)
bounds = (bound, bound)
ineq_con = {'type': 'ineq', 'fun': lambda x: m1-x[0]-x[1]}
# iv. optimize
result = optimize.minimize(obj,x0, method='SLSQP',
bounds=bounds,
constraints=[ineq_con])
#result = optimize.minimize(obj,x0, method='Nelder-Mead')
# v. save
v1_vec[i] = -result.fun
c1_vec[i] = result.x[0]
d1_vec[i] = result.x[1]
return m1_vec,v1_vec,c1_vec,d1_vec
# b. construct interpolator
v2_interp = interpolate.RegularGridInterpolator((m2_vec,d2_vec), v2_grid,
bounds_error=False,fill_value=None)
# c. solve period 1
m1_vec,v1_vec,c1_vec,d1_vec = solve_period_1(alpha,rho,beta,r,Delta,v1,v2_interp)
# d. plot
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.plot(m1_vec,c1_vec,label='non-durable consumption')
ax.plot(m1_vec,d1_vec,label='durable consumption')
ax.legend(loc='upper left')
ax.set_xlabel('$m_1$')
ax.set_xlim([0,4])
ax.set_ylim([0,2.5]);
```
# Programming Exercise 5:
# Regularized Linear Regression and Bias vs Variance
## Introduction
In this exercise, you will implement regularized linear regression and use it to study models with different bias-variance properties. Before starting on the programming exercise, we strongly recommend watching the video lectures and completing the review questions for the associated topics.
All the information you need for solving this assignment is in this notebook, and all the code you will be implementing will take place within this notebook. The assignment can be promptly submitted to the coursera grader directly from this notebook (code and instructions are included below).
Before we begin with the exercises, we need to import all libraries required for this programming exercise. Throughout the course, we will be using [`numpy`](http://www.numpy.org/) for all arrays and matrix operations, [`matplotlib`](https://matplotlib.org/) for plotting, and [`scipy`](https://docs.scipy.org/doc/scipy/reference/) for scientific and numerical computation functions and tools. You can find instructions on how to install required libraries in the README file in the [github repository](https://github.com/dibgerge/ml-coursera-python-assignments).
```python
# used for manipulating directory paths
import os
# Scientific and vector computation for python
import numpy as np
# Plotting library
from matplotlib import pyplot
# Optimization module in scipy
from scipy import optimize
# will be used to load MATLAB mat datafile format
from scipy.io import loadmat
# library written for this exercise providing additional functions for assignment submission, and others
import utils
# define the submission/grader object for this exercise
grader = utils.Grader()
# tells matplotlib to embed plots within the notebook
%matplotlib inline
```
## Submission and Grading
After completing each part of the assignment, be sure to submit your solutions to the grader. The following is a breakdown of how each part of this exercise is scored.
| Section | Part | Submitted Function | Points |
| :- |:- |:- | :-: |
| 1 | [Regularized Linear Regression Cost Function](#section1) | [`linearRegCostFunction`](#linearRegCostFunction) | 25 |
| 2 | [Regularized Linear Regression Gradient](#section2) | [`linearRegCostFunction`](#linearRegCostFunction) |25 |
| 3 | [Learning Curve](#section3) | [`learningCurve`](#func2) | 20 |
| 4 | [Polynomial Feature Mapping](#section4) | [`polyFeatures`](#polyFeatures) | 10 |
| 5 | [Cross Validation Curve](#section5) | [`validationCurve`](#validationCurve) | 20 |
| | Total Points | |100 |
You are allowed to submit your solutions multiple times, and we will take only the highest score into consideration.
<div class="alert alert-block alert-warning">
At the end of each section in this notebook, we have a cell which contains code for submitting the solutions thus far to the grader. Execute the cell to see your score up to the current section. For all your work to be submitted properly, you must execute those cells at least once.
</div>
<a id="section1"></a>
## 1 Regularized Linear Regression
In the first half of the exercise, you will implement regularized linear regression to predict the amount of water flowing out of a dam using the change of water level in a reservoir. In the next half, you will go through some diagnostics of debugging learning algorithms and examine the effects of bias v.s.
variance.
### 1.1 Visualizing the dataset
We will begin by visualizing the dataset containing historical records on the change in the water level, $x$, and the amount of water flowing out of the dam, $y$. This dataset is divided into three parts:
- A **training** set that your model will learn on: `X`, `y`
- A **cross validation** set for determining the regularization parameter: `Xval`, `yval`
- A **test** set for evaluating performance. These are “unseen” examples which your model did not see during training: `Xtest`, `ytest`
Run the next cell to plot the training data. In the following parts, you will implement linear regression and use that to fit a straight line to the data and plot learning curves. Following that, you will implement polynomial regression to find a better fit to the data.
```python
# Load from ex5data1.mat, where all variables will be store in a dictionary
data = loadmat(os.path.join('Data', 'ex5data1.mat'))
# Extract train, test, validation data from dictionary
# and also convert y's form 2-D matrix (MATLAB format) to a numpy vector
X, y = data['X'], data['y'][:, 0]
Xtest, ytest = data['Xtest'], data['ytest'][:, 0]
Xval, yval = data['Xval'], data['yval'][:, 0]
# m = Number of examples
m = y.size
# Plot training data
pyplot.plot(X, y, 'ro', ms=10, mec='k', mew=1)
pyplot.xlabel('Change in water level (x)')
pyplot.ylabel('Water flowing out of the dam (y)');
```
### 1.2 Regularized linear regression cost function
Recall that regularized linear regression has the following cost function:
$$ J(\theta) = \frac{1}{2m} \left( \sum_{i=1}^m \left( h_\theta\left( x^{(i)} \right) - y^{(i)} \right)^2 \right) + \frac{\lambda}{2m} \left( \sum_{j=1}^n \theta_j^2 \right)$$
where $\lambda$ is a regularization parameter which controls the degree of regularization (thus, help preventing overfitting). The regularization term puts a penalty on the overall cost J. As the magnitudes of the model parameters $\theta_j$ increase, the penalty increases as well. Note that you should not regularize
the $\theta_0$ term.
You should now complete the code in the function `linearRegCostFunction` in the next cell. Your task is to calculate the regularized linear regression cost function. If possible, try to vectorize your code and avoid writing loops.
<a id="linearRegCostFunction"></a>
```python
def linearRegCostFunction(X, y, theta, lambda_=0.0):
"""
Compute cost and gradient for regularized linear regression
with multiple variables. Computes the cost of using theta as
the parameter for linear regression to fit the data points in X and y.
Parameters
----------
X : array_like
The dataset. Matrix with shape (m x n + 1) where m is the
total number of examples, and n is the number of features
before adding the bias term.
y : array_like
The functions values at each datapoint. A vector of
shape (m, ).
theta : array_like
The parameters for linear regression. A vector of shape (n+1,).
lambda_ : float, optional
The regularization parameter.
Returns
-------
J : float
The computed cost function.
grad : array_like
The value of the cost function gradient w.r.t theta.
A vector of shape (n+1, ).
Instructions
------------
Compute the cost and gradient of regularized linear regression for
a particular choice of theta.
You should set J to the cost and grad to the gradient.
"""
# Initialize some useful values
m = y.size # number of training examples
# You need to return the following variables correctly
J = 0
grad = np.zeros(theta.shape)
# ====================== YOUR CODE HERE ======================
h = X.dot(theta)
J = (1 / (2 * m)) * np.sum(np.square(h - y)) + (lambda_ / (2 * m)) * np.sum(np.square(theta[1:]))
grad = (1 / m) * (h - y).dot(X)
grad[1:] = grad[1:] + (lambda_ / m) * theta[1:]
# ============================================================
return J, grad
```
When you are finished, the next cell will run your cost function using `theta` initialized at `[1, 1]`. You should expect to see an output of 303.993.
```python
theta = np.array([1, 1])
J, _ = linearRegCostFunction(np.concatenate([np.ones((m, 1)), X], axis=1), y, theta, 1)
print('Cost at theta = [1, 1]:\t %f ' % J)
print('(this value should be about 303.993192)\n')
```
Cost at theta = [1, 1]: 303.993192
This value should be about 303.993192)
After completing a part of the exercise, you can submit your solutions for grading by first adding the function you modified to the submission object, and then sending your function to Coursera for grading.
The submission script will prompt you for your login e-mail and submission token. You can obtain a submission token from the web page for the assignment. You are allowed to submit your solutions multiple times, and we will take only the highest score into consideration.
*Execute the following cell to grade your solution to the first part of this exercise.*
```python
grader[1] = linearRegCostFunction
grader.grade()
```
Submitting Solutions | Programming Exercise regularized-linear-regression-and-bias-variance
Login (email address): fidajisa@hotmail.com
Token: IanCHO2H3QTZxNuZ
Part Name | Score | Feedback
--------- | ----- | --------
Validation Curve | 25 / 25 | Nice work!
Regularized Linear Regression Cost Function | 0 / 25 | Your answer is incorrect.
Regularized Linear Regression Gradient | 0 / 20 | Your answer is incorrect.
Learning Curve | 0 / 10 | Your answer is incorrect.
Polynomial Feature Mapping | 0 / 20 | Your answer is incorrect.
--------------------------------
| 25 / 100 |
<a id="section2"></a>
### 1.3 Regularized linear regression gradient
Correspondingly, the partial derivative of the cost function for regularized linear regression is defined as:
$$
\begin{align}
& \frac{\partial J(\theta)}{\partial \theta_0} = \frac{1}{m} \sum_{i=1}^m \left( h_\theta \left(x^{(i)} \right) - y^{(i)} \right) x_j^{(i)} & \qquad \text{for } j = 0 \\
& \frac{\partial J(\theta)}{\partial \theta_j} = \left( \frac{1}{m} \sum_{i=1}^m \left( h_\theta \left( x^{(i)} \right) - y^{(i)} \right) x_j^{(i)} \right) + \frac{\lambda}{m} \theta_j & \qquad \text{for } j \ge 1
\end{align}
$$
In the function [`linearRegCostFunction`](#linearRegCostFunction) above, add code to calculate the gradient, returning it in the variable `grad`. <font color='red'><b>Do not forget to re-execute the cell containing this function to update the function's definition.</b></font>
When you are finished, use the next cell to run your gradient function using theta initialized at `[1, 1]`. You should expect to see a gradient of `[-15.30, 598.250]`.
```python
theta = np.array([1, 1])
J, grad = linearRegCostFunction(np.concatenate([np.ones((m, 1)), X], axis=1), y, theta, 1)
print('Gradient at theta = [1, 1]: [{:.6f}, {:.6f}] '.format(*grad))
print(' (this value should be about [-15.303016, 598.250744])\n')
```
Gradient at theta = [1, 1]: [-15.303016, 598.250744]
(this value should be about [-15.303016, 598.250744])
*You should now submit your solutions.*
```python
grader[2] = linearRegCostFunction
grader.grade()
```
Submitting Solutions | Programming Exercise regularized-linear-regression-and-bias-variance
Use token from last successful submission (fidajisa@hotmail.com)? (Y/n): Y
Part Name | Score | Feedback
--------- | ----- | --------
Validation Curve | 25 / 25 | Nice work!
Regularized Linear Regression Cost Function | 25 / 25 | Nice work!
Regularized Linear Regression Gradient | 0 / 20 | Your answer is incorrect.
Learning Curve | 0 / 10 | Your answer is incorrect.
Polynomial Feature Mapping | 0 / 20 | Your answer is incorrect.
--------------------------------
| 50 / 100 |
### Fitting linear regression
Once your cost function and gradient are working correctly, the next cell will run the code in `trainLinearReg` (found in the module `utils.py`) to compute the optimal values of $\theta$. This training function uses `scipy`'s optimization module to minimize the cost function.
In this part, we set regularization parameter $\lambda$ to zero. Because our current implementation of linear regression is trying to fit a 2-dimensional $\theta$, regularization will not be incredibly helpful for a $\theta$ of such low dimension. In the later parts of the exercise, you will be using polynomial regression with regularization.
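For intuition only, here is a hedged guess at what such a wrapper might look like internally; the actual `utils.trainLinearReg` shipped with the exercise may use a different optimizer or options:

```python
from scipy import optimize

def train_linear_reg_sketch(costFunction, X, y, lambda_=0.0, maxiter=200):
    # Hypothetical sketch of a trainLinearReg-style helper using scipy's optimizer
    initial_theta = np.zeros(X.shape[1])
    # costFunction returns (cost, gradient), which is what jac=True expects
    res = optimize.minimize(lambda t: costFunction(X, y, t, lambda_),
                            initial_theta,
                            jac=True,
                            method='TNC',
                            options={'maxiter': maxiter})
    return res.x
```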
Finally, the code in the next cell should also plot the best fit line, which should look like the figure below.
The best fit line tells us that the model is not a good fit to the data because the data has a non-linear pattern. While visualizing the best fit as shown is one possible way to debug your learning algorithm, it is not always easy to visualize the data and model. In the next section, you will implement a function to generate learning curves that can help you debug your learning algorithm even if it is not easy to visualize the
data.
```python
# add a column of ones for the intercept term
X_aug = np.concatenate([np.ones((m, 1)), X], axis=1)
theta = utils.trainLinearReg(linearRegCostFunction, X_aug, y, lambda_=0)
# Plot fit over the data
pyplot.plot(X, y, 'ro', ms=10, mec='k', mew=1.5)
pyplot.xlabel('Change in water level (x)')
pyplot.ylabel('Water flowing out of the dam (y)')
pyplot.plot(X, np.dot(X_aug, theta), '--', lw=2);
```
<a id="section3"></a>
## 2 Bias-variance
An important concept in machine learning is the bias-variance tradeoff. Models with high bias are not complex enough for the data and tend to underfit, while models with high variance overfit to the training data.
In this part of the exercise, you will plot training and test errors on a learning curve to diagnose bias-variance problems.
### 2.1 Learning Curves
You will now implement code to generate the learning curves that will be useful in debugging learning algorithms. Recall that a learning curve plots training and cross validation error as a function of training set size. Your job is to fill in the function `learningCurve` in the next cell, so that it returns a vector of errors for the training set and cross validation set.
To plot the learning curve, we need a training and cross validation set error for different training set sizes. To obtain different training set sizes, you should use different subsets of the original training set `X`. Specifically, for a training set size of $i$, you should use the first $i$ examples (i.e., `X[:i, :]`
and `y[:i]`).
You can use the `trainLinearReg` function (by calling `utils.trainLinearReg(...)`) to find the $\theta$ parameters. Note that the `lambda_` is passed as a parameter to the `learningCurve` function.
After learning the $\theta$ parameters, you should compute the error on the training and cross validation sets. Recall that the training error for a dataset is defined as
$$ J_{\text{train}} = \frac{1}{2m} \left[ \sum_{i=1}^m \left(h_\theta \left( x^{(i)} \right) - y^{(i)} \right)^2 \right] $$
In particular, note that the training error does not include the regularization term. One way to compute the training error is to use your existing cost function and set $\lambda$ to 0 only when using it to compute the training error and cross validation error. When you are computing the training set error, make sure you compute it on the training subset (i.e., `X[:n,:]` and `y[:n]`) instead of the entire training set. However, for the cross validation error, you should compute it over the entire cross validation set. You should store
the computed errors in the vectors `error_train` and `error_val`.
<a id="func2"></a>
```python
def learningCurve(X, y, Xval, yval, lambda_=0):
"""
Generates the train and cross validation set errors needed to plot a learning curve
returns the train and cross validation set errors for a learning curve.
In this function, you will compute the train and test errors for
dataset sizes from 1 up to m. In practice, when working with larger
datasets, you might want to do this in larger intervals.
Parameters
----------
X : array_like
The training dataset. Matrix with shape (m x n + 1) where m is the
total number of examples, and n is the number of features
before adding the bias term.
y : array_like
The functions values at each training datapoint. A vector of
shape (m, ).
Xval : array_like
The validation dataset. Matrix with shape (m_val x n + 1) where m is the
total number of examples, and n is the number of features
before adding the bias term.
yval : array_like
The functions values at each validation datapoint. A vector of
shape (m_val, ).
lambda_ : float, optional
The regularization parameter.
Returns
-------
error_train : array_like
A vector of shape m. error_train[i] contains the training error for
i examples.
error_val : array_like
A vector of shape m. error_val[i] contains the validation error for
i training examples.
Instructions
------------
Fill in this function to return training errors in error_train and the
cross validation errors in error_val. i.e., error_train[i] and
error_val[i] should give you the errors obtained after training on i examples.
Notes
-----
- You should evaluate the training error on the first i training
examples (i.e., X[:i, :] and y[:i]).
For the cross-validation error, you should instead evaluate on
the _entire_ cross validation set (Xval and yval).
- If you are using your cost function (linearRegCostFunction) to compute
the training and cross validation error, you should call the function with
the lambda argument set to 0. Do note that you will still need to use
lambda when running the training to obtain the theta parameters.
Hint
----
You can loop over the examples with the following:
for i in range(1, m+1):
# Compute train/cross validation errors using training examples
# X[:i, :] and y[:i], storing the result in
# error_train[i-1] and error_val[i-1]
....
"""
# Number of training examples
m = y.size
# You need to return these values correctly
error_train = np.zeros(m)
error_val = np.zeros(m)
# ====================== YOUR CODE HERE ======================
for i in range(1, m + 1):
theta_t = utils.trainLinearReg(linearRegCostFunction, X[:i], y[:i], lambda_ = lambda_)
error_train[i - 1], _ = linearRegCostFunction(X[:i], y[:i], theta_t, lambda_ = 0)
error_val[i - 1], _ = linearRegCostFunction(Xval, yval, theta_t, lambda_ = 0)
# =============================================================
return error_train, error_val
```
When you are finished implementing the function `learningCurve`, executing the next cell prints the learning curve values and produces a plot similar to the figure below.
In the learning curve figure, you can observe that both the train error and cross validation error are high when the number of training examples is increased. This reflects a high bias problem in the model - the linear regression model is too simple and is unable to fit our dataset well. In the next section, you will implement polynomial regression to fit a better model for this dataset.
```python
X_aug = np.concatenate([np.ones((m, 1)), X], axis=1)
Xval_aug = np.concatenate([np.ones((yval.size, 1)), Xval], axis=1)
error_train, error_val = learningCurve(X_aug, y, Xval_aug, yval, lambda_=0)
pyplot.plot(np.arange(1, m+1), error_train, np.arange(1, m+1), error_val, lw=2)
pyplot.title('Learning curve for linear regression')
pyplot.legend(['Train', 'Cross Validation'])
pyplot.xlabel('Number of training examples')
pyplot.ylabel('Error')
pyplot.axis([0, 13, 0, 150])
print('# Training Examples\tTrain Error\tCross Validation Error')
for i in range(m):
print(' \t%d\t\t%f\t%f' % (i+1, error_train[i], error_val[i]))
```
*You should now submit your solutions.*
```python
grader[3] = learningCurve
grader.grade()
```
Submitting Solutions | Programming Exercise regularized-linear-regression-and-bias-variance
Use token from last successful submission (fidajisa@hotmail.com)? (Y/n): Y
Part Name | Score | Feedback
--------- | ----- | --------
Validation Curve | 25 / 25 | Nice work!
Regularized Linear Regression Cost Function | 25 / 25 | Nice work!
Regularized Linear Regression Gradient | 20 / 20 | Nice work!
Learning Curve | 0 / 10 | Your answer is incorrect.
Polynomial Feature Mapping | 0 / 20 | Your answer is incorrect.
--------------------------------
| 70 / 100 |
<a id="section4"></a>
## 3 Polynomial regression
The problem with our linear model was that it was too simple for the data
and resulted in underfitting (high bias). In this part of the exercise, you will address this problem by adding more features. For polynomial regression, our hypothesis has the form:
$$
\begin{align}
h_\theta(x) &= \theta_0 + \theta_1 \times (\text{waterLevel}) + \theta_2 \times (\text{waterLevel})^2 + \cdots + \theta_p \times (\text{waterLevel})^p \\
& = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \cdots + \theta_p x_p
\end{align}
$$
Notice that by defining $x_1 = (\text{waterLevel})$, $x_2 = (\text{waterLevel})^2$ , $\cdots$, $x_p =
(\text{waterLevel})^p$, we obtain a linear regression model where the features are the various powers of the original value (waterLevel).
Now, you will add more features using the higher powers of the existing feature $x$ in the dataset. Your task in this part is to complete the code in the function `polyFeatures` in the next cell. The function should map the original training set $X$ of size $m \times 1$ into its higher powers. Specifically, when a training set $X$ of size $m \times 1$ is passed into the function, the function should return a $m \times p$ matrix `X_poly`, where column 1 holds the original values of X, column 2 holds the values of $X^2$, column 3 holds the values of $X^3$, and so on. Note that you don’t have to account for the zero-eth power in this function.
<a id="polyFeatures"></a>
```python
def polyFeatures(X, p):
"""
Maps X (1D vector) into the p-th power.
Parameters
----------
X : array_like
A data vector of size m, where m is the number of examples.
p : int
The polynomial power to map the features.
Returns
-------
X_poly : array_like
A matrix of shape (m x p) where p is the polynomial
power and m is the number of examples. That is:
X_poly[i, :] = [X[i], X[i]**2, X[i]**3 ... X[i]**p]
Instructions
------------
Given a vector X, return a matrix X_poly where the p-th column of
X contains the values of X to the p-th power.
"""
# You need to return the following variables correctly.
X_poly = np.zeros((X.shape[0], p))
# ====================== YOUR CODE HERE ======================
for i in range(p):
X_poly[:, i] = X[:, 0] ** (i + 1)
# ============================================================
return X_poly
```
Now you have a function that will map features to a higher dimension. The next cell will apply it to the training set, the test set, and the cross validation set.
```python
p = 8
# Map X onto Polynomial Features and Normalize
X_poly = polyFeatures(X, p)
X_poly, mu, sigma = utils.featureNormalize(X_poly)
X_poly = np.concatenate([np.ones((m, 1)), X_poly], axis=1)
# Map X_poly_test and normalize (using mu and sigma)
X_poly_test = polyFeatures(Xtest, p)
X_poly_test -= mu
X_poly_test /= sigma
X_poly_test = np.concatenate([np.ones((ytest.size, 1)), X_poly_test], axis=1)
# Map X_poly_val and normalize (using mu and sigma)
X_poly_val = polyFeatures(Xval, p)
X_poly_val -= mu
X_poly_val /= sigma
X_poly_val = np.concatenate([np.ones((yval.size, 1)), X_poly_val], axis=1)
print('Normalized Training Example 1:')
X_poly[0, :]
```
Normalized Training Example 1:
array([ 1. , -0.36214078, -0.75508669, 0.18222588, -0.70618991,
0.30661792, -0.59087767, 0.3445158 , -0.50848117])
*You should now submit your solutions.*
```python
grader[4] = polyFeatures
grader.grade()
```
Submitting Solutions | Programming Exercise regularized-linear-regression-and-bias-variance
Use token from last successful submission (fidajisa@hotmail.com)? (Y/n): Y
Part Name | Score | Feedback
--------- | ----- | --------
Validation Curve | 25 / 25 | Nice work!
Regularized Linear Regression Cost Function | 25 / 25 | Nice work!
Regularized Linear Regression Gradient | 20 / 20 | Nice work!
Learning Curve | 10 / 10 | Nice work!
Polynomial Feature Mapping | 0 / 20 | Your answer is incorrect.
--------------------------------
| 80 / 100 |
## 3.1 Learning Polynomial Regression
After you have completed the function `polyFeatures`, we will proceed to train polynomial regression using your linear regression cost function.
Keep in mind that even though we have polynomial terms in our feature vector, we are still solving a linear regression optimization problem. The polynomial terms have simply turned into features that we can use for linear regression. We are using the same cost function and gradient that you wrote for the earlier part of this exercise.
For this part of the exercise, you will be using a polynomial of degree 8. It turns out that if we run the training directly on the projected data, it will not work well because the features are badly scaled (e.g., an example with $x = 40$ will now have a feature $x_8 = 40^8 = 6.5 \times 10^{12}$). Therefore, you will
need to use feature normalization.
Before learning the parameters $\theta$ for the polynomial regression, we first call `featureNormalize` and normalize the features of the training set, storing the mu, sigma parameters separately. We have already implemented this function for you (in `utils.py` module) and it is the same function from the first exercise.
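Feature normalization of this kind is typically just a per-column z-score; a minimal sketch is shown below (the provided `utils.featureNormalize` may differ in details such as the degrees of freedom used for the standard deviation):

```python
import numpy as np

def feature_normalize_sketch(X):
    mu = np.mean(X, axis=0)                 # per-feature mean
    sigma = np.std(X, axis=0, ddof=1)       # per-feature standard deviation
    X_norm = (X - mu) / sigma               # z-score each column
    return X_norm, mu, sigma
```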
After learning the parameters $\theta$, you should see two plots generated for polynomial regression with $\lambda = 0$, which should be similar to the ones here:
*(Figure: polynomial regression fit for $\lambda = 0$ on the left and the corresponding learning curve on the right.)*
You should see that the polynomial fit is able to follow the datapoints very well, thus, obtaining a low training error. The figure on the right shows that the training error essentially stays zero for all numbers of training samples. However, the polynomial fit is very complex and even drops off at the extremes. This is an indicator that the polynomial regression model is overfitting the training data and will not generalize well.
To better understand the problems with the unregularized ($\lambda = 0$) model, you can see that the learning curve shows the same effect where the training error is low, but the cross validation error is high. There is a gap between the training and cross validation errors, indicating a high variance problem.
```python
lambda_ = 0
theta = utils.trainLinearReg(linearRegCostFunction, X_poly, y,
lambda_=lambda_, maxiter=55)
# Plot training data and fit
pyplot.plot(X, y, 'ro', ms=10, mew=1.5, mec='k')
utils.plotFit(polyFeatures, np.min(X), np.max(X), mu, sigma, theta, p)
pyplot.xlabel('Change in water level (x)')
pyplot.ylabel('Water flowing out of the dam (y)')
pyplot.title('Polynomial Regression Fit (lambda = %f)' % lambda_)
pyplot.ylim([-20, 50])
pyplot.figure()
error_train, error_val = learningCurve(X_poly, y, X_poly_val, yval, lambda_)
pyplot.plot(np.arange(1, 1+m), error_train, np.arange(1, 1+m), error_val)
pyplot.title('Polynomial Regression Learning Curve (lambda = %f)' % lambda_)
pyplot.xlabel('Number of training examples')
pyplot.ylabel('Error')
pyplot.axis([0, 13, 0, 100])
pyplot.legend(['Train', 'Cross Validation'])
print('Polynomial Regression (lambda = %f)\n' % lambda_)
print('# Training Examples\tTrain Error\tCross Validation Error')
for i in range(m):
print(' \t%d\t\t%f\t%f' % (i+1, error_train[i], error_val[i]))
```
One way to combat the overfitting (high-variance) problem is to add regularization to the model. In the next section, you will get to try different $\lambda$ parameters to see how regularization can lead to a better model.
### 3.2 Optional (ungraded) exercise: Adjusting the regularization parameter
In this section, you will get to observe how the regularization parameter affects the bias-variance of regularized polynomial regression. You should now modify the lambda parameter and try $\lambda = 1, 100$. For each of these values, the script should generate a polynomial fit to the data and also a learning curve.
For $\lambda = 1$, the generated plots should look like the figure below. You should see a polynomial fit that follows the data trend well (left) and a learning curve (right) showing that both the cross validation and training error converge to a relatively low value. This shows the $\lambda = 1$ regularized polynomial regression model does not have the high-bias or high-variance problems. In effect, it achieves a good trade-off between bias and variance.
*(Figure: polynomial fit for $\lambda = 1$ on the left and the corresponding learning curve on the right.)*
For $\lambda = 100$, you should see a polynomial fit (figure below) that does not follow the data well. In this case, there is too much regularization and the model is unable to fit the training data.
*You do not need to submit any solutions for this optional (ungraded) exercise.*
<a id="section5"></a>
### 3.3 Selecting $\lambda$ using a cross validation set
From the previous parts of the exercise, you observed that the value of $\lambda$ can significantly affect the results of regularized polynomial regression on the training and cross validation set. In particular, a model without regularization ($\lambda = 0$) fits the training set well, but does not generalize. Conversely, a model with too much regularization ($\lambda = 100$) does not fit the training set and testing set well. A good choice of $\lambda$ (e.g., $\lambda = 1$) can provide a good fit to the data.
In this section, you will implement an automated method to select the $\lambda$ parameter. Concretely, you will use a cross validation set to evaluate how good each $\lambda$ value is. After selecting the best $\lambda$ value using the cross validation set, we can then evaluate the model on the test set to estimate
how well the model will perform on actual unseen data.
Your task is to complete the code in the function `validationCurve`. Specifically, you should use the `utils.trainLinearReg` function to train the model using different values of $\lambda$ and compute the training error and cross validation error. You should try $\lambda$ in the following range: {0, 0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10}.
<a id="validationCurve"></a>
```python
def validationCurve(X, y, Xval, yval):
"""
Generate the train and validation errors needed to plot a validation
curve that we can use to select lambda_.
Parameters
----------
X : array_like
The training dataset. Matrix with shape (m x n) where m is the
total number of training examples, and n is the number of features
including any polynomial features.
y : array_like
The functions values at each training datapoint. A vector of
shape (m, ).
Xval : array_like
The validation dataset. Matrix with shape (m_val x n) where m is the
total number of validation examples, and n is the number of features
including any polynomial features.
yval : array_like
The functions values at each validation datapoint. A vector of
shape (m_val, ).
Returns
-------
lambda_vec : list
The values of the regularization parameters which were used in
cross validation.
error_train : list
The training error computed at each value for the regularization
parameter.
error_val : list
The validation error computed at each value for the regularization
parameter.
Instructions
------------
Fill in this function to return training errors in `error_train` and
the validation errors in `error_val`. The vector `lambda_vec` contains
the different lambda parameters to use for each calculation of the
errors, i.e, `error_train[i]`, and `error_val[i]` should give you the
errors obtained after training with `lambda_ = lambda_vec[i]`.
Note
----
You can loop over lambda_vec with the following:
for i in range(len(lambda_vec))
lambda = lambda_vec[i]
# Compute train / val errors when training linear
# regression with regularization parameter lambda_
# You should store the result in error_train[i]
# and error_val[i]
....
"""
# Selected values of lambda (you should not change this)
lambda_vec = [0, 0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10]
# You need to return these variables correctly.
error_train = np.zeros(len(lambda_vec))
error_val = np.zeros(len(lambda_vec))
# ====================== YOUR CODE HERE ======================
for i in range(len(lambda_vec)):
lambda_try = lambda_vec[i]
theta_t = utils.trainLinearReg(linearRegCostFunction, X, y, lambda_ = lambda_try)
error_train[i], _ = linearRegCostFunction(X, y, theta_t, lambda_ = 0)
error_val[i], _ = linearRegCostFunction(Xval, yval, theta_t, lambda_ = 0)
# ============================================================
return lambda_vec, error_train, error_val
```
After you have completed the code, the next cell will run your function and plot a cross validation curve of error vs. $\lambda$ that allows you to select which $\lambda$ parameter to use. You should see a plot similar to the figure below.
In this figure, we can see that the best value of $\lambda$ is around 3. Due to randomness
in the training and validation splits of the dataset, the cross validation error can sometimes be lower than the training error.
```python
lambda_vec, error_train, error_val = validationCurve(X_poly, y, X_poly_val, yval)
pyplot.plot(lambda_vec, error_train, '-o', lambda_vec, error_val, '-o', lw=2)
pyplot.legend(['Train', 'Cross Validation'])
pyplot.xlabel('lambda')
pyplot.ylabel('Error')
print('lambda\t\tTrain Error\tValidation Error')
for i in range(len(lambda_vec)):
print(' %f\t%f\t%f' % (lambda_vec[i], error_train[i], error_val[i]))
```
*You should now submit your solutions.*
```python
grader[5] = validationCurve
grader.grade()
```
Submitting Solutions | Programming Exercise regularized-linear-regression-and-bias-variance
Use token from last successful submission (fidajisa@hotmail.com)? (Y/n): Y
Part Name | Score | Feedback
--------- | ----- | --------
Validation Curve | 25 / 25 | Nice work!
Regularized Linear Regression Cost Function | 25 / 25 | Nice work!
Regularized Linear Regression Gradient | 20 / 20 | Nice work!
Learning Curve | 10 / 10 | Nice work!
Polynomial Feature Mapping | 20 / 20 | Nice work!
--------------------------------
| 100 / 100 |
### 3.4 Optional (ungraded) exercise: Computing test set error
In the previous part of the exercise, you implemented code to compute the cross validation error for various values of the regularization parameter $\lambda$. However, to get a better indication of the model’s performance in the real world, it is important to evaluate the “final” model on a test set that was not used in any part of training (that is, it was neither used to select the $\lambda$ parameters, nor to learn the model parameters $\theta$). For this optional (ungraded) exercise, you should compute the test error using the best value of $\lambda$ you found. In our cross validation, we obtained a test error of 3.8599 for $\lambda = 3$.
*You do not need to submit any solutions for this optional (ungraded) exercise.*
```python
```
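A hedged sketch of how this optional step could be done, assuming the variables from the earlier cells (`X_poly`, `y`, `X_poly_test`, `ytest`) are still in scope and $\lambda = 3$ is the value chosen from the validation curve:

```python
# Optional sketch: train with the chosen lambda, then evaluate on the test set with lambda = 0
lambda_best = 3
theta = utils.trainLinearReg(linearRegCostFunction, X_poly, y, lambda_=lambda_best)
error_test, _ = linearRegCostFunction(X_poly_test, ytest, theta, lambda_=0)
print('Test set error for lambda = %s: %f (expected to be about 3.8599)' % (lambda_best, error_test))
```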
### 3.5 Optional (ungraded) exercise: Plotting learning curves with randomly selected examples
In practice, especially for small training sets, when you plot learning curves to debug your algorithms, it is often helpful to average across multiple sets of randomly selected examples to determine the training error and cross validation error.
Concretely, to determine the training error and cross validation error for $i$ examples, you should first randomly select $i$ examples from the training set and $i$ examples from the cross validation set. You will then learn the parameters $\theta$ using the randomly chosen training set and evaluate the parameters $\theta$ on the randomly chosen training set and cross validation set. The above steps should then be repeated multiple times (say 50) and the averaged error should be used to determine the training error and cross validation error for $i$ examples.
For this optional (ungraded) exercise, you should implement the above strategy for computing the learning curves. For reference, the figure below shows the learning curve we obtained for polynomial regression with $\lambda = 0.01$. Your figure may differ slightly due to the random selection of examples.
*You do not need to submit any solutions for this optional (ungraded) exercise.*
```python
```
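A hedged sketch of the averaging strategy described above; the exact curve will vary with the random draws:

```python
# Optional sketch: learning curves averaged over randomly selected examples, lambda = 0.01
lambda_ = 0.01
num_trials = 50
error_train_rand = np.zeros(m)
error_val_rand = np.zeros(m)

for i in range(1, m + 1):
    for _ in range(num_trials):
        train_idx = np.random.choice(m, i, replace=False)
        val_idx = np.random.choice(X_poly_val.shape[0], i, replace=False)
        theta_t = utils.trainLinearReg(linearRegCostFunction, X_poly[train_idx], y[train_idx], lambda_=lambda_)
        error_train_rand[i-1] += linearRegCostFunction(X_poly[train_idx], y[train_idx], theta_t, lambda_=0)[0]
        error_val_rand[i-1] += linearRegCostFunction(X_poly_val[val_idx], yval[val_idx], theta_t, lambda_=0)[0]

error_train_rand /= num_trials
error_val_rand /= num_trials

pyplot.plot(np.arange(1, m+1), error_train_rand, np.arange(1, m+1), error_val_rand, lw=2)
pyplot.title('Averaged learning curve (lambda = %g)' % lambda_)
pyplot.xlabel('Number of training examples')
pyplot.ylabel('Error')
pyplot.legend(['Train', 'Cross Validation']);
```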
### Calculates price-equilibrium in the market for blockchain records, with and without the lightning network.
### Includes symbolic calculations and plots for specific parameter values.
```python
import numpy as np
import sympy
sympy.init_printing(use_unicode=True)
from sympy import symbols,simplify,diff,latex,Piecewise
from sympy.solvers import solve
from IPython.display import display
from typing import Callable
from sympy.utilities.lambdify import lambdify, implemented_function
%matplotlib inline
import matplotlib.pyplot as plt
def simplified(exp, title=None):
simp = simplify(exp)
if simplified.LOG:
if title: display(title,simp)
else: display(simp)
return simp
simplified.LOG = True
def firstOrderCondition(exp, var):
diffExp = simplified(diff(exp, var))
solutions = solve(diffExp, var)
if firstOrderCondition.LOG:
display(solutions)
return solutions
firstOrderCondition.LOG = True
class Result(object): # a class for holding results of calculations
def __repr__(self): return self.__dict__.__repr__()
def display(self):
for k,v in sorted(self.__dict__.items()):
display(k,v)
def subs(self, params):
ans = Result()
for k,v in sorted(self.__dict__.items()):
if hasattr(v,"subs"):
ans.__dict__[k] = v.subs(params)
else:
ans.__dict__[k] = v
return ans
```
# Symbolic calculations
```python
a,p,r,b,vmax,zmin,zmax,beta = symbols('a \\phi r z v_{\max} z_{\min} z_{\max} \\beta', positive=True,finite=True,real=True)
w,T,D,L,n,Supply = symbols('w T \\Delta \\ell n \\tau', positive=True,finite=True,real=True)
D,Supply,p
```
```python
def exactCostPerDay(T):
return (a*p + w*b*( (1+r)**T - 1 )) / T
def approxCostPerDay(T):
return a*p/T + w*b*r
def symmetricLifetime(w):
return w**2/4/L
def asymmetricLifetime(w):
return w / D
uniformPDF = Piecewise( (1 / zmax , b<zmax), (0, True) )
powerlawPDF = Piecewise( (0 , b<zmin), (zmin / b**2, True) )
display(sympy.integrate(uniformPDF, (b, 0, sympy.oo))) # should be 1
display(sympy.integrate(powerlawPDF, (b, 0, sympy.oo))) # should be 1
display(sympy.integrate(b*uniformPDF, (b, 0, sympy.oo))) # should be zmax/2
display(sympy.integrate(b*powerlawPDF, (b, 0, sympy.oo))) # should be infinity!
```
```python
params = {
L: 10, # total transfers per day
D: 6, # delta transfers per day
beta: 0.01, # value / transfer-size
r: 4/100/365, # interest rate per day
a: 1.1, # records per reset tx
Supply: 288000, # records per day
zmin: 0.001, # min transfer size (for power law distribution)
zmax: 1, # max transfer size (for uniform distribution)
}
```
```python
def calculateLifetime(costPerDay:Callable, channelLifetime:Callable, wSolutionIndex:int):
T = simplified(channelLifetime(w), "T")
CPD = simplified(costPerDay(T), "CPD")
optimal = Result()
optimal.w = simplified(firstOrderCondition(CPD,w)[wSolutionIndex], "Optimal channel funding (w)")
optimal.T = simplified(T.subs(w,optimal.w), "optimal channel lifetime (T)")
optimal.CPD = simplified(CPD.subs(w,optimal.w), "Cost-per-day")
optimal.RPD = simplified(a / optimal.T, "Potential records per day")
optimal.C = simplified(optimal.CPD*optimal.T, "Cost between resets")
optimal.V = simplified(optimal.T*L*beta*b, "Value between resets")
optimal.VCR1 = 1
optimal.VCR2 = simplified(optimal.V / optimal.C, "Value/Cost Ratio of lightning")
optimal.VCR3 = simplified(beta*b / p, "Value/Cost Ratio of blockchain")
optimal.b12 = simplified(solve(optimal.VCR1-optimal.VCR2,b)[0],"b below which an agent prefers nop to lightning")
optimal.b13 = simplified(solve(optimal.VCR1-optimal.VCR3,b)[0],"b below which an agent prefers nop to blockchain")
optimal.b23 = simplified(solve(optimal.VCR2-optimal.VCR3,b)[0],"b below which an agent prefers lightning to blockchain")
# Calculate threshold prices. This part is relevant only for uniform valuations.
optimal.p12 = simplified(solve(optimal.b12-zmax,p)[0],"price above which all agents prefer nop to lightning")
optimal.p13 = simplified(solve(optimal.b13-zmax,p)[0],"price above which all agents prefer nop to blockchain")
optimal.p23 = simplified(solve(optimal.b23-zmax,p)[0],"price above which all agents prefer lightning to blockchain")
# substitute the numeric params:
numeric = optimal.subs(params)
numeric.b23 = numeric.b23.evalf()
numeric.p23 = numeric.p23.evalf()
return (optimal,numeric)
```
```python
simplified.LOG = False
firstOrderCondition.LOG = False
(asymmetricSymbolic,asymmetricNumeric) = calculateLifetime(approxCostPerDay,asymmetricLifetime,wSolutionIndex=0)
```
```python
#asymmetricSymbolic.display()
asymmetricNumeric.display()
```
```python
simplified.LOG = False
firstOrderCondition.LOG = False
(symmetricSymbolic,symmetricNumeric) = calculateLifetime(approxCostPerDay,symmetricLifetime,wSolutionIndex=0)
```
```python
symmetricNumeric.display()
```
# Demand curves
```python
### Generic function for calculating demand - does not give plottable expressions:
def calculateDemands(optimal, valuePDF):
demand = Result()
demand.withLightning = simplified(
sympy.integrate(a / optimal.T * valuePDF, (b, optimal.b12,optimal.b23)) +\
sympy.integrate(L * valuePDF, (b, optimal.b23,np.inf)),
"demand with lightning"
)
demand.withoutLightning = simplified(
sympy.integrate(L * valuePDF, (b, optimal.b13,np.inf)),
"demand without lightning"
)
numeric = demand.subs(params)
return (demand,numeric)
simplified.LOG = True
asymmetricSymbolicUniform,asymmetricNumericUniform = calculateDemands(asymmetricSymbolic, uniformPDF)
aymmetricSymbolicPowerlaw,asymmetricNumericPowerlaw = calculateDemands(asymmetricSymbolic, powerlawPDF)
asymmetricNumericUniform.display()
asymmetricNumericPowerlaw.display()
```
# Plots
```python
plotSymmetric = True
plotAsymmetric = False
def plotSymbolic(xRange, yExpression, xVariable, style, label):
plt.plot(xRange, [yExpression.subs(xVariable,xValue) for xValue in xRange], style, label=label)
def plotDemandCurves(priceRange, demandWithoutLightning, demandAsymmetric, demandSymmetric):
global plotSymmetric, plotAsymmetric
plotSymbolic(priceRange, demandWithoutLightning, p, "r-",label="no lightning")
if plotAsymmetric:
plotSymbolic(priceRange, demandAsymmetric, p, "b.",label="asymmetric")
if plotSymmetric:
plotSymbolic(priceRange, demandSymmetric, p, "g--",label="symmetric")
plt.gca().set_ylim(-1,11)
plt.xlabel("blockchain fee $\\phi$ [bitcoins]")
plt.ylabel("Demand of a single pair [records/day]")
plt.legend(loc=0)
def plotTxsCurves(priceRange, txsBlockchain, txsLightning):
txsBlockchain = txsBlockchain.subs(params)
txsLightning = txsLightning.subs(params)
plotSymbolic(priceRange, txsBlockchain, p, "r--",label="blockchain")
plotSymbolic(priceRange, txsLightning, p, "b.",label="lightning")
plotSymbolic(priceRange, txsLightning+txsBlockchain, p, "k-",label="total")
plt.gca().set_ylim(-1,11)
plt.xlabel("blockchain fee $\\phi$ [bitcoins]")
plt.ylabel("# Transactions per day")
plt.legend(loc=0)
def plotLifetimeCurves(priceRange, timeAsymmetric, timeSymmetric):
global plotSymmetric, plotAsymmetric
if plotAsymmetric:
plotSymbolic(priceRange, timeAsymmetric, p, "b.",label="asymmetric")
if plotSymmetric:
plotSymbolic(priceRange, timeSymmetric, p, "g--",label="symmetric")
plt.xlabel("blockchain fee $\\phi$ [bitcoins]")
plt.ylabel("Maximum channel lifetime [days]")
plt.legend(loc=0)
def plotPriceCurves(nRange, priceWithoutLightning, priceAsymmetric, priceSymmetric):
global plotSymmetric, plotAsymmetric
priceWithoutLightning = priceWithoutLightning.subs(params)
priceAsymmetric = priceAsymmetric.subs(params)
priceSymmetric = priceSymmetric.subs(params)
plotSymbolic(nRange, priceWithoutLightning, n, "r-",label="no lightning")
if plotAsymmetric and priceAsymmetric:
plotSymbolic(nRange, priceAsymmetric, n, "b.",label="asymmetric")
if plotSymmetric and priceSymmetric:
plotSymbolic(nRange, priceSymmetric, n, "g--",label="symmetric")
plt.xlabel("Number of users $n$")
plt.ylabel("Market-equilibrium price $\\phi$ [bitcoins/record]")
plt.legend(loc=0)
def plotMarketTxsCurves(nRange, priceCurve, txsBlockchain, txsLightning):
priceCurve = priceCurve.subs(params)
txsBlockchain = txsBlockchain.subs(params)
txsLightning = txsLightning.subs(params)
plotSymbolic(nRange, n*txsBlockchain.subs(p,priceCurve), n, "g--",label="blockchain")
plotSymbolic(nRange, n*txsLightning.subs(p,priceCurve), n, "b." ,label="lightning")
plotSymbolic(nRange, n*params[L], n, "k-",label="total")
plt.plot(nRange, len(nRange)*[params[Supply]], "r-", label="no lightning")
plt.xlabel("Number of users $n$")
plt.ylabel("# Transactions per day")
plt.legend(loc=0)
def plotSymbolic3(xRange, yExpression, xVariable, style, label):
plt.plot(xRange, [yExpression.subs(xVariable,xValue)*params[Supply] for xValue in xRange], style, label=label)
def plotRevenueCurves(nRange, priceWithoutLightning, priceAsymmetric, priceSymmetric):
global plotSymmetric, plotAsymmetric
plotSymbolic3(nRange, priceWithoutLightning, n, "r-",label="no lightning")
if plotAsymmetric and priceAsymmetric:
plotSymbolic3(nRange, priceAsymmetric, n, "b.",label="asymmetric")
if plotSymmetric and priceSymmetric:
plotSymbolic3(nRange, priceSymmetric, n, "g--",label="symmetric")
plt.xlabel("Number of users $n$")
plt.ylabel("Miners' revenue [bitcoins/day]")
plt.legend(loc=0)
```
## Uniform distribution
```python
def calculateDemandsUniformDistribution(optimal):
optimal.demandB13 = sympy.integrate(L / zmax, (b, optimal.b13, zmax))
optimal.demandWithoutLightningUniform = simplified(Piecewise(
(optimal.demandB13, p < optimal.p13), # b13 < zmax
(0, True)),
"demand without lightning"
)
optimal.txsWithoutLightningUniform = optimal.demandWithoutLightningUniform
optimal.demandL1 = sympy.integrate(a / optimal.T / zmax, (b, optimal.b12, optimal.b23)) # b12<b23<zmax
optimal.demandL2 = sympy.integrate(a / optimal.T / zmax, (b, optimal.b12, zmax)) # b12<zmax<b23
optimal.demandB23 = sympy.integrate(L / zmax, (b, optimal.b23, zmax)) # b23<zmax
optimal.demandWithLightningUniform = simplified(Piecewise(
(optimal.demandL1+optimal.demandB23 , p < optimal.p23), # b23 < zmax
(optimal.demandL2 , p < optimal.p12), # b12 < zmax
(0, True)),
"demand with lightning"
)
optimal.txsL1 = sympy.integrate(L / zmax, (b, optimal.b12, optimal.b23)) # b12<b23<zmax
optimal.txsL2 = sympy.integrate(L / zmax, (b, optimal.b12, zmax)) # b12<zmax<b23
optimal.txsB23 = optimal.demandB23 #= sympy.integrate(L / zmax,(b, optimal.b23, zmax)) # b23<zmax
optimal.txsLightningUniform = simplified(Piecewise(
(optimal.txsL1, p < optimal.p23), # b23 < zmax
(optimal.txsL2, p < optimal.p12), # b12 < zmax
(0, True)),
"lightning txs"
)
optimal.txsBlockchainUniform = simplified(Piecewise(
(optimal.txsB23 , p < optimal.p23), # b23 < zmax
(0, True)),
"blockchain txs"
)
optimal.txsTotalUniform = optimal.txsLightningUniform + optimal.txsBlockchainUniform
optimal.maxDemand1 = (optimal.demandL1+optimal.demandB23).subs(p,0)
optimal.minDemand1 = (optimal.demandL1+optimal.demandB23).subs(p,optimal.p23)
optimal.maxDemand2 = (optimal.demandL2).subs(p,optimal.p23)
optimal.minDemand2 = (optimal.demandL2).subs(p,optimal.p12)
def calculatePricesUniformDistribution(optimal):
price1 = simplified(solve(n*(optimal.demandL1+optimal.demandB23)-Supply, p)[0])
price2 = simplified(solve(n*optimal.demandL2-Supply, p)[0])
optimal.priceWithLightningUniform = simplified(Piecewise(
(0 , Supply > (n/2)*optimal.maxDemand1), # = maxDemand
(price1 , Supply > (n/2)*optimal.minDemand1), # = maxDemand2
(price2 , Supply > (n/2)*optimal.minDemand2),
(np.inf, True)).subs(params))
calculateDemandsUniformDistribution(asymmetricSymbolic)
asymmetricNumeric = asymmetricSymbolic.subs(params)
calculateDemandsUniformDistribution(symmetricSymbolic)
symmetricNumeric = symmetricSymbolic.subs(params)
#plot:
priceRange = np.linspace(0,1e-4,100)
plotDemandCurves(priceRange, asymmetricNumeric.demandWithoutLightningUniform, asymmetricNumeric.demandWithLightningUniform, symmetricNumeric.demandWithLightningUniform)
plt.title("Demand curves, uniformly-distributed transfer-size")
plt.savefig('../graphs/demand-curves-small-price.pdf', format='pdf', dpi=1000)
plt.show()
priceRange = np.linspace(0,0.015,100)
plotDemandCurves(priceRange, asymmetricNumeric.demandWithoutLightningUniform, asymmetricNumeric.demandWithLightningUniform, symmetricNumeric.demandWithLightningUniform)
plt.gca().set_ylim(-0.1,1)
plt.title("Demand curves, uniformly-distributed transfer-size")
plt.savefig('../graphs/demand-curves-large-price.pdf', format='pdf', dpi=1000)
plt.show()
```
```python
#plot:
priceRange = np.linspace(0,1e-4,100)
plotTxsCurves(priceRange, asymmetricSymbolic.txsBlockchainUniform, asymmetricSymbolic.txsLightningUniform)
plt.title("Transactions of a single pair, uniformly-distributed transfer-size")
plt.savefig('../graphs/txs-pair-uniform.pdf', format='pdf', dpi=1000)
plt.show()
```
```python
priceRange = np.linspace(0,0.1,100)
transferSize = params[zmax]/1
plotLifetimeCurves(priceRange, asymmetricSymbolic.T.subs(b,transferSize), symmetricSymbolic.T.subs(b,params[zmax]))
plt.title("Channel lifetime with transfer-size {} bitcoin".format(transferSize))
plt.savefig('../graphs/lifetime-curves-1.pdf', format='pdf', dpi=1000)
plt.show()
transferSize = params[zmax]/100
plotLifetimeCurves(priceRange, asymmetricSymbolic.T.subs(b,transferSize), symmetricSymbolic.T.subs(b,params[zmax]))
plt.title("Channel lifetime with transfer-size {} bitcoin".format(transferSize))
plt.savefig('../graphs/lifetime-curves-001.pdf', format='pdf', dpi=1000)
plt.show()
transferSize = params[zmax]/10000
plotLifetimeCurves(priceRange, asymmetricSymbolic.T.subs(b,transferSize), symmetricSymbolic.T.subs(b,params[zmax]))
plt.title("Channel lifetime with transfer-size {} bitcoin".format(transferSize))
plt.savefig('../graphs/lifetime-curves-00001.pdf', format='pdf', dpi=1000)
plt.show()
```
```python
### Price curves - uniform distribution
priceWithoutLightningUniform = simplified(Piecewise(
(beta*zmax*(1-Supply/(n/2)/L) , (n/2)*L>Supply),
(0,True)).subs(params))
calculatePricesUniformDistribution(asymmetricNumeric)
asymmetricNumeric = asymmetricNumeric.subs(params)
symmetricNumeric.priceWithLightningUniform = None # Erel: I do not know how to calculate it
#symmetricNumeric.priceWithLightning = simplified(Piecewise(
# (0, Supply > n*symmetricNumeric.maxDemand1),
# (price1s , Supply > n*symmetricNumeric.minDemand1),
# price2s, Supply > n*symmetricNumeric.maxDemand2), # u
# (0, True)).subs(params))
```
```python
nRange = np.linspace(0,3000000,100)
plotPriceCurves(nRange, priceWithoutLightningUniform, asymmetricNumeric.priceWithLightningUniform, asymmetricNumeric.priceWithLightningUniform)
plt.title("Price curves, uniformly-distributed transfer-size")
plt.savefig('../graphs/price-curves-uniform.pdf', format='pdf', dpi=1000)
plt.show()
#plotRevenueCurves(nRange, priceWithoutLightningUniform, asymmetricNumeric.priceWithLightningUniform, symmetricNumeric.priceWithLightningUniform)
#plt.title("Revenue curves, uniformly-distributed transfer-size")
#plt.savefig('../graphs/revenue-curves-uniform.pdf', format='pdf', dpi=1000)
```
```python
nRange = np.linspace(0,300000,100)
plotMarketTxsCurves(nRange, asymmetricNumeric.priceWithLightningUniform, asymmetricNumeric.txsBlockchainUniform, asymmetricNumeric.txsLightningUniform)
plt.title("Txs, uniformly-distributed transfer-size, asymmetric")
plt.savefig('../graphs/txs-market-uniform-asymmetric.pdf', format='pdf', dpi=1000)
plt.show()
nRange = np.linspace(0,300000,100)
plotMarketTxsCurves(nRange, asymmetricNumeric.priceWithLightningUniform, symmetricNumeric.txsBlockchainUniform, symmetricNumeric.txsLightningUniform)
plt.title("Txs, uniformly-distributed transfer-size, symmetric")
plt.savefig('../graphs/txs-market-uniform-symmetric.pdf', format='pdf', dpi=1000)
plt.show()
```
```python
```
## Power-law distribution
```python
def calculateDemandsPowerlaw(optimal):
optimal.demandB13 = sympy.integrate(L * zmin / b**2, (b, optimal.b13, np.inf))
optimal.demandBzmin = sympy.integrate(L * zmin / b**2, (b, zmin, np.inf))
optimal.demandWithoutLightningPowerlaw = simplified(Piecewise(
(optimal.demandB13, zmin < optimal.b13),
(optimal.demandBzmin, True)),
"demand without lightning"
)
optimal.demandL1 = sympy.integrate(a / optimal.T * zmin / b**2, (b, optimal.b12, optimal.b23)) # zmin<b12<b23
optimal.demandL2 = sympy.integrate(a / optimal.T * zmin / b**2, (b, zmin , optimal.b23)) # b12<zmin<b23
optimal.demandB1 = sympy.integrate(L * zmin / b**2, (b, optimal.b23, np.inf)) # zmin<b23
optimal.demandB2 = sympy.integrate(L * zmin / b**2, (b, zmin, np.inf)) # b12<b23<zmin
optimal.demandWithLightningPowerlaw = simplified(Piecewise(
(optimal.demandB2, optimal.b23 < zmin),
(optimal.demandL2+optimal.demandB1 , optimal.b12 < zmin),
(optimal.demandL1+optimal.demandB1 , True),
),
"demand with lightning"
)
optimal.txsL1 = sympy.integrate(L * zmin / b**2, (b, optimal.b12, optimal.b23)) # zmin<b12<b23
optimal.txsL2 = sympy.integrate(L * zmin / b**2, (b, zmin , optimal.b23)) # b12<zmin<b23
optimal.txsB1 = optimal.demandB1 # zmin<b23
optimal.txsB2 = optimal.demandB2 # b12<b23<zmin
optimal.txsLightningPowerlaw = simplified(Piecewise(
(0, optimal.b23 < zmin),
(optimal.txsL2 , optimal.b12 < zmin),
(optimal.txsL1 , True),
),
"txs lightning"
)
optimal.txsBlockchainPowerlaw = simplified(Piecewise(
(optimal.demandB2, optimal.b23 < zmin),
(optimal.demandB1 , True),
),
"txs blockchain"
)
optimal.maxDemand1 = (optimal.demandB2).subs(p, 0)
optimal.minDemand1 = (optimal.demandB2).subs(p, optimal.p23.subs(zmax,zmin) )
optimal.maxDemand2 = (optimal.demandL2+optimal.demandB1).subs(p, optimal.p23.subs(zmax,zmin) )
optimal.minDemand2 = (optimal.demandL2+optimal.demandB1).subs(p, optimal.p12.subs(zmax,zmin) )
def calculatePricesPowerlaw(optimal):
price1 = simplified(solve((n/2)*(optimal.demandL2+optimal.demandB1)-Supply, p)[0])
price2 = simplified(solve((n/2)*(optimal.demandL1+optimal.demandB1)-Supply, p)[0])
optimal.priceWithLightningPowerlaw = simplified(Piecewise(
(0, Supply > (n/2)*optimal.minDemand1),
(price1 , Supply > (n/2)*optimal.minDemand2), # = maxDemand1
(price2, True)))
return optimal
simplified.LOG = True
calculateDemandsPowerlaw(asymmetricSymbolic)
asymmetricNumeric = asymmetricSymbolic.subs(params)
calculateDemandsPowerlaw(symmetricSymbolic)
symmetricNumeric = symmetricSymbolic.subs(params)
```
```python
priceRange = np.linspace(0,1e-7,100)
plotDemandCurves(priceRange, asymmetricNumeric.demandWithoutLightningPowerlaw, asymmetricNumeric.demandWithLightningPowerlaw, symmetricNumeric.demandWithLightningPowerlaw)
plt.title("Demand curves, power-law-distributed transfer-size")
plt.savefig('../graphs/demand-curves-powerlaw-small-price.pdf', format='pdf', dpi=1000)
plt.show()
```
```python
priceRange = np.linspace(0,1e-4,100)
plotDemandCurves(priceRange, asymmetricNumeric.demandWithoutLightningPowerlaw, asymmetricNumeric.demandWithLightningPowerlaw, symmetricNumeric.demandWithLightningPowerlaw)
plt.title("Demand curves, power-law-distributed transfer-size")
plt.savefig('../graphs/demand-curves-powerlaw-medium-price.pdf', format='pdf', dpi=1000)
plt.show()
```
```python
priceRange = np.linspace(0,0.01,100)
plotDemandCurves(priceRange, asymmetricNumeric.demandWithoutLightningPowerlaw, asymmetricNumeric.demandWithLightningPowerlaw, symmetricNumeric.demandWithLightningPowerlaw)
plt.title("Demand curves, power-law-distributed transfer-size")
plt.gca().set_ylim(-0.01,0.1)
plt.savefig('../graphs/demand-curves-powerlaw-large-price.pdf', format='pdf', dpi=1000)
plt.show()
```
```python
priceRange = np.linspace(0,1,100)
plotDemandCurves(priceRange, asymmetricNumeric.demandWithoutLightningPowerlaw, asymmetricNumeric.demandWithLightningPowerlaw, symmetricNumeric.demandWithLightningPowerlaw)
plt.title("Demand curves, power-law-distributed transfer-size")
plt.gca().set_ylim(-0.0001,0.001)
plt.savefig('../graphs/demand-curves-powerlaw-xlarge-price.pdf', format='pdf', dpi=1000)
plt.show()
```
```python
#plot:
priceRange = np.linspace(0,1e-6,100)
plotTxsCurves(priceRange, asymmetricNumeric.txsBlockchainPowerlaw, asymmetricNumeric.txsLightningPowerlaw)
plt.title("Transactions of a single pair, power-law transfer-size")
plt.savefig('../graphs/txs-pair-powerlaw.pdf', format='pdf', dpi=1000)
plt.show()
```
```python
### Price curves - power-law distribution
priceWithoutLightningPowerlaw = simplified(Piecewise(
((n/2)*L*beta*zmin/Supply , (n/2)*L>Supply),
(0,True)))
priceWithoutLightningPowerlaw = priceWithoutLightningPowerlaw.subs(params)
calculatePricesPowerlaw(asymmetricSymbolic)
asymmetricNumeric = asymmetricSymbolic.subs(params)
calculatePricesPowerlaw(symmetricSymbolic)
symmetricNumeric = symmetricSymbolic.subs(params)
```
```python
nRange = np.linspace(0,3e6,100)
plotPriceCurves(nRange, priceWithoutLightningPowerlaw, asymmetricNumeric.priceWithLightningPowerlaw, symmetricNumeric.priceWithLightningPowerlaw)
plt.title("Price curves, power-law-distributed transfer-size")
plt.savefig('../graphs/price-curves-powerlaw-smalln.pdf', format='pdf', dpi=1000)
```
```python
nRange = np.linspace(0,3e7,100)
plotPriceCurves(nRange, priceWithoutLightningPowerlaw, asymmetricNumeric.priceWithLightningPowerlaw, symmetricNumeric.priceWithLightningPowerlaw)
plt.title("Price curves, power-law-distributed transfer-size")
plt.savefig('../graphs/price-curves-powerlaw-mediumn.pdf', format='pdf', dpi=1000)
```
```python
nRange = np.linspace(0,3e8,100)
plotPriceCurves(nRange, priceWithoutLightningPowerlaw, asymmetricNumeric.priceWithLightningPowerlaw, symmetricNumeric.priceWithLightningPowerlaw)
plt.title("Price curves, power-law-distributed transfer-size")
plt.savefig('../graphs/price-curves-powerlaw-largen.pdf', format='pdf', dpi=1000)
```
```python
nRange = np.linspace(0,3e9,100)
plotPriceCurves(nRange, priceWithoutLightningPowerlaw, asymmetricNumeric.priceWithLightningPowerlaw, symmetricNumeric.priceWithLightningPowerlaw)
plt.title("Price curves, power-law-distributed transfer-size")
plt.savefig('../graphs/price-curves-powerlaw-hugen.pdf', format='pdf', dpi=1000)
```
```python
nRange = np.linspace(0,300000,100)
plotMarketTxsCurves(nRange, asymmetricNumeric.priceWithLightningPowerlaw, asymmetricNumeric.txsBlockchainPowerlaw, asymmetricNumeric.txsLightningPowerlaw)
plt.title("Txs, powerlaw transfer-size, asymmetric")
plt.savefig('../graphs/txs-market-powerlaw-asymmetric-smalln.pdf', format='pdf', dpi=1000)
plt.show()
nRange = np.linspace(0,300000,100)
plotMarketTxsCurves(nRange, symmetricNumeric.priceWithLightningPowerlaw, asymmetricNumeric.txsBlockchainPowerlaw, asymmetricNumeric.txsLightningPowerlaw)
plt.title("Txs, powerlaw transfer-size, symmetric")
plt.savefig('../graphs/txs-market-powerlaw-symmetric-smalln.pdf', format='pdf', dpi=1000)
plt.show()
nRange = np.linspace(0,300000000,100)
plotMarketTxsCurves(nRange, asymmetricNumeric.priceWithLightningPowerlaw, asymmetricNumeric.txsBlockchainPowerlaw, asymmetricNumeric.txsLightningPowerlaw)
plt.title("Txs, powerlaw transfer-size, asymmetric")
plt.savefig('../graphs/txs-market-powerlaw-asymmetric-largen.pdf', format='pdf', dpi=1000)
plt.show()
nRange = np.linspace(0,300000000,100)
plotMarketTxsCurves(nRange, symmetricNumeric.priceWithLightningPowerlaw, asymmetricNumeric.txsBlockchainPowerlaw, asymmetricNumeric.txsLightningPowerlaw)
plt.title("Txs, powerlaw transfer-size, symmetric")
plt.savefig('../graphs/txs-market-powerlaw-symmetric-largen.pdf', format='pdf', dpi=1000)
plt.show()
```
```python
pw=np.random.power(a=0.5,size=10000)*2
plt.hist(pw)
```
```python
def first10():
for i in range(10):
yield i
first10.__len__ = lambda self: 10
for i in first10():
print(i)
print(len(first10))
```
```python
```
**Competing in different settings**
In this project we consider 2 firms who compete in the same duopolistic market. We will look at three possible competition forms, which are characterized by
**Cournot**
- Firms compete in quantities, and decide upon these independently and simultaneously
- Firms profit maximize given the other's choice of quantity
- Both firms have market power, thus one's decision of output affects the price of the other's goods
**Bertrand**
- Firms compete in prices, and decide upon these simultaneously and as if they were in a perfect competition setting
- We assume that consumers seek the lowest price
**Stackelberg**
- Firms compete in quantities, and decide upon these sequentially.
- One firm is assumed to be the leader, and the other is the follower.
In the following we will analyse a situation with linear demand with $a=101$, $b=0.1$ and marginal cost equal to 1 for all firms under all three forms of competition.
```python
#Import packages
from scipy import optimize,arange
from numpy import array
import numpy as np
import matplotlib.pyplot as plt
import random
import sympy as sm
from math import *
%matplotlib inline
from IPython.display import Markdown, display
import pandas as pd
```
**Bertand**
First we want to solve the case of Bertrand competition, i.e. a situation where the firms compete in prices. Here we consider two companies selling differentiated products. The quantity each firm sells depends on its own price $p_1$ and the competitor's price $p_2$.
$$x_1 = a - p_1 +bp_2$$
$$x_2 = a - p_2 + bp_1$$
The individual companies' profits can then be defined as:
$$\pi_1 = (p_1 - c_1)(a - p_1 +bp_2)$$
$$\pi_2 = (p_2 - c_2)(a - p_2 + bp_1).$$
For the following parts we assume that both companies have identical marginal costs and thus $$c_1 = c_2 = c.$$
```python
#Variables
a = sm.symbols('a')
p1 = sm.symbols('p1')
p2 = sm.symbols('p2')
c = sm.symbols('c')
b = sm.symbols('b')
alpha=sm.symbols('alpha')
#First we define the profit functions for the two firms:
pi1 = (p1-c)*(a-p1+b*p2)
pi2 = (p2-c)*(a-p2+b*p1)
```
In order to find the equilibrium price we first have to determine the first order conditions (FOC) by maximizing the profit functions of the individual firms with respect to the prices they offer.
$$\frac{\partial\pi_1}{\partial p_1}= a - 2p_1 + bp_2 + c=0$$
$$\frac{\partial\pi_2}{\partial p_2}= a - 2p_2 + bp_1 + c=0$$
By solving the first order conditions for the respective prices we arrive at the reaction functions.
$$p_1^{*} = \frac{a + b p_2 + c}{2}$$
$$p_2^{*} = \frac{a + b p_1 + c}{2}$$
Since the firms face identical demand and costs, they set the same price in equilibrium. Inserting one reaction function into the other and solving, we arrive at the equilibrium price:
$$p^{*} = \frac{-(a+c)}{b-2} = \frac{a+c}{2-b}$$
```python
#The first order conditions are then:
FOC1 = sm.diff(pi1, p1)
FOC2 = sm.diff(pi2, p2)
#Reactionsfuncions
RF1 = sm.solve(FOC1, p1)[0]
RF2 = sm.solve(FOC2, p2)[0]
#Equilibrium price
RF12 = sm.solve(FOC2,p1)[0]
EP1 = sm.solve(RF1-RF12,p2)[0]
print(EP1) #Equilibrium price both companies will charge
RF21 = sm.solve(FOC1,p2)[0]
EP2 = sm.solve(RF2-RF21,p1)[0]
```
-(a + c)/(b - 2)
Using the `lambdify` feature from the sympy package we can now make a function that solves the Bertrand competition for chosen parameter values. In this case we set $a$ equal to 101, $b$ equal to 0.1 and $c$ equal to 1 (the numbers were chosen arbitrarily).
```python
#make it a function to solve it
bertrand = sm.lambdify((a,b,c), EP1)
price1=bertrand(101,0.1,1)
quantity1 = 101 - 0.9*bertrand(101,0.1,1)
profit1 = (bertrand(101,0.1,1)-1)*(101 - 0.9*bertrand(101,0.1,1))
#Defining a way to print the solution in a fancy way
def fancy(string):
""" A function that prints strings in a fancy way
args:
string : a string
returns : displays the string in a fancy way"""
display(Markdown(string))
fancy(f'The equilibrium price is $p$ = {price1:.1f}, the equilibrium quantity is $x$ = {quantity1:.1f} and the profits for both firms are $π$ = {profit1:.1f}')
```
The equilibrium price is $p$ = 53.7, the equilibrium quantity is $x$ = 52.7 and the profits for both firms are $π$ = 2775.6
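To see the equilibrium graphically, we can also plot the two Bertrand reaction functions for these parameter values; they intersect at the price computed above (a sketch, reusing `price1` from the cell above):

```python
# Sketch: Bertrand reaction functions p1*(p2) and p2*(p1) for a=101, b=0.1, c=1
p_grid = np.linspace(0, 120, 200)
R1 = lambda p2: (101 + 0.1*p2 + 1)/2   # firm 1's reaction to p2
R2 = lambda p1: (101 + 0.1*p1 + 1)/2   # firm 2's reaction to p1

plt.figure(figsize=(8,6))
plt.plot(p_grid, R2(p_grid), label='Reaction of firm 2')
plt.plot(R1(p_grid), p_grid, label='Reaction of firm 1')
plt.plot(price1, price1, 'ko', label='Bertrand equilibrium')
plt.xlabel('$p_1$')
plt.ylabel('$p_2$')
plt.legend()
plt.show()
```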
**Stackelberg**
We now turn to Stackelberg competition, i.e. a sequential competition in quantities, where one company is the leader and the other is the follower.
```python
#values
p = sm.symbols('p')
x1 = sm.symbols('x1')
x2 = sm.symbols('x2')
#Giving values to our parameters
a = 101
c = 1
b = 0.1
```
Each firm chooses a quantity $x_{1}, x_{2}\geq0$, where the marginal cost of production is $c_{1}=c_{2}=c$. The market price is $p = a-b\left(x_{1}+x_{2}\right)$, so the profit function for firm $i$ is $\pi_{i}\left(x_{i}\right)=p\,x_{i}-c_{i}x_{i}$. Assume firm 1 is the leader and firm 2 is the follower. The game is solved by backward induction.
We will first define the inverse demand function $p$:
```python
#Defining price
p = a - b*(x1+x2)
```
Then, we set up the profit function for firm 1 and firm 2:
```python
#Profit function of firm 1
profit_1 = p*x1 - c*x1
profit_2 = p*x2 - c*x2
```
We will say that firm 1 decides first on its output. Firm 2 observes the decision of firm 1 and decides how to react. We can solve the problem backwards and find the optimal output of firm 2 for each possible output level of firm 1.
This means that we maximize the profit of firm 2 by differentiating it with respect to $x_2$.
```python
#Differentiating the profit of firm 2 with respect to x2
diff_profit_2 = sm.diff(profit_2, x2)
diff_profit_2
```
$\displaystyle - 0.1 x_{1} - 0.2 x_{2} + 100$
```python
#Finding optimal x2 by isolating x2 in the first order condition
opt_2 = sm.solve(diff_profit_2, x2)
opt_2[0]
```
$\displaystyle 500.0 - 0.5 x_{1}$
Then, we can substitute $x_2$ into the profit function of firm 1. Thanks to that, we have a single-variable function.
```python
#Substituting x2 into the profit function of firm 1
profit_1_subs = profit_1.subs(x2,opt_2[0])
profit_1_subs
```
$\displaystyle x_{1} \left(51.0 - 0.05 x_{1}\right) - x_{1}$
Once again, we need to differentiate the profit of firm 1 with respect to $x_1$ to find the optimal value of $x_1$.
```python
#Differentiating the profit of firm 1 with respect to x1
diff_profit_1 = sm.diff(profit_1_subs, x1)
diff_profit_1
```
$\displaystyle 50.0 - 0.1 x_{1}$
```python
#Finding the optimal value of x1
opt_1 = sm.solve(diff_profit_1, x1)
opt_1[0]
```
$\displaystyle 500.0$
Now, we can find the optimal value of $x_2$.
```python
opt_2_subs = opt_2[0].subs(x1,opt_1[0])
opt_2_subs
```
$\displaystyle 250.0$
We can also find the equilibrium market price, profit of firm 1 and profit of firm 2:
```python
#Finding equilibrium market price
p = a - b*(opt_1[0] + opt_2_subs)
#Finding profit of firm 1
profit_1 = p*opt_1[0] - c*opt_1[0]
#Finding profit of firm 2
profit_2 = p*opt_2_subs - c*opt_2_subs
fancy(f'The equilibrium price is $p$ = {p:.0f}, and the profits for both firms are $π_1$ = {profit_1:.0f} and $π_2$ = {profit_2:.0f}')
```
The equilibrium price is $p$ = 26, and the profits for both firms are $π_1$ = 12500 and $π_2$ = 6250
We now want to make a figure that shows how the best responses relate.
```python
#Defining the x-axis
qu_1 = np.linspace(0, 1000, 1000)
#Defining the lambda functions for the best response functions
BR2 = lambda x_1 : 500 - x_1/2   # firm 2's best response to firm 1's quantity
BR1 = lambda x_2 : 500 - x_2/2   # firm 1's best response to firm 2's quantity
#Plotting the figure
plt.figure(figsize=(15,10))
plt.plot(qu_1, BR2(qu_1), label = "Best Response of Firm 2")
plt.plot(BR1(qu_1), qu_1, label = "Best Response of Firm 1")
plt.grid(True)
plt.xlabel('$x_1$')
plt.ylabel('$x_2$')
plt.title('Best response functions')
plt.legend()
#To include a quantity of 0 for 1 firm, we set the axis like this
plt.xlim(0,1000) # sets the x-axis
plt.ylim(-100,600) # Sets the y-axis
plt.show()
```
**The figure shows firm 2's best response: when firm 1 chooses a quantity of 500 in the first stage, firm 2 chooses a quantity half the size of firm 1's in the second stage, and if firm 1 were to produce 1000 units, firm 2 would produce 0.**
**Cournot**
For the Cournot case we set up some functions to solve the model, in order to extend it to n firms later on. For the solution we have found inspiration in the following article: http://janboone.github.io/competition_policy_and_regulation/Collusion_Cournot/Collusion_Cournot.html
Consider the linear inverse demand curve $p\left(x_{i},x_{j}\right)=a-x_{i}-bx_{j}$, where $b\in\left(0,1\right)$ is the elasticity of substitution between the goods, i.e., the goods are heterogeneous for $0<b<1$.
We write the inverse demand:
```python
#Variables
a = sm.symbols('a')
x1 = sm.symbols('x1')
x2 = sm.symbols('x2')
c1 = sm.symbols('c1')
c2 = sm. symbols('c2')
c = sm.symbols('c')
b = sm.symbols('b')
pi = sm.symbols('pi')
```
```python
def price(x1,x2,a,b):
"""A function that computes price
args:
x1 : quantity for firm 1
x2 : quantity for firm 2
a :total demand
b : elasticity of substitution
returns a float
"""
return a-b*(x1+x2)
```
Firms have constant marginal costs, where total costs are given by $c\left(x_{1,2}\right)=cx_{1,2}$
```python
def cost(x,c):
"""A function that computes total cost
args:
c : cost for firm i
x : quantity for firm i
returns a float
"""
cost = c*x
return cost
```
Firms maximize profits given the choice of the other firm such that $$\pi\left(x_{1},x_{2}\right)=p_{1}\left(x_{1},x_{2}\right)x_{1}-c\left(x_{1}\right)$$
```python
def profit(x1,x2,c1,a,b):
"""A function that computes profit for firm 1
args:
x1 : quantity for firm 1
x2 : quantity for firm 2
c1 : cost for firm 1
a : total demand
b : elasticity of substitution
returns a float
"""
return price(x1,x2,a,b)*x1-cost(x1,c1)
```
In a Cournot setting, firm $1$ chooses $x_{1}$ taking $x_{2}$ as given. Thus, the Nash equilibrium for $x_{1}^{*},x_{2}^{*}$ holds for $$x_{1}^{*}=\text{arg}\underset{x_{1}}{\text{ max }}\pi\left(x_{1},x_{2}^{*}\right)\;\;\;\;\;\;\text{for each }i\neq j$$ Hence, taking the first order condition to find the Nash equilibrium for $x_{1}^{*},x_{2}^{*}$ yields $$\frac{\partial\pi\left(x_{1},x_{2}\right)}{\partial x_{1,2}}=0$$
We differentiate the profit function. First we need to make it into an expression in order to use `sm.diff`.
```python
#Profit for firm 1
pi_1 = profit(x1,x2,c1,a,b) #taking the defined function from above
#FOC is then
foc1 = sm.diff(pi_1,x1)
foc1
```
$\displaystyle a - b x_{1} - b \left(x_{1} + x_{2}\right) - c_{1}$
```python
#Profit for firm 2
pi_2 = profit(x2,x1,c2,a,b)
#FOC is then
foc2 = sm.diff(pi_2,x2)
foc2
```
$\displaystyle a - b x_{2} - b \left(x_{1} + x_{2}\right) - c_{2}$
We now set this equal to 0:
```python
#For firm 1
top1 = sm.solve(sm.Eq(foc1,0),x1)[0]
top1
```
$\displaystyle \frac{a - b x_{2} - c_{1}}{2 b}$
```python
#For firm 2
top2 = sm.solve(sm.Eq(foc2,0),x2)[0]
top2
```
$\displaystyle \frac{a - b x_{1} - c_{2}}{2 b}$
Thus, we can find the optimal production level of firm $1$ given the choice of $x_{2}$ by creating the function `BR`. In this case, we use the function `fmin` from `scipy.optimize` to search for the maximum of $\pi$. Since `fmin` solves minimisation problems and we want to maximise the profit function, we minimise $-\pi\left(x_{1},x_{2}\right)$.
```python
def BR(x2,c1,b,a):
"""A function that computes the optimal quantity
args
x2 : quantity for firm 2
c1 : cost for firm 1
b : elasticity of substitution
a : total demand
returns a float
"""
x1 = optimize.fmin(lambda x: -profit(x,x2,c1,a,b), 0.1, disp=0)
return x1[0]
```
Now, we must find an equilibrium for price and quantity given their best response. For this reason, consider the vector function $f\left(x_{1},x_{2}\right)$ which takes firm $1$'s best response $r_{1}\left(x_{2}\right)$ to $2$ 's production level $x_{2}$ into consideration. We can write this as $$f\left(x_{1},x_{2}\right)=\left[\begin{array}{c}
r_{1}\left(x_{2}^{*}\right)\\
r_{2}\left(x_{1}^{*}\right)
\end{array}\right]$$
We are then looking for the point where the quantity produced by each firm equals its best response to what the other firm produces, i.e., the point where:
$$\left(\begin{array}{c}
x_{1}^{*}\\
x_{2}^{*}
\end{array}\right)=\left(\begin{array}{c}
r_{1}(x_{2}^{*})\\
r_{2}(x_{1}^{*})
\end{array}\right)$$
Thus, we create a function `vec_reaction`, given by the difference between the array of quantities and the best response functions, and pass $b, a, c_{i}, c_{j}$ as a vector of parameters.
```python
def vec_reaction(x,param):
"""A function that computes the difference between the quantities and the beste response functions
args
x : quantaties
param : list of parameters
"""
return array(x)-array([BR(x[1],param[2],param[0],param[1]),BR(x[0],param[3],param[0], param[1])])
```
Now, moving on to find the actual Cournot equilibrium, we set the values of each parameter. We use `fsolve` from `scipy.optimize` to find $x$, giving `fsolve` an initial guess for the quantities and passing the parameter vector `param` through the `args` argument. This yields:
```python
param = [0.1,101,1,1] #parameters as with the other models
x0 = [0.1, 0.1] #initial guess
EQ = optimize.fsolve(vec_reaction, x0, args = (param))
fancy(f'The Nash equilibrium quantity for both firms are $x^* =[x_1^*,x_2^*]$ = [{EQ[0]:.0f},{EQ[1]:.0f}] where the equilibrium price for both firms are $p^*$ = {price(EQ[0],EQ[1],101,0.1):.0f} with a profit of $\pi^*$= {profit(EQ[0],EQ[1],1,101,0.1):.0f}')
```
The Nash equilibrium quantity for both firms are $x^* =[x_1^*,x_2^*]$ = [333,333] where the equilibrium price for both firms are $p^*$ = 34 with a profit of $\pi^*$= 11111
We now want to show how the quantity produced changes when the cost of firm 1 changes.
```python
range_c = np.linspace(1,0,20)
range_x = [optimize.fsolve(vec_reaction, x0, args = ([0.1,101, c, 1])) for c in range_c]
plt.style.use('seaborn')
plt.clf()
plt.plot(range_c,array(range_x)[:,0], label = '$x_1$')
plt.plot(range_c,array(range_x)[:,1], label = '$x_2$')
plt.xlabel('$c_1$')
plt.ylabel('$x$')
plt.title('Quantity produced by each firm as function of cost for firm 1')
plt.legend()
plt.show ()
```
In the figure, we see that as the cost of firm 1 goes up, the quantity produced by firm 1 falls, while the quantity produced by firm 2 goes up.
**Compare all three cases**
We have now tried to solve a situation with two firms competing with three different models, giving them the same parameter values. This leads to the following results:
```python
fancy(f'For Bertrand: The equilibrium price is $p$ = {price1:.0f}, the equilibrium quantity is $q$ = {quantity1:.0f} and the profits for both firms are $π$ = {profit1:.0f}')
fancy(f'For Stackelberg: the equilibrium price is $p$ = {p:.0f}, and the profits for both firms are $π_1$ = {profit_1:.0f} and $π_2$ = {profit_2:.0f}')
fancy(f'For Cournot: the equilibrium equilibrium quantity for both firms are $x^* =[x_1^*,x_2^*]$ = [{EQ[0]:.0f},{EQ[1]:.0f}] where the equilibrium price for both firms are $p^*$ = {price(EQ[0],EQ[1],101,0.1):.0f} with a profit of $\pi$= {profit(EQ[0],EQ[1],1,101,0.1):.0f}')
```
For Bertrand: The equilibrium price is $p$ = 54, the equilibrium quantity is $q$ = 53 and the profits for both firms are $π$ = 2776
For Stackelberg: the equilibrium price is $p$ = 26, and the profits for both firms are $π_1$ = 12500 and $π_2$ = 6250
For Cournot: the equilibrium equilibrium quantity for both firms are $x^* =[x_1^*,x_2^*]$ = [333,333] where the equilibrium price for both firms are $p^*$ = 34 with a profit of $\pi$= 11111
**Cournot with N > 2 firms**
In order to do an extension to one of the models, we now want to investigate the case of N > 2 firms. First we set up the situation theoretically and use Python to show this. Afterwards we will use Python to solve the model.
In the following theoretical set-up we will follow the set-up in this note, adapting it to our situation: http://people.exeter.ac.uk/dgbalken/micro07/CournotHandout.pdf
Following the notation from earlier we now add:
- Firm $i$'s output: $x_i$
- Total output in the industry: $X=x_1+x_2+...+x_n$
- The opponent's output: $x_{-i} = X-x_i$
Each firm will then max profit given the expectation of $x_{-i}$:
$$\Pi_i(x_{-i},x_i)=(p(x_{-i}+x_i)-c_i)x_i$$
As in the $n = 2$ case we use the inverse demand function in the linear case:
$$p = a-b(x_{-i}+x_i)=a-bX$$
The FOC is then:
$$\frac{\partial \Pi_i}{\partial x_i} = \frac{\partial p}{\partial x_i} \cdot x_i + p-c_i = 0$$
where $\frac{\partial p}{\partial x_i} = -b$ giving us
$$\frac{\partial\Pi_i}{\partial x_i} : -bx_i + p-c_i = 0$$
inserting the inverse demand function:
$$\frac{\partial\Pi_i}{\partial x_i} : -bx_i + (a-b(x_{-i}+x_{i}))-c_i = 0$$
solving for $x_i$ to get the reaction function for firm $i$ given its expectation for the amount the rest of the firms produce:
$$x_i(x_{-i}) = \frac{a-c_i}{2b}-\frac{1}{2}x_{-i}$$
which is the case for all firms $i=1,..,n$. We have $X=x_i+x_{-i}$ so the FOC found earlier can be rewritten to:
$$ -bx_i + (a-bX-c_i) = 0$$
which for the n firms:
$$-bX + n(a-bX)-n\bar c = 0$$
where we use the average cost level: $\bar c = \frac{c_1+c_2+...+c_n}{n}$
we can now rewrite this and get the quantity produced and the price in the market:
$$bX + n(bX) = n(a-\bar c)$$
$$(n+1)bX = n(a-\bar c)$$
Quantity is then:
$$X = \frac{n}{n+1} \frac{a-\bar c}{b}$$
Price is:
$$p = a-bX = a-b[\frac{n}{n+1} \frac{a-\bar c}{b}] = \frac{1}{n+1}a + \frac{n}{n+1}\bar c$$
Using the found functions we can now try to set a program that is able to solve cases with N firms.
```python
def Round(a):
"""A function that prints floats rounded as strings
args:
a : a float
returns a string
"""
return str("{:.2f}".format(a))
```
First we define a function that calculates the price $p = \frac{1}{n+1}a + \frac{n}{n+1}\bar c$. Rewriting using the definition of the average cost level gives $p= \frac{1}{n+1}a + \frac{c_1+c_2+...+c_n}{n+1}$, which we use in our function.
```python
def price_eq(a, n, c):
"""A function that computes price
args:
a : total demand
n : number of firms
c : cost of firm i
returns a float
"""
price = max((a+sum(c))/(1+n), 0)
return price
```
We now want to define a function that returns the quantity produced for firm $i$.
This is given by:
$$x_i = \frac{a-bX-c_i}{b}=\frac{a-c_i}{b}-\frac{n}{n+1}\frac{a-\bar c}{b} = \frac{1}{n+1}\frac{a}{b}+ \frac{n(\bar c -c_i)-c_i}{(n+1)b} = \frac{1}{n+1}\frac{1}{b}[a+ n(\bar c -c_i)-c_i]$$
use that $\bar c = \frac{c_1+c_2+...+c_n}{n}$
$$x_i = \frac{1}{n+1}\frac{1}{b}[a- n c_i+(c_1+c_2+...+c_n-c_i)]$$
which we now use to define a function, that returns the quantity:
```python
def xi_eq(a, b, n, c, i): #Returns equilibrium quantity for firm i
"""A function that computes quantity for firm i
args:
a : total demand
b : elasticity of substitution between the good
n : number of firms
c : cost of firm i
i : the number of the firm
Returns a float"""
    x_i = (1/b)*((a-n*c[i]+(sum(c)-c[i]))/(1+n))
return x_i
```
We can then define a function that returns the total quantity produced by all firms:
```python
def X_eq(a, b, n, c):
"""A function that computes quantity for the total amount of firms
args:
a : total demand
b : elasticity of substitution between the good
n : number of firms
c : cost of firm i
returns a float
"""
X = []
for i in range(0, n, 1):
X.append(xi_eq(a, b, n, c, i))
return sum(X)
```
Based on the assumption that $$c_i = \bar c = c $$ the profit for firm $i$ can be found by:
$$\Pi_{i}=(p-c)x_{i}=\left(\frac{a+nc}{n+1}-c\right)\frac{1}{n+1}\frac{1}{b}(a-c)$$
Without identical marginal costs ($c_i \neq \bar c$) the profit for firm $i$ would be:
$$\Pi_{i}=(p-c_i)x_{i}=\left(\frac{a+n\bar c}{n+1}-c_i\right)\frac{1}{n+1}\frac{1}{b}(a+n(\bar c - c_i)-c_i) = \frac{1}{(n+1)^2 b}(a+n(\bar c - c_i)-c_i)^2$$
Evaluating this profit function at the equilibrium quantity gives the equilibrium profit, which we implement below (setting the profit to zero if the firm does not produce).
```python
def profit_eq(a, b, n, c, i):
"""A function that computes profit for firm i and sets profit to zero, if there is no production
args:
a : total demand
b : elasticity of substitution between the good
n : number of firms
c : cost of firm i
i : the number of the firm
returns a float
"""
if xi_eq(a, b, n, c, i) == 0:
profit = 0
else:
profit = max((1/b)*(((a-n*c[i]+(sum(c)-c[i]))/(1+n))**2), 0)
return profit
```
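As a quick sanity check of the functions just defined (the specific values simply mirror the duopoly case from the Cournot section above), two symmetric firms with $a=101$, $b=0.1$ and $c=1$ should each earn roughly the Cournot profit of 11111 found earlier:
```python
#Sanity check: two symmetric firms reproduce the Cournot duopoly profit from above
print(profit_eq(101, 0.1, 2, [1, 1], 0))
```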
We are now ready to define a function that can solve a general case of Cournot competition with n firms. The cost for each firm is drawn from a uniform distribution using `np.random.uniform` with a fixed seed, so that the firms get the same randomly drawn cost values every time. This is done in order to compare with the solution later obtained numerically with Python.
```python
def EQ_N(a, b, n, c_low, c_high,seed):
"""A function that draws number of firms and compute quantaties, price and average profit
args:
a : total demand
b : elasticity of substitution between the good
    n : the number of firms
c_low : the lower bound of the interval for the cost
c_high : the upper bound of the interval for the cost
    seed : seed for the random draw of costs
    Returns the total quantity, the price and the cost vector"""
#drawing of costs for each firm:
c = []
np.random.seed(seed)
c = np.random.uniform(c_low,c_high,size=n)
#the price:
price = price_eq(a, n, c)
#quantities and profits
xi_list = []
profiti_list = []
#the quantity for the entire number of firms
X = X_eq(a, b, n, c)
return X, price, c
```
First we solve with 1000 firms, using the same demand parameters as in the other three models, while the costs are drawn from the interval $[1,5]$:
```python
a = 101
b = 0.1
c_low = 1
c_high = 5
seed = 1986
X,price, c_vec = EQ_N(a,b,1000,c_low,c_high,seed)
print('\n The number of firms are 1000 ' +
'\n The total amount produced is ' + Round(X) +',' + ' while the price is ' + Round(price) +
'\n')
```
The number of firms are 1000
The total amount produced is 978.80, while the price is 3.12
**Quantity and price as functions of n**
We now want to show how the amount produced depends on the number of firms n. We therefore create a list with a range for n:
```python
#list with n values
n_vec = list(range(100,1000))
#array with solutions for different n
output_vec1 = array([EQ_N(a,b,n,c_low,c_high,seed) for n in n_vec])
#plot the quantity as a function of the number of firms
fig,ax = plt.subplots()
ax.plot(n_vec,output_vec1[:,0], label = '$X$')
ax.title.set_text('Quantity as a function of the number of firms')
plt.legend()
plt.show()
```
Furthermore we can show how the price varies with number of firms n:
```python
#define solution in order to show price for different n
def solution(a,b,c_vec):
n= len(c_vec) #the number of firms will be equal to the length of the c vector
q = n/(n+1) * (a-np.mean(c_vec))/b
p = a-b*q
return q,p
output_vec = array([solution(a,b,c_vec) for c_vec in output_vec1[:,2]])
#plot the price as function of number of firms
plt.style.use('seaborn')
plt.clf()
plt.plot(n_vec,output_vec[:,1], label = '$price$')
plt.title('Price as a function of number of firms')
plt.legend()
plt.show()
```
**Solving with python**\
Up until now we have solved the model analytically and then used Python to show the results. However, the model can also be solved numerically with Python, which is done in the following section.
First we set up the same number of firms and parameter values as in the previous solution:
```python
N = 1000 # number of firms
#np.random.seed(1986)
c_low = 1
c_high = 5
c_vec = np.random.uniform(c_low,c_high,size=N)
```
We then use the profit function: $\Pi_i = (p(x_{-i}+x_{i})-c_i)x_i$ with $p=a-b(x_{-i}+x_i)$ to find the foc with sympy:
```python
c_i = sm.symbols('c_i')
x_i = sm.symbols('x_i') # x for firm i
x_minus = sm.symbols('x_{-i}') # x for the opponents
#The profit of firm 1 is then:
Pi_i = x_i*((a_1-b_1*(x_i+x_minus))-c_i)
#giving focs:
foc = sm.diff(Pi_i,x_i)
foc
```
$\displaystyle a - b x_{i} - b \left(x_{i} + x_{-i}\right) - c_{i}$
In order to use this in our solution, we rewrite $x_{i}+x_{-i} = \sum x_{i}$ using np.sum and then define a function for the foc:
```python
def focs(a,b,x_vec,c_vec):
"""A function that defines the foc for firm i
args:
a : total demand
b : elasticity of substitution between the good
c_vec : a vector with the cost for the firms
    x_vec : a vector with the quantities of the firms
"""
# use the foc from the sympy.diff
return -b*x_vec+a-b*np.sum(x_vec)-c_vec
```
We are now ready to set up a function that solves the case with N firms:
```python
def solve(a,b,c_vec):
"""A function that solve the cournot model with N firms
args:
a : total demand
b : elasticity of substitution between the good
c_vec : a vector with the cost for the firms
Returns two floats
"""
obj = lambda x_vec : focs(a,b,x_vec,c_vec)
res = optimize.root(obj, x0 = [90 for n in range(N)])
x_vec_star = res.x
Q = np.sum(x_vec_star)
# Use inverse demand to find price
p = a- b*Q
return Q, p
```
We then solve the model numerically with the same parameters as in the previous models:
```python
#Choose the same parameters as in previous models
a = 101
b = 0.1
X,p = solve(a,b,c_vec)
print('\n The number of firms are 1000 ' +
      '\n The total amount produced is ' + Round(X) +',' + ' while the price is ' + Round(p) +
'\n')
```
The number of firms are 1000
The total amount produced is 978.80, while the price is 3.12
```python
#Compare with the solution from the analytical solution
X,price, c_vec = EQ_N(a,b,1000,c_low,c_high,1986)
print('\n The number of firms are 1000 ' +
'\n The total amount produced is ' + Round(X) +',' + ' while the price is ' + Round(price) +
'\n')
```
The number of firms are 1000
The total amount produced is 978.80, while the price is 3.12
We see that we get the same result from the analytical solution and from the numerical solution obtained with Python.
```python
#my try with the figure
#list with n values
#n_vec = list(range(100,1000))
#array with solutions for different n
#output_vec1 = array([solve(a,b,c_vec) for N in n_vec])
#plot the the quantity as function of number of firms
#fig,ax = plt.subplots()
#ax.plot(n_vec,output_vec1[:,0], label = '$X$')
#ax.title.set_text('Quantatity as a function of number of firms')
#plt.legend()
#plt.show()
```
**Conclusion**
In this paper we examine two firms that have the possibility of competing in Bertrand, Stackelberg and Cournot settings. We use the same parameter values for all competition types, and find that the firms benefit the most from competing in a Cournot setting, as this yields the highest total profits of XX. This is intuitive: under Bertrand, by contrast, the firms undercut each other until the price equals their marginal cost, so that setting yields a price as under perfect competition, whereas Cournot yields an oligopoly equilibrium.
As Cournot yields the best outcome in terms of profits for two firms, we extend this setting to $n$ firms. We find that as $n\rightarrow\infty$, the total amount produced increases, while the price falls, because competition becomes more intense as the setting moves from an oligopoly to infinitely many firms competing in quantities.
```python
```
|
2c48770b621b3f1e8036488beefff5d54fd35582
| 168,493 |
ipynb
|
Jupyter Notebook
|
modelproject/modelproject_done4.0_rework3.ipynb
|
AskerNC/projects-2021-aristochats
|
cade4c02de648f4cd1220216598dc24b67bb8559
|
[
"MIT"
] | null | null | null |
modelproject/modelproject_done4.0_rework3.ipynb
|
AskerNC/projects-2021-aristochats
|
cade4c02de648f4cd1220216598dc24b67bb8559
|
[
"MIT"
] | null | null | null |
modelproject/modelproject_done4.0_rework3.ipynb
|
AskerNC/projects-2021-aristochats
|
cade4c02de648f4cd1220216598dc24b67bb8559
|
[
"MIT"
] | null | null | null | 97.790482 | 45,640 | 0.835536 | true | 8,885 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.90053 | 0.782662 | 0.704811 |
__label__eng_Latn
| 0.993041 | 0.475843 |
# Computational Astrophysics
## Partial Differential Equations. 01 Generalities
---
## Eduard Larrañaga
Observatorio Astronómico Nacional\
Facultad de Ciencias\
Universidad Nacional de Colombia
---
### About this notebook
In this notebook we present some of the generalities about systems of Partial Differential Equations.
`A. Garcia. Numerical Methods for Physics. (1999). Chapter 6 - 7 `
---
## Partial Differential Equations (PDEs)
A PDE is a relation between the partial derivatives of an unknown
function and the independent variables. The order of the highest derivative sets the *order* of the PDE.
A PDE is *linear* if it is of the first degree in the dependent
variable (i.e. the unknown function) and in its partial derivatives.
If each term of a PDE contains either the dependent variable
or one of its partial derivatives, the PDE is called *homogeneous*.
Otherwise it is *non-homogeneous*.
---
### Types of Partial Differential Equations
There are three general types of PDEs
1. hyperbolic
2. parabolic
3. elliptic
Not all PDEs fall into one of these three types, but many
PDEs used in practice do.
These classes of PDEs model different sorts
of phenomena, display different behavior, and require different numerical
techniques for their solution.
It is not always straightforward to see (to show and prove) what type
of PDE a given PDE is.
### Linear Second Order Differential Equation
Consider a function $u=u(x,y)$ satisfying the linear second order differential equation
\begin{equation}
a \partial^2_{xx} u + b \partial^2_{xy} u + c \partial^2_{yy} u + d \partial_x u + e \partial_y u + f u = g\,\,,
\end{equation}
This equation is straightforwardly categorized based on the discriminant,
\begin{equation}
b^2 - 4ac \left\{ \begin{array}{lcr}
< 0 & \rightarrow & \text{elliptic},\\
= 0 & \rightarrow & \text{parabolic},\\
> 0 & \rightarrow & \text{hyperbolic}.
\end{array}\right.
\end{equation}
The names come from analogy with conic sections in the theory of
ellipses.
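As a small illustration (a minimal sketch; the helper `classify` and the sample coefficients are ours, not part of the original notes), the discriminant test translates directly into Python:
```python
def classify(a, b, c):
    """Classify a*u_xx + b*u_xy + c*u_yy + (lower-order terms) = g by its discriminant."""
    disc = b**2 - 4*a*c
    if disc < 0:
        return "elliptic"
    elif disc == 0:
        return "parabolic"
    else:
        return "hyperbolic"

print(classify(1.0, 0.0, 1.0))    # Laplace equation u_xx + u_yy = 0        -> elliptic
print(classify(1.0, 0.0, 0.0))    # heat-type equation (no u_yy, u_xy term) -> parabolic
print(classify(4.0, 0.0, -1.0))   # wave equation c^2 u_xx - u_tt = 0, c=2  -> hyperbolic
```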
---
## Hyperbolic PDEs
Hyperbolic equations in physics and astrophysics
describe **dynamical** processes and systems that generally start
at some initial time $t_0=0$ with some initial conditions. Hence, the equations are then integrated in time.
The prototypical linear second-order hyperbolic equation is the
homogeneous wave equation,
\begin{equation}
c^2 \partial^2_{xx} u - \partial^2_{tt} u = 0\,\,,
\end{equation}
where $c$ is the wave speed.
---
Another class of hyperbolic equations are the **first-order
hyperbolic systems**. In one space dimension and assuming a linear
problem, this is
\begin{equation}
\partial_t u + A \partial_x u = 0\,\,,
\end{equation}
where $u(x,t)$ is a state vector with $s$ components and $A$
is a $s \times s$ matrix.
The problem is called *hyperbolic* if
$A$ has only real eigenvalues and is diagonalizable, i.e., has a complete set
of linearly independent eigenvectors so that one can construct a
matrix
\begin{equation}
\Lambda = Q^{-1} A Q\,\,,
\end{equation}
where $\Lambda$ is diagonal and has real numbers on its diagonal.
**Example**
An example of these equations is the linear **advection equation**, in which the function $u=u(t,x)$ satisfies
\begin{equation}
\partial_t u + v \partial_x u = 0\,\,,
\end{equation}
which is first order, linear, and homogeneous.
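As a quick check (a sketch using SymPy, which is otherwise not used in this notebook), any profile advected with speed $v$ solves the linear advection equation:
```python
import sympy as sp

x, t, v = sp.symbols('x t v')
u0 = sp.Function('u0')
u = u0(x - v*t)                          # an arbitrary profile moving with speed v
residual = sp.diff(u, t) + v*sp.diff(u, x)
print(sp.simplify(residual))             # -> 0, so u(x,t) = u0(x - v t) is a solution
```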
---
Other example is given by the non-linear first-order systems. Consider the equation
\begin{equation}
\partial_t u + \partial_x F(u) = 0\,\,,
\end{equation}
where $F(u)$ is the **flux** and may or may not be non-linear in $u(t,x)$.
We can re-write this PDE in **quasi-linear** form, by introducing
the Jacobian
\begin{equation}
\bar{A} = \frac{\partial F}{\partial u}\,\,,
\end{equation}
and writing
\begin{equation}
\partial_t u + \bar{A}\partial_x u = 0\,\,.
\label{eq:pde_quasilin1}
\end{equation}
This PDE is hyperbolic if $\bar{A}$ has real eigenvalues and is
diagonalizable.
The **equations of hydrodynamics** are a key example
of a non-linear, first-order hyperbolic PDE system.
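As an illustration of the quasi-linear criterion (a sketch; the isothermal Euler equations in the variables density $\rho$ and momentum $m$, with constant sound speed $c$, are used here as an assumed example of such a system):
```python
import sympy as sp

rho, m, c = sp.symbols('rho m c', positive=True)
U = sp.Matrix([rho, m])
F = sp.Matrix([m, m**2/rho + c**2*rho])   # flux of the (assumed) isothermal Euler system
A = F.jacobian(U)
print(A.eigenvals())   # {m/rho - c: 1, m/rho + c: 1}: real and distinct -> hyperbolic
```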
### Boundary Conditions for Hyperbolic Problems
One must specify either von Neumann, Dirichlet, or Robin boundary
conditions:
1. **Dirichlet Boundary Conditions**
\begin{equation}
\begin{aligned}
u(x=0,t) &= \Phi_1(t)\,\,,\\
u(x=L,t) &= \Phi_2(t)\,\,.
\end{aligned}
\end{equation}
2. **von Neumann Boundary Conditions**
\begin{equation}
\begin{aligned}
\partial_x u(x=0,t) &= \Psi_1(t)\,\,,\\
\partial_x u(x=L,t) &= \Psi_2(t)\,\,.
\end{aligned}
\end{equation}
Note that in a multi-dimensional problem $\partial_x$ turns into the derivative in the direction of the normal to the boundary.
3. **Robin Boundary Conditions**
Let $a_1, b_1, a_2, b_2$ be real numbers with $a_i \neq 0$ and $ b_i \neq 0$.
\begin{equation}
\begin{aligned}
a_1 u(x=0,t) + b_1 \partial_x u(x=0,t) &= \Psi_1(t)\,\,,\\
a_2 u(x=L,t) + b_2 \partial_x u(x=L,t) &= \Psi_2(t)\,\,.
\end{aligned}
\end{equation}
Dirichlet and von Neumann boundary conditions are recovered if either
both $a_i$ or both $b_i$ vanish.
---
## Parabolic PDEs
Parabolic PDEs describe processes that are slowly changing, such as
the slow diffusion of heat in a medium, sediments in ground water, and
radiation in an opaque medium. Parabolic PDEs are second order and
have generally the shape
\begin{equation}
\partial_t u - k \partial^2_{xx} u = f\,\,.
\end{equation}
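As a quick check (a sketch using SymPy; the heat kernel is the standard fundamental solution of the homogeneous case $f=0$):
```python
import sympy as sp

x, t, k = sp.symbols('x t k', positive=True)
u = sp.exp(-x**2/(4*k*t)) / sp.sqrt(4*sp.pi*k*t)   # heat kernel
residual = sp.diff(u, t) - k*sp.diff(u, x, 2)
print(sp.simplify(residual))                       # -> 0
```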
### Initial Conditions for Parabolic Problems
One must specify $u(x,t=0)$ at all $x$.
### Boundary Conditions for Parabolic Problems
Dirichlet, von Neumann or Robin boundary conditions.
If the boundary conditions are independent of time, the system will
evolve towards a steady state ($\partial_t u = 0$). In this case,
one may set $\partial_t u = 0$ for all times and treat the differential equation as an elliptic equation.
---
## Elliptic PDEs
Elliptic equations describe systems that are static, in steady state
and/or in equilibrium. There is no time dependence. A typical elliptic
equation is the Poisson equation,
\begin{equation}
\nabla^2 u = f\,\,,
\end{equation}
which one encounters in Newtonian gravity and in electrodynamics.
$\nabla^2$ is the Laplace operator, and $f$ is a given scalar function
of position. Elliptic problems may be linear ($f$ does not depend on
$u$ or its derivatives) or non-linear ($f$ depends on $u$ or its
derivatives).
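As a quick check of the Newtonian-gravity example (a sketch using SymPy), the point-mass potential $u \propto 1/r$ satisfies the Laplace equation, i.e., the Poisson equation with $f=0$, away from the origin:
```python
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)
u = 1/r                                        # point-mass potential, up to constants
laplacian = sum(sp.diff(u, s, 2) for s in (x, y, z))
print(sp.simplify(laplacian))                  # -> 0 away from the origin
```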
### Initial Conditions for Elliptic Problems
Do not apply, since there is no time dependence.
### Boundary Conditions for Elliptic Problems
Dirichlet, von Neumann or Robin boundary conditions.
---
---
## Numerical Methods for PDEs
There is no such thing as a general robust method for the solution of
generic PDEs. Each type (and each sub-type) of PDE requires a different
approach. Real-life PDEs may be of mixed type or may have special
properties that require knowledge about the underlying physics for their
successful solution.
There are three general classes of approaches to solving PDEs
### 1. Finite Difference Methods.
The differential operators are approximated using their finite-difference
representation on a given grid. A sub-class of finite-difference methods,
so-called finite-volume methods, can be used for PDEs arising from
conservation laws (e.g., the hydrodynamics equations).
Finite difference/volume methods have polynomial convergence for
smooth functions.
### 2. Finite Element Methods.
The domain is divided into cells (**elements**). The solution
is represented as a single function (e.g., a polynomial) on each
cell and the PDE is transformed to an algebraic problem for the matching
conditions of the simple functions at cell interfaces.
Finite element methods can have polynomial or exponential convergence
for smooth functions.
### 3. Spectral Methods.
The solution is represented by a linear combination of known
functions (e.g. trigonometric functions or special
polynomials). The PDE is transformed to a set of algebraic
equations (or ODEs) for the amplitudes of the component
functions. A sub-class of these methods are the collocation
methods. In them, the solution is represented on a grid and the
spectral decomposition of the solution in known functions is
used to estimate to a high degree of accuracy the partial
derivatives of the solution on the grid points.
```python
```
|
4541eb6c92261fc6d04996d379dc087e7078c3e7
| 12,556 |
ipynb
|
Jupyter Notebook
|
13._PDE1/presentation/PDE01.ipynb
|
ashcat2005/ComputationalAstrophysics
|
edda507d0d0a433dfd674a2451d750cf6ad3f1b7
|
[
"MIT"
] | 2 |
2020-09-23T02:49:10.000Z
|
2021-08-21T06:04:39.000Z
|
13._PDE1/presentation/PDE01.ipynb
|
ashcat2005/ComputationalAstrophysics
|
edda507d0d0a433dfd674a2451d750cf6ad3f1b7
|
[
"MIT"
] | null | null | null |
13._PDE1/presentation/PDE01.ipynb
|
ashcat2005/ComputationalAstrophysics
|
edda507d0d0a433dfd674a2451d750cf6ad3f1b7
|
[
"MIT"
] | 2 |
2020-12-05T14:06:28.000Z
|
2022-01-25T04:51:58.000Z
| 32.612987 | 137 | 0.572396 | true | 2,272 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.812867 | 0.890294 | 0.723691 |
__label__eng_Latn
| 0.996416 | 0.519709 |
# 13 Root Finding (Students)
An important tool in the computational tool box is to find roots of equations for which no closed form solutions exist:
We want to find the roots $x_0$ of
$$
f(x_0) = 0
$$
## Problem: Projectile range
The equations of motion for the projectile with linear air resistance (see *12 ODE applications*) can be solved exactly.
As a reminder: the linear drag force is
$$
\mathbf{F}_1 = -b_1 \mathbf{v}\\
b := \frac{b_1}{m}
$$
Equations of motion with force due to gravity $\mathbf{g} = -g \hat{\mathbf{e}}_y$
\begin{align}
\frac{d\mathbf{r}}{dt} &= \mathbf{v}\\
\frac{d\mathbf{v}}{dt} &= - g \hat{\mathbf{e}}_y -b \mathbf{v}
\end{align}
### Analytical solution of the equations of motion
(Following Wang Ch 3.3.2)
Solve $x$ component of the velocity
$$
\frac{dv_x}{dt} = -b v_x
$$
by integration:
$$
v_x(t) = v_{0x} \exp(-bt)
$$
The drag force reduces the forward velocity to 0.
Integrate again to get the $x(t)$ component
$$
x(t) = x_0 + \frac{v_{0x}}{b}\left[1 - \exp(-bt)\right]
$$
Integrating the $y$ component of the velocity
$$
\frac{dv_y}{dt} = -g - b v_y
$$
gives
$$
v_y = \left(v_{0y} + \frac{g}{b}\right) \exp(-bt) - \frac{g}{b}
$$
and integrating again
$$
y(t) = y_0 + \frac{v_{0y} + \frac{g}{b}}{b} \left[1 - \exp(-bt)\right] - \frac{g}{b} t
$$
(Note: This shows immediately that the *terminal velocity* is
$$
\lim_{t\rightarrow\infty} v_y(t) = - \frac{g}{b},
$$
i.e., the force of gravity is balanced by the drag force.)
#### Analytical trajectory
To obtain the **trajectory $y(x)$** eliminate time (and for convenience, using the origin as the initial starting point, $x_0 = 0$ and $y_0 = 0$. Solve $x(t)$ for $t$
$$
t = -\frac{1}{b} \ln \left(1 - \frac{bx}{v_{0x}}\right)
$$
and insert into $y(t)$:
$$
y(x) = \frac{x}{v_{0x}} \left( v_{0y} + \frac{g}{b} \right) + \frac{g}{b^2} \ln \left(1 - \frac{bx}{v_{0x}}\right)
$$
#### Plot
Plot the analytical solution $y(x)$ for $\theta = 30^\circ$ and $v_0 = 100$ m/s.
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
```
The function `y_lindrag()` should compute $y(x)$.
```python
def y_lindrag(x, v0, b1=0.2, g=9.81, m=0.5):
b = b1/m
v0x, v0y = v0
# IMPLEMENT FUNCTION
def initial_v(v, theta):
x = np.deg2rad(theta)
return v * np.array([np.cos(x), np.sin(x)])
```
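One possible way to complete the exercise (a sketch; it simply codes up the analytical trajectory $y(x)$ derived above):
```python
def y_lindrag(x, v0, b1=0.2, g=9.81, m=0.5):
    # analytical trajectory y(x) with linear drag, starting from the origin
    b = b1/m
    v0x, v0y = v0
    return x/v0x * (v0y + g/b) + g/b**2 * np.log(1 - b*x/v0x)
```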
```python
X = np.concatenate([np.linspace(0, 42, 100), np.linspace(42, 45, 1000)])
Y = y_lindrag(X, initial_v(100, 30), b1=1)
```
```python
# PLOT
```
Compare to the numerical solution (from **12 ODE Applications**):
```python
import ode
def simulate(v0, h=0.01, b1=0.2, g=9.81, m=0.5):
def f(t, y):
# y = [x, y, vx, vy]
return np.array([y[2], y[3], -b1/m * y[2], -g - b1/m * y[3]])
vx, vy = v0
t = 0
positions = []
y = np.array([0, 0, vx, vy], dtype=np.float64)
while y[1] >= 0:
positions.append([t, y[0], y[1]]) # record t, x and y
y[:] = ode.rk4(y, f, t, h)
t += h
return np.array(positions)
```
```python
r = simulate(initial_v(100, 30), h=0.01, b1=1)
```
```python
plt.plot(X, Y, lw=2, label="analytical")
plt.plot(r[:, 1], r[:, 2], '--', label="RK4")
plt.legend(loc="best")
plt.xlabel("$x$ (m)"); plt.ylabel("$y$ (m)")
```
### Predict the range $R$
How far does the ball or projectile fly, i.e., that value $x=R$ where $y(R) = 0$:
$$
\frac{R}{v_{0x}} \left( v_{0y} + \frac{g}{b} \right) + \frac{g}{b^2} \ln \left(1 - \frac{bR}{v_{0x}}\right) = 0
$$
This *transcendental equation* can not be solved in terms of elementary functions.
Use a **root finding** algorithm.
## Root-finding with the Bisection algorithm
**Bisection** is the simplest (but very robust) root finding algorithm that uses trial-and-error:
* bracket the root
* refine the brackets
* see [13_Root-finding-algorithms (PDF)](13_Root-finding-algorithms.pdf)
More specifically:
1. determine a bracket that contains the root: $a < x_0 < b$ (i.e., an interval $[a, b]$ with $f(a) > 0$ and $f(b) < 0$ or $f(a) < 0$ and $f(b) > 0$)
2. cut bracket in half: $x' = \frac{1}{2}(a + b)$
3. determine in which half the root lies: either in $[a, x']$ or in $[x', b]$: If $f(a) f(x') > 0$ then the root lies in the right half $[x', b]$, otherwise the left half $[a, x']$.
4. Change the boundaries $a$ or $b$.
5. repeat until $|f(x')| < \epsilon$.
### Implementation of Bisection
- Test that the initial bracket contains a root; if not, return `None` (and possibly print a warning).
- If either of the bracket points is a root then return the bracket point.
- Allow `Nmax` iterations or until the convergence criterion `eps` is reached.
- Bonus: print a message if no root was found after `Nmax` iterations, but print the best guess and the error (but return `None`).
```python
def bisection(f, a, b, Nmax=100, eps=1e-14):
# IMPLEMENT FUNCTION
```
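One possible implementation of the algorithm above (a sketch; variable names are our own):
```python
def bisection(f, a, b, Nmax=100, eps=1e-14):
    # one possible implementation: requires a sign change on the initial bracket
    fa, fb = f(a), f(b)
    if fa == 0:
        return a
    if fb == 0:
        return b
    if fa * fb > 0:
        print("bisection: initial bracket does not contain a root")
        return None
    for _ in range(Nmax):
        m = 0.5*(a + b)
        fm = f(m)
        if abs(fm) < eps:
            return m
        if fa * fm > 0:      # root lies in the right half
            a, fa = m, fm
        else:                # root lies in the left half
            b, fb = m, fm
    print("bisection: no root found after", Nmax, "iterations;",
          "best guess", m, "with error", abs(fm))
    return None
```
With such an implementation in place, the `# COMPLETE` cell below could call, for example, `bisection(f, 1e-6, v[0]/b - 1e-9, eps=1e-6)`.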
### Finding the range with the bisection algorithm
Define the trial function:
```python
def f(x):
v0 = initial_v(100, 30)
b1 = 1.
return y_lindrag(x, v0, b1=b1)
```
The initial bracket $[a_\text{initial}, b_\text{initial}]$ is a little bit difficult for this function: choose the right bracket near the point where the argument of the logarithm becomes 0 (which is actually the maximum $x$ value $\lim_{t\rightarrow +\infty} x(t) = \frac{v_{0x}}{b}$):
$$
b_\text{initial} = \frac{v_{0x}}{b} - \epsilon'
$$
where $\epsilon'$ is a small number.
```python
v = initial_v(100, 30)
b1 = 1.
m = 0.5
b = b1/m
# COMPLETE: bisection( , eps=1e-6)
```
### Find the range as a function of the initial angle
```python
b1 = 1.
m = 0.5
b = b1/m
v0 = 100
u = []
# IMPLEMENT
```
```python
# PLOT
```
Write a function `find_range()` to calculate the range for a given initial velocity $v_0$ and plot $R(\theta)$ for $10\,\text{m/s} ≤ v_0 ≤ 100\,\text{m/s}$.
```python
def find_range(v0, b1=1, m=0.5):
b = b1/m
u = []
for theta in np.arange(1, 90):
v = initial_v(v0, theta)
# IMPLEMENT THE REST ...
return np.array(u)
```
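One possible completion (a sketch; it reuses `bisection()`, `y_lindrag()` and `initial_v()` from above, and places the right end of the bracket just below $v_{0x}/b$, where the logarithm diverges):
```python
def find_range(v0, b1=1, m=0.5):
    # one possible completion of the exercise
    b = b1/m
    u = []
    for theta in np.arange(1, 90):
        v = initial_v(v0, theta)
        def f(x):
            return y_lindrag(x, v, b1=b1, m=m)
        R = bisection(f, 1e-6, v[0]/b - 1e-9, eps=1e-6)
        u.append([theta, R])
    return np.array(u)
```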
```python
for v0 in (10, 25, 50, 75, 100):
u = find_range(v0)
plt.plot(u[:, 0], u[:, 1], label="{} m/s".format(v0))
plt.legend(loc="best")
plt.xlabel(r"$\theta$ (degrees)")
plt.ylabel(r"range $R$ (m)")
```
## Newton-Raphson algorithm
(see derivation in class and in the PDF or [Newton's Method](http://mathworld.wolfram.com/NewtonsMethod.html) on MathWorld)
### Activity: Implement Newton-Raphson
1. Implement the Newton-Raphson algorithm
2. Test with $g(x)$.
$$
g(x) = 2 \cos x - x
$$
3. Bonus: test performance of `newton_raphson()` against `bisection()`.
```python
def g(x):
return 2*np.cos(x) - x
```
```python
xvals = np.linspace(0, 7, 30)
plt.plot(xvals, np.zeros_like(xvals), 'k--')
plt.plot(xvals, g(xvals))
```
```python
def newton_raphson(f, x, h=3e-1, Nmax=100, eps=1e-14):
# IMPLEMENT ME
```
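One possible implementation (a sketch; since only `f` is passed in, the derivative is approximated with a forward difference using the step `h`):
```python
def newton_raphson(f, x, h=3e-1, Nmax=100, eps=1e-14):
    # one possible implementation with a finite-difference derivative
    for _ in range(Nmax):
        fx = f(x)
        if abs(fx) < eps:
            return x
        dfx = (f(x + h) - fx)/h     # forward-difference estimate of f'(x)
        x = x - fx/dfx
    print("newton_raphson: no convergence after", Nmax, "iterations; best guess", x)
    return None
```
Starting from `newton_raphson(g, 0)` as in the cell below, this sketch converges to $x\approx 1.03$, the root of $2\cos x = x$.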
```python
newton_raphson(g, 0)
```
```python
```
|
415590f53cfc37a605a05c2d822949957077bf8d
| 13,387 |
ipynb
|
Jupyter Notebook
|
13_root_finding/13-Root-finding-students.ipynb
|
ASU-CompMethodsPhysics-PHY494/PHY494-resources-2020
|
20e08c20995eab567063b1845487e84c0e690e96
|
[
"CC-BY-4.0"
] | null | null | null |
13_root_finding/13-Root-finding-students.ipynb
|
ASU-CompMethodsPhysics-PHY494/PHY494-resources-2020
|
20e08c20995eab567063b1845487e84c0e690e96
|
[
"CC-BY-4.0"
] | null | null | null |
13_root_finding/13-Root-finding-students.ipynb
|
ASU-CompMethodsPhysics-PHY494/PHY494-resources-2020
|
20e08c20995eab567063b1845487e84c0e690e96
|
[
"CC-BY-4.0"
] | null | null | null | 23.948122 | 301 | 0.488832 | true | 2,453 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.83762 | 0.879147 | 0.736391 |
__label__eng_Latn
| 0.81336 | 0.549215 |
---
author: Nathan Carter (ncarter@bentley.edu)
---
This answer assumes you have imported SymPy as follows.
```python
from sympy import * # load all math functions
init_printing( use_latex='mathjax' ) # use pretty math output
```
Let's assume we've defined a variable and created a formula, as covered
in how to create symbolic variables.
```python
var( 'x' )
formula = x**2 + x
formula
```
$\displaystyle x^{2} + x$
We can substitute a value for $x$ using the `subs` function.
You provide the variable and the value to substitute.
```python
formula.subs( x, 8 ) # computes 8**2 + 8
```
$\displaystyle 72$
If you had to substitute values for multiple variables, you can use
multiple `subs` calls or you can pass a dictionary to `subs`.
```python
var( 'y' )
formula = x/2 + y/3
formula
```
$\displaystyle \frac{x}{2} + \frac{y}{3}$
```python
formula.subs( x, 10 ).subs( y, 6 )
```
$\displaystyle 7$
```python
formula.subs( { x: 10, y: 6 } )
```
$\displaystyle 7$
|
140d951638e08ebbdca3a83391b792743e81e51b
| 3,615 |
ipynb
|
Jupyter Notebook
|
database/tasks/How to substitute a value for a symbolic variable/Python, using SymPy.ipynb
|
nathancarter/how2data
|
7d4f2838661f7ce98deb1b8081470cec5671b03a
|
[
"MIT"
] | null | null | null |
database/tasks/How to substitute a value for a symbolic variable/Python, using SymPy.ipynb
|
nathancarter/how2data
|
7d4f2838661f7ce98deb1b8081470cec5671b03a
|
[
"MIT"
] | null | null | null |
database/tasks/How to substitute a value for a symbolic variable/Python, using SymPy.ipynb
|
nathancarter/how2data
|
7d4f2838661f7ce98deb1b8081470cec5671b03a
|
[
"MIT"
] | 2 |
2021-07-18T19:01:29.000Z
|
2022-03-29T06:47:11.000Z
| 19.862637 | 99 | 0.487137 | true | 302 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.941654 | 0.872347 | 0.82145 |
__label__eng_Latn
| 0.974607 | 0.746835 |
## Multidimensional search with gradient-search methods
## The objective is to find a minimum of a multivariate function
Luca Magri (lm547@cam.ac.uk)
(With many thanks to Professor Gábor Csányi.)
Multivariate function = multi-variable function = function that depends on two variables at least
## Direct search for multi-variable functions
- We assume we can evaluate the function $f(x)$ that is to be minimised
- If we do not have access to the gradient, we can perform a __direct search__
- A generalisation of the interval search for multiple variables is the Nelder-Mead method, which is robust but slow
- For example, we rarely have access to the gradient when the objective function is the result of a complex simulation, e.g.,"lift on an aircraft wing", "melting point of a material", "efficiency of a circuit design", etc.
- Aside: Sometimes we create a __surrogate model__, which approximates the objective function, but provides analytical gradients. __Machine Learning__ methods provide surrogate models.
## Search directions for multi-variable functions
- Access to the gradient enables __gradient search__, which is more efficient than direct (gradient-free) search
- Access to the Hessian (the gradient of the gradient) can improve convergence significantly
- The computation of the Hessian is straightforward to do for problems with only a few variables
- The computation of the Hessian may be computationally expensive for problems with many variables, say, thousands or more
- Even a rough approximation of the Hessian can be beneficial (see "preconditioning")
## Lectures 3 and 4: List of contents
In these lectures, we will analyse some common search-direction methods for multidimensional search
1. Steepest descent method
1. Newton-Raphson method
1. Barzilai-Borwein (BB) method
1. Conjugate-gradient (CG) method
1. Newton-Gauss method
## Nomenclature
- $f(x): \mathbb{R}^N\rightarrow\mathbb{R}$ is a nonlinear function, which we want to minimize
- $x^*\in\mathbb{R}^N$ is a minimum of $f$
- $\nabla f = \begin{pmatrix}
\frac{\partial f}{\partial x_1},
\frac{\partial f}{\partial x_2},
\ldots,
\frac{\partial f}{\partial x_N}
\end{pmatrix}^T = \frac{\partial f}{\partial x_i}$, $i=1,2,\ldots, N$ is the gradient
- $H=\nabla(\nabla f(x))=\nabla^2 f(x)$ is the Hessian $$H_{i,j}=\frac{\partial^2 f}{\partial x_i\partial x_j}\;\;\;\;\;i,j=1,2,\ldots,N$$
- $k=0,1,2,\ldots$ is the iteration step
- $d_k$ is the $k$-th search direction
- $\alpha_k$ is the step size to take along the $k$-th search direction
- h.o.t. stands for "higher order terms"
## Revision on Taylor expansion for multi-variable functions
- A function that is differentiable infinite times can be expressed as a Taylor series around a point
- We will consider terms up to second order
- The Taylor expansion of the multi-variable function $f(x)$ around the point $x_k$ is
\begin{align}
f(x) = f(x_k) + \nabla f(x_k)^T (x - x_k) + \frac12 (x-x_k)^T H(x_k) (x-x_k) + h.o.t.
\end{align}
- The term $\nabla f(x_k)^T (x - x_k)$ is called first-order term
- The term $\frac12 (x-x_k)^T H(x_k) (x-x_k)$ is called second-order term
- h.o.t. stands for higher-order terms
## Steepest descent method
We exploit the fact that the gradient is the direction of maximum first-order change of a multi-variable function $f(x)$
1. Start with an initial guess for the minimum $x_0$
1. Set the search direction as the negative gradient $$d_k \equiv -\nabla f(x_k)$$
1. Determine by _line search_ the step size $\alpha_k$ that minimizes $$f(x) \;\;\; \textrm{along} \;\;\;d_k$$
1. Update the solution $x_{k+1} = x_k + \alpha_k d_k$
1. If not converged, back to 2.
- Notice __nested structure__:
- The main loop in step 2 changes the _direction_
- The inner loop in step 3 is optimisation in 1D
- How accurate should this inner optimisation (line search) be?
- Ideally,
$$\frac{d f(x_k+\alpha_k d_k)}{d\alpha_k} =
0$$
which implies that the gradient at $x_k+\alpha_k d_k$ is orthogonal to $d_k$, i.e., successive search directions are orthogonal to each other
$$ \boxed{d^T_k d_{k+1} =0}$$
- Proof (optional):
\begin{align}
\frac{d f(x_k+\alpha_k d_k)}{d\alpha_k} &=0\\
\frac{df(x_{k+1})}{d\alpha}& =0 \\
\frac{\partial f(x_{k+1})}{\partial x_{k+1}}\cdot\frac{\partial x_{k+1}}{\partial\alpha} & =0 \\
\nabla f(x_{k+1}) \cdot \frac{\partial (x_{k}+\alpha_k d_k)}{\partial\alpha} & =0\\
-d_{k+1} \cdot d_k & =0\\
\end{align}
Therefore
$d^T_k d_{k+1} =0$
- Geometric interpretation: If the gradient $-d_{k+1}$ had a component along $x_k+\alpha_k d_k$, then $x_k+\alpha_k d_k$ could not be a minimum of $f(x_k+\alpha_k d_k)$, which contradicts the hypothesis. Hence, the gradient must have a zero component along $x_k+\alpha_k d_k$, which means that it must be orthogonal to $d_k$.
- Practically, an approximate line search is performed
$$\frac{d f(x_k+\alpha_k d_k)}{d\alpha_k} \approx
0$$
hence
$$ \boxed{d^T_k d_{k+1} \approx0 }$$
- If we approximate the function with a second-order Taylor expansion (a short derivation is sketched just after this list), the step size is
$$ \alpha_k= - \frac{\nabla f(x_k)^Td_k}{d_k^TH(x_k)d_k}$$
However, if you have the Hessian $H$, you may want to use better search methods (e.g., Newton-Raphson)
- If the approximation is poor, the algorithm might not converge
- If the approximation is too accurate, it could be a waste of resources because we are only making progress along the current search direction
- Tradeoff: $f(x_k+\alpha_k d_k)$ is often approximated by a quadratic polynomial, or a maximum number of iterations is set
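For reference, the quadratic-approximation step size quoted in the list above follows from minimising the second-order Taylor expansion of $f$ along the current search direction:
$$
\begin{aligned}
f(x_k+\alpha d_k) &\approx f(x_k) + \alpha\,\nabla f(x_k)^T d_k + \tfrac{1}{2}\alpha^2\, d_k^T H(x_k)\, d_k\,,\\
\frac{d f(x_k+\alpha d_k)}{d\alpha} &\approx \nabla f(x_k)^T d_k + \alpha\, d_k^T H(x_k)\, d_k = 0
\quad\Rightarrow\quad
\alpha_k = -\frac{\nabla f(x_k)^T d_k}{d_k^T H(x_k) d_k}\,.
\end{aligned}
$$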
Example with the Rosenbrock function
$$ f(x_1, x_2) = (1-x_1)^2 + a(x_2-x_1^2)^2$$
where $x_1$ and $x_2$ are the two variables on which $f$ depend (in this example, they have nothing to do with the iteration), and $a=10$
- The gradient is
\begin{align}\nabla f(x_1,x_2)
& = \begin{bmatrix} \frac{\partial f}{\partial x_1} \\\frac{\partial f}{\partial x_2} \end{bmatrix} \\
& = \begin{bmatrix} \frac{\partial \left((1-x_1)^2 + a(x_2-x_1^2)^2\right)}{\partial x_1} \\\frac{\partial \left( (1-x_1)^2 + a(x_2-x_1^2)^2\right)}{\partial x_2} \end{bmatrix} \\
& = \begin{bmatrix} -2(1-x_1)-4ax_1(x_2-x_1^2) \\ 2a(x_2-x_1^2) \end{bmatrix}
\end{align}
```python
from pylab import *
import numpy as np
# function we will optimize
def Rosen(x,a=10):
return (1-x[0])**2 + a*(x[1]-x[0]**2)**2
def dRosen(x,a=10):
return np.array([-2.0*(1.0-x[0]) - 2.0*a*(x[1]-x[0]**2)*2.0*x[0],2.0*a*(x[1]-x[0]**2)])
```
```python
def linesearch_golden_section(f, x, d, alpha_init=None, xtol=1e-8):
def fa(a):
return f(x+a*d)
# initial point
alpha0 = 0
f0 = fa(alpha0)
# now find a bracket for the minimum
if alpha_init == None:
alpha1 = 1.0
else:
alpha1 = alpha_init
# check to see if the point at alpha1 goes down, if not find one that does
while fa(alpha1) > f0:
alpha1 /= 2.0
print (alpha0, f0, alpha1, fa(alpha1))
f1 = fa(alpha1)
# set up golden section ratio
r = (np.sqrt(5)-1)/2.0
# now find the outer bracket alpha2 where the function goes up again
alpha2 = alpha1/r
while fa(alpha2) < f1:
alpha1 = alpha2
alpha2 = alpha1/r
f2 = fa(alpha2)
# now we have three points in Golden Section ratio, (alpha0, alpha1, alpha2) such that the function goes down then up,
# so it must have a minimum in between
# now loop until convergence
while abs(alpha0-alpha2) > xtol:
# get 4th point
alpha3 = alpha0*r+alpha2*(1-r); f3=fa(alpha3) # 0,3,1,2
# depending on where the function value falls, update the brackets
if f3 < f1:
alpha2=alpha1; f2=f1
alpha1=alpha3; f1=f3
else:
alpha0=alpha2; f0=f2
alpha2=alpha3; f2=f3
return 0.5*(alpha0+alpha2)
def steepest_descent(f, df, x0, ftol, xtol):
traj = []
# get initial direction
x = x0[:]
d = -1.0*df(x)
alpha_init = 1e-6
i = 0
#print i, f(x), np.linalg.norm(d), x[:], d[:]
traj.append((x[0],x[1]))
# loop until convergence
while np.linalg.norm(d) > ftol:
i += 1
# do line search in current direction
alpha = linesearch_golden_section(f, x, d, alpha_init, xtol)
# update estimate
x += alpha*d
# get new direction
d = -1.0*df(x)
#print i, f(x), np.linalg.norm(d), x[:], d[:]
traj.append((x[0],x[1]))
return x[:],traj
```
```python
x,y = np.meshgrid(np.linspace(-1.7,1.7,100), np.linspace(-0.8,3,100))
R = [[Rosen(np.array([x[j,i],y[j,i]])) for i in range(size(x,0)) ] for j in range(size(x,1))]
fig = figure(figsize=(12,8))
contourf(x, y, R, np.logspace(-1,2, 8))
plt.axes().set_aspect('equal', 'datalim')
xmin, traj = steepest_descent(Rosen,dRosen, np.array([-1.8,1.8]), 1e-3, 1e-8)
plot([traj[i][0] for i in range(len(traj))], [traj[i][1] for i in range(len(traj))], "r-o")
show()
```
Example with quadratic function
\begin{align}
f(x_1,x_2) & =x_1^2+x_2^2-1.9x_1x_2\\
& = \frac{1}{2}\begin{bmatrix} x_1 \;\;\;x_2\end{bmatrix} \begin{bmatrix} 2 & -1.9\\ -1.9& 2\end{bmatrix} \begin{bmatrix} x_1 \\x_2\end{bmatrix}\\
\end{align}
```python
def Q(x):
return x[0]**2+ x[1]**2 - 1.9*x[0]*x[1]
def dQ(x):
return np.array([2*x[0]-1.9*x[1], 2*x[1]-1.9*x[0]])
x, y = np.meshgrid(np.linspace(-1.1,1.1,50), np.linspace(-1.1,1.1,50))
Qxy = [[Q(np.array([x[j,i],y[j,i]])) for i in range(size(x,0)) ] for j in range(size(x,1))]
figure(figsize=(12,8))
contour(x, y, Qxy, linspace(0, 0.1, 6))
title("Quadratic - Steepest descent")
axis('equal')
#axis([-1.5, 1.5, -1.1, 1.1])
xmin, traj = steepest_descent(Q, dQ, np.array([-0.8, -0.95]), 1e-2, 1e-2)
plot([traj[i][0] for i in range(len(traj))], [traj[i][1] for i in range(len(traj))], "r-o")
show()
```
## Convergence of steepest descent
- This algorithm converges __linearly__ ($p = 1$)
$$ \left\| x_{k+1}-x^*\right\| = \beta \left\| x_k - x^*\right\| $$
- $\beta$ is close to 1, hence the steepest descent is slow. As a consequence of [Kantorovich Inequality](http://mathworld.wolfram.com/KantorovichInequality.html), it can be shown that
$$ \beta \le \left[\frac{k-1}{k+1}\right]^2 $$
where $k$ is the condition number, which is the ratio between the largest and smallest eigenvalues of $H(x^*)$.
- In typical engineering problems, this factor $\left[\frac{k-1}{k+1}\right]^2$ approaches 1 as the number of variables (and with it the condition number) increases, so convergence becomes very slow.
- Quadratic problems with ill-conditioned matrices lead to poor convergence of steepest descent.
- Loosely speaking, ill-conditioned quadratic functions have isocontours that resemble elongated valleys.
- Isocontour = contour, which is defined as $f(x)=c$ where $c$ is a constant
## Newton-Raphson method
- We assume we can evaluate the gradient, $\nabla f(x)$, and the Hessian, $H(x)=\nabla^2f(x)$
- $f(x)$ is approximated by a quadratic function using the function value, gradient and Hessian at the current location
- Mathematically, the function is Taylor expanded around $x_k$
$$ f(x) = f(x_k) + \nabla f(x_k)^T (x - x_k) + \frac12 (x-x_k)^T H(x_k) (x-x_k) + h.o.t. $$
- The higher order terms (h.o.t.) are neglected and $f$ is approximated by $q$
$$ f(x)\approx q(x) = f(x_k) + \nabla f(x_k)^T (x - x_k) + \frac12 (x-x_k)^T H(x_k) (x-x_k) $$
- The minimum of $q(x)$ occurs when $\nabla q(x) = 0$ $$
\begin{array}
\\
\nabla q(x) &= \nabla f(x_k) + H(x_k) (x-x_k) = 0\\
x &= x_k - [H(x_k)]^{-1} \nabla f(x_k)\\
\end{array}
$$
- We use the minimum of $q(x)$ as our next guess for the minimum of $f(x)$ $$
x=x_{k+1} = x_k + \alpha_k d_k
$$
- Therefore, the search direction is $
d_k = - [H(x_k)]^{-1} \nabla f(x_k)
$ and the step size is $\alpha_k = 1$.
```python
def ddQ(x):
return np.matrix([[2, -1.9], [-1.9, 2]])
def newton(f, df, ddf, x0, ftol, xtol=1e-3):
traj = []
# get initial direction
x = np.array(x0[:])
d = -1.0*asarray(np.matmul(np.linalg.inv(ddf(x)),df(x)))[0]
i = 0
#print i, f(x), np.linalg.norm(d), x[:], d[:]
traj.append((x[0],x[1]))
# loop until convergence
while np.linalg.norm(d) > ftol:
i += 1
# update estimate
x += d
# get new direction
d = -1.0*asarray(np.matmul(np.linalg.inv(ddf(x)),df(x)))[0]
#print i, f(x), np.linalg.norm(d), x[:], d[:]
traj.append((x[0],x[1]))
return x[:],traj
```
```python
x, y = np.meshgrid(np.linspace(-1.1,1.1,50), np.linspace(-1.1,1.1,50))
Qxy = [[Q(np.array([x[j,i],y[j,i]])) for i in range(size(x,0)) ] for j in range(size(x,1))]
figure(figsize=(12,8))
contour(x, y, Qxy, linspace(0, 0.1, 6), color="k")
title("Quadratic")
axis('equal')
xmin, traj = newton(Q, dQ, ddQ, np.array([-1.75, -0.95]), 1e-2, 1e-2)
plot([traj[i][0] for i in range(len(traj))], [traj[i][1] for i in range(len(traj))], "r-o")
show()
```
## Convergence of Newton-Raphson method
- It is a __quadratically convergent__ method ($p=2$)
$$ \left\| x_{k+1} - x^*\right\| \leq \beta \left\| x_k - x^*\right\|^2 $$
- Therefore, it converges to the minimum in one iteration in a quadratic function
- It needs more iterations in a non-quadratic nonlinear function
Newton-Raphson method is not very robust:
- It may never converge if the initial guess is not appropriate
<!--- Once you are quite close to the answer (in the _asymptotic limit_)--->
- If $H(x)$ is not positive definite, the method can converge to a maximum, or a saddle point, etc.
- It can take the iterations away from a minimum
Example with Rosenbrock function
- The gradient is
$$ \nabla f(x_1,x_2)
= \begin{bmatrix} -2(1-x_1)-4ax_1(x_2-x_1^2) \\ 2a(x_2-x_1^2) \end{bmatrix}
$$
- The Hessian is
\begin{align}\nabla^2 f(x_1,x_2)
&= \begin{bmatrix} \frac{\partial^2 f}{\partial x_1^2} & \frac{\partial^2 f}{\partial x_1\partial x_2}\\ \frac{\partial^2 f}{\partial x_2\partial x_1}& \frac{\partial^2 f}{\partial x_2^2} \end{bmatrix}\\
&= \begin{bmatrix} \frac{\partial \left(-2(1-x_1)-4ax_1(x_2-x_1^2)\right)}{\partial x_1} \;\;& \frac{\partial \left(-2(1-x_1)-4ax_1(x_2-x_1^2)\right)}{\partial x_2}\\ \frac{\partial \left(2a(x_2-x_1^2)\right)}{\partial x_1}& \frac{\partial \left(2a(x_2-x_1^2)\right)}{\partial x_2} \end{bmatrix}\\
&= \begin{bmatrix} 2-4a(x_2-x_1^2)+8ax_1^2 \;\;& -4ax_1\\ -4ax_1 & 2a \end{bmatrix}
\end{align}
If the function has continuous second derivatives (such as the function of this example), the mixed partial derivatives are equal to each other.
- Hence, the Hessian is symmetric (aside: [Schwarz's theorem](https://en.wikipedia.org/wiki/Symmetry_of_second_derivatives#Schwarz's_theorem))
```python
def ddRosen(x,a=10):
return np.matrix([[2.0 - 4.0*a*(x[1]-x[0]**2) - 4.0*a*(-2.0*x[0])*x[0], -4.0*a*x[0]],
[-4.0*a*x[0] , 2.0*a ]])
```
```python
x,y = np.meshgrid(np.linspace(-1.7,1.7,100), np.linspace(-0.8,3,100))
R = [[Rosen(np.array([x[j,i],y[j,i]])) for i in range(size(x,0)) ] for j in range(size(x,1))]
fig = figure(figsize=(12,8))
contour(x, y, R, np.logspace(-1,2, 8), color="k")
title("Rosenbrock - Newton's method")
xmin, traj = newton(Rosen, dRosen, ddRosen, np.array([-1.3, 1.3]), 1e-1, 1e-8)
plot([traj[i][0] for i in range(len(traj))], [traj[i][1] for i in range(len(traj))], "r-o")
show()
```
## Newton-Raphson method with line search
- This is the natural extension to the previous method
- The step size $\alpha_k$ is found by a line search at each iteration
<!---$$
x_{k+1} = x_k + \alpha_k d_k
$$
With the direction given by
$$
d_k = - [H(x_k)]^{-1} \nabla f(x_k)
$$
Optimise $\alpha$ in each iteration. --->
- This method
- often speeds up the convergence
- will find a minimum, not a saddle point or a maximum (when it converges)
- can fail (try the initial guess $[-1.5,1.5]$ in the code)<!---, direction points uphill!--->
- "Trust region" methods do not allow too big a step, which reduces the failure of the method
```python
def newton_raphson(f, df, ddf, x0, ftol, xtol=1e-3):
traj = []
# get initial direction
x = np.array(x0[:])
d = -1.0*asarray(np.matmul(np.linalg.inv(ddf(x)),df(x)))[0]
i = 0
#print i, f(x), np.linalg.norm(d), x[:], d[:]
traj.append((x[0],x[1]))
alpha_init = 1e-6
# loop until convergence
while np.linalg.norm(d) > ftol:
i += 1
# do line search in current direction
alpha = linesearch_golden_section(f, x, d, alpha_init, xtol)
#print alpha
# update estimate
x += alpha*d
# get new direction
d = -1.0*asarray(np.matmul(np.linalg.inv(ddf(x)),df(x)))[0]
#print i, f(x), np.linalg.norm(d), x[:], d[:]
traj.append((x[0],x[1]))
print x
w, v = np.linalg.eig(ddf(x))
print("Eigenvalues:", w)
return x[:],traj
```
```python
x,y = np.meshgrid(np.linspace(-1.7,1.7,100), np.linspace(-0.8,3,100))
R = [[Rosen(np.array([x[j,i],y[j,i]])) for i in range(size(x,0)) ] for j in range(size(x,1))]
fig = figure(figsize=(12,8))
contour(x, y, R, np.logspace(-1,2, 30), color="k")
title("Rosenbrock - Newton-Raphson method")
xmin, traj = newton_raphson(Rosen, dRosen, ddRosen, np.array([-1.3, 1.3]), 1e-1, 1e-8)
plot([traj[i][0] for i in range(len(traj))], [traj[i][1] for i in range(len(traj))], "r-o")
show()
```
## Caveat on Newton-type methods
- Computing and inverting the Hessian requires
- Computation $\sim O(N^2)$
- Inversion $\sim O(N^3)$
- In engineering applications, $N$ can be thousands, millions, billions
- If $H$ is sparse, this helps speed up computation and inversion
## Barzilai-Borwein method
- Reminder: The __secant method__ approximates the Hessian by finite difference to yield
$$
x = x_1 - f'(x_1) \frac{x_1-x_0}{f'(x_1)-f'(x_0)}
$$
- There is a multidimensional generalisation (Barzilai & Borwein, IMA J. Num. Analys., __8(1)__, 141-148, 1988)
$$
x = x_1 - \nabla f(x_1) \frac{(x_1-x_0)^T\left(\nabla f(x_1)-\nabla f(x_0)\right)}{\left|\nabla f(x_1)-\nabla f(x_0))\right|^2}
$$
- Initialised with a single steepest descent step
- Only requires a single evaluation of the gradient in each iteration
- Usually more stable than Newton's method and other Quasi-Newton methods
- Not much is known about its convergence
```python
def bb(f, df, x0, dftol, xtol):
traj = []
# get initial direction
x = x0[:]
alpha_init = 1e-1
i = 0
traj.append((x[0],x[1]))
# one step of steepest descent
x1 = x[:]
df1 = df(x1)
#print i, f(x1), np.linalg.norm(df1), x1[:], df1[:]
alpha = linesearch_golden_section(f, x1, -1.0*df1, alpha_init, xtol)
x2 = x1-alpha*df1
df2 = df(x2)
i += 1
#print i, f(x2), np.linalg.norm(df2), x2[:], df2[:]
traj.append((x2[0],x2[1]))
# loop until convergence
while np.linalg.norm(df2) > dftol:
i += 1
# update estimate
x = x2 - df2 * np.dot(x2-x1,df2-df1)/np.linalg.norm(df2-df1)**2
# get new direction
x1 = x2[:] ; df1 = df2[:]
x2 = x[:] ; df2 = df(x)
#print i, f(x), np.linalg.norm(d), x[:], d[:]
traj.append((x[0],x[1]))
return x[:],traj
x,y = np.meshgrid(np.linspace(-1.7,1.7,100), np.linspace(-0.8,3,100))
R = [[Rosen(np.array([x[j,i],y[j,i]])) for i in range(size(x,0)) ] for j in range(size(x,1))]
```
```python
fig = figure(figsize=(12,8))
#fig, ax = plt.subplots(figsize=(16,12))
#CS = ax.contour(x, y, R)
#ax.clabel(CS, inline=1, fontsize=10)
contour(x, y, R, np.logspace(-1,2, 10), color="k")
title("Rosenbrock - Barzilai-Borwein method")
xmin, traj = bb(Rosen, dRosen, np.array([-1.3, 1.3]), 1e-1, 1e-8)
plot([traj[i][0] for i in range(len(traj))], [traj[i][1] for i in range(len(traj))], "r-o")
show()
```
## Comparison of the methods for a quadratic function
- Consider minimizing the quadratic function $f(x) = \frac12 x^T A x - b^T x$
- Example with quadratic function
\begin{align}
f(x_1,x_2) & =x_1^2+x_2^2-1.9x_1x_2\\
& = \frac{1}{2}\begin{bmatrix} x_1 \;\;\;x_2\end{bmatrix} \begin{bmatrix} 2 & -1.9\\ -1.9& 2\end{bmatrix} \begin{bmatrix} x_1 \\x_2\end{bmatrix}\\
\end{align}
```python
x, y = np.meshgrid(np.linspace(-1.1,1.1,50), np.linspace(-1.1,1.1,50))
Qxy = [[Q(np.array([x[j,i],y[j,i]])) for i in range(size(x,0)) ] for j in range(size(x,1))]
figure(figsize=(12,8))
contour(x, y, Qxy, linspace(0, 0.1, 6), color="k")
title("Quadratic")
axis('equal')
#axis([-1.5, 1.5, -1.1, 1.1])
xmin, traj = steepest_descent(Q, dQ, np.array([-0.75, -0.95]), 1e-2, 1e-2)
plot([traj[i][0] for i in range(len(traj))], [traj[i][1] for i in range(len(traj))], "r-o", label="Steepest Descent")
xmin, traj = newton(Q, dQ, ddQ, np.array([-0.75, -0.95]), 1e-2, 1e-2)
plot([traj[i][0] for i in range(len(traj))], [traj[i][1] for i in range(len(traj))], "b-o", label="Newton")
xmin, traj = bb(Q, dQ, np.array([-0.75, -0.95]), 1e-2, 1e-2)
plot([traj[i][0] for i in range(len(traj))], [traj[i][1] for i in range(len(traj))], "g-o", label="Barzilai-Borwein")
legend()
show()
```
- The steepest descent takes many steps (''zig-zagging'')
- This behaviour deteriorates further as the condition number increases (longer-narrower valleys)
- Newton's method converges in one iteration because the order of convergence is $p=2$
- This is because the Taylor series of a quadratic function is _exact_ at second order
- The Barzilai-Borwein method is a good compromise
- Takeaway: An approximated Hessian can greatly improve convergence
# Conjugate Gradients
- Example with quadratic function
\begin{align}
f(x_1,x_2) & =x_1^2+x_2^2-1.9x_1x_2\\
& = \frac{1}{2}\begin{bmatrix} x_1 \;\;\;x_2\end{bmatrix} \begin{bmatrix} 2 & -1.9\\ -1.9& 2\end{bmatrix} \begin{bmatrix} x_1 \\x_2\end{bmatrix}\\
\end{align}
```python
x, y = np.meshgrid(np.linspace(-1,1,50), np.linspace(-1,1,50))
Qxy = [[Q(np.array([x[j,i],y[j,i]])) for i in range(size(x,0)) ] for j in range(size(x,1))]
figure(figsize=(9,6))
contour(x, y, Qxy, linspace(0, 0.1, 6), color="k")
title("Quadratic")
axis('equal')
plot([-0.75, -0.85, 0], [-0.95, -0.85, 0], "r-o")
show()
```
- Example: $$ f(x, y) = x^2 + 50 y^2$$
- The Hessian is $H=\nabla (\nabla f(x,y))=\nabla \begin{pmatrix}2x\\ 100y\end{pmatrix}=\begin{pmatrix}2& 0\\0& 100\end{pmatrix}$
- The eigenvalues are $\lambda_1=2$, $\lambda_2=100$
- The condition number is $k=\lambda_{max}/\lambda_{min}=50$.
- The eigenvectors for $\lambda_1$ and $\lambda_2$ are, respectively $$ \begin{pmatrix} 1\\0\end{pmatrix}\;\;\; \textrm{and}\;\;\;\begin{pmatrix} 0\\1\end{pmatrix}$$
- The direction of maximum variation is $y$, hence, the ellipsoid is elongated in the $x$ direction
```python
x, y = np.meshgrid(np.linspace(-1,1,50), np.linspace(-0.2,0.2,50))
def aQ(x):
return x[0]**2+50*x[1]**2
Qxy = [[aQ(np.array([x[j,i],y[j,i]])) for i in range(size(x,0)) ] for j in range(size(x,1))]
figure(figsize=(9,1.5))
contour(x, y, Qxy, linspace(0, 1, 6), color="k")
title("Quadratic")
axis('equal')
plot([-0.75, -0.0, 0], [-0.095, -0.095, 0], "r-o")
show()
```
## Conjugate Gradients as a search method
- If we could step along the eigendirections, the convergence would be faster than steepest descent's
- Eigendirections are good search directions even in ill-conditioned quadratic functions because they represent the principal axes of ellipsoids
- This is mathematically a good idea, but computationally the calculation of eigenvectors is $\sim O(N^3)$, which is as expensive as inverting the Hessian.
- Therefore, we do not want to use eigenvectors
<!--- - Things would _also_ be OK, if the quadratic was along the axes (i.e. $A$ is a diagonal matrix), and we stepped along the axis directions: --->
- Can we find a new coordinate system in which the search directions are orthogonal to each other?
Consider a set of directions $\{d_i\}$ for which
$$
d_i^T A d_j = 0 \quad \mathrm{for}\quad i\neq j
$$
- This is called __conjugacy__. (If $\{d_i\}$ are eigenvectors of the symmetric matrix $A$, then by definition $Ad_j=\lambda_jd_j$, where $\lambda_j$ is the eigenvalue. Therefore $d_i^TAd_j=\lambda_j\,d_i^Td_j=0$ for $i\neq j$, because eigenvectors of a symmetric matrix are mutually orthogonal. This shows that eigenvectors fulfil the conjugacy condition. Therefore, if we start from an eigendirection, the conjugacy condition will find the next eigenvector, and so on.)
- Collect the directions into a matrix,
$$ S =
\begin{bmatrix}
\uparrow & \uparrow & \uparrow & \uparrow \\
d_1 & d_2 &\ldots& d_N \\
\downarrow & \downarrow & \downarrow & \downarrow
\end{bmatrix}
$$
- Transform the coordinates into $y = S^{-1} x$, and set up the problem in the new coordinate system $y$:
$$ f(x) = f(S y) = \frac12 (Sy)^T A(Sy) - b^T Sy = \frac12 y^T (S^T A S) y - (S^T b)^T y$$
\begin{align}
S^T A S & =
\begin{bmatrix}
\uparrow & \uparrow & \uparrow & \uparrow \\
d_1 & d_2 &\ldots& d_N \\
\downarrow & \downarrow & \downarrow & \downarrow
\end{bmatrix}^T
A
\begin{bmatrix}
\uparrow & \uparrow & \uparrow & \uparrow \\
d_1 & d_2 &\ldots& d_N \\
\downarrow & \downarrow & \downarrow & \downarrow
\end{bmatrix}
& \\
&\\
&=\begin{bmatrix} d_1^TAd_1 & d_1^TAd_2 &\ldots & d_1^TAd_N \\
d_2^TAd_1 & d_2^TAd_2 & \ldots & d_2^TAd_N \\
\vdots & \vdots & \ddots &\vdots\\
d_N^TAd_1 & d_N^TAd_2&\ldots&d_N^TAd_N
\end{bmatrix}
& \\
&\\
&=\begin{bmatrix} d_1^TAd_1 & 0 &\ldots & 0 \\
0 & d_2^TAd_2 & \ldots & 0 \\
\vdots & \vdots & \ddots &\vdots\\
0 & 0&\ldots&d_N^TAd_N
\end{bmatrix}
\end{align}
- The matrix $S^T A S$ is diagonal because of the the conjugacy condition we imposed
- This is the critical property, which is central to the conjugate gradient
- The problem in the $y$ space is a quadratic form along the axes
- The axes of the $y$ space are the search directions $\{d_i\}$.
- How can we construct such a set of directions (ruling out the eigendirections, which would require the diagonalization of $A$)?
## Example on the difference between eigenvectors and conjugate vectors
- Consider the quadratic form expressed in the $x$ coordinate system
$$ f(x) = \frac{1}{2} x^T \underbrace{\begin{bmatrix} 3 & -1\\ -1& 3\end{bmatrix}}_{=A} x$$
- First, we discuss the eigendecomposition
\begin{align}
\Lambda & = Q^{-1}AQ \\
& \\
\begin{bmatrix}2 & 0\\ 0 & 4 \end{bmatrix}
&=
\begin{bmatrix}\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}\\ -\frac{1}{\sqrt{2}} &\frac{1}{\sqrt{2}}\end{bmatrix}
A
\begin{bmatrix}\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}}\\\frac{1}{\sqrt{2}} &\frac{1}{\sqrt{2}}\end{bmatrix}
\end{align}
- The quadratic form in the space spanned by eigenvectors has coordinates $z$ and is diagonal
\begin{align}
f(Qz) & = \frac{1}{2} z^TQ^T A Qz\\
& = \frac{1}{2} z^T \Lambda z
\end{align}
- Second, we discuss the conjugate directions. A matrix $S$ of conjugate directions that are not eigenvectors is
\begin{align}
S & = \begin{bmatrix}1 & -5\\ 2 & 1 \end{bmatrix}
\end{align}
The first column is $d_1$, the second column is $d_2$, which are conjugate to each other. Thus,
\begin{align}
D = S^T A S & = \begin{bmatrix}1 & 2\\ -5 & 1 \end{bmatrix} A \begin{bmatrix}1 & -5\\ 2 & 1 \end{bmatrix}\\
& = \begin{bmatrix}11 & 0\\ 0 & 88 \end{bmatrix}
\end{align}
- The quadratic form in the space spanned by conjugate directions has coordinates $y$ and is diagonal
\begin{align}
f(y) & = \frac{1}{2} y^TS^T A Sy\\
& = \frac{1}{2} y^T D y
\end{align}
Although diagonal, matrix $D$ does not contain the eigenvalues of $A$, thus, the conjugate directions $d_1$ and $d_2$ are not the eigenvectors of $A$.
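A quick numerical check of this example (a small sketch using NumPy, with the $A$ and $S$ defined above):
```python
# Verify that S diagonalises the quadratic form but not with the eigenvalues of A.
A = np.array([[3.0, -1.0], [-1.0, 3.0]])
S = np.array([[1.0, -5.0], [2.0, 1.0]])      # columns are d1 and d2
D = np.dot(S.T, np.dot(A, S))
print(D)                                     # diagonal: [[11, 0], [0, 88]]
d1, d2 = S[:, 0], S[:, 1]
print(np.dot(d1, np.dot(A, d2)))             # 0 -> d1 and d2 are conjugate with respect to A
print(np.linalg.eig(A)[0])                   # eigenvalues 2 and 4, which are not the diagonal of D
```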
## Finding the conjugate directions with the conjugate gradient method iteratively
- At iteration $k$, we have the gradient $\nabla f(x_k)$ and the previous search directions $\{d_1 \ldots d_{k-1}\}$.
- We construct the new search direction as
$$
d_k = -\nabla f(x_k) + \beta_k d_{k-1}
$$
Note that $d_k$ has "memory" of the previous direction $d_{k-1}$
- We impose the conjugacy condition
\begin{align}
0 &= d_{k-1}^T A d_k\\
0 &= d_{k-1}^T A (-\nabla f(x_k) + \beta_k d_{k-1})
\end{align}
which yields
$$\beta_k = \frac{d_{k-1}^T A \nabla f(x_k)}{d_{k-1}^T A d_{k-1}}$$
- It can be shown that with this choice of $\beta_k$, the search direction $d_k$ is conjugate to _all_ the previous directions
Algorithm:
1. Initialise with steepest descent $d_0 = -\nabla f(x_0)$
1. Update location and determine $\alpha_k$ with line search $$x_{k+1} = x_k + \alpha_k d_k$$
1. Find new search direction: with this choice of $d_0$, it can be shown that $$d_{k+1} = -\nabla f(x_{k+1}) + \left[\frac{\|\nabla f(x_{k+1})\|}{\|\nabla f(x_k)\|}\right]^2 d_k$$
(Optional: A derivation of the identity $\beta_k = \left[\frac{\|\nabla f(x_{k+1})\|}{\|\nabla f(x_k)\|}\right]^2$ can be found on pages 270-271 in Luenberger, D. & Ye, Y. Linear and Nonlinear Programming, third edition, Springer. See the course booklist.)
1. Back to 2 until convergence.
```python
def conjugate_gradients(f, df, x0, ftol, xtol=1e-3, n_restart=None):
traj = []
# get initial direction
x = x0[:]
df_new = df(x)
d = -1.0*df_new
alpha_init = 1e-6
i = 0
#print i, f(x), np.linalg.norm(d), x[:], d[:]
traj.append((x[0],x[1]))
# loop until convergence
while np.linalg.norm(df_new) > ftol:
i += 1
# do line search in current direction
alpha = linesearch_golden_section(f, x, d, alpha_init, xtol)
# update estimate
x += alpha*d
# get beta
df_old = df_new[:]
df_new = df(x)
beta = (np.linalg.norm(df_new)/np.linalg.norm(df_old))**2
# get new direction
        if n_restart is None or i%n_restart != 0:
d = beta*d
d += -1.0*df_new
#print i, f(x), np.linalg.norm(d), x[:], d[:]
traj.append((x[0],x[1]))
return x[:],traj
```
```python
x, y = np.meshgrid(np.linspace(-1,1,50), np.linspace(-1,1,50))
Qxy = [[Q(np.array([x[j,i],y[j,i]])) for i in range(size(x,0)) ] for j in range(size(x,1))]
figure(figsize=(12,12))
contour(x, y, Qxy, linspace(0, 0.1, 6), color="k")
title("Quadratic - Conjugate Gradients")
axis('equal')
xmin, traj = conjugate_gradients(Q, dQ, np.array([-1.01, -1.03]), 1e-2, 1e-8)
plot([traj[i][0] for i in range(len(traj))], [traj[i][1] for i in range(len(traj))], "r-o")
show()
```
```python
x,y = np.meshgrid(np.linspace(-1.7,1.7,100), np.linspace(-0.8,3,100))
R = [[Rosen(np.array([x[j,i],y[j,i]])) for i in range(size(x,0)) ] for j in range(size(x,1))]
fig = figure(figsize=(12,8))
contour(x, y, R, np.logspace(-1,2, 8), color="k")
title("Rosenbrock - Conjugate Gradients")
xmin, traj = conjugate_gradients(Rosen,dRosen, np.array([-1.3,1.3]), 1e-1, 1e-8,2)
plot([traj[i][0] for i in range(len(traj))], [traj[i][1] for i in range(len(traj))], "r-o")
show()
```
## Conjugate gradients final remarks
- It converges after $N$ iterations for an $N$ dimensional quadratic form with exact line search
- For a non-quadratic nonlinear function, search directions are approximately conjugate
- Numerical errors lead to _loss of conjugacy_
  - The algorithm can be periodically _reset_ by setting $d_k=-\nabla f(x_k)$
- The convergence is linear, $$ \|x_{k+1}-x^*\| = \beta \|x_k - x^*\|$$
- But $\beta$ can be very small, $\beta \approx 0$. This is called "superlinear convergence"
- For large ill-conditioned problems, it can be shown that $$\beta \approx 1-\frac{2}{\sqrt{\kappa}}$$ where $\kappa$ is the condition number.
- $\beta$ is smaller than the steepest descent's
- The CG method does _not_ require the Hessian and its inversion
- The CG method was originally invented to solve linear systems $Ax = b$.
- Note the relationship with the quadratic problem
\begin{align}
f(x) &= \frac12 x^T A x - b^T x\\
\nabla f(x) &= Ax - b
\end{align}
## The conjugate gradient method is a Krylov subspace method
Consider the first two iterations
1. Start at $x_0$
2. $d_0 = b-A x_0$
3. $x_1 = x_0 + \alpha_1 d_0$
4. \begin{align}
d_1 &= b-A x_1 + \beta_1 d_0 \\
& = b-A ( x_0 + \alpha_1 d_0 ) + \beta_1 d_0 \\
& = b-A ( x_0 + \alpha_1 (b-A x_0 ) ) + \beta_1 (b-A x_0 ) \\
\end{align}
- The direction after $k$ steps is a linear combination of the vectors
$$
A x_0, A^2 x_0, A^3 x_0, \ldots, A^k x_0
$$
- The corresponding subspace is called __Krylov subspace__
- We often solve $Ax = b$ in the Krylov subspace for large matrices
- There is a large variety of methods called "Krylov subspace methods".
- The conjugate gradient is one of them.
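To make the connection with linear solvers concrete, here is a minimal sketch (not part of the lecture code) of the linear conjugate gradient iteration for solving $Ax=b$ with $A$ symmetric positive definite:
```python
# Minimal linear conjugate gradient for A x = b (textbook form).
def cg_linear(A, b, x0, tol=1e-10, maxiter=1000):
    x = np.array(x0, dtype=float)
    r = b - np.dot(A, x)                 # residual = minus the gradient of (1/2) x^T A x - b^T x
    d = r.copy()                         # first direction: steepest descent
    for _ in range(maxiter):
        Ad = np.dot(A, d)
        alpha = np.dot(r, r) / np.dot(d, Ad)     # exact line search for a quadratic
        x += alpha * d
        r_new = r - alpha * Ad
        if np.linalg.norm(r_new) < tol:
            break
        beta = np.dot(r_new, r_new) / np.dot(r, r)   # the beta ratio derived above
        d = r_new + beta * d
        r = r_new
    return x

A = np.array([[2.0, -1.9], [-1.9, 2.0]])
b = np.zeros(2)
print(cg_linear(A, b, np.array([-0.75, -0.95])))     # converges to [0, 0] in at most 2 steps
```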
## Summary of multi-dimensional search-direction methods
| Method | Advantages | Disadvantages
|:-|:-|:-
| Steepest descent | Only needs gradient | Linear convergence<br>slow for ill-conditioned problems |
| Newton-Raphson | Quadratic Convergence | Needs Hessian<br>can diverge |
| Newton-Raphson with line search | Quadratic Convergence<br> more reliable | Needs Hessian |
| Conjugate gradient | Faster than Steepest Descent | Linear convergence |
| Barzilai-Borwein | Often very stable,<br>faster than Steepest Descent | Convergence analysis<br>difficult in general |
## Least squares fitting of a model to data
One common optimization task involves a cost function that enables a simplification.
Let
- $y_j$ be the measured data, $j = 1,2,\ldots m$
- $\phi(x)$ be the (generally nonlinear) model we are fitting, with $\phi_j(x)$ being the model's prediction for $y_j$
- $x = [x_1,\ldots,x_n]^T$ be the vector of independent parameters, which can be tuned for the model to fit the data
The elements of the __residual__ vector $r(x)$ are the errors in the model's predictions for each piece of data
$$ r_j(x) = \phi_j(x) - y_j $$
- For all the pieces of data, the total error is
$$ f(x) = \sum_{j=1}^m r_j^2(x) = r(x)^T r(x) $$
- The optimization task is to minimize $f(x)$.
The gradient of $f(x)$ is
\begin{align}
\nabla f(x) &= \frac{\partial f}{\partial x_i}\;\;\;i=1,2,\ldots,n\\
& = \frac{\partial}{\partial x_i}\left(\sum_{j=1}^m r_j(x)^2\right)\;\;\;i=1,2,\ldots,n \\
& = 2 \sum_{j=1}^m \frac{\partial r_j(x)}{\partial x_i} r_j(x) \;\;\;i=1,2,\ldots,n \\
&= 2 J(x)^T r(x)
\end{align}
where $J(x)$ is the __Jacobian matrix__ of $r(x)$
$$
J(x) =
\begin{bmatrix}
\frac{\partial r_1}{\partial x_1} & \cdots & \frac{\partial r_1}{\partial x_n}\\
\vdots & \ddots & \vdots\\
\frac{\partial r_m}{\partial x_1} & \cdots & \frac{\partial r_m}{\partial x_n}\\
\end{bmatrix}
$$
The Hessian matrix $H(x)$ of $f(x)$ is expressed as
$$ H(x) = \nabla (\nabla f(x)) = 2 J(x)^T J(x) + 2 \sum_{j=1}^m r_j(x) R_j(x) $$
where $R_j(x)$ is the Hessian of the residual $r_j(x)$, i.e., $R_j(x) = \nabla(\nabla r_j(x))$
## The Gauss-Newton method
- Let us assume that the residuals at the optimum $r_j(x^*)$ are small, i.e., the model gives a good fit
- Therefore $R_j$ can be neglected. The Hessian $H(x)$ can be approximated as
$$H(x) \approx \tilde H(x) = 2J(x)^T J(x)$$
for $x$ near the minimum $x^*$, with $\tilde H(x)$ being positive definite.
- In the Gauss-Newton method, the search direction is then
$$
\begin{array}
~d_k &= -\left[\tilde H(x_k)\right]^{-1} \nabla f(x_k)\\
&= - \left[ 2J(x_k)^T J(x_k)\right]^{-1} \nabla f(x_k) \\
&= - \frac{1}{2}\left[J(x_k)^T J(x_k)\right]^{-1} 2J(x_k)^T r(x_k) \\
&= -J(x_k)^{+} r(x_k)\\
\end{array}
$$
where $J^+=\left[J(x_k)^T J(x_k)\right]^{-1} J(x_k)^T$ is the pseudoinverse matrix.
- The sequence of estimates is $$x_{k+1} = x_k + d_k$$
In order to find the search direction $d_k$ at each iteration, note that when $J(x_k)$ is square and invertible the pseudoinverse reduces to the ordinary inverse,
\begin{align}
d_k& = -J(x_k)^{-1}[J(x_k)^{T}]^{-1}J(x_k)^Tr(x_k)\\
&=-J(x_k)^{-1}r(x_k)
\end{align}
Therefore $d_k$ is obtained by solving the _linear problem_
$$J(x_k) d_k = -r(x_k)$$
(in the least-squares sense when $J$ is not square), which is usually cheap to solve.
- The rank of $J$ is at most the smaller of the number of fitting parameters, $n$, and the number of data points, $m$.
- We do not need to invert any large ill-conditioned matrix
- The Gauss-Newton method converges nearly quadratically if the residuals are small
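As an illustration only — the exponential model, the synthetic data, and all names below are invented for this sketch and are not part of the lecture — a minimal Gauss-Newton loop might look like this, with `np.linalg.lstsq` solving $J d_k = -r$ in the least-squares sense:
```python
# Illustrative Gauss-Newton iteration for fitting phi(t) = x1 * exp(x2 * t) to data.
def gauss_newton(t_data, y_data, x0, tol=1e-8, maxiter=50):
    x = np.array(x0, dtype=float)
    for _ in range(maxiter):
        pred = x[0] * np.exp(x[1] * t_data)
        r = pred - y_data                                  # residual vector r(x)
        J = np.column_stack((np.exp(x[1] * t_data),        # dr/dx1
                             x[0] * t_data * np.exp(x[1] * t_data)))  # dr/dx2
        d, *_ = np.linalg.lstsq(J, -r, rcond=None)         # solve J d = -r in the least-squares sense
        x += d
        if np.linalg.norm(d) < tol:
            break
    return x

t_data = np.linspace(0.0, 1.0, 20)
y_data = 2.0 * np.exp(-1.5 * t_data)                       # noiseless synthetic data
print(gauss_newton(t_data, y_data, x0=[1.0, 0.0]))         # approximately [2.0, -1.5]
```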
## You now have the tools to do all of examples paper 3
## Answers to common questions asked by students after lectures 3 and 4
1. The search directions of the steepest descent in the Rosenbrock function example do not _look_ orthogonal to each other in the final plot. Why?
- With an exact line search, the steepest descent directions are orthogonal to each other.
- The directions do not look orthogonal to each other in the plot, but this is only an "optical illusion"
- Make the aspect ratio of the figure equal, and you will see the orthogonality. E.g., use fig = figure(figsize=(12,12))
1. Consider the minimisation of an $N$-dimensional function where $N>2$. Is the line search an $N-1$-dimensional search?
- No, it is not. The line search is _always_ a one-dimensional minimisation problem.
1. Can the algorithms of these lectures find the global minimum?
 - These algorithms are designed to solve $\nabla f(x)=0$; therefore, they are designed to find stationary points.
- If the function is convex, the local minimum is the global minimum.
1. Are the conjugate directions eigenvectors of the Hessian?
- Only in one case: when we choose the eigenvectors as conjugate directions. The eigenvectors diagonalise the Hessian _and_ the terms of the diagonal are the eigenvalues of the Hessian.
(Remember that an eigenvalue $\lambda$ is defined by the equation $\det(A-\lambda I)=0$ where $I$ is the identity matrix.)
- If the conjugate directions are not eigenvectors, which is the typical case, the answer is no. The conjugate directions diagonalise the Hessian _but_ the terms of the diagonal are not the eigenvalues of the Hessian.
5. How can we derive $\beta_k$ for the conjugate gradient method? (This is optional.)
For brevity, we define $g_{k}:=\nabla f(x_k)=Ax_k-b$. From the conjugate gradient method, $g_{k+1}^T g_{k}=0$ and $g_k^Td_i=0$ if $i<k$, which are the two properties we use for this proof. So:
\begin{align}
g_{k+1} &= Ax_{k+1} - b \\
& = A(x_{k+1}-x_k) + Ax_k - b \\
& = A(\alpha_kd_k) + g_k
\end{align}
Take the inner product with $g_{k+1}$
\begin{align}
g_{k+1}^T g_{k+1} &= \alpha_k g_{k+1}^T Ad_k + \underbrace{g_{k+1}^Tg_k}_{=0} \\
&= \left(-\frac{g_k^Td_k}{d_k^T A d_k}\right) g_{k+1}^T Ad_k
\end{align}
Hence
\begin{align}
-\frac{g_{k+1}^T g_{k+1}}{g_k^T d_k}
&= \frac{g_{k+1}^T Ad_k}{d_k^T A d_k}
\end{align}
Note that $$ g_k^T d_k = g_k^T (-g_k + \beta_{k}d_{k-1})=-g_k^Tg_k\;\;\;\;\textrm{because we start with $d_0=-g_0$}$$ therefore
\begin{align}
\frac{g_{k+1}^T g_{k+1}}{g_k^T g_k}
&= \frac{g_{k+1}^T Ad_k}{d_k^T A d_k}
\end{align}
We recognize the right-hand side as $\beta_{k+1}$, which proves the identity.
| Lectures_3_4_Multidimensional_search_methods.ipynb | LukeMagher/3M1 | BSD-2-Clause |
# PCA
```python
import pandas
# For lots of great things.
import numpy as np
# To make our plots.
import matplotlib.pyplot as plt
%matplotlib inline
# Because sympy and LaTeX make
# everything look wonderful!
from sympy import *
init_printing(use_latex=True)
from IPython.display import display
# We will use this to check our implementation...
from sklearn.decomposition import PCA
import keras
from sklearn.preprocessing import StandardScaler
```
```python
## load in data and split into x train and y train
## data = np.array(pandas.read_csv("./comp_new_trainingdata.csv", header=0))
data = np.array(pandas.read_csv("./training_noavg.csv", header=0))
## bring in loc 0 -1 test data for PCA
data1 = np.array(pandas.read_csv("./test1.csv", header=0))
data2 = np.array(pandas.read_csv("./test2.csv", header=0))
data3 = np.array(pandas.read_csv("./test3.csv", header=0))
## Have to drop all the rows that have nan values because they will not help with the net
## clean out rows with nan values
data = data[~np.isnan(data).any(axis=1)]
data1 = data1[~np.isnan(data1).any(axis=1)]
data2 = data2[~np.isnan(data2).any(axis=1)]
data3 = data3[~np.isnan(data3).any(axis=1)]
print(data[:8])
print(data.shape)
data = np.vstack((data,data1,data2,data3))
print(data[:8])
data.shape
```
```python
# vectors AND class labels...
X = data[:,0:8] # columns 0 through 7 (features)
Y = data[:,8] # column 8 (class label)
scaler = StandardScaler()
# standardize X .. will mean center data
X = scaler.fit_transform(X)
# Pretty-print with display()!
display(X.shape)
display(Y.shape)
display(Matrix(np.unique(Y)).T)
display(X[0:8])
```
```python
U,S,V = np.linalg.svd(X,full_matrices=True)
# Percent variance accounted for
plt.plot(100.0*S/np.sum(S))
plt.ylabel('% Var')
plt.xlabel('Singular Value')
plt.show()
```
```python
# Variance accounted for in the first two principal components
100.0*(S[0]+S[1])/np.sum(S)
```
```python
# Scale the singular vectors, resulting in a rotated form of our mean-centered data
D = np.zeros([X.shape[0],X.shape[1]])
np.fill_diagonal(D,S)
Xrotated = np.dot(U,D)
# Extract just the first two principal components!
PCs = Xrotated[:,0:2]
PCs.shape
```
```python
# The x and y values come from the two
# Principal Components and the colors for
# each point are selected based on the
# corresponding class for each point...
plt.scatter(PCs[:,0],PCs[:,1],
color=[['red','green','blue','orange'][i] for i in Y.astype(int)])
plt.xlabel("PC1")
plt.ylabel("PC2")
plt.show()
```
The data suggest that we have some clear decision boundaries. The orange represents the testing data from all three locations, and we can see that it fits the class groupings. A simple MLP with linear activation functions will not work for our data; we should use a ReLU or sigmoid instead, along with a large hidden layer. It remains to be seen how well the network will be able to generalize.
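As a rough illustration of the kind of network described above — the layer sizes, activations, and output size below are assumptions for this sketch, not choices made elsewhere in this notebook — a small Keras MLP might look like:
```python
# Sketch of a small MLP with ReLU hidden layers (illustrative hyperparameters only).
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(8,)))  # 8 input features, matching X above
model.add(Dense(64, activation='relu'))
model.add(Dense(4, activation='softmax'))                  # output size assumed from the four colors plotted above
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',      # integer class labels, as in Y
              metrics=['accuracy'])
# model.fit(X, Y, epochs=20, batch_size=32, validation_split=0.2)
```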
| PCA.ipynb | holypolarpanda7/S19-team2-project | MIT |
[Open in Colab](https://colab.research.google.com/github/ValerieLangat/DS-Unit-1-Sprint-4-Statistical-Tests-and-Experiments/blob/master/Valerie_Intermediate_Linear_Algebra_Assignment.ipynb)
# Statistics
## 1.1 Sales for the past week was the following amounts: [3505, 2400, 3027, 2798, 3700, 3250, 2689]. Without using library functions, what is the mean, variance, and standard deviation of of sales from last week? (for extra bonus points, write your own function that can calculate these two values for any sized list)
```
import pandas as pd
import numpy as np
sales = [3505, 2400, 3027, 2798, 3700, 3250, 2689]
def mean1(numbs):
return sum(numbs)/len(numbs)
def variance(numbs):
mean = mean1(numbs)
return sum([(mean-x)**2 for x in numbs])/len(numbs)
def std(numbs):
return variance(numbs)**.5
SalesMean = mean1(sales)
SalesVariance = variance(sales)
SalesSTDEV = std(sales)
print("Mean of Sales: ", SalesMean)
print("Variance of Sales: ", SalesVariance)
print("Standard Deviation of Sales: ", SalesSTDEV)
```
Mean of Sales: 3052.714285714286
Variance of Sales: 183761.06122448976
Standard Deviation of Sales: 428.67360686714756
## 1.2 Find the covariance between last week's sales numbers and the number of customers that entered the store last week: [127, 80, 105, 92, 120, 115, 93] (you may use librray functions for calculating the covariance since we didn't specifically talk about its formula)
```
import math
sales = [3505, 2400, 3027, 2798, 3700, 3250, 2689]
customers = [127, 80, 105, 92, 120, 115, 93]
df = pd.DataFrame({'LWsales': sales, 'LWcustomers':customers})
print(np.cov(df['LWsales'], df['LWcustomers']))
covariance = df.cov()['LWsales']['LWcustomers']
print(covariance)
```
[[214387.9047619 7604.35714286]
[ 7604.35714286 290.95238095]]
7604.357142857142
## 1.3 Find the standard deviation of customers who entered the store last week. Then, use the standard deviations of both sales and customers to standardize the covariance to find the correlation coefficient that summarizes the relationship between sales and customers. (You may use library functions to check your work.)
```
last_weekstd = std(customers)
print(last_weekstd)
co_co = covariance / (df['LWsales'].std() * df['LWcustomers'].std())
print(co_co)
```
15.792015549069118
0.9628339778148909
## 1.4 Use pandas to import a cleaned version of the titanic dataset from the following link: [Titanic Dataset](https://raw.githubusercontent.com/Geoyi/Cleaning-Titanic-Data/master/titanic_clean.csv)
## Calculate the variance-covariance matrix and correlation matrix for the titanic dataset's numeric columns. (you can encode some of the categorical variables and include them as a stretch goal if you finish early)
```
df = pd.read_csv('https://raw.githubusercontent.com/Geoyi/Cleaning-Titanic-Data/master/titanic_clean.csv')
df.head(3)
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Unnamed: 0</th>
<th>pclass</th>
<th>survived</th>
<th>name</th>
<th>sex</th>
<th>age</th>
<th>sibsp</th>
<th>parch</th>
<th>ticket</th>
<th>fare</th>
<th>cabin</th>
<th>embarked</th>
<th>boat</th>
<th>body</th>
<th>home.dest</th>
<th>has_cabin_number</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>1</td>
<td>1.0</td>
<td>1.0</td>
<td>Allen, Miss. Elisabeth Walton</td>
<td>female</td>
<td>29.0000</td>
<td>0.0</td>
<td>0.0</td>
<td>24160</td>
<td>211.3375</td>
<td>B5</td>
<td>S</td>
<td>2</td>
<td>NaN</td>
<td>St Louis, MO</td>
<td>1</td>
</tr>
<tr>
<th>1</th>
<td>2</td>
<td>1.0</td>
<td>1.0</td>
<td>Allison, Master. Hudson Trevor</td>
<td>male</td>
<td>0.9167</td>
<td>1.0</td>
<td>2.0</td>
<td>113781</td>
<td>151.5500</td>
<td>C22 C26</td>
<td>S</td>
<td>11</td>
<td>NaN</td>
<td>Montreal, PQ / Chesterville, ON</td>
<td>1</td>
</tr>
<tr>
<th>2</th>
<td>3</td>
<td>1.0</td>
<td>0.0</td>
<td>Allison, Miss. Helen Loraine</td>
<td>female</td>
<td>2.0000</td>
<td>1.0</td>
<td>2.0</td>
<td>113781</td>
<td>151.5500</td>
<td>C22 C26</td>
<td>S</td>
<td>NaN</td>
<td>NaN</td>
<td>Montreal, PQ / Chesterville, ON</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
```
df.cov()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Unnamed: 0</th>
<th>pclass</th>
<th>survived</th>
<th>age</th>
<th>sibsp</th>
<th>parch</th>
<th>fare</th>
<th>body</th>
<th>has_cabin_number</th>
</tr>
</thead>
<tbody>
<tr>
<th>Unnamed: 0</th>
<td>143117.500000</td>
<td>284.357034</td>
<td>-53.967125</td>
<td>-1442.939812</td>
<td>25.828746</td>
<td>1.172783</td>
<td>-9410.735123</td>
<td>591.579132</td>
<td>-95.438885</td>
</tr>
<tr>
<th>pclass</th>
<td>284.357034</td>
<td>0.701969</td>
<td>-0.127248</td>
<td>-3.954605</td>
<td>0.053090</td>
<td>0.013287</td>
<td>-24.227788</td>
<td>-2.876653</td>
<td>-0.249992</td>
</tr>
<tr>
<th>survived</th>
<td>-53.967125</td>
<td>-0.127248</td>
<td>0.236250</td>
<td>-0.314343</td>
<td>-0.014088</td>
<td>0.034776</td>
<td>6.146023</td>
<td>0.000000</td>
<td>0.061406</td>
</tr>
<tr>
<th>age</th>
<td>-1442.939812</td>
<td>-3.954605</td>
<td>-0.314343</td>
<td>165.850021</td>
<td>-2.559806</td>
<td>-1.459378</td>
<td>114.416613</td>
<td>81.622922</td>
<td>1.463138</td>
</tr>
<tr>
<th>sibsp</th>
<td>25.828746</td>
<td>0.053090</td>
<td>-0.014088</td>
<td>-2.559806</td>
<td>1.085052</td>
<td>0.336833</td>
<td>8.641768</td>
<td>-8.708471</td>
<td>-0.003946</td>
</tr>
<tr>
<th>parch</th>
<td>1.172783</td>
<td>0.013287</td>
<td>0.034776</td>
<td>-1.459378</td>
<td>0.336833</td>
<td>0.749195</td>
<td>9.928031</td>
<td>4.237190</td>
<td>0.013316</td>
</tr>
<tr>
<th>fare</th>
<td>-9410.735123</td>
<td>-24.227788</td>
<td>6.146023</td>
<td>114.416613</td>
<td>8.641768</td>
<td>9.928031</td>
<td>2678.959738</td>
<td>-179.164684</td>
<td>10.976961</td>
</tr>
<tr>
<th>body</th>
<td>591.579132</td>
<td>-2.876653</td>
<td>0.000000</td>
<td>81.622922</td>
<td>-8.708471</td>
<td>4.237190</td>
<td>-179.164684</td>
<td>9544.688567</td>
<td>3.625689</td>
</tr>
<tr>
<th>has_cabin_number</th>
<td>-95.438885</td>
<td>-0.249992</td>
<td>0.061406</td>
<td>1.463138</td>
<td>-0.003946</td>
<td>0.013316</td>
<td>10.976961</td>
<td>3.625689</td>
<td>0.174613</td>
</tr>
</tbody>
</table>
</div>
```
df.corr()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Unnamed: 0</th>
<th>pclass</th>
<th>survived</th>
<th>age</th>
<th>sibsp</th>
<th>parch</th>
<th>fare</th>
<th>body</th>
<th>has_cabin_number</th>
</tr>
</thead>
<tbody>
<tr>
<th>Unnamed: 0</th>
<td>1.000000</td>
<td>0.897822</td>
<td>-0.293717</td>
<td>-0.296172</td>
<td>0.065594</td>
<td>0.003584</td>
<td>-0.481215</td>
<td>0.015558</td>
<td>-0.603727</td>
</tr>
<tr>
<th>pclass</th>
<td>0.897822</td>
<td>1.000000</td>
<td>-0.312469</td>
<td>-0.366370</td>
<td>0.060832</td>
<td>0.018322</td>
<td>-0.558629</td>
<td>-0.034642</td>
<td>-0.713857</td>
</tr>
<tr>
<th>survived</th>
<td>-0.293717</td>
<td>-0.312469</td>
<td>1.000000</td>
<td>-0.050199</td>
<td>-0.027825</td>
<td>0.082660</td>
<td>0.244265</td>
<td>NaN</td>
<td>0.302250</td>
</tr>
<tr>
<th>age</th>
<td>-0.296172</td>
<td>-0.366370</td>
<td>-0.050199</td>
<td>1.000000</td>
<td>-0.190747</td>
<td>-0.130872</td>
<td>0.171892</td>
<td>0.059059</td>
<td>0.271887</td>
</tr>
<tr>
<th>sibsp</th>
<td>0.065594</td>
<td>0.060832</td>
<td>-0.027825</td>
<td>-0.190747</td>
<td>1.000000</td>
<td>0.373587</td>
<td>0.160238</td>
<td>-0.099961</td>
<td>-0.009064</td>
</tr>
<tr>
<th>parch</th>
<td>0.003584</td>
<td>0.018322</td>
<td>0.082660</td>
<td>-0.130872</td>
<td>0.373587</td>
<td>1.000000</td>
<td>0.221539</td>
<td>0.051099</td>
<td>0.036806</td>
</tr>
<tr>
<th>fare</th>
<td>-0.481215</td>
<td>-0.558629</td>
<td>0.244265</td>
<td>0.171892</td>
<td>0.160238</td>
<td>0.221539</td>
<td>1.000000</td>
<td>-0.043110</td>
<td>0.507253</td>
</tr>
<tr>
<th>body</th>
<td>0.015558</td>
<td>-0.034642</td>
<td>NaN</td>
<td>0.059059</td>
<td>-0.099961</td>
<td>0.051099</td>
<td>-0.043110</td>
<td>1.000000</td>
<td>0.083796</td>
</tr>
<tr>
<th>has_cabin_number</th>
<td>-0.603727</td>
<td>-0.713857</td>
<td>0.302250</td>
<td>0.271887</td>
<td>-0.009064</td>
<td>0.036806</td>
<td>0.507253</td>
<td>0.083796</td>
<td>1.000000</td>
</tr>
</tbody>
</table>
</div>
# Orthogonality
## 2.1 Plot two vectors that are orthogonal to each other. What is a synonym for orthogonal?
```
import matplotlib.pyplot as plt
# A synonym for orthogonal is perpendicular
v1 = np.array([10,0])
v2 = np.array([0,10])
plt.arrow(0,0, *v1, head_width=1, head_length=1, color='purple')
plt.arrow(0,0, *v2, head_width=1, head_length=1, color='blue')
plt.xlim(-1, 15)
plt.ylim(-1, 15)
plt.show()
```
## 2.2 Are the following vectors orthogonal? Why or why not?
\begin{align}
a = \begin{bmatrix} -5 \\ 3 \\ 7 \end{bmatrix}
\qquad
b = \begin{bmatrix} 6 \\ -8 \\ 2 \end{bmatrix}
\end{align}
```
#We can calculate the dot product to find out:
a = np.array([-5,3,7]).T
b = np.array([6,-8,2]).T
np.dot(a, b)
# Not orthogonal because the dot product is -40, not zero
```
-40
## 2.3 Compute the following values: What do these quantities have in common?
## What is $||c||^2$?
## What is $c \cdot c$?
## What is $c^{T}c$?
\begin{align}
c = \begin{bmatrix} 2 & -15 & 6 & 20 \end{bmatrix}
\end{align}
```
c = np.array([2, -15, 6, 20])
one = (np.linalg.norm(c) ** 2)
two = (np.dot(c, c))
three = (np.matmul(c.T, c))
print (one, two, three)
#All the same number
```
665.0 665 665
# Unit Vectors
## 3.1 Using Latex, write the following vectors as a linear combination of scalars and unit vectors:
\begin{align}
d = \begin{bmatrix} 7 \\ 12 \end{bmatrix}
\qquad
e = \begin{bmatrix} 2 \\ 11 \\ -8 \end{bmatrix}
\end{align}
Your text here
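One way to write these as linear combinations of scalars and the standard unit vectors (an illustrative answer, added here):
\begin{align}
d = 7\hat{i} + 12\hat{j}
\qquad
e = 2\hat{i} + 11\hat{j} - 8\hat{k}
\end{align}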
## 3.2 Turn vector $f$ into a unit vector:
\begin{align}
f = \begin{bmatrix} 4 & 12 & 11 & 9 & 2 \end{bmatrix}
\end{align}
```
f = [4, 12, 11, 9, 2]
unitvec = f/np.linalg.norm(f)
print(unitvec)
```
[0.20908335 0.62725005 0.57497921 0.47043754 0.10454167]
# Linear Independence / Dependence
## 4.1 Plot two vectors that are linearly dependent and two vectors that are linearly independent (bonus points if done in $\mathbb{R}^3$).
```
```
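A possible sketch for this exercise (the specific vectors below are arbitrary examples, not part of the original notebook):
```
# Linearly dependent pair: dep2 is a scalar multiple of dep1, so they lie on the same line.
dep1 = np.array([2, 1])
dep2 = np.array([4, 2])
# Linearly independent pair: neither is a multiple of the other.
ind1 = np.array([2, 1])
ind2 = np.array([-1, 3])

plt.arrow(0, 0, *dep1, head_width=.2, color='red')
plt.arrow(0, 0, *dep2, head_width=.2, color='orange')
plt.arrow(0, 0, *ind1, head_width=.2, color='blue')
plt.arrow(0, 0, *ind2, head_width=.2, color='green')
plt.xlim(-2, 5)
plt.ylim(-2, 5)
plt.show()
```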
# Span
## 5.1 What is the span of the following vectors?
\begin{align}
g = \begin{bmatrix} 1 & 2 \end{bmatrix}
\qquad
h = \begin{bmatrix} 4 & 8 \end{bmatrix}
\end{align}
```
g = np.array([1, 2])
h = np.array([4, 8])
span = np.linalg.matrix_rank(np.array([g, h])) # stack the vectors as rows; rank 1 means the span is a line
print(span)
```
1
## 5.2 What is the span of $\{l, m, n\}$?
\begin{align}
l = \begin{bmatrix} 1 & 2 & 3 \end{bmatrix}
\qquad
m = \begin{bmatrix} -1 & 0 & 7 \end{bmatrix}
\qquad
n = \begin{bmatrix} 4 & 8 & 2\end{bmatrix}
\end{align}
```
l = np.array([1,2,3])
m = np.array([-1,0,7])
n = np.array([4, 8, 2])
span2 = np.linalg.matrix_rank(l, m, n)
print(span2)
# that didn't work.....
np.linalg.matrix_rank(np.array([[1, 2, 3], [-1, 0, 7], [4, 8, 2]]))
```
1
3
# Basis
## 6.1 Graph two vectors that form a basis for $\mathbb{R}^2$
```
# Could use my orthogonal vectors from 2.1 but I'll make a new one
v1 = np.array([20,0])
v2 = np.array([0,20])
plt.arrow(0,0, *v1, head_width=1, head_length=1, color='purple')
plt.arrow(0,0, *v2, head_width=1, head_length=1, color='pink')
plt.xlim(-5, 25)
plt.ylim(-5, 25)
plt.show()
```
## 6.2 What does it mean to form a basis?
Vectors in a space that are linearly independent and span the whole vector space
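A quick numerical way to check that two vectors form a basis for $\mathbb{R}^2$: stack them as rows and confirm the rank is 2 (equivalently, the determinant is nonzero).
```
basis_candidate = np.array([[20, 0], [0, 20]])  # the two vectors from 6.1 as rows
print(np.linalg.matrix_rank(basis_candidate))   # 2 -> linearly independent, so they span R^2
print(np.linalg.det(basis_candidate))           # nonzero determinant says the same thing
```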
# Rank
## 7.1 What is the Rank of P?
\begin{align}
P = \begin{bmatrix}
1 & 2 & 3 \\
-1 & 0 & 7 \\
4 & 8 & 2
\end{bmatrix}
\end{align}
```
# Exactly the same as 5.2 but this comined it to one big matrix. Can solve it the same way
RoP = np.linalg.matrix_rank(np.array([[1, 2, 3], [-1, 0, 7], [4, 8, 2]]))
RoP
```
3
## 7.2 What does the rank of a matrix tell us?
It tells us the number of linearly independent rows (or columns) of the matrix — the dimension of the space they span — which in turn tells us, for example, whether the matrix is invertible and whether a linear system built from it has a unique solution.
# Linear Projections
## 8.1 Line $L$ is formed by all of the vectors that can be created by scaling vector $v$
\begin{align}
v = \begin{bmatrix} 1 & 3 \end{bmatrix}
\end{align}
\begin{align}
w = \begin{bmatrix} -1 & 2 \end{bmatrix}
\end{align}
## find $proj_{L}(w)$
## graph your projected vector to check your work (make sure your axis are square/even)
```
v = np.array([1, 3])
w = np.array([-1, 2])
sca_v = np.dot(w, v)/(np.dot(v, v))
proj= sca_v * v
proj
```
array([0.5, 1.5])
```
#Checking
plt.arrow(0, 0, *v, head_width=.3, head_length=.3, color='purple')
plt.arrow(0, 0, *w, head_width=.3, head_length=.3, color='pink')
plt.arrow(0, 0, *proj, head_width=.3, head_length=.3, color='blue')
plt.xlim(-2, 5)
plt.ylim(-1, 5)
plt.show()
```
# Stretch Goal
## For vectors that begin at the origin, the coordinates of where the vector ends can be interpreted as regular data points. (See 3Blue1Brown videos about Spans, Basis, etc.)
## Write a function that can calculate the linear projection of each point (x,y) (vector) onto the line y=x. run the function and plot the original points in blue and the new projected points on the line y=x in red.
## For extra points plot the orthogonal vectors as a dashed line from the original blue points to the projected red points.
```
import pandas as pd
import matplotlib.pyplot as plt
# Creating a dataframe for you to work with -Feel free to not use the dataframe if you don't want to.
x_values = [1, 4, 7, 3, 9, 4, 5 ]
y_values = [4, 2, 5, 0, 8, 2, 8]
data = {"x": x_values, "y": y_values}
df = pd.DataFrame(data)
df.head()
plt.scatter(df.x, df.y)
plt.show()
```
```
```
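A possible sketch for the stretch goal (not part of the original notebook). Projecting a point $(x, y)$ onto the line $y = x$ is a projection onto the direction $[1, 1]$, which lands at $\left(\tfrac{x+y}{2}, \tfrac{x+y}{2}\right)$:
```
def project_onto_y_equals_x(x_vals, y_vals):
    # projection of (x, y) onto the unit vector along [1, 1] gives ((x+y)/2, (x+y)/2)
    m = (np.array(x_vals) + np.array(y_vals)) / 2.0
    return m, m

px, py = project_onto_y_equals_x(df.x, df.y)

plt.scatter(df.x, df.y, color='blue', label='original points')
plt.scatter(px, py, color='red', label='projected onto y = x')
plt.plot([0, 10], [0, 10], color='gray', linewidth=0.5)      # the line y = x
for x0, y0, x1, y1 in zip(df.x, df.y, px, py):
    plt.plot([x0, x1], [y0, y1], 'k--', linewidth=0.5)       # orthogonal connectors
plt.axis('equal')
plt.legend()
plt.show()
```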
| Valerie_Intermediate_Linear_Algebra_Assignment.ipynb | ValerieLangat/DS-Unit-1-Sprint-4-Statistical-Tests-and-Experiments | MIT |
# 3. Neural Networks
* A perceptron can represent even complex functions
> e.g. it can in principle express the complex processing a computer performs, but the task of setting the weights <br> (choosing weight values so that the desired output is produced) still has to be done by hand. <br> Previously we chose suitable weight values by looking at the logic tables of the AND and OR gates
* A neural network solves this problem.<br> (**The ability to learn appropriate values of the weight parameters automatically from data is a key property of neural networks**)
-----
* Drawing a network as a diagram, the leftmost layer is the input layer, the middle is the hidden layer, and the rightmost is the output layer<br>
The neurons of the hidden layer (unlike those of the input and output layers) are not visible from the outside, which is why the layer is called "hidden".<br>
Going from the input layer towards the output layer, the layers are called layer 0, layer 1, and layer 2 in order.<br>
(Layer numbering starts at 0 because Python array indices also start at 0.)
# 3.0 Perceptron review
* As in the figure above, this is a perceptron that receives the two signals x1 and x2 as input and outputs y.
----
<br>
\begin{align}
y =
\begin{cases}
0 & (b + w_1x_1 + w_2x_2 \leq 0)\\
1 & (b + w_1x_1 + w_2x_2 > 0)
\end{cases}
\end{align}
<br>
* In the equation above, b is the parameter for the **bias**, which controls how easily the neuron is activated. <br> w1 and w2 are the parameters for the **weights** of the respective signals and control how much influence each signal has.
If we show the bias explicitly, the perceptron looks like the figure above: a neuron with weight b and constant input 1 has been added.
This perceptron works as follows: the three signals x1, x2, and 1 enter the neuron, each signal is multiplied by its weight, and the results are passed on to the next neuron.<br> **The next neuron sums these signals and outputs 1 if the sum exceeds 0, and 0 otherwise.**
<br>
Because the bias input signal is always 1, that neuron is filled in grey in the figure to distinguish it from the other neurons.
# 3.1 Activation functions
* **Definition**: a function that converts the total of the input signals into the output signal; it determines whether the total input activates the neuron
<br>
\begin{align}
a = b + w_1 x_1 + w_2 x_2
\end{align}
\begin{align}
y = h(a)
\end{align}
* The first equation computes a, the total of the weighted input signals and the bias.
* The second equation shows the flow of passing a through the function h() to output y.
<figcaption> The equations above expressed as a diagram </figcaption>
<br>
The combination of the weighted signals becomes the node a, which passes through the activation function h() and is transformed into the node y.
In the figure above, the left side shows an ordinary neuron and the right side shows a neuron with the activation process made explicit (a is the total input signal, h() is the activation function, and y is the output).
<br>
A **simple perceptron** refers to a single-layer network that uses the step function (a function whose output switches at a threshold) as its activation function, while a **multi-layer perceptron** refers to a neural network (a network composed of several layers that uses a smooth activation function such as the sigmoid).
# 3.12 Step function
```
def step_function(x): # this version accepts a NumPy array: the comparison x > 0 yields a bool array, which is then converted to int
    y = x > 0 # the step function outputs 1 where the input exceeds 0 and 0 otherwise
    return y.astype(np.int)
```
```
import numpy as np # applying a comparison operator to a NumPy array performs the comparison element-wise and produces a bool array
import matplotlib.pylab as plt
x = np.array([-1.0,1.0,2.0])
x
```
array([-1., 1., 2.])
```
y = x > 0 # a new array y is created: True where the corresponding element of x is greater than 0, False where it is 0 or less
y # y is a bool array, but the step function we want should output ints 0 or 1, so we convert the elements of y from bool to int
```
array([False, True, True])
```
y = y.astype(np.int) # use the astype() method to convert the type of a NumPy array
y # in Python, converting bool to int turns True into 1 and False into 0
```
array([0, 1, 1])
```
def step_function(x):
    return np.array(x > 0, dtype=np.int) # the step function returns the comparison x > 0 cast to int
np.arange # np.arange() builds a 1-d array running from start to stop with the given step
x = np.arange(-5.0, 5.0, 0.1) # array from -5.0 to 5.0 in steps of 0.1 / numpy.arange([start, ] stop, [step, ] dtype=None)
y = step_function(x) # step_function() applies the step function to each element of the NumPy array it receives and returns the results as an array
plt.plot(x, y) # plotting x & y axis
plt.ylim(-0.1, 1.1) # set the range of the y axis
plt.show() # visualise the plot with the show function
# the step function's output switches from 0 to 1 (or from 1 to 0) at the boundary x = 0
```
* As the graph above shows, the step function's output switches from 0 to 1 (or from 1 to 0) at the boundary x = 0.
# 3.12 Sigmoid function
```
def sigmoid(x):
    return 1 / (1 + np.exp(-x)) # np.exp(-x) corresponds to the expression exp(-x)
x = np.array([-1.0, 1.0, 2.0]) # thanks to broadcasting, an operation between a NumPy array and a scalar is carried out between each element of the array and the scalar
sigmoid(x)
```
array([0.26894142, 0.73105858, 0.88079708])
```
t = np.array([1.0, 2.0, 3.0]) # assign an array (vector) to t
1.0 + t # numeric operations (+ / -) between the scalar 1.0 and a NumPy array
```
array([2., 3., 4.])
```
1.0 / t # the operation is carried out between the scalar and each element of the NumPy array, and the result comes back as a NumPy array
```
array([1. , 0.5 , 0.33333333])
```
x = np.arange(-5.0, 5.0, 0.1)
y = sigmoid(x) # now produce y with the sigmoid function
plt.plot(x,y) # plot the x and y values
plt.ylim(-0.1, 1.1) # set the range of the y axis
plt.show()
```
# 3.13 ReLU function
* ReLU (Rectified Linear Unit) is a function that outputs its input unchanged when the input is greater than 0, and outputs 0 when the input is 0 or less
<br>
<br>
-----
<br>
<br>
\begin{align}
h(x) =
\begin{cases}
x & (x > 0)\\
0 & (x \leq 0)
\end{cases}
\end{align}
<br>
<br>
* Written as a formula, ReLU is as shown above.<br>
* The 'Rectified' in ReLU means rectified in the sense of rectification (as for an electrical rectifier).
```
def relu(x):
    return np.maximum(0, x) # NumPy's maximum function picks and returns the larger of the two inputs, element-wise
```
# 3.2 Computing multidimensional arrays
* At its core, a multidimensional array is a 'collection of numbers'. Numbers lined up in a row, laid out in a rectangle, arranged in three dimensions, or more generally in N dimensions (4, 5, ...) are all collectively called multidimensional arrays
```
import numpy as np
A = np.array([1,2,3,4]) # 1차원으로 배열로 생성
print(A)
np.ndim(A) # 배열의 차원 수는 np.ndim()으로 확인이 가능
```
[1 2 3 4]
1
```
A.shape # the shape of an array is available through the instance variable shape
```
(4,)
```
A.shape[0]
```
4
```
B = np.array([[1,2],[3,4],[5,6]])
print(B)
np.ndim(B)
```
[[1 2]
 [3 4]
 [5 6]]
2
```
B.shape
```
(3, 2)
# 3.21 Matrix products
```
A = np.array([[1,2],[3,4]])
A.shape
```
(2, 2)
```
B = np.array([[5,6],[7,8]])
B.shape
```
(2, 2)
```
np.dot(A,B) # the matrix product is computed with np.dot()
```
array([[19, 22],
[43, 50]])
```
A = np.array([[1,3,5],[2,4,6]])
A.shape
```
(2, 3)
```
B = np.array([[1,3],[2,5],[4,6,]])
B.shape
```
(3, 2)
```
np.dot(A,B)
```
array([[27, 48],
[34, 62]])
```
A = np.array([[1,2],[3,4],[5,6]])
A.shape
```
(3, 2)
```
B = np.array([7,8])
B.shape
```
(2,)
```
np.dot(A,B)
```
array([23, 53, 83])
# 3.22 Matrix products in a neural network
```
X = np.array([1,2])
X.shape
```
(2,)
How to carry out the computation of a neural network with matrix products. <br>
In the implementation below, looking at the shapes of X, W, and Y, the corresponding dimensions of X and W in particular must have the same number of elements.
```
W = np.array([[1,3,5],[2,4,6]]) # W denotes the weights
print(W)
W.shape
```
[[1 3 5]
[2 4 6]]
(2, 3)
```
Y = np.dot(X, W) # using np.dot, which computes the product of multidimensional arrays, the result Y is computed in one go
print(Y) # without np.dot we would have to work out each element of Y one by one (e.g. with a for loop)
# that is why a facility that computes a whole matrix product at once is very important when implementing a neural network
```
[ 5 11 17]
# 3.3 Implementing a 3-layer neural network
* The neurons are arranged as 2 in the input layer (layer 0), 3 in the first hidden layer (layer 1), 2 in the second hidden layer (layer 2), and 2 in the output layer (layer 3)
* This means the computation of a neural network can be organised as matrix computations: the computation of each layer can be handled as a matrix product.
# Signal propagation between layers
* Signal propagation from the input layer to layer 1
```
X = np.array([1.0, 0.5])
W1 = np.array([[0.1, 0.3, 0.5], [0.2, 0.4, 0.6]])
B1 = np.array([0.1, 0.2, 0.3])
print(W1.shape) # (2, 3)
print(X.shape) # (2,)
print(B1.shape) # (3,)
A1 = np.dot(X,W1) + B1
```
(2, 3)
(2,)
(3,)
* Signal propagation from the input layer to layer 1<br>
<br>
As in the figure above, the weighted sum at the hidden layer (the total of the weighted signals and the bias) is written a, and the signal transformed by the activation function h() is written z. <br> Here the sigmoid function is used as the activation function
```
Z1 = sigmoid(A1) # this sigmoid function takes a NumPy array and returns a NumPy array made up of the same number of elements
print(A1) # [0.3, 0.7, 1.1]
print(Z1) # [0.57444232, 0.66818777, 0.75026011]
```
[0.3 0.7 1.1]
[0.57444252 0.66818777 0.75026011]
* Signal propagation from layer 1 to layer 2<br>
<br>
This implementation is exactly the same as the previous one, except that the output z1 of layer 1 becomes the input of layer 2. Using NumPy arrays in this way makes the signal propagation between layers easy to implement
```
W2 = np.array([[0.1, 0.4],[0.2, 0.5], [0.3, 0.6]])
B2 = np.array([0.1, 0.2])
print(Z1.shape) # (3,)
print(W2.shape) # (3,2)
print(B2.shape) # (2,)
A2 = np.dot(Z1, W2) + B2
Z2 = sigmoid(A2)
```
(3,)
(3, 2)
(2,)
* Signal propagation from layer 2 to the output layer
<br>
<br> Here we define the identity function identity_function() and use it as the activation function of the output layer.
<br> The identity function outputs its input unchanged.
```
def identity_function(x):
return x
W3 = np.array([[0.1, 0.3],[0.2, 0.4]])
B3 = np.array([0.1, 0.2])
A3 = np.dot(Z2, W3) + B3
Y = identity_function(A3) # Y = A3
```
- The activation function of the output layer is chosen to match the nature of the problem being solved.<br>
For example, use the identity function for regression, the sigmoid function for binary classification, and the softmax function for multi-class classification
* **Putting the implementation together**
```
# 3-layer neural network
def init_network():
network = {}
network['W1'] = np.array([[0.1, 0.3, 0.5], [0.2, 0.4, 0.6]])
network['b1'] = np.array([0.1, 0.2, 0.3])
network['W2'] = np.array([[0.1, 0.4], [0.2, 0.5], [0.3, 0.6]])
network['b2'] = np.array([0.1, 0.2])
network['W3'] = np.array([[0.1, 0.3], [0.2, 0.4]])
network['b3'] = np.array([0.1, 0.2])
return network
def forward(network, x):
W1, W2, W3 = network['W1'], network['W2'], network['W3']
b1, b2, b3 = network['b1'], network['b2'], network['b3']
a1 = np.dot(x, W1) + b1
z1 = sigmoid(a1)
a2 = np.dot(z1, W2) + b2
z2 = sigmoid(a2)
a3 = np.dot(z2, W3) + b3
y = identity_function(a3)
return y
network = init_network()
x = np.array([1.0, 0.5])
y = forward(network, x)
print(y)
```
[0.31682708 0.69627909]
# 3.4 Designing the output layer
```
a = np.array([0.3, 2.9, 4.0])
exp_a = np.exp(a) # exponential function
print(exp_a)
```
[ 1.34985881 18.17414537 54.59815003]
```
sum_exp_a = np.sum(exp_a) # sum of the exponentials
print(sum_exp_a)
```
74.1221542101633
```
y = exp_a / sum_exp_a
print(y)
```
[0.01821127 0.24519181 0.73659691]
```
def softmax(a): # implementation of the softmax function
exp_a = np.exp(a)
sum_exp_a = np.sum(exp_a)
y = exp_a / sum_exp_a
return y
```
Machine learning problems divide into classification and regression. Classification is the problem of deciding which class a piece of data belongs to. <br> Regression is the problem of predicting a (continuous) numeric value from the input data.
Softmax function
\begin{align}
y_{k} = \frac{\exp(a_k)}{\sum_{i=1}^{n} \exp(a_i)}
\end{align}
<br>
<br>
exp(x) denotes the exponential function e^x. n is the number of neurons in the output layer and yk is the k-th output. The numerator of the softmax function is the exponential of the input signal ak, and the denominator is the sum of the exponentials of all the input signals.
```
a = np.array([0.3, 2.9, 4.0])
exp_a = np.exp(a) # exponential function
print(exp_a)
```
[ 1.34985881 18.17414537 54.59815003]
```
sum_exp_a = np.sum(exp_a) # sum of the exponentials
print(sum_exp_a)
```
74.1221542101633
```
y = exp_a / sum_exp_a
print(y)
```
[0.01821127 0.24519181 0.73659691]
```
def softmax(a):
exp_a = np.exp(a)
sum_exp_a = np.sum(exp_a)
y = exp_a / sum_exp_a
return y
```
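One practical caveat worth adding (not covered in the text above): for large inputs, `np.exp(a)` can overflow. A commonly used, numerically stable variant subtracts the maximum of `a` first; the constant cancels in the ratio, so the result is unchanged:
```
def softmax_stable(a):
    c = np.max(a)                # subtract the maximum to avoid overflow in exp
    exp_a = np.exp(a - c)
    return exp_a / np.sum(exp_a)

a = np.array([1010, 1000, 990])  # the naive softmax above would overflow here
print(softmax_stable(a))
```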
| deep_learning_from_scratch/ch3_neural_network.ipynb | Fintecuriosity11/TIL | MIT |
# Logit and Logistic of array values
This notebook illustrates the level of control and flexibility available in Julia functions. The task is to evaluate the *logistic* function $(-\infty, \infty)\rightarrow(0,1)$
\begin{equation}
x \rightarrow \frac{1}{1 + e^{-x}}
\end{equation}
and its inverse, the *logit* or "log-odds" function $(0,1)\rightarrow(-\infty, \infty)$
\begin{equation}
p \rightarrow \log\left(\frac{p}{1-p}\right)
\end{equation}
on an array of numeric values.
The first priority is to evaluate these functions accurately and robustly. This usually means watching for edge cases (e.g. very large positive or negative $x$ or values of $p$ that are close to zero or to one).
The second priority is evaluate them quickly and flexibly. When fitting a logistic regression model to very large data sets these functions may be evaluated hundreds of times on arrays with millions of elements.
In "vectorized" languages, such as [`R`](http://www.r-project.org) or [`Matlab/Octave`](http://octave.org) and, to some extent, [`Python`](http://python.org), the obvious choice is to work on vectors. In fact, the language often hides the fact that vectorization is occurring.
## logit and logistic in R
The [`RCall`](https://github.com/JuliaStats/RCall.jl) for Julia starts an embedded R process and provides for two-way communication with it. In a [`Jupyter`](http://jupyter.org) notebook like this a Julia string prepended with `R` is evaluated in the R process. String delimiters are `"` or `"""`. In the second case the string can span multiple lines and can contain `"` characters.
```julia
using RCall
```
```julia
R" logit <- function(p) log(p / (1 - p)) ";
```
Create a vector of 100,000,000 random values between 0 and 1 on which to evaluate `logit`. This is done in `Julia` after setting the random number seed, to allow for reproducibility.
```julia
srand(1234321)
pvals = rand(100_000_000);
```
Copy the vector to the `R` process under the same name.
```julia
@rput pvals;
```
```julia
R""" print(system.time(xvals <- logit(pvals))) """;
```
user system elapsed
5.412 0.164 5.576
The first few values of `pvals` and `xvals` are
```julia
R" print(str(list(pvals=pvals, xvals=xvals))) ";
```
List of 2
$ pvals: num [1:100000000] 0.0944 0.9366 0.2583 0.9309 0.5553 ...
$ xvals: num [1:100000000] -2.261 2.693 -1.055 2.601 0.222 ...
NULL
Similarly, a vectorized logistic function can be defined as
```julia
R"""
logistic <- function(x) 1 / (1 + exp(-x))
print(system.time(pvalsnew <- logistic(xvals)))
""";
```
user system elapsed
3.472 0.156 3.631
```julia
R" all(pvals == pvalsnew) "; # check for 'round trip' identity
```
RCall.RObject{RCall.LglSxp}
[1] FALSE
The problem with the "round trip" check is that floating point arithmetic is not exact. Numbers are represented in a finite precision. `pvalsnew` is close to `pvals` but not exactly equal.
```julia
R" print(str(list(pvals=pvals, xvals=xvals, pvn=pvalsnew))) ";
```
List of 3
$ pvals: num [1:100000000] 0.0944 0.9366 0.2583 0.9309 0.5553 ...
$ xvals: num [1:100000000] -2.261 2.693 -1.055 2.601 0.222 ...
$ pvn : num [1:100000000] 0.0944 0.9366 0.2583 0.9309 0.5553 ...
NULL
`R` has an `all.equal` function that compares floating point values using a tolerance on the differences.
```julia
R" print(all.equal(pvals, pvalsnew)) ";
```
[1] TRUE
## The problem with vectorization
Vectorized languages are a wonderful environment when you begin programming because all the messy loop-related baggage is eliminated at the expense of some overhead. The evaluation of `logistic(xvals)` is done in 5 stages
1. Allocate a vector, `t1`, of 100,000,000 doubles and loop over `x` writing `-x` into `t1`.
2. Allocate a vector, `t2`, of 100,000,000 doubles and loop over `t1` writing `exp(t1)` into `t2`.
3. Allocate a vector, `t3`, of 100,000,000 doubles and loop over `t2` writing `1 + t2` into `t3`.
4. Allocate a vector, `result`, of 100,000,000 doubles and loop over `t3` writing `1 / t3` into `result`.
5. Return `result`
Because R allows for missing data in any vector the scalar arithmetic is more complicated than just looping over the vector. Every scalar addition in, e.g. `1 + t2` has a check on both addends to see if they are `NA`. Furthermore, the "recycling rule" that cycles over the `1` operand while looping over the indices of `t2` is further logic implemented inside the loop.
Notice that there are 3 temporary vectors allocated and the `result` must also be allocated. This storage must later be "garbage collected".
## Functional programming in Julia
The operations could be performed in exactly the same way in Julia. Currently some Julia arithmetic and math functions are vectorized and some aren't. In future releases vectorization will need to be explicitly stated by appending a `.` to a function name or prepending a `.` to an operator.
```julia
vlogistic(x::Vector) = 1 ./ (1 .+ exp.(-x));
vlogit(p::Vector) = log.(p ./ (1 .- p))
xvals = vlogit(pvals)
show(xvals[1:5])
```
Check for approximate equality
```julia
pvals ≈ vlogistic(xvals)
```
The timings show that the Julia code is faster than the R functions but it still allocates a considerable amount of storage and uses time in garbage collection (gc).
```julia
@time vlogit(pvals);
```
```julia
@time vlogistic(xvals);
```
However, there is no need to allocate the intermediate values when operating on only one value.
```julia
logit(p) = log(p / (one(p) - p));
logistic(x) = inv(one(x) + exp(-x));
sxvals = logit.(pvals);
show(sxvals[1:5])
```
[-2.2608,2.69298,-1.05468,2.60097,0.222041]
WARNING: Method definition logit(Any) in module Main at In[1]:1 overwritten at In[3]:1.
WARNING: Method definition logistic(Any) in module Main at In[1]:2 overwritten at In[3]:2.
```julia
@time logit.(pvals);
```
2.740797 seconds (159 allocations: 762.948 MB, 2.27% gc time)
This type of evaluation is in the "functional programming" style where simple functions are mapped over arrays. Julia allows for results to be pre-allocated as, e.g.
```julia
@time map!(logit, sxvals, pvals);
```
2.631148 seconds (3.43 k allocations: 140.514 KB)
In this case there isn't much of a savings in time but there is a saving in the amount of storage allocated. This becomes important when, e.g. fitting a generalized linear model or a generalized linear mixed model.
```julia
pvalsnew = similar(sxvals); @time map!(logistic, pvalsnew, sxvals);
```
2.116498 seconds (4.67 k allocations: 197.003 KB)
helps with the amount of allocation but doesn't actually run substantially faster.
There is a way to make the evaluation of the `log` and `exp` functions slightly faster, which is to use the [`@fastmath`](http://docs.julialang.org/en/stable/manual/performance-tips.html?highlight=fastmath#performance-annotations) macro in the definitions of the scalar functions.
```julia
@fastmath flogit(p) = log(p / (one(p) - p))   # logit is log(p / (1 - p))
function flogistic(x)
expmx = @fastmath(exp(-x))
inv(one(expmx) + expmx)
end
```
flogistic (generic function with 1 method)
```julia
@time map!(flogit, sxvals, pvals);
```
2.382810 seconds (5.07 k allocations: 220.258 KB)
```julia
@time map!(flogistic, pvalsnew, sxvals);
```
1.978709 seconds (4.93 k allocations: 210.208 KB)
## Multiple threads
Some multithreading capability is available in v0.5.0 of Julia. Later versions will enhance these capabilities. Before starting this notebook I set the environment variable `JULIA_NUM_THREADS=4` as this is running on a 4-core processor.
It is easiest to use multiple threads on a simple loop. Define a function that overwrites the values in one array with the logit or logistic of the values in another array. By convention, a `!` is appended to the name of such a *mutating function*, which modifies one or more of its arguments.
```julia
function logit!(dest, src)
length(dest) == length(src) || throw(DimensionMismatch())
Threads.@threads for i in eachindex(dest, src)
@inbounds dest[i] = flogit(src[i])
end
dest
end
```
logit! (generic function with 1 method)
```julia
@time logit!(sxvals, pvals);
```
2.281610 seconds (13.87 k allocations: 597.863 KB)
The scaling is very good with 4 threads being
```julia
1.66/0.571
```
times as fast as a single thread.
| CaseStudies/LogitLogistic.ipynb | dmbates/MixedMod | MIT |
## Overview
Kamodo provides a *functional* interface for space weather analysis, visualization, and knowledge discovery, allowing many problems in scientific data analysis to be posed in terms of function composition and evaluation. We'll walk through its general features here.
## Kamodo objects
Users primarily interact with models and data through Kamodo objects.
```python
from kamodo import Kamodo
```
### Function registration
Kamodo objects are essentially python dictionaries storing variable symbols as keys and their interpolating functions as values. New functions may be registered either at the initialization of the Kamodo object or later using dictionary bracket syntax.
```python
kamodo = Kamodo('$x = t^2$')
kamodo['g'] = 'y-1'
kamodo
```
\begin{equation}x{\left(t \right)} = t^{2}\end{equation}\begin{equation}g{\left(y \right)} = y - 1\end{equation}
### Function composition
Kamodo automatically composes functions when previously registered functions appear on the right-hand side of a new expression.
```python
kamodo['f'] = 'g(x)'
kamodo
```
\begin{equation}x{\left(t \right)} = t^{2}\end{equation}\begin{equation}g{\left(y \right)} = y - 1\end{equation}\begin{equation}f{\left(t \right)} = g{\left(x{\left(t \right)} \right)}\end{equation}
Here we have defined two functions, $x(t)$ and $g(y)$, together with the composition $f = g∘x$. Kamodo was able to determine that $f$ is implicitly a function of $t$ even though we did not say so in $f$'s declaration.
#### Function evaluation
Kamodo uses sympy's ```lambdify``` function to turn the above equations into highly optimized functions for numerical evaluation. We may evaluate $f(t)$ for $t=3$ using "dot" notation:
```python
kamodo.f(3)
```
8
where the return type is a numpy array. We could also have passed in a numpy array and the result shares the same shape:
```python
import numpy as np
t = np.linspace(-5, 5, 100000)
result = kamodo.f(t)
```
```python
assert(t.shape == result.shape)
```
### Unit conversion
Kamodo automatically handles unit conversions. Simply declare units on the left-hand-side of expressions using bracket notation.
```python
kamodo = Kamodo('mass[kg] = x', 'vol[m^3] = y')
```
```python
kamodo
```
\begin{equation}\operatorname{mass}{\left(x \right)} [kg] = x\end{equation}\begin{equation}\operatorname{vol}{\left(y \right)} [m^3] = y\end{equation}
Unless specified, Kamodo will assign the units for newly defined variables:
```python
kamodo['rho'] = 'mass/vol'
kamodo
```
\begin{equation}\operatorname{mass}{\left(x \right)} [kg] = x\end{equation}\begin{equation}\operatorname{vol}{\left(y \right)} [m^3] = y\end{equation}\begin{equation}\rho{\left(x,y \right)} [kilogram/meter**3] = \frac{\operatorname{mass}{\left(x \right)}}{\operatorname{vol}{\left(y \right)}}\end{equation}
We may override the default behavior by naming our chosen units on the left-hand side.
```python
kamodo['rho[g/cm^3]'] = 'mass/vol'
kamodo
```
\begin{equation}\operatorname{mass}{\left(x \right)} [kg] = x\end{equation}\begin{equation}\operatorname{vol}{\left(y \right)} [m^3] = y\end{equation}\begin{equation}\rho{\left(x,y \right)} [g/cm^3] = \frac{\operatorname{mass}{\left(x \right)}}{1000 \operatorname{vol}{\left(y \right)}}\end{equation}
!!! note
Kamodo will raise an error if the left and right-hand-side units are incompatible.
Even though generated functions are unitless, the units are clearly displayed on the lhs. We think this is a good trade-off between performance and legibility.
We can verify that kamodo produces the correct output upon evaluation.
```python
assert(kamodo.rho(3,8) == (3*1000.)/(8*100**3))
```
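To illustrate the note above, registering a variable whose left-hand-side units are incompatible with the right-hand side should fail. This snippet is an addition for illustration; the exact exception type is not specified here, so it is caught generically.

```python
# Attempting to equate a mass [kg] with a volume [m^3] should raise an error.
try:
    kamodo['bad[kg]'] = 'vol'
except Exception as err:
    print(type(err).__name__, ':', err)
```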
### Variable naming conventions
Kamodo allows for a wide array of variable names to suit your problem space, including Greek letters, subscripts, and superscripts.
```python
kamodo = Kamodo(
'rho = ALPHA+BETA+GAMMA',
'rvec = t',
'fprime = x',
'xvec_i = xvec_iminus1 + 1',
'F__gravity = G*M*m/R**2',
)
kamodo
```
\begin{equation}\rho{\left(\alpha,\beta,\gamma \right)} = \alpha + \beta + \gamma\end{equation}\begin{equation}\vec{r}{\left(t \right)} = t\end{equation}\begin{equation}\operatorname{{f}'}{\left(x \right)} = x\end{equation}\begin{equation}\vec{x}_{i}{\left(\vec{x}_{i-1} \right)} = \vec{x}_{i-1} + 1\end{equation}\begin{equation}\operatorname{F^{gravity}}{\left(G,M,R,m \right)} = \frac{G M m}{R^{2}}\end{equation}
For more details on variable names, see the [Syntax](../Syntax/) section.
## Kamodofication
Many functions cannot be written as simple mathematical expressions; they may instead represent simulation output or observational data. For this reason, we provide a ```@kamodofy``` decorator, which turns any callable function into a kamodo-compatible variable and adds metadata that enables unit conversion.
```python
from kamodo import kamodofy, Kamodo
import numpy as np
@kamodofy(units = 'kg/m^3', citation = 'Pembroke et. al, 2018')
def rho(x = np.array([3,4,5]), y = np.array([1,2,3])):
"""A function that computes density"""
return x+y
kamodo = Kamodo(rho = rho)
kamodo['den[g/cm^3]'] = 'rho'
kamodo
```
\begin{equation}\rho{\left(x,y \right)} [kg/m^3] = \lambda{\left(x,y \right)}\end{equation}\begin{equation}\operatorname{den}{\left(x,y \right)} [g/cm^3] = \frac{\rho{\left(x,y \right)}}{1000}\end{equation}
```python
kamodo.rho
```
$\rho{\left(x,y \right)} = \lambda{\left(x,y \right)}$
```python
kamodo.den(3,4)
```
0.007
```python
kamodo.rho.meta # PyHC standard
```
{'units': 'kg/m^3',
'citation': 'Pembroke et. al, 2018',
'equation': None,
'hidden_args': []}
```python
kamodo.rho.data # PyHC standard
```
array([4, 6, 8])
Original function docstrings and signatures are passed through:
```python
help(kamodo.rho)
```
Help on function rho in module __main__:
rho(x=array([3, 4, 5]), y=array([1, 2, 3]))
A function that computes density
citation: Pembroke et. al, 2018
```python
kamodo.detail()
```
|           | lhs | rhs            | symbol    | units  |
|-----------|-----|----------------|-----------|--------|
| rho(x, y) | rho | None           | rho(x, y) | kg/m^3 |
| den(x, y) | den | rho(x, y)/1000 | den(x, y) | g/cm^3 |
# Visualization
Kamodo graphs are generated directly from function signatures by examining the structure of both output and input arguments.
```python
from plotting import plot_types
plot_types
```
| out_shape | arg_shapes               | plot_type            | function     |
|-----------|--------------------------|----------------------|--------------|
| (1,)      | ((N, M), (N, M), (N, M)) | 3d-parametric        | surface      |
| (N,)      | ((N,),)                  | 1d-line              | line_plot    |
| (N,)      | ((N,), (N,), (N,))       | 3d-line-scalar       | line_plot    |
| (N, 2)    | ((N,),)                  | 2d-line              | line_plot    |
| (N, 2)    | ((N, 2),)                | 2d-vector            | vector_plot  |
| (N, 3)    | ((N,),)                  | 3d-line              | line_plot    |
| (N, 3)    | ((N, 3),)                | 3d-vector            | vector_plot  |
| (N, M)    | ((N,), (M,))             | 2d-contour           | contour_plot |
| (N, M)    | ((N, M), (N, M))         | 2d-contour-skew      | contour_plot |
| (N, M)    | ((N, M), (N, M), (N, M)) | 3d-parametric-scalar | surface      |
| (N, M)    | ((1,), (N, M), (N, M))   | 3d-plane             | plane        |
| (N, M)    | ((N, M), (1,), (N, M))   | 3d-plane             | plane        |
| (N, M)    | ((N, M), (N, M), (1,))   | 3d-plane             | plane        |
| (N, M, 1) | ((1,), (N,), (M,))       | 3d-plane             | plane        |
| (N, M, 1) | ((N,), (1,), (M,))       | 3d-plane             | plane        |
| (N, M, 1) | ((N,), (M,), (1,))       | 3d-plane             | plane        |
Kamodo uses [plotly](https://plot.ly/python/) for visualization, enabling a rich array of interactive graphs and easy web deployment.
```python
import plotly.io as pio
from plotly.offline import iplot,plot, init_notebook_mode
init_notebook_mode(connected=True)
```
```python
@kamodofy(units = 'kg/m^3')
def rho(x = np.linspace(0,1, 20), y = np.linspace(-1, 1, 40)):
"""A function that computes density"""
x_, y_ = np.meshgrid(x,y)
return x_*y_
kamodo = Kamodo(rho = rho)
kamodo
```
\begin{equation}\rho{\left(x,y \right)} [kg/m^3] = \lambda{\left(x,y \right)}\end{equation}
We will generate an image of this function using plotly
```python
fig = kamodo.plot('rho')
pio.write_image(fig, 'images/Kamodo_fig1.svg')
```
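Since plotly's offline mode was initialized above, the same figure can also be displayed interactively in the notebook (a small addition for completeness):

```python
# Display the interactive figure inline using the offline plotly mode set up earlier.
iplot(fig)
```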
See the [Visualization](../Visualization/) section for detailed examples.
## Latex I/O
Even though math is the language of physics, most scientific analysis software requires you to learn new programming languages. Kamodo allows users to write their mathematical expressions in LaTeX, a typesetting language most scientists already know:
```python
kamodo = Kamodo('$rho[kg/m^3] = x^3$', '$v[cm/s] = y^2$')
kamodo['p[Pa]'] = '$\\rho v^2$'
kamodo
```
\begin{equation}\rho{\left(x \right)} [kg/m^3] = x^{3}\end{equation}\begin{equation}v{\left(y \right)} [cm/s] = y^{2}\end{equation}\begin{equation}p{\left(x,y \right)} [Pa] = \frac{\rho{\left(x \right)} v^{2}{\left(y \right)}}{10000}\end{equation}
The resulting equation set may also be exported as a LaTeX string for use in publications:
```python
print(kamodo.to_latex() + '\n.')
```
\begin{equation}\rho{\left(x \right)} [kg/m^3] = x^{3}\end{equation}\begin{equation}v{\left(y \right)} [cm/s] = y^{2}\end{equation}\begin{equation}p{\left(x,y \right)} [Pa] = \frac{\rho{\left(x \right)} v^{2}{\left(y \right)}}{10000}\end{equation}
.
# Simulation API
Kamodo offers a simple API for simulations built from functions that are composed of each other.
Define variables as usual (order matters).
```python
kamodo = Kamodo()
kamodo['y_iplus1'] = 'x_i + 1'
kamodo['x_iplus1'] = 'y_i - 2'
kamodo
```
\begin{equation}\operatorname{y_{i+1}}{\left(x_{i} \right)} = x_{i} + 1\end{equation}\begin{equation}\operatorname{x_{i+1}}{\left(y_{i} \right)} = y_{i} - 2\end{equation}
Now add the ```update``` attribute to map functions onto arguments.
```python
kamodo.x_iplus1.update = 'x_i'
kamodo.y_iplus1.update = 'y_i'
```
Create a simulation with initial conditions
```python
simulation = kamodo.simulate(x_i = 0, steps = 5)
simulation #an iterator of arg, val dictionaries
```
<generator object simulate at 0x12fdd5a98>
Run the simulation by iterating through the generator.
```python
import pandas as pd
pd.DataFrame(simulation) # pandas conveniently iterates the results for display
```
|   | y_i  | x_i |
|---|------|-----|
| 0 | NaN  | 0   |
| 1 | 1.0  | -1  |
| 2 | 0.0  | -2  |
| 3 | -1.0 | -3  |
| 4 | -2.0 | -4  |
| 5 | -3.0 | -5  |
Dataset metadata for this notebook: hexsha a40a97252f269cf52a0e0f8294e1e4510310c063; size 36,131; ext ipynb; lang Jupyter Notebook.
max_stars / max_issues / max_forks: path docs/notebooks/Kamodo.ipynb, repo iamjavaexpert/Kamodo, head 26e7de66e67b9196ab19f13e73136db75832813c, licenses ["NASA-1.3"]; star, issue, and fork counts and dates all null.
avg_line_length 30.77598; max_line_length 457; alphanum_fraction 0.457004; converted true; num_tokens 4,458.
lm_name Qwen/Qwen-72B; lm_label "1. YES 2. YES"; lm_q1_score 0.83762; lm_q2_score 0.874077; lm_q1q2_score 0.732145; text_lang __label__eng_Latn (conf 0.709794); label 0.539349.
## Visualizing Convolutional Neural Networks and Neural Style Transfer
July 2019 <br>
**Author:** Matthew Stewart
```python
#RUN THIS CELL
import requests
from IPython.core.display import HTML
styles = requests.get("https://raw.githubusercontent.com/Harvard-IACS/2019-CS109B/master/content/styles/cs109.css").text
HTML(styles)
```
```python
import time
import numpy as np
from keras import backend as K
from keras.applications import vgg16, vgg19
from keras.preprocessing.image import load_img
from scipy.misc import imsave
from scipy.optimize import fmin_l_bfgs_b
# preprocessing
from utils import preprocess_image, deprocess_image
%matplotlib inline
```
### Part 1: Content loss
We can generate an image that combines the content and style of a pair of images with a loss function that incorporates this information. This is achieved with two terms, one that mimics the specific activations of a certain layer for the content image, and a second term that mimics the style. The variable to optimize in the loss function will be a generated image that aims to minimize the proposed cost. Note that to optimize this function, we will perform gradient descent __on the pixel values__, rather than on the neural network weights.
We will load a trained neural network called VGG-16, proposed in [1](https://arxiv.org/pdf/1409.1556.pdf), which secured first and second place in the localisation and classification tracks of the ImageNet Challenge in 2014, respectively. This network has been trained to discriminate among 1000 classes over more than a million images. We will use the activation values obtained for an image of interest to represent the content and style. In order to do so, we will feed the image of interest forward through the network and observe its activation values at the indicated layer.
The content loss function measures how much the feature map of the generated image differs from the feature map of the source image. We will only consider a single layer to represent the contents of an image. The authors of this technique indicated they obtained better results when doing so. We denote the feature maps for layer $l$ with $a^{[l]} \in \mathbb{R}^{n_H^{[l]} \times n_W^{[l]} \times n_C^{[l]}}$. Parameter $n_C^{[l]}$ is the number of filters/channels in layer $l$, $n_H^{[l]}$ and $n_W^{[l]}$ are the height and width.
The content loss is then given by:
\begin{equation}
J^{[l]}_C = \big\Vert a^{[l](G)} - a^{[l](C)} \big\Vert^2_{\mathcal{F}},
\end{equation}
where $a^{[l](G)}$ refers to the layer's activation values of the generated image, and $a^{[l](C)}$ to those of the content image.
<div class="exercise"> <b> Part 1: Content loss</b> </div>
Implement the function `feature_reconstruction_loss` that computes the loss between two feature inputs. You will need to use [keras backend functions](https://keras.io/backend/#backend-functions) to complete the exercise.
```python
def feature_reconstruction_loss(base, output):
"""
Compute the content loss for style transfer.
Inputs:
- output: features of the generated image, Tensor with shape [height, width, channels]
- base: features of the content image, Tensor with shape [height, width, channels]
Returns:
- scalar content loss
"""
# YOUR CODE GOES HERE
return K.sum(K.square(output - base))
```
Test your implementation:
```python
np.random.seed(1)
base = np.random.randn(10,10,3)
output = np.random.randn(10,10,3)
a = K.constant(base)
b = K.constant(output)
test = feature_reconstruction_loss(a, b)
print('Result: ', K.eval(test))
print('Expected result: ', 605.62195)
```
### Part 2: Style loss
The style measures the similarity among filters in a set of layers. In order to compute that similarity, we will compute the Gram matrix of the activation values for the style layers, i.e., $a^{[l]}$ for some set $\mathcal{L}$. The Gram matrix is related to the empirical covariance matrix, and therefore, reflects the statistics of the activation values.
Given a feature map $a^{[l]}$ of shape $(n_H^{[l]}, n_W^{[l]}, n_C^{[l]})$, the Gram matrix has shape $(n_C^{[l]}, n_C^{[l]})$ and its elements are given by:
\begin{equation*}
G^{[l]}_{k k'} = \sum_{i=1}^{n_H^{[l]}} \sum_{j=1}^{n_W^{[l]}} a^{[l]}_{ijk} a^{[l]}_{ijk'}.
\end{equation*}
The output is a 2-D matrix which approximately measures the cross-correlation among different filters for a given layer. This in essence constitutes the style of a layer.
<div class="exercise"> <b> Part 2: Computing the Gram matrix</b> </div>
We implement a function that computes the Gram matrix of a given keras tensor. This can be accomplished efficiently if $x$ is reshaped as a tensor of shape ($n_C^{[l]} \times n_H^{[l]} n_W^{[l]}$) and then you compute the outer product of this matrix with itself. We need to use [keras backend functions](https://keras.io/backend/#backend-functions) for this.
```python
def gram_matrix(x):
"""
Computes the outer-product of the input tensor x.
Input:
- x: input tensor of shape (H, W, C)
Returns:
- tensor of shape (C, C) corresponding to the Gram matrix of
the input image.
"""
# YOUR CODE GOES HERE
features = K.batch_flatten(K.permute_dimensions(x, (2, 0, 1)))
return K.dot(features, K.transpose(features))
```
Test your implementation:
```python
np.random.seed(1)
x_np = np.random.randn(10,10,3)
x = K.constant(x_np)
test = gram_matrix(x)
print('Result:\n', K.eval(test))
print('Expected:\n', np.array([[99.75723, -9.96186, -1.4740534], [-9.96186, 86.854324, -4.141108 ], [-1.4740534, -4.141108, 82.30106 ]]))
```
### Part 3: Style loss: layer's loss
Now we can tackle the style loss. For a given layer $l$, the style loss is defined as follows:
\begin{equation*}
J^{[l]}_S = \frac{1}{4 (n^{[l]}_W n^{[l]}_H)^2} \Big\Vert G^{[l](S)} - G^{[l](G)}\Big\Vert^2_{\mathcal{F}}.
\end{equation*}
In practice we compute the style loss at a set of layers $\mathcal{L}$ rather than just a single layer $l$; then the total style loss is the sum of style losses at each layer:
$$J_S = \sum_{l \in \mathcal{L}} \lambda_l J^{[l]}_S$$
where $\lambda_l$ corresponds to a weighting parameter.
<div class="exercise"> <b> Part 3: Computing the layer's loss</b> </div>
Implement `style_reconstruction_loss` that computes the loss for a given layer $l$. We again need to use [keras backend functions](https://keras.io/backend/#backend-functions) for this.
```python
def style_reconstruction_loss(base, output):
"""
Computes the style reconstruction loss. It encourages the output img
to have same stylistic features as style image.
Inputs:
- base: features at given layer of the style image.
- output: features of the same length as base of the generated image.
Returns:
- style_loss: scalar style loss
"""
# YOUR CODE GOES HERE
H, W = int(base.shape[0]), int(base.shape[1])
gram_base = gram_matrix(base)
gram_output = gram_matrix(output)
factor = 1.0 / float((2*H*W)**2)
out = factor * K.sum(K.square(gram_output - gram_base))
return out
```
Test your implementation:
```python
np.random.seed(1)
x = np.random.randn(10,10,3)
y = np.random.randn(10,10,3)
a = K.constant(x)
b = K.constant(y)
test = style_reconstruction_loss(a, b)
print('Result: ', K.eval(test))
print('Expected:', 0.09799164)
```
### Part 4: Total-variation regularization
We will also encourage smoothness in the image using a total-variation regularizer. This penalty term will reduce variation among the neighboring pixel values.
The following expression constitutes the regularization penalty over all pairs of pixels that are next to each other horizontally or vertically. The expression is computed independently for each RGB channel.
\begin{equation*}
J_{tv} = \sum_{c=1}^3\sum_{i=1}^{n^{[l]}_H-1} \sum_{j=1}^{n^{[l]}_W-1} \left( (x_{i,j+1, c} - x_{i,j,c})^2 + (x_{i+1, j,c} - x_{i,j,c})^2 \right)
\end{equation*}
<div class="exercise"> <b> Part 4: Total-variation regularization</b> </div>
In the next cell, fill in the definition for the TV loss term.
__Remark:__ $x$ has dimension $(1, n_H^{[l]}, n_W^{[l]}, n_C^{[l]})$, which is different from the 3D-tensors we used before.
```python
def total_variation_loss(x):
"""
Total variational loss. Encourages spatial smoothness
in the output image.
Inputs:
- x: image with pixels, has shape 1 x H x W x C.
Returns:
- total variation loss, a scalar number.
"""
# YOUR CODE GOES HERE
a = K.square(x[:, :-1, :-1, :] - x[:, 1:, :-1, :])
b = K.square(x[:, :-1, :-1, :] - x[:, :-1, 1:, :])
return K.sum(a + b)
```
Test your implementation. If your results are similar but not exactly equal, you may still have a correct implementation; the goal is to penalize differences between neighboring pixels.
```python
np.random.seed(1)
x_np = np.random.randn(1,10,10,3)
x = K.constant(x_np)
test = total_variation_loss(x)
print('Result: ', K.eval(test))
print('Expected:', 937.0538)
```
### Part 5: Style transfer
We now put it all together and generate some images! The `style_transfer` function below combines all the losses you coded up above and optimizes for an image that minimizes the total loss. Read the code and comments to understand the procedure.
```python
def style_transfer(base_img_path, style_img_path, output_img_path, convnet='vgg16',
content_weight=3e-2, style_weights=(20000, 500, 12, 1, 1), tv_weight=5e-2, content_layer='block4_conv2',
style_layers=['block1_conv1', 'block2_conv1', 'block3_conv1', 'block4_conv1', 'block5_conv1'], iterations=50):
print('\nInitializing Neural Style model...')
# Determine the image sizes. Fix the output size from the content image.
print('\n\tResizing images...')
width, height = load_img(base_img_path).size
new_dims = (height, width)
# Preprocess content and style images. Resizes the style image if needed.
content_img = K.variable(preprocess_image(base_img_path, new_dims))
style_img = K.variable(preprocess_image(style_img_path, new_dims))
# Create an output placeholder with desired shape.
# It will correspond to the generated image after minimizing the loss function.
output_img = K.placeholder((1, height, width, 3))
# Sanity check on dimensions
print("\tSize of content image is: {}".format(K.int_shape(content_img)))
print("\tSize of style image is: {}".format(K.int_shape(style_img)))
print("\tSize of output image is: {}".format(K.int_shape(output_img)))
# Combine the 3 images into a single Keras tensor, for ease of manipulation
# The first dimension of a tensor identifies the example/input.
input_img = K.concatenate([content_img, style_img, output_img], axis=0)
# Initialize the vgg16 model
print('\tLoading {} model'.format(convnet.upper()))
if convnet == 'vgg16':
model = vgg16.VGG16(input_tensor=input_img, weights='imagenet', include_top=False)
else:
model = vgg19.VGG19(input_tensor=input_img, weights='imagenet', include_top=False)
print('\tComputing losses...')
# Get the symbolic outputs of each "key" layer (they have unique names).
# The dictionary outputs an evaluation when the model is fed an input.
outputs_dict = dict([(layer.name, layer.output) for layer in model.layers])
# Extract features from the content layer
content_features = outputs_dict[content_layer]
# Extract the activations of the base image and the output image
base_image_features = content_features[0, :, :, :] # 0 corresponds to base
combination_features = content_features[2, :, :, :] # 2 corresponds to output
# Calculate the feature reconstruction loss
content_loss = content_weight * feature_reconstruction_loss(base_image_features, combination_features)
# For each style layer compute style loss
# The total style loss is the weighted sum of those losses
temp_style_loss = K.variable(0.0) # we update this variable in the loop
weight = 1.0 / float(len(style_layers))
for i, layer in enumerate(style_layers):
# extract features of given layer
style_features = outputs_dict[layer]
# from those features, extract style and output activations
style_image_features = style_features[1, :, :, :] # 1 corresponds to style image
output_style_features = style_features[2, :, :, :] # 2 corresponds to generated image
temp_style_loss += style_weights[i] * weight * \
style_reconstruction_loss(style_image_features, output_style_features)
style_loss = temp_style_loss
# Compute total variational loss.
tv_loss = tv_weight * total_variation_loss(output_img)
# Composite loss
total_loss = content_loss + style_loss + tv_loss
# Compute gradients of output img with respect to total_loss
print('\tComputing gradients...')
grads = K.gradients(total_loss, output_img)
outputs = [total_loss] + grads
loss_and_grads = K.function([output_img], outputs)
# Initialize the generated image from random noise
x = np.random.uniform(0, 255, (1, height, width, 3)) - 128.
# Loss function that takes a vectorized input image, for the solver
def loss(x):
x = x.reshape((1, height, width, 3)) # reshape
return loss_and_grads([x])[0]
# Gradient function that takes a vectorized input image, for the solver
def grads(x):
x = x.reshape((1, height, width, 3)) # reshape
return loss_and_grads([x])[1].flatten().astype('float64')
# Fit over the total iterations
for i in range(iterations+1):
print('\n\tIteration: {}'.format(i+1))
toc = time.time()
x, min_val, info = fmin_l_bfgs_b(loss, x.flatten(), fprime=grads, maxfun=20)
# save current generated image
if i%10 == 0:
img = deprocess_image(x.copy(), height, width)
fname = output_img_path + '_at_iteration_%d.png' % (i)
imsave(fname, img)
print('\t\tImage saved as', fname)
tic = time.time()
print('\t\tLoss: {:.2e}, Time: {} seconds'.format(float(min_val), float(tic-toc)))
```
<div class="exercise"> <b> Part 5: Generate pictures!</b> </div>
Find style and content images under `images/inputs/`.
* The `base_img_path` is the filename of content image.
* The `style_img_path` is the filename of style image.
* The `output_img_path` is the filename of generated image.
* The `convnet` is for the neural network weights, VGG-16 or VGG-19.
* The `content_layer` specifies which layer to use for content loss.
* The `content_weight` weights the content loss in the overall composite loss function. Increasing the value of this parameter will make the final image look more realistic (closer to the original content).
* `style_layers` specifies a list of which layers to use for the style loss.
* `style_weights` specifies a list of weights to use for each layer in style_layers (each of which will contribute a term to the overall style loss). We generally use higher weights for the earlier style layers because they describe more local/smaller scale features, which are more important to texture than features over larger receptive fields. In general, increasing these weights will make the resulting image look less like the original content and more distorted towards the appearance of the style image.
* `tv_weight` specifies the weighting of total variation regularization in the overall loss function. Increasing this value makes the resulting image look smoother and less jagged, at the cost of lower fidelity to style and content.
**CAUTION:** The script saves an image every 10 iterations.
### Great wave of Kanagawa + Chicago
```python
params = {
'base_img_path' : 'images/inputs/chicago.jpg',
'style_img_path' : 'images/inputs/great_wave_of_kanagawa.jpg',
'output_img_path' : 'images/results/wave_chicago',
'convnet' : 'vgg16',
'content_weight' : 500,
'style_weights' : (10, 10, 50, 10, 10),
'tv_weight' : 200,
'content_layer' : 'block4_conv2',
'style_layers' : ['block1_conv1',
'block2_conv1',
'block3_conv1',
'block4_conv1',
'block5_conv1'],
'iterations' : 50
}
style_transfer(**params)
```
### Starry night + Tübingen
```python
params = {
'base_img_path' : 'images/inputs/tubingen.jpg',
'style_img_path' : 'images/inputs/starry_night.jpg',
'output_img_path' : 'images/results/starry_tubingen',
'convnet' : 'vgg16',
'content_weight' : 100,
'style_weights' : (1000, 100, 12, 1, 1),
'tv_weight' : 200,
'content_layer' : 'block4_conv2',
'style_layers' : ['block1_conv1',
'block2_conv1',
'block3_conv1',
'block4_conv1',
'block5_conv1'],
'iterations' : 50
}
style_transfer(**params)
```
### Acknowledgments
- The implementation uses code from Francois Chollet's neural style transfer.
- The implementation uses code from Kevin Zakka's neural style transfer, under MIT license.
- The hierarchy borrows from Giuseppe Bonaccorso's gist, under MIT license.
Dataset metadata for this notebook: hexsha d125c43f026017917a82b2a7f83e1a2fdf1bac58; size 26,003; ext ipynb; lang Jupyter Notebook.
max_stars: path Neural-Style-Transfer/Neural-Style-Transfer.ipynb, repo victorwu89/Neural-Networks, head 6de5378701e5f8bac3be92ebf41ce778162a3d34, licenses ["MIT"], stars 70 (2019-06-18T07:32:23.000Z to 2022-01-18T07:53:08.000Z).
max_issues: same path, repo, head and licenses; issue counts and dates null.
max_forks: same path, repo, head and licenses; forks 38 (2019-06-18T13:33:44.000Z to 2022-03-15T13:16:10.000Z).
avg_line_length 38.183554; max_line_length 567; alphanum_fraction 0.57278; converted true; num_tokens 4,893.
lm_name Qwen/Qwen-72B; lm_label "1. YES 2. YES"; lm_q1_score 0.831143; lm_q2_score 0.851953; lm_q1q2_score 0.708095; text_lang __label__eng_Latn (conf 0.954358); label 0.483473.
```python
import numpy as np
import pandas as pd
import linearsolve as ls
import matplotlib.pyplot as plt
plt.style.use('classic')
%matplotlib inline
pd.plotting.register_matplotlib_converters()
```
# Class 16: Introduction to New-Keynesian Business Cycle Modeling
In this notebook, we will briefly explore US macroeconomic data suggesting that, contrary to the assumptions of most RBC models, there is in fact a relationship between real and nominal quantities over the business cycle. Then we will use `linearsolve` to compute impulse responses of output, inflation, and the nominal interest rate to a monetary policy shock in the New-Keynesian model.
## Data
The file `business_cycle_data_actual_trend_cycle.csv`, available at https://github.com/letsgoexploring/econ126/raw/master/Data/Csv/business_cycle_data_actual_trend_cycle.csv, contains actual and trend data for real GDP per capita, real consumption per capita, real investment per capita, real physical capital per capita, TFP, hours per capita, the real money supply (M2), (nominal) interest rate on 3-month T-bills, the PCE inflation rate, and the unemployment rate; each at quarterly frequency. The GDP, consumption, investment, capital, and money supply data are in terms of 2012 dollars. Hours is measured as an index with the value in October 2012 set to 100.
```python
# Read business_cycle_data_actual_trend.csv into a Pandas DataFrame with the first column set as the index and parse_dates=True
# Print the last five rows of the data
```
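A minimal sketch of a possible completion (not the official solution), using the URL given above:

```python
# Read the CSV with the first column as a DatetimeIndex and show the last five rows.
import pandas as pd

url = 'https://github.com/letsgoexploring/econ126/raw/master/Data/Csv/business_cycle_data_actual_trend_cycle.csv'
df = pd.read_csv(url, index_col=0, parse_dates=True)
print(df.tail())
```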
### Exercise: GDP and Inflation
Construct a plot of the cyclical components of GDP and inflation.
```python
# Construct plot
# Place legend to right of figure. PROVIDED
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
```
### Exercise: GDP and the 3-Month T-Bill Rate
Construct a plot of the cyclical components of GDP and the 3-month T-bill rate.
```python
# Construct plot
# Place legend to right of figure. PROVIDED
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
```
### Correlations Between GDP, Inflation, and 3-Month T-Bill Rate
Compute the coefficients of correlation between GDP, inflation, and the 3-month T-bill rate.
```python
```
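One possible completion, assuming the DataFrame from the sketch above is named `df`; the cyclical-component column names used here ('gdp_cycle', 'pce_inflation_cycle', 't_bill_3mo_cycle') are assumptions and should be replaced with the names actually present in the data:

```python
# Correlation matrix of the cyclical components of GDP, inflation, and the T-bill rate.
print(df[['gdp_cycle', 'pce_inflation_cycle', 't_bill_3mo_cycle']].corr())
```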
Strong (but not perfect!) correlations between GDP and inflation and between GDP and the T-bill rate suggest a link between nominal and real quantities over the business cycle that should be explained by business cycle theory.
## The New-Keynesian Model
The most basic version of the New-Keynesian Model can be expressed as:
\begin{align}
y_t & = E_t y_{t+1} - \left( r_{t} - \bar{r}\right) + g_t\\
i_{t} & = r_{t} + E_t \pi_{t+1}\\
i_{t} & = \bar{r} + \pi^T + \phi_{\pi}\big(\pi_t - \pi^T\big) + \phi_{y}\big(y_t - \bar{y}\big) + v_t\\
\pi_t -\pi^T & = \beta \left( E_t\pi_{t+1} - \pi^T\right) + \kappa (y_t -\bar{y})+ u_t,
\end{align}
where: $y_t$ is (log) output, $r_t$ is the real interest rate, $i_t$ is the nominal interest rate, $\pi_t$ is the rate of inflation between periods $t-1$ and $t$, $\bar{r}$ is the long-run average real interest rate or the *natural rate of interest*, $\beta$ is the household's subjective discount factor, and $\pi^T$ is the central bank's inflation target. The coeffieints $\phi_{\pi}$ and $\phi_{y}$ reflect the degree of intensity to which the central bank *endogenously* adjusts the nominal interest rate in response to movements in inflation and output.
The variables $g_t$, $u_t$, and $v_t$ represent exogenous shocks to aggregate demand, inflation, and monetary policy. They follow AR(1) processes:
\begin{align}
g_{t+1} & = \rho_g g_{t} + \epsilon^g_{t+1}\\
u_{t+1} & = \rho_u u_{t} + \epsilon^u_{t+1}\\
v_{t+1} & = \rho_v v_{t} + \epsilon^v_{t+1}.
\end{align}
The goal is to compute impulse responses in the model to a one percent exogenous increase in the nominal interest rate. We will use the following parameterization:
| $\bar{y}$ | $\beta$ | $\bar{r}$ | $\kappa$ | $\pi^T$ | $\phi_{\pi}$ | $\phi_y$ | $\rho_g$ | $\rho_u$ | $\rho_v$ |
|-----------|---------|--------------|----------|---------|--------------|----------|----------|----------|---------|
| 0 | 0.995 | $-\log\beta$ | 0.1 | 0.02/4 | 1.5 | 0.5/4 | 0.5 | 0.5 | 0.5 |
```python
# Create a variable called 'parameters' that stores the model parameter values in a Pandas Series
# Print the model's parameters
```
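A possible filling-in of the blank cell above, taking the values from the parameter table; the index labels ('y_bar', 'beta', ...) are a choice made here and must match whatever names the equilibrium-conditions function uses:

```python
# Model parameters stored in a Pandas Series (values from the table above).
parameters = pd.Series({
    'y_bar': 0.0,
    'beta': 0.995,
    'r_bar': -np.log(0.995),
    'kappa': 0.1,
    'pi_T': 0.02/4,
    'phi_pi': 1.5,
    'phi_y': 0.5/4,
    'rho_g': 0.5,
    'rho_u': 0.5,
    'rho_v': 0.5,
})
print(parameters)
```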
```python
# Create variable called 'var_names' that stores the variable names in a list with state variables ordered first
# Create variable called 'shock_names' that stores an exogenous shock name for each state variable.
# Define a function that evaluates the equilibrium conditions of the model solved for zero. PROVIDED
def equilibrium_equations(variables_forward,variables_current,parameters):
# Parameters. PROVIDED
p = parameters
# Current variables. PROVIDED
cur = variables_current
# Forward variables. PROVIDED
fwd = variables_forward
# IS equation
# Fisher_equation
# Monetary policy
# Phillips curve
# Demand process
# Monetary policy process
# Inflation process
# Stack equilibrium conditions into a numpy array
# Initialize the model into a variable named 'nk_model'
```
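For reference, one way the blank cell above could be completed is sketched below. The variable ordering (states first), the shock names, and the parameter names follow the sketch above; none of these choices are mandated by the notebook, and the `nk_model` initialization is left to the `linearsolve` documentation.

```python
# Hedged sketch: variable and shock names, plus equilibrium conditions rearranged to equal zero.
var_names = ['g', 'u', 'v', 'y', 'pi', 'i', 'r']
shock_names = ['e_g', 'e_u', 'e_v']

def equilibrium_equations(variables_forward, variables_current, parameters):
    p = parameters
    cur = variables_current
    fwd = variables_forward

    # IS equation: y_t = E_t y_{t+1} - (r_t - r_bar) + g_t
    is_equation = fwd.y - (cur.r - p.r_bar) + cur.g - cur.y
    # Fisher equation: i_t = r_t + E_t pi_{t+1}
    fisher_equation = cur.r + fwd.pi - cur.i
    # Monetary policy rule
    monetary_policy = p.r_bar + p.pi_T + p.phi_pi*(cur.pi - p.pi_T) + p.phi_y*(cur.y - p.y_bar) + cur.v - cur.i
    # Phillips curve
    phillips_curve = p.pi_T + p.beta*(fwd.pi - p.pi_T) + p.kappa*(cur.y - p.y_bar) + cur.u - cur.pi
    # Exogenous demand, monetary policy, and inflation processes
    demand_process = p.rho_g*cur.g - fwd.g
    monetary_policy_process = p.rho_v*cur.v - fwd.v
    inflation_process = p.rho_u*cur.u - fwd.u

    # Stack equilibrium conditions into a numpy array
    return np.array([
        is_equation,
        fisher_equation,
        monetary_policy,
        phillips_curve,
        demand_process,
        monetary_policy_process,
        inflation_process,
    ])
```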
```python
# Compute the steady state numerically using .compute_ss() method of nk_model
# Print the computed steady state
```
```python
# Find the log-linear approximation around the non-stochastic steady state and solve using .approximate_and_solve() method of nk_model
# Set argument 'log_linear' to False because the model is already log-linear.
```
### Impulse Responses
Compute a 21 period impulse response of the model's variables to a 0.01/4 unit shock to the exogenous component of monetary policy ($v_t$) in period 5.
```python
# Compute impulse responses
# Print the first 10 rows of the computed impulse responses to the monetary policy shock
```
Plot the computed impulses responses of the nominal interest rate, the real interest rate, output, and inflation. Express inflation and interest rates in *annualized* (e.g., multiplied by 4) terms.
```python
# Create figure. PROVIDED
fig = plt.figure(figsize=(12,8))
# Create upper-left axis. PROVIDED
ax1 = fig.add_subplot(2,2,1)
# Create upper-right axis. PROVIDED
ax2 = fig.add_subplot(2,2,2)
# Create lower-left axis. PROVIDED
ax3 = fig.add_subplot(2,2,3)
# Create lower-right axis. PROVIDED
ax4 = fig.add_subplot(2,2,4)
# Set axis 1 ylabel
# Set axis 2 ylabel
# Set axis 3 ylabel
# Set axis 4 ylabel
# Set axis 1 limits
# Set axis 2 limits
# Set axis 3 limits
# Set axis 4 limits
# Plot the nominal interest rate, real interest rate, output, and inflation
```
Dataset metadata for this notebook: hexsha ea58833929bfa97f3a96cbb1ce230eef12278a53; size 14,279; ext ipynb; lang Jupyter Notebook.
max_stars / max_issues / max_forks: path Lecture Notebooks/Econ126_Class_16_blank.ipynb, repo t-hdd/econ126, head 17029937bd6c40e606d145f8d530728585c30a1d, licenses ["MIT"]; star, issue, and fork counts and dates all null.
avg_line_length 39.885475; max_line_length 681; alphanum_fraction 0.415575; converted true; num_tokens 1,775.
lm_name Qwen/Qwen-72B; lm_label "1. YES 2. YES"; lm_q1_score 0.63341; lm_q2_score 0.803174; lm_q1q2_score 0.508739; text_lang __label__eng_Latn (conf 0.972699); label 0.020299.
# Sampling of Signals
*This Jupyter notebook is part of a [collection of notebooks](../index.ipynb) in the bachelors module Signals and Systems, Comunications Engineering, Universität Rostock. Please direct questions and suggestions to [Sascha.Spors@uni-rostock.de](mailto:Sascha.Spors@uni-rostock.de).*
## Ideal Sampling and Reconstruction
[Digital signal processors](https://en.wikipedia.org/wiki/Digital_signal_processor) and general purpose processors can only perform arithmetic operations within a limited number range. So far we considered continuous signals which are continuous with respect to time and its amplitude values. Such signals cannot be handled by processors in a straightforward manner. In order to obtain a digital representation of a continuous signal, a discretization has to be perfomed in both time and amplitude. The former is known as [*sampling*](https://en.wikipedia.org/wiki/Sampling_%28signal_processing%29), the latter as [*quantization*](https://en.wikipedia.org/wiki/Quantization_%28signal_processing%29). Sampling refers to the process of picking amplitude values from a continuous signal at discrete time-instants. The sampled signal is referred to as *discrete signal*. Quantization refers to the process of mapping a continuous amplitude to a countable set of amplitude values. The quantized signal is referred to as *quantized signal*. A signal which is discrete and quantized is a *digital signal*. This is illustrated in the following
Only digital signals can be handled by digital signal or general purpose processors. The sampling of signals is discussed in the following.
### Model of Ideal Sampling
A continuous signal $x(t)$ is sampled by taking its amplitude values at given time-instants. These time-instants can be chosen arbitrarily in time, but equidistant sampling schemes are most common. The process of sampling is modeled by multiplying the continuous signal with a series of Dirac impulses. This constitutes an idealized model since Dirac impulses cannot be realized in practice.
For equidistant sampling of a continuous signal $x(t)$ with sampling interval $T$, the sampled signal $x_\text{s}(t)$ reads
\begin{equation}
x_\text{s}(t) = \sum_{k = - \infty}^{\infty} x(t) \cdot \delta(t - k T) = \sum_{k = - \infty}^{\infty} x(k T) \cdot \delta(t - k T)
\end{equation}
where the [multiplication property](../continuous_signals/standard_signals.ipynb#Dirac-Impulse) of the Dirac impulse was used for the last equality. The sampled signal is composed of a series of equidistant Dirac impulses which are weighted by the amplitude values of the continuous signal.
The series of Dirac impulses is represented conveniently by the [Dirac comb](../periodic_signals/spectrum.ipynb#The-Dirac-Comb). Rewriting the sampled signal yields
\begin{equation}
x_\text{s}(t) = x(t) \cdot \frac{1}{T} {\bot \!\! \bot \!\! \bot} \left( \frac{t}{T} \right)
\end{equation}
The process of sampling can be modeled by multiplying the continuous signal $x(t)$ with a Dirac comb. The samples $x(k T)$ for $k \in \mathbb{Z}$ of the continuous signal constitute the [discrete (-time) signal](https://en.wikipedia.org/wiki/Discrete-time_signal) $x[k] := x(k T)$. The question arises if and under which conditions the samples $x[k]$ fully represent the continuous signal and allow for a reconstruction. In order to investigate this, the spectrum of the sampled signal is derived in the following.
### Spectrum of Sampled Signal
The spectrum $X_\text{s}(j \omega) = \mathcal{F} \{ x_\text{s}(t) \}$ of the sampled signal $x_\text{s}(t)$ is derived by applying the [multiplication theorem](../fourier_transform/theorems.ipynb#Multiplication-Theorem) to above representation of the sampled signal by the Dirac comb
\begin{align}
X_\text{s}(j \omega) &= \frac{1}{2 \pi} X(j \omega) * {\bot \!\! \bot \!\! \bot} \left( \frac{\omega}{\omega_\text{s}} \right) \\
&= \frac{1}{2 \pi} X(j \omega) * \frac{2 \pi}{T} \sum_{\mu = - \infty}^{\infty} \delta(\omega - \mu \omega_\text{s}) \\
&= \frac{1}{T} \sum_{\mu = - \infty}^{\infty} X \left(j (\omega - \mu \omega_\text{s}) \right)
\end{align}
where $X(j \omega) = \mathcal{F} \{ x(t) \}$ denotes the Fourier transform of the continuous signal, $\omega_\text{s} = 2 \pi \, f_\text{s}$ the angular sampling frequency and $f_\text{s} = \frac{1}{T}$ the sampling frequency. The second equality results from the definition of the Dirac comb and the scaling property of the Dirac impulse, the third from its sifting property. The spectrum of the sampled signal consists of a superposition of shifted copies of the spectrum of the continuous signal. The resulting spectrum is periodic with a period of $\omega_\text{s}$. It can be concluded that equidistant sampling results in a repetition of the spectrum of the continuous signal.
The spectrum $X_\text{s}(j \omega)$ of a sampled signal is illustrated using the example of a real-valued low-pass signal. A low-pass signal $x(t)$ is a signal with band-limited spectrum
\begin{equation}
X(j \omega) = 0 \qquad \text{for } |\omega| > \omega_\text{u}
\end{equation}
where $\omega_\text{u}$ denotes its upper frequency limit. The following illustration shows the generic spectrum of a continuous real-valued low-pass signal
The spectrum of the sampled signal is constructed by superimposing shifted copies of the spectrum of the continuous low-pass signal $X(j \omega)$ at multiples of $\omega_\text{s}$
It can be concluded from the illustration, that the shifted copies of $X(j \omega)$ do not overlap if $\omega_\text{u} < \frac{\omega_\text{s}}{2}$. For $|\omega| < \omega_\text{u}$ the spectrum of the continuous signal is not affected by overlapping in this case. However, for $\omega_\text{u} > \frac{\omega_\text{s}}{2}$ overlapping occurs which changes the spectrum of the continuous signal within $|\omega| < \omega_\text{u}$.
### Ideal Reconstruction
The question arises if and under which conditions the continuous signal can be recovered from the sampled signal. Above consideration revealed that the spectrum $X_\text{s}(j \omega)$ of the sampled signal contains the unaltered spectrum of the continuous signal $X(j \omega)$ if $\omega_\text{u} < \frac{\omega_\text{s}}{2}$. Hence, the continuous signal can be reconstructed from the sampled signal by extracting the spectrum of the continuous signal from the spectrum of the sampled signal. This can be done by applying an [ideal low-pass](../system_properties/idealized_systems.ipynb#Ideal-Low-Pass) with cut-off frequency $\omega_\text{c} = \frac{\omega_{s}}{2}$. This is illustrated in the following
where the blue line represents the spectrum of the sampled signal and the red line the spectrum of the ideal low-pass. The transfer function $H(j \omega)$ of the low-pass reads
\begin{equation}
H(j \omega) = T \cdot \text{rect} \left( \frac{\omega}{\omega_\text{s}} \right)
\end{equation}
Its impulse response $h(t)$ is yielded by inverse Fourier transform of the transfer function
\begin{equation}
h(t) = \text{sinc} \left( \frac{\pi t}{T} \right)
\end{equation}
The reconstructed signal $y(t)$ is given by convolving the sampled signal $x_\text{s}(t)$ with the impulse response of the low-pass filter. This results in
\begin{align}
y(t) &= x_\text{s}(t) * h(t) \\
&= \left( \sum_{k = - \infty}^{\infty} x(k T) \cdot \delta(t - k T) \right) * \text{sinc} \left( \frac{\pi t}{T} \right) \\
&= \sum_{k = - \infty}^{\infty} x(k T) \cdot \text{sinc} \left( \frac{\pi}{T} (t - k T) \right)
\end{align}
where for the last equality the fact was exploited that $x(k T)$ is independent of the time $t$ for which the convolution is performed. The reconstructed signal is given by a weighted superposition of shifted sinc functions. Their weights are given by the samples $x(k T)$ of the continuous signal. The reconstruction is illustrated in the following figure
The black boxes show the samples $x(k T)$ of the continuous signal, the blue line the reconstructed signal $y(t)$, the gray lines the weighted sinc functions. The sinc function for $k = 0$ is highlighted in red. The amplitudes $x(k T)$ at the sampled positions are reconstructed perfectly since
\begin{equation}
\text{sinc} ( \frac{\pi}{T} (t - k T) ) = \begin{cases}
\text{sinc}(0) = 1 & \text{for } t=k T \\
\text{sinc}(n \pi) = 0 & \text{for } t=(k+n) T, \quad n \in \mathbb{Z} \setminus \{0\}
\end{cases}
\end{equation}
The amplitude values in between the sampling positions $t = k T$ are given by superimposing the shifted sinc functions. The process of computing values in between given sampling points is termed [*interpolation*](https://en.wikipedia.org/wiki/Interpolation). The reconstruction of the sampled signal is performed by interpolating the discrete amplitude values $x(k T)$. The sinc function is the optimal interpolator in this context.
### Aliasing
So far the case was considered when no overlaps occur in the spectrum of the sampled signal. This is the case when the upper frequency limit $\omega_\text{u}$ of the real-valued low-pass signal is lower than $\frac{\omega_\text{s}}{2}$. Here a perfect reconstruction of the continuous signal $x(t)$ from its discrete counterpart $x[k]$ is possible. However when this condition is not fulfilled, the repetitions of the spectrum of the continuous signal overlap. This is illustrated in the following
In this case no perfect reconstruction of the continuous signal by low-pass filtering (interpolation) of the sampled signal is possible. The spectrum within the pass-band of the low-pass contains additional contributions from the repeated spectrum of the continuous signal. These contributions are known as [aliasing](https://en.wikipedia.org/wiki/Aliasing). It becomes evident from the above discussion of ideal reconstruction that the amplitude values are reconstructed correctly at the time-instants $k T$. However, in between these time-instants the reconstructed signal $y(t)$ differs from the continuous signal $x(t)$ if aliasing is present.
### Sampling Theorem for Low-Pass Signals
It can be concluded from above discussion of sampling, that a sufficient condition for the perfect reconstruction of a real-valued low-pass signal $x(t)$ is given as
\begin{equation}
\omega_\text{s} \geq 2 \cdot \omega_\text{u}
\end{equation}
The minimum sampling frequency has to be chosen as twice the highest frequency present in the continuous signal. This condition is known as the [*Nyquist–Shannon sampling theorem*](https://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampling_theorem). Only if this condition is fulfilled is all information contained in a low-pass signal $x(t)$ represented by its samples $x[k] = x(k T)$.
Depending on the relation between the sampling frequency $\omega_\text{s}$ and the upper frequency limit $\omega_\text{u}$ of the low-pass signal, three different cases can be distinguished
* oversampling $\omega_\text{s} > 2 \cdot \omega_\text{u}$
* critical sampling $\omega_\text{s} = 2 \cdot \omega_\text{u}$
* undersampling $\omega_\text{s} < 2 \cdot \omega_\text{u}$
In practical applications signals are always oversampled to some degree, since the ideal low-pass used to reconstruct the continuous signal cannot be realized. Examples of sampling rates used in audio are
| | sampling frequency $f_\text{s}$ |
|:---|:---:|
| Telephone service | Narrowband: 8 kHz, Wideband: 16 kHz |
| [Compact Disc (CD)](https://en.wikipedia.org/wiki/Compact_disc) | [44.1 kHz](https://en.wikipedia.org/wiki/44,100_Hz) |
| [DVD-Audio](https://en.wikipedia.org/wiki/DVD-Audio) | 44.1, 48, 88.2, 96, 176.4, 192 kHz |
### Ideal Sampling and Reconstruction of a Cosine Signal
The ideal sampling and reconstruction of a signal $x(t)$ is illustrated in the following. Two functions are defined which ideally sample the signal $x(t)$ at equidistant time-instants $t = k T$ and compute the reconstructed signal $y(t)$ from the samples by an ideal low-pass.
```python
%matplotlib inline
import sympy as sym
import numpy as np
import matplotlib.pyplot as plt
sym.init_printing()
t = sym.symbols('t', real=True)
k = sym.symbols('k', integer=True)
# faster than sympy.sinc
def sinc(x):
return sym.sin(x)/x
def ideal_sampling(x, k, w_s):
kappa = sym.symbols('kappa')
xs = sym.lambdify(kappa, x.subs(t, kappa * 2 * sym.pi / w_s))
return [xs(kappa) for kappa in k]
def ideal_reconstruction(xs, k, w_s):
T = 2*sym.pi/w_s
return sum(xs[n] * sinc(sym.pi / T * (t - k[n] * T)) for n in range(len(k)))
```
Furthermore a helper function for plotting of the sampled and reconstructed signal is defined.
```python
def plot_signals(xs, y, w_s, k):
plt.stem(k*2*np.pi/w_s, xs)
plt.xlabel('$t$ in s')
plt.ylabel('$x_s[k] = x_s(kT)$')
plt.axis([0, 5, -1.2, 1.2])
sym.plot(y, (t,0,5), xlabel='$t$', ylabel='$y(t)$', ylim=(-1.2, 1.2))
```
Now the continuous signal to be sampled and reconstructed is defined and plotted. For ease of illustration a cosine signal $x(t) = \cos(\omega_0 t)$ with $\omega_0 = 5$ is used in the following.
```python
w_0 = 5
x = sym.cos(w_0 * t)
sym.plot(x, (t,0,5), xlabel=r'$t$', ylabel=r'$x(t)$');
```
First the case of oversampling $\omega_\text{s} > 2 \cdot \omega_0$ with $\omega_\text{s} = 50$ is illustrated
```python
k = np.arange(-100, 100)
w_s = 50
xs = ideal_sampling(x, k, w_s)
y = ideal_reconstruction(xs, k, w_s)
plot_signals(xs, y, w_s, k)
```
Then the case of critical sampling $\omega_\text{s} = 2 \cdot \omega_0$ with $\omega_\text{s} = 10$ is illustrated
```python
w_s = 10
xs = ideal_sampling(x, k, w_s)
y = ideal_reconstruction(xs, k, w_s)
plot_signals(xs, y, w_s, k)
```
Finally the case of undersampling $\omega_\text{s} < 2 \cdot \omega_0$ with $\omega_\text{s} = 7$ is illustrated
```python
w_s = 7
xs = ideal_sampling(x, k, w_s)
y = ideal_reconstruction(xs, k, w_s)
plot_signals(xs, y, w_s, k)
```
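As a quick sanity check (an addition, not part of the original notebook): with $\omega_\text{s} = 7$ and $\omega_0 = 5$, the cosine aliases to $|\omega_\text{s} - \omega_0| = 2$, so the reconstruction above should resemble a cosine of angular frequency 2.

```python
# Plot the aliased cosine for visual comparison with the reconstruction above.
w_alias = abs(w_s - w_0)
sym.plot(sym.cos(w_alias * t), (t, 0, 5), xlabel='$t$', ylabel=r'$\cos(2 t)$');
```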
**Exercise**
* Derive the spectrum of the reconstructed signal for the sampling of $x = \cos(\omega_0 t)$ with sampling frequency $\omega_s$ by computing
* the spectrum $X(j \omega)$ of the continuous signal
* the spectrum $X_\text{s}(j \omega)$ of the sampled signal
* the spectrum $Y(j \omega)$ of the reconstructed signal for the case of over-, critial- and undersampling
* the reconstructed signal $y(t)$ for the case of over-, critial- and undersampling
* Reevaluate above example with $x(t) = \text{rect}(t - \frac{3}{2})$.
* Hint: Define the signal by `x = sym.Heaviside(t-1) - sym.Heaviside(t-2)`
* Is a perfect reconstruction possible? If not, why?
**Copyright**
The notebooks are provided as [Open Educational Resource](https://de.wikipedia.org/wiki/Open_Educational_Resources). Feel free to use the notebooks for your own educational purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Lecture Notes on Signals and Systems* by Sascha Spors.
Dataset metadata for this notebook: hexsha 82811ae674bf9706b2372fce12bbd06a00ae9ef8; size 111,996; ext ipynb; lang Jupyter Notebook.
max_stars: path sampling/ideal.ipynb, repo swchao/signalsAndSystemsLecture, head 7f135d091499e1d3d635bac6ddf22adee15454f8, licenses ["MIT"], stars 3 (2019-01-27T12:39:27.000Z to 2022-03-15T10:26:12.000Z).
max_issues: path sampling/ideal.ipynb, repo xushoucai/signals-and-systems-lecture, head 30dbbf9226d93b454639955f5462d57546a921c5, licenses ["MIT"]; issue counts and dates null.
max_forks: path sampling/ideal.ipynb, repo xushoucai/signals-and-systems-lecture, head 30dbbf9226d93b454639955f5462d57546a921c5, licenses ["MIT"], forks 2 (2020-09-18T06:26:48.000Z to 2021-12-10T06:11:45.000Z).
avg_line_length 249.434298; max_line_length 18,674; alphanum_fraction 0.89155; converted true; num_tokens 3,959.
lm_name Qwen/Qwen-72B; lm_label "1. YES 2. YES"; lm_q1_score 0.752013; lm_q2_score 0.740174; lm_q1q2_score 0.55662; text_lang __label__eng_Latn (conf 0.990483); label 0.131545.
```python
import os
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams['mathtext.fontset'] = 'stix'
```
# Calculate $\kappa$ sampled from the first training
In the first training, we let 200 independent LSTMs predict 200 trajectories of 200 ns each. Since we are using the LSTM as a generative model, we could also train just one LSTM and use it to generate 200 predictions, starting from either the same initial condition or different initial conditions.
Data location: `./Output/`
```python
output_dir='./Output'
kappa_list=[]
for i in range(200):
pred_dir=os.path.join(output_dir, '{}/prediction.npy'.format(i))
prediction=np.load(pred_dir)
N0=len(np.where(prediction<=15)[0])
N1=len(np.where(prediction>=16)[0])
kappa=N0/N1
kappa_list.append(kappa)
kappa_arr=np.array(kappa_list)
```
Plot distribution of $\kappa$
```python
# Plot distribution
hist = np.histogram( kappa_arr, bins=50 )
prob = hist[0].T
mids = 0.5*(hist[1][1:]+hist[1][:-1])
fig, ax = plt.subplots(figsize=(5,4))
ax.set_title('Distribution', size=20)
ax.plot(mids, prob)
ax.tick_params(axis='both', which='both', direction='in', labelsize=14)
ax.set_xlabel('$\kappa$', size=16)
ax.set_ylabel('Counts', size=16)
ax.set_yscale('log')
plt.show()
```
# Determine $\Delta\lambda$
Following the reference, we want to solve the following equation for $\Delta\lambda$
\begin{align}
\bar{s}^{(j)}_2&=\sum_{\Gamma}P^{(2)}_{\Gamma}s^{(j)}_{\Gamma} \nonumber \\
&=\frac{\sum_{k\in\Omega} s^{(j)}_k e^{-\Delta\lambda_j s^{(j)}_k} }{\sum_{k\in\Omega} e^{-\Delta\lambda_j s^{(j)}_k}} \\
&=f(\Delta\lambda)
\label{eq:lambda_solver}
\end{align}
To determine the $\Delta\lambda$ value, we can calculate the above equation and plot it versus $\Delta\lambda$, and find $\Delta\lambda=\Delta\lambda_{\ast}$ which gives
\begin{align}
\bar{s}^{(j)}_2=f(\Delta\lambda_{\ast})=s^{\rm target}
\end{align}
### $s=\kappa$
```python
def f(lm):
return np.sum(kappa_arr*np.exp(-lm*kappa_arr))/np.sum(np.exp(-lm*kappa_arr))
lm_arr = np.linspace(0,5)
f_arr = [f(lm_i) for lm_i in lm_arr]
fig, ax=plt.subplots(figsize=(5,3))
ax.plot(lm_arr, f_arr, label='$\kappa_f$')
ax.plot(lm_arr, [1]*len(lm_arr), '--', label='$\kappa^{\mathrm{target}}$')
ax.tick_params(axis='both', which='both', direction='in', labelsize=14)
ax.set_xlabel('$\lambda$', size=16)
ax.set_ylabel('$f(\lambda)$', size=16)
ax.legend(fontsize=16)
plt.show()
lm=0.317
print( 'f({:.3f}) = {:.3f}'.format(lm, f(lm)) )
```
Let's see whether selecting 10 predictions to build the subset is enough.
```python
lm_ast=0.317 # Delta_lambda we used for bias sampling
p=np.exp(-lm_ast*(kappa_arr))
p/=np.sum(p)
subset_mean_arr = []
for i in range(200):
idx = np.random.choice(len(kappa_arr), 10, p=p)
selected = kappa_arr[idx]
mean=np.mean(selected)
subset_mean_arr.append(mean)
fig, ax = plt.subplots(figsize=(6,5), nrows=1, ncols=1)
ax.plot(subset_mean_arr)
ax.plot(np.arange(len(subset_mean_arr)), [1.0]*len(subset_mean_arr), label="constraint $\kappa$")
ax.tick_params(axis='both', which='both', direction='in', labelsize=16)
ax.set_xlabel('indices', size=16)
ax.set_ylabel('$\langle\kappa\\rangle$', size=16)
ax.set_ylim(0.0,3.0)
plt.show()
```
So we will constrain our $\kappa$ to 1 with a standard error of 0.081. Even though the test above suggests that a subset size of 10 is sufficient, there is still some variance in the constrained mean. Therefore, we will also constrain the standard error of $\kappa$ in the subset.
```python
lm_ast=0.317
p=np.exp(-lm_ast*(kappa_arr))
p/=np.sum(p)
mean=np.inf
stdv=np.inf
while abs(mean-1)>0.01 or abs(stdv-0.09)>0.01:
idx = np.random.choice(len(kappa_arr), 10, p=p)
selected = kappa_arr[idx]
mean=np.mean(selected)
stdv=np.std(selected)/np.sqrt(len(selected))
print( 'mean of selected sample = {:.3f}'.format(np.mean(selected)) )
print( 'Standard error stderr[selected sample] = {:.3f}'.format(np.std(selected)/np.sqrt(len(selected))) )
```
mean of selected sample = 1.010
Standard error stderr[selected sample] = 0.085
Concatenate the subset into a single trajectory; this concatenated trajectory is then used later to re-train a new LSTM.
# Concatenate subset as a new training set
```python
conc=[]
output_dir='./Output'
for i in idx:
pred_dir=os.path.join(output_dir, '{}/prediction.npy'.format(i))
prediction=np.load(pred_dir)
N0=len(np.where(prediction<=15)[0])
N1=len(np.where(prediction>=16)[0])
kappa=N0/N1
print(kappa)
conc.extend(prediction)
conc = np.array(conc)
```
0.8015989190406486
0.7965416573096789
1.7318112591601047
0.8249673787081055
0.8978032926887128
1.0385489608496672
0.9956395276321238
0.8256337231061333
0.9929648341355016
1.1913725662068437
```python
N0=len(np.where(conc<=15)[0])
N1=len(np.where(conc>=16)[0])
kappa_conc = N0/N1
print('kappa_conc:{:.3f}'.format(kappa_conc))
```
kappa_conc:0.980
|
dc88489b3731815ae6bed99795615000d172525f
| 73,212 |
ipynb
|
Jupyter Notebook
|
path_sampling_kappa.ipynb
|
tiwarylab/ps-LSTM
|
2b9a7b825a2236abf279cd0e5f8b522e2c780dfa
|
[
"MIT"
] | 2 |
2022-03-02T12:56:22.000Z
|
2022-03-02T21:13:25.000Z
|
path_sampling_kappa.ipynb
|
tiwarylab/ps-LSTM
|
2b9a7b825a2236abf279cd0e5f8b522e2c780dfa
|
[
"MIT"
] | null | null | null |
path_sampling_kappa.ipynb
|
tiwarylab/ps-LSTM
|
2b9a7b825a2236abf279cd0e5f8b522e2c780dfa
|
[
"MIT"
] | null | null | null | 198.406504 | 43,204 | 0.911107 | true | 1,543 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.835484 | 0.763484 | 0.637878 |
__label__eng_Latn
| 0.670131 | 0.320335 |
# Prospect Theory and Cumulative Prospect Theory Agent Demo
The PTAgent and CPTAgent classes reproduce patterns of choice behavior described by Kahneman & Tversky's survey data in their seminal papers on Prospect Theory and Cumulative Prospect Theory. These classes express valuations of single lottery inputs, or preferences between two lottery inputs. To describe these agent classes more explicitly, we define the following:
1. $(x_1, p_1; \cdots; x_n, p_n)$: a lottery offering outcome $x_1$ with probability $p_1$, ..., outcome $x_n$ with probability $p_n$.
2. $v(x)$: the internal representation of the value of an outcome $x$ to an instance of a PTAgent.
3. $\pi(p)$: the internal representation of a probability $p$ to an instance of a PTAgent.
4. $V(x_1, p_1; \cdots; x_n, p_n)$: a lottery valuation function.
#### **Prospect Theory Agent**
The PTAgent class reflects the lottery valuation function of Prospect Theory described in Kahneman & Tversky (1979). Generally, the lottery valuation function operates as follows:
$$V(x_1, p_1; \dots; x_n, p_n) = v(x_1) \times \pi(p_1) + \cdots + v(x_n) \times \pi(p_n) \tag{1a}$$
However, under certain conditions the lottery valuation function operates under a different formulation. These conditions are:
1. When the lottery contains exactly two non-zero outcomes and one zero outcome relative to a reference point, with each of these outcomes occurring with non-zero probability; i.e., $p_1 + p_2 + p_3 = 1$ for $x_1, x_2 \in \lbrace x | x \ne 0 \rbrace$ and $x_3=0$.
2. When the outcomes are both positive relative to a reference point or both negative relative to a reference point. Explicitly, $x_2 < x_1 < 0$ or $x_2 > x_1 > 0$.
When a lottery satisfies the conditions above, the lottery valuation function becomes:
$$V(x_1, p_1; x_2, p_2) = v(x_1) + \pi(p_2)\left(v(x_2) - v(x_1)\right) \tag{1b}$$
Since the original account of Prospect Theory does not explicitly specify the value function or the weighting function, the value function used here is the one proposed in Tversky & Kahneman (1992):
$$v(x) =
\begin{cases}
x^\alpha & \text{if } x \ge 0\\
-\lambda (-x)^\beta & \text{if } x \lt 0
\end{cases} \tag{2}$$
The weighting function uses a form described here: https://sites.duke.edu/econ206_01_s2011/files/2011/04/39b-Prospect-Theory-Kahnemann-Tversky_final2-1.pdf.
$$\pi(p) = \exp(-(-\ln(p))^\gamma) \tag{3}$$
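To make equations (1a)–(3) concrete, here is a minimal sketch of the value function, the weighting function, and the general valuation rule. This is only an illustration with the parameter values quoted later in this notebook, not the actual `PTAgent` implementation (which lives in `cpt_agent.py` and also handles the special case of equation 1b); the function names are placeholders.

```python
import numpy as np

def v(x, alpha=0.88, beta=0.88, lambda_=2.25):
    # Value function, equation (2): concave for gains, convex and steeper for losses
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, np.abs(x)**alpha, -lambda_ * np.abs(x)**beta)

def pi(p, gamma=0.61):
    # Weighting function, equation (3): overweights small probabilities
    p = np.asarray(p, dtype=float)
    return np.exp(-(-np.log(p))**gamma)

def pt_valuation(outcomes, probabilities):
    # Equation (1a): V = sum_i v(x_i) * pi(p_i)
    return float(np.sum(v(outcomes) * pi(probabilities)))

# Valuation of the lottery (2500, 0.33; 2400, 0.66; 0, 0.01) used in Problem 1 below
print(pt_valuation([2500, 2400, 0], [0.33, 0.66, 0.01]))
```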
#### **Cumulative Prospect Theory Agent**
The CPTAgent class reflects the lottery valuation function, value function, and weighting function described in Tversky & Kahneman (1992). The CPTAgent class also incorporates capacities as described in this same paper. For Cumulative Prospect Theory, outcomes and associated probabilities include the attribute of *valence*, which reflects whether the realization of an outcome would increase or decrease value relative to the agent's reference point.
The value function for positive and negative outcomes is shown in equation 2 above.
For probabilities $p$ associated with positive valence outcomes, the *capacity* function is expressed as:
$$w^{+}(p) = \frac{p^\gamma}{\left(p^\gamma+(1-p)^\gamma \right)^{1/ \gamma}} \tag{4a}$$
For probabilities $p$ associated with negative valence outcomes, the capacity function is expressed similarly as:
$$w^{-}(p) = \frac{p^\delta}{\left(p^\delta+(1-p)^\delta \right)^{1/ \delta}} \tag{4b}$$
In order to compute a weight for the $i^{th}$ outcome with positive valence, a difference of cumulative sums is computed as follows:
$$\pi^{+}(p_i) = w^{+}(p_i + \cdots + p_n) - w^{+}(p_{i+1} + \cdots + p_n), \; 0 \le x_i < \cdots < x_n \tag{5a}$$
Similarly, computing a weight for the $j^{th}$ outcome with negative valence:
$$\pi^{-}(p_j) = w^{-}(p_j + \cdots + p_m) - w^{-}(p_{j+1} + \cdots + p_m), \; 0 \gt x_j > \cdots > x_m \tag{5b}$$
Lottery valuations for Cumulative Prospect Theory are then computed in a similar manner as Prospect Theory (equation 1a).
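A small sketch of how the cumulative weights of equations (4a)–(5b) can be computed for the gain part of a lottery (an illustration only, not the CPTAgent code; it assumes the gains have already been separated out and sorted in ascending order, and uses $\gamma=\delta=0.61$ as in the agent instances shown in this notebook):

```python
import numpy as np

def w(p, c=0.61):
    # Capacity, equations (4a)/(4b); c plays the role of gamma (gains) or delta (losses)
    p = np.asarray(p, dtype=float)
    return p**c / (p**c + (1 - p)**c) ** (1 / c)

def decision_weights(probs, c=0.61):
    # Equations (5a)/(5b): differences of capacities of tail sums,
    # with probs aligned to outcomes ordered from least to most extreme
    probs = np.asarray(probs, dtype=float)
    tails = np.cumsum(probs[::-1])[::-1]      # p_i + ... + p_n
    shifted = np.append(tails[1:], 0.0)       # p_{i+1} + ... + p_n
    return w(tails, c) - w(shifted, c)

# Gain part of a lottery: outcomes 2400 < 2500 with probabilities 0.66 and 0.33
pi_plus = decision_weights([0.66, 0.33])
print(pi_plus, pi_plus.sum())   # the weights sum to w(0.99) < 1 (subcertainty)
```

The loss part is treated the same way with $w^{-}$, after ordering the losses from least to most extreme; the CPT valuation is then the sum of decision weights times $v$ over both parts.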
---
## Choice Behavior for Lotteries
#### **Normative Choice Behavior**
Specification of the following parameters leads to an agent that chooses lotteries according to Expected Utility Theory:
- $\alpha = \beta = 1$
- $\gamma = \delta = 1$
- $\lambda = 1$
#### **Descriptive Choice Behavior**
When $\alpha, \beta, \gamma, \delta$ take values on the interval $(0, 1)$, and when $\lambda > 1$, lottery valuation functions with constituent value and weighting functions show patterns of choice that better approximate empirical choice behavior than those predicted by normative choice behavior.
#### **Notation**
To illustrate the functionality of the PTAgent and CPTAgent classes, we denote an outcome and its associated probability as a tuple $(G_1, p_1)$ or $(L_1, p_1)$, where $G_1$ denotes a gain and $L_1$ a loss. A lottery is a set of gains and/or losses with associated probabilities: $[(L_1, p_1), \cdots, (G_n, p_n)]$, where $\sum p_i = 1$. A preference between two prospects, for example "A is preferred to B", is denoted as $A > B$.
The following instance of PTAgent uses function parameters estimated in Tversky & Kahneman (1992). These parameters are sufficient to replicate the observed modal choices between prospects in Kahneman & Tversky (1979) and Tversky & Kahneman (1992).
---
### Decision Anomalies
The demonstrations below show instances of the PTAgent class exhibiting the same choice anomalies discussed in Kahneman & Tversky's seminal paper on Prospect Theory (1979).
```python
from cpt_agent import PTAgent
```
```python
pt = PTAgent(alpha=0.88, gamma=0.61, lambda_=2.25)
pt
```
PTAgent(
alpha=0.88,
beta=0.88,
gamma=0.61,
delta=0.61,
lambda_=2.25,
)
@ 2021-12-22 22:16:31.332939
### The certainty effect
The certainty effect demonstrates that reducing the probability of outcomes from certainty has larger effects on preferences than equivalent reductions from risky (i.e., non-certain) outcomes. Problems 1 and 2 illustrate this effect for absolute reductions in probabilities and problems 3 and 4 show this effect for relative reductions in probabilities.
- Problem 1: $[(G_1, p_1), (G_2, p_2), (0, p_3)] < [(G_2, 1)]$
- Problem 2: $[(G_1, p_1), (G_2, 0), (0, p_3)] > [(G_2, 1-p_2)]$
Subtracting probability $p_2$ of outcome $G_2$ from both options in problem 1 leads to a preference reversal in problem 2.
```python
# Problem 1
lottery_1A = {'outcome':[2500, 2400, 0], 'probability':[0.33, 0.66, 0.01]}
lottery_1B = {'outcome':[2400], 'probability':[1]}
pt.choose(lottery_1A, lottery_1B)
```
{'lottery2': {'outcome': [2400], 'probability': [1]}}
```python
# Problem 2
lottery_2C = {'outcome':[2500, 0], 'probability':[0.33, 0.67]}
lottery_2D = {'outcome':[2400, 0], 'probability':[0.34, 0.66]}
pt.choose(lottery_2C, lottery_2D)
```
{'lottery1': {'outcome': [2500, 0], 'probability': [0.33, 0.67]}}
Scaling probabilities of risky outcome $G_1$ and certain outcome $G_2$ by $p'$ in problem 3 leads to a preference reversal in problem 4. This preference reversal violates the substitution axiom of Expected Utility Theory.
- Problem 3: $[(G_1, p_1), (0, 1-p_1)] < [(G_2, 1)]$
- Problem 4: $\left[\left(G_1, p_1\cdot p'\right), \left(0, 1-p_1 p'\right)\right] > [(G_2, p'), (0, 1-p')]$
```python
# Problem 3
lottery_3A = {'outcome':[4000, 0], 'probability':[0.8, 0.2]}
lottery_3B = {'outcome':[3000], 'probability':[1]}
pt.choose(lottery_3A, lottery_3B)
```
{'lottery2': {'outcome': [3000], 'probability': [1]}}
```python
# Problem 4
lottery_4C = {'outcome':[4000, 0], 'probability':[0.2, 0.8]}
lottery_4D = {'outcome':[3000, 0], 'probability':[0.25, 0.75]}
pt.choose(lottery_4C, lottery_4D)
```
{'lottery1': {'outcome': [4000, 0], 'probability': [0.2, 0.8]}}
### The reflection effect
The reflection effect demonstrates that altering outcomes by recasting prospects from the domain of gains to losses will correspondingly alter decision behavior from risk-aversion to risk-seeking. Since the reflection effect highlights preferences characterized as risk-seeking in the loss domain, the effect disqualifies risk-aversion as a general principle for explaining the certainty effect above.
- Problem 3: $[(G_1, p_1), (0, 1-p_1)] < [(G_2, 1)]$
- Problem 3': $[(-G_1, p_1), (0, 1-p_1)] > [(-G_2, 1)]$
```python
# Problem 3'
lottery_3A_, lottery_3B_ = lottery_3A.copy(), lottery_3B.copy()
lottery_3A_.update({'outcome':[-g for g in lottery_3A_['outcome']]})
lottery_3B_.update({'outcome':[-g for g in lottery_3B_['outcome']]})
pt.choose(lottery_3A_, lottery_3B_)
```
{'lottery1': {'outcome': [-4000, 0], 'probability': [0.8, 0.2]}}
- Problem 4: $\left[\left(G_1, p_1\cdot p^{*}\right), \left(0, 1-p_1 p^{*}\right)\right] > [(G_2, p^{*}), (0, 1-p^{*})]$
- Problem 4': $\left[\left(-G_1, p_1\cdot p^{*}\right), \left(0, 1-p_1 p^{*}\right)\right] < [(-G_2, p^{*}), (0, 1-p^{*})]$
```python
# Problem 4'
lottery_4C_, lottery_4D_ = lottery_4C.copy(), lottery_4D.copy()
lottery_4C_.update({'outcome':[-g for g in lottery_4C_['outcome']]})
lottery_4D_.update({'outcome':[-g for g in lottery_4D_['outcome']]})
pt.choose(lottery_4C_, lottery_4D_)
```
{'lottery2': {'outcome': [-3000, 0], 'probability': [0.25, 0.75]}}
### Risk Seeking in Gains, Risk Aversion in Losses
In addition to violations of the substitution axiom, scaling the probabilities of lotteries so that outcomes become highly improbable can induce risk seeking in gains and risk aversion in losses. While these characteristics of choice behavior are not violations of normative theories of choice, they contrast with the more typical observations of risk aversion in gains and risk seeking in losses for outcomes that occur with higher likelihood. In the domain of gains, risk seeking for low-probability events seems to correspond to the popularity of state lotteries.
- Problem 7: $[(G_1, p_1), (0, 1-p_1)] < [(G_2, p_2), (0, 1-p_2)]$
- Problem 8: $\left[\left(G_1, p_1\cdot p'\right), \left(0, 1-p_1 p'\right)\right] > \left[\left(G_2, p_2\cdot p'\right), \left(0, 1-p_2 p'\right)\right]$
```python
# Problem 7
lottery_7A = {'outcome':[6000, 0], 'probability':[0.45, 0.55]}
lottery_7B = {'outcome':[3000, 0], 'probability':[0.9, 0.1]}
pt.choose(lottery_7A, lottery_7B)
```
{'lottery2': {'outcome': [3000, 0], 'probability': [0.9, 0.1]}}
```python
# Problem 8
lottery_8C = {'outcome':[6000, 0], 'probability':[0.001, 0.999]}
lottery_8D = {'outcome':[3000, 0], 'probability':[0.002, 0.998]}
pt.choose(lottery_8C, lottery_8D)
```
{'lottery1': {'outcome': [6000, 0], 'probability': [0.001, 0.999]}}
Just as Prospect Theory accounts for risk seeking in gains for low probability events, the theory also accounts for risk aversion in the domain of losses when outcomes occur very infrequently. Risk aversion in the domain of losses seems to match well with consumer purchase of insurance products.
```python
# Problem 7'
lottery_7A_, lottery_7B_ = lottery_7A.copy(), lottery_7B.copy()
lottery_7A_.update({'outcome':[-g for g in lottery_7A_['outcome']]})
lottery_7B_.update({'outcome':[-g for g in lottery_7B_['outcome']]})
pt.choose(lottery_7A_, lottery_7B_)
```
{'lottery1': {'outcome': [-6000, 0], 'probability': [0.45, 0.55]}}
```python
# Problem 8'
lottery_8C_, lottery_8D_ = lottery_8C.copy(), lottery_8D.copy()
lottery_8C_.update({'outcome':[-g for g in lottery_8C_['outcome']]})
lottery_8D_.update({'outcome':[-g for g in lottery_8D_['outcome']]})
pt.choose(lottery_8C_, lottery_8D_)
```
{'lottery2': {'outcome': [-3000, 0], 'probability': [0.002, 0.998]}}
### Probabilistic Insurance
Kahneman & Tversky discuss another frequent choice anomaly called *probabilistic insurance*. To demonstrate choice behavior matching this anomaly, we first need to find a point of indifference reflecting the following relationship between current wealth $w$ and the cost of an insurance premium $y$ against a potential loss $x$ that occurs with probability $p$:
$$pu(w-x) + (1-p)u(w) = u(w-y) \tag{6}$$
That is, we are finding the premium $y$ for which a respondent is indifferent between purchasing the insurance against loss $x$ and simply incurring the loss $x$ with probability $p$. Kahneman & Tversky introduce an insurance product called probabilistic insurance whereby the consumer pays only a portion $r$ of the premium $y$. If the event leading to the loss actually occurs, the purchaser pays the remainder of the premium with probability $r$, or is returned the premium and suffers the loss entirely with probability $1-r$.
$$(1-r) p u(w-x) + rpu(w-y) + (1-p)u(w-ry) \tag{7}$$
Kahneman & Tversky show that according to Expected Utility Theory, probabilistic insurance is generally preferred to either a fully insured product $u(w-y)$ or a loss $x$ with probability $p$ (under the assumption of indifference described above). In surveys, however, respondents generally show a strong preference against probabilistic insurance.
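In the cells below, the value of `prob_loss` appears to have been chosen so that the two valuations in Problem 9 nearly coincide. As a hedged sketch of how such an indifference point could be found numerically with the already-constructed `pt` agent (the helper name and the use of `scipy` are assumptions added for illustration, not part of the original notebook):

```python
from scipy.optimize import brentq

def valuation_gap(p_loss, premium=1000, assets=6000, loss=5000):
    # Difference between the valuation of paying the premium for sure
    # and the valuation of facing the loss with probability p_loss
    sure = {'outcome': [assets - premium], 'probability': [1]}
    risky = {'outcome': [assets - loss, assets], 'probability': [p_loss, 1 - p_loss]}
    return pt.evaluate(sure) - pt.evaluate(risky)

# The gap is negative for tiny p_loss and positive for large p_loss, so a root
# exists; it should land close to the 0.06925 used below.
p_star = brentq(valuation_gap, 1e-4, 0.5)
print(p_star)
```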
```python
# Problem 9
premium = 1000
asset_am = 6000
loss = 5000
prob_loss = 0.06925
lottery_9A = {'outcome':[asset_am - premium], 'probability':[1]}
lottery_9B = {'outcome':[asset_am - loss, asset_am], 'probability':[prob_loss, 1-prob_loss]}
```
```python
pt.evaluate(lottery_9A)
```
1799.2586689124155
```python
pt.evaluate(lottery_9B)
```
1799.313595333693
```python
# Problem 10
r = 0.94
lottery_10A = {'outcome':[asset_am - loss, asset_am - premium, asset_am - r*premium],
'probability':[(1-r)*prob_loss, r*prob_loss, (1-prob_loss)]}
```
```python
pt.choose(lottery_9B, lottery_10A)
```
{'lottery1': {'outcome': [1000, 6000], 'probability': [0.06925, 0.93075]}}
### Cumulative Prospect Theory
Kahneman & Tversky modified their original account of Prospect Theory with Cumulative Prospect Theory (1992). The CPTAgent exhibits the same choice behavior shown by the PTAgent for each of the problems considered above. Additionally, the cumulative features of the weighting function better capture the choice patterns of respondents when considering probabilistic insurance; namely, the preference against probabilistic insurance seems to hold over a broader range of probabilities $r$.
```python
from cpt_agent import CPTAgent
```
```python
cpt = CPTAgent(alpha=0.88, gamma=0.61, lambda_=2.25)
cpt
```
CPTAgent(
alpha=0.88,
beta=0.88,
gamma=0.61,
delta=0.61,
lambda_=2.25,
)
@ 2021-12-22 22:17:44.818139
```python
# Problem 11
r = 0.73
lottery_10B = {'outcome':[asset_am - loss, asset_am - premium, asset_am - r*premium],
'probability':[(1-r)*prob_loss, r*prob_loss, (1-prob_loss)]}
cpt.choose(lottery_9A, lottery_10B)
```
{'lottery1': {'outcome': [5000], 'probability': [1]}}
|
d0c4a69a8845a8ea910dad591cb8a7363d3077a8
| 23,159 |
ipynb
|
Jupyter Notebook
|
Prospect_Theory_Agent_Demo.ipynb
|
cognitionswitch/decisionscience
|
ef6e3363dc87b682853c7e23be32d9224ee366b6
|
[
"MIT"
] | null | null | null |
Prospect_Theory_Agent_Demo.ipynb
|
cognitionswitch/decisionscience
|
ef6e3363dc87b682853c7e23be32d9224ee366b6
|
[
"MIT"
] | null | null | null |
Prospect_Theory_Agent_Demo.ipynb
|
cognitionswitch/decisionscience
|
ef6e3363dc87b682853c7e23be32d9224ee366b6
|
[
"MIT"
] | 1 |
2022-02-07T09:43:33.000Z
|
2022-02-07T09:43:33.000Z
| 33.515195 | 578 | 0.577227 | true | 4,530 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.887205 | 0.833325 | 0.739329 |
__label__eng_Latn
| 0.936046 | 0.556042 |
# Markov Chains
## State Transitions
The sequence of random variables $x_0, x_1, x_2, \dots, x_t, \dots$ represents a **stochastic process**.
When only the points in time at which significant *changes* occur are indexed, we speak of **discrete-time stochastic processes**.
We refer to the **state** as the condition or characteristic of a system at a given moment. It is assumed that there is a finite number of states, numbered $1, \dots, N$, that can describe the system at any moment. When the system changes from one state to another, a **transition** is said to occur.
\begin{equation}
P (x_{t+1} = s_{t+1} | x_t = s_t , x_{t−1} = s_{t−1}, \dots , x_1 = s_1, x_0 = s_0)\\
= P (x_{t+1} = s_{t+1} | x_t = s_t )
\end{equation}
The fact that the probability of the state at $t+1$ depends only on the state at $t$ is known as the **Markov property**.
Since each $x_t$ depends only on $x_{t-1}$ and influences only $x_{t+1}$, the process is said to be a **Markov chain**.
If, in addition, the number of states is countable and finite, we speak of a **finite-state Markov chain**.
An additional assumption is that the transition probability from a state $i$ to a state $j$ is constant in time, so that
\begin{equation}
P (x_{t+1} = j | x_t = i) = p_{ij}
\end{equation}
is independent of the time index $t$; this is known as the **stationarity property**.
The probability $p_{ij}$ is called a **transition probability**; since it is defined for all states of the system, $i,j = 1, 2, \dots, N$, these probabilities are usually collected in a **transition probability matrix**.
\begin{equation}
\mathbf{P} = \begin{bmatrix}
p_{11} & p_{12} & \dots & p_{1N} \\
p_{21} & p_{22} & \dots & p_{2N} \\
\vdots & \vdots & \ddots & \vdots \\
p_{N1} & p_{N2} & \dots & p_{NN} \\
\end{bmatrix}
\end{equation}
Moreover, the elements of each row form a discrete probability distribution, and therefore:
\begin{equation}
\displaystyle\sum_{j=1}^{N}p_{ij}=1
\end{equation}
A Markov process starts at some initial time $t = 0$. If the state $x_0$ is not known with certainty, then we must specify the probabilities with which the system is initially in each of the $N$ states. We denote this as $P(x_0 = i) = p_i(0)$ for each state $i$, and use the vector
\begin{equation}
\mathbf{p}(0) = [p_1(0), p_2(0), \dots, p_N(0)]
\end{equation}
to describe the **initial probability distribution** of the system.
## Example
In a certain place, days can be described as sunny, cloudy, or rainy. After observing historical patterns, the weather condition of the following day can be described according to the following transition probabilities:
\begin{equation}
\mathbf{P} = \begin{bmatrix}
0.7 & 0.1 & 0.2 \\
0.2 & 0.7 & 0.1 \\
0.5 & 0.2 & 0.3 \\
\end{bmatrix}
\end{equation}
The one-step probabilities can be illustrated by a **state transition diagram**, in which the states are represented by *nodes* and the possible transitions by *edges*.
```python
from graphviz import Digraph
# Create Digraph object
g = Digraph()
g.attr(rankdir='RL', label='\nDiagrama de transición', fontsize='16', fontname='Lato')
g.attr('node', fontsize='10', fontname='Lato')
g.attr('edge', fontsize='10', fontname='Lato')
# Add nodes
g.node('1', 'Soleado')
g.node('2', 'Nublado')
g.node('3', 'Lluvioso')
# Add edges
g.edge('1', '1', '0.7')
g.edge('1', '2', '0.1')
g.edge('1', '3', '0.2')
g.edge('2', '1', '0.2')
g.edge('2', '2', '0.7')
g.edge('2', '3', '0.1')
g.edge('3', '1', '0.5')
g.edge('3', '2', '0.2')
g.edge('3', '3', '0.3')
# Visualize the graph
g
```
If we want to know whether a given weather condition prevails for two days, a two-step **transition tree** can be used.
```python
g = Digraph()
g.attr(rankdir='LR', label='\nÁrbol de transición', fontsize='16', fontname='Lato')
g.attr('node', fontsize='10', fontname='Lato')
g.attr('edge', fontsize='10', fontname='Lato')
g.node('1a', 'Soleado')
g.node('1b', 'Soleado')
g.node('2b', 'Nublado')
g.node('3b', 'Lluvioso')
g.node('1c', 'Soleado')
g.node('2c', 'Nublado')
g.node('3c', 'Lluvioso')
g.node('1d', 'Soleado')
g.node('2d', 'Nublado')
g.node('3d', 'Lluvioso')
g.node('1e', 'Soleado')
g.node('2e', 'Nublado')
g.node('3e', 'Lluvioso')
g.edge('1a', '1b', '0.7')
g.edge('1a', '2b', '0.1')
g.edge('1a', '3b', '0.2')
g.edge('1b', '1c', '0.7')
g.edge('1b', '2c', '0.1')
g.edge('1b', '3c', '0.2')
g.edge('2b', '1d', '0.2')
g.edge('2b', '2d', '0.7')
g.edge('2b', '3d', '0.1')
g.edge('3b', '1e', '0.5')
g.edge('3b', '2e', '0.2')
g.edge('3b', '3e', '0.3')
g
```
## State Probabilities
Let $p_{ij}^{(n)}$ denote the probability of making a transition from state $i$ to state $j$ in $n$ steps.
The calculation of $p_{11}^{(2)}$ would then be
\begin{equation}
(0.7)(0.7)+(0.1)(0.2)+(0.2)(0.5)\\
=(p_{11})(p_{11})+(p_{12})(p_{21})+(p_{13})(p_{31})\\
= 0.61
\end{equation}
It is possible to generalize this: for any $i$ and $j$, $p_{ij}^{(2)}$ is the inner product of the $i$-th row of $\mathbf{P}$ with the $j$-th column of $\mathbf{P}$.
\begin{equation}
p_{ij}^{(2)} = \sum_{k=1}^{N}p_{ik}p_{kj}
\end{equation}
```python
import numpy as np
p = np.array([[0.7, 0.1, 0.2], [0.2, 0.7, 0.1], [0.5, 0.2, 0.3]])
p
```
array([[0.7, 0.1, 0.2],
[0.2, 0.7, 0.1],
[0.5, 0.2, 0.3]])
```python
p2 = p @ p
p2
```
array([[0.61, 0.18, 0.21],
[0.33, 0.53, 0.14],
[0.54, 0.25, 0.21]])
```python
from IPython.display import display, Markdown
display(Markdown(rf'$p^2_{{11}} =$ {p2[0, 0]:.2f}'))
```
$p^2_{11} =$ 0.61
This implies that the probability that it is sunny two days from now, given that today is sunny, equals $0.61$.
----
In numpy, `@` is the operator for matrix multiplication (the dot product), so `p @ p` is equivalent to `p.dot(p)`.
This expression can be generalized further, leading to what is known as the **Chapman-Kolmogorov equation**:
\begin{equation}
p_{ij}^{(n)} = \sum_{k=1}^{N}p_{ik}^{(n-m)}p_{kj}^{(m)}
\end{equation}
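As a quick numerical check of this relation (a small sketch added here for illustration, reusing the transition matrix defined above):
```python
import numpy as np

p = np.array([[0.7, 0.1, 0.2],
              [0.2, 0.7, 0.1],
              [0.5, 0.2, 0.3]])

# Chapman-Kolmogorov with n = 4 and m = 1: P^(4) = P^(3) P^(1)
p4 = np.linalg.matrix_power(p, 4)
print(np.allclose(p4, np.linalg.matrix_power(p, 3) @ p))   # True

# Distribution after 4 steps, starting from the initial distribution p(0)
p0 = np.array([1.0, 0.0, 0.0])    # the system starts on a sunny day
print(p0 @ p4)
```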
## Classification of states in a Markov process
A state $j$ is **reachable** from state $i$ if there is a sequence of transitions that starts in state $i$ and ends in state $j$. That is, $p^{(n)}_{ij} > 0$ for some $n$.
An **irreducible** Markov chain is one in which every state can be reached from every other state. That is, in an irreducible chain the process cannot become *trapped* so that from then on it can only make transitions within some subset of the states.
A set of states is said to be **closed** if no state outside the set can be reached from any state inside the set. This means that once the system enters any state in the set, it never leaves the set. In an irreducible chain, the states form a closed set and no proper subset of these states is closed.
A particularly interesting case arises when a closed set contains only one state. Such a state $i$ is called an **absorbing state**, and $p_{ii} = 1$. The system never leaves an absorbing state.
A state $i$ is a **transient state** if there is a transition from state $i$ to another state $j$ from which state $i$ can never be reached again. Therefore, whenever the process leaves a transient state, there is a positive probability that it will never occupy it again. Consequently, the long-run probability that the system is in a transient state is essentially zero because, eventually, the process leaves the state and never re-enters it.
A **recurrent state** is any state that is not transient. In a finite-state irreducible Markov chain, all states are recurrent. A special case of a recurrent state is an absorbing state, from which no other state can be reached.
A state is said to be **periodic** if it can be occupied only at times that differ from each other by multiples of some constant greater than $1$.
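These definitions can be checked numerically for small chains. In the following sketch (added here for illustration, using the weather matrix above), state $j$ is reachable from $i$ when the $(i,j)$ entry of some power $P^n$ is positive, the chain is irreducible when every pair of states is mutually reachable, and a state $i$ is absorbing when $p_{ii}=1$:
```python
import numpy as np

def reachability(P):
    # Entry (i, j) is True if state j can be reached from state i in at most N-1 steps
    N = P.shape[0]
    R = np.eye(N, dtype=bool)
    for n in range(1, N):
        R |= np.linalg.matrix_power(P, n) > 0
    return R

def is_irreducible(P):
    return bool(reachability(P).all())

def absorbing_states(P):
    return np.where(np.isclose(np.diag(P), 1.0))[0]

P = np.array([[0.7, 0.1, 0.2],
              [0.2, 0.7, 0.1],
              [0.5, 0.2, 0.3]])
print(is_irreducible(P))     # True: every state is reachable from every other state
print(absorbing_states(P))   # empty array: no absorbing states
```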
|
499179a15d77cc950ecaef70937490892854e71c
| 30,342 |
ipynb
|
Jupyter Notebook
|
docs/01cm_definiciones.ipynb
|
map0logo/tci-2019
|
64b83aadf88bf1d666dee6b94eb698a8b6125c14
|
[
"Unlicense"
] | 1 |
2022-03-27T04:04:33.000Z
|
2022-03-27T04:04:33.000Z
|
docs/01cm_definiciones.ipynb
|
map0logo/tci-2019
|
64b83aadf88bf1d666dee6b94eb698a8b6125c14
|
[
"Unlicense"
] | null | null | null |
docs/01cm_definiciones.ipynb
|
map0logo/tci-2019
|
64b83aadf88bf1d666dee6b94eb698a8b6125c14
|
[
"Unlicense"
] | null | null | null | 51.689949 | 486 | 0.529497 | true | 2,859 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.885631 | 0.891811 | 0.789816 |
__label__spa_Latn
| 0.983051 | 0.67334 |
<!-- dom:TITLE: Week 3 January 18-22: Building a Variational Monte Carlo program -->
# Week 3 January 18-22: Building a Variational Monte Carlo program
<!-- dom:AUTHOR: Morten Hjorth-Jensen Email morten.hjorth-jensen@fys.uio.no at Department of Physics and Center for Computing in Science Education, University of Oslo, Oslo, Norway & Department of Physics and Astronomy and Facility for Rare Ion Beams, Michigan State University, East Lansing, Michigan, USA -->
<!-- Author: -->
**Morten Hjorth-Jensen Email morten.hjorth-jensen@fys.uio.no**, Department of Physics and Center for Computing in Science Education, University of Oslo, Oslo, Norway and Department of Physics and Astronomy and Facility for Rare Ion Beams, Michigan State University, East Lansing, Michigan, USA
Date: **Jan 21, 2021**
Copyright 1999-2021, Morten Hjorth-Jensen Email morten.hjorth-jensen@fys.uio.no. Released under CC Attribution-NonCommercial 4.0 license
## Overview of week 3
**Topics.**
* Variational Monte Carlo methods, Metropolis Algorithm, statistics and Markov Chain theory
* How to structure the VMC code
## Setting up a VMC code
In setting up a C++ code for variational Monte Carlo calculations, we will use an excellent framework developed by a former Computational Physics student, Morten Ledum, now PhD student at the Hylleraas center for Quantum Chemistry.
The GitHub repository is at <https://github.com/mortele/variational-monte-carlo-fys4411>.
We will discuss this both in our forthcoming lectures and during our lab sessions.
## Introduction
### Structure and Aims
These notebooks serve the aim of linking traditional variational Monte Carlo (VMC) methods with recent progress on solving many-particle problems using Machine Learning algorithms.
### This notebook
In this notebook the aim is to give you an introduction as well as an
understanding of the basic elements that are needed in order to
develop a professional variational Monte Carlo code. We will focus on
a simple system of two particles in an oscillator trap (or
alternatively two fermions moving in a Coulombic potential). The particles can
interact via a repulsive or an attractive force. It is thus similar to the system described in project 1.
The advantage of these systems is that for two particles (boson or
fermions) we have analytical solutions for the eigenpairs of the
non-interacting case. Furthermore, for a two- or three-dimensional
system of two electrons moving in a harmonic oscillator trap, we have
[analytical solutions for the interacting case as well](https://iopscience.iop.org/article/10.1088/0305-4470/27/3/040/meta).
Having analytical eigenpairs is an invaluable feature that allows us
to assess the physical relevance of the trial wave functions, be
these either from a standard VMC procedure, from Boltzmann Machines or
from Shadow Wave functions.
In this notebook we start with the basics of a VMC calculation and
introduce concepts like Markov Chain Monte Carlo methods and the
Metropolis algorithm, importance sampling and Metropolis-Hastings
algorithm, resampling methods to obtain better estimates of the
statistical errors and minimization of the expectation values of the
energy and the variance. The latter is done in order to obtain the
best possible variational parameters. Furthermore it will define the
so-called **cost** function, a commonly encountered quantity in Machine
Learning algorithms. Minimizing the latter is the one which leads to
the determination of the optimal parameters in basically all Machine Learning algorithms.
This is a possible topic for project 2.
Topics like Markov Chain Monte Carlo and various resampling techniques
are also central to Machine Learning methods. Presenting them in the
context of VMC approaches leads hopefully to an easier starting point
for the understanding of these methods for those of you who will do the machine learning variant of project 2.
## Basic Quantum Monte Carlo
We start with the variational principle.
Given a hamiltonian $H$ and a trial wave function $\Psi_T(\boldsymbol{R};\boldsymbol{\alpha})$, the variational principle states that the expectation value of $\cal{E}[H]$, defined through
$$
\cal {E}[H] =
\frac{\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R};\boldsymbol{\alpha})H(\boldsymbol{R})\Psi_T(\boldsymbol{R};\boldsymbol{\alpha})}
{\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R};\boldsymbol{\alpha})\Psi_T(\boldsymbol{R};\boldsymbol{\alpha})},
$$
is an upper bound to the ground state energy $E_0$ of the hamiltonian $H$, that is
$$
E_0 \le {\cal E}[H].
$$
In general, the integrals involved in the calculation of various
expectation values are multi-dimensional ones. Traditional integration
methods such as Gauss-Legendre quadrature will not be adequate for say the
computation of the energy of a many-body system.
Here we have defined the vector $\boldsymbol{R} = [\boldsymbol{r}_1,\boldsymbol{r}_2,\dots,\boldsymbol{r}_n]$ as an array that contains the positions of all particles $n$ while the vector $\boldsymbol{\alpha} = [\alpha_1,\alpha_2,\dots,\alpha_m]$ contains the variational parameters of the model, $m$ in total.
The trial wave function can be expanded in the eigenstates $\Psi_i(\boldsymbol{R})$
of the hamiltonian since they form a complete set, viz.,
$$
\Psi_T(\boldsymbol{R};\boldsymbol{\alpha})=\sum_i a_i\Psi_i(\boldsymbol{R}),
$$
and assuming that the set of eigenfunctions are normalized, one obtains
$$
\frac{\sum_{nm}a^*_ma_n \int d\boldsymbol{R}\Psi^{\ast}_m(\boldsymbol{R})H(\boldsymbol{R})\Psi_n(\boldsymbol{R})}
{\sum_{nm}a^*_ma_n \int d\boldsymbol{R}\Psi^{\ast}_m(\boldsymbol{R})\Psi_n(\boldsymbol{R})} =\frac{\sum_{n}a^2_n E_n}
{\sum_{n}a^2_n} \ge E_0,
$$
where we used that $H(\boldsymbol{R})\Psi_n(\boldsymbol{R})=E_n\Psi_n(\boldsymbol{R})$.
In general, the integrals involved in the calculation of various expectation
values are multi-dimensional ones.
The variational principle yields the lowest energy of states with a given symmetry.
In most cases, a wave function has only small values in large parts of
configuration space, and a straightforward procedure which uses
homogeneously distributed random points in configuration space
will most likely lead to poor results. This may suggest that some kind
of importance sampling combined with e.g., the Metropolis algorithm
may be a more efficient way of obtaining the ground state energy.
The hope is then that those regions of configurations space where
the wave function assumes appreciable values are sampled more
efficiently.
The tedious part in a VMC calculation is the search for the variational
minimum. A good knowledge of the system is required in order to carry out
reasonable VMC calculations. This is not always the case,
and often VMC calculations
serve rather as the starting
point for so-called diffusion Monte Carlo calculations (DMC). Diffusion Monte Carlo is a way of
solving exactly the many-body Schroedinger equation by means of
a stochastic procedure. A good guess on the binding energy
and its wave function is however necessary.
A carefully performed VMC calculation can aid in this context.
The basic procedure of a Variational Monte Carlo calculations consists thus of
1. Construct first a trial wave function $\psi_T(\boldsymbol{R};\boldsymbol{\alpha})$, for a many-body system consisting of $n$ particles located at positions $\boldsymbol{R}=(\boldsymbol{R}_1,\dots ,\boldsymbol{R}_n)$. The trial wave function depends on $M$ variational parameters $\boldsymbol{\alpha}=(\alpha_1,\dots ,\alpha_M)$.
2. Then we evaluate the expectation value of the hamiltonian $H$
$$
\overline{E}[\boldsymbol{\alpha}]=\frac{\int d\boldsymbol{R}\Psi^{\ast}_{T}(\boldsymbol{R},\boldsymbol{\alpha})H(\boldsymbol{R})\Psi_{T}(\boldsymbol{R},\boldsymbol{\alpha})}
{\int d\boldsymbol{R}\Psi^{\ast}_{T}(\boldsymbol{R},\boldsymbol{\alpha})\Psi_{T}(\boldsymbol{R},\boldsymbol{\alpha})}.
$$
1. Thereafter we vary $\boldsymbol{\alpha}$ according to some minimization algorithm and return eventually to the first step if we are not satisfied with the results.
Here we have used the notation $\overline{E}$ to label the expectation value of the energy.
### Linking with standard statistical expressions for expectation values
In order to bring in the Monte Carlo machinery, we define first a likelihood distribution, or probability density distribution (PDF). Using our ansatz for the trial wave function $\psi_T(\boldsymbol{R};\boldsymbol{\alpha})$ we define a PDF
$$
P(\boldsymbol{R})= \frac{\left|\psi_T(\boldsymbol{R};\boldsymbol{\alpha})\right|^2}{\int \left|\psi_T(\boldsymbol{R};\boldsymbol{\alpha})\right|^2d\boldsymbol{R}}.
$$
This is our model for probability distribution function.
The approximation to the expectation value of the Hamiltonian is now
$$
\overline{E}[\boldsymbol{\alpha}] =
\frac{\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R};\boldsymbol{\alpha})H(\boldsymbol{R})\Psi_T(\boldsymbol{R};\boldsymbol{\alpha})}
{\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R};\boldsymbol{\alpha})\Psi_T(\boldsymbol{R};\boldsymbol{\alpha})}.
$$
We define a new quantity
<!-- Equation labels as ordinary links -->
<div id="eq:locale1"></div>
$$
E_L(\boldsymbol{R};\boldsymbol{\alpha})=\frac{1}{\psi_T(\boldsymbol{R};\boldsymbol{\alpha})}H\psi_T(\boldsymbol{R};\boldsymbol{\alpha}),
\label{eq:locale1} \tag{1}
$$
called the local energy, which, together with our trial PDF, yields a new expression (one which looks similar to the expressions for moments in statistics)
<!-- Equation labels as ordinary links -->
<div id="eq:vmc1"></div>
$$
\overline{E}[\boldsymbol{\alpha}]=\int P(\boldsymbol{R})E_L(\boldsymbol{R};\boldsymbol{\alpha}) d\boldsymbol{R}\approx \frac{1}{N}\sum_{i=1}^NE_L(\boldsymbol{R_i};\boldsymbol{\alpha})
\label{eq:vmc1} \tag{2}
$$
with $N$ being the number of Monte Carlo samples. The expression on the right-hand side follows from Bernoulli's law of large numbers, which states that the sample mean, in the limit $N\rightarrow \infty$, approaches the true mean.
The algorithm for performing a variational Monte Carlo calculation runs as follows:
* Initialisation: Fix the number of Monte Carlo steps. Choose an initial $\boldsymbol{R}$ and variational parameters $\alpha$ and calculate $\left|\psi_T^{\alpha}(\boldsymbol{R})\right|^2$.
* Initialise the energy and the variance and start the Monte Carlo calculation.
* Calculate a trial position $\boldsymbol{R}_p=\boldsymbol{R}+r*step$ where $r$ is a random variable $r \in [0,1]$.
* Metropolis algorithm to accept or reject this move $w = P(\boldsymbol{R}_p)/P(\boldsymbol{R})$.
* If the step is accepted, then we set $\boldsymbol{R}=\boldsymbol{R}_p$.
* Update averages
* Finish and compute final averages.
Observe that the jumping in space is governed by the variable *step*. This is called brute-force sampling and is normally replaced by what is called **importance sampling**, discussed in more detail below here.
### Simple example, the hydrogen atom
The radial Schroedinger equation for the hydrogen atom can be
written as (when we have gotten rid of the first derivative term in the kinetic energy and used $rR(r)=u(r)$)
$$
-\frac{\hbar^2}{2m}\frac{d^2 u(r)}{d r^2}-
\left(\frac{ke^2}{r}-\frac{\hbar^2l(l+1)}{2mr^2}\right)u(r)=Eu(r).
$$
We will specialize to the case with $l=0$ and end up with
$$
-\frac{\hbar^2}{2m}\frac{d^2 u(r)}{d r^2}-
\left(\frac{ke^2}{r}\right)u(r)=Eu(r).
$$
Then we introduce a dimensionless variable $\rho=r/a$ where $a$ is a constant with dimension length.
Multiplying with $ma^2/\hbar^2$ we can rewrite our equations as
$$
-\frac{1}{2}\frac{d^2 u(\rho)}{d \rho^2}-
\frac{ke^2ma}{\hbar^2}\frac{u(\rho)}{\rho}-\lambda u(\rho)=0.
$$
Since $a$ is just a parameter we choose to set
$$
\frac{ke^2ma}{\hbar^2}=1,
$$
which leads to $a=\hbar^2/mke^2$, better known as the Bohr radius with value $0.053$ nm. Scaling the equations this way not only renders our numerical treatment simpler, since we avoid carrying with us all physical parameters, but also gives us a **natural** length scale. We will see this again and again. In our discussions below with a harmonic oscillator trap, the **natural** length scale will be determined by the oscillator frequency, the mass of the particle and $\hbar$. We have also defined a dimensionless 'energy' $\lambda = Ema^2/\hbar^2$.
With the rescaled quantities, the ground state energy of the hydrogen atom is $1/2$.
The equation we want to solve is now defined by the Hamiltonian
$$
H=-\frac{1}{2}\frac{d^2 }{d \rho^2}-\frac{1}{\rho}.
$$
As trial wave function we peep now into the analytical solution for
the hydrogen atom and use (with $\alpha$ as a variational parameter)
$$
u_T^{\alpha}(\rho)=\alpha\rho \exp{(-\alpha\rho)}.
$$
Inserting this wave function into the expression for the
local energy $E_L$ gives
$$
E_L(\rho)=-\frac{1}{\rho}-
\frac{\alpha}{2}\left(\alpha-\frac{2}{\rho}\right).
$$
To have analytical local energies saves us from computing numerically
the second derivative, a feature which often increases our numerical
expenditure by a factor of three or more. Integrating up the local energy (recall to bring back the PDF in the integration) gives $\overline{E}[\boldsymbol{\alpha}]=\alpha(\alpha/2-1)$.
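Before moving to the harmonic oscillator example, we can verify these expressions symbolically. The following small sketch (added here for illustration; it is not part of the original programs below) uses sympy to recompute the local energy from the trial function and to confirm that $\overline{E}[\alpha]=\alpha(\alpha/2-1)$ is minimal, with value $-1/2$, at $\alpha=1$:

```python
import sympy as sp

rho, alpha = sp.symbols('rho alpha', positive=True)
u = alpha * rho * sp.exp(-alpha * rho)

# Local energy E_L = (1/u) * ( -1/2 u'' - u/rho )
EL = sp.simplify((-sp.Rational(1, 2) * sp.diff(u, rho, 2) - u / rho) / u)
print(sp.simplify(EL - (-1/rho - alpha/2 * (alpha - 2/rho))))   # 0: matches the text

# Expectation value with the (normalized) PDF |u|^2
norm = sp.integrate(u**2, (rho, 0, sp.oo))
Ebar = sp.simplify(sp.integrate(u**2 * EL, (rho, 0, sp.oo)) / norm)
print(Ebar)                                             # equals alpha*(alpha/2 - 1)
print(sp.solve(sp.diff(Ebar, alpha), alpha), Ebar.subs(alpha, 1))   # [1] -1/2
```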
### Second example, the harmonic oscillator in one dimension
We present here another well-known example, the harmonic oscillator in
one dimension for one particle. This will also serve the aim of
introducing our next model, namely that of interacting electrons in a
harmonic oscillator trap.
Here as well, we do have analytical solutions and the energy of the
ground state, with $\hbar=1$, is $\omega/2$, with $\omega$ being the
oscillator frequency. We use the following trial wave function
$$
\psi_T(x;\alpha) = \exp{-(\frac{1}{2}\alpha^2x^2)},
$$
which results in a local energy
$$
\frac{1}{2}\left(\alpha^2+x^2(1-\alpha^4)\right).
$$
We can compare our numerically calculated energies with the exact energy as a function of $\alpha$
$$
\overline{E}[\alpha] = \frac{1}{4}\left(\alpha^2+\frac{1}{\alpha^2}\right).
$$
Similarly, with the above ansatz, we can also compute the exact variance which reads
$$
\sigma^2[\alpha]=\frac{1}{4}\left(1+(1-\alpha^4)^2\frac{3}{4\alpha^4}\right)-\overline{E}.
$$
Our code for computing the energy of the ground state of the harmonic oscillator follows here. We start by defining directories where we store various outputs.
```python
# Common imports
import os
# Where to save the figures and data files
PROJECT_ROOT_DIR = "Results"
FIGURE_ID = "Results/FigureFiles"
DATA_ID = "Results/VMCHarmonic"
if not os.path.exists(PROJECT_ROOT_DIR):
os.mkdir(PROJECT_ROOT_DIR)
if not os.path.exists(FIGURE_ID):
os.makedirs(FIGURE_ID)
if not os.path.exists(DATA_ID):
os.makedirs(DATA_ID)
def image_path(fig_id):
return os.path.join(FIGURE_ID, fig_id)
def data_path(dat_id):
return os.path.join(DATA_ID, dat_id)
def save_fig(fig_id):
plt.savefig(image_path(fig_id) + ".png", format='png')
outfile = open(data_path("VMCHarmonic.dat"),'w')
```
We proceed with the implementation of the Monte Carlo algorithm but list first the ansatz for the wave function and the expression for the local energy
```python
%matplotlib inline
# VMC for the one-dimensional harmonic oscillator
# Brute force Metropolis, no importance sampling and no energy minimization
from math import exp, sqrt
from random import random, seed
import numpy as np
import matplotlib.pyplot as plt
#from numba import jit
from decimal import *
# Trial wave function for the Harmonic oscillator in one dimension
def WaveFunction(r,alpha):
return exp(-0.5*alpha*alpha*r*r)
# Local energy for the Harmonic oscillator in one dimension
def LocalEnergy(r,alpha):
return 0.5*r*r*(1-alpha**4) + 0.5*alpha*alpha
```
Note that in the Metropolis algorithm there is no need to compute the
trial wave function, mainly since we are just taking the ratio of two
exponentials. It is then, from a computational point of view, more convenient to compute the argument of the ratio and then evaluate the exponential. Here we have refrained from this purely for pedagogical reasons.
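A sketch of what that alternative would look like for the present trial function (an illustrative variant added here, not the code used below): compare the logarithm of a uniform random number with the argument of the ratio, which avoids evaluating very large or very small exponentials.

```python
from math import log
from random import random

# For psi_T = exp(-alpha^2 x^2 / 2), the Metropolis ratio |psi_new/psi_old|^2
# equals exp(-alpha^2 (x_new^2 - x_old^2)); we only ever need its argument.
def log_ratio(x_new, x_old, alpha):
    return -alpha * alpha * (x_new * x_new - x_old * x_old)

def accept_move(x_new, x_old, alpha):
    u = random()
    # u == 0 is accepted outright to avoid log(0); otherwise compare in log space
    return u == 0.0 or log(u) <= log_ratio(x_new, x_old, alpha)
```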
```python
# The Monte Carlo sampling with the Metropolis algo
# The jit decorator tells Numba to compile this function.
# The argument types will be inferred by Numba when the function is called.
#@jit
def MonteCarloSampling():
NumberMCcycles= 1000000
StepSize = 1.0
# positions
PositionOld = 0.0
PositionNew = 0.0
# seed for rng generator
seed()
# start variational parameter
alpha = 0.8
for ia in range(MaxVariations):
alpha += .05
AlphaValues[ia] = alpha
energy = energy2 = 0.0
#Initial position
PositionOld = StepSize * (random() - .5)
wfold = WaveFunction(PositionOld,alpha)
#Loop over MC MCcycles
for MCcycle in range(NumberMCcycles):
#Trial position
PositionNew = PositionOld + StepSize*(random() - .5)
wfnew = WaveFunction(PositionNew,alpha)
#Metropolis test to see whether we accept the move
if random() <= wfnew**2 / wfold**2:
PositionOld = PositionNew
wfold = wfnew
DeltaE = LocalEnergy(PositionOld,alpha)
energy += DeltaE
energy2 += DeltaE**2
#We calculate mean, variance and error
energy /= NumberMCcycles
energy2 /= NumberMCcycles
variance = energy2 - energy**2
error = sqrt(variance/NumberMCcycles)
Energies[ia] = energy
Variances[ia] = variance
outfile.write('%f %f %f %f \n' %(alpha,energy,variance,error))
return Energies, AlphaValues, Variances
```
Finally, the results are presented here with the exact energies and variances as well.
```python
#Here starts the main program with variable declarations
MaxVariations = 20
Energies = np.zeros((MaxVariations))
ExactEnergies = np.zeros((MaxVariations))
ExactVariance = np.zeros((MaxVariations))
Variances = np.zeros((MaxVariations))
AlphaValues = np.zeros(MaxVariations)
(Energies, AlphaValues, Variances) = MonteCarloSampling()
outfile.close()
ExactEnergies = 0.25*(AlphaValues*AlphaValues+1.0/(AlphaValues*AlphaValues))
ExactVariance = 0.25*(1.0+((1.0-AlphaValues**4)**2)*3.0/(4*(AlphaValues**4)))-ExactEnergies*ExactEnergies
#simple subplot
plt.subplot(2, 1, 1)
plt.plot(AlphaValues, Energies, 'o-',AlphaValues, ExactEnergies,'r-')
plt.title('Energy and variance')
plt.ylabel('Dimensionless energy')
plt.subplot(2, 1, 2)
plt.plot(AlphaValues, Variances, '.-',AlphaValues, ExactVariance,'r-')
plt.xlabel(r'$\alpha$', fontsize=15)
plt.ylabel('Variance')
save_fig("VMCHarmonic")
plt.show()
#nice printout with Pandas
import pandas as pd
from pandas import DataFrame
data ={'Alpha':AlphaValues, 'Energy':Energies,'Exact Energy':ExactEnergies,'Variance':Variances,'Exact Variance':ExactVariance,}
frame = pd.DataFrame(data)
print(frame)
```
For $\alpha=1$ we have the exact eigenpairs, as can be deduced from the table here. With $\omega=1$, the exact energy is $1/2$ a.u. with zero variance, as it should be. We see also that our computed variance follows the exact variance rather well.
Increasing the number of Monte Carlo cycles will improve our statistics (try to increase the number of Monte Carlo cycles).
The reason that the variance is exactly equal to zero when $\alpha=1$ is that we then have the exact wave function, and the action of the hamiltonian on the wave function
$$
H\psi = \mathrm{constant}\times \psi,
$$
yields just a constant. The integral which defines various
expectation values involving moments of the hamiltonian becomes then
$$
\langle H^n \rangle =
\frac{\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R})H^n(\boldsymbol{R})\Psi_T(\boldsymbol{R})}
{\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R})\Psi_T(\boldsymbol{R})}=
\mathrm{constant}\times\frac{\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R})\Psi_T(\boldsymbol{R})}
{\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R})\Psi_T(\boldsymbol{R})}=\mathrm{constant}.
$$
**This gives us an important piece of information: the exact wave function leads to zero variance!**
As we will see below, many practitioners perform a minimization on both the energy and the variance.
## The Metropolis algorithm
Until now we have not discussed the derivation of the Metropolis algorithm. We assume the reader has some familiarity with the mathematics of Markov chains.
The Metropolis algorithm, see [the original article](http://scitation.aip.org/content/aip/journal/jcp/21/6/10.1063/1.1699114), was invented by Metropolis et al.
and is often simply referred to as the Metropolis algorithm.
It is a method to sample a normalized probability
distribution by a stochastic process. We define $\mathbf{P}_i^{(n)}$ to
be the probability for finding the system in the state $i$ at step $n$.
The algorithm is then
* Sample a possible new state $j$ with some probability $T_{i\rightarrow j}$.
* Accept the new state $j$ with probability $A_{i \rightarrow j}$ and use it as the next sample. With probability $1-A_{i\rightarrow j}$ the move is rejected and the original state $i$ is used again as a sample.
We wish to derive the required properties of $T$ and $A$ such that
$\mathbf{P}_i^{(n\rightarrow \infty)} \rightarrow p_i$ so that starting
from any distribution, the method converges to the correct distribution.
Note that the description here is for a discrete probability distribution.
Replacing probabilities $p_i$ with expressions like $p(x_i)dx_i$ will
take all of these over to the corresponding continuum expressions.
The dynamical equation for $\mathbf{P}_i^{(n)}$ can be written directly from
the description above. The probability of being in the state $i$ at step $n$
is given by the probability of being in any state $j$ at the previous step,
and making an accepted transition to $i$ added to the probability of
being in the state $i$, making a transition to any state $j$ and
rejecting the move:
$$
\mathbf{P}^{(n)}_i = \sum_j \left [
\mathbf{P}^{(n-1)}_jT_{j\rightarrow i} A_{j\rightarrow i}
+\mathbf{P}^{(n-1)}_iT_{i\rightarrow j}\left ( 1- A_{i\rightarrow j} \right)
\right ] \,.
$$
Since the probability of making some transition must be 1,
$\sum_j T_{i\rightarrow j} = 1$, and the above equation becomes
$$
\mathbf{P}^{(n)}_i = \mathbf{P}^{(n-1)}_i +
\sum_j \left [
\mathbf{P}^{(n-1)}_jT_{j\rightarrow i} A_{j\rightarrow i}
-\mathbf{P}^{(n-1)}_iT_{i\rightarrow j}A_{i\rightarrow j}
\right ] \,.
$$
For large $n$ we require that $\mathbf{P}^{(n\rightarrow \infty)}_i = p_i$,
the desired probability distribution. Taking this limit, gives the
balance requirement
$$
\sum_j \left [
p_jT_{j\rightarrow i} A_{j\rightarrow i}
-p_iT_{i\rightarrow j}A_{i\rightarrow j}
\right ] = 0 \,.
$$
The balance requirement is very weak. Typically the much stronger detailed
balance requirement is enforced, that is rather than the sum being
set to zero, we set each term separately to zero and use this
to determine the acceptance probabilities. Rearranging, the result is
$$
\frac{ A_{j\rightarrow i}}{A_{i\rightarrow j}}
= \frac{p_iT_{i\rightarrow j}}{ p_jT_{j\rightarrow i}} \,.
$$
The Metropolis choice is to maximize the $A$ values, that is
$$
A_{j \rightarrow i} = \min \left ( 1,
\frac{p_iT_{i\rightarrow j}}{ p_jT_{j\rightarrow i}}\right ).
$$
Other choices are possible, but they all correspond to multiplying $A_{i\rightarrow j}$ and $A_{j\rightarrow i}$ by the same constant smaller than unity. (The penalty function method uses just such a factor to compensate for $p_i$ that are evaluated stochastically and are therefore noisy.)
Having chosen the acceptance probabilities, we have guaranteed that
if the $\mathbf{P}_i^{(n)}$ has equilibrated, that is if it is equal to $p_i$,
it will remain equilibrated. Next we need to find the circumstances for
convergence to equilibrium.
The dynamical equation can be written as
$$
\mathbf{P}^{(n)}_i = \sum_j M_{ij}\mathbf{P}^{(n-1)}_j
$$
with the matrix $M$ given by
$$
M_{ij} = \delta_{ij}\left [ 1 -\sum_k T_{i\rightarrow k} A_{i \rightarrow k}
\right ] + T_{j\rightarrow i} A_{j\rightarrow i} \,.
$$
Summing over $i$ shows that $\sum_i M_{ij} = 1$, and since
$\sum_k T_{i\rightarrow k} = 1$, and $A_{i \rightarrow k} \leq 1$, the
elements of the matrix satisfy $M_{ij} \geq 0$. The matrix $M$ is therefore
a stochastic matrix.
The Metropolis method is simply the power method for computing the
right eigenvector of $M$ with the largest magnitude eigenvalue.
By construction, the correct probability distribution is a right eigenvector
with eigenvalue 1. Therefore, for the Metropolis method to converge
to this result, we must show that $M$ has only one eigenvalue with this
magnitude, and all other eigenvalues are smaller.
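As a small numerical illustration of this statement (a sketch added here, with an arbitrary three-state target distribution and a uniform proposal), we can build $M$ from the Metropolis acceptances, check that the target $p$ is a stationary eigenvector with eigenvalue one, and watch repeated application of $M$ converge to it:

```python
import numpy as np

p = np.array([0.2, 0.3, 0.5])          # target distribution (an assumed example)
N = len(p)
T = np.full((N, N), 1.0 / N)           # uniform proposal T_{i->j}

# Metropolis acceptance A_{i->j} = min(1, p_j T_{j->i} / (p_i T_{i->j}))
A = np.minimum(1.0, (p[None, :] * T.T) / (p[:, None] * T))

# Build M_{ij}: probability of ending in state i given that the chain is in state j
M = np.zeros((N, N))
for j in range(N):
    for i in range(N):
        if i != j:
            M[i, j] = T[j, i] * A[j, i]
    M[j, j] = 1.0 - sum(T[j, k] * A[j, k] for k in range(N) if k != j)

print(M.sum(axis=0))                        # each column sums to one: stochastic matrix
print(np.allclose(M @ p, p))                # p is a right eigenvector with eigenvalue 1
P0 = np.array([1.0, 0.0, 0.0])              # arbitrary starting distribution
print(np.linalg.matrix_power(M, 50) @ P0)   # converges towards p
```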
## The system: two electrons in a harmonic oscillator trap in two dimensions
The Hamiltonian of the quantum dot is given by
$$
\hat{H} = \hat{H}_0 + \hat{V},
$$
where $\hat{H}_0$ is the many-body HO Hamiltonian, and $\hat{V}$ is the
inter-electron Coulomb interactions. In dimensionless units,
$$
\hat{V}= \sum_{i < j}^N \frac{1}{r_{ij}},
$$
with $r_{ij}=\sqrt{(\mathbf{r}_i - \mathbf{r}_j)^2}$.
This leads to the separable Hamiltonian, with the relative motion part given by ($r_{ij}=r$)
$$
\hat{H}_r=-\nabla^2_r + \frac{1}{4}\omega^2r^2+ \frac{1}{r},
$$
plus a standard Harmonic Oscillator problem for the center-of-mass motion.
This system has analytical solutions in two and three dimensions ([M. Taut 1993 and 1994](https://journals.aps.org/pra/abstract/10.1103/PhysRevA.48.3561)).
We want to perform a Variational Monte Carlo calculation of the ground state of two electrons in a quantum dot well with different oscillator energies, assuming total spin $S=0$.
Our trial wave function has the following form
<!-- Equation labels as ordinary links -->
<div id="eq:trial"></div>
$$
\begin{equation}
\psi_{T}(\boldsymbol{r}_1,\boldsymbol{r}_2) =
C\exp{\left(-\alpha_1\omega(r_1^2+r_2^2)/2\right)}
\exp{\left(\frac{r_{12}}{(1+\alpha_2 r_{12})}\right)},
\label{eq:trial} \tag{3}
\end{equation}
$$
where the $\alpha$s represent our variational parameters, two in this case.
Why does the trial function look like this? How did we get there?
**This will be one of our main motivations** for switching to Machine Learning later.
To find an ansatz for the correlated part of the wave function, it is
useful to rewrite the two-particle local energy in terms of the
relative and center-of-mass motion.
Let us denote the distance
between the two electrons as $r_{12}$. We omit the center-of-mass
motion since we are only interested in the case when $r_{12}
\rightarrow 0$. The contribution from the center-of-mass (CoM)
variable $\boldsymbol{R}_{\mathrm{CoM}}$ gives only a finite contribution. We
focus only on the terms that are relevant for $r_{12}$ and for three
dimensions.
The relevant local energy becomes then
$$
\lim_{r_{12} \rightarrow 0}E_L(R)= \frac{1}{{\cal R}_T(r_{12})}\left(2\frac{d^2}{dr_{ij}^2}+\frac{4}{r_{ij}}\frac{d}{dr_{ij}}+\frac{2}{r_{ij}}-\frac{l(l+1)}{r_{ij}^2}+2E \right){\cal R}_T(r_{12})
= 0.
$$
This leads to the so-called **cusp** condition
$$
\frac{d {\cal R}_T(r_{12})}{dr_{12}} = \frac{1}{2(l+1)} {\cal R}_T(r_{12})\qquad r_{12}\to 0.
$$
The above results in
$$
{\cal R}_T \propto \exp{(r_{ij}/2)},
$$
for anti-parallel spins and
$$
{\cal R}_T \propto \exp{(r_{ij}/4)},
$$
for parallel spins.
This is the so-called cusp condition for the relative motion, resulting in a minimal requirement
for the correlation part of the wave function.
For general systems containing more than say two electrons, we have this
condition for each electron pair $ij$.
### First code attempt for the two-electron case
First, as with the hydrogen case, we declare where to store files.
```python
# Common imports
import os
# Where to save the figures and data files
PROJECT_ROOT_DIR = "Results"
FIGURE_ID = "Results/FigureFiles"
DATA_ID = "Results/VMCQdotMetropolis"
if not os.path.exists(PROJECT_ROOT_DIR):
os.mkdir(PROJECT_ROOT_DIR)
if not os.path.exists(FIGURE_ID):
os.makedirs(FIGURE_ID)
if not os.path.exists(DATA_ID):
os.makedirs(DATA_ID)
def image_path(fig_id):
return os.path.join(FIGURE_ID, fig_id)
def data_path(dat_id):
return os.path.join(DATA_ID, dat_id)
def save_fig(fig_id):
plt.savefig(image_path(fig_id) + ".png", format='png')
outfile = open(data_path("VMCQdotMetropolis.dat"),'w')
```
Thereafter we set up the analytical expressions for the wave functions and the local energy
```python
# 2-electron VMC for quantum dot system in two dimensions
# Brute force Metropolis, no importance sampling and no energy minimization
from math import exp, sqrt
from random import random, seed
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
import sys
#from numba import jit
# Trial wave function for the 2-electron quantum dot in two dims
def WaveFunction(r,alpha,beta):
r1 = r[0,0]**2 + r[0,1]**2
r2 = r[1,0]**2 + r[1,1]**2
r12 = sqrt((r[0,0]-r[1,0])**2 + (r[0,1]-r[1,1])**2)
deno = r12/(1+beta*r12)
return exp(-0.5*alpha*(r1+r2)+deno)
# Local energy for the 2-electron quantum dot in two dims, using analytical local energy
def LocalEnergy(r,alpha,beta):
r1 = (r[0,0]**2 + r[0,1]**2)
r2 = (r[1,0]**2 + r[1,1]**2)
r12 = sqrt((r[0,0]-r[1,0])**2 + (r[0,1]-r[1,1])**2)
deno = 1.0/(1+beta*r12)
deno2 = deno*deno
return 0.5*(1-alpha*alpha)*(r1 + r2) +2.0*alpha + 1.0/r12+deno2*(alpha*r12-deno2+2*beta*deno-1.0/r12)
```
The Monte Carlo sampling without importance sampling is set up here.
```python
# The Monte Carlo sampling with the Metropolis algo
# The jit decorator tells Numba to compile this function.
# The argument types will be inferred by Numba when the function is called.
#@jit
def MonteCarloSampling():
NumberMCcycles= 10000
StepSize = 1.0
# positions
PositionOld = np.zeros((NumberParticles,Dimension), np.double)
PositionNew = np.zeros((NumberParticles,Dimension), np.double)
# seed for rng generator
seed()
# start variational parameter
alpha = 0.9
for ia in range(MaxVariations):
alpha += .025
AlphaValues[ia] = alpha
beta = 0.2
for jb in range(MaxVariations):
beta += .01
BetaValues[jb] = beta
energy = energy2 = 0.0
DeltaE = 0.0
#Initial position
for i in range(NumberParticles):
for j in range(Dimension):
PositionOld[i,j] = StepSize * (random() - .5)
wfold = WaveFunction(PositionOld,alpha,beta)
#Loop over MC MCcycles
for MCcycle in range(NumberMCcycles):
#Trial position moving one particle at the time
for i in range(NumberParticles):
for j in range(Dimension):
PositionNew[i,j] = PositionOld[i,j] + StepSize * (random() - .5)
wfnew = WaveFunction(PositionNew,alpha,beta)
#Metropolis test to see whether we accept the move
if random() < wfnew**2 / wfold**2:
for j in range(Dimension):
PositionOld[i,j] = PositionNew[i,j]
wfold = wfnew
DeltaE = LocalEnergy(PositionOld,alpha,beta)
energy += DeltaE
energy2 += DeltaE**2
#We calculate mean, variance and error ...
energy /= NumberMCcycles
energy2 /= NumberMCcycles
variance = energy2 - energy**2
error = sqrt(variance/NumberMCcycles)
Energies[ia,jb] = energy
Variances[ia,jb] = variance
outfile.write('%f %f %f %f %f\n' %(alpha,beta,energy,variance,error))
return Energies, Variances, AlphaValues, BetaValues
```
And finally comes the main part with the plots as well.
```python
#Here starts the main program with variable declarations
NumberParticles = 2
Dimension = 2
MaxVariations = 10
Energies = np.zeros((MaxVariations,MaxVariations))
Variances = np.zeros((MaxVariations,MaxVariations))
AlphaValues = np.zeros(MaxVariations)
BetaValues = np.zeros(MaxVariations)
(Energies, Variances, AlphaValues, BetaValues) = MonteCarloSampling()
outfile.close()
# Prepare for plots
fig = plt.figure()
ax = fig.gca(projection='3d')
# Plot the surface.
X, Y = np.meshgrid(AlphaValues, BetaValues)
surf = ax.plot_surface(X, Y, Energies,cmap=cm.coolwarm,linewidth=0, antialiased=False)
# Customize the z axis.
zmin = np.matrix(Energies).min()
zmax = np.matrix(Energies).max()
ax.set_zlim(zmin, zmax)
ax.set_xlabel(r'$\alpha$')
ax.set_ylabel(r'$\beta$')
ax.set_zlabel(r'$\langle E \rangle$')
ax.zaxis.set_major_locator(LinearLocator(10))
ax.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))
# Add a color bar which maps values to colors.
fig.colorbar(surf, shrink=0.5, aspect=5)
save_fig("QdotMetropolis")
plt.show()
```
|
a1f7e4404e2ae3aba3d6af02a79e39148334b2d8
| 113,161 |
ipynb
|
Jupyter Notebook
|
doc/pub/week2/ipynb/week2.ipynb
|
Schoyen/ComputationalPhysics2
|
9cf10ffb2557cc73c4e6bab060d53690ee39426f
|
[
"CC0-1.0"
] | 87 |
2015-01-21T08:29:56.000Z
|
2022-03-28T07:11:53.000Z
|
doc/pub/week2/ipynb/week2.ipynb
|
Schoyen/ComputationalPhysics2
|
9cf10ffb2557cc73c4e6bab060d53690ee39426f
|
[
"CC0-1.0"
] | 3 |
2020-01-18T10:43:38.000Z
|
2020-02-08T13:15:42.000Z
|
doc/pub/week2/ipynb/week2.ipynb
|
Schoyen/ComputationalPhysics2
|
9cf10ffb2557cc73c4e6bab060d53690ee39426f
|
[
"CC0-1.0"
] | 54 |
2015-02-09T10:02:00.000Z
|
2022-03-07T10:44:14.000Z
| 81.528098 | 41,156 | 0.796476 | true | 9,189 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.66888 | 0.853913 | 0.571165 |
__label__eng_Latn
| 0.976544 | 0.165338 |
```python
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
import sympy
%matplotlib inline
```
```python
df = pd.read_csv('../data/raw/cities.csv', index_col=['CityId'])
primes = list(sympy.primerange(0, max(df.index)))
df['prime'] = df.index.isin(primes).astype(int)
```
```python
df_prime = df[(df.index == 0) | (df['prime'] == 1)]
```
```python
def read_link(filename):
data = open(filename, 'r').read()
data = data.replace('\n', ' ')
data = np.fromstring(data, sep=' ', dtype=np.int32)
if len(data) != data[0] + 1:
raise Exception('Unrecognized format in %s' % filename)
return np.concatenate((data[1:], [0]))
```
```python
tour_data = read_link('prime.tour')
tour_data_2 = np.array([df_prime.index[x] for x in tour_data])
```
```python
tour_data_2
```
array([ 0, 5333, 97649, ..., 148853, 153911, 0])
```python
def read_submission(filename):
data = open(filename, 'r').read()
data = data.replace('Path', '')
data = data.replace('\n', ' ')
data = np.fromstring(data, sep=' ', dtype=np.int32)
return data
```
```python
best_tour = read_submission('submission.151557248.csv')
```
```python
best_tour.size
```
197770
```python
si = tour_data_2[2]
ei = tour_data_2[3]
bsi = np.where(best_tour==si)[0][0]
bei = np.where(best_tour==ei)[0][0]
bei - bsi
```
28
```python
c=0
for i in range(tour_data_2.size-1):
si = tour_data_2[i]
ei = tour_data_2[i+1]
bsi = np.where(best_tour==si)[0][0]
bei = np.where(best_tour==ei)[0][0]
if abs(bei - bsi) == 5:
c+=1
print(c)
```
594
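The `np.where` lookups inside the loop above make the scan quadratic in the tour length. A linear-time sketch (not part of the original notebook; it assumes the same `best_tour` and `tour_data_2` arrays, and note that city 0 appears twice in a closed tour, so its stored position is the last occurrence):

```python
# Build a lookup table once: city id -> position inside best_tour.
pos = np.empty(best_tour.max() + 1, dtype=np.int64)
pos[best_tour] = np.arange(best_tour.size)

c = 0
for i in range(tour_data_2.size - 1):
    si = tour_data_2[i]
    ei = tour_data_2[i + 1]
    if abs(pos[ei] - pos[si]) == 5:
        c += 1
print(c)
```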
|
6a3df7f265b6ab66c440deda6fd8e9c3e7fda375
| 4,126 |
ipynb
|
Jupyter Notebook
|
notebooks/Primes-add-from-best.ipynb
|
alexandrnikitin/kaggle-traveling-santa-2018-prime-paths
|
44a537ee3388d52dba5abffedd8f014820c8fd40
|
[
"MIT"
] | null | null | null |
notebooks/Primes-add-from-best.ipynb
|
alexandrnikitin/kaggle-traveling-santa-2018-prime-paths
|
44a537ee3388d52dba5abffedd8f014820c8fd40
|
[
"MIT"
] | null | null | null |
notebooks/Primes-add-from-best.ipynb
|
alexandrnikitin/kaggle-traveling-santa-2018-prime-paths
|
44a537ee3388d52dba5abffedd8f014820c8fd40
|
[
"MIT"
] | null | null | null | 20.733668 | 73 | 0.480368 | true | 530 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.795658 | 0.682574 | 0.543095 |
__label__eng_Latn
| 0.146335 | 0.100122 |
# From Second Quantization to Equation-of-Motion Coupled-Cluster using SymPy
## Table of contents
1. [Introduction](#Introduction)
2. [Second Quantization](#Second-Quantization)
3. [Normal product](#Normal-product)
4. [Contraction](#Contraction)
5. [Wicks theorem](#Wicks-theorem)
6. [Particle-Hole formalism](#Particle-Hole-formalism)
7. [Hartree-Fock](#Hartree-Fock)
8. [Baker-Campbell-Hausdorff](#Baker-Campbell-Hausdorff)
9. [Coupled-Cluster](#Coupled-Cluster)
10. [Equation-of-motion Coupled-Cluster](#Equation-of-motion-Coupled-Cluster)
### Introduction
We will have a hands-on tutorial on the derivation of the EOM-CCSD amplitudes and the one- and two-particle density matrices. I have developed a symbolic library using [SymPy](https://www.sympy.org/en/index.html) for deriving analytical expressions, which can be easily extended to any operator.
First, we will derive the fermionic algebra from first quantization and study its properties.
$\newcommand{\ket}[1]{\left|{#1}\right\rangle}
\newcommand{\bra}[1]{\left\langle{#1}\right|}
\newcommand{\braket}[2]{\left\langle{#1}\middle|{#2}\right\rangle}$
### Second Quantization
In this section we will derive the fermionic algebra in the second-quantization representation from first quantization. Once the fermionic algebra is established, we will derive the relations between its elements.
$$ \ket{k} = {a^{+}_{k}}\ket{vac} $$
The operator $a^{+}_{k}$ creates an electron on the vacuum state.
$$\phi_{k}(1) \longleftrightarrow {a^{+}_{k}}\ket{vac} $$
$\ket{vac}$ is an abstract vacuum state and $\braket{vac}{vac} = 1$
If there are two electrons the wavefunction in first quantization is
$$ \Phi(1,2) = \frac{1}{\sqrt{2}}
\begin{vmatrix}
\phi_{i}(1) & \phi_{k}(1) \\
\phi_{i}(2) & \phi_{k}(2)
\end{vmatrix} $$
The same in second quantization follows as (keeping normalization included)
$$\ket{ik} = {a^{+}_{i}a^{+}_{k}}\ket{vac} $$
Since $\Phi(1,2)$ is an antisymmetric function, the creation operators in second quantization should respect this antisymmetry.
$$\Phi(2,1) = - \Phi(1,2) \longleftrightarrow {a^{+}_{k}a^{+}_{i}}\ket{vac} = - {a^{+}_{i}a^{+}_{k}}\ket{vac} $$
One can see that the antisymmetry requirement is fulfilled by having the anticommutation relation between creation operators.
$$a^{+}_{k}a^{+}_{i} + a^{+}_{i}a^{+}_{k} = \{a^{+}_{k} , a^{+}_{i} \} = 0 \tag{QED 1} $$
Now we will repeat the same exercise for annihilation operators. Consider first a one-electron system obtained by creating an electron in the $i$th orbital using $a_{i}^{+}$. We can define the annihilation operator $a_{i}$, which acts on the former state and returns the system to $\ket{vac}$.
$$a_{i} a_{i}^{+} \ket{vac} = \ket{vac}$$
When there is no electron in orbital $i$, the result should be zero.
$$a_{i} \ket{vac} = 0$$
In order to establish the anticommutation of the annihilation operators we will use the anticommutation relation of the creation operators.
$$ a_{i} a_{k} \ket{ki} = \ket{vac} \ \ \ \& \ \ \ a_{k} a_{i} \ket{ik} = \ket{vac} = -a_{k} a_{i} \ket{ki}$$
Therefore,
$$ a_{i}a_{k} + a_{k}a_{i} = \{a_{k}, a_{i}\} = 0 \tag{QED 2}$$
Now we have two anticommutation relations and will derive the third, between creation and annihilation operators. Consider two orbitals $i \ \& \ k$, where $i\neq k$
$$a_{i}a_{k}^{+}\ket{i} = a_{i}\ket{ki} = - a_{i}\ket{ik} = - \ket{k}$$
Reversing the action,
$$a_{k}^{+} a_{i}\ket{i} = \ket{k}$$
Comparing both
$$a_{k}^{+} a_{i} + a_{i}a_{k}^{+} = \{ a_{i}, a_{k}^{+} \} = 0 $$
But when $i = k$, then
$$a_{i}^{+} a_{i} + a_{i}a_{i}^{+} = \{ a_{i}, a_{i}^{+} \} = 1 $$
Therefore the final expression which captures both situations is
$$a_{i}^{+} a_{k} + a_{k}a_{i}^{+} = \{ a_{i}^{+}, a_{k} \} = \delta_{ik} \tag{QED 3}$$
Last, we need to establish the adjoint relation between the annihilation and creation operators. Consider
From first quantization
$$\braket {\phi_{i}}{\phi_{k}} = \delta_{ik}$$
Using second quantization on right hand side where $\dagger$ indicates hermitian conjugate
$$\bra{vac}(a_{i}^{+})^{\dagger} a_{k}^{+}\ket{vac} = \delta_{ik}$$
It is only true if
$$(a_{i}^{+})^{\dagger} = a_{i} \tag{QED 4}$$
Hence, $a_{i}^{+} = a_{i}^{\dagger}$, which means the creation operator for the $i$th orbital is the Hermitian conjugate of the
annihilation operator for the same orbital. From now on we will use $\dagger$ instead of $+$ as the superscript for creation operators.
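As a quick sanity check of relations (QED 1)-(QED 3), the vacuum expectation values of the anticommutators can be evaluated with SymPy's built-in second-quantization module (also used later in this notebook); the expected results are noted in the comments.

```python
from sympy.physics.secondquant import F, Fd, wicks  # F = annihilation, Fd = creation
from sympy import symbols

p, q = symbols('p q')

# <vac| {a_p, a_q^+} |vac>  ->  KroneckerDelta(p, q)
display(wicks(F(p)*Fd(q) + Fd(q)*F(p), keep_only_fully_contracted=True))

# <vac| {a_p, a_q} |vac>  ->  0,   <vac| {a_p^+, a_q^+} |vac>  ->  0
display(wicks(F(p)*F(q) + F(q)*F(p), keep_only_fully_contracted=True))
display(wicks(Fd(p)*Fd(q) + Fd(q)*Fd(p), keep_only_fully_contracted=True))
```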
### Particle number operator
We successfully showed the transformation of the basis from first quantization to second quantization by constructing the fermionic algebra. Now we will do the same for operators. Let's look at the number operator for $N$ fermions in $N$ orthogonal orbitals. The wavefunction can be written as
$$\ket{\Psi_{N}} = a^{\dagger}_{N}a^{\dagger}_{N-1} ... a^{\dagger}_{2} a^{\dagger}_{1}\ket{vac}$$
Define number operator for $k$th orbital
$$a^{\dagger}_{k}a_{k} $$
Acting with this operator on $\ket{\Psi_{N}}$
$$a^{\dagger}_{k}a_{k} \ket{\Psi_{N}} = a^{\dagger}_{k}a_{k} (a^{\dagger}_{N}a^{\dagger}_{N-1} ... a^{\dagger}_{k} ... a^{\dagger}_{2} a^{\dagger}_{1}\ket{vac})$$
Using eq.(1) and eq.(3), the pair $a^{\dagger}_{k}a_{k}$ can be moved next to $a^{\dagger}_{k}$ inside $\Psi_{N}$
$$a^{\dagger}_{k}a_{k} \ket{\Psi_{N}} = a^{\dagger}_{N}a^{\dagger}_{N-1} ...a^{\dagger}_{k}a_{k}a^{\dagger}_{k} ... a^{\dagger}_{2} a^{\dagger}_{1}\ket{vac} = a^{\dagger}_{N}a^{\dagger}_{N-1} ...a^{\dagger}_{k}... a^{\dagger}_{2} a^{\dagger}_{1}\ket{vac} = +1\ket{\Psi_{N}}$$
#### Particle number operator for $N$ particles
$$\sum_{i=1}^{N} a^{\dagger}_{i}a_{i} \ket{\Psi_{N}} = N \ket{\Psi_{N}} $$
### Parity of permutation
$$p q r s \xrightarrow[]{(12)(34)} q p s r$$
The total number of transpositions required to decompose any permutation $P$ is known as its parity, $\mathcal{p}$.
If the total number is even, the permutation has even parity.
If the total number is odd, the permutation has odd parity.
Sign of permutation $P$ is defined as
$$sgn(P) = -1^{p}$$
Hence,
$$sgn(P_{odd}) = -1 \hspace{2cm} sgn(P_{even}) = 1$$
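As a small illustration (a hypothetical helper, not one of the notebook's modules), the sign of a permutation can be computed by counting transpositions through its cycle decomposition:

```python
def permutation_sign(perm):
    """Sign of a permutation given as a sequence of the integers 0..n-1."""
    sign = 1
    seen = [False] * len(perm)
    for start in range(len(perm)):
        if seen[start]:
            continue
        # walk the cycle containing `start`
        length = 0
        j = start
        while not seen[j]:
            seen[j] = True
            j = perm[j]
            length += 1
        # a cycle of length L decomposes into L - 1 transpositions
        if length % 2 == 0:
            sign = -sign
    return sign

# pqrs -> qpsr is the permutation (1, 0, 3, 2): two transpositions, even parity
print(permutation_sign((1, 0, 3, 2)))   # +1
```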
### Normal product
Given a product of operators, the normal product is defined by rearranging the operators such that all creation operators are moved to the left and all annihilation operators to the right, and then multiplying by the sign of the permutation.
Example: Consider $Q = a_{p}^{\dagger}a_{q}a_{r}^{\dagger}$
In order to write the normal product of $Q$, one has to permute $a_{q}a_{r}^{\dagger}$ and multiply the whole by $-1$ ($\because$ odd-parity permutation).
$$n[a_{p}^{\dagger}a_{q}a_{r}^{\dagger}] = -a_{p}^{\dagger}a_{r}^{\dagger}a_{q} $$
Exercise: $Q = a_{s}a_{p}^{\dagger}a_{r}a_{q}^{\dagger}$
1) $n[Q]$
2) $n[n[Q]]$ (the normal product is an idempotent operator)
Solution: $Q = a_{s}a_{p}^{\dagger}a_{r}a_{q}^{\dagger}$
1) $n[Q]=-a_{q}^{\dagger}a_{p}^{\dagger}a_{r}a_{s}=a_{p}^{\dagger}a_{q}^{\dagger}a_{r}a_{s}$
2) $n[n[Q]]=n[Q]$
Now, we will compute normal product using SymPy
```python
from secondquant_gen import AnnihilateFermion, CreateFermion, NO
from sympy import symbols
p,q,r,s = symbols('p,q,r,s')
Q = AnnihilateFermion(s)*CreateFermion(p)*AnnihilateFermion(r)*CreateFermion(q)
display(Q)
n = NO(Q)
nn = NO(n)
display(n,nn)
```
$\displaystyle a_{s} a^\dagger_{p} a_{r} a^\dagger_{q}$
$\displaystyle \left\{a^\dagger_{p} a^\dagger_{q} a_{r} a_{s}\right\}$
$\displaystyle \left\{a^\dagger_{p} a^\dagger_{q} a_{r} a_{s}\right\}$
### Contraction
The contraction between two operators is defined as the product itself minus its normal product. Symbolically
Example: Compute contraction of $Q = a_{p}a_{q}$
Exercise: Compute the contraction of the following
1) $a_{p}a_{q}^{\dagger}$
2) $a_{p}^{\dagger}a_{q}$
3) $a_{p}^{\dagger}a_{q}^{\dagger}$
Solution:
Now we will compute these contractions using SymPy
```python
from secondquant_gen import contraction, F, Fd # F = AnnihilateFermion, Fd = CreateFermion
from sympy import symbols
p,q = symbols('p q')
C1 = contraction(F(p),F(q))
C2 = contraction(F(p),Fd(q))
C3 = contraction(Fd(p),F(q))
C4 = contraction(Fd(p),Fd(q))
display(C1,C2,C3,C4)
```
0
$\displaystyle \delta_{p q}$
0
0
### Normal products with Contractions
Given a normal product of operators with contractions of operators inside the normal product,
take all the contracted pairs out, in the same order, leave the uncontracted operators inside the normal product, and multiply by the sign of the permutation.
Example:
Example:
### Wicks Theorem
Given a product of operators, it can be rewritten as the normal product of the same operators plus normal products with all possible contractions inside.
Example:
Now we will demonstrate Wick's theorem using SymPy
```python
from secondquant_gen import wicks, F, Fd
from sympy import symbols, Dummy
p,q,r,s,t,u = symbols('p q r s t u')
E1 = wicks(F(p)*Fd(q)*Fd(r))
display(E1)
```
$\displaystyle \delta_{p q} a^\dagger_{r} - \delta_{p r} a^\dagger_{q} + \left\{a^\dagger_{q} a^\dagger_{r} a_{p}\right\}$
```python
E2 = wicks(F(p)*F(q)*Fd(r)*F(s)*Fd(t)*Fd(u))
display(E2)
```
$\displaystyle \delta_{p r} \delta_{q t} \delta_{s u} - \delta_{p r} \delta_{q t} \left\{a^\dagger_{u} a_{s}\right\} - \delta_{p r} \delta_{q u} \delta_{s t} + \delta_{p r} \delta_{q u} \left\{a^\dagger_{t} a_{s}\right\} + \delta_{p r} \delta_{s t} \left\{a^\dagger_{u} a_{q}\right\} - \delta_{p r} \delta_{s u} \left\{a^\dagger_{t} a_{q}\right\} - \delta_{p r} \left\{a^\dagger_{t} a^\dagger_{u} a_{q} a_{s}\right\} - \delta_{p t} \delta_{q r} \delta_{s u} + \delta_{p t} \delta_{q r} \left\{a^\dagger_{u} a_{s}\right\} - \delta_{p t} \delta_{q u} \left\{a^\dagger_{r} a_{s}\right\} + \delta_{p t} \delta_{s u} \left\{a^\dagger_{r} a_{q}\right\} + \delta_{p t} \left\{a^\dagger_{r} a^\dagger_{u} a_{q} a_{s}\right\} + \delta_{p u} \delta_{q r} \delta_{s t} - \delta_{p u} \delta_{q r} \left\{a^\dagger_{t} a_{s}\right\} + \delta_{p u} \delta_{q t} \left\{a^\dagger_{r} a_{s}\right\} - \delta_{p u} \delta_{s t} \left\{a^\dagger_{r} a_{q}\right\} - \delta_{p u} \left\{a^\dagger_{r} a^\dagger_{t} a_{q} a_{s}\right\} - \delta_{q r} \delta_{s t} \left\{a^\dagger_{u} a_{p}\right\} + \delta_{q r} \delta_{s u} \left\{a^\dagger_{t} a_{p}\right\} + \delta_{q r} \left\{a^\dagger_{t} a^\dagger_{u} a_{p} a_{s}\right\} - \delta_{q t} \delta_{s u} \left\{a^\dagger_{r} a_{p}\right\} - \delta_{q t} \left\{a^\dagger_{r} a^\dagger_{u} a_{p} a_{s}\right\} + \delta_{q u} \delta_{s t} \left\{a^\dagger_{r} a_{p}\right\} + \delta_{q u} \left\{a^\dagger_{r} a^\dagger_{t} a_{p} a_{s}\right\} + \delta_{s t} \left\{a^\dagger_{r} a^\dagger_{u} a_{p} a_{q}\right\} - \delta_{s u} \left\{a^\dagger_{r} a^\dagger_{t} a_{p} a_{q}\right\} + \left\{a^\dagger_{r} a^\dagger_{t} a^\dagger_{u} a_{p} a_{q} a_{s}\right\}$
### Expectation value with $\ket{vac}$
Let $Q = M_{1}M_{2}M_{3} \dots M_{n-1}M_{n}$ be a product of $n$ operators.
From this one can easily see the importance of Wick's theorem. Let's do some checks using SymPy.
Exercise:
$Q = a_{p}a_{q}^{\dagger}a_{r}^{\dagger}$
$Q = a_{p}a_{q}a_{r}^{\dagger}a_{s}a_{t}^{\dagger}a_{u}^{\dagger}$
Compute $\bra{vac}Q\ket{vac}$?
Solution:
1. Convert $Q$ to normal order using Wick's theorem.
2. Only the fully contracted terms give a non-zero contribution.
```python
from secondquant_gen import wicks, F, Fd
from sympy import symbols
p,q,r,s,t,u = symbols('p q r s t u')
E1 = wicks(F(p)*Fd(q)*Fd(r),keep_only_fully_contracted=True)
display(E1)
```
$\displaystyle 0$
```python
E2 = wicks(F(p)*F(q)*Fd(r)*F(s)*Fd(t)*Fd(u),keep_only_fully_contracted=True)
display(E2)
```
$\displaystyle \delta_{p r} \delta_{q t} \delta_{s u} - \delta_{p r} \delta_{q u} \delta_{s t} - \delta_{p t} \delta_{q r} \delta_{s u} + \delta_{p u} \delta_{q r} \delta_{s t}$
### Particle Hole formalism
This procedure makes the evaluation of certain matrix elements easier like
$$\bra{\Phi_{0}}\hat{H}\ket{\Phi_{0}}$$
$$\bra{\Phi_{i}^{a}}\hat{H}\ket{\Phi_{0}}$$
$$\bra{\Phi_{ij}^{ab}}\hat{H}\ket{\Phi_{0}}$$
where $\Phi_{0}$ is the Hartree-Fock determinant. Given a Hartree-Fock solution we get occupied and virtual orbitals. Let's assign symbols $(i,j,k,...)$ to occupied and $(a,b,c,...)$ to virtual orbitals.
We will define new set of operators
$$
b_{p} =
\left\{
\begin{array}{ll}
a_{p}^{\dagger} & \mbox{if } p = i \\
a_{p} & \mbox{if } p = a
\end{array}
\right.
$$
$$
b_{p}^\dagger =
\left\{
\begin{array}{ll}
a_{p} & \mbox{if } p = i \\
a_{p}^{\dagger} & \mbox{if } p = a
\end{array}
\right.
$$
This makes the HF determinant a Fermi vacuum
$$ b_{p}\ket{\Phi_{0}} = 0$$
It follows that
$$ \{ b_{p},b_{q} \} = 0 \hspace{1cm} \{ b_{p}^{\dagger},b_{q}^{\dagger} \} = 0 \hspace{1cm} \{ b_{p},b_{q}^\dagger \} = \delta_{pq} $$
### Normal product of PHF
It is defined similarly: rearrange the operators ($b_{p}$ \& $b_{q}^{\dagger}$) such that all creation operators are moved to the left and all annihilation operators to the right, then multiply by the sign of the permutation.
Example: Consider $R = b_{p}^{\dagger}b_{q}b_{r}^{\dagger}$
In order to write the normal product of $R$, one has to permute $b_{q}b_{r}^{\dagger}$ and multiply the whole by $-1$ ($\because$ odd-parity permutation).
$$N[b_{p}^{\dagger}b_{q}b_{r}^{\dagger}] = -b_{p}^{\dagger}b_{r}^{\dagger}b_{q} $$
The key point is that this definition gives a different result than $n[\,]$.
Let's do an exercise to illustrate this.
We are going to compute the normal order of $a_{p}^{\dagger}a_{q}a_{r}^{\dagger}$ for $p=i,\ q=j,\ r=a$
$$N[a_{i}^{\dagger}a_{j}a_{a}^{\dagger}] = N[b_{i}b_{j}^{\dagger}b_{a}^{\dagger}] =
b_{j}^{\dagger}b_{a}^{\dagger}b_{i} = -b_{a}^{\dagger}b_{j}^{\dagger}b_{i} = -a_{a}^{\dagger}a_{j}a_{i}^{\dagger} $$
Also compute
$$n[a_{i}^{\dagger}a_{j}a_{a}^{\dagger}] = -a_{i}^{\dagger}a_{a}^{\dagger}a_{j} = a_{a}^{\dagger}a_{i}^{\dagger}a_{j} = a_{a}^{\dagger}\delta_{ij} - a_{a}^{\dagger}a_{j}a_{i}^{\dagger} $$
```python
from sympy.physics.secondquant import NO, F, Fd # F = AnnihilateFermion, Fd = CreateFermion
from sympy import symbols
i,j = symbols('i j', below_fermi=True)
a = symbols('a', above_fermi=True)
display(NO(Fd(i)*F(j)*Fd(a)))
```
$\displaystyle - \left\{a^\dagger_{a} a_{j} a^\dagger_{i}\right\}$
### Contractions in PHF
Same way they are defined but the operators are from PHF
Similarlry the contraction inside normal product, Wicks theorem are defined.
### Expectation value with $\ket{\Phi_{0}}$
Let $R = M_{1}M_{2}M_{3} \dots M_{n-1}M_{n}$ be a product of $n$ operators.
From this one can easily see the importance of Wick's theorem. Let's do some checks using SymPy and derive the Hartree-Fock energy.
### Electronic Hamiltonian
$$H_{el} = \sum_{pq} h_{pq} a_{p}^{\dagger} a_{q} + \frac{1}{4}\sum_{pqrs} v^{pq}_{rs} a_{p}^{\dagger}a_{q}^{\dagger}a_{s}a_{r}$$
where,
$$h_{pq} = \bra{\phi_{p} (x)}(-\frac{1}{2}\nabla^{2} -\sum_{\alpha} \frac{Z_{\alpha}}{r_{\alpha x}} )\ket{\phi_{q} (x)}$$
$$v_{pqrs} = \bra{\phi_{p} \phi_{q}}\ket{\phi_{r} \phi_{s}} = \braket{\phi_{p} \phi_{q}}{\phi_{r}\phi_{s}} - \braket{\phi_{p} \phi_{q}}{\phi_{s}\phi_{r}}$$
### Hartree-Fock
Let's find the Hartree-Fock energy
```python
from sympy.physics.secondquant import wicks, F, Fd, NO # F = AnnihilateFermion, Fd = CreateFermion
from sympy.physics.secondquant import substitute_dummies, contraction
from sympy.physics.secondquant import AntiSymmetricTensor, evaluate_deltas
from sympy import symbols, Rational, Dummy
p, q, r, s = symbols('p,q,r,s', cls=Dummy)
h = AntiSymmetricTensor('h', (p,), (q,))
pq = Fd(p)*F(q)
v = AntiSymmetricTensor('v', (p, q), (r, s))
pqsr = Fd(p)*Fd(q)*F(s)*F(r)
H = h*pq + Rational(1, 4)*v*pqsr
display(H) # Without summation
eq = wicks(H,keep_only_fully_contracted=True)
index_rule = {'below': 'ijklmno','above': 'abcde', 'general': 'pqrs'}
eq = substitute_dummies(eq, new_indices=True, pretty_indices=index_rule)
display(eq)
eq = evaluate_deltas(eq)
display(eq)
```
$\displaystyle h^{p}_{q} a^\dagger_{p} a_{q} + \frac{v^{pq}_{rs} a^\dagger_{p} a^\dagger_{q} a_{s} a_{r}}{4}$
$\displaystyle \delta_{i p} \delta_{p q} h^{q}_{p} + \frac{\delta_{i q} \delta_{j p} \delta_{p r} \delta_{q s} v^{rs}_{pq}}{2}$
$\displaystyle h^{i}_{i} + \frac{v^{ij}_{ij}}{2}$
### Normal ordered Hamiltonian
$$H_{N} = \sum_{pq} h_{pq} N[a_{p}^{\dagger} a_{q}] + \frac{1}{4}\sum_{pqrs} v^{pq}_{rs} N[a_{p}^{\dagger}a_{q}^{\dagger}a_{s}a_{r}]$$
One can show that
$$H_{el} = H_{N} + E_{HF}$$
From now on we will use $H_{N}$ for computing matrix elements; $H_{N}$ also directly measures the correlation effects.
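As a quick check (restating the previous cell, but with the operator strings wrapped in `NO[]`), the normal-ordered Hamiltonian has zero expectation value in the Fermi vacuum, consistent with $H_{el} = H_{N} + E_{HF}$; the expected output is $0$.

```python
from sympy.physics.secondquant import wicks, F, Fd, NO, AntiSymmetricTensor
from sympy import symbols, Rational, Dummy

p, q, r, s = symbols('p,q,r,s', cls=Dummy)
h = AntiSymmetricTensor('h', (p,), (q,))
v = AntiSymmetricTensor('v', (p, q), (r, s))
HN = h*NO(Fd(p)*F(q)) + Rational(1, 4)*v*NO(Fd(p)*Fd(q)*F(s)*F(r))

# contractions inside a normal-ordered string are excluded, so nothing survives
display(wicks(HN, keep_only_fully_contracted=True))
```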
### Baker-Campbell-Hausdorff
$$e^{-B} A e^{B} = A + B + [A,B] + \frac{1}{2!}[[A,B],B] + \frac{1}{3!}[[[A,B],B],B] + ... $$
If $A$ is the electronic Hamiltonian or the normal-ordered Hamiltonian and $B$ is any excitation operator, then due to the structure of the Hamiltonian the series truncates after the fourth nested commutator. This is the reason the method is called coupled cluster.
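The `BCH` module imported in the CCSD cell below is not shown in this notebook. A minimal sketch of what a routine like `BCH.level` could do, using SymPy's built-in `Commutator` (this is an assumption about its structure, not its actual source): the cluster operator has to be regenerated with fresh dummy indices at every nesting level.

```python
from sympy import factorial
from sympy.physics.secondquant import (Commutator, wicks,
                                        evaluate_deltas, substitute_dummies)

def bch_similarity_transform(H, make_T, max_order=4):
    """Sketch of e^{-T} H e^{T} = H + [H,T] + 1/2! [[H,T],T] + ...

    make_T must return the cluster operator with fresh dummy indices on each
    call, otherwise the nested commutators suffer index collisions.  For a
    Hamiltonian with at most two-body terms the series terminates exactly at
    the fourth nested commutator.
    """
    result = H
    term = H
    for n in range(1, max_order + 1):
        term = wicks(Commutator(term, make_T()))
        term = substitute_dummies(evaluate_deltas(term))
        result = result + term / factorial(n)
    return result
```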
### Coupled-Cluster
Coupled Cluster Singles and Doubles (CCSD)
$$\Psi_{CCSD} = e^{T} \ket{\Phi_{0}}$$
where,
$$ T_{1} = \sum_{ia}t_{i}^{a}a^{\dagger}i$$
$$ T_{2} = \frac{1}{4}\sum_{ijab}t_{ij}^{ab}a^{\dagger}b^{\dagger}ji$$
Equations which need to be solved
$$ E_{CCSD}^{cor} = \bra{\Phi_{0}}e^{-T} H_{N} e^{T} \ket{\Phi_{0}} \tag{Correltion energy}$$
$$ 0 = \bra{\Phi_{i}^{a}}e^{-T} H_{N} e^{T} \ket{\Phi_{0}} \tag{$T_{1}$ amplitude}$$
$$ 0 = \bra{\Phi_{ij}^{ab}}e^{-T} H_{N} e^{T} \ket{\Phi_{0}} \tag{$T_{2}$ amplitude}$$
Let's see the power of SymPy now.
```python
from sympy.physics.secondquant import wicks, F, Fd, NO # F = AnnihilateFermion, Fd = CreateFermion
from sympy.physics.secondquant import substitute_dummies, contraction
from sympy.physics.secondquant import AntiSymmetricTensor, evaluate_deltas
from sympy.physics.secondquant import PermutationOperator, simplify_index_permutations
from sympy import symbols, Rational, Dummy
import BCH
p, q, r, s = symbols('p,q,r,s', cls=Dummy)
i,j = symbols('i,j' , below_fermi=True)
a,b = symbols('a,b' , above_fermi=True)
f = AntiSymmetricTensor('f', (p,), (q,))
pq = NO(Fd(p)*F(q))
v = AntiSymmetricTensor('v', (p, q), (r, s))
pqsr = NO(Fd(p)*Fd(q)*F(s)*F(r))
H = f*pq + Rational(1, 4)*v*pqsr
ccsd = BCH.level(H,"SD")
eq = wicks(ccsd,simplify_kronecker_deltas=True,keep_only_fully_contracted=True)
index_rule = {'below': 'ijklmno','above': 'abcdef', 'general': 'pqrs'}
e_ccsd = substitute_dummies(eq, new_indices=True, pretty_indices=index_rule)
display(e_ccsd)
```
$\displaystyle f^{i}_{a} t^{a}_{i} - \frac{t^{a}_{j} t^{b}_{i} v^{ij}_{ab}}{2} + \frac{t^{ab}_{ij} v^{ij}_{ab}}{4}$
```python
eq = wicks(Fd(i)*F(a)*ccsd,simplify_kronecker_deltas=True,keep_only_fully_contracted=True)
index_rule = {'below': 'jklmno','above': 'bcdef', 'general': 'pqrs'}
t1 = (substitute_dummies(eq, new_indices=True, pretty_indices=index_rule))
display(t1)
```
$\displaystyle - f^{j}_{b} t^{b}_{i} t^{a}_{j} + f^{j}_{b} t^{ab}_{ij} - f^{j}_{i} t^{a}_{j} + f^{a}_{b} t^{b}_{i} + f^{a}_{i} - t^{b}_{j} t^{c}_{i} t^{a}_{k} v^{jk}_{bc} - t^{b}_{j} t^{c}_{i} v^{aj}_{bc} + t^{b}_{j} t^{a}_{k} v^{jk}_{ib} + t^{b}_{j} t^{ac}_{ik} v^{jk}_{bc} + t^{b}_{j} v^{aj}_{ib} - \frac{t^{b}_{i} t^{ac}_{jk} v^{jk}_{bc}}{2} + \frac{t^{a}_{k} t^{bc}_{ij} v^{jk}_{bc}}{2} + \frac{t^{bc}_{ij} v^{aj}_{bc}}{2} - \frac{t^{ab}_{jk} v^{jk}_{ib}}{2}$
```python
eq = wicks(Fd(i)*Fd(j)*F(b)*F(a)*ccsd,simplify_kronecker_deltas=True,keep_only_fully_contracted=True)
index_rule = {'below': 'klmno','above': 'cdef', 'general': 'pqrs'}
t2 = substitute_dummies(eq, new_indices=True, pretty_indices=index_rule)
P = PermutList = [PermutationOperator(i,j),PermutationOperator(a,b)]
t2 = simplify_index_permutations(t2,PermutList)
display(t2)
```
$\displaystyle f^{k}_{c} t^{c}_{i} t^{ab}_{jk} P(ij) + f^{k}_{c} t^{a}_{k} t^{bc}_{ij} P(ab) + f^{k}_{i} t^{ab}_{jk} P(ij) - f^{a}_{c} t^{bc}_{ij} P(ab) + t^{c}_{k} t^{d}_{i} t^{ab}_{jl} v^{kl}_{cd} P(ij) + t^{c}_{k} t^{a}_{l} t^{bd}_{ij} v^{kl}_{cd} P(ab) - t^{c}_{k} t^{ad}_{ij} v^{bk}_{cd} P(ab) + t^{c}_{k} t^{ab}_{il} v^{kl}_{jc} P(ij) + t^{c}_{i} t^{d}_{j} t^{a}_{k} t^{b}_{l} v^{kl}_{cd} + t^{c}_{i} t^{d}_{j} t^{a}_{k} v^{bk}_{cd} P(ab) + \frac{t^{c}_{i} t^{d}_{j} t^{ab}_{kl} v^{kl}_{cd}}{2} + t^{c}_{i} t^{d}_{j} v^{ab}_{cd} - t^{c}_{i} t^{a}_{k} t^{b}_{l} v^{kl}_{jc} P(ij) - t^{c}_{i} t^{a}_{k} t^{bd}_{jl} v^{kl}_{cd} P(ab) P(ij) - t^{c}_{i} t^{a}_{k} v^{bk}_{jc} P(ab) P(ij) - t^{c}_{i} t^{ad}_{jk} v^{bk}_{cd} P(ab) P(ij) - \frac{t^{c}_{i} t^{ab}_{kl} v^{kl}_{jc} P(ij)}{2} - t^{c}_{i} v^{ab}_{jc} P(ij) + \frac{t^{a}_{k} t^{b}_{l} t^{cd}_{ij} v^{kl}_{cd}}{2} + t^{a}_{k} t^{b}_{l} v^{kl}_{ij} + \frac{t^{a}_{k} t^{cd}_{ij} v^{bk}_{cd} P(ab)}{2} + t^{a}_{k} t^{bc}_{il} v^{kl}_{jc} P(ab) P(ij) + t^{a}_{k} v^{bk}_{ij} P(ab) + \frac{t^{cd}_{ij} t^{ab}_{kl} v^{kl}_{cd}}{4} + \frac{t^{cd}_{ij} v^{ab}_{cd}}{2} + \frac{t^{cd}_{jk} t^{ab}_{il} v^{kl}_{cd} P(ij)}{2} + t^{ac}_{ik} t^{bd}_{jl} v^{kl}_{cd} P(ij) + t^{ac}_{ik} v^{bk}_{jc} P(ab) P(ij) - \frac{t^{ac}_{ij} t^{bd}_{kl} v^{kl}_{cd} P(ab)}{2} + \frac{t^{ab}_{kl} v^{kl}_{ij}}{2} + v^{ab}_{ij}$
### Equation-of-motion Coupled-Cluster
EOM-CC allows one to compute excited states and electronic states of open-shell character. There are many flavors, depending on the target state relative to the reference.
#### Energy and amplitude equations of EOM-CC
$$\bar{H} = e^{-T} H_{N} e^{T}$$
$$\bra{\Phi_{ij..}^{ab..}}\bar{H}\hat{R}\ket{\Phi_{0}} = E_{EOM} \bra{\Phi_{ij..}^{ab..}}\hat{R}\ket{\Phi_{0}}$$
$$\bra{\Phi_{ij..}^{ab..}}[\bar{H}-E_{cc},\hat{R}]\ket{\Phi_{0}} = \Delta E_{EOM} \bra{\Phi_{ij..}^{ab..}}\hat{R}\ket{\Phi_{0}}$$
The last equation is the one we will derive using SymPy.
Example - EOM-IP-CCSD using the generalized Davidson method
$$\begin{pmatrix}
\bar{H}_{SS} - E_{cc} & \bar{H}_{SD} \\
\bar{H}_{DS} & \bar{H}_{DD} - E_{cc}
\end{pmatrix}
\begin{pmatrix}
R_{1}\\
R_{2}
\end{pmatrix}=\omega
\begin{pmatrix}
R_{1}\\
R_{2}
\end{pmatrix}$$
and
$$\begin{pmatrix}
L_{1} & & L_{2}
\end{pmatrix}
\begin{pmatrix}
\bar{H}_{SS}-E_{cc} & \bar{H}_{SD} \\
\bar{H}_{DS} & \bar{H}_{DD}-E_{cc}
\end{pmatrix}
=
\omega
\begin{pmatrix}
L_{1} & & L_{2},
\end{pmatrix} $$
Using the matrix equations, we need the right $\sigma$ amplitudes of EOM-IP-CCSD, and for this we need
$$\bra{\Phi_{i}}[\bar{H}-E_{cc},\hat{R_{1}}]\ket{\Phi_{0}} = ((\bar{H}_{SS} - E_{cc}) R_{1})$$
$$\bra{\Phi_{ij}^{a}}[\bar{H},\hat{R_{1}}]\ket{\Phi_{0}} = (\bar{H}_{DS} R_{1}) $$
$$\bra{\Phi_{i}}[\bar{H},\hat{R_{2}}]\ket{\Phi_{0}} = (\bar{H}_{SD} R_{2})$$
$$\bra{\Phi_{ij}^{a}}[\bar{H}-E_{cc},\hat{R_{2}}]\ket{\Phi_{0}} =((\bar{H}_{DD} - E_{cc}) R_{2}) $$
Therefore, trial vectors are defined as
$$\sigma_{1}=((\bar{H}_{SS}-E_{cc})R_{1})+(\bar{H}_{SD}R_{2})$$
$$\sigma_{2}=(\bar{H}_{DS}R_{1}) +((\bar{H}_{DD}-E_{cc})R_{2})$$
Similarly, the left vectors are defined. Let's use the power of SymPy for the right and left $\sigma$ amplitudes of EOM-IP-CCSD
```python
import EOM,SIGMA
flavor1 = "IP"
R0_f1 = EOM.R0(flavor1)
R1_f1 = EOM.R1(flavor1)
R2_f1 = EOM.R2(flavor1)
Rf1 = R0_f1 + R1_f1 + R2_f1
SIGMA.RVECTORS(R0_f1,R1_f1,R2_f1,flavor1)
L0_f1 = EOM.L0(flavor1)
L1_f1 = EOM.L1(flavor1)
L2_f1 = EOM.L2(flavor1)
Lf1 = L0_f1 + L1_f1 + L2_f1
SIGMA.LVECTORS(L0_f1,L1_f1,L2_f1,flavor1)
```
Computing right sigma amplitudes for IP (skipping summation for dummy variables)
$\displaystyle ((\overline{H}_{SS}-E_{cc})R_{1}) = - f^{j}_{a} r^{}_{j} t^{a}_{i} - f^{j}_{i} r^{}_{j} + r^{}_{j} t^{a}_{k} t^{b}_{i} v^{jk}_{ab} - r^{}_{j} t^{a}_{k} v^{jk}_{ia} - \frac{r^{}_{j} t^{ab}_{ik} v^{jk}_{ab}}{2}$
$\displaystyle (\overline{H}_{SD}R_{2}) = f^{j}_{a} r^{a}_{ij} + \frac{r^{a}_{jk} t^{b}_{i} v^{jk}_{ab}}{2} - \frac{r^{a}_{jk} v^{jk}_{ia}}{2} + r^{a}_{ij} t^{b}_{k} v^{jk}_{ab}$
$\displaystyle (\overline{H}_{DS}R_{1}) = f^{k}_{b} r^{}_{k} t^{ab}_{ij} - r^{}_{k} t^{b}_{l} t^{ac}_{ij} v^{kl}_{bc} + r^{}_{k} t^{b}_{i} t^{c}_{j} t^{a}_{l} v^{kl}_{bc} + r^{}_{k} t^{b}_{i} t^{c}_{j} v^{ak}_{bc} - r^{}_{k} t^{b}_{i} t^{a}_{l} v^{kl}_{jb} - r^{}_{k} t^{b}_{i} t^{ac}_{jl} v^{kl}_{bc} - r^{}_{k} t^{b}_{i} v^{ak}_{jb} + r^{}_{k} t^{b}_{j} t^{a}_{l} v^{kl}_{ib} + r^{}_{k} t^{b}_{j} t^{ac}_{il} v^{kl}_{bc} + r^{}_{k} t^{b}_{j} v^{ak}_{ib} + \frac{r^{}_{k} t^{a}_{l} t^{bc}_{ij} v^{kl}_{bc}}{2} + r^{}_{k} t^{a}_{l} v^{kl}_{ij} + \frac{r^{}_{k} t^{bc}_{ij} v^{ak}_{bc}}{2} + r^{}_{k} t^{ab}_{il} v^{kl}_{jb} - r^{}_{k} t^{ab}_{jl} v^{kl}_{ib} + r^{}_{k} v^{ak}_{ij}$
$\displaystyle ((\overline{H}_{DD}-E_{cc})R_{2}) = - f^{k}_{b} r^{b}_{ij} t^{a}_{k} - f^{k}_{b} r^{a}_{ik} t^{b}_{j} P(ij) + f^{k}_{i} r^{a}_{jk} P(ij) + f^{a}_{b} r^{b}_{ij} - \frac{r^{b}_{kl} t^{ac}_{ij} v^{kl}_{bc}}{2} - r^{b}_{ik} t^{c}_{j} t^{a}_{l} v^{kl}_{bc} P(ij) - r^{b}_{ik} t^{c}_{j} v^{ak}_{bc} P(ij) + r^{b}_{ik} t^{a}_{l} v^{kl}_{jb} P(ij) + r^{b}_{ik} t^{ac}_{jl} v^{kl}_{bc} P(ij) + r^{b}_{ik} v^{ak}_{jb} P(ij) + r^{b}_{ij} t^{c}_{k} t^{a}_{l} v^{kl}_{bc} + r^{b}_{ij} t^{c}_{k} v^{ak}_{bc} - \frac{r^{b}_{ij} t^{ac}_{kl} v^{kl}_{bc}}{2} + \frac{r^{a}_{kl} t^{b}_{i} t^{c}_{j} v^{kl}_{bc}}{2} - \frac{r^{a}_{kl} t^{b}_{i} v^{kl}_{jb} P(ij)}{2} + \frac{r^{a}_{kl} t^{bc}_{ij} v^{kl}_{bc}}{4} + \frac{r^{a}_{kl} v^{kl}_{ij}}{2} + r^{a}_{ik} t^{b}_{l} t^{c}_{j} v^{kl}_{bc} P(ij) - r^{a}_{ik} t^{b}_{l} v^{kl}_{jb} P(ij) - \frac{r^{a}_{ik} t^{bc}_{jl} v^{kl}_{bc} P(ij)}{2}$
Computing left sigma amplitudes for IP (skipping summation for dummy variables)
$\displaystyle (L_{1}(\overline{H}_{SS}-E_{cc})) = - f^{i}_{a} l^{j}_{} t^{a}_{j} - f^{i}_{j} l^{j}_{} - l^{j}_{} t^{a}_{j} t^{b}_{k} v^{ik}_{ab} + l^{j}_{} t^{a}_{k} v^{ik}_{aj} - \frac{l^{j}_{} t^{ab}_{jk} v^{ik}_{ab}}{2}$
$\displaystyle (L_{2}\overline{H}_{DS}) = f^{a}_{b} l^{ij}_{a} t^{b}_{j} + f^{a}_{j} l^{ij}_{a} - f^{j}_{a} l^{ik}_{b} t^{a}_{k} t^{b}_{j} + f^{j}_{a} l^{ik}_{b} t^{ab}_{jk} - f^{k}_{j} l^{ij}_{a} t^{a}_{k} - \frac{f^{i}_{a} l^{jk}_{b} t^{ab}_{jk}}{2} + l^{jk}_{a} t^{a}_{l} t^{b}_{j} v^{il}_{bk} - \frac{l^{jk}_{a} t^{a}_{l} t^{b}_{k} t^{c}_{j} v^{il}_{bc}}{2} + \frac{l^{jk}_{a} t^{a}_{l} t^{bc}_{jk} v^{il}_{bc}}{4} + \frac{l^{jk}_{a} t^{a}_{l} v^{il}_{jk}}{2} - l^{jk}_{a} t^{b}_{j} v^{ia}_{bk} + \frac{l^{jk}_{a} t^{b}_{k} t^{c}_{j} v^{ia}_{bc}}{2} - l^{jk}_{a} t^{c}_{k} t^{ab}_{jl} v^{il}_{bc} + \frac{l^{jk}_{a} t^{c}_{l} t^{ab}_{jk} v^{il}_{bc}}{2} - l^{jk}_{a} t^{ab}_{jl} v^{il}_{bk} - \frac{l^{jk}_{a} t^{bc}_{jk} v^{ia}_{bc}}{4} - \frac{l^{jk}_{a} v^{ia}_{jk}}{2} + l^{ij}_{a} t^{a}_{k} t^{b}_{l} v^{kl}_{bj} - l^{ij}_{a} t^{a}_{l} t^{b}_{k} t^{c}_{j} v^{kl}_{bc} + \frac{l^{ij}_{a} t^{a}_{l} t^{bc}_{jk} v^{kl}_{bc}}{2} + l^{ij}_{a} t^{b}_{j} t^{c}_{k} v^{ak}_{bc} - l^{ij}_{a} t^{b}_{k} v^{ak}_{bj} + \frac{l^{ij}_{a} t^{c}_{j} t^{ab}_{kl} v^{kl}_{bc}}{2} + l^{ij}_{a} t^{c}_{l} t^{ab}_{jk} v^{kl}_{bc} + \frac{l^{ij}_{a} t^{ab}_{kl} v^{kl}_{bj}}{2} + \frac{l^{ij}_{a} t^{bc}_{jk} v^{ak}_{bc}}{2}$
$\displaystyle (L_{1}\overline{H}_{SD}) = - f^{i}_{a} l^{j}_{} + f^{j}_{a} l^{i}_{} + l^{k}_{} t^{b}_{k} v^{ij}_{ab} + l^{k}_{} v^{ij}_{ak} + l^{i}_{} t^{b}_{k} v^{jk}_{ab} - l^{j}_{} t^{b}_{k} v^{ik}_{ab}$
$\displaystyle (L_{2}(\overline{H}_{DD}-E_{cc})) = f^{b}_{a} l^{ij}_{b} - f^{k}_{a} l^{ij}_{b} t^{b}_{k} + f^{i}_{b} l^{jk}_{a} t^{b}_{k} P(ij) + f^{i}_{k} l^{jk}_{a} P(ij) - \frac{l^{kl}_{b} t^{bc}_{kl} v^{ij}_{ac}}{2} + l^{kl}_{a} t^{b}_{k} v^{ij}_{bl} - \frac{l^{kl}_{a} t^{b}_{l} t^{c}_{k} v^{ij}_{bc}}{2} + \frac{l^{kl}_{a} t^{bc}_{kl} v^{ij}_{bc}}{4} + \frac{l^{kl}_{a} v^{ij}_{kl}}{2} - l^{ik}_{b} t^{b}_{l} t^{c}_{k} v^{jl}_{ac} P(ij) - l^{ik}_{b} t^{b}_{l} v^{jl}_{ak} P(ij) + l^{ik}_{b} t^{c}_{k} v^{jb}_{ac} P(ij) + l^{ik}_{b} t^{bc}_{kl} v^{jl}_{ac} P(ij) + l^{ik}_{b} v^{jb}_{ak} P(ij) - l^{ik}_{a} t^{b}_{k} t^{c}_{l} v^{jl}_{bc} P(ij) + l^{ik}_{a} t^{b}_{l} v^{jl}_{bk} P(ij) - \frac{l^{ik}_{a} t^{bc}_{kl} v^{jl}_{bc} P(ij)}{2} - l^{ij}_{b} t^{b}_{k} t^{c}_{l} v^{kl}_{ac} + l^{ij}_{b} t^{c}_{k} v^{bk}_{ac} - \frac{l^{ij}_{b} t^{bc}_{kl} v^{kl}_{ac}}{2}$
### Properties
#### One particle density matrix (OPDM)
$$\gamma^I_{pq}= \bra{\Psi_{I}}p^{\dagger}q\ket{\Psi_{I}}$$
#### One particle transition density matrix (OPTDM)
$$\gamma^{IJ}_{pq}= \bra{\Psi_{I}}p^{\dagger}q\ket{\Psi_{J}}$$
```python
import EOM, DM, TDM
flavor1 = "IP"
R0_f1 = EOM.R0(flavor1)
R1_f1 = EOM.R1(flavor1)
R2_f1 = EOM.R2(flavor1)
Rf1 = R0_f1 + R1_f1 + R2_f1
L0_f1 = EOM.L0(flavor1)
L1_f1 = EOM.L1(flavor1)
L2_f1 = EOM.L2(flavor1)
Lf1 = L0_f1 + L1_f1 + L2_f1
DM.OPDM(Lf1,Rf1,flavor1)
```
Computing OPDM for IP (skipping summation for dummy variables)
$\displaystyle \gamma_{ij} = \delta_{i j} l^{k}_{} r^{}_{k} + \frac{\delta_{i j} l^{kl}_{a} r^{a}_{kl}}{2} - l^{j}_{} r^{}_{i} + l^{jk}_{a} r^{}_{k} t^{a}_{i} - l^{jk}_{a} r^{a}_{ik}$
$\displaystyle \gamma_{ia} = l^{j}_{} r^{}_{j} t^{a}_{i} - l^{j}_{} r^{}_{i} t^{a}_{j} - l^{j}_{} r^{a}_{ij} - l^{jk}_{b} r^{}_{j} t^{b}_{i} t^{a}_{k} + l^{jk}_{b} r^{}_{j} t^{ab}_{ik} - \frac{l^{jk}_{b} r^{}_{i} t^{ab}_{jk}}{2} + \frac{l^{jk}_{b} r^{b}_{jk} t^{a}_{i}}{2} + l^{jk}_{b} r^{b}_{ij} t^{a}_{k} - \frac{l^{jk}_{b} r^{a}_{jk} t^{b}_{i}}{2}$
$\displaystyle \gamma_{ai} = - l^{ij}_{a} r^{}_{j}$
$\displaystyle \gamma_{ab} = l^{ij}_{a} r^{}_{i} t^{b}_{j} + \frac{l^{ij}_{a} r^{b}_{ij}}{2}$
```python
flavor2 = "CCSD"
R0_f2 = EOM.R0(flavor2)
R1_f2 = EOM.R1(flavor2)
R2_f2 = EOM.R2(flavor2)
Rf2 = R0_f2 + R1_f2 + R2_f2
L0_f2 = EOM.L0(flavor2)
L1_f2 = EOM.L1(flavor2)
L2_f2 = EOM.L2(flavor2)
Lf2 = L0_f2 + L1_f2 + L2_f2
TDM.OPTDM(Lf1,Rf1,Lf2,Rf2,flavor1,flavor2)
```
Computing Dyson OPTDM between IP $\rightarrow$ CCSD (skipping summation for dummy variables)
$\displaystyle \gamma_i^{R} = - l^{j}_{b} r^{}_{j} t^{b}_{i} + l^{j}_{b} r^{b}_{ij} - \frac{l^{jk}_{bc} r^{}_{j} t^{bc}_{ik}}{2} + \frac{l^{jk}_{bc} r^{b}_{jk} t^{c}_{i}}{2} + r^{}_{i}$
$\displaystyle \gamma_a^{R} = l^{j}_{a} r^{}_{j} + \frac{l^{jk}_{ab} r^{b}_{jk}}{2}$
$\displaystyle \gamma_i^{L} = l^{i}_{}$
$\displaystyle \gamma_a^{L} = l^{i}_{} t^{a}_{i} + \frac{l^{ij}_{c} t^{ac}_{ij}}{2}$
|
792f957ee53076b125068a419bf5e24a00f79a24
| 78,462 |
ipynb
|
Jupyter Notebook
|
SQ2EOM.ipynb
|
sgulania/SQ2EOM
|
d10b79fc661ded29a6712e447eee3ea852e22882
|
[
"MIT"
] | null | null | null |
SQ2EOM.ipynb
|
sgulania/SQ2EOM
|
d10b79fc661ded29a6712e447eee3ea852e22882
|
[
"MIT"
] | null | null | null |
SQ2EOM.ipynb
|
sgulania/SQ2EOM
|
d10b79fc661ded29a6712e447eee3ea852e22882
|
[
"MIT"
] | null | null | null | 51.585799 | 4,423 | 0.521361 | true | 12,705 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.826712 | 0.819893 | 0.677815 |
__label__eng_Latn
| 0.26408 | 0.413124 |
```python
import utils
%load_ext autoreload
%autoreload 2
from utils import build_transf, full_homo_transf, prop_velo, prop_force_torque, comp_jacobian
import utils
from sympy import sqrt
import sympy as sy
from IPython.display import display, Math
```
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
# Final 20/21 Problem 3
Denavit-Hartenberg Parameters are from problem 3a).
```python
dh_params = [
[90, 0, "d_1", 0],
[90, 0, sy.Symbol("l_1")+sy.Symbol("d_3"), "theta_2"],
[0, "l_2", 0, 0],
[0, 0, 0, 0]
]
pc1_0 = sy.Matrix([0, -2/3 * sy.Symbol("d_1")])
pc2_0 = sy.Matrix([1/2 * sy.Symbol("l_2") * sy.cos(sy.Symbol("theta_2")),
-sy.Symbol("d_1") -1/2 * sy.Symbol("l_2") * sy.sin(sy.Symbol("theta_2")),
-sy.Symbol("l_1")])
pc3_0 = sy.Matrix([sy.Symbol("l_2") * sy.cos(sy.Symbol("theta_2")),
-sy.Symbol("d_1") - sy.Symbol("l_2") * sy.sin(sy.Symbol("theta_2")),
-sy.Symbol("l_1") -1/2 * sy.Symbol("d_3")])
```
```python
transforms = utils.build_transf(dh_params)
full_transform = full_homo_transf(transforms, verbose=False)
```
$\displaystyle {}^0_1T = \left[\begin{matrix}1 & 0 & 0 & 0\\0 & 0 & -1 & - d_{1}\\0 & 1 & 0 & 0\\0 & 0 & 0 & 1\end{matrix}\right]$
$\displaystyle {}^1_2T = \left[\begin{matrix}\cos{\left(\theta_{2} \right)} & - \sin{\left(\theta_{2} \right)} & 0 & 0\\0 & 0 & -1 & - d_{3} - l_{1}\\\sin{\left(\theta_{2} \right)} & \cos{\left(\theta_{2} \right)} & 0 & 0\\0 & 0 & 0 & 1\end{matrix}\right]$
$\displaystyle {}^2_3T = \left[\begin{matrix}1 & 0 & 0 & l_{2}\\0 & 1 & 0 & 0\\0 & 0 & 1 & 0\\0 & 0 & 0 & 1\end{matrix}\right]$
$\displaystyle {}^3_4T = \left[\begin{matrix}1 & 0 & 0 & 0\\0 & 1 & 0 & 0\\0 & 0 & 1 & 0\\0 & 0 & 0 & 1\end{matrix}\right]$
```python
T02 = transforms[0] @ transforms[1]
T03 = T02 @ transforms[2]
T04 = T03 @ transforms[3]
for t in (T02, T03, T04):
display(Math(sy.latex(t)))
```
$\displaystyle \left[\begin{matrix}\cos{\left(\theta_{2} \right)} & - \sin{\left(\theta_{2} \right)} & 0 & 0\\- \sin{\left(\theta_{2} \right)} & - \cos{\left(\theta_{2} \right)} & 0 & - d_{1}\\0 & 0 & -1 & - d_{3} - l_{1}\\0 & 0 & 0 & 1\end{matrix}\right]$
$\displaystyle \left[\begin{matrix}\cos{\left(\theta_{2} \right)} & - \sin{\left(\theta_{2} \right)} & 0 & l_{2} \cos{\left(\theta_{2} \right)}\\- \sin{\left(\theta_{2} \right)} & - \cos{\left(\theta_{2} \right)} & 0 & - d_{1} - l_{2} \sin{\left(\theta_{2} \right)}\\0 & 0 & -1 & - d_{3} - l_{1}\\0 & 0 & 0 & 1\end{matrix}\right]$
$\displaystyle \left[\begin{matrix}\cos{\left(\theta_{2} \right)} & - \sin{\left(\theta_{2} \right)} & 0 & l_{2} \cos{\left(\theta_{2} \right)}\\- \sin{\left(\theta_{2} \right)} & - \cos{\left(\theta_{2} \right)} & 0 & - d_{1} - l_{2} \sin{\left(\theta_{2} \right)}\\0 & 0 & -1 & - d_{3} - l_{1}\\0 & 0 & 0 & 1\end{matrix}\right]$
```python
for t in (T02, T03, T04):
display(Math(sy.latex(utils.homo_transpose(t))))
```
$\displaystyle \left[\begin{matrix}\cos{\left(\theta_{2} \right)} & - \sin{\left(\theta_{2} \right)} & 0 & - d_{1} \sin{\left(\theta_{2} \right)}\\- \sin{\left(\theta_{2} \right)} & - \cos{\left(\theta_{2} \right)} & 0 & - d_{1} \cos{\left(\theta_{2} \right)}\\0 & 0 & -1 & - d_{3} - l_{1}\\0 & 0 & 0 & 1\end{matrix}\right]$
$\displaystyle \left[\begin{matrix}\cos{\left(\theta_{2} \right)} & - \sin{\left(\theta_{2} \right)} & 0 & - d_{1} \sin{\left(\theta_{2} \right)} - l_{2}\\- \sin{\left(\theta_{2} \right)} & - \cos{\left(\theta_{2} \right)} & 0 & - d_{1} \cos{\left(\theta_{2} \right)}\\0 & 0 & -1 & - d_{3} - l_{1}\\0 & 0 & 0 & 1\end{matrix}\right]$
$\displaystyle \left[\begin{matrix}\cos{\left(\theta_{2} \right)} & - \sin{\left(\theta_{2} \right)} & 0 & - d_{1} \sin{\left(\theta_{2} \right)} - l_{2}\\- \sin{\left(\theta_{2} \right)} & - \cos{\left(\theta_{2} \right)} & 0 & - d_{1} \cos{\left(\theta_{2} \right)}\\0 & 0 & -1 & - d_{3} - l_{1}\\0 & 0 & 0 & 1\end{matrix}\right]$
|
7e57ec402907c3f5e4783bdf9f5928a15487d920
| 7,816 |
ipynb
|
Jupyter Notebook
|
examples/test_2021F.ipynb
|
philippwulff/robotics_calc
|
8365ed3931206ca3788086e261d800ebe21ef86b
|
[
"MIT"
] | null | null | null |
examples/test_2021F.ipynb
|
philippwulff/robotics_calc
|
8365ed3931206ca3788086e261d800ebe21ef86b
|
[
"MIT"
] | null | null | null |
examples/test_2021F.ipynb
|
philippwulff/robotics_calc
|
8365ed3931206ca3788086e261d800ebe21ef86b
|
[
"MIT"
] | null | null | null | 34.737778 | 389 | 0.480553 | true | 1,653 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.849971 | 0.679179 | 0.577282 |
__label__kor_Hang
| 0.119974 | 0.17955 |
# simbMoments
*simbMoments* determines a system of equations corresponding to the first and second moments of the population observations. The process used to find each moment is quite similar to the one used to find the system of differential equations with *simbODE*. The equations are SymPy objects which one can manipulate to compute statistics of the whole population. The implemented function only goes up to the second moment. The first moment is equivalent to the mean behavior of the system output, while the second expresses the cross dependence of the species and is related to the variance of the system output.
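For reference, the equations assembled by the loops below are the standard first- and second-order moment equations of a reaction network with stoichiometric matrix $V$ and propensities $a_r(x)$ (written here without expectation brackets, exactly as the symbolic code builds them):
$$\frac{dx_i}{dt} = \sum_{r} V_{ir}\, a_r(x), \qquad \frac{d(x_i x_j)}{dt} = \sum_{r} \Big( V_{ir}\, a_r(x)\, x_j + V_{jr}\, a_r(x)\, x_i + V_{ir} V_{jr}\, a_r(x) \Big)$$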
```
#libraries required
import numpy as np
import sympy as sp
```
**Defines System Properties**
```
#molecular species
species = ['x1', 'x2']
species = sp.var(species)
#system input
inp = ['u']
uh = sp.var(inp)
ruh = 0 #reaction affected by input (0 = 1st reaction)
#reagent and product matrices
reactants = np.array([[0, 1, 1, 0],
[0, 0, 0, 1]])
products = np.array([[1, 0, 1, 0],
[0, 0, 1, 0]])
#kinetic parameters
pars = ['c1', 'c2', 'c3', 'c4']
parsValues = sp.var(pars)
#to replace kinetic parameters for numeric values, uncomment the next line
#and comment the two previous ones
#parsValues = [4.0, 0.010, 1.0, 0.006]
```
**Pre-processing of the System**. From the previously defined information we determine the stoichiometric matrix and the propensity vector. These arrays are used to compute
the moments of the system.
```
#stoichiometric matrix
V = products - reactants
#pre-propensity function
aPro = parsValues
aPro[ruh] = aPro[ruh]*uh[0] #Attaches system input
#system dimentions
Sn, Rm = V.shape
#propensity function vector
for r in range(0,Rm):
for s in range(0, Sn):
#determines propensity vector expressions
for a in range(0, reactants[s,r]):
aPro[r] *= species[s]
#end for a
#end for s
#end for r
print("Stoichimotric Matrix:\n", V)
print("Propensity Function Vector:", aPro)
```
Stoichimotric Matrix:
[[ 1 -1 0 0]
[ 0 0 1 -1]]
Propensity Function Vector: [c1*u, c2*x1, c3*x1, c4*x2]
**Computing the Moment Equations**. Each species has its own first-moment equation and second-order moment equations. The second moment is found for the species itself and crossed with the other species.
*First Moment*
```
#System of First Moment Equations
odeX = []
for s in range(0,Sn):
temp = 0
#Determines Defferential Equations
for r in range(0, Rm):
temp += V[s,r]*aPro[r]
#end for r
#Set of Differential Equations
odeX.append(temp)
#end for s
```
*Second Order Moment*
```
#System of Second Order Equations
ode2m = []
name2m = []
nameODE = species
odeTotal = odeX
for s1 in range(0, Sn):
for s2 in range(0, Sn):
#Determines second order expression
temp = 0
for r in range(0,Rm):
temp += (V[s1,r]*aPro[r]*species[s2] + V[s2,r]*aPro[r]*species[s1] \
+ V[s1,r]*V[s2,r]*aPro[r])
#end for r
#set of second order moment equations
if temp not in ode2m:
ode2m.append(temp)
odeTotal.append(temp)
#end if temp
#variable names of second order species
if (species[s1]*species[s2]) not in name2m:
name2m.append(species[s1]*species[s2])
nameODE.append(species[s1]*species[s2])
#end if species s1*s2
#end for s2
#end for s1
```
Some processing of the determined data to make later manipulation easier
```
#replaces variable names
dxName = []
dxODE = []
for exp in odeTotal:
    #in each expression, search for the variable names and replace them
    #with a nickname "dx#"
for j in range(0,len(nameODE)):
name = sp.var('dx' + str(len(nameODE)-j))
exp = exp.subs(nameODE[len(nameODE)-j-1],name)
#stores nicknames
if name not in dxName:
dxName.append(name)
#end if name
#end for j
dxODE.append(exp)
#end for exp
dxName.reverse()
```
```
#Shows sistem of differential equations determined
for k in range(0,len(odeTotal)):
print(f'd{nameODE[k]}/dt:', odeTotal[k])
# print(f'd({dxName[k]})/dt:', dxODE[k])
# print('\n')
#end for k
```
dx1/dt: c1*u - c2*x1
dx2/dt: c3*x1 - c4*x2
dx1**2/dt: 2*c1*u*x1 + c1*u - 2*c2*x1**2 + c2*x1
dx1*x2/dt: c1*u*x2 - c2*x1*x2 + c3*x1**2 - c4*x1*x2
dx2**2/dt: 2*c3*x1*x2 + c3*x1 - 2*c4*x2**2 + c4*x2
|
ff6773c85447b207b07a002415ca9f0e03139f6c
| 9,073 |
ipynb
|
Jupyter Notebook
|
Single_Units/simbMoments.ipynb
|
Jebrayam/systemsbiology
|
65041a2bf6c5e06842042a0bdf5f7528c778fe3f
|
[
"MIT"
] | null | null | null |
Single_Units/simbMoments.ipynb
|
Jebrayam/systemsbiology
|
65041a2bf6c5e06842042a0bdf5f7528c778fe3f
|
[
"MIT"
] | 1 |
2020-10-16T03:30:51.000Z
|
2020-10-16T03:33:01.000Z
|
Single_Units/simbMoments.ipynb
|
Jebrayam/systemsbiology
|
65041a2bf6c5e06842042a0bdf5f7528c778fe3f
|
[
"MIT"
] | null | null | null | 30.243333 | 637 | 0.458393 | true | 1,341 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.822189 | 0.763484 | 0.627728 |
__label__eng_Latn
| 0.967184 | 0.296753 |
```python
import numpy as np
import scipy.stats as si
import scipy
import sympy as sy
import matplotlib.pyplot as plt
import pandas as pd
# import sympy.statistics as systats
```
```python
def euro_opt(S, K, T, r, sigma, option = 'call'):
#S: spot price
#K: strike price
#T: time to maturity
#r: interest rate
#sigma: volatility of underlying asset
d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
d2 = (np.log(S / K) + (r - 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
if option == 'call':
result = (S * si.norm.cdf(d1, 0.0, 1.0) - K * np.exp(-r * T) * si.norm.cdf(d2, 0.0, 1.0))
if option == 'put':
result = (K * np.exp(-r * T) * si.norm.cdf(-d2, 0.0, 1.0) - S * si.norm.cdf(-d1, 0.0, 1.0))
return result
```
```python
euro_opt(27,27.25,6,0.065,0.0000000000001)
```
8.550200169925013
```python
def vega(S, K, T, r, sigma):
#S: spot price
#K: strike price
#T: time to maturity
#r: interest rate
#sigma: volatility of underlying asset
d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    #note: the textbook Black-Scholes vega is S * si.norm.pdf(d1) * sqrt(T) (normal pdf, not cdf)
    vega = S * si.norm.cdf(d1, 0.0, 1.0) * np.sqrt(T)
return vega
```
```python
vega(27,27.25,6,0.065,.001)
```
66.1362230551458
```python
black_scholes(-1,27,48.5,0.016,5,.65,0)
```
23.176669536768507
```python
#-------------------------------------------------------------------------------
# Name: Scenario Analysis Black Scholes
# Purpose:
#
# Author: Jamie
#
# Created: 20/06/2012
# Copyright: (c) Jamie 2012
# Licence: <your licence>
#-------------------------------------------------------------------------------
from scipy import stats
import math
def black_scholes (cp, s, k, t, rf, v, div=0.0):
""" Price an option using the Black-Scholes model.
s: initial stock price
k: strike price
t: expiration time
v: volatility
rf: risk-free rate
    cp: +1/-1 for call/put
    div: continuous dividend yield (default 0)
    """
d1 = (math.log(s/k)+(rf+0.5*math.pow(v,2))*t)/(v*math.sqrt(t))
d2 = d1 - v*math.sqrt(t)
optprice = (cp*s*math.exp(-div*t)*stats.norm.cdf(cp*d1)) - (cp*k*math.exp(-rf*t)*stats.norm.cdf(cp*d2))
return optprice
```
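For reference, the pricing formula implemented by `black_scholes` above (with `cp = +1/-1` selecting call/put, and `div` a continuous dividend yield) is
$$ d_1 = \frac{\ln(s/k) + (r_f + \tfrac{1}{2}v^2)\,t}{v\sqrt{t}}, \qquad d_2 = d_1 - v\sqrt{t}, $$
$$ \text{price} = cp\left[\, s\, e^{-\mathrm{div}\, t}\, N(cp\, d_1) - k\, e^{-r_f t}\, N(cp\, d_2) \right], $$
where $N(\cdot)$ is the standard normal CDF.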
```python
def bissecao(f,a,b, p, tol=1e-8):
# YOUR CODE HERE
# p = [cp, s, k, t, v, rf, div, price]
count = 0
while b-a >= tol:
count += 1
z = (a+b)/2
if f(p[0],p[1],p[2],p[3],z,p[5],p[6],p[7]) <= tol:
return z, count
if f(p[0],p[1],p[2],p[3],a,p[5],p[6],p[7])*f(p[0],p[1],p[2],p[3],z,p[5],p[6],p[7]) < 0:
b = z
else:
a = z
return z, count
```
```python
def h(cp, s, k, t, v, rf, div, price):
# print(black_scholes(cp, s, k, t, v, rf, div) - price)
return abs(black_scholes(cp, s, k, t, v, rf, div) - price)
```
```python
bissecao(h,0.1,5,[-1,27,48.5,0.016,0.1,0.065,0,21.34])
```
0.26928565347123623
0.1095862197097155
0.26928565347123623
1.0132549214567277
0.26928565347123623
1.0132549214567277
1.581320063767489
1.0132549214567277
1.581320063767489
1.9006520304136956
1.581320063767489
1.9006520304136956
2.0676267546886606
1.9006520304136956
2.0676267546886606
2.1527723919917783
2.0676267546886606
2.1527723919917783
2.195739933640265
2.1527723919917783
2.195739933640265
2.2173199815349633
2.195739933640265
2.2173199815349633
2.2281337790224143
2.2173199815349633
2.2281337790224143
2.233546584478237
2.2281337790224143
2.233546584478237
2.236254459318289
2.233546584478237
2.236254459318289
2.2376087641967324
2.236254459318289
2.2376087641967324
2.2382860084294514
2.2376087641967324
2.2382860084294514
2.2386246534852816
2.2382860084294514
2.2386246534852816
2.2387939817469658
2.2386246534852816
2.2387939817469658
2.2388786473111004
2.2387939817469658
2.2388786473111004
2.2389209804514856
2.2388786473111004
2.2389209804514856
2.2389421471112456
2.2389209804514856
2.2389421471112456
2.2389527304635273
2.2389421471112456
2.2389527304635273
2.238958022145262
2.2389527304635273
2.238958022145262
2.238960667987527
2.238958022145262
2.238960667987527
2.238961990909008
2.238960667987527
2.238961990909008
2.238962652369832
2.238961990909008
2.238962652369832
2.2389629831002793
2.238962652369832
2.2389629831002793
2.2389631484654977
2.2389629831002793
2.2389631484654977
2.2389632311481087
2.2389631484654977
2.2389632311481087
2.238963272489425
2.2389632311481087
2.238963272489425
2.238963293160076
2.238963272489425
2.238963293160076
2.238963303495403
2.238963293160076
2.238963303495403
(4.999999990873039, 29)
```python
black_scholes(-1,27,32,0.0164,2.55,0.065,0)
```
6.801033221436356
```python
ts = np.linspace(0.1,5,200)
plt.plot(ts,[h(-1,27,32,0.0164,0.7,0.065,0,21.34)]*200)
```
```python
black_scholes(-1,27,32,0.0164,0.065,0.715)
```
5.000239133333103
```python
cp = -1
s = 27
k = 32
t = 0.016
rf = 0.065
price = 5
def h(vol):
# print(black_scholes(cp, s, k, t, vol, rf, div) - price)
return black_scholes(cp, s, k, t, rf, vol) - price
scipy.optimize.bisect(h,1e-6,5,xtol=1e-16)
```
0.7199792850227894
```python
```
```python
cp = 1
s = 27
k = 26.25
t = 0.016
rf = 0.065
price = 1.04
def h(vol):
return black_scholes(cp, s, k, t, rf, vol) - price
scipy.optimize.bisect(h,1e-6,5,xtol=1e-16)
```
0.4237032042981779
```python
black_scholes(1, 27, 26.25, 0.016, 0.065, 0.42370)
```
1.0399962926697413
```python
from scipy import stats
import math
def black_scholes (cp, s, k, t, rf, v, div=0.0):
""" Price an option using the Black-Scholes model.
s: initial stock price
k: strike price
t: expiration time
v: volatility
rf: risk-free rate
    cp: +1/-1 for call/put
    div: continuous dividend yield (default 0)
    """
d1 = (math.log(s/k)+(rf+0.5*math.pow(v,2))*t)/(v*math.sqrt(t))
d2 = d1 - v*math.sqrt(t)
optprice = (cp*s*math.exp(-div*t)*stats.norm.cdf(cp*d1)) - (cp*k*math.exp(-rf*t)*stats.norm.cdf(cp*d2))
return optprice
def volat_impl(cp, s, k, t, rf, price):
def h(vol):
return black_scholes(cp, s, k, t, rf, vol) - price
return scipy.optimize.bisect(h,1e-6,5,xtol=1e-16)
```
```python
```
```python
black_scholes(1,27,28,0.016,0.065,0.3914)
```
0.18993518619015592
```python
data = pd.read_csv('BRFOODS.csv', sep=';')
# data
```
```python
'''
Setting CONSTANTS
'''
sigla_acao = 'BRFS3'
empresa = 'BRFoods S.A.'
preco_acao = 27.00
dias = [6,28]
puts = data['Tipo'] == 'PUT'
calls = data['Tipo'] == 'CALL'
dia1706 = data['TF'] == '17-06-2019'
dia1507 = data['TF'] == '15-07-2019'
```
```python
## PUT with 6 days to expiry
## Fetch the information from the DataFrame
df_k = data[puts & dia1706].iloc[0:,2:3]
df_s = data[puts & dia1706].iloc[0:,3:4]
ks = df_k.values.flatten()
Ss = df_s.values.flatten()
## Build the array of volatilities to be plotted
vs = []
for (k,s) in zip(ks,Ss):
vs.append(volat_impl(-1,preco_acao,k,dias[0]/365,0.065,s))
## Plot the chart
plt.figure(figsize=(10,6))
plt.plot(ks,vs, marker='o')
plt.xlabel('Strikes')
plt.ylabel('Volatility')
plt.title('Smile chart - Volatility x Strike - PUT with 6 days to expiry')
plt.grid()
plt.show()
```
```python
## CALL with 6 days to expiry
## Fetch the information from the DataFrame
df_k = data[calls & dia1706].iloc[0:,2:3]
df_s = data[calls & dia1706].iloc[0:,3:4]
ks = df_k.values.flatten()
Ss = df_s.values.flatten()
## Build the array of volatilities to be plotted
vs = []
for (k,s) in zip(ks,Ss):
vs.append(volat_impl(1,preco_acao,k,dias[0]/365,0.065,s))
## Plot the chart
plt.figure(figsize=(10,6))
plt.plot(ks,vs, marker='o')
plt.xlabel('Strikes')
plt.ylabel('Volatility')
plt.title('Smile chart - Volatility x Strike - CALL with 6 days to expiry')
plt.grid()
plt.show()
```
```python
## PUT with 28 days to expiry
## Fetch the information from the DataFrame
df_k = data[puts & dia1507].iloc[0:,2:3]
df_s = data[puts & dia1507].iloc[0:,3:4]
ks = df_k.values.flatten()
Ss = df_s.values.flatten()
## Build the array of volatilities to be plotted
vs = []
for (k,s) in zip(ks,Ss):
vs.append(volat_impl(-1,preco_acao,k,dias[1]/365,0.065,s))
## Plot the chart
plt.figure(figsize=(10,6))
plt.plot(ks,vs, marker='o')
plt.xlabel('Strikes')
plt.ylabel('Volatility')
plt.title('Smile chart - Volatility x Strike - PUT with 28 days to expiry')
plt.grid()
plt.show()
```
```python
## CALL with 28 days to expiry
## Fetch the information from the DataFrame
df_k = data[calls & dia1507].iloc[0:,2:3]
df_s = data[calls & dia1507].iloc[0:,3:4]
ks = df_k.values.flatten()
Ss = df_s.values.flatten()
## Build the array of volatilities to be plotted
vs = []
for (k,s) in zip(ks,Ss):
vs.append(volat_impl(1,preco_acao,k,dias[1]/365,0.065,s))
## Plot the chart
plt.figure(figsize=(10,6))
plt.plot(ks,vs, marker='o')
plt.xlabel('Strikes')
plt.ylabel('Volatility')
plt.title('Smile chart - Volatility x Strike - CALL with 28 days to expiry')
plt.grid()
plt.show()
```
|
68e0d0a6a421c7230d4777e9ee66fc9ca6f595ae
| 156,203 |
ipynb
|
Jupyter Notebook
|
mod-mat-financas-I-2019-1/Project_3/proj3.ipynb
|
mirandagil/university-courses
|
e70ce5262555e84cffb13e53e139e7eec21e8907
|
[
"MIT"
] | 1 |
2019-12-23T16:39:01.000Z
|
2019-12-23T16:39:01.000Z
|
mod-mat-financas-I-2019-1/Project_3/proj3.ipynb
|
mirandagil/university-courses
|
e70ce5262555e84cffb13e53e139e7eec21e8907
|
[
"MIT"
] | null | null | null |
mod-mat-financas-I-2019-1/Project_3/proj3.ipynb
|
mirandagil/university-courses
|
e70ce5262555e84cffb13e53e139e7eec21e8907
|
[
"MIT"
] | null | null | null | 197.975919 | 44,404 | 0.908805 | true | 3,680 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.901921 | 0.798187 | 0.719901 |
__label__yue_Hant
| 0.139701 | 0.510903 |
# The Material Derivative
## Learning outcomes
* Understand the chain rule and the material derivative
The material derivative (or the substantive derivative) is an important concept in the analysis of fluid flow so it is worth taking some time to understand it.
Consider a time invariant flow in a nozzle. The continuity equation tells us that for a given mass flow rate the velocity must increase as the area is reduced. However for an observer or a probe situated at a fixed point in the nozzle the velocity appears constant. The probe would have to move up or downstream to observe a change in the velocity — the streamwise velocity gradient.
Similarly for a time-invariant flow such as that in a bending tube the point probe measurement will not detect the curvature of the velocity field and so the change in velocity direction, or the curvature of a pathline, is only apparent when moving the probe location and comparing differences in the observations.
The material derivative is therefore a way to express this change in a quantity such a the velocity at a point in the flow. It maps the **Lagrangian trajectory** of a particle onto an **Eulerian point** in the flow.
From another perspective consider a particle of fluid with some property such as temperature $T$ traveling along its pathline at a uniform velocity. The flow field will transport (or advect) the particle, but its temperature isn't changing — the Lagrangian derivative with respect to time is zero. However the Eulerian derivative will increase or decrease as the particle is advected across our fixed measurement location in particular as you increase or decrease the uniform velocity of the flow.
Let's look at this in more detail.
If we consider again a fluid particle moving along its pathline, i.e. the location of that particle relative to our coordinate system as a function of time, that particle undergoes acceleration as it changes direction or velocity. Acceleration being the time rate of change of velocity.
For particle $p$ we can write an expression for its velocity as:
\begin{equation}
\vec{{}V_p}(r_p,t) = \vec{{}V_p}[x_p(t), y_p(t), z_p(t),t]
\label{eq1}
\tag{1}
\end{equation}
where $x_p = x_p(t)$, $y_p = y_p(t)$ and $z_p = z_p(t)$ define the Cartesian location of the particle at time $t$. As stated, a change in velocity may be due to a change in the speed of the particle or a change in the direction in which it is traveling. To take the derivative of equation \ref{eq1} to obtain the acceleration of our particle we need to use the chain rule.
## The Chain Rule
The chain rule is a formula to compute the derivative of compositions of functions, or functions within functions. In this case we have the derivative with respect to time and the derivative with respect to position which itself is dependent on time. That's why the variables in the square brackets in equation 1 all contain a $t$.
To simplify, lets consider a composite function where some variable $z$ depends on $y$ which itself depends on $x$. We can expand the derivative of z with respect to y as follows:
\begin{equation}
\frac{dz}{dx} = \frac{dz}{dy} \cdot \frac{dy}{dx}
\tag{2}
\end{equation}
For partial derivatives (where we use $\partial$) the chain rule can be understood as follows.
If some function $u = u(x,y)$ depends on $x$ and $y$, and the independent variables $x$ and $y$ are each a function of $t$ such that $x = x(t)$ and $y = y(t)$, the derivative $du/dt$ can be obtained:
\begin{equation}
\delta{u} = \frac{\partial{u}}{\partial{x}} \delta{x} +
\frac{\partial{u}}{\partial{y}} \delta{y}
\tag{3}
\end{equation}
As $\delta{x}\rightarrow0$, $\delta{y}\rightarrow0$, and $\delta{t}\rightarrow0$ yielding:
\begin{equation}
\frac{d{u}}{d{t}} = \frac{\partial{u}}{\partial{x}} \frac{d{x}}{d{t}} +
\frac{\partial{u}}{\partial{y}} \frac{d{y}}{d{t}}
\tag{4}
\end{equation}
Here we have a mix of $\partial$ and $d$. This is because $x$ and $y$ are functions of only one variable $t$, while $u$ is a function of $x$ and $y$.
You can learn more about the chain rule here:
https://www.khanacademy.org/math/ap-calculus-ab/ab-differentiation-2-new/ab-3-1a/v/chain-rule-introduction
and here:
https://www.youtube.com/watch?v=YG15m2VwSjA
So to differentiate equation 1:
\begin{equation}
\vec{{}a_p}(t) = \frac{d\vec{{V_p}}}{dt} =
\frac{\partial\vec{{V_p}}}{\partial{t}} +
\frac{\partial\vec{{V_p}}}{\partial{x}}\frac{dx_p}{dt} +
\frac{\partial\vec{{V_p}}}{\partial{y}}\frac{dy_p}{dt} +
\frac{\partial\vec{{V_p}}}{\partial{z}}\frac{dz_p}{dt}
\label{eq5}
\tag{5}
\end{equation}
Since the velocity components are $u_p = dx_p/dt$, $v_p = dy_p/dt$ and $w_p = dz_p/dt$ we obtain:
\begin{equation}
\vec{{}a_p} = \frac{\partial\vec{{V_p}}}{\partial{t}} +
u_p \frac{\partial\vec{{V_p}}}{\partial{x}} +
v_p \frac{\partial\vec{{V_p}}}{\partial{y}} +
w_p \frac{\partial\vec{{V_p}}}{\partial{z}}
\label{eq6}
\tag{6}
\end{equation}
or generally for any particle:
\begin{equation}
\vec{{}a} = \frac{\partial\vec{{V}}}{\partial{t}} +
u \frac{\partial\vec{{V}}}{\partial{x}} +
v \frac{\partial\vec{{V}}}{\partial{y}} +
w \frac{\partial\vec{{V}}}{\partial{z}}
\label{eq7}
\tag{7}
\end{equation}
The scalar components of the vector $\vec{{}a}$ can be decomposed as:
\begin{align}
a_x &= \frac{\partial{u}}{\partial{t}} +
u \frac{\partial{u}}{\partial{x}} +
v \frac{\partial{u}}{\partial{y}} +
w \frac{\partial{u}}{\partial{z}}
\tag{8}
\end{align}
\begin{align}
a_y &= \frac{\partial{v}}{\partial{t}} +
u \frac{\partial{v}}{\partial{x}} +
v \frac{\partial{v}}{\partial{y}} +
w \frac{\partial{v}}{\partial{z}}
\tag{9}
\end{align}
\begin{align}
a_z &= \frac{\partial{w}}{\partial{t}} +
u \frac{\partial{w}}{\partial{x}} +
v \frac{\partial{w}}{\partial{y}} +
w \frac{\partial{w}}{\partial{z}}
\tag{10}
\end{align}
We write equation \ref{eq7} in shorthand notation as:
\begin{equation}
\vec{{}a} = \frac{D \vec{{}V} }{Dt} \equiv
\frac{\partial{\vec{{}V}}}{\partial{t}} + \vec{{}V} \cdot \nabla{\vec{{}V}}
\label{eq8}
\tag{11}
\end{equation}
The symbol $\nabla$, 'nabla', is used as shorthand in vector calculus to represent the **gradient** operator.
The Material Derivative of any variable (scalar or vector) is the rate at which that variable changes in time for a given particle in a Lagrangian frame of reference, i.e. along its pathline. We can use it to describe the temperature or momentum (or whatever we are interested in) of a particle, not just acceleration. To reiterate, it acts as a link between the Lagrangian and the Eulerian descriptions as it tells us how a property of a particle changes as it moves across the Eulerian grid.
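As a small illustration (not part of the original notes), the material derivative of a made-up temperature field can be computed symbolically; the field and the velocity components below are arbitrary choices used only to show the operation in equation 11:
```python
import sympy as sp

x, y, t = sp.symbols('x y t')

# Made-up scalar field (e.g. temperature) and made-up 2D velocity components
T = sp.sin(x) * sp.exp(-t) + y**2
u = y    # assumed x-velocity, for illustration only
v = -x   # assumed y-velocity, for illustration only

# Material derivative: DT/Dt = dT/dt + u*dT/dx + v*dT/dy
DT_Dt = sp.diff(T, t) + u * sp.diff(T, x) + v * sp.diff(T, y)
print(sp.simplify(DT_Dt))
```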
# EECS 445: Machine Learning
## Hands On 10: Bias Variance Tradeoff
Consider a sequence of IID random variables:
$$
X_i =
\begin{cases}
100 & \text{ with prob. } 0.02 \\
0 & \text{ with prob. } 0.97 \\
-100 & \text{ with prob. } 0.01 \\
\end{cases}
$$
The true mean of $X_i$ is
$$
0.02 \times 100 + 0.97 \times 0 + 0.01 \times -100 = 1
$$
We want to estimate the true mean of this distribution. We will consider two different estimators of the true mean.
Let's say you take three samples $X_1, X_2, X_3$, and you compute the **empirical mean** $Z=\frac{X_1 + X_2 + X_3}{3}$ and **empirical median** $Y$ of these three samples (recall that the median is obtained by sorting $X_1, X_2, X_3$ and then choosing the middle (2nd) entry).
### What is the bias-variance tradeoff of the $Y$ and $Z$ for estimating the true mean of the above distribution?
* They are both unbiased estimators of the true mean, and have the same variance.
* The median has higher bias and higher variance.
* The mean has higher bias and higher variance.
* They both have no bias, but the mean has lower variance.
* The mean has no bias but some variance, and the median has non-zero bias but less variance
### Solution
> The last answer is correct.
> The empirical mean of a sample of random $n$ IID random variables is always an unbiased estimate of the true mean. However, the empirical mean estimator can have high variance. Here it is $ \text{Var}(Z) = \frac{\text{Var}(X_i)}{3} = \frac{(100-1)^2 \times 0.02 + (-100 - 1)^2 \times 0.01 + (0-1)^2 \times 0.97}{3} = 99 \frac 2 3.$
>The median, on the other hand, is a biased estimator. It is a little bit hard to calculate exactly, but here goes:
$$
median = \begin{cases} 100 & w.p. 0.02^3 + \binom{3}{1} 0.02^2 \times 0.98 \\
-100 & w.p. 0.01^3 + \binom{3}{1} 0.01^2 \times 0.99
\end{cases}
$$
If you work this out, you see that the median on average is $0.089$. This means that the $\text{bias}^2 \approx (1-0.089)^2$ which is no more than 1. Using a similar argument, you can check that the variance of the median is no more than 20. This can be checked experimentally!
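To back up that last remark, here is a small Monte Carlo sketch (not part of the original hands-on; the number of trials is arbitrary) that estimates the squared bias and variance of both estimators by repeated sampling:
```python
import numpy as np

rng = np.random.default_rng(0)
values = np.array([100, 0, -100])
probs = np.array([0.02, 0.97, 0.01])
true_mean = values @ probs  # equals 1

trials = 200_000
samples = rng.choice(values, size=(trials, 3), p=probs)
means = samples.mean(axis=1)
medians = np.median(samples, axis=1)

for name, est in [("mean", means), ("median", medians)]:
    print(f"{name}: bias^2 = {(est.mean() - true_mean) ** 2:.3f}, "
          f"variance = {est.var():.3f}")
```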
## Derivation of the Bias-Variance Tradeoff equation
Assume that we have noisy data, modeled by $f = y + \epsilon$, where $y$ is the true value and $\epsilon \in \mathcal{N}(0,\sigma)$ is noise with variance $\sigma$, independent of our estimator. Given an estimator $\hat{f}$, the expected squared error can be derived as follows:
$$
\begin{align}
\mathbb{E}\left[\left(\hat{f} - f\right)^2\right] &= \mathbb{E}\left[\hat{f}^2 - 2f\hat{f} + f^2\right]\\
&= \mathbb{E}\left[\hat{f}^2\right] + \mathbb{E}\left[f^2\right] - 2\mathbb{E}\left[f\hat{f}\right] \text{ by linearity of expectation} \\
\end{align}
$$
Now, by definition, $Var(x) = \mathbb{E}\left[x^2\right] - \left(\mathbb{E}\left[x\right]\right)^2$. Substituting this definition into the equation above, and using the independence of $\epsilon$ and $\hat{f}$ (so that $\mathbb{E}[f\hat{f}] = \mathbb{E}[f]\,\mathbb{E}[\hat{f}] = y\,\mathbb{E}[\hat{f}]$), we get:
$$
\begin{align}
\mathbb{E}\left[\hat{f}^2\right] + \mathbb{E}\left[f^2\right] - 2\mathbb{E}\left[f\hat{f}\right] &= Var(\hat{f}) + \left(\mathbb{E}[\hat{f}]\right)^2 + Var(f) + \left(\mathbb{E}[f]\right)^2 - 2y\,\mathbb{E}[\hat{f}] \\
&= Var(\hat{f}) + Var(f) + \left(\mathbb{E}[\hat{f}] - y\right)^2\\
&= \boxed{\sigma + Var(\hat{f}) + \left(\mathbb{E}[\hat{f}] - y\right)^2}
\end{align}
$$
The first term $\sigma$ is the irreducible error due to the noise in the data (from the distribution of $\epsilon$). The second term is the **variance** of the estimator $\hat{f}$ and the final term is the squared **bias** of the estimator. There is an inherent tradeoff between the bias and variance of an estimator. Generally, more complex estimators (think of high-degree polynomials as an example) will have a low bias, since they fit the sampled data really well. However, this accuracy will not be maintained if we resample the data, which implies that the variance of this estimator is high.
## Activity 1: Bias Variance Tradeoff
We will now try to see the inherent tradeoff between bias and variance of estimators through linear regression. Consider the following dataset.
```python
import numpy as np
import matplotlib.pyplot as plt
from numpy.matlib import repmat
from sklearn.preprocessing import PolynomialFeatures
degrees = [1,2,3,4,5]
#define data
n = 20
sub = 1000
mean = 0
std = 0.25
#define test set
Xtest = np.random.random((n,1))*2*np.pi
ytest = np.sin(Xtest) + np.random.normal(mean,std,(n,1))
#pre-allocate variables
preds = np.zeros((n,sub))
bias = np.zeros(len(degrees))
variance = np.zeros(len(degrees))
mse = np.zeros(len(degrees))
values = np.expand_dims(np.linspace(0,2*np.pi,100),1)
```
Let's try several polynomial fits to the data:
```python
for j,degree in enumerate(degrees):
    for i in range(sub):
        #create data - sample from sine wave
        x = np.random.random((n,1))*2*np.pi
        y = np.sin(x) + np.random.normal(mean,std,(n,1))

        #create features corresponding to degree - ex: 1, x, x^2, x^3...
        #(one possible solution, using sklearn's PolynomialFeatures)
        poly = PolynomialFeatures(degree=degree)
        A = poly.fit_transform(x)

        #fit model using least squares solution (linear regression)
        #later include ridge regression/normalization
        coeffs = np.linalg.lstsq(A, y, rcond=None)[0]

        #store predictions for each sampling
        preds[:,i] = poly.fit_transform(Xtest).dot(coeffs)[:,0]

        #plot 9 images
        if i < 9:
            plt.subplot(3,3,i+1)
            plt.plot(values,poly.fit_transform(values).dot(coeffs),x,y,'.b')
            plt.axis([0,2*np.pi,-2,2])
    plt.suptitle('PolyFit = %i' % (degree))
    plt.show()

    #Calculate mean (squared) bias, variance, and MSE (one possible solution)
    bias[j] = np.mean((preds.mean(axis=1) - np.sin(Xtest)[:,0])**2)
    variance[j] = np.mean(np.var(preds, axis=1))
    mse[j] = np.mean((preds - repmat(ytest, 1, sub))**2)
```
Let's plot the data with the estimators!
```python
plt.subplot(3,1,1)
plt.plot(degrees,bias)
plt.title('bias')
plt.subplot(3,1,2)
plt.plot(degrees,variance)
plt.title('variance')
plt.subplot(3,1,3)
plt.plot(degrees,mse)
plt.title('MSE')
plt.show()
```
# Matrix Factorization for Recommendations in Python <a class="anchor" id="mfrp"></a>
In this post, I'll detail a basic version of low-rank matrix factorization for recommendations and employ it on a dataset of 1 million movie ratings (from 1 to 5) available from the [MovieLens](http://grouplens.org/datasets/movielens/) project. The MovieLens datasets were collected by GroupLens Research at the University of Minnesota.
[Previously](https://beckernick.github.io/music_recommender/), I used item-based collaborative filtering to make music recommendations from raw artist listen-count data. I had a relatively small amount of data, and ended up making some pretty good recommendations. Collaborative filtering methods that compute distance relationships between items or users are generally thought of as "neighborhood" methods, since they center on the idea of "nearness". Unfortunately, there are two issues with taking this approach:
1. It doesn't scale particularly well to massive datasets
2. There's a theoretical concern with raw data based approaches.
I talked about the scaling issue in the previous post, but not the conceptual issue. The key concern is that ratings matrices may be overfit and noisy representations of user tastes and preferences. When we use distance based "neighborhood" approaches on raw data, we match to sparse low-level details that we assume represent the user's preference vector instead of the vector itself. It's a subtle difference, but it's important.
If I've listened to ten Red Hot Chili Peppers songs and you've listened to ten different Red Hot Chili Peppers songs, the raw user action matrix wouldn't have any overlap. We'd have nothing in common, even though it seems pretty likely we share at least some underlying preferences.
If it sounds like using song features (such as genre) could help, you're right. But, to steal Joseph Konstan's (professor at Minnesota involved with GroupLens Research who has an awesome [Coursera course](https://www.coursera.org/specializations/recommender-systems) on Recommender Systems) example, what if we both like songs with great storytelling, regardless of the genre. So, how do we resolve this? I would need a method that can derive the tastes and preference vectors from the raw data.
Low-Rank Matrix Factorization is that kind of method.
## Matrix Factorization via Singular Value Decomposition <a class="anchor" id="subhead1"></a>
Matrix factorization is the breaking down of one matrix in a product of multiple matrices. It's extremely well studied in mathematics, and it's highly useful. There are many different ways to factor matrices, but singular value decomposition is particularly useful for making recommendations.
So what is singular value decomposition (SVD)? At a high level, SVD is an algorithm that decomposes a matrix $R$ into the best lower rank (i.e. smaller/simpler) approximation of the original matrix $R$. Mathematically, it decomposes $R$ into two unitary matrices and a diagonal matrix:
$$\begin{equation}
R = U\Sigma V^{T}
\end{equation}$$
where $R$ is the users' ratings matrix, $U$ is the user "features" matrix, $\Sigma$ is the diagonal matrix of singular values (essentially weights), and $V^{T}$ is the movie "features" matrix. $U$ and $V^{T}$ are orthogonal, and represent different things. $U$ represents how much users "like" each feature and $V^{T}$ represents how relevant each feature is to each movie.
To get the lower rank approximation, we take these matrices and keep only the top $k$ features, which we think of as the underlying tastes and preferences vectors.
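To make the truncation concrete before working with the full ratings matrix, here is a tiny self-contained sketch (the toy matrix is invented for illustration) that keeps only the top $k$ singular values of a small matrix:
```python
import numpy as np

# A toy "ratings" matrix, invented purely for illustration
R = np.array([[5., 4., 0., 1.],
              [4., 5., 1., 0.],
              [0., 1., 5., 4.],
              [1., 0., 4., 5.]])

U, s, Vt = np.linalg.svd(R, full_matrices=False)

k = 2
R_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]  # best rank-2 approximation of R
print(np.round(R_k, 2))
```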
## Setting Up the Ratings Data
Okay, enough with the math. Let's get to the code.
```python
import pandas as pd
import numpy as np
ratings_list = [i.strip().split("::") for i in open('/users/nickbecker/Downloads/ml-1m/ratings.dat', 'r').readlines()]
users_list = [i.strip().split("::") for i in open('/users/nickbecker/Downloads/ml-1m/users.dat', 'r').readlines()]
movies_list = [i.strip().split("::") for i in open('/users/nickbecker/Downloads/ml-1m/movies.dat', 'r').readlines()]
```
```python
ratings = np.array(ratings_list)
users = np.array(users_list)
movies = np.array(movies_list)
```
```python
ratings_df = pd.DataFrame(ratings_list, columns = ['UserID', 'MovieID', 'Rating', 'Timestamp'], dtype = int)
movies_df = pd.DataFrame(movies_list, columns = ['MovieID', 'Title', 'Genres'])
movies_df['MovieID'] = movies_df['MovieID'].apply(pd.to_numeric)
```
I'll also take a look at the movies and ratings dataframes.
```python
movies_df.head()
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>MovieID</th>
<th>Title</th>
<th>Genres</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>1</td>
<td>Toy Story (1995)</td>
<td>Animation|Children's|Comedy</td>
</tr>
<tr>
<th>1</th>
<td>2</td>
<td>Jumanji (1995)</td>
<td>Adventure|Children's|Fantasy</td>
</tr>
<tr>
<th>2</th>
<td>3</td>
<td>Grumpier Old Men (1995)</td>
<td>Comedy|Romance</td>
</tr>
<tr>
<th>3</th>
<td>4</td>
<td>Waiting to Exhale (1995)</td>
<td>Comedy|Drama</td>
</tr>
<tr>
<th>4</th>
<td>5</td>
<td>Father of the Bride Part II (1995)</td>
<td>Comedy</td>
</tr>
</tbody>
</table>
</div>
```python
ratings_df.head()
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>UserID</th>
<th>MovieID</th>
<th>Rating</th>
<th>Timestamp</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>1</td>
<td>1193</td>
<td>5</td>
<td>978300760</td>
</tr>
<tr>
<th>1</th>
<td>1</td>
<td>661</td>
<td>3</td>
<td>978302109</td>
</tr>
<tr>
<th>2</th>
<td>1</td>
<td>914</td>
<td>3</td>
<td>978301968</td>
</tr>
<tr>
<th>3</th>
<td>1</td>
<td>3408</td>
<td>4</td>
<td>978300275</td>
</tr>
<tr>
<th>4</th>
<td>1</td>
<td>2355</td>
<td>5</td>
<td>978824291</td>
</tr>
</tbody>
</table>
</div>
These look good, but I want the format of my ratings matrix to be one row per user and one column per movie. I'll `pivot` `ratings_df` to get that and call the new variable `R`.
```python
R_df = ratings_df.pivot(index = 'UserID', columns ='MovieID', values = 'Rating').fillna(0)
R_df.head()
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th>MovieID</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
<th>6</th>
<th>7</th>
<th>8</th>
<th>9</th>
<th>10</th>
<th>...</th>
<th>3943</th>
<th>3944</th>
<th>3945</th>
<th>3946</th>
<th>3947</th>
<th>3948</th>
<th>3949</th>
<th>3950</th>
<th>3951</th>
<th>3952</th>
</tr>
<tr>
<th>UserID</th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<th>1</th>
<td>5.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>...</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
</tr>
<tr>
<th>2</th>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>...</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
</tr>
<tr>
<th>3</th>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>...</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
</tr>
<tr>
<th>4</th>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>...</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
</tr>
<tr>
<th>5</th>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>2.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>...</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
</tr>
</tbody>
</table>
<p>5 rows × 3706 columns</p>
</div>
The last thing I need to do is de-mean the data (normalize by each user's mean) and convert it from a dataframe to a numpy array.
```python
R = R_df.to_numpy()  # .as_matrix() is deprecated in newer pandas
user_ratings_mean = np.mean(R, axis = 1)
R_demeaned = R - user_ratings_mean.reshape(-1, 1)
```
All set. With my ratings matrix properly formatted and normalized, I'm ready to do the singular value decomposition.
## Singular Value Decomposition
Scipy and Numpy both have functions to do the singular value decomposition. I'm going to use the Scipy function `svds` because it lets me choose how many latent factors I want to use to approximate the original ratings matrix (instead of having to truncate it after).
```python
from scipy.sparse.linalg import svds
U, sigma, Vt = svds(R_demeaned, k = 50)
```
Done. The function returns exactly what I detailed earlier in this post, except that the $\Sigma$ returned is just the values instead of a diagonal matrix. This is useful, but since I'm going to leverage matrix multiplication to get predictions I'll convert it to the diagonal matrix form.
```python
sigma = np.diag(sigma)
```
## Making Predictions from the Decomposed Matrices
I now have everything I need to make movie ratings predictions for every user. I can do it all at once by following the math and matrix multiply $U$, $\Sigma$, and $V^{T}$ back to get the rank $k=50$ approximation of $R$.
I also need to add the user means back to get the actual star ratings prediction.
```python
all_user_predicted_ratings = np.dot(np.dot(U, sigma), Vt) + user_ratings_mean.reshape(-1, 1)
```
If I wanted to put this kind of system into production, I'd want to create a training and validation set and optimize the number of latent features ($k$) by minimizing the Root Mean Square Error. Intuitively, the Root Mean Square Error will decrease on the training set as $k$ increases (because I'm approximating the original ratings matrix with a higher rank matrix).
However, for movies, between around 20 and 100 feature "preferences" vectors have been found to be optimal for generalizing to unseen data.
I could create a training and validation set and optimize $k$ by minimizing RMSE, but since I'm just going through proof of concept I'll leave that for another post. I just want to see some movie recommendations.
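For anyone who does want to tune $k$, a rough sketch of the idea follows; it is not part of the original analysis, it assumes the `R`, `R_demeaned` and `user_ratings_mean` variables defined above, and it uses a crude hold-out (zeroing the masked entries) purely for illustration:
```python
from scipy.sparse.linalg import svds
import numpy as np

rated = np.argwhere(R > 0)
rng = np.random.RandomState(0)
val_idx = rated[rng.choice(len(rated), size=len(rated) // 10, replace=False)]

R_train = R_demeaned.copy()
R_train[val_idx[:, 0], val_idx[:, 1]] = 0  # hide the validation ratings

for k in [10, 20, 50, 100]:
    U_k, s_k, Vt_k = svds(R_train, k=k)
    pred = U_k @ np.diag(s_k) @ Vt_k + user_ratings_mean.reshape(-1, 1)
    err = pred[val_idx[:, 0], val_idx[:, 1]] - R[val_idx[:, 0], val_idx[:, 1]]
    print(f"k={k:3d}  validation RMSE = {np.sqrt(np.mean(err ** 2)):.3f}")
```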
## Making Movie Recommendations
Finally, it's time. With the predictions matrix for every user, I can build a function to recommend movies for any user. All I need to do is return the movies with the highest predicted rating that the specified user hasn't already rated. Though I didn't actually use any explicit movie content features (such as genre or title), I'll merge in that information to get a more complete picture of the recommendations.
I'll also return the list of movies the user has already rated, for the sake of comparison.
```python
preds_df = pd.DataFrame(all_user_predicted_ratings, columns = R_df.columns)
preds_df.head()
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th>MovieID</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
<th>6</th>
<th>7</th>
<th>8</th>
<th>9</th>
<th>10</th>
<th>...</th>
<th>3943</th>
<th>3944</th>
<th>3945</th>
<th>3946</th>
<th>3947</th>
<th>3948</th>
<th>3949</th>
<th>3950</th>
<th>3951</th>
<th>3952</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>4.288861</td>
<td>0.143055</td>
<td>-0.195080</td>
<td>-0.018843</td>
<td>0.012232</td>
<td>-0.176604</td>
<td>-0.074120</td>
<td>0.141358</td>
<td>-0.059553</td>
<td>-0.195950</td>
<td>...</td>
<td>0.027807</td>
<td>0.001640</td>
<td>0.026395</td>
<td>-0.022024</td>
<td>-0.085415</td>
<td>0.403529</td>
<td>0.105579</td>
<td>0.031912</td>
<td>0.050450</td>
<td>0.088910</td>
</tr>
<tr>
<th>1</th>
<td>0.744716</td>
<td>0.169659</td>
<td>0.335418</td>
<td>0.000758</td>
<td>0.022475</td>
<td>1.353050</td>
<td>0.051426</td>
<td>0.071258</td>
<td>0.161601</td>
<td>1.567246</td>
<td>...</td>
<td>-0.056502</td>
<td>-0.013733</td>
<td>-0.010580</td>
<td>0.062576</td>
<td>-0.016248</td>
<td>0.155790</td>
<td>-0.418737</td>
<td>-0.101102</td>
<td>-0.054098</td>
<td>-0.140188</td>
</tr>
<tr>
<th>2</th>
<td>1.818824</td>
<td>0.456136</td>
<td>0.090978</td>
<td>-0.043037</td>
<td>-0.025694</td>
<td>-0.158617</td>
<td>-0.131778</td>
<td>0.098977</td>
<td>0.030551</td>
<td>0.735470</td>
<td>...</td>
<td>0.040481</td>
<td>-0.005301</td>
<td>0.012832</td>
<td>0.029349</td>
<td>0.020866</td>
<td>0.121532</td>
<td>0.076205</td>
<td>0.012345</td>
<td>0.015148</td>
<td>-0.109956</td>
</tr>
<tr>
<th>3</th>
<td>0.408057</td>
<td>-0.072960</td>
<td>0.039642</td>
<td>0.089363</td>
<td>0.041950</td>
<td>0.237753</td>
<td>-0.049426</td>
<td>0.009467</td>
<td>0.045469</td>
<td>-0.111370</td>
<td>...</td>
<td>0.008571</td>
<td>-0.005425</td>
<td>-0.008500</td>
<td>-0.003417</td>
<td>-0.083982</td>
<td>0.094512</td>
<td>0.057557</td>
<td>-0.026050</td>
<td>0.014841</td>
<td>-0.034224</td>
</tr>
<tr>
<th>4</th>
<td>1.574272</td>
<td>0.021239</td>
<td>-0.051300</td>
<td>0.246884</td>
<td>-0.032406</td>
<td>1.552281</td>
<td>-0.199630</td>
<td>-0.014920</td>
<td>-0.060498</td>
<td>0.450512</td>
<td>...</td>
<td>0.110151</td>
<td>0.046010</td>
<td>0.006934</td>
<td>-0.015940</td>
<td>-0.050080</td>
<td>-0.052539</td>
<td>0.507189</td>
<td>0.033830</td>
<td>0.125706</td>
<td>0.199244</td>
</tr>
</tbody>
</table>
<p>5 rows × 3706 columns</p>
</div>
```python
def recommend_movies(predictions_df, userID, movies_df, original_ratings_df, num_recommendations=5):
# Get and sort the user's predictions
user_row_number = userID - 1 # UserID starts at 1, not 0
sorted_user_predictions = preds_df.iloc[user_row_number].sort_values(ascending=False) # UserID starts at 1
# Get the user's data and merge in the movie information.
user_data = original_ratings_df[original_ratings_df.UserID == (userID)]
user_full = (user_data.merge(movies_df, how = 'left', left_on = 'MovieID', right_on = 'MovieID').
sort_values(['Rating'], ascending=False)
)
    print('User {0} has already rated {1} movies.'.format(userID, user_full.shape[0]))
    print('Recommending highest {0} predicted ratings movies not already rated.'.format(num_recommendations))
# Recommend the highest predicted rating movies that the user hasn't seen yet.
recommendations = (movies_df[~movies_df['MovieID'].isin(user_full['MovieID'])].
merge(pd.DataFrame(sorted_user_predictions).reset_index(), how = 'left',
left_on = 'MovieID',
right_on = 'MovieID').
rename(columns = {user_row_number: 'Predictions'}).
sort_values('Predictions', ascending = False).
iloc[:num_recommendations, :-1]
)
return user_full, recommendations
```
```python
already_rated, predictions = recommend_movies(preds_df, 837, movies_df, ratings_df, 10)
```
User 837 has already rated 69 movies.
Recommending highest 10 predicted ratings movies not already rated.
So, how'd I do?
```python
already_rated.head(10)
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>UserID</th>
<th>MovieID</th>
<th>Rating</th>
<th>Timestamp</th>
<th>Title</th>
<th>Genres</th>
</tr>
</thead>
<tbody>
<tr>
<th>36</th>
<td>837</td>
<td>858</td>
<td>5</td>
<td>975360036</td>
<td>Godfather, The (1972)</td>
<td>Action|Crime|Drama</td>
</tr>
<tr>
<th>35</th>
<td>837</td>
<td>1387</td>
<td>5</td>
<td>975360036</td>
<td>Jaws (1975)</td>
<td>Action|Horror</td>
</tr>
<tr>
<th>65</th>
<td>837</td>
<td>2028</td>
<td>5</td>
<td>975360089</td>
<td>Saving Private Ryan (1998)</td>
<td>Action|Drama|War</td>
</tr>
<tr>
<th>63</th>
<td>837</td>
<td>1221</td>
<td>5</td>
<td>975360036</td>
<td>Godfather: Part II, The (1974)</td>
<td>Action|Crime|Drama</td>
</tr>
<tr>
<th>11</th>
<td>837</td>
<td>913</td>
<td>5</td>
<td>975359921</td>
<td>Maltese Falcon, The (1941)</td>
<td>Film-Noir|Mystery</td>
</tr>
<tr>
<th>20</th>
<td>837</td>
<td>3417</td>
<td>5</td>
<td>975360893</td>
<td>Crimson Pirate, The (1952)</td>
<td>Adventure|Comedy|Sci-Fi</td>
</tr>
<tr>
<th>34</th>
<td>837</td>
<td>2186</td>
<td>4</td>
<td>975359955</td>
<td>Strangers on a Train (1951)</td>
<td>Film-Noir|Thriller</td>
</tr>
<tr>
<th>55</th>
<td>837</td>
<td>2791</td>
<td>4</td>
<td>975360893</td>
<td>Airplane! (1980)</td>
<td>Comedy</td>
</tr>
<tr>
<th>31</th>
<td>837</td>
<td>1188</td>
<td>4</td>
<td>975360920</td>
<td>Strictly Ballroom (1992)</td>
<td>Comedy|Romance</td>
</tr>
<tr>
<th>28</th>
<td>837</td>
<td>1304</td>
<td>4</td>
<td>975360058</td>
<td>Butch Cassidy and the Sundance Kid (1969)</td>
<td>Action|Comedy|Western</td>
</tr>
</tbody>
</table>
</div>
```python
predictions
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>MovieID</th>
<th>Title</th>
<th>Genres</th>
</tr>
</thead>
<tbody>
<tr>
<th>516</th>
<td>527</td>
<td>Schindler's List (1993)</td>
<td>Drama|War</td>
</tr>
<tr>
<th>1848</th>
<td>1953</td>
<td>French Connection, The (1971)</td>
<td>Action|Crime|Drama|Thriller</td>
</tr>
<tr>
<th>596</th>
<td>608</td>
<td>Fargo (1996)</td>
<td>Crime|Drama|Thriller</td>
</tr>
<tr>
<th>1235</th>
<td>1284</td>
<td>Big Sleep, The (1946)</td>
<td>Film-Noir|Mystery</td>
</tr>
<tr>
<th>2085</th>
<td>2194</td>
<td>Untouchables, The (1987)</td>
<td>Action|Crime|Drama</td>
</tr>
<tr>
<th>1188</th>
<td>1230</td>
<td>Annie Hall (1977)</td>
<td>Comedy|Romance</td>
</tr>
<tr>
<th>1198</th>
<td>1242</td>
<td>Glory (1989)</td>
<td>Action|Drama|War</td>
</tr>
<tr>
<th>897</th>
<td>922</td>
<td>Sunset Blvd. (a.k.a. Sunset Boulevard) (1950)</td>
<td>Film-Noir</td>
</tr>
<tr>
<th>1849</th>
<td>1954</td>
<td>Rocky (1976)</td>
<td>Action|Drama</td>
</tr>
<tr>
<th>581</th>
<td>593</td>
<td>Silence of the Lambs, The (1991)</td>
<td>Drama|Thriller</td>
</tr>
</tbody>
</table>
</div>
Pretty cool! These look like pretty good recommendations. It's also good to see that, though I didn't actually use the genre of the movie as a feature, the truncated matrix factorization features "picked up" on the underlying tastes and preferences of the user. I've recommended some film-noirs, crime, drama, and war movies - all of which were genres of some of this user's top rated movies.
## Conclusion
We've seen that we can make good recommendations with raw data based collaborative filtering methods (neighborhood models) and latent features from low-rank matrix factorization methods (factorization models).
Low-dimensional matrix recommenders try to capture the underlying features driving the raw data (which we understand as tastes and preferences). From a theoretical perspective, if we want to make recommendations based on people's tastes, this seems like the better approach. This technique also scales **significantly** better to larger datasets.
However, we still likely lose some meaningful signals by using a lower-rank matrix. And though these factorization based techniques work extremely well, there's research being done on new methods. These efforts have resulted in various types of probabilistic matrix factorization (which works and scales even better) and many other approaches.
One particularly cool and effective strategy is to combine factorization and neighborhood methods into one [framework](http://www.cs.rochester.edu/twiki/pub/Main/HarpSeminar/Factorization_Meets_the_Neighborhood-_a_Multifaceted_Collaborative_Filtering_Model.pdf). This research field is extremely active, and I highly recommend Joseph Konstan's Coursera course, [Introduction to Recommender Systems](https://www.coursera.org/specializations/recommender-systems), for anyone looking to get a high level overview of the field. The optional readings are influential papers in the field from the last 15-ish years, and they're really cool.
***
For those interested, the Jupyter Notebook with all the code can be found in the [Github repository](https://github.com/beckernick/matrix_factorization_recommenders) for this post.
# **Kalman Filter for a Second-Order ODE System**
## Филаткин Алексей (Alexey Filatkin)
Let us construct a Kalman filter for the system
\begin{cases}
\dot x(t) = \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix}x(t) + \begin{pmatrix} 1\\ 0\end{pmatrix}u(t) + \widetilde{w}(t)\\
z(t) = \begin{pmatrix} 1& 0\end{pmatrix}x(t) + v(t)
\end{cases}
where $u(t)$ is the external input, $w$ and $v$ are mutually uncorrelated white noises with covariance matrix $Q =\begin{pmatrix} 3 & 0 \\ 0 & 3 \end{pmatrix}$ and variance $R = 1$ respectively, and $z(t)$ is the measurement of $x_{1}$.
The white noise is modeled by a normal distribution with zero mean.
The external input also has a normal distribution.
Then the difference scheme obtained by applying the Euler method to discretize the derivatives has the form
\begin{cases}
x_{k} = Ax_{k-1} + Bu_{k} + w_{k}\\
z_{k} = Hx_{k} + v_{k}
\end{cases}
where $A = \begin{pmatrix} 1 & \tau\\ \tau & 1 \end{pmatrix}$, $B = \begin{pmatrix} \tau\\ 0 \end{pmatrix}$, $H = \begin{pmatrix} 1 & 0 \end{pmatrix}$, $w$ is white noise with covariance matrix $Q =\begin{pmatrix} 3\tau^2 & 0 \\ 0 & 3\tau^2 \end{pmatrix}$, and $\tau$ is the time step.
```python
import numpy as np
import matplotlib.pyplot as plt
# Initialization
tau = 0.01
T = 3 # Total simulation time
A = np.array([[1, tau], [tau, 1]])
B = np.array([tau, 0])
H = np.array([1, 0]).T
Q = np.array([[3 * tau ** 2, 0], [0, 3 * tau ** 2]])
R = 1
I = np.eye(2)
P = I
x = np.array([1, 1])
x_estimate = x
l = int(T / tau)
# Define the noises and the external input
u = np.random.normal(0, 1, l)
v = np.random.normal(0, 1, l)
w = tau*np.array([np.random.normal(0, 3**0.5, l), np.random.normal(0, 3**0.5, l)])
# Arrays where the data will be stored
x_plot1 = np.zeros((l, 2))
x_plot2 = np.zeros((l, 2))
z_plot = np.zeros(l)
z = x[0]
```
The Kalman filter is a classical two-step predictor-corrector algorithm that is executed at every control step.
1) Prediction of the state vector x from the model
\begin{equation}
\bar x_{k} = A x_{k-1} + B u_{k}
\end{equation}
The error covariance matrix is estimated as follows:
\begin{equation}
\bar P_{k} = A \hat P_{k-1} A^{T} + Q
\end{equation}
where $\hat P_{k-1}$ is the estimate of the error covariance matrix after the correction at the previous step.
2) Correction of the prediction using the measurements
The correction (gain) matrix:
\begin{equation}
\bar K_{k} = \bar P_{k} H^{T}(H \bar P_{k} H^{T} + R)^{-1}
\end{equation}
The state estimate:
\begin{equation}
\hat x_{k} = \bar x_{k} + K_{k}(z_{k} - H \bar x_{k})
\end{equation}
The error covariance matrix after the correction:
\begin{equation}
\hat P_{k} = (I - K_{k} H) \bar P_{k}
\end{equation}
The exact solution of the system without external input and without noise has the form
\begin{equation}
x = C_{1} e^t \begin{pmatrix} 1 \\ 1 \end{pmatrix} + C_{2} e^{-t} \begin{pmatrix} 1 \\ -1 \end{pmatrix}
\end{equation}
Therefore, when the external input is not too strong, the plots should look approximately exponential.
```python
for i in range(l):
    # Store the data
    x_plot1[i, :] = x_estimate
    x_plot2[i, :] = x
    z_plot[i] = z
    # New process step
    x = np.dot(A, x_estimate) + np.dot(B, u[i]) + w[:, i]
    # Measure the vector x
    z = np.dot(H, x) + v[i]
    # Predictor
    x_predict = np.dot(A, x_estimate) + np.dot(B, u[i])
    P_predict = np.dot(A, np.dot(P, A.T)) + Q
    # Corrector
    K = np.dot(P_predict, H.T) / (np.dot(H, np.dot(P_predict, H.T)) + R)
    x_estimate = x_predict + np.dot(K, z - np.dot(H, x_predict))
    P = np.dot(I - np.dot(K, H), P_predict)
# Plot the results
t = np.linspace(0, T, l)
fig = plt.figure(figsize=(25, 15))
ax = fig.add_subplot(2, 3, 1)
ax.plot(t, z_plot, label='Measured x1')
ax.plot(t, x_plot2[:, 0], label='Model value of x1')
ax.grid()
ax.legend()
ax.set_xlabel('Time, s')
ax.set_ylabel('x1')
ax = fig.add_subplot(2, 3, 2)
ax.plot(t, x_plot1[:, 0], label='Filter estimate of x1')
ax.plot(t, x_plot2[:, 0], label='Model value of x1')
ax.set_xlabel('Time, s')
ax.set_ylabel('x1')
ax.grid()
ax.legend()
ax = fig.add_subplot(2, 3, 3)
ax.plot(t, x_plot1[:, 1], label='Filter estimate of x2')
ax.plot(t, x_plot2[:, 1], label='Model value of x2')
ax.set_xlabel('Time, s')
ax.set_ylabel('x2')
ax.grid()
ax.legend()
ax = fig.add_subplot(2, 3, 4)
ax.plot(t, abs(x_plot1[:, 0] - x_plot2[:, 0]))
ax.set_xlabel('Time, s')
ax.set_ylabel('Correction error for x1')
ax.grid()
ax = fig.add_subplot(2, 3, 5)
ax.plot(t, abs(x_plot1[:, 1] - x_plot2[:, 1]))
ax.set_xlabel('Time, s')
ax.set_ylabel('Correction error for x2')
ax.grid()
plt.show()
```
Let us amplify the model noise to $\widetilde{Q} = 100 Q$ and, for variety, also impose a constant disturbance on the measurements.
```python
import numpy as np
import matplotlib.pyplot as plt
# Initialization
tau = 0.01
T = 5 # Total simulation time
A = np.array([[1, tau], [tau, 1]])
B = np.array([tau, 0])
H = np.array([1, 0]).T
Q = np.array([[3 * tau ** 2, 0], [0, 3 * tau ** 2]])
R = 1
I = np.eye(2)
P = I
x = np.array([1, 1])
x_estimate = x
l = int(T / tau)
# CHANGED THE NOISE AND INPUT DEFINITIONS !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
u = np.random.normal(0, 1, l)
v = -2*np.ones(l)
w = 100*tau*np.array([np.random.normal(0, 3 ** 0.5, l), np.random.normal(0, 3 ** 0.5, l)])
# Arrays where the data will be stored
x_plot1 = np.zeros((l, 2))
x_plot2 = np.zeros((l, 2))
z_plot = np.zeros(l)
z = x[0]
for i in range(l):
    # Store the data
    x_plot1[i, :] = x_estimate
    x_plot2[i, :] = x
    z_plot[i] = z
    # New process step
    x = np.dot(A, x_estimate) + np.dot(B, u[i]) + w[:, i]
    # Measure the vector x
    z = np.dot(H, x) + v[i]
    # Predictor
    x_predict = np.dot(A, x_estimate) + np.dot(B, u[i])
    P_predict = np.dot(A, np.dot(P, A.T)) + Q
    # Corrector
    K = np.dot(P_predict, H.T) / (np.dot(H, np.dot(P_predict, H.T)) + R)
    x_estimate = x_predict + np.dot(K, z - np.dot(H, x_predict))
    P = np.dot(I - np.dot(K, H), P_predict)
# Plot the results
t = np.linspace(0, T, l)
fig = plt.figure(figsize=(25, 15))
ax = fig.add_subplot(2, 3, 1)
ax.plot(t, z_plot, label='Measured x1')
ax.plot(t, x_plot2[:, 0], label='Model value of x1')
ax.grid()
ax.legend()
ax.set_xlabel('Time, s')
ax.set_ylabel('x1')
ax = fig.add_subplot(2, 3, 2)
ax.plot(t, x_plot2[:, 0], label='Model value of x1')
ax.plot(t, x_plot1[:, 0], label='Filter estimate of x1')
ax.set_xlabel('Time, s')
ax.set_ylabel('x1')
ax.grid()
ax.legend()
ax = fig.add_subplot(2, 3, 3)
ax.plot(t, x_plot2[:, 1], label='Model value of x2')
ax.plot(t, x_plot1[:, 1], label='Filter estimate of x2')
ax.set_xlabel('Time, s')
ax.set_ylabel('x2')
ax.grid()
ax.legend()
ax = fig.add_subplot(2, 3, 4)
ax.plot(t, abs(x_plot1[:, 0] - x_plot2[:, 0]))
ax.set_xlabel('Time, s')
ax.set_ylabel('Correction error for x1')
ax.grid()
ax = fig.add_subplot(2, 3, 5)
ax.plot(t, abs(x_plot1[:, 1] - x_plot2[:, 1]))
ax.set_xlabel('Time, s')
ax.set_ylabel('Correction error for x2')
ax.grid()
plt.show()
```
The effectiveness of the filter is now evident. Since the external disturbance is weaker than the exponential growth, the solution naturally returns to its classical form as time increases.
If the model noise is comparable to the solution itself (exponential in time), the filter still works very well.
If, however, the noise is stronger than the solution, then, as expected, the values cannot be determined accurately and on average they will be zero (from the initial conditions).
```python
import numpy as np
import matplotlib.pyplot as plt
# Initialization
tau = 0.01
T = 5 # Total simulation time
A = np.array([[1, tau], [tau, 1]])
B = np.array([tau, 0])
H = np.array([1, 0]).T
Q = np.array([[3 * tau ** 2, 0], [0, 3 * tau ** 2]])
R = 1
I = np.eye(2)
P = I
x = np.array([1, 1])
x_estimate = x
l = int(T / tau)
# CHANGED THE NOISE AND INPUT DEFINITIONS !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
t = np.linspace(0, T, l)
u = np.zeros((l, 2))
v = np.random.normal(0, 1, l)
w = np.array([np.exp(0.5*t) * np.random.normal(0, 1, l), t**2*np.exp(t) * np.random.normal(0, 3 ** 0.5, l)])
# Arrays where the data will be stored
x_plot1 = np.zeros((l, 2))
x_plot2 = np.zeros((l, 2))
z_plot = np.zeros(l)
z = x[0]
for i in range(l):
    # Store the data
    x_plot1[i, :] = x_estimate
    x_plot2[i, :] = x
    z_plot[i] = z
    # New process step
    x = np.dot(A, x_estimate) + np.dot(B, u[i]) + w[:, i]
    # Measure the vector x
    z = np.dot(H, x) + v[i]
    # Predictor
    x_predict = np.dot(A, x_estimate) + np.dot(B, u[i])
    P_predict = np.dot(A, np.dot(P, A.T)) + Q
    # Corrector
    K = np.dot(P_predict, H.T) / (np.dot(H, np.dot(P_predict, H.T)) + R)
    x_estimate = x_predict + np.dot(K, z - np.dot(H, x_predict))
    P = np.dot(I - np.dot(K, H), P_predict)
# Plot the results
fig = plt.figure(figsize=(25, 15))
ax = fig.add_subplot(2, 3, 1)
ax.plot(t, z_plot, label='Measured x1')
ax.plot(t, x_plot2[:, 0], label='Model value of x1')
ax.grid()
ax.legend()
ax.set_xlabel('Time, s')
ax.set_ylabel('x1')
ax = fig.add_subplot(2, 3, 2)
ax.plot(t, x_plot2[:, 0], label='Model value of x1')
ax.plot(t, x_plot1[:, 0], label='Filter estimate of x1')
ax.set_xlabel('Time, s')
ax.set_ylabel('x1')
ax.grid()
ax.legend()
ax = fig.add_subplot(2, 3, 3)
ax.plot(t, x_plot2[:, 1], label='Model value of x2')
ax.plot(t, x_plot1[:, 1], label='Filter estimate of x2')
ax.set_xlabel('Time, s')
ax.set_ylabel('x2')
ax.grid()
ax.legend()
ax = fig.add_subplot(2, 3, 4)
ax.plot(t, abs(x_plot1[:, 0] - x_plot2[:, 0]))
ax.set_xlabel('Time, s')
ax.set_ylabel('Correction error for x1')
ax.grid()
ax = fig.add_subplot(2, 3, 5)
ax.plot(t, abs(x_plot1[:, 1] - x_plot2[:, 1]))
ax.set_xlabel('Time, s')
ax.set_ylabel('Correction error for x2')
ax.grid()
plt.show()
```
Indeed, because the noise on $x_{2}$ is too strong ($\sim t^2 e^{t}$), the correction is impossible.
# **Conclusions**
The Kalman filter copes quite well with estimating the values of the state variables when the problem is posed correctly. As the simulation time increases, the stability improves.
# Best responses
---
## Definition of a best response
[Video](https://youtu.be/cJUZEmfhdcA?list=PLnC5h3PY-znxMsG0TRYGOyrnEO-QhVwLb)
In a two player game $(A,B)\in{\mathbb{R}^{m\times n}}^2$ a mixed strategy $\sigma_r^*$ of the row player is a best response to a column players' strategy $\sigma_c$ iff:
$$
\sigma_r^*=\text{argmax}_{\sigma_r\in S_r}\sigma_rA\sigma_c^T.
$$
Similarly a mixed strategy $\sigma_c^*$ of the column player is a best response to a row players' strategy $\sigma_r$ iff:
$$
\sigma_c^*=\text{argmax}_{\sigma_c\in S_c}\sigma_rB\sigma_c^T.
$$
---
In other words: a best response strategy maximise the utility of a player given a known strategy of the other player.
## Best responses in the Prisoners Dilemma
Consider the Prisoners Dilemma:
$$
A = \begin{pmatrix}
3 & 0\\
5 & 1
\end{pmatrix}\qquad
B = \begin{pmatrix}
3 & 5\\
0 & 1
\end{pmatrix}
$$
We can easily identify the pure strategy best responses by underlying the corresponding utilities. For the row player, we will underline the best utility in each column:
$$
A = \begin{pmatrix}
3 & 0\\
\underline{5} & \underline{1}
\end{pmatrix}
$$
For the column player we underline the best utility in each row:
$$
B = \begin{pmatrix}
3 & \underline{5}\\
0 & \underline{1}
\end{pmatrix}
$$
We see that both players' best responses are their second strategy.
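As a quick programmatic sanity check (not in the original text), the same underlining can be reproduced with column-wise and row-wise argmaxes of the payoff matrices:
```python
import numpy as np

A = np.array([[3, 0], [5, 1]])
B = np.array([[3, 5], [0, 1]])

# Row player: best row against each column of A
print(A.argmax(axis=0))  # [1 1] -> second strategy against either column
# Column player: best column against each row of B
print(B.argmax(axis=1))  # [1 1] -> second strategy against either row
```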
## Best responses in matching pennies
[Video](https://youtu.be/dLUWbKNxU44?list=PLnC5h3PY-znxMsG0TRYGOyrnEO-QhVwLb)
Consider matching pennies with the best responses underlined:
$$
A = \begin{pmatrix}
\underline{1} & -1\\
-1 & \underline{1}
\end{pmatrix}\qquad
B = \begin{pmatrix}
-1 & \underline{1}\\
\underline{1} & -1
\end{pmatrix}
$$
We see that the best response now depends on what the opponent does.
Let us consider the best responses against a mixed strategy (and apply the previous definition):
- Assume $\sigma_r=(x,1-x)$
- Assume $\sigma_c=(y,1-y)$
We have:
$$
A\sigma_c^T = \begin{pmatrix}
2y-1\\
1-2y
\end{pmatrix}\qquad
\sigma_rB = \begin{pmatrix}
1-2x & 2x-1
\end{pmatrix}
$$
```python
import sympy as sym
import numpy as np
sym.init_printing()
x, y = sym.symbols('x, y')
A = sym.Matrix([[1, -1], [-1, 1]])
B = - A
sigma_r = sym.Matrix([[x, 1-x]])
sigma_c = sym.Matrix([y, 1-y])
A * sigma_c, sigma_r * B
```
Those two vectors gives us the utilities to the row/column player when they play either of their pure strategies:
- $(A\sigma_c^T)_i$ is the utility of the row player when playing strategy $i$ against $\sigma_c=(y, 1-y)$
- $(\sigma_rB)_j$ is the utility of the column player when playing strategy $j$ against $\sigma_r=(x, 1-x)$
Let us plot these (using `matplotlib`):
```python
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
matplotlib.rc("savefig", dpi=100) # Increase the quality of the images (not needed)
ys = [0, 1]
row_us = [[(A * sigma_c)[i].subs({y: val}) for val in ys] for i in range(2)]
plt.plot(ys, row_us[0], label="$(A\sigma_c^T)_1$")
plt.plot(ys, row_us[1], label="$(A\sigma_c^T)_2$")
plt.xlabel("$\sigma_c=(y, 1-y)$")
plt.title("Utility to player 1")
plt.legend();
```
```python
xs = [0, 1]
row_us = [[(sigma_r * B)[j].subs({x: val}) for val in xs] for j in range(2)]
plt.plot(xs, row_us[0], label="$(\sigma_rB)_1$")
plt.plot(xs, row_us[1], label="$(\sigma_rB)_2$")
plt.xlabel("$\sigma_r=(x, 1-x)$")
plt.title("Utility to column player")
plt.legend();
```
We see that the best responses to the mixed strategies are given as:
$$
\sigma_r^* =
\begin{cases}
(1, 0),&\text{ if } y > 1/2\\
(0, 1),&\text{ if } y < 1/2\\
\text{indifferent},&\text{ if } y = 1/2
\end{cases}
\qquad
\sigma_c^* =
\begin{cases}
(0, 1),&\text{ if } x > 1/2\\
(1, 0),&\text{ if } x < 1/2\\
\text{indifferent},&\text{ if } x = 1/2
\end{cases}
$$
In this particular case we see that for any given strategy, the opponents' best response is either a pure strategy or a mixed strategy in which case they are indifferent between the pure strategies.
For example:
- If $\sigma_c=(1/4, 3/4)$ ($y=1/4$) then the best response is $\sigma_r^*=(0,1)$
- If $\sigma_c=(1/2, 1/2)$ ($y=1/2$) then any mixed strategy is a best response **but** in fact both pure strategies would give the same utility (the lines intersect).
This observation generalises to our first theorem:
---
## Best response condition
[Video](https://youtu.be/UQWoNZBifs8?list=PLnC5h3PY-znxMsG0TRYGOyrnEO-QhVwLb)
In a two player game $(A,B)\in{\mathbb{R}^{m\times n}}^2$ a mixed strategy $\sigma_r^*$ of the row player is a best response to a column players' strategy $\sigma_c$ iff:
$${\sigma_r^*}_i > 0 \Rightarrow (A\sigma_c^T)_i = \max_{k}(A\sigma_c^T)_k\text{ for all }1\leq i\leq m$$
### Proof of best response condition
$(A\sigma_c^T)_i$ is the utility of the row player when they play their $i$th strategy. Thus:
$$\sigma_rA\sigma_c^T=\sum_{i=1}^{m}{\sigma_r}_i(A\sigma_c^T)_i$$
Let $u=\max_{k}(A\sigma_c^T)_k$. Thus:
$$
\begin{align}
\sigma_rA\sigma_c^T&=\sum_{i=1}^{m}{\sigma_r}_i(u - u + (A\sigma_c^T)_i)\\
&=\sum_{i=1}^{m}{\sigma_r}_iu - \sum_{i=1}^{m}{\sigma_r}_i(u - (A\sigma_c^T)_i)\\
&=u - \sum_{i=1}^{m}{\sigma_r}_i(u - (A\sigma_c^T)_i)
\end{align}$$
We know that $u - (A\sigma_c^T)_i\geq 0$, thus the largest $\sigma_rA\sigma_c^T$ can be is $u$ which occurs iff ${\sigma_r}_i > 0 \Rightarrow (A\sigma_c^T)_i = u$ as required.
---
Returning to our previous example. If $\sigma_c=(1/2, 1/2)$, $(A\sigma_c^T)=(0, 0)$, thus $(A\sigma_c^T)_i = 0$ for all $i$.
Note that while any strategy is a best response to $(1/2, 1/2)$ the pair of strategies $(\sigma_r, \sigma_c) = ((1/2, 1/2), (1/2, 1/2))$ are the only two strategies that are best responses to each other. This _coordinate_ is called a **Nash equilibrium**.
## Definition of Nash equilibrium
[Video](https://youtu.be/b1JBFU0wDyY?list=PLnC5h3PY-znxMsG0TRYGOyrnEO-QhVwLb)
In a two player game $(A,B)\in{\mathbb{R}^{m\times n}}^2$, $(\sigma_r, \sigma_c)$ is a Nash equilibrium if $\sigma_r$ is a best response to $\sigma_c$ and vice versa.
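To connect the two definitions, here is a short numerical check (not from the original chapter) that the pair $((1/2, 1/2), (1/2, 1/2))$ satisfies the best response condition for both players in matching pennies:
```python
import numpy as np

A = np.array([[1, -1], [-1, 1]])
B = -A
sigma_r = np.array([1 / 2, 1 / 2])
sigma_c = np.array([1 / 2, 1 / 2])

row_utils = A @ sigma_c   # utility of each pure row strategy against sigma_c
col_utils = sigma_r @ B   # utility of each pure column strategy against sigma_r

# Best response condition: every strategy played with positive probability
# must attain the maximum utility.
print(np.allclose(row_utils[sigma_r > 0], row_utils.max()))  # True
print(np.allclose(col_utils[sigma_c > 0], col_utils.max()))  # True
```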
### DEMDP06
# Deterministic Optimal Economic Growth Model
A welfare-maximizing social planner must decide how much society should consume and invest. The model is of special interest because it has a known closed-form solution.
- States
- s stock of wealth
- Actions
- k capital investment
- Parameters
- beta capital production elasticity
- delta discount factor
```python
import numpy as np
import matplotlib.pyplot as plt
from compecon import BasisChebyshev, DPmodel, DPoptions, qnwnorm, demo
```
### Model parameters
Assuming that the capital production elasticity is $\beta=0.5$ and the discount factor is $\delta=0.9$
```python
β, δ = 0.5, 0.9
```
## Analytic results
The steady-state values for this model are
```python
sstar = (β * δ) ** (β / (1 - β)) # steady-state wealth
kstar = β * δ * sstar # steady-state capital investment
vstar = np.log(sstar - kstar) / (1 - δ) # steady-state value
pstar = 1 / (sstar * (1 - β * δ)) # steady-state shadow price
b = 1 / (1 - δ * β)
print('\n\nSteady-State')
for var, value in zip(['Wealth','Investment','Value','Shadow price'], [sstar,kstar,vstar,pstar]):
print(f'\t{var:12s} = {value:8.4f}')
```
Steady-State
Wealth = 0.4500
Investment = 0.2025
Value = -13.9634
Shadow price = 4.0404
The true value function is
\begin{equation}
V(s) = v^* + \frac{1}{1-\delta\beta}\left(\log(s) -\log(s^*)\right)
\end{equation}
```python
def vtrue(wealth): # analytic value function
return vstar + b * (np.log(wealth) - np.log(sstar))
```
The true policy function is
\begin{equation}
k(s) = \delta\beta s
\end{equation}
```python
def ktrue(wealth): #analytic policy function
return δ*β*wealth
```
## Numeric results
### State space
The state variable is s="Wealth", which we restrict to $s\in[0.2, 1.0]$.
Here, we represent it with a Chebyshev basis, with $n=15$ nodes.
```python
n, smin, smax = 15, 0.2, 1.0
basis = BasisChebyshev(n, smin, smax, labels=['Wealth'])
```
### Action space
The choice variable k="Investment" must be nonnegative.
```python
def bounds(s, i=None, j=None):
return np.zeros_like(s), s[:]
```
### Reward function
The reward function is the utility of consumption=$s-k$.
```python
def reward(s, k, i=None, j=None):
sk = s - k
u = np.log(sk)
ux= - sk ** -1
uxx = - sk ** -2
return u, ux, uxx
```
### State transition function
Next period, wealth will be equal to production from available initial capital $k$, that is $s' = k^\beta$
```python
def transition(s, k, i=None, j=None, in_=None, e=None):
g = k ** β
gx = β * k **(β - 1)
gxx = (β - 1) * β * k ** (β - 2)
return g, gx, gxx
```
### Model structure
The value of wealth $s$ satisfies the Bellman equation
\begin{equation*}
V(s) = \max_k\left\{\log(s-k) + \delta V(k^\beta) \right\}
\end{equation*}
To solve and simulate this model, use the CompEcon class `DPmodel`
```python
growth_model = DPmodel(basis, reward, transition, bounds,
x=['Investment'],
discount=δ)
```
### Solving the model
Solving the growth model by collocation, using the *Newton* algorithm and a maximum of 20 iterations
```python
options = dict(print=True,
algorithm='newton',
maxit=20)
snodes = growth_model.Value.nodes
S = growth_model.solve(vtrue(snodes), ktrue(snodes), **options)
```
Solving infinite-horizon model collocation equation by Newton's method
iter change time
------------------------------
0 4.7e-07 0.0313
1 1.6e-12 0.0313
Elapsed Time = 0.03 Seconds
`DPmodel.solve` returns a pandas `DataFrame` with the following data:
```python
S.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Wealth</th>
<th>value</th>
<th>resid</th>
<th>Investment</th>
</tr>
<tr>
<th>Wealth</th>
<th></th>
<th></th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<th>0.200000</th>
<td>0.200000</td>
<td>-15.437865</td>
<td>2.766030e-07</td>
<td>0.090000</td>
</tr>
<tr>
<th>0.205369</th>
<td>0.205369</td>
<td>-15.389699</td>
<td>-2.099164e-07</td>
<td>0.092416</td>
</tr>
<tr>
<th>0.210738</th>
<td>0.210738</td>
<td>-15.342776</td>
<td>-2.488242e-07</td>
<td>0.094832</td>
</tr>
<tr>
<th>0.216107</th>
<td>0.216107</td>
<td>-15.297033</td>
<td>-1.102768e-07</td>
<td>0.097248</td>
</tr>
<tr>
<th>0.221477</th>
<td>0.221477</td>
<td>-15.252412</td>
<td>5.644805e-08</td>
<td>0.099664</td>
</tr>
</tbody>
</table>
</div>
We are also interested in the shadow price of wealth (the first derivative of the value function) and the approximation error.
To analyze the dynamics of the model, it also helps to compute the optimal change of wealth.
```python
S['shadow price'] = growth_model.Value(S['Wealth'],1)
S['error'] = S['value'] - vtrue(S['Wealth'])
S['D.Wealth'] = transition(S['Wealth'], S['Investment'])[0] - S['Wealth']
S.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Wealth</th>
<th>value</th>
<th>resid</th>
<th>Investment</th>
<th>shadow price</th>
<th>error</th>
<th>D.Wealth</th>
</tr>
<tr>
<th>Wealth</th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<th>0.200000</th>
<td>0.200000</td>
<td>-15.437865</td>
<td>2.766030e-07</td>
<td>0.090000</td>
<td>9.090758</td>
<td>4.241867e-07</td>
<td>0.100000</td>
</tr>
<tr>
<th>0.205369</th>
<td>0.205369</td>
<td>-15.389699</td>
<td>-2.099164e-07</td>
<td>0.092416</td>
<td>8.853206</td>
<td>-3.099040e-08</td>
<td>0.098631</td>
</tr>
<tr>
<th>0.210738</th>
<td>0.210738</td>
<td>-15.342776</td>
<td>-2.488242e-07</td>
<td>0.094832</td>
<td>8.627699</td>
<td>-4.010327e-08</td>
<td>0.097210</td>
</tr>
<tr>
<th>0.216107</th>
<td>0.216107</td>
<td>-15.297033</td>
<td>-1.102768e-07</td>
<td>0.097248</td>
<td>8.413362</td>
<td>1.253504e-07</td>
<td>0.095739</td>
</tr>
<tr>
<th>0.221477</th>
<td>0.221477</td>
<td>-15.252412</td>
<td>5.644805e-08</td>
<td>0.099664</td>
<td>8.209399</td>
<td>3.151234e-07</td>
<td>0.094220</td>
</tr>
</tbody>
</table>
</div>
### Solving the model by Linear-Quadratic Approximation
The `DPmodel.lqapprox` method solves the linear-quadratic approximation, in this case around the steady-state. It returns an LQmodel which works similarly to the DPmodel object.
We also compute the shadow price and the approximation error to compare these results to the collocation results.
```python
growth_lq = growth_model.lqapprox(sstar, kstar)
L = growth_lq.solution(basis.nodes)
L['shadow price'] = L['value_Wealth']
L['error'] = L['value'] - vtrue(L['Wealth'])
L['D.Wealth'] = L['Wealth_next']- L['Wealth']
L.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Wealth</th>
<th>Investment</th>
<th>value</th>
<th>value_Wealth</th>
<th>Wealth_next</th>
<th>shadow price</th>
<th>error</th>
<th>D.Wealth</th>
</tr>
<tr>
<th>Wealth</th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<th>0.202191</th>
<td>0.202191</td>
<td>-0.020528</td>
<td>-15.014820</td>
<td>4.444953</td>
<td>0.202192</td>
<td>4.444953</td>
<td>0.403234</td>
<td>2.977733e-07</td>
</tr>
<tr>
<th>0.219577</th>
<td>0.219577</td>
<td>-0.004880</td>
<td>-14.937786</td>
<td>4.416570</td>
<td>0.219578</td>
<td>4.416570</td>
<td>0.330284</td>
<td>2.769166e-07</td>
</tr>
<tr>
<th>0.253590</th>
<td>0.253590</td>
<td>0.025731</td>
<td>-14.788512</td>
<td>4.361045</td>
<td>0.253590</td>
<td>4.361045</td>
<td>0.217716</td>
<td>2.361148e-07</td>
</tr>
<tr>
<th>0.302742</th>
<td>0.302742</td>
<td>0.069968</td>
<td>-14.576129</td>
<td>4.280804</td>
<td>0.302742</td>
<td>4.280804</td>
<td>0.107984</td>
<td>1.771510e-07</td>
</tr>
<tr>
<th>0.364886</th>
<td>0.364886</td>
<td>0.125897</td>
<td>-14.313256</td>
<td>4.179353</td>
<td>0.364886</td>
<td>4.179353</td>
<td>0.031397</td>
<td>1.026024e-07</td>
</tr>
</tbody>
</table>
</div>
```python
growth_lq.G
```
array([[1.]])
## Plotting the results
### Optimal Policy
```python
fig1 = demo.figure('Optimal Investment Policy', 'Wealth', 'Investment')
plt.plot(S.Investment, label='Chebychev Collocation')
plt.plot(L.Investment, label='L-Q Approximation')
demo.annotate(sstar, kstar,'$s^*$ = %.2f\n$V^*$ = %.2f' % (sstar, kstar),'bo', (10, -17),ms=10)
plt.legend(loc= 'upper left')
```
### Value Function
```python
fig2 = demo.figure('Value Function', 'Wealth', 'Value')
plt.plot(S.value, label='Chebychev Collocation')
plt.plot(L.value, label='L-Q Approximation')
demo.annotate(sstar, vstar, f'$s^* = {sstar:.2f}$\n$V^* = {vstar:.2f}$', 'bo', (10, -17),ms=10)
plt.legend(loc= 'upper left')
```
### Shadow Price Function
```python
fig3 = demo.figure('Shadow Price Function', 'Wealth', 'Shadow Price')
plt.plot(S['shadow price'], label='Chebychev Collocation')
plt.plot(L['shadow price'], label='L-Q Approximation')
demo.annotate(sstar, pstar,f'$s^* = {sstar:.2f}$\n$\lambda^* = {pstar:.2f}$', 'bo', (10, 17),ms=10)
plt.legend(loc= 'upper right')
```
### Chebychev Collocation Residual and Approximation Error vs. Linear-Quadratic Approximation Error
```python
fig4 = plt.figure(figsize=[12, 6])
demo.subplot(1, 2, 1, 'Chebychev Collocation Residual\nand Approximation Error', 'Wealth', 'Residual/Error')
plt.hlines(0,smin,smax,'k',linestyles='--')
plt.plot(S[['resid', 'error']])
plt.legend(['Residual','Error'], loc='lower right')
plt.ticklabel_format(style='sci', axis='y', scilimits=(-1,1))
demo.subplot(1, 2, 2, 'Linear-Quadratic Approximation Error', 'Wealth', 'Error')
plt.hlines(0,smin,smax,'k',linestyles='--')
plt.plot(L['error'], label='Error')
plt.legend(loc='upper left')
plt.ticklabel_format(style='sci', axis='y', scilimits=(-1,1))
```
### Wealth dynamics
Notice how the steady-state is stable in the Chebyshev collocation solution, but unstable in the linear-quadratic approximation. In particular, simulated paths of wealth in the L-Q approximation will converge to zero unless the initial state is within a small neighborhood of the steady-state.
```python
fig5 = demo.figure('Wealth dynamics', 'Wealth', 'Wealth change', figsize=[8,5])
plt.plot(S['D.Wealth'], label='Chebychev Collocation')
plt.plot(L['D.Wealth'], label='L-Q Approximation')
plt.hlines(0,smin,smax,linestyles=':')
demo.annotate(sstar, 0, f'$s^* = {sstar:.2f}$\n$\Delta s^* = {0:.2f}$', 'bo', (10, 10),ms=10,fs=11)
plt.legend(loc= 'lower left')
```
## Simulating the model
We simulate 20 periods of the model starting from $s=s_{\min}$
```python
T = 20
data = growth_model.simulate(T, smin)
```
### Simulated State and Policy Paths
```python
opts = dict(spec='r*', offset=(-5, -5), fs=11, ha='right')
fig6 = demo.figure('State and Policy Paths','Period', 'Wealth/Investment',[0, T + 0.5])
plt.plot(data[['Wealth', 'Investment']])
demo.annotate(T, sstar, 'steady-state wealth\n = %.2f' % sstar, **opts)
demo.annotate(T, kstar, 'steady-state investment\n = %.2f' % kstar, **opts)
```
```python
#demo.savefig([fig1,fig2,fig3,fig4,fig5,fig6])
```
|
64aa1da8038222065e9031a7fbe85185d0122ccf
| 214,958 |
ipynb
|
Jupyter Notebook
|
notebooks/dp/06 Deterministic Optimal Economic Growth Model.ipynb
|
daniel-schaefer/CompEcon-python
|
d3f66e04a7e02be648fc5a68065806ec7cc6ffd6
|
[
"MIT"
] | null | null | null |
notebooks/dp/06 Deterministic Optimal Economic Growth Model.ipynb
|
daniel-schaefer/CompEcon-python
|
d3f66e04a7e02be648fc5a68065806ec7cc6ffd6
|
[
"MIT"
] | null | null | null |
notebooks/dp/06 Deterministic Optimal Economic Growth Model.ipynb
|
daniel-schaefer/CompEcon-python
|
d3f66e04a7e02be648fc5a68065806ec7cc6ffd6
|
[
"MIT"
] | 1 |
2021-06-01T03:47:35.000Z
|
2021-06-01T03:47:35.000Z
| 215.388778 | 61,552 | 0.899506 | true | 4,440 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.891811 | 0.890294 | 0.793974 |
__label__eng_Latn
| 0.548218 | 0.683001 |
# Introduction to Decision Theory using Probabilistic Graphical Models
> So far, we have seen that probabilistic graphical models are useful for modeling situations that involve uncertainty. Furthermore, we will see in the next module how, using inference algorithms, we can also reach conclusions about the current situation from partial evidence: predictions.
>
> On the other hand, we do not only want to obtain these conclusions (predictions), but actually make decisions on top of these conclusions.
>
> It turns out that we can actually use probabilistic graphical models to encode not only the uncertain situations, but also the decision making agents with all the policies they are allowed to implement and the possible utilities that one may obtain.
> **Objectives:**
> - To learn how to represent decision situations using PGMs.
> - To understand the maximum expected utility principle.
> - To learn how to measure the value of information when making a decision.
> **References:**
> - Probabilistic Graphical Models: Principles and Techniques, By Daphne Koller and Nir Friedman. Ch. 22 - 23.
> - Probabilistic Graphical Models Specialization, offered through Coursera. Prof. Daphne Koller.
<p style="text-align:right;"> Image retrieved from: https://upload.wikimedia.org/wikipedia/commons/b/bb/Risk_aversion_curve.jpg.</p>
___
# 1. Maximizing Expected Utility
The theoretical foundations of decision theory were established long before probabilistic graphical models came to life. The framework of *maximum expected utility* allows us to formulate and solve decision problems that involve uncertainty.
Before continuing, we should be clear that the **utility** is a numerical function that assigns numbers to the various possible outcomes, encoding the preferences of the agent. These numbers:
- Do not have meanings in themselves.
- We only know that the larger, the better, according to the preferences of the agent.
- Usually, we compare the utility of two outcomes by means of $\Delta U$, which represents the strength of the change in "happiness" of one outcome with respect to the other.
The outcomes we were talking about above can vary along *multiple dimensions*. One of those dimensions is often the monetary gain, but most of the settings consider other dimensions as well.
## 1.1. Problem formulation and maximum expected utility principle
A simple **decision making situation** $\mathcal{D}$ is defined by:
- A set of possible actions $A$, with $\mathrm{Val}(A)=\{a_1, \dots, a_k\}$.
- A set of possible states (RVs) $\bar{X}$, with $\mathrm{Val}(\bar{X})=\{\bar{x}_1, \dots, \bar{x}_n\}$.
- A conditional distribution $P(\bar{X}|A)$.
- A utility function $U(\bar{X}, A)$, which expresses the agent's preferences.
The **expected utility** on the above decision making situation, given that $A=a$, is
$$EU[\mathcal{D}[a]] = \sum_{\bar{X}}P(\bar{X}|a)U(\bar{X},a).$$
Furthermore, the **maximum expected utility (MEU)** principle states that we should choose the action that maximizes the expected utility
$$a^\ast = \arg\max_{a\in\mathrm{Val}(A)} EU[\mathcal{D}[a]].$$
**How can we represent the above using PGMs?**
We can use the ideas we developed for PGMs to represent the decision making situations in a very interpretable way.
In this sense, we have:
- Random variables are represented by *ovals* and stand for the state.
- Actions are represented by *rectangles*.
- Utilities are represented by *diamonds*. These have no children.
**Example.** Consider the decision situation $\mathcal{D}$ where a graduate of the Master's Degree in Data Science is deciding whether to found a Data Science consultancy company or not.
While this person does not know exactly what the demand for consultancy services will be, he/she knows that the demand will be either:
- $m^0$: nonexistent, with probability $0.5$;
- $m^1$: moderate, with probability $0.3$;
- $m^2$: high, with probability $0.2$.
Moreover, he/she will obtain a utility $U(M, f^0)=0$ for $M=m^0, m^1, m^2$ in the case that he/she doesn't found the company, or the following utilities in the case that he/she founds the company:
- $U(m^0, f^1)=-7$;
- $U(m^1, f^1)=5$;
- $U(m^2, f^1)=20$.
Let's represent the graphical model corresponding to this situation:
```python
from IPython.display import Image
```
```python
# First draw in the white board, then show (first_representation)
Image("figures/first_representation.png")
```
Then, according to this:
- What are the expected utilities for each action?
- $E[\mathcal{D}[f^0]]=0$
- $E[\mathcal{D}[f^1]]=0.5 \times -7 + 0.3 \times 5 + 0.2 \times 20=2.$
- Which is the optimal action?
- $f^1 = \arg \max_{f=f^0, f^1} E[\mathcal{D}[f]]$.
With `pgmpy`:
```python
# Import pgmpy.factors.discrete.DiscreteFactor
```
```python
# Define factors P(M), U(M,F)
```
```python
```
+------+----------+
| M | phi(M) |
+======+==========+
| M(0) | 0.5000 |
+------+----------+
| M(1) | 0.3000 |
+------+----------+
| M(2) | 0.2000 |
+------+----------+
```python
```
+------+------+------------+
| M | F | phi(M,F) |
+======+======+============+
| M(0) | F(0) | 0.0000 |
+------+------+------------+
| M(0) | F(1) | -7.0000 |
+------+------+------------+
| M(1) | F(0) | 0.0000 |
+------+------+------------+
| M(1) | F(1) | 5.0000 |
+------+------+------------+
| M(2) | F(0) | 0.0000 |
+------+------+------------+
| M(2) | F(1) | 20.0000 |
+------+------+------------+
```python
# Find Expected Utility
```
+------+----------+
| F | phi(F) |
+======+==========+
| F(0) | 0.0000 |
+------+----------+
| F(1) | 2.0000 |
+------+----------+
```python
```
2.0
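For reference, a minimal sketch of what the omitted cells above might contain, assuming pgmpy's `DiscreteFactor` (variable names here are illustrative, not necessarily the ones used in class):
```python
from pgmpy.factors.discrete import DiscreteFactor

# Prior over market demand M = (m0, m1, m2)
P_M = DiscreteFactor(['M'], [3], [0.5, 0.3, 0.2])

# Utility U(M, F): F = f0 (don't found) gives 0; F = f1 gives -7, 5, 20
U_MF = DiscreteFactor(['M', 'F'], [3, 2], [0, -7, 0, 5, 0, 20])

# Expected utility EU(F) = sum_M P(M) U(M, F): factor product, then marginalize M out
EU = P_M * U_MF
EU.marginalize(['M'])
print(EU)               # phi(F) = [0, 2]
print(EU.values.max())  # 2.0 -> the optimal action is to found the company
```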
**Multiple utility nodes**
In the above example, we only had one utility node. However, we may include as many utility nodes as we want in order to reduce the number of parameters:
```python
Image("figures/student_utility.png")
```
Where:
- $V_G$: Happiness with the grade itself.
- $V_Q$: Quality of life during studies.
- $V_S$: Value of getting a good job.
The total utility can be formulated as:
$$U=V_G+V_Q+V_S.$$
*Question.* If $|\mathrm{Val}(D)| = 2$, $|\mathrm{Val}(S)| = 2$, $|\mathrm{Val}(G)| = 3$, and $|\mathrm{Val}(J)| = 2$, how many parameters do you need to completely specify the utility?
- $|\mathrm{Val}(V_G)| = 3$; $|\mathrm{Val}(V_Q)| = 4$; $|\mathrm{Val}(V_S)| = 2$. We need $3+4+2=9$ parameters.
*Question.* How many if the utility weren't decomposed?
- $3\times 2 \times 2 \times 2 = 24$.
## 1.2. Information edges and decision rules
The influence diagrams we have depicted above also allow us to capture the notion of information available to the agent when they make their decision.
**Example.** In the example of the Master's graduate deciding whether or not to found his/her company, let's assume that he/she has the opportunity to carry out a survey to measure the overall market demand for Data Science consultancy before making the decision.
In this sense, the graph now looks like the following:
```python
# First draw in the white board, then show (second_representation)
Image("figures/second_representation.png")
```
Hence, the agent can make its decision depending on the value of the survey, which is denoted by the presence of the edge.
Formally,
> *Definition.* A **decision rule** $\delta_A$ at an action node $A$ is a conditional probability $P(A|\mathrm{Pa}A)$ (a function that maps each instantiation of $\mathrm{Pa}A$ to a probability distribution $\delta_A$ over $\mathrm{Val}(A)$).
Given the above, **what is the expected utility with information?**
Following the same sort of ideas we get that
$$EU[\mathcal{D}[\delta_A]] = \sum_{\bar{X}, A}P_{\delta_A}(\bar{X},A)U(\bar{X},A),$$
where $P_{\delta_A}(\bar{X},A)$ is the joint probability distribution over $\bar{X}$ and $A$. The subscript $\delta_A$ indicates that this joint distribution depends on the choice of the decision rule $\delta_A$.
Now, following the MEU, the optimal decision rule is:
$$\delta_A^\ast = \arg \max_{\delta_A} EU[\mathcal{D}[\delta_A]],$$
and the MEU is
$$MEU(\mathcal{D}) = \max_{\delta_A} EU[\mathcal{D}[\delta_A]].$$
**How can we find optimal decision rules?**
In our entrepreneur example, we have that
\begin{align}
EU[\mathcal{D}[\delta_A]] &= \sum_{\bar{X}, A}P_{\delta_A}(\bar{X},A)U(\bar{X},A) \\
& = \sum_{M,S,F} P(M)P(S|M) \delta_F(F|S) U(M,F)\\
& = \sum_{S,F} \delta_F(F|S) \sum_M P(M)P(S|M)U(M,F)\\
& = \sum_{S,F} \delta_F(F|S) \mu(S,F)
\end{align}
(see in the whiteboard, then show equations)
Thus, let's calculate $\mu(S,F)$ using `pgmpy`:
```python
# We already have P(M), and U(F,M). Define P(S|M)
```
```python
# Compute mu(F,S)
```
```python
# Print mu(F,S)
```
+------+------+------------+
| S | F | phi(S,F) |
+======+======+============+
| S(0) | F(0) | 0.0000 |
+------+------+------------+
| S(0) | F(1) | -1.2500 |
+------+------+------------+
| S(1) | F(0) | 0.0000 |
+------+------+------------+
| S(1) | F(1) | 1.1500 |
+------+------+------------+
| S(2) | F(0) | 0.0000 |
+------+------+------------+
| S(2) | F(1) | 2.1000 |
+------+------+------------+
Following the MEU principle, we should select for each state (the Survey, in this case) the action that maximizes $\mu$.
In this case:
```python
Image("figures/table.png")
```
Finally,
$$MEU[\mathcal{D}] = \sum_{S,F} \delta_F^\ast(F|S) \mu(S,F) = 0 + 1.15 + 2.1 = 3.25$$
```python
# Define optimal decision rule
```
```python
```
2.0
```python
# Obtain MEU
```
3.2500000000000004
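For reference, a hedged sketch of how the omitted cells above could combine these pieces; the $\mu(S,F)$ values are taken directly from the table printed earlier rather than recomputed from $P(S|M)$, which is not reproduced here:
```python
from pgmpy.factors.discrete import DiscreteFactor

# mu(S, F) copied from the table printed above
mu = DiscreteFactor(['S', 'F'], [3, 2], [0, -1.25, 0, 1.15, 0, 2.10])

# Optimal decision rule delta*(F|S): found (f1) unless the survey predicts no demand (s0)
delta_F = DiscreteFactor(['S', 'F'], [3, 2], [1, 0, 0, 1, 0, 1])

# MEU = sum_{S,F} delta*(F|S) mu(S,F)
meu = (mu.values * delta_F.values).sum()
print(meu)  # 3.25
```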
Without this observation the MEU was 2; with the survey information, the MEU increases to 3.25, a gain of more than 60%.
Nice, huh?
___
# 2. Utility functions
## 2.1. Utility of money
We have used utility functions in all the first section assuming that they were known. In this section we study these functions in more detail.
**Utility functions** are a necessary tool that enables us to compare complex scenarios that involve uncertainty or risk.
The first thing that we should understand is that utility is not the same as expected payoff.
**Example.** An investor must decide between a participation in company A, where he would earn $\$3$ million without risk, and a participation in company B, where he would earn $\$4$ million with probability 0.8 and $\$0$ with probability 0.2.
```python
Image("figures/utility_first.png")
```
Which one do you prefer?
- The risk-free one.
What is the expected payoff of company A?
- $\$3$ M.
What is the expected payoff of company B?
- $0.8 \times \$4$ M + $0.2 \times \$0$ M = $\$3.2$ M.
**Example.**
Another common example that reflects this fact is the well-known St. Petersburg Paradox:
- A fair coin is tossed repeatedly until it comes up Heads.
- Each time the coin doesn't come up Heads, the payoff is doubled.
- In this sense, if the coin comes up Heads in the $n$-th toss, then the payoff will be $\$2^n$.
How much are you willing to pay to enter this game?
- 20, 10, 1, 5.
What happens is that if we compute the expected payoff it is:
- $P(\text{comes up Heads in } n-\text{th toss}) = \frac{1}{2^n}$
$$E[\text{Payoff}] = \sum_{n=1}^{\infty}P(\text{comes up Heads in } n-\text{th toss}) \text{Payoff}(n) = \sum_{n=1}^{\infty} \frac{1}{2^n} 2^n = \sum_{n=1}^{\infty} 1 = \infty.$$
With these two examples, we have seen that people do not always choose to maximize their expected monetary gain. What this implies is that the utility of money is not the money itself.
In fact, after many psychological studies, we know that the utility functions of money for most people look like
$$U(W) = \alpha + \beta \log(W + \gamma),$$
which is a nice *concave* function.
**What does this function look like?** (see in the whiteboard).
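As a quick illustration, here is a minimal sketch with arbitrary (assumed) parameter values, only meant to show the concave shape:
```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative parameters for U(W) = alpha + beta*log(W + gamma)
alpha, beta, gamma = 0.0, 1.0, 1.0
W = np.linspace(0, 10, 200)
U = alpha + beta * np.log(W + gamma)

plt.plot(W, U)
plt.xlabel('Wealth $W$')
plt.ylabel('Utility $U(W)$')
plt.title('Concave utility of money')
plt.show()
```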
**How does the actual shape of the curve relate to the agent's attitude towards risk?**
## 2.2. Utility of multiple attributes
All the attributes affecting the utility must be integrated into one utility function.
This may be a difficult task, since we can enter into some complex fields, far beyond math, probability, and graphs.
- For instance, how do we compare human life with money?
  - A low-cost airline is considering the decision of not running maintenance checks on its aircraft at every arrival.
  - If you have a car, you don't change the tires that often (every 3 months).
There have been several attempts to address this problem:
- Micromorts: $\frac{1}{10^6}$ chance of death.
- [QALY](https://en.wikipedia.org/wiki/Quality-adjusted_life_year).
# 3. Value of perfect information
We used influence diagrams to make decisions given a set of observations.
Another type of question that may arise is **which observations should I make before making a decision?**
- Initially one may think that the more information, the better (because information is power).
- But the answer to this question is far from being that simple.
For instance:
- In our entrepreneur example, we saw that including the information of the survey increased the MEU significantly. However, we did not take into account the cost of performing that survey. What if the cost of the survey makes the monetary gains of the company negative or too small?
- Medical diagnosis relies on tests. Some of these tests are painful, risky and/or very expensive.
A notion that allows us to address this question is the **Value of Perfect Information**.
- The value of perfect information $\mathrm{VPI}(A|\bar{X})$ is the value (in utility units) of observing $\bar{X}$ before choosing an action at the node $A$.
- If $\mathcal{D}$ is the original influence diagram, and
- $\mathcal{D}_{\bar{X}\to A}$ is the influence diagram with the edge(s) $\bar{X}\to A$,
- then
$$\mathrm{VPI}(A|\bar{X}) = MEU(\mathcal{D}_{\bar{X}\to A}) - MEU(\mathcal{D}).$$
In the entrepreneur example,
$$\mathrm{VPI}(F|S) = MEU(\mathcal{D}_{S\to F}) - MEU(\mathcal{D})=3.25 - 2 = 1.25.$$
> *Theorem.* The value of perfect information satisfies:
>
> (i) $\mathrm{VPI}(A|\bar{X})\geq 0$.
>
> (ii) $\mathrm{VPI}(A|\bar{X})= 0$ if and only if the optimal decision rule for $\mathcal{D}$ is also optimal for $\mathcal{D}_{\bar{X}\to A}$.
This theorem practically says that the information is valuable if and only if it changes the agent's decision in at least one case.
**Example.** Consider the case that you are interested in two job offers in two different companies. Furthermore, these two companies are startups and both are looking for funding, which highly depends on the organizational quality of the company.
This situation can be modeled as:
```python
Image("figures/vpi.png")
```
Let's find the MEU using `pgmpy`:
```python
# Define factors
```
```python
# Obtain Expected utility
```
```python
# Print EU
```
+------+----------+
| D | phi(D) |
+======+==========+
| D(0) | 0.7200 |
+------+----------+
| D(1) | 0.3300 |
+------+----------+
```python
# Obtain MEU(D)
```
0.72
Now, let's say that a friend of yours already works in the Company 2, so he informs you about the organizational status of that company. What is the value of that information?
```python
Image("figures/vpi2.png")
```
```python
# Obtain the factor mu(D, C2)
```
```python
# Print
```
+-------+------+-------------+
| C2 | D | phi(C2,D) |
+=======+======+=============+
| C2(0) | D(0) | 0.2880 |
+-------+------+-------------+
| C2(0) | D(1) | 0.0400 |
+-------+------+-------------+
| C2(1) | D(0) | 0.3600 |
+-------+------+-------------+
| C2(1) | D(1) | 0.2000 |
+-------+------+-------------+
| C2(2) | D(0) | 0.0720 |
+-------+------+-------------+
| C2(2) | D(1) | 0.0900 |
+-------+------+-------------+
```python
# Select optimal decision
```
```python
# Obtain MEU(D_C2->D)
```
0.738
```python
# Obtain VPI
```
0.018000000000000016
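For reference, the whole calculation can be summarized with plain NumPy using the $\mu(D, C_2)$ values printed above (a sketch; the underlying factors are not reproduced here):
```python
import numpy as np

# mu(C2, D) from the table above (rows: c2_0, c2_1, c2_2; columns: d_0, d_1)
mu = np.array([[0.288, 0.040],
               [0.360, 0.200],
               [0.072, 0.090]])

meu_no_info = mu.sum(axis=0).max()      # best single decision: 0.72
meu_with_info = mu.max(axis=1).sum()    # best decision for each observed C2: 0.738
vpi = meu_with_info - meu_no_info       # 0.018
print(meu_no_info, meu_with_info, vpi)
```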
# Announcements
## Exam of module 1.
<footer id="attribution" style="float:right; color:#808080; background:#fff;">
Created with Jupyter by Esteban Jiménez Rodríguez.
</footer>
|
6fdcb1d567e41e10023e47c22b5f87ace57d2fc1
| 267,071 |
ipynb
|
Jupyter Notebook
|
Modulo1/Clase4/DecisionTheory.ipynb
|
esjimenezro/mgp_online_public
|
b2d2a49c1c8730d1e507144ac4f65ec6842a5d94
|
[
"MIT"
] | null | null | null |
Modulo1/Clase4/DecisionTheory.ipynb
|
esjimenezro/mgp_online_public
|
b2d2a49c1c8730d1e507144ac4f65ec6842a5d94
|
[
"MIT"
] | null | null | null |
Modulo1/Clase4/DecisionTheory.ipynb
|
esjimenezro/mgp_online_public
|
b2d2a49c1c8730d1e507144ac4f65ec6842a5d94
|
[
"MIT"
] | null | null | null | 262.865157 | 52,860 | 0.920261 | true | 4,614 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.795658 | 0.787931 | 0.626924 |
__label__eng_Latn
| 0.993975 | 0.294885 |
```python
import numpy as np
import pandas as pd
import linearsolve as ls
import matplotlib.pyplot as plt
plt.style.use('classic')
%matplotlib inline
```
# Homework 8
**Instructions:** Complete the notebook below. Download the completed notebook in HTML format. Upload assignment using Canvas.
**Due:** Mar. 12 at **12:30pm.**
## Exercise: New-Keynesian Model Stochastic Simulation
### Equilibrium Conditions and Paramter Values
The most basic version of the New-Keynesian Model can be expressed as:
\begin{align}
y_t & = E_t y_{t+1} - \left( r_{t} - \bar{r}\right) + g_t\\
i_{t} & = r_{t} + E_t \pi_{t+1}\\
i_{t} & = \bar{r} + \pi^T + \phi_{\pi}\big(\pi_t - \pi^T\big) + \phi_{y}\big(y_t - \bar{y}\big)\\
\pi_t -\pi^T & = \beta \left( E_t\pi_{t+1} - \pi^T\right) + \kappa (y_t -\bar{y})+ u_t,
\end{align}
where: $y_t$ is (log) output, $r_t$ is the real interest rate, $i_t$ is the nominal interest rate, $\pi_t$ is the rate of inflation between periods $t-1$ and $t$, $\bar{r}$ is the long-run average real interest rate or the *natural rate of interest*, $\beta$ is the household's subjective discount factor, and $\pi^T$ is the central bank's inflation target. The coefficients $\phi_{\pi}$ and $\phi_{y}$ reflect the degree of intensity to which the central bank *endogenously* adjusts the nominal interest rate in response to movements in inflation and output.
The variables $g_t$ and $u_t$ represent exogenous shocks to aggregate demand and inflation. They follow AR(1) processes:
\begin{align}
g_{t+1} & = \rho_g g_{t} + \epsilon^g_{t+1}\\
u_{t+1} & = \rho_u u_{t} + \epsilon^u_{t+1}
\end{align}
### Parameter Values:
You will use the following parameter values:
| $\bar{y}$ | $\beta$ | $\bar{r}$ | $\kappa$ | $\pi^T$ | $\phi_{\pi}$ | $\phi_y$ | $\rho_g$ | $\sigma_g^2$ | $\rho_u$ | $\sigma_u^2$ |
|-----------|---------|--------------|----------|---------|--------------|----------|----------|--------------|----------|--------------|
| 0 | 0.995 | $-\log\beta$ | 0.1 | 0.02/4 | 1.5 | 0.5/4 | 0.5 | 0.002<sup>2</sup> | 0.5 | 0.001<sup>2</sup> |
### Input Model and Solve
Refer to the Notebook for the week 9 discussion section for a complete example of how to input the model and solve.
```python
# Create a variable called 'parameters' that stores the model parameter values in a Pandas Series.
# Create variable called 'varNames' that stores the variable names in a list with state variables ordered first
# Create variable called 'shockNames' that stores an exogenous shock name for each state variable.
# Define a function that evaluates the equilibrium conditions of the model solved for zero.
# Parameters
# Current variables
# Forward variables
# IS equation
# Fisher_equation
# Monetary policy
# Phillips curve
# Demand process
# Inflation process
# Stack equilibrium conditions into a numpy array
# Initialize the model into a variable named 'nk_model'
# Compute the steady state numerically using .compute_ss() method of nk_model
# Approximate and solve the model. Set loglinear argument to False since the model is already linear
```
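Since the cell above is deliberately left as scaffolding, here is a hedged sketch of the first two pieces only: the parameter Series and the equilibrium-conditions function. The variable names `y`, `pi`, `i`, `r`, `g`, `u` and the exact calling conventions are assumptions; the remaining steps (initializing `nk_model`, `.compute_ss()`, and approximating/solving the model) follow the week 9 discussion notebook. It uses the `np`/`pd` imports from the first cell.
```python
# Parameter values from the table above (illustrative layout)
parameters = pd.Series(dtype=float)
parameters['y_bar'] = 0
parameters['beta'] = 0.995
parameters['r_bar'] = -np.log(parameters['beta'])
parameters['kappa'] = 0.1
parameters['pi_T'] = 0.02 / 4
parameters['phi_pi'] = 1.5
parameters['phi_y'] = 0.5 / 4
parameters['rho_g'] = 0.5
parameters['rho_u'] = 0.5

def equilibrium_equations(variables_forward, variables_current, parameters):
    p = parameters
    cur = variables_current
    fwd = variables_forward

    # IS equation: y_t = E_t y_{t+1} - (r_t - r_bar) + g_t
    is_eq = fwd['y'] - (cur['r'] - p['r_bar']) + cur['g'] - cur['y']
    # Fisher equation: i_t = r_t + E_t pi_{t+1}
    fisher = cur['r'] + fwd['pi'] - cur['i']
    # Monetary policy rule
    policy = p['r_bar'] + p['pi_T'] + p['phi_pi'] * (cur['pi'] - p['pi_T']) + p['phi_y'] * (cur['y'] - p['y_bar']) - cur['i']
    # Phillips curve
    phillips = p['beta'] * (fwd['pi'] - p['pi_T']) + p['kappa'] * (cur['y'] - p['y_bar']) + cur['u'] - (cur['pi'] - p['pi_T'])
    # Exogenous demand and inflation processes
    g_proc = p['rho_g'] * cur['g'] - fwd['g']
    u_proc = p['rho_u'] * cur['u'] - fwd['u']

    return np.array([is_eq, fisher, policy, phillips, g_proc, u_proc])
```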
### Stochastic Simulation
Construct a stochastic simulation of the New-Keynesian model with the following properties:
1. 201 periods.
2. Seed for random number generator: 126.
Note that the shock covariance matrix is:
\begin{align}
\text{Covariance matrix} & = \left[\begin{array}{cc}\sigma_g^2 & 0\\ 0 & \sigma_u^2\end{array}\right]
\end{align}
Use the following values for $\sigma_g^2$ and $\sigma_u^2$ in the simulation:
| $\sigma_g^2$ | $\sigma_u^2$ |
|-------------------|-------------------|
| 0.002<sup>2</sup> | 0.001<sup>2</sup> |
Refer to the Notebook for the week 7 discussion section for an example of how to use `linearsolve` to construct a stochastic simulation.
```python
# Compute the simulation
```
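A minimal sketch of the covariance matrix described above (plain NumPy; how it is passed to the simulation routine is left to the week 7 example):
```python
# Diagonal shock covariance matrix for (g, u)
sigma_g2 = 0.002 ** 2
sigma_u2 = 0.001 ** 2
covariance_matrix = np.array([[sigma_g2, 0.0],
                              [0.0, sigma_u2]])
```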
### Analyze Simulation Results
Construct a plot with simulated output and inflation plotted together. Multiply simulated output by 100 and simulated inflation by 400 since, by convention, we always annualize inflation and interest rates.
```python
# Create a figure with dimensions 12x4. PROVIDED
fig = plt.figure(figsize=(12,4))
# Create the left axis. PROVIDED
ax1 = fig.add_subplot(1,1,1)
# Plot the simulated series for output (times 100) and inflation (times 400)
# Construct legend. PROVIDED
ax1.legend(loc='center left', bbox_to_anchor=(1, 0.5))
```
Compute the standard deviations of inflation $\pi_t$ (times 400), output $y_t$ (times 100), and the nominal interest rate $i_t$ (times 400) for the simulated series.
```python
```
Compute the coefficients of correlation of inflation $\pi_t$, output $y_t$, and the nominal interest rate $i_t$ for the simulated series.
```python
```
**Questions**
1. In the simulation data, which variable fluctuates more, inflation (times 400) or output (times 100)?
2. Do the simulations suggest a positive or negative correlation between output and inflation?
3. Note that for the simulations, it was assumed that the variances on the inflation and demand shocks were about the same. Based on the answer to your previous question, which shocks appear to have a dominant effect on the dynamics of the model?
**Answers**
1.
2.
3.
|
5df99fc1b50c563d3caa8e462cde31ca6f3c3ee2
| 11,512 |
ipynb
|
Jupyter Notebook
|
Homework/Econ126_Winter2020_Homework_08_blank.ipynb
|
t-hdd/econ126
|
17029937bd6c40e606d145f8d530728585c30a1d
|
[
"MIT"
] | null | null | null |
Homework/Econ126_Winter2020_Homework_08_blank.ipynb
|
t-hdd/econ126
|
17029937bd6c40e606d145f8d530728585c30a1d
|
[
"MIT"
] | null | null | null |
Homework/Econ126_Winter2020_Homework_08_blank.ipynb
|
t-hdd/econ126
|
17029937bd6c40e606d145f8d530728585c30a1d
|
[
"MIT"
] | null | null | null | 39.696552 | 586 | 0.392547 | true | 1,463 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.808067 | 0.839734 | 0.678561 |
__label__eng_Latn
| 0.978918 | 0.414857 |
# Calculus Computations No. 3: Indefinite Integrals, Part 1
### Student ID [_________] Class [_____] Class number [_____] Name [_______________]
The integrand: check whether
(1) it can be rearranged or simplified,
(2) it can be decomposed into partial fractions,
(3) trigonometric functions can be rewritten using identities,
(4) the denominator of a fraction can be rationalized,
(5) the denominator can be put into completed-square form.
Rules of integration
$$ \int cf(x) dx = c \int f(x) dx $$
$$ \int \{ f(x)\pm g(x)\} dx = \int f(x) dx \pm \int g(x) dx $$
Integration techniques
(1) Integration by substitution
(2) Integration by parts
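As a quick illustration of these two techniques with SymPy (an added example; `integrate` applies them automatically):
```python
from sympy import symbols, integrate, cos, exp

x = symbols('x')

# Substitution (u = x**2): ∫ 2x cos(x**2) dx = sin(x**2)
integrate(2*x*cos(x**2), x)

# Integration by parts: ∫ x e**x dx = (x - 1) e**x
integrate(x*exp(x), x)
```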
Starting from the basic integration formulas
```python
from sympy import *
x, n , y, a = symbols('x n y a')
init_printing()
m ='3//1'
i =0
```
```python
expr = 6*x**2
itg = Integral(expr,x)
i=i+1
print( 'No.',m,'-',i)
itg
```
```python
itg.doit()
```
```python
expr = (x-1)*(x+1)
itg = Integral(expr,x)
i=i+1
print( 'No.',m,'-',i)
itg
```
```python
itg.doit()
```
```python
expr = 2/x
itg = Integral(expr,x)
i=i+1
print( 'No.',m,'-',i)
itg
```
```python
itg.doit()
```
```python
expr = cbrt(x**2)
itg = Integral(expr,x)
i=i+1
print( 'No.',m,'-',i)
itg
```
```python
itg.doit()
```
```python
expr = 2*x + sin(x)
itg = Integral(expr,x)
i=i+1
print( 'No.',m,'-',i)
itg
```
```python
itg.doit()
```
```python
expr = sec(x)**2 +1/x
itg = Integral(expr,x)
i=i+1
print( 'No.',m,'-',i)
itg
```
```python
simplify(itg.doit())
```
```python
expr = 1-exp(x)
itg = Integral(expr,x)
i=i+1
print( 'No.',m,'-',i)
itg
```
```python
simplify(itg.doit())
```
```python
```
```python
```
|
101501a149db7cca175021c830f550a63998509f
| 31,524 |
ipynb
|
Jupyter Notebook
|
03_20181023-sekibun-1-Ex&ans.ipynb
|
kt-pro-git-1/Calculus_Differential_Equation-public
|
d5deaf117e6841c4f6ceb53bc80b020220fd4814
|
[
"MIT"
] | 1 |
2019-07-10T11:33:18.000Z
|
2019-07-10T11:33:18.000Z
|
03_20181023-sekibun-1-Ex&ans.ipynb
|
kt-pro-git-1/Calculus_Differential_Equation-public
|
d5deaf117e6841c4f6ceb53bc80b020220fd4814
|
[
"MIT"
] | null | null | null |
03_20181023-sekibun-1-Ex&ans.ipynb
|
kt-pro-git-1/Calculus_Differential_Equation-public
|
d5deaf117e6841c4f6ceb53bc80b020220fd4814
|
[
"MIT"
] | null | null | null | 55.893617 | 2,684 | 0.77506 | true | 717 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.882428 | 0.808067 | 0.713061 |
__label__roh_Latn
| 0.269096 | 0.495011 |
# Click "Edit App" to see the code
# Histogram and normal distribution
In this tutorial we'll learn how to read a CSV file into a _pands_ DataFrame, compute the average of the data in the second column, build a histogram and compare it to the _normal_ distribution.
# The Jupyter Notebook
Let's start by loading the usual Python packages.
```python
# python packages
import pandas as pd # Dataframes and reading CSV files
import numpy as np # Numerical libraries
import matplotlib.pyplot as plt # Plotting library
from lmfit import Model # Least squares fitting library
```
We can now create a _pandas_ DataFrame that contains the data we generated previously.
We can use the pandas function **read_csv** to read the file and store its contents in a DataFrame. We can also create two DataFrames, one for each of the two random number functions that were in the first week's notebook.
```python
data1 = pd.read_csv("../miscData/random1.csv")
data2 = pd.read_csv("../miscData/random2.csv")
```
We can then print the dataframe to see what it contains
```python
print("First DataFrame")
print(data1)
print("Second DataFrame")
print(data2)
```
For simplicity, we can change the names of the columns of the DataFrames to be the same.
This could be useful for referencing the content of the DataFrames later.
We don't need to print the entire DataFrame again, but we can just check that the headers have changed, and are the same.
```python
data1.columns = ("X","Y")
data2.columns = ("X","Y")
print(data1.columns)
print(data2.columns)
```
One useful piece of information about the DataFrame that we normally need to know is the number of lines it contains.
This can be achieved _e.g._ by _measuring_ the length of one column
```python
numberOfValues1 = len(data1["Y"])
numberOfValues2 = len(data2["Y"])
print("Number of data points in the first file :",numberOfValues1)
print("Number of data points in the second file :",numberOfValues2)
```
Let's now compute the average of the data in the second column.
The first column is just an index. The simplest way to do this is to use the NumPy function _mean_.
There are multiple ways of selecting the data in the second column of the dataframe.
Here we use the *name* of the column
```python
average1 = np.mean(data1["Y"])
average2 = np.mean(data2["Y"])
print("Averages, method #1 :",average1, average2)
```
Alternatively, we can use the **iloc** function, which allows us to specify the desired range of the data that we want to look at.
* Remember that python starts counting from zero and that the upper limit of the range is not included.
```python
average1 = np.mean(data1.iloc[0:numberOfValues1,1])
average2 = np.mean(data2.iloc[0:numberOfValues2,1])
print("Averages, method #2 :",average1,average2)
```
Although this is more complicated, we can also compute the average manually using a **for** loop.
```python
average1 = 0
for val in data1["Y"]:
average1 += val
average1 /= numberOfValues1
average2 = 0
for val in data2["Y"]:
average2 += val
average2 /= numberOfValues2
print("Averages, method #3 :",average1,average2)
```
Analogously, the standard deviation can be readily computed using the NumPy function _std_
```python
standardDeviation1 = np.std(data1["Y"])
standardDeviation2 = np.std(data2["Y"])
print("Standard deviations :",standardDeviation1,standardDeviation2)
```
Unfortunately, there is no NumPy function for the standard error, so we have to use the definition
\begin{equation}
StdErr = \frac{\sigma}{\sqrt{N}}
\end{equation}
where $\sigma$ is the standard deviation and $N$ is the total number of data points.
```python
standardError1 = np.std(data2["Y"]) / np.sqrt(numberOfValues1)
standardError2 = np.std(data2["Y"]) / np.sqrt(numberOfValues2)
print("Standard errors :",standardError1,standardError2)
```
Let's now compute the histogram of the data, and compare it with the "normal" distribution that you have seen in statistics. If our data obey the normal distribution, the "normalised" histogram should resemble a Gaussian function centred on the average of the data, whose width is given by the standard deviation of the data.
In order to compute the histogram we can use the function **histogram** in NumPy.
This function produces two arrays in output, one with the positions of the bins
and one with the height of each bar of the histogram.
In the example below we compute the histogram using 50 and 75 bins.
* Note that the "bins" arrays have one extra element. This is because they specify the left and right side of the bin, not its centre.
```python
histogram1 , bins1 = np.histogram(data1["Y"],bins=50)
histogram2 , bins2 = np.histogram(data2["Y"],bins=75)
print("Size of the histogram arrays :",len(histogram1),len(histogram2))
print("Size of the bins arrays :",len(bins1),len(bins2))
```
The histogram that numpy generates is just the tally of how many data points fall within each bin; it is not a probability. In fact, the integral of a probability must be equal to one
\begin{equation}
\int_{-\infty}^{+\infty} P(x)\ \mathrm{d}x = 1
\end{equation}
while the integral of the histogram, $h(x)$, that we have generated is equal to the number of values times the bins' width, which in this case is constant
\begin{equation}
\int_{-\infty}^{+\infty} h(x)\ \mathrm{d}x = \sum_i (h_i\ \mathrm{d}x) = N\ \mathrm{d}x
\end{equation}
Let's verify this
```python
print("Sum of the heights of the histogram bars :",
np.sum(histogram1),np.sum(histogram2))
dx1 = bins1[1] - bins1[0]
dx2 = bins2[1] - bins2[0]
integral1 = dx1*np.sum(histogram1)
integral2 = dx2*np.sum(histogram2)
print("Integrals of the histograms :",integral1,integral2)
```
We can therefore normalise the histograms to become _probabilities_ by dividing the height of each bar by the total area.
```python
histogram1 = histogram1 / integral1
histogram2 = histogram2 / integral2
integral1 = dx1*np.sum(histogram1)
integral2 = dx2*np.sum(histogram2)
print("Integrals of the histograms :",integral1,integral2)
```
In order to compare the histogram with the _normal_ distribution we need to define a function that returns the values of a normalised Gaussian.
\begin{equation}
G(x) = \frac{1}{\sigma\sqrt{2\pi}}\exp\Bigg[ -\frac{(x-x_0)^2}{2\sigma^2} \Bigg]
\end{equation}
where $x_0$ and $\sigma$ are the mean and standard deviation of the distribution.
```python
def gaussian(x,x0,std):
return np.exp(-0.5*(x-x0)**2 / std**2) / (std * np.sqrt(2*np.pi))
```
We can now evaluate the function at the positions of the _bins_ and put the values in an array.
* Note that the output of the function will have the same size of the input array.
```python
normalDistribution1 = gaussian(bins1,average1,standardDeviation1)
normalDistribution2 = gaussian(bins2,average2,standardDeviation2)
```
We now have all the elements we need to make a plot and compare the normalised histograms with the normal distributions.
We first have to create an object for the figure and its axes. Because we want to make two figures at once, we can use the **subplots** function of the _mathplotlib_ library with the _1,2_ options, which will generate 2 figures laid on one row and two columns. We can also define the _total_ size of the figure (12,6), which would produce two almost square graphs. We then add labels to the axes and the legend and display the figure.
* Note that we now have two set of axes, so _ax_ is a two dimensional array.
* Note how we can take all elements of an array bar the last one using **[:-1]**
```python
figure , ax = plt.subplots(1,2,figsize=(12,6))
ax[0].bar(bins1[:-1], histogram1, width=0.1, label="Histogram")
ax[0].plot(bins1, normalDistribution1, label="Gaussian", color='red')
ax[0].set(xlabel="Values")
ax[0].set(ylabel="Probability")
ax[0].legend()
ax[1].bar(bins2[:-1], histogram2,width=0.1, label="Histogram")
ax[1].plot(bins2, normalDistribution2, label="Gaussian", color='red')
ax[1].set(xlabel="Values")
ax[1].set(ylabel="Probability")
ax[1].legend()
plt.show()
# If you want we can save the figure to a file
# fig1.savefig("fig.png")
```
Note how the first set of data were uniformly distributed between 6 and 20, hence the standard deviation is not a measure of the uncertainty of the data. On the other hand, the data in the second set are _normally_ distributed, hence the standard deviation is indeed a measure of the uncertainty.
|
8997d3aa9be6b1cbb40d802fc3b764bd6cdc2086
| 13,991 |
ipynb
|
Jupyter Notebook
|
codeSnippets/1_averageAndHistogram.ipynb
|
praiteri/TeachingNotebook
|
75ee8baf8ef81154dffcac556d4739bf73eba712
|
[
"MIT"
] | null | null | null |
codeSnippets/1_averageAndHistogram.ipynb
|
praiteri/TeachingNotebook
|
75ee8baf8ef81154dffcac556d4739bf73eba712
|
[
"MIT"
] | null | null | null |
codeSnippets/1_averageAndHistogram.ipynb
|
praiteri/TeachingNotebook
|
75ee8baf8ef81154dffcac556d4739bf73eba712
|
[
"MIT"
] | 1 |
2022-02-23T11:36:12.000Z
|
2022-02-23T11:36:12.000Z
| 31.091111 | 443 | 0.594525 | true | 2,183 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.897695 | 0.879147 | 0.789206 |
__label__eng_Latn
| 0.992877 | 0.671922 |
# Thomson's "Multitaper" Estimator
This notebook is a demo & test of new multitaper estimator code.
**TODO**: the jackknife is not working in spawn mode
```python
import multiprocessing as mp
try:
mp.set_start_method('spawn')
except:
pass
```
```python
%matplotlib inline
```
```python
# basic stuff
import numpy as np
from scipy.signal import periodogram, detrend
import scipy.stats.distributions as dists
# Ecog stuff
import ecoglib.estimation.multitaper as mt
from ecogdata.devices.data_util import load_experiment_auto
from ecogdata.trigger_fun import extract_epochs
from ecoglib.vis.tile_images import tile_traces_1ax
from ecoglib.vis.ani import animate_frames
from ecoglib.vis.plot_util import filled_interval
# Other plotting stuff
import matplotlib.pyplot as plt
from matplotlib import rcParams
import seaborn as sns
sns.set_context('notebook')
rcParams['figure.figsize'] = 8, 6
```
## Discrete prolate spheroidal sequences and PSD basics
To begin, create a simple test sequence that is a mixture of cosines.
```python
# amplitudes in (2, 5)
amps = np.random.rand(5) * 3 + 2
# phases in (0, 2pi)
phases = np.random.rand(5) * 2 * np.pi
# frequencies in (0.1, 0.4)
mix_freqs = np.random.rand(5) * 0.3 + 0.1
# mixture of cosines
n = np.arange(1024)
x_parts = amps[:, np.newaxis] * np.cos(2 * np.pi * n * mix_freqs[:, np.newaxis] + phases[:, np.newaxis])
x = np.sum(x_parts, axis=0)
plt.figure()
plt.plot(n, x_parts.T + np.arange(2, 7) * 10, lw=0.25)
plt.plot(n, x, lw=0.5, color='k')
plt.yticks([])
sns.despine(left=True)
```
The NW parameter and the resulting eigenvalues relate to the "spectral concentration" property of DPSS. NW is a "time-bandwidth" product. The main lobe of the DPSS spectrum is about ±NW DFT frequency bins wide. Also, the ratio of energy inside the lobe versus total signal energy is given by the eigenvalue.
Since the DFT bin size is $1/N$, the simple (normalized) bandwidth identity is $W_{(n)}=NW(1/N)$. Note that
1. This is *half* of the full estimator bandwidth
1. To convert to frequency in Hz, multiply by the sampling rate: $W_{(f)}=NW(f_{s}/N)$
```python
NW = 3.5
mte = mt.MultitaperEstimator(len(x), NW, fs=1, low_bias=0.9)
mte.eigs
```
```python
f, axs = plt.subplots(3, 1, figsize=(6, 8))
lns = axs[0].plot(n, mte.dpss.T)
axs[0].set_xlabel('samples')
axs[0].legend(lns[:1], ('DPSS',))
freqs, spectra = periodogram(mte.dpss, fs=1.0, detrend=False, window='boxcar')
spectra = spectra[..., :len(freqs)]
band_limit = float(mte.NW) / len(x)
axs[1].semilogy(freqs, spectra.T)
axs[1].set_xlabel('Normalized frequency')
axs[2].semilogy(freqs, spectra.T, marker='.', ls='-')
axs[2].axvline(band_limit, color='k', linestyle='dashed')
axs[2].set_xlim(0, 15.0 / len(x))
axs[2].set_ylim(1e-8, 2)
axs[2].annotate('Concentration band-limit (NW / N)', (band_limit, 0.01), (band_limit * 1.5, 0.01),
arrowprops=dict(width=1))
_ = axs[2].set_xlabel('Normalized frequency')
f.tight_layout(pad=0)
```
The DPSS are progressively *less* concentrated as the order increases, as expected from the decreasing eigenvalues. Interestingly, these spectra, computed via plain periodogram, suffer from the broad-band bias (spectral leakage out of the main lobe) that tapering is supposed to address. The next plot shows the reduction of broad-band bias using a single taper (Hamming window) versus the square wave taper.
The advantage of multitaper is to use *multiple* orthogonal tapers to create uncorrelated estimates that can be averaged to reduce variance.
```python
plt.figure()
freqs, spectra = periodogram(mte.dpss, fs=1.0, detrend=False, window='boxcar')
lns_a = plt.semilogy(freqs, spectra.T, lw=1, color='k')
freqs, spectra = periodogram(mte.dpss, fs=1.0, detrend=False, window='hamming')
lns_b = plt.semilogy(freqs, spectra.T, lw=1, color='r')
plt.legend(lns_a[:1] + lns_b[:1], ('Flat taper', 'Hamming taper'))
plt.title('Hamming window: broad-band bias reduction')
```
So the multitaper estimate is essentially an average of $K$ individual "direct spectral estimates" of the series $x(t)$. Let $v_{k}(t)$ be the k'th DPSS taper (technically also uniquely parameterized by $(N, NW)$). A direct estimate is
\begin{equation}
\hat{S}_{k}(\omega)=\left|y_{k}(\omega)\right|^{2}
\tag{1}
\end{equation}
with $y_{k}(\omega)$ being the DFT of the signal-taper product.
\begin{equation}
y_{k}(\omega)=\sum_{t=0}^{N-1} x(t)v_{k}(t)\exp \{-i\omega t\}
\tag{2}
\end{equation}
It is in fact a tapered periodogram estimate, but all $\hat{S}_{k}(\omega)$ are (basically) uncorrelated. The normalization for a periodogram estimate is usually $1/N$, but this is not needed with the orthonormal taper.
The final multitaper estimator is the average (or weighted average) of these functions
\begin{equation}
\hat{S}(\omega)=\frac{1}{K}\sum_{k=0}^{K-1}\hat{S}_{k}(\omega)
\tag{3}
\end{equation}
Multiplying by a factor of two is conventional to count power from the negative side of the spectrum (which is then discarded).
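For intuition, eqs. (1)-(3) can be written out directly with NumPy/SciPy. This is a bare-bones sketch with uniform weights: no eigenvalue screening, no adaptive weighting, and no special handling of the DC/Nyquist bins.
```python
import numpy as np
from scipy.signal.windows import dpss

def simple_mt_psd(x, NW=3.5, Kmax=None, fs=1.0):
    """Average of tapered periodograms (eqs. 1-3) with uniform weights."""
    N = len(x)
    if Kmax is None:
        Kmax = int(2 * NW) - 1                    # common rule of thumb for the number of tapers
    tapers = dpss(N, NW, Kmax)                    # shape (Kmax, N), orthonormal rows
    yk = np.fft.rfft(tapers * x, axis=-1)         # eq. (2): DFT of each taper-signal product
    Sk = np.abs(yk) ** 2                          # eq. (1): direct spectral estimates
    S = 2 * Sk.mean(axis=0) / fs                  # eq. (3): average, one-sided, per-Hz density
    freqs = np.fft.rfftfreq(N, d=1.0 / fs)
    return freqs, S
```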
```python
freqs, mt_psd = mte.compute_psd(x, adaptive_weights=True)
freqs, pg_psd = periodogram(x, window='boxcar', detrend=False)
freqs, ht_psd = periodogram(x, window='hamming', detrend=False)
f, axs = plt.subplots(2, 1)
axs[0].semilogy(freqs, np.c_[pg_psd, ht_psd, mt_psd])
axs[0].legend(('P-gram (boxcar)', 'P-gram (hamming)', 'Multitaper'))
axs[1].semilogy(freqs, np.c_[pg_psd, ht_psd, mt_psd])
axs[1].set_xlim(mix_freqs[2] - band_limit * 4, mix_freqs[2] + band_limit * 4)
```
The Hamming window improves the broad-band bias compared to a plain periodogram. The multitaper estimator--especially using adaptive weighting for summing low power estimates--has by far the lowest leakage bias. However this comes at the expense of bias within the ±W frequency window. With this test signal, the **true** power spectral density (the quantity we're trying to estimate) is a collection of delta functions at the mixed frequencies. Each type of taper (square, Hamming, MT) blurs that line to an extent: the full power of each component is spread across the estimator bandwidth. This implies that larger NW values would blur the line value across a larger window (more bias).
The full variance is given by $E[(x-\bar{x})^{2}]=\int_{0}^{f_{s}} P_{x}(f)df$
```python
print('Input variance', np.var(x))
print('MT variance:', np.trapz(mt_psd, x=freqs))
print('P-gram(BC) variance:', np.trapz(pg_psd, x=freqs))
print('P-gram(H) variance:', np.trapz(ht_psd, x=freqs))
```
## PSD for real data
Grabbing a random $2^{14}$ point segment from a rat auditory cortex recording.
```python
data = load_experiment_auto('viventi/2017-11-28_acute', 'test_003', mapped=True)
```
```python
n0 = int(100 * data.Fs)
n_pts = 2 ** 14
rando_data = detrend(data.data[:, n0:n0 + n_pts], axis=1)
fig = tile_traces_1ax(rando_data, p=data.chan_map, twin=(0, 1e3 * n_pts / data.Fs), calib_unit='uV', linewidths=0.25)
```
Despite the larger in-window bias, the multitaper PSD has much less estimator variance than either the boxcar or Hamming-window periodogram. Note the amount of deviation from the $1/f$ LFP + transistor noise spectrum in the different estimates.
```python
x = rando_data[data.chan_map.lookup(3, 0)]
# Using the "psd" class method
freqs, mt_psd_1 = mt.MultitaperEstimator.psd(x, NW=3.5, fs=data.Fs, adaptive_weights=True)
freqs, mt_psd_2 = mt.MultitaperEstimator.psd(x, NW=7, fs=data.Fs, adaptive_weights=True)
freqs, pg_psd = periodogram(x, window='boxcar', detrend=False, fs=data.Fs)
freqs, ht_psd = periodogram(x, window='hamming', detrend=False, fs=data.Fs)
f, axs = plt.subplots(1, 1)
axs.loglog(freqs, np.c_[pg_psd, ht_psd, mt_psd_1, mt_psd_2], alpha=0.6)
axs.set_ylim(bottom=1e-3)
axs.set_xlim(left=0.5)
axs.legend(('P-gram (boxcar)', 'P-gram (hamming)', 'Multitaper (NW=3.5)', 'Multitaper (NW=7)'))
```
The multitaper estimate's integrated variance is closer to the sample variance than that of a simple tapered periodogram. The naked periodogram is essentially identical to the sample variance because of Parseval's theorem of the DFT--in short, the sum of squares is the same under $x(t)$ and $X(\omega_k)$.
```python
print('Input variance', np.var(x))
print('MT(NW=3.5) variance:', np.trapz(mt_psd_1, x=freqs))
print('MT(NW=7) variance:', np.trapz(mt_psd_2, x=freqs))
print('P-gram(BC) variance:', np.trapz(pg_psd, x=freqs))
print('P-gram(H) variance:', np.trapz(ht_psd, x=freqs))
```
## PSD estimator variance & confidence intervals
We can explicitly find the variance of the spectral estimator, and from that variance form confidence intervals. Confidence intervals can be estimated two ways.
* intervals based on a $\chi{}^{2}_{2K}$ distribution
* intervals based on the Jackknife standard error estimate and Student's $t_{2K-1}$ distribution
### $\chi^{2}$ argument
The $\chi{}^{2}$ interval is derived from normality assumptions about the timeseries (serial independence, stationarity). Both these assumptions are particularly bad for LFP, btw. For periodograms in general, the estimator can be considered the sum of squares:
\begin{equation}
\hat{S}_{\omega}=\left(A_{\omega}^{2}+B_{\omega}^{2}\right)
\tag{4}
\end{equation}
and $A$ and $B$ are the cosine and sine components of the DFT of taper-signal product $v(t)x(t)$.
If the series $x(t)$ is iid zero mean Gaussian with variance $\sigma^{2}_{x}$, then $A$ and $B$ are both sums of Gaussians (thus also Gaussian). The expected value is $\sum E\{x(t)\}\cos\omega t=0$ and variance is proportional to $\sigma_{x}^{2}$ (in the case of eq 2 above, $Var(A_{\omega})=\sigma_{x}^{2}/2$). So the appropriately normalized sum of squares in eq 4 is distributed as $\chi^{2}$ with 2 degrees of freedom (DOF)
\begin{equation}
\frac{A_{\omega}^{2}+B_{\omega}^{2}}{\sigma_{x}^{2}/2}=\frac{2\hat{S}_{\omega}}{\sigma_{x}^{2}}\sim \chi^{2}_{2}
\end{equation}
Asymptotically (as $N\rightarrow \infty$) $\sigma_{x}^{2}$ is replaced by the *true* power spectral density (the quantity we're trying to estimate) for this fundamental result
\begin{equation}
\frac{2\hat{S}_{\omega}}{S_{\omega}}\sim \chi^{2}_{2}
\end{equation}
For the multitaper estimator, $K\hat{S}^{(mt)}(\omega)=\sum_{k=0}^{K-1}\hat{S}_{k}(\omega)$ is a sum of more uncorrelated squared Gaussians, which just increases the $\chi^{2}$ DOF
\begin{equation}
\frac{2K\hat{S}^{(mt)}_{\omega}}{S_{\omega}}\sim \chi_{2K}^{2}
\end{equation}
So the ratio of the PSD estimator to the actual PSD is $\chi^{2}$, which means we can calculate a confidence interval for the lowest and highest points that the ratio should reach. That CI can then be manipulated to show the CI for the PSD itself. The 95% CI is
\begin{equation}
\frac{2K\hat{S}^{(mt)}_{\omega}}{\chi^{2}_{0.975,2K}}\lt S_{\omega} \lt \frac{2K\hat{S}^{(mt)}_{\omega}}{\chi^{2}_{0.025,2K}}
\end{equation}
where $\chi^{2}_{\alpha,2K}$ denotes the inverse cumulative distribution function at point $\alpha$. If the direct spectral estimates are weighted with the adaptive weighting scheme, then the effective DOF can be slightly different at each frequency, as a function of weights. In this case
\begin{equation}
\hat{S}^{(mt)}=\frac{\sum_{k=0}^{K-1}\left|d_{k}(\omega)\right|^{2}\hat{S}_{k}(\omega)}{\sum_{k=0}^{K-1}\left|d_{k}(\omega)\right|^{2}}
\end{equation}
and the effective DOF is $\nu(\omega)=2\sum_{k=0}^{K-1}\left|d_{k}(\omega)\right|^{2}$.
For the standard case, the bounds get clearly tighter with additional estimates averaged in
```python
tk = np.arange(1, 8) * 2
print('upper bound scaling:', tk / dists.chi2.ppf(0.025, tk))
print('lower bound scaling:', tk / dists.chi2.ppf(0.975, tk))
plt.figure()
plt.semilogy(tk, tk / dists.chi2.ppf(0.025, tk), label='upper scaling', marker='*', ms=10)
plt.semilogy(tk, tk / dists.chi2.ppf(0.975, tk), label='lower scaling', marker='*', ms=10)
plt.axhline(1, color='k')
plt.legend()
plt.xlabel('DOF')
```
### Jackknife variance
The "Jackknife" estimate of variance follows from a leave-one-out resampling technique called the [jackknife](https://en.wikipedia.org/wiki/Jackknife_resampling). For the purpose of the multitaper estimator of a PSD, the jackknife calculates $K$ versions of eq 3 leaving out one direct spectral estimate at a time. Call these $\hat{S}_{-k}(\omega)$, and their average is $\bar{S}(\omega)$. The jackknife variance of an estimator is just the scaled sample variance of these $K$ sub-estimators:
\begin{equation}
\operatorname{var}\left[S^{(mt)}(\omega)\right]=\frac{n-1}{n}\sum_{k=0}^{K-1}\left(\hat{S}_{-k}(\omega)-\bar{S}(\omega)\right)^{2}
\end{equation}
and the standard error (SE) is the square root of this variance. Using the SE, the normal CI is based on Student's t distribution with $K-1$ degrees of freedom.
\begin{equation}
\hat{S}^{(mt)}(\omega)-(SE)t_{0.975,K-1}\lt S(\omega)\lt\hat{S}^{(mt)}(\omega)+(SE)t_{0.975,K-1}
\end{equation}
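Schematically, the leave-one-out calculation over the direct estimates looks like the sketch below (the library may work on log-spectra and handle weighting differently; this operates on raw values for brevity):
```python
import numpy as np
from scipy.stats import t

def jackknife_ci(Sk, alpha=0.05):
    """Jackknife CI across K direct spectral estimates Sk, shape (K, n_freqs)."""
    K = Sk.shape[0]
    S_full = Sk.mean(axis=0)
    S_loo = (Sk.sum(axis=0) - Sk) / (K - 1)             # leave-one-out averages, shape (K, n_freqs)
    S_bar = S_loo.mean(axis=0)
    jn_var = (K - 1) / K * ((S_loo - S_bar) ** 2).sum(axis=0)
    se = np.sqrt(jn_var)
    t_crit = t.ppf(1 - alpha / 2, K - 1)
    return S_full - t_crit * se, S_full + t_crit * se
```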
```python
x = rando_data[data.chan_map.lookup(3, 0)]
# Using the "psd" class method
mte_1 = mt.MultitaperEstimator(len(x), NW=3.5, fs=data.Fs, low_bias=True)
dof_1 = 2 * len(mte_1.dpss)
freqs, mt_psd_1, ci_1 = mte_1.compute_psd(x, adaptive_weights=True, ci=True)
mte_2 = mt.MultitaperEstimator(len(x), NW=7, fs=data.Fs, low_bias=True)
dof_2 = 2 * len(mte_2.dpss)
freqs, mt_psd_2, ci_2 = mte_2.compute_psd(x, adaptive_weights=True, ci=True)
freqs, mt_psd_3, ci_3 = mt.MultitaperEstimator.psd(x, NW=7, fs=data.Fs, adaptive_weights=True,
ci=True, jackknife=True, jn_jobs=4)
f, axs = plt.subplots(2, 1, sharex=True, sharey=True)
axs[0].fill_between(freqs, ci_1[0], ci_1[1], color=(0.2, 0.2, 0.2), label='Chi2 {} dof'.format(dof_1))
axs[0].fill_between(freqs, ci_2[0], ci_2[1], color=(0.6, 0.6, 0.6), label='Chi2 {} dof'.format(dof_2))
axs[1].fill_between(freqs, ci_3[0], ci_3[1], color=(0.6, 0.2, 0.2), label='Jackknife SE')
axs[1].set_yscale('log')
axs[1].set_xscale('log')
# axs.loglog(freqs, mt_psd_3)
f_cutoff = freqs.searchsorted(freqs.max() * 0.9)
axs[1].set_ylim(bottom=0.25 * ci_1[0][f_cutoff])
axs[1].set_xlim(left=0.5)
axs[0].legend()
axs[1].legend()
_ = axs[0].set_title('Relative chi-squared confidence intervals')
```
## Cross spectral density (CSD)
For this section, we'll focus on an evoked tone response from this dataset.
```python
fft_pts = 256
all_tone_responses = extract_epochs(data.data, data.exp, pre=64, post=fft_pts - 64)
all_tone_responses = detrend(all_tone_responses, type='linear', axis=-1)
row_tone_responses = np.array([all_tone_responses[data.chan_map.lookup(3, i)] for i in range(8)])
tone_ss = np.sum(np.sum(row_tone_responses ** 2, axis=0), axis=1)
big_tone = np.argsort(tone_ss)[int(0.9 * len(tone_ss))]
```
```python
fig = tile_traces_1ax(all_tone_responses[:, big_tone],
p=data.chan_map,
twin=(-64 / data.Fs, (fft_pts - 64) / data.Fs),
calib_unit='uV',
linewidths=0.5)
```
```python
frames = data.chan_map.embed(all_tone_responses[:, big_tone].T, axis=1)
time = (np.arange(frames.shape[0]) - 64) * 1e3 / data.Fs
animate_frames(frames, notebook=True, time=time, axis_toggle='off', colorbar=True)
```
This response has, in general, a right-to-left sweep. Earliest onset on the right-side electrodes and then sequential activation moving left. Cross spectral density (CSD) is a tool to look at magnitude and phase of covarying parts of the signal power per frequency. We should expect to see a structured phase lag between electrodes along the row transect.
The `MultitaperEstimator.csd` class method returns a "matrix" of cross spectral densities for the signals in `x`. Each row i and column j in the matrix has the CSD $C_{ij}(f)$. Like a covariance matrix, this is somewhat redundant with $C_{ij}(f)=C_{ji}^{*}(f)$ (* is complex conjugate). The conjugation just reflects that the phase relationship between two signals is reversed from the perspective of one signal versus the other. The *diagonal* of the matrix is actually equivalent to the normal PSDs of the signals.
```python
# x = np.row_stack([rando_data[data.chan_map.lookup(3, i)] for i in range(8)])
x = row_tone_responses[:, big_tone]
freqs, csd_matrix = mt.MultitaperEstimator.csd(x, NW=2.5, fs=data.Fs, adaptive_weights=True)
```
```python
with sns.husl_palette(n_colors=8):
f, axs = plt.subplots(2, 1, sharex=True, figsize=(8, 8))
axs[0].loglog(freqs, np.abs(csd_matrix[7, :7]).T, alpha=0.5)
axs[0].loglog(freqs, csd_matrix[7, 7].real, color='k', alpha=0.5, ls='--')
# axs[0].set_xlim(left=2, right=15)
labels = ['(3,7) to (3,{})'.format(i) for i in range(7)]
labels.append('(3,7) PSD')
axs[0].legend(labels, ncol=2)
axs[0].set_title('Cross-spectral density (power)')
axs[1].semilogx(freqs, np.unwrap(np.angle(csd_matrix[7, :7])).T, alpha=0.5)
# axs[1].semilogx(freqs, np.angle(csd_matrix[7, :7]).T, alpha=0.5)
axs[1].set_title('Cross-spectral density (phase)')
axs[1].set_ylim(-1 * np.pi, 3 * np.pi)
```
The CSD reflects two things about the response.
1. The covarying power between ~10-50 Hz decreases from right to left: i.e. as a function of distance
1. The covarying part of the response has a phase lag that increases from right to left.
The phase tells us the response happens later on channel (3, 0) than on (3, 7) because the phase difference is positive. This is most clear up to ~40 Hz. After 40 Hz the phase estimates become pretty ragged, partly due to phase wrapping but mostly due to truly random noise. Phase is unwrapped here, but after a certain point the unwrapping is a random walk.
### Coherence: normalized CSD
If CSD is analogous to the covariance, we can calculate an analogous correlation coefficient called coherence. Coherence is the CSD normalized by the square-root product of the two signals' PSDs. Its magnitude is directly analogous to the correlation coefficient. Its phase is equal to the CSD phase (since the normalization introduces no new phase). For this reason, magnitude squared coherence (MSC) is usually calculated.
MSC is a *very* noisy estimator, and it typically needs averaging over several trials.
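In terms of the CSD matrix computed above, the MSC between channels i and j is simply (a sketch using the definitions just given):
```python
import numpy as np

def msc_from_csd(csd_matrix, i, j):
    """Magnitude squared coherence |C_ij|^2 / (C_ii * C_jj) from a CSD 'matrix'."""
    Sij = csd_matrix[i, j]
    Sii = csd_matrix[i, i].real
    Sjj = csd_matrix[j, j].real
    return np.abs(Sij) ** 2 / (Sii * Sjj)
```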
```python
all_8k = np.where(data.exp.tones == 8000)[0][:10]
```
```python
coh_specs = []
row_sites = [data.chan_map.lookup(3, i) for i in range(8)]
for trial in all_8k:
x = np.array([all_tone_responses[chan, trial] for chan in row_sites])
coh_spec = mt.coherence(x, 2.5, msc=True)
coh_specs.append(coh_spec)
coh_spec_avg = np.mean(coh_specs, axis=0)
```
```python
with sns.husl_palette(n_colors=8):
f, axs = plt.subplots(1, 1, sharex=True)
axs.semilogx(freqs, np.abs(coh_spec_avg[7, 0:7]).T ** 2, alpha=0.5)
labels = ['(3,7) to (3,{})'.format(i) for i in range(7)]
axs.legend(labels, ncol=2)
axs.set_ylabel('Mag. squared coherence')
```
The jackknife can be used to find the estimator variance and compute a confidence interval. This gives some insight into the reliability of MSC on short windows. The confidence interval improves with higher NW, but the bandwidth resolution goes down. Note that when using a jackknife, you really want more than ~3 samples. For this reason, we'll tune down the eigenvalue threshold from a default 0.99 to 0.9 for DPSS with NW=2.5.
```python
coh_spec_1, ci_1 = mt.coherence(x, 2.5, msc=True, ci=True, low_bias=.9, jn_jobs=4)
bw_1 = mt.nw2bw(2.5, x.shape[-1], data.Fs)
coh_spec_2, ci_2 = mt.coherence(x, 5, msc=True, ci=True, low_bias=True, jn_jobs=4)
bw_2 = mt.nw2bw(6.5, x.shape[-1], data.Fs)
```
```python
f, ax = plt.subplots(1, 1)
l1 = 'BW={:.1f}'.format(bw_1)
l2 = 'BW={:.1f}'.format(bw_2)
filled_interval(ax.plot, freqs, coh_spec_1[7, 0], ci_1[:, 7, 0], alpha=0.5, ax=ax, label=l1)
filled_interval(ax.plot, freqs, coh_spec_2[7, 0], ci_2[:, 7, 0], alpha=0.5, ax=ax, label=l2)
ax.legend()
# ax.set_xscale('log')
```
The higher resolution estimator shows a lot more peaks than the lower resolution estimator. But the CIs are pretty big, so hard to say if it's legit.
## Todo: higher order spectra, spectrograms
|
0a1a87e2fffab9ced868788530479ed468705425
| 27,506 |
ipynb
|
Jupyter Notebook
|
docs/source/usage_demos/multitaper_estimator.ipynb
|
miketrumpis/ecoglib
|
2ecc5bc64920d96e01297cce5472d4b4797c3a7d
|
[
"BSD-3-Clause"
] | 1 |
2021-11-06T21:39:01.000Z
|
2021-11-06T21:39:01.000Z
|
docs/source/usage_demos/multitaper_estimator.ipynb
|
miketrumpis/ecoglib
|
2ecc5bc64920d96e01297cce5472d4b4797c3a7d
|
[
"BSD-3-Clause"
] | null | null | null |
docs/source/usage_demos/multitaper_estimator.ipynb
|
miketrumpis/ecoglib
|
2ecc5bc64920d96e01297cce5472d4b4797c3a7d
|
[
"BSD-3-Clause"
] | 1 |
2022-01-10T20:40:18.000Z
|
2022-01-10T20:40:18.000Z
| 41.424699 | 696 | 0.605504 | true | 6,221 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.861538 | 0.833325 | 0.717941 |
__label__eng_Latn
| 0.917912 | 0.506349 |
# Unrestricted Open-Shell Hartree-Fock
In the first two tutorials in this module, we wrote programs which implement a closed-shell formulation of Hartree-Fock theory using restricted orbitals, aptly named Restricted Hartree-Fock (RHF). In this tutorial, we will abandon strictly closed-shell systems and the notion of restricted orbitals, in favor of a more general theory known as Unrestricted Hartree-Fock (UHF) which can accommodate more diverse molecules. In UHF, the orbitals occupied by spin up ($\alpha$) electrons and those occupied by spin down ($\beta$) electrons no longer have the same spatial component, e.g.,
$$\chi_i({\bf x}) = \begin{cases}\psi^{\alpha}_j({\bf r})\alpha(\omega) \\ \psi^{\beta}_j({\bf r})\beta(\omega)\end{cases},$$
meaning that they will not have the same orbital energy. This relaxation of orbital constraints allows for more variational flexibility, which leads to UHF always being able to find a lower total energy solution than RHF.
## I. Theoretical Overview
In UHF, we seek to solve the coupled equations
\begin{align}
{\bf F}^{\alpha}{\bf C}^{\alpha} &= {\bf SC}^{\alpha}{\bf\epsilon}^{\alpha} \\
{\bf F}^{\beta}{\bf C}^{\beta} &= {\bf SC}^{\beta}{\bf\epsilon}^{\beta},
\end{align}
which are the unrestricted generalizations of the restricted Roothaan equations, called the Pople-Nesbet-Berthier equations. Here, the one-electron Fock matrices are given by
\begin{align}
F_{\mu\nu}^{\alpha} &= H_{\mu\nu} + (\mu\,\nu\mid\lambda\,\sigma)[D_{\lambda\sigma}^{\alpha} + D_{\lambda\sigma}^{\beta}] - (\mu\,\lambda\,\mid\nu\,\sigma)D_{\lambda\sigma}^{\beta}\\
F_{\mu\nu}^{\beta} &= H_{\mu\nu} + (\mu\,\nu\mid\,\lambda\,\sigma)[D_{\lambda\sigma}^{\alpha} + D_{\lambda\sigma}^{\beta}] - (\mu\,\lambda\,\mid\nu\,\sigma)D_{\lambda\sigma}^{\alpha},
\end{align}
where the density matrices $D_{\lambda\sigma}^{\alpha}$ and $D_{\lambda\sigma}^{\beta}$ are given by
\begin{align}
D_{\lambda\sigma}^{\alpha} &= C_{\sigma i}^{\alpha}C_{\lambda i}^{\alpha}\\
D_{\lambda\sigma}^{\beta} &= C_{\sigma i}^{\beta}C_{\lambda i}^{\beta}.
\end{align}
Unlike for RHF, the orbital coefficient matrices ${\bf C}^{\alpha}$ and ${\bf C}^{\beta}$ are of dimension $M\times N^{\alpha}$ and $M\times N^{\beta}$, where $M$ is the number of AO basis functions and $N^{\alpha}$ ($N^{\beta}$) is the number of $\alpha$ ($\beta$) electrons. The total UHF energy is given by
\begin{align}
E^{\rm UHF}_{\rm total} &= E^{\rm UHF}_{\rm elec} + E^{\rm BO}_{\rm nuc},\;\;{\rm with}\\
E^{\rm UHF}_{\rm elec} &= \frac{1}{2}[({\bf D}^{\alpha} + {\bf D}^{\beta}){\bf H} +
{\bf D}^{\alpha}{\bf F}^{\alpha} + {\bf D}^{\beta}{\bf F}^{\beta}].
\end{align}
## II. Implementation
In any SCF program, there will be several common elements which can be abstracted from the program itself into separate modules, classes, or functions to 'clean up' the code that will need to be written explicitly; examples of this concept can be seen throughout the Psi4Julia reference implementations. For the purposes of this tutorial, we can achieve some degree of code cleanup without sacrificing readability and clarity by focusing on abstracting only the parts of the code which are both
- Lengthy subroutines, and
- Used repeatedly.
In our UHF program, let's use what we've learned in the last tutorial by also implementing DIIS convergence acceleration for our SCF iterations. With this in mind, the two subroutines that would benefit most from abstraction are
1. Orthogonalize & diagonalize Fock matrix
2. Extrapolate previous trial vectors for new DIIS solution vector
Before we start writing our UHF program, let's try to write functions which can perform the above tasks so that we can use them in our implementation of UHF. Recall that defining functions in Julia has the following syntax:
~~~julia
function function_name(args; kwargs)
# function block
return_values
end
~~~
A thorough discussion of defining functions in Julia can be found [here](https://docs.julialang.org/en/v1/manual/functions/index.html "Go to Julia docs"). First, let's write a function which can diagonalize the Fock matrix and return the orbital coefficient matrix **C** and the density matrix **D**. From our RHF tutorial, this subroutine is executed with:
~~~julia
F_p = A * F * A
e, C_p = eigen(Hermitian(F_p))
C = A * C_p
C_occ = C[:, 1:ndocc]
D = C_occ * C_occ'
~~~
Examining this code block, there are three quantities which must be specified beforehand:
- Fock matrix, **F**
- Orthogonalization matrix, ${\bf A} = {\bf S}^{-1/2}$
- Number of doubly occupied orbitals, `ndocc`
However, since the orthogonalization matrix **A** is a static quantity (only built once, then left alone) we may choose to leave **A** as a *global* quantity, instead of an argument to our function. In the cell below, using the code snippet given above, write a function `diag_F()` which takes **F** and the number of orbitals `norb` as arguments, and returns **C** and **D**:
```julia
# ==> Define function to diagonalize F <==
function diag_F(F, norb, A)
F_p = A * F * A
e, C_p = eigen(Hermitian(F_p))
C = A * C_p
C_occ = C[:, 1:norb]
D = C_occ * C_occ'
C, D
end
```
Next, let's write a function to perform DIIS extrapolation and generate a new solution vector. Recall that the DIIS-accelerated SCF algorithm is:
#### Algorithm 1: DIIS within a generic SCF Iteration
1. Compute **F**, append to list of previous trial vectors
2. Compute AO orbital gradient **r**, append to list of previous residual vectors
3. Compute RHF energy
3. Check convergence criteria
- If RMSD of **r** sufficiently small, and
- If change in SCF energy sufficiently small, break
4. Build **B** matrix from previous AO gradient vectors
5. Solve Pulay equation for coefficients $\{c_i\}$
6. Compute DIIS solution vector **F_DIIS** from $\{c_i\}$ and previous trial vectors
7. Compute new orbital guess with **F_DIIS**
In our function, we will perform steps 4-6 of the above algorithm. What information will we need to provide our function in order to do so? To build **B** (step 4 above) in the previous tutorial, we used:
~~~julia
# Build B matrix
B_dim = length(F_list) + 1
B = zeros(B_dim, B_dim)
B[end, :] .= -1
B[: , end] .= -1
B[end, end] = 0
for i in eachindex(F_list), j in eachindex(F_list)
B[i, j] = dot(DIIS_RESID[i], DIIS_RESID[j])
end
~~~
Here, we see that we must have all previous DIIS residual vectors (`DIIS_RESID`), as well as knowledge about how many previous trial vectors there are (for the dimension of **B**). To solve the Pulay equation (step 5 above):
~~~julia
# Build RHS of Pulay equation
rhs = zeros(B_dim)
rhs[end] = -1
# Solve Pulay equation for c_i's with Julia's left-division
coeff = B \ rhs
~~~
For this step, we only need the dimension of **B** (which we computed in step 4 above) and a Julia routine, so this step doesn't require any additional arguments. Finally, to build the DIIS Fock matrix (step 6):
~~~julia
# Build DIIS Fock matrix
F = zeros(size(F_list[1]))
for x in 1:length(coeff) - 1
F += coeff[x] * F_list[x]
end
~~~
Clearly, for this step, we need to know all the previous trial vectors (`F_list`) and the coefficients we generated in the previous step. In the cell below, write a function `diis_xtrap()` according to Algorithm 1 steps 4-6, using the above code snippets, which takes a list of previous trial vectors `F_list` and residual vectors `DIIS_RESID` as arguments and returns the new DIIS solution vector `F_DIIS`:
```julia
# ==> Build DIIS Extrapolation Function <==
function diis_xtrap(F_list, DIIS_RESID)
# Build B matrix
B_dim = length(F_list) + 1
B = zeros(B_dim, B_dim)
B[end, :] .= -1
B[: , end] .= -1
B[end, end] = 0
for i in eachindex(F_list), j in eachindex(F_list)
B[i, j] = dot(DIIS_RESID[i], DIIS_RESID[j])
end
# Build RHS of Pulay equation
rhs = zeros(B_dim)
rhs[end] = -1
# Solve Pulay equation for c_i's with Julia
coeff = B \ rhs
# Build DIIS Fock matrix
F = zeros(size(F_list[1]))
for i in 1:length(coeff) - 1
F += coeff[i] * F_list[i]
end
F
end
```
We are now ready to begin writing our UHF program! Let's begin by importing <span style='font-variant: small-caps'> Psi4 </span>, NumPy, TensorOperations, LinearAlgebra, and defining our molecule & basic options:
```julia
# ==> Import Psi4 & NumPy <==
using PyCall: pyimport
psi4 = pyimport("psi4")
np = pyimport("numpy") # used only to cast to Psi4 arrays
using TensorOperations: @tensor
using LinearAlgebra: Diagonal, Hermitian, eigen, tr, norm, dot
using Printf: @printf
```
```julia
# ==> Set Basic Psi4 Options <==
# Memory specification
psi4.set_memory(Int(5e8))
numpy_memory = 2
# Set output file
psi4.core.set_output_file("output.dat", false)
# Define Physicist's water -- don't forget C1 symmetry!
mol = psi4.geometry("""
O
H 1 1.1
H 1 1.1 2 104
symmetry c1
""")
# Set computation options
psi4.set_options(Dict("basis" => "cc-pvdz",
"scf_type" => "pk",
"e_convergence" => 1e-8,
"guess" => "core",
"reference" => "uhf"))
```
You may notice that in the above `psi4.set_options()` block, there are two additional options -- namely, `'guess': 'core'` and `'reference': 'uhf'`.  These options make sure that when we ultimately check our program against <span style='font-variant: small-caps'> Psi4</span>, the options <span style='font-variant: small-caps'> Psi4 </span> uses are identical to our implementation.  Next, let's define the options for our UHF program; we can borrow these options from our RHF implementation with DIIS acceleration that we completed in our last tutorial.
```julia
# ==> Set default program options <==
# Maximum SCF iterations
MAXITER = 40
# Energy convergence criterion
E_conv = 1.0e-6
D_conv = 1.0e-3
```
Static quantities like the ERI tensor, core Hamiltonian, and orthogonalization matrix have exactly the same form in UHF as in RHF. Unlike in RHF, however, we will need the number of $\alpha$ and $\beta$ electrons. Fortunately, both these values are available through querying the Wavefunction object. In the cell below, generate these static objects and compute each of the following:
- Number of basis functions, `nbf`
- Number of alpha electrons, `nalpha`
- Number of beta electrons, `nbeta`
- Number of doubly occupied orbitals, `ndocc` (Hint: In UHF, there can be unpaired electrons!)
```julia
# ==> Compute static 1e- and 2e- quantities with Psi4 <==
# Class instantiation
wfn = psi4.core.Wavefunction.build(mol, psi4.core.get_global_option("basis"))
mints = psi4.core.MintsHelper(wfn.basisset())
# Overlap matrix
S = np.asarray(mints.ao_overlap()) # we only need a copy
# Number of basis Functions, alpha & beta orbitals, and # doubly occupied orbitals
nbf = wfn.nso()
nalpha = wfn.nalpha()
nbeta = wfn.nbeta()
ndocc = min(nalpha, nbeta)
println("Number of basis functions: ", nbf)
println("Number of singly occupied orbitals: ", abs(nalpha-nbeta))
println("Number of doubly occupied orbitals: ", ndocc)
# Memory check for ERI tensor
I_size = nbf^4 * 8.e-9
println("\nSize of the ERI tensor will be $I_size GB.")
memory_footprint = I_size * 1.5
if I_size > numpy_memory
psi4.core.clean()
    error("Estimated memory utilization ($memory_footprint GB) exceeds " *
          "allotted memory limit of $numpy_memory GB.")
end
# Build ERI Tensor
I = np.asarray(mints.ao_eri()) # we only need a copy
# Build core Hamiltonian
T = np.asarray(mints.ao_kinetic()) # we only need a copy
V = np.asarray(mints.ao_potential()) # we only need a copy
H = T + V;
# Construct AO orthogonalization matrix A
A = mints.ao_overlap()
A.power(-0.5, 1.e-16) # ≈ Julia's A^(-0.5) after psi4view()
A = np.asarray(A);
```
Unlike the static quantities above, the CORE guess in UHF is slightly different than in RHF. Since the $\alpha$ and $\beta$ electrons do not share spatial orbitals, we must construct a guess for *each* of the $\alpha$ and $\beta$ orbitals and densities. In the cell below, using the function `diag_F()`, construct the CORE guesses and compute the nuclear repulsion energy:
(Hint: The number of $\alpha$ orbitals is the same as the number of $\alpha$ electrons!)
```julia
# ==> Build alpha & beta CORE guess <==
Ca, Da = diag_F(H, nalpha, A)
Cb, Db = diag_F(H, nbeta, A)
# Get nuclear repulsion energy
E_nuc = mol.nuclear_repulsion_energy()
```
We are almost ready to perform our SCF iterations; beforehand, however, we must initiate variables for the current & previous SCF energies, and the lists to hold previous residual vectors and trial vectors for the DIIS procedure. Since, in UHF, there are Fock matrices ${\bf F}^{\alpha}$ and ${\bf F}^{\beta}$ for both $\alpha$ and $\beta$ orbitals, we must apply DIIS to each of these matrices separately. In the cell below, define empty lists to hold previous Fock matrices and residual vectors for both $\alpha$ and $\beta$ orbitals:
```julia
# ==> Pre-Iteration Setup <==
# SCF & Previous Energy
SCF_E = 0.0
E_old = 0.0
```
We are now ready to write the SCF iterations.  The algorithm for UHF-SCF iteration, with DIIS convergence acceleration, is:
#### Algorithm 2: DIIS within UHF-SCF Iteration
1. Build ${\bf F}^{\alpha}$ and ${\bf F}^{\beta}$, append to trial vector lists
2. Compute the DIIS residual for $\alpha$ and $\beta$, append to residual vector lists
3. Compute UHF energy
4. Convergence check
- If average of RMSD of $\alpha$ and $\beta$ residual sufficiently small, and
- If change in UHF energy sufficiently small, break
5. DIIS extrapolation of ${\bf F}^{\alpha}$ and ${\bf F}^{\beta}$ to form new solution vector
6. Compute new ${\alpha}$ and ${\beta}$ orbital & density guesses
In the cell below, write the UHF-SCF iteration according to Algorithm 2:
(Hint: Use your functions `diis_xtrap()` and `diag_F` for Algorithm 2 steps 5 & 6, respectively)
```julia
SCF_E = let SCF_E = SCF_E, E_old = E_old, Da = Da, Db = Db, A = A, I = I, H = H, S = S
# Trial & Residual Vector Lists -- one each for α & β
F_list_a = []
F_list_b = []
R_list_a = []
R_list_b = []
# ==> UHF-SCF Iterations <==
println("==> Starting SCF Iterations <==")
# Begin Iterations
for scf_iter in 1:MAXITER
# Build Fa & Fb matrices
@tensor Ja[p,q] := I[p,q,r,s] * Da[r,s]
@tensor Jb[p,q] := I[p,q,r,s] * Db[r,s]
@tensor Ka[p,q] := I[p,r,q,s] * Da[r,s]
@tensor Kb[p,q] := I[p,r,q,s] * Db[r,s]
Fa = H + (Ja + Jb) - Ka
Fb = H + (Ja + Jb) - Kb
# Compute DIIS residual for Fa & Fb
diis_r_a = A * (Fa * Da * S - S * Da * Fa) * A
diis_r_b = A * (Fb * Db * S - S * Db * Fb) * A
# Append trial & residual vectors to lists
push!(F_list_a, Fa)
push!(F_list_b, Fb)
push!(R_list_a, diis_r_a)
push!(R_list_b, diis_r_b)
# Compute UHF Energy
SCF_E = 0.5*tr( H*(Da + Db) + Fa*Da + Fb*Db) + E_nuc
dE = SCF_E - E_old
dRMS = 0.5(norm(diis_r_a) + norm(diis_r_b))
@printf("SCF Iteration %3d: Energy = %4.16f dE = %1.5e dRMS = %1.5e \n",
scf_iter, SCF_E, SCF_E - E_old, dRMS)
# Convergence Check
if abs(dE) < E_conv && dRMS < D_conv
break
end
E_old = SCF_E
# DIIS Extrapolation
if scf_iter >= 2
Fa = diis_xtrap(F_list_a, R_list_a)
Fb = diis_xtrap(F_list_b, R_list_b)
end
# Compute new orbital guess
Ca, Da = diag_F(Fa, nalpha, A)
Cb, Db = diag_F(Fb, nbeta, A)
# MAXITER exceeded?
if scf_iter == MAXITER
psi4.core.clean()
            error("Maximum number of SCF iterations exceeded.")
end
end
SCF_E
end
# Post iterations
println("\nSCF converged.")
println("Final RHF Energy: $SCF_E [Eh]")
println()
```
Congratulations! You've written your very own Unrestricted Hartree-Fock program with DIIS convergence acceleration!  Finally, let's check your final UHF energy against <span style='font-variant: small-caps'> Psi4</span>:
```julia
# Compare to Psi4
SCF_E_psi = psi4.energy("SCF")
SCF_E
psi4.compare_values(SCF_E_psi, SCF_E, 6, "SCF Energy")
```
## References
1. A. Szabo and N. S. Ostlund, *Modern Quantum Chemistry*, Introduction to Advanced Electronic Structure Theory. Courier Corporation, 1996.
2. I. N. Levine, *Quantum Chemistry*. Prentice-Hall, New Jersey, 5th edition, 2000.
3. T. Helgaker, P. Jorgensen, and J. Olsen, *Molecular Electronic Structure Theory*, John Wiley & Sons Inc, 2000.
# Consumption Equivalent Variation (CEV)
1. Use the model in the **ConsumptionSaving.pdf** slides and solve it using **egm**
2. This notebook estimates the *cost of income risk* through the Consumption Equivalent Variation (CEV)
We will here focus on the cost of income risk, but the CEV can be used to estimate the value of many different aspects of an economy. For example, [Oswald (2019)](http://qeconomics.org/ojs/index.php/qe/article/view/701 "The option value of homeownership") estimated the option value of homeownership using a similar strategy as described below.
**Goal:** To estimate the CEV by comparing the *value of life* under the baseline economy and an alternative economy with higher permanent income shock variance along with a consumption compensation.
**Value of Life:**
1. Let the *utility function* be a generalized version of the CRRA utility function with $\delta$ included as a potential consumption compensation.
\begin{equation}
{u}(c,\delta) = \frac{(c\cdot(1+\delta))^{1-\rho}}{1-\rho}
\end{equation}
2. Let the *value of life* of a synthetic consumer $s$ for a given level of permanent income shock variance, $\sigma_{\psi}$, and $\delta$, be
\begin{equation}
{V}_{s}({\sigma}_{\psi},\delta)=\sum_{t=1}^T \beta ^{t-1}{u}({c}^{\star}_{s,t}({\sigma}_{\psi},\delta),\delta)
\end{equation}
where ${c}^{\star}_{s,t}({\sigma}_{\psi},\delta)$ is optimal consumption found using the **egm**. The value of life is calculated in the function `value_of_life(.)` defined below.
**Consumption Equivalent Variation:**
1. Let $V=\frac{1}{S}\sum_{s=1}^SV(\sigma_{\psi},0)$ be the average value of life under the *baseline* economy with the baseline value of $\sigma_{\psi}$ and $\delta=0$.
2. Let $\tilde{V}(\delta)=\frac{1}{S}\sum_{s=1}^SV(\tilde{\sigma}_{\psi},\delta)$ be the average value of life under the *alternative* economy with $\tilde{\sigma}_{\psi} > \sigma_{\psi}$.
The CEV is the value of $\delta$ that sets $V=\tilde{V}(\delta)$ and can be estimated as
\begin{equation}
\hat{\delta} = \arg\min_\delta (V-\tilde{V}(\delta))^2
\end{equation}
where the objective function is calculated in `obj_func_cev(.)` defined below.
# Setup
```python
%load_ext autoreload
%autoreload 2
import time
import numpy as np
import scipy.optimize as optimize
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
prop_cycle = plt.rcParams['axes.prop_cycle']
colors = prop_cycle.by_key()['color']
import sys
sys.path.append('../')
from consav import jit
import ConsumptionSavingModel as csm
from ConsumptionSavingModel import ConsumptionSavingModelClass
```
# Setup the baseline model and the alternative model
```python
par = {'simT':40}
model = ConsumptionSavingModelClass(name='baseline',par=par)
# increase the permanent income shock variance (sigma_psi) and allow for a consumption compensation
par_cev = {'sigma_psi':0.2,'do_cev':1,'simT':40}
model_cev = ConsumptionSavingModelClass(name='cev',par=par_cev)
```
```python
model.solve()
model.simulate()
```
model solved in 3.1 secs
model simulated in 3.4 secs
# Average value of life
**Define Functions:** value of life and objective function used to estimate "cev"
```python
def value_of_life(model):
# utility associated with consumption for all N and T
with jit(model) as model_jit:
util = csm.utility(model_jit.sim.c,model_jit.par)
# discounted sum of utility
disc = np.ones(model.par.simT)
disc[1:] = np.cumprod(np.ones(model.par.simT-1)*model.par.beta)
disc_util = np.sum(disc*util,axis=1)
# return average of discounted sum of utility
return np.mean(disc_util)
def obj_func_cev(theta,model_cev,value_of_life_baseline):
# update cev-parameter
setattr(model_cev.par,'cev',theta)
# re-solve and simulate alternative model
model_cev.solve(do_print=False)
model_cev.simulate(do_print=False)
# calculate value of life
value_of_life_cev = value_of_life(model_cev)
# return squared difference to baseline
return (value_of_life_cev - value_of_life_baseline)*(value_of_life_cev - value_of_life_baseline)
```
**Baseline value of life and objective function at cev=0**
```python
value_of_life_baseline = value_of_life(model)
obj_func_cev(0.0,model_cev,value_of_life_baseline)
```
3.46846115430638
```python
# plot the objective function
grid_cev = np.linspace(0.0,0.2,20)
grid_obj = np.empty(grid_cev.size)
for j,cev in enumerate(grid_cev):
grid_obj[j] = obj_func_cev(cev,model_cev,value_of_life_baseline)
plt.plot(grid_cev,grid_obj);
```
# Estimate the Consumption Equivalent Variation (CEV)
```python
res = optimize.minimize_scalar(obj_func_cev, bounds=[-0.01,0.5],
args=(model_cev,value_of_life_baseline),method='golden')
res
```
fun: 9.21421081920921e-18
nfev: 48
nit: 43
success: True
x: 0.0975865281411968
The estimated CEV suggests that consumers would be indifferent between the baseline economy and a 100% increase in the permanent income shock variance along with a 10% increase in consumption in all periods.
# Fourier Transform
```python
%matplotlib inline
import matplotlib
matplotlib.rcParams['figure.figsize'] = (6, 6)
import math
import cmath # math functions for complex numbers
import numpy as np
import matplotlib.pyplot as plt
import ipywidgets
from ipywidgets import interact
import sympy as sp
# See: http://docs.sympy.org/latest/tutorial/printing.html
sp.init_printing()
t = sp.symbols("t")
```
**TODO**:
* https://cdn.uclouvain.be/public/Exports%20reddot/fsa/documents/Travail_Complet.pdf <- looks very interesting!
* Examples in exponential notation
* Case of a 2D signal
* Discrete Fourier Transform + algorithm for a concrete implementation (of the coefficient computation) in Python
* FFT
The following notations come from the following book: *Toutes les mathématiques et les bases de l'informatique*, Horst Stöcker (Dunod, 2002)
## Simple case: a periodic function with period $2\pi$
### Definitions
#### Fourier series
$$
f(t) = \frac{a_0}{2} + \sum^{\infty}_{n=1} [a_n \cos(n t) + b_n \sin(n t)]
$$
#### Computing the Fourier coefficients
\begin{eqnarray*}
\forall n \in \mathbb{N}, ~~~~~ a_n & = & \frac{1}{\pi} \int^{\pi}_{-\pi} f(t) \cos(n t) ~ dt \\
\forall n \in \mathbb{N}^*, ~~~~~ b_n & = & \frac{1}{\pi} \int^{\pi}_{-\pi} f(t) \sin(n t) ~ dt \\
\end{eqnarray*}
See the detailed examples in the appendix for sample computations of Fourier coefficients.
Reminder: $\mathbb{N}^*$ is the set $\mathbb{N}$ without $0$.
#### Dirichlet condition
...
Remark: if the Fourier series contained only cosines (or only sines), it could only represent functions that vanish every $\pi/2$...
### Questions / notes / remarks
* What does the term $\frac{a_0}{2}$ represent in the definition of the Fourier series?
...
* What does the factor $\frac{1}{\pi}$ represent in the Fourier coefficients?
...
## General case: a periodic function with period $T$
### Definition
#### Fourier series
$$
f(t) = \frac{a_0}{2} + \sum^{\infty}_{n=1} [a_n \cos(\omega n t) + b_n \sin(\omega n t)]
$$
#### Computing the *Fourier coefficients*
\begin{eqnarray*}
\forall n \in \mathbb{N}, ~~~~~ a_n & = & \frac{2}{T} \int^{T/2}_{-T/2} f(t) \cos(\omega n t) ~ dt \\
\forall n \in \mathbb{N}^*, ~~~~~ b_n & = & \frac{2}{T} \int^{T/2}_{-T/2} f(t) \sin(\omega n t) ~ dt \\
\end{eqnarray*}
## Spectral representation
**TODO**: make the following definition clearer...
### Fourier series
$$
f(t) = \frac{a_0}{2} + \sum^{\infty}_{n=1} [A_n \sin(\omega n t + \phi_n)]
$$
with
$$A_n = \sqrt{a^2_n + b^2_n} \quad \text{ and } \quad \tan \phi_n = \frac{a_n}{b_n}$$
Remark:
this merely rewrites the coefficients in another form (going from a Cartesian notation to a polar notation)...
* $a_1$ and $b_1$ become $A_1$ and $\phi_1$,
* $a_2$ and $b_2$ become $A_2$ and $\phi_2$,
* ...
For each $n$, the coefficients $a_n$ and $b_n$ can be seen as a point in the plane.
These points can also be described in polar form (which is what is done here): $A_n$ and $\phi_n$.
What does this bring in practice? It would be interesting to visualize the difference between these two coefficient spaces... Filtering a signal may be easier in the second one? A small conversion sketch is given below.
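A minimal NumPy sketch of this Cartesian-to-polar conversion (the numerical values of $a_n$ and $b_n$ below are arbitrary examples, not taken from this notebook):
```python
import numpy as np

a_n = np.array([1.0, 0.0])      # example values for a_1, a_2
b_n = np.array([2.0, -3.0])     # example values for b_1, b_2

A_n = np.hypot(a_n, b_n)        # A_n = sqrt(a_n^2 + b_n^2)
phi_n = np.arctan2(a_n, b_n)    # tan(phi_n) = a_n / b_n

print(A_n)
print(phi_n)
```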
## Complex representation (exponential notation)
**TODO**: make the following definition clearer...
### Definitions for a $2\pi$-periodic function
#### Fourier series
$$
f(t) = \sum^{\infty}_{n=\color{red}{-\infty}} c_n e^{i n t}
$$
#### Computing the *Fourier coefficients*
$$
c_n = \frac{1}{2\pi} \int^{\pi}_{-\pi} f(t) e^{-i n t} dt
$$
### Definitions for a $T$-periodic function
#### Fourier series
$$
f(t) = \sum^{\infty}_{n=\color{red}{-\infty}} c_n e^{i\omega n t}
$$
#### Computing the *Fourier coefficients*
$$
c_n = \frac{1}{T} \int^{T/2}_{-T/2} f(t) e^{-i \omega n t} dt
$$
### Relation between the coefficients $a_n$, $b_n$ and $c_n$
$$
c_n =
\left\{
\begin{align}
\frac{a_0}{2} & \quad n = 0 \\
\\
\frac{a_n - ib_n}{2} & \quad n > 0 \\
\\
\frac{a_{-n} + ib_{-n}}{2} & \quad n < 0
\end{align}
\right.
$$
or
$$
\left.
\begin{align}
a_n & = c_n + c_{-n} \\
b_n & = i(c_n - c_{-n})
\end{align}
\right\}
n > 0
$$
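A quick SymPy check of these relations for $n > 0$ (a minimal sketch with symbolic $a_n$ and $b_n$; it reuses the `sympy` import from the top of the notebook):
```python
import sympy as sp

a_n, b_n = sp.symbols("a_n b_n", real=True)
c_n = (a_n - sp.I * b_n) / 2     # coefficient for n > 0
c_mn = (a_n + sp.I * b_n) / 2    # coefficient for -n

print(sp.simplify(c_n + c_mn - a_n))           # -> 0, i.e. a_n = c_n + c_{-n}
print(sp.simplify(sp.I * (c_n - c_mn) - b_n))  # -> 0, i.e. b_n = i(c_n - c_{-n})
```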
The values $\omega_n = \omega n$ are called the *spectrum* of $f(t)$.
Remark:
this merely groups the coefficients
* $a_1$ and $b_1$ into a complex number $z_1$,
* $a_2$ and $b_2$ into a complex number $z_2$,
* ...
For each $n$, the coefficients $a_n$ and $b_n$ can be seen as a point in the plane and therefore as a complex number.
Consequently, these points can also be written in exponential notation.
Motivation: "it simplifies the calculations and the notation".
## Conclusion
TODO...
### Summary
Definitions to know:
* Fourier series
* Fourier coefficients
* Dirichlet condition
## Appendix
### Math refresher
TODO: add plots...
\begin{eqnarray*}
\forall c \in \mathbb{N}^*, \int^{\pi}_{-\pi} c ~ dt & = & c \times 2\pi \\
\forall c \in \mathbb{N}^*, \int^{\pi}_{-\pi} c \cos(t) ~ dt & = & 0 \\
\forall c \in \mathbb{N}^*, \int^{\pi}_{-\pi} c \sin(t) ~ dt & = & 0 \\
\end{eqnarray*}
$$
\forall m \in \mathbb{N}^*, n \in \mathbb{N}^*,
\int^{\pi}_{-\pi} \cos(n ~ t) \cos(m ~ t) ~ dt =
\left\{
\begin{array}{l l}
\pi & \quad \text{si $n = m$}\\
0 & \quad \text{si $n \neq m$}
\end{array} \right.
$$
For example:
\begin{eqnarray*}
\int^{\pi}_{-\pi} \cos(t) \cos(t) ~ dt & = & \pi \\
\int^{\pi}_{-\pi} \sin(t) \sin(t) ~ dt & = & \pi \\
\int^{\pi}_{-\pi} \cos(t) \sin(t) ~ dt & = & 0 \\
\int^{\pi}_{-\pi} \cos(t) \cos(2t) ~ dt & = & 0 \\
\int^{\pi}_{-\pi} \cos(t) \sin(2t) ~ dt & = & 0 \\
\int^{\pi}_{-\pi} \cos(2t) \cos(2t) ~ dt & = & \pi \\
\int^{\pi}_{-\pi} \sin(2t) \sin(2t) ~ dt & = & \pi \\
\end{eqnarray*}
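A few of these identities can be checked directly with SymPy (a minimal sketch; the symbol `t` is the one defined at the top of the notebook):
```python
import sympy as sp

t = sp.symbols("t")
print(sp.integrate(sp.cos(t) * sp.cos(t), (t, -sp.pi, sp.pi)))          # pi
print(sp.integrate(sp.sin(2 * t) * sp.sin(2 * t), (t, -sp.pi, sp.pi)))  # pi
print(sp.integrate(sp.cos(t) * sp.cos(2 * t), (t, -sp.pi, sp.pi)))      # 0
print(sp.integrate(sp.cos(t) * sp.sin(2 * t), (t, -sp.pi, sp.pi)))      # 0
```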
### Detailed examples
#### Computing the Fourier coefficients of the function $f(t) = 3$
\begin{eqnarray*}
a_0 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} 3 \cos(0) ~ dt \\
& = & \frac{1}{\pi} \int^{\pi}_{-\pi} 3 ~ dt \\
& = & \frac{1}{\pi} \times 3 \times 2 \pi \\
& = & 6 \\
\end{eqnarray*}
\begin{eqnarray*}
a_1 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} 3 \cos(t) ~ dt \\
& = & 0 \\
\end{eqnarray*}
\begin{eqnarray*}
b_1 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} 3 \sin(t) ~ dt \\
& = & 0 \\
\end{eqnarray*}
Similarly, the coefficients $a_2$, $b_2$, $a_3$, $b_3$, etc. are all zero.
##### Verification
We indeed have:
\begin{eqnarray*}
f(t) & = & \frac{a_0}{2} + \sum^{\infty}_{n=1} (a_n \cos(n t) + b_n \sin(n t)) \\
& = & \frac{6}{2} + 0 \times \cos(t) + 0 \times \sin(t) + 0 \times \cos(2t) + 0 \times \sin(2t) + ... \\
& = & 3 \\
\end{eqnarray*}
#### Computing the Fourier coefficients of the function $f(t) = \cos(t)$
TODO: add plots...
\begin{eqnarray*}
a_0 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} \cos(t) \cos(0) ~ dt \\
& = & \frac{1}{\pi} \int^{\pi}_{-\pi} \cos(t) ~ dt \\
& = & 0 \\
\end{eqnarray*}
\begin{eqnarray*}
a_1 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} \cos(t) \cos(t) ~ dt \\
& = & \frac{1}{\pi} \times \pi \\
& = & 1 \\
\end{eqnarray*}
\begin{eqnarray*}
b_1 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} \cos(t) \sin(t) ~ dt \\
& = & \frac{1}{\pi} \times 0 \\
& = & 0 \\
\end{eqnarray*}
\begin{eqnarray*}
a_2 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} \cos(t) \cos(2t) ~ dt \\
& = & \frac{1}{\pi} \times 0 \\
& = & 0 \\
\end{eqnarray*}
\begin{eqnarray*}
b_2 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} \cos(t) \sin(2t) ~ dt \\
& = & \frac{1}{\pi} \times 0 \\
& = & 0 \\
\end{eqnarray*}
Similarly, the coefficients $a_3$, $b_3$, $a_4$, $b_4$, etc. are all zero.
##### Verification
We indeed have:
\begin{eqnarray*}
f(t) & = & \frac{a_0}{2} + \sum^{\infty}_{n=1} (a_n \cos(n t) + b_n \sin(n t)) \\
& = & \frac{0}{2} + 1 \times \cos(t) + 0 \times \sin(t) + 0 \times \cos(2t) + 0 \times \sin(2t) + ... \\
& = & \cos(t) \\
\end{eqnarray*}
#### Computing the Fourier coefficients of the function $f(t) = 3 \cos(t)$
\begin{eqnarray*}
a_0 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} f(t) \cos(n t) ~ dt \\
& = & ... \\
\end{eqnarray*}
\begin{eqnarray*}
a_1 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} f(t) \cos(n t) ~ dt \\
& = & ... \\
\end{eqnarray*}
\begin{eqnarray*}
b_1 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} f(t) \sin(n t) ~ dt \\
& = & ... \\
\end{eqnarray*}
#### Computing the Fourier coefficients of the function $f(t) = \cos(t) + 3$
\begin{eqnarray*}
a_0 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} f(t) \cos(n t) ~ dt \\
& = & ... \\
\end{eqnarray*}
\begin{eqnarray*}
a_1 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} f(t) \cos(n t) ~ dt \\
& = & ... \\
\end{eqnarray*}
\begin{eqnarray*}
b_1 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} f(t) \sin(n t) ~ dt \\
& = & ... \\
\end{eqnarray*}
#### Computing the Fourier coefficients of the function $f(t) = \cos(t + 3)$
**TODO: ???**
\begin{eqnarray*}
a_0 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} f(t) \cos(n t) ~ dt \\
& = & ... \\
\end{eqnarray*}
\begin{eqnarray*}
a_1 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} f(t) \cos(n t) ~ dt \\
& = & ... \\
\end{eqnarray*}
\begin{eqnarray*}
b_1 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} f(t) \sin(n t) ~ dt \\
& = & ... \\
\end{eqnarray*}
#### Computing the Fourier coefficients of the function $f(t) = \cos(t) + 2 \sin(t) - 3 \sin(2t) + 4$
TODO: add plots...
```python
integ = sp.Integral((sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2*t) + 4) * sp.cos(0), (t, -sp.pi, sp.pi))
sp.Eq(integ, integ.doit())
```
\begin{eqnarray*}
a_0 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} (\cos(t) + 2 \sin(t) - 3 \sin(2t) + 4) \cos(0) ~ dt \\
    & = & \frac{1}{\pi} \int^{\pi}_{-\pi} (\cos(t) + 2 \sin(t) - 3 \sin(2t) + 4) ~ dt \\
& = & \frac{1}{\pi} \times 8\pi\\
& = & 8\\
\end{eqnarray*}
```python
integ = sp.Integral((sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2*t) + 4) * sp.cos(t), (t, -sp.pi, sp.pi))
sp.Eq(integ, integ.doit())
```
\begin{eqnarray*}
a_1 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} (\cos(t) + 2 \sin(t) - 3 \sin(2t) + 4) \cos(t) ~ dt \\
& = & \frac{1}{\pi} \times \pi \\
& = & 1 \\
\end{eqnarray*}
```python
integ = sp.Integral((sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2*t) + 4) * sp.sin(t), (t, -sp.pi, sp.pi))
sp.Eq(integ, integ.doit())
```
\begin{eqnarray*}
b_1 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} (\cos(t) + 2 \sin(t) - 3 \sin(2t) + 4) \sin(t) ~ dt \\
& = & \frac{1}{\pi} \times 2\pi \\
& = & 2 \\
\end{eqnarray*}
```python
integ = sp.Integral((sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2*t) + 4) * sp.cos(2*t), (t, -sp.pi, sp.pi))
sp.Eq(integ, integ.doit())
```
\begin{eqnarray*}
a_2 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} (\cos(t) + 2 \sin(t) - 3 \sin(2t) + 4) \cos(2t) ~ dt \\
& = & \frac{1}{\pi} \times 0 \\
& = & 0 \\
\end{eqnarray*}
```python
integ = sp.Integral((sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2*t) + 4) * sp.sin(2*t), (t, -sp.pi, sp.pi))
sp.Eq(integ, integ.doit())
```
\begin{eqnarray*}
b_2 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} (\cos(t) + 2 \sin(t) - 3 \sin(2t) + 4) \sin(2t) ~ dt \\
& = & \frac{1}{\pi} \times -3\pi \\
& = & -3 \\
\end{eqnarray*}
The coefficients $a_3$, $b_3$, $a_4$, $b_4$, etc. are all zero.
##### Verification
We indeed have:
\begin{eqnarray*}
f(t) & = & \frac{a_0}{2} + \sum^{\infty}_{n=1} (a_n \cos(n t) + b_n \sin(n t)) \\
& = & \frac{8}{2} + 1 \times \cos(t) + 2 \times \sin(t) + 0 \times \cos(2t) + (-3) \times \sin(2t) + \dots \\
& = & \cos(t) + 2 \sin(t) - 3 \sin(2t) + 4 \\
\end{eqnarray*}
#### Computing the Fourier coefficients of the function $f(t) = \cos(t) + 2 \sin(t) - 3 \sin(2t) + 4$ using the exponential notation
TODO: add plots...
**TODO**
$$
c_n = \frac{1}{2\pi} \int^{\pi}_{-\pi} f(t) e^{-i n t} dt
$$
```python
sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2*t) + 4
```
```python
n = 0
sp.plot(sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2*t) + 4, (t, -sp.pi, sp.pi));
```
```python
import mpmath
n = 2
mpmath.cplot(lambda t: mpmath.exp(-mpmath.j * n * t), points=100000)
```
```python
n = 0
integ = sp.Integral((sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2*t) + 4) * sp.exp(-sp.I * n * t), (t, -sp.pi, sp.pi))
integ_res = integ.doit()
sp.Eq(integ, integ_res)
```
```python
sp.simplify(integ_res / (2 * sp.pi))
```
\begin{eqnarray*}
c_0 & = & \frac{1}{2\pi} \int^{\pi}_{-\pi} (\cos(t) + 2 \sin(t) - 3 \sin(2t) + 4) e^{0} ~ dt \\
    & = & \frac{1}{2\pi} \times 8\pi\\
    & = & 4\\
\end{eqnarray*}
```python
n = 1
integ = sp.Integral((sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2*t) + 4) * sp.exp(-sp.I * n * t), (t, -sp.pi, sp.pi))
integ_res = integ.doit()
sp.Eq(integ, integ_res)
```
```python
sp.simplify(integ_res / (2 * sp.pi))
```
\begin{eqnarray*}
c_1 & = & \frac{1}{2\pi} \int^{\pi}_{-\pi} (\cos(t) + 2 \sin(t) - 3 \sin(2t) + 4) e^{-it} ~ dt \\
    & = & \frac{1}{2\pi} \times (\pi - 2i\pi)\\
    & = & \frac{1 - 2i}{2}\\
\end{eqnarray*}
```python
n = -1
integ = sp.Integral((sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2*t) + 4) * sp.exp(-sp.I * n * t), (t, -sp.pi, sp.pi))
integ_res = integ.doit()
sp.Eq(integ, integ_res)
```
```python
sp.simplify(integ_res / (2 * sp.pi))
```
\begin{eqnarray*}
c_{-1} & = & \frac{1}{2\pi} \int^{\pi}_{-\pi} (\cos(t) + 2 \sin(t) - 3 \sin(2t) + 4) e^{it} ~ dt \\
    & = & \frac{1}{2\pi} \times (\pi + 2i\pi)\\
    & = & \frac{1 + 2i}{2}\\
\end{eqnarray*}
```python
n = 2
integ = sp.Integral((sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2*t) + 4) * sp.exp(-sp.I * n * t), (t, -sp.pi, sp.pi))
integ_res = integ.doit()
sp.Eq(integ, integ_res)
```
```python
sp.simplify(integ_res / (2 * sp.pi))
```
\begin{eqnarray*}
c_2 & = & \frac{1}{2\pi} \int^{\pi}_{-\pi} (\cos(t) + 2 \sin(t) - 3 \sin(2t) + 4) e^{-2it} ~ dt \\
    & = & \frac{1}{2\pi} \times 3i\pi\\
    & = & \frac{3i}{2}\\
\end{eqnarray*}
```python
n = -2
integ = sp.Integral((sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2*t) + 4) * sp.exp(-sp.I * n * t), (t, -sp.pi, sp.pi))
integ_res = integ.doit()
sp.Eq(integ, integ_res)
```
```python
sp.simplify(integ_res / (2 * sp.pi))
```
\begin{eqnarray*}
c_{-2} & = & \frac{1}{2\pi} \int^{\pi}_{-\pi} (\cos(t) + 2 \sin(t) - 3 \sin(2t) + 4) e^{2it} ~ dt \\
    & = & \frac{1}{2\pi} \times (-3i\pi)\\
    & = & -\frac{3i}{2}\\
\end{eqnarray*}
```python
n = 3
integ = sp.Integral((sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2*t) + 4) * sp.exp(-sp.I * n * t), (t, -sp.pi, sp.pi))
integ_res = integ.doit()
sp.Eq(integ, integ_res)
```
```python
sp.simplify(integ_res / (2 * sp.pi))
```
\begin{eqnarray*}
c_3 & = & \frac{1}{2\pi} \int^{\pi}_{-\pi} (\cos(t) + 2 \sin(t) - 3 \sin(2t) + 4) e^{-3it} ~ dt \\
    & = & \frac{1}{2\pi} \times 0 \\
& = & 0\\
\end{eqnarray*}
```python
n = -3
integ = sp.Integral((sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2*t) + 4) * sp.exp(-sp.I * n * t), (t, -sp.pi, sp.pi))
integ_res = integ.doit()
sp.Eq(integ, integ_res)
```
```python
sp.simplify(integ_res / (2 * sp.pi))
```
\begin{eqnarray*}
c_{-3} & = & \frac{1}{2\pi} \int^{\pi}_{-\pi} (\cos(t) + 2 \sin(t) - 3 \sin(2t) + 4) e^{3it} ~ dt \\
    & = & \frac{1}{2\pi} \times 0 \\
& = & 0 \\
\end{eqnarray*}
The coefficients $c_4$, $c_{-4}$, $c_5$, $c_{-5}$, etc. are all zero.
##### Verification
We indeed have:
\begin{eqnarray*}
f(t) & = & \sum^{\infty}_{n=-\infty} c_n e^{i n t} \\
     & = & 4 + \frac{1-2i}{2} e^{it} + \frac{1+2i}{2} e^{-it} + \frac{3i}{2} e^{2it} - \frac{3i}{2} e^{-2it} \\
     & = & 4 + \frac{e^{it} + e^{-it}}{2} - i \left(e^{it} - e^{-it}\right) + \frac{3i}{2} \left(e^{2it} - e^{-2it}\right) \\
     & = & \cos(t) + 2 \sin(t) - 3 \sin(2t) + 4 \\
\end{eqnarray*}
```python
eq1 = sp.simplify(4 * sp.exp(0) \
                  + (1 - 2 * sp.I) / 2 * sp.exp( sp.I * t) \
                  + (1 + 2 * sp.I) / 2 * sp.exp(-sp.I * t) \
                  + 3 * sp.I / 2 * sp.exp( 2 * sp.I * t) \
                  - 3 * sp.I / 2 * sp.exp(-2 * sp.I * t))
eq1
```
```python
sp.simplify(eq1 - (sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2 * t) + 4))
```
**TODO**
For more practice with the complex notation, see the 3 examples on the following site: https://www.math24.net/complex-form-fourier-series/
See also https://www.youtube.com/watch?v=FIKPlRsADL0
Benefit of the exponential notation: some signals, such as the one presented at
http://www.thefouriertransform.com/series/complexcoefficients.php
would require computing an infinite number of coefficients with the trigonometric notation... With the exponential notation, two formulas are enough to give all the coefficients $\neq$ 0!
**TODO**:
* step 1: computation to recover the real ($\mathbb{R}$) series from the complex coefficients -> take one of these examples, start from the result (the $c_n$ coefficients) and compute the series -> there is an example at https://www.math24.net/complex-form-fourier-series/
* step 2: computation of the coefficients.
**TODO**: DFT (p.794) and FFT (p.797). A numerical sketch using the FFT is given below.
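As a minimal numerical sketch of the DFT/FFT idea (plain NumPy, not a full treatment): sampling one period of the example signal above and calling `np.fft.fft` recovers its complex Fourier coefficients exactly, because the signal only contains harmonics up to $n = 2$.
```python
import numpy as np

N = 64
t = np.linspace(0, 2 * np.pi, N, endpoint=False)       # N samples over one period
f = np.cos(t) + 2 * np.sin(t) - 3 * np.sin(2 * t) + 4

c = np.fft.fft(f) / N            # c[n] ~ c_n for n >= 0, c[N - n] ~ c_{-n}

print(np.round(c[0], 6))         # c_0  =  4
print(np.round(c[1], 6))         # c_1  =  (1 - 2i)/2
print(np.round(c[N - 1], 6))     # c_-1 =  (1 + 2i)/2
print(np.round(c[2], 6))         # c_2  =  3i/2
print(np.round(c[N - 2], 6))     # c_-2 = -3i/2
```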
# Part 0: Hello Qiskit
Rather than dwelling on why quantum computers are important and currently so popular, let's jump directly into quantum circuits using Qiskit.
Qiskit is an open-source SDK for working with quantum computers at the level of pulses, circuits, and algorithms. Qiskit supports many quantum backends - IBM Quantum, IonQ, AQT, Honeywell, etc. - and quantum cloud services - IBM Quantum Experience and Microsoft Azure Quantum. Qiskit's great contribution to the quantum computing ecosystem is its huge and active developer community, which we will talk about in the 29th Oct lecture.
With this 1st lecture, we will demonstrate:
- Part 1: Qiskit Basics (10 min)
    - Basic quantum gates and operations
    - Exercise: manipulating quantum states with gates - the Bell states
- Part 2: Tutorial with the GHZ Circuit (30 min)
    - Compose 3-qubit GHZ quantum circuits
    - Simulate them on various backends and visualize the results
- Part 3: Introduction to Quantum Information Theory (30 min)
    - Phase kickback
    - The Deutsch-Jozsa algorithm
- Exercise (20 min):
    - Explore a simple physics model using Qiskit (Assignment 1)
# Part 1: Qiskit Basics - Basic Quantum Gates and Operations
## Single Qubit Quantum states <a name="single_states"/>
A single qubit quantum state can be written as
$$\left|\psi\right\rangle = \alpha\left|0\right\rangle + \beta \left|1\right\rangle$$
where $\alpha$ and $\beta$ are complex numbers. In a measurement the probability of the bit being in $\left|0\right\rangle$ is $|\alpha|^2$ and $\left|1\right\rangle$ is $|\beta|^2$. As a vector this is
$$
\left|\psi\right\rangle =
\begin{pmatrix}
\alpha \\
\beta
\end{pmatrix}.
$$
Note, due to the conservation of probability $|\alpha|^2+ |\beta|^2 = 1$ and since global phase is undetectable $\left|\psi\right\rangle := e^{i\delta} \left|\psi\right\rangle$ we only require two real numbers to describe a single qubit quantum state.
A convenient representation is
$$\left|\psi\right\rangle = \cos(\theta/2)\left|0\right\rangle + \sin(\theta/2)e^{i\phi}\left|1\right\rangle$$
where $0\leq \phi < 2\pi$, and $0\leq \theta \leq \pi$. From this, it is clear that there is a one-to-one correspondence between qubit states ($\mathbb{C}^2$) and the points on the surface of a unit sphere ($\mathbb{R}^3$). This is called the Bloch sphere representation of a qubit state.
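As a small sketch (plain NumPy, arbitrary example angles), the Bloch-sphere point corresponding to a state is just the vector of Pauli expectation values, $(\langle X\rangle, \langle Y\rangle, \langle Z\rangle) = (\sin\theta\cos\phi, \sin\theta\sin\phi, \cos\theta)$:
```python
import numpy as np

theta, phi = np.pi / 3, np.pi / 4                        # arbitrary example angles
psi = np.array([np.cos(theta / 2),
                np.exp(1j * phi) * np.sin(theta / 2)])   # cos(θ/2)|0> + e^{iφ} sin(θ/2)|1>

X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])

bloch = [np.real(psi.conj() @ P @ psi) for P in (X, Y, Z)]
print(np.round(bloch, 6))
print(np.round([np.sin(theta) * np.cos(phi),
                np.sin(theta) * np.sin(phi),
                np.cos(theta)], 6))                      # same point on the Bloch sphere
```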
Quantum gates/operations are usually represented as matrices. A gate which acts on a qubit is represented by a $2\times 2$ unitary matrix $U$. The action of the quantum gate is found by multiplying the matrix representing the gate with the vector which represents the quantum state.
$$\left|\psi'\right\rangle = U\left|\psi\right\rangle$$
A general unitary must be able to take the $\left|0\right\rangle$ to the above state. That is
$$
U = \begin{pmatrix}
\cos(\theta/2) & a \\
e^{i\phi}\sin(\theta/2) & b
\end{pmatrix}
$$
where $a$ and $b$ are complex numbers constrained such that $U^\dagger U = I$ for all $0\leq\theta\leq\pi$ and $0\leq \phi<2\pi$. This gives 3 constraints and as such $a\rightarrow -e^{i\lambda}\sin(\theta/2)$ and $b\rightarrow e^{i\lambda+i\phi}\cos(\theta/2)$ where $0\leq \lambda<2\pi$ giving
$$
U = \begin{pmatrix}
\cos(\theta/2) & -e^{i\lambda}\sin(\theta/2) \\
e^{i\phi}\sin(\theta/2) & e^{i\lambda+i\phi}\cos(\theta/2)
\end{pmatrix}.
$$
This is the most general form of a single qubit unitary.
Qiskit supports many single-qubit gates. You can check the whole list of them here:
https://qiskit.org/textbook/ch-states/single-qubit-gates.html
The most frequently used gates and their matrix forms are examined below:
```python
from qiskit import QuantumCircuit, execute, Aer, IBMQ
from qiskit.visualization import *
from qiskit.quantum_info import state_fidelity
import numpy as np
#ignore warnings
import warnings
warnings.filterwarnings('ignore')
# Magic function to render plots
%matplotlib inline
```
#### u gates
In Qiskit we give you access to the general unitary using the $u3$ gate
$$
u3(\theta, \phi, \lambda) = U(\theta, \phi, \lambda)
$$
```python
u_gate = QuantumCircuit(1)
u_gate.u(np.pi/2,np.pi/2,np.pi/2,0)
u_gate.draw()
```
### Pauli gates
#### $X$: bit-flip gate
The bit-flip gate $X$ is defined as:
$$
X =
\begin{pmatrix}
0 & 1\\
1 & 0
\end{pmatrix}= u3(\pi,0,\pi)
$$
```python
x_gate=QuantumCircuit(1) # Create a quantum circuit with 1 qubit
x_gate.x(0)
x_gate.draw(output='mpl')
```
#### $Y$: bit- and phase-flip gate
The $Y$ gate is defined as:
$$
Y =
\begin{pmatrix}
0 & -i\\
i & 0
\end{pmatrix}=u3(\pi,\pi/2,\pi/2)
$$
```python
y_gate = QuantumCircuit(1)
y_gate.y(0)
y_gate.draw(output='mpl')
```
#### $Z$: phase-flip gate
The phase-flip gate $Z$ is defined as:
$$
Z =
\begin{pmatrix}
1 & 0\\
0 & -1
\end{pmatrix}=u1(\pi)
$$
```python
z_gate = QuantumCircuit(1)
z_gate.z(0)
z_gate.draw(output='mpl')
```
### Clifford gates
the Clifford gates are the elements of the Clifford group, a set of mathematical transformations which effect permutations of the Pauli operators.
#### Hadamard gate
A Hadamard gate represents a rotation of $\pi$ about the axis that is in the middle of the X-axis and Z-axis.
It maps the basis state $|0\rangle$ to $\frac{|0\rangle + |1\rangle}{\sqrt{2}}$, which means that a measurement will have equal probabilities of being `1` or `0`, creating a 'superposition' of states. This state is also written as $|+\rangle$. What the Hadamard does is transform between the $\{|0\rangle, |1\rangle\}$ basis and the $\{|+\rangle, |-\rangle\}$ basis.
$$
H =
\frac{1}{\sqrt{2}}
\begin{pmatrix}
1 & 1\\
1 & -1
\end{pmatrix}= u2(0,\pi)
$$
```python
# Let's do an H-gate on a |0> qubit
h_gate = QuantumCircuit(1)
h_gate.h(0)
h_gate.draw(output='mpl')
```
#### $S$ (or, $\sqrt{Z}$ phase) gate
$$
S =
\begin{pmatrix}
1 & 0\\
0 & i
\end{pmatrix}= u1(\pi/2)
$$
```python
s_gate = QuantumCircuit(1)
s_gate.s(0)
s_gate.draw(output='mpl')
```
#### $S^{\dagger}$ (or, conjugate of $\sqrt{Z}$ phase) gate
$$
S^{\dagger} =
\begin{pmatrix}
1 & 0\\
0 & -i
\end{pmatrix}= u1(-\pi/2)
$$
```python
sdg_gate = QuantumCircuit(1)
sdg_gate.sdg(0)
sdg_gate.draw(output='mpl')
```
### Standard Rotations
The standard rotation gates are those that define rotations around the Paulis $P=\{X,Y,Z\}$. They are defined as
$$ R_P(\theta) = \exp(-i \theta P/2) = \cos(\theta/2)I -i \sin(\theta/2)P$$
#### Rotation around X-axis
$$
R_x(\theta) =
\begin{pmatrix}
\cos(\theta/2) & -i\sin(\theta/2)\\
-i\sin(\theta/2) & \cos(\theta/2)
\end{pmatrix} = u3(\theta, -\pi/2,\pi/2)
$$
```python
rx_gate = QuantumCircuit(1)
rx_gate.rx(np.pi/2,0)
rx_gate.draw(output='mpl')
```
#### Rotation around Y-axis
$$
R_y(\theta) =
\begin{pmatrix}
\cos(\theta/2) & - \sin(\theta/2)\\
\sin(\theta/2) & \cos(\theta/2).
\end{pmatrix} =u3(\theta,0,0)
$$
```python
ry_gate = QuantumCircuit(1)
ry_gate.ry(np.pi/2,0)
ry_gate.draw(output='mpl')
```
#### Rotation around Z-axis
$$
R_z(\phi) =
\begin{pmatrix}
e^{-i \phi/2} & 0 \\
0 & e^{i \phi/2}
\end{pmatrix}\equiv u1(\phi)
$$
Note that we write this as an equivalence because it differs from u1 only by the global phase $e^{-i \phi/2}$.
```python
rz_gate = QuantumCircuit(1)
rz_gate.rz(np.pi/2,0)
rz_gate.draw(output='mpl')
```
## Multi-Qubit Gates <a name="multi_gates"/>
### Mathematical Preliminaries
The space of a quantum computer grows exponentially with the number of qubits. For $n$ qubits the complex vector space has dimension $d=2^n$. To describe states of a multi-qubit system, the tensor product is used to "glue together" operators and basis vectors.
Let's start by considering a 2-qubit system. Given two operators $A$ and $B$ that each act on one qubit, the joint operator $A \otimes B$ acting on two qubits is
$$\begin{equation}
A\otimes B =
\begin{pmatrix}
A_{00} \begin{pmatrix}
B_{00} & B_{01} \\
B_{10} & B_{11}
\end{pmatrix} & A_{01} \begin{pmatrix}
B_{00} & B_{01} \\
B_{10} & B_{11}
\end{pmatrix} \\
A_{10} \begin{pmatrix}
B_{00} & B_{01} \\
B_{10} & B_{11}
\end{pmatrix} & A_{11} \begin{pmatrix}
B_{00} & B_{01} \\
B_{10} & B_{11}
\end{pmatrix}
\end{pmatrix},
\end{equation}$$
where $A_{jk}$ and $B_{lm}$ are the matrix elements of $A$ and $B$, respectively.
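A minimal NumPy sketch of the tensor product, with the Pauli $X$ and Hadamard matrices as example operators (`np.kron` implements exactly the block structure above):
```python
import numpy as np

A = np.array([[0, 1], [1, 0]])                    # example: Pauli X
B = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # example: Hadamard
AB = np.kron(A, B)                                # the 4x4 matrix A ⊗ B

u = np.array([1, 0])                              # |0>
v = np.array([0, 1])                              # |1>
# (A ⊗ B)(|u> ⊗ |v>) = (A|u>) ⊗ (B|v>)
print(np.allclose(AB @ np.kron(u, v), np.kron(A @ u, B @ v)))   # True
```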
Analogously, the basis vectors for the 2-qubit system are formed using the tensor product of basis vectors for a single qubit:
$$\begin{equation}\begin{split}
\left|{00}\right\rangle &= \begin{pmatrix}
1 \begin{pmatrix}
1 \\
0
\end{pmatrix} \\
0 \begin{pmatrix}
1 \\
0
\end{pmatrix}
\end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 0 \\0 \end{pmatrix}~~~\left|{01}\right\rangle = \begin{pmatrix}
1 \begin{pmatrix}
0 \\
1
\end{pmatrix} \\
0 \begin{pmatrix}
0 \\
1
\end{pmatrix}
\end{pmatrix} = \begin{pmatrix}0 \\ 1 \\ 0 \\ 0 \end{pmatrix}\end{split}
\end{equation}$$
$$\begin{equation}\begin{split}\left|{10}\right\rangle = \begin{pmatrix}
0\begin{pmatrix}
1 \\
0
\end{pmatrix} \\
1\begin{pmatrix}
1 \\
0
\end{pmatrix}
\end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \end{pmatrix}~~~ \left|{11}\right\rangle = \begin{pmatrix}
0 \begin{pmatrix}
0 \\
1
\end{pmatrix} \\
1\begin{pmatrix}
0 \\
1
\end{pmatrix}
\end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\1 \end{pmatrix}\end{split}
\end{equation}.$$
Note we've introduced a shorthand for the tensor product of basis vectors, wherein $\left|0\right\rangle \otimes \left|0\right\rangle$ is written as $\left|00\right\rangle$. The state of an $n$-qubit system can be described using the $n$-fold tensor product of single-qubit basis vectors. Notice that the basis vectors for a 2-qubit system are 4-dimensional; in general, the basis vectors of an $n$-qubit system are $2^{n}$-dimensional, as noted earlier.
### Basis vector ordering in Qiskit
Within the physics community, the qubits of a multi-qubit systems are typically ordered with the first qubit on the left-most side of the tensor product and the last qubit on the right-most side. For instance, if the first qubit is in state $\left|0\right\rangle$ and second is in state $\left|1\right\rangle$, their joint state would be $\left|01\right\rangle$. Qiskit uses a slightly different ordering of the qubits, in which the qubits are represented from the most significant bit (MSB) on the left to the least significant bit (LSB) on the right (big-endian). This is similar to bitstring representation on classical computers, and enables easy conversion from bitstrings to integers after measurements are performed. For the example just given, the joint state would be represented as $\left|10\right\rangle$. Importantly, *this change in the representation of multi-qubit states affects the way multi-qubit gates are represented in Qiskit*, as discussed below.
The representation used in Qiskit enumerates the basis vectors in increasing order of the integers they represent. For instance, the basis vectors for a 2-qubit system would be ordered as $\left|00\right\rangle$, $\left|01\right\rangle$, $\left|10\right\rangle$, and $\left|11\right\rangle$. Thinking of the basis vectors as bit strings, they encode the integers 0,1,2 and 3, respectively.
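A small sketch of this ordering, reusing the imports from the setup cell above: applying an $X$ gate to qubit 0 only leaves all of the amplitude on the basis vector with index 1, i.e. $\left|01\right\rangle$ read as (qubit 1, qubit 0):
```python
qc = QuantumCircuit(2)
qc.x(0)   # flip qubit 0 only; qubit 1 stays in |0>

backend = Aer.get_backend('statevector_simulator')
state = execute(qc, backend).result().get_statevector()
print(state)   # amplitude 1 at index 1 -> the state |01>
```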
### Controlled operations on qubits
A common multi-qubit gate involves the application of a gate to one qubit, conditioned on the state of another qubit. For instance, we might want to flip the state of the second qubit when the first qubit is in $\left|0\right\rangle$. Such gates are known as _controlled gates_. The standard multi-qubit gates consist of two-qubit gates and three-qubit gates. The two-qubit gates are:
- controlled Pauli gates
- controlled Hadamard gate
- controlled rotation gates
- controlled phase gate
- controlled u3 gate
- swap gate
The three-qubit gates are:
- Toffoli gate
- Fredkin gate
## Two-qubit gates <a name="two_gates"/>
Most of the two-qubit gates are of the controlled type (the SWAP gate being the exception). In general, a controlled two-qubit gate $C_{U}$ acts to apply the single-qubit unitary $U$ to the second qubit when the state of the first qubit is in $\left|1\right\rangle$. Suppose $U$ has a matrix representation
$$U = \begin{pmatrix} u_{00} & u_{01} \\ u_{10} & u_{11}\end{pmatrix}.$$
We can work out the action of $C_{U}$ as follows. Recall that the basis vectors for a two-qubit system are ordered as $\left|00\right\rangle, \left|01\right\rangle, \left|10\right\rangle, \left|11\right\rangle$. Suppose the **control qubit** is **qubit 0** (which, according to Qiskit's convention, is one the _right-hand_ side of the tensor product). If the control qubit is in $\left|1\right\rangle$, $U$ should be applied to the **target** (qubit 1, on the _left-hand_ side of the tensor product). Therefore, under the action of $C_{U}$, the basis vectors are transformed according to
$$\begin{align*}
C_{U}: \underset{\text{qubit}~1}{\left|0\right\rangle}\otimes \underset{\text{qubit}~0}{\left|0\right\rangle} &\rightarrow \underset{\text{qubit}~1}{\left|0\right\rangle}\otimes \underset{\text{qubit}~0}{\left|0\right\rangle}\\
C_{U}: \underset{\text{qubit}~1}{\left|0\right\rangle}\otimes \underset{\text{qubit}~0}{\left|1\right\rangle} &\rightarrow \underset{\text{qubit}~1}{U\left|0\right\rangle}\otimes \underset{\text{qubit}~0}{\left|1\right\rangle}\\
C_{U}: \underset{\text{qubit}~1}{\left|1\right\rangle}\otimes \underset{\text{qubit}~0}{\left|0\right\rangle} &\rightarrow \underset{\text{qubit}~1}{\left|1\right\rangle}\otimes \underset{\text{qubit}~0}{\left|0\right\rangle}\\
C_{U}: \underset{\text{qubit}~1}{\left|1\right\rangle}\otimes \underset{\text{qubit}~0}{\left|1\right\rangle} &\rightarrow \underset{\text{qubit}~1}{U\left|1\right\rangle}\otimes \underset{\text{qubit}~0}{\left|1\right\rangle}\\
\end{align*}.$$
In matrix form, the action of $C_{U}$ is
$$\begin{equation}
C_U = \begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & u_{00} & 0 & u_{01} \\
0 & 0 & 1 & 0 \\
0 & u_{10} &0 & u_{11}
\end{pmatrix}.
\end{equation}$$
To work out these matrix elements, let
$$C_{(jk), (lm)} = \left(\underset{\text{qubit}~1}{\left\langle j \right|} \otimes \underset{\text{qubit}~0}{\left\langle k \right|}\right) C_{U} \left(\underset{\text{qubit}~1}{\left| l \right\rangle} \otimes \underset{\text{qubit}~0}{\left| m \right\rangle}\right),$$
compute the action of $C_{U}$ (given above), and compute the inner products.
As shown in the examples below, this operation is implemented in Qiskit as `cU(q[0],q[1])`.
If **qubit 1 is the control and qubit 0 is the target**, then the basis vectors are transformed according to
$$\begin{align*}
C_{U}: \underset{\text{qubit}~1}{\left|0\right\rangle}\otimes \underset{\text{qubit}~0}{\left|0\right\rangle} &\rightarrow \underset{\text{qubit}~1}{\left|0\right\rangle}\otimes \underset{\text{qubit}~0}{\left|0\right\rangle}\\
C_{U}: \underset{\text{qubit}~1}{\left|0\right\rangle}\otimes \underset{\text{qubit}~0}{\left|1\right\rangle} &\rightarrow \underset{\text{qubit}~1}{\left|0\right\rangle}\otimes \underset{\text{qubit}~0}{\left|1\right\rangle}\\
C_{U}: \underset{\text{qubit}~1}{\left|1\right\rangle}\otimes \underset{\text{qubit}~0}{\left|0\right\rangle} &\rightarrow \underset{\text{qubit}~1}{\left|1\right\rangle}\otimes \underset{\text{qubit}~0}{U\left|0\right\rangle}\\
C_{U}: \underset{\text{qubit}~1}{\left|1\right\rangle}\otimes \underset{\text{qubit}~0}{\left|1\right\rangle} &\rightarrow \underset{\text{qubit}~1}{\left|1\right\rangle}\otimes \underset{\text{qubit}~0}{U\left|1\right\rangle}\\
\end{align*},$$
which implies the matrix form of $C_{U}$ is
$$\begin{equation}
C_U = \begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & u_{00} & u_{01} \\
0 & 0 & u_{10} & u_{11}
\end{pmatrix}.
\end{equation}$$
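As a quick check of the two matrix forms above, `qiskit.quantum_info.Operator` returns the unitary matrix of a circuit (a minimal sketch; it assumes the imports from the setup cell):
```python
from qiskit.quantum_info import Operator

qc_lsb = QuantumCircuit(2)
qc_lsb.cx(0, 1)   # control = qubit 0 (LSB), target = qubit 1
qc_msb = QuantumCircuit(2)
qc_msb.cx(1, 0)   # control = qubit 1 (MSB), target = qubit 0

print(np.real(Operator(qc_lsb).data))
print(np.real(Operator(qc_msb).data))
```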
### Controlled Pauli Gates
#### Controlled-X (or, controlled-NOT) gate
The controlled-not gate flips the `target` qubit when the control qubit is in the state $\left|1\right\rangle$. If we take the MSB as the control qubit (e.g. `cx(q[1],q[0])`), then the matrix would look like
$$
C_X =
\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 0 & 1\\
0 & 0 & 1 & 0
\end{pmatrix}.
$$
However, when the LSB is the control qubit, (e.g. `cx(q[0],q[1])`), this gate is equivalent to the following matrix:
$$
C_X =
\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 0 & 0 & 1\\
0 & 0 & 1 & 0\\
0 & 1 & 0 & 0
\end{pmatrix}.
$$
```python
cx_gate = QuantumCircuit(2)
cx_gate.cx(0,1)
cx_gate.draw(output='mpl')
```
#### Controlled $Y$ gate
Apply the $Y$ gate to the target qubit if the control qubit is the MSB
$$
C_Y =
\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 0 & -i\\
0 & 0 & i & 0
\end{pmatrix},
$$
or when the LSB is the control
$$
C_Y =
\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 0 & 0 & -i\\
0 & 0 & 1 & 0\\
0 & i & 0 & 0
\end{pmatrix}.
$$
```python
cy_gate = QuantumCircuit(2)
cy_gate.cy(0,1)
cy_gate.draw(output='mpl')
```
#### Controlled $Z$ (or, controlled Phase) gate
Similarly, the controlled Z gate flips the phase of the target qubit if the control qubit is $\left|1\right\rangle$. The matrix looks the same regardless of whether the MSB or LSB is the control qubit:
$$
C_Z =
\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & -1
\end{pmatrix}
$$
```python
cz_gate = QuantumCircuit(2)
cz_gate.cz(0,1)
cz_gate.draw(output='mpl')
```
### Controlled $u3$ rotation
Perform controlled-$u3$ rotation on the target qubit if the control qubit (here LSB) is $\left|1\right\rangle$.
$$
C_{u3}(\theta, \phi, \lambda) \equiv
\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & e^{-i(\phi+\lambda)/2}\cos(\theta/2) & 0 & -e^{-i(\phi-\lambda)/2}\sin(\theta/2)\\
0 & 0 & 1 & 0\\
0 & e^{i(\phi-\lambda)/2}\sin(\theta/2) & 0 & e^{i(\phi+\lambda)/2}\cos(\theta/2)
\end{pmatrix}.
$$
```python
cu_gate = QuantumCircuit(2)
cu_gate.cu3(np.pi/2, np.pi/2, np.pi/2, 0, 1)
cu_gate.draw(output='mpl')
```
### SWAP gate
The SWAP gate exchanges the two qubits. It transforms the basis vectors as
$$\left|00\right\rangle \rightarrow \left|00\right\rangle~,~\left|01\right\rangle \rightarrow \left|10\right\rangle~,~\left|10\right\rangle \rightarrow \left|01\right\rangle~,~\left|11\right\rangle \rightarrow \left|11\right\rangle,$$
which gives a matrix representation of the form
$$
\mathrm{SWAP} =
\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 0 & 1
\end{pmatrix}.
$$
```python
swap_gate = QuantumCircuit(2)
swap_gate.swap(0, 1)
swap_gate.draw(output='mpl')
```
## Non-unitary operations <a name="non_unitary"/>
Now that we have gone through all the unitary operations in quantum circuits, we also have access to non-unitary operations. These include measurements, reset of qubits, and classical conditional operations.
### Measurements
We don't have access to all the information when we make a measurement in a quantum computer. The quantum state is projected onto the standard basis. Below are two examples: a circuit prepared in a basis state, and a circuit prepared in a superposition state.
```python
qc = QuantumCircuit(2)
qc.measure_all()
qc.draw(output='mpl')
```
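And the superposition-state example (a minimal sketch using the `qasm_simulator` backend and the imports from the setup cell):
```python
qc = QuantumCircuit(1, 1)
qc.h(0)          # prepare (|0> + |1>)/sqrt(2)
qc.measure(0, 0)

backend = Aer.get_backend('qasm_simulator')
counts = execute(qc, backend, shots=1024).result().get_counts()
print(counts)    # roughly half '0' and half '1'
```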
### Reset
It is also possible to `reset` qubits to the $\left|0\right\rangle$ state in the middle of computation. Note that `reset` is not a Gate operation, since it is irreversible.
```python
qc = QuantumCircuit(1)
qc.x(0)
qc.reset(0)
qc.measure_all()
qc.draw(output='mpl')
```
### Conditional operations
It is also possible to apply operations conditioned on the state of a classical register:
```python
from qiskit import ClassicalRegister, QuantumRegister
q = QuantumRegister(1)
c = ClassicalRegister(1)
qc = QuantumCircuit(q,c)
qc.x(q[0]).c_if(c, 0)
qc.measure(0,0)
qc.draw(output='mpl')
```
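Since the classical register starts at 0, the condition `c_if(c, 0)` is satisfied, the $X$ gate is applied, and every shot should return 1. A short sketch to run it (again assuming `Aer` and `execute` are available):
```python
# Run the conditional circuit on the QASM simulator; expected counts: {'1': 1024}.
counts = execute(qc, backend=Aer.get_backend('qasm_simulator'), shots=1024).result().get_counts()
print(counts)
```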
## Arbitrary initialization <a name="initialization"/>
What if we want to initialize a qubit register to an arbitrary state? An arbitrary state of $n$ qubits is specified by a vector of $2^n$ amplitudes whose squared magnitudes sum to 1. For example, the following three-qubit state can be prepared:
$$\left|\psi\right\rangle = \frac{i}{4}\left|000\right\rangle + \frac{1}{\sqrt{8}}\left|001\right\rangle + \frac{1+i}{4}\left|010\right\rangle + \frac{1+2i}{\sqrt{8}}\left|101\right\rangle + \frac{1}{4}\left|110\right\rangle$$
```python
# Initializing a three-qubit quantum state
import math
desired_vector = [
1 / math.sqrt(16) * complex(0, 1),
1 / math.sqrt(8) * complex(1, 0),
1 / math.sqrt(16) * complex(1, 1),
0,
0,
1 / math.sqrt(8) * complex(1, 2),
1 / math.sqrt(16) * complex(1, 0),
0]
init_qc = QuantumCircuit(3)
init_qc.initialize(desired_vector, [0,1,2])
init_qc.draw(output='mpl')
```
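To check that the prepared state matches `desired_vector`, you can run the circuit on the statevector simulator, in the same style as the helper function used later in this notebook (assuming `Aer` and `execute` are imported):
```python
# The returned amplitudes should equal desired_vector up to numerical precision.
statevector_simulator = Aer.get_backend('statevector_simulator')
prepared = execute(init_qc, backend=statevector_simulator).result().get_statevector()
print(prepared)
```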
# Part 1 Exercise: Manipulating quantum states with gates - Bell states
There are four Bell states given by
$$ |\Phi^+\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle\otimes |0\rangle + |1\rangle\otimes|1\rangle\right)$$
$$ |\Phi^-\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle\otimes |0\rangle - |1\rangle\otimes|1\rangle\right)$$
$$ |\Psi^+\rangle = \frac{1}{\sqrt{2}} \left(|0\rangle\otimes |1\rangle + |1\rangle\otimes|0\rangle\right)$$
$$ |\Psi^-\rangle = \frac{1}{\sqrt{2}} \left(|0\rangle\otimes |1\rangle - |1\rangle\otimes|0\rangle\right)$$
Can you find circuits that prepare each of these four Bell states? Convince yourself that the Bell states are indeed orthogonal using the statevector backend of Aer. We will look at running quantum circuits in more detail in Part 2, so for now just use the function below to run each circuit.
```python
# We prepare a similar function for running on the state vector simulator
def circuit_run(quantum_circuit, decimals=6):
"""Takes a circuit, and runs it on the state vector simulator backend."""
statevector_simulator = Aer.get_backend('statevector_simulator')
job = execute(quantum_circuit, backend=statevector_simulator)
result = job.result()
statevector = result.get_statevector(quantum_circuit, decimals=decimals)
return statevector
```
### The state $|\Phi^+\rangle$
|$\Phi^+$> = [0.707107+0.j 0. +0.j 0. +0.j 0.707107+0.j]
```python
qc_phi_plus = QuantumCircuit(2)
#your code here
qc_phi_plus.draw(output='mpl')
```
```python
phi_plus_state = circuit_run(qc_phi_plus)
print('|Phi^+> =', phi_plus_state)
```
### The state $|\Phi^-\rangle$
|$\Phi^-$> = [ 0.707107+0.j 0. +0.j 0. +0.j -0.707107-0.j]
```python
# Let us first prepare the Phi^- state
qc_phi_minus = QuantumCircuit(2)
#your code here
qc_phi_minus.draw(output='mpl')
```
```python
phi_minus_state = circuit_run(qc_phi_minus)
print('|Phi^-> =', phi_minus_state)
```
### The state $|\Psi^+\rangle$
|$\Psi^+$> = [0. +0.j 0.707107+0.j 0.707107+0.j 0. +0.j]
```python
#The Psi^+ state
qc_psi_plus = QuantumCircuit(2)
#your code here
qc_psi_plus.draw(output='mpl')
```
```python
psi_plus_state = circuit_run(qc_psi_plus)
print('|Psi^+> =', psi_plus_state)
```
### The state $|\Psi^-\rangle$
|$\Psi^-$> = [ 0. +0.j -0.707107-0.j 0.707107+0.j 0. +0.j]
```python
# Let us first prepare the Psi^- state
qc_psi_minus = QuantumCircuit(2)
#your code here
qc_psi_minus.draw(output='mpl')
```
```python
psi_minus_state = circuit_run(qc_psi_minus)
print('|Psi^-> =', psi_minus_state)
```
```python
### Check the orthogonality of the Bell states
from qiskit.quantum_info import state_fidelity  # may already be imported earlier in the notebook
print('|<Phi^+|Phi^+>|^2 =', state_fidelity(phi_plus_state, phi_plus_state))
print('|<Phi^+|Phi^->|^2 =', state_fidelity(phi_plus_state, phi_minus_state))
print('|<Phi^+|Psi^+>|^2 =', state_fidelity(phi_plus_state, psi_plus_state))
print('|<Phi^+|Psi^->|^2 =', state_fidelity(phi_plus_state, psi_minus_state))
print('|<Psi^+|Phi^+>|^2 =', state_fidelity(psi_plus_state, phi_plus_state))
print('|<Psi^+|Phi^->|^2 =', state_fidelity(psi_plus_state, phi_minus_state))
print('|<Psi^+|Psi^+>|^2 =', state_fidelity(psi_plus_state, psi_plus_state))
print('|<Psi^+|Psi^->|^2 =', state_fidelity(psi_plus_state, psi_minus_state))
```
```python
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
```