# Experiments with the Unscented Kalman Filter
These experiments are taken from: https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python (Chapter 10).
```python
%matplotlib inline
```
To run the next cell, the file *book_format.py* and the *kf_book* directory must be in your working directory.
```python
import book_format
book_format.set_style()
```
To run the experiments you must install the *FilterPy* library.
See: https://anaconda.org/conda-forge/filterpy
https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/kf_book/nonlinear_plots.py
```python
from kf_book.book_plots import set_figsize, figsize
import matplotlib.pyplot as plt
from kf_book.nonlinear_plots import plot_nonlinear_func
from scipy.stats import norm
import numpy as np
# create 500,000 samples with mean 0, std 1
gaussian = (0., 1.)
nbSamples = 500000
data = norm.rvs(loc = gaussian[0], scale = gaussian[1], size = nbSamples)
def f(x):
return (np.cos(4 * (x / 2 + 0.7))) - 1.3 * x
plot_nonlinear_func(data, f)
```
```python
nbrSamples = 30000
plt.subplot(121)
plt.scatter(data[:nbrSamples], range(nbrSamples), alpha = 0.2, s = 1)
plt.title('Input (before transformation $f(x)$)')
plt.subplot(122)
plt.title('Output (after transformation $f(x)$)')
plt.scatter(f(data[:nbrSamples]), range(nbrSamples), alpha = 0.2, s = 1)
plt.show()
```
Let's consider the 1D tracking problem governed by the following state-space model:
$$
\begin{bmatrix}
x_{k+1} \\
\dot{x}_{k+1}
\end{bmatrix}
=
\begin{bmatrix}
1 & \Delta t \\
0 & 1
\end{bmatrix}
\begin{bmatrix}
x_{k} \\
\dot{x}_{k}
\end{bmatrix}
+
v_k
\\
z_k
=
\begin{bmatrix}
1 & 0
\end{bmatrix}
\begin{bmatrix}
x_{k} \\
\dot{x}_{k}
\end{bmatrix}
+
w_k
$$
The state vector consists of the position $x_k$ and the velocity $\dot{x}_{k}$, with $\Delta t$ the sampling period.
The following paper describes an algorithm for computing the sigma points.
Simon J. Julier and Jeffrey K. Uhlmann. *New extension of the Kalman filter to nonlinear systems*. Proceedings of SPIE, Vol 3068, N° 1, pp. 182-193, 1997.
```python
import os
os.system('open UKF1.pdf')
```
0
```python
from filterpy.kalman import JulierSigmaPoints
from kf_book.ukf_internal import plot_sigmas
sigmaPoints = JulierSigmaPoints(n = 2, kappa = 1)
plt.title('Sigma points generated with Julier\'s method')
plot_sigmas(sigmaPoints, x = [3, 17], cov = [[1, 0.5], [0.5, 3]])
```
```python
def fx(x, dt):
xout = np.empty_like(x)
xout[0] = x[1] * dt + x[0]
xout[1] = x[1]
return xout
def hx(x):
return x[:1] # return position [x]
```
```python
from scipy.stats import norm
from filterpy.kalman import UnscentedKalmanFilter
from filterpy.common import Q_discrete_white_noise
ukf = UnscentedKalmanFilter(dim_x = 2, dim_z = 1, dt = 1.0, hx = hx, fx = fx, points = sigmaPoints)
ukf.P = ukf.P * 10
ukf.R = ukf.R * 0.5
ukf.Q = Q_discrete_white_noise(dim = 2, dt = 1.0, var = 0.03)
zs, xs = [], []
for i in range(50):
z = norm.rvs(loc = i, scale = 0.5, size = 1)
ukf.predict()
ukf.update(z)
xs.append(ukf.x[0])
zs.append(z)
plt.plot(xs, label = 'Position estimate (UKF)')
plt.plot(zs, marker = 'o', ls = '', color = 'magenta', label = 'Measurements')
plt.title('Position estimate with UKF')
plt.legend()
plt.show()
```
The following paper describes another algorithm for computing the sigma points.
Wan E., Van Der Merwe R. *The unscented Kalman filter for nonlinear estimation*. In: Proceedings of the IEEE 2000 Adaptive Systems for Signal Processing, Communications, and Control Symposium (Cat. No.00EX373). IEEE; 2000:153-158. doi:10.1109/ASSPCC.2000.882463
```python
import os
os.system('open UKF2.pdf')
```
0
```python
def f_nonlinear_xy(x, y):
return np.array([x + y, 0.1 * x ** 2 + y * y])
```
```python
from filterpy.kalman import unscented_transform, MerweScaledSigmaPoints
from numpy.random import multivariate_normal
from kf_book.nonlinear_plots import plot_monte_carlo_mean
import scipy.stats as stats
#initial mean and covariance
mean = (0., 0.)
p = np.array([[32., 15], [15., 40.]])
# create sigma points and weights
points = MerweScaledSigmaPoints(n=2, alpha=.3, beta=2., kappa=.1)
sigmaPoints = points.sigma_points(mean, p)
### pass through nonlinear function
sigmas_f = np.empty((5, 2))
for i in range(5):
sigmas_f[i] = f_nonlinear_xy(sigmaPoints[i, 0], sigmaPoints[i ,1])
### use unscented transform to get new mean and covariance
ukf_mean, ukf_cov = unscented_transform(sigmas_f, points.Wm, points.Wc)
#generate random points
np.random.seed(100)
xs, ys = multivariate_normal(mean=mean, cov=p, size=5000).T
plot_monte_carlo_mean(xs, ys, f_nonlinear_xy, ukf_mean, 'Unscented Mean')
ax = plt.gcf().axes[0]
ax.scatter(sigmaPoints[:,0], sigmaPoints[:,1], c='r', s=30);
```
## Using the UKF
We will consider a linear problem you already know how to solve with the linear Kalman filter. Although the UKF was designed for nonlinear problems, it finds the same optimal result as the linear Kalman filter for linear problems. We will write a filter to track an object in 2D using a constant velocity model.
\begin{equation}
\left\{
\begin{array}{l}
x_{k+1} = F x_k +C u + G v_k \\
z_k = H x_k + w_k
\end{array}
\right.
\end{equation}
$u_t \in \mathbb{R}^n$: a known input, $v_t \in \mathbb{R}^{\ell}$: Gaussian state noise, $w_t \in \mathbb{R}^p$: Gaussian measurement noise.
$v_t$ and $w_t$ are white noises whose correlation matrices are:
\begin{equation}
\left\{
\begin{array}{l}
\mathbb{E}[v_i v_j^T] = \delta_{ij}Q_i \\
\mathbb{E}[w_i w_j^T] = \delta_{ij}R_i \\
\mathbb{E}[v_i w_j^T] = 0
\end{array}
\right.
\end{equation}
$x_0 \sim \mathcal{N}(\hat{x}_0,P_{0\vert0})$ is independent of the noises $v_t$ and $w_t$.
We want a constant velocity model, so we define $\bf{x}$ from the Newtonian equations:
$$
\begin{aligned}
x_k &= x_{k-1} + \dot x_{k-1}\Delta t \\
y_k &= y_{k-1} + \dot y_{k-1}\Delta t
\end{aligned}
$$
$$
\mathbf{x} = \begin{bmatrix}
x \\
\dot{x} \\
y \\
\dot{y}
\end{bmatrix}
$$
With this ordering of state variables the state transition matrix is
$$
\mathbf{F} = \begin{bmatrix}
1 & \Delta t & 0 & 0 \\
0 & 1& 0 & 0 \\
0 & 0 & 1 & \Delta t\\
0 & 0 & 0 & 1
\end{bmatrix}
$$
Our sensors provide position but not velocity, so the measurement function is
$$
\mathbf{H} = \begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0
\end{bmatrix}
$$
The sensor readings are in meters with an error of $\sigma=0.3$ meters in both *x* and *y*. This gives us a measurement noise matrix of
$$
\mathbf{R} = \begin{bmatrix}
0.3 ^ 2 & 0 \\
0 & 0.3 ^ 2
\end{bmatrix}
$$
Finally, let's assume that the process noise can be represented by the discrete white noise model - that is, that over each time period the acceleration is constant. We can use `FilterPy`'s `Q_discrete_white_noise()` to create this matrix for us, but for review the matrix is
$$
\mathbf{Q} = \begin{bmatrix}
\frac{1}{4} \Delta t ^ 4 & \frac{1}{2} \Delta t ^ 3 \\
\frac{1}{2} \Delta t ^ 3 & \Delta t ^ 2
\end{bmatrix} \sigma ^ 2
$$
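As a quick sanity check (an addition, not part of the original text), we can confirm that `Q_discrete_white_noise()` returns this matrix for $\Delta t = 1$ and a chosen variance:
```python
from filterpy.common import Q_discrete_white_noise
import numpy as np
dt, var = 1.0, 0.02
Q = Q_discrete_white_noise(dim = 2, dt = dt, var = var)
# Expected matrix from the formula above: [[dt^4/4, dt^3/2], [dt^3/2, dt^2]] * sigma^2
expected = np.array([[0.25 * dt ** 4, 0.5 * dt ** 3],
                     [0.5 * dt ** 3, dt ** 2]]) * var
print(np.allclose(Q, expected))  # True
```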
The model is linear, so we can use the **Kalman filter**. An implementation of this filter is proposed below:
```python
from filterpy.kalman import KalmanFilter
from filterpy.common import Q_discrete_white_noise
from scipy.stats import norm
std_x, std_y = 0.3, 0.3
dt = 1.0
np.random.seed(1234)
kf = KalmanFilter(4, 2)
kf.x = np.array([0., 0., 0., 0.])
kf.R = np.diag([std_x ** 2, std_y ** 2])
kf.F = np.array([[1, dt, 0, 0],
[0, 1, 0, 0],
[0, 0, 1, dt],
[0, 0, 0, 1]])
kf.H = np.array([[1, 0, 0, 0],
[0, 0, 1, 0]])
kf.Q[0:2, 0:2] = Q_discrete_white_noise(2, dt = 1, var = 0.02)
kf.Q[2:4, 2:4] = Q_discrete_white_noise(2, dt = 1, var = 0.02)
zs = [np.array([norm.rvs(loc = i, scale = std_x, size = 1),
norm.rvs(loc = i, scale = std_y, size = 1)]) for i in range(100)]
xs, _, _, _ = kf.batch_filter(zs)
plt.plot(xs[:, 0], xs[:, 2])
plt.xlabel('x')
plt.ylabel('y')
plt.title('Estimation of $x$ and $y$ with a KF')
plt.show()
```
```python
plt.plot(xs[:, 0])
plt.title('$\hat{x}$')
plt.show()
```
```python
plt.plot(xs[:, 1])
plt.title('$\hat{\dot{x}}$')
plt.show()
```
```python
plt.plot(xs[:, 2])
plt.title('$\hat{y}$')
plt.show()
```
```python
plt.plot(xs[:, 3])
plt.title('$\hat{\dot{y}}$')
plt.show()
```
We are going to implement an Unscented Kalman Filter (UKF) to solve the same problem although this is not useful given the linear nature of the model. This is just to illustrate the construction of the filter in the context of the _FilterPy_ library.
```python
def f_cv(x, dt):
""" state transition function for a
constant velocity model"""
F = np.array([[1, dt, 0, 0],
[0, 1, 0, 0],
[0, 0, 1, dt],
[0, 0, 0, 1]])
return F @ x
def h_cv(x):
return x[[0, 2]]
```
```python
from filterpy.kalman import MerweScaledSigmaPoints
from filterpy.kalman import UnscentedKalmanFilter as UKF
from filterpy.common import Q_discrete_white_noise
import numpy as np
from numpy.random import randn
import matplotlib.pyplot as plt
std_x, std_y = 0.3, 0.3
dt = 1.0
sigmaPoints = MerweScaledSigmaPoints(4, alpha = 0.1, beta = 2.0, kappa = 1.0)
ukf = UKF(dim_x = 4, dim_z = 2, fx = f_cv, hx = h_cv, dt = dt, points = sigmaPoints)
ukf.x = np.array([0.0, 0.0, 0.0, 0.0])
ukf.R = np.diag([0.09, 0.09])
ukf.Q[0:2, 0:2] = Q_discrete_white_noise(2, dt = 1, var = 0.02)
ukf.Q[2:4, 2:4] = Q_discrete_white_noise(2, dt = 1, var = 0.02)
#zs = [np.array([norm.rvs(loc = i, scale = std_x, size = 1),
# norm.rvs(loc = i, scale = std_y, size = 1)]) for i in range(100)]
zs = [np.array([i + randn() * std_x,
i + randn() * std_y]) for i in range(100)]
uxs = []
for z in zs:
ukf.predict()
ukf.update(z)
uxs.append(ukf.x.copy())
uxs = np.array(uxs)
plt.plot(uxs[:, 0], uxs[:, 2])
plt.xlabel('x')
plt.ylabel('y')
plt.title('Estimation of $x$ and $y$ with an UKF')
plt.show()
#print(f'UKF standard deviation {np.std(uxs - xs):.3f} meters')
```
### Study the example below
```python
import kf_book.ekf_internal as ekf_internal
ekf_internal.show_radar_chart()
```
The *elevation angle* $\epsilon$ is the angle of the line of sight above the ground.
We assume that the aircraft is flying at a constant altitude. The state vector is:
$$
\mathbf{x} = \begin{bmatrix}
x \\
\dot{x} \\
y
\end{bmatrix}
$$
with $x$ the distance, $\dot{x}$ the velocity and $y$ the altitude.
The matrix $\mathbf{F}$ is:
$$
\mathbf{x}_{k+1} = \begin{bmatrix}
1 & \Delta t & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{bmatrix}
\mathbf{x}_{k}
$$
The computation of $\mathbf{F}$ can be implemented with the following function:
```python
def f_radar(x, dt):
""" state transition function for a constant velocity
aircraft with state vector [x, velocity, altitude]'"""
F = np.array([[1, dt, 0],
[0, 1, 0],
[0, 0, 1]], dtype=float)
return F @ x
```
The measurement function $h$ maps the state to the radar measurements (slant range $r$ and elevation angle $\epsilon$), and is nonlinear:
$$
\begin{bmatrix}
r \\
\epsilon
\end{bmatrix}
=
h \left( \begin{bmatrix}
x \\
\dot{x} \\
y
\end{bmatrix} \right)
$$
$$
r = \sqrt{(x_\text{aircraft} - x_\text{radar})^2 + (y_\text{aircraft} - y_\text{radar})^2}
$$
$$
\epsilon = \tan^{-1} \frac{y}{x}
$$
$$
\epsilon = \tan^{-1}{\frac{y_\text{aircraft} - y_\text{radar}}{x_\text{aircraft} - x_\text{radar}}}
$$
This measurement function can be implemented as follows:
```python
def h_radar(x):
dx = x[0] - h_radar.radar_pos[0]
dy = x[2] - h_radar.radar_pos[1]
slant_range = math.sqrt(dx ** 2 + dy ** 2)
elevation_angle = math.atan2(dy, dx)
return [slant_range, elevation_angle]
h_radar.radar_pos = (0, 0)
```
The simulation of the aircraft and the radar is carried out by the following code:
```python
from numpy.linalg import norm
from math import atan2
class RadarStation:
def __init__(self, pos, range_std, elev_angle_std):
self.pos = np.asarray(pos)
self.range_std = range_std
self.elev_angle_std = elev_angle_std
def reading_of(self, ac_pos):
""" Returns (range, elevation angle) to aircraft.
Elevation angle is in radians.
"""
diff = np.subtract(ac_pos, self.pos)
rng = norm(diff)
brg = atan2(diff[1], diff[0])
return rng, brg
def noisy_reading(self, ac_pos):
""" Compute range and elevation angle to aircraft with
simulated noise"""
rng, brg = self.reading_of(ac_pos)
rng = rng + randn() * self.range_std
brg = brg + randn() * self.elev_angle_std
return rng, brg
class ACSim:
def __init__(self, pos, vel, vel_std):
self.pos = np.asarray(pos, dtype=float)
self.vel = np.asarray(vel, dtype=float)
self.vel_std = vel_std
def update(self, dt):
""" Compute and returns next position. Incorporates
random variation in velocity. """
dx = self.vel * dt + (randn() * self.vel_std) * dt
self.pos = self.pos + dx
return self.pos
```
UKF implementation for aircraft tracking.
```python
import math
from kf_book.ukf_internal import plot_radar
dt = 3. # 3 seconds between readings
range_std = 5 # meters
elevation_angle_std = math.radians(0.5)
ac_pos = (0., 1000.)
ac_vel = (100., 0.)
radar_pos = (0., 0.)
h_radar.radar_pos = radar_pos
points = MerweScaledSigmaPoints(n=3, alpha=.1, beta=2., kappa=0.)
kf = UKF(3, 2, dt, fx=f_radar, hx=h_radar, points=points)
kf.Q[0:2, 0:2] = Q_discrete_white_noise(2, dt=dt, var=0.1)
kf.Q[2,2] = 0.1
kf.R = np.diag([range_std**2, elevation_angle_std**2])
kf.x = np.array([0., 90., 1100.])
kf.P = np.diag([300**2, 30**2, 150**2])
np.random.seed(200)
pos = (0, 0)
radar = RadarStation(pos, range_std, elevation_angle_std)
ac = ACSim(ac_pos, (100, 0), 0.02)
time = np.arange(0, 360 + dt, dt)
xs = []
for _ in time:
ac.update(dt)
r = radar.noisy_reading(ac.pos)
kf.predict()
kf.update([r[0], r[1]])
xs.append(kf.x)
plot_radar(xs, time)
```
```python
import numpy as np
import pandas as pd
import linearsolve as ls
import matplotlib.pyplot as plt
plt.style.use('classic')
%matplotlib inline
```
# Class 13: Introduction to Real Business Cycle Modeling
Real business cycle (RBC) models are extensions of the stochastic Solow model. RBC models replace the ad hoc assumption of a constant saving rate in the Solow model with the solution to an intertemporal utility maximization problem that gives rise to a variable saving rate. RBC models also often feature some sort of household labor-leisure tradeoff that produces endogenous labor variation.
In this notebook, we'll consider a baseline RBC model that does not have labor. We'll use the model to compute impulse responses to a one percent shock to TFP.
## The Baseline RBC Model without Labor
The equilibrium conditions for the RBC model without labor are:
\begin{align}
\frac{1}{C_t} & = \beta E_t \left[\frac{\alpha A_{t+1}K_{t+1}^{\alpha-1} +1-\delta }{C_{t+1}}\right]\\
K_{t+1} & = I_t + (1-\delta) K_t\\
Y_t & = A_t K_t^{\alpha}\\
Y_t & = C_t + I_t\\
\log A_{t+1} & = \rho \log A_t + \epsilon_{t+1}
\end{align}
where $\epsilon_{t+1} \sim \mathcal{N}(0,\sigma^2)$.
The objective is to use `linearsolve` to simulate impulse responses to a TFP shock using the following parameter values for the simulation:
| $\rho$ | $\sigma$ | $\beta$ | $\alpha$ | $\delta $ | $T$ |
|--------|----------|---------|----------|-----------|-----|
| 0.75 | 0.006 | 0.99 | 0.35 | 0.025 | 26 |
## Model Preparation
Before proceeding, let's recast the model in the form required for `linearsolve`. Write the model with all variables moved to the left-hand side of the equations, dropping the expectations operator $E_t$ and the exogenous shock $\epsilon_{t+1}$:
\begin{align}
0 & = \beta\left[\frac{\alpha A_{t+1}K_{t+1}^{\alpha-1} +1-\delta }{C_{t+1}}\right] - \frac{1}{C_t}\\
0 & = A_t K_t^{\alpha} - Y_t\\
0 & = I_t + (1-\delta) K_t - K_{t+1}\\
0 & = C_t + I_t - Y_t\\
0 & = \rho \log A_t - \log A_{t+1}
\end{align}
Remember, capital and TFP are called *state variables* because their $t+1$ values are predetermined. Output, consumption, and investment are called *costate* or *control* variables. Note that the model has 5 equations in 5 endogenous variables.
## Initialization, Approximation, and Solution
The next several cells initialize the model in `linearsolve` and then approximate and solve it.
```python
# Create a variable called 'parameters' that stores the model parameter values in a Pandas Series
parameters = pd.Series()
parameters['rho'] = .75
parameters['beta'] = 0.99
parameters['alpha'] = 0.35
parameters['delta'] = 0.025
# Print the model's parameters
print(parameters)
```
rho 0.750
beta 0.990
alpha 0.350
delta 0.025
dtype: float64
```python
# Create a variable called 'sigma' that stores the value of sigma
sigma = 0.006
```
```python
# Create variable called 'var_names' that stores the variable names in a list with state variables ordered first
var_names = ['a','k','y','c','i']
# Create variable called 'shock_names' that stores an exogenous shock name for each state variable.
shock_names = ['e_a','e_k']
```
```python
# Define a function that evaluates the equilibrium conditions of the model solved for zero. PROVIDED
def equilibrium_equations(variables_forward,variables_current,parameters):
# Parameters. PROVIDED
p = parameters
# Current variables. PROVIDED
cur = variables_current
# Forward variables. PROVIDED
fwd = variables_forward
# Euler equation
euler_equation = p.beta*(p.alpha*fwd.a*fwd.k**(p.alpha-1)+1-p.delta)/fwd.c - 1/cur.c
# Production function
production_function = cur.a*cur.k**p.alpha - cur.y
# Capital evolution
capital_evolution = cur.i + (1 - p.delta)*cur.k - fwd.k
# Market clearing
market_clearing = cur.c+cur.i - cur.y
# Exogenous tfp
tfp_process = p.rho*np.log(cur.a) - np.log(fwd.a)
# Stack equilibrium conditions into a numpy array
return np.array([
euler_equation,
production_function,
capital_evolution,
market_clearing,
tfp_process
])
```
Next, initialize the model using `ls.model` which takes the following required arguments:
* `equations`
* `n_states`
* `var_names`
* `shock_names`
* `parameters`
```python
# Initialize the model into a variable named 'rbc_model'
rbc_model = ls.model(equations = equilibrium_equations,
n_states=2,
var_names=var_names,
shock_names=shock_names,
parameters=parameters)
```
```python
# Compute the steady state numerically using .compute_ss() method of rbc_model
guess = [1,4,1,1,1]
rbc_model.compute_ss(guess)
# Print the computed steady state
print(rbc_model.ss)
```
a 1.000000
k 34.398226
y 3.449750
c 2.589794
i 0.859956
dtype: float64
```python
# Find the log-linear approximation around the non-stochastic steady state and solve using .approximate_and_solve() method of rbc_model
rbc_model.approximate_and_solve()
```
### Impulse Responses
Compute 26-period impulse responses of the model's variables to a 0.01 unit shock to TFP in period 5.
```python
# Compute impulse responses
rbc_model.impulse(T=26,t0=5,shocks=[0.01,0])
# Print the first 10 rows of the computed impulse responses to the TFP shock
print(rbc_model.irs['e_a'].head(10))
```
e_a a k y c i
0 0.00 0.000000 0.000000 0.000000 0.000000 0.000000
1 0.00 0.000000 0.000000 0.000000 0.000000 0.000000
2 0.00 0.000000 0.000000 0.000000 0.000000 0.000000
3 0.00 0.000000 0.000000 0.000000 0.000000 0.000000
4 0.00 0.000000 0.000000 0.000000 0.000000 0.000000
5 0.01 0.010000 0.000000 0.010000 0.001253 0.036342
6 0.00 0.007500 0.000909 0.007818 0.001493 0.026865
7 0.00 0.005625 0.001557 0.006170 0.001654 0.019772
8 0.00 0.004219 0.002013 0.004923 0.001755 0.014465
9 0.00 0.003164 0.002324 0.003978 0.001812 0.010499
Construct a $2\times2$ grid of plots of simulated TFP, output, consumption, and investment. Be sure to multiply simulated values by 100 so that vertical axis units are in "percent deviation from steady state."
```python
# Create figure. PROVIDED
fig = plt.figure(figsize=(12,8))
# Create upper-left axis. PROVIDED
ax = fig.add_subplot(2,2,1)
ax.plot(rbc_model.irs['e_a']['a']*100,'b',lw=5,alpha=0.75)
ax.set_title('TFP')
ax.set_ylabel('% dev from steady state')
ax.set_ylim([-0.2,1.2])
ax.grid()
# Create upper-right axis. PROVIDED
ax = fig.add_subplot(2,2,2)
ax.plot(rbc_model.irs['e_a']['y']*100,'b',lw=5,alpha=0.75)
ax.set_title('Output')
ax.set_ylabel('% dev from steady state')
ax.set_ylim([-0.2,1.2])
ax.grid()
# Create lower-left axis. PROVIDED
ax = fig.add_subplot(2,2,3)
ax.plot(rbc_model.irs['e_a']['c']*100,'b',lw=5,alpha=0.75)
ax.set_title('Consumption')
ax.set_ylabel('% dev from steady state')
ax.set_ylim([-0.05,0.30])
ax.grid()
# Create lower-right axis. PROVIDED
ax = fig.add_subplot(2,2,4)
ax.plot(rbc_model.irs['e_a']['i']*100,'b',lw=5,alpha=0.75)
ax.set_title('Investment')
ax.set_ylabel('% dev from steady state')
ax.set_ylim([-1,6])
ax.grid()
fig.tight_layout()
```
# ***Introduction to Radar Using Python and MATLAB***
## Andy Harrison - Copyright (C) 2019 Artech House
<br/>
# Ovals of Cassini
***
Referring to Figure 4.8, the Cassini ovals are a family of quartic curves, sometimes referred to as Cassini ellipses, described by the set of points such that the product of the distances to two fixed points a fixed distance apart is a constant [8]. For bistatic systems, the system performance may be analyzed by plotting the Cassini ovals for various signal-to-noise ratios. The Cassini ovals are governed by (Equation 4.65)
\begin{equation}\label{eq:bistatic_range_product_polar}
(r_t\, r_r)^2 = \Big(\rho^2 + (D/2)^2\Big)^2 - \rho^2\, D^2\, \cos^2\theta
\end{equation}
***
Begin by getting the library path
```python
import lib_path
```
Set the separation distance, ***D***, between the transmitting and receiving radars (m)
```python
separation_distance = 10e3
```
Set the system temperature (K), bandwidth (Hz), noise_figure (dB), transmitting and receiving losses (dB), peak transmit power (W), transmitting and receiving antenna gain (dB), operating frequency (Hz), and bistatic target RCS (dBsm)
```python
system_temperature = 290
bandwidth = 10e6
noise_figure = 3
transmit_losses = 4
receive_losses = 6
peak_power = 100e3
transmit_antenna_gain = 30
receive_antenna_gain = 28
frequency = 1e9
bistatic_target_rcs = 10
```
Set the number of points for plotting the Cassini ovals
```python
number_of_points = 100000
```
Set the parameters for the Cassini ovals equation
$$ r ^ 4 + a ^ 4 - 2 a ^ 2 r ^ 2 \cos(2 \theta) = b ^ 4 $$
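Treating this as a quadratic in $r^2$ (an intermediate step added here for clarity) gives
$$ r ^ 2 = a ^ 2 \left[ \cos(2 \theta) \pm \sqrt{\cos^2(2 \theta) - 1 + (b / a) ^ 4} \right], $$
which is the form evaluated in the plotting code below: when $a > b$ both roots are used (two separate lobes), and when $a \le b$ only the $+$ root is real everywhere, giving a single continuous curve.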
Import the `linspace` and `log10` routines along with some constants from `scipy` for the angle sweep
```python
from numpy import linspace, log10
from scipy.constants import pi, c, k
```
```python
# Parameter "a"
a = 0.5 * separation_distance
# Full angle sweep
t = linspace(0, 2.0 * pi, number_of_points)
```
Calculate the bistatic range factor and use this along with the separation distance to calculate SNR<sub>0</sub> (where the factors ***a*** and ***b*** are equal)
```python
# Calculate the wavelength (m)
wavelength = c / frequency
# Calculate the bistatic radar range factor
bistatic_range_factor = (peak_power * transmit_antenna_gain * receive_antenna_gain * wavelength ** 2 * 10.0 ** (bistatic_target_rcs / 10.0)) / ((4.0 * pi) ** 3 * k * system_temperature * bandwidth * 10.0 ** (noise_figure / 10.0) * transmit_losses * receive_losses)
# Calculate the signal to noise ratio at which a = b
SNR_0 = 10.0 * log10(16.0 * bistatic_range_factor / separation_distance ** 4)
```
Create a list of the signal to noise ratios to plot
```python
SNR = [SNR_0 - 6, SNR_0 - 3, SNR_0, SNR_0 + 3]
```
Import the `matplotlib` routines for plotting the Cassini ovals
```python
from matplotlib import pyplot as plt
```
Import `sqrt`, `sin`, `cos`, `real`, and `imag` from `scipy` for plotting the Cassini ovals
```python
from numpy import sqrt, sin, cos, real, imag
```
Display the resulting Cassini ovals
```python
# Set the figure size
plt.rcParams["figure.figsize"] = (15, 10)
# Loop over all the desired signal to noise ratios
for s in SNR:
# Convert to linear units
snr = 10.0 ** (s / 10.0)
# Parameter for Cassini ovals
b = (bistatic_range_factor / snr) ** 0.25
if a > b:
# Calculate the +/- curves
r1 = sqrt(a ** 2 * (cos(2.0 * t) + sqrt(cos(2 * t) ** 2 - 1.0 + (b / a) ** 4)))
r2 = sqrt(a ** 2 * (cos(2.0 * t) - sqrt(cos(2 * t) ** 2 - 1.0 + (b / a) ** 4)))
# Find the correct indices for imaginary parts = 0
i1 = imag(r1) == 0
i2 = imag(r2) == 0
r1 = real(r1)
r2 = real(r2)
# Plot both parts of the curve
label_text = "SNR = {:.1f}".format(s)
plt.plot(r1[i1] * cos(t[i1]), r1[i1] * sin(t[i1]), 'k.', label=label_text)
plt.plot(r2[i2] * cos(t[i2]), r2[i2] * sin(t[i2]), 'k.')
else:
# Calculate the range for the continuous curves
r = sqrt(a ** 2 * cos(2 * t) + sqrt(b ** 4 - a ** 4 * sin(2.0 * t) ** 2))
# Plot the continuous parts
label_text = "SNR = {:.1f}".format(s)
plt.plot(r * cos(t), r * sin(t), '.', label=label_text)
# Add the text for Tx/Rx locations
plt.text(-a, 0, 'Tx')
plt.text(a, 0, 'Rx')
# Set the plot title and labels
plt.title('Ovals of Cassini', size=14)
plt.xlabel('Range (m)', size=12)
plt.ylabel('Range (m)', size=12)
# Set the tick label size
plt.tick_params(labelsize=12)
# Turn on the grid
plt.grid(linestyle=':', linewidth=0.5)
# Add the legend
plt.legend(loc='upper left', prop={'size': 10})
```
## RSA (Rivest–Shamir–Adleman) - Cryptosystem
### Algorithm
#### 1. Choose two distinct prime numbers p and q.
<p> For security purposes, the integers p and q should be chosen at random, and should be similar in magnitude but differ in length by a few digits to make factoring harder. Prime integers can be efficiently found using a primality test. </p>
#### 2. Compute n = pq.
<p> n is used as the modulus for both the public and private keys. Its length, usually expressed in bits, is the key length. </p>
#### 3. Compute Φ(n) = lcm(Φ(p), Φ(q)) = lcm(p − 1, q − 1)
<p> Φ is Carmichael's totient function. This value is kept private. </p>
#### 4. Choose an integer e such that 1 < e < Φ(n) and gcd(e, Φ(n)) = 1; i.e., e and Φ(n) are coprime.
<p> Note that if e is prime and not a divisor of Φ(n), then e and Φ(n) are coprime. </p>
#### 5. Determine d such that (e*d) % Φ(n) = 1
<p> d is the modular multiplicative inverse of e (modulo Φ(n)).</p>
#### To encrypt:
<p> c(m) = m^e % n </p>
<p> The <b>public key</b>, composed of e and n, is used to encrypt the message. </p>
#### To decrypt:
<p> m(c) = c^d % n </p>
<p> The <b>private key</b>, composed of d and n, is used to decrypt the encrypted message. </p>
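As a small, self-contained illustration (added here; it uses the classic textbook values p = 61 and q = 53, and `pow(e, -1, phi)`, which requires Python 3.8+), the round trip looks like this:
```python
import sympy
p, q = 61, 53
n = p * q                           # 3233
phi = int(sympy.lcm(p - 1, q - 1))  # Carmichael totient: lcm(60, 52) = 780
e = 17                              # prime and coprime with phi
d = pow(e, -1, phi)                 # modular inverse of e mod phi (413)
m = 65
c = pow(m, e, n)                    # encrypt with the public key (e, n)
assert pow(c, d, n) == m            # decrypt with the private key (d, n)
```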
#### References:
[Wikipedia - RSA (cryptosystem)](https://en.wikipedia.org/wiki/RSA_%28cryptosystem%29)
[Wikipedia - Carmichael function](https://en.wikipedia.org/wiki/Carmichael_function)
[Sympy.org](http://www.sympy.org/pt/index.html)
[Wikibooks - Extended Euclidean algorithm](https://en.wikibooks.org/wiki/Algorithm_Implementation/Mathematics/Extended_Euclidean_algorithm)
```python
from random import randint
import sympy
import numpy
# return the smallest prime in range [n,n+1000)
# return -1 if it doesn't exist
def next_prime(n):
for i in range(n,n+1000):
if (sympy.isprime(i)):
return i
return -1
# Extended Euclidean algorithm
def egcd(a, b):
if a != 0:
x, y, z = egcd(b % a, a)
return (x, z - (b // a) * y, y)
return (b, 0, 1)
# Modular Inverse
# return -1 if it doesn't exist
def mod_inv(a, b):
x, y, _ = egcd(a, b)
if x != 1:
return -1
else:
return y % b
def encrypt(m,e,n):
return pow(m,e,n)
def decrypt(c,d,n):
return pow(c,d,n)
## 1. p and q
p = -1
while(p == -1):
p = next_prime(randint(10**130,10**150))
q = -1
while(q == -1):
q = next_prime(randint(10**100,10**120))
if (q == p):
q = -1
if(randint(0,9) % 2 == 0):
aux = p
p = q
q = aux
print("p and q:")
print(hex(p))
print(hex(q))
## 2. n
n = p*q
print("\nn:")
print(hex(n))
## 3. phi
phi = sympy.lcm((p-1),(q-1))
print("\nphi(n):")
print(hex(phi))
## 4. e
e = -1
while(e == -1):
e = next_prime(randint(2,phi))
if e >= phi:
e = -1
elif e%phi == 0:
e = -1
print("\ne:")
print(hex(e))
## 5. d
d = int(mod_inv(e, phi))
if d == -1:
raise Exception('Modular inverse does not exist, but don`t worry, just run it again. =)')
print("\nd:")
print(hex(d))
## Encrypt:
c = encrypt(42,e,n)
print("\nmessage encrypted:")
print(hex(c))
## Decrypt:
m = decrypt(c,d,n)
print("\nmessage decrypted:")
print(m)
```
p and q:
0x4972224d37eeec694dbc80cdb3b88052a29faa76b113148ad06f89995c3a7fcdb97cbe523efadde3a0b889e7f8094b893b255c5c7a05d7c99819c9c3e9d29
0x3e5810c28da948c519eb2b92dccc0f1afc654f60bc1ad1588514a5cf20b0ee057117cc0071499f887d56e9477be11a12f23b
n:
0x11e2e859715e5783e98e5c5d6d5d73817889d24062c2f6c3212a495b072cc1a30fbac63d61d0221094d0c31731a3d4ed56a36d57374822025b1f9626e90c087607d3da2127815f5f38216729e9fe211b8c6e37b702828131a57555c8d0b23f3d584e65118d537fc2f28b2173169e0fa73
phi(n):
0x8f1742cb8af2bc1f4c72e2eb6aeb9c0bc44e92031617b61909524ad839660d187dd631eb0e811084a68618b98d1ea76ab5191f28a7d7509b75b2435341f286c033a8753743f2e56b86eb476cfb98a8d5bd5393c4b8d5cf2be9a61829bf4f8114802916bcb7f3ef3d61688cd9d9c7b588
e:
0x5645c37219fad185353336fd4f4aa8a1d95ecfe4d083ef88e064082ddc892eee4db2127436c07be7edeed3fb96cf941531ac9ea5bb4b17bb5bb8854b42c67f6998038558b7f3033cf2c84cfd598a3218e3d1fa34901767143c26c744d80bc88fb691ebfe975279f1aba94fd56924ce9d
d:
0x3d0cf1214c4b09200b502b82b053d719e78bb40a1bbfbe8314a4faaa0fecd828273b559921ba436d6ae979c7d6aeedac2255a76ec54d61e0476091101b2f2c3de4a31e2f00061ad351bad807dfbe4a55f5e3e4ef94e3b65f89f3d03ba7f79fadfb17a7a9113462fa256b4c2201f32a6d
message encrypted:
0x1036f1372d8acabac7b61cc594e93393c9a30754d8270134da09a71cdbd1aa62591f132ef877a21e595996b90ab8fb3f2361afd44a5665f8ff51f86a1ea35f2f5144d9afeac125c48696bb3fd6dd034a3cffb423cc57d2a706cbbc6833344272be898d53d14bdc459b6b958128d96beaa
message decrypted:
42
# Integrated Rate Laws
Here we will integrate [zeroth-order](#Zeroth-order-reaction), [first-order](#First-order-reaction), and [second-order](#Second-order-reaction) rate laws. Our goal is to use the definition of the reaction rate, together with the rate law, to find integrated rate laws.
To make this more concrete, we consider the reaction,
$$\text{A} \to \text{products},$$
where the reaction rate, $r$, is given by,
$$r = -\frac{\text{d} [\text{A}]}{\text{d} t}.$$
The rate law is,
$$r = k [\text{A}]^\alpha,$$
where $k$ is the rate constant and $\alpha$ is the order of the reaction. Setting these two expressions for $r$ equal to each other gives a differential equation we can solve,
$$-\frac{\text{d} [\text{A}]}{\text{d} t} = k [\text{A}]^\alpha.$$
In this differential equation, the unknown is the *function* $[\text{A}]$. We will solve the differential equation using [SymPy](https://www.sympy.org/).
## Zeroth-order reaction
For a zeroth-order reaction we set $\alpha = 0$, which gives the following differential equation,
$$-\frac{\text{d} [\text{A}]}{\text{d} t} = k.$$
```python
import sympy as sym
```
```python
# Define the variables:
k, t = sym.symbols('k t', positive=True, real=True)
# Define [A] as the unknown, and tell SymPy that it is a function:
A = sym.Function('A')(t)
```
```python
# Define the differential equation we want to solve:
ligning0 = sym.Eq(-sym.Derivative(A,t), k)
# Print the equation to check that it looks right:
ligning0
```
```python
# Specify that the concentration at t = 0 is A0:
A0 = sym.symbols('A0', positive=True, real=True)
start = {A.subs(t, 0): A0}
# Solve the differential equation:
løsning0 = sym.dsolve(ligning0, ics=start)
løsning0
```
The integrated rate law for a zeroth-order reaction is thus,
$$ [\text{A}] = [\text{A}]_0 - kt.$$
We can also try to find the half-life ($t_{1/2}$), which is the time it takes until half of the original amount of A remains, i.e. the time that satisfies:
$$ [\text{A}] = \frac{[\text{A}]_0}{2} = [\text{A}]_0 - kt_{1/2}.$$
```python
# At the half-life, half of the original amount of A remains, i.e. A(t) = A0 / 2.
# We write this as an equation:
halv_0 = løsning0.subs({A: A0/2})
halv_0
```
```python
# And solve it:
sym.solve(halv_0)
```
The half-life is thus determined by,
$$ [\text{A}]_0 = 2 k t_{1/2} \implies t_{1/2} = \frac{[\text{A}]_0}{2 k}.$$
## First-order reaction
For a first-order reaction we set $\alpha = 1$, which gives the following differential equation,
$$-\frac{\text{d} [\text{A}]}{\text{d} t} = k [\text{A}].$$
```python
# Define the differential equation we want to solve:
ligning1 = sym.Eq(-sym.Derivative(A,t), k * A)
# Print the equation to check that it looks right:
ligning1
```
```python
# Solve the differential equation:
løsning1 = sym.dsolve(ligning1, ics=start)
løsning1
```
```python
# We can also write the equation in logarithmic form:
løsning11 = sym.Eq(sym.log(løsning1.lhs), sym.log(løsning1.rhs))
sym.simplify(løsning11)
```
The integrated rate law for a first-order reaction is thus,
$$ \ln [\text{A}] = \ln [\text{A}]_0 - kt.$$
We can also try to find the half-life ($t_{1/2}$), which is the time it takes until half of the original amount of A remains, i.e. the time that satisfies:
$$ \ln [\text{A}] = \ln \left( \frac{[\text{A}]_0}{2} \right) = \ln [\text{A}]_0 - kt_{1/2}.$$
```python
# At the half-life, half of the original amount of A remains, i.e. A(t) = A0 / 2.
# We write this as an equation:
halv_1 = løsning1.subs({A: A0/2})
halv_1
```
```python
# And solve it:
sym.solve(halv_1)
```
Here we find that the half-life satisfies,
$$ k = \frac{\ln 2}{t_{1/2}} \implies t_{1/2} = \frac{\ln 2}{k} .$$
(Note that this half-life is *independent* of the initial concentration.)
## Second-order reaction
For a second-order reaction we set $\alpha = 2$, which gives the following differential equation,
$$-\frac{\text{d} [\text{A}]}{\text{d} t} = k [\text{A}]^2.$$
```python
# Define the differential equation we want to solve:
ligning2 = sym.Eq(-sym.Derivative(A,t), k * A**2)
# Print the equation to check that it looks right:
ligning2
```
```python
# Solve the differential equation:
løsning2 = sym.dsolve(ligning2, ics=start)
løsning2
```
This solution may look a bit different from the textbook, so let's rewrite it:
```python
løsning22 = sym.Eq(1 / løsning2.lhs, 1 / løsning2.rhs)
løsning22
```
The integrated rate law for a second-order reaction is thus,
$$ \frac{1}{[\text{A}]} = \frac{1}{[\text{A}]_0} + kt.$$
We can also try to find the half-life ($t_{1/2}$), which is the time it takes until half of the original amount of A remains, i.e. the time that satisfies:
$$ \frac{1}{[\text{A}]} = \frac{1}{\frac{[\text{A}]_0}{2}} = \frac{2}{[\text{A}]_0} = \frac{1}{[\text{A}]_0} + kt_{1/2}.$$
```python
# At the half-life, half of the original amount of A remains, i.e. A(t) = A0 / 2.
# We write this as an equation:
halv_2 = løsning22.subs({A: A0/2})
halv_2
```
```python
# And solve it:
sym.solve(halv_2)
```
The half-life is given by,
$$[\text{A}]_0 = \frac{1}{k t_{1/2}} \implies t_{1/2} = \frac{1}{k [\text{A}]_0}.$$
## Summary
We have found the integrated rate laws by solving the differential equations. We found:
| Order | Integrated rate law | Half-life |
|-------|-------------------------------------------------------------|------------------------------------------|
| 0 | $[\text{A}]_t = [\text{A}]_0 - k t$ | $t_{1/2} = \frac{[\text{A}]_0}{2 k}$ |
| 1 | $[\text{A}]_t = [\text{A}]_0 \text{e}^{-kt}$ | $t_{1/2} = \frac{\ln 2}{k}$ |
| 2 | $\frac{1}{[\text{A}]_t} = \frac{1}{[\text{A}]_0} + kt$ | $t_{1/2} = \frac{1}{k [\text{A}]_0}$ |
| 3 | $\frac{1}{[\text{A}]_t^2} = \frac{1}{[\text{A}]_0^2} + 2kt$ | $t_{1/2} = \frac{3}{2 k [\text{A}]_0^2}$ |
In the table above we have also included a third-order rate law. Can you show that it is correct (for instance by modifying the Python code above)? And, as an extra challenge for those who have taken a course on differential equations, can you solve the general case of order $n \geq 2$? That is, for
$$-\frac{\text{d} [\text{A}]}{\text{d} t} = k [\text{A}]^n, \quad n \geq 2$$
can you find a general integrated rate law?
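As a hint for the first question, here is a minimal check (added here, reusing the symbols defined above) that the third-order entry in the table solves $-\text{d}[\text{A}]/\text{d}t = k[\text{A}]^3$ with $[\text{A}](0) = [\text{A}]_0$:
```python
# Candidate solution from the table for a third-order reaction:
A3 = (1 / A0**2 + 2 * k * t)**sym.Rational(-1, 2)
# Check that it satisfies the differential equation (should print 0):
print(sym.simplify(-sym.diff(A3, t) - k * A3**3))
# ...and the initial condition (should print A0):
print(sym.simplify(A3.subs(t, 0)))
```
Setting $[\text{A}] = [\text{A}]_0/2$ in this expression then gives $t_{1/2} = \frac{3}{2 k [\text{A}]_0^2}$, matching the table.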
# Removing `if` Statements from Expressions
## Author: Patrick Nelson
### NRPy+ Source Code for this module:
* [Min_Max_and_Piecewise_Expressions.py](../edit/Min_Max_and_Piecewise_Expressions.py) Contains functions that can be used to compute the minimum or maximum of two values and to implement piecewise-defined expressions
## Introduction:
Conditional statements are a critical tool in programming, allowing us to control the flow through a program to avoid pitfalls, code piecewise-defined functions, and so forth. However, there are times when it is useful to work around them. It takes the processor time to evaluate whether or not to execute a code block, so for some expressions, performance can be improved by rewriting the expression to use an absolute value function, in a manner upon which we will expand in this tutorial. Even more relevant to NRPy+ are piecewise-defined functions. These inherently involve `if` statements, but NRPy+'s automatic code generation cannot handle these by itself, requiring hand-coding to be done. However, if it is possible to rewrite the expression in terms of absolute values, then NRPy+ can handle the entire thing itself.
The absolute value is a function that simply returns the magnitude of its argument, a positive value. That is,
\begin{align}
|x|&= \left \{ \begin{array}{lll}x & \mbox{if} & x \geq 0 \\
-x & \mbox{if} & x \leq 0 \end{array} \right. \\
\end{align}
In C, this is implemented as `fabs()`, which merely has to make the first bit of a double-precision floating point number 0, and is thus quite fast.
There are myriad uses for these tricks in practice. One example comes from GRMHD (and, by extension, the special cases of GRFFE and GRHD), in which it is necessary to limit the velocity of the plasma in order to keep the simulations stable. This is done by calculating the Lorentz factor $\Gamma$ of the plasma and comparing to some predefined maximum $\Gamma_\max$. Then, if
$$
R = 1-\frac{1}{\Gamma^2} > 1-\frac{1}{\Gamma_{\max}^2} = R_\max,
$$
we rescale the velocities by $\sqrt{R_\max/R}$. In NRPy+, we instead always rescale by
$$
\sqrt{\frac{\min(R,R_\max)}{R+\epsilon}},
$$
which has the same effect while allowing the entire process to be handled by NRPy+'s automatic code generation. ($\epsilon$ is some small number chosen to avoid division by zero without affecting the results otherwise.) See [here](Tutorial-GRHD_Equations-Cartesian.ipynb#convertvtou) for more information on this.
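As a quick illustration of the idea (a sketch added here in plain Python, not NRPy+-generated code, with function names chosen only for this example), the absolute-value form of the minimum reproduces the `if`-based limiter up to the tiny $\epsilon$:
```python
import numpy as np
def rescale_if(R, R_max):
    # conventional limiter: rescale only when R exceeds R_max
    return np.sqrt(R_max / R) if R > R_max else 1.0
def rescale_noif(R, R_max, eps=1e-100):
    # if-free version: sqrt(min(R, R_max) / (R + eps)), with min written via fabs
    min_R = 0.5 * (R + R_max - np.abs(R - R_max))
    return np.sqrt(min_R / (R + eps))
for R in [0.5, 0.9, 0.99]:
    print(rescale_if(R, 0.96), rescale_noif(R, 0.96))
```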
<a id='toc'></a>
# Table of Contents
$$\label{toc}$$
This notebook is organized as follows
1. [Step 1](#min_max): Minimum and Maximum
1. [Step 1.a](#confirm): Confirm that these work for real numbers
1. [Step 2](#piecewise): Piecewise-defined functions
1. [Step 3](#sympy): Rewrite functions to work with symbolic expressions
1. [Step 4](#validation): Validation against `Min_Max_and_Piecewise_Expressions` NRPy+ module
1. [Step 5](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file
<a id='min_max'></a>
# Step 1: Minimum and Maximum \[Back to [top](#toc)\]
$$\label{min_max}$$
Our first job will be to rewrite minimum and maximum functions without if statements. For example, the typical implementation of `min(a,b)` will be something like this:
```python
def min(a,b):
if a<b:
return a
else:
return b
```
However, to take full advantage of NRPy+'s automated function generation capabilities, we want to write this without the `if` statements, replacing them with calls to `fabs()`. We will define these functions in the following way:
$$\boxed{
\min(a,b) = \tfrac{1}{2} \left( a+b - \lvert a-b \rvert \right)\\
\max(a,b) = \tfrac{1}{2} \left( a+b + \lvert a-b \rvert \right).}
$$
<a id='confirm'></a>
## Step 1.a: Confirm that these work for real numbers \[Back to [top](#toc)\]
$$\label{confirm}$$
For real numbers, these operate exactly as expected. In the case $a>b$,
\begin{align}
\min(a,b) &= \tfrac{1}{2} \left( a+b - (a-b) \right) = b \\
\max(a,b) &= \tfrac{1}{2} \left( a+b + (a-b) \right) = a, \\
\end{align}
and in the case $a<b$, the reverse holds:
\begin{align}
\min(a,b) &= \tfrac{1}{2} \left( a+b - (b-a) \right) = a \\
\max(a,b) &= \tfrac{1}{2} \left( a+b + (b-a) \right) = b, \\
\end{align}
In code, we will represent this as:
```
min_noif(a,b) = sp.Rational(1,2)*(a+b-nrpyAbs(a-b))
max_noif(a,b) = sp.Rational(1,2)*(a+b+nrpyAbs(a-b))
```
For demonstration purposes, we will use `np.absolute()` and floating point numbers.
```python
import numpy as np # NumPy: Python module specializing in numerical computations
import matplotlib.pyplot as plt # matplotlib: Python module specializing in plotting capabilities
thismodule = "Min_Max_and_Piecewise_Expressions"
# First, we'll write the functions. Note that we are not using sympy right now. For NRPy+ code generation,
# use the expressions above.
def min_noif(a,b):
return 0.5 * (a+b-np.absolute(a-b))
def max_noif(a,b):
return 0.5 * (a+b+np.absolute(a-b))
# Now, let's put these through their paces.
a_number = 5.0
another_number = 10.0
print("The minimum of "+str(a_number)+" and "+str(another_number)+" is "+str(min_noif(a_number,another_number)))
```
The minimum of 5.0 and 10.0 is 5.0
Feel free to test other cases above if you'd like. Note that we use a suffix, `_noif`, to avoid conflicts with other functions. When using this in NRPy+, make sure you use `sp.Rational()` and the `nrpyAbs()` function, which will always be interpreted as the C function `fabs()` (Sympy's `sp.Abs()` may get interpreted as $\sqrt{zz^*}$, for instance).
<a id='piecewise'></a>
# Step 2: Piecewise-defined functions \[Back to [top](#toc)\]
$$\label{piecewise}$$
Next, we'll define functions to represent branches of a piecewise-defined function. For example, consider the function
\begin{align}
f(x) &= \left \{ \begin{array}{lll} \frac{1}{10}x^2+1 & \mbox{if} & x \leq 0 \\
\exp(\frac{x}{5}) & \mbox{if} & x > 0 \end{array} \right. , \\
\end{align}
which is continuous, but not differentiable at $x=0$.
To solve this problem, let's add the two parts together, multiplying each part by a function that is either one or zero depending on $x$. To define $x \leq 0$, this can be done by multiplying by the minimum of $x$ and $0$. We also will need to normalize this. To avoid putting a zero in the denominator, however, we will add some small $\epsilon$ to the denominator, i.e.,
$$
\frac{\min(x,0)}{x-\epsilon}
$$
This $\epsilon$ corresponds `TINYDOUBLE` in NRPy+; so, we will define the variable here with its default value, `1e-100`. Additionally, to get the correct behavior on the boundary, we shift the boundary by $\epsilon$, giving us
$$
\frac{\min(x-\epsilon,0)}{x-\epsilon}
$$
The corresponding expression for $x > 0$ can be written as
$$
\frac{\max(x,0)}{x+\epsilon},
$$
using a positive small number to once again avoid division by zero.
When using these for numerical relativity codes, it is important to consider the relationship between $\epsilon$, or `TINYDOUBLE`, and the gridpoints in the simulation. As long as $\epsilon$ is positive and large enough to avoid catastrophic cancellation, these functional forms avoid division by zero, as proven [below](#proof).
So, we'll code NumPy versions of these expressions below. Naturally, there are many circumstances in which one will want the boundary between two pieces of a function to be something other than 0; if we let that boundary be $x^*$, this can easily be done by passing $x-x^*$ to the maximum/minimum functions. For the sake of code readability, we will write the functions to pass $x$ and $x^*$ as separate arguments. Additionally, we code separate functions for $\leq$ and $<$, and likewise for $\geq$ and $>$. The "or equal to" versions add a small offset to the boundary to give the proper behavior on the desired boundary.
```python
TINYDOUBLE = 1.0e-100
def coord_leq_bound(x,xstar):
# Returns 1.0 if x <= xstar, 0.0 otherwise.
# Requires appropriately defined TINYDOUBLE
return min_noif(x-xstar-TINYDOUBLE,0.0)/(x-xstar-TINYDOUBLE)
def coord_geq_bound(x,xstar):
# Returns 1.0 if x >= xstar, 0.0 otherwise.
# Requires appropriately defined TINYDOUBLE
return max_noif(x-xstar+TINYDOUBLE,0.0)/(x-xstar+TINYDOUBLE)
def coord_less_bound(x,xstar):
# Returns 1.0 if x < xstar, 0.0 otherwise.
# Requires appropriately defined TINYDOUBLE
return min_noif(x-xstar,0.0)/(x-xstar-TINYDOUBLE)
def coord_greater_bound(x,xstar):
# Returns 1.0 if x > xstar, 0.0 otherwise.
# Requires appropriately defined TINYDOUBLE
return max_noif(x-xstar,0.0)/(x-xstar+TINYDOUBLE)
# Now, define our the equation and plot it.
x_data = np.arange(start = -10.0, stop = 11.0, step = 1.0)
y_data = coord_less_bound(x_data,0.0)*(0.1*x_data**2.0+1.0)\
+coord_geq_bound(x_data,0.0)*np.exp(x_data/5.0)
plt.figure()
a = plt.plot(x_data,y_data,'k',label="Piecewise function")
b = plt.plot(x_data,0.1*x_data**2.0+1.0,'b.',label="y=0.1*x^2+1")
c = plt.plot(x_data,np.exp(x_data/5.0),'g.',label="y=exp(x/5)")
plt.legend()
plt.xlabel("x")
plt.ylabel("y")
plt.show()
```
The plot above shows the expected piecewise-defined function. It is important in applying these functions that each greater-than be paired with a less-than-or-equal-to, or vice versa. Otherwise, the way these are written, a point on the boundary will be set to zero or twice the expected value.
These functions can be easily combined for more complicated piecewise-defined functions; if a piece of a function is defined as $f(x)$ on $x^*_- \leq x < x^*_+$, for instance, simply multiply by both functions, e.g.
```
coord_geq_bound(x,x_star_minus)*coord_less_bound(x,x_star_plus)*f(x)
```
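A quick sketch of this combination using the NumPy versions defined above (the function and bounds here are arbitrary choices for illustration):
```python
x = np.arange(-5.0, 6.0, 1.0)
x_star_minus, x_star_plus = -2.0, 3.0
# f(x) = x**2 on -2 <= x < 3, and zero elsewhere:
y = coord_geq_bound(x, x_star_minus)*coord_less_bound(x, x_star_plus)*x**2.0
print(y)  # nonzero only where -2 <= x < 3
```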
<a id='sympy'></a>
# Step 3: Rewrite functions to work with symbolic expressions \[Back to [top](#toc)\]
$$\label{sympy}$$
In order to use this with sympy expressions in NRPy+, we will need to rewrite the `min` and `max` functions with slightly different syntax. Critically, we will change `0.5` to `sp.Rational(1,2)` and calls to `np.absolute()` to `nrpyAbs()`. We will also need to import `outputC.py` here for access to `nrpyAbs()`. The other functions will not require redefinition, because they only call specific combinations of the `min` and `max` function.
In practice, we want to use `nrpyAbs()` and *not* `sp.Abs()` with our symbolic expressions, which will force `outputC` to use the C function `fabs()`, and not try to multiply the argument by its complex conjugate and then take the square root.
```python
from outputC import nrpyAbs # NRPy+: Core C code output module
def min_noif(a,b):
# Returns the minimum of a and b
if a==sp.sympify(0):
return sp.Rational(1,2) * (b-nrpyAbs(b))
if b==sp.sympify(0):
return sp.Rational(1,2) * (a-nrpyAbs(a))
return sp.Rational(1,2) * (a+b-nrpyAbs(a-b))
def max_noif(a,b):
# Returns the maximum of a and b
if a==sp.sympify(0):
return sp.Rational(1,2) * (b+nrpyAbs(b))
if b==sp.sympify(0):
return sp.Rational(1,2) * (a+nrpyAbs(a))
return sp.Rational(1,2) * (a+b+nrpyAbs(a-b))
```
<a id='validation'></a>
# Step 4: Validation against `Min_Max_and_Piecewise_Expressions` NRPy+ module \[Back to [top](#toc)\]
$$\label{validation}$$
As a code validation check, we will verify agreement in the SymPy expressions for the minimum, maximum, and piecewise-boundary functions between
1. this tutorial and
2. the NRPy+ [Min_Max_and_Piecewise_Expressions](../edit/Min_Max_and_Piecewise_Expressions.py) module.
```python
# Reset & redefine TINYDOUBLE for proper comparison
%reset_selective -f TINYDOUBLE
import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends
import NRPy_param_funcs as par # NRPy+: parameter interface
TINYDOUBLE = par.Cparameters("REAL", thismodule, "TINYDOUBLE", 1e-100)
import Min_Max_and_Piecewise_Expressions as noif
all_passed=0
def comp_func(expr1,expr2,basename,prefixname2="noif."):
passed = 0
if str(expr1-expr2)!="0":
print(basename+" - "+prefixname2+basename+" = "+ str(expr1-expr2))
passed = 1
return passed
a,b = sp.symbols("a b")
here = min_noif(a,b)
there = noif.min_noif(a,b)
all_passed += comp_func(here,there,"min_noif")
here = max_noif(a,b)
there = noif.max_noif(a,b)
all_passed += comp_func(here,there,"max_noif")
here = coord_leq_bound(a,b)
there = noif.coord_leq_bound(a,b)
all_passed += comp_func(here,there,"coord_leq_bound")
here = coord_geq_bound(a,b)
there = noif.coord_geq_bound(a,b)
all_passed += comp_func(here,there,"coord_geq_bound")
here = coord_less_bound(a,b)
there = noif.coord_less_bound(a,b)
all_passed += comp_func(here,there,"coord_less_bound")
here = coord_greater_bound(a,b)
there = noif.coord_greater_bound(a,b)
all_passed += comp_func(here,there,"coord_greater_bound")
import sys
if all_passed==0:
print("ALL TESTS PASSED!")
else:
print("ERROR: AT LEAST ONE TEST DID NOT PASS")
sys.exit(1)
```
ALL TESTS PASSED!
<a id='latex_pdf_output'></a>
# Step 5: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
$$\label{latex_pdf_output}$$
The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
[Tutorial-Min_Max_and_Piecewise_Expressions.pdf](Tutorial-Min_Max_and_Piecewise_Expressions.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
```python
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-Min_Max_and_Piecewise_Expressions")
```
Created Tutorial-Min_Max_and_Piecewise_Expressions.tex, and compiled LaTeX
file to PDF file Tutorial-Min_Max_and_Piecewise_Expressions.pdf
---
layout: post
title: "Bayesian Sample Selection Effects"
desc: "Sample bias and selection effects are the worst. Here's one solution."
date: 2019-07-30
categories: [tutorial]
tags: [bayesian]
loc: 'tutorials/sampleselectionbias/'
permalink: /tutorials/sample_selection
math: true
---
!!!replace
In a perfect world our experiments would capture all the data that exists. This is not a perfect world, and we miss a lot of data. Let's consider one method of accounting for this in a Bayesian formalism - integrating it out.
Let's begin with a motivational dataset.
```python
# Remove
import matplotlib.pyplot as plt
from base import *
from cycler import cycler
# plt.rcParams['axes.prop_cycle'] = (cycler(color=['#56d870', '#f9ee4a', '#44d9ff', '#f95b4a', '#3d9fe2', '#ffa847', '#c4ef7a', '#e195e2', '#ced9ed', '#fff29b']) + cycler(linestyle=['-', '--', ':', '-.', '-', '--', ':', '-.', '-', '--']))
#plt.rcParams['axes.prop_cycle'] = (cycler(color=['#003049', '#D62828', '#F77F00', '#FCBF49', '#EAE2B7']) + cycler(linestyle=['-', '--', ':', '-.', '-']))
```
```python
import matplotlib.pyplot as plt
import numpy as np
n = 10000
alpha = 85
mu, sigma = 100, 10
x = np.random.normal(mu, sigma, size=n)
mask = x > alpha
x_good = x[mask]
fig, ax = plt.subplots(figsize=(7,4))
ax.hist(x_good, label="Observed data", color='#F77F00', alpha=0.3, density=True)
ax.hist(x_good, histtype="step", color='#F77F00', linewidth=1.5, density=True)
xs = np.linspace(70, 140, 1000)
ax.axvline(alpha, ls="--", lw=1.0, label="Instrument cutoff", color='#D62828')
from scipy.stats import norm
ax.plot(xs, norm(mu, sigma).pdf(xs), label="All data", lw=1.8)
leg = ax.legend(frameon=False, loc=1)
for lh in leg.legendHandles:
lh.set_alpha(1)
ax.set_xlabel("x")
ax.set_ylabel("Probability");
```
!!!main
So it looks like for our example data, we've got some Gaussian-like distribution of $x$ observations, but at some point it seems like our instrument is unable to pick up the observations. Maybe it's brightness and they're too dim! Or maybe something else, who knows! But regardless, we can work with this.
To start at the beginning, let's write out the full formula for our posterior:
$$ P(\theta|d) = \frac{P(d|\theta) P(\theta)}{P(d)} $$
Everything looks good, so let's home in here on the likelihood. Now, what we should also do to make our lives easier is write out the fact that we have some sort of *selection effect* applying to our model. That is, some separate probability that dictates whether our experiment successfully observes an event which actually happened.
$$ P(d|\theta) \rightarrow P(d|\theta, S), $$
where $S$ in colloquial English represents "we successfully observed the event". Now, we can normally write our selection probability given data or model easily, so we want to get things into a state where we have $P(S\|d,\theta)$. Via some probability manipulation we can twist this around and get the following:
$$ P(d|\theta, S) = \frac{P(S|d,\theta) P(d|\theta)}{P(S|\theta)} $$
So let's break that down. Our likelihood given our model and our selection effects is given by the chance we observed our experimental data (which is going to be $1$ for deterministic processes, given we **have** observed it already) multiplied by our standard likelihood where we ignore selection effects, divided by the chance of observing data in general at that point in parameter space.
Now the denominator here cannot be evaluated in its current state; we need to introduce an integral over all possible data (denoted $D$) so that we can apply the selection effect to it.
$$ P(d|\theta, S) = \frac{P(S|d,\theta) P(d|\theta)}{\int P(S|D, \theta) P(D|\theta) dD} $$
## The Simplest Gaussian Example
Let's assume we have data samples of some observable $d$. In our model, $d$ is drawn from a normal distribution such that we wish to characterise two model parameters describing said normal - the mean $\mu$ and the standard deviation $\sigma$.
$$ P(d|\mu, \sigma) = \mathcal{N}(d|\mu, \sigma) $$
Now let's imagine the case as described in our data where we can only observe some value when its above a threshold of $\alpha$. Or more formally,
$$P(S|d, \theta) = \mathcal{H}(d-\alpha),$$
where $\mathcal{H}$ is the Heaviside step function:
$$ \mathcal{H}(y)\equiv \begin{cases}
1 \quad {\rm if }\ y \ge 0 \\
0 \quad {\rm otherwise.}
\end{cases} $$
You can see that this selection probability is deterministic, so any data we did observe we observed with a probability of one. This simplifies the numerator, since $P(S\|d,\theta) = 1$, which just leaves us with the denominator:
$$ \int P(S|D, \theta) P(D|\mu, \sigma) dD = \int \mathcal{H}(D-\alpha) \mathcal{N}(D|\mu, \sigma) dD $$
And because the Heaviside step function sets the integrand to zero for all $D<\alpha$, we can simply modify the bounds of the integral and do this analytically:
$$\begin{align}
\int \mathcal{H}(D-\alpha) \mathcal{N}(D|\mu, \sigma) dD &= \int_{\alpha}^\infty \mathcal{N}(D|\mu, \sigma)\, dD \\ &= \frac{1}{2} {\rm erfc}\left[ \frac{\alpha - \mu}{\sqrt{2}\sigma} \right]
\end{align}$$
So this means we can throw this denominator back into our full expression for the likelihood:
$$ P(d|\mu, \sigma, S) = \frac{2\mathcal{N}(d|\mu, \sigma)}{ {\rm erfc}\left[ \frac{\alpha - \mu}{\sqrt{2}\sigma} \right]} $$
Finally, we should note this likelihood (and correction) is for one data point. If we had a hundred data points, we'd do this multiplicatively for each $d$.
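For completeness (this expression is implied rather than written out above), with $N$ independent observations $\{d_1, \ldots, d_N\}$ the corrected likelihood is just the product of the single-point terms,
$$ P(\{d_i\}|\mu, \sigma, S) = \prod_{i=1}^{N} \frac{2\,\mathcal{N}(d_i|\mu, \sigma)}{ {\rm erfc}\left[ \frac{\alpha - \mu}{\sqrt{2}\sigma} \right]}, $$
which is why the code below multiplies the log of the correction by `data.size`.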
## Verifying the selection-corrected likelihood
To do this, let's first generate a dataset, create a model, fit it with `emcee` and verify that our estimations of $\mu$ and $\sigma$ are unbiased. First, the dataset.
```python
import numpy as np
np.random.seed(3)
mu, sigma, alpha, num_points = 100, 10, 85, 1000
d_all = np.random.normal(mu, sigma, size=num_points)
d = d_all[d_all > alpha]
```
This is the same code used to generate the plot you saw up the top, just with fewer data points! Let's create our model, one with the sample selection correction and one without. Remember, we work in log space for probability, so don't get tripped up when reading the code implementation of the math above.
```python
from scipy.stats import norm
from scipy.special import erfc
def uncorrected_likelihood(xs, data):
mu, sigma = xs
if sigma < 0:
return -np.inf
return norm(mu, sigma).logpdf(data).sum()
def corrected_likelihood(xs, data):
mu, sigma = xs
if sigma < 0:
return -np.inf
correction = data.size * np.log(0.5 * erfc((alpha - mu)/(np.sqrt(2) * sigma)))
return uncorrected_likelihood(xs, data) - correction
```
Note here that I've been cheeky and included flat priors and a prior boundary to keep $\sigma$ positive in the likelihood, which means I should really call it the posterior, but let's not get bogged down in semantics.
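If you would rather keep the naming strict, one way to split the prior out explicitly looks like the following (a minimal sketch, not from the original post; the flat prior and the positivity bound on $\sigma$ are the same assumptions as above):
```python
def log_prior(xs):
    mu, sigma = xs
    # Flat prior, with the same positivity requirement on sigma as above
    return 0.0 if sigma > 0 else -np.inf

def log_posterior(xs, data):
    lp = log_prior(xs)
    if not np.isfinite(lp):
        return -np.inf
    # corrected_likelihood still carries its own sigma check, which is now redundant
    return lp + corrected_likelihood(xs, data)
```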
With that, our model is fully defined. We can now try and fit it to the data to see how we go.
### Model Fitting
Let's use my go-to solution, `emcee`, to sample our likelihood given our dataset, and `ChainConsumer` to take those samples and turn them into handy plots. If you want more details check out the [Bayesian Linear Regression tutorial](/tutorial/2019/07/27/BayesianLinearRegression.html) for implementation details.
```python
import emcee
ndim = 2
nwalkers = 50
p0 = np.array([95, 0]) + np.random.uniform(low=1, high=10, size=(nwalkers, ndim))
results = {}
functions = [corrected_likelihood, uncorrected_likelihood]
names = ["Corrected Likelihood", "Uncorrected Likelihood"]
for fn, name in zip(functions, names):
sampler = emcee.EnsembleSampler(nwalkers, ndim, fn, args=[d])
state = sampler.run_mcmc(p0, 2000)
chain = sampler.chain[:, 300:, :]
flat_chain = chain.reshape((-1, ndim))
results[name] = flat_chain
```
And now to plot these samples:
```python
from chainconsumer import ChainConsumer
c = ChainConsumer()
for name, flat_chain in results.items():
c.add_chain(flat_chain, parameters=["$\mu$", "$\sigma$"], name=name)
c.configure()
c.configure_truth(color='w') ### REMOVE
c.plotter.plot(truth=[mu, sigma], figsize=2.0);
```
Hopefully you can now see how the correction we applied to the likelihood unbiases its estimations.
You'll notice that the contours don't sit perfectly on the true value, but if we made a hundred realisations of the data and averaged out the contour positions, you'd see they would. [In fact, you can see it right here](https://arxiv.org/abs/1706.03856).
Thinking about the problem in this way allows us to neatly separate out the selection effects and generic likelihood so we can treat them independently. Of course, when you get past the point where analytic approximations to your selection effects aren't good enough, you can expect a good numerical hit where you have to numerically compute the correction.
But one thing that is *correct* about this approach that a lot of other approaches miss (such as adding bias corrections to your data) is that the *correction* is dependent on where you are in parameter space. And this should make sense conceptually - the correction is just answering the question "How efficient are we *in general* given our current model parametrisation". If we've characterised that our instrument cannot detect $d < 85$, we expect to lose more events if the population mean is close to $85$ and fewer events if the population mean is at $200$.
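To put a number on that intuition, here is a tiny sketch (not part of the original post) evaluating the expected selection efficiency $\frac{1}{2}{\rm erfc}\left[\frac{\alpha - \mu}{\sqrt{2}\sigma}\right]$ for a few population means, with the same $\alpha = 85$ and $\sigma = 10$ as above:
```python
from scipy.special import erfc

alpha, sigma = 85, 10
for mu in (85, 100, 200):
    # Expected fraction of events that survive the instrument cutoff
    efficiency = 0.5 * erfc((alpha - mu) / (np.sqrt(2) * sigma))
    print(f"mu = {mu:3d}: expected fraction of events observed = {efficiency:.3f}")
```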
| 4e5253a56d7521539b573c54339dcdbab348b5d9 | 555,278 | ipynb | Jupyter Notebook | notebooks/2019-07-30-SampleSelectionBias.ipynb | Samreay/samreay | 0f314ff4a9dcd8c6bbba93f4cdbc448e0b8eb4fd | ["MIT"] | null | null | null | notebooks/2019-07-30-SampleSelectionBias.ipynb | Samreay/samreay | 0f314ff4a9dcd8c6bbba93f4cdbc448e0b8eb4fd | ["MIT"] | 3 | 2020-02-24T20:26:23.000Z | 2020-05-23T11:45:07.000Z | notebooks/2019-07-30-SampleSelectionBias.ipynb | Samreay/samreay | 0f314ff4a9dcd8c6bbba93f4cdbc448e0b8eb4fd | ["MIT"] | null | null | null | 1,030.200371 | 311,200 | 0.949303 | true | 2,513 | Qwen/Qwen-72B | 1. YES 2. YES | 0.754915 | 0.812867 | 0.613646 | __label__eng_Latn | 0.992344 | 0.264035 |
```python
import gurobipy as gp
from gurobipy import GRB
```
# Max-Flow Min-Cut
The maximum flow minimum cut problem holds a special place in the history of optimization theory. We will first model the problems as Linear Programs and use the results to discuss some somewhat surprising results.
The problems consider a directed graph consisting of a set of nodes and a set of labeled arcs. The arc labels are non-negative values representing a notion of capacity for the arc. In the node set, there exists a source node $s$ and a terminal node $t$. The amount of flow into one of the intermediary nodes must equal the amount of flow out of the node, i.e., flow is conserved.
The maximum flow question asks: what is the maximum flow that can be transferred from the source to the sink? The minimum cut question asks: which subset of arcs, once removed, would disconnect the source node from the terminal node while having the minimum sum of capacities? For example, removing arcs $(C, t)$ and $(D, t)$ from the network below would mean there is no longer a path from $s$ to $t$, and the sum of the capacities of these arcs is $140 + 90 = 230$. It is reasonably straightforward to find a better cut, i.e., a subset of arcs with a sum of capacities less than 230.
A complete model is provided for the maximum flow problem, whereas the minimum cut problem is left as a challenge to the reader.
### Notation
Let's represent the set of nodes and arcs as below
| index | Set | Description |
|:---------|:--------|:--------------|
| $i$ | $V$ | Set of nodes |
| $(i,j)$ | $A$ | Set of arcs |
Additionally we can define the capacity of the arcs as follows
| Parameter | Description |
|:---------|:--------|
| $c_{i,j}$ | Capacity of arc $(i,j) \in A$ |
```python
# Programmatically we can define the problem data as
nodes = ['s', 'A', 'B', 'C', 'D', 't']
capacity = {
('s', 'A'): 100,
('s', 'B'): 150,
('A', 'B'): 120,
('A', 'C'): 90,
('B', 'D'): 110,
('C', 'D'): 120,
('C', 't'): 140,
('D', 't'): 90,
}
arcs = capacity.keys()
```
## Maximum Flow
First, let's consider the problem of calculating the maximum flow that can pass through the network.
### Variables
The variables we will use are as follows
| Variable | Type | Description |
|:---------|:--------| :----- |
| $f_{i,j}$ | Continuous | Flow along arc $(i,j) \in A$ |
### Model
A model of the problem can then be defined as follows:
$$
\begin{align}
\text{maximise} \ & \sum_{j \in V: (s,j) \in A} f_{s, j} && & \quad (1a) \label{model-obj}\\
s.t. \ & f_{i, j} \leq c_{i,j} \quad && \forall (i, j) \in A & \quad (1b) \label{m1-c1}\\
& \sum_{i \in V: (i,j) \in A} f_{i, j} - \sum_{k \in V: (j, k) \in A} f_{j,k} = 0 \quad && \forall j \in V \setminus \{s, t\} & \quad (1c) \label{m2-c2}
\end{align}
$$
The objective (1a) is to maximise the sum of flow leaving the source node $s$. Constraints (1b) ensure that the flow in each arc does not exceed the capacity of that arc. Constraints (1c) are continuity constraints, which ensure that the flow into each of the nodes, excluding the source and sink, is equal to the flow out of that node.
```python
# A function that takes a set of nodes, arcs, and capacities, creates a model and optimises can be defined as follows:
def max_flow(nodes, arcs, capacity):
# Create optimization model
m = gp.Model('flow')
# Create variables
flow = m.addVars(arcs, obj=1, name="flow")
# Objective
m.setObjective(gp.quicksum(var for (i, j), var in flow.items() if i == "s"), sense=-1)
# Arc-capacity constraints
m.addConstrs(
(flow.sum(i, j) <= capacity[i, j] for i, j in arcs), "cap")
# Flow-conservation constraints
m.addConstrs(
(flow.sum(j, '*') == flow.sum('*', j)
for j in nodes if j not in ('s', 't')), "node")
# Compute optimal solution
m.optimize()
# Print solution
if m.status == GRB.OPTIMAL:
solution = m.getAttr('x', flow)
print('\nOptimal flows')
for i, j in arcs:
if solution[i, j] > 0:
print('%s -> %s: %g' % (i, j, solution[i, j]))
```
```python
max_flow(nodes, arcs, capacity)
```
Using license file /Library/gurobi/gurobi.lic
Gurobi Optimizer version 9.1.1 build v9.1.1rc0 (mac64)
Thread count: 4 physical cores, 8 logical processors, using up to 8 threads
Optimize a model with 12 rows, 8 columns and 20 nonzeros
Model fingerprint: 0xfa101f8c
Coefficient statistics:
Matrix range [1e+00, 1e+00]
Objective range [1e+00, 1e+00]
Bounds range [0e+00, 0e+00]
RHS range [9e+01, 2e+02]
Presolve removed 12 rows and 8 columns
Presolve time: 0.01s
Presolve: All rows and columns removed
Iteration Objective Primal Inf. Dual Inf. Time
0 1.8000000e+02 0.000000e+00 0.000000e+00 0s
Solved in 0 iterations and 0.01 seconds
Optimal objective 1.800000000e+02
Optimal flows
s -> A: 90
s -> B: 90
A -> C: 90
B -> D: 90
C -> t: 90
D -> t: 90
## Minimum Cut
Next, let's consider the problem of determining the minimum cut.
### Variables
The variables used are as follows
| Variable | Type | Description |
|:---------|:--------| :----- |
| $r_{i,j}$ | Continuous | 1 if arc $(i,j) \in A$ is removed |
| $z_{i}$ | Continuous | 1 if node $i \in V \setminus \{s,t\}$ remains connected to the source |
### Model
A model of the problem can then be defined as follows:
$$
\begin{alignat}{3}
\text{minimize} \ & \sum_{(i,j) \in A} c_{i,j} \cdot r_{i,j} && & (2a) \\
s.t.\ & r_{s,j} + z_j \geq 1 \quad && \forall j \in V : (s, j) \in A & \quad (2b)\\
& r_{i,j} + z_j \geq z_i \quad && \forall (i, j) \in A: i \neq s \text{ and } j \neq t & \quad (2c) \\
& r_{i,t} - z_i \geq 0 \quad && \forall i \in V : (i, t) \in A & \quad (2d)
\end{alignat}
$$
The objective (2a) is to minimise the sum of capacities of the arcs that are removed from the network. Constraints (2b) ensure that, for each arc leaving the source node, either the adjacent node $j$ remains connected to the source, i.e., $z_j = 1$, or the arc is removed, $r_{s, j} = 1$. Constraints (2c) ensure that, for each arc whose endpoints are neither the source nor the sink, if the predecessor is connected, $z_i = 1$, then either the successor is connected, $z_j = 1$, or the arc is removed, $r_{i,j} = 1$. Finally, constraints (2d) ensure that, for any arc adjacent to the sink node, if the predecessor is connected, $z_i = 1$, then the arc must be removed, $r_{i,t}=1$.
```python
def min_cut(nodes, arcs, capacity):
# Create optimization model
m = gp.Model('cut')
m.ModelSense = 1
# Create variables
remove = m.addVars(arcs, vtype=GRB.CONTINUOUS, obj=capacity, name="r_")
connect = m.addVars((i for i in nodes if i not in ("s", "t")), name="z_", vtype=GRB.CONTINUOUS)
# Arc-capacity constraints
for (i, j) in arcs:
if i == "s":
m.addConstr(remove["s", j] + connect[j] >= 1)
elif j == "t":
m.addConstr(remove[i, "t"] - connect[i] >= 0)
else:
m.addConstr(remove[i, j] + connect[j] - connect[i] >= 0)
# Compute optimal solution
m.optimize()
# Print solution
if m.status == GRB.OPTIMAL:
solution = m.getAttr('x', remove)
print('\nOptimal cuts')
for i, j in arcs:
if solution[i, j] > 0.5:
print('%s -> %s: %g' % (i, j, capacity[i, j]))
```
```python
min_cut(nodes, arcs, capacity)
```
Gurobi Optimizer version 9.1.1 build v9.1.1rc0 (mac64)
Thread count: 4 physical cores, 8 logical processors, using up to 8 threads
Optimize a model with 8 rows, 12 columns and 20 nonzeros
Model fingerprint: 0x28bf68bd
Coefficient statistics:
Matrix range [1e+00, 1e+00]
Objective range [9e+01, 2e+02]
Bounds range [0e+00, 0e+00]
RHS range [1e+00, 1e+00]
Presolve removed 6 rows and 9 columns
Presolve time: 0.01s
Presolved: 2 rows, 3 columns, 4 nonzeros
Iteration Objective Primal Inf. Dual Inf. Time
0 9.0000000e+01 1.000000e+00 0.000000e+00 0s
1 1.8000000e+02 0.000000e+00 0.000000e+00 0s
Solved in 1 iterations and 0.01 seconds
Optimal objective 1.800000000e+02
Optimal cuts
A -> C: 90
D -> t: 90
### Questions
* How do the number of variables of one model compare with the number of constraints of the other?
* How do the optimal solutions compare?
* How do the objective values of feasible solutions to the max flow problem compare with those of the min cut problem?
* Why are the $r$ and $z$ variables continuous when a cut is clearly a binary operation?
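If you want a numerical hint for the first three questions (this snippet is a sketch, not part of the original exercise), one option is to read off the dual values (`Pi`) of the capacity constraints in the max-flow LP: by LP duality, the arcs with non-zero duals identify a minimum cut and the two objective values coincide.
```python
def max_flow_duals(nodes, arcs, capacity):
    # Sketch: rebuild the max-flow LP and report the duals (Pi) of the capacity
    # constraints; the arcs with non-zero duals should match the min-cut model above.
    m = gp.Model('flow-duals')
    flow = m.addVars(arcs, name="flow")
    m.setObjective(gp.quicksum(var for (i, j), var in flow.items() if i == "s"), sense=-1)
    cap = m.addConstrs((flow[i, j] <= capacity[i, j] for i, j in arcs), "cap")
    m.addConstrs((flow.sum(j, '*') == flow.sum('*', j)
                  for j in nodes if j not in ('s', 't')), "node")
    m.optimize()
    for (i, j), constr in cap.items():
        if abs(constr.Pi) > 1e-6:
            print('%s -> %s: dual %g, capacity %g' % (i, j, constr.Pi, capacity[i, j]))

max_flow_duals(nodes, arcs, capacity)
```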
| 4df2b12ff816be13a4808ab92ec8903849d66096 | 12,794 | ipynb | Jupyter Notebook | week2/Max Flow + Min Cut.ipynb | stevedwards/gurobi_course | badb716d5dac86a77712908637cbc08722d0415d | ["MIT"] | null | null | null | week2/Max Flow + Min Cut.ipynb | stevedwards/gurobi_course | badb716d5dac86a77712908637cbc08722d0415d | ["MIT"] | null | null | null | week2/Max Flow + Min Cut.ipynb | stevedwards/gurobi_course | badb716d5dac86a77712908637cbc08722d0415d | ["MIT"] | null | null | null | 36.659026 | 676 | 0.521494 | true | 2,749 | Qwen/Qwen-72B | 1. YES 2. YES | 0.921922 | 0.914901 | 0.843467 | __label__eng_Latn | 0.983428 | 0.79799 |
# Some manipulations on (Kahraman, 1994)
[1] A. Kahraman, "Natural Modes of Planetary Gear Trains", Journal of Sound and Vibration, vol. 173, no. 1, pp. 125-130, 1994. https://doi.org/10.1006/jsvi.1994.1222.
```python
from sympy import *
init_printing()
def symb(x,y):
return symbols('{0}_{1}'.format(x,y), type = float)
```
# Displacement vector:
```python
n = 3 # number of planets
N = n + 3 # number of degrees of freedom
crs = ['c', 'r', 's'] # carrier, ring, sun
pla = ['p{}'.format(idx + 1) for idx in range(n)] # planet
crs = crs + pla # put them together
coeff_list = symbols(crs)
c = coeff_list[0]
r = coeff_list[1]
s = coeff_list[2]
X = Matrix([symb('u', v) for v in coeff_list])
coeff_list[3:] = symbols(['p']*n)
p = coeff_list[3]
X.transpose() # Eq. (1a)
```
$\displaystyle \left[\begin{matrix}u_{c} & u_{r} & u_{s} & u_{p1} & u_{p2} & u_{p3}\end{matrix}\right]$
## Stiffness matrix:
where:
* $k_1$: mesh stiffness for the ring-planet gear pair
* $k_2$: mesh stiffness for the sun-planet gear pair
* $k_c$: carrier housing stiffness
* $k_r$: ring housing stiffness
* $k_s$: sun housing stiffness
* Diagonal 1, in red
* Diagonal 2, in grey
* Off-diagonal, in blue
```python
k_1, k_2, k_c, k_r, k_s = symbols('k_1 k_2 k_c k_r k_s', type = float)
# Diagonal 1:
K_d1 = zeros(3, 3)
K_d1[0, 0] = n*(k_1 + k_2) + k_c
K_d1[1, 1] = n* k_1 + k_r
K_d1[2, 2] = n* k_2 + k_s
K_d1[0, 1] = K_d1[1, 0] = -n*k_1
K_d1[0, 2] = K_d1[2, 0] = -n*k_2
# Diagonal 2:
K_d2 = eye(n)*(k_1 + k_2)
# Off diagonal:
K_od = zeros(n, n)
K_od[:, 0] = (k_1 - k_2)*ones(n, 1)
K_od[:, 1] = -k_1 *ones(n, 1)
K_od[:, 2] = k_2 *ones(n, 1)
K = BlockMatrix([[K_d1, K_od.transpose()],
[K_od, K_d2]])
K = Matrix(K)
if(not K.is_symmetric()):
print('error.')
K
```
$\displaystyle \left[\begin{matrix}3 k_{1} + 3 k_{2} + k_{c} & - 3 k_{1} & - 3 k_{2} & k_{1} - k_{2} & k_{1} - k_{2} & k_{1} - k_{2}\\- 3 k_{1} & 3 k_{1} + k_{r} & 0 & - k_{1} & - k_{1} & - k_{1}\\- 3 k_{2} & 0 & 3 k_{2} + k_{s} & k_{2} & k_{2} & k_{2}\\k_{1} - k_{2} & - k_{1} & k_{2} & k_{1} + k_{2} & 0 & 0\\k_{1} - k_{2} & - k_{1} & k_{2} & 0 & k_{1} + k_{2} & 0\\k_{1} - k_{2} & - k_{1} & k_{2} & 0 & 0 & k_{1} + k_{2}\end{matrix}\right]$
## Inertia matrix:
```python
M = diag(*[symb('m', v) for v in coeff_list])
M
```
$\displaystyle \left[\begin{matrix}m_{c} & 0 & 0 & 0 & 0 & 0\\0 & m_{r} & 0 & 0 & 0 & 0\\0 & 0 & m_{s} & 0 & 0 & 0\\0 & 0 & 0 & m_{p} & 0 & 0\\0 & 0 & 0 & 0 & m_{p} & 0\\0 & 0 & 0 & 0 & 0 & m_{p}\end{matrix}\right]$
## Remove ring degree of freedom
```python
X.row_del(1)
K.row_del(1)
K.col_del(1)
M.row_del(1)
M.col_del(1)
coeff_list.remove(r)
N = N - 1
```
## Coordinate transformation:
First from translational to torsional coordinates, then moving the sun DOF to the last position, which makes it easier to assemble a multi-stage gearbox.
```python
R_1 = diag(*[symb('r', v) for v in coeff_list])
R_1
```
$\displaystyle \left[\begin{matrix}r_{c} & 0 & 0 & 0 & 0\\0 & r_{s} & 0 & 0 & 0\\0 & 0 & r_{p} & 0 & 0\\0 & 0 & 0 & r_{p} & 0\\0 & 0 & 0 & 0 & r_{p}\end{matrix}\right]$
Making the sun DOF the last one:
```python
N1 = N - 1
R_2 = zeros(N, N)
R_2[0, 0] = 1
R_2[1, N1] = 1
R_2[2:N, 1:N1] = eye(n)
R_2
```
$\displaystyle \left[\begin{matrix}1 & 0 & 0 & 0 & 0\\0 & 0 & 0 & 0 & 1\\0 & 1 & 0 & 0 & 0\\0 & 0 & 1 & 0 & 0\\0 & 0 & 0 & 1 & 0\end{matrix}\right]$
```python
R = R_1*R_2
RMR = lambda m: transpose(R)*m*R
```
### Inertia matrix
```python
M = RMR(M)
if(not M.is_symmetric()):
print('error in M matrix')
M
```
$\displaystyle \left[\begin{matrix}m_{c} r_{c}^{2} & 0 & 0 & 0 & 0\\0 & m_{p} r_{p}^{2} & 0 & 0 & 0\\0 & 0 & m_{p} r_{p}^{2} & 0 & 0\\0 & 0 & 0 & m_{p} r_{p}^{2} & 0\\0 & 0 & 0 & 0 & m_{s} r_{s}^{2}\end{matrix}\right]$
### Stiffness matrix
```python
K = RMR(K)
if(not K.is_symmetric()):
print('error in K matrix')
```
The housing stiffnesses for both the carrier and the sun are null:
```python
K = K.subs([(k_c, 0), (k_s, 0)])
K
```
$\displaystyle \left[\begin{matrix}r_{c}^{2} \left(3 k_{1} + 3 k_{2}\right) & r_{c} r_{p} \left(k_{1} - k_{2}\right) & r_{c} r_{p} \left(k_{1} - k_{2}\right) & r_{c} r_{p} \left(k_{1} - k_{2}\right) & - 3 k_{2} r_{c} r_{s}\\r_{c} r_{p} \left(k_{1} - k_{2}\right) & r_{p}^{2} \left(k_{1} + k_{2}\right) & 0 & 0 & k_{2} r_{p} r_{s}\\r_{c} r_{p} \left(k_{1} - k_{2}\right) & 0 & r_{p}^{2} \left(k_{1} + k_{2}\right) & 0 & k_{2} r_{p} r_{s}\\r_{c} r_{p} \left(k_{1} - k_{2}\right) & 0 & 0 & r_{p}^{2} \left(k_{1} + k_{2}\right) & k_{2} r_{p} r_{s}\\- 3 k_{2} r_{c} r_{s} & k_{2} r_{p} r_{s} & k_{2} r_{p} r_{s} & k_{2} r_{p} r_{s} & 3 k_{2} r_{s}^{2}\end{matrix}\right]$
From that, one can write the matrices for a planetary system with $n$-planets using the following code:
```python
m_c, m_s, m_p, r_c, r_s, r_p = symbols('m_c m_s m_p r_c r_s r_p', type = float)
M_p = zeros(N, N)
M_p[0, 0] = m_c*r_c**2
M_p[N1, N1] = m_s*r_s**2
M_p[1:N1, 1:N1] = m_p*r_p**2 * eye(n)
K_p = zeros(N, N)
K_p[0, 0] = n*(k_1 + k_2)*r_c**2
K_p[N1, 0] = -n*k_2*r_s*r_c
K_p[0, N1] = -n*k_2*r_s*r_c
K_p[N1, N1] = n*k_2*r_s**2
K_p[0, 1:N1] = (k_1 - k_2)*r_c*r_p*ones(1, n)
K_p[1:N1, 0] = (k_1 - k_2)*r_c*r_p*ones(n, 1)
K_p[N1, 1:N1] = k_2*r_p*r_s*ones(1, n)
K_p[1:N1, N1] = k_2*r_p*r_s*ones(n, 1)
K_p[1:N1, 1:N1] = (k_1 + k_2)*r_p**2 * eye(n)
m_diff = abs(matrix2numpy(simplify(M_p - M))).sum()
k_diff = abs(matrix2numpy(simplify(K_p - K))).sum()
if(m_diff != 0.0):
print('Error in M matrix.')
if(k_diff != 0.0):
print('Error in K matrix.')
```
## Combining planet DOFs:
```python
C = zeros(N, 3)
C[ 0, 0] = 1
C[ N1, 2] = 1
C[1:N1, 1] = ones(n, 1)
CMC = lambda m: transpose(C)*m*C
```
### Inertia matrix
```python
M_C = CMC(M)
if(not M_C.is_symmetric()):
print('error in M_C matrix')
M_C
```
$\displaystyle \left[\begin{matrix}m_{c} r_{c}^{2} & 0 & 0\\0 & 3 m_{p} r_{p}^{2} & 0\\0 & 0 & m_{s} r_{s}^{2}\end{matrix}\right]$
### Stiffness matrix
```python
K_C = CMC(K)
if(not K_C.is_symmetric()):
    print('error in K_C matrix')
K_C
```
$\displaystyle \left[\begin{matrix}r_{c}^{2} \left(3 k_{1} + 3 k_{2}\right) & 3 r_{c} r_{p} \left(k_{1} - k_{2}\right) & - 3 k_{2} r_{c} r_{s}\\3 r_{c} r_{p} \left(k_{1} - k_{2}\right) & 3 r_{p}^{2} \left(k_{1} + k_{2}\right) & 3 k_{2} r_{p} r_{s}\\- 3 k_{2} r_{c} r_{s} & 3 k_{2} r_{p} r_{s} & 3 k_{2} r_{s}^{2}\end{matrix}\right]$
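Since the end goal in [1] is a modal analysis, it may help to note that the natural frequencies follow from $\det\left(K - \omega^2 M\right) = 0$. A short symbolic sketch (an additional step, not part of the original derivation) using the combined matrices:
```python
# Sketch (additional step): characteristic polynomial of the reduced model;
# its roots in omega**2 give the squared natural frequencies.
omega = symbols('omega', positive=True)
char_poly = simplify((K_C - omega**2*M_C).det())
```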
## Adapting it to a parallel gear set
Considering only one of the sun-planet pairs, one should change the sub-indices in the following way:
* [p]lanet => [w]heel
* [s]un => [p]inion.
It is also necessary to remove the mesh stiffness of the ring-planet pair.
### Inertia matrix
```python
k, w, p = symbols('k w p', type = float)
m_w, m_p, r_w, r_p = symbols('m_w m_p r_w r_p', type = float)
N2 = N - 2
M_par = M[N2:, N2:]
M_par = M_par.subs([(m_p, m_w), (m_s, m_p), (r_p, r_w), (r_s, r_p)]) #
M_par
```
$\displaystyle \left[\begin{matrix}m_{w} r_{w}^{2} & 0\\0 & m_{p} r_{p}^{2}\end{matrix}\right]$
### Stiffness matrix
```python
K_par = K[N2:, N2:]
K_par = K_par.subs(k_1, 0) # ring-planet mesh stiffness
K_par = K_par.subs(k_s, 0) # sun's bearing stiffness
K_par = K_par.subs(n*k_2, k_2) # only one pair, not n
K_par = K_par.subs(k_2, k) # mesh-stiffness of the pair
K_par = K_par.subs([(r_p, r_w), (r_s, r_p)])
K_par
```
$\displaystyle \left[\begin{matrix}k r_{w}^{2} & k r_{p} r_{w}\\k r_{p} r_{w} & k r_{p}^{2}\end{matrix}\right]$
From that, one can write the matrices for a parallel system using the following code:
```python
M_p = diag(m_w*r_w**2, m_p*r_p**2)
mat_diff = abs(matrix2numpy(simplify(M_p - M_par))).sum()
if(mat_diff != 0.0):
print('Error in M_p matrix.')
K_p = diag(r_w**2, r_p**2)
K_p[0, 1] = r_p*r_w
K_p[1, 0] = r_p*r_w
K_p = k*K_p
mat_diff = abs(matrix2numpy(simplify(K_p - K_par))).sum()
if(mat_diff != 0.0):
print('Error in K_p matrix.')
```
| 66e8ae45f6a5669a8a16c1f9e2d5b56ff1bf17b1 | 20,875 | ipynb | Jupyter Notebook | notes/.ipynb_checkpoints/Kahraman_1994-checkpoint.ipynb | gfsReboucas/Drivetrain-python | 90cc8a0b26fa6dd851a8ddaaf321f5ae9f5cf431 | ["MIT"] | 1 | 2020-10-17T13:43:01.000Z | 2020-10-17T13:43:01.000Z | notes/.ipynb_checkpoints/Kahraman_1994-checkpoint.ipynb | gfsReboucas/Drivetrain-python | 90cc8a0b26fa6dd851a8ddaaf321f5ae9f5cf431 | ["MIT"] | null | null | null | notes/.ipynb_checkpoints/Kahraman_1994-checkpoint.ipynb | gfsReboucas/Drivetrain-python | 90cc8a0b26fa6dd851a8ddaaf321f5ae9f5cf431 | ["MIT"] | null | null | null | 29.03338 | 708 | 0.372263 | true | 3,564 | Qwen/Qwen-72B | 1. YES 2. YES | 0.954647 | 0.793106 | 0.757137 | __label__eng_Latn | 0.212139 | 0.597414 |
<center> <H1>Validation of the NN code generator for the 2D diffusion equation </H1>
<H3>O. Pannekoucke</H3>
</center>
$ %\newcommand{\pde}{\partial}
%\newcommand{\pdt}{\partial_t}
%\newcommand{\pdx}{\partial_x}
\newcommand{\bu}{\bar u}
\newcommand{\eps}{\varepsilon}$
<center> <b>Objectives</b> </center>
* Definition of the 2D diffusion equation by using `sympy`
* Computation of the numerical solution of the 2D diffusion using a NN
---
<h1><center>Contents</center></h1>
1. [Introduction](#intro)
1. [Dynamics](#model)
1. [Numerical code for the resolution](#code)
1. [Numerical application](#num)
1. [Conclusion](#conclusion)
---
## Introduction <a id='intro'/>
The aim is to compute the solution of the diffusion equation given by
$$\partial_t u = \partial_{x^i}\left(\kappa_{ij}\partial_{x^j} u \right),$$
where $\kappa$ is a field of diffusion tensors.
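For the 2D case considered below, writing the symmetric tensor in components $\kappa_{11}$, $\kappa_{12}$, $\kappa_{22}$, the equation expands to
$$\partial_t u = \partial_{x}\left(\kappa_{11}\partial_{x} u + \kappa_{12}\partial_{y} u \right) + \partial_{y}\left(\kappa_{12}\partial_{x} u + \kappa_{22}\partial_{y} u \right),$$
which is exactly the form encoded with `sympy` in the next section.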
## Dynamics <a id='model'>
```python
from sympy import init_printing
init_printing()
```
**Set the diffusion equation**
```python
from sympy import Function, symbols, Derivative
from pdenetgen import Eq, NNModelBuilder
# Defines the diffusion equation using sympy
t, x, y = symbols('t x y')
u = Function('u')(t,x,y)
kappa11 = Function('\\kappa_{11}')(x,y)
kappa12 = Function('\\kappa_{12}')(x,y)
kappa22 = Function('\\kappa_{22}')(x,y)
diffusion_2D = Eq(Derivative(u,t),
Derivative(kappa11*Derivative(u,x)+
kappa12*Derivative(u,y),x)+
Derivative(kappa12*Derivative(u,x)+
kappa22*Derivative(u,y),y)).doit()
# Defines the neural network code generator
diffusion_nn_builder = NNModelBuilder(diffusion_2D,
class_name="NNDiffusion2DHeterogeneous")
# Renders the neural network code
exec(diffusion_nn_builder.code)
# Create a 2D Diffusion model
diffusion_model = NNDiffusion2DHeterogeneous()
```
Warning: function `kappa_12` has to be set
Warning: function `kappa_22` has to be set
Warning: function `kappa_11` has to be set
**Sample of NN code generated**
```python
# Example of computation of a derivative
kernel_Du_x_o1 = np.asarray([[0.0,-1/(2*self.dx[self.coordinates.index('x')]),0.0],
[0.0,0.0,0.0],
[0.0,1/(2*self.dx[self.coordinates.index('x')]),0.0]]).reshape((3, 3)+(1,1))
Du_x_o1 = DerivativeFactory((3, 3),kernel=kernel_Du_x_o1,name='Du_x_o1')(u)
# Computation of trend_u
mul_0 = keras.layers.multiply([Dkappa_11_x_o1,Du_x_o1],name='MulLayer_0')
mul_1 = keras.layers.multiply([Dkappa_12_x_o1,Du_y_o1],name='MulLayer_1')
mul_2 = keras.layers.multiply([Dkappa_12_y_o1,Du_x_o1],name='MulLayer_2')
mul_3 = keras.layers.multiply([Dkappa_22_y_o1,Du_y_o1],name='MulLayer_3')
mul_4 = keras.layers.multiply([Du_x_o2,kappa_11],name='MulLayer_4')
mul_5 = keras.layers.multiply([Du_y_o2,kappa_22],name='MulLayer_5')
mul_6 = keras.layers.multiply([Du_x_o1_y_o1,kappa_12],name='MulLayer_6')
sc_mul_0 = keras.layers.Lambda(lambda x: 2.0*x,name='ScalarMulLayer_0')(mul_6)
trend_u = keras.layers.add([mul_0,mul_1,mul_2,mul_3,mul_4,mul_5,sc_mul_0],name='AddLayer_0')
```
```python
print(diffusion_nn_builder.code)
```
from pdenetgen.model import Model
import numpy as np
import tensorflow.keras as keras
from pdenetgen.symbolic.nn_builder import DerivativeFactory, TrainableScalarLayerFactory
class NNDiffusion2DHeterogeneous(Model):
# Prognostic functions (sympy functions):
prognostic_functions = (
'u', # Write comments on the function here
)
# Spatial coordinates
coordinates = (
'x', # Write comments on the coordinate here
'y', # Write comments on the coordinate here
)
# Set constant functions
constant_functions = (
'kappa_12', # Writes comment on the constant function here
'kappa_22', # Writes comment on the constant function here
'kappa_11', # Writes comment on the constant function here
)
def __init__(self, shape=None, lengths=None, **kwargs):
super().__init__() # Time scheme is set from Model.__init__()
#---------------------------------
# Set index array from coordinates
#---------------------------------
# a) Set shape
shape = len(self.coordinates)*(100,) if shape is None else shape
if len(shape)!=len(self.coordinates):
raise ValueError(f"len(shape) {len(shape)} is different from len(coordinates) {len(self.coordinates)}")
else:
self.shape = shape
# b) Set input shape for coordinates
self.input_shape_x = shape[0]
self.input_shape_y = shape[1]
# c) Set lengths
lengths = len(self.coordinates)*(1.0,) if lengths is None else lengths
if len(lengths)!=len(self.coordinates):
raise ValueError(f"len(lengths) {len(lengths)} is different from len(coordinates) {len(self.coordinates)}")
else:
self.lengths = lengths
# d) Set indexes
self._index = {}
for k,coord in enumerate(self.coordinates):
self._index[(coord,0)] = np.arange(self.shape[k], dtype=int)
# Set x/dx
#-------------
self.dx = tuple(length/shape for length, shape in zip(self.lengths, self.shape))
self.x = tuple(self.index(coord,0)*dx for coord, dx in zip(self.coordinates, self.dx))
self.X = np.meshgrid(*self.x)
#-----------------------
# Set constant functions
#-----------------------
# Set a default nan value for constants
self.kappa_12 = np.nan # @@ set constant value @@
self.kappa_22 = np.nan # @@ set constant value @@
self.kappa_11 = np.nan # @@ set constant value @@
# Set constant function values from external **kwargs (when provided)
for key in kwargs:
if key in self.constant_functions:
setattr(self, key, kwargs[key])
# Alert when a constant is np.nan
for function in self.constant_functions:
if getattr(self, function) is np.nan:
print(f"Warning: function `{function}` has to be set")
# Set NN models
self._trend_model = None
self._exogenous_model = None
def index(self, coord, step:int):
""" Return int array of shift index associated with coordinate `coord` for shift `step` """
# In this implementation, indexes are memory saved in a dictionary, feed at runtime
if (coord,step) not in self._index:
self._index[(coord,step)] = (self._index[(coord,0)]+step)%self.shape[self.coordinates.index(coord)]
return self._index[(coord,step)]
def _make_trend_model(self):
""" Generate the NN used to compute the trend of the dynamics """
# Alias for constant functions
#-----------------------------
kappa_12 = self.kappa_12
if kappa_12 is np.nan:
raise ValueError("Constant function 'kappa_12' is not set")
kappa_22 = self.kappa_22
if kappa_22 is np.nan:
raise ValueError("Constant function 'kappa_22' is not set")
kappa_11 = self.kappa_11
if kappa_11 is np.nan:
raise ValueError("Constant function 'kappa_11' is not set")
# Set input layers
#------------------
# Set Alias for coordinate input shapes
input_shape_x = self.input_shape_x
input_shape_y = self.input_shape_y
# Set input shape for prognostic functions
u = keras.layers.Input(shape =(input_shape_x,input_shape_y,1,))
# Set input shape for constant functions
kappa_12 = keras.layers.Input(shape =(input_shape_x,input_shape_y,1,))
kappa_22 = keras.layers.Input(shape =(input_shape_x,input_shape_y,1,))
kappa_11 = keras.layers.Input(shape =(input_shape_x,input_shape_y,1,))
# Keras code
# 2) Implementation of derivative as ConvNet
# Compute derivative
#-----------------------
#
# Warning: might be modified to fit appropriate boundary conditions.
#
kernel_Dkappa_11_x_o1 = np.asarray([[0.0,-1/(2*self.dx[self.coordinates.index('x')]),0.0],
[0.0,0.0,0.0],
[0.0,1/(2*self.dx[self.coordinates.index('x')]),0.0]]).reshape((3, 3)+(1,1))
Dkappa_11_x_o1 = DerivativeFactory((3, 3),kernel=kernel_Dkappa_11_x_o1,name='Dkappa_11_x_o1')(kappa_11)
kernel_Du_x_o1_y_o1 = np.asarray([[1/(4*self.dx[self.coordinates.index('x')]*self.dx[self.coordinates.index('y')]),
0.0,
-1/(4*self.dx[self.coordinates.index('x')]*self.dx[self.coordinates.index('y')])],
[0.0,0.0,0.0],
[-1/(4*self.dx[self.coordinates.index('x')]*self.dx[self.coordinates.index('y')]),
0.0,
1/(4*self.dx[self.coordinates.index('x')]*self.dx[self.coordinates.index('y')])]]).reshape((3, 3)+(1,1))
Du_x_o1_y_o1 = DerivativeFactory((3, 3),kernel=kernel_Du_x_o1_y_o1,name='Du_x_o1_y_o1')(u)
kernel_Du_y_o2 = np.asarray([[0.0,0.0,0.0],
[self.dx[self.coordinates.index('y')]**(-2),
-2/self.dx[self.coordinates.index('y')]**2,
self.dx[self.coordinates.index('y')]**(-2)],
[0.0,0.0,0.0]]).reshape((3, 3)+(1,1))
Du_y_o2 = DerivativeFactory((3, 3),kernel=kernel_Du_y_o2,name='Du_y_o2')(u)
kernel_Dkappa_12_y_o1 = np.asarray([[0.0,0.0,0.0],
[-1/(2*self.dx[self.coordinates.index('y')]),0.0,
1/(2*self.dx[self.coordinates.index('y')])],
[0.0,0.0,0.0]]).reshape((3, 3)+(1,1))
Dkappa_12_y_o1 = DerivativeFactory((3, 3),kernel=kernel_Dkappa_12_y_o1,name='Dkappa_12_y_o1')(kappa_12)
kernel_Dkappa_12_x_o1 = np.asarray([[0.0,-1/(2*self.dx[self.coordinates.index('x')]),0.0],
[0.0,0.0,0.0],
[0.0,1/(2*self.dx[self.coordinates.index('x')]),0.0]]).reshape((3, 3)+(1,1))
Dkappa_12_x_o1 = DerivativeFactory((3, 3),kernel=kernel_Dkappa_12_x_o1,name='Dkappa_12_x_o1')(kappa_12)
kernel_Du_x_o1 = np.asarray([[0.0,-1/(2*self.dx[self.coordinates.index('x')]),0.0],
[0.0,0.0,0.0],
[0.0,1/(2*self.dx[self.coordinates.index('x')]),0.0]]).reshape((3, 3)+(1,1))
Du_x_o1 = DerivativeFactory((3, 3),kernel=kernel_Du_x_o1,name='Du_x_o1')(u)
kernel_Du_x_o2 = np.asarray([[0.0,self.dx[self.coordinates.index('x')]**(-2),0.0],
[0.0,-2/self.dx[self.coordinates.index('x')]**2,0.0],
[0.0,self.dx[self.coordinates.index('x')]**(-2),0.0]]).reshape((3, 3)+(1,1))
Du_x_o2 = DerivativeFactory((3, 3),kernel=kernel_Du_x_o2,name='Du_x_o2')(u)
kernel_Dkappa_22_y_o1 = np.asarray([[0.0,0.0,0.0],
[-1/(2*self.dx[self.coordinates.index('y')]),0.0,
1/(2*self.dx[self.coordinates.index('y')])],
[0.0,0.0,0.0]]).reshape((3, 3)+(1,1))
Dkappa_22_y_o1 = DerivativeFactory((3, 3),kernel=kernel_Dkappa_22_y_o1,name='Dkappa_22_y_o1')(kappa_22)
kernel_Du_y_o1 = np.asarray([[0.0,0.0,0.0],
[-1/(2*self.dx[self.coordinates.index('y')]),0.0,
1/(2*self.dx[self.coordinates.index('y')])],
[0.0,0.0,0.0]]).reshape((3, 3)+(1,1))
Du_y_o1 = DerivativeFactory((3, 3),kernel=kernel_Du_y_o1,name='Du_y_o1')(u)
# 3) Implementation of the trend as NNet
#
# Computation of trend_u
#
mul_0 = keras.layers.multiply([Dkappa_11_x_o1,Du_x_o1],name='MulLayer_0')
mul_1 = keras.layers.multiply([Dkappa_12_x_o1,Du_y_o1],name='MulLayer_1')
mul_2 = keras.layers.multiply([Dkappa_12_y_o1,Du_x_o1],name='MulLayer_2')
mul_3 = keras.layers.multiply([Dkappa_22_y_o1,Du_y_o1],name='MulLayer_3')
mul_4 = keras.layers.multiply([Du_x_o2,kappa_11],name='MulLayer_4')
mul_5 = keras.layers.multiply([Du_y_o2,kappa_22],name='MulLayer_5')
mul_6 = keras.layers.multiply([Du_x_o1_y_o1,kappa_12],name='MulLayer_6')
sc_mul_0 = keras.layers.Lambda(lambda x: 2.0*x,name='ScalarMulLayer_0')(mul_6)
trend_u = keras.layers.add([mul_0,mul_1,mul_2,mul_3,mul_4,mul_5,sc_mul_0],name='AddLayer_0')
# 4) Set 'input' of model
inputs = [
# Prognostic functions
u,
# Constant functions
kappa_12,kappa_22,kappa_11,
]
# 5) Set 'outputs' of model
outputs = [
trend_u,
]
model = keras.models.Model(inputs=inputs, outputs=outputs)
#model.trainable = False
self._trend_model = model
def trend(self, t, state):
""" Trend of the dynamics """
if self._trend_model is None:
self._make_trend_model()
# Init output state with pointer on data
#-------------------------------------------
# a) Set the output array
dstate = np.zeros(state.shape)
# b) Set pointers on output array `dstate` for the computation of the physical trend (alias only).
du = dstate[0]
# Load physical functions from state
#------------------------------------
u = state[0]
# Compute the trend value from model.predict
#-------------------------------------------
inputs = [
# Prognostic functions
u,
# Constant functions
self.kappa_12,
self.kappa_22,
self.kappa_11,
]
dstate = self._trend_model.predict( inputs )
if not isinstance(dstate,list):
dstate = [dstate]
return np.array(dstate)
def _make_dynamical_trend(self):
"""
Computation of a trend model so to be used in a time scheme (as solving a dynamical system or an ODE)
Description:
------------
In the present implementation, the inputs of the trend `self._trend_model` is a list of fields, while
entry of a time-scheme is a single array which contains all fields.
The aims of `self._dynamical_trend` is to produce a Keras model which:
1. takes a single array as input
2. extract the `self._trend_model` input list from the input array
3. compute the trends from `self._trend_model`
4. outputs the trends as a single array
Explaination of the code:
-------------------------
Should implement a code as the following, that is valid for the PKF-Burgers
def _make_dynamical_trend(self):
if self._trend_model is None:
self._make_trend_model()
# 1. Extract the input of the model
# 1.1 Set the input as an array
state = keras.layers.Input(shape=(3,self.input_shape_x,1))
# 1.2 Extract each components of the state
u = keras.layers.Lambda(lambda x : x[:,0,:,:])(state)
V = keras.layers.Lambda(lambda x : x[:,1,:,:])(state)
nu_u_xx = keras.layers.Lambda(lambda x : x[:,2,:,:])(state)
# 2. Compute the trend
trend_u, trend_V, trend_nu = self._trend_model([u,V,nu_u_xx])
# 3. Outputs the trend as a single array
# 3.1 Reshape trends
trend_u = keras.layers.Reshape((1,self.input_shape_x,1))(trend_u)
trend_V = keras.layers.Reshape((1,self.input_shape_x,1))(trend_V)
trend_nu = keras.layers.Reshape((1,self.input_shape_x,1))(trend_nu)
# 3.2 Concatenates all trends
trends = keras.layers.Concatenate(axis=1)([trend_u,trend_V,trend_nu])
# 4. Set the dynamical_trend model
self._dynamical_trend = keras.models.Model(inputs=state,outputs=trends)
"""
if self._trend_model is None:
self._make_trend_model()
for exclude_case in ['constant_functions','exogenous_functions']:
if hasattr(self,exclude_case):
raise NotImplementedError(f'Design of dynamical_model with {exclude_case} is not implemented')
# Case 1 -- corresponds to the _trend_model if input is a single field
if not isinstance(self._trend_model.input_shape, list):
self._dynamical_trend = self._trend_model
return
# Case 2 -- Case where multiple list is used
# 1. Extract the input of the model
# 1.1 Set the input as an array
""" from PKF-Burgers code:
state = keras.layers.Input(shape=(3,self.input_shape_x,1))
"""
# 1.1.1 Compute the input_shape from _trend_model
shapes = []
dimensions = []
for shape in self._trend_model.input_shape:
shape = shape[1:] # Exclude batch_size (assumed to be at first)
shapes.append(shape)
dimensions.append(len(shape)-1)
max_dimension = max(dimensions)
if max_dimension!=1:
if 1 in dimensions:
raise NotImplementedError('1D fields incompatible with 2D/3D fields')
# todo: add test to check compatibility of shapes!!!!
if max_dimension in [1,2]:
input_shape = (len(shapes),)+shapes[0]
elif max_dimension==3:
# a. check the size of 2D fields: this is given by the first 2D field.
for shape, dimension in zip(shapes, dimensions):
if dimension==2:
input_shape_2D = shape
break
# b. Compute the numbers of 2D fields: this corresponds to the number of 3D layers and the number of 2D fields.
for shape, dimension in zip(shapes, dimensions):
if dimension==2:
nb_outputs += 1
else:
nb_outputs += shape[0]
input_shape = (nb_outputs,)+input_shape_2D
# 1.1.2 Init the state of the dynamical_trend
state = keras.layers.Input(shape=input_shape)
# 1.2 Extract each components of the state
""" From PKF-Burgers code:
u = keras.layers.Lambda(lambda x : x[:,0,:,:])(state)
V = keras.layers.Lambda(lambda x : x[:,1,:,:])(state)
nu_u_xx = keras.layers.Lambda(lambda x : x[:,2,:,:])(state)
inputs = [u, V, nu_u_xx]
"""
def get_slice(dimension, k):
def func(x):
if dimension == 1:
return x[:,k,:,:]
elif dimension == 2:
return x[:,k,:,:]
return func
def get_slice_3d(start,end):
def func(x):
return x[:,start:end,:,:,:]
return func
inputs = []
if max_dimension in [1,2]:
for k in range(len(shapes)):
inputs.append(keras.layers.Lambda(get_slice(max_dimension,k))(state))
#if max_dimension==1:
# inputs.append(keras.layers.Lambda(lambda x : x[:,k,:,:])(state))
#
#if max_dimension==2:
# inputs.append(keras.layers.Lambda(lambda x : x[:,k,:,:,:])(state))
else:
k=0
for shape, dimension in zip(shapes, dimensions):
if dimension==2:
#inputs.append(keras.layers.Lambda(lambda x : x[:,k,:,:,:])(state))
inputs.append(keras.layers.Lambda(get_slice(dimension,k))(state))
k += 1
if dimension==3:
start = k
end = start+shape[0]
inputs.append(keras.layers.Lambda(get_slice_3d(start,end))(state))
k = end
# 2. Compute the trend
""" From PKF-Burgers code
trend_u, trend_V, trend_nu = self._trend_model([u,V,nu_u_xx])
"""
trends = self._trend_model(inputs)
# 3. Outputs the trend as a single array
# 3.1 Reshape trends
""" from PKF-Burgers code
trend_u = keras.layers.Reshape((1,self.input_shape_x,1))(trend_u)
trend_V = keras.layers.Reshape((1,self.input_shape_x,1))(trend_V)
trend_nu = keras.layers.Reshape((1,self.input_shape_x,1))(trend_nu)
"""
reshape_trends = []
for trend, dimension in zip(trends, dimensions):
#shape = tuple(dim.value for dim in trend.shape[1:])
# update from keras -> tensorflow.keras
shape = tuple(dim for dim in trend.shape[1:])
if dimension==1 or dimension==2:
# for 1D fields like (128,1) transform into (1,128,1)
# for 2D fields like (128,128,1) transform into (1,128,128,1)
shape = (1,)+shape
elif dimension==3:
# 3D fields can be compated: two fields (36,128,128,1) become the single field (72,128,128,1)
pass
else:
raise NotImplementedError
reshape_trends.append(keras.layers.Reshape(shape)(trend))
# 3.2 Concatenates all trends
""" From PKF-Burgers code:
trends = keras.layers.Concatenate(axis=1)([trend_u,trend_V,trend_nu])
"""
trends = keras.layers.Concatenate(axis=1)(reshape_trends)
# 2.5 Compute the model
self._dynamical_trend = keras.models.Model(inputs=state,outputs=trends)
## Numerical application <a id='num'/>
### Definition of the domain of computation
The domain is the bi-periodic square $[0,1)\times [0,1)$
### Set the numerical NN model
```python
domain = diffusion_model
```
**Set initial fields**
```python
import numpy as np
```
```python
dx, dy = diffusion_model.dx
# Set a dirac at the center of the domain.
U = np.zeros(diffusion_model.shape)
U[diffusion_model.shape[0]//2, diffusion_model.shape[0]//2] = 1./(dx*dy)
```
```python
diffusion_model.shape
```
```python
X = np.asarray(diffusion_model.X)
k = np.asarray([1,2])
X = np.moveaxis(X,0,2)
print(X.shape)
np.linalg.norm(X@k -k[0]*diffusion_model.X[0]-k[1]*diffusion_model.X[1])
```
**Set constants and time step**
```python
import matplotlib.pyplot as plt
```
```python
time_scale = 1.
#
# Construction of the diffusion tensor
#
# a) Definition of the principal components
lx, ly = 10*dx, 5*dy
kappa_11 = lx**2/time_scale
kappa_22 = ly**2/time_scale
# b) Construction of a rotation matrix
R = lambda theta : np.array([[np.cos(theta), -np.sin(theta)],[np.sin(theta), np.cos(theta)]])
# c) Spectrum of the reference tensor
D = np.diag([kappa_11,kappa_22])
# d) Set diffusion tensors field
diffusion_model.kappa_11 = np.zeros(diffusion_model.shape)
diffusion_model.kappa_12 = np.zeros(diffusion_model.shape)
diffusion_model.kappa_22 = np.zeros(diffusion_model.shape)
X = np.moveaxis(np.asarray(diffusion_model.X),0,2)
k = 2*np.pi*np.array([2,3])
theta = np.pi/3*np.cos(X@k)
#plt.contourf(*num_model.x, theta)
for i in range(diffusion_model.shape[0]):
for j in range(diffusion_model.shape[1]):
lR = R(theta[i,j])
nu = lR@np.diag([kappa_11,kappa_22])@lR.T
diffusion_model.kappa_11[i,j] = nu[0,0]
diffusion_model.kappa_12[i,j] = nu[0,1]
diffusion_model.kappa_22[i,j] = nu[1,1]
diffusion_model.kappa_11 = diffusion_model.kappa_11.reshape((1,100,100,1))
diffusion_model.kappa_12 = diffusion_model.kappa_12.reshape((1,100,100,1))
diffusion_model.kappa_22 = diffusion_model.kappa_22.reshape((1,100,100,1))
#
# Computation of a time step suited to the problem
#
dt = np.min([dx**2/kappa_11, dy**2/kappa_22])
CFL = 1/6
diffusion_model._dt = CFL * dt
print('time step:', diffusion_model._dt)
```
time step: 0.0016666666666666663
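The choice above follows the usual explicit-scheme heuristic: for the Euler time scheme used below, stability roughly requires (this bound is a standard rule of thumb, not stated in the original)
$$\Delta t \;\lesssim\; \mathrm{CFL}\,\min\left(\frac{\Delta x^2}{\kappa_{11}}, \frac{\Delta y^2}{\kappa_{22}}\right),$$
which is what the `CFL = 1/6` factor implements with the principal values $\kappa_{11}$ and $\kappa_{22}$ defined above.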
**Illustrates trend at initial condition**
```python
def plot(field):
plt.contourf(*diffusion_model.x, field.T)
```
```python
state0 = U.copy().reshape((1,1)+diffusion_model.shape+(1,))
print(state0.shape)
```
(1, 1, 100, 100, 1)
```python
import tensorflow.keras as keras
```
```python
dU, = diffusion_model.trend(0,state0)
plot(dU[0].reshape((100,100)))
plt.title('Trend for the diffusion')
```
**Short forecast**
```python
times = diffusion_model.window(time_scale)
#saved_times = times[::100]
saved_times = times
```
```python
diffusion_model.set_time_scheme('euler')
traj = diffusion_model.forecast(times, state0, saved_times)
```
```python
plt.figure(figsize=(12,5))
start, end = [traj[time] for time in [saved_times[0], saved_times[-1]]]
title = ['start', 'end']
for k, state in enumerate([start, end]):
plt.subplot(121+k)
plot(state[0].reshape((100,100)))
plt.title(title[k])
plt.savefig('./figures/NN-diffusion-2D-prediction.pdf')
np.save('nn-diffusion.npy', end)
```
**Comparison with the finite difference solution**
```python
fd_end_solution = np.load('fd-diffusion.npy')
```
```python
np.linalg.norm(fd_end_solution[0] - end[0,0,:,:,0])
```
## Conclusion <a id='conclusion'/>
In this notebook, the solution of the diffusion equation using a NN code has been presented.
The results reproduce those of the finite-difference solution (figure not shown here). This validates the NN generator and illustrates how the physical equations can be used for the design of a NN architecture.
| 3b163b5c9fabcf4c1e54d2eba22c67ddba17f961 | 62,419 | ipynb | Jupyter Notebook | example/pdenetgen-diffusion2D.ipynb | relmonta/pdenetgen | 7395f003904e5a4503c013a826d9dc66838776e3 | ["CECILL-B"] | null | null | null | example/pdenetgen-diffusion2D.ipynb | relmonta/pdenetgen | 7395f003904e5a4503c013a826d9dc66838776e3 | ["CECILL-B"] | null | null | null | example/pdenetgen-diffusion2D.ipynb | relmonta/pdenetgen | 7395f003904e5a4503c013a826d9dc66838776e3 | ["CECILL-B"] | null | null | null | 56.847905 | 9,604 | 0.656098 | true | 6,984 | Qwen/Qwen-72B | 1. YES 2. YES | 0.865224 | 0.622459 | 0.538567 | __label__eng_Latn | 0.440512 | 0.089601 |
# Scientific Computing with Python
This notebook by Xiaozhou Li is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.
All code examples are also licensed under the [MIT license](http://opensource.org/licenses/MIT).
```python
# what is this line all about?
%matplotlib inline
import matplotlib.pyplot as plt
```
## Numpy and Scipy
### Introduction
The numpy package (module) is used in almost all numerical computation using Python. It is a package that provides high-performance vector, matrix and higher-dimensional data structures for Python. It is implemented in C and Fortran so when calculations are vectorized (formulated with vectors and matrices), performance is very good.
The SciPy framework builds on top of the low-level NumPy framework for multidimensional arrays, and provides a large number of higher-level scientific algorithms.
### Fitting to polynomial
```python
import numpy as np
```
```python
np.random.seed(12)
x = np.linspace(0, 1, 20)
y = np.cos(x) + 0.3*np.random.rand(20)
p = np.poly1d(np.polyfit(x, y, 16))
t = np.linspace(0, 1, 200)
plt.plot(x, y, 'o', t, p(t), '-')
plt.show()
```
### Fit in a Chebyshev basis
```python
np.random.seed(0)
x = np.linspace(-1, 1, 2000)
y = np.cos(x) + 0.3*np.random.rand(2000)
p = np.polynomial.Chebyshev.fit(x, y, 90)
t = np.linspace(-1, 1, 200)
plt.plot(x, y, 'r.')
plt.plot(t, p(t), 'k-', lw=3)
plt.show()
```
### A demo of 1D interpolation
```python
np.random.seed(0)
measured_time = np.linspace(0, 1, 10)
noise = 1e-1 * (np.random.random(10)*2 - 1)
measures = np.sin(2 * np.pi * measured_time) + noise
# Interpolate it to new time points
from scipy.interpolate import interp1d
linear_interp = interp1d(measured_time, measures)
interpolation_time = np.linspace(0, 1, 50)
linear_results = linear_interp(interpolation_time)
cubic_interp = interp1d(measured_time, measures, kind='cubic')
cubic_results = cubic_interp(interpolation_time)
# Plot the data and the interpolation
from matplotlib import pyplot as plt
plt.figure(figsize=(6, 4))
plt.plot(measured_time, measures, 'o', ms=6, label='measures')
plt.plot(interpolation_time, linear_results, label='linear interp')
plt.plot(interpolation_time, cubic_results, label='cubic interp')
plt.legend()
plt.show()
```
### Minima and roots of a function
\begin{equation}
f(x) = x^2 + 10\sin(x)
\end{equation}
**(1) find minima**
```python
def f(x):
return x**2 + 10*np.sin(x)
from scipy import optimize
# Global optimization
grid = (-10, 10, 0.1)
xmin_global = optimize.brute(f, (grid, ))
print("Global minima found %s" % xmin_global)
# Constrain optimization
xmin_local = optimize.fminbound(f, 0, 10)
print("Local minimum found %s" % xmin_local)
```
Global minima found [-1.30641113]
Local minimum found 3.8374671194983834
**(2) root finding**
```python
root = optimize.root(f, 1) # our initial guess is 1
print("First root found %s" % root.x)
root2 = optimize.root(f, -2.5)
print("Second root found %s" % root2.x)
```
First root found [0.]
Second root found [-2.47948183]
**(3) Plot function, minima, and roots**
```python
fig = plt.figure(figsize=(6, 4))
ax = fig.add_subplot(111)
x = np.arange(-10, 10, 0.1)
# Plot the function
ax.plot(x, f(x), 'b-', label="f(x)")
# Plot the minima
xmins = np.array([xmin_global[0], xmin_local])
ax.plot(xmins, f(xmins), 'go', label="Minima")
# Plot the roots
roots = np.array([root.x, root2.x])
ax.plot(roots, f(roots), 'kv', label="Roots")
# Decorate the figure
ax.legend(loc='best')
ax.set_xlabel('x')
ax.set_ylabel('f(x)')
ax.axhline(0, color='gray')
plt.show()
```
## Matplotlib
### Introduction
Matplotlib is an excellent 2D and 3D graphics library for generating scientific figures.
### Reading and writing a panda
**(1) original figure**
```python
plt.figure()
img = plt.imread('../data/panda.jpg')
plt.imshow(img)
plt.imsave("original.jpg",img)
print (np.shape(img))
```
**(2) red channel displayed in grey**
```python
plt.figure()
img_red = img[:, :, 0]
plt.imshow(img_red, cmap=plt.cm.gray)
```
**(3) lower resolution (compression)**
```python
plt.figure()
img_tiny = img[::8, ::8]
plt.imshow(img_tiny, interpolation='nearest')
#plt.savefig("compressed.jpg")
plt.imsave("compressed.jpg",img_tiny)
```
### Mandelbrot Set (Mandelbrot fractal)
```python
def compute_mandelbrot(N_max, some_threshold, nx, ny):
# A grid of c-values
x = np.linspace(-2, 1, nx)
y = np.linspace(-1.5, 1.5, ny)
c = x[:,np.newaxis] + 1j*y[np.newaxis,:]
# Mandelbrot iteration
z = c
for j in range(N_max):
z = z**2 + c
mandelbrot_set = (abs(z) < some_threshold)
return mandelbrot_set
mandelbrot_set = compute_mandelbrot(50, 50., 601, 401)
plt.imshow(mandelbrot_set.T, extent=[-2, 1, -1.5, 1.5])
plt.gray()
plt.show()
```
### A simple example of 3D plotting
$$ z = \sin(\sqrt{x^2 + y^2}) $$
```python
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = Axes3D(fig)
X = np.arange(-4, 4, 0.25)
Y = np.arange(-4, 4, 0.25)
X, Y = np.meshgrid(X, Y)
R = np.sqrt(X ** 2 + Y ** 2)
Z = np.sin(R)
ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=plt.cm.hot)
ax.contourf(X, Y, Z, zdir='z', offset=-2, cmap=plt.cm.hot)
ax.set_zlim(-2, 2)
plt.show()
```
### An example displaying the contours of a function
$$ f(x,y) = \left(1 - \frac{x}{2} + x^5 + y^3\right)e^{-x^2-y^2}.$$
```python
def f(x,y):
return (1 - x / 2 + x**5 + y**3) * np.exp(-x**2 -y**2)
n = 256
x = np.linspace(-3, 3, n)
y = np.linspace(-3, 3, n)
X,Y = np.meshgrid(x, y)
plt.axes([0.025, 0.025, 0.95, 0.95])
plt.contourf(X, Y, f(X, Y), 8, alpha=.75, cmap=plt.cm.hot)
C = plt.contour(X, Y, f(X, Y), 8, colors='black')
plt.clabel(C, inline=1, fontsize=10)
plt.xticks(())
plt.yticks(())
plt.show()
```
## Sympy - Symbolic algebra in Python
### Introduction
There are two notable Computer Algebra Systems (CAS) for Python:
* [SymPy](http://sympy.org/en/index.html) - A python module that can be used in any Python program, or in an IPython session, that provides powerful CAS features.
* [Sage](http://www.sagemath.org/) - Sage is a full-featured and very powerful CAS environment that aims to provide an open source system that competes with Mathematica and Maple. Sage is not a regular Python module, but rather a CAS environment that uses Python as its programming language.
Sage is in some aspects more powerful than SymPy, but both offer very comprehensive CAS functionality. The advantage of SymPy is that it is a regular Python module and integrates well with the Jupyter notebook.
```python
from sympy import *
init_printing()
```
### Expand, factor and simplify
```python
x, y = symbols('x y')
(x+1)*(x+2)*(x+3)*(x+4)*(x+5)
```
```python
expand((x+1)*(x+2)*(x+3)*(x+4)*(x+5))
```
```python
sin(x+y)
```
```python
expand(sin(x+y), trig=True)
```
```python
expand((x+y)**8)
```
```python
x**3 + 6 * x**2 + 11*x + 6
```
```python
factor(x**3 + 6 * x**2 + 11*x + 6)
```
```python
sin(x)**2 + cos(x)**2
```
```python
simplify(sin(x)**2 + cos(x)**2)
```
```python
cos(x)/sin(x)
```
```python
simplify(cos(x)/sin(x))
```
### Calculus
**(1) differentiation and integration**
$f(x) = (x+1)^2$
```python
f = (x+1)**2
f
```
Computing $\frac{d f}{dx}$, $\frac{d f^2}{dx}$
```python
diff(f,x)
```
```python
diff(f**2,x)
```
Computing $\frac{d \sin(f)}{dx}$, $\frac{d^2 \sin(f)}{dx^2}$
```python
diff(sin(f),x)
```
```python
diff(sin(f),x,2)
```
```python
diff(sin(f),x,4)
```
$$ f(x,y) = \sin(xy) + \cos(xy),$$
computing
$$ \frac{\partial^3 f}{\partial x \partial y^2},\quad \int f(x,y)\,dx,\quad \int_{-1}^{1}f(x,y)\,dx$$
```python
f = sin(x*y) + cos(y*x)
f
```
```python
diff(f, x, 1, y, 2)
```
```python
integrate(f, x)
```
```python
integrate(f, (x, -1, 1))
```
Computing $\int_{-\infty}^\infty e^{-x^2}\,dx$
```python
integrate(exp(-x**2), (x, -oo, oo))
```
**(2) limits**
$$ \lim\limits_{x\rightarrow 0}\frac{\sin(x)}{x},\quad \lim\limits_{x\rightarrow 0^{+}}\frac{1}{x},\quad \lim\limits_{x\rightarrow 0^{-}}\frac{1}{x}$$
```python
limit(sin(x)/x, x, 0)
```
```python
limit(1/x, x, 0, dir="+")
```
```python
limit(1/x, x, 0, dir="-")
```
**(3) series**
```python
exp(x)
```
```python
series(exp(x), x)
```
```python
series(exp(x), x, 1)
```
```python
series(sin(x), x, 0, 12)
```
```python
series(sin(x)*cos(x), x, 0, 8)
```
```python
series(sin(x)*cos(x)*exp(x), x, 0, 12)
```
### Linear algebra: Matrices
```python
m11, m12, m21, m22 = symbols("m11, m12, m21, m22")
b1, b2 = symbols("b1, b2")
```
```python
A = Matrix([[m11, m12],[m21, m22]])
A
```
```python
b = Matrix([[b1], [b2]])
b
```
```python
A**2
```
```python
A**5
```
```python
A * b
```
```python
A.det()
```
```python
A.inv()
```
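As a small bridge to the next subsection (this example is an addition, not part of the original notebook), the same matrix objects can be used to solve the linear system $A\,\mathbf{u} = b$ symbolically:
```python
# Solve A*u = b symbolically; LUsolve avoids forming the full inverse explicitly
simplify(A.LUsolve(b))
```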
### Solving equations
Solving
$$ x^2 - 1 = 0,\quad x^4 + x^3 - x^2 - 1 = 0$$
```python
solve(x**2 - 1, x)
```
```python
solve(x**4 + x**3 - x**2 - 1, x)
```
Solving systems:
$$ x + y - 1 = 0,\quad x - y - 1 = 0,$$
and
$$ x + y - a = 0,\quad x - y - b = 0.$$
```python
solve([x + y - 1, x - y - 1], [x,y])
```
```python
a, b = symbols('a, b')
solve([x + y - a, x - y - b], [x,y])
```
```python
```
| 5a7fef66e1431fc87c367c88ae2ee9840bd5b4c0 | 906,623 | ipynb | Jupyter Notebook | Scientific_Computing/Scientific_Python.ipynb | xiaozhouli/Jupyter | 68d5a384dd939b3e8079da4470d6401d11b63a4c | ["MIT"] | 6 | 2020-02-27T13:09:06.000Z | 2021-11-14T09:50:30.000Z | Scientific_Computing/Scientific_Python.ipynb | xiaozhouli/Jupyter | 68d5a384dd939b3e8079da4470d6401d11b63a4c | ["MIT"] | null | null | null | Scientific_Computing/Scientific_Python.ipynb | xiaozhouli/Jupyter | 68d5a384dd939b3e8079da4470d6401d11b63a4c | ["MIT"] | 8 | 2018-10-18T10:20:56.000Z | 2021-09-24T08:09:27.000Z | 483.017048 | 231,684 | 0.936773 | true | 3,132 | Qwen/Qwen-72B | 1. YES 2. YES | 0.935347 | 0.937211 | 0.876617 | __label__eng_Latn | 0.653976 | 0.875008 |
# Parameter estimation example: fitting a straight line II
## Bayesian handling of nuisance parameters
$% Some LaTeX definitions we'll use.
\newcommand{\pr}{\textrm{p}}
$
```python
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn; seaborn.set("talk") # for plot formatting
```
## The Data and the question
Let's start by defining some data that we will fit with a straight line. The following data is measured velocities and distances for a set of galaxies. We will assume that there is a constant standard deviation of $\sigma = 200$ km/sec on the $y$ values and no error on $x$.
```python
# Data from student lab observations;
# d0 = Galaxy distances in MPc
# v0 = Galaxy velocity in km/sec
d0 = np.array([6.75, 25, 33.8, 9.36, 21.8, 5.58, 8.52, 15.1])
v0 = np.array([462, 2562, 2130, 750, 2228, 598, 224, 971])
# Assumed exp. uncertainty
err_v0 = 200
```
```python
x=d0; y=v0; dy=err_v0
fig = plt.figure(figsize=(8,6))
ax = fig.add_subplot(1,1,1)
ax.errorbar(x, y, dy, fmt='o')
ax.set_xlabel(r'distance [MPc]')
ax.set_ylabel(r'velocity [km/sec]')
fig.tight_layout()
```
The question that we will be asking is:
> What value would you infer for the Hubble constant given this data?
We will make the prior assumption that the data can be fitted with a straight line (see also the [parameter_estimation_fitting_straight_line_I.ipynb](parameter_estimation_fitting_straight_line_I.ipynb) notebook). But we note that we are actually not interested in the offset of the straight line, but just its slope.
We will try three different approaches:
* Maximum likelihood estimate
* Single-parameter inference
* Full Bayesian analysis
As a final part of this notebook, we will also explore how the posterior belief from this analysis can feed into a second data analysis.
## The Model
We follow the procedure outlined in [parameter_estimation_fitting_straight_line_I.ipynb](../bayesian-parameter-estimation/parameter_estimation_fitting_straight_line_I.ipynb).
Thus, we're fitting a straight line to data,
$$
y_M(x) = mx + b
$$
where our parameter vector will be
$$
\theta = [b, m].
$$
But this is only half the picture: what we mean by a "model" in a Bayesian sense is not only this expected value $y_M(x;\theta)$, but a **probability distribution** for our data.
That is, we need an expression to compute the likelihood $\pr(D\mid\theta)$ for our data as a function of the parameters $\theta$.
Here we are given data with simple error bars, which imply that the probability for any *single* data point is a normal distribution about the true value. That is,
$$
y_i \sim \mathcal{N}(y_M(x_i;\theta), \sigma)
$$
or, in other words,
$$
\pr(y_i\mid x_i,\theta) = \frac{1}{\sqrt{2\pi\varepsilon_i^2}} \exp\left(\frac{-\left[y_i - y_M(x_i;\theta)\right]^2}{2\varepsilon_i^2}\right)
$$
where $\varepsilon_i$ are the (known) measurement errors indicated by the error bars.
Assuming all the points are independent, we can find the full likelihood by multiplying the individual likelihoods together:
$$
\pr(D\mid\theta) = \prod_{i=1}^N \pr(y_i\mid x_i,\theta)
$$
For convenience (and also for numerical accuracy) this is often expressed in terms of the log-likelihood:
$$
\log \pr(D\mid\theta) = -\frac{1}{2}\sum_{i=1}^N\left(\log(2\pi\varepsilon_i^2) + \frac{\left[y_i - y_M(x_i;\theta)\right]^2}{\varepsilon_i^2}\right)
$$
## Step 1: Maximum likelihood estimate
```python
# Log likelihood
def log_likelihood(theta, x, y, dy):
y_model = theta[0] + theta[1] * x
return -0.5 * np.sum(np.log(2 * np.pi * dy ** 2) +
(y - y_model) ** 2 / dy ** 2)
```
Use tools in [``scipy.optimize``](http://docs.scipy.org/doc/scipy/reference/optimize.html) to maximize this likelihood (i.e. minimize the negative log-likelihood).
```python
from scipy import optimize
def minfunc(theta, x, y, dy):
"""
Function to be minimized: minus the logarithm of the likelihood
"""
return -log_likelihood(theta, x, y, dy)
result = optimize.minimize(minfunc, x0=[0, 0], args=(x, y, dy))
```
The output from `scipy.optimize` contains the optimal parameters and also the inverse of the Hessian matrix (which measures the second-order curvature at the optimum). The inverse Hessian is related to the covariance matrix. Very often the square roots of the diagonal elements of this matrix are quoted as uncertainty estimates. We will not discuss this measure here, but refer to the highly recommended review: [Error estimates of theoretical models: a guide](https://iopscience.iop.org/article/10.1088/0954-3899/41/7/074001).
```python
# Print the MLE and the square-root of the diagonal elements of the inverse hessian
print(f'Maximum Likelihood Estimate (MLE):')
ndim = len(result.x)
theta_MLE=result.x
err_theta_MLE = np.array([np.sqrt(result.hess_inv[i,i]) for i in range(ndim)])
for i in range(ndim):
print(f'... theta[{i}] = {theta_MLE[i]:>5.1f} +/- {err_theta_MLE[i]:>5.1f}')
```
Maximum Likelihood Estimate (MLE):
... theta[0] = -26.7 +/- 136.6
... theta[1] = 80.5 +/- 7.4
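As a quick cross-check (an addition to the original analysis): for Gaussian errors, a weighted least-squares fit should essentially reproduce the MLE. Using `numpy.polyfit` with weights $w_i = 1/\sigma_i$:
```python
# Weighted least-squares cross-check of the MLE (weights 1/sigma for Gaussian errors)
slope_ls, intercept_ls = np.polyfit(x, y, deg=1, w=np.ones_like(y) / dy)
print(f'polyfit cross-check: intercept = {intercept_ls:>5.1f}, slope = {slope_ls:>5.1f}')
```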
## Step 2: Single-parameter model
As we are not interested in the offset parameter, we might be tempted to fix its value to the most-likely estimate and then infer our knowledge about the slope from a single-parameter model. You have probably realized by now that this is not the Bayesian way of doing the analysis, but since this is a rather common way of handling nuisance parameters, we will still try it.
```python
offset = theta_MLE[0]
```
Let's define the log-likelihood for the case that the offset is fixed. It will be a function of a single free parameter: the slope.
```python
# Log likelihood
def log_likelihood_single(slope, x, y, dy, offset=0.):
y_model = offset + slope * x
return -0.5 * np.sum(np.log(2 * np.pi * dy ** 2) +
(y - y_model) ** 2 / dy ** 2)
```
Next we will plot the log-likelihood (left panel) and the likelihood (right panel) pdfs as a function of the slope. We normalize the peak of the likelihood to one
```python
slope_range = np.linspace(60, 100, num=1000)
log_P1 = [log_likelihood_single(slope, x, y, dy,offset=offset) for slope in slope_range]
log_P1_1 = log_P1 - np.max(log_P1)
fig,ax = plt.subplots(1, 2, figsize=(12,6),sharex=True)
ax[0].plot(slope_range,log_P1_1,'-k');
ax[1].plot(slope_range,np.exp(log_P1_1),'-k');
```
```python
def contour_levels(grid,sigma):
_sorted = np.sort(grid.ravel())[::-1]
pct = np.cumsum(_sorted) / np.sum(_sorted)
cutoffs = np.searchsorted(pct, np.array(sigma) )
return _sorted[cutoffs]
P1 = np.exp(log_P1 - np.max(log_P1))
sigma_contours = contour_levels(P1,0.68)
# Find the max likelihood and the 68% contours
slope_max = slope_range[P1==1.][0]
err_slope_min = np.min(slope_range[P1>sigma_contours])
err_slope_max = np.max(slope_range[P1>sigma_contours])
# The error will be symmetric around the max
err_slope = (err_slope_max - err_slope_min) / 2
fig, ax = plt.subplots(1, 1, figsize=(12, 8))
ax.plot(slope_range,P1,'-k')
ax.vlines([err_slope_min,slope_max,err_slope_max],0,[sigma_contours,1.,sigma_contours])
ax.fill_between(slope_range, P1, where=P1>=sigma_contours, interpolate=True, alpha=0.2);
print('Single parameter estimate')
print(f'... slope = {slope_max:>5.1f} +/- {err_slope:>5.1f}')
```
## Step 3: Full Bayesian approach
You might not be surprised to learn that we underestimate the uncertainty of the slope since we are making the assumption that we know the value of the offset (by fixing it to a specific estimate).
We will now repeat the data analysis, but with the full model and with marginalization on the posterior.
Let's use the symmetric (scale-invariant) prior for the slope and a normal prior (mean 0, standard deviation 200 km/sec) for the intercept.
```python
def log_prior(theta):
    # symmetric (scale-invariant) prior for the slope, and a normal pdf (mean=0, standard deviation=dy=200) for the intercept
return - 0.5* theta[0]**2 / dy**2 - 1.5 * np.log(1 + theta[1] ** 2)
```
With these defined, we now have what we need to compute the log posterior as a function of the model parameters.
```python
def log_posterior(theta, x, y, dy):
return log_prior(theta) + log_likelihood(theta, x, y, dy)
```
Next we will plot the posterior probability as a function of the slope and intercept.
We will illustrate the use of MCMC sampling for obtaining the posterior pdf, which also offers a very convenient way of performing the marginalization
```python
import emcee
print('emcee sampling (version: )', emcee.__version__)
ndim = 2 # number of parameters in the model
nwalkers = 50 # number of MCMC walkers
nsteps = 2000 # steps per walker
print(f'{nwalkers} walkers: {nsteps} samples each')
# initialize walkers
starting_guesses = np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=[x, y, dy])
%time sampler.run_mcmc(starting_guesses, nsteps)
print("done")
```
emcee sampling (version: ) 2.2.1
50 walkers: 2000 samples each
CPU times: user 1.91 s, sys: 9.9 ms, total: 1.92 s
Wall time: 1.91 s
done
```python
# sampler.chain is of shape (nwalkers, nsteps, ndim)
# Let us reshape and all walker chains together
# Then make a scatter plot
emcee_trace = sampler.chain[:, :, :].reshape(-1, ndim).T
fig = plt.figure(figsize=(8,8))
ax = fig.add_subplot(1,1,1)
ax.plot(emcee_trace[0], emcee_trace[1], ',k', alpha=0.1);
```
Our choice of starting points was not optimal, so it takes some time for the MCMC chains to converge. Let us study the traces.
```python
fig, ax = plt.subplots(ndim, sharex=True,figsize=(10,6))
for i in range(ndim):
ax[i].plot(sampler.chain[:, :, i].T, '-k', alpha=0.2);
```
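As an additional quick health check (a small addition; `acceptance_fraction` is a standard attribute of the emcee sampler), we can inspect the walkers' acceptance fractions, which as a rule of thumb should lie roughly between 0.2 and 0.5:
```python
# Mean acceptance fraction across walkers (rule of thumb: roughly 0.2-0.5 is healthy)
acc = sampler.acceptance_fraction
print(f'Mean acceptance fraction: {np.mean(acc):.3f} '
      f'(min {np.min(acc):.3f}, max {np.max(acc):.3f})')
```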
```python
# We choose a warm-up time
nwarmup = 200 # warm up
# sampler.chain is of shape (nwalkers, nsteps, ndim)
# we'll throw-out the warmup points and reshape:
emcee_trace = sampler.chain[:, nwarmup:, :].reshape(-1, ndim).T
emcee_lnprob = sampler.lnprobability[:, nwarmup:].reshape(-1).T
```
Let us create some convenience tools for plotting, including machinery to extract 1-, 2-, and 3-sigma contour levels.
We will later use the 'corner' package to achieve such visualization.
```python
def compute_sigma_level(trace1, trace2, nbins=20):
"""From a set of traces, bin by number of standard deviations"""
L, xbins, ybins = np.histogram2d(trace1, trace2, nbins)
L[L == 0] = 1E-16
logL = np.log(L)
shape = L.shape
L = L.ravel()
# obtain the indices to sort and unsort the flattened array
i_sort = np.argsort(L)[::-1]
i_unsort = np.argsort(i_sort)
L_cumsum = L[i_sort].cumsum()
L_cumsum /= L_cumsum[-1]
xbins = 0.5 * (xbins[1:] + xbins[:-1])
ybins = 0.5 * (ybins[1:] + ybins[:-1])
return xbins, ybins, L_cumsum[i_unsort].reshape(shape)
def plot_MCMC_trace(ax, xdata, ydata, trace, scatter=False, **kwargs):
"""Plot traces and contours"""
xbins, ybins, sigma = compute_sigma_level(trace[0], trace[1])
ax.contour(xbins, ybins, sigma.T, levels=[0.683, 0.955, 0.997], **kwargs)
if scatter:
ax.plot(trace[0], trace[1], ',k', alpha=0.1)
ax.set_xlabel(r'$\theta_0$')
ax.set_ylabel(r'$\theta_1$')
# Convenience function to extract the peak position of the mode
def max_of_mode(sampler_object):
max_arg = np.argmax(sampler.flatlnprobability)
return(sampler.flatchain[max_arg])
```
```python
fig = plt.figure(figsize=(8,8))
ax = fig.add_subplot(1,1,1)
plot_MCMC_trace(ax, x, y, emcee_trace, scatter=True,colors='k');
max_mode_theta=max_of_mode(sampler)
with np.printoptions(precision=3):
print(f'Max posterior is at: {max_mode_theta}')
```
#### Marginalization
Next we will perform the marginalization over the offset parameter using the samples from the posterior pdf.
Furthermore, the extraction of a 68% credible region (not to be confused with the frequentist _confidence interval_) is made simple since the posterior is well described by a single mode.
```python
# Sort the samples according to the log-probability.
# Note that we want them sorted by increasing -log(p), i.e. by decreasing probability
sorted_lnprob = -np.sort(-emcee_lnprob)
# In this sorted list we then keep 1-sigma volume of the samples
# (note 1-sigma = 1-exp(-0.5) ~ 0.393 for 2-dim pdf). See
# https://corner.readthedocs.io/en/latest/pages/sigmas.html
# We then identify what log-prob this corresponds to
log_prob_max = sorted_lnprob[0]
level_1sigma = 1-np.exp(-0.5)
log_prob_cutoff = sorted_lnprob[int(level_1sigma*nwalkers*(nsteps-nwarmup))]
# From the list of samples that have log-prob larger than this cutoff,
# we then find the smallest and largest value for the slope parameter.
# Here we simply ignore the values for the offset parameter (this is marginalization when having MCMC samples).
slope_samples = emcee_trace[1,:]
# Mode
bayesian_slope_maxprob = slope_samples[emcee_lnprob==log_prob_max][0]
# Median (50th percentile), used as the central estimate
bayesian_slope_mean = np.sort(slope_samples)[int(0.5*nwalkers*(nsteps-nwarmup))]
# 68% CR
bayesian_CR_slope_min = np.min(slope_samples[emcee_lnprob>log_prob_cutoff])
bayesian_CR_slope_max = np.max(slope_samples[emcee_lnprob>log_prob_cutoff])
```
```python
print('Bayesian slope parameter estimate')
print(f'... slope = {bayesian_slope_mean:>6.2f} ',\
f'(-{bayesian_slope_mean-bayesian_CR_slope_min:>4.2f},',\
f'+{bayesian_CR_slope_max-bayesian_slope_mean:>4.2f})')
```
Bayesian slope parameter estimate
... slope = 78.23 (-6.27, +6.82)
```python
# Alternatively we can use corner
import corner
fig, ax = plt.subplots(2,2, figsize=(10,10))
corner.corner(emcee_trace.T,labels=[r"$\theta_0$", r"$\theta_1$"],
quantiles=[0.16, 0.5, 0.84],fig=fig,show_titles=True
);
```
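For comparison (an added sanity check), the marginal 68% interval for the slope can also be read off directly from the samples with percentiles, which is what `corner` reports as quantiles:
```python
# Percentile-based 68% interval for the slope from the marginalized samples
q16, q50, q84 = np.percentile(slope_samples, [16, 50, 84])
print(f'slope = {q50:>6.2f} (-{q50 - q16:>4.2f}, +{q84 - q50:>4.2f})')
```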
We can use the parameter samples to create corresponding samples of our model predictions. Finally, we plot the mean and 1-sigma band of these samples.
```python
def plot_MCMC_model(ax, xdata, ydata, trace, yerr=0):
"""Plot the linear model and 2sigma contours"""
ax.errorbar(xdata, ydata, yerr, fmt='o')
alpha, beta = trace[:2]
xfit = np.linspace(0, 50, 5)
yfit = alpha[:, None] + beta[:, None] * xfit
mu = yfit.mean(0)
sig = yfit.std(0)
ax.plot(xfit, mu, '-k')
ax.fill_between(xfit, mu - sig, mu + sig, color='lightgray')
ax.set_xlabel('x')
ax.set_ylabel('y')
```
```python
fig = plt.figure(figsize=(8,6))
ax = fig.add_subplot(1,1,1)
plot_MCMC_model(ax,x,y,emcee_trace,dy)
```
Caution:
- Might we have underestimated the error bars on the experimental data?
- And/or is some of the data "bad" (are there outliers)?
More on this later.
### Summary
```python
print('{0:>25s}: {1:>3.1f} +/- {2:>3.1f}'.format('Max Likelihood (1-sigma)', theta_MLE[1],err_theta_MLE[1]))
print('{0:>25s}: {1:>3.1f} +/- {2:>3.1f}'.format('Fixed offset (1-sigma)',slope_max, err_slope))
print('{0:>25s}: {1:>3.1f} (+{2:>3.1f};-{3:>3.1f})'.format('Full Bayesian (68% CR)',bayesian_slope_mean, bayesian_CR_slope_max-bayesian_slope_mean,bayesian_slope_mean-bayesian_CR_slope_min))
```
Max Likelihood (1-sigma): 80.5 +/- 7.4
Fixed offset (1-sigma): 80.5 +/- 3.8
Full Bayesian (68% CR): 78.2 (+6.8;-6.3)
### Breakout session
In the Bayesian analysis we had to specify our prior assumption on the value for the offset.
* *Is this a feature or a deficiency of the Bayesian approach? Discuss!*
What happens if we modify this prior assumption?
* *Redo the analysis with a very broad, uniform prior on the offset. How is the inference affected?*
* *Redo the analysis with a very narrow, normal prior on the offset. How is the inference affected?*
<a id='error_propagation'></a>
## Step 4: Error propagation
The Bayesian approach offers a straight-forward approach for dealing with (known) systematic uncertainties; namely by marginalization.
### Systematic error example
The Hubble constant acts as a galactic ruler as it is used to measure astronomical distances according to $v = H_0 x$. An error in this ruler will therefore correspond to a systematic uncertainty in such measurements.
Suppose that a particular galaxy has a measured recessional velocity $v_\mathrm{measured} = (100 \pm 5) \times 10^3$ km/sec. Also assume that the Hubble constant $H_0$ is known from the analysis performed above in Step 3. Determine the posterior pdf for the distance to the galaxy assuming:
1. A fixed value of $H_0$ corresponding to the mean of the previous analysis.
1. Using the sampled posterior pdf for $H_0$ from the above analysis.
```python
vm=100000
sig_vm=5000
```
We assume that we can write
$$
v_\mathrm{measured} = v_\mathrm{theory} + \delta v_\mathrm{exp},
$$
where $v_\mathrm{theory}$ is the recessional velocity according to our model, and $\delta v_\mathrm{exp}$ represents the noise component of the measurement. We know that $\delta v_\mathrm{exp}$ can be described by a Gaussian pdf with mean 0 and standard deviation $\sigma_v = 5 \times 10^3$ km/sec. Note that we have also assumed that our model is perfect, i.e. $\delta v_\mathrm{theory}$ is negligible.
In the following, we also assume that the error in the measurement in $v$ is uncorrelated with the uncertainty in $H_0$.
Through application of Bayes' rule we can readily evaluate the posterior pdf $p(x|D,I)$ for the distance $x$ to the galaxy.
#### Case 1: Fixed $H_0$
\begin{align}
p(x | D,I) & \propto p(D | x, I) p(x|I) \\
& = \frac{1}{\sqrt{2\pi}\sigma_v} \exp \left( - \frac{(v_\mathrm{measured} - v_\mathrm{theory})^2}{2\sigma_v^2} \right) p(x|I)\\
&= \left\{ \begin{array}{ll} \frac{1}{\sqrt{2\pi}\sigma_v} \exp \left( - \frac{(v_\mathrm{measured} - H_0 x)^2}{2\sigma_v^2} \right) & \text{with }x \in [x_\mathrm{min},x_\mathrm{max}] \\
0 & \text{otherwise},
\end{array} \right.
\end{align}
where $p(x|I)$ is the prior for the distance, which we have assumed to be uniform, i.e. $p(x|I) \propto 1$ in some (possibly large) region $[x_\mathrm{min},x_\mathrm{max}]$.
```python
def x_with_fixedH(x,H0,vmeasured=vm,vsigma=sig_vm,xmin=0,xmax=10000):
# Not including the prior
x_posterior = np.exp(-(vmeasured-H0*x)**2/(2*vsigma**2))
return x_posterior
```
#### Case 2: Using the inferred pdf for $H_0$
Here we use marginalization to obtain the desired posterior pdf $p(x|D,I)$ from the joint distribution of $p(x,H_0|D,I)$
$$
p(x|D,I) = \int_{-\infty}^\infty dH_0 p(x,H_0|D,I).
$$
Using Bayes' rule, the product rule, and the fact that $H_0$ is independent of $x$ we find that
$$
p(x|D,I) \propto p(x|I) \int dH_0 p(H_0|I) p(D|x,H_0,I),
$$
which means that we have expressed the quantity that we want (the posterior for $x$) in terms of quantities that we know.
The pdf $p(H_0 | I)$ is known via its $N$ samples $\{H_{i}\}$ generated by the MCMC sampler.
This means that we can approximate
$$
p(x |D,I) \propto \int dH_0 p(H_0|I) p(D|x,H_0,I) \approx \frac{1}{N} \sum_{i=1}^N p(D | x, H_0^{(i)}, I)
$$
where we have used $p(x|I) \propto 1$ and where $H_0^{(i)}$ is drawn from $p(H_0|I)$.
```python
x_arr = np.linspace(800,2000,1200)
xposterior_fixedH = x_with_fixedH(x_arr,bayesian_slope_mean)
xposterior_fixedH /= np.sum(xposterior_fixedH)
xposterior_pdfH = np.zeros_like(x_arr)
for H0 in slope_samples:
xposterior_pdfH += x_with_fixedH(x_arr,H0)
xposterior_pdfH /= np.sum(xposterior_pdfH)
```
```python
fig, ax = plt.subplots(1,1, figsize=(8,6))
ax.plot(x_arr,xposterior_fixedH);
ax.plot(x_arr,xposterior_pdfH,'--');
print("The mean and 68% DoB of the inferred distance is")
for ix, xposterior in enumerate([xposterior_fixedH,xposterior_pdfH]):
# Mean
x_mean = np.min(x_arr[np.cumsum(xposterior)>0.5])
# 68% DoB
x_min = np.min(x_arr[np.cumsum(xposterior)>0.16])
x_max = np.min(x_arr[np.cumsum(xposterior)>0.84])
print(f"... Case {ix+1}: x_mean = {x_mean:.0f}; 68% DoB [-{x_mean-x_min:.0f},+{x_max-x_mean:.0f}]")
```
```python
```
# Neural Contextual Bandits with UCB-based Exploration
Notation:
| Notation | Description |
| :----------------------- | :----------------------------------------------------------- |
| $K$ | number of arms |
| $T$ | number of total rounds |
| $t$ | index of round |
| $x_{t,a}$                | $x_{t,a}\in\mathbb{R}^d$, $a\in [K]$; the context in round $t$ consists of $K$ feature vectors $\{x_{t,a}\in\mathbb{R}^d \mid a\in[K]\}$ |
| $a_t$                    | after observing the context, the agent selects an action $a_t$ in round $t$ |
| $r_{t,a_t}$              | the reward observed after the agent selects action $a_t$ |
| $h$ | we assume that $r_{t,a_t}=h(x_{t,a_t})+\xi_t$, h is an unknown function satisfying $0\le h(x)\le 1$ for any x |
| $\xi_t$ | $\xi_t$ is v-sub-Gaussian noise conditioned on $x_{1,a_1},\cdots,x_{t-1,a_{t-1}}$, satisfying $\mathbb{E}\xi_t=0$ |
| $L$                      | the depth of the neural network |
| $m$                      | the number of neurons in each hidden layer of the network |
| $\sigma(x)$ | we define $\sigma(x)=\max\{x,0\}$ |
| $W_1,\cdots,W_{L-1},W_L$ | the weights of the neural network: $W_1\in\mathbb{R}^{m\times d}$, $W_i\in\mathbb{R}^{m\times m}$ for $2\le i\le L-1$, $W_L\in\mathbb{R}^{1\times m}$ |
| $\theta$                 | $\theta=[vec(W_1)^T,\cdots,vec(W_L)^T]\in\mathbb{R}^p$, $p=m+md+m^2(L-2)$ (consistent with the implementation below) |
| $f(x;\theta)$            | we define $f(x;\theta)=\sqrt{m}\,W_L\sigma(W_{L-1}\sigma(\cdots\sigma(W_1x)))$ |
Initialization of parameters:
UCB algorithm:
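The algorithm figure is not reproduced here. In brief, and consistent with the `NeuralAgent` implementation below: starting from $Z_0 = \lambda I$, in round $t$ the agent computes an upper confidence bound for every arm $a$,
$$
U_{t,a} = f(x_{t,a};\theta_{t-1}) + \gamma_t\sqrt{\frac{\nabla_\theta f(x_{t,a};\theta_{t-1})^T\, Z_{t-1}^{-1}\, \nabla_\theta f(x_{t,a};\theta_{t-1})}{m}},
$$
plays $a_t = \arg\max_a U_{t,a}$ and observes $r_{t,a_t}$, updates
$$
Z_t = Z_{t-1} + \frac{\nabla_\theta f(x_{t,a_t};\theta_{t-1})\,\nabla_\theta f(x_{t,a_t};\theta_{t-1})^T}{m},
$$
and obtains $\theta_t$ by running $J$ steps of gradient descent with step size $\eta$ on $\mathcal{L}(\theta)$.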
we set $\mathcal{L}(\theta) = \sum_{i=1}^t\frac{(f(x_{i,a_i};\theta)-r_{i,a_i})^2}{2}+\frac{m\lambda||\theta-\theta^{(0)}||^2_2}{2}$
Then the gradient would be
$$
\nabla\mathcal{L}(\theta) = \sum_{i=1}^t(f(x_{i,a_i};\theta)-r_{i,a_i})\nabla f(x_{i,a_i};\theta) + m\lambda(\theta-\theta^{(0)})
$$
Forward Pass of the Neural Network
$$
\begin{align}
X_0 &= X\\
X_1 &=\sigma(W_1X_0)\\
X_2 &=\sigma(W_2X_1)\\
\cdots\\
X_{L-1}&=\sigma(W_{L-1}X_{L-2})\\
X_{L} &=W_L X_{L-1}
\end{align}
$$
$f(X) = X_L$
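The forward pass used below is imported from `NeuralNetworkRelatedFunction` and is not shown in this notebook. A minimal sketch consistent with how it is called later (returning a dict of layer activations keyed `'x0'` ... `'xL'`; the $\sqrt{m}$ scaling from the definition of $f$ could be applied on top) might look like this:
```python
import numpy as np

def neural_network_sketch(x, params, L, m):
    """Sketch of the forward pass: X_l = relu(W_l X_{l-1}) for l < L, X_L = W_L X_{L-1}."""
    layers = {'x0': x}
    for l in range(1, L + 1):
        z = params['w' + str(l)].dot(layers['x' + str(l - 1)])
        # ReLU on the hidden layers, linear output layer
        layers['x' + str(l)] = z if l == L else np.maximum(z, 0.0)
    return layers
```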
Backward Propagation
$$
\begin{align}
\nabla_{X_L}f &= 1\\
\nabla_{W_L}f &= X_{L-1}\\
\nabla_{X_{L-1}}f &= W_{L}\\
\\
\nabla_{W_{L-1}}f &= \nabla_{W_{L-1}}f(X_{L-1}(W_{L-1}, X_{L-2}), W_L)=\nabla_{X_{L-1}}f \cdot \nabla_{W_{L-1}}X_{L-1}(W_{L-1}, X_{L-2})\\
\nabla_{X_{L-2}}f &= \nabla_{X_{L-2}}f(X_{L-1}(W_{L-1}, X_{L-2}), W_L)=\nabla_{X_{L-1}}f \cdot \nabla_{X_{L-2}}X_{L-1}(W_{L-1}, X_{L-2})\\
\\
\nabla_{W_{L-2}}f &= \nabla_{W_{L-2}}f(X_{L-2}(W_{L-2}, X_{L-3}), W_L,W_{L-1})=\nabla_{X_{L-2}}f \cdot \nabla_{W_{L-2}}X_{L-2}(W_{L-2}, X_{L-3})\\
\nabla_{X_{L-3}}f &= \nabla_{X_{L-3}}f(X_{L-2}(W_{L-2}, X_{L-3}), W_L,W_{L-1})=\nabla_{X_{L-2}}f \cdot \nabla_{X_{L-3}}X_{L-2}(W_{L-2}, X_{L-3})\\
\cdots\\
\nabla_{W_{l}}f &= \nabla_{W_{l}}f(X_{l}(W_{l}, X_{l-1}), W_L,W_{L-1},\cdots,W_{l+1})=\nabla_{X_{l}}f \cdot \nabla_{W_{l}}X_{l}(W_{l}, X_{l-1})\\
\nabla_{X_{l-1}}f &= \nabla_{X_{l-1}}f(X_{l}(W_{l}, X_{l-1}), W_L,W_{L-1},\cdots,W_{l+1})=\nabla_{X_{l}}f \cdot \nabla_{X_{l-1}}X_{l}(W_{l}, X_{l-1})\\
\cdots\\
\nabla_{W_{1}}f &= \nabla_{W_{1}}f(X_{1}(W_{1}, X_{0}), W_L,W_{L-1},\cdots,W_{2})=\nabla_{X_{1}}f \cdot \nabla_{W_{1}}X_{1}(W_{1}, X_{0})\\
\nabla_{X_{0}}f &= \nabla_{X_{0}}f(X_{1}(W_{1}, X_{0}), W_L,W_{L-1},\cdots,W_{2})=\nabla_{X_{1}}f \cdot \nabla_{X_{0}}X_{1}(W_{1}, X_{0})\\
\end{align}
$$
$\nabla_{W_{l}}f$ is a matrix, to be specific, $\nabla_{W_{l}}f=\left[\begin{matrix}\frac{\partial f}{\partial w^{(l)}_{11}}&\cdots &\frac{\partial f}{\partial w^{(l)}_{1m}}\\ \vdots&& \vdots\\ \frac{\partial f}{\partial w^{(l)}_{m1}}&\cdots &\frac{\partial f}{\partial w^{(l)}_{mm}}\end{matrix}\right]$
$$
\begin{align}
\left[\begin{matrix}\frac{\partial f}{\partial w^{(l)}_{11}}&\cdots &\frac{\partial f}{\partial w^{(l)}_{1m}}\\ \vdots&& \vdots\\ \frac{\partial f}{\partial w^{(l)}_{m1}}&\cdots &\frac{\partial f}{\partial w^{(l)}_{mm}}\end{matrix}\right]=&
\left[\begin{matrix}\frac{\partial f}{\partial x^{(l)}_{1}} \frac{\partial x^{(l)}_{1}}{\partial w^{(l)}_{11}}&\cdots &\frac{\partial f}{\partial x^{(l)}_{1}}\frac{\partial x^{(l)}_{1}}{\partial w^{(l)}_{1m}}\\ \vdots&& \vdots\\ \frac{\partial f}{\partial x^{(l)}_{m}}\frac{\partial x^{(l)}_{m}}{\partial w^{(l)}_{m1}}&\cdots &\frac{\partial f}{\partial x^{(l)}_{m}}\frac{\partial x^{(l)}_{m}}{\partial w^{(l)}_{mm}}\end{matrix}\right]\\
=&
\left[\begin{matrix}\frac{\partial f}{\partial x^{(l)}_{1}} \mathbb{1}_{\sigma(w^{(l)}_{11}x^{(l-1)}_1+w^{(l)}_{12}x^{(l-1)}_2+\cdots+w^{(l)}_{1m}x^{(l-1)}_m)>0} x^{(l-1)}_{1}&\cdots &\frac{\partial f}{\partial x^{(l)}_{1}}\mathbb{1}_{\sigma(w^{(l)}_{11}x^{(l-1)}_1+w^{(l)}_{12}x^{(l-1)}_2+\cdots+w^{(l)}_{1m}x^{(l-1)}_m)>0} x^{(l-1)}_{m}\\ \vdots&& \vdots\\ \frac{\partial f}{\partial x^{(l)}_{m}}\mathbb{1}_{\sigma(w^{(l)}_{m1}x^{(l-1)}_1+w^{(l)}_{m2}x^{(l-1)}_2+\cdots+w^{(l)}_{mm}x^{(l-1)}_m)>0} x^{(l-1)}_{1}&\cdots &\frac{\partial f}{\partial x^{(l)}_{m}}\mathbb{1}_{\sigma(w^{(l)}_{m1}x^{(l-1)}_1+w^{(l)}_{m2}x^{(l-1)}_2+\cdots+w^{(l)}_{mm}x^{(l-1)}_m)>0} x^{(l-1)}_{m}\end{matrix}\right]\\
\end{align}
$$
$\nabla_{X_{l-1}}f$ is a vector, to be specific, $\nabla_{X_{l-1}}f=\left[\begin{matrix}\frac{\partial f}{\partial x^{(l-1)}_{1}}\\ \vdots\\ \frac{\partial f}{\partial x^{(l-1)}_{m}}\end{matrix}\right]$
$$
\begin{align}
\left[\begin{matrix}\frac{\partial f}{\partial x^{(l-1)}_{1}}\\ \vdots\\ \frac{\partial f}{\partial x^{(l-1)}_{m}}\end{matrix}\right]=&
\left[\begin{matrix}\frac{\partial f}{\partial x^{(l)}_{1}}\frac{\partial x^{(l)}_1}{\partial x^{(l-1)}_1}+\frac{\partial f}{\partial x^{(l)}_{2}}\frac{\partial x^{(l)}_{2}}{\partial x^{(l-1)}_{1}}+\cdots+\frac{\partial f}{\partial x^{(l)}_{m}}\frac{\partial x^{(l)}_{m}}{\partial x^{(l-1)}_{1}}\\ \vdots\\ \frac{\partial f}{\partial x^{(l)}_{1}}\frac{\partial x^{(l)}_{1}}{\partial x^{(l-1)}_{m}}+\frac{\partial f}{\partial x^{(l)}_{2}}\frac{\partial x^{(l)}_{2}}{\partial x^{(l-1)}_{m}}+\cdots+\frac{\partial f}{\partial x^{(l)}_{m}}\frac{\partial x^{(l)}_{m}}{\partial x^{(l-1)}_{m}}\end{matrix}\right]\\
=&
\left[\begin{matrix}\frac{\partial f}{\partial x^{(l)}_{1}}\mathbb{1}_{\sigma(w^{(l)}_{11}x^{(l-1)}_1+w^{(l)}_{12}x^{(l-1)}_2+\cdots+w^{(l)}_{1m}x^{(l-1)}_m)>0}w^{(l)}_{11}+\frac{\partial f}{\partial x^{(l)}_{2}}\mathbb{1}_{\sigma(w^{(l)}_{21}x^{(l-1)}_1+w^{(l)}_{22}x^{(l-1)}_2+\cdots+w^{(l)}_{2m}x^{(l-1)}_m)>0}w^{(l)}_{21}+\cdots+\frac{\partial f}{\partial x^{(l)}_{m}}\mathbb{1}_{\sigma(w^{(l)}_{m1}x^{(l-1)}_1+w^{(l)}_{m2}x^{(l-1)}_2+\cdots+w^{(l)}_{mm}x^{(l-1)}_m)>0}w^{(l)}_{m1}\\
\vdots\\
\frac{\partial f}{\partial x^{(l)}_{1}}\mathbb{1}_{\sigma(w^{(l)}_{11}x^{(l-1)}_1+w^{(l)}_{12}x^{(l-1)}_2+\cdots+w^{(l)}_{1m}x^{(l-1)}_m)>0}w^{(l)}_{1m}+\frac{\partial f}{\partial x^{(l)}_{2}}\mathbb{1}_{\sigma(w^{(l)}_{21}x^{(l-1)}_1+w^{(l)}_{22}x^{(l-1)}_2+\cdots+w^{(l)}_{2m}x^{(l-1)}_m)>0}w^{(l)}_{2m}+\cdots+\frac{\partial f}{\partial x^{(l)}_{m}}\mathbb{1}_{\sigma(w^{(l)}_{m1}x^{(l-1)}_1+w^{(l)}_{m2}x^{(l-1)}_2+\cdots+w^{(l)}_{mm}x^{(l-1)}_m)>0}w^{(l)}_{mm}\end{matrix}\right ]\\
=&\left[\begin{matrix}w^{(l)}_{11}&\cdots& w^{(l)}_{1m}\\ \vdots&&\vdots\\ w^{(l)}_{m1}&\cdots& w^{(l)}_{mm}\end{matrix}\right]^T
\left[\begin{matrix}\frac{\partial f}{\partial x^{(l)}_{1}}\mathbb{1}_{\sigma(w^{(l)}_{11}x^{(l-1)}_1+w^{(l)}_{12}x^{(l-1)}_2+\cdots+w^{(l)}_{1m}x^{(l-1)}_m)>0}\\ \vdots\\ \frac{\partial f}{\partial x^{(l)}_{m}}\mathbb{1}_{\sigma(w^{(l)}_{m1}x^{(l-1)}_1+w^{(l)}_{m2}x^{(l-1)}_2+\cdots+w^{(l)}_{mm}x^{(l-1)}_m)>0}\end{matrix}\right]\\
\end{align}
$$
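Again, the actual implementation is imported from `NeuralNetworkRelatedFunction`. A minimal sketch of the recursion derived above (gradients of $f$ with respect to each $W_l$, ignoring the $\sqrt{m}$ factor, and reusing the forward-pass sketch) could be:
```python
def gradient_sketch(x, params, L, m):
    """Sketch of backpropagation for f(x) = W_L X_{L-1}; returns {'w1': dF/dW_1, ..., 'wL': dF/dW_L}."""
    layers = neural_network_sketch(x, params, L, m)
    grads = {'w' + str(L): layers['x' + str(L - 1)].reshape(1, -1)}   # dF/dW_L = X_{L-1}^T
    g = params['w' + str(L)].reshape(-1)                              # dF/dX_{L-1} = W_L^T
    for l in range(L - 1, 0, -1):
        gz = g * (layers['x' + str(l)] > 0)                           # ReLU mask: dF/dZ_l
        grads['w' + str(l)] = np.outer(gz, layers['x' + str(l - 1)])  # dF/dW_l = dF/dZ_l * X_{l-1}^T
        g = params['w' + str(l)].T.dot(gz)                            # dF/dX_{l-1} = W_l^T dF/dZ_l
    return grads
```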
We assume that the reward follows $r = x^T A^T A\, x + \xi$, where $x$ is the context of the chosen arm, $\xi \sim N(0, 1)$ is standard normal noise, and $A$ is a $d\times d$ matrix whose entries are drawn once from $N(0, 1)$.
We assume the context is independent of the action and of the round index: given action $a$ and round index $t$, the context is randomly sampled from the unit ball in dimension $d$.
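The helper functions `SampleContext` and `GetRealReward` are imported from `GameSetting` below and their implementations are not shown here. A minimal sketch consistent with the description above (with hypothetical names, to avoid clashing with the imported ones) might be:
```python
import numpy as np

def sample_context_sketch(d, K):
    """Sample K context vectors uniformly from the d-dimensional unit ball (columns of a d x K matrix)."""
    directions = np.random.normal(size=(d, K))
    directions /= np.linalg.norm(directions, axis=0)   # uniform directions on the unit sphere
    radii = np.random.uniform(size=K) ** (1.0 / d)     # radii that give a uniform density inside the ball
    return directions * radii

def get_real_reward_sketch(context, A):
    """Reward r = x^T A^T A x + xi with xi ~ N(0, 1)."""
    return context.dot(A.T).dot(A).dot(context) + np.random.normal()
```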
```python
%reset -f
import numpy as np
import random
from copy import deepcopy
from GameSetting import *
from NeuralNetworkRelatedFunction import *
```
```python
class BestAgent:
def __init__(self, K, T, d):
# K is Total number of actions,
# T is Total number of periods
# d is the dimension of context
self.K = K
self.T = T
self.d = d
self.t = 0 # marks the index of period
self.history_reward = np.zeros(T)
self.history_action = np.zeros(T)
self.history_context = np.zeros((d, T))
def Action(self, context_list):
# context_list is a d*K matrix, each column represent a context
# the return value is the action we choose, represent the index of action, is a scalar
expected_reward = np.zeros(K)
for kk in range(0, K):
context = context_list[:, kk]
expected_reward[kk] = context.transpose().dot(A.transpose().dot(A)).dot(context)
ind = np.argmax(expected_reward, axis=None)
self.history_context[:, self.t] = context_list[:, ind]
self.history_action[self.t] = ind
return ind
def Update(self, reward):
# reward is the realized reward after we adopt policy, a scalar
self.history_reward[self.t] = reward
self.t = self.t + 1
def GetHistoryReward(self):
return self.history_reward
def GetHistoryAction(self):
return self.history_action
def GetHistoryContext(self):
return self.history_context
```
```python
class UniformAgent:
def __init__(self, K, T, d):
# K is Total number of actions,
# T is Total number of periods
# d is the dimension of context
self.K = K
self.T = T
self.d = d
self.t = 0 # marks the index of period
self.history_reward = np.zeros(T)
self.history_action = np.zeros(T)
self.history_context = np.zeros((d, T))
def Action(self, context_list):
# context_list is a d*K matrix, each column represent a context
# the return value is the action we choose, represent the index of action, is a scalar
ind = np.random.randint(0, high = K) # we just uniformly choose an action
        self.history_context[:, self.t] = context_list[:, ind]
        self.history_action[self.t] = ind
return ind
def Update(self, reward):
# reward is the realized reward after we adopt policy, a scalar
self.history_reward[self.t] = reward
self.t = self.t + 1
def GetHistoryReward(self):
return self.history_reward
def GetHistoryAction(self):
return self.history_action
def GetHistoryContext(self):
return self.history_context
```
```python
class NeuralAgent:
def __init__(self, K, T, d, L = 2, m = 20, gamma_t = 0.01, v = 0.1, lambda_ = 0.01, delta = 0.01, S = 0.01, eta = 0.001, frequency = 50, batchsize = None):
# K is Total number of actions,
# T is Total number of periods
# d is the dimension of context
self.K = K
self.T = T
self.d = d
self.L = L
self.m = m
self.gamma_t = gamma_t
self.v = v
self.lambda_ = lambda_
self.delta = delta
self.S = S
self.eta = eta
self.frequency = frequency # we train the network after frequency, e.g. per 50 round
self.batchsize = batchsize
self.t = 0 # marks the index of period
self.history_reward = np.zeros(T)
self.history_action = np.zeros(T)
self.predicted_reward = np.zeros(T)
self.predicted_reward_upperbound = np.zeros(T)
self.history_context = np.zeros((d, T))
# initialize the value of parameter
np.random.seed(12345)
self.theta_0 = {}
W = np.random.normal(loc = 0, scale = 4 / m, size=(int(m/2), int(m/2)))
w = np.random.normal(loc = 0, scale = 2 / m, size=(1, int(m/2)))
for key in range(1, L + 1):
if key == 1:
                # the paper does not explicitly present the initialization of W1;
                # in its setting d = m and theta_0["w1"] = [[W, 0], [0, W]],
                # but in general d need not equal m, so we build the block matrix from an (m/2, d/2) block
tempW = np.random.normal(loc = 0, scale = 4 / m, size=(int(m/2), int(d/2)))
self.theta_0["w1"] = np.zeros((m, d))
self.theta_0["w1"][0:int(m/2), 0:int(d/2)] = tempW
self.theta_0["w1"][int(m/2):, int(d/2):] = tempW
elif 2 <= key and key <= L - 1:
self.theta_0["w" + str(key)] = np.zeros((m, m))
self.theta_0["w" + str(key)][0:int(m/2), 0:int(m/2)] = W
self.theta_0["w" + str(key)][int(m/2):, int(m/2):] = W
else:
self.theta_0["w" + str(key)] = np.concatenate([w, -w], axis = 1)
self.p = m + m * d + m * m * (L - 2)
self.params = deepcopy(self.theta_0)
self.Z_t_minus1 = lambda_ * np.eye(self.p)
self.params_history = {}
self.grad_history = {}
def Action(self, context_list):
# context_list is a d*K matrix, each column represent a context
# the return value is the action we choose, represent the index of action, is a scalar
U_t_a = np.zeros(K) # the upper bound of K actions
predict_reward = np.zeros(K)
for a in range(1, K + 1):
predict_reward[a - 1] = NeuralNetwork(context_list[:, a - 1], self.params, self.L, self.m)['x' + str(self.L)][0]
grad_parameter = GradientNeuralNetwork(context_list[:, a - 1], self.params, self.L, self.m)
grad_parameter = FlattenDict(grad_parameter, self.L)
Z_t_minus1_inverse = np.linalg.inv(self.Z_t_minus1)
U_t_a[a - 1] = predict_reward[a - 1] +\
self.gamma_t * np.sqrt(grad_parameter.dot(Z_t_minus1_inverse).dot(grad_parameter)/self.m)
ind = np.argmax(U_t_a, axis=None)
self.predicted_reward[self.t] = predict_reward[ind]
self.predicted_reward_upperbound[self.t] = U_t_a[ind]
self.history_action[self.t] = ind
self.history_context[:, self.t] = context_list[:, ind]
return ind
def Update(self, reward):
# reward is the realized reward after we adopt policy, a scalar
# print("round {:d}".format(self.t))
self.history_reward[self.t] = reward
ind = self.history_action[self.t]
context = self.history_context[:, self.t]
# compute Z_t_minus1
grad_parameter = GradientNeuralNetwork(context, self.params, self.L, self.m)
grad_parameter = FlattenDict(grad_parameter, self.L)
grad_parameter = np.expand_dims(grad_parameter, axis = 1)
self.Z_t_minus1 = self.Z_t_minus1 + grad_parameter.dot(grad_parameter.transpose()) / self.m
# train neural network
if self.t % self.frequency == 0 and self.t > 0:
J = self.t
else:
J = 0
if self.batchsize == None:
trainindex = range(0, self.t + 1)
else:
if self.batchsize > self.t + 1:
trainindex = range(0, self.t + 1)
else:
trainindex = random.sample(range(0, self.t + 1), self.batchsize)
grad_loss = {}
for j in range(J):
grad_loss = GradientLossFunction(self.history_context[:, trainindex],# we had not update self.t yet, so here we must +1
self.params,
self.L,
self.m,
self.history_reward[trainindex],
self.theta_0,
self.lambda_)
# if j < 10:
# eta = 1e-4
# else:
# eta = self.eta
eta = self.eta
for key in self.params.keys():
self.params[key] = self.params[key] - eta * grad_loss[key]
loss = LossFunction(self.history_context[:, trainindex],
self.params,
self.L,
self.m,
self.history_reward[trainindex],
self.theta_0,
self.lambda_)
# print("j {:d}, loss {:4f}".format(j, loss))
print("round {:d}, predicted reward {:4f},predicted upper bound {:4f},actual reward {:4f}".format(self.t,
self.predicted_reward[self.t],
self.predicted_reward_upperbound[self.t],
reward))
self.params_history[self.t] = deepcopy(self.params)
self.grad_history[self.t] = deepcopy(grad_loss)
self.t = self.t + 1
def GetHistoryReward(self):
return self.history_reward
def GetHistoryAction(self):
return self.history_action
def GetHistoryContext(self):
return self.history_context
```
```python
# Set the parameter of the game
np.random.seed(12345)
K = 4  # Total number of actions
T = 5000 # Total number of periods
d = 20 # the dimension of context
A = np.random.normal(loc=0, scale=1, size=(d, d))
```
```python
# Neural UCB
np.random.seed(12345)
# Set the parameter of the network
# the setting is based on the description of section 7.1 of the paper
L = 2
m = 20
# we fix gamma in each round, according to the description of section 3.1
gamma_t = 0.01 #{0.01, 0.1, 1, 10}
v = 0.1 #{0.01, 0.1, 1}
lambda_ = 1 #{0.1, 1, 10}
delta = 0.01 #{0.01, 0.1, 1}
S = 0.01 #{0.01, 0.1, 1, 10}
eta = 1e-4 #{0.001, 0.01, 0.1}
frequency = 50
batchsize = 500
# we set J equal to round index t
neuralagent = NeuralAgent(K, T, d, L, m, gamma_t, v, lambda_, delta, S, eta, frequency, batchsize)
for tt in range(1, T + 1):
# observe \{x_{t,a}\}_{a=1}^{k=1}
context_list = SampleContext(d, K)
# compute the upper bound of reward
ind = neuralagent.Action(context_list)
# play ind and observe reward
reward = GetRealReward(context_list[:, ind], A)
neuralagent.Update(reward)
```
round 0, predicted reward 0.156496,predicted upper bound 0.163969,actual reward 26.866324
round 1, predicted reward 0.152098,predicted upper bound 0.157251,actual reward 21.779967
round 2, predicted reward 0.223601,predicted upper bound 0.229045,actual reward 22.502430
round 3, predicted reward 0.350837,predicted upper bound 0.356987,actual reward 23.820496
round 4, predicted reward 0.292728,predicted upper bound 0.296934,actual reward 23.628268
round 5, predicted reward 0.163629,predicted upper bound 0.168596,actual reward 24.395122
round 6, predicted reward 0.150042,predicted upper bound 0.153091,actual reward 26.479825
round 7, predicted reward 0.282388,predicted upper bound 0.287587,actual reward 16.411790
round 8, predicted reward 0.009460,predicted upper bound 0.014848,actual reward 26.086504
round 9, predicted reward 0.106620,predicted upper bound 0.111166,actual reward 22.741131
round 10, predicted reward 0.343633,predicted upper bound 0.348210,actual reward 18.129200
round 11, predicted reward 0.024985,predicted upper bound 0.028719,actual reward 12.385876
round 12, predicted reward 0.527528,predicted upper bound 0.533315,actual reward 23.321083
round 13, predicted reward 0.236167,predicted upper bound 0.241390,actual reward 38.542890
round 14, predicted reward 0.039366,predicted upper bound 0.044243,actual reward 16.437376
round 15, predicted reward 0.314568,predicted upper bound 0.319064,actual reward 28.158622
round 16, predicted reward 0.320190,predicted upper bound 0.325268,actual reward 29.140470
round 17, predicted reward 0.281421,predicted upper bound 0.286372,actual reward 17.353608
round 18, predicted reward -0.010189,predicted upper bound -0.005281,actual reward 18.826907
round 19, predicted reward 0.421241,predicted upper bound 0.425192,actual reward 11.449585
round 20, predicted reward 0.119419,predicted upper bound 0.122951,actual reward 38.120675
round 21, predicted reward 0.228744,predicted upper bound 0.233303,actual reward 28.433292
round 22, predicted reward 0.315052,predicted upper bound 0.320800,actual reward 19.248693
round 23, predicted reward 0.059136,predicted upper bound 0.063596,actual reward 18.198123
round 24, predicted reward 0.300010,predicted upper bound 0.305025,actual reward 23.300051
round 25, predicted reward 0.198885,predicted upper bound 0.203364,actual reward 14.475313
round 26, predicted reward 0.247887,predicted upper bound 0.252130,actual reward 17.704919
round 27, predicted reward 0.144905,predicted upper bound 0.148240,actual reward 16.759412
round 28, predicted reward 0.208747,predicted upper bound 0.212602,actual reward 20.488660
round 29, predicted reward 0.165630,predicted upper bound 0.169607,actual reward 25.917058
round 30, predicted reward 0.056182,predicted upper bound 0.059950,actual reward 20.260931
round 31, predicted reward 0.128695,predicted upper bound 0.132632,actual reward 21.570369
round 32, predicted reward 0.129648,predicted upper bound 0.135204,actual reward 24.936236
round 33, predicted reward 0.159791,predicted upper bound 0.164016,actual reward 16.134349
round 34, predicted reward 0.013973,predicted upper bound 0.018509,actual reward 31.580753
round 35, predicted reward 0.560455,predicted upper bound 0.564927,actual reward 25.174120
round 36, predicted reward 0.295000,predicted upper bound 0.299220,actual reward 19.354114
round 37, predicted reward 0.205437,predicted upper bound 0.210574,actual reward 26.834291
round 38, predicted reward 0.040120,predicted upper bound 0.044390,actual reward 20.280294
round 39, predicted reward 0.240947,predicted upper bound 0.244452,actual reward 20.487989
round 40, predicted reward 0.328114,predicted upper bound 0.332106,actual reward 14.207510
round 41, predicted reward 0.291775,predicted upper bound 0.296436,actual reward 13.174174
round 42, predicted reward 0.178332,predicted upper bound 0.182355,actual reward 16.017797
round 43, predicted reward 0.195788,predicted upper bound 0.199415,actual reward 24.979637
round 44, predicted reward 0.457667,predicted upper bound 0.462285,actual reward 14.990857
round 45, predicted reward 0.186996,predicted upper bound 0.190440,actual reward 37.255346
round 46, predicted reward 0.078680,predicted upper bound 0.082426,actual reward 20.687110
round 47, predicted reward 0.254361,predicted upper bound 0.259520,actual reward 26.349630
round 48, predicted reward 0.402168,predicted upper bound 0.406433,actual reward 23.909314
round 49, predicted reward 0.311260,predicted upper bound 0.315214,actual reward 22.767743
round 50, predicted reward 0.062184,predicted upper bound 0.066672,actual reward 23.466417
round 51, predicted reward 8.606091,predicted upper bound 8.640735,actual reward 14.642113
round 52, predicted reward 17.627259,predicted upper bound 17.662157,actual reward 18.885952
round 53, predicted reward 7.995933,predicted upper bound 8.030579,actual reward 18.359690
round 54, predicted reward 5.429062,predicted upper bound 5.460406,actual reward 26.046702
round 55, predicted reward 18.930439,predicted upper bound 18.971483,actual reward 11.706470
round 56, predicted reward 14.594812,predicted upper bound 14.628113,actual reward 17.134158
round 57, predicted reward 16.372851,predicted upper bound 16.406117,actual reward 17.379211
round 58, predicted reward 16.534704,predicted upper bound 16.571372,actual reward 15.656447
round 59, predicted reward 14.027420,predicted upper bound 14.063159,actual reward 19.518779
round 60, predicted reward 24.912907,predicted upper bound 24.942811,actual reward 26.741491
round 61, predicted reward 16.426400,predicted upper bound 16.462498,actual reward 20.959550
round 62, predicted reward 14.281727,predicted upper bound 14.319157,actual reward 17.221722
round 63, predicted reward 15.054633,predicted upper bound 15.085599,actual reward 14.741157
round 64, predicted reward 32.300895,predicted upper bound 32.336853,actual reward 20.169107
round 65, predicted reward 7.407692,predicted upper bound 7.441547,actual reward 21.379650
round 66, predicted reward 17.846936,predicted upper bound 17.878941,actual reward 22.110123
round 67, predicted reward 16.504289,predicted upper bound 16.537194,actual reward 21.403313
round 68, predicted reward 18.028844,predicted upper bound 18.057299,actual reward 19.551126
round 69, predicted reward 7.267333,predicted upper bound 7.292153,actual reward 17.314159
round 70, predicted reward 14.856813,predicted upper bound 14.885812,actual reward 17.622531
round 71, predicted reward 16.589743,predicted upper bound 16.621304,actual reward 37.133653
round 72, predicted reward 31.980646,predicted upper bound 32.008496,actual reward 27.050000
round 73, predicted reward 18.537559,predicted upper bound 18.565678,actual reward 23.105176
round 74, predicted reward 14.009138,predicted upper bound 14.035760,actual reward 17.557551
round 75, predicted reward 11.138431,predicted upper bound 11.164345,actual reward 31.715111
round 76, predicted reward 22.696676,predicted upper bound 22.725281,actual reward 18.833883
round 77, predicted reward 11.517352,predicted upper bound 11.544708,actual reward 29.761181
round 78, predicted reward 19.359240,predicted upper bound 19.389224,actual reward 23.260744
round 79, predicted reward 23.841858,predicted upper bound 23.874079,actual reward 13.178028
round 80, predicted reward 17.279209,predicted upper bound 17.308259,actual reward 19.171228
round 81, predicted reward 13.657639,predicted upper bound 13.679428,actual reward 19.811594
round 82, predicted reward 18.063326,predicted upper bound 18.088501,actual reward 28.378565
round 83, predicted reward 28.552671,predicted upper bound 28.576200,actual reward 22.619617
round 84, predicted reward 23.537586,predicted upper bound 23.562342,actual reward 13.321176
round 85, predicted reward 17.181897,predicted upper bound 17.206923,actual reward 25.013070
round 86, predicted reward 30.894249,predicted upper bound 30.916550,actual reward 26.919864
round 87, predicted reward 32.041375,predicted upper bound 32.065035,actual reward 27.878379
round 88, predicted reward 16.352468,predicted upper bound 16.375227,actual reward 13.295649
round 89, predicted reward 11.583217,predicted upper bound 11.610271,actual reward 12.657600
round 90, predicted reward 17.705822,predicted upper bound 17.726968,actual reward 12.612084
round 91, predicted reward 16.046766,predicted upper bound 16.071448,actual reward 20.233196
round 92, predicted reward 14.144616,predicted upper bound 14.164875,actual reward 27.889139
round 93, predicted reward 16.121896,predicted upper bound 16.145989,actual reward 30.610312
round 94, predicted reward 15.225252,predicted upper bound 15.248123,actual reward 22.012892
round 95, predicted reward 16.355666,predicted upper bound 16.380269,actual reward 23.974887
round 96, predicted reward 8.217883,predicted upper bound 8.240996,actual reward 17.748972
round 97, predicted reward 26.009223,predicted upper bound 26.028347,actual reward 20.992405
round 98, predicted reward 14.951986,predicted upper bound 14.975622,actual reward 19.180408
round 99, predicted reward 26.249606,predicted upper bound 26.269704,actual reward 25.827047
round 100, predicted reward 16.151437,predicted upper bound 16.174269,actual reward 30.430881
round 101, predicted reward 27.898543,predicted upper bound 27.929937,actual reward 26.278745
round 102, predicted reward 18.710856,predicted upper bound 18.740857,actual reward 25.795415
round 103, predicted reward 25.865475,predicted upper bound 25.894136,actual reward 21.518010
round 104, predicted reward 20.259688,predicted upper bound 20.287348,actual reward 15.963689
round 105, predicted reward 20.152761,predicted upper bound 20.185419,actual reward 16.177697
round 106, predicted reward 21.634282,predicted upper bound 21.661057,actual reward 24.037160
round 107, predicted reward 20.475148,predicted upper bound 20.507660,actual reward 12.749993
round 108, predicted reward 17.872910,predicted upper bound 17.897472,actual reward 25.255918
round 109, predicted reward 18.114403,predicted upper bound 18.139309,actual reward 33.301586
round 110, predicted reward 21.184504,predicted upper bound 21.212341,actual reward 14.625824
round 111, predicted reward 19.244821,predicted upper bound 19.274795,actual reward 19.955009
round 112, predicted reward 29.802609,predicted upper bound 29.828185,actual reward 34.618448
round 113, predicted reward 14.922558,predicted upper bound 14.945205,actual reward 38.375085
round 114, predicted reward 25.467172,predicted upper bound 25.494545,actual reward 16.759919
round 115, predicted reward 24.033530,predicted upper bound 24.061557,actual reward 14.806201
round 116, predicted reward 16.800245,predicted upper bound 16.823533,actual reward 35.138419
round 117, predicted reward 20.402109,predicted upper bound 20.435326,actual reward 29.043583
round 118, predicted reward 20.644041,predicted upper bound 20.669804,actual reward 24.419414
round 119, predicted reward 18.951313,predicted upper bound 18.976432,actual reward 28.243550
round 120, predicted reward 25.089538,predicted upper bound 25.114679,actual reward 21.694475
round 121, predicted reward 22.584291,predicted upper bound 22.608065,actual reward 19.451703
round 122, predicted reward 25.139846,predicted upper bound 25.161923,actual reward 30.241232
round 123, predicted reward 26.259558,predicted upper bound 26.285970,actual reward 30.250533
round 124, predicted reward 23.916421,predicted upper bound 23.942913,actual reward 19.407280
round 125, predicted reward 25.559071,predicted upper bound 25.586802,actual reward 32.771126
round 126, predicted reward 18.227899,predicted upper bound 18.252393,actual reward 18.111151
round 127, predicted reward 22.213214,predicted upper bound 22.236682,actual reward 26.144206
round 128, predicted reward 14.417640,predicted upper bound 14.441146,actual reward 24.276563
round 129, predicted reward 22.246411,predicted upper bound 22.273271,actual reward 22.786668
round 130, predicted reward 16.896966,predicted upper bound 16.917932,actual reward 22.843833
round 131, predicted reward 26.852486,predicted upper bound 26.876064,actual reward 27.724264
round 132, predicted reward 20.853834,predicted upper bound 20.879880,actual reward 31.556063
round 133, predicted reward 15.220276,predicted upper bound 15.239049,actual reward 19.034335
round 134, predicted reward 28.086191,predicted upper bound 28.107554,actual reward 14.144158
round 135, predicted reward 18.752047,predicted upper bound 18.779067,actual reward 28.623331
round 136, predicted reward 28.930499,predicted upper bound 28.955853,actual reward 17.611621
round 137, predicted reward 19.618000,predicted upper bound 19.643267,actual reward 22.990426
round 138, predicted reward 24.962505,predicted upper bound 24.983094,actual reward 21.877166
round 139, predicted reward 19.143336,predicted upper bound 19.168428,actual reward 27.291976
round 140, predicted reward 19.837250,predicted upper bound 19.863107,actual reward 33.742850
round 141, predicted reward 17.044638,predicted upper bound 17.073835,actual reward 17.630771
round 142, predicted reward 15.408549,predicted upper bound 15.437337,actual reward 23.226348
round 143, predicted reward 22.495624,predicted upper bound 22.520906,actual reward 24.279765
round 144, predicted reward 24.449657,predicted upper bound 24.467361,actual reward 32.037554
round 145, predicted reward 28.070409,predicted upper bound 28.093692,actual reward 18.088100
round 146, predicted reward 21.769857,predicted upper bound 21.795832,actual reward 14.068824
round 147, predicted reward 21.362847,predicted upper bound 21.382868,actual reward 20.625575
round 148, predicted reward 24.108577,predicted upper bound 24.128892,actual reward 24.792132
round 149, predicted reward 31.305155,predicted upper bound 31.328888,actual reward 23.412657
round 150, predicted reward 26.480364,predicted upper bound 26.503703,actual reward 27.039334
round 151, predicted reward 19.383585,predicted upper bound 19.415251,actual reward 21.379650
round 152, predicted reward 24.025492,predicted upper bound 24.050102,actual reward 26.735973
round 153, predicted reward 23.079243,predicted upper bound 23.104665,actual reward 18.316809
round 154, predicted reward 25.222249,predicted upper bound 25.245979,actual reward 29.855392
round 155, predicted reward 21.376171,predicted upper bound 21.395969,actual reward 28.950713
round 156, predicted reward 14.369346,predicted upper bound 14.400701,actual reward 17.400596
round 157, predicted reward 17.331990,predicted upper bound 17.357879,actual reward 10.466660
round 158, predicted reward 22.872946,predicted upper bound 22.895533,actual reward 17.371566
round 159, predicted reward 18.887204,predicted upper bound 18.919371,actual reward 33.074061
round 160, predicted reward 26.428769,predicted upper bound 26.453866,actual reward 24.313228
round 161, predicted reward 18.658765,predicted upper bound 18.685345,actual reward 24.241578
round 162, predicted reward 21.067331,predicted upper bound 21.092944,actual reward 20.424007
round 163, predicted reward 24.962838,predicted upper bound 24.988219,actual reward 31.410511
round 164, predicted reward 25.958915,predicted upper bound 25.980552,actual reward 18.363835
round 165, predicted reward 22.668678,predicted upper bound 22.693081,actual reward 17.510460
round 166, predicted reward 26.362126,predicted upper bound 26.390475,actual reward 30.685054
round 167, predicted reward 20.567619,predicted upper bound 20.592681,actual reward 19.401971
round 168, predicted reward 29.522772,predicted upper bound 29.544542,actual reward 35.913454
round 169, predicted reward 23.517825,predicted upper bound 23.542266,actual reward 17.662427
round 170, predicted reward 21.043265,predicted upper bound 21.066685,actual reward 26.210918
round 171, predicted reward 31.668816,predicted upper bound 31.692322,actual reward 35.813192
round 172, predicted reward 26.428517,predicted upper bound 26.448090,actual reward 28.432307
round 173, predicted reward 29.028598,predicted upper bound 29.050641,actual reward 27.179995
round 174, predicted reward 23.990750,predicted upper bound 24.012266,actual reward 22.385836
round 175, predicted reward 22.640801,predicted upper bound 22.664807,actual reward 23.476106
round 176, predicted reward 23.082837,predicted upper bound 23.103748,actual reward 19.640802
round 177, predicted reward 23.749021,predicted upper bound 23.774446,actual reward 28.787638
round 178, predicted reward 19.261646,predicted upper bound 19.288201,actual reward 27.497351
round 179, predicted reward 25.493315,predicted upper bound 25.518154,actual reward 17.462747
round 180, predicted reward 19.401808,predicted upper bound 19.427003,actual reward 19.733407
round 181, predicted reward 22.755637,predicted upper bound 22.781273,actual reward 24.842331
round 182, predicted reward 32.526842,predicted upper bound 32.546296,actual reward 36.637592
round 183, predicted reward 25.517931,predicted upper bound 25.540066,actual reward 28.720944
round 184, predicted reward 24.978386,predicted upper bound 25.003894,actual reward 34.455809
round 185, predicted reward 23.745775,predicted upper bound 23.769681,actual reward 23.671121
round 186, predicted reward 25.427070,predicted upper bound 25.447935,actual reward 22.593549
round 187, predicted reward 21.837370,predicted upper bound 21.858599,actual reward 15.965466
round 188, predicted reward 26.481024,predicted upper bound 26.504886,actual reward 25.377860
round 189, predicted reward 22.596720,predicted upper bound 22.623500,actual reward 24.258048
round 190, predicted reward 25.311981,predicted upper bound 25.332812,actual reward 20.864208
round 191, predicted reward 20.782321,predicted upper bound 20.800912,actual reward 18.467663
round 192, predicted reward 22.275420,predicted upper bound 22.296662,actual reward 19.073194
round 193, predicted reward 20.074677,predicted upper bound 20.099228,actual reward 23.297718
round 194, predicted reward 27.444798,predicted upper bound 27.465936,actual reward 23.394712
round 195, predicted reward 18.174278,predicted upper bound 18.196238,actual reward 13.148697
round 196, predicted reward 22.791176,predicted upper bound 22.811481,actual reward 17.623899
round 197, predicted reward 22.172199,predicted upper bound 22.195742,actual reward 25.140992
round 198, predicted reward 19.017947,predicted upper bound 19.042642,actual reward 32.007888
round 199, predicted reward 21.947309,predicted upper bound 21.970396,actual reward 30.576206
round 200, predicted reward 28.569220,predicted upper bound 28.589263,actual reward 27.229753
round 201, predicted reward 21.804745,predicted upper bound 21.831981,actual reward 22.926045
round 202, predicted reward 26.421969,predicted upper bound 26.445980,actual reward 28.557619
round 203, predicted reward 24.269991,predicted upper bound 24.291389,actual reward 17.551181
round 204, predicted reward 22.711272,predicted upper bound 22.734678,actual reward 19.331618
round 205, predicted reward 23.165566,predicted upper bound 23.192620,actual reward 27.984101
round 206, predicted reward 22.799348,predicted upper bound 22.821693,actual reward 23.129912
round 207, predicted reward 29.946937,predicted upper bound 29.970393,actual reward 31.169346
round 208, predicted reward 32.127192,predicted upper bound 32.149625,actual reward 31.951191
round 209, predicted reward 23.961492,predicted upper bound 23.985529,actual reward 24.061564
round 210, predicted reward 20.623061,predicted upper bound 20.645991,actual reward 30.079992
round 211, predicted reward 28.006112,predicted upper bound 28.028942,actual reward 30.346848
round 212, predicted reward 28.236721,predicted upper bound 28.261904,actual reward 19.947383
round 213, predicted reward 17.812111,predicted upper bound 17.834953,actual reward 19.659480
round 214, predicted reward 22.668847,predicted upper bound 22.686810,actual reward 21.911955
round 215, predicted reward 25.688613,predicted upper bound 25.713161,actual reward 27.210401
round 216, predicted reward 25.744912,predicted upper bound 25.768897,actual reward 19.481306
round 217, predicted reward 24.845274,predicted upper bound 24.866659,actual reward 25.048318
round 218, predicted reward 21.865141,predicted upper bound 21.884999,actual reward 24.100033
round 219, predicted reward 26.529381,predicted upper bound 26.550160,actual reward 22.668683
round 220, predicted reward 22.201611,predicted upper bound 22.223913,actual reward 16.063566
round 221, predicted reward 21.938489,predicted upper bound 21.962990,actual reward 24.266150
round 222, predicted reward 21.995987,predicted upper bound 22.017235,actual reward 23.979146
round 223, predicted reward 27.843543,predicted upper bound 27.862738,actual reward 35.147981
round 224, predicted reward 22.686701,predicted upper bound 22.707826,actual reward 21.294915
round 225, predicted reward 29.003230,predicted upper bound 29.022150,actual reward 26.133858
round 226, predicted reward 25.384285,predicted upper bound 25.410130,actual reward 26.776742
round 227, predicted reward 23.723504,predicted upper bound 23.741405,actual reward 22.723087
round 228, predicted reward 18.850782,predicted upper bound 18.871120,actual reward 17.663279
round 229, predicted reward 25.322292,predicted upper bound 25.345190,actual reward 31.734585
round 230, predicted reward 24.947435,predicted upper bound 24.965572,actual reward 28.162845
round 231, predicted reward 24.232749,predicted upper bound 24.253754,actual reward 16.691607
round 232, predicted reward 29.278461,predicted upper bound 29.295093,actual reward 29.173788
round 233, predicted reward 22.391432,predicted upper bound 22.411690,actual reward 24.881548
round 234, predicted reward 25.654450,predicted upper bound 25.676358,actual reward 25.483518
round 235, predicted reward 25.311130,predicted upper bound 25.334066,actual reward 20.452540
round 236, predicted reward 23.098871,predicted upper bound 23.120709,actual reward 17.909351
round 237, predicted reward 34.854073,predicted upper bound 34.873388,actual reward 36.070589
round 238, predicted reward 26.370952,predicted upper bound 26.393217,actual reward 26.457008
round 239, predicted reward 20.517089,predicted upper bound 20.540390,actual reward 18.910845
round 240, predicted reward 20.719461,predicted upper bound 20.736537,actual reward 24.879516
round 241, predicted reward 30.510980,predicted upper bound 30.534706,actual reward 35.110334
round 242, predicted reward 22.848970,predicted upper bound 22.869440,actual reward 23.114881
round 243, predicted reward 28.843733,predicted upper bound 28.866597,actual reward 36.708900
round 244, predicted reward 26.597703,predicted upper bound 26.618377,actual reward 20.661982
round 245, predicted reward 28.668092,predicted upper bound 28.691439,actual reward 30.030348
round 246, predicted reward 24.119923,predicted upper bound 24.142349,actual reward 20.881649
round 247, predicted reward 25.343047,predicted upper bound 25.364024,actual reward 28.995775
round 248, predicted reward 23.144703,predicted upper bound 23.162487,actual reward 21.266792
round 249, predicted reward 21.427712,predicted upper bound 21.447417,actual reward 19.554265
round 250, predicted reward 21.636136,predicted upper bound 21.659186,actual reward 26.772540
round 251, predicted reward 31.176602,predicted upper bound 31.195665,actual reward 28.331387
round 252, predicted reward 36.630393,predicted upper bound 36.650718,actual reward 46.832335
round 253, predicted reward 20.950706,predicted upper bound 20.969237,actual reward 22.948096
round 254, predicted reward 22.155958,predicted upper bound 22.178221,actual reward 19.693619
round 255, predicted reward 22.338459,predicted upper bound 22.362027,actual reward 22.297879
round 256, predicted reward 21.667692,predicted upper bound 21.686601,actual reward 19.622319
round 257, predicted reward 24.916570,predicted upper bound 24.939415,actual reward 19.820018
round 258, predicted reward 25.917908,predicted upper bound 25.939345,actual reward 23.881907
round 259, predicted reward 20.474679,predicted upper bound 20.494392,actual reward 17.318144
round 260, predicted reward 23.844313,predicted upper bound 23.866704,actual reward 26.436840
round 261, predicted reward 27.781794,predicted upper bound 27.803426,actual reward 30.587766
round 262, predicted reward 32.721881,predicted upper bound 32.744889,actual reward 29.084829
round 263, predicted reward 25.475526,predicted upper bound 25.496651,actual reward 20.192529
round 264, predicted reward 26.262947,predicted upper bound 26.286297,actual reward 22.712370
round 265, predicted reward 26.607803,predicted upper bound 26.628503,actual reward 22.873040
round 266, predicted reward 28.254518,predicted upper bound 28.272194,actual reward 34.175406
round 267, predicted reward 22.901525,predicted upper bound 22.923126,actual reward 24.177456
round 268, predicted reward 28.249704,predicted upper bound 28.268581,actual reward 29.121999
round 269, predicted reward 20.692437,predicted upper bound 20.713129,actual reward 20.514044
round 270, predicted reward 21.819916,predicted upper bound 21.837290,actual reward 18.121877
round 271, predicted reward 27.451308,predicted upper bound 27.472579,actual reward 26.632305
round 272, predicted reward 19.079472,predicted upper bound 19.100488,actual reward 22.698227
round 273, predicted reward 35.684243,predicted upper bound 35.701899,actual reward 37.236786
round 274, predicted reward 23.962647,predicted upper bound 23.983628,actual reward 27.147218
round 275, predicted reward 26.473766,predicted upper bound 26.495858,actual reward 29.143695
round 276, predicted reward 30.241188,predicted upper bound 30.258147,actual reward 26.717140
round 277, predicted reward 23.250596,predicted upper bound 23.271696,actual reward 30.272687
round 278, predicted reward 32.923956,predicted upper bound 32.944329,actual reward 33.749474
round 279, predicted reward 30.712262,predicted upper bound 30.729232,actual reward 32.950217
round 280, predicted reward 26.457784,predicted upper bound 26.478405,actual reward 22.176620
round 281, predicted reward 26.611222,predicted upper bound 26.630940,actual reward 36.661694
round 282, predicted reward 27.440935,predicted upper bound 27.461005,actual reward 27.095449
round 283, predicted reward 25.948729,predicted upper bound 25.966879,actual reward 25.535366
round 284, predicted reward 22.242942,predicted upper bound 22.261223,actual reward 23.175035
round 285, predicted reward 24.737832,predicted upper bound 24.756111,actual reward 30.339552
round 286, predicted reward 22.579617,predicted upper bound 22.595930,actual reward 19.810297
round 287, predicted reward 24.738486,predicted upper bound 24.757494,actual reward 31.463143
round 288, predicted reward 27.479055,predicted upper bound 27.498112,actual reward 28.569927
round 289, predicted reward 24.717680,predicted upper bound 24.734967,actual reward 24.159995
round 290, predicted reward 25.796865,predicted upper bound 25.813974,actual reward 30.024971
round 291, predicted reward 27.134781,predicted upper bound 27.154967,actual reward 28.335806
round 292, predicted reward 15.223371,predicted upper bound 15.243954,actual reward 16.750180
round 293, predicted reward 28.813027,predicted upper bound 28.831978,actual reward 25.300563
round 294, predicted reward 21.027566,predicted upper bound 21.048434,actual reward 20.862800
round 295, predicted reward 22.834194,predicted upper bound 22.849518,actual reward 25.343369
round 296, predicted reward 27.278277,predicted upper bound 27.297304,actual reward 25.306339
round 297, predicted reward 27.281453,predicted upper bound 27.300997,actual reward 33.525973
round 298, predicted reward 21.839927,predicted upper bound 21.857256,actual reward 25.263409
round 299, predicted reward 19.107514,predicted upper bound 19.127771,actual reward 20.158310
round 300, predicted reward 23.011957,predicted upper bound 23.033224,actual reward 28.929577
round 301, predicted reward 18.981318,predicted upper bound 19.002418,actual reward 21.694897
round 302, predicted reward 19.618039,predicted upper bound 19.638648,actual reward 15.073932
round 303, predicted reward 26.984925,predicted upper bound 27.004683,actual reward 21.818682
round 304, predicted reward 28.930407,predicted upper bound 28.950379,actual reward 27.862276
round 305, predicted reward 22.254693,predicted upper bound 22.275282,actual reward 25.712432
round 306, predicted reward 26.568585,predicted upper bound 26.588521,actual reward 24.579077
round 307, predicted reward 27.272811,predicted upper bound 27.294045,actual reward 22.406017
round 308, predicted reward 28.679666,predicted upper bound 28.698445,actual reward 26.572130
round 309, predicted reward 24.043329,predicted upper bound 24.061231,actual reward 26.227653
round 310, predicted reward 21.986542,predicted upper bound 22.006272,actual reward 21.895643
round 311, predicted reward 23.526961,predicted upper bound 23.546590,actual reward 26.350202
round 312, predicted reward 18.792024,predicted upper bound 18.808728,actual reward 18.288657
round 313, predicted reward 23.408697,predicted upper bound 23.428572,actual reward 23.873949
round 314, predicted reward 35.892369,predicted upper bound 35.907572,actual reward 35.002580
round 315, predicted reward 27.093949,predicted upper bound 27.113734,actual reward 25.389419
round 316, predicted reward 27.660863,predicted upper bound 27.679490,actual reward 27.684901
round 317, predicted reward 23.626363,predicted upper bound 23.645869,actual reward 22.113029
round 318, predicted reward 27.360997,predicted upper bound 27.382564,actual reward 32.460627
round 319, predicted reward 25.476729,predicted upper bound 25.496393,actual reward 20.711017
round 320, predicted reward 29.121892,predicted upper bound 29.138899,actual reward 32.015616
round 321, predicted reward 28.115336,predicted upper bound 28.132507,actual reward 26.518370
round 322, predicted reward 26.479674,predicted upper bound 26.499280,actual reward 28.522367
round 323, predicted reward 35.052191,predicted upper bound 35.071422,actual reward 37.102618
round 324, predicted reward 34.394233,predicted upper bound 34.410351,actual reward 39.777010
round 325, predicted reward 25.519277,predicted upper bound 25.540961,actual reward 23.186744
round 326, predicted reward 25.593085,predicted upper bound 25.612633,actual reward 24.624234
round 327, predicted reward 34.547747,predicted upper bound 34.565951,actual reward 37.901271
round 328, predicted reward 26.478763,predicted upper bound 26.496553,actual reward 27.292871
round 329, predicted reward 20.856017,predicted upper bound 20.873737,actual reward 23.897103
round 330, predicted reward 32.895425,predicted upper bound 32.910775,actual reward 36.577826
round 331, predicted reward 27.486014,predicted upper bound 27.507909,actual reward 23.667333
round 332, predicted reward 18.533416,predicted upper bound 18.552301,actual reward 18.753208
round 333, predicted reward 24.511399,predicted upper bound 24.528739,actual reward 26.157284
round 334, predicted reward 25.894504,predicted upper bound 25.913909,actual reward 36.517846
round 335, predicted reward 25.863861,predicted upper bound 25.883343,actual reward 20.100779
round 336, predicted reward 18.252186,predicted upper bound 18.273491,actual reward 19.283114
round 337, predicted reward 24.148518,predicted upper bound 24.165453,actual reward 19.552147
round 338, predicted reward 22.465870,predicted upper bound 22.487930,actual reward 18.442350
round 339, predicted reward 24.500859,predicted upper bound 24.519219,actual reward 22.581162
round 340, predicted reward 26.984431,predicted upper bound 27.002671,actual reward 31.554546
round 341, predicted reward 31.103481,predicted upper bound 31.116772,actual reward 29.557085
round 342, predicted reward 24.372299,predicted upper bound 24.389145,actual reward 25.890633
round 343, predicted reward 24.603341,predicted upper bound 24.619047,actual reward 23.374907
round 344, predicted reward 23.549593,predicted upper bound 23.567195,actual reward 27.221957
round 345, predicted reward 19.538368,predicted upper bound 19.560089,actual reward 16.373085
round 346, predicted reward 24.424098,predicted upper bound 24.438718,actual reward 24.687748
round 347, predicted reward 25.923067,predicted upper bound 25.942428,actual reward 23.183493
round 348, predicted reward 24.019259,predicted upper bound 24.040216,actual reward 25.481358
round 349, predicted reward 24.575239,predicted upper bound 24.596584,actual reward 19.176036
round 350, predicted reward 26.100265,predicted upper bound 26.118707,actual reward 22.849963
round 351, predicted reward 27.588916,predicted upper bound 27.609385,actual reward 29.524987
round 352, predicted reward 22.789218,predicted upper bound 22.809635,actual reward 20.253380
round 353, predicted reward 20.835463,predicted upper bound 20.854509,actual reward 15.682392
round 354, predicted reward 25.056255,predicted upper bound 25.076520,actual reward 24.932531
round 355, predicted reward 30.649971,predicted upper bound 30.664866,actual reward 34.599706
round 356, predicted reward 29.660338,predicted upper bound 29.676242,actual reward 27.675229
round 357, predicted reward 32.141569,predicted upper bound 32.158894,actual reward 30.918927
round 358, predicted reward 27.603080,predicted upper bound 27.620777,actual reward 26.866438
round 359, predicted reward 24.916010,predicted upper bound 24.935426,actual reward 22.970294
round 360, predicted reward 30.136226,predicted upper bound 30.154940,actual reward 30.993208
round 361, predicted reward 22.610151,predicted upper bound 22.630500,actual reward 17.958827
round 362, predicted reward 26.003524,predicted upper bound 26.020202,actual reward 32.321188
round 363, predicted reward 22.741048,predicted upper bound 22.758170,actual reward 21.497563
round 364, predicted reward 27.967536,predicted upper bound 27.985663,actual reward 29.934508
round 365, predicted reward 21.382053,predicted upper bound 21.402890,actual reward 15.067486
round 366, predicted reward 22.969964,predicted upper bound 22.985946,actual reward 24.209817
round 367, predicted reward 27.706727,predicted upper bound 27.726993,actual reward 26.244708
round 368, predicted reward 28.811462,predicted upper bound 28.827135,actual reward 33.797131
round 369, predicted reward 34.283689,predicted upper bound 34.297439,actual reward 36.985730
round 370, predicted reward 33.013575,predicted upper bound 33.028599,actual reward 42.608397
round 371, predicted reward 27.696474,predicted upper bound 27.713003,actual reward 28.193921
round 372, predicted reward 21.556794,predicted upper bound 21.574337,actual reward 25.651449
round 373, predicted reward 21.715519,predicted upper bound 21.732515,actual reward 24.252629
round 374, predicted reward 25.304167,predicted upper bound 25.320929,actual reward 22.635578
round 375, predicted reward 20.716057,predicted upper bound 20.733777,actual reward 16.601546
round 376, predicted reward 26.395439,predicted upper bound 26.412791,actual reward 27.284897
round 377, predicted reward 25.547962,predicted upper bound 25.565773,actual reward 28.751471
round 378, predicted reward 28.941368,predicted upper bound 28.960950,actual reward 29.706052
round 379, predicted reward 24.231701,predicted upper bound 24.249281,actual reward 25.431377
round 380, predicted reward 28.259054,predicted upper bound 28.273904,actual reward 25.111188
round 381, predicted reward 24.040467,predicted upper bound 24.059104,actual reward 25.249773
round 382, predicted reward 31.944053,predicted upper bound 31.960280,actual reward 32.192478
round 383, predicted reward 27.029066,predicted upper bound 27.046422,actual reward 31.494182
round 384, predicted reward 21.787705,predicted upper bound 21.803026,actual reward 24.562237
round 385, predicted reward 22.328852,predicted upper bound 22.347852,actual reward 15.964167
round 386, predicted reward 26.983816,predicted upper bound 26.998211,actual reward 30.045357
round 387, predicted reward 27.951507,predicted upper bound 27.969550,actual reward 24.336679
round 388, predicted reward 25.987478,predicted upper bound 26.003452,actual reward 26.429914
round 389, predicted reward 21.693097,predicted upper bound 21.710411,actual reward 21.191616
round 390, predicted reward 23.032123,predicted upper bound 23.050879,actual reward 15.807913
round 391, predicted reward 28.969273,predicted upper bound 28.982553,actual reward 34.917404
round 392, predicted reward 30.987251,predicted upper bound 31.000984,actual reward 31.328443
round 393, predicted reward 28.799859,predicted upper bound 28.813874,actual reward 26.352679
round 394, predicted reward 29.058180,predicted upper bound 29.074753,actual reward 32.232722
round 395, predicted reward 26.692433,predicted upper bound 26.708426,actual reward 26.511006
round 396, predicted reward 26.653988,predicted upper bound 26.672630,actual reward 27.513520
round 397, predicted reward 21.270435,predicted upper bound 21.290159,actual reward 16.973643
round 398, predicted reward 30.493747,predicted upper bound 30.508673,actual reward 30.279113
round 399, predicted reward 25.472415,predicted upper bound 25.487767,actual reward 25.327879
round 400, predicted reward 34.165869,predicted upper bound 34.180056,actual reward 29.459556
round 401, predicted reward 23.809432,predicted upper bound 23.824664,actual reward 23.043277
round 402, predicted reward 22.794905,predicted upper bound 22.816651,actual reward 21.797900
round 403, predicted reward 22.428384,predicted upper bound 22.443576,actual reward 14.628479
round 404, predicted reward 24.984283,predicted upper bound 25.003574,actual reward 25.922631
round 405, predicted reward 27.882618,predicted upper bound 27.899270,actual reward 29.442347
round 406, predicted reward 21.853912,predicted upper bound 21.870298,actual reward 24.691268
round 407, predicted reward 23.476167,predicted upper bound 23.491613,actual reward 23.268864
round 408, predicted reward 26.286944,predicted upper bound 26.303432,actual reward 28.577532
round 409, predicted reward 24.609468,predicted upper bound 24.624984,actual reward 25.807924
round 410, predicted reward 21.579328,predicted upper bound 21.596000,actual reward 18.658583
round 411, predicted reward 33.879143,predicted upper bound 33.894579,actual reward 38.664597
round 412, predicted reward 26.199254,predicted upper bound 26.214427,actual reward 25.312893
round 413, predicted reward 21.286987,predicted upper bound 21.305400,actual reward 22.667175
round 414, predicted reward 21.066425,predicted upper bound 21.085235,actual reward 22.018669
round 415, predicted reward 24.891024,predicted upper bound 24.908359,actual reward 27.506958
round 416, predicted reward 21.710575,predicted upper bound 21.728240,actual reward 17.400607
round 417, predicted reward 20.493534,predicted upper bound 20.510665,actual reward 21.428317
round 418, predicted reward 23.814785,predicted upper bound 23.831772,actual reward 24.343636
round 419, predicted reward 26.728624,predicted upper bound 26.744180,actual reward 29.031543
round 420, predicted reward 25.618887,predicted upper bound 25.637683,actual reward 26.801461
round 421, predicted reward 31.647766,predicted upper bound 31.662346,actual reward 34.153312
round 422, predicted reward 35.550705,predicted upper bound 35.564205,actual reward 36.352535
round 423, predicted reward 25.179455,predicted upper bound 25.197426,actual reward 24.950913
round 424, predicted reward 28.022263,predicted upper bound 28.037757,actual reward 31.216991
round 425, predicted reward 25.770606,predicted upper bound 25.786393,actual reward 26.009563
round 426, predicted reward 27.999365,predicted upper bound 28.014322,actual reward 23.009981
round 427, predicted reward 30.387061,predicted upper bound 30.402277,actual reward 33.157724
round 428, predicted reward 31.363332,predicted upper bound 31.375335,actual reward 35.218025
round 429, predicted reward 28.116772,predicted upper bound 28.134481,actual reward 26.562262
round 430, predicted reward 21.707069,predicted upper bound 21.724309,actual reward 19.483794
round 431, predicted reward 26.431525,predicted upper bound 26.445658,actual reward 24.085927
round 432, predicted reward 22.253566,predicted upper bound 22.270537,actual reward 28.638007
round 433, predicted reward 21.828722,predicted upper bound 21.844129,actual reward 18.118544
round 434, predicted reward 29.057391,predicted upper bound 29.070756,actual reward 24.973080
round 435, predicted reward 30.342250,predicted upper bound 30.354673,actual reward 32.569962
round 436, predicted reward 22.900178,predicted upper bound 22.917357,actual reward 21.763968
round 437, predicted reward 30.242726,predicted upper bound 30.258126,actual reward 37.796753
round 438, predicted reward 25.884359,predicted upper bound 25.900769,actual reward 30.058727
round 439, predicted reward 30.421777,predicted upper bound 30.435939,actual reward 30.963014
round 440, predicted reward 25.672133,predicted upper bound 25.686974,actual reward 26.847850
round 441, predicted reward 23.569586,predicted upper bound 23.583027,actual reward 22.491834
round 442, predicted reward 29.483692,predicted upper bound 29.499081,actual reward 28.524658
round 443, predicted reward 36.352439,predicted upper bound 36.364120,actual reward 36.922999
round 444, predicted reward 25.540743,predicted upper bound 25.558617,actual reward 28.695512
round 445, predicted reward 25.136522,predicted upper bound 25.153018,actual reward 25.643233
round 446, predicted reward 25.363969,predicted upper bound 25.378697,actual reward 29.576295
round 447, predicted reward 27.200960,predicted upper bound 27.219135,actual reward 23.007766
round 448, predicted reward 27.093009,predicted upper bound 27.106942,actual reward 25.825975
round 449, predicted reward 25.276567,predicted upper bound 25.289643,actual reward 21.424413
round 450, predicted reward 38.644627,predicted upper bound 38.655068,actual reward 39.843273
round 451, predicted reward 24.071545,predicted upper bound 24.087954,actual reward 20.728969
round 452, predicted reward 30.262285,predicted upper bound 30.277424,actual reward 26.825522
round 453, predicted reward 23.104371,predicted upper bound 23.122905,actual reward 26.949321
round 454, predicted reward 29.638855,predicted upper bound 29.653458,actual reward 35.075624
round 455, predicted reward 29.650331,predicted upper bound 29.667744,actual reward 27.514945
round 456, predicted reward 28.237349,predicted upper bound 28.255932,actual reward 31.325534
round 457, predicted reward 26.476769,predicted upper bound 26.490851,actual reward 31.121621
round 458, predicted reward 25.534189,predicted upper bound 25.547933,actual reward 23.169431
round 459, predicted reward 28.621635,predicted upper bound 28.638469,actual reward 31.099677
round 460, predicted reward 27.139624,predicted upper bound 27.154866,actual reward 22.503277
round 461, predicted reward 21.399211,predicted upper bound 21.414527,actual reward 20.381082
round 462, predicted reward 26.570639,predicted upper bound 26.585969,actual reward 28.307946
round 463, predicted reward 32.368089,predicted upper bound 32.382522,actual reward 35.578016
round 464, predicted reward 19.394188,predicted upper bound 19.414356,actual reward 19.398505
round 465, predicted reward 26.379518,predicted upper bound 26.396672,actual reward 24.817552
round 466, predicted reward 26.075640,predicted upper bound 26.093163,actual reward 29.883927
round 467, predicted reward 25.472238,predicted upper bound 25.490848,actual reward 22.201107
round 468, predicted reward 29.145126,predicted upper bound 29.161465,actual reward 32.141100
round 469, predicted reward 22.874865,predicted upper bound 22.890779,actual reward 22.751550
round 470, predicted reward 24.835682,predicted upper bound 24.852573,actual reward 19.149374
round 471, predicted reward 23.908657,predicted upper bound 23.925300,actual reward 22.568943
round 472, predicted reward 24.750952,predicted upper bound 24.764831,actual reward 22.616045
round 473, predicted reward 26.697952,predicted upper bound 26.710885,actual reward 27.100263
round 474, predicted reward 24.219802,predicted upper bound 24.236601,actual reward 19.999634
round 475, predicted reward 22.793237,predicted upper bound 22.809101,actual reward 21.467004
round 476, predicted reward 23.935144,predicted upper bound 23.951136,actual reward 26.200421
round 477, predicted reward 29.116324,predicted upper bound 29.130758,actual reward 26.871476
round 478, predicted reward 20.227742,predicted upper bound 20.246112,actual reward 22.290762
round 479, predicted reward 27.600397,predicted upper bound 27.617862,actual reward 28.702833
round 480, predicted reward 25.574860,predicted upper bound 25.590149,actual reward 20.272970
round 481, predicted reward 32.186276,predicted upper bound 32.200971,actual reward 30.999253
round 482, predicted reward 33.758654,predicted upper bound 33.773866,actual reward 35.149415
round 483, predicted reward 25.535650,predicted upper bound 25.552432,actual reward 26.986683
round 484, predicted reward 24.173798,predicted upper bound 24.191209,actual reward 25.560679
round 485, predicted reward 30.256448,predicted upper bound 30.272604,actual reward 27.796142
round 486, predicted reward 37.744111,predicted upper bound 37.756677,actual reward 40.331263
round 487, predicted reward 29.293546,predicted upper bound 29.311445,actual reward 30.606174
round 488, predicted reward 22.548467,predicted upper bound 22.566067,actual reward 13.197247
round 489, predicted reward 23.395587,predicted upper bound 23.411309,actual reward 21.987194
round 490, predicted reward 26.758036,predicted upper bound 26.772508,actual reward 22.543846
round 491, predicted reward 29.729294,predicted upper bound 29.745568,actual reward 33.792337
round 492, predicted reward 26.732969,predicted upper bound 26.748972,actual reward 25.420548
round 493, predicted reward 25.044596,predicted upper bound 25.062720,actual reward 26.518000
round 494, predicted reward 22.817496,predicted upper bound 22.833472,actual reward 25.089354
round 495, predicted reward 31.110455,predicted upper bound 31.123634,actual reward 31.028541
round 496, predicted reward 24.704976,predicted upper bound 24.719812,actual reward 23.231921
round 497, predicted reward 23.409165,predicted upper bound 23.424707,actual reward 28.124001
round 498, predicted reward 27.995826,predicted upper bound 28.010317,actual reward 25.278982
round 499, predicted reward 22.585831,predicted upper bound 22.602168,actual reward 21.982998
round 500, predicted reward 26.931161,predicted upper bound 26.947908,actual reward 31.246715
round 501, predicted reward 24.310699,predicted upper bound 24.325178,actual reward 22.486606
round 502, predicted reward 24.866481,predicted upper bound 24.882253,actual reward 29.015511
round 503, predicted reward 29.831992,predicted upper bound 29.849843,actual reward 35.418033
round 504, predicted reward 31.246927,predicted upper bound 31.263285,actual reward 30.697924
round 505, predicted reward 25.119634,predicted upper bound 25.131890,actual reward 25.708865
round 506, predicted reward 27.981628,predicted upper bound 27.998031,actual reward 30.143438
round 507, predicted reward 28.712107,predicted upper bound 28.728409,actual reward 24.942963
round 508, predicted reward 20.882081,predicted upper bound 20.899384,actual reward 19.902480
round 509, predicted reward 23.935727,predicted upper bound 23.953166,actual reward 18.993417
round 510, predicted reward 27.175284,predicted upper bound 27.188176,actual reward 28.954062
round 511, predicted reward 27.373657,predicted upper bound 27.388531,actual reward 25.700509
round 512, predicted reward 23.041380,predicted upper bound 23.054818,actual reward 25.404289
round 513, predicted reward 26.849626,predicted upper bound 26.863553,actual reward 26.043686
round 514, predicted reward 30.267237,predicted upper bound 30.281412,actual reward 28.184198
round 515, predicted reward 30.344908,predicted upper bound 30.358385,actual reward 29.728060
round 516, predicted reward 21.288392,predicted upper bound 21.301384,actual reward 13.408060
round 517, predicted reward 28.133802,predicted upper bound 28.153060,actual reward 34.372860
round 518, predicted reward 23.686337,predicted upper bound 23.703776,actual reward 23.433568
round 519, predicted reward 23.182246,predicted upper bound 23.197618,actual reward 22.413465
round 520, predicted reward 22.398971,predicted upper bound 22.414713,actual reward 22.541023
round 521, predicted reward 22.671854,predicted upper bound 22.690008,actual reward 17.732196
round 522, predicted reward 29.233865,predicted upper bound 29.246549,actual reward 26.904329
round 523, predicted reward 27.729657,predicted upper bound 27.743671,actual reward 26.215404
round 524, predicted reward 31.743227,predicted upper bound 31.757807,actual reward 29.132666
round 525, predicted reward 31.284375,predicted upper bound 31.301504,actual reward 30.675108
round 526, predicted reward 31.979924,predicted upper bound 31.993030,actual reward 37.544652
round 527, predicted reward 31.471695,predicted upper bound 31.488914,actual reward 31.384870
round 528, predicted reward 32.938010,predicted upper bound 32.952225,actual reward 39.620412
round 529, predicted reward 24.932454,predicted upper bound 24.949097,actual reward 22.256554
round 530, predicted reward 25.602341,predicted upper bound 25.616906,actual reward 20.942927
round 531, predicted reward 24.938046,predicted upper bound 24.953052,actual reward 24.120332
round 532, predicted reward 23.823877,predicted upper bound 23.841309,actual reward 23.732569
round 533, predicted reward 25.335646,predicted upper bound 25.349301,actual reward 27.575525
round 534, predicted reward 34.059130,predicted upper bound 34.070557,actual reward 33.674405
round 535, predicted reward 30.825005,predicted upper bound 30.838504,actual reward 30.498603
round 536, predicted reward 25.559143,predicted upper bound 25.572932,actual reward 23.679405
round 537, predicted reward 29.456657,predicted upper bound 29.471729,actual reward 32.692675
round 538, predicted reward 27.162760,predicted upper bound 27.178843,actual reward 28.989715
round 539, predicted reward 21.521821,predicted upper bound 21.536109,actual reward 26.213913
round 540, predicted reward 34.206068,predicted upper bound 34.221215,actual reward 35.296488
round 541, predicted reward 26.909462,predicted upper bound 26.923067,actual reward 26.959677
round 542, predicted reward 23.395673,predicted upper bound 23.410372,actual reward 21.100494
round 543, predicted reward 29.143683,predicted upper bound 29.159822,actual reward 30.079592
round 544, predicted reward 26.387360,predicted upper bound 26.403096,actual reward 23.334370
round 545, predicted reward 22.011177,predicted upper bound 22.023784,actual reward 22.166315
round 546, predicted reward 22.498167,predicted upper bound 22.515329,actual reward 22.248599
round 547, predicted reward 33.036011,predicted upper bound 33.049075,actual reward 33.031173
round 548, predicted reward 23.656828,predicted upper bound 23.670354,actual reward 22.013536
round 549, predicted reward 28.482609,predicted upper bound 28.499154,actual reward 30.634164
round 550, predicted reward 26.217812,predicted upper bound 26.231553,actual reward 30.177168
round 551, predicted reward 28.120700,predicted upper bound 28.137117,actual reward 28.219999
round 552, predicted reward 29.341951,predicted upper bound 29.354802,actual reward 32.466618
round 553, predicted reward 33.775906,predicted upper bound 33.787621,actual reward 37.511635
round 554, predicted reward 24.671900,predicted upper bound 24.683964,actual reward 23.757741
round 555, predicted reward 28.011424,predicted upper bound 28.023168,actual reward 31.142057
round 556, predicted reward 31.431850,predicted upper bound 31.446744,actual reward 32.013341
round 557, predicted reward 34.202273,predicted upper bound 34.214930,actual reward 31.861755
round 558, predicted reward 26.031261,predicted upper bound 26.043032,actual reward 25.419425
round 559, predicted reward 22.135163,predicted upper bound 22.149361,actual reward 17.188549
round 560, predicted reward 20.800725,predicted upper bound 20.818385,actual reward 19.559349
round 561, predicted reward 28.614820,predicted upper bound 28.629547,actual reward 21.170586
round 562, predicted reward 35.519158,predicted upper bound 35.529062,actual reward 37.555099
round 563, predicted reward 22.833849,predicted upper bound 22.846722,actual reward 28.406143
round 564, predicted reward 28.336841,predicted upper bound 28.348632,actual reward 23.826132
round 565, predicted reward 22.118057,predicted upper bound 22.132471,actual reward 21.511176
round 566, predicted reward 28.581254,predicted upper bound 28.595763,actual reward 23.542903
round 567, predicted reward 28.844317,predicted upper bound 28.857464,actual reward 29.689281
round 568, predicted reward 27.853755,predicted upper bound 27.869200,actual reward 28.481667
round 569, predicted reward 25.237962,predicted upper bound 25.252839,actual reward 22.495750
round 570, predicted reward 28.642019,predicted upper bound 28.652539,actual reward 24.409900
round 571, predicted reward 22.148535,predicted upper bound 22.164807,actual reward 23.771918
round 572, predicted reward 24.504439,predicted upper bound 24.518327,actual reward 22.314191
round 573, predicted reward 24.241904,predicted upper bound 24.255969,actual reward 18.886456
round 574, predicted reward 31.280969,predicted upper bound 31.296957,actual reward 30.542736
round 575, predicted reward 28.464712,predicted upper bound 28.476020,actual reward 26.518754
round 576, predicted reward 24.191523,predicted upper bound 24.209152,actual reward 18.616807
round 577, predicted reward 24.629994,predicted upper bound 24.643683,actual reward 29.035239
round 578, predicted reward 29.298599,predicted upper bound 29.310255,actual reward 34.295028
round 579, predicted reward 29.950593,predicted upper bound 29.965291,actual reward 34.156378
round 580, predicted reward 34.974815,predicted upper bound 34.985616,actual reward 35.266102
round 581, predicted reward 26.993075,predicted upper bound 27.008885,actual reward 31.942121
round 582, predicted reward 33.288810,predicted upper bound 33.298989,actual reward 34.222106
round 583, predicted reward 28.790474,predicted upper bound 28.803777,actual reward 30.286428
round 584, predicted reward 23.785150,predicted upper bound 23.801271,actual reward 24.338064
round 585, predicted reward 22.033484,predicted upper bound 22.051354,actual reward 17.628701
round 586, predicted reward 29.844004,predicted upper bound 29.855761,actual reward 27.003718
round 587, predicted reward 24.235717,predicted upper bound 24.251406,actual reward 21.423584
round 588, predicted reward 33.774264,predicted upper bound 33.787505,actual reward 38.654748
round 589, predicted reward 32.688382,predicted upper bound 32.702062,actual reward 30.702832
round 590, predicted reward 32.295844,predicted upper bound 32.308416,actual reward 31.683060
round 591, predicted reward 20.349055,predicted upper bound 20.361548,actual reward 15.099992
round 592, predicted reward 28.676019,predicted upper bound 28.688924,actual reward 31.455172
round 593, predicted reward 22.940663,predicted upper bound 22.951761,actual reward 28.149118
round 594, predicted reward 23.292972,predicted upper bound 23.307189,actual reward 20.619158
round 595, predicted reward 29.264048,predicted upper bound 29.277485,actual reward 24.644965
round 596, predicted reward 27.117096,predicted upper bound 27.130434,actual reward 25.339570
round 597, predicted reward 22.688290,predicted upper bound 22.701748,actual reward 23.348106
round 598, predicted reward 32.308630,predicted upper bound 32.321053,actual reward 33.008111
round 599, predicted reward 26.553441,predicted upper bound 26.568146,actual reward 25.427124
round 600, predicted reward 25.049846,predicted upper bound 25.064631,actual reward 24.177203
round 601, predicted reward 21.919285,predicted upper bound 21.933291,actual reward 22.822126
round 602, predicted reward 29.782101,predicted upper bound 29.794099,actual reward 30.686638
round 603, predicted reward 25.802484,predicted upper bound 25.815288,actual reward 28.760809
round 604, predicted reward 23.504970,predicted upper bound 23.519604,actual reward 27.137133
round 605, predicted reward 30.751474,predicted upper bound 30.765474,actual reward 34.666523
round 606, predicted reward 25.728789,predicted upper bound 25.743128,actual reward 21.846951
round 607, predicted reward 20.285344,predicted upper bound 20.299037,actual reward 14.619994
round 608, predicted reward 32.853706,predicted upper bound 32.864657,actual reward 33.445563
round 609, predicted reward 26.307635,predicted upper bound 26.320208,actual reward 25.851883
round 610, predicted reward 23.895676,predicted upper bound 23.908688,actual reward 29.970922
round 611, predicted reward 25.618939,predicted upper bound 25.632228,actual reward 22.823494
round 612, predicted reward 34.313677,predicted upper bound 34.326144,actual reward 33.442853
round 613, predicted reward 26.531160,predicted upper bound 26.543008,actual reward 23.414105
round 614, predicted reward 24.298306,predicted upper bound 24.313845,actual reward 21.727190
round 615, predicted reward 30.084970,predicted upper bound 30.097229,actual reward 35.754270
round 616, predicted reward 35.340616,predicted upper bound 35.353465,actual reward 35.674963
round 617, predicted reward 25.806906,predicted upper bound 25.821536,actual reward 26.065292
round 618, predicted reward 22.988087,predicted upper bound 23.002049,actual reward 23.164118
round 619, predicted reward 29.014622,predicted upper bound 29.026977,actual reward 25.470699
round 620, predicted reward 29.912038,predicted upper bound 29.927311,actual reward 24.383353
round 621, predicted reward 26.797193,predicted upper bound 26.811688,actual reward 20.565877
round 622, predicted reward 27.843564,predicted upper bound 27.859101,actual reward 32.454031
round 623, predicted reward 32.444582,predicted upper bound 32.457112,actual reward 33.350110
round 624, predicted reward 25.480746,predicted upper bound 25.492731,actual reward 22.177569
round 625, predicted reward 24.962072,predicted upper bound 24.977718,actual reward 22.186437
round 626, predicted reward 27.426681,predicted upper bound 27.438734,actual reward 29.813857
round 627, predicted reward 25.986337,predicted upper bound 26.000267,actual reward 20.793318
round 628, predicted reward 28.502355,predicted upper bound 28.516062,actual reward 26.730474
round 629, predicted reward 23.534040,predicted upper bound 23.549334,actual reward 22.356764
round 630, predicted reward 21.289496,predicted upper bound 21.304718,actual reward 17.232672
round 631, predicted reward 24.974678,predicted upper bound 24.989750,actual reward 31.618457
round 632, predicted reward 24.990303,predicted upper bound 25.003893,actual reward 25.661305
round 633, predicted reward 24.980352,predicted upper bound 24.991816,actual reward 21.399722
round 634, predicted reward 23.656661,predicted upper bound 23.668887,actual reward 30.578423
round 635, predicted reward 24.440622,predicted upper bound 24.453227,actual reward 23.241694
round 636, predicted reward 28.266996,predicted upper bound 28.277735,actual reward 24.083239
round 637, predicted reward 22.309860,predicted upper bound 22.323300,actual reward 22.512340
round 638, predicted reward 25.510221,predicted upper bound 25.523745,actual reward 21.209247
round 639, predicted reward 25.949709,predicted upper bound 25.959813,actual reward 24.105653
round 640, predicted reward 25.410241,predicted upper bound 25.424898,actual reward 21.220096
round 641, predicted reward 29.080304,predicted upper bound 29.089631,actual reward 34.723026
round 642, predicted reward 35.906825,predicted upper bound 35.917428,actual reward 39.678056
round 643, predicted reward 29.912052,predicted upper bound 29.924714,actual reward 29.512086
round 644, predicted reward 27.121181,predicted upper bound 27.132757,actual reward 22.493152
round 645, predicted reward 28.900565,predicted upper bound 28.912025,actual reward 33.352456
round 646, predicted reward 25.003507,predicted upper bound 25.015143,actual reward 22.845018
round 647, predicted reward 26.591438,predicted upper bound 26.603393,actual reward 28.759942
round 648, predicted reward 23.789573,predicted upper bound 23.802127,actual reward 17.267670
round 649, predicted reward 27.370695,predicted upper bound 27.382695,actual reward 26.325639
round 650, predicted reward 29.704972,predicted upper bound 29.715279,actual reward 35.209748
round 651, predicted reward 27.823362,predicted upper bound 27.835804,actual reward 25.143489
round 652, predicted reward 28.660976,predicted upper bound 28.673269,actual reward 33.796349
round 653, predicted reward 27.040076,predicted upper bound 27.055326,actual reward 22.095849
round 654, predicted reward 24.280061,predicted upper bound 24.296810,actual reward 25.152162
round 655, predicted reward 20.665063,predicted upper bound 20.677271,actual reward 19.287291
round 656, predicted reward 23.299256,predicted upper bound 23.314511,actual reward 19.779469
round 657, predicted reward 25.183425,predicted upper bound 25.196896,actual reward 25.679367
round 658, predicted reward 25.295513,predicted upper bound 25.307243,actual reward 23.875771
round 659, predicted reward 31.953659,predicted upper bound 31.965637,actual reward 32.695160
round 660, predicted reward 23.394320,predicted upper bound 23.408184,actual reward 19.330943
round 661, predicted reward 30.759183,predicted upper bound 30.771723,actual reward 28.092920
round 662, predicted reward 27.529858,predicted upper bound 27.541409,actual reward 24.917760
round 663, predicted reward 24.527274,predicted upper bound 24.542002,actual reward 21.917614
round 664, predicted reward 28.426453,predicted upper bound 28.440152,actual reward 30.354244
round 665, predicted reward 30.465620,predicted upper bound 30.477416,actual reward 33.277705
round 666, predicted reward 25.882692,predicted upper bound 25.895080,actual reward 26.798331
round 667, predicted reward 20.000207,predicted upper bound 20.012103,actual reward 20.939162
round 668, predicted reward 26.420549,predicted upper bound 26.435347,actual reward 26.224022
round 669, predicted reward 25.058354,predicted upper bound 25.071141,actual reward 22.059009
round 670, predicted reward 26.122280,predicted upper bound 26.136916,actual reward 29.723780
round 671, predicted reward 22.765318,predicted upper bound 22.781120,actual reward 25.113181
round 672, predicted reward 28.811271,predicted upper bound 28.822167,actual reward 24.585381
round 673, predicted reward 20.913715,predicted upper bound 20.926728,actual reward 13.545550
round 674, predicted reward 28.763675,predicted upper bound 28.775146,actual reward 28.433889
round 675, predicted reward 27.122443,predicted upper bound 27.134161,actual reward 28.173626
round 676, predicted reward 29.261312,predicted upper bound 29.273254,actual reward 27.844974
round 677, predicted reward 31.336438,predicted upper bound 31.348944,actual reward 36.037314
round 678, predicted reward 25.474902,predicted upper bound 25.485338,actual reward 23.620702
round 679, predicted reward 29.255538,predicted upper bound 29.268535,actual reward 27.852726
round 680, predicted reward 21.904424,predicted upper bound 21.918711,actual reward 22.839176
round 681, predicted reward 22.628945,predicted upper bound 22.640298,actual reward 20.046133
round 682, predicted reward 24.787028,predicted upper bound 24.800144,actual reward 24.206649
round 683, predicted reward 36.986348,predicted upper bound 36.997571,actual reward 37.524737
round 684, predicted reward 25.104974,predicted upper bound 25.118157,actual reward 24.045214
round 685, predicted reward 26.364570,predicted upper bound 26.375034,actual reward 23.998678
round 686, predicted reward 27.229590,predicted upper bound 27.240848,actual reward 24.711304
round 687, predicted reward 29.217145,predicted upper bound 29.228451,actual reward 28.789713
round 688, predicted reward 23.398924,predicted upper bound 23.411155,actual reward 18.642473
round 689, predicted reward 20.622328,predicted upper bound 20.637457,actual reward 19.316089
round 690, predicted reward 23.858333,predicted upper bound 23.868898,actual reward 24.842206
round 691, predicted reward 27.811200,predicted upper bound 27.821109,actual reward 26.352195
round 692, predicted reward 28.711179,predicted upper bound 28.721102,actual reward 30.746562
round 693, predicted reward 35.256331,predicted upper bound 35.266644,actual reward 37.314877
round 694, predicted reward 24.866797,predicted upper bound 24.877218,actual reward 21.127933
round 695, predicted reward 27.931200,predicted upper bound 27.944072,actual reward 35.266085
round 696, predicted reward 23.429944,predicted upper bound 23.442498,actual reward 28.913205
round 697, predicted reward 36.033358,predicted upper bound 36.044033,actual reward 38.758597
round 698, predicted reward 28.671868,predicted upper bound 28.683962,actual reward 28.634347
round 699, predicted reward 24.041999,predicted upper bound 24.056296,actual reward 20.270469
round 700, predicted reward 29.735634,predicted upper bound 29.744727,actual reward 32.247901
round 701, predicted reward 23.595525,predicted upper bound 23.606773,actual reward 19.070377
round 702, predicted reward 36.700805,predicted upper bound 36.711092,actual reward 41.972075
round 703, predicted reward 32.518814,predicted upper bound 32.532295,actual reward 37.409698
round 704, predicted reward 22.897298,predicted upper bound 22.914149,actual reward 18.597794
round 705, predicted reward 27.711103,predicted upper bound 27.723500,actual reward 31.939030
round 706, predicted reward 22.632287,predicted upper bound 22.644638,actual reward 24.291439
round 707, predicted reward 26.059641,predicted upper bound 26.070782,actual reward 28.825064
round 708, predicted reward 26.961535,predicted upper bound 26.975330,actual reward 26.941056
round 709, predicted reward 26.634880,predicted upper bound 26.647799,actual reward 28.871324
round 710, predicted reward 28.963541,predicted upper bound 28.973466,actual reward 29.118502
round 711, predicted reward 23.327633,predicted upper bound 23.340837,actual reward 23.841794
round 712, predicted reward 27.405371,predicted upper bound 27.417346,actual reward 28.695943
round 713, predicted reward 23.811798,predicted upper bound 23.827238,actual reward 19.296273
round 714, predicted reward 26.325488,predicted upper bound 26.335583,actual reward 27.371117
round 715, predicted reward 27.373028,predicted upper bound 27.385105,actual reward 26.096060
round 716, predicted reward 22.579025,predicted upper bound 22.591003,actual reward 15.575143
round 717, predicted reward 24.051914,predicted upper bound 24.063732,actual reward 24.117353
round 718, predicted reward 30.939546,predicted upper bound 30.950172,actual reward 36.429658
round 719, predicted reward 27.403342,predicted upper bound 27.414196,actual reward 28.897613
round 720, predicted reward 33.713592,predicted upper bound 33.723594,actual reward 31.827460
round 721, predicted reward 26.424671,predicted upper bound 26.436747,actual reward 28.513481
round 722, predicted reward 23.913046,predicted upper bound 23.926852,actual reward 19.212153
round 723, predicted reward 29.658253,predicted upper bound 29.670231,actual reward 32.397014
round 724, predicted reward 25.539232,predicted upper bound 25.551315,actual reward 25.193380
round 725, predicted reward 27.950927,predicted upper bound 27.962350,actual reward 26.962260
round 726, predicted reward 25.563421,predicted upper bound 25.576025,actual reward 26.341793
round 727, predicted reward 20.718541,predicted upper bound 20.731304,actual reward 19.301114
round 728, predicted reward 29.327862,predicted upper bound 29.340013,actual reward 31.945575
round 729, predicted reward 27.511380,predicted upper bound 27.521273,actual reward 26.512392
round 730, predicted reward 30.390499,predicted upper bound 30.400647,actual reward 24.148067
round 731, predicted reward 33.060098,predicted upper bound 33.071799,actual reward 30.740461
round 732, predicted reward 28.387689,predicted upper bound 28.398800,actual reward 32.000720
round 733, predicted reward 22.237775,predicted upper bound 22.251164,actual reward 22.062546
round 734, predicted reward 29.531601,predicted upper bound 29.542861,actual reward 34.018870
round 735, predicted reward 25.074806,predicted upper bound 25.087352,actual reward 18.746153
round 736, predicted reward 29.521365,predicted upper bound 29.532525,actual reward 32.581890
round 737, predicted reward 25.302241,predicted upper bound 25.314031,actual reward 25.959032
round 738, predicted reward 25.799731,predicted upper bound 25.810320,actual reward 21.836669
round 739, predicted reward 24.689648,predicted upper bound 24.700955,actual reward 24.321185
round 740, predicted reward 27.569243,predicted upper bound 27.582492,actual reward 36.227284
round 741, predicted reward 23.337877,predicted upper bound 23.349635,actual reward 23.013423
round 742, predicted reward 27.209366,predicted upper bound 27.221252,actual reward 27.545870
round 743, predicted reward 24.541307,predicted upper bound 24.551503,actual reward 28.637498
round 744, predicted reward 27.107219,predicted upper bound 27.119657,actual reward 24.275175
round 745, predicted reward 22.102178,predicted upper bound 22.114272,actual reward 21.004012
round 746, predicted reward 31.600246,predicted upper bound 31.609442,actual reward 29.055848
round 747, predicted reward 21.980852,predicted upper bound 21.994515,actual reward 20.345785
round 748, predicted reward 28.023262,predicted upper bound 28.034560,actual reward 28.167501
round 749, predicted reward 23.996476,predicted upper bound 24.009304,actual reward 25.166135
round 750, predicted reward 27.505889,predicted upper bound 27.515728,actual reward 26.185887
round 751, predicted reward 26.856880,predicted upper bound 26.870606,actual reward 28.102724
round 752, predicted reward 23.580675,predicted upper bound 23.592829,actual reward 21.213042
round 753, predicted reward 24.077495,predicted upper bound 24.089747,actual reward 24.673494
round 754, predicted reward 29.401312,predicted upper bound 29.411329,actual reward 28.514186
round 755, predicted reward 27.098119,predicted upper bound 27.109344,actual reward 22.436178
round 756, predicted reward 25.311490,predicted upper bound 25.322833,actual reward 24.261228
round 757, predicted reward 26.859981,predicted upper bound 26.872726,actual reward 22.421760
round 758, predicted reward 29.178360,predicted upper bound 29.190541,actual reward 33.263547
round 759, predicted reward 22.593373,predicted upper bound 22.605338,actual reward 21.277976
round 760, predicted reward 25.241146,predicted upper bound 25.252059,actual reward 17.653098
round 761, predicted reward 33.375835,predicted upper bound 33.387263,actual reward 35.408531
round 762, predicted reward 31.305173,predicted upper bound 31.315138,actual reward 35.802580
round 763, predicted reward 23.235094,predicted upper bound 23.247799,actual reward 20.763785
round 764, predicted reward 25.649072,predicted upper bound 25.659917,actual reward 24.563699
round 765, predicted reward 28.078454,predicted upper bound 28.091238,actual reward 28.169920
round 766, predicted reward 27.391475,predicted upper bound 27.402839,actual reward 26.972942
round 767, predicted reward 22.373162,predicted upper bound 22.386301,actual reward 18.487825
round 768, predicted reward 23.260673,predicted upper bound 23.271313,actual reward 20.188439
round 769, predicted reward 21.776244,predicted upper bound 21.786961,actual reward 21.778014
round 770, predicted reward 24.828267,predicted upper bound 24.841136,actual reward 21.501955
round 771, predicted reward 30.512396,predicted upper bound 30.522843,actual reward 32.701877
round 772, predicted reward 31.754116,predicted upper bound 31.764676,actual reward 38.299640
round 773, predicted reward 28.751194,predicted upper bound 28.761058,actual reward 31.739013
round 774, predicted reward 28.085391,predicted upper bound 28.095661,actual reward 26.138364
round 775, predicted reward 32.363952,predicted upper bound 32.372595,actual reward 32.746799
round 776, predicted reward 25.428522,predicted upper bound 25.440309,actual reward 23.272516
round 777, predicted reward 20.863313,predicted upper bound 20.874534,actual reward 22.254521
round 778, predicted reward 25.768808,predicted upper bound 25.778987,actual reward 26.284050
round 779, predicted reward 27.032101,predicted upper bound 27.041691,actual reward 26.879204
round 780, predicted reward 19.074132,predicted upper bound 19.088274,actual reward 18.084229
round 781, predicted reward 26.501985,predicted upper bound 26.513074,actual reward 20.239976
round 782, predicted reward 32.696557,predicted upper bound 32.705858,actual reward 32.285075
round 783, predicted reward 25.281444,predicted upper bound 25.292714,actual reward 25.090832
round 784, predicted reward 25.648193,predicted upper bound 25.658440,actual reward 25.231746
round 785, predicted reward 26.350143,predicted upper bound 26.360697,actual reward 21.602137
round 786, predicted reward 23.938717,predicted upper bound 23.951323,actual reward 23.117642
round 787, predicted reward 28.097901,predicted upper bound 28.109685,actual reward 29.756016
round 788, predicted reward 26.542701,predicted upper bound 26.552967,actual reward 23.738926
round 789, predicted reward 20.627285,predicted upper bound 20.639080,actual reward 18.519128
round 790, predicted reward 28.265549,predicted upper bound 28.275944,actual reward 29.255553
round 791, predicted reward 28.812325,predicted upper bound 28.823426,actual reward 31.117293
round 792, predicted reward 30.210461,predicted upper bound 30.219479,actual reward 28.088166
round 793, predicted reward 29.941829,predicted upper bound 29.950875,actual reward 27.770951
round 794, predicted reward 27.722995,predicted upper bound 27.732922,actual reward 27.252859
round 795, predicted reward 27.054522,predicted upper bound 27.065019,actual reward 30.257823
round 796, predicted reward 24.470999,predicted upper bound 24.482122,actual reward 27.399765
round 797, predicted reward 28.692899,predicted upper bound 28.702418,actual reward 31.966456
round 798, predicted reward 25.838020,predicted upper bound 25.849614,actual reward 33.282280
round 799, predicted reward 25.147515,predicted upper bound 25.157935,actual reward 17.567362
round 800, predicted reward 24.387634,predicted upper bound 24.397835,actual reward 20.205890
round 801, predicted reward 23.532342,predicted upper bound 23.543964,actual reward 22.846605
round 802, predicted reward 27.884054,predicted upper bound 27.895802,actual reward 27.986271
[... output truncated: the per-round log of predicted reward, predicted upper bound, and actual reward continues in the same format for rounds 803 through 1477 ...]
round 1478, predicted reward 26.887599,predicted upper bound 26.893602,actual reward 29.244549
round 1479, predicted reward 24.068587,predicted upper bound 24.075607,actual reward 22.633157
round 1480, predicted reward 24.074803,predicted upper bound 24.081561,actual reward 20.500122
round 1481, predicted reward 23.482879,predicted upper bound 23.489040,actual reward 20.748956
round 1482, predicted reward 27.048213,predicted upper bound 27.053879,actual reward 20.705535
round 1483, predicted reward 23.961675,predicted upper bound 23.968409,actual reward 21.186820
round 1484, predicted reward 31.602611,predicted upper bound 31.609689,actual reward 31.851634
round 1485, predicted reward 30.078528,predicted upper bound 30.084029,actual reward 30.433977
round 1486, predicted reward 23.210959,predicted upper bound 23.216849,actual reward 20.935627
round 1487, predicted reward 29.013743,predicted upper bound 29.019198,actual reward 28.340024
round 1488, predicted reward 22.959711,predicted upper bound 22.967289,actual reward 19.650676
round 1489, predicted reward 22.279920,predicted upper bound 22.287352,actual reward 21.430090
round 1490, predicted reward 29.046873,predicted upper bound 29.053521,actual reward 27.864425
round 1491, predicted reward 23.181576,predicted upper bound 23.186974,actual reward 19.627825
round 1492, predicted reward 28.660128,predicted upper bound 28.666942,actual reward 27.095329
round 1493, predicted reward 28.338123,predicted upper bound 28.342975,actual reward 23.968061
round 1494, predicted reward 24.147575,predicted upper bound 24.155220,actual reward 24.074304
round 1495, predicted reward 32.279774,predicted upper bound 32.285184,actual reward 36.549700
round 1496, predicted reward 26.741243,predicted upper bound 26.746761,actual reward 22.345183
round 1497, predicted reward 25.754481,predicted upper bound 25.760527,actual reward 22.818264
round 1498, predicted reward 25.844604,predicted upper bound 25.851547,actual reward 27.117820
round 1499, predicted reward 29.943022,predicted upper bound 29.947746,actual reward 31.505276
round 1500, predicted reward 29.044884,predicted upper bound 29.051840,actual reward 25.855371
round 1501, predicted reward 21.996260,predicted upper bound 22.003469,actual reward 19.953404
round 1502, predicted reward 23.657341,predicted upper bound 23.664629,actual reward 23.055252
round 1503, predicted reward 23.155983,predicted upper bound 23.162341,actual reward 22.974570
round 1504, predicted reward 35.507403,predicted upper bound 35.513652,actual reward 35.879713
round 1505, predicted reward 37.794447,predicted upper bound 37.799855,actual reward 47.375307
round 1506, predicted reward 26.415423,predicted upper bound 26.421598,actual reward 25.911229
round 1507, predicted reward 31.203189,predicted upper bound 31.211054,actual reward 36.855749
round 1508, predicted reward 36.467909,predicted upper bound 36.473071,actual reward 36.701436
round 1509, predicted reward 35.826946,predicted upper bound 35.832589,actual reward 37.495430
round 1510, predicted reward 21.449325,predicted upper bound 21.457766,actual reward 19.271116
round 1511, predicted reward 32.156815,predicted upper bound 32.163558,actual reward 34.957089
round 1512, predicted reward 26.607845,predicted upper bound 26.615410,actual reward 26.129651
round 1513, predicted reward 25.353609,predicted upper bound 25.361146,actual reward 25.269248
round 1514, predicted reward 31.324966,predicted upper bound 31.332601,actual reward 29.659286
round 1515, predicted reward 22.572735,predicted upper bound 22.580656,actual reward 19.492208
round 1516, predicted reward 28.529506,predicted upper bound 28.537769,actual reward 29.954764
round 1517, predicted reward 28.418914,predicted upper bound 28.425049,actual reward 26.685014
round 1518, predicted reward 21.148340,predicted upper bound 21.154627,actual reward 24.154795
round 1519, predicted reward 20.236508,predicted upper bound 20.243520,actual reward 20.047127
round 1520, predicted reward 37.358144,predicted upper bound 37.363256,actual reward 40.095894
round 1521, predicted reward 28.202080,predicted upper bound 28.208311,actual reward 26.292369
round 1522, predicted reward 26.836277,predicted upper bound 26.843355,actual reward 27.978175
round 1523, predicted reward 29.531910,predicted upper bound 29.539014,actual reward 28.520552
round 1524, predicted reward 26.236420,predicted upper bound 26.244317,actual reward 29.431056
round 1525, predicted reward 22.540066,predicted upper bound 22.547151,actual reward 18.822201
round 1526, predicted reward 24.636955,predicted upper bound 24.644541,actual reward 21.509115
round 1527, predicted reward 24.747515,predicted upper bound 24.754978,actual reward 22.947505
round 1528, predicted reward 31.757515,predicted upper bound 31.763931,actual reward 31.964514
round 1529, predicted reward 31.432397,predicted upper bound 31.437625,actual reward 28.589266
round 1530, predicted reward 27.144614,predicted upper bound 27.151649,actual reward 26.175025
round 1531, predicted reward 36.568532,predicted upper bound 36.574967,actual reward 44.643200
round 1532, predicted reward 24.493732,predicted upper bound 24.500511,actual reward 25.371734
round 1533, predicted reward 29.315485,predicted upper bound 29.322155,actual reward 29.850153
round 1534, predicted reward 28.832186,predicted upper bound 28.838676,actual reward 23.550509
round 1535, predicted reward 30.827824,predicted upper bound 30.833979,actual reward 31.222860
round 1536, predicted reward 22.986020,predicted upper bound 22.993246,actual reward 22.236342
round 1537, predicted reward 35.206087,predicted upper bound 35.212570,actual reward 42.548779
round 1538, predicted reward 24.591818,predicted upper bound 24.599620,actual reward 23.470691
round 1539, predicted reward 28.113764,predicted upper bound 28.120285,actual reward 24.358748
round 1540, predicted reward 35.267752,predicted upper bound 35.273126,actual reward 39.563120
round 1541, predicted reward 25.599911,predicted upper bound 25.608212,actual reward 20.343591
round 1542, predicted reward 28.005058,predicted upper bound 28.012033,actual reward 27.075617
round 1543, predicted reward 28.659487,predicted upper bound 28.666527,actual reward 26.038195
round 1544, predicted reward 30.253311,predicted upper bound 30.259667,actual reward 34.575398
round 1545, predicted reward 25.480253,predicted upper bound 25.487156,actual reward 25.419217
round 1546, predicted reward 28.421434,predicted upper bound 28.427383,actual reward 29.408950
round 1547, predicted reward 32.276899,predicted upper bound 32.283314,actual reward 29.240187
round 1548, predicted reward 19.663317,predicted upper bound 19.668823,actual reward 13.506849
round 1549, predicted reward 27.780665,predicted upper bound 27.786680,actual reward 32.213828
round 1550, predicted reward 27.560934,predicted upper bound 27.567802,actual reward 22.370443
round 1551, predicted reward 27.795992,predicted upper bound 27.802916,actual reward 32.784048
round 1552, predicted reward 22.648596,predicted upper bound 22.656798,actual reward 18.432335
round 1553, predicted reward 20.955095,predicted upper bound 20.961044,actual reward 16.551795
round 1554, predicted reward 29.537429,predicted upper bound 29.543010,actual reward 33.071744
round 1555, predicted reward 26.520545,predicted upper bound 26.527001,actual reward 25.066876
round 1556, predicted reward 24.399153,predicted upper bound 24.405658,actual reward 26.460793
round 1557, predicted reward 30.497275,predicted upper bound 30.503675,actual reward 28.366773
round 1558, predicted reward 24.301713,predicted upper bound 24.308789,actual reward 23.457073
round 1559, predicted reward 24.474561,predicted upper bound 24.480224,actual reward 22.863214
round 1560, predicted reward 25.228873,predicted upper bound 25.234965,actual reward 23.608360
round 1561, predicted reward 23.846569,predicted upper bound 23.854347,actual reward 17.047728
round 1562, predicted reward 28.989759,predicted upper bound 28.994978,actual reward 33.762617
round 1563, predicted reward 35.363705,predicted upper bound 35.369419,actual reward 32.515461
round 1564, predicted reward 21.748913,predicted upper bound 21.756877,actual reward 21.708969
round 1565, predicted reward 27.773287,predicted upper bound 27.779271,actual reward 25.488040
round 1566, predicted reward 34.135939,predicted upper bound 34.142706,actual reward 39.229962
round 1567, predicted reward 24.729311,predicted upper bound 24.735573,actual reward 22.750445
round 1568, predicted reward 27.629665,predicted upper bound 27.635711,actual reward 22.736341
round 1569, predicted reward 28.942889,predicted upper bound 28.948547,actual reward 32.139841
round 1570, predicted reward 22.135403,predicted upper bound 22.144027,actual reward 21.598473
round 1571, predicted reward 29.987273,predicted upper bound 29.992606,actual reward 27.745497
round 1572, predicted reward 31.050585,predicted upper bound 31.056574,actual reward 33.935761
round 1573, predicted reward 28.035729,predicted upper bound 28.041395,actual reward 28.948308
round 1574, predicted reward 27.196028,predicted upper bound 27.202220,actual reward 22.513079
round 1575, predicted reward 30.529918,predicted upper bound 30.536751,actual reward 29.767754
round 1576, predicted reward 29.786243,predicted upper bound 29.793647,actual reward 28.552597
round 1577, predicted reward 25.946119,predicted upper bound 25.952352,actual reward 26.088552
round 1578, predicted reward 26.257568,predicted upper bound 26.264120,actual reward 21.474290
round 1579, predicted reward 30.658609,predicted upper bound 30.663661,actual reward 29.112503
round 1580, predicted reward 22.795455,predicted upper bound 22.801874,actual reward 22.377053
round 1581, predicted reward 23.013390,predicted upper bound 23.020668,actual reward 16.436692
round 1582, predicted reward 30.218277,predicted upper bound 30.224980,actual reward 30.515080
round 1583, predicted reward 24.781497,predicted upper bound 24.788134,actual reward 25.005261
round 1584, predicted reward 33.910396,predicted upper bound 33.917231,actual reward 34.395947
round 1585, predicted reward 27.145263,predicted upper bound 27.151968,actual reward 28.974411
round 1586, predicted reward 24.405443,predicted upper bound 24.412534,actual reward 23.164468
round 1587, predicted reward 23.348794,predicted upper bound 23.356527,actual reward 21.580811
round 1588, predicted reward 31.333787,predicted upper bound 31.339516,actual reward 28.676350
round 1589, predicted reward 24.558528,predicted upper bound 24.565343,actual reward 22.837678
round 1590, predicted reward 25.947871,predicted upper bound 25.954307,actual reward 23.834143
round 1591, predicted reward 25.780172,predicted upper bound 25.787194,actual reward 22.132077
round 1592, predicted reward 28.168913,predicted upper bound 28.175321,actual reward 24.323680
round 1593, predicted reward 27.353371,predicted upper bound 27.359627,actual reward 28.196486
round 1594, predicted reward 30.588506,predicted upper bound 30.594793,actual reward 35.270784
round 1595, predicted reward 34.804783,predicted upper bound 34.810661,actual reward 33.020025
round 1596, predicted reward 23.107544,predicted upper bound 23.114990,actual reward 20.277862
round 1597, predicted reward 23.336538,predicted upper bound 23.343189,actual reward 26.889266
round 1598, predicted reward 29.110846,predicted upper bound 29.116361,actual reward 28.067417
round 1599, predicted reward 17.167349,predicted upper bound 17.175602,actual reward 16.086935
round 1600, predicted reward 33.312012,predicted upper bound 33.317459,actual reward 31.833784
round 1601, predicted reward 32.558454,predicted upper bound 32.565621,actual reward 29.499012
round 1602, predicted reward 28.837829,predicted upper bound 28.843727,actual reward 31.203812
round 1603, predicted reward 27.496335,predicted upper bound 27.503199,actual reward 26.788869
round 1604, predicted reward 30.450759,predicted upper bound 30.458419,actual reward 34.925962
round 1605, predicted reward 23.275850,predicted upper bound 23.283062,actual reward 20.449564
round 1606, predicted reward 31.817892,predicted upper bound 31.823707,actual reward 34.956623
round 1607, predicted reward 22.454349,predicted upper bound 22.461418,actual reward 21.721957
round 1608, predicted reward 24.688636,predicted upper bound 24.694602,actual reward 27.849367
round 1609, predicted reward 25.635062,predicted upper bound 25.642175,actual reward 26.121616
round 1610, predicted reward 28.835234,predicted upper bound 28.841289,actual reward 28.319314
round 1611, predicted reward 29.218568,predicted upper bound 29.224121,actual reward 27.313885
round 1612, predicted reward 30.267385,predicted upper bound 30.274486,actual reward 27.015882
round 1613, predicted reward 29.673568,predicted upper bound 29.679142,actual reward 35.357550
round 1614, predicted reward 30.531573,predicted upper bound 30.537104,actual reward 30.881705
round 1615, predicted reward 24.462536,predicted upper bound 24.469246,actual reward 23.560290
round 1616, predicted reward 25.126677,predicted upper bound 25.132426,actual reward 22.758818
round 1617, predicted reward 28.202420,predicted upper bound 28.207840,actual reward 32.474977
round 1618, predicted reward 26.124160,predicted upper bound 26.129455,actual reward 21.160881
round 1619, predicted reward 22.200545,predicted upper bound 22.208755,actual reward 18.975220
round 1620, predicted reward 27.299343,predicted upper bound 27.307053,actual reward 23.565488
round 1621, predicted reward 29.330768,predicted upper bound 29.338762,actual reward 26.930291
round 1622, predicted reward 20.804965,predicted upper bound 20.812333,actual reward 20.001350
round 1623, predicted reward 33.274205,predicted upper bound 33.280821,actual reward 37.730266
round 1624, predicted reward 28.522977,predicted upper bound 28.530435,actual reward 25.675728
round 1625, predicted reward 24.243651,predicted upper bound 24.250635,actual reward 19.705192
round 1626, predicted reward 19.268511,predicted upper bound 19.276445,actual reward 17.009337
round 1627, predicted reward 21.268342,predicted upper bound 21.275480,actual reward 22.333582
round 1628, predicted reward 30.019580,predicted upper bound 30.025766,actual reward 31.020486
round 1629, predicted reward 24.768397,predicted upper bound 24.776499,actual reward 26.537827
round 1630, predicted reward 25.975148,predicted upper bound 25.982675,actual reward 25.857780
round 1631, predicted reward 25.395997,predicted upper bound 25.404597,actual reward 23.782274
round 1632, predicted reward 21.490461,predicted upper bound 21.498161,actual reward 21.594375
round 1633, predicted reward 23.405962,predicted upper bound 23.412400,actual reward 24.818518
round 1634, predicted reward 21.846968,predicted upper bound 21.853358,actual reward 18.337528
round 1635, predicted reward 24.764429,predicted upper bound 24.771186,actual reward 23.823825
round 1636, predicted reward 22.897617,predicted upper bound 22.902885,actual reward 23.149018
round 1637, predicted reward 28.847835,predicted upper bound 28.855257,actual reward 31.522367
round 1638, predicted reward 30.544413,predicted upper bound 30.550631,actual reward 31.953249
round 1639, predicted reward 25.094654,predicted upper bound 25.101444,actual reward 23.208591
round 1640, predicted reward 20.056987,predicted upper bound 20.063234,actual reward 16.783669
round 1641, predicted reward 28.382138,predicted upper bound 28.388895,actual reward 26.099833
round 1642, predicted reward 22.673832,predicted upper bound 22.681646,actual reward 23.910519
round 1643, predicted reward 28.322698,predicted upper bound 28.328995,actual reward 29.395805
round 1644, predicted reward 25.625800,predicted upper bound 25.632452,actual reward 28.251698
round 1645, predicted reward 28.246634,predicted upper bound 28.253298,actual reward 28.519346
round 1646, predicted reward 23.287352,predicted upper bound 23.293901,actual reward 18.502728
round 1647, predicted reward 28.692300,predicted upper bound 28.698730,actual reward 27.863075
round 1648, predicted reward 24.109068,predicted upper bound 24.114722,actual reward 20.461912
round 1649, predicted reward 29.067040,predicted upper bound 29.074121,actual reward 29.318704
round 1650, predicted reward 25.634593,predicted upper bound 25.641858,actual reward 25.079598
round 1651, predicted reward 32.152091,predicted upper bound 32.158471,actual reward 31.118403
round 1652, predicted reward 24.370437,predicted upper bound 24.376981,actual reward 25.067602
round 1653, predicted reward 30.690991,predicted upper bound 30.696910,actual reward 29.227906
round 1654, predicted reward 26.698443,predicted upper bound 26.705622,actual reward 27.120538
round 1655, predicted reward 22.833620,predicted upper bound 22.841341,actual reward 24.459103
round 1656, predicted reward 24.639406,predicted upper bound 24.647266,actual reward 23.824216
round 1657, predicted reward 26.019663,predicted upper bound 26.026331,actual reward 24.861720
round 1658, predicted reward 23.366523,predicted upper bound 23.374710,actual reward 17.289297
round 1659, predicted reward 31.954595,predicted upper bound 31.962078,actual reward 33.175393
round 1660, predicted reward 34.047319,predicted upper bound 34.053347,actual reward 36.652558
round 1661, predicted reward 26.157387,predicted upper bound 26.163808,actual reward 32.674773
round 1662, predicted reward 33.249582,predicted upper bound 33.256345,actual reward 36.160137
round 1663, predicted reward 26.705633,predicted upper bound 26.711934,actual reward 28.500450
round 1664, predicted reward 39.768086,predicted upper bound 39.774749,actual reward 41.854409
round 1665, predicted reward 26.251930,predicted upper bound 26.257907,actual reward 26.007447
round 1666, predicted reward 23.253364,predicted upper bound 23.260529,actual reward 27.184070
round 1667, predicted reward 21.192483,predicted upper bound 21.198601,actual reward 21.038350
round 1668, predicted reward 33.814195,predicted upper bound 33.820753,actual reward 31.274187
round 1669, predicted reward 34.954134,predicted upper bound 34.958639,actual reward 36.108553
round 1670, predicted reward 28.589514,predicted upper bound 28.596037,actual reward 28.921584
round 1671, predicted reward 31.924715,predicted upper bound 31.930056,actual reward 35.351475
round 1672, predicted reward 28.523474,predicted upper bound 28.530091,actual reward 28.356801
round 1673, predicted reward 22.799223,predicted upper bound 22.804315,actual reward 14.353874
round 1674, predicted reward 22.692279,predicted upper bound 22.698578,actual reward 21.021609
round 1675, predicted reward 31.668677,predicted upper bound 31.674953,actual reward 34.558286
round 1676, predicted reward 24.931861,predicted upper bound 24.938505,actual reward 22.621798
round 1677, predicted reward 27.710963,predicted upper bound 27.717597,actual reward 25.449821
round 1678, predicted reward 31.321084,predicted upper bound 31.327201,actual reward 33.167178
round 1679, predicted reward 30.527136,predicted upper bound 30.533631,actual reward 32.147161
round 1680, predicted reward 21.806834,predicted upper bound 21.813304,actual reward 19.640140
round 1681, predicted reward 29.981175,predicted upper bound 29.986360,actual reward 32.371832
round 1682, predicted reward 25.763911,predicted upper bound 25.770598,actual reward 26.392054
round 1683, predicted reward 25.248193,predicted upper bound 25.253639,actual reward 21.347786
round 1684, predicted reward 30.411005,predicted upper bound 30.416155,actual reward 29.167566
round 1685, predicted reward 20.541960,predicted upper bound 20.549321,actual reward 18.188682
round 1686, predicted reward 28.812984,predicted upper bound 28.818379,actual reward 29.215460
round 1687, predicted reward 31.553890,predicted upper bound 31.560346,actual reward 32.559151
round 1688, predicted reward 24.169843,predicted upper bound 24.176810,actual reward 20.566403
round 1689, predicted reward 20.577370,predicted upper bound 20.584554,actual reward 15.060943
round 1690, predicted reward 30.813378,predicted upper bound 30.819558,actual reward 33.293819
round 1691, predicted reward 27.481228,predicted upper bound 27.487713,actual reward 26.296436
round 1692, predicted reward 17.732101,predicted upper bound 17.737926,actual reward 14.857797
round 1693, predicted reward 21.195273,predicted upper bound 21.202832,actual reward 22.052190
round 1694, predicted reward 26.709979,predicted upper bound 26.716118,actual reward 24.410480
round 1695, predicted reward 29.752902,predicted upper bound 29.759525,actual reward 28.115260
round 1696, predicted reward 34.299936,predicted upper bound 34.304896,actual reward 39.168357
round 1697, predicted reward 26.491029,predicted upper bound 26.495814,actual reward 26.846165
round 1698, predicted reward 21.506032,predicted upper bound 21.512925,actual reward 14.895809
round 1699, predicted reward 24.794321,predicted upper bound 24.801176,actual reward 18.363251
round 1700, predicted reward 25.519636,predicted upper bound 25.527258,actual reward 22.839802
round 1701, predicted reward 20.828173,predicted upper bound 20.834864,actual reward 18.167309
round 1702, predicted reward 27.650114,predicted upper bound 27.655891,actual reward 25.214085
round 1703, predicted reward 24.168363,predicted upper bound 24.173968,actual reward 27.866684
round 1704, predicted reward 22.716193,predicted upper bound 22.721897,actual reward 19.613663
round 1705, predicted reward 24.878822,predicted upper bound 24.884565,actual reward 22.100796
round 1706, predicted reward 28.562220,predicted upper bound 28.568812,actual reward 28.255748
round 1707, predicted reward 32.804983,predicted upper bound 32.810192,actual reward 32.801477
round 1708, predicted reward 38.039200,predicted upper bound 38.045459,actual reward 38.591137
round 1709, predicted reward 31.521461,predicted upper bound 31.527962,actual reward 32.765802
round 1710, predicted reward 20.231593,predicted upper bound 20.237827,actual reward 17.782449
round 1711, predicted reward 26.135833,predicted upper bound 26.141495,actual reward 28.381198
round 1712, predicted reward 30.020131,predicted upper bound 30.026054,actual reward 32.061507
round 1713, predicted reward 28.682066,predicted upper bound 28.688741,actual reward 27.466266
round 1714, predicted reward 27.568008,predicted upper bound 27.574648,actual reward 26.710012
round 1715, predicted reward 26.304051,predicted upper bound 26.310232,actual reward 25.825225
round 1716, predicted reward 23.821334,predicted upper bound 23.828328,actual reward 22.681176
round 1717, predicted reward 30.636391,predicted upper bound 30.642757,actual reward 32.629864
round 1718, predicted reward 20.690591,predicted upper bound 20.696864,actual reward 16.129806
round 1719, predicted reward 29.838179,predicted upper bound 29.844113,actual reward 32.345067
round 1720, predicted reward 28.492334,predicted upper bound 28.498552,actual reward 25.485926
round 1721, predicted reward 29.838783,predicted upper bound 29.844432,actual reward 31.002716
round 1722, predicted reward 26.094670,predicted upper bound 26.100979,actual reward 25.536870
round 1723, predicted reward 31.870220,predicted upper bound 31.876456,actual reward 35.698641
round 1724, predicted reward 24.246616,predicted upper bound 24.251874,actual reward 22.374535
round 1725, predicted reward 28.909291,predicted upper bound 28.915707,actual reward 26.474532
round 1726, predicted reward 27.106296,predicted upper bound 27.112743,actual reward 23.039830
round 1727, predicted reward 30.319166,predicted upper bound 30.324897,actual reward 29.544600
round 1728, predicted reward 25.270964,predicted upper bound 25.277754,actual reward 19.823285
round 1729, predicted reward 22.802064,predicted upper bound 22.807937,actual reward 21.973614
round 1730, predicted reward 27.273243,predicted upper bound 27.279638,actual reward 23.572925
round 1731, predicted reward 23.662688,predicted upper bound 23.668839,actual reward 20.292368
round 1732, predicted reward 39.909641,predicted upper bound 39.914461,actual reward 41.662831
round 1733, predicted reward 26.380496,predicted upper bound 26.386195,actual reward 26.536286
round 1734, predicted reward 25.482018,predicted upper bound 25.487657,actual reward 27.698612
round 1735, predicted reward 25.106164,predicted upper bound 25.111816,actual reward 27.288662
round 1736, predicted reward 22.729106,predicted upper bound 22.734531,actual reward 19.121002
round 1737, predicted reward 32.354037,predicted upper bound 32.360186,actual reward 30.756867
round 1738, predicted reward 27.461023,predicted upper bound 27.467970,actual reward 28.250295
round 1739, predicted reward 27.659938,predicted upper bound 27.665313,actual reward 29.955945
round 1740, predicted reward 27.316812,predicted upper bound 27.322858,actual reward 27.554865
round 1741, predicted reward 30.998766,predicted upper bound 31.004746,actual reward 26.137145
round 1742, predicted reward 30.716285,predicted upper bound 30.721102,actual reward 27.425463
round 1743, predicted reward 25.972672,predicted upper bound 25.977619,actual reward 23.503398
round 1744, predicted reward 27.331352,predicted upper bound 27.336814,actual reward 25.965486
round 1745, predicted reward 23.623106,predicted upper bound 23.630068,actual reward 21.953902
round 1746, predicted reward 28.089219,predicted upper bound 28.094578,actual reward 28.880524
round 1747, predicted reward 22.527326,predicted upper bound 22.533720,actual reward 17.609628
round 1748, predicted reward 29.899947,predicted upper bound 29.905667,actual reward 36.326332
round 1749, predicted reward 29.898326,predicted upper bound 29.903571,actual reward 33.182054
round 1750, predicted reward 25.755672,predicted upper bound 25.761956,actual reward 19.963518
round 1751, predicted reward 28.069582,predicted upper bound 28.074830,actual reward 26.518521
round 1752, predicted reward 33.428826,predicted upper bound 33.433700,actual reward 33.461944
round 1753, predicted reward 30.322243,predicted upper bound 30.327556,actual reward 31.424933
round 1754, predicted reward 26.068994,predicted upper bound 26.075775,actual reward 28.440516
round 1755, predicted reward 29.339676,predicted upper bound 29.345232,actual reward 31.334906
round 1756, predicted reward 31.593687,predicted upper bound 31.599634,actual reward 40.161928
round 1757, predicted reward 31.252640,predicted upper bound 31.258462,actual reward 33.221951
round 1758, predicted reward 34.506475,predicted upper bound 34.511468,actual reward 39.668439
round 1759, predicted reward 21.502542,predicted upper bound 21.508492,actual reward 20.136601
round 1760, predicted reward 27.985321,predicted upper bound 27.991915,actual reward 27.448100
round 1761, predicted reward 21.330092,predicted upper bound 21.337282,actual reward 20.283196
round 1762, predicted reward 23.127459,predicted upper bound 23.133188,actual reward 20.450177
round 1763, predicted reward 23.293110,predicted upper bound 23.299702,actual reward 20.269584
round 1764, predicted reward 27.625236,predicted upper bound 27.630826,actual reward 28.867759
round 1765, predicted reward 31.129500,predicted upper bound 31.135517,actual reward 24.859678
round 1766, predicted reward 29.478306,predicted upper bound 29.484717,actual reward 25.443222
round 1767, predicted reward 30.016480,predicted upper bound 30.022305,actual reward 30.670680
round 1768, predicted reward 25.345170,predicted upper bound 25.350057,actual reward 19.369123
round 1769, predicted reward 28.906141,predicted upper bound 28.912124,actual reward 26.700690
round 1770, predicted reward 29.900600,predicted upper bound 29.905446,actual reward 29.770549
round 1771, predicted reward 26.103110,predicted upper bound 26.108293,actual reward 29.762136
round 1772, predicted reward 24.519786,predicted upper bound 24.526126,actual reward 19.349060
round 1773, predicted reward 25.237375,predicted upper bound 25.243369,actual reward 23.534662
round 1774, predicted reward 29.600981,predicted upper bound 29.606890,actual reward 25.872827
round 1775, predicted reward 22.929979,predicted upper bound 22.935944,actual reward 21.980727
round 1776, predicted reward 28.920085,predicted upper bound 28.925131,actual reward 21.651651
round 1777, predicted reward 28.287308,predicted upper bound 28.292925,actual reward 28.850688
round 1778, predicted reward 25.728488,predicted upper bound 25.734283,actual reward 22.656801
round 1779, predicted reward 26.344021,predicted upper bound 26.349862,actual reward 28.091390
round 1780, predicted reward 28.884812,predicted upper bound 28.890565,actual reward 30.236382
round 1781, predicted reward 28.404883,predicted upper bound 28.411328,actual reward 27.860066
round 1782, predicted reward 26.266176,predicted upper bound 26.272739,actual reward 27.112118
round 1783, predicted reward 23.054456,predicted upper bound 23.061948,actual reward 23.688108
round 1784, predicted reward 31.668051,predicted upper bound 31.672557,actual reward 33.866673
round 1785, predicted reward 19.620306,predicted upper bound 19.626564,actual reward 23.899296
round 1786, predicted reward 32.688769,predicted upper bound 32.694449,actual reward 31.849084
round 1787, predicted reward 28.482968,predicted upper bound 28.488337,actual reward 29.231576
round 1788, predicted reward 25.661467,predicted upper bound 25.667087,actual reward 23.287874
round 1789, predicted reward 24.142532,predicted upper bound 24.148277,actual reward 23.703654
round 1790, predicted reward 29.046068,predicted upper bound 29.051215,actual reward 26.876546
round 1791, predicted reward 29.507012,predicted upper bound 29.512571,actual reward 32.398635
round 1792, predicted reward 29.435892,predicted upper bound 29.440753,actual reward 30.870120
round 1793, predicted reward 25.743646,predicted upper bound 25.749259,actual reward 24.708199
round 1794, predicted reward 26.617089,predicted upper bound 26.623076,actual reward 22.051348
round 1795, predicted reward 27.509880,predicted upper bound 27.516968,actual reward 21.954888
round 1796, predicted reward 32.903986,predicted upper bound 32.909297,actual reward 36.221464
round 1797, predicted reward 32.360757,predicted upper bound 32.366550,actual reward 34.827814
round 1798, predicted reward 23.698248,predicted upper bound 23.704958,actual reward 24.065335
round 1799, predicted reward 21.740930,predicted upper bound 21.747753,actual reward 23.613614
round 1800, predicted reward 28.121166,predicted upper bound 28.126895,actual reward 28.382732
round 1801, predicted reward 30.045854,predicted upper bound 30.050400,actual reward 33.662366
round 1802, predicted reward 28.416385,predicted upper bound 28.421978,actual reward 25.081252
round 1803, predicted reward 23.739428,predicted upper bound 23.745199,actual reward 23.056740
round 1804, predicted reward 32.385946,predicted upper bound 32.392558,actual reward 27.605476
round 1805, predicted reward 26.776932,predicted upper bound 26.783423,actual reward 28.888464
round 1806, predicted reward 24.024175,predicted upper bound 24.029781,actual reward 23.324532
round 1807, predicted reward 23.462314,predicted upper bound 23.467762,actual reward 17.337427
round 1808, predicted reward 29.654477,predicted upper bound 29.661069,actual reward 35.122262
round 1809, predicted reward 21.455231,predicted upper bound 21.461690,actual reward 20.522201
round 1810, predicted reward 22.259544,predicted upper bound 22.265226,actual reward 21.758385
round 1811, predicted reward 27.597689,predicted upper bound 27.604144,actual reward 29.768623
round 1812, predicted reward 31.835824,predicted upper bound 31.841142,actual reward 30.149501
round 1813, predicted reward 30.087020,predicted upper bound 30.092767,actual reward 25.564450
round 1814, predicted reward 25.035908,predicted upper bound 25.043887,actual reward 25.512466
round 1815, predicted reward 26.020642,predicted upper bound 26.027138,actual reward 19.420479
round 1816, predicted reward 25.740608,predicted upper bound 25.748333,actual reward 21.527873
round 1817, predicted reward 30.670749,predicted upper bound 30.675989,actual reward 33.600442
round 1818, predicted reward 32.074744,predicted upper bound 32.080277,actual reward 37.910273
round 1819, predicted reward 25.627384,predicted upper bound 25.634209,actual reward 20.833526
round 1820, predicted reward 28.087848,predicted upper bound 28.092673,actual reward 27.809748
round 1821, predicted reward 30.893571,predicted upper bound 30.900653,actual reward 28.306014
round 1822, predicted reward 24.746344,predicted upper bound 24.752320,actual reward 21.959185
round 1823, predicted reward 26.145592,predicted upper bound 26.151509,actual reward 29.753861
round 1824, predicted reward 32.477516,predicted upper bound 32.482423,actual reward 33.861466
round 1825, predicted reward 29.779861,predicted upper bound 29.786286,actual reward 29.105419
round 1826, predicted reward 23.548039,predicted upper bound 23.552962,actual reward 19.914301
round 1827, predicted reward 23.984218,predicted upper bound 23.990826,actual reward 19.723652
round 1828, predicted reward 32.816810,predicted upper bound 32.822924,actual reward 36.390698
round 1829, predicted reward 28.610412,predicted upper bound 28.617598,actual reward 29.146592
round 1830, predicted reward 25.522819,predicted upper bound 25.528042,actual reward 22.779629
round 1831, predicted reward 21.965360,predicted upper bound 21.970763,actual reward 15.850689
round 1832, predicted reward 27.567299,predicted upper bound 27.573217,actual reward 23.612303
round 1833, predicted reward 25.844012,predicted upper bound 25.849961,actual reward 25.677286
round 1834, predicted reward 28.603212,predicted upper bound 28.608461,actual reward 26.168801
round 1835, predicted reward 26.712884,predicted upper bound 26.717259,actual reward 23.614569
round 1836, predicted reward 21.282619,predicted upper bound 21.288327,actual reward 21.017918
round 1837, predicted reward 28.215131,predicted upper bound 28.221234,actual reward 29.313189
round 1838, predicted reward 29.504797,predicted upper bound 29.511004,actual reward 29.463540
round 1839, predicted reward 28.762542,predicted upper bound 28.768072,actual reward 26.116953
round 1840, predicted reward 30.398245,predicted upper bound 30.403275,actual reward 31.026819
round 1841, predicted reward 29.604850,predicted upper bound 29.610151,actual reward 31.580337
round 1842, predicted reward 26.909660,predicted upper bound 26.915802,actual reward 21.290241
round 1843, predicted reward 25.488477,predicted upper bound 25.494790,actual reward 22.309022
round 1844, predicted reward 22.600536,predicted upper bound 22.607055,actual reward 20.052080
round 1845, predicted reward 27.893957,predicted upper bound 27.899457,actual reward 25.657946
round 1846, predicted reward 25.068972,predicted upper bound 25.074984,actual reward 27.475731
round 1847, predicted reward 26.552364,predicted upper bound 26.557442,actual reward 28.849048
round 1848, predicted reward 29.819370,predicted upper bound 29.824848,actual reward 28.930927
round 1849, predicted reward 21.365671,predicted upper bound 21.372150,actual reward 19.495907
round 1850, predicted reward 30.449918,predicted upper bound 30.456691,actual reward 35.975202
round 1851, predicted reward 27.063340,predicted upper bound 27.069335,actual reward 27.176897
round 1852, predicted reward 24.664185,predicted upper bound 24.669535,actual reward 21.636742
round 1853, predicted reward 28.982557,predicted upper bound 28.988438,actual reward 30.976731
round 1854, predicted reward 27.079657,predicted upper bound 27.084784,actual reward 27.972547
round 1855, predicted reward 28.032507,predicted upper bound 28.038231,actual reward 22.019240
round 1856, predicted reward 32.010853,predicted upper bound 32.016593,actual reward 33.582880
round 1857, predicted reward 30.801024,predicted upper bound 30.805730,actual reward 31.067357
round 1858, predicted reward 28.276883,predicted upper bound 28.282399,actual reward 26.316474
round 1859, predicted reward 38.446701,predicted upper bound 38.451423,actual reward 40.320064
round 1860, predicted reward 25.869257,predicted upper bound 25.875305,actual reward 21.982990
round 1861, predicted reward 29.207508,predicted upper bound 29.212972,actual reward 30.107227
round 1862, predicted reward 22.357164,predicted upper bound 22.363471,actual reward 17.998464
round 1863, predicted reward 30.558056,predicted upper bound 30.564070,actual reward 29.124011
round 1864, predicted reward 24.246373,predicted upper bound 24.251027,actual reward 21.160804
round 1865, predicted reward 31.618143,predicted upper bound 31.623415,actual reward 21.290525
round 1866, predicted reward 33.622101,predicted upper bound 33.627427,actual reward 35.071425
round 1867, predicted reward 22.193585,predicted upper bound 22.200010,actual reward 21.269664
round 1868, predicted reward 29.305587,predicted upper bound 29.311486,actual reward 28.020526
round 1869, predicted reward 21.366045,predicted upper bound 21.373734,actual reward 23.620109
round 1870, predicted reward 25.632097,predicted upper bound 25.637342,actual reward 17.867721
round 1871, predicted reward 20.556433,predicted upper bound 20.562493,actual reward 18.974091
round 1872, predicted reward 35.551913,predicted upper bound 35.557861,actual reward 36.136690
round 1873, predicted reward 25.822697,predicted upper bound 25.828858,actual reward 22.506692
round 1874, predicted reward 23.583590,predicted upper bound 23.589035,actual reward 20.299083
round 1875, predicted reward 24.307386,predicted upper bound 24.312961,actual reward 18.676660
round 1876, predicted reward 23.454998,predicted upper bound 23.460312,actual reward 15.979806
round 1877, predicted reward 23.823556,predicted upper bound 23.829547,actual reward 25.402122
round 1878, predicted reward 27.039567,predicted upper bound 27.045036,actual reward 24.078604
round 1879, predicted reward 24.694369,predicted upper bound 24.700871,actual reward 22.845509
round 1880, predicted reward 37.706312,predicted upper bound 37.711965,actual reward 40.654658
round 1881, predicted reward 21.251529,predicted upper bound 21.258019,actual reward 23.298858
round 1882, predicted reward 22.440032,predicted upper bound 22.445608,actual reward 14.507302
round 1883, predicted reward 19.620010,predicted upper bound 19.626134,actual reward 21.919864
round 1884, predicted reward 22.606557,predicted upper bound 22.612995,actual reward 27.830759
round 1885, predicted reward 23.858294,predicted upper bound 23.863597,actual reward 19.093888
round 1886, predicted reward 31.047496,predicted upper bound 31.052350,actual reward 30.278392
round 1887, predicted reward 26.413412,predicted upper bound 26.418402,actual reward 25.138363
round 1888, predicted reward 33.504398,predicted upper bound 33.510101,actual reward 35.706347
round 1889, predicted reward 22.366973,predicted upper bound 22.373445,actual reward 20.735823
round 1890, predicted reward 31.620799,predicted upper bound 31.626619,actual reward 37.727372
round 1891, predicted reward 24.486216,predicted upper bound 24.493433,actual reward 21.862992
round 1892, predicted reward 24.951354,predicted upper bound 24.958271,actual reward 21.710784
round 1893, predicted reward 23.908851,predicted upper bound 23.914835,actual reward 20.130188
round 1894, predicted reward 26.266431,predicted upper bound 26.271931,actual reward 27.812903
round 1895, predicted reward 35.169819,predicted upper bound 35.174703,actual reward 36.148636
round 1896, predicted reward 26.134324,predicted upper bound 26.140410,actual reward 26.056170
round 1897, predicted reward 25.522426,predicted upper bound 25.529417,actual reward 28.133153
round 1898, predicted reward 30.760915,predicted upper bound 30.765819,actual reward 31.501943
round 1899, predicted reward 22.772123,predicted upper bound 22.778205,actual reward 26.972078
round 1900, predicted reward 36.194933,predicted upper bound 36.199471,actual reward 39.293082
round 1901, predicted reward 23.458412,predicted upper bound 23.464706,actual reward 24.791248
round 1902, predicted reward 30.569944,predicted upper bound 30.576511,actual reward 28.790150
round 1903, predicted reward 23.385660,predicted upper bound 23.392048,actual reward 20.451544
round 1904, predicted reward 31.166547,predicted upper bound 31.170884,actual reward 36.729511
round 1905, predicted reward 32.002581,predicted upper bound 32.006843,actual reward 35.160743
round 1906, predicted reward 29.677222,predicted upper bound 29.681681,actual reward 28.752053
round 1907, predicted reward 25.408127,predicted upper bound 25.414157,actual reward 22.473592
round 1908, predicted reward 26.984514,predicted upper bound 26.990574,actual reward 22.595844
round 1909, predicted reward 27.687880,predicted upper bound 27.693193,actual reward 21.832298
round 1910, predicted reward 27.689068,predicted upper bound 27.694455,actual reward 30.085007
round 1911, predicted reward 30.297689,predicted upper bound 30.302828,actual reward 29.283820
round 1912, predicted reward 28.090830,predicted upper bound 28.096844,actual reward 31.091339
round 1913, predicted reward 25.218916,predicted upper bound 25.226385,actual reward 21.593612
round 1914, predicted reward 28.912175,predicted upper bound 28.917459,actual reward 29.614617
round 1915, predicted reward 23.975830,predicted upper bound 23.982073,actual reward 23.440716
round 1916, predicted reward 27.754174,predicted upper bound 27.759674,actual reward 25.292146
round 1917, predicted reward 29.982996,predicted upper bound 29.987769,actual reward 31.801518
round 1918, predicted reward 30.160290,predicted upper bound 30.166645,actual reward 27.225508
round 1919, predicted reward 31.840662,predicted upper bound 31.846313,actual reward 32.074728
round 1920, predicted reward 22.813683,predicted upper bound 22.819736,actual reward 15.140420
round 1921, predicted reward 27.316550,predicted upper bound 27.322260,actual reward 20.373120
round 1922, predicted reward 30.471459,predicted upper bound 30.476563,actual reward 31.329693
round 1923, predicted reward 23.082706,predicted upper bound 23.088333,actual reward 20.824142
round 1924, predicted reward 28.086671,predicted upper bound 28.092534,actual reward 29.527645
round 1925, predicted reward 28.830407,predicted upper bound 28.835868,actual reward 27.867434
round 1926, predicted reward 21.375234,predicted upper bound 21.380982,actual reward 21.290889
round 1927, predicted reward 26.719336,predicted upper bound 26.726099,actual reward 26.197297
round 1928, predicted reward 30.071189,predicted upper bound 30.075991,actual reward 29.449115
round 1929, predicted reward 30.471890,predicted upper bound 30.477305,actual reward 30.349052
round 1930, predicted reward 34.594351,predicted upper bound 34.599299,actual reward 32.419985
round 1931, predicted reward 24.797459,predicted upper bound 24.802776,actual reward 22.461176
round 1932, predicted reward 26.221268,predicted upper bound 26.227731,actual reward 29.038700
round 1933, predicted reward 28.888113,predicted upper bound 28.893398,actual reward 24.396976
round 1934, predicted reward 33.287833,predicted upper bound 33.292502,actual reward 33.493156
round 1935, predicted reward 28.416721,predicted upper bound 28.422029,actual reward 26.569058
round 1936, predicted reward 30.198486,predicted upper bound 30.204309,actual reward 27.484228
round 1937, predicted reward 28.662907,predicted upper bound 28.668138,actual reward 29.673027
round 1938, predicted reward 25.017973,predicted upper bound 25.024343,actual reward 28.884529
round 1939, predicted reward 29.938330,predicted upper bound 29.944156,actual reward 34.275560
round 1940, predicted reward 22.642579,predicted upper bound 22.648629,actual reward 23.305016
round 1941, predicted reward 25.483530,predicted upper bound 25.488902,actual reward 25.699065
round 1942, predicted reward 34.443875,predicted upper bound 34.449251,actual reward 34.216656
round 1943, predicted reward 22.581832,predicted upper bound 22.586401,actual reward 18.523297
round 1944, predicted reward 34.879718,predicted upper bound 34.884876,actual reward 40.638870
round 1945, predicted reward 27.249251,predicted upper bound 27.254373,actual reward 25.415322
round 1946, predicted reward 27.301985,predicted upper bound 27.308069,actual reward 24.353097
round 1947, predicted reward 21.353291,predicted upper bound 21.359366,actual reward 22.204465
round 1948, predicted reward 26.940633,predicted upper bound 26.945991,actual reward 23.880056
round 1949, predicted reward 22.914974,predicted upper bound 22.922526,actual reward 21.637987
round 1950, predicted reward 21.886363,predicted upper bound 21.892249,actual reward 21.944981
round 1951, predicted reward 24.723800,predicted upper bound 24.729484,actual reward 23.808841
round 1952, predicted reward 29.868293,predicted upper bound 29.874895,actual reward 35.018094
round 1953, predicted reward 26.753872,predicted upper bound 26.759917,actual reward 27.220635
round 1954, predicted reward 35.584729,predicted upper bound 35.590074,actual reward 35.221511
round 1955, predicted reward 28.856317,predicted upper bound 28.862731,actual reward 26.132581
round 1956, predicted reward 27.830601,predicted upper bound 27.836394,actual reward 21.253843
round 1957, predicted reward 24.144906,predicted upper bound 24.150778,actual reward 24.895322
round 1958, predicted reward 27.832032,predicted upper bound 27.837531,actual reward 22.858300
round 1959, predicted reward 23.048056,predicted upper bound 23.053369,actual reward 21.280229
round 1960, predicted reward 25.906686,predicted upper bound 25.911761,actual reward 23.091028
round 1961, predicted reward 29.183576,predicted upper bound 29.190193,actual reward 26.682329
round 1962, predicted reward 28.115047,predicted upper bound 28.120694,actual reward 23.166897
round 1963, predicted reward 28.009540,predicted upper bound 28.016219,actual reward 30.303508
round 1964, predicted reward 30.552995,predicted upper bound 30.559442,actual reward 40.052616
round 1965, predicted reward 28.270068,predicted upper bound 28.275156,actual reward 27.463672
round 1966, predicted reward 25.304272,predicted upper bound 25.309727,actual reward 27.047139
round 1967, predicted reward 22.990311,predicted upper bound 22.996255,actual reward 25.224405
round 1968, predicted reward 30.598191,predicted upper bound 30.604134,actual reward 30.971480
round 1969, predicted reward 27.705213,predicted upper bound 27.711072,actual reward 27.425930
round 1970, predicted reward 20.754602,predicted upper bound 20.759316,actual reward 16.809658
round 1971, predicted reward 22.366500,predicted upper bound 22.373058,actual reward 21.054473
round 1972, predicted reward 33.898098,predicted upper bound 33.904585,actual reward 36.937056
round 1973, predicted reward 35.307061,predicted upper bound 35.311847,actual reward 35.749582
round 1974, predicted reward 30.108452,predicted upper bound 30.114582,actual reward 29.680852
round 1975, predicted reward 24.358921,predicted upper bound 24.364615,actual reward 26.163521
round 1976, predicted reward 24.314837,predicted upper bound 24.321246,actual reward 26.537058
round 1977, predicted reward 21.902963,predicted upper bound 21.909696,actual reward 20.327149
round 1978, predicted reward 33.093913,predicted upper bound 33.099830,actual reward 26.825718
round 1979, predicted reward 25.700717,predicted upper bound 25.706692,actual reward 26.233695
round 1980, predicted reward 36.176731,predicted upper bound 36.182028,actual reward 36.786978
round 1981, predicted reward 36.572882,predicted upper bound 36.578213,actual reward 49.524380
round 1982, predicted reward 29.697060,predicted upper bound 29.701644,actual reward 33.275539
round 1983, predicted reward 25.867103,predicted upper bound 25.871780,actual reward 22.886260
round 1984, predicted reward 31.704988,predicted upper bound 31.710674,actual reward 30.300752
round 1985, predicted reward 22.585395,predicted upper bound 22.592383,actual reward 22.090444
round 1986, predicted reward 27.859351,predicted upper bound 27.864915,actual reward 28.834850
round 1987, predicted reward 29.564525,predicted upper bound 29.569996,actual reward 28.430276
round 1988, predicted reward 23.553377,predicted upper bound 23.559360,actual reward 25.688265
round 1989, predicted reward 22.148252,predicted upper bound 22.153739,actual reward 19.095462
round 1990, predicted reward 20.032460,predicted upper bound 20.039286,actual reward 19.780212
round 1991, predicted reward 23.325789,predicted upper bound 23.331723,actual reward 23.953185
round 1992, predicted reward 29.563413,predicted upper bound 29.569559,actual reward 30.353872
round 1993, predicted reward 29.037299,predicted upper bound 29.042702,actual reward 32.254612
round 1994, predicted reward 26.706084,predicted upper bound 26.712557,actual reward 26.817471
round 1995, predicted reward 26.204506,predicted upper bound 26.211070,actual reward 25.851589
round 1996, predicted reward 32.108943,predicted upper bound 32.114509,actual reward 33.116165
round 1997, predicted reward 34.308468,predicted upper bound 34.313805,actual reward 40.010628
round 1998, predicted reward 20.527714,predicted upper bound 20.534440,actual reward 19.805144
round 1999, predicted reward 20.395682,predicted upper bound 20.401925,actual reward 14.901175
round 2000, predicted reward 26.353374,predicted upper bound 26.359690,actual reward 25.142495
round 2001, predicted reward 31.397229,predicted upper bound 31.402956,actual reward 31.269700
round 2002, predicted reward 23.199853,predicted upper bound 23.206479,actual reward 26.197144
round 2003, predicted reward 24.470627,predicted upper bound 24.476976,actual reward 22.934094
...
round 2675, predicted reward 24.458977,predicted upper bound 24.463648,actual reward 21.205852
round 2676, predicted reward 22.492379,predicted upper bound 22.496572,actual reward 21.142149

(Per-round output abridged: rounds 2002 to 2676 each report the predicted reward, the predicted upper confidence bound, and the actual reward observed in that round.)
round 2677, predicted reward 23.563446,predicted upper bound 23.569048,actual reward 21.836967
round 2678, predicted reward 30.087214,predicted upper bound 30.091527,actual reward 35.264549
round 2679, predicted reward 21.911066,predicted upper bound 21.915839,actual reward 18.915671
round 2680, predicted reward 18.982189,predicted upper bound 18.986843,actual reward 16.813387
round 2681, predicted reward 24.565356,predicted upper bound 24.570916,actual reward 22.805098
round 2682, predicted reward 23.731343,predicted upper bound 23.736436,actual reward 20.567820
round 2683, predicted reward 38.471647,predicted upper bound 38.476197,actual reward 42.292831
round 2684, predicted reward 30.540580,predicted upper bound 30.544374,actual reward 32.197983
round 2685, predicted reward 22.125577,predicted upper bound 22.130324,actual reward 20.029607
round 2686, predicted reward 35.880341,predicted upper bound 35.884210,actual reward 31.304179
round 2687, predicted reward 31.133029,predicted upper bound 31.137681,actual reward 32.616525
round 2688, predicted reward 20.303859,predicted upper bound 20.308385,actual reward 21.842789
round 2689, predicted reward 23.706394,predicted upper bound 23.711227,actual reward 22.628871
round 2690, predicted reward 28.239603,predicted upper bound 28.244622,actual reward 27.527144
round 2691, predicted reward 22.101986,predicted upper bound 22.106740,actual reward 27.437824
round 2692, predicted reward 31.177207,predicted upper bound 31.180803,actual reward 31.133985
round 2693, predicted reward 25.078727,predicted upper bound 25.083881,actual reward 22.914725
round 2694, predicted reward 33.338255,predicted upper bound 33.342787,actual reward 38.619877
round 2695, predicted reward 28.638644,predicted upper bound 28.642943,actual reward 25.610486
round 2696, predicted reward 26.337552,predicted upper bound 26.342177,actual reward 25.599230
round 2697, predicted reward 33.092794,predicted upper bound 33.097474,actual reward 33.533684
round 2698, predicted reward 30.161536,predicted upper bound 30.166803,actual reward 33.287263
round 2699, predicted reward 29.325980,predicted upper bound 29.330630,actual reward 25.454793
round 2700, predicted reward 29.791520,predicted upper bound 29.796871,actual reward 27.957798
round 2701, predicted reward 32.474189,predicted upper bound 32.479006,actual reward 36.088432
round 2702, predicted reward 22.997908,predicted upper bound 23.002601,actual reward 24.812218
round 2703, predicted reward 30.364495,predicted upper bound 30.369147,actual reward 29.585617
round 2704, predicted reward 28.007042,predicted upper bound 28.012039,actual reward 28.562693
round 2705, predicted reward 26.908829,predicted upper bound 26.914121,actual reward 19.550858
round 2706, predicted reward 26.889035,predicted upper bound 26.893165,actual reward 26.177323
round 2707, predicted reward 29.402538,predicted upper bound 29.407249,actual reward 26.010793
round 2708, predicted reward 26.291411,predicted upper bound 26.296448,actual reward 23.350149
round 2709, predicted reward 37.242965,predicted upper bound 37.247340,actual reward 42.337784
round 2710, predicted reward 23.004023,predicted upper bound 23.008004,actual reward 21.015851
round 2711, predicted reward 34.055003,predicted upper bound 34.058754,actual reward 31.088121
round 2712, predicted reward 29.814599,predicted upper bound 29.818762,actual reward 27.659557
round 2713, predicted reward 28.833045,predicted upper bound 28.837580,actual reward 32.083744
round 2714, predicted reward 29.813880,predicted upper bound 29.818583,actual reward 25.599467
round 2715, predicted reward 28.406365,predicted upper bound 28.410244,actual reward 32.203892
round 2716, predicted reward 27.978600,predicted upper bound 27.983194,actual reward 25.827166
round 2717, predicted reward 27.997295,predicted upper bound 28.001340,actual reward 24.929524
round 2718, predicted reward 23.906195,predicted upper bound 23.911544,actual reward 21.160777
round 2719, predicted reward 26.223303,predicted upper bound 26.227727,actual reward 26.788781
round 2720, predicted reward 23.569352,predicted upper bound 23.573887,actual reward 15.654229
round 2721, predicted reward 26.163679,predicted upper bound 26.168390,actual reward 29.781759
round 2722, predicted reward 21.948186,predicted upper bound 21.953010,actual reward 19.495054
round 2723, predicted reward 34.050562,predicted upper bound 34.054992,actual reward 32.268413
round 2724, predicted reward 21.746411,predicted upper bound 21.751851,actual reward 21.595637
round 2725, predicted reward 28.081946,predicted upper bound 28.085898,actual reward 22.940019
round 2726, predicted reward 25.900212,predicted upper bound 25.904870,actual reward 25.624477
round 2727, predicted reward 27.316022,predicted upper bound 27.320632,actual reward 27.715010
round 2728, predicted reward 26.978596,predicted upper bound 26.984216,actual reward 26.273613
round 2729, predicted reward 25.101764,predicted upper bound 25.107115,actual reward 24.881641
round 2730, predicted reward 21.344291,predicted upper bound 21.350081,actual reward 16.826368
round 2731, predicted reward 20.208504,predicted upper bound 20.213707,actual reward 16.876453
round 2732, predicted reward 21.554822,predicted upper bound 21.560140,actual reward 17.377447
round 2733, predicted reward 27.567782,predicted upper bound 27.571945,actual reward 27.239913
round 2734, predicted reward 25.466877,predicted upper bound 25.472068,actual reward 25.553935
round 2735, predicted reward 28.111892,predicted upper bound 28.116746,actual reward 25.332294
round 2736, predicted reward 25.313980,predicted upper bound 25.319566,actual reward 21.785366
round 2737, predicted reward 24.594262,predicted upper bound 24.598318,actual reward 26.826116
round 2738, predicted reward 27.131503,predicted upper bound 27.135676,actual reward 29.391143
round 2739, predicted reward 26.667418,predicted upper bound 26.671866,actual reward 26.887174
round 2740, predicted reward 28.646322,predicted upper bound 28.651125,actual reward 30.900003
round 2741, predicted reward 23.176993,predicted upper bound 23.180929,actual reward 20.425008
round 2742, predicted reward 30.752224,predicted upper bound 30.757041,actual reward 34.329429
round 2743, predicted reward 28.060368,predicted upper bound 28.065675,actual reward 29.401176
round 2744, predicted reward 31.276064,predicted upper bound 31.280990,actual reward 30.612738
round 2745, predicted reward 24.429343,predicted upper bound 24.434181,actual reward 26.811445
round 2746, predicted reward 25.582382,predicted upper bound 25.587198,actual reward 21.904755
round 2747, predicted reward 20.209941,predicted upper bound 20.214835,actual reward 14.876561
round 2748, predicted reward 28.043217,predicted upper bound 28.047926,actual reward 26.554468
round 2749, predicted reward 25.117576,predicted upper bound 25.122262,actual reward 24.349158
round 2750, predicted reward 31.044786,predicted upper bound 31.050055,actual reward 28.563233
round 2751, predicted reward 25.657413,predicted upper bound 25.662271,actual reward 27.758083
round 2752, predicted reward 34.161850,predicted upper bound 34.166541,actual reward 35.017070
round 2753, predicted reward 26.791819,predicted upper bound 26.796555,actual reward 32.730373
round 2754, predicted reward 31.813472,predicted upper bound 31.818423,actual reward 38.627936
round 2755, predicted reward 27.855560,predicted upper bound 27.860427,actual reward 33.176643
round 2756, predicted reward 23.643219,predicted upper bound 23.648394,actual reward 21.610852
round 2757, predicted reward 29.859718,predicted upper bound 29.864515,actual reward 26.832358
round 2758, predicted reward 25.200219,predicted upper bound 25.205304,actual reward 27.184709
round 2759, predicted reward 35.927030,predicted upper bound 35.931431,actual reward 39.659427
round 2760, predicted reward 20.774594,predicted upper bound 20.779482,actual reward 19.352514
round 2761, predicted reward 24.892919,predicted upper bound 24.897728,actual reward 26.516727
round 2762, predicted reward 25.351083,predicted upper bound 25.355054,actual reward 22.716795
round 2763, predicted reward 36.608048,predicted upper bound 36.612316,actual reward 39.830780
round 2764, predicted reward 29.909566,predicted upper bound 29.914925,actual reward 34.255466
round 2765, predicted reward 21.255611,predicted upper bound 21.259568,actual reward 24.269989
round 2766, predicted reward 24.792015,predicted upper bound 24.796143,actual reward 23.152312
round 2767, predicted reward 32.704216,predicted upper bound 32.708356,actual reward 36.234972
round 2768, predicted reward 29.637214,predicted upper bound 29.641831,actual reward 30.911317
round 2769, predicted reward 25.065356,predicted upper bound 25.069375,actual reward 24.561202
round 2770, predicted reward 26.413025,predicted upper bound 26.418268,actual reward 25.637052
round 2771, predicted reward 29.012836,predicted upper bound 29.016910,actual reward 28.982490
round 2772, predicted reward 31.193004,predicted upper bound 31.197399,actual reward 31.284044
round 2773, predicted reward 26.642241,predicted upper bound 26.646642,actual reward 27.966210
round 2774, predicted reward 27.027613,predicted upper bound 27.031940,actual reward 23.108832
round 2775, predicted reward 25.565058,predicted upper bound 25.569507,actual reward 24.905721
round 2776, predicted reward 31.482365,predicted upper bound 31.486927,actual reward 27.134497
round 2777, predicted reward 23.109217,predicted upper bound 23.114710,actual reward 21.051438
round 2778, predicted reward 17.867750,predicted upper bound 17.873438,actual reward 16.122295
round 2779, predicted reward 26.819270,predicted upper bound 26.824433,actual reward 22.489351
round 2780, predicted reward 28.109173,predicted upper bound 28.113867,actual reward 24.374329
round 2781, predicted reward 26.485197,predicted upper bound 26.490072,actual reward 26.012890
round 2782, predicted reward 30.398360,predicted upper bound 30.402995,actual reward 34.593108
round 2783, predicted reward 27.599813,predicted upper bound 27.604537,actual reward 28.143747
round 2784, predicted reward 19.270890,predicted upper bound 19.275734,actual reward 22.125580
round 2785, predicted reward 24.110168,predicted upper bound 24.114637,actual reward 19.200376
round 2786, predicted reward 27.733244,predicted upper bound 27.737661,actual reward 30.201300
round 2787, predicted reward 18.063288,predicted upper bound 18.067671,actual reward 11.990135
round 2788, predicted reward 30.556977,predicted upper bound 30.561769,actual reward 30.516734
round 2789, predicted reward 21.705866,predicted upper bound 21.710491,actual reward 20.084216
round 2790, predicted reward 23.788147,predicted upper bound 23.792534,actual reward 22.709562
round 2791, predicted reward 27.960363,predicted upper bound 27.964353,actual reward 29.201941
round 2792, predicted reward 26.017786,predicted upper bound 26.023132,actual reward 24.663331
round 2793, predicted reward 25.734135,predicted upper bound 25.739216,actual reward 22.596134
round 2794, predicted reward 23.858928,predicted upper bound 23.863842,actual reward 23.019900
round 2795, predicted reward 23.180768,predicted upper bound 23.185947,actual reward 20.650919
round 2796, predicted reward 21.474126,predicted upper bound 21.478824,actual reward 21.754652
round 2797, predicted reward 28.611148,predicted upper bound 28.616166,actual reward 27.217750
round 2798, predicted reward 33.458236,predicted upper bound 33.462196,actual reward 31.068243
round 2799, predicted reward 28.544817,predicted upper bound 28.549551,actual reward 33.986420
round 2800, predicted reward 31.634926,predicted upper bound 31.638711,actual reward 33.992224
round 2801, predicted reward 26.886919,predicted upper bound 26.892205,actual reward 31.122447
round 2802, predicted reward 24.890736,predicted upper bound 24.895596,actual reward 27.950028
round 2803, predicted reward 29.775705,predicted upper bound 29.780687,actual reward 33.833299
round 2804, predicted reward 21.175381,predicted upper bound 21.180107,actual reward 14.357170
round 2805, predicted reward 21.127369,predicted upper bound 21.132126,actual reward 16.683793
round 2806, predicted reward 33.832222,predicted upper bound 33.836493,actual reward 36.407937
round 2807, predicted reward 26.934359,predicted upper bound 26.939080,actual reward 25.901916
round 2808, predicted reward 28.863684,predicted upper bound 28.867122,actual reward 31.697573
round 2809, predicted reward 32.588228,predicted upper bound 32.591981,actual reward 36.125655
round 2810, predicted reward 22.977545,predicted upper bound 22.982885,actual reward 21.751474
round 2811, predicted reward 31.658411,predicted upper bound 31.662717,actual reward 32.915428
round 2812, predicted reward 27.458898,predicted upper bound 27.463350,actual reward 22.688707
round 2813, predicted reward 24.574677,predicted upper bound 24.579532,actual reward 23.304822
round 2814, predicted reward 22.532037,predicted upper bound 22.536921,actual reward 17.933199
round 2815, predicted reward 27.036564,predicted upper bound 27.041083,actual reward 25.751846
round 2816, predicted reward 24.994536,predicted upper bound 24.999066,actual reward 22.668013
round 2817, predicted reward 34.355650,predicted upper bound 34.359299,actual reward 37.227062
round 2818, predicted reward 27.760582,predicted upper bound 27.764956,actual reward 28.640430
round 2819, predicted reward 26.712901,predicted upper bound 26.717969,actual reward 25.264147
round 2820, predicted reward 31.668227,predicted upper bound 31.673298,actual reward 32.257514
round 2821, predicted reward 24.173725,predicted upper bound 24.178796,actual reward 31.039031
round 2822, predicted reward 21.476417,predicted upper bound 21.481504,actual reward 22.733572
round 2823, predicted reward 22.900471,predicted upper bound 22.905257,actual reward 20.140795
round 2824, predicted reward 33.020242,predicted upper bound 33.024575,actual reward 29.688008
round 2825, predicted reward 25.237982,predicted upper bound 25.242961,actual reward 21.886607
round 2826, predicted reward 30.034630,predicted upper bound 30.039131,actual reward 28.834868
round 2827, predicted reward 26.118280,predicted upper bound 26.123058,actual reward 28.424009
round 2828, predicted reward 24.760110,predicted upper bound 24.764760,actual reward 25.082563
round 2829, predicted reward 21.494585,predicted upper bound 21.499346,actual reward 21.397360
round 2830, predicted reward 32.097848,predicted upper bound 32.102782,actual reward 31.722146
round 2831, predicted reward 31.315169,predicted upper bound 31.320165,actual reward 36.716752
round 2832, predicted reward 30.707191,predicted upper bound 30.711493,actual reward 32.679684
round 2833, predicted reward 31.529326,predicted upper bound 31.534187,actual reward 32.640038
round 2834, predicted reward 22.146203,predicted upper bound 22.151899,actual reward 17.300943
round 2835, predicted reward 22.557291,predicted upper bound 22.562359,actual reward 17.544594
round 2836, predicted reward 26.234024,predicted upper bound 26.238923,actual reward 29.502493
round 2837, predicted reward 29.055222,predicted upper bound 29.059124,actual reward 30.257000
round 2838, predicted reward 29.322056,predicted upper bound 29.326999,actual reward 25.555318
round 2839, predicted reward 26.445022,predicted upper bound 26.450209,actual reward 22.892332
round 2840, predicted reward 23.722515,predicted upper bound 23.727859,actual reward 24.393460
round 2841, predicted reward 22.457544,predicted upper bound 22.462010,actual reward 16.771769
round 2842, predicted reward 27.928836,predicted upper bound 27.933665,actual reward 28.927391
round 2843, predicted reward 28.111777,predicted upper bound 28.115818,actual reward 26.041538
round 2844, predicted reward 28.169095,predicted upper bound 28.173418,actual reward 26.457771
round 2845, predicted reward 23.522184,predicted upper bound 23.527070,actual reward 23.708258
round 2846, predicted reward 17.804865,predicted upper bound 17.810938,actual reward 15.357513
round 2847, predicted reward 25.862569,predicted upper bound 25.867522,actual reward 23.060892
round 2848, predicted reward 24.542198,predicted upper bound 24.546558,actual reward 23.622656
round 2849, predicted reward 20.114877,predicted upper bound 20.119667,actual reward 13.396448
round 2850, predicted reward 29.546083,predicted upper bound 29.551618,actual reward 35.078403
round 2851, predicted reward 24.743757,predicted upper bound 24.749434,actual reward 25.412880
round 2852, predicted reward 30.122116,predicted upper bound 30.125393,actual reward 30.022071
round 2853, predicted reward 27.317665,predicted upper bound 27.321975,actual reward 26.042615
round 2854, predicted reward 28.168856,predicted upper bound 28.173061,actual reward 26.806819
round 2855, predicted reward 24.975308,predicted upper bound 24.979719,actual reward 22.334079
round 2856, predicted reward 26.393064,predicted upper bound 26.397354,actual reward 23.913246
round 2857, predicted reward 26.528247,predicted upper bound 26.533085,actual reward 29.513282
round 2858, predicted reward 23.843058,predicted upper bound 23.848810,actual reward 16.302448
round 2859, predicted reward 26.842460,predicted upper bound 26.846156,actual reward 25.445356
round 2860, predicted reward 28.394344,predicted upper bound 28.398338,actual reward 24.828768
round 2861, predicted reward 23.450916,predicted upper bound 23.455941,actual reward 17.816336
round 2862, predicted reward 27.057002,predicted upper bound 27.062371,actual reward 25.325452
round 2863, predicted reward 25.959391,predicted upper bound 25.964216,actual reward 23.331942
round 2864, predicted reward 31.562426,predicted upper bound 31.567493,actual reward 24.786517
round 2865, predicted reward 36.267649,predicted upper bound 36.271406,actual reward 39.345479
round 2866, predicted reward 28.443783,predicted upper bound 28.447923,actual reward 26.286624
round 2867, predicted reward 24.476853,predicted upper bound 24.482616,actual reward 20.483394
round 2868, predicted reward 25.087112,predicted upper bound 25.091220,actual reward 20.835856
round 2869, predicted reward 23.690482,predicted upper bound 23.695315,actual reward 19.959559
round 2870, predicted reward 28.211253,predicted upper bound 28.216015,actual reward 22.319389
round 2871, predicted reward 27.587403,predicted upper bound 27.591721,actual reward 29.494119
round 2872, predicted reward 35.263110,predicted upper bound 35.266869,actual reward 34.391687
round 2873, predicted reward 30.310071,predicted upper bound 30.313929,actual reward 30.627737
round 2874, predicted reward 26.095459,predicted upper bound 26.100654,actual reward 27.769623
round 2875, predicted reward 29.167345,predicted upper bound 29.171793,actual reward 27.258356
round 2876, predicted reward 36.593638,predicted upper bound 36.597320,actual reward 36.788237
round 2877, predicted reward 27.036632,predicted upper bound 27.041781,actual reward 23.178718
round 2878, predicted reward 21.091891,predicted upper bound 21.097482,actual reward 18.706842
round 2879, predicted reward 27.844121,predicted upper bound 27.847787,actual reward 31.913172
round 2880, predicted reward 28.599216,predicted upper bound 28.603419,actual reward 27.976608
round 2881, predicted reward 34.492682,predicted upper bound 34.497053,actual reward 32.491211
round 2882, predicted reward 26.382135,predicted upper bound 26.386663,actual reward 23.470136
round 2883, predicted reward 32.287800,predicted upper bound 32.291832,actual reward 33.635128
round 2884, predicted reward 28.714958,predicted upper bound 28.719957,actual reward 19.952724
round 2885, predicted reward 26.685841,predicted upper bound 26.690276,actual reward 24.529257
round 2886, predicted reward 25.718751,predicted upper bound 25.723456,actual reward 24.118323
round 2887, predicted reward 24.009607,predicted upper bound 24.014412,actual reward 21.754810
round 2888, predicted reward 23.973302,predicted upper bound 23.977271,actual reward 19.239937
round 2889, predicted reward 25.708290,predicted upper bound 25.713032,actual reward 24.407866
round 2890, predicted reward 31.240410,predicted upper bound 31.244322,actual reward 33.549594
round 2891, predicted reward 28.994121,predicted upper bound 28.998465,actual reward 29.672874
round 2892, predicted reward 23.494610,predicted upper bound 23.498735,actual reward 21.122252
round 2893, predicted reward 23.505653,predicted upper bound 23.511084,actual reward 15.940907
round 2894, predicted reward 29.749536,predicted upper bound 29.754161,actual reward 34.585025
round 2895, predicted reward 24.091593,predicted upper bound 24.095385,actual reward 23.344410
round 2896, predicted reward 27.332085,predicted upper bound 27.335608,actual reward 24.602304
round 2897, predicted reward 31.225823,predicted upper bound 31.229622,actual reward 35.935521
round 2898, predicted reward 32.087053,predicted upper bound 32.091515,actual reward 35.848472
round 2899, predicted reward 30.626606,predicted upper bound 30.631190,actual reward 27.811409
round 2900, predicted reward 25.073372,predicted upper bound 25.078139,actual reward 23.557636
round 2901, predicted reward 27.347361,predicted upper bound 27.351277,actual reward 26.230040
round 2902, predicted reward 30.533934,predicted upper bound 30.538056,actual reward 29.804215
round 2903, predicted reward 34.013212,predicted upper bound 34.018127,actual reward 37.293216
round 2904, predicted reward 26.664268,predicted upper bound 26.668712,actual reward 19.844339
round 2905, predicted reward 31.918356,predicted upper bound 31.923032,actual reward 30.175260
round 2906, predicted reward 22.752071,predicted upper bound 22.755884,actual reward 22.755584
round 2907, predicted reward 26.800583,predicted upper bound 26.805617,actual reward 22.179505
round 2908, predicted reward 20.474572,predicted upper bound 20.479040,actual reward 15.102695
round 2909, predicted reward 29.321412,predicted upper bound 29.325378,actual reward 24.059393
round 2910, predicted reward 30.528456,predicted upper bound 30.532656,actual reward 33.402459
round 2911, predicted reward 29.943094,predicted upper bound 29.947074,actual reward 26.323621
round 2912, predicted reward 31.407870,predicted upper bound 31.412084,actual reward 29.439116
round 2913, predicted reward 25.566506,predicted upper bound 25.570846,actual reward 24.597302
round 2914, predicted reward 39.942104,predicted upper bound 39.945853,actual reward 41.441456
round 2915, predicted reward 26.782538,predicted upper bound 26.786788,actual reward 27.316246
round 2916, predicted reward 18.120359,predicted upper bound 18.124840,actual reward 24.037520
round 2917, predicted reward 24.091535,predicted upper bound 24.096521,actual reward 26.045932
round 2918, predicted reward 30.254883,predicted upper bound 30.259615,actual reward 33.676840
round 2919, predicted reward 24.329253,predicted upper bound 24.335016,actual reward 24.851776
round 2920, predicted reward 19.383848,predicted upper bound 19.389501,actual reward 17.886614
round 2921, predicted reward 17.072692,predicted upper bound 17.077110,actual reward 16.464426
round 2922, predicted reward 35.435345,predicted upper bound 35.439676,actual reward 36.796330
round 2923, predicted reward 30.160344,predicted upper bound 30.164044,actual reward 26.251252
round 2924, predicted reward 28.912982,predicted upper bound 28.917275,actual reward 34.326467
round 2925, predicted reward 36.072186,predicted upper bound 36.075896,actual reward 36.866711
round 2926, predicted reward 27.301518,predicted upper bound 27.305855,actual reward 32.656322
round 2927, predicted reward 32.509296,predicted upper bound 32.513771,actual reward 32.152662
round 2928, predicted reward 35.078065,predicted upper bound 35.081792,actual reward 32.450805
round 2929, predicted reward 24.091769,predicted upper bound 24.095848,actual reward 21.051535
round 2930, predicted reward 34.726053,predicted upper bound 34.730021,actual reward 37.984468
round 2931, predicted reward 32.913120,predicted upper bound 32.918247,actual reward 31.566026
round 2932, predicted reward 24.677516,predicted upper bound 24.681203,actual reward 24.217303
round 2933, predicted reward 28.054988,predicted upper bound 28.059385,actual reward 29.948051
round 2934, predicted reward 26.494186,predicted upper bound 26.499412,actual reward 24.587445
round 2935, predicted reward 28.624946,predicted upper bound 28.629691,actual reward 30.071323
round 2936, predicted reward 27.454443,predicted upper bound 27.458448,actual reward 23.494553
round 2937, predicted reward 24.469379,predicted upper bound 24.474731,actual reward 25.677063
round 2938, predicted reward 31.209632,predicted upper bound 31.214392,actual reward 33.290325
round 2939, predicted reward 26.264862,predicted upper bound 26.269519,actual reward 20.193382
round 2940, predicted reward 29.368467,predicted upper bound 29.372575,actual reward 22.532865
round 2941, predicted reward 28.545799,predicted upper bound 28.549600,actual reward 23.804118
round 2942, predicted reward 20.902750,predicted upper bound 20.907737,actual reward 15.200150
round 2943, predicted reward 22.359268,predicted upper bound 22.363715,actual reward 15.880508
round 2944, predicted reward 31.761630,predicted upper bound 31.765656,actual reward 26.638859
round 2945, predicted reward 26.306831,predicted upper bound 26.311852,actual reward 25.805303
round 2946, predicted reward 21.264495,predicted upper bound 21.270210,actual reward 17.917857
round 2947, predicted reward 24.378660,predicted upper bound 24.383636,actual reward 25.028442
round 2948, predicted reward 29.020616,predicted upper bound 29.024531,actual reward 22.831482
round 2949, predicted reward 26.300782,predicted upper bound 26.305201,actual reward 25.807163
round 2950, predicted reward 29.224024,predicted upper bound 29.227825,actual reward 25.123102
round 2951, predicted reward 30.953545,predicted upper bound 30.958625,actual reward 37.950904
round 2952, predicted reward 28.696203,predicted upper bound 28.700880,actual reward 32.348934
round 2953, predicted reward 20.351161,predicted upper bound 20.355518,actual reward 19.412383
round 2954, predicted reward 24.776843,predicted upper bound 24.781622,actual reward 22.774817
round 2955, predicted reward 25.162108,predicted upper bound 25.167583,actual reward 23.318851
round 2956, predicted reward 28.432217,predicted upper bound 28.436331,actual reward 27.481203
round 2957, predicted reward 26.329301,predicted upper bound 26.333499,actual reward 27.306210
round 2958, predicted reward 26.259694,predicted upper bound 26.264269,actual reward 24.565900
round 2959, predicted reward 27.522133,predicted upper bound 27.526274,actual reward 25.018864
round 2960, predicted reward 26.637751,predicted upper bound 26.641705,actual reward 23.774601
round 2961, predicted reward 24.965509,predicted upper bound 24.970689,actual reward 26.493128
round 2962, predicted reward 31.774004,predicted upper bound 31.778009,actual reward 32.023906
round 2963, predicted reward 28.240724,predicted upper bound 28.245435,actual reward 24.372166
round 2964, predicted reward 26.630929,predicted upper bound 26.635504,actual reward 26.479222
round 2965, predicted reward 29.732206,predicted upper bound 29.737374,actual reward 28.936756
round 2966, predicted reward 24.826428,predicted upper bound 24.830567,actual reward 26.630429
round 2967, predicted reward 25.237821,predicted upper bound 25.241692,actual reward 27.440667
round 2968, predicted reward 29.795278,predicted upper bound 29.799862,actual reward 30.535154
round 2969, predicted reward 24.008947,predicted upper bound 24.014722,actual reward 22.952686
round 2970, predicted reward 26.305005,predicted upper bound 26.308543,actual reward 24.234748
round 2971, predicted reward 34.133735,predicted upper bound 34.138565,actual reward 30.546285
round 2972, predicted reward 33.846759,predicted upper bound 33.851719,actual reward 30.540607
round 2973, predicted reward 27.506569,predicted upper bound 27.511042,actual reward 30.860192
round 2974, predicted reward 28.366929,predicted upper bound 28.370678,actual reward 25.548922
round 2975, predicted reward 29.247026,predicted upper bound 29.252323,actual reward 27.075758
round 2976, predicted reward 27.618810,predicted upper bound 27.623358,actual reward 20.789867
round 2977, predicted reward 25.294949,predicted upper bound 25.299643,actual reward 22.586138
round 2978, predicted reward 25.906472,predicted upper bound 25.910545,actual reward 21.029298
round 2979, predicted reward 33.438591,predicted upper bound 33.442222,actual reward 36.697295
round 2980, predicted reward 28.608740,predicted upper bound 28.613921,actual reward 25.624605
round 2981, predicted reward 34.166406,predicted upper bound 34.170595,actual reward 33.992449
round 2982, predicted reward 21.978893,predicted upper bound 21.983526,actual reward 26.094753
round 2983, predicted reward 21.555774,predicted upper bound 21.560558,actual reward 23.255845
round 2984, predicted reward 30.276611,predicted upper bound 30.281199,actual reward 32.678933
round 2985, predicted reward 26.694538,predicted upper bound 26.698515,actual reward 23.137408
round 2986, predicted reward 31.399216,predicted upper bound 31.403829,actual reward 26.347415
round 2987, predicted reward 26.490061,predicted upper bound 26.494174,actual reward 25.923404
round 2988, predicted reward 24.307756,predicted upper bound 24.312881,actual reward 23.459378
round 2989, predicted reward 23.164376,predicted upper bound 23.168541,actual reward 23.961106
round 2990, predicted reward 23.457789,predicted upper bound 23.462355,actual reward 23.545262
round 2991, predicted reward 21.285477,predicted upper bound 21.290960,actual reward 22.307924
round 2992, predicted reward 26.152724,predicted upper bound 26.157202,actual reward 26.333400
round 2993, predicted reward 33.925143,predicted upper bound 33.929595,actual reward 35.863799
round 2994, predicted reward 34.490666,predicted upper bound 34.494505,actual reward 35.778626
round 2995, predicted reward 26.019727,predicted upper bound 26.023231,actual reward 20.159978
round 2996, predicted reward 22.331376,predicted upper bound 22.335622,actual reward 23.050716
round 2997, predicted reward 22.113069,predicted upper bound 22.117848,actual reward 20.528599
round 2998, predicted reward 26.410667,predicted upper bound 26.414928,actual reward 28.459654
round 2999, predicted reward 33.059224,predicted upper bound 33.063682,actual reward 35.335862
round 3000, predicted reward 26.584138,predicted upper bound 26.588738,actual reward 22.877156
round 3001, predicted reward 27.823665,predicted upper bound 27.828146,actual reward 29.425366
round 3002, predicted reward 27.581346,predicted upper bound 27.586250,actual reward 23.873646
round 3003, predicted reward 26.127431,predicted upper bound 26.132413,actual reward 20.632919
round 3004, predicted reward 22.611606,predicted upper bound 22.616986,actual reward 18.342580
round 3005, predicted reward 27.568130,predicted upper bound 27.572916,actual reward 25.465469
round 3006, predicted reward 23.632251,predicted upper bound 23.637103,actual reward 20.880953
round 3007, predicted reward 22.948651,predicted upper bound 22.953887,actual reward 19.279664
round 3008, predicted reward 27.489683,predicted upper bound 27.494921,actual reward 31.584824
round 3009, predicted reward 26.761731,predicted upper bound 26.766891,actual reward 24.752428
round 3010, predicted reward 22.852365,predicted upper bound 22.857245,actual reward 17.341675
round 3011, predicted reward 21.767178,predicted upper bound 21.771475,actual reward 19.329089
round 3012, predicted reward 29.244661,predicted upper bound 29.249314,actual reward 28.487232
round 3013, predicted reward 30.196619,predicted upper bound 30.200933,actual reward 29.395742
round 3014, predicted reward 28.894745,predicted upper bound 28.899213,actual reward 26.330995
round 3015, predicted reward 27.798620,predicted upper bound 27.802697,actual reward 26.908793
round 3016, predicted reward 24.678873,predicted upper bound 24.684421,actual reward 21.928815
round 3017, predicted reward 28.192052,predicted upper bound 28.196102,actual reward 27.717078
round 3018, predicted reward 38.899521,predicted upper bound 38.903526,actual reward 43.114104
round 3019, predicted reward 18.319535,predicted upper bound 18.324123,actual reward 16.058969
round 3020, predicted reward 33.122365,predicted upper bound 33.126591,actual reward 36.886621
round 3021, predicted reward 23.818185,predicted upper bound 23.822753,actual reward 27.787147
round 3022, predicted reward 33.996929,predicted upper bound 34.001513,actual reward 36.424257
round 3023, predicted reward 26.702205,predicted upper bound 26.707187,actual reward 29.980900
round 3024, predicted reward 25.170946,predicted upper bound 25.176088,actual reward 21.736487
round 3025, predicted reward 23.175565,predicted upper bound 23.180529,actual reward 21.619604
round 3026, predicted reward 30.028186,predicted upper bound 30.032824,actual reward 31.065413
round 3027, predicted reward 34.436878,predicted upper bound 34.441495,actual reward 31.855175
round 3028, predicted reward 31.183472,predicted upper bound 31.188191,actual reward 30.250520
round 3029, predicted reward 27.613939,predicted upper bound 27.618673,actual reward 27.028803
round 3030, predicted reward 27.450883,predicted upper bound 27.455450,actual reward 22.145583
round 3031, predicted reward 24.811291,predicted upper bound 24.815408,actual reward 21.611099
round 3032, predicted reward 29.219917,predicted upper bound 29.224654,actual reward 29.178853
round 3033, predicted reward 26.429856,predicted upper bound 26.434322,actual reward 23.317203
round 3034, predicted reward 25.712547,predicted upper bound 25.717857,actual reward 22.045078
round 3035, predicted reward 36.966323,predicted upper bound 36.971285,actual reward 36.634119
round 3036, predicted reward 26.118658,predicted upper bound 26.123340,actual reward 26.220477
round 3037, predicted reward 25.749308,predicted upper bound 25.753381,actual reward 20.709431
round 3038, predicted reward 29.935501,predicted upper bound 29.939846,actual reward 30.197246
round 3039, predicted reward 38.831816,predicted upper bound 38.835534,actual reward 37.844975
round 3040, predicted reward 27.340903,predicted upper bound 27.345281,actual reward 23.150565
round 3041, predicted reward 31.187183,predicted upper bound 31.191124,actual reward 26.649688
round 3042, predicted reward 25.071540,predicted upper bound 25.076374,actual reward 22.596667
round 3043, predicted reward 29.355025,predicted upper bound 29.360053,actual reward 27.288762
round 3044, predicted reward 26.027289,predicted upper bound 26.032082,actual reward 25.500269
round 3045, predicted reward 25.143461,predicted upper bound 25.148406,actual reward 20.204617
round 3046, predicted reward 25.547905,predicted upper bound 25.552284,actual reward 22.120848
round 3047, predicted reward 35.101322,predicted upper bound 35.105091,actual reward 32.314279
round 3048, predicted reward 21.704489,predicted upper bound 21.709631,actual reward 23.073429
round 3049, predicted reward 30.584753,predicted upper bound 30.589612,actual reward 21.683043
round 3050, predicted reward 24.500876,predicted upper bound 24.506020,actual reward 20.713206
round 3051, predicted reward 22.051902,predicted upper bound 22.056648,actual reward 17.981132
round 3052, predicted reward 22.931717,predicted upper bound 22.935663,actual reward 22.516802
round 3053, predicted reward 36.296513,predicted upper bound 36.300819,actual reward 38.258994
round 3054, predicted reward 29.370874,predicted upper bound 29.375197,actual reward 26.894507
round 3055, predicted reward 29.293281,predicted upper bound 29.297173,actual reward 24.664150
round 3056, predicted reward 31.085929,predicted upper bound 31.090355,actual reward 31.365390
round 3057, predicted reward 25.837113,predicted upper bound 25.840603,actual reward 25.900379
round 3058, predicted reward 28.534224,predicted upper bound 28.539261,actual reward 29.725004
round 3059, predicted reward 34.768222,predicted upper bound 34.771710,actual reward 38.140418
round 3060, predicted reward 23.377708,predicted upper bound 23.381908,actual reward 25.887609
round 3061, predicted reward 26.087404,predicted upper bound 26.091513,actual reward 26.118322
round 3062, predicted reward 26.002607,predicted upper bound 26.006641,actual reward 22.199678
round 3063, predicted reward 25.424905,predicted upper bound 25.429360,actual reward 17.974490
round 3064, predicted reward 28.019030,predicted upper bound 28.023467,actual reward 27.917635
round 3065, predicted reward 25.749053,predicted upper bound 25.753514,actual reward 23.527320
round 3066, predicted reward 30.689802,predicted upper bound 30.694530,actual reward 30.699659
round 3067, predicted reward 35.414709,predicted upper bound 35.418540,actual reward 37.173252
round 3068, predicted reward 23.602159,predicted upper bound 23.606403,actual reward 24.715882
round 3069, predicted reward 27.701551,predicted upper bound 27.705700,actual reward 28.842257
round 3070, predicted reward 22.961074,predicted upper bound 22.964870,actual reward 21.731692
round 3071, predicted reward 26.466837,predicted upper bound 26.470547,actual reward 26.795425
round 3072, predicted reward 27.408242,predicted upper bound 27.411676,actual reward 23.087063
round 3073, predicted reward 28.966651,predicted upper bound 28.970704,actual reward 34.836574
round 3074, predicted reward 31.994933,predicted upper bound 31.999946,actual reward 28.879590
round 3075, predicted reward 27.283080,predicted upper bound 27.286492,actual reward 25.292163
round 3076, predicted reward 23.463709,predicted upper bound 23.467737,actual reward 20.549764
round 3077, predicted reward 29.820342,predicted upper bound 29.824829,actual reward 25.095972
round 3078, predicted reward 25.671903,predicted upper bound 25.676620,actual reward 22.576746
round 3079, predicted reward 22.310513,predicted upper bound 22.315404,actual reward 25.635011
round 3080, predicted reward 27.167671,predicted upper bound 27.172423,actual reward 26.901029
round 3081, predicted reward 28.330559,predicted upper bound 28.334134,actual reward 31.978793
round 3082, predicted reward 23.603543,predicted upper bound 23.607581,actual reward 23.773779
round 3083, predicted reward 23.502949,predicted upper bound 23.508124,actual reward 23.690657
round 3084, predicted reward 30.368291,predicted upper bound 30.371902,actual reward 32.292753
round 3085, predicted reward 26.873077,predicted upper bound 26.876394,actual reward 27.156251
round 3086, predicted reward 24.062160,predicted upper bound 24.066100,actual reward 19.733419
round 3087, predicted reward 29.955299,predicted upper bound 29.960063,actual reward 33.947655
round 3088, predicted reward 27.638396,predicted upper bound 27.643047,actual reward 28.020743
round 3089, predicted reward 32.669674,predicted upper bound 32.673947,actual reward 29.827524
round 3090, predicted reward 24.013207,predicted upper bound 24.018528,actual reward 22.381993
round 3091, predicted reward 18.807787,predicted upper bound 18.812859,actual reward 14.150165
round 3092, predicted reward 29.505285,predicted upper bound 29.509042,actual reward 30.291814
round 3093, predicted reward 18.684655,predicted upper bound 18.689966,actual reward 12.439224
round 3094, predicted reward 34.380556,predicted upper bound 34.384415,actual reward 38.341713
round 3095, predicted reward 19.747208,predicted upper bound 19.751212,actual reward 14.545345
round 3096, predicted reward 20.415985,predicted upper bound 20.420834,actual reward 19.843274
round 3097, predicted reward 31.987334,predicted upper bound 31.992139,actual reward 33.717124
round 3098, predicted reward 21.990613,predicted upper bound 21.996142,actual reward 23.867944
round 3099, predicted reward 25.703835,predicted upper bound 25.707766,actual reward 27.238941
round 3100, predicted reward 26.082023,predicted upper bound 26.086499,actual reward 25.491597
round 3101, predicted reward 27.493266,predicted upper bound 27.497816,actual reward 29.811483
round 3102, predicted reward 26.092851,predicted upper bound 26.097910,actual reward 24.477709
round 3103, predicted reward 33.918516,predicted upper bound 33.923188,actual reward 36.579294
round 3104, predicted reward 25.267694,predicted upper bound 25.271906,actual reward 24.407710
round 3105, predicted reward 28.569760,predicted upper bound 28.574083,actual reward 29.966520
round 3106, predicted reward 29.092066,predicted upper bound 29.095774,actual reward 30.157840
round 3107, predicted reward 26.200211,predicted upper bound 26.204367,actual reward 17.094366
round 3108, predicted reward 32.029883,predicted upper bound 32.033775,actual reward 33.564356
round 3109, predicted reward 31.258276,predicted upper bound 31.262388,actual reward 33.934262
round 3110, predicted reward 27.147991,predicted upper bound 27.151910,actual reward 23.781610
round 3111, predicted reward 23.540414,predicted upper bound 23.544367,actual reward 19.120954
round 3112, predicted reward 28.157845,predicted upper bound 28.162596,actual reward 29.519215
round 3113, predicted reward 33.722523,predicted upper bound 33.726688,actual reward 31.468197
round 3114, predicted reward 30.429713,predicted upper bound 30.433858,actual reward 30.684098
round 3115, predicted reward 26.249127,predicted upper bound 26.253334,actual reward 26.456259
round 3116, predicted reward 24.800959,predicted upper bound 24.804900,actual reward 22.300443
round 3117, predicted reward 30.114677,predicted upper bound 30.118408,actual reward 27.333123
round 3118, predicted reward 23.827753,predicted upper bound 23.832189,actual reward 24.795096
round 3119, predicted reward 27.029763,predicted upper bound 27.033525,actual reward 23.044764
round 3120, predicted reward 32.777892,predicted upper bound 32.782026,actual reward 38.291620
round 3121, predicted reward 23.567154,predicted upper bound 23.571833,actual reward 22.509173
round 3122, predicted reward 26.547921,predicted upper bound 26.553023,actual reward 25.241557
round 3123, predicted reward 30.008131,predicted upper bound 30.012768,actual reward 26.902279
round 3124, predicted reward 23.621221,predicted upper bound 23.626028,actual reward 20.744409
round 3125, predicted reward 25.176224,predicted upper bound 25.181136,actual reward 27.774379
round 3126, predicted reward 23.325693,predicted upper bound 23.329843,actual reward 22.413794
round 3127, predicted reward 23.264187,predicted upper bound 23.268300,actual reward 19.902533
round 3128, predicted reward 22.609113,predicted upper bound 22.613576,actual reward 25.740806
round 3129, predicted reward 30.634860,predicted upper bound 30.639797,actual reward 31.814102
round 3130, predicted reward 29.056656,predicted upper bound 29.061021,actual reward 27.747539
round 3131, predicted reward 29.534902,predicted upper bound 29.539071,actual reward 32.730852
round 3132, predicted reward 29.351114,predicted upper bound 29.355240,actual reward 29.527078
round 3133, predicted reward 27.551626,predicted upper bound 27.555628,actual reward 32.499306
round 3134, predicted reward 27.512813,predicted upper bound 27.517301,actual reward 29.127066
round 3135, predicted reward 26.297221,predicted upper bound 26.301571,actual reward 21.591692
round 3136, predicted reward 23.958168,predicted upper bound 23.962697,actual reward 21.643824
round 3137, predicted reward 25.025508,predicted upper bound 25.029929,actual reward 20.589293
round 3138, predicted reward 27.879081,predicted upper bound 27.883548,actual reward 28.014975
round 3139, predicted reward 22.149466,predicted upper bound 22.153812,actual reward 18.089903
round 3140, predicted reward 21.307042,predicted upper bound 21.311043,actual reward 17.869329
round 3141, predicted reward 29.532907,predicted upper bound 29.536746,actual reward 26.047725
round 3142, predicted reward 32.784781,predicted upper bound 32.789535,actual reward 29.618860
round 3143, predicted reward 25.532183,predicted upper bound 25.535772,actual reward 26.633662
round 3144, predicted reward 24.365550,predicted upper bound 24.370273,actual reward 25.681202
round 3145, predicted reward 27.254016,predicted upper bound 27.258422,actual reward 19.630087
round 3146, predicted reward 19.810685,predicted upper bound 19.814341,actual reward 14.331284
round 3147, predicted reward 25.746692,predicted upper bound 25.750731,actual reward 21.853804
round 3148, predicted reward 24.381274,predicted upper bound 24.385791,actual reward 24.110438
round 3149, predicted reward 28.383166,predicted upper bound 28.387121,actual reward 26.065705
round 3150, predicted reward 30.373775,predicted upper bound 30.378228,actual reward 31.372070
round 3151, predicted reward 28.489441,predicted upper bound 28.494673,actual reward 28.932657
round 3152, predicted reward 27.503939,predicted upper bound 27.508571,actual reward 25.991721
round 3153, predicted reward 31.202800,predicted upper bound 31.206896,actual reward 35.166941
round 3154, predicted reward 24.703537,predicted upper bound 24.707920,actual reward 22.284598
round 3155, predicted reward 26.978181,predicted upper bound 26.981779,actual reward 28.192964
round 3156, predicted reward 31.909934,predicted upper bound 31.913655,actual reward 28.148737
round 3157, predicted reward 31.348515,predicted upper bound 31.352098,actual reward 31.959908
round 3158, predicted reward 30.985458,predicted upper bound 30.989002,actual reward 29.268130
round 3159, predicted reward 28.266735,predicted upper bound 28.270621,actual reward 26.932569
round 3160, predicted reward 27.342604,predicted upper bound 27.346799,actual reward 27.000205
round 3161, predicted reward 30.063224,predicted upper bound 30.066900,actual reward 33.628811
round 3162, predicted reward 27.034434,predicted upper bound 27.039517,actual reward 27.017498
round 3163, predicted reward 27.382997,predicted upper bound 27.387168,actual reward 26.376154
round 3164, predicted reward 24.324551,predicted upper bound 24.329556,actual reward 24.148449
round 3165, predicted reward 26.726381,predicted upper bound 26.730751,actual reward 26.786376
round 3166, predicted reward 20.801655,predicted upper bound 20.805447,actual reward 16.457173
round 3167, predicted reward 32.969204,predicted upper bound 32.973127,actual reward 34.738640
round 3168, predicted reward 29.677357,predicted upper bound 29.681187,actual reward 30.525264
round 3169, predicted reward 23.193833,predicted upper bound 23.197404,actual reward 20.988824
round 3170, predicted reward 25.677918,predicted upper bound 25.681862,actual reward 28.701455
round 3171, predicted reward 24.570926,predicted upper bound 24.575186,actual reward 24.871606
round 3172, predicted reward 24.966505,predicted upper bound 24.970378,actual reward 24.090965
round 3173, predicted reward 25.723923,predicted upper bound 25.728623,actual reward 24.107870
round 3174, predicted reward 25.207305,predicted upper bound 25.211772,actual reward 27.276320
round 3175, predicted reward 35.401621,predicted upper bound 35.405196,actual reward 29.867104
round 3176, predicted reward 28.566914,predicted upper bound 28.570719,actual reward 32.462477
round 3177, predicted reward 21.167419,predicted upper bound 21.171043,actual reward 16.282527
round 3178, predicted reward 26.501293,predicted upper bound 26.505735,actual reward 22.908517
round 3179, predicted reward 35.750946,predicted upper bound 35.754264,actual reward 36.008157
round 3180, predicted reward 24.223908,predicted upper bound 24.227882,actual reward 25.623867
round 3181, predicted reward 25.832172,predicted upper bound 25.836986,actual reward 21.514050
round 3182, predicted reward 29.602420,predicted upper bound 29.606483,actual reward 27.115965
round 3183, predicted reward 31.057277,predicted upper bound 31.060813,actual reward 32.772443
round 3184, predicted reward 23.122542,predicted upper bound 23.127002,actual reward 22.661869
round 3185, predicted reward 29.186145,predicted upper bound 29.190582,actual reward 28.990613
round 3186, predicted reward 32.603013,predicted upper bound 32.606606,actual reward 27.940496
round 3187, predicted reward 22.057679,predicted upper bound 22.061097,actual reward 20.277470
round 3188, predicted reward 27.592068,predicted upper bound 27.596528,actual reward 30.904286
round 3189, predicted reward 25.448256,predicted upper bound 25.452732,actual reward 25.970265
round 3190, predicted reward 24.407172,predicted upper bound 24.410771,actual reward 24.590876
round 3191, predicted reward 25.832749,predicted upper bound 25.837093,actual reward 23.732585
round 3192, predicted reward 25.311492,predicted upper bound 25.314944,actual reward 19.760451
round 3193, predicted reward 29.489504,predicted upper bound 29.494295,actual reward 29.341339
round 3194, predicted reward 29.778949,predicted upper bound 29.783438,actual reward 30.200642
round 3195, predicted reward 28.153153,predicted upper bound 28.156834,actual reward 28.285916
round 3196, predicted reward 31.839285,predicted upper bound 31.843029,actual reward 35.155298
round 3197, predicted reward 24.649574,predicted upper bound 24.653923,actual reward 26.593601
round 3198, predicted reward 29.618975,predicted upper bound 29.622500,actual reward 30.449838
round 3199, predicted reward 37.344404,predicted upper bound 37.347732,actual reward 39.443218
round 3200, predicted reward 32.128301,predicted upper bound 32.133126,actual reward 31.348397
round 3201, predicted reward 27.911041,predicted upper bound 27.915341,actual reward 28.612622
round 3202, predicted reward 30.900193,predicted upper bound 30.904473,actual reward 26.593051
[... per-round log output abridged: rounds 3203 through 3874 omitted. Each line reports, for one round, the algorithm's predicted reward, its predicted upper confidence bound, and the actual (observed) reward ...]
round 3875, predicted reward 27.021376,predicted upper bound 27.024828,actual reward 33.111020
round 3876, predicted reward 27.745142,predicted upper bound 27.748764,actual reward 25.973530
round 3877, predicted reward 30.022817,predicted upper bound 30.026941,actual reward 33.051011
round 3878, predicted reward 25.950528,predicted upper bound 25.954324,actual reward 26.330939
round 3879, predicted reward 26.385278,predicted upper bound 26.388792,actual reward 28.971822
round 3880, predicted reward 24.161908,predicted upper bound 24.165665,actual reward 18.651224
round 3881, predicted reward 20.715351,predicted upper bound 20.719624,actual reward 23.780070
round 3882, predicted reward 27.833970,predicted upper bound 27.837626,actual reward 30.446262
round 3883, predicted reward 23.466975,predicted upper bound 23.470874,actual reward 17.690391
round 3884, predicted reward 31.324827,predicted upper bound 31.329106,actual reward 34.322429
round 3885, predicted reward 28.143294,predicted upper bound 28.147039,actual reward 28.601126
round 3886, predicted reward 24.374096,predicted upper bound 24.377649,actual reward 22.722085
round 3887, predicted reward 25.084372,predicted upper bound 25.087736,actual reward 25.968776
round 3888, predicted reward 31.390080,predicted upper bound 31.393619,actual reward 32.258273
round 3889, predicted reward 23.377180,predicted upper bound 23.381028,actual reward 22.662152
round 3890, predicted reward 26.408887,predicted upper bound 26.412794,actual reward 29.631938
round 3891, predicted reward 24.534633,predicted upper bound 24.538474,actual reward 24.649368
round 3892, predicted reward 20.562747,predicted upper bound 20.566641,actual reward 16.887048
round 3893, predicted reward 27.227344,predicted upper bound 27.231266,actual reward 25.295754
round 3894, predicted reward 25.899868,predicted upper bound 25.903481,actual reward 25.736821
round 3895, predicted reward 27.137642,predicted upper bound 27.141779,actual reward 23.224825
round 3896, predicted reward 28.333243,predicted upper bound 28.336362,actual reward 28.834734
round 3897, predicted reward 27.972845,predicted upper bound 27.976274,actual reward 30.461925
round 3898, predicted reward 23.126405,predicted upper bound 23.129958,actual reward 23.563282
round 3899, predicted reward 29.377038,predicted upper bound 29.381068,actual reward 26.773988
round 3900, predicted reward 30.680133,predicted upper bound 30.683299,actual reward 30.144038
round 3901, predicted reward 29.864692,predicted upper bound 29.868437,actual reward 30.215369
round 3902, predicted reward 30.680880,predicted upper bound 30.684117,actual reward 28.543598
round 3903, predicted reward 31.349917,predicted upper bound 31.353638,actual reward 31.381646
round 3904, predicted reward 23.063237,predicted upper bound 23.067199,actual reward 16.351095
round 3905, predicted reward 28.568261,predicted upper bound 28.571989,actual reward 23.936005
round 3906, predicted reward 30.361250,predicted upper bound 30.365194,actual reward 32.375246
round 3907, predicted reward 32.037578,predicted upper bound 32.041196,actual reward 35.832757
round 3908, predicted reward 28.510271,predicted upper bound 28.513910,actual reward 35.197857
round 3909, predicted reward 32.020938,predicted upper bound 32.024598,actual reward 33.569401
round 3910, predicted reward 31.013806,predicted upper bound 31.017693,actual reward 31.760160
round 3911, predicted reward 34.024005,predicted upper bound 34.027159,actual reward 35.521670
round 3912, predicted reward 20.371858,predicted upper bound 20.376370,actual reward 17.005392
round 3913, predicted reward 27.286440,predicted upper bound 27.289636,actual reward 23.687602
round 3914, predicted reward 24.987569,predicted upper bound 24.991234,actual reward 20.111421
round 3915, predicted reward 26.161194,predicted upper bound 26.165081,actual reward 21.002170
round 3916, predicted reward 34.902883,predicted upper bound 34.906256,actual reward 38.533867
round 3917, predicted reward 26.478670,predicted upper bound 26.482568,actual reward 30.755253
round 3918, predicted reward 27.972848,predicted upper bound 27.976568,actual reward 24.352362
round 3919, predicted reward 31.282756,predicted upper bound 31.286447,actual reward 33.214193
round 3920, predicted reward 33.635499,predicted upper bound 33.638454,actual reward 37.324235
round 3921, predicted reward 23.257166,predicted upper bound 23.260143,actual reward 17.658751
round 3922, predicted reward 21.449077,predicted upper bound 21.452557,actual reward 19.877017
round 3923, predicted reward 22.989871,predicted upper bound 22.993844,actual reward 21.649304
round 3924, predicted reward 26.621349,predicted upper bound 26.625624,actual reward 28.213582
round 3925, predicted reward 23.752578,predicted upper bound 23.755733,actual reward 20.265509
round 3926, predicted reward 29.293961,predicted upper bound 29.297865,actual reward 25.085507
round 3927, predicted reward 27.454049,predicted upper bound 27.458335,actual reward 25.371381
round 3928, predicted reward 25.074784,predicted upper bound 25.078980,actual reward 23.846492
round 3929, predicted reward 27.896876,predicted upper bound 27.900756,actual reward 30.685948
round 3930, predicted reward 21.654628,predicted upper bound 21.658738,actual reward 17.219491
round 3931, predicted reward 26.565124,predicted upper bound 26.569030,actual reward 21.970620
round 3932, predicted reward 26.019072,predicted upper bound 26.022947,actual reward 30.722955
round 3933, predicted reward 27.539497,predicted upper bound 27.543706,actual reward 30.814193
round 3934, predicted reward 24.875755,predicted upper bound 24.879750,actual reward 23.790932
round 3935, predicted reward 26.047847,predicted upper bound 26.052386,actual reward 27.838446
round 3936, predicted reward 27.423003,predicted upper bound 27.426541,actual reward 23.157517
round 3937, predicted reward 27.029422,predicted upper bound 27.033193,actual reward 22.324091
round 3938, predicted reward 32.436079,predicted upper bound 32.438930,actual reward 31.858492
round 3939, predicted reward 30.397960,predicted upper bound 30.401540,actual reward 31.572633
round 3940, predicted reward 27.758164,predicted upper bound 27.761842,actual reward 25.313163
round 3941, predicted reward 31.392473,predicted upper bound 31.396119,actual reward 35.849083
round 3942, predicted reward 30.753988,predicted upper bound 30.757687,actual reward 28.267728
round 3943, predicted reward 32.460215,predicted upper bound 32.463917,actual reward 27.732246
round 3944, predicted reward 20.566341,predicted upper bound 20.570941,actual reward 15.609420
round 3945, predicted reward 30.650268,predicted upper bound 30.653601,actual reward 27.352272
round 3946, predicted reward 24.155513,predicted upper bound 24.159331,actual reward 18.452289
round 3947, predicted reward 26.560008,predicted upper bound 26.563914,actual reward 24.782136
round 3948, predicted reward 27.315740,predicted upper bound 27.319524,actual reward 27.626067
round 3949, predicted reward 33.898771,predicted upper bound 33.902137,actual reward 38.774665
round 3950, predicted reward 24.230981,predicted upper bound 24.234929,actual reward 18.875440
round 3951, predicted reward 23.383400,predicted upper bound 23.387150,actual reward 23.212750
round 3952, predicted reward 24.595467,predicted upper bound 24.599163,actual reward 22.121532
round 3953, predicted reward 25.214747,predicted upper bound 25.218131,actual reward 20.128142
round 3954, predicted reward 26.980082,predicted upper bound 26.984041,actual reward 25.045288
round 3955, predicted reward 24.514819,predicted upper bound 24.518204,actual reward 24.540625
round 3956, predicted reward 29.073310,predicted upper bound 29.077193,actual reward 29.659902
round 3957, predicted reward 26.988288,predicted upper bound 26.991862,actual reward 23.514969
round 3958, predicted reward 22.253006,predicted upper bound 22.257370,actual reward 22.752127
round 3959, predicted reward 26.275550,predicted upper bound 26.279416,actual reward 25.376078
round 3960, predicted reward 27.194553,predicted upper bound 27.198271,actual reward 28.765685
round 3961, predicted reward 21.695107,predicted upper bound 21.699678,actual reward 20.803996
round 3962, predicted reward 32.331344,predicted upper bound 32.334923,actual reward 30.914940
round 3963, predicted reward 25.710806,predicted upper bound 25.715180,actual reward 22.125559
round 3964, predicted reward 26.120604,predicted upper bound 26.124756,actual reward 25.575317
round 3965, predicted reward 27.375785,predicted upper bound 27.379545,actual reward 23.209575
round 3966, predicted reward 28.104159,predicted upper bound 28.108112,actual reward 27.191576
round 3967, predicted reward 26.916968,predicted upper bound 26.920784,actual reward 27.499333
round 3968, predicted reward 31.226368,predicted upper bound 31.229778,actual reward 33.017518
round 3969, predicted reward 23.756608,predicted upper bound 23.760263,actual reward 20.427807
round 3970, predicted reward 28.059681,predicted upper bound 28.063801,actual reward 29.761683
round 3971, predicted reward 22.414701,predicted upper bound 22.418896,actual reward 18.088186
round 3972, predicted reward 23.960123,predicted upper bound 23.964175,actual reward 20.081219
round 3973, predicted reward 24.259429,predicted upper bound 24.263252,actual reward 20.899899
round 3974, predicted reward 20.093573,predicted upper bound 20.096864,actual reward 15.093915
round 3975, predicted reward 26.693928,predicted upper bound 26.697449,actual reward 28.330706
round 3976, predicted reward 29.555978,predicted upper bound 29.559465,actual reward 31.686860
round 3977, predicted reward 31.684948,predicted upper bound 31.689217,actual reward 32.675689
round 3978, predicted reward 28.540704,predicted upper bound 28.544910,actual reward 27.135283
round 3979, predicted reward 21.630628,predicted upper bound 21.634771,actual reward 19.898472
round 3980, predicted reward 29.279906,predicted upper bound 29.283843,actual reward 30.869694
round 3981, predicted reward 21.468420,predicted upper bound 21.472418,actual reward 19.679721
round 3982, predicted reward 23.857343,predicted upper bound 23.861345,actual reward 16.113588
round 3983, predicted reward 24.586788,predicted upper bound 24.591327,actual reward 21.479645
round 3984, predicted reward 31.670243,predicted upper bound 31.673392,actual reward 28.880547
round 3985, predicted reward 22.007598,predicted upper bound 22.011823,actual reward 21.582837
round 3986, predicted reward 25.826923,predicted upper bound 25.830266,actual reward 22.234141
round 3987, predicted reward 29.830825,predicted upper bound 29.833613,actual reward 30.350716
round 3988, predicted reward 23.760156,predicted upper bound 23.764032,actual reward 17.301203
round 3989, predicted reward 29.501699,predicted upper bound 29.504685,actual reward 27.927078
round 3990, predicted reward 22.049888,predicted upper bound 22.054260,actual reward 26.334854
round 3991, predicted reward 30.108201,predicted upper bound 30.112485,actual reward 31.279028
round 3992, predicted reward 30.053325,predicted upper bound 30.056380,actual reward 29.928584
round 3993, predicted reward 25.626562,predicted upper bound 25.629974,actual reward 26.865187
round 3994, predicted reward 21.945896,predicted upper bound 21.949573,actual reward 16.700786
round 3995, predicted reward 24.600291,predicted upper bound 24.604623,actual reward 26.361102
round 3996, predicted reward 21.184344,predicted upper bound 21.188817,actual reward 21.308627
round 3997, predicted reward 26.432439,predicted upper bound 26.436648,actual reward 24.527608
round 3998, predicted reward 29.726946,predicted upper bound 29.729666,actual reward 34.215108
round 3999, predicted reward 26.283629,predicted upper bound 26.287363,actual reward 27.317403
round 4000, predicted reward 24.404166,predicted upper bound 24.407989,actual reward 23.842965
round 4001, predicted reward 21.811624,predicted upper bound 21.815528,actual reward 15.744375
round 4002, predicted reward 23.591782,predicted upper bound 23.595657,actual reward 25.737672
round 4003, predicted reward 27.409045,predicted upper bound 27.412604,actual reward 22.972703
round 4004, predicted reward 35.225317,predicted upper bound 35.228935,actual reward 36.567045
round 4005, predicted reward 27.853548,predicted upper bound 27.856615,actual reward 21.832390
round 4006, predicted reward 27.240831,predicted upper bound 27.244811,actual reward 23.704195
round 4007, predicted reward 24.368850,predicted upper bound 24.372371,actual reward 28.103332
round 4008, predicted reward 30.649711,predicted upper bound 30.653279,actual reward 32.810060
round 4009, predicted reward 25.818266,predicted upper bound 25.821891,actual reward 22.899550
round 4010, predicted reward 25.132515,predicted upper bound 25.136348,actual reward 23.854832
round 4011, predicted reward 19.838927,predicted upper bound 19.842969,actual reward 17.396192
round 4012, predicted reward 28.631070,predicted upper bound 28.634188,actual reward 32.143603
round 4013, predicted reward 30.005878,predicted upper bound 30.009470,actual reward 30.120697
round 4014, predicted reward 28.865375,predicted upper bound 28.869273,actual reward 29.280531
round 4015, predicted reward 26.442848,predicted upper bound 26.446921,actual reward 23.032539
round 4016, predicted reward 27.310444,predicted upper bound 27.314417,actual reward 27.758315
round 4017, predicted reward 28.077939,predicted upper bound 28.081073,actual reward 25.900574
round 4018, predicted reward 27.414170,predicted upper bound 27.417349,actual reward 28.222189
round 4019, predicted reward 22.603276,predicted upper bound 22.607295,actual reward 22.258085
round 4020, predicted reward 27.245036,predicted upper bound 27.248595,actual reward 25.122155
round 4021, predicted reward 32.106654,predicted upper bound 32.109916,actual reward 31.016035
round 4022, predicted reward 28.446579,predicted upper bound 28.449565,actual reward 27.238744
round 4023, predicted reward 22.544711,predicted upper bound 22.549077,actual reward 23.342858
round 4024, predicted reward 23.117183,predicted upper bound 23.121319,actual reward 18.970428
round 4025, predicted reward 28.401982,predicted upper bound 28.405969,actual reward 23.619005
round 4026, predicted reward 23.965904,predicted upper bound 23.969963,actual reward 22.034089
round 4027, predicted reward 26.863029,predicted upper bound 26.867056,actual reward 30.831807
round 4028, predicted reward 26.345095,predicted upper bound 26.348080,actual reward 25.829920
round 4029, predicted reward 27.150047,predicted upper bound 27.153860,actual reward 24.890648
round 4030, predicted reward 22.027537,predicted upper bound 22.031903,actual reward 20.082336
round 4031, predicted reward 31.983380,predicted upper bound 31.987574,actual reward 29.738782
round 4032, predicted reward 31.716084,predicted upper bound 31.719839,actual reward 31.261296
round 4033, predicted reward 27.631185,predicted upper bound 27.635078,actual reward 26.472733
round 4034, predicted reward 23.576236,predicted upper bound 23.580049,actual reward 20.707464
round 4035, predicted reward 24.501051,predicted upper bound 24.505164,actual reward 21.206452
round 4036, predicted reward 29.329754,predicted upper bound 29.333187,actual reward 27.903403
round 4037, predicted reward 27.841270,predicted upper bound 27.845189,actual reward 28.979087
round 4038, predicted reward 29.662327,predicted upper bound 29.666059,actual reward 30.068259
round 4039, predicted reward 27.957691,predicted upper bound 27.960929,actual reward 29.183401
round 4040, predicted reward 31.566314,predicted upper bound 31.570347,actual reward 28.681730
round 4041, predicted reward 27.468561,predicted upper bound 27.472705,actual reward 26.404403
round 4042, predicted reward 30.616433,predicted upper bound 30.620437,actual reward 33.233911
round 4043, predicted reward 31.002502,predicted upper bound 31.006219,actual reward 34.993012
round 4044, predicted reward 23.908694,predicted upper bound 23.912318,actual reward 22.842373
round 4045, predicted reward 26.336620,predicted upper bound 26.340191,actual reward 22.846869
round 4046, predicted reward 27.963731,predicted upper bound 27.967911,actual reward 22.054619
round 4047, predicted reward 19.747557,predicted upper bound 19.751049,actual reward 19.469521
round 4048, predicted reward 27.825458,predicted upper bound 27.828799,actual reward 27.002109
round 4049, predicted reward 23.505722,predicted upper bound 23.509641,actual reward 24.981102
round 4050, predicted reward 22.332678,predicted upper bound 22.336863,actual reward 21.458366
round 4051, predicted reward 29.758717,predicted upper bound 29.762606,actual reward 32.001697
round 4052, predicted reward 21.132830,predicted upper bound 21.136137,actual reward 22.080277
round 4053, predicted reward 29.415556,predicted upper bound 29.419080,actual reward 26.830067
round 4054, predicted reward 25.976669,predicted upper bound 25.980999,actual reward 33.446048
round 4055, predicted reward 28.545310,predicted upper bound 28.549756,actual reward 32.164887
round 4056, predicted reward 24.118165,predicted upper bound 24.122559,actual reward 17.485021
round 4057, predicted reward 26.835229,predicted upper bound 26.839397,actual reward 28.118475
round 4058, predicted reward 23.772706,predicted upper bound 23.776638,actual reward 24.553004
round 4059, predicted reward 23.538649,predicted upper bound 23.542264,actual reward 20.186978
round 4060, predicted reward 24.901487,predicted upper bound 24.905085,actual reward 20.198912
round 4061, predicted reward 34.975122,predicted upper bound 34.978020,actual reward 38.170903
round 4062, predicted reward 19.140351,predicted upper bound 19.144228,actual reward 15.667697
round 4063, predicted reward 32.540699,predicted upper bound 32.544561,actual reward 30.867999
round 4064, predicted reward 27.041691,predicted upper bound 27.046216,actual reward 22.425035
round 4065, predicted reward 20.608033,predicted upper bound 20.612614,actual reward 13.738337
round 4066, predicted reward 27.531291,predicted upper bound 27.535046,actual reward 27.357371
round 4067, predicted reward 30.784564,predicted upper bound 30.788111,actual reward 32.783947
round 4068, predicted reward 27.580028,predicted upper bound 27.583984,actual reward 26.430938
round 4069, predicted reward 23.335331,predicted upper bound 23.339755,actual reward 21.257443
round 4070, predicted reward 34.450673,predicted upper bound 34.454015,actual reward 31.884196
round 4071, predicted reward 32.517263,predicted upper bound 32.521101,actual reward 33.622836
round 4072, predicted reward 29.385363,predicted upper bound 29.388018,actual reward 27.851750
round 4073, predicted reward 33.128442,predicted upper bound 33.131796,actual reward 24.815220
round 4074, predicted reward 27.250820,predicted upper bound 27.255009,actual reward 24.489094
round 4075, predicted reward 28.144341,predicted upper bound 28.147807,actual reward 27.078848
round 4076, predicted reward 28.592002,predicted upper bound 28.596148,actual reward 23.095879
round 4077, predicted reward 27.857900,predicted upper bound 27.861782,actual reward 26.491405
round 4078, predicted reward 32.062850,predicted upper bound 32.066499,actual reward 24.929547
round 4079, predicted reward 24.991596,predicted upper bound 24.994650,actual reward 19.912197
round 4080, predicted reward 28.062380,predicted upper bound 28.065176,actual reward 23.827845
round 4081, predicted reward 26.566107,predicted upper bound 26.569804,actual reward 30.222876
round 4082, predicted reward 26.190318,predicted upper bound 26.194120,actual reward 27.347061
round 4083, predicted reward 27.529903,predicted upper bound 27.533684,actual reward 25.544983
round 4084, predicted reward 31.211693,predicted upper bound 31.215339,actual reward 32.555349
round 4085, predicted reward 27.938310,predicted upper bound 27.941791,actual reward 30.698154
round 4086, predicted reward 30.635816,predicted upper bound 30.639091,actual reward 35.365746
round 4087, predicted reward 25.869873,predicted upper bound 25.873623,actual reward 30.238455
round 4088, predicted reward 21.779750,predicted upper bound 21.783265,actual reward 20.693308
round 4089, predicted reward 29.576563,predicted upper bound 29.580332,actual reward 27.830139
round 4090, predicted reward 22.452993,predicted upper bound 22.456823,actual reward 20.163935
round 4091, predicted reward 28.523257,predicted upper bound 28.527540,actual reward 26.951336
round 4092, predicted reward 25.812632,predicted upper bound 25.816656,actual reward 26.296358
round 4093, predicted reward 26.226491,predicted upper bound 26.229893,actual reward 21.189480
round 4094, predicted reward 27.972849,predicted upper bound 27.976713,actual reward 30.659962
round 4095, predicted reward 26.214430,predicted upper bound 26.218169,actual reward 19.964471
round 4096, predicted reward 27.407367,predicted upper bound 27.411212,actual reward 25.107843
round 4097, predicted reward 27.719735,predicted upper bound 27.722976,actual reward 23.812575
round 4098, predicted reward 34.161260,predicted upper bound 34.164531,actual reward 29.667531
round 4099, predicted reward 25.162211,predicted upper bound 25.166443,actual reward 20.662068
round 4100, predicted reward 24.054665,predicted upper bound 24.059224,actual reward 25.598712
round 4101, predicted reward 22.784463,predicted upper bound 22.788503,actual reward 22.571448
round 4102, predicted reward 27.879943,predicted upper bound 27.882974,actual reward 28.812858
round 4103, predicted reward 26.268292,predicted upper bound 26.271982,actual reward 27.562912
round 4104, predicted reward 33.670710,predicted upper bound 33.674259,actual reward 32.607938
round 4105, predicted reward 26.436181,predicted upper bound 26.439949,actual reward 26.973366
round 4106, predicted reward 28.794576,predicted upper bound 28.798315,actual reward 27.744147
round 4107, predicted reward 31.049320,predicted upper bound 31.053099,actual reward 27.692507
round 4108, predicted reward 23.345423,predicted upper bound 23.349621,actual reward 22.535591
round 4109, predicted reward 25.886319,predicted upper bound 25.889836,actual reward 27.902489
round 4110, predicted reward 27.200400,predicted upper bound 27.203687,actual reward 26.411091
round 4111, predicted reward 26.502250,predicted upper bound 26.505461,actual reward 27.444668
round 4112, predicted reward 25.074735,predicted upper bound 25.078570,actual reward 21.109601
round 4113, predicted reward 28.890314,predicted upper bound 28.893807,actual reward 27.544900
round 4114, predicted reward 29.929633,predicted upper bound 29.933796,actual reward 26.815844
round 4115, predicted reward 27.857604,predicted upper bound 27.861112,actual reward 25.574087
round 4116, predicted reward 29.545968,predicted upper bound 29.549824,actual reward 28.216873
round 4117, predicted reward 26.031194,predicted upper bound 26.035615,actual reward 24.817212
round 4118, predicted reward 23.705263,predicted upper bound 23.709144,actual reward 22.135841
round 4119, predicted reward 25.860826,predicted upper bound 25.864561,actual reward 18.874196
round 4120, predicted reward 34.583465,predicted upper bound 34.587262,actual reward 38.589381
round 4121, predicted reward 25.785027,predicted upper bound 25.788776,actual reward 26.950484
round 4122, predicted reward 25.054115,predicted upper bound 25.058393,actual reward 19.488839
round 4123, predicted reward 28.474278,predicted upper bound 28.478251,actual reward 27.157125
round 4124, predicted reward 32.538755,predicted upper bound 32.542254,actual reward 27.696814
round 4125, predicted reward 29.748814,predicted upper bound 29.752600,actual reward 23.313296
round 4126, predicted reward 23.290497,predicted upper bound 23.294854,actual reward 21.897658
round 4127, predicted reward 22.384245,predicted upper bound 22.388035,actual reward 20.515820
round 4128, predicted reward 29.874162,predicted upper bound 29.878484,actual reward 25.838851
round 4129, predicted reward 27.417382,predicted upper bound 27.421105,actual reward 23.343695
round 4130, predicted reward 20.916575,predicted upper bound 20.920783,actual reward 13.605539
round 4131, predicted reward 32.566563,predicted upper bound 32.570249,actual reward 33.030055
round 4132, predicted reward 26.059780,predicted upper bound 26.063631,actual reward 22.762501
round 4133, predicted reward 20.543495,predicted upper bound 20.547294,actual reward 16.613413
round 4134, predicted reward 29.457380,predicted upper bound 29.461615,actual reward 31.911966
round 4135, predicted reward 26.843687,predicted upper bound 26.847588,actual reward 26.200127
round 4136, predicted reward 27.545351,predicted upper bound 27.549175,actual reward 28.295695
round 4137, predicted reward 21.298681,predicted upper bound 21.302543,actual reward 16.839681
round 4138, predicted reward 24.502593,predicted upper bound 24.506299,actual reward 25.611673
round 4139, predicted reward 31.237399,predicted upper bound 31.240801,actual reward 25.955028
round 4140, predicted reward 22.263114,predicted upper bound 22.266991,actual reward 25.372708
round 4141, predicted reward 25.069088,predicted upper bound 25.072901,actual reward 24.450432
round 4142, predicted reward 18.249763,predicted upper bound 18.253858,actual reward 20.157297
round 4143, predicted reward 32.122968,predicted upper bound 32.126962,actual reward 31.649589
round 4144, predicted reward 22.918570,predicted upper bound 22.922423,actual reward 19.586578
round 4145, predicted reward 25.415028,predicted upper bound 25.419067,actual reward 25.282302
round 4146, predicted reward 23.584348,predicted upper bound 23.587721,actual reward 25.709355
round 4147, predicted reward 32.773561,predicted upper bound 32.776959,actual reward 35.600237
round 4148, predicted reward 23.669197,predicted upper bound 23.672172,actual reward 23.408726
round 4149, predicted reward 22.192854,predicted upper bound 22.196828,actual reward 16.235990
round 4150, predicted reward 38.229140,predicted upper bound 38.232526,actual reward 42.910523
round 4151, predicted reward 33.466472,predicted upper bound 33.470188,actual reward 33.300373
round 4152, predicted reward 25.298093,predicted upper bound 25.301267,actual reward 27.240387
round 4153, predicted reward 23.825691,predicted upper bound 23.829155,actual reward 24.732281
round 4154, predicted reward 29.911752,predicted upper bound 29.915182,actual reward 33.709769
round 4155, predicted reward 20.327116,predicted upper bound 20.331421,actual reward 20.607644
round 4156, predicted reward 25.172475,predicted upper bound 25.176686,actual reward 25.307983
round 4157, predicted reward 23.992395,predicted upper bound 23.996212,actual reward 19.469525
round 4158, predicted reward 24.957433,predicted upper bound 24.961473,actual reward 19.087255
round 4159, predicted reward 26.434489,predicted upper bound 26.437691,actual reward 27.518280
round 4160, predicted reward 26.356530,predicted upper bound 26.360818,actual reward 22.160006
round 4161, predicted reward 30.897197,predicted upper bound 30.900086,actual reward 28.190807
round 4162, predicted reward 29.593354,predicted upper bound 29.597134,actual reward 30.220973
round 4163, predicted reward 29.770721,predicted upper bound 29.774064,actual reward 27.503788
round 4164, predicted reward 22.539395,predicted upper bound 22.543303,actual reward 24.615519
round 4165, predicted reward 29.456120,predicted upper bound 29.459749,actual reward 27.708277
round 4166, predicted reward 21.211228,predicted upper bound 21.214485,actual reward 21.467685
round 4167, predicted reward 27.169038,predicted upper bound 27.172638,actual reward 26.732894
round 4168, predicted reward 30.167205,predicted upper bound 30.170815,actual reward 29.777804
round 4169, predicted reward 24.611090,predicted upper bound 24.614277,actual reward 20.735241
round 4170, predicted reward 34.228606,predicted upper bound 34.231438,actual reward 38.006277
round 4171, predicted reward 24.427379,predicted upper bound 24.431457,actual reward 25.269506
round 4172, predicted reward 33.519757,predicted upper bound 33.522657,actual reward 32.637661
round 4173, predicted reward 30.690685,predicted upper bound 30.694453,actual reward 29.636540
round 4174, predicted reward 26.604158,predicted upper bound 26.607936,actual reward 22.119672
round 4175, predicted reward 33.006059,predicted upper bound 33.009107,actual reward 35.078992
round 4176, predicted reward 29.411661,predicted upper bound 29.415502,actual reward 27.256203
round 4177, predicted reward 28.746225,predicted upper bound 28.749450,actual reward 26.933629
round 4178, predicted reward 21.633483,predicted upper bound 21.637615,actual reward 19.463213
round 4179, predicted reward 25.416615,predicted upper bound 25.419731,actual reward 21.641685
round 4180, predicted reward 22.066208,predicted upper bound 22.069918,actual reward 26.087449
round 4181, predicted reward 20.034116,predicted upper bound 20.038613,actual reward 21.746012
round 4182, predicted reward 34.083593,predicted upper bound 34.086935,actual reward 37.661718
round 4183, predicted reward 29.023607,predicted upper bound 29.026814,actual reward 27.305337
round 4184, predicted reward 33.131075,predicted upper bound 33.134403,actual reward 30.375094
round 4185, predicted reward 25.743019,predicted upper bound 25.746916,actual reward 27.602969
round 4186, predicted reward 24.712634,predicted upper bound 24.715977,actual reward 17.579960
round 4187, predicted reward 31.476172,predicted upper bound 31.479627,actual reward 31.079832
round 4188, predicted reward 27.501042,predicted upper bound 27.505115,actual reward 22.661048
round 4189, predicted reward 25.037753,predicted upper bound 25.041527,actual reward 21.301385
round 4190, predicted reward 24.969908,predicted upper bound 24.973603,actual reward 24.305878
round 4191, predicted reward 31.569572,predicted upper bound 31.573211,actual reward 34.781789
round 4192, predicted reward 29.282620,predicted upper bound 29.286363,actual reward 28.224130
round 4193, predicted reward 27.965146,predicted upper bound 27.968944,actual reward 27.009698
round 4194, predicted reward 26.603904,predicted upper bound 26.607565,actual reward 21.166158
round 4195, predicted reward 20.881638,predicted upper bound 20.885089,actual reward 23.598829
round 4196, predicted reward 23.095435,predicted upper bound 23.099579,actual reward 20.129931
round 4197, predicted reward 25.705172,predicted upper bound 25.708367,actual reward 25.531199
round 4198, predicted reward 36.976776,predicted upper bound 36.980319,actual reward 38.810325
round 4199, predicted reward 32.231226,predicted upper bound 32.234613,actual reward 31.199277
round 4200, predicted reward 25.459960,predicted upper bound 25.463919,actual reward 18.054663
round 4201, predicted reward 22.808465,predicted upper bound 22.812274,actual reward 22.023880
round 4202, predicted reward 24.304334,predicted upper bound 24.308377,actual reward 21.812052
round 4203, predicted reward 25.358032,predicted upper bound 25.361756,actual reward 25.209357
round 4204, predicted reward 32.091739,predicted upper bound 32.094777,actual reward 29.703635
round 4205, predicted reward 22.498060,predicted upper bound 22.501596,actual reward 19.764836
round 4206, predicted reward 27.865876,predicted upper bound 27.869564,actual reward 23.553934
round 4207, predicted reward 28.912079,predicted upper bound 28.915111,actual reward 29.284406
round 4208, predicted reward 26.437340,predicted upper bound 26.440521,actual reward 27.974798
round 4209, predicted reward 22.122436,predicted upper bound 22.126269,actual reward 23.467258
round 4210, predicted reward 25.748623,predicted upper bound 25.752472,actual reward 22.683767
round 4211, predicted reward 29.866054,predicted upper bound 29.869502,actual reward 35.663608
round 4212, predicted reward 28.652696,predicted upper bound 28.656208,actual reward 21.829953
round 4213, predicted reward 25.455048,predicted upper bound 25.458508,actual reward 26.706271
round 4214, predicted reward 32.681072,predicted upper bound 32.684361,actual reward 25.677414
round 4215, predicted reward 28.105789,predicted upper bound 28.109295,actual reward 31.345131
round 4216, predicted reward 22.948316,predicted upper bound 22.952003,actual reward 25.269322
round 4217, predicted reward 25.365791,predicted upper bound 25.369810,actual reward 22.343162
round 4218, predicted reward 26.365288,predicted upper bound 26.368934,actual reward 32.057530
round 4219, predicted reward 35.657646,predicted upper bound 35.661170,actual reward 38.807818
round 4220, predicted reward 29.011487,predicted upper bound 29.015366,actual reward 30.253299
round 4221, predicted reward 29.956961,predicted upper bound 29.960566,actual reward 31.048988
round 4222, predicted reward 27.865269,predicted upper bound 27.868788,actual reward 21.363134
round 4223, predicted reward 30.158340,predicted upper bound 30.161877,actual reward 30.004868
round 4224, predicted reward 33.918925,predicted upper bound 33.922202,actual reward 33.840283
round 4225, predicted reward 33.798395,predicted upper bound 33.801686,actual reward 34.253005
round 4226, predicted reward 22.585880,predicted upper bound 22.589062,actual reward 25.166850
round 4227, predicted reward 31.358060,predicted upper bound 31.360927,actual reward 32.241841
round 4228, predicted reward 30.888682,predicted upper bound 30.892115,actual reward 31.153447
round 4229, predicted reward 22.937168,predicted upper bound 22.940650,actual reward 20.840714
round 4230, predicted reward 28.468293,predicted upper bound 28.471887,actual reward 28.987186
round 4231, predicted reward 25.535295,predicted upper bound 25.539002,actual reward 26.377685
round 4232, predicted reward 24.762618,predicted upper bound 24.765923,actual reward 25.712017
round 4233, predicted reward 26.774524,predicted upper bound 26.778332,actual reward 23.196035
round 4234, predicted reward 30.554292,predicted upper bound 30.557196,actual reward 26.172262
round 4235, predicted reward 26.871713,predicted upper bound 26.875054,actual reward 26.316277
round 4236, predicted reward 21.880157,predicted upper bound 21.884099,actual reward 17.287151
round 4237, predicted reward 33.173778,predicted upper bound 33.177412,actual reward 33.228938
round 4238, predicted reward 23.480922,predicted upper bound 23.484856,actual reward 16.732272
round 4239, predicted reward 25.007993,predicted upper bound 25.011974,actual reward 23.791882
round 4240, predicted reward 23.621065,predicted upper bound 23.624688,actual reward 16.036211
round 4241, predicted reward 30.580928,predicted upper bound 30.584787,actual reward 28.940222
round 4242, predicted reward 38.097346,predicted upper bound 38.100556,actual reward 40.862769
round 4243, predicted reward 26.238249,predicted upper bound 26.241784,actual reward 31.153717
round 4244, predicted reward 27.527518,predicted upper bound 27.530937,actual reward 23.501853
round 4245, predicted reward 31.468016,predicted upper bound 31.471071,actual reward 28.940204
round 4246, predicted reward 30.478404,predicted upper bound 30.482062,actual reward 30.503788
round 4247, predicted reward 23.340367,predicted upper bound 23.344353,actual reward 22.847790
round 4248, predicted reward 24.811133,predicted upper bound 24.814191,actual reward 23.654711
round 4249, predicted reward 25.500318,predicted upper bound 25.503894,actual reward 27.217921
round 4250, predicted reward 26.439794,predicted upper bound 26.443007,actual reward 32.875294
round 4251, predicted reward 22.783803,predicted upper bound 22.787488,actual reward 21.060035
round 4252, predicted reward 27.539662,predicted upper bound 27.543831,actual reward 25.222575
round 4253, predicted reward 29.003570,predicted upper bound 29.007222,actual reward 31.996806
round 4254, predicted reward 24.443862,predicted upper bound 24.447376,actual reward 26.081449
round 4255, predicted reward 23.338242,predicted upper bound 23.342222,actual reward 24.688098
round 4256, predicted reward 21.281480,predicted upper bound 21.285302,actual reward 24.434398
round 4257, predicted reward 29.396634,predicted upper bound 29.400084,actual reward 30.313514
round 4258, predicted reward 21.695565,predicted upper bound 21.699584,actual reward 17.654550
round 4259, predicted reward 30.673715,predicted upper bound 30.676986,actual reward 31.178297
round 4260, predicted reward 26.252858,predicted upper bound 26.256356,actual reward 20.948581
round 4261, predicted reward 26.954324,predicted upper bound 26.958179,actual reward 27.579226
round 4262, predicted reward 25.175036,predicted upper bound 25.179222,actual reward 25.428007
round 4263, predicted reward 29.752123,predicted upper bound 29.755845,actual reward 28.679507
round 4264, predicted reward 32.789752,predicted upper bound 32.792739,actual reward 33.811997
round 4265, predicted reward 25.163215,predicted upper bound 25.166781,actual reward 22.543767
round 4266, predicted reward 26.647384,predicted upper bound 26.651186,actual reward 24.171335
round 4267, predicted reward 37.026444,predicted upper bound 37.029730,actual reward 43.431882
round 4268, predicted reward 32.715251,predicted upper bound 32.718474,actual reward 35.896545
round 4269, predicted reward 24.107484,predicted upper bound 24.111374,actual reward 22.539611
round 4270, predicted reward 25.262910,predicted upper bound 25.266928,actual reward 26.618246
round 4271, predicted reward 30.452915,predicted upper bound 30.456687,actual reward 24.573597
round 4272, predicted reward 28.492660,predicted upper bound 28.495809,actual reward 27.735523
round 4273, predicted reward 27.807766,predicted upper bound 27.811633,actual reward 27.727185
round 4274, predicted reward 30.923891,predicted upper bound 30.927337,actual reward 30.635221
round 4275, predicted reward 26.448218,predicted upper bound 26.452507,actual reward 23.272667
round 4276, predicted reward 28.821896,predicted upper bound 28.825125,actual reward 32.708097
round 4277, predicted reward 33.863782,predicted upper bound 33.866909,actual reward 35.046746
round 4278, predicted reward 32.799970,predicted upper bound 32.803427,actual reward 28.987894
round 4279, predicted reward 33.869823,predicted upper bound 33.873259,actual reward 33.043093
round 4280, predicted reward 22.955245,predicted upper bound 22.958890,actual reward 19.915658
round 4281, predicted reward 24.166759,predicted upper bound 24.170601,actual reward 25.710075
round 4282, predicted reward 35.267776,predicted upper bound 35.270756,actual reward 36.709702
round 4283, predicted reward 31.323234,predicted upper bound 31.326607,actual reward 32.716248
round 4284, predicted reward 25.699105,predicted upper bound 25.702314,actual reward 26.428292
round 4285, predicted reward 29.213707,predicted upper bound 29.217418,actual reward 26.502633
round 4286, predicted reward 30.834355,predicted upper bound 30.837873,actual reward 31.799185
round 4287, predicted reward 28.597326,predicted upper bound 28.600341,actual reward 32.320855
round 4288, predicted reward 29.147824,predicted upper bound 29.151170,actual reward 23.049482
round 4289, predicted reward 27.178902,predicted upper bound 27.182335,actual reward 24.645534
round 4290, predicted reward 29.400528,predicted upper bound 29.403203,actual reward 26.590761
round 4291, predicted reward 31.872632,predicted upper bound 31.876411,actual reward 27.690865
round 4292, predicted reward 29.687760,predicted upper bound 29.691225,actual reward 30.001876
round 4293, predicted reward 37.364077,predicted upper bound 37.366834,actual reward 35.704292
round 4294, predicted reward 23.775763,predicted upper bound 23.779315,actual reward 21.180635
round 4295, predicted reward 27.730442,predicted upper bound 27.733654,actual reward 27.997237
round 4296, predicted reward 28.175232,predicted upper bound 28.178476,actual reward 22.677880
round 4297, predicted reward 31.096556,predicted upper bound 31.099822,actual reward 32.033760
round 4298, predicted reward 27.049845,predicted upper bound 27.053550,actual reward 26.666823
round 4299, predicted reward 22.846914,predicted upper bound 22.850511,actual reward 24.563997
round 4300, predicted reward 24.169543,predicted upper bound 24.172722,actual reward 24.480964
round 4301, predicted reward 29.631337,predicted upper bound 29.634843,actual reward 27.722964
round 4302, predicted reward 38.306929,predicted upper bound 38.309519,actual reward 41.683449
round 4303, predicted reward 32.071754,predicted upper bound 32.075092,actual reward 33.412717
round 4304, predicted reward 32.088212,predicted upper bound 32.091515,actual reward 36.583170
round 4305, predicted reward 28.292982,predicted upper bound 28.296267,actual reward 25.321019
round 4306, predicted reward 28.762956,predicted upper bound 28.766671,actual reward 24.850862
round 4307, predicted reward 35.733549,predicted upper bound 35.736221,actual reward 38.551546
round 4308, predicted reward 29.524855,predicted upper bound 29.528198,actual reward 32.268242
round 4309, predicted reward 18.524664,predicted upper bound 18.528481,actual reward 13.392241
round 4310, predicted reward 32.452252,predicted upper bound 32.455705,actual reward 31.807383
round 4311, predicted reward 24.117512,predicted upper bound 24.121280,actual reward 23.268977
round 4312, predicted reward 32.213321,predicted upper bound 32.216587,actual reward 30.988964
round 4313, predicted reward 30.289121,predicted upper bound 30.292550,actual reward 30.484980
round 4314, predicted reward 23.969606,predicted upper bound 23.973336,actual reward 20.555611
round 4315, predicted reward 33.443033,predicted upper bound 33.446485,actual reward 32.167015
round 4316, predicted reward 27.836792,predicted upper bound 27.840353,actual reward 24.081057
round 4317, predicted reward 28.973951,predicted upper bound 28.977587,actual reward 31.316388
round 4318, predicted reward 19.534256,predicted upper bound 19.537752,actual reward 21.249221
round 4319, predicted reward 32.307603,predicted upper bound 32.310640,actual reward 34.074510
round 4320, predicted reward 24.910904,predicted upper bound 24.915281,actual reward 20.746779
round 4321, predicted reward 26.485046,predicted upper bound 26.488810,actual reward 20.864327
round 4322, predicted reward 30.718787,predicted upper bound 30.721764,actual reward 31.681343
round 4323, predicted reward 28.381035,predicted upper bound 28.384836,actual reward 33.450398
round 4324, predicted reward 28.665688,predicted upper bound 28.669161,actual reward 26.255734
round 4325, predicted reward 24.847073,predicted upper bound 24.850842,actual reward 20.457775
round 4326, predicted reward 24.448368,predicted upper bound 24.451386,actual reward 20.498541
round 4327, predicted reward 24.557444,predicted upper bound 24.561065,actual reward 20.908419
round 4328, predicted reward 20.602984,predicted upper bound 20.606428,actual reward 18.933597
round 4329, predicted reward 23.284514,predicted upper bound 23.288176,actual reward 18.034898
round 4330, predicted reward 21.769202,predicted upper bound 21.773013,actual reward 21.609492
round 4331, predicted reward 31.985763,predicted upper bound 31.988816,actual reward 36.794521
round 4332, predicted reward 27.583330,predicted upper bound 27.586993,actual reward 24.207345
round 4333, predicted reward 29.761422,predicted upper bound 29.765681,actual reward 30.087038
round 4334, predicted reward 32.459689,predicted upper bound 32.462756,actual reward 32.151693
round 4335, predicted reward 24.821507,predicted upper bound 24.825271,actual reward 21.253743
round 4336, predicted reward 26.150128,predicted upper bound 26.153463,actual reward 25.242710
round 4337, predicted reward 24.660616,predicted upper bound 24.664066,actual reward 20.830679
round 4338, predicted reward 17.190608,predicted upper bound 17.193916,actual reward 14.791194
round 4339, predicted reward 28.424216,predicted upper bound 28.427631,actual reward 27.827864
round 4340, predicted reward 30.397056,predicted upper bound 30.400578,actual reward 25.691915
round 4341, predicted reward 35.701729,predicted upper bound 35.705036,actual reward 38.544474
round 4342, predicted reward 24.838342,predicted upper bound 24.841968,actual reward 22.056556
round 4343, predicted reward 32.522000,predicted upper bound 32.525264,actual reward 32.165799
round 4344, predicted reward 21.943464,predicted upper bound 21.946598,actual reward 20.636805
round 4345, predicted reward 31.323125,predicted upper bound 31.326119,actual reward 32.685124
round 4346, predicted reward 27.525646,predicted upper bound 27.528893,actual reward 21.867177
round 4347, predicted reward 22.732218,predicted upper bound 22.736349,actual reward 23.412721
round 4348, predicted reward 22.489737,predicted upper bound 22.493305,actual reward 22.324217
round 4349, predicted reward 23.841942,predicted upper bound 23.846085,actual reward 22.259793
round 4350, predicted reward 29.172152,predicted upper bound 29.175575,actual reward 29.537142
round 4351, predicted reward 35.352898,predicted upper bound 35.355988,actual reward 39.098729
round 4352, predicted reward 23.615396,predicted upper bound 23.619126,actual reward 21.192701
round 4353, predicted reward 33.948979,predicted upper bound 33.951958,actual reward 33.775383
round 4354, predicted reward 29.183722,predicted upper bound 29.187333,actual reward 31.088966
round 4355, predicted reward 31.809999,predicted upper bound 31.813510,actual reward 30.054197
round 4356, predicted reward 26.775185,predicted upper bound 26.777915,actual reward 20.426290
round 4357, predicted reward 28.526029,predicted upper bound 28.530049,actual reward 27.854854
round 4358, predicted reward 25.761270,predicted upper bound 25.764324,actual reward 25.324120
round 4359, predicted reward 29.143351,predicted upper bound 29.146663,actual reward 28.596902
round 4360, predicted reward 29.141219,predicted upper bound 29.144344,actual reward 29.946857
round 4361, predicted reward 23.395514,predicted upper bound 23.398870,actual reward 25.005917
round 4362, predicted reward 28.994364,predicted upper bound 28.996927,actual reward 24.422204
round 4363, predicted reward 27.133168,predicted upper bound 27.136847,actual reward 26.573686
round 4364, predicted reward 27.398894,predicted upper bound 27.402539,actual reward 27.062943
round 4365, predicted reward 27.674108,predicted upper bound 27.677594,actual reward 25.038362
round 4366, predicted reward 24.362640,predicted upper bound 24.366514,actual reward 21.095590
round 4367, predicted reward 27.073764,predicted upper bound 27.077175,actual reward 26.683888
round 4368, predicted reward 28.506105,predicted upper bound 28.509661,actual reward 26.135772
round 4369, predicted reward 22.483277,predicted upper bound 22.486295,actual reward 20.861106
round 4370, predicted reward 30.956009,predicted upper bound 30.958788,actual reward 32.935334
round 4371, predicted reward 22.805322,predicted upper bound 22.809192,actual reward 19.694217
round 4372, predicted reward 27.950272,predicted upper bound 27.953944,actual reward 28.000998
round 4373, predicted reward 24.617106,predicted upper bound 24.620729,actual reward 28.189766
round 4374, predicted reward 23.244477,predicted upper bound 23.248324,actual reward 20.408439
round 4375, predicted reward 21.889993,predicted upper bound 21.893680,actual reward 15.548684
round 4376, predicted reward 24.002617,predicted upper bound 24.006250,actual reward 20.874668
round 4377, predicted reward 27.137255,predicted upper bound 27.140613,actual reward 28.282844
round 4378, predicted reward 28.259817,predicted upper bound 28.263005,actual reward 27.164811
round 4379, predicted reward 22.981759,predicted upper bound 22.985014,actual reward 22.278325
round 4380, predicted reward 24.129119,predicted upper bound 24.132640,actual reward 26.247055
round 4381, predicted reward 30.898166,predicted upper bound 30.901785,actual reward 32.148167
round 4382, predicted reward 27.201422,predicted upper bound 27.205342,actual reward 24.722498
round 4383, predicted reward 28.144592,predicted upper bound 28.147440,actual reward 23.864656
round 4384, predicted reward 26.858475,predicted upper bound 26.861867,actual reward 25.670208
round 4385, predicted reward 22.989734,predicted upper bound 22.993343,actual reward 21.550701
round 4386, predicted reward 31.873828,predicted upper bound 31.876792,actual reward 24.832094
round 4387, predicted reward 26.658953,predicted upper bound 26.662224,actual reward 23.424899
round 4388, predicted reward 24.828238,predicted upper bound 24.831417,actual reward 26.816601
round 4389, predicted reward 24.866497,predicted upper bound 24.870181,actual reward 26.100135
round 4390, predicted reward 25.408346,predicted upper bound 25.412101,actual reward 22.496031
round 4391, predicted reward 28.963044,predicted upper bound 28.965786,actual reward 33.459879
round 4392, predicted reward 17.935182,predicted upper bound 17.938076,actual reward 12.519734
round 4393, predicted reward 22.968412,predicted upper bound 22.971994,actual reward 22.067510
round 4394, predicted reward 30.216214,predicted upper bound 30.219569,actual reward 26.466805
round 4395, predicted reward 23.185236,predicted upper bound 23.188678,actual reward 21.687406
round 4396, predicted reward 29.242219,predicted upper bound 29.245038,actual reward 32.019823
round 4397, predicted reward 29.990593,predicted upper bound 29.993605,actual reward 33.521698
... (rounds 4398 to 4998 omitted; each line has the same format: round index, predicted reward, predicted upper bound, actual reward) ...
round 4999, predicted reward 29.024906,predicted upper bound 29.028452,actual reward 29.235928
```python
# Best agent benchmark
np.random.seed(12345)
bestagent = BestAgent(K, T, d)
for tt in range(1, T + 1):
    # observe the contexts \{x_{t,a}\}_{a=1}^{K}
    context_list = SampleContext(d, K)
    # select the arm to play
    ind = bestagent.Action(context_list)
    # play arm ind and observe its reward
    reward = GetRealReward(context_list[:, ind], A)
    bestagent.Update(reward)
```
```python
# Uniform agent benchmark
np.random.seed(12345)
uniformagent = UniformAgent(K, T, d)
for tt in range(1, T + 1):
    # observe the contexts \{x_{t,a}\}_{a=1}^{K}
    context_list = SampleContext(d, K)
    # select the arm to play (uniformly at random)
    ind = uniformagent.Action(context_list)
    # play arm ind and observe its reward
    reward = GetRealReward(context_list[:, ind], A)
    uniformagent.Update(reward)
```
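The two benchmark loops above are identical apart from the agent being run. If you prefer, they can be wrapped in a small helper; the sketch below is only a refactoring suggestion, and it assumes the same `SampleContext` / `GetRealReward` / agent interface used in the cells above.
```python
def run_benchmark(agent, T, d, K, A, seed=12345):
    # run one agent for T rounds, relying on the agent to record its own reward history
    np.random.seed(seed)
    for tt in range(1, T + 1):
        context_list = SampleContext(d, K)                # observe the contexts of the K arms
        ind = agent.Action(context_list)                  # let the agent pick an arm
        reward = GetRealReward(context_list[:, ind], A)   # play it and observe the reward
        agent.Update(reward)
    return agent

# example usage: uniformagent = run_benchmark(UniformAgent(K, T, d), T, d, K, A)
```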
```python
import matplotlib.pyplot as plt

# cumulative reward of each agent over the T rounds
h_r_b = bestagent.GetHistoryReward()
plt.plot(range(0, T), np.cumsum(h_r_b))
h_r_u = uniformagent.GetHistoryReward()
plt.plot(range(0, T), np.cumsum(h_r_u))
h_r_n = neuralagent.GetHistoryReward()
plt.plot(range(0, T), np.cumsum(h_r_n))
plt.legend(["Best", "Uniform", "Neural"])
plt.xlabel("Round Index")
plt.ylabel("Total Reward")
plt.show()
```
```python
# ratio of cumulative rewards; the first 100 rounds are skipped to avoid noisy early values
plt.plot(range(100, T), np.cumsum(h_r_n)[100:T] / np.cumsum(h_r_b)[100:T])
plt.plot(range(100, T), np.cumsum(h_r_u)[100:T] / np.cumsum(h_r_n)[100:T])
plt.legend(["Neural / Best", "Uniform / Neural"])
# horizontal reference lines at 1.0 and 0.9
plt.plot(range(100, T), np.ones(T)[100:T])
plt.plot(range(100, T), 0.9 * np.ones(T)[100:T])
plt.xlabel("Round Index")
plt.ylabel("Cumulative Reward Ratio")
plt.show()
```
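Another common way to compare the agents is through cumulative regret with respect to the best agent. The sketch below reuses the reward histories `h_r_b`, `h_r_u` and `h_r_n` extracted above; the regret definition (best-agent reward minus the agent's reward, accumulated over rounds) is our own choice here, not something computed by the agent classes themselves.
```python
# cumulative regret of the Neural and Uniform agents with respect to the Best agent
regret_neural = np.cumsum(np.asarray(h_r_b) - np.asarray(h_r_n))
regret_uniform = np.cumsum(np.asarray(h_r_b) - np.asarray(h_r_u))
plt.plot(range(0, T), regret_neural)
plt.plot(range(0, T), regret_uniform)
plt.legend(["Neural", "Uniform"])
plt.xlabel("Round Index")
plt.ylabel("Cumulative Regret vs. Best")
plt.show()
```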
# Planck's law
Let's briefly introduce the main aspects of the theory by combining some Wikipedia articles (links below).
Planck's law describes the spectral density of electromagnetic radiation emitted by a black body in thermal equilibrium at a given temperature, when there is no net flow of matter or energy between the body and its environment.
Planck's law can be written in terms of the spectral energy density. These distributions have units of energy per volume per spectral unit.
\begin{equation}
\rho (\lambda, T) = \frac{8 \pi h c}{\lambda^5 \left(e^{\frac{hc}{\lambda k_B T}} - 1\right)}
\label{eq:planck}
\end{equation}
where $k_B$ is the Boltzmann constant, $h$ is the Planck constant, and $c$ is the speed of light in the medium, whether material or vacuum.
Black-body radiation is the thermal electromagnetic radiation within or surrounding a body in thermodynamic equilibrium with its environment, emitted by a black body (an idealized opaque, non-reflective body). It has a specific spectrum and intensity that depend only on the body's temperature, which is assumed, for the sake of calculations and theory, to be uniform and constant.
References: [Planck's law](https://en.wikipedia.org/wiki/Planck%27s_law), [Black-body radiation](https://en.wikipedia.org/wiki/Black-body_radiation), [Energy density](https://en.wikipedia.org/wiki/Energy_density)
# About this piece of software
A simple script to plot Planck's law.
Please, run the following cell so that the script can be imported (the notebook must be in the same folder as the script). Then, run each code cell after reading their meaning.
```python
import planck
```
The following must be imported so that we can use them in the examples.
```python
import matplotlib.pyplot as plt
import numpy as np
# the following must be imported for interactive plots
from ipywidgets import interact, interactive, fixed
```
Now let's test the methods of the planck module.
We can use the `planck_energy_density` function to calculate the energy density at a given wavelength at a given temperature. Remember that the module uses SI units. For example, at 500 nm and 7000 K:
```python
planck.planck_energy_density(500.0e-9, 7000)
```
2662877.9681075383
Usually we want the energy density at a given wavelength _range_ in order to plot a distribution. The function accepts wavelength arrays.
Let's create a wavelength array with 10 values between 400 and 800 nm and calculate the energy density values at 5000 K:
```python
lambda_example = np.linspace(400e-9, 800e-9, 10)
energy_density_5000K_example = planck.planck_energy_density(lambda_example, 5000)
energy_density_5000K_example
```
array([366503.0978276 , 444693.41497869, 498031.66222402, 527350.02877923,
536480.59647891, 530167.30064167, 512920.45172757, 488528.23748235,
459942.12455755, 429341.64031601])
Great. Now we have data points that can be plotted. Let's create a figure and use Matplotlib to draft a plot.
```python
# plot draft
fig_ex1 = plt.figure(figsize=(8,4))
ax_ex1 = fig_ex1.add_subplot(111)
ax_ex1.plot(lambda_example, energy_density_5000K_example)
plt.show()
```
Yeah, it resembles a Planck distribution. You can modify the plot using Matplotlib's methods to be more suitable for presentation (axes labels, wavelengths in nanometers etc). You probably should increase the number of points to get a smoother plot.
However, the planck module has a function for plots that can do all this!
The function receives a wavelength array and a temperature array/list. So, let's plot the same range again but using it and increasing the number of data points to 1000 (probably overkill but hey it's a computer doing math).
The function always gets the current axis to plot, so we are going to create a figure with axes before calling the function.
```python
lambda_example = np.linspace(400e-9, 800e-9, 1000)
fig_ex2 = plt.figure(figsize=(8,4))
ax_ex2 = fig_ex2.add_subplot(111)
planck.plot_planck(lambda_example,[5000])
```
The function generates the labels and a decent scale. The wavelength unit is nanometers by default.
Since the function receives a temperature _list_, more than one value can be passed. Let's test this feature passing 5000 and 6000 K.
```python
fig_ex3 = plt.figure(figsize=(8,4))
ax_ex3 = fig_ex3.add_subplot(111)
planck.plot_planck(lambda_example,[5000, 6000])
```
Great. Let's increase the wavelength range and pass more temperatures using a numpy array:
```python
lambda_array = np.linspace(1.0e-9, 2.0e-6, 1000)
temperature_array = np.arange(1000, 7001, 500)
fig1 = plt.figure(figsize=(10, 6))
ax = fig1.add_subplot(111)
planck.plot_planck(lambda_array, temperature_array)
```
By default, the method uses the coolwarm [colormap](https://matplotlib.org/3.1.1/gallery/color/colormap_reference.html). Let's change it to gist_rainbow:
```python
fig2 = plt.figure(figsize=(10, 6))
ax = fig2.add_subplot(111)
planck.plot_planck(lambda_array, temperature_array, colors=plt.cm.gist_rainbow)
```
The planck module also has a function that can be called _before_ the plot. It's the `plot_visible` one, and it adds the visible spectrum to the plot background so that we can see lower-temperature bodies with maximum wavelength emission near the red (or even beyond, in the infrared) and higher-temperature ones towards the violet and beyond (ultraviolet):
```python
fig3 = plt.figure(figsize=(10, 6))
ax = fig3.add_subplot(111)
planck.plot_visible()
planck.plot_planck(lambda_array, temperature_array)
```
Last but not least, the module has a function for interactive plots: `plot_planck_interactive`. This function must be passed to the `interactive` method from the ipywidgets package. A wavelength array must also be passed as a fixed argument. The temperature must be passed as shown below, with an initial value (500 in the example), a final value (8000) and a step (500).
```python
lambda_array = np.linspace(1.0e-9, 3.0e-6, 1000)
graph = interactive(planck.plot_planck_interactive,
wavelength_array=fixed(lambda_array),
temperature=(500,8000,500))
display(graph)
```
interactive(children=(IntSlider(value=500, description='temperature', max=8000, min=500, step=500), Output()),…
Let's change the values to see the effects.
```python
lambda_array = np.linspace(200e-9, 900e-9, 1000)
graph = interactive(planck.plot_planck_interactive,
wavelength_array=fixed(lambda_array),
temperature=(3000,9000,400))
display(graph)
```
interactive(children=(IntSlider(value=3000, description='temperature', max=9000, min=3000, step=400), Output()…
```python
```
[source notebook metadata] path: tutorial.ipynb; repo: chicolucio/planck; license: MIT; hexsha: 509ee43b3deaa35a4cc482a793ccbcc4f0ccdf34; size: 444,640 bytes; format: Jupyter Notebook
```python
from IPython.display import Image
Image('../../Python_probability_statistics_machine_learning_2E.png',width=200)
```
# Worked Examples of Conditional Expectation and Mean Square Error Optimization
Brzezniak [[brzezniak1999basic]](#brzezniak1999basic) is a great book because it
approaches
conditional expectation through a sequence of exercises, which is
what we are
trying to do here. The main difference is that Brzezniak takes a
more abstract
measure-theoretic approach to the same problems. Note that you
*do* need to
grasp measure theory for advanced areas in probability, but for
what we have
covered so far, working the same problems in his text using our
methods is
illuminating. It always helps to have more than one way to solve
*any* problem.
I have numbered the examples corresponding to the book and tried
to follow its
notation.
## Example
This is Example 2.1 from Brzezniak.
Three coins, 10p, 20p and 50p are tossed.
The values of the coins that land
heads up are totaled. What is the expected
total given that two coins have
landed heads up? In this case we want to compute $\mathbb{E}(\xi|\eta)$
where
$$
\xi := 10 X_{10} + 20 X_{20} +50 X_{50}
$$
where $X_i \in \{0,1\} $ and where $X_{10}$ is the
Bernoulli-distributed random
variable corresponding to the 10p coin (and so
on). Thus, $\xi$ represents the
total value of the heads-up coins. The $\eta$
represents the condition that only
two of the three coins are heads-up,
$$
\eta := X_{10} X_{20} (1-X_{50})+ (1-X_{10}) X_{20} X_{50}+ X_{10} (1-X_{20})
X_{50}
$$
and is a function that is non-zero *only* when two of the three coins lands
heads-up. Each triple term catches each of these three possibilities. For
example,
the first term equals one when the 10p and 20p are heads up and the
50p is
heads down. The remaining terms are zero.
To compute the
conditional expectation, we want to find a function $h$ of
$\eta$ that minimizes
the mean-squared-error (MSE),
$$
\mbox{MSE}= \sum_{X\in\{0,1\}^3} \frac{1}{2^3} (\xi-h(\eta))^2
$$
where the sum is taken over all possible triples of outcomes for
$\{X_{10},X_{20} ,X_{50}\}$ because each
of the three coins has a $\frac{1}{2}$
chance of coming up heads.
Now, the question boils down to how can we
characterize the function $h(\eta)$?
Note that $\eta \mapsto \{0,1\}$ so $h$
takes on only two values. So, the
orthogonal inner product condition is the
following:
$$
\langle \xi -h(\eta), \eta \rangle = 0
$$
But, because we are only interested in $\eta=1$, this simplifies to
$$
\begin{align*}
\langle \xi -h(1), 1 \rangle &= 0 \\\
\langle \xi,1 \rangle
&=\langle h(1),1 \rangle
\end{align*}
$$
This doesn't look so hard to evaluate but we have to compute the
integral over
the set where $\eta=1$. In other words, we need the set of
triples
$\{X_{10},X_{20},X_{50}\}$ where $\eta=1$. That is, we can
compute
$$
\int_{\{\eta=1\}} \xi dX = h(1) \int_{\{\eta=1\}} dX
$$
which is what Brzezniak does. Instead, we can define
$h(\eta)=\alpha \eta$ and
then find $\alpha$. Re-writing the
orthogonal condition gives
$$
\begin{align*}
\langle \xi -\eta, \alpha\eta \rangle &= 0 \\\
\langle \xi,
\eta \rangle &= \alpha \langle \eta,\eta \rangle \\\
\alpha &= \frac{\langle
\xi, \eta \rangle}{\langle \eta,\eta \rangle}
\end{align*}
$$
where
$$
\langle \xi, \eta \rangle =\sum_{X\in\{0,1\}^3} \frac{1}{2^3}(\xi\eta)
$$
Note that we can just sweep over all triples
$\{X_{10},X_{20},X_{50}\}$ because
the definition of $h(\eta)$ zeros out when
$\eta=0$ anyway. All we have to do
is plug everything in and solve. This
tedious job is perfect for Sympy.
```python
import sympy as S
X10,X20,X50 = S.symbols('X10,X20,X50',real=True)
xi = 10*X10+20*X20+50*X50
eta = X10*X20*(1-X50)+X10*(1-X20)*(X50)+(1-X10)*X20*(X50)
num=S.summation(xi*eta,(X10,0,1),(X20,0,1),(X50,0,1))
den=S.summation(eta*eta,(X10,0,1),(X20,0,1),(X50,0,1))
alpha=num/den
print(alpha) # alpha=160/3
```
160/3
This means that
$$
\mathbb{E}(\xi|\eta) = \frac{160}{3} \eta
$$
which we can check with a quick simulation
```python
import numpy as np
import pandas as pd
d = pd.DataFrame(columns=['X10','X20','X50'])
d.X10 = np.random.randint(0,2,1000)
d.X20 = np.random.randint(0,2,1000)
d.X50 = np.random.randint(0,2,1000)
```
**Programming Tip.**
The code above creates an empty Pandas data frame with the
named columns.
The next three lines assign values to each of the columns.
The code above simulates flipping the three coins 1000
times. Each column of the
dataframe is either `0` or `1`
corresponding to heads-down or heads-up,
respectively. The
condition is that two of the three coins have landed heads-up.
Next, we can group the columns according to their sums. Note that
the sum can
only be in $\{0,1,2,3\}$ corresponding to `0`
heads-up, `1` heads-up, and so on.
```python
grp=d.groupby(d.eval('X10+X20+X50'))
```
**Programming Tip.**
The `eval` function of the Pandas data frame takes the
named
columns and evaluates the given formula. At the time of this
writing, only
simple formulas involving primitive operations are
possible.
Next, we can
get the `2` group, which corresponds to
exactly two coins having landed heads-
up, and then evaluate
the sum of the values of the coins. Finally, we can take
the mean
of these sums.
```python
grp.get_group(2).eval('10*X10+20*X20+50*X50').mean()
```
The result is close to `160/3=53.33` which supports
the analytic result. The
following code shows that we
can accomplish the same simulation using pure
Numpy.
```python
import numpy as np
from numpy import array
x=np.random.randint(0,2,(3,1000))
print(np.dot(x[:,x.sum(axis=0)==2].T,array([10,20,50])).mean())
```
In this case, we used the Numpy dot product to compute
the value of the heads-
up coins. The `sum(axis=0)==2` part selects
the columns that correspond to two
heads-up coins.
Still another way to get at the same problem is to forego the
random sampling part and just consider all possibilities
exhaustively using the
`itertools` module in Python's standard
library.
```python
import itertools as it
list(it.product((0,1),(0,1),(0,1)))
```
Note that we need to call `list` above in order to trigger the
iteration in
`it.product`. This is because the `itertools` module is
generator-based so does
not actually *do* the iteration until it is iterated
over (by `list` in this
case). This shows all possible triples
$(X_{10},X_{20},X_{50})$ where `0` and
`1` indicate heads-down and heads-up,
respectively. The next step is to filter
out the cases that correspond to two
heads-up coins.
```python
list(filter(lambda i:sum(i)==2,it.product((0,1),(0,1),(0,1))))
```
Next, we need to compute the sum of the coins and combine
the prior code.
```python
list(map(lambda k:10*k[0]+20*k[1]+50*k[2],
filter(lambda i:sum(i)==2,
it.product((0,1),(0,1),(0,1)))))
```
The mean of the output is `53.33`, which is yet another way to get
the same
result. For this example, we demonstrated the full spectrum of
approaches made
possible using Sympy, Numpy, and Pandas. It is always valuable
to have multiple
ways of approaching the same problem and cross-checking
the result.
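To make that cross-check explicit, the mean of the enumerated totals can be computed directly (reusing the `itertools` expression above):
```python
vals = list(map(lambda k: 10*k[0] + 20*k[1] + 50*k[2],
                filter(lambda i: sum(i) == 2,
                       it.product((0,1),(0,1),(0,1)))))
np.mean(vals)  # 160/3 = 53.33...
```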
## Example
This is Example 2.2 from Brzezniak. Three coins, 10p, 20p and 50p are tossed as
before. What is the conditional expectation of the total amount shown by the
three coins given the total amount shown by the 10p and 20p coins only? For
this problem,
$$
\begin{align*}
\xi := & 10 X_{10} + 20 X_{20} +50 X_{50} \\\
\eta :=& 30
X_{10} X_{20} + 20 (1-X_{10}) X_{20} + 10 X_{10} (1-X_{20})
\end{align*}
$$
which takes on four values $\eta \mapsto \{0,10,20,30\}$ and only
considers the
10p and 20p coins. In contrast to the last problem, here we are
interested in
$h(\eta)$ for all of the values of $\eta$. Naturally, there are
only four values
for $h(\eta)$ corresponding to each of these four values.
Let's first consider
$\eta=10$. The orthogonal condition is then
$$
\langle\xi-h(10),10\rangle = 0
$$
The domain for $\eta=10$ is $\{X_{10}=1,X_{20}=0,X_{50}\}$ which we
can
integrate out of the expectation below,
$$
\begin{align*}
\mathbb{E}_{\{X_{10}=1,X_{20}=0,X_{50}\}}(\xi-h(10)) 10 &=0
\\\
\mathbb{E}_{\{X_{50}\}}(10-h(10)+50 X_{50}) &=0 \\\
10-h(10) + 25 &=0
\end{align*}
$$
which gives $h(10)=35$. Repeating the same process for $\eta \in
\{20,30\}$
gives $h(20)=45$ and $h(30)=55$, respectively. This is the approach
Brzezniak
takes. On the other hand, we can just look at affine functions,
$h(\eta) = a
\eta + b $ and use brute-force calculus.
```python
from sympy.abc import a,b
eta = X10*X20*30 + X10*(1-X20)*(10)+ (1-X10)*X20*(20)
h = a*eta + b
MSE=S.summation((xi-h)**2*S.Rational(1,8),(X10,0,1),
(X20,0,1),
(X50,0,1))
sol=S.solve([S.diff(MSE,a),S.diff(MSE,b)],(a,b))
print(sol)
```
**Programming Tip.**
The `Rational` function from Sympy code expresses a
rational number that Sympy
is able to manipulate as such. This is different from
specifying a fraction
like `1/8.`, which Python would automatically compute as a
floating point
number (i.e., `0.125`). The advantage of using `Rational` is that
Sympy can
later produce rational numbers as output, which are sometimes easier
to make
sense of.
This means that
<!-- Equation labels as ordinary links -->
<div id="_auto1"></div>
$$
\begin{equation}
\mathbb{E}(\xi|\eta) = 25+\eta
\label{_auto1} \tag{1}
\end{equation}
$$
since $\eta$ takes on only four values, $\{0,10,20,30\}$, we can
write this
out explicitly as
<!-- Equation labels as ordinary links -->
<div id="eq:ex21sol"></div>
$$
\begin{equation}
\mathbb{E}(\xi|\eta) =
\begin{cases}
25 & \text{for}\: \eta=0
\\\
35 & \text{for}\: \eta=10 \\\
45 & \text{for}\: \eta=20 \\\
55 &
\text{for}\: \eta=30
\end{cases}
\end{equation}
\label{eq:ex21sol} \tag{2}
$$
Alternatively, we can use orthogonal inner products to write out
the following
conditions for the postulated affine function:
<!-- Equation labels as ordinary links -->
<div id="eq:ex22a"></div>
$$
\begin{equation}
\label{eq:ex22a} \tag{3}
\langle \xi-h(\eta), \eta \rangle = 0
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<div id="eq:ex22b"></div>
$$
\begin{equation}
\label{eq:ex22b} \tag{4}
\langle \xi-h(\eta),1\rangle = 0
\end{equation}
$$
Writing these out and solving for $a$ and $b$ is tedious and
a perfect job for
Sympy. Starting with Equation [3](#eq:ex22a),
```python
expr=S.expand((xi-h)*eta)
print(expr)
```
and then because $\mathbb{E}(X_i^2)=1/2=\mathbb{E}(X_i)$, we make the
following
substitutions
```python
expr.xreplace({X10**2:0.5, X20**2:0.5,X10:0.5,X20:0.5,X50:0.5})
```
We can do this for the other orthogonal inner product in Equation
[4](#eq:ex22b) as follows,
**Programming Tip.**
Because Sympy symbols are
hashable, they can be used as keys in Python
dictionaries as in the `xreplace`
function above.
```python
S.expand((xi-h)*1).xreplace({X10**2:0.5,
X20**2:0.5,
X10:0.5,
X20:0.5,
X50:0.5})
```
Then, combining this result with the previous one and solving
for `a` and `b`
gives,
```python
S.solve([-350.0*a-15.0*b+725.0,-15.0*a-b+40.0])
```
which again gives us the final solution,
$$
\mathbb{E}(\xi|\eta) = 25+ \eta
$$
The following is a quick simulation to demonstrate this. We can
build on the
Pandas dataframe we used for the last example and create
a new column for the
sum of the 10p and 20p coins, as shown below.
```python
d['sm'] = d.eval('X10*10+X20*20')
```
We can group this by the values of this sum,
```python
d.groupby('sm').mean()
```
But we want the expectation of the value of the coins
```python
d.groupby('sm').mean().eval('10*X10+20*X20+50*X50')
```
which is very close to our analytical result in Equation [2](#eq:ex21sol).
## Example
This is Example 2.3 paraphrased from Brzezniak. Given $X$ uniformly
distributed
on $[0,1]$, find $\mathbb{E}(\xi|\eta)$ where
$$
\xi(x) = 2 x^2
$$
$$
\eta(x) =
\begin{cases}
1 & \mbox{if } x \in [0,1/3] \\\
2 & \mbox{if } x
\in (1/3,2/3) \\\
0 & \mbox{if } x \in (2/3,1]
\end{cases}
$$
Note that this problem is different from the previous two because the
sets that
characterize $\eta$ are intervals instead of discrete points.
Nonetheless, we
will eventually have three values for $h(\eta)$ because $\eta
\mapsto
\{0,1,2\}$. For $\eta=1$, we have the orthogonal conditions,
$$
\langle \xi-h(1),1\rangle = 0
$$
which boils down to
$$
\mathbb{E}_{\{x \in [0,1/3]\}}(\xi-h(1))=0
$$
$$
\int_0^{\frac{1}{3}}(2 x^2-h(1))dx = 0
$$
and solving this for $h(1)$ gives $h(1)=2/27$. This is the way
Brzezniak works this problem. Alternatively, we can use $h(\eta) = a + b\eta
+
c\eta^2$ and brute force calculus.
```python
x,c,b,a=S.symbols('x,c,b,a')
xi = 2*x**2
eta=S.Piecewise((1,S.And(S.Gt(x,0),
S.Lt(x,S.Rational(1,3)))), # 0 < x < 1/3
(2,S.And(S.Gt(x,S.Rational(1,3)),
S.Lt(x,S.Rational(2,3)))), # 1/3 < x < 2/3,
(0,S.And(S.Gt(x,S.Rational(2,3)),
                 S.Lt(x,1)))) # 2/3 < x < 1
h = a + b*eta + c*eta**2
J=S.integrate((xi-h)**2,(x,0,1))
sol=S.solve([S.diff(J,a),
S.diff(J,b),
S.diff(J,c),
],
(a,b,c))
```
```python
print(sol)
print(S.piecewise_fold(h.subs(sol)))
```
Thus, collecting this result gives:
$$
\mathbb{E}(\xi|\eta) = \frac{38}{27} - \frac{20}{9}\eta + \frac{8}{9} \eta^2
$$
which can be re-written as a piecewise function of x,
<!-- Equation labels as ordinary links -->
<div id="eq:ex23a"></div>
$$
\begin{equation}
\mathbb{E}(\xi|\eta(x)) =\begin{cases} \frac{2}{27} &
\text{for}\: 0 < x < \frac{1}{3} \\\frac{14}{27} & \text{for}\: \frac{1}{3} < x
< \frac{2}{3} \\\frac{38}{27} & \text{for}\: \frac{2}{3}<x < 1 \end{cases}
\end{equation}
\label{eq:ex23a} \tag{5}
$$
Alternatively, we can use the orthogonal inner product conditions directly by
choosing $h(\eta)=c+\eta b +\eta^2 a$,
<!-- Equation labels as ordinary links -->
<div id="eq:ex23b"></div>
$$
\begin{align*}
\langle \xi-h(\eta),1\rangle = 0 \\\
\langle
\xi-h(\eta),\eta\rangle = 0 \\\
\langle \xi-h(\eta),\eta^2\rangle = 0
\end{align*}
\label{eq:ex23b} \tag{6}
$$
and then solving for $a$,$b$, and $c$.
```python
x,a,b,c,eta = S.symbols('x,a,b,c,eta',real=True)
xi = 2*x**2
eta=S.Piecewise((1,S.And(S.Gt(x,0),
S.Lt(x,S.Rational(1,3)))), # 0 < x < 1/3
(2,S.And(S.Gt(x,S.Rational(1,3)),
S.Lt(x,S.Rational(2,3)))), # 1/3 < x < 2/3,
(0,S.And(S.Gt(x,S.Rational(2,3)),
                 S.Lt(x,1)))) # 2/3 < x < 1
h = c+b*eta+a*eta**2
```
Then, the orthogonal conditions become,
```python
S.integrate((xi-h)*1,(x,0,1))
S.integrate((xi-h)*eta,(x,0,1))
S.integrate((xi-h)*eta**2,(x,0,1))
```
Now, we just combine the three equations and solve
for the parameters,
```python
eqs=[ -5*a/3 - b - c + 2/3,
-3*a - 5*b/3 - c + 10/27,
-17*a/3 - 3*b - 5*c/3 + 58/81]
sol=S.solve(eqs)
print(sol)
```
We can assemble the final result by substituting in the solution,
```python
print(S.piecewise_fold(h.subs(sol)))
```
which is the same as our analytic result in Equation [5](#eq:ex23a),
just in
decimal format.
**Programming Tip.**
The definition of Sympy's piecewise
function is verbose because of the way
Python parses inequality statements. As
of this writing, this has not been
reconciled in Sympy, so we have to use the
verbose declaration.
To reinforce our result, let's do a quick simulation
using Pandas.
```python
d = pd.DataFrame(columns=['x','eta','xi'])
d.x = np.random.rand(1000)
d.xi = 2*d.x**2
d.xi.head()
```
Now, we can use the `pd.cut` function to group the `x`
values in the following,
```python
pd.cut(d.x,[0,1/3,2/3,1]).head()
```
Note that the `head()` call above is only to limit the printout shown.
The
categories listed are each of the intervals for `eta` that we specified
using
the `[0,1/3,2/3,1]` list. Now that we know how to use `pd.cut`, we
can just
compute the mean on each group as shown below,
```python
d.groupby(pd.cut(d.x,[0,1/3,2/3,1])).mean()['xi']
```
which is pretty close to our analytic result in Equation
[5](#eq:ex23a).
Alternatively, `sympy.stats` has some limited tools for the same
calculation.
```python
from sympy.stats import E, Uniform
x=Uniform('x',0,1)
E(2*x**2,S.And(x < S.Rational(1,3), x > 0))
E(2*x**2,S.And(x < S.Rational(2,3), x > S.Rational(1,3)))
E(2*x**2,S.And(x < 1, x > S.Rational(2,3)))
```
which again gives the same result still another way.
## Example
This is
Example 2.4 from Brzezniak. Find $\mathbb{E}(\xi|\eta)$ for
$$
\xi(x) = 2 x^2
$$
<!-- Equation labels as ordinary links -->
<div id="eq:ex24"></div>
$$
\eta =
\begin{cases}2 & \mbox{if } 0 \le x < \frac{1}{2} \\ x & \mbox{if } \frac{1}{2}
< x \le 1 \end{cases}
\label{eq:ex24} \tag{7}
$$
Once again, $X$ is uniformly distributed on the unit interval. Note
that $\eta$
is no longer discrete for every domain. For the domain $0 <x <
1/2$, $h(2)$
takes on only one value, say, $h_0$. For this domain, the
orthogonal condition
becomes,
$$
\mathbb{E}_{\{\eta=2\}}((\xi(x)-h_0)2)=0
$$
which simplifies to,
$$
\begin{align*}
\int_0^{1/2} 2 x^2-h_0 dx &= 0 \\\
\int_0^{1/2} 2 x^2 dx &=
\int_0^{1/2} h_0 dx \\\
h_0 &= 2 \int_0^{1/2} 2 x^2 dx \\\
h_0 &= \frac{1}{6}
\end{align*}
$$
For the other domain where $\{\eta=x\}$ in Equation [7](#eq:ex24), we again
use
the orthogonal condition,
$$
\begin{align*}
\mathbb{E}_{\{\eta=x\}}((\xi(x)-h(x))x)&=0 \\\
\int_{1/2}^1
(2x^2-h(x)) x dx &=0 \\\
h(x) &= 2x^2
\end{align*}
$$
Assembling the solution gives,
$$
\mathbb{E}(\xi|\eta(x)) =\begin{cases} \frac{1}{6} & \text{for}\: 0 \le x <
\frac{1}{2} \\ 2 x^2 & \text{for}\: \frac{1}{2} < x \le 1 \end{cases}
$$
although this result is not explicitly written as a function of $\eta$.
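The other examples in this section are backed by quick simulations, so here is an analogous sanity check for this one; it reuses the Pandas pattern from above and only verifies the constant piece $h_0=1/6$ (on the upper half, $h(x)=2x^2$ coincides with $\xi$ by construction).
```python
from pandas import DataFrame
import numpy as np

d = DataFrame(columns=['x', 'xi'])
d.x = np.random.rand(10000)
d.xi = d.eval('2*x**2')
# On {0 <= x < 1/2} eta is constant, so the conditional expectation there is just
# the average of xi over that set; analytically it equals 1/6.
d[d.x < 0.5].xi.mean()
```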
## Example
This is Exercise 2.6 in Brzezniak. Find $\mathbb{E}(\xi|\eta)$ where
$$
\xi(x) = 2 x^2
$$
$$
\eta(x) = 1 - \lvert 2 x-1 \rvert
$$
and $X$ is uniformly distributed in the unit interval. We
can write this out as
a piecewise function in the following,
$$
\eta =\begin{cases} 2 x & \text{for}\: 0 \le x < \frac{1}{2} \\ 2 -2x &
\text{for}\: \frac{1}{2} < x \le 1 \end{cases}
$$
The kink is at $x=1/2$. Let's start with the $\{\eta=2x\}$ domain.
$$
\begin{align*}
\mathbb{E}_{\{\eta=2x\}}((2 x^2-h(2 x)) 2 x)& = 0 \\\
\int_{0}^{1/2} (2x^2-h(2 x) ) 2 x dx &=0
\end{align*}
$$
We can make this explicitly a function of $\eta$ by a change
of variables
($\eta=2x$) which gives
$$
\int_{0}^{1} (\eta^2/2-h(\eta))\frac{\eta}{2} d\eta =0
$$
Thus, for this domain, $h(\eta)=\eta^2/2$. Note that due to the
change of
variables, $h(\eta)$ is defined over $\eta\in[0,1]$.
For the other domain
where $\{\eta=2-2x\}$, we have
$$
\begin{align*}
\mathbb{E}_{\{\eta=2-2x\}}((2 x^2-h(2-2x)) (2-2x))& = 0 \\\
\int_{1/2}^{1} (2 x^2-h(2-2x) ) (2-2x) dx &=0
\end{align*}
$$
Once again, a change of variables makes the $ \eta$ dependency
explicit using
$\eta=2-2x$ which gives
$$
\begin{align*}
\int_{0}^{1} ((2-\eta)^2/2-h(\eta) ) \frac{\eta}{2} d\eta &=0
\\\
h(\eta) &= (2-\eta)^2/2
\end{align*}
$$
Once again, the change of variables means this solution is valid
over
$\eta\in[0,1]$. Thus, because both pieces are valid over the
same domain
($\eta\in[0,1]$), we can just add them to get the final solution,
$$
h(\eta) = \eta^2-2\eta+2
$$
A quick simulation can help bear this out.
```python
from pandas import DataFrame
import numpy as np
d = DataFrame(columns=['xi','eta','x','h','h1','h2'])
# 100 random samples
d.x = np.random.rand(100)
d.xi = d.eval('2*x**2')
d.eta =1-abs(2*d.x-1)
d.h1=d[(d.x<0.5)].eval('eta**2/2')
d.h2=d[(d.x>=0.5)].eval('(2-eta)**2/2')
d.fillna(0,inplace=True)
d.h = d.h1+d.h2
d.head()
```
Note that we have to be careful where we apply the individual
solutions using
the slice `(d.x<0.5)` index. The `fillna` part ensures that the
default `NaN`
that fills out the empty row entries is replaced with zero before
combining the
individual solutions. Otherwise, the `NaN` values would circulate
through the
rest of the computation. The following is the
essential code that draws
[Figure](#fig:Conditional_expectation_MSE_005).
```python
%matplotlib inline
from matplotlib.pyplot import subplots
fig,ax=subplots()
ax.plot(d.xi,d.eta,'.',alpha=.3,label='$\eta$')
ax.plot(d.xi,d.h,'k.',label='$h(\eta)$')
ax.legend(loc=0,fontsize=18)
ax.set_xlabel('$2 x^2$',fontsize=18)
ax.set_ylabel('$h(\eta)$',fontsize=18)
```
**Programming Tip.**
Basic LaTeX formatting works for the labels in
[Figure](#fig:Conditional_expectation_MSE_005). The `loc=0` in the `legend`
function is the code for the *best* placement for the labels in the legend. The
individual labels should be specified when the elements are drawn individually,
otherwise they will be hard to separate out later. This is accomplished using
the `label` keyword in the `plot` commands.
```python
from matplotlib.pyplot import subplots
from pandas import DataFrame
import numpy as np
d = DataFrame(columns=['xi','eta','x','h','h1','h2'])
# 100 random samples
d.x = np.random.rand(100)
d.xi = d.eval('2*x**2')
d.eta =1-abs(2*d.x-1)
d.h1=d[(d.x<0.5)].eval('eta**2/2')
d.h2=d[(d.x>=0.5)].eval('(2-eta)**2/2')
d.fillna(0,inplace=True)
d.h = d.h1+d.h2
fig,ax=subplots()
_=ax.plot(d.xi,d.eta,'.k',alpha=.3,label=r'$\eta$')
_=ax.plot(d.xi,d.h,'ks',label=r'$h(\eta)$',alpha=.3)
_=ax.set_aspect(1)
_=ax.legend(loc=0,fontsize=18)
_=ax.set_xlabel(r'$\xi=2 x^2$',fontsize=24)
_=ax.set_ylabel(r'$h(\eta),\eta$',fontsize=24)
fig.tight_layout()
fig.savefig('fig-probability/Conditional_expectation_MSE_Ex_005.png')
```
<!-- dom:FIGURE: [fig-probability/Conditional_expectation_MSE_Ex_005.png,
width=500 frac=0.85] The diagonal line shows where the conditional expectation
equals the $\xi$ function. <div id="fig:Conditional_expectation_MSE_005"></div>
-->
<!-- begin figure -->
<div id="fig:Conditional_expectation_MSE_005"></div>
<p>The diagonal line shows where the conditional expectation equals the $\xi$
function.</p>
<!-- end figure -->
[Figure](#fig:Conditional_expectation_MSE_005)
shows the $\xi$ data plotted
against $\eta$ and $h(\eta) =
\mathbb{E}(\xi|\eta)$. Points on the diagonal
are points where $\xi$ and
$\mathbb{E}(\xi|\eta)$ match. As shown by the
dots, there is no agreement
between the raw $\eta$ data and $\xi$. Thus, one
way to think about the
conditional expectation is as a functional transform
that bends the curve onto
the diagonal line. The black dots plot $\xi$
versus $\mathbb{E}(\xi|\eta)$ and
the two match everywhere along the diagonal
line. This is to be expected because
the conditional expectation is the MSE
best estimate for $\xi$ among all
functions of $\eta$.
## Example
This is Exercise 2.14 from Brzezniak. Find
$\mathbb{E}(\xi|\eta)$ where
$$
\xi(x) = 2 x^2
$$
$$
\eta =
\begin{cases} 2x & \mbox{if } 0 \le x < \frac{1}{2} \\ 2x-1 & \mbox{if
} \frac{1}{2} < x \le 1 \end{cases}
$$
and $X$ is uniformly distributed in the unit interval. This is the
same as the
last example and the only difference here is that $\eta$ is not
continuous at
$x=\frac{1}{2}$, whereas before it was continuous.
the first part
of the prior example so we will skip it here. The second part
follows the same
reasoning as the last example, so we will just write the
answer for the $\{\eta
= 2x-1\}$ case as the following
$$
h(\eta)=\frac{(1+\eta)^2}{2} , \: \forall \eta \: \in [0,1]
$$
and then adding these up as before gives the full solution:
$$
h(\eta)= \frac{1}{2} +\eta + \eta^2
$$
The interesting part about this example is shown in
[Figure](#fig:Conditional_expectation_MSE_006). The dots show where $\eta$ is
discontinuous and yet the $h(\eta)=\mathbb{E}(\xi|\eta)$ solution is equal to
$\xi$ (i.e., matches the diagonal). This illustrates the power of the orthogonal
inner product technique, which does not need continuity or complex
set-theoretic
arguments to calculate solutions. By contrast, I urge you to
consider
Brzezniak's solution to this problem which requires such methods.
```python
d = DataFrame(columns=['xi','eta','x','h','h1','h2'])
d.x = np.random.rand(100) # 100 random samples
d.xi = d.eval('2*x**2')
d['eta']=(d.x<0.5)*(2*d.x)+(d.x>=0.5)*(2*d.x-1)
d.h1=d[(d.x<0.5)].eval('eta**2/2')
d.h2=d[(d.x>=0.5)].eval('(1+eta)**2/2')
d.fillna(0,inplace=True)
d.h = d.h1+d.h2
fig,ax=subplots()
_=ax.plot(d.xi,d.eta,'.k',alpha=.3,label='$\eta$')
_=ax.plot(d.xi,d.h,'ks',label='$h(\eta)$',alpha=0.3)
_=ax.set_aspect(1)
_=ax.legend(loc=0,fontsize=18)
_=ax.set_xlabel('$2 x^2$',fontsize=24)
_=ax.set_ylabel('$h(\eta),\eta$',fontsize=24)
fig.tight_layout()
fig.savefig('fig-probability/Conditional_expectation_MSE_Ex_006.png')
```
<!-- dom:FIGURE: [fig-probability/Conditional_expectation_MSE_Ex_006.png,
width=500 frac=0.85] The diagonal line shows where the conditional expectation
equals the $\xi$ function. <div id="fig:Conditional_expectation_MSE_006"></div>
-->
<!-- begin figure -->
<div id="fig:Conditional_expectation_MSE_006"></div>
<p>The diagonal line shows where the conditional expectation equals the $\xi$
function.</p>
<!-- end figure -->
Extending projection methods to random
variables provides multiple ways for
calculating solutions to conditional
expectation problems. In this section, we
also worked out corresponding
simulations using a variety of Python modules. It
is always advisable to have
more than one technique at hand to cross-check
potential solutions. We worked
out some of the examples in Brzezniak's
book using our methods as a way to show
multiple ways to solve the same
problem. Comparing Brzezniak's measure-theoretic
methods to our less abstract
techniques is a great way to get a handle on both
concepts, which are important
for advanced study in stochastic processes.
[source notebook metadata] path: chapter/probability/Conditional_expectation_MSE_Ex.ipynb; repo: derakding/Python-for-Probability-Statistics-and-Machine-Learning-2E; license: MIT; hexsha: 9c4bc619254224110c9e4cf2104c733812a44e78; size: 220,051 bytes; format: Jupyter Notebook
# CMSIS-DSP Python package example
## Installing and importing the needed packages
The following command may take some time to execute : the full cmsisdsp library is built.
```python
!pip install cmsisdsp
```
Requirement already satisfied: cmsisdsp in c:\benchresults\pythonwrappertests\testenv\lib\site-packages (1.2.1)
Requirement already satisfied: numpy>=1.19 in c:\benchresults\pythonwrappertests\testenv\lib\site-packages (from cmsisdsp) (1.22.2)
Requirement already satisfied: jinja2>=3.0 in c:\benchresults\pythonwrappertests\testenv\lib\site-packages (from cmsisdsp) (3.0.3)
Requirement already satisfied: networkx>=2.5 in c:\benchresults\pythonwrappertests\testenv\lib\site-packages (from cmsisdsp) (2.6.3)
Requirement already satisfied: sympy>=1.6 in c:\benchresults\pythonwrappertests\testenv\lib\site-packages (from cmsisdsp) (1.9)
Requirement already satisfied: MarkupSafe>=2.0 in c:\benchresults\pythonwrappertests\testenv\lib\site-packages (from jinja2>=3.0->cmsisdsp) (2.1.0)
Requirement already satisfied: mpmath>=0.19 in c:\benchresults\pythonwrappertests\testenv\lib\site-packages (from sympy>=1.6->cmsisdsp) (1.2.1)
```python
import numpy as np
import cmsisdsp as dsp
import cmsisdsp.fixedpoint as f
```
```python
import matplotlib.pyplot as plt
from ipywidgets import interact, interactive, fixed, interact_manual,FloatSlider
import ipywidgets as widgets
```
## Creating the signal
### Conversion functions to use CMSIS-DSP FFTs with complex numbers
CMSIS-DSP FFTs process arrays of complex numbers, which are represented in memory as arrays of floats. There is no specific data type for complex numbers.
The Python arrays here contain complex numbers, so they need to be converted to sequences of real numbers.
The two functions below do those conversions.
```python
# Array of complex numbers as an array of real numbers
def imToReal1D(a):
ar=np.zeros(np.array(a.shape) * 2)
ar[0::2]=a.real
ar[1::2]=a.imag
return(ar)
# Array of real numbers as an array of complex numbers
def realToIm1D(ar):
return(ar[0::2] + 1j * ar[1::2])
```
```python
nb = 512
signal = None
```
You can play with the slider to change the frequency of the signal.
Don't forget to reconvert the signal to a Q15 format if you want to test the Q15 FFT.
```python
@interact(f=FloatSlider(100,min=10,max=150,step=20,continuous_update=False))
def gen_signal(f=100):
global signal
global nb
signal = np.sin(2 * np.pi * np.arange(nb)*f / nb) + 0.1*np.random.randn(nb)
plt.plot(signal)
plt.show()
```
interactive(children=(FloatSlider(value=100.0, continuous_update=False, description='f', max=150.0, min=10.0, …
## Using the F32 CMSIS-DSP FFT
The `arm_cfft_instance_f32` is created and initialized.
```python
# CMSIS-DSP FFT F32 initialization
cfftf32=dsp.arm_cfft_instance_f32()
status=dsp.arm_cfft_init_f32(cfftf32,nb)
print(status)
```
0
The log magnitude of the FFT is computed and displayed.
```python
# Re-evaluate this each time you change the signal
signalR = imToReal1D(signal)
resultR = dsp.arm_cfft_f32(cfftf32,signalR,0,1)
resultI = realToIm1D(resultR)
mag=20 * np.log10(np.abs(resultI))
plt.plot(mag[1:nb//2])
plt.show()
```
## Using the Q15 CMSIS-DSP FFT
The signal must be converted to Q15 each time it is changed with the slider above.
```python
# Convert the signal to Q15 and viewed as a real array
signalR = imToReal1D(signal)
signalRQ15 = f.toQ15(signalR)
```
The `arm_cfft_instance_q15` is created and initialized
```python
# Initialize the Q15 CFFT
cfftq15 = dsp.arm_cfft_instance_q15()
status = dsp.arm_cfft_init_q15(cfftq15,nb)
print(status)
```
0
```python
# Compute the Q15 CFFT and convert back to float and complex array
resultR = dsp.arm_cfft_q15(cfftq15,signalRQ15,0,1)
resultR = f.Q15toF32(resultR)
resultI = realToIm1D(resultR)*nb  # rescale by nb to undo the internal downscaling of the fixed-point CFFT
mag = 20 * np.log10(np.abs(resultI))
plt.plot(mag[1:nb//2])
plt.show()
```
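As a quick way to gauge the precision lost on the fixed-point path, the two magnitude spectra can be recomputed side by side and overlaid; this sketch simply reuses the instances and helper functions defined above.
```python
# Overlay the F32 and Q15 magnitude spectra to inspect the quantization error.
sigF32 = imToReal1D(signal)
sigQ15 = f.toQ15(imToReal1D(signal))
magF32 = 20 * np.log10(np.abs(realToIm1D(dsp.arm_cfft_f32(cfftf32, sigF32, 0, 1))))
magQ15 = 20 * np.log10(np.abs(realToIm1D(f.Q15toF32(dsp.arm_cfft_q15(cfftq15, sigQ15, 0, 1))) * nb))  # same nb rescaling as above
plt.plot(magF32[1:nb//2], label="F32")
plt.plot(magQ15[1:nb//2], label="Q15")
plt.legend()
plt.show()
```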
```python
```
[source notebook metadata] path: CMSIS/DSP/PythonWrapper/examples/cmsisdsp_tests.ipynb; repo: shosakam/CMSIS_5; license: Apache-2.0; hexsha: 6ec43b72bdbdbe4a0795c668da27efbe54f9f8a9; size: 67,000 bytes; format: Jupyter Notebook
```python
%load_ext autoreload
%autoreload 2
```
```python
# Importing packages and own module
import numpy as np
from scipy import optimize
import matplotlib.pyplot as plt
from types import SimpleNamespace
import inauguralproject as ip
```
# Question 1
This question investigates what level of insurance an agent would choose if the insurance company is not trying to make any profit, but simply makes sure that the premium covers the expected cost. In that case, the premium, $\pi$, is a function of the size of the coverage, $q$, and the probability of the loss, $p$:
\begin{equation}
\pi(p,q)= pq.
\end{equation}
The expected utility of the agent is
\begin{equation}
V(q;\pi) = pu\left(y-x+q-\pi(p,q)\right) + (1-p) u\left(y-\pi(p,q)\right).
\end{equation}
where $y$ is the initial assets held by the agent, $x$ is the amount of loss, and $u(\cdot)$ is a CRRA utility function.
As the coverage can not exceed the loss, the solution to the question is given by
\begin{equation}
q^\ast = argmax_{q\in \left[0,x\right]}V(q;\pi)
\end{equation}
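The maximization itself is done by `ip.optimal_q_function` from the accompanying module. A minimal sketch of what such a function could look like is given below; the CRRA form $u(z)=z^{1+\theta}/(1+\theta)$ and the use of a bounded scalar optimizer are assumptions for illustration, not the module's actual implementation.
```python
# Hedged sketch of a coverage solver (the real one lives in inauguralproject.py).
from scipy import optimize

def u(z, theta):
    return z**(1 + theta) / (1 + theta)          # assumed CRRA utility

def expected_utility(q, x, par):
    pi = par.p * q                               # actuarially fair premium
    return par.p * u(par.y - x + q - pi, par.theta) + (1 - par.p) * u(par.y - pi, par.theta)

def optimal_q_sketch(x, par):
    obj = lambda q: -expected_utility(q, x, par)                 # maximize by minimizing the negative
    return optimize.minimize_scalar(obj, bounds=(0, x), method='bounded').x
```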
```python
# a. initial parameters
par = SimpleNamespace()
par.theta = -2
par.p = 0.2
par.y = 1
# b. creating vector of x's and for the optimal q's
N = 1000
x_vec = np.linspace(0.01,0.9,N)
optimal_q_vec = np.empty(N)
# c. solving the problem
for i, x in enumerate(x_vec):
optimal_q_vec[i] = ip.optimal_q_function(x,par)
# d. plotting the solution
plt.style.use('seaborn-whitegrid')
fig = plt.figure(dpi=100)
ax = fig.add_subplot(1,1,1)
ax.plot(x_vec, optimal_q_vec, label = '$q^*$')
ax.set_xlabel('Loss, $x$')
ax.set_ylabel('Coverage, $q$')
ax.set_title('Optimal Coverage for the Agent')
ax.set_xlim(0,0.9)
ax.set_ylim(0,0.9)
legend = ax.legend(loc=2, frameon=True, framealpha=1)
plt.show()
```
As shown in the figure, the agent would choose full coverage.
# Question 2
The agent is willing to pay the premium of an insurance contract as long as the expected utility of taking the contract is at least as high as the expected utility of not taking the contract. Here, I will cover which contracts will be acceptable for both the agent and the insurer in the case where the potential loss is $0.6$. The results are shown in the figure.
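The calculation is done by `ip.optimal_premium`; one plausible sketch is to find, for each coverage level, the premium that leaves the agent exactly indifferent between insuring and not insuring, for instance with a bracketed root finder. The helper below is illustrative only and again assumes the CRRA form.
```python
# Hedged sketch: the maximum acceptable premium solves V(q; pi) = V_no_insurance.
from scipy import optimize

def u(z, theta):
    return z**(1 + theta) / (1 + theta)          # assumed CRRA utility, as above

def optimal_premium_sketch(x, q, par):
    V_NI = par.p * u(par.y - x, par.theta) + (1 - par.p) * u(par.y, par.theta)
    gap = lambda pi: (par.p * u(par.y - x + q - pi, par.theta)
                      + (1 - par.p) * u(par.y - pi, par.theta)) - V_NI
    # gap(0) > 0 and gap(q) < 0, so the indifference premium lies inside (0, q)
    return optimize.brentq(gap, 1e-8, q)
```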
```python
# a. Creating a vector of q's and a vector of minimum premiums
x = 0.6
q_vec = np.linspace(0.01,x,10)
pi = q_vec*par.p
# b. Solving for the maximum premium
premium_vec = np.empty(len(q_vec))
for i, q in enumerate(q_vec):
premium_vec[i] = ip.optimal_premium(x,q,par)
# c. Plotting graph
fig = plt.figure(dpi=111)
ax = fig.add_subplot(1,1,1)
ax.plot(q_vec,premium_vec, label = 'Maximimum premium for agent', color='red')
ax.plot(q_vec, pi, label = 'Minimum premium for insurance company', color='blue')
ax.fill_between(q_vec, pi, premium_vec, color= 'grey', alpha=0.3, label= 'Acceptable for both parties')
ax.set_title('Area of Feasible Contracts')
ax.set_xlim(0,x)
ax.set_ylim(0,0.3)
ax.set_xlabel('Insurance Coverage')
ax.set_ylabel('Premium')
legend = ax.legend(loc=2, frameon=True, framealpha=1)
plt.show()
```
In the case of a market with many insurance companies, the premium would be near the blue line, but not at it, as the companies need some contribution margin to run their operations. In the case of a monopoly, the premium would be at the red line.
# Question 3
Instead of the loss being fixed and either occurring or not, I now consider the case where the loss is drawn from a beta distribution and where the coverage is a fraction, $\gamma$, of the loss. Thus, the coverage and the loss can be expressed as
\begin{equation}
q = \gamma x \:\:\: \gamma\in[0,1]
\end{equation}
and
\begin{equation}
x\sim Beta(\alpha, \beta), \:\:where\:\: \alpha = 2, \beta = 7
\end{equation}
The agents utility, if they take on a insurance contract, is
\begin{equation}
V(\gamma, \pi) = \int_0^1 u(y-(1-\gamma)x-\pi)f(x)dx
\end{equation}
and if they do not take on a contract, it is
\begin{equation}
V_{NI} = \int_0^1 u(y-x)f(x)dx.
\end{equation}
By performing a Monte Carlo integration, I solve for the expected value of the agent's utility, given different values of $\gamma$ and $\pi$.
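The module's `ip.expected_value` presumably averages the utility over the drawn losses; a minimal Monte Carlo sketch of that idea is shown below, again assuming the CRRA form.
```python
# Hedged Monte Carlo sketch of V(gamma, pi): average utility over the simulated losses in par.X.
import numpy as np

def expected_value_sketch(par):
    z = par.y - (1 - par.gamma) * par.X - par.pi   # end-of-period wealth for each simulated loss
    return np.mean(z**(1 + par.theta) / (1 + par.theta))
```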
```python
# a. Drawing random numbers from beta-distribuition
np.random.seed(118)
N = 10000
par.X = np.random.beta(a=2,b=7,size = N)
# b. Solving for different values of gamma and pi
gamma1 = par.gamma = 0.9
pi1 = par.pi = 0.2
expected_value1 = ip.expected_value(par)
gamma2 =par.gamma = 0.45
pi2 = par.pi = 0.1
expected_value2 = ip.expected_value(par)
# c. Printing results
print(f'When coverage is {gamma1} and the premium is {pi1} the expected utility is {expected_value1:.3f}')
print(f'When coverage is {gamma2} and the premium is {pi2} the expected utility is {expected_value2:.3f}')
if expected_value1 > expected_value2:
print(f'Therefore, the agent prefers a coverage of {gamma1} with a premium of {pi1}')
else:
print(f'Therefore, the agent prefers a coverage of {gamma2} and a premium of {pi2}')
```
When coverage is 0.9 and the premium is 0.2 the expected utility is -1.286
When coverage is 0.45 and the premium is 0.1 the expected utility is -1.297
Therefore, the agent prefers a coverage of 0.9 with a premium of 0.2
# Question 4
From the insurance company's viewpoint, it wants to maximize its profit. If it is a monopolist, it will charge the maximum premium that the agent is willing to accept. Therefore, it has to solve
$$
\begin{aligned}
\pi^{\ast} = argmax_\pi \; & \pi\\
&\text{s.t.}\\
V(\gamma, \pi)& \ge V_{NI}
\end{aligned}
$$
Since the agent's expected utility is decreasing in the premium, the constraint binds at the optimum, so $\pi^{\ast}$ solves $V(\gamma, \pi^{\ast}) = V_{NI}$. I solve this for a coverage ratio of $\gamma = 0.95$.
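A plausible form of `ip.objective_for_maximizing_profit` is the utility gap between insuring at premium $\pi$ and not insuring, so that its root is exactly the participation-constraint premium; the sketch below is illustrative only.
```python
# Hedged sketch: the root of this gap in pi is the highest premium the agent accepts.
import numpy as np

def utility_gap_sketch(pi, par):
    u = lambda z: z**(1 + par.theta) / (1 + par.theta)       # assumed CRRA utility
    V_insured = np.mean(u(par.y - (1 - par.gamma) * par.X - pi))
    V_no_insurance = np.mean(u(par.y - par.X))
    return V_insured - V_no_insurance
```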
```python
# a. initial parameters
gamma = par.gamma = 0.95
initial_guess = 0.2
# b. Solving for optimal premium
sol = optimize.root(
ip.objective_for_maximizing_profit,
initial_guess,
args = (par),
)
solution = float(sol.x)
# c. Print solution
print(f'If the customer wants a coverage of {gamma} they are willing to accept a premium up to {solution:.4f}.')
```
If the customer wants a coverage of 0.95 they are willing to accept a premium up to 0.2369.
[source notebook metadata] path: inauguralproject/inauguralproject.ipynb; repo: NumEconCopenhagen/projects-2022-ludvigsen; license: MIT; hexsha: 0b4b0ca07120675a6f559cec4bff630d2f58080e; size: 103,305 bytes; format: Jupyter Notebook
#### **Model project: A Solow model with fossil fuels and a climate externality**
The motivation for this project is to analyze long-term growth in the presence of finite resources and a negative externality from previously utilized resources. For this purpose, we analyze the following Solow model:
**Model setup** <br>
Equations (1)-(7) below characterize a Solow model for a closed economy with depletable fossil fuels. The stock of fossil fuels is denoted by $R_t$ and $E_t$ is the amount of fossil fuels used as an input in production in each period. $Y_t$ denotes GDP and $K_t, L_t$ and $A_t$ denote capital, labor and total factor productivity (TFP), respectively. Equation (3) describes capital accumulation, and equations (4) and (5) describe the evolution of the stock of labor and TFP, respectively:
\begin{equation}
Y_t = D_t \cdot K^{\alpha}_{t} (A_t L_t)^\beta E^\epsilon_t, \quad \alpha, \beta, \epsilon > 0, \quad \alpha + \beta +\epsilon = 1 \tag{1}
\end{equation}
\begin{equation}
D_t = \left( \frac{R_t}{R_0} \right) ^\phi, \quad \phi > 0 \tag{2}
\end{equation}
\begin{equation}
K_{t+1} = sY_t +(1-\delta)K_t, \quad 0 < \delta < 1 \tag{3}
\end{equation}
\begin{equation}
L_{t+1} = (1+n)L_t, \quad n \geq 0 \tag{4}
\end{equation}
\begin{equation}
A_{t+1} = (1+g)A_t, \quad g \geq 0 \tag{5}
\end{equation}
\begin{equation}
R_{t+1} = R_t - E_t \tag{6}
\end{equation}
\begin{equation}
E_{t} = s_E R_t, \quad 0 < s_E < \delta \tag{7}
\end{equation}
**Loading packages**
```python
import numpy as np
from scipy import linalg
from scipy import optimize as opt
from scipy.ndimage.interpolation import shift
import sympy as sm
import pandas as pd
import matplotlib.pyplot as plt
sm.init_printing(pretty_print = True)
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
import matplotlib.animation as animation
from matplotlib.widgets import Slider
```
**Climate externality**
The following graph maps the damage function of the climate externality for different values of $\phi$, i.e. the parameter that determines the marginal damage that the externality inflicts. As we will see, the nature of the damage function is critical for whether climate change will affect long-term growth.
\begin{equation}
D_t = \left( \frac{R_t}{R_0} \right) ^\phi , \qquad \phi > 0 \tag{Externality}
\end{equation}
For $\phi>1$ the marginal damage caused by the cumulative use of fossil fuels will be decreasing, and for $1>\phi>0$ the marginal damage will be increasing.
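One way to verify this is to differentiate the damage term with respect to the cumulative use of fossil fuels, $R_0-R_t$:
\begin{equation}
\frac{\partial (1-D_t)}{\partial (R_0-R_t)} = \frac{\partial D_t}{\partial R_t} = \frac{\phi}{R_0}\left( \frac{R_t}{R_0} \right)^{\phi-1}
\end{equation}
which falls as $R_t$ falls (i.e. as cumulative use rises) when $\phi>1$, and rises when $0<\phi<1$.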
```python
# The following figure shows the transition of the damage function as fossil fuels are being depleted.
def Damage_function(R0,RT,phi):
D = (RT/R0)**phi;
return D
def plot_func(phi):
R0 = 100;
R_series = np.linspace(0.1,R0,100)
D_series = np.zeros(100)
for i in range(0,100):
D_series[i] = Damage_function(R0, R_series[i],phi)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_title("Transition of $D_t$ for different values of phi")
ax.set_xlabel('$R_T$ - stock left')
ax.set_ylabel('Damage function')
ax.set_xlim([0, 100])
ax.set_ylim([0, 1])
x1 = np.linspace(*ax.get_xlim())
x2 = np.linspace(*ax.get_ylim())
ax.plot(x1, x2, 'k--')
ax.plot(R_series,D_series,'b')
interact(plot_func, phi = widgets.FloatSlider(value=1, min=0.05, max=4, step=0.05))  # lowercase 'value' so the initial slider position is actually applied
```
interactive(children=(FloatSlider(value=0.05, description='phi', max=4.0, min=0.05, step=0.05), Output()), _do…
<function __main__.plot_func(phi)>
**Production function** <br>
The following function defines the Cobb-Douglas production function with the multiplicative damage function.
We rewrite (1) using $E_t = s_E \cdot R_t$ and the definition of $D_t$:
$$Y_t = R_t^{\phi + \epsilon} R_0^{-\phi} s_E^\epsilon K^{\alpha}_{t} (A_t L_t)^\beta$$
```python
# Production function
def production(R,R0,A,L,K,phi,epsilon,se, beta, alpha):
Y = R**(phi + epsilon) * R0 **(-phi) * se**epsilon * (A*L)**beta * K**alpha
return Y
```
```python
# Analytical long run steady state level growth rate for plotting
def gy_ss(beta,epsilon,g,n,phi,se):
g_ss = g*beta/(beta+epsilon) -n*epsilon/(beta+epsilon) -se*(epsilon+phi)/(beta+epsilon)
return g_ss
```
**Simulating the model** <br>
We simulate the model from time $t=1$ up to time $T=200$ (arbitrarily chosen) in a recursive fashion. The variables $R_t$, $A_t$ and $L_t$ grow exogenously, and output in the current period is given by current values, $Y_t = F(K_t,L_t,A_t,R_t)$, but the next-period capital stock $K_{t+1} = sY_t +(1-\delta)K_t$ is built recursively. We need to initialize the series at index [0] given this feature. We simulate the economy this way and plot the output/labor and capital/labor ratios.
```python
#Parameter values
alpha = 0.2
beta = 0.6
epsilon = 1-alpha -beta
s = 0.2
se = 0.005
g = 0.02
n = 0.02
phi = 0.8
delta = 0.08
#Starting values for state variables
K0 = 10
R0 = 10
L0 = 10
A0 = 10
# Empty vectors for storring result
K_list = []
R_list = []
A_list = []
L_list = []
Y_list = []
# Initialize
K_list.append(K0)
R_list.append(R0)
A_list.append(A0)
L_list.append(L0)
Y_list.append(production(R0,R0,A0,L0,K0,phi,epsilon,se, beta,alpha))
T = 200;
t = 1
while t <= T:
K_list.append(s*Y_list[t-1] + (1-delta)*K_list[t-1])
R_list.append(R_list[t-1]*(1-se))
A_list.append(A_list[t-1]*(1+g))
L_list.append(L_list[t-1]*(1+n))
Y_list.append(production(R_list[t],R0,A_list[t],L_list[t],K_list[t],phi,epsilon,se, beta,alpha))
t += 1
Y_list = np.array(Y_list) # To numpy array
L_list = np.array(L_list) # To numpy array
K_list = np.array(K_list) # To numpy array
# Per units of labor
y_list = Y_list/L_list # Element wise division
k_list = K_list/L_list
Y_max = np.max(Y_list)
K_max = np.max(K_list)
```
```python
# Growth rate
y_list_lagged = np.roll(y_list,1)
k_list_lagged = np.roll(k_list,1)
k_growth = np.log(k_list[1:200]) - np.log(k_list_lagged[1:200])
y_growth = np.log(y_list[1:200]) - np.log(y_list_lagged[1:200])
g_ss=gy_ss(beta,epsilon,g,n,phi,se) # steady state growth rate
#Plot
f = plt.figure()
# first subplot
f.set_figheight(10)
f.set_figwidth(20)
plt.subplot(2,2,1)
plt.plot(y_list)
plt.ylabel('y - GDP per worker')
plt.xlabel('t - time period')
plt.title('Evolution of GDP per worker')
plt.subplot(2,2,2)
plt.plot(k_list, color = 'g')
plt.ylabel('k - Capital per worker')
plt.xlabel('t - time period')
plt.title('Evolution of Capital per worker')
plt.subplot(2,2,3)
plt.plot(y_growth)
plt.axhline(g_ss, color='r', linestyle='--')
plt.ylabel('Growth in - GDP per worker')
plt.xlabel('t - time period')
plt.title('Evolution of Growth in gdp per worker')
plt.subplot(2,2,4)
plt.plot(k_growth, color = 'g')
plt.axhline(g_ss, color='r', linestyle='--')
plt.ylabel('Growth in - GDP per worker')
plt.xlabel('t - time period')
plt.title('Evolution of Growth in Capital per worker')
plt.show()
```
From the graphs above we notice that there is positive growth in GDP per worker and capital per worker even in the long run. This seems at first rather counter-intuitive: how can one obtain sustained growth in the long run while exhausting a non-renewable resource that is essential for production? The result follows from the high degree of substitutability between man-made capital and natural resources built into the Cobb-Douglas production function (a unit elasticity of substitution). As time passes we substitute towards using less of the natural resource and more man-made capital in production, and in the limit we use an infinitesimal amount of the natural resource. It should be noted that this assumption of easy substitution between the inputs has been criticized by prominent economists such as HERMAN E. DALY (1990), who argues against neoclassical production functions: "... Do extra sawmills substitute for diminishing forests? Do more refineries substitute for depleted oil wells? Do larger nets substitute for declining fish populations?".
Additionally, it is noticeable that the growth rate of capital per worker and the growth rate of GDP per worker converge to the same level in the long run. This suggests that there must exist some capital/output ratio which is constant in steady state. In the following we examine this.
**Finding the steady-state**
In the last section we established that the growth rates of capital and output are identical in the long run. If we define the capital/output ratio $z_t \equiv \frac{K_t}{Y_t} = \frac{k_t}{y_t}$, we are now interested in determining the steady state of this ratio. Straightforward algebra on the model equations gives the following transition equation:
$z_{t+1} = \left( \frac{1}{1-s_e}\right)^{\epsilon + \phi}\left(\frac{1}{(1+n)(1+g)}\right)^{\beta}(s+(1-\delta)z_{t})^{1-\alpha}z_{t}^{\alpha}$
The steady state value is characterized by $z_{t} = z_{t+1} = z_{ss}$ or equivalently the growth rate of $z$ $g_z = 0$.
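Before turning to dedicated root finders, a quick way to preview the steady state is to iterate the transition equation directly until it converges; the sketch below uses the baseline parameter values from earlier in the notebook.
```python
# Fixed-point iteration on the transition equation for z (baseline parameters assumed).
def z_next(z, s=0.2, se=0.005, epsilon=0.2, phi=0.8, n=0.02, g=0.02, beta=0.6, alpha=0.2, delta=0.08):
    return ((1/(1-se))**(epsilon+phi) * (1/((1+n)*(1+g)))**beta
            * (s + (1-delta)*z)**(1-alpha) * z**alpha)

z = 0.5
for _ in range(1000):
    z_new = z_next(z)
    if abs(z_new - z) < 1e-12:
        break
    z = z_new
print(z)  # converges to roughly 1.93
```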
**Numerical solution of the capital output ratio, $z_t$**
To find the steady state numerically, we try different numerical methods. We consider the Newton algorithm, which is gradient based, the bisection method, and the brentq method, which combines bisection with other root-finding techniques. We simply use numerical derivatives for the Newton algorithm, although for simple problems one could consider passing an analytical gradient and Hessian. For the bisection and brentq methods we need to pass an interval in which to find the root. We have used the same tolerance level for all the algorithms.
```python
# Solve for ss
#Parameter values
alpha = 0.2
beta = 0.6
epsilon = 1-alpha -beta
s = 0.2
se = 0.005
g = 0.02
n = 0.02
phi = 0.8
delta = 0.08
def helpfun(se, eps, phi, n, g, beta):
    frac = (1/(1-se))**(eps + phi) * (1/((1+n)*(1+g)))**beta  # use the eps argument rather than the global epsilon
return (frac)
power_alpha = lambda z: z**alpha
depsav = lambda z: (s + (1-delta)*z)**(1-alpha)
Obj_ss = lambda zss: zss - helpfun(se, epsilon, phi, n, g, beta)*depsav(zss)*power_alpha(zss)
# Newton Method
z_start = 0.2;
zss_newton = opt.newton(Obj_ss, z_start, tol = 1.48e-08, rtol=4.5e-9, maxiter=100,
full_output=True, disp=True)
# Bisection
zmin = 0.001
zmax = 2
zss_bisect = opt.bisect(Obj_ss, zmin, zmax,args=(), xtol=1.48e-08,
rtol=4.5e-9, maxiter=100,
full_output=True, disp=True)
# Brent - q
zss_brentq = opt.brentq(Obj_ss, zmin, zmax,
args=(), xtol=1.48e-08,
rtol=4.5e-9, maxiter=100,
full_output=True, disp=True)
print(zss_newton)
print(zss_bisect)
print(zss_brentq)
# Brentq method
result = opt.root_scalar(Obj_ss,bracket=[0.01,10], method = 'brentq')
```
(1.92835954580555, converged: True
flag: 'converged'
function_calls: 7
iterations: 6
root: 1.92835954580555)
(1.9283595391735433, converged: True
flag: 'converged'
function_calls: 29
iterations: 27
root: 1.9283595391735433)
(1.9283595458055507, converged: True
flag: 'converged'
function_calls: 6
iterations: 5
root: 1.9283595458055507)
From the results we notice that all three root-finding methods converge to the same result (within an appropriate number of decimals). There is, however, a big difference in how fast the algorithms converge. The Newton and brentq methods converge at approximately the same speed, while the bisection method uses a vastly higher number of iterations. In this application the brentq method seems the fastest, but this may also be a result of starting values, interval size, etc.
**Solving the model symbolcally using the sympy packages**
In the following we solve for the steady state symbolically using the sympy package. We had to "help" with some of the derivations for it to work.
```python
z = sm.symbols('k')
se = sm.symbols('se')
epsilon = sm.symbols('epsilon')
phi = sm.symbols('phi')
beta = sm.symbols('beta')
n = sm.symbols('n')
g = sm.symbols('g')
s = sm.symbols('s')
delta = sm.symbols('delta')
alpha = sm.symbols('alpha')
frag = 1/((1 + n)*(1 + g)) # Fraction
V = ((1/(1-se))**(phi + epsilon)) * frag**beta # Terms which does not depend on z
steadystate = sm.Eq(V**(1/(1-alpha)) * (s + (1-delta)*z),z)
zss = sm.solve(steadystate,z)[0]
z = sm.symbols('z^*')
zequation = sm.Eq(z,zss)
zequation
```
This expression looks somewhat similar to the analytical expression for the model; we will compare them numerically below.
```python
# We save python function
z_ss_func = sm.lambdify((s,epsilon,se,n,g,phi,beta,delta,alpha),zss) # Steady state value
```
We will compare these results to the analytical solution:
$$z^*=\frac{s}{(1-s_E)^{\frac{\epsilon+\phi}{\beta+\epsilon}}[(1+n)(1+g)]^\frac{\beta}{\beta+\epsilon}-(1-\delta)} >0$$
```python
# Analytical solution and comparing all the results
alpha = 0.2
beta = 0.6
epsilon = 1-alpha -beta
s = 0.2
se = 0.005
g = 0.02
n = 0.02
phi = 0.8
delta = 0.08
def z_ss_analytical(s, se, epsilon, phi, n, g, beta, alpha, delta):
denominator = (1-se)**((epsilon+phi)/(beta+epsilon))*((1+n)*(1+g))**(beta/(beta + epsilon)) - (1-delta)
z_ss = s/denominator
return(z_ss)
z_ss_analytical = z_ss_analytical(s,se, epsilon, phi, n, g, beta, alpha, delta)
z_ss_sm = z_ss_func(s,epsilon,se,n,g,phi,beta,delta,alpha)
z_num = zss_brentq
print(f'''The steady-state value using the analytical expression is {z_ss_analytical:.4f},
The steady-state value using the sympy derieved expression is {z_ss_sm:.4f},
The steady-state value using numerical methods is {z_num[0]:.4f}''')
```
The steady-state value using the analytical expression is 1.9284,
The steady-state value using the sympy derieved expression is 1.9284,
The steady-state value using numerical methods is 1.9284
Both our "sympy" and numerical methods gives the same result as the analytical solution. This shows that the solution derived by sympy was the correct solution.
**Plotting the transition diagram**
In order to plot the transition diagram we need the values for $z_t$ and $z_{t+1}$, respectively. Steady state is found where $z_t = z_{t+1}$ and thus where the curve intersects the 45 degree line.
We start out by defining a function for the transition equation.
```python
def z_next_func(s,epsilon,se,n,g,phi,beta,delta,alpha,x):
z = (1/(1-se))**(epsilon+phi)*(1/((1+n)*(1+g)))**beta*(s+(1-delta)*x)**(1-alpha)*x**alpha
return z
```
```python
# Create a for loop to calculate the values of the transition equation in all T=200 periods.
x = np.zeros((T,1))
x[0] = 0.00
for t in range(0,T-1):
x[t+1] = z_next_func(s,epsilon,se,n,g,phi,beta,delta,alpha,x[t])
# Define the function "transition()" in order to create figure with the transition diagram
# and make it possible to create float sliders for specific variables.
def transition(s,epsilon,se,n,g,phi,beta,delta,alpha):
zmin = 0.0
zmax = z_ss_func(s,epsilon,se,n,g,phi,beta,delta,alpha)*1.2
z_set = np.linspace(zmin, zmax,100)
fig1 = plt.figure(figsize=(6.5,5),facecolor = 'white')
plt.plot(z_set,z_next_func(s,epsilon,se,n,g,phi,beta,delta,alpha,z_set),'k-', #transition function
z_set, z_set, 'r--', #45 degree line
z_ss_func(s,epsilon,se,n,g,phi,beta,delta,alpha), z_ss_func(s,epsilon,se,n,g,phi,beta,delta,alpha), 'ko', #st.st. point
(z_ss_func(s,epsilon,se,n,g,phi,beta,delta,alpha), z_ss_func(s,epsilon,se,n,g,phi,beta,delta,alpha)),
(z_ss_func(s,epsilon,se,n,g,phi,beta,delta,alpha),0),'k--') # Vertical dashed line at st.st.
plt.ylim(bottom=0) #let plot start at z_(t+1) = 0
plt.xlim(left=0) #let plot start at z_t = 0
plt.xlabel('$z_t$')
plt.ylabel('$z_{t+1}$')
plt.title('Transition diagram for the capital output ratio, $z_t$')
plt.show(fig1)
# Create float sliders
widgets.interact(transition,
s = widgets.FloatSlider(description='$s$',min=0, max=1, step=0.01, value=0.20),
epsilon = widgets.fixed(epsilon),
beta = widgets.fixed(beta),
alpha = widgets.fixed(alpha),
se = widgets.FloatSlider(description='$s_{E}$', min=0, max=0.01, step=0.001, value=0.005),
n = widgets.FloatSlider(description='$n$', min=0, max=1, step=0.001, value=0.01),
g = widgets.FloatSlider(description='$g$', min=0, max=1, step=0.001, value=0.02),
phi = widgets.FloatSlider(description='$\phi$', min=0, max=5, step=0.1, value=2),
delta = widgets.fixed(delta)
)
```
**An increase in the savings rate:** Capital accumulation increases, which raises both capital and output. The effect is larger on capital than on output, due to the (highly plausible) assumption that the output elasticity with respect to capital is less than 1. Hence, an increase in the savings rate increases the capital/output ratio in steady state.
**An increase in $s_E$:** An increase in $s_E$ increases the steady state value of the capital/output ratio. An increase in $s_E$ means that a greater fraction of the remaining stock of fossil fuels is used each period, which initially increases production through a higher energy input.
<br>**An increase in $n$:** An increase in population growth lowers the steady state value.
<br>**An increase in $g$:** An increase in technology growth lowers the steady state value of the capital/output ratio. This follows from the fact that less capital is needed in order to produce a given amount of output.
<br>**An increase in $\phi$:** An increase in $\phi$ increases the steady state value of $z$. These comparative statics are checked numerically in the small sketch below.
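The following cell is a quick numerical check of these effects. It simply re-evaluates the analytical steady-state expression under small parameter changes; the helper name `z_star` and the bump sizes are illustrative choices, not part of the original model code.
```python
# Quick numerical check of the comparative statics above, re-using the
# analytical steady-state expression (helper name and bump sizes are
# illustrative; baseline values are the ones chosen earlier).
def z_star(s, se, n, g, phi, beta, epsilon, delta):
    denom = (1-se)**((epsilon+phi)/(beta+epsilon)) * ((1+n)*(1+g))**(beta/(beta+epsilon)) - (1-delta)
    return s/denom

base = dict(s=0.2, se=0.005, n=0.02, g=0.02, phi=0.8, beta=0.6, epsilon=0.2, delta=0.08)
for par, bump in [('s', 0.05), ('se', 0.002), ('n', 0.02), ('g', 0.02), ('phi', 0.2)]:
    hi = dict(base); hi[par] += bump
    print(f'higher {par}: z* goes from {z_star(**base):.3f} to {z_star(**hi):.3f}')
```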
**New damage function and the prospect of long-term growth** <br>
Until now it has been possible to have long-term growth in both the output-labor ratio and the capital-labor ratio, and we have therefore been able to find a steady state capital-output ratio $z_{ss}$. Now we alter the damage function and introduce a parameter $\bar \rho$. This parameter reflects a "tipping point" in the damage function: when this point is surpassed, production halts completely! Formally we define:
\begin{equation}
D_t = \left( \frac{ \left( \frac{R_t}{R_0}\right) - \bar \rho }{1-\bar \rho} \right) ^\phi , \qquad \phi > 0 \qquad 0<\bar \rho < 1
\end{equation}
with $\bar \rho R_0 \leq R_t \leq R_0$.
```python
## Second damage function
def Damagefunc2(R0,RT,phi,rho):
D = (((RT/R0)-rho)/(1-rho))**phi
return (D)
def plot_func(phi,rho):
R0 = 100;
R_series = np.linspace(R0*rho,R0,100)
D_series = np.zeros(100)
for i in range(0,100):
D_series[i] = Damagefunc2(R0, R_series[i],phi,rho)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_title("Transition of $D_t$ for different values of phi")
ax.set_xlabel('$R_T$ - stock left')
ax.set_ylabel('Damage function')
ax.set_xlim([0, 100])
ax.set_ylim([0, 1])
x1 = np.linspace(*ax.get_xlim())
x2 = np.linspace(*ax.get_ylim())
ax.plot(R_series,D_series,'b')
interact(plot_func, phi = widgets.FloatSlider(value=1, min=0.05, max=4, step=0.05), rho = widgets.FloatSlider(value=0.5, min=0.1, max=1, step=0.05))
```
The figure illustrates that the prospect of long-term growth is highly dependent on the nature of the climate externality. In the first case, long-term growth was possible even though the value of the damage function converges to zero, because output tended to infinity as $D_t$ tended to zero. In the second case, however, the tipping point means that the damage function hits zero in a certain year, at which point production stops and long-term growth is no longer possible.
**Concluding remarks**
In this project we have investigated how incorporating fossil fuels and an associated damage function alters the classic results from the general Solow model with technology. We have shown that balanced growth is attainable even in the presence of a climate externality. The assumptions underlying the damage function are however vital for this conclusion. A damage function that builds on the assumption that the damage effect has a gradual impact on production is compatible with balanced growth while the introduction of a "tipping point" in the damage function is not.
Naturally, this is of great importance when considering policy implications.
Source: `modelproject/modelproject/Model project.ipynb` from NumEconCopenhagen/projects-2019-hoj-pa-lerner-indekset (Jupyter Notebook, MIT license)
# Extended Kalman filter for 3 DOF linear model
An Extended Kalman filter with a 3 DOF linear model as the predictor will be developed.
The filter is run on simulated data as well as real model test data.
```python
%load_ext autoreload
%autoreload 2
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from numpy.linalg import inv
import sympy as sp
import src.visualization.book_format as book_format
book_format.set_style()
from src.substitute_dynamic_symbols import lambdify
from sympy import Matrix
from sympy.physics.mechanics import (dynamicsymbols, ReferenceFrame,
Particle, Point)
from IPython.display import display, Math, Latex
from src.substitute_dynamic_symbols import run, lambdify
from sympy.physics.vector.printing import vpprint, vlatex
from src.data import mdl
from src.extended_kalman_filter import extended_kalman_filter, rts_smoother
import src.models.vmm_nonlinear_EOM as vmm
from docs.book.example_1 import ship_parameters, df_parameters
from src.symbols import *
from src import prime_system
p = df_parameters["symbol"]
from src.visualization.plot import track_plot, plot
import matplotlib.pyplot as plt
import os
if os.name == 'nt':
plt.style.use('../docs/book/book.mplstyle') # Windows
```
## 3DOF model
```python
X_eq = vmm.X_eq
Y_eq = vmm.Y_eq
N_eq = vmm.N_eq
A, b = sp.linear_eq_to_matrix([X_eq, Y_eq, N_eq], [u1d, v1d, r1d])
acceleration = sp.matrices.MutableDenseMatrix([u1d,v1d,r1d])
eq_simulator = sp.Eq(sp.UnevaluatedExpr(A)*sp.UnevaluatedExpr(acceleration),sp.UnevaluatedExpr(b))
eq_simulator
```
```python
A_inv = A.inv()
S = sp.symbols('S')
eq_S=sp.Eq(S,-sp.fraction(A_inv[1,1])[1])
A_inv_S = A_inv.subs(eq_S.rhs,S)
eq_acceleration_matrix_clean = sp.Eq(sp.UnevaluatedExpr(acceleration),sp.UnevaluatedExpr(A_inv_S)*sp.UnevaluatedExpr(b))
Math(vlatex(eq_acceleration_matrix_clean))
```
```python
u1d_function = sp.Function(r'\dot{u}')(u,v,r,delta)
v1d_function = sp.Function(r'\dot{v}')(u,v,r,delta)
r_function = sp.Function(r'\dot{r}')(u,v,r,delta)
subs_prime = [
(m,m/prime_system.df_prime.mass.denominator),
(I_z,I_z/prime_system.df_prime.inertia_moment.denominator),
(x_G,x_G/prime_system.df_prime.length.denominator),
(u, u/sp.sqrt(u**2+v**2)),
(v, v/sp.sqrt(u**2+v**2)),
(r, r/(sp.sqrt(u**2+v**2)/L)),
]
subs = [
(X_D, vmm.X_qs_eq.rhs),
(Y_D, vmm.Y_qs_eq.rhs),
(N_D, vmm.N_qs_eq.rhs),
]
subs = subs + subs_prime
A_SI = A.subs(subs)
b_SI = b.subs(subs)
x_dot = sp.matrices.dense.matrix_multiply_elementwise(A_SI.inv()*b_SI,
sp.Matrix([(u**2+v**2)/L,(u**2+v**2)/L,(u**2+v**2)/(L**2)]))
```
```python
x_dot
```
```python
x_dot[1].args[2]
```
```python
x_ = sp.Matrix([u*sp.cos(psi)-v*sp.sin(psi),
u*sp.sin(psi)+v*sp.cos(psi),
r])
f_ = sp.Matrix.vstack(x_, x_dot)
subs = {value: key for key, value in p.items()}
subs[psi] = sp.symbols('psi')
lambda_f = lambdify(f_.subs(subs))
```
```python
import inspect
lines = inspect.getsource(lambda_f)
print(lines)
```
```python
from src.models.vmm import get_coefficients
eq = sp.Eq(sp.UnevaluatedExpr(acceleration), f_[5])
get_coefficients(eq)
```
```python
parameters=df_parameters['prime'].copy()
```
```python
%%time
lambda_f(**parameters)
```
```python
%%time
expr=f_.subs(subs)
lambda_f = lambdify(expr)
```
```python
subs = {value: key for key, value in p.items()}
keys = list(set(subs.keys()) & f_.free_symbols)
subs = {key : subs[key] for key in keys}
expr=f_.subs(subs)
```
```python
subs
```
```python
%%time
subs[psi] = sp.symbols('psi')
expr=f_.subs(subs)
sp.lambdify(list(expr.free_symbols), expr)
```
```python
sp.lambdify(list(f_.free_symbols), f_)
```
```python
lambda_f
```
```python
def lambda_f_constructor(parameters, ship_parameters):
def f(x, input):
psi=x[2]
u=x[3]
v=x[4]
r=x[5]
x_dot = run(lambda_f, **parameters, **ship_parameters, psi=psi, u=u, v=v, r=r, **input).reshape(x.shape)
return x_dot
return f
```
## Simulation
```python
def time_step(x_,u_):
psi=x_[2]
u=x_[3]
v=x_[4]
r=x_[5]
delta = u_
x_dot = run(lambda_f, **parameters, **ship_parameters, psi=psi, u=u, v=v, r=r, delta=delta).flatten()
return x_dot
def simulate(x0,E, ws, t, us):
simdata = np.zeros((6,len(t)))
x_=x0
Ed = h_ * E
for i,(u_,w_) in enumerate(zip(us,ws)):
w_ = w_.reshape(len(w_),1)
x_dot = lambda_f_(x_,u_) + Ed @ w_
x_=x_ + h_*x_dot
simdata[:,i] = x_.flatten()
df = pd.DataFrame(simdata.T, columns=["x0","y0","psi","u","v","r"], index=t)
df.index.name = 'time'
df['delta'] = us
return df
```
```python
parameters=df_parameters['prime'].copy()
lambda_f_ = lambda_f_constructor(parameters=parameters,
ship_parameters=ship_parameters)
N_ = 4000
t_ = np.linspace(0,50,N_)
h_ = float(t_[1]-t_[0])
us = np.deg2rad(30*np.concatenate((-1*np.ones(int(N_/4)),
1*np.ones(int(N_/4)),
-1*np.ones(int(N_/4)),
1*np.ones(int(N_/4)))))
x0_ = np.array([[0,0,0,3,0,0]]).T
no_states = len(x0_)
np.random.seed(42)
E = np.array([
[0,0,0],
[0,0,0],
[0,0,0],
[1,0,0],
[0,1,0],
[0,0,1],
],
)
process_noise_u = 0.01
process_noise_v = 0.01
process_noise_r = np.deg2rad(0.01)
ws = np.zeros((N_,3))
ws[:,0] = np.random.normal(loc=process_noise_u, size=N_)
ws[:,1] = np.random.normal(loc=process_noise_v, size=N_)
ws[:,2] = np.random.normal(loc=process_noise_r, size=N_)
df = simulate(x0=x0_, E=E, ws=ws, t=t_, us=us)
```
```python
a = np.array([1,2,3])
M = np.array([[1,1,1],[1,1,1]])
a@M.T
```
```python
w_ = ws[0]
E@w_+w_.reshape(3,1)
```
```python
track_plot(
df=df,
lpp=ship_parameters["L"],
beam=ship_parameters["B"],
color="green",
);
plot({'Simulation':df});
```
## Kalman filter
Implementation of the Kalman filter. The code is inspired by this Matlab implementation: [ExEKF.m](https://github.com/cybergalactic/MSS/blob/master/mssExamples/ExEKF.m).
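Before running the filter, the cell below sketches the predictor/corrector step that such a filter performs. This is only an illustrative, self-contained NumPy version, not the project implementation in `src.extended_kalman_filter`; the argument names mirror the variables defined further down, and the state/measurement vectors are assumed to be 1-D arrays.
```python
import numpy as np

def ekf_step(x_prd, P_prd, y, u, f, jac, Cd, E, Qd, Rd, h):
    """One discrete-time EKF corrector + predictor step (illustrative sketch only)."""
    # corrector: Kalman gain and measurement update
    S = Cd @ P_prd @ Cd.T + Rd                    # innovation covariance
    K = P_prd @ Cd.T @ np.linalg.inv(S)           # Kalman gain
    IKC = np.eye(len(x_prd)) - K @ Cd
    x_hat = x_prd + K @ (y - Cd @ x_prd)          # corrected state estimate
    P_hat = IKC @ P_prd @ IKC.T + K @ Rd @ K.T    # corrected covariance (Joseph form)
    # predictor: Euler-discretized model and linearized covariance propagation
    Ad = jac(x_hat, u)                            # discrete transition Jacobian
    Ed = h * E
    x_prd_next = x_hat + h * f(x_hat, u)
    P_prd_next = Ad @ P_hat @ Ad.T + Ed @ Qd @ Ed.T
    return x_hat, P_hat, x_prd_next, P_prd_next
```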
```python
x, x1d = sp.symbols(r'\vec{x} \dot{\vec{x}}') # State vector
h = sp.symbols('h')
u_input = sp.symbols(r'u_{input}') # input vector
w_noise = sp.symbols(r'w_{noise}') # input vector
f = sp.Function('f')(x,u_input,w_noise)
eq_system = sp.Eq(x1d, f)
eq_system
```
```python
eq_x = sp.Eq(x, sp.UnevaluatedExpr(sp.Matrix([x_0, y_0, psi, u, v, r])))
eq_x
```
```python
jac = sp.eye(6,6) + f_.jacobian(eq_x.rhs.doit())*h
subs = {value: key for key, value in p.items()}
subs[psi] = sp.symbols('psi')
lambda_jacobian = lambdify(jac.subs(subs))
```
```python
lambda_jacobian
```
```python
def lambda_jacobian_constructor(parameters, ship_parameters, h):
def f(x, input):
psi=x[2]
u=x[3]
v=x[4]
r=x[5]
jacobian = run(lambda_jacobian, **parameters, **ship_parameters, psi=psi, u=u, v=v, r=r, h=h, **input)
return jacobian
return f
```
```python
lambda_jacobian_ = lambda_jacobian_constructor(parameters=parameters,
ship_parameters=ship_parameters, h=h_)
```
```python
df_measure = df.copy()
measurement_noise_psi_max = 3
measurement_noise_psi = np.deg2rad(measurement_noise_psi_max/3)
epsilon_psi = np.random.normal(scale=measurement_noise_psi, size=N_)
measurement_noise_xy_max=2
measurement_noise_xy = measurement_noise_xy_max/3
epsilon_x0 = np.random.normal(scale=measurement_noise_xy, size=N_)
epsilon_y0 = np.random.normal(scale=measurement_noise_xy, size=N_)
df_measure['psi'] = df['psi'] + epsilon_psi
df_measure['x0'] = df['x0'] + epsilon_x0
df_measure['y0'] = df['y0'] + epsilon_y0
```
```python
P_prd = np.diag([0.1, 0.1, np.deg2rad(0.01), 0.001, 0.001, np.deg2rad(0.001)])
Qd = np.diag([0.01, 0.01, np.deg2rad(0.1)]) #process variances: u,v,r
Rd = h_*np.diag([measurement_noise_xy**2, measurement_noise_xy**2, measurement_noise_psi**2]) #measurement variances: x0,y0,psi
ys = df_measure[['x0','y0','psi']].values
x0_ = np.array([[0,0,0,3,0,0]]).T
Cd = np.array([
[1, 0, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0],
[0, 0, 1, 0, 0, 0],
])
E = np.array([
[0,0,0],
[0,0,0],
[0,0,0],
[1,0,0],
[0,1,0],
[0,0,1],
],
)
time_steps = extended_kalman_filter(
P_prd=P_prd,
lambda_f=lambda_f_,
lambda_jacobian=lambda_jacobian_,
#h=h_,
#us=us,
#ys=ys,
E=E,
Qd=Qd,
Rd=Rd,
Cd=Cd, data=df_measure)
x_hats = np.array([time_step["x_hat"].flatten() for time_step in time_steps]).T
time = np.array([time_step["time"] for time_step in time_steps]).T
Ks = np.array([time_step["K"] for time_step in time_steps]).T
variances = np.array([np.diagonal(time_step["P_hat"]) for time_step in time_steps]).T
stds = np.sqrt(variances)
```
```python
keys = ['x0','y0','psi','u','v','r']
fig,ax=plt.subplots()
for i,key in enumerate(keys):
ax.plot(time, variances[i,:], label=key)
ax.legend()
ax.set_ylabel('variance')
ax.set_xlabel('time [s]')
ax.set_ylim(0,10*np.max(variances[:,-1]))
```
```python
df_kalman = pd.DataFrame(data=x_hats.T, index=time, columns=['x0','y0','psi','u','v','r'])
df_kalman['delta'] = us
dataframes = {
'Measurement' : df_measure,
'Kalman filter' : df_kalman,
'Simulation' : df,
}
fig,ax=plt.subplots()
styles = {
'Measurement' : {
'linestyle' : '',
'marker' : '.',
'ms' : 1,
},
'Kalman filter' : {
'lw' : 2,
},
'Simulation' : {
'lw' : 1,
'linestyle' : ':',
},
}
for label,df_ in dataframes.items():
track_plot(
df=df_,
lpp=ship_parameters["L"],
beam=ship_parameters["B"],
ax=ax,
label=label,
plot_boats=False,
**styles.get(label,{})
);
ax.legend()
plot(dataframes = dataframes, fig_size=(10,15), styles = ['-','-',':']);
```
# Real data
Using the developed Kalman filter on some real model test data
## Load test
```python
id=22773
df, units, meta_data = mdl.load(dir_path = '../data/raw', id=id)
df.index = df.index.total_seconds()
df.index-=df.index[0]
df['x0']-=df.iloc[0]['x0']
df['y0']-=df.iloc[0]['y0']
df['psi']-=df.iloc[0]['psi']
```
```python
fig,ax=plt.subplots()
fig.set_size_inches(10,10)
track_plot(df=df, lpp=meta_data.lpp, x_dataset='x0', y_dataset='y0', psi_dataset='psi', beam=meta_data.beam, ax=ax);
```
```python
sp.simplify(sp.Matrix([
[sp.cos(psi), -sp.sin(psi)],
[sp.sin(psi), sp.cos(psi)]]).inv())
```
```python
from numpy import cos as cos
from numpy import sin as sin
from src.data.lowpass_filter import lowpass_filter
df_lowpass = df.copy()
t = df_lowpass.index
ts = np.mean(np.diff(t))
fs = 1/ts
position_keys = ['x0','y0','psi']
for key in position_keys:
df_lowpass[key] = lowpass_filter(data=df_lowpass[key], fs=fs, cutoff=1, order=1)
df_lowpass['x01d_gradient'] = x1d_ = np.gradient(df_lowpass['x0'], t)
df_lowpass['y01d_gradient'] = y1d_ = np.gradient(df_lowpass['y0'], t)
df_lowpass['r'] = r_ = np.gradient(df_lowpass['psi'], t)
psi_ = df_lowpass['psi']
df_lowpass['u'] = x1d_*cos(psi_) + y1d_*sin(psi_)
df_lowpass['v'] = -x1d_*sin(psi_) + y1d_*cos(psi_)
velocity_keys = ['u','v','r']
for key in velocity_keys:
df_lowpass[key] = lowpass_filter(data=df_lowpass[key], fs=fs, cutoff=1, order=1)
```
```python
x1d_[0:10]
```
```python
for key in position_keys + velocity_keys:
fig,ax=plt.subplots()
fig.set_size_inches(12,3)
df_lowpass.plot(y=key, ax=ax, zorder=-10, label='filter')
if key in df:
df.plot(y=key, ax=ax, label='raw')
ax.set_ylabel(key)
```
```python
data = df.copy()
data['u'] = df_lowpass['u']
data['v'] = df_lowpass['v']
data['r'] = df_lowpass['r']
data=data.iloc[200:-100]
data.index-=data.index[0]
P_prd = np.diag([0.1, 0.1, np.deg2rad(0.01), 0.001, 0.001, np.deg2rad(0.001)])
Qd = np.diag([0.01, 0.01, np.deg2rad(0.1)]) #process variances: u,v,r
Cd = np.array([
[1, 0, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0],
[0, 0, 1, 0, 0, 0],
])
E = np.array([
[0,0,0],
[0,0,0],
[0,0,0],
[1,0,0],
[0,1,0],
[0,0,1],
],
)
ys = data[['x0','y0','psi']].values
h_m = h_ = np.mean(np.diff(data.index))
x0_ = np.concatenate((
data.iloc[0][['x0','y0','psi']].values,
data.iloc[0][['u','v','r']].values))
us = data['delta'].values
error_max_pos = 0.05
sigma_pos = error_max_pos/3
variance_pos = sigma_pos**2
error_max_psi = np.deg2rad(0.5)
sigma_psi = error_max_psi/3
variance_psi = sigma_psi**2
Rd = np.diag([variance_pos, variance_pos, variance_psi])
time_steps = extended_kalman_filter(
no_states=6,
no_measurement_states=3,
x0=x0_,
P_prd=P_prd,
lambda_f=lambda_f_,
lambda_jacobian=lambda_jacobian_,
h=h_,
us=us,
ys=ys,
E=E,
Qd=Qd,
Rd=Rd,
Cd=Cd)
x_hats = np.array([time_step["x_hat"].flatten() for time_step in time_steps]).T
time = np.array([time_step["time"] for time_step in time_steps]).T
Ks = np.array([time_step["K"] for time_step in time_steps]).T
variances = np.array([np.diagonal(time_step["P_hat"]) for time_step in time_steps]).T
stds = np.sqrt(variances)
```
```python
keys = ['x0','y0','psi','u','v','r']
fig,ax=plt.subplots()
for i,key in enumerate(keys):
ax.plot(time, variances[i,:], label=key)
ax.legend()
ax.set_ylabel('variance')
ax.set_xlabel('time [s]')
ax.set_ylim(0,3*np.max(variances[:,-1]))
```
```python
df_kalman = pd.DataFrame(data=x_hats.T, index=time, columns=['x0','y0','psi','u','v','r'])
df_kalman['delta'] = us
for key in ['u','v','r']:
df_kalman[f'{key}1d'] = np.gradient(df_kalman[key], df_kalman.index)
dataframes = {
'Measurement' : data,
'Kalman filter' : df_kalman,
}
fig,ax=plt.subplots()
styles = {
'Measurement' : {
'linestyle' : '',
'marker' : '.',
'ms' : 1,
'zorder':-10,
},
'Kalman filter' : {
'lw' : 2,
},
}
for label,df_ in dataframes.items():
track_plot(
df=df_,
lpp=ship_parameters["L"],
beam=ship_parameters["B"],
ax=ax,
label=label,
plot_boats=False,
**styles.get(label,{})
);
ax.legend()
plot(dataframes = dataframes,
fig_size=(10,15),
styles = ['-','-',':'],
keys=['x0','y0','psi','u','v','r','u1d','v1d','r1d']);
```
## RTS smoother
```python
smooth_time_steps = rts_smoother(
time_steps=time_steps,
us=us,
lambda_jacobian=lambda_jacobian_,
Qd=Qd,
lambda_f=lambda_f_, E=E,
)
## Post process rts smoother:
x_hats = np.array(
[time_step["x_hat"].flatten() for time_step in smooth_time_steps]
).T
time = np.array([time_step["time"] for time_step in smooth_time_steps]).T
df_rts = pd.DataFrame(data=x_hats.T, index=time, columns=['x0','y0','psi','u','v','r'])
df_rts["delta"] = us
for key in ['u','v','r']:
df_rts[f'{key}1d'] = np.gradient(df_rts[key], df_kalman.index)
```
```python
dataframes = {
'Measurement' : data,
'Kalman filter' : df_kalman,
'RTS': df_rts,
}
fig,ax=plt.subplots()
styles = {
'Measurement' : {
'linestyle' : '',
'marker' : '.',
'ms' : 1,
'zorder':-10,
},
'Kalman filter' : {
'lw' : 2,
},
}
for label,df_ in dataframes.items():
track_plot(
df=df_,
lpp=ship_parameters["L"],
beam=ship_parameters["B"],
ax=ax,
label=label,
plot_boats=False,
**styles.get(label,{})
);
ax.legend()
plot(dataframes = dataframes,
fig_size=(10,15),
styles = ['r-','g-','b-'],
keys=['x0','y0','psi','u','v','r','u1d','v1d','r1d']);
```
```python
data['thrust'] = data['Prop/PS/Thrust'] + data['Prop/SB/Thrust']
df_rts['thrust'] = data['thrust'].values
df_rts.to_csv('test.csv')
```
```python
smooth_time_steps[100]['P_hat']
```
```python
variances = np.array([np.diagonal(time_step["P_hat"]) for time_step in smooth_time_steps]).T
stds = np.sqrt(variances)
```
```python
keys = ['x0','y0','psi','u','v','r']
fig,ax=plt.subplots()
for i,key in enumerate(keys):
ax.plot(time, variances[i,:], label=key)
ax.legend()
ax.set_ylabel('variance')
ax.set_xlabel('time [s]')
ax.set_ylim(0,3*np.max(variances[:,-1]))
```
```python
from scipy.stats import multivariate_normal
```
```python
likelihoods = np.zeros(len(time_steps))
for n,smooth_time_step in enumerate(smooth_time_steps):
cov = smooth_time_step['P_hat']
mean = smooth_time_step['x_hat'].flatten()
rv = multivariate_normal(mean=mean, cov=cov)
likelihoods[n] = rv.pdf(x=mean)
```
```python
fig,ax=plt.subplots()
ax.plot(likelihoods)
#ax.set_ylim(2.5*10**14,3*10**14)
```
```python
```
Source: `notebooks/15.47_EKF_3DOF.ipynb` from martinlarsalbert/wPCC (Jupyter Notebook, MIT license)
# Random Variables
$\newcommand{\ffrac}{\displaystyle \frac}
\newcommand{\Tran}[1]{{#1}^{\mathrm{T}}}
\newcommand{\d}[1]{\displaystyle{#1}}
\newcommand{\EE}[2][\,\!]{\mathbb{E}_{#1}\left[#2\right]}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\Var}[2][\,\!]{\mathrm{Var}_{#1}\left[#2\right]}
\newcommand{\Cov}[2][\,\!]{\mathrm{Cov}_{#1}\left(#2\right)}
\newcommand{\Corr}[2][\,\!]{\mathrm{Corr}_{#1}\left(#2\right)}
\newcommand{\using}[1]{\stackrel{\mathrm{#1}}{=}}
\newcommand{\I}[1]{\mathrm{I}\left( #1 \right)}
\newcommand{\N}[1]{\mathrm{N} \left( #1 \right)}
\newcommand{\space}{\text{ }}
\newcommand{\QQQ}{\boxed{?\:}}
\newcommand{\SB}[1]{\left[ #1 \right]}
\newcommand{\P}[1]{\left( #1 \right)}
\newcommand{\abs}[1]{\left| #1 \right|}
\newcommand{\norm}[1]{\left\| #1 \right\|}
\newcommand{\CB}[1]{\left\{ #1 \right\}}$During an experiment, the quantities of interest, i.e. the real-valued functions defined on the sample space, are known as ***Random Variables***, shortened as $r.v.$. Since the value of such a variable is determined by the outcome of the experiment, we shall assign probabilities to its possible values.
A special case for this is the ***Indicator Variable***, denoted as $I$ for an event $E$.
$$I = \begin{cases}
1 & \text{if } E \text{ occurs} \\
0 & \text{if } E \text{ doesn't occur}
\end{cases}$$
**e.g.**
Suppose that independent trials, each of which results in any of $m$ possible outcomes with respective probabilities $p_1, \dots, p_m$, $\sum p_i = 1$, are continually performed. Let $X$ denote the number of trials needed until each outcome has occurred at least once. What is $P\CB{X = n}$?
>Instead of solving that directly, we first calculate $P\CB{X > n}$. Let $A_i$ denote the event that outcome $i$ has not yet occurred after the first $n$ trials, $i = 1, \dots, m$.
>
>$\begin{align}
P\left\{X > n\right\} &= P\left( \bigcup_{i=1}^{m} A_i \right) \\
&= \sum_{i=1}^{m} P(A_i) - \underset{i<j}{\sum\sum} P(A_iA_j) \\
& \;\;\; + \underset{i<j<k}{\sum\sum\sum} P(A_iA_jA_k) - \cdots + (-1)^{m+1}P(A_1 A_2 \cdots A_m)
\end{align}$
>
>Now, $P(A_i)$ is the probability that each of the first $n$ trials results in a $\text{non-}i$ outcome, and so by independence
>
>$P(A_i) = (1 - p_i)^{n}$
>
>And similarly, $P(A_iA_j)$ is the probability that the first $n$ trials all result in a $\text{non-}i$ and $\text{non-}j$ outcome, and so
>
>$P(A_iA_j) = (1-p_i - p_j)^{n}$
>
>As all of the other possibilities are similar, we see that
>
>$\begin{align}
P\left\{X>n\right\} = \sum_{i=1}^{m}(1-p_i)^n - \underset{i<j}{\sum\sum} (1-p_i - p_j)^n + \underset{i<j<k}{\sum\sum\sum} (1 - p_i - p_j - p_k)^n - \cdots
\end{align}$
>
>Since $P\left\{X=n\right\} = P\left\{X>n-1\right\} -P\left\{X>n\right\}$, and $(1-a)^{n-1} - (1-a)^ n = a(1-a)^{n-1}$ that
>
>$\begin{align}
P\left\{X=n\right\} &= \sum_{i=1}^{m} p_i (1-p_i)^{n-1} - \underset{i<j}{\sum\sum} (p_i + p_j) (1-p_i - p_j)^{n-1} \\
&\;\;\;+ \underset{i<j<k}{\sum\sum\sum} (p_i+p_j+p_k)(1 - p_i - p_j - p_k)^{n-1} - \cdots
\end{align}$
***
Besides such **discrete** $r.v.$s we also have **continuous** $r.v.$s, such as the lifetime of a car.
We also define the ***culmulative distribution function***, $F(\cdot)$, of the $r.v.$ $X$, on any real number $b$, $-\infty < b < \infty$, by $F(b) = P\left\{X \leq b\right\}$. And some properties of the cdf $F$ are:
- $F(b)$ is nondecreasing function of $b$.$\\[0.7em]$
- $\lim\limits_{b \to \infty} F(b) = F(\infty) = 1\\[0.7em]$
- $\lim\limits_{b \to -\infty} F(b) = F(-\infty) = 0$
Also, we have: $P\left\{a < X \leq b\right\} = F(b) - F(a)$ for all $a < b$. And for $P\left\{X<b\right\}$, we need a new strategy:
$$\begin{align}
P\left\{X<b\right\}&= \lim_{h \to 0^+} P\left\{X \leq b-h\right\}\\
&= \lim_{h \to 0^+} F(b-h)
\end{align}$$
just keep in mind that $P\left\{X<b\right\}$ *may not* equal $F(b)$.
# Discrete $r.v.$
A $r.v.$ that can take on at most a **countable** number of possible values is said to be ***discrete***, say $X$. We can define its ***probability mass function*** $p(a)$ as: $p(a) = P\left\{X = a\right\}$.
Easy to find that $p(a)$ is **positive** for at most a countable number of values of $a$. So if $X$ must assume one of the values $x_1, x_2, \dots$, then $p(x_i) > 0$ for $i = 1, 2, \dots$ and $p(x_i)=0$ for all other values of $x$.
Direct conclusions would be $\sum_{i=1}^{\infty}p(x_i) = 1$ and $F(a) = \sum_{x_i \leq a}p(x_i)$
## The Bernoulli $r.v.$
For those $r.v.$ with the probability mass function defined as
$$
\begin{cases}
p(0) = P\left\{X=0\right\} = 1-p\\[0.5em]
p(1) = P\left\{X=1\right\} = p
\end{cases}
$$
where $0 < p < 1$, namely, the probability of successful trial.
## The Binomial $r.v.$
Suppose that $n$ independent trials are performed, each of which results in a success with probability $p$ and a failure with probability $1-p$. Let $X$ denote the **number of successes** that occur in the $n$ trials; then $X$ is said to be a ***Binomial*** $r.v.$ with **parameters** $(n,p)$.
It's probability mass function is given by $p(i) = \d{\binom{n} {i}}p^i(1-p)^{n-i}$ for $i=0,1,\dots,n$, and it's easy to verify that this holds:
$$\sum_{i=0}^{\infty}p(i) = \sum_{i=0}^{n} \binom{n} {i}p^i(1-p)^{n-i} = (p+(1-p))^{n} = 1$$
## The Geometric $r.v.$
Suppose that independent trials, each having probability $p$ of being a success and denote $X$ as the number of trials required until the first success. Then this $X$ is said to be a ***geometric*** $r.v.$ with parameter $p$. It's probability mass function is given by $p(n) = P\left\{X=n\right\} = (1-p)^{n-1}p$ for $n = 1,2,\dots$
And it's easy to verify that $\sum\limits_{n=1}^{\infty} p(n) = p\sum\limits_{n=1}^{\infty} (1-p)^{n-1} = 1$
## The Poisson Random Variable
For $r.v.$ $X$, taking on one of the values $i = 0,1,\dots$ with probability mass function given by
$$p(i) = P\left\{X=i\right\} = e^{-\lambda} \frac{\lambda^i} {i!}$$
And it's easy to verify that $\sum\limits_{i=0}^{\infty} p(i) = e^{-\lambda} \sum\limits_{i=0}^{\infty} \ffrac{\lambda^i} {i!} = e^{-\lambda}e^{\lambda} = 1$
One important application is to **approximate** a *binomial* $r.v.$, with large $n$ and small $p$.
$$\begin{align}
P_{\text{binom}}\left\{X=i\right\} &= \binom{n} {i} p^i (1-p)^{n-i} \\
&= \frac{n!} {(n-i)!i!} \left( \frac{\lambda} {n} \right)^i \left( 1 - \frac{\lambda} {n} \right)^{n-i} \\[0.6em]
&= \frac{n(n-1)(n-2) \cdots (n-i+1)} {n!} \frac{\lambda^i} {i!} \frac{(1-\lambda/n)^n} {(1-\lambda/n)^i} \\
& \approx 1 \cdot \frac{\lambda^i} {i!} \cdot \frac{e^{-\lambda}} {1} = P_{\text{poisson}}\left\{X=i\right\}
\end{align}$$
$Remark$
- $0!=1$
- For a Poisson distributed $X$, $\EE{X} = \lambda$
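As a quick numerical illustration of this approximation, the snippet below compares the two probability mass functions with `scipy.stats`; the values $n=100$, $p=0.03$ are arbitrary choices, not taken from the text.
```python
# Comparing binomial(n, p) with Poisson(lambda = n*p) for large n, small p
from scipy.stats import binom, poisson

n, p = 100, 0.03
lam = n * p
for i in range(6):
    print(f'i={i}:  binomial {binom.pmf(i, n, p):.4f}   Poisson {poisson.pmf(i, lam):.4f}')
```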
# Continuous $r.v.$
Now the $r.v.$, again call it $X$, can take on an uncountable set of values. We say $X$ is a ***continuous*** $r.v.$ if there exists a **nonnegative** function $f(x)$, defined for all real $x \in (-\infty, \infty)$, having the property that for any set $B$ of real numbers,
$$P\left\{X \in B\right\} = \int_{B} f(x) \;\dd{x}$$
here $f(x)$ is the ***probability density function*** of $X$. It must satisfy $P\CB{X \in \left( -\infty, \infty\right)} = \d{\int_{-\infty}^{\infty} f(x)\;\dd{x} = 1}$. And one funny thing about this is that for any *particular value* assumed by $X$, say $a$, $P\CB{X = a} = \d{\int_{a}^{a}f(x)\;\dd{x}}=0$.
Also, we can use this to define the cumulative distribution $F(\cdot)$: $F(a) = \d{\int_{-\infty}^{a} f(x) \;\dd{x}}$, then we can differentiate both sides and it yields: $\ffrac{\dd{}} {\dd{a}}F(a) = f(a)$.
## The Uniform Random Variable
A $r.v.$ is said to be ***uniformly distributed*** over the interval $(0,1)$ if its pdf is given by
$$f(x) = \begin{cases}
1 & \text{if } 0 < x < 1 \\[0.6em]
0 & \text{otherwise}
\end{cases}$$
And in general, we say that $X$ is a uniform random variable on the interval $(\alpha, \beta)$ if its pdf is given by
$$f(x) = \begin{cases}
\ffrac{1} {\beta - \alpha} & \text{if } \alpha < x < \beta \\[0.6em]
0 & \text{otherwise}
\end{cases}$$
## Exponential Random Variables
A continuous $r.v.$ whose pdf is given, for some $\lambda > 0$, by,
$$f(x) = \begin{cases}
\lambda e ^{-\lambda x} & \text{if }x\geq 0 \\[0.6em]
0 & \text{if } x<0
\end{cases}$$
is said to be an ***exponential*** $r.v.$ with parameter $\lambda$. And for its cdf, we have
$$F(a) = \begin{cases}
\d{\int_{0}^{a} \lambda e ^{-\lambda x} \;\dd{x}} = 1 - e^{-\lambda a} & \text{if } a\geq 0 \\[0.6em]
0 & \text{if } a<0
\end{cases}$$
And also it's easy to verify that $F(\infty) = \d{\int_{0}^{\infty} \lambda e ^{-\lambda x} \;\dd{x} = 1}$
## Gamma Random Variables
A continuous $r.v.$ whose pdf is given, for some $\lambda > 0$ and $\alpha > 0$, by
$$f(x) = \begin{cases}
\ffrac{\lambda e ^{-\lambda x} (\lambda x)^{\alpha-1}} {\Gamma(\alpha)} & \text{if } x\geq 0 \\[0.6em]
0 & \text{if } x<0
\end{cases}$$
is said to be a ***gamma*** $r.v.$ with parameter $\alpha$, $\lambda$, and ***gamma function***, $\Gamma(\alpha) = \d{\int_{0}^{\infty} e^{-x} x^{\alpha - 1} \; \dd{x}}$.
$Remark$
By induction we can show that $\Gamma(n) = (n-1)!$ for integral $n$.
>$$\begin{align}
\Gamma(n+1) &= \int_{0}^{\infty} e^{-x} x^{n} \;\dd{x} \\
&= \left. -e^{-x}x^n \right|_{0}^{\infty} + n\int_{0}^{\infty} e^{-x} x^{n-1} \;\dd{x} \\
&= n\,\Gamma(n)
\end{align}$$
>
>and since $\Gamma(1) = \d{\int_{0}^{\infty} e^{-x} \;\dd{x}} = 1$, induction gives $\Gamma(n) = (n-1)!$ for integral $n$.
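A one-line numerical check of $\Gamma(n) = (n-1)!$, purely illustrative, using `scipy.special.gamma`:
```python
from scipy.special import gamma
from math import factorial

for n in range(1, 7):
    print(n, gamma(n), factorial(n - 1))   # gamma(n) should equal (n-1)!
```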
## Normal Random Variables
We say that $X$ is a ***normal*** $r.v.$ with parameters $\mu$ and $\sigma^2$ if its pdf is given by
$$f(x) = \frac{1} {\sqrt{2\pi\sigma^2}} \exp\CB{-\ffrac{(x-\mu)^2} {2\sigma^2}}$$
with $x \in \mathbb{R}$. Its density function is a bell-shaped curve that is symmetric around $\mu$.
$Remark$
If $X$ is normally distributed with parameters $\mu$ and $\sigma^2$, then $Y = \alpha X + \beta$ (take $\alpha > 0$) is also normally distributed, with parameters $\alpha \mu + \beta$ and $\alpha^2 \sigma^2$; by linearity, $Y \in \mathbb{R}$.
>$$\begin{align}
F_Y(a) &= P(Y \leq a) = P(\alpha X + \beta \leq a)\\[0.6em]
&= F_X \left( \ffrac{a-\beta} {\alpha} \right) \\
&= \int_{-\infty}^{(a - \beta)/\alpha} \frac{1} {\sqrt{2\pi\sigma^2}} \exp\CB{-\ffrac{(x-\mu)^2} {2\sigma^2}} \;\dd{x} \\
&\stackrel{ y = \alpha x + \beta} {=} \int_{-\infty}^{a}\frac{1} {\sqrt{2\pi}\sigma\alpha} \exp\CB{-\ffrac{(y-(\alpha \mu + \beta))^2} {2\sigma^2\alpha^2}} \;\dd{y}
\end{align}$$
$Remark$
The previous result can be applied inversely so that any normally distributed $r.v.$ $X$ can be transformed into a specific one with parameters $0$ and $1$, by conducting $Z = (X - \mu)/\sigma$
# Expectation of a Random Variable
## The Discrete Case
If $X$ is a discrete $r.v.$ having a pmf $p(x)$, the the ***expected value*** of $X$ is defined by:
$\d{\EE{X} = \sum_{x:p(x)>0}xp(x)}$
**e.g.**
Expectation of a **Bernoulli** $r.v.$
> $\EE{X} = 0 \cdot (1-p) + 1 \cdot p = p $
**e.g.**
Expectation of a **Binomial** $r.v.$
> $\begin{align}
\EE{X} &= \sum_{i=0}^{n} i \cdot p(i) = \sum_{i=0}^{n} i \cdot \binom{n} {i} p^i (1-p)^{n-i} \\
&= \sum_{i=\mathbf{1}}^{n} \ffrac{n!} {(n-i)!(i-1)!} p^i (1-p)^{n-i} \\
&= np \sum_{i=\mathbf{1}}^{n} \ffrac{(n-1)!} {(n-i)!(i-1)!} p^{i-1} (1-p)^{n-i} \\
&\stackrel{k=i-1}{=} np \sum_{k=\mathbf{0}}^{n-1} \cdot \binom{n-1} {k} p^k (1-p)^{n-1-k} \\
&= np\left[p+(1-p)\right]^{n-1} = np
\end{align}$
**e.g.**
Expectation of a **Geometric** $r.v.$
> $\begin{align}
\EE{X} &= \sum_{n=1}^{\infty} n \cdot p(1-p)^{n-1} \\
&\stackrel{q=1-p}{=} p \sum_{n=1}^{\infty}nq^{n-1} \\
&= p \sum_{n=1}^{\infty} \ffrac{\dd{}} {\dd{q}}q^{n} \\
&= p \ffrac{\dd{}} {\dd{q}} \left( \ffrac{q} {1-q} \right)\\
&= \ffrac{p} {(1-q)^2} = \ffrac{1} {p}
\end{align}$
>
> (The shift-and-subtract trick for summing $\sum_{n\geq1} nq^{n-1}$ also works.)
**e.g.**
Expectation of a **Poisson** $r.v.$
> $\begin{align}
\EE{X} &= \sum_{i=0}^{\infty} i\cdot\ffrac{e^{-\lambda}\lambda^i} {i!} \\
&= \lambda e^{-\lambda} \sum_{i=\mathbf{1}}^{\infty} \ffrac{\lambda^{i-1}} {(i-1)!} \\
&= \lambda e^{-\lambda} e^{\lambda} = \lambda
\end{align}$
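The three expectations derived above can also be sanity-checked numerically with `scipy.stats`; the parameter values below are arbitrary illustrations.
```python
from scipy.stats import binom, geom, poisson

print(binom.mean(10, 0.3), 10 * 0.3)   # binomial: np
print(geom.mean(0.25), 1 / 0.25)       # geometric: 1/p
print(poisson.mean(4.2), 4.2)          # Poisson: lambda
```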
***
## The Continuous Case
For $r.v.$ $X$ with pdf $f(x)$, its expected value is defined by $\EE{X} = \d{\int_{-\infty}^{\infty} xf(x) \;\dd{x}}$.
**e.g.**
Expectation of a **Uniform** $r.v.$
> $\begin{align}
\EE{X} &= \int_{\alpha}^{\beta} x \cdot \frac{1} {\beta - \alpha} \; \dd{x} \\
&= \frac{\beta^2 - \alpha^2} {2(\beta - \alpha)} = \frac{\beta + \alpha} {2}
\end{align}$
**e.g.**
Expectation of a **Exponential** $r.v.$
> $\begin{align}
\EE{X} &= \int_{0}^{\infty} x \cdot \lambda e^{-\lambda x} \;\dd{x} = \int_{0}^{\infty} -x\;\dd{e^{-\lambda x}}\\
&= \left. -xe^{-\lambda x}\right|_{0}^{\infty} + \int_{0}^{\infty} e^{-\lambda x} \; \dd{x} \\
&= 0 - \left. \frac{e^{-\lambda x}} {\lambda} \right|_{0}^{\infty} \\
&= \frac{1} {\lambda}
\end{align}$
**e.g.**
Expectation of a **Normal** $r.v.$
> $\begin{align}
\EE{X} &= \frac{1} {\sqrt{2\pi}\sigma} \int_{-\infty}^{\infty} x \cdot \exp\CB{-\frac{(x-\mu)^2} {2\sigma^2}} \;\dd{x} \\
&\stackrel{y=x-\mu}{=}\frac{1}{\sqrt{2\pi}\sigma}\left( \int_{-\infty}^{\infty} y \exp\CB{-\frac{y^2} {2\sigma^2}} \; \dd{y} + \mu \int_{-\infty}^{\infty} \exp\CB{-\frac{(x-\mu)^2} {2\sigma^2}} \; \dd{x} \right) \\
&= \frac{1}{\sqrt{2\pi}\sigma} \int_{-\infty}^{\infty} y \exp\CB{-\frac{y^2} {2\sigma^2}} \; \dd{y} + \mu \int_{-\infty}^{\infty} f(x) \; \dd{x}
\end{align}$
>By symmetry, the first term vanishes, so we can conclude that $\EE{X} = \mu \cdot 1 = \mu$.
***
## Expectation of a Function of a $r.v.$
$Proposition$
For $X$ with pmf $p(x)$, and any real-valued function $g$, $\EE{g(X)} = \d{\sum_{x:p(x)>0}} g(x)p(x)$. And for those with pdf $f(x)$, we have $\EE{g(X)} = \d{\int_{-\infty}^{\infty}} g(x)f(x) \;\dd{x}$
$Corollary$
For constant $a$ and $b$, then $\EE{aX+b} = a\EE{X} + b$.
***
We also call the quantity $\EE{X^n}$, $n \geq 1$, the $n\text{-th}$ ***moment*** of $X$. The ***variance*** is defined by $\Var{X} = \EE{(X - \EE{X})^2}$.
**e.g.**
Variance of the ***Normal*** $r.v.$
> $\begin{align}
\Var{X} &= \EE{(X - \mu)^2} \\
&= \frac{1} {\sqrt{2\pi}\sigma}\int_{-\infty}^{\infty} (x-\mu)^2 \exp\CB{-\frac{(x-\mu)^2} {2\sigma^2}}\;\dd{x} \\
&\stackrel{y=(x-\mu)/\sigma}{=} \frac{\sigma^2} {\sqrt{2\pi}} \int_{-\infty}^{\infty} y^2 \exp\CB{-\frac{y^2} {2}} \;\dd{y} \\
&= \frac{\sigma^2} {\sqrt{2\pi}} \int_{-\infty}^{\infty} -y \;\dd{e^{-y^2/2}}\\
&= \frac{\sigma^2} {\sqrt{2\pi}} \left( \left.-ye^{-y^2/2}\right|_{-\infty}^{\infty} + \int_{-\infty}^{\infty} e^{-y^2/2} \;\dd{y} \right) \\
&= \frac{\sigma^2} {\sqrt{2\pi}} \cdot \int_{-\infty}^{\infty} e^{-y^2/2} \;\dd{y} \\
&= \sigma^2
\end{align}$
***
$Remark$
To prove that $\d{\int_{-\infty}^{\infty} e^{-y^2/2} \;\dd{y}} = \sqrt{2\pi}$, you can use the double-integral (polar coordinates) method. Well, I tried to think of another way, but failed in the end... sad.
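For what it is worth, sympy evaluates the integral directly; this is just a standalone check, not the double-integral argument.
```python
import sympy as sym

y = sym.symbols('y')
print(sym.integrate(sym.exp(-y**2 / 2), (y, -sym.oo, sym.oo)))   # sqrt(2)*sqrt(pi)
```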
$Remark$
Another formular will connect the expectation and the variance: $\Var{X} = \EE{X^2} - (\EE{X})^2$, for both continuous case and discrete case.
# Jointly Distributed Random Variables
## Joint Distribution Functions
For any two $r.v.$s $X$ and $Y$, we can define the ***joint cumulative probability distritbution function*** of $X$ and $Y$ by
$$F(a,b) = P\CB{X \leq a, Y \leq b}$$
for $a,b \in \mathbb{R}$. And with this we can find the ***marginal cumulative probability distribution*** like:
$$\begin{align}
F_X(a) &= P\CB{X \leq a} = P \CB{X \leq a, Y < \infty} = F(a, \infty)\\
F_Y(b) &= F(\infty, b)
\end{align}$$
In the case where $X$ and $Y$ are both discrete $r.v.$, it's also convenient to define the ***joint probability mass function*** of $X$ and $Y$ by $p(x,y) = P\CB{X = x, Y=y}$; the ***marginal probability mass functions*** then follow as:
$$
p_X(x) = \sum_{y:p(x,y)>0} p(x,y) \;\lvert\; p_Y(y) = \sum_{x:p(x,y)>0} p(x,y)
$$
We say that $X$ and $Y$ are ***jointly continuous*** if there exists a function $f(x,y)$, namely the ***joint probability density function*** of $X$ and $Y$, defined for all real $x$ and $y$, having the property that for all sets $A$ and $B$ of real numbers this holds:
$$\d{P\CB{X \in A, Y \in B} = \int_B \int_A f(x,y) \; \dd{x} \; \dd{y}}$$
And the ***marginal*** part:
$$\begin{align}
P\CB{X\in A} &= P\CB{X \in A, Y \in (-\infty,\infty)} \\
&= \int_{-\infty}^{\infty} \int_A f(x,y) \;\dd{x} \; \dd{y} \\
&= \int_A f_X(x)\;\dd{x}
\end{align}$$
where $f_X(x) = \d{\int_{-\infty}^{\infty} f(x,y) \; \dd{y}}$, which is how we obtain the marginal pdf of $X$.
And because $F(a,b) = P\CB{X \leq a,Y \leq b}=\d{\int_{-\infty}^{a}\int_{-\infty}^{b} f(x,y) \;\dd{y} \;\dd{x}}$, differentiation yields:
$$\ffrac{\dd{}^2} {\dd{a}\;\dd{b}}F(a,b) = f(a,b)$$
The expectation can be calculated by
$$
\EE{g(X,Y)} = \begin{cases}
\d{\sum_y \sum_x g(x,y) p(x,y)} & \text{discrete case}\\
\d{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g(x,y) f(x,y) \;\dd{x}\;\dd{y}} &\text{continuous case}
\end{cases}
$$
A direct application of this is
$$\EE{\sum_{i=1}^{n}a_iX_i} = \sum_{i=1}^{n}a_i\EE{X_i}$$
for $n$ $r.v.$s and $n$ constants $a_1, \dots, a_n$.
$Remark$
Only applicable to linear combinations!
**(V)e.g.**
Choose $10$ letters from $A$ to $Z$, with repetition allowed. Compute the expected number of different types of letters contained in the set of $10$.
> It's hard to calculate that directly, so we break it apart, $10$ parts. We first define $X_i$ as
>$$X_i = \begin{cases}
1, & \text{if at least one type of letter } i \text{ is in the set of } 10\\
0, & \text{otherwise}
\end{cases}$$
>Then $X$, as the number of different types in the set of $10$ letters, we have $X = \sum X_i$. And we have
>$$\begin{align}
\EE{X_i} &= P\CB{X_i = 1} \\
&= 1 - P\CB{\text{no type of letter }i\text{ are in the set of }10} \\
&= 1 - \left(\ffrac{25} {26}\right)^{10}
\end{align}$$
>
>So that $\EE{X} = \sum\EE{X_i} = 26\left[1 - \left(\ffrac{25} {26}\right)^{10}\right]$
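A quick Monte-Carlo check of this result (a simulation sketch: $26$ types, $10$ draws with replacement):
```python
import numpy as np

rng = np.random.default_rng(0)
draws = rng.integers(0, 26, size=(50_000, 10))          # 10 letters per sample
distinct = np.mean([len(np.unique(row)) for row in draws])
print(distinct, 26 * (1 - (25 / 26) ** 10))              # both approximately 8.4
```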
***
## Independent $r.v.$
$X$ and $Y$ are said to be ***independent*** if for all $a$, $b$, we have $P\CB{X \leq a, Y \leq b} = P\CB{X \leq a}\cdot P\CB{Y \leq b}$. *In other words*, the events $E_a = \CB{X \leq a}$ and $F_b = \CB{Y \leq b}$ are independent.
In terms of the joint distribution function $F$, we have that $X$ and $Y$ are independent if for $\forall a,b$,
$$F(a,b) = F_X (a) \cdot F_Y(b)$$
which can also be reduced to
$$\begin{cases}
p(x,y) &\!\!\!\!= p_X(x) \cdot p_Y(y) & \text{discrete case}\\[0.6em]
f(x,y) &\!\!\!\!= f_X(x) \cdot f_Y(y) & \text{continuous case}
\end{cases}$$
$Proposition$
If $X$ and $Y$ are independent, the for any functions $h$ and $g$: $\EE{g(X)\cdot h(Y)} = \EE{g(X)} \cdot \EE{h(Y)}$.
## Covariance and Variance of Sums of $r.v.$
The covariance of *any* two random variables $X$ and $Y$, denoted by $\Cov{X,Y}$, is defined by
$$\begin{align}
\Cov{X, Y} &= \EE{(X - \EE{X}) \cdot (Y - \EE{Y})} \\
&= \EE{XY - Y\EE{X} - X\EE{Y} + \EE{X}\EE{Y}} \\
&= \EE{XY} - \EE{X} \EE{Y}
\end{align}$$
Easy to see that if $X$ and $Y$ are independent, then $\Cov{X,Y} = 0$
$Remark$
In general it can be shown that a **positive** value of $\Cov{X,Y}$ is an **indication** that $Y$ tends to increase as $X$ does, whereas a negative value indicates that $Y$ tends to decrease as $X$ increase.
**e.g.**
Given the joint density function of $X$ and $Y$, $f(x,y) = \ffrac{1} {y} \exp\CB{-y-\ffrac{x} {y}}$ for $0 < x,y < \infty$, verify that this is a proper density and find the covariance.
> For the verification:
>$$\begin{align}
\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x,y) \;\dd{y} \;\dd{x} &= \int_{0}^{\infty}\int_{0}^{\infty} \ffrac{1} {y} \exp\CB{-y-\frac{x} {y}} \;\dd{y} \;\dd{x} \\
&= \int_{0}^{\infty} e^{-y} \int_{0}^{\infty} \frac{1} {y} \exp\CB{-\frac{x} {y}} \;\dd{x} \;\dd{y}\\
&= \int_{0}^{\infty} e^{-y} \;\dd{y} \\
&= 1
\end{align}$$
>And for the covariance we first need the expectation of separate $r.v.$s. Two ways available. For $\EE{X}$,
>$$
\begin{align}
\EE{X} &= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x\cdot f(x,y) \;\dd{y} \;\dd{x} \\
&= \int_{0}^{\infty} e^{-y} \int_{0}^{\infty} \frac{x} {y} \exp\CB{-\frac{x} {y}} \;\dd{x} \;\dd{y}
\end{align}$$
>Note that $\d{\int_{0}^{\infty} \frac{x} {y} \exp\CB{-\frac{x} {y}} \;\dd{x}}$ is the mean of an exponential $r.v.$ with parameter $\ffrac{1}{y}$ and thus equals $y$. Consequently, $\EE{X} = \d{\int_{0}^{\infty} y e^{-y} \;\dd{y} = 1}$.
>***
>Then for $\EE{Y}$, we need another method. We first calculate the marginal probablity $f_Y(y)$.
>
>$f_Y(y) = e^{-y} \d{\int_{0}^{\infty} \ffrac{1} {y} \exp\CB{-\ffrac{x} {y}}\;\dd{x}} = e^{-y}$, then $\EE{Y} = 1$.
>
>***
>$$
\begin{align}
\EE{XY} &= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} xy \cdot f(x,y) \;\dd{y} \;\dd{x} \\
&= \int_{0}^{\infty} y e^{-y} \int_{0}^{\infty} \frac{x} {y} \exp\CB{-\frac{x} {y}} \;\dd{x} \;\dd{y} \\
&= \int_{0}^{\infty} y^2 e^{-y} \;\dd{y} \\
&= \int_{0}^{\infty} -y^2 \;\dd{e^{-y}} \\
&= \left.-y^2 e^{-y}\right|_{0}^{\infty} + \int_{0}^{\infty} 2ye^{-y} \;\dd{y} = 2\EE{Y} = 2
\end{align}$$
>Consequently, $\Cov{X,Y} = \EE{XY} - \EE{X}\EE{Y} = 1$
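These moments can also be checked numerically, e.g. with `scipy.integrate.dblquad`; this is a verification sketch only, and the integration region is truncated where the integrand is negligible.
```python
import numpy as np
from scipy.integrate import dblquad

f = lambda x, y: np.exp(-y - x / y) / y
# outer variable: y in (0, 50); inner variable: x in (0, 50*y); tails are negligible
EX,  _ = dblquad(lambda x, y: x * f(x, y),     0, 50, 0, lambda y: 50 * y)
EY,  _ = dblquad(lambda x, y: y * f(x, y),     0, 50, 0, lambda y: 50 * y)
EXY, _ = dblquad(lambda x, y: x * y * f(x, y), 0, 50, 0, lambda y: 50 * y)
print(EX, EY, EXY, EXY - EX * EY)   # approximately 1, 1, 2, 1
```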
$Remark$
>Covariance equaling to $0$ can't imply that the two are independent, the inverse statement is true though.
$Other \space properties$
For any $r.v.$s $X$, $Y$, $Z$ and constant $c$, we have
- $\Cov{X,X} = \Var{X}\\[0.5em]$
- $\Cov{X,Y} = \Cov{Y,X}\\[0.5em]$
- $\Cov{cX,Y} = c \cdot\Cov{X, Y}\\[0.5em]$
- $\Cov{X,Y+Z} = \Cov{X,Y} + \Cov{X,Z}$
And the generalized forth property: $\d{\Cov{\sum_{i=1}^{n}X_i,\sum_{j=1}^{m}Y_j}=\sum_{i=1}^{n}\sum_{j=1}^{m} \Cov{X_i,Y_j}}$.
And one more application for variance
$$
\begin{align}
\Var{\sum_{i=1}^{n} X_i} &= \Cov{\sum_{i=1}^{n}X_i,\sum_{j=1}^{n}X_j} \\
&= \sum_{i=1}^{n} \sum_{j=1}^{n} \Cov{X_i, X_j} \\
&= \sum_{i=1}^{n}\Cov{X_i, X_i} + \sum_{i=1}^{n} \sum_{j \neq i} \Cov{X_i, X_j} \\
&= \sum_{i=1}^{n}\Var{X_i} + 2 \sum_{i=1}^{n} \sum_{j < i} \Cov{X_i, X_j}
\end{align}$$
Even, when $X_i$ are independent, this will reduce to $\d{\Var{\sum_{i=1}^{n}X_i} = \sum_{i=1}^{n} \Var{X_i}}$
$Def$
If $X_1, X_2, \dots, X_n$ are **independent** and **identically distributed**, we define the ***sample mean*** as
$$\bar{X} = \frac{1} {n}\sum_{i=1}^{n} {X_i}$$
$Proposition$
Suppose that $X_1, \dots, X_n$ are independent and identically distributed with expected value $\mu$ and variance $\sigma^2$. Then:
- $\EE{\bar{X}} = \mu$
- $\Var{\bar{X}} = \ffrac{\sigma^2}{n}\\[0.5em]$
- $\Cov{\bar{X}, X_i - \bar{X}} = 0$, $i = 1,2,\dots,n\\[0.5em]$
**e.g.**
Variance of a **Binomial** $r.v.$
>We first break it up. $X = X_1 +\cdots+X_n$, with the $n$ components from $n$ independent *Bernoulli* $r.v.$. Then we have
>$$\Var{X} = \sum \Var{X_i}$$
>Since $\Var{X_i} = \EE{X_i^2} - \left(\EE{X_i}\right)^2 = p - p^2$, $\Var{X} = np(1-p)$
***
**e.g.** ***The Hypergeometric***
Consider $N$ individuals, a fraction $p$ of whom are in favor of a certain proposition and the rest opposed, where $p$ is assumed to be *unknown* and is to be *estimated*. We randomly choose $n$ members of the population and determine their positions.
>We use the proportion in favor within the sample as an estimator of $p$. First we let
>
>$$X_i = \begin{cases}
1, &\text{if the }i\texttt{th}\text{ person chosen is in favor} \\[0.5em]
0, &\text{otherwise}
\end{cases}$$
>
>Then the estimator of $p$ is $\ffrac{1} {n}\sum_{i=1}^{n} X_i$. We now compute its mean and variance for a little comparison
>
>$\d{\EE{\ffrac{1} {n}\sum_{i=1}^{n} X_i} = \ffrac{1} {n}\sum_{i=1}^{n} \EE{X_i} } = p$
>
>$\d{\Var{\ffrac{1} {n}\sum_{i=1}^{n} X_i} = \ffrac{1} {n^2} \left(\sum_{i=1}^{n} \Var{X_i} + 2 \underset{i<j}{\sum\sum} \Cov{X_i, X_j}\right)}$
>
>Easy to see that $X_i$ is a **Bernoulli** $r.v.$ so that $\Var{X_i} = p(1-p)$, so now we get down to handling the covariance.
>
>$\begin{align}
\Cov{X_i,X_j} &= \EE{X_i \cdot X_j} - \EE{X_i} \cdot \EE{X_j} \\[0.5em]
&= P\CB{X_i = 1, X_j = 1} - p^2 \\[0.5em]
&= \ffrac{Np} {N} \cdot \ffrac{Np-1} {N-1} - p^2\\[0.5em]
\end{align}$
>
>$\begin{align}
\Var{\ffrac{1} {n}\sum_{i=1}^{n} X_i} &= \ffrac{1} {n^2} \left[ np(1-p) + 2\binom{n}{2} \left( \ffrac{Np} {N} \cdot \ffrac{Np-1} {N-1} - p^2 \right) \right] \\
&= \ffrac{p(1-p)} {n} - \ffrac{(n-1)p(1-p)} {n(N-1)} = \ffrac{p(1-p)(N-n)} {n(N-1)}
\end{align}$
$Remark$
As $N$ increases, the variance increases, and the limiting value as $N \to \infty$ is $p(1-p)/n$. This is not surprising since, for $N$ large enough, each $X_i$ can be considered an *independent* **Bernoulli** $r.v.$, and thus $\sum X_i$ is approximately **binomial** with parameters $n$ and $p$.
$Remark$
The ***Hypergeometric*** $r.v.$ itself arises as follows: there are $N$ items in total, $Np$ of which have a certain feature and the rest do not. We select $n$ items from the $N$ (without replacement) and denote by $X$ the number of selected items with that feature:
$$\d{P\CB{X=k}} = \ffrac{\d{\binom{Np} {k}\binom{N-Np} {n-k}}} {\d{\binom{N} {n}}}$$
An easy example is an urn containing $Np$ red balls and $N-Np$ blue balls. We take $n$ balls out, and this is the *distribution* of the number of red balls drawn.
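The variance formula for the sample proportion derived above can be compared directly with `scipy.stats.hypergeom`; the numbers below are illustrative.
```python
from scipy.stats import hypergeom

N_pop, p, n_sample = 1000, 0.4, 50
rv = hypergeom(M=N_pop, n=int(N_pop * p), N=n_sample)   # distribution of the count
print(rv.var() / n_sample**2,
      p * (1 - p) * (N_pop - n_sample) / (n_sample * (N_pop - 1)))
```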
***
Another useful object is the ***convolution*** of the distributions $F_X$ and $F_Y$: the distribution of $X+Y$, obtained from the distributions of $X$ and $Y$ (with densities $f$ and $g$, respectively), given that they are **independent**.
$$
\begin{align}
F_{X+Y}(a) &= P \CB{X+Y \leq a} \\
&= \iint_{x+y \leq a} f(x) g(y) \;\dd{x} \;\dd{y} \\
&= \int_{-\infty}^{\infty} \int_{-\infty}^{a-y} f(x) g(y) \;\dd{x} \;\dd{y}\\
&= \int_{-\infty}^{\infty} \left( \int_{-\infty}^{a-y} f(x) \;\dd{x} \right) g(y) \;\dd{y} \\
&= \int_{-\infty}^{\infty} F_X (a-y) g(y) \;\dd{y}
\end{align}$$
Then we differentiating both sides of the equation above, the pdf comes:
$$
\begin{align}
\ffrac{\dd{}} {\dd{a}} F_{X+Y}(a) &= \ffrac{\dd{}} {\dd{a}} \int_{-\infty}^{\infty} F_X (a-y) g(y) \;\dd{y} \\[0.7em]
f_{X+Y}(a) &= \int_{-\infty}^{\infty} \ffrac{\dd{}} {\dd{a}} \big(F_X (a-y)\big)g(y) \;\dd{y} \\
&= \int_{-\infty}^{\infty} f(a-y) g(y) \;\dd{y}
\end{align}$$
**(V)e.g.** **Sum** of Two Independent **Uniform** $r.v.$
Given $X$ and $Y$ are independent $r.v.$ both uniformly distributed on $(0,1)$, find the pdf of $X+Y$.
>First we have $f(z) = g(z) = \begin{cases}
1, & 0 < z < 1 \\[0.5em]
0, & \text{otherwise}
\end{cases}$, and with the previous formula we have:
>
>$$f_{X+Y} (z) = \int_{-\infty}^{\infty} f(z-y)g(y) \;\dd{y} = \int_0^1f(z-y)\;\dd{y}$$
>
>Then for $0 \leq z \leq 1$, this yields $\d{f_{X+Y}(z) = \int_{0}^{z} \;\dd{y} = z}$. For $1 < z < 2$, we get $\d{f_{X+Y} (z) = \int_{z-1}^{1}\;\dd{y} = 2-z}$. Hence we draw the conclusion as
>
>$$
f_{X+Y}(z) = \begin{cases}
z, & 0 \leq z \leq 1 \\[0.5em]
2-z, & 1 < z < 2 \\[0.5em]
0, & \text{otherwise}
\end{cases}$$
>
>Just for fun, I also calculate the "triple" one, i.e. the density of $X+Y+Z$ where $Z$ is a third independent uniform $(0,1)$ $r.v.$:
>$$
f_{X+Y+Z}(w) = \begin{cases}
\frac{1} {2}w^2, & 0 \leq w \leq 1 \\[0.5em]
-w^2 + 3w - \frac{3} {2}, & 1 < w \leq 2 \\[0.5em]
\frac{1} {2}w^2 - 3w + \frac{9} {2} = \frac{1} {2}(w-3)^2, & 2 < w < 3 \\[0.5em]
0, & \text{otherwise}
\end{cases}$$
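A small Monte-Carlo check of the triangular density of $X+Y$ (a sketch comparing an empirical histogram with the formula derived above):
```python
import numpy as np

rng = np.random.default_rng(1)
z = rng.random(200_000) + rng.random(200_000)            # X + Y, both U(0,1)
hist, edges = np.histogram(z, bins=40, range=(0, 2), density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
exact = np.where(mid <= 1, mid, 2 - mid)                 # f_{X+Y}
print(np.max(np.abs(hist - exact)))                      # small (sampling noise only)
```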
**e.g.** **Sum** of Independent **Poisson** $r.v.$
Let $Χ$ and $Y$ be independent **Poisson** $r.v.$ with respective means $\lambda_1$ and $\lambda_2$.
>$$\begin{align}
P\CB{X + Y = n} &= \sum_{k=0}^{n} P \CB{X=k, Y = n-k} \\
&= \sum_{k=0}^{n} P\CB{X = k} \cdot P\CB{Y = n-k} \\
&= \sum_{k=0}^{n} e^{-\lambda_1} \ffrac{\lambda_1^k} {k!} \cdot e^{-\lambda_2} \ffrac{\lambda_2^{n-k}} {(n-k)!}\\
&= \ffrac{e^{-\lambda_1 - \lambda_2}} {n!} \sum_{k=0}^{n} \ffrac{n!} {k!(n-k)!} \lambda_1^k \lambda_2^{n-k} \\
&= \ffrac{e^{-\lambda_1 - \lambda_2}} {n!} (\lambda_1 + \lambda_2)^n
\end{align}$$
>In words, $X+Y$ has a **Poisson** distribution with mean $\lambda_1 + \lambda_2$.
$Remark$
The general idea of independence is that for all values $a_1, a_2, \dots, a_n$, we have
$$P\CB{X_1 \leq a_1, X_2 \leq a_2, \dots, X_n \leq a_n} = P\CB{X_1 \leq a_1} \cdot P\CB{X_2 \leq a_2} \cdot \cdots \cdot P\CB{X_n \leq a_n} $$
The ***Order Statistics***
Let $X_1, \dots, X_n$ be $i.i.d.$ continuous $r.v.$ with cdf $F$ and pdf $f = F'$. Define $X_{(i)}$ as the $i\texttt{th}$ smallest of these $r.v.$, then $X_{(1)}, \dots, X_{(n)}$ are called the ***Order Statistics*** . Find their distributions.
$$P\CB{X_{(i)} \leq x} = \sum_{k=i}^{n} \binom{n} {k} \big(F(x)\big)^k \big( 1-F(x) \big)^{n-k}$$
Differentiation yields that the density function of $X_{(i)}$ is as follows:
$$
\begin{align}
f_{X_{(i)}} (x) &= f(x) \left( \sum_{k=i}^{n}\binom{n} {k} \cdot k \big(F(x)\big)^{k-1} \big( 1-F(x) \big)^{n-k} - \sum_{k=i}^{n}\binom{n} {k} \big(F(x)\big)^{k} \cdot (n-k) \big( 1-F(x) \big)^{n-k-1} \right) \\
&= f(x) \left( \sum_{k=i}^{n} \ffrac{n!} {(n-k)!(k-1)!} \big(F(x)\big)^{k-1} \big( 1-F(x) \big)^{n-k} - \sum_{k=i}^{n}\ffrac{n!} {(n-k-1)!k!} \big(F(x)\big)^{k} \big( 1-F(x) \big)^{n-k-1} \right) \\
&= f(x) \left( \sum_{k=i}^{n} \ffrac{n!} {(n-k)!(k-1)!} \big(F(x)\big)^{k-1} \big( 1-F(x) \big)^{n-k} - \sum_{j=i+1}^{n}\ffrac{n!} {(n-j)!(j-1)!} \big(F(x)\big)^{j-1} \big( 1-F(x) \big)^{n-j} \right) \\
&= \ffrac{n!} {(n-i)!(i-1)!} f(x) \big(F(x)\big)^{i-1}\big(1-F(x)\big)^{n-i}
\end{align}$$
***
## Joint Probability Distribution of Functions of $r.v.$
Let $X_1$ and $X_2$ be jointly continuous $r.v.$ with jointly pdf $f(x_1,x_2)$. We need to obtain the joint distribution of two new $r.v.$s $Y_1$ and $Y_2$ that arise as functions of $X_1$ and $X_2$, with $Y_1 = g_1(X_1, X_2)$ and $Y_2 = g_2(X_1, X_2)$.
$Assumptions$
1. The equations $y_1 = g_1(x_1, x_2)$ and $y_2 = g_2(x_1, x_2)$ can be *uniquely* solved for $x_1$ and $x_2$ in terms of $y_1$ and $y_2$ with solutions given by $x_1 = h_1(y_1,y_2)$ and $x_2 = h_2(y_1,y_2)$.$\\[0.7em]$
2. The functions $g_1$ and $g_2$ have *continuous partial derivatives* at *all points* $(x_1, x_2)$ and are such that the following determinant$\\[0.6em]$
$$J(x_1, x_2) = \begin{vmatrix}
\ffrac{\partial g_1}{\partial x_1} & \ffrac{\partial g_1}{\partial x_2} \\
\ffrac{\partial g_2}{\partial x_1} & \ffrac{\partial g_2}{\partial x_2}
\end{vmatrix} \equiv \ffrac{\partial g_1}{\partial x_1} \cdot \ffrac{\partial g_2}{\partial x_2} \; \: - \; \: \ffrac{\partial g_1}{\partial x_2} \cdot \ffrac{\partial g_2}{\partial x_1} \neq 0\\[0.5em]$$
at all points $(x_1, x_2)$.
Under these, $Y_1$ and $Y_2$ are jointly continuous with their joint density function given by
$$f_{Y_1,Y_2}(y_1,y_2) = g(y_1,y_2) = f_{X_1,X_2}(x_1, x_2) \big| J(x_1, x_2) \big|^{-1}$$
where $x_1 = h_1(y_1,y_2)$ and $x_2 = h_2(y_1,y_2)$. This formula can be obtained by differentiating the following equation on both sides with respect to $y_1$ and $y_2$.
$$P\CB{Y_1 \leq y_1,Y_2 \leq y_2} = \iint\limits_{\d{\begin{array}{c}
(x_1,x_2): \\
g_1(x_1,x_2) \leq y_1\\
g_2(x_1,x_2) \leq y_2
\end{array}}} f_{X_1,X_2}(x_1, x_2) \;\dd{x_1}\;\dd{x_2}$$
**e.g.**
If $X$ and $Y$ are independent **gamma** $r.v.$s with parameters $(\alpha, \lambda)$ and $(\beta, \lambda)$, respectively. Find the joint density of $U = X + Y$ and $V = X/(X+Y)$.
> From their independence, we first obtain their joint density function
>
> $$\begin{align}
f_{X,Y}(x,y) &= f_X(x) \cdot f_Y(y) \\
&= \ffrac{\lambda e^{-\lambda x} (\lambda x)^{\alpha - 1}} {\Gamma(\alpha)} \cdot \ffrac{\lambda e^{-\lambda y} (\lambda y)^{\beta - 1}} {\Gamma(\beta)} \\
&= \ffrac{\lambda^{\alpha + \beta} } {\Gamma(\alpha)\Gamma(\beta)} e^{-\lambda(x+y)} x^{\alpha-1} y^{\beta -1}
\end{align}$$
>
>Given $g_1(x,y) = x+y, g_2(x,y) = x/(x+y)$, we have $\ffrac{\partial g_1} {\partial x} = \ffrac{\partial g_1} {\partial y} = 1$, $\ffrac{\partial g_2}{\partial x} = \ffrac{y} {(x+y)^2}$, and $\ffrac{\partial g_2} {\partial y}=- \ffrac{x} {(x+y)^2}$, also the solutions: $x = u\upsilon$ and $y=u(1-\upsilon)$ so that:
>
>$$J(x,y) = \begin{vmatrix}
1 & 1\\[0.6em]
\ffrac{y}{\left( x+y \right )^2} & \ffrac{-x}{\left( x+y \right )^2}
\end{vmatrix} = - \ffrac{1} {x+y}$$
>
>$$
\begin{align}
f_{U,V}(u,\upsilon) &= f_{X,Y}(x,y) \cdot (x+y) \\[0.6em]
&= f_{X,Y}(u\upsilon, u(1-\upsilon)) \cdot u \\[0.6em]
&= \ffrac{\lambda^{\alpha + \beta} } {\Gamma(\alpha)\Gamma(\beta)} e^{-\lambda(u\upsilon+u(1-\upsilon))} (u\upsilon)^{\alpha-1} (u(1-\upsilon))^{\beta -1} \cdot u \\[0.6em]
&= \ffrac{\lambda^{\alpha + \beta-1}\cdot \lambda} {\Gamma(\alpha)\Gamma(\beta)} \cdot e^{-\lambda u} \cdot u^{1 + \alpha-1 + \beta -1} \cdot \ffrac{\Gamma(\alpha + \beta)} {\Gamma(\alpha + \beta)} \cdot \upsilon^{\alpha-1} \cdot (1-\upsilon)^{\beta -1} \\[0.6em]
&= \ffrac{\lambda e^{-\lambda u} (\lambda u)^{\alpha + \beta -1}} {\Gamma(\alpha + \beta)} \cdot \ffrac{ \upsilon^{\alpha-1} (1-\upsilon)^{\beta -1} \Gamma(\alpha + \beta)} {\Gamma(\alpha)\Gamma(\beta)}
\end{align}$$
$Remark$
Later we will know that $X+Y$ is also a **gamma** $r.v.$ with parameter $(\alpha + \beta , \lambda)$, thus with a pdf: $\d{f_{U}(u) = \ffrac{\lambda e^{-\lambda u} (\lambda u)^{\alpha + \beta -1}} {\Gamma(\alpha + \beta)}}$.
Also, since $X+Y$ and $X/(X+Y)$ are independent, we can also see that: $\d{f_V{(\upsilon)} = \ffrac{\upsilon^ {\alpha-1} (1-\upsilon)^{\beta -1} \Gamma(\alpha + \beta)} {\Gamma(\alpha)\Gamma(\beta)}}$, which is called the ***beta density*** with parameters $(\alpha, \beta)$, with $0<\upsilon<1$.
$\QQQ$ The last paragraph.
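The two distributional claims in this remark can be supported by simulation; in the sketch below $\alpha$, $\beta$ and $\lambda$ are arbitrary choices.
```python
import numpy as np

rng = np.random.default_rng(2)
alpha, beta_, lam = 2.0, 3.0, 1.5
x = rng.gamma(alpha, 1 / lam, 100_000)
y = rng.gamma(beta_, 1 / lam, 100_000)
u, v = x + y, x / (x + y)
print(u.mean(), (alpha + beta_) / lam)    # gamma(alpha+beta, lambda) mean
print(v.mean(), alpha / (alpha + beta_))  # beta(alpha, beta) mean
print(np.corrcoef(u, v)[0, 1])            # close to 0, consistent with independence
```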
***
And the same method can be applied to more than $2$ $r.v.$s, when the joint density function of the $n$ variables $X_1, X_2, \dots, X_n$ is given and we want to compute the joint density function of $Y_1, Y_2, \dots, Y_n$, where
$$Y_1 = g_1(X_1, X_2, \dots, X_n), Y_2 = g_2(X_1, X_2, \dots, X_n), \dots, Y_n = g_n(X_1, X_2, \dots, X_n)$$
The same assumptions are required: continuous partial derivatives and a nonvanishing Jacobian determinant $J(x_1,x_2,\dots, x_n) \neq 0$ at all points $(x_1,x_2,\dots, x_n)$, where
$$J(x_1,x_2,\dots, x_n) = \begin{vmatrix}
\ffrac{\partial g_1} {\partial x_1} & \ffrac{\partial g_1} {\partial x_2} & \cdots & \ffrac{\partial g_1} {\partial x_n} \\
\ffrac{\partial g_2} {\partial x_1} & \ffrac{\partial g_2} {\partial x_2} & \cdots & \ffrac{\partial g_2} {\partial x_n} \\
\vdots & \vdots & \ddots & \vdots \\
\ffrac{\partial g_n} {\partial x_1} & \ffrac{\partial g_n} {\partial x_2} & \cdots & \ffrac{\partial g_n} {\partial x_n}
\end{vmatrix}$$
and the equation set $y_1 = g_1 (x_1,x_2,\dots,x_n), y_2 = g_2 (x_1,x_2,\dots,x_n), \dots, y_n = g_n (x_1,x_2,\dots,x_n)$ has a unique solution, $x_i = h_i(y_1,y_2,\dots,y_n)$. Under these assumptions, the joint density function of the $r.v.$s $Y_i$ is given by
$$f_{Y_1,Y_2,\dots, Y_n}(y_1,y_2,\dots,y_n) = f_{X_1,X_2,\dots,X_n}(x_1,x_2,\dots, x_n)\big|J(x_1,x_2,\dots, x_n)\big|^{-1}$$
where $x_i = h_i(y_1,y_2,\dots,y_n)$.
***
# Moment Generating Functions
The ***moment generating function*** $\phi(t)$ of the $r.v.$ $X$ is defined for all values $t$ by
$$\phi(t) = \EE{e^{tX}} = \begin{cases}
\d{\sum_x e^{tx} \cdot p(x)}, & \text{if } X \text{ is discrete} \\[0.5em]
\d{\int_{-\infty}^{\infty} e^{tx} \cdot f(x) \;\dd{x}}, & \text{if } X \text{ is continuous}
\end{cases}$$
We can use this function to obtain all the moments of $X$ by successively differentiating $\phi(t)$.
$$\begin{align}
\phi'(t) &= \ffrac{\dd{}} {\dd{t}} \EE{e^{tX}} \\
&= \EE{\ffrac{\dd{}} {\dd{t}} e^{tX}} \\
&= \EE{Xe^{tX}} \\[0.8em]
\Longrightarrow \phi'(0) &= \EE{X}
\end{align}$$
Similarly, $\phi''(t) = \EE{X^2 e^{tX}} \; \Longrightarrow \; \phi''(0) = \EE{X^2}$. So in general, the $n\texttt{th}$ derivative of $\phi(t)$ evaluated at $t=0$ equals $\EE{X^n}$, for $n \geq 1$.
**e.g.** The **Binomial** Distribution
>$$\begin{align}
\phi(t) &= \EE{e^{tX}} \\
&= \sum_{k=0}^{n} e^{tk} \cdot \left( \binom{n} {k} p^k (1-p)^{n-k} \right)\\
&= \sum_{k=0}^{n} \binom{n} {k} (pe^t)^k (1-p)^{n-k} \\[0.5em]
&= (pe^t + 1 - p)^n
\end{align}$$
>Hence, $\EE{X} = \phi'(0) = n(pe^t + 1 - p)^{n-1} \cdot pe^t \big.\big|_{t=0} = np$ and $\EE{X^2} = \cdots = n(n-1) p^2 + np$. Thus we can also obtain the variance: $\Var{X} = \EE{X^2} - (\EE{X})^2 = \cdots = np(1-p)$.
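$Remark$
Since these notes live in a notebook, such differentiations can also be checked symbolically. The following snippet is only an illustrative check with SymPy, not part of the derivation above.
```python
# Symbolic check of the binomial mgf moments (illustrative)
from sympy import symbols, exp, diff, simplify

t, n, p = symbols('t n p', positive=True)
phi = (p*exp(t) + 1 - p)**n            # mgf of Binomial(n, p)

EX  = diff(phi, t, 1).subs(t, 0)       # expect n*p
EX2 = diff(phi, t, 2).subs(t, 0)       # expect n*(n-1)*p**2 + n*p
print(simplify(EX))
print(simplify(EX2 - EX**2))           # expect n*p*(1 - p), possibly in a rearranged form
```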
**e.g.** The **Poisson** Distribution
>$$\begin{align}
\phi(t) &= \EE{e^{tX}} \\
&= \sum_{n=0}^{\infty} e^{tn} \cdot \ffrac{e^{-\lambda} \lambda^n} {n!}\\
&= e^{-\lambda} \sum_{n=0}^{\infty} \ffrac{\left( \lambda e^t \right)^n} {n!} \\
&= \QQQ e^{-\lambda} \cdot e^{\lambda e^t} = \exp\CB{\lambda \left(e^t - 1 \right)}
\end{align}$$
>Differentiation yields: $\phi'(t) = \lambda e^t \exp\CB{\lambda \left(e^t - 1 \right)}$ and $\phi''(t) = \left( \lambda e^t \right)^2 \exp\CB{\lambda \left(e^t - 1 \right)} + \lambda e^t \exp\CB{\lambda \left(e^t - 1 \right)}$
> and so $\EE{X} = \lambda$, $\EE{X^2} = \lambda^2 + \lambda$. And $\Var{X} = \lambda$
**e.g.** The **Exponential** Distribution
>$$\begin{align}
\phi(t) &= \EE{e^{tX}} \\
&= \int_{0}^{\infty} e^{tx} \cdot \lambda e^{-\lambda x} \;\dd{x} \\
&= \lambda \int_{0}^{\infty} e^{-\left(\lambda - t\right)x} \;\dd{x} \\
&\stackrel{t < \lambda} {=} \ffrac{\lambda} {\lambda - t}
\end{align}$$
>Differentiation of $\phi(t)$ yields $\phi'(t) = \ffrac{\lambda} {\left(\lambda - t\right)^2}, \phi''(t) = \ffrac{2\lambda} {\left(\lambda - t\right)^3}$. Thus, $\EE{X} = \phi'(0) = \ffrac{1} {\lambda}$, $\EE{X^2} = \phi''(0) = \ffrac{2} {\lambda^2}$ and the variance of $X$ is given by $\Var{X} = \EE{X^2} - \left(\EE{X}\right) ^2 = \ffrac{1} {\lambda^2}$
$Remark$
Only when $t < \lambda$ can we calculate the integral.
**e.g.** The **Normal** Distribution
>$$\begin{align}
\EE{e^{tZ}} &= \int_{-\infty}^{\infty} e^{tz} \cdot \ffrac{1} {\sqrt{2\pi}} \exp\CB{-\ffrac{z^2} {2}} \;\dd{z} \\
&= \ffrac{1} {\sqrt{2\pi}} \int_{-\infty}^{\infty} \exp\CB{-\ffrac{z^2 - 2tz} {2}} \;\dd{z} \\
&= \ffrac{e^{t^2/2}} {\sqrt{2\pi}} \int_{-\infty}^{\infty} \exp\CB{-\ffrac{(z-t)^2} {2}} \;\dd{z} = \exp\CB{ \ffrac{t^2}{2}}
\end{align}$$
$\space$
>Here $Z$ is a ***standard normal*** $r.v.$, so for any normal $r.v.$ $X = \sigma Z + \mu$ with parameters $\mu$ and $\sigma^2$, we have
>
>$$\phi(t) = \EE{e^{tX}} = \EE{e^{t\left(\sigma Z + \mu\right)}} = e^{t\mu} \EE{e^{t\sigma Z}} = \exp\CB{ \ffrac{\sigma^2 t^2} {2} + \mu t}$$
>
>And by differentiating we obtain $\phi'(t) = \left(\mu + t \sigma^2\right) \exp\CB{\ffrac{\sigma^2 t^2} {2} + \mu t}$, so $\EE{X} = \phi'(0) = \mu$, and $\phi''(t) = \left(\mu + t \sigma^2\right)^2 \exp\CB{\ffrac{\sigma^2 t^2} {2} + \mu t} + \sigma^2 \exp\CB{\ffrac{\sigma^2 t^2} {2} + \mu t}$, so $\EE{X^2} = \phi''(0) = \mu^2 + \sigma^2$, implying that $\Var{X} = \sigma^2$.
***
An important property of the **moment generating function** is that the mgf of a sum of *independent* $r.v.$s is just the product of the individual mgfs. Suppose $X$ and $Y$ are independent and have mgf $\phi_X(t)$ and $\phi_Y(t)$, respectively.
$$\begin{align}
\phi_{X+Y}(t) &= \EE{e^{t\left(X+Y\right)}} \\
&= \EE{e^{tX} \cdot e^{tY}} \\
&\stackrel{\texttt{independence}} {=} \EE{e^{tX}}\cdot \EE{e^{tY}} = \phi_X(t)\phi_Y(t)
\end{align}$$
Another important property is that the mgf *uniquely* determines the distribution. It's a one-to-one correspondence.
$Remark$
More about the **Poisson** Distribution, the ***Poisson paradigm***: the number of successes in $n$ trials that are either independent or at most weakly dependent is, when the trial success probabilities are all small, approximately a **Poisson** $r.v.$.
$Remark$
***Laplace transform***, for nonnegative $r.v.$ $X$, is defined as for $t \geq 0$, $g(t) = \phi(-t) = \EE{e^{-tX}}$. This would limit the value between $0$ and $1$.
We can also define the ***joint moment generating function*** of more than just two $r.v.$s. For any $n$ $r.v.$s $X_1, X_2, \dots, X_n$, and for all real values of $t_1, t_2, \dots, t_n$ we define:
$$\phi(t_1, t_2, \dots, t_n) = \EE{\exp\CB{t_1X_1 + t_2X_2 + \cdots + t_nX_n}}$$
and it can be shown that $\phi(t_1, t_2, \dots, t_n)$ uniquely determines the joint distribution of $X_1, X_2, \dots, X_n$.
**e.g.** The ***Multivariate Normal Distribution***
Let $Z_1,\dots,Z_n$ be a set of $n$ independent standard normal random variables. If, for some constants $a_{ij}$ and $\mu_i$, $1 \leq i \leq m$, $1 \leq j \leq n$,
$$
\begin{array}{rcl}
X_1\!\!\!\! &=&\!\!\!\!a_{11}Z_1 + \cdots + a_{1n}Z_n + \mu_1 \\
X_2 \!\!\!\!&=&\!\!\!\!a_{21}Z_1 + \cdots + a_{2n}Z_n + \mu_2 \\
& \vdots & \\
X_i \!\!\!\!&=&\!\!\!\!a_{i1}Z_1 + \cdots + a_{in}Z_n + \mu_i \\
& \vdots & \\
X_m \!\!\!\!&=&\!\!\!\!a_{m1}Z_1 + \cdots + a_{mn}Z_n + \mu_m
\end{array}$$
Then the $r.v.$s $X_1, X_2, \dots, X_m$ are said to have a **Multivariate Normal Distribution**.
> Easy to see that $\EE{X_i} = \mu_i$ and $\Var{X_i} = \sum\limits_{j=1}^{n}a_{ij}^2$. Then $\EE{\sum\limits_{i=1} ^{m} t_iX_i} = \sum\limits_{i=1}^{m} t_i\mu_i$ and
>$$\Var{\sum\limits_{i=1} ^{m} t_iX_i} = \Cov{\sum\limits_{i=1} ^{m} t_iX_i,\sum\limits_{j=1} ^{m} t_jX_j} = \sum\limits_{i=1}^{m}\sum\limits_{j=1}^{m} t_it_j\Cov{X_i,X_j}$$
>$$\phi\left(t_1,\dots,t_m\right) = \exp\CB{\sum\limits_{i=1}^{m} t_i\mu_i + \ffrac{1} {2} \sum\limits_{i=1}^{m} \sum\limits_{j=1}^{m} t_it_j\Cov{X_i,X_j}}$$
***
## The Joint Distribution of the Sample Mean and Sample Variance from a Normal Population
$X_1,\dots,X_n$ are independent and identically distributed $r.v.$s, each with mean $\mu$ and variance $\sigma^2$. We now define the ***sample mean*** $\bar{X} =\ffrac{1} {n}\sum\limits_{i=1}^{n} X_i$ and ***sample variance***:
$$S^2 = \sum_{i=1}^{n}\ffrac{\left(X_i - \bar{X}\right)^2} {n-1}$$
With the fact that
$$\begin{align}
\sum_{i=1}^{n} \left(X_i - \bar{X}\right)^2 &= \sum_{i=1}^{n} \left( X_i - \mu + \mu - \bar{X} \right)^2 \\
&= \left[\sum_{i=1}^{n} \left(X_i - \mu \right)^2\right] + n\left(\mu - \bar{X}\right)^2 + 2\left(\mu - \bar{X} \right)\sum_{i=1}^{n} \left(X_i - \mu \right) \\
&= \left[\sum_{i=1}^{n} \left(X_i - \mu \right)^2\right] + n\left(\mu - \bar{X}\right)^2 - 2n\left(\mu - \bar{X}\right)^2 = \left[\sum_{i=1}^{n} \left(X_i - \mu \right)^2\right] - n\left(\bar{X} - \mu\right)^2
\end{align}$$
we can calculate the expectation as
$$\begin{align}
\EE{S^2} &= \ffrac{1} {n-1} \left[\left(\sum_{i=1}^{n} \EE{(X_i - \mu)^2}\right)-n\EE{\left(\bar{X} - \mu\right)^2 }\right] \\
&= \ffrac{1} {n-1}\left(n\sigma^2 - n\Var{\bar{X}}\right) = \sigma^2
\end{align}$$
$Def$ ***Chi-Squared*** $r.v.$
If $Z_1,\dots,Z_n$ are *independent* **standard normal** $r.v.$s then the $r.v.$ $\sum Z_i^2$ is said to be a **chi-squared** $r.v.$ with $n$ ***degrees of freedom***.
We first compute its mgf, note that
$$\begin{align}
\EE{\exp\CB{tZ_i^2}} &= \int_{-\infty}^{\infty}\exp\CB{tx^2}\cdot\ffrac{1} {\sqrt{2\pi}} \exp\CB{-\ffrac{x^2} {2}} \;\dd{x} \\
&= \ffrac{1} {\sqrt{2\pi}} \int_{-\infty}^{\infty} \exp\underset{\;\;\begin{array}{c}
\uparrow \\
\sigma^2 = \left(1-2t\right)^{-1}
\end{array}}{\CB{-\ffrac{x^2} {2\sigma^2}}} \;\dd{x} \\[0.8em]
&= \sigma = \left(1-2t\right)^{-1/2}
\end{align}$$
Hence,
$$\EE{\exp\CB{t\sum_{i=1}^{n} Z_i^2}} = \prod_{i=1}^{n} \EE{\exp\CB{tZ_i^2}} = \left(1-2t\right)^{-n/2}$$
$Remark$
Let $Y$ be a **normal** $r.v.$ with mean $\mu$ and variance $\sigma^2/n$ that is independent of $X_1, \dots, X_n$. Then the $r.v.$s $Y, X_1-\bar{X}, X_2-\bar{X},\dots,X_n-\bar{X}$ have a **multivariate normal** distribution. By independence, $\Cov{Y,X_i - \bar{X}} = 0$ for $i = 1,\dots, n$, and also $\EE{Y} = \EE{\bar{X}}$, so this collection has the same expected values and covariances as $\bar{X}, X_1-\bar{X}, \dots, X_n-\bar{X}$.
Since a **multivariate normal** distribution is *completely* and *uniquely* determined by its expected values and covariances, the two collections have the same joint distribution; because $Y$ is independent of the deviations, it follows that $\bar{X}$ is independent of the sequence of deviations $X_i - \bar{X}$, $i = 1,\dots, n$.
So that it's also independent of the **sample variance** $S^2 \equiv \ffrac{1} {n-1} \sum_{i=1}^{n}\left(X_i - \bar{X} \right)^2$ and now we're gonna determine the distribution of $S^2$.
$$\ffrac{n-1} {\sigma^2} S^2 = \left[ \sum_{i=1}^{n} \ffrac{\left(X_i - \mu \right)^2} {\sigma^2}\right] - \ffrac{n\left(\bar{X} - \mu\right)^2} {\sigma^2} \Rightarrow \ffrac{(n-1)S^2} {\sigma^2} + \left( \ffrac{\bar{X} - \mu} {\sigma / \sqrt{n}} \right)^2 = \sum_{i=1}^{n} \ffrac{\left(X_i - \mu \right)^2} {\sigma^2}$$
The key is to use the mgf. The right-hand side is a **chi-squared** $r.v.$ with $n$ degrees of freedom, and the second term on the left, being the square of a **standard normal** $r.v.$, is a **chi-squared** $r.v.$ with $1$ degree of freedom; moreover, the two terms on the left are independent by the previous remark. So that
$$\EE{\exp\CB{t\cdot \ffrac{(n-1)S^2} {\sigma^2}}}(1-2t)^{-1/2} = (1-2t)^{-n/2}$$
Thus, the mgf of $\ffrac{n-1} {\sigma^2} S^2$ is the same with that of a **chi-squared** $r.v.$ with $n-1$ degrees of freedom, where we can claim the proposition
$Proposition$
If $X_1,\dots,X_n$ are $i.i.d.$ **normal** $r.v.$s with mean $\mu$ and variance $\sigma^2$, then the **sample mean** $\bar{X}$ and **sample variance** $S^2$ are independent. $\bar{X}$ is a **normal** $r.v.$ with mean $\mu$ and variance $\sigma^2/n$; $(n-1)S^2/\sigma^2$ is a **chi-squared** $r.v.$ with $n-1$ degrees of freedom.
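$Remark$ (simulation check)
A small Monte Carlo experiment makes this proposition concrete; the sketch below uses arbitrary values of $\mu$, $\sigma$, $n$ and checks (i) that $\bar{X}$ and $S^2$ are uncorrelated and (ii) that $(n-1)S^2/\sigma^2$ has the mean $n-1$ and variance $2(n-1)$ of a chi-squared $r.v.$ with $n-1$ degrees of freedom.
```python
# Monte Carlo sketch for the sample mean / sample variance proposition (arbitrary parameters)
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n, reps = 5.0, 2.0, 10, 50_000

samples = rng.normal(mu, sigma, size=(reps, n))
xbar = samples.mean(axis=1)
s2   = samples.var(axis=1, ddof=1)

print(np.corrcoef(xbar, s2)[0, 1])      # ~0 (in fact independent)
stat = (n - 1) * s2 / sigma**2
print(stat.mean(), stat.var())          # ~ n-1 = 9 and ~ 2(n-1) = 18
```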
# The Distribution of the Number of Events that Occur
Consider arbitrary events $A_1,\dots,A_n$, and let $X$ denote the number of these events that occur. What's the pmf of $X$? We first define:
$$S_k = \sum_{\d{i_1 < \cdots < i_k}} P\left( A_{\d{i_1}},\dots,A_{\d{i_k}} \right)$$
as the sum of the probabilities of all the $\d{\binom{n} {k}}$ intersections ($\cap$) of $k$ distinct events, and note that the inclusion-exclusion identity states that
$$P\CB{X>0} = P\left(\bigcup_{i=1}^{n}A_i\right) = S_1 - S_2 + S_3 - \cdots + (-1)^{n+1} S_n$$
Now, to help understand, we fix $h$ of the $n$ events, say $A_{1},\dots,A_{h}$, and let $A=\bigcap\limits_{j=1}^{h} A_{j}$ be the event that all $h$ of these events occur. Also, let $B=\bigcap\limits_{j\notin\CB{1, 2, \dots, h}} A_{j}^{c}$ be the event that none of the other $n-h$ events occurs. Consequently, $A\cap B = AB$ is the event that $A_{1}, \dots,A_{h}$ are the *only* events to occur. Then, since $A = AB \cup AB^c$, we have $P(AB) = P(A) - P(AB^c)$.
While $B^c = \bigcup\limits_{j \notin \CB{1, 2, \dots, h}} A_j$, so that $P(AB^c) = P\left( A\bigcup\limits _{j\notin\CB{1,\dots,h}} A_j\right) = P\left( \; \bigcup\limits _{j\notin\CB{1,\dots,h}} AA_j \right)$.
Then we apply the inclusion-exclusion identity again:
$$
\begin{align}
P(AB^c) &= \sum_{\d{j\notin \CB{1, 2, \dots, h}}} P(AA_j) - \sum_{\d{j_1 <j_2 \notin \CB{1,\dots,h}}} P( AA_{\d{j_1}}A_{\d{j_2}} ) \\[1em]
&\;\;\;\;+ \sum_{\d{j_1 < j_2 < j_3 \notin \CB{1, 2, \dots, h}}} P( AA_{\d{j_1}}A_{\d{j_2}} A_{\d{j_3}} ) - \cdots
\end{align}$$
Then followed by $P(A\cap B) = P(A) - P(AB^c) = P(A_1 \cap A_2 \cap \cdots \cap A_h) - P(AB^c)$, we can approach our final generalized answer,
$\;\;\;\;
\begin{align}
P\CB{X=k} &= \sum_{\d{i_1 <\dots<i_k}} \left[ P\left(A_{\d{i_1}} \cap A_{\d{i_2}} \cap \cdots \cap A_{\d{i_k}}\right) - \sum _{\d{j \notin \CB{i_1,\dots, i_k}}} P\left(A_{\d{i_1}} \cap A_{\d{i_2}} \cap \cdots \cap A_{\d{i_k}} \cap A_{\d{j}}\right)\right. \\[0.8em]
&\;\;\;\;+ \sum_{\d{j_1 < j_2 \notin \CB{i_1,\dots,i_k}}} P\left( A_{\d{i_1}} \cap A_{\d{i_2}} \cap \cdots \cap A_{\d{i_k}} \cap A_{\d{j_1}} \cap A_{\d{j_2}} \right) \\
&\;\;\;\;- \left.\sum_{\d{j_1 < j_2 < j_3 \notin \CB{i_1,\dots,i_k}}} P\left( A_{\d{i_1}} \cap A_{\d{i_2}} \cap \cdots \cap A_{\d{i_k}} \cap A_{\d{j_1}} \cap A_{\d{j_2}} \cap A_{\d{j_3}} \right)+\cdots \right]
\end{align}$
Kinda complex, how to simplify this expression?
First note that $S_k = \sum\limits_{\d{i_1 <\dots<i_k}} P\left( A_{\d{i_1}} \cap A_{\d{i_2}} \cap \cdots \cap A_{\d{i_k}} \right)$. Now consider
$$\sum_{\d{i_1 < \cdots <i_k}} \; \sum_{\d{j \notin \CB{i_1,\dots,i_k}}}P\left( A_{\d{i_1}} \cap A_{\d{i_2}} \cap \cdots \cap A_{\d{i_k}} \cap A_j \right)$$
There is repetition here: the $k+1$ distinct events are chosen in two steps. Writing them as $A_{\d{m_1}} \cap A_{\d{m_2}} \cap \cdots \cap A_{\d{m_{k+1}}}$, it is easy to see that the probability of every such intersection actually appears $\d{\binom{k+1}{k}}$ times in this multiple summation. Hence:
$\;\;\;\;
\begin{align}
&\sum_{\d{i_1 < \cdots <i_k}} \; \sum_{\d{j \notin \CB{i_1,\dots,i_k}}}P\left( A_{\d{i_1}} \cap A_{\d{i_2}} \cap \cdots \cap A_{\d{i_k}} \cap A_j \right) \\
=& \binom{k+1}{k} \sum_{\d{m_1 < \cdots < m_{k+1}}} P\left(A_{\d{m_1}} \cap A_{\d{m_2}} \cap \cdots \cap A_{\d{m_{k+1}}}\right)\\
=& \binom{k+1}{k} S_{k+1} \\[1em]
&\texttt{So much easier!}
\end{align}$
Similarly, we can say
$\;\;\;\;P\CB{X=k} = S_k - \d{\binom{k+1} {k}}S_{k+1} + \cdots + (-1)^{n-k}\d{\binom{n} {k}}S_n = \d{\sum_{j=k}^{n}} (-1)^{k+j} \d{\binom{j} {k}} S_j$
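$Remark$ (brute-force check)
Because the identity holds for any probability measure, it can be checked directly on an empirical sample with a handful of arbitrary events; the snippet below is only an illustration with three overlapping events.
```python
# Check P{X=k} = sum_j (-1)^(k+j) C(j,k) S_j on an empirical sample (illustrative)
import numpy as np
from itertools import combinations
from math import comb          # Python 3.8+

rng = np.random.default_rng(2)
u = rng.random(200_000)
events = [u < 0.5, (u > 0.3) & (u < 0.8), (u > 0.6) | (u < 0.1)]   # arbitrary overlapping events
n = len(events)

# S_k: sum of the empirical probabilities of all k-fold intersections
S = {k: sum(np.mean(np.logical_and.reduce(list(c)))
            for c in combinations(events, k)) for k in range(1, n + 1)}

X = np.sum(events, axis=0)     # number of events that occur
for k in range(1, n + 1):
    direct  = np.mean(X == k)
    formula = sum((-1)**(k + j) * comb(j, k) * S[j] for j in range(k, n + 1))
    print(k, round(direct, 4), round(float(formula), 4))   # the two columns should agree
```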
Using this we will now prove that $P\CB{X \geq k} = \d{\sum_{j=k}^{n} (-1)^{k+j} \binom{j-1}{k-1}S_j}$. We will use a backwards mathematical induction that starts with $k=n$. Now, when $k=n$ the preceding identity states that
$$P\CB{X=n} = S_n$$
First step finished! So we assume that $P\CB{X \geq k+1} = \d{\sum_{j=k+1}^{n} (-1)^{k+1+j} \binom{j-1} {k}S_j}$. And then
$$\begin{align}
P\CB{X \geq k} &= P\CB{X = k} + P\CB{X \geq k+1} \\[0.8em]
&= \left[\sum_{j=k}^{n} (-1)^{k+j} \binom{j} {k} S_j\right] + \left[ \sum_{j=k+1}^{n} (-1)^{k+1+j} \binom{j-1} {k}S_j \right]\\
&= S_k + \left[ \sum_{j=k+1}^{n} (-1)^{k+j} \left[\binom{j} {k} - \binom{j-1} {k} \right] S_j \right] \\
&= S_k + \sum_{j=k+1}^{n} (-1)^{k+j} \binom{j-1} {k-1}S_j = \sum_{j=k}^{n} (-1)^{k+j} \binom{j-1} {k-1}S_j
\end{align}$$
All done!
# Limit Theorems
First we prove the **Markov's inequality**.
$Proposition$ ***Markov's inequality***
If $X$ is a $r.v.$ that takes only *nonnegative* values, then for any value $a > 0$,
$$P\CB{X \geq a} \leq \ffrac{\EE{X}} {a}$$
$Proof$
This proof is for the case where $X$ is continuous with density $f$.
$$\begin{align}
\EE{X} &= \int_{0}^{\infty} x\cdot f(x) \;\dd{x}\\
&= \int_{0}^{a} x\cdot f(x) \;\dd{x} + \int_{a}^{\infty} x\cdot f(x) \;\dd{x}\\
&\geq \int_{a}^{\infty} x\cdot f(x) \;\dd{x} \geq \int_{a}^{\infty} a\cdot f(x)\;\dd{x} \\
&= a \cdot P\CB{X \geq a}
\end{align}$$
$Remark$
From the process of proving it, we can easily find that this holds for discrete $r.v.$, and a slightly different result can be made if the $r.v.$ only takes nonpositive values.
$Proposition$ ***Chebyshev's Inequality***
If $X$ is a $r.v.$ with mean $\mu$ and variance $\sigma^2$, then, for any value $k > 0$,
$$P \CB{\left|X - \mu \right| \geq k} \leq \ffrac{\sigma^2} {k^2}$$
$Proof$
Since $\left(X - \mu\right)^2$ is a nonnegative $r.v.$, we can apply the previous proposition, the **Markov's inequality** (with $a = k^2$) to obtain:
$$P\CB{\left(X - \mu\right)^2 \geq k^2} = P\CB{\left|X - \mu\right| \geq k} \leq \ffrac{\EE{\left(X-\mu\right)^2}} {k^2}$$
$Remark$
These two propositions are important because they provide methods to bound a probability when only limited information is available, such as the mean, or the mean and the variance.
**e.g.**
The number of items produced in a factory during a week is a $r.v.$ with *mean* $500$.
What's the probability that this week's production will be at least $1000$?
> Let $X$ be the number of items that will be produced in a week.
>
>$P\CB{X \geq 1000} \leq\ffrac{\EE{X}} {1000} = \ffrac{500} {1000} = 0.5$
If the variance is also given with value $100$, then what's the probability that this week's production will be between $400$ and $600$?
>$P\CB{|X-500| \geq 100} \leq \ffrac{\sigma^2} {100^2} = \ffrac{1} {100}$, hence, $P\CB{400 < X < 600} \geq 1 - \ffrac{1} {100} = \ffrac{99} {100}$.
***
$Theorem$ ***Strong Law of Large Numbers***
Let $X_1,X_2,\dots$ be a sequence of *independent* $r.v.$s having a common distribution, and let $\EE{X_i} = \mu$. Then, **with probability** $1$ (later shortened as $wp1$),
$$\lim_{n \to \infty}\ffrac{X_1 + X_2 + \cdots + X_n} {n} = \mu$$
$Theorem$ ***Central Limit Theorem***
Let $X_1,X_2,\dots$ be a sequence of $i.i.d.$ $r.v.$, each with mean $\mu$ and variance $\sigma^2$. Then
$$\lim_{n \to \infty} P\CB{\ffrac{X_1 + X_2 + \cdots + X_n - n \mu} {\sigma \sqrt{n}} \leq a} = \ffrac{1} {\sqrt{2 \pi}} \int_{-\infty}^{a} e^{-x^2/2} \;\dd{x}$$
$Remark$
This holds for *any* distribution of the $X_i$s! Herein lies its power! Consider a **binomially** distributed $r.v.$ $X$ with parameters $n$ and $p$. Then $X$ can be seen as the sum of $n$ independent **Bernoulli** $r.v.$s, each with parameter $p$. Hence the distribution of
$$\ffrac{X- \EE{X}} {\sqrt{\Var{X}}} = \ffrac{X - np} {\sqrt{np(1-p)}}$$
approaches the **standard normal** distribution as $n$ approaches $\infty$. This normal approximation will generally be quite good for values of $n$ satisfying $np(1-p) \geq 10$. See the next example~
**e.g.** From **Binomial** to **Normal**
$X$ is the number of times that a fair coin, flipped $40$ times, lands *heads*. What's the probability that $X = 20$? How's the normal approximation comparing to the exact solution?
> How to approximate a discrete $r.v.$ using a continuous $r.v.$? Here's the *trick*.
>
>$$
\begin{align}
P\CB{X=20} &= P\CB{19.5 < X < 20.5} \\
&= P\CB{\ffrac{19.5-20} {\sqrt{10}} < \ffrac{X - 20} {\sqrt{10}} < \ffrac{20.5 - 20} {\sqrt{10}}} \\[0.6em]
&= P\CB{-0.16 < Z < 0.16} \\[0.7em]
& \mathbf{\approx} \Phi(0.16) - \Phi(-0.16)
\end{align}$$
$Remark$
Here, $\Phi(z)$ is the probability that the **standard normal** is less than $z$ and is given by
$$\Phi(z) = \ffrac{1} {\sqrt{2\pi}} \int_{-\infty}^{z} e^{-x^2/2}\;\dd{x} $$
>Thus by the symmetry of the **standard normal** distribution: $\Phi(-0.16) = P\CB{\N{0,1} > 0.16} = 1 - \Phi(0.16)$, where $\N{0,1}$ is a **standard normal** $r.v.$. Hence the answer is
>$$P\CB{X = 20} \approx 2\Phi(0.16) - 1 = 0.1272$$
>Then, the exact result is $P\CB{X=20} = \d{\binom{40} {20}\left(\frac{1} {2}\right)^{40}} = 0.1268$
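The comparison can also be reproduced with SciPy; the small check below simply recomputes both numbers and is only an illustration.
```python
# Normal approximation (with continuity correction) vs. exact binomial probability
from scipy import stats

n, p = 40, 0.5
exact  = stats.binom.pmf(20, n, p)
mu, sd = n * p, (n * p * (1 - p)) ** 0.5
approx = stats.norm.cdf(20.5, mu, sd) - stats.norm.cdf(19.5, mu, sd)
print(exact, approx)    # the two values should agree to roughly two decimal places
```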
The following will be a heuristic proof of the ***CLT***, central limit theorem. We first suppose that $X_i$ have mean $0$ and variance $1$, and their common mgf is $\EE{e^{tX}}$. Then the mgf of $\ffrac{\sum X_i} {\sqrt{n}}$ is:
$$\begin{align}
\EE{\exp\CB{t\cdot\left( \ffrac{X_1+\cdots+X_n} {\sqrt{n}} \right)}} &= \EE{e^{tX_1/\sqrt{n}} \cdots e^{tX_n/ \sqrt{n}}} \\
&= \left( \EE{e^{tX/\sqrt{n}}} \right)^n
\end{align}$$
From the **Taylor series expansion** of $e^y$, for large $n$ we have, approximately,
$$e^{tX/\sqrt{n}} \approx 1 + \ffrac{tX} {\sqrt{n}} + \ffrac{t^2X^2} {2n} $$
and since $\EE{X} = 0$ and $\EE{X^2} = 1$, we have
$$
\begin{align}
\EE{\exp\CB{t\cdot\left( \ffrac{X_1+\cdots+X_n} {\sqrt{n}} \right)}} &= \left( \EE{e^{tX/\sqrt{n}}} \right)^n \\
&\approx \left(1 + \ffrac{t^2} {2n}\right)^n \\
&\to e^{t^2/2} \;\;\;\;\text{as } n \to \infty
\end{align}$$
This is the mgf of a **standard normal** $r.v.$ (mean $0$ and variance $1$). So we can already say that the distribution of the $r.v.$ $\ffrac{X_1 + \cdots + X_n} {\sqrt{n}}$ converges to the **standard normal** distribution function $\Phi$.
And then when $X_i$ have mean $\mu$ and variance $\sigma^2$, we convert them to $\ffrac{X_i - \mu} {\sigma}$ with mean $0$ and $1$. Thus the preceding shows that:
$$P\CB{\ffrac{X_1 - \mu + \cdots +X_n - \mu} {\sigma\sqrt{n}} \leq a} \to \Phi(a)$$
which proves the **CLT**.
# Stochastic Processes
A ***stochastic process*** $\CB{X(t), t \in T}$ is a *collection* of $r.v.$s. The index $t$ is often interpreted as ***time*** and, as a result, we refer to $X(t)$ as the ***state*** of the process at time $t$. The collection must contain *infinitely many* elements.
The set $T$ is the ***index set*** of the process. When $T$ is a countable set, the stochastic process is said to be a ***discrete-time process***. And if $T$ is an interval of the real line, the stochastic process is said to be a ***continuous-time process***.
The ***state space*** of a stochastic process is the set of *ALL possible values* that the $r.v.$ $X(t)$ can assume.
- State Space and Time Parameter are both Discrete (Random Walk)
- State Space is continuous and Time Parameter is Discrete (Common)
- State Space is Discrete and Time Parameter is continuous (Poisson Process)
- State Space and Time Parameter are both Continuous (Brownian Motion Process)
**e.g.**
Consider a particle that moves along a set of $m+1$ nodes, labeled from $0$ to $m$, that are arranged around a circle. At each step the particle is equally likely to move one position in either the clockwise or counterclockwise direction. That is, if $X_n$ is the position of the particle after its $n\texttt{th}$ step, then
$$P\CB{X_{n+1} = i+1 \mid X_n = i} = P \CB{X_{n+1} = i-1 \mid X_n = i} = \ffrac{1} {2}$$
where we let $i+1=0$ when $i=m$ and $i-1=m$ when $i=0$. Now the particle starts at $0$ and continues to move around according to the preceding rules until all the nodes have been visited. What is the probability that node $i$ is the last one visited?
> Consider the first time that the particle is at one of the two neighbors of node $i$ (assume $i \neq 0$), say, node $i − 1$. Since neither node $i$ nor $i+1$ has yet been visited, it follows that $i$ will be the last node visited $iff$ $i+1$ is visited before $i$. This is the probability that the particle will progress $m − 1$ steps in a specified direction before progressing one step in the other direction.
>That is, it is equal to the probability that a gambler who starts with one unit, and wins one when a fair coin turns up heads and loses one when it turns up tails, will have his fortune go up by $m − 1$ before he goes broke. Hence, because the preceding $\QQQ$ implies that the probability that node $i$ is the last node visited is the same for all $i$, and because these probabilities must sum to $1$, we obtain
>$$P \CB{ i \text{ is the last node visited}} = 1/ m$$
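$Remark$ (simulation)
The $1/m$ answer is easy to test with a short simulation; the sketch below uses $m=5$ and a modest number of trials, both arbitrary choices.
```python
# Simulate the circular random walk and estimate P{node i is visited last}
import numpy as np

rng = np.random.default_rng(3)
m, trials = 5, 20_000                  # m+1 nodes labeled 0..m
last_counts = np.zeros(m + 1)

for _ in range(trials):
    pos, visited = 0, {0}
    while len(visited) < m + 1:
        pos = int((pos + rng.choice((-1, 1))) % (m + 1))
        visited.add(pos)
    last_counts[pos] += 1              # the node completing the set was visited last

print(last_counts[1:] / trials)        # each entry should be close to 1/m = 0.2
```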
$Remark$
Consider that gambler again. The probability that he goes down $n$ before being up $1$ is $\QQQ$ $1/(n+1)$; or equivalently,
$$P\CB{\text{gambler is up }1\text{ before being down }n} = \ffrac{n} {n+1}$$
Then:
$$
\begin{align}
& P\CB{\text{gambler is up }2\text{ before being down }n} \\
=& P\CB{\text{up }2\text{ before down }n \mid \text{up }1\text{ before down }n} \cdot \ffrac{n} {n+1} \\
=& P\CB{\text{up }2\text{ before down }n+1}\cdot \ffrac{n} {n+1} \\
=& \ffrac{n+1} {n+2}\ffrac{n} {n+1} = \ffrac{n} {n+2}
\end{align}$$
Repeating this argument yields that
$$P\CB{\text{gambler is up }k\text{ before being down }n} = \ffrac{n} {n+k}$$
# Constraint Satisfaction Problems Lab
## Introduction
Constraint Satisfaction is a technique for solving problems by expressing limits on the values of each variable in the solution with mathematical constraints. We've used constraints before -- constraints in the Sudoku project are enforced implicitly by filtering the legal values for each box, and the planning project represents constraints as arcs connecting nodes in the planning graph -- but in this lab exercise we will use a symbolic math library to explicitly construct binary constraints and then use Backtracking to solve the N-queens problem (which is a generalization [8-queens problem](https://en.wikipedia.org/wiki/Eight_queens_puzzle)). Using symbolic constraints should make it easier to visualize and reason about the constraints (especially for debugging), but comes with a performance penalty.
Briefly, the 8-queens problem asks you to place 8 queens on a standard 8x8 chessboard such that none of the queens are in "check" (i.e., no two queens occupy the same row, column, or diagonal). The N-queens problem generalizes the puzzle to to any size square board.
## I. Lab Overview
Students should read through the code and the wikipedia page (or other resources) to understand the N-queens problem, then:
0. Complete the warmup exercises in the [Sympy_Intro notebook](Sympy_Intro.ipynb) to become familiar with they sympy library and symbolic representation for constraints
0. Implement the [NQueensCSP class](#II.-Representing-the-N-Queens-Problem) to develop an efficient encoding of the N-queens problem and explicitly generate the constraints bounding the solution
0. Write the [search functions](#III.-Backtracking-Search) for recursive backtracking, and use them to solve the N-queens problem
0. (Optional) Conduct [additional experiments](#IV.-Experiments-%28Optional%29) with CSPs and various modifications to the search order (minimum remaining values, least constraining value, etc.)
```python
import matplotlib as mpl
import matplotlib.pyplot as plt
from util import constraint, displayBoard
from sympy import *
from IPython.display import display
init_printing()
%matplotlib inline
```
## II. Representing the N-Queens Problem
There are many acceptable ways to represent the N-queens problem, but one convenient way is to recognize that one of the constraints (either the row or column constraint) can be enforced implicitly by the encoding. If we represent a solution as an array with N elements, then each position in the array can represent a column of the board, and the value at each position can represent which row the queen is placed on.
In this encoding, we only need a constraint to make sure that no two queens occupy the same row, and one to make sure that no two queens occupy the same diagonal.
### Define Symbolic Expressions for the Problem Constraints
Before implementing the board class, we need to construct the symbolic constraints that will be used in the CSP. Declare any symbolic terms required, and then declare two generic constraint generators:
- `diffRow` - generate constraints that return True if the two arguments do not match
- `diffDiag` - generate constraints that return True if two arguments are not on the same diagonal (Hint: you can easily test whether queens in two columns are on the same diagonal by testing if the difference in the number of rows and the number of columns match)
Both generators should produce binary constraints (i.e., each should have two free symbols) once they're bound to specific variables in the CSP. For example, Eq((a + b), (b + c)) is not a binary constraint, but Eq((a + b), (b + c)).subs(b, 1) _is_ a binary constraint because one of the terms has been bound to a constant, so there are only two free variables remaining.
```python
# Declare any required symbolic variables
R1, R2, Rdiff = symbols("R1 R2 Rdiff")
"TODO: declare symbolic variables for the constraint generators"
# Define diffRow and diffDiag constraints
"TODO: create the diffRow and diffDiag constraint generators"
diffRow = constraint("Row", Ne(R1, R2))
diffDiag = constraint("Diag", Ne(Abs(R1-R2),Rdiff))
```
```python
# Test diffRow and diffDiag
_x = symbols("x:3")
# generate a diffRow instance for testing
"TODO: use your diffRow constraint to generate a diffRow constraint for _x[0] and _x[1]"
diffRow_test = diffRow.subs({R1: _x[0], R2: _x[1]})
assert(len(diffRow_test.free_symbols) == 2)
assert(diffRow_test.subs({_x[0]: 0, _x[1]: 1}) == True)
assert(diffRow_test.subs({_x[0]: 0, _x[1]: 0}) == False)
assert(diffRow_test.subs({_x[0]: 0}) != False) # partial assignment is not false
print("Passed all diffRow tests.")
# generate a diffDiag instance for testing
"TODO: use your diffDiag constraint to generate a diffDiag constraint for _x[0] and _x[2]"
diffDiag_test = diffDiag.subs({R1: _x[0], R2: _x[2], Rdiff: 2})
assert(len(diffDiag_test.free_symbols) == 2)
assert(diffDiag_test.subs({_x[0]: 0, _x[2]: 2}) == False)
assert(diffDiag_test.subs({_x[0]: 0, _x[2]: 0}) == True)
assert(diffDiag_test.subs({_x[0]: 0}) != False) # partial assignment is not false
print("Passed all diffDiag tests.")
```
Passed all diffRow tests.
Passed all diffDiag tests.
### The N-Queens CSP Class
Implement the CSP class as described above, with constraints to make sure each queen is on a different row and different diagonal than every other queen, and a variable for each column defining the row that containing a queen in that column.
```python
class NQueensCSP:
"""CSP representation of the N-queens problem
Parameters
----------
N : Integer
The side length of a square chess board to use for the problem, and
the number of queens that must be placed on the board
"""
    def __init__(self, N):
        "TODO: declare symbolic variables in self._vars in the CSP constructor"
        _rows = symbols("R:" + str(N))
        _domain = set(range(N))
        self.size = N
        self._vars = _rows
        self.variables = _rows          # used by show(), select(), and the driver cell below
        self.domains = {v: _domain for v in _rows}
        self._constraints = {x: set() for x in _rows}
        # add constraints - for each pair of variables xi and xj, create
        # a diffRow(xi, xj) and a diffDiag(xi, xj) instance, and add them
        # to the self._constraints dictionary keyed to both xi and xj;
        # (i.e., add them to both self._constraints[xi] and self._constraints[xj])
        "TODO: add constraints in self._constraints in the CSP constructor"
        for i in range(N):
            for j in range(i + 1, N):   # visit each unordered pair once; skip i == j
                xi, xj = _rows[i], _rows[j]
                dR = diffRow.subs({R1: xi, R2: xj})
                dD = diffDiag.subs({R1: xi, R2: xj, Rdiff: abs(i - j)})
                self._constraints[xi].add(dR)
                self._constraints[xj].add(dR)
                self._constraints[xi].add(dD)
                self._constraints[xj].add(dD)
@property
def constraints(self):
"""Read-only list of constraints -- cannot be used for evaluation """
constraints = set()
for _cons in self._constraints.values():
constraints |= _cons
return list(constraints)
def is_complete(self, assignment):
"""An assignment is complete if it is consistent, and all constraints
are satisfied.
Hint: Backtracking search checks consistency of each assignment, so checking
for completeness can be done very efficiently
Parameters
----------
assignment : dict(sympy.Symbol: Integer)
An assignment of values to variables that have previously been checked
for consistency with the CSP constraints
"""
if len(assignment) == self.size:
return True
else:
return False
def is_consistent(self, var, value, assignment):
"""Check consistency of a proposed variable assignment
self._constraints[x] returns a set of constraints that involve variable `x`.
An assignment is consistent unless the assignment it causes a constraint to
return False (partial assignments are always consistent).
Parameters
----------
var : sympy.Symbol
One of the symbolic variables in the CSP
value : Numeric
A valid value (i.e., in the domain of) the variable `var` for assignment
assignment : dict(sympy.Symbol: Integer)
A dictionary mapping CSP variables to row assignment of each queen
"""
        # an assignment is consistent unless some constraint involving `var`
        # evaluates to False once the proposed value (and any prior values) are substituted
        assn = assignment.copy()
        assn[var] = value
        for const in self._constraints[var]:
            if const.subs(assn) == False:
                return False
        return True
def inference(self, var, value):
"""Perform logical inference based on proposed variable assignment
Returns an empty dictionary by default; function can be overridden to
check arc-, path-, or k-consistency; returning None signals "failure".
Parameters
----------
var : sympy.Symbol
One of the symbolic variables in the CSP
value : Integer
A valid value (i.e., in the domain of) the variable `var` for assignment
Returns
-------
dict(sympy.Symbol: Integer) or None
A partial set of values mapped to variables in the CSP based on inferred
constraints from previous mappings, or None to indicate failure
"""
# TODO (Optional): Implement this function based on AIMA discussion
return {}
def show(self, assignment):
"""Display a chessboard with queens drawn in the locations specified by an
assignment
Parameters
----------
assignment : dict(sympy.Symbol: Integer)
A dictionary mapping CSP variables to row assignment of each queen
"""
locations = [(i, assignment[j]) for i, j in enumerate(self.variables)
if assignment.get(j, None) is not None]
displayBoard(locations, self.size)
```
## III. Backtracking Search
Implement the [backtracking search](https://github.com/aimacode/aima-pseudocode/blob/master/md/Backtracking-Search.md) algorithm (required) and helper functions (optional) from the AIMA text.
```python
def select(csp, assignment):
"""Choose an unassigned variable in a constraint satisfaction problem """
# TODO (Optional): Implement a more sophisticated selection routine from AIMA
for var in csp.variables:
if var not in assignment:
return var
return None
def order_values(var, assignment, csp):
"""Select the order of the values in the domain of a variable for checking during search;
the default is lexicographically.
"""
# TODO (Optional): Implement a more sophisticated search ordering routine from AIMA
return csp.domains[var]
def backtracking_search(csp):
"""Helper function used to initiate backtracking search """
return backtrack({}, csp)
def backtrack(assignment, csp):
"""Perform backtracking search for a valid assignment to a CSP
Parameters
----------
assignment : dict(sympy.Symbol: Integer)
An partial set of values mapped to variables in the CSP
csp : CSP
A problem encoded as a CSP. Interface should include csp.variables, csp.domains,
csp.inference(), csp.is_consistent(), and csp.is_complete().
Returns
-------
dict(sympy.Symbol: Integer) or None
A partial set of values mapped to variables in the CSP, or None to indicate failure
"""
    # plain recursive backtracking; the optional inference()/MRV/LCV improvements are left out
    if csp.is_complete(assignment):
        return assignment
    var = select(csp, assignment)
    for value in order_values(var, assignment, csp):
        if csp.is_consistent(var, value, assignment):
            assignment[var] = value
            result = backtrack(assignment, csp)
            if result is not None:
                return result
            del assignment[var]
    return None
```
### Solve the CSP
With backtracking implemented, now you can use it to solve instances of the problem. We've started with the classical 8-queens version, but you can try other sizes as well. Boards larger than 12x12 may take some time to solve because sympy is slow in the way it's being used here, and because the selection and value ordering methods haven't been implemented. See if you can implement any of the techniques in the AIMA text to speed up the solver!
```python
num_queens = 8
csp = NQueensCSP(num_queens)
var = csp.variables[0]
print("CSP problems have variables, each variable has a domain, and the problem has a list of constraints.")
print("Showing the variables for the N-Queens CSP:")
display(csp.variables)
print("Showing domain for {}:".format(var))
display(csp.domains[var])
print("And showing the constraints for {}:".format(var))
display(csp._constraints[var])
print("Solving N-Queens CSP...")
assn = backtracking_search(csp)
if assn is not None:
csp.show(assn)
print("Solution found:\n{!s}".format(assn))
else:
print("No solution found.")
```
## IV. Experiments (Optional)
For each optional experiment, discuss the answers to these questions on the forum: Do you expect this change to be more efficient, less efficient, or the same? Why or why not? Is your prediction correct? What metric did you compare (e.g., time, space, nodes visited, etc.)?
- Implement a _bad_ N-queens solver: generate & test candidate solutions one at a time until a valid solution is found. For example, represent the board as an array with $N^2$ elements, and let each element be True if there is a queen in that box, and False if it is empty. Use an $N^2$-bit counter to generate solutions, then write a function to check if each solution is valid. Notice that this solution doesn't require any of the techniques we've applied to other problems -- there is no DFS or backtracking, nor constraint propagation, or even explicitly defined variables. (A minimal generate-and-test sketch, using a simplified representation, appears after this list.)
- Use more complex constraints -- i.e., generalize the binary constraint RowDiff to an N-ary constraint AllRowsDiff, etc., -- and solve the problem again.
- Rewrite the CSP class to use forward checking to restrict the domain of each variable as new values are assigned.
- The sympy library isn't very fast, so this version of the CSP doesn't work well on boards bigger than about 12x12. Write a new representation of the problem class that uses constraint functions (like the Sudoku project) to implicitly track constraint satisfaction through the restricted domain of each variable. How much larger can you solve?
- Create your own CSP!
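As referenced in the first bullet above, here is a minimal sketch of the generate-and-test idea. It uses a simpler one-queen-per-column representation rather than the full $N^2$-bit counter described in the bullet, so treat it as an illustration of the brute-force spirit only; it is practical only for very small $N$.

```python
# Naive generate & test N-queens solver (brute force; only feasible for small N)
from itertools import product

def brute_force_nqueens(N):
    # rows[c] = row index of the queen placed in column c
    for rows in product(range(N), repeat=N):
        ok = all(rows[a] != rows[b] and abs(rows[a] - rows[b]) != b - a
                 for a in range(N) for b in range(a + 1, N))
        if ok:
            return rows
    return None

print(brute_force_nqueens(6))   # first valid placement found, as rows per column
```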
# Section 5.6 $\quad$ Least Squares
**Recall** An $m\times n$ linear system $A\mathbf{x}=\mathbf{b}$ is consistent if and only if <br /><br /><br /><br />
**Question** What can we do if the system $A\mathbf{x}=\mathbf{b}$ is inconsistent?<br /><br /><br /><br />
>The **least square solution** to the linear system $A\mathbf{x}=\mathbf{b}$ is the solution to the system<br /><br /><br /><br />
**Remark** If $A$ is an $m\times n$ matrix,<br /><br /><br /><br />
### Example 1
Determine the least square solution to $A\mathbf{x}=\mathbf{b}$, where
\begin{equation*}
A =
\left[
\begin{array}{cc}
2 & 1 \\
1 & 0 \\
0 & -1 \\
-1 & 1 \\
\end{array}
\right],~~~~~~
\mathbf{b} =
\left[
\begin{array}{c}
3 \\
1 \\
2 \\
-1\\
\end{array}
\right].
\end{equation*}
```python
from sympy import *
A = Matrix([[2, 1], [1, 0], [0, -1], [-1, 1]]);
b = Matrix([3, 1, 2, -1]);
A.LDLsolve(b)
```
Matrix([
[24/17],
[-8/17]])
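The same solution can be cross-checked numerically through the normal equations $A^TA\,\hat{\mathbf{x}} = A^T\mathbf{b}$; the short check below is an illustration using NumPy rather than SymPy and should reproduce $24/17 \approx 1.4118$ and $-8/17 \approx -0.4706$.

```python
# Cross-check of Example 1 via the normal equations and numpy.linalg.lstsq
import numpy as np

A = np.array([[2, 1], [1, 0], [0, -1], [-1, 1]], dtype=float)
b = np.array([3, 1, 2, -1], dtype=float)

x_normal = np.linalg.solve(A.T @ A, A.T @ b)      # solve A^T A x = A^T b
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)   # library least-squares routine

print(x_normal)   # ~ [ 1.4118 -0.4706]
print(x_lstsq)    # should match
```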
Least squares problems often arise in constructing a mathematical model from discrete data.
### Example 2
The following data shows U.S. per capita health care expenditures
Year | Per Capita Expenditures (in \$)
-----|------
1960 | $\qquad\qquad$ 143
1970 | $\qquad\qquad$ 348
1980 | $\qquad\qquad$ 1,067
1990 | $\qquad\qquad$ 2,738
1995 | $\qquad\qquad$ 3,698
2000 | $\qquad\qquad$ 4,560
- Determine the line of best fit to the given data.
- Predict the per capita expenditure for the year 2005, 2010, and 2015.
```python
from sympy import *
import numpy as np
import matplotlib.pyplot as plt
A = Matrix([[1960, 1], [1970, 1], [1980, 1], [1990, 1], [1995, 1], [2000, 1]]);
b = Matrix([143, 348, 1067, 2738, 3698, 4560]);
linePara = A.LDLsolve(b);
plt.xlabel('Year');
plt.ylabel('Per Capita Expenditures (in $)');
plt.title('U.S. per capita health care expenditures');
plt.plot(A[:,0], b, 'o', label = 'Data');
x = np.linspace(1950, 2030, 1000);
y = x * linePara[0] + linePara[1];
plt.plot(x, y, label = 'Prediction');
plt.legend();
plt.show()
2005*linePara[0] + linePara[1], 2010*linePara[0] + linePara[1], 2015*linePara[0] + linePara[1]
```
(14026/3, 15748/3, 17470/3)
<a href="https://colab.research.google.com/github/Astraplas/LinearAlgebra_2ndSem/blob/main/Assignment_5.ipynb" target="_parent"></a>
# Linear Algebra for CHE
## Laboratory 6 : Matrix Operations
Now that you have a fundamental knowledge about representing and operating with vectors as well as the fundamentals of matrices, we'll try to do the same operations with matrices and even more.
## Objectives
At the end of this activity you will be able to:
1. Be familiar with the fundamental matrix operations.
2. Apply the operations to solve intemrediate equations.
3. Apply matrix algebra in engineering solutions.
# Discussion
The codes below will serve as the foundation to be used in the matrix operation codes to be created.
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
# Transposition
One of the fundamental operations in matrix algebra is Transposition. The transpose of a matrix is done by flipping the values of its elements over its main diagonal. With this, the rows and columns from the original matrix will be switched. So for a matrix $A$ its transpose is denoted as $A^T$. So for example:
:$$
A=\begin{bmatrix} 1 & 2 & 5 \\ 5 & -1 & 0 \\ 0 & -3 & 3\end{bmatrix} \\
A^T=\begin{bmatrix} 1 & 5 & 0\\ 2 & -1 & -3\\ 5 & 0 & 3\end{bmatrix}
$$
This can now be achieved programmatically by using np.transpose() or using the T method.
```python
A = np.array([
[1 ,2, 5],
[5, -1, 0],
[0, -3, 3]
])
A
```
array([[ 1, 2, 5],
[ 5, -1, 0],
[ 0, -3, 3]])
```python
AT1 = np.transpose(A)
AT1
```
array([[ 1, 5, 0],
[ 2, -1, -3],
[ 5, 0, 3]])
```python
AT2 = A.T
```
```python
np.array_equiv(AT1, AT2)
```
True
```python
B = np.array([
[40,13,55,32],
[32,12,64,21],
])
B.shape
```
(2, 4)
```python
np.transpose(B).shape
```
(4, 2)
```python
B.T.shape
```
(4, 2)
The code above simply interchanges the rows and columns of the given matrices.
##### Try to create your own matrix (you can try non-squares) to test transposition.
```python
```
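For instance, one possible answer to the exercise above, using a non-square $(2,3)$ matrix:

```python
# One possible answer: transpose of a non-square (2x3) matrix
C = np.array([
    [1, 2, 3],
    [4, 5, 6]
])
print(C.shape)      # (2, 3)
print(C.T)          # rows and columns are swapped
print(C.T.shape)    # (3, 2)
```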
## Dot Product / Inner Product
If you recall the dot product from laboratory activity before, we will try to implement the same operation with matrices. In matrix dot product we are going to get the sum of products of the vectors by row-column pairs. So if we have two matrices $X$ and $Y$:
$$X=\begin{bmatrix}
x_{(0,0)}& x_{(0,1)}\\
x_{(1,0)}& x_{(1,1)}\\
\end{bmatrix}
,
Y=\begin{bmatrix}
y_{(0,0)}& y_{(0,1)}\\
y_{(1,0)}& y_{(1,1)}\\
\end{bmatrix}
$$
The dot product will then be computed as:
$$X=\begin{bmatrix}
x_{(0,0)}*y_{(0,0)} + x_{(0,1)}*y_{(1,0)} & x_{(0,0)}*y_{(0,1)} + x_{(0,1)}*y_{(1,1)}\\
x_{(1,0)}*y_{(0,0)} + x_{(1,1)}*y_{(1,0)} & x_{(1,0)}*y_{(0,1)} + x_{(1,1)}*y_{(1,1)}\\
\end{bmatrix}
$$
So if we assign values to $X$ and $Y$:
:$$
X=\begin{bmatrix} 1 & 2\\ 0 & 1\end{bmatrix}
,
Y=\begin{bmatrix} -1 & 0\\ 2 & 2\end{bmatrix}
$$
:$$
X⋅Y=\begin{bmatrix} 1*-1 + 2*2 & 1*0 + 2*2\\ 0*-1 + 1*2 & 0*0 + 1*2\end{bmatrix} =
\begin{bmatrix} 3 & 4\\ 2 & 2\end{bmatrix}
$$
This could be achieved programmatically using np.dot(), np.matmul() or the @ operator.
```python
X = np.array([
[3,4,6],
[1,6,7],
[6,3,0]
])
Y = np.array([
[4,-2,5],
[5,7,-4],
[5,1,-8]
])
```
```python
np.dot(X,Y)
```
array([[ 62, 28, -49],
[ 69, 47, -75],
[ 39, 9, 18]])
```python
X.dot(Y)
```
array([[ 62, 28, -49],
[ 69, 47, -75],
[ 39, 9, 18]])
```python
X @ Y
```
array([[ 62, 28, -49],
[ 69, 47, -75],
[ 39, 9, 18]])
```python
np.matmul(X,Y)
```
array([[ 62, 28, -49],
[ 69, 47, -75],
[ 39, 9, 18]])
In matrix dot products there are additional rules compared with vector dot products. Since vector dot products were just in one dimension there are fewer restrictions. Since now we are dealing with Rank 2 vectors we need to consider some rules:
## Rule 1: The inner dimensions of the two matrices in question must be the same.
So given a matrix $A$ with a shape of $(a,b)$, where $a$ and $b$ are any integers, if we want to do a dot product between $A$ and another matrix $B$, then matrix $B$ should have a shape of $(b,c)$ where $b$ and $c$ are any integers. So given the following matrices:
$$
Q=\begin{bmatrix} 5 & 5 & 5\\ 6 & 7 & -2\end{bmatrix},
R=\begin{bmatrix} 4 & 5 & 6\\ 9 & 0 & 1\end{bmatrix},
S=\begin{bmatrix} 1 & 6\\ 3 & 2\\ 4 & 6\end{bmatrix}
$$
So in this case $Q$ has a shape of $(2,3)$, $R$ has a shape of $(2,3)$, and $S$ has a shape of $(3,2)$. So the matrix pairs that are eligible for a dot product (in that order) are $Q⋅S$ and $R⋅S$.
```python
Q = np.array([
[5, 5, 5],
[6, 7, -2]
])
R = np.array([
[4,5,6],
[9,0,1]
])
S = np.array([
[1,6],
[3,2],
[4,6]
])
print(Q.shape)
print(R.shape)
print(S.shape)
```
(2, 3)
(2, 3)
(3, 2)
```python
Q @ S
```
array([[40, 70],
[19, 38]])
```python
R @ S
```
array([[43, 70],
[13, 60]])
If you would notice the shape of the dot product changed and its shape is not the same as any of the matrices we used. The shape of a dot product is actually derived from the shapes of the matrices used. So recall matrix $A$ with a shape of $(a,b)$ and matrix $B$ with a shape of $(b,c)$, $A⋅B$ should have a shape $(a,c)$.
```python
Q @ R.T
```
array([[75, 50],
[47, 52]])
```python
X = np.array([
[1,2,3,0]
])
Y = np.array([
[1,0,4,-1]
])
print(X.shape)
print(Y.shape)
```
(1, 4)
(1, 4)
```python
R.T @ Q
```
array([[74, 83, 2],
[25, 25, 25],
[36, 37, 28]])
And you can see that when you try to multiply two matrices whose inner dimensions do not match (for example `X @ Y` above, where both have shape $(1,4)$), NumPy raises a ValueError pertaining to the matrix shape mismatch.
## Rule 2: Dot Product has special properties
Dot products are prevalent in matrix algebra, this implies that it has several unique properties and it should be considered when formulation solutions:
1. $A \cdot B \neq B \cdot A$
2. $A \cdot (B \cdot C) = (A \cdot B) \cdot C$
3. $A\cdot(B+C) = A\cdot B + A\cdot C$
4. $(B+C)\cdot A = B\cdot A + C\cdot A$
5. $A\cdot I = A$
6. $A\cdot \emptyset = \emptyset$
I'll be doing just one of the properties and I'll leave the rest to test your skills!
```python
H = np.array([
[5,6,11],
[7,4,10],
[8,7,0]
])
I = np.array([
[7,18,26],
[44,31,0],
[5,5,6]
])
J = np.array([
[3,0,1],
[6,0,2],
[7,8,8]
])
```
```python
H.dot(np.zeros(H.shape))
```
array([[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.]])
```python
l_mat = np.zeros(H.shape)
l_mat
```
array([[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.]])
```python
r_dot_l = H.dot(np.zeros(H.shape))
r_dot_l
```
array([[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.]])
```python
np.array_equal(r_dot_l,l_mat)
```
True
```python
null_mat = np.empty(H.shape, dtype=float)
null = np.array(null_mat,dtype=float)
print(null)
np.allclose(r_dot_l,null)
```
[[0. 0. 0.]
[0. 0. 0.]
[0. 0. 0.]]
True
```python
K = H.dot(I)
L = I.dot(H)
print(K)
print(L)
row1 = len(K);
col1 = len(K[0]);
row2 = len(L);
col2 = len(L[0]);
if(row1 != row2 or col1 != col2):
print("Matrices are not equal");
else:
for i in range(0, row1):
for j in range(0, col1):
if(K[i][j] != L[i][j]):
flag = False;
break;
if(flag):
print("Matrices are equal");
else:
print("Matrices are not equal. Therefore A⋅B ≠ B⋅A");
```
[[354 331 196]
[275 300 242]
[364 361 208]]
[[369 296 257]
[437 388 794]
[108 92 105]]
Matrices are not equal. Therefore A⋅B ≠ B⋅A
```python
v = print(H*(I*J))
w = print((H*I)*J)
if v == w:
print("Thus, A⋅(B⋅C)=(A⋅B)⋅C")
```
[[ 105 0 286]
[1848 0 0]
[ 280 280 0]]
[[ 105 0 286]
[1848 0 0]
[ 280 280 0]]
Thus, A⋅(B⋅C)=(A⋅B)⋅C
```python
a = print(H*(I+J))
b = print(H*I+H*J)
if a == b:
print("Thus, A⋅(B+C)=A⋅B+A⋅C")
```
[[ 50 108 297]
[350 124 20]
[ 96 91 0]]
[[ 50 108 297]
[350 124 20]
[ 96 91 0]]
Thus, A⋅(B+C)=A⋅B+A⋅C
```python
c = print((I+J)*H)
d= print(I*H+J*H)
if c == d:
print("Thus, (B+C)⋅A=B⋅A+C⋅A")
```
[[ 50 108 297]
[350 124 20]
[ 96 91 0]]
[[ 50 108 297]
[350 124 20]
[ 96 91 0]]
Thus, (B+C)⋅A=B⋅A+C⋅A
```python
H.dot(1)
```
array([[ 5, 6, 11],
[ 7, 4, 10],
[ 8, 7, 0]])
```python
H.dot(0)
```
array([[0, 0, 0],
[0, 0, 0],
[0, 0, 0]])
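Note that `*` in the three cells above performs element-wise (Hadamard) multiplication rather than the dot product, and that comparing the return values of `print()` always yields `True` because both are `None`. The cell below re-checks properties 2 to 6 for the actual matrix product `@` by comparing the arrays themselves; it assumes the earlier cells defining `H`, `I`, and `J` have been run.

```python
# Check of dot-product properties 2 to 6 using the matrix product (@) and array comparison
I3   = np.eye(3, dtype=int)           # 3x3 identity matrix
Zero = np.zeros((3, 3), dtype=int)    # 3x3 zero (null) matrix

print(np.array_equal(H @ (I @ J), (H @ I) @ J))     # A.(B.C) = (A.B).C
print(np.array_equal(H @ (I + J), H @ I + H @ J))   # A.(B+C) = A.B + A.C
print(np.array_equal((I + J) @ H, I @ H + J @ H))   # (B+C).A = B.A + C.A
print(np.array_equal(H @ I3, H))                    # A.I = A
print(np.array_equal(H @ Zero, Zero))               # A.0 = 0
```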
The cells above illustrate the special properties of the dot product listed earlier.
## Determinant
A determinant is a scalar value derived from a square matrix. The determinant is a fundamental and important value used in matrix algebra. Although it will not be evident in this laboratory how it can be used practically, it will be greatly used in future lessons.
The determinant of some matrix $A$ is denoted as $det(A)$ or $|A|$. So let's say $A$ is represented as:
$$A=\begin{bmatrix}
a_{(0,0)}& a_{(0,1)}\\
a_{(1,0)}& a_{(1,1)}\\
\end{bmatrix}
$$
We can compute for the determinant as:
$$
|A| = a_{(0,0)} * a_{(1,1)} - a_{(1,0)} * a_{(0,1)}
$$
So if we have $A$ as:
$$A=\begin{bmatrix}
1 & 4\\
0 & 3\\
\end{bmatrix},
|A| = 3
$$
But you might wonder: how about square matrices beyond the shape $(2,2)$? We can approach this problem by using several methods such as co-factor expansion and the minors method. These can be taught in the lecture portion of the laboratory, but we can achieve the strenuous computation of high-dimensional matrices programmatically using Python. We can achieve this by using np.linalg.det().
```python
```
```python
A = np.array([
[58,32],
[71,36]
])
np.linalg.det(A)
```
-184.00000000000006
```python
## Now other mathematics classes would require you to solve this by hand,
## and that is great for practicing your memorization and coordination skills
## but in this class we aim for simplicity and speed so we'll use programming
## but it's completely fine if you want to try to solve this one by hand.
B = np.array([
[54,321,53],
[90,13,10],
[34,192,81]
])
np.linalg.det(B)
```
-1385353.9999999995
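To make the co-factor expansion mentioned above concrete, here is an unoptimized sketch of a recursive determinant function, cross-checked against `np.linalg.det()`. It recomputes minors at every level, so it is only meant for small matrices.

```python
# Recursive co-factor (Laplace) expansion along the first row -- small matrices only
import numpy as np

def det_cofactor(M):
    M = np.asarray(M, dtype=float)
    n = M.shape[0]
    if n == 1:
        return M[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(M, 0, axis=0), j, axis=1)  # drop row 0 and column j
        total += ((-1) ** j) * M[0, j] * det_cofactor(minor)
    return total

B = np.array([[54, 321, 53], [90, 13, 10], [34, 192, 81]])
print(det_cofactor(B), np.linalg.det(B))   # both should be about -1385354
```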
## Inverse
The inverse of a matrix is another fundamental operation in matrix algebra. Determining the inverse of a matrix lets us determine its solvability and its characteristics as a system of linear equations — we'll expand on this in the next module. Another use of the inverse matrix is solving the problem of divisibility between matrices. Although element-wise division exists, division of entire matrices does not; inverse matrices provide a related operation that captures the same concept of "dividing" matrices.
Now to determine the inverse of a matrix we need to perform several steps. So let's say we have a matrix $M$:
$$M=\begin{bmatrix}
1 & 7\\
-3 & 5\\
\end{bmatrix}
$$
First, we need to get the determinant of $M$.
$$
|M| = (1)(5) - (-3)(7) = 26
$$
Next, we need to reform the matrix into the inverse form
$$
M^{-1} = \begin{align} \frac{1}{|M|} \end{align}
\begin{bmatrix}
m_{(1,1)}& -m_{(0,1)}\\
-m_{(1,0)}& m_{(0,0)}\\
\end{bmatrix}
$$
So that will be:
$$M^{-1} = \frac{1}{26} \begin{bmatrix} 5 & -7 \\ 3 & 1\end{bmatrix} = \begin{bmatrix} \frac{5}{26} & \frac{-7}{26} \\ \frac{3}{26} & \frac{1}{26}\end{bmatrix}$$
For higher-dimension matrices you might need to use co-factors, minors, adjugates, and other reduction techinques. To solve this programmatially we can use np.linalg.inv().
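Before relying on `np.linalg.inv()`, the $2\times 2$ formula above can be coded directly; the sketch below mirrors the hand computation for the matrix worked out above and compares it with NumPy's result.

```python
# 2x2 inverse via the adjugate formula, compared with np.linalg.inv
import numpy as np

def inverse_2x2(M):
    a, b = M[0]
    c, d = M[1]
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular; no inverse exists")
    return (1 / det) * np.array([[d, -b], [-c, a]])

M_hand = np.array([[1, 7], [-3, 5]])
print(inverse_2x2(M_hand))       # [[ 5/26 -7/26], [ 3/26  1/26]]
print(np.linalg.inv(M_hand))     # should match
```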
```python
M = np.array([
[17,57],
[23, 45]
])
np.array(M @ np.linalg.inv(M), dtype=int)
```
array([[0, 0],
[0, 0]])
```python
## And now let's test your skills in solving a matrix with high dimensions:
N = np.array([
[12,45,423,121,10,533,553],
[560,255,34,513,524,242,23],
[55,49,220,235,205,10,356],
[61,56,24,24,38,443,15],
[86,426,283,724,12,624,241],
[-5,-165,222,230,-310,356,330],
[224,521,132,246,146,-225,132],
])
N_inv = np.linalg.inv(N)
np.array(N @ N_inv,dtype=int)
```
array([[0, 0, 0, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0, 0],
[0, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 0, 0],
[0, 0, 0, 0, 0, 1, 0],
[0, 0, 0, 0, 0, 0, 1]])
To validate the wether if the matric that you have solved is really the inverse, we follow this dot product property for a matrix $M$:
$$
M⋅M^{-1} = I
$$
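A numerically robust way to perform this validation is `np.allclose()`; note that casting the product to `int`, as in the cells above, truncates values such as $0.999\ldots$ to $0$, which is why those outputs do not look like clean identity matrices. The check below reuses the matrices `M` and `N` defined in the earlier cells.

```python
# Validate M . M^{-1} = I up to floating-point error (reuses M and N from the cells above)
print(np.allclose(M @ np.linalg.inv(M), np.eye(2)))   # True
print(np.allclose(N @ np.linalg.inv(N), np.eye(7)))   # True
```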
```python
squad = np.array([
[1.0, 1.0, 0.5],
[0.7, 0.7, 0.9],
[0.3, 0.3, 1.0]
])
weights = np.array([
[0.2, 0.2, 0.6]
])
p_grade = squad @ weights.T
p_grade
```
array([[0.7 ],
[0.82],
[0.72]])
## Activity
### Task 1
Prove and implement the remaining 6 matrix multiplication properties. You may create your own matrices in which their shapes should not be lower than $(3,3)$. In your methodology, create individual flowcharts for each property and discuss the property you would then present your proofs or validity of your implementation in the results section by comparing your result to present functions from NumPy.
## Conclusion
For your conclusion synthesize the concept and application of the laboratory. Briefly discuss what you have learned and achieved in this activity. Also answer the question: "how can matrix operations solve problems in healthcare?".
Conclusion in the Lab report:
In light of the learnings that the students were able to obtain, it could also be applied to several fields. One of which is that the matrix operation could be used through the healthcare system. In the healthcare system, there is a field that is specialized in treating certain eye illnesses. In this particular field, the use of optics would be prevalent in order to execute a certain diagnosis. With the help of matrices, calculating the reflection and refraction of a certain light would be comparatively easy compared to calculating it through the manual method. Furthermore, it could also be applied when dealing with certain bills that are needed to be paid for healthcare. In here, rectangular arrays of matrices are used by a certain program that would flexibly conduct the calculations quickly and efficiently. With all of this in mind, it is irrefutable that matrix does indeed ease the burden when it comes to solving problems in healthcare. As such, exploring the applications and possibilities of matrices is imperative in order to nurture a better living.
|
79d3c348bba8081f91f7d7c461ab722c655f1649
| 44,498 |
ipynb
|
Jupyter Notebook
|
Assignment_5.ipynb
|
Astraplas/LinearAlgebra_2ndSem
|
6a19a2f2e106e5ba2d995d609d82cde2d0dd83ae
|
[
"Apache-2.0"
] | null | null | null |
Assignment_5.ipynb
|
Astraplas/LinearAlgebra_2ndSem
|
6a19a2f2e106e5ba2d995d609d82cde2d0dd83ae
|
[
"Apache-2.0"
] | null | null | null |
Assignment_5.ipynb
|
Astraplas/LinearAlgebra_2ndSem
|
6a19a2f2e106e5ba2d995d609d82cde2d0dd83ae
|
[
"Apache-2.0"
] | null | null | null | 26.022222 | 1,080 | 0.416266 | true | 5,089 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.90053 | 0.793106 | 0.714216 |
__label__eng_Latn
| 0.983896 | 0.497694 |
# Nonlinear prediction of chaotic dynamical systems
Assume you observe a time series $(y_1, y_2, \dots, y_T)$ that represents a variable of a high dimensional and possibly chaotic dynamical system.
The method proposed by Sugihara and May (Nature, 1990) entails first choosing an embedding dimension, $n$, and then predicting $y_t$ by using past observations $\vec y_p(t) = (y_{t-1}, y_{t-2}, \dots, y_{t-n}) \in \mathbb{R}^n$.
Intuitively, the prediction is obtained by finding a set of vectors $\{ \vec y^1_p, \dots, \vec y^{n+1}_p \}$ in the past (the $n+1$ nearest neighbors of $\vec y_p$), and basing future predictions on what occurred immediately after these past events.
Let's begin with an example.
```python
# simulate Lorenz attractor
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
from mpl_toolkits.mplot3d import Axes3D
# standard parameters
rho=28;sigma=10;beta=8/3
def f(X, t):
x, y, z = X
return sigma * (y - x), x * (rho - z) - y, x * y - beta * z
# initial value
x0 = np.array([-11.40057002, -14.01987468, 27.49928125])
t = np.arange(0.0, 100, 0.01)
lorenz = odeint(f, x0, t)
```
```python
# 3D plot of Lorenz attractor - beautiful!
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot(lorenz[:,0], lorenz[:,1], lorenz[:,2])
plt.axis('off')
plt.xlim([-13,14])
plt.ylim([-20,25])
ax.set_zlim([15,38])
plt.show()
```
```python
# plot only first component
plt.plot(lorenz[:,0])
```
#### How can we predict the future of such a chaotic system, given its past??
Many methods have been proposed that are based on Takens (1981) and Sauer (1991) theorems (see: http://www.scholarpedia.org/article/Attractor_reconstruction)
In detail, the method proposed by Sugihara and May (https://www.nature.com/articles/344734a0) works in three steps:
( 1 ) divide your time series $y_1,\dots, y_T$ into train and test. Use train time series as a library of past patterns, i.e., by computing past vectors $\vec y_p(t) = (y_{t-1}, y_{t-2}, \dots, y_{t-n}) \in \mathbb{R}^n$ for each point $t>n$, with associated future value $y_t$.
( 2 ) for each test time point $y^*$ compute its past vector $\vec y_p^* = (y_{t^*-1}, y_{t^*-2}, \dots, y_{t^*-n})$, and find the $n+1$ nearest neighbors in the library of past patterns: $\{y_p^1, \dots, y_p^{n+1}\}$ and compute their distance to $y_p^*$: $d_i = || y_p^* - y_p^i||$.
( 3 ) predict test time point $y^*$ by taking a weighted average of future values of the $n+1$ nearest neighbors found in the library of past patterns:
\begin{equation}
\hat y^* = \frac{\sum_i^{n+1} y^i e^{-d_i}}{\sum_i^{n+1} e^{-d_i}}.
\end{equation}
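As a quick sanity check of this weighted average, consider a toy illustration with made-up numbers: three neighbors with future values $y^i$ and distances $d_i$, where the closest neighbor should dominate the prediction.
```python
import numpy as np

# toy illustration of the weighted-average prediction (made-up numbers)
futures = np.array([1.0, 2.0, 1.5])      # y^i: future values of the nearest neighbors
d = np.array([0.1, 0.5, 0.2])            # d_i: distances to the test past-vector
w = np.exp(-d) / np.sum(np.exp(-d))      # exponential weights e^{-d_i} / sum_i e^{-d_i}
y_hat = np.sum(w * futures)              # weighted-average prediction
print(w, y_hat)                          # the closest neighbor carries the largest weight
```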
```python
from sklearn.neighbors import NearestNeighbors
from scipy import stats as stats
# split dataset into train/test
def train_test_split(X, fraction_train=.75):
split = int(len(X)*fraction_train)
return X[:split], X[split:]
# exponential weights: w_i = exp(-d_i) / sum_i exp(-d_i)
def weights(distances):
num = np.exp(-distances) # numerator: e^{-d_i}
den = np.sum(num,axis=1,keepdims=True) # denominator: sum_i e^{-d_i}
return num/den
# embed vectors into n-dimensional past values (the last element is the one to be predicted)
def embed_vectors_1d(X, n_embed):
size = len(X)
leng = size-n_embed
out_vects = np.zeros((leng,n_embed + 1))
for i in range(leng):
out_vects[i,:] = X[i:i+n_embed+1]
return out_vects
# implement the Sugihara nonlinear prediction
def nonlinear_prediction(X_train, X_test, n_embed):
# initialize nearest neighbors from sklearn
knn = NearestNeighbors(n_neighbors=n_embed+1)
    # Nearest neighbors is fit on the train data (only on the past vectors - i.e. till [:-1])
knn.fit(X_train[:,:-1])
# find the nearest neighbors for each test vector input
dist,ind = knn.kneighbors(X_test[:,:-1])
# compute exponential weights given distances
W = weights(dist)
# predict test using train (weighted average)
x_pred = np.sum(X_train[ind][:,-1] * W, axis=1)
return x_pred
```
Find the best embedding dimension by cross-validation - i.e., find best reconstruction
```python
nonlinear_reconstr_cor = []
for n_embed in np.arange(1,5):
X = embed_vectors_1d(lorenz[:,0],n_embed)
# split train/test
X_train, X_test = train_test_split(X,fraction_train=0.7)
# nonlinear prediction on individual time series
x_p = nonlinear_prediction(X_train, X_test, n_embed)
# simply check correlation of real vs predicted
nonlinear_reconstr_cor.append(np.corrcoef(X_test[:,-1], x_p)[0,1])
```
```python
plt.plot(np.arange(1,5),nonlinear_reconstr_cor[:5])
plt.xticks(np.arange(1,5))
plt.xlabel('Embedding dimension (n_embed)')
plt.ylabel('Pearson correlation')
```
The best embedding dimension is 2!
Now visualize the results and check where the highest errors are
```python
n_embed = 2
# prediction using optimal n_embed
X = embed_vectors_1d(lorenz[:,0],n_embed)
# split train/test
X_train, X_test = train_test_split(X,fraction_train=0.7)
# nonlinear prediction on individual time series
x_p = nonlinear_prediction(X_train, X_test, n_embed)
# check the correlation between real and predicted values for the chosen embedding dimension
print(np.corrcoef(X_test[:,-1], x_p)[0,1])
# scatter plot real vs predicted
plt.scatter(X_test[:,-1], x_p)
plt.xlabel('Test Data')
plt.ylabel('Prediction Data')
```
Are errors higher where the gradient is higher? Yes!
```python
plt.scatter(np.abs(np.gradient(X_test[:,-1])), np.abs(X_test[:,-1]- x_p))
plt.xlabel('Abs. Gradient')
plt.ylabel('Abs. Prediction Error')
```
|
86313bd2931aa6ae4de3e8a0a425aa593f024f10
| 173,911 |
ipynb
|
Jupyter Notebook
|
Notebook 1 Nonlinear prediction of chaotic dynamical systems.ipynb
|
michnard/nonlinear_prediction
|
b36f3fc3824f554fd86b6140434d7141baeec937
|
[
"MIT"
] | null | null | null |
Notebook 1 Nonlinear prediction of chaotic dynamical systems.ipynb
|
michnard/nonlinear_prediction
|
b36f3fc3824f554fd86b6140434d7141baeec937
|
[
"MIT"
] | null | null | null |
Notebook 1 Nonlinear prediction of chaotic dynamical systems.ipynb
|
michnard/nonlinear_prediction
|
b36f3fc3824f554fd86b6140434d7141baeec937
|
[
"MIT"
] | null | null | null | 472.584239 | 83,020 | 0.943253 | true | 1,637 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.919643 | 0.861538 | 0.792307 |
__label__eng_Latn
| 0.919685 | 0.679128 |
# Tide-influenced Turbidity Current
## Formulation
A layer-averaged model for a turbidity current and an overlying ambient water layer is described herein. Let $t$ and $x$ be time and the streamwise bed-attached coordinate. The mass conservation equations of the two layers take the form:
\begin{equation}
\frac{\partial h_a}{\partial t} + \frac{\partial h_a U_a}{\partial x} = - e_w \left|U_t - U_a \right| \tag{1}\label{eq:cont_ha}
\end{equation}
\begin{equation}
\frac{\partial h_t}{\partial t} + \frac{\partial h_t U_t}{\partial x} = e_w \left|U_t - U_a \right| \tag{2}\label{eq:cont_ht}
\end{equation}
where $h_a$ and $h_t$ are the thicknesses of the ambient water layer and the turbidity current layer, respectively. The parameter $e_w$ denotes the entrainment rate of ambient water into the turbidity current. $U_a$ and $U_t$ are the velocities of the ambient water layer and the turbidity current, respectively.
The momentum equations of the two layers are:
\begin{equation}
\frac{\partial h_a U_a}{\partial t} + \frac{\partial h_a U_a^{2}}{\partial x} = -g h_a \frac{\partial \eta}{\partial x} - g h_a \frac{\partial \left(h_a + h_t \right)}{\partial x} - 2 \nu \frac{U_a - U_t}{h_a + h_t} \tag{3}\label{eq:momentum_Ua}
\end{equation}
\begin{equation}
\frac{\partial h_t U_t}{\partial t} + \frac{\partial h_t U_t^{2}}{\partial x} = - (1 + RC)gh_t \frac{\partial \eta}{\partial x} - g h_t \frac{\partial \left( h_a + h_t \right)}{\partial x} -RCgh_t \frac{ \partial h_t}{\partial x} + 2 \nu \frac{U_a - U_t}{h_a + h_t} - C_f U_t \left|U_t\right| \tag{4}\label{eq:momentum_Ut}
\end{equation}
where $\eta$ indicates the bed elevation. $R\,(=(\rho_s - \rho_f)/\rho_f)$ denotes the submerged specific density of the sediment particles ($\rho_s$ and $\rho_f$ are the densities of sediment and water), and $\nu$ is the kinematic viscosity of water (assuming $RC \ll 1$). The parameters $C_f$ and $g$ denote the bed friction coefficient and the gravitational acceleration, respectively. The parameter $C$ is the concentration of suspended sediment in the turbidity current layer, and the mass conservation of suspended sediment takes the form:
\begin{equation}
\frac{\partial C h_t}{\partial t} + \frac{\partial U_t C h_t}{\partial x} = w_s \left( e_s - r_0 C \right) \tag{5}\label{eq:cont_C}
\end{equation}
where $w_s$ is the settling velocity of sediment particle, and $r_0$ is a ratio of near-bed to layer-averaged concentration. The parameter $e_s$ indicates the rate of sediment entrainment from the bed.
From equations \eqref{eq:cont_ha}, \eqref{eq:cont_ht}, \eqref{eq:momentum_Ua}, \eqref{eq:momentum_Ut} and \eqref{eq:cont_C}, we obtain
\begin{equation}
\frac{\partial h_a}{\partial t} + U_a \frac{\partial h_a}{\partial x} = - e_w \left|U_t - U_a \right| - h_a \frac{\partial U_a}{\partial x} \equiv G_{ha} \tag{6}\label{eq:cont_ha_cip}
\end{equation}
\begin{equation}
\frac{\partial h_t}{\partial t} + U_t \frac{\partial h_t}{\partial x} = e_w \left|U_t - U_a \right| - h_t \frac{\partial U_t}{\partial x} \equiv G_{ht} \tag{7}\label{eq:cont_ht_cip}
\end{equation}
\begin{equation}
\frac{\partial U_a}{\partial t} + U_a \frac{\partial U_a}{\partial x} = -g \frac{\partial \eta}{\partial x} - g \frac{\partial \left(h_a + h_t \right)}{\partial x} - \frac{2 \nu}{h_a} \left(\frac{U_a - U_t}{h_a + h_t}\right) + \frac{e_w \left|U_t - U_a \right| U_a}{h_a} \equiv G_{Ua} \tag{8}\label{eq:momentum_Ua_cip}
\end{equation}
\begin{equation}
\frac{\partial U_t}{\partial t} + U_t \frac{\partial U_t}{\partial x} = - (1 + RC)g\frac{\partial \eta}{\partial x} - g \frac{\partial \left( h_a + h_t \right)}{\partial x} -RCg\frac{ \partial h_t}{\partial x} + \frac{2 \nu}{h_t} \left( \frac{U_a - U_t}{h_a + h_t} \right) - \frac{C_f U_t \left|U_t\right|}{h_t} - \frac{e_w \left|U_t - U_a \right| U_t}{h_t} \equiv G_{Ut} \tag{9}\label{eq:momentum_Ut_cip}
\end{equation}
\begin{equation}
\frac{\partial C}{\partial t} + U_t \frac{\partial C}{\partial x} = \frac{1}{h} \left\{ w_s \left( e_s - r_0 C \right) - e_w C \left|U_t - U_a \right| \right\} \equiv G_{C} \tag{10}\label{eq:cont_C_cip}
\end{equation}
where $G_{ha}, G_{ht}, G_{Ua}, G_{Ut}, G_{C}$ are the non-advective terms of the two-layer shallow water equation system. We write the two-layer system in the compact form:
\begin{equation}
\frac{\partial \boldsymbol{f}}{\partial t} + \boldsymbol{U} \frac{\partial \boldsymbol{f}}{\partial x} = \boldsymbol{G} \tag{11}\label{eq:compact_system}
\end{equation}
where
\begin{equation}
\boldsymbol{f}=(h_a, h_t, U_a, U_t, C)^{T}, \boldsymbol{U}=(U_a, U_t, U_a, U_t, U_t), \boldsymbol{G}=(G_{ha}, G_{ht}, G_{Ua}, G_{Ut}, G_{C})^{T}
\end{equation}
## Numerical solution
Equation \eqref{eq:compact_system} can be solved numerically by the CIP method, in which the equation is split into an advection phase and a non-advection phase. The advection phase takes the form:
\begin{equation}
\frac{\partial \boldsymbol{f}}{\partial t} + \boldsymbol{U} \frac{\partial \boldsymbol{f}}{\partial x} = 0 \tag{12}\label{eq:cip_advection_f}
\end{equation}
\begin{equation}
\frac{\partial \left(\partial_x \boldsymbol{f}\right)}{\partial t} + \boldsymbol{U} \frac{\partial \left(\partial_x \boldsymbol{f}\right)}{\partial x} = 0 \tag{13}\label{eq:cip_advection_df}
\end{equation}
These equations are discretized as:
\begin{equation}
f_i^{*} = a \xi^3 + b \xi^2 + \partial_x f^{n} \xi + f_i \tag{14}\label{eq:cip_adv_scheme}
\end{equation}
\begin{equation}
\partial_x f_i^{*} = 3 a \xi^2 + 2 b \xi + \partial_x f^{n} \tag{15}\label{eq:cip_adv_scheme_dif}
\end{equation}
where $\xi= -U \Delta t$. The coefficients $a$ and $b$ are defined as:
\begin{equation}
a = \frac{\partial_x f_i^n + \partial_x f_{iup}^n}{D^2} + \frac{2 \left(f_i^n - f_{iup}^n \right)}{D^3} \tag{16}\label{cip_adv_coeff_a}
\end{equation}
\begin{equation}
b = \frac{3 \left(f_{iup}^n - f_i^n\right)}{D^2} - \frac{2 \partial_x f_i^n + \partial_x f_{iup}^n }{D} \tag{17}\label{cip_adv_coeff_b}
\end{equation}
where $iup$ denotes the grid point upstream of the $i$th grid point, and $D$ is
\begin{equation}
D = \begin{cases}
- \Delta x & (U_i > 0) \\
\Delta x & (U_i < 0)
\end{cases} \tag{18}\label{cip_D}
\end{equation}
After the calculation of the advection phase, the non-advection phase is calculated by:
\begin{equation}
\frac{\partial \boldsymbol{f}}{\partial t} = G \tag{19}
\end{equation}
\begin{equation}
\frac{\partial \left(\partial_x \boldsymbol{f}\right)}{\partial t} = \partial_x G - \frac{\partial f}{\partial x} \frac{\partial U}{\partial x} \tag{20}
\end{equation}
These equations are discretized as:
\begin{equation}
f_i^{n+1} = f^* + G_i \Delta t \tag{21}
\end{equation}
\begin{equation}
\partial_x f_i^{n + 1} = \partial_x f^* + \frac{f_{i + 1}^{n+1} - f_{i-1}^{n + 1}}{2 \Delta x} - \frac{f_{i + 1}^{*} - f_{i-1}^{*}}{2 \Delta x} - \left( \frac{\partial U}{\partial x} \right)_i \partial_x f^* \, \Delta t \tag{22}
\end{equation}
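Before turning to the `tideturb` implementation below, the advection phase can be made concrete with a minimal NumPy sketch that transcribes equations (14)-(18) for a single scalar field. This is a simplified illustration (uniform grid, crude boundary handling), not the `tideturb` code itself.
```python
import numpy as np

def cip_advect_1d(f, dfdx, U, dt, dx):
    """One CIP advection step for a scalar f and its spatial derivative dfdx (illustrative sketch)."""
    xi = -U * dt                                                               # xi = -U dt
    D = np.where(U > 0, -dx, dx)                                               # eq. (18)
    iup = np.clip(np.arange(len(f)) + np.where(U > 0, -1, 1), 0, len(f) - 1)   # upstream index
    a = (dfdx + dfdx[iup]) / D**2 + 2.0 * (f - f[iup]) / D**3                  # eq. (16)
    b = 3.0 * (f[iup] - f) / D**2 - (2.0 * dfdx + dfdx[iup]) / D               # eq. (17)
    f_star = a * xi**3 + b * xi**2 + dfdx * xi + f                             # eq. (14)
    dfdx_star = 3.0 * a * xi**2 + 2.0 * b * xi + dfdx                          # eq. (15)
    return f_star, dfdx_star
```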
```python
from tideturb import Grid, TwoLayerTurbidityCurrent
from matplotlib import pyplot as plt
grid = Grid(number_of_grids=200, spacing=20.0)
grid.eta = grid.x * -0.05
tc = TwoLayerTurbidityCurrent(
grid=grid,
turb_vel=2.0,
ambient_vel=0.3,
turb_thick=5.0,
ambient_thick=100.0,
concentration=0.01,
Ds=80*10**-6,
alpha=0.5,
implicit_repeat_num=5,
)
steps = 50
for i in range(steps):
# tc.plot(ylim_velocity=[-0.5, 8.0], ylim_concentration=[0, 9.0])
# plt.savefig('test11/tidal_ebb_{:04d}'.format(i))
tc.run_one_step(dt=100.0)
print("", end='\r')
print('{:.1f}% finished.'.format(i / steps * 100), end='\r')
tc.plot(ylim_velocity=[-0.5, 8.0], ylim_concentration=[0, 9.0])
# plt.savefig('test11/tidal_ebb_{:04d}'.format(i))
plt.show()
tc.save('test12_5000sec')
```
98.0% finished.
/home/naruse/anaconda3/lib/python3.6/site-packages/matplotlib/figure.py:445: UserWarning: Matplotlib is currently using agg, which is a non-GUI backend, so cannot show the figure.
% get_backend())
```python
from tideturb import Grid, TwoLayerTurbidityCurrent
import matplotlib as mpl
# mpl.use('Agg')
import matplotlib.pyplot as plt
import tideturb
import numpy as np
import pickle
tc = tideturb.load_model('test12_5000sec')
steps = 12
# offset = 50
periodicity = 12 * 3600
for i in range(steps):
# tc.plot(ylim_velocity=[-0.5, 8.0], ylim_concentration=[0, 9.0])
# plt.savefig('test11_cycle/tidal_cycle_{:04d}'.format(i))
for j in range(36):
tc.ambient_vel = 0.3 * np.sin((i * 3600 + j * 100) / periodicity * 2 * np.pi + 0.5 * np.pi)
tc.U_link_temp[0, 0] = tc.ambient_vel
tc.U_link_temp[0, 1] = tc.ambient_vel
tc.run_one_step(dt=100.0)
tc.save('test12_cycle_{:0=2}h'.format(i))
print("", end='\r')
print('{:.1f}% finished.'.format(i / steps * 100), end='\r')
# tc.plot(ylim_velocity=[-0.5, 8.0], ylim_concentration=[0, 9.0])
# plt.savefig('test11_cycle/tidal_cycle_{:04d}'.format(i))
plt.show()
tc.save('test12_cycle_12hrs')
```
91.7% finished.
```python
from tideturb import Grid, TwoLayerTurbidityCurrent
from matplotlib import pyplot as plt
import numpy as np
grid = Grid(number_of_grids=100, spacing=40.0)
grid.eta = grid.x * -0.05
C = np.linspace(0.006, 0.015, 30)
velocity = np.linspace(1.5, 2.5, 30)
vel_difference = np.zeros([len(velocity), len(C)])
for i in range(len(velocity)):
for j in range(len(C)):
tc1 = TwoLayerTurbidityCurrent(
grid=grid,
turb_vel=velocity[i],
ambient_vel=0.3,
turb_thick=5.0,
ambient_thick=100.0,
concentration=C[j],
Ds=80*10**-6,
alpha=0.5,
implicit_repeat_num=5,
)
tc1.run_one_step(dt=3000)
plt.close(tc1.fig)
tc2 = TwoLayerTurbidityCurrent(
grid=grid,
turb_vel=velocity[i],
ambient_vel=-0.3,
turb_thick=5.0,
ambient_thick=100.0,
concentration=C[j],
Ds=80*10**-6,
alpha=0.5,
implicit_repeat_num=5,
)
tc2.run_one_step(dt=3000)
plt.close(tc2.fig)
vel_difference[i][j] = tc1.U_node[1][-1] - tc2.U_node[1][-1]
print("", end='\r')
print('{:.1f}% finished.'.format((i * len(C) + j + 1) / (len(C) * len(velocity)) * 100), end='\r')
print(vel_difference)
np.savetxt('vel_difference4.txt', vel_difference, delimiter=',')
```
[[ 1.67553697e-02 2.04528849e-03 -2.13336859e-02 -3.92631066e-02
-5.03579013e-02 -5.64753047e-02 -6.01815366e-02 -6.31804350e-02
-6.76992399e-02 -8.27539357e-02 -1.29821513e-01 -2.24908740e-01
-3.24605963e-01 -3.52185120e-01 -3.15732888e-01 -2.34236046e-01
-2.32577550e-01 -3.02061751e-01 -4.34000939e-01 -2.71201961e-01
6.29298729e-02 7.77410601e-02 9.20932326e-02 1.10539708e-01
1.34650764e-01 1.66752963e-01 2.11245175e-01 2.76604538e-01
3.75484327e-01 5.75596527e-01]
[ 1.48006098e-02 -4.73205635e-03 -2.82634940e-02 -4.48598233e-02
-5.51446920e-02 -6.14138037e-02 -6.68661481e-02 -7.65740912e-02
-1.06100643e-01 -1.77571367e-01 -2.80862884e-01 -3.46695973e-01
-3.30772986e-01 -2.75228584e-01 -2.39512915e-01 -2.82362465e-01
-3.97130822e-01 -3.34359836e-01 5.39608048e-02 6.95909987e-02
8.17879578e-02 9.74218492e-02 1.22579904e-01 1.49541739e-01
1.86858325e-01 2.40110699e-01 3.21784685e-01 4.63206132e-01
7.75067837e-01 1.80424823e+00]
[ 1.10414629e-02 -1.24794483e-02 -3.58545973e-02 -5.17062387e-02
-6.18642988e-02 -7.11431720e-02 -9.13669523e-02 -1.44329594e-01
-2.39042166e-01 -3.28352687e-01 -3.41372877e-01 -3.08147158e-01
-2.54222506e-01 -2.63955914e-01 -3.54975138e-01 -3.68889268e-01
4.51169023e-02 6.46064216e-02 7.51996357e-02 8.88137937e-02
1.06395989e-01 1.29380303e-01 1.59834042e-01 2.02035753e-01
2.74275773e-01 3.76774600e-01 5.72586351e-01 1.10308874e+00
2.47083246e+00 3.28862489e+00]
[ 5.53821445e-03 -2.11673077e-02 -4.47152353e-02 -6.11411592e-02
-7.75607778e-02 -1.10162028e-01 -1.84695030e-01 -2.89526214e-01
-3.42924565e-01 -3.31469371e-01 -2.82831629e-01 -2.56759778e-01
-3.10566425e-01 -4.02544281e-01 2.05853764e-02 5.89285036e-02
6.80572972e-02 8.00246667e-02 9.56158124e-02 1.15335006e-01
1.40978811e-01 1.75629204e-01 2.24630658e-01 2.98832641e-01
4.24411621e-01 6.87420750e-01 1.65894373e+00 2.91206517e+00
3.45773520e+00 3.64205163e+00]
[-2.99735849e-03 -3.19859077e-02 -5.71797684e-02 -8.42654343e-02
-1.35874248e-01 -2.30702482e-01 -3.20510785e-01 -3.39883453e-01
-3.15562492e-01 -2.65798487e-01 -2.73760222e-01 -3.72661486e-01
-5.98338910e-02 5.39312231e-02 6.15512021e-02 7.12661970e-02
8.41860692e-02 1.00964320e-01 1.22794121e-01 1.51750655e-01
1.91757135e-01 2.48430565e-01 3.37337151e-01 4.97732928e-01
8.83833534e-01 2.10528891e+00 3.12525769e+00 3.52730655e+00
3.63127410e+00 3.35068590e+00]
[-1.45083992e-02 -4.81294104e-02 -8.75605123e-02 -1.59697354e-01
-2.63540331e-01 -3.29393233e-01 -3.30367903e-01 -2.84078086e-01
-2.56952799e-01 -3.16460022e-01 -2.22893911e-01 4.44438091e-02
5.32182827e-02 6.21441235e-02 7.42054234e-02 8.84582051e-02
1.06398012e-01 1.29892885e-01 1.61503615e-01 2.05443617e-01
2.70848402e-01 3.77845685e-01 5.86976065e-01 1.17532633e+00
2.55439329e+00 3.30651091e+00 3.58119788e+00 3.55752123e+00
3.04213614e+00 1.97474079e+00]
[-3.29313455e-02 -8.65374924e-02 -1.77120605e-01 -2.77426774e-01
-3.19662240e-01 -2.99947166e-01 -2.52588849e-01 -2.65881184e-01
-2.80095442e-01 2.70528896e-02 4.80543963e-02 5.24963596e-02
6.16898411e-02 7.40449035e-02 9.00571092e-02 1.10355006e-01
1.37088628e-01 1.71205294e-01 2.19758142e-01 2.93611212e-01
4.19251094e-01 6.85229513e-01 1.54949395e+00 2.85410313e+00
3.42387709e+00 3.59791691e+00 3.45161725e+00 2.65762020e+00
1.67983389e+00 1.14428021e+00]
[-7.59951045e-02 -1.75256552e-01 -2.71762898e-01 -2.93314280e-01
-2.46467303e-01 -2.27678709e-01 -2.69609763e-01 -3.98809467e-02
4.67974697e-02 4.96575416e-02 5.51112796e-02 6.39821609e-02
7.49629521e-02 9.11446279e-02 1.12415943e-01 1.40525571e-01
1.78695311e-01 2.33391207e-01 3.17719034e-01 4.65705440e-01
8.09425662e-01 1.94436054e+00 3.04945550e+00 3.48772127e+00
3.57839254e+00 3.27872453e+00 2.27984241e+00 1.46040019e+00
1.02478213e+00 7.71930186e-01]
[-1.54614962e-01 -2.46627888e-01 -2.25375848e-01 -1.89403911e-01
-2.12650676e-01 -1.03332072e-01 4.67997036e-02 4.96208865e-02
5.25386631e-02 5.84424292e-02 6.77204952e-02 7.91809636e-02
9.50404998e-02 1.15506810e-01 1.44152385e-01 1.84801813e-01
2.44340995e-01 3.39393342e-01 5.14730739e-01 9.68654112e-01
2.28651976e+00 3.19098315e+00 3.52556031e+00 3.52926500e+00
3.02907308e+00 1.96593640e+00 1.29220352e+00 9.29612459e-01
7.12409388e-01 5.69297016e-01]
[-1.87995486e-01 -1.51197591e-01 -1.50044658e-01 -1.10957590e-01
3.99206791e-02 4.89964571e-02 4.97983983e-02 5.45381049e-02
6.24092461e-02 7.18263303e-02 8.40825233e-02 1.00900061e-01
1.22539629e-01 1.51112899e-01 1.92615952e-01 2.55655901e-01
3.61287535e-01 5.68537793e-01 1.16627061e+00 2.54981580e+00
3.28911367e+00 3.54225214e+00 3.45037190e+00 2.72524269e+00
1.72029073e+00 1.16115129e+00 8.51859198e-01 6.61495413e-01
5.33637305e-01 4.42669660e-01]
[-8.98977212e-02 -8.92777506e-02 1.69253328e-02 5.22037630e-02
4.88351979e-02 5.04046898e-02 5.56097247e-02 6.37364274e-02
7.48755816e-02 8.93934148e-02 1.06683465e-01 1.29349769e-01
1.61054199e-01 2.05926025e-01 2.73866761e-01 3.87435999e-01
6.29907894e-01 1.40892531e+00 2.74838883e+00 3.35171881e+00
3.52185157e+00 3.30230124e+00 2.39628302e+00 1.52873640e+00
1.05662962e+00 7.88154712e-01 6.18723122e-01 5.03006935e-01
4.19656286e-01 3.57021503e-01]
[-7.20435670e-03 6.16422972e-02 5.34243408e-02 4.85259326e-02
5.11078074e-02 5.68777190e-02 6.54107230e-02 7.67587540e-02
9.15491348e-02 1.11056026e-01 1.37019759e-01 1.70523200e-01
2.18571634e-01 2.94065615e-01 4.26406125e-01 7.22912549e-01
1.69758796e+00 2.89937603e+00 3.39097889e+00 3.47967399e+00
3.09447445e+00 2.06726033e+00 1.34134415e+00 9.57920967e-01
7.32117014e-01 5.81972119e-01 4.76381648e-01 3.99341252e-01
3.41076931e-01 2.95445876e-01]
[ 7.38179618e-02 5.67588766e-02 5.08940248e-02 5.27621766e-02
5.78993570e-02 6.69762925e-02 7.87839192e-02 9.40904219e-02
1.14050379e-01 1.40729572e-01 1.77985446e-01 2.32666850e-01
3.15762772e-01 4.66164167e-01 8.29965968e-01 2.01224005e+00
3.05317958e+00 3.43342289e+00 3.41525659e+00 2.81599546e+00
1.79760861e+00 1.20139001e+00 8.75588977e-01 6.74291316e-01
5.42533155e-01 4.48362555e-01 3.80833403e-01 3.26723128e-01
2.83844032e-01 2.49490514e-01]
[ 6.28142278e-02 5.49897002e-02 5.51438403e-02 6.03804968e-02
6.91787500e-02 8.04764958e-02 9.64793196e-02 1.17232219e-01
1.45019960e-01 1.83781098e-01 2.41135963e-01 3.34424410e-01
5.12223251e-01 9.69651008e-01 2.28282769e+00 3.15977991e+00
3.46219353e+00 3.36751790e+00 2.57183012e+00 1.59374095e+00
1.08611032e+00 8.03446173e-01 6.27704923e-01 5.09113384e-01
4.23399808e-01 3.59101305e-01 3.09954957e-01 2.70859540e-01
2.40384238e-01 2.13886825e-01]
[ 5.86039242e-02 5.82418726e-02 6.39817210e-02 7.17558753e-02
8.35374357e-02 9.95242434e-02 1.20089709e-01 1.49208668e-01
1.89980462e-01 2.50879843e-01 3.51239558e-01 5.48472938e-01
1.11878996e+00 2.51304472e+00 3.25100438e+00 3.47508326e+00
3.27913094e+00 2.33991567e+00 1.46975033e+00 1.01190732e+00
7.48447607e-01 5.88836788e-01 4.79102308e-01 4.00784014e-01
3.41922309e-01 2.96042555e-01 2.59065757e-01 2.29390723e-01
2.04584201e-01 1.84142079e-01]
[ 6.12579987e-02 6.67371220e-02 7.51567296e-02 8.75966438e-02
1.02781270e-01 1.24470814e-01 1.52739535e-01 1.96008087e-01
2.60938712e-01 3.69913624e-01 5.92486543e-01 1.28473783e+00
2.66482235e+00 3.31222638e+00 3.47604895e+00 3.16056830e+00
2.12150726e+00 1.35028849e+00 9.46519068e-01 7.12094905e-01
5.61766728e-01 4.56431265e-01 3.82460102e-01 3.26774522e-01
2.82899627e-01 2.48394549e-01 2.20469793e-01 1.96992396e-01
1.77282037e-01 1.60802055e-01]
[ 6.87287967e-02 7.82103817e-02 9.10576880e-02 1.07270377e-01
1.29897795e-01 1.59744339e-01 2.04125673e-01 2.70886274e-01
3.89962595e-01 6.43543562e-01 1.48463757e+00 2.80514103e+00
3.35778781e+00 3.45430030e+00 3.00061728e+00 1.92634472e+00
1.24726761e+00 8.87632129e-01 6.74650135e-01 5.36138579e-01
4.39620974e-01 3.68750459e-01 3.14488591e-01 2.72572592e-01
2.39583610e-01 2.12640460e-01 1.89977649e-01 1.71313315e-01
1.55402839e-01 1.41666874e-01]
[ 8.00865367e-02 9.36872726e-02 1.11875665e-01 1.35131781e-01
1.69142949e-01 2.13687518e-01 2.85926391e-01 4.15060235e-01
7.04150643e-01 1.70905979e+00 2.92854854e+00 3.39517477e+00
3.41981199e+00 2.80217092e+00 1.74963492e+00 1.15442254e+00
8.34266600e-01 6.40276695e-01 5.11987311e-01 4.21707564e-01
3.55230808e-01 3.04402544e-01 2.64266327e-01 2.32000061e-01
2.05608820e-01 1.84298376e-01 1.66270722e-01 1.50924864e-01
1.37565383e-01 1.26013644e-01]
[ 9.56611224e-02 1.14279643e-01 1.39636956e-01 1.74905710e-01
2.23794568e-01 3.05368360e-01 4.45315812e-01 7.87434316e-01
1.95322695e+00 3.03694250e+00 3.42192508e+00 3.36345072e+00
2.57365733e+00 1.59225762e+00 1.07039363e+00 7.83440583e-01
6.06954091e-01 4.89040670e-01 4.04736807e-01 3.42078892e-01
2.93851053e-01 2.55856605e-01 2.25264720e-01 2.00076834e-01
1.79055660e-01 1.61312175e-01 1.46593886e-01 1.33971282e-01
1.22961589e-01 1.13337872e-01]
[ 1.16831848e-01 1.42702549e-01 1.79319860e-01 2.34025436e-01
3.20678942e-01 4.79498919e-01 9.06602064e-01 2.20698884e+00
3.14491894e+00 3.43106552e+00 3.26942918e+00 2.32434118e+00
1.44828453e+00 9.92814665e-01 7.36254372e-01 5.75205425e-01
4.65973598e-01 3.87631487e-01 3.29102730e-01 2.83617760e-01
2.47510331e-01 2.18276578e-01 1.94273061e-01 1.74280566e-01
1.57354660e-01 1.42859682e-01 1.30287311e-01 1.19680278e-01
1.10415860e-01 1.02192401e-01]
[ 1.46197346e-01 1.83972087e-01 2.40879250e-01 3.35100411e-01
5.20590953e-01 1.02974287e+00 2.49332715e+00 3.24857453e+00
3.46089264e+00 3.17481516e+00 2.05372441e+00 1.31088184e+00
9.18612570e-01 6.90660236e-01 5.44491606e-01 4.43824823e-01
3.70868557e-01 3.15850911e-01 2.73053980e-01 2.39029933e-01
2.11343167e-01 1.88463819e-01 1.69307867e-01 1.53086408e-01
1.39247988e-01 1.27244140e-01 1.16779381e-01 1.07585178e-01
9.96911473e-02 9.26896397e-02]
[ 1.89129321e-01 2.48621251e-01 3.48799893e-01 5.51678659e-01
1.18364410e+00 2.65215738e+00 3.32859939e+00 3.48305179e+00
3.01665453e+00 1.89780959e+00 1.21495907e+00 8.42918045e-01
6.45089022e-01 5.13959593e-01 4.21922250e-01 3.54172066e-01
3.02795710e-01 2.62535696e-01 2.30364335e-01 2.04148211e-01
1.82476932e-01 1.64220640e-01 1.48708877e-01 1.35482123e-01
1.23982544e-01 1.13971417e-01 1.05207756e-01 9.74436018e-02
9.04624725e-02 8.44477203e-02]
[ 2.57511497e-01 3.64850033e-01 5.90336769e-01 1.34522965e+00
2.79614867e+00 3.40195797e+00 3.47726093e+00 2.86797954e+00
1.73369722e+00 1.11325611e+00 7.99468165e-01 6.12169621e-01
4.81386605e-01 3.99269344e-01 3.37368039e-01 2.89643705e-01
2.52009746e-01 2.21711944e-01 1.96810262e-01 1.76250740e-01
1.58924370e-01 1.44251463e-01 1.31600804e-01 1.20658540e-01
1.11068433e-01 1.02618737e-01 9.51455906e-02 8.85233943e-02
8.25661684e-02 7.72137623e-02]
[ 3.83144078e-01 6.36945125e-01 1.54875599e+00 2.93200277e+00
3.44818860e+00 3.45628714e+00 2.68617075e+00 1.59876685e+00
1.04338864e+00 7.51065433e-01 5.75318730e-01 4.61973619e-01
3.72516069e-01 3.18943836e-01 2.75727810e-01 2.41194110e-01
2.12922267e-01 1.89547023e-01 1.70062767e-01 1.53644569e-01
1.39609731e-01 1.27505597e-01 1.17041020e-01 1.07935966e-01
9.98947864e-02 9.27689492e-02 8.64279252e-02 8.07263900e-02
7.55886227e-02 7.09937513e-02]
[ 7.20115454e-01 1.84237020e+00 3.07025737e+00 3.48709067e+00
3.40641127e+00 2.46452417e+00 1.47198796e+00 9.81000052e-01
7.11612973e-01 5.51798073e-01 4.38827807e-01 3.63282698e-01
3.07895570e-01 2.59519357e-01 2.29089974e-01 2.03271707e-01
1.81783201e-01 1.63730850e-01 1.48335145e-01 1.35019283e-01
1.23501288e-01 1.13489849e-01 1.04788932e-01 9.70255346e-02
9.02399591e-02 8.41641279e-02 7.87809048e-02 7.38680072e-02
6.94245037e-02 6.53667222e-02]
[ 2.10734176e+00 3.19431493e+00 3.52696991e+00 3.33668609e+00
2.23817249e+00 1.35016529e+00 9.17299559e-01 6.75751600e-01
5.24330981e-01 4.21212528e-01 3.50065742e-01 2.93903590e-01
2.53678260e-01 2.21881902e-01 1.92503313e-01 1.73195374e-01
1.56678636e-01 1.42331532e-01 1.30011836e-01 1.19272158e-01
1.09900135e-01 1.01635342e-01 9.42541561e-02 8.77361083e-02
8.18889292e-02 7.66874453e-02 7.19531431e-02 6.77273527e-02
6.38607719e-02 6.03032763e-02]
[ 3.30176673e+00 3.52763096e+00 3.14607655e+00 1.96081506e+00
1.22063086e+00 8.50494064e-01 6.37019537e-01 4.99869098e-01
4.05542091e-01 3.35880091e-01 2.83415840e-01 2.44660778e-01
2.12819553e-01 1.88124146e-01 1.68125674e-01 1.48837607e-01
1.36032173e-01 1.24635100e-01 1.14573776e-01 1.05792621e-01
9.81095055e-02 9.12107211e-02 8.51100660e-02 7.95479567e-02
7.45833732e-02 7.00611241e-02 6.59922656e-02 6.23766989e-02
5.89936415e-02 5.59102997e-02]
[ 3.53196232e+00 2.94759693e+00 1.74689369e+00 1.10701387e+00
7.81495520e-01 5.91345669e-01 4.67680159e-01 3.81298866e-01
3.20168293e-01 2.73397713e-01 2.35925204e-01 2.05512019e-01
1.81524260e-01 1.61902695e-01 1.45931436e-01 1.32471272e-01
1.19084650e-01 1.10147155e-01 1.01958946e-01 9.45985611e-02
8.79785014e-02 8.20891226e-02 7.67509645e-02 7.19986807e-02
6.77766664e-02 6.38966941e-02 6.04252394e-02 5.72513592e-02
5.42642922e-02 5.16174863e-02]
[ 2.64246333e+00 1.54246950e+00 1.00448919e+00 7.23013563e-01
5.53489871e-01 4.41909246e-01 3.63678269e-01 3.06010359e-01
2.61971579e-01 2.27669659e-01 1.98860116e-01 1.75238314e-01
1.57541485e-01 1.40493117e-01 1.27612641e-01 1.16594285e-01
1.04343330e-01 9.74444038e-02 9.08966267e-02 8.49809267e-02
7.96569969e-02 7.48026838e-02 7.04696190e-02 6.63620506e-02
6.27876581e-02 5.94104764e-02 5.62810549e-02 5.34694219e-02
5.08492717e-02 4.84557226e-02]
[ 1.35449725e+00 9.08485859e-01 6.65463948e-01 5.15796353e-01
4.15263643e-01 3.43656956e-01 2.90560204e-01 2.49795105e-01
2.17657735e-01 1.91785675e-01 1.70338980e-01 1.51858994e-01
1.35881666e-01 1.23944752e-01 1.12524048e-01 1.03434956e-01
9.54993223e-02 8.64305556e-02 8.13053667e-02 7.64256792e-02
7.18869683e-02 6.77618655e-02 6.41181571e-02 6.08078547e-02
5.78276350e-02 5.49758761e-02 5.25148072e-02 5.01257044e-02
4.79070543e-02 4.57095239e-02]]
```python
from matplotlib import pyplot as plt
import numpy as np
X, Y = np.meshgrid(C * 100, velocity)
Z = np.loadtxt('vel_difference4.txt', delimiter=',')
cont=plt.contour(X, Y, Z, 5, vmin=-1,vmax=3, colors=['black'])
cont.clabel(fmt='%1.1f', fontsize=14)
plt.xlabel('Concentration', fontsize=24)
plt.ylabel('Velocity', fontsize=24)
plt.pcolormesh(X,Y,Z, cmap='cool') # color contour plot
pp=plt.colorbar(orientation="vertical") # display the colorbar
pp.set_label("Amplitude of velocity fluctuation (m/s)", fontsize=14)
plt.savefig('result_map.svg')
# im = plt.imshow(vel_difference, aspect='auto', extent=[C[0]*100, C[-1]*100, velocity[-1], velocity[0]], interpolation='bicubic')
# ax = plt.gca()
# ax.invert_yaxis()
# ax.set_xlabel('Concentration (%)')
# ax.set_ylabel('Velocity (m/s)')
# plt.colorbar(im)
# plt.show()
# plt.savefig('testimage.svg')
```
```python
print(C)
print(velocity)
a = np.array([[0,1,0],[1,0,0],[0,0,1],[0,0,0]])
plt.imshow(a)
```
```python
import tideturb
tc_org = tideturb.load_model('test09_5000sec')
tc_org.axM.plot(tc_org.grid.x, tc_org.U_node[1, :])
tc_org.axL.plot(tc_org.grid.x, tc_org.h_node[1, :])
tc_org.axR.plot(tc_org.grid.x, tc_org.C_node[1, :])
U = tc_org.U_node[1, -1]
h = tc_org.h_node[1, -1]
C = tc_org.C_node[1, -1]
Fr = U / np.sqrt(1.65 * 9.81 * C * h)
print(Fr)
```
|
b48eb08aa462dc65e669852715c987430a42e587
| 155,731 |
ipynb
|
Jupyter Notebook
|
tide_influenced_turbidity_currents_v1.ipynb
|
narusehajime/tideturb
|
caaeb7543bb9895706a9d8f39f25346ab585dc66
|
[
"MIT"
] | null | null | null |
tide_influenced_turbidity_currents_v1.ipynb
|
narusehajime/tideturb
|
caaeb7543bb9895706a9d8f39f25346ab585dc66
|
[
"MIT"
] | null | null | null |
tide_influenced_turbidity_currents_v1.ipynb
|
narusehajime/tideturb
|
caaeb7543bb9895706a9d8f39f25346ab585dc66
|
[
"MIT"
] | null | null | null | 227.676901 | 73,016 | 0.882599 | true | 13,245 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.945801 | 0.743168 | 0.702889 |
__label__yue_Hant
| 0.142331 | 0.471379 |
# [Learn Quantum Computing with Python and Q#](https://www.manning.com/books/learn-quantum-computing-with-python-and-q-sharp?a_aid=learn-qc-granade&a_bid=ee23f338)<br>Chapter 8 Exercise Solutions
----
> Copyright (c) Sarah Kaiser and Chris Granade.
> Code sample from the book "Learn Quantum Computing with Python and Q#" by
> Sarah Kaiser and Chris Granade, published by Manning Publications Co.
> Book ISBN 9781617296130.
> Code licensed under the MIT License.
### Preamble
```python
import numpy as np
import qutip as qt
import matplotlib.pyplot as plt
import qsharp
%matplotlib inline
```
### Exercise 8.1
**In Chapter 4, you used Python type annotations to represent the concept of a _strategy_ in the CHSH game.
User-defined types in Q# can be used in a similar fashion.
Give it a go by defining a new UDT for CHSH strategies and then use your new UDT to wrap the constant strategy from Chapter 4.**
*HINT*: Your and Eve's parts of the strategy can each be represented as operations that take a `Result` and output a `Result`.
That is, as operations of type `Result => Result`.
```python
strategy = qsharp.compile("""
newtype Strategy = (
PlayAlice: (Result => Result),
PlayBob: (Result => Result)
);
""")
strategy
```
<Q# callable Strategy>
----
### Exercise 8.2
**You can find the model for Lancelot's results if you use Born's rule!
We have put the definition from Chapter 2 below; see if you can plot the resulting value as a function of Lancelot's scale using Python.
Does your plot look like a trigonometric function?**
\begin{align}
\Pr(\text{measurement} | \text{state}) = |\left\langle \text{measurement} \mid \text{state} \right\rangle|^2
\end{align}
*HINT*: For Lancelot's measurements, the $\left\langle \text{measurement} \right|$ part of Born's rule is given by $\left\langle 1 \right|$.
Immediately before measuring, his qubit will be in the state $H R_1(\theta * \textrm{scale}) H \left|0\right\rangle$.
You can simulate the `R1` operation in QuTiP by using the matrix form in the Q# reference at https://docs.microsoft.com/qsharp/api/qsharp/microsoft.quantum.intrinsic.r1.
For the purposes of illustration, let's choose $\theta = 0.456$ radians.
```python
theta = 0.456
```
Next, as the hint gives us, we'll need to define a matrix that we can use to simulate the `R1` operation:
```python
def r1_matrix(angle: float) -> qt.Qobj:
return qt.Qobj([
[1, 0],
[0, np.exp(1j * angle)]
])
```
```python
r1_matrix(theta)
```
Quantum object: dims = [[2], [2]], shape = (2, 2), type = oper, isherm = False\begin{equation*}\left(\begin{array}{*{11}c}1.0 & 0.0\\0.0 & (0.898+0.440j)\\\end{array}\right)\end{equation*}
We can use this to find Lancelot's state after applying each hidden rotation:
```python
def lancelot_final_state(theta: float, scale: float) -> qt.Qobj:
initial_state = qt.basis(2, 0)
# Simulate the H Q# operation.
state = qt.qip.operations.hadamard_transform() * initial_state
# Simulate the R1 operation.
state = r1_matrix(theta * scale) * state
# Simulate undoing the H operation with another call to H.
state = qt.qip.operations.hadamard_transform() * state
return state
```
```python
lancelot_final_state(theta, 1.2)
```
Quantum object: dims = [[2], [1]], shape = (2, 1), type = ket\begin{equation*}\left(\begin{array}{*{11}c}(0.927+0.260j)\\(0.073-0.260j)\\\end{array}\right)\end{equation*}
We now have everything we need to predict the probability of a "1" outcome:
```python
def lancelot_pr1(theta: float, scale: float) -> float:
ket1 = qt.basis(2, 1)
# Apply Born's rule.
return np.abs((ket1.dag() * lancelot_final_state(theta, scale))[0, 0]) ** 2
```
```python
lancelot_pr1(theta, 1.2)
```
0.07300764875288458
Plotting for a variety of different scales, we see the expected sinusoidal shape:
```python
scales = np.linspace(0, 20, 201)
pr1s = [lancelot_pr1(theta, scale) for scale in scales]
plt.plot(scales, pr1s)
```
----
### Exercise 8.3
**Try writing Q# programs that use `AssertQubit` and `DumpMachine` to verify that:**
- $\left|+\right\rangle$ and $\left|-\right\rangle$ are both eigenstates of the `X` operation.
- $\left|0\right\rangle$ and $\left|1\right\rangle$ are both eigenstates of the `Rz` operation, regardless of what angle you choose to rotate by.
For even more practice, try figuring out what the eigenstates of the `Y` and `CNOT` operations are, and write a Q# program to verify your guesses!
*HINT*: You can find the vector form of the eigenstates of a unitary operation using QuTiP.
For instance, the eigenstates of the `Y` operation are given by `qt.sigmay().eigenstates()`.
From there, you can use what you learned about rotations in Chapters 4 and 5 to figure out which Q# operations prepare those states.
Don't forget you can always test if a particular state is an eigenstate of an operation by just writing a quick test in Q#!
Let's start by verifying that $\left|+\right\rangle$ and $\left|-\right\rangle$ are both eigenstates of the `X` operation.
```python
verify_x_eigenstates = qsharp.compile("""
open Microsoft.Quantum.Diagnostics;
operation VerifyXEigenstates() : Unit {
using (q = Qubit()) {
// Prepare |+⟩.
H(q);
// Check that the X operation does nothing.
X(q);
Message("Checking that |+⟩ is an eigenstate of the X operation.");
DumpMachine();
// Reset so that we're ready for the next check.
Reset(q);
// Next, do the same with |−⟩.
X(q);
H(q);
X(q);
Message("");
Message("Checking that |−⟩ is an eigenstate of the X operation.");
DumpMachine();
Reset(q);
}
}
""")
```
```python
verify_x_eigenstates.simulate()
```
Checking that |+⟩ is an eigenstate of the X operation.
|0⟩ 0.7071067811865476 + 0𝑖
|1⟩ 0.7071067811865476 + 0𝑖
Checking that |−⟩ is an eigenstate of the X operation.
|0⟩ -0.7071067811865476 + 0𝑖
|1⟩ 0.7071067811865476 + 0𝑖
()
Notice that in both cases, we got back the same state (up to a global phase), confirming the first part of the exercise.
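The same check can be done numerically with QuTiP as a quick cross-check: $X\left|+\right\rangle = \left|+\right\rangle$ and $X\left|-\right\rangle = -\left|-\right\rangle$, i.e. eigenvalues $+1$ and $-1$.
```python
ket_plus = (qt.basis(2, 0) + qt.basis(2, 1)).unit()
ket_minus = (qt.basis(2, 0) - qt.basis(2, 1)).unit()
print(np.allclose((qt.sigmax() * ket_plus).full(), ket_plus.full()))     # eigenvalue +1
print(np.allclose((qt.sigmax() * ket_minus).full(), -ket_minus.full()))  # eigenvalue -1
```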
Doing the same for `Rz`, we add an input for the rotation angle:
```python
verify_rz_eigenstates = qsharp.compile("""
open Microsoft.Quantum.Diagnostics;
operation VerifyRzEigenstates(angle : Double) : Unit {
using (q = Qubit()) {
// Prepare |0⟩ by doing nothing.
// Check that the Rz operation does nothing.
Rz(angle, q);
Message("Checking that |0⟩ is an eigenstate of the Rz operation.");
DumpMachine();
// Reset so that we're ready for the next check.
Reset(q);
// Next, do the same with |1⟩.
X(q);
Rz(angle, q);
Message("");
Message("Checking that |1⟩ is an eigenstate of the Rz operation.");
DumpMachine();
Reset(q);
}
}
""")
```
```python
verify_rz_eigenstates.simulate(angle=0.123)
```
Checking that |0⟩ is an eigenstate of the Rz operation.
|0⟩ 0.9981094709838179 + -0.061461239268365025𝑖
|1⟩ 0 + 0𝑖
Checking that |1⟩ is an eigenstate of the Rz operation.
|0⟩ 0 + 0𝑖
|1⟩ 1.0000000000000002 + -3.580928224338447E-18𝑖
()
```python
verify_rz_eigenstates.simulate(angle=4.567)
```
Checking that |0⟩ is an eigenstate of the Rz operation.
|0⟩ -0.6538817488057485 + -0.7565967608830586𝑖
|1⟩ 0 + 0𝑖
Checking that |1⟩ is an eigenstate of the Rz operation.
|0⟩ 0 + 0𝑖
|1⟩ 1.0000000000000002 + -2.699466214891338E-17𝑖
()
Using the hint, we can find what eigenstates we should try for the `Y` and `CNOT` operations:
```python
qt.sigmay().eigenstates()
```
(array([-1., 1.]),
array([Quantum object: dims = [[2], [1]], shape = (2, 1), type = ket
Qobj data =
[[-0.70710678+0.j ]
[ 0. +0.70710678j]],
Quantum object: dims = [[2], [1]], shape = (2, 1), type = ket
Qobj data =
[[-0.70710678+0.j ]
[ 0. -0.70710678j]]], dtype=object))
```python
qt.qip.operations.cnot().eigenstates()
```
(array([-1., 1., 1., 1.]),
array([Quantum object: dims = [[2, 2], [1, 1]], shape = (4, 1), type = ket
Qobj data =
[[ 0. ]
[ 0. ]
[ 0.70710678]
[-0.70710678]],
Quantum object: dims = [[2, 2], [1, 1]], shape = (4, 1), type = ket
Qobj data =
[[0.]
[1.]
[0.]
[0.]],
Quantum object: dims = [[2, 2], [1, 1]], shape = (4, 1), type = ket
Qobj data =
[[1.]
[0.]
[0.]
[0.]],
Quantum object: dims = [[2, 2], [1, 1]], shape = (4, 1), type = ket
Qobj data =
[[0. ]
[0. ]
[0.70710678]
[0.70710678]]], dtype=object))
That is, $(|0\rangle + i |1\rangle) / \sqrt{2}$ and $(|0\rangle - i |1\rangle) / \sqrt{2}$ are eigenstates of the `Y` operation, while $|00\rangle$, $|01\rangle$, $|1+\rangle$ and $|1-\rangle$ are eigenstates of the `CNOT` operation.
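As a quick numerical check of the last claim (in QuTiP rather than Q#), $\left|1-\right\rangle$ should be an eigenstate of the `CNOT` operation with eigenvalue $-1$:
```python
ket_one_minus = qt.tensor(qt.basis(2, 1), (qt.basis(2, 0) - qt.basis(2, 1)).unit())
# CNOT|1-⟩ = -|1-⟩, matching the -1 eigenvalue reported above.
print(np.allclose((qt.qip.operations.cnot() * ket_one_minus).full(), -ket_one_minus.full()))
```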
----
### Exercise 8.4
**Verify that $\left|0\right\rangle\left\langle 0\right| \otimes \mathbb{1} + \left|1\right\rangle\left\langle{1}\right| \otimes X$ is the same as:**
\begin{align}
U_{\mathrm{CNOT}} = \left(\begin{matrix}
\mathbb{1} & 0 \\
0 & X
\end{matrix}\right).
\end{align}
*HINT*: You can verify this by hand, by using NumPy's `np.kron` function, or QuTiP's `qt.tensor` function.
If you need a refresher, check out how you simulated teleportation in Chapter 5, or check out the derivation of the Deutsch–Jozsa algorithm in Chapter 7.
```python
ket0 = qt.basis(2, 0)
ket1 = qt.basis(2, 1)
```
```python
projector_0 = ket0 * ket0.dag()
projector_0
```
Quantum object: dims = [[2], [2]], shape = (2, 2), type = oper, isherm = True\begin{equation*}\left(\begin{array}{*{11}c}1.0 & 0.0\\0.0 & 0.0\\\end{array}\right)\end{equation*}
```python
projector_1 = ket1 * ket1.dag()
projector_1
```
Quantum object: dims = [[2], [2]], shape = (2, 2), type = oper, isherm = True\begin{equation*}\left(\begin{array}{*{11}c}0.0 & 0.0\\0.0 & 1.0\\\end{array}\right)\end{equation*}
```python
qt.tensor(projector_0, qt.qeye(2)) + qt.tensor(projector_1, qt.sigmax())
```
Quantum object: dims = [[2, 2], [2, 2]], shape = (4, 4), type = oper, isherm = True\begin{equation*}\left(\begin{array}{*{11}c}1.0 & 0.0 & 0.0 & 0.0\\0.0 & 1.0 & 0.0 & 0.0\\0.0 & 0.0 & 0.0 & 1.0\\0.0 & 0.0 & 1.0 & 0.0\\\end{array}\right)\end{equation*}
----
### Exercise 8.5
**Either by hand or using QuTiP, verify that state dumped by running the Q# snippet below is the same as $\left|-1\right\rangle = \left|-\right\rangle \otimes \left|1\right\rangle$.**
```Q#
using ((control, target) = (Qubit(), Qubit())) {
H(control);
X(target);
CZ(control, target);
DumpRegister((), [control, target]);
Reset(control);
Reset(target);
}
```
*NOTE*: If you seem to get the right answer other than that the order of the qubits are swapped, note that `DumpMachine` uses a _little-endian_ representation to order states.
In little-endian, |2⟩ is short-hand for |01⟩, not |10⟩.
If this seems confusing, blame the x86 processor architecture…
Let's first run the above snippet to see what output is generated.
```python
run_exercise_85 = qsharp.compile("""
open Microsoft.Quantum.Diagnostics;
operation RunExercise85() : Unit {
using ((control, target) = (Qubit(), Qubit())) {
H(control);
X(target);
CZ(control, target);
DumpRegister((), [control, target]);
Reset(control);
Reset(target);
}
}
""")
```
```python
run_exercise_85.simulate()
```
|0⟩ 0 + 0𝑖
|1⟩ 0 + 0𝑖
|2⟩ 0.7071067811865476 + 0𝑖
|3⟩ -0.7071067811865476 + 0𝑖
()
Next, let's compute what $\left|-1\right\rangle = \left|-\right\rangle \otimes \left|1\right\rangle$ in vector notation by using QuTiP.
```python
ket_minus = qt.Qobj([
[1],
[-1]
]) / np.sqrt(2)
ket1 = qt.basis(2, 1)
```
```python
qt.tensor(ket_minus, ket1)
```
Quantum object: dims = [[2, 2], [1, 1]], shape = (4, 1), type = ket\begin{equation*}\left(\begin{array}{*{11}c}0.0\\0.707\\0.0\\-0.707\\\end{array}\right)\end{equation*}
As the note suggests, these two outputs appear different at first, but the resolution is that Q# uses little-endian notation, such that "|2⟩" means the |01⟩ amplitude, which QuTiP prints as the second row.
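The index-to-bitstring conversion can be spelled out in plain Python as a small illustration: in little-endian ordering, qubit 0 is the least significant bit of the basis-state index, so index 2 corresponds to the bitstring 01.
```python
index, n_qubits = 2, 2
bitstring = ''.join(str((index >> q) & 1) for q in range(n_qubits))
print(bitstring)  # '01': qubit 0 reads 0, qubit 1 reads 1
```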
We can make this more clear by manually telling IQ# to print out as bitstrings instead of little-endian notation.
**WARNING:** Calling the `%config` magic from Python is not officially supported, and may break in future versions of Q#.
```python
qsharp.client._execute('%config dump.basisStateLabelingConvention = "Bitstring"')
```
'"Bitstring"'
```python
run_exercise_85.simulate()
```
|00⟩ 0 + 0𝑖
|01⟩ 0.7071067811865476 + 0𝑖
|10⟩ 0 + 0𝑖
|11⟩ -0.7071067811865476 + 0𝑖
()
----
### Epilogue
_The following cell logs what version of the components this was last tested with._
```python
qsharp.component_versions()
```
{'iqsharp': LooseVersion ('0.12.20070124'),
'Jupyter Core': LooseVersion ('1.4.0.0'),
'.NET Runtime': LooseVersion ('.NETCoreApp,Version=v3.1'),
'qsharp': LooseVersion ('0.12.2007.124')}
|
8a4c266725d57f062498ee82ff36a47c22525220
| 43,341 |
ipynb
|
Jupyter Notebook
|
ch08/ch08-exercise-solutions.ipynb
|
jmgimeno/learn-qc-with-python-and-qsharp
|
9b3c4bba9cf5c09df089be9efb9ac0ea57dfc0ac
|
[
"MIT"
] | null | null | null |
ch08/ch08-exercise-solutions.ipynb
|
jmgimeno/learn-qc-with-python-and-qsharp
|
9b3c4bba9cf5c09df089be9efb9ac0ea57dfc0ac
|
[
"MIT"
] | null | null | null |
ch08/ch08-exercise-solutions.ipynb
|
jmgimeno/learn-qc-with-python-and-qsharp
|
9b3c4bba9cf5c09df089be9efb9ac0ea57dfc0ac
|
[
"MIT"
] | null | null | null | 44.727554 | 17,132 | 0.680164 | true | 4,226 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.810479 | 0.879147 | 0.71253 |
__label__eng_Latn
| 0.928265 | 0.493777 |
# New features v0.3.3
[1. Classical-quantum maps and mixed circuits](#1.-Classical-quantum-maps-and-mixed-circuits)
- `discopy.quantum.cqmap` implements Bob and Aleks' classical-quantum maps.
- Now `discopy.quantum.circuit` diagrams have two generating objects: `bit` and `qubit`.
- New boxes `Discard`, `Measure` and `ClassicalGate` can be simulated with `cqmap` or sent to `pytket`.
[2. ZX diagrams and PyZX interface](#2.-ZX-diagrams-and-PyZX-interface)
- `discopy.quantum.zx` implements diagrams with spiders, swaps and Hadamard boxes.
- `to_pyzx` and `from_pyzx` methods can be used to turn diagrams into graphs, simplify them, and turn them back into diagrams.
[3. Parametrised diagrams, formal sums and automatic gradients](#3.-Parametrised-diagrams,-formal-sums-and-automatic-gradients)
- We can use `sympy.Symbols` as variables in our diagrams (tensor, circuit or ZX).
- We can take formal sums of diagrams. `TensorFunctor` sends formal sums to concrete sums.
- Given a diagram (tensor, circuit or ZX) with a free variable, we can compute its gradient as a sum.
[4. Learning functors, diagrammatically](#4.-Learning-functors,-diagrammatically)
- We can use automatic gradients to learn functors (classical and/or quantum) from data.
## 1. Classical-quantum maps and mixed circuits
```python
from discopy.quantum import *
circuit = H @ X >> CX >> Measure() @ Id(qubit)
circuit.draw()
```
```python
circuit.eval()
```
CQMap(dom=Q(Dim(2, 2)), cod=C(Dim(2)) @ Q(Dim(2)), array=[0.0, 0.0, 0.0, 0.49999997, 0.49999997, 0.0, 0.0, 0.0, ..., 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.49999997])
```python
circuit.init_and_discard().draw()
```
```python
circuit.init_and_discard().eval()
```
CQMap(dom=CQ(), cod=C(Dim(2)), array=[0.49999997, 0.49999997])
```python
circuit.to_tk()
```
tk.Circuit(2, 1).H(0).X(1).CX(0, 1).Measure(0, 0)
```python
from pytket.backends.ibm import AerBackend
backend = AerBackend()
circuit.eval(backend)
```
Tensor(dom=Dim(1), cod=Dim(2), array=[0.5019531, 0.49804688])
```python
postprocess = ClassicalGate('postprocess', 2, 0, array=[1, 0, 0, 0])
postprocessed_circuit = Ket(0, 0) >> H @ X >> CX >> Measure() @ Measure() >> postprocess
postprocessed_circuit.draw(aspect='auto')
```
```python
postprocessed_circuit.to_tk()
```
tk.Circuit(2, 2).H(0).X(1).CX(0, 1).Measure(0, 0).Measure(0, 1).post_process(ClassicalGate('postprocess', bit @ bit, Ty(), array=[1, 0, 0, 0]))
```python
postprocessed_circuit.eval(backend)
```
Tensor(dom=Dim(1), cod=Dim(1), array=[0.47851562])
## 2. ZX diagrams and PyZX interface
```python
from discopy.quantum.zx import *
from pyzx import draw
bialgebra = Z(1, 2, .25) @ Z(1, 2, .75) >> Id(1) @ SWAP @ Id(1) >> X(2, 1) @ X(2, 1, .5)
bialgebra.draw(aspect='equal')
draw(bialgebra.to_pyzx())
```
```python
from pyzx import generate, simplify
graph = generate.cliffordT(2, 5)
print("From DisCoPy:")
Diagram.from_pyzx(graph).draw()
print("To PyZX:")
draw(graph)
simplify.full_reduce(graph)
draw(graph)
print("And back!")
Diagram.from_pyzx(graph).draw()
```
## 3. Parametrised diagrams, formal sums and automatic gradients
```python
from sympy.abc import phi
from discopy import drawing
from discopy.quantum import *
circuit = sqrt(2) @ Ket(0, 0) >> H @ Rx(phi) >> CX >> Bra(0, 1)
drawing.equation(circuit, circuit.subs(phi, .5), symbol="|-->")
```
```python
gradient = scalar(1j) @ (circuit >> circuit[::-1]).grad(phi)
gradient.draw()
```
```python
import numpy as np
x = np.arange(0, 1, 0.05)
y = np.array([circuit.subs(phi, i).measure() for i in x])
dy = np.array([gradient.subs(phi, i).eval().array for i in x])
```
```python
from matplotlib import pyplot as plt
plt.subplot(2, 1, 1)
plt.plot(x, y)
plt.ylabel("Amplitude")
plt.subplot(2, 1, 2)
plt.plot(x, dy)
plt.ylabel("Gradient")
```
## 4. Learning functors, diagrammatically
```python
from discopy import *
s, n = Ty('s'), Ty('n')
Alice, loves, Bob = Word("Alice", n), Word("loves", n.r @ s @ n.l), Word("Bob", n)
grammar = Cup(n, n.r) @ Id(s) @ Cup(n.l, n)
parsing = {
"{} {} {}".format(subj, verb, obj): subj @ verb @ obj >> grammar
for subj in [Alice, Bob] for verb in [loves] for obj in [Alice, Bob]}
pregroup.draw(parsing["Alice loves Bob"], aspect='equal')
print("Our favorite toy dataset:")
corpus = {
"{} {} {}".format(subj, verb, obj): int(obj != subj)
for subj in [Alice, Bob] for verb in [loves] for obj in [Alice, Bob]}
for sentence, scalar in corpus.items():
print("'{}' is {}.".format(sentence, "true" if scalar else "false"))
```
```python
from sympy import symbols
parameters = symbols("a0 a1 b0 b1 c00 c01 c10 c11")
F = TensorFunctor(
ob={s: 1, n: 2},
ar={Alice: symbols("a0 a1"),
Bob: symbols("b0 b1"),
loves: symbols("c00 c01 c10 c11")})
gradient = F(parsing["Alice loves Bob"]).grad(*parameters)
gradient
```
Tensor(dom=Dim(8), cod=Dim(1), array=[1.0*b0*c00 + 1.0*b1*c01, 1.0*b0*c10 + 1.0*b1*c11, 1.0*a0*c00 + 1.0*a1*c10,
1.0*a0*c01 + 1.0*a1*c11, 1.0*a0*b0, 1.0*a0*b1, 1.0*a1*b0, 1.0*a1*b1])
```python
gradient.subs(list(zip(parameters, 8 * [0])))
```
Tensor(dom=Dim(8), cod=Dim(1), array=[0, 0, 0, 0, 0, 0, 0, 0])
```python
from discopy.quantum import *
gates = {
Alice: ClassicalGate('Alice', 0, 1, symbols("a0 a1")),
Bob: ClassicalGate('Bob', 0, 1, symbols("b0 b1")),
loves: ClassicalGate('loves', 0, 2, symbols("c00 c01 c10 c11"))}
F = CircuitFunctor(ob={s: Ty(), n: bit}, ar=gates)
F(parsing["Alice loves Bob"]).draw()
```
```python
F(parsing["Alice loves Alice"]).grad(symbols("a0")).draw()
```
```python
F(parsing["Alice loves Alice"]).grad(symbols("a0")).eval()
```
CQMap(dom=CQ(), cod=CQ(), array=[2.0*a0*c00 + 1.0*a1*c01 + 1.0*a1*c10])
|
d7779260acb5eb742a9e706d8ecb23f0680baceb
| 199,630 |
ipynb
|
Jupyter Notebook
|
docs/notebooks/new-features-0.3.3.ipynb
|
paddlelaw/discopy
|
86b27fe75ef220bfaaf837555a33553b710c7287
|
[
"BSD-3-Clause"
] | null | null | null |
docs/notebooks/new-features-0.3.3.ipynb
|
paddlelaw/discopy
|
86b27fe75ef220bfaaf837555a33553b710c7287
|
[
"BSD-3-Clause"
] | null | null | null |
docs/notebooks/new-features-0.3.3.ipynb
|
paddlelaw/discopy
|
86b27fe75ef220bfaaf837555a33553b710c7287
|
[
"BSD-3-Clause"
] | null | null | null | 135.895167 | 21,740 | 0.797746 | true | 1,974 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.731059 | 0.709019 | 0.518335 |
__label__eng_Latn
| 0.507798 | 0.042594 |
dS/dt = -bS + gI, dI/dt = bS - gI (b is used for beta and g for gamma)
```python
from sympy import *
from sympy.abc import S,I,t,b,g
```
```python
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import odeint
import pylab as pl
```
```python
# critical points
P=-b*S+g*I
Q=b*S-g*I
# set P(S,I)=0 and Q(S,I)=0
Peqn=Eq(P,0)
Qeqn=Eq(Q,0)
print(solve((Peqn,Qeqn),S,I))
# eigenvalues and eigenvectors
M=Matrix([[-b,g],[b,-g]])
print(M.eigenvals())
pprint(M.eigenvects())
```
{S: I*g/b}
{-b - g: 1, 0: 1}
⎡⎛ ⎡⎡g⎤⎤⎞ ⎤
⎢⎜ ⎢⎢─⎥⎥⎟ ⎛ ⎡⎡-1⎤⎤⎞⎥
⎢⎜0, 1, ⎢⎢b⎥⎥⎟, ⎜-b - g, 1, ⎢⎢ ⎥⎥⎟⎥
⎢⎜ ⎢⎢ ⎥⎥⎟ ⎝ ⎣⎣1 ⎦⎦⎠⎥
⎣⎝ ⎣⎣1⎦⎦⎠ ⎦
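Since the total $N = S + I$ is conserved, the system reduces to a single linear ODE. The following SymPy check (an illustrative sketch; the symbol names below are introduced only for this check) confirms that $S(t) = S^* + (S_0 - S^*)e^{-(b+g)t}$ with $S^* = gN/(b+g)$ satisfies the system, consistent with the eigenvalues $0$ and $-(b+g)$ found above.
```python
from sympy import symbols, exp, diff, simplify
tt, bb, gg, N0, S0 = symbols('t beta gamma N S_0', positive=True)
S_star = gg * N0 / (bb + gg)                          # equilibrium S* = gamma*N/(beta+gamma)
S_t = S_star + (S0 - S_star) * exp(-(bb + gg) * tt)   # candidate solution
I_t = N0 - S_t                                        # I = N - S (total population conserved)
print(simplify(diff(S_t, tt) - (-bb * S_t + gg * I_t)))   # -> 0, so dS/dt = -bS + gI holds
```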
```python
b=1
g=1
def dx_dt(x,t):
return [ -b*x[0]+g*x[1] , b*x[0]-g*x[1] ]
# forward-time trajectories
ts=np.linspace(0,10,500)
ic=np.linspace(20000,100000,3)
for r in ic:
for s in ic:
x0=[r,s]
xs=odeint(dx_dt,x0,ts)
plt.plot(xs[:,0],xs[:,1],"-", color="orangered", lw=1.5)
# backward-time trajectories
ts=np.linspace(0,-10,500)
ic=np.linspace(20000,100000,3)
for r in ic:
for s in ic:
x0=[r,s]
xs=odeint(dx_dt,x0,ts)
plt.plot(xs[:,0],xs[:,1],"-", color="orangered", lw=1.5)
# axis labels and font style
plt.xlabel('S',fontsize=20)
plt.ylabel('I',fontsize=20)
plt.tick_params(labelsize=12)
plt.ticklabel_format(style="sci", scilimits=(0,0))
plt.xlim(0,100000)
plt.ylim(0,100000)
# vector field
X,Y=np.mgrid[0:100000:15j,0:100000:15j]
u=-b*X+g*Y
v=b*X-g*Y
pl.quiver(X,Y,u,v,color='dimgray')
plt.savefig("SIS.pdf",bbox_inches='tight')
plt.show()
```
```python
b=1
g=3
def dx_dt(x,t):
return [ -b*x[0]+g*x[1] , b*x[0]-g*x[1] ]
# forward-time trajectories
ts=np.linspace(0,10,500)
ic=np.linspace(20000,100000,3)
for r in ic:
for s in ic:
x0=[r,s]
xs=odeint(dx_dt,x0,ts)
plt.plot(xs[:,0],xs[:,1],"-", color="orangered", lw=1.5)
# backward-time trajectories
ts=np.linspace(0,-10,500)
ic=np.linspace(20000,100000,3)
for r in ic:
for s in ic:
x0=[r,s]
xs=odeint(dx_dt,x0,ts)
plt.plot(xs[:,0],xs[:,1],"-", color="orangered", lw=1.5)
# axis labels and font style
plt.xlabel('S',fontsize=20)
plt.ylabel('I',fontsize=20)
plt.tick_params(labelsize=12)
plt.ticklabel_format(style="sci", scilimits=(0,0))
plt.xlim(0,100000)
plt.ylim(0,100000)
# vector field
X,Y=np.mgrid[0:100000:15j,0:100000:15j]
u=-b*X+g*Y
v=b*X-g*Y
pl.quiver(X,Y,u,v,color='dimgray')
plt.show()
```
```python
b=3
g=1
def dx_dt(x,t):
return [ -b*x[0]+g*x[1] , b*x[0]-g*x[1] ]
# forward-time trajectories
ts=np.linspace(0,10,500)
ic=np.linspace(20000,100000,3)
for r in ic:
for s in ic:
x0=[r,s]
xs=odeint(dx_dt,x0,ts)
plt.plot(xs[:,0],xs[:,1],"-", color="orangered", lw=1.5)
# backward-time trajectories
ts=np.linspace(0,-10,500)
ic=np.linspace(20000,100000,3)
for r in ic:
for s in ic:
x0=[r,s]
xs=odeint(dx_dt,x0,ts)
plt.plot(xs[:,0],xs[:,1],"-", color="orangered", lw=1.5)
# axis labels and font style
plt.xlabel('S',fontsize=20)
plt.ylabel('I',fontsize=20)
plt.tick_params(labelsize=12)
plt.ticklabel_format(style="sci", scilimits=(0,0))
plt.xlim(0,100000)
plt.ylim(0,100000)
# vector field
X,Y=np.mgrid[0:100000:15j,0:100000:15j]
u=-b*X+g*Y
v=b*X-g*Y
pl.quiver(X,Y,u,v,color='dimgray')
plt.show()
```
```python
```
|
1e838fa8b52019a28d2d97c80755118130f9ba0a
| 424,551 |
ipynb
|
Jupyter Notebook
|
ModeloSIS(no infeccioso).ipynb
|
deleonja/dynamical-sys
|
024acc61a4e36d46b1502ce0391707e4afbc58e2
|
[
"MIT"
] | null | null | null |
ModeloSIS(no infeccioso).ipynb
|
deleonja/dynamical-sys
|
024acc61a4e36d46b1502ce0391707e4afbc58e2
|
[
"MIT"
] | null | null | null |
ModeloSIS(no infeccioso).ipynb
|
deleonja/dynamical-sys
|
024acc61a4e36d46b1502ce0391707e4afbc58e2
|
[
"MIT"
] | null | null | null | 1,671.46063 | 83,752 | 0.793141 | true | 1,479 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.939025 | 0.868827 | 0.81585 |
__label__kor_Hang
| 0.088857 | 0.733826 |
# Spectral Analysis of Deterministic Signals
*This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing. Please direct questions and suggestions to [Sascha.Spors@uni-rostock.de](mailto:Sascha.Spors@uni-rostock.de).*
## Zero-Padding
### Concept
Let's assume a signal $x_N[k]$ of finite length $N$, for instance a windowed signal $x_N[k] = x[k] \cdot \text{rect}_N[k]$. The discrete Fourier transformation (DFT) of $x_N[k]$ reads
\begin{equation}
X_N[\mu] = \sum_{k=0}^{N-1} x_N[k] \; w_N^{\mu k}
\end{equation}
where $w_N = \mathrm{e}^{-\mathrm{j} \frac{2 \pi}{N}}$ denotes the kernel of the DFT. For a sampled time-domain signal, the distance in frequency between two neighboring coefficients is given as $\Delta f = \frac{f_s}{N}$, where $f_s = \frac{1}{T}$ denotes the sampling frequency. Hence, if $N$ is increased the distance between neighboring frequencies is decreased. This leads to the concept of zero-padding in spectral analysis. Here the signal $x_N[k]$ of finite length is filled up with (M-N) zero values to a total length $M \geq N$
\begin{equation}
x_M[k] = \begin{cases}
x_N[k] & \mathrm{for} \; k=0,1,\dots,N-1 \\
0 & \mathrm{for} \; k=N,N+1,\dots,M-1
\end{cases}
\end{equation}
Appending zeros does not change the contents of the signal itself. However, the DFT $X_M[\mu]$ of $x_M[k]$ has now a decreased distance between neighboring frequencies $\Delta f = \frac{f_s}{M}$.
The question arises what influence zero-padding has on the spectrum and if it can enhance spectral analysis. On first sight it seems that the frequency resolution is higher, however do we get more information on the signal? In order to discuss this, a short numerical example is evaluated followed by a derivation of the mathematical relations between the spectrum $X_M[k]$ with zero-padding and $X_N[k]$ without zero-padding.
#### Example - Zero-Padding
The following example computes and plots the magnitude spectra $|X[\mu]|$ of a truncated complex exponential signal $x_N[k] = \mathrm{e}^{\,\mathrm{j}\,\Omega_0\,k} \cdot \text{rect}_N[k]$ and its zero-padded version $x_M[k]$.
```python
import matplotlib.pyplot as plt
import numpy as np
N = 16 # length of the signal
M = 32 # length of zero-padded signal
Om0 = 5.33*(2*np.pi/N) # frequency of exponential signal
# DFT of the exponential signal
xN = np.exp(1j*Om0*np.arange(N))
XN = np.fft.fft(xN)
# DFT of the zero-padded exponential signal
xM = np.concatenate((xN, np.zeros(M-N)))
XM = np.fft.fft(xM)
# plot spectra
plt.figure(figsize=(10, 6))
plt.subplot(121)
plt.stem(np.arange(N), np.abs(XN), use_line_collection=True)
plt.title(r'$\mathrm{{DFT}}_{{{0}}}$ of $e^{{j \Omega_0 k}}$ without zero-padding'.format(N))
plt.xlabel(r'$\mu$')
plt.ylabel(r'$|X_N[\mu]|$')
plt.axis([0, N, 0, 18])
plt.grid()
plt.subplot(122)
plt.stem(np.arange(M), np.abs(XM), use_line_collection=True)
plt.title(r'$\mathrm{{DFT}}_{{{0}}}$ of $e^{{j \Omega_0 k}}$ with zero-padding'.format(M))
plt.xlabel(r'$\mu$')
plt.ylabel(r'$|X_M[\mu]|$')
plt.axis([0, M, 0, 18])
plt.grid()
```
**Exercise**
* Check the two spectra carefully for relations. Are there common coefficients for the case $M = 2 N$?
* Increase the length `M` of the zero-padded signal $x_M[k]$. Can you gain additional information from the spectrum?
Solution: Since the DFT length has been doubled, every second coefficient of $X_M[\mu]$ coincides with a coefficient of $X_N[\mu]$; the coefficients in between are additional samples of the same underlying spectrum. Increasing the length `M` samples the spectrum more densely, so the observed maximum of the main lobe moves closer to its true position, but no additional information on the signal is gained.
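This relation can also be checked numerically with the arrays `XN` and `XM` computed above (here $M = 2N$):
```python
# every second coefficient of the zero-padded DFT equals the original DFT
print(np.allclose(XM[::2], XN))
```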
### Interpolation of the Discrete Fourier Transformation
Let's step back to the discrete-time Fourier transformation (DTFT) of the finite-length signal $x_N[k]$ without zero-padding
\begin{equation}
X_N(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \sum_{k = -\infty}^{\infty} x_N[k] \, \mathrm{e}^{\,-\mathrm{j}\,\Omega\,k} = \sum_{k=0}^{N-1} x_N[k] \,\mathrm{e}^{-\,\mathrm{j}\,\Omega\,k}
\end{equation}
The discrete Fourier transformation (DFT) is derived by sampling $X_N(\mathrm{e}^{\mathrm{j}\,\Omega})$ at $\Omega = \mu \frac{2 \pi}{N}$
\begin{equation}
X_N[\mu] = X_N(\mathrm{e}^{\,\mathrm{j}\, \Omega}) \big\vert_{\Omega = \mu \frac{2 \pi}{N}} = \sum_{k=0}^{N-1} x_N[k] \, \mathrm{e}^{\,-\mathrm{j}\, \mu \frac{2\pi}{N}\,k}
\end{equation}
Since the DFT coefficients $X_N[\mu]$ are sampled equidistantly from the DTFT $X_N(\mathrm{e}^{\,\mathrm{j}\, \Omega})$, we can reconstruct the DTFT of $x_N[k]$ from the DFT coefficients by interpolation. Introducing the inverse DFT of $X_N[\mu]$
\begin{equation}
x_N[k] = \frac{1}{N} \sum_{\mu = 0}^{N-1} X_N[\mu] \; \mathrm{e}^{\,\mathrm{j}\,\frac{2 \pi}{N} \mu \,k}
\end{equation}
into the DTFT
\begin{equation}
X_N(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \sum_{k=0}^{N-1} x_N[k] \; \mathrm{e}^{-\,\mathrm{j}\, \Omega\, k} =
\sum_{\mu=0}^{N-1} X_N[\mu] \cdot \frac{1}{N} \sum_{k=0}^{N-1} \mathrm{e}^{-\mathrm{j}\, k \,(\Omega - \frac{2 \pi}{N} \mu)}
\end{equation}
reveals the relation between $X_N(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ and $X_N[\mu]$. The last sum over $k$ constitutes a [geometric series](https://en.wikipedia.org/wiki/Geometric_series) and can be rearranged to
\begin{equation}
X_N(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \sum_{\mu=0}^{N-1} X_N[\mu] \cdot \frac{1}{N} \cdot \frac{1-\mathrm{e}^{-\mathrm{j}(\Omega-\frac{2\pi}{N}\mu)N}}{1-\mathrm{e}^{-\mathrm{j}(\Omega-\frac{2\pi}{N}\mu)}}
\end{equation}
By factorizing the last fraction to
\begin{equation}
X_N(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \sum_{\mu=0}^{N-1} X_N[\mu] \cdot \frac{1}{N} \cdot \frac{\mathrm{e}^{-\mathrm{j}\frac{(\Omega-\frac{2\pi}{N}\mu)N}{2}}}{\mathrm{e}^{-\mathrm{j}\frac{\Omega-\frac{2\pi}{N}\mu}{2}}} \cdot \frac{\mathrm{e}^{\mathrm{j}\frac{(\Omega-\frac{2\pi}{N}\mu)N}{2}}-\mathrm{e}^{-\mathrm{j}\frac{(\Omega-\frac{2\pi}{N}\mu)N}{2}}}{\mathrm{e}^{\mathrm{j}\frac{\Omega-\frac{2\pi}{N}\mu}{2}}-\mathrm{e}^{-\mathrm{j}\frac{\Omega-\frac{2\pi}{N}\mu}{2}}}
\end{equation}
and making use of [Euler's identity](https://en.wikipedia.org/wiki/Euler%27s_identity) $2\mathrm{j}\cdot\sin(x)=\mathrm{e}^{\mathrm{j} x}-\mathrm{e}^{-\mathrm{j} x}$ this can be simplified to
\begin{equation}
X_N(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \sum_{\mu=0}^{N-1} X_N[\mu] \cdot \mathrm{e}^{-\mathrm{j}\frac{(\Omega-\frac{2\pi}{N}\mu)(N-1)}{2}} \cdot \frac{1}{N} \cdot \frac{\sin(N\frac{\Omega-\frac{2\pi}{N}\mu}{2})}{\sin(\frac{\Omega-\frac{2\pi}{N}\mu}{2})}
\end{equation}
The last fraction can be written in terms of the $N$-th order periodic sinc function (aliased sinc function, [Dirichlet kernel](https://en.wikipedia.org/wiki/Dirichlet_kernel)), which is defined as
\begin{equation}
\text{psinc}_N (\Omega) = \frac{1}{N} \frac{\sin(\frac{N}{2} \Omega)}{ \sin(\frac{1}{2} \Omega)}
\end{equation}
According to this definition, the periodic sinc function is not defined at $\Omega = 2 \pi \,n$ for $n \in \mathbb{Z}$. This is resolved by applying [L'Hôpital's rule](https://en.wikipedia.org/wiki/L%27H%C3%B4pital%27s_rule) which results in $\text{psinc}_N (2 \pi \,n) = 1$ for $n \in \mathbb{Z}$.
Using the periodic sinc function, the DTFT $X_N(\mathrm{e}^{\,\mathrm{j}\, \Omega})$ of a finite-length signal $x_N[k]$ can be derived from its DFT $X_N[\mu]$ by
\begin{equation}
X_N(\mathrm{e}^{\,\mathrm{j}\, \Omega}) = \sum_{\mu=0}^{N-1} X_N[\mu] \cdot \mathrm{e}^{-\,\mathrm{j}\, \frac{( \Omega - \frac{2 \pi}{N} \mu ) (N-1)}{2}} \cdot \text{psinc}_N ( \Omega - \frac{2 \pi}{N} \mu )
\end{equation}
#### Example - Periodic sinc function
This example illustrates the
1. periodic sinc function, and
2. interpolation of $X_N(\mathrm{e}^{\,\mathrm{j}\, \Omega})$ from $X_N[\mu]$ for an exponential signal using above relation.
```python
N = 16 # order of periodic sinc function
M = 1024 # number of frequency points
Om = np.linspace(-np.pi, np.pi, M)
def psinc(x, N):
'''Periodic sinc function.'''
x = np.asanyarray(x)
y = np.where(x == 0, 1.0e-20, x)
return 1/N * np.sin(N/2*y)/np.sin(1/2*y)
# plot psinc
plt.figure(figsize=(10, 8))
plt.plot(Om, psinc(Om, 16))
plt.xlabel(r'$\Omega$')
plt.ylabel(r'$\mathrm{psinc}_N (\Omega)$')
plt.grid()
```
```python
N = 16 # length of the signal
M = 1024 # number of frequency points for DTFT
Om0 = 5.33*(2*np.pi/N) # frequency of exponential signal
# DFT of the exponential signal
xN = np.exp(1j*Om0*np.arange(N))
XN = np.fft.fft(xN)
# interpolation of DTFT from DFT coefficients
Xi = np.asarray(np.zeros(M), dtype=complex)
for mu in np.arange(M):
Omd = 2*np.pi/M*mu-2*np.pi*np.arange(N)/N
interpolator = psinc(Omd, N) * np.exp(-1j*Omd*(N-1)/2)
Xi[mu] = np.sum(XN * interpolator)
# plot spectra
plt.figure(figsize=(10, 8))
ax1 = plt.gca()
plt.plot(np.arange(M)*2*np.pi/M, abs(Xi), label=r'$|X_N(e^{j \Omega})|$')
plt.stem(np.arange(N)*2*np.pi/N, abs(XN), basefmt=' ', linefmt='C1',
markerfmt='C1o', label=r'$|X_N[\mu]|$', use_line_collection=True)
plt.title(r'DFT $X_N[\mu]$ and interpolated DTFT $X_N(e^{j \Omega})$', y=1.08)
plt.ylim([-0.5, N+2])
plt.legend()
ax1.set_xlabel(r'$\Omega$')
ax1.set_xlim([0, 2*np.pi])
ax1.grid()
ax2 = ax1.twiny()
ax2.set_xlim([0, N])
ax2.set_xlabel(r'$\mu$', color='C1')
ax2.tick_params('x', colors='C1')
```
### Relation between Discrete Fourier Transformations with and without Zero-Padding
It was already outlined above that the DFT is related to the DTFT by sampling. Hence, the DFT $X_M[\mu]$ is given by sampling the DTFT $X_M(\mathrm{e}^{\mathrm{j}\, \Omega})$ at $\Omega = \frac{2 \pi}{M} \mu$. Since the zero-padded signal $x_M[k]$ differs from $x_N[k]$ only with respect to the additional zeros, the DTFTs of both are equal
\begin{equation}
X_M(\mathrm{e}^{\mathrm{j}\, \Omega}) = X_N(\mathrm{e}^{\mathrm{j}\, \Omega})
\end{equation}
The desired relation between the DFTs $X_N[\mu]$ and $X_M[\mu]$ of the signal $x_N[k]$ and its zero-padded version $x_M[k]$ can be found by sampling the interpolated DTFT $X_N(\mathrm{e}^{\mathrm{j}\, \Omega})$ at $\Omega = \frac{2 \pi}{M} \mu$
\begin{equation}
X_M[\mu] = \sum_{\eta=0}^{N-1} X_N[\eta] \cdot \mathrm{e}^{\,-\mathrm{j}\, \frac{( \frac{2 \pi}{M} \mu - \frac{2 \pi}{N} \eta ) (N-1)}{2}} \cdot \text{psinc}_N \left( \frac{2 \pi}{M} \mu - \frac{2 \pi}{N} \eta \right)
\end{equation}
for $\mu = 0, 1, \dots, M-1$.
The above equation relates the spectrum $X_N[\mu]$ of the original signal $x_N[k]$ to the spectrum $X_M[\mu]$ of the zero-padded signal $x_M[k]$. It essentially constitutes a bandlimited interpolation of the coefficients $X_N[\mu]$.
All spectral information of a signal of finite length $N$ is already contained in its spectrum derived from a DFT of length $N$. By applying zero-padding and a longer DFT, the frequency resolution is only virtually increased. The additional coefficients are related to the original ones by bandlimited interpolation. In general, zero-padding does not bring additional insights in spectral analysis. It may bring a benefit in special applications, for instance when estimating the frequency of an isolated harmonic signal from its spectrum. This is illustrated in the following example.
Zero-padding is also used to make a circular convolution equivalent to a linear convolution. However, there is a different reasoning behind this. Details are discussed in a [later section](../nonrecursive_filters/fast_convolution.ipynb#Linear-Convolution-by-Periodic-Convolution).
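The following minimal sketch (not part of the original notebook) illustrates this second use of zero-padding: if both sequences are zero-padded to length $N + M - 1$, the periodic (circular) convolution computed via the DFT coincides with the linear convolution. Only NumPy is assumed; the sequences are arbitrary illustrative values.
```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([1.0, -1.0, 2.0])
L = len(x) + len(h) - 1  # length required for the linear convolution

# periodic convolution of the zero-padded sequences, evaluated via the DFT
y_circ = np.real(np.fft.ifft(np.fft.fft(x, n=L) * np.fft.fft(h, n=L)))

# direct linear convolution for comparison
y_lin = np.convolve(x, h)

print(np.allclose(y_circ, y_lin))  # expected: True
```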
#### Example - Interpolation of the DFT
The following example shows that the coefficients $X_M[\mu]$ of the spectrum of the zero-padded signal $x_M[k]$ can be derived by interpolation from the spectrum $X_N[\mu]$.
```python
N = 16 # length of the signal
M = 32 # number of points for interpolated DFT
Om0 = 5.33*(2*np.pi/N) # frequency of exponential signal
# periodic sinc function
def psinc(x, N):
x = np.asanyarray(x)
y = np.where(x == 0, 1.0e-20, x)
return 1/N * np.sin(N/2*y)/np.sin(1/2*y)
# DFT of the exponential signal
xN = np.exp(1j*Om0*np.arange(N))
XN = np.fft.fft(xN)
# interpolation of DFT coefficients
XM = np.asarray(np.zeros(M), dtype=complex)
for mu in np.arange(M):
Omd = 2*np.pi/M*mu-2*np.pi*np.arange(N)/N
interpolator = psinc(Omd, N) * np.exp(-1j*Omd*(N-1)/2)
XM[mu] = np.sum(XN * interpolator)
# plot spectra
plt.figure(figsize=(10, 6))
plt.subplot(121)
plt.stem(np.arange(N), np.abs(XN), use_line_collection=True)
plt.title(r'$\mathrm{{DFT}}_{{{0}}}$ of $e^{{j \Omega_0 k}}$ without zero-padding'.format(N))
plt.xlabel(r'$\mu$')
plt.ylabel(r'$|X_N[\mu]|$')
plt.axis([0, N, 0, 18])
plt.grid()
plt.subplot(122)
plt.stem(np.arange(M), np.abs(XM), use_line_collection=True)
plt.title(r'Interpolated spectrum')
plt.xlabel(r'$\mu$')
plt.ylabel(r'$|X_M[\mu]|$')
plt.axis([0, M, 0, 18])
plt.grid()
```
**Exercise**
* Compare the interpolated spectrum to the spectrum with zero padding from the first example.
* Estimate the frequency $\Omega_0$ of the exponential signal from the interpolated spectrum. How could you further increase the accuracy of your estimate?
Solution: The interpolated spectrum is the same as the spectrum with zero padding from the first example. The estimated frequency from the interpolated spectrum is $\Omega_0=\frac{2\pi}{M}\mu=\frac{2\pi}{32}\cdot11$. A better estimate can be obtained by increasing the number of points for the interpolated DFT or by further zero-padding of the time domain signal.
#### Example - Estimation of Frequency and Amplitude of a Harmonic Signal
The estimation of the normalized frequency $\Omega_0$ and amplitude $A$ of a single exponential signal $x_N[k] = A \cdot e^{j \Omega_0 k}$ by the DFT of the zero-padded signal (or interpolated DFT) is illustrated in the following example. The frequency is estimated from the DFT of the zero-padded signal by finding the maximum in the magnitude spectrum
\begin{equation}
\hat{\mu}_0 = \underset{\mu}{\mathrm{argmax}} \{ |X_M[\mu]| \}
\end{equation}
The amplitude is estimated from the magnitude at the maximum as $\hat{A} = \frac{1}{N} | X_M[\hat{\mu}_0] |$, where the factor $\frac{1}{N}$ compensates for the length $N$ of the original signal.
First a function is defined which estimates the frequency and amplitude for a given number of zeros appended to the signal before calculating the DFT. Without loss of generality it is assumed that $A=1$.
```python
N = 128 # length of the signal
Om0 = 5.33*(2*np.pi/N) # frequency of exponential signal
# generate harmonic signal
k = np.arange(N)
x = np.exp(1j*Om0*np.arange(N))
def estimate_frequency_amplitude(x, P):
'''Estimate frequency and amplitude of an exponential signal.'''
# perform zero-padding and DFT
xM = np.concatenate((x, np.zeros(P)))
XM = np.fft.fft(xM)
# estimate frequency/amplitude of harmonic signal
mu_max = np.argmax(abs(XM))
amplitude = 1/N * abs(XM[mu_max])
# print results
Om = np.fft.fftfreq(N+P, 1/(2*np.pi))
print('Normalized frequency of signal: {0:1.4f} (real) / {1:1.4f} (estimated) / {2:1.4f} (absolute error)'.format(
Om0, Om[mu_max], abs(Om0 - Om[mu_max])))
print('Amplitude of signal: {0:1.4f} (real) / {1:1.4f} (estimated) / {2:2.2f} dB (magnitude error)'.format(
1, amplitude, 20*np.log10(abs(1/amplitude))))
```
First the estimation is performed without zero-padding
```python
estimate_frequency_amplitude(x, 0)
```
Normalized frequency of signal: 0.2616 (real) / 0.2454 (estimated) / 0.0162 (absolute error)
Amplitude of signal: 1.0000 (real) / 0.8303 (estimated) / 1.62 dB (magnitude error)
Then the signal is zero-padded to a total length of eight times its original length
```python
estimate_frequency_amplitude(x, 7*N)
```
Normalized frequency of signal: 0.2616 (real) / 0.2638 (estimated) / 0.0022 (absolute error)
Amplitude of signal: 1.0000 (real) / 0.9967 (estimated) / 0.03 dB (magnitude error)
**Exercise**
* What is the maximum error that can occur when estimating the frequency from the maximum of the (zero-padded) magnitude spectrum?
Solution: The maximum absolute error occurs if the maximum of the DTFT of the signal lies exactly in between two adjacent bins $\mu$ of the DFT. Since the DTFT is sampled at multiples of $\frac{2 \pi}{M}$ to derive the DFT, the maximum absolute error is given by $\frac{\pi}{M}$, where $M$ denotes the length of the zero-padded signal/DFT.
**Copyright**
This notebook is provided as [Open Educational Resource](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Sascha Spors, Digital Signal Processing - Lecture notes featuring computational examples*.
*Source: `spectral_analysis_deterministic_signals/zero_padding.ipynb`, digital-signal-processing-lecture repository (MIT license).*
## Exercise:
This example is sourced from the [Scipy website](https://scipython.com/book/chapter-8-scipy/additional-examples/the-sir-epidemic-model/) where you can find more details. For more on these models and their variations, see [Compartmental models in epidemiology, Wikipedia](https://en.wikipedia.org/wiki/Compartmental_models_in_epidemiology).
Numerically solve the SIR epidemic model:
1. In pure Python using for loops to integrate in time.
1. Using Scipy's odeint package.
The differential equations that describe the model are:
\begin{equation}
\frac{dS}{dt} = \frac{-\beta I S}{N}
\end{equation}
\begin{equation}
\frac{dI}{dt} = \frac{\beta I S}{N} - \gamma I
\end{equation}
\begin{equation}
\frac{dR}{dt} = \gamma I
\end{equation}
Here $S$ is the number of susceptible individuals in the population, $I$ is the number of infected, and $R$ is the number of recovered individuals. $\beta$ is the *effective contact rate*, that is, an infected individual comes into contact with $\beta N$ individuals per unit time. $\gamma$ is the mean recovery rate: $1/\gamma$ is the mean period of time during which an individual can pass on the infection.
A vitally important number to keep track of is the ratio: $R_0 = \beta/\gamma$; when $R_0 \gt 1$, the disease spreads through the population, when $R_0 \lt 1$, the disease quickly dies out.
```python
import numpy as np
import matplotlib.pyplot as plt
```
```python
N = 1000
Rec0 = 0
Inf0 = 1
Sus0 = N - Inf0
```
```python
beta = 0.2
gamma = 0.1
R0 = beta/gamma
```
```python
R0
```
2.0
```python
Sus = [Sus0]
Rec = [Rec0]
Inf = [Inf0]
for t in range(150):
    # explicit Euler step (dt = 1): compute all increments from the
    # current state before updating S, I and R
    delta_S = -beta * Inf[-1] * Sus[-1] / N
    delta_I = beta * Inf[-1] * Sus[-1] / N - gamma * Inf[-1]
    delta_R = gamma * Inf[-1]
    Sus.append(Sus[-1] + delta_S)
    Inf.append(Inf[-1] + delta_I)
    Rec.append(Rec[-1] + delta_R)
    #print(delta_S, delta_I, delta_R)
```
```python
Sus = np.array(Sus)
Rec = np.array(Rec)
Inf = np.array(Inf)
```
```python
plt.plot(np.arange(151), Sus, label="Susceptible")
plt.plot(np.arange(151), Rec, label="Recovered")
plt.plot(np.arange(151), Inf, label="Infected")
plt.legend()
```
### Exercise 07: SIR model in scipy
Use scipy.integrate.odeint to solve the SIR model. For more information, refer to the [documentation page](https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.odeint.html).
```python
# %load ./solutions/sol_scipy_SIR.py
```
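A minimal sketch of one possible solution using `scipy.integrate.odeint` is given below. It assumes the variables `N`, `beta`, `gamma`, `Sus0`, `Inf0` and `Rec0` defined above; the function and variable names are illustrative and not necessarily the contents of the solution file.
```python
from scipy.integrate import odeint

def sir_rhs(y, t, N, beta, gamma):
    """Right-hand side of the SIR equations."""
    S, I, R = y
    dSdt = -beta * I * S / N
    dIdt = beta * I * S / N - gamma * I
    dRdt = gamma * I
    return dSdt, dIdt, dRdt

t = np.linspace(0, 150, 151)
sol = odeint(sir_rhs, (Sus0, Inf0, Rec0), t, args=(N, beta, gamma))

plt.plot(t, sol[:, 0], label="Susceptible")
plt.plot(t, sol[:, 1], label="Infected")
plt.plot(t, sol[:, 2], label="Recovered")
plt.legend()
plt.show()
```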
*Source: `04_Numpy_SIR.ipynb`, adityarn/MAV110_PythonModule repository (MIT license).*
# COURSE: A deep understanding of deep learning
## SECTION: Math prerequisites
### LECTURE: Derivatives: intuition and polynomials
#### TEACHER: Mike X Cohen, sincxpress.com
##### COURSE URL: udemy.com/course/dudl/?couponCode=202201
```python
import numpy as np
import matplotlib.pyplot as plt
# sympy = symbolic math in Python
import sympy as sym
import sympy.plotting.plot as symplot
```
```python
# create symbolic variables in sympy
x = sym.symbols('x')
# create a function
fx = 2*x**2
# compute its derivative
df = sym.diff(fx,x)
# print them
print(fx)
print(df)
```
```python
# plot them
symplot(fx,(x,-4,4),title='The function')
plt.show()
symplot(df,(x,-4,4),title='Its derivative')
plt.show()
```
```python
# repeat with relu and sigmoid
# create symbolic functions
relu = sym.Max(0,x)
sigmoid = 1 / (1+sym.exp(-x))
# graph the functions
p = symplot(relu,(x,-4,4),label='ReLU',show=False,line_color='blue')
p.extend( symplot(sigmoid,(x,-4,4),label='Sigmoid',show=False,line_color='red') )
p.legend = True
p.title = 'The functions'
p.show()
# graph their derivatives
p = symplot(sym.diff(relu),(x,-4,4),label='df(ReLU)',show=False,line_color='blue')
p.extend( symplot(sym.diff(sigmoid),(x,-4,4),label='df(Sigmoid)',show=False,line_color='red') )
p.legend = True
p.title = 'The derivatives'
p.show()
```
*Source: `01_math/DUDL_math_derivatives1.ipynb`, amitmeel/DUDL repository (MIT license).*
# 3. 1D Burgers Equation
We consider the 1d Burgers equation
$$
\partial_t u + u \partial_x u = \nu \frac{\partial ^2u}{\partial x^2}
$$
$u_0(x) := u(x,0)$ denotes the initial condition.
We choose homogeneous Neumann boundary conditions in this example, i.e.
$$\partial_n u = 0 \quad \text{on } \partial \Omega,$$ with $\Omega = (0,1)$.
```python
# needed imports
from numpy import zeros, ones, linspace, zeros_like
from matplotlib.pyplot import plot, show
%matplotlib inline
```
```python
# Initial condition
from numpy import exp
#u0 = lambda x: exp(-(x-.3)**2/.05**2)
u0 = lambda x: exp(-(x-.5)**2/.02**2)
grid = linspace(0., 1., 401)
u = u0(grid)
```
```python
plot(grid, u) ; show()
```
## Time scheme
We shall use a $\theta$-scheme in this case and consider the following problem
$$
\frac{u^{n+1}-u^n}{\Delta t} +
\theta~ u^{n+1} \partial_x u^{n+1} + (1-\theta)~ u^n \partial_x u^n = \theta~\nu \frac{\partial ^2u^{n+1}}{\partial x^2} + (1-\theta)~\nu \frac{\partial ^2u^{n}}{\partial x^2}
$$
hence
$$
u^{n+1} + \Delta t ~ \theta~ u^{n+1} \partial_x u^{n+1} - \Delta t ~ \theta~\nu \frac{\partial ^2u^{n+1}}{\partial x^2} =
u^{n} - \Delta t ~ (1-\theta)~ u^{n} \partial_x u^{n} + \Delta t ~ (1-\theta)~\nu \frac{\partial ^2u^{n}}{\partial x^2}
$$
from now on, we shall denote by $f^n$ the right hand side of the previous equation
$$f^n := u^{n} - \Delta t ~ (1-\theta)~ u^{n} \partial_x u^{n} + \Delta t ~ (1-\theta)~\nu \frac{\partial ^2u^{n}}{\partial x^2}$$
## Weak formulation
Let $v \in \mathcal{V}$ be a test function. Integrating the highest-order term by parts, we obtain:
$$
\langle v, u^{n+1} \rangle
+ \Delta t ~ \theta~ \langle v, u^{n+1} \partial_x u^{n+1} \rangle
+ \Delta t ~ \theta~\nu \langle \frac{\partial v}{\partial x}, \frac{\partial u^{n+1}}{\partial x} \rangle
=
\langle v, f^n \rangle
$$
The previous weak formulation is still nonlinear with respect to $u^{n+1}$. We shall therefore follow the same strategy as in the previous chapter on the nonlinear Poisson problem.
The strategy is to define the left-hand side as a **LinearForm** with respect to $v$, then linearize it around $u^{n+1}$. We can then use either the Picard or the Newton method to treat the nonlinearity.
We consider the following linear form
$$
G(v;u,w) := \langle v, u \rangle
+ \Delta t ~ \theta~ \langle v, w \partial_x u \rangle
+ \Delta t ~ \theta~\nu \langle \frac{\partial v}{\partial x}, \frac{\partial u}{\partial x} \rangle
, \quad \forall u,v,w \in \mathcal{V}
$$
Our problem is then
$$
\mbox{Find } u^{n+1} \in \mathcal{V}, \mbox{such that}\\
G(v;u^{n+1},u^{n+1}) = l(v), \quad \forall v \in \mathcal{V}
$$
where
$$
l(v) := \int_{\Omega} f^n v ~d\Omega, \quad \forall v \in \mathcal{V}
$$
## Abstract Model
```python
from sympde.core import Constant
from sympde.expr import BilinearForm, LinearForm, integral
from sympde.topology import ScalarFunctionSpace, Line, element_of
from sympde.topology import dx1 as dx # TODO: this is a bug right now
from sympde.expr import find
from sympde.expr.expr import linearize
from psydac.fem.basic import FemField
from psydac.api.discretization import discretize
```
```python
domain = Line()
V = ScalarFunctionSpace('V', domain)
u = element_of(V, name='u')
v = element_of(V, name='v')
w = element_of(V, name='w')
un = element_of(V, name='un') # time iteration
uk = element_of(V, name='uk') # nonlinear solver iteration
x = domain.coordinates
nu = Constant('nu')
theta = Constant('theta')
dt = Constant('dt')
```
#### Defining the Linear form $G$
```python
# Linear form g: V --> R
expr = v * u + dt*theta*v*w*dx(u) + dt*theta*nu*dx(v)*dx(u)
g = LinearForm(v, integral(domain, expr))
```
#### Defining the Linear form $l$
```python
# Linear form l: V --> R
# (right-hand side f^n of the theta-scheme, hence the (1-theta) factors)
expr = v * un - dt*(1-theta)*v*un*dx(un) - dt*(1-theta)*nu*dx(v)*dx(un)
l = LinearForm(v, integral(domain, expr))
```
### Picard Method
$$
\mbox{Find } u_{n+1} \in \mathcal{V}_h, \mbox{such that}\\
G(v;u_{n+1},u_n) = l(v), \quad \forall v \in \mathcal{V}_h
$$
#### Picard iteration
```python
# Variational problem
picard = find(u, forall=v, lhs=g(v, u=u,w=uk), rhs=l(v))
```
### Newton Method
Let's define
$$
F(v;u) := G(v;u,u) -l(v), \quad \forall v \in \mathcal{V}
$$
Newton method writes
$$
\mbox{Find } u_{n+1} \in \mathcal{V}_h, \mbox{such that}\\
F^{\prime}(\delta u,v; u_n) = - F(v;u_n), \quad \forall v \in \mathcal{V} \\
u_{n+1} := u_{n} + \delta u, \quad \delta u \in \mathcal{V}
$$
#### Newton Iteration
```python
F = LinearForm(v, g(v,w=u)-l(v))
du = element_of(V, name='du')
Fprime = linearize(F, u, trials=du)
# Variational problem
newton = find(du, forall=v, lhs=Fprime(du, v,u=uk), rhs=-F(v,u=uk))
```
## Discrete Space
```python
# Create computational domain from topological domain
domain_h = discretize(domain, ncells=[64], comm=None)
# Discrete spaces
Vh = discretize(V, domain_h, degree=[2])
```
### Picard method
```python
# Discretize equation
picard_h = discretize(picard, domain_h, [Vh, Vh])
def picard(Un, niter=10):
Uk = FemField( Vh, Vh.vector_space.zeros() )
for i in range(niter):
Uk = picard_h.solve(uk=Uk, un=Un)
return Uk
```
### Newton method
```python
# Discretize equation
newton_h = discretize(newton, domain_h, [Vh, Vh])
def newton(Un, niter=10):
Uk = FemField( Vh, Vh.vector_space.zeros() )
for i in range(niter):
delta_x = newton_h.solve(uk=Uk, un=Un)
Uk = FemField( Vh, delta_x.coeffs + Uk.coeffs )
return Uk
```
## L2-Projection
```python
a_mass = BilinearForm((u,v), integral(domain , u*v))
from sympy import exp
#u0 = exp(-(x-.3)**2/.05**2)
u0 = exp(-(x-.5)**2/.02**2)
l_mass = LinearForm(v, integral(domain, u0*v))
# Abstract projection
projection = find(u, forall=v, lhs=a_mass(u, v), rhs=l_mass(v))
# Discrete projection
projection_h = discretize(projection, domain_h, [Vh, Vh])
```
```python
u0_h = projection_h.solve()
```
```python
from utilities.plot import plot_field_1d
plot_field_1d(Vh.knots[0], Vh.degree[0], u0_h.coeffs.toarray(), nx=401)
```
### Important note
Right now, Psydac does not provide a **gmres** solver. For this reason, we shall rewrite our nonlinear solvers using the following **gmres_driver**
```python
from scipy.sparse.linalg import gmres
def gmres_driver(equation_h, tol=1.e-8, maxiter=5000):
M = equation_h.linear_system.lhs.tosparse()
rhs = equation_h.linear_system.rhs.toarray()
x, status = gmres(M, rhs, tol=tol, maxiter=maxiter)
xh = FemField( Vh, Vh.vector_space.zeros() )
n = xh.coeffs._data.shape[0]
xh.coeffs._data[Vh.degree[0]:n-Vh.degree[0]] = x[:]
return xh
```
```python
def picard(Un, theta, nu, dt, niter=10):
Uk = FemField( Vh, Vh.vector_space.zeros() )
for i in range(niter):
picard_h.assemble(uk=Uk, un=Un, theta=theta, nu=nu, dt=dt)
Uk = gmres_driver(picard_h, tol=1.e-8, maxiter=5000)
return Uk
```
```python
def newton(Un, theta, nu, dt, niter=10):
Uk = FemField( Vh, Vh.vector_space.zeros() )
for i in range(niter):
newton_h.assemble(uk=Uk, un=Un, theta=theta, nu=nu, dt=dt)
delta_x = gmres_driver(newton_h, tol=1.e-8, maxiter=5000)
Uk = FemField( Vh, delta_x.coeffs + Uk.coeffs )
return Uk
```
```python
Uk = picard(u0_h, theta=0.5, nu=0.5, dt=0.01)
```
```python
plot_field_1d(Vh.knots[0], Vh.degree[0], Uk.coeffs.toarray(), nx=401)
```
```python
Uk = newton(u0_h, theta=0.5, nu=0.5, dt=0.01)
plot_field_1d(Vh.knots[0], Vh.degree[0], Uk.coeffs.toarray(), nx=401)
```
```python
def time_solver(theta, nu, dt, T, nonlinear_solver, U0):
n_time = int(T / dt)
Uk = U0.copy()
for i_time in range(0, n_time):
Uk = nonlinear_solver(Uk, theta=theta, nu=nu, dt=dt)
return Uk
```
```python
Uk = time_solver(theta=0.5, nu=0.02, dt=0.01, T=0.1, nonlinear_solver=newton, U0=u0_h)
```
```python
plot_field_1d(Vh.knots[0], Vh.degree[0], Uk.coeffs.toarray(), nx=401)
```
```python
```
```python
```
*Source: `lessons/Chapter3/02_burgers_1d.ipynb`, pyccel/IGA-Python repository (MIT license).*
# EECS 498-007/598-005 Assignment 3-2: Convolutional Neural Networks and Batch Normalization
Before we start, please put your name and UMID in following format
: Firstname LASTNAME, #00000000 // e.g.) Justin JOHNSON, #12345678
**Your Answer:**
Your NAME, #XXXXXXXX
## Setup Code
Before getting started, we need to run some boilerplate code to set up our environment, same as Assignment 1. You'll need to rerun this setup code each time you start the notebook.
First, run this cell load the autoreload extension. This allows us to edit .py source files, and re-import them into the notebook for a seamless editing and debugging experience.
```python
%load_ext autoreload
%autoreload 2
```
### Google Colab Setup
Next we need to run a few commands to set up our environment on Google Colab. If you are running this notebook on a local machine you can skip this section.
Run the following cell to mount your Google Drive. Follow the link, sign in to your Google account (the same account you used to store this notebook!) and copy the authorization code into the text box that appears below.
```python
from google.colab import drive
drive.mount('/content/drive')
```
Now recall the path in your Google Drive where you uploaded this notebook, and fill it in below. If everything is working correctly then running the following cell should print the filenames from the assignment:
```
['convolutional_networks.ipynb', 'fully_connected_networks.ipynb', 'eecs598', 'convolutional_networks.py', 'fully_connected_networks.py', 'a3_helper.py']
```
```python
import os
# TODO: Fill in the Google Drive path where you uploaded the assignment
# Example: If you create a 2020FA folder and put all the files under A3 folder, then '2020FA/A3'
GOOGLE_DRIVE_PATH_AFTER_MYDRIVE = None
GOOGLE_DRIVE_PATH = os.path.join('drive', 'My Drive', GOOGLE_DRIVE_PATH_AFTER_MYDRIVE)
print(os.listdir(GOOGLE_DRIVE_PATH))
```
Once you have successfully mounted your Google Drive and located the path to this assignment, run the following cell to allow us to import from the `.py` files of this assignment. If it works correctly, it should print the message:
```
Hello from convolutional_networks.py!
Hello from a3_helper.py!
```
as well as the last edit time for the file `convolutional_networks.py`.
```python
import sys
sys.path.append(GOOGLE_DRIVE_PATH)
import time, os
os.environ["TZ"] = "US/Eastern"
time.tzset()
from convolutional_networks import hello_convolutional_networks
hello_convolutional_networks()
from a3_helper import hello_helper
hello_helper()
convolutional_networks_path = os.path.join(GOOGLE_DRIVE_PATH, 'convolutional_networks.py')
convolutional_networks_edit_time = time.ctime(os.path.getmtime(convolutional_networks_path))
print('convolutional_networks.py last edited on %s' % convolutional_networks_edit_time)
```
# Data preprocessing
## Setup code
Run some setup code for this notebook: Import some useful packages and increase the default figure size.
```python
import eecs598
import torch
import torchvision
import matplotlib.pyplot as plt
import statistics
import random
import time
import math
%matplotlib inline
from eecs598 import reset_seed, Solver
plt.rcParams['figure.figsize'] = (10.0, 8.0)
plt.rcParams['font.size'] = 16
```
Starting in this assignment, we will use the GPU to accelerate our computation. Run this cell to make sure you are using a GPU.
```python
if torch.cuda.is_available():
print('Good to go!')
else:
print('Please set GPU via Edit -> Notebook Settings.')
```
## Load the CIFAR-10 dataset
Then, we will first load the CIFAR-10 dataset, same as knn. The utility function `eecs598.data.preprocess_cifar10()` (invoked below) returns the entire CIFAR-10 dataset as a set of six **Torch tensors** while also preprocessing the RGB images:
- `X_train` contains all training images (real numbers in the range $[0, 1]$)
- `y_train` contains all training labels (integers in the range $[0, 9]$)
- `X_val` contains all validation images
- `y_val` contains all validation labels
- `X_test` contains all test images
- `y_test` contains all test labels
```python
# Invoke the above function to get our data.
import eecs598
eecs598.reset_seed(0)
data_dict = eecs598.data.preprocess_cifar10(cuda=True, dtype=torch.float64, flatten=False)
print('Train data shape: ', data_dict['X_train'].shape)
print('Train labels shape: ', data_dict['y_train'].shape)
print('Validation data shape: ', data_dict['X_val'].shape)
print('Validation labels shape: ', data_dict['y_val'].shape)
print('Test data shape: ', data_dict['X_test'].shape)
print('Test labels shape: ', data_dict['y_test'].shape)
```
# Convolutional networks
So far we have worked with deep fully-connected networks, using them to explore different optimization strategies and network architectures. Fully-connected networks are a good testbed for experimentation because they are very computationally efficient, but in practice all state-of-the-art results use convolutional networks instead.
First you will implement several layer types that are used in convolutional networks. You will then use these layers to train a convolutional network on the CIFAR-10 dataset.
# Convolutional layer
As in the previous notebook, we will package each new neural network operator in a class that defines a `forward` and `backward` function.
## Convolutional layer: forward
The core of a convolutional network is the convolution operation. Implement the forward pass for the convolution layer in the function `Conv.forward`.
You don't have to worry too much about efficiency at this point; just write the code in whatever way you find most clear.
After implementing the forward pass of the convolution operation, run the following to check your implementation. You should get a relative error less than `1e-7`.
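For reference, a naive loop-based sketch of such a forward pass is shown below. It is not the official solution; the function name is illustrative and the signature is only inferred from the tests that follow.
```python
import torch
import torch.nn.functional as TF

def naive_conv_forward(x, w, b, conv_param):
    """Naive convolution forward pass: x (N,C,H,W), w (F,C,HH,WW), b (F,)."""
    pad, stride = conv_param['pad'], conv_param['stride']
    N, C, H, W = x.shape
    num_filters, _, HH, WW = w.shape
    H_out = 1 + (H + 2 * pad - HH) // stride
    W_out = 1 + (W + 2 * pad - WW) // stride
    xp = TF.pad(x, (pad, pad, pad, pad))          # zero-pad height and width
    out = x.new_zeros(N, num_filters, H_out, W_out)
    for n in range(N):
        for f in range(num_filters):
            for i in range(H_out):
                for j in range(W_out):
                    h0, w0 = i * stride, j * stride
                    window = xp[n, :, h0:h0 + HH, w0:w0 + WW]
                    out[n, f, i, j] = (window * w[f]).sum() + b[f]
    cache = (x, w, b, conv_param)
    return out, cache
```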
```python
from convolutional_networks import Conv
x_shape = torch.tensor((2, 3, 4, 4))
w_shape = torch.tensor((3, 3, 4, 4))
x = torch.linspace(-0.1, 0.5, steps=torch.prod(x_shape), dtype=torch.float64, device='cuda').reshape(*x_shape)
w = torch.linspace(-0.2, 0.3, steps=torch.prod(w_shape), dtype=torch.float64, device='cuda').reshape(*w_shape)
b = torch.linspace(-0.1, 0.2, steps=3, dtype=torch.float64, device='cuda')
conv_param = {'stride': 2, 'pad': 1}
out, _ = Conv.forward(x, w, b, conv_param)
correct_out = torch.tensor([[[[-0.08759809, -0.10987781],
[-0.18387192, -0.2109216 ]],
[[ 0.21027089, 0.21661097],
[ 0.22847626, 0.23004637]],
[[ 0.50813986, 0.54309974],
[ 0.64082444, 0.67101435]]],
[[[-0.98053589, -1.03143541],
[-1.19128892, -1.24695841]],
[[ 0.69108355, 0.66880383],
[ 0.59480972, 0.56776003]],
[[ 2.36270298, 2.36904306],
[ 2.38090835, 2.38247847]]]],
dtype=torch.float64, device='cuda',
)
# Compare your output to ours; difference should be around e-8
print('Testing Conv.forward')
print('difference: ', eecs598.grad.rel_error(out, correct_out))
```
## Aside: Image processing via convolutions
As a fun way to both check your implementation and gain a better understanding of the type of operation that convolutional layers can perform, we will set up an input containing two images and manually set up filters that perform common image processing operations (grayscale conversion and edge detection). The convolution forward pass will apply these operations to each of the input images. We can then visualize the results as a sanity check.
```python
from imageio import imread
from PIL import Image
from torchvision.transforms import ToTensor
kitten_url = 'https://web.eecs.umich.edu/~justincj/teaching/eecs498/assets/a3/kitten.jpg'
puppy_url = 'https://web.eecs.umich.edu/~justincj/teaching/eecs498/assets/a3/puppy.jpg'
kitten = imread(kitten_url)
puppy = imread(puppy_url)
# kitten is wide, and puppy is already square
d = kitten.shape[1] - kitten.shape[0]
kitten_cropped = kitten[:, d//2:-d//2, :]
img_size = 200 # Make this smaller if it runs too slow
resized_puppy = ToTensor()(Image.fromarray(puppy).resize((img_size, img_size)))
resized_kitten = ToTensor()(Image.fromarray(kitten_cropped).resize((img_size, img_size)))
x = torch.stack([resized_puppy, resized_kitten])
# Set up a convolutional weights holding 2 filters, each 3x3
w = torch.zeros(2, 3, 3, 3, dtype=x.dtype)
# The first filter converts the image to grayscale.
# Set up the red, green, and blue channels of the filter.
w[0, 0, :, :] = torch.tensor([[0, 0, 0], [0, 0.3, 0], [0, 0, 0]])
w[0, 1, :, :] = torch.tensor([[0, 0, 0], [0, 0.6, 0], [0, 0, 0]])
w[0, 2, :, :] = torch.tensor([[0, 0, 0], [0, 0.1, 0], [0, 0, 0]])
# Second filter detects horizontal edges in the blue channel.
w[1, 2, :, :] = torch.tensor([[1, 2, 1], [0, 0, 0], [-1, -2, -1]])
# Vector of biases. We don't need any bias for the grayscale
# filter, but for the edge detection filter we want to add 128
# to each output so that nothing is negative.
b = torch.tensor([0, 128], dtype=x.dtype)
# Compute the result of convolving each input in x with each filter in w,
# offsetting by b, and storing the results in out.
out, _ = Conv.forward(x, w, b, {'stride': 1, 'pad': 1})
def imshow_no_ax(img, normalize=True):
""" Tiny helper to show images as uint8 and remove axis labels """
if normalize:
img_max, img_min = img.max(), img.min()
img = 255.0 * (img - img_min) / (img_max - img_min)
plt.imshow(img)
plt.gca().axis('off')
# Show the original images and the results of the conv operation
plt.subplot(2, 3, 1)
imshow_no_ax(puppy, normalize=False)
plt.title('Original image')
plt.subplot(2, 3, 2)
imshow_no_ax(out[0, 0])
plt.title('Grayscale')
plt.subplot(2, 3, 3)
imshow_no_ax(out[0, 1])
plt.title('Edges')
plt.subplot(2, 3, 4)
imshow_no_ax(kitten_cropped, normalize=False)
plt.subplot(2, 3, 5)
imshow_no_ax(out[1, 0])
plt.subplot(2, 3, 6)
imshow_no_ax(out[1, 1])
plt.show()
```
## Convolutional layer: backward
Implement the backward pass for the convolution operation in the function `Conv.backward`. Again, you don't need to worry too much about computational efficiency.
After implementing the convolution backward pass, run the following to test your implementation. You should get errors less than `1e-8`.
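As with the forward pass, a naive backward-pass sketch is shown for reference. It is not the official solution and assumes the `(x, w, b, conv_param)` cache layout used in the forward sketch above.
```python
def naive_conv_backward(dout, cache):
    """Naive convolution backward pass; dout has shape (N, F, H_out, W_out)."""
    x, w, b, conv_param = cache
    pad, stride = conv_param['pad'], conv_param['stride']
    N, C, H, W = x.shape
    num_filters, _, HH, WW = w.shape
    _, _, H_out, W_out = dout.shape
    xp = torch.nn.functional.pad(x, (pad, pad, pad, pad))
    dxp = torch.zeros_like(xp)
    dw = torch.zeros_like(w)
    db = dout.sum(dim=(0, 2, 3))                  # gradient of the bias
    for n in range(N):
        for f in range(num_filters):
            for i in range(H_out):
                for j in range(W_out):
                    h0, w0 = i * stride, j * stride
                    window = xp[n, :, h0:h0 + HH, w0:w0 + WW]
                    dw[f] += dout[n, f, i, j] * window
                    dxp[n, :, h0:h0 + HH, w0:w0 + WW] += dout[n, f, i, j] * w[f]
    dx = dxp[:, :, pad:pad + H, pad:pad + W]      # strip the padding again
    return dx, dw, db
```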
```python
from convolutional_networks import Conv
reset_seed(0)
x = torch.randn(4, 3, 5, 5, dtype=torch.float64, device='cuda')
w = torch.randn(2, 3, 3, 3, dtype=torch.float64, device='cuda')
b = torch.randn(2, dtype=torch.float64, device='cuda')
dout = torch.randn(4, 2, 5, 5, dtype=torch.float64, device='cuda')
conv_param = {'stride': 1, 'pad': 1}
dx_num = eecs598.grad.compute_numeric_gradient(lambda x: Conv.forward(x, w, b, conv_param)[0], x, dout)
dw_num = eecs598.grad.compute_numeric_gradient(lambda w: Conv.forward(x, w, b, conv_param)[0], w, dout)
db_num = eecs598.grad.compute_numeric_gradient(lambda b: Conv.forward(x, w, b, conv_param)[0], b, dout)
out, cache = Conv.forward(x, w, b, conv_param)
dx, dw, db = Conv.backward(dout, cache)
print('Testing Conv.backward function')
print('dx error: ', eecs598.grad.rel_error(dx, dx_num))
print('dw error: ', eecs598.grad.rel_error(dw, dw_num))
print('db error: ', eecs598.grad.rel_error(db, db_num))
```
# Max-pooling
## Max-pooling: forward
Implement the forward pass for the max-pooling operation. Again, don't worry too much about computational efficiency.
After implementing the forward pass for max-pooling, run the following to check your implementation. You should get errors less than `1e-7`.
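A naive loop-based sketch is shown for reference (not the official solution; names are illustrative and the signature is inferred from the tests below):
```python
def naive_maxpool_forward(x, pool_param):
    """Naive max-pooling forward pass for x of shape (N, C, H, W)."""
    ph, pw = pool_param['pool_height'], pool_param['pool_width']
    stride = pool_param['stride']
    N, C, H, W = x.shape
    H_out = 1 + (H - ph) // stride
    W_out = 1 + (W - pw) // stride
    out = x.new_zeros(N, C, H_out, W_out)
    for i in range(H_out):
        for j in range(W_out):
            h0, w0 = i * stride, j * stride
            out[:, :, i, j] = x[:, :, h0:h0 + ph, w0:w0 + pw].amax(dim=(2, 3))
    cache = (x, pool_param)
    return out, cache
```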
```python
from convolutional_networks import MaxPool
reset_seed(0)
x_shape = torch.tensor((2, 3, 4, 4))
x = torch.linspace(-0.3, 0.4, steps=torch.prod(x_shape), dtype=torch.float64, device='cuda').reshape(*x_shape)
pool_param = {'pool_width': 2, 'pool_height': 2, 'stride': 2}
out, _ = MaxPool.forward(x, pool_param)
correct_out = torch.tensor([[[[-0.26315789, -0.24842105],
[-0.20421053, -0.18947368]],
[[-0.14526316, -0.13052632],
[-0.08631579, -0.07157895]],
[[-0.02736842, -0.01263158],
[ 0.03157895, 0.04631579]]],
[[[ 0.09052632, 0.10526316],
[ 0.14947368, 0.16421053]],
[[ 0.20842105, 0.22315789],
[ 0.26736842, 0.28210526]],
[[ 0.32631579, 0.34105263],
[ 0.38526316, 0.4 ]]]],
dtype=torch.float64, device='cuda')
# Compare your output with ours. Difference should be on the order of e-8.
print('Testing MaxPool.forward function:')
print('difference: ', eecs598.grad.rel_error(out, correct_out))
```
## Max-pooling: backward
Implement the backward pass for the max-pooling operation. You don't need to worry about computational efficiency.
Check your implementation of the max pooling backward pass with numeric gradient checking by running the following. You should get errors less than `1e-10`.
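Again, a naive sketch is included for reference (not the official solution; it assumes the `(x, pool_param)` cache from the sketch above and, on ties, routes the gradient to all maximal positions of a window):
```python
def naive_maxpool_backward(dout, cache):
    """Route each upstream gradient to the argmax position of its pooling window."""
    x, pool_param = cache
    ph, pw = pool_param['pool_height'], pool_param['pool_width']
    stride = pool_param['stride']
    N, C, H_out, W_out = dout.shape
    dx = torch.zeros_like(x)
    for n in range(N):
        for c in range(C):
            for i in range(H_out):
                for j in range(W_out):
                    h0, w0 = i * stride, j * stride
                    window = x[n, c, h0:h0 + ph, w0:w0 + pw]
                    mask = (window == window.max()).to(x.dtype)
                    dx[n, c, h0:h0 + ph, w0:w0 + pw] += mask * dout[n, c, i, j]
    return dx
```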
```python
from convolutional_networks import MaxPool
reset_seed(0)
x = torch.randn(3, 2, 8, 8, dtype=torch.float64, device='cuda')
dout = torch.randn(3, 2, 4, 4, dtype=torch.float64, device='cuda')
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
dx_num = eecs598.grad.compute_numeric_gradient(lambda x: MaxPool.forward(x, pool_param)[0], x, dout)
out, cache = MaxPool.forward(x, pool_param)
dx = MaxPool.backward(dout, cache)
print('Testing MaxPool.backward function:')
print('dx error: ', eecs598.grad.rel_error(dx, dx_num))
```
# Fast layers
Making convolution and pooling layers fast can be challenging. To spare you the pain, we've provided fast implementations of the forward and backward passes for convolution and pooling layers. Those can be found at the bottom of `convolutional_networks.py`
The fast convolution implementation depends on `torch.nn`
The API for the fast versions of the convolution and pooling layers is exactly the same as the naive versions that you implemented above: the forward pass receives data, weights, and parameters and produces outputs and a cache object; the backward pass recieves upstream derivatives and the cache object and produces gradients with respect to the data and weights.
```python
class FastConv(object):
@staticmethod
def forward(x, w, b, conv_param):
N, C, H, W = x.shape
F, _, HH, WW = w.shape
stride, pad = conv_param['stride'], conv_param['pad']
layer = torch.nn.Conv2d(C, F, (HH, WW), stride=stride, padding=pad)
layer.weight = torch.nn.Parameter(w)
layer.bias = torch.nn.Parameter(b)
tx = x.detach()
tx.requires_grad = True
out = layer(tx)
cache = (x, w, b, conv_param, tx, out, layer)
return out, cache
@staticmethod
def backward(dout, cache):
try:
x, _, _, _, tx, out, layer = cache
out.backward(dout)
dx = tx.grad.detach()
dw = layer.weight.grad.detach()
db = layer.bias.grad.detach()
layer.weight.grad = layer.bias.grad = None
except RuntimeError:
dx, dw, db = torch.zeros_like(tx), torch.zeros_like(layer.weight), torch.zeros_like(layer.bias)
return dx, dw, db
class FastMaxPool(object):
@staticmethod
def forward(x, pool_param):
N, C, H, W = x.shape
pool_height, pool_width = pool_param['pool_height'], pool_param['pool_width']
stride = pool_param['stride']
layer = torch.nn.MaxPool2d(kernel_size=(pool_height, pool_width), stride=stride)
tx = x.detach()
tx.requires_grad = True
out = layer(tx)
cache = (x, pool_param, tx, out, layer)
return out, cache
@staticmethod
def backward(dout, cache):
try:
x, _, tx, out, layer = cache
out.backward(dout)
dx = tx.grad.detach()
except RuntimeError:
dx = torch.zeros_like(tx)
return dx
```
We will now compare three different implementations of convolution (both forward and backward):
1. Your naive, non-vectorized implementation on CPU
2. The fast, vectorized implementation on CPU
3. The fast, vectorized implementation on GPU
The differences between your implementation and FastConv should be less than `1e-10`. When moving from your implementation to FastConv CPU, you will likely see speedups of at least 100x. When comparing your implementation to FastConv CUDA, you will likely see speedups of more than 500x. (These speedups are not hard requirements for this assignment since we are not asking you to write any vectorized implementations)
```python
# Rel errors should be around e-11 or less
from convolutional_networks import Conv, FastConv
reset_seed(0)
x = torch.randn(10, 3, 31, 31, dtype=torch.float64, device='cuda')
w = torch.randn(25, 3, 3, 3, dtype=torch.float64, device='cuda')
b = torch.randn(25, dtype=torch.float64, device='cuda')
dout = torch.randn(10, 25, 16, 16, dtype=torch.float64, device='cuda')
x_cuda, w_cuda, b_cuda, dout_cuda = x.to('cuda'), w.to('cuda'), b.to('cuda'), dout.to('cuda')
conv_param = {'stride': 2, 'pad': 1}
t0 = time.time()
out_naive, cache_naive = Conv.forward(x, w, b, conv_param)
t1 = time.time()
out_fast, cache_fast = FastConv.forward(x, w, b, conv_param)
t2 = time.time()
out_fast_cuda, cache_fast_cuda = FastConv.forward(x_cuda, w_cuda, b_cuda, conv_param)
t3 = time.time()
print('Testing FastConv.forward:')
print('Naive: %fs' % (t1 - t0))
print('Fast: %fs' % (t2 - t1))
print('Fast CUDA: %fs' % (t3 - t2))
print('Speedup: %fx' % ((t1 - t0) / (t2 - t1)))
print('Speedup CUDA: %fx' % ((t1 - t0) / (t3 - t2)))
print('Difference: ', eecs598.grad.rel_error(out_naive, out_fast))
print('Difference CUDA: ', eecs598.grad.rel_error(out_naive, out_fast_cuda.to(out_naive.device)))
t0 = time.time()
dx_naive, dw_naive, db_naive = Conv.backward(dout, cache_naive)
t1 = time.time()
dx_fast, dw_fast, db_fast = FastConv.backward(dout, cache_fast)
t2 = time.time()
dx_fast_cuda, dw_fast_cuda, db_fast_cuda = FastConv.backward(dout_cuda, cache_fast_cuda)
t3 = time.time()
print('\nTesting FastConv.backward:')
print('Naive: %fs' % (t1 - t0))
print('Fast: %fs' % (t2 - t1))
print('Fast CUDA: %fs' % (t3 - t2))
print('Speedup: %fx' % ((t1 - t0) / (t2 - t1)))
print('Speedup CUDA: %fx' % ((t1 - t0) / (t3 - t2)))
print('dx difference: ', eecs598.grad.rel_error(dx_naive, dx_fast))
print('dw difference: ', eecs598.grad.rel_error(dw_naive, dw_fast))
print('db difference: ', eecs598.grad.rel_error(db_naive, db_fast))
print('dx difference CUDA: ', eecs598.grad.rel_error(dx_naive, dx_fast_cuda.to(dx_naive.device)))
print('dw difference CUDA: ', eecs598.grad.rel_error(dw_naive, dw_fast_cuda.to(dw_naive.device)))
print('db difference CUDA: ', eecs598.grad.rel_error(db_naive, db_fast_cuda.to(db_naive.device)))
```
We will now similarly compare your naive implementation of max pooling against the fast implementation. You should see differences of 0 between your implementation and the fast implementation.
When comparing your implementation against FastMaxPool on CPU, you will likely see speedups of more than 100x. When comparing your implementation against FastMaxPool on GPU, you will likely see speedups of more than 500x.
```python
# Relative errors should be close to 0.0
from convolutional_networks import Conv, MaxPool, FastConv, FastMaxPool
reset_seed(0)
x = torch.randn(40, 3, 32, 32, dtype=torch.float64, device='cuda')
dout = torch.randn(40, 3, 16, 16, dtype=torch.float64, device='cuda')
x_cuda, dout_cuda = x.to('cuda'), dout.to('cuda')
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
t0 = time.time()
out_naive, cache_naive = MaxPool.forward(x, pool_param)
t1 = time.time()
out_fast, cache_fast = FastMaxPool.forward(x, pool_param)
t2 = time.time()
out_fast_cuda, cache_fast_cuda = FastMaxPool.forward(x_cuda, pool_param)
t3 = time.time()
print('Testing FastMaxPool.forward:')
print('Naive: %fs' % (t1 - t0))
print('Fast: %fs' % (t2 - t1))
print('Fast CUDA: %fs' % (t3 - t2))
print('Speedup: %fx' % ((t1 - t0) / (t2 - t1)))
print('Speedup CUDA: %fx' % ((t1 - t0) / (t3 - t2)))
print('Difference: ', eecs598.grad.rel_error(out_naive, out_fast))
print('Difference CUDA: ', eecs598.grad.rel_error(out_naive, out_fast_cuda.to(out_naive.device)))
t0 = time.time()
dx_naive = MaxPool.backward(dout, cache_naive)
t1 = time.time()
dx_fast = FastMaxPool.backward(dout, cache_fast)
t2 = time.time()
dx_fast_cuda = FastMaxPool.backward(dout_cuda, cache_fast_cuda)
t3 = time.time()
print('\nTesting FastMaxPool.backward:')
print('Naive: %fs' % (t1 - t0))
print('Fast: %fs' % (t2 - t1))
print('Fast CUDA: %fs' % (t3 - t2))
print('Speedup: %fx' % ((t1 - t0) / (t2 - t1)))
print('Speedup CUDA: %fx' % ((t1 - t0) / (t3 - t2)))
print('dx difference: ', eecs598.grad.rel_error(dx_naive, dx_fast))
print('dx difference CUDA: ', eecs598.grad.rel_error(dx_naive, dx_fast_cuda.to(dx_naive.device)))
```
# Convolutional "sandwich" layers
Previously we introduced the concept of "sandwich" layers that combine multiple operations into commonly used patterns. Below you will find sandwich layers that implement a few commonly used patterns for convolutional networks. We've included them at the bottom of `convolutional_networks.py`. Run the cells below to sanity check that they're working.
**Note:** This will be using the ReLU function you implemented in the previous notebook. Make sure to implement it first.
```python
class Conv_ReLU(object):
@staticmethod
def forward(x, w, b, conv_param):
"""
A convenience layer that performs a convolution followed by a ReLU.
Inputs:
- x: Input to the convolutional layer
- w, b, conv_param: Weights and parameters for the convolutional layer
Returns a tuple of:
- out: Output from the ReLU
- cache: Object to give to the backward pass
"""
a, conv_cache = FastConv.forward(x, w, b, conv_param)
out, relu_cache = ReLU.forward(a)
cache = (conv_cache, relu_cache)
return out, cache
@staticmethod
def backward(dout, cache):
"""
Backward pass for the conv-relu convenience layer.
"""
conv_cache, relu_cache = cache
da = ReLU.backward(dout, relu_cache)
dx, dw, db = FastConv.backward(da, conv_cache)
return dx, dw, db
class Conv_ReLU_Pool(object):
@staticmethod
def forward(x, w, b, conv_param, pool_param):
"""
A convenience layer that performs a convolution, a ReLU, and a pool.
Inputs:
- x: Input to the convolutional layer
- w, b, conv_param: Weights and parameters for the convolutional layer
- pool_param: Parameters for the pooling layer
Returns a tuple of:
- out: Output from the pooling layer
- cache: Object to give to the backward pass
"""
a, conv_cache = FastConv.forward(x, w, b, conv_param)
s, relu_cache = ReLU.forward(a)
out, pool_cache = FastMaxPool.forward(s, pool_param)
cache = (conv_cache, relu_cache, pool_cache)
return out, cache
@staticmethod
def backward(dout, cache):
"""
Backward pass for the conv-relu-pool convenience layer
"""
conv_cache, relu_cache, pool_cache = cache
ds = FastMaxPool.backward(dout, pool_cache)
da = ReLU.backward(ds, relu_cache)
dx, dw, db = FastConv.backward(da, conv_cache)
return dx, dw, db
```
Test the implementations of the sandwich layers by running the following. You should see errors less than `1e-7`.
```python
from convolutional_networks import Conv_ReLU, Conv_ReLU_Pool
reset_seed(0)
# Test Conv ReLU
x = torch.randn(2, 3, 8, 8, dtype=torch.float64, device='cuda')
w = torch.randn(3, 3, 3, 3, dtype=torch.float64, device='cuda')
b = torch.randn(3, dtype=torch.float64, device='cuda')
dout = torch.randn(2, 3, 8, 8, dtype=torch.float64, device='cuda')
conv_param = {'stride': 1, 'pad': 1}
out, cache = Conv_ReLU.forward(x, w, b, conv_param)
dx, dw, db = Conv_ReLU.backward(dout, cache)
dx_num = eecs598.grad.compute_numeric_gradient(lambda x: Conv_ReLU.forward(x, w, b, conv_param)[0], x, dout)
dw_num = eecs598.grad.compute_numeric_gradient(lambda w: Conv_ReLU.forward(x, w, b, conv_param)[0], w, dout)
db_num = eecs598.grad.compute_numeric_gradient(lambda b: Conv_ReLU.forward(x, w, b, conv_param)[0], b, dout)
# Relative errors should be around e-8 or less
print('Testing Conv_ReLU:')
print('dx error: ', eecs598.grad.rel_error(dx_num, dx))
print('dw error: ', eecs598.grad.rel_error(dw_num, dw))
print('db error: ', eecs598.grad.rel_error(db_num, db))
# Test Conv ReLU Pool
x = torch.randn(2, 3, 16, 16, dtype=torch.float64, device='cuda')
w = torch.randn(3, 3, 3, 3, dtype=torch.float64, device='cuda')
b = torch.randn(3, dtype=torch.float64, device='cuda')
dout = torch.randn(2, 3, 8, 8, dtype=torch.float64, device='cuda')
conv_param = {'stride': 1, 'pad': 1}
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
out, cache = Conv_ReLU_Pool.forward(x, w, b, conv_param, pool_param)
dx, dw, db = Conv_ReLU_Pool.backward(dout, cache)
dx_num = eecs598.grad.compute_numeric_gradient(lambda x: Conv_ReLU_Pool.forward(x, w, b, conv_param, pool_param)[0], x, dout)
dw_num = eecs598.grad.compute_numeric_gradient(lambda w: Conv_ReLU_Pool.forward(x, w, b, conv_param, pool_param)[0], w, dout)
db_num = eecs598.grad.compute_numeric_gradient(lambda b: Conv_ReLU_Pool.forward(x, w, b, conv_param, pool_param)[0], b, dout)
# Relative errors should be around e-8 or less
print()
print('Testing Conv_ReLU_Pool')
print('dx error: ', eecs598.grad.rel_error(dx_num, dx))
print('dw error: ', eecs598.grad.rel_error(dw_num, dw))
print('db error: ', eecs598.grad.rel_error(db_num, db))
```
# Three-layer convolutional network
Now that you have implemented all the necessary layers, we can put them together into a simple convolutional network.
Complete the implementation of the `ThreeLayerConvNet` class. We STRONGLY recommend you to use the fast/sandwich layers (already imported for you) in your implementation. Run the following cells to help you debug:
## Sanity check loss
After you build a new network, one of the first things you should do is sanity check the loss. When we use the softmax loss, we expect the loss for random weights (and no regularization) to be about `log(C)` for `C` classes. When we add regularization the loss should go up slightly.
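For CIFAR-10 this reference value is easy to compute (a quick check, not part of the assignment code):
```python
import math
print('Expected initial loss for C=10 classes: %.4f' % math.log(10))  # ~2.3026
```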
```python
from convolutional_networks import ThreeLayerConvNet
reset_seed(0)
model = ThreeLayerConvNet(dtype=torch.float64, device='cuda')
N = 50
X = torch.randn(N, 3, 32, 32, dtype=torch.float64, device='cuda')
y = torch.randint(10, size=(N,), dtype=torch.int64, device='cuda')
loss, grads = model.loss(X, y)
print('Initial loss (no regularization): ', loss.item())
model.reg = 0.5
loss, grads = model.loss(X, y)
print('Initial loss (with regularization): ', loss.item())
```
## Gradient check
After the loss looks reasonable, use numeric gradient checking to make sure that your backward pass is correct. When you use numeric gradient checking you should use a small amount of artificial data and a small number of neurons at each layer.
You should see errors less than `1e-5`.
```python
from convolutional_networks import ThreeLayerConvNet
num_inputs = 2
input_dims = (3, 16, 16)
reg = 0.0
num_classes = 10
reset_seed(0)
X = torch.randn(num_inputs, *input_dims, dtype=torch.float64, device='cuda')
y = torch.randint(num_classes, size=(num_inputs,), dtype=torch.int64, device='cuda')
model = ThreeLayerConvNet(num_filters=3, filter_size=3,
input_dims=input_dims, hidden_dim=7,
weight_scale=5e-2, dtype=torch.float64, device='cuda')
loss, grads = model.loss(X, y)
for param_name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
param_grad_num = eecs598.grad.compute_numeric_gradient(f, model.params[param_name])
print('%s max relative error: %e' % (param_name, eecs598.grad.rel_error(param_grad_num, grads[param_name])))
```
## Overfit small data
A nice trick is to train your model with just a few training samples. You should be able to overfit small datasets, which will result in very high training accuracy and comparatively low validation accuracy.
```python
from convolutional_networks import ThreeLayerConvNet
from fully_connected_networks import adam
reset_seed(0)
num_train = 100
small_data = {
'X_train': data_dict['X_train'][:num_train],
'y_train': data_dict['y_train'][:num_train],
'X_val': data_dict['X_val'],
'y_val': data_dict['y_val'],
}
model = ThreeLayerConvNet(weight_scale=1e-3, dtype=torch.float32, device='cuda')
solver = Solver(model, small_data,
num_epochs=30, batch_size=50,
update_rule=adam,
optim_config={
'learning_rate': 2e-3,
},
verbose=True, print_every=1,
device='cuda')
solver.train()
```
Plotting the loss, training accuracy, and validation accuracy should show clear overfitting:
```python
plt.title('Training losses')
plt.plot(solver.loss_history, 'o')
plt.xlabel('iteration')
plt.ylabel('loss')
plt.gcf().set_size_inches(9, 4)
plt.show()
plt.title('Train and Val accuracies')
plt.plot(solver.train_acc_history, '-o')
plt.plot(solver.val_acc_history, '-o')
plt.legend(['train', 'val'], loc='upper left')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.gcf().set_size_inches(9, 4)
plt.show()
```
## Train the net
By training the three-layer convolutional network for one epoch, you should achieve greater than 50% accuracy on the training set:
```python
from convolutional_networks import ThreeLayerConvNet
from fully_connected_networks import adam
reset_seed(0)
model = ThreeLayerConvNet(weight_scale=0.001, hidden_dim=500, reg=0.001, dtype=torch.float, device='cuda')
solver = Solver(model, data_dict,
num_epochs=1, batch_size=64,
update_rule=adam,
optim_config={
'learning_rate': 2e-3,
},
verbose=True, print_every=50, device='cuda')
solver.train()
```
## Visualize Filters
You can visualize the first-layer convolutional filters from the trained network by running the following:
```python
from torchvision.utils import make_grid
nrow = math.ceil(math.sqrt(model.params['W1'].shape[0]))
grid = make_grid(model.params['W1'], nrow=nrow, padding=1, normalize=True, scale_each=True)
plt.imshow(grid.to(device='cpu').permute(1, 2, 0))
plt.axis('off')
plt.gcf().set_size_inches(5, 5)
plt.show()
```
# Deep convolutional network
Next you will implement a deep convolutional network with an arbitrary number of conv layers in VGGNet style.
Read through the `DeepConvNet` class.
Implement the initialization, the forward pass, and the backward pass. For the moment don't worry about implementing batch normalization; we will add those features soon. Again, we STRONGLY recommend you to use the fast/sandwich layers (already imported for you) in your implementation.
## Sanity check loss
After you build a new network, one of the first things you should do is sanity check the loss. When we use the softmax loss, we expect the loss for random weights (and no regularization) to be about `log(C)` for `C` classes. When we add regularization the loss should go up slightly.
```python
from convolutional_networks import DeepConvNet
from fully_connected_networks import adam
reset_seed(0)
input_dims = (3, 32, 32)
model = DeepConvNet(num_filters=[8, 64], max_pools=[0, 1], dtype=torch.float64, device='cuda')
N = 50
X = torch.randn(N, *input_dims, dtype=torch.float64, device='cuda')
y = torch.randint(10, size=(N,), dtype=torch.int64, device='cuda')
loss, grads = model.loss(X, y)
print('Initial loss (no regularization): ', loss.item())
model.reg = 1.
loss, grads = model.loss(X, y)
print('Initial loss (with regularization): ', loss.item())
```
## Gradient check
After the loss looks reasonable, use numeric gradient checking to make sure that your backward pass is correct. When you use numeric gradient checking you should use a small amount of artificial data and a small number of neurons at each layer.
You should see relative errors less than `1e-5`.
```python
from convolutional_networks import DeepConvNet
from fully_connected_networks import adam
reset_seed(0)
num_inputs = 2
input_dims = (3, 8, 8)
num_classes = 10
X = torch.randn(num_inputs, *input_dims, dtype=torch.float64, device='cuda')
y = torch.randint(num_classes, size=(num_inputs,), dtype=torch.int64, device='cuda')
for reg in [0, 3.14]:
print('Running check with reg = ', reg)
model = DeepConvNet(input_dims=input_dims, num_classes=num_classes,
num_filters=[8, 8, 8],
max_pools=[0, 2],
reg=reg,
weight_scale=5e-2, dtype=torch.float64, device='cuda')
loss, grads = model.loss(X, y)
# The relative errors should be up to the order of e-6
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eecs598.grad.compute_numeric_gradient(f, model.params[name])
print('%s max relative error: %e' % (name, eecs598.grad.rel_error(grad_num, grads[name])))
if reg == 0: print()
```
## Overfit small data
As another sanity check, make sure you can overfit a small dataset of 50 images. In the following cell, tweak the **learning rate** and **weight initialization scale** to overfit and achieve 100% training accuracy within 30 epochs.
```python
# TODO: Use a DeepConvNet to overfit 50 training examples by
# tweaking just the learning rate and initialization scale.
from convolutional_networks import DeepConvNet, find_overfit_parameters
from fully_connected_networks import adam
reset_seed(0)
num_train = 50
small_data = {
'X_train': data_dict['X_train'][:num_train],
'y_train': data_dict['y_train'][:num_train],
'X_val': data_dict['X_val'],
'y_val': data_dict['y_val'],
}
input_dims = small_data['X_train'].shape[1:]
# Update the parameters in find_overfit_parameters in convolutional_networks.py
weight_scale, learning_rate = find_overfit_parameters()
model = DeepConvNet(input_dims=input_dims, num_classes=10,
num_filters=[8, 16, 32, 64],
max_pools=[0, 1, 2, 3],
reg=1e-5, weight_scale=weight_scale, dtype=torch.float32, device='cuda')
solver = Solver(model, small_data,
print_every=10, num_epochs=30, batch_size=10,
update_rule=adam,
optim_config={
'learning_rate': learning_rate,
},
device='cuda',
)
# Turn off return_best_params so the final weights are saved, instead of the best weights on the validation set.
solver.train(return_best_params=False)
plt.plot(solver.loss_history, 'o')
plt.title('Training loss history')
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.show()
train_acc = solver.check_accuracy(
    solver.X_train, solver.y_train, num_samples=solver.num_train_samples
)
print(train_acc)
```
If you're happy with the model's performance, run the following cell to save it.
We will also reload the model and run it on the training data to verify that it has the right weights.
```python
path = os.path.join(GOOGLE_DRIVE_PATH, 'overfit_deepconvnet.pth')
solver.model.save(path)
# Create a new instance
model = DeepConvNet(input_dims=input_dims, num_classes=10,
num_filters=[8, 16, 32, 64],
max_pools=[0, 1, 2, 3],
reg=1e-5, weight_scale=weight_scale, dtype=torch.float32, device='cuda')
solver = Solver(model, small_data,
print_every=10, num_epochs=30, batch_size=10,
update_rule=adam,
optim_config={
'learning_rate': learning_rate,
},
device='cuda',
)
# Load model
solver.model.load(path, dtype=torch.float32, device='cuda')
# Evaluate on validation set
accuracy = solver.check_accuracy(small_data['X_train'], small_data['y_train'])
print(f"Saved model's accuracy on training is {accuracy}")
```
# Kaiming initialization
So far, you have manually tuned the weight scale used for weight initialization.
However, this becomes impractical when training deep neural networks; in practice, the larger the weight matrix, the smaller the weight scale should be.
Below you will implement [Kaiming initialization](http://arxiv-web3.library.cornell.edu/abs/1502.01852). For more details, refer to [cs231n note](http://cs231n.github.io/neural-networks-2/#init) and [PyTorch documentation](https://pytorch.org/docs/stable/nn.init.html#torch.nn.init.kaiming_normal_).
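As a rough sketch of the idea (the actual `kaiming_initializer` you implement may take different arguments), the weights are drawn from a zero-mean Gaussian whose standard deviation is $\sqrt{\text{gain}/\text{fan\_in}}$, with gain = 2 for ReLU:
```python
# Hedged sketch of Kaiming (He) initialization; argument names are illustrative.
import torch

def kaiming_sketch(Din, Dout, K=None, relu=True, dtype=torch.float32, device='cpu'):
    gain = 2.0 if relu else 1.0
    if K is None:
        # Linear layer: weight of shape (Din, Dout); fan_in = Din
        return torch.randn(Din, Dout, dtype=dtype, device=device) * (gain / Din) ** 0.5
    # Conv layer: weight of shape (Dout, Din, K, K); fan_in = Din * K * K
    fan_in = Din * K * K
    return torch.randn(Dout, Din, K, K, dtype=dtype, device=device) * (gain / fan_in) ** 0.5
```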
# Convolutional nets with Kaiming initialization
Now that you have a working implementation for Kaiming initialization, go back to your [`DeepConvnet`](#scrollTo=Ah-_nwx2BSxl). Modify your implementation to add Kaiming initialization.
Concretely, when the `weight_scale` is set to `'kaiming'` in the constructor, you should initialize weights of convolutional and linear layers using `kaiming_initializer`. Once you are done, run the following to see the effect of kaiming initialization in deep CNNs.
In this experiment, we train a 31-layer network with four different weight initialization schemes. Among them, only the Kaiming initialization method should achieve a non-random accuracy after one epoch of training.
You may see a `nan` loss when `weight_scale` is large; this illustrates how catastrophic inappropriate weight initialization can be.
```python
from convolutional_networks import DeepConvNet
from fully_connected_networks import sgd_momentum
reset_seed(0)
# Try training a deep convolutional net with different weight initialization methods
num_train = 10000
small_data = {
'X_train': data_dict['X_train'][:num_train],
'y_train': data_dict['y_train'][:num_train],
'X_val': data_dict['X_val'],
'y_val': data_dict['y_val'],
}
input_dims = data_dict['X_train'].shape[1:]
weight_scales = ['kaiming', 1e-1, 1e-2, 1e-3]
solvers = []
for weight_scale in weight_scales:
print('Solver with weight scale: ', weight_scale)
model = DeepConvNet(input_dims=input_dims, num_classes=10,
num_filters=([8] * 10) + ([32] * 10) + ([128] * 10),
max_pools=[9, 19],
weight_scale=weight_scale,
reg=1e-5,
dtype=torch.float32,
device='cuda'
)
solver = Solver(model, small_data,
num_epochs=1, batch_size=128,
update_rule=sgd_momentum,
optim_config={
'learning_rate': 2e-3,
},
print_every=20, device='cuda')
solver.train()
solvers.append(solver)
```
```python
def plot_training_history_init(title, xlabel, solvers, labels, plot_fn, marker='-o'):
plt.title(title)
plt.xlabel(xlabel)
for solver, label in zip(solvers, labels):
data = plot_fn(solver)
label = 'weight_scale=' + str(label)
plt.plot(data, marker, label=label)
plt.legend(loc='lower center', ncol=len(solvers))
plt.subplot(3, 1, 1)
plot_training_history_init('Training loss','Iteration', solvers, weight_scales,
lambda x: x.loss_history, marker='o')
plt.subplot(3, 1, 2)
plot_training_history_init('Training accuracy','Epoch', solvers, weight_scales,
lambda x: x.train_acc_history)
plt.subplot(3, 1, 3)
plot_training_history_init('Validation accuracy','Epoch', solvers, weight_scales,
lambda x: x.val_acc_history)
plt.gcf().set_size_inches(15, 25)
plt.show()
```
# Train a good model!
Train the best convolutional model that you can on CIFAR-10, storing your best model in the `best_model` variable. We require you to get at least 71% accuracy on the validation set using a convolutional net, within 60 seconds of training.
You might find it useful to use batch normalization in your model. However, since we do not ask you to implement it in a CUDA-friendly way, it might slow down training.
**Implement** `create_convolutional_solver_instance`, making sure to initialize your model with the input `dtype` and `device`, as well as initializing the solver on the input `device`.
Hint: Your model does not have to be too deep.
Hint 2: We used `batch_size = 128` for training a model with 74% validation accuracy. You don't have to follow this, but it would save your time for hyperparameter search.
Hint 3: Note that we import all the functions from fully_connected_networks, so feel free to use the optimizers you've already implemented, e.g. adam.
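For orientation, here is one hedged sketch of what `create_convolutional_solver_instance` could look like; the filter counts, pooling positions, batch size, and learning rate below are illustrative guesses and are not guaranteed to reach the accuracy target:
```python
# Hypothetical sketch; hyperparameters are placeholders to be tuned.
def create_convolutional_solver_instance_sketch(data_dict, dtype, device):
    input_dims = data_dict['X_train'].shape[1:]
    model = DeepConvNet(input_dims=input_dims, num_classes=10,
                        num_filters=[32, 64, 128],
                        max_pools=[0, 1, 2],
                        weight_scale='kaiming',
                        reg=1e-5, dtype=dtype, device=device)
    solver = Solver(model, data_dict,
                    num_epochs=10, batch_size=128,
                    update_rule=adam,
                    optim_config={'learning_rate': 2e-3},
                    print_every=100, device=device)
    return solver
```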
```python
from convolutional_networks import DeepConvNet, create_convolutional_solver_instance
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = True
solver = create_convolutional_solver_instance(data_dict, torch.float32, "cuda")
solver.train(time_limit=60)
torch.backends.cudnn.benchmark = False
```
# Test your model!
Run your best model on the validation and test sets. You should achieve above 71% accuracy on the validation set and 70% accuracy on the test set.
(Our best model gets 74.3% validation accuracy and 73.5% test accuracy -- can you beat ours?)
```python
print('Validation set accuracy: ', solver.check_accuracy(data_dict['X_val'], data_dict['y_val']))
print('Test set accuracy: ', solver.check_accuracy(data_dict['X_test'], data_dict['y_test']))
```
If you're happy with the model's performance, run the following cell to save it.
We will also reload the model and run it on the training data to verify that it has the right weights.
```python
path = os.path.join(GOOGLE_DRIVE_PATH, 'one_minute_deepconvnet.pth')
solver.model.save(path)
# Create a new instance
from convolutional_networks import DeepConvNet, create_convolutional_solver_instance
solver = create_convolutional_solver_instance(data_dict, torch.float32, "cuda")
# Load model
solver.model.load(path, dtype=torch.float32, device='cuda')
# Evaluate on validation set
print('Validation set accuracy: ', solver.check_accuracy(data_dict['X_val'], data_dict['y_val']))
print('Test set accuracy: ', solver.check_accuracy(data_dict['X_test'], data_dict['y_test']))
```
# Batch Normalization
One way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train.
One idea along these lines is batch normalization which was proposed by [1] in 2015.
The idea is relatively straightforward. Machine learning methods tend to work better when their input data consists of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features; this will ensure that the first layer of the network sees data that follows a nice distribution. However, even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated.
The authors of [1] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, [1] proposes to insert batch normalization layers into the network. At training time, a batch normalization layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features.
It is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension.
[1] [Sergey Ioffe and Christian Szegedy, "Batch Normalization: Accelerating Deep Network Training by Reducing
Internal Covariate Shift", ICML 2015.](https://arxiv.org/abs/1502.03167)
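In code, the training-time computation is compact. Here is a minimal sketch (assuming a 2D input of shape `(N, D)` and PyTorch tensors; the `BatchNorm.forward` you implement below must additionally handle test mode and maintain the running averages, typically via a momentum-style update):
```python
# Minimal sketch of the training-time batch normalization forward pass.
# Assumptions: x is (N, D), gamma and beta are (D,); the cache layout is illustrative.
import torch

def batchnorm_forward_sketch(x, gamma, beta, eps=1e-5):
    mu = x.mean(dim=0)                        # per-feature mean, shape (D,)
    var = x.var(dim=0, unbiased=False)        # per-feature (biased) variance, shape (D,)
    x_hat = (x - mu) / torch.sqrt(var + eps)  # normalized features
    out = gamma * x_hat + beta                # learnable scale and shift
    cache = (x, x_hat, mu, var, gamma, eps)
    return out, cache
```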
## Batch normalization: forward
Implement the batch normalization forward pass in the function `BatchNorm.forward`. Once you have done so, run the following to test your implementation.
Referencing the paper linked to above in [1] may be helpful!
After implementing the forward pass for batch normalization, you can run the following to sanity check your implementation. After running batch normalization with beta=0 and gamma=1, the data should have zero mean and unit variance.
After running batch normalization with nontrivial beta and gamma, the output data should have mean approximately equal to beta, and std approximately equal to gamma.
```python
# Check the training-time forward pass by checking means and variances
# of features both before and after batch normalization
from convolutional_networks import BatchNorm
def print_mean_std(x,dim=0):
means = ['%.3f' % xx for xx in x.mean(dim=dim).tolist()]
stds = ['%.3f' % xx for xx in x.std(dim=dim).tolist()]
print(' means: ', means)
print(' stds: ', stds)
print()
# Simulate the forward pass for a two-layer network
reset_seed(0)
N, D1, D2, D3 = 200, 50, 60, 3
X = torch.randn(N, D1, dtype=torch.float64, device='cuda')
W1 = torch.randn(D1, D2, dtype=torch.float64, device='cuda')
W2 = torch.randn(D2, D3, dtype=torch.float64, device='cuda')
a = X.matmul(W1).clamp(min=0.).matmul(W2)
print('Before batch normalization:')
print_mean_std(a,dim=0)
# Run with gamma=1, beta=0. Means should be close to zero and stds close to one
gamma = torch.ones(D3, dtype=torch.float64, device='cuda')
beta = torch.zeros(D3, dtype=torch.float64, device='cuda')
print('After batch normalization (gamma=1, beta=0)')
a_norm, _ = BatchNorm.forward(a, gamma, beta, {'mode': 'train'})
print_mean_std(a_norm,dim=0)
# Run again with nontrivial gamma and beta. Now means should be close to beta
# and std should be close to gamma.
gamma = torch.tensor([1.0, 2.0, 3.0], dtype=torch.float64, device='cuda')
beta = torch.tensor([11.0, 12.0, 13.0], dtype=torch.float64, device='cuda')
print('After batch normalization (gamma=', gamma.tolist(), ', beta=', beta.tolist(), ')')
a_norm, _ = BatchNorm.forward(a, gamma, beta, {'mode': 'train'})
print_mean_std(a_norm,dim=0)
```
We can sanity-check the test-time forward pass of batch normalization by running the following. First we run the training-time forward pass many times to "warm up" the running averages. If we then run a test-time forward pass, the output should have approximately zero mean and unit variance.
```python
from convolutional_networks import BatchNorm
reset_seed(0)
N, D1, D2, D3 = 200, 50, 60, 3
W1 = torch.randn(D1, D2, dtype=torch.float64, device='cuda')
W2 = torch.randn(D2, D3, dtype=torch.float64, device='cuda')
bn_param = {'mode': 'train'}
gamma = torch.ones(D3, dtype=torch.float64, device='cuda')
beta = torch.zeros(D3, dtype=torch.float64, device='cuda')
for t in range(500):
X = torch.randn(N, D1, dtype=torch.float64, device='cuda')
a = X.matmul(W1).clamp(min=0.).matmul(W2)
BatchNorm.forward(a, gamma, beta, bn_param)
bn_param['mode'] = 'test'
X = torch.randn(N, D1, dtype=torch.float64, device='cuda')
a = X.matmul(W1).clamp(min=0.).matmul(W2)
a_norm, _ = BatchNorm.forward(a, gamma, beta, bn_param)
# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print('After batch normalization (test-time):')
print_mean_std(a_norm,dim=0)
```
## Batch normalization: backward
Now implement the backward pass for batch normalization in the function `BatchNorm.backward`.
To derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass.
Please don't forget to implement the train and test mode separately.
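As a hedged sketch of the graph-based backward pass (it assumes the illustrative cache layout from the forward sketch above, which may differ from yours):
```python
# Sketch of the batchnorm backward pass derived node-by-node from the
# computation graph; cache layout (x, x_hat, mu, var, gamma, eps) is assumed.
def batchnorm_backward_sketch(dout, cache):
    x, x_hat, mu, var, gamma, eps = cache
    N = x.shape[0]
    std = (var + eps).sqrt()

    dbeta = dout.sum(dim=0)
    dgamma = (dout * x_hat).sum(dim=0)

    dx_hat = dout * gamma
    dvar = (dx_hat * (x - mu) * -0.5 * (var + eps) ** -1.5).sum(dim=0)
    dmu = (-dx_hat / std).sum(dim=0) + dvar * (-2.0 / N) * (x - mu).sum(dim=0)
    dx = dx_hat / std + dvar * 2.0 * (x - mu) / N + dmu / N
    return dx, dgamma, dbeta
```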
Once you have finished, run the following to numerically check your backward pass.
```python
from convolutional_networks import BatchNorm
# Gradient check batchnorm backward pass
reset_seed(0)
N, D = 4, 5
x = 5 * torch.randn(N, D, dtype=torch.float64, device='cuda') + 12
gamma = torch.randn(D, dtype=torch.float64, device='cuda')
beta = torch.randn(D, dtype=torch.float64, device='cuda')
dout = torch.randn(N, D, dtype=torch.float64, device='cuda')
bn_param = {'mode': 'train'}
fx = lambda x: BatchNorm.forward(x, gamma, beta, bn_param)[0]
fg = lambda a: BatchNorm.forward(x, a, beta, bn_param)[0]
fb = lambda b: BatchNorm.forward(x, gamma, b, bn_param)[0]
dx_num = eecs598.grad.compute_numeric_gradient(fx, x, dout)
da_num = eecs598.grad.compute_numeric_gradient(fg, gamma.clone(), dout)
db_num = eecs598.grad.compute_numeric_gradient(fb, beta.clone(), dout)
_, cache = BatchNorm.forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = BatchNorm.backward(dout, cache)
# You should expect to see relative errors between 1e-12 and 1e-9
print('dx error: ', eecs598.grad.rel_error(dx_num, dx))
print('dgamma error: ', eecs598.grad.rel_error(da_num, dgamma))
print('dbeta error: ', eecs598.grad.rel_error(db_num, dbeta))
```
## Batch normalization: alternative backward
In class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. For example, you can derive a very simple formula for the sigmoid function's backward pass by simplifying gradients on paper.
Surprisingly, it turns out that you can do a similar simplification for the batch normalization backward pass too!
In the forward pass, given a set of inputs $X=\begin{bmatrix}x_1\\x_2\\...\\x_N\end{bmatrix}$,
we first calculate the mean $\mu$ and variance $v$.
With $\mu$ and $v$ calculated, we can calculate the standard deviation $\sigma$ and normalized data $Y$.
The equations and graph illustration below describe the computation ($y_i$ is the i-th element of the vector $Y$).
\begin{align}
& \mu=\frac{1}{N}\sum_{k=1}^N x_k & v=\frac{1}{N}\sum_{k=1}^N (x_k-\mu)^2 \\
& \sigma=\sqrt{v+\epsilon} & y_i=\frac{x_i-\mu}{\sigma}
\end{align}
The meat of our problem during backpropagation is to compute $\frac{\partial L}{\partial X}$, given the upstream gradient we receive, $\frac{\partial L}{\partial Y}.$ To do this, recall the chain rule in calculus gives us $\frac{\partial L}{\partial X} = \frac{\partial L}{\partial Y} \cdot \frac{\partial Y}{\partial X}$.
The unknown/hard part is $\frac{\partial Y}{\partial X}$. We can find this by first deriving step-by-step our local gradients at
$\frac{\partial v}{\partial X}$, $\frac{\partial \mu}{\partial X}$,
$\frac{\partial \sigma}{\partial v}$,
$\frac{\partial Y}{\partial \sigma}$, and $\frac{\partial Y}{\partial \mu}$,
and then use the chain rule to compose these gradients (which appear in the form of vectors!) appropriately to compute $\frac{\partial Y}{\partial X}$.
If it's challenging to directly reason about the gradients over $X$ and $Y$ which require matrix multiplication, try reasoning about the gradients in terms of individual elements $x_i$ and $y_i$ first: in that case, you will need to come up with the derivations for $\frac{\partial L}{\partial x_i}$, by relying on the Chain Rule to first calculate the intermediate $\frac{\partial \mu}{\partial x_i}, \frac{\partial v}{\partial x_i}, \frac{\partial \sigma}{\partial x_i},$ then assemble these pieces to calculate $\frac{\partial y_i}{\partial x_i}$.
You should make sure each of the intermediary gradient derivations are all as simplified as possible, for ease of implementation.
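For reference, in the notation above (where $Y$ is the normalized data, before the learnable scale and shift are applied), the algebra collapses to the following compact form; verifying it on paper is a good check before you implement `backward_alt`:
$$\frac{\partial L}{\partial x_i} = \frac{1}{N\sigma}\left(N\frac{\partial L}{\partial y_i} - \sum_{j=1}^N \frac{\partial L}{\partial y_j} - y_i \sum_{j=1}^N \frac{\partial L}{\partial y_j}\, y_j\right)$$
When the scale parameter $\gamma$ is included, it simply multiplies the upstream gradient $\frac{\partial L}{\partial y_i}$.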
After doing so, implement the simplified batch normalization backward pass in the function `BatchNorm.backward_alt` and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster.
```python
from convolutional_networks import BatchNorm
reset_seed(0)
N, D = 128, 2048
x = 5 * torch.randn(N, D, dtype=torch.float64, device='cuda') + 12
gamma = torch.randn(D, dtype=torch.float64, device='cuda')
beta = torch.randn(D, dtype=torch.float64, device='cuda')
dout = torch.randn(N, D, dtype=torch.float64, device='cuda')
bn_param = {'mode': 'train'}
out, cache = BatchNorm.forward(x, gamma, beta, bn_param)
t1 = time.time()
dx1, dgamma1, dbeta1 = BatchNorm.backward(dout, cache)
t2 = time.time()
dx2, dgamma2, dbeta2 = BatchNorm.backward_alt(dout, cache)
t3 = time.time()
print('dx difference: ', eecs598.grad.rel_error(dx1, dx2))
print('dgamma difference: ', eecs598.grad.rel_error(dgamma1, dgamma2))
print('dbeta difference: ', eecs598.grad.rel_error(dbeta1, dbeta2))
print('speedup: %.2fx' % ((t2 - t1) / (t3 - t2)))
```
# Spatial Batch Normalization
As proposed in the original paper, batch normalization can also be used for convolutional networks, but we need to tweak it a bit; the modification will be called "spatial batch normalization."
Normally batch-normalization accepts inputs of shape `(N, D)` and produces outputs of shape `(N, D)`, where we normalize across the minibatch dimension `N`. For data coming from convolutional layers, batch normalization needs to accept inputs of shape `(N, C, H, W)` and produce outputs of shape `(N, C, H, W)` where the `N` dimension gives the minibatch size and the `(H, W)` dimensions give the spatial size of the feature map.
If the feature map was produced using convolutions, then we expect every feature channel's statistics, e.g. mean and variance, to be relatively consistent both between different images and between different locations within the same image -- after all, every feature channel is produced by the same convolutional filter! Therefore spatial batch normalization computes a mean and variance for each of the `C` feature channels by computing statistics over the minibatch dimension `N` as well as the spatial dimensions `H` and `W`.
[1] [Sergey Ioffe and Christian Szegedy, "Batch Normalization: Accelerating Deep Network Training by Reducing
Internal Covariate Shift", ICML 2015.](https://arxiv.org/abs/1502.03167)
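One common way to implement this (a sketch, not necessarily how you must structure your solution) is to fold the spatial dimensions into the batch dimension and reuse the vanilla batchnorm you already wrote:
```python
# Sketch: spatial batchnorm by reshaping so channels become the feature
# dimension of the vanilla BatchNorm implemented above.
def spatial_batchnorm_forward_sketch(x, gamma, beta, bn_param):
    N, C, H, W = x.shape
    # (N, C, H, W) -> (N, H, W, C) -> (N*H*W, C)
    x_flat = x.permute(0, 2, 3, 1).reshape(-1, C)
    out_flat, cache = BatchNorm.forward(x_flat, gamma, beta, bn_param)
    out = out_flat.reshape(N, H, W, C).permute(0, 3, 1, 2)
    return out, cache
```
The backward pass can mirror the same reshape around `BatchNorm.backward`.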
## Spatial batch normalization: forward
Implement the forward pass for spatial batch normalization in the function `SpatialBatchNorm.forward`. Check your implementation by running the following:
After implementing the forward pass for spatial batch normalization, you can run the following to sanity check your code.
```python
from convolutional_networks import SpatialBatchNorm
reset_seed(0)
# Check the training-time forward pass by checking means and variances
# of features both before and after spatial batch normalization
N, C, H, W = 2, 3, 4, 5
x = 4 * torch.randn(N, C, H, W, dtype=torch.float64, device='cuda') + 10
print('Before spatial batch normalization:')
print(' Shape: ', x.shape)
print(' Means: ', x.mean(dim=(0, 2, 3)))
print(' Stds: ', x.std(dim=(0, 2, 3)))
# Means should be close to zero and stds close to one
gamma = torch.ones(C, dtype=torch.float64, device='cuda')
beta = torch.zeros(C,dtype=torch.float64, device='cuda')
bn_param = {'mode': 'train'}
out, _ = SpatialBatchNorm.forward(x, gamma, beta, bn_param)
print('After spatial batch normalization:')
print(' Shape: ', out.shape)
print(' Means: ', out.mean(dim=(0, 2, 3)))
print(' Stds: ', out.std(dim=(0, 2, 3)))
# Means should be close to beta and stds close to gamma
gamma = torch.tensor([3, 4, 5], dtype=torch.float64, device='cuda')
beta = torch.tensor([6, 7, 8], dtype=torch.float64, device='cuda')
out, _ = SpatialBatchNorm.forward(x, gamma, beta, bn_param)
print('After spatial batch normalization (nontrivial gamma, beta):')
print(' Shape: ', out.shape)
print(' Means: ', out.mean(dim=(0, 2, 3)))
print(' Stds: ', out.std(dim=(0, 2, 3)))
```
Similar to the vanilla batch normalization implementation, run the following to sanity-check the test-time forward pass of spatial batch normalization.
```python
reset_seed(0)
# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.
N, C, H, W = 10, 4, 11, 12
bn_param = {'mode': 'train'}
gamma = torch.ones(C, dtype=torch.float64, device='cuda')
beta = torch.zeros(C, dtype=torch.float64, device='cuda')
for t in range(50):
x = 2.3 * torch.randn(N, C, H, W, dtype=torch.float64, device='cuda') + 13
SpatialBatchNorm.forward(x, gamma, beta, bn_param)
bn_param['mode'] = 'test'
x = 2.3 * torch.randn(N, C, H, W, dtype=torch.float64, device='cuda') + 13
a_norm, _ = SpatialBatchNorm.forward(x, gamma, beta, bn_param)
# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print('After spatial batch normalization (test-time):')
print(' means: ', a_norm.mean(dim=(0, 2, 3)))
print(' stds: ', a_norm.std(dim=(0, 2, 3)))
```
## Spatial batch normalization: backward
Implement the backward pass for spatial batch normalization in the function `SpatialBatchNorm.backward`.
After implementing the backward pass for spatial batch normalization, run the following to perform numeric gradient checking on your implementation. You should see errors less than `1e-6`.
```python
reset_seed(0)
N, C, H, W = 2, 3, 4, 5
x = 5 * torch.randn(N, C, H, W, dtype=torch.float64, device='cuda') + 12
gamma = torch.randn(C, dtype=torch.float64, device='cuda')
beta = torch.randn(C, dtype=torch.float64, device='cuda')
dout = torch.randn(N, C, H, W, dtype=torch.float64, device='cuda')
bn_param = {'mode': 'train'}
fx = lambda x: SpatialBatchNorm.forward(x, gamma, beta, bn_param)[0]
fg = lambda a: SpatialBatchNorm.forward(x, a, beta, bn_param)[0]
fb = lambda b: SpatialBatchNorm.forward(x, gamma, b, bn_param)[0]
dx_num = eecs598.grad.compute_numeric_gradient(fx, x, dout)
da_num = eecs598.grad.compute_numeric_gradient(fg, gamma, dout)
db_num = eecs598.grad.compute_numeric_gradient(fb, beta, dout)
_, cache = SpatialBatchNorm.forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = SpatialBatchNorm.backward(dout, cache)
print('dx error: ', eecs598.grad.rel_error(dx_num, dx))
print('dgamma error: ', eecs598.grad.rel_error(da_num, dgamma))
print('dbeta error: ', eecs598.grad.rel_error(db_num, dbeta))
```
# "Sandwich" layers with batch normalization
Again, below you will find sandwich layers that implement a few commonly used patterns for convolutional networks. We include the functions in `convolutional_networks.py` but you can see them here for your convenience.
```python
class Linear_BatchNorm_ReLU(object):
@staticmethod
def forward(x, w, b, gamma, beta, bn_param):
"""
Convenience layer that performs a linear transform, batch normalization,
and ReLU.
Inputs:
- x: Array of shape (N, D1); input to the linear layer
- w, b: Arrays of shape (D1, D2) and (D2,) giving the weight and bias for
the linear transform.
- gamma, beta: Arrays of shape (D2,) and (D2,) giving scale and shift
parameters for batch normalization.
- bn_param: Dictionary of parameters for batch normalization.
Returns:
- out: Output from ReLU, of shape (N, D2)
- cache: Object to give to the backward pass.
"""
a, fc_cache = Linear.forward(x, w, b)
a_bn, bn_cache = BatchNorm.forward(a, gamma, beta, bn_param)
out, relu_cache = ReLU.forward(a_bn)
cache = (fc_cache, bn_cache, relu_cache)
return out, cache
@staticmethod
def backward(dout, cache):
"""
Backward pass for the linear-batchnorm-relu convenience layer.
"""
fc_cache, bn_cache, relu_cache = cache
da_bn = ReLU.backward(dout, relu_cache)
da, dgamma, dbeta = BatchNorm.backward(da_bn, bn_cache)
dx, dw, db = Linear.backward(da, fc_cache)
return dx, dw, db, dgamma, dbeta
class Conv_BatchNorm_ReLU(object):
@staticmethod
def forward(x, w, b, gamma, beta, conv_param, bn_param):
a, conv_cache = FastConv.forward(x, w, b, conv_param)
an, bn_cache = SpatialBatchNorm.forward(a, gamma, beta, bn_param)
out, relu_cache = ReLU.forward(an)
cache = (conv_cache, bn_cache, relu_cache)
return out, cache
@staticmethod
def backward(dout, cache):
conv_cache, bn_cache, relu_cache = cache
dan = ReLU.backward(dout, relu_cache)
da, dgamma, dbeta = SpatialBatchNorm.backward(dan, bn_cache)
dx, dw, db = FastConv.backward(da, conv_cache)
return dx, dw, db, dgamma, dbeta
class Conv_BatchNorm_ReLU_Pool(object):
@staticmethod
def forward(x, w, b, gamma, beta, conv_param, bn_param, pool_param):
a, conv_cache = FastConv.forward(x, w, b, conv_param)
an, bn_cache = SpatialBatchNorm.forward(a, gamma, beta, bn_param)
s, relu_cache = ReLU.forward(an)
out, pool_cache = FastMaxPool.forward(s, pool_param)
cache = (conv_cache, bn_cache, relu_cache, pool_cache)
return out, cache
@staticmethod
def backward(dout, cache):
conv_cache, bn_cache, relu_cache, pool_cache = cache
ds = FastMaxPool.backward(dout, pool_cache)
dan = ReLU.backward(ds, relu_cache)
da, dgamma, dbeta = SpatialBatchNorm.backward(dan, bn_cache)
dx, dw, db = FastConv.backward(da, conv_cache)
return dx, dw, db, dgamma, dbeta
```
# Convolutional nets with batch normalization
Now that you have a working implementation for batch normalization, go back to your [`DeepConvnet`](#scrollTo=Ah-_nwx2BSxl). Modify your implementation to add batch normalization.
Concretely, when the `batchnorm` flag is set to `True` in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last linear layer of the network should not be normalized. Once you are done, run the following to gradient-check your implementation.
In the reg=0 case, you should see errors less than `1e-6` for all weights and batchnorm parameters (beta and gamma); for biases you will see high relative errors due to the extremely small magnitude of both numeric and analytic gradients.
In the reg=3.14 case, you should see errors less than `1e-6` for all parameters.
```python
from convolutional_networks import DeepConvNet
reset_seed(0)
num_inputs = 2
input_dims = (3, 8, 8)
num_classes = 10
X = torch.randn(num_inputs, *input_dims, dtype=torch.float64, device='cuda')
y = torch.randint(num_classes, size=(num_inputs,), dtype=torch.int64, device='cuda')
for reg in [0, 3.14]:
print('Running check with reg = ', reg)
model = DeepConvNet(input_dims=input_dims, num_classes=num_classes,
num_filters=[8, 8, 8],
max_pools=[0, 2],
reg=reg, batchnorm=True,
weight_scale='kaiming',
dtype=torch.float64, device='cuda')
loss, grads = model.loss(X, y)
# The relative errors should be up to the order of e-3
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eecs598.grad.compute_numeric_gradient(f, model.params[name])
print('%s max relative error: %e' % (name, eecs598.grad.rel_error(grad_num, grads[name])))
print()
```
# Batchnorm for deep convolutional networks
Run the following to train a deep convolutional network on a subset of 500 training examples both with and without batch normalization.
```python
from convolutional_networks import DeepConvNet
reset_seed(0)
# Try training a deep convolutional net with batchnorm
num_train = 500
small_data = {
'X_train': data_dict['X_train'][:num_train],
'y_train': data_dict['y_train'][:num_train],
'X_val': data_dict['X_val'],
'y_val': data_dict['y_val'],
}
input_dims = data_dict['X_train'].shape[1:]
bn_model = DeepConvNet(input_dims=input_dims, num_classes=10,
num_filters=[16, 32, 32, 64, 64],
max_pools=[0, 1, 2, 3, 4],
weight_scale='kaiming',
batchnorm=True,
reg=1e-5, dtype=torch.float32, device='cuda')
model = DeepConvNet(input_dims=input_dims, num_classes=10,
num_filters=[16, 32, 32, 64, 64],
max_pools=[0, 1, 2, 3, 4],
weight_scale='kaiming',
batchnorm=False,
reg=1e-5, dtype=torch.float32, device='cuda')
print('Solver with batch norm:')
bn_solver = Solver(bn_model, small_data,
num_epochs=10, batch_size=100,
update_rule=adam,
optim_config={
'learning_rate': 1e-3,
},
print_every=20, device='cuda')
bn_solver.train()
print('\nSolver without batch norm:')
solver = Solver(model, small_data,
num_epochs=10, batch_size=100,
update_rule=adam,
optim_config={
'learning_rate': 1e-3,
},
print_every=20, device='cuda')
solver.train()
```
Run the following to visualize the results from two networks trained above. You should find that using batch normalization helps the network to converge much faster.
```python
def plot_training_history_bn(title, label, solvers, bn_solvers, plot_fn, bl_marker='.', bn_marker='.', labels=None):
"""utility function for plotting training history"""
plt.title(title)
plt.xlabel(label)
bn_plots = [plot_fn(bn_solver) for bn_solver in bn_solvers]
bl_plots = [plot_fn(solver) for solver in solvers]
num_bn = len(bn_plots)
num_bl = len(bl_plots)
for i in range(num_bn):
label='w/ BN'
if labels is not None:
label += str(labels[i])
plt.plot(bn_plots[i], bn_marker, label=label)
for i in range(num_bl):
label='w/o BN'
if labels is not None:
label += str(labels[i])
plt.plot(bl_plots[i], bl_marker, label=label)
plt.legend(loc='lower center', ncol=num_bn+num_bl)
plt.subplot(3, 1, 1)
plot_training_history_bn('Training loss','Iteration', [solver], [bn_solver], \
lambda x: x.loss_history, bl_marker='-o', bn_marker='-o')
plt.subplot(3, 1, 2)
plot_training_history_bn('Training accuracy','Epoch', [solver], [bn_solver], \
lambda x: x.train_acc_history, bl_marker='-o', bn_marker='-o')
plt.subplot(3, 1, 3)
plot_training_history_bn('Validation accuracy','Epoch', [solver], [bn_solver], \
lambda x: x.val_acc_history, bl_marker='-o', bn_marker='-o')
plt.gcf().set_size_inches(15, 25)
plt.show()
```
# Batch normalization and learning rate
We will now run a small experiment to study the interaction of batch normalization and learning rate.
The first cell will train convolutional networks with different learning rates. The second cell will plot training accuracy and validation set accuracy over time. You should find that using batch normalization makes the network less dependent on the learning rate.
```python
from convolutional_networks import DeepConvNet
from fully_connected_networks import sgd_momentum
reset_seed(0)
# Try training a very deep net with batchnorm
num_train = 10000
small_data = {
'X_train': data_dict['X_train'][:num_train],
'y_train': data_dict['y_train'][:num_train],
'X_val': data_dict['X_val'],
'y_val': data_dict['y_val'],
}
input_dims = data_dict['X_train'].shape[1:]
num_epochs = 5
lrs = [2e-1, 1e-1, 5e-2]
lrs = [5e-3, 1e-2, 2e-2]
solvers = []
for lr in lrs:
print('No normalization: learning rate = ', lr)
model = DeepConvNet(input_dims=input_dims, num_classes=10,
num_filters=[8, 8, 8],
max_pools=[0, 1, 2],
weight_scale='kaiming',
batchnorm=False,
reg=1e-5, dtype=torch.float32, device='cuda')
solver = Solver(model, small_data,
num_epochs=num_epochs, batch_size=100,
update_rule=sgd_momentum,
optim_config={
'learning_rate': lr,
},
verbose=False, device='cuda')
solver.train()
solvers.append(solver)
bn_solvers = []
for lr in lrs:
print('Normalization: learning rate = ', lr)
bn_model = DeepConvNet(input_dims=input_dims, num_classes=10,
num_filters=[8, 8, 16, 16, 32, 32],
max_pools=[1, 3, 5],
weight_scale='kaiming',
batchnorm=True,
reg=1e-5, dtype=torch.float32, device='cuda')
bn_solver = Solver(bn_model, small_data,
num_epochs=num_epochs, batch_size=128,
update_rule=sgd_momentum,
optim_config={
'learning_rate': lr,
},
verbose=False, device='cuda')
bn_solver.train()
bn_solvers.append(bn_solver)
```
```python
plt.subplot(2, 1, 1)
plot_training_history_bn('Training accuracy (Batch Normalization)','Epoch', solvers, bn_solvers, \
lambda x: x.train_acc_history, bl_marker='-^', bn_marker='-o', labels=[' lr={:.0e}'.format(lr) for lr in lrs])
plt.subplot(2, 1, 2)
plot_training_history_bn('Validation accuracy (Batch Normalization)','Epoch', solvers, bn_solvers, \
lambda x: x.val_acc_history, bl_marker='-^', bn_marker='-o', labels=[' lr={:.0e}'.format(lr) for lr in lrs])
plt.gcf().set_size_inches(10, 15)
plt.show()
```
# Submit Your Work
After completing both notebooks for this assignment (`fully_connected_networks.ipynb` and this notebook, `convolutional_networks.ipynb`), run the following cell to create a `.zip` file for you to download and turn in.
**Please MANUALLY SAVE every `*.ipynb` and `*.py` files before executing the following cell:**
```python
from eecs598.submit import make_a3_submission
# TODO: Replace these with your actual uniquename and umid
uniquename = None
umid = None
make_a3_submission(GOOGLE_DRIVE_PATH, uniquename, umid)
```
Source: `A3/convolutional_networks.ipynb` from `Haian-Jin/EECS498_labs_public` @ `8f3fda5f910271749c8177846cd5052080e6af4c` (MIT license), 128,667 bytes, Jupyter Notebook.
```python
import numpy as np
import scipy.misc
from scipy.fftpack import dct, idct
import sys
from PIL import Image
import matplotlib
import matplotlib.pyplot as plt
import random
from tqdm._tqdm_notebook import tqdm_notebook
from scipy.fftpack import dct, idct
import seaborn as sns
from skimage.metrics import structural_similarity as ssim
import pandas as pd
import sympy
%matplotlib inline
class ImageLoader:
def __init__(self, FILE_PATH):
self.img = np.array(Image.open(FILE_PATH))
# number of 8x8 block rows
self.row_blocks_count = self.img.shape[0] // 8
# number of 8x8 block columns
self.col_blocks_count = self.img.shape[1] // 8
def get_points(self, POINT):
Row = random.randint(0, len(self.img) - POINT - 1)
Col = random.randint(0, len(self.img) - 1)
return self.img[Row : Row + POINT, Col]
def get_block(self, col, row):
return self.img[col * 8 : (col + 1) * 8, row * 8 : (row + 1) * 8]
# plt.rcParams['font.family'] ='sans-serif'  # font to use
# plt.rcParams["font.sans-serif"] = "Source Han Sans"
plt.rcParams["font.family"] = "Source Han Sans JP"  # font to use
plt.rcParams["xtick.direction"] = "in"  # x-axis tick marks point inward ('in'), outward ('out'), or both ('inout')
plt.rcParams["ytick.direction"] = "in"  # y-axis tick marks point inward ('in'), outward ('out'), or both ('inout')
plt.rcParams["xtick.major.width"] = 1.0  # width of x-axis major tick lines
plt.rcParams["ytick.major.width"] = 1.0  # width of y-axis major tick lines
plt.rcParams["font.size"] = 12  # font size
plt.rcParams["axes.linewidth"] = 1.0  # axis edge linewidth (thickness of the plot frame)
matplotlib.font_manager._rebuild()
MONO_DIR_PATH = "../../Mono/"
AIRPLANE = ImageLoader(MONO_DIR_PATH + "airplane512.bmp")
BARBARA = ImageLoader(MONO_DIR_PATH + "barbara512.bmp")
BOAT = ImageLoader(MONO_DIR_PATH + "boat512.bmp")
GOLDHILL = ImageLoader(MONO_DIR_PATH + "goldhill512.bmp")
LENNA = ImageLoader(MONO_DIR_PATH + "lenna512.bmp")
MANDRILL = ImageLoader(MONO_DIR_PATH + "mandrill512.bmp")
MILKDROP = ImageLoader(MONO_DIR_PATH + "milkdrop512.bmp")
SAILBOAT = ImageLoader(MONO_DIR_PATH + "sailboat512.bmp")
IMAGES = [
AIRPLANE,
BARBARA,
BOAT,
GOLDHILL,
LENNA,
MANDRILL,
MILKDROP,
SAILBOAT
]
```
```python
class DMLCT:
def __init__(self, n_bar, N):
self.n_bar = n_bar
self.N = N
self.x_l = (2 * np.arange(N) + 1) / (2 * N)
self.s_l = np.arange(n_bar) / (n_bar - 1)
self.xi = (np.arange(n_bar + 1) - 0.5) / (n_bar - 1)
self.lambda_kh = self.get_lambda_kh(self.n_bar)
self.w_k_j = self.get_w_k_j(self.n_bar, self.N)
self.W_L_k_kh = self.get_W_L_k_kh(self.n_bar, self.N)
self.W_k_kh = self.get_W_k_kh(self.n_bar, self.N)
self.W_R_k_kh = self.get_W_R_k_kh(self.n_bar, self.N)
def Lagrange_j(self, j):
x = sympy.Symbol("x")
L_x = 1.0
for l in range(self.n_bar):
if l != j:
L_x *= (x - self.s_l[l]) / (self.s_l[j] - self.s_l[l])
return sympy.integrate(L_x)
def get_lambda_kh(self, n_bar):
lambda_kh = np.ones(n_bar)
lambda_kh[0] = np.sqrt(1 / 2)
return lambda_kh
def get_w_k_j(self, n_bar, N):
L_j = np.zeros((n_bar, N))
x = sympy.Symbol("x")
for j in range(n_bar):
temp = []
Lj = self.Lagrange_j(j)
for k in range(N):
temp.append(Lj.subs(x, self.x_l[k]))
L_j[j] = np.array(temp)
w_k_j = np.zeros((n_bar, N))
for j in range(n_bar):
w_k_j[j] = scipy.fftpack.dct(L_j[j], norm="ortho")
return w_k_j
def get_W_L_k_kh(self, n_bar, N):
W_L_k_kh = np.zeros((n_bar - 1, N))
lambda_kh = self.get_lambda_kh(n_bar)
for kh in range(n_bar - 1):
W_L_k_kh[kh] = (
(1 - n_bar)
* np.sqrt(2 / N)
* lambda_kh[kh]
* np.cos(np.pi * kh * (self.xi[0] + 1))
* self.w_k_j[0]
)
return W_L_k_kh
def get_W_k_kh(self, n_bar, N):
W_k_kh = np.zeros((n_bar - 1, N))
for kh in range(n_bar - 1):
sum_sin = np.zeros(N)
for j in range(1, n_bar - 2 + 1):
sum_sin += np.sin(np.pi * kh * self.s_l[j]) * self.w_k_j[j]
W_k_kh[kh] = (
(n_bar - 1)
* np.sqrt(2 / N)
* self.lambda_kh[kh]
* (
np.cos(np.pi * kh * self.xi[1])
* (self.w_k_j[0] - (-1) ** (kh) * self.w_k_j[n_bar - 1])
- 2 * np.sin((np.pi * kh) / (2 * (n_bar - 1))) * sum_sin
)
)
return W_k_kh
def get_W_R_k_kh(self, n_bar, N):
W_R_k_kh = np.zeros((n_bar - 1, N))
for kh in range(n_bar - 1):
W_R_k_kh[kh] = (
(n_bar - 1)
* np.sqrt(2 / N)
* self.lambda_kh[kh]
* np.cos(np.pi * kh * (self.xi[n_bar] - 1))
* self.w_k_j[n_bar - 1]
)
return W_R_k_kh
```
```python
def get_F_L_k_horizontal(arr, N, row, col):
# w
if col == 0:
w_block = np.zeros(N)
else:
w_block = arr[row, (col - 1) * N : col * N]
return w_block
```
```python
def get_F_R_k_horizontal(arr, N, row, col):
# e
if col == arr.shape[1] // N - 1:
e_block = np.zeros(N)
else:
e_block = arr[row, (col + 1) * N : (col + 2) * N]
return e_block
```
```python
def get_F_L_k_vertical(arr, N, row, col):
# n
if row == 0:
n_block = np.zeros(N)
else:
n_block = arr[(row - 1) * N : row * N, col]
return n_block
```
```python
def get_F_R_k_vertical(arr, N, row, col):
# s
if row == arr.shape[0] // N - 1:
s_block = np.zeros(N)
else:
s_block = arr[(row + 1) * N : (row + 2) * N, col]
return s_block
```
```python
# n_bar = 4
N = 16
```
```python
# dmlct = DMLCT(n_bar, N)
```
```python
IMG = LENNA
```
```python
Fk = np.zeros(IMG.img.shape)
```
# Forward transform
```python
def DMLCT_forward(IMG,n_bar,N):
Fk = np.zeros(IMG.img.shape)
## Vertical direction
### DCT
for row in range(IMG.img.shape[0] // N):
for col in range(IMG.img.shape[1]):
eight_points = IMG.img[N * row : N * (row + 1), col]
c = scipy.fftpack.dct(eight_points, norm="ortho")
Fk[N * row : N * (row + 1), col] = c
### Residual
dmlct = DMLCT(n_bar,N)
for row in range(IMG.img.shape[0] // N):
for col in range(IMG.img.shape[1]):
# F is a view into Fk, so in-place modifications update Fk directly
F = Fk[N * row : N * (row + 1), col]
F_L = get_F_L_k_vertical(Fk, N, row, col)
F_R = get_F_R_k_vertical(Fk, N, row, col)
U_k_n_bar = np.zeros(N)
for kh in range(n_bar - 2 + 1):
U_k_n_bar += (
F_L[kh] * dmlct.W_L_k_kh[kh]
+ F[kh] * dmlct.W_k_kh[kh]
+ F_R[kh] * dmlct.W_R_k_kh[kh]
)
# For n_bar = 4: keep coefficients 0,1,2 and rewrite coefficients 3,4,...,7
F[n_bar - 2 + 1 :] -= U_k_n_bar[n_bar - 2 + 1 :]
# keep coefficient 0
for k in reversed(range(1, n_bar - 2 + 1)):
dmlct = DMLCT(k+1, N)
for row in range(IMG.img.shape[0] // N):
for col in range(IMG.img.shape[1]):
# F is a view into Fk, so in-place modifications update Fk directly
F = Fk[N * row : N * (row + 1), col]
F_L = get_F_L_k_vertical(Fk, N, row, col)
F_R = get_F_R_k_vertical(Fk, N, row, col)
U_k_n_bar = np.zeros(N)
for kh in range((k + 1) - 2 + 1):
U_k_n_bar += (
F_L[kh] * dmlct.W_L_k_kh[kh]
+ F[kh] * dmlct.W_k_kh[kh]
+ F_R[kh] * dmlct.W_R_k_kh[kh]
)
F[k] -= U_k_n_bar[k]
## Horizontal direction
### DCT
for row in range(Fk.shape[0]):
for col in range(Fk.shape[1] // N):
eight_points = Fk[row, N * col : N * (col + 1)]
c = scipy.fftpack.dct(eight_points, norm="ortho")
Fk[row, N * col : N * (col + 1)] = c
### Residual
dmlct = DMLCT(n_bar,N)
for row in range(IMG.img.shape[0]):
for col in range(IMG.img.shape[1] // N):
F = Fk[row, N * col : N * (col + 1)]
F_L = get_F_L_k_horizontal(Fk, N, row, col)
F_R = get_F_R_k_horizontal(Fk, N, row, col)
U_k_n_bar = np.zeros(N)
for kh in range(n_bar - 2 + 1):
U_k_n_bar += (
F_L[kh] * dmlct.W_L_k_kh[kh]
+ F[kh] * dmlct.W_k_kh[kh]
+ F_R[kh] * dmlct.W_R_k_kh[kh]
)
# For n_bar = 4: keep coefficients 0,1,2 and rewrite coefficients 3,4,...,7
F[n_bar - 2 + 1 :] -= U_k_n_bar[n_bar - 2 + 1 :]
# keep coefficient 0
for k in reversed(range(1, n_bar - 2 + 1)):
dmlct = DMLCT(k+1, N)
for row in range(IMG.img.shape[0]):
for col in range(IMG.img.shape[1] // N):
F = Fk[row, N * col : N * (col + 1)]
F_L = get_F_L_k_horizontal(Fk, N, row, col)
F_R = get_F_R_k_horizontal(Fk, N, row, col)
U_k_n_bar = np.zeros(N)
for kh in range((k + 1) - 2 + 1):
U_k_n_bar += (
F_L[kh] * dmlct.W_L_k_kh[kh]
+ F[kh] * dmlct.W_k_kh[kh]
+ F_R[kh] * dmlct.W_R_k_kh[kh]
)
F[k] -= U_k_n_bar[k]
return Fk
```
```python
# Compute the average of the DCT coefficients
```
```python
Fk_values = np.zeros((512,512))
```
```python
for IMG in tqdm_notebook(IMAGES):
values = np.zeros((25,4))
Fk = np.zeros(IMG.img.shape)
for row in range(IMG.img.shape[0] // N):
for col in range(IMG.img.shape[1] // N):
block = IMG.img[row * N : (row + 1) * N, col * N : (col + 1) * N]
c = scipy.fftpack.dct(
scipy.fftpack.dct(block, axis=0, norm="ortho"), axis=1, norm="ortho"
)
Fk[row * N : (row + 1) * N, col * N : (col + 1) * N] = c
Fk_values += np.abs(Fk)
```
```python
Fk_values /= len(IMAGES)
pd.DataFrame(Fk_values).to_csv("DCT_coef_ave.csv",header=False,index=False)
```
```python
# Compute the average residual coefficients for each n_bar
```
```python
Vk_values = np.zeros((512,512))
```
```python
n_bar = 5
for n_bar in tqdm_notebook(range(2,n_bar+1)):
Vk_values = np.zeros((512,512))
for IMG in IMAGES:
Fk = DMLCT_forward(IMG,n_bar,N)
Vk_values += np.abs(Fk)
pd.DataFrame(Vk_values / len(IMAGES)).to_csv("DMLCT_" + str(n_bar) + "_coef_ave.csv",header=False,index=False)
```
```python
# Load the DCT coefficients
```
```python
Fk_values = pd.read_csv("DCT_coef_ave.csv",header=None).values
```
```python
# Load the residual coefficients
```
```python
Vk_values_arr = []
for i in range(2,n_bar+1,1):
Vk_values_arr.append(pd.read_csv("DMLCT_" + str(i) + "_coef_ave.csv",header=None).values)
```
```python
# Compute the average coefficients per NxN block
```
```python
Fk_block_ave_values = np.zeros((N,N))
for row in range(IMG.img.shape[0] // N):
for col in range(IMG.img.shape[1] // N):
if col == 0:
continue
if col == IMG.img.shape[1] // N -1:
continue
if row == 0:
continue
if row == IMG.img.shape[0] // N -1:
continue
block = Fk_values[row * N : (row + 1) * N, col * N : (col + 1) * N]
Fk_block_ave_values += np.abs(block)
Fk_block_ave_values /= (IMG.img.shape[0]//N)**2
```
```python
Vk_block_ave_values_arr = []
for Vk_values in Vk_values_arr:
Vk_block_ave_values = np.zeros((N,N))
for row in range(IMG.img.shape[0] // N):
for col in range(IMG.img.shape[1] // N):
if col == 0:
continue
if col == IMG.img.shape[1] // N -1:
continue
if row == 0:
continue
if row == IMG.img.shape[0] // N -1:
continue
block = Vk_values[row * N : (row + 1) * N, col * N : (col + 1) * N]
Vk_block_ave_values += np.abs(block)
Vk_block_ave_values /= (IMG.img.shape[0]//N)**2
Vk_block_ave_values_arr.append(Vk_block_ave_values)
```
```python
columns = []
for n in reversed(range(2,n_bar+1,1)):
columns.append("n=" + str(n))
df = pd.DataFrame(columns=columns)
```
```python
k_max = 5
for index in range(1,k_max,1):
for i in range(index):
Gk1k2_1 = []
Gk1k2_2 = []
for n in reversed(range(2,n_bar+1,1)):
Vk_block = Vk_block_ave_values_arr[n-2]
Vk = Vk_block[i,index]
Fk = Fk_block_ave_values[i,index]
Gk1k2_1.append(100 * (1 - Vk/Fk))
Vk = Vk_block[index,i]
Fk = Fk_block_ave_values[index,i]
Gk1k2_2.append(100 * (1 - Vk/Fk))
df.loc["(" + str(i) + "," + str(index) + ")"] = Gk1k2_1
df.loc["(" + str(index) + "," + str(i) + ")"] = Gk1k2_2
Gk1k2 = []
for n in reversed(range(2,n_bar+1,1)):
Vk_block = Vk_block_ave_values_arr[n-2]
Vk = Vk_block[index,index]
Fk = Fk_block_ave_values[index,index]
Gk1k2.append(100 * (1 - Vk/Fk))
df.loc["(" + str(index) + "," + str(index) + ")"] = Gk1k2
Gk1k2 = []
for n in reversed(range(2,n_bar+1,1)):
Vk_block = Vk_block_ave_values_arr[n-2]
Vk_sum = 0
Fk_sum = 0
for row in range(N):
for col in range(N):
if row > k_max-1 or col > k_max-1:
Vk_sum += Vk_block[row,col]
Fk_sum += Fk_block_ave_values[row,col]
Gk1k2.append(100 * (1 - Vk_sum/Fk_sum))
df.loc["others"] = Gk1k2
```
```python
df.to_csv("DMLCT_high_freq_comp.csv")
df
```
| $(k_1, k_2)$ | n=5 | n=4 | n=3 | n=2 |
|---|---|---|---|---|
| (0,1) | 18.310981 | 18.310981 | 18.310981 | 18.310981 |
| (1,0) | 21.832817 | 21.832817 | 21.832817 | 21.832817 |
| (1,1) | 14.575232 | 14.575232 | 14.575232 | 14.575232 |
| (0,2) | 17.877560 | 17.877560 | 17.877560 | 9.336089 |
| (2,0) | 16.195630 | 16.195630 | 16.195630 | 8.333530 |
| (1,2) | 11.987230 | 11.987230 | 11.987230 | 7.455402 |
| (2,1) | 12.711960 | 12.711960 | 12.711960 | 7.439948 |
| (2,2) | 12.214650 | 12.214650 | 12.214650 | 5.681808 |
| (0,3) | 12.405071 | 12.405071 | 8.628719 | 3.997779 |
| (3,0) | 10.430801 | 10.430801 | 7.309138 | 3.667256 |
| (1,3) | 7.824086 | 7.824086 | 5.045261 | 2.374177 |
| (3,1) | 8.051693 | 8.051693 | 5.461716 | 2.826230 |
| (2,3) | 8.804471 | 8.804471 | 6.944298 | 2.749921 |
| (3,2) | 7.777447 | 7.777447 | 5.828321 | 2.416075 |
| (3,3) | 6.360546 | 6.360546 | 3.840187 | 0.963791 |
| (0,4) | 9.997110 | 8.267106 | 7.113365 | 3.912433 |
| (4,0) | 7.706722 | 6.083833 | 5.606183 | 3.186106 |
| (1,4) | 4.039410 | 2.487527 | 2.614220 | 1.213619 |
| (4,1) | 4.838427 | 3.434484 | 3.363089 | 1.906202 |
| (2,4) | 5.953920 | 4.662677 | 4.576186 | 2.363430 |
| (4,2) | 4.229894 | 3.112613 | 3.123368 | 1.878314 |
| (3,4) | 5.651344 | 4.630047 | 3.272790 | 0.999370 |
| (4,3) | 4.845592 | 3.681487 | 2.855581 | 0.987264 |
| (4,4) | 3.622517 | 1.985599 | 2.198705 | 0.982007 |
| others | 1.054861 | 1.299350 | 0.865533 | 0.328743 |
```python
```
Source: `DMLCT/16x16/DMLCT 交流成分比較_ok.ipynb` from `Hiroya-W/Python_DCT` @ `5acb7553792335e178d8b99ca1ee42431cc26f92` (MIT license), 29,811 bytes, Jupyter Notebook.
# Probability
### Miles Erickson
#### August 14, 2017
## Objectives
* Use permutations and combinations to solve probability problems.
* Explain basic laws of probability.
## Agenda
Morning
* Review Sets
* Permutations and combinations
* Laws of Probability
## Some definitions
* A set $S$ consists of all possible outcomes or events and is called the sample space
* Union: $A \cup B = \{ x: x \in A ~\mathtt{ or} ~x \in B\}$
* Intersection: $A \cap B = \{x: x \in A ~\mathtt{and} ~x \in B\}$
* Complement: $A^\complement = \{ x: x \notin A \}$
* Disjoint: $A \cap B = \emptyset$
* Partition: a set of pairwise disjoint sets, $\{A_j\}$, such that $\underset{j=1}{\overset{\infty}{\cup}}A_j = S$
* DeMorgan's laws: $(A \cup B)^\complement = A^\complement \cap B^\complement$ and $(A \cap B)^\complement = A^\complement \cup B^\complement$
```python
from scipy import stats
import numpy as np
import math
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
```
## Permutations and Combinations
In general, there are $n!$ ways we can order $n$ objects, since there are $n$ that can come first, $n-1$ that can come 2nd, and so on. So we can line 16 students up $16!$ ways.
```python
math.factorial(16)
```
20922789888000
Suppose we choose 5 students at random from the class of 20 students. How many different ways could we do that?
If the order matters, it's a **permutation**. If the order doesn't, it's a **combination**.
There are $20$ ways we can choose one student, $20 \cdot 19$ ways we can choose two, and so on, so $$20\cdot19\cdot18\cdot17\cdot16 = \frac{20!}{15!} = {_{20}P_{5}}$$ ways we can choose five students, assuming the order matters. In general
$$_nP_k = \frac{n!}{(n-k)!}$$
```python
def permutations(n, k):
    return math.factorial(n) // math.factorial(n-k)
```
```python
permutations(20,5)
```
1860480
There are $5!$ different ways we can order those five students, so the number of combinations is that number of permutations divided by $5!$. We write this as $${20 \choose 5} = \frac{20!}{15! \cdot 5!}$$
In general,
$${n \choose k} = {_nC_k} = \frac{n!}{k!(n-k)!}$$
```python
def combinations(n, k):
    return math.factorial(n) // (math.factorial(n-k) * math.factorial(k))
```
```python
combinations(20,5)
```
15504
### Tea-drinking problem
There's a classic problem in which a woman claims she can tell whether tea or milk is added to the cup first. The famous statistician R.A. Fisher proposed a test: he would prepare eight cups of tea, four each way, and she would select which was which.
Assuming the null hypothesis (that she was guessing randomly) what's the probability that she would guess all correctly?
```python
```
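One quick check (assuming she must identify which four of the eight cups had milk added first), reusing the `combinations` helper defined above:
```python
# Only one of the C(8, 4) possible selections of four cups is completely correct.
p_all_correct = 1 / combinations(8, 4)
p_all_correct  # 1/70 ≈ 0.014
```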
## Multinomial
Combinations count the number of ways of dividing something into two categories. When dividing into more categories, use
$${n \choose {n_1, n_2, ... n_k}} = \frac{n!}{n_1! n_2! ... n_k!}$$
which reduces to the above for two cases.
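For example, a hypothetical split of 10 people into groups of sizes 5, 3, and 2 can be counted directly:
```python
def multinomial(n, ks):
    """Number of ways to split n items into groups whose sizes are given by ks."""
    assert sum(ks) == n
    result = math.factorial(n)
    for k in ks:
        result //= math.factorial(k)
    return result

multinomial(10, [5, 3, 2])  # 2520
```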
## Definition of probability
Given a sample space S, a *probability function* P of a set has three properties.
* $P(A) \ge 0 \; \forall \; A \subset S$
* $P(S) = 1$
* For a set of pairwise disjoint sets $\{A_j\}$, $P(\cup_j A_j) = \sum_j P(A_j)$
## Independence
Two events $A$ and $B$ are said to be *independent* iff
$$ P(A \cap B) = P(A) P(B)$$
or equivalently
$$ P(B \mid A) = P(B)$$
so knowledge of $A$ provides no information about $B$. This can also be written as $A \perp B$.
### Example: dice
The probability of rolling a 1 on a single fair 6-sided die is $1\over 6$.
What's the probability of two dice having a total value of 3?
```python
```
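A brute-force check by enumerating the 36 equally likely outcomes:
```python
# Count outcomes of two fair dice whose faces sum to 3: (1, 2) and (2, 1).
outcomes = [(i, j) for i in range(1, 7) for j in range(1, 7)]
p_total_3 = sum(1 for i, j in outcomes if i + j == 3) / len(outcomes)
p_total_3  # 2/36 ≈ 0.056
```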
# Bayes' theorem
Bayes' theorem says that
$$P(A\mid B) = \frac{P(B\mid A) P(A)}{P(B)}$$
Where A and B are two possible events.
To prove it, consider that
$$\begin{equation}
\begin{aligned}
P(A\mid B) P(B) & = P(A \cap B) \\
& = P(B \cap A) \\
& = P(B\mid A) P(A) \\
\end{aligned}
\end{equation}
$$
so dividing both sides by $P(B)$ gives the above theorem.
Here we usually think of A as our hypothesis and B as our observed data, so
$$ P(hypothesis \mid data) = \frac{P(data \mid hypothesis) P(hypothesis)}{P(data)}$$
where
$$ P(data \mid hypothesis) \text{ is the likelihood} \\
P(hypothesis) \text{ is the prior probability} \\
P(hypothesis \mid data) \text{ is the posterior probability} \\
P(data) \text{ is the normalizing constant} \\
$$
## Law of Total Probability
If $\{B_n\}$ is a partition of all possible options, then
$$\begin{align}
P(A) & = \sum_j P(A \cap B_j) \\
& = \sum_j P(A \mid B_j) \cdot P(B_j)
\end{align}
$$
### Example: the cookie problem
Bowl A has 30 vanilla cookies and 10 chocolate cookies; bowl B has 30 of each. You pick a bowl at random and draw a cookie. Assuming the cookie is vanilla, what's the probability it comes from bowl A?
```python
```
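A direct application of Bayes' theorem with equal priors on the bowls:
```python
prior_A, prior_B = 0.5, 0.5
like_A = 30 / 40   # P(vanilla | bowl A)
like_B = 30 / 60   # P(vanilla | bowl B)
posterior_A = prior_A * like_A / (prior_A * like_A + prior_B * like_B)
posterior_A  # 0.6
```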
### Example: two-sided coins
There are three coins in a bag, one with two heads, another with two tails, another with a head and a tail. You pick one and flip it, getting a head. What's the probability of getting a head on the next flip?
```python
```
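Working it through with Bayes' theorem and the law of total probability:
```python
# Posterior over the three coins after observing one head, then the
# predictive probability of a head on the next flip of the same coin.
priors = {'HH': 1/3, 'HT': 1/3, 'TT': 1/3}
p_head = {'HH': 1.0, 'HT': 0.5, 'TT': 0.0}
norm = sum(priors[c] * p_head[c] for c in priors)
posterior = {c: priors[c] * p_head[c] / norm for c in priors}
p_next_head = sum(posterior[c] * p_head[c] for c in priors)
p_next_head  # 5/6 ≈ 0.833
```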
## Probability chain rule
$$\begin{align}
P(A_n, A_{n-1}, ..., A_1) & = P(A_n \mid A_{n-1},...,A_1) \cdot P(A_{n-1},...,A_1) \\
& = P(A_n \mid A_{n-1},...,A_1) \cdot P(A_{n-1} \mid A_{n-2},...,A_1) \cdot P(A_{n-2},...,A_1) \\
& = \prod_{j=1}^n P(A_j \mid A_{j-1},...,A_1)
\end{align}
$$
```python
```
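As a small illustration (a hypothetical example: drawing three aces in a row from a shuffled 52-card deck without replacement):
```python
# P(A1, A2, A3) = P(A1) * P(A2 | A1) * P(A3 | A1, A2)
p_three_aces = (4/52) * (3/51) * (2/50)
p_three_aces  # ≈ 0.00018
```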
Source: `probability/lecture/.ipynb_checkpoints/my_Probability_AM-checkpoint.ipynb` from `loganjhennessy/data-science-reference` @ `2b57a91b3fb98ef617252b5cfc76072ea38f0f4a` (MIT license), 10,885 bytes, Jupyter Notebook.
# Using optimization routines from `scipy` and `statsmodels`
```python
%matplotlib inline
```
```python
import scipy.linalg as la
import numpy as np
import scipy.optimize as opt
import matplotlib.pyplot as plt
import pandas as pd
```
```python
np.set_printoptions(precision=3, suppress=True)
```
Using `scipy.optimize`
----
One of the most convenient libraries to use is `scipy.optimize`, since it is already part of the Anaconda installation and it has a fairly intuitive interface.
```python
from scipy import optimize as opt
```
#### Minimizing a univariate function $f: \mathbb{R} \rightarrow \mathbb{R}$
```python
def f(x):
return x**4 + 3*(x-2)**3 - 15*(x)**2 + 1
```
```python
x = np.linspace(-8, 5, 100)
plt.plot(x, f(x));
```
The [`minimize_scalar`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize_scalar.html#scipy.optimize.minimize_scalar) function will find the minimum, and can also be told to search within given bounds. By default, it uses the Brent algorithm, which combines a bracketing strategy with a parabolic approximation.
```python
opt.minimize_scalar(f, method='Brent')
```
```python
opt.minimize_scalar(f, method='bounded', bounds=[0, 6])
```
### Local and global minima
```python
def f(x, offset):
return -np.sinc(x-offset)
```
```python
x = np.linspace(-20, 20, 100)
plt.plot(x, f(x, 5));
```
```python
# note how additional function arguments are passed in
sol = opt.minimize_scalar(f, args=(5,))
sol
```
```python
plt.plot(x, f(x, 5))
plt.axvline(sol.x, c='red')
pass
```
#### We can try multiple random starts to find the global minimum
```python
lower = np.random.uniform(-20, 20, 100)
upper = lower + 1
sols = [opt.minimize_scalar(f, args=(5,), bracket=(l, u)) for (l, u) in zip(lower, upper)]
```
```python
idx = np.argmin([sol.fun for sol in sols])
sol = sols[idx]
```
```python
plt.plot(x, f(x, 5))
plt.axvline(sol.x, c='red');
```
#### Using a stochastic algorithm
See documentation for the [`basinhopping`](http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.optimize.basinhopping.html) algorithm, which also works with multivariate scalar optimization. Note that this is heuristic and not guaranteed to find a global minimum.
```python
from scipy.optimize import basinhopping
x0 = 0
sol = basinhopping(f, x0, stepsize=1, minimizer_kwargs={'args': (5,)})
sol
```
```python
plt.plot(x, f(x, 5))
plt.axvline(sol.x, c='red');
```
### Constrained optimization with `scipy.optimize`
Many real-world optimization problems have constraints - for example, a set of parameters may have to sum to 1.0 (equality constraint), or some parameters may have to be non-negative (inequality constraint). Sometimes, the constraints can be incorporated into the function to be minimized, for example, the non-negativity constraint $p \gt 0$ can be removed by substituting $p = e^q$ and optimizing for $q$. Using such workarounds, it may be possible to convert a constrained optimization problem into an unconstrained one, and use the methods discussed above to solve the problem.
Alternatively, we can use optimization methods that allow the specification of constraints directly in the problem statement as shown in this section. Internally, constraint violation penalties, barriers and Lagrange multipliers are some of the methods used to handle these constraints. We use the example provided in the Scipy [tutorial](http://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html) to illustrate how to set constraints.
We will optimize:
$$
f(x) = -(2xy + 2x - x^2 -2y^2)
$$
subject to the constraint
$$
x^3 - y = 0 \\
y - (x-1)^4 - 2 \ge 0
$$
and the bounds
$$
0.5 \le x \le 1.5 \\
1.5 \le y \le 2.5
$$
```python
def f(x):
return -(2*x[0]*x[1] + 2*x[0] - x[0]**2 - 2*x[1]**2)
```
```python
x = np.linspace(0, 3, 100)
y = np.linspace(0, 3, 100)
X, Y = np.meshgrid(x, y)
Z = f(np.vstack([X.ravel(), Y.ravel()])).reshape((100,100))
plt.contour(X, Y, Z, np.arange(-1.99,10, 1), cmap='jet');
plt.plot(x, x**3, 'k:', linewidth=1)
plt.plot(x, (x-1)**4+2, 'k:', linewidth=1)
plt.fill([0.5,0.5,1.5,1.5], [2.5,1.5,1.5,2.5], alpha=0.3)
plt.axis([0,3,0,3])
```
To set constraints, we pass in a dictionary with keys `type`, `fun` and `jac`. Note that the inequality constraint assumes a $C_j x \ge 0$ form. As usual, the `jac` is optional and will be numerically estimated if not provided.
```python
cons = ({'type': 'eq',
'fun' : lambda x: np.array([x[0]**3 - x[1]]),
'jac' : lambda x: np.array([3.0*(x[0]**2.0), -1.0])},
{'type': 'ineq',
'fun' : lambda x: np.array([x[1] - (x[0]-1)**4 - 2])})
bnds = ((0.5, 1.5), (1.5, 2.5))
```
```python
x0 = [0, 2.5]
```
Unconstrained optimization
```python
ux = opt.minimize(f, x0, constraints=None)
ux
```
Constrained optimization
```python
cx = opt.minimize(f, x0, bounds=bnds, constraints=cons)
cx
```
```python
x = np.linspace(0, 3, 100)
y = np.linspace(0, 3, 100)
X, Y = np.meshgrid(x, y)
Z = f(np.vstack([X.ravel(), Y.ravel()])).reshape((100,100))
plt.contour(X, Y, Z, np.arange(-1.99,10, 1), cmap='jet');
plt.plot(x, x**3, 'k:', linewidth=1)
plt.plot(x, (x-1)**4+2, 'k:', linewidth=1)
plt.text(ux['x'][0], ux['x'][1], 'x', va='center', ha='center', size=20, color='blue')
plt.text(cx['x'][0], cx['x'][1], 'x', va='center', ha='center', size=20, color='red')
plt.fill([0.5,0.5,1.5,1.5], [2.5,1.5,1.5,2.5], alpha=0.3)
plt.axis([0,3,0,3]);
```
## Some applications of optimization
### Finding parameters for ODE models
This is a specialized application of `curve_fit`, in which the curve to be fitted is defined implicitly by an ordinary differential equation
$$
\frac{dx}{dt} = -kx
$$
and we want to use observed data to estimate the parameters $k$ and the initial value $x_0$. Of course this can be explicitly solved but the same approach can be used to find multiple parameters for $n$-dimensional systems of ODEs.
[A more elaborate example for fitting a system of ODEs to model the zombie apocalypse](http://adventuresinpython.blogspot.com/2012/08/fitting-differential-equation-system-to.html)
```python
from scipy.integrate import odeint
def f(x, t, k):
"""Simple exponential decay."""
return -k*x
def x(t, k, x0):
"""
Solution to the ODE x'(t) = f(t,x,k) with initial condition x(0) = x0
"""
x = odeint(f, x0, t, args=(k,))
return x.ravel()
```
```python
# True parameter values
x0_ = 10
k_ = 0.1*np.pi
# Some random data generated from closed form solution plus Gaussian noise
ts = np.sort(np.random.uniform(0, 10, 200))
xs = x0_*np.exp(-k_*ts) + np.random.normal(0,0.1,200)
popt, cov = opt.curve_fit(x, ts, xs)
k_opt, x0_opt = popt
print("k = %g" % k_opt)
print("x0 = %g" % x0_opt)
```
```python
import matplotlib.pyplot as plt
t = np.linspace(0, 10, 100)
plt.plot(ts, xs, 'r.', t, x(t, k_opt, x0_opt), '-');
```
### Another example of fitting a system of ODEs using the `lmfit` package
You may have to install the [`lmfit`](http://cars9.uchicago.edu/software/python/lmfit/index.html) package using `pip` and restart your kernel. The `lmfit` algorithm is another wrapper around `scipy.optimize.leastsq` but allows for richer model specification and more diagnostics.
```python
! pip install lmfit
```
```python
from lmfit import minimize, Parameters, Parameter, report_fit
import warnings
```
```python
def f(xs, t, ps):
"""Lotka-Volterra predator-prey model."""
try:
a = ps['a'].value
b = ps['b'].value
c = ps['c'].value
d = ps['d'].value
except:
a, b, c, d = ps
x, y = xs
return [a*x - b*x*y, c*x*y - d*y]
def g(t, x0, ps):
"""
Solution to the ODE x'(t) = f(t,x,k) with initial condition x(0) = x0
"""
x = odeint(f, x0, t, args=(ps,))
return x
def residual(ps, ts, data):
x0 = ps['x0'].value, ps['y0'].value
model = g(ts, x0, ps)
return (model - data).ravel()
t = np.linspace(0, 10, 100)
x0 = np.array([1,1])
a, b, c, d = 3,1,1,1
true_params = np.array((a, b, c, d))
np.random.seed(123)
data = g(t, x0, true_params)
data += np.random.normal(size=data.shape)
# set parameters including bounds
params = Parameters()
params.add('x0', value= float(data[0, 0]), min=0, max=10)
params.add('y0', value=float(data[0, 1]), min=0, max=10)
params.add('a', value=2.0, min=0, max=10)
params.add('b', value=2.0, min=0, max=10)
params.add('c', value=2.0, min=0, max=10)
params.add('d', value=2.0, min=0, max=10)
# fit model and find predicted values
result = minimize(residual, params, args=(t, data), method='leastsq')
final = data + result.residual.reshape(data.shape)
# plot data and fitted curves
plt.plot(t, data, 'o')
plt.plot(t, final, '-', linewidth=2);
# display fitted statistics
report_fit(result)
```
#### Optimization of graph node placement
To show the many different applications of optimization, here is an example using optimization to change the layout of nodes of a graph. We use a physical analogy - nodes are connected by springs, and the springs resist deformation from their natural length $l_{ij}$. Some nodes are pinned to their initial locations while others are free to move. Because the initial configuration of nodes does not have springs at their natural length, there is tension resulting in a high potential energy $U$, given by the physics formula shown below. Optimization finds the configuration of lowest potential energy given that some nodes are fixed (set up as boundary constraints on the positions of the nodes).
$$
U = \frac{1}{2}\sum_{i,j=1}^n ka_{ij}\left(||p_i - p_j||-l_{ij}\right)^2
$$
Note that the ordination algorithm Multi-Dimensional Scaling (MDS) works on a very similar idea - take a high dimensional data set in $\mathbb{R}^n$, and project down to a lower dimension ($\mathbb{R}^k$) such that the sum of the differences $d_n(x_i, x_j) - d_k(x_i, x_j)$, where $d_n$ and $d_k$ are some measure of distance between two points $x_i$ and $x_j$ in $n$ and $k$ dimensions respectively, is minimized. MDS is often used in exploratory analysis of high-dimensional data to get some intuitive understanding of its "structure".
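As a small illustration of that idea, here is a minimal sketch using `sklearn.manifold.MDS` (assumed to be available; it is separate from the spring example that follows):
```python
from sklearn.datasets import load_iris
from sklearn.manifold import MDS

iris = load_iris()
# project the 4-dimensional iris measurements down to 2 dimensions
embedding = MDS(n_components=2, random_state=0).fit_transform(iris.data)
plt.scatter(embedding[:, 0], embedding[:, 1], c=iris.target, s=10);
```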
```python
from scipy.spatial.distance import pdist, squareform
```
- P0 is the initial location of nodes
- P is the minimal energy location of nodes given constraints
- A is a connectivity matrix - there is a spring between $i$ and $j$ if $A_{ij} = 1$
- $L_{ij}$ is the resting length of the spring connecting $i$ and $j$
- In addition, there are a number of `fixed` nodes whose positions are pinned.
```python
n = 20
k = 1 # spring stiffness
P0 = np.random.uniform(0, 5, (n,2))
A = np.ones((n, n))
A[np.tril_indices_from(A)] = 0
L = A.copy()
```
```python
L.astype('int')
```
```python
def energy(P):
P = P.reshape((-1, 2))
D = squareform(pdist(P))
return 0.5*(k * A * (D - L)**2).sum()
```
```python
D0 = squareform(pdist(P0))
E0 = 0.5* k * A * (D0 - L)**2
```
```python
D0[:5, :5]
```
```python
E0[:5, :5]
```
```python
energy(P0.ravel())
```
```python
# fix the position of the first few nodes just to show constraints
fixed = 4
bounds = (np.repeat(P0[:fixed,:].ravel(), 2).reshape((-1,2)).tolist() +
[[None, None]] * (2*(n-fixed)))
bounds[:fixed*2+4]
```
```python
sol = opt.minimize(energy, P0.ravel(), bounds=bounds)
```
#### Visualization
Original placement is BLUE
Optimized arrangement is RED.
```python
plt.scatter(P0[:, 0], P0[:, 1], s=25)
P = sol.x.reshape((-1,2))
plt.scatter(P[:, 0], P[:, 1], edgecolors='red', facecolors='none', s=30, linewidth=2);
```
Optimization of standard statistical models
---
When we solve standard statistical problems, an optimization procedure similar to the ones discussed here is performed. For example, consider multivariate logistic regression - typically, a Newton-like algorithm known as iteratively reweighted least squares (IRLS) is used to find the maximum likelihood estimate for the generalized linear model family. However, using one of the multivariate scalar minimization methods shown above will also work, for example, the BFGS minimization algorithm.
The take home message is that there is nothing magic going on when Python or R fits a statistical model using a formula - all that is happening is that the objective function is set to be the negative of the log likelihood, and the minimum found using some first or second order optimization algorithm.
```python
import statsmodels.api as sm
```
### Logistic regression as optimization
Suppose we have a binary outcome measure $Y \in \{0,1\}$ that is conditional on some input variable (vector) $x \in (-\infty, +\infty)$. Let the conditional probability be $p(x) = P(Y=1 \mid X=x)$. Given some data, one simple probability model is $p(x) = \beta_0 + x\cdot\beta$ - i.e. linear regression. This doesn't really work for the obvious reason that $p(x)$ must be between 0 and 1 as $x$ ranges across the real line. One simple way to fix this is to use the log-odds transformation $g(x) = \log{\frac{p(x)}{1 - p(x)}} = \beta_0 + x\cdot\beta$. Solving for $p$, we get
$$
p(x) = \frac{1}{1 + e^{-(\beta_0 + x\cdot\beta)}}
$$
As you all know very well, this is logistic regression.
Suppose we have $n$ data points $(x_i, y_i)$ where $x_i$ is a vector of features and $y_i$ is an observed class (0 or 1). For each event, we either have "success" ($y_i = 1$) or "failure" ($y_i = 0$), so the likelihood looks like the product of Bernoulli random variables. According to the logistic model, the probability of the observed outcome is $p(x_i)$ if $y_i = 1$ and $1-p(x_i)$ if $y_i = 0$. So the likelihood is
$$
L(\beta_0, \beta) = \prod_{i=1}^n p(x_i)^{y_i}(1-p(x_i))^{1-y_i}
$$
and the log-likelihood is
\begin{align}
l(\beta_0, \beta) &= \sum_{i=1}^{n} y_i \log{p(x_i)} + (1-y_i)\log{(1-p(x_i))} \\
&= \sum_{i=1}^{n} \log{(1-p(x_i))} + \sum_{i=1}^{n} y_i \log{\frac{p(x_i)}{1-p(x_i)}} \\
&= \sum_{i=1}^{n} -\log{\left(1 + e^{\beta_0 + x_i\cdot\beta}\right)} + \sum_{i=1}^{n} y_i(\beta_0 + x_i\cdot\beta)
\end{align}
Using the standard 'trick', if we augment the matrix $X$ with a column of 1s, we can write $\beta_0 + x_i\cdot\beta$ as just $X\beta$.
```python
df_ = pd.read_csv("binary.csv")
df_.columns = df_.columns.str.lower()
df_.head()
```
```python
# We will ignore the rank categorical value
cols_to_keep = ['admit', 'gre', 'gpa']
df = df_[cols_to_keep]
df.insert(1, 'dummy', 1)
df.head()
```
### Solving as a GLM with IRLS
This is very similar to what you would do in R, only using Python's `statsmodels` package. The GLM solver uses a special variant of Newton's method known as iteratively reweighted least squares (IRLS), which will be further described in the lecture on multivariate and constrained optimization.
```python
model = sm.GLM.from_formula('admit ~ gre + gpa',
data=df, family=sm.families.Binomial())
fit = model.fit()
fit.summary()
```
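For comparison, here is a minimal hand-rolled IRLS sketch (the helper `irls_logistic` is only for illustration; it assumes the `df` with the `dummy` intercept column built above). It should recover essentially the same coefficients as the GLM fit:
```python
def irls_logistic(X, y, n_iter=25, tol=1e-8):
    """Illustrative IRLS for logistic regression: repeated weighted least squares steps."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        p = 1 / (1 + np.exp(-eta))   # current fitted probabilities
        W = p * (1 - p)              # IRLS weights
        z = eta + (y - p) / W        # working response
        XtW = X.T * W                # X'W, treating W as a diagonal matrix
        beta_new = np.linalg.solve(XtW @ X, XtW @ z)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

irls_logistic(df.loc[:, 'dummy':], df['admit'])
```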
### Or use R
```python
%load_ext rpy2.ipython
```
```r
%%R -i df
m <- glm(admit ~ gre + gpa, data=df, family="binomial")
summary(m)
```
### Home-brew logistic regression using a generic minimization function
This is to show that there is no magic going on - you can write the function to minimize directly from the log-likelihood equation and run a minimizer. It will be more accurate if you also provide the derivative (+/- the Hessian for second order methods), but using just the function and numerical approximations to the derivative will also work. As usual, this is for illustration so you understand what is going on - when there is a library function available, you should probably use that instead.
```python
def f(beta, y, x):
"""Minus log likelihood function for logistic regression."""
return -((-np.log(1 + np.exp(np.dot(x, beta)))).sum() + (y*(np.dot(x, beta))).sum())
```
```python
beta0 = np.zeros(3)
opt.minimize(f, beta0, args=(df['admit'], df.loc[:, 'dummy':]), method='BFGS', options={'gtol':1e-2})
```
### Optimization with `sklearn`
There are also many optimization routines in the `scikit-learn` package, as you already know from the previous lectures. Many machine learning problems essentially boil down to the minimization of some appropriate loss function.
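For example, a quick sketch with `sklearn.linear_model.LogisticRegression` on the same admissions data (note that `sklearn` applies L2 regularization by default, so a large `C` is used here to approximate the unpenalized fit):
```python
from sklearn.linear_model import LogisticRegression

# C is the inverse regularization strength; a large value approximates an unpenalized fit
clf = LogisticRegression(C=1e6)
clf.fit(df[['gre', 'gpa']], df['admit'])
print(clf.intercept_, clf.coef_)
```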
### Resources
- [Scipy Optimize reference](http://docs.scipy.org/doc/scipy/reference/optimize.html)
- [Scipy Optimize tutorial](http://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html)
- [LMFit - a modeling interface for nonlinear least squares problems](http://cars9.uchicago.edu/software/python/lmfit/index.html)
- [CVXpy- a modeling interface for convex optimization problems](https://github.com/cvxgrp/cvxpy)
- [Quasi-Newton methods](http://en.wikipedia.org/wiki/Quasi-Newton_method)
- [Convex optimization book by Boyd & Vandenberghe](http://stanford.edu/~boyd/cvxbook/)
- [Nocedal and Wright textbook](http://www.springer.com/us/book/9780387303031)
The PSO algorithm follows the papers below.
http://www.iba.t.u-tokyo.ac.jp/iba/AI/PSOTSP.pdf
http://ci.nii.ac.jp/els/110006977755.pdf?id=ART0008887051&type=pdf&lang=en&host=cinii&order_no=&ppv_type=0&lang_sw=&no=1452683083&cp=
The visiting order of the cities is stored in an array, and PSO is carried out in terms of swapping positions in that order. This is a discrete extension of the standard (continuous) PSO.
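As a small sketch of the basic operation (separate from the classes below; the helper `apply_swaps` is only for illustration): a velocity is a list of index swaps, and applying it to a tour permutes the visiting order.
```python
import numpy as np

def apply_swaps(tour, swaps):
    """Apply a swap sequence (the 'velocity') to a tour and return the new visiting order."""
    tour = tour.copy()
    for i, j in swaps:
        tour[i], tour[j] = tour[j], tour[i]
    return tour

tour = np.array([0, 1, 2, 3, 4])
print(apply_swaps(tour, [(0, 3), (1, 4)]))  # [3 4 2 0 1]
```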
```python
%matplotlib inline
import numpy as np
import pylab as pl
import math
from sympy import *
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from mpl_toolkits.mplot3d import Axes3D
```
```python
def TSP_map(N):  # function that places N cities at random integer coordinates on a 100x100 grid
TSP_map = []
X = [i for i in range(100)]
Y = [i for i in range(100)]
x = np.array([])
y = np.array([])
for i in range(N):
x = np.append(x, np.random.choice(X))
y = np.append(y, np.random.choice(Y))
for i in range(N):
TSP_map.append([x[i], y[i]])
return TSP_map
```
```python
class PSO:
def __init__(self, N, pN, omega, alpha, beta):
self.N = N
self.pN = pN
self.omega = omega
self.alpha = alpha
self.beta = beta
self.city = TSP_map(N)
def initialize(self):
ptcl = np.array([])
for i in range(self.pN):
a = np.random.choice([j for j in range(self.N - 1)])
b = np.random.choice([j for j in range(a, self.N)])
V = [[a, b]]
ptcl = np.append(ptcl, particle(i, V, self.N, self.omega, self.alpha, self.beta))
self.ptcl = ptcl
return self.ptcl
def one_simulate(self):
for i in range(self.pN):
self.ptcl[i].SS_id()
self.ptcl[i].SS_gd(self.p_gd_X)
self.ptcl[i].new_V()
self.ptcl[i].new_X()
self.ptcl[i].P_id(self.city)
def simulate(self, sim_num):
for i in range(self.pN):
self.ptcl[i].initial(self.city)
self.p_gd_X = self.P_gd()
for i in range(sim_num):
self.one_simulate()
self.p_gd_X = self.P_gd()
self.p_gd_X = self.P_gd()
return self.p_gd_X
def P_gd(self):
P_gd = self.ptcl[0].p_id
self.no = 0
for i in range(self.pN):
if P_gd > self.ptcl[i].p_id:
P_gd = self.ptcl[i].p_id
self.no = i
return self.ptcl[self.no].p_id_X
```
```python
class particle:
    def __init__(self, No, V, num_city, omega, alpha, beta):  # No is the particle's index number
self.No = No
self.V = V
self.num_city = num_city
self.omega = omega
self.alpha = alpha
self.beta = beta
self.X = self.init_X()
        self.p_id_X = self.X.copy()  # keep an independent copy of the best-known tour
self.p_id = 1000000
def initial(self, city):
self.ss_id = []
self.ss_gd = []
self.P_id(city)
def init_X(self):
c = np.array([i for i in range(self.num_city)])
np.random.shuffle(c)
return c
    def SO(self, V, P):
        V = V.copy()  # work on a copy so the particle's current position is not modified here
        SO = []
for i in range(len(V)):
if V[i] != P[i]:
t = np.where(V == P[i])
t = int(t[0])
a = V[i]
b = V[t]
V[i] = b
V[t] = a
SO.append([i, t])
if len(SO) == 0:
SO.append([0, 0])
return SO
def SS_id(self):
self.ss_id = self.SO(self.X, self.p_id_X)
def SS_gd(self, p_gd_X):
self.ss_gd = self.SO(self.X, p_gd_X)
def select(self, V, p):
select_v = []
for i in range(len(V)):
x = np.random.choice([1, 0], p=[p, 1-p])
if x == 1:
select_v.append(V[i])
if len(select_v) != 0:
self.newV.append(select_v[0])
def new_V(self):
self.newV = []
self.select(self.V, self.omega)
self.select(self.ss_id, self.alpha)
self.select(self.ss_gd, self.beta)
while [0, 0] in self.newV:
self.newV.remove([0, 0])
self.V = self.newV
return self.V
def new_X(self):
for i in range(len(self.V)):
j = self.V[i][0]
k = self.V[i][1]
a = self.X[j]
b = self.X[k]
self.X[j] = b
self.X[k] = a
return self.X
    def P_id(self, city):  # compute the tour length P_id by summing the distances between consecutive cities
P_id = 0
for i in range(self.num_city):
if i != self.num_city-1:
x1 = city[self.X[i]][0]
y1 = city[self.X[i]][1]
x2 = city[self.X[i+1]][0]
y2 = city[self.X[i+1]][1]
else:
x1 = city[self.X[i]][0]
y1 = city[self.X[i]][1]
x2 = city[self.X[0]][0]
y2 = city[self.X[0]][1]
a = np.array([x1, y1])
b = np.array([x2, y2])
u = b - a
p = np.linalg.norm(u)
P_id += p
if P_id < self.p_id:
self.p_id = P_id
            self.p_id_X = self.X.copy()  # store a copy so later position updates do not overwrite the best tour
return self.p_id
```
Run with 30 cities. The parameters are, from left to right: (number of cities, number of particles, influence rate of the previous velocity, influence rate of the local best, influence rate of the global best).
```python
pso = PSO(30, 1000, 0.8, 0.8, 0.5)
```
City coordinates and their plot.
```python
pso.city
```
[[2.0, 79.0],
[18.0, 21.0],
[56.0, 1.0],
[71.0, 83.0],
[77.0, 7.0],
[1.0, 24.0],
[34.0, 0.0],
[54.0, 64.0],
[90.0, 20.0],
[39.0, 16.0],
[1.0, 55.0],
[68.0, 46.0],
[68.0, 51.0],
[70.0, 79.0],
[88.0, 24.0],
[96.0, 40.0],
[92.0, 49.0],
[55.0, 68.0],
[11.0, 60.0],
[75.0, 87.0],
[37.0, 20.0],
[34.0, 44.0],
[64.0, 94.0],
[6.0, 71.0],
[7.0, 82.0],
[60.0, 14.0],
[69.0, 46.0],
[80.0, 77.0],
[21.0, 12.0],
[40.0, 68.0]]
```python
x = []
y = []
for i in range(len(pso.city)):
x.append(pso.city[i][0])
y.append(pso.city[i][1])
plt.scatter(x, y)
```
Initialize the particles.
```python
pso.initialize()
```
    array([<__main__.particle instance at 0x119f8be60>,
           <__main__.particle instance at 0x119f8b440>,
           ...
<__main__.particle instance at 0x11e691dd0>,
<__main__.particle instance at 0x11e691ea8>,
<__main__.particle instance at 0x11e691f80>,
<__main__.particle instance at 0x11e691440>,
<__main__.particle instance at 0x11e691518>,
<__main__.particle instance at 0x11e6915f0>,
<__main__.particle instance at 0x11e6916c8>,
<__main__.particle instance at 0x11e6917a0>,
<__main__.particle instance at 0x11e691878>,
<__main__.particle instance at 0x11e691950>,
<__main__.particle instance at 0x11e691a28>,
<__main__.particle instance at 0x11e6910e0>,
<__main__.particle instance at 0x11e6911b8>,
<__main__.particle instance at 0x11e6902d8>,
<__main__.particle instance at 0x11e690b48>,
<__main__.particle instance at 0x11e6903b0>,
<__main__.particle instance at 0x11e690200>,
<__main__.particle instance at 0x11e690c20>,
<__main__.particle instance at 0x11e690d40>,
<__main__.particle instance at 0x11e690e18>,
<__main__.particle instance at 0x11e690ef0>,
<__main__.particle instance at 0x11e690fc8>,
<__main__.particle instance at 0x11e690440>,
<__main__.particle instance at 0x11e690518>,
<__main__.particle instance at 0x11e6905f0>,
<__main__.particle instance at 0x11e6906c8>,
<__main__.particle instance at 0x11e6907a0>,
<__main__.particle instance at 0x11e690878>,
<__main__.particle instance at 0x11e690950>,
<__main__.particle instance at 0x11e690050>,
<__main__.particle instance at 0x11e690128>,
<__main__.particle instance at 0x11e68fbd8>,
<__main__.particle instance at 0x11e68f3b0>,
<__main__.particle instance at 0x11e68f440>,
<__main__.particle instance at 0x11e68fc20>,
<__main__.particle instance at 0x11e68fcf8>,
<__main__.particle instance at 0x11e68fdd0>,
<__main__.particle instance at 0x11e68fea8>,
<__main__.particle instance at 0x11e68ff80>,
<__main__.particle instance at 0x11e68f488>,
<__main__.particle instance at 0x11e68f5a8>,
<__main__.particle instance at 0x11e68f6c8>,
<__main__.particle instance at 0x11e68f7a0>,
<__main__.particle instance at 0x11e68f878>,
<__main__.particle instance at 0x11e68f950>,
<__main__.particle instance at 0x11e68fa28>,
<__main__.particle instance at 0x11e68f050>,
<__main__.particle instance at 0x11e68f128>,
<__main__.particle instance at 0x11e68f200>,
<__main__.particle instance at 0x11e68f2d8>,
<__main__.particle instance at 0x11e68ebd8>,
<__main__.particle instance at 0x11e68e518>,
<__main__.particle instance at 0x11e68ecf8>,
<__main__.particle instance at 0x11e68edd0>,
<__main__.particle instance at 0x11e68eea8>,
<__main__.particle instance at 0x11e68ef80>,
<__main__.particle instance at 0x11e68e4d0>,
<__main__.particle instance at 0x11e68e5f0>,
<__main__.particle instance at 0x11e68e6c8>,
<__main__.particle instance at 0x11e68e7a0>,
<__main__.particle instance at 0x11e68e878>,
<__main__.particle instance at 0x11e68e950>,
<__main__.particle instance at 0x11e68ea28>,
<__main__.particle instance at 0x11e68e050>,
<__main__.particle instance at 0x11e68e128>,
<__main__.particle instance at 0x11e68e200>,
<__main__.particle instance at 0x11e68e2d8>,
<__main__.particle instance at 0x11e68dc20>,
<__main__.particle instance at 0x11e68dc68>,
<__main__.particle instance at 0x11e68d518>,
<__main__.particle instance at 0x11e68dd40>,
<__main__.particle instance at 0x11e68de18>,
<__main__.particle instance at 0x11e68def0>,
<__main__.particle instance at 0x11e68dfc8>,
<__main__.particle instance at 0x11e68d5a8>,
<__main__.particle instance at 0x11e68d680>,
<__main__.particle instance at 0x11e68d758>,
<__main__.particle instance at 0x11e68d830>,
<__main__.particle instance at 0x11e68d908>,
<__main__.particle instance at 0x11e68d9e0>,
<__main__.particle instance at 0x11e68dab8>,
<__main__.particle instance at 0x11e68d098>,
<__main__.particle instance at 0x11e68d170>,
<__main__.particle instance at 0x11e68d248>,
<__main__.particle instance at 0x11e68d320>,
<__main__.particle instance at 0x11e68cb00>,
<__main__.particle instance at 0x11e68c3f8>,
<__main__.particle instance at 0x11e68cc68>,
<__main__.particle instance at 0x11e68cd40>,
<__main__.particle instance at 0x11e68ce18>,
<__main__.particle instance at 0x11e68cef0>,
<__main__.particle instance at 0x11e68cfc8>,
<__main__.particle instance at 0x11e68c488>,
<__main__.particle instance at 0x11e68c560>,
<__main__.particle instance at 0x11e68c638>,
<__main__.particle instance at 0x11e68c710>,
<__main__.particle instance at 0x11e68c7e8>,
<__main__.particle instance at 0x11e68c8c0>,
<__main__.particle instance at 0x11e68c998>,
<__main__.particle instance at 0x11e68ca70>,
<__main__.particle instance at 0x11e68c0e0>,
<__main__.particle instance at 0x11e68c1b8>,
<__main__.particle instance at 0x11e68c290>,
<__main__.particle instance at 0x11e68bb00>,
<__main__.particle instance at 0x11e68b368>,
<__main__.particle instance at 0x11e68bb48>,
<__main__.particle instance at 0x11e68bc68>,
<__main__.particle instance at 0x11e68bd40>,
<__main__.particle instance at 0x11e68be18>,
<__main__.particle instance at 0x11e68bef0>,
<__main__.particle instance at 0x11e68bfc8>,
<__main__.particle instance at 0x11e68b440>,
<__main__.particle instance at 0x11e68b518>,
<__main__.particle instance at 0x11e68b5f0>,
<__main__.particle instance at 0x11e68b6c8>,
<__main__.particle instance at 0x11e68b7a0>,
<__main__.particle instance at 0x11e68b878>,
<__main__.particle instance at 0x11e68b950>,
<__main__.particle instance at 0x11e68b098>,
<__main__.particle instance at 0x11e68b170>,
<__main__.particle instance at 0x11e68a368>,
<__main__.particle instance at 0x11e68ac20>,
<__main__.particle instance at 0x11e68ac68>,
<__main__.particle instance at 0x11e68acf8>,
<__main__.particle instance at 0x11e68add0>,
<__main__.particle instance at 0x11e68aea8>,
<__main__.particle instance at 0x11e68af80>,
<__main__.particle instance at 0x11e68a518>,
<__main__.particle instance at 0x11e68a5f0>,
<__main__.particle instance at 0x11e68a6c8>,
<__main__.particle instance at 0x11e68a7a0>,
<__main__.particle instance at 0x11e68a878>,
<__main__.particle instance at 0x11e68a950>,
<__main__.particle instance at 0x11e68aa28>,
<__main__.particle instance at 0x11e68ab00>,
<__main__.particle instance at 0x11e68a0e0>], dtype=object)
Now run the simulation. The function below plots the best route found so far.
```python
def plot():
    # Plot the best tour found so far (the city order stored in pso.p_gd_X).
    x = []
    y = []
    for i in pso.p_gd_X:
        x.append(pso.city[i][0])
        y.append(pso.city[i][1])
    plt.plot(x, y)
```
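As a small optional convenience (not part of the original notebook; `plot_with_cities` is a name introduced here), the same information can be drawn with the individual cities marked and the tour closed back to its starting city:

```python
def plot_with_cities():
    # Close the tour by appending the starting city at the end.
    route = list(pso.p_gd_X) + [pso.p_gd_X[0]]
    xs = [pso.city[i][0] for i in route]
    ys = [pso.city[i][1] for i in route]
    plt.plot(xs, ys, '-o', markersize=3)
    plt.show()
```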
First, run the simulation for a single step.
```python
pso.simulate(1)
```
array([ 2, 25, 23, 24, 10, 29, 0, 22, 3, 15, 12, 7, 9, 20, 18, 17, 19,
27, 11, 14, 8, 5, 4, 26, 21, 13, 16, 6, 28, 1])
```python
plot()
```
Run 9 more steps, for a total of 10 (including the previous run).
```python
pso.simulate(9)
```
array([ 5, 18, 23, 24, 10, 29, 0, 22, 16, 15, 12, 7, 9, 20, 3, 13, 19,
27, 11, 14, 8, 4, 25, 2, 21, 17, 26, 6, 28, 1])
```python
plot()
```
Run 40 more steps, for a total of 50 (including the previous runs).
```python
pso.simulate(40)
```
array([10, 5, 23, 0, 24, 29, 18, 22, 16, 15, 12, 7, 9, 20, 3, 13, 19,
27, 11, 14, 8, 4, 25, 2, 26, 17, 21, 6, 28, 1])
```python
plot()
```
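The plots only give a visual impression of progress. As a hypothetical sanity check (the `tour_length` helper below is not in the original notebook), the length of the closed tour stored in `pso.p_gd_X` can be printed directly and compared between runs:

```python
# Hypothetical helper: total Euclidean length of the closed tour `route`,
# where city[i] gives the (x, y) coordinates produced by TSP_map.
def tour_length(route, city):
    return sum(np.hypot(city[route[i]][0] - city[route[i - 1]][0],
                        city[route[i]][1] - city[route[i - 1]][1])
               for i in range(len(route)))

print(tour_length(pso.p_gd_X, pso.city))
```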
The runs above used the same values of the parameters α, β and ω for every particle.
Next, we repeat the experiment with the parameters drawn separately for each particle.
```python
class PSO:
    def __init__(self, N, pN):
        self.N = N                  # number of cities
        self.pN = pN                # number of particles
        self.city = TSP_map(N)      # random city coordinates
    def initialize(self):
        # Create the particles; each one gets its own randomly drawn
        # parameters (the three np.random.uniform() arguments) and an
        # initial velocity V containing a single random index pair [a, b].
        ptcl = np.array([])
        for i in range(self.pN):
            a = np.random.choice([j for j in range(self.N - 1)])
            b = np.random.choice([j for j in range(a, self.N)])
            V = [[a, b]]
            ptcl = np.append(ptcl, particle(i, V, self.N, np.random.uniform(), np.random.uniform(), np.random.uniform()))
        self.ptcl = ptcl
        return self.ptcl
    def one_simulate(self):
        # One PSO step: each particle updates its velocity using its personal
        # best (SS_id) and the global best (SS_gd), moves to a new tour, and
        # refreshes its personal best.
        for i in range(self.pN):
            self.ptcl[i].SS_id()
            self.ptcl[i].SS_gd(self.p_gd_X)
            self.ptcl[i].new_V()
            self.ptcl[i].new_X()
            self.ptcl[i].P_id(self.city)
    def simulate(self, sim_num):
        for i in range(self.pN):
            self.ptcl[i].initial(self.city)
        self.p_gd_X = self.P_gd()
        for i in range(sim_num):
            self.one_simulate()
            self.p_gd_X = self.P_gd()
        self.p_gd_X = self.P_gd()
        return self.p_gd_X
    def P_gd(self):
        # Global best: route of the particle with the smallest personal-best value.
        P_gd = self.ptcl[0].p_id
        self.no = 0
        for i in range(self.pN):
            if P_gd > self.ptcl[i].p_id:
                P_gd = self.ptcl[i].p_id
                self.no = i
        return self.ptcl[self.no].p_id_X
```
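The driver above delegates all of the route arithmetic to the `particle` class defined earlier in the notebook. For readers landing on this section, here is a minimal sketch of the interface the driver assumes. It is illustrative only: the class is renamed `particle_sketch` to avoid shadowing the real one, and the ordering of the three uniform parameters (weight on the previous velocity first) is an assumption.

```python
class particle_sketch:
    """Illustrative swap-sequence particle exposing the interface PSO expects."""
    def __init__(self, no, V, N, w, alpha, beta):
        self.no, self.V, self.N = no, V, N            # index, velocity (list of index pairs), city count
        self.w, self.alpha, self.beta = w, alpha, beta
        self.X = list(np.random.permutation(N))       # current tour

    def _length(self, city):
        # Length of the closed tour self.X over the city coordinates.
        return sum(np.hypot(city[self.X[i]][0] - city[self.X[i - 1]][0],
                            city[self.X[i]][1] - city[self.X[i - 1]][1])
                   for i in range(self.N))

    def initial(self, city):
        # (Re)start the personal best from the current tour.
        self.p_id = self._length(city)
        self.p_id_X = list(self.X)

    def _swaps_towards(self, target):
        # Swap sequence that transforms self.X into `target`.
        x, swaps = list(self.X), []
        for i in range(self.N):
            if x[i] != target[i]:
                j = x.index(target[i])
                swaps.append([i, j])
                x[i], x[j] = x[j], x[i]
        return swaps

    def SS_id(self):
        self.ss_id = self._swaps_towards(self.p_id_X)

    def SS_gd(self, p_gd_X):
        self.ss_gd = self._swaps_towards(list(p_gd_X))

    def new_V(self):
        # Keep each swap with probability w, alpha or beta respectively.
        keep = lambda seq, p: [s for s in seq if np.random.uniform() < p]
        self.V = keep(self.V, self.w) + keep(self.ss_id, self.alpha) + keep(self.ss_gd, self.beta)

    def new_X(self):
        # Apply the velocity (a sequence of swaps) to the current tour.
        for i, j in self.V:
            self.X[i], self.X[j] = self.X[j], self.X[i]

    def P_id(self, city):
        # Update the personal best if the new tour is shorter.
        d = self._length(city)
        if d < self.p_id:
            self.p_id, self.p_id_X = d, list(self.X)
```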
```python
pso = PSO(30, 1000)
```
```python
# Scatter plot of the city locations generated for this PSO instance.
x = []
y = []
for i in range(len(pso.city)):
x.append(pso.city[i][0])
y.append(pso.city[i][1])
plt.scatter(x, y)
```
```python
pso.initialize()
```
array([<__main__.particle instance at 0x11e69ffc8>, ...,
       <__main__.particle instance at 0x11d2a2cb0>], dtype=object)
```python
pso.simulate(10)
```
array([ 4, 6, 7, 28, 9, 8, 3, 21, 20, 11, 2, 24, 0, 22, 29, 26, 17,
12, 14, 23, 1, 18, 5, 15, 10, 25, 13, 19, 16, 27])
```python
plot()
```
```python
pso.simulate(40)
```
array([ 4, 6, 7, 28, 9, 0, 3, 21, 20, 11, 8, 24, 10, 22, 29, 26, 17,
14, 12, 23, 1, 18, 5, 15, 25, 2, 13, 19, 16, 27])
```python
plot()
```
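plot() above only visualizes the route, so the quality of a run has to be judged by eye. As a purely illustrative helper (it does not use the TSP_map class defined earlier, whose internals are not shown here), the tour length can be computed directly from city coordinates:

```python
import numpy as np

def tour_length(coords, tour):
    # coords: (N, 2) array of city positions; tour: a permutation of range(N).
    # Sums the Euclidean edge lengths along the tour, including the return leg.
    ordered = coords[np.asarray(tour)]
    diffs = np.diff(np.vstack([ordered, ordered[:1]]), axis=0)
    return float(np.sqrt((diffs ** 2).sum(axis=1)).sum())
```

For example, `tour_length(np.random.rand(30, 2), pso.simulate(40))` would score a tour against randomly generated coordinates; to score the actual experiment, the coordinates stored inside the TSP_map instance would be needed.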
With the random weights simply drawn uniformly, the accuracy is not very good.
During the experiments it became clear that the results improve when the previous velocity is reflected more strongly.
We therefore increase the weight given to the previous velocity: concretely, it is drawn from (0.5, 1.0).
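The particle class and its new_V update were defined earlier in the notebook and are not repeated here. Purely as a sketch of what a larger inertia weight means for a swap-sequence velocity (the function name and arguments below are hypothetical, not the notebook's actual API), the previous velocity can be retained swap-by-swap with probability w:

```python
import numpy as np

def weighted_velocity(prev_V, ss_to_pbest, ss_to_gbest, w, c1, c2, rng=np.random):
    # Keep each swap [i, j] of the previous velocity with probability w (inertia),
    # and each swap toward the personal / global best with probability c1 / c2.
    keep = lambda seq, frac: [s for s in seq if rng.random_sample() < frac]
    return keep(prev_V, w) + keep(ss_to_pbest, c1) + keep(ss_to_gbest, c2)
```

With w drawn from (0.5, 1.0) instead of (0, 1), more of the previous velocity survives each iteration, which is the effect the experiments above suggested was beneficial.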
```python
class PSO:
    def __init__(self, N, pN):
        self.N = N                 # number of cities
        self.pN = pN               # number of particles
        self.city = TSP_map(N)
    def initialize(self):
        # Each particle starts with one random swap [a, b] as its velocity.
        # The inertia weight is drawn from (0.5, 1.0), as discussed above.
        ptcl = np.array([])
        for i in range(self.pN):
            a = np.random.choice([j for j in range(self.N - 1)])
            b = np.random.choice([j for j in range(a, self.N)])
            V = [[a, b]]
            ptcl = np.append(ptcl, particle(i, V, self.N, np.random.uniform(0.5, 1.0), np.random.uniform(), np.random.uniform()))
        self.ptcl = ptcl
        return self.ptcl
    def one_simulate(self):
        # One iteration: build swap sequences toward each particle's personal
        # best and the global best, update velocity and position, re-evaluate.
        for i in range(self.pN):
            self.ptcl[i].SS_id()
            self.ptcl[i].SS_gd(self.p_gd_X)
            self.ptcl[i].new_V()
            self.ptcl[i].new_X()
            self.ptcl[i].P_id(self.city)
    def simulate(self, sim_num):
        for i in range(self.pN):
            self.ptcl[i].initial(self.city)
        self.p_gd_X = self.P_gd()
        for i in range(sim_num):
            self.one_simulate()
            self.p_gd_X = self.P_gd()
        self.p_gd_X = self.P_gd()
        return self.p_gd_X
    def P_gd(self):
        # Global best: the tour of the particle whose personal-best value
        # p_id is smallest.
        P_gd = self.ptcl[0].p_id
        self.no = 0
        for i in range(self.pN):
            if P_gd > self.ptcl[i].p_id:
                P_gd = self.ptcl[i].p_id
                self.no = i
        return self.ptcl[self.no].p_id_X
```
```python
pso = PSO(30, 1000)
pso.initialize()
```
```python
pso.simulate(10)
plot()
```
```python
pso.simulate(40)
plot()
```
Increasing the inertia weight ω seems to improve the results somewhat.
Indeed, the paper at http://ci.nii.ac.jp/els/110006977755.pdf?id=ART0008887051&type=pdf&lang=en&host=cinii&order_no=&ppv_type=0&lang_sw=&no=1452683083&cp= also points out that a larger weight works better.
On top of these observations, the visualized routes show that the search keeps falling into local optima.
In addition, although those runs are omitted here, the number of particles turned out to have little influence on the result; avoiding local optima matters far more.
We therefore add mutation, as in genetic algorithms:
with some fixed probability the entire tour is shuffled, and with some fixed probability only part of the ordering is rearranged.
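As a standalone illustration of these two operators (hypothetical helper names; the class below actually applies only the full-shuffle variant, with probability 0.1 inside one_simulate):

```python
import numpy as np

def mutate_shuffle(tour, p=0.1, rng=np.random):
    # With probability p, shuffle the entire tour (the strong mutation).
    if rng.random_sample() < p:
        rng.shuffle(tour)
    return tour

def mutate_segment(tour, p=0.1, rng=np.random):
    # With probability p, reverse a randomly chosen segment of the tour,
    # changing only part of the ordering (a milder, 2-opt-like mutation).
    if rng.random_sample() < p:
        i, j = sorted(rng.choice(len(tour), size=2, replace=False))
        tour[i:j + 1] = tour[i:j + 1][::-1]
    return tour
```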
```python
class PSO:
def __init__(self, N, pN):
self.N = N
self.pN = pN
self.city = TSP_map(N)
def initialize(self):
ptcl = np.array([])
for i in range(self.pN):
a = np.random.choice([j for j in range(self.N - 1)])
b = np.random.choice([j for j in range(a, self.N)])
V = [[a, b]]
ptcl = np.append(ptcl, particle(i, V, self.N, np.random.uniform(0.5, 1.0), np.random.uniform(), np.random.uniform()))
self.ptcl = ptcl
return self.ptcl
def one_simulate(self):
for i in range(self.pN):
            x = np.random.choice([1, 0], p=[0.1, 0.9])  # mutation: with probability 0.1, shuffle this particle's entire tour
if x == 1:
np.random.shuffle(self.ptcl[i].X)
self.ptcl[i].SS_id()
self.ptcl[i].SS_gd(self.p_gd_X)
self.ptcl[i].new_V()
self.ptcl[i].new_X()
self.ptcl[i].P_id(self.city)
def simulate(self, sim_num):
for i in range(self.pN):
self.ptcl[i].initial(self.city)
self.p_gd_X = self.P_gd()
for i in range(sim_num):
self.one_simulate()
self.p_gd_X = self.P_gd()
self.p_gd_X = self.P_gd()
return self.p_gd_X
def P_gd(self):
P_gd = self.ptcl[0].p_id
self.no = 0
for i in range(self.pN):
if P_gd > self.ptcl[i].p_id:
P_gd = self.ptcl[i].p_id
self.no = i
return self.ptcl[self.no].p_id_X
```
```python
pso = PSO(30, 1000)
pso.initialize()
```
array([<__main__.particle instance at 0x11ed3e128>,
<__main__.particle instance at 0x11d53c3b0>,
<__main__.particle instance at 0x1205dcd88>,
<__main__.particle instance at 0x1205dcb48>,
<__main__.particle instance at 0x1205dcea8>,
<__main__.particle instance at 0x1205dc7a0>,
<__main__.particle instance at 0x1205dc3f8>,
<__main__.particle instance at 0x1205dc2d8>,
<__main__.particle instance at 0x1205dcc20>,
<__main__.particle instance at 0x1205dcab8>,
<__main__.particle instance at 0x1205dc758>,
<__main__.particle instance at 0x11f6c5878>,
<__main__.particle instance at 0x11ed02200>,
<__main__.particle instance at 0x11ed02758>,
<__main__.particle instance at 0x11ed02950>,
<__main__.particle instance at 0x11f6d1d88>,
<__main__.particle instance at 0x11ed1e950>,
<__main__.particle instance at 0x11ed1e5a8>,
<__main__.particle instance at 0x11d2a7a28>,
<__main__.particle instance at 0x11fd8a368>,
<__main__.particle instance at 0x11fd7e8c0>,
<__main__.particle instance at 0x11fd7e560>,
<__main__.particle instance at 0x11fd7ed40>,
<__main__.particle instance at 0x11fd7eea8>,
<__main__.particle instance at 0x11fd7e050>,
<__main__.particle instance at 0x11fd7ecb0>,
<__main__.particle instance at 0x11fd7e200>,
<__main__.particle instance at 0x11fd7e0e0>,
<__main__.particle instance at 0x11fd7e7a0>,
<__main__.particle instance at 0x119fd9560>,
<__main__.particle instance at 0x119fd95f0>,
<__main__.particle instance at 0x1205f4ea8>,
<__main__.particle instance at 0x1205f4638>,
<__main__.particle instance at 0x1205f4128>,
<__main__.particle instance at 0x1205f4320>,
<__main__.particle instance at 0x1205f4710>,
<__main__.particle instance at 0x1205f4098>,
<__main__.particle instance at 0x1205f4488>,
<__main__.particle instance at 0x1205f4950>,
<__main__.particle instance at 0x1205f4560>,
<__main__.particle instance at 0x1205f4b48>,
<__main__.particle instance at 0x1205f45f0>,
<__main__.particle instance at 0x1205f41b8>,
<__main__.particle instance at 0x1205f46c8>,
<__main__.particle instance at 0x1205f4c68>,
<__main__.particle instance at 0x1205f4908>,
<__main__.particle instance at 0x1205f4050>,
<__main__.particle instance at 0x1205f9b90>,
<__main__.particle instance at 0x1205f9050>,
<__main__.particle instance at 0x1205f9950>,
<__main__.particle instance at 0x1205f9560>,
<__main__.particle instance at 0x1205f9368>,
<__main__.particle instance at 0x1205f9d88>,
<__main__.particle instance at 0x1205f9e18>,
<__main__.particle instance at 0x1205f9128>,
<__main__.particle instance at 0x1205f9200>,
<__main__.particle instance at 0x1205f99e0>,
<__main__.particle instance at 0x1205f9e60>,
<__main__.particle instance at 0x1205f9fc8>,
<__main__.particle instance at 0x1205f9dd0>,
<__main__.particle instance at 0x1205f9710>,
<__main__.particle instance at 0x1205f97e8>,
<__main__.particle instance at 0x1205f9d40>,
<__main__.particle instance at 0x1205f9098>,
<__main__.particle instance at 0x11a3fbcf8>,
<__main__.particle instance at 0x11a3fb248>,
<__main__.particle instance at 0x11a3fb5a8>,
<__main__.particle instance at 0x11a3fb170>,
<__main__.particle instance at 0x11a3fbdd0>,
<__main__.particle instance at 0x11a3fbef0>,
<__main__.particle instance at 0x11a3fb1b8>,
<__main__.particle instance at 0x11a3fb200>,
<__main__.particle instance at 0x11a3fb878>,
<__main__.particle instance at 0x1205f7320>,
<__main__.particle instance at 0x1205f7cf8>,
<__main__.particle instance at 0x1205f79e0>,
<__main__.particle instance at 0x1205f7a70>,
<__main__.particle instance at 0x1205f7ab8>,
<__main__.particle instance at 0x1205f7368>,
<__main__.particle instance at 0x1205f7680>,
<__main__.particle instance at 0x1205f7c68>,
<__main__.particle instance at 0x1205f7b00>,
<__main__.particle instance at 0x1205f7998>,
<__main__.particle instance at 0x1205f7290>,
<__main__.particle instance at 0x1205f7248>,
<__main__.particle instance at 0x1205f70e0>,
<__main__.particle instance at 0x1205f7638>,
<__main__.particle instance at 0x1205f7dd0>,
<__main__.particle instance at 0x1205f7758>,
<__main__.particle instance at 0x1205f73f8>,
<__main__.particle instance at 0x1205f7f80>,
<__main__.particle instance at 0x1205c8518>,
<__main__.particle instance at 0x1205c8f38>,
<__main__.particle instance at 0x1205c85f0>,
<__main__.particle instance at 0x1205c87a0>,
<__main__.particle instance at 0x1205c8b00>,
<__main__.particle instance at 0x1205c84d0>,
<__main__.particle instance at 0x1205c8dd0>,
<__main__.particle instance at 0x1205c8bd8>,
<__main__.particle instance at 0x1205c8f80>,
<__main__.particle instance at 0x1205c8710>,
<__main__.particle instance at 0x1205c8758>,
<__main__.particle instance at 0x1205c8e18>,
<__main__.particle instance at 0x1205c8c68>,
<__main__.particle instance at 0x1205c8128>,
<__main__.particle instance at 0x1205c83b0>,
<__main__.particle instance at 0x1205c8200>,
<__main__.particle instance at 0x1205c8ab8>,
<__main__.particle instance at 0x1205c8878>,
<__main__.particle instance at 0x11a3fc560>,
<__main__.particle instance at 0x11a3fc2d8>,
<__main__.particle instance at 0x11a3fc170>,
<__main__.particle instance at 0x1205c9d88>,
<__main__.particle instance at 0x1205c9488>,
<__main__.particle instance at 0x1205c9878>,
<__main__.particle instance at 0x1205c9e60>,
<__main__.particle instance at 0x1205c9290>,
<__main__.particle instance at 0x1205c9ef0>,
<__main__.particle instance at 0x1205c93b0>,
<__main__.particle instance at 0x1205c9b90>,
<__main__.particle instance at 0x1205c9ab8>,
<__main__.particle instance at 0x1205c9b48>,
<__main__.particle instance at 0x1205c9f80>,
<__main__.particle instance at 0x1205c9a70>,
<__main__.particle instance at 0x1205c95a8>,
<__main__.particle instance at 0x1205c9cb0>,
<__main__.particle instance at 0x1205c9ea8>,
<__main__.particle instance at 0x1205c9560>,
<__main__.particle instance at 0x1205c93f8>,
<__main__.particle instance at 0x1205c9680>,
<__main__.particle instance at 0x1205ca638>,
<__main__.particle instance at 0x1205ca5a8>,
<__main__.particle instance at 0x1205ca5f0>,
<__main__.particle instance at 0x1205ca320>,
<__main__.particle instance at 0x1205cabd8>,
<__main__.particle instance at 0x1205ca950>,
<__main__.particle instance at 0x1205ca248>,
<__main__.particle instance at 0x1205ca200>,
<__main__.particle instance at 0x1205ca290>,
<__main__.particle instance at 0x1205cae60>,
<__main__.particle instance at 0x1205ca758>,
<__main__.particle instance at 0x1205caa28>,
<__main__.particle instance at 0x1205cadd0>,
<__main__.particle instance at 0x1205ca3f8>,
<__main__.particle instance at 0x1205ca908>,
<__main__.particle instance at 0x1205cab00>,
<__main__.particle instance at 0x1205ca560>,
<__main__.particle instance at 0x1205ed2d8>,
<__main__.particle instance at 0x1205ed8c0>,
<__main__.particle instance at 0x1205edfc8>,
<__main__.particle instance at 0x1205eda28>,
<__main__.particle instance at 0x1205ed950>,
<__main__.particle instance at 0x1205ed248>,
<__main__.particle instance at 0x1205ed050>,
<__main__.particle instance at 0x1205ed4d0>,
<__main__.particle instance at 0x1205ed638>,
<__main__.particle instance at 0x1205edab8>,
<__main__.particle instance at 0x1205ed1b8>,
<__main__.particle instance at 0x1205edbd8>,
<__main__.particle instance at 0x1205ed878>,
<__main__.particle instance at 0x1205ed098>,
<__main__.particle instance at 0x1205ed518>,
<__main__.particle instance at 0x1205ede18>,
<__main__.particle instance at 0x1205edef0>,
<__main__.particle instance at 0x1205edcf8>,
<__main__.particle instance at 0x119f53ab8>,
<__main__.particle instance at 0x119f533f8>,
<__main__.particle instance at 0x119f53d40>,
<__main__.particle instance at 0x119f53c68>,
<__main__.particle instance at 0x119f53950>,
<__main__.particle instance at 0x119f53b48>,
<__main__.particle instance at 0x119f53908>,
<__main__.particle instance at 0x119f53a28>,
<__main__.particle instance at 0x119f53a70>,
<__main__.particle instance at 0x119f53518>,
<__main__.particle instance at 0x119f537a0>,
<__main__.particle instance at 0x119f53320>,
<__main__.particle instance at 0x1205d7440>,
<__main__.particle instance at 0x1205d7560>,
<__main__.particle instance at 0x1205d7170>,
<__main__.particle instance at 0x1205d7758>,
<__main__.particle instance at 0x1205d7518>,
<__main__.particle instance at 0x1205d7d40>,
<__main__.particle instance at 0x1205d7c68>,
<__main__.particle instance at 0x1205d7b90>,
<__main__.particle instance at 0x1205d7b48>,
<__main__.particle instance at 0x1205d7710>,
<__main__.particle instance at 0x1205d7998>,
<__main__.particle instance at 0x1205d7638>,
<__main__.particle instance at 0x1205d74d0>,
<__main__.particle instance at 0x1205d72d8>,
<__main__.particle instance at 0x1205d7dd0>,
<__main__.particle instance at 0x1205d7f80>,
<__main__.particle instance at 0x1205d77e8>,
<__main__.particle instance at 0x119ec1d88>,
<__main__.particle instance at 0x119ec1488>,
<__main__.particle instance at 0x1205ea5f0>,
<__main__.particle instance at 0x1205ea7e8>,
<__main__.particle instance at 0x1205ea638>,
<__main__.particle instance at 0x1205ea248>,
<__main__.particle instance at 0x1205eae18>,
<__main__.particle instance at 0x1205eaf80>,
<__main__.particle instance at 0x1205ea830>,
<__main__.particle instance at 0x1205ead40>,
<__main__.particle instance at 0x1205eab90>,
<__main__.particle instance at 0x1205eac68>,
<__main__.particle instance at 0x1205ea320>,
<__main__.particle instance at 0x1205ea3f8>,
<__main__.particle instance at 0x1205ea4d0>,
<__main__.particle instance at 0x1205eacb0>,
<__main__.particle instance at 0x1205ea128>,
<__main__.particle instance at 0x1205eacf8>,
<__main__.particle instance at 0x1205ea710>,
<__main__.particle instance at 0x1205d43b0>,
<__main__.particle instance at 0x1205d4200>,
<__main__.particle instance at 0x1205d4170>,
<__main__.particle instance at 0x1205d4cf8>,
<__main__.particle instance at 0x1205d4d40>,
<__main__.particle instance at 0x1205d4f80>,
<__main__.particle instance at 0x1205d4758>,
<__main__.particle instance at 0x1205d4320>,
<__main__.particle instance at 0x1205d4560>,
<__main__.particle instance at 0x1205d48c0>,
<__main__.particle instance at 0x1205d4ef0>,
<__main__.particle instance at 0x1205d4fc8>,
<__main__.particle instance at 0x1205d4878>,
<__main__.particle instance at 0x1205d4710>,
<__main__.particle instance at 0x1205d4d88>,
<__main__.particle instance at 0x1205d4bd8>,
<__main__.particle instance at 0x1205d4368>,
<__main__.particle instance at 0x119f50758>,
<__main__.particle instance at 0x119f50ab8>,
<__main__.particle instance at 0x119f507a0>,
<__main__.particle instance at 0x119f50bd8>,
<__main__.particle instance at 0x119f50ea8>,
<__main__.particle instance at 0x119f505f0>,
<__main__.particle instance at 0x119f500e0>,
<__main__.particle instance at 0x119f50878>,
<__main__.particle instance at 0x119f503f8>,
<__main__.particle instance at 0x119f507e8>,
<__main__.particle instance at 0x119f50560>,
<__main__.particle instance at 0x119f50d88>,
<__main__.particle instance at 0x119f506c8>,
<__main__.particle instance at 0x1205d5908>,
<__main__.particle instance at 0x1205d5b00>,
<__main__.particle instance at 0x1205d58c0>,
<__main__.particle instance at 0x1205d5758>,
<__main__.particle instance at 0x1205d53f8>,
<__main__.particle instance at 0x1205d5368>,
<__main__.particle instance at 0x1205d5638>,
<__main__.particle instance at 0x1205d5680>,
<__main__.particle instance at 0x1205d5878>,
<__main__.particle instance at 0x1205d5128>,
<__main__.particle instance at 0x1205d5d40>,
<__main__.particle instance at 0x1205d5e18>,
<__main__.particle instance at 0x1205d5ef0>,
<__main__.particle instance at 0x1205d51b8>,
<__main__.particle instance at 0x1205d5290>,
<__main__.particle instance at 0x1205d5b48>,
<__main__.particle instance at 0x1205d5320>,
<__main__.particle instance at 0x1205d5710>,
<__main__.particle instance at 0x1205ee440>,
<__main__.particle instance at 0x1205ee998>,
<__main__.particle instance at 0x1205ee368>,
<__main__.particle instance at 0x1205eef38>,
<__main__.particle instance at 0x1205ee5a8>,
<__main__.particle instance at 0x1205ee518>,
<__main__.particle instance at 0x1205ee950>,
<__main__.particle instance at 0x1205eee18>,
<__main__.particle instance at 0x1205ee908>,
<__main__.particle instance at 0x1205ee050>,
<__main__.particle instance at 0x1205ee128>,
<__main__.particle instance at 0x1205ee878>,
<__main__.particle instance at 0x1205ee6c8>,
<__main__.particle instance at 0x1205ee170>,
<__main__.particle instance at 0x1205ee3f8>,
<__main__.particle instance at 0x1205eec68>,
<__main__.particle instance at 0x1205ee4d0>,
<__main__.particle instance at 0x11fa85dd0>,
<__main__.particle instance at 0x11fa85cb0>,
<__main__.particle instance at 0x11fa85ea8>,
<__main__.particle instance at 0x11fa853f8>,
<__main__.particle instance at 0x11fa85a28>,
<__main__.particle instance at 0x11fa85fc8>,
<__main__.particle instance at 0x11fa852d8>,
<__main__.particle instance at 0x11fa855f0>,
<__main__.particle instance at 0x11fa85b00>,
<__main__.particle instance at 0x11fa85b48>,
<__main__.particle instance at 0x11fa85878>,
<__main__.particle instance at 0x11fa85c20>,
<__main__.particle instance at 0x11d2fa758>,
<__main__.particle instance at 0x11d2fae60>,
<__main__.particle instance at 0x11d2fa998>,
<__main__.particle instance at 0x11d2fa5f0>,
<__main__.particle instance at 0x11d2fa170>,
<__main__.particle instance at 0x11d2fa320>,
<__main__.particle instance at 0x11d2fab48>,
<__main__.particle instance at 0x11d2fac20>,
<__main__.particle instance at 0x11d2faef0>,
<__main__.particle instance at 0x11d2facb0>,
<__main__.particle instance at 0x1205facb0>,
<__main__.particle instance at 0x1205fa050>,
<__main__.particle instance at 0x1205fa3f8>,
<__main__.particle instance at 0x1205fa950>,
<__main__.particle instance at 0x1205faa70>,
<__main__.particle instance at 0x1205fa878>,
<__main__.particle instance at 0x1205fac20>,
<__main__.particle instance at 0x1205fa1b8>,
<__main__.particle instance at 0x1205fa2d8>,
<__main__.particle instance at 0x1205fa248>,
<__main__.particle instance at 0x1205faf38>,
<__main__.particle instance at 0x1205fa518>,
<__main__.particle instance at 0x1205fa638>,
<__main__.particle instance at 0x1205fa710>,
<__main__.particle instance at 0x1205fa998>,
<__main__.particle instance at 0x1205fabd8>,
<__main__.particle instance at 0x1205fa440>,
<__main__.particle instance at 0x1205fa560>,
<__main__.particle instance at 0x11fa86b48>,
<__main__.particle instance at 0x11fa869e0>,
<__main__.particle instance at 0x11fa86440>,
<__main__.particle instance at 0x11fa865f0>,
<__main__.particle instance at 0x11fa86ea8>,
<__main__.particle instance at 0x11fa86170>,
<__main__.particle instance at 0x11fa86d40>,
<__main__.particle instance at 0x1205fc830>,
<__main__.particle instance at 0x1205fcb00>,
<__main__.particle instance at 0x1205fc998>,
<__main__.particle instance at 0x1205fc0e0>,
<__main__.particle instance at 0x1205fc518>,
<__main__.particle instance at 0x1205fc7a0>,
<__main__.particle instance at 0x1205fc9e0>,
<__main__.particle instance at 0x1205fce18>,
<__main__.particle instance at 0x1205fc098>,
<__main__.particle instance at 0x1205fc5a8>,
<__main__.particle instance at 0x1205fc4d0>,
<__main__.particle instance at 0x1205fcd88>,
<__main__.particle instance at 0x1205fc950>,
<__main__.particle instance at 0x1205fc488>,
<__main__.particle instance at 0x1205fc560>,
<__main__.particle instance at 0x1205fc320>,
<__main__.particle instance at 0x1205fc170>,
<__main__.particle instance at 0x1205c2fc8>,
<__main__.particle instance at 0x1205c2dd0>,
<__main__.particle instance at 0x1205c24d0>,
<__main__.particle instance at 0x1205c2488>,
<__main__.particle instance at 0x1205c2d88>,
<__main__.particle instance at 0x1205c27e8>,
<__main__.particle instance at 0x1205c2a28>,
<__main__.particle instance at 0x1205c2998>,
<__main__.particle instance at 0x1205c2b48>,
<__main__.particle instance at 0x1205c2908>,
<__main__.particle instance at 0x1205c2b90>,
<__main__.particle instance at 0x1205c28c0>,
<__main__.particle instance at 0x1205c23f8>,
<__main__.particle instance at 0x1205c2248>,
<__main__.particle instance at 0x1205c2680>,
<__main__.particle instance at 0x1205c2ea8>,
<__main__.particle instance at 0x11fd78e60>,
<__main__.particle instance at 0x11fd780e0>,
<__main__.particle instance at 0x11fd78c20>,
<__main__.particle instance at 0x11fd78c68>,
<__main__.particle instance at 0x11fd78b90>,
<__main__.particle instance at 0x11fd783f8>,
<__main__.particle instance at 0x11fd78320>,
<__main__.particle instance at 0x11fd78638>,
<__main__.particle instance at 0x11fd784d0>,
<__main__.particle instance at 0x11fd787e8>,
<__main__.particle instance at 0x11fd78200>,
<__main__.particle instance at 0x11fd78b00>,
<__main__.particle instance at 0x119b72098>,
<__main__.particle instance at 0x1205bea70>,
<__main__.particle instance at 0x1205bec68>,
<__main__.particle instance at 0x1205bec20>,
<__main__.particle instance at 0x1205bea28>,
<__main__.particle instance at 0x1205be878>,
<__main__.particle instance at 0x1205be200>,
<__main__.particle instance at 0x1205be8c0>,
<__main__.particle instance at 0x1205befc8>,
<__main__.particle instance at 0x1205be5a8>,
<__main__.particle instance at 0x1205be2d8>,
<__main__.particle instance at 0x1205bef80>,
<__main__.particle instance at 0x1205be518>,
<__main__.particle instance at 0x1205bef38>,
<__main__.particle instance at 0x1205be050>,
<__main__.particle instance at 0x1205bee60>,
<__main__.particle instance at 0x1205c3d88>,
<__main__.particle instance at 0x1205c35a8>,
<__main__.particle instance at 0x1205c3710>,
<__main__.particle instance at 0x1205c3950>,
<__main__.particle instance at 0x1205c3a28>,
<__main__.particle instance at 0x1205c3bd8>,
<__main__.particle instance at 0x1205c38c0>,
<__main__.particle instance at 0x1205c3440>,
<__main__.particle instance at 0x1205c3ea8>,
<__main__.particle instance at 0x1205c3e18>,
<__main__.particle instance at 0x1205c37e8>,
<__main__.particle instance at 0x1205c3758>,
<__main__.particle instance at 0x1205c32d8>,
<__main__.particle instance at 0x1205c3dd0>,
<__main__.particle instance at 0x1205c33b0>,
<__main__.particle instance at 0x1205c34d0>,
<__main__.particle instance at 0x1205c3170>,
<__main__.particle instance at 0x1205c36c8>,
<__main__.particle instance at 0x1205f8098>,
<__main__.particle instance at 0x1205f8710>,
<__main__.particle instance at 0x1205f8fc8>,
<__main__.particle instance at 0x1205f8e18>,
<__main__.particle instance at 0x1205f8368>,
<__main__.particle instance at 0x1205f8878>,
<__main__.particle instance at 0x1205f8ea8>,
<__main__.particle instance at 0x1205f80e0>,
<__main__.particle instance at 0x1205f8560>,
<__main__.particle instance at 0x1205f83f8>,
<__main__.particle instance at 0x1205f8248>,
<__main__.particle instance at 0x1205f8950>,
<__main__.particle instance at 0x1205f8a28>,
<__main__.particle instance at 0x1205f8b00>,
<__main__.particle instance at 0x1205f8050>,
<__main__.particle instance at 0x1205f8f38>,
<__main__.particle instance at 0x1205f8440>,
<__main__.particle instance at 0x11fd77560>,
<__main__.particle instance at 0x11fd775f0>,
<__main__.particle instance at 0x11fd77e18>,
<__main__.particle instance at 0x11fd77b00>,
<__main__.particle instance at 0x11fd77290>,
<__main__.particle instance at 0x11fd77a28>,
<__main__.particle instance at 0x11fd77d40>,
<__main__.particle instance at 0x11fd77bd8>,
<__main__.particle instance at 0x11fd77a70>,
<__main__.particle instance at 0x1205bf3b0>,
<__main__.particle instance at 0x1205bf290>,
<__main__.particle instance at 0x1205bf1b8>,
<__main__.particle instance at 0x1205bf680>,
<__main__.particle instance at 0x1205bf050>,
<__main__.particle instance at 0x1205bf4d0>,
<__main__.particle instance at 0x1205bf710>,
<__main__.particle instance at 0x1205bf7e8>,
<__main__.particle instance at 0x1205bfb90>,
<__main__.particle instance at 0x1205bfab8>,
<__main__.particle instance at 0x1205bf2d8>,
<__main__.particle instance at 0x1205bfc68>,
<__main__.particle instance at 0x1205bfa70>,
<__main__.particle instance at 0x1205bfd88>,
<__main__.particle instance at 0x1205bf758>,
<__main__.particle instance at 0x1205bf7a0>,
<__main__.particle instance at 0x1205bfea8>,
<__main__.particle instance at 0x1205bf440>,
<__main__.particle instance at 0x11f6a9ea8>,
<__main__.particle instance at 0x11f6a9ef0>,
<__main__.particle instance at 0x11f6a9998>,
<__main__.particle instance at 0x11f6a97e8>,
<__main__.particle instance at 0x11f6a9c20>,
<__main__.particle instance at 0x11f6a90e0>,
<__main__.particle instance at 0x11f6a9c68>,
<__main__.particle instance at 0x11f6a9290>,
<__main__.particle instance at 0x11f6a9128>,
<__main__.particle instance at 0x11f6a9560>,
<__main__.particle instance at 0x1205c0560>,
<__main__.particle instance at 0x1205c07e8>,
<__main__.particle instance at 0x1205c06c8>,
<__main__.particle instance at 0x1205c05a8>,
<__main__.particle instance at 0x1205c0638>,
<__main__.particle instance at 0x1205c03b0>,
<__main__.particle instance at 0x1205c0320>,
<__main__.particle instance at 0x1205c03f8>,
<__main__.particle instance at 0x1205c0950>,
<__main__.particle instance at 0x1205c0a28>,
<__main__.particle instance at 0x1205c0b48>,
<__main__.particle instance at 0x1205c0cf8>,
<__main__.particle instance at 0x1205c0b90>,
<__main__.particle instance at 0x1205c0e60>,
<__main__.particle instance at 0x1205c0f80>,
<__main__.particle instance at 0x1205c0290>,
<__main__.particle instance at 0x1205c0cb0>,
<__main__.particle instance at 0x11f6a87e8>,
<__main__.particle instance at 0x11f6a8830>,
<__main__.particle instance at 0x11f6a80e0>,
<__main__.particle instance at 0x11f6a8cb0>,
<__main__.particle instance at 0x11f6a8cf8>,
<__main__.particle instance at 0x11f6a8b48>,
<__main__.particle instance at 0x11f6a8b00>,
<__main__.particle instance at 0x11f6a8998>,
<__main__.particle instance at 0x11f6a8fc8>,
<__main__.particle instance at 0x11f6a8bd8>,
<__main__.particle instance at 0x11f6a8098>,
<__main__.particle instance at 0x11fd7b098>,
<__main__.particle instance at 0x11fd7bd40>,
<__main__.particle instance at 0x11fd7b1b8>,
<__main__.particle instance at 0x11fd7b710>,
<__main__.particle instance at 0x11fd7b518>,
<__main__.particle instance at 0x11fd7b0e0>,
<__main__.particle instance at 0x11fd7bc68>,
<__main__.particle instance at 0x11fd7bf80>,
<__main__.particle instance at 0x11ed192d8>,
<__main__.particle instance at 0x11ed194d0>,
<__main__.particle instance at 0x11ed19320>,
<__main__.particle instance at 0x11fd793b0>,
<__main__.particle instance at 0x11fd790e0>,
<__main__.particle instance at 0x11fd79488>,
<__main__.particle instance at 0x11fd795f0>,
<__main__.particle instance at 0x11fd79368>,
<__main__.particle instance at 0x11fd79bd8>,
<__main__.particle instance at 0x11fd79cb0>,
<__main__.particle instance at 0x11fd79dd0>,
<__main__.particle instance at 0x11fd79f80>,
<__main__.particle instance at 0x11fd79e18>,
<__main__.particle instance at 0x11fd791b8>,
<__main__.particle instance at 0x11fd79b48>,
<__main__.particle instance at 0x11ecf7830>,
<__main__.particle instance at 0x11ecf7cb0>,
<__main__.particle instance at 0x11fd75d88>,
<__main__.particle instance at 0x11fd75ea8>,
<__main__.particle instance at 0x11fd75f38>,
<__main__.particle instance at 0x11fd75fc8>,
<__main__.particle instance at 0x11fd9e560>,
<__main__.particle instance at 0x11fd9ecb0>,
<__main__.particle instance at 0x11fd9ed88>,
<__main__.particle instance at 0x11fd9eb00>,
<__main__.particle instance at 0x1205c14d0>,
<__main__.particle instance at 0x1205c1518>,
<__main__.particle instance at 0x1205c1e60>,
<__main__.particle instance at 0x1205c1f38>,
<__main__.particle instance at 0x1205c1710>,
<__main__.particle instance at 0x1205c1b90>,
<__main__.particle instance at 0x1205c1e18>,
<__main__.particle instance at 0x1205c1200>,
<__main__.particle instance at 0x1205c19e0>,
<__main__.particle instance at 0x1205c12d8>,
<__main__.particle instance at 0x1205c1488>,
<__main__.particle instance at 0x1205c13b0>,
<__main__.particle instance at 0x1205c1d88>,
<__main__.particle instance at 0x1205c1fc8>,
<__main__.particle instance at 0x1205c13f8>,
<__main__.particle instance at 0x1205c1f80>,
<__main__.particle instance at 0x1205bdc68>,
<__main__.particle instance at 0x1205bd320>,
<__main__.particle instance at 0x1205bdab8>,
<__main__.particle instance at 0x1205bdfc8>,
<__main__.particle instance at 0x1205bdb90>,
<__main__.particle instance at 0x1205bdcb0>,
<__main__.particle instance at 0x1205bd248>,
<__main__.particle instance at 0x1205bd3b0>,
<__main__.particle instance at 0x1205bd4d0>,
<__main__.particle instance at 0x1205bd5a8>,
<__main__.particle instance at 0x1205bd680>,
<__main__.particle instance at 0x1205bd758>,
<__main__.particle instance at 0x1205bd830>,
<__main__.particle instance at 0x1205bd908>,
<__main__.particle instance at 0x119f56098>,
<__main__.particle instance at 0x119f56ef0>,
<__main__.particle instance at 0x119f56440>,
<__main__.particle instance at 0x119f56638>,
<__main__.particle instance at 0x119f56710>,
<__main__.particle instance at 0x119f56e60>,
<__main__.particle instance at 0x119f56998>,
<__main__.particle instance at 0x119f567e8>,
<__main__.particle instance at 0x119f56dd0>,
<__main__.particle instance at 0x119f56560>,
<__main__.particle instance at 0x119f56368>,
<__main__.particle instance at 0x119f562d8>,
<__main__.particle instance at 0x11a582878>,
<__main__.particle instance at 0x11a582488>,
<__main__.particle instance at 0x11a582fc8>,
<__main__.particle instance at 0x1205ba3f8>,
<__main__.particle instance at 0x1205ba290>,
<__main__.particle instance at 0x1205babd8>,
<__main__.particle instance at 0x1205bab90>,
<__main__.particle instance at 0x1205bacb0>,
<__main__.particle instance at 0x1205badd0>,
<__main__.particle instance at 0x1205baea8>,
<__main__.particle instance at 0x1205baf80>,
<__main__.particle instance at 0x1205ba440>,
<__main__.particle instance at 0x1205ba518>,
<__main__.particle instance at 0x1205ba5f0>,
<__main__.particle instance at 0x1205ba6c8>,
<__main__.particle instance at 0x1205ba7a0>,
<__main__.particle instance at 0x1205ba878>,
<__main__.particle instance at 0x1205ba950>,
<__main__.particle instance at 0x1205baa28>,
<__main__.particle instance at 0x1205ba098>,
<__main__.particle instance at 0x1205ba170>,
<__main__.particle instance at 0x11a59de18>,
<__main__.particle instance at 0x11a59d1b8>,
<__main__.particle instance at 0x11a59df38>,
<__main__.particle instance at 0x11a59d368>,
<__main__.particle instance at 0x11a59d6c8>,
<__main__.particle instance at 0x11a576f80>,
<__main__.particle instance at 0x1205b7e18>,
<__main__.particle instance at 0x1205b75f0>,
<__main__.particle instance at 0x1205b7ea8>,
<__main__.particle instance at 0x1205b7f38>,
<__main__.particle instance at 0x1205b7758>,
<__main__.particle instance at 0x1205b77e8>,
<__main__.particle instance at 0x1205b78c0>,
<__main__.particle instance at 0x1205b7998>,
<__main__.particle instance at 0x1205b7a70>,
<__main__.particle instance at 0x1205b7b48>,
<__main__.particle instance at 0x1205b7c20>,
<__main__.particle instance at 0x1205b7cf8>,
<__main__.particle instance at 0x1205b70e0>,
<__main__.particle instance at 0x1205b71b8>,
<__main__.particle instance at 0x1205b7290>,
<__main__.particle instance at 0x1205b7368>,
<__main__.particle instance at 0x1205b7440>,
<__main__.particle instance at 0x1205b7518>,
<__main__.particle instance at 0x1205b6710>,
<__main__.particle instance at 0x1205b67e8>,
<__main__.particle instance at 0x1205b6638>,
<__main__.particle instance at 0x1205b6e60>,
<__main__.particle instance at 0x1205b6908>,
<__main__.particle instance at 0x1205b69e0>,
<__main__.particle instance at 0x1205b6ab8>,
<__main__.particle instance at 0x1205b6b90>,
<__main__.particle instance at 0x1205b6c68>,
<__main__.particle instance at 0x1205b6d40>,
<__main__.particle instance at 0x1205b6758>,
<__main__.particle instance at 0x1205b6f80>,
<__main__.particle instance at 0x1205b60e0>,
<__main__.particle instance at 0x1205b6200>,
<__main__.particle instance at 0x1205b62d8>,
<__main__.particle instance at 0x1205b63b0>,
<__main__.particle instance at 0x1205b6488>,
<__main__.particle instance at 0x1205b6560>,
<__main__.particle instance at 0x11ed5f680>,
<__main__.particle instance at 0x11ed5fcf8>,
<__main__.particle instance at 0x11ed5f638>,
<__main__.particle instance at 0x11ed5fa28>,
<__main__.particle instance at 0x11ed5f4d0>,
<__main__.particle instance at 0x11ed5f3f8>,
<__main__.particle instance at 0x11ed5fdd0>,
<__main__.particle instance at 0x11ed5f320>,
<__main__.particle instance at 0x11ed5ff38>,
<__main__.particle instance at 0x11ed5f8c0>,
<__main__.particle instance at 0x1205b5878>,
<__main__.particle instance at 0x1205b5830>,
<__main__.particle instance at 0x1205b57e8>,
<__main__.particle instance at 0x1205b5f80>,
<__main__.particle instance at 0x1205b5998>,
<__main__.particle instance at 0x1205b5a70>,
<__main__.particle instance at 0x1205b5b48>,
<__main__.particle instance at 0x1205b5c20>,
<__main__.particle instance at 0x1205b5cf8>,
<__main__.particle instance at 0x1205b5dd0>,
<__main__.particle instance at 0x1205b5ea8>,
<__main__.particle instance at 0x1205b51b8>,
<__main__.particle instance at 0x1205b52d8>,
<__main__.particle instance at 0x1205b53b0>,
<__main__.particle instance at 0x1205b5488>,
<__main__.particle instance at 0x1205b5560>,
<__main__.particle instance at 0x1205b5638>,
<__main__.particle instance at 0x1205b5710>,
<__main__.particle instance at 0x11d29df38>,
<__main__.particle instance at 0x11d29dc20>,
<__main__.particle instance at 0x11d29dd88>,
<__main__.particle instance at 0x11d29db48>,
<__main__.particle instance at 0x11d29d9e0>,
<__main__.particle instance at 0x11d29dfc8>,
<__main__.particle instance at 0x11d29d998>,
<__main__.particle instance at 0x1205fb4d0>,
<__main__.particle instance at 0x1205fb5a8>,
<__main__.particle instance at 0x1205fbcb0>,
<__main__.particle instance at 0x1205fb680>,
<__main__.particle instance at 0x1205fb6c8>,
<__main__.particle instance at 0x1205fbf80>,
<__main__.particle instance at 0x1205fbc20>,
<__main__.particle instance at 0x1205fb998>,
<__main__.particle instance at 0x1205fbd88>,
<__main__.particle instance at 0x1205fbe60>,
<__main__.particle instance at 0x1205fb638>,
<__main__.particle instance at 0x1205fbef0>,
<__main__.particle instance at 0x1205fb050>,
<__main__.particle instance at 0x1205fb128>,
<__main__.particle instance at 0x1205fb200>,
<__main__.particle instance at 0x1205fb2d8>,
<__main__.particle instance at 0x1205fb3b0>,
<__main__.particle instance at 0x1205fb878>,
<__main__.particle instance at 0x1205b1170>,
<__main__.particle instance at 0x1205b1128>,
<__main__.particle instance at 0x1205b17e8>,
<__main__.particle instance at 0x1205b1998>,
<__main__.particle instance at 0x1205b1440>,
<__main__.particle instance at 0x1205b1248>,
<__main__.particle instance at 0x1205b1950>,
<__main__.particle instance at 0x1205b1758>,
<__main__.particle instance at 0x1205b15a8>,
<__main__.particle instance at 0x1205b1680>,
<__main__.particle instance at 0x1205b19e0>,
<__main__.particle instance at 0x1205b1b48>,
<__main__.particle instance at 0x1205b1bd8>,
<__main__.particle instance at 0x1205b1cb0>,
<__main__.particle instance at 0x1205b1dd0>,
<__main__.particle instance at 0x1205b1ea8>,
<__main__.particle instance at 0x1205b1f80>,
<__main__.particle instance at 0x1205b0c20>,
<__main__.particle instance at 0x1205b0ef0>,
<__main__.particle instance at 0x1205b0cf8>,
<__main__.particle instance at 0x1205b0e60>,
<__main__.particle instance at 0x1205b0f80>,
<__main__.particle instance at 0x1205b0b00>,
<__main__.particle instance at 0x1205b0488>,
<__main__.particle instance at 0x1205b0560>,
<__main__.particle instance at 0x1205b05f0>,
<__main__.particle instance at 0x1205b06c8>,
<__main__.particle instance at 0x1205b07a0>,
<__main__.particle instance at 0x1205b0878>,
<__main__.particle instance at 0x1205b0950>,
<__main__.particle instance at 0x1205b0a28>,
<__main__.particle instance at 0x1205b0098>,
<__main__.particle instance at 0x1205b0170>,
<__main__.particle instance at 0x1205b0248>,
<__main__.particle instance at 0x1205b0320>,
<__main__.particle instance at 0x1205af518>,
<__main__.particle instance at 0x1205afc68>,
<__main__.particle instance at 0x1205afd88>,
<__main__.particle instance at 0x1205afe60>,
<__main__.particle instance at 0x1205aff38>,
<__main__.particle instance at 0x1205af4d0>,
<__main__.particle instance at 0x1205af5f0>,
<__main__.particle instance at 0x1205af710>,
<__main__.particle instance at 0x1205af7e8>,
<__main__.particle instance at 0x1205af8c0>,
<__main__.particle instance at 0x1205af998>], dtype=object)
```python
pso.simulate(10)
plot()
```
```python
pso.simulate(40)
plot()
```
The result is rather underwhelming.
Next, let's try adding a local mutation (crossover): with a small probability, two cities in a particle's tour are swapped.
```python
class PSO:
    def __init__(self, N, pN):
        self.N = N              # number of cities
        self.pN = pN            # number of particles
        self.city = TSP_map(N)  # TSP instance (class defined earlier)
    def initialize(self):
        # create pN particles, each with a random initial swap sequence V = [[a, b]]
        ptcl = np.array([])
        for i in range(self.pN):
            a = np.random.choice([j for j in range(self.N - 1)])
            b = np.random.choice([j for j in range(a, self.N)])
            V = [[a, b]]
            ptcl = np.append(ptcl, particle(i, V, self.N, np.random.uniform(0.5, 1.0), np.random.uniform(), np.random.uniform()))
        self.ptcl = ptcl
        return self.ptcl
    def one_simulate(self):
        for i in range(self.pN):
            # local mutation: with probability 0.1, swap two randomly chosen cities
            x = np.random.choice([1, 0], p=[0.1, 0.9])
            if x == 1:
                # sample the two positions directly, without reusing the loop index i
                j = np.random.choice(self.N)
                k = np.random.choice(self.N)
                a = self.ptcl[i].X[j]
                b = self.ptcl[i].X[k]
                self.ptcl[i].X[j] = b
                self.ptcl[i].X[k] = a
            # usual particle updates (methods defined in the particle class earlier)
            self.ptcl[i].SS_id()
            self.ptcl[i].SS_gd(self.p_gd_X)
            self.ptcl[i].new_V()
            self.ptcl[i].new_X()
            self.ptcl[i].P_id(self.city)
    def simulate(self, sim_num):
        for i in range(self.pN):
            self.ptcl[i].initial(self.city)
        self.p_gd_X = self.P_gd()
        for i in range(sim_num):
            self.one_simulate()
            self.p_gd_X = self.P_gd()
        self.p_gd_X = self.P_gd()
        return self.p_gd_X
    def P_gd(self):
        # global best: tour of the particle whose personal-best length is smallest
        P_gd = self.ptcl[0].p_id
        self.no = 0
        for i in range(self.pN):
            if P_gd > self.ptcl[i].p_id:
                P_gd = self.ptcl[i].p_id
                self.no = i
        return self.ptcl[self.no].p_id_X
```
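For clarity, here is a minimal standalone sketch of that two-city swap mutation (the name `swap_mutation` is illustrative and not part of the classes above; unlike the in-place version inside `one_simulate`, it returns a new tour and forces the two positions to differ):

```python
import numpy as np

def swap_mutation(tour, rng=np.random):
    # pick two distinct positions and exchange the cities they hold
    i, j = rng.choice(len(tour), size=2, replace=False)
    mutated = list(tour)
    mutated[i], mutated[j] = mutated[j], mutated[i]
    return mutated

# example: mutate a 10-city tour 0..9
print(swap_mutation(range(10)))
```

Applied with a small probability, as in `one_simulate`, this keeps most tours intact while occasionally kicking a particle out of a local optimum.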
```python
pso = PSO(30, 1000)
pso.initialize()
```
    array([<__main__.particle instance at 0x1205ec098>, ...,
           <__main__.particle instance at 0x11faa28c0>,
<__main__.particle instance at 0x11faa2c20>,
<__main__.particle instance at 0x11faa2bd8>,
<__main__.particle instance at 0x11faa2f80>,
<__main__.particle instance at 0x11faa2e18>,
<__main__.particle instance at 0x11faa2098>,
<__main__.particle instance at 0x11faa2d88>,
<__main__.particle instance at 0x11faa2560>,
<__main__.particle instance at 0x11faa2680>,
<__main__.particle instance at 0x11faa2998>,
<__main__.particle instance at 0x11faa2d40>,
<__main__.particle instance at 0x11faa25f0>,
<__main__.particle instance at 0x11faa25a8>,
<__main__.particle instance at 0x11faa2ef0>,
<__main__.particle instance at 0x11e649a70>,
<__main__.particle instance at 0x11e649950>,
<__main__.particle instance at 0x11e649d88>,
<__main__.particle instance at 0x11e649c20>,
<__main__.particle instance at 0x11e649290>,
<__main__.particle instance at 0x11e649488>,
<__main__.particle instance at 0x11e6493b0>,
<__main__.particle instance at 0x12057f5f0>,
<__main__.particle instance at 0x12057fdd0>,
<__main__.particle instance at 0x12057f830>,
<__main__.particle instance at 0x12057ff80>,
<__main__.particle instance at 0x12057f8c0>,
<__main__.particle instance at 0x12057f950>,
<__main__.particle instance at 0x12057f368>,
<__main__.particle instance at 0x12057fc68>,
<__main__.particle instance at 0x12057fb00>,
<__main__.particle instance at 0x12057f680>,
<__main__.particle instance at 0x12057fa28>,
<__main__.particle instance at 0x12057f3b0>,
<__main__.particle instance at 0x12057f4d0>,
<__main__.particle instance at 0x12057fd88>,
<__main__.particle instance at 0x11d5d15f0>,
<__main__.particle instance at 0x11d5d1ef0>,
<__main__.particle instance at 0x11d5d14d0>,
<__main__.particle instance at 0x11d5d13b0>,
<__main__.particle instance at 0x11d5d1320>,
<__main__.particle instance at 0x11d5d1e18>,
<__main__.particle instance at 0x11d5d17a0>,
<__main__.particle instance at 0x11d5d1290>,
<__main__.particle instance at 0x11d5d1fc8>,
<__main__.particle instance at 0x11d5d19e0>,
<__main__.particle instance at 0x11d5d17e8>,
<__main__.particle instance at 0x11d5d1908>,
<__main__.particle instance at 0x119ebf0e0>,
<__main__.particle instance at 0x119ebfa70>,
<__main__.particle instance at 0x119fda200>,
<__main__.particle instance at 0x119fdad40>,
<__main__.particle instance at 0x1202a4b90>,
<__main__.particle instance at 0x1202a4908>,
<__main__.particle instance at 0x1202a4368>,
<__main__.particle instance at 0x1202a4128>,
<__main__.particle instance at 0x1202a4cf8>,
<__main__.particle instance at 0x1202a45a8>,
<__main__.particle instance at 0x1202a4290>,
<__main__.particle instance at 0x1202a41b8>,
<__main__.particle instance at 0x1202a4a70>,
<__main__.particle instance at 0x1202a45f0>,
<__main__.particle instance at 0x1202a4878>,
<__main__.particle instance at 0x1202a4098>,
<__main__.particle instance at 0x1202a4c68>,
<__main__.particle instance at 0x1202a4cb0>,
<__main__.particle instance at 0x1202a4950>,
<__main__.particle instance at 0x1202fa680>,
<__main__.particle instance at 0x1202fa758>,
<__main__.particle instance at 0x1202fa7e8>,
<__main__.particle instance at 0x1202fad40>,
<__main__.particle instance at 0x1202fa290>,
<__main__.particle instance at 0x1202fafc8>,
<__main__.particle instance at 0x1202faf80>,
<__main__.particle instance at 0x1202fa248>,
<__main__.particle instance at 0x1202faea8>,
<__main__.particle instance at 0x1202fae60>,
<__main__.particle instance at 0x1202fa128>,
<__main__.particle instance at 0x1202fa368>,
<__main__.particle instance at 0x1202fabd8>,
<__main__.particle instance at 0x11ed5e998>,
<__main__.particle instance at 0x1202f8c20>,
<__main__.particle instance at 0x1202f8a28>,
<__main__.particle instance at 0x1202f8908>,
<__main__.particle instance at 0x1202f85a8>,
<__main__.particle instance at 0x1202f8998>,
<__main__.particle instance at 0x1202f8758>,
<__main__.particle instance at 0x1202f87a0>,
<__main__.particle instance at 0x1202f8f80>,
<__main__.particle instance at 0x1202f82d8>,
<__main__.particle instance at 0x1202f8440>,
<__main__.particle instance at 0x1202f8d40>,
<__main__.particle instance at 0x1202f81b8>,
<__main__.particle instance at 0x1202f8830>,
<__main__.particle instance at 0x1202f8f38>,
<__main__.particle instance at 0x1202f84d0>,
<__main__.particle instance at 0x11fda75a8>,
<__main__.particle instance at 0x11fda7f38>,
<__main__.particle instance at 0x11fda7c68>,
<__main__.particle instance at 0x11fda7ab8>,
<__main__.particle instance at 0x11fda70e0>,
<__main__.particle instance at 0x11fda7cb0>,
<__main__.particle instance at 0x11fd9e098>,
<__main__.particle instance at 0x11fd9ef80>,
<__main__.particle instance at 0x119f16128>,
<__main__.particle instance at 0x119f16d40>,
<__main__.particle instance at 0x119f16200>,
<__main__.particle instance at 0x119f16290>,
<__main__.particle instance at 0x119f161b8>,
<__main__.particle instance at 0x119f16bd8>,
<__main__.particle instance at 0x119f162d8>,
<__main__.particle instance at 0x119f168c0>,
<__main__.particle instance at 0x119f16a70>,
<__main__.particle instance at 0x120555b00>,
<__main__.particle instance at 0x120555638>,
<__main__.particle instance at 0x119f66bd8>,
<__main__.particle instance at 0x119f66710>,
<__main__.particle instance at 0x119f66518>,
<__main__.particle instance at 0x119f66d40>,
<__main__.particle instance at 0x119f66e60>,
<__main__.particle instance at 0x11f6b0758>,
<__main__.particle instance at 0x11f6b0cb0>,
<__main__.particle instance at 0x119f1b170>,
<__main__.particle instance at 0x119f1b098>,
<__main__.particle instance at 0x119f1b518>,
<__main__.particle instance at 0x119f1b488>,
<__main__.particle instance at 0x119f1ba28>,
<__main__.particle instance at 0x119f1b830>,
<__main__.particle instance at 0x119f1b4d0>,
<__main__.particle instance at 0x119f1b7a0>,
<__main__.particle instance at 0x119f1bc20>,
<__main__.particle instance at 0x11f69e758>,
<__main__.particle instance at 0x1205e2998>,
<__main__.particle instance at 0x11f695a28>,
<__main__.particle instance at 0x1205f5878>,
<__main__.particle instance at 0x1205f8830>,
<__main__.particle instance at 0x1205f89e0>,
<__main__.particle instance at 0x1205f87e8>,
<__main__.particle instance at 0x1205f8758>,
<__main__.particle instance at 0x1205f8ab8>,
<__main__.particle instance at 0x1205f87a0>,
<__main__.particle instance at 0x1205f8d40>,
<__main__.particle instance at 0x119ef38c0>,
<__main__.particle instance at 0x11ecf8830>,
<__main__.particle instance at 0x11fda3050>,
<__main__.particle instance at 0x11fda3b90>,
<__main__.particle instance at 0x11fda37e8>,
<__main__.particle instance at 0x11fda3908>,
<__main__.particle instance at 0x11fac8f80>,
<__main__.particle instance at 0x11fac8248>,
<__main__.particle instance at 0x11fac8878>,
<__main__.particle instance at 0x11fac8830>,
<__main__.particle instance at 0x11fac85f0>,
<__main__.particle instance at 0x11fac8b00>,
<__main__.particle instance at 0x11fac86c8>,
<__main__.particle instance at 0x11fac8908>,
<__main__.particle instance at 0x11fac84d0>,
<__main__.particle instance at 0x11fac8bd8>,
<__main__.particle instance at 0x11fac8d40>,
<__main__.particle instance at 0x11fac8098>,
<__main__.particle instance at 0x11fac8e18>,
<__main__.particle instance at 0x11fac8c68>,
<__main__.particle instance at 0x11fac8f38>,
<__main__.particle instance at 0x11fac8440>,
<__main__.particle instance at 0x11fac89e0>,
<__main__.particle instance at 0x119f8af80>,
<__main__.particle instance at 0x119f8a710>,
<__main__.particle instance at 0x119f8a098>,
<__main__.particle instance at 0x119f8a758>,
<__main__.particle instance at 0x119f8af38>,
<__main__.particle instance at 0x119f8a290>,
<__main__.particle instance at 0x119f8a950>,
<__main__.particle instance at 0x119f8a878>,
<__main__.particle instance at 0x119f8a0e0>,
<__main__.particle instance at 0x11ed8ffc8>,
<__main__.particle instance at 0x11ed8fcf8>,
<__main__.particle instance at 0x11ed8f830>,
<__main__.particle instance at 0x11ed8fef0>,
<__main__.particle instance at 0x11ed8fb00>,
<__main__.particle instance at 0x11ed8fea8>,
<__main__.particle instance at 0x11ed8f998>,
<__main__.particle instance at 0x11ed8f170>,
<__main__.particle instance at 0x11ed8f1b8>,
<__main__.particle instance at 0x11ed8f560>,
<__main__.particle instance at 0x11ed8fa28>,
<__main__.particle instance at 0x1205eb488>,
<__main__.particle instance at 0x1205eb290>,
<__main__.particle instance at 0x1205ebea8>,
<__main__.particle instance at 0x11eca8830>,
<__main__.particle instance at 0x11eca8560>,
<__main__.particle instance at 0x11eca8200>,
<__main__.particle instance at 0x11eca85a8>,
<__main__.particle instance at 0x11eca85f0>,
<__main__.particle instance at 0x12053f0e0>,
<__main__.particle instance at 0x12053f6c8>,
<__main__.particle instance at 0x12053f4d0>,
<__main__.particle instance at 0x12053fcb0>,
<__main__.particle instance at 0x12053f3b0>,
<__main__.particle instance at 0x12053fa70>,
<__main__.particle instance at 0x12053ff80>,
<__main__.particle instance at 0x12053fc20>,
<__main__.particle instance at 0x12053fa28>,
<__main__.particle instance at 0x12053f170>,
<__main__.particle instance at 0x12053f200>,
<__main__.particle instance at 0x12053f830>,
<__main__.particle instance at 0x12053f488>,
<__main__.particle instance at 0x12053fab8>,
<__main__.particle instance at 0x1205fd1b8>,
<__main__.particle instance at 0x1205fd3b0>,
<__main__.particle instance at 0x1205fd950>,
<__main__.particle instance at 0x1205fd8c0>,
<__main__.particle instance at 0x11ed1f758>,
<__main__.particle instance at 0x119f46fc8>,
<__main__.particle instance at 0x119f46d40>,
<__main__.particle instance at 0x119f463f8>,
<__main__.particle instance at 0x119f467a0>,
<__main__.particle instance at 0x119f46dd0>,
<__main__.particle instance at 0x119f46a70>,
<__main__.particle instance at 0x119f46488>,
<__main__.particle instance at 0x119f469e0>,
<__main__.particle instance at 0x119f46ea8>,
<__main__.particle instance at 0x119f46e18>,
<__main__.particle instance at 0x11e6406c8>,
<__main__.particle instance at 0x1202d8bd8>,
<__main__.particle instance at 0x1202d8758>,
<__main__.particle instance at 0x1202d8440>,
<__main__.particle instance at 0x1202d8488>,
<__main__.particle instance at 0x11efceef0>,
<__main__.particle instance at 0x11efce518>,
<__main__.particle instance at 0x11efce128>,
<__main__.particle instance at 0x11efce998>,
<__main__.particle instance at 0x11efcef80>,
<__main__.particle instance at 0x11efce5f0>,
<__main__.particle instance at 0x11efce8c0>,
<__main__.particle instance at 0x11efce2d8>,
<__main__.particle instance at 0x11efceb90>,
<__main__.particle instance at 0x11efce3b0>,
<__main__.particle instance at 0x11efce878>,
<__main__.particle instance at 0x11efce710>,
<__main__.particle instance at 0x11efce0e0>,
<__main__.particle instance at 0x11efce368>,
<__main__.particle instance at 0x11efcea28>,
<__main__.particle instance at 0x11efce680>,
<__main__.particle instance at 0x1205599e0>,
<__main__.particle instance at 0x1205594d0>,
<__main__.particle instance at 0x120559a28>,
<__main__.particle instance at 0x120559f80>,
<__main__.particle instance at 0x120559cf8>,
<__main__.particle instance at 0x120559830>,
<__main__.particle instance at 0x120559cb0>,
<__main__.particle instance at 0x120559998>,
<__main__.particle instance at 0x1205598c0>,
<__main__.particle instance at 0x120559ea8>,
<__main__.particle instance at 0x120559128>,
<__main__.particle instance at 0x120559050>,
<__main__.particle instance at 0x120559e60>,
<__main__.particle instance at 0x120559680>,
<__main__.particle instance at 0x120559560>,
<__main__.particle instance at 0x11f6568c0>,
<__main__.particle instance at 0x11f656fc8>,
<__main__.particle instance at 0x11f6562d8>,
<__main__.particle instance at 0x11f6561b8>,
<__main__.particle instance at 0x11f656a28>,
<__main__.particle instance at 0x11f656368>,
<__main__.particle instance at 0x11f656c68>,
<__main__.particle instance at 0x11f6563f8>,
<__main__.particle instance at 0x11f6564d0>,
<__main__.particle instance at 0x11f656908>,
<__main__.particle instance at 0x11f656560>,
<__main__.particle instance at 0x11f656998>,
<__main__.particle instance at 0x11f656638>,
<__main__.particle instance at 0x11f6567e8>,
<__main__.particle instance at 0x1205f2cb0>,
<__main__.particle instance at 0x1202b6050>,
<__main__.particle instance at 0x1202b6bd8>,
<__main__.particle instance at 0x1202b6a28>,
<__main__.particle instance at 0x1202b6f38>,
<__main__.particle instance at 0x1202b6998>,
<__main__.particle instance at 0x1202b6b90>,
<__main__.particle instance at 0x1202b6248>,
<__main__.particle instance at 0x1202b6200>,
<__main__.particle instance at 0x1202b6488>,
<__main__.particle instance at 0x1202b68c0>,
<__main__.particle instance at 0x1202b6cb0>,
<__main__.particle instance at 0x1202b6d88>,
<__main__.particle instance at 0x1202b6fc8>,
<__main__.particle instance at 0x1202b67e8>,
<__main__.particle instance at 0x12029c560>,
<__main__.particle instance at 0x12029ccb0>,
<__main__.particle instance at 0x12029c290>,
<__main__.particle instance at 0x12029c098>,
<__main__.particle instance at 0x12029ce18>,
<__main__.particle instance at 0x12029c758>,
<__main__.particle instance at 0x12029cbd8>,
<__main__.particle instance at 0x12029c9e0>,
<__main__.particle instance at 0x12029c908>,
<__main__.particle instance at 0x12029c830>,
<__main__.particle instance at 0x12029c128>,
<__main__.particle instance at 0x12029cc68>,
<__main__.particle instance at 0x12029c368>,
<__main__.particle instance at 0x12029c2d8>,
<__main__.particle instance at 0x12029cea8>,
<__main__.particle instance at 0x11d26bd40>,
<__main__.particle instance at 0x11d26ba70>,
<__main__.particle instance at 0x11d26b248>,
<__main__.particle instance at 0x11d26bfc8>,
<__main__.particle instance at 0x11d26b878>,
<__main__.particle instance at 0x11d26b488>,
<__main__.particle instance at 0x11d26b8c0>,
<__main__.particle instance at 0x11d26b0e0>,
<__main__.particle instance at 0x11d26b830>,
<__main__.particle instance at 0x120534518>,
<__main__.particle instance at 0x1205349e0>,
<__main__.particle instance at 0x1205343f8>,
<__main__.particle instance at 0x120534830>,
<__main__.particle instance at 0x120534d88>,
<__main__.particle instance at 0x120534ef0>,
<__main__.particle instance at 0x120534200>,
<__main__.particle instance at 0x120534ea8>,
<__main__.particle instance at 0x120534fc8>,
<__main__.particle instance at 0x120534170>,
<__main__.particle instance at 0x1205345a8>,
<__main__.particle instance at 0x120534290>,
<__main__.particle instance at 0x120534cf8>,
<__main__.particle instance at 0x120534368>,
<__main__.particle instance at 0x1202ae5a8>,
<__main__.particle instance at 0x1202ae6c8>,
<__main__.particle instance at 0x1202aee18>,
<__main__.particle instance at 0x1202ae950>,
<__main__.particle instance at 0x1202aef80>,
<__main__.particle instance at 0x1202ae9e0>,
<__main__.particle instance at 0x1202ae200>,
<__main__.particle instance at 0x1202aea70>,
<__main__.particle instance at 0x1202ae5f0>,
<__main__.particle instance at 0x1202aef38>,
<__main__.particle instance at 0x1202ae1b8>,
<__main__.particle instance at 0x1202ae2d8>,
<__main__.particle instance at 0x1202aed88>,
<__main__.particle instance at 0x1202aee60>,
<__main__.particle instance at 0x1202ae170>,
<__main__.particle instance at 0x11faa04d0>,
<__main__.particle instance at 0x11faa0200>,
<__main__.particle instance at 0x11faa0320>,
<__main__.particle instance at 0x11faa0e18>,
<__main__.particle instance at 0x11faa0488>,
<__main__.particle instance at 0x11faa09e0>,
<__main__.particle instance at 0x11faa0518>,
<__main__.particle instance at 0x11faa06c8>,
<__main__.particle instance at 0x11faa07a0>,
<__main__.particle instance at 0x11faa0878>,
<__main__.particle instance at 0x11faa0098>,
<__main__.particle instance at 0x11faa0170>,
<__main__.particle instance at 0x11faa0a28>,
<__main__.particle instance at 0x11faa0950>,
<__main__.particle instance at 0x11faa0ef0>,
<__main__.particle instance at 0x11faa0c68>,
<__main__.particle instance at 0x119fc0cf8>,
<__main__.particle instance at 0x119fc0368>,
<__main__.particle instance at 0x119fc0710>,
<__main__.particle instance at 0x11fa83c68>,
<__main__.particle instance at 0x11fa83710>,
<__main__.particle instance at 0x11fa833b0>,
<__main__.particle instance at 0x11fa83320>,
<__main__.particle instance at 0x11fa83170>,
<__main__.particle instance at 0x11f692290>,
<__main__.particle instance at 0x1202f3488>,
<__main__.particle instance at 0x1202f3ab8>,
<__main__.particle instance at 0x1202f35a8>,
<__main__.particle instance at 0x1202f3518>,
<__main__.particle instance at 0x1202f3a70>,
<__main__.particle instance at 0x1202f3ef0>,
<__main__.particle instance at 0x1202f3680>,
<__main__.particle instance at 0x1202f3050>,
<__main__.particle instance at 0x1202f3f80>,
<__main__.particle instance at 0x1202f3998>,
<__main__.particle instance at 0x1202f3bd8>,
<__main__.particle instance at 0x1202f3830>,
<__main__.particle instance at 0x1202f3cf8>,
<__main__.particle instance at 0x1202f3638>,
<__main__.particle instance at 0x120560758>,
<__main__.particle instance at 0x120560f80>,
<__main__.particle instance at 0x120560488>,
<__main__.particle instance at 0x1205606c8>,
<__main__.particle instance at 0x120560bd8>,
<__main__.particle instance at 0x120560248>,
<__main__.particle instance at 0x120560680>,
<__main__.particle instance at 0x120560ab8>,
<__main__.particle instance at 0x120560ef0>,
<__main__.particle instance at 0x120560b48>,
<__main__.particle instance at 0x120560878>,
<__main__.particle instance at 0x120560368>,
<__main__.particle instance at 0x120560170>,
<__main__.particle instance at 0x120560320>,
<__main__.particle instance at 0x1205607e8>,
<__main__.particle instance at 0x1205b7638>,
<__main__.particle instance at 0x1205b7830>,
<__main__.particle instance at 0x1205b74d0>,
<__main__.particle instance at 0x1205b7dd0>,
<__main__.particle instance at 0x1205b73f8>,
<__main__.particle instance at 0x1205b7f80>,
<__main__.particle instance at 0x1205b7cb0>,
<__main__.particle instance at 0x1205b7710>,
<__main__.particle instance at 0x11e670488>,
<__main__.particle instance at 0x11e670b90>,
<__main__.particle instance at 0x11e670e60>,
<__main__.particle instance at 0x11e670170>,
<__main__.particle instance at 0x11e6705a8>,
<__main__.particle instance at 0x11d266cf8>,
<__main__.particle instance at 0x11d266830>,
<__main__.particle instance at 0x11d266b90>,
<__main__.particle instance at 0x11d266290>,
<__main__.particle instance at 0x11d2663f8>,
<__main__.particle instance at 0x11d2664d0>,
<__main__.particle instance at 0x11d266f38>,
<__main__.particle instance at 0x11d266050>,
<__main__.particle instance at 0x11faf0ef0>,
<__main__.particle instance at 0x120588440>,
<__main__.particle instance at 0x1205885f0>,
<__main__.particle instance at 0x1205884d0>,
<__main__.particle instance at 0x120588368>,
<__main__.particle instance at 0x120588ef0>,
<__main__.particle instance at 0x120588638>,
<__main__.particle instance at 0x120588a70>,
<__main__.particle instance at 0x1205880e0>,
<__main__.particle instance at 0x120588b00>,
<__main__.particle instance at 0x120588cf8>,
<__main__.particle instance at 0x120588950>,
<__main__.particle instance at 0x120588518>,
<__main__.particle instance at 0x1205882d8>,
<__main__.particle instance at 0x120588320>,
<__main__.particle instance at 0x119f44320>,
<__main__.particle instance at 0x119f44830>,
<__main__.particle instance at 0x119f44c20>,
<__main__.particle instance at 0x119f44ab8>,
<__main__.particle instance at 0x119f447a0>,
<__main__.particle instance at 0x119f44050>,
<__main__.particle instance at 0x119f444d0>,
<__main__.particle instance at 0x119f443b0>,
<__main__.particle instance at 0x119f44ef0>,
<__main__.particle instance at 0x119f44bd8>,
<__main__.particle instance at 0x119e00128>,
<__main__.particle instance at 0x119e007e8>,
<__main__.particle instance at 0x119ec3bd8>,
<__main__.particle instance at 0x119ec3b00>,
<__main__.particle instance at 0x119ec34d0>,
<__main__.particle instance at 0x119e53950>,
<__main__.particle instance at 0x119e533f8>,
<__main__.particle instance at 0x119e53f80>,
<__main__.particle instance at 0x119e537e8>,
<__main__.particle instance at 0x119ec03b0>,
<__main__.particle instance at 0x119ec07a0>,
<__main__.particle instance at 0x119ec05a8>,
<__main__.particle instance at 0x119f22758>,
<__main__.particle instance at 0x119f22320>,
<__main__.particle instance at 0x119f22fc8>,
<__main__.particle instance at 0x119f22b90>,
<__main__.particle instance at 0x119f223f8>,
<__main__.particle instance at 0x119f22ef0>,
<__main__.particle instance at 0x119f22b00>,
<__main__.particle instance at 0x119f22830>,
<__main__.particle instance at 0x119f22ab8>,
<__main__.particle instance at 0x119f22290>,
<__main__.particle instance at 0x119f22440>,
<__main__.particle instance at 0x119f22878>,
<__main__.particle instance at 0x1202c1d40>,
<__main__.particle instance at 0x1202c1ea8>,
<__main__.particle instance at 0x1202c14d0>,
<__main__.particle instance at 0x1202c15a8>,
<__main__.particle instance at 0x1202c1c20>,
<__main__.particle instance at 0x1202c1b90>,
<__main__.particle instance at 0x1202c1320>,
<__main__.particle instance at 0x1202c16c8>,
<__main__.particle instance at 0x1202c1d88>,
<__main__.particle instance at 0x1202c1248>,
<__main__.particle instance at 0x1202c11b8>,
<__main__.particle instance at 0x1202c1998>,
<__main__.particle instance at 0x1202c12d8>,
<__main__.particle instance at 0x1202c1488>,
<__main__.particle instance at 0x11ecbc3f8>,
<__main__.particle instance at 0x11ecbc998>,
<__main__.particle instance at 0x11ecbc128>,
<__main__.particle instance at 0x11ecbc9e0>,
<__main__.particle instance at 0x11ecbc6c8>,
<__main__.particle instance at 0x11ecbc950>,
<__main__.particle instance at 0x11ecbc7e8>,
<__main__.particle instance at 0x11ecbc440>,
<__main__.particle instance at 0x11ecbcd88>,
<__main__.particle instance at 0x11ecbc5a8>,
<__main__.particle instance at 0x11ecbc290>,
<__main__.particle instance at 0x11ecbcf38>,
<__main__.particle instance at 0x11ecbc878>], dtype=object)
```python
pso.simulate(10)
plot()
```
```python
pso.simulate(40)
plot()
```
This also got worse. The likely cause is that the best route found so far gets altered by the mutation step.
We therefore introduce an elitist strategy: the particle that currently holds the best result is exempted from mutation.
```python
class PSO:
    def __init__(self, N, pN):
        self.N = N                      # number of cities
        self.pN = pN                    # number of particles
        self.city = TSP_map(N)

    def initialize(self):
        # create pN particles, each with a random initial swap velocity [a, b]
        ptcl = np.array([])
        for i in range(self.pN):
            a = np.random.choice([j for j in range(self.N - 1)])
            b = np.random.choice([j for j in range(a, self.N)])
            V = [[a, b]]
            ptcl = np.append(ptcl, particle(i, V, self.N, np.random.uniform(0.5, 1.0),
                                            np.random.uniform(), np.random.uniform()))
        self.ptcl = ptcl
        return self.ptcl

    def one_simulate(self):
        for i in range(self.pN):
            # elitist strategy: the current global-best particle (index self.no)
            # is never mutated
            if i != self.no:
                x = np.random.choice([1, 0], p=[0.1, 0.9])
                if x == 1:
                    # swap two randomly chosen cities in this particle's route
                    # (j, k are drawn directly so the loop variable i is not clobbered)
                    j = np.random.choice(self.N)
                    k = np.random.choice(self.N)
                    self.ptcl[i].X[j], self.ptcl[i].X[k] = self.ptcl[i].X[k], self.ptcl[i].X[j]
            self.ptcl[i].SS_id()
            self.ptcl[i].SS_gd(self.p_gd_X)
            self.ptcl[i].new_V()
            self.ptcl[i].new_X()
            self.ptcl[i].P_id(self.city)

    def simulate(self, sim_num):
        for i in range(self.pN):
            self.ptcl[i].initial(self.city)
        self.p_gd_X = self.P_gd()
        for i in range(sim_num):
            self.one_simulate()
            self.p_gd_X = self.P_gd()
        self.p_gd_X = self.P_gd()
        return self.p_gd_X

    def P_gd(self):
        # find the particle with the shortest personal-best tour and remember
        # its index, so that one_simulate can skip it (elitism)
        P_gd = self.ptcl[0].p_id
        self.bestP = P_gd
        self.no = 0
        for i in range(self.pN):
            if P_gd > self.ptcl[i].p_id:
                P_gd = self.ptcl[i].p_id
                self.bestP = P_gd
                self.no = i
        return self.ptcl[self.no].p_id_X
```
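To make the effect of the `if i != self.no:` guard explicit, here is a minimal, self-contained sketch of the same elitist mutation rule. The names (`mutate_with_elitism`, `best_idx`, `p_mut`) are illustrative and do not come from the notebook's `particle`/`TSP_map` classes: the route held by the current global best is skipped, and every other route is swap-mutated with a small probability.

```python
import numpy as np

# Sketch of the elitist mutation rule used in PSO.one_simulate above:
# every route except the current global best (index best_idx) is
# swap-mutated with probability p_mut.
def mutate_with_elitism(routes, best_idx, p_mut=0.1):
    n_cities = len(routes[0])
    for idx, route in enumerate(routes):
        if idx == best_idx:
            continue                                  # elite route is never mutated
        if np.random.random() < p_mut:                # mutate with small probability
            j, k = np.random.randint(0, n_cities, size=2)
            route[j], route[k] = route[k], route[j]   # swap two cities
    return routes

# toy usage: three 5-city routes, route 0 is the current best
routes = [list(np.random.permutation(5)) for _ in range(3)]
mutate_with_elitism(routes, best_idx=0)
```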
```python
pso = PSO(30, 1000)
pso.initialize()
```
array([<__main__.particle instance at 0x121048e18>,
       ...
<__main__.particle instance at 0x121003ef0>,
<__main__.particle instance at 0x121003fc8>,
<__main__.particle instance at 0x12100f0e0>,
<__main__.particle instance at 0x12100f1b8>,
<__main__.particle instance at 0x12100f290>,
<__main__.particle instance at 0x12100f368>,
<__main__.particle instance at 0x12100f440>,
<__main__.particle instance at 0x12100f518>,
<__main__.particle instance at 0x12100f5f0>,
<__main__.particle instance at 0x12100f6c8>,
<__main__.particle instance at 0x12100f7a0>,
<__main__.particle instance at 0x12100f878>,
<__main__.particle instance at 0x12100f950>,
<__main__.particle instance at 0x12100fa28>,
<__main__.particle instance at 0x12100fb00>,
<__main__.particle instance at 0x12100fbd8>,
<__main__.particle instance at 0x12100fcb0>,
<__main__.particle instance at 0x12100fd88>,
<__main__.particle instance at 0x12100fe60>,
<__main__.particle instance at 0x12100ff38>,
<__main__.particle instance at 0x121012050>,
<__main__.particle instance at 0x121012128>,
<__main__.particle instance at 0x121012200>,
<__main__.particle instance at 0x1210122d8>,
<__main__.particle instance at 0x1210123b0>,
<__main__.particle instance at 0x121012488>,
<__main__.particle instance at 0x121012560>,
<__main__.particle instance at 0x121012638>,
<__main__.particle instance at 0x121012710>,
<__main__.particle instance at 0x1210127e8>,
<__main__.particle instance at 0x1210128c0>,
<__main__.particle instance at 0x121012998>,
<__main__.particle instance at 0x121012a70>,
<__main__.particle instance at 0x121012b48>,
<__main__.particle instance at 0x121012c20>,
<__main__.particle instance at 0x121012cf8>,
<__main__.particle instance at 0x121012dd0>,
<__main__.particle instance at 0x121012ea8>,
<__main__.particle instance at 0x121012f80>,
<__main__.particle instance at 0x11fe2e098>,
<__main__.particle instance at 0x11fe2e170>,
<__main__.particle instance at 0x11fe2e248>,
<__main__.particle instance at 0x11fe2e320>,
<__main__.particle instance at 0x11fe2e3f8>,
<__main__.particle instance at 0x11fe2e4d0>,
<__main__.particle instance at 0x11fe2e5a8>,
<__main__.particle instance at 0x11fe2e680>,
<__main__.particle instance at 0x11fe2e758>,
<__main__.particle instance at 0x11fe2e830>,
<__main__.particle instance at 0x11fe2e908>,
<__main__.particle instance at 0x11fe2e9e0>,
<__main__.particle instance at 0x11fe2eab8>,
<__main__.particle instance at 0x11fe2eb90>,
<__main__.particle instance at 0x11fe2ec68>,
<__main__.particle instance at 0x11fe2ed40>,
<__main__.particle instance at 0x11fe2ee18>,
<__main__.particle instance at 0x11fe2eef0>,
<__main__.particle instance at 0x11fe2efc8>,
<__main__.particle instance at 0x11fe300e0>,
<__main__.particle instance at 0x11fe301b8>,
<__main__.particle instance at 0x11fe30290>,
<__main__.particle instance at 0x11fe30368>,
<__main__.particle instance at 0x11fe30440>,
<__main__.particle instance at 0x11fe30518>,
<__main__.particle instance at 0x11fe305f0>,
<__main__.particle instance at 0x11fe306c8>,
<__main__.particle instance at 0x11fe307a0>,
<__main__.particle instance at 0x11fe30878>,
<__main__.particle instance at 0x11fe30950>,
<__main__.particle instance at 0x11fe30a28>,
<__main__.particle instance at 0x11fe30b00>,
<__main__.particle instance at 0x11fe30bd8>,
<__main__.particle instance at 0x11fe30cb0>,
<__main__.particle instance at 0x11fe30d88>,
<__main__.particle instance at 0x11fe30e60>,
<__main__.particle instance at 0x11fe30f38>,
<__main__.particle instance at 0x11fe3f050>,
<__main__.particle instance at 0x11fe3f128>,
<__main__.particle instance at 0x11fe3f200>,
<__main__.particle instance at 0x11fe3f2d8>,
<__main__.particle instance at 0x11fe3f3b0>,
<__main__.particle instance at 0x11fe3f488>,
<__main__.particle instance at 0x11fe3f560>,
<__main__.particle instance at 0x11fe3f638>,
<__main__.particle instance at 0x11fe3f710>,
<__main__.particle instance at 0x11fe3f7e8>,
<__main__.particle instance at 0x11fe3f8c0>,
<__main__.particle instance at 0x11fe3f998>,
<__main__.particle instance at 0x11fe3fa70>,
<__main__.particle instance at 0x11fe3fb48>,
<__main__.particle instance at 0x11fe3fc20>,
<__main__.particle instance at 0x11fe3fcf8>,
<__main__.particle instance at 0x11fe3fdd0>,
<__main__.particle instance at 0x11fe3fea8>,
<__main__.particle instance at 0x11fe3ff80>,
<__main__.particle instance at 0x11fe00098>,
<__main__.particle instance at 0x11fe00170>,
<__main__.particle instance at 0x11fe00248>,
<__main__.particle instance at 0x11fe00320>,
<__main__.particle instance at 0x11fe003f8>,
<__main__.particle instance at 0x11fe004d0>,
<__main__.particle instance at 0x11fe005a8>,
<__main__.particle instance at 0x11fe00680>,
<__main__.particle instance at 0x11fe00758>,
<__main__.particle instance at 0x11fe00830>,
<__main__.particle instance at 0x11fe00908>,
<__main__.particle instance at 0x11fe009e0>,
<__main__.particle instance at 0x11fe00ab8>,
<__main__.particle instance at 0x11fe00b90>,
<__main__.particle instance at 0x11fe00c68>,
<__main__.particle instance at 0x11fe00d40>,
<__main__.particle instance at 0x11fe00e18>,
<__main__.particle instance at 0x11fe00ef0>,
<__main__.particle instance at 0x11fe00fc8>,
<__main__.particle instance at 0x120de10e0>,
<__main__.particle instance at 0x120de11b8>,
<__main__.particle instance at 0x120de1290>,
<__main__.particle instance at 0x120de1368>,
<__main__.particle instance at 0x120de1440>,
<__main__.particle instance at 0x120de1518>,
<__main__.particle instance at 0x120de15f0>,
<__main__.particle instance at 0x120de16c8>,
<__main__.particle instance at 0x120de17a0>,
<__main__.particle instance at 0x120de1878>,
<__main__.particle instance at 0x120de1950>,
<__main__.particle instance at 0x120de1a28>,
<__main__.particle instance at 0x120de1b00>,
<__main__.particle instance at 0x120de1bd8>,
<__main__.particle instance at 0x120de1cb0>,
<__main__.particle instance at 0x120de1d88>,
<__main__.particle instance at 0x120de1e60>,
<__main__.particle instance at 0x120de1f38>,
<__main__.particle instance at 0x120df8050>,
<__main__.particle instance at 0x120df8128>,
<__main__.particle instance at 0x120df8200>,
<__main__.particle instance at 0x120df82d8>,
<__main__.particle instance at 0x120df83b0>,
<__main__.particle instance at 0x120df8488>,
<__main__.particle instance at 0x120df8560>,
<__main__.particle instance at 0x120df8638>,
<__main__.particle instance at 0x120df8710>,
<__main__.particle instance at 0x120df87e8>,
<__main__.particle instance at 0x120df88c0>,
<__main__.particle instance at 0x120df8998>,
<__main__.particle instance at 0x120df8a70>,
<__main__.particle instance at 0x120df8b48>,
<__main__.particle instance at 0x120df8c20>,
<__main__.particle instance at 0x120df8cf8>,
<__main__.particle instance at 0x120df8dd0>,
<__main__.particle instance at 0x120df8ea8>,
<__main__.particle instance at 0x120df8f80>,
<__main__.particle instance at 0x120dea098>,
<__main__.particle instance at 0x120dea170>,
<__main__.particle instance at 0x120dea248>,
<__main__.particle instance at 0x120dea320>,
<__main__.particle instance at 0x120dea3f8>,
<__main__.particle instance at 0x120dea4d0>,
<__main__.particle instance at 0x120dea5a8>,
<__main__.particle instance at 0x120dea680>,
<__main__.particle instance at 0x120dea758>,
<__main__.particle instance at 0x120dea830>,
<__main__.particle instance at 0x120dea908>,
<__main__.particle instance at 0x120dea9e0>,
<__main__.particle instance at 0x120deaab8>,
<__main__.particle instance at 0x120deab90>,
<__main__.particle instance at 0x120deac68>,
<__main__.particle instance at 0x120dead40>,
<__main__.particle instance at 0x120deae18>,
<__main__.particle instance at 0x120deaef0>,
<__main__.particle instance at 0x120deafc8>,
<__main__.particle instance at 0x120de00e0>,
<__main__.particle instance at 0x120de01b8>,
<__main__.particle instance at 0x120de0290>,
<__main__.particle instance at 0x120de0368>,
<__main__.particle instance at 0x120de0440>,
<__main__.particle instance at 0x120de0518>,
<__main__.particle instance at 0x120de05f0>,
<__main__.particle instance at 0x120de06c8>,
<__main__.particle instance at 0x120de07a0>,
<__main__.particle instance at 0x120de0878>,
<__main__.particle instance at 0x120de0950>,
<__main__.particle instance at 0x120de0a28>,
<__main__.particle instance at 0x120de0b00>,
<__main__.particle instance at 0x120de0bd8>,
<__main__.particle instance at 0x120de0cb0>,
<__main__.particle instance at 0x120de0d88>,
<__main__.particle instance at 0x120de0e60>,
<__main__.particle instance at 0x120de0f38>,
<__main__.particle instance at 0x120ed2050>,
<__main__.particle instance at 0x120ed2128>,
<__main__.particle instance at 0x120ed2200>,
<__main__.particle instance at 0x120ed22d8>,
<__main__.particle instance at 0x120ed23b0>,
<__main__.particle instance at 0x120ed2488>,
<__main__.particle instance at 0x120ed2560>,
<__main__.particle instance at 0x120ed2638>,
<__main__.particle instance at 0x120ed2710>,
<__main__.particle instance at 0x120ed27e8>,
<__main__.particle instance at 0x120ed28c0>,
<__main__.particle instance at 0x120ed2998>,
<__main__.particle instance at 0x120ed2a70>,
<__main__.particle instance at 0x120ed2b48>,
<__main__.particle instance at 0x120ed2c20>,
<__main__.particle instance at 0x120ed2cf8>,
<__main__.particle instance at 0x120ed2dd0>,
<__main__.particle instance at 0x120ed2ea8>,
<__main__.particle instance at 0x120ed2f80>,
<__main__.particle instance at 0x120ede098>,
<__main__.particle instance at 0x120ede170>,
<__main__.particle instance at 0x120ede248>,
<__main__.particle instance at 0x120ede320>,
<__main__.particle instance at 0x120ede3f8>,
<__main__.particle instance at 0x120ede4d0>,
<__main__.particle instance at 0x120ede5a8>,
<__main__.particle instance at 0x120ede680>,
<__main__.particle instance at 0x120ede758>,
<__main__.particle instance at 0x120ede830>,
<__main__.particle instance at 0x120ede908>,
<__main__.particle instance at 0x120ede9e0>,
<__main__.particle instance at 0x120edeab8>,
<__main__.particle instance at 0x120edeb90>,
<__main__.particle instance at 0x120edec68>,
<__main__.particle instance at 0x120eded40>,
<__main__.particle instance at 0x120edee18>,
<__main__.particle instance at 0x120edeef0>,
<__main__.particle instance at 0x120edefc8>,
<__main__.particle instance at 0x120eec0e0>,
<__main__.particle instance at 0x120eec1b8>,
<__main__.particle instance at 0x120eec290>,
<__main__.particle instance at 0x120eec368>,
<__main__.particle instance at 0x120eec440>,
<__main__.particle instance at 0x120eec518>,
<__main__.particle instance at 0x120eec5f0>,
<__main__.particle instance at 0x120eec6c8>,
<__main__.particle instance at 0x120eec7a0>,
<__main__.particle instance at 0x120eec878>,
<__main__.particle instance at 0x120eec950>,
<__main__.particle instance at 0x120eeca28>,
<__main__.particle instance at 0x120eecb00>,
<__main__.particle instance at 0x120eecbd8>,
<__main__.particle instance at 0x120eeccb0>,
<__main__.particle instance at 0x120eecd88>,
<__main__.particle instance at 0x120eece60>,
<__main__.particle instance at 0x120eecf38>,
<__main__.particle instance at 0x120ee5050>,
<__main__.particle instance at 0x120ee5128>,
<__main__.particle instance at 0x120ee5200>,
<__main__.particle instance at 0x120ee52d8>,
<__main__.particle instance at 0x120ee53b0>,
<__main__.particle instance at 0x120ee5488>,
<__main__.particle instance at 0x120ee5560>,
<__main__.particle instance at 0x120ee5638>,
<__main__.particle instance at 0x120ee5710>,
<__main__.particle instance at 0x120ee57e8>,
<__main__.particle instance at 0x120ee58c0>,
<__main__.particle instance at 0x120ee5998>,
<__main__.particle instance at 0x120ee5a70>,
<__main__.particle instance at 0x120ee5b48>], dtype=object)
```python
pso.simulate(10)
plot()
```
Let's look at the sum of the distances.
```python
pso.bestP
```
793.46463563700217
```python
pso.simulate(40)
plot()
```
```python
pso.bestP
```
730.70542834265984
It does not improve much. However, without mutation the swarm cannot escape local optima, so for now a crossover-style swap is introduced with a very small probability. The range of ω is also widened a little, again as a countermeasure against local optima. And since too many particles only makes the computation expensive, their number is halved.
```python
class PSO:
    def __init__(self, N, pN):
        self.N = N             # number of cities
        self.pN = pN           # number of particles
        self.city = TSP_map(N)
    def initialize(self):
        # each particle starts with one random swap [a, b] as its velocity and a
        # weight drawn from [0.4, 1.0) -- the widened ω range mentioned above
        ptcl = np.array([])
        for i in range(self.pN):
            a = np.random.randint(self.N - 1)
            b = np.random.randint(a, self.N)
            V = [[a, b]]
            ptcl = np.append(ptcl, particle(i, V, self.N, np.random.uniform(0.4, 1.0),
                                            np.random.uniform(), np.random.uniform()))
        self.ptcl = ptcl
        return self.ptcl
    def one_simulate(self):
        for i in range(self.pN):
            if i != self.no:  # leave the current global-best particle untouched
                # mutation: with probability 0.01 swap two random cities of the tour
                if np.random.choice([1, 0], p=[0.01, 0.99]) == 1:
                    j = np.random.randint(self.N)
                    k = np.random.randint(self.N)
                    self.ptcl[i].X[j], self.ptcl[i].X[k] = self.ptcl[i].X[k], self.ptcl[i].X[j]
                self.ptcl[i].SS_id()
                self.ptcl[i].SS_gd(self.p_gd_X)
                self.ptcl[i].new_V()
                self.ptcl[i].new_X()
                self.ptcl[i].P_id(self.city)
    def simulate(self, sim_num):
        for i in range(self.pN):
            self.ptcl[i].initial(self.city)
        self.p_gd_X = self.P_gd()
        for _ in range(sim_num):
            self.one_simulate()
            self.p_gd_X = self.P_gd()
        return self.p_gd_X
    def P_gd(self):
        # find the particle whose personal-best tour is the shortest
        P_gd = self.ptcl[0].p_id
        self.no = 0
        for i in range(self.pN):
            if P_gd > self.ptcl[i].p_id:
                P_gd = self.ptcl[i].p_id
                self.no = i
        self.bestP = P_gd  # record the best length even when particle 0 holds it
        return self.ptcl[self.no].p_id_X
```
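To see the newly added mutation step in isolation, here is a minimal stand-alone sketch of the same idea (the name `maybe_swap` and the `p_mut` parameter are illustrative only and are not part of the classes above):
```python
import numpy as np

def maybe_swap(tour, p_mut=0.01):
    # with probability p_mut, swap two randomly chosen positions of the tour (in place)
    if np.random.rand() < p_mut:
        j, k = np.random.randint(len(tour)), np.random.randint(len(tour))
        tour[j], tour[k] = tour[k], tour[j]
    return tour

tour = np.random.permutation(30)  # a random tour over 30 cities
maybe_swap(tour)                  # usually leaves the tour unchanged
```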
```python
pso = PSO(30, 500)
pso.initialize()
```
array([<__main__.particle instance at 0x11feac128>,
...,
<__main__.particle instance at 0x12187f5a8>], dtype=object)
```python
pso.simulate(50)
plot()
```
```python
pso.bestP
```
633.43042985412319
Even though the number of particles is smaller, the result improved dramatically. Let's also try it on a different city distribution.
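Before the single re-run below, here is a sketch of how the same comparison could be repeated over several freshly generated city layouts; it simply reuses the `PSO` class above and assumes, as the re-run suggests, that each new `PSO` instance draws its own random `TSP_map`:
```python
best_lengths = []
for trial in range(3):            # three independent random city layouts
    p = PSO(30, 500)
    p.initialize()
    p.simulate(50)
    best_lengths.append(p.bestP)  # best tour length found for this layout
print(best_lengths)
```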
```python
pso = PSO(30, 500)
pso.initialize()
```
array([<__main__.particle instance at 0x120e40f80>,
...,
<__main__.particle instance at 0x121963f80>,
<__main__.particle instance at 0x121963710>,
<__main__.particle instance at 0x121963830>,
<__main__.particle instance at 0x121963908>,
<__main__.particle instance at 0x1219639e0>,
<__main__.particle instance at 0x121963ab8>,
<__main__.particle instance at 0x121963b90>,
<__main__.particle instance at 0x121963c68>,
<__main__.particle instance at 0x121963d40>,
<__main__.particle instance at 0x1219630e0>,
<__main__.particle instance at 0x121963200>,
<__main__.particle instance at 0x1219632d8>,
<__main__.particle instance at 0x1219633b0>,
<__main__.particle instance at 0x121963488>,
<__main__.particle instance at 0x121963560>,
<__main__.particle instance at 0x121962710>,
<__main__.particle instance at 0x121962e60>,
<__main__.particle instance at 0x121962f38>,
<__main__.particle instance at 0x121962fc8>,
<__main__.particle instance at 0x1219627e8>,
<__main__.particle instance at 0x1219628c0>,
<__main__.particle instance at 0x121962998>,
<__main__.particle instance at 0x121962a70>,
<__main__.particle instance at 0x121962b48>,
<__main__.particle instance at 0x121962c20>,
<__main__.particle instance at 0x121962cf8>,
<__main__.particle instance at 0x121962050>,
<__main__.particle instance at 0x121962128>,
<__main__.particle instance at 0x121962200>,
<__main__.particle instance at 0x1219622d8>,
<__main__.particle instance at 0x1219623b0>,
<__main__.particle instance at 0x121962488>,
<__main__.particle instance at 0x121962560>,
<__main__.particle instance at 0x121961d88>], dtype=object)
```python
pso.simulate(50)
plot()
```
```python
pso.bestP
```
722.75511924124783
Compared with the other runs, this result is also quite good.
Next, we increase the amount of computation while visualizing how the total distance evolves.
```python
pso = PSO(30, 500)
pso.initialize()
```
    array([<__main__.particle instance at 0x122aab7e8>, ...,
           <__main__.particle instance at 0x122b56440>], dtype=object)
```python
p = []
for i in range(10):
pso.simulate(10)
p.append(pso.bestP)
```
```python
plt.plot(p)
```
The search gets stuck in a local optimum along the way, so increasing the number of iterations did not help.
Instead, we use fewer iterations and watch how the best distance evolves.
```python
pso = PSO(30, 500)
pso.initialize()
p = []
for i in range(50):
pso.simulate(1)
p.append(pso.bestP)
```
```python
plot()
```
```python
plt.plot(p)
```
This is how it turned out.
These are the results of the experiments. We now finish the final program and submit it.
```python
```
| 285cc5897babb2ed8a7da2a6036e8efd33461778 | 949,707 | ipynb | Jupyter Notebook | PSO_discre.ipynb | NlGG/EvolutionaryAlgorithm | 9398ab9b72d8bdbe12af94594ff8c3031601207a | ["MIT"] | null | null | null | PSO_discre.ipynb | NlGG/EvolutionaryAlgorithm | 9398ab9b72d8bdbe12af94594ff8c3031601207a | ["MIT"] | null | null | null | PSO_discre.ipynb | NlGG/EvolutionaryAlgorithm | 9398ab9b72d8bdbe12af94594ff8c3031601207a | ["MIT"] | null | null | null | 101.638164 | 34,958 | 0.747221 | true | 136,072 | Qwen/Qwen-72B | 1. YES 2. YES | 0.853913 | 0.718594 | 0.613617 | __label__eng_Latn | 0.670378 | 0.263968 |
```python
# Import packages that are relevant for the exam.
import numpy as np
from scipy import optimize
import matplotlib.pyplot as plt
import sympy as sm
from mpl_toolkits.mplot3d import Axes3D
from colorama import Fore
from colorama import Style
import ipywidgets as widgets
import matplotlib
```
# 1. Human capital accumulation
Consider a worker living in **two periods**, $t \in \{1,2\}$.
In each period she decides whether to **work ($l_t = 1$) or not ($l_t = 0$)**.
She can *not* borrow or save and thus **consumes all of her income** in each period.
If she **works** her **consumption** becomes:
$$c_t = w h_t l_t\,\,\text{if}\,\,l_t=1$$
where $w$ is **the wage rate** and $h_t$ is her **human capital**.
If she does **not work** her consumption becomes:
$$c_t = b\,\,\text{if}\,\,l_t=0$$
where $b$ is the **unemployment benefits**.
Her **utility of consumption** is:
$$ \frac{c_t^{1-\rho}}{1-\rho} $$
Her **disutility of working** is:
$$ \gamma l_t $$
From period 1 to period 2, she **accumulates human capital** according to:
$$ h_2 = h_1 + l_1 +
\begin{cases}
0 & \text{with prob. }0.5 \\
\Delta & \text{with prob. }0.5
\end{cases} \\
$$
where $\Delta$ is a **stochastic experience gain**.
In the **second period** the worker thus solves:
$$
\begin{eqnarray*}
v_{2}(h_{2}) & = &\max_{l_{2}} \frac{c_2^{1-\rho}}{1-\rho} - \gamma l_2
\\ & \text{s.t.} & \\
c_{2}& = & w h_2 l_2 \\
l_{2}& \in &\{0,1\}
\end{eqnarray*}
$$
In the **first period** the worker thus solves:
$$
\begin{eqnarray*}
v_{1}(h_{1}) &=& \max_{l_{1}} \frac{c_1^{1-\rho}}{1-\rho} - \gamma l_1 + \beta\mathbb{E}_{1}\left[v_2(h_2)\right]
\\ & \text{s.t.} & \\
c_1 &=& w h_1 l_1 \\
h_2 &=& h_1 + l_1 + \begin{cases}
0 & \text{with prob. }0.5\\
\Delta & \text{with prob. }0.5
\end{cases}\\
l_{1} &\in& \{0,1\}\\
\end{eqnarray*}
$$
where $\beta$ is the **discount factor** and $\mathbb{E}_{1}\left[v_2(h_2)\right]$ is the **expected value of living in period two**.
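Since the experience gain is a 50/50 lottery over $0$ and $\Delta$, this expectation can be written out explicitly as the average of the two possible continuation values:
$$ \mathbb{E}_{1}\left[v_2(h_2)\right] = 0.5\,v_2(h_1+l_1) + 0.5\,v_2(h_1+l_1+\Delta) $$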
The **parameters** of the model are:
```python
rho = 2
beta = 0.96
gamma = 0.1
w = 2
b = 1
Delta = 0.1
```
The **relevant levels of human capital** are:
```python
h_vec = np.linspace(0.1,1.5,100)
```
**Question 1:** Solve the model in period 2 and illustrate the solution (including labor supply as a function of human capital).
**Question 2:** Solve the model in period 1 and illustrate the solution (including labor supply as a function of human capital).
**Question 3:** Will the worker never work if her potential wage income is lower than the unemployment benefits she can get? Explain and illustrate why or why not.
## Question 1.1
We start by defining a function which returns the maximum utility level an individual can obtain given the level of human capital, $h$, the wage, $w$, the benefit level, $b$, and the parameters $\rho$ and $\gamma$. The function also returns a binary variable indicating whether the individual works or not, and the minimum level of human capital for which an individual will work. We calculate both the utility if the individual works in the second period and the utility if she does not work; the individual only works if the utility from working is higher than the utility from not working.
```python
# Define the function
def utility(h, w, b, rho, gamma):
# 1) Define the consumption given the level of human capital and the wage
c = w*h*1
# 2) Calculate the utility if the individual works
v_e = c ** (1 - rho) / (1 - rho) - gamma * 1
# 3) Calculate the utility if the individual do not work
v_u = b ** (1 - rho) / (1 - rho)
# 4) Return the maximum utility between working and not working
v_max = np.maximum(v_e, v_u)
    # 5) If the individual works, the maximum utility differs from the utility when not working.
# We will use this to create a boolean which tells whether the individual is working or not
boole = v_max!=v_u
    # 6) Convert the boolean to a binary taking the value 1 if the individual is working.
# We use this to plot the individuals who are working
working = 1*boole
    # 7) Find the lowest value (threshold) of human capital for which an individual will work,
    # i.e. where the utility from working equals the utility from not working.
h_star = optimize.minimize(
lambda h_t: ((w*h_t)**(1 - rho) / (1 - rho) - gamma*1 - v_u)**2, 1)
h_star2 = h_star.x
return v_max, working, h_star2
```
We use the function defined above to find for which values of human capital an individual will work in the second period. We do this for the relevant levels of human capital defined above (h_vec) and the parameters given above.
```python
# 1) Calculate the maximum utilities and the decision whether to work or not for different levels of human capital.
optimum2 = utility(h_vec, w, b, rho, gamma)
# 2) Extract the array with the maximum utilities from the tuple returned by the utility function.
utility_opt2 = optimum2[0]
# 3) Extract the array with the decision whether to work or not from the tuple returned by the utility function.
work_opt2 = optimum2[1]
```
We will now plot the labor supply as a function of the human capital level. We will use the relevant levels of human capital defined above (h_vec).
```python
# 1) Divide the relevant levels of human capital into the values below and above the human capital threshold for working
h_vec_low = [i for i in h_vec if i <= optimum2[2]]
h_vec_high = [i for i in h_vec if i > optimum2[2]]
# 2) Divide the decision to work into the values below and above the human capital threshold for working
work_opt2_low = [i for i in work_opt2 if i <= 0]
work_opt2_high = [i for i in work_opt2 if i > 0]
# 3) Plot the labor supply as a function of the human capital level and the threshold for working
plt.style.use('ggplot')
fig = plt.figure(figsize=(12,8))
plt.plot(h_vec_low, work_opt2_low, color='b')
plt.axvline(x=optimum2[2], linewidth=1, color='r', linestyle='dashed')
plt.plot(h_vec_high, work_opt2_high, color='b')
print(f'The threshold for working is: {optimum2[2][0]:.3f}')
plt.ylabel('Labor supply')
plt.xlabel('Level of human capital')
plt.legend(['Labor supply','Threshold for working'])
plt.show()
```
As we can see from the graph above, the individuals will work in period 2 if their human capital level is above 0.55. We will now present the results from above in a graph with the level of human capital on the x-axis and the utility level on the y-axis.
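The threshold can also be checked analytically: in period 2 the worker is indifferent when $\frac{(wh)^{1-\rho}}{1-\rho}-\gamma=\frac{b^{1-\rho}}{1-\rho}$, which rearranges to $h^{*}=\frac{1}{w}\left[b^{1-\rho}+\gamma(1-\rho)\right]^{\frac{1}{1-\rho}}$. Below is a minimal sketch of this check, using the parameter values defined above (the closed form is our own rearrangement, not part of the exam text):
```python
# Analytic period-2 threshold: solve (w*h)**(1-rho)/(1-rho) - gamma = b**(1-rho)/(1-rho) for h.
h_threshold = (b**(1 - rho) + gamma*(1 - rho))**(1 / (1 - rho)) / w
print(f'Analytic threshold for working in period 2: {h_threshold:.3f}')
```
With $\rho=2$, $w=2$, $b=1$ and $\gamma=0.1$ this gives $1/(2\cdot0.9)\approx0.556$, in line with the numerical threshold printed above.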
```python
# 1) plot the utility level as a function of the level of human capital and plot the threshold for working
plt.figure(figsize=(12,8))
plt.plot(h_vec, utility_opt2, color='b')
plt.axvline(x=optimum2[2], linewidth=1, color='r', linestyle='dashed')
# 2) Change the axis titles and legend
plt.ylabel('Utility')
plt.xlabel('Level of human capital')
plt.legend(['Utility level','Threshold for working'])
# 3) show the graph
plt.show()
```
The graph above shows that the utility level is equal to -1 for all individuals with a human capital level below 0.55; this is simply the utility of consuming the unemployment benefit, $b^{1-\rho}/(1-\rho) = -1$ for $b=1$ and $\rho=2$. For individuals above 0.55 the utility level is an increasing concave function with a utility of at least -1.
## Question 1.2
In this problem we use the same approach as in question 1.1. The only difference is that we also include the expected utility from period 2, discounted by $\beta$.
```python
# Define the function
def utility2(h_1, w, b, rho, gamma, beta, Delta):
    # 1) Define consumption when working, given the level of human capital and the wage
c = w*h_1*1
# 2) Define the expected level of human capital in period 2 when working and not working in period 1 given the level of human capital in period 1
h_2_e = h_1 + 1 + Delta * 0.5
h_2_u = h_1 + 0 + Delta * 0.5
# 3) Define expected utility in period 2 for both when working and when not working in period 1
u_2_m = utility(h_2_e, w, b, rho, gamma)
u_2_e = u_2_m[0]
u_2_n = utility(h_2_u, w, b, rho, gamma)
u_2_u = u_2_n[0]
# 4) Calculate the utility if the individual works in period 1
v_e = c**(1 - rho) / (1 - rho) - gamma*1 + beta * u_2_e
# 5) Calculate the utility if the individual do not work in period 1
v_u = b**(1 - rho) / (1 - rho) + beta * u_2_u
# 6) Return the maximum utility between working and not working
v_max = np.maximum(v_e, v_u)
    # 7) If the individual works, the maximum utility differs from the utility when not working.
# We will use this to create a boolean which tells whether the individual is working or not
boole = v_max!=v_u
    # 8) Convert the boolean to a binary taking the value 1 if the individual is working.
# We use this to plot the individuals who are working
working = 1*boole
# 9) Find the lowest value (threshold) of human capital for which an individual will work.
h_star = optimize.minimize(
lambda h_t: ((w*h_t)**(1 - rho) / (1 - rho) - gamma*1 + beta * utility(h_t+1+0.5*Delta, w, b, rho, gamma)[0] - b**(1 - rho) / (1 - rho) - beta * utility(h_t+0+0.5*Delta, w, b, rho, gamma)[0])**2,1)
h_star2 = h_star.x
return v_max, working, h_star2
```
```python
# 1) Calculate the maximum utilities and the decision whether to work or not for different levels of human capital.
optimum1 = utility2(h_vec, w, b, rho, gamma, beta, Delta)
# 2) Extract the array with the maximum utilities from the tuple returned by the utility2 function.
utility_opt1 = optimum1[0]
# 3) Extract the array with the decision whether to work or not from the tuple returned by the utility2 function.
work_opt1 = optimum1[1]
```
We will now plot the labor supply as a function of the human capital level. We will use the relevant levels of human capital defined above (h_vec)
```python
# 1) Divide the relevant levels of human capital into the values below and above the human capital threshold for working
h_vec_low = [i for i in h_vec if i <= optimum1[2]]
h_vec_high = [i for i in h_vec if i > optimum1[2]]
# 2) Divide the decision to work into the values below and above the human capital threshold for working
work_opt1_low = [i for i in work_opt1 if i <= 0]
work_opt1_high = [i for i in work_opt1 if i > 0]
# 3) Plot the labor supply as a function of the human capital level and the threshold for working
plt.figure(figsize=(12,8))
plt.plot(h_vec_low, work_opt1_low, color='b')
plt.axvline(x=optimum1[2], linewidth=1, color='r', linestyle='dashed')
plt.plot(h_vec_high, work_opt1_high, color='b')
print(f'The threshold for working is: {optimum1[2][0]:.3f}')
plt.ylabel('Labor supply')
plt.xlabel('Level of human capital')
plt.legend(['Labor supply','Threshold for working'])
plt.show()
```
As we can see from the graph above, the individuals will work in period 1 if their human capital level is above 0.35. We will now present the results from above in a graph with the level of human capital on the x-axis and the utility level on the y-axis.
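As a rough cross-check of this threshold, we can also look directly for the root of the difference between the period-1 value of working and of not working. This is a minimal sketch that reuses the `utility` function above and the same expected-capital approximation as in `utility2`:
```python
from scipy import optimize

def work_gain(h_t):
    # Period-1 value of working minus the value of not working (expected-capital approximation).
    v_work = (w*h_t)**(1 - rho) / (1 - rho) - gamma + beta * utility(h_t + 1 + 0.5*Delta, w, b, rho, gamma)[0]
    v_home = b**(1 - rho) / (1 - rho) + beta * utility(h_t + 0 + 0.5*Delta, w, b, rho, gamma)[0]
    return v_work - v_home

# The gain is negative at h=0.1 and positive at h=1.0, so a root (the threshold) lies in between.
print(f'Period-1 threshold (root of the gain): {optimize.brentq(work_gain, 0.1, 1.0):.3f}')
```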
```python
# 1) plot the utility level as a function of the level of human capital and plot the threshold for working
plt.figure(figsize=(12,8))
plt.plot(h_vec, utility_opt1, color='b')
plt.axvline(x=optimum1[2], linewidth=1, color='r', linestyle='dashed')
# 2) Change the axis titles and legend
plt.ylabel('Utility')
plt.xlabel('Level of human capital')
plt.legend(['Utility level','Threshold for working'])
# 3) show the graph
plt.show()
```
The graph above shows that the utility level is close to -2 for all individuals with a human capital level below 0.35; this corresponds to receiving the unemployment benefit in both periods, $-1 + \beta\cdot(-1) = -1.96$. The utility level for individuals above 0.35 is an increasing concave function with a utility of at least the utility level of those not working.
## Question 1.3
The worker will never work if the benefits, $b$, are higher than the potential wage, $wh$, in period 2 as
$$ \frac{b^{1-\rho}}{(1-\rho)} > \frac{(wh)^{1-\rho}}{(1-\rho)}-\gamma \, \, \text{ for } b>wh \, \, \text{and } \gamma>0$$
Because $\gamma>0$, the worker will actually choose not to work even if the potential wage is just above the unemployment benefits, since she gets disutility from working. We will show this graphically by plotting the unemployment benefits and the potential wage as a function of the human capital level, and include the threshold for working which we found in problem 1.1.
```python
# 1) Define a function which returns the utility from the potential wage as a function of the level of human capital
def pot_earnings(h):
return (w*h)**(1 - rho) / (1 - rho)
# 2) Define a function which returns the utility from the unemployment benefits as a function of human capital
def benifit(h):
return (h/h) * b ** (1 - rho) / (1 - rho)
# 3) Plot the potential wage, the unemployment benefit and the threshold for working
plt.figure(figsize=(12,8))
plt.plot(h_vec, pot_earnings(h_vec), 'g-')
plt.plot(h_vec, benifit(h_vec), 'orange')
plt.axvline(x=optimum2[2], linewidth=1, color='r', linestyle='dashed')
# 4) Add labels
plt.xlabel('Human capital level')
plt.ylabel('Potential earnings / benefit')
# 5) Add different colors for different working decisions based on the potential wage relative to the unemployment benefit
plt.axvspan(h_vec[0], b/w, facecolor='r', alpha=0.1)
plt.axvspan(b/w, optimum2[2], facecolor='b', alpha=0.1)
plt.axvspan(optimum2[2], h_vec[-1], facecolor='g', alpha=0.1)
axes = plt.gca()
axes.set_ylim([-5,0])
axes.set_xlim([h_vec[0],1.5])
# 6) Add ledend and show the graph
plt.legend(['Potential earnings','Unemployment benefit','Threshold for working','Not working, b>w*h', 'Not working, b<w*h', 'Working, b<w*h'])
plt.show()
```
In the first period the worker will, for some levels of human capital, choose to work even though the potential wage is below the unemployment benefits. This is due to the human capital accumulation the worker gets if she decides to work, which will increase the potential wage in the second period. We will show this in the same way as we showed for period 2.
```python
# 1) Plot the potential wage, the unemployment benefit and the threshold for working
plt.figure(figsize=(12,8))
plt.plot(h_vec, pot_earnings(h_vec), 'g-')
plt.plot(h_vec, benifit(h_vec), 'orange')
plt.axvline(x=optimum1[2], linewidth=1, color='r', linestyle='dashed')
# 2) Add labels
plt.xlabel('Human capital level')
plt.ylabel('Potential earnings / benefit')
# 3) Add different colors for different working decisions based on the potential wage relative to the unemployment benefit
plt.axvspan(h_vec[0], optimum1[2], facecolor='r', alpha=0.1)
plt.axvspan(optimum1[2], b/w, facecolor='y', alpha=0.1)
plt.axvspan(b/w, h_vec[-1], facecolor='g', alpha=0.1)
axes = plt.gca()
axes.set_ylim([-5,0])
axes.set_xlim([h_vec[0],1.5])
# 4) Add ledend and show the graph
plt.legend(['Potential earnings','Unemployment benefits','Threshold for working','Not working, b>w*b','Working, b>w*b', 'Working, b<w*b'])
print(f'The worker will work in the interval between {optimum1[2][0]:.3f} and {b/w:.3f}, even though the wage is below the benefits')
plt.show()
```
# 2. AS-AD model
Consider the following **AS-AD model**. The **goods market equilibrium** is given by
$$ y_{t} = -\alpha r_{t} + v_{t} $$
where $y_{t}$ is the **output gap**, $r_{t}$ is the **ex ante real interest** and $v_{t}$ is a **demand disturbance**.
The central bank's **Taylor rule** is
$$ i_{t} = \pi_{t+1}^{e} + h \pi_{t} + b y_{t}$$
where $i_{t}$ is the **nominal interest rate**, $\pi_{t}$ is the **inflation gap**, and $\pi_{t+1}^{e}$ is the **expected inflation gap**.
The **ex ante real interest rate** is given by
$$ r_{t} = i_{t} - \pi_{t+1}^{e} $$
Together, the above implies that the **AD-curve** is
$$ \pi_{t} = \frac{1}{h\alpha}\left[v_{t} - (1+b\alpha)y_{t}\right]$$
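To see this, substitute the Taylor rule into the definition of the ex ante real interest rate, $r_t = i_t - \pi_{t+1}^{e} = h\pi_t + by_t$, and insert this into the goods market equilibrium:
$$ y_{t} = -\alpha(h\pi_{t} + by_{t}) + v_{t} \Leftrightarrow (1+b\alpha)y_{t} = -\alpha h\pi_{t} + v_{t} \Leftrightarrow \pi_{t} = \frac{1}{h\alpha}\left[v_{t} - (1+b\alpha)y_{t}\right] $$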
Further, assume that the **short-run supply curve (SRAS)** is given by
$$ \pi_{t} = \pi_{t}^{e} + \gamma y_{t} + s_{t}$$
where $s_t$ is a **supply disturbance**.
**Inflation expectations are adaptive** and given by
$$ \pi_{t}^{e} = \phi\pi_{t-1}^{e} + (1-\phi)\pi_{t-1}$$
Together, this implies that the **SRAS-curve** can also be written as
$$ \pi_{t} = \pi_{t-1} + \gamma y_{t} - \phi\gamma y_{t-1} + s_{t} - \phi s_{t-1} $$
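This follows from lagging the SRAS-curve one period, $\pi_{t-1}^{e} = \pi_{t-1} - \gamma y_{t-1} - s_{t-1}$, and substituting into the adaptive expectations:
$$ \pi_{t}^{e} = \phi\pi_{t-1}^{e} + (1-\phi)\pi_{t-1} = \pi_{t-1} - \phi\gamma y_{t-1} - \phi s_{t-1} $$
which, inserted back into the SRAS-curve, gives the expression above.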
The **parameters** of the model are:
```python
par = {}
par['alpha'] = 5.76
par['h'] = 0.5
par['b'] = 0.5
par['phi'] = 0
par['gamma'] = 0.075
```
**Question 1:** Use the ``sympy`` module to solve for the equilibrium values of output, $y_t$, and inflation, $\pi_t$, (where AD = SRAS) given the parameters ($\alpha$, $h$, $b$, $\phi$, $\gamma$) and $y_{t-1}$, $\pi_{t-1}$, $v_t$, $s_t$, and $s_{t-1}$.
**Question 2:** Find and illustrate the equilibrium when $y_{t-1} = \pi_{t-1} = v_t = s_t = s_{t-1} = 0$. Illustrate how the equilibrium changes when instead $v_t = 0.1$.
**Persistent disturbances:** Now, additionaly, assume that both the demand and the supply disturbances are AR(1) processes
$$ v_{t} = \delta v_{t-1} + x_{t} $$
$$ s_{t} = \omega s_{t-1} + c_{t} $$
where $x_{t}$ is a **demand shock**, and $c_t$ is a **supply shock**. The **autoregressive parameters** are:
```python
par['delta'] = 0.80
par['omega'] = 0.15
```
**Question 3:** Starting from $y_{-1} = \pi_{-1} = s_{-1} = 0$, how does the economy evolve for $x_0 = 0.1$, $x_t = 0, \forall t > 0$ and $c_t = 0, \forall t \geq 0$?
**Stochastic shocks:** Now, additionally, assume that $x_t$ and $c_t$ are stochastic and normally distributed
$$ x_{t}\sim\mathcal{N}(0,\sigma_{x}^{2}) $$
$$ c_{t}\sim\mathcal{N}(0,\sigma_{c}^{2}) $$
The **standard deviations of the shocks** are:
```python
par['sigma_x'] = 3.492
par['sigma_c'] = 0.2
```
**Question 4:** Simulate the AS-AD model for 1,000 periods. Calculate the following five statistics:
1. Variance of $y_t$, $var(y_t)$
2. Variance of $\pi_t$, $var(\pi_t)$
3. Correlation between $y_t$ and $\pi_t$, $corr(y_t,\pi_t)$
4. Auto-correlation between $y_t$ and $y_{t-1}$, $corr(y_t,y_{t-1})$
5. Auto-correlation between $\pi_t$ and $\pi_{t-1}$, $corr(\pi_t,\pi_{t-1})$
**Question 5:** Plot how the correlation between $y_t$ and $\pi_t$ changes with $\phi$. Use a numerical optimizer or root finder to choose $\phi\in(0,1)$ such that the simulated correlation between $y_t$ and $\pi_t$ comes close to 0.31.
**Question 6:** Use a numerical optimizer to choose $\sigma_x>0$, $\sigma_c>0$ and $\phi\in(0,1)$ to make the simulated statistics as close as possible to US business cycle data where:
1. $var(y_t) = 1.64$
2. $var(\pi_t) = 0.21$
3. $corr(y_t,\pi_t) = 0.31$
4. $corr(y_t,y_{t-1}) = 0.84$
5. $corr(\pi_t,\pi_{t-1}) = 0.48$
## Question 2.1
We define the variables and parameters by applying the `sympy`-package.
```python
# 1) Apply the ".init_printing"-function in order to ensure pretty printing.
sm.init_printing(use_unicode=True)
# 2) Define the parameters and variables of the models.
y_t = sm.symbols('y_t')
y_tm1 = sm.symbols('y_t-1')
alpha_par = sm.symbols('alpha')
r_t = sm.symbols('r_t')
v_t = sm.symbols('v_t')
i_t = sm.symbols('i_t')
pi_e1 = sm.symbols('pi^e_t+1')
h_par = sm.symbols('h')
pi_t = sm.symbols('pi_t')
b_par = sm.symbols('b')
s_t = sm.symbols('s_t')
s_tm1 = sm.symbols('s_t-1')
pi_e0 = sm.symbols('pi^e_t')
gamma_par = sm.symbols('gamma')
phi_par = sm.symbols('phi')
pi_em0 = sm.symbols('pi^e_t-1')
pi_tm1 = sm.symbols('pi_t-1')
```
We know that the equilibrium of the output gap and the equilibrium of the inflation gap are given at the point where the AD-curve and the SRAS-curve intersect. Thus, we define the AD-curve and the SRAS-curve by the parameters and variables of the given models.
Afterwards, we isolate for $\pi_t$ in the SRAS-curve and substitute the expression into the AD-curve. When this is done, we find the equilibrium of the output gap ($y^*$) by isolating $y_t$.
```python
# 1) Define the AD-curve and the SRAS-curve.
AD = sm.Eq(pi_t, 1 / (h_par*alpha_par) * (v_t - (1+ b_par * alpha_par) * y_t))
SRAS = sm.Eq(pi_t, pi_tm1 + gamma_par * y_t - phi_par * gamma_par * y_tm1 + s_t - phi_par * s_tm1)
# 2) Ensure that we have isolated pi_t in the SRAS-curve.
SRAS_solve = sm.solve(SRAS, pi_t)
# 3) Substitute the SRAS-curve into the AD-curve by substituting pi_t with the expression of the SRAS-curve.
SRAS_eq_AD = AD.subs(pi_t, SRAS_solve[0])
# 4) Solve for y_t in order to find y*.
y_star = sm.solve(SRAS_eq_AD, y_t)
```
Next, we want to derive the equilibrium of the inflation gap. In order to do so, we substitute the previously found expression of the equilibrium of the output gap into the AD-curve at the place of $y_t$. Now, we have found the equilibrium of the inflation gap. The expression is simplified in the end.
```python
# 1) Insert the equilibrium of the output gap into the AD_curve.
y_insert = AD.subs(y_t, y_star[0])
# 2) Simplify the expression.
pi_simp = sm.simplify(y_insert)
# 3) Only keep the right hand side.
pi_star = sm.solve(pi_simp, pi_t)
```
We are now able to examine the equilibrium of the output gap and the equilibrium of the inflation gap as a function of the given parameters and $y_{t-1}$, $\pi _{t-1}$, $v_t$, $s_t$ and $s_{t-1}$.
```python
# Print the equilibrium of the output gap (with text in front)
print('The equilibrium of the output gap is given by \n')
y_star
```
```python
# Print the equilibrium of the inflation gap (with text in front)
print('The equilibrium of the inflation gap is given by \n')
pi_star
```
## Question 2.2
In order to find and illustrate the equilibrium values of output and inflation, we must find the values for which the AD-curve and the SRAS-curve intersect. We use the previously derived analytical equilibrium to determine the numerical equilibrium. However, first we insert the values of the variables in the same dictionary as before (par), i.e. $y_{t-1} = \pi_{t-1} = v_t = s_t = s_{t-1} = 0$.
```python
# Define values of the given variables.
par['y_t-1'] = 0
par['pi_t-1'] = 0
par['v_t'] = 0
par['s_t'] = 0
par['s_t-1'] = 0
```
Next, we define the equilibrium of the output gap and the equilibrium of the inflation gap as a python functions and plug in the given values of the variables and of the parameters.
```python
# 1) Define the equilibrium of the output gap as a python function.
y_eq_func = sm.lambdify((alpha_par, gamma_par, h_par, phi_par, y_tm1, s_tm1, pi_tm1, s_t, v_t, b_par), y_star[0])
# 2) Insert the values of the variables of the parameters and variables into the function.
y_eq_q2 = y_eq_func(par['alpha'], par['gamma'], par['h'], par['phi'], par['y_t-1'], par['s_t-1'], par['pi_t-1'], par['s_t'], par['v_t'], par['b'])
# 3) Define the equilibrium of the inflation gap as a python function.
pi_eq_func = sm.lambdify((alpha_par, gamma_par, h_par, y_tm1, phi_par, pi_tm1, s_tm1, v_t, b_par, s_t), pi_star[0])
# 4) Insert the values of the variables of the parameters and variables into the function.
pi_eq_q2 = pi_eq_func(par['alpha'], par['gamma'], par['h'], par['y_t-1'], par['phi'], par['pi_t-1'], par['s_t-1'], par['v_t'], par['b'], par['s_t'])
```
We are now able to examine the equilibrium of the output gap and the equilibrium of the inflation gap.
```python
# Print the equilibrium.
print('The equilibrium is given by (y_t, pi_t) = (' + str(round(y_eq_q2,3)) + ', ' + str(round(pi_eq_q2,3)) +')')
```
We can plot the SRAS-curve and the AD-curve in a graph. However, first we need to define the function of the SRAS-curve and the function of the AD-curve. Furthermore, we show the equilibrium in the graph.
```python
# 1) Isolate pi_t in the SRAS-function.
SRAS_solve = sm.solve(SRAS, pi_t)
# 2) Turn the SRAS-curve into a python function.
SRAS_func = sm.lambdify((gamma_par, phi_par, y_tm1, y_t, s_tm1, pi_tm1, s_t), SRAS_solve[0])
# 3) Isolate pi_t in the AD-curve.
AD_solve = sm.solve(AD, pi_t)
# 4) Turn the AD-curve into a python function.
AD_func = sm.lambdify((alpha_par, b_par, y_t, v_t, h_par), AD_solve[0])
# 5) Define some values of the x-axis.
y_arange = np.arange(-1, 1, 0.01)
# 6) Plot the functions.
plt.figure(figsize=(12,8))
plt.plot(y_arange, AD_func(par['alpha'], par['b'], y_arange, par['v_t'], par['h']))
plt.plot(y_arange, SRAS_func(par['gamma'], par['phi'], par['y_t-1'], y_arange, par['s_t-1'], par['pi_t-1'], par['s_t']))
plt.plot(y_eq_q2, pi_eq_q2, 'ro')
plt.ylabel('Inflation gap')
plt.xlabel('Output gap')
plt.legend(['AD-curve','SRAS-curve', 'Equilibrium'])
axes = plt.gca()
axes.set_ylim([-0.1,0.1])
axes.set_xlim([-0.1,0.1])
plt.show()
print('The equilibrium is given by (y_t, pi_t) = (' + str(round(y_eq_q2,3)) + ', ' + str(round(pi_eq_q2,3)) +')')
```
We consider a situation where $v_t=0.1$. We define a new variable for $v_t$, and use the functions defined before to examine the difference.
```python
# 1) Define a new variable for v_t.
par['v_t_new'] = 0.1
# 2) Examine the new eqilibrium by inserting the new value of v_t.
y_eq_q2_new = y_eq_func(par['alpha'], par['gamma'], par['h'], par['phi'], par['y_t-1'], par['s_t-1'], par['pi_t-1'], par['s_t'], par['v_t_new'], par['b'])
pi_eq_q2_new = pi_eq_func(par['alpha'], par['gamma'], par['h'], par['y_t-1'], par['phi'], par['pi_t-1'], par['s_t-1'], par['v_t_new'], par['b'], par['s_t'])
# 3) Print the equilibrium.
print('The equilibrium is given by (y_t, pi_t) = (' + str(round(y_eq_q2_new, 3)) + ', ' + str(round(pi_eq_q2_new, 3)) +')')
```
Again, we examine this on a graph.
```python
# Plot the functions.
plt.figure(figsize=(12,8))
plt.plot(y_arange, AD_func(par['alpha'], par['b'], y_arange, par['v_t_new'], par['h']))
plt.plot(y_arange, SRAS_func(par['gamma'], par['phi'], par['y_t-1'], y_arange, par['s_t-1'], par['pi_t-1'], par['s_t']))
plt.plot(y_eq_q2_new, pi_eq_q2_new, 'ro')
plt.ylabel('Inflation gap')
plt.xlabel('Output gap')
plt.legend(['AD-curve','SRAS-curve', 'Equilibrium'])
axes = plt.gca()
axes.set_ylim([-0.1,0.1])
axes.set_xlim([-0.1,0.1])
plt.show()
print('The equilibrium is given by (y_t, pi_t) = (' + str(round(y_eq_q2_new, 3)) + ', ' + str(round(pi_eq_q2_new, 3)) +')')
```
## Question 2.3
We want to examine how the economy evolves when $v_t$ and $s_t$ are given by AR(1)-processes. We consider a situation where a demand shock hits the economy in period 0. The shock is only present in period 0.
In order to examine the effect of a shock on the output gap and the inflation gap, we create a vector of $x_t$, $c_t$, $v_t$ and $s_t$. We use these vectors to generate a vector of $y_t$ and a vector of $\pi_t$. The vectors of $y_t$ and $\pi_t$ are used to plot the evolution of the economy.
```python
# 1) Define length of period.
total = 1000
# 2) Define empty lists.
t_list = np.empty(total)
x_t_list = np.empty(total)
c_t_list = np.empty(total)
v_t_list = np.empty(total)
s_t_list = np.empty(total)
y_t_list = np.empty(total)
pi_t_list = np.empty(total)
# 3) Define starting parameters.
start = {}
start['y_n'] = 0
start['pi_n'] = 0
start['v_n'] = 0
start['s_n'] = 0
# 4) Define the length of the time vector.
for i in range (0, total):
t_list[i] = i
# 5) Set the values in the first period.
if t_list[i] == 0:
c_t_list[i] = 0
x_t_list[i] = 0.1
v_t_list[i] = par['delta'] * start['v_n'] + x_t_list[i]
s_t_list[i] = par['omega'] * start['s_n'] + c_t_list[i]
y_t_list[i] = y_eq_func(par['alpha'], par['gamma'], par['h'], par['phi'], start['y_n'], start['s_n'], start['pi_n'], s_t_list[i], v_t_list[i], par['b'])
pi_t_list[i] = pi_eq_func(par['alpha'], par['gamma'], par['h'], start['y_n'], par['phi'], start['pi_n'], start['s_n'], v_t_list[i], par['b'], s_t_list[i])
# 6) Set the values in the following periods.
else:
c_t_list[i] = 0
x_t_list[i] = 0
v_t_list[i] = par['delta'] * v_t_list[i-1] + x_t_list[i]
s_t_list[i] = par['omega'] * s_t_list[i-1] + c_t_list[i]
y_t_list[i] = y_eq_func(par['alpha'], par['gamma'], par['h'], par['phi'], y_t_list[i-1], s_t_list[i-1], pi_t_list[i-1], s_t_list[i], v_t_list[i], par['b'])
pi_t_list[i] = pi_eq_func(par['alpha'], par['gamma'], par['h'], y_t_list[i-1], par['phi'], pi_t_list[i-1], s_t_list[i-1], v_t_list[i], par['b'], s_t_list[i])
```
We are now able to examine the evolution of the economy when a demand shock occurs. In the first graph, we consider the change in the SRAS-curve and the change in the AD-curve from period 0 to period 1 (the shock does affect the economy already in period 0). In the second graph, we examine the evolution of the output gap and the evolution of the inflation gap as a function of time.
```python
# Plot the functions.
plt.figure(figsize=(12,8))
plt.plot(y_arange, AD_func(par['alpha'], par['b'], y_arange, par['v_t'], par['h']),color='r',linestyle='dashed')
plt.plot(y_arange, AD_func(par['alpha'], par['b'], y_arange, v_t_list[0], par['h']),color='r')
plt.plot(y_arange, SRAS_func(par['gamma'], par['phi'], start['y_n'], y_arange, start['s_n'], start['pi_n'], s_t_list[0]),color='b')
plt.plot(y_t_list[0], pi_t_list[0], 'ro',color='r')
plt.ylabel('Inflation gap')
plt.xlabel('Output gap')
plt.plot(y_arange, AD_func(par['alpha'], par['b'], y_arange, v_t_list[1], par['h']),color='r',linestyle=':')
plt.plot(y_arange, SRAS_func(par['gamma'], par['phi'], y_t_list[0], y_arange, s_t_list[0], pi_t_list[0], s_t_list[1]),color='b',linestyle=':')
plt.plot(y_t_list[1], pi_t_list[1], 'ro', color='b')
plt.legend(['AD-curve, steady state', 'AD-curve, $p_0$','SRAS-curve, steady state and $p_0$', 'Equilibrium, $p_0$', 'AD-curve, $p_1$','SRAS-curve, $p_1$', 'Equilibrium, $p_1$'])
axes1 = plt.gca()
axes1.set_ylim([-0.05,0.05])
axes1.set_xlim([-0.05,0.05])
plt.show()
```
```python
# Plot the functions.
plt.figure(figsize=(12,8))
plt.plot(t_list, y_t_list)
plt.plot(t_list, pi_t_list)
plt.ylabel('Output gap and inflation gap')
plt.xlabel('Time')
plt.legend(['Output gap','Inflation gap'])
axes_q3 = plt.gca()
axes_q3.axhline(y=0, color='black')
axes_q3.set_ylim([-0.005,0.025])
axes_q3.set_xlim([0,100])
plt.show()
```
In the first graph, we note that the AD-curve moves upwards in period 0 due to the demand shock. However, since supply initially is unaffected by the demand shock, the SRAS-curve is unchanged from the SRAS-curve in steady state. This generates a boom in the economy with a positive output gap and a positive inflation gap.
In period 1, the SRAS-curve moves up as a response to the boom in the previous period. The AD-curve moves downwards, since the demand shock is no longer present. Hence, the output gap decreases from period 0 to period 1 while the inflation gap increases from period 0 to period 1. This tendency continues, so at some point the output gap becomes negative.
We note this on the second graph, where the output gap decreases from period 0 and onwards until it converges back to steady state. The inflation gap increases until it too converges back to steady state. We note that the economy is back in steady state in approximately period 75.
## Question 2.4
Now, we assume that $x_t$ and $c_t$ are stochastic shocks with a standard deviation of $\sigma$ and a mean of $0$. This implies that the expected shock in period $t$ is $0$, but a shock can occur in every period. In order to show the evolution of the economy and derive the moments, we use almost the same structure as before, but with a new definition of $x_t$ and $c_t$.
```python
# 1) Define length of period.
total_q4 = 1000
# 2) Define empty lists.
t_list_q4 = np.empty(total_q4)
x_t_list_q4 = np.empty(total_q4)
c_t_list_q4 = np.empty(total_q4)
v_t_list_q4 = np.empty(total_q4)
s_t_list_q4 = np.empty(total_q4)
y_t_list_q4 = np.empty(total_q4)
pi_t_list_q4 = np.empty(total_q4)
# 3) Define starting parameters.
start = {}
start['y_n'] = 0
start['pi_n'] = 0
start['v_n'] = 0
start['s_n'] = 0
# 4) Define the length of the time vector.
for i in range (0, total_q4):
t_list_q4[i] = i
# 5) Set a seed number and draw random numbers from the normal distribution with mean 0 and standard deviation sigma.
np.random.seed(117)
c_t_list_q4_old = np.random.normal(loc=0, scale=par['sigma_c'], size=(total_q4, 1))
c_t_list_q4 = c_t_list_q4_old[:,0]
x_t_list_q4_old = np.random.normal(loc=0, scale=par['sigma_x'], size=(total_q4, 1))
x_t_list_q4 = x_t_list_q4_old[:,0]
# 6) Set the values in the first period.
if t_list_q4[i] == 0:
v_t_list_q4[i] = par['delta'] * start['v_n'] + x_t_list_q4[i]
s_t_list_q4[i] = par['omega'] * start['s_n'] + c_t_list_q4[i]
y_t_list_q4[i] = y_eq_func(par['alpha'], par['gamma'], par['h'], par['phi'], start['y_n'], start['s_n'], start['pi_n'], s_t_list_q4[i], v_t_list_q4[i], par['b'])
pi_t_list_q4[i] = pi_eq_func(par['alpha'], par['gamma'], par['h'], start['y_n'], par['phi'], start['pi_n'], start['s_n'], v_t_list_q4[i], par['b'], s_t_list_q4[i])
# 7) Set the values in the following periods.
else:
v_t_list_q4[i] = par['delta'] * v_t_list_q4[i-1] + x_t_list_q4[i]
s_t_list_q4[i] = par['omega'] * s_t_list_q4[i-1] + c_t_list_q4[i]
y_t_list_q4[i] = y_eq_func(par['alpha'], par['gamma'], par['h'], par['phi'], y_t_list_q4[i-1], s_t_list_q4[i-1], pi_t_list_q4[i-1], s_t_list_q4[i], v_t_list_q4[i], par['b'])
pi_t_list_q4[i] = pi_eq_func(par['alpha'], par['gamma'], par['h'], y_t_list_q4[i-1], par['phi'], pi_t_list_q4[i-1], s_t_list_q4[i-1], v_t_list_q4[i], par['b'], s_t_list_q4[i])
```
We simulate the model for 1,000 periods. We note that the output gap and the inflation gap are very volatile when a shock can hit the economy in every period.
```python
# Plot the graph.
plt.figure(figsize=(12,8))
plt.plot(t_list_q4, y_t_list_q4)
plt.plot(t_list_q4, pi_t_list_q4)
plt.ylabel('Output gap and inflation gap')
plt.xlabel('Time')
axes_q4 = plt.gca()
axes_q4.axhline(y=0, color='black')
plt.legend(['Output gap','Inflation gap'])
plt.show()
```
We are asked to calculate specific moments. These are e.g. the auto-correlation of the output gap and auto-correlation of the inflation gap. In order to derive those, we must define the values of the lagged output gap and the lagged inflation gap.
```python
# 1) Define empty lists.
y_tm1_list_q4 = np.empty(total_q4)
pi_tm1_list_q4 = np.empty(total_q4)
# 2) Define starting parameters.
y_tm1_list_q4[0] = start['y_n']
pi_tm1_list_q4[0] = start['pi_n']
# 3) Lag output gap and inflation gap in the following periods.
for k in range (1, total_q4):
if t_list_q4[k] > 0:
y_tm1_list_q4[k] = y_t_list_q4[k-1]
pi_tm1_list_q4[k] = pi_t_list_q4[k-1]
```
The moments to be calculated are
- The variance of the output gap, $var(y_t)$
- The variance of the inflation gap, $var(\pi_t)$
- The correlation between the output gap and the inflation gap, $corr(y_t, \pi_t)$
- The auto-correlation between the output gap and the lagged output gap, $corr(y_t, y_{t-1})$
- The auto-correlation between the inflation gap and the lagged inflation gap, $corr(\pi_t, \pi_{t-1})$
This is done by using the functions of the `numpy` package.
```python
# 1) Calculate the variance of the output gap.
variance_y = np.var(y_t_list_q4)
# 2) Calculate the variance of the inflation gap.
variance_pi = np.var(pi_t_list_q4)
# 3) Calculate the correlation between the output gap and the inflation gap.
corr_y_pi_old = np.corrcoef(y_t_list_q4, pi_t_list_q4)
corr_y_pi = corr_y_pi_old[1,0]
# 4) Calculate the auto-correlation of the output gap.
auto_corr_y_old = np.corrcoef(y_t_list_q4, y_tm1_list_q4)
auto_corr_y = auto_corr_y_old[1,0]
# 5) Calculate the auto-correlation of the inflation gap.
auto_corr_pi_old = np.corrcoef(pi_t_list_q4, pi_tm1_list_q4)
auto_corr_pi = auto_corr_pi_old[1,0]
# 6) Print the results.
print('- var(y_t) = ' + str(round(variance_y,3)))
print('- var(pi_t) = ' + str(round(variance_pi,3)))
print('- corr(y_t, pi_t) = ' + str(round(corr_y_pi,3)))
print('- corr(y_t, y_t_1) = ' + str(round(auto_corr_y,3)))
print('- corr(pi_t, pi_t_1) = ' + str(round(auto_corr_pi,3)))
```
## Question 2.5
We want to plot the correlation between the output gap and the inflation gap as a function of $\phi$. In order to do this, we define a function with the content of the code produced in problem 2.4. However, the value of $\phi$ is not fixed, and we are able to change the value of $\phi$ by calling the function.
```python
# 1) Define function.
def phi_func(phi_value):
"""
This function defines the array of every variable in our model. This ensures that we are able to determine the array
of y_t and pi_t, which are the variables of special interest in this context. First, the length of the period is set.
Afterwards, an empty list for every variable is defined. In the following step, we set the starting values. Afterwards,
we define the values in period 0 and in every subsequent period.
In the end, the correlation between y_t and pi_t as a function of phi is calculated and returned. One can choose the
value of phi by replacing phi_value with this value when calling the function later on.
"""
# 2) Define length of period.
total_q5 = 1000
# 3) Define empty lists.
t_list_q5 = np.empty(total_q5)
x_t_list_q5 = np.empty(total_q5)
c_t_list_q5 = np.empty(total_q5)
v_t_list_q5 = np.empty(total_q5)
s_t_list_q5 = np.empty(total_q5)
y_t_list_q5 = np.empty(total_q5)
pi_t_list_q5 = np.empty(total_q5)
# 4) Define starting parameters.
start = {}
start['y_n'] = 0
start['pi_n'] = 0
start['v_n'] = 0
start['s_n'] = 0
# 5) Define the length of the time vector.
for i in range (0, total_q5):
t_list_q5[i] = i
# 6) Set a seed number and draw random numbers from the normal distribution with mean 0 and standard deviation sigma.
np.random.seed(117)
c_t_list_q5_old = np.random.normal(loc=0, scale=par['sigma_c'], size=(total_q5, 1))
c_t_list_q5 = c_t_list_q5_old[:,0]
x_t_list_q5_old = np.random.normal(loc=0, scale=par['sigma_x'], size=(total_q5, 1))
x_t_list_q5 = x_t_list_q5_old[:,0]
# 7) Set the values in the first period.
if t_list_q5[i] == 0:
v_t_list_q5[i] = par['delta'] * start['v_n'] + x_t_list_q5[i]
s_t_list_q5[i] = par['omega'] * start['s_n'] + c_t_list_q5[i]
y_t_list_q5[i] = y_eq_func(par['alpha'], par['gamma'], par['h'], phi_value, start['y_n'], start['s_n'], start['pi_n'], s_t_list_q5[i], v_t_list_q5[i], par['b'])
pi_t_list_q5[i] = pi_eq_func(par['alpha'], par['gamma'], par['h'], start['y_n'], phi_value, start['pi_n'], start['s_n'], v_t_list_q5[i], par['b'], s_t_list_q5[i])
# 8) Set the values in the following periods.
else:
v_t_list_q5[i] = par['delta'] * v_t_list_q5[i-1] + x_t_list_q5[i]
s_t_list_q5[i] = par['omega'] * s_t_list_q5[i-1] + c_t_list_q5[i]
y_t_list_q5[i] = y_eq_func(par['alpha'], par['gamma'], par['h'], phi_value, y_t_list_q5[i-1], s_t_list_q5[i-1], pi_t_list_q5[i-1], s_t_list_q5[i], v_t_list_q5[i], par['b'])
pi_t_list_q5[i] = pi_eq_func(par['alpha'], par['gamma'], par['h'], y_t_list_q5[i-1], phi_value, pi_t_list_q5[i-1], s_t_list_q5[i-1], v_t_list_q5[i], par['b'], s_t_list_q5[i])
# 9) Define the correlation between y_t and pi_t
corr_q5_old = np.corrcoef(y_t_list_q5, pi_t_list_q5)
corr_q5 = corr_q5_old[1,0]
return corr_q5
```
We have just defined the function. Now, we choose some values of $\phi$ between 0 and 1 and find the corresponding correlation between $y_t$ and $\pi_t$.
```python
# 1) Define the number of different values of phi that we want to find the correlation for.
dif_values = 100
# 2) Define a vector with the number of different values in the range between 0 and 1.
phi_values = np.linspace(0, 1, dif_values)
# 3) Define an empty vector with the length of the number of different values of phi where we want to find the correlation.
corr_list = np.empty(dif_values)
# 4) Find the correlation as a function of the different values of phi.
for k, phi_val in enumerate(phi_values):
corr_list[k] = phi_func(phi_val)
```
We plot the graph in order to examine how the correlation between $y_t$ and $\pi_t$ changes with $\phi$.
We note that higher levels of $\phi$ imply a more positive correlation between $y_t$ and $\pi_t$. This makes sense if we return to the equilibrium expression for the inflation gap ($\pi^*$) from problem 2.1. In this equation, a higher level of $\phi$ implies that $\pi_t$ is more affected by $y_{t-1}$. We found in problem 2.4 that the auto-correlation between $y_t$ and $y_{t-1}$ was close to 1, so the change in $\pi_t$ must be more similar to the change in $y_t$ when $\phi$ is greater.
```python
# Plot the graph.
plt.figure(figsize=(12,8))
plt.plot(phi_values, corr_list)
plt.ylabel('Correlation between y_t and pi_t')
plt.xlabel('Values of phi')
axes_q5 = plt.gca()
axes_q5.axhline(y=0, color='black')
plt.show()
```
We want to find the value of $\phi$ that ensures that the correlation between the output gap and the inflation gap is closest possible to 0.31. In order to do so, we can use the optimize function from the `scipy` package.
First, we define the desired value of the correlation. Afterwards, we use the previously defined function, but subtract the desired correlation at the end and square the difference. This ensures that the expression is non-negative, and that we find the value of $\phi$ where the correlation is closest to 0.31 by minimizing the function.
```python
# 1) Define desired level of correlation.
corr_given = 0.31
# 2) Take an initial guess.
initial_guess = 0
# 3) Define the function that should be minimized and withdraw desired level of correlation.
def min_function(phi_values):
""" Minimizes the function with respect to phi_values. The desired level of correlation is inserted, and the expression
is squared, so we only consider non-negative values. The value of phi that minimizes the function that ensures that
the correlation is closest possible to the desired correlation level is returned.
"""
return (phi_func(phi_values)-corr_given)**2
# 4) Find the value of phi that minimizes the function.
result = optimize.minimize(min_function, initial_guess)
# 5) Save the value of phi that minimizes the function, i.e. the value for which the correlation is closest possible to 0.31.
phi_min_old = result.x
phi_min = phi_min_old[0]
# 6) Print the value of phi that ensures that the correlation is closest possible to 0.31
print('The value of phi that ensures that the correlation between y_t and pi_t is closest to 0.31 is phi = ' + str(round(phi_min, 3)))
```
We plot the same graph as before and insert the value of $\phi$ that ensures that the level of correlation between the output gap and the inflation gap is closest possible to 0.31.
```python
# Plot the graph.
plt.figure(figsize=(12,8))
plt.plot(phi_values, corr_list)
plt.plot(phi_min, corr_given, 'ro')
plt.ylabel('Correlation between y_t and pi_t')
plt.xlabel('Values of phi')
plt.legend(['Correlation as function of phi', 'Value of phi with correlation closest to 0.31'])
axes_q5 = plt.gca()
axes_q5.axhline(y=0, color='black')
plt.show()
```
## Question 2.6
We want to find the value of $\phi$, $\sigma_x$ and $\sigma_c$ so that the model has moments that are closest possible to the US economy. First, we define the real values of the variance of $y_t$ and $\pi_t$, and the auto-correlation of $y_t$ and $\pi_t$. The correlation between $y_t$ and $\pi_t$ is defined in problem 2.5
```python
# Define real values of variance and auto-correlation.
var_y_given = 1.64
var_pi_given = 0.21
auto_y_given = 0.84
auto_pi_given = 0.48
```
We apply the same function as in 2.5, but now we minimize over $\phi$, $\sigma_x$ and $\sigma_c$ instead of just $\phi$. In the end, we subtract the empirical values from the model moments and square the differences. We sum the five moments, so that we find the values that minimize the sum and, hence, bring the moments of our model closest possible to those of the US economy.
```python
# 1) Define function.
def bs_usa(opt_values):
""" Step 3-9 follow the same procedure as the previusly defined function, "phi_func". In step 2, the parameters to be
calibrated are combined into a list. In step 10, the variance, correlation and auto-correlation are calculated.
In step 11, the desired levels of the moments are withdrawn and the expressions are squared in order to ensure
that these are non-negative. In the last step, the sum of the squared moments with withdrawn values is calculated
and returned.
"""
# 2) Combine the parameters to be calibrated into a list.
phi_usa, sigma_x_usa, sigma_c_usa = opt_values
# 3) Define length of period.
total_q6 = 1000
# 4) Define empty lists.
t_list_q6 = np.empty(total_q6)
x_t_list_q6 = np.empty(total_q6)
c_t_list_q6 = np.empty(total_q6)
v_t_list_q6 = np.empty(total_q6)
s_t_list_q6 = np.empty(total_q6)
y_t_list_q6 = np.empty(total_q6)
pi_t_list_q6 = np.empty(total_q6)
y_tm1_list_q6 = np.empty(total_q6)
pi_tm1_list_q6 = np.empty(total_q6)
# 5) Define starting parameters.
start = {}
start['y_n'] = 0
start['pi_n'] = 0
start['v_n'] = 0
start['s_n'] = 0
# 6) Define the length of the time vector.
for i in range (0, total_q6):
t_list_q6[i] = i
# 7) Set a seed number and draw random numbers from the normal distribution with mean 0 and standard deviation sigma.
np.random.seed(117)
c_t_list_q6_old = np.random.normal(loc=0, scale=sigma_c_usa, size=(total_q6, 1))
c_t_list_q6 = c_t_list_q6_old[:,0]
x_t_list_q6_old = np.random.normal(loc=0, scale=sigma_x_usa, size=(total_q6, 1))
x_t_list_q6 = x_t_list_q6_old[:,0]
# 8) Set the values in the first period.
if t_list_q6[i] == 0:
v_t_list_q6[i] = par['delta'] * start['v_n'] + x_t_list_q6[i]
s_t_list_q6[i] = par['omega'] * start['s_n'] + c_t_list_q6[i]
y_t_list_q6[i] = y_eq_func(par['alpha'], par['gamma'], par['h'], phi_usa, start['y_n'], start['s_n'], start['pi_n'], s_t_list_q6[i], v_t_list_q6[i], par['b'])
pi_t_list_q6[i] = pi_eq_func(par['alpha'], par['gamma'], par['h'], start['y_n'], phi_usa, start['pi_n'], start['s_n'], v_t_list_q6[i], par['b'], s_t_list_q6[i])
y_tm1_list_q6[i] = start['y_n']
pi_tm1_list_q6[i] = start['pi_n']
# 9) Set the values in the following periods.
else:
v_t_list_q6[i] = par['delta'] * v_t_list_q6[i-1] + x_t_list_q6[i]
s_t_list_q6[i] = par['omega'] * s_t_list_q6[i-1] + c_t_list_q6[i]
y_t_list_q6[i] = y_eq_func(par['alpha'], par['gamma'], par['h'], phi_usa, y_t_list_q6[i-1], s_t_list_q6[i-1], pi_t_list_q6[i-1], s_t_list_q6[i], v_t_list_q6[i], par['b'])
pi_t_list_q6[i] = pi_eq_func(par['alpha'], par['gamma'], par['h'], y_t_list_q6[i-1], phi_usa, pi_t_list_q6[i-1], s_t_list_q6[i-1], v_t_list_q6[i], par['b'], s_t_list_q6[i])
y_tm1_list_q6[i] = y_t_list_q6[i-1]
pi_tm1_list_q6[i] = pi_t_list_q6[i-1]
# 10) Define the variance, the correlation and the auto-correlation.
variance_y_q6 = np.var(y_t_list_q6)
variance_pi_q6 = np.var(pi_t_list_q6)
corr_y_pi_q6_old = np.corrcoef(y_t_list_q6, pi_t_list_q6)
corr_y_pi_q6 = corr_y_pi_q6_old[1,0]
auto_corr_y_q6_old = np.corrcoef(y_t_list_q6, y_tm1_list_q6)
auto_corr_y_q6 = auto_corr_y_q6_old[1,0]
auto_corr_pi_q6_old = np.corrcoef(pi_t_list_q6, pi_tm1_list_q6)
auto_corr_pi_q6 = auto_corr_pi_q6_old[1,0]
# 11) Subtract the empirical values and square the differences.
variance_y_q6_new = (variance_y_q6 - var_y_given)**2
variance_pi_q6_new = (variance_pi_q6 - var_pi_given)**2
corr_y_pi_q6_new = (corr_y_pi_q6 - corr_given)**2
auto_corr_y_q6_new = (auto_corr_y_q6 - auto_y_given)**2
auto_corr_pi_q6_new = (auto_corr_pi_q6 - auto_pi_given)**2
# 12) Calculate the sum to be minimized.
sum_min = variance_y_q6_new + variance_pi_q6_new + corr_y_pi_q6_new + auto_corr_y_q6_new + auto_corr_pi_q6_new
return sum_min
```
We want to minimize the sum of squared differences between the empirical moments of the US economy and the moments of our model by choosing $\phi$, $\sigma_x$ and $\sigma_c$. Thus, we want to calibrate our model to the real world, so to speak. Again, we use the optimize function from the `scipy` package.
```python
# 1) Take initial guess of the three parameters.
initial_guess_q6 = [0, 1, 1]
# 2) Define the bounds. phi must lie between 0 and 1, and the standard deviations must be positive.
constraint_q6 = ((0,1), (0.001,100**100), (0.001,100**100))
# 3) Optimize where we condition on the constraint.
result_q6 = optimize.minimize(bs_usa, initial_guess_q6, method='L-BFGS-B', bounds=constraint_q6)
# 4) Store the estimated parameters.
phi_cal = result_q6.x[0]
sigma_x_cal = result_q6.x[1]
sigma_c_cal = result_q6.x[2]
# 5) Print the calibrated parameters.
print('The calibrated value of phi is given by phi = ' + str(round(phi_cal, 3)))
print('The calibrated value of sigma_x is given by sigma_x = ' + str(round(sigma_x_cal, 3)))
print('The calibrated value of sigma_c is given by sigma_c = ' + str(round(sigma_c_cal, 3)))
```
Finally, we can compare the moments in our sample after the calibration with the true moments in the US economy. We define a function where we insert the calibrated parameters in order to estimate the moments in our sample.
```python
# 1) Define function.
def compare(phi_comp, sigma_x_comp, sigma_c_comp):
""" The procedure is almost identical to the procedure in the function "bs_usa". In this function, one can plug in
different values of phi, sigma_x and sigma_c. The moments (variance, correlation, and auto-correlation) are returned
as a function of the choice of these three parameters.
"""
# 2) Define length of period.
total_comp = 1000
# 3) Define empty lists.
t_list_comp = np.empty(total_comp)
x_t_list_comp = np.empty(total_comp)
c_t_list_comp = np.empty(total_comp)
v_t_list_comp = np.empty(total_comp)
s_t_list_comp = np.empty(total_comp)
y_t_list_comp = np.empty(total_comp)
pi_t_list_comp = np.empty(total_comp)
y_tm1_list_comp = np.empty(total_comp)
pi_tm1_list_comp = np.empty(total_comp)
# 4) Define starting parameters.
start = {}
start['y_n'] = 0
start['pi_n'] = 0
start['v_n'] = 0
start['s_n'] = 0
# 5) Define the length of the time vector.
for i in range (0, total_comp):
t_list_comp[i] = i
# 6) Set a seed number and draw random numbers from the normal distribution with mean 0 and standard deviation sigma.
np.random.seed(117)
c_t_list_comp_old = np.random.normal(loc=0, scale=sigma_c_comp, size=(total_comp, 1))
c_t_list_comp = c_t_list_comp_old[:,0]
x_t_list_comp_old = np.random.normal(loc=0, scale=sigma_x_comp, size=(total_comp, 1))
x_t_list_comp = x_t_list_comp_old[:,0]
# 7) Set the values in the first period.
if t_list_comp[i] == 0:
v_t_list_comp[i] = par['delta'] * start['v_n'] + x_t_list_comp[i]
s_t_list_comp[i] = par['omega'] * start['s_n'] + c_t_list_comp[i]
y_t_list_comp[i] = y_eq_func(par['alpha'], par['gamma'], par['h'], phi_comp, start['y_n'], start['s_n'], start['pi_n'], s_t_list_comp[i], v_t_list_comp[i], par['b'])
pi_t_list_comp[i] = pi_eq_func(par['alpha'], par['gamma'], par['h'], start['y_n'], phi_comp, start['pi_n'], start['s_n'], v_t_list_comp[i], par['b'], s_t_list_comp[i])
y_tm1_list_comp[i] = start['y_n']
pi_tm1_list_comp[i] = start['pi_n']
# 8) Set the values in the following periods.
else:
v_t_list_comp[i] = par['delta'] * v_t_list_comp[i-1] + x_t_list_comp[i]
s_t_list_comp[i] = par['omega'] * s_t_list_comp[i-1] + c_t_list_comp[i]
y_t_list_comp[i] = y_eq_func(par['alpha'], par['gamma'], par['h'], phi_comp, y_t_list_comp[i-1], s_t_list_comp[i-1], pi_t_list_comp[i-1], s_t_list_comp[i], v_t_list_comp[i], par['b'])
pi_t_list_comp[i] = pi_eq_func(par['alpha'], par['gamma'], par['h'], y_t_list_comp[i-1], phi_comp, pi_t_list_comp[i-1], s_t_list_comp[i-1], v_t_list_comp[i], par['b'], s_t_list_comp[i])
y_tm1_list_comp[i] = y_t_list_comp[i-1]
pi_tm1_list_comp[i] = pi_t_list_comp[i-1]
variance_y_comp = np.var(y_t_list_comp)
variance_pi_comp = np.var(pi_t_list_comp)
corr_y_pi_comp_old = np.corrcoef(y_t_list_comp, pi_t_list_comp)
corr_y_pi_comp = corr_y_pi_comp_old[1,0]
auto_corr_y_comp_old = np.corrcoef(y_t_list_comp, y_tm1_list_comp)
auto_corr_y_comp = auto_corr_y_comp_old[1,0]
auto_corr_pi_comp_old = np.corrcoef(pi_t_list_comp, pi_tm1_list_comp)
auto_corr_pi_comp = auto_corr_pi_comp_old[1,0]
return variance_y_comp, variance_pi_comp, corr_y_pi_comp, auto_corr_y_comp, auto_corr_pi_comp
```
We are now able to compare the moments graphically. First, we combine the moments in a list, and afterwards we plot the graph.
We note that some of the moments are alike, while other moments from the calibrated model differ from the true moments.
```python
# 1) Combine the calibrated parameters in a list.
model_cal = compare(phi_cal , sigma_x_cal, sigma_c_cal)
# 2) Combine the true moments in a list.
given_list = [var_y_given, var_pi_given, corr_given, auto_y_given, auto_pi_given]
# 3) Plot the graph
plt.figure(figsize=(12,8))
x = np.arange(len(model_cal))
bar_width = 0.30
plt.bar(x, model_cal, width=bar_width, color='blue', zorder=2)
plt.bar(x + bar_width, given_list, width=bar_width, color='red', zorder=2)
plt.xticks(x+bar_width/2, ['var(y_t)', 'var(pi_t)', 'c(y_t, pi_t)', 'c(y_t, y_t-1)', 'c(pi_t, pi_t-1)'])
plt.ylabel('Value')
plt.legend(['Moments after calibration', 'True moments'])
plt.show()
```
# 3. Exchange economy
Consider an **exchange economy** with
1. 3 goods, $(x_1,x_2,x_3)$
2. $N$ consumers indexed by \\( j \in \{1,2,\dots,N\} \\)
3. Preferences are Cobb-Douglas with log-normally distributed coefficients
$$ \begin{eqnarray*}
u^{j}(x_{1},x_{2},x_{3}) &=&
\left(x_{1}^{\beta_{1}^{j}}x_{2}^{\beta_{2}^{j}}x_{3}^{\beta_{3}^{j}}\right)^{\gamma}\\
& & \,\,\,\beta_{i}^{j}=\frac{\alpha_{i}^{j}}{\alpha_{1}^{j}+\alpha_{2}^{j}+\alpha_{3}^{j}} \\
& & \,\,\,\boldsymbol{\alpha}^{j}=(\alpha_{1}^{j},\alpha_{2}^{j},\alpha_{3}^{j}) \\
& & \,\,\,\log(\boldsymbol{\alpha}^j) \sim \mathcal{N}(\mu,\Sigma) \\
\end{eqnarray*} $$
4. Endowments are exponentially distributed,
$$
\begin{eqnarray*}
\boldsymbol{e}^{j} &=& (e_{1}^{j},e_{2}^{j},e_{3}^{j}) \\
& & e_i^j \sim f, f(z;\zeta) = 1/\zeta \exp(-z/\zeta)
\end{eqnarray*}
$$
Let $p_3 = 1$ be the **numeraire**. The implied **demand functions** are:
$$
\begin{eqnarray*}
x_{i}^{\star j}(p_{1},p_{2},\boldsymbol{e}^{j})&=&\beta^{j}_i\frac{I^j}{p_{i}} \\
\end{eqnarray*}
$$
where consumer $j$'s income is
$$I^j = p_1 e_1^j + p_2 e_2^j +p_3 e_3^j$$
The **parameters** and **random preferences and endowments** are given by:
```python
# a. parameters
N = 50000
mu = np.array([3,2,1])
Sigma = np.array([[0.25, 0, 0], [0, 0.25, 0], [0, 0, 0.25]])
gamma = 0.8
zeta = 1
# b. random draws
seed = 1986
np.random.seed(seed)
# preferences
alphas = np.exp(np.random.multivariate_normal(mu, Sigma, size=N))
betas = alphas/np.reshape(np.sum(alphas,axis=1),(N,1))
# endowments
e1 = np.random.exponential(zeta,size=N)
e2 = np.random.exponential(zeta,size=N)
e3 = np.random.exponential(zeta,size=N)
```
**Question 1:** Plot the histograms of the budget shares for each good across agents.
Consider the **excess demand functions:**
$$ z_i(p_1,p_2) = \sum_{j=1}^N x_{i}^{\star j}(p_{1},p_{2},\boldsymbol{e}^{j}) - e_i^j$$
**Question 2:** Plot the excess demand functions.
**Question 3:** Find the Walras-equilibrium prices, $(p_1,p_2)$, where both excess demands are (approximately) zero, e.g. by using the following tâtonnement process:
1. Guess on $p_1 > 0$, $p_2 > 0$ and choose tolerance $\epsilon > 0$ and adjustment aggressivity parameter, $\kappa > 0$.
2. Calculate $z_1(p_1,p_2)$ and $z_2(p_1,p_2)$.
3. If $|z_1| < \epsilon$ and $|z_2| < \epsilon$ then stop.
4. Else set $p_1 = p_1 + \kappa \frac{z_1}{N}$ and $p_2 = p_2 + \kappa \frac{z_2}{N}$ and return to step 2.
**Question 4:** Plot the distribution of utility in the Walras-equilibrium and calculate its mean and variance.
**Question 5:** Find the Walras-equilibrium prices if instead all endowments were distributed equally. Discuss the implied changes in the distribution of utility. Does the value of $\gamma$ play a role for your conclusions?
## Question 3.1
The budget share for good $i$ of consumer $j$ is defined as the expenditure on good $i$ relative to total income
$$\text{budget share}_{i}^{j} = \frac{p_{i} x_{i}^{\star j}(p_{1},p_{2},\boldsymbol{e}^{j})}{I^{j}} = \frac{p_{i} \beta^{j}_{i}\frac{I^{j}}{p_{i}}}{I^{j}} = \beta^{j}_{i}$$
The budget shares are therefore equal to $\beta^{j}_{i}$
```python
# 1) Define the budget shares
budget_share_1 = betas[:,0]
budget_share_2 = betas[:,1]
budget_share_3 = betas[:,2]
# 2) Plot the budget shares
plt.figure(figsize=(12,8))
plt.hist(budget_share_1, bins=100, color="r", alpha=0.6)
plt.hist(budget_share_2, bins=100, color="b", alpha=0.6)
plt.hist(budget_share_3, bins=100, color="g", alpha=0.6)
# 3) Add labels and legends and show the graph
plt.xlabel('Budget share')
plt.ylabel('Number of agents')
plt.legend(['Budget share good 1','Budget share good 2','Budget share good 3'])
axes = plt.gca()
axes.get_yaxis().set_major_formatter(
matplotlib.ticker.FuncFormatter(lambda x, p: format(int(x), ',')))
plt.show()
```
The histogram above shows that most of the agents will demand the largest budget share for good 1 and demand the lowest budget share for good 3.
## Question 3.2
We will start out by defining a function which returns the excess demand.
```python
# 1) Define the excess demand
def excess_demand(p1, p2, e1=e1, e2=e2, e3=e3, betas=betas):
# 1.1) Find the income
I = p1 * e1 + p2 * e2 + 1 * e3
# 1.2) Find the demand from the demand function
x1 = betas[:,0] * I / p1
x2 = betas[:,1] * I / p2
# 1.3) Find the excess demand
z1 = sum(x1) - sum(e1)
z2 = sum(x2) - sum(e2)
return z1, z2
```
Now we will plot the excess demand for goods 1 and 2 in 3D plots, with the excess demand on the z-axis and $p_1$ and $p_2$ on the x- and y-axes, respectively.
```python
# 1) Make a range over the prices
p1 = np.arange(1, 10, 0.4)
p2 = np.arange(1, 5, 0.4)
# 2) Make a meshgrid over x and y
X, Y = np.meshgrid(p1, p2)
# 3) Make a 3d plot of the excess demand for good 1 as a function of p1 and p2.
# 3.1) Create a figure
fig = plt.figure()
ax1 = fig.add_subplot(1,1,1, projection='3d')
fig.set_size_inches(12, 8)
# 3.2) Find the excess demand for good 1 and find the negative excess demand
z1 = np.array([excess_demand(x,y)[0] for x,y in zip(np.ravel(X), np.ravel(Y))])
Z1 = z1.reshape(X.shape)
Z1neg = Z1.copy()
Z1neg[Z1 > 0] = np.nan
# 3.3) Plot the excess demand for good 1 as a function of p1 and p2
ax1.plot_surface(X, Y, Z1, color='b')
ax1.plot_surface(X, Y, Z1neg, color='r')
ax1.invert_xaxis()
# 3.4) Add labels
ax1.set_title('Figur 3.2.1: Excess demand good 1')
ax1.set_xlabel('$P_1$')
ax1.set_ylabel('$P_2$')
ax1.set_zlabel('Excess demand')
# 4) Make a 3d plot of the excess demand for good 2 as a function of p1 and p2.
# 4.1) Create a figure
fig = plt.figure()
ax2 = fig.add_subplot(1,1,1, projection='3d')
fig.set_size_inches(12, 8)
# 4.2) Find the excess demand for good 2 and find the negative excess demand
z2 = np.array([excess_demand(x,y)[1] for x,y in zip(np.ravel(X), np.ravel(Y))])
Z2 = z2.reshape(X.shape)
Z2neg = Z2.copy()
Z2neg[Z2 > 0] = np.nan
# 4.3) Plot the excess demand for good 2 as a function of p1 and p2
ax2.plot_surface(X, Y, Z2, color='b')
ax2.plot_surface(X, Y, Z2neg, color='r')
ax2.invert_xaxis()
# 4.4) Add labels
ax2.set_title('Figur 3.2.2: Excess demand good 2')
ax2.set_xlabel('$P_1$')
ax2.set_ylabel('$P_2$')
ax2.set_zlabel('Excess demand')
plt.show()
print(f'Note: {Fore.BLUE}Blue{Style.RESET_ALL} areas represent positive excess demand and {Fore.RED}red{Style.RESET_ALL} areas represent negative excess demand')
```
Graph 3.2.1 shows that the excess demand for good 1 depends positively on $p_2$ and negatively on $p_1$. Graph 3.2.2 shows that the excess demand for good 2 depends positively on $p_1$ and negatively on $p_2$.
## Question 3.3
Below we have defined a function which will find the Walras equilibrium using the tâtonnement process.
```python
# 1) Define the tâtonnement function.
def tatonnement(p1, p2, e1=e1, e2=e2, e3=e3, betas=betas, eps=0.1, kappa=1, N=N, max_iter=500):
# 2) Define arrays which will contain the time and all the guesses for the value of p1 and p2.
global P1, P2, T, p1_star, p2_star
P1 = []
P2 = []
T = []
# 3) Set the timer to zero. The tâtonnement process will stop if the number of iterations exceeds the max_iter value
t=0
# 4) Create the loop in which the tâtonnement process will run
while t<max_iter:
# 5) Calculate the excess demand
z1 = excess_demand(p1, p2, e1, e2, e3, betas)[0]
z2 = excess_demand(p1, p2, e1, e2, e3, betas)[1]
# 6) Stop if the absolute values of the excess demands are lower than the tolerance eps
if np.absolute(z1) < eps and np.absolute(z2) < eps:
print(f'The Walras equilibrium is p1 = {p1:.3f} and p2 = {p2:.3f}. Stopped after {t:.0f} iterations. ')
print(f'The excess demand for good 1 is {z1:.3f} and the excess demand for good 2 is {z2:.3f}')
p1_star = p1
p2_star = p2
return
# 7) Otherwise, update p1 and p2 in the direction of the excess demands
else:
p1_star = p1
p2_star = p2
P1.append(p1)
P2.append(p2)
T.append(t)
p1 += kappa * z1 / N
p2 += kappa * z2 / N
# 8) Add 1 to the number of iterations
t += 1
# 9) Print the current value of p1 and p2 if the number of iterations exceeds the max_iter value
print(f'The Walras equilibrium is p1 = {p1:.3f} and p2 = {p2:.3f}. Stopped after {t:.0f} iterations. ')
print(f'The excess demand for good 1 is {z1:.3f} and the excess demand for good 2 is {z2:.3f}')
return
```
```python
tatonnement(4, 2, e1=e1, e2=e2, e3=e3, betas=betas, eps=0.1, kappa=1, N=N, max_iter=800)
```
We will now plot the p1's and the p2's as a function of the number of iterations.
```python
# 1) Plot the estimate for p1 and p2 as a function of the number of iterations
plt.figure(figsize=(12,8))
plt.plot(T, P1, color='b')
plt.plot(T, P2, color='r')
plt.ylabel('Price')
plt.xlabel('Number of iterations')
plt.legend(['$P_1$','$P_2$'])
plt.show()
```
The graph above shows that the estimates of $p_1$ and $p_2$ are fairly stable after about 300 iterations. This indicates that the true values of $p_1$ and $p_2$ are very close to 6.5 and 2.6, respectively.
## Question 3.4
First, we define a function which returns the utility level given the prices, the endowment and the preferences.
```python
def utility(p1, p2, e1, e2, e3, betas, gamma=gamma):
# 1) Find the income
I = p1 * e1 + p2 * e2 + 1 * e3
# 2) Find the demand from the demand function
x1 = betas[:,0] * I / p1
x2 = betas[:,1] * I / p2
x3 = betas[:,2] * I / 1
# 3) Find the utility
u = (x1**betas[:,0] * x2**betas[:,1] * x3**betas[:,2])**gamma
return u
```
We now use this function to plot the distribution of the utilities and calculate the mean and the variance.
```python
# 1) Find the utilities for all the agents
u = utility(p1_star, p2_star, e1, e2, e3, betas, gamma)
# 2) Plot the utilities in a histogram
plt.figure(figsize=(12,8))
plt.hist(u, bins=100, color='b')
plt.xlabel('Utility')
axes = plt.gca()
axes.get_yaxis().set_major_formatter(
matplotlib.ticker.FuncFormatter(lambda x, p: format(int(x), ',')))
plt.show()
# 3) Calculate the mean and variance of the utilities
mean = np.mean(u)
var = np.var(u)
print(f' The mean of the utilities is {mean:.3f} and variance is {var:.3f}')
```
The distribution of the utilities is quite right-skewed.
## Question 3.5
If all the agents have the same endowment, and the total size of the endowments is unchanged, then each endowment must be equal to the mean of the current endowments. We will therefore calculate the mean of the current endowments and make three arrays with N observations equal to the mean.
```python
# 1) Calculate the mean of the current endowments.
e1_mean = np.mean(e1)
e2_mean = np.mean(e2)
e3_mean = np.mean(e3)
# 2) Make three arrays with N number of observations equal to the mean of the previous endowments.
e1_new = np.asarray([e1_mean] * N)
e2_new = np.asarray([e2_mean] * N)
e3_new = np.asarray([e3_mean] * N)
```
Next, we check whether the prices change with the new endowments, using the tâtonnement function defined in problem 3.3.
```python
tatonnement(4, 2, e1_new, e2_new, e3_new, betas=betas, eps=0.1, kappa=1, N=N, max_iter=800)
```
We see that the prices have not changed much. We define a function that plots the distribution of the utilities as a function of $\gamma$ based on the utility function defined in problem 3.4. Finally, we apply an interactive slider to change the value of $\gamma$.
```python
def graph(gamma_par):
# 1) Calculate the utilities for the agents
u1 = utility(p1_star, p2_star, e1_new, e2_new, e3_new, betas, gamma)
u2 = utility(p1_star, p2_star, e1_new, e2_new, e3_new, betas, gamma_par)
# 2) Plot the utilities for gamma=0.8 and for gamma=gamma_par
plt.figure(figsize=(12,8))
bins = np.linspace(0.98, 2, 100)
plt.hist(u1, bins=bins, color='r', alpha=0.5)
plt.hist(u2, bins=bins, color='b', alpha=0.5)
plt.xlabel('Utility')
plt.legend(['0.8', gamma_par],title="Value of $\gamma$")
axes = plt.gca()
axes.get_yaxis().set_major_formatter(
matplotlib.ticker.FuncFormatter(lambda x, p: format(int(x), ',')))
# 3) Calculate the mean and the variance for gamma=0.8 and for gamma=gamma_par
mean1 = np.mean(u1)
mean2 = np.mean(u2)
var1 = np.var(u1)
var2 = np.var(u2)
# 4) print and plot
print(f' For gamma = {gamma_par:.2f}, the mean is {mean2:.3f} with a variance of {var2:.3f}')
print(f' For gamma = {gamma:.2f}, the mean is {mean1:.3f} with a variance of {var1:.3f}')
plt.show()
# 5) Apply an interactive widget
widgets.interact(graph,
gamma_par=(0.1,2,0.05));
```
First of all, we see that the average utility is slightly higher and the variance is much lower for an even distribution of endowments compared to the uneven distribution in problem 3.4. The graph above shows that as $\gamma$ increases, the average utility increases as well. However, the variance also increases when $\gamma$ is increased. This shows that a higher $\gamma$ implies that the agents get higher utility, but the inequality between the agents increases.
# Problem 1: Basics of Neural Networks
* <b>Learning Objective:</b> In the entrance exam, we asked you to implement a K-NN classifier to classify some tiny images extracted from the CIFAR-10 dataset. Probably many of you noticed that the performance was quite poor. In this problem, you are going to implement a basic multi-layer fully connected neural network to perform the same classification task.
* <b>Provided Code:</b> We provide the skeletons of classes you need to complete. Forward checking and gradient checkings are provided for verifying your implementation as well.
* <b>TODOs:</b> You are asked to implement the forward passes and backward passes for standard layers and loss functions, various widely-used optimizers, and part of the training procedure. And finally we want you to train a network from scratch on your own.
```python
from lib.fully_conn import *
from lib.layer_utils import *
from lib.grad_check import *
from lib.datasets import *
from lib.optim import *
from lib.train import *
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
```
## Loading the data (CIFAR-10)
Run the following code block to load in the properly splitted CIFAR-10 data.
```python
data = CIFAR10_data()
for k, v in data.iteritems():
print "Name: {} Shape: {}".format(k, v.shape)
```
Name: data_train Shape: (49000, 3, 32, 32)
Name: data_val Shape: (1000, 3, 32, 32)
Name: data_test Shape: (1000, 3, 32, 32)
Name: labels_train Shape: (49000,)
Name: labels_val Shape: (1000,)
Name: labels_test Shape: (1000,)
## Implement Standard Layers
You will now implement all the following standard layers commonly seen in a fully connected neural network. Please refer to the file layer_utils.py under the directory lib. Take a look at each class skeleton, and we will walk you through the network layer by layer. We provide results of some examples we pre-computed for you for checking the forward pass, and also the gradient checking for the backward pass.
## FC Forward
In the class skeleton "fc", please complete the forward pass in function "forward", the input to the fc layer may not be of dimension (batch size, features size), it could be an image or any higher dimensional data. Make sure that you handle this dimensionality issue.
```python
# Test the fc forward function
input_bz = 3
input_dim = (6, 5, 4)
output_dim = 4
input_size = input_bz * np.prod(input_dim)
weight_size = output_dim * np.prod(input_dim)
single_fc = fc(np.prod(input_dim), output_dim, init_scale=0.02, name="fc_test")
x = np.linspace(-0.1, 0.5, num=input_size).reshape(input_bz, *input_dim)
w = np.linspace(-0.2, 0.3, num=weight_size).reshape(np.prod(input_dim), output_dim)
b = np.linspace(-0.3, 0.1, num=output_dim)
single_fc.params[single_fc.w_name] = w
single_fc.params[single_fc.b_name] = b
out = single_fc.forward(x)
correct_out = np.array([[0.70157129, 0.83483484, 0.96809839, 1.10136194],
[1.86723094, 2.02561647, 2.18400199, 2.34238752],
[3.0328906, 3.2163981, 3.3999056, 3.5834131]])
# Compare your output with the above pre-computed ones.
# The difference should not be larger than 1e-8
print "Difference: ", rel_error(out, correct_out)
```
Difference: 2.48539291792e-09
## FC Backward
Please complete the function "backward" as the backward pass of the fc layer. Follow the instructions in the comments to store gradients into the predefined dictionaries in the attributes of the class. Parameters of the layer are also stored in the predefined dictionary.
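A sketch of the corresponding gradients (again standalone and illustrative; in the class, `dw` and `db` would be stored in the layer's grads dictionary under the weight and bias names, and `x` would come from whatever was cached in the forward pass):
```python
import numpy as np

def fc_backward(dout, x, w):
    # dout: upstream gradient of shape (N, M)
    x_flat = x.reshape(x.shape[0], -1)    # same flattening as in the forward pass
    dw = x_flat.T.dot(dout)               # (D, M)
    db = np.sum(dout, axis=0)             # (M,)
    dx = dout.dot(w.T).reshape(x.shape)   # restore the input's original shape
    return dx, dw, db
```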
```python
# Test the fc backward function
x = np.random.randn(10, 2, 2, 3)
w = np.random.randn(12, 10)
b = np.random.randn(10)
dout = np.random.randn(10, 10)
single_fc = fc(np.prod(x.shape[1:]), 10, init_scale=5e-2, name="fc_test")
single_fc.params[single_fc.w_name] = w
single_fc.params[single_fc.b_name] = b
dx_num = eval_numerical_gradient_array(lambda x: single_fc.forward(x), x, dout)
dw_num = eval_numerical_gradient_array(lambda w: single_fc.forward(x), w, dout)
db_num = eval_numerical_gradient_array(lambda b: single_fc.forward(x), b, dout)
out = single_fc.forward(x)
dx = single_fc.backward(dout)
dw = single_fc.grads[single_fc.w_name]
db = single_fc.grads[single_fc.b_name]
# The error should be around 1e-10
print "dx Error: ", rel_error(dx_num, dx)
print "dw Error: ", rel_error(dw_num, dw)
print "db Error: ", rel_error(db_num, db)
```
dx Error: 8.52174876315e-10
dw Error: 1.22843425191e-09
db Error: 8.82086649315e-11
## ReLU Forward
In the class skeleton "relu", please complete the forward pass.
```python
# Test the relu forward function
x = np.linspace(-1.0, 1.0, num=12).reshape(3, 4)
relu_f = relu(name="relu_f")
out = relu_f.forward(x)
correct_out = np.array([[0., 0., 0., 0. ],
[0., 0., 0.09090909, 0.27272727],
[0.45454545, 0.63636364, 0.81818182, 1. ]])
# Compare your output with the above pre-computed ones.
# The difference should not be larger than 1e-8
print "Difference: ", rel_error(out, correct_out)
```
Difference: 5.00000005012e-09
## ReLU Backward
Please complete the backward pass of the class relu.
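A sketch of the backward rule: the upstream gradient passes through only where the forward input was positive (here `x` stands for whatever the forward pass cached):
```python
import numpy as np

def relu_backward(dout, x):
    # gradient is dout where x > 0, and 0 elsewhere
    return dout * (x > 0)
```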
```python
# Test the relu backward function
x = np.random.randn(10, 10)
dout = np.random.randn(*x.shape)
relu_b = relu(name="relu_b")
dx_num = eval_numerical_gradient_array(lambda x: relu_b.forward(x), x, dout)
out = relu_b.forward(x)
dx = relu_b.backward(dout)
# The error should not be larger than 1e-10
print "dx Error: ", rel_error(dx_num, dx)
```
dx Error: 3.27561972634e-12
## Dropout Forward
In the class "dropout", please complete the forward pass. Remember that the dropout is only applied during training phase, you should pay attention to this while implementing the function.
```python
x = np.random.randn(100, 100) + 5.0
print "----------------------------------------------------------------"
for p in [0.25, 0.50, 0.75]:
dropout_f = dropout(p)
out = dropout_f.forward(x, True)
out_test = dropout_f.forward(x, False)
print "Dropout p = ", p
print "Mean of input: ", x.mean()
print "Mean of output during training time: ", out.mean()
print "Mean of output during testing time: ", out_test.mean()
print "Fraction of output set to zero during training time: ", (out == 0).mean()
print "Fraction of output set to zero during testing time: ", (out_test == 0).mean()
print "----------------------------------------------------------------"
```
----------------------------------------------------------------
Dropout p = 0.25
Mean of input: 5.01578563647
Mean of output during training time: 4.99372767041
Mean of output during testing time: 5.01578563647
Fraction of output set to zero during training time: 0.7504
Fraction of output set to zero during testing time: 0.0
----------------------------------------------------------------
Dropout p = 0.5
Mean of input: 5.01578563647
Mean of output during training time: 5.03282467518
Mean of output during testing time: 5.01578563647
Fraction of output set to zero during training time: 0.5004
Fraction of output set to zero during testing time: 0.0
----------------------------------------------------------------
Dropout p = 0.75
Mean of input: 5.01578563647
Mean of output during training time: 4.96014951445
Mean of output during testing time: 5.01578563647
Fraction of output set to zero during training time: 0.2584
Fraction of output set to zero during testing time: 0.0
----------------------------------------------------------------
## Dropout Backward
Please complete the backward pass. Again, remember that dropout is only applied during the training phase; handle this in the backward pass as well.
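A matching sketch for the backward pass (the mask is whatever the forward pass cached; at test time dropout was the identity, so the gradient passes through unchanged):
```python
def dropout_backward(dout, mask):
    if mask is None:       # forward pass was run in test mode
        return dout
    return dout * mask     # training mode: reuse the cached (already rescaled) mask
```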
```python
x = np.random.randn(5, 5) + 5
dout = np.random.randn(*x.shape)
p = 0.75
dropout_b = dropout(p, seed=100)
out = dropout_b.forward(x, True)
dx = dropout_b.backward(dout)
dx_num = eval_numerical_gradient_array(lambda xx: dropout_b.forward(xx, True), x, dout)
# The error should not be larger than 1e-9
print 'dx relative error: ', rel_error(dx, dx_num)
```
dx relative error: 3.00311469865e-11
## Testing cascaded layers: FC + ReLU
Please find the TestFCReLU function in fully_conn.py under lib directory. <br />
You only need to complete a few lines of code in the TODO block. <br />
Please design an FC --> ReLU two-layer mini-network whose parameters match the given x, w, and b. <br />
Please insert the corresponding names you defined for each layer into param_name_w and param_name_b, respectively. <br />
Here you only modify the param_name part, the _w, and _b are automatically assigned during network setup
```python
x = np.random.randn(2, 3, 4) # the input features
w = np.random.randn(12, 10) # the weight of fc layer
b = np.random.randn(10) # the bias of fc layer
dout = np.random.randn(2, 10) # the gradients to the output, notice the shape
tiny_net = TestFCReLU()
tiny_net.net.assign("fc_w", w)
tiny_net.net.assign("fc_b", b)
out = tiny_net.forward(x)
dx = tiny_net.backward(dout)
dw = tiny_net.net.get_grads("fc_w")
db = tiny_net.net.get_grads("fc_b")
dx_num = eval_numerical_gradient_array(lambda x: tiny_net.forward(x), x, dout)
dw_num = eval_numerical_gradient_array(lambda w: tiny_net.forward(x), w, dout)
db_num = eval_numerical_gradient_array(lambda b: tiny_net.forward(x), b, dout)
# The errors should not be larger than 1e-7
print "dx error: ", rel_error(dx_num, dx)
print "dw error: ", rel_error(dw_num, dw)
print "db error: ", rel_error(db_num, db)
```
dx error: 1.13598437673e-10
dw error: 1.14872404377e-10
db error: 3.27562531714e-12
## SoftMax Function and Loss Layer
In layer_utils.py, please first complete the function softmax, which will be used in the function cross_entropy. Please refer to the lecture slides for the mathematical expressions of the cross entropy loss function, and complete its forward pass and backward pass.
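As a reference for the math, here is a standalone sketch of a numerically stable softmax and the batch-averaged cross-entropy loss, together with its gradient with respect to the scores (function names are illustrative and differ from the actual API in layer_utils.py):
```python
import numpy as np

def softmax_probs(scores):
    # subtract the row-wise max for numerical stability before exponentiating
    shifted = scores - np.max(scores, axis=1, keepdims=True)
    exp_scores = np.exp(shifted)
    return exp_scores / np.sum(exp_scores, axis=1, keepdims=True)

def softmax_cross_entropy(scores, labels):
    probs = softmax_probs(scores)                               # (N, C)
    n = scores.shape[0]
    loss = -np.sum(np.log(probs[np.arange(n), labels])) / n     # mean negative log-likelihood
    dscores = probs.copy()
    dscores[np.arange(n), labels] -= 1.0                        # probs - one_hot(labels)
    dscores /= n                                                # gradient the backward pass should return
    return loss, dscores
```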
```python
num_classes, num_inputs = 5, 50
x = 0.001 * np.random.randn(num_inputs, num_classes)
y = np.random.randint(num_classes, size=num_inputs)
test_loss = cross_entropy()
dx_num = eval_numerical_gradient(lambda x: test_loss.forward(x, y), x, verbose=False)
loss = test_loss.forward(x, y)
dx = test_loss.backward()
# Test softmax_loss function. Loss should be around 1.609
# and dx error should be at the scale of 1e-8 (or smaller)
print "Cross Entropy Loss: ", loss
print "dx error: ", rel_error(dx_num, dx)
```
Cross Entropy Loss: 1.60945846468
dx error: 2.77559987163e-09
## Test a Small Fully Connected Network
Please find the SmallFullyConnectedNetwork function in fully_conn.py under lib directory. <br />
Again, you only need to complete a few lines of code in the TODO block. <br />
Please design an FC --> ReLU --> FC --> ReLU network where the shapes of the parameters match the given shapes. <br />
Please insert the corresponding names you defined for each layer into param_name_w and param_name_b, respectively. <br />
Here you only modify the param_name part, the _w, and _b are automatically assigned during network setup
```python
model = SmallFullyConnectedNetwork()
loss_func = cross_entropy()
N, D, = 4, 4 # N: batch size, D: input dimension
H, C = 30, 7 # H: hidden dimension, C: output dimension
std = 0.02
x = np.random.randn(N, D)
y = np.random.randint(C, size=N)
print "Testing initialization ... "
w1_std = abs(model.net.get_params("fc1_w").std() - std)
b1 = model.net.get_params("fc1_b").std()
w2_std = abs(model.net.get_params("fc2_w").std() - std)
b2 = model.net.get_params("fc2_b").std()
assert w1_std < std / 10, "First layer weights do not seem right"
assert np.all(b1 == 0), "First layer biases do not seem right"
assert w2_std < std / 10, "Second layer weights do not seem right"
assert np.all(b2 == 0), "Second layer biases do not seem right"
print "Passed!"
print "Testing test-time forward pass ... "
w1 = np.linspace(-0.7, 0.3, num=D*H).reshape(D, H)
w2 = np.linspace(-0.3, 0.4, num=H*C).reshape(H, C)
b1 = np.linspace(-0.1, 0.9, num=H)
b2 = np.linspace(-0.9, 0.1, num=C)
model.net.assign("fc1_w", w1)
model.net.assign("fc1_b", b1)
model.net.assign("fc2_w", w2)
model.net.assign("fc2_b", b2)
feats = np.linspace(-5.5, 4.5, num=N*D).reshape(D, N).T
scores = model.forward(feats)
correct_scores = np.asarray([[4.20670862, 4.87188359, 5.53705856, 6.20223352, 6.86740849, 7.53258346, 8.19775843],
[4.74826036, 5.35984681, 5.97143326, 6.58301972, 7.19460617, 7.80619262, 8.41777907],
[5.2898121, 5.84781003, 6.40580797, 6.96380591, 7.52180384, 8.07980178, 8.63779971],
[5.83136384, 6.33577326, 6.84018268, 7.3445921, 7.84900151, 8.35341093, 8.85782035]])
scores_diff = np.sum(np.abs(scores - correct_scores))
assert scores_diff < 1e-6, "Your implementation might have gone wrong!"
print "Passed!"
print "Testing the loss ...",
y = np.asarray([0, 5, 1, 4])
loss = loss_func.forward(scores, y)
dLoss = loss_func.backward()
correct_loss = 2.90181552716
assert abs(loss - correct_loss) < 1e-10, "Your implementation might have gone wrong!"
print "Passed!"
print "Testing the gradients (error should be no larger than 1e-7) ..."
din = model.backward(dLoss)
for layer in model.net.layers:
if not layer.params:
continue
for name in sorted(layer.grads):
f = lambda _: loss_func.forward(model.forward(feats), y)
grad_num = eval_numerical_gradient(f, layer.params[name], verbose=False)
print '%s relative error: %.2e' % (name, rel_error(grad_num, layer.grads[name]))
```
Testing initialization ...
Passed!
Testing test-time forward pass ...
Passed!
Testing the loss ... Passed!
Testing the gradients (error should be no larger than 1e-7) ...
fc1_b relative error: 2.85e-09
fc1_w relative error: 7.76e-09
fc2_b relative error: 6.71e-08
fc2_w relative error: 3.03e-09
## Test a Fully Connected Network regularized with Dropout
Please find the DropoutNet function in fully_conn.py under lib directory. <br />
For this part you don't need to design a new network, just simply run the following test code <br />
If something goes wrong, you might want to double check your dropout implementation
```python
N, D, C = 3, 15, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
seed = 123
for dropout_p in [0., 0.25, 0.5]:
print "Dropout p =", dropout_p
model = DropoutNet(dropout_p=dropout_p, seed=seed)
loss_func = cross_entropy()
output = model.forward(X, True)
loss = loss_func.forward(output, y)
dLoss = loss_func.backward()
dX = model.backward(dLoss)
grads = model.net.grads
print "Loss (should be ~2.30) : ", loss
print "Error of gradients should be no larger than 1e-5"
for name in sorted(model.net.params):
f = lambda _: loss_func.forward(model.forward(X, True), y)
grad_num = eval_numerical_gradient(f, model.net.params[name], verbose=False, h=1e-5)
print "{} relative error: {}".format(name, rel_error(grad_num, grads[name]))
print
```
Dropout p = 0.0
Loss (should be ~2.30) : 2.30285163514
Error of gradients should be no larger than 1e-5
fc1_b relative error: 3.86706745047e-08
fc1_w relative error: 1.68428083818e-06
fc2_b relative error: 3.81209866093e-09
fc2_w relative error: 2.26785962239e-06
fc3_b relative error: 1.52481732925e-10
fc3_w relative error: 1.16962221954e-07
Dropout p = 0.25
Loss (should be ~2.30) : 2.30423200279
Error of gradients should be no larger than 1e-5
fc1_b relative error: 8.88557372487e-08
fc1_w relative error: 9.65553469517e-06
fc2_b relative error: 1.99592245205e-07
fc2_w relative error: 1.46154121819e-06
fc3_b relative error: 1.11483204553e-10
fc3_w relative error: 1.87136172477e-08
Dropout p = 0.5
Loss (should be ~2.30) : 2.29994690876
Error of gradients should be no larger than 1e-5
fc1_b relative error: 1.00992963045e-07
fc1_w relative error: 5.24658134615e-06
fc2_b relative error: 1.67052460093e-08
fc2_w relative error: 1.41351178592e-05
fc3_b relative error: 6.01233850074e-11
fc3_w relative error: 2.63122521848e-07
## Training a Network
In this section, we defined a TinyNet class for you to fill in the TODO block in fully_conn.py.
* Here please design a two layer fully connected network for this part.
* Please read train.py under the lib directory carefully and complete the TODO blocks in the train_net function first; a stripped-down sketch of the expected loop structure is given after this list.
* In addition, read how the SGD function is implemented in optim.py, you will be asked to complete three other optimization methods in the later sections.
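To make the expected structure of train_net concrete, here is a minimal sketch of one possible training loop, built only from calls that appear elsewhere in this notebook (model.forward, model.backward, loss_func.forward/backward, optimizer.step). Accuracy tracking, learning-rate decay and the exact arguments and return values of the real train_net are left out and may differ:
```python
import numpy as np

def minimal_train_loop(data_dict, model, loss_func, optimizer, batch_size=100, max_epochs=10):
    X_train, y_train = data_dict["data_train"]
    num_train = X_train.shape[0]
    iters_per_epoch = max(num_train // batch_size, 1)
    loss_hist = []
    for epoch in range(max_epochs):
        for _ in range(iters_per_epoch):
            idx = np.random.choice(num_train, batch_size)   # sample a minibatch
            scores = model.forward(X_train[idx])            # forward pass
            loss = loss_func.forward(scores, y_train[idx])  # loss on the minibatch
            dscores = loss_func.backward()                  # gradient of the loss
            model.backward(dscores)                         # backprop, fills the grads
            optimizer.step()                                # parameter update
            loss_hist.append(loss)
    return loss_hist
```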
```python
# Arrange the data
data_dict = {
"data_train": (data["data_train"], data["labels_train"]),
"data_val": (data["data_val"], data["labels_val"]),
"data_test": (data["data_test"], data["labels_test"])
}
```
```python
model = TinyNet()
loss_f = cross_entropy()
optimizer = SGD(model.net, 1e-4)
```
### Now train the network to achieve at least 50% validation accuracy
```python
results = None
#############################################################################
# TODO: Use the train_net function you completed to train a network #
#############################################################################
#train_net(data,model,loss_func,optimizer,batch_size,max_epochs,lr_decay,lr_decay_every,show_every,verbose)
results = train_net(data_dict, model, loss_f, optimizer, 100, 200, 2.0, 1000, 10, False)
#############################################################################
# END OF YOUR CODE #
#############################################################################
opt_params, loss_hist, train_acc_hist, val_acc_hist = results
```
```python
# Take a look at what names of params were stored
print opt_params.keys()
```
['fc1_w', 'fc2_b', 'fc1_b', 'fc2_w']
```python
# Demo: How to load the parameters to a newly defined network
model = TinyNet()
model.net.load(opt_params)
val_acc = compute_acc(model, data["data_val"], data["labels_val"])
print "Validation Accuracy: {}%".format(val_acc*100)
test_acc = compute_acc(model, data["data_test"], data["labels_test"])
print "Testing Accuracy: {}%".format(test_acc*100)
```
Loading Params: fc1_w Shape: (3072, 100)
Loading Params: fc1_b Shape: (100,)
Loading Params: fc2_b Shape: (10,)
Loading Params: fc2_w Shape: (100, 10)
Validation Accuracy: 50.3%
Testing Accuracy: 47.6%
```python
# Plot the learning curves
plt.subplot(2, 1, 1)
plt.title('Training loss')
loss_hist_ = loss_hist[1::100] # sparse the curve a bit
plt.plot(loss_hist_, '-o')
plt.xlabel('Iteration')
plt.subplot(2, 1, 2)
plt.title('Accuracy')
plt.plot(train_acc_hist, '-o', label='Training')
plt.plot(val_acc_hist, '-o', label='Validation')
plt.plot([0.5] * len(val_acc_hist), 'k--')
plt.xlabel('Epoch')
plt.legend(loc='lower right')
plt.gcf().set_size_inches(15, 12)
plt.show()
```
## Different Optimizers
There are several more advanced optimizers than vanilla SGD; you will implement three more sophisticated and widely-used methods in this section. Please complete the TODOs in optim.py under the lib directory.
## SGD + Momentum
The update rule of SGD plus momentum is as shown below: <br />
\begin{equation}
v_t: velocity \\
\gamma: momentum \\
\eta: learning\ rate \\
v_t = \gamma v_{t-1} + \eta \nabla_{\theta}J(\theta) \\
\theta = \theta - v_t
\end{equation}
Complete the SGDM() function in optim.py
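A minimal sketch of a single SGD-with-momentum update for one parameter array, following the equations above (inside SGDM() the velocities would be kept per parameter name, as the test below with test_sgd_momentum.velocity suggests):
```python
def sgdm_update(param, grad, velocity, lr, momentum=0.9):
    velocity = momentum * velocity + lr * grad   # v_t = gamma * v_{t-1} + eta * grad
    param = param - velocity                     # theta = theta - v_t
    return param, velocity
```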
```python
# SGD with momentum
model = TinyNet()
loss_f = cross_entropy()
optimizer = SGD(model.net, 1e-4)
```
```python
# Test the implementation of SGD with Momentum
N, D = 4, 5
test_sgd = sequential(fc(N, D, name="sgd_fc"))
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
v = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
test_sgd.layers[0].params = {"sgd_fc_w": w}
test_sgd.layers[0].grads = {"sgd_fc_w": dw}
test_sgd_momentum = SGDM(test_sgd, 1e-3, 0.9)
test_sgd_momentum.velocity = {"sgd_fc_w": v}
test_sgd_momentum.step()
updated_w = test_sgd.layers[0].params["sgd_fc_w"]
velocity = test_sgd_momentum.velocity["sgd_fc_w"]
expected_updated_w = np.asarray([
[ 0.1406, 0.20738947, 0.27417895, 0.34096842, 0.40775789],
[ 0.47454737, 0.54133684, 0.60812632, 0.67491579, 0.74170526],
[ 0.80849474, 0.87528421, 0.94207368, 1.00886316, 1.07565263],
[ 1.14244211, 1.20923158, 1.27602105, 1.34281053, 1.4096 ]])
expected_velocity = np.asarray([
[ 0.5406, 0.55475789, 0.56891579, 0.58307368, 0.59723158],
[ 0.61138947, 0.62554737, 0.63970526, 0.65386316, 0.66802105],
[ 0.68217895, 0.69633684, 0.71049474, 0.72465263, 0.73881053],
[ 0.75296842, 0.76712632, 0.78128421, 0.79544211, 0.8096 ]])
print 'updated_w error: ', rel_error(updated_w, expected_updated_w)
print 'velocity error: ', rel_error(expected_velocity, velocity)
```
updated_w error: 8.88234703351e-09
velocity error: 4.26928774328e-09
Run the following code block to train a multi-layer fully connected network with both SGD and SGD plus Momentum. The network trained with SGDM optimizer should converge faster.
```python
# Arrange a small data
num_train = 4000
small_data_dict = {
"data_train": (data["data_train"][:num_train], data["labels_train"][:num_train]),
"data_val": (data["data_val"], data["labels_val"]),
"data_test": (data["data_test"], data["labels_test"])
}
model_sgd = FullyConnectedNetwork()
model_sgdm = FullyConnectedNetwork()
loss_f_sgd = cross_entropy()
loss_f_sgdm = cross_entropy()
optimizer_sgd = SGD(model_sgd.net, 1e-2)
optimizer_sgdm = SGDM(model_sgdm.net, 1e-2, 0.9)
print "Training with Vanilla SGD..."
results_sgd = train_net(small_data_dict, model_sgd, loss_f_sgd, optimizer_sgd, batch_size=100,
max_epochs=5, show_every=100, verbose=True)
print "\nTraining with SGD plus Momentum..."
results_sgdm = train_net(small_data_dict, model_sgdm, loss_f_sgdm, optimizer_sgdm, batch_size=100,
max_epochs=5, show_every=100, verbose=True)
opt_params_sgd, loss_hist_sgd, train_acc_hist_sgd, val_acc_hist_sgd = results_sgd
opt_params_sgdm, loss_hist_sgdm, train_acc_hist_sgdm, val_acc_hist_sgdm = results_sgdm
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 1)
plt.plot(loss_hist_sgd, 'o', label="Vanilla SGD")
plt.subplot(3, 1, 2)
plt.plot(train_acc_hist_sgd, '-o', label="Vanilla SGD")
plt.subplot(3, 1, 3)
plt.plot(val_acc_hist_sgd, '-o', label="Vanilla SGD")
plt.subplot(3, 1, 1)
plt.plot(loss_hist_sgdm, 'o', label="SGD with Momentum")
plt.subplot(3, 1, 2)
plt.plot(train_acc_hist_sgdm, '-o', label="SGD with Momentum")
plt.subplot(3, 1, 3)
plt.plot(val_acc_hist_sgdm, '-o', label="SGD with Momentum")
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
```
## RMSProp
The update rule of RMSProp is as shown below: <br />
\begin{equation}
\gamma: decay\ rate \\
\epsilon: small\ number \\
g_t^2: squared\ gradients \\
\eta: learning\ rate \\
E[g^2]_t: decaying\ average\ of\ past\ squared\ gradients\ at\ update\ step\ t \\
E[g^2]_t = \gamma E[g^2]_{t-1} + (1-\gamma)g_t^2 \\
\theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{E[g^2]_t+\epsilon}}g_t
\end{equation}
Complete the RMSProp() function in optim.py
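As a reference point (not the required implementation in optim.py), a single RMSProp step for one parameter array can be sketched in NumPy as follows; `cache` plays the role of $E[g^2]_t$, matching the attribute name used in the test cell below, and `eps` is assumed to be about 1e-8.
```python
import numpy as np

# Minimal sketch of one RMSProp step for a single parameter array.
def rmsprop_step(w, dw, cache, lr=1e-2, decay=0.99, eps=1e-8):
    cache = decay * cache + (1.0 - decay) * dw**2    # E[g^2]_t
    w = w - lr * dw / np.sqrt(cache + eps)           # theta_{t+1}
    return w, cache
```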
```python
# Test RMSProp implementation; you should see errors less than 1e-7
N, D = 4, 5
test_rms = sequential(fc(N, D, name="rms_fc"))
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
cache = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
test_rms.layers[0].params = {"rms_fc_w": w}
test_rms.layers[0].grads = {"rms_fc_w": dw}
opt_rms = RMSProp(test_rms, 1e-2, 0.99)
opt_rms.cache = {"rms_fc_w": cache}
opt_rms.step()
updated_w = test_rms.layers[0].params["rms_fc_w"]
cache = opt_rms.cache["rms_fc_w"]
expected_updated_w = np.asarray([
[-0.39223849, -0.34037513, -0.28849239, -0.23659121, -0.18467247],
[-0.132737, -0.08078555, -0.02881884, 0.02316247, 0.07515774],
[ 0.12716641, 0.17918792, 0.23122175, 0.28326742, 0.33532447],
[ 0.38739248, 0.43947102, 0.49155973, 0.54365823, 0.59576619]])
expected_cache = np.asarray([
[ 0.5976, 0.6126277, 0.6277108, 0.64284931, 0.65804321],
[ 0.67329252, 0.68859723, 0.70395734, 0.71937285, 0.73484377],
[ 0.75037008, 0.7659518, 0.78158892, 0.79728144, 0.81302936],
[ 0.82883269, 0.84469141, 0.86060554, 0.87657507, 0.8926 ]])
print 'updated_w error: ', rel_error(expected_updated_w, updated_w)
print 'cache error: ', rel_error(expected_cache, opt_rms.cache["rms_fc_w"])
```
updated_w error: 9.50264522989e-08
cache error: 1.06132471212e-08
## Adam
The update rule of Adam is as shown below: <br />
\begin{equation}
g_t: gradients\ at\ update\ step\ t \\
m_t = \beta_1m_{t-1} + (1-\beta_1)g_t \\
v_t = \beta_2v_{t-1} + (1-\beta_2)g_t^2 \\
\hat{m_t} = m_t/(1-\beta_1^t):\ bias\ corrected\ m_t \\
\hat{v_t} = v_t/(1-\beta_2^t):\ bias\ corrected\ v_t \\
\theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{\hat{v_t}}+\epsilon}\hat{m_t}
\end{equation}
Complete the Adam() function in optim.py
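Again as a reference sketch only: one Adam step for a single parameter array, following the equations above with the usual bias correction. Whether the step counter `t` is incremented before or after the moment updates is a convention you should check against the expected values in the test cell below.
```python
import numpy as np

# Minimal sketch of one Adam step for a single parameter array.
def adam_step(w, dw, m, v, t, lr=1e-2, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1.0 - beta1) * dw          # first moment estimate
    v = beta2 * v + (1.0 - beta2) * dw**2       # second moment estimate
    m_hat = m / (1.0 - beta1**t)                # bias-corrected first moment
    v_hat = v / (1.0 - beta2**t)                # bias-corrected second moment
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```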
```python
# Test Adam implementation; you should see errors around 1e-7 or less
N, D = 4, 5
test_adam = sequential(fc(N, D, name="adam_fc"))
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
m = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
v = np.linspace(0.7, 0.5, num=N*D).reshape(N, D)
test_adam.layers[0].params = {"adam_fc_w": w}
test_adam.layers[0].grads = {"adam_fc_w": dw}
opt_adam = Adam(test_adam, 1e-2, 0.9, 0.999, t=5)
opt_adam.mt = {"adam_fc_w": m}
opt_adam.vt = {"adam_fc_w": v}
opt_adam.step()
updated_w = test_adam.layers[0].params["adam_fc_w"]
mt = opt_adam.mt["adam_fc_w"]
vt = opt_adam.vt["adam_fc_w"]
expected_updated_w = np.asarray([
[-0.40094747, -0.34836187, -0.29577703, -0.24319299, -0.19060977],
[-0.1380274, -0.08544591, -0.03286534, 0.01971428, 0.0722929],
[ 0.1248705, 0.17744702, 0.23002243, 0.28259667, 0.33516969],
[ 0.38774145, 0.44031188, 0.49288093, 0.54544852, 0.59801459]])
expected_v = np.asarray([
[ 0.69966, 0.68908382, 0.67851319, 0.66794809, 0.65738853,],
[ 0.64683452, 0.63628604, 0.6257431, 0.61520571, 0.60467385,],
[ 0.59414753, 0.58362676, 0.57311152, 0.56260183, 0.55209767,],
[ 0.54159906, 0.53110598, 0.52061845, 0.51013645, 0.49966, ]])
expected_m = np.asarray([
[ 0.48, 0.49947368, 0.51894737, 0.53842105, 0.55789474],
[ 0.57736842, 0.59684211, 0.61631579, 0.63578947, 0.65526316],
[ 0.67473684, 0.69421053, 0.71368421, 0.73315789, 0.75263158],
[ 0.77210526, 0.79157895, 0.81105263, 0.83052632, 0.85 ]])
print 'updated_w error: ', rel_error(expected_updated_w, updated_w)
print 'mt error: ', rel_error(expected_m, mt)
print 'vt error: ', rel_error(expected_v, vt)
```
updated_w error: 1.13956917985e-07
mt error: 4.21496319311e-09
vt error: 4.20831403811e-09
## Comparing the optimizers
Run the following code block to compare the plotted results among all the above optimizers
```python
model_rms = FullyConnectedNetwork()
model_adam = FullyConnectedNetwork()
loss_f_rms = cross_entropy()
loss_f_adam = cross_entropy()
optimizer_rms = RMSProp(model_rms.net, 5e-4)
optimizer_adam = Adam(model_adam.net, 5e-4)
print "Training with RMSProp..."
results_rms = train_net(small_data_dict, model_rms, loss_f_rms, optimizer_rms, batch_size=100,
max_epochs=5, show_every=100, verbose=True)
print "\nTraining with Adam..."
results_adam = train_net(small_data_dict, model_adam, loss_f_adam, optimizer_adam, batch_size=100,
max_epochs=5, show_every=100, verbose=True)
opt_params_rms, loss_hist_rms, train_acc_hist_rms, val_acc_hist_rms = results_rms
opt_params_adam, loss_hist_adam, train_acc_hist_adam, val_acc_hist_adam = results_adam
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 1)
plt.plot(loss_hist_sgd, 'o', label="Vanilla SGD")
plt.subplot(3, 1, 2)
plt.plot(train_acc_hist_sgd, '-o', label="Vanilla SGD")
plt.subplot(3, 1, 3)
plt.plot(val_acc_hist_sgd, '-o', label="Vanilla SGD")
plt.subplot(3, 1, 1)
plt.plot(loss_hist_sgdm, 'o', label="SGD with Momentum")
plt.subplot(3, 1, 2)
plt.plot(train_acc_hist_sgdm, '-o', label="SGD with Momentum")
plt.subplot(3, 1, 3)
plt.plot(val_acc_hist_sgdm, '-o', label="SGD with Momentum")
plt.subplot(3, 1, 1)
plt.plot(loss_hist_rms, 'o', label="RMSProp")
plt.subplot(3, 1, 2)
plt.plot(train_acc_hist_rms, '-o', label="RMSProp")
plt.subplot(3, 1, 3)
plt.plot(val_acc_hist_rms, '-o', label="RMSProp")
plt.subplot(3, 1, 1)
plt.plot(loss_hist_adam, 'o', label="Adam")
plt.subplot(3, 1, 2)
plt.plot(train_acc_hist_adam, '-o', label="Adam")
plt.subplot(3, 1, 3)
plt.plot(val_acc_hist_adam, '-o', label="Adam")
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
```
## Training a Network with Dropout
Run the following code blocks to compare the results with and without dropout
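For context, the dropout layer being toggled here typically implements "inverted" dropout; a minimal sketch of the forward pass is given below. The real layer in the lib directory may differ in naming and in how it stores the mask for the backward pass.
```python
import numpy as np

# Minimal sketch of an inverted-dropout forward pass.
def dropout_forward(x, p, train=True):
    # Drop each unit with probability p at train time and rescale by 1/(1-p),
    # so no extra scaling is needed at test time.
    if train and p > 0:
        mask = (np.random.rand(*x.shape) >= p) / (1.0 - p)
        return x * mask, mask
    return x, None
```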
```python
# Train two identical nets, one with dropout and one without
num_train = 500
data_dict_500 = {
"data_train": (data["data_train"][:num_train], data["labels_train"][:num_train]),
"data_val": (data["data_val"], data["labels_val"]),
"data_test": (data["data_test"], data["labels_test"])
}
solvers = {}
dropout_ps = [0, 0.25] # you can try some dropout prob yourself
results_dict = {}
for dropout_p in dropout_ps:
results_dict[dropout_p] = {}
for dropout_p in dropout_ps:
print "Dropout =", dropout_p
model = DropoutNetTest(dropout_p=dropout_p)
loss_f = cross_entropy()
optimizer = SGDM(model.net, 1e-4)
results = train_net(data_dict_500, model, loss_f, optimizer, batch_size=100,
max_epochs=20, show_every=100, verbose=True)
opt_params, loss_hist, train_acc_hist, val_acc_hist = results
results_dict[dropout_p] = {
"opt_params": opt_params,
"loss_hist": loss_hist,
"train_acc_hist": train_acc_hist,
"val_acc_hist": val_acc_hist
}
```
Dropout = 0
(Iteration 1 / 100) loss: 2.54822405595
(Epoch 1 / 20) Training Accuracy: 0.092, Validation Accuracy: 0.096
(Epoch 2 / 20) Training Accuracy: 0.116, Validation Accuracy: 0.101
(Epoch 3 / 20) Training Accuracy: 0.144, Validation Accuracy: 0.112
(Epoch 4 / 20) Training Accuracy: 0.174, Validation Accuracy: 0.128
(Epoch 5 / 20) Training Accuracy: 0.196, Validation Accuracy: 0.135
(Epoch 6 / 20) Training Accuracy: 0.226, Validation Accuracy: 0.145
(Epoch 7 / 20) Training Accuracy: 0.23, Validation Accuracy: 0.161
(Epoch 8 / 20) Training Accuracy: 0.236, Validation Accuracy: 0.163
(Epoch 9 / 20) Training Accuracy: 0.24, Validation Accuracy: 0.168
(Epoch 10 / 20) Training Accuracy: 0.25, Validation Accuracy: 0.171
(Epoch 11 / 20) Training Accuracy: 0.26, Validation Accuracy: 0.18
(Epoch 12 / 20) Training Accuracy: 0.278, Validation Accuracy: 0.184
(Epoch 13 / 20) Training Accuracy: 0.286, Validation Accuracy: 0.192
(Epoch 14 / 20) Training Accuracy: 0.292, Validation Accuracy: 0.196
(Epoch 15 / 20) Training Accuracy: 0.296, Validation Accuracy: 0.201
(Epoch 16 / 20) Training Accuracy: 0.308, Validation Accuracy: 0.201
(Epoch 17 / 20) Training Accuracy: 0.31, Validation Accuracy: 0.203
(Epoch 18 / 20) Training Accuracy: 0.318, Validation Accuracy: 0.207
(Epoch 19 / 20) Training Accuracy: 0.328, Validation Accuracy: 0.215
(Epoch 20 / 20) Training Accuracy: 0.332, Validation Accuracy: 0.219
Dropout = 0.25
(Iteration 1 / 100) loss: 3.0082675448
(Epoch 1 / 20) Training Accuracy: 0.13, Validation Accuracy: 0.096
(Epoch 2 / 20) Training Accuracy: 0.13, Validation Accuracy: 0.107
(Epoch 3 / 20) Training Accuracy: 0.158, Validation Accuracy: 0.12
(Epoch 4 / 20) Training Accuracy: 0.168, Validation Accuracy: 0.126
(Epoch 5 / 20) Training Accuracy: 0.196, Validation Accuracy: 0.14
(Epoch 6 / 20) Training Accuracy: 0.208, Validation Accuracy: 0.15
(Epoch 7 / 20) Training Accuracy: 0.218, Validation Accuracy: 0.153
(Epoch 8 / 20) Training Accuracy: 0.234, Validation Accuracy: 0.159
(Epoch 9 / 20) Training Accuracy: 0.24, Validation Accuracy: 0.16
(Epoch 10 / 20) Training Accuracy: 0.25, Validation Accuracy: 0.167
(Epoch 11 / 20) Training Accuracy: 0.276, Validation Accuracy: 0.16
(Epoch 12 / 20) Training Accuracy: 0.274, Validation Accuracy: 0.169
(Epoch 13 / 20) Training Accuracy: 0.292, Validation Accuracy: 0.176
(Epoch 14 / 20) Training Accuracy: 0.308, Validation Accuracy: 0.176
(Epoch 15 / 20) Training Accuracy: 0.3, Validation Accuracy: 0.18
(Epoch 16 / 20) Training Accuracy: 0.332, Validation Accuracy: 0.181
(Epoch 17 / 20) Training Accuracy: 0.328, Validation Accuracy: 0.179
(Epoch 18 / 20) Training Accuracy: 0.326, Validation Accuracy: 0.193
(Epoch 19 / 20) Training Accuracy: 0.338, Validation Accuracy: 0.183
(Epoch 20 / 20) Training Accuracy: 0.376, Validation Accuracy: 0.176
```python
# Plot train and validation accuracies of the two models
train_accs = []
val_accs = []
for dropout_p in dropout_ps:
curr_dict = results_dict[dropout_p]
train_accs.append(curr_dict["train_acc_hist"][-1])
val_accs.append(curr_dict["val_acc_hist"][-1])
plt.subplot(3, 1, 1)
for dropout_p in dropout_ps:
curr_dict = results_dict[dropout_p]
plt.plot(curr_dict["train_acc_hist"], 'o', label='%.2f dropout' % dropout_p)
plt.title('Train accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(ncol=2, loc='lower right')
plt.subplot(3, 1, 2)
for dropout_p in dropout_ps:
curr_dict = results_dict[dropout_p]
plt.plot(curr_dict["val_acc_hist"], 'o', label='%.2f dropout' % dropout_p)
plt.title('Val accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(ncol=2, loc='lower right')
plt.gcf().set_size_inches(15, 15)
plt.show()
```
### Inline Question: Describe what you observe from the above results and graphs
#### Ans: The overall loss is lower when training without dropout. We also reach a higher final training accuracy with a dropout of 0.25 than with no dropout, but the reverse is true for the validation accuracy.
## Plot the Activation Functions
For each activation function, use the given lambda function template to plot its corresponding curve.
```python
left, right = -10, 10
X = np.linspace(left, right, 100)
XS = np.linspace(-5, 5, 10)
lw = 4
alpha = 0.1
elu_alpha = 0.5
selu_alpha = 1.6732
selu_scale = 1.0507
#########################
####### YOUR CODE #######
#########################
sigmoid = lambda x: 1/(1 + np.exp(-x))
leaky_relu = lambda x: ((0.1*x) * (x<0))+(x*(x>=0))
relu = lambda x: x * (x > 0)
elu = lambda x: ((elu_alpha*(np.exp(x)-1)) * (x<0))+(x*(x>=0))
selu = lambda x: selu_scale*(((selu_alpha*(np.exp(x)-1)) * (x<0))+(x *(x>=0)))
tanh = lambda x: np.tanh(x)
#########################
### END OF YOUR CODE ####
#########################
activations = {
"Sigmoid": sigmoid,
"LeakyReLU": leaky_relu,
"ReLU": relu,
"ELU": elu,
"SeLU": selu,
"Tanh": tanh
}
# Ground Truth activations
GT_Act = {
"Sigmoid": [0.00669285092428, 0.0200575365379, 0.0585369028744, 0.158869104881, 0.364576440742,
0.635423559258, 0.841130895119, 0.941463097126, 0.979942463462, 0.993307149076],
"LeakyReLU": [-0.5, -0.388888888889, -0.277777777778, -0.166666666667, -0.0555555555556,
0.555555555556, 1.66666666667, 2.77777777778, 3.88888888889, 5.0],
"ReLU": [-0.0, -0.0, -0.0, -0.0, -0.0, 0.555555555556, 1.66666666667, 2.77777777778, 3.88888888889, 5.0],
"ELU": [-0.4966310265, -0.489765962143, -0.468911737989, -0.405562198581, -0.213123289631,
0.555555555556, 1.66666666667, 2.77777777778, 3.88888888889, 5.0],
"SeLU": [-1.74618571868, -1.72204772347, -1.64872296837, -1.42598202974, -0.749354802287,
0.583722222222, 1.75116666667, 2.91861111111, 4.08605555556, 5.2535],
"Tanh": [-0.999909204263, -0.999162466631, -0.992297935288, -0.931109608668, -0.504672397722,
0.504672397722, 0.931109608668, 0.992297935288, 0.999162466631, 0.999909204263]
}
for label in activations:
fig = plt.figure(figsize=(4,4))
ax = fig.add_subplot(1, 1, 1)
ax.plot(X, activations[label](X), color='darkorchid', lw=lw, label=label)
assert rel_error(activations[label](XS), GT_Act[label]) < 1e-9, \
"Your implementation of {} might be wrong".format(label)
ax.legend(loc="lower right")
ax.axhline(0, color='black')
ax.axvline(0, color='black')
ax.set_title('{}'.format(label), fontsize=14)
plt.xlabel(r"X")
plt.ylabel(r"Y")
plt.show()
```
# Phew! You're done for problem 1 now, but 3 more to go... LOL
########################################################
### This file is used to generate Table 4-6, Fig 2-3 ###
########################################################
- [Forward Problem](#Forward-Problem)
- [Verify Assumption 1](#Verify-Assumption-1)
- [Table 4](#Table-4)
- [Table 5](#Table-5)
- [Verify Lemma 1](#Verify-Lemma-1)
- [Left plot in Figure 2](#Left-plot-in-Figure-2)
- [Verify Theorem 3.1](#Verify-Theorem-3.1)
- [Right plot in Figure 2](#Right-plot-in-Figure-2)
- [Inverse Problem](#Inverse-Problem)
- [Verify Assumption 2](#Verify-Assumption-2)
- [Table 6](#Table-6)
- [Verify Theorem 4.2](#Verify-Theorem-4.2)
- [Figure 3](#Figure-3)
```python
import os
import numpy as np
import numpy.polynomial.legendre as leg
from scipy.stats import beta
from scipy.stats import uniform
from scipy.integrate import odeint
from scipy.stats import gaussian_kde as kde
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
from matplotlib import pyplot as plt
%matplotlib inline
```
```python
####### Plot Formatting ######
plt.rc('lines', linewidth = 1.5)
plt.rc('xtick', labelsize = 14)
plt.rc('ytick', labelsize = 14)
plt.rc('legend',fontsize=14)
# plt.rcParams["font.family"] = "serif"
plt.rcParams['axes.labelsize'] = 20
plt.rcParams['axes.titlesize'] = 12
plt.rcParams['lines.markersize'] = 6
plt.rcParams['figure.figsize'] = (8.0, 6.0)
```
## Modified version of Example from Xiu2002
$$ \frac{dy(t)}{dt} = -\lambda y, \ \ y(0)=1 $$
$$ y(t) = e^{-\lambda t} $$
$$QoI = y(0.5)$$
$\lambda\sim U[-1,1]$, $t\in[0,1]$
$\Lambda = [-1,1]$, $\mathcal{D}=[e^{-0.5},e^{0.5}]$
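As a quick sanity check (not part of the original analysis), we can integrate the ODE for one fixed $\lambda$ and compare against the exact solution $y(t)=e^{-\lambda t}$:
```python
import numpy as np
from scipy.integrate import odeint

lam = 0.3                                           # an arbitrary test value in [-1, 1]
ts = np.linspace(0, 1, 101)
y_num = odeint(lambda y, t: -lam*y, 1.0, ts).ravel()
print(np.max(np.abs(y_num - np.exp(-lam*ts))))      # expected to be ~1e-8 or smaller
```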
```python
def Phi(n):
'''Define L_n'''
coeffs = [0]*(n+1)
coeffs[n] = 1
return coeffs
def inner2_leg(n):
return 2/(2*n+1)
def product3_leg(i,j,l):
#compute \Phi_i*\Phi_j*\Phi_l
return lambda x: leg.legval(x, leg.legmul(leg.legmul(Phi(i),Phi(j)),Phi(l)))
def inner3_leg(i,j,l):
'''
compute <\Phi_i\Phi_j\Phi_l>
    Set up Gauss-Legendre quadrature
'''
x, w=leg.leggauss(20)
inner=sum([product3_leg(i,j,l)(x[idx]) * w[idx] for idx in range(20)])
return inner
```
```python
def ode_system_leg(y, t, P):
'''P indicates highest order of Polynomial we use'''
dydt = np.zeros(P+1)
for l in range(len(dydt)):
dydt[l] = -(sum(sum(inner3_leg(i,j,l)*ki_leg[i]*y[j] for j in range(P+1)) for i in range(P+1)))/inner2_leg(l)
return dydt
```
```python
P=5
ki_leg = [0,1]+[0]*(P-1)
sol_leg = odeint(ode_system_leg, [1.0]+[0.0]*P, np.linspace(0,1,101), args=(P,))
```
```python
def a(i):
return sol_leg[:,i][50]
coef = np.array([a(0), a(1), a(2), a(3), a(4), a(5)]) #fixed
def Q(i,x):
return leg.legval(x,coef[:(i+1)])
def Qexact(x):
return np.exp(-x*0.5)
```
```python
#### Use plot to show the difference between the exact and approximate map #####
fig = plt.figure()
def plot_Qn(n):
fig.clear()
x = np.linspace(-3,3,100)
y = Qexact(x)
yn = Q(n, x)
plt.plot(x,y,linestyle='-.',linewidth=4,label="$Q(\lambda)$")
plt.plot(x,yn,label='Q_'+str(n)+'$(\lambda)$')
plt.xlabel('$\Lambda$')
plt.legend();
interact(plot_Qn,
n = widgets.IntSlider(value=1,min=1,max=5,step=1))
```
<Figure size 576x432 with 0 Axes>
interactive(children=(IntSlider(value=1, description='n', max=5, min=1), Output()), _dom_classes=('widget-inte…
<function __main__.plot_Qn(n)>
## Forward Problem
$\lambda\sim U([-1,1])$, QOI is the value at $t=0.5$ ($y(0.5)$). $Q_n$ defines the Polynomial Chaos expansion with degree $n$.
$$
Q(\lambda)=y(0.5)=\sum\limits_{i=0}^{\infty} y_i(0.5)\Phi_i
$$
$$
Q_n(\lambda)=\sum\limits_{i=0}^n y_i(0.5)\Phi_i
$$
Verify Result of Lemma 2:
If $Q_n(\lambda)\to Q(\lambda)$ in $L^p(\Lambda)$, Assumption 1 holds, and $D_c\subset\mathcal{D}$ is compact, then
\begin{equation}
\pi_{\mathcal{D}}^{Q_n}(q) \to \pi_{\mathcal{D}}^{Q}(q) \text{ in } L^r(D_c)
\end{equation}
Since $\mathcal{D}$ is compact in this problem, we choose $D_c=\mathcal{D}$.
Verify Result of Theorem 3.1:
If $Q_n(\lambda)\to Q(\lambda)$ in $L^p(\Lambda)$, Assumption 1 holds, $\{\pi_{\mathcal{D}}^{Q_n}\}$ are uniformly integrable in $L^p(\mathcal{D})$, and $\mathcal{D}$ is compact, then
\begin{equation}
\pi_{\mathcal{D}}^{Q_n}(Q_n(\lambda)) \to \pi_{\mathcal{D}}^{Q}(Q(\lambda)) \text{ in } L^p(\Lambda)
\end{equation}
### Verify Assumption 1
```python
##### Generate data in Table 4 and 5 #####
def assumption1(n,J):
np.random.seed(123456)
initial_sample = np.random.uniform(-1,1,size = J)
pfprior_sample_n = Q(n,initial_sample)
pfprior_dens_n = kde(pfprior_sample_n)
x = np.linspace(-1,3,1000)
return np.round(np.max(np.abs(np.gradient(pfprior_dens_n(x), x))), 2), np.round(np.max(pfprior_dens_n(x)),2)
size_J = [int(1E3), int(1E4), int(1E5)]
degree_n = [1, 2, 3, 4, 5]
Bound_matrix, Lip_Bound_matrix = np.zeros((3,5)), np.zeros((3,5))
for i in range(3):
for j in range(5):
n, J = degree_n[j], size_J[i]
Lip_Bound_matrix[i,j] = assumption1(n, J)[0]
Bound_matrix[i,j] = assumption1(n, J)[1]
```
#### Table 4
```python
###########################################
################ Table 4 ##################
###########################################
print('Table 4')
print('Bound under certain n and J values')
print(Bound_matrix)
```
Table 4
Bound under certain n and J values
[[1.06 1.28 1.24 1.24 1.24]
[1.03 1.49 1.42 1.42 1.42]
[1. 1.61 1.49 1.5 1.5 ]]
#### Table 5
```python
###########################################
################ Table 5 ##################
###########################################
print('Table 5')
print('Lipschitz bound under certain n and J values')
print(Lip_Bound_matrix)
```
Table 5
Lipschitz bound under certain n and J values
[[ 5.58 8.18 7.73 7.76 7.76]
[ 8.4 14.18 12.97 13.05 13.05]
[13.45 23.43 21. 21.22 21.21]]
```python
#### Use plot to show the difference between the exact pushforward and approximate pushforward #####
fig=plt.figure()
def plot_pushforward(n,J):
fig.clear()
np.random.seed(123456)
initial_sample = np.random.uniform(-1,1,size = J)
pfprior_sample = Qexact(initial_sample)
pfprior_dens = kde(pfprior_sample)
pfprior_sample_n = Q(n,initial_sample)
pfprior_dens_n = kde(pfprior_sample_n)
fig.clear()
x = np.linspace(-1,3,1000)
y = pfprior_dens(x)
yn = pfprior_dens_n(x)
plt.plot(x,y,color='r', linestyle='-.', linewidth=4, label="$\pi_{\mathcal{D}}^Q$")
plt.plot(x,yn,linewidth=2,label="$\pi_{\mathcal{D}}^{Q_{n}}$")
plt.title('Lipschitz const. = %4.2f and Bound = %2.2f' %(np.max(np.abs(np.gradient(pfprior_dens_n(x), x))),
np.max(pfprior_dens_n(x))))
plt.xlabel("$\mathcal{D}$")
plt.legend()
interact(plot_pushforward,
n = widgets.IntSlider(value=1,min=1,max=5,step=1),
J = widgets.IntSlider(value=int(1E3),min=int(1E3),max=int(1E5),step=int(1E3)))
```
<Figure size 576x432 with 0 Axes>
interactive(children=(IntSlider(value=1, description='n', max=5, min=1), IntSlider(value=1000, description='J'…
<function __main__.plot_pushforward(n, J)>
### Verify Lemma 1
**Print out Monte Carlo Approximation of $ \|\pi_{\mathcal{D}}^Q(q)-\pi_{\mathcal{D}}^{Q_n}(q)\|_{L^r(\mathcal{D_c})} $ where $r>0$ and $D_c=\mathcal{D}$ because $\mathcal{D}$ is compact.**
```python
#Build $\pi_D^Q$ and $\pi_D^{Q,n}$, use 10,000 samples
N_kde = int(1E4)
N_mc = int(1E4)
np.random.seed(123456)
initial_sample = np.random.uniform(-1,1,size = N_kde)
pfprior_sample = Qexact(initial_sample)
pfprior_dens = kde(pfprior_sample)
def pfprior_dens_n(n,x):
pfprior_sample_n = Q(n,initial_sample)
pdf = kde(pfprior_sample_n)
return pdf(x)
```
```python
error_r_D = np.zeros((5,5))
np.random.seed(123456)
qsample = np.random.uniform(np.exp(-0.5),np.exp(0.5),N_mc)
for i in range(5):
for j in range(5):
error_r_D[i,j] = (np.mean((np.abs(pfprior_dens(qsample) - pfprior_dens_n(j+1,qsample)))**(i+1)))**(1/(i+1))
```
```python
np.set_printoptions(linewidth=110)
print('L^r error on data space for Forward Problem',end='\n\n')
print(error_r_D)
```
L^r error on data space for Forward Problem
[[1.94238726e-01 2.05042717e-02 1.47356406e-03 7.87288538e-05 3.59183884e-06]
[2.22506150e-01 2.49390997e-02 1.76945246e-03 9.32066906e-05 4.07528839e-06]
[2.44870659e-01 2.96231360e-02 2.02466852e-03 1.04710022e-04 4.44179886e-06]
[2.63194738e-01 3.41497229e-02 2.25659747e-03 1.14385798e-04 4.75561998e-06]
[2.78426119e-01 3.81183280e-02 2.46559182e-03 1.22601045e-04 5.03864501e-06]]
```python
#### To make it cleaner, create Directory "images" to store all the figures ####
imagepath = os.path.join(os.getcwd(),"images")
os.makedirs(imagepath,exist_ok=True)
```
#### Left plot in Figure 2
```python
###########################################
######### The left plot of Fig 2 ##########
###########################################
fig = plt.figure()
plt.xlim([0,6])
marker = ['-D', '-o', '-v', '-s', '-.']
for i in range(5):
plt.semilogy([1,2,3,4,5],error_r_D[i,:],marker[i],label='r = ' + np.str(i+1))
plt.xlabel('Order of PCE (n)')
plt.ylabel('$L^r$'+' Error in Push-Forward on '+'$\mathcal{D}$')
plt.legend();
# fig.savefig("images/1forward_D_uniform.png")
fig.savefig("images/Fig2(Left).png")
```
### Verify Theorem 3.1
**Print out Monte Carlo Approximation of $ \|\pi_{\mathcal{D}}^Q(Q(\lambda))-\pi_{\mathcal{D}}^{Q_n}(Q_n(\lambda))\|_{L^2(\Lambda)} $**
```python
##### Generate data for the right plot of Fig 2 #####
np.random.seed(123456)
lamsample = np.random.uniform(-1,1,size = N_mc)
error_2 = np.zeros(5)
for i in range(5):
error_2[i] = (np.mean((np.abs(pfprior_dens(Qexact(lamsample)) - pfprior_dens_n(i+1,Q(i+1,lamsample))))**2))**(1/2)
```
```python
np.set_printoptions(linewidth=110)
print('L^2 error on parameter space for Forward Problem',end='\n\n')
print(error_2)
```
L^2 error on parameter space for Forward Problem
[2.42452472e-01 3.51921960e-02 2.34712105e-03 1.20430887e-04 4.89385277e-06]
#### Right plot in Figure 2
```python
############################################
######### The right plot of Fig 2 ##########
############################################
fig = plt.figure()
plt.xlim([0,6])
plt.semilogy([1,2,3,4,5],error_2,'-s')
plt.xlabel('Order of PCE (n)')
plt.ylabel('$L^2$'+' Error in Push-Forward on '+'$\Lambda$');
# fig.savefig("images/1forward_Lam_uniform.png")
fig.savefig("images/Fig2(Right).png")
```
## Inverse Problem
Initial guess is $\lambda\sim U([-1,1])$.
Observation is $\pi_{\mathcal{D}}\sim Beta(4,4)$ with location and scale parameters chosen to be on $[1,1.25]$.
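For reference, writing $\pi_{\Lambda}^{init}$ for the initial density, the update computed below is the density-ratio update
$$
\pi_{\Lambda}^{u}(\lambda) = \pi_{\Lambda}^{init}(\lambda)\,\frac{\pi_{\mathcal{D}}^{obs}\big(Q(\lambda)\big)}{\pi_{\mathcal{D}}^{Q}\big(Q(\lambda)\big)},
$$
and the ratio $r = \pi_{\mathcal{D}}^{obs}/\pi_{\mathcal{D}}^{Q}$ evaluated at the push-forward samples is exactly the weight passed to the KDEs in `Meanr` and `pdf_update` below.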
Verify Result of Theorem 4.2:
$Q_n(\lambda)\to Q(\lambda)$ in $L^p(\Lambda)$, $\pi_{\Lambda}^i\in L^p(\mathcal{D})$. If Assumptions 1, 2 hold, $\{\pi_{\mathcal{D}}^{Q_n}\}$ are uniformly integrable in $L^p(\mathcal{D})$, then
\begin{equation}
\pi_{\Lambda}^{u,n}(\lambda) \to \pi_{\Lambda}^{u}(\lambda) \text{ in } L^p(\Lambda)
\end{equation}
```python
def pdf_obs(x):
return beta.pdf(x, a=4, b=4, loc=1, scale=0.25)
```
```python
#### Use plot to show the difference between the pushforward of the init and the observed #####
fig = plt.figure()
xx = np.linspace(-1,3,1000)
y = pdf_obs(xx)
y_pf = pfprior_dens(xx)
plt.plot(xx,y,label="$\pi_{\mathcal{D}}^{obs}$")
plt.plot(xx,y_pf, label="$\pi_{\mathcal{D}}^{Q(init)}$")
plt.xlabel("$\mathcal{D}$")
plt.legend();
```
### Verify Assumption 2
```python
def Meanr(n):
pfprior_sample_n = Q(n,initial_sample)
if n==0:
r = pdf_obs(pfprior_sample)/pfprior_dens(pfprior_sample)
else:
r = pdf_obs(pfprior_sample_n)/pfprior_dens_n(n,pfprior_sample_n)
return np.mean(r)
def pdf_update(n,x):
if n==0:
r = pdf_obs(pfprior_sample)/pfprior_dens(pfprior_sample)
pdf = kde(initial_sample,weights=r)
else:
pfprior_sample_n = Q(n,initial_sample)
# pfprior_dens_n = kde(pfprior_sample_n)
r = pdf_obs(pfprior_sample_n)/pfprior_dens_n(n,pfprior_sample_n)
pdf = kde(initial_sample,weights=r)
return pdf(x)
Expect_r = np.zeros(6)
for i in range(6):
Expect_r[i] = Meanr(i)
```
#### Table 6
```python
###########################################
################ Table 6 ##################
###########################################
print('Table 6')
print('Expected ratio for verifying Assumption 2')
print(Expect_r[1:])
```
Table 6
Expected ratio for verifying Assumption 2
[1.00342642 0.97855489 0.97788712 0.97809176 0.97809595]
```python
#### Use plot to show the difference between the initial, updated, approximate updated #####
fig=plt.figure()
def plot_update(n):
fig.clear()
xx = np.linspace(-1.1,1.1,100)
plt.plot(xx, uniform.pdf(xx, loc=-1, scale=2), label="Initial Density")
plt.plot(xx, pdf_update(0,xx), label="$\pi_{\Lambda}^u$")
plt.plot(xx, pdf_update(n,xx), label="$\pi_{\Lambda}^{u,n}$, n="+str(n))
plt.legend()
plt.xlabel("$\Lambda$")
plt.title('$\mathbb{E}(r) =$ %3.2f' %(Expect_r[n]));
interact(plot_update,
n = widgets.IntSlider(value=int(1),min=int(1),max=int(5),step=1))
```
<Figure size 576x432 with 0 Axes>
interactive(children=(IntSlider(value=1, description='n', max=5, min=1), Output()), _dom_classes=('widget-inte…
<function __main__.plot_update(n)>
```python
#### Use plot to show the difference between the observed and the pushforward of the approximate updated pdf #####
def update_pushforward(n,x):
pfprior_sample_n = Q(n,initial_sample)
r = pdf_obs(pfprior_sample_n)/pfprior_dens_n(n,pfprior_sample_n)
pdf = kde(pfprior_sample_n,weights=r)
return pdf(x)
fig = plt.figure()
xx = np.linspace(-1,3,100)
y = pdf_obs(xx)
plt.plot(xx,y,label="$\pi_{\mathcal{D}}^{obs}$")
for i in range(1,6,1):
y_pf = update_pushforward(i,xx)
plt.plot(xx,y_pf, label="n="+str(i))
plt.xlabel("$\mathcal{D}$")
plt.legend();
```
### Verify Theorem 4.2
Print out Monte Carlo Approximation of $\|\pi_{\Lambda}^{u,n}(\lambda)-\pi_{\Lambda}^u(\lambda)\|_{L^2(\Lambda)} $
```python
##### Generate data for Fig 3 #####
np.random.seed(123456)
lamsample = np.random.uniform(-1,1,size = N_mc)
error_update = np.zeros(5)
for i in range(5):
error_update[i] = (np.mean((np.abs(pdf_update(0,lamsample) - pdf_update(i+1,lamsample)))**2))**(1/2)
```
```python
np.set_printoptions(linewidth=110)
print('L^2 Error for Inverse Problem',end='\n\n')
print(error_update)
```
L^2 Error for Inverse Problem
[7.93989864e-01 6.16431006e-02 3.38614091e-03 2.52824991e-04 6.91016768e-06]
#### Figure 3
```python
###########################################
################ Figure 3 #################
###########################################
fig = plt.figure()
plt.xlim([0,6])
plt.semilogy([1,2,3,4,5],error_update,'-s')
plt.xlabel('Order of PCE (n)')
plt.ylabel('$L^2$'+' Error in Update');
# fig.savefig("images/1inverse_error_uniform.png")
fig.savefig("images/Fig3.png")
```
# Plotting with Matplotlib
## Prepare for action
```
import numpy as np
import scipy as sp
import sympy
# Pylab combines the pyplot functionality (for plotting) with the numpy
# functionality (for mathematics and for working with arrays) in a single namespace
# aims to provide a closer MATLAB feel (the easy way). Note that this approach
# should only be used when doing some interactive quick and dirty data inspection.
# DO NOT USE THIS FOR SCRIPTS
#from pylab import *
# the convienient Matplotlib plotting interface pyplot (the tidy/right way)
# use this for building scripts. The examples here will all use pyplot.
import matplotlib.pyplot as plt
# for using the matplotlib API directly (the hard and verbose way)
# use this when building applications, and/or backends
import matplotlib as mpl
```
How would you like the IPython notebook to show your plots? In order to use the
matplotlib IPython magic your IPython notebook should be launched as
ipython notebook --matplotlib=inline
Make plots appear as a pop up window by choosing a backend: 'gtk', 'inline', 'osx', 'qt', 'qt4', 'tk', 'wx'
%matplotlib qt
or inline in the notebook (no panning or zooming through the plot). Not working in IPython 0.x
%matplotlib inline
```
# activate pop up plots
#%matplotlib qt
# or change to inline plots
%matplotlib inline
```
ERROR: Line magic function `%matplotlib` not found.
### Matplotlib documentation
Finding your own way (aka RTFM). Hint: there is search box available!
* http://matplotlib.org/contents.html
The Matplotlib API docs:
* http://matplotlib.org/api/index.html
Pyplot, object oriented plotting:
* http://matplotlib.org/api/pyplot_api.html
* http://matplotlib.org/api/pyplot_summary.html
Extensive gallery with examples:
* http://matplotlib.org/gallery.html
### Tutorials for those who want to start playing
If reading manuals is too much for you, there is a very good tutorial available here:
* http://nbviewer.ipython.org/github/jrjohansson/scientific-python-lectures/blob/master/Lecture-4-Matplotlib.ipynb
Note that this tutorial uses
from pylab import *
which is usually not adviced in more advanced script environments. When using
import matplotlib.pyplot as plt
you need to preceed all plotting commands as used in the above tutorial with
plt.
Give me more!
[EuroScipy 2012 Matlotlib tutorial](http://www.loria.fr/~rougier/teaching/matplotlib/). Note that here the author uses ```from pylab import * ```. When using ```import matplotliblib.pyplot as plt``` the plotting commands need to be proceeded with ```plt.```
## Plotting template starting point
```
# some sample data
x = np.arange(-10,10,0.1)
```
To change the default plot configuration values.
```
page_width_cm = 13
dpi = 200
inch = 2.54 # inch in cm
# setting global plot configuration using the RC configuration style
plt.rc('font', family='serif')
plt.rc('xtick', labelsize=12) # tick labels
plt.rc('ytick', labelsize=20) # tick labels
plt.rc('axes', labelsize=20) # axes labels
# If you don’t need LaTeX, don’t use it. It is slower to plot, and text
# looks just fine without. If you need it, e.g. for symbols, then use it.
#plt.rc('text', usetex=True) #<- P-E: Doesn't work on my Mac
```
```
# create a figure instance, note that figure size is given in inches!
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(8,6))
# set the big title (note aligment relative to figure)
fig.suptitle("suptitle 16, figure alignment", fontsize=16)
# actual plotting
ax.plot(x, x**2, label="label 12")
# set axes title (note aligment relative to axes)
ax.set_title("title 14, axes alignment", fontsize=14)
# axes labels
ax.set_xlabel('xlabel 12')
ax.set_ylabel(r'$y_{\alpha}$ 12', fontsize=22)
# legend
ax.legend(fontsize=12, loc="best")
# saving the figure in different formats
fig.savefig('figure-%03i.png' % dpi, dpi=dpi)
fig.savefig('figure.svg')
fig.savefig('figure.eps')
```
```
# following steps are only relevant when using figures as pop up windows (with %matplotlib qt)
# to update a figure with has been modified
fig.canvas.draw()
# show a figure
fig.show()
```
## Exercise
In this section you will figure out how to use several plotting features on your own. Use the previously mentioned resources to find out how; in many cases, Google is your friend!
* add a grid to the plot
```
plt.plot(x,x**2)
#Write code to show grid in plot here
plt.grid()
```
* change the location of the legend to different places
```
plt.plot(x,x**2, label="label 12")
plt.legend(fontsize=12, loc="upper center")
plt.grid()
```
* find a way to control the line type and color, marker type and color, control the frequency of the marks (`markevery`). See plot options at: http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.plot
```
plt.plot(x,x**2,'--',color="black",marker=">",markerfacecolor="red",markeredgecolor="black",markevery=5,markersize=10,label="F(x)=x^2")
plt.legend(fontsize=12, loc="upper center")
plt.grid()
```
* add different sub-plots
```
x = np.linspace(0, 1 * np.pi, 120)
y = np.tanh(x ** 1.5)
f, axarr = plt.subplots(2, 2, figsize=(8,6))
axarr[0, 0].plot(x, y,"--",color="red")
axarr[0, 0].set_title('Axis [0,0]')
axarr[0, 1].plot(x, y,"o",color="red")
axarr[0, 1].set_title('Axis [0,1]')
axarr[1, 0].plot(x, y ** 2)
axarr[1, 0].set_title('Axis [1,0]')
axarr[1, 1].scatter(x, y ** 2)
axarr[1, 1].set_title('Axis [1,1]')
#Hides x ticks for top plots and y ticks for right plots
plt.setp([a.get_xticklabels() for a in axarr[0, :]], visible=False);
plt.setp([a.get_yticklabels() for a in axarr[:, 1]], visible=False);
```
* size the figure such that when included on an A4 page the fonts are given in their true size
```
plt.figure(num=None, figsize=(12, 10), dpi=80, facecolor='w', edgecolor='k')
plt.plot(x,np.exp(x)*np.cos(x),'-',color="red",marker=">",markerfacecolor="black",markeredgecolor="black",markevery=4,markersize=14,label="F(x)=cos(x)e^x")
plt.legend(fontsize=16, loc="upper right")
plt.grid()
```
* make a contour plot
```
X, Y = np.meshgrid(x,x)
Z=np.cos(X+Y)*np.exp(Y);
plt.contourf(X,Y,Z)
plt.colorbar()
plt.title('F(x,y)=cos(x+y)e^(y)')
plt.grid()
```
* use twinx() to create a second axis on the right for the second plot
```
plt.plot(x,x**2)
plt.plot(x,x**4, 'r')
plt.grid()
ax=plt.twinx()
ax.set_ylim(0,50)
```
* add horizontal and vertical lines using axvline(), axhline()
```
plt.plot(x,x**2)
plt.axvline(x=0.25,ymin=0,ymax=10,color="black")
plt.axhline(y=5,xmin=0,xmax=3.5,color="black")
plt.grid()
```
* autoformat dates for nice printing on the x-axis using fig.autofmt_xdate()
```
import datetime
dates = np.array([datetime.datetime.now() + datetime.timedelta(days=i) for i in xrange(24)])
fig, ax = plt.subplots(nrows=1, ncols=1)
plt.plot(x,x**2)
plt.grid()
fig.autofmt_xdate(bottom=0.1, rotation=90, ha="left")
```
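A variant (not in the original exercise) that actually plots against the `dates` array created above, so that `autofmt_xdate()` has date tick labels to format:
```
fig, ax = plt.subplots(nrows=1, ncols=1)
ax.plot(dates, np.arange(len(dates))**2, '-o')
plt.grid()
fig.autofmt_xdate(bottom=0.2, rotation=30, ha='right')
```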
## Advanced exercises
We are going to play a bit with regression
* Create a vector x of equally spaced number between $x \in [0, 5\pi]$ of 1000 points (keyword: linspace)
```
x=np.linspace(0,5*np.pi,1000)
```
* create a vector y, so that y=sin(x) with some random noise
```
y=np.sin(x)+np.random.normal(-0.2,0.2,1000)
```
* plot it like this:
```
plt.plot(x,y,'o',color="black",label="F(x)=sin(x)+E")
plt.legend(fontsize=12, loc="upper right")
plt.grid()
```
Try to do a polynomial fit on y(x) with different polynomial degree (Use numpy.polyfit to obtain coefficients)
Plot it like this (use np.poly1d(coef)(x) to plot polynomials)
```
#Polynomial fits:
PF0 = np.polyfit(x,y,0)
PF1 = np.polyfit(x,y,1)
PF2 = np.polyfit(x,y,2)
PF3 = np.polyfit(x,y,3)
PF4 = np.polyfit(x,y,4)
PF5 = np.polyfit(x,y,5)
PF6 = np.polyfit(x,y,6)
PF7 = np.polyfit(x,y,7)
PF8 = np.polyfit(x,y,8)
PF9 = np.polyfit(x,y,9)
#Polynomial values in x∈[0,5π]
PV0 = np.poly1d(PF0)
PV1 = np.poly1d(PF1)
PV2 = np.poly1d(PF2)
PV3 = np.poly1d(PF3)
PV4 = np.poly1d(PF4)
PV5 = np.poly1d(PF5)
PV6 = np.poly1d(PF6)
PV7 = np.poly1d(PF7)
PV8 = np.poly1d(PF8)
PV9 = np.poly1d(PF9)
```
```
#Original plot:
plt.figure(num=None, figsize=(16, 14), dpi=80, facecolor='w', edgecolor='k')
plt.plot(x,y,'o',color="black",label="F(x)=sin(x)+E")
plt.legend(fontsize=12, loc="upper right")
plt.grid()
#Polynomial fits:
plt.plot(x,PV0(x), linewidth=3,label="deg=0")
plt.plot(x,PV1(x), linewidth=3,label="deg=1")
plt.plot(x,PV2(x), linewidth=3,label="deg=2")
plt.plot(x,PV3(x), linewidth=3,label="deg=3")
plt.plot(x,PV4(x), linewidth=3,label="deg=4")
plt.plot(x,PV5(x), linewidth=3,label="deg=5")
plt.plot(x,PV6(x), linewidth=3,label="deg=6")
plt.plot(x,PV7(x), linewidth=3,label="deg=7")
plt.plot(x,PV8(x), linewidth=3,label="deg=8")
plt.plot(x,PV9(x), linewidth=3,label="deg=9")
plt.legend(fontsize=12,loc="best")
title("Polynomial Fit", fontsize=25)
```
```
```
## Multi-class Classification
In this part, you will extend your previous implementation of logistic regression and apply it to one-vs-all classification using ex3data1.mat.
### 1.1 DataSet
```python
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import loadmat
import matplotlib
%matplotlib inline
%config InlineBackend.figure_format='svg'
```
Each training example is a 20 pixel by 20 pixel image unrolled into a 400-dimensional vector.
This gives us a 5000 x 400 matrix X.
```python
dataSet=loadmat('ex3data1.mat')
```
```python
print(dataSet)
```
{'__header__': b'MATLAB 5.0 MAT-file, Platform: GLNXA64, Created on: Sun Oct 16 13:09:09 2011', '__version__': '1.0', '__globals__': [], 'X': array([[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.]]), 'y': array([[10],
[10],
[10],
...,
[ 9],
[ 9],
[ 9]], dtype=uint8)}
### 1.2 Visualizing the data
```python
def PlotDataX100(dataSet):
"""
:param dataSet:
"""
sampleIndex=np.random.choice(np.arange(dataSet['X'].shape[0]),100)
sampleImage=dataSet['X'][sampleIndex,:]
fig,ax=plt.subplots(nrows=10,ncols=10,sharey=True,sharex=True,figsize=(12,8))
for row in range(10):
for col in range(10):
ax[row,col].matshow(np.array(sampleImage[row*10+col].reshape((20,20))),cmap=matplotlib.cm.binary)
plt.xticks(np.array([]))
plt.yticks(np.array([]))
```
```python
PlotDataX100(dataSet)
```
### 1.3 Vectorizing Logistic Regression
In this part,you will be using multiple one-vs-all logistic regression models to build a multi-class classifier
#### 1.3.1 Vectorizing the cost function
```python
def sigmoid(z):
"""
:param z:
"""
return 1/(1+np.exp(-z))
```
```python
def cost(theta,X,y):
"""
:param theta:
:param X:
:param y:
"""
theta=np.mat(theta)
X=np.mat(X)
y=np.mat(y)
    m=X.shape[0]
    term1=np.multiply(-y,np.log(sigmoid(X*theta.T)))
    term2=np.multiply(1-y,np.log(1-sigmoid(X*theta.T)))
    return np.sum(term1-term2)/m
```
#### 1.3.2 Vectorizing the gradient
```python
def gradient(theta,X,y):
"""
:param theta:
:param X:
:param y:
"""
theta=np.mat(theta)
X=np.mat(X)
y=np.mat(y)
    m=X.shape[0]
    parameters=int(theta.ravel().shape[1])
    g=np.zeros(parameters)
    error=sigmoid(X*theta.T)-y
    for j in range(parameters):
        term=np.multiply(error,X[:,j])
        g[j]=np.sum(term)/m
return g
```
#### 1.3.3 Vectorizing regularized logistic regression
```python
def costRe(theta,X,y,C):
"""
:param theta:
:param X:
:param y:
:param C: learning rate
"""
theta=np.mat(theta)
X=np.mat(X)
y=np.mat(y)
    m=X.shape[0]
term1=np.multiply(-y,np.log(sigmoid(X*theta.T)))
term2=np.multiply(1-y,np.log(1-sigmoid(X*theta.T)))
reg=C/(2*m)*np.sum(np.power(theta[:,1:],2))
return np.sum(term1-term2)/m+reg
```
```python
def gradientRe(theta,X,y,C):
"""
:param theta:
:param X:
:param y:
:param C: learning rate
"""
theta=np.mat(theta)
X=np.mat(X)
y=np.mat(y)
    m=X.shape[0]
    parameters=int(theta.ravel().shape[1])
    g=np.zeros(parameters)
    error=sigmoid(X*theta.T)-y
    for j in range(parameters):
        term=np.multiply(error,X[:,j])
        if(j==0):
            g[0]=np.sum(term)/m
        else:
            g[j]=np.sum(term)/m+C/m*theta[0,j]
return g
```
$\frac{1}{m}\sum_{i=1}^{m}(h_{\theta}(x^{(i)})-y^{(i)})x_{j}^{(i)}=\frac{1}{m}X^{T}(h_{\theta}(x)-y)$
\begin{align}
& Repeat\text{ }until\text{ }convergence\text{ }\!\!\{\!\!\text{ } \\
& \text{ }{{\theta }_{0}}:={{\theta }_{0}}-a\frac{1}{m}\sum\limits_{i=1}^{m}{[{{h}_{\theta }}\left( {{x}^{(i)}} \right)-{{y}^{(i)}}]x_{_{0}}^{(i)}} \\
& \text{ }{{\theta }_{j}}:={{\theta }_{j}}-a\frac{1}{m}\sum\limits_{i=1}^{m}{[{{h}_{\theta }}\left( {{x}^{(i)}} \right)-{{y}^{(i)}}]x_{j}^{(i)}}+\frac{\lambda }{m}{{\theta }_{j}} \\
& \text{ }\!\!\}\!\!\text{ } \\
& Repeat \\
\end{align}
```python
def vecGradientRe(theta,X,y,C):
"""
:param theta:
:param X:
:param y:
:param C
: learning rate
"""
theta=np.mat(theta)
X=np.mat(X)
y=np.mat(y)
parameters=int(theta.ravel().shape[1])
error=sigmoid(X*theta.T)-y
grad=((X.T*error)/len(X)).T+((C/len(X))*theta)
grad[0,0]=np.sum(np.multiply(error,X[:,0]))/len(X)
return np.array(grad).ravel()
```
### 1.4 One-vs-all Classification
```python
from scipy.optimize import minimize
def OneVsAll(X,y,numLabels,C):
rows=X.shape[0]
parameters=X.shape[1]
allTheta=np.zeros((numLabels,parameters+1))
X=np.insert(X,0,values=np.ones(rows),axis=1)
for i in range(1,numLabels+1):
theta=np.zeros(parameters+1)
y_i=np.array([1 if label==i else 0 for label in y])
y_i=np.reshape(y_i,(rows,1))
fmin=minimize(fun=costRe,x0=theta,args=(X,y_i,C),method='TNC',jac=vecGradientRe)
allTheta[i-1,:]=fmin.x
return allTheta
```
```python
#Initial
#X,y
X=dataSet['X']
y=dataSet['y']
#numbers of labels
print(np.unique(dataSet['y']))
numLabels=10
#learning rate
C=1
```
[ 1 2 3 4 5 6 7 8 9 10]
```python
%%time
allTheta=OneVsAll(X,y,numLabels,C)
allTheta
```
CPU times: user 14.4 s, sys: 1.62 s, total: 16 s
Wall time: 4.61 s
array([[-2.38271930e+00, 0.00000000e+00, 0.00000000e+00, ...,
1.30447495e-03, -7.62371548e-10, 0.00000000e+00],
[-3.18312017e+00, 0.00000000e+00, 0.00000000e+00, ...,
4.46387429e-03, -5.08959133e-04, 0.00000000e+00],
[-4.79741727e+00, 0.00000000e+00, 0.00000000e+00, ...,
-2.87108841e-05, -2.47526194e-07, 0.00000000e+00],
...,
[-7.98743806e+00, 0.00000000e+00, 0.00000000e+00, ...,
-8.94588384e-05, 7.21025071e-06, 0.00000000e+00],
[-4.57242880e+00, 0.00000000e+00, 0.00000000e+00, ...,
-1.33039090e-03, 1.30275261e-04, 0.00000000e+00],
[-5.40568023e+00, 0.00000000e+00, 0.00000000e+00, ...,
-1.16598780e-04, 7.88289072e-06, 0.00000000e+00]])
#### 1.4.1 One-vs-all Prediction
```python
from sklearn.metrics import classification_report
```
```python
def predict_all(X,all_theta):
rows=X.shape[0]
parameters=X.shape[1]
numLabels=all_theta.shape[0]
X=np.insert(X,0,values=np.ones(rows),axis=1)
X=np.mat(X)
all_theta=np.mat(all_theta)
h=sigmoid(X*all_theta.T)
h_argmax=np.argmax(h,axis=1) #by row
h_argmax=h_argmax+1
return h_argmax
```
```python
yPred=predict_all(X,allTheta)
print(classification_report(dataSet['y'],yPred))
```
precision recall f1-score support
1 0.95 0.99 0.97 500
2 0.95 0.92 0.93 500
3 0.95 0.91 0.93 500
4 0.95 0.95 0.95 500
5 0.92 0.92 0.92 500
6 0.97 0.98 0.97 500
7 0.95 0.95 0.95 500
8 0.93 0.92 0.92 500
9 0.92 0.92 0.92 500
10 0.97 0.99 0.98 500
accuracy 0.94 5000
macro avg 0.94 0.94 0.94 5000
weighted avg 0.94 0.94 0.94 5000
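As a quick cross-check (not part of the original exercise), the overall accuracy can also be computed directly from the predictions:
```python
# Fraction of training examples whose predicted label matches the true label
accuracy = np.mean(np.array(yPred) == dataSet['y'])
print('Overall training-set accuracy: {:.2%}'.format(accuracy))
```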
So far we have worked with random walkers and probability distributions independently. Now we will try to bring the two topics together.
If $P_{t}(i)$ is the probability that a walker is at site $i$ at time $t$, then the probability distribution is given by the set $\{ P_{t}(i): i \in \mathbb{Z} \}$. Abstracting a bit, this object can also be seen as a vector with as many entries as there are sites where our walker can be.
We will call this vector $\mathbf{P}_{t}$.
**Note 1**: This is the first example in which we can say that time is discrete, so $t \in \mathbb{N}$.
**Note 2**: In principle, the *space* in which our walkers move can be *infinite* (for now in one dimension), so the vector $\mathbf{P}_{t}$ would have infinitely many entries.
## Master equation
Let us place ourselves at step $0$, i.e. $t=0$. The walker sits at its initial condition waiting for the starting flag. Suppose its initial condition is $i = 0$ and that the walker has probability $p$ of taking a step to the right and $q := 1-p$ of taking one to the left.
For what follows we will assume that $p = q = \frac{1}{2}$.
It is not hard to see that $P_{0}(i) = 0, \ \forall i \neq 0$ and that $P_{0}(0) = 1$.
At the first step we have:
$$
\begin{matrix}
P_{1}(-1) = \frac{1}{2} & P_{1}(0) = 0 & P_{1}(1) = \frac{1}{2}
\end{matrix}
$$
What happens at the next step?
The region the walker can reach grows, now running from $i = -2, \dotsc, 2$. Let us see what the probability distribution looks like.
$$
\begin{matrix}
P_{2}(-2) = \frac{1}{4} & P_{2}(-1) = 0 & P_{2}(0) = \frac{1}{2} & P_{2}(1) = 0 & P_{2}(2) = \frac{1}{4}
\end{matrix}
$$
Things have already become a bit more interesting. To understand a little more rigorously how these probabilities were computed, let us take the case of $P_{2}(0)$.
At step $t = 1$ the walker had probability $\frac{1}{2}$ of being at $i = -1, 1$. Suppose it was in cell $i = -1$; then the walker has probability $\frac{1}{2}$ of taking a step to the right at step $t = 2$. In the same way, if the walker were in cell $i = 1$, there would also be probability $\frac{1}{2}$ that at the next step it is back in cell $i = 0$.
In this way we arrive at the **master equation** of our one-dimensional example with probabilities $p = q = \frac{1}{2}$:
$$P_{t+1}(i) = \frac{1}{2}P_t(i-1) + \frac{1}{2}P_t(i+1) $$
The general form of the **master equation** is
$$
\begin{equation}
P_{t+1}(i) = pP_t(i-1) + (1-p)P_t(i+1) \ \ \ \ \ (1)
\end{equation}
$$
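The following minimal NumPy sketch (not the CUDA kernel asked for below) applies Eq. (1) twice starting from $P_0$ and reproduces the distribution worked out above for $t = 2$:
```python
import numpy as np

p = 0.5                       # probability of a step to the right
P = np.zeros(7); P[3] = 1.0   # sites i = -3,...,3 ; the walker starts at i = 0
for t in range(2):
    new_P = np.zeros_like(P)
    new_P[1:-1] = p*P[:-2] + (1 - p)*P[2:]   # P_{t+1}(i) = p P_t(i-1) + q P_t(i+1)
    P = new_P
print(P)   # -> [0, 0.25, 0, 0.5, 0, 0.25, 0]
```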
**[1a]** Let's get to work. The overall goal is to plot how the probability distribution evolves in time with the help of our master equation.
For this we will need a somewhat more sophisticated `kernel` than the ones we have written so far. The big change is that here we will play with a *single* random walker, and not with many as we had done until now. The `grid` of `blocks` of `threads` becomes the space in which the walker moves. A subtle change, but one with big consequences.
First of all we need the array in which the walker can move back and forth. We will call it $X$. Remember to make it large enough so that the walker does not hit the ends.
To compute the probability distribution at step $t+1$ we need the distribution at step $t$. However, by overwriting our array $X$ we would lose information, and our calculations would therefore be wrong. This is why we need to declare another `array` into which we can copy the information at time $t$ in order to compute the desired distribution.
Now, here comes the interesting part: *how the data is copied*. For this we rely on the *tiled programming* example from Notebook 6 of the first part of this course, which was based on declaring a `__shared__` `array` into which the data is copied.
So let us take a look at what the kernel would look like. Suppose, as said before, our data lives in an array $X$ and the scratch space we lean on is a *shared*-memory array called $X_{copia}$.
The general idea is then to compute the states of X at time $t+1$ tile by tile. Suppose X is an array of 200 cells and we want to compute these 200 cells at time $t+1$ in groups of 5. Then $X_{copia}$ must hold all the cells needed to carry out those computations.
In this particular case, since the state of a cell at time $t+1$ is determined by the cell itself **and its neighbors**, we must copy each of these into $X_{copia}$. This causes no problems except for the cells at the ends of each block. To solve this we also have to copy the neighbors that do not appear in our block of 5 cells but that are nevertheless needed for the calculations.
In this way, for an array of 200 cells whose states are computed in blocks of 5, we will need tiles of 7 cells in shared memory.
Below we show how the data is copied to shared memory. First we show a Python program to get a clearer idea of what we are after; only afterwards do we move on to the kernel.
```python
import numpy as np
```
Suppose an array A with 17 initial states. Our intention is to compute the state of each cell at the next time step. We will use the tiling method to copy the data. Each tile takes care of 4 entries of A, so according to master equation (1) the tile size needs to be 6.
```python
A = np.array([1,2,3,4,5,6,7,8,9,10,11,12, 13, 14, 15, 16, 17])
tesela_A = np.ones(6)
```
```python
# blockDim is the number of cells that will be copied into the tile
# gridDim is the number of blocks we will have
# ANCHO_TESELA is the number of entries needed to compute blockDim cells at the next step
blockDim = 4
gridDim = len(A)/blockDim+1
ANCHO_TESELA = blockDim+2
# We are back to two loops...
# The first loop goes block by block over A
for blockIdx in xrange(gridDim):
    # the second loop fetches the elements of each block
    for tx in xrange(ANCHO_TESELA-1):
        # the required elements are copied into the tile
        if blockDim*blockIdx + tx-1 >= 0 and blockDim*blockIdx + tx-1 < len(A) :
            tesela_A[tx] = A[blockDim*blockIdx + tx-1]
        # and if we have reached the ends of A, a 0 is placed instead
        else:
            tesela_A[tx] = 0.0
    # This if/else places the element on the right boundary of the tile
    if blockDim*(blockIdx+1) < len(A):
        tesela_A[ANCHO_TESELA-1] = A[blockDim*(blockIdx+1)]
    else:
        tesela_A[ANCHO_TESELA-1] = 0.
    print tesela_A
```
[ 0. 1. 2. 3. 4. 5.]
[ 4. 5. 6. 7. 8. 9.]
[ 8. 9. 10. 11. 12. 13.]
[ 12. 13. 14. 15. 16. 17.]
[ 16. 17. 0. 0. 0. 0.]
Note how in each tile the 4 central values correspond to the cells whose state will be computed at time $t+1$. In the case of the last tile, since the values of A have already been covered, the tile is filled with $0$'s so that no erroneous calculations occur.
Now we can move on to the kernel in CUDA C. Some names were changed because of how the programs are written, but the idea is the same. Among these changes, note that `blockDim` was renamed `TAMANIO_BLOQUE`, and that `tesela_idx` is introduced, which is really the `blockDim*blockIdx + tx-1` we worked with in Python.
This index is used because for each entry of A we also need its left neighbor (idx-1). With `tesela_idx` we cover each of these. All that remains is the right neighbor of the last cell, which is handled by another small conditional `if else`.
```C++
__shared__ float tesela_X[TAMANIO_BLOQUE+2] ;
int tx = threadIdx.x ;
int idx = blockIdx.x*TAMANIO_BLOQUE + tx ;
int tesela_idx = idx - 1 ;
if (tx < TAMANIO_BLOQUE) {
if ((tesela_idx >= 0) && (tesela_idx < Dim_Camino) ) {
tesela_X[tx] = X[tesela_idx] ;
} else {
tesela_X[tx] = 0.0f ;
}
__syncthreads() ;
}
if (blockDim.x*(blockIdx.x+1) < Dim_Camino) {
tesela_X[TAMANIO_BLOQUE+1] = X[blockDim.x*(blockIdx.x +1)] ;
} else {
    tesela_X[TAMANIO_BLOQUE+1] = 0.0f ;
}
__syncthreads() ;
```
Once the data from `X` have been copied into `tesela_X`, all that remains is to overwrite `X` with the new values. That is left to you.
It is also important to set the block size `TAMANIO_BLOQUE`. So it is time for the reader to get to work and complete the `kernel`, and then plot the time evolution of the probability distribution of the random walker. Assume at first that $p = q = \frac{1}{2}$.
We recommend plotting with `matplotlib`'s `imshow()` function.
This method of solving the master equation numerically is called **exact enumeration**; it is extremely important and widely used to solve partial differential equations.
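Before completing the kernel, it may help to have a plain CPU reference of what the result should look like. The following is a minimal sketch of the exact-enumeration update for the 1D master equation with $p = q = \frac{1}{2}$; the grid size and number of steps are illustrative choices, not values fixed by the exercise.
```python
import numpy as np
import matplotlib.pyplot as plt

p = 0.5                      # probability of stepping right (q = 1 - p)
nCells, nSteps = 200, 100    # illustrative sizes
P = np.zeros(nCells)
P[nCells // 2] = 1.0         # walker starts at the centre

history = [P.copy()]
for _ in range(nSteps):
    newP = np.zeros_like(P)
    # P(x, t+1) = p*P(x-1, t) + (1-p)*P(x+1, t); boundaries left at 0 here
    newP[1:-1] = p * P[:-2] + (1 - p) * P[2:]
    P = newP
    history.append(P.copy())

plt.imshow(np.array(history), aspect="auto", origin="lower")
plt.xlabel("cell"); plt.ylabel("time step")
plt.show()
```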
**[1b]** Once you have obtained the images, change the value of $p$ to see how the probability distribution changes.
## Two dimensions
**[2]** Write the master equation of the random walker in two dimensions.
**[3]** Modify your code to obtain a series of images with which you can observe the time evolution of the probability distribution in 2 dimensions.
**Hint**: In this case you will have to use a `__shared__` matrix rather than a one-dimensional array. We recommend reviewing the notebooks on matrix multiplication to recall how the indexing works.
We will now have four indices:
```C++
int Fila = blockIdx.y*BLOCK_SIZE + ty ;
int Columna = blockIdx.x*BLOCK_SIZE + tx ;
int Fila_copia = Fila - 1 ; int Columna_copia = Columna - 1 ;
if( (Fila_copia >= 0) && (Fila_copia < DimY) && (Columna_copia >= 0) && (Columna_copia < DimX) ) {
ds_copiaPlano[ty][tx] = Plano[Fila_copia][Columna_copia];
} else {
ds_copiaPlano[ty][tx] = 0.0f ;
}
```
If you get lost with the indices and the copies, write a Python code similar to the one above to use as a guide; a sketch of such a guide is given below.
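For instance, a minimal Python sketch of the 2D tile copy with a one-cell halo and zero padding outside the domain could look like this (all sizes below are illustrative):
```python
import numpy as np

BLOCK_SIZE = 4
DimY, DimX = 8, 8
Plano = np.arange(DimY * DimX, dtype=float).reshape(DimY, DimX)

# one tile per block, with a 1-cell halo on each side
for by in range(DimY // BLOCK_SIZE):
    for bx in range(DimX // BLOCK_SIZE):
        tesela = np.zeros((BLOCK_SIZE + 2, BLOCK_SIZE + 2))
        for ty in range(BLOCK_SIZE + 2):
            for tx in range(BLOCK_SIZE + 2):
                fila, col = by * BLOCK_SIZE + ty - 1, bx * BLOCK_SIZE + tx - 1
                if 0 <= fila < DimY and 0 <= col < DimX:
                    tesela[ty, tx] = Plano[fila, col]
        print(tesela)
```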
## Boundaries
So far we have not dealt with the problem of the boundaries, but we could not escape it. Suppose the walls are *reflecting* rather than *absorbing*, which will make the walker "bounce" off the boundaries.
**[4]** Write the rule that the probabilities have to follow when a walker reaches any of the four boundaries.
**[5]** Implement this rule in your code and observe what happens.
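As a hedged 1D illustration (the 2D case is analogous), one common convention for reflecting walls is that a step which would leave the domain is blocked and the walker stays in place; total probability is then conserved:
```python
import numpy as np

def step_reflecting(P, p=0.5):
    """One exact-enumeration step with reflecting walls (1D sketch)."""
    q = 1 - p
    newP = np.zeros_like(P)
    newP[1:-1] = p * P[:-2] + q * P[2:]
    # a move that would leave the domain is blocked, so that probability stays put
    newP[0] = q * (P[0] + P[1])
    newP[-1] = p * (P[-1] + P[-2])
    return newP
```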
## A first approach to PDEs
Suppose now that the changes in space and time of the random walker occur in increments $\delta x$ and $\delta t$. Equation (1) then becomes
$$ P(x, t+\delta t) = pP(x-\delta x, t) + qP(x+\delta x, t) $$
If we now expand each term in a Taylor series (up to 2nd order), we arrive at:
$$ \frac{\partial P}{\partial t}(x, t) = (q-p)\frac{\delta x}{\delta t}\frac{\partial P}{\partial x}(x, t)+ \frac{\delta x^2}{2\delta t}\frac{\partial^2 P}{\partial x^2}(x, t) $$
If we now return to the case $p = q = \frac{1}{2}$ and set $D = \frac{\delta x^2}{2\delta t}$, we obtain the well-known diffusion equation.
$$\frac{\partial P}{\partial t}(x, t) = D\frac{\partial^2 P}{\partial x^2}(x, t)$$
**[6]** The analytical solutions of this PDE are well known. Compare them with your numerical solution.
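A minimal sketch of such a comparison for a point source, using the Gaussian fundamental solution of the diffusion equation (the grid size, number of steps, $\delta x$ and $\delta t$ below are illustrative choices):
```python
import numpy as np

dx, dt = 1.0, 1.0
D = dx**2 / (2*dt)
nCells, nSteps = 201, 400

P = np.zeros(nCells); P[nCells // 2] = 1.0
for _ in range(nSteps):
    newP = np.zeros_like(P)
    newP[1:-1] = 0.5 * P[:-2] + 0.5 * P[2:]
    P = newP

x = (np.arange(nCells) - nCells // 2) * dx
t = nSteps * dt
analytic = np.exp(-x**2 / (4*D*t)) / np.sqrt(4*np.pi*D*t)
# lattice parity: only every other site is occupied, each carrying ~2*dx of probability density
occ = (np.arange(nCells) - nCells // 2) % 2 == nSteps % 2
print("max abs difference on occupied sites:",
      np.max(np.abs(P[occ] - 2*dx*analytic[occ])))
```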
Thus, we see that the exact-enumeration method for a random walker provides a numerical method for solving this evolution PDE. [The method is called finite differences.]
## References
+ [Master equation](https://en.wikipedia.org/wiki/Master_equation)
+ [Finite difference method](https://en.wikipedia.org/wiki/Finite_difference_method)
+ [Diffusion equation](https://en.wikipedia.org/wiki/Diffusion_equation)
```python
```
|
a76f1736e7d6d8b71e370acac02a70029ca2ac13
| 16,567 |
ipynb
|
Jupyter Notebook
|
Parte 2 - PyCUDA y aplicaciones/04 - Ecuacion maestra.ipynb
|
brincolab/Servicio_social
|
ac3cb224a2c1934e84c490a40420cf8bbad30235
|
[
"MIT"
] | 5 |
2016-03-20T00:45:31.000Z
|
2020-11-25T23:54:22.000Z
|
Parte 2 - PyCUDA y aplicaciones/04 - Ecuacion maestra.ipynb
|
Sebzero77/Cuda_y_PyCuda
|
ac3cb224a2c1934e84c490a40420cf8bbad30235
|
[
"MIT"
] | 4 |
2015-08-25T19:36:00.000Z
|
2018-04-20T07:10:23.000Z
|
Parte 2 - PyCUDA y aplicaciones/04 - Ecuacion maestra.ipynb
|
Sebzero77/Cuda_y_PyCuda
|
ac3cb224a2c1934e84c490a40420cf8bbad30235
|
[
"MIT"
] | 11 |
2015-02-11T17:55:55.000Z
|
2020-11-25T23:54:34.000Z
| 43.828042 | 433 | 0.611939 | true | 3,864 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.737158 | 0.822189 | 0.606083 |
__label__spa_Latn
| 0.995464 | 0.246465 |
# Random Signals
*This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing. Please direct questions and suggestions to [Sascha.Spors@uni-rostock.de](mailto:Sascha.Spors@uni-rostock.de).*
## Cumulative Distribution Functions
A random process can be characterized by the statistical properties of its amplitude values. [Cumulative distribution functions](https://en.wikipedia.org/wiki/Cumulative_distribution_function) (CDFs) are one possibility to do so.
### Univariate Cumulative Distribution Function
The univariate CDF $P_x(\theta, k)$ of a continuous-amplitude real-valued random signal $x[k]$ is defined as
\begin{equation}
P_x(\theta, k) := \Pr \{ x[k] \leq \theta\}
\end{equation}
where $\Pr \{ \cdot \}$ denotes the probability that the given condition holds. The univariate CDF quantifies the probability that for the entire ensemble and for a fixed time index $k$ the amplitude $x[k]$ is smaller or equal to $\theta$. The term '*univariate*' reflects the fact that only one random process is considered.
The CDF shows the following properties which can be concluded directly from its definition
\begin{equation}
\lim_{\theta \to -\infty} P_x(\theta, k) = 0
\end{equation}
and
\begin{equation}
\lim_{\theta \to \infty} P_x(\theta, k) = 1
\end{equation}
The former property results from the fact that all amplitude values $x[k]$ are larger than $- \infty$, the latter from the fact that all amplitude values lie within $- \infty$ and $\infty$. The univariate CDF $P_x(\theta, k)$ is furthermore a non-decreasing function.
The probability that $\theta_1 < x[k] \leq \theta_2$ is given as
\begin{equation}
\Pr \{\theta_1 < x[k] \leq \theta_2\} = P_x(\theta_2, k) - P_x(\theta_1, k)
\end{equation}
Hence, the probability that a continuous-amplitude random signal takes a specific value $x[k]=\theta$ is zero when calculated by means of the CDF. This motivates the definition of probability density functions introduced later.
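For instance, for a zero-mean, unit-variance Gaussian amplitude distribution this probability can be evaluated numerically as follows ($\theta_1$ and $\theta_2$ are illustrative values):
```python
from scipy.stats import norm

theta1, theta2 = -1.0, 1.0
prob = norm.cdf(theta2) - norm.cdf(theta1)
print(prob)  # ~0.6827 for a standard normal
```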
### Bivariate Cumulative Distribution Function
The statistical dependencies between two signals are frequently of interest in statistical signal processing. The bivariate or joint CDF $P_{xy}(\theta_x, \theta_y, k_x, k_y)$ of two continuous-amplitude real-valued random signals $x[k]$ and $y[k]$ is defined as
\begin{equation}
P_{xy}(\theta_x, \theta_y, k_x, k_y) := \Pr \{ x[k_x] \leq \theta_x \wedge y[k_y] \leq \theta_y \}
\end{equation}
The joint CDF quantifies the probability for the entire ensemble of sample functions that for a fixed $k_x$ the amplitude value $x[k_x]$ is smaller or equal to $\theta_x$ and that for a fixed $k_y$ the amplitude value $y[k_y]$ is smaller or equal to $\theta_y$. The term '*bivariate*' reflects the fact that two random processes are considered. The bivariate CDF can also be used to characterize the statistical properties of one random signal $x[k]$ at two different time-instants $k_x$ and $k_y$ by setting $y[k] = x[k]$
\begin{equation}
P_{xx}(\theta_1, \theta_2, k_1, k_2) := \Pr \{ x[k_1] \leq \theta_1 \wedge x[k_2] \leq \theta_2 \}
\end{equation}
The definition of the bivariate CDF can be extended straightforward to the case of more than two random variables. The resulting CDF is termed as multivariate CDF.
## Probability Density Functions
[Probability density functions](https://en.wikipedia.org/wiki/Probability_density_function) (PDFs) describe the probability for one or multiple random signals to take on a specific value. Again the univariate case is discussed first.
### Univariate Probability Density Function
The univariate PDF $p_x(\theta, k)$ of a continuous-amplitude real-valued random signal $x[k]$ is defined as the derivative of the univariate CDF
\begin{equation}
p_x(\theta, k) = \frac{\partial}{\partial \theta} P_x(\theta, k)
\end{equation}
Due to the properties of the CDF and the definition of the PDF, it shows the following properties
\begin{equation}
p_x(\theta, k) \geq 0
\end{equation}
and
\begin{equation}
\int\limits_{-\infty}^{\infty} p_x(\theta, k) \, \mathrm{d}\theta = P_x(\infty, k) = 1
\end{equation}
The univariate PDF has only positive values and the area below the PDF is equal to one.
Due to the definition of the PDF as derivative of the CDF, the CDF can be computed from the PDF by integration
\begin{equation}
P_x(\theta, k) = \int\limits_{-\infty}^{\theta} p_x(\theta, k) \, \mathrm{d}\theta
\end{equation}
#### Example - Estimate of an univariate PDF by the histogram
In the process of calculating a [histogram](https://en.wikipedia.org/wiki/Histogram), the entire range of amplitude values of a random signal is split into a series of intervals (bins). For a given random signal the number of samples is counted which fall into one of these intervals. This is repeated for all intervals. The counts are finally normalized with respect to the total number of samples. This process constitutes a numerical estimation of the PDF of a random process.
In the following example the histogram of an ensemble of random signals is computed for each time index $k$. The CDF is computed by taking the cumulative sum over the histogram bins. This constitutes a numerical approximation of above integral
\begin{equation}
\int\limits_{-\infty}^{\theta} p_x(\theta, k) \, \mathrm{d}\theta \approx \sum_{i=0}^{N} p_x(\theta_i, k) \, \Delta\theta_i
\end{equation}
where $p_x(\theta_i, k)$ denotes the $i$-th bin of the PDF and $\Delta\theta_i$ its width.
```python
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
K = 32 # number of temporal samples
N = 10000 # number of sample functions
bins = 100 # number of bins for the histogram
# draw sample functions from a random process
np.random.seed(2)
x = np.random.normal(size=(N, K))
x += np.tile(np.cos(2*np.pi/K*np.arange(K)), [N, 1])
# compute the histogram
px = np.zeros((bins, K))
for k in range(K):
px[:, k], edges = np.histogram(x[:, k], bins=bins, range=(-4,4), density=True)
# compute the CDF
Px = np.cumsum(px, axis=0) * 8/bins
# plot the PDF
plt.figure(figsize=(10,6))
plt.pcolor(np.arange(K), edges, px)
plt.title(r'Estimated PDF $\hat{p}_x(\theta, k)$')
plt.xlabel(r'$k$')
plt.ylabel(r'$\theta$')
plt.colorbar()
plt.autoscale(tight=True)
# plot the CDF
plt.figure(figsize=(10,6))
plt.pcolor(np.arange(K), edges, Px, vmin=0, vmax=1)
plt.title(r'Estimated CDF $\hat{P}_x(\theta, k)$')
plt.xlabel(r'$k$')
plt.ylabel(r'$\theta$')
plt.colorbar()
plt.autoscale(tight=True)
```
**Exercise**
* Change the number of sample functions `N` or/and the number of `bins` and rerun the examples. What changes? Why?
In numerical simulations of random processes only a finite number of sample functions and temporal samples can be considered. This holds also for the number of intervals (bins) used for the histogram. As a result, numerical approximations of the CDF/PDF will be subject to statistical uncertainties that typically will become smaller if the number of sample functions `N` is increased.
### Bivariate Probability Density Function
The bivariate or joint PDF $p_{xy}(\theta_x, \theta_y, k_x, k_y)$ of two continuous-amplitude real-valued random signals $x[k]$ and $y[k]$ is defined as
\begin{equation}
p_{xy}(\theta_x, \theta_y, k_x, k_y) := \frac{\partial^2}{\partial \theta_x \partial \theta_y} P_{xy}(\theta_x, \theta_y, k_x, k_y)
\end{equation}
The bivariate PDF quantifies the joint probability that $x[k]$ takes the value $\theta_x$ and that $y[k]$ takes the value $\theta_y$ for the entire ensemble of sample functions.
If $x[k] = y[k]$ the bivariate PDF $p_{xx}(\theta_1, \theta_2, k_1, k_2)$ describes the probability that the random signal $x[k]$ takes the value $\theta_1$ at time instance $k_1$ and the value $\theta_2$ at time instance $k_2$. Hence, $p_{xx}(\theta_1, \theta_2, k_1, k_2)$ provides insights into the temporal dependencies of a random signal $x[k]$.
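As with the univariate case, the bivariate PDF can be estimated numerically, for instance with a two-dimensional histogram. The following sketch applies `np.histogram2d` to a random process generated like the one above, evaluated at two time instants (the chosen instants are illustrative):
```python
import numpy as np
import matplotlib.pyplot as plt

N, K = 10000, 32
x = np.random.normal(size=(N, K))
x += np.tile(np.cos(2 * np.pi / K * np.arange(K)), [N, 1])

k1, k2 = 4, 5  # two time instants
pxx, xedges, yedges = np.histogram2d(x[:, k1], x[:, k2], bins=50,
                                     range=[[-4, 4], [-4, 4]], density=True)

plt.pcolormesh(xedges, yedges, pxx.T)
plt.xlabel(r'$\theta_1$'); plt.ylabel(r'$\theta_2$')
plt.colorbar()
plt.show()
```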
This notebook is provided as [Open Educational Resource](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Sascha Spors, Digital Signal Processing - Lecture notes featuring computational examples, 2016*.
|
0e87109782d943fc06108ab5bc8abbb029813c3a
| 58,020 |
ipynb
|
Jupyter Notebook
|
random_signals/distributions.ipynb
|
ganlubbq/digital-signal-processing-lecture
|
f9ac5b2f5500aa612b48d1d920c7cba366c44dba
|
[
"MIT"
] | 2 |
2017-11-14T16:14:37.000Z
|
2021-05-16T21:01:41.000Z
|
random_signals/distributions.ipynb
|
ganlubbq/digital-signal-processing-lecture
|
f9ac5b2f5500aa612b48d1d920c7cba366c44dba
|
[
"MIT"
] | null | null | null |
random_signals/distributions.ipynb
|
ganlubbq/digital-signal-processing-lecture
|
f9ac5b2f5500aa612b48d1d920c7cba366c44dba
|
[
"MIT"
] | 2 |
2020-06-26T14:19:29.000Z
|
2020-12-11T08:31:29.000Z
| 216.492537 | 25,952 | 0.887901 | true | 2,287 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.882428 | 0.845942 | 0.746483 |
__label__eng_Latn
| 0.953726 | 0.572662 |
# Kozeny-Carman equation
\begin{equation}
K = \dfrac{d_p^2}{180}\dfrac{\theta^3}{(1-\theta)^2} \dfrac{\rho g }{\mu}
\end{equation}
```python
%reset -f
```
```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import root
#Globals
rho = 1000. #kg/m3
g   = 9.81  #m/s2
mu = 0.001 #Ns/m2
dp = 4.4E-4 #m
def KozenyCarman(theta):
return dp**2 * theta**3 * rho * g / (180 * (1-theta)**2 * mu)
def findTheta(K_expected=1.0E-8):
def minimizer(theta):
K_init = KozenyCarman(theta)
return (K_init - K_expected)**2
solution = root(minimizer,0.1)
print(solution.message + f" >> Porosity = {solution.x}")
return solution.x
```
```python
porosity = np.linspace(0.001,0.5,100)
hydrCond = KozenyCarman(porosity)
```
```python
fig,ax = plt.subplots(figsize=(8,5),facecolor="white");
ax.plot(porosity,hydrCond,lw=3,c="blue",label='Kozeny-Carman')
ax.plot(porosity,840*(porosity**3.1),lw=3,c="red",label="Chen2010")
ax.set_yscale('log')
ax.set_xlabel("Porosity $\\theta$ ")
ax.set_ylabel("Hydraulic conductivity \n$K$ [m/s]")
ax.axhline(y=1.0E-8,lw=1,ls='dotted')
ax.legend()
plt.show()
```
```python
theta2 = findTheta(1.0E-7)
```
The solution converged. >> Porosity = [0.02086702]
```python
print("{:.4E} m/s".format(KozenyCarman(0.35)))
```
1.0707E-03 m/s
```python
from jupypft import attachmentRateCFT
```
```python
katt,_ = attachmentRateCFT.attachmentRate(dp=1.0E-7,dc=4.4E-4,
q=0.35E-3,
theta=0.35,
visco=0.001,
rho_f=1000.,
rho_p=1050.0,
A=1.0E-20,
T=298.0,
alpha=0.0043273861959162,
debug=True)
```
Diffusion coeff: 4.3654E-12
Darcy velocity: 3.5000E-04
Pore-water vel: 1.0000E-03
---
Happel parameter: 5.2527E+01
NR number: 2.2727E-04
NPe number: 3.5277E+04
NvW number: 2.4305E+00
NGr number: 3.1211E-06
---
etaD collector: 1.0409E-02
etaI collector: 1.9641E-05
etaG collector: 2.8626E-07
eta0 collector: 1.0429E-02
---
Attach rate : 1.0000E-04
```python
"{:.6E}".format(0.0043273861959162)
```
'4.327386E-03'
```python
1.0E-4/katt
```
1.0000000000000002
```python
```
|
8da1b17b6da3aa1e8693a9f96b47114f1e71a1b1
| 29,832 |
ipynb
|
Jupyter Notebook
|
notebooks/_old/.ipynb_checkpoints/KozenyCarman-checkpoint.ipynb
|
edsaac/bioclogging
|
bd4be9c9bb0adcc094ce4ef45f6066ffde1b825f
|
[
"MIT"
] | null | null | null |
notebooks/_old/.ipynb_checkpoints/KozenyCarman-checkpoint.ipynb
|
edsaac/bioclogging
|
bd4be9c9bb0adcc094ce4ef45f6066ffde1b825f
|
[
"MIT"
] | null | null | null |
notebooks/_old/.ipynb_checkpoints/KozenyCarman-checkpoint.ipynb
|
edsaac/bioclogging
|
bd4be9c9bb0adcc094ce4ef45f6066ffde1b825f
|
[
"MIT"
] | null | null | null | 113.862595 | 24,168 | 0.875939 | true | 916 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.913677 | 0.718594 | 0.656563 |
__label__yue_Hant
| 0.295777 | 0.363746 |
# Latent variable models and a review of the K-Means algorithm
> **What does latent variable mean?**
> To answer this question, we go back to the *Latin* etymological root of the word latent. It comes from the Latin word **latens**, which means hidden or concealed.
> In the context of probabilistic modelling, latent variables are variables that we never observe but that (we infer) are there.
## 1. Latent variables
### Why do we consider latent variables?
There are several reasons why allowing latent variables in our models is extremely important. Some of them:
1. **Just because we do not observe a variable does not mean it does not exist.**
2. **They often allow us to obtain simpler models.**
**Example:**
A company has just opened a position on its data science team. The HR department wants to interview several candidates to find someone suitable for the position.
For this, there is already a scoring scheme that involves several variables:
- Academic degree.
- Grade point average of the last academic degree.
- Phone interview.
- On-site interview.
However, the on-site interview is an event that can involve considerable cost, and from experience several candidates can be discarded using only the knowledge of the other variables. The idea is to develop a model using historical data from the HR department:
| Candidate | Degree | GPA | Phone interview | On-site interview |
| --------- | ------ | --- | --------------- | ----------------- |
| 1 | BSc | 8.4 | 7 | 5 |
| 2 | MSc | 8.0 | 7 | 6 |
| 3 | BSc | 9.5 | 8 | 9 |
| 4 | PhD | 8.9 | 9 | 10 |
If we try to set up a model relating these variables, after a bit of inspection we would conclude that all of these variables are related to one another, obtaining a fully connected model,
so we would not have a single clue about the structure of the model, and we would have to define the joint probability over all the variables (an exponential number of parameters).
*Alternative 1*: consider a structured model of the form
$$
p(x_1, x_2, x_3, x_4) = \frac{\exp\{-w^T x\}}{Z},
$$
so we would only have 5 parameters $w_0, w_1, w_2, w_3, w_4$. However, $Z$ is a normalization constant that involves a sum over all possible values of the four random variables.
*Alternative 2*: consider a latent Intelligence variable,
so the model would be:
$$
p(x_1, x_2, x_3, x_4) = \sum_{I} p(x_1, x_2, x_3, x_4 | I) p(I) = \sum_{I} p(x_1 | I)p(x_2 | I)p(x_3 | I)p(x_4 | I) p(I)
$$
With this we notably reduce the complexity of the model.
3. **Practical applications -> Clustering -> Customer segmentation, search engines, recommender systems, ...**
In clustering applications we aim to discover segmentations in the data. This segmentation can be understood as a latent variable.
## 2. Clustering
You have probably already heard about clustering. It is one of the most important applications in **unsupervised** learning, and in a data analysis project it will surely not take more than a month before you need this kind of technique. So we will study a couple of them.
**Example:**
Credit departments in general (retail, SMEs, corporate) usually study the relationship between an applicant's income and debt to decide whether or not to grant the credit.
Several heuristics are used. For example:
- If debts exceed 40% of leverage (equity + income + liabilities), the company is high risk.
- If debts exceed 3 months of income, the company is high risk.
We can study this relationship and segment the clients according to their profile in these variables.
```python
# Import the function to generate the data
from bank_customer_data import generate_bank_customer_data
# Import pyplot
from matplotlib import pyplot as plt
```
```python
# Generate the data
data = generate_bank_customer_data()
```
```python
data.head(10)
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>income</th>
<th>debt</th>
<th>labels</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>2.784242</td>
<td>1.362889</td>
<td>0.0</td>
</tr>
<tr>
<th>1</th>
<td>1.592602</td>
<td>2.562608</td>
<td>0.0</td>
</tr>
<tr>
<th>2</th>
<td>3.480014</td>
<td>1.563046</td>
<td>0.0</td>
</tr>
<tr>
<th>3</th>
<td>4.254610</td>
<td>2.204767</td>
<td>2.0</td>
</tr>
<tr>
<th>4</th>
<td>1.000312</td>
<td>5.724966</td>
<td>1.0</td>
</tr>
<tr>
<th>5</th>
<td>2.051364</td>
<td>1.143760</td>
<td>0.0</td>
</tr>
<tr>
<th>6</th>
<td>3.715297</td>
<td>2.682284</td>
<td>0.0</td>
</tr>
<tr>
<th>7</th>
<td>2.142184</td>
<td>2.812753</td>
<td>0.0</td>
</tr>
<tr>
<th>8</th>
<td>6.087734</td>
<td>4.938509</td>
<td>2.0</td>
</tr>
<tr>
<th>9</th>
<td>1.761891</td>
<td>6.742030</td>
<td>1.0</td>
</tr>
</tbody>
</table>
</div>
```python
# Data
plt.scatter(data["income"], data["debt"], c="gray", alpha=0.6)
plt.xlabel("Ingresos mensuales (x100k MXN)")
plt.ylabel("Deuda (x100k MXN)")
```
What we want in clustering is to identify the groups to which each of the clients belongs.
```python
# Grupos "reales"
plt.scatter(data["income"], data["debt"], c=data["labels"], cmap="Accent", alpha=0.6)
plt.xlabel("Ingresos mensuales (x100k MXN)")
plt.ylabel("Deuda (x100k MXN)")
```
This idea is known as **hard clustering**; under this scheme, we identify for each point a single group to which it belongs, that is:
$$
\text{cluster_id}_x = f(x)
$$
- The green points are 100% green.
- The blue points are 100% blue.
- The gray points are 100% gray.
However, let us focus for a moment on the rectangles below ($[2, 3] \times [4, 5]$ and $[4, 5] \times [3, 4]$):
```python
plt.figure(figsize=(10, 5))
plt.subplot(1, 2, 1)
plt.scatter(data["income"], data["debt"], c=data["labels"], cmap="Accent", alpha=0.6)
plt.axis([1, 3, 4, 5])
plt.xlabel("Ingresos mensuales (x100k MXN)")
plt.ylabel("Deuda (x100k MXN)")
plt.subplot(1, 2, 2)
plt.scatter(data["income"], data["debt"], c=data["labels"], cmap="Accent", alpha=0.6)
plt.axis([3.5, 5, 2, 4])
plt.xlabel("Ingresos mensuales (x100k MXN)")
plt.ylabel("Deuda (x100k MXN)")
```
From an intuitive point of view, it is not very clear which group the points in these regions belong to. We would be tempted to say that they belong to one group with a certain probability and to another group with a certain probability.
This idea is known as **soft clustering**, and it is closely related to **probabilistic clustering**.
$$
p(\text{cluster_id}_x |x)
$$
By taking a probabilistic approach, we gain several side benefits:
- Hyperparameter tuning.
- A generative model.
## 3. A hard clustering algorithm: K-Means
Although K-Means is one of the best-known and most widely used hard clustering algorithms, we will revisit it in a couple of sessions from a probabilistic perspective.
So it is worth studying it first.
**Problem:** given a set of observations $x_1, x_2, \dots, x_N \in \mathbb{R}^d$, partition the $N$ observations into $k$ ($\leq N$) clusters $\{1, 2, \dots, k\}$ so as to minimize the sum of squared distances (variance).
**Algorithm:**
1. Initialize the parameters $\theta = \{\mu_1, \dots, \mu_k\}$ randomly.
2. Repeat until convergence (until the parameters stop changing):
    1. For each point, compute the nearest centroid:
$$
c_i = \arg \min_{c} ||x_i - \mu_c||.
$$
    2. Update the centroids:
$$
\mu_c = \frac{\sum_{i: c_i = c} x_i}{\sum_{i: c_i = c} 1}
$$
**Homework:** Implement the K-Means algorithm; a minimal sketch is given below.
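A minimal NumPy sketch of the algorithm above (initializing the centroids by sampling $k$ points and running a fixed number of iterations instead of a convergence test are illustrative simplifications):
```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Plain K-Means: X is (N, d), returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(n_iter):
        # 1. assign each point to its nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # 2. update each centroid as the mean of its assigned points
        for c in range(k):
            if np.any(labels == c):
                centroids[c] = X[labels == c].mean(axis=0)
    return centroids, labels
```
For the bank data above one could call, for example, `kmeans(data[["income", "debt"]].values, k=3)`.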
In class we will use sklearn:
```python
# Import sklearn.cluster.KMeans
from sklearn.cluster import KMeans
```
```python
# sklearn's algorithm
KMeans?
```
```python
# Instantiate the algorithm
kmeans = KMeans(n_clusters=3)
```
```python
# Fit
kmeans.fit(X=data[["income", "debt"]])
```
KMeans(algorithm='auto', copy_x=True, init='k-means++', max_iter=300,
n_clusters=3, n_init=10, n_jobs=None, precompute_distances='auto',
random_state=None, tol=0.0001, verbose=0)
```python
# Plot
plt.scatter(data["income"], data["debt"], c=kmeans.labels_, cmap="Accent", alpha=0.6)
plt.plot(kmeans.cluster_centers_[0, 0], kmeans.cluster_centers_[0, 1], "*g", ms=20)
plt.plot(kmeans.cluster_centers_[1, 0], kmeans.cluster_centers_[1, 1], "*b", ms=20)
plt.plot(kmeans.cluster_centers_[2, 0], kmeans.cluster_centers_[2, 1], "*k", ms=20)
plt.xlabel("Ingresos mensuales (x100k MXN)")
plt.ylabel("Deuda (x100k MXN)")
```
Everything looks fine so far. **What happens if we increase the number of clusters?**
One metric we can use to judge how good the grouping is, is the mean squared distance from each point to its corresponding centroid:
$$
\frac{1}{N}\sum_{i=1}^N ||x_i - \mu_{c_i}||^2.
$$
```python
from sklearn.model_selection import train_test_split
import numpy as np
```
```python
def msd(X, cluster_id, centroids):
"""
Mean squared distance.
:param data: Data.
:param centroids: Centroids.
:return: Mean squared distance.
"""
# Number of clusters
k = centroids.shape[0]
# Number of points
N = X.shape[0]
# Distances initialization
distances = np.zeros(N)
# Compute distances to corresponding cluster
for j in range(k):
distances[cluster_id == j] = np.linalg.norm(X[cluster_id == j] - centroids[j, :], axis=1)
return (distances**2).mean()
```
```python
X_train, X_test = train_test_split(data[["income", "debt"]], test_size=0.2)
msd_train = []
msd_test = []
for k in range(2, 20):
    # Instantiate the algorithm
    kmeans = KMeans(n_clusters=k)
    # Fit
    kmeans.fit(X=X_train)
    # Metric on training data
    msd_train.append(msd(X_train, kmeans.labels_, kmeans.cluster_centers_))
    # Metric on test data
    msd_test.append(msd(X_test, kmeans.predict(X_test), kmeans.cluster_centers_))
```
```python
plt.plot(range(2, 20), msd_train, label="train")
plt.plot(range(2, 20), msd_test, label="test")
plt.legend()
plt.xlabel("Número de clusters")
plt.ylabel("Suma de distancias cuadradas")
```
```python
dist_orig = np.array(msd_test)**2 + np.arange(2, 20)**2
dist_orig
```
array([ 12.12205427, 12.47908258, 17.78525849, 26.3824619 ,
36.99025887, 49.80146088, 64.54630547, 81.42108952,
100.34572121, 121.31073854, 144.29311666, 169.21797374,
196.2439913 , 225.19173974, 256.15941281, 289.14232662,
324.14097165, 361.14044157])
```python
np.arange(2, 20)[dist_orig.argmin()]
```
2
```python
# Plot
plt.scatter(X_train["income"], X_train["debt"], c=kmeans.labels_, cmap="inferno", alpha=0.6)
plt.xlabel("Ingresos mensuales (x100k MXN)")
plt.ylabel("Deuda (x100k MXN)")
```
As we can see, this metric always decreases with the number of clusters, which makes it quite hard to choose an appropriate number of clusters when it is unknown.
## 4. Gaussian mixture model (GMM)
As we saw, K-Means (and hard clustering algorithms in general) has several drawbacks:
- It is not clear how to choose the number of clusters.
- Some points may lie on a boundary between two or more clusters, and hard clustering does not let us express uncertainty about the membership.
To deal with these problems, we can set up a probabilistic model of our data.
So far we know a few distributions, among them the Gaussian, whose parameters we already know how to estimate.
What happens if we try to fit a **Gaussian distribution** to the data? That is, if we model the data with
$$
p(x|\theta) = \mathcal{N}(x|\mu, \Sigma), \qquad \theta=\{\mu, \Sigma\}
$$
```python
# Data
plt.scatter(data["income"], data["debt"], c="gray", alpha=0.6)
plt.xlabel("Ingresos mensuales (x100k MXN)")
plt.ylabel("Deuda (x100k MXN)")
```
```python
from scipy.stats import multivariate_normal
import numpy as np
```
```python
# Fit the parameters
mu = data[["income", "debt"]].mean()
cov = data[["income", "debt"]].cov()
# Define the random variable
X = multivariate_normal(mean=mu, cov=cov)
```
```python
# Data
plt.scatter(data["income"], data["debt"], c="gray", alpha=0.6)
# Gaussian
x = np.linspace(0, 8, 100)
y = np.linspace(0, 9, 100)
x, y = np.meshgrid(x, y)
z = X.pdf(np.dstack([x, y]))
plt.contour(x, y, z)
plt.xlabel("Ingresos mensuales (x100k MXN)")
plt.ylabel("Deuda (x100k MXN)")
```
```python
mu
```
income 2.913434
debt 4.104847
dtype: float64
However, this model does not seem to match our data. The region of highest probability (the mean) falls at a midpoint between the clusters, where there are not many data points.
### What if we use several Gaussians?
```python
# Fit the parameters
mu1 = data.loc[data["labels"] == 0, ["income", "debt"]].mean()
mu2 = data.loc[data["labels"] == 1, ["income", "debt"]].mean()
mu3 = data.loc[data["labels"] == 2, ["income", "debt"]].mean()
cov1 = data.loc[data["labels"] == 0, ["income", "debt"]].cov()
cov2 = data.loc[data["labels"] == 1, ["income", "debt"]].cov()
cov3 = data.loc[data["labels"] == 2, ["income", "debt"]].cov()
# Define the random variables
X1 = multivariate_normal(mean=mu1, cov=cov1)
X2 = multivariate_normal(mean=mu2, cov=cov2)
X3 = multivariate_normal(mean=mu3, cov=cov3)
```
```python
# Data
plt.scatter(data["income"], data["debt"], c="gray", alpha=0.6)
# Gaussian 1
x = np.linspace(0, 6, 100)
y = np.linspace(0, 5, 100)
x, y = np.meshgrid(x, y)
z = X1.pdf(np.dstack([x, y]))
plt.contour(x, y, z)
# Gaussian 2
x = np.linspace(0, 4, 100)
y = np.linspace(4, 9, 100)
x, y = np.meshgrid(x, y)
z = X2.pdf(np.dstack([x, y]))
plt.contour(x, y, z)
# Gaussian 3
x = np.linspace(3, 8, 100)
y = np.linspace(2, 8, 100)
x, y = np.meshgrid(x, y)
z = X3.pdf(np.dstack([x, y]))
plt.contour(x, y, z)
plt.xlabel("Ingresos mensuales (x100k MXN)")
plt.ylabel("Deuda (x100k MXN)")
```
**Much better!**
Each Gaussian explains one cluster of points, and the overall model is a weighted sum of these Gaussian densities:
$$
p(x | \theta) = \sum_{c=1}^{3} \pi_c \mathcal{N}(x | \mu_c, \Sigma_c), \qquad \theta = \{\pi_1, \pi_2, \pi_3, \mu_1, \mu_2, \mu_3, \Sigma_1, \Sigma_2, \Sigma_3\}
$$
**And how do we interpret this?**
Well, if we manage to find the parameters $\pi_1, \pi_2, \pi_3, \mu_1, \mu_2, \mu_3, \Sigma_1, \Sigma_2, \Sigma_3$ for this data set, we will have solved the (soft) clustering problem, since for each point we will obtain the probability that it came from each of the Gaussians.
**What do we gain?**
Compared with using a single Gaussian, we have added flexibility to our model. That is, with this structure we can represent complex data sets.
> Indeed, we can approximate almost any continuous distribution with a mixture of Gaussians to arbitrary precision, provided we include enough Gaussians in the mixture.
**At what cost?**
The number of parameters we have to estimate is multiplied by the number of Gaussians in the mixture.
### How do we find (train) the parameters?
We can maximize the likelihood function (under an independence assumption):
$$
\max_{\theta} p(X | \theta) = \prod_{i=1}^N p(x_i | \theta) = \prod_{i=1}^N \sum_{c=1}^{3} \pi_c \mathcal{N}(x_i | \mu_c, \Sigma_c)
$$
subject to:
\begin{align}
\sum_{c=1}^3 \pi_c & = 1 \\
\pi_c & \geq 0 \quad \text{for } c=1,2,3\\
\Sigma_c & \succ 0 \quad \text{for } c=1,2,3
\end{align}
That is, the covariance matrices must be positive definite (why?).
**Numerical difficulties:**
This optimization problem can be solved numerically with an algorithm such as gradient descent. However,
1. The constraint on the covariance matrices makes the optimization problem very hard to handle numerically.
A simplification that makes this constraint workable is to assume that the covariance matrices are diagonal:
$$
\Sigma_c = \text{diag}(\sigma_{c1}, \sigma_{c2}, \dots, \sigma_{cn}),
$$
2. The sum inside the product also makes computing the gradients rather involved. Commonly, to avoid the product, one takes the logarithm of the likelihood:
$$
\log p(X | \theta) = \log \left(\prod_{i=1}^N p(x_i | \theta) \right)= \sum_{i=1}^N \log\left(\sum_{c=1}^{3} \pi_c \mathcal{N}(x_i | \mu_c, \Sigma_c)\right)
$$
and with this we can see that a weighted sum remains inside the logarithm.
### So what do we do?
Fortunately, there is an alternative, probabilistically grounded algorithm called the **expectation-maximization algorithm**, which we will study in the upcoming classes not only for the Gaussian mixture problem but for training any model with **latent variables**.
**Latent variables?**
Recall that we proposed the following model:
$$
p(x | \theta) = \sum_{c=1}^{3} \pi_c \mathcal{N}(x | \mu_c, \Sigma_c), \qquad \theta = \{\pi_1, \pi_2, \pi_3, \mu_1, \mu_2, \mu_3, \Sigma_1, \Sigma_2, \Sigma_3\}
$$
We can actually think of this model as a model with a latent variable $t$ that determines which Gaussian each point belongs to:
We can then reasonably give $t$ three possible values (1, 2, and 3) that tell us which Gaussian the point came from. Remember that $t$ is a latent variable; we never observe it.
However, reasoning probabilistically, after training our Gaussian mixture we could ask the model: what is the most probable value of $t$ given the point $x$? --> **Clustering**
With this model, we can assign the following probabilities:
- Prior:
$$
p(t=c | \theta) = \pi_c
$$
- Likelihood:
$$
p(x | t=c, \theta) = \mathcal{N}(x | \mu_c, \Sigma_c)
$$
Reasonable, right?
And with the above,
$$
p(x | \theta) = \sum_{c=1}^3 p(x, t=c | \theta) = \sum_{c=1}^3 \underbrace{p(x | t=c, \theta)}_{\mathcal{N}(x | \mu_c, \Sigma_c)} \underbrace{p(t=c | \theta)}_{\pi_c},
$$
just like the intuitive model we had proposed.
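As a hedged aside, scikit-learn already implements this model; a minimal sketch of a soft clustering of the bank data above with `sklearn.mixture.GaussianMixture` (the choice of 3 components simply mirrors the example, it is not a tuned value):
```python
from sklearn.mixture import GaussianMixture

gmm = GaussianMixture(n_components=3, random_state=0)
gmm.fit(data[["income", "debt"]])
# posterior p(t=c | x) for each point: soft cluster memberships
responsibilities = gmm.predict_proba(data[["income", "debt"]])
print(responsibilities[:5].round(3))
```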
### Expectation-maximization algorithm for Gaussian mixtures - Intuition
Suppose we have the following T-shirt size data, and we want to determine which shirts are small and which are large:
```python
from shirts_size_data import generate_shirts_data
from scipy.stats.distributions import norm
```
```python
shirts_data = generate_shirts_data()
shirts_data.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>size</th>
<th>labels</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>31.554606</td>
<td>0.0</td>
</tr>
<tr>
<th>1</th>
<td>31.222931</td>
<td>0.0</td>
</tr>
<tr>
<th>2</th>
<td>30.678384</td>
<td>0.0</td>
</tr>
<tr>
<th>3</th>
<td>42.104057</td>
<td>1.0</td>
</tr>
<tr>
<th>4</th>
<td>40.355446</td>
<td>1.0</td>
</tr>
</tbody>
</table>
</div>
```python
plt.scatter(shirts_data["size"], np.zeros(len(shirts_data)), c="gray", alpha=0.6)
plt.ylim([-0.01, 0.3])
plt.xlabel("Tamaño (cm)")
```
How do we estimate the parameters of our latent variable model?
Let us analyze several cases:
1. If we knew from the start which shirts are small and which are large:
```python
plt.scatter(shirts_data.loc[shirts_data["labels"] == 0, "size"],
np.zeros((shirts_data["labels"] == 0).sum()),
alpha=0.6)
plt.scatter(shirts_data.loc[shirts_data["labels"] == 1, "size"],
np.zeros((shirts_data["labels"] == 1).sum()),
alpha=0.6)
x = shirts_data["size"].copy().values
x.sort()
mu1 = shirts_data.loc[shirts_data["labels"] == 0, "size"].mean()
mu2 = shirts_data.loc[shirts_data["labels"] == 1, "size"].mean()
s1 = shirts_data.loc[shirts_data["labels"] == 0, "size"].std()
s2 = shirts_data.loc[shirts_data["labels"] == 1, "size"].std()
plt.plot(x, norm.pdf(x, loc=mu1, scale=s1))
plt.plot(x, norm.pdf(x, loc=mu2, scale=s2))
plt.ylim([-0.01, 0.3])
plt.xlabel("Tamaño (cm)")
```
$$
p(x | t=1, \theta) = \mathcal{N}(x | \mu_1, \sigma_1)
$$
So that:
$$
\mu_1 = \frac{\sum_{i: t_i = 1} x_i}{\sum_{i: t_i = 1} 1}, \qquad \sigma_1^2 = \frac{\sum_{i: t_i = 1} (x_i - \mu_1)^2}{\sum_{i: t_i = 1} 1},
$$
and
$$
\mu_2 = \frac{\sum_{i: t_i = 2} x_i}{\sum_{i: t_i = 2} 1}, \qquad \sigma_2^2 = \frac{\sum_{i: t_i = 2} (x_i - \mu_2)^2}{\sum_{i: t_i = 2} 1}
$$
2. As we know, in the Gaussian mixture algorithm we will never know whether a point belongs to a certain cluster or not; instead, we will know the probabilities that it belongs to each cluster.
So, if we know the posterior $p(t | x, \theta)$, we weight the above by this probability:
$$
\mu_1 = \frac{\sum_{i} p(t_i=1 | x_i, \theta)x_i}{\sum_{i} p(t_i=1 | x_i, \theta)}, \qquad \sigma_1^2 = \frac{\sum_{i} p(t_i=1 | x_i, \theta) (x_i - \mu_1)^2}{\sum_{i} p(t_i=1 | x_i, \theta)}.
$$
3. And how do we obtain the posterior $p(t | x, \theta)$?
Well, if we know the parameters, it is quite easy:
$$
p(t=c | x, \theta) = \frac{p(x | t=c, \theta) p(t=c | \theta)}{Z} = \frac{\pi_c \mathcal{N}(x | \mu_c, \sigma_c)}{Z}.
$$
We have a circular argument (a chicken-and-egg problem).
**How do we resolve it? By iterating...**
**Expectation-maximization algorithm:**
1. Initialize the parameters of each Gaussian randomly.
2. Repeat until convergence:
    - For each point, compute the posterior probability $p(t_i=c | x_i, \theta)$.
    - Update the parameters of the Gaussians using the computed probabilities (a minimal sketch of these two steps follows below).
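A minimal NumPy sketch of these two steps for a 1D mixture of two Gaussians, applied to the shirt sizes above (the initialization and the fixed number of iterations are illustrative choices):
```python
import numpy as np
from scipy.stats import norm

x = shirts_data["size"].values
# simple initialization: equal weights, extreme means, common spread
pi = np.array([0.5, 0.5])
mus = np.array([x.min(), x.max()], dtype=float)
sigmas = np.array([x.std(), x.std()])

for _ in range(50):
    # E-step: posterior p(t=c | x_i, theta) for each point
    resp = np.vstack([pi[c] * norm.pdf(x, mus[c], sigmas[c]) for c in range(2)])
    resp /= resp.sum(axis=0)
    # M-step: update weights, means and standard deviations with the responsibilities
    Nc = resp.sum(axis=1)
    pi = Nc / len(x)
    mus = (resp * x).sum(axis=1) / Nc
    sigmas = np.sqrt((resp * (x - mus[:, None])**2).sum(axis=1) / Nc)

print(pi, mus, sigmas)
```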
<footer id="attribution" style="float:right; color:#808080; background:#fff;">
Created with Jupyter by Esteban Jiménez Rodríguez.
</footer>
|
9044c4c6ae429b8dae2504073860ea11bf4e40cf
| 516,081 |
ipynb
|
Jupyter Notebook
|
modulo2/tema1/1_variables_latentes_k_means.ipynb
|
G4ll4rd0/mebo2021
|
11096e6bb9c897c38ae02f9c30f90e1c878ee619
|
[
"MIT"
] | null | null | null |
modulo2/tema1/1_variables_latentes_k_means.ipynb
|
G4ll4rd0/mebo2021
|
11096e6bb9c897c38ae02f9c30f90e1c878ee619
|
[
"MIT"
] | null | null | null |
modulo2/tema1/1_variables_latentes_k_means.ipynb
|
G4ll4rd0/mebo2021
|
11096e6bb9c897c38ae02f9c30f90e1c878ee619
|
[
"MIT"
] | null | null | null | 338.858175 | 80,668 | 0.929034 | true | 7,548 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.835484 | 0.815232 | 0.681113 |
__label__spa_Latn
| 0.893995 | 0.420786 |
```python
from scipy import optimize
import math as math
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import ipywidgets as wd
```
# A simple CGE model
In the first section of this code we construct a simple Computable General Equilibrium (CGE) model and calibrate it to an observed equilibrium. In the second section of this code we examine the properties of the model and the effect of certain shocks.
## Building a CGE model - a crash course in CGE-modelling
The building blocks of a CGE model are:
1. Model equations describing an economy
2. Behavioural parameters
3. Variable values representing an initial equilibrium
Below we will construct a simple CGE model in three steps:
1. We will define the model's equations. The model is a simple open economy with producers, consumers and a world market
2. We will discuss the behavioural parameters
3. We will calibrate the model's unknown behavioural parameters such that the model can replicate the initial equilibrium
### 1. Model equations
Our model is a slightly modified version of a model from the course in applied CGE-modelling. The model consists of producers who buy labour and produce a single domestic consumption good, consumers who sell labour to domestic producers and buy both domestic and foreign consumption goods, and a world market where producers can sell domestic goods and consumers can buy foreign goods.
#### Domestic producers
The domestic producers are in full competition with each other and all have the CES-production function:
$$ Y = \left[ \mu^\frac{1}{E_Y} L^\frac{E_Y-1}{E_Y} \right]^\frac{E_Y}{E_Y-1} $$
They maximize their profits:
$$ \pi = p_dY - wL $$
This implies the following labour demand and zero profit condition, i.e. the price of the domestic consumption good is equal to the avg. costs of producing it.
$$
\begin{align}
L = \mu \left( \frac{w}{p_d} \right)^{-E_Y} Y \tag{1}
\end{align}
$$
$$
\begin{align}
p_d = \frac{w L}{Y} \tag{2}
\end{align}
$$
Below we define the two equations:
```python
## Labour demand
def E1_L_D(mu,w,p_d,E_Y,Y) :
""" Funtion defining the producers' demand for labour. """
return mu * (w/p_d)**(-E_Y) * Y
## Zero profit condition - expressed as a function of p_d
def E2_p_d(Y,w,L) :
""" Function defining the zero profit condidtion. """
return w*L / Y
```
#### Domestic consumers
The model has N domestic consumers who have a CES utility function and derive utility from the consumption of domestic and foreign goods:
$$ U(C_d,C_f)=\left[ \gamma_d^\frac{1}{E_C} C_d^\frac{E_C-1}{E_C} + \gamma_f^\frac{1}{E_C} C_f^\frac{E_C-1}{E_C} \right]^\frac{E_C}{E_C-1} $$
Their budget constraint is:
$$
\begin{align}
p_d C_d + p_f C_f = w N \tag{3}
\end{align}
$$
The consumers maximize their utility subject to their budget constraint. This implies the following demand functions:
$$
\begin{align}
C_d = \gamma_d \left(\frac{p_d}{P_C}\right)^{-E_C} \frac{wN}{P_C} \tag{4}
\end{align}
$$
$$
\begin{align}
C_f = \gamma_f \left(\frac{p_f}{P_C}\right)^{-E_C} \frac{wN}{P_C} \tag{5}
\end{align}
$$
$P_C$ is a CES-price index.
```python
## The budget of the households constraint
def E3_wN(p_d,C_d,p_f,C_f) :
""" Budget constraint of the households. """
return p_d*C_d+p_f*C_f
## The demand for the domestic consumption good
def E4_C_d(gamma_d,p_d,P_C,E_C,w,N) :
""" Demand for domestic consumption."""
return gamma_d * (p_d/P_C)**(-E_C) *w*N/P_C
## The demand for the foreign consumption good
def E5_C_f(gamma_f,p_f,P_C,E_C,w,N) :
""" Demand for foregin consumption good."""
return gamma_f * (p_f/P_C)**(-E_C) *w*N/P_C
```
#### The labour market
We assume that everybody works. Thus the labour market becomes:
$$
\begin{align}
L = N \tag{6}
\end{align}
$$
```python
## The labour market
def E6_L_S(N) :
""" Labour market. """
return N
```
#### The goods market
In equilibria the entire domestic production is either consumed by domestic consumers or exported:
$$
\begin{align}
Y = C_d + X \tag{7}
\end{align}$$
```python
## The goods market
def E7_Y(C_d,X) :
""" The goods market. """
return C_d + X
```
#### Foreign trade
We model foreign trade using an Armington approach, where exports depend on the relative price of foreign and domestic goods:
$$
\begin{align}
X = \phi \left(\frac{p_d}{p_f}\right)^{-E_X} \tag{8}
\end{align}
$$
We use the foreign price as a numéraire:
$$
\begin{align}
p_f = 1 \tag{9}
\end{align}
$$
```python
## Armington's approach to foreign trade
def E8_X(phi,p_d,p_f,E_X) :
""" Foreign trade """
return phi*(p_d/p_f)**(-E_X)
## Setting the foreign price as a numéraire
def E9_p_f() :
""" The price of the foreign good """
return 1
```
#### The model
Equations (1)-(9) represent the nine equations in our model. We treat the nine variables $Y$, $L$, $w$, $C_d$, $C_f$, $p_d$, $p_f$, $P_C$, and $X$ as endogenous.
The remaining parameters are $N$, $\mu$, $\gamma_d$, $\gamma_f$, $\phi$, $E_Y$, $E_C$ and $E_X$.
### 2. Behavioural parameters
The model has 8 unknown parameters: $N$, $\mu$, $\gamma_d$, $\gamma_f$, $\phi$, $E_Y$, $E_C$ and $E_X$. $\mu$, $\gamma_d$, $\gamma_f$, $\phi$, $E_Y$, $E_C$ and $E_X$ are the so-called behavioural parameters in the CES-functions and in the Armington approach to foreign trade. In the original assignment from the course in applied CGE-modelling, it was given that $E_Y \equiv 2.0$, $E_C \equiv 0.5$, and $E_X \equiv 5.0$.
The remaining behavioural parameters and $N$ are determined by calibrating the model to the initial equilibrium, e.g. setting $\phi$ in equation $(8)$ such that the equation is consistent with the initial values of $X$, $p_d$, and $p_f$.
```python
E_Y = 2.0
E_C = 0.5
E_X = 5.0
```
### 3. Initial dataset and calibration
The model is calibrated to an initial equilibrium described by the IO-table:
| I/O | $PS$ | $PC$ | $X$ |
| --- | --: | --: | --: |
| $PS$ | 0 | 800 | 200 |
| $M$ | 0 | 200 | 0 |
| $w$ | 1000 | 0 | 0 |
PS is the private sector, $M$ is imports, $w$ is wages, PC is private consumption, and $X$ is exports. The rows describe input and the columns output. The private sector e.g. uses 1,000 units of wage, and outputs 800 units of goods used for domestic consumption. The table represents a simple National Account in current prices.
```python
rows = ['PS','M' ,'w']
columns = ['PS','PC','X']
data = [(0 , 800 , 200),
(0 , 200 , 0) ,
(1000 , 0 , 0) ]
IO = pd.DataFrame(data,columns=columns,index=rows)
```
#### Initialising the variables
All prices are simply set to 1 and interpreted as price indices. The amount of labour, the total production etc. are found using the IO-table. The IO-table is in current prices, so we divide by the price indices.
```python
## Endogenous variables in initial equilibrium
# Prices
p_d0 = 1
p_f0 = 1
w0 = 1
P_C0 = 1
# Amounts
L0 = IO.loc['w']['PS']/w0
Y0 = IO.loc['w']['PS']/p_d0
X0 = IO.loc['PS']['X']/p_d0
C_d0 = IO.loc['PS']['PC']/p_d0
C_f0 = IO.loc['M']['PC']/p_f0
## List of initial values
ini_list = [Y0,L0,w0,C_d0,C_f0,p_d0,p_f0,P_C0,X0]
```
We also save the initial equilibrium in a dataframe.
```python
## Saving initial equilibrium in a dictionary
ini_eq = [{"Shock" : "Initial eq.",
"$L$" : "%1.f" % L0,
"$Y$" : "%0.2f" % Y0,
"$w$" : "%0.3f" % w0,
"$C_d$" : "%0.2f" % C_d0,
"$p_d$" : "%0.3f" % p_d0,
"$C_f$" : "%0.2f" % C_f0,
"$p_f$" : "%0.3f" % p_f0,
"$P_C$" : "%0.3f" % P_C0,
"$X$" : "%0.2f" % X0}]
## Converting the dictionary to a data-frame
ini_eq = pd.DataFrame(ini_eq)
## Rearranging the columns of the dataframe
cols = ['Shock','$L$','$Y$','$X$','$C_d$','$C_f$','$w$','$p_d$','$p_f$','$P_C$']
ini_eq = ini_eq[cols]
## Printing the inital equilibrium
ini_eq
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Shock</th>
<th>$L$</th>
<th>$Y$</th>
<th>$X$</th>
<th>$C_d$</th>
<th>$C_f$</th>
<th>$w$</th>
<th>$p_d$</th>
<th>$p_f$</th>
<th>$P_C$</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Initial eq.</td>
<td>1000</td>
<td>1000.00</td>
<td>200.00</td>
<td>800.00</td>
<td>200.00</td>
<td>1.000</td>
<td>1.000</td>
<td>1.000</td>
<td>1.000</td>
</tr>
</tbody>
</table>
</div>
#### Calibrating
The next step is to calibrate $N$, $\mu$, $\gamma_d$, $\gamma_f$, $\phi$, i.e. setting their values such that the model replicates the initial equilibrium. In the case of e.g. $\mu$ this is done by solving the labour demand function, equation (1), for $\mu$ with the initial values inserted:
$$
\begin{align}
L_0 = \mu \left( \frac{w_0}{p_{d0}} \right)^{-E_Y} Y_0
\end{align}
$$
We do this numerically below.
```python
## Calibrating population size
def f_L_S(N) :
return E6_L_S(N) - L0
solution = optimize.root(f_L_S, (0) )
N = np.asscalar(solution.x)
## Calibrating mu
def f_mu(mu) :
return E1_L_D(mu,w0,p_d0,E_Y,Y0) - L0
solution = optimize.root(f_mu, (0) )
mu = np.asscalar(solution.x)
## Calibrating gamma_d and gamma_f
def f_gamma_d(gamma_d) :
return E4_C_d(gamma_d,p_d0,P_C0,E_C,w0,N) -C_d0
solution = optimize.root(f_gamma_d, (0) )
gamma_d = np.asscalar(solution.x)
def f_gamma_f(gamma_f) :
return E5_C_f(gamma_f,p_f0,P_C0,E_C,w0,N) -C_f0
solution = optimize.root(f_gamma_f, (0) )
gamma_f = np.asscalar(solution.x)
## Calibrating phi
def f_phi(phi) :
return E8_X(phi,p_d0,p_f0,E_X) - X0
solution = optimize.root(f_phi, (0) )
phi = np.asscalar(solution.x)
## Printing the results
print("N =",N,", mu =",mu,",","gamma_d =",gamma_d,",","gamma_f =",gamma_f,",","phi =",phi)
```
N = 1000.0 , mu = 1.0 , gamma_d = 0.8 , gamma_f = 0.2 , phi = 200.0
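As a hedged sanity check (not part of the original assignment), the same calibrated values can be recovered in closed form by rearranging equations (1), (4) and (8) with the initial values inserted:
```python
## Closed-form counterparts of the numerical calibration above
mu_check = L0 / ((w0/p_d0)**(-E_Y) * Y0)
gamma_d_check = C_d0 / ((p_d0/P_C0)**(-E_C) * w0*N/P_C0)
phi_check = X0 / ((p_d0/p_f0)**(-E_X))
print(mu_check, gamma_d_check, phi_check)
```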
#### Solving the model
The last step of building our model is to solve it and check if it replicates the initial equilibrium. If it does then our model works. This is called a zero-shock in CGE-modelling.
The model is solved as a system of nine equations, equation $(1)$-$(9)$, and nine unknowns, $Y$, $L$, $w$, $C_d$, $C_f$, $p_d$, $p_f$, $P_C$, and $X$:
$$
\begin{align}
L = \mu \left( \frac{w}{p_d} \right)^{-E_Y} Y \tag{1}
\end{align}
$$
$$
\begin{align}
p_d = \frac{w L}{Y} \tag{2}
\end{align}
$$
$$
\begin{align}
p_d C_d + p_f C_f = w N \tag{3}
\end{align}
$$
$$
\begin{align}
C_d = \gamma_d \left(\frac{p_d}{P_C}\right)^{-E_C} \frac{wN}{P_C} \tag{4}
\end{align}
$$
$$
\begin{align}
C_f = \gamma_f \left(\frac{p_f}{P_C}\right)^{-E_C} \frac{wN}{P_C} \tag{5}
\end{align}
$$
$$
\begin{align}
L = N \tag{6}
\end{align}
$$
$$
\begin{align}
Y = C_d + X \tag{7}
\end{align}
$$
$$
\begin{align}
X = \phi \left(\frac{p_d}{p_f}\right)^{-E_X} \tag{8}
\end{align}
$$
$$
\begin{align}
p_f = 1 \tag{9}
\end{align}
$$
We solve the model by defining a function called CGEsolve. The function defines the system of equations and solves it. This function will come in handy later, when we examine the properties of the model.
```python
## Function solving the CGE model taking parameters as inputs
def CGEsolve(N,mu,gamma_d,gamma_f,phi,E_Y,E_C,E_X,ini_list,status='yes') :
"""
This function defines and solves the CGE-model as a function of its parameters and
a list of inital values for the solver.
status = 'yes' prints a summary of the solver results
"""
## Defining the CGE model as a system of nine equations and nine unknowns
def CGEmodel(variables) :
## Defining variables
(Y,L,w,C_d,C_f,p_d,p_f,P_C,X) = variables
## Defining equations
EQ_L_D = E1_L_D(mu,w,p_d,E_Y,Y) - L
EQ_p_d = E2_p_d(Y,w,L) - p_d
EQ_wN = E3_wN(p_d,C_d,p_f,C_f) - w*N
EQ_C_d = E4_C_d(gamma_d,p_d,P_C,E_C,w,N) - C_d
EQ_C_f = E5_C_f(gamma_f,p_f,P_C,E_C,w,N) - C_f
EQ_L_S = E6_L_S(N) - L
EQ_Y = E7_Y(C_d,X) - Y
EQ_X = E8_X(phi,p_d,p_f,E_X) - X
EQ_p_f = E9_p_f()-p_f
## Returning a list of equations
return [EQ_L_D,EQ_p_d,EQ_wN,EQ_C_d,EQ_C_f,EQ_L_S,EQ_Y,EQ_X,EQ_p_f]
    ## Solving the model using the initial equilibrium as starting values for the solver
solution = optimize.root(CGEmodel,(ini_list[0],ini_list[1],ini_list[2],ini_list[3],ini_list[4],ini_list[5],ini_list[6],ini_list[7],ini_list[8]))
## Prints the status of the solver
if status=='yes' :
print(solution.message,"Success =",solution.success)
## Returning solution
return solution
## Solving the model
zeroshock = CGEsolve(N,mu,gamma_d,gamma_f,phi,E_Y,E_C,E_X,ini_list,status='yes')
```
The solution converged. Success = True
```python
## Function creating a dataframe with results
def CGEresults(solution,name,nice='yes') :
"""
This function takes the results from CGEsolve function and stores the variable values in a data frame.
solution = a result from the CGEsolve function
name = name of the shock
nice = yes if results are for printing in a table.
"""
## Saving the results in a dictionary - strings for presenting results
if nice == 'yes' :
results = [{"Shock" : name,
"$Y$" : "%0.2f" % solution.x[0],
"$L$" : "%1.f" % solution.x[1],
"$w$" : "%0.3f" % solution.x[2],
"$C_d$" : "%0.2f" % solution.x[3],
"$p_d$" : "%0.3f" % solution.x[5],
"$C_f$" : "%0.2f" % solution.x[4],
"$p_f$" : "%0.3f" % solution.x[6],
"$P_C$" : "%0.3f" % solution.x[7],
"$X$" : "%0.2f" % solution.x[8]}]
## floats for figures
else :
results = [{"Shock" : name,
"$Y$" : solution.x[0],
"$L$" : solution.x[1],
"$w$" : solution.x[2],
"$C_d$" : solution.x[3],
"$p_d$" : solution.x[5],
"$C_f$" : solution.x[4],
"$p_f$" : solution.x[6],
"$P_C$" : solution.x[7],
"$X$" : solution.x[8]}]
## Converting the results to a dataframe
results = pd.DataFrame(results)
## Rearranging the columns
cols = ['Shock','$L$','$Y$','$X$','$C_d$','$C_f$','$w$','$p_d$','$p_f$','$P_C$']
results = results[cols]
## Returning the dataframe
return results
## Printing the results
ini_eq.append(CGEresults(zeroshock,'Zero-shock',nice='yes'))
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Shock</th>
<th>$L$</th>
<th>$Y$</th>
<th>$X$</th>
<th>$C_d$</th>
<th>$C_f$</th>
<th>$w$</th>
<th>$p_d$</th>
<th>$p_f$</th>
<th>$P_C$</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Initial eq.</td>
<td>1000</td>
<td>1000.00</td>
<td>200.00</td>
<td>800.00</td>
<td>200.00</td>
<td>1.000</td>
<td>1.000</td>
<td>1.000</td>
<td>1.000</td>
</tr>
<tr>
<th>0</th>
<td>Zero-shock</td>
<td>1000</td>
<td>1000.00</td>
<td>200.00</td>
<td>800.00</td>
<td>200.00</td>
<td>1.000</td>
<td>1.000</td>
<td>1.000</td>
<td>1.000</td>
</tr>
</tbody>
</table>
</div>
We have successfully solved the model. Our 'zero'-shock replicates the initial equilibrium. The model is correctly calibrated.
## The effects of a productivity shock and the price sensitivity of exports
In this section we examine how a 10 percent increase in productivity affects our model. The shock is implemented by increasing $\mu$ by 10 percent.
```python
prodshock = CGEsolve(N,1.1*mu,gamma_d,gamma_f,phi,E_Y,E_C,E_X,ini_list,status='yes')
```
The solution converged. Success = True
```python
ini_eq.append(CGEresults(prodshock,'Prod-shock',nice='yes'))
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Shock</th>
<th>$L$</th>
<th>$Y$</th>
<th>$X$</th>
<th>$C_d$</th>
<th>$C_f$</th>
<th>$w$</th>
<th>$p_d$</th>
<th>$p_f$</th>
<th>$P_C$</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Initial eq.</td>
<td>1000</td>
<td>1000.00</td>
<td>200.00</td>
<td>800.00</td>
<td>200.00</td>
<td>1.000</td>
<td>1.000</td>
<td>1.000</td>
<td>1.000</td>
</tr>
<tr>
<th>0</th>
<td>Prod-shock</td>
<td>1000</td>
<td>1100.00</td>
<td>221.83</td>
<td>878.17</td>
<td>217.28</td>
<td>1.077</td>
<td>0.979</td>
<td>1.000</td>
<td>0.984</td>
</tr>
</tbody>
</table>
</div>
The shock went as expected. A 10 percent increase in the productivity of the only input in a CES-function with constant returns to scale equals a 10 percent increase in domestic production. An increase in production also increases exports, as a larger supply lowers the price of the domestic good. Also, higher productivity leads to higher wages and thus higher consumption of both domestic and foreign goods.
But how does the effect of the shock depend on $E_X$ in equation (8)? An inelastic (low values of $E_X$) foreign demand for domestic goods implies lower prices for domestic goods in order to get the market for domestic goods to clear. To see how dominant this effect is, we solve the model for a range of $E_X$'s.
```python
## Solving the models for E_X=2
shock = CGEsolve(N,1.1*mu,gamma_d,gamma_f,phi,E_Y,E_C,2,ini_list,status='yes')
results = CGEresults(shock,2,nice = 'no')
```
The solution converged. Success = True
```python
## Solving the model for E_X=2-30 - [we don't print the status of the solver, but we have checked.
## The solver behaves nicely for all values of E_X]
for i in range(21,301) :
#print(0.1*i)
shock = CGEsolve(N,1.1*mu,gamma_d,gamma_f,phi,E_Y,E_C,0.1*i,ini_list,status='no')
results = results.append(CGEresults(shock,0.1*i,nice='no'))
```
We present the results in an interactive figure.
```python
varbls = ['$L$','$Y$','$X$','$C_d$','$C_f$','$w$','$p_d$','$p_f$','$P_C$']
def fig_plt(var) :
y = results[var].values
x = results['Shock'].values
plt.plot(x,y)
plt.yticks(np.arange(0.95*y.min(), 1.05*y.max(), (1.05*y.max()-0.95*y.min())/4))
return plt.show()
var_select = wd.Dropdown(options=varbls, description='Variables')
wd.interact(fig_plt,var=var_select);
```
interactive(children=(Dropdown(description='Variables', options=('$L$', '$Y$', '$X$', '$C_d$', '$C_f$', '$w$',…
The most interesting figures are those of $X$ and $p_d$. They show that an inelastic foreign demand implies that the domestic producers have to lower their prices by a lot to clear the goods market. The lower prices affect the wages and thus the consumption of both foreign and domestic goods in equilibria with different $E_X$.
Source: modelproject/CGE_model.ipynb (NumEconCopenhagen/projects-2019-faetter-br, MIT license)
<h1>Simplification</h1>
<p><a href="https://docs.sympy.org/latest/_sources/tutorial/simplification.rst.txt">From</a></p>
<hr />
<p>To make this document easier to read, we are going to enable pretty printing.</p>
```julia
>>> from sympy import *
>>> x, y, z = symbols('x y z')
>>> init_printing(use_unicode=True)
```
<h5>In <code>Julia</code>:</h5>
<ul>
<li><p>double quotes ("), not single quotes ('), are used for strings</p>
</li>
<li><p>pretty printing is enabled by default</p>
</li>
<li><p>we import extra functions from <code>sympy</code>, such as <code>powsimp</code>, ...</p>
</li>
</ul>
```julia
using SymPy
import_from(sympy)
x, y, z = symbols("x y z")
```
(x, y, z)
<hr />
<h2><code>simplify</code></h2>
<p>Now let's jump in and do some interesting mathematics. One of the most useful features of a symbolic manipulation system is the ability to simplify mathematical expressions. SymPy has dozens of functions to perform various kinds of simplification. There is also one general function called <code>simplify()</code> that attempts to apply all of these functions in an intelligent way to arrive at the simplest form of an expression. Here are some examples</p>
```julia
>>> simplify(sin(x)**2 + cos(x)**2)
1
>>> simplify((x**3 + x**2 - x - 1)/(x**2 + 2*x + 1))
x - 1
>>> simplify(gamma(x)/gamma(x - 2))
(x - 2)⋅(x - 1)
```
<h5>In <code>Julia</code>:</h5>
<ul>
<li><p>we need to load in <code>SpecialFunctions</code> to have access to <code>gamma</code>:</p>
</li>
</ul>
```julia
simplify(sin(x)^2 + cos(x)^2)
```
\begin{equation*}1\end{equation*}
```julia
simplify((x^3 + x^2 - x - 1)/(x^2 + 2*x + 1))
```
\begin{equation*}x - 1\end{equation*}
```julia
using SpecialFunctions
simplify(gamma(x)/gamma(x - 2))
```
\begin{equation*}\left(x - 2\right) \left(x - 1\right)\end{equation*}
<hr />
<p>Here, <code>gamma(x)</code> is $\Gamma(x)$, the <a href="http://en.wikipedia.org/wiki/Gamma_function">gamma function</a>. We see that <code>simplify()</code> is capable of handling a large class of expressions.</p>
<p>But <code>simplify()</code> has a pitfall. It just applies all the major simplification operations in SymPy, and uses heuristics to determine the simplest result. But "simplest" is not a well-defined term. For example, say we wanted to "simplify" <code>x^2 + 2x + 1</code> into <code>(x + 1)^2</code>:</p>
```julia
>>> simplify(x**2 + 2*x + 1)
2
x + 2⋅x + 1
```
<h5>In <code>Julia</code>:</h5>
```julia
simplify(x^2 + 2*x + 1)
```
\begin{equation*}x^{2} + 2 x + 1\end{equation*}
<hr />
<p>We did not get what we want. There is a function to perform this simplification, called <code>factor()</code>, which will be discussed below.</p>
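<p>As a quick illustration (not part of the original tutorial), the specific simplification can be obtained directly with <code>factor()</code>; the snippet below uses the Python notation of the reference blocks and re-declares <code>x</code> only to keep it self-contained:</p>
```python
from sympy import symbols, factor

x = symbols('x')
print(factor(x**2 + 2*x + 1))   # (x + 1)**2
```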
<p>Another pitfall to <code>simplify()</code> is that it can be unnecessarily slow, since it tries many kinds of simplifications before picking the best one. If you already know exactly what kind of simplification you are after, it is better to apply the specific simplification function(s) that apply those simplifications.</p>
<p>Applying specific simplification functions instead of <code>simplify()</code> also has the advantage that specific functions have certain guarantees about the form of their output. These will be discussed with each function below. For example, <code>factor()</code>, when called on a polynomial with rational coefficients, is guaranteed to factor the polynomial into irreducible factors. <code>simplify()</code> has no guarantees. It is entirely heuristical, and, as we saw above, it may even miss a possible type of simplification that SymPy is capable of doing.</p>
<p><code>simplify()</code> is best when used interactively, when you just want to whittle down an expression to a simpler form. You may then choose to apply specific functions once you see what <code>simplify()</code> returns, to get a more precise result. It is also useful when you have no idea what form an expression will take, and you need a catchall function to simplify it.</p>
<h2>Polynomial/Rational Function Simplification</h2>
<h3>expand</h3>
<p><code>expand()</code> is one of the most common simplification functions in SymPy. Although it has a lot of scopes, for now, we will consider its function in expanding polynomial expressions. For example:</p>
```julia
>>> expand((x + 1)**2)
2
x + 2⋅x + 1
>>> expand((x + 2)*(x - 3))
2
x - x - 6
```
<h5>In <code>Julia</code>:</h5>
```julia
expand((x + 1)^2)
```
\begin{equation*}x^{2} + 2 x + 1\end{equation*}
```julia
expand((x + 2)*(x - 3))
```
\begin{equation*}x^{2} - x - 6\end{equation*}
<hr />
<p>Given a polynomial, <code>expand()</code> will put it into a canonical form of a sum of monomials.</p>
<p><code>expand()</code> may not sound like a simplification function. After all, by its very name, it makes expressions bigger, not smaller. Usually this is the case, but often an expression will become smaller upon calling <code>expand()</code> on it due to cancellation.</p>
```julia
>>> expand((x + 1)*(x - 2) - (x - 1)*x)
-2
```
<h5>In <code>Julia</code>:</h5>
```julia
expand((x + 1)*(x - 2) - (x - 1)*x)
```
\begin{equation*}-2\end{equation*}
<hr />
<h3>factor</h3>
<p><code>factor()</code> takes a polynomial and factors it into irreducible factors over the rational numbers. For example:</p>
```julia
>>> factor(x**3 - x**2 + x - 1)
⎛ 2 ⎞
(x - 1)⋅⎝x + 1⎠
>>> factor(x**2*z + 4*x*y*z + 4*y**2*z)
2
z⋅(x + 2⋅y)
```
<h5>In <code>Julia</code>:</h5>
```julia
factor(x^3 - x^2 + x - 1)
```
\begin{equation*}\left(x - 1\right) \left(x^{2} + 1\right)\end{equation*}
```julia
factor(x^2*z + 4*x*y*z + 4*y^2*z)
```
\begin{equation*}z \left(x + 2 y\right)^{2}\end{equation*}
<hr />
<p>For polynomials, <code>factor()</code> is the opposite of <code>expand()</code>. <code>factor()</code> uses a complete multivariate factorization algorithm over the rational numbers, which means that each of the factors returned by <code>factor()</code> is guaranteed to be irreducible.</p>
<p>If you are interested in the factors themselves, <code>factor_list</code> returns a more structured output.</p>
```julia
>>> factor_list(x**2*z + 4*x*y*z + 4*y**2*z)
(1, [(z, 1), (x + 2⋅y, 2)])
```
<h5>In <code>Julia</code>:</h5>
```julia
factor_list(x^2*z + 4*x*y*z + 4*y^2*z)
```
(1, Tuple{SymPy.Sym,Int64}[(z, 1), (x + 2*y, 2)])
<hr />
<p>Note that the input to <code>factor</code> and <code>expand</code> need not be polynomials in the strict sense. They will intelligently factor or expand any kind of expression (though note that the factors may not be irreducible if the input is no longer a polynomial over the rationals).</p>
```julia
>>> expand((cos(x) + sin(x))**2)
2 2
sin (x) + 2⋅sin(x)⋅cos(x) + cos (x)
>>> factor(cos(x)**2 + 2*cos(x)*sin(x) + sin(x)**2)
2
(sin(x) + cos(x))
```
<h5>In <code>Julia</code>:</h5>
```julia
expand((cos(x) + sin(x))^2)
factor(cos(x)^2 + 2*cos(x)*sin(x) + sin(x)^2)
```
\begin{equation*}\left(\sin{\left (x \right )} + \cos{\left (x \right )}\right)^{2}\end{equation*}
<hr />
<h3>collect</h3>
<p><code>collect()</code> collects common powers of a term in an expression. For example</p>
```julia
>>> expr = x*y + x - 3 + 2*x**2 - z*x**2 + x**3
>>> expr
3 2 2
x - x ⋅z + 2⋅x + x⋅y + x - 3
>>> collected_expr = collect(expr, x)
>>> collected_expr
3 2
x + x ⋅(-z + 2) + x⋅(y + 1) - 3
```
<h5>In <code>Julia</code>:</h5>
```julia
expr = x*y + x - 3 + 2*x^2 - z*x^2 + x^3
expr
```
\begin{equation*}x^{3} - x^{2} z + 2 x^{2} + x y + x - 3\end{equation*}
```julia
collected_expr = collect(expr, x)
collected_expr
```
\begin{equation*}x^{3} + x^{2} \left(- z + 2\right) + x \left(y + 1\right) - 3\end{equation*}
<hr />
<p><code>collect()</code> is particularly useful in conjunction with the <code>.coeff()</code> method. <code>expr.coeff(x, n)</code> gives the coefficient of <code>x**n</code> in <code>expr</code>:</p>
```julia
>>> collected_expr.coeff(x, 2)
-z + 2
```
<h5>In <code>Julia</code>:</h5>
```julia
collected_expr.coeff(x, 2)
```
\begin{equation*}- z + 2\end{equation*}
<hr />
<div class="admonition note"><p class="admonition-title">TODO</p><p>Discuss coeff method in more detail in some other section (maybe basic expression manipulation tools)</p>
</div>
<h3>cancel</h3>
<p><code>cancel()</code> will take any rational function and put it into the standard canonical form, $\frac{p}{q}$, where $p$ and $q$ are expanded polynomials with no common factors, and the leading coefficients of $p$ and $q$ do not have denominators (i.e., are integers).</p>
```julia
>>> cancel((x**2 + 2*x + 1)/(x**2 + x))
x + 1
─────
x
>>> expr = 1/x + (3*x/2 - 2)/(x - 4)
>>> expr
3⋅x
─── - 2
2 1
─────── + ─
x - 4 x
>>> cancel(expr)
2
3⋅x - 2⋅x - 8
──────────────
2
2⋅x - 8⋅x
>>> expr = (x*y**2 - 2*x*y*z + x*z**2 + y**2 - 2*y*z + z**2)/(x**2 - 1)
>>> expr
2 2 2 2
x⋅y - 2⋅x⋅y⋅z + x⋅z + y - 2⋅y⋅z + z
───────────────────────────────────────
2
x - 1
>>> cancel(expr)
2 2
y - 2⋅y⋅z + z
───────────────
x - 1
```
<h5>In <code>Julia</code>:</h5>
```julia
cancel((x^2 + 2*x + 1)/(x^2 + x))
```
\begin{equation*}\frac{x + 1}{x}\end{equation*}
```julia
expr = 1/x + (3*x/2 - 2)/(x - 4)
expr
```
\begin{equation*}\frac{\frac{3 x}{2} - 2}{x - 4} + \frac{1}{x}\end{equation*}
```julia
cancel(expr)
```
\begin{equation*}\frac{3 x^{2} - 2 x - 8}{2 x^{2} - 8 x}\end{equation*}
```julia
expr = (x*y^2 - 2*x*y*z + x*z^2 + y^2 - 2*y*z + z^2)/(x^2 - 1)
expr
cancel(expr)
```
\begin{equation*}\frac{y^{2} - 2 y z + z^{2}}{x - 1}\end{equation*}
<hr />
<p>Note that since <code>factor()</code> will completely factorize both the numerator and the denominator of an expression, it can also be used to do the same thing:</p>
```julia
>>> factor(expr)
2
(y - z)
────────
x - 1
```
<h5>In <code>Julia</code>:</h5>
```julia
factor(expr)
```
\begin{equation*}\frac{\left(y - z\right)^{2}}{x - 1}\end{equation*}
<hr />
<p>However, if you are only interested in making sure that the expression is in canceled form, <code>cancel()</code> is more efficient than <code>factor()</code>.</p>
<h3>apart</h3>
<p><code>apart()</code> performs a <a href="http://en.wikipedia.org/wiki/Partial_fraction_decomposition">partial fraction decomposition</a> on a rational function.</p>
```julia
>>> expr = (4*x**3 + 21*x**2 + 10*x + 12)/(x**4 + 5*x**3 + 5*x**2 + 4*x)
>>> expr
3 2
4⋅x + 21⋅x + 10⋅x + 12
────────────────────────
4 3 2
x + 5⋅x + 5⋅x + 4⋅x
>>> apart(expr)
2⋅x - 1 1 3
────────── - ───── + ─
2 x + 4 x
x + x + 1
```
<h5>In <code>Julia</code>:</h5>
```julia
expr = (4*x^3 + 21*x^2 + 10*x + 12)/(x^4 + 5*x^3 + 5*x^2 + 4*x)
expr
```
\begin{equation*}\frac{4 x^{3} + 21 x^{2} + 10 x + 12}{x^{4} + 5 x^{3} + 5 x^{2} + 4 x}\end{equation*}
```julia
apart(expr)
```
\begin{equation*}\frac{2 x - 1}{x^{2} + x + 1} - \frac{1}{x + 4} + \frac{3}{x}\end{equation*}
<hr />
<h2>Trigonometric Simplification</h2>
<div class="admonition note"><p class="admonition-title">Note</p><p>SymPy follows Python's naming conventions for inverse trigonometric functions, which is to append an <code>a</code> to the front of the function's name. For example, the inverse cosine, or arc cosine, is called <code>acos()</code>.</p>
</div>
```julia
>>> acos(x)
acos(x)
>>> cos(acos(x))
x
>>> asin(1)
π
─
2
```
<h5>In <code>Julia</code>:</h5>
<ul>
<li><p>note that <code>asin(1)</code> below is called with a plain Julia integer, so it dispatches to Julia's own <code>asin</code> and returns a floating-point number; to get the symbolic $\pi/2$, one would pass a symbolic value instead (e.g. <code>asin(Sym(1))</code>)</p>
</li>
</ul>
```julia
acos(x)
```
\begin{equation*}\operatorname{acos}{\left (x \right )}\end{equation*}
```julia
cos(acos(x))
```
\begin{equation*}x\end{equation*}
```julia
asin(1)
```
1.5707963267948966
<hr />
<div class="admonition note"><p class="admonition-title">TODO</p><p>Can we actually do anything with inverse trig functions, simplification wise?</p>
</div>
<h3>trigsimp</h3>
<p>To simplify expressions using trigonometric identities, use <code>trigsimp()</code>.</p>
```julia
>>> trigsimp(sin(x)**2 + cos(x)**2)
1
>>> trigsimp(sin(x)**4 - 2*cos(x)**2*sin(x)**2 + cos(x)**4)
cos(4⋅x) 1
──────── + ─
2 2
>>> trigsimp(sin(x)*tan(x)/sec(x))
2
sin (x)
```
<h5>In <code>Julia</code>:</h5>
```julia
trigsimp(sin(x)^2 + cos(x)^2)
```
\begin{equation*}1\end{equation*}
```julia
trigsimp(sin(x)^4 - 2*cos(x)^2*sin(x)^2 + cos(x)^4)
```
\begin{equation*}\frac{\cos{\left (4 x \right )}}{2} + \frac{1}{2}\end{equation*}
```julia
trigsimp(sin(x)*tan(x)/sec(x))
```
\begin{equation*}\sin^{2}{\left (x \right )}\end{equation*}
<hr />
<p><code>trigsimp()</code> also works with hyperbolic trig functions.</p>
```julia
>>> trigsimp(cosh(x)**2 + sinh(x)**2)
cosh(2⋅x)
>>> trigsimp(sinh(x)/tanh(x))
cosh(x)
```
<h5>In <code>Julia</code>:</h5>
```julia
trigsimp(cosh(x)^2 + sinh(x)^2)
```
\begin{equation*}\cosh{\left (2 x \right )}\end{equation*}
```julia
trigsimp(sinh(x)/tanh(x))
```
\begin{equation*}\cosh{\left (x \right )}\end{equation*}
<hr />
<p>Much like <code>simplify()</code>, <code>trigsimp()</code> applies various trigonometric identities to the input expression, and then uses a heuristic to return the "best" one.</p>
<h3>expand_trig</h3>
<p>To expand trigonometric functions, that is, apply the sum or double angle identities, use <code>expand_trig()</code>.</p>
```julia
>>> expand_trig(sin(x + y))
sin(x)⋅cos(y) + sin(y)⋅cos(x)
>>> expand_trig(tan(2*x))
2⋅tan(x)
─────────────
2
- tan (x) + 1
```
<h5>In <code>Julia</code>:</h5>
```julia
expand_trig(sin(x + y))
```
\begin{equation*}\sin{\left (x \right )} \cos{\left (y \right )} + \sin{\left (y \right )} \cos{\left (x \right )}\end{equation*}
```julia
expand_trig(tan(2*x))
```
\begin{equation*}\frac{2 \tan{\left (x \right )}}{- \tan^{2}{\left (x \right )} + 1}\end{equation*}
<hr />
<p>Because <code>expand_trig()</code> tends to make trigonometric expressions larger, and <code>trigsimp()</code> tends to make them smaller, these identities can be applied in reverse using <code>trigsimp()</code></p>
```julia
>>> trigsimp(sin(x)*cos(y) + sin(y)*cos(x))
sin(x + y)
```
<h5>In <code>Julia</code>:</h5>
```julia
trigsimp(sin(x)*cos(y) + sin(y)*cos(x))
```
\begin{equation*}\sin{\left (x + y \right )}\end{equation*}
<hr />
<div class="admonition note"><p class="admonition-title">TODO</p><p>It would be much better to teach individual trig rewriting functions here, but they don't exist yet. See https://github.com/sympy/sympy/issues/3456.</p>
</div>
<h2>Powers</h2>
<p>Before we introduce the power simplification functions, a mathematical discussion on the identities held by powers is in order. There are three kinds of identities satisfied by exponents</p>
<ol>
<li><p><code>x^ax^b = x^{a + b}</code></p>
</li>
<li><p><code>x^ay^a = (xy)^a</code></p>
</li>
<li><p><code>(x^a)^b = x^{ab}</code></p>
</li>
</ol>
<p>Identity 1 is always true.</p>
<p>Identity 2 is not always true. For example, if $x = y = -1$ and $a = \frac{1}{2}$, then $x^ay^a = \sqrt{-1}\sqrt{-1} = i\cdot i = -1$, whereas $(xy)^a = \sqrt{-1\cdot-1} = \sqrt{1} = 1$. However, identity 2 is true at least if $x$ and $y$ are nonnegative and $a$ is real (it may also be true under other conditions as well). A common consequence of the failure of identity 2 is that $\sqrt{x}\sqrt{y} \neq \sqrt{xy}$.</p>
<p>Identity 3 is not always true. For example, if $x = -1$, $a = 2$, and $b = \frac{1}{2}$, then $(x^a)^b = {\left ((-1)^2\right )}^{1/2} = \sqrt{1} = 1$ and $x^{ab} = (-1)^{2\cdot1/2} = (-1)^1 = -1$. However, identity 3 is true when $b$ is an integer (again, it may also hold in other cases as well). Two common consequences of the failure of identity 3 are that $\sqrt{x^2}\neq x$ and that $\sqrt{\frac{1}{x}} \neq \frac{1}{\sqrt{x}}$.</p>
<p>To summarize</p>
<ol>
<li><p>This: $x^ax^b = x^{a + b}$ is always true</p>
</li>
<li><p>This: $x^ay^a = (xy)^a$ is true when $x, y \geq 0$ and $a \in \mathbb{R}$; but note $(-1)^{1/2}(-1)^{1/2} \neq (-1\cdot-1)^{1/2}$ and $\sqrt{x}\sqrt{y} \neq \sqrt{xy}$ in general</p>
</li>
<li><p>This: $(x^a)^b = x^{ab}$ when $b \in \mathbb{Z}$; but note ${\left((-1)^2\right )}^{1/2} \neq (-1)^{2\cdot1/2}$ and $\sqrt{x^2}\neq x$ and $\sqrt{\frac{1}{x}}\neq\frac{1}{\sqrt{x}}$ in general</p>
</li>
</ol>
<p>This is important to remember, because by default, SymPy will not perform simplifications if they are not true in general.</p>
<p>In order to make SymPy perform simplifications involving identities that are only true under certain assumptions, we need to put assumptions on our Symbols. We will undertake a full discussion of the assumptions system later, but for now, all we need to know are the following.</p>
<ul>
<li><p>By default, SymPy Symbols are assumed to be complex (elements of $\mathbb{C}$). That is, a simplification will not be applied to an expression with a given Symbol unless it holds for all complex numbers.</p>
</li>
<li><p>Symbols can be given different assumptions by passing the assumption to <code>symbols()</code>. For the rest of this section, we will be assuming that <code>x</code> and <code>y</code> are positive, and that <code>a</code> and <code>b</code> are real. We will leave <code>z</code>, <code>t</code>, and <code>c</code> as arbitrary complex Symbols to demonstrate what happens in that case.</p>
</li>
</ul>
```julia
>>> x, y = symbols('x y', positive=True)
>>> a, b = symbols('a b', real=True)
>>> z, t, c = symbols('z t c')
```
<h5>In <code>Julia</code>:</h5>
```julia
x, y = symbols("x y", positive=true)
a, b = symbols("a b", real=true)
z, t, c = symbols("z t c")
```
(z, t, c)
<hr />
<div class="admonition note"><p class="admonition-title">TODO:</p><p>Rewrite this using the new assumptions</p>
</div>
<div class="admonition note"><p class="admonition-title">Note</p><p>In SymPy, <code>sqrt(x)</code> is just a shortcut to <code>x**Rational(1, 2)</code>. They are exactly the same object.</p>
</div>
```julia
>>> sqrt(x) == x**Rational(1, 2)
True
```
<h5>In <code>Julia</code>:</h5>
<ul>
<li><p>we can construct rational numbers with <code>//</code></p>
</li>
</ul>
```julia
sqrt(x) == x^(1//2)
```
true
<hr />
<h2>powsimp</h2>
<p><code>powsimp()</code> applies identities 1 and 2 from above, from left to right.</p>
```julia
>>> powsimp(x**a*x**b)
a + b
x
>>> powsimp(x**a*y**a)
a
(x⋅y)
```
<h5>In <code>Julia</code>:</h5>
```julia
powsimp(x^a*x^b)
powsimp(x^a*y^a)
```
\begin{equation*}\left(x y\right)^{a}\end{equation*}
<hr />
<p>Notice that <code>powsimp()</code> refuses to do the simplification if it is not valid.</p>
```julia
>>> powsimp(t**c*z**c)
c c
t ⋅z
```
<h5>In <code>Julia</code>:</h5>
```julia
powsimp(t^c*z^c)
```
\begin{equation*}t^{c} z^{c}\end{equation*}
<hr />
<p>If you know that you want to apply this simplification, but you don't want to mess with assumptions, you can pass the <code>force=True</code> flag. This will force the simplification to take place, regardless of assumptions.</p>
```julia
>>> powsimp(t**c*z**c, force=True)
c
(t⋅z)
```
<h5>In <code>Julia</code>:</h5>
```julia
powsimp(t^c*z^c, force=true)
```
\begin{equation*}\left(t z\right)^{c}\end{equation*}
<hr />
<p>Note that in some instances, in particular, when the exponents are integers or rational numbers, and identity 2 holds, it will be applied automatically.</p>
```julia
>>> (z*t)**2
2 2
t ⋅z
>>> sqrt(x*y)
√x⋅√y
```
<h5>In <code>Julia</code>:</h5>
```julia
(z*t)^2
```
\begin{equation*}t^{2} z^{2}\end{equation*}
```julia
sqrt(x*y)
```
\begin{equation*}\sqrt{x} \sqrt{y}\end{equation*}
<hr />
<p>This means that it will be impossible to undo this identity with <code>powsimp()</code>, because even if <code>powsimp()</code> were to put the bases together, they would be automatically split apart again.</p>
```julia
>>> powsimp(z**2*t**2)
2 2
t ⋅z
>>> powsimp(sqrt(x)*sqrt(y))
√x⋅√y
```
<h5>In <code>Julia</code>:</h5>
```julia
powsimp(z^2*t^2)
```
\begin{equation*}t^{2} z^{2}\end{equation*}
```julia
powsimp(sqrt(x)*sqrt(y))
```
\begin{equation*}\sqrt{x} \sqrt{y}\end{equation*}
<hr />
<h3><code>expand_power_exp</code> / <code>expand_power_base</code></h3>
<p><code>expand_power_exp()</code> and <code>expand_power_base()</code> apply identities 1 and 2 from right to left, respectively.</p>
```julia
>>> expand_power_exp(x**(a + b))
a b
x ⋅x
>>> expand_power_base((x*y)**a)
a a
x ⋅y
```
<h5>In <code>Julia</code>:</h5>
```julia
expand_power_exp(x^(a + b))
expand_power_base((x*y)^a)
```
\begin{equation*}x^{a} y^{a}\end{equation*}
<hr />
<p>As with <code>powsimp()</code>, identity 2 is not applied if it is not valid.</p>
```julia
>>> expand_power_base((z*t)**c)
c
(t⋅z)
```
<h5>In <code>Julia</code>:</h5>
```julia
expand_power_base((z*t)^c)
```
\begin{equation*}\left(t z\right)^{c}\end{equation*}
<hr />
<p>And as with <code>powsimp()</code>, you can force the expansion to happen without fiddling with assumptions by using <code>force=True</code>.</p>
```julia
>>> expand_power_base((z*t)**c, force=True)
c c
t ⋅z
```
<h5>In <code>Julia</code>:</h5>
```julia
expand_power_base((z*t)^c, force=true)
```
\begin{equation*}t^{c} z^{c}\end{equation*}
<hr />
<p>As with identity 2, identity 1 is applied automatically if the power is a number, and hence cannot be undone with <code>expand_power_exp()</code>.</p>
```julia
>>> x**2*x**3
5
x
>>> expand_power_exp(x**5)
5
x
```
<h5>In <code>Julia</code>:</h5>
```julia
x^2*x^3
expand_power_exp(x^5)
```
\begin{equation*}x^{5}\end{equation*}
<hr />
<h3>powdenest</h3>
<p><code>powdenest()</code> applies identity 3, from left to right.</p>
```julia
>>> powdenest((x**a)**b)
a⋅b
x
```
<h5>In <code>Julia</code>:</h5>
```julia
powdenest((x^a)^b)
```
\begin{equation*}x^{a b}\end{equation*}
<hr />
<p>As before, the identity is not applied if it is not true under the given assumptions.</p>
```julia
>>> powdenest((z**a)**b)
b
⎛ a⎞
⎝z ⎠
```
<h5>In <code>Julia</code>:</h5>
```julia
powdenest((z^a)^b)
```
\begin{equation*}\left(z^{a}\right)^{b}\end{equation*}
<hr />
<p>And as before, this can be manually overridden with <code>force=True</code>.</p>
```julia
>>> powdenest((z**a)**b, force=True)
a⋅b
z
```
<h5>In <code>Julia</code>:</h5>
```julia
powdenest((z^a)^b, force=true)
```
\begin{equation*}z^{a b}\end{equation*}
<hr />
<h2>Exponentials and logarithms</h2>
<div class="admonition note"><p class="admonition-title">Note</p><p>In SymPy, as in Python and most programming languages, <code>log</code> is the natural logarithm, also known as <code>ln</code>. SymPy automatically provides an alias <code>ln = log</code> in case you forget this.</p>
</div>
```julia
>>> ln(x)
log(x)
```
<h5>In <code>Julia</code>:</h5>
<ul>
<li><p><code>ln</code> is exported</p>
</li>
</ul>
```julia
ln(x)
```
\begin{equation*}\log{\left (x \right )}\end{equation*}
<hr />
<p>Logarithms have similar issues as powers. There are two main identities</p>
<ol>
<li>$\log{(xy)} = \log{(x)} + \log{(y)}$
</li>
<li>$\log{(x^n)} = n\log{(x)}$
</li>
</ol>
<p>Neither identity is true for arbitrary complex $x$ and $y$, due to the branch cut in the complex plane for the complex logarithm. However, sufficient conditions for the identities to hold are if $x$ and $y$ are positive and $n$ is real.</p>
```julia
>>> x, y = symbols('x y', positive=True)
>>> n = symbols('n', real=True)
```
<h5>In <code>Julia</code>:</h5>
```julia
x, y = symbols("x y", positive=true)
n = symbols("n", real=true)
```
\begin{equation*}n\end{equation*}
<hr />
<p>As before, <code>z</code> and <code>t</code> will be Symbols with no additional assumptions.</p>
<p>Note that the identity $\log{\left (\frac{x}{y}\right )} = \log(x) - \log(y)$ is a special case of identities 1 and 2 by $\log{\left (\frac{x}{y}\right )} =$ $\log{\left (x\cdot\frac{1}{y}\right )} =$ $\log(x) + \log{\left( y^{-1}\right )} =$ $\log(x) - \log(y)$, and thus it also holds if <code>x</code> and <code>y</code> are positive, but may not hold in general.</p>
<p>We also see that $\log{\left( e^x \right)} = x$ comes from $\log{\left ( e^x \right)} = x\log(e) = x$, and thus holds when $x$ is real (and it can be verified that it does not hold in general for arbitrary complex $x$, for example, $\log{\left (e^{x + 2\pi i}\right)} = \log{\left (e^x\right )} = x \neq x + 2\pi i$).</p>
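<p>A minimal check of this last point (not part of the original tutorial), written in the Python notation of the reference blocks; it relies only on the standard behaviour that the simplification is applied when the exponent is known to be real:</p>
```python
from sympy import symbols, log, exp

x = symbols('x', real=True)   # real exponent: log(exp(x)) simplifies to x
z = symbols('z')              # generic complex symbol: left unevaluated
print(log(exp(x)))            # x
print(log(exp(z)))            # log(exp(z))
```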
<h2>expand_log</h2>
<p>To apply identities 1 and 2 from left to right, use <code>expand_log()</code>. As always, the identities will not be applied unless they are valid.</p>
```julia
>>> expand_log(log(x*y))
log(x) + log(y)
>>> expand_log(log(x/y))
log(x) - log(y)
>>> expand_log(log(x**2))
2⋅log(x)
>>> expand_log(log(x**n))
n⋅log(x)
>>> expand_log(log(z*t))
log(t⋅z)
```
<h5>In <code>Julia</code>:</h5>
```julia
expand_log(log(x*y))
```
\begin{equation*}\log{\left (x \right )} + \log{\left (y \right )}\end{equation*}
```julia
expand_log(log(x/y))
```
\begin{equation*}\log{\left (x \right )} - \log{\left (y \right )}\end{equation*}
```julia
expand_log(log(x^2))
```
\begin{equation*}2 \log{\left (x \right )}\end{equation*}
```julia
expand_log(log(x^n))
```
\begin{equation*}n \log{\left (x \right )}\end{equation*}
```julia
expand_log(log(z*t))
```
\begin{equation*}\log{\left (t z \right )}\end{equation*}
<hr />
<p>As with <code>powsimp()</code> and <code>powdenest()</code>, <code>expand_log()</code> has a <code>force</code> option that can be used to ignore assumptions.</p>
```julia
>>> expand_log(log(z**2))
⎛ 2⎞
log⎝z ⎠
>>> expand_log(log(z**2), force=True)
2⋅log(z)
```
<h5>In <code>Julia</code>:</h5>
```julia
expand_log(log(z^2))
```
\begin{equation*}\log{\left (z^{2} \right )}\end{equation*}
```julia
expand_log(log(z^2), force=true)
```
\begin{equation*}2 \log{\left (z \right )}\end{equation*}
<hr />
<h2>logcombine</h2>
<p>To apply identities 1 and 2 from right to left, use <code>logcombine()</code>.</p>
```julia
>>> logcombine(log(x) + log(y))
log(x⋅y)
>>> logcombine(n*log(x))
⎛ n⎞
log⎝x ⎠
>>> logcombine(n*log(z))
n⋅log(z)
```
<h5>In <code>Julia</code>:</h5>
```julia
logcombine(log(x) + log(y))
logcombine(n*log(x))
logcombine(n*log(z))
```
\begin{equation*}n \log{\left (z \right )}\end{equation*}
<hr />
<p><code>logcombine()</code> also has a <code>force</code> option that can be used to ignore assumptions.</p>
```julia
>>> logcombine(n*log(z), force=True)
⎛ n⎞
log⎝z ⎠
```
<h5>In <code>Julia</code>:</h5>
```julia
logcombine(n*log(z), force=true)
```
\begin{equation*}\log{\left (z^{n} \right )}\end{equation*}
<hr />
<h2>Special Functions</h2>
<p>SymPy implements dozens of special functions, ranging from functions in combinatorics to mathematical physics.</p>
<p>An extensive list of the special functions included with SymPy and their documentation is at the Functions Module page of the SymPy documentation.</p>
<p>For the purposes of this tutorial, let's introduce a few special functions in SymPy.</p>
<p>Let's define <code>x</code>, <code>y</code>, and <code>z</code> as regular, complex Symbols, removing any assumptions we put on them in the previous section. We will also define <code>k</code>, <code>m</code>, and <code>n</code>.</p>
```julia
>>> x, y, z = symbols('x y z')
>>> k, m, n = symbols('k m n')
```
<h5>In <code>Julia</code>:</h5>
```julia
x, y, z = symbols("x y z")
k, m, n = symbols("k m n")
```
(k, m, n)
<hr />
<p>The <a href="http://en.wikipedia.org/wiki/Factorial">factorial</a> function is <code>factorial</code>. <code>factorial(n)</code> represents $n! = 1\cdot2\cdots(n - 1)\cdot n$. <code>n!</code> represents the number of permutations of <code>n</code> distinct items.</p>
```julia
>>> factorial(n)
n!
```
<h5>In <code>Julia</code>:</h5>
```julia
factorial(n)
```
\begin{equation*}n!\end{equation*}
<hr />
<p>The <a href="http://en.wikipedia.org/wiki/Binomial_coefficient">binomial coefficient</a> function is <code>binomial</code>. <code>binomial(n, k)</code> represents $\binom{n}{k}$, the number of ways to choose <code>k</code> items from a set of <code>n</code> distinct items. It is also often written as <code>nCk</code>, and is pronounced "<code>n</code> choose <code>k</code>".</p>
```julia
>>> binomial(n, k)
⎛n⎞
⎜ ⎟
⎝k⎠
```
<h5>In <code>Julia</code>:</h5>
```julia
binomial(n, k)
```
\begin{equation*}{\binom{n}{k}}\end{equation*}
<hr />
<p>The factorial function is closely related to the <a href="http://en.wikipedia.org/wiki/Gamma_function">gamma function</a>, <code>gamma</code>. <code>gamma(z)</code> represents $\Gamma(z) = \int_0^\infty t^{z - 1}e^{-t}\,dt$, which for positive integer <code>z</code> is the same as <code>(z - 1)!</code>.</p>
```julia
>>> gamma(z)
Γ(z)
```
<h5>In <code>Julia</code>:</h5>
<ul>
<li><p>recall, we need to load <code>SpecialFunctions</code> for <code>gamma</code> to be available</p>
</li>
</ul>
```julia
gamma(z)
```
\begin{equation*}\Gamma\left(z\right)\end{equation*}
<hr />
<p>The <a href="http://en.wikipedia.org/wiki/Generalized_hypergeometric_function">generalized hypergeometric function</a> is <code>hyper</code>. <code>hyper([a_1, ..., a_p], [b_1, ..., b_q], z)</code> represents ${}_pF_q\left(\begin{matrix} a_1, \cdots, a_p \\ b_1, \cdots, b_q \end{matrix} \middle| z \right)$. The most common case is ${}_2F_1$, which is often referred to as the <a href="http://en.wikipedia.org/wiki/Hypergeometric_function">ordinary hypergeometric function</a>.</p>
```julia
>>> hyper([1, 2], [3], z)
┌─ ⎛1, 2 │ ⎞
├─ ⎜ │ z⎟
2╵ 1 ⎝ 3 │ ⎠
```
<h5>In <code>Julia</code>:</h5>
<ul>
<li><p>as <code>[1,2]</code> is not symbolic, we qualify <code>hyper</code></p>
</li>
</ul>
```julia
sympy.hyper([1, 2], [3], z)
```
\begin{equation*}{{}_{2}F_{1}\left(\begin{matrix} 1, 2 \\ 3 \end{matrix}\middle| {z} \right)}\end{equation*}
<hr />
<h2>rewrite</h2>
<p>A common way to deal with special functions is to rewrite them in terms of one another. This works for any function in SymPy, not just special functions. To rewrite an expression in terms of a function, use <code>expr.rewrite(function)</code>. For example,</p>
```julia
>>> tan(x).rewrite(sin)
2
2⋅sin (x)
─────────
sin(2⋅x)
>>> factorial(x).rewrite(gamma)
Γ(x + 1)
```
<h5>In <code>Julia</code>:</h5>
```julia
tan(x).rewrite(sin)
```
\begin{equation*}\frac{2 \sin^{2}{\left (x \right )}}{\sin{\left (2 x \right )}}\end{equation*}
```julia
factorial(x).rewrite(gamma)
```
\begin{equation*}x!\end{equation*}
<hr />
<p>For some tips on applying more targeted rewriting, see the expression manipulation section of the tutorial.</p>
<h3>expand_func</h3>
<p>To expand special functions in terms of some identities, use <code>expand_func()</code>. For example</p>
```julia
>>> expand_func(gamma(x + 3))
x⋅(x + 1)⋅(x + 2)⋅Γ(x)
```
<h5>In <code>Julia</code>:</h5>
```julia
expand_func(gamma(x + 3))
```
\begin{equation*}x \left(x + 1\right) \left(x + 2\right) \Gamma\left(x\right)\end{equation*}
<hr />
<h2>hyperexpand</h2>
<p>To rewrite <code>hyper</code> in terms of more standard functions, use <code>hyperexpand()</code>.</p>
```julia
>>> hyperexpand(hyper([1, 1], [2], z))
-log(-z + 1)
─────────────
z
```
<h5>In <code>Julia</code>:</h5>
<ul>
<li><p>As <code>[1,1]</code> is not symbolic, we qualify both <code>hyperexpand</code> and <code>hyper</code>:</p>
</li>
</ul>
```julia
sympy.hyperexpand(sympy.hyper([1, 1], [2], z))
```
\begin{equation*}- \frac{\log{\left (- z + 1 \right )}}{z}\end{equation*}
<hr />
<p><code>hyperexpand()</code> also works on the more general Meijer G-function (see the documentation of <code>sympy.functions.special.hyper.meijerg</code> for more information).</p>
```julia
>>> expr = meijerg([[1],[1]], [[1],[]], -z)
>>> expr
╭─╮1, 1 ⎛1 1 │ ⎞
│╶┐ ⎜ │ -z⎟
╰─╯2, 1 ⎝1 │ ⎠
>>> hyperexpand(expr)
1
─
z
ℯ
```
<h5>In <code>Julia</code>:</h5>
<ul>
<li><p>again, we qualify <code>meijerg</code></p>
</li>
</ul>
```julia
expr = sympy.meijerg([[1],[1]], [[1],[]], -z)
expr
```
\begin{equation*}{G_{2, 1}^{1, 1}\left(\begin{matrix} 1 & 1 \\1 & \end{matrix} \middle| {- z} \right)}\end{equation*}
```julia
hyperexpand(expr)
```
\begin{equation*}e^{\frac{1}{z}}\end{equation*}
<hr />
<h3>combsimp</h3>
<p>To simplify combinatorial expressions, use <code>combsimp()</code>.</p>
```julia
>>> n, k = symbols('n k', integer = True)
>>> combsimp(factorial(n)/factorial(n - 3))
n⋅(n - 2)⋅(n - 1)
>>> combsimp(binomial(n+1, k+1)/binomial(n, k))
n + 1
─────
k + 1
```
<h5>In <code>Julia</code>:</h5>
```julia
n, k = symbols("n k", integer = true)
```
(n, k)
```julia
combsimp(factorial(n)/factorial(n - 3))
```
\begin{equation*}n \left(n - 2\right) \left(n - 1\right)\end{equation*}
```julia
combsimp(binomial(n+1, k+1)/binomial(n, k))
```
\begin{equation*}\frac{n + 1}{k + 1}\end{equation*}
<hr />
<h3>gammasimp</h3>
<p>To simplify expressions with gamma functions or combinatorial functions with non-integer argument, use <code>gammasimp()</code>.</p>
```julia
>>> gammasimp(gamma(x)*gamma(1 - x))
π
────────
sin(π⋅x)
```
<h5>In <code>Julia</code>:</h5>
```julia
gammasimp(gamma(x)*gamma(1 - x))
```
\begin{equation*}\frac{\pi}{\sin{\left (\pi x \right )}}\end{equation*}
<hr />
<h3>Example: Continued Fractions</h3>
<p>Let's use SymPy to explore continued fractions. A <a href="http://en.wikipedia.org/wiki/Continued_fraction">continued fraction</a> is an expression of the form</p>
$$
a_0 + \cfrac{1}{a_1 + \cfrac{1}{a_2 + \cfrac{1}{ \ddots + \cfrac{1}{a_n}
}}}
$$
<p>where $a_0, \ldots, a_n$ are integers, and $a_1, \ldots, a_n$ are positive. A continued fraction can also be infinite, but infinite objects are more difficult to represent in computers, so we will only examine the finite case here.</p>
<p>A continued fraction of the above form is often represented as a list $[a_0; a_1, \ldots, a_n]$. Let's write a simple function that converts such a list to its continued fraction form. The easiest way to construct a continued fraction from a list is to work backwards. Note that despite the apparent symmetry of the definition, the first element, <code>a_0</code>, must usually be handled differently from the rest.</p>
```julia
>>> def list_to_frac(l):
... expr = Integer(0)
... for i in reversed(l[1:]):
... expr += i
... expr = 1/expr
... return l[0] + expr
>>> list_to_frac([x, y, z])
1
x + ─────
1
y + ─
z
```
<h5>In <code>Julia</code>:</h5>
```julia
function list_to_frac(l)
    expr = sympy.Integer(0)
    for i in reverse(l[2:end])   # skip the first element, like l[1:] in Python
        expr += i
        expr = 1/expr
    end
    return l[1] + expr
end
list_to_frac([x, y, z])
```
\begin{equation*}x + \frac{1}{y + \frac{1}{z}}\end{equation*}
<hr />
<p>We use <code>Integer(0)</code> in <code>list_to_frac</code> so that the result will always be a SymPy object, even if we only pass in Python ints.</p>
```julia
>>> list_to_frac([1, 2, 3, 4])
43
──
30
```
<h5>In <code>Julia</code>:</h5>
```julia
list_to_frac([1, 2, 3, 4])
```
\begin{equation*}\frac{43}{30}\end{equation*}
<hr />
<p>Every finite continued fraction is a rational number, but we are interested in symbolics here, so let's create a symbolic continued fraction. The <code>symbols()</code> function that we have been using has a shortcut to create numbered symbols. <code>symbols('a0:5')</code> will create the symbols <code>a0</code>, <code>a1</code>, ..., <code>a4</code>.</p>
```julia
>>> syms = symbols('a0:5')
>>> syms
(a₀, a₁, a₂, a₃, a₄)
>>> a0, a1, a2, a3, a4 = syms
>>> frac = list_to_frac(syms)
>>> frac
1
a₀ + ─────────────────
1
a₁ + ────────────
1
a₂ + ───────
1
a₃ + ──
a₄
```
<h5>In <code>Julia</code>:</h5>
```julia
syms = symbols("a0:5")
syms
```
(a0, a1, a2, a3, a4)
```julia
a0, a1, a2, a3, a4 = syms
```
(a0, a1, a2, a3, a4)
```julia
frac = list_to_frac(syms)
frac
```
\begin{equation*}a_{0} + \frac{1}{a_{1} + \frac{1}{a_{2} + \frac{1}{a_{3} + \frac{1}{a_{4}}}}}\end{equation*}
<hr />
<p>This form is useful for understanding continued fractions, but lets put it into standard rational function form using <code>cancel()</code>.</p>
```julia
>>> frac = cancel(frac)
>>> frac
a₀⋅a₁⋅a₂⋅a₃⋅a₄ + a₀⋅a₁⋅a₂ + a₀⋅a₁⋅a₄ + a₀⋅a₃⋅a₄ + a₀ + a₂⋅a₃⋅a₄ + a₂ + a₄
─────────────────────────────────────────────────────────────────────────
a₁⋅a₂⋅a₃⋅a₄ + a₁⋅a₂ + a₁⋅a₄ + a₃⋅a₄ + 1
```
<h5>In <code>Julia</code>:</h5>
```julia
frac = cancel(frac)
frac
```
\begin{equation*}\frac{a_{0} a_{1} a_{2} a_{3} a_{4} + a_{0} a_{1} a_{2} + a_{0} a_{1} a_{4} + a_{0} a_{3} a_{4} + a_{0} + a_{2} a_{3} a_{4} + a_{2} + a_{4}}{a_{1} a_{2} a_{3} a_{4} + a_{1} a_{2} + a_{1} a_{4} + a_{3} a_{4} + 1}\end{equation*}
<hr />
<p>Now suppose we were given <code>frac</code> in the above canceled form. In fact, we might be given the fraction in any form, but we can always put it into the above canonical form with <code>cancel()</code>. Suppose that we knew that it could be rewritten as a continued fraction. How could we do this with SymPy? A continued fraction is recursively $c + \frac{1}{f}$, where $c$ is an integer and $f$ is a (smaller) continued fraction. If we could write the expression in this form, we could pull out each $c$ recursively and add it to a list. We could then get a continued fraction with our <code>list_to_frac()</code> function.</p>
<p>The key observation here is that we can convert an expression to the form <code>c + \frac{1}{f}</code> by doing a partial fraction decomposition with respect to <code>c</code>. This is because <code>f</code> does not contain <code>c</code>. This means we need to use the <code>apart()</code> function. We use <code>apart()</code> to pull the term out, then subtract it from the expression, and take the reciprocal to get the <code>f</code> part.</p>
```julia
>>> l = []
>>> frac = apart(frac, a0)
>>> frac
a₂⋅a₃⋅a₄ + a₂ + a₄
a₀ + ───────────────────────────────────────
a₁⋅a₂⋅a₃⋅a₄ + a₁⋅a₂ + a₁⋅a₄ + a₃⋅a₄ + 1
>>> l.append(a0)
>>> frac = 1/(frac - a0)
>>> frac
a₁⋅a₂⋅a₃⋅a₄ + a₁⋅a₂ + a₁⋅a₄ + a₃⋅a₄ + 1
───────────────────────────────────────
a₂⋅a₃⋅a₄ + a₂ + a₄
```
<h5>In <code>Julia</code>:</h5>
```julia
l = []
frac = apart(frac, a0)
frac
push!(l, a0)
frac = 1/(frac - a0)
frac
```
\begin{equation*}\frac{a_{1} a_{2} a_{3} a_{4} + a_{1} a_{2} + a_{1} a_{4} + a_{3} a_{4} + 1}{a_{2} a_{3} a_{4} + a_{2} + a_{4}}\end{equation*}
<hr />
<p>Now we repeat this process</p>
```julia
>>> frac = apart(frac, a1)
>>> frac
a₃⋅a₄ + 1
a₁ + ──────────────────
a₂⋅a₃⋅a₄ + a₂ + a₄
>>> l.append(a1)
>>> frac = 1/(frac - a1)
>>> frac = apart(frac, a2)
>>> frac
a₄
a₂ + ─────────
a₃⋅a₄ + 1
>>> l.append(a2)
>>> frac = 1/(frac - a2)
>>> frac = apart(frac, a3)
>>> frac
1
a₃ + ──
a₄
>>> l.append(a3)
>>> frac = 1/(frac - a3)
>>> frac = apart(frac, a4)
>>> frac
a₄
>>> l.append(a4)
>>> list_to_frac(l)
1
a₀ + ─────────────────
1
a₁ + ────────────
1
a₂ + ───────
1
a₃ + ──
a₄
```
<h5>In <code>Julia</code>:</h5>
```julia
frac = apart(frac, a1)
frac
```
\begin{equation*}a_{1} + \frac{a_{3} a_{4} + 1}{a_{2} a_{3} a_{4} + a_{2} + a_{4}}\end{equation*}
```julia
push!(l, a1)
frac = 1/(frac - a1)
```
\begin{equation*}\frac{a_{2} a_{3} a_{4} + a_{2} + a_{4}}{a_{3} a_{4} + 1}\end{equation*}
```julia
frac = apart(frac, a2)
frac
```
\begin{equation*}a_{2} + \frac{a_{4}}{a_{3} a_{4} + 1}\end{equation*}
```julia
push!(l, a2)
frac = 1/(frac - a2)
```
\begin{equation*}\frac{a_{3} a_{4} + 1}{a_{4}}\end{equation*}
```julia
frac = apart(frac, a3)
frac
```
\begin{equation*}a_{3} + \frac{1}{a_{4}}\end{equation*}
```julia
push!(l, a3)
frac = 1/(frac - a3)
```
\begin{equation*}a_{4}\end{equation*}
```julia
frac = apart(frac, a4)
frac
```
\begin{equation*}a_{4}\end{equation*}
```julia
push!(l, a4)
list_to_frac(l)
```
\begin{equation*}a_{0} + \frac{1}{a_{1} + \frac{1}{a_{2} + \frac{1}{a_{3} + \frac{1}{a_{4}}}}}\end{equation*}
<hr />
<div class="admonition note"><p class="admonition-title">Quick tip</p><p>You can execute multiple lines at once in SymPy Live. Typing <code>Shift-Enter</code> instead of <code>Enter</code> will enter a newline instead of executing.</p>
</div>
<p>Of course, this exercise seems pointless, because we already know that our <code>frac</code> is <code>list_to_frac([a0, a1, a2, a3, a4])</code>. So try the following exercise: take a list of symbols, randomize them, create the canceled continued fraction, and see if you can reproduce the original list. For example</p>
```julia
>>> import random
>>> l = list(symbols('a0:5'))
>>> random.shuffle(l)
>>> orig_frac = frac = cancel(list_to_frac(l))
>>> del l
```
<h5>In <code>Julia</code>:</h5>
<ul>
<li><p><code>shuffle</code> from Python is <code>randperm</code> in the <code>Random</code> module</p>
</li>
</ul>
```julia
using Random
l = symbols("a0:5")
l = l[randperm(length(l))]
orig_frac = frac = cancel(list_to_frac(l))
```
\begin{equation*}\frac{a_{0} a_{1}^{2} a_{2} a_{3} a_{4} + a_{0} a_{1}^{2} a_{2} + a_{0} a_{1}^{2} a_{3} + a_{0} a_{2} a_{3} a_{4} + a_{0} a_{2} + a_{0} a_{3} + a_{1}^{2} a_{3} a_{4} + a_{1}^{2} + a_{1} a_{2} a_{3} a_{4} + a_{1} a_{2} + a_{1} a_{3} + a_{3} a_{4} + 1}{a_{0} a_{1} a_{2} a_{3} a_{4} + a_{0} a_{1} a_{2} + a_{0} a_{1} a_{3} + a_{1} a_{3} a_{4} + a_{1} + a_{2} a_{3} a_{4} + a_{2} + a_{3}}\end{equation*}
<hr />
<p>Click on "Run code block in SymPy Live" on the definition of <code>list_to_frac()</code> above, and then on the above example, and try to reproduce <code>l</code> from <code>frac</code>. I have deleted <code>l</code> at the end to remove the temptation to peek (you can check your answer at the end by calling <code>cancel(list_to_frac(l))</code> on the list that you generate at the end, and comparing it to <code>orig_frac</code>).</p>
<p>See if you can think of a way to figure out what symbol to pass to <code>apart()</code> at each stage (hint: think of what happens to $a_0$ in the formula $a_0 + \frac{1}{a_1 + \cdots}$ when it is canceled).</p>
<div class="admonition note"><p class="admonition-title">Note</p><p>Answer: a0 is the only symbol that does not appear in the denominator</p>
</div>
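<p>A small sketch of how the choice could be automated (not part of the original tutorial; the helper name <code>next_symbol</code> is made up for illustration). It is written with Python SymPy, assumes the top-level <code>denom</code> helper, and simply applies the hint from the note above:</p>
```python
from sympy import denom

def next_symbol(frac, syms):
    """Return the symbol that does not appear in the denominator of frac."""
    den = denom(frac)
    return [s for s in syms if not den.has(s)][0]
```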
<hr />
<p><a href="./index.html">return to index</a></p>
Source: examples/simplification.ipynb (UnofficialJuliaMirrorSnapshots/SymPy.jl-24249f21-da20-56a4-8eb1-6a02cf4ae2e6, MIT license)
>### <font color ='red'> Example 3 **Homework**
>The time at which a Brownian motion attains its maximum on the interval [0,1] has the distribution
>$$F(x)=\frac{2}{\pi}\sin^{-1}(\sqrt x),\quad 0\leq x\leq 1$$ </font>
Generate random samples distributed according to the given function using the inverse transform method, plot a histogram of 100 generated samples, and compare it with the given function $F(x)$, in order to validate that the procedure was carried out correctly.
```python
import numpy as np
import scipy.stats as ss
from scipy.stats import rayleigh
import statsmodels as st
import matplotlib.pyplot as plt
from sympy import Derivative, diff, simplify
```
```python
# Inverse transform: u = F(x) = (2/pi)*arcsin(sqrt(x))  =>  x = sin(pi*u/2)**2
F = lambda x: (2/np.pi)*np.arcsin(np.sqrt(x))
```
```python
N=10**6
def d_fun(N):
    # inverse transform sampling: X = sin(pi*U/2)**2 with U ~ Uniform(0,1)
    U = np.random.rand(N)
    return np.sin(np.pi*U/2)**2
f = lambda x: 1/(np.pi*np.sqrt(x)*np.sqrt(1-x))   # density f(x) = F'(x)
# domain 0 <= x <= 1
x = np.arange(0.1, .99, .01)
plt.plot(x, f(x))
```
```python
d= d_fun(N)
plt.hist(d, bins= 100, density=True)
plt.show()
```
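To validate the procedure as the task asks, here is a minimal sketch (not in the original notebook) comparing the empirical CDF of 100 generated samples with the analytic $F(x)$; it reuses the `F` and `d` defined above:
```python
# Compare the empirical CDF of 100 samples with the analytic F(x)
xs = np.linspace(0.001, 0.999, 200)
d100 = np.sort(d[:100])                          # 100 samples, as requested in the task
ecdf = np.arange(1, len(d100) + 1) / len(d100)   # empirical CDF values
plt.step(d100, ecdf, where='post', label='empirical CDF (100 samples)')
plt.plot(xs, F(xs), 'r', label='$F(x)$')
plt.legend()
plt.show()
```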
### Example 4
Rayleigh distribution
$$F(x)=1-e^{-2x(x-b)},\quad x\geq b $$
Source: TEMA-2/Untitled.ipynb (eremarin45/SPF-2019-II-G2, MIT license)
# The minimum jerk hypothesis
> Marcos Duarte
> Laboratory of Biomechanics and Motor Control ([http://demotu.org/](http://demotu.org/))
> Federal University of ABC, Brazil
Hogan and Flash (1984, 1985), based on observations of voluntary movements in primates, suggested that movements are performed (organized) with the smoothest trajectory possible. In this organizing principle, the endpoint trajectory is such that the mean squared-jerk across time of this movement is minimum.
Jerk is the derivative of acceleration, and the minimum-jerk trajectory is observed for the endpoint in extracorporeal coordinates (not for joint angles). According to Flash and Hogan (1985), the minimum-jerk trajectory of a planar movement is the one that minimizes the following objective function:
$$ C=\frac{1}{2} \int\limits_{t_{i}}^{t_{f}}\;\left[\left(\frac{d^{3}x}{dt^{3}}\right)^2+\left(\frac{d^{3}y}{dt^{3}}\right)^2\right]\:\mathrm{d}t $$
Hogan (1984) found that the solution for this objective function is a fifth-order polynomial trajectory (see Shadmehr and Wise (2004) for a simpler proof):
$$ \begin{array}{l l}
x(t) = a_0+a_1t+a_2t^2+a_3t^3+a_4t^4+a_5t^5 \\
y(t) = b_0+b_1t+b_2t^2+b_3t^3+b_4t^4+b_5t^5
\end{array} $$
With the following boundary conditions for $ x(t) $ and $ y(t) $: initial and final positions are $ (x_i,y_i) $ and $ (x_f,y_f) $ and initial and final velocities and accelerations are zero.
Let's employ [Sympy](http://sympy.org/en/index.html) to find the solution for the minimum jerk trajectory using symbolic algebra.
```python
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
from IPython.display import display, Math, Latex
from sympy import symbols, Matrix, latex, Eq, collect, solve, diff, simplify
from sympy.utilities.lambdify import lambdify
```
Using Sympy, the equation for minimum jerk trajectory for x is:
```python
# declare the symbolic variables
x, xi, xf, y, yi, yf, d, t = symbols('x, x_i, x_f, y, y_i, y_f, d, t')
a0, a1, a2, a3, a4, a5 = symbols('a_0:6')
x = a0 + a1*t + a2*t**2 + a3*t**3 + a4*t**4 + a5*t**5
display(Math(latex('x(t)=') + latex(x)))
```
$$x(t)=a_{0} + a_{1} t + a_{2} t^{2} + a_{3} t^{3} + a_{4} t^{4} + a_{5} t^{5}$$
Without loss of generality, consider $ t_i=0 $ and let's use $ d $ for movement duration ($ d=t_f $). The system of equations with the boundary conditions for $ x $ is:
```python
# define the system of equations
s = Matrix([Eq(x.subs(t,0) , xi),
Eq(diff(x,t,1).subs(t,0), 0),
Eq(diff(x,t,2).subs(t,0), 0),
Eq(x.subs(t,d) , xf),
Eq(diff(x,t,1).subs(t,d), 0),
Eq(diff(x,t,2).subs(t,d), 0)])
display(Math(latex(s, mat_str='matrix', mat_delim='[')))
```
$$\left[\begin{matrix}a_{0} = x_{i}\\a_{1} = 0\\2 a_{2} = 0\\a_{0} + a_{1} d + a_{2} d^{2} + a_{3} d^{3} + a_{4} d^{4} + a_{5} d^{5} = x_{f}\\a_{1} + 2 a_{2} d + 3 a_{3} d^{2} + 4 a_{4} d^{3} + 5 a_{5} d^{4} = 0\\2 a_{2} + 6 a_{3} d + 12 a_{4} d^{2} + 20 a_{5} d^{3} = 0\end{matrix}\right]$$
Which gives the following solution:
```python
# algebraically solve the system of equations
sol = solve(s, [a0, a1, a2, a3, a4, a5])
display(Math(latex(sol)))
```
$$\left \{ a_{0} : x_{i}, \quad a_{1} : 0, \quad a_{2} : 0, \quad a_{3} : \frac{10}{d^{3}} \left(x_{f} - x_{i}\right), \quad a_{4} : \frac{15}{d^{4}} \left(- x_{f} + x_{i}\right), \quad a_{5} : \frac{6}{d^{5}} \left(x_{f} - x_{i}\right)\right \}$$
Substituting this solution in the fifth order polynomial trajectory equation, we have the actual displacement trajectories:
```python
# substitute the equation parameters by the solution
x2 = x.subs(sol)
x2 = collect(simplify(x2, ratio=1), xf-xi)
display(Math(latex('x(t)=') + latex(x2)))
y2 = x2.subs([(xi, yi), (xf, yf)])
display(Math(latex('y(t)=') + latex(y2)))
```
$$x(t)=x_{i} + \left(x_{f} - x_{i}\right) \left(\frac{10 t^{3}}{d^{3}} - \frac{15 t^{4}}{d^{4}} + \frac{6 t^{5}}{d^{5}}\right)$$
$$y(t)=y_{i} + \left(y_{f} - y_{i}\right) \left(\frac{10 t^{3}}{d^{3}} - \frac{15 t^{4}}{d^{4}} + \frac{6 t^{5}}{d^{5}}\right)$$
And for the velocity, acceleration, and jerk trajectories in x:
```python
# symbolic differentiation
vx = x2.diff(t, 1)
display(Math(latex('v_x(t)=') + latex(vx)))
ax = x2.diff(t, 2)
display(Math(latex('a_x(t)=') + latex(ax)))
jx = x2.diff(t, 3)
display(Math(latex('j_x(t)=') + latex(jx)))
```
$$v_x(t)=\left(x_{f} - x_{i}\right) \left(\frac{30 t^{2}}{d^{3}} - \frac{60 t^{3}}{d^{4}} + \frac{30 t^{4}}{d^{5}}\right)$$
$$a_x(t)=\frac{60 t}{d^{3}} \left(x_{f} - x_{i}\right) \left(1 - \frac{3 t}{d} + \frac{2 t^{2}}{d^{2}}\right)$$
$$j_x(t)=\frac{60}{d^{3}} \left(x_{f} - x_{i}\right) \left(1 - \frac{6 t}{d} + \frac{6 t^{2}}{d^{2}}\right)$$
Let's plot the minimum jerk trajectory for x and its velocity, acceleration, and jerk considering $x_i=0,x_f=1,d=1$:
```python
# substitute by the numerical values
x3 = x2.subs([(xi, 0), (xf, 1), (d, 1)])
#create functions for calculation of numerical values
xfu = lambdify(t, diff(x3, t, 0), 'numpy')
vfu = lambdify(t, diff(x3, t, 1), 'numpy')
afu = lambdify(t, diff(x3, t, 2), 'numpy')
jfu = lambdify(t, diff(x3, t, 3), 'numpy')
#plots using matplotlib
ts = np.arange(0, 1.01, .01)
fig, axs = plt.subplots(1, 4, figsize=(12, 5), sharex=True, squeeze=True)
axs[0].plot(ts, xfu(ts), linewidth=3)
axs[0].set_title('Displacement [$\mathrm{m}$]')
axs[1].plot(ts, vfu(ts), linewidth=3)
axs[1].set_title('Velocity [$\mathrm{m/s}$]')
axs[2].plot(ts, afu(ts), linewidth=3)
axs[2].set_title('Acceleration [$\mathrm{m/s^2}$]')
axs[3].plot(ts, jfu(ts), linewidth=3)
axs[3].set_title('Jerk [$\mathrm{m/s^3}$]')
for axi in axs:
axi.set_xlabel('Time [s]', fontsize=14)
axi.grid(True)
fig.suptitle('Minimum jerk trajectory kinematics', fontsize=20, y=1.03)
fig.tight_layout()
plt.show()
```
Note that for the minimum jerk trajectory, initial and final values of both velocity and acceleration are zero, but not for the jerk.
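A quick numerical check of this statement (not in the original text), reusing `vfu`, `afu`, and `jfu` from the previous cell:
```python
# Boundary values of velocity, acceleration, and jerk for the 0-to-1 m, 1 s trajectory
print(vfu(0.0), vfu(1.0))   # 0.0 0.0
print(afu(0.0), afu(1.0))   # 0.0 0.0
print(jfu(0.0), jfu(1.0))   # 60.0 60.0  (nonzero at both ends)
```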
Read more about the minimum jerk trajectory hypothesis in the [Shadmehr and Wise's book companion site](http://www.shadmehrlab.org/book/minimum_jerk/minimumjerk.htm) and in [Paul Gribble's website](http://www.gribblelab.org/compneuro/4_Computational_Motor_Control_Kinematics.html#sec-5-1).
### The angular trajectory of a minimum jerk trajectory
Let's calculate the resulting angular trajectory given a minimum jerk linear trajectory, supposing it is from a circular motion of an elbow flexion. The length of the forearm is 0.5 m, the movement duration is 1 s, the elbow starts flexed at 90$^o$ and then flexes to 180$^o$.
First, the linear trajectories for this circular motion:
```python
# substitute by the numerical values
x3 = x2.subs([(xi, 0.5), (xf, 0), (d, 1)])
y3 = x2.subs([(xi, 0), (xf, 0.5), (d, 1)])
display(Math(latex('y(t)=') + latex(x3)))
display(Math(latex('x(t)=') + latex(y3)))
#create functions for calculation of numerical values
xfux = lambdify(t, diff(x3, t, 0), 'numpy')
vfux = lambdify(t, diff(x3, t, 1), 'numpy')
afux = lambdify(t, diff(x3, t, 2), 'numpy')
jfux = lambdify(t, diff(x3, t, 3), 'numpy')
xfuy = lambdify(t, diff(y3, t, 0), 'numpy')
vfuy = lambdify(t, diff(y3, t, 1), 'numpy')
afuy = lambdify(t, diff(y3, t, 2), 'numpy')
jfuy = lambdify(t, diff(y3, t, 3), 'numpy')
```
$$y(t)=- 3.0 t^{5} + 7.5 t^{4} - 5.0 t^{3} + 0.5$$
$$x(t)=3.0 t^{5} - 7.5 t^{4} + 5.0 t^{3}$$
```python
#plots using matplotlib
ts = np.arange(0, 1.01, .01)
fig, axs = plt.subplots(1, 4, figsize=(12, 5), sharex=True, squeeze=True)
axs[0].plot(ts, xfux(ts), 'b', linewidth=3)
axs[0].plot(ts, xfuy(ts), 'r', linewidth=3)
axs[0].set_title('Displacement [$\mathrm{m}$]')
axs[1].plot(ts, vfux(ts), 'b', linewidth=3)
axs[1].plot(ts, vfuy(ts), 'r', linewidth=3)
axs[1].set_title('Velocity [$\mathrm{m/s}$]')
axs[2].plot(ts, afux(ts), 'b', linewidth=3)
axs[2].plot(ts, afuy(ts), 'r', linewidth=3)
axs[2].set_title('Acceleration [$\mathrm{m/s^2}$]')
axs[3].plot(ts, jfux(ts), 'b', linewidth=3)
axs[3].plot(ts, jfuy(ts), 'r', linewidth=3)
axs[3].set_title('Jerk [$\mathrm{m/s^3}$]')
for axi in axs:
axi.set_xlabel('Time [s]', fontsize=14)
axi.grid(True)
fig.suptitle('Minimum jerk trajectory kinematics', fontsize=20, y=1.03)
fig.tight_layout()
plt.show()
```
Now, the angular trajectories for this circular motion:
```python
from sympy import atan2
ang = atan2(y3, x3)*180/np.pi
display(Math(latex('angle(t)=') + latex(ang)))
xang = lambdify(t, diff(ang, t, 0), 'numpy')
vang = lambdify(t, diff(ang, t, 1), 'numpy')
aang = lambdify(t, diff(ang, t, 2), 'numpy')
jang = lambdify(t, diff(ang, t, 3), 'numpy')
```
$$angle(t)=57.2957795130823 \operatorname{atan_{2}}{\left (3.0 t^{5} - 7.5 t^{4} + 5.0 t^{3},- 3.0 t^{5} + 7.5 t^{4} - 5.0 t^{3} + 0.5 \right )}$$
```python
ts = np.arange(0, 1.01, .01)
fig, axs = plt.subplots(1, 4, figsize=(12, 5), sharex=True, squeeze=True)
axs[0].plot(ts, xang(ts), linewidth=3)
axs[0].set_title('Angular displacement [$^o$]')
axs[1].plot(ts, vang(ts), linewidth=3)
axs[1].set_title('Angular velocity [$^o/s$]')
axs[2].plot(ts, aang(ts), linewidth=3)
axs[2].set_title('Angular acceleration [$^o/s^2$]')
axs[3].plot(ts, jang(ts), linewidth=3)
axs[3].set_title('Angular jerk [$^o/s^3$]')
for axi in axs:
axi.set_xlabel('Time [s]', fontsize=14)
axi.grid(True)
fig.suptitle('Minimum jerk trajectory angular kinematics', fontsize=20, y=1.03)
fig.tight_layout()
plt.show()
```
## Problems
1. What is your opinion on the minimum jerk hypothesis? Do you think humans control movement based on this principle? (Think about what biomechanical and neurophysiological properties are not considered in this hypothesis.)
2. Calculate and plot the position, velocity, acceleration, and jerk trajectories for different movement speeds (for example, consider always a displacement of 1 m and movement durations of 0.5, 1, and 2 s).
3. For the data in the previous item, calculate the ratio of peak speed to average speed. Shadmehr and Wise (2004) argue that psychophysical experiments show that reaching movements with the hand have this ratio equal to 1.75. Compare with the calculated values.
4. Can you propose alternative hypotheses for the control of movement?
## References
- Flash T, Hogan N (1985) [The coordination of arm movements: an experimentally confirmed mathematical model](http://www.jneurosci.org/cgi/reprint/5/7/1688.pdf). Journal of Neuroscience, 5, 1688-1703.
- Hogan N (1984) [An organizing principle for a class of voluntary movements](http://www.jneurosci.org/content/4/11/2745.full.pdf). Journal of Neuroscience, 4, 2745-2754.
- Shadmehr R, Wise S (2004) [The Computational Neurobiology of Reaching and Pointing: A Foundation for Motor Learning](http://www.shadmehrlab.org/book/). A Bradford Book. [Companion site](http://www.shadmehrlab.org/book/).
- Zatsiorsky VM (1998) [Kinematics of Human Motion](http://books.google.com.br/books/about/Kinematics_of_Human_Motion.html?id=Pql_xXdbrMcC&redir_esc=y). Champaign, Human Kinetics.
|
ecd986b49e426cc473f9b14d9949d0ad02cb579b
| 203,632 |
ipynb
|
Jupyter Notebook
|
notebooks/MinimumJerkHypothesis.ipynb
|
0todd0000/BMC
|
bfad103e0fc02afc662ce417bf062b4758a39897
|
[
"CC-BY-4.0"
] | 293 |
2015-01-17T12:36:30.000Z
|
2022-02-13T13:13:12.000Z
|
notebooks/MinimumJerkHypothesis.ipynb
|
0todd0000/BMC
|
bfad103e0fc02afc662ce417bf062b4758a39897
|
[
"CC-BY-4.0"
] | 11 |
2018-06-21T21:40:40.000Z
|
2018-08-09T19:55:26.000Z
|
notebooks/MinimumJerkHypothesis.ipynb
|
0todd0000/BMC
|
bfad103e0fc02afc662ce417bf062b4758a39897
|
[
"CC-BY-4.0"
] | 162 |
2015-01-16T22:54:31.000Z
|
2022-02-14T21:14:43.000Z
| 327.382637 | 68,648 | 0.926073 | true | 3,865 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.894789 | 0.841826 | 0.753257 |
__label__eng_Latn
| 0.710513 | 0.5884 |
# Lecture 26: Two envelope paradox (cont.), conditional expectation (cont.), waiting for HT vs. waiting for HH
## Stat 110, Prof. Joe Blitzstein, Harvard University
----
## Two-envelope Paradox (continued)
From [Wikipedia](https://en.wikipedia.org/wiki/Two_envelopes_problem):
**Basic setup:** You are given two indistinguishable envelopes, each of which contains a positive sum of money. One envelope contains twice as much as the other. You may pick one envelope and keep whatever amount it contains. You pick one envelope at random but before you open it you are given the chance to take the other envelope instead.
There is no indication as to which of $X$ and $Y$ contains the lesser/greater amount.
Let's consider two competing arguments:
\begin{align}
&\text{argument 1: } &\quad \mathbb{E}(Y) &= \mathbb{E}(X) \\
\\
&\text{argument 2: } &\quad \mathbb{E}(Y) &= \mathbb{E}(Y|Y=2X) \, P(Y=2X) \, + \, \mathbb{E}(Y|Y=\frac{X}{2}) \, P(Y=\frac{X}{2}) \\
& & &= \mathbb{E}(2X) \, \frac{1}{2} \, + \, \mathbb{E}(\frac{X}{2}) \, \frac{1}{2} \\
& & &= \frac{5}{4} \, \mathbb{E}(X)
\end{align}
So which argument is correct?
Argument 1 is _symmetry_, and that takes precedence.
Argument 2, however, has a flaw: we start by conditioning on $Y=2X$ (or on $Y=\frac{X}{2}$), but then we immediately replace $\mathbb{E}(Y|Y=2X)$ with $\mathbb{E}(2X)$ and $\mathbb{E}(Y|Y=\frac{X}{2})$ with $\mathbb{E}(\frac{X}{2})$, at which point we are no longer conditioning at all. There is no justification for equating a conditional expectation with an unconditional one. The simulation below makes this dependence concrete.
* Let $I$ be the indicator of $Y=2X$
* Then $X,I$ are _dependent_
* Therefore $\mathbb{E}(Y|Y=2X) \neq \mathbb{E}(2X)$
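To make the flaw concrete, here is a small simulation (a sketch; the Exponential prior for the smaller amount is an arbitrary illustrative choice, since the problem does not specify one). It confirms the symmetry $\mathbb{E}(Y) = \mathbb{E}(X)$ and shows that $\mathbb{E}(Y|Y=2X)$ is *not* the same as $\mathbb{E}(2X)$, precisely because $X$ and the indicator $I$ are dependent.
```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6

s = rng.exponential(scale=1.0, size=n)   # smaller amount S (arbitrary illustrative prior, E(S) = 1)
I = rng.random(n) < 0.5                  # indicator of the event {Y = 2X}: you picked the larger envelope
y = np.where(I, 2 * s, s)                # amount in your envelope Y
x = np.where(I, s, 2 * s)                # amount in the other envelope X

print(y.mean(), x.mean())                # symmetry: E(Y) = E(X) = 1.5
print(y[I].mean())                       # E(Y | Y = 2X) = E(2S) = 2
print(2 * x.mean())                      # E(2X) = 3, a different number
```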
## Patterns in Coin Flips
Continuing with a further example of conditional expectation, consider repeated trials of coin flips using a fair coin.
* How many flips until $HT$ (how many flips until you run into an $H$ followed by a $T$, including those flips)? _Let us call this random variable $W_{HT}$._
* How many flips until $HH$ (how many flips until you run into an $H$ followed by another $H$, including those flips)? _Let us call this random variable $W_{HH}$._
So what we are really looking for are $\mathbb{E}(W_{HT})$ and $\mathbb{E}(W_{HH})$. Which do you think is greater? Are the two _equal_, is $W_{HT} \lt W_{HH}$, or is $W_{HT} \gt W_{HH}$?
If you think they are equal by _symmetry_, then you're wrong. By symmetry we know:
\begin{align}
& & \mathbb{E}(W_{TT}) &= \mathbb{E}(W_{HH}) \\
& & \mathbb{E}(W_{HT}) &= \mathbb{E}(W_{TH}) \\
\\
&\text{but } & \mathbb{E}(W_{HT}) &\neq \mathbb{E}(W_{HH})
\end{align}
### $\mathbb{E}(W_{HT})$
Consider first $\mathbb{E}(W_{HT})$; we can solve for this without using conditional expectation if we just think about things.
Notice that by the time we get the first $H$, we are actually halfway done. With this partial progress, all we need now is to see a $T$. If we see another $H$, that is OK, and we still keep waiting for a $T$. Let $W_{1}$ be the number of flips until the first $H$, and let $W_{2}$ be the number of additional flips after that until we see a $T$.
It is easy to recognize that $W_{1}, W_{2} \sim 1 + \operatorname{Geom}(\frac{1}{2})$, where the Geometric distribution has support $k \in \{0,1,2,\dots\}$
So we have
\begin{align}
\mathbb{E}(W_{HT}) &= \mathbb{E}(W_1) + \mathbb{E}(W_2) \\
&= \left[\frac{1 - {1/2}}{1/2} + 1 \right] + \left[\frac{1 - {1/2}}{1/2} + 1 \right] \\
&= 1 + 1 + 1 + 1 \\
&= \boxed{4}
\end{align}
### $\mathbb{E}(W_{HH})$
Now let's consider $\mathbb{E}(W_{HH})$
In this case, if we get another $H$ immediately after seeing the first $H$, then we are done. But if we don't get $H$, then we have to start all over again and so we don't enjoy any partial progress.
Let's solve this using _conditional expectation_.
Similar to how we solved Gambler's Ruin by _conditioning on the first toss_, we have
\begin{align}
\mathbb{E}(W_{HH}) &= \mathbb{E}(W_{HH} | \text{first toss is } H) \frac{1}{2} + \mathbb{E}(W_{HH} | \text{first toss is } T) \frac{1}{2} \\
&= \left( 2 \, \frac{1}{2} + (2 + \mathbb{E}(W_{HH}))\frac{1}{2} \right) \frac{1}{2} + \left(1 + \mathbb{E}(W_{HH}) \right) \frac{1}{2} \\
&= \left(\frac{1}{2} + \frac{1}{2} + \frac{\mathbb{E}(W_{HH})}{4} \right) + \left(\frac{1}{2} + \frac{\mathbb{E}(W_{HH})}{2} \right) \\
&= \frac{3}{2} + \frac{3 \, \mathbb{E}(W_{HH})}{4} \\
\Rightarrow \quad \frac{1}{4} \, \mathbb{E}(W_{HH}) &= \frac{3}{2} \\
\Rightarrow \quad \mathbb{E}(W_{HH}) &= \boxed{6}
\end{align}
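A quick Monte Carlo check of both answers (a sketch; the flip-by-flip loop is written for clarity rather than speed):
```python
import numpy as np

rng = np.random.default_rng(0)

def wait_for(pattern, rng):
    """Number of fair-coin flips until `pattern` (e.g. 'HT' or 'HH') first appears."""
    last, n = '', 0
    while True:
        flip = 'H' if rng.random() < 0.5 else 'T'
        n += 1
        if last + flip == pattern:
            return n
        last = flip

trials = 100_000
print(np.mean([wait_for('HT', rng) for _ in range(trials)]))   # close to 4
print(np.mean([wait_for('HH', rng) for _ in range(trials)]))   # close to 6
```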
### Related application
Genetics is a field where you might need to know about strings of letters, not $H,T$ but rather $A,C,T,G$.
If you're interested here's a good [TED talk by Peter Donnelly on genetics and statistics](https://www.ted.com/talks/peter_donnelly_shows_how_stats_fool_juries/transcript?language=en).
## Conditioning on a Random Variable
Consider $\mathbb{E}(Y | X=x)$: what is $X=x$?
It is an _event_, and we _condition on that event_.
\begin{align}
&\text{discrete case: } &\quad &\mathbb{E}(Y|X=x) = \sum_{y} y \, P(Y=y|X=x) \\
\\
&\text{continuous case: } &\quad &\mathbb{E}(Y|X=x) = \int_{-\infty}^{\infty} y \, f_{Y|X}(y|x) \, dy = \int_{-\infty}^{\infty} y \, \frac{ f_{X,Y}(x,y) }{ f_{X}(x) } \, dy
\end{align}
Now let $g(x) = \mathbb{E}(Y|X=x)$. This is a function of $x$ (the randomness in $Y$ has been averaged out).
Then define $\mathbb{E}(Y|X) = g(X)$. e.g. if $g(x) = x^2$, then $g(X) = X^2$. So $\mathbb{E}(Y|X)$ is itself a _random variable_, and rather than a function of $Y$, is a function of _$X$_.
### Example with Poisson
Let $X,Y$ be i.i.d. $\operatorname{Pois}(\lambda)$.
#### $ \mathbb{E}(X + Y | X)$
\begin{align}
\mathbb{E}(X + Y | X) &= \mathbb{E}(X|X) + \mathbb{E}(Y|X) \\
&= \underbrace{ X }_{ \text{X is function of X} } + \underbrace{ \mathbb{E}(Y) }_{ \text{independence} } \\
&= X + \lambda
\end{align}
#### $\mathbb{E}(X | X + Y)$
Let $T = X + Y$, find the conditional PMF.
\begin{align}
P(X=k|T=n) &= \frac{P(T=n|X=k) \, P(X=k)}{P(T=n)} &\quad \text{by Bayes' Rule} \\
&= \frac{P(Y=n-k) \, P(X=k)}{P(T=n)} \\
&= \frac{ \frac{e^{-\lambda} \, \lambda^{n-k}}{(n-k)!} \, \frac{e^{-\lambda} \, \lambda^{k}}{k!}}{ \frac{e^{-2\lambda} \, (2\lambda)^n}{n!} } \\
&= \frac{n!}{(n-k)! \, k!} \, \left( \frac{1}{2} \right)^n \\
&= \binom{n}{k} \, \left( \frac{1}{2} \right)^n \\
\\
X | T=n &\sim \operatorname{Bin}(n, \frac{1}{2}) \\
\\
\mathbb{E}(X|T=n) &= \frac{n}{2} \Rightarrow \mathbb{E}(X|T) = \frac{T}{2}
\end{align}
Alternately, notice the _symmetry_...
\begin{align}
\mathbb{E}(X|X+Y) &= \mathbb{E}(Y|X+Y) &\quad \text{by symmetry (because i.i.d.)} \\\\
\\
\mathbb{E}(X|X+Y) + \mathbb{E}(Y|X+Y) &= \mathbb{E}(X+Y|X+Y) \\
&= X + Y \\
&= T \\
\\
\Rightarrow \mathbb{E}(X|T) &= \frac{T}{2}
\end{align}
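Both identities can be sanity-checked with a short simulation (a sketch; $\lambda = 3$ is an arbitrary choice):
```python
import numpy as np

rng = np.random.default_rng(0)
lam, n = 3.0, 10**6
X = rng.poisson(lam, n)
Y = rng.poisson(lam, n)
T = X + Y

# E(X + Y | X = x) should be x + lambda
x_val = 2
print((X + Y)[X == x_val].mean(), x_val + lam)

# E(X | T = n) should be n/2
n_val = 6
print(X[T == n_val].mean(), n_val / 2)
```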
## Foreshadowing: Iterated Expectation or Adam's Law
The single most important property of conditional expectation is closely related to the Law of Total Probability.
Recall that $\mathbb{E}(Y|X)$ is a random variable. That being so, it is natural to wonder what the expected value is.
Consider this:
\begin{align}
\mathbb{E} \left( \mathbb{E}(Y|X) \right) &= \mathbb{E}(Y)
\end{align}
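A quick numerical check with the Poisson example above (a sketch; again $\lambda = 3$ is arbitrary): $\mathbb{E}(X|T) = T/2$, and averaging $T/2$ over the randomness of $T$ indeed recovers $\mathbb{E}(X) = \lambda$.
```python
import numpy as np

rng = np.random.default_rng(0)
lam, n = 3.0, 10**6
X = rng.poisson(lam, n)
T = X + rng.poisson(lam, n)

# E(E(X|T)) = E(T/2) should match E(X) = lambda
print((T / 2).mean(), X.mean(), lam)
```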
We will go into more detail next time!
----
View [Lecture 26: Conditional Expectation Continued | Statistics 110](http://bit.ly/2oOXv6D) on YouTube.
|
6c121cb65070e5504a2332316d3904b064c382db
| 10,242 |
ipynb
|
Jupyter Notebook
|
Lecture_26.ipynb
|
abhra-nilIITKgp/stats-110
|
258461cdfbdcf99de5b96bcf5b4af0dd98d48f85
|
[
"BSD-3-Clause"
] | 113 |
2016-04-29T07:27:33.000Z
|
2022-02-27T18:32:47.000Z
|
Lecture_26.ipynb
|
snoop2head/stats-110
|
88d0cc56ede406a584f6ba46368e548010f2b14a
|
[
"BSD-3-Clause"
] | null | null | null |
Lecture_26.ipynb
|
snoop2head/stats-110
|
88d0cc56ede406a584f6ba46368e548010f2b14a
|
[
"BSD-3-Clause"
] | 65 |
2016-12-24T02:02:25.000Z
|
2022-02-13T13:20:02.000Z
| 40.482213 | 390 | 0.516891 | true | 2,625 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.805632 | 0.882428 | 0.710912 |
__label__eng_Latn
| 0.953313 | 0.490019 |
# Principal Component Analysis
*The code in this notebook has been taken from a [notebook](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/05.09-Principal-Component-Analysis.ipynb) in the [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/PythonDataScienceHandbook).*
*The code has been released by VanderPlas under the [MIT license](https://opensource.org/licenses/MIT).*
*Our text is original, though the presentation structure partially follows VanderPlas' presentation of the topic.*
Version: 1.0 (2020/09), Jesús Cid-Sueiro
<!-- I KEEP THIS LINK, MAY BE WE COULD GENERATE SIMILAR COLAB LINKS TO ML4ALL
<a href="https://colab.research.google.com/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/05.09-Principal-Component-Analysis.ipynb"></a>
-->
```python
# Basic imports
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
```
Many machine learning applications involve the processing of highly **multidimensional** data. More data dimensions usually imply more information to make better predictions. However, a large dimension may pose computational problems (the computational load of machine learning algorithms usually grows with the data dimension) and make it harder to design a good predictor.
For this reason, a whole area of machine learning has been focused on [**feature extraction**](https://en.wikipedia.org/wiki/Feature_extraction) algorithms, i.e. algorithms that transform a multidimensional dataset into data with a reduced set of features. The goal of these techniques is to reduce the data dimension while preserving the most relevant information for the prediction task.
Feature extraction (and, more generally, [**dimensionality reduction**](https://en.wikipedia.org/wiki/Dimensionality_reduction)) algorithms are also useful for visualization. By reducing the data dimensions to 2 or 3, we can transform data into points in the plane or the space, that can be represented graphically.
**Principal Component Analysis (PCA)** is a particular example of linear feature extraction methods, that compute the new features as linear combinations of the original data components. Besides feature extraction and visualization, PCA is also a usefull tool for noise filtering, as we will see later.
## 1. A visual explanation.
Before going into the mathematical details, we can illustrate the behavior of PCA by looking at a two-dimensional dataset with 200 samples:
```python
rng = np.random.RandomState(1)
X = np.dot(rng.rand(2, 2), rng.randn(2, 200)).T
plt.scatter(X[:, 0], X[:, 1])
plt.xlabel('$x_0$')
plt.ylabel('$x_1$')
plt.axis('equal')
plt.show()
```
PCA looks for the principal axes in the data, using them as new coordinates to represent the data points.
We can compute this as follows:
```python
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(X)
```
PCA(n_components=2)
After fitting PCA to the data, we can read the directions of the new axes (the *principal* directions) using:
```python
print(pca.components_)
```
[[-0.94446029 -0.32862557]
[-0.32862557 0.94446029]]
These directions are unit vectors. We can plot them over the scatter plot of the input data, scaled by (three times) the standard deviation of the data along each direction. The standard deviations can be computed as the square root of the variance along each direction, which is available through
```python
print(pca.explained_variance_)
```
[0.7625315 0.0184779]
The resulting axis plot is the following
```python
def draw_vector(v0, v1, ax=None):
ax = ax or plt.gca()
arrowprops=dict(arrowstyle='->',
linewidth=2,
shrinkA=0, shrinkB=0, color='k')
ax.annotate('', v1, v0, arrowprops=arrowprops)
# plot data
plt.scatter(X[:, 0], X[:, 1], alpha=0.2)
for length, vector in zip(pca.explained_variance_, pca.components_):
v = vector * 3 * np.sqrt(length)
draw_vector(pca.mean_, pca.mean_ + v)
plt.axis('equal');
```
The *principal axes* of the data can be used as a new basis for the data representation. The *principal components* of any point are given by the projections of the point onto each principal axes.
```python
# plot principal components
T = pca.transform(X)
plt.scatter(T[:, 0], T[:, 1], alpha=0.2)
plt.axis('equal')
plt.xlabel('component 1')
plt.ylabel('component 2')
plt.title('principal components')
plt.show()
```
Note that PCA is essentially an **affine transformation**: data is centered around the mean and rotated according to the principal directions. At this point, we can select those directions that may be more relevant for prediction.
## 2. Mathematical Foundations
*(The material in this section is based on [Wikipedia: Principal Component Analysis](https://en.wikipedia.org/wiki/Principal_component_analysis))*
In this section we will see how the principal directions are determined mathematically, and how can they be used to tranform the original dataset.
PCA is defined as a **linear transformation** that transforms the data to a new **coordinate system** such that the greatest variance by some scalar projection of the data comes to lie on the first coordinate (called the first principal component), the second greatest variance on the second coordinate, and so on.
Consider a dataset ${\cal S} = \{{\bf x}_k, k=0,\cdots, K-1\}$ of $m$-dimensional samples arranged by rows in data matrix, ${\bf X}$. Assume the dataset has **zero sample mean**, that is
\begin{align}
\sum_{k=0}^{K-1} {\bf x}_k = {\bf 0}
\end{align}
which implies that the sample mean of each column in ${\bf X}$ is zero. If data is not zero-mean, the data matrix ${\bf X}$ is built with rows ${\bf x}_k - {\bf m}$, where ${\bf m}$ is the mean.
PCA transforms each sample ${\bf x}_k \in {\cal S}$ into a vector of principal components ${\bf t}_k$. The transformation is linear so each principal component can be computed as the scalar product of each sample with a weight vector of coefficients. For instance, if the coeficient vectors are ${\bf w}_0, {\bf w}_1, \ldots, {\bf w}_{l-1}$, the principal components of ${\bf x}_k$ are
$$
t_{k0} = {\bf w}_0^\top \mathbf{x}_k, \\
t_{k1} = {\bf w}_1^\top \mathbf{x}_k, \\
t_{k2} = {\bf w}_2^\top \mathbf{x}_k, \\
...
$$
These components can be computed iteratively. In the next section we will see how to compute the first one.
### 2.1. Computing the first component
#### 2.1.1. Computing ${\bf w}_0$
The **principal direction** is selected in such a way that the sample variance of the first components of the data (that is, $t_{00}, t_{10}, \ldots, t_{K-1,0}$) is maximized. Since we can make the variance arbitrarily large by using an arbitrarily large ${\bf w}_0$, we will impose a constraint of the size of the coefficient vectors, that should be unitary. Thus,
$$
\|{\bf w}_0\| = 1
$$
Note that the mean of the transformed components is zero, because samples are zero-mean:
\begin{align}
\sum_{k=0}^{K-1} t_{k0} = \sum_{k=0}^{K-1} {\bf w}_0^\top {\bf x}_k = {\bf w}_0^\top \sum_{k=0}^{K-1} {\bf x}_k = 0
\end{align}
therefore, the variance of the first principal component can be computed as
\begin{align}
V &= \frac{1}{K} \sum_{k=0}^{K-1} t_{k0}^2
= \frac{1}{K} \sum_{k=0}^{K-1} {\bf w}_0^\top {\bf x}_k {\bf x}_k^\top {\bf w}_0
= \frac{1}{K} {\bf w}_0^\top \left(\sum_{k=0}^{K-1} {\bf x}_k {\bf x}_k^\top \right) {\bf w}_0 \\
&= \frac{1}{K} {\bf w}_0^\top {\bf X}^\top {\bf X} {\bf w}_0
\end{align}
The first principal component ${\bf w}_0$ is the maximizer of the variance, thus, it can be computed as
$$
{\bf w}_0 = \underset{\Vert {\bf w} \Vert= 1}{\operatorname{\arg\,max}} \left\{ {\bf w}^\top {\bf X}^\top {\bf X} {\bf w} \right\}$$
Since ${\bf X}^\top {\bf X}$ is necessarily a positive semidefinite matrix, the maximum is equal to its largest eigenvalue, which is attained when ${\bf w}_0$ is the corresponding eigenvector.
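We can verify this numerically for the 2-D dataset above: compute the top eigenvector of ${\bf X}^\top{\bf X}$ (after centering) and compare it with the first principal direction found by scikit-learn (a sketch; the sign of an eigenvector is arbitrary, so the two may differ by a global sign).
```python
Xc = X - X.mean(axis=0)                   # center the 200 x 2 data matrix from Section 1
eigvals, eigvecs = np.linalg.eigh(Xc.T @ Xc)
w0 = eigvecs[:, np.argmax(eigvals)]       # unit eigenvector with the largest eigenvalue
print(w0)
print(pca.components_[0])                 # same direction, possibly with opposite sign
```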
#### 2.1.2. Computing $t_{k0}$
Once we have computed the first eigenvector ${\bf w}_0$, we can compute the first component of each sample,
$$
t_{k0} = {\bf w}_0^\top \mathbf{x}_k
$$
Also, we can compute the projection of each sample along the first principal direction as
$$
t_{k0} {\bf w}_0
$$
We can illustrate this with the example data, applying PCA with only one component
```python
pca = PCA(n_components=1)
pca.fit(X)
T = pca.transform(X)
print("original shape: ", X.shape)
print("transformed shape:", T.shape)
```
original shape: (200, 2)
transformed shape: (200, 1)
and projecting the data over the first principal direction:
```python
X_new = pca.inverse_transform(T)
plt.scatter(X[:, 0], X[:, 1], alpha=0.2)
plt.scatter(X_new[:, 0], X_new[:, 1], alpha=0.8)
plt.axis('equal');
```
### 2.2. Computing further components
The *error*, i.e. the difference between any sample an its projection, is given by
\begin{align}
\hat{\bf x}_{k0} &= {\bf x}_k - t_{k0} {\bf w}_0 = {\bf x}_k - {\bf w}_0 {\bf w}_0^\top \mathbf{x}_k = \\
&= ({\bf I} - {\bf w}_0{\bf w}_0^\top ) {\bf x}_k
\end{align}
If we arrange all error vectors, by rows, in a data matrix, we get
$$
\hat{\bf X}_{0} = {\bf X}({\bf I} - {\bf w}_0 {\bf w}_0^T)
$$
The second principal component can be computed by repeating the analysis in section 2.1 over the error matrix $\hat{\bf X}_{0}$. Thus, it is given by
$$
{\bf w}_1 = \underset{\Vert {\bf w} \Vert= 1}{\operatorname{\arg\,max}} \left\{ {\bf w}^\top \hat{\bf X}_0^\top \hat{\bf X}_0 {\bf w} \right\}
$$
It turns out that this gives the eigenvector of ${\bf X}^\top {\bf X}$ with the second largest eigenvalue.
Repeating this process iteratively (by subtracting from the data all components along the previously computed principal directions) we can compute the third, fourth and successive principal directions.
### 2.3. Summary of computations
Summarizing, we can conclude that the $l$ principal components of the data can be computed as follows:
1. Compute the $l$ unitary eigenvectors ${\bf w}_0, {\bf w}_1, \ldots, {\bf w}_{l-1}$ from matrix ${\bf X}^\top{\bf X}$ with the $l$ largest eigenvalues.
2. Arrange the eigenvectors columnwise into an $m \times l$ weight matrix ${\bf W} = ({\bf w}_0 | {\bf w}_1 | \ldots | {\bf w}_{l-1})$
3. Compute the principal components for all samples in data matrix ${\bf X}$ as
$$
{\bf T} = {\bf X}{\bf W}
$$
The computation of the eigenvectors of ${\bf X}^\top{\bf X}$ can be problematic, specially if the data dimension is very high. Fortunately, there exist efficient algorithms for the computation of the eigenvectors without computing ${\bf X}^\top{\bf X}$, by means of the [singular value decomposition](https://en.wikipedia.org/wiki/Singular_value_decomposition) of matrix ${\bf X}$. This is the method used by the PCA method from the `sklearn` library
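The following sketch carries out these three steps via the SVD for the 2-D dataset from Section 1, and checks that the resulting components agree with scikit-learn's `transform` (up to a possible sign flip of each column):
```python
Xc = X - X.mean(axis=0)                  # center the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
W = Vt.T                                 # weight matrix: columns are the principal directions
T_manual = Xc @ W                        # principal components of all samples

T_sklearn = PCA(n_components=2).fit_transform(X)
print(np.allclose(np.abs(T_manual), np.abs(T_sklearn)))   # True: equal up to column signs
```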
## 3. PCA as dimensionality reduction
After a PCA transformation, we may find that the variance of the data along some of the principal directions is very small. Thus, we can simply remove those directions, and represent data using the components with the highest variance only.
In the above 2-dimensional example, we selected the principal direction only, and all data become projected onto a single line.
The key idea in the use of PCA for dimensionality reduction is that, if the removed dimensions had a very low variance, we can expect a small information loss for a prediction task. Thus, we can try to design our predictor with the selected features, with the hope to preserve a good prediction performance.
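As a simple illustration of this idea, one can compare a classifier trained on all features with the same classifier trained on a reduced set of principal components. The sketch below uses the digits dataset (introduced in the next section) and a logistic-regression classifier; both the classifier and the choice of 16 components are illustrative assumptions, not part of the original text.
```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

X_d, y_d = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X_d, y_d, random_state=0)

full = LogisticRegression(max_iter=5000).fit(Xtr, ytr)
reduced = make_pipeline(PCA(n_components=16), LogisticRegression(max_iter=5000)).fit(Xtr, ytr)

print('64 original features:   ', full.score(Xte, yte))
print('16 principal components:', reduced.score(Xte, yte))
```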
## 4. PCA for visualization: Hand-written digits
In the illustrative example we used PCA to project 2-dimensional data into one dimension, but the same analysis can be applied to project $N$-dimensional data to $r<N$ dimensions. An interesting application of this is the projection to 2 or 3 dimensions, that can be visualized.
We will illustrate this using the digits dataset:
```python
from sklearn.datasets import load_digits
digits = load_digits()
digits.data.shape
```
(1797, 64)
This dataset contains $8\times 8$ pixel images of handwritten digits. Thus, each image can be converted into a 64-dimensional vector and then projected onto two dimensions:
```python
pca = PCA(2) # project from 64 to 2 dimensions
projected = pca.fit_transform(digits.data)
print(digits.data.shape)
print(projected.shape)
```
(1797, 64)
(1797, 2)
Every image has been transformed into a 2-dimensional vector, and we can represent the collection in a scatter plot:
```python
plt.scatter(projected[:, 0], projected[:, 1],
c=digits.target, edgecolor='none', alpha=0.5,
cmap=plt.cm.get_cmap('rainbow', 10))
plt.xlabel('component 1')
plt.ylabel('component 2')
plt.colorbar();
```
Note that we have just transformed a collection of digital images into a cloud of points, using a different color to represent the points corresponding to the same digit. Note that colors from the same digit tend to be grouped in the same cluster, which suggests that these two components may contain useful information for discriminating between digits. Clusters show some overlap, so maybe using more components could help for a better discrimination.
The example shows that, although a 2-dimensional projection may lose information that is relevant for a prediction task, visualizing these projections can give the data analyst useful insights into the prediction problem to be solved.
### 4.1. Interpreting principal components
Note that an important step in the application of PCA to digital images is the vectorization: each digit image is converted into a 64 dimensional vector:
$$
{\bf x} = (x_0, x_1, x_2 \cdots x_{63})^\top
$$
where $x_i$ represents the intensity of the $i$-th pixel in the image. We can go back to reconstruct the original image as follows: if $I_i$ is a black image with unit intensity at the $i$-th pixel only, we can reconstruct the original image as
$$
{\rm image}({\bf x}) = \sum_{i=0}^{63} x_i I_i
$$
A crude way to reduce the dimensionality of this data is to remove some of the components in the sum. For instance, we can keep only the first eight pixels. But then we get a poor representation of the original image:
```python
def plot_pca_components(x, coefficients=None, mean=0, components=None,
imshape=(8, 8), n_components=8, fontsize=12,
show_mean=True):
if coefficients is None:
coefficients = x
if components is None:
components = np.eye(len(coefficients), len(x))
mean = np.zeros_like(x) + mean
fig = plt.figure(figsize=(1.2 * (5 + n_components), 1.2 * 2))
g = plt.GridSpec(2, 4 + bool(show_mean) + n_components, hspace=0.3)
def show(i, j, x, title=None):
ax = fig.add_subplot(g[i, j], xticks=[], yticks=[])
ax.imshow(x.reshape(imshape), interpolation='nearest')
if title:
ax.set_title(title, fontsize=fontsize)
show(slice(2), slice(2), x, "True")
approx = mean.copy()
counter = 2
if show_mean:
show(0, 2, np.zeros_like(x) + mean, r'$\mu$')
show(1, 2, approx, r'$1 \cdot \mu$')
counter += 1
for i in range(n_components):
approx = approx + coefficients[i] * components[i]
show(0, i + counter, components[i], f'$c_{i}$')
show(1, i + counter, approx, f"${coefficients[i]:.2f} \cdot c_{i}$")
#r"${0:.2f} \cdot c_{1}$".format(coefficients[i], i))
if show_mean or i > 0:
plt.gca().text(0, 1.05, '$+$', ha='right', va='bottom',
transform=plt.gca().transAxes, fontsize=fontsize)
show(slice(2), slice(-2, None), approx, "Approx")
return fig
```
PCA provides an alternative basis for the image representation. Using PCA, we can represent each vector as linear combination of the principal direction vectors ${\bf w}_0, {\bf w}_1, \cdots, {\bf w}_{63}$:
$$
{\bf x} = {\bf m} + \sum_{i=0}^{63} t_i {\bf w}_i
$$
and, thus, we can represent the image as the linear combination of the images associated to each direction vector
$$
image({\bf x}) = image({\bf m}) + \sum_{i=0}^{63} t_i \cdot image({\bf w}_i)
$$
PCA selects the principal directions in such a way that the first components capture most of the variance of the data. Thus, a few components may provide a good approximation to the original image.
The figure shows a reconstruction of a digit using the mean image and the first eight PCA components:
```python
idx = 25 # Select digit from the dataset
pca = PCA(n_components=10)
Xproj = pca.fit_transform(digits.data)
sns.set_style('white')
fig = plot_pca_components(digits.data[idx], Xproj[idx],
pca.mean_, pca.components_)
```
## 5. Choosing the number of components
The number of components required to approximate the data can be quantified by computing the cumulative *explained variance ratio* as a function of the number of components:
```python
pca = PCA().fit(digits.data)
plt.plot(np.cumsum(pca.explained_variance_ratio_))
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance');
print(np.cumsum(pca.explained_variance_ratio_))
```
In this curve we can see that the first 16 principal components explain more than 86% of the data variance, and 32 out of 64 components explain 96.6% of it. This suggests that the original data dimension can be substantially reduced.
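Rather than reading the number off the curve, we can also let scikit-learn pick the number of components needed to reach a target fraction of explained variance, by passing a float between 0 and 1 to `PCA` (a short sketch; the 0.95 threshold is an arbitrary choice):
```python
pca95 = PCA(0.95).fit(digits.data)   # keep enough components to explain 95% of the variance
print(pca95.n_components_)
```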
## 6. PCA as Noise Filtering
The use of PCA for noise filtering can be illustrated with some examples from the digits dataset.
```python
def plot_digits(data):
fig, axes = plt.subplots(4, 10, figsize=(10, 4),
subplot_kw={'xticks':[], 'yticks':[]},
gridspec_kw=dict(hspace=0.1, wspace=0.1))
for i, ax in enumerate(axes.flat):
ax.imshow(data[i].reshape(8, 8),
cmap='binary', interpolation='nearest',
clim=(0, 16))
plot_digits(digits.data)
```
As we have shown before, the majority of the data variance is concentrated in a fraction of the principal components. Now assume that the dataset is affected by additive white Gaussian noise (AWGN):
```python
np.random.seed(42)
noisy = np.random.normal(digits.data, 4)
plot_digits(noisy)
```
It is not difficult to show that, if the noise samples are independent across pixels, the noise variance is the same along every principal direction. Thus, the principal components with higher variance are less affected by noise, and by removing the components with lower variance we remove mostly noise.
Let's train a PCA on the noisy data, requesting that the projection preserve 55% of the variance:
```python
pca = PCA(0.55).fit(noisy)
pca.n_components_
```
15
15 components contain this amount of variance. The corresponding images are shown below:
```python
components = pca.transform(noisy)
filtered = pca.inverse_transform(components)
plot_digits(filtered)
```
This is another reason why PCA works well in some prediction problems: by removing the components with less variance, we can be removing mostly noise, keeping the relevant information for a prediction task in the selected components.
## 7. Example: Eigenfaces
We will see another application of PCA using the Labeled Faces in the Wild dataset available through Scikit-Learn:
```python
from sklearn.datasets import fetch_lfw_people
faces = fetch_lfw_people(min_faces_per_person=60)
print(faces.target_names)
print(faces.images.shape)
```
['Ariel Sharon' 'Colin Powell' 'Donald Rumsfeld' 'George W Bush'
'Gerhard Schroeder' 'Hugo Chavez' 'Junichiro Koizumi' 'Tony Blair']
(1348, 62, 47)
We will take a look at the first 150 principal components. Because of the large dimensionality of this dataset (close to 3000), we will select the ``randomized`` solver for a fast approximation to the first $N$ principal components.
```python
#from sklearn.decomposition import Randomized PCA
pca = PCA(150, svd_solver="randomized")
pca.fit(faces.data)
```
PCA(n_components=150, svd_solver='randomized')
Now, let us visualize the images associated to the eigenvectors of the first principal components (the "eigenfaces"). These are the basis images, and all faces can be approximated as linear combinations of them.
```python
fig, axes = plt.subplots(3, 8, figsize=(9, 4),
subplot_kw={'xticks':[], 'yticks':[]},
gridspec_kw=dict(hspace=0.1, wspace=0.1))
for i, ax in enumerate(axes.flat):
ax.imshow(pca.components_[i].reshape(62, 47), cmap='bone')
```
Note that some eigenfaces seem to be associated with the lighting conditions of the image, and others with specific facial features (noses, eyes, mouths, etc.).
The cumulative explained variance shows that 150 components account for more than 90% of the variance:
```python
plt.plot(np.cumsum(pca.explained_variance_ratio_))
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance');
```
We can compare the input images with the images reconstructed from these 150 components:
```python
# Compute the components and projected faces
pca = PCA(150, svd_solver="randomized").fit(faces.data)
components = pca.transform(faces.data)
projected = pca.inverse_transform(components)
```
```python
# Plot the results
fig, ax = plt.subplots(2, 10, figsize=(10, 2.5),
subplot_kw={'xticks':[], 'yticks':[]},
gridspec_kw=dict(hspace=0.1, wspace=0.1))
for i in range(10):
ax[0, i].imshow(faces.data[i].reshape(62, 47), cmap='binary_r')
ax[1, i].imshow(projected[i].reshape(62, 47), cmap='binary_r')
ax[0, 0].set_ylabel('full-dim\ninput')
ax[1, 0].set_ylabel('150-dim\nreconstruction');
```
Note that, although some image resolution is lost, only 150 features are enough to recognize the faces in the images. This shows the potential of PCA as a preprocessing step to reduce the dimensionality of the data (in this case, from almost 3000 to 150) without losing prediction power.
|
6762870cdffbe158ce1ea47799a0b789c1df5485
| 589,964 |
ipynb
|
Jupyter Notebook
|
U3.PCA/PCA_professor.ipynb
|
ML4DS/ML4all
|
7336489dcb87d2412ad62b5b972d69c98c361752
|
[
"MIT"
] | 27 |
2016-11-30T17:34:00.000Z
|
2022-03-23T23:11:48.000Z
|
U3.PCA/PCA_professor.ipynb
|
ML4DS/ML4all
|
7336489dcb87d2412ad62b5b972d69c98c361752
|
[
"MIT"
] | 5 |
2019-08-12T18:28:49.000Z
|
2019-11-26T11:01:39.000Z
|
U3.PCA/PCA_professor.ipynb
|
ML4DS/ML4all
|
7336489dcb87d2412ad62b5b972d69c98c361752
|
[
"MIT"
] | 14 |
2016-11-30T17:34:18.000Z
|
2021-09-15T09:53:32.000Z
| 390.187831 | 161,060 | 0.935016 | true | 5,798 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.896251 | 0.83762 | 0.750718 |
__label__eng_Latn
| 0.988784 | 0.582502 |
# Homework 6
**Submit the MP4 files you make along with a PDF of this notebook**
This week we will learn about how to turn the plots we have been making into animations! This will be really helpful for any groups that want to make simulations or other animations for their final project. This will build off of the differential equations we learned last week as well as the plotting skills we have learned so far.
First let us import the neccessary libraries,
```python
import numpy as np
import matplotlib.pyplot as plt
import scipy
import scipy.integrate
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.animation import FFMpegWriter
%config InlineBackend.figure_format='retina' # makes animation display better
%matplotlib osx
# ^ UNCOMMENT THIS LINE IF USING MAC
# %matplotlib qt
# ^ UNCOMMENT THIS LINE IF USING WINDOWS
```
In /Users/jamessunseri/anaconda3/lib/python3.7/site-packages/matplotlib/mpl-data/stylelib/_classic_test.mplstyle:
The text.latex.preview rcparam was deprecated in Matplotlib 3.3 and will be removed two minor releases later.
In /Users/jamessunseri/anaconda3/lib/python3.7/site-packages/matplotlib/mpl-data/stylelib/_classic_test.mplstyle:
The mathtext.fallback_to_cm rcparam was deprecated in Matplotlib 3.3 and will be removed two minor releases later.
In /Users/jamessunseri/anaconda3/lib/python3.7/site-packages/matplotlib/mpl-data/stylelib/_classic_test.mplstyle: Support for setting the 'mathtext.fallback_to_cm' rcParam is deprecated since 3.3 and will be removed two minor releases later; use 'mathtext.fallback : 'cm' instead.
In /Users/jamessunseri/anaconda3/lib/python3.7/site-packages/matplotlib/mpl-data/stylelib/_classic_test.mplstyle:
The validate_bool_maybe_none function was deprecated in Matplotlib 3.3 and will be removed two minor releases later.
In /Users/jamessunseri/anaconda3/lib/python3.7/site-packages/matplotlib/mpl-data/stylelib/_classic_test.mplstyle:
The savefig.jpeg_quality rcparam was deprecated in Matplotlib 3.3 and will be removed two minor releases later.
In /Users/jamessunseri/anaconda3/lib/python3.7/site-packages/matplotlib/mpl-data/stylelib/_classic_test.mplstyle:
The keymap.all_axes rcparam was deprecated in Matplotlib 3.3 and will be removed two minor releases later.
In /Users/jamessunseri/anaconda3/lib/python3.7/site-packages/matplotlib/mpl-data/stylelib/_classic_test.mplstyle:
The animation.avconv_path rcparam was deprecated in Matplotlib 3.3 and will be removed two minor releases later.
In /Users/jamessunseri/anaconda3/lib/python3.7/site-packages/matplotlib/mpl-data/stylelib/_classic_test.mplstyle:
The animation.avconv_args rcparam was deprecated in Matplotlib 3.3 and will be removed two minor releases later.
# Problem 1 (10 points)
**Animating a Cool Function**
In homework 5, we plotted the following function:
\begin{equation}
f = \frac{\sin(10(x^2 + y^2))}{10}
\end{equation}
```python
x, y = np.meshgrid(np.linspace(0, 2*np.pi, 100),
np.linspace(0, 2*np.pi, 100))
f = np.sin(10*(x**2 + y**2))/10
plt.figure(figsize=(12,12))
plt.imshow(f) # Plot the 2D array
plt.set_cmap('hot') # Set the color scheme ("jet" is matplotlib's default)
plt.colorbar() # Display a legend for the color values
plt.show()
```
Now let's animate it!
For each iteration, add 1 more to x and y. For example, add 1 to x and y for the first iteration, and add 10 to both x and y for the tenth iteration.
Rember to use plt.imshow( ) so that your function is shown.
```python
## SAVE AS MP4 ##
# Initialize the movie writer before using writer.saving
metadata = dict(title='2D animation', artist='Matplotlib')
writer = FFMpegWriter(fps=10, metadata=metadata)
fig, ax = plt.subplots(figsize=(5,5))
num_iterations = 20
with writer.saving(fig, "3Dfunction.mp4", dpi=200):
for i in range(num_iterations):
ax.clear() # first clear the figure
### YOUR CODE HERE ###
f = np.sin(10*((x+i)**2 + (y+i)**2))/10
plt.imshow(f)
plt.draw()
plt.pause(0.01)
writer.grab_frame() # save the current frame to mp4
```
# Problem 2 (20 points)
**Binary Star System**
In lecture we animated a single planet orbiting a star, but now let's animate a binary star system — two stars orbiting eachother. Because these two objects are of similar mass, we need to update the movements of both stars instead of just one.
First we'll define our initial conditions. For the sake of simplicity, don't worry about units for this problem.
- Star 1: mass $=1$, $x_i=-1$, $y_i=0$, $v_{xi}=1$, $v_{yi}=1$
- Star 2: mass $=1$, $x_i=1$, $y_i=0$, $v_{xi}=-1$, $v_{yi}=1$
```python
G = 1 #gravitational constant
# Define masses (in units of solar mass)
m1 = 1.0
m2 = 1.0
# Define initial position vectors
r1 = np.array([-1, 0])
r2 = np.array([1, 0])
# Define initial velocities
v1 = np.array([1, 1])
v2 = np.array([-1, 1])
```
The stars' motion is governed by Newton's Law of Universal Gravitation:
\begin{equation}
F = G \frac{m_1 m_2}{r^2}
\end{equation}
Ultimately, all we need to plot the stars' motion are the position coordinates of the two stars. But to correctly update the stars' positions we need their velocities and the change in velocity (acceleration). Instead of working with forces, let us work with accelerations. With $G = 1$, we can then rewrite the equations for star 1 and star 2 like this:
- $ a_1 = \frac{m_2(r_2 - r_1)}{r_{12}^3} $
- $ a_2 = \frac{m_1(r_1 - r_2)}{r_{12}^3} $
Fill out the following cell to define a function that we will integrate over to obtain our star positions.
Refer to the lecture demo notebook for help! OrbitEquation( ) that was used in the demo is set up the same way, but now we want to change position and velocity for another object as well.
```python
def TwoBodyEquations(w, t, m1, m2): # w is an array containing positions and velocities
r1 = w[:2]
r2 = w[2:4]
v1 = w[4:6]
v2 = w[6:8]
r12 = np.linalg.norm(r2-r1)
dv1bydt = m2*(r2-r1)/r12**3 # derivative of velocity
dv2bydt = m1*(r1-r2)/r12**3
dr1bydt = v1 # derivative of position
dr2bydt = v2
r_derivs = np.concatenate((dr1bydt,dr2bydt)) # joining arrays together
v_derivs = np.concatenate((dv1bydt,dv2bydt))
derivs = np.concatenate((r_derivs,v_derivs))
return derivs
```
Now we want to run the ordinary differential equation solver. Then set r1_sol equal to the parts of two_body_sol that correspond to the first star. Do the same for r2_sol but for the second star.
```python
# Package initial parameters into one array (just easier to work with this way)
init_params = np.array([r1,r2,v1,v2])
init_params = init_params.flatten()
time_span = np.linspace(0, 20, 5000) # run for t=20 (5000 points)
# Run the ordinary differential equation solver
two_body_sol = scipy.integrate.odeint(TwoBodyEquations, init_params, time_span, args=(m1,m2)) # use scipy.integrate.odeint()
r1_sol=two_body_sol[:,:2]
r2_sol=two_body_sol[:,2:4]
```
Now we have all of the data we want to plot. Using FFMpegWriter, let us loop through all of our data and save each frame to our MP4 file. Set your x and y axes to range from -2 to 2.
Again, this code is very similar to what was shown in the lecture demo!
```python
# Initialize writer
metadata = dict(title='2D animation', artist='Matplotlib')
writer = FFMpegWriter(fps=50, metadata=metadata, bitrate=200000)
fig = plt.figure(dpi=200)
## SAVE AS MP4 ##
fig, ax = plt.subplots()
with writer.saving(fig, "binary_stars.mp4", dpi=200):
for i in range(len(time_span)):
ax.clear()
### YOUR CODE HERE ###
ax.plot(r1_sol[:i,0],r1_sol[:i,1],color="#00C9C8", alpha=0.5)
ax.plot(r2_sol[:i,0],r2_sol[:i,1],color="#9296F0", alpha=0.5)
ax.scatter(r1_sol[i,0],r1_sol[i,1],color="#00C9C8",marker="o",s= m1*30)
ax.scatter(r2_sol[i,0],r2_sol[i,1],color="#9296F0",marker="o",s=m2*30)
ax.set_xlim(-3, 3)
ax.set_ylim(-3, 3)
###
plt.draw()
plt.pause(0.01)
writer.grab_frame()
```
**Optional**: try changing the star masses, initial postions, and/or initial velocities and show us an animation that you think looks cool!
```python
# can be whatever other initial conditions
```
# Problem 3 (20 points)
## Swinging Pendulum
Now we will animate a basic swinging pendulum that you have probably seen in your Physics classes! The steps for animating this are similar to how we animated the binary star system. The main difference is how we define our ode_func and the differential equations.
For our pendulum system, we will have a point mass $m$ connected to a string of length $l$ with slight damping:
* $m = 1 kg$ is the mass,
* $g = 9.81 m/s^2$ is gravity,
* $l = 1 m$ is the length of the massless string
* $b = 0.05$ is the damping coefficient, and
* $t = 0$ to $10 s$ is the time duration with 100 intervals.
\
\
We can derive the change in the pendulum system as:
\
\
$$\frac{d^2\theta}{dt^2} = - \frac{b}{m} \frac{d\theta}{dt} - \frac{g \sin(\theta)}{l}$$
\
To further simplify this second order differential equation, we can rewrite this as two linear equations by defining
$$\theta_{1} = \theta , \theta_{2} = \frac{d\theta}{dt}$$
$$\frac{d\theta_{1}}{dt} = \theta_{2}$$
$$\frac{d\theta_{2}}{dt} = -\frac{b \, \theta_{2}}{m} - \frac{g \sin(\theta_{1})}{l}$$
Don't worry if you do not understand the physics equations and math for this derivation! We will only be needing the last two equations for this problem. Now use ode_func as an argument to scipy.integrate.odeint to solve for the theta1 and theta2 values for all time steps. Use the initial conditions of
* $\theta_{1} = \frac{\pi}{10} $ as the initial angle and
* $\theta_{2} = 0$ as the initial angular velocity.
```python
def ode_func(theta, t, g, l, m, b):
'''
Given a list theta that is [theta1, theta2] and the conditions of the system
Computes the differential change for theta1 and theta2 following the last two eqs.
Returns dtheta1dt and dtheta2dt as a list in that order
'''
# YOUR CODE HERE
theta1, theta2 = theta
dtheta1 = theta2
dtheta2 = (- b * theta2 / m - g * np.sin(theta1) / l)
return [dtheta1, dtheta2]
# Define system variables l, m, g, b, t and initial conditions
# YOUR CODE HERE
l = 1
m = 1
g = 9.81
b = 0.05
t = np.linspace(0, 10, 100)
initial_conditions = [np.pi/10,1]
# Calculate theta1 and theta2 at all time steps
# HINT: scipy.integrate.odeint returns a 2D matrix where
# the first column is the theta1 values and the second column is the theta2 values
# YOUR CODE HERE
theta = scipy.integrate.odeint(ode_func, initial_conditions, t, args=(g, l, m, b))
```
```python
print(theta)
```
[[ 0.31415927 1. ]
[ 0.398018 0.64771035]
[ 0.44306772 0.23782479]
[ 0.44541221 -0.19139118]
[ 0.4050558 -0.60124506]
[ 0.32587182 -0.95407744]
[ 0.21540056 -1.21543582]
[ 0.08434552 -1.35790867]
[-0.05437793 -1.36587406]
[-0.18702947 -1.23896065]
[-0.30062355 -0.99218707]
[-0.38434018 -0.6526558 ]
[-0.4305026 -0.25469205]
[-0.43508437 0.16430654]
[-0.39788089 0.56636046]
[-0.32248769 0.91446998]
[-0.21610397 1.17475525]
[-0.08904618 1.32010269]
[ 0.04616388 1.3346923 ]
[ 0.17613041 1.21738761]
[ 0.28811269 0.98204388]
[ 0.37141091 0.6545109 ]
[ 0.41834859 0.26802143]
[ 0.42480095 -0.14089232]
[ 0.39038601 -0.53496928]
[ 0.31845253 -0.87787375]
[ 0.21588882 -1.13631559]
[ 0.09265104 -1.28353388]
[-0.03911396 -1.30361605]
[-0.1663489 -1.19480033]
[-0.27656608 -0.96991274]
[-0.35920571 -0.65364313]
[-0.40661823 -0.27817347]
[-0.41460943 0.12082177]
[-0.38264889 0.50680309]
[-0.31386723 0.84410259]
[-0.21486975 1.10003385]
[-0.09527732 1.24823432]
[ 0.03311995 1.27278942]
[ 0.15759615 1.17143583]
[ 0.26592229 0.95609385]
[ 0.34769527 0.65038302]
[ 0.39531587 0.28547918]
[ 0.4045465 -0.10378911]
[ 0.37473486 -0.48160467]
[ 0.30881971 -0.81297023]
[ 0.21314903 -1.06581477]
[ 0.09703176 -1.21421233]
[-0.0280812 -1.24232403]
[-0.14978745 -1.14749441]
[-0.2561203 -0.9408506 ]
[-0.33684714 -0.64502755]
[-0.38443955 -0.29024138]
[-0.39464008 0.08950979]
[-0.36669871 0.45912963]
[-0.30338652 0.78429348]
[-0.21081781 1.03355532]
[-0.09801129 1.18145801]
[ 0.02390466 1.21230494]
[ 0.14284272 1.12314448]
[ 0.24710037 0.92441299]
[ 0.3266268 0.63784241]
[ 0.37398233 0.29273627]
[ 0.38491068 -0.07771987]
[ 0.35858614 -0.43914702]
[ 0.29763419 -0.75789414]
[ 0.20795719 -1.00314801]
[ 0.09830372 -1.14994717]
[-0.02050443 -1.1827944 ]
[-0.13668639 -1.09852661]
[-0.23880433 -0.90698197]
[-0.31699849 -0.6290658 ]
[-0.3639334 -0.29321584]
[-0.37537275 0.06817479]
[-0.35043508 0.42143914]
[-0.2916205 0.73359985]
[-0.20463935 0.9744828 ]
[-0.09798863 1.11964476]
[ 0.01780143 1.15383682]
[ 0.13124769 1.07375836]
[ 0.23117637 0.8887328 ]
[ 0.30792618 0.61891031]
[ 0.35427918 0.29190893]
[ 0.36603578 -0.06064914]
[ 0.34227687 -0.40580225]
[ 0.28539555 -0.71124587]
[ 0.20092836 -0.94744985]
[ 0.09713789 -1.09050795]
[-0.01572309 -1.12546133]
[-0.12646046 -1.0489367 ]
[-0.22416306 -0.86981778]
[-0.29937384 -0.60756581]
[-0.34500393 -0.28902375]
[-0.35690524 0.05493486]
[-0.33413724 0.39204552]
[-0.27900277 0.69067486]
[-0.19688124 0.92194029]
[-0.09581655 1.06248806]
[ 0.01420275 1.09768504]
[ 0.12226303 1.02414186]]
Now that we have all of the $\theta_{1}$ and $\theta_{2}$ values from scipy.integrate.odeint, we can find the position of the point mass (x, y) values as
\
\
$$ x = l*sin(\theta_{1})$$
$$ y = - l*cos(\theta_{1})$$
\
For every time step, plot and save the (x, y) position using the $\theta$ values calculated above as an animation.
* Save the animation as "pendulum.mp4"
* Set y-axis limits as -1.1, 0
* Set x-axis limits as -0.5, 0.5
* If you would like to animate the string, include a line at each frame:
"plt.plot( [ 0, x[i] ], [ 0, y[i] ] )" that will plot a line between the origin and the pendulum in the current frame i
```python
# Calculate x and y from theta1
# YOUR CODE HERE
theta1 = theta[:,0]
x = [l * np.sin(th) for th in theta1]
y = [-l * np.cos(th) for th in theta1]
# Initialize writer
# YOUR CODE HERE
metadata = dict(title='2D animation', artist='Matplotlib')
writer = FFMpegWriter(fps=100, metadata=metadata, bitrate=200000)
fig = plt.figure(dpi=200)
## SAVE AS MP4 ##
# YOUR CODE HERE
fig = plt.figure()
with writer.saving(fig, "pendulum.mp4", dpi=200):
for i in range(len(t)):
fig.clear()
plt.plot([0, x[i]], [0, y[i]]) # plots the string
plt.plot(x[i], y[i], marker="o", markersize=10) # plots the point mass
plt.xlim([-0.5,0.5])
plt.ylim([-1.1,0])
plt.draw()
plt.pause(0.01)
writer.grab_frame()
```
```python
```
|
b9de6afa32083e914317c8175e5d1d9866e57673
| 22,064 |
ipynb
|
Jupyter Notebook
|
Spring_2021_DeCal_Material/Homework/Week7/Homework_6 Solutions.ipynb
|
James11222/Python_DeCal_2020
|
7e7d28bce2248812446ef2e2e141230308b318c4
|
[
"MIT"
] | 2 |
2020-10-24T04:46:05.000Z
|
2020-10-24T04:48:50.000Z
|
Spring_2021_DeCal_Material/Homework/Week7/Homework_6 Solutions.ipynb
|
James11222/Python_DeCal_2020
|
7e7d28bce2248812446ef2e2e141230308b318c4
|
[
"MIT"
] | null | null | null |
Spring_2021_DeCal_Material/Homework/Week7/Homework_6 Solutions.ipynb
|
James11222/Python_DeCal_2020
|
7e7d28bce2248812446ef2e2e141230308b318c4
|
[
"MIT"
] | null | null | null | 34.966719 | 348 | 0.558829 | true | 5,035 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.835484 | 0.793106 | 0.662627 |
__label__eng_Latn
| 0.858537 | 0.377835 |
```python
import psi4
from psi4 import *
from psi4.core import *
import numpy as np
import os
import sys
sys.path.append(os.getcwd())  # add the working directory so the local opt_helper package can be imported
from opt_helper import stre, bend, intcosMisc, linearAlgebra
```
## The Step Back-transformation
An optimization algorithm carried out in internal coordinates (see, e.g., the RFO tutorial) will generate a displacement step to be taken in internal coordinates. The conversion of the step into Cartesian coordinates is here called the "back-transformation" ("back", since the original gradient was computed in Cartesians).
As shown in the tutorial on coordinates and the B-matrix,
$$\textbf {B} \Delta x = \Delta q $$
and the $\textbf A^T$ matrix defined by
$$ \textbf A^T \equiv (\textbf{B} \textbf{u} \textbf {B}^T)^{-1} \textbf {B} \textbf{u}$$
was shown to be the left inverse of $\textbf B^T$ where __u__ is an arbitrary symmetric matrix. Attention must be paid to the non-square nature of __A__ and __B__. Here, we have
\begin{align}
\textbf B \Delta x &= \Delta q \\
\\
\textbf B \Delta x &= \big( \textbf{BuB}^T \big) \big( \textbf{BuB}^T\big)^{-1} \Delta q \\
\\
\textbf B \Delta x &= \textbf B \big[ \textbf{uB}^T \big( \textbf{BuB}^T\big)^{-1}\big] \Delta q \\
\\
\Delta x &= \textbf{uB}^T \big( \textbf{BuB}^T\big)^{-1} \Delta q = \textbf A \Delta q \\
\end{align}
The __u__ matrix may be chosen to be the unit matrix which gives
$$\Delta x = \textbf B^T (\textbf B \textbf B^T)^{-1} \Delta q$$
where redundant coordinates can be accommodated simply by using the generalized inverse. It is common to introduce $ \textbf{G} = \textbf B \textbf B^T $ and write the expression as
$$ \Delta x = \textbf{B}^T \textbf{G}^{-1} \Delta q$$
Note the __G__ matrix is a square matrix of dimension (number of internals) by (number of internals). This equation is exact only for infinitesimal displacements, because the B-matrix elements depend upon the molecular geometry (i.e., the Cartesian coordinates). Thus, the back-transformation is carried out iteratively.
To converge on a Cartesian geometry with the desired internal coordinate values, we repeatedly compute the difference between the current internal coordinate values and the desired ones (generating repeated $\Delta q$'s) and using the equation above to compute a new Cartesian geometry.
### Illustration of back-transformation
The back-transformation will now be demonstrated by taking a 0.2 au step increase in the bond lengths and a 5 degree increase in the bond angle of a water molecule.
```python
# Setup the water molecule and coordinates.
mol = psi4.geometry("""
O
H 1 1.7
H 1 1.7 2 104
unit au
""")
# We'll use cc-pVDZ RHF.
psi4.set_options({"basis": "cc-pvdz"})
mol.update_geometry()
xyz_0 = np.array( mol.geometry() )
# Generate the internal coordinates manually. Show their values.
intcos = [stre.STRE(0,1), stre.STRE(0,2), bend.BEND(1,0,2)]
print("%15s%15s" % ('Coordinate', 'Value'))
for I in intcos:
print("%15s = %15.8f %15.8f" % (I, I.q(xyz_0), I.qShow(xyz_0)))
# Handy variables for later.
Natom = mol.natom()
Nintco = len(intcos)
Ncart = 3*Natom
```
```python
# Create an internal coordinate displacement of +0.2au in bond lengths,
# and +5 degrees in the bond angle.
dq = np.array( [0.2, 0.2, 5.0/180*np.pi], float)
B = intcosMisc.Bmat(intcos, xyz_0)
G = np.dot(B, B.T)
G_inv = linearAlgebra.symmMatInv(G, redundant=True)
# Dx = B^T G^(-1) Dq
dx = np.dot(B.T, np.dot(G_inv, dq))
print("Displacement in Cartesians")
print(dx)
# Add Dx to original geometry.
xyz_1 = np.add(np.reshape(dx, (3, 3)), xyz_0)
print("New geometry in cartesians")
print(xyz_1)
# Compute internal coordinate values of new geometry.
print("\n%15s%15s" % ('Coordinate', 'Value'))
for I in intcos:
print("%15s = %15.8f %15.8f" % (I, I.q(xyz_1), I.qShow(xyz_1)))
```
You see that the desired internal coordinate value is not _exactly_ achieved. You can play with the desired displacement and observe more diverse behavior. For water, if you displace only the bond lengths, then the result will be exact, because if the bond angle is fixed then the directions of the displacements (_s_-vectors on each atom) are constant with respect to the bond lengths. On the other hand, the displacement directions for the bend depend upon the value of the angle. So if you displace only along a bend, the result will not be exact. In general, the result is reasonable but only approximate for small displacements.
### Illustration of iterative back-transformation
Finally, we demonstrate how convergence to the desired internal coordinate displacement can be achieved by an iterative process.
```python
# Create array of target internal coordinate values.
dq_target = np.array( [0.2, 0.2, 5.0/180*np.pi], float)
q_target = np.zeros( (len(intcos)), float)
for i, intco in enumerate(intcos):
q_target[i] = intco.q(xyz_0) + dq_target[i]
xyz = xyz_0.copy()
rms_dq = 1
niter = 1
while rms_dq > 1e-10:
print("Iteration %d" % niter)
dq = dq_target.copy()
# Compute distance from target in internal coordinates.
for i, intco in enumerate(intcos):
dq[i] = q_target[i] - intco.q(xyz)
rms_dq = np.sqrt(np.mean(dq**2))
print("\tRMS(dq) = %10.5e" % rms_dq)
# Dx = B^T G^(-1) Dq
B = intcosMisc.Bmat(intcos, xyz)
G = np.dot(B, B.T)
G_inv = linearAlgebra.symmMatInv(G, redundant=True)
dx = np.dot(B.T, np.dot(G_inv, dq))
print("\tRMS(dx) = %10.5e" % np.sqrt(np.mean(dx**2)))
# Compute new Cartesian geometry.
xyz[:] += np.reshape(dx, (3,3))
niter += 1
print("\nFinal converged geometry.")
print(xyz)
# Compute internal coordinate values of new geometry.
print("\n%15s%15s" % ('Coordinate', 'Value'))
for I in intcos:
print("%15s = %15.8f %15.8f" % (I, I.q(xyz), I.qShow(xyz)))
```
The exact desired displacement is achieved.
Due to the non-orthogonal nature of the coordinates, the iterations may not always converge. In this case, common tactics include using the Cartesian geometry generated by the first back-transformation step, or using the Cartesian geometry that was closest to the desired internal coordinates. Hopefully, as a geometry optimization proceeds, the forces and displacements get smaller and convergence occurs.
A serious complication in procedures such as this one are discontinuities in the values of the internal coordinates. In some way, the internal coordinate values must be canonicalized so that, e.g., an increase in a torsion from 179 degrees to -178 degrees is interpreted as an increase of 3 degrees. Similar problems present for bond angles near 180 degrees. (The consistent computation of these changes in values and forces is also critical in Hessian update schemes.)
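A minimal sketch of what such a canonicalization might look like for torsions, wrapping every angle difference into the interval (-180, 180] degrees (this helper is illustrative and is not part of the optimizer code above):
```python
def wrap_torsion_delta(q_target_deg, q_current_deg):
    """Signed change (degrees) from q_current to q_target, wrapped into (-180, 180]."""
    delta = (q_target_deg - q_current_deg) % 360.0
    if delta > 180.0:
        delta -= 360.0
    return delta

# An increase from 179 degrees to -178 degrees is interpreted as +3 degrees:
print(wrap_torsion_delta(-178.0, 179.0))   # 3.0
```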
|
2ffa88155d34e02cd25aa73d1c3f8a031c9991e2
| 9,456 |
ipynb
|
Jupyter Notebook
|
Example/Psi4Numpy/13-GeometryOptimization/13e_Step-Backtransformation.ipynb
|
yychuang/109-2-compchem-lite
|
cbf17e542f9447e89fb48de1b28759419ffff956
|
[
"BSD-3-Clause"
] | 214 |
2017-03-01T08:04:48.000Z
|
2022-03-23T08:52:04.000Z
|
Example/Psi4Numpy/13-GeometryOptimization/13e_Step-Backtransformation.ipynb
|
yychuang/109-2-compchem-lite
|
cbf17e542f9447e89fb48de1b28759419ffff956
|
[
"BSD-3-Clause"
] | 100 |
2017-03-03T13:20:20.000Z
|
2022-03-05T18:20:27.000Z
|
Example/Psi4Numpy/13-GeometryOptimization/13e_Step-Backtransformation.ipynb
|
yychuang/109-2-compchem-lite
|
cbf17e542f9447e89fb48de1b28759419ffff956
|
[
"BSD-3-Clause"
] | 150 |
2017-02-17T19:44:47.000Z
|
2022-03-22T05:52:43.000Z
| 39.236515 | 634 | 0.595495 | true | 1,907 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.888759 | 0.833325 | 0.740625 |
__label__eng_Latn
| 0.968045 | 0.559051 |
```python
# Author-Vishal Burman
```
## Accessing and Reading Data Sets
```python
%matplotlib inline
from mxnet import autograd, gluon, init, nd
from mxnet.gluon import data as gdata, loss as gloss, nn
import numpy as np
import pandas as pd
```
```python
train_data=pd.read_csv("train.csv")
test_data=pd.read_csv("test.csv")
```
```python
print(train_data.shape)
print(test_data.shape)
```
(1460, 81)
(1459, 80)
```python
# The first four features, the last two features, and the label (SalePrice) for the first four examples:
```
```python
train_data.iloc[0:4, [0, 1, 2, 3, -3, -2, -1]]
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Id</th>
<th>MSSubClass</th>
<th>MSZoning</th>
<th>LotFrontage</th>
<th>SaleType</th>
<th>SaleCondition</th>
<th>SalePrice</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>1</td>
<td>60</td>
<td>RL</td>
<td>65.0</td>
<td>WD</td>
<td>Normal</td>
<td>208500</td>
</tr>
<tr>
<th>1</th>
<td>2</td>
<td>20</td>
<td>RL</td>
<td>80.0</td>
<td>WD</td>
<td>Normal</td>
<td>181500</td>
</tr>
<tr>
<th>2</th>
<td>3</td>
<td>60</td>
<td>RL</td>
<td>68.0</td>
<td>WD</td>
<td>Normal</td>
<td>223500</td>
</tr>
<tr>
<th>3</th>
<td>4</td>
<td>70</td>
<td>RL</td>
<td>60.0</td>
<td>WD</td>
<td>Abnorml</td>
<td>140000</td>
</tr>
</tbody>
</table>
</div>
```python
# Removing the id column from the dataset
```
```python
all_features=pd.concat((train_data.iloc[:, 1:-1], test_data.iloc[:, 1:]))
```
## Data Preprocessing
```python
# We begin by replacing the missing values with the mean of each feature
# Then we rescale the values to a common scale (zero mean and unit variance)
```
\begin{equation}
x \leftarrow \frac{x - \mu}{\sigma}
\end{equation}
```python
numeric_features=all_features.dtypes[all_features.dtypes!='object'].index
all_features[numeric_features]=all_features[numeric_features].apply(lambda x: (x-x.mean())/(x.std()))
# After standardising the data all means vanish, hence we can set the missing values to 0
all_features[numeric_features]=all_features[numeric_features].fillna(0)
```
```python
# Next we deal with discrete values
# We replace them with one-hot encoding
```
```python
# dummy_na=True treats a missing value (NaN) as a valid category of its own
# and creates an indicator feature for it
all_features=pd.get_dummies(all_features, dummy_na=True)
all_features.shape
```
(2919, 331)
```python
# Via the values attribute we can extract the NumPy format from the Pandas dataframe
# We can then convert it to MxNet's native NDArray representation for training
```
```python
n_train=train_data.shape[0]
train_features=nd.array(all_features[:n_train].values)
test_features=nd.array(all_features[n_train:].values)
train_labels=nd.array(train_data.SalePrice.values).reshape((-1, 1))
```
## Training
```python
# We define a simple squared loss model
# It won't be the perfect criterion, but it provides a baseline model
```
```python
loss=gloss.L2Loss()
def get_net():
net=nn.Sequential()
net.add(nn.Dense(1))
net.initialize()
return net
```
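The notebook stops at the network definition. As a rough sketch of how training could proceed with Gluon (my own addition; the `train` helper and all hyperparameter values are illustrative placeholders, not tuned choices from the original):

```python
# Sketch only: a minimal Adam training loop for the baseline net defined above.
def train(net, features, labels, num_epochs=100, lr=0.01, batch_size=64):
    train_iter = gdata.DataLoader(
        gdata.ArrayDataset(features, labels), batch_size, shuffle=True)
    trainer = gluon.Trainer(net.collect_params(), 'adam', {'learning_rate': lr})
    for epoch in range(num_epochs):
        for X, y in train_iter:
            with autograd.record():
                l = loss(net(X), y)
            l.backward()
            trainer.step(batch_size)
    return net

# net = train(get_net(), train_features, train_labels)
```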
```python
```
[dataset metadata]
hexsha: b8b1698737e188783e1261294e02bfd12d4b7075 | size: 8,123 | ext: ipynb | lang: Jupyter Notebook
repo_path: Multilayer_NN/test7.ipynb | repo_name: vishal-burman/MXNet_Architectures | head_hexsha: d4e371226e814c1507974244c4642b906566f1d8 | licenses: ["MIT"]
stars: null | issues: 3 (2020-03-24T17:14:05.000Z – 2021-02-02T22:01:48.000Z) | forks: null
avg_line_length: 23.891176 | max_line_length: 110 | alphanum_fraction: 0.46276 | converted: true | num_tokens: 1,105
lm_name: Qwen/Qwen-72B | lm_label: 1. YES, 2. YES | lm_q1_score: 0.72487 | lm_q2_score: 0.7773 | lm_q1q2_score: 0.563442
text_lang: __label__eng_Latn | text_lang_conf: 0.675766 | label: 0.147393
```python
from collections import Counter
from typing import Tuple, List
import numpy as np
from networkx import MultiGraph
import networkx as nx
from sympy.combinatorics import Permutation
import matplotlib.pyplot as plt
# from SurfaceCodes.utilites import permlist_to_tuple
class SurfaceCodeGraph(MultiGraph):
def __init__(self, sigma: Tuple[Tuple[int]], alpha: Tuple[Tuple[int]]):
super().__init__()
self.sigma = sigma # should include singletons corresponding to fixed points
self.alpha = alpha # should include singletons corresponding to fixed points
f = self.compute_phi()
self.phi = self.permlist_to_tuple(f)
self.node_info = self.build_node_info() # print dictionary for [sigma, alpha, phi]
self.code_graph = nx.MultiGraph()
# Create black nodes for each cycle in sigma along with white nodes
# representing "half edges" around the black nodes
for cycle in self.sigma:
self.code_graph.add_node(cycle, bipartite=1)
for node in cycle:
self.code_graph.add_node(node, bipartite=0)
self.code_graph.add_edge(cycle, node)
# Create black nodes for each cycle in phi along with white nodes
# representing "half edges" around the black nodes
for cycle in self.phi:
self.code_graph.add_node(cycle, bipartite=1)
for node in cycle:
self.code_graph.add_edge(cycle, node)
# Create nodes for each cycle in alpha then
# glue the nodes corresponding to the pairs
for pair in self.alpha:
self.code_graph.add_node(pair)
self.code_graph = nx.contracted_nodes(self.code_graph, pair[0], pair[1], self_loops=True)
# Now contract pair with pair[0] to make sure edges (white nodes) are labeled
# by the pairs in alpha to keep track of the gluing from the previous step
self.code_graph = nx.contracted_nodes(self.code_graph, pair, pair[0], self_loops=True)
def permlist_to_tuple(self, perms):
"""
convert list of lists to tuple of tuples in order to have two level iterables
that are hashable for the dictionaries used later
"""
return tuple(tuple(perm) for perm in perms)
def compute_phi(self):
"""compute the list of lists full cyclic form of phi (faces of dessin [sigma, alpha, phi])"""
s = Permutation(self.sigma)
a = Permutation(self.alpha)
f = ~(a * s)
f = f.full_cyclic_form # prints permutation as a list of lists including all singletons (fixed points)
return f
def build_node_info(self):
count = -1
self.sigma_dict = dict()
for count, cycle in enumerate(self.sigma):
self.sigma_dict[cycle] = count
self.phi_dict = dict()
for count, cycle in enumerate(self.phi, start=count + 1):
self.phi_dict[cycle] = count
self.alpha_dict = dict()
for count, pair in enumerate(self.alpha, start=count + 1):
self.alpha_dict[pair] = count
return tuple([self.sigma_dict, self.alpha_dict, self.phi_dict])
def boundary_1(self, edge):
"""
compute boundary of a single edge given by a white node (cycle in alpha)
"""
# if len(self.code_graph.neighbors(edge)) < 2:
# boundary1 = []
# else:
boundary1 = Counter([x[1] for x in self.code_graph.edges(edge) if x[1] in self.sigma_dict])
odd_boundaries = [x for x in boundary1 if boundary1[x] % 2]
# [node for node in self.code_graph.neighbors(edge) if node in self.sigma_dict]
return odd_boundaries
def del_1(self, edges: List[Tuple[int]]):
"""
boundary of a list of edges, i.e. an arbitrary 1-chain over Z/2Z
"""
boundary_list = [self.boundary_1(edge) for edge in edges]
a = Counter([y for x in boundary_list for y in x])
boundary_list = [x[0] for x in a.items() if x[1] % 2 == 1]
return boundary_list
def boundary_2(self, face):
"""
compute boundary of a single face node
"""
# boundary2 = [node for node in self.code_graph.neighbors(face) if node in self.alpha_dict]
boundary2 = Counter([x[1] for x in self.code_graph.edges(face) if x[1] in self.alpha_dict])
odd_boundaries = [x for x in boundary2 if boundary2[x] % 2]
return odd_boundaries
def del_2(self, faces: List[Tuple[int]]):
"""
boundary of a list of faces, i.e. an arbitrary 2-chain over Z/2Z
"""
boundary_list = [self.boundary_2(face) for face in faces]
a = Counter([y for x in boundary_list for y in x])
boundary_list = [x[0] for x in a.items() if x[1] % 2 == 1]
return boundary_list
def coboundary_1(self, star):
"""
compute coboundary of a single star
"""
# coboundary = self.code_graph.neighbors(star)
coboundary1 = Counter([x[1] for x in self.code_graph.edges(star)])
odd_coboundaries = [x for x in coboundary1 if coboundary1[x] % 2]
return odd_coboundaries
def delta_1(self, stars: List[Tuple[int]]):
"""
coboundary of a list of stars, i.e. an arbitrary 0-cochain over Z/2Z
"""
coboundary_list = [self.coboundary_1(star) for star in stars]
a = Counter([y for x in coboundary_list for y in x])
coboundary_list = [x[0] for x in a.items() if x[1] % 2 == 1]
return coboundary_list
def coboundary_2(self, edge):
"""
compute coboundary of a single edge given by a white node (cycle in alpha)
"""
# coboundary2 = [node for node in self.code_graph.neighbors(edge) if node in self.phi_dict]
coboundary2 = Counter([x[1] for x in self.code_graph.edges(edge) if x[1] in self.phi_dict])
odd_coboundaries = [x for x in coboundary2 if coboundary2[x] % 2]
return odd_coboundaries
def delta_2(self, edges: List[Tuple[int]]):
"""
coboundary of a list of edges, i.e. an arbitrary 1-cochain over Z/2Z
given by a list of cycles in alpha
"""
coboundary_list = [self.coboundary_2(edge) for edge in edges]
a = Counter([y for x in coboundary_list for y in x])
coboundary_list = [x[0] for x in a.items() if x[1] % 2 == 1]
return coboundary_list
def vertex_basis(self):
self.v_basis_dict = dict()
self.v_dict = dict()
A = np.eye(len(self.sigma), dtype=np.uint8)
for count, cycle in enumerate(self.sigma):
self.v_dict[cycle] = count
self.v_basis_dict[cycle] = A[count, :].T
return (self.v_basis_dict)
def edge_basis(self):
self.e_basis_dict = dict()
self.e_dict = dict()
B = np.eye(len(self.alpha), dtype=np.uint8)
for count, cycle in enumerate(self.alpha):
self.e_dict[cycle] = count
self.e_basis_dict[cycle] = B[count, :].T
return (self.e_basis_dict)
def face_basis(self):
self.f_basis_dict = dict()
self.f_dict = dict()
C = np.eye(len(self.phi), dtype=np.uint8)
for count, cycle in enumerate(self.phi):
self.f_dict[cycle] = count
self.f_basis_dict[cycle] = C[count, :].T
return (self.f_basis_dict)
def d_2(self):
self.D2 = np.zeros(len(self.e_dict), dtype=np.uint8)
for cycle in self.phi:
bd = self.boundary_2(cycle)
if bd != []:
image = sum([self.e_basis_dict[edge] for edge in bd])
else:
image = np.zeros(len(self.e_dict))
self.D2 = np.vstack((self.D2, image))
self.D2 = np.array(self.D2[1:, :]).T
return self.D2, self.D2.shape
def d_1(self):
self.D1 = np.zeros(len(self.v_dict), dtype=np.uint8)
for cycle in self.alpha:
bd = self.boundary_1(cycle)
if bd != []:
image = sum([self.v_basis_dict[vertex] for vertex in bd])
else:
image = np.zeros(len(self.v_dict))
self.D1 = np.vstack((self.D1, image))
self.D1 = np.array(self.D1[1:, :]).T
return self.D1, self.D1.shape
def euler_characteristic(self):
"""
Compute the Euler characteristic of the surface in which the graph is embedded
"""
chi = len(self.phi) - len(self.alpha) + len(self.sigma)
return (chi)
def genus(self):
"""
Compute the genus of the surface in which the graph is embedded
"""
g = int(-(len(self.phi) - len(self.alpha) + len(self.sigma) - 2) / 2)
return (g)
def draw(self, node_type='', layout=''):
"""
Draw graph with vertices, edges, and faces labeled by colored nodes and their integer indices
corresponding to the qubit indices for the surface code
"""
if node_type not in ['cycles', 'dict']:
raise ValueError('node_type can be "cycles" or "dict"')
elif layout == 'spring':
pos = nx.spring_layout(self.code_graph)
elif layout == 'spectral':
pos = nx.spectral_layout(self.code_graph)
elif layout == 'planar':
pos = nx.planar_layout(self.code_graph)
elif layout == 'shell':
pos = nx.shell_layout(self.code_graph)
elif layout == 'circular':
pos = nx.circular_layout(self.code_graph)
elif layout == 'spiral':
pos = nx.spiral_layout(self.code_graph)
elif layout == 'random':
pos = nx.random_layout(self.code_graph)
else:
raise ValueError(
"no layout defined: try one of these: " +
"['spring','spectral','planar','shell','circular','spiral','random']")
# white nodes
nx.draw_networkx_nodes(self.code_graph, pos,
nodelist=list(self.alpha),
node_color='c',
node_size=500,
alpha=0.3)
# vertex nodes
nx.draw_networkx_nodes(self.code_graph, pos,
nodelist=list(self.sigma),
node_color='b',
node_size=500,
alpha=0.6)
# face nodes
nx.draw_networkx_nodes(self.code_graph, pos,
nodelist=list(self.phi),
node_color='r',
node_size=500,
alpha=0.6)
# edges
nx.draw_networkx_edges(self.code_graph, pos, width=1.0, alpha=0.5)
labels = {}
if node_type == 'cycles':
'''
label nodes the cycles of sigma, alpha, and phi
'''
for node in self.alpha_dict:
# stuff = self.alpha_dict[node]
labels[node] = f'$e$({node})'
for node in self.sigma_dict:
# something = self.sigma_dict[node]
labels[node] = f'$v$({node})'
for node in self.phi_dict:
# something2 = self.phi_dict[node]
labels[node] = f'$f$({node})'
nx.draw_networkx_labels(self.code_graph, pos, labels, font_size=12)
if node_type == 'dict':
'''
label nodes with v, e, f and indices given by node_dict corresponding to
qubit indices of surface code
'''
for node in self.alpha_dict:
# stuff = self.alpha_dict[node]
labels[node] = f'$e$({self.alpha_dict[node]})'
for node in self.sigma_dict:
# something = self.sigma_dict[node]
labels[node] = f'$v$({self.sigma_dict[node]})'
for node in self.phi_dict:
# something2 = self.phi_dict[node]
labels[node] = f'$f$({self.phi_dict[node]})'
nx.draw_networkx_labels(self.code_graph, pos, labels, font_size=12)
# plt.axis('off')
# plt.savefig("labels_and_colors.png") # save as png
plt.show() # display
```
```python
sigma = ((0,1,2), (3,4,5), (6,7,8,9))
alpha = ((0,3),(1,4),(2,6),(5,7),(8,9))
SCG = SurfaceCodeGraph(sigma, alpha)
```
```python
SCG.draw('dict', layout = 'spring')
```
```python
SCG.draw('cycles', layout = 'spring')
```
```python
SCG.node_info
```
({(0, 1, 2): 0, (3, 4, 5): 1, (6, 7, 8, 9): 2},
{(0, 3): 5, (1, 4): 6, (2, 6): 7, (5, 7): 8, (8, 9): 9},
{(0, 6, 8, 5, 1, 3, 7, 2, 4): 3, (9,): 4})
```python
SCG.genus()
```
1
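A quick consistency check (my own addition): with the definitions above, the Euler characteristic and the genus should satisfy chi = 2 - 2g.

```python
chi = SCG.euler_characteristic()
g = SCG.genus()
print(chi, g, chi == 2 - 2 * g)  # expects True
```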
```python
SCG.face_basis()
```
{(0, 6, 8, 5, 1, 3, 7, 2, 4): array([1, 0], dtype=uint8),
(9,): array([0, 1], dtype=uint8)}
```python
SCG.vertex_basis()
```
{(0, 1, 2): array([1, 0, 0], dtype=uint8),
(3, 4, 5): array([0, 1, 0], dtype=uint8),
(6, 7, 8, 9): array([0, 0, 1], dtype=uint8)}
```python
SCG.edge_basis()
```
{(0, 3): array([1, 0, 0, 0, 0], dtype=uint8),
(1, 4): array([0, 1, 0, 0, 0], dtype=uint8),
(2, 6): array([0, 0, 1, 0, 0], dtype=uint8),
(5, 7): array([0, 0, 0, 1, 0], dtype=uint8),
(8, 9): array([0, 0, 0, 0, 1], dtype=uint8)}
```python
SCG.del_1([(0,3)])
```
[(0, 1, 2), (3, 4, 5)]
```python
SCG.boundary_1((0,3))
```
[(0, 1, 2), (3, 4, 5)]
```python
SCG.del_1([(1,4)])
```
[(0, 1, 2), (3, 4, 5)]
```python
SCG.boundary_1((1,4))
```
[(0, 1, 2), (3, 4, 5)]
```python
SCG.del_1([(2,6)])
```
[(0, 1, 2), (6, 7, 8, 9)]
```python
SCG.boundary_1((2,6))
```
[(0, 1, 2), (6, 7, 8, 9)]
```python
SCG.del_1([(5,7)])
```
[(3, 4, 5), (6, 7, 8, 9)]
```python
SCG.boundary_1((5,7))
```
[(3, 4, 5), (6, 7, 8, 9)]
```python
SCG.del_1([(8,9)])
```
[]
```python
SCG.boundary_1((8,9))
```
[]
```python
SCG.d_1()
```
(array([[1, 1, 1, 0, 0],
[1, 1, 0, 1, 1],
[0, 0, 1, 1, 1]], dtype=uint8),
(3, 5))
```python
SCG.d_2()
```
(array([[0, 0],
[0, 0],
[0, 0],
[0, 0],
[1, 1]], dtype=uint8),
(5, 2))
```python
SCG.D1
```
array([[1, 1, 1, 0, 0],
[1, 1, 0, 1, 1],
[0, 0, 1, 1, 1]], dtype=uint8)
```python
SCG.D2
```
array([[0, 0],
[0, 0],
[0, 0],
[0, 0],
[1, 1]], dtype=uint8)
```python
def rowSwap(A, i, j):
temp = np.copy(A[i, :])
A[i, :] = A[j, :]
A[j, :] = temp
def colSwap(A, i, j):
temp = np.copy(A[:, i])
A[:, i] = A[:, j]
A[:, j] = temp
def scaleCol(A, i, c):
A[:, i] *= int(c) * np.ones(A.shape[0], dtype=np.int64)
def scaleRow(A, i, c):
A[i, :] = np.array(A[i, :], dtype=np.float64) * c * np.ones(A.shape[1], dtype=np.float64)
def colCombine(A, addTo, scaleCol, scaleAmt):
A[:, addTo] += scaleAmt * A[:, scaleCol]
def rowCombine(A, addTo, scaleRow, scaleAmt):
A[addTo, :] += scaleAmt * A[scaleRow, :]
```
```python
def simultaneousReduce(A, B):
if A.shape[1] != B.shape[0]:
raise Exception("Matrices have the wrong shape.")
numRows, numCols = A.shape
i, j = 0, 0
while True:
if i >= numRows or j >= numCols:
break
if A[i, j] == 0:
nonzeroCol = j
while nonzeroCol < numCols and A[i, nonzeroCol] == 0:
nonzeroCol += 1
if nonzeroCol == numCols:
i += 1
continue
colSwap(A, j, nonzeroCol)
rowSwap(B, j, nonzeroCol)
pivot = A[i, j]
scaleCol(A, j, 1.0 / pivot)
scaleRow(B, j, 1.0 / pivot)
for otherCol in range(0, numCols):
if otherCol == j:
continue
if A[i, otherCol] != 0:
scaleAmt = -A[i, otherCol]
colCombine(A, otherCol, j, scaleAmt)
rowCombine(B, j, otherCol, -scaleAmt)
i += 1;
j += 1
return A%2, B%2
def finishRowReducing(B):
numRows, numCols = B.shape
i, j = 0, 0
while True:
if i >= numRows or j >= numCols:
break
if B[i, j] == 0:
nonzeroRow = i
while nonzeroRow < numRows and B[nonzeroRow, j] == 0:
nonzeroRow += 1
if nonzeroRow == numRows:
j += 1
continue
rowSwap(B, i, nonzeroRow)
pivot = B[i, j]
scaleRow(B, i, 1.0 / pivot)
for otherRow in range(0, numRows):
if otherRow == i:
continue
if B[otherRow, j] != 0:
scaleAmt = -B[otherRow, j]
rowCombine(B, otherRow, i, scaleAmt)
i += 1;
j += 1
return B%2
def numPivotCols(A):
z = np.zeros(A.shape[0])
return [np.all(A[:, j] == z) for j in range(A.shape[1])].count(False)
def numPivotRows(A):
z = np.zeros(A.shape[1])
return [np.all(A[i, :] == z) for i in range(A.shape[0])].count(False)
def bettiNumber(d_k, d_kplus1):
A, B = np.copy(d_k), np.copy(d_kplus1)
simultaneousReduce(A, B)
finishRowReducing(B)
dimKChains = A.shape[1]
print("dim 1-chains:",dimKChains)
kernelDim = dimKChains - numPivotCols(A)
print("dim ker d_1:",kernelDim)
imageDim = numPivotRows(B)
print("dim im d_2:",imageDim)
return "dim homology:",kernelDim - imageDim
```
```python
simultaneousReduce(SCG.D1.astype('float64'), SCG.D2.astype('float64'))
```
(array([[1., 0., 0., 0., 0.],
[0., 1., 0., 0., 0.],
[1., 1., 0., 0., 0.]]),
array([[0., 0.],
[1., 1.],
[1., 1.],
[0., 0.],
[1., 1.]]))
```python
finishRowReducing(SCG.D2.astype('float64'))
```
array([[1., 1.],
[0., 0.],
[0., 0.],
[0., 0.],
[0., 0.]])
```python
numPivotCols(SCG.D2.astype('float64'))
```
2
```python
numPivotRows(SCG.D2.astype('float64'))
```
1
```python
bettiNumber(SCG.D1.astype('float64'), SCG.D2.astype('float64'))
```
dim 1-chains: 5
dim ker d_1: 2
dim im d_2: 1
('dim homology:', 1)
[dataset metadata]
hexsha: 85012dd2193d3ace232500b0dd309ce5616afd53 | size: 88,991 | ext: ipynb | lang: Jupyter Notebook
repo_path: Use Case Examples/surface_codes_homology.ipynb | repo_name: The-Singularity-Research/QISKit-Surface-Codes | head_hexsha: 185fcb4dae205dcca9ebec9768f10d24cf05f2d2 | licenses: ["MIT"]
stars: 5 (2020-10-04T12:23:49.000Z – 2022-01-19T17:23:28.000Z) | issues: 1 (2020-07-12T10:42:31.000Z – 2020-07-12T10:42:31.000Z) | forks: 3 (2020-06-26T04:29:38.000Z – 2022-01-19T17:23:39.000Z)
avg_line_length: 83.559624 | max_line_length: 32,840 | alphanum_fraction: 0.786091 | converted: true | num_tokens: 5,258
lm_name: Qwen/Qwen-72B | lm_label: 1. YES, 2. YES | lm_q1_score: 0.903294 | lm_q2_score: 0.679179 | lm_q1q2_score: 0.613498
text_lang: __label__eng_Latn | text_lang_conf: 0.780016 | label: 0.263692
```python
import sympy as sm
import sympy.vector
import sympy.plotting
sigma_s = sm.symbols('\hat{\sigma}^2')
lamb, x,y,a, b = sm.symbols('lambda x,y a b')
B = sm.vector.CoordSys3D('B')
#### transformation factor
# P_{B}^(N) = N.e [] = P[](N_e1 + N_e2)
P = sm.Matrix([[1,1],[4,1]])
I = sm.eye(2)
lI = lamb * I
V = sm.Matrix([a,b])
#### A*V = lamb*V
# A*V = lamb*I*V
# A*V - lamb*I*V = 0
# (A - lamb*I)*V = 0
# A 와 lamb*I, 두 벡터가 평행 해야만 빼서 0을 만들수 있으므로
# 평행한 벡터의 면적은 0 !!
# det(A - lamb*I) = 0
s = P - lamb*I
s = s.det()
s = sm.solve(s,lamb)
## substitute lamb to V
# (A - s[0] * I) *V = 0
s1 = (P - s[0]*I) * V
s2 = (P - s[1]*I) * V
s1 = V.subs(sm.solve(s1))
s2 = V.subs(sm.solve(s2))
v1 = sm.vector.matrix_to_vector(s1,B)
```
```python
f = sm.symbols('f',cls=sm.Function)
sm.dsolve(f(x).diff(x,2) - f(x),f(x))
```
$\displaystyle f{\left(x \right)} = C_{1} e^{- x} + C_{2} e^{x}$
# [basis](https://www.youtube.com/watch?v=PT8FyU0dd3k) > standard basis(orthogonal)
>> $ S , N \in \mathbb R^2\\
S = \{e_1,e_2 \}, e_1=(1,0),e_2=(0,1)\\
N =
\begin{bmatrix}1 & 1 \\ 1 & -1 \end{bmatrix}\\
[V]_S = \big[?\big]_S^N\:[v]_N \\
[V]_N = \big[?\big]_N^S\:[v]_S \\
\therefore P_{N}^{S} = \big(? \big)_{2\times 2}\\
N.e_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}
\Leftarrow
\begin{bmatrix}? \\ ? \end{bmatrix}_N^S
\begin{bmatrix}1\\1\end{bmatrix}
+
\begin{bmatrix}? \\ ? \end{bmatrix}_N^S
\begin{bmatrix}1 \\ -1 \end{bmatrix}
\\
N.e_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}
\Leftarrow
\begin{bmatrix}? \\ ? \end{bmatrix}_N^S \begin{bmatrix}1\\1\end{bmatrix}
+
\begin{bmatrix}? \\ ? \end{bmatrix}_N^S
\begin{bmatrix}1 \\ -1 \end{bmatrix}
\\
\therefore
P_N^S = \begin{bmatrix} \frac{1}{2} & \frac{1}{2}\\
\frac{1}{2} & -\frac{1}{2} \end{bmatrix} \\
\quad [v]_N = P_N^S\:[v]_S
$
> ## that is inverse
>> ## $ N^{-1}
=
\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}^{-1}
\iff
P_N^S = \begin{bmatrix} \frac{1}{2} & \frac{1}{2}\\
\frac{1}{2} & -\frac{1}{2} \end{bmatrix}
$
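A quick sympy check of the claim above (my own addition): the change-of-basis matrix $P_N^S$ is just the inverse of $N$.

```python
import sympy as sm

N = sm.Matrix([[1, 1], [1, -1]])
print(N.inv())  # Matrix([[1/2, 1/2], [1/2, -1/2]]) -- matches P_N^S above
```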
# origin
> The starting point, the origin
>> The reference point
>> reference point: the central point used when every object is described by its relative proportion (distance) from that one point
# axis
> Axis
>> The axes extending infinitely in the forward, sideways, and upward directions when standing at the origin
# perspective
> Point of view
>> A way of describing the states of a target object through its physical relation to the origin
# Scaling Factor
> Scale
>> line
>>> 1D
>> Area
>>> Determinant = scaling factor for areas in a matrix (see the sketch below)
>> Volume
>>> cross product = scaling factor for area in a matrix
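A tiny illustration of the determinant-as-scaling-factor idea (my own addition; the matrix is an arbitrary example):

```python
import sympy as sm

# The unit square spanned by (1,0) and (0,1) is mapped by A to a parallelogram
# whose area is |det(A)| times as large.
A = sm.Matrix([[2, 1], [0, 3]])
print(A.det())  # 6
```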
```python
# Cartesian (rectangular) coordinate system
import sympy as sm
import sympy.vector
### nabla operator = Del()
### sympy.vector.Del()
nabla = sm.vector.Del()
# Cartesian coordinate system
B = sm.vector.CoordSys3D('')
# base vectors (i,j,k) and base scalars (x,y,z) of the Cartesian coordinate system
B.base_scalars(), B.base_vectors()
# vector in i,j,k vector
v1 = 1*B.i+2*B.j+3*B.k
type(v1)
v2 = 3*B.i+4*B.j+5*B.k
type(v2)
# scalar in x,y,z not a vector
v0 = B.x + B.y + B.z
type(v0)
# to matrix of the Coordinate system,where, v is vector
v1.to_matrix(B)
v1.magnitude() == sm.sqrt(v1.dot(v1))
v1.cross(v2)
v1.normalize()
### projection vector ###
# v1.projection(v2)
# v1.dot(v2) = ||v1|| ||v2|| cos(theta)
# cos(theta) = (v1.dot(v2))/(||v1|| ||v2||)
# v1.project(v2) = ||v2|| cost(theta) \hat{v1} = (v1.dot(v2))/(|\v1||^2) x \vec{v1}
# v1.project(v2) = \frac{||v2||cost(theta)}{\hat{v1}} = \frac{(v1.dot(v2))}{|\v1||^2} \vec{v1}
# v1.project(v2) = \frac{(v1.dot(v2))}{v1.dot(v1)}\vec{v1}
v1.projection(v2) == v1.dot(v2)/v1.dot(v1)*v1
#### nabla Del(), where in field potential energy ###
# \nabla \cdot = divergence
# \nabla \times = curl
# \nabla * f(x,y,z) = gradient, where f(x,y,z) is scalar function(that return real numbers)
u_field = 2 * B.x * B.y**2 * B.i + 3 * B.z**2 * B.y*B.j + 4 * B.x**3 * B.z * B.k
type(u_field)
f_field = 3* B.x**2 + B.y**3 + B.z**2
type(f_field)
nabla = sm.vector.Del()
nabla.dot(u_field)
nabla.dot(u_field).doit() == sm.vector.divergence(u_field)
nabla.cross(u_field)
nabla.cross(u_field).doit() == sm.vector.curl(u_field)
nabla(f_field)
nabla(f_field).doit() == sm.vector.gradient(f_field)
```
True
```python
#### cylindrical CoordSys3D ####
#### Cylindrical coordinate system
B = sm.vector.CoordSys3D('B')
C = B.create_new('C', transformation='cylindrical')
#### (r,theta,z) -> (x,y.z)
# (r x cos(theta), r x sin(theta), z)
#rv = C.transformation_from_parent()
ra = [ C.r * sm.cos(C.theta), C.r* sm.sin(C.theta), C.z]
rm = sm.Matrix(ra)
rv = sm.vector.matrix_to_vector(rm,C)
#### (r,theta,z) -> (sqrt(x^2 + y^2), atan(y\x), z)
#r_v = C.transformation_from_parent()
#### r > 0, 0 < theta < 2{\pi} -oo < z < oo
r_a = [ sm.sqrt(B.x**2 + B.y**2), sm.atan(B.y/B.x), B.z]
r_m = sm.Matrix(r_a)
r_v = sm.vector.matrix_to_vector(r_m, C)
#### Scale factors / Jacobian (determinant) ####
hr = sm.sqrt(rv.diff(C.r).dot(rv.diff(C.r))).simplify()
ht = sm.sqrt(rv.diff(C.theta).dot(rv.diff(C.theta))).simplify()
hz = sm.sqrt(rv.diff(C.z).dot(rv.diff(C.z))).simplify()
#### bases
rh = rv.diff(C.r)/hr
th = rv.diff(C.theta)/ht
zh = rv.diff(C.z)/hz
rh.cross(th).simplify() == zh
#### dS = d{\theta} * dz + 2*dr
## dS = ht * hz * d{\theta}*dz + 2(hr*dr)
## dS = r*d{\theta}*dz + 2(dr)
sm.Integral(C.r,(C.theta,0,2*sm.pi),(C.z,0,C.z)) + 2*sm.Integral(C.r,(C.theta,0,2*sm.pi))
sm.integrate(C.r,(C.theta,0,2*sm.pi),(C.z,0,C.z)) + 2*sm.integrate(C.r,(C.theta,0,2*sm.pi))
sm.integrate(ht*hz,(C.theta,0,2*sm.pi),(C.z,0,C.z)) + 2*sm.integrate(ht,(C.theta,0,2*sm.pi))
#### dV
## dV = hr * ht * hz * dr * dt * dz = dr * r*dt * dz
## dV = dr * r*d{\theta} * dz
sm.Integral(hr*ht*hz,(C.z,0,C.z),(C.theta,0,2*sm.pi),(C.r,0,C.r))
sm.integrate(hr*ht*hz,(C.z,0,C.z),(C.theta,0,2*sm.pi),(C.r,0,C.r))
sm.integrate(C.r,(C.z,0,C.z),(C.theta,0,2*sm.pi),(C.r,0,C.r))
sm.Matrix(C.transformation_to_parent()).subs({C.i:sm.cos(C.theta),C.j:sm.sin(C.theta)})
uf = C.r*sm.cos(C.theta)**2*C.i + C.r*sm.sin(C.theta)**2*C.j + C.z*C.k
type(uf)
uf.to_matrix(C)
sf = C.r**2
type(sf)
nabla = sm.vector.Del()
nabla(sf).doit() == sm.vector.gradient(sf)
nabla.dot(uf).doit() == sm.vector.divergence(uf)
nabla.cross(uf).doit() == sm.vector.curl(uf)
nabla.cross(uf).doit()
```
$\displaystyle (\frac{2 \mathbf{{r}_{C}} \sin^{2}{\left(\mathbf{{theta}_{C}} \right)} + 2 \mathbf{{r}_{C}} \sin{\left(\mathbf{{theta}_{C}} \right)} \cos{\left(\mathbf{{theta}_{C}} \right)}}{\mathbf{{r}_{C}}})\mathbf{\hat{k}_{C}}$
```python
#### spherical CoordSys3D #####
#### Spherical coordinate system ####
B = sm.vector.CoordSys3D('B')
S = B.create_new('S', transformation='spherical')
u = S.r*sm.sin(S.theta)*sm.cos(S.phi)*S.i + S.r*sm.sin(S.theta)*sm.sin(S.phi)*S.j + S.r*sm.cos(S.theta)
u.to_matrix(S)
sm.vector.matrix_to_vector(sm.Matrix(S.transformation_to_parent()),S)
### \ver{r} (r,theta,phi) -> (x,y,z)
ra = [S.r*sm.sin(S.theta)*sm.cos(S.phi), S.r*sm.sin(S.theta)*sm.sin(S.phi),S.r*sm.cos(S.theta)]
rm = sm.Matrix(ra)
rv = sm.vector.matrix_to_vector(rm,S)
#### (x,y,z) -> (r, theta, phi)
r_a = [sm.sqrt(B.x**2 + B.y**2 + B.z**2),
sm.atan(sm.sqrt(B.x**2 + B.y**2)/B.z),
sm.atan(B.y/B.x)]
r_m = sm.Matrix(r_a)
r_x = sm.vector.matrix_to_vector(r_m,S)
#print(f'{r_x}\n{sm.vector.matrix_to_vector(sm.Matrix(S.transformation_from_parent()),S)}')
### height : length r theta phi
##
#hr = rv.diff(S.r).magnitude().doit().simplify()
hr = rv.diff(S.r)
print(f'{hr}')
hr = sm.sqrt(hr.dot(hr)).simplify()
print(f'dr \n{hr} \n{rv.diff(S.r)}')
ht = rv.diff(S.theta)
ht = sm.sqrt(ht.dot(ht)).simplify()
hp = rv.diff(S.phi)
hp = sm.sqrt(hp.dot(hp)).simplify()
rh = rv.diff(S.r) / hr
th = rv.diff(S.theta) / ht
ph = rv.diff(S.phi) / hp
rh.dot(th).doit().simplify()
rh.dot(ph).doit().simplify()
th.dot(ph).doit().simplify()
rh.cross(th).simplify()
th.cross(ph).simplify()
### dS ###
## dS = d{\theta } * d{\phi}
## dr/{\partial \theta}.magnitude() => r * d{\theta}
#print(f'D_theta= {ht.simplify()}')
## dr/{\partial \phi}.magnitude() -- magnitude along the phi direction => r * sin(\theta) d{\phi}
#print(f'D_phi = {hp.simplify()}')
# r^2 x sin({\theta})
# dS = ht x hp x dt x dp = r^2 x sin(theta) x d{\theta} x d{\phi}
sm.Integral(S.r**2 * sm.sin(S.theta),(S.theta,0,sm.pi),(S.phi,0,2*sm.pi))
sm.integrate(S.r**2 * sm.sin(S.theta),(S.theta,0,sm.pi),(S.phi,0,2*sm.pi))
### dV ###
## dV = dr x d{\theta} x d{\phi}
## dr = 1
#print(f'D_r = {hr.simplify()}')
rv.diff(S.r)
sm.Integral(hr*ht*hp,(S.r,0,2*sm.pi),(S.theta,0,sm.pi),(S.phi,0,2*sm.pi))
sm.integrate(hr*ht*hp,(S.r,0,S.r),(S.theta,0,sm.pi),(S.phi,0,2*sm.pi))
### gradient = \nabla(f) ###
nabla = sm.vector.Del()
sm.integrate(S.r**2 * sm.sin(S.theta),(S.theta,0,sm.pi),(S.phi,0,2*sm.pi)) == sm.integrate(ht*hp,(S.theta,0,sm.pi),(S.phi,0,2*sm.pi))
```
(sin(S.theta)*cos(S.phi))*S.i + (sin(S.phi)*sin(S.theta))*S.j + (cos(S.theta))*S.k
dr
1
(sin(S.theta)*cos(S.phi))*S.i + (sin(S.phi)*sin(S.theta))*S.j + (cos(S.theta))*S.k
True
# Creating New System v.to_matrix(B)
> ## sympy.vector.CoordSys3D('name')
# Transformaing New System
> ## obj.create_new (name, transforamtion='')
>> ## The origin does not change; only the axes are combined with the angle.
>>> name: str,
>>> transformation: lambda,tuple.str('cylindrical','spherical'),
>>> vector_names = ' ',
>>> variable_names = ' '
```python
C = B.create_new('C','cylindrical')
C = B.create_new(name='C',transformation='cylindrical')
D = B.create_new('D',transformation=lambda x,y,z:(sm.cos(x),sm.sin(y),z))
C.base_vectors()
C.base_scalars()
sm.Matrix(C.transformation_from_parent())
sm.Matrix(C.transformation_to_parent())
# wrt = with respect to
#C.position_wrt(B)
sm.vector.express(p,C)
```
```python
x = sm.symbols('x')
expr = abs(sm.sin(x**2))
sm.ccode(expr)
sm.julia_code(expr)
sm.octave_code(expr)
#import sympy.printing.cxxcode
sm.cxxcode(expr)
sm.ccode?
```
Signature: sm.ccode(expr, assign_to=None, standard='c99', **settings)
Docstring:
Converts an expr to a string of c code
Parameters
==========
expr : Expr
A sympy expression to be converted.
assign_to : optional
When given, the argument is used as the name of the variable to which
the expression is assigned. Can be a string, ``Symbol``,
``MatrixSymbol``, or ``Indexed`` type. This is helpful in case of
line-wrapping, or for expressions that generate multi-line statements.
standard : str, optional
String specifying the standard. If your compiler supports a more modern
standard you may set this to 'c99' to allow the printer to use more math
functions. [default='c89'].
precision : integer, optional
The precision for numbers such as pi [default=17].
user_functions : dict, optional
A dictionary where the keys are string representations of either
``FunctionClass`` or ``UndefinedFunction`` instances and the values
are their desired C string representations. Alternatively, the
dictionary value can be a list of tuples i.e. [(argument_test,
cfunction_string)] or [(argument_test, cfunction_formater)]. See below
for examples.
dereference : iterable, optional
An iterable of symbols that should be dereferenced in the printed code
expression. These would be values passed by address to the function.
For example, if ``dereference=[a]``, the resulting code would print
``(*a)`` instead of ``a``.
human : bool, optional
If True, the result is a single string that may contain some constant
declarations for the number symbols. If False, the same information is
returned in a tuple of (symbols_to_declare, not_supported_functions,
code_text). [default=True].
contract: bool, optional
If True, ``Indexed`` instances are assumed to obey tensor contraction
rules and the corresponding nested loops over indices are generated.
Setting contract=False will not generate loops, instead the user is
responsible to provide values for the indices in the code.
[default=True].
Examples
========
>>> from sympy import ccode, symbols, Rational, sin, ceiling, Abs, Function
>>> x, tau = symbols("x, tau")
>>> expr = (2*tau)**Rational(7, 2)
>>> ccode(expr)
'8*M_SQRT2*pow(tau, 7.0/2.0)'
>>> ccode(expr, math_macros={})
'8*sqrt(2)*pow(tau, 7.0/2.0)'
>>> ccode(sin(x), assign_to="s")
's = sin(x);'
>>> from sympy.codegen.ast import real, float80
>>> ccode(expr, type_aliases={real: float80})
'8*M_SQRT2l*powl(tau, 7.0L/2.0L)'
Simple custom printing can be defined for certain types by passing a
dictionary of {"type" : "function"} to the ``user_functions`` kwarg.
Alternatively, the dictionary value can be a list of tuples i.e.
[(argument_test, cfunction_string)].
>>> custom_functions = {
... "ceiling": "CEIL",
... "Abs": [(lambda x: not x.is_integer, "fabs"),
... (lambda x: x.is_integer, "ABS")],
... "func": "f"
... }
>>> func = Function('func')
>>> ccode(func(Abs(x) + ceiling(x)), standard='C89', user_functions=custom_functions)
'f(fabs(x) + CEIL(x))'
or if the C-function takes a subset of the original arguments:
>>> ccode(2**x + 3**x, standard='C99', user_functions={'Pow': [
... (lambda b, e: b == 2, lambda b, e: 'exp2(%s)' % e),
... (lambda b, e: b != 2, 'pow')]})
'exp2(x) + pow(3, x)'
``Piecewise`` expressions are converted into conditionals. If an
``assign_to`` variable is provided an if statement is created, otherwise
the ternary operator is used. Note that if the ``Piecewise`` lacks a
default term, represented by ``(expr, True)`` then an error will be thrown.
This is to prevent generating an expression that may not evaluate to
anything.
>>> from sympy import Piecewise
>>> expr = Piecewise((x + 1, x > 0), (x, True))
>>> print(ccode(expr, tau, standard='C89'))
if (x > 0) {
tau = x + 1;
}
else {
tau = x;
}
Support for loops is provided through ``Indexed`` types. With
``contract=True`` these expressions will be turned into loops, whereas
``contract=False`` will just print the assignment expression that should be
looped over:
>>> from sympy import Eq, IndexedBase, Idx
>>> len_y = 5
>>> y = IndexedBase('y', shape=(len_y,))
>>> t = IndexedBase('t', shape=(len_y,))
>>> Dy = IndexedBase('Dy', shape=(len_y-1,))
>>> i = Idx('i', len_y-1)
>>> e=Eq(Dy[i], (y[i+1]-y[i])/(t[i+1]-t[i]))
>>> ccode(e.rhs, assign_to=e.lhs, contract=False, standard='C89')
'Dy[i] = (y[i + 1] - y[i])/(t[i + 1] - t[i]);'
Matrices are also supported, but a ``MatrixSymbol`` of the same dimensions
must be provided to ``assign_to``. Note that any expression that can be
generated normally can also exist inside a Matrix:
>>> from sympy import Matrix, MatrixSymbol
>>> mat = Matrix([x**2, Piecewise((x + 1, x > 0), (x, True)), sin(x)])
>>> A = MatrixSymbol('A', 3, 1)
>>> print(ccode(mat, A, standard='C89'))
A[0] = pow(x, 2);
if (x > 0) {
A[1] = x + 1;
}
else {
A[1] = x;
}
A[2] = sin(x);
File: ~/.local/lib/python3.9/site-packages/sympy/printing/codeprinter.py
Type: function
```python
S = B.create_new('S','spherical')
S.to_matrix()
```
# Locating New System
> # obj.locate_new(...)
>> name,
>> postition,
>> vector_names,
>> variable_names
```python
L = B.locate_new('L', 5*B.i+ 2*B.j + 3*B.k)
sm.Matrix(L.transformation_to_parent()).subs({L.x:7,L.y:2,L.z:3})
sm.Matrix(L.transformation_to_parent())
L.position_wrt(B)
# something wrong???
sm.vector.express(2*L.i+L.j+L.k,B)
```
$\displaystyle (2)\mathbf{\hat{i}_{B}} + \mathbf{\hat{j}_{B}} + \mathbf{\hat{k}_{B}}$
# Orienting New System
> ## Rotate the rest of the system about one chosen axis.
> # obj.orient_new(..)
> ## The angles of the axes change.
>> ### obj.orient_new_axis
>>> ### B.orient_new_axis(
name,
angle,
axis vector,
locastion=$\vec{v}$
)
>> ### obj.orient_new_body
>> ### obj.orient_new_space
>> ### obj.orient_new_quaternion
# QuaternionOrienter
## obj.orient_new(name,(,))v
```python
theta = sm.symbols('theta')
# rotate by \theta about the \hat{k} axis
N = B.orient_new_axis('N', theta, B.k)
sm.Matrix(N.transformation_to_parent())
#sm.Matrix(N.transformation_to_parent()).subs({theta:sm.pi/4,N.x:1,N.y:2,N.z:2})
# sm.vector.express(p,N)
```
$\displaystyle \left[\begin{matrix}\mathbf{{x}_{N}} \cos{\left(\theta \right)} + \mathbf{{y}_{N}} \sin{\left(\theta \right)}\\- \mathbf{{x}_{N}} \sin{\left(\theta \right)} + \mathbf{{y}_{N}} \cos{\left(\theta \right)}\\\mathbf{{z}_{N}}\end{matrix}\right]$
```python
N.rotation_matrix(B)
N.rotation_matrix(B).subs({theta:sm.pi/4})
N.rotation_matrix(B).subs({theta:sm.pi/4}).dot([1,2,2])
#sm.Matrix(N.rotation_matrix(B).subs({theta:sm.pi/4}).dot([1,2,2]))
```
[3*sqrt(2)/2, sqrt(2)/2, 2]
# Orienting and Locating New Coordinate System
> ## B.orient_new_axis (
>> ### 'name',
>> ### $(\angle)$ angle scalar, $(\vec{axis})$ axis vector,
>> ### location = $(\vec{move})$ vector
> ## )
```python
a,b,r = sm.symbols('alpha beta gamma')
M = B.orient_new_axis('M',theta,B.k,location= a*B.i + b*B.j + r*B.k)
sm.Matrix(M.transformation_to_parent()).subs({theta:sm.pi/4,a:1,b:1,r:1,M.x:1,M.y:2,M.z:2})
```
$\displaystyle \left[\begin{matrix}1 + \frac{3 \sqrt{2}}{2}\\\frac{\sqrt{2}}{2} + 1\\3\end{matrix}\right]$
# quaternion
> ## sympy.algebras.quaternion
>> ## $q = (w , \vec{v})$
>>> ## $q = (w,(x,y,z))$
>>> ## $q = w + xi + yj + zk \\ q_1 = w_1 + \vec{v_1} \\ q_2= w_2 + \vec{v_2}$
> ## $q_1\:q_2 = (w_1w_2 - \vec{v_1}\cdot \vec{v_2}, \quad w_1\vec{v_2} + w_2\vec{v_1} + \vec{v_1}\times\vec{v_2})$
> ## $q^2 = (0,\vec{v})(0,\vec{v}) = (-\vec{v}\cdot \vec{v}, \vec{0}) = -|\vec{v}|^2 $
```python
import sympy.algebras
x = sm.symbols('x')
q = sm.algebras.Quaternion(1,2,3,4)
q1 = sm.algebras.Quaternion(x,x**3,x,x**2,real_field=False)
q2 = sm.algebras.Quaternion(3+4*sm.I,2+5*sm.I,0,7+8*sm.I,real_field=False)
q2
```
$\displaystyle \left(3 + 4 i\right) + \left(2 + 5 i\right) i + 0 j + \left(7 + 8 i\right) k$
#
> ### $ \vec{v} = ai + bj + ck \\
q = w + \vec{v} \\
\quad = (w, \vec{v}) \\
\quad = (w, ai + bj + ck) $
>> ### $ e^{ai + bj + ck} = cos(|\vec{v}|) +
\frac{sin(|\vec{v}|}{|\vec{v}|}
(ai + bj + ck)$
>> ### $ e^{w + ai + bj + ck} \\
\quad = e^{w} e^{ai+bj+ck} \\
\quad = cos(|\vec{v}|) + \frac{sin(|\vec{v}|}{|\vec{v}|}
(ai + bj + ck) \\
\quad = cos(|\vec{v}|) + sin(|\vec{v}|)\,
\frac {ai + bj + ck}{|\vec{v}|} \\
\quad = cos(|\vec{v}|) + sin(|\vec{v}|)\,
\frac {\vec{v}}{|\vec{v}|} \\
\quad = cos(|\vec{v}|) + sin(|\vec{v}|)\,\hat{v}\\
$
> ## $ e^{\vec{v}} = e^{ai+bj+ck}
\begin{cases}
cos(|\vec{v}|) +
\frac{sin(|\vec{v}|}{|\vec{v}|}\,(ai + bj + ck) \\
cos(|\vec{v}|) +
sin(|\vec{v}|)\,
(
\frac{ai}{|\vec{v}|} +
\frac{bj}{|\vec{v}|} +
\frac{ck}{|\vec{v}|}
) \\
cos(|\vec{v}|) + sin(|\vec{v}|)\,\hat{v} \\
\end{cases}$
```python
# e^q
import sympy.algebras
theta = sm.symbols('theta')
q = sm.algebras.Quaternion(sm.cos(theta),
sm.sin(theta)*sm.sqrt(1/3),
sm.sin(theta)*sm.sqrt(1/3),
sm.sin(theta)*sm.sqrt(1/3))
q*q
sm?
```
Type: module
String form: <module 'sympy' from '/home/jkarng/.local/lib/python3.9/site-packages/sympy/__init__.py'>
File: ~/.local/lib/python3.9/site-packages/sympy/__init__.py
Docstring:
SymPy is a Python library for symbolic mathematics. It aims to become a
full-featured computer algebra system (CAS) while keeping the code as simple
as possible in order to be comprehensible and easily extensible. SymPy is
written entirely in Python. It depends on mpmath, and other external libraries
may be optionally for things like plotting support.
See the webpage for more information and documentation:
https://sympy.org
```python
sm.exp(sm.algebras.Quaternion(1,1,1,1))
```
```python
sm.algebras.Quaternion(1,1,1,1).exp()
```
# Quaternion Multiply
> #[quaternion](https://www.youtube.com/watch?v=88BA8aO3qXA)
> $ (a + bi + cj + dk)\:(e + fi + gj + hk ) \\
\begin{bmatrix}
a & -b & -c & -d \\
b & a & -d & c \\
c & d & a & -b \\
d & -c & b & a
\end{bmatrix} \quad
\begin{bmatrix}
e \\ f \\ g \\ h
\end{bmatrix}
$
> # unit quaternion matrix
>> $ 1 = (1,0,0,0) \\
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
$
>> $ i = (0,1,0,0) \\
\begin{bmatrix}
0 & -1 & 0 & 0 \\
1 & 0 & 0 & 0 \\
0 & 0 & 0 & -1 \\
0 & 0 & 1 & 0 \\
\end{bmatrix}
$
>> $ j = (0,0,1,0) \\
\begin{bmatrix}
0 & 0 & -1 & 0 \\
0 & 0 & 0 & -1 \\
1 & 0 & 0 & 0 \\
0 & -1 & 0 & 0 \\
\end{bmatrix}
$
>> $k = (0,0,0,1) \\
\begin{bmatrix}
0 & 0 & 0 & -1 \\
0 & 0 & -1 & 0 \\
0 & 1 & 0 & 0 \\
1 & 0 & 0 & 0 \\
\end{bmatrix}
$
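As a small sanity check (my own addition), sympy's Quaternion reproduces the defining relations $i^2 = j^2 = k^2 = ijk = -1$:

```python
from sympy.algebras import Quaternion

i = Quaternion(0, 1, 0, 0)
j = Quaternion(0, 0, 1, 0)
k = Quaternion(0, 0, 0, 1)
print(i*i, j*j, k*k, i*j*k)  # each is -1 + 0*i + 0*j + 0*k
```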
```python
q0, q1, q2, q3 = sm.symbols('q_0 q_1 q_2 q_3')
Q = B.orient_new_quaternion('Q',q0,q1,q2,q3)
sm.Matrix(Q.transformation_to_parent())
```
```python
m = sm.Matrix([1,2,3])
q = sm.algebras.Quaternion(1,2,3,4)
q.norm()
q.normalize()
q.inverse()
q.pow(2) == q.mul(q) == q*q
sm.vector.divergence(q)
sm.vector.curl(q)
sm.vector.gradient(q)
```
```python
x = sm.symbols('x')
q1 = sm.algebras.Quaternion(x**2,x**3,x)
q2 = sm.algebras.Quaternion(2,(3+2*sm.I), x**2, 3.5*sm.I)
```
```python
q1 * q2
```
```python
q1.inverse()
q1.conjugate()/q1.norm()
```
```python
q1.inverse()
```
```python
```
[dataset metadata]
hexsha: e35e5ccfb5f90e20453d5d8f9abf8f23c9941051 | size: 37,375 | ext: ipynb | lang: Jupyter Notebook
repo_path: python/Vectors/CoordSys3D.ipynb | repo_name: karng87/nasm_game | head_hexsha: a97fdb09459efffc561d2122058c348c93f1dc87 | licenses: ["MIT"]
stars: null | issues: null | forks: null
avg_line_length: 33.915608 | max_line_length: 668 | alphanum_fraction: 0.493967 | converted: true | num_tokens: 8,132
lm_name: Qwen/Qwen-72B | lm_label: 1. YES, 2. YES | lm_q1_score: 0.819893 | lm_q2_score: 0.679179 | lm_q1q2_score: 0.556854
text_lang: __label__eng_Latn | text_lang_conf: 0.263629 | label: 0.132088
JuPOT - Tutorial
================
```julia
push!(LOAD_PATH, "$(homedir())/desktop/financial-opt-tools/src")
using JuPOT
# Generate synthetic data sets for Demonstration
############
# Assets
############
n = 10 # No. Of Assets
returns = rand(n)*0.4
covariance = let
S = randn(n, n)
S'S + eye(n)
end
names = [randstring(3) for i in 1:n] # List of asset names
# Assets data structure containing, names, expected returns, covarariance
assets = AssetsCollection(names, returns, covariance)
```
10x2 DataFrames.DataFrame
| Row | A | B |
|-----|-------|-----------|
| 1 | "fqU" | 0.145521 |
| 2 | "nbG" | 0.0794849 |
| 3 | "yVw" | 0.205992 |
| 4 | "nlr" | 0.117006 |
| 5 | "eZU" | 0.230518 |
| 6 | "NO1" | 0.0615766 |
| 7 | "Bnh" | 0.292437 |
| 8 | "M22" | 0.235276 |
| 9 | "mUe" | 0.0722919 |
| 10 | "aZe" | 0.0120013 |
## Simple MVO
\begin{align}
&\text{minimize} && w^\top\Sigma w \\
&\text{subject to} && \mu^\top w\geq r_{\min} \\
& && \mathbf{1}^\top w = 1 \\
& && w \succeq 0 \\
\end{align}
```julia
######################################################
################### SIMPLE MVO #######################
######################################################
# Example 1: Basic
target_return = 0.2
# refer to model definition for keyword arguments, etc.
mvo = SimpleMVO(assets, target_return; short_sale=false)
```
Sense: Min
Variables:
w[1:10] >= 0
Objective Function:
dot(w,10x10 Array{Float64,2}:
23.7016 9.98336 -0.553158 … 0.733098 -0.814721 -0.56417
9.98336 17.9551 2.96608 -0.121899 0.172077 1.30404
-0.553158 2.96608 10.1192 -3.80496 0.209278 0.126627
-0.570444 1.48801 -0.296499 1.93211 -1.10089 1.20087
2.06455 -0.231737 -2.93694 -1.33384 -0.421206 0.919909
3.19321 -1.29136 4.48853 … -0.792987 1.35509 -0.088859
-2.3449 3.35075 0.865592 -0.983744 2.56856 1.105
0.733098 -0.121899 -3.80496 5.67463 -0.1951 -0.615585
-0.814721 0.172077 0.209278 -0.1951 8.57526 -2.73014
-0.56417 1.30404 0.126627 -0.615585 -2.73014 6.77963 * w)
Constraints:
0x2 DataFrames.DataFrame
2x2 DataFrames.DataFrame
| Row | Default |
|-----|-----------|
| 1 | "default" |
| 2 | "default" |
| Row | Constraint |
|-----|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 1 | :(dot([0.5479590591553458,0.8124768275985059,0.38122815802239063,0.8908056131630402,0.21612177093254403,0.021889817591750127,0.18684295493730074,0.7584938923953053,0.23944112136715145,0.7859992194071441],w) ≥ 0.2) |
| 2 | :(dot(ones(10),w) == 1) |
Assets:
10x2 DataFrames.DataFrame
| Row | A | B |
|-----|-------|-----------|
| 1 | "xN2" | 0.547959 |
| 2 | "7NE" | 0.812477 |
| 3 | "k3A" | 0.381228 |
| 4 | "ji3" | 0.890806 |
| 5 | "TRc" | 0.216122 |
| 6 | "vy1" | 0.0218898 |
| 7 | "Hqy" | 0.186843 |
| 8 | "DOO" | 0.758494 |
| 9 | "Gi9" | 0.239441 |
| 10 | "lyg" | 0.785999 |
```julia
optimize(mvo)
```
10-element Array{Float64,1}:
0.00709497
5.81211e-14
0.221814
5.78534e-10
0.235785
6.99527e-14
0.0849516
0.310166
0.066205
0.0739836
Constraints
-----------
\begin{align}
&\text{minimize} && w^\top\Sigma w \\
&\text{subject to} && \mu^\top w\geq r_{\min} \\
& && \mathbf{1}^\top w = 1 \\
& && w \succeq 0 \\
& && I_{tech}^\top w \leq 0.3 \\
& && I_{fin}^\top w \leq 0.5 \\
\end{align}
```julia
##################################
# Example 2: Adding a Constraint #
##################################
function genTechIndicator()
[0,0,1,1,0,1,0,1,1,0]
end
# Adding a simple weight constraint
constraints = Dict((:techClassWeightConstraint => :(dot(w,tech) <= tech_thresh)),
(:finClassWeightConstraint => :(dot(w,fin) <= fin_thresh)))
parameters = Dict(:tech=>genTechIndicator(),
:tech_thresh => 0.3,
:fin=> [1,1,0,0,1,0,1,0,0,0],
:fin_thresh => 0.5)
# refer to model definition for keyword arguments, etc.
mvo = SimpleMVO(assets, target_return, constraints; short_sale=false)
w = optimize(mvo, parameters)
```
10-element Array{Float64,1}:
0.0499577
4.4887e-12
0.114043
0.000115883
0.256528
5.46294e-12
0.193514
0.171354
0.0144872
0.2
```julia
# Constraint Checks
print("Tech Class Constraint: \n")
techClassWeight = dot(w, parameters[:tech])
print(techClassWeight, ", ", techClassWeight <= parameters[:tech_thresh])
print("\n")
print("Fin Class Constraint: \n")
finClassWeight = dot(w, parameters[:fin])
print(finClassWeight, ", ", finClassWeight <= parameters[:fin_thresh])
```
Tech Class Constraint:
0.29999999999897375, true
Fin Class Constraint:
0.49999999999262246, true
Updated Constraint
------------------
\begin{align}
&\text{minimize} && w^\top\Sigma w \\
&\text{subject to} && \mu^\top w\geq r_{\min} \\
& && \mathbf{1}^\top w = 1 \\
& && w \succeq 0 \\
& && I_{tech}^\top w \leq 0.04 \\
& && I_{fin}^\top w \leq 0.5 \\
\end{align}
```julia
#######################################################
# Example 3: Changing a Constraint's parameter values #
#######################################################
# Changing values of an entered constraint
parameters[:tech_thresh] = 0.04
# refer to model definition for keyword arguments, etc.
w = optimize(mvo, parameters)
```
10-element Array{Float64,1}:
0.0854787
5.03612e-11
1.24508e-10
4.85935e-11
0.204448
1.53004e-11
0.210073
0.0122023
0.0277977
0.46
```julia
# Constraint Checks
print("Tech Class Constraint: \n")
techClassWeight = dot(w, parameters[:tech])
print(techClassWeight, ", ", techClassWeight <= parameters[:tech_thresh])
print("\n")
print("Fin Class Constraint: \n")
finClassWeight = dot(w, parameters[:fin])
print(finClassWeight, ", ", finClassWeight <= parameters[:fin_thresh])
```
Tech Class Constraint:
0.03999999999938704, true
Fin Class Constraint:
0.4999999999983238, true
Constraints
-----------
New Model
\begin{align}
&\text{minimize} && w^\top\Sigma w \\
&\text{subject to} && \mu^\top w\geq r_{\min} \\
& && \mathbf{1}^\top w = 1 \\
& && w \succeq 0 \\
& && I_{fin}^\top w \leq 0.5 \\
\end{align}
```julia
####################################
# Example 4: Deleting a Constraint #
####################################
# Removing a previously defined constraint
delete!(constraints, :techClassWeightConstraint)
# refer to model definition for keyword arguments, etc.
mvo = SimpleMVO(assets, target_return, constraints; short_sale=false)
w = optimize(mvo, parameters)
```
10-element Array{Float64,1}:
0.00709497
8.75224e-14
0.221814
1.10185e-9
0.235785
1.04763e-13
0.0849516
0.310166
0.066205
0.0739836
```julia
#Display Current Constraint Container
constraints
```
Dict{Symbol,Expr} with 1 entry:
:finClassWeightConstrai… => :(dot(w,fin) <= fin_thresh)
```julia
# Constraint Checks
print("Tech Class Constraint: \n")
techClassWeight = dot(w, parameters[:tech])
print(techClassWeight, ", ", techClassWeight <= parameters[:tech_thresh])
print("\n")
print("Fin Class Constraint: \n")
finClassWeight = dot(w, parameters[:fin])
print(finClassWeight, ", ", finClassWeight <= parameters[:fin_thresh])
```
Tech Class Constraint:
0.5981848381137542, false
Fin Class Constraint:
0.3278316105611047, true
\begin{align}
&\text{minimize} && w^\top\Sigma w \\
&\text{subject to} && \mu^\top w\geq r_{\min} \\
& && \mathbf{1}^\top w = 1 \\
& && w \succeq 0 \\
& && I_{fin}^\top w \leq 0.5 \\
& && {w_i} \leq 0.5 && \forall i \in \{1, 2, 3\} \\
& && {w_i} \leq 0.01 \text{ } && \forall i \in \{4, 5, 6\}\\
\end{align}
```julia
##########################################
# Example 5: Adding multiple Constraints #
##########################################
# Adding a multiple weight constraints
# Adding a simple weight constraint
assetClassThresholds = [0.5, 0.5, 0.5, 0.01, 0.01, 0.01]
assetsWeightConstraints = [symbol("assetWeightConstraint$i") => :(w[$i] <= $(assetClassThresholds[i])) for i=1:6]
# Different constraint sets can be merged to form new ones
new_constraints = merge(constraints, assetsWeightConstraints)
# refer to model definition for keyword arguments, etc.
mvo = SimpleMVO(assets, target_return, new_constraints; short_sale=false)
w = optimize(mvo, parameters)
```
10-element Array{Float64,1}:
0.046634
3.84734e-11
0.191747
0.01
0.01
0.00735835
0.0797681
0.312535
0.145346
0.196611
```julia
print("Fin Class Constraint: \n")
finClassWeight = dot(w, parameters[:fin])
print(finClassWeight, ", ", finClassWeight <= parameters[:fin_thresh])
print("\n")
print("Weights 1 - 3 Constraint: \n")
for i=1:3
print(w[i], ", ", w[i] <= 0.5)
print("\n")
end
print("Weights 4 - 6 Constraint: \n")
for i=4:6
print(w[i], ", ", w[i] <= 0.01)
print("\n")
end
```
Fin Class Constraint:
0.13640211660280685, true
Weights 1 - 3 Constraint:
0.046634024632747455, true
3.847338889024689e-11, true
0.19174749057889884, true
Weights 4 - 6 Constraint:
0.009999999980661997, true
0.009999999994074223, true
0.007358350255935825, true
```julia
#####################################
# Example 6: Using Different Assets #
#####################################
############
# Asset #2 #
############
n = 10 # No. Of Assets
returns_new = rand(n)
covariance_new = let
S = randn(n, n)
S'S + eye(n)
end
names_new = [randstring(3) for i in 1:n]
# Assets data structure containing, names, expected returns, covarariance
assets_new = AssetsCollection(names_new, returns_new, covariance_new)
# Using the same previously defined constraints we can run the model on a different set of assets effortlessly
mvo = SimpleMVO(assets_new, target_return; short_sale=false)
optimize(mvo, parameters)
```
10-element Array{Float64,1}:
1.35231e-11
0.294902
1.10843e-11
1.90023e-8
0.185996
2.80191e-11
0.294275
7.3748e-10
0.0180918
0.206734
```julia
# Forgot what constraints and parameters were defined for initial constraints? No Problem!
constraints # Prints the constraints
```
Dict{Symbol,Expr} with 1 entry:
:finClassWeightConstrai… => :(dot(w,fin) <= fin_thresh)
```julia
parameters # prints the parameters
```
Dict{Symbol,Any} with 4 entries:
:tech => [0,0,1,1,0,1,0,1,1,0]
:tech_thresh => 0.04
:fin_thresh => 0.5
:fin => [1,1,0,0,1,0,1,0,0,0]
```julia
# It's good practice to remove unnecessary parameters
delete!(parameters, :tech)
delete!(parameters, :tech_thresh)
parameters
```
Dict{Symbol,Any} with 2 entries:
:fin_thresh => 0.5
:fin => [1,1,0,0,1,0,1,0,0,0]
## Robust MVO
\begin{align}
&\text{minimize} && w^\top\Sigma w \\
&\text{subject to} && \big\lVert \Theta^{\frac{1}{2}}w \big\rVert \leq \epsilon \\
& && \mu^\top w\geq r_{\min} \\
& && \mathbf{1}^\top w = 1 \\
& && w \succeq 0 \\
\end{align}
```julia
##############################
# Example 7 Using Robust MVO#
##############################
# refer to model definition for keyword arguments, etc
# If no uncertainty matrix is entered the model defaults
# to the ellipse whose axes are proportional to the
# individual variances of each asset
rmvo = RobustMVO(assets, target_return; short_sale=true)
optimize(rmvo, parameters)
```
```julia
#################################################################
# Example 8 Creating Custom Functions (e.g efficient frontier) #
#################################################################
n = 20
variance = Array(Float32,n)
returns = Array(Float32,n)
target_returns = linspace(0,0.4,20)
for i in 1:n
target_ret = target_returns[i]
mvo = SimpleMVO(assets, target_ret; short_sale=true)
w = optimize(mvo, parameters)
variance[i] = mvo.objVal
returns[i] = dot(w, JuPOT.getReturns(assets))
end
```
```julia
variance
```
20-element Array{Float32,1}:
0.806573
0.806573
0.806573
0.806573
0.806573
0.806573
0.806573
0.806573
0.806573
0.808813
0.833964
0.887046
0.968058
1.077
1.21387
1.37868
1.57141
1.79208
2.04067
2.3172
```julia
returns
```
20-element Array{Float32,1}:
0.181042
0.181042
0.181042
0.181042
0.181042
0.181042
0.181042
0.181042
0.181042
0.189474
0.210526
0.231579
0.252632
0.273684
0.294737
0.315789
0.336842
0.357895
0.378947
0.4
```julia
mean(JuPOT.getReturns(assets))
```
0.4841258434570479
```julia
mean(rand(10)*0.4)
```
0.15125274010964593
```julia
```
[dataset metadata]
hexsha: b257d911f33660b521dc7227cf2a36d9f4645826 | size: 24,695 | ext: ipynb | lang: Jupyter Notebook
repo_path: .ipynb_checkpoints/JuPOT_demo_theo_NEW-checkpoint.ipynb | repo_name: 7purplebulls/finance-opt-tools | head_hexsha: d9a1f6a201a5fa3ee1c9cb5e19599e826a2aafbd | licenses: ["MIT"]
stars: 2 (2015-12-28T22:07:29.000Z – 2016-06-01T18:24:06.000Z) | issues: 2 (2016-02-04T02:06:23.000Z – 2016-02-04T02:59:30.000Z) | forks: 2 (2016-01-15T19:33:29.000Z – 2021-09-07T19:50:40.000Z)
avg_line_length: 26.35539 | max_line_length: 237 | alphanum_fraction: 0.435675 | converted: true | num_tokens: 4,649
lm_name: Qwen/Qwen-72B | lm_label: 1. YES, 2. YES | lm_q1_score: 0.927363 | lm_q2_score: 0.831143 | lm_q1q2_score: 0.770772
text_lang: __label__eng_Latn | text_lang_conf: 0.220014 | label: 0.629093
```python
%matplotlib inline
```
Linear Regression with Regularization
=====================================
Regularization is a way to prevent overfitting and allows the model to
generalize better. We'll cover the *Ridge* and *Lasso* regression here.
The Need for Regularization
---------------------------
Unlike polynomial fitting, it's hard to imagine how linear regression
can overfit the data, since it's just a single line (or a hyperplane).
One situation is that features are **correlated** or redundant.
Suppose there are two features, both are exactly the same, our predicted
hyperplane will be in this format:
\begin{align}\hat{y} = w_0 + w_1x_1 + w_2x_2\end{align}
and the true values of $x_2$ is almost the same as $x_1$ (or
with some multiplicative factor and noise). Then, it's best to just drop
$w_2x_2$ term and use:
\begin{align}\hat{y} = w_0 + w_1x_1\end{align}
to fit the data. This is a simpler model.
But we don't know whether $x_1$ and $x_2$ are **actually**
redundant or not, at least not by eye, and we don't want to manually
drop a parameter just because we feel like it. We want the model to learn
to do this itself, that is, to *prefer a simpler model that fits the
data well enough*.
To do this, we add a *penalty term* to our loss function. Two common
penalty terms are L2 and L1 norm of $w$.
L2 and L1 Penalty
-----------------
0. No Penalty (or Linear)
~~~~~~~~~~~~~~~~~~~~~~~~~
This is linear regression without any regularization (from `previous
article </blog_content/linear_regression/linear_regression_tutorial.html#writing-sse-loss-in-matrix-notation>`__):
\begin{align}L(w) = \sum_{i=1}^{n} \left( y^i - wx^i \right)^2\end{align}
1. L2 Penalty (or Ridge)
~~~~~~~~~~~~~~~~~~~~~~~~
We can add the **L2 penalty term** to it, and this is called **L2
regularization**.:
\begin{align}L(w) = \sum_{i=1}^{n} \left( y^i - wx^i \right)^2 + \lambda\sum_{j=0}^{d}w_j^2\end{align}
This is called L2 penalty just because it's a L2-norm of $w$. In
fancy term, this whole loss function is also known as **Ridge
regression**.
Let's see what's going on. Loss function is something we **minimize**.
Any terms that we add to it, we also want it to be minimized (that's why
it's called *penalty term*). The above means we want $w$ that fits
the data well (first term), but we also want the values of $w$ to
be small as possible (second term). The lambda ($\lambda$) is
there to adjust how much to penalize $w$. Note that ``sklearn``
refers to this as alpha ($\alpha$) instead, but whatever.
It's tricky to know the appropriate value for lambda. You just have to
try them out, in exponential range (0.01, 0.1, 1, 10, etc), then select
the one that has the lowest loss on validation set, or doing k-fold
cross validation.
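For example (my own sketch, assuming scikit-learn is available), cross-validated selection of lambda (called alpha there) looks like this:

```python
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.RandomState(0)
X = rng.randn(50, 5)
y = X @ np.array([1.0, 0.0, -2.0, 0.5, 0.0]) + 0.1 * rng.randn(50)

# Try lambdas over an exponential range and keep the best one by 5-fold CV.
model = RidgeCV(alphas=[0.01, 0.1, 1.0, 10.0, 100.0], cv=5).fit(X, y)
print(model.alpha_)
```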
Setting $\lambda$ to be very low means we don't penalize the
complex model much. Setting it to $0$ is the original linear
regression. Setting it high means we strongly prefer simpler model, at
the cost of how well it fits the data.
Closed-form solution of Ridge
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
It's not hard to find a closed-form solution for Ridge, first write the
loss function in matrix notation:
\begin{align}L(w) = {\left\lVert y - Xw \right\rVert}^2 + \lambda{\left\lVert w \right\rVert}_2^2\end{align}
Then the gradient is:
\begin{align}\nabla L_w = -2X^T(y-Xw) + 2\lambda w\end{align}
Setting to zero and solve:
\begin{align}\begin{align}
0 &= -2X^T(y-Xw) + 2\lambda w \\
&= X^T(y-Xw) - \lambda w \\
&= X^Ty - X^TXw - \lambda w \\
&= X^Ty - (X^TX + \lambda I_d) w
\end{align}\end{align}
Move that to other side and we get a closed-form solution:
\begin{align}\begin{align}
(X^TX + \lambda I_d) w &= X^Ty \\
w &= (X^TX + \lambda I_d)^{-1}X^Ty
\end{align}\end{align}
which is almost the same as linear regression without regularization.
2. L1 Penalty (or Lasso)
~~~~~~~~~~~~~~~~~~~~~~~~
As you might guess, you can also use L1-norm for **L1 regularization**:
\begin{align}L(w) = \sum_{i=1}^{n} \left( y^i - wx^i \right)^2 + \lambda\sum_{j=0}^{d}\left|w_j\right|\end{align}
Again, in fancy term, this loss function is also known as **Lasso
regression**. Using matrix notation:
\begin{align}L(w) = {\left\lVert y - Xw \right\rVert}^2 + \lambda{\left\lVert w \right\rVert}_1\end{align}
It's more complex to get a closed-form solution for this, so we'll leave
it here.
Visualizing the Loss Surface with Regularization
------------------------------------------------
Let's see what these penalty terms mean geometrically.
L2 loss surface
~~~~~~~~~~~~~~~
.. figure:: imgs/img_l2_surface.png
:alt: img\_l2\_surface
img\_l2\_surface
This simply follows the 3D equation:
\begin{align}L(w) = {\left\lVert w \right\rVert}_2^2 = w_0^2 + w_1^2\end{align}
The center of the bowl is lowest, since ``w = [0,0]``, but that is not
even a line and it won't predict anything useful.
L2 loss surface under different lambdas
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When you multiply the L2 norm function with lambda,
$L(w) = \lambda(w_0^2 + w_1^2)$, the width of the bowl changes.
The lowest (and flattest) one has a lambda of 0.25, which you can see
penalizes $w$ the least. The two subsequent ones have lambdas of 0.5 and 1.0.
.. figure:: imgs/img_l2_surface_lambdas.png
:alt: img\_l2\_surface\_lambdas
img\_l2\_surface\_lambdas
L1 loss surface
~~~~~~~~~~~~~~~
Below is the loss surface of L1 penalty:
.. figure:: imgs/img_l1_surface.png
:alt: img\_l1\_surface
img\_l1\_surface
Similarly the equation is
$L(w) = \lambda(\left| w_0 \right| + \left| w_1 \right|)$.
Contour of different penalty terms
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If the L2 norm is 1, you get a unit circle ($w_0^2 + w_1^2 = 1$).
In the same manner, you get "unit" shapes in other norms:
.. figure:: imgs/img_penalty_contours.png
:alt: img\_penalty\_contours
img\_penalty\_contours
**When you walk along these lines, you get the same loss, which is 1**
These shapes can hint us different behaviors of each norm, which brings
us to the next question.
Which one to use, L1 or L2?
---------------------------
What's the point of using different penalty terms, when it seems like both
just try to push down the size of $w$?
**Turns out L1 penalty tends to produce sparse solutions**. This means
many entries in $w$ are zeros. This is good if you want the model
to be simple and compact. Why is that?
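Before the geometric argument, here is a small numerical illustration (my own addition, assuming scikit-learn): with mostly irrelevant features, Lasso drives many coefficients exactly to zero while Ridge only shrinks them.

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso

rng = np.random.RandomState(0)
X = rng.randn(200, 20)
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.0, 0.5]          # only the first 3 features matter
y = X @ w_true + 0.1 * rng.randn(200)

ridge = Ridge(alpha=1.0).fit(X, y)
lasso = Lasso(alpha=0.1).fit(X, y)
print("zero coefficients (ridge):", int(np.sum(ridge.coef_ == 0)))  # typically 0
print("zero coefficients (lasso):", int(np.sum(lasso.coef_ == 0)))  # typically most of the 17 irrelevant ones
```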
Geometrical Explanation
~~~~~~~~~~~~~~~~~~~~~~~
*Note: these figures are generated with unusually high lambda to
exaggerate the plot*
First let's bring both linear regression and penalty loss surface
together (left), and recall that we want to find the **minimum loss when
both surfaces are summed up** (right):
.. figure:: imgs/img_ridge_regression.png
:alt: ridge
ridge
Ridge regression is like finding the middle point where the loss of a
sum between linear regression and L2 penalty loss is lowest:
.. figure:: imgs/img_ridge_sol_30.png
:alt: ridge\_solution
ridge\_solution
You can imagine starting with the linear regression solution (red point)
where the loss is the lowest, then you move towards the origin (blue
point), where the penalty loss is lowest. **The more lambda you set, the
more you'll be drawn towards the origin, since you penalize the values
of $w_i$ more** so it wants to get to where they're all zeros:
.. figure:: imgs/img_ridge_sol_60.png
:alt: ridge\_solution
ridge\_solution
Since the loss surfaces of linear regression and L2 norm are both
ellipsoid, the solution found for Ridge regression **tends to be
directly between both solutions**. Notice how the summed ellipsoid is
still right in the middle.
--------------
For Lasso:
.. figure:: imgs/img_lasso_regression.png
:alt: lasso
lasso
And this is the Lasso solution for lambda = 30 and 60:
.. figure:: imgs/img_lasso_sol_30.png
:alt: lasso\_solution
lasso\_solution
.. figure:: imgs/img_lasso_sol_60.png
:alt: lasso\_solution
lasso\_solution
Notice that the ellipsoid of linear regression **approaches, and finally
hits a corner of L1 loss**, and will always stay at that corner. What
does a corner of the L1 norm mean in this situation? It means
$w_1 = 0$.
Again, this is because the contour lines **at the same loss value** of
L2 norm reaches out much farther than L1 norm:
.. figure:: imgs/img_l1_vs_l2_contour.png
:alt: img\_l1\_vs\_l2\_contour
img\_l1\_vs\_l2\_contour
If the linear regression finds an optimal contact point along the L2
circle, then it will stop since there's no use to move sideways where
the loss is usually higher. However, with an L1 penalty, the solution can drift
toward a corner, because it's **the same loss along the line** anyway (I
mean, why not?), and so that opportunity is exploited whenever it arises.
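To see the sparsity claim above concretely, here is a small scikit-learn sketch. The synthetic data and the specific ``alpha`` values are assumptions chosen only for illustration:

.. code:: python

    import numpy as np
    from sklearn.linear_model import Ridge, Lasso

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 10))
    # only 2 of the 10 features actually matter
    y = 3 * X[:, 0] - 2 * X[:, 1] + 0.1 * rng.normal(size=100)

    ridge = Ridge(alpha=1.0).fit(X, y)
    lasso = Lasso(alpha=0.1).fit(X, y)

    print("Ridge coefficients:", np.round(ridge.coef_, 3))  # small but mostly nonzero
    print("Lasso coefficients:", np.round(lasso.coef_, 3))  # many exactly zero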
|
b0844ec213406eb55e71db392474e87aa4a8eda4
| 10,869 |
ipynb
|
Jupyter Notebook
|
docs/html/_downloads/b46a4fb45981c08755e62c4ed9308063/linear_regression_regularized_tutorial.ipynb
|
aunnnn/ml-tutorial
|
b40a6fb04dd4dc560f87486f464b292d84f02fdf
|
[
"MIT"
] | null | null | null |
docs/html/_downloads/b46a4fb45981c08755e62c4ed9308063/linear_regression_regularized_tutorial.ipynb
|
aunnnn/ml-tutorial
|
b40a6fb04dd4dc560f87486f464b292d84f02fdf
|
[
"MIT"
] | null | null | null |
docs/html/_downloads/b46a4fb45981c08755e62c4ed9308063/linear_regression_regularized_tutorial.ipynb
|
aunnnn/ml-tutorial
|
b40a6fb04dd4dc560f87486f464b292d84f02fdf
|
[
"MIT"
] | null | null | null | 102.537736 | 2,824 | 0.641917 | true | 2,505 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.757794 | 0.831143 | 0.629836 |
__label__eng_Latn
| 0.995828 | 0.30165 |
## Equations
Find the derivative with respect to x of the following functions. **Additionally, find the derivative with respect to a of function h.**
Provide your answers on this sheet. Where you took more than one step to arrive at the final result, please include the important steps. You may fill out the sheet by hand and submit a scanned version.
$
\begin{align}
f(x) = 3x^2
\end{align}
$
$
\begin{align}
g(x) = (x+8)^2
\end{align}
$
$
\begin{align}
h(x) = ax^3+\frac{1}{2}x^8
\end{align}
$
$
\begin{align}
k(x) = x^{501}+3x^7 - \frac{1}{2}x^6+x^5+2x^3+3x^2-1 % Note 501 is between {} to display correctly
\end{align}
$
## Solutions
#### df/dx
$
\begin{align}
\frac{df}{dx} &= \frac{d}{dx}(3x^2) &\\
&= 3 \cdot 2x^{2-1} &\\
&= 3 \cdot 2x &\\
&= 6x
\end{align}
$
#### dg/dx
$
\begin{align}
\frac{dg}{dx} &= \frac{d}{dx} (x+8)^2 &\\
&= \frac{d}{dx} (x^2+16x+64) &\\
&= (2x^{2-1}+16x^0+0) &\\
&= (2x+16) &\\
&= 2(x+8)
\end{align}
$
#### dk/dx
$
\begin{align}
\frac{dk}{dx} &= \frac{d}{dx} (x^{501}+3x^7 - \frac{1}{2}x^6+x^5+2x^3+3x^2-1) &\\
&= (501x^{501-1})+3(7x^{7-1}) - \frac{1}{2}(6x^{6-1})+(5x^{5-1})+2(3x^{3-1})+3(2x^{2-1})-0 &\\
&= 501x^{500} + 3 \cdot 7x^6 - \frac{1}{2} \cdot 6x^5 + 5x^4 + 2 \cdot 3x^2 + 3 \cdot 2x &\\
&= 501x^{500} + 21x^6 - 3x^5 + 5x^4 + 6x^2 + 6x
\end{align}
$
#### ∂h/∂x
$
\begin{align}
\frac{\partial h}{\partial x} &= \frac{\partial}{\partial x} (ax^3+\frac{1}{2}x^8) &\\
&= a(3x^2)+\frac{1}{2}(8x^7) &\\
&= 3ax^2+4x^7
\end{align}
$
#### ∂h/∂a
$
\begin{align}
\frac{\partial h}{\partial a} &= \frac{\partial}{\partial a} (ax^3+\frac{1}{2}x^8) &\\
&= (x^3+0) &\\
&= x^3
\end{align}
$
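As an optional sanity check (not part of the original sheet), the results above can be verified with SymPy. This is a hedged sketch assuming SymPy is available:

```python
import sympy as sp

x, a = sp.symbols('x a')

f = 3 * x**2
g = (x + 8)**2
h = a * x**3 + sp.Rational(1, 2) * x**8
k = x**501 + 3*x**7 - sp.Rational(1, 2)*x**6 + x**5 + 2*x**3 + 3*x**2 - 1

print(sp.diff(f, x))                # 6*x
print(sp.expand(sp.diff(g, x)))     # 2*x + 16
print(sp.diff(k, x))                # 501*x**500 + 21*x**6 - 3*x**5 + 5*x**4 + 6*x**2 + 6*x
print(sp.diff(h, x))                # 3*a*x**2 + 4*x**7  (partial derivative w.r.t. x)
print(sp.diff(h, a))                # x**3               (partial derivative w.r.t. a)
```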
|
b254e0afdd86fb1ac0e6de880036b6fbebbb3cf2
| 3,358 |
ipynb
|
Jupyter Notebook
|
study-guides/Certificate-in-AI/Derivatives.ipynb
|
zlig/masters-thesis-ai
|
0dbf144b1ed9a93730952de0ffd8eb7157154960
|
[
"Apache-2.0"
] | null | null | null |
study-guides/Certificate-in-AI/Derivatives.ipynb
|
zlig/masters-thesis-ai
|
0dbf144b1ed9a93730952de0ffd8eb7157154960
|
[
"Apache-2.0"
] | null | null | null |
study-guides/Certificate-in-AI/Derivatives.ipynb
|
zlig/masters-thesis-ai
|
0dbf144b1ed9a93730952de0ffd8eb7157154960
|
[
"Apache-2.0"
] | null | null | null | 23 | 210 | 0.422275 | true | 818 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.90053 | 0.870597 | 0.783999 |
__label__eng_Latn
| 0.420205 | 0.659824 |
# K-Nearest Neighbors Algorithm
* Last class, we introduced the probabilistic generative classifier.
* As discussed, the probabilistic generative classifier requires us to assume a parametric form for each class (e.g., each class is represented by a multi-variate Gaussian distribution, etc..). Because of this, the probabilistic generative classifier is a *parametric* approach
* Parametric approaches have the drawback that the functional parametric form needs to be decided/assumed in advance and, if chosen poorly, might be a poor model of the distribution that generates the data resulting in poor performance.
* Non-parametric methods are those that do not assume a particular generating distribution for the data. The $K$-nearest neighbors algorithm is one example of a non-parametric classifier.
* Nearest neighbor methods compare test point to the $k$ nearest training data points and then estimate an output value based on the desired/true output values of the $k$ nearest training points
* Essentially, there is no ``training'' other than storing the training data points and their desired outputs
* In test, you need to: (1) determine which $k$ training data points are closest to the test point; and (2) determine the output value for the test point
* In order to find the $k$ nearest neighbors in the training data, you need to define a *similarity measure* or a *dissimilarity measure*. The most common dissimilarity measure is Euclidean distance.
* Euclidean distance: $d_E = \sqrt{\left(\mathbf{x}_1-\mathbf{x}_2\right)^T\left(\mathbf{x}_1-\mathbf{x}_2\right)}$
* City block distance: $d_C = \sum_{i=1}^d \left| x_{1i} - x_{2i} \right|$
* Mahalanobis distance: $\left(\mathbf{x}_1-\mathbf{x}_2\right)^T\Sigma^{-1}\left(\mathbf{x}_1-\mathbf{x}_2\right)$
* Geodesic distance
* Cosine angle similarity: $\cos \theta = \frac{\mathbf{x}_1^T\mathbf{x}_2}{\left\|\mathbf{x}_1\right\|_2\left\|\mathbf{x}_2\right\|_2}$
* and many more... (a short code sketch of a few of these measures follows this list)
* If you are doing classification, once you find the $k$ nearest neighbors to your test point in the training data, then you can determine the class label of your test point using (most commonly) *majority vote*
* If there are ties, they can be broken randomly or using schemes like applying the label of the closest data point in the neighborhood
* Of course, there are MANY modifications to you can make to this. A common one is to weight the votes of each of the nearest neighbors by their distance/similarity measure value. If they are closer, they get more weight.
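Here is that minimal NumPy sketch of a few of the (dis)similarity measures listed above. The vectors and the covariance matrix `Sigma` are made-up placeholders; in practice `Sigma` would be estimated from the training data:

```python
import numpy as np

x1 = np.array([1.0, 2.0, 3.0])
x2 = np.array([2.0, 0.0, 1.0])
Sigma = np.eye(3)  # placeholder covariance; with the identity this reduces to squared Euclidean distance

d_euclid = np.sqrt(np.sum((x1 - x2) ** 2))                  # Euclidean distance
d_city = np.sum(np.abs(x1 - x2))                            # city block distance
d_mahal = (x1 - x2) @ np.linalg.inv(Sigma) @ (x1 - x2)      # Mahalanobis distance (as defined above)
cos_sim = (x1 @ x2) / (np.linalg.norm(x1) * np.linalg.norm(x2))  # cosine angle similarity
```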
```python
import numpy as np
from matplotlib import pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_moons, make_circles, make_classification
from sklearn import neighbors
%matplotlib inline
#figure params
h = .02 # step size in the mesh
figure = plt.figure(figsize=(17, 9))
#set up classifiers
n_neighbors = 3
classifiers = []
classifiers.append(neighbors.KNeighborsClassifier(n_neighbors, weights='uniform'))
classifiers.append(neighbors.KNeighborsClassifier(n_neighbors, weights='distance'))
names = ['K-NN_Uniform', 'K-NN_Weighted']
#Put together datasets
n_samples = 300
X, y = make_classification(n_samples, n_features=2, n_redundant=0, n_informative=2,
random_state=0, n_clusters_per_class=1)
rng = np.random.RandomState(2)
X += 2 * rng.uniform(size=X.shape)
linearly_separable = (X, y)
datasets = [make_moons(n_samples, noise=0.3, random_state=0),
make_circles(n_samples, noise=0.2, factor=0.5, random_state=1),
linearly_separable]
i = 1
# iterate over datasets
for X, y in datasets:
    # preprocess dataset, split into training and test part
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.8) #split into train/test folds
    #set up meshgrid for figure
    x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
    y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
    xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
    # just plot the dataset first
    cm = plt.cm.RdBu
    cm_bright = ListedColormap(['#FF0000', '#0000FF'])
    ax = plt.subplot(len(datasets), len(classifiers) + 1, i)
    # Plot the training points
    ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright)
    # and testing points
    ax.scatter(X_test[:, 0], X_test[:, 1], marker='+', c=y_test, cmap=cm_bright, alpha=0.6)
    ax.set_xlim(xx.min(), xx.max())
    ax.set_ylim(yy.min(), yy.max())
    ax.set_xticks(())
    ax.set_yticks(())
    i += 1
    # iterate over classifiers
    for name, clf in zip(names, classifiers):
        ax = plt.subplot(len(datasets), len(classifiers) + 1, i)
        clf.fit(X_train, y_train)
        score = clf.score(X_test, y_test)
        # Plot the decision boundary. For that, we will assign a color to each
        # point in the mesh [x_min, x_max]x[y_min, y_max].
        Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
        # Put the result into a color plot
        Z = Z.reshape(xx.shape)
        ax.contourf(xx, yy, Z, cmap=cm, alpha=.8)
        # Plot also the training points
        ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright)
        # and testing points
        ax.scatter(X_test[:, 0], X_test[:, 1], marker='+', c=y_test, cmap=cm_bright,
                   alpha=0.4)
        ax.set_xlim(xx.min(), xx.max())
        ax.set_ylim(yy.min(), yy.max())
        ax.set_xticks(())
        ax.set_yticks(())
        ax.set_title(name)
        ax.text(xx.max() - .3, yy.min() + .3, ('%.2f' % score).lstrip('0'),
                size=15, horizontalalignment='right')
        i += 1
figure.subplots_adjust(left=.02, right=.98)
plt.show()
```
# Error and Evaluation Metrics
* A key step in machine learning algorithm development and testing is determining a good error and evaluation metric.
* Evaluation metrics help us to estimate how well our model is trained and it is important to pick a metric that matches our overall goal for the system.
* Some common evaluation metrics include precision, recall, receiver operating curves, and confusion matrices.
### Classification Accuracy and Error
* Classification accuracy is defined as the number of correctly classified samples divided by all samples:
\begin{equation}
\text{accuracy} = \frac{N_{cor}}{N}
\end{equation}
where $N_{cor}$ is the number of correct classified samples and $N$ is the total number of samples.
* Classification error is defined as the number of incorrectly classified samples divided by all samples:
\begin{equation}
\text{error} = \frac{N_{mis}}{N}
\end{equation}
where $N_{mis}$ is the number of misclassified samples and $N$ is the total number of samples.
* Suppose there is a 3-class classification problem, in which we would like to classify each training sample (a fish) to one of the three classes (A = salmon or B = sea bass or C = cod).
* Let's assume there are 150 samples, including 50 salmon, 50 sea bass and 50 cod. Suppose our model misclassifies 3 salmon, 2 sea bass and 4 cod.
* Prediction accuracy of our classification model is calculated as:
\begin{equation}
\text{accuracy} = \frac{47+48+46}{50+50+50} = \frac{47}{50}
\end{equation}
* Prediction error is calculated as:
\begin{equation}
\text{error} = \frac{N_{mis}}{N} = \frac{3+2+4}{50+50+50} = \frac{3}{50}
\end{equation}
### Confusion Matrices
* A confusion matrix summarizes the classification accuracy across several classes. It shows the ways in which our classification model is confused when it makes predictions, allowing visualization of the performance of our algorithm. Generally, each row represents the instances of an actual class while each column represents the instances of a predicted class.
* If our classifier is trained to distinguish between salmon, sea bass and cod. We can summarize the prediction result in the confusion matrix as follows:
| Actual/Predicted | Salmon | Sea bass | Cod |
| --- | --- | --- | --- |
| Salmon | 47 | 2 | 1 |
| Sea Bass | 2 | 48 | 0 |
| Cod | 0 | 0 | 50 |
* In this confusion matrix, of the 50 actual salmon, the classifier predicted that 2 are sea bass, 1 is cod incorrectly and 47 are labeled salmon correctly. All correct predictions are located in the diagonal of the table. So it is easy to visually inspect the table for prediction errors, as they will be represented by values outside the diagonal.
### TP, FP, TN, and FN
* True positive (TP): correctly predicting event values
* False positive (FP): incorrectly calling non-events as an event
* True negative (TN): correctly predicting non-event values
* False negative (FN): incorrectly labeling events as non-event
* Precision is also called positive predictive value.
\begin{equation}
\text{Precision} = \frac{\text{TP}}{\text{TP}+\text{FP}}
\end{equation}
* Recall is also called true positive rate, probability of detection
\begin{equation}
\text{Recall} = \frac{\text{TP}}{\text{TP}+\text{FN}}
\end{equation}
* Fall-out is also called false positive rate, probability of false alarm.
\begin{equation}
\text{Fall-out} = \frac{\text{FP}}{\text{N}}= \frac{\text{FP}}{\text{FP}+\text{TN}}
\end{equation}
* *Consider the salmon/non-salmon classification problem, what are the TP, FP, TN, FN values?*
| Actual/Predicted | Salmon | Non-Salmon |
| --- | --- | --- |
| Salmon | 47 | 3 |
| Non-Salmon | 2 | 98 |
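One way to check your answer is with a small NumPy sketch using the salmon/non-salmon table above, treating "salmon" as the positive class (the variable names are just for illustration):

```python
import numpy as np

# rows = actual (salmon, non-salmon), columns = predicted (salmon, non-salmon)
conf = np.array([[47, 3],
                 [2, 98]])

TP, FN = conf[0, 0], conf[0, 1]
FP, TN = conf[1, 0], conf[1, 1]

precision = TP / (TP + FP)   # 47 / 49
recall = TP / (TP + FN)      # 47 / 50
fallout = FP / (FP + TN)     # 2 / 100
print(precision, recall, fallout)
```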
### ROC curves
* The Receiver Operating Characteristic (ROC) curve is a plot of the true positive rate (TPR) against the false positive rate (FPR), where the FPR is plotted on the $x$-axis and the TPR on the $y$-axis.
* $TPR = TP/(TP+FN)$ is defined as ratio between true positive prediction and all real positive samples. The definition used for $FPR$ in a ROC curve is often problem dependent. For example, for detection of targets in an area, FPR may be defined as the ratio between the number of false alarms per unit area ($FA/m^2$). In another example, if you have a set number of images and you are looking for targets in these collection of images, FPR may be defined as the number of false alarms per image. In some cases, it may make the most sense to simply use the Fall-out or false positive rate.
* Given a binary classifier and its threshold, the (x,y) coordinates of ROC space can be calculated from all the prediction result. You trace out a ROC curve by varying the threshold to get all of the points on the ROC.
* The diagonal between (0,0) and (1,1) separates the ROC space into two areas, which are left up area and right bottom area. The points above the diagonal represent good classification (better than random guess) which below the diagonal represent bad classification (worse than random guess).
* *What is the perfect prediction point in a ROC curve?*
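Separately, a minimal scikit-learn sketch of tracing out a ROC curve by sweeping the decision threshold; the labels and scores below are made up only for illustration:

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

y_true = np.array([0, 0, 1, 1, 0, 1, 1, 0])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.55, 0.45])  # classifier scores

fpr, tpr, thresholds = roc_curve(y_true, y_score)  # one (FPR, TPR) pair per threshold
print(fpr, tpr)
print("AUC =", auc(fpr, tpr))
```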
### MSE and MAE
* *Mean Square Error* (MSE) is the average of the squared error between prediction and actual observation.
* For each sample $\mathbf{x}_i$, the prediction value is $y_i$ and the actual output is $d_i$. The MSE is
\begin{equation}
MSE = \sum_{i=1}^n \frac{(d_i - y_i)^2}{n}
\end{equation}
* *Root Mean Square Error* (RMSE) is simply the square root the MSE.
\begin{equation}
RMSE = \sqrt{MSE}
\end{equation}
* *Mean Absolute Error* (MAE) is the average of the absolute error.
\begin{equation}
MAE = \frac{1}{n} \sum_{i=1}^n \lvert d_i - y_i \rvert
\end{equation}
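A short NumPy sketch of these regression metrics, with made-up predictions `y` and actual outputs `d`:

```python
import numpy as np

d = np.array([3.0, -0.5, 2.0, 7.0])   # actual outputs
y = np.array([2.5,  0.0, 2.0, 8.0])   # predictions

mse = np.mean((d - y) ** 2)
rmse = np.sqrt(mse)
mae = np.mean(np.abs(d - y))
print(mse, rmse, mae)
```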
```python
```
|
eaaabc75bbfbfc14264c4c6428daf0ce638cf4d1
| 192,165 |
ipynb
|
Jupyter Notebook
|
Lecture07_KNN/Lecture 07 Non-parametric K-Nearest Neighbors Algorithm.ipynb
|
Michael-Monaldi/LectureNotes
|
3afc1b4473aa91297ac4cf515a77a578547cca5f
|
[
"MIT"
] | null | null | null |
Lecture07_KNN/Lecture 07 Non-parametric K-Nearest Neighbors Algorithm.ipynb
|
Michael-Monaldi/LectureNotes
|
3afc1b4473aa91297ac4cf515a77a578547cca5f
|
[
"MIT"
] | null | null | null |
Lecture07_KNN/Lecture 07 Non-parametric K-Nearest Neighbors Algorithm.ipynb
|
Michael-Monaldi/LectureNotes
|
3afc1b4473aa91297ac4cf515a77a578547cca5f
|
[
"MIT"
] | null | null | null | 568.535503 | 177,028 | 0.937476 | true | 3,039 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.94079 | 0.800692 | 0.753283 |
__label__eng_Latn
| 0.991677 | 0.58846 |
# PharmSci 175/275 (UCI)
## What is this??
The material below is Lecture 2 (on Energy Minimization) from Drug Discovery Computing Techniques, PharmSci 175/275 at UC Irvine.
Extensive materials for this course, as well as extensive background and related materials, are available on the course GitHub repository: [github.com/mobleylab/drug-computing](https://github.com/mobleylab/drug-computing)
This material is a set of slides intended for presentation with RISE as detailed [in the course materials on GitHub](https://github.com/MobleyLab/drug-computing/tree/master/uci-pharmsci/lectures/energy_minimization). While it may be useful without RISE, it will also likely appear somewhat less verbose than it would if it were intended for use in written form.
# Energy landscapes and energy minimization
Today: Energy landscapes, energy functions and energy mimimization
### Instructor: David L. Mobley
### Contributors to today's materials:
- David L. Mobley (UCI)
- David Wych (Mobley lab, UCI)
- [Previous contributions](https://engineering.ucsb.edu/~shell/che210d/) from M. Scott Shell (UCSB); some images used here are drawn from his materials.
## Energy landscapes provide a useful conceptual device
### Energy landscapes govern conformations and flexibility
### Chemistry takes place on energy landscapes
<div style="float: right">
</div>
- Reactions and barriers
- Conformational change
- Binding/association
### Energy landscapes govern dynamics and thermodynamics
<div style="float: right">
</div>
- Reaction rates
- Equilibrium properties
## Often, we are interested in exploring the energy landscape
A potential, $U(\bf{r^N})$ describes the energy as a function of the coordinates of the particles; here ${\bf r}$ is the coordinates, **boldface** denotes it is a vector, and the superscript denotes it is coordinates of all $N$ particles in the system.
<div style="float: right">
</div>
### Landscapes have many features, including
- Global minimum, the most stable state (caveat: entropy)
- Local minima: other stable/metastable states
<div style="float: right">
</div>
### We can't visualize 3N dimensions, so we often project onto fewer dimensions
<div style="float: right">
</div>
### Background: Vector notation
- For a single particle, we have coordinates $x$, $y$, and $z$, or $x_1$, $y_1$, and $z_1$ if it is particle 1
- We might write these as $(-1, 3, 2)$ for example, if $x=-1$, $y=3$, $z=2$.
- For even two particles, we have $x_1$ and $x_2$, $y_1$ and $y_2$, etc.
- Writing out names of coordinates becomes slow, e.g. $f(x_1, y_1, z_1, x_2, y_2, z_2, ... z_N)$
- We simplify by writing $f({\bf r}^N)$ and remember that ${\bf r}^N$ really means:
\begin{equation}
{\bf r}^N=
\begin{bmatrix}
x_1 & y_1 & z_1 \\
x_2 & y_2 & z_2 \\
... & ... & ... \\
x_N & y_N & z_N \\
\end{bmatrix}
\end{equation}
### A concrete example
Imagine we have two particles with particle 1 having coordinates $x_1 = 1$, $y_1 = 3$, and $z_1 = 2$, and particle 2 having coordinates $x_2 = -1$, $y_2 = 3$, $z_2 = -1$. That would give us an array like this:
\begin{equation}
{\bf r}^N=
\begin{bmatrix}
1 & 3 & 2 \\
-1 & 3 & -1 \\
\end{bmatrix}
\end{equation}
### In Python, we'd store that as a numpy array
```python
import numpy as np
r_N = np.array( [[1, 3, 2], [-1, 3, -1]], float)
print('y coordinate of first particle is %.2f' % r_N[0,1])
```
y coordinate of first particle is 3.00
We could compute the distance between particles as
$d = \sqrt{(x_1-x_2)^2 + (y_1-y_2)^2 + (z_1-z_2)^2}$
You could code that up directly in Python, or you can use array operations:
```python
d = np.sqrt( ((r_N[0,:]-r_N[1,:])**2).sum() )
print(d)
```
3.60555127546
### Let's let this sink in for a minute
We're using the notation ${\bf r}^N$ as shorthand to refer to the x, y, and z positions of all of the particles in a system. This is actually an array, or a matrix, of particle coordinates:
\begin{equation}
{\bf r}^N=
\begin{bmatrix}
1 & 3 & 2 \\
-1 & 3 & -1 \\
\end{bmatrix}
\end{equation}
Each row of that matrix has the x, y, and z coordinates of an atom in the system. **We will use this concept heavily in the Energy Minimization assignment.**
## Forces are properties of energy landscapes, too
The force is the slope (technically, gradient):
$f_{x,i} = -\frac{\partial U({\bf r^N})}{\partial x_i}$, $f_{y,i} = -\frac{\partial U({\bf r^N})}{\partial y_i}$, $f_{z,i} = -\frac{\partial U({\bf r^N})}{\partial z_i}$
As shorthand, this may be written ${\bf f}^N = -\frac{\partial U({\bf r^N})}{\partial {\bf r^N}}$ or ${\bf f}^N = -\nabla \cdot U({\bf r^N})$ where the result, ${\bf f}^N$, is an Nx3 array (matrix, if you prefer)
If energy function is pairwise additive, can evaluate via summing individual interactions -- force on atom k is
\begin{equation}
{\bf f}_k = \sum_{j\neq k} \frac{ {\bf r}_{kj}}{r_{kj}} \frac{\partial}{\partial r_{kj}} U(r_{kj})
\end{equation} where ${\bf r_{kj}} = {\bf r}_j - {\bf r_k}$. Note not all force calculations are necessary: ${\bf f}_{kj} = -{\bf f}_{jk}$
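A minimal NumPy sketch of evaluating pairwise forces this way. The Lennard-Jones derivative and the positions below are illustrative assumptions in reduced units; in the assignment the actual potential and its derivative will be specified:

```python
import numpy as np

def pair_forces(pos, dU_dr):
    """Forces for a pairwise potential U(r); dU_dr(r) returns dU/dr.

    Implements f_k = sum_{j != k} (r_kj / |r_kj|) * dU/dr at |r_kj|,
    with r_kj = r_j - r_k, following the convention above.
    """
    N = pos.shape[0]
    forces = np.zeros_like(pos)
    for k in range(N):
        for j in range(k + 1, N):
            rkj = pos[j] - pos[k]
            dist = np.linalg.norm(rkj)
            fk = (rkj / dist) * dU_dr(dist)
            forces[k] += fk
            forces[j] -= fk   # not all force calculations are necessary: f_jk = -f_kj
    return forces

# example: Lennard-Jones pair potential derivative (reduced units, illustrative)
dU_LJ = lambda r: 4 * (-12 * r**-13 + 6 * r**-7)
pos = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0], [0.0, 1.1, 0.0]])
F = pair_forces(pos, dU_LJ)
```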
## The matrix of second derivatives is called the Hessian and distinguishes minima from saddle points
\begin{equation}
{\bf H}({\bf r}^N)=
\begin{bmatrix}
\frac{d^2 U({\bf r^N}) }{ d x_1^2} & \frac{d^2 U({\bf r^N}) }{ dx_1 d y_1} & ... & \frac{d^2 U({\bf r^N}) }{ dx_1 d z_1} \\
\frac{d^2 U({\bf r^N}) }{ d y_1 d x_1} & \frac{d^2 U({\bf r^N}) }{ d y_1^2} & ... & \frac{d^2 U({\bf r^N}) }{ dy_1 d z_N}\\
... & ... & ... & ... \\
\frac{d^2 U({\bf r^N}) }{ d z_N d x_1} & \frac{d^2 U({\bf r^N}) }{ d z_N d y_1} & ... & \frac{d^2 U({\bf r^N}) }{ dz_N^2}\\
\end{bmatrix}
\end{equation}
## Types of stationary points can be distiguished from derivatives
- Stationary points have zero force on each particle: $ \nabla \cdot U({\bf r^N}) = {\bf 0}$
- These can be minima or maxima
- Minima have negative curvature in all directions (restoring force is towards the minimum)
## Energy landscapes have *lots* of minima
For a Lennard-Jones 38 cluster:
<div style="float: right">
</div>
See also Doye, Miller, and Wales, J. Chem. Phys. 111: 8417 (1999)
- They have a disconnectivity graph that shows the minima for 13 LJ atoms
- To move between two minima, you have to go up to the point where the lines from the two minima reach the same energy
- 1467 distinct minima for 13 atoms!
- This is a different system, more atoms, far more minima
Note related ["Python energy landscape explorer" (PELE)](https://github.com/pele-python/pele)
(Image source https://pele-python.github.io/pele/disconnectivity_graph.html, CC-BY license, by the Pele authors: https://github.com/pele-python/pele/graphs/contributors)
Here's a bonus disconnectivity graph:
<div style="float: right">
</div>
(Image source https://commons.wikimedia.org/wiki/File:Fedg.png#filelinks, by Snd0, CC-BY-SA 4.0)
## We care a lot about finding minima
- As noted, gives a first guess about most stable states
- Minima are stable structures, point of contact with experiment
- Initial structures need to be relaxed so forces are not too large
- Remove strained bonds, atom overlaps
- Minimization is really optimization:
- If you can find the minimum of $U$, you can find the minimum of $-U$
- Same techniques apply to other things, i.e. finding set of parameters that minimizes an error, etc.
### Finding minima often becomes a numerical task, because analytical solutions become impractical very quickly
Consider $U(x,y) = x^2 + (y-1)^2$ for a single particle in a two dimensional potential. Finding $\nabla\cdot U = 0$ yields:
$2x = 0$ and $2(y-1)=0$ or $x=0$, $y=1$
simple enough
But in general, N dimensions means N coupled, potentially nonlinear equations. Consider
\begin{equation}
U = x^2 z^2 +x (y-1)^2 + xyz + 14y + z^3
\end{equation}
Setting the derivatives to zero yields:
$0 = 2xz^2 + (y-1)^2 +yz$
$0 = 2x(y-1) + xz + 14$
$0 = 2x^2z + xy + 3z^2$
**Volunteers??** It can be solved, but not fun.
**And this is just for a single particle in a 3D potential**, so we are typically forced to use numerical minimization, even when the potential is analytic
### Energy minimization is a sub-class of the more general problem of finding roots
Common problem: For some $f(x)$, find values of $x$ for which $f(x)=0$
Many equations can be re-cast this way. *i.e.*, if you need to solve $g(x) = 3$, define $f(x) = g(x)-3$ and find $x$ such that $f(x)=0$
If $f(x)$ happens to be the force, this maps to energy minimization
As a consequence: Algorithms used for energy minimization typically have broader application to finding roots
### Let's check out a toy minimization problem to see how this would work
Here we'll set up a simple function to represent an energy landscape in 1D, and play with energy minimizing on that landscape.
```python
#Import pylab library we'll use
import scipy.optimize
#Get pylab ready for plotting in this notebook
%pylab inline
#Define a range of x values to look at in your plot
xlower = -5 #Start search at xlower
xupper = 5 #End search at xupper
#Define a starting guess for the location of the minimum
xstart = 0.01
#Create an array of x values for our plot, starting with xlower
#and running to xupper with step size 0.01
xvals = np.arange( xlower, xupper, 0.01)
```
Populating the interactive namespace from numpy and matplotlib
```python
#Define the function f we want to minimize
def f(x):
    return 10*np.cos(x)+x**2-3.
#Store function values at those x values for our plot
fvals = f(xvals)
#Do our minimization ("line search" of sorts), store results to 'res'
res = scipy.optimize.minimize(f, xstart) # Apply canned minimization algorithm from scipy
```
```python
#Make a plot of our function over the specified range
plot(xvals, fvals, 'k-') #Use a black line to show the function
plot(res.x, f(res.x), 'bo') #Add the identified minimum to the plot as a blue circle
plot(xstart, f(xstart), 'ro') #Add starting point as red circle
#Add axis labels
xlabel('x')
ylabel('f(x)')
```
### Sandbox section
Try adjusting the above to explore what happens if you alter the starting conditions or the energy landscape or both. You might try:
- Change the starting point (`xstart`) so it is slightly to the left or slightly to the right
- Change the starting point so it is far up the wall to the left or the right
- Change the energy landscape to alter its shape, perhaps adding a term proportional to `+x`. Can you make it so one of the wells is a local minimum, perhaps by altering the coefficient of this term?
- If you adjust the starting point further, can you make the blue ball get stuck in a local minimum? Can you make it so it still finds the global minimum?
## Steepest descents is a simple minimization algorithm that always steps as far as possible along the direction of the force
Take ${\bf f}^N = -\frac{\partial U({\bf r}^N)}{\partial {\bf r}^N}$, then:
1. Move in direction of last force until the minimum *in that direction* is found
2. Compute new ${\bf f}_i^N $ for iteration $i$, perpendicular to previous force
<div style="float: right">
</div>
Repeat until minimum is found
**Limitations**:
Oscillates in narrow valleys; slow near minimum.
### Reminder: Here we're using vector notation for forces and positions
${\bf f}^N$ is the force on all of the atoms, as an array, where each row of the array is the force vector on that atom.
$U({\bf r}^N)$ is the potential energy, as a function of the positions of all of the atoms.
These use the same vector and array notation we introduced above for ${\bf r}^N$.
<div style="float: right">
</div>
### Steepest descents oscillates in narrow valleys and is slow near the minimum
<div style="float: center">
</div>
(Illustration, P.A. Simonescu, [Wikipedia](https://en.wikipedia.org/wiki/Gradient_descent#/media/File:Banana-SteepDesc.gif), [CC-BY-SA](https://creativecommons.org/licenses/by-sa/3.0/))
### Another illustration further highlights this
(In this case, steepest *ascents*, but it's just the negative...)
<div style="float: left">
</div>
<div style="float: right">
</div>
(Images public domain: [source](https://upload.wikimedia.org/wikipedia/commons/d/db/Gradient_ascent_%28contour%29.png), [source](https://upload.wikimedia.org/wikipedia/commons/6/68/Gradient_ascent_%28surface%29.png))
## A line search can make many minimization methods more efficient
A line search is an efficient way to find a minimum along a particular direction
- Line search: Bracket minimum
- Start with initial set of coordinates ${\bf r}$ and search direction ${\bf v}$ that is downhill
- Generate pairs of points a distance $d$ and $2d$ along the line (${\bf r} + d{\bf v}, {\bf r}+2d{\bf v}$)
1. If the energy at the further point is higher than the energy at the nearer point, stop
2. Otherwise, move the pair of points $d$ further along the line and go back to 1.
### To finish a line search, identify the minimum precisely
- Fit a quadratic to our 3 points (initial, and two bracket points)
- Guess that the minimum is at the minimum of the fitted quadratic (where its derivative is zero); call this point 4.
- Fit a new quadratic using points 2, 3, and 4, and move to its minimum.
- Repeat until the energy stops changing within a given tolerance (a rough code sketch of steepest descent with a simple line search follows below)
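Here is that rough sketch: steepest descent with a deliberately crude line search, for intuition only. The names `U`, `gradU`, and the toy quadratic are assumptions for illustration, not part of the lecture code:

```python
import numpy as np

def steepest_descent(U, gradU, x0, tol=1e-6, max_iter=500, d0=0.1):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = gradU(x)
        if np.linalg.norm(g) < tol:
            break
        v = -g / np.linalg.norm(g)            # steepest (downhill) direction
        d = d0
        while U(x + d * v) >= U(x) and d > 1e-12:
            d *= 0.5                          # back off if even the first step goes uphill
        while U(x + 2 * d * v) < U(x + d * v):
            d *= 2.0                          # bracket the 1D minimum by doubling the step
        x = x + d * v
    return x

# toy example: an elongated quadratic bowl with minimum at (1, -2)
U = lambda x: (x[0] - 1) ** 2 + 10 * (x[1] + 2) ** 2
gradU = lambda x: np.array([2 * (x[0] - 1), 20 * (x[1] + 2)])
xmin = steepest_descent(U, gradU, [5.0, 5.0])
```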
## To do better than steepest descents, let's consider its pros and cons
<div style="float: right">
</div>
- Good to go in steepest direction initially
- It’s a good idea to move downhill
- It is initially very fast (see Leach Table 5.1)
- But steepest descent overcorrects
## SciPy doesn't implement steepest descents because of these issues
SciPy has lots of functions and tools for [optimization](https://docs.scipy.org/doc/scipy/reference/optimize.html), but it doesn't even implement steepest descents because of poor reliability.
However, the `Nelder-Mead` minimization method applies a downhill simplex method which also is less than ideal, so let's play around with that a bit. First, let's make a suitable landscape:
```python
#Define the function f we want to minimize
def f(arr):
    return 10*np.cos(arr[0])+arr[0]**2-3.+arr[1]**2
#Define a range of x, y values to look at in your plot
# NOTE IF YOU WANT TO ADJUST THESE YOU NEED TO RE-RUN ALL THREE CELLS (shift-enter)
xlower, ylower = -5, -5 #Start search at xlower and yupper
xupper, yupper = 5, 5 #End search at xupper and yupper
#Define a starting guess for the location of the minimum
xstart, ystart = -1.0, 1.0
#Create an array of coordinates for our plot
xvals = np.arange( xlower, xupper, 0.01)
yvals = np.arange( ylower, yupper, 0.01)
# Make a grid of x and y values
xx, yy = np.meshgrid(xvals, yvals)
```
```python
#Store function values at those x and y values for our plot
fvals = f(np.array([xx,yy]))
colors = np.linspace(0,1,len(xx))
#Create 9''x7'' figure
plt.figure(figsize=(9, 7))
#Plot the Energy Landscape with a colorbar to the side
plt.contourf(xvals, yvals, fvals, 10)
plt.colorbar()
plt.show()
```
```python
res = scipy.optimize.minimize(f, np.array((xstart,ystart)), method='Nelder-Mead', options={'return_all':True})
plt.figure(figsize=(9, 7))
plt.contourf(xvals, yvals, fvals, 10)
plt.colorbar()
# Plot path of minimization
xvals = [ entry[0] for entry in res.allvecs ]
yvals = [ entry[1] for entry in res.allvecs ]
colors = np.linspace(0,1,len(xvals))
plt.scatter(xvals, yvals, c=colors, cmap='spring', s=5, marker = "o")
plt.plot(xvals, yvals, 'm-')
```
## Conjugate gradient (CG) works like steepest descent but chooses a different direction
- Start with an initial direction ${\bf v}$ that is downhill, move in that direction until a minimum is reached.
- Compute a new direction using ${\bf v}_i = {\bf f}^N_i + \gamma_i {\bf v}_{i-1}^N$ where $\gamma_i = \frac{({\bf f}^N_i-{\bf f}^N_{i-1}){\bf f}^N_i}{{\bf f}^N_{i-1} {\bf f}^N_{i-1}}$; note $\gamma_i$ is a scalar.
Note that by ${\bf f}^N {\bf f}^N$ we mean vector multiplication, not a matrix multiplication; that is:
\begin{equation}
{\bf f}^N {\bf f}^N = f^2_{x,1} + f^2_{y,1} + f^2_{z,1} + f^2_{x,2} + ... + f^2_{z, N}
\end{equation}
(In Python, this can be coded by multiplying ${\bf f}\times {\bf f}$, where ${\bf f}$ are arrays, and taking the sum of the result; for normal matrix multiplication of arrays one would use a dot product)
$\gamma_i$ is designed so that the new direction is *conjugate* to the old direction so that the new step does not undo any of the work done by the old step (causing oscillation)
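For instance, the scalar $\gamma_i$ from the formula above could be computed in NumPy as follows; `f_new`, `f_old`, and `v_old` are placeholders for the $N\times 3$ arrays your force routine would return:

```python
import numpy as np

# placeholders standing in for forces/direction from the previous and current iterations
f_new = np.random.rand(5, 3)
f_old = np.random.rand(5, 3)
v_old = np.random.rand(5, 3)

# elementwise multiply and sum, as described above (not a matrix product)
gamma = np.sum((f_new - f_old) * f_new) / np.sum(f_old * f_old)
v_new = f_new + gamma * v_old
```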
### Conjugate gradient (CG) is more efficient
<div style="float: right">
</div>
- Green: Steepest descent (with optimal step size)
- Red: Conjugate gradient
Ideally takes at most $M$ steps, where $M$ is the number of degrees of freedom, often $3N$, though in practice even small precision errors make it take longer than this.
(Image: [Wikipedia](https://en.wikipedia.org/wiki/Conjugate_gradient_method#/media/File:Conjugate_gradient_illustration.svg), public domain. Oleg Alexandrov.)
## Let's look at a CG example
```python
#Define the function f we want to minimize
def f(arr):
    return 10*np.cos(arr[0])+arr[0]**2-3.+arr[1]**2
#Define a range of x, y values to look at in your plot
# NOTE IF YOU WANT TO ADJUST THESE YOU NEED TO RE-RUN ALL THREE CELLS (shift-enter)
xlower, ylower = -5, -5 #Start search at xlower and yupper
xupper, yupper = 5, 5 #End search at xupper and yupper
#Define a starting guess for the location of the minimum
xstart, ystart = -1.0, 1.0
#Create an array of coordinates for our plot
xvals = np.arange( xlower, xupper, 0.01)
yvals = np.arange( ylower, yupper, 0.01)
# Make a grid of x and y values
xx, yy = np.meshgrid(xvals, yvals)
#Store function values at those x and y values for our plot
fvals = f(np.array([xx,yy]))
colors = np.linspace(0,1,len(xx))
```
```python
res = scipy.optimize.minimize(f, np.array((xstart,ystart)), method='CG', options={'return_all':True})
```
```python
plt.figure(figsize=(9, 7))
plt.contourf(xvals, yvals, fvals, 10)
plt.colorbar()
# Plot path of minimization
xvals = [ entry[0] for entry in res.allvecs ]
yvals = [ entry[1] for entry in res.allvecs ]
colors = np.linspace(0,1,len(xvals))
plt.scatter(xvals, yvals, c=colors, cmap='spring', s=5, marker = "o")
plt.plot(xvals, yvals, 'm-')
```
## More advanced minimization methods can have considerable advantages
- Newton-Raphson: based on a second-order Taylor expansion of the potential around the current point
- reaches minimum in one step for quadratic potentials
- converges to minima and saddle points, so requires initial moves to reach minima
- slow because requires Hessian at every step
- Quasi-Newton methods approximate the Hessian to go faster
- L-BFGS is a popular quasi-Newton method
```python
#Define the function f we want to minimize
def f(arr):
    return 10*np.cos(arr[0])+arr[0]**2-3.+arr[1]**2
#Define a range of x, y values to look at in your plot
xlower, ylower = -5, -5 #Start search at xlower and yupper
xupper, yupper = 5, 5 #End search at xupper and yupper
#Define a starting guess for the location of the minimum
xstart, ystart = -0.1, 1.0
#Create an array of coordinates for our plot
xvals = np.arange( xlower, xupper, 0.01)
yvals = np.arange( ylower, yupper, 0.01)
xx, yy = np.meshgrid(xvals, yvals)
#Store function values at those x and y values for our plot
fvals = f(np.array([xx,yy]))
```
```python
res = scipy.optimize.minimize(f, np.array((xstart,ystart)), method='BFGS', options={'return_all':True})
plt.figure(figsize=(9, 7))
plt.contourf(xvals, yvals, fvals, 10)
plt.colorbar()
xvals = [ entry[0] for entry in res.allvecs ]
yvals = [ entry[1] for entry in res.allvecs ]
colors = np.linspace(0,1,len(xvals))
plt.scatter(xvals, yvals, c=colors, cmap='spring', s=5, marker = "o")
plt.plot(xvals, yvals, 'm-')
```
## Locating global minima is challenging and often involves combination of techniques
- Need some way to cross barriers, in addition to minimization
- Often combine techniques
- i.e. Molecular Dynamics or Monte Carlo + minimization
- Normal mode analysis can be used to identify collective motions
- Once identified, system can be moved in those directions
### When the number of dimensions is small, global minima can be found by repeated trials
One can simply do lots of minimizations from random starting points and find the global minimum sometimes, if there are not too many minima in total. See e.g. this [local and global minima](http://people.duke.edu/~ccc14/sta-663-2016/13_Optimization.html#Local-and-global-minima) discussion.
As an exercise, you might try to find the global minimum of this function by picking random starting points and minimizing many times (for solutions, see that link):
```python
def f(x, offset):
    return -np.sinc(x-offset)
x = np.linspace(-20, 20, 100)
plt.plot(x, f(x, 5));
```
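One hedged sketch of the repeated-trials idea for this exercise (it reuses the `f` defined above with the offset fixed at 5; the number of restarts is an arbitrary choice, and the linked notes contain the intended solution):

```python
import numpy as np
import scipy.optimize

best = None
for x0 in np.random.uniform(-20, 20, size=50):   # 50 random starting points
    res = scipy.optimize.minimize(f, x0, args=(5,))
    if best is None or res.fun < best.fun:
        best = res
print(best.x, best.fun)
```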
```python
```
|
e5685711946ebc7c9848d6d77e11f8f45e16b23a
| 152,430 |
ipynb
|
Jupyter Notebook
|
uci-pharmsci/lectures/energy_minimization/energy_minimization.ipynb
|
inferential/drug-computing
|
25ff2f04b2a1f7cb71c552f62e722edb26cc297f
|
[
"CC-BY-4.0",
"MIT"
] | 103 |
2017-10-21T18:49:01.000Z
|
2022-03-24T22:05:21.000Z
|
uci-pharmsci/lectures/energy_minimization/energy_minimization.ipynb
|
inferential/drug-computing
|
25ff2f04b2a1f7cb71c552f62e722edb26cc297f
|
[
"CC-BY-4.0",
"MIT"
] | 29 |
2017-10-23T20:57:17.000Z
|
2022-03-15T21:57:09.000Z
|
uci-pharmsci/lectures/energy_minimization/energy_minimization.ipynb
|
inferential/drug-computing
|
25ff2f04b2a1f7cb71c552f62e722edb26cc297f
|
[
"CC-BY-4.0",
"MIT"
] | 36 |
2018-01-18T20:22:29.000Z
|
2022-03-16T13:08:09.000Z
| 125.045119 | 21,694 | 0.863938 | true | 6,084 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.826712 | 0.822189 | 0.679713 |
__label__eng_Latn
| 0.987565 | 0.417533 |
## OIQ-Exam-Question-1 (Version 2)
Technical exam question from Ordre des ingénieurs du Québec. Obviously meant to be done using moment-distribution, but even easier using slope-deflection. This version uses a newer 'sdutil' that also computes end shears.
```python
from sympy import *
init_printing(use_latex='mathjax')
from IPython import display
```
```python
display.SVG('oiq-exam-1.svg')
```
```python
from sdutil2 import SD, FEF
var('EI theta_a theta_b theta_c theta_d')
Mab,Mba,Vab,Vba = SD(6,EI,theta_a,theta_b) + FEF.p(6,180,4)
Mbc,Mcb,Vbc,Vcb = SD(8,2*EI,theta_b,theta_c) + FEF.udl(8,45)
Mcd,Mdc,Vcd,Vdc = SD(6,EI,theta_c,theta_d)
```
```python
Mab
```
$$\frac{EI}{6} \left(4 \theta_{a} + 2 \theta_{b}\right) - 80.0$$
Solve equilbrium equations for rotations:
```python
soln = solve( [Mab,Mba+Mbc,Mcb+Mcd,Mdc],[theta_a,theta_b,theta_c,theta_d] )
soln
```
$$\left \{ \theta_{a} : \frac{75.0}{EI}, \quad \theta_{b} : \frac{90.0}{EI}, \quad \theta_{c} : - \frac{190.0}{EI}, \quad \theta_{d} : \frac{95.0}{EI}\right \}$$
Member end moments:
```python
[m.subs(soln) for m in [Mab,Mba,Mbc,Mcb,Mcd,Mdc]]
```
$$\left [ 0, \quad 245.0, \quad -245.0, \quad 95.0, \quad -95.0, \quad 0\right ]$$
Member end shears:
```python
[v.subs(soln).n(4) for v in [Vab,Vba,Vbc,Vcb,Vcd,Vdc]]
```
$$\left [ 19.17, \quad -160.8, \quad 198.8, \quad -161.3, \quad 15.83, \quad 15.83\right ]$$
Reactions:
```python
Ra = Vab
Rb = Vbc - Vba
Rc = Vcd - Vcb
Rd = -Vdc
[r.subs(soln).n(4) for r in [Ra,Rb,Rc,Rd]]
```
$$\left [ 19.17, \quad 359.6, \quad 177.1, \quad -15.83\right ]$$
#### Check overall equilibrium
```python
# sum forces in vertical dirn.
(Ra+Rb+Rc+Rd - 180 - 45*8).subs(soln)
```
$$0$$
```python
# sum moments about left
(-Rb*6 - Rc*(6+8) -Rd*(6+8+6) + 180*4 + 45*8*(6 + 8/2.)).subs(soln)
```
$$0$$
```python
Ra.expand()
```
$$- \frac{EI \theta_{a}}{6} - \frac{EI \theta_{b}}{6} + 46.6666666666667$$
```python
```
|
f636d9a51652ad3c0eec3d11bd4983549d8b9e82
| 31,780 |
ipynb
|
Jupyter Notebook
|
slope-deflection/oiq-exam-question-1-Version-2.ipynb
|
nholtz/structural-analysis
|
246d6358355bd9768e30075d1f6af282ceb995be
|
[
"CC0-1.0"
] | 3 |
2016-05-26T07:01:51.000Z
|
2019-05-31T23:48:11.000Z
|
slope-deflection/oiq-exam-question-1-Version-2.ipynb
|
nholtz/structural-analysis
|
246d6358355bd9768e30075d1f6af282ceb995be
|
[
"CC0-1.0"
] | null | null | null |
slope-deflection/oiq-exam-question-1-Version-2.ipynb
|
nholtz/structural-analysis
|
246d6358355bd9768e30075d1f6af282ceb995be
|
[
"CC0-1.0"
] | 1 |
2016-08-30T06:08:03.000Z
|
2016-08-30T06:08:03.000Z
| 69.692982 | 675 | 0.604814 | true | 806 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.90599 | 0.787931 | 0.713858 |
__label__eng_Latn
| 0.275049 | 0.496862 |
## Conditional Independence
Two random variable $X$ and $Y$ are conditiaonly independent given $Z$, denoted by $X \perp \!\! \perp Y \mid Z$ if
$$p_{X,Y\mid Z} (x,y\mid z) = p_{X\mid Z}(x\mid z) \, p_{Y\mid Z}(y\mid z)$$
In general, marginal independence doesn't imply conditional independence, and vice versa.
### Example
R: Red Sox Game <br>
A: Accident <br>
T: Bad Traffic
Find the following probability
(a) $\mathbb{P}(R=1) = 0.5$
(b) $\mathbb{P}(R=1 \mid T=1)$
$$\begin{align}p_{R,A}(r,a\mid 1)
&= \frac{p_{T\mid R,A}(1 \mid r, a)\, p_R(r) \, p_A(a)}{p_T(1)}\\
&= c\cdot p_{T\mid R,A}(1 \mid r, a)\end{align}$$
(c) $\mathbb{P}(R=1 \mid T=1, A=1)= \mathbb{P}(R=1 \mid T=1)$
### Practice Problem: Conditional Independence
Suppose $X_0, \dots , X_{100}$ are random variables whose joint distribution has the following factorization:
$$p_{X_0, \dots , X_{100}}(x_0, \dots , x_{100}) = p_{X_0}(x_0) \cdot \prod _{i=1}^{100} p_{X_ i | X_{i-1}}(x_ i | x_{i-1})$$
This factorization is what's called a Markov chain. We'll be seeing Markov chains a lot more later on in the course.
Show that $X_{50} \perp \!\! \perp X_{52} \mid X_{51}$.
**Answer:**
$$
\begin{eqnarray}
p_{X_{50},X_{51},X_{52}}(x_{50},x_{51},x_{52})
&=& \sum_{x_{0} \dots x_{49}} \sum_{x_{53} \dots x_{100}} p_{X_0, \dots , X_{100}}(x_0, \dots , x_{100}) \\
&=& \sum_{x_{0} \dots x_{49}} \sum_{x_{53} \dots x_{100}} \left[p_{X_0}(x_{0}) \prod_{i=1}^{50} p_{X_i\mid X_{i-1}}(x_{i}|x_{i-1})\right] \\
&& \cdot \prod_{i=51}^{52} p_{X_i\mid X_{i-1}}(x_{i}|x_{i-1})\cdot \prod_{i=53}^{100} p_{X_i\mid X_{i-1}}(x_{i}|x_{i-1}) \\
&=& \underbrace{\sum_{x_{0} \dots x_{49}} \left[p_{X_0}(x_{0}) \prod_{i=1}^{50} p_{X_i\mid X_{i-1}}(x_{i}|x_{i-1})\right]}_{=p_{X_{50}}(x_{50})} \\
&& \cdot \prod_{i=51}^{52} p_{X_i\mid X_{i-1}}(x_{i}|x_{i-1})\cdot \underbrace{\sum_{x_{53} \dots x_{100}}\prod_{i=53}^{100} p_{X_i\mid X_{i-1}}(x_{i}|x_{i-1})}_{=1} \\[2ex]
&=& p_{X_{50}}(x_{50}) \cdot p_{X_{51}\mid X_{50}}(x_{51}|x_{50}) \cdot p_{X_{52}\mid X_{51}}(x_{52}|x_{51}) \\[2ex]
&=& p_{X_{50}\mid X_{51}}(x_{50}|x_{51}) \cdot p_{X_{52}\mid X_{51}}(x_{52}|x_{51}) \\[2ex]
\frac{p_{X_{50},X_{51},X_{52}}(x_{50},x_{51},x_{52})}{p_{X_{51}}(x_{51})}
&=& p_{X_{50}\mid X_{51}}(x_{50}|x_{51}) \cdot p_{X_{52}\mid X_{51}}(x_{52}|x_{51}) \\[2ex]
p_{X_{50},X_{52}\mid X_{51}}(x_{50},x_{52}\mid x_{51})
&=& p_{X_{50}\mid X_{51}}(x_{50}|x_{51}) \cdot p_{X_{52}\mid X_{51}}(x_{52}|x_{51})
\end{eqnarray}
$$
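As a sanity check, here is a small NumPy sketch that builds a short Markov chain with random transition matrices and verifies the factorization numerically. A 3-step chain with binary states stands in for the 100-step one, so $X_1 \perp \!\! \perp X_3 \mid X_2$ plays the role of $X_{50} \perp \!\! \perp X_{52} \mid X_{51}$:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_dist(shape):
    p = rng.random(shape)
    return p / p.sum(axis=-1, keepdims=True)   # normalize over the last axis

p0 = random_dist(2)                            # p_{X0}
T = [random_dist((2, 2)) for _ in range(3)]    # p_{Xi | Xi-1} for i = 1, 2, 3

# joint over (X0, X1, X2, X3) from the Markov chain factorization
joint = np.einsum('a,ab,bc,cd->abcd', p0, T[0], T[1], T[2])

p123 = joint.sum(axis=0)                       # p_{X1, X2, X3}
p2 = p123.sum(axis=(0, 2))                     # p_{X2}
cond = p123 / p2[None, :, None]                # p_{X1, X3 | X2}
p1_given_2 = p123.sum(axis=2) / p2[None, :]    # p_{X1 | X2}
p3_given_2 = p123.sum(axis=0) / p2[:, None]    # p_{X3 | X2}

# conditional joint should factor into the product of conditionals
assert np.allclose(cond, p1_given_2[:, :, None] * p3_given_2[None, :, :])
```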
```python
```
|
233c7dc2b288c1f2a69cc16e242012c3fc7412b8
| 4,493 |
ipynb
|
Jupyter Notebook
|
week03/03 Conditional Independence.ipynb
|
infimath/Computational-Probability-and-Inference
|
e48cd52c45ffd9458383ba0f77468d31f781dc77
|
[
"MIT"
] | 1 |
2019-04-04T03:07:47.000Z
|
2019-04-04T03:07:47.000Z
|
week03/03 Conditional Independence.ipynb
|
infimath/Computational-Probability-and-Inference
|
e48cd52c45ffd9458383ba0f77468d31f781dc77
|
[
"MIT"
] | null | null | null |
week03/03 Conditional Independence.ipynb
|
infimath/Computational-Probability-and-Inference
|
e48cd52c45ffd9458383ba0f77468d31f781dc77
|
[
"MIT"
] | 1 |
2021-02-27T05:33:49.000Z
|
2021-02-27T05:33:49.000Z
| 32.557971 | 202 | 0.484531 | true | 1,185 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.92944 | 0.882428 | 0.820164 |
__label__eng_Latn
| 0.207291 | 0.743849 |
### Julia Tutorial for Optimization
## Summer School - EMap/FGV
## Lecture 03 - Modeling and Solvers in Julia
### Instructor
- Luiz-Rafael Santos ([LABMAC/UFSC/Blumenau](http://labmac.mat.blumenau.ufsc.br))
  * Contact email: [l.r.santos@ufsc.br](mailto:l.r.santos@ufsc.br) or [lrsantos11@gmail.com](mailto:lrsantos11@ufsc.br)
- Course repository on [Github](https://github.com/lrsantos11/Tutorial-Julia-Opt)
### Modeling Packages
* Packages from [JuliaOpt](https://www.juliaopt.org/)
    * [JuMP](https://jump.dev/JuMP.jl/v0.19.0/index.html): algebraic modeling language for linear, quadratic, and nonlinear optimization (with or without constraints)
    * Available solvers
        * [GLPK](https://github.com/jump-dev/GLPK.jl) for linear and integer optimization (open source)
        * [Ipopt](https://github.com/jump-dev/Ipopt.jl) for nonlinear optimization (open source)
        * [Gurobi](https://github.com/jump-dev/Gurobi.jl), KNitro, CPLEX, Xpress, Mosek
* [JuliaSmoothOptimizers (JSO)](https://juliasmoothoptimizers.github.io/): a collection of Julia packages for developing, testing, and benchmarking (nonlinear) optimization algorithms
    * Modeling
        * [NLPModels](https://github.com/JuliaSmoothOptimizers/NLPModels.jl): API for representing optimization problems `min f(x) s.t. l <= c(x) <= u`
    * Problem repositories
        * [CUTEst.jl](https://github.com/JuliaSmoothOptimizers/CUTEst.jl): interface to [CUTEst](http://ccpforge.cse.rl.ac.uk/gf/project/cutest/wiki), a repository of optimization test problems for testing and comparing optimization algorithms.
```julia
using Pkg
pkg"activate ../."
pkg"instantiate"
```
Activating environment at `~/Dropbox/extensao/cursos/2021/Tutorial-Julia-Opt/Project.toml`
## Installing the packages
* We will install:
    * the modeling package `JuMP`
    * the solvers `GLPK`, `Ipopt`, and `Gurobi`
* The first two are open source, and Julia will install not only the interface but also the program or library itself
* `Gurobi` requires the program to be installed and a license (I have an academic one), so it may not work in every computing environment.
```julia
#pkg"add JuMP GLPK Ipopt"
#Descomente a linha acima e comente a linha abaixo caso não possua Gurobi instalado
pkg"add JuMP GLPK Ipopt Gurobi"
```
Resolving package versions...
No Changes to `~/Dropbox/extensao/cursos/2021/Tutorial-Julia-Opt/Project.toml`
No Changes to `~/Dropbox/extensao/cursos/2021/Tutorial-Julia-Opt/Manifest.toml`
```julia
# using Plots, LinearAlgebra, JuMP, GLPK, Ipopt
#Descomente a linha acima e comente a linha abaixo caso não possua Gurobi instalado
using Plots, LinearAlgebra, JuMP, GLPK, Ipopt, Gurobi
```
### Example 1 - Linear Programming
* The linear programming problem can be stated in the form
$$
\begin{align}
&\min &c^{T} x \\
&\text { s.t. } &A x=b \\
&& x \geq 0
\end{align}
$$
* We will use `JuMP` to model a problem and the three (or two) available solvers to solve it.
```julia
function plot_factivel()
contour(range(-0.5, 10, length=100), range(-0.5, 5, length=100),
(x,y)->-12x-20y,
levels=10,
frame_style=:origin)
plot!([0; 0 ; 8; 6 ; 0 ],
[ 4; 0 ; 0; 1.5 ; 4], c=:blue, lw=2, lab="Conjunto factível",series=:path)
end
plot_factivel()
```
* Consider the problem
$$
\begin{align*}
& \min & -12x - 20y \\
& \;\;\text{s.t.} & 3x + 4y \leq 24 \\
& & 5x + 12y \leq 48 \\
& & x \geq 0 \\
& & y \geq 0 \\
\end{align*}
$$
```julia
# Modelo e Solver
model = Model(Ipopt.Optimizer)
# Variaveis, canalizações (ou caixas) e tipo
@variable(model,x>=0)
@variable(model,y>=0)
# Restrições
@constraint(model,3x + 4y <=24)
@constraint(model,5x + 12y <=48)
# Função objetivo
@objective(model,Min,-12x -20y)
print(model)
```
Min -12 x - 20 y
Subject to
3 x + 4 y ≤ 24.0
5 x + 12 y ≤ 48.0
x ≥ 0.0
y ≥ 0.0
```julia
# Chamada do Solver
optimize!(model)
#Declarar solução
@show value(x)
@show value(y)
@show objective_value(model)
plot_factivel()
scatter!([value(x)],[value(y)],label="Solução")
```
This is Ipopt version 3.13.2, running with linear solver mumps.
NOTE: Other linear solvers might be more efficient (see Ipopt documentation).
Number of nonzeros in equality constraint Jacobian...: 0
Number of nonzeros in inequality constraint Jacobian.: 4
Number of nonzeros in Lagrangian Hessian.............: 0
Total number of variables............................: 2
variables with only lower bounds: 2
variables with lower and upper bounds: 0
variables with only upper bounds: 0
Total number of equality constraints.................: 0
Total number of inequality constraints...............: 2
inequality constraints with only lower bounds: 0
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 2
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 -3.1999968e-01 0.00e+00 1.32e+00 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0
1 -8.5810813e+00 0.00e+00 1.85e+01 -1.0 4.49e+00 - 5.07e-02 1.00e+00f 1
2 -8.4221745e+01 0.00e+00 5.50e+00 -1.0 6.15e+01 - 1.40e-01 6.98e-01f 1
3 -1.0155909e+02 0.00e+00 3.63e+00 -1.0 1.81e+01 - 1.00e+00 3.46e-01f 1
4 -1.0180081e+02 0.00e+00 1.00e-06 -1.0 2.24e-01 - 1.00e+00 1.00e+00f 1
5 -1.0199360e+02 0.00e+00 2.83e-08 -2.5 1.26e-01 - 1.00e+00 1.00e+00f 1
6 -1.0199970e+02 0.00e+00 1.50e-09 -3.8 4.95e-03 - 1.00e+00 1.00e+00f 1
7 -1.0200000e+02 0.00e+00 1.85e-11 -5.7 1.99e-04 - 1.00e+00 1.00e+00f 1
8 -1.0200000e+02 0.00e+00 2.53e-14 -8.6 2.46e-06 - 1.00e+00 1.00e+00f 1
Number of Iterations....: 8
(scaled) (unscaled)
Objective...............: -1.0200000101498793e+02 -1.0200000101498793e+02
Dual infeasibility......: 2.5313084961453569e-14 2.5313084961453569e-14
Constraint violation....: 0.0000000000000000e+00 0.0000000000000000e+00
Complementarity.........: 2.5062831816896742e-09 2.5062831816896742e-09
Overall NLP error.......: 2.5062831816896742e-09 2.5062831816896742e-09
Number of objective function evaluations = 9
Number of objective gradient evaluations = 9
Number of equality constraint evaluations = 0
Number of inequality constraint evaluations = 9
Number of equality constraint Jacobian evaluations = 0
Number of inequality constraint Jacobian evaluations = 1
Number of Lagrangian Hessian evaluations = 1
Total CPU secs in IPOPT (w/o function evaluations) = 0.011
Total CPU secs in NLP function evaluations = 0.000
EXIT: Optimal Solution Found.
value(x) = 6.000000060152027
value(y) = 1.5000000146581793
objective_value(model) = -102.00000101498793
### Let's look at each line of the code
* A model is an object that holds the variables, the constraints, and the solver options
* Models are created with the `Model()` function.
* A model can also be created without a solver
```julia
model = Model(GLPK.Optimizer)
```
* A variable is declared using `@variable(nome_modelo, nome_variavel_e_limitantes, variable_type)`.
* Bounds can be upper or lower. If none is given, the variable is treated as a real (free) variable
```julia
@variable(model, x >= 0)
@variable(model, y >= 0)
```
* A constraint is declared using `@constraint(nome_modelo, restricao)`.
```julia
@constraint(model, 3x + 4y <= 24)
@constraint(model, 5x + 12y <= 48)
```
* The objective function is declared using `@objective(nome_modelo, Min/Max, function to be optimized)`
* `print(nome_modelo)` prints the model (optional).
```julia
@objective(model, Min, -12x - 20y)
print(model)
```
* To solve the optimization problem we call the function `optimize!`
```julia
optimize!(model)
```
* `x` and `y` are variables that live in the *workspace*, but to obtain their values we need the `value` function
* Likewise, to obtain the optimal objective value we use `objective_value(nome_modelo)`
```julia
@show value(x);
@show value(y);
@show objective_value(model);
```
## Example 2 - The $N$-queens problem (a feasibility problem)
> The $N$-queens problem consists of an $N\times N$ chessboard on which we want to place $N$ queens so that no queen can attack another. In chess, a queen moves vertically, horizontally, and diagonally. Therefore, there can be no more than one queen in any row, column, or diagonal of the board.
```julia
N = 16
Nrainhas = Model(Gurobi.Optimizer)
#Definindo as variáveis
@variable(Nrainhas,x[i=1:N,j=1:N],Bin)
```
Academic license - for non-commercial use only - expires 2021-03-27
16×16 Array{VariableRef,2}:
x[1,1] x[1,2] x[1,3] x[1,4] x[1,5] … x[1,14] x[1,15] x[1,16]
x[2,1] x[2,2] x[2,3] x[2,4] x[2,5] x[2,14] x[2,15] x[2,16]
x[3,1] x[3,2] x[3,3] x[3,4] x[3,5] x[3,14] x[3,15] x[3,16]
x[4,1] x[4,2] x[4,3] x[4,4] x[4,5] x[4,14] x[4,15] x[4,16]
x[5,1] x[5,2] x[5,3] x[5,4] x[5,5] x[5,14] x[5,15] x[5,16]
x[6,1] x[6,2] x[6,3] x[6,4] x[6,5] … x[6,14] x[6,15] x[6,16]
x[7,1] x[7,2] x[7,3] x[7,4] x[7,5] x[7,14] x[7,15] x[7,16]
x[8,1] x[8,2] x[8,3] x[8,4] x[8,5] x[8,14] x[8,15] x[8,16]
x[9,1] x[9,2] x[9,3] x[9,4] x[9,5] x[9,14] x[9,15] x[9,16]
x[10,1] x[10,2] x[10,3] x[10,4] x[10,5] x[10,14] x[10,15] x[10,16]
x[11,1] x[11,2] x[11,3] x[11,4] x[11,5] … x[11,14] x[11,15] x[11,16]
x[12,1] x[12,2] x[12,3] x[12,4] x[12,5] x[12,14] x[12,15] x[12,16]
x[13,1] x[13,2] x[13,3] x[13,4] x[13,5] x[13,14] x[13,15] x[13,16]
x[14,1] x[14,2] x[14,3] x[14,4] x[14,5] x[14,14] x[14,15] x[14,16]
x[15,1] x[15,2] x[15,3] x[15,4] x[15,5] x[15,14] x[15,15] x[15,16]
x[16,1] x[16,2] x[16,3] x[16,4] x[16,5] … x[16,14] x[16,15] x[16,16]
```julia
# Restrições em relação a soma das linhas
@constraint(Nrainhas,[sum(x[i,:]) for i=1:N] .== 1)
```
16-element Array{ConstraintRef{Model,MathOptInterface.ConstraintIndex{MathOptInterface.ScalarAffineFunction{Float64},MathOptInterface.EqualTo{Float64}},ScalarShape},1}:
x[1,1] + x[1,2] + x[1,3] + x[1,4] + x[1,5] + x[1,6] + x[1,7] + x[1,8] + x[1,9] + x[1,10] + x[1,11] + x[1,12] + x[1,13] + x[1,14] + x[1,15] + x[1,16] = 1.0
x[2,1] + x[2,2] + x[2,3] + x[2,4] + x[2,5] + x[2,6] + x[2,7] + x[2,8] + x[2,9] + x[2,10] + x[2,11] + x[2,12] + x[2,13] + x[2,14] + x[2,15] + x[2,16] = 1.0
x[3,1] + x[3,2] + x[3,3] + x[3,4] + x[3,5] + x[3,6] + x[3,7] + x[3,8] + x[3,9] + x[3,10] + x[3,11] + x[3,12] + x[3,13] + x[3,14] + x[3,15] + x[3,16] = 1.0
x[4,1] + x[4,2] + x[4,3] + x[4,4] + x[4,5] + x[4,6] + x[4,7] + x[4,8] + x[4,9] + x[4,10] + x[4,11] + x[4,12] + x[4,13] + x[4,14] + x[4,15] + x[4,16] = 1.0
x[5,1] + x[5,2] + x[5,3] + x[5,4] + x[5,5] + x[5,6] + x[5,7] + x[5,8] + x[5,9] + x[5,10] + x[5,11] + x[5,12] + x[5,13] + x[5,14] + x[5,15] + x[5,16] = 1.0
x[6,1] + x[6,2] + x[6,3] + x[6,4] + x[6,5] + x[6,6] + x[6,7] + x[6,8] + x[6,9] + x[6,10] + x[6,11] + x[6,12] + x[6,13] + x[6,14] + x[6,15] + x[6,16] = 1.0
x[7,1] + x[7,2] + x[7,3] + x[7,4] + x[7,5] + x[7,6] + x[7,7] + x[7,8] + x[7,9] + x[7,10] + x[7,11] + x[7,12] + x[7,13] + x[7,14] + x[7,15] + x[7,16] = 1.0
x[8,1] + x[8,2] + x[8,3] + x[8,4] + x[8,5] + x[8,6] + x[8,7] + x[8,8] + x[8,9] + x[8,10] + x[8,11] + x[8,12] + x[8,13] + x[8,14] + x[8,15] + x[8,16] = 1.0
x[9,1] + x[9,2] + x[9,3] + x[9,4] + x[9,5] + x[9,6] + x[9,7] + x[9,8] + x[9,9] + x[9,10] + x[9,11] + x[9,12] + x[9,13] + x[9,14] + x[9,15] + x[9,16] = 1.0
x[10,1] + x[10,2] + x[10,3] + x[10,4] + x[10,5] + x[10,6] + x[10,7] + x[10,8] + x[10,9] + x[10,10] + x[10,11] + x[10,12] + x[10,13] + x[10,14] + x[10,15] + x[10,16] = 1.0
x[11,1] + x[11,2] + x[11,3] + x[11,4] + x[11,5] + x[11,6] + x[11,7] + x[11,8] + x[11,9] + x[11,10] + x[11,11] + x[11,12] + x[11,13] + x[11,14] + x[11,15] + x[11,16] = 1.0
x[12,1] + x[12,2] + x[12,3] + x[12,4] + x[12,5] + x[12,6] + x[12,7] + x[12,8] + x[12,9] + x[12,10] + x[12,11] + x[12,12] + x[12,13] + x[12,14] + x[12,15] + x[12,16] = 1.0
x[13,1] + x[13,2] + x[13,3] + x[13,4] + x[13,5] + x[13,6] + x[13,7] + x[13,8] + x[13,9] + x[13,10] + x[13,11] + x[13,12] + x[13,13] + x[13,14] + x[13,15] + x[13,16] = 1.0
x[14,1] + x[14,2] + x[14,3] + x[14,4] + x[14,5] + x[14,6] + x[14,7] + x[14,8] + x[14,9] + x[14,10] + x[14,11] + x[14,12] + x[14,13] + x[14,14] + x[14,15] + x[14,16] = 1.0
x[15,1] + x[15,2] + x[15,3] + x[15,4] + x[15,5] + x[15,6] + x[15,7] + x[15,8] + x[15,9] + x[15,10] + x[15,11] + x[15,12] + x[15,13] + x[15,14] + x[15,15] + x[15,16] = 1.0
x[16,1] + x[16,2] + x[16,3] + x[16,4] + x[16,5] + x[16,6] + x[16,7] + x[16,8] + x[16,9] + x[16,10] + x[16,11] + x[16,12] + x[16,13] + x[16,14] + x[16,15] + x[16,16] = 1.0
```julia
# Restrições em relação a soma das colunas
@constraint(Nrainhas,[sum(x[:,j]) for j=1:N] .== 1)
```
16-element Array{ConstraintRef{Model,MathOptInterface.ConstraintIndex{MathOptInterface.ScalarAffineFunction{Float64},MathOptInterface.EqualTo{Float64}},ScalarShape},1}:
x[1,1] + x[2,1] + x[3,1] + x[4,1] + x[5,1] + x[6,1] + x[7,1] + x[8,1] + x[9,1] + x[10,1] + x[11,1] + x[12,1] + x[13,1] + x[14,1] + x[15,1] + x[16,1] = 1.0
x[1,2] + x[2,2] + x[3,2] + x[4,2] + x[5,2] + x[6,2] + x[7,2] + x[8,2] + x[9,2] + x[10,2] + x[11,2] + x[12,2] + x[13,2] + x[14,2] + x[15,2] + x[16,2] = 1.0
x[1,3] + x[2,3] + x[3,3] + x[4,3] + x[5,3] + x[6,3] + x[7,3] + x[8,3] + x[9,3] + x[10,3] + x[11,3] + x[12,3] + x[13,3] + x[14,3] + x[15,3] + x[16,3] = 1.0
x[1,4] + x[2,4] + x[3,4] + x[4,4] + x[5,4] + x[6,4] + x[7,4] + x[8,4] + x[9,4] + x[10,4] + x[11,4] + x[12,4] + x[13,4] + x[14,4] + x[15,4] + x[16,4] = 1.0
x[1,5] + x[2,5] + x[3,5] + x[4,5] + x[5,5] + x[6,5] + x[7,5] + x[8,5] + x[9,5] + x[10,5] + x[11,5] + x[12,5] + x[13,5] + x[14,5] + x[15,5] + x[16,5] = 1.0
x[1,6] + x[2,6] + x[3,6] + x[4,6] + x[5,6] + x[6,6] + x[7,6] + x[8,6] + x[9,6] + x[10,6] + x[11,6] + x[12,6] + x[13,6] + x[14,6] + x[15,6] + x[16,6] = 1.0
x[1,7] + x[2,7] + x[3,7] + x[4,7] + x[5,7] + x[6,7] + x[7,7] + x[8,7] + x[9,7] + x[10,7] + x[11,7] + x[12,7] + x[13,7] + x[14,7] + x[15,7] + x[16,7] = 1.0
x[1,8] + x[2,8] + x[3,8] + x[4,8] + x[5,8] + x[6,8] + x[7,8] + x[8,8] + x[9,8] + x[10,8] + x[11,8] + x[12,8] + x[13,8] + x[14,8] + x[15,8] + x[16,8] = 1.0
x[1,9] + x[2,9] + x[3,9] + x[4,9] + x[5,9] + x[6,9] + x[7,9] + x[8,9] + x[9,9] + x[10,9] + x[11,9] + x[12,9] + x[13,9] + x[14,9] + x[15,9] + x[16,9] = 1.0
x[1,10] + x[2,10] + x[3,10] + x[4,10] + x[5,10] + x[6,10] + x[7,10] + x[8,10] + x[9,10] + x[10,10] + x[11,10] + x[12,10] + x[13,10] + x[14,10] + x[15,10] + x[16,10] = 1.0
x[1,11] + x[2,11] + x[3,11] + x[4,11] + x[5,11] + x[6,11] + x[7,11] + x[8,11] + x[9,11] + x[10,11] + x[11,11] + x[12,11] + x[13,11] + x[14,11] + x[15,11] + x[16,11] = 1.0
x[1,12] + x[2,12] + x[3,12] + x[4,12] + x[5,12] + x[6,12] + x[7,12] + x[8,12] + x[9,12] + x[10,12] + x[11,12] + x[12,12] + x[13,12] + x[14,12] + x[15,12] + x[16,12] = 1.0
x[1,13] + x[2,13] + x[3,13] + x[4,13] + x[5,13] + x[6,13] + x[7,13] + x[8,13] + x[9,13] + x[10,13] + x[11,13] + x[12,13] + x[13,13] + x[14,13] + x[15,13] + x[16,13] = 1.0
x[1,14] + x[2,14] + x[3,14] + x[4,14] + x[5,14] + x[6,14] + x[7,14] + x[8,14] + x[9,14] + x[10,14] + x[11,14] + x[12,14] + x[13,14] + x[14,14] + x[15,14] + x[16,14] = 1.0
x[1,15] + x[2,15] + x[3,15] + x[4,15] + x[5,15] + x[6,15] + x[7,15] + x[8,15] + x[9,15] + x[10,15] + x[11,15] + x[12,15] + x[13,15] + x[14,15] + x[15,15] + x[16,15] = 1.0
x[1,16] + x[2,16] + x[3,16] + x[4,16] + x[5,16] + x[6,16] + x[7,16] + x[8,16] + x[9,16] + x[10,16] + x[11,16] + x[12,16] + x[13,16] + x[14,16] + x[15,16] + x[16,16] = 1.0
```julia
x
```
16×16 Array{VariableRef,2}:
x[1,1] x[1,2] x[1,3] x[1,4] x[1,5] … x[1,14] x[1,15] x[1,16]
x[2,1] x[2,2] x[2,3] x[2,4] x[2,5] x[2,14] x[2,15] x[2,16]
x[3,1] x[3,2] x[3,3] x[3,4] x[3,5] x[3,14] x[3,15] x[3,16]
x[4,1] x[4,2] x[4,3] x[4,4] x[4,5] x[4,14] x[4,15] x[4,16]
x[5,1] x[5,2] x[5,3] x[5,4] x[5,5] x[5,14] x[5,15] x[5,16]
x[6,1] x[6,2] x[6,3] x[6,4] x[6,5] … x[6,14] x[6,15] x[6,16]
x[7,1] x[7,2] x[7,3] x[7,4] x[7,5] x[7,14] x[7,15] x[7,16]
x[8,1] x[8,2] x[8,3] x[8,4] x[8,5] x[8,14] x[8,15] x[8,16]
x[9,1] x[9,2] x[9,3] x[9,4] x[9,5] x[9,14] x[9,15] x[9,16]
x[10,1] x[10,2] x[10,3] x[10,4] x[10,5] x[10,14] x[10,15] x[10,16]
x[11,1] x[11,2] x[11,3] x[11,4] x[11,5] … x[11,14] x[11,15] x[11,16]
x[12,1] x[12,2] x[12,3] x[12,4] x[12,5] x[12,14] x[12,15] x[12,16]
x[13,1] x[13,2] x[13,3] x[13,4] x[13,5] x[13,14] x[13,15] x[13,16]
x[14,1] x[14,2] x[14,3] x[14,4] x[14,5] x[14,14] x[14,15] x[14,16]
x[15,1] x[15,2] x[15,3] x[15,4] x[15,5] x[15,14] x[15,15] x[15,16]
x[16,1] x[16,2] x[16,3] x[16,4] x[16,5] … x[16,14] x[16,15] x[16,16]
```julia
diag(x,3)
```
13-element Array{VariableRef,1}:
x[1,4]
x[2,5]
x[3,6]
x[4,7]
x[5,8]
x[6,9]
x[7,10]
x[8,11]
x[9,12]
x[10,13]
x[11,14]
x[12,15]
x[13,16]
```julia
# Restrições das diagonais principais
@constraint(Nrainhas,[sum(diag(x,i)) for i = -(N-1):(N-1)] .<=1)
```
31-element Array{ConstraintRef{Model,MathOptInterface.ConstraintIndex{MathOptInterface.ScalarAffineFunction{Float64},MathOptInterface.LessThan{Float64}},ScalarShape},1}:
x[16,1] ≤ 1.0
x[15,1] + x[16,2] ≤ 1.0
x[14,1] + x[15,2] + x[16,3] ≤ 1.0
x[13,1] + x[14,2] + x[15,3] + x[16,4] ≤ 1.0
x[12,1] + x[13,2] + x[14,3] + x[15,4] + x[16,5] ≤ 1.0
x[11,1] + x[12,2] + x[13,3] + x[14,4] + x[15,5] + x[16,6] ≤ 1.0
x[10,1] + x[11,2] + x[12,3] + x[13,4] + x[14,5] + x[15,6] + x[16,7] ≤ 1.0
x[9,1] + x[10,2] + x[11,3] + x[12,4] + x[13,5] + x[14,6] + x[15,7] + x[16,8] ≤ 1.0
x[8,1] + x[9,2] + x[10,3] + x[11,4] + x[12,5] + x[13,6] + x[14,7] + x[15,8] + x[16,9] ≤ 1.0
x[7,1] + x[8,2] + x[9,3] + x[10,4] + x[11,5] + x[12,6] + x[13,7] + x[14,8] + x[15,9] + x[16,10] ≤ 1.0
x[6,1] + x[7,2] + x[8,3] + x[9,4] + x[10,5] + x[11,6] + x[12,7] + x[13,8] + x[14,9] + x[15,10] + x[16,11] ≤ 1.0
x[5,1] + x[6,2] + x[7,3] + x[8,4] + x[9,5] + x[10,6] + x[11,7] + x[12,8] + x[13,9] + x[14,10] + x[15,11] + x[16,12] ≤ 1.0
x[4,1] + x[5,2] + x[6,3] + x[7,4] + x[8,5] + x[9,6] + x[10,7] + x[11,8] + x[12,9] + x[13,10] + x[14,11] + x[15,12] + x[16,13] ≤ 1.0
⋮
x[1,5] + x[2,6] + x[3,7] + x[4,8] + x[5,9] + x[6,10] + x[7,11] + x[8,12] + x[9,13] + x[10,14] + x[11,15] + x[12,16] ≤ 1.0
x[1,6] + x[2,7] + x[3,8] + x[4,9] + x[5,10] + x[6,11] + x[7,12] + x[8,13] + x[9,14] + x[10,15] + x[11,16] ≤ 1.0
x[1,7] + x[2,8] + x[3,9] + x[4,10] + x[5,11] + x[6,12] + x[7,13] + x[8,14] + x[9,15] + x[10,16] ≤ 1.0
x[1,8] + x[2,9] + x[3,10] + x[4,11] + x[5,12] + x[6,13] + x[7,14] + x[8,15] + x[9,16] ≤ 1.0
x[1,9] + x[2,10] + x[3,11] + x[4,12] + x[5,13] + x[6,14] + x[7,15] + x[8,16] ≤ 1.0
x[1,10] + x[2,11] + x[3,12] + x[4,13] + x[5,14] + x[6,15] + x[7,16] ≤ 1.0
x[1,11] + x[2,12] + x[3,13] + x[4,14] + x[5,15] + x[6,16] ≤ 1.0
x[1,12] + x[2,13] + x[3,14] + x[4,15] + x[5,16] ≤ 1.0
x[1,13] + x[2,14] + x[3,15] + x[4,16] ≤ 1.0
x[1,14] + x[2,15] + x[3,16] ≤ 1.0
x[1,15] + x[2,16] ≤ 1.0
x[1,16] ≤ 1.0
```julia
x
```
16×16 Array{VariableRef,2}:
x[1,1] x[1,2] x[1,3] x[1,4] x[1,5] … x[1,14] x[1,15] x[1,16]
x[2,1] x[2,2] x[2,3] x[2,4] x[2,5] x[2,14] x[2,15] x[2,16]
x[3,1] x[3,2] x[3,3] x[3,4] x[3,5] x[3,14] x[3,15] x[3,16]
x[4,1] x[4,2] x[4,3] x[4,4] x[4,5] x[4,14] x[4,15] x[4,16]
x[5,1] x[5,2] x[5,3] x[5,4] x[5,5] x[5,14] x[5,15] x[5,16]
x[6,1] x[6,2] x[6,3] x[6,4] x[6,5] … x[6,14] x[6,15] x[6,16]
x[7,1] x[7,2] x[7,3] x[7,4] x[7,5] x[7,14] x[7,15] x[7,16]
x[8,1] x[8,2] x[8,3] x[8,4] x[8,5] x[8,14] x[8,15] x[8,16]
x[9,1] x[9,2] x[9,3] x[9,4] x[9,5] x[9,14] x[9,15] x[9,16]
x[10,1] x[10,2] x[10,3] x[10,4] x[10,5] x[10,14] x[10,15] x[10,16]
x[11,1] x[11,2] x[11,3] x[11,4] x[11,5] … x[11,14] x[11,15] x[11,16]
x[12,1] x[12,2] x[12,3] x[12,4] x[12,5] x[12,14] x[12,15] x[12,16]
x[13,1] x[13,2] x[13,3] x[13,4] x[13,5] x[13,14] x[13,15] x[13,16]
x[14,1] x[14,2] x[14,3] x[14,4] x[14,5] x[14,14] x[14,15] x[14,16]
x[15,1] x[15,2] x[15,3] x[15,4] x[15,5] x[15,14] x[15,15] x[15,16]
x[16,1] x[16,2] x[16,3] x[16,4] x[16,5] … x[16,14] x[16,15] x[16,16]
```julia
reverse(x,dims=1)
```
16×16 Array{VariableRef,2}:
x[16,1] x[16,2] x[16,3] x[16,4] x[16,5] … x[16,14] x[16,15] x[16,16]
x[15,1] x[15,2] x[15,3] x[15,4] x[15,5] x[15,14] x[15,15] x[15,16]
x[14,1] x[14,2] x[14,3] x[14,4] x[14,5] x[14,14] x[14,15] x[14,16]
x[13,1] x[13,2] x[13,3] x[13,4] x[13,5] x[13,14] x[13,15] x[13,16]
x[12,1] x[12,2] x[12,3] x[12,4] x[12,5] x[12,14] x[12,15] x[12,16]
x[11,1] x[11,2] x[11,3] x[11,4] x[11,5] … x[11,14] x[11,15] x[11,16]
x[10,1] x[10,2] x[10,3] x[10,4] x[10,5] x[10,14] x[10,15] x[10,16]
x[9,1] x[9,2] x[9,3] x[9,4] x[9,5] x[9,14] x[9,15] x[9,16]
x[8,1] x[8,2] x[8,3] x[8,4] x[8,5] x[8,14] x[8,15] x[8,16]
x[7,1] x[7,2] x[7,3] x[7,4] x[7,5] x[7,14] x[7,15] x[7,16]
x[6,1] x[6,2] x[6,3] x[6,4] x[6,5] … x[6,14] x[6,15] x[6,16]
x[5,1] x[5,2] x[5,3] x[5,4] x[5,5] x[5,14] x[5,15] x[5,16]
x[4,1] x[4,2] x[4,3] x[4,4] x[4,5] x[4,14] x[4,15] x[4,16]
x[3,1] x[3,2] x[3,3] x[3,4] x[3,5] x[3,14] x[3,15] x[3,16]
x[2,1] x[2,2] x[2,3] x[2,4] x[2,5] x[2,14] x[2,15] x[2,16]
x[1,1] x[1,2] x[1,3] x[1,4] x[1,5] … x[1,14] x[1,15] x[1,16]
```julia
# Restrições das diagonais secundárias
@constraint(Nrainhas,[sum(diag(reverse(x,dims=1),i)) for i = -(N-1):(N-1)] .<=1)
```
31-element Array{ConstraintRef{Model,MathOptInterface.ConstraintIndex{MathOptInterface.ScalarAffineFunction{Float64},MathOptInterface.LessThan{Float64}},ScalarShape},1}:
x[1,1] ≤ 1.0
x[2,1] + x[1,2] ≤ 1.0
x[3,1] + x[2,2] + x[1,3] ≤ 1.0
x[4,1] + x[3,2] + x[2,3] + x[1,4] ≤ 1.0
x[5,1] + x[4,2] + x[3,3] + x[2,4] + x[1,5] ≤ 1.0
x[6,1] + x[5,2] + x[4,3] + x[3,4] + x[2,5] + x[1,6] ≤ 1.0
x[7,1] + x[6,2] + x[5,3] + x[4,4] + x[3,5] + x[2,6] + x[1,7] ≤ 1.0
x[8,1] + x[7,2] + x[6,3] + x[5,4] + x[4,5] + x[3,6] + x[2,7] + x[1,8] ≤ 1.0
x[9,1] + x[8,2] + x[7,3] + x[6,4] + x[5,5] + x[4,6] + x[3,7] + x[2,8] + x[1,9] ≤ 1.0
x[10,1] + x[9,2] + x[8,3] + x[7,4] + x[6,5] + x[5,6] + x[4,7] + x[3,8] + x[2,9] + x[1,10] ≤ 1.0
x[11,1] + x[10,2] + x[9,3] + x[8,4] + x[7,5] + x[6,6] + x[5,7] + x[4,8] + x[3,9] + x[2,10] + x[1,11] ≤ 1.0
x[12,1] + x[11,2] + x[10,3] + x[9,4] + x[8,5] + x[7,6] + x[6,7] + x[5,8] + x[4,9] + x[3,10] + x[2,11] + x[1,12] ≤ 1.0
x[13,1] + x[12,2] + x[11,3] + x[10,4] + x[9,5] + x[8,6] + x[7,7] + x[6,8] + x[5,9] + x[4,10] + x[3,11] + x[2,12] + x[1,13] ≤ 1.0
⋮
x[16,5] + x[15,6] + x[14,7] + x[13,8] + x[12,9] + x[11,10] + x[10,11] + x[9,12] + x[8,13] + x[7,14] + x[6,15] + x[5,16] ≤ 1.0
x[16,6] + x[15,7] + x[14,8] + x[13,9] + x[12,10] + x[11,11] + x[10,12] + x[9,13] + x[8,14] + x[7,15] + x[6,16] ≤ 1.0
x[16,7] + x[15,8] + x[14,9] + x[13,10] + x[12,11] + x[11,12] + x[10,13] + x[9,14] + x[8,15] + x[7,16] ≤ 1.0
x[16,8] + x[15,9] + x[14,10] + x[13,11] + x[12,12] + x[11,13] + x[10,14] + x[9,15] + x[8,16] ≤ 1.0
x[16,9] + x[15,10] + x[14,11] + x[13,12] + x[12,13] + x[11,14] + x[10,15] + x[9,16] ≤ 1.0
x[16,10] + x[15,11] + x[14,12] + x[13,13] + x[12,14] + x[11,15] + x[10,16] ≤ 1.0
x[16,11] + x[15,12] + x[14,13] + x[13,14] + x[12,15] + x[11,16] ≤ 1.0
x[16,12] + x[15,13] + x[14,14] + x[13,15] + x[12,16] ≤ 1.0
x[16,13] + x[15,14] + x[14,15] + x[13,16] ≤ 1.0
x[16,14] + x[15,15] + x[14,16] ≤ 1.0
x[16,15] + x[15,16] ≤ 1.0
x[16,16] ≤ 1.0
```julia
optimize!(Nrainhas)
```
Gurobi Optimizer version 9.1.1 build v9.1.1rc0 (linux64)
Thread count: 6 physical cores, 12 logical processors, using up to 12 threads
Optimize a model with 94 rows, 256 columns and 1024 nonzeros
Model fingerprint: 0x416086d9
Variable types: 0 continuous, 256 integer (256 binary)
Coefficient statistics:
Matrix range [1e+00, 1e+00]
Objective range [0e+00, 0e+00]
Bounds range [0e+00, 0e+00]
RHS range [1e+00, 1e+00]
Presolve removed 4 rows and 0 columns
Presolve time: 0.01s
Presolved: 90 rows, 256 columns, 1038 nonzeros
Variable types: 0 continuous, 256 integer (256 binary)
Root relaxation: objective 0.000000e+00, 100 iterations, 0.00 seconds
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
H 0 0 0.0000000 0.00000 0.00% - 0s
0 0 0.00000 0 50 0.00000 0.00000 0.00% - 0s
Explored 0 nodes (100 simplex iterations) in 0.01 seconds
Thread count was 12 (of 12 available processors)
Solution count 1: 0
Optimal solution found (tolerance 1.00e-04)
Best objective 0.000000000000e+00, best bound 0.000000000000e+00, gap 0.0000%
User-callback calls 54, time in user-callback 0.00 sec
```julia
Int.(value.(x))
```
16×16 Array{Int64,2}:
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0
0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0
1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0
0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0
0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0
0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0
0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0
0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0
### Exemplo 3 - Voltando pra Rosenbrock
* Vamos usar `JuMP` e `Ipopt` para minimizar a função (não-linear) de Rosenbrock
$$f(x) = (1-x_1)^2 + 100(x_2-x_1^2)^2$$
```julia
rosen = Model(Ipopt.Optimizer)
@variable(rosen,x[1:2])
@NLobjective(rosen,Min,(1-x[1])^2 + 100(x[2]-x[1]^2)^2)
print(rosen)
```
Min (1.0 - x[1]) ^ 2.0 + 100.0 * (x[2] - x[1] ^ 2.0) ^ 2.0
Subject to
```julia
optimize!(rosen)
```
This is Ipopt version 3.13.2, running with linear solver mumps.
NOTE: Other linear solvers might be more efficient (see Ipopt documentation).
Number of nonzeros in equality constraint Jacobian...: 0
Number of nonzeros in inequality constraint Jacobian.: 0
Number of nonzeros in Lagrangian Hessian.............: 3
Total number of variables............................: 2
variables with only lower bounds: 0
variables with lower and upper bounds: 0
variables with only upper bounds: 0
Total number of equality constraints.................: 0
Total number of inequality constraints...............: 0
inequality constraints with only lower bounds: 0
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 0
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 1.0000000e+00 0.00e+00 2.00e+00 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0
1 9.5312500e-01 0.00e+00 1.25e+01 -1.0 1.00e+00 - 1.00e+00 2.50e-01f 3
2 4.8320569e-01 0.00e+00 1.01e+00 -1.0 9.03e-02 - 1.00e+00 1.00e+00f 1
3 4.5708829e-01 0.00e+00 9.53e+00 -1.0 4.29e-01 - 1.00e+00 5.00e-01f 2
4 1.8894205e-01 0.00e+00 4.15e-01 -1.0 9.51e-02 - 1.00e+00 1.00e+00f 1
5 1.3918726e-01 0.00e+00 6.51e+00 -1.7 3.49e-01 - 1.00e+00 5.00e-01f 2
6 5.4940990e-02 0.00e+00 4.51e-01 -1.7 9.29e-02 - 1.00e+00 1.00e+00f 1
7 2.9144630e-02 0.00e+00 2.27e+00 -1.7 2.49e-01 - 1.00e+00 5.00e-01f 2
8 9.8586451e-03 0.00e+00 1.15e+00 -1.7 1.10e-01 - 1.00e+00 1.00e+00f 1
9 2.3237475e-03 0.00e+00 1.00e+00 -1.7 1.00e-01 - 1.00e+00 1.00e+00f 1
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
10 2.3797236e-04 0.00e+00 2.19e-01 -1.7 5.09e-02 - 1.00e+00 1.00e+00f 1
11 4.9267371e-06 0.00e+00 5.95e-02 -1.7 2.53e-02 - 1.00e+00 1.00e+00f 1
12 2.8189505e-09 0.00e+00 8.31e-04 -2.5 3.20e-03 - 1.00e+00 1.00e+00f 1
13 1.0095040e-15 0.00e+00 8.68e-07 -5.7 9.78e-05 - 1.00e+00 1.00e+00f 1
14 1.3288608e-28 0.00e+00 2.02e-13 -8.6 4.65e-08 - 1.00e+00 1.00e+00f 1
Number of Iterations....: 14
(scaled) (unscaled)
Objective...............: 1.3288608467480825e-28 1.3288608467480825e-28
Dual infeasibility......: 2.0183854587685121e-13 2.0183854587685121e-13
Constraint violation....: 0.0000000000000000e+00 0.0000000000000000e+00
Complementarity.........: 0.0000000000000000e+00 0.0000000000000000e+00
Overall NLP error.......: 2.0183854587685121e-13 2.0183854587685121e-13
Number of objective function evaluations = 36
Number of objective gradient evaluations = 15
Number of equality constraint evaluations = 0
Number of inequality constraint evaluations = 0
Number of equality constraint Jacobian evaluations = 0
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations = 14
Total CPU secs in IPOPT (w/o function evaluations) = 0.008
Total CPU secs in NLP function evaluations = 0.000
EXIT: Optimal Solution Found.
```julia
value.(x)
```
2-element Array{Float64,1}:
0.9999999999999899
0.9999999999999792
### Exemplo 4 - Uma aplicação relacionada à pandemia de COVID-19
* Os professores Paulo J. S. Silva e Claudia Sagastizábal da Unicamp criaram a página [Vidas Salvas](http://www.ime.unicamp.br/~pjssilva/vidas_salvas.html) com o objetivo de apresentar uma estimativa do número de vidas salvas no país pelo isolamento social durante a pandemia de COVID-19.
* Para tanto fazem ajustes do parâmetro $R_0$ do modelo SEIR, que representa a taxa de replicação do vírus SARS-CoV-2 (o coronavírus que causa a COVID-19) em uma população inteiramente suscetível, tentando descobrir se ele varia no tempo.
* A página foi escrita em Jupyter e é executada diariamente para atualizar as informações. O código foi escrito em Julia e JuMP. O $R_0$ foi estimado ajustando os dados oficiais ao modelo SEIR, mas permitindo que o $R_t$ varie no tempo. O problema de otimização não linear é então resolvido usando o solver Ipopt.
```julia
```
| 6a5050e8519e344b0fa816cc3bef7a61b8df79a6 | 174,036 | ipynb | Jupyter Notebook | notebooks/Aula03.ipynb | lrsantos11/Tutorial-Julia | 4b2add1d21ff5c9113c6d95ca21cf1ec0256cbc5 | ["CC0-1.0"] | 12 | 2021-01-22T18:19:01.000Z | 2021-05-24T01:03:38.000Z | notebooks/Aula03.ipynb | lrsantos11/Tutorial-Julia | 4b2add1d21ff5c9113c6d95ca21cf1ec0256cbc5 | ["CC0-1.0"] | null | null | null | notebooks/Aula03.ipynb | lrsantos11/Tutorial-Julia | 4b2add1d21ff5c9113c6d95ca21cf1ec0256cbc5 | ["CC0-1.0"] | 9 | 2021-01-22T18:20:35.000Z | 2021-05-24T01:45:31.000Z | 106.967425 | 14,683 | 0.629123 | true | 17,078 | Qwen/Qwen-72B | 1. YES 2. YES | 0.774583 | 0.83762 | 0.648807 | __label__yue_Hant | 0.228131 | 0.345726 |
<a href="https://colab.research.google.com/github/aschelin/SimulacoesAGFE/blob/main/IntroPython.ipynb" target="_parent">Open in Colab</a>
# Introdução a Python
### Variáveis e Estruturas Básicas
> Material didático baseado no livro: *Python Programming and Numerical Methods - A Guide for Engineers and Scientists* by Qingkai Kong, Timmy Siauw and Alexandre Bayen. Imprint: Academic Press.
Variáveis são estruturas usadas em Python para armazenar dados. No entanto, os dados podem assumir várias formas. Por exemplo, os dados podem ser números, palavras ou ter uma estrutura mais complicada. É natural que o Python tenha diferentes tipos de variáveis para armazenar diferentes tipos de dados. Nesta seção, você aprenderá como criar e manipular os tipos de variáveis mais comuns do Python.
De maneira geral, uma *variável* é uma sequência de caracteres ou números associados a uma informação. Veja alguns exemplos para o caso numérico:
```python
x=2
```
```python
y=3.5
```
```python
x+y
```
5.5
Para obter propriedades das variáveis use o comando abaixo:
```python
%whos
```
Variable Type Data/Info
-----------------------------
x int 2
y float 3.5
As variáveis podem conter caracteres, sendo do tipo *String*.
```python
s='Física'
w = ' de Plasmas'
```
```python
p = s+w
print(p)
```
Física de Plasmas
Strings também têm índices para indicar a localização de cada caractere, o que permite encontrar facilmente qualquer um deles. O índice da posição começa em 0, como nos exemplos a seguir.
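Um pequeno exemplo ilustrativo de acesso por índice (a string `s_exemplo` é apenas um exemplo):

```python
s_exemplo = "Plasma"
print(s_exemplo[0])   # primeiro caractere: 'P'
print(s_exemplo[-1])  # último caractere: 'a'
print(s_exemplo[1:4]) # fatia do índice 1 ao 3: 'las'
```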
```python
w = "Hello World"
```
```python
type(w)
```
str
```python
len(w)
```
11
```python
w[:-2]
```
'Hello Wor'
Em Python, um objeto possui vários métodos que podem ser usados para manipulá-lo (falaremos mais sobre programação orientada a objetos posteriormente). A maneira de obter acesso aos vários métodos é usar este padrão “string.method_name”.
```python
w.upper()
```
'HELLO WORLD'
```python
w.count("l")
```
3
```python
w.replace("World", "Brasília")
```
'Hello Brasília'
Existem diferentes maneiras de pré-formatar uma string. Aqui, apresentamos duas maneiras de fazer isso. Por exemplo, se tivermos duas variáveis nome e país e quisermos imprimi-las em uma frase, podemos fazer o seguinte:
```python
materia = "Física de plasmas"
adjetivo = 'legal'
print("%s é muito %s!"%(materia, adjetivo))
```
Física de plasmas é muito legal!
```python
print(f"{materia} é muito {adjetivo}!")
```
Física de plasmas é muito legal!
## Listas
Agora, vamos ver uma estrutura de dados sequenciais mais versátil em Python - as Listas. A maneira de defini-la é usar um par de colchetes [], e os elementos dentro dela são separados por vírgulas. Uma lista pode conter qualquer tipo de dados: numéricos, strings ou outros tipos. Por exemplo:
```python
list_1 = [1, 2, 3]
list_1
```
[1, 2, 3]
```python
list_2 = ['Hello', 'World']
list_2
```
['Hello', 'World']
```python
list_3 = [1, 2, 3, 'Apple', 'orange']
list_3
```
[1, 2, 3, 'Apple', 'orange']
```python
list_4 = [list_1, list_2]
list_4
```
[[1, 2, 3], ['Hello', 'World']]
A maneira de obter um elemento da lista é semelhante às strings: usa-se um índice que começa em 0, como nos exemplos a seguir.
```python
list_3[2]
```
3
```python
list_3[:3]
```
[1, 2, 3]
```python
list_3[-1]
```
'orange'
```python
list_4[0]
```
[1, 2, 3]
```python
list_5 = []
list_5.append(5)
list_5
```
[5]
```python
5 in list_5
```
True
```python
list('Hello World')
```
['H', 'e', 'l', 'l', 'o', ' ', 'W', 'o', 'r', 'l', 'd']
```python
```
## Tuplas
Vamos aprender mais uma estrutura de dados de sequência diferente em Python - as Tuplas. Geralmente uma tupla é definida usando um par de parênteses (), e seus elementos são separados por vírgulas. Por exemplo:
```python
tuple_1 = (1, 2, 3, 2)
tuple_1
```
(1, 2, 3, 2)
```python
len(tuple_1)
```
4
```python
tuple_1[1:4]
```
(2, 3, 2)
```python
tuple_1.count(2)
```
2
Você pode perguntar: qual é a diferença entre listas e tuplas? Se eles são semelhantes entre si, por que precisamos de outra estrutura de dados de sequência?
As tuplas são criadas por uma razão. Segundo a documentação do Python:
> Embora as tuplas possam parecer semelhantes a listas, elas são freqüentemente usadas em diferentes situações e para diferentes propósitos. As tuplas são imutáveis e geralmente contêm uma sequência heterogênea de elementos que são acessados via desempacotamento ou indexação. As listas são mutáveis e seus elementos geralmente são homogêneos e são acessados pela iteração da lista
```python
list_1 = [1, 2, 3]
list_1[2] = 1
list_1
```
[1, 2, 1]
```python
tuple_1[2] = 1
```
O que significa heterogêneo? As tuplas geralmente contêm uma sequência heterogênea de elementos, enquanto as listas geralmente contêm uma sequência homogênea. Vamos ver um exemplo: temos uma lista que contém frutas diferentes. Normalmente o nome dos frutos pode ser armazenado em uma lista, uma vez que são homogêneos. Agora queremos ter uma estrutura de dados para armazenar quantas frutas temos para cada tipo, normalmente é aqui que as tuplas entram, já que o nome da fruta e o número são heterogêneos. Como (‘maçã’, 3), o que significa que temos 3 maçãs.
```python
# a fruit list
['apple', 'banana', 'orange', 'pear']
```
['apple', 'banana', 'orange', 'pear']
```python
# a list of (fruit, number) pairs
[('apple', 3), ('banana', 4) , ('orange', 1), ('pear', 4)]
```
[('apple', 3), ('banana', 4), ('orange', 1), ('pear', 4)]
As tuplas podem ser acessadas descompactando. Neste caso, é necessário que o número de variáveis no lado esquerdo seja igual ao número de elementos no lado direito.
```python
a, b, c = list_1
print(a, b, c)
```
1 2 1
## Sets (conjuntos)
Outro tipo de dados em Python são conjuntos (sets). É um tipo de variável que pode armazenar uma coleção não ordenada sem elementos duplicados. É também suporte para as operações matemáticas como união, interseção, diferença e diferença simétrica. É definido usando um par de chaves {} e seus elementos são separados por vírgulas.
```python
{3, 3, 2, 3, 1, 4, 5, 6, 4, 2}
```
{1, 2, 3, 4, 5, 6}
Um uso rápido disso é descobrir os elementos únicos em uma string, lista ou tupla.
**Exemplo**: Encontre os elementos únicos na lista [1, 2, 2, 3, 2, 1, 2].
```python
set_1 = set([1, 2, 2, 3, 2, 1, 2])
set_1
```
{1, 2, 3}
```python
set_2 = set((2, 4, 6, 5, 2))
set_2
```
{2, 4, 5, 6}
```python
set('Banana')
```
{'B', 'a', 'n'}
**Exemplo**: Encontre a união entre os conjuntos set_1 e set_2:
```python
print(set_1)
print(set_2)
```
{1, 2, 3}
{2, 4, 5, 6}
```python
set_1.union(set_2)
```
{1, 2, 3, 4, 5, 6}
**Exemplo**: Encontre a intersecção entre os conjuntos set_1 e set_2:
```python
set_1.intersection(set_2)
```
{2}
**Exemplo:** O conjunto set_1 é um subconjunto de {1, 2, 3, 3, 4, 5}?
```python
set_1.issubset({1, 2, 3, 3, 4, 5})
```
True
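A diferença e a diferença simétrica, mencionadas no começo desta seção, seguem o mesmo padrão — um esboço rápido reutilizando `set_1` e `set_2` definidos acima:

```python
# elementos de set_1 que não estão em set_2: {1, 3}
print(set_1.difference(set_2))
# elementos que aparecem em exatamente um dos dois conjuntos: {1, 3, 4, 5, 6}
print(set_1.symmetric_difference(set_2))
```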
## Dicionários (dictionaries)
Introduzimos vários tipos de dados sequenciais nas seções anteriores. Agora vamos apresentar a você um tipo novo e útil - os **Dicionários**. É um tipo de mapeamento, o que o torna diferente das variáveis que falamos antes. Em vez de usar uma sequência de números para indexar os elementos (como listas ou tuplas), os dicionários são indexados por chaves, que podem ser uma string, um número ou mesmo uma tupla (mas não uma lista). Um dicionário é um par de valores-chave e cada chave é mapeada para um valor correspondente. É definido usando um par de chaves {}, enquanto os elementos são uma lista de pares chave: valor separados por vírgulas (observe o par chave: valor é separado por dois pontos, com a chave na frente e o valor no final).
```python
dict_1 = {'apple':3, 'orange':4, 'pear':2}
dict_1
```
{'apple': 3, 'orange': 4, 'pear': 2}
Dentro de um dicionário, os elementos são armazenados sem ordem, portanto, você não pode acessar um dicionário baseado em uma sequência de números de índice. Para obter acesso a um dicionário, precisamos usar a chave do elemento - dicionário [chave].
```python
dict_1['apple']
```
3
Podemos obter todas as chaves de um dicionário usando o método *keys* ou todos os valores usando o método *values*.
```python
dict_1.keys()
```
dict_keys(['apple', 'orange', 'pear'])
```python
dict_1.values()
```
dict_values([3, 4, 2])
```python
len(dict_1)
```
3
Podemos definir um dicionário vazio e preencher o elemento mais tarde. Ou podemos transformar uma lista de tuplas com pares (chave, valor) em um dicionário.
```python
school_dict = {}
school_dict['UC Berkeley'] = 'USA'
school_dict
```
{'UC Berkeley': 'USA'}
```python
school_dict['Oxford'] = 'UK'
school_dict
```
{'Oxford': 'UK', 'UC Berkeley': 'USA'}
```python
dict([("UC Berkeley", "USA"), ('Oxford', 'UK')])
```
{'Oxford': 'UK', 'UC Berkeley': 'USA'}
```python
"UC Berkeley" in school_dict
```
True
```python
"Harvard" not in school_dict
```
True
```python
list(school_dict)
```
['UC Berkeley', 'Oxford']
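Também é possível percorrer os pares (chave, valor) com o método *items* — um pequeno exemplo reutilizando o `school_dict` definido acima:

```python
for escola, pais in school_dict.items():
    print(escola, '->', pais)
```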
```python
```
# Numpy Arrays
Agora, vamos apresentar a maneira mais comum de lidar com matrizes em Python usando o *módulo Numpy*. O Numpy é provavelmente o módulo de computação numérica mais fundamental do Python.
Para usar o módulo Numpy, precisamos importá-lo primeiro. Uma forma convencional de importá-lo é usar “np” como um nome abreviado.
```python
import numpy as np
```
Para definir uma matriz em Python, você pode usar a função *np.array* para converter uma lista.
**Exemplo:**
Construa as seguintes matrizes usando o numpy
\begin{equation}
x = \begin{pmatrix}
1 & 4 & 3 \\
\end{pmatrix}
\end{equation}
\begin{equation}
y = \begin{pmatrix}
1 & 4 & 3 \\
9 & 2 & 7 \\
\end{pmatrix}
\end{equation}
```python
x = np.array([1, 4, 3])
x
```
array([1, 4, 3])
```python
y = np.array([[1, 4, 3], [9, 2, 7]])
y
```
array([[1, 4, 3],
[9, 2, 7]])
Uma matriz 2-D pode usar listas aninhadas para representar, com a lista interna representando cada linha.
Muitas vezes é útil saber o tamanho ou comprimento de um array. O atributo *shape*, chamado em uma matriz M, retorna uma tupla cujo primeiro elemento é o número de linhas da matriz M e o segundo é o número de colunas de M. Observe que a saída do atributo *shape* é uma tupla. O atributo *size*, chamado em uma matriz M, retorna o número total de elementos da matriz M.
Encontre as linhas, colunas e o tamanho total da matriz y.
```python
y.shape
```
(2, 3)
```python
y.size
```
6
Você pode notar a diferença de que usamos apenas *y.shape* em vez de *y.shape()*, porque a forma é um atributo e não um método neste objeto de matriz. Apresentaremos mais sobre a programação orientada a objetos posteriormente. Por enquanto, você precisa lembrar que quando chamamos um método em um objeto, precisamos usar os parênteses, enquanto o atributo não.
Muitas vezes, queremos gerar arrays que possuam uma estrutura ou padrão. Por exemplo, podemos desejar criar a matriz $z = [1 2 3… 2000]$. Seria muito complicado digitar toda a descrição de $z$ no Python. Para gerar matrizes ordenadas e espaçadas uniformemente, é útil usar a função *arange* no Numpy.
**Exemplo:** Crie uma matriz $z$ de 1 a 2000 com um incremento 1
```python
z = np.arange(1, 2000, 1)
z
```
array([ 1, 2, 3, ..., 1997, 1998, 1999])
Os primeiros dois números de $z$ são o início e o fim da sequência e o último é o incremento. Como é muito comum ter um incremento de 1, se um incremento não for especificado, o Python usará um valor padrão de 1. Portanto, *np.arange(1, 2000)* terá o mesmo resultado que *np.arange(1,2000,1)*. Incrementos negativos ou não inteiros também podem ser usados. Se o incremento “perder” o último valor, ele somente se estenderá até o valor imediatamente anterior ao valor final. Por exemplo, $x = np.arange(1,8,2)$ seria $[1, 3, 5, 7]$.
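Podemos conferir rapidamente essas afirmações (um esboço simples, reutilizando o `np` já importado acima):

```python
print(np.arange(1, 8, 2))   # o valor final 8 é "perdido": [1 3 5 7]
print(np.arange(5, 0, -1))  # incremento negativo: [5 4 3 2 1]
```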
**Exemplo:** Gere uma matriz com [0.5, 1, 1.5, 2, 2.5].
```python
np.arange(0.5, 3, 0.5)
```
array([0.5, 1. , 1.5, 2. , 2.5])
Às vezes, queremos garantir um ponto inicial e final para uma matriz, mas ainda ter elementos espaçados uniformemente. Por exemplo, queremos um array que comece em 1, termine em 8 e tenha exatamente 10 elementos. Para este propósito, você pode usar a função *np.linspace*. O *linspace* aceita três valores de entrada separados por vírgulas. Portanto, *A = linspace (a, b, n)* gera uma matriz de *n* elementos igualmente espaçados começando em *a* e terminando em *b*.
**Exemplo:** Use o *linspace* para gerar uma matriz começando em 3, terminando em 9 e contendo 10 elementos
```python
np.linspace(3, 9, 10)
```
array([3. , 3.66666667, 4.33333333, 5. , 5.66666667,
6.33333333, 7. , 7.66666667, 8.33333333, 9. ])
Obter acesso ao array numpy 1D é semelhante ao que descrevemos para listas ou tuplas, tem um índice para indicar a localização. Por exemplo:
```python
# get the 2nd element of x
x[1]
```
4
```python
# get all the element after the 2nd element of x
x[1:]
```
array([4, 3])
```python
# get the last element of x
x[-1]
```
3
Para arrays em 2D, é um pouco diferente, já que temos linhas e colunas. Para obter acesso aos dados em um array 2D M, precisamos usar $M[r, c]$, onde a linha *r* e a coluna *c* são separadas por vírgula. Essa é a indexação da array. O *r* e *c* podem ser um único número, uma lista e assim por diante. Se você pensar apenas no índice da linha ou no índice da coluna, ele será semelhante a uma array 1D. Vamos usar $y = \begin{pmatrix}
1 & 4 & 3 \\
9 & 2 & 7 \\
\end{pmatrix}
$ como exemplo.
**Exemplo:** Obtenha o elemento na primeira linha e na 2ª coluna da matriz y.
```python
y[0,1]
```
4
```python
# última coluna:
y[:, -1]
```
array([3, 7])
```python
# terceira coluna
y[:, [0, 2]]
```
array([[1, 3],
[9, 7]])
Existem algumas arrays predefinidas que são úteis. Por exemplo, o *np.zeros*, *np.ones* e *np.empty* são 3 funções úteis. Vamos ver os exemplos.
```python
np.zeros((3, 5))
```
array([[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.]])
```python
np.ones((5, 3))
```
array([[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.]])
A forma da matriz é definida em uma tupla com a linha como o primeiro item e a coluna como o segundo. Se você só precisa de uma matriz 1D, use apenas um número como entrada: $np.ones(5)$.
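Um exemplo rápido do caso 1D:

```python
# array 1D com 5 uns
print(np.ones(5))
```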
**Exemplo:** Gere uma array vazia 1D com 3 elementos.
```python
np.empty(3)
```
array([0.75, 0.75, 0. ])
A matriz "vazia" não está realmente vazia: ela é preenchida com os valores arbitrários que já estavam naquela região da memória, por isso os números que aparecem não têm significado e não devem ser usados.
Você pode reatribuir um valor de uma matriz usando a indexação de matriz e o operador de atribuição. Você pode reatribuir vários elementos a um único número usando a indexação de matriz no lado esquerdo. Você também pode reatribuir vários elementos de uma matriz, desde que o número de elementos sendo atribuídos e o número de elementos atribuídos sejam os mesmos. Você pode criar uma matriz usando a indexação de matriz.
**Exemplo:** Seja a = [1, 2, 3, 4, 5, 6]. Reatribua o quarto elemento de A para 7. Reatribua o primeiro, o segundo e o terceiro elemento para 1. Reatribua o segundo, terceiro e quarto elementos para 9, 8 e 7.
```python
a = np.arange(1, 7)
a
```
array([1, 2, 3, 4, 5, 6])
```python
a[3] = 7
a
```
array([1, 2, 3, 7, 5, 6])
```python
a[:3] = 1
a
```
array([1, 1, 1, 7, 5, 6])
```python
a[1:4] = [9, 8, 7]
a
```
array([1, 9, 8, 7, 5, 6])
**Exemplo:** Crie uma matriz de zeros com a forma 2 por 2 e defina $b = \begin{pmatrix}
1 & 2 \\
3 & 4 \\
\end{pmatrix}$ usando a indexação de matriz.
```python
b = np.zeros((2, 2))
b[0, 0] = 1
b[0, 1] = 2
b[1, 0] = 3
b[1, 1] = 4
b
```
array([[1., 2.],
[3., 4.]])
Embora você possa criar uma matriz do zero usando a indexação, não é aconselhável. Isso pode confundir você e os erros serão mais difíceis de encontrar em seu código posteriormente. Por exemplo, b[1, 1] = 1 dará o resultado $b = \begin{pmatrix}
0 & 0 \\
0 & 1 \\
\end{pmatrix}$, o que é estranho porque b[0, 0], b[0, 1] e b[1, 0] nunca foram especificados.
A aritmética básica é definida para matrizes. No entanto, existem operações entre um escalar (um único número) e uma matriz e operações entre duas matrizes. Começaremos com operações entre um escalar e um array. Para ilustrar, seja *c* um escalar e *b* uma matriz.
$b + c$, $b - c$, $b * c$ e $b/c$ adicionam *c* a cada elemento de *b*, subtraem *c* de cada elemento de *b*, multiplicam cada elemento de *b* por *c* e dividem cada elemento de *b* por *c*, respectivamente.
**Exemplo** Seja $b = \begin{pmatrix}
1 & 2 \\
3 & 4 \\
\end{pmatrix}$. Adicione e subtraia 2 de *b*. Multiplique e divida *b* por 2. Eleve ao quadrado todos os elementos de *b*. Seja *c* um escalar. Por conta própria, verifique a comutatividade da adição e da multiplicação escalar: $b+c=c+b$ e $cb=bc$.
```python
b + 2
```
array([[3., 4.],
[5., 6.]])
```python
b - 2
```
array([[-1., 0.],
[ 1., 2.]])
```python
2 * b
```
array([[2., 4.],
[6., 8.]])
```python
b**2
```
array([[ 1., 4.],
[ 9., 16.]])
Descrever operações entre duas matrizes é mais complicado. Sejam *b* e *d* duas matrizes do mesmo tamanho. A operação $b - d$ pega cada elemento de *b* e subtrai o elemento correspondente de *d*. Da mesma forma, $b + d$ adiciona cada elemento de $d$ ao elemento correspondente de $b$.
**Exemplo:** Seja $b = \begin{pmatrix}
1 & 2 \\
3 & 4 \\
\end{pmatrix}$ e $d = \begin{pmatrix}
3 & 4 \\
5 & 6 \\
\end{pmatrix}$. Calcule $b + d$ e $b - d$.
```python
b = np.array([[1, 2], [3, 4]])
d = np.array([[3, 4], [5, 6]])
```
```python
b + d
```
array([[ 4, 6],
[ 8, 10]])
```python
b + d
```
array([[ 4, 6],
[ 8, 10]])
```python
b - d
```
array([[-2, -2],
[-2, -2]])
Existem dois tipos diferentes de multiplicação (e divisão) de matrizes. Há multiplicação de matriz elemento por elemento e multiplicação de matriz padrão. Por enquanto, mostraremos apenas como a multiplicação e a divisão da matriz elemento por elemento funcionam. A multiplicação de matrizes padrão será descrita posteriormente. Para as matrizes b e d do mesmo tamanho, b * d pega cada elemento de *b* e os multiplica pelo elemento correspondente de *d*. O mesmo é válido para / e **.
**Exemplo** Calcule $b * d$, $b / d$ e $b^d$ [use ($b**d$) para o último].
```python
b * d
```
array([[ 3, 8],
[15, 24]])
```python
b / d
```
array([[0.33333333, 0.5 ],
[0.6 , 0.66666667]])
```python
b**d
```
array([[ 1, 16],
[ 243, 4096]])
A transposta de uma matriz, b, é uma matriz, d, onde b [i, j] = d [j, i]. Em outras palavras, a transposta troca as linhas e colunas de b. Você pode transpor uma matriz em Python usando o método de matriz T.
**Exemplo:** Calcule a transposição da matriz b.
```python
b.T
```
array([[1, 3],
[2, 4]])
O Numpy tem muitas funções aritméticas, como sin, cos, etc., que podem usar arrays como argumentos de entrada. A saída é a função avaliada para cada elemento da matriz de entrada. Uma função que recebe uma matriz como entrada e executa a função nela é considerada vetorizada.
**Exemplo:** Calcule *np.sqrt* para x = [1, 4, 9, 16].
```python
x = [1, 4, 9, 16]
np.sqrt(x)
```
array([1., 2., 3., 4.])
As operações lógicas são definidas apenas entre um escalar e uma array e entre duas arrays do mesmo tamanho. Entre um escalar e um array, a operação lógica é conduzida entre o escalar e cada elemento da array. Entre duas arrays, a operação lógica é conduzida elemento por elemento.
**Exemplo:** Verifique quais elementos da matriz x = [1, 2, 4, 5, 9, 3] são maiores que 3. Verifique quais elementos em x são maiores do que o elemento correspondente em y = [0, 2, 3, 1, 2 , 3].
```python
x = np.array([1, 2, 4, 5, 9, 3])
y = np.array([0, 2, 3, 1, 2, 3])
```
```python
x > 3
```
array([False, False, True, True, True, False])
```python
x > y
```
array([ True, False, True, True, True, False])
Python pode indexar elementos de uma matriz que satisfaça uma expressão lógica.
**Exemplo:** Seja *x* o mesmo array do exemplo anterior. Crie uma variável *y* que contenha todos os elementos de *x* estritamente maiores que 3. Atribua todos os valores de *x* maiores que 3, o valor 0.
```python
y = x[x > 3]
y
```
array([4, 5, 9])
```python
x[x > 3] = 0
x
```
array([1, 2, 0, 0, 0, 3])
# Sumário
1. Armazenar, recuperar e manipular informações e dados é importante em qualquer campo da ciência e da engenharia.
2. Variáveis são uma ferramenta importante para lidar com valores de dados.
3. Existem diferentes tipos de dados para armazenar informações em Python: int, float, boolean para valores únicos e strings, listas, tuplas, sets, dicionários para dados sequenciais.
4. O array Numpy é um objeto poderoso muito usado na computação científica.
```python
```
| 5bbbe4b3882822c3bc289a301494d784984bbc51 | 180,477 | ipynb | Jupyter Notebook | IntroPython.ipynb | aschelin/SimulacoesAGFE | 5294771ff8bf85a1129611bd3406780ef64ac75a | ["MIT"] | null | null | null | IntroPython.ipynb | aschelin/SimulacoesAGFE | 5294771ff8bf85a1129611bd3406780ef64ac75a | ["MIT"] | null | null | null | IntroPython.ipynb | aschelin/SimulacoesAGFE | 5294771ff8bf85a1129611bd3406780ef64ac75a | ["MIT"] | null | null | null | 56.807365 | 57,534 | 0.721798 | true | 7,161 | Qwen/Qwen-72B | 1. YES 2. YES | 0.798187 | 0.855851 | 0.683129 | __label__por_Latn | 0.999284 | 0.425469 |
<a href="https://colab.research.google.com/github/ofenerci/2013_fall_ASTR599/blob/master/9Sinif_4Odev.ipynb" target="_parent">Open in Colab</a>
Kod yazmak istediğinde ``+ Code`` tuşuna basman, metin girmek istediğinde ``+ Text`` düğmesine basman yeterli. Metin yazarken Colab'a ait özel markdown kelime işlemcesini kullanacağız. Bu, GitHub'ın Markdown'ından bazı bakımlardan farklı komutlar içerecek.
Örnek olarak matematiksel formüller kullanacağız. Bir satır içerisinde matematiksel ifade yazmak istersen, matematiksel ifadeyi iki dolar işareti arasına alman gerekir. Örnek olarak $x^2$ yazmak için yapman gereken \\$x^2\$ şeklinde yazmaktır. Pisagor teoremi örnek olarak $a^2+b^2=c^2$ şeklinde verilir. Eğer matematiği iki satır arasında göstermek istiyorsan,
\\$$
x^2+y^2=c^2
\$$
olarak yazabilirsin. Mesela yukarıdaki denklemi şu şekilde yazabiliriz.
$$
x^2+y^2=c^2
$$
Mesela matematiğin en ünlü formülünü yazalım:
$$
e^{i\pi}+1=0
$$
Bu formülün niçin matematiğin en ünlü formülü olduğunu öğrenmek için [buraya](https://www.youtube.com/watch?v=IUTGFQpKaPU) bakabilirsin. Tabiki de daha güzel denklemleri de yazabilirsin. Bunun gibi:
\begin{align}
(a+b)^3 &= (a+b)^2(a+b)\\
&=(a^2+2ab+b^2)(a+b)\\
&=(a^3+2a^2b+ab^2) + (a^2b+2ab^2+b^3)\\
&=a^3+3a^2b+3ab^2+b^3
\end{align}
Matematik ve fizik formüllerini kullanırken LaTeX matematik yazım kurallarını kullanacağız. Daha fazla bilgi için [buraya](https://math.meta.stackexchange.com/questions/5020/mathjax-basic-tutorial-and-quick-reference) bakabilirsin.
**Ödev 1** Yukarıda verilen bağlantıyı kullanırak LaTeX ile bir fizik veya matematik formülünü aşağıya yaz.
*Not:* Colab LaTeX'in bütün yazım kurallarını kapsamıyor. Onun için LaTeX'in bütün özelliklerini Colab'da kullanımıyabilirsin. Yukarıdaki bağlantıdaki bazı örnekleri yazamıyabilirsin.
1'den $n$'e kadar sayıların toplamını kısaca $\sum$ işareti ile göstereceğim. Örnek olarak 1'den $n$'e kadar sayıların toplamı
$$
\sum_{i=0}^n i = \frac{n(n+1)}{2}
$$
formülü ile verilir.
Örnek olarak 1'den 100'e kadar sayıların toplamını kısaca
$$
\sum_{i=0}^{100} i = \frac{100(100+1)}{2}
$$
verilir. Yukarıdaki toplamın cevabı 5050'dir. Yukarıdaki formülü Gauss'un ilkokuldayken bulduğu söyleniyor. Bunu aşağıdaki kodu kullanarak doğrulayalım.
```
sum = 0
for i in range(1,101):
sum = sum +i
print (sum)
```
Şimdi de yukarıda yazdığımız kodu bir fonksiyonun içine koyalım.
```
def Gauss(n):
    sum = 0
    for i in range(1, n+1):
        sum = sum + i
    return sum
```
```
Gauss(100)
```
5050
Şimdi de *Collatz Sanısı'na* (Collatz Conjecture) bakalım. Biz buna sanı diyeceğiz. Matematiksel olarak ispatlanmadı ama şimdiye kadar yanlışlanamadı da. Eğer doğru olduğunu ispatlarsan matematikteki en büyük ödüllerden birini (Fields Medal) alacağın kesindir. (Matematikte Nobel ödülü verilmiyor.)
**Collatz Sanısı:** Verilen bir $n$ ile diziye başlayın. Eğer $n$ çift ise ikiye bölün, eğer tek sayı ise bu sayıyı 3 ile çarpıp 1 ekleyin. Bu dizi 1'e ulaştığında dizi sona erer. Hangi $n$ ile başlarsak başlayalım, dizi her zaman 1'e ulaşır. Bunu matematiksel ifade olarak yazarsak:
$$
f(n) =
\begin{cases}
n/2, & \text{eğer $n$ çift ise } \\
3n+1, & \text{eğer $n$ is tek ise}
\end{cases}
$$
Yukarıdaki ifadeyi kodun içine koyalım:
```
def Collatz(n):
while n>1:
if n%2 == 0: # % operatörü bölümden kalan sayıyı verir. Kendin dene istersen
n = n/2
else:
n= 3*n +1
print(n, end=" ") # end' burada çıktıyı bir sıraya koymak için kullandım.
```
```
Collatz(19)
```
58 29.0 88.0 44.0 22.0 11.0 34.0 17.0 52.0 26.0 13.0 40.0 20.0 10.0 5.0 16.0 8.0 4.0 2.0 1.0
Yukarıdaki fonksiyonu değiştirip, ulaştığı en yüksek sayıyı da bulan bir fonksiyon yazalım. Örnek olarak şöyle bir çıktı versin:
Collatz sanısı 19 için doğrulandı ve bu sınama sonunda ulaşılan en yüksek sayı 88 oldu.
```
```
```
def Collatz_max(n):
n_init = n
max_number = n
while n>1:
if n%2 == 0: # % operatörü bölümden kalan sayıyı verir. Kendin dene istersen
n = n/2
else:
n= 3*n +1
if (n > max_number):
max_number = n
print("Collatz sanısı " + str(n_init) + " için doğrulandı ve bu sınama sonunda ulaşılan en yüksek sayı " + str(max_number)+" oldu")
```
```
Collatz_max(19)
```
Collatz sanısı 19 için doğrulandı ve bu sınama sonunda ulaşılan en yüksek sayı 88.0 oldu
**Ödev 2 (Çözümlü):** Matematikçi Leibniz aşağıdaki formülü vermiştir.
$$
\frac{\pi}{4}= \sum_{k=0}^{\infty}\frac{(-1)^k}{2k+1}
$$
Yukarıdaki formülün açılımı
$$
\frac{\pi}{4}= 1 - \frac{1}{3}+\frac{1}{5}-\frac{1}{7}+\frac{1}{9}-\ldots
$$
şeklinde yazabiliriz. ```Leibniz_pi(n)``` isimli bir fonksiyon yazınız ve fonksiyon ```n```'e kadar Leibniz formülünü kullansın. Sonuçta $\pi$'nin değerini ekrana bassın.
```
def Leibniz_pi(n):
sum=0
for i in range(0,n+1):
sum= sum+(-1)**i/(2*i+1)
result = sum*4
print("Pi="+ str(result))
```
```
Leibniz_pi(1000)
```
Pi=3.1425916543395442
**Ödev 3:** 3 ve 5'in katları
Eğer 10'dan düşük (10 dahil değil) sayılardan 3 veya 5'in katları olan sayıları dizersek, 3,5,6,9 sayılarını elde ederiz. Bu sayıların toplamı 23 dir.
1000'den düşük olan (1000 dahil değil) 3 veya 5'in katları olan sayıların toplamını yazan bir fonksiyon yazınız.
```
def multiples(n):
# Burayı doldur.
```
**Ödev 4:** Çift Fibonacci Sayıları
Fibonacci dizisinde her yeni terim kendinden önceki iki terimin toplamından elde edilir. 1 ve 2 ile başlayarak, ilk 10 terimin Fibonacci dizisi
$$
1,2,3,5,8,13,21,34,55,89,\ldots
$$
olur.
Yukarıdaki diziyi matematiksel olarak yazarsak
$$
F_n=
\begin{cases}
1, & n=1\\
2, & n=2 \\
F_{n-1}+F_{n-2}, & n> 2
\end{cases}
$$
Fibonacci dizisinde değeri 4,000,000 (4 milyon dahil)'u geçmeyen sayıya kadar çift değerli Fibonacci terimlerinin toplamını bulun.
Yardım: Aşağıdaki ```Fibonacci_term(n)``` n'ninci Fibonacci terimini vermektedir. Bu fonksiyonu başka bir fonksiyon içinde kullanarak ```Fibonacci_evenSum``` çift sayıların toplamını yapın.
```
def Fibonacci_term(n):
if (n == 1):
return 1
if (n == 2):
return 2
else:
return Fibonacci_term(n-1) + Fibonacci_term(n-2)
```
```
Fibonacci_term(3)
```
3
```
def Fibonacci_evenSum(nlast):
# burayı doldur. nlast burada 4,000,000 olarak deneyeceksin.
```
```
Fibonacci_evenSum(10)
```
44
```
```
| 216e380a41a8ee4a3224ca0e8829cfb950595a56 | 15,902 | ipynb | Jupyter Notebook | 9Sinif_4Odev.ipynb | ofenerci/2013_fall_ASTR599 | b9ed92c24dce7676549f0f575ec1f1dc7c7a474a | ["Apache-2.0"] | null | null | null | 9Sinif_4Odev.ipynb | ofenerci/2013_fall_ASTR599 | b9ed92c24dce7676549f0f575ec1f1dc7c7a474a | ["Apache-2.0"] | null | null | null | 9Sinif_4Odev.ipynb | ofenerci/2013_fall_ASTR599 | b9ed92c24dce7676549f0f575ec1f1dc7c7a474a | ["Apache-2.0"] | 1 | 2020-04-27T11:23:57.000Z | 2020-04-27T11:23:57.000Z | 28.703971 | 383 | 0.468306 | true | 2,626 | Qwen/Qwen-72B | 1. YES 2. YES | 0.841826 | 0.884039 | 0.744207 | __label__tur_Latn | 0.999258 | 0.567374 |
(EIGVALEIGVEC)=
# 2.2 Eigenvalores y eigenvectores
```{admonition} Notas para contenedor de docker:
Comando de docker para ejecución de la nota de forma local:
nota: cambiar `<ruta a mi directorio>` por la ruta de directorio que se desea mapear a `/datos` dentro del contenedor de docker y `<versión imagen de docker>` por la versión más actualizada que se presenta en la documentación.
`docker run --rm -v <ruta a mi directorio>:/datos --name jupyterlab_optimizacion -p 8888:8888 -d palmoreck/jupyterlab_optimizacion:<versión imagen de docker>`
password para jupyterlab: `qwerty`
Detener el contenedor de docker:
`docker stop jupyterlab_optimizacion`
Documentación de la imagen de docker `palmoreck/jupyterlab_optimizacion:<versión imagen de docker>` en [liga](https://github.com/palmoreck/dockerfiles/tree/master/jupyterlab/optimizacion).
```
---
Nota generada a partir de [liga](https://www.dropbox.com/s/s4ch0ww1687pl76/3.2.2.Factorizaciones_matriciales_SVD_Cholesky_QR.pdf?dl=0).
```{admonition} Al final de esta nota el y la lectora:
:class: tip
* Aprenderá las definiciones más relevantes en el tema de eigenvalores y eigenvectores para su uso en el desarrollo de algoritmos en el análisis numérico en la resolución de problemas del álgebra lineal numérica. En específico las definiciones de: diagonalizable o *non defective* y similitud son muy importantes.
* Comprenderá el significado geométrico de calcular los eigenvalores y eigenvectores de una matriz simétrica para una forma cuadrática que define a una elipse.
* Aprenderá cuáles problemas en el cálculo de eigenvalores y eigenvectores de una matriz son bien y mal condicionados.
```
En esta nota **asumimos** que $A \in \mathbb{R}^{n \times n}$.
## Eigenvalor (valor propio o característico)
```{admonition} Definición
El número $\lambda$ (real o complejo) se denomina *eigenvalor* de A si existe $v \in \mathbb{C}^n - \{0\}$ tal que $Av = \lambda v$. El vector $v$ se nombra eigenvector (vector propio o característico) de $A$ correspondiente al eigenvalor $\lambda$.
```
```{admonition} Observación
:class: tip
Observa que si $Av=\lambda v$ y $v \in \mathbb{C}^n-\{0\}$ entonces la matriz $A-\lambda I_n$ es singular por lo que su determinante es cero.
```
```{admonition} Comentarios
* Una matriz con componentes reales puede tener eigenvalores y eigenvectores con valores en $\mathbb{C}$ o $\mathbb{C}^n$ respectivamente.
* El conjunto de eigenvalores se le nombra **espectro de una matriz** y se denota como:
$$\lambda(A) = \{ \lambda | \det(A-\lambda I_n) = 0\}.$$
* El polinomio
$$p(z) = \det(A-zI_n) = (-1)^nz^n + a_{n-1}z^{n-1}+ \dots + a_1z + a_0$$
se le nombra **polinomio característico asociado a $A$** y sus raíces o ceros son los eigenvalores de $A$.
* La multiplicación de $A$ por un eigenvector es un reescalamiento y posible cambio de dirección del eigenvector.
* Si consideramos que nuestros espacios vectoriales se definen sobre $\mathbb{C}$ entonces siempre podemos asegurar que $A$ tiene un eigenvalor con eigenvector asociado. En este caso $A$ tiene $n$ eigenvalores y pueden o no repetirse.
* Se puede probar que el determinante de $A$: $\det(A) = \displaystyle \prod_{i=1}^n \lambda_i$ y la traza de $A$: $tr(A) = \displaystyle \sum_{i=1}^n \lambda_i$.
```
### Ejemplo
```python
import numpy as np
```
```python
np.set_printoptions(precision=3, suppress=True)
```
```python
A=np.array([[10,-18],[6,-11]])
```
```python
print(A)
```
[[ 10 -18]
[ 6 -11]]
**En *NumPy* con el módulo [numpy.linalg.eig](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eig.html) podemos obtener eigenvalores y eigenvectores**
```python
evalue, evector = np.linalg.eig(A)
```
```python
print('eigenvalores:')
print(evalue)
print('eigenvectores:')
print(evector)
```
eigenvalores:
[ 1. -2.]
eigenvectores:
[[0.894 0.832]
[0.447 0.555]]
```{margin}
$Av_1 = \lambda_1 v_1$.
```
```python
print('matriz * eigenvector:')
print(A@evector[:,0])
print('eigenvalor * eigenvector:')
print(evalue[0]*evector[:,0])
```
matriz * eigenvector:
[0.894 0.447]
eigenvalor * eigenvector:
[0.894 0.447]
```{margin}
$Av_2 = \lambda_2 v_2$.
```
```python
print('matriz * eigenvector:')
print(A@evector[:,1])
print('eigenvalor * eigenvector:')
print(evalue[1]*evector[:,1])
```
matriz * eigenvector:
[-1.664 -1.109]
eigenvalor * eigenvector:
[-1.664 -1.109]
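Un bosquejo rápido para verificar numéricamente el comentario anterior sobre el determinante y la traza, reutilizando la matriz `A` y los eigenvalores `evalue` calculados arriba:

```python
# det(A) coincide con el producto de los eigenvalores
print(np.linalg.det(A), np.prod(evalue))
# tr(A) coincide con la suma de los eigenvalores
print(np.trace(A), np.sum(evalue))
```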
### Ejemplo
Si $v$ es un eigenvector entonces $cv$ es eigenvector donde: $c$ es una constante distinta de cero.
```python
const = -2
const_evector = const*evector[:,0]
print(const_evector)
```
[-1.789 -0.894]
```{margin}
$cv$ es un eigenvector con eigenvalor asociado $c\lambda$ pues $A(cv) = \lambda(cv)$ se satisface si $Av = \lambda v$ y $c \neq 0$.
```
```python
print('matriz * (constante * eigenvector):')
print(A@const_evector)
print('eigenvalor * (constante * eigenvector):')
print(evalue[0]*const_evector)
```
matriz * (constante * eigenvector):
[-1.789 -0.894]
eigenvalor * (constante * eigenvector):
[-1.789 -0.894]
### Ejemplo
Una matriz con entradas reales puede tener eigenvalores y eigenvectores complejos:
```python
A=np.array([[3,-5],[1,-1]])
```
```python
print(A)
```
[[ 3 -5]
[ 1 -1]]
```python
evalue, evector = np.linalg.eig(A)
```
```{margin}
Para $A \in \mathbb{R}^{n \times n}$ se tiene: $\lambda \in \lambda(A)$ si y sólo si $\bar{\lambda} \in \lambda(A)$, con $\bar{\lambda}$ el conjugado de $\lambda$.
```
```python
print('eigenvalores:')
print(evalue)
print('eigenvectores:')
print(evector)
```
eigenvalores:
[1.+1.j 1.-1.j]
eigenvectores:
[[0.913+0.j 0.913-0.j ]
[0.365-0.183j 0.365+0.183j]]
```{admonition} Observación
:class: tip
En el ejemplo anterior cada eigenvalor tiene una multiplicidad simple y la multiplicidad geométrica de cada eigenvalor es $1$.
```
### Ejemplo
Los eigenvalores de una matriz diagonal son iguales a su diagonal y sus eigenvectores son los vectores canónicos $e_1, e_2, \dots e_n$.
```python
A = np.diag([2, 2, 2, 2])
```
```python
print(A)
```
[[2 0 0 0]
[0 2 0 0]
[0 0 2 0]
[0 0 0 2]]
```python
evalue, evector = np.linalg.eig(A)
```
```python
print('eigenvalores:')
print(evalue)
print('eigenvectores:')
print(evector)
```
eigenvalores:
[2. 2. 2. 2.]
eigenvectores:
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
```{admonition} Definición
La **multiplicidad algebraica** de un eigenvalor es su multiplicidad considerado como raíz/cero del polinomio característico $p(z)$. Si no se repite entonces tal eigenvalor se le nombra de multiplicidad **simple**.
La **multiplicidad geométrica** de un eigenvalor es el número máximo de eigenvectores linealmente independientes asociados a éste.
```
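Un bosquejo sencillo para calcular la multiplicidad geométrica como la dimensión del espacio nulo de $A - \lambda I_n$, esto es, $n - \text{rank}(A-\lambda I_n)$ (la matriz `A_ejemplo` es sólo ilustrativa):

```python
A_ejemplo = np.array([[2, 1, 0],
                      [0, 2, 1],
                      [0, 0, 2.0]])
lambda_ejemplo = 2
n = A_ejemplo.shape[0]
# multiplicidad geométrica del eigenvalor 2
print(n - np.linalg.matrix_rank(A_ejemplo - lambda_ejemplo*np.eye(n)))
```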
### Ejemplo
Los eigenvalores de una matriz triangular son iguales a su diagonal.
```python
A=np.array([[10,0, -1],
[6,10, 10],
[3, 4, 11.0]])
A = np.triu(A)
```
```python
print(A)
```
[[10. 0. -1.]
[ 0. 10. 10.]
[ 0. 0. 11.]]
```python
evalue, evector = np.linalg.eig(A)
```
```{margin}
Observa que el eigenvalor igual a $10$ está repetido dos veces (multiplicidad algebraica igual a $2$) y se tienen dos eigenvectores linealmente independientes asociados a éste (multiplicidad geométrica igual a $2$).
```
```python
print('eigenvalores:')
print(evalue)
print('eigenvectores:')
print(evector)
```
eigenvalores:
[10. 10. 11.]
eigenvectores:
[[ 1. 0. -0.099]
[ 0. 1. 0.99 ]
[ 0. 0. 0.099]]
**Otro ejemplo:**
```python
A=np.array([[10,18, -1],
[6,10, 10],
[3, 4, 11.0]])
A = np.triu(A)
```
```python
print(A)
```
[[10. 18. -1.]
[ 0. 10. 10.]
[ 0. 0. 11.]]
```python
evalue, evector = np.linalg.eig(A)
```
```{margin}
Observa que en este ejemplo el eigenvalor $10$ está repetido dos veces (multiplicidad algebraica es igual a $2$) y sus eigenvectores asociados son linealmente dependientes (multiplicidad geométrica es igual a $1$).
```
```python
print('eigenvalores:')
print(evalue)
print('eigenvectores:')
print(evector)
```
eigenvalores:
[10. 10. 11.]
eigenvectores:
[[ 1. -1. 0.998]
[ 0. 0. 0.056]
[ 0. 0. 0.006]]
### Ejemplo
Un eigenvalor puede estar repetido y tener un sólo eigenvector linealmente independiente:
```python
A = np.array([[2, 1, 0],
[0, 2, 1],
[0, 0, 2]])
```
```python
evalue, evector = np.linalg.eig(A)
```
```{margin}
Observa que en este ejemplo el eigenvalor $2$ está repetido tres veces (multiplicidad algebraica es igual a $3$) y sus eigenvectores asociados son linealmente dependientes (multiplicidad geométrica es igual a $1$).
```
```python
print('eigenvalores:')
print(evalue)
print('eigenvectores:')
print(evector)
```
eigenvalores:
[2. 2. 2.]
eigenvectores:
[[ 1. -1. 1.]
[ 0. 0. -0.]
[ 0. 0. 0.]]
```{admonition} Definición
Si $(\lambda, v)$ es una pareja de eigenvalor-eigenvector de $A$ tales que $Av = \lambda v$ entonces $v$ se le nombra eigenvector derecho. Si $(\lambda, v)$ es una pareja de eigenvalor-eigenvector de $A^T$ tales que $A^Tv = \lambda v$ (que es equivalente a $v^TA=\lambda v^T$) entonces $v$ se le nombra eigenvector izquierdo.
```
```{admonition} Observaciones
:class: tip
* En todos los ejemplos anteriores se calcularon eigenvectores derechos.
* Los eigenvectores izquierdos y derechos para una matriz simétrica son iguales.
```
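Un pequeño bosquejo numérico de la segunda observación con una matriz simétrica (la matriz `S` es sólo un ejemplo):

```python
S = np.array([[2.0, 1.0],
              [1.0, 3.0]])
# para S simétrica los eigenvectores de S (derechos) y de S^T (izquierdos) coinciden
print(np.linalg.eig(S)[1])
print(np.linalg.eig(S.T)[1])
```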
(DIAGONALIZABLE)=
## $A$ diagonalizable
```{admonition} Definición
Si $A$ tiene $n$ eigenvectores linealmente independientes entonces $A$ se nombra diagonalizable o *non defective*. En este caso si $x_1, x_2, \dots, x_n$ son eigenvectores de $A$ con $Ax_i = \lambda_i x_i$ para $i=1,\dots,n$ entonces la igualdad anterior se escribe en ecuación matricial como:
$$AX = X \Lambda$$
o bien:
$$A = X \Lambda X^{-1}$$
donde: $X$ tiene por columnas los eigenvectores de $A$ y $\Lambda$ tiene en su diagonal los eigenvalores de $A$.
A la descomposición anterior $A = X \Lambda X^{-1}$ para $A$ diagonalizable o *non defective* se le nombra ***eigen decomposition***.
```
```{admonition} Observación
:class: tip
* Si $A = X \Lambda X^{-1}$ entonces $X^{-1}A = \Lambda X^{-1}$ y los renglones de $X^{-1}$ (o equivalentemente las columnas de $X^{-T}$) son eigenvectores izquierdos.
* Si $A = X \Lambda X^{-1}$ y $b = Ax = (X \Lambda X^{-1}) x$ entonces:
$$\tilde{b} = X^{-1}b = X^{-1} (Ax) = X^{-1} (X \Lambda X^{-1}) x = \Lambda X^{-1}x = \Lambda \tilde{x}.$$
Lo anterior indica que el producto matricial $Ax$ para $A$ diagonalizable es equivalente a multiplicar una matriz diagonal por un vector denotado como $\tilde{x}$ que contiene los coeficientes de la combinación lineal de las columnas de $X$ para el vector $x$. El resultado de tal multiplicación es un vector denotado como $\tilde{b}$ que también contiene los coeficientes de la combinación lineal de las columnas de $X$ para el vector $b$. En resumen, si $A$ es diagonalizable o *non defective* la multiplicación $Ax$ es equivalente a la multiplicación por una matriz diagonal $\Lambda \tilde{x}$ (salvo un cambio de bases, ver [Change of basis](https://en.wikipedia.org/wiki/Change_of_basis)).
* Si una matriz $A$ tiene eigenvalores distintos entonces es diagonalizable y, más en general: si para cada eigenvalor de $A$ la multiplicidad geométrica es igual a la multiplicidad algebraica, entonces $A$ es diagonalizable.
```
### Ejemplo
La matriz:
$$A = \left[
\begin{array}{ccc}
1 & -4 & -4\\
8 & -11 & -8\\
-8 & 8 & 5
\end{array}
\right]
$$
es diagonalizable.
```python
A = np.array([[1, -4, -4],
[8, -11, -8],
[-8, 8, 5.0]])
```
```python
print(A)
```
[[ 1. -4. -4.]
[ 8. -11. -8.]
[ -8. 8. 5.]]
```python
evalue, evector = np.linalg.eig(A)
```
```python
print('eigenvalores:')
print(evalue)
```
eigenvalores:
[ 1. -3. -3.]
```{margin}
Se verifica que los eigenvectores de este ejemplo es un conjunto linealmente independiente por lo que $A=X\Lambda X^{-1}$.
```
```python
print('eigenvectores:')
print(evector)
```
eigenvectores:
[[ 0.333 -0.474 -0.326]
[ 0.667 0.339 -0.811]
[-0.667 -0.813 0.485]]
```python
X = evector
Lambda = np.diag(evalue)
```
```{margin}
Observa que si $Z$ es desconocida y $X^T Z^T = \Lambda$ entonces $Z^T = X^{-T} \Lambda$ y por tanto $XZ =X\Lambda X^{-1}$.
```
```python
print(X@np.linalg.solve(X.T, Lambda).T)
```
[[ 1. -4. -4.]
[ 8. -11. -8.]
[ -8. 8. 5.]]
```python
print(A)
```
[[ 1. -4. -4.]
[ 8. -11. -8.]
[ -8. 8. 5.]]
$A$ es diagonalizable pues: $X^{-1} A X = \Lambda$
```{margin}
Observa que si $Z$ es desconocida y $XZ = A$ entonces $Z = X^{-1}A$ y por tanto $ZX = X^{-1} A X$.
```
```python
print(np.linalg.solve(X, A)@X)
```
[[ 1. 0. -0.]
[ 0. -3. -0.]
[ 0. 0. -3.]]
```python
print(Lambda)
```
[[ 1. 0. 0.]
[ 0. -3. 0.]
[ 0. 0. -3.]]
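Un bosquejo para verificar numéricamente la observación hecha arriba sobre el cambio de base: multiplicar por $A$ equivale a multiplicar por $\Lambda$ en las coordenadas dadas por las columnas de $X$ (se reutilizan `A`, `X` y `Lambda` de las celdas anteriores; el vector `x_prueba` es sólo un ejemplo):

```python
x_prueba = np.array([1.0, 2.0, 3.0])
b_prueba = A@x_prueba
# coeficientes de x_prueba y de b_prueba en la base dada por las columnas de X
x_tilde = np.linalg.solve(X, x_prueba)
b_tilde = np.linalg.solve(X, b_prueba)
print(b_tilde)
print(Lambda@x_tilde)
```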
```{admonition} Observation
:class: tip
Note that $X$ in the *eigen decomposition* is **not necessarily** an orthogonal matrix.
```
```{margin}
Here $X[1:3,1]$ is taken as the first column of $X$ and $X[1:3,1]^TX[1:3,1] = 1$ holds in this example, but in general this is not the case.
```
```python
X[:,0].dot(X[:,0])
```
0.9999999999999997
```{margin}
$X[1:3,1]^TX[1:3,2] \neq 0$, so the first and second columns of $X$ are not orthogonal.
```
```python
X[:,0].dot(X[:,1])
```
0.6098650780191467
**Right eigenvectors:**
```{margin}
`x_1` is the first column of $X$: $X[1:3, 1]$, and `lambda_1` the associated eigenvalue.
```
```python
x_1 = X[:,0]
lambda_1 = Lambda[0,0]
```
```python
print(A@x_1)
```
[ 0.333 0.667 -0.667]
```{margin}
$Ax_1 = \lambda_1 x_1$.
```
```python
print(lambda_1*x_1)
```
[ 0.333 0.667 -0.667]
```{margin}
`x_2` is the second column of $X$: $X[1:3, 2]$, and `lambda_2` the associated eigenvalue.
```
```python
x_2 = X[:,1]
lambda_2 = Lambda[1,1]
```
```python
print(A@x_2)
```
[ 1.422 -1.017 2.438]
```{margin}
$Ax_2 = \lambda_2 x_2$.
```
```python
print(lambda_2*x_2)
```
[ 1.422 -1.017 2.438]
**Left eigenvectors:**
```{admonition} Observation
:class: tip
For the left eigenvectors we must take the rows of $X^{-1}$ (or equivalently the columns of $X^{-T}$); however, the [inv](https://numpy.org/doc/stable/reference/generated/numpy.linalg.inv.html) method of *NumPy* is not used since it is computationally more expensive and amplifies rounding errors. Instead the [solve](https://numpy.org/doc/stable/reference/generated/numpy.linalg.solve.html) method is used to solve the system $X^{T} z = e_i$, with $e_i$ the $i$-th canonical vector, which yields the $i$-th row of $X^{-1}$.
```
```python
e1 = np.zeros((X.shape[0],1))
```
```python
e1[0] = 1
```
```python
print(e1)
```
[[1.]
[0.]
[0.]]
```{margin}
`x_inv_1` is the first row of $X^{-1}$: $X^{-1}[1, 1:3]$.
```
```python
x_inv_1 = np.linalg.solve(X.T, e1)
```
```python
print(x_inv_1)
```
[[ 3.]
[-3.]
[-3.]]
```python
print(A.T@x_inv_1)
```
[[ 3.]
[-3.]
[-3.]]
```{margin}
$A^TX^{-T}[1:3,1] = \lambda_1 X^{-T}[1:3,1]$, with `lambda_1` the eigenvalue associated with `x_inv_1`.
```
```python
print(lambda_1*x_inv_1)
```
[[ 3.]
[-3.]
[-3.]]
```python
e2 = np.zeros((X.shape[0],1))
```
```python
e2[1] = 1
```
```{margin}
`x_inv_2` is the second row of $X^{-1}$: $X^{-1}[2, 1:3]$.
```
```python
x_inv_2 = np.linalg.solve(X.T, e2)
```
```python
print(x_inv_2)
```
[[-1.318]
[ 0.337]
[-0.321]]
```python
print(A.T@x_inv_2)
```
[[ 3.953]
[-1.012]
[ 0.964]]
```{margin}
$A^TX^{-T}[1:3,2] = \lambda_2 X^{-T}[1:3,2]$, with `lambda_2` the eigenvalue associated with `x_inv_2`.
```
```python
print(lambda_2*x_inv_2)
```
[[ 3.953]
[-1.012]
[ 0.964]]
```{admonition} Exercise
:class: tip
Is the following matrix diagonalizable?
$$A = \left [
\begin{array}{ccc}
-1 & -1 & -2\\
8 & -11 & -8\\
-10 & 11 & 7
\end{array}
\right]
$$
If so, find its *eigen decomposition* and diagonalize $A$.
```
(DESCESP)=
### Result: symmetric $A$
If $A$ is symmetric then it has real eigenvalues. Moreover, $A$ has real, linearly independent eigenvectors that form an orthonormal set, and it can be written as a product of three matrices, called the **spectral decomposition or *symmetric eigen decomposition***:
$$A = Q \Lambda Q^T$$
where $Q$ is an orthogonal matrix whose columns are eigenvectors of $A$ and $\Lambda$ is a diagonal matrix with the eigenvalues of $A$.
```{admonition} Comments
* By the above, a symmetric matrix is **orthogonally diagonalizable**, see {ref}`diagonalizable A <DIAGONALIZABLE>`.
* The eigenvalues of a symmetric $A$ can be ordered:
$$\lambda_n(A) \leq \lambda_{n-1}(A) \leq \dots \leq \lambda_1(A)$$
with:
$\lambda_{max}(A) = \lambda_1(A)$, $\lambda_{min}(A) = \lambda_n(A)$.
* For $A$ symmetric it can be proven that:
$$\lambda_{max}(A) = \displaystyle \max_{x \neq 0} \frac{x^TAx}{x^Tx}$$
$$\lambda_{min}(A) = \displaystyle \min_{x \neq 0} \frac{x^TAx}{x^Tx}.$$
therefore:
$$\lambda_{min}(A) \leq \frac{x^TAx}{x^Tx} \leq \lambda_{max}(A) \quad \forall x \neq 0.$$
* $||A||_2 = \displaystyle \max\{|\lambda_1(A)|, |\lambda_n(A)|\}$.
* $||A||_F = \left( \displaystyle \sum_{i=1}^n \lambda_i ^2 \right)^{1/2}$.
* The singular values of $A$ are the set $\{|\lambda_1(A)|, \dots, |\lambda_{n-1}(A)|, |\lambda_n(A)|\}$.
```
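The relations between the Rayleigh quotient, the eigenvalues and the matrix norms listed above can be checked numerically. The following is a minimal sketch; the symmetric matrix and the random sampling of nonzero vectors are illustrative choices:
```python
import numpy as np

# symmetric matrix (illustrative choice)
A = np.array([[5.0, 4, 2],
              [4, 5, 2],
              [2, 2, 2]])
evalue = np.linalg.eigvalsh(A)          # eigenvalues in ascending order
lambda_min, lambda_max = evalue[0], evalue[-1]

rng = np.random.default_rng(0)
x = rng.standard_normal((3, 1000))      # random nonzero vectors
rayleigh = np.sum(x * (A @ x), axis=0) / np.sum(x * x, axis=0)

# every Rayleigh quotient lies between lambda_min and lambda_max
print(lambda_min <= rayleigh.min(), rayleigh.max() <= lambda_max)

# ||A||_2 = max(|lambda_1|, |lambda_n|) and ||A||_F = sqrt(sum lambda_i^2)
print(np.allclose(np.linalg.norm(A, 2), max(abs(lambda_min), abs(lambda_max))))
print(np.allclose(np.linalg.norm(A, 'fro'), np.sqrt(np.sum(evalue**2))))
```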
### Example
A symmetric matrix and its spectral decomposition:
```python
A=np.array([[5,4,2],[4,5,2],[2,2,2]])
```
```python
print(A)
```
[[5 4 2]
[4 5 2]
[2 2 2]]
```python
evalue, evector = np.linalg.eigh(A)
```
```{margin}
Since $A$ is symmetric, its eigenvalues are real and its eigenvectors form a linearly independent set. Hence $A$ has a spectral decomposition.
```
```python
print('eigenvalores:')
print(evalue)
print('eigenvectores:')
print(evector)
```
eigenvalores:
[ 1. 1. 10.]
eigenvectores:
[[ 0.67 -0.327 0.667]
[-0.732 -0.14 0.667]
[ 0.125 0.935 0.333]]
```{margin}
$A = Q \Lambda Q^T$
```
```python
print('descomposición espectral:')
Lambda = np.diag(evalue)
Q = evector
print('QLambdaQ^T:')
print(Q@Lambda@Q.T)
print('A:')
print(A)
```
descomposición espectral:
QLambdaQ^T:
[[5. 4. 2.]
[4. 5. 2.]
[2. 2. 2.]]
A:
[[5 4 2]
[4 5 2]
[2 2 2]]
$A$ is diagonalizable since: $Q^T A Q = \Lambda$
```python
print(Q.T@A@Q)
```
[[ 1. -0. -0.]
[ 0. 1. -0.]
[-0. -0. 10.]]
```python
print(Lambda)
```
[[ 1. 0. 0.]
[ 0. 1. 0.]
[ 0. 0. 10.]]
See [numpy.linalg.eigh](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.linalg.eigh.html).
## Conditioning of the eigenvalue and eigenvector computation problem
The conditioning of the problem of computing the eigenvalues and eigenvectors of a matrix is their sensitivity to perturbations in the matrix, see {ref}`Condition of a problem and stability of an algorithm <CPEA>`. Different eigenvalues or eigenvectors of a matrix are not necessarily equally sensitive to perturbations in the matrix.
```{admonition} Observation
:class: tip
The conditioning of the problem of computing eigenvalues and eigenvectors of a matrix is **not** the same as the conditioning of the problem of solving a linear system of equations, see {ref}`Condition number of a matrix <NCM>`.
```
It can be proven that the condition of a **simple** eigenvalue of a matrix $A$ is given by $\frac{1}{|y^Tx|}$, with $x$ a right eigenvector and $y$ a left eigenvector of $A$, both associated with the simple eigenvalue and normalized, that is: $x^Tx = y^Ty=1$.
```{admonition} Comments
* In the cases where $\lambda$, an eigenvalue of $A$, is simple and $A$ is diagonalizable, there exist left and right eigenvectors associated with an eigenvalue of $A$ such that $y^Tx \neq 0$. In such cases the conditioning analysis of the eigenvalue-eigenvector computation problem is easier to carry out than for non-diagonalizable matrices or eigenvalues with algebraic multiplicity greater than $1$. In particular, the eigenvalues of a symmetric matrix are very well conditioned: perturbations in $A$ only perturb the eigenvalues by a magnitude measured by the norm of the perturbations, and this does not depend on other factors, for example the condition number of $A$.
* The sensitivity of an eigenvector depends on the sensitivity of its associated eigenvalue and on the distance of that eigenvalue from the other eigenvalues.
* Eigenvalues that are "close" to each other, or those with multiplicity greater than $1$, can be ill conditioned and therefore difficult to compute accurately, especially if the matrix is defective (non-diagonalizable). The condition number can be improved by scaling the problem with a diagonal matrix similar to $A$, see {ref}`similarity <SIMILITUD>`.
```
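As a rough numerical illustration of the quantity $\frac{1}{|y^Tx|}$, the sketch below computes it for the largest eigenvalue of an arbitrary nonsymmetric matrix (the matrix is an illustrative choice). For a symmetric matrix the right and left eigenvectors coincide and this condition number equals $1$.
```python
import numpy as np

# nonsymmetric matrix with two close eigenvalues (illustrative choice)
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 4.0, 5.0],
              [0.0, 0.0, 4.5]])

evalue, X = np.linalg.eig(A)       # right eigenvectors
evalue_l, Y = np.linalg.eig(A.T)   # eigenvectors of A^T are left eigenvectors of A

# pair the right/left eigenvectors of the same (largest) eigenvalue
i = np.argmax(evalue.real)
j = np.argmin(np.abs(evalue_l - evalue[i]))
x = X[:, i] / np.linalg.norm(X[:, i])
y = Y[:, j] / np.linalg.norm(Y[:, j])

cond_eigenvalue = 1 / abs(y @ x)
print(cond_eigenvalue)             # large values indicate an ill conditioned eigenvalue
```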
(SIMILITUD)=
## Similarity
```{admonition} Definition
If there exists $X \in \mathbb{R}^{n \times n}$ such that $B = XAX^{-1}$ with $A, B \in \mathbb{R}^{n \times n}$, then $A$ and $B$ are called similar.
```
```{admonition} Observation
:class: tip
Similar matrices have the same spectrum; in fact, $Ax = \lambda x$ if and only if $By = \lambda y$ for $y=Xx$. This means that the eigenvalues of a matrix are **invariant** under changes of basis or representation in different coordinates.
```
### Example
Given the matrix
$$A=
\left [
\begin{array}{cccc}
-1 & -1 & -1 & -1\\
0 & -5 & -16 & -22\\
0 & 3 & 10 & 14\\
4 & 8 & 12 & 14
\end{array}
\right ]
$$
define matrices $B_1, B_2$ similar to $A$ from the matrices:
$$
\begin{array}{l}
X_1 =
\left [
\begin{array}{cccc}
2 & -1 & 0 & 0\\
-1 & 2 & -1 & 0\\
0 & -1 & 2 & -1\\
0 & 0 & -1 & 1
\end{array}
\right ],
X_2 = \left [
\begin{array}{cccc}
2 & -1 & 1 & 0\\
-1 & 2 & 0 & 0\\
0 & -1 & 0 & 0\\
0 & 0 & 0 & 1
\end{array}
\right ]
\end{array}
$$
and verify that the eigenvalues of $A$ are the same as those of $B_1, B_2$, that is, they have the same spectrum.
```python
A = np.array([[-1, -1 , -1, -1],
[0, -5, -16, -22],
[0, 3, 10, 14],
[4, 8, 12, 14.0]])
```
```python
X1 = np.array([[2, -1, 0, 0],
[-1, 2, -1, 0],
[0, -1, 2, -1],
[0, 0, -1, 1.0]])
```
$B_1 = X_1^{-1}AX_1$:
```{margin}
We compute $B_1$ explicitly to inspect its form, but this is not necessary.
```
```python
B1 = np.linalg.solve(X1, A)@X1
```
```python
print(B1)
```
[[ 1. 2. -0. 0.]
[ 3. 4. -0. 0.]
[ 0. 0. 5. 6.]
[ 0. 0. 7. 8.]]
```python
X2 = np.array([[2, -1, 1, 0],
[-1, 2, 0, 0],
[0, -1, 0, 0],
[0, 0, 0, 1.0]])
```
$B_2 = X_2^{-1}AX_2$:
```{margin}
We compute $B_2$ explicitly to inspect its form, but this is not necessary.
```
```python
B2 = np.linalg.solve(X2, A)@X2
```
```python
print(B2)
```
[[ 1. 2. 0. -6.]
[ 3. 4. 0. -14.]
[ -0. -0. -1. -3.]
[ 0. 0. 4. 14.]]
**$B_1$ and $B_2$ are similar to $A$ and therefore have the same eigenvalues:**
```python
evalue, evector = np.linalg.eig(A)
```
```{margin}
`evalue` are the eigenvalues of $A$.
```
```python
print(evalue)
```
[13.152 5.372 -0.152 -0.372]
```python
evalue_B1, evector_B1 = np.linalg.eig(B1)
```
```{margin}
`evalue_B1` are the eigenvalues of $B_1$; note that they are the same as those of $A$ except for the order.
```
```python
print(evalue_B1)
```
[-0.372 5.372 -0.152 13.152]
```python
evalue_B2, evector_B2 = np.linalg.eig(B2)
```
```{margin}
`evalue_B2` are the eigenvalues of $B_2$; note that they are the same as those of $A$ except for the order.
```
```python
print(evalue_B2)
```
[-0.372 5.372 13.152 -0.152]
The eigenvectors **are not the same**, but they can be obtained via matrix multiplication:
```{margin}
We pick an eigenvalue of $A$.
```
```python
print(evalue[1])
```
5.372281323269014
```{margin}
And we pick the same eigenvalue in the *array* `evalue_B1` for $B_1$, which for this example corresponds to index $1$ (the same index as in `evalue`, but it could have been a different one).
```
```python
print(evalue_B1[1])
```
5.3722813232690125
```{margin}
Its corresponding eigenvector at index $1$ of the *array* `evector_B1`.
```
```python
print(evector_B1[:,1])
```
[-0.416 -0.909 0. 0. ]
**$X^{-1}x$ is an eigenvector of $B_1$ for $x$ an eigenvector of $A$:**
```{margin}
`evector[:,1]` is the eigenvector of $A$ corresponding to the eigenvalue `evalue[1]`. In this cell the product $X_1^{-1}x$ is computed, with `evector[:,1]` playing the role of $x$.
```
```python
X1_inv_evector = np.linalg.solve(X1, evector[:,1])
```
```python
print(X1_inv_evector)
```
[ 0.249 0.543 -0. -0. ]
```python
print(B1@(X1_inv_evector))
```
[ 1.335 2.919 -0. -0. ]
```{margin}
We verify that $B_1(X_1^{-1}x) = \lambda (X_1^{-1}x)$ with $\lambda$ equal to the value `evalue_B1[1]`.
```
```python
print(evalue_B1[1]*(X1_inv_evector))
```
[ 1.335 2.919 -0. -0. ]
```{admonition} Observation
:class: tip
Note that these are the same eigenvectors up to a nonzero constant.
```
```python
print(evector_B1)
```
[[-0.825 -0.416 0. 0. ]
[ 0.566 -0.909 -0. 0. ]
[ 0. 0. -0.759 -0.593]
[ 0. 0. 0.651 -0.805]]
```{margin}
The value `1.33532534` is the first entry of `X1_inv_evector`, which is $X_1^{-1}x$ with $x$ an eigenvector of $A$. The value `2.91920903` is the second entry of `X1_inv_evector`. The remaining entries are close to zero.
```
```python
print(1.33532534e+00/evector_B1[0,1])
```
-3.2101207266138467
```python
print(2.91920903e+00/evector_B1[1,1])
```
-3.2101207350977647
The constant is approximately $-3.21$:
```{margin}
`evector_B1` was computed with the `eig` function, but the next cell shows that this is not necessary if an eigenvector of $A$ is available.
```
```python
print(evector_B1[:,1]*(-3.21))
```
[ 1.335 2.919 -0. -0. ]
```{margin}
Recall that `X1_inv_evector` is $X_1^{-1}x$ with $x$ an eigenvector of $A$; in this case `evector[:,1]` was used.
```
```python
print(B1@(X1_inv_evector))
```
[ 1.335 2.919 -0. -0. ]
```{margin}
We confirm that $X_1^{-1}x$ is an eigenvector of $B_1$ if $x$ is an eigenvector of $A$.
```
```python
print(evalue_B1[1]*(X1_inv_evector))
```
[ 1.335 2.919 -0. -0. ]
Since $A$ has distinct eigenvalues it is diagonalizable, that is, there exist $X_3, \Lambda$ such that $X_3^{-1} A X_3 = \Lambda$.
```python
X_3 = evector
Lambda = np.diag(evalue)
```
```python
print(A)
```
[[ -1. -1. -1. -1.]
[ 0. -5. -16. -22.]
[ 0. 3. 10. 14.]
[ 4. 8. 12. 14.]]
```python
print(np.linalg.solve(X_3, A)@X_3)
```
[[13.152 0. -0. -0. ]
[ 0. 5.372 -0. -0. ]
[ 0. -0. -0.152 -0. ]
[-0. 0. -0. -0.372]]
```python
print(Lambda)
```
[[13.152 0. 0. 0. ]
[ 0. 5.372 0. 0. ]
[ 0. 0. -0.152 0. ]
[ 0. 0. 0. -0.372]]
```{admonition} Comment
**$X_1$ block-diagonalizes $A$, $X_2$ block-triangularizes $A$ and $X_3$ diagonalizes $A$.** The three matrices represent the same linear operator (a linear transformation of the vector space onto itself) but in **different coordinates**. A very **important** aspect of linear algebra is representing such a linear operator in coordinates that are as simple as possible. In this example the matrix $X_3$, whose columns are eigenvectors of $A$, helps to represent it in a very simple form.
```
```{admonition} Observation
:class: tip
$X_3$ is a matrix that diagonalizes $A$ and has eigenvectors of $A$ as columns. If the goal is only to diagonalize a matrix, it is **not necessary** to solve an eigenvalue-eigenvector problem, since any nonsingular matrix $X$ can do the job. One option for a symmetric $A$ is a factorization of the form $LDL^T$ (which has a low computational cost), where the matrix $L$ is not orthogonal and the matrix $D$ contains the pivots computed during Gaussian elimination, see {ref}`Basic operations and transformations of Numerical Linear Algebra <OTBALN>`.
```
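As a rough sketch of this alternative, the snippet below diagonalizes a symmetric matrix with the $LDL^T$ factorization from SciPy. The matrix is an illustrative choice; note that `scipy.linalg.ldl` may return a block-diagonal $D$ with $2\times 2$ blocks for indefinite matrices.
```python
import numpy as np
from scipy.linalg import ldl

# symmetric (positive definite) matrix, illustrative choice
A = np.array([[5.0, 4, 2],
              [4, 5, 2],
              [2, 2, 2]])

L, D, perm = ldl(A)                        # A = L D L^T
print(np.allclose(L @ D @ L.T, A))

# D is (block) diagonal, so L^{-1} A L^{-T} = D diagonalizes A without eigenvalues
M = np.linalg.solve(L, A)                  # L^{-1} A
print(np.round(np.linalg.solve(L, M.T).T, 3))   # L^{-1} A L^{-T}
```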
```{admonition} Exercise
:class: tip
Consider
$$A=
\left [
\begin{array}{cccc}
-2 & -1 & -5 & 2\\
-9 & 0 & -8 & -2\\
2 & 3 & 11 & 5\\
3 & -5 & 13 & -7
\end{array}
\right ]
$$
Define $X_1$ such that $X_1^{-1}AX_1$ is diagonal.
```
### Example
```python
import sympy
import matplotlib.pyplot as plt
```
```{margin}
Equivalently, the equation $1 = \frac{19}{192}x^2 - \frac{7 \sqrt{3}}{288}xy + \frac{43}{576}y^2$ represents the same tilted ellipse.
```
Consider the following quadratic equation:
$$57x^2 - 14 \sqrt{3} xy + 43 y^2=576$$
From analytic geometry we know that this equation represents a tilted ellipse. The development that follows will show that this equation is equivalent to:
$$\frac{\tilde{x}^2}{16} + \frac{\tilde{y}^2}{9} = 1.$$
which represents the same ellipse but in the coordinate axes $\tilde{x}\tilde{y}$ rotated by an angle $\theta$.
If:
```python
D = sympy.Matrix([[sympy.Rational(1,16), 0],
[0, sympy.Rational(1,9)]])
```
```python
sympy.pprint(D)
```
⎡1/16 0 ⎤
⎢ ⎥
⎣ 0 1/9⎦
then the product
$$\left [ \begin{array}{c}
\tilde{x}\\
\tilde{y}
\end{array}
\right ] ^TD
\left [
\begin{array}{c}
\tilde{x}\\
\tilde{y}
\end{array}
\right ]
$$
is:
```python
x_tilde, y_tilde = sympy.symbols("x_tilde, y_tilde")
x_y_tilde = sympy.Matrix([x_tilde, y_tilde])
```
```python
sympy.pprint((x_y_tilde.T*D*x_y_tilde)[0])
```
2 2
x_tilde y_tilde
──────── + ────────
16 9
```{admonition} Definition
The product $x^TAx$ with $A$ symmetric is called a quadratic form and it is a number in $\mathbb{R}$.
```
Starting from the equation:
$$\frac{\tilde{x}^2}{16} + \frac{\tilde{y}^2}{9} = 1$$
let us rotate the [major axis of the ellipse](https://en.wikipedia.org/wiki/Semi-major_and_semi-minor_axes) by an angle of $\theta = \frac{\pi}{3}$ **counterclockwise** with a {ref}`rotation transformation <TROT>`, which generates the matrix equation:
$$\begin{array}{l}
\left[
\begin{array}{c}
x\\
y
\end{array}
\right ]
=
\left [
\begin{array}{cc}
\cos(\theta) & -\sin(\theta)\\
\sin(\theta) & \cos(\theta)
\end{array}
\right ]
\left[
\begin{array}{c}
\tilde{x}\\
\tilde{y}
\end{array}
\right ]
=
\left [
\begin{array}{cc}
\frac{1}{2} & -\frac{\sqrt{3}}{2}\\
\frac{\sqrt{3}}{2} & \frac{1}{2}
\end{array}
\right ]
\left[
\begin{array}{c}
\tilde{x}\\
\tilde{y}
\end{array}
\right ]
=
Q\left[
\begin{array}{c}
\tilde{x}\\
\tilde{y}
\end{array}
\right ]
\end{array}
$$
where $Q$ is the counterclockwise rotation matrix for the angle $\theta$.
That is:
$$
\begin{eqnarray}
x =\frac{\tilde{x}}{2} - \frac{\tilde{y}\sqrt{3}}{2} \nonumber \\
y =\frac{\tilde{x}\sqrt{3}}{2} + \frac{\tilde{y}}{2} \nonumber
\end{eqnarray}
$$
Solving for $\tilde{x},\tilde{y}$:
$$\begin{array}{l}
\left[
\begin{array}{c}
\tilde{x}\\
\tilde{y}
\end{array}
\right ]
=
\left [
\begin{array}{cc}
\cos(\theta) & \sin(\theta)\\
-\sin(\theta) & \cos(\theta)
\end{array}
\right ]
\left[
\begin{array}{c}
x\\
y
\end{array}
\right ]
=
Q^T\left[
\begin{array}{c}
x\\
y
\end{array}
\right ]
\end{array}
$$
and substituting into $\frac{\tilde{x}^2}{16} + \frac{\tilde{y}^2}{9} = 1$ results in the equation:
```python
theta = sympy.pi/3
Q = sympy.Matrix([[sympy.cos(theta), -sympy.sin(theta)],
[sympy.sin(theta), sympy.cos(theta)]])
x,y = sympy.symbols("x, y")
x_tilde = (Q.T*sympy.Matrix([x,y]))[0]
y_tilde = (Q.T*sympy.Matrix([x,y]))[1]
sympy.pprint((x_tilde**2/16 + y_tilde**2/9).expand()*576) #576 is the least common denominator
```
2 2
57⋅x - 14⋅√3⋅x⋅y + 43⋅y
```{margin}
Equation of a tilted ellipse.
```
$$57x^2 - 14 \sqrt{3} xy + 43 y^2=576$$
This is equivalent to the quadratic form
$$\left [ \begin{array}{c}
x\\
y
\end{array}
\right ]^T A
\left [
\begin{array}{c}
x\\
y
\end{array}
\right ]
$$
```python
x_y = sympy.Matrix([x,y])
A = Q*D*Q.T
sympy.pprint(((x_y.T*A*x_y)[0]).expand()*576)
```
2 2
57⋅x - 14⋅√3⋅x⋅y + 43⋅y
with $A$ the matrix given by $A=QDQ^T$:
```{margin}
Note that $A$ is **symmetric**.
```
```python
sympy.pprint(A)
```
⎡ 19 -7⋅√3 ⎤
⎢ ─── ──────⎥
⎢ 192 576 ⎥
⎢ ⎥
⎢-7⋅√3 43 ⎥
⎢────── ─── ⎥
⎣ 576 576 ⎦
In this example the rotation matrix $Q$ is the matrix that orthogonally diagonalizes $A$ since: $Q^TAQ = D.$
To **plot** the ellipse with *NumPy*, note that:
```{margin}
These equations show that the same ellipse can be represented in different coordinates. The change of coordinates from the vector $(x,y)^T$ (in coordinates of the canonical basis) to the vector $(\tilde{x}, \tilde{y})$ (in coordinates of the eigenvectors of $A$) is carried out with the matrix $Q^T$.
```
```python
sympy.pprint(((x_y.T*A*x_y)[0]).expand())
```
2 2
19⋅x 7⋅√3⋅x⋅y 43⋅y
───── - ──────── + ─────
192 288 576
$$
\begin{eqnarray}
1&=&\frac{19}{192}x^2 - \frac{7 \sqrt{3}}{288}xy + \frac{43}{576}y^2 \nonumber \\
&=& \left [ \begin{array}{c}
x\\
y
\end{array}
\right ]^T A
\left [
\begin{array}{c}
x\\
y
\end{array}
\right ] \nonumber \\
&=& \left [ \begin{array}{c}
x\\
y
\end{array}
\right ]^T QDQ^T \left [
\begin{array}{c}
x\\
y
\end{array}
\right ] \nonumber \\
&=& \left(Q^T \left [ \begin{array}{c}
x\\
y
\end{array}
\right ]\right)^TD\left(Q^T \left [ \begin{array}{c}
x\\
y
\end{array}
\right ]\right) \nonumber \\
&=& \left [ \begin{array}{c}
\tilde{x}\\
\tilde{y}
\end{array}
\right ] ^TD
\left [
\begin{array}{c}
\tilde{x}\\
\tilde{y}
\end{array}
\right ] \nonumber \\
&=& \frac{\tilde{x}^2}{16} + \frac{\tilde{y}^2}{9} \nonumber
\end{eqnarray}
$$
```python
sympy.pprint(Q)
```
⎡ -√3 ⎤
⎢1/2 ────⎥
⎢ 2 ⎥
⎢ ⎥
⎢√3 ⎥
⎢── 1/2 ⎥
⎣2 ⎦
```python
Q_np = np.array(Q.evalf(), dtype=float)
```
```python
print(Q_np)
```
[[ 0.5 -0.866]
[ 0.866 0.5 ]]
```python
A_np = np.array(A.evalf(),dtype = float)
```
```{margin}
We use [eig](https://numpy.org/doc/stable/reference/generated/numpy.linalg.eig.html) for the numerical computation of the eigenvalues and eigenvectors of $A$.
```
```python
evalue_np, evector_np = np.linalg.eig(A_np)
```
```python
print(evector_np)
```
[[ 0.866 0.5 ]
[-0.5 0.866]]
**Note that here the `eig` function returns the eigenvalues sorted in decreasing order.**
```python
print(evalue_np)
```
[0.111 0.062]
**To make the order match the matrix `Q_np`, we permute the columns of `evector_np`:**
```python
P1 = np.array([[0, 1],
[1, 0.0]])
```
```python
evector_np_permuted = evector_np@P1
```
```python
print(Q_np)
```
[[ 0.5 -0.866]
[ 0.866 0.5 ]]
```{margin}
The sign of the second column is flipped, but this is not a problem for eigenvectors since they are invariant under multiplication by nonzero scalars.
```
```python
print(evector_np_permuted)
```
[[ 0.5 0.866]
[ 0.866 -0.5 ]]
```python
d1_inv=float(sympy.sqrt(D[0,0]))
d2_inv=float(sympy.sqrt(D[1,1]))
```
```python
evector_1_rescaled = 1/d1_inv*evector_np_permuted[:,0]
evector_2_rescaled = 1/d2_inv*evector_np_permuted[:,1]
```
```python
small_value = 1e-4
density=1e-2 + small_value
x=np.arange(-1/d1_inv,1/d1_inv,density)
y1=1/d2_inv*np.sqrt(1-(d1_inv*x)**2)
y2=-1/d2_inv*np.sqrt(1-(d1_inv*x)**2)
#transform
x_y1_hat = np.column_stack((x,y1))
x_y2_hat = np.column_stack((x,y2))
apply_evector_np_permuted = lambda vec : np.transpose(evector_np_permuted@np.transpose(vec))
evector_np_permuted_to_vector_1 = apply_evector_np_permuted(x_y1_hat)
evector_np_permuted_to_vector_2 = apply_evector_np_permuted(x_y2_hat)
fig = plt.figure(figsize=(12, 7))
ax1 = fig.add_subplot(1,2,1)
ax2 = fig.add_subplot(1,2,2)
#first plot
ax1.plot(evector_np_permuted_to_vector_1[:,0],evector_np_permuted_to_vector_1[:,1],'g',
evector_np_permuted_to_vector_2[:,0],evector_np_permuted_to_vector_2[:,1],'g')
ax1.set_title("$\\frac{19x^2}{192}-\\frac{7\\sqrt{3}xy}{288}+\\frac{43y^2}{576}=1$", fontsize=18)
ax1.set_xlabel("Ejes coordenados típicos")
ax1.axhline(color='r')
ax1.axvline(color='r')
ax1.grid()
#second plot
Evector_1 = np.row_stack((np.zeros(2), evector_1_rescaled))
Evector_2 = np.row_stack((np.zeros(2), evector_2_rescaled))
ax2.plot(evector_np_permuted_to_vector_1[:,0],evector_np_permuted_to_vector_1[:,1],
color='g', label = "Elipse")
ax2.plot(evector_np_permuted_to_vector_2[:,0],evector_np_permuted_to_vector_2[:,1],
color='g', label = "_nolegend_")
ax2.plot(Evector_1[:,0], Evector_1[:,1],
color='b', label = "Eigenvector Q[:,0], define al semieje mayor principal de la elipse")
ax2.plot(-Evector_1[:,0], -Evector_1[:,1],
color='b', label = "_nolegend_")
ax2.plot(Evector_2[:,0], Evector_2[:,1],
color='m', label = "Eigenvector Q[:,1], define al semieje menor principal de la elipse")
ax2.plot(-Evector_2[:,0], -Evector_2[:,1],
color='m', label = "_nolegend_")
ax2.scatter(evector_np_permuted[0,0],
evector_np_permuted[1,0], marker = '*', color='b', s=150)
ax2.scatter(Q_np[0,0], Q_np[1,0],
marker='o', facecolors='none', edgecolors='b',
s=150)
ax2.scatter(evector_1_rescaled[0], evector_1_rescaled[1],
marker='o', facecolors='none', edgecolors='b',
s=150)
ax2.scatter(evector_2_rescaled[0], evector_2_rescaled[1],
marker='o', facecolors='none', edgecolors='m',
s=150)
ax2.set_title("$\\frac{\\tilde{x}^2}{16} + \\frac{\\tilde{y}^2}{9}=1$", fontsize=18)
ax2.set_xlabel("Ejes coordenados rotados")
ax2.legend(bbox_to_anchor=(1, 1))
fig.suptitle("Puntos en el plano que cumplen $z^TAz=1$ y $\\tilde{z}^TD\\tilde{z}=1$")
ax2.grid()
plt.show()
```
```{margin}
Recall that $A = Q D Q^T$, so $A$ is similar to the diagonal matrix $D$ and $Q$ is orthogonal.
```
The previous plot shows the coordinate axes defined by the canonical vectors $e_1, e_2$ and the rotated axes defined by the eigenvectors of $A$. The eigenvectors of $A$ are the columns of $Q$. The first column of $Q$ defines the principal major axis of the ellipse and the second column the principal minor axis. The lengths of the semi-axes are given by the square roots of the reciprocals of the eigenvalues of $A$, which in this case are $\frac{1}{16}, \frac{1}{9}$, that is, $4$ and $3$. See for example: [Principal_axis_theorem](https://en.wikipedia.org/wiki/Principal_axis_theorem), [Diagonalizable_matrix](https://en.wikipedia.org/wiki/Diagonalizable_matrix).
```python
print(evector_1_rescaled)
```
[2. 3.464]
```{margin}
Length of the rescaled eigenvector associated with the smallest eigenvalue; it represents the length of the semi-major axis of the ellipse.
```
```python
print(np.linalg.norm(evector_1_rescaled))
```
4.0
```python
print(evector_2_rescaled)
```
[ 2.598 -1.5 ]
```{margin}
Length of the rescaled eigenvector associated with the largest eigenvalue; it represents the length of the semi-minor axis of the ellipse.
```
```python
print(np.linalg.norm(evector_2_rescaled))
```
3.0000000000000004
```{admonition} Exercise
:class: tip
Rotate the coordinate axes of the ellipse
$$13x^2+10xy+13y^2=72$$
by $45^o$ to represent this equation with the major and minor axes of the ellipse aligned with its eigenvectors. Find the matrices $Q, D$ such that $A=QDQ^T$ with $Q$ orthogonal and $D$ diagonal.
```
## Some algorithms to compute eigenvalues and eigenvectors
The type of algorithm to use depends on the following questions:
* Is the computation of all the eigenvalues required, or only some of them?
* Is the computation of only the eigenvalues required, or also of the eigenvectors?
* Does $A$ have real or complex entries?
* Is $A$ of small dimension and dense, or large and sparse?
* Does $A$ have a special structure or is it a general matrix?
For the last question, the following table summarizes matrix structures that are relevant for eigenvalue-eigenvector problems:
|Structure|Definition|
|:---:|:---:|
|Symmetric|$A=A^T$|
|Orthogonal|$A^TA=AA^T=I_n$|
|Normal|$A^TA = AA^T$|
See {ref}`Examples of normal matrices <EJMN>`.
### One option (numerically unstable with respect to rounding): find the roots of the characteristic polynomial...
```{margin}
As an example that the roots or zeros cannot be expressed by a closed formula involving the coefficients, arithmetic operations and radicals $\sqrt[n]{\cdot}$ for polynomials of degree greater than $4$, consider the roots of $x^5 - x^2 + 1 = 0$.
```
By definition, the eigenvalues of $A \in \mathbb{R}^{n \times n}$ are the roots or zeros of the characteristic polynomial $p(z)$, so one method is to compute them via this polynomial. Computing the eigenvalues of matrices of dimension $n > 4$ by this method necessarily requires an **iterative method**, since [Abel](https://en.wikipedia.org/wiki/Abel%E2%80%93Ruffini_theorem) proved theoretically that in general the roots cannot be expressed by a closed formula involving the coefficients, arithmetic operations and radicals $\sqrt[n]{\cdot}$.
```{margin}
As an example of this statement consider:
$$A=\left[
\begin{array}{cc}
1 & \epsilon\\
\epsilon & 1\\
\end{array}
\right]
$$
whose eigenvalues are $1 + \epsilon$, $1 - \epsilon$ with $\epsilon$ smaller than $\epsilon_{maq}$ (machine epsilon). Using floating point arithmetic it can be shown that the root of the characteristic polynomial is $1$ with multiplicity $2$.
```
Besides the above, in certain polynomial bases, for example $\{1, x, x^2, \dots, x^n\}$, the coefficients of the polynomials are numerically not well determined because of rounding errors, and the roots of polynomials are very sensitive to perturbations in the coefficients; that is, it is an **ill conditioned problem**, see {ref}`condition of a problem and stability of an algorithm <CPEA>` and [Wilkinson's polynomial](https://en.wikipedia.org/wiki/Wilkinson%27s_polynomial) for an example.
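A minimal sketch of this failure mode follows; the value of $\epsilon$ is an illustrative choice, picked so that $\epsilon^2$ drops below machine precision in the constant coefficient of the characteristic polynomial:
```python
import numpy as np

eps = 1e-9   # illustrative choice: eps**2 is lost to rounding in 1 - eps**2
A = np.array([[1.0, eps],
              [eps, 1.0]])

# roots of the characteristic polynomial lose the separation 1 +- eps
coeffs = np.poly(A)            # [1, -2, 1 - eps**2], last coefficient rounds to 1.0
print(np.roots(coeffs))        # approximately [1., 1.]

# the direct symmetric eigenvalue solver recovers both eigenvalues
print(np.linalg.eigvalsh(A))   # approximately [1 - 1e-09, 1 + 1e-09]
```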
### Alternatives
In the note {ref}`Algorithms and applications of eigenvalues and eigenvectors of a matrix <AAEVALEVEC>` we will review some algorithms such as:
* Power method and inverse power method (inverse iteration).
* Rayleigh quotient iteration.
* QR algorithm.
* Jacobi rotation method.
---
(EJMN)=
## Examples of normal matrices
```{sidebar} Spectral decomposition for normal matrices
Normal matrices generalize orthogonal diagonalization to the case of entries in $\mathbb{C}$ by being **unitarily diagonalizable**. $A \in \mathbb{C}^{n \times n}$ is normal if and only if $A = U \Lambda U^H$ with $U$ a unitary matrix (the generalization of an orthogonal matrix to entries in $\mathbb{C}$), $U^H$ the conjugate transpose of $U$ and $\Lambda$ a diagonal matrix. For $A \in \mathbb{R}^{n \times n}$ this reads: $A$ is symmetric if and only if it is orthogonally diagonalizable: $A = Q \Lambda Q^T$ (see {ref}`spectral decomposition <DESCESP>`).
```
$$\begin{array}{l}
\left[
\begin{array}{cc}
1 &-2 \\
2 &1
\end{array}
\right],
\left[
\begin{array}{ccc}
1 &2 & 0\\
0 & 1 & 2\\
2 & 0 & 1
\end{array}
\right]
\end{array}
$$
Another example:
$$A =
\left[
\begin{array}{ccc}
1 &1 & 0\\
0 & 1 & 1\\
1 & 0 & 1
\end{array}
\right]
$$
```python
A = np.array([[1, 1, 0],
[0, 1, 1],
[1, 0, 1.0]])
```
```python
print(A.T@A)
```
[[2. 1. 1.]
[1. 2. 1.]
[1. 1. 2.]]
```{margin}
Since $A$ is normal, $AA^T=A^TA$ holds.
```
```python
print(A@A.T)
```
[[2. 1. 1.]
[1. 2. 1.]
[1. 1. 2.]]
```python
evalue, evector = np.linalg.eig(A)
```
```python
print('eigenvalores:')
print(evalue)
```
eigenvalores:
[0.5+0.866j 0.5-0.866j 2. +0.j ]
```{margin}
The eigenvectors of this example form a linearly independent set since $A$ is normal.
```
```python
print('eigenvectores:')
print(evector)
```
eigenvectores:
[[-0.289+0.5j -0.289-0.5j -0.577+0.j ]
[-0.289-0.5j -0.289+0.5j -0.577+0.j ]
[ 0.577+0.j 0.577-0.j -0.577+0.j ]]
```{margin}
A normal matrix $A$ is unitarily diagonalizable and $A = Q \Lambda Q^H$, where $Q^H$ is the conjugate transpose of $Q$.
```
```python
print('descomposición espectral:')
Lambda = np.diag(evalue)
Q = evector
```
descomposición espectral:
```python
print('QLambdaQ^H:')
print(Q@Lambda@Q.conjugate().T)
```
QLambdaQ^H:
[[ 1.+0.j 1.+0.j -0.+0.j]
[ 0.+0.j 1.+0.j 1.+0.j]
[ 1.+0.j -0.+0.j 1.+0.j]]
```python
print(A)
```
[[1. 1. 0.]
[0. 1. 1.]
[1. 0. 1.]]
```{margin}
Note that $Q^HQ=QQ^H = I_3$, where $Q^H$ is the conjugate transpose of $Q$.
```
```python
print(Q.conjugate().T@Q)
```
[[1.+0.j 0.-0.j 0.+0.j]
[0.+0.j 1.+0.j 0.-0.j]
[0.-0.j 0.+0.j 1.+0.j]]
```{admonition} Observation
:class: tip
The eigenvalue computation problem for normal matrices is well conditioned.
```
**Comprehension questions:**
1) What are the eigenvalues of a matrix, and what name does the set of eigenvalues of a matrix receive?
2) How many eigenvalues can a matrix have at most?
3) What geometric characteristic does multiplying a matrix by one of its eigenvectors have?
4) What is a diagonalizable or *non defective* matrix?
5) What is the condition number of the problem of computing an eigenvalue with simple multiplicity for a symmetric matrix?
6) True or False?
a. If a matrix is diagonalizable then it has distinct eigenvalues.
b. A matrix with distinct eigenvalues is diagonalizable.
c. If $A=XDX^{-1}$ with $X$ an invertible matrix, then on the diagonal of $D$ and in the columns of $X$ we find eigenvalues and right eigenvectors of $A$, respectively.
7) Describe the spectral decomposition of a symmetric matrix.
8) What characteristic do similar matrices have?
**References:**
1. M. T. Heath, Scientific Computing. An Introductory Survey, McGraw-Hill, 2002.
2. G. H. Golub, C. F. Van Loan, Matrix Computations, John Hopkins University Press, 2013.
3. L. Trefethen, D. Bau, Numerical linear algebra, SIAM, 1997.
4. C. Meyer, Matrix Analysis and Applied Linear Algebra, SIAM, 2000.
# Support Vector Machines
## Motivating Support Vector Machines
### Developing the Intuition
Support vector machines (SVM) are a powerful and flexible class of supervised algorithms. Developed in the 1990s, SVMs have been shown to perform well in a variety of settings, which explains their popularity. Though the underlying mathematics can become somewhat complicated, the basic concept of an SVM is easily understood. Therefore, in what follows we develop an intuition, introduce the mathematical basics of SVM and ultimately look into how we can apply SVM with Python.
As an introductory example, borrowed from VanderPlas (2016), consider the following simplified two-dimensional classification task, where the two classes (indicated by the colors) are well separated.
A linear discriminant classifier as discussed in chapter 8 would attempt to draw a separating hyperplane (which in two dimensions is nothing but a line) in order to distinguish the two classes. For two-dimensional data, we could even do this by hand. However, one problem arises: there is more than one separating hyperplane between the two classes.
There exist infinitely many possible hyperplanes that perfectly discriminate between the two classes in the training data. In the figure above we visualize but three of them. Depending on what hyperplane we choose, a new data point (e.g. the one marked by the red "X") will be assigned a different label. Yet, so far we have not established a decision criterion for which one of the three hyperplanes we should choose.
How do we decide which line best separates the two classes? The idea of SVM is to add a margin of some width to both sides of each hyperplane - up to the nearest point. This might look something like this:
In SVM, the hyperplane that maximizes the margin to the nearest points is the one that is chosen as decision boundary. In other words, the maximum margin estimator is what we are looking for. Below figure shows the optimal solution for a (linear) SVM. Of all possible hyperplanes, the solid line has the largest margins (dashed lines) - measured from the decision boundary (solid line) to the nearest points (circled points).
### Support Vector
The three circled sample points in above figure represent the nearest points. All three lie along the (dashed) margin line and in terms of perpendicular distance are equidistant from the decision boundary (solid line). Together they form the so called **support vector**. The support vector "supports" the maximal margin hyperplane in the sense that if one of the observations were moved slightly, the maximal margin hyperplane would move as well. In other words, they dictate slope and intercept of the hyperplane. Interestingly, any points further from the margin that are on the correct side do not modify the decision boundary. For example points at $(x_1, x_2) = (2.5, 1)$ or $(1, 4.2)$ have no effect on the decision boundary. Technically, this is because these points do not contribute to the loss function used to fit the model, so their position and number do not matter so long as they do not cross the margin (VanderPlas (2016)) . This is an important and helpful property as it simplifies calculations significantly. It is not surprising that computations are a lot faster if a model has only a few data points (in the support vector) to consider (James et al. (2013)).
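As a rough illustration of this property, the sketch below fits a linear SVM with scikit-learn on synthetic, well separated blobs (the data-generation parameters are illustrative choices) and shows that only a handful of points end up as support vectors:
```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# two well separated clusters (illustrative parameters)
X, y = make_blobs(n_samples=100, centers=2, cluster_std=0.6, random_state=0)

svm = SVC(kernel='linear', C=1e6)   # very large C approximates a hard margin
svm.fit(X, y)

# only the points on the margin define the decision boundary
print(svm.support_vectors_.shape)   # e.g. (3, 2): three support vectors
print(svm.coef_, svm.intercept_)    # slope and intercept of the separating hyperplane
```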
## Developing the Mathematical Intuition
### Hyperplanes
To start, let us do a brief (and superficial) refresher on hyperplanes. In a $p$-dimensional space, a hyperplane is a flat (affine) subspace of dimension $p - 1$. Affine simply indicates that the subspace need not pass through the origin. As we have seen above, in two dimensions a hyperplane is just a line. In three dimensions it is a plane. For $p > 3$ visualization is hardly possible but the notion applies in similar fashion. Mathematically a $p$-dimensional hyperplane is defined by the expression
\begin{equation}
\beta_0 + \beta_1 x_1 + \beta_2 x_2 + \ldots + \beta_p x_p = 0
\end{equation}
If a point $\mathbf{x}^* = (x^*_1, x^*_2, \ldots, x^*_p)^T$ (i.e. a vector of length $p$) satisfies the above equation, then $\mathbf{x}^*$ lies on the hyperplane. If $\mathbf{x}^{*}$ does not satisfy above equation but yields a value $>0$, that is
\begin{equation}
\beta_0 + \beta_1 x^*_1 + \beta_2 x^*_2 + \ldots + \beta_p x^*_p > 0
\end{equation}
then this tells us that $\mathbf{x}^*$ lies on one side of the hyperplane. Similarly,
\begin{equation}
\beta_0 + \beta_1 x^*_1 + \beta_2 x^*_2 + \ldots + \beta_p x^*_p < 0
\end{equation}
tells us that $\mathbf{x}^*$ lies on the other side of the plane.
### Separating Hyperplanes
Suppose our training sample is a $n \times p$ data matrix $\mathbf{X}$ that consists of $n$ observations in $p$-dimensional space,
\begin{equation*}
\mathbf{x}_1 =
\begin{pmatrix}
x_{11} \\
\vdots \\
x_{1p}
\end{pmatrix}, \; \ldots, \; \mathbf{x}_n =
\begin{pmatrix}
x_{n1} \\
\vdots \\
x_{np}
\end{pmatrix}
\end{equation*}
and each observation falls into one of two classes: $y_1, \ldots, y_n \in \{-1, 1\}$. Then a separating hyperplane has the helpful property that
\begin{align}
f(x) = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \ldots + \beta_p x_{ip} \quad \text{is} \quad
\begin{cases}
> 0 & \quad \text{if } y_i =1 \\
< 0 & \quad \text{if } y_i = -1
\end{cases}
\end{align}
Given such a hyperplane exists, it can be used to construct a very intuitive classifier: a test observation is assigned to a class based on the side of the hyperplane it lies. This means we simply calculate $f(x^*)$ and if the result is positive, we assign the test observation to class 1, and to class -1 otherwise.
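In code, this sign rule is a one-liner. The following is a minimal illustration with made-up hyperplane coefficients (all values are hypothetical):
```python
import numpy as np

beta0, beta = -1.0, np.array([2.0, -0.5])    # made-up hyperplane coefficients
x_new = np.array([[0.3, 1.2], [1.5, 0.2]])   # two test observations

y_pred = np.sign(beta0 + x_new @ beta)       # +1 on one side, -1 on the other
print(y_pred)                                # [-1.  1.]
```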
### Maximal Margin Classifier
If our data can be perfectly separated, then - as alluded to above - there exist an infinite number of separating hyperplanes. Therefore we seek to maximize the margin to the closest training observations (support vector). The result is what we call the *maximal margin hyperplane*.
Let us consider how such a maximal margin hyperplane is constructed. We follow Raschka (2015) in deriving the objective function as this approach is appealing to the intuition. For a mathematically more sound derivation, see e.g. Friedman et al. (2001, chapter 4.5). As before we assume to have a set of $n$ training observations $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n \in \mathbb{R}^p$ with corresponding class labels $y_1, y_2, \ldots, y_n \in \{-1, 1\}$. The hyperplane as our decision boundary we have introduced above. Here is the same in vector notation, where $\mathbf{\beta}$ and $\mathbf{x}$ are vector of dimension $[p \times 1]$:
\begin{equation}
\beta_0 + \mathbf{\beta}^T \mathbf{x}_{\text{hyper}} = 0
\end{equation}
This way of writing is much more concise and therefore we will stick to it moving forward. Let us further define the positive and negative margin hyperplanes, which lie parallel to the decision boundary:
\begin{align}
\beta_0 + \mathbf{\beta}^T \mathbf{x}_{\text{pos}} &= 1 &\text{pos. margin} \\
\beta_0 + \mathbf{\beta}^T \mathbf{x}_{\text{neg}} &= -1 &\text{neg. margin}
\end{align}
Below you find a visual representationof the above. Notice that the two margin hyperplanes are parallel and the values for $\beta_0, \mathbf{\beta}$ are identical
If we subtract the equation for the negative margin from the positive, we get:
\begin{equation}
\mathbf{\beta}^T (\mathbf{x}_{\text{pos}} - \mathbf{x}_{\text{neg}}) = 2
\end{equation}
Let us normalize both sides of the equation by the length of the vector $\mathbf{\beta}$, that is the norm, which is defined as follows:
\begin{equation}
\Vert \mathbf{\beta} \Vert := \sqrt{\sum_{i=1}^p \beta_i^2} = 1
\end{equation}
With that we arrive at the following expression:
\begin{equation}
\frac{\mathbf{\beta}^T (\mathbf{x}_{\text{pos}} - \mathbf{x}_{\text{neg}})}{\Vert \mathbf{\beta}\Vert} = \frac{2}{\Vert \mathbf{\beta} \Vert}
\end{equation}
The left side of the equation can be interpreted as the normalized distance between the positive (upper) and negative (lower) margin. This distance we aim to maximize. Since maximizing the lefthand side of above expression is similar to maximizing the right hand side, we can summarize this in the following optimization problem:
\begin{equation}
\begin{aligned}
& \underset{\beta_0, \beta_1, \ldots, \beta_p}{\text{maximize}}
& & \frac{2}{\Vert \mathbf{\beta} \Vert} \\
& \text{subject to} & & \beta_0 + \mathbf{\beta}^T \mathbf{x}_{i} \geq \;\; 1 \quad \text{if } y_i = 1 \\
&&& \beta_0 + \mathbf{\beta}^T \mathbf{x}_{i} \leq -1 \quad \text{if } y_i = -1 \\
&&& \text{for } i = 1, \ldots, N.
\end{aligned}
\end{equation}
The two constraints make sure that all positive samples ($y_i = 1$) fall on or above the positive side of the positive margin hyperplane and all negative samples ($y_i = -1$) are on or below the negative margin hyperplane. A few tweaks allow us to write the two constraints as one. We show this by transforming the second constraint, in which case $y_i = -1$:
\begin{align}
\beta_0 + \mathbf{\beta}^T \mathbf{x}_i &\leq -1 \\
\Leftrightarrow \qquad y_i (\beta_0 + \mathbf{\beta}^T \mathbf{x}_i) &\geq (-1)y_i \\
\Leftrightarrow \qquad y_i (\beta_0 + \mathbf{\beta}^T \mathbf{x}_i) &\geq 1
\end{align}
The same can be done for the first constraint - it will yield the same expression. Therefore, our maximization problem can be restated in a slightly simpler form:
\begin{equation}
\begin{aligned}
& \underset{\beta_0, \beta_1, \ldots, \beta_p}{\text{maximize}}
& & \frac{2}{\Vert \mathbf{\beta} \Vert} \\
& \text{subject to} & & y_i(\beta_0 + \mathbf{\beta}^T \mathbf{x}_{i}) \geq 1 \quad \text{for } i = 1, \ldots, N.
\end{aligned}
\end{equation}
This is a convex optimization problem (quadratic criterion with linear inequality constraints) and can be solved with Lagrange. For details refer to appendix (D1) of the script.
Note that in practice it is easier to minimize the reciprocal term of the squared norm of $\mathbf{\beta}$, $\frac{1}{2} \Vert\mathbf{\beta} \Vert^2$. Therefore the objective function is often given as
\begin{equation}
\begin{aligned}
& \underset{\beta_0, \beta}{\text{minimize}}
& & \frac{1}{2}\Vert \mathbf{\beta} \Vert^2 \\
& \text{subject to} & & y_i(\beta_0 + \mathbf{\beta}^T \mathbf{x}_{i}) \geq 1 \quad \text{for } i = 1, \ldots, N.
\end{aligned}
\end{equation}
This transformation does not change the optimization problem, yet at the same time it is computationally easier to handle with quadratic programming. A detailed discussion of quadratic programming goes beyond the scope of this course. For details, see e.g. Vapnik (2000) or [Burges (1998)](http://www.cmap.polytechnique.fr/~mallat/papiers/svmtutorial.pdf).
## Support Vector Classifier
### Non-Separable Data
Given our data is separable into two classes, the maximal margin classifier from before seems like a natural approach. However, it is easy to see that **when the data is not clearly discriminable, no separable hyperplane exists and therefore such a classifier does not exist**. In that case the above maximization problem has no solution. What makes the situation even more complicated is that the maximal margin classifier is very sensitive to changes in the support vectors. This means that this classifier might suffer from inappropriate sensitivity to individual observations and thus it has a substantial risk of overfitting the training data. That is why we might be willing to consider a classifier on a hyperplane that does not perfectly separate the two classes but allows for greater robustness to individual observations and better classification of most of the training observations. In other words it could be worthwhile to misclassify a few training observations in order to do a better job in classifying the test data (James et al. (2013)).
### Details of the Support Vector Classifier
This is where the Support Vector Classifier (SVC) comes into play. It allows a certain number of observations to be on the 'wrong' side of the hyperplane while seeking a solution where the majority of data points are still on the 'correct' side of the hyperplane. The following figure visualizes this.
The SVC still classifies a test observation based on which side of a hyperplane it lies. However, when we train the model, the margins are now somewhat softened. This means that the model allows for a limited number of training observations to be on the wrong side of the margin and hyperplane, respectively.
Let us briefly discuss in general terms how the support vector classifier reaches its optimal solution. For this we extend the optimization problem from the maximum margin classifier as follows:
\begin{equation}
\begin{aligned}
& \underset{\beta_0, \beta}{\text{minimize}}
& & \frac{1}{2}\Vert \mathbf{\beta} \Vert^2 + C \left(\sum_{i=1}^n \epsilon_i \right) \\
& \text{subject to} & & y_i(\beta_0 + \mathbf{\beta}^T \mathbf{x}_{i}) \geq (1-\epsilon_i) \quad \text{for } i = 1, \ldots, N. \\
& & & \epsilon_i \geq 0 \quad \forall i
\end{aligned}
\end{equation}
This, again, can be solved with Lagrange similar to the way it is shown for the maximum margin classifier (see appendix (D1)) and it is left to the reader as an exercise to derive the Lagrange (primal and dual) objective function. For the impatient readers will find a solution draft in Friedman et al. (2001), section 12.2.1.
Let us now focus on the added term $C \left(\sum_{i=1}^n \epsilon_i \right)$. Here, $\epsilon_1, \epsilon_2, \ldots, \epsilon_n$ are slack variables that allow the individual observations to be on the wrong side of the margin or the hyperplane. They contain information on where the $i$th observation is located, relative to the hyperplane and relative to the margin.
* If $\epsilon_i = 0$ then the $i$th observation is on the correct side of the margin,
* if $1 \geq \epsilon_i > 0$ it is on the wrong side of the margin but correct side of the hyperplane, and
* if $\epsilon_i > 1$ it is on the wrong side of the hyperplane.
The tuning parameter $C$ can be interpreted as a penalty factor for misclassification. It is defined by the user. Large values of $C$ correspond to a significant error penalty, whereas small values are used if we are less strict about misclassification errors. By controlling for $C$ we indirectly control for the margin and therefore actively tune the bias-variance trade-off. Decreasing the value of $C$ increases the bias but lowers the variance of the model.
Below figure shows how $C$ impacts the decision boundary and its corresponding margin.
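A quick way to see this trade-off in practice is to fit the same data with two very different values of $C$ and compare the number of support vectors: with a small $C$ the margin widens and more points end up inside it. A minimal sketch on synthetic data (all parameter values are illustrative choices):
```python
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# overlapping clusters (illustrative parameters)
X, y = make_blobs(n_samples=200, centers=2, cluster_std=1.5, random_state=1)

for C in [100.0, 0.01]:
    svc = SVC(kernel='linear', C=C).fit(X, y)
    print(f"C={C}: {svc.n_support_.sum()} support vectors, "
          f"train accuracy={svc.score(X, y):.3f}")
```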
### Solving Nonlinear Problems
So far we worked with data that is linearly separable. What makes SVM so powerful and popular is that it can be kernelized to solve nonlinear classification problems. We start our discussion again with illustrations to build an intuition.
Clearly the data is not linear and the resulting (linear) decision boundary is useless. How, then, do we deal with this? With mapping functions. The basic idea is to project the data via some mapping function $\phi$ onto a higher dimension such that a linear separator would be sufficient. The idea is similar to using quadratic and cubic terms of the predictor in linear regression in order to address non-linearity $(y = \beta_0 + \beta_1 x_i + \beta_2 x_i^2 + \beta_3 x_i^3 + \ldots)$ . For example, for the data in the preceding figure we could use the following mapping function $\phi: \mathbb{R}^2 \rightarrow \mathbb{R}^3$.
\begin{equation}
\phi(x_1, x_2) = (z_1, z_2, z_3) = \left(x_1, x_2, x_1^2 + x_2^2 \right)
\end{equation}
Here we enlarge our feature space from $\mathbb{R}^2 \rightarrow \mathbb{R}^3$ in oder to accommodate a non-linear boundary. The transformed data becomes trivially linearly separable. All we have to do is find a plane in $\mathbb{R}^3$. If we project this decision boundary back onto the original feature space $\mathbb{R}^2$ (with $\phi^{-1}$), we have a nonlinear decision boundary.
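A minimal sketch of this particular mapping follows; points on two concentric circles stand in for the data in the figure (an illustrative choice), and after adding the third coordinate $x_1^2 + x_2^2$ the two classes become separable by a horizontal plane in $\mathbb{R}^3$:
```python
import numpy as np

rng = np.random.default_rng(0)
angles = rng.uniform(0, 2 * np.pi, 200)
radius = np.r_[np.full(100, 1.0), np.full(100, 3.0)]      # inner / outer class
X = np.column_stack((radius * np.cos(angles), radius * np.sin(angles)))
y = np.r_[np.zeros(100), np.ones(100)]

# mapping phi: R^2 -> R^3
Z = np.column_stack((X[:, 0], X[:, 1], X[:, 0]**2 + X[:, 1]**2))

# the third coordinate alone separates the classes (threshold is an illustrative choice)
print(np.all(Z[y == 0, 2] < 4), np.all(Z[y == 1, 2] > 4))   # True True
```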
Here's an animated visualization of this concept.
```python
from IPython.display import YouTubeVideo
YouTubeVideo('3liCbRZPrZA')
```
### The Problem with Mapping Functions
One could think that this is the recipe for working with nonlinear data: transform all training data onto a higher-dimensional feature space via some mapping function $\phi$, train a linear SVM model, and use the same function $\phi$ to transform new (test) data in order to classify it.
As attractive as this idea seems, it is unfortunately infeasible because it quickly becomes computationally very expensive. Here is a hands-on example why: Consider for example a degree-2 polynomial (kernel) transformation of the form $\phi(x_1, x_2) = (x_1^2, x_2^2, \sqrt{2} x_1 x_2, \sqrt{2c} x_1, \sqrt{2c} x_2, c)$. This means that for a dataset in $\mathbb{R}^2$ the transformation adds four additional dimensions ($\mathbb{R}^2 \rightarrow \mathbb{R}^6$). If we generalize this, it means that a $d$-dimensional polynomial (Kernel) transformation maps from $\mathbb{R}^p$ to an ${p + d}\choose{d}$-dimensional space [(Balcan (2011))](http://www.cs.cmu.edu/%7Eninamf/ML11/lect1020.pdf). Thus for datasets with $p$ large, naively performing such transformations will force most computers to their knees.
### The Kernel Trick
Thankfully, not all is lost. It turns out that one does not need to explicitly work in the higher-dimensional space. One can show that when using Lagrange to solve our optimization problem, the training samples are only used to compute the pair-wise dot products $\langle x_i, x_{j}\rangle$ (where $x_i, x_{j} \in \mathbb{R}^{p}$). This is significant because there exist functions that, given two vectors $x_i$ and $x_{j}$ in $\mathbb{R}^p$, implicitly compute the dot product between the two vectors in a higher-dimension $\mathbb{R}^q$ (with $q > p$) without explicitly transforming $x_i, x_{j}$ onto a higher dimension $\mathbb{R}^q$. Such functions are called **Kernel** functions, written $K(x_i, x_{j})$ [(Kim (2013))](http://www.eric-kim.net/eric-kim-net/posts/1/kernel_trick.html#[6]).
Let us show an example of such a Kernel function (following [Hofmann (2006)](http://www.cogsys.wiai.uni-bamberg.de/teaching/ss06/hs_svm/slides/SVM_Seminarbericht_Hofmann.pdf)). For ease of reading we use $x = (x_1, x_2)$ and $z=(z_1, z_2)$ instead of $x_i$ and $x_{j}$. Consider the Kernel function $K(x, z) = (x^T z)^2$ and the mapping function $\phi(x) = (x_1^2, \sqrt{2}x_1 x_2, x_2^2)$. If we were to solve our optimization problem from above with Lagrange, the mapping function appears in the form $\phi(x)^T \phi(z)$.
\begin{align}
\phi(x)^T \phi(z) &= (x_1^2, \sqrt{2}x_1 x_2, x_2^2)^T (z_1^2, \sqrt{2}z_1 z_2, z_2^2) \\
&= x_1^2 z_1^2 + 2x_1 z_1 x_2 z_2 + x_2^2 z_2^2 \\
&= (x_1 z_1 + x_2 z_2)^2 \\
&= (x^T z)^2 \\
&= K(x, z)
\end{align}
The mapping function would have transformed the data from $\mathbb{R}^2 \rightarrow \mathbb{R}^3$ and back. The Kernel function, however, stays in $\mathbb{R}^2$. This is of course only one (toy) example and far away from a proper proof, but it provides the intuition of what can be generalized: by using a Kernel function with $K(x_i, x_j) = \phi(x_i)^T \phi(x_j)$, we implicitly transform our data to a higher dimension without having to explicitly apply a mapping function $\phi$. This so-called "Kernel Trick" allows us to efficiently learn nonlinear decision boundaries for SVM.
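This identity is easy to check numerically for random vectors; a minimal sketch using the same $K$ and $\phi$ as above:
```python
import numpy as np

rng = np.random.default_rng(42)
x, z = rng.standard_normal(2), rng.standard_normal(2)

phi = lambda v: np.array([v[0]**2, np.sqrt(2) * v[0] * v[1], v[1]**2])
K = lambda a, b: (a @ b) ** 2

# the kernel evaluated in R^2 equals the dot product of the mapped vectors in R^3
print(np.isclose(K(x, z), phi(x) @ phi(z)))   # True
```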
### Popular Kernel Functions
Not every random mapping function is also a Kernel function. For a function to be a Kernel function, it needs to have certain properties (see e.g. [Balcan (2011)](http://www.cs.cmu.edu/%7Eninamf/ML11/lect1020.pdf) or [Hofmann (2006)](http://www.cogsys.wiai.uni-bamberg.de/teaching/ss06/hs_svm/slides/SVM_Seminarbericht_Hofmann.pdf) for a discussion). In SVM literature, the following three Kernel functions have emerged as popular choices (Friedman et al. (2001)):
\begin{align}
d\text{th-Degree polynomial} \qquad K(x_i, x_j) &= (r + \gamma \langle x_i, x_j \rangle)^d \\
\text{Radial Basis (RBF)} \qquad K(x_i, x_j) &= \exp(-\gamma \Vert x_i - x_j \Vert^2) \\
\text{Sigmoid} \qquad K(x_i, x_j) &= \tanh(\gamma \langle x_i, x_j \rangle + r)
\end{align}
In general there is no "best choice". Since each Kernel has some degree of variability, one has to find the optimal solution by experimenting with different Kernels and playing with their parameters ($\gamma, r, d$).
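In scikit-learn these three kernels are available directly through the `kernel` argument of `SVC`, with `degree`, `gamma` and `coef0` playing the roles of $d$, $\gamma$ and $r$. The following is a minimal sketch comparing them on a nonlinear toy problem (all parameter values are illustrative choices):
```python
from sklearn.datasets import make_circles
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# concentric circles: a clearly nonlinear classification problem
X, y = make_circles(n_samples=300, factor=0.3, noise=0.1, random_state=0)

for kernel in ['poly', 'rbf', 'sigmoid']:
    svc = SVC(kernel=kernel, degree=3, gamma='scale', coef0=1.0)
    scores = cross_val_score(svc, X, y, cv=5)
    print(f"{kernel:>8}: mean CV accuracy = {scores.mean():.3f}")
```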
### Optimization with Lagrange
We have mentioned before that the optimization problem of the maximum margin classifier and support vector classifier can be solved with Lagrange. The details of which are beyond the scope of this notebook. However, the interested reader is encouraged to learn the details in the appendix of the script (and the recommended reference sources) as these are crucial in understanding the mathematics/core of SVM and the application of Kernel functions.
## SVM with Scikit-Learn
### Preparing the Data
Having built an intuition of how SVMs work, let us now see this algorithm applied in Python. We will again use the Scikit-learn package that has an optimized class implemented. The data we will work with is called "Polish Companies Bankruptcy Data Set" and was used in Zięba et al. (2016). The full set comprises five data files. Each file contains 64 features plus a class label. The features are ratios derived from the financial statements of the more than 10'000 manufacturing companies considered during the period of 2000 - 2013 (from EBITDA margin to equity ratio to liquidity ratios (quick ratio etc.)). The five files differ in that the first contains data with companies that defaulted/were still running **five** years down the road ('1year.csv'), the second **four** years down the road ('2year.csv') etc. Details can be found in the original publication (Zięba et al. (2016)) or in the [description provided on the UCI Machine Learning Repository site](https://archive.ics.uci.edu/ml/datasets/Polish+companies+bankruptcy+data) where the data was downloaded from. For our purposes we will use the '5year.csv' file where we should predict defaults within the next year.
```python
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
plt.rcParams['font.size'] = 14
```
```python
# Load data
df = pd.read_csv('Data/5year.csv', sep=',')
df.head()
```
<div>
<style>
.dataframe thead tr:only-child th {
text-align: right;
}
.dataframe thead th {
text-align: left;
}
.dataframe tbody tr th {
vertical-align: top;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Attr1</th>
<th>Attr2</th>
<th>Attr3</th>
<th>Attr4</th>
<th>Attr5</th>
<th>Attr6</th>
<th>Attr7</th>
<th>Attr8</th>
<th>Attr9</th>
<th>Attr10</th>
<th>...</th>
<th>Attr56</th>
<th>Attr57</th>
<th>Attr58</th>
<th>Attr59</th>
<th>Attr60</th>
<th>Attr61</th>
<th>Attr62</th>
<th>Attr63</th>
<th>Attr64</th>
<th>class</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.088238</td>
<td>0.55472</td>
<td>0.01134</td>
<td>1.0205</td>
<td>-66.5200</td>
<td>0.342040</td>
<td>0.109490</td>
<td>0.57752</td>
<td>1.0881</td>
<td>0.32036</td>
<td>...</td>
<td>0.080955</td>
<td>0.275430</td>
<td>0.91905</td>
<td>0.002024</td>
<td>7.2711</td>
<td>4.7343</td>
<td>142.760</td>
<td>2.5568</td>
<td>3.2597</td>
<td>0</td>
</tr>
<tr>
<th>1</th>
<td>-0.006202</td>
<td>0.48465</td>
<td>0.23298</td>
<td>1.5998</td>
<td>6.1825</td>
<td>0.000000</td>
<td>-0.006202</td>
<td>1.06340</td>
<td>1.2757</td>
<td>0.51535</td>
<td>...</td>
<td>-0.028591</td>
<td>-0.012035</td>
<td>1.00470</td>
<td>0.152220</td>
<td>6.0911</td>
<td>3.2749</td>
<td>111.140</td>
<td>3.2841</td>
<td>3.3700</td>
<td>0</td>
</tr>
<tr>
<th>2</th>
<td>0.130240</td>
<td>0.22142</td>
<td>0.57751</td>
<td>3.6082</td>
<td>120.0400</td>
<td>0.187640</td>
<td>0.162120</td>
<td>3.05900</td>
<td>1.1415</td>
<td>0.67731</td>
<td>...</td>
<td>0.123960</td>
<td>0.192290</td>
<td>0.87604</td>
<td>0.000000</td>
<td>8.7934</td>
<td>2.9870</td>
<td>71.531</td>
<td>5.1027</td>
<td>5.6188</td>
<td>0</td>
</tr>
<tr>
<th>3</th>
<td>-0.089951</td>
<td>0.88700</td>
<td>0.26927</td>
<td>1.5222</td>
<td>-55.9920</td>
<td>-0.073957</td>
<td>-0.089951</td>
<td>0.12740</td>
<td>1.2754</td>
<td>0.11300</td>
<td>...</td>
<td>0.418840</td>
<td>-0.796020</td>
<td>0.59074</td>
<td>2.878700</td>
<td>7.6524</td>
<td>3.3302</td>
<td>147.560</td>
<td>2.4735</td>
<td>5.9299</td>
<td>0</td>
</tr>
<tr>
<th>4</th>
<td>0.048179</td>
<td>0.55041</td>
<td>0.10765</td>
<td>1.2437</td>
<td>-22.9590</td>
<td>0.000000</td>
<td>0.059280</td>
<td>0.81682</td>
<td>1.5150</td>
<td>0.44959</td>
<td>...</td>
<td>0.240400</td>
<td>0.107160</td>
<td>0.77048</td>
<td>0.139380</td>
<td>10.1180</td>
<td>4.0950</td>
<td>106.430</td>
<td>3.4294</td>
<td>3.3622</td>
<td>0</td>
</tr>
</tbody>
</table>
<p>5 rows × 65 columns</p>
</div>
```python
# Check for NA values
df.isnull().sum()
```
Attr1 3
Attr2 3
Attr3 3
Attr4 21
Attr5 11
Attr6 3
Attr7 3
Attr8 18
Attr9 1
Attr10 3
Attr11 3
Attr12 21
Attr13 0
Attr14 3
Attr15 6
Attr16 18
Attr17 18
Attr18 3
Attr19 0
Attr20 0
Attr21 103
Attr22 3
Attr23 0
Attr24 135
Attr25 3
Attr26 18
Attr27 391
Attr28 107
Attr29 3
Attr30 0
...
Attr36 3
Attr37 2548
Attr38 3
Attr39 0
Attr40 21
Attr41 84
Attr42 0
Attr43 0
Attr44 0
Attr45 268
Attr46 21
Attr47 35
Attr48 3
Attr49 0
Attr50 18
Attr51 3
Attr52 36
Attr53 107
Attr54 107
Attr55 0
Attr56 0
Attr57 3
Attr58 0
Attr59 3
Attr60 268
Attr61 15
Attr62 0
Attr63 21
Attr64 107
class 0
Length: 65, dtype: int64
```python
# Calculate % of missing values for 'Attr37'
df['Attr37'].isnull().sum() / (len(df))
```
0.43113367174280881
Attribute 37 sticks out with 2'548 of 5'910 (43.1%) missing values. This attribute considers *"(current assets - inventories) / long-term liabilities"*. Due to the many missing values we cannot reasonably apply a fill method, so let us drop this feature column.
```python
df = df.drop('Attr37', axis=1)
df.iloc[:, 30:38].head()
```
<div>
<style>
.dataframe thead tr:only-child th {
text-align: right;
}
.dataframe thead th {
text-align: left;
}
.dataframe tbody tr th {
vertical-align: top;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Attr31</th>
<th>Attr32</th>
<th>Attr33</th>
<th>Attr34</th>
<th>Attr35</th>
<th>Attr36</th>
<th>Attr38</th>
<th>Attr39</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.077287</td>
<td>155.330</td>
<td>2.3498</td>
<td>0.24377</td>
<td>0.135230</td>
<td>1.4493</td>
<td>0.32101</td>
<td>0.095457</td>
</tr>
<tr>
<th>1</th>
<td>0.000778</td>
<td>108.050</td>
<td>3.3779</td>
<td>2.70750</td>
<td>-0.036475</td>
<td>1.2757</td>
<td>0.59380</td>
<td>-0.028591</td>
</tr>
<tr>
<th>2</th>
<td>0.143490</td>
<td>81.653</td>
<td>4.4701</td>
<td>0.65878</td>
<td>0.145860</td>
<td>1.1698</td>
<td>0.67731</td>
<td>0.129100</td>
</tr>
<tr>
<th>3</th>
<td>-0.138650</td>
<td>253.910</td>
<td>1.4375</td>
<td>0.83567</td>
<td>0.014027</td>
<td>1.2754</td>
<td>0.43830</td>
<td>0.010998</td>
</tr>
<tr>
<th>4</th>
<td>0.039129</td>
<td>140.120</td>
<td>2.6583</td>
<td>2.13360</td>
<td>0.364200</td>
<td>1.5150</td>
<td>0.51225</td>
<td>0.240400</td>
</tr>
</tbody>
</table>
</div>
As for the other missing values we are left to decide whether we want to remove the corresponding observations (rows) or apply a filling method. The problem with dropping all rows with missing values is that we might lose a lot of valuable information. Therefore, in this case we prefer to use a common imputation technique and replace `NaN` values with the feature mean. Alternatively we could use '`median`' or '`most_frequent`' as strategy. A convenient way to achieve this imputation is to use the `Imputer` class from `sklearn`.
```python
from sklearn.preprocessing import Imputer
# Impute missing values by mean (axis=0 --> along columns)
ipr = Imputer(missing_values='NaN', strategy='mean', axis=0)
ipr = ipr.fit(df.values)
imputed_data = ipr.transform(df.values)
# Assign imputed values to 'df' and check for 'NaN' values
df = pd.DataFrame(imputed_data, columns=df.columns)
df.isnull().sum().sum()
```
0
Now let us check whether we have categorical features that need to be transformed. For this we compare the number of cells in the dataframe with the number of cells holding numeric values (`np.isreal()`). If the difference is 0, we do not need to apply a One-Hot-Encoding or LabelEncoding procedure.
```python
df.shape[0] * df.shape[1] - df.applymap(np.isreal).sum().sum()
```
0
As we see, the dataframe only consists of real values. Therefore, we can proceed by assigning columns 1-63 to variable `X` and column 64 to `y`.
```python
X = df.iloc[:, :-1].values
y = df.iloc[:, -1].values
```
### Applying SVM
Having assigned the data to `X` and `y` we are now ready to divide the dataset into separate training and test sets.
```python
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.2,
random_state=0,
stratify=y)
```
Unlike e.g. decision tree algorithms, SVMs are sensitive to the magnitude of the data. Therefore scaling our data is recommended.
```python
from sklearn.preprocessing import StandardScaler
# Create StandardScaler object
sc = StandardScaler()
# Standardize features; equal results as if done in two
# separate steps (first .fit() and then .transform())
X_train_std = sc.fit_transform(X_train)
# Transform test set
X_test_std = sc.transform(X_test)
```
With the data standardized, we can finally apply an SVM to the data. We import the `SVC` class (for Support Vector Classifier) from the Scikit-learn toolbox and create a `svm_linear` object that represents a linear SVM with `C=1.0`. Recall that `C` helps us control the penalty for misclassification. Large values of `C` correspond to large error penalties and vice-versa. More parameters can be specified. Details are best explained in the function's [documentation page](http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html).
```python
from sklearn.svm import SVC
from sklearn import metrics
import matplotlib.pyplot as plt
# Create object
svm_linear = SVC(kernel='linear', C=1.0)
svm_linear
```
SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
decision_function_shape='ovr', degree=3, gamma='auto', kernel='linear',
max_iter=-1, probability=False, random_state=None, shrinking=True,
tol=0.001, verbose=False)
With the `svm_linear` object ready we can now fit the object to the training data and check for the model's accuracy.
```python
# Fit linear SVM to standardized training set
svm_linear.fit(X_train_std, y_train)
# Print results
print("Share of non-default observations: {:.2f}".format(np.count_nonzero(y==0) / len(y)))
print("Train score: {:.2f}".format(svm_linear.score(X_train_std, y_train)))
print("Test score: {:.2f}".format(svm_linear.score(X_test_std, y_test)))
```
Share of non-default observations: 0.93
Train score: 0.93
Test score: 0.93
```python
# Predict classes
y_pred = svm_linear.predict(X_test_std)
# Manual confusion matrix as pandas DataFrame
confm = pd.DataFrame({'Predicted': y_pred,
'True': y_test})
confm.replace(to_replace={0:'Non-Default', 1:'Default'}, inplace=True)
print(confm.groupby(['True','Predicted'], sort=False).size().unstack('Predicted'))
```
Predicted Non-Default Default
True
Non-Default 1096.0 4.0
Default 82.0 NaN
In the same way we can run a kernel SVM on the data. We have four kernel options: one linear as introduced above and three non-linear. All of them have hyperparameters available. If these are not specified, default values are taken. [Check the documentation for details](http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html). A short instantiation sketch follows the list below.
* `linear`: linear SVM as shown above with `C` as hyperparameter
* `rbf`: Radial basis function Kernel with `C, gamma` as hyperparameter
* `poly`: Polynomial Kernel with `C, degree, gamma, coef0` as hyperparameter
* `sigmoid`: Sigmoid Kernel with `C, gamma, coef0` as hyperparameter
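To make the list above concrete, the four variants could be instantiated as follows (the hyperparameter values are arbitrary illustrations, not tuned choices):
```python
# Illustration only: instantiating the four kernel variants
svm_lin = SVC(kernel='linear', C=1.0)
svm_rbf = SVC(kernel='rbf', C=1.0, gamma=0.1)
svm_pol = SVC(kernel='poly', C=1.0, degree=3, gamma=0.1, coef0=0.0)
svm_sig = SVC(kernel='sigmoid', C=1.0, gamma=0.1, coef0=0.0)
```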
Let us apply a polynomial Kernel as example.
```python
svm_poly = SVC(kernel='poly', random_state=1)
svm_poly
```
SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
decision_function_shape='ovr', degree=3, gamma='auto', kernel='poly',
max_iter=-1, probability=False, random_state=1, shrinking=True,
tol=0.001, verbose=False)
Not having specified the hyperparameters `C`, `degree`, `gamma`, and `coef0`, we see that the algorithm has taken default values: `C` is equal to 1, the default `degree` is 3, `gamma='auto'` means that the value will be calculated as $1/n_{\text{features}}$, and `coef0` is set to 0.
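As a quick check, `gamma='auto'` resolves to the following value for our 63 features:
```python
# gamma='auto' corresponds to 1 / n_features
1.0 / X_train_std.shape[1]
```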
```python
# Fit polynomial SVM to standardized training set
svm_poly.fit(X_train_std, y_train)
# Print results
print("Share of non-default observations: {:.2f}".format(np.count_nonzero(y==0) / len(y)))
print("Train score: {:.2f}".format(svm_poly.score(X_train_std, y_train)))
print("Test score: {:.2f}".format(svm_poly.score(X_test_std, y_test)))
```
Share of non-default observations: 0.93
Train score: 0.94
Test score: 0.93
```python
# Predict classes
y_pred = svm_poly.predict(X_test_std)
# Manual confusion matrix as pandas DataFrame
confm = pd.DataFrame({'Predicted': y_pred,
'True': y_test})
confm.replace(to_replace={0:'Non-Default', 1:'Default'}, inplace=True)
print(confm.groupby(['True','Predicted'], sort=False).size().unstack('Predicted'))
```
Predicted Non-Default Default
True
Non-Default 1096 4
Default 81 1
As it looks, linear and polynomial SVMs yield similar results. What is clearly unsatisfactory is the number of true defaults that the SVM fails to detect. Both the linear and the non-linear SVM fail to label $\geq$ 80 of the 82 defaults in the test set. From a financial perspective, this is unacceptable and raises questions regarding
* Class imbalance
* Hyperparameter fine-tuning through cross validation and grid search
* Feature selection
* Noise & dimension reduction
which we want to address in the next section.
## Dealing with Class Imbalance
When we deal with default data sets we observe that the ratio of non-default to default records is heavily skewed towards non-default. This is a common problem in real-world data sets: samples from one or several classes are over-represented. For the present data set we are talking 93% non-defaults vs. 7% defaults. An algorithm that predicts non-default 100 out of 100 times is right in 93% of the cases. Therefore, training a model on such a data set that achieves the same 93% test accuracy (as our SVM above) means nothing other than that our model hasn't learned anything informative from the features provided. Thus, as we have learned, when assessing a classifier on an imbalanced data set other metrics such as precision, recall, or the ROC curve might be more informative.
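As a quick illustration of such metrics, and assuming the polynomial SVM `svm_poly` and its predictions `y_pred` from the cells above are still in memory, one could run:
```python
# Per-class precision, recall and f1-score plus ROC AUC based on the
# decision function (the SVC above was fit with probability=False)
print(metrics.classification_report(y_test, y_pred))
print("ROC AUC: {:.2f}".format(
    metrics.roc_auc_score(y_test, svm_poly.decision_function(X_test_std))))
```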
Having said that, we also have to consider that class imbalance can influence a learning algorithm during the model fitting itself. Machine learning algorithms typically optimize a reward or cost function. This means that an algorithm implicitly learns the model that optimizes the predictions based on the most abundant class in the dataset in order to minimize the cost or maximize the reward during the training phase. This in turn might yield skewed results in the case of imbalanced data sets.
There are several options to deal with class imbalance; we will discuss two of them. The first option is to set the `class_weight` parameter to `class_weight='balanced'`. Most classifiers have this option implemented (of the introduced classifiers, KNN, LDA and QDA lack such a parameter). This will assign a larger penalty to wrong predictions on the minority class.
```python
# Initiate and fit a polynomial SVM to training set
svm_poly = SVC(kernel='poly', random_state=1, class_weight='balanced')
svm_poly.fit(X_train_std, y_train)
# Predict classes and print results
y_pred = svm_poly.predict(X_test_std)
print(metrics.classification_report(y_test, y_pred))
print(metrics.confusion_matrix(y_test, y_pred))
print("Test score: {:.2f}".format(svm_poly.score(X_test_std, y_test)))
```
precision recall f1-score support
0.0 0.93 0.99 0.96 1100
1.0 0.12 0.02 0.04 82
avg / total 0.88 0.92 0.89 1182
[[1086 14]
[ 80 2]]
Test score: 0.92
The second option we want to discuss is up- and downsampling of the minority/majority class. Both are implemented in Scikit-learn through the `resample` function and, depending on the data and the task at hand, one might be better suited than the other. For upsampling, Scikit-learn applies bootstrapping to draw new samples from the dataset with replacement. This means that the function will repeatedly draw new samples from the minority class until it contains the number of samples we define. Here's a code example:
```python
from sklearn.utils import resample
# Upsampling
X_upsampled, y_upsampled = resample(X[y==1], y[y==1],
replace=True,
n_samples=X[y==0].shape[0],
random_state=1)
print('No. of default samples BEFORE upsampling: {:.0f}'.format(y.sum()))
print('No. of default samples AFTER upsampling: {:.0f}'.format(y_upsampled.sum()))
```
No. of default samples BEFORE upsampling: 410
No. of default samples AFTER upsampling: 5500
Downsampling works in similar fashion.
```python
# Downsampling
X_dnsampled, y_dnsampled = resample(X[y==0], y[y==0],
replace=False,
n_samples=X[y==1].shape[0],
random_state=1)
```
Running the SVM algorithm on the balanced dataset now works as you would expect:
```python
# Combine datasets
X_bal = np.vstack((X[y==1], X_dnsampled))
y_bal = np.hstack((y[y==1], y_dnsampled))
# Train test split
X_train_bal, X_test_bal, y_train_bal, y_test_bal = \
train_test_split(X_bal, y_bal,
test_size=0.2,
random_state=0,
stratify=y_bal)
# Standardize features; equal results as if done in two
# separate steps (first .fit() and then .transform())
X_train_bal_std = sc.fit_transform(X_train_bal)
# Transform test set
X_test_bal_std = sc.transform(X_test_bal)
# Initiate and fit a polynomial SVM to training set
svm_poly_bal = SVC(kernel='poly', random_state=1)
svm_poly_bal.fit(X_train_bal_std, y_train_bal)
# Predict classes and print results
y_pred_bal = svm_poly_bal.predict(X_test_bal_std)
print(metrics.classification_report(y_test_bal, y_pred_bal))
print(metrics.confusion_matrix(y_test_bal, y_pred_bal))
print("Test score: {:.2f}".format(svm_poly_bal.score(X_test_bal_std, y_test_bal)))
```
precision recall f1-score support
0.0 0.51 1.00 0.68 82
1.0 1.00 0.05 0.09 82
avg / total 0.76 0.52 0.39 164
[[82 0]
[78 4]]
Test score: 0.52
By applying an SVM to a balanced set of data we improve our model slightly. Yet there remains work to be done: the polynomial SVM still misses 95.1% (= 78/82) of the default cases.
It should be said that in general using an upsampled set is to be preferred over a downsampled set. However, here we are talking 11'000 observations times 63 features for the upsampled set and this can easily take quite some time to run models on, especially if we compute a grid search as in the next section. For this reason the downsampled set was used.
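For completeness, here is a sketch of how the upsampled (balanced) set could be assembled, analogous to the combination of the downsampled set above; it is not used further here for the runtime reasons just mentioned:
```python
# Combine the original non-default observations with the upsampled defaults
# (illustration only, to keep computation times manageable)
X_bal_up = np.vstack((X[y==0], X_upsampled))
y_bal_up = np.hstack((y[y==0], y_upsampled))
X_bal_up.shape
```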
## Hyperparameter Fine-Tuning
### Pipelines
Another tool that helps in optimizing our model is the `GridSearchCV` function introduced in the previous chapter, which finds the best hyperparameters through a brute-force (cross validation) approach. Yet before we simply copy-paste the code from the last chapter, we ought to address a subtle yet important difference between decision trees and SVM (or most other ML) algorithms that has implications for the application: decision tree algorithms are among the few models where data scaling is not necessary. SVMs on the other hand are (as most ML algorithms) fairly sensitive to the magnitude of the data. Now you might say that this is precisely why we standardized the data at the very beginning and with that we are good to go. In principle, this is correct. However, if we are precise, we commit a subtle yet possibly significant thought error.
If we decide to apply a grid search using cross validation to find the optimal hyperparameters for e.g. an SVM, we unfortunately cannot just scale the full data set at the very beginning and then be done for the rest of the process. Conceptually it is important to understand why. Assume we have a data set. As we learned in the chapter on feature scaling and cross validation, applying a scaling on the combined data set and splitting the set into training and holdout set after the scaling is wrong. The reason is that information from the test set finds its way into the model and distorts the results: the training set is then scaled based not only on information from that set but also on information from the test set.
Now the same is true if we apply a grid search process with cross validation on a training set. For each fold in the CV, some part of the training set is declared the training part and some the test part. The test part within this split is used to measure the performance of our model trained on the training part. However, if we simply scale the training set and then apply grid-search CV on the scaled training set, we commit the same thought error as if we scaled the full set at the very beginning. The test fold (of the CV split) would no longer be independent but implicitly already be part of the training set we used to fit the model. This is fundamentally different from how new data looks to the model. The test data within each cross validation split would no longer correctly mirror how new data would look to the modeling process; information has already leaked from the test data into our modeling process. This leads to overly optimistic results during cross validation, and possibly the selection of suboptimal parameters (Müller & Guido (2017)).
We have not addressed this problem in the chapter on cross validation because we had not yet introduced the tool to deal with it. Furthermore, if our data set is homogeneous and of some size, this is less of an issue. Yet since Scikit-learn provides a fantastic tool to deal with this (and many other) issue(s), we want to introduce it here. The tool is called **pipelines** and allows us to combine multiple processing steps in a very convenient and proper way. Let us look at how we can use the `Pipeline` class to express the end-to-end workflow. First we build a pipeline object. This object is provided a list of steps. Each step is a tuple containing a name (which you define) and an instance of an estimator.
```python
from sklearn.pipeline import Pipeline
# Create pipeline object with standard scaler and SVC estimator
pipe = Pipeline([('scaler', StandardScaler()),
('svm_poly', SVC(kernel='poly', random_state=0))])
```
Next we define a parameter grid to search over and construct a `GridSearchCV` from the pipeline and the parameter grid. Notice that we have to specify for each parameter which step of the pipeline it belongs to. This is done by calling the name we gave this step, followed by a double underscore and the parameter name. For the present example, let us compare different degrees, and `C` values.
```python
# Define parameter grid
param_grid = {'svm_poly__C': [0.1, 1, 10, 100],
'svm_poly__degree': [1, 2, 3, 5, 7]}
```
With that we can run a `GridSearchCV` as usual.
```python
from sklearn.model_selection import GridSearchCV
# Run grid search
grid = GridSearchCV(pipe, param_grid=param_grid, cv=5, n_jobs=-1)
grid.fit(X_train_bal, y_train_bal)
# Print results
print('Best CV accuracy: {:.2f}'.format(grid.best_score_))
print('Test score: {:.2f}'.format(grid.score(X_test_bal, y_test_bal)))
print('Best parameters: {}'.format(grid.best_params_))
```
Best CV accuracy: 0.73
Test score: 0.78
Best parameters: {'svm_poly__C': 100, 'svm_poly__degree': 1}
Notice that thanks to the pipeline object, now for each split in the cross validation the `StandardScaler` is refit with only the training splits and no information is leaked from the test split into the parameter search.
Depending on the grid you search, computations might take quite some time. One way to improve speed is by reducing the feature space; that is reducing the number of features. We will discuss feature selection and dimension reduction options in the next section but for the moment, let us just apply a method called Principal Component Analysis (PCA). PCA effectively transforms the feature space from $\mathbb{R}^{p} \rightarrow \mathbb{R}^{q}$ with $q$ being a user specified value (but usually $q < < p$). PCA is similar to other preprocessing steps and can be included in pipelines as e.g. `StandardScaler`.
Here we reduce the feature space from $\mathbb{R}^{63}$ (i.e. $p=63$ features) to $\mathbb{R}^{2}$. This will make the fitting process faster. However, this comes at a cost: by reducing the feature space we might not only get rid of noise but also lose part of the information available in the full dataset. Our model accuracy might suffer as a consequence. Furthermore, the speed we gain by fitting a model to a smaller subset can be offset by the additional computations it takes to calculate the PCA. In the example of the upsampled data set we would be talking of an $[11'000 \cdot 0.8 \cdot 0.8 \times 63]$ matrix (0.8 for the train/test split and each CV fold) for which eigenvectors and eigenvalues need to be calculated. This means up to 63 eigenvalues per grid search loop.
```python
from sklearn.decomposition import PCA
# Create pipeline object with standard scaler, PCA and SVC estimator
pipe = Pipeline([('scaler', StandardScaler()),
('pca', PCA(n_components=2)),
('svm_poly', SVC(kernel='poly', random_state=0))])
# Define parameter grid
param_grid = {'svm_poly__C': [100],
'svm_poly__degree': [1, 2, 3]}
# Run grid search
grid = GridSearchCV(pipe, param_grid=param_grid, cv=5, n_jobs=-1)
grid.fit(X_train_bal, y_train_bal)
# Print results
print('Best CV accuracy: {:.2f}'.format(grid.best_score_))
print('Test score: {:.2f}'.format(grid.score(X_test_bal, y_test_bal)))
print('Best parameters: {}'.format(grid.best_params_))
```
Best CV accuracy: 0.64
Test score: 0.74
Best parameters: {'svm_poly__C': 100, 'svm_poly__degree': 2}
Other so-called preprocessing steps can be included in the pipeline too. This shows how seamlessly such workflows can be steered through pipelines. We can even combine multiple models, as we show in the next code snippet. By now you are probably aware that trying all possible solutions is not a viable machine learning strategy; computational power is certainly going to be an issue. Nevertheless, for the record we provide below an example where we apply logistic regression and an SVM with RBF kernel to find the best solution (for details see the section on PCA below).
```python
from sklearn.linear_model import LogisticRegression
# Create pipeline object with standard scaler, PCA and SVC estimator
pipe = Pipeline([('scaler', StandardScaler()),
('classifier', SVC(random_state=0))])
# Define parameter grid
param_grid = [{'scaler': [StandardScaler()],
'classifier': [SVC(kernel='rbf')],
'classifier__gamma': [1, 10],
'classifier__C': [10, 100]},
{'scaler': [StandardScaler(), None],
'classifier': [LogisticRegression()],
'classifier__C': [10, 100]}]
# Run grid search
grid = GridSearchCV(pipe, param_grid, cv=5, n_jobs=-1)
grid.fit(X_train_bal, y_train_bal)
# Print results
print('Best CV accuracy: {:.2f}'.format(grid.best_score_))
print('Test score: {:.2f}'.format(grid.score(X_test_bal, y_test_bal)))
print('Best parameters: {}'.format(grid.best_params_))
```
Best CV accuracy: 0.76
Test score: 0.79
Best parameters: {'classifier': LogisticRegression(C=100, class_weight=None, dual=False, fit_intercept=True,
intercept_scaling=1, max_iter=100, multi_class='ovr', n_jobs=1,
penalty='l2', random_state=None, solver='liblinear', tol=0.0001,
verbose=0, warm_start=False), 'classifier__C': 100, 'scaler': StandardScaler(copy=True, with_mean=True, with_std=True)}
From the above output we see that, perhaps surprisingly, the logistic regression yields the best accuracy (with `C=100`).
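To look beyond plain accuracy, we can, as a quick check, inspect the winning pipeline's predictions on the (balanced) test set:
```python
# Confusion matrix of the best pipeline found by the grid search
y_pred_grid = grid.predict(X_test_bal)
print(metrics.confusion_matrix(y_test_bal, y_pred_grid))
```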
## Feature Selection and Dimensionality Reduction
### Complexity and the Curse of Overfitting
If we observe that a model performs much better on training than on test data, we have an indication that the model suffers from overfitting. The reason for the overfitting is most probably that our model is too complex for the given training data. Common solutions to reduce the generalization error are (Raschka (2015)):
* Collect more (training) data
* Introduce a penalty for complexity via regularization
* Choose a simpler model with fewer parameters
* Reduce the dimensionality of the data
Collecting more data is self-explanatory but often not applicable. Regularization via a complexity penalty term is a technique that is primarily applicable to regression settings (e.g. logistic regression). We will not discuss it here but the interested reader will easily find helpful information in e.g. James et al. (2013) chapter 6 or Raschka (2015) chapter 4. Here we will look at one commonly used solution to reduce overfitting: dimensionality reduction via feature selection.
### Feature Selection
A useful approach to select relevant features from a data set is to use information from the random forest algorithm introduced in the previous chapter. There we elaborated how decision trees rank feature importance based on an impurity decrease. Conveniently, we can access this feature importance ranking directly from the `RandomForestClassifier` object. By executing the code below - following the example in Raschka (2015) - we train a random forest model on the balanced default data set (from before) and rank the features by their respective importance measure.
```python
from sklearn.ensemble import RandomForestClassifier
# Extract feature labels
feat_labels = df.columns[:-1]
# Create Random Forest object, fit data and
# extract feature importance attributes
forest = RandomForestClassifier(random_state=1)
forest.fit(X_train_bal, y_train_bal)
importances = forest.feature_importances_
```
```python
# Sort output (by relative importance) and
# print top 15 features
indices = np.argsort(importances)[::-1]
n = 15
for i in range(n):
print('{0:2d}) {1:7s} {2:6.4f}'.format(i + 1,
feat_labels[indices[i]],
importances[indices[i]]))
```
1) Attr39 0.0811
2) Attr27 0.0684
3) Attr15 0.0471
4) Attr13 0.0447
5) Attr16 0.0421
6) Attr21 0.0355
7) Attr26 0.0339
8) Attr11 0.0310
9) Attr23 0.0303
10) Attr7 0.0292
11) Attr46 0.0251
12) Attr25 0.0228
13) Attr34 0.0212
14) Attr41 0.0201
15) Attr9 0.0185
The decimal value is the relative importance of the respective feature. We can also plot this result for a better overview. The code below shows one way of doing it.
```python
# Get cumsum of the n most important features
feat_imp = np.sort(importances)[::-1]
sum_feat_imp = np.cumsum(feat_imp)[:n]
```
```python
# Plot Feature Importance (both cumul., individual)
plt.figure(figsize=(12, 8))
plt.bar(range(n), importances[indices[:n]], align='center')
plt.xticks(range(n), feat_labels[indices[:n]], rotation=90)
plt.xlim([-1, n])
plt.xlabel('Feature')
plt.ylabel('Rel. Feature Importance')
plt.step(range(n), sum_feat_imp, where='mid',
label='Cumulative importance')
plt.tight_layout();
```
Executing the code ranks the different features according to their relative importance. The definition of each `AttrXX` would have to be [looked up in the data description](https://archive.ics.uci.edu/ml/datasets/Polish+companies+bankruptcy+data). Note that the feature importance values are normalized such that they sum up to 1.
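A one-line check of this normalization (using the `importances` array from above):
```python
# Should be (numerically) equal to 1
importances.sum()
```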
Feature selection in the way shown in the preceding code snippets will not work in combination with a `pipeline` object. However, Scikit-learn has implemented such a function that could be used in a preprocessing step. Its name is `SelectFromModel` and details can be found [here](http://scikit-learn.org/stable/modules/feature_selection.html#feature-selection-using-selectfrommodel). Instead of selecting the top $n$ features you define a threshold, which selects those features whose importance is greater or equal to said threshold (e.g. mean, median etc.). For reference, below it is shown how the function is applied inside a pipeline.
```python
from sklearn.feature_selection import SelectFromModel
pipe = Pipeline([('feature_selection', SelectFromModel(RandomForestClassifier(), threshold='median')),
('scaler', StandardScaler()),
('classification', SVC())])
pipe.fit(X_train_bal, y_train_bal).score(X_test_bal, y_test_bal)
```
0.78658536585365857
### Principal Component Analysis
In the previous section you learned an approach for reducing the dimensionality of a data set through feature selection. An alternative to feature selection is feature extraction, of which Principal Component Analysis (PCA) is the best known and most popular approach. It is an unsupervised method that aims to summarize the information content of a data set by transforming it onto a new feature subspace of lower dimensionality than the original one. With the rise of big data, this is a field that is gaining importance by the day. PCA is widely used in a variety of fields - e.g. in finance to de-noise signals in stock market trading, to create factor models, for feature selection in bankruptcy prediction, for dimensionality reduction of high-frequency data etc. Unfortunately, the scope of this course does not allow us to discuss PCA in great detail. Nevertheless the fundamentals shall be addressed here briefly so that the reader has a good understanding of how PCA helps in reducing dimensionality.
To build an intuition for PCA we quote the excellent James et al. (2013, p. 375): *"PCA finds a low-dimensional representation of a dataset that contains as much as possible of the **variation**. The idea is that each of the $n$ observations lives in $p$-dimensional space, but not all of these dimensions are equally interesting. PCA seeks a small number of dimensions that are as interesting as possible, where the concept of interesting is measured by the amount that the observations vary along each dimension. Each of the dimensions found by PCA is a linear combination of the $p$ features."* Since each principal component is required to be orthogonal to all other principal components, we basically take correlated original variables (features) and replace them with a small set of principal components that capture their joint variation.
The figures below aim at visualizing the idea of principal components. In both figures we see the same two-dimensional dataset. PCA searches for the principal axes along which the data varies most. These principal axes measure the variance of the data when projected onto them. The two vectors (arrows) in the left plot visualize this. Notice that given an $[n \times p]$ feature matrix $\mathbf{X}$ there are at most $\min(n-1, p)$ principal components. The figure on the right-hand side displays the data points projected onto the first principal axis. In this way we have reduced the dimensionality from $\mathbb{R}^2$ to $\mathbb{R}^1$. In practice, PCA is of course primarily used for datasets with $p$ large and the selected number of principal components $q$ is usually much smaller than the dimension of the original dataset ($q << p$).
The first principal component is the direction in space along which (orthogonal) projections have the largest variance. The second principal component is the direction which maximizes variance among all directions while being orthogonal to the first. The $k^{\text{th}}$ component is the variance-maximizing direction orthogonal to the previous $k-1$ components.
How do we express this in mathematical terms? Let $\mathbf{X}$ be an $n \times p$ dataset and let it be centered (i.e. each column mean is zero; notice that standardization is very important in PCA). The $p \times p$ variance-covariance matrix $\mathbf{C}$ is then equal to $\mathbf{C} = \frac{1}{n} \mathbf{X}^T \mathbf{X}$. Additionally, let $\mathbf{\phi}$ be a unit $p$-dimensional vector, i.e. $\phi \in \mathbb{R}^p$ and let $\sum_{i=1}^p \phi_{i1}^2 = \mathbf{\phi}^T \mathbf{\phi} = 1$.
The projections of the individual data points onto the principal axis are given by the linear combination of the form
\begin{equation}
Z_{i} = \phi_{1i} X_{1} + \phi_{2i} X_{2} + \ldots + \phi_{pi} X_{p}.
\end{equation}
In matrix notation we write
\begin{equation}
\mathbf{Z} = \mathbf{X \phi}
\end{equation}
Since each column vector $X_i$ is centered, i.e. $\frac{1}{n} \sum_{j=1}^n x_{ji} = 0$, the average of $Z_i$ (the score vector of the $i$-th principal component) will be zero as well. With that, the variance of $\mathbf{Z}$ is
\begin{align}
\text{Var}(\mathbf{Z}) &= \frac{1}{n} (\mathbf{X \phi})^T (\mathbf{X \phi}) \\
&= \frac{1}{n} \mathbf{\phi}^T \mathbf{X}^T \mathbf{X \phi} \\
&= \mathbf{\phi}^T \frac{\mathbf{X}^T \mathbf{X}}{n} \mathbf{\phi} \\
&= \mathbf{\phi}^T \mathbf{C} \mathbf{\phi}
\end{align}
Note that it is common standard to use the population estimation of variance (division by $n$) instead of the sample variance (division by $n-1$).
Now, PCA seeks to solve a sequence of optimization problems:
\begin{equation}
\begin{aligned}
& \underset{\mathbf{\phi}}{\text{maximize}} & & \text{Var}(\mathbf{Z})\\
& \text{subject to} & & \mathbf{\phi}^T \mathbf{\phi}=1, \quad \phi \in \mathbb{R}^p \\
&&& \mathbf{Z}^T \mathbf{Z} = \mathbf{ZZ}^T = \mathbf{I}.
\end{aligned}
\end{equation}
Looking at the above term it should be clear why we have restricted the vector $\mathbf{\phi}$ to be a unit vector. If we did not, we could simply scale up $\mathbf{\phi}$ to inflate the variance - which is not what we want. This problem can be solved with Lagrange multipliers and via an eigen decomposition (a standard technique in linear algebra), the details of which are explained in the appendix of the script.
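Although we leave the derivation to the appendix, the result can be checked numerically: the principal directions are the eigenvectors of $\mathbf{C}$ and the variances along them are the corresponding eigenvalues. A minimal sketch on the standardized balanced training set from above:
```python
# Eigen decomposition of the covariance matrix of the (centered) data
Xc = X_train_bal_std - X_train_bal_std.mean(axis=0)
C = Xc.T.dot(Xc) / Xc.shape[0]
eig_vals, eig_vecs = np.linalg.eigh(C)
# np.linalg.eigh returns eigenvalues in ascending order; reverse to descending
eig_vals, eig_vecs = eig_vals[::-1], eig_vecs[:, ::-1]
# The largest eigenvalues belong to the first principal components
print(eig_vals[:5])
```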
We have shown above how to apply PCA within a pipeline workflow. A more general setup is shown in the code snippet below. We again make use of the Polish bankruptcy set introduced above.
```python
from sklearn.decomposition import PCA
# Define no. of PC
q = 10
# Create PCA object and fit to find
# first q principal components
pca = PCA(n_components=q)
pca.fit(X_train_bal)
pca
```
PCA(copy=True, iterated_power='auto', n_components=10, random_state=None,
svd_solver='auto', tol=0.0, whiten=False)
To close, one last code snippet is provided. Running it will visualize the cumulative explained variance ratio as a function of the number of components. (Mathematically, the explained variance ratio is the ratio of the eigenvalue of principal component $i$ to the sum of the eigenvalues, $\frac{\lambda_i}{\sum_{i}^p \lambda_i}$. See the appendix in the script to better understand the meaning of eigenvalues in this context.) In practice, this might be helpful in deciding on the number of principal components $q$ to use.
```python
# Run PCA for all possible PCs
pca = PCA().fit(X_train_bal)
# Define max no. of PC
q = X_train_bal.shape[1]
# Get cumsum of the PC 1-q
expl_var = pca.explained_variance_ratio_
sum_expl_var = np.cumsum(expl_var)[:q]
```
```python
# Plot Feature Importance (both cumul., individual)
plt.figure(figsize=(12, 6))
plt.bar(range(1, q + 1), expl_var, align='center')
plt.xticks(range(1, q + 1, 5))
plt.xlim([0, q + 1])
plt.xlabel('Principal Components')
plt.ylabel('Explained Variance Ratio')
plt.step(range(1, 1 + q), sum_expl_var, where='mid')
plt.tight_layout();
```
This shows us that the first 5 principal components explain basically all of the variation in the data. Therefore we could focus on working with only these.
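Building on this observation, here is a small sketch (reusing the objects from above) of how one could keep only the first five components:
```python
# Project training and test set onto the first 5 principal components
pca5 = PCA(n_components=5)
X_train_bal_pc5 = pca5.fit_transform(X_train_bal)
X_test_bal_pc5 = pca5.transform(X_test_bal)
print(X_train_bal_pc5.shape, X_test_bal_pc5.shape)
```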
## Further Resources
In writing this notebook, many resources were consulted. For internet resources the links are provided within the text above and will therefore not be listed again. Beyond these links, the following resources were consulted and are recommended as further reading on the discussed topics:
* Burges, Christopher J.C., 1998, A tutorial on support vector machines for pattern recognition, Data mining and knowledge discovery 2.2, 121-167.
* Friedman, Jerome, Trevor Hastie, and Robert Tibshirani, 2001, *The Elements of Statistical Learning* (Springer, New York, NY).
* James, Gareth, Daniela Witten, Trevor Hastie, and Robert Tibshirani, 2013, *An Introduction to Statistical Learning: With Applications in R* (Springer Science & Business Media, New York, NY).
* Müller, Andreas C., and Sarah Guido, 2017, *Introduction to Machine Learning with Python* (O’Reilly Media, Sebastopol, CA).
* Raschka, Sebastian, 2015, *Python Machine Learning* (Packt Publishing Ltd., Birmingham, UK).
* Shalizi, Cosma Rohilla, 2017, Advanced Data Analysis from an Elementary Point of View from website, http://www.stat.cmu.edu/~cshalizi/ADAfaEPoV/ADAfaEPoV.pdf, 08/24/17.
* VanderPlas, Jake, 2016, *Python Data Science Handbook* (O'Reilly Media, Sebastopol, CA).
* Vapnik, Vladimir N., 2013, *The Nature of Statistical Learning* (Springer, New York, NY).
```python
import numpy as np
import scipy as sp
from scipy.stats import poisson
```
```python
import matplotlib.pyplot as plt
%matplotlib inline
```
##### Exercise 4.1
If $\pi$ is the equiprobable random policy, what is $Q^\pi(11, down)$, $Q^\pi(7, down)$?
We have $Q(s,a) = \sum_{s'} P^a_{ss'}[R^a_{ss'} + \gamma \sum_{a'} \pi(s', a') Q(s', a')]$, so
$$Q^\pi(11, down) = 1 [-1 + \gamma \cdot 1 \cdot 0] = -1$$
and
$$Q^\pi(7, down) = -1 + \gamma V^\pi(11) = -1 - 14 \cdot \gamma$$
##### Exercise 4.2
Suppose a new state 15 is added to the gridworld just below state 13, and its actions, left, up, right, and down, take the agent to states 12, 13, 14, and 15, respectively. Assume that the transitions from the original states are unchanged. What, then, is $V^\pi(15)$ for the equiprobable random policy?
$V^\pi(s) = \sum_a \pi(s,a) \sum_{s'} P^a_{ss'} [ R^a_{ss'} + \gamma V^\pi(s')]$
$V^\pi(15) = \frac{1}{4} [(-1 + \gamma V^\pi(12)) + (-1 + \gamma V^\pi(13)) + (-1 + \gamma V^\pi(14)) + (-1 + \gamma V^\pi(15))]$
$= \frac{1}{4}[ -4 + \gamma (V^\pi(12) + V^\pi(13) + V^\pi(14) + V^\pi(15))]$
so $V^\pi(15) = \frac{1}{1 - \frac{\gamma}{4}} [-1 + \frac{\gamma}{4} (V^\pi(12) + V^\pi(13) + V^\pi(14))]$
Now suppose the dynamics of state 13 are also changed, such that action down from state 13 takes the agent to the new state 15. What is $V^\pi(15)$ for the equiprobable random policy in this case?
It seems like we would need to do policy evaluation to find $V^\pi(15)$, since all value functions would change with the new state available? I'm not sure.
##### Exercise 4.3
What are the equations analogous to (4.3), (4.4), and (4.5) for the action-value function $Q^\pi$ and its successive approximation by a sequence of functions $Q_0, Q_1, Q_2, ...$?
Using the Bellman Equation for $Q^\pi$,
$
\begin{equation}
\begin{split}
Q^\pi(s, a) =& E_\pi[\sum_{k=0}^{\infty} \gamma^k r_{t+k+1} | s_t = s, a_t = a]\\
=& E_\pi[r_{t+1} + \gamma \sum_{a'} \pi(s', a') Q^\pi(s', a')| s_t = s, a_t = a]\\
=& \sum_{s'} P^a_{ss'} [R^a_{ss'} + \gamma \sum_{a'} \pi(s', a') Q^\pi(s', a')]
\end{split}
\end{equation}
$
we obtain the following update equation:
$$Q_{k+1}(s, a) = \sum_{s'} P^a_{ss'} [R^a_{ss'} + \gamma \sum_{a'} \pi(s', a') Q_k(s', a')]$$
##### Exercise 4.3.5
In some undiscounted episodic tasks there may be policies for which eventual termination is not guaranteed. For example, in the grid problem above it is possible to go back and forth between two states forever. In a task that is otherwise perfectly sensible, $V^\pi(s)$ may be negative infinity for some policies and states, in which case the algorithm for iterative policy evaluation given in Figure 4.1 will not terminate. As a purely practical matter, how might we amend this algorithm to assure termination even in this case? Assume that eventual termination is guaranteed under the optimal policy.
We can threshold the min of $V^\pi(s)$ at a constant value so that $V^\pi(s)$ doesn't go to $-\infty$ since we know that states where $V^\pi(s) \rightarrow -\infty$ are actually terminal states that can't be escaped.
##### Exercise 4.4
https://webdocs.cs.ualberta.ca/~sutton/book/code/jacks.lisp
```python
# ;;; Jack's car rental problem. The state is n1 and n2, the number of cars
# ;;; at each location a the end of the day, at most 20. Actions are numbers of cars
# ;;; to switch from location 1 to location 2, a number between -5 and +5.
# ;;; P1(n1,new-n1) is a 26x21 array giving the probability that the number of cars at
# ;;; location 1 is new-n1, given that it starts the day at n1. Similarly for P2
# ;;; R1(n1) is a 26 array giving the expected reward due to satisfied requests at
# ;;; location, given that the day starts with n1 cars at location 1. SImilarly for R2.
```
The expected reward due to satisfied requests in a given state $n$ (state = number of cars available) at a location is:
$$R_n = 10 \sum_{r=0}^{\infty} P(r) \min(r, n)$$
and the transition probability that the number of cars at a location is $n'$ after starting the day at $n$ is
$$P(n, n') = \sum_{r=0}^{\infty} P(r) \sum_{d=0}^{\infty} P(d)\, \mathbb{1}\!\left[ n' = \min\left(20,\; n + d - \min(r, n)\right)\right],$$
summing over all requests $r$ and dropoffs $d$.
```python
class JacksCarRental(object):
def __init__(self):
self.lambda_requests1 = 3
self.lambda_requests2 = 4
self.lambda_dropoffs1 = 3
self.lambda_dropoffs2 = 2
self.gamma = 0.9 # discount factor
self.theta = 0.0000001 # delta precision for policy evaluation
# value function
self.V = np.zeros((21, 21))
self.Vs = [self.V]
# policy
self.PI = np.zeros((21, 21))
self.PIs = [self.PI]
# transition probabilities for each state
self.P1 = np.zeros((26, 21))
self.P2 = np.zeros((26, 21))
# expected rewards in each state
self.R1 = np.zeros(26)
self.R2 = np.zeros(26)
# calculate trans. probs and expected rewards
self.P1, self.R1 = self.load_P_and_R(
self.P1, self.R1,
lambda_requests=self.lambda_requests1,
lambda_dropoffs=self.lambda_dropoffs1
)
self.P2, self.R2 = self.load_P_and_R(
self.P2, self.R2,
lambda_requests=self.lambda_requests2,
lambda_dropoffs=self.lambda_dropoffs2
)
def load_P_and_R(self, P, R, lambda_requests, lambda_dropoffs):
# Get the transition probabilities and expected rewards
requests = 0
request_prob = poisson.pmf(requests, mu=lambda_requests)
while request_prob >= .000001:
# expected rewards
for n in xrange(26):
# rent out car for $10 each
R[n] += 10 * request_prob * min([requests, n])
# transition probabilities
dropoffs = 0
dropoff_prob = poisson.pmf(dropoffs, mu=lambda_dropoffs)
while dropoff_prob >= .000001:
for n in xrange(26):
satisfied_requests = min([requests, n])
new_n = min([20, n + dropoffs - satisfied_requests])
if new_n < 0:
print 'Warning negative new_n', new_n
P[n, new_n] += request_prob * dropoff_prob
dropoffs += 1
dropoff_prob = poisson.pmf(dropoffs, mu=lambda_dropoffs)
requests += 1
request_prob = poisson.pmf(requests, mu=lambda_requests)
return P, R
# 2. policy evaluation
def backup_action(self, n1, n2, a):
# number of cars to move from location 1 to 2, thresholded at 5 and -5 according to problem specs
cars_to_move = max([min([n1, a]), -n2])
cars_to_move = min([max([cars_to_move, -5]), 5])
# costs $2 to move each cars
cost_to_move = -2 * abs(cars_to_move)
# do backup
morning_n1 = n1 - cars_to_move
morning_n2 = n2 + cars_to_move
# sum over all possible next states
newv = 0
for newn1 in xrange(21):
for newn2 in xrange(21):
newv += self.P1[morning_n1, newn1] * self.P2[morning_n2, newn2] *\
(self.R1[morning_n1] + self.R2[morning_n2] +\
self.gamma * self.V[newn1, newn2])
return newv + cost_to_move
def policy_evaluation(self):
delta = 1
while delta > self.theta:
delta = 0
# Loop through all States
for n1 in xrange(21):
for n2 in xrange(21):
old_v = self.V[n1, n2]
action = self.PI[n1, n2]
# do a full backup for each state
self.V[n1, n2] = self.backup_action(n1, n2, action)
delta = max([delta, abs(old_v - self.V[n1, n2])])
# print 'Policy evaluation delta: ', delta
print 'Done with Policy Evaluation'
return self.V
# 3. Policy Improvement
def get_best_policy(self, n1, n2):
best_value = -1
for a in range(max(-5, -n2), min(5, n1) + 1):
this_action_value = self.backup_action(n1, n2, a)
if this_action_value > best_value:
best_value = this_action_value
best_action = a
return best_action
def policy_improvement(self):
self.V = self.policy_evaluation()
policy_stable = False
while policy_stable is False:
policy_stable = True
for n1 in xrange(21):
for n2 in xrange(21):
b = self.PI[n1, n2]
self.PI[n1, n2] = self.get_best_policy(n1, n2)
if b != self.PI[n1, n2]:
policy_stable = False
self.Vs.append(self.policy_evaluation())
self.PIs.append(self.PI)
return policy_stable
```
```python
```
```python
# Policy Iteration for Jack's Car Rental problem
jacks = JacksCarRental()
V = jacks.policy_improvement()
```
/Applications/anaconda/envs/rl/lib/python2.7/site-packages/ipykernel/__main__.py:85: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
Done with Policy Evaluation
Done with Policy Evaluation
Done with Policy Evaluation
Done with Policy Evaluation
Done with Policy Evaluation
Done with Policy Evaluation
```python
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
xv, yv = np.meshgrid(range(21), range(21))
ax.plot_surface(xv, yv, jacks.V)
ax.set_xlabel('Cars at 2nd location')
ax.set_ylabel('Cars at 1st location')
```
```python
plt.figure()
cs = plt.contour(xv, yv, jacks.PI)
plt.clabel(cs, inline=1, fontsize=10)
plt.xlabel('Cars at 2nd location')
plt.ylabel('Cars at 1st location')
```
One of Jack's employees at the first location rides a bus home each night and lives near the second location. She is happy to shuttle one car to the second location for free. Each additional car still costs \$2, as do all cars moved in the other direction. In addition, Jack has limited parking space at each location. If more than 10 cars are kept overnight at a location (after any moving of cars), then an additional cost of \$4 must be incurred to use a second parking lot (independent of how many cars are kept there).
```python
class JacksCarRental(object):
def __init__(self):
self.lambda_requests1 = 3
self.lambda_requests2 = 4
self.lambda_dropoffs1 = 3
self.lambda_dropoffs2 = 2
self.gamma = 0.9 # discount factor
self.theta = 0.0000001 # delta precision for policy evaluation
# value function
self.V = np.zeros((21, 21))
self.Vs = [self.V]
# policy
self.PI = np.zeros((21, 21))
self.PIs = [self.PI]
# transition probabilities for each state
self.P1 = np.zeros((26, 21))
self.P2 = np.zeros((26, 21))
# expected rewards in each state
self.R1 = np.zeros(26)
self.R2 = np.zeros(26)
# calculate trans. probs and expected rewards
self.P1, self.R1 = self.load_P_and_R(
self.P1, self.R1,
lambda_requests=self.lambda_requests1,
lambda_dropoffs=self.lambda_dropoffs1
)
self.P2, self.R2 = self.load_P_and_R(
self.P2, self.R2,
lambda_requests=self.lambda_requests2,
lambda_dropoffs=self.lambda_dropoffs2
)
def load_P_and_R(self, P, R, lambda_requests, lambda_dropoffs):
# Get the transition probabilities and expected rewards
requests = 0
request_prob = poisson.pmf(requests, mu=lambda_requests)
while request_prob >= .000001:
# expected rewards
for n in xrange(26):
# rent out car for $10 each
R[n] += 10 * request_prob * min([requests, n])
# transition probabilities
dropoffs = 0
dropoff_prob = poisson.pmf(dropoffs, mu=lambda_dropoffs)
while dropoff_prob >= .000001:
for n in xrange(26):
satisfied_requests = min([requests, n])
new_n = min([20, n + dropoffs - satisfied_requests])
if new_n < 0:
print 'Warning negative new_n', new_n
P[n, new_n] += request_prob * dropoff_prob
dropoffs += 1
dropoff_prob = poisson.pmf(dropoffs, mu=lambda_dropoffs)
requests += 1
request_prob = poisson.pmf(requests, mu=lambda_requests)
return P, R
# 2. policy evaluation
def backup_action(self, n1, n2, a):
# number of cars to move from location 1 to 2, thresholded at 5 and -5 according to problem specs
cars_to_move = max([min([n1, a]), -n2])
cars_to_move = min([max([cars_to_move, -5]), 5])
# costs $2 to move each car,
# but we get one car free if we move from n1 to n2!
cost_to_move = -2 * abs(cars_to_move)
if cars_to_move > 0:
# 1 free one if we move n1 -> n2
cost_to_move += 2
# do backup
morning_n1 = n1 - cars_to_move
morning_n2 = n2 + cars_to_move
# If more than 10 cars are kept overnight at a location (after any moving of cars),
# then an additional cost of \$4 must be incurred
extra_parking_cost = 0
if morning_n1 > 10:
extra_parking_cost -= 4
if morning_n2 > 10:
extra_parking_cost -= 4
# sum over all possible next states
newv = 0
for newn1 in xrange(21):
for newn2 in xrange(21):
newv += self.P1[morning_n1, newn1] * self.P2[morning_n2, newn2] *\
(self.R1[morning_n1] + self.R2[morning_n2] +\
self.gamma * self.V[newn1, newn2])
return newv + cost_to_move + extra_parking_cost
def policy_evaluation(self):
delta = 1
while delta > self.theta:
delta = 0
# Loop through all States
for n1 in xrange(21):
for n2 in xrange(21):
old_v = self.V[n1, n2]
action = self.PI[n1, n2]
# do a full backup for each state
self.V[n1, n2] = self.backup_action(n1, n2, action)
delta = max([delta, abs(old_v - self.V[n1, n2])])
# print 'Policy evaluation delta: ', delta
print 'Done with Policy Evaluation'
return self.V
# 3. Policy Improvement
def get_best_policy(self, n1, n2):
best_value = -1
for a in range(max(-5, -n2), min(5, n1) + 1):
this_action_value = self.backup_action(n1, n2, a)
if this_action_value > best_value:
best_value = this_action_value
best_action = a
return best_action
def policy_improvement(self):
self.V = self.policy_evaluation()
policy_stable = False
while policy_stable is False:
policy_stable = True
for n1 in xrange(21):
for n2 in xrange(21):
b = self.PI[n1, n2]
self.PI[n1, n2] = self.get_best_policy(n1, n2)
if b != self.PI[n1, n2]:
policy_stable = False
self.Vs.append(np.copy(self.policy_evaluation()))
self.PIs.append(np.copy(self.PI))
return policy_stable
```
```python
jacks = JacksCarRental()
V = jacks.policy_improvement()
```
/Applications/anaconda/envs/rl/lib/python2.7/site-packages/ipykernel/__main__.py:99: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
Done with Policy Evaluation
Done with Policy Evaluation
Done with Policy Evaluation
Done with Policy Evaluation
Done with Policy Evaluation
Done with Policy Evaluation
```python
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
xv, yv = np.meshgrid(range(21), range(21))
ax.plot_surface(xv, yv, jacks.V)
ax.set_xlabel('Cars at 2nd location')
ax.set_ylabel('Cars at 1st location')
```
```python
f, axx = plt.subplots(2, 3, sharex='col', sharey='row')
for i in xrange(len(jacks.PIs)):
r = i / 3
c = i % 3
ax = axx[r, c]
cs = ax.contour(xv, yv, jacks.PIs[i])
ax.set_title(i)
```
```python
f, axx = plt.subplots(2, 3, sharex='col', sharey='row')
for i in xrange(len(jacks.Vs)):
r = i / 3
c = i % 3
ax = axx[r, c]
cs = ax.contour(xv, yv, jacks.Vs[i])
ax.set_title(i)
# ax.set_clabel(cs, inline=1, fontsize=10)
# ax.set_xlabel('Cars at 2nd location')
# ax.set_ylabel('Cars at 1st location')
```
##### 4.5
How would policy iteration be defined for action values? Give a complete algorithm for computing $Q^*$, analogous to Figure 4.3 for computing $V^*$. Please pay special attention to this exercise, because the ideas involved will be used throughout the rest of the book.
I'm not sure about step 3??, but I'm trying to use $\pi(s,a)$ as a deterministic probability of taking an action in a given state.
1. Initialization
- $Q(s,a) \in \mathbb{R}$ and $\pi(s, a) \in \{0, 1\}$ arbitrarily for all $s \in S$ and $a \in A(s)$
2. Policy Evaluation
- Repeat
- $\Delta \leftarrow 0$
- For each $s \in S$ and $a \in A(s)$
- $q \leftarrow Q(s,a)$
- $Q(s,a) \leftarrow \sum_{s'} P^a_{ss'}[R^a_{ss'} + \gamma \sum_{a'} \pi(s', a') Q(s', a')]$
- $\Delta \leftarrow max(\Delta, |q - Q(s,a)|)$
- Until $\Delta \lt \theta$
3. Policy Improvement
- policy_stable = True
- For each $s \in S$ and $a \in A(s)$
- $b \leftarrow Q(s,a)$
    - $\pi(s,a) \leftarrow \sum_{s'} P^a_{ss'} [R^a_{ss'} + \gamma \max_{a'} Q(s', a')] $
- If $b \neq \pi(s, a)$, then policy_stable = False
- For each $s \in S$ and $a \in A(s)$
- $\pi(s,a) \leftarrow$ 1 if $\pi(s,a) = argmax_a \pi(s,a)$ else 0
- If policy_stable, then stop, else go to 2
##### Exercise 4.6
Suppose you are restricted to considering only policies that are $\epsilon$-soft, meaning that the probability of selecting each action in each state, $s$, is at least $\epsilon / |A(s)|$. Describe qualitatively the changes that would be required in each of the steps 3, 2, and 1, in that order, of the policy iteration algorithm for (Figure 4.3).
Step 3 would still take greedy updates to $\pi(s, a)$, but would have to make sure that $\pi(s,a)$ has at least a probability of $\epsilon / |A(s)|$ for each non-greedy action. $\pi(s,a)$ is the probability of taking action $a$ in state $s$, as opposed to $\pi(s)$ as used previously since we had deterministic policies. Step 2 would have to use the Bellman Equation for $V(s)$ with a non-deterministic policy evaluation, $V(s) \leftarrow \sum_a \pi(s,a) \sum_{s'} P^a_{ss'} [R^a_{ss'} + \gamma V(s')]$. Step 1 would have to initiliaze the $\pi(s,a)$ as random uniform.
##### Exercise 4.7
Why does the optimal policy for the gambler's problem have such a curious form? In particular, for capital of 50 it bets it all on one flip, but for capital of 51 it does not. Why is this a good policy?
If $p \geq .5$, then you'd want to bet 1 at all states, since you are more likely to win than lose. If $p < .5$, then you are more likely to lose on any bet than to win. The reason that at $p=.4$ the policy has this peculiar shape is that, at state 2 for example, it is more likely that you get to state 4 by betting \$2 than by betting \$1 twice: the probability of going from state 2 to state 4 by betting \$1 twice (winning both bets) is $0.4 \cdot 0.4 = 0.16$, which is less than the $0.4$ probability of getting there with a single \$2 bet.
The spikes on 12, 25, 50, and 75, seem empirically independent of p as long as p is greater than around .05. This suggests that the spikes around those values exist because the game terminates at 100. You are more likely to get to 100 if you bet values that are factors of 100. Since there is no state > 100, the optimal policy has this jagged form.
##### Exercise 4.8
Implement value iteration for the gambler's problem and solve it for p=.25 and p=.55. In programming, you may find it convenient to introduce two dummy states corresponding to termination with capital of 0 and 100, giving them values of 0 and 1 respectively. Show your results graphically, as in Figure 4.6. Are your results stable as $\theta \rightarrow 0$?
```python
class GamblersProblem(object):
def __init__(self, p=.45, gamma=1):
self.PI = np.zeros(100)
self.V = np.zeros(101)
self.gamma = gamma
self.p = p
def backup_action(self, state, action):
return self.p * self.gamma * self.V[state + action] +\
(1 - self.p) * self.gamma * self.V[state - action]
def value_iteration(self, epsilon=.0000000001):
self.V = np.zeros(101)
# You get a reward of 1 if you win $100
self.V[100] = 1
delta = 1
while delta > epsilon:
delta = 0
for state in xrange(1, 100):
old_v = self.V[state]
# you can bet up to all your money as long as the winnings would be less than 100
self.V[state] = np.max(
[self.backup_action(state, action + 1) \
for action in xrange(min(state, 100 - state))]
)
delta = max(delta, abs(self.V[state] - old_v))
def get_deterministic_policy(self, epsilon=.0000000001):
PI = np.zeros(100)
for state in xrange(1, 100):
values = [self.backup_action(state, action + 1)\
for action in xrange(min(state, 100 - state))]
best_pi = 1
best_value = -1
for idx, v in enumerate(values):
if v > best_value + epsilon:
best_value = v
best_pi = idx + 1
PI[state] = best_pi
self.PI = PI
return PI
```
```python
gp = GamblersProblem(p=.25)
gp.value_iteration()
plt.plot(gp.V)
plt.show()
plt.plot(gp.get_deterministic_policy())
plt.show()
```
```python
gp = GamblersProblem(p=.55)
gp.value_iteration()
plt.plot(gp.V)
plt.show()
plt.plot(gp.get_deterministic_policy())
plt.show()
```
```python
gp = GamblersProblem(p=.001)
gp.value_iteration()
plt.plot(gp.V)
plt.show()
plt.plot(gp.get_deterministic_policy())
plt.show()
```
Results are not stable as $\theta \rightarrow 0$, I'm not really sure why.
##### Exercise 4.9
What is the analog of the value iteration backup (4.10) for action values, $Q_{k+1}(s,a)$?
- Initialize Q arbitrarily, e.g. $Q(s,a)=0$ for all $s \in S$ and $a \in A(s)$
- Repeat
- $\Delta \leftarrow 0$
- For each $s \in S$ and $a \in A(s)$
- $q \leftarrow Q(s,a)$
    - $Q(s,a) \leftarrow \sum_{s'} P^a_{ss'} [R^a_{ss'} + \gamma \max_{a'} Q(s', a')]$
- $\Delta \leftarrow max(\Delta, |q - Q(s,a)|)$
- until $\Delta \lt \theta$
- Output a deterministic policy $\pi$ such that
- $\pi(s) = argmax_a Q(s,a)$
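A minimal sketch of this backup, again assuming a transition model `P[s][a]` of `(prob, next_state, reward)` triples:

```python
import numpy as np

def q_value_iteration(P, n_states, n_actions, gamma=1.0, theta=1e-8):
    Q = np.zeros((n_states, n_actions))
    while True:
        delta = 0.0
        for s in range(n_states):
            for a in range(n_actions):
                q_old = Q[s, a]
                # the backup uses a max over next-state actions, analogous to (4.10)
                Q[s, a] = sum(p * (r + gamma * np.max(Q[s2])) for p, s2, r in P[s][a])
                delta = max(delta, abs(q_old - Q[s, a]))
        if delta < theta:
            break
    return Q.argmax(axis=1)  # deterministic policy pi(s) = argmax_a Q(s, a)
```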
| source repo: btaba/intro-to-rl | file: notebooks/chapter4.ipynb | format: Jupyter Notebook | license: MIT | size: 260546 bytes |
# Yet another re-implementation and cross-check for absorption on the disk
In this notebook I will implement the formulas for the $\gamma \gamma$ absorption on the disk presented in Finke 2016 and Dermer 2009 and compare them.
```python
import numpy as np
import astropy.units as u
from astropy.constants import m_e, c, G, M_sun
import matplotlib.pyplot as plt
import pkg_resources
from IPython.display import Image
import sys
sys.path.append("../../")
from agnpy.targets import SSDisk
from agnpy.absorption import sigma
from agnpy.utils.math import axes_reshaper
from agnpy.utils.conversion import nu_to_epsilon_prime, to_R_g_units
```
```python
# useful constants
# electron radius
r_e = 2.81794 * 1e-15 * u.cm
```
## Formula derived from [Finke 2016](https://iopscience.iop.org/article/10.3847/0004-637X/830/2/94/pdf)
Let us consider the general formula for absorption
\begin{equation}
\tau_{\gamma \gamma}(\hat{\nu}_1) =
\int_{r}^{\infty} {\rm d}l \,
\int_{0}^{2\pi} {\rm d}\phi \,
\int_{-1}^{1} {\rm d}\mu \, (1 - \cos\psi) \,
\int_{0}^{\infty} {\rm d}\epsilon \,
\frac{\underline{u}(\epsilon, \Omega; l)}{\epsilon m_{\rm e} c^2} \,
\sigma_{\gamma \gamma}(s),
\end{equation}
where, for the Shakura–Sunyaev disk
\begin{equation}
\underline{u}(\epsilon, \Omega; r) = \frac{3 G M \dot{m}}{(4 \pi)^2 c R^3}\varphi(R) \, \delta(\epsilon - \epsilon_0(R))
\end{equation}
Replacing and simplifying:
\begin{equation}
\begin{split}
\tau_{\gamma \gamma}(\hat{\nu}_1) &=
\int_{r}^{\infty} {\rm d}l \,
\int_{0}^{2\pi} {\rm d}\phi \,
\int_{\mu_{\rm min}}^{\mu_{\rm max}} {\rm d}\mu \, (1 - \cos\psi) \,
\int_{0}^{\infty} {\rm d}\epsilon \,
\frac{3 G M \dot{m}}{(4 \pi)^2 c R^3}\varphi(R) \, \delta(\epsilon - \epsilon_0(R))
\frac{1}{\epsilon m_{\rm e} c^2} \sigma_{\gamma \gamma}(s) \\
&= \frac{3 G M \dot{m}}{(4 \pi)^2 m_{\rm e} c^3}
\int_{0}^{2\pi} {\rm d}\phi \,
\int_{\mu_{\rm min}}^{\mu_{\rm max}} {\rm d}\mu \, (1 - \cos\psi) \,
\frac{1}{R^3}\frac{\varphi(R)}{\epsilon(R)} \sigma_{\gamma \gamma}(s).
\end{split}
\end{equation}
The extremes of integration in cosine $(\mu_{\rm in}, \mu_{\rm out})$ will change depending on the distance $l$, so let us change to an integration in $R$: $\mu = \frac{1}{\sqrt{1 + \frac{R^2}{r^2}}} \Rightarrow \frac{{\rm d}\mu}{{\rm d}R} = - \frac{1}{2}\frac{1}{\left(1 + \frac{R^2}{r^2}\right)^{3/2}}\frac{2R}{r^2} = - \frac{1}{\mu^3}\frac{R}{r^2}$.
\begin{equation}
\begin{split}
\tau_{\gamma \gamma}(\hat{\nu}_1) &=
\frac{3 G M \dot{m}}{(4 \pi)^2 m_{\rm e} c^3}
\int_{r}^{\infty} {\rm d}l \,
\int_{0}^{2\pi} {\rm d}\phi \,
\int_{R_{\rm out}}^{R_{\rm in}} -\frac{1}{\mu^3}\frac{R}{l^2}{\rm d}R \,
(1 - \cos\psi) \, \frac{1}{R^3}\frac{\varphi(R)}{\epsilon(R)} \sigma_{\gamma \gamma}(s) \\
&= \frac{3 G M \dot{m}}{(4 \pi)^2 m_{\rm e} c^3}
\int_{r}^{\infty} {\rm d}l \,
\int_{0}^{2\pi} {\rm d}\phi \,
\int_{R_{\rm in}}^{R_{\rm out}}{\rm d}R \,
(1 - \cos\psi) \, \frac{1}{\mu^3}\frac{1}{l^2}\frac{1}{R^2}
\frac{\varphi(R)}{\epsilon(R)} \sigma_{\gamma \gamma}(s) \\
&= \frac{3 G M \dot{m}}{(4 \pi)^2 m_{\rm e} c^3 R_g^2}
\int_{r}^{\infty} \frac{{\rm d}l}{R_g} \,
\int_{0}^{2\pi} {\rm d}\phi \,
\int_{R_{\rm in}}^{R_{\rm out}} \frac{{\rm d}R}{R_g} \,
(1 - \cos\psi) \, \frac{1}{\mu^3}\frac{R_g^2}{l^2}\frac{R_g^2}{R^2}
\frac{\varphi(R)}{\epsilon(R)} \sigma_{\gamma \gamma}(s) \\
&= \frac{3 \dot{m}}{(4 \pi)^2 m_{\rm e} c R_g}
\int_{\tilde{r}}^{\infty} {\rm d}\tilde{l} \,
\int_{0}^{2\pi} {\rm d}\phi \,
\int_{\tilde{R}_{\rm in}}^{\tilde{R}_{\rm out}} {\rm d}\tilde{R} \,
(1 - \cos\psi) \, \frac{1}{\mu^3}\frac{1}{\tilde{l}^2}\frac{1}{\tilde{R}^2}
\frac{\varphi(\tilde{R})}{\epsilon(\tilde{R})} \sigma_{\gamma \gamma}(s) \\
&= \frac{3 L_{\rm disk}}{(4 \pi)^2 \eta m_{\rm e} c^3 R_g}
\int_{\tilde{r}}^{\infty} {\rm d}\tilde{l} \,
\int_{0}^{2\pi} {\rm d}\phi \,
\int_{\tilde{R}_{\rm in}}^{\tilde{R}_{\rm out}} {\rm d}\tilde{R} \,
(1 - \cos\psi) \, \frac{1}{\mu^3}\frac{1}{\tilde{l}^2}\frac{1}{\tilde{R}^2}
\frac{\varphi(\tilde{R})}{\epsilon(\tilde{R})} \sigma_{\gamma \gamma}(s),
\end{split}
\end{equation}
and we have used $R_g=GM/c^2$ and $L_{\rm disk} = \eta \dot{m} c^2$.
Assuming $\mu_s=1 \Rightarrow \cos\psi=\mu$ and
\begin{equation}
\tau_{\gamma \gamma}(\hat{\nu}_1) = \frac{3 L_{\rm disk}}{8 \pi^2 \eta m_{\rm e} c^3 R_g}
\int_{\tilde{r}}^{\infty} {\rm d}\tilde{l} \,
\int_{\tilde{R}_{\rm in}}^{\tilde{R}_{\rm out}} {\rm d}\tilde{R} \,
\frac{(1 - \mu)}{\mu^3} \frac{1}{\tilde{l}^2 \tilde{R}^2}
\frac{\varphi(\tilde{R})}{\epsilon(\tilde{R})} \sigma_{\gamma \gamma}(s).
\end{equation}
```python
def evaluate_tau_disk_finke_2016(
nu,
z,
M_BH,
L_disk,
eta,
R_in,
R_out,
r,
R_tilde_size=100,
l_tilde_size=50,
):
"""expression of the disk absorption derived from the formulas in Finke 2016"""
# conversions
R_g = (G * M_BH / c ** 2).to("cm")
r_tilde = to_R_g_units(r, M_BH)
R_in_tilde = to_R_g_units(R_in, M_BH)
R_out_tilde = to_R_g_units(R_out, M_BH)
# multidimensional integration
R_tilde = np.linspace(R_in_tilde, R_out_tilde, R_tilde_size)
l_tilde = np.logspace(0, 6, l_tilde_size) * r_tilde
epsilon_1 = nu_to_epsilon_prime(nu, z)
_R_tilde, _l_tilde, _epsilon_1 = axes_reshaper(R_tilde, l_tilde, epsilon_1)
_epsilon = SSDisk.evaluate_epsilon(L_disk, M_BH, eta, _R_tilde)
_phi_disk = 1 - (R_in_tilde / _R_tilde) ** (1 / 2)
_mu = (1 + _R_tilde ** 2 / _l_tilde ** 2) ** (-1 / 2)
s = _epsilon * _epsilon_1 * (1 - _mu) / 2
integrand = (
(1 - _mu)
/ _mu ** 3
/ _l_tilde ** 2
/ _R_tilde ** 2
* _phi_disk
/ _epsilon
* sigma(s)
)
integral_R_tilde = np.trapz(integrand, R_tilde, axis=0)
integral = np.trapz(integral_R_tilde, l_tilde, axis=0)
prefactor = 3 * L_disk / (8 * np.pi**2 * eta * m_e * c**3 * R_g)
return (prefactor * integral).to_value("")
```
```python
# disk parameters as in Finke 2016
M_BH = 1.2 * 1e9 * M_sun.cgs
L_disk = 2 * 1e46 * u.Unit("erg s-1")
R_g = (G * M_BH / c**2).to("cm")
eta = 1 / 12
R_in = 6 * R_g
R_out = 200 * R_g
disk = SSDisk(M_BH, L_disk, eta, R_in, R_out)
print(disk)
# parameters of 3C454.3
z = 0.859
R_line = 1.1 * 1e17 * u.cm
# distance of the disk
r = 0.1 * R_line
```
* Shakura Sunyaev accretion disk:
- M_BH (central black hole mass): 2.39e+42 g
- L_disk (disk luminosity): 2.00e+46 erg / s
- eta (accretion efficiency): 8.33e-02
- dot(m) (mass accretion rate): 2.67e+26 g / s
- R_in (disk inner radius): 1.06e+15 cm
- R_out (disk inner radius): 3.54e+16 cm
```python
# let us try to reproduce the data shared by Finke
for l in ["1e-1", "1e0", "1e1", "1e2"]:
# read the reference SED in Figure 14 of Finke 2016
data_file_ref_abs = f"../../agnpy/data/reference_taus/finke_2016/figure_14_left/tau_SSdisk_r_{l}_R_Ly_alpha.txt"
data_ref = np.loadtxt(data_file_ref_abs, delimiter=",")
# read energies and opacity values
E_ref = data_ref[:, 0] * u.GeV
tau_ref = data_ref[:, 1]
nu_ref = E_ref.to("Hz", equivalencies=u.spectral())
# compute the tau
_r = float(l) * R_line
tau = evaluate_tau_disk_finke_2016(nu_ref, z, M_BH, L_disk, eta, R_in, R_out, _r)
# plot it against the reference
plt.loglog(E_ref, tau_ref, ls="--", color="k", label="reference")
plt.loglog(E_ref, tau, ls="-", color="crimson", label="agnpy")
plt.ylim([1e-8, 1e6])
plt.title(f"r = {l} R(Ly alpha)")
plt.ylabel(r"$\tau_{\gamma\gamma}$")
plt.xlabel("E / GeV")
plt.legend()
plt.show()
```
## Formula from [Dermer et al. 2009](https://iopscience.iop.org/article/10.1088/0004-637X/692/1/32/pdf), Eq. 80
Let us directly take Eq. (80) in Dermer et al. 2009 reporting the same quantity (again we are assuming $\mu_s = 1$).
\begin{equation}
\tau_{\gamma \gamma}(\hat{\nu}_1) = 3 \times 10^6 \frac{l_{\rm Edd}^{3/4} M_8^{1/4}}{\eta^{3/4}}
\int_{\tilde{r}}^{\infty} \frac{{\rm d}\tilde{l}}{\tilde{l}^2} \,
\int_{\tilde{R}_{\rm in}}^{\tilde{R}_{\rm out}} \frac{{\rm d}\tilde{R}}{\tilde{R}^{5/4}} \,
\frac{\left[\varphi(\tilde{R})\right]^{1/4}}{\left(1 + \frac{\tilde{R}^2}{\tilde{l}^2}\right)^{3/2}}
\left[\frac{\sigma_{\gamma \gamma}(s)}{\pi r_e^2}\right] (1 - \mu).
\end{equation}
As the authors note:
$$ \frac{\tau_{\gamma \gamma}}{M_8} = \text{function of } \xi \text{ and } r$$
where $\xi = \left(\frac{l_{\rm Edd}}{\eta M_8}\right)^{1/4}$.
Reorganising the formula:
\begin{equation}
\tau_{\gamma \gamma}(\hat{\nu}_1) = 3 \times 10^6 M_8 \xi^3
\int_{\tilde{r}}^{\infty} \frac{{\rm d}\tilde{l}}{\tilde{l}^2} \,
\int_{\tilde{R}_{\rm in}}^{\tilde{R}_{\rm out}} \frac{{\rm d}\tilde{R}}{\tilde{R}^{5/4}} \,
\frac{\left[\varphi(\tilde{R})\right]^{1/4}}{\left(1 + \frac{\tilde{R}^2}{\tilde{l}^2}\right)^{3/2}}
\left[\frac{\sigma_{\gamma \gamma}(s)}{\pi r_e^2}\right] (1 - \mu).
\end{equation}
```python
def evaluate_tau_dermer_2009(
E,
z,
M_8,
csi,
R_in_tilde,
R_out_tilde,
r_tilde,
R_tilde_size=100,
l_tilde_size=50,
):
"""expression of the disk absorption copied from Dermer 2009"""
R_tilde = np.linspace(R_in_tilde, R_out_tilde, R_tilde_size)
l_tilde = np.logspace(0, 6, l_tilde_size) * r_tilde
epsilon_1 = (E / (m_e * c**2)).to_value("") * (1 + z)
_R_tilde, _l_tilde, _epsilon_1 = axes_reshaper(R_tilde, l_tilde, epsilon_1)
_epsilon = 2.7 * 1e-4 * csi * _R_tilde ** (-3 / 4)
_phi_disk = 1 - (R_in_tilde / _R_tilde) ** (1 / 2)
_mu = 1 / (1 + _R_tilde ** 2 / _l_tilde ** 2) ** (1 / 2)
_s = _epsilon_1 * _epsilon * (1 - _mu) / 2
integrand = (
1
/ _l_tilde ** 2
/ _R_tilde ** (5 / 4)
/ (1 + _R_tilde ** 2 / _l_tilde ** 2) ** (3 / 2)
* _phi_disk ** (1 / 4)
* (sigma(_s) / (np.pi * r_e ** 2)).to_value("")
* (1 - _mu)
)
integral_R_tilde = np.trapz(integrand, R_tilde, axis=0)
integral = np.trapz(integral_R_tilde, l_tilde, axis=0)
prefactor = 3 * 1e6 * M_8 * csi ** 3
return prefactor * integral
```
```python
# let us try to reproduce Figure 7 in Dermer et al 2009
Image("figures/figure_7_dermer_et_al_2009.png", width = 600, height = 400)
```
```python
z = 1
M_8 = 1
csi = [1, 10]
r_tilde = [10, 1e2, 1e3]
E = np.logspace(-3, 2, 50) * u.TeV
for color, _csi in zip(["red", "green"], csi):
for ls, _r_tilde in zip(["-", ":", "--"], r_tilde):
tau = evaluate_tau_dermer_2009(E, z=z, M_8=M_8, csi=_csi, R_in_tilde=6, R_out_tilde=200, r_tilde=_r_tilde)
plt.loglog(E, tau, color=color, ls=ls, label=f"csi={_csi}, r = {_r_tilde} R_g")
plt.xlim([1e-3, 1e2])
plt.legend(loc="center left", bbox_to_anchor=(1, 0.5))
plt.ylabel(r"$\tau_{\gamma\gamma}$")
plt.xlabel("E / GeV")
plt.show()
```
## Check Finke 2016 vs Dermer 2009
Let us check the two implementations of the formulas against each other.
Let us use the same parameters of the accretion disk we defined above.
```python
print(f"distance from BH: {r:.2e}")
print("accretion disk considered:")
print(disk)
```
distance from BH: 1.10e+16 cm
accretion disk considered:
* Shakura Sunyaev accretion disk:
- M_BH (central black hole mass): 2.39e+42 g
- L_disk (disk luminosity): 2.00e+46 erg / s
- eta (accretion efficiency): 8.33e-02
- dot(m) (mass accretion rate): 2.67e+26 g / s
- R_in (disk inner radius): 1.06e+15 cm
- R_out (disk inner radius): 3.54e+16 cm
```python
nu = E.to("Hz", equivalencies = u.spectral())
tau_finke = evaluate_tau_disk_finke_2016(nu, z, disk.M_BH, disk.L_disk, disk.eta, disk.R_in, disk.R_out, r)
# calculate csi to use Dermer's formula
csi = (disk.l_Edd / (disk.M_8 * disk.eta))**(1/4)
r_tilde = (r / disk.R_g).to_value("")
tau_dermer = evaluate_tau_dermer_2009(E, z, disk.M_8, csi, disk.R_in_tilde, disk.R_out_tilde, r_tilde)
plt.loglog(E, tau_finke, label="Finke 2016")
plt.loglog(E, tau_dermer, label="Dermer 2009", ls=":")
plt.ylabel(r"$\tau_{\gamma\gamma}$")
plt.xlabel("E / TeV")
plt.legend()
plt.show()
```
| source repo: vuillaut/agnpy | file: experiments/basic/disk_absorption_dermer_finke_comparison.ipynb | format: Jupyter Notebook | license: BSD-3-Clause | size: 224475 bytes |
```python
from datascience import *
import sympy
import matplotlib.pyplot as plt
import matplotlib as mpl
import matplotlib.patches as patches
plt.style.use('seaborn-muted')
mpl.rcParams['figure.dpi'] = 200
%matplotlib inline
from IPython.display import display
import numpy as np
import pandas as pd
solve = lambda x,y: sympy.solve(x-y)[0] if len(sympy.solve(x-y))==1 else "Not Single Solution"
import warnings
warnings.filterwarnings('ignore')
```
# Market Equilibria
We will now explore the relationship between price and quantity of oranges produced between 1924 and 1938. Since the data {cite}`01demand-fruits` is from the 1920s and 1930s, it is important to remember that the prices are much lower than what they would be today because of inflation, competition, innovations, and other factors. For example, in 1924, a ton of oranges would have cost \$6.63; that same amount in 2019 is \$100.78.
```python
fruitprice = Table.read_table('fruitprice.csv')
fruitprice
```
<table border="1" class="dataframe">
<thead>
<tr>
<th>Year</th> <th>Pear Price</th> <th>Pear Unloads (Tons)</th> <th>Plum Price</th> <th>Plum Unloads</th> <th>Peach Price</th> <th>Peach Unloads</th> <th>Orange Price</th> <th>Orange Unloads</th> <th>NY Factory Wages</th>
</tr>
</thead>
<tbody>
<tr>
<td>1924</td> <td>8.04 </td> <td>18489 </td> <td>8.86 </td> <td>6582 </td> <td>4.96 </td> <td>41880 </td> <td>6.63 </td> <td>21258 </td> <td>27.22 </td>
</tr>
<tr>
<td>1925</td> <td>5.67 </td> <td>21919 </td> <td>7.27 </td> <td>5526 </td> <td>4.87 </td> <td>38772 </td> <td>9.19 </td> <td>15426 </td> <td>28.03 </td>
</tr>
<tr>
<td>1926</td> <td>5.44 </td> <td>29328 </td> <td>6.68 </td> <td>5742 </td> <td>3.35 </td> <td>46516 </td> <td>7.2 </td> <td>24762 </td> <td>28.89 </td>
</tr>
<tr>
<td>1927</td> <td>7.15 </td> <td>17082 </td> <td>8.09 </td> <td>5758 </td> <td>5.7 </td> <td>32500 </td> <td>8.63 </td> <td>22766 </td> <td>29.14 </td>
</tr>
<tr>
<td>1928</td> <td>5.81 </td> <td>20708 </td> <td>7.41 </td> <td>6000 </td> <td>4.13 </td> <td>46820 </td> <td>10.71 </td> <td>18766 </td> <td>29.34 </td>
</tr>
<tr>
<td>1929</td> <td>7.6 </td> <td>13071 </td> <td>10.86 </td> <td>3504 </td> <td>6.7 </td> <td>36990 </td> <td>6.36 </td> <td>35702 </td> <td>29.97 </td>
</tr>
<tr>
<td>1930</td> <td>5.06 </td> <td>22068 </td> <td>6.23 </td> <td>7998 </td> <td>6.35 </td> <td>29680 </td> <td>10.5 </td> <td>23718 </td> <td>28.68 </td>
</tr>
<tr>
<td>1931</td> <td>5.4 </td> <td>19255 </td> <td>6.86 </td> <td>5638 </td> <td>3.91 </td> <td>50940 </td> <td>5.81 </td> <td>39263 </td> <td>26.35 </td>
</tr>
<tr>
<td>1932</td> <td>4.06 </td> <td>17293 </td> <td>6.09 </td> <td>7364 </td> <td>4.57 </td> <td>27642 </td> <td>4.71 </td> <td>38553 </td> <td>21.98 </td>
</tr>
<tr>
<td>1933</td> <td>4.78 </td> <td>11063 </td> <td>5.86 </td> <td>8136 </td> <td>3.57 </td> <td>35560 </td> <td>4.6 </td> <td>36540 </td> <td>22.26 </td>
</tr>
</tbody>
</table>
<p>... (5 rows omitted)</p>
## Finding the Equilibrium
An important concept in economics is the market equilibrium. This is the point at which the demand and supply curves meet and represents the "optimal" level of production and price in that market.
```{admonition} Definition
The **market equilibrium** is the combination of price and quantity at which the supply and demand curves intersect: at this point, the quantity that consumers demand equals the quantity that producers supply.
```
Let's walk through how to find the market equilibrium using the market for oranges as an example.
### Data Preprocessing
Because we are only examining the relationship between prices and quantity for oranges, we can create a new table with the relevant columns: `Year`, `Orange Price`, and `Orange Unloads`.
```python
oranges_raw = fruitprice.select("Year", "Orange Price", "Orange Unloads")
oranges_raw
```
<table border="1" class="dataframe">
<thead>
<tr>
<th>Year</th> <th>Orange Price</th> <th>Orange Unloads</th>
</tr>
</thead>
<tbody>
<tr>
<td>1924</td> <td>6.63 </td> <td>21258 </td>
</tr>
<tr>
<td>1925</td> <td>9.19 </td> <td>15426 </td>
</tr>
<tr>
<td>1926</td> <td>7.2 </td> <td>24762 </td>
</tr>
<tr>
<td>1927</td> <td>8.63 </td> <td>22766 </td>
</tr>
<tr>
<td>1928</td> <td>10.71 </td> <td>18766 </td>
</tr>
<tr>
<td>1929</td> <td>6.36 </td> <td>35702 </td>
</tr>
<tr>
<td>1930</td> <td>10.5 </td> <td>23718 </td>
</tr>
<tr>
<td>1931</td> <td>5.81 </td> <td>39263 </td>
</tr>
<tr>
<td>1932</td> <td>4.71 </td> <td>38553 </td>
</tr>
<tr>
<td>1933</td> <td>4.6 </td> <td>36540 </td>
</tr>
</tbody>
</table>
<p>... (5 rows omitted)</p>
Next, we will rename our columns. In this case, let's rename `Orange Unloads` to `Quantity` and `Orange Price` to `Price` for brevity and understandability.
```python
oranges = oranges_raw.relabel("Orange Unloads", "Quantity").relabel("Orange Price", "Price")
oranges
```
<table border="1" class="dataframe">
<thead>
<tr>
<th>Year</th> <th>Price</th> <th>Quantity</th>
</tr>
</thead>
<tbody>
<tr>
<td>1924</td> <td>6.63 </td> <td>21258 </td>
</tr>
<tr>
<td>1925</td> <td>9.19 </td> <td>15426 </td>
</tr>
<tr>
<td>1926</td> <td>7.2 </td> <td>24762 </td>
</tr>
<tr>
<td>1927</td> <td>8.63 </td> <td>22766 </td>
</tr>
<tr>
<td>1928</td> <td>10.71</td> <td>18766 </td>
</tr>
<tr>
<td>1929</td> <td>6.36 </td> <td>35702 </td>
</tr>
<tr>
<td>1930</td> <td>10.5 </td> <td>23718 </td>
</tr>
<tr>
<td>1931</td> <td>5.81 </td> <td>39263 </td>
</tr>
<tr>
<td>1932</td> <td>4.71 </td> <td>38553 </td>
</tr>
<tr>
<td>1933</td> <td>4.6 </td> <td>36540 </td>
</tr>
</tbody>
</table>
<p>... (5 rows omitted)</p>
### Visualize the Relationship
To construct the demand curve, let's first see what the relationship between price and quantity is. We would expect to see a downward-sloping line between price and quantity; if a product's price increases, consumers will purchase less, and if a product's price decreases, then consumers will purchase more.
To find this, we will create a scatterplot and draw a regression line (by setting `fit_line = True` in the `oranges.scatter` call) between the points. Regression lines are helpful because they consolidate all the datapoints into a single line, helping us better understand the relationship between the two variables.
```python
oranges.scatter("Quantity", "Price", fit_line = True, width=7, height=7)
plt.title("Demand Curve for Oranges", fontsize = 16);
```
The visualization shows a negative relationship between quantity and price, which is exactly what we expected! As we've discussed, as the price increases, fewer consumers will purchase the oranges, so the quantity demanded will decrease. This corresponds to a leftward movement along the demand curve. Alternatively, as the price decreases, the quantity sold will increase because consumers want to maximize their purchasing power and buy more oranges; this is shown by a rightward movement along the curve.
As a quick refresher, scatterplots can show positive, negative, or neutral correlations among two variables:
- If two variables have a positive correlation, then as one variable increases, the other increases too.
- If two variables have a negative correlation, then as one variable increases, the other decreases.
- If two variables have a neutral correlation, then if one variable increases, the other variable stays constant.
Note that scatterplots do not show or prove causation between two variables; it is up to the data scientist to prove any causation.
### Fit a Polynomial
We will now quantify our demand curve using NumPy's [`np.polyfit` function](https://numpy.org/doc/stable/reference/generated/numpy.polyfit.html). `np.polyfit` returns an array of size 2, where the first element is the slope and the second is the $y$-intercept.
It takes 3 parameters:
- array of x-coordinates
- array of y-coordinates
- degree of polynomial
Because we are looking for a **linear** function to serve as the demand curve, we will use 1 for the degree of polynomial.
The general template for the demand curve is $y = mx + b$, where $m$ is the slope and $b$ is $y$-intercept. In economic terms, $m$ is the demand curve's slope that shows how the good's price affects the quantity demanded, and $b$ encompasses the effects of all of the exogenous non-price factors that affect demand.
```python
np.polyfit(oranges.column("Quantity"), oranges.column("Price"), 1)
```
array([-2.14089690e-04, 1.33040264e+01])
This shows that the demand curve is $y = -0.000214x + 13.3$. The slope is -0.000214 and the $y$-intercept is 13.3. That means that as quantity increases by 1 unit (in this case, 1 ton), price decreases by 0.000214 units (in this case, \$0.000214).
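Since `np.polyfit` returns the coefficients in order of decreasing degree, we can also unpack them directly into named variables (the names below are our own choice):

```python
slope, intercept = np.polyfit(oranges.column("Quantity"), oranges.column("Price"), 1)
print("Estimated demand curve: P = {:.6f} * Q + {:.2f}".format(slope, intercept))
```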
### Create the Demand Curve
We will now use SymPy to write out this demand curve. To do so, we start by creating a symbol `Q` that we can use to create the equation.
```python
Q = sympy.Symbol("Q")
demand = -0.000214 * Q + 13.3
demand
```
-0.000214*Q + 13.3
### Create the Supply Curve
As we will learn, the supply curve is the relationship between the price of a good or service and the quantity of that good or service that the seller is willing to supply. Supply curves show how much of a good suppliers are willing and able to supply at different prices. In this case, as the price of oranges increases, the quantity of oranges that producers are willing to supply increases. Supply curves capture the producer's side of market decisions and are upward-sloping.
Let's now assume that the supply curve is given by $P = 0.00023Q + 0.8$. (Note that this supply curve is not based on data.)
```python
supply = 0.00023 * Q + 0.8
supply
```
0.00023*Q + 0.8
This means that as the quantity of oranges produced increases by 1, the price suppliers require increases by 0.00023. The curve's intercept is 0.8.
### Find the Quantity Equilibrium
With the supply and demand curves known, we can solve for the equilibrium.
The equilibrium is the point where the supply curve and demand curve intersect, and denotes the price and quantity of the good transacted in the market.
At this point, the quantity of the good that consumers desire to purchase is equivalent to the quantity of the good that producers supply; there is no shortage or surplus of the good at this quantity.
The equilibrium consists of 2 components: the quantity equilibrium and the price equilibrium.
The quantity equilibrium is the quantity at which the supply curve and demand curve intersect.
Let's find the quantity equilibrium for this exercise. To do this, we will use the provided `solve` function. This is a custom function that leverages some SymPy magic and will be provided to you in assignments.
```python
Q_star = solve(demand, supply)
Q_star
```
28153.1531531532
This means that the number of tons of oranges that consumers want to purchase and producers want to provide in this market is about 28,153 tons of oranges.
### Find the Price Equilibrium
Similarly, the price equilibrium is the price at which the supply curve and demand curve intersect. The price at which consumers desire to purchase the good is equivalent to the price at which producers want to sell it. There is no shortage or surplus of the product at this price.
Let's find the price equilibrium.
```python
demand.subs(Q, Q_star)
supply.subs(Q, Q_star)
```
7.27522522522523
This means that the price per ton of oranges at which consumers want to purchase and producers want to sell is about \$7.27.
### Visualize the Market Equilibrium
Now that we have our demand and supply curves and price and quantity equilibria, we can visualize them on a graph to see what they look like.
There are 2 pre-made functions we will use: `plot_equation` and `plot_intercept`.
- `plot_equation`: It takes in the equation we made previously (either demand or supply) and visualizes the equations between the different prices we give it
- `plot_intercept`: It takes in two different equations (demand and supply), finds the point at which the two intersect, and creates a scatter plot of the result
```python
def plot_equation(equation, price_start, price_end, label=None):
plot_prices = [price_start, price_end]
plot_quantities = [equation.subs(list(equation.free_symbols)[0], c) for c in plot_prices]
plt.plot(plot_prices, plot_quantities, label=label)
def plot_intercept(eq1, eq2):
ex = sympy.solve(eq1-eq2)[0]
why = eq1.subs(list(eq1.free_symbols)[0], ex)
plt.scatter([ex], [why], zorder=10, color="tab:orange")
return (ex, why)
```
We can leverage these functions and the equations we made earlier to create a graph that shows the market equilibrium.
```python
mpl.rcParams['figure.dpi'] = 150
plot_equation(demand, 5000, 50000, label = "Demand")
plot_equation(supply, 5000, 50000, label = "Supply")
plt.ylim(0,13)
plt.title("Orange Supply and Demand in 1920's and 1930's", fontsize = 20)
plt.xlabel("Quantity (Tons)", fontsize = 14)
plt.ylabel("Price ($)", fontsize = 14)
plot_intercept(supply, demand)
plt.legend(loc = "upper right", fontsize = 12)
plt.show()
```
You can also practice on your own and download additional data sets [here](http://users.stat.ufl.edu/~winner/datasets.html), courtesy of the University of Florida's Statistics Department.
| source repo: ds-connectors/econ-models-textbook | file: docs/_sources/content/01-demand/market-equilibria.ipynb | format: Jupyter Notebook | license: BSD-3-Clause | size: 128372 bytes |
# Symbolic Metamodeling of Univariate Functions using Meijer $G$-functions
In this notebook, we carry out the first experiment (Section 5.1) in our paper *"Demystifying Black-box Models with Symbolic Metamodels"* submitted to **NeurIPS 2019** by *Ahmed M. Alaa and Mihaela van der Schaar*. In this experiment, we demonstrate the first use case of symbolic metamodeling using synthetic data, where we show how can we learn symbolic expressions for unobserved black-box functions for which we have only query access.
## Can we learn complex symbolic expressions?
We start off with four synthetic experiments with the aim of evaluating the richness of symbolic expressions discovered by our metamodeling algorithm. In each experiment, we apply our Meijer $G$-function-based symbolic metamodeling on a ground-truth univariate function $f(x)$ to fit a metamodel $g(x) \approx f(x)$, and compare the resulting mathematical expression for $g(x)$ with that obtained by Symbolic regression [1-3], which we implement using the [**gplearn library** ](https://gplearn.readthedocs.io/en/stable/).
We use the following four expressions for the underlying univariate functions:
| **Function** | **Notation** | **Expression** |
|------|------|------|
| Exponential function | $f_1(x)$ | $e^{-3x}$ |
| Rational function | $f_2(x)$| $\frac{x}{(x+1)^2}$ |
| Sinusoid function | $f_3(x)$| $\sin(x)$ |
| Bessel function | $f_4(x)$| $J_0\left(10\sqrt{x}\right)$ |
As we can see, the functions $f_1(x)$, $f_2(x)$, $f_3(x)$ and $f_4(x)$ have very different functional forms and are of varying levels of complexity. To run the experiments, we first import the univariate functions above from the **benchmarks.univariate_functions** module in **pysymbolic** as follows:
```python
from pysymbolic.benchmarks.univariate_functions import *
```
Then, we create a list of the univariate functions $f_1(x)$, $f_2(x)$, $f_3(x)$ and $f_4(x)$ as follows:
```python
True_functions = [('Exponential function exp(-3x)', exponential_function), ('Rational function x/(x+1)^2', rational_function),
('Sinusoid function sin(x)', sinusoidal_function), ('Bessel function J_0(10*sqrt(x))', bessel_function)]
```
Before running the experiments, let us visualize the four functions in the range $x \in [0,1]$ to see how different they are, and the extent to which their complexity varies from one function to another.
```python
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
x_points = np.linspace(0,1,100)
fig, axs = plt.subplots(1, 4, figsize=(20,2.5))
axs[0].plot(x_points, True_functions[0][1](x_points), linewidth=4)
axs[0].set_title('$f_1(x)$')
axs[1].plot(x_points, True_functions[1][1](x_points), linewidth=4)
axs[1].set_title('$f_2(x)$')
axs[2].plot(x_points, True_functions[2][1](x_points), linewidth=4)
axs[2].set_title('$f_3(x)$')
axs[3].plot(x_points, True_functions[3][1](x_points), linewidth=4)
axs[3].set_title('$f_4(x)$')
for ax in axs.flat:
ax.set(xlabel='$x$', ylabel='$f(x)$')
for ax in axs.flat:
ax.label_outer()
```
As we can see, the Bessel function is the most complex. So will our symbolic metamodeling algorithm be able to recover the underlying mathematical expressions describing these functions while recognizing their varying levels of complexity?
## Running the experiments
Now we set up the experiment by first setting the number of evaluation points (npoints=100) that we will input to both the symbolic metamodeling and the symbolic regression models, and creating an empty list of learned symbolic expressions and $R^2$ scores.
```python
npoints = 100
xrange = [0.01, 1]
symbolic_metamodels = []
symbolic_regssion = []
sym_metamodel_R2 = []
sym_regression_R2 = []
```
Before running the experiments, we first import the **algorithms.symbolic_expressions** module from **pysymbolic**. This module contains two functions, **get_symbolic_model** and **symbolic_regressor**, which recover univariate metamodels and symbolic regression models, respectively.
```python
from mpmath import *
from sympy import *
from pysymbolic.algorithms.symbolic_expressions import *
```
Now we run the experiments by feeding each function in **True_functions** to both **get_symbolic_model** and **symbolic_regressor**:
```python
for true_function in True_functions:
print('Now working on the ' + true_function[0])
print('--------------------------------------------------------')
print('--------------------------------------------------------')
symbolic_model, _mod_R2 = get_symbolic_model(true_function[1], npoints, xrange)
symbolic_metamodels.append(symbolic_model)
sym_metamodel_R2.append(_mod_R2)
symbolic_reg, _reg_R2 = symbolic_regressor(true_function[1], npoints, xrange)
symbolic_regssion.append(symbolic_reg)
sym_regression_R2.append(_reg_R2)
print('--------------------------------------------------------')
```
## Results and discussion
Now let us check the symbolic expressions retrieved by both symbolic metamodeling and symbolic regression. In order to enable printing in LaTeX format, we first invoke the "init_printing" command of sympy as follows:
```python
init_printing()
```
Now let us start with the first function $f_1(x) = e^{-3x}$, and see what the corresponding symbolic metamodel stored in **symbolic_metamodels[0]** looks like...
```python
symbolic_metamodels[0].expression()
```
As we can see, this is almost exactly equal to $e^{-3x}$! This means that the metamodeling algorithm was able to recover the true expression for $f_1(x)$ based on 100 evaluation samples only. To check the corresponding values of the poles and zeros recovered by the gradient descent algorithm used to optimize the metamodel, we can inspect the attributes of the **MeijerG** object **symbolic_metamodels[0]** as follows:
```python
symbolic_metamodels[0].a_p, symbolic_metamodels[0].b_q, symbolic_metamodels[0]._const
```
Now let us check the expression learned by symbolic regression (which is stored in **symbolic_regssion[0]**)...
```python
symbolic_regssion[0]
```
Here, the symbolic regression algorithm retrieved an approximation of $f_1(x) = e^{-3x}$, but failed to capture the exponential functional form of $f_1(x)$. This is because the symbolic regression search algorithm starts with predefined forms (mostly polynomials), and hence is less flexible than our Meijer $G$-function parameterization.
**What if we want to restrict our metamodels to polynomials only?** In this case, we can use the *approx_expression* method to recover a Taylor approximation of the learned symbolic expression as follows.
```python
from copy import deepcopy
polynomial_metamodel_of_f1 = deepcopy(symbolic_metamodels[0])
```
```python
polynomial_metamodel_of_f1.approximation_order = 2
polynomial_metamodel_of_f1.approx_expression()
```
As we can see, the second order Taylor approximation of our metamodel appears to be very close to the symbolic regression model!
But what about the other functions? Let us check $f_2(x) = \frac{x}{(x+1)^2}$ and see what the metamodel was for that.
```python
symbolic_metamodels[1].expression()
```
For $f_2(x)$, the metamodeling algorithm nailed it! It exactly recovered the true symbolic expression. For the symbolic regression model for $f_2(x)$, we have the following expression:
```python
symbolic_regssion[1]
```
So the symbolic regression algorithm also did a good job in finding the true mathematical expression for $f_2(x)$, though it recovered a less accurate expression than that of the metamodel. Now let us examine the results for the third function $f_3(x) = \sin(x)$...
```python
symbolic_metamodels[2].expression()
```
```python
symbolic_regssion[2]
```
Here, both algorithms came up with approximations of the sinusoid function in the range $[0,1]$. This is because in the range $[0,1]$ we see no full cycle of the sinusoid, and hence it is indistinguishable from, say, a linear approximation. The hypergeometric function $_2 F_1$ in the metamodel is very close to 0, and hence the metamodel can be thought of as a linear approximation of the sinusoidal function.
Now we look at the most tricky of the four functions: $f_4(x) = J_0\left(10\sqrt{x}\right)$. This one is difficult because it already displays a lot of fluctuations in the range $[0,1]$, and has an unusual functional form. So what symbolic expressions did the two algorithms learn for $f_4(x)$?
```python
symbolic_metamodels[3].expression()
```
```python
symbolic_regssion[3]
```
This is an exciting result! The symbolic metamodel is very close to the ground truth: it corresponds to a modified Bessel function of the first kind $I_0(x)$ instead of a Bessel function of the first kind $J_0(x)$! Using the identity $J_0(ix) = I_0(x)$, we can see that our metamodel is in fact identical to the ground truth!
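As a quick numerical sanity check of this identity (using SciPy, which is otherwise not needed in this notebook):

```python
import numpy as np
from scipy.special import iv, jv

x_check = np.linspace(0.5, 3.0, 4)
print(jv(0, 1j * x_check).real)  # J_0(ix)
print(iv(0, x_check))            # I_0(x), should coincide
```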
The above "qualitative" comparisons show that symbolic metamodeling can recover richer and more complex expressions compared to symbolic regression. The quantitative comparison can be done by simply comparing the $R^2$ scores for the two algorithms on the four functions:
```python
sym_metamodel_R2
```
```python
sym_regression_R2
```
Finally, to evaluate the numeric value of any metamodel for a given $x$, we can use the **evaluate** method of the **MeijerG** object. In the cell below, we evaluate all metamodels in the range $[0,1]$ and plot them along the true functions to see how accurate they are.
```python
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
x_points = np.linspace(0,1,100)
fig, axs = plt.subplots(1, 4, figsize=(20,2.5))
axs[0].plot(x_points, True_functions[0][1](x_points), linewidth=4, label='True function')
axs[0].plot(x_points, symbolic_metamodels[0].evaluate(x_points), color='red', linewidth=3, linestyle='--', label='Metamodel')
axs[0].set_title('$f_1(x)$')
axs[0].legend()
axs[1].plot(x_points, True_functions[1][1](x_points), linewidth=4, label='True function')
axs[1].plot(x_points, symbolic_metamodels[1].evaluate(x_points), color='red', linewidth=3, linestyle='--', label='Metamodel')
axs[1].set_title('$f_2(x)$')
axs[1].legend()
axs[2].plot(x_points, True_functions[2][1](x_points), linewidth=4, label='True function')
axs[2].plot(x_points, symbolic_metamodels[2].evaluate(x_points), color='red', linewidth=3, linestyle='--', label='Metamodel')
axs[2].set_title('$f_3(x)$')
axs[2].legend()
axs[3].plot(x_points, True_functions[3][1](x_points), linewidth=4, label='True function')
axs[3].plot(x_points, symbolic_metamodels[3].evaluate(x_points), color='red', linewidth=3, linestyle='--', label='Metamodel')
axs[3].set_title('$f_4(x)$')
axs[3].legend()
for ax in axs.flat:
ax.set(xlabel='$x$', ylabel='$f(x)$')
for ax in axs.flat:
ax.label_outer()
```
## References
[1] Patryk Orzechowski, William La Cava, and Jason H Moore. Where are we now?: a large benchmark study of recent symbolic regression methods. *In Proceedings of the Genetic and Evolutionary Computation Conference*, pages 1183–1190. ACM, 2018.
[2] Telmo Menezes and Camille Roth. Symbolic regression of generative network models. *Scientific reports*, 4:6284, 2014.
[3] Ekaterina J Vladislavleva, Guido F Smits, and Dick Den Hertog. Order of nonlinearity as a complexity measure for models generated by symbolic regression via Pareto genetic programming. *IEEE Transactions on Evolutionary Computation*, 13(2):333–349, 2009.
| source repo: loramf/mlforhealthlabpub | file: alg/symbolic_metamodeling/2-_Metamodeling_of_univariate_black-box_functions_using_Meijer_G-functions.ipynb | format: Jupyter Notebook | license: BSD-3-Clause | size: 17612 bytes |
```python
import sys, os
sys.path.insert(0, os.path.join(os.pardir, 'src'))
from fe_approx1D_numint import approximate, mesh_uniform, u_glob
from sympy import sqrt, exp, sin, Symbol, lambdify, simplify
import numpy as np
from math import log
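# Convergence study: for each test function (sqrt, exp, sin) and element degree d,
# approximate it with P_d finite elements on N_e uniform elements, measure the L2
# error against the exact function, and estimate the convergence rate r from
# successive mesh refinements (assuming E ~ C*h^r).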
x = Symbol('x')
A = 1
w = 1
cases = {'sqrt': {'f': sqrt(x), 'Omega': [0,1]},
'exp': {'f': A*exp(-w*x), 'Omega': [0, 3.0/w]},
'sin': {'f': A*sin(w*x), 'Omega': [0, 2*np.pi/w]}}
results = {}
d_values = [1, 2, 3, 4]
for case in cases:
f = cases[case]['f']
f_func = lambdify([x], f, modules='numpy')
Omega = cases[case]['Omega']
results[case] = {}
for d in d_values:
results[case][d] = {'E': [], 'h': [], 'r': []}
for N_e in [4, 8, 16, 32, 64, 128]:
try:
c = approximate(
f, symbolic=False,
numint='GaussLegendre%d' % (d+1),
d=d, N_e=N_e, Omega=Omega,
filename='tmp_%s_d%d_e%d' % (case, d, N_e))
            except np.linalg.LinAlgError as e:
                print(str(e))
continue
vertices, cells, dof_map = mesh_uniform(
N_e, d, Omega, symbolic=False)
xc, u, _ = u_glob(c, vertices, cells, dof_map, 51)
e = f_func(xc) - u
# Trapezoidal integration of the L2 error over the
# xc/u patches
e2 = e**2
L2_error = 0
for i in range(len(xc)-1):
L2_error += 0.5*(e2[i+1] + e2[i])*(xc[i+1] - xc[i])
L2_error = np.sqrt(L2_error)
h = (Omega[1] - Omega[0])/float(N_e)
results[case][d]['E'].append(L2_error)
results[case][d]['h'].append(h)
# Compute rates
h = results[case][d]['h']
E = results[case][d]['E']
for i in range(len(h)-1):
r = log(E[i+1]/E[i])/log(h[i+1]/h[i])
results[case][d]['r'].append(round(r, 2))
print(results)
for case in results:
for d in sorted(results[case]):
        print('case=%s d=%d, r: %s' % (case, d, results[case][d]['r']))
```
| source repo: okara83/Becoming-a-Data-Scientist | file: Data Science and Machine Learning/Machine-Learning-In-Python-THOROUGH/EXAMPLES/FINITE_ELEMENTS/INTRO/EXERCICES/19_PD_APPROX_ERROR.ipynb | format: Jupyter Notebook | license: MIT | size: 3225 bytes |
```python
import numpy as np
import sympy as sy
import matplotlib.pyplot as plt
from openmm import unit
```
# Double well potential
Our double well potential is described by the following expression:
$$
V(x,y,z)=E_{0}\left[ \left(\frac{x}{a}\right)^4 -2\left(\frac{x}{a}\right)^2 \right]-\frac{b}{a}x + \frac{1}{2}k\left( y^2 + z^2 \right)
$$ (potential)
This potential can be split in two summands, the first one with the potential for the $X$ axis, and a second term with the potential for the $Y$ and $Z$ axes:
$$
V(x,y,z)=V_{x}(x)+V_{y,z}(y,z)
$$ (potential-decomposed)
## A double well potential along $X$
The one dimensional potential $V_{x}(x)$ is a function of $x$ as well as three parameters: $E_{0}$, $a$, $b$.
$$
V_{x}(x)=E_{0}\left[ \left(\frac{x}{a}\right)^4 -2\left(\frac{x}{a}\right)^2 \right]-\frac{b}{a}x
$$ (x_potential)
Let's see the meaning of these constants. For this purpose the following method is defined to evaluate the contribution of the potential energy terms corresponding to the double well in $X$.
```python
def double_well_potential_1D(x,Eo,a,b):
return Eo*((x/a)**4-2*(x/a)**2)-(b/a)*x
```
### Symmetric case ($b=0$)
A symmetric double well potential can be represented by the former mathematical expression when $b=0$. In this situation, the double well minima are placed in $x=-a$ and $x=a$. And $E_{0}$ is the value of the barrier height (equal for both basins). Lets verify these statements with an example:
```python
Eo=3.0 * unit.kilocalories_per_mole # barrier height when the double well is symmetric.
a=1.0 * unit.nanometers # Absolute value of the coordinates of minima (the potential is an even function).
b=0.0 * unit.kilocalories_per_mole # asymmetry parameter, explained below
x_serie = np.arange(-5., 5., 0.05) * unit.nanometers
plt.plot(x_serie, double_well_potential_1D(x_serie,Eo,a,b), 'r-')
plt.ylim(-4,1)
plt.xlim(-2,2)
plt.grid()
plt.xlabel("X ({})".format(unit.nanometers))
plt.ylabel("Energy ({})".format(unit.kilocalories_per_mole))
plt.title("Symmetric Double Well")
plt.hlines(-3, -1.7, -1, color='gray', linestyle='--')
plt.hlines(0, -1.7, 0, color='gray', linestyle='--')
plt.vlines(-1.6, -3, 0, color='gray', linestyle='--')
plt.vlines(1, -4, -3, color='gray', linestyle='--')
plt.text(-1.85, -1.5, 'Eo', fontsize=12)
plt.text(1.1, -3.6, 'a', fontsize=12)
plt.show()
```
You can play with the last cell, changing the $E_{0}$ and $a$ values, to check the consistency of their description. Or you can instead work with the first derivative of the potential to do the same analytically:
```python
x, Eo, a = sy.symbols('x Eo a')
f = Eo*((x/a)**4-2*(x/a)**2)
g=sy.diff(f,x) # First derivative of f with respect to x
```
```python
g
```
$\displaystyle Eo \left(- \frac{4 x}{a^{2}} + \frac{4 x^{3}}{a^{4}}\right)$
The first derivative can be factorized to unveil the value of its three roots: $x=0$, the barrier position, and $x=a$ and $x=-a$, the minima positions.
```python
sy.factor(g)
```
$\displaystyle \frac{4 Eo x \left(- a + x\right) \left(a + x\right)}{a^{4}}$
The height of the barrier, from the bottom of the basins to its top, can then be calculated:
$$
V_{x}(0)-V_{x}(a)=0-E_{0}\left[ 1-2 \right]=E_{0}
$$ (x_barrier)
#### Frequency of small oscillations around the minima
Now that the role of $E_{0}$ and $a$ is clear for the symmetric double well, we are interested in the frequency of the small oscillations at the energy minima. This frequency will be our reference time scale for choosing an appropriate integration time step when simulating the dynamics of a particle in this potential. This can be done attending to the second derivative of the potential.
The value of any mathematical function close enough to a minimum can be approximated by the value of the function at the minimum plus the contribution of a harmonic potential. The stiffness of this harmonic potential is equal to the value of the second derivative of the function at the minimum. This is what is known as the Taylor expansion of the function (truncated at second order):
$$
f(x) \approx f(x_{0}) + f'(x_{0})(x-x_{0}) + \frac{1}{2} f''(x_{0})(x-x_{0})^{2}
$$ (taylor)
And by definition of minimum:
$$
f'(x_{0})=0
$$ (minimum_definition)
So:
$$
f(x) \approx f(x_{0}) + \frac{1}{2} f''(x_{0})(x-x_{0})^{2}
$$ (taylor_2)
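SymPy can carry out this truncated expansion for the double well around the minimum at $x=a$; a small sketch (the symbols are redefined here, and declared positive just to keep the expansion simple):

```python
x, Eo, a = sy.symbols('x Eo a', positive=True)
f = Eo*((x/a)**4 - 2*(x/a)**2)
# second-order Taylor expansion around the minimum x = a
sy.series(f, x, a, 3).removeO().simplify()
```

The result, $-E_{0} + \frac{4E_{0}}{a^{2}}(x-a)^{2}$, already anticipates the stiffness $k=8E_{0}/a^{2}$ used below.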
At this point let's make a break to talk about the harmonic potential. We all know Hooke's law, which describes the force acting on a mass attached to an ideal spring near the equilibrium position:
$$
F(x) = -k(x-x_{0})
$$ (hook)
Where $k$ is the stiffness of the spring and $x_{0}$ is the equilibrium position. The potential energy $V(x)$ is now deduced given that:
$$
F(x) = -\frac{d V(x)}{dx}
$$ (grad_potential)
So, the spring force is obtained as minus the first derivative of the harmonic potential:
$$
V(x) = \frac{1}{2} k (x-x_{0})^{2}
$$ (x_potential_2)
And the frequency of oscillation of the spring, or of a particle governed by the former potential, is:
$$
\omega = \sqrt{\frac{k}{m}}
$$ (omega)
Where $m$ is the mass of the particle. This way the potential can also be written as:
$$
V(x) = \frac{1}{2} k (x-x_{0})^{2} = \frac{1}{2} m \omega^{2} (x-x_{0})^{2}
$$ (potential_omega)
Now, going back to our Taylor expansion of a mathematical function $f(x)$: if the shape of $f(x)$ around a minimum $x_{0}$ can be approximated with a harmonic potential, then the characteristic frequency $\omega$ of the oscillations in the near surroundings of the minimum is, by comparison with the potential behind Hooke's law:
$$
\omega = \sqrt{\frac{f''(x_{0})}{m}}
$$ (omega_2)
This way the frequency of the small oscillations of a particle with mass $m$ around a minimum can then be obtained from the value of the second derivative at the minimum:
```python
x, Eo, a = sy.symbols('x Eo a')
f = Eo*((x/a)**4-2*(x/a)**2)
gg=sy.diff(f,x,x) # Second derivative of f with respect to x
gg
```
$\displaystyle - \frac{4 Eo \left(1 - \frac{3 x^{2}}{a^{2}}\right)}{a^{2}}$
Let's look at the minimum at $x=a$:
```python
gg.subs({x:a})
```
$\displaystyle \frac{8 Eo}{a^{2}}$
In this case the frequency of the oscillations of a particle with mass $m$ is:
$$
\omega = \sqrt{\frac{8E_{0}}{ma^{2}}}
$$ (omega_example)
And the period is:
$$
T = \frac{2\pi}{\omega} = 2\pi \sqrt{\frac{ma^{2}}{8E_{0}}}
$$ (period_example)
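As a minimal numerical check of this period, assuming an arbitrary, argon-like particle mass of 39.9 amu (this mass is not used anywhere else in the notebook):

```python
m = 39.9 * unit.amu                        # assumed particle mass
Eo = 3.0 * unit.kilocalories_per_mole
a = 1.0 * unit.nanometers

k = 8 * Eo / a**2                          # stiffness of the harmonic approximation
# in MD units (amu, nm, ps, kJ/mol) the ratio k/m comes out directly in 1/ps**2
omega2 = (k / m).value_in_unit(unit.kilojoule_per_mole / (unit.amu * unit.nanometer**2))
T = 2 * np.pi / np.sqrt(omega2)            # period of the small oscillations, in ps
print(round(T, 2), 'ps')
```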
We can see graphically how the Taylor expansion is a good approximation when close enough to a minimum:
```python
def harmonic_well_potential_1D(x,k,a,Eo):
return 0.5*k*(x-a)**2-Eo
Eo=3.0 * unit.kilocalories_per_mole # barrier height when the double well is symmetric.
a=1.0 * unit.nanometers # Absolute value of the coordinates of minima (the potential is an even function).
b=0.0 * unit.kilocalories_per_mole # No need to explanation at this moment
k=(8*Eo)/a**2 # harmonic stiffness
x_serie = np.arange(-5., 5., 0.05) * unit.nanometers
plt.plot(x_serie, double_well_potential_1D(x_serie,Eo,a,b), 'r-')
plt.plot(x_serie, harmonic_well_potential_1D(x_serie,k,a,Eo), color='k', linestyle='--')
plt.ylim(-4,1)
plt.xlim(-2,2)
plt.grid()
plt.xlabel("X ({})".format(unit.nanometers))
plt.ylabel("Energy ({})".format(unit.kilocalories_per_mole))
plt.title("Symmetric Double Well and Harmonic Potential")
plt.show()
x_serie = np.arange(-0.5, 1.5, 0.005) * unit.nanometers
plt.plot(x_serie, double_well_potential_1D(x_serie,Eo,a,b), 'r-')
plt.plot(x_serie, harmonic_well_potential_1D(x_serie,k,a,Eo), color='k', linestyle='--')
plt.ylim(-3.05,-2.8)
plt.xlim(0.9,1.1)
plt.grid()
plt.xlabel("X ({})".format(unit.nanometers))
plt.ylabel("Energy ({})".format(unit.kilocalories_per_mole))
plt.title("Harmonic approximation nearby a minimum")
plt.show()
```
### Asymmetric case ($b\neq0$)
In the case of $b\neq 0$, our double well turns into an asymmetric potential. In this situation $E_{0}$ and $a$ have **approximately** the same interpretation, and $b$ can be **approximately** understood as the amount of energy by which the basins shift up or down, depending on their position relative to $x=0$. In our double well, the left basin rises and the right basin drops when $b>0$. Let's see this in a plot:
```python
Eo=3.0 * unit.kilocalories_per_mole # barrier height when the double well is symmetric.
a=1.0 * unit.nanometers # Absolute value of the coordinates of minima (the potential is an even function).
b=1.0 * unit.kilocalories_per_mole # vertical shift of basins (approx.)
x_serie = np.arange(-5., 5., 0.05) * unit.nanometers
plt.plot(x_serie, double_well_potential_1D(x_serie,Eo,a,b), 'r-')
plt.ylim(-4,1)
plt.xlim(-2,2)
plt.grid()
plt.xlabel("X ({})".format(unit.nanometers))
plt.ylabel("Energy ({})".format(unit.kilocalories_per_mole))
plt.title("Asymmetric Double Well")
plt.hlines(-2, -1.45, -1, color='gray', linestyle='--')
plt.hlines(0.05, -1.45, 0, color='gray', linestyle='--')
plt.vlines(-1.4, -2, 0.05, color='gray', linestyle='--')
plt.vlines(-0.95, -4, -2, color='gray', linestyle='--')
plt.text(-1.75, -2.4, r'$\approx Eo-b$', fontsize=12)
plt.text(-0.9, -3.6, r'$\approx a$', fontsize=12)
plt.show()
```
The value of the energy barrier seen from the left minimum is $\approx E_{0}-b$, while $\approx E_{0}+b$ accounts for the barrier seen from the right minimum.
#### Frequency of small oscillations around the minima
Since $b$ enters the potential only through a linear term, it does not contribute to the second derivative. The curvature as a function of $x$ is therefore the same as in the symmetric case, so the harmonic approximation described above also holds here. Notice, however, that the positions of the minima, and the position of the barrier, are slightly shifted with respect to those of the symmetric double well.
## A harmonic potential along $Y$ and $Z$
The behaviour of a particle in the double well potential along $X$ is independent of what happens along $Y$ and $Z$. We could keep the three dimensional potential equal to $V_{x}(x)$, but in this case the particle would diffuse freely in the $Y$-$Z$ subspace. To avoid this, for aesthetic purposes only, let's add a harmonic well along those two axes:
$$
V_{yz}(y,z)= \frac{1}{2}k\left( y^2 + z^2 \right)
$$ (xy_potential)
The value of the elastic constant $k$ should be lower than, or at most equal to, the stiffness of the harmonic approximation of $V_{x}(x)$ around the minima. Otherwise, $V_{yz}$ would be the term limiting the integration time step. It is then suggested that:
$$
k\le \frac{8E_{0}}{a^2}
$$ (k_limit)
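Putting both contributions together, a minimal sketch of the full three-dimensional potential, reusing the 1D helper defined above and taking $k$ at the suggested upper bound:

```python
def double_well_potential_3D(x, y, z, Eo, a, b, k):
    """Double well along X plus harmonic confinement in Y and Z."""
    return double_well_potential_1D(x, Eo, a, b) + 0.5 * k * (y**2 + z**2)

k = 8 * Eo / a**2   # upper bound suggested above
print(double_well_potential_3D(0.5 * unit.nanometers, 0.1 * unit.nanometers,
                               -0.2 * unit.nanometers, Eo, a, b, k))
```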
| source repo: uibcdf/Molecular-Systems | file: docs/contents/molecular_systems/double_well/double_well_potential.ipynb | format: Jupyter Notebook | license: MIT | size: 106323 bytes |
# Periodic Signals
*This Jupyter notebook is part of a [collection of notebooks](../index.ipynb) in the bachelor's module Signals and Systems, Communications Engineering, Universität Rostock. Please direct questions and suggestions to [Sascha.Spors@uni-rostock.de](mailto:Sascha.Spors@uni-rostock.de).*
## Spectrum
Periodic signals are an important class of signals. Many practical signals can be approximated reasonably well as periodic functions. This often holds when considering only a limited time interval. Examples of periodic signals are a superposition of harmonic signals, signals captured from vibrating structures or rotating machinery, as well as speech signals or signals from musical instruments. The spectrum of a periodic signal exhibits specific properties which are derived in the following.
### Representation
A [periodic signal](https://en.wikipedia.org/wiki/Periodic_function) $x(t)$ is a signal that repeats its values in regular periods. It has to fulfill
\begin{equation}
x(t) = x(t + n \cdot T_\text{p})
\end{equation}
for $n \in \mathbb{Z}$ where its period is denoted by $T_\text{p} > 0$. A signal is termed *aperiodic* if it is not periodic. One period $x_0(t)$ of a periodic signal is given as
\begin{equation}
x_0(t) = \begin{cases}
x(t) & \text{for } 0 \leq t < T_\text{p} \\
0 & \text{otherwise}
\end{cases}
\end{equation}
A periodic signal can be represented by [periodic summation](https://en.wikipedia.org/wiki/Periodic_summation) of one period $x_0(t)$
\begin{equation}
x(t) = \sum_{\mu = - \infty}^{\infty} x_0(t - \mu T_\text{p})
\end{equation}
which can be rewritten as convolution
\begin{equation}
x(t) = \sum_{\mu = - \infty}^{\infty} x_0(t) * \delta(t - \mu T_\text{p}) = x_0(t) * \sum_{\mu = - \infty}^{\infty} \delta(t - \mu T_\text{p})
\end{equation}
using the sifting property of the Dirac impulse. It can be concluded that a periodic signal can be represented by one period $x_0(t)$ of the signal convolved with a series of Dirac impulses.
**Example**
The cosine signal $x(t) = \cos (\omega_0 t)$ has a periodicity of $T_\text{p} = \frac{2 \pi}{\omega_0}$. One period is given as
\begin{equation}
x_0(t) = \cos (\omega_0 t) \cdot \text{rect} \left( \frac{t}{T_\text{p}} - \frac{T_\text{p}}{2} \right)
\end{equation}
Introducing this into the above representation of a periodic signal yields
\begin{align}
x(t) &= \cos (\omega_0 t) \cdot \text{rect} \left( \frac{t}{T_\text{p}} - \frac{T_\text{p}}{2} \right) * \sum_{\mu = - \infty}^{\infty} \delta(t - \mu T_\text{p}) \\
&= \cos (\omega_0 t) \sum_{\mu = - \infty}^{\infty} \text{rect} \left( \frac{t}{T_\text{p}} - \frac{T_\text{p}}{2} - \mu T_\text{p} \right) \\
&= \cos (\omega_0 t)
\end{align}
since the sum over the shifted rectangular signals is equal to one.
### The Dirac Comb
The sum of shifted Dirac impulses, as used above to represent a periodic signal, is known as [*Dirac comb*](https://en.wikipedia.org/wiki/Dirac_comb). The Dirac comb is defined as
\begin{equation}
{\bot \!\! \bot \!\! \bot}(t) = \sum_{\mu = - \infty}^{\infty} \delta(t - \mu)
\end{equation}
It is used for the representation of periodic signals and for the modeling of ideal sampling. In order to compute the spectrum of a periodic signal, the Fourier transform of the Dirac comb $\mathcal{F} \{ {\bot \!\! \bot \!\! \bot}(t) \}$ is derived in the following.
Fourier transformation of the left- and right-hand side of the above definition yields
\begin{equation}
\mathcal{F} \{ {\bot \!\! \bot \!\! \bot}(t) \} = \sum_{\mu = - \infty}^{\infty} e^{-j \mu \omega}
\end{equation}
The exponential function $e^{-j \mu \omega}$ for $\mu \in \mathbb{Z}$ is periodic with a period of $2 \pi$. Hence, the Fourier transform of the Dirac comb is also periodic with a period of $2 \pi$.
Convolving a [rectangular signal](../notebooks/continuous_signals/standard_signals.ipynb#Rectangular-Signal) with the Dirac comb results in
\begin{equation}
{\bot \!\! \bot \!\! \bot}(t) * \text{rect}(t) = 1
\end{equation}
Fourier transform of the left- and right-hand side yields
\begin{equation}
\mathcal{F} \{ {\bot \!\! \bot \!\! \bot}(t) \} \cdot \text{sinc}\left(\frac{\omega}{2}\right) = 2 \pi \delta(\omega)
\end{equation}
For $\text{sinc}( \frac{\omega}{2} ) \neq 0$, which is equal to $\omega \neq 2 n \cdot \pi$ with $n \in \mathbb{Z} \setminus \{0\}$, this can be rearranged as
\begin{equation}
\mathcal{F} \{ {\bot \!\! \bot \!\! \bot}(t) \} = 2 \pi \, \delta(\omega) \cdot \frac{1}{\text{sinc}\left(\frac{\omega}{2}\right)} = 2 \pi \, \delta(\omega)
\end{equation}
Note that the [multiplication property](../continuous_signals/standard_signals.ipynb#Dirac-Impulse) of the Dirac impulse and $\text{sinc}(0) = 1$ have been used to derive the last equality. The Fourier transform is now known for the interval $-2 \pi < \omega < 2 \pi$. It has already been concluded that the Fourier transform is periodic with a period of $2 \pi$. Hence, the Fourier transform of the Dirac comb can be derived by periodic continuation as
\begin{equation}
\mathcal{F} \{ {\bot \!\! \bot \!\! \bot}(t) \} = \sum_{\mu = - \infty}^{\infty} 2 \pi \, \delta(\omega - 2 \pi \mu) = \sum_{\mu = - \infty}^{\infty} \delta \left( \frac{\omega}{2 \pi} - \mu \right)
\end{equation}
The last equality follows from the scaling property of the Dirac impulse. The Fourier transform can now be rewritten in terms of the Dirac comb
\begin{equation}
\mathcal{F} \{ {\bot \!\! \bot \!\! \bot}(t) \} = {\bot \!\! \bot \!\! \bot} \left( \frac{\omega}{2 \pi} \right)
\end{equation}
The Fourier transform of a Dirac comb with unit distance between the Dirac impulses is a Dirac comb with a distance of $2 \pi$ between the Dirac impulses which are weighted by $2 \pi$.
**Example**
The following example computes the truncated series
\begin{equation}
X(j \omega) = \sum_{\mu = -M}^{M} e^{-j \mu \omega}
\end{equation}
as approximation of the Fourier transform $\mathcal{F} \{ {\bot \!\! \bot \!\! \bot}(t) \}$ of the Dirac comb. For this purpose the sum is defined and plotted in `SymPy`.
```python
%matplotlib inline
import sympy as sym
sym.init_printing()
mu = sym.symbols('mu', integer=True)
w = sym.symbols('omega', real=True)
M = 20
X = sym.Sum(sym.exp(-sym.I*mu*w), (mu, -M, M)).doit()
sym.plot(X, xlabel='$\omega$', ylabel='$X(j \omega)$', adaptive=False, nb_of_points=1000);
```
**Exercise**
* Change the summation limit $M$. How does the approximation change? Note: Increasing $M$ above a certain threshold may lead to numerical instabilities.
### Fourier Transform
In order to derive the Fourier transform $X(j \omega) = \mathcal{F} \{ x(t) \}$ of a periodic signal $x(t)$ with period $T_\text{p}$, the signal is represented by one period $x_0(t)$ and the Dirac comb. Rewriting the above representation of a periodic signal in terms of a sum of Dirac impulses by noting that $\delta(t - \mu T_\text{p}) = \frac{1}{T_\text{p}} \delta(\frac{t}{T_\text{p}} - \mu)$ yields
\begin{equation}
x(t) = x_0(t) * \frac{1}{T_\text{p}} {\bot \!\! \bot \!\! \bot} \left( \frac{t}{T_\text{p}} \right)
\end{equation}
The Fourier transform is derived by application of the [convolution theorem](../fourier_transform/theorems.ipynb#Convolution-Theorem)
\begin{align}
X(j \omega) &= X_0(j \omega) \cdot {\bot \!\! \bot \!\! \bot} \left( \frac{\omega T_\text{p}}{2 \pi} \right) \\
&= \frac{2 \pi}{T_\text{p}} \sum_{\mu = - \infty}^{\infty} X_0 \left( j \, \mu \frac{2 \pi}{T_\text{p}} \right) \cdot
\delta \left( \omega - \mu \frac{2 \pi}{T_\text{p}} \right)
\end{align}
where $X_0(j \omega) = \mathcal{F} \{ x_0(t) \}$ denotes the Fourier transform of one period of the periodic signal. From the last equality it can be concluded that the Fourier transform of a periodic signal consists of a series of weighted Dirac impulses. These Dirac impulses are equally distributed on the frequency axis $\omega$ at an interval of $\frac{2 \pi}{T_\text{p}}$. The weights of the Dirac impulses are given by the values of the spectrum $X_0(j \omega)$ of one period at the locations $\omega = \mu \frac{2 \pi}{T_\text{p}}$. Such a spectrum is termed *line spectrum*.
### Parseval's Theorem
[Parseval's theorem](../fourier_transform/theorems.ipynb#Parseval%27s-Theorem) relates the energy of a signal in the time domain to its spectrum. The energy of a periodic signal is in general not defined. This is due to the fact that its energy is unlimited, if the energy of one period is non-zero. As alternative, the average power of a periodic signal $x(t)$ is used. It is defined as
\begin{equation}
P = \frac{1}{T_\text{p}} \int_{0}^{T_\text{p}} |x(t)|^2 \; dt
\end{equation}
Introducing the Fourier transform of a periodic signal into [Parseval's theorem](../fourier_transform/theorems.ipynb#Parseval%27s-Theorem) yields
\begin{equation}
\frac{1}{T_\text{p}} \int_{0}^{T_\text{p}} |x(t)|^2 \; dt = \sum_{\mu = - \infty}^{\infty} \left| \frac{1}{T_\text{p}} X_0 \left( j \, \mu \frac{2 \pi}{T_\text{p}} \right) \right|^2
\end{equation}
The average power of a periodic signal can be calculated in the time domain by integrating over the squared magnitude of one period, or in the frequency domain by summing up the squared magnitudes of the scaled spectrum samples $\frac{1}{T_\text{p}} X_0 ( j \, \mu \frac{2 \pi}{T_\text{p}} )$, i.e. the weights of the Dirac impulses of its Fourier transform divided by $2 \pi$.
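As a rough numerical sanity check of this relation (an illustration, not part of the derivation; the cosine signal and the Riemann-sum approximation of the integrals are arbitrary choices), the time-domain average power and the sum over the squared scaled spectrum samples should agree:
```python
import numpy as np

w0 = 2*np.pi                     # assumed angular frequency, i.e. Tp = 1
Tp = 2*np.pi/w0
t = np.linspace(0, Tp, 10000, endpoint=False)
x = np.cos(w0*t)

P_time = np.mean(np.abs(x)**2)   # (1/Tp) * integral of |x(t)|^2 over one period
mus = np.arange(-5, 6)
# (1/Tp) * X_0(j mu 2 pi / Tp), approximated by a Riemann sum over one period
c = np.array([np.mean(x*np.exp(-1j*mu*w0*t)) for mu in mus])
P_freq = np.sum(np.abs(c)**2)
print(P_time, P_freq)            # both should be close to 0.5
```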
### Fourier Transform of the Pulse Train
The [pulse train](https://en.wikipedia.org/wiki/Pulse_wave) is commonly used for power control using [pulse-width modulation (PWM)](https://en.wikipedia.org/wiki/Pulse-width_modulation). It is constructed from a periodic summation of a rectangular signal $x_0(t) = \text{rect} \left( \frac{t - \frac{T}{2}}{T} \right)$
\begin{equation}
x(t) = \text{rect} \left( \frac{t - \frac{T}{2}}{T} \right) * \frac{1}{T_\text{p}} {\bot \!\! \bot \!\! \bot} \left( \frac{t}{T_\text{p}} \right)
\end{equation}
where $0 < T < T_\text{p}$ denotes the width of the pulse and $T_\text{p}$ its periodicity. Its usage for power control becomes evident when calculating the average power of the pulse train
\begin{equation}
P = \frac{1}{T_\text{p}} \int_{0}^{T_\text{p}} | x(t) |^2 dt = \frac{T}{T_\text{p}}
\end{equation}
The Fourier transform of one period $X_0(j \omega) = \mathcal{F} \{ x_0(t) \}$ is derived by applying the scaling and shift theorem of the Fourier transform to the [Fourier transform of the rectangular signal](../fourier_transform/definition.ipynb#Transformation-of-the-Rectangular-Signal)
\begin{equation}
X_0(j \omega) = e^{-j \omega \frac{T}{2}} \cdot T \, \text{sinc} \left( \frac{\omega T}{2} \right)
\end{equation}
from which the spectrum of the pulse train follows by application of the above formula for the Fourier transform of a periodic signal
\begin{equation}
X(j \omega) = 2 \pi \frac{1}{T_\text{p}} \sum_{\mu = - \infty}^{\infty} e^{-j \mu \pi \frac{T}{T_\text{p}}} \cdot T \, \text{sinc} \left( \mu \pi \frac{T}{T_\text{p}} \right) \cdot \delta \left( \omega - \mu \frac{2 \pi}{T_\text{p}} \right)
\end{equation}
The weights of the Dirac impulses are defined in `SymPy` for fixed values $T$ and $T_\text{p}$
```python
mu = sym.symbols('mu', integer=True)
T = 2
Tp = 5
X_mu = sym.exp(-sym.I * mu * sym.pi * T/Tp) * T * sym.sinc(mu * sym.pi * T/Tp)
X_mu
```
The weights of the Dirac impulses are plotted with [`matplotlib`](http://matplotlib.org/index.html), a Python plotting library. The library expects the values of the function to be plotted at a series of sampling points. In order to create these, the function [`sympy.lambdify`](http://docs.sympy.org/latest/modules/utilities/lambdify.html?highlight=lambdify#sympy.utilities.lambdify) is used which numerically evaluates a symbolic function at given sampling points. The resulting plot illustrates the positions and weights of the Dirac impulses.
```python
import numpy as np
import matplotlib.pyplot as plt
Xn = sym.lambdify(mu, sym.Abs(X_mu), 'numpy')
n = np.arange(-15, 15)
plt.stem(n*2*np.pi/Tp, Xn(n))
plt.xlabel('$\omega$')
plt.ylabel('$|X(j \omega)|$');
```
**Exercise**
* Change the ratio $\frac{T}{T_\text{p}}$. How does the spectrum of the pulse train change?
* Can you derive the periodicity $T_\text{p}$ of the signal from its spectrum?
* Calculate the average power of the pulse train in the frequency domain by applying Parseval's theorem.
**Copyright**
The notebooks are provided as [Open Educational Resource](https://de.wikipedia.org/wiki/Open_Educational_Resources). Feel free to use the notebooks for your own educational purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Lecture Notes on Signals and Systems* by Sascha Spors.
**A beta-Poisson model for infectious disease transmission**
This notebook is designed to accompany the upcoming paper of the same title by Joe Hilton and Ian Hall. In the paper, we introduce a branching process model for infectious disease transmission whose offspring distribution is a beta-Poisson mixture distribution. We compare this model's performance as a description of transmission by fitting it to several examples of transmission tree data and comparing these fits to those obtained by the Poisson, geometric, negative binomial, and zero-inflated Poisson (ZIP) models. The code in this notebook includes functions which perform standard probability and likelihood calculations for this model, estimate maximum likelihood parameters, and estimate confidence intervals for these parameters using bootstrapping. We provide a rough outline of the model here, with full derivations left to the paper.
```python
from __future__ import print_function
import math
import numpy as np
import random
import scipy as sp
from scipy import special as spsp
from scipy import sparse as sparse
from scipy import stats as stats
from scipy import optimize as opt
from scipy import interpolate as interpolate
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import matplotlib.path as path
import matplotlib.animation as animation
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
import mpmath
import matplotlib as mpl
import matplotlib.cm as cm
import time
cmap = cm.hot
```
**Model description**
The beta-Poisson model describes the person-to-person spread of a pathogen in a population where contact behaviour is homogeneous but transmission behaviour varies from person to person. During their infectious period an infectious individual makes $n$ contacts, drawn from a Poisson distribution with mean $N$. For each case a transmission probability $p$ is chosen from a beta distribution with parameters $(\alpha_1,\alpha_2)$, so that the number of subsequent cases is drawn from a beta-binomial distribution with these parameters and $n$ trials. In our paper we demonstrate that the mean of the resulting distribution is given by
\begin{equation}
\lambda=N\frac{\alpha_1}{\alpha_1+\alpha_2},
\end{equation}
and that if we make the substitution $\Phi=\frac{\alpha_1+\alpha_2}{N}$ the probability mass function can be expressed as:
\begin{equation}
\begin{aligned}
P(x;\lambda,\Phi,N)=\frac{N^x}{\Gamma(x+1)}\frac{\Gamma(x+\Phi \lambda)\Gamma(\Phi N)}{\Gamma(\Phi \lambda)\Gamma(x+\Phi N)}M(x+\Phi \lambda,x+\Phi N,-N).
\end{aligned}
\end{equation}
We demonstrate in the paper that in the limit $N\to\infty$, the beta-Poisson distribution approximates a beta-gamma mixture, i.e. a negative binomial. We will use a slightly unusual parameterisation. The negative binomial with mean $\lambda$ is overdispersed, so that its variance can be expressed as $\lambda(1+\theta)$ for some $\theta>0$. We parameterise the negative binomial in terms of $\lambda$ and $\theta$ so that the pmf is given by
\begin{align}
P(x;\lambda,\theta)=\frac{\Gamma(x+\frac{\lambda}{\theta})}{\Gamma(x+1)\Gamma({\frac{\lambda}{\theta}})}\Bigg(\frac{1}{1+\theta}\Bigg)^{\frac{\lambda}{\theta}}\Bigg(\frac{\theta}{\theta+1}\Bigg)^x.
\end{align}
The negative binomial we obtain in the limit $N\to\infty$ has overdispersion $\theta=\Phi^{-1}$.
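Before turning to the probability mass function itself, the mixture construction described above can be checked with a direct simulation sketch (the parameter values below are arbitrary and purely illustrative): draw a Poisson number of contacts and a beta-distributed transmission probability for each case, and compare the sample mean of the secondary cases with $N\frac{\alpha_1}{\alpha_1+\alpha_2}$.
```python
import numpy as np

rng = np.random.default_rng(0)
N_contacts, alpha_1, alpha_2 = 5, 0.6, 1.4    # arbitrary illustrative values
n = rng.poisson(N_contacts, size=100000)      # contacts made by each case
p = rng.beta(alpha_1, alpha_2, size=100000)   # per-case transmission probability
x = rng.binomial(n, p)                        # secondary cases per case
print(x.mean(), N_contacts*alpha_1/(alpha_1 + alpha_2))  # should be close
```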
```python
def beta_poisson_pmf(x,lmbd,Phi,N):
if type(x)==int:
P=spsp.hyp1f1(x+Phi*lmbd,x+Phi*N,-N)
        for n in range(1,x+1): # this loop gives us the N^x / Gamma(x+1) term
P=(N/n)*P
        for m in range(x): # this loop gives us the ratio of gamma functions in the numerator and denominator
P=((m+Phi*lmbd)/(m+Phi*N))*P
else:
P=[]
for i in range(0,len(x)):
p=spsp.hyp1f1(x[i]+Phi*lmbd,x[i]+Phi*N,-N)
            for n in range(1,x[i]+1): # this loop gives us the N^x / Gamma(x+1) term
p=(N/n)*p
            for m in range(x[i]): # this loop gives us the ratio of gamma functions in the numerator and denominator
p=((m+Phi*lmbd)/(m+Phi*N))*p
P=P+[p]
return P
```
```python
class beta_poisson_gen:
# def __init__(self, lmbd, Phi, N):
# self.lmbd = lmbd
# self.Phi = Phi
# self.N = N
def pmf(self, x, lmbd, Phi, N):
return np.exp(self.logpmf(x,lmbd,Phi,N))
def logpmf(self,x,lmbd,Phi,N):
if type(x)==int:
log_p = (x*np.log(N) -
np.real(spsp.loggamma(x+1)) +
np.real(spsp.loggamma(Phi*N)) +
np.real(spsp.loggamma(x+Phi*lmbd)) -
np.real(spsp.loggamma(x+Phi*N)) -
np.real(spsp.loggamma(Phi*lmbd)))
            if x+Phi*N<50:
log_p += np.log(spsp.hyp1f1(x+Phi*lmbd,x+Phi*N,-N))
else:
log_p += np.log(float(mpmath.hyp1f1(x+Phi*lmbd,x+Phi*N,-N)))
else:
log_p = []
for i in range(0,len(x)):
log_pp = (x[i]*np.log(N)-np.real(spsp.loggamma(x[i]+1)) +
np.real(spsp.loggamma(Phi*N)) +
np.real(spsp.loggamma(x[i]+Phi*lmbd)) -
np.real(spsp.loggamma(x[i]+Phi*N)) -
np.real(spsp.loggamma(Phi*lmbd)))
if x[i]+Phi*N<50:
log_pp += np.log(spsp.hyp1f1(x[i]+Phi*lmbd,x[i]+Phi*N,-N))
else:
log_pp += np.log(float(mpmath.hyp1f1(x[i]+Phi*lmbd,x[i]+Phi*N,-N)))
log_p = log_p + [log_pp]
return log_p
def pgf(self, s, lmbd, Phi, N):
if np.size(np.asarray(s))==1:
G=spsp.hyp1f1(lmbd*Phi,N*Phi,N*(s-1));
else:
G=[]
for i in range(0,np.size(np.asarray(s))):
                G=G+[spsp.hyp1f1(lmbd*Phi,N*Phi,N*(s[i]-1))]
return G
def extinction_prob(self, lmbd, phi, N ):
if lmbd<=1:
return 1
else:
def f(s):
                return self.pgf(s,lmbd,phi,N)-s
q=opt.brentq(
f, 0, 1-1e-4);
return q
beta_poisson = beta_poisson_gen()
```
```python
print(beta_poisson_pmf([1,2,3,4],2,0.1,5))
print(beta_poisson.pmf([1,2,3,4],2,0.1,5))
```
[0.11547614919829788, 0.09421813032224745, 0.08866313316657161, 0.07987675522526165]
[0.11547615 0.09421813 0.08866313 0.07987676]
The next code block generates a widget with which you can plot the pmf of the beta-Poisson distribution and use a set of sliders to control the values of $\lambda$, $\Phi$, and $N$. You should find that when $\lambda$ is close to $N$ or $\Phi$ is large the distribution is more symmetrical, and that decreasing either $\lambda$ or $\Phi$ skews the distribution towards zero.
```python
%matplotlib inline
from ipywidgets import interactive
import matplotlib.pyplot as plt
import numpy as np
def f(lmbd,Phi,N):
fig, ax=plt.subplots(figsize=(5,5))
plt.figure(1)
x = list(range(int(np.ceil(lmbd+4*np.sqrt(lmbd*(1+(N-lmbd)/(Phi*N+1)))))))
P=beta_poisson_pmf(x,lmbd,Phi,N)
plt.bar(x,P)
plt.show()
lmbd_widget=widgets.FloatSlider(min=0.05,max=5,step=0.05,value=1)
Phi_widget=widgets.FloatSlider(min=0.01,max=2,step=0.01,value=0.2)
N_widget=widgets.FloatSlider(min=0.1,max=40,step=0.1,value=5)
def update_lmbd_range(*args):
lmbd_widget.max=N_widget.value
N_widget.observe(update_lmbd_range,'value')
interactive_plot=interactive(f,lmbd=lmbd_widget,Phi=Phi_widget,N=N_widget)
output=interactive_plot.children[-1]
interactive_plot
```
interactive(children=(FloatSlider(value=1.0, description='lmbd', max=5.0, min=0.05, step=0.05), FloatSlider(va…
**Likelihood calculations**
Suppose that we have a vector $(x_1,...,x_K)$ of secondary case data, i.e. $x_i$ is the number of cases which have case $i$ as their infector (note that in this work we do not address the problem of how to identify these infectors and so generate a dataset). The log-likelihood of parameters $(\lambda,\Phi,N)$ given this data is given by
\begin{align}
\log\mathcal{L}=\sum\limits_{i=1}^Kx_i\log N-\log\Gamma(x_i+1)+\log\Gamma(\Phi N)+\log\Gamma(x_i+\Phi \lambda)-\log\Gamma(x_i+\Phi N)-\log\Gamma(\Phi \lambda)+\log M(x_i+\Phi \lambda,x_i+\Phi N,-N).
\end{align}
For large values of $N$ this formula is numerically unstable, and in this region we will use the log likelihood function of the negative binomial distribution with $\theta=\Phi^{-1}$. One can justify the choice of $N=1,000$ as a cutoff by calculating the beta-Poisson and negative binomial probability mass functions up to a suitable maximum (x=250 seems a generous upper limit for secondary case data) and for a range of $\lambda$ and $\Phi$ values. In an emerging infection context we are interested in $\lambda$ up to about $2$ and, since $\Phi$ is inversely proportional to the level of overdisperion in the data, considering $\Phi$ values below ten is usually sufficient for fitting overdispersed datasets.
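As a rough check of this cutoff (purely illustrative; the parameter values below are arbitrary), the beta-Poisson pmf evaluated at a large value of $N$ can be compared with the limiting negative binomial pmf with $\theta=\Phi^{-1}$:
```python
lmbd_chk, Phi_chk = 1.5, 0.5   # arbitrary illustrative values
x_chk = list(range(11))
bp_large_N = beta_poisson.pmf(x_chk, lmbd_chk, Phi_chk, 1000)
nb_limit = stats.nbinom.pmf(x_chk, lmbd_chk*Phi_chk, Phi_chk/(Phi_chk+1))
print(np.max(np.abs(bp_large_N - nb_limit)))   # should be small
```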
In the next code block, `beta_poisson_loglh` calculates the log likelihood of the beta-Poisson parameters $(\lambda,\Phi,N)$ given a set of count data, and `neg_bin_bp_loglh` the log likelihood of the negative binomial parameters $(\lambda,\Phi^{-1})$, or equivalently the beta-Poisson parameters $(\lambda,\Phi,\infty)$. For the purposes of this notebook it is useful for us to define separate functions calculating the negative binomial log likelihood in the $N\to\infty$ extreme of the beta-Poisson model, and as the likelihood function of the negative binomial model in its own right - hence the "`bp`" in the name of the function in the box immediately below.
```python
def beta_poisson_loglh(data,lmbd,phi,N):
return sum(beta_poisson.logpmf(data,lmbd,phi,N))
def neg_bin_bp_loglh(data,lmbd,phi):
return sum(stats.nbinom.logpmf(data,lmbd*phi,phi/(phi+1)))
```
Given a vector of count data, we can calculate the maximum likelihood parameters numerically by finding the minimum of the likelihood function. This is a two-dimensional problem, since the MLE of $\lambda$ is just the sample mean. The maximum likelihood estimation is carried out in the function `get_phi_and_N_mles`. Because $N$ can theoretically range up to infinity but its MLE is bounded below by that of $\lambda$, we optimise over $\nu=\frac{1}{N}$ rather than $N$. For values of $\nu$ less than $10^{-3}$ (i.e. values of $N$ greater than $1,000$) the optimiser takes `neg_bin_bp_loglh` as the function to optimise over rather than `beta_poisson_loglh`.
```python
def get_phi_and_N_mles(data,phi_0,N_0):
def f(params):
lmbd=np.mean(data)
phi=params[0]
if params[1]>0.1e-3:
N=1/params[1]
return -beta_poisson_loglh(data,lmbd,phi,N)
else:
return -neg_bin_bp_loglh(data,lmbd,phi)
mle=sp.optimize.minimize(f,[phi_0,N_0],bounds=((1e-6,50),(0,1/np.mean(data))))
if mle.x[1]<0:
mle.x[1]=0
return mle.x[0],mle.x[1]
```
In the supplementary material to the paper we show that the pgf of a beta-Poisson distributed random variable $X$ is
\begin{equation}
G_X(s)=M(\lambda\Phi,N\Phi,N(s-1)).
\end{equation}
This is calculated using the `pgf` method of the `beta_poisson` object defined above. From standard branching process theory, the extinction probability of a beta-Poisson epidemic is the solution to the equation
\begin{equation}
q=G(q),
\end{equation}
and can be calculated using the `extinction_prob` method of the `beta_poisson` object.
These functions are not used to calculate any of the results presented in our paper, but are included here for completeness.
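For illustration only (the parameter values are arbitrary and not taken from any of the datasets below), the extinction probability of a supercritical beta-Poisson epidemic can be evaluated and checked against the fixed-point equation:
```python
q = beta_poisson.extinction_prob(2, 0.5, 5)   # lambda=2, phi=0.5, N=5, illustrative only
print(q, beta_poisson.pgf(q, 2, 0.5, 5))      # the two values should agree, since q = G(q)
```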
The functions in the following block carry out the equivalent calculations (log likelihood, maximum likelihood estimates, pgf, and extinction probability) for the Poisson, geometric, negative binomial, and ZIP calculations.
The function `nbinom.pmf` in the `scipy.stats` package uses a parameterisation in terms of parameters $n$ and $p$. Under this parameterisation the mean and variance of the distribution are respectively given by $(1-p)n/p$ and $(1-p)n/p^2$. After some rearrangement, one can show that $n=\lambda/\theta$ and $p=1/(\theta+1)$, so that to calculate negative binomial probabilities (and likelihoods) in our $(\lambda,\theta)$ parameterisation we use `stats.nbinom.pmf(x,lmbd/theta,1/(theta+1))`.
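A quick check of this parameterisation (with arbitrary illustrative values): the resulting distribution should have mean $\lambda$ and variance $\lambda(1+\theta)$.
```python
lmbd_chk, theta_chk = 1.2, 0.8   # arbitrary illustrative values
mean_chk, var_chk = stats.nbinom.stats(lmbd_chk/theta_chk, 1/(theta_chk+1), moments='mv')
print(mean_chk, lmbd_chk)                 # both should be 1.2
print(var_chk, lmbd_chk*(1+theta_chk))    # both should be 1.2*(1+0.8) = 2.16
```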
```python
def poisson_loglh(data,lmbd):
return sum(stats.poisson.logpmf(data,lmbd))
def geo_loglh(data,lmbd):
return sum(stats.geom.logpmf(data,1/(lmbd+1),-1))
def neg_bin_loglh(data,lmbd,theta):
return sum(stats.nbinom.logpmf(data,lmbd/theta,1/(theta+1)))
def get_theta_mle(data,theta_0):
def f(theta):
lmbd=np.mean(data)
return -neg_bin_loglh(data,lmbd,theta)
mle=sp.optimize.minimize(f,[theta_0],bounds=((1e-6,50),))
return mle.x[0]
def zip_pmf(x,lmbd,sigma):
if type(x)==int:
return sigma*(x==0)+(1-sigma)*stats.poisson.pmf(x,lmbd)
else:
return sigma*np.equal(x,np.zeros(len(x)))+(1-sigma)*stats.poisson.pmf(x,lmbd)
def zip_loglh(data,lmbd,sigma):
llh=0
for x in data:
if x==0:
llh+=np.log(sigma+(1-sigma)*np.exp(-lmbd))
else:
llh+=np.log(1-sigma)+np.log(stats.poisson.pmf(x,lmbd))
return llh
def get_zip_mles(data,lmbd_0,sigma_0):
def f(params):
lmbd=params[0]
sigma=params[1]
return -zip_loglh(data,lmbd,sigma)
mle=sp.optimize.minimize(f,[lmbd_0,sigma_0],bounds=((np.mean(data),50),(0,1-1e-6)))
return mle.x[0],mle.x[1]
```
**Fits to transmission chain data**
In our paper we study the fitting behaviour of the beta-Poisson model by fitting it (and our other models of interest) to eight transmission chain datasets. We estimate maximum likelihood parameters and calculate the Akaike Information Criterion (AIC) for each fitted model. In the next cell we define a class whose objects contain the parameter fits, log likelihoods, and AICs of each model for a given dataset.
```python
class trans_chain_mles:
def __init__(self,
data):
self.data = data
self.mean = np.mean(data)
self.var = np.var(data)
theta_mle = get_theta_mle(data, self.mean)
lmbd_mle, sigma_mle = get_zip_mles(data, self.mean, 0.5)
phi_mle, Nu_mle=get_phi_and_N_mles(data, 1, 1/max(data))
if Nu_mle>1e-3:
beta_poi_var = self.mean * (1 + (1 - self.mean*Nu_mle)/(phi_mle + Nu_mle))
else:
beta_poi_var = self.mean * (1 + 1/phi_mle)
poisson_llh = poisson_loglh(data, self.mean)
geometric_llh = geo_loglh(data, self.mean)
neg_bin_llh = neg_bin_loglh(data, self.mean, theta_mle)
zip_llh = zip_loglh(data, lmbd_mle, sigma_mle)
if Nu_mle>1e-3:
beta_poi_llh = beta_poisson_loglh(data, self.mean, phi_mle, 1/Nu_mle)
else:
beta_poi_llh = neg_bin_loglh(data, self.mean, theta_mle)
self.poisson = {'lambda' : self.mean,
'mean' : self.mean,
'var' : self.mean,
'llh' : poisson_llh,
'AIC' : 2 - 2*poisson_llh}
self.geometric = {'lambda' : self.mean,
'mean' : self.mean,
'var' : self.mean * (1+self.mean),
'llh' : geometric_llh,
'AIC' : 2 - 2*geometric_llh}
self.neg_bin = {'lambda' : self.mean,
'theta' : theta_mle,
'mean' : self.mean,
'var' : self.mean * (1+theta_mle),
'llh' : neg_bin_llh,
'AIC' : 4 - 2*neg_bin_llh}
self.zip = {'lambda' : lmbd_mle,
'sigma' : sigma_mle,
'mean' : lmbd_mle * (1-sigma_mle),
'var' : lmbd_mle * (1-sigma_mle) * (1+lmbd_mle*sigma_mle),
'llh' : zip_llh,
'AIC' : 4 - 2*zip_llh}
self.beta_poi = {'lambda' : self.mean,
'Phi' : phi_mle,
'Nu' : Nu_mle,
'mean' : self.mean,
'var' : beta_poi_var,
'llh' : beta_poi_llh,
'AIC' : 6 - 2*beta_poi_llh}
```
The next cell introduces a function which plots the empirical and fitted secondary case distributions.
```python
def plot_chain_data_fits(fits, ax, title, fs=20):
xVals = range(max(fits.data)+1)
counts, bins = np.histogram(fits.data, max(fits.data)+1)
dist = counts / len(fits.data)
ax.plot(np.where(dist>0)[0], dist[dist>0], 'x', markersize=20)
poi_line = stats.poisson.pmf(xVals, fits.poisson['lambda'])
ax.plot(xVals, poi_line, lw=3)
geom_line = stats.geom.pmf(xVals, 1/(fits.geometric['lambda']+1),-1)
ax.plot(xVals, geom_line, lw=3)
neg_bin_line = stats.nbinom.pmf(xVals,
fits.neg_bin['lambda']/fits.neg_bin['theta'],
1/(fits.neg_bin['theta']+1))
ax.plot(xVals, neg_bin_line, lw=3)
zip_line = zip_pmf(xVals, fits.zip['lambda'], fits.zip['sigma'])
ax.plot(xVals, zip_line, lw=3)
if fits.beta_poi['Nu']>1e-3:
beta_poi_line = beta_poisson.pmf(xVals,
fits.beta_poi['lambda'],
fits.beta_poi['Phi'],
1/fits.beta_poi['Nu'])
ax.plot(xVals, beta_poi_line, lw=3)
else:
beta_poi_line = neg_bin_line
ax.axis([-0.5, max(fits.data)+0.5, 0, 1])
ax.set_aspect((max(fits.data)+1))
ax.set_xlabel('Secondary cases', fontsize=fs)
ax.set_ylabel('Probability', fontsize=fs)
ax.set_title(title, fontsize=fs)
```
In the following cell we define another plotting function, this one plotting the beta distribution which defines the individual-level variation in infectivity.
```python
def plot_underlying_beta(fits):
x = np.linspace(1e-2,1, 100)
y = stats.beta.pdf(x,
fits.beta_poi['lambda'] * fits.beta_poi['Phi'],
(1/fits.beta_poi['Nu'] - fits.beta_poi['lambda']) * fits.beta_poi['Phi'])
fig,ax = plt.subplots(figsize=(10,10))
plt.plot(x, y,'r-', lw=5, alpha=0.6, label='beta pdf')
ax.axis([-0.01, 1.01, 0, 1.01*np.max(y)])
ax.set_aspect(1.02/(1.01*np.max(y)))
plt.xlabel('Transmission probability',fontsize=25)
plt.ylabel('PDF',fontsize=25)
plt.xticks(fontsize=25)
plt.yticks(fontsize=25)
plt.show()
```
**Starting example: Plague**<br/>
From [Gani and Leach 2004](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3323083/pdf/03-0509.pdf).
To begin with, we calculate the maximum likelihood parameters and associated AIC for each model:
```python
gani_data = [0]*16+[1]*10+[2]*7+[3]*2+[4]*3+[5]+[6]
gani_fits = trans_chain_mles(gani_data)
np.set_printoptions(precision=3)
print('Distribution | Lambda | Theta | Sigma |Nu=1/N | Phi | Log Likelihood | AIC |')
print('__________________________________________________________________________________________')
print('Poisson |', "%.3f"% gani_fits.poisson['lambda'],' | | | | | ', "%.3f"% gani_fits.poisson['llh'],' |', "%.3f"% gani_fits.poisson['AIC'])
print('Geometric |', "%.3f"% gani_fits.geometric['lambda'],' | | | | | ', "%.3f"% gani_fits.geometric['llh'],' |', "%.3f"% gani_fits.geometric['AIC'])
print('Negative binomial |', "%.3f"% gani_fits.neg_bin['lambda'],' |', "%.3f"% gani_fits.neg_bin['theta'],'| | | | ', "%.3f"% gani_fits.neg_bin['llh'],' |', "%.3f"% gani_fits.neg_bin['AIC'])
print('Zero-inflated Poisson |', "%.3f"% gani_fits.zip['lambda'],' | |', "%.3f"% gani_fits.zip['sigma'],'| | | ', "%.3f"% gani_fits.zip['llh'],' |', "%.3f"% gani_fits.zip['AIC'])
print('Beta Poisson |', "%.3f"% gani_fits.beta_poi['lambda'],' | | |', "%.3f"% gani_fits.beta_poi['Nu'],'|', "%.3f"% gani_fits.beta_poi['Phi'],'| ', "%.3f"% gani_fits.beta_poi['llh'],' |', "%.3f"% gani_fits.beta_poi['AIC'])
```
Distribution | Lambda | Theta | Sigma |Nu=1/N | Phi | Log Likelihood | AIC |
__________________________________________________________________________________________
Poisson | 1.325 | | | | | -67.422 | 136.843
Geometric | 1.325 | | | | | -63.551 | 129.102
Negative binomial | 1.325 | 0.923 | | | | -63.312 | 130.624
Zero-inflated Poisson | 1.867 | | 0.290 | | | -63.945 | 131.890
Beta Poisson | 1.325 | | | 0.248 | 0.583 | -63.120 | 132.240
Plotting the data and fitted models demonstrates that the Poisson fails to capture the high incidence of zeros in the data:
```python
fig, ax = plt.subplots(figsize=(10,10))
plot_chain_data_fits(gani_fits, ax, 'Plague')
ax.legend(['Data','Poisson','Geometric','Negative binomial','ZIP','Beta-Poisson'],
bbox_to_anchor=(1.05,1),
loc='upper left')
plt.show()
```
We can also visualise the distribution of infectivity across cases in the fitted beta-Poisson by plotting the underlying beta distribution:
```python
plot_underlying_beta(gani_fits)
```
The fitting behaviour of the beta-Poisson model can be visualised by calculating the likelihood function of each parameter with the other two fixed at their MLEs. The slow decay of the likelihood function associated with increasing values of $\Phi$ is consistent with the relatively high likelihood assigned to the Poisson distribution, which means that models with very low levels of overdispersion (i.e. large values of $\Phi$) can give a good fit.
```python
true_lmbd, true_phi, true_N = gani_fits.mean, gani_fits.beta_poi['Phi'], 1/gani_fits.beta_poi['Nu']
data = gani_data
lmbd_lh_array=beta_poisson_loglh(data,np.linspace(0.01,5,500),true_phi,true_N)
phi_lh_array=np.zeros(1000)
for i in range(1000):
phi_lh_array[i]=beta_poisson_loglh(data,true_lmbd,(i+1)*1e-2,true_N)
n_lh_array=np.zeros(750)
n_lh_array[0]=neg_bin_bp_loglh(data,true_lmbd,true_phi)
for i in range(1,750):
n_lh_array[i]=beta_poisson_loglh(data,true_lmbd,true_phi,1/(i/1000))
fig,ax=plt.subplots(figsize=(8,8))
ax.plot(np.linspace(0.01,5,500),lmbd_lh_array,'k',linewidth=6)
ax.plot([true_lmbd,true_lmbd],ax.get_ylim(),'--b',linewidth=6)
plt.xlabel('$\lambda$',fontsize=40)
plt.ylabel('Log likelihood',fontsize=40)
plt.xticks(fontsize=40)
plt.yticks(fontsize=40)
plt.show()
fig,ax=plt.subplots(figsize=(8,8))
ax.plot(np.linspace(0.01,10,1000),phi_lh_array,'k',linewidth=6)
ax.plot([true_phi,true_phi],ax.get_ylim(),'--b',linewidth=6)
plt.xlabel('$\Phi$',fontsize=40)
plt.ylabel('Log likelihood',fontsize=40)
plt.xticks(fontsize=40)
plt.yticks(fontsize=40)
plt.show()
fig,ax=plt.subplots(figsize=(8,8))
ax.plot(np.linspace(0.0,0.749,750),n_lh_array,'k',linewidth=6)
ax.plot([1/true_N,1/true_N],ax.get_ylim(),'--b',linewidth=6)
plt.xlabel('$\\nu$',fontsize=40)
plt.ylabel('Log likelihood',fontsize=40)
plt.xticks(fontsize=30)
plt.yticks(fontsize=40)
plt.show()
```
## More datasets
In the cell below we define some more datasets and fit maximum likelihood parameters to each of them for each model.
```python
jezek_data_g1=[0]*114+[1]*23+[2]*8+[3]*1+[5]*1 # First generation cases
jezek_data_g2=[0]*38+[1]*7+[2]*1+[3]*1 # Second generation cases
jezek_data_g3=[0]*9+[1]+[2]
jezek_data_g4=[0]*2+[1]
jezek_data=jezek_data_g1+jezek_data_g2+jezek_data_g3+jezek_data_g4
faye_data=[1,2,2,5,14,1,4,4,1,3,3,8,2,1,1,4,9,9,1,1,17,2,1,1,1,4,3,3,4,2,5,1,2,2,1,9,1,3,1,2,1,1,2]
faye_data=faye_data+[0]*(152-len(faye_data))
fasina_data=[0]*15+[1]*2+[2]+[3]+[12]
leo_data=[0]*162+[1]*19+[2]*8+[3]*7+[7]+[12]+[21]+[23]+[40]
cowling_data=[38,3,2,1,6,81,2,23,2,1,1,1,5,1,1,1,2,1,1,1]
cowling_data=cowling_data+[0]*(166-len(cowling_data))
chowell_data=[0]*13+[1]*5+[2]*4+[3]+[7]
heijne_data=[0]*22+[1]*13+[2]*6+[3]*3+[4]+[5]
jezek_fits = trans_chain_mles(jezek_data)
faye_fits = trans_chain_mles(faye_data)
fasina_fits = trans_chain_mles(fasina_data)
leo_fits = trans_chain_mles(leo_data)
cowling_fits = trans_chain_mles(cowling_data)
chowell_fits = trans_chain_mles(chowell_data)
heijne_fits = trans_chain_mles(heijne_data)
```
```python
print('Dataset | Empirical | Poisson | Geometric | Negative binomial | ZIP | Beta-Poisson ')
print('Gani | (',"%.3f"% gani_fits.mean,',',"%.3f"% gani_fits.var,') | (',"%.3f"% gani_fits.poisson['mean'],',',"%.3f"% gani_fits.poisson['var'],') | (',"%.3f"% gani_fits.geometric['mean'],',',"%.3f"% gani_fits.geometric['var'],') | (',"%.3f"% gani_fits.neg_bin['mean'],',',"%.3f"% gani_fits.neg_bin['var'],') | (',"%.3f"% gani_fits.zip['mean'],',',"%.3f"% gani_fits.zip['var'],') | (',"%.3f"% gani_fits.beta_poi['mean'],',',"%.3f"% gani_fits.beta_poi['var'],') ')
print('Jezek | (',"%.3f"% jezek_fits.mean,',',"%.3f"% jezek_fits.var,') | (',"%.3f"% jezek_fits.poisson['mean'],',',"%.3f"% jezek_fits.poisson['var'],') | (',"%.3f"% jezek_fits.geometric['mean'],',',"%.3f"% jezek_fits.geometric['var'],') | (',"%.3f"% jezek_fits.neg_bin['mean'],',',"%.3f"% jezek_fits.neg_bin['var'],') | (',"%.3f"% jezek_fits.zip['mean'],',',"%.3f"% jezek_fits.zip['var'],') | (',"%.3f"% jezek_fits.beta_poi['mean'],',',"%.3f"% jezek_fits.beta_poi['var'],') ')
print('Faye | (',"%.3f"% faye_fits.mean,',',"%.3f"% faye_fits.var,') | (',"%.3f"% faye_fits.poisson['mean'],',',"%.3f"% faye_fits.poisson['var'],') | (',"%.3f"% faye_fits.geometric['mean'],',',"%.3f"% faye_fits.geometric['var'],') | (',"%.3f"% faye_fits.neg_bin['mean'],',',"%.3f"% faye_fits.neg_bin['var'],') | (',"%.3f"% faye_fits.zip['mean'],',',"%.3f"% faye_fits.zip['var'],') | (',"%.3f"% faye_fits.beta_poi['mean'],',',"%.3f"% faye_fits.beta_poi['var'],') ')
print('Fasina | (',"%.3f"% fasina_fits.mean,',',"%.3f"% fasina_fits.var,') | (',"%.3f"% fasina_fits.poisson['mean'],',',"%.3f"% fasina_fits.poisson['var'],') | (',"%.3f"% fasina_fits.geometric['mean'],',',"%.3f"% fasina_fits.geometric['var'],') | (',"%.3f"% fasina_fits.neg_bin['mean'],',',"%.3f"% fasina_fits.neg_bin['var'],') | (',"%.3f"% fasina_fits.zip['mean'],',',"%.3f"% fasina_fits.zip['var'],') | (',"%.3f"% fasina_fits.beta_poi['mean'],',',"%.3f"% fasina_fits.beta_poi['var'],') ')
print('Leo | (',"%.3f"% leo_fits.mean,',',"%.3f"% leo_fits.var,') | (',"%.3f"% leo_fits.poisson['mean'],',',"%.3f"% leo_fits.poisson['var'],') | (',"%.3f"% leo_fits.geometric['mean'],',',"%.3f"% leo_fits.geometric['var'],') | (',"%.3f"% leo_fits.neg_bin['mean'],',',"%.3f"% leo_fits.neg_bin['var'],') | (',"%.3f"% leo_fits.zip['mean'],',',"%.3f"% leo_fits.zip['var'],') | (',"%.3f"% leo_fits.beta_poi['mean'],',',"%.3f"% leo_fits.beta_poi['var'],') ')
print('Cowling | (',"%.3f"% cowling_fits.mean,',',"%.3f"% cowling_fits.var,') | (',"%.3f"% cowling_fits.poisson['mean'],',',"%.3f"% cowling_fits.poisson['var'],') | (',"%.3f"% cowling_fits.geometric['mean'],',',"%.3f"% cowling_fits.geometric['var'],') | (',"%.3f"% cowling_fits.neg_bin['mean'],',',"%.3f"% cowling_fits.neg_bin['var'],') | (',"%.3f"% cowling_fits.zip['mean'],',',"%.3f"% cowling_fits.zip['var'],') | (',"%.3f"% cowling_fits.beta_poi['mean'],',',"%.3f"% cowling_fits.beta_poi['var'],') ')
print('Chowell | (',"%.3f"% chowell_fits.mean,',',"%.3f"% chowell_fits.var,') | (',"%.3f"% chowell_fits.poisson['mean'],',',"%.3f"% chowell_fits.poisson['var'],') | (',"%.3f"% chowell_fits.geometric['mean'],',',"%.3f"% chowell_fits.geometric['var'],') | (',"%.3f"% chowell_fits.neg_bin['mean'],',',"%.3f"% chowell_fits.neg_bin['var'],') | (',"%.3f"% chowell_fits.zip['mean'],',',"%.3f"% chowell_fits.zip['var'],') | (',"%.3f"% chowell_fits.beta_poi['mean'],',',"%.3f"% chowell_fits.beta_poi['var'],') ')
print('Heijne | (',"%.3f"% heijne_fits.mean,',',"%.3f"% heijne_fits.var,') | (',"%.3f"% heijne_fits.poisson['mean'],',',"%.3f"% heijne_fits.poisson['var'],') | (',"%.3f"% heijne_fits.geometric['mean'],',',"%.3f"% heijne_fits.geometric['var'],') | (',"%.3f"% heijne_fits.neg_bin['mean'],',',"%.3f"% heijne_fits.neg_bin['var'],') | (',"%.3f"% heijne_fits.zip['mean'],',',"%.3f"% heijne_fits.zip['var'],') | (',"%.3f"% heijne_fits.beta_poi['mean'],',',"%.3f"% heijne_fits.beta_poi['var'],') ')
```
```python
# LaTeX-formatted version of the table above; each row ends with a LaTeX line break (\\)
fits_by_dataset = {'Gani': gani_fits, 'Jezek': jezek_fits, 'Faye': faye_fits,
                   'Fasina': fasina_fits, 'Leo': leo_fits, 'Cowling': cowling_fits,
                   'Chowell': chowell_fits, 'Heijne': heijne_fits}
print('Dataset & Empirical & Poisson & Geometric & Negative binomial & ZIP & Beta-Poisson \\\\')
print(' & Mean & Variance & Mean & Variance & Mean & Variance & Mean & Variance & Mean & Variance & Mean & Variance \\\\')
for name, fits in fits_by_dataset.items():
    row = name + ' & ' + "%.3f" % fits.mean + ' & ' + "%.3f" % fits.var
    for model in (fits.poisson, fits.geometric, fits.neg_bin, fits.zip, fits.beta_poi):
        row += ' & ' + "%.3f" % model['mean'] + ' & ' + "%.3f" % model['var']
    print(row + ' \\\\')
```
```python
print('Dataset | Poisson | Geometric | Negative binomial ')
print('__________________________________________________')
print('Gani | ',"%.3f"% np.exp(gani_fits.poisson['llh'] - gani_fits.beta_poi['llh']),' | ',"%.3f"% np.exp(gani_fits.geometric['llh'] - gani_fits.beta_poi['llh']),' | ',"%.3f"% np.exp(gani_fits.neg_bin['llh'] - gani_fits.beta_poi['llh']))
print('Jezek | ',"%.3f"% np.exp(jezek_fits.poisson['llh'] - jezek_fits.beta_poi['llh']),' | ',"%.3f"% np.exp(jezek_fits.geometric['llh'] - jezek_fits.beta_poi['llh']),' | ',"%.3f"% np.exp(jezek_fits.neg_bin['llh'] - jezek_fits.beta_poi['llh']))
print('Faye | ',"%.3f"% np.exp(faye_fits.poisson['llh'] - faye_fits.beta_poi['llh']),' | ',"%.3f"% np.exp(faye_fits.geometric['llh'] - faye_fits.beta_poi['llh']),' | ',"%.3f"% np.exp(faye_fits.neg_bin['llh'] - faye_fits.beta_poi['llh']))
print('Fasina | ',"%.3f"% np.exp(fasina_fits.poisson['llh'] - fasina_fits.beta_poi['llh']),' | ',"%.3f"% np.exp(fasina_fits.geometric['llh'] - fasina_fits.beta_poi['llh']),' | ',"%.3f"% np.exp(fasina_fits.neg_bin['llh'] - fasina_fits.beta_poi['llh']))
print('Leo | ',"%.3f"% np.exp(leo_fits.poisson['llh'] - leo_fits.beta_poi['llh']),' | ',"%.3f"% np.exp(leo_fits.geometric['llh'] - leo_fits.beta_poi['llh']),' | ',"%.3f"% np.exp(leo_fits.neg_bin['llh'] - leo_fits.beta_poi['llh']))
print('Cowling | ',"%.3f"% np.exp(cowling_fits.poisson['llh'] - cowling_fits.beta_poi['llh']),' | ',"%.3f"% np.exp(cowling_fits.geometric['llh'] - cowling_fits.beta_poi['llh']),' | ',"%.3f"% np.exp(cowling_fits.neg_bin['llh'] - cowling_fits.beta_poi['llh']))
print('Chowell | ',"%.3f"% np.exp(chowell_fits.poisson['llh'] - chowell_fits.beta_poi['llh']),' | ',"%.3f"% np.exp(chowell_fits.geometric['llh'] - chowell_fits.beta_poi['llh']),' | ',"%.3f"% np.exp(chowell_fits.neg_bin['llh'] - chowell_fits.beta_poi['llh']))
print('Heijne | ',"%.3f"% np.exp(heijne_fits.poisson['llh'] - heijne_fits.beta_poi['llh']),' | ',"%.3f"% np.exp(heijne_fits.geometric['llh'] - heijne_fits.beta_poi['llh']),' | ',"%.3f"% np.exp(heijne_fits.neg_bin['llh'] - heijne_fits.beta_poi['llh']))
```
```python
print('Dataset | Nu')
print('____________________________')
print('Gani |', "%.3f"% gani_fits.beta_poi['Nu'],'= 1 /',"%.3f"% (1/gani_fits.beta_poi['Nu']))
print('Jezek |', "%.3f"% jezek_fits.beta_poi['Nu'],'= 1 /',"%.3f"% (1/jezek_fits.beta_poi['Nu']))
print('Faye |', "%.3f"% faye_fits.beta_poi['Nu'],'= 1 /',"%.3f"% (1/faye_fits.beta_poi['Nu']))
print('Fasina |', "%.3f"% fasina_fits.beta_poi['Nu'],'= 1 /',"%.3f"% (1/fasina_fits.beta_poi['Nu']))
print('Leo |', "%.3f"% leo_fits.beta_poi['Nu'],'= 1 /',"%.3f"% (1/leo_fits.beta_poi['Nu']))
print('Cowling |', "%.3f"% cowling_fits.beta_poi['Nu'],'= 1 /',"%.3f"% (1/cowling_fits.beta_poi['Nu']))
print('Chowell |', "%.3f"% chowell_fits.beta_poi['Nu'],'= 1 /',"%.3f"% (1/chowell_fits.beta_poi['Nu']))
print('Heijne |', "%.3f"% heijne_fits.beta_poi['Nu'],'= 1 /',"%.3f"% (1/heijne_fits.beta_poi['Nu']))
```
```python
fig = plt.figure(figsize=(50,20))
gani_ax = plt.subplot2grid((2,4),(0,0))
jezek_ax = plt.subplot2grid((2,4),(0,1))
faye_ax = plt.subplot2grid((2,4),(0,2))
fasina_ax = plt.subplot2grid((2,4),(0,3))
leo_ax = plt.subplot2grid((2,4),(1,0))
cowling_ax = plt.subplot2grid((2,4),(1,1))
chowell_ax = plt.subplot2grid((2,4),(1,2))
heijne_ax = plt.subplot2grid((2,4),(1,3))
plot_chain_data_fits(gani_fits, gani_ax, 'Plague, Gani and Leach',30)
plot_chain_data_fits(jezek_fits, jezek_ax, 'Monkeypox, Jezek et al.',30)
plot_chain_data_fits(faye_fits, faye_ax, 'Ebola, Faye et al.',30)
plot_chain_data_fits(fasina_fits, fasina_ax, 'Ebola, Fasina et al.',30)
plot_chain_data_fits(leo_fits, leo_ax, 'SARS, Leo et al.',30)
plot_chain_data_fits(cowling_fits, cowling_ax, 'MERS-CoV, Cowling et al.',30)
plot_chain_data_fits(chowell_fits, chowell_ax, 'MERS-CoV, Chowell et al.',30)
plot_chain_data_fits(heijne_fits, heijne_ax, 'Norovirus, Heijne et al.',30)
fasina_ax.legend(['Data','Poisson','Geometric','Negative binomial','ZIP','Beta-Poisson'],
bbox_to_anchor=(1.05,1),
loc='upper left',
fontsize=30)
plt.tight_layout()
plt.show()
```
## Confidence intervals
The bootstrap sampling used to calculate confidence intervals for the negative binomial and beta-Poisson models is very time consuming when large numbers of samples are used.
```python
def poisson_confidence_intervals(data,interval,points):
lmbd=np.linspace(interval[0],interval[1],points)
llh=np.zeros(points)
for x in data:
llh += stats.poisson.logpmf(x,lmbd)
mle_loc=np.argmax(llh)
lh_normed=np.exp(llh)/np.sum(np.exp(llh))
current_max=lh_normed[mle_loc]
interval_weight=current_max
while interval_weight<0.95:
max_loc=np.argmax(lh_normed[np.where(lh_normed<current_max)])
current_max=lh_normed[np.where(lh_normed<current_max)][max_loc]
interval_weight+=current_max
ci=[np.min(lmbd[np.where(lh_normed>=current_max)[0]]),np.max(lmbd[np.where(lh_normed>=current_max)[0]])]
return ci
def geometric_confidence_intervals(data,interval,points):
lmbd=np.linspace(interval[0],interval[1],points)
llh=np.zeros(points)
for x in data:
llh += stats.geom.logpmf(x,1/(lmbd+1),-1)
mle_loc=np.argmax(llh)
lh_normed=np.exp(llh)/np.sum(np.exp(llh))
current_max=lh_normed[mle_loc]
interval_weight=current_max
while interval_weight<0.95:
max_loc=np.argmax(lh_normed[np.where(lh_normed<current_max)])
current_max=lh_normed[np.where(lh_normed<current_max)][max_loc]
interval_weight+=current_max
ci=[np.min(lmbd[np.where(lh_normed>=current_max)[0]]),np.max(lmbd[np.where(lh_normed>=current_max)[0]])]
return ci
def zip_confidence_intervals(data,lmbd_interval,lmbd_points,sigma_points):
lmbd=np.linspace(lmbd_interval[0],lmbd_interval[1],lmbd_points)
sigma=np.linspace(0,1,sigma_points)
lmbdgrid,sigmagrid=np.meshgrid(lmbd,sigma)
mean_grid = np.multiply(lmbdgrid, (1-sigmagrid))
var_grid = np.multiply(lmbdgrid, np.multiply((1-sigmagrid),(1+np.multiply(lmbdgrid,sigmagrid))))
llh=np.zeros(lmbdgrid.shape)
for x in data:
llh += zip_loglh([x],lmbdgrid,sigmagrid)
mle_loc=np.unravel_index(np.argmax(llh),llh.shape)
lh_normed=np.exp(llh)/np.sum(np.exp(llh))
current_max=lh_normed[mle_loc]
interval_weight=current_max
while interval_weight<0.95:
max_loc=np.unravel_index(np.argmax(lh_normed),lh_normed.shape)
current_max=lh_normed[max_loc]
lh_normed[max_loc]=0
interval_weight+=current_max
lh_normed=np.exp(llh)/np.sum(np.exp(llh))
lmbd_ci=[np.min(lmbdgrid[np.where(lh_normed>=current_max)]),np.max(lmbdgrid[np.where(lh_normed>=current_max)])]
sigma_ci=[np.min(sigmagrid[np.where(lh_normed>=current_max)]),np.max(sigmagrid[np.where(lh_normed>=current_max)])]
mean_ci=[np.min(mean_grid[np.where(lh_normed>=current_max)]),np.max(mean_grid[np.where(lh_normed>=current_max)])]
var_ci=[np.min(var_grid[np.where(lh_normed>=current_max)]),np.max(var_grid[np.where(lh_normed>=current_max)])]
return lmbd_ci,sigma_ci,mean_ci, var_ci
```
```python
def neg_bin_bootstrap(data,no_samples,theta_0):
sample_size=np.size(data)
lmbd_samples = []
theta_samples = []
print('Now calculating',no_samples,'bootstrap samples.')
start_time=time.time()
for i in range(no_samples):
data_now=random.choices(data,k=sample_size)
lmbd_samples = lmbd_samples + [np.mean(data_now)]
theta_samples = theta_samples + [get_theta_mle(data_now,theta_0)]
if ((i+1)%1000)==0:
print('Sample',i+1,'of',no_samples,'completed.',time.time()-start_time,'seconds elapsed, approximately',(no_samples-i-1)*(time.time()-start_time)/(i+1),'remaining.')
lmbd_samples = np.array(lmbd_samples)
theta_samples = np.array(theta_samples)
var_samples=np.multiply(lmbd_samples,(1+theta_samples))
lmbd_ci=[np.percentile(lmbd_samples,2.5),np.percentile(lmbd_samples,97.5)]
theta_ci=[np.percentile(theta_samples,2.5),np.percentile(theta_samples,97.5)]
var_ci=[np.percentile(var_samples,2.5),np.percentile(var_samples,97.5)]
return lmbd_ci,lmbd_samples,theta_ci,theta_samples,var_ci,var_samples
```
```python
def beta_poisson_bootstrap(data,no_samples,phi_0,N_0):
sample_size=np.size(data)
lmbd_samples = []
Phi_samples = []
Nu_samples = []
print('Now calculating',no_samples,'bootstrap samples.')
start_time=time.time()
for i in range(no_samples):
data_now=random.choices(data,k=sample_size)
lmbd_samples = lmbd_samples + [np.mean(data_now)]
Phi_now, Nu_now=get_phi_and_N_mles(data_now,phi_0,N_0)
Phi_samples = Phi_samples + [Phi_now]
Nu_samples = Nu_samples + [Nu_now]
if ((i+1)%1000)==0:
print('Sample',i+1,'of',no_samples,'completed.',time.time()-start_time,'seconds elapsed, approximately',(no_samples-i-1)*(time.time()-start_time)/(i+1),'remaining.')
lmbd_samples = np.array(lmbd_samples)
Phi_samples = np.array(Phi_samples)
Nu_samples = np.array(Nu_samples)
var_samples=lmbd_samples*(1+(1-lmbd_samples*Nu_samples)/(Phi_samples+Nu_samples))
lmbd_ci=[np.percentile(lmbd_samples,2.5),np.percentile(lmbd_samples,97.5)]
Phi_ci=[np.percentile(Phi_samples,2.5),np.percentile(Phi_samples,97.5)]
Nu_ci=[np.percentile(Nu_samples,2.5),np.percentile(Nu_samples,97.5)]
var_ci=[np.percentile(var_samples,2.5),np.percentile(var_samples,97.5)]
return lmbd_ci,lmbd_samples,Phi_ci,Phi_samples,Nu_ci,Nu_samples,var_ci,var_samples
```
```python
class confidence_intervals:
def __init__(self,data):
self.data = data
self.mean = np.mean(data)
self.max = np.max(data)
def poisson(self,sfs):
points = 1 + (10**sfs)*self.max
lmbd_ci = poisson_confidence_intervals(self.data,[0,self.max],points)
return {'lmbd' : lmbd_ci,
'mean' : lmbd_ci,
'var' : lmbd_ci
}
def geometric(self,sfs):
points = 1 + (10**sfs)*self.max
lmbd_ci = geometric_confidence_intervals(self.data,[0,self.max],points)
var_ci = [lmbd_ci[0]*(1+lmbd_ci[0]), lmbd_ci[1]*(1+lmbd_ci[1])]
return {'lmbd' : lmbd_ci,
'mean' : lmbd_ci,
'var' : var_ci
}
def zip(self,sfs):
lmbd_points = 1 + (10**sfs)*self.max
sigma_points = 1 + 10**sfs
lmbd_ci, sigma_ci, mean_ci, var_ci = zip_confidence_intervals(self.data,[0,self.max],lmbd_points,sigma_points)
return {'lmbd' : lmbd_ci,
'sigma' : sigma_ci,
'mean' : mean_ci,
'var' : var_ci
}
def neg_bin(self,no_samples):
lmbd_ci,lmbd_samples,theta_ci,theta_samples,var_ci,var_samples = neg_bin_bootstrap(self.data,no_samples,self.mean)
return {'lmbd' : lmbd_ci,
'theta' : theta_ci,
'mean' : lmbd_ci,
'var' : var_ci}
def beta_poisson(self,no_samples):
lmbd_ci,lmbd_samples,Phi_ci,Phi_samples,Nu_ci,Nu_samples,var_ci,var_samples = \
        beta_poisson_bootstrap(self.data,no_samples,1/self.mean,1/self.max)
return {'lmbd' : lmbd_ci,
'Phi' : Phi_ci,
'Nu' : Nu_ci,
'mean' : lmbd_ci,
'var' : var_ci}
```
```python
cis = confidence_intervals(gani_data)
gani_poisson_cis = cis.poisson(3)
gani_geometric_cis = cis.geometric(3)
gani_zip_cis = cis.zip(2)
gani_neg_bin_cis = cis.neg_bin(100)
gani_beta_poisson_cis = cis.beta_poisson(100)
cis = confidence_intervals(jezek_data)
jezek_poisson_cis = cis.poisson(3)
jezek_geometric_cis = cis.geometric(3)
jezek_zip_cis = cis.zip(2)
jezek_neg_bin_cis = cis.neg_bin(100)
jezek_beta_poisson_cis = cis.beta_poisson(100)
cis = confidence_intervals(faye_data)
faye_poisson_cis = cis.poisson(3)
faye_geometric_cis = cis.geometric(3)
faye_zip_cis = cis.zip(2)
faye_neg_bin_cis = cis.neg_bin(100)
faye_beta_poisson_cis = cis.beta_poisson(100)
cis = confidence_intervals(fasina_data)
fasina_poisson_cis = cis.poisson(3)
fasina_geometric_cis = cis.geometric(3)
fasina_zip_cis = cis.zip(2)
fasina_neg_bin_cis = cis.neg_bin(100)
fasina_beta_poisson_cis = cis.beta_poisson(100)
cis = confidence_intervals(leo_data)
leo_poisson_cis = cis.poisson(3)
leo_geometric_cis = cis.geometric(3)
leo_zip_cis = cis.zip(2)
leo_neg_bin_cis = cis.neg_bin(100)
leo_beta_poisson_cis = cis.beta_poisson(100)
cis = confidence_intervals(cowling_data)
cowling_poisson_cis = cis.poisson(3)
cowling_geometric_cis = cis.geometric(3)
cowling_zip_cis = cis.zip(2)
cowling_neg_bin_cis = cis.neg_bin(100)
cowling_beta_poisson_cis = cis.beta_poisson(100)
cis = confidence_intervals(chowell_data)
chowell_poisson_cis = cis.poisson(3)
chowell_geometric_cis = cis.geometric(3)
chowell_zip_cis = cis.zip(2)
chowell_neg_bin_cis = cis.neg_bin(100)
chowell_beta_poisson_cis = cis.beta_poisson(100)
cis = confidence_intervals(heijne_data)
heijne_poisson_cis = cis.poisson(3)
heijne_geometric_cis = cis.geometric(3)
heijne_zip_cis = cis.zip(2)
heijne_neg_bin_cis = cis.neg_bin(100)
heijne_beta_poisson_cis = cis.beta_poisson(100)
```
```python
```
###### Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2017 L.A. Barba, N.C. Clementi
# Get with the oscillations
So far, in this module of our course in _Engineering Computations_ you have learned to:
* capture time histories of a body's position from images and video;
* compute velocity and acceleration of a body, from known positions over time—i.e., take numerical derivatives;
* find the motion description (position versus time) from acceleration data, stepping in time with Euler's method;
* form the state vector and the vectorized form of a second-order dynamical system;
* improve the simple free-fall model by adding air resistance.
You also learned that Euler's method is a _first-order_ method: a Taylor series expansion shows that stepping in time with Euler makes an error—called the _truncation error_—proportional to the time increment, $\Delta t$.
In this lesson, you'll work with oscillating systems. Euler's method doesn't do very well with oscillating systems, but we'll show you a clever way to fix this. (The modified method is _still_ first order, however.) We will also confirm the **order of convergence** by computing the error using different values of $\Delta t$.
As always, we will need our best-loved numerical Python libraries, and we'll also re-use the `eulerstep()` function from the [previous lesson](http://go.gwu.edu/engcomp3lesson2). So let's get that out of the way.
```python
import numpy
from matplotlib import pyplot
%matplotlib inline
pyplot.rc('font', family='serif', size='14')
```
```python
def eulerstep(state, rhs, dt):
'''Update a state to the next time increment using Euler's method.
Arguments
---------
state : array of dependent variables
rhs : function that computes the RHS of the DiffEq
dt : float, time increment
Returns
-------
next_state : array, updated after one time increment'''
next_state = state + rhs(state) * dt
return next_state
```
## Spring-mass system
A prototypical mechanical system is a mass $m$ attached to a spring, in the simplest case without friction. The elastic constant of the spring, $k$, determines the restoring force it will apply to the mass when displaced by a distance $x$. The system then oscillates back and forth around its position of equilibrium.
#### Simple spring-mass system, without friction.
Newton's law applied to the friction-less spring-mass system is:
\begin{equation}
-k x = m \ddot{x}
\end{equation}
Introducing the parameter $\omega = \sqrt{k/m}$, the equation of motion is rewriten as:
\begin{equation}
\ddot{x} + \omega^2 x = 0
\end{equation}
where a dot above a dependent variable denotes the time derivative. This is a second-order differential equation for the position $x$, having a known analytical solution that represents _simple harmonic motion_:
$x(t) = x_0 \cos(\omega t)$
The solution represents oscillations with period $P = 2 \pi/ \omega $ (the time between two peaks), and amplitude $x_0$.
### System in vector form
It's useful to write a second-order differential equation as a set of two first-order equations: in this case, for position and velocity, respectively:
\begin{eqnarray}
\dot{x} &=& v \nonumber\\
\dot{v} &=& -\omega^2 x
\end{eqnarray}
Like we did in [Lesson 2](http://go.gwu.edu/engcomp3lesson2) of this module, we write the state of the system as a two-dimensional vector,
\begin{equation}
\mathbf{x} = \begin{bmatrix}
x \\ v
\end{bmatrix},
\end{equation}
and the differential equation in vector form:
\begin{equation}
\dot{\mathbf{x}} = \begin{bmatrix}
v \\ -\omega^2 x
\end{bmatrix}.
\end{equation}
Several advantages come from writing the differential equation in vector form, both theoretical and practical. In the study of dynamical systems, for example, the state vector lives in a state space called the _phase plane_, and many things can be learned from studying solutions to differential equations graphically on a phase plane.
Practically, writing the equation in vector form results in more general, compact code. Let's write a function to obtain the right-hand side of the spring-mass differential equation, in vector form.
```python
def springmass(state):
'''Computes the right-hand side of the spring-mass differential
equation, without friction.
Arguments
---------
state : array of two dependent variables [x v]^T
Returns
-------
derivs: array of two derivatives [v - ω*ω*x]^T
'''
derivs = numpy.array([state[1], -ω**2*state[0]])
return derivs
```
This worked example follows Reference [1], section 4.3 (note that the source is open access). We set the parameters of the system, choose a time interval equal to 1-20th of the oscillation period, and decide to solve the motion for a duration equal to 3 periods.
```python
ω = 2
period = 2*numpy.pi/ω
dt = period/20 # we choose 20 time intervals per period
T = 3*period # solve for 3 periods
N = round(T/dt)
```
```python
print(N)
print(dt)
```
60
0.15707963267948966
Next, set up the time array and initial conditions, initialize the solution array with zero values, and assign the initial values to the first elements of the solution array.
```python
t = numpy.linspace(0, T, N)
```
```python
x0 = 2 # initial position
v0 = 0 # initial velocity
```
```python
#initialize solution array
num_sol = numpy.zeros([N,2])
```
```python
#Set initial conditions
num_sol[0,0] = x0
num_sol[0,1] = v0
```
We're ready to solve! Step through the time increments, calling the `eulerstep()` function with the `springmass` right-hand-side derivatives and time increment as inputs.
```python
for i in range(N-1):
num_sol[i+1] = eulerstep(num_sol[i], springmass, dt)
```
Now, let's compute the position with respect to time using the known analytical solution, so that we can compare the numerical result with it. Below, we make a plot including both numerical and analytical values in our chosen time range.
```python
x_an = x0*numpy.cos(ω * t)
```
```python
# plot solution with Euler's method
fig = pyplot.figure(figsize=(6,4))
pyplot.plot(t, num_sol[:, 0], linewidth=2, linestyle='--', label='Numerical solution')
pyplot.plot(t, x_an, linewidth=1, linestyle='-', label='Analytical solution')
pyplot.xlabel('Time [s]')
pyplot.ylabel('$x$ [m]')
pyplot.title('Spring-mass system with Euler\'s method (dashed line).\n');
```
Yikes! The numerical solution exhibits a marked growth in amplitude over time, which certainly is not what the physical system displays. _What is wrong with Euler's method?_
##### Exercise:
* Try repeating the calculation above using smaller values of the time increment, `dt`, and see if the results improve. Try `dt=P/40`, `P/160` and `P/2000`.
* Although the last case, with 2000 steps per oscillation, does look good enough, see what happens if you then increase the time of simulation, for example to 20 periods. —Run the case again: _What do you see now?_
We consistently observe a growth in amplitude in the numerical solution, worsening over time. The solution does improve when we reduce the time increment `dt` (as it should), but the amplitude still displays unphysical growth for longer simulations.
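In case you prefer to script the exercise instead of editing the cells above by hand, here is one possible sketch. It reuses `period`, `T`, `x0`, `v0`, `eulerstep()` and `springmass()` as defined above; the divisors are simply the ones suggested in the exercise.
```python
# Sketch: rerun Euler's method for several time increments and report
# the largest position amplitude reached during the simulation.
for divisor in (20, 40, 160, 2000):
    dt = period/divisor
    N = round(T/dt)
    num_sol = numpy.zeros([N, 2])
    num_sol[0, 0] = x0
    num_sol[0, 1] = v0
    for i in range(N-1):
        num_sol[i+1] = eulerstep(num_sol[i], springmass, dt)
    print('dt = period/{}: max |x| = {:.3f}'.format(divisor, numpy.abs(num_sol[:, 0]).max()))
```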
## Euler-Cromer method
The thing is, Euler's method has a fundamental problem with oscillatory systems. Look again at the approximation made by Euler's method to get the position at the next time interval:
\begin{equation}
x(t_i+\Delta t) \approx x(t_i) + v(t_i) \Delta t
\end{equation}
It uses the velocity value at the _beginning_ of the time interval to step the solution to the future.
A graphical explanation can help here. Remember that the derivative of a function corresponds to the slope of the tangent at a point. Euler's method approximates the derivative using the slope at the initial point in an interval, and advances the numerical position with that initial velocity. The sketch below illustrates two consecutive Euler steps on a function with high curvature.
#### Sketch of two Euler steps on a curved function.
Since Euler's method makes a linear approximation to project the solution into the future, assuming the value of the derivative at the start of the interval, it's not very good on oscillatory functions.
A clever idea that improves on Euler's method is to use the updated value of the derivatives for the _second_ equation.
Pure Euler's method applies:
\begin{eqnarray}
x(t_0) = x_0, \qquad x_{i+1} &=& x_i + v_i \Delta t \nonumber\\
v(t_0) = v_0, \qquad v_{i+1} &=& v_i - {\omega}^2 x_i \Delta t
\end{eqnarray}
What if in the equation for $v$ we used the value $x_{i+1}$ that was just computed? Like this:
\begin{eqnarray}
x(t_0) = x_0, \qquad x_{i+1} &=& x_i + v_i \Delta t \nonumber\\
v(t_0) = v_0, \qquad v_{i+1} &=& v_i - {\omega}^2 x_{i+1} \Delta t
\end{eqnarray}
Notice the $x_{i+1}$ on the right-hand side of the second equation: that's the updated value, giving the acceleration at the _end_ of the time interval. This modified scheme is called Euler-Cromer method, to honor clever Mr Cromer, who came up with the idea [2].
Let's see what it does. Study the function below carefully—it helps a lot if you write things out on a piece of paper!
```python
def euler_cromer(state, rhs, dt):
'''Update a state to the next time increment using Euler-Cromer's method.
Arguments
---------
state : array of dependent variables
rhs : function that computes the RHS of the DiffEq
dt : float, time increment
Returns
-------
next_state : array, updated after one time increment'''
mid_state = state + rhs(state)*dt # Euler step
mid_derivs = rhs(mid_state) # updated derivatives
next_state = numpy.array([mid_state[0], state[1] + mid_derivs[1]*dt])
return next_state
```
We've copied the whole problem set-up below, to get the solution in one code cell, for easy trial with different parameter choices. Try it out!
```python
ω = 2
period = 2*numpy.pi/ω
dt = period/200 # time intervals per period
T = 800*period # simulation time, in number of periods
N = round(T/dt)
print('The number of time steps is {}.'.format( N ))
print('The time increment is {}'.format( dt ))
# time array
t = numpy.linspace(0, T, N)
x0 = 2 # initial position
v0 = 0 # initial velocity
#initialize solution array
num_sol = numpy.zeros([N,2])
#Set initial conditions
num_sol[0,0] = x0
num_sol[0,1] = v0
for i in range(N-1):
num_sol[i+1] = euler_cromer(num_sol[i], springmass, dt)
```
The number of time steps is 160000.
The time increment is 0.015707963267948967
Recompute the analytical solution, and plot it alongside the numerical one, when you're ready. We computed a crazy number of oscillations, so we'll need to pick carefully the range of time to plot.
First, get the analytical solution. We chose to then plot the first few periods of the oscillatory motion: numerical and analytical.
```python
x_an = x0*numpy.cos(ω * t) # analytical solution
```
```python
iend = 800 # in number of time steps
fig = pyplot.figure(figsize=(6,4))
pyplot.plot(t[:iend], num_sol[:iend, 0], linewidth=2, linestyle='--', label='Numerical solution')
pyplot.plot(t[:iend], x_an[:iend], linewidth=1, linestyle='-', label='Analytical solution')
pyplot.xlabel('Time [s]')
pyplot.ylabel('$x$ [m]')
pyplot.title('Spring-mass system, with Euler-Cromer method.\n');
```
The plot shows that Euler-Cromer does not have the problem of growing amplitudes. We're pretty happy with it in that sense.
But if we plot the end of a long period of simulation, you can see that it does start to deviate from the analytical solution.
```python
istart = 400
fig = pyplot.figure(figsize=(6,4))
pyplot.plot(t[-istart:], num_sol[-istart:, 0], linewidth=2, linestyle='--', label='Numerical solution')
pyplot.plot(t[-istart:], x_an[-istart:], linewidth=1, linestyle='-', label='Analytical solution')
pyplot.xlabel('Time [s]')
pyplot.ylabel('$x$ [m]')
pyplot.title('Spring-mass system, with Euler-Cromer method. \n');
```
Looking at the last few oscillations in a very long run shows a slight phase difference, even with a very small time increment. So although the Euler-Cromer method fixes a big problem with Euler's method, it still has some error. It's still a first-order method!
#### The Euler-Cromer method is first-order accurate, just like Euler's method. The global error is proportional to $\Delta t$.
##### Note:
You'll often find the presentation of the Euler-Cromer method with the reverse order of the equations, i.e., the velocity equation solved first, then the position equation solved with the updated value of the velocity. This makes no difference in the results: it's just a convention among physicists.
The Euler-Cromer method is equivalent to a [_semi-implicit Euler method_](https://en.wikipedia.org/wiki/Semi-implicit_Euler_method).
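For reference, here is a sketch of that velocity-first variant, written in the same style as `euler_cromer()` above (it is not used in the rest of this lesson):
```python
def euler_cromer_vfirst(state, rhs, dt):
    '''Euler-Cromer step that updates the velocity first, then the
    position using the freshly updated velocity.'''
    derivs = rhs(state)                  # derivatives at the start of the interval
    v_next = state[1] + derivs[1]*dt     # velocity update uses the old position
    x_next = state[0] + v_next*dt        # position update uses the new velocity
    return numpy.array([x_next, v_next])
```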
## Convergence
We've said that both Euler's method and the Cromer variant are _first-order accurate_: the error goes as the first power of $\Delta t$. In [Lesson 2](http://go.gwu.edu/engcomp3lesson2) of this module, we showed this using a Taylor series. Let's now confirm it numerically.
Because simple harmonic motion has a known analytical function that solves the differential equation, we can directly compute a measure of the error made by the numerical solution.
Suppose we ran a numerical solution in the interval from $t_0$ to $T=N\,\Delta t$. We could then compute the error, as follows:
\begin{equation}
e = x_N - x_0 \cos(\omega T)
\end{equation}
where $x_N$ represents the numerical solution at the $N$-th time step.
How could we confirm the order of convergence of a numerical method? In the lucky scenario of having an analytical solution to directly compute the error, all we need to do is solve numerically with different values of $\Delta t$ and see if the error really varies linearly with this parameter.
In the code cell below, we compute the numerical solution with different time increments. We use two nested `for`-statements: one iterates over the values of $\Delta t$, and the other iterates over the time steps from the initial condition to the final time. We save the results in a new variable called `num_sol_time`, which is an array of arrays. Check it out!
```python
dt_values = numpy.array([period/50, period/100, period/200, period/400])
T = 1*period
num_sol_time = numpy.empty_like(dt_values, dtype=numpy.ndarray)
for j, dt in enumerate(dt_values):
N = int(T/dt)
t = numpy.linspace(0, T, N)
#initialize solution array
num_sol = numpy.zeros([N,2])
    #Set initial conditions
num_sol[0,0] = x0
num_sol[0,1] = v0
for i in range(N-1):
num_sol[i+1] = eulerstep(num_sol[i], springmass, dt)
num_sol_time[j] = num_sol.copy()
```
We will need to compute the error with our chosen norm, so let's write a function for that. It includes a line to obtain the values of the analytical solution at the needed instant of time, and then it takes the difference with the numerical solution to compute the error.
```python
def get_error(num_sol, T):
x_an = x0 * numpy.cos(ω * T) # analytical solution at final time
error = numpy.abs(num_sol[-1,0] - x_an)
return error
```
All that is left to do is to call the error function with our chosen values of $\Delta t$, and plot the results. A logarithmic scale on the plot confirms close to linear scaling between error and time increment.
```python
error_values = numpy.empty_like(dt_values)
for j in range(len(dt_values)):
error_values[j] = get_error(num_sol_time[j], T)
```
```python
# plot the solution errors with respect to the time increment
fig = pyplot.figure(figsize=(6,6))
pyplot.loglog(dt_values, error_values, 'ko-') #log-log plot
pyplot.loglog(dt_values, 10*dt_values, 'k:')
pyplot.grid(True) #turn on grid lines
pyplot.axis('equal') #make axes scale equally
pyplot.xlabel('$\Delta t$')
pyplot.ylabel('Error')
pyplot.title('Convergence of the Euler method (dotted line: slope 1)\n');
```
What do you see in the plot of the error as a function of $\Delta t$? It looks like a straight line, with a slope close to 1. On a log-log convergence plot, a slope of 1 indicates that we have a first-order method: the error scales as ${\mathcal O}(\Delta t)$—using the "big-O" notation. It means that the error is proportional to the time increment: $ e \propto \Delta t.$
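Rather than only eyeballing the slope, we can also estimate the observed order of convergence from consecutive error values (a small sketch reusing `dt_values` and `error_values` from above):
```python
# order ≈ log(e_coarse/e_fine) / log(dt_coarse/dt_fine) for consecutive refinements
observed_order = numpy.log(error_values[:-1]/error_values[1:]) / \
                 numpy.log(dt_values[:-1]/dt_values[1:])
print(observed_order)   # values close to 1 confirm a first-order method
```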
## Modified Euler's method
Another improvement on Euler's method is achieved by stepping the numerical solution to the midpoint of a time interval, computing the derivatives there, and then going back and updating the system state using the midpoint derivatives. This is called _modified Euler's method_.
If we write the vector form of the differential equation as:
\begin{equation}
\dot{\mathbf{x}} = f(\mathbf{x}),
\end{equation}
then modified Euler's method is:
\begin{align}
\mathbf{x}_{n+1/2} & = \mathbf{x}_n + \frac{\Delta t}{2} f(\mathbf{x}_n) \\
\mathbf{x}_{n+1} & = \mathbf{x}_n + \Delta t \,\, f(\mathbf{x}_{n+1/2}).
\end{align}
We can now write a Python function to update the state using this method. It's equivalent to a so-called _Runge-Kutta second-order_ method, so we call it `rk2_step()`.
```python
def rk2_step(state, rhs, dt):
'''Update a state to the next time increment using modified Euler's method.
Arguments
---------
state : array of dependent variables
rhs : function that computes the RHS of the DiffEq
dt : float, time increment
Returns
-------
next_state : array, updated after one time increment'''
mid_state = state + rhs(state) * dt*0.5
next_state = state + rhs(mid_state)*dt
return next_state
```
Let's see how it performs with our spring-mass model.
```python
dt_values = numpy.array([period/50, period/100, period/200,period/400])
T = 1*period
num_sol_time = numpy.empty_like(dt_values, dtype=numpy.ndarray)
for j, dt in enumerate(dt_values):
N = int(T/dt)
t = numpy.linspace(0, T, N)
#initialize solution array
num_sol = numpy.zeros([N,2])
    #Set initial conditions
num_sol[0,0] = x0
num_sol[0,1] = v0
for i in range(N-1):
num_sol[i+1] = rk2_step(num_sol[i], springmass, dt)
num_sol_time[j] = num_sol.copy()
```
```python
error_values = numpy.empty_like(dt_values)
for j, dt in enumerate(dt_values):
    error_values[j] = get_error(num_sol_time[j], T)
```
```python
# plot of convergence for modified Euler's
fig = pyplot.figure(figsize=(6,6))
pyplot.loglog(dt_values, error_values, 'ko-')
pyplot.loglog(dt_values, 5*dt_values**2, 'k:')
pyplot.grid(True)
pyplot.axis('equal')
pyplot.xlabel('$\Delta t$')
pyplot.ylabel('Error')
pyplot.title('Convergence of modified Euler\'s method (dotted line: slope 2)\n');
```
The convergence plot, in this case, does look close to a slope-2 line. Modified Euler's method is second-order accurate:
the effect of computing the derivatives (slope) at the midpoint of the time interval, instead of the starting point, is to increase the accuracy by one order!
Using the derivatives at the midpoint of the time interval is equivalent to using the average of the derivatives at $t$ and $t+\Delta t$:
this corresponds to a second-order _Runge-Kutta method_, or RK2, for short.
Combining derivatives evaluated at different points in the time interval is the key to Runge-Kutta methods that achieve higher orders of accuracy.
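As a glimpse ahead, here is a sketch of the classical fourth-order Runge-Kutta step in the same style as `eulerstep()` and `rk2_step()` (not used elsewhere in this lesson):
```python
def rk4_step(state, rhs, dt):
    '''Update a state to the next time increment using the classical
    fourth-order Runge-Kutta method.'''
    k1 = rhs(state)
    k2 = rhs(state + dt/2*k1)
    k3 = rhs(state + dt/2*k2)
    k4 = rhs(state + dt*k3)
    return state + dt/6*(k1 + 2*k2 + 2*k3 + k4)
```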
## What we've learned
* vector form of the spring-mass differential equation
* Euler's method produces unphysical amplitude growth in oscillatory systems
* the Euler-Cromer method fixes the amplitude growth (while still being first order)
* Euler-Cromer does show a phase lag after a long simulation
* a convergence plot confirms the first-order accuracy of Euler's method
* a convergence plot shows that modified Euler's method, using the derivatives evaluated at the midpoint of the time interval, is a second-order method
## References
1. Linge S., Langtangen H.P. (2016) Solving Ordinary Differential Equations. In: Programming for Computations - Python. Texts in Computational Science and Engineering, vol 15. Springer, Cham, https://doi.org/10.1007/978-3-319-32428-9_4, open access and reusable under [CC-BY-NC](http://creativecommons.org/licenses/by-nc/4.0/) license.
2. Cromer, A. (1981). Stable solutions using the Euler approximation. _American Journal of Physics_, 49(5), 455-459. https://doi.org/10.1119/1.12478
```python
# Execute this cell to load the notebook's style sheet, then ignore it
from IPython.core.display import HTML
css_file = '../style/custom.css'
HTML(open(css_file, "r").read())
```
```python
from sympy import *
from sympy.physics.mechanics import *
mass, R, c1, c2, grav = var("mass, R, c1, c2, grav", real=True, positive=True)
omega = var("omega", real=True, positive=True)
x, phi = dynamicsymbols('x, phi')
xp, phip = dynamicsymbols('x, phi', 1)
J = mass*R**2/2
T = J*phip**2/2 + mass*xp**2/2
U = - mass*grav*x + c1/2 * (x - R*phi)**2 + c2/2 * (x + R*phi)**2
L = T - U
pprint("\nL:")
pprint(L)
LM = LagrangesMethod(L, [x,phi])
# pprint(L.expand().collect([p1,p2,p3,p4]))
LM.form_lagranges_equations()
M = LM.mass_matrix
f = LM.forcing
pprint("\nMass matrix:")
pprint(M)
pprint("\nForce matrix (not stiffness matrix)")
pprint(f)
for i in range(2):
print("--- Row "+str(i+1)+":")
term = -1*f[i].expand()
pprint(term.collect([x,phi]))
from sympy.abc import x, y
system = Matrix([
(c1 + c2, R*(-c1 + c2), mass*grav), (R*(-c1 + c2), R**2*(c1 + c2), 0)
])
sol = solve_linear_system(system, x,y)
x0 = sol[x]
p0 = sol[y]
pprint("\nx0:")
pprint(x0)
pprint("\np0:")
pprint(p0)
```
```python
import math
import json
import sympy as sp
from sympy.utilities.lambdify import lambdify
import numpy as np
import matplotlib.pyplot as plt
import openrtdynamics2.lang as dy
import openrtdynamics2.py_execute as dyexe
from openrtdynamics2.ORTDtoNumpy import ORTDtoNumpy
from vehicle_lib.vehicle_lib import *
import vehicle_lib.path_transformations as pt
import vehicle_lib.motion_primitives as mp
```
```python
Ts = 0.01
```
```python
# load track data
with open("track_data/simple_track.json", "r") as read_file:
track_data = json.load(read_file)
```
# Under construction...
```python
```
```python
```
```python
```
# Kinematic bicycle model
The dynamic system equations are given by
$
\dot X
=
f(x,y,\psi)
=
\begin{pmatrix}
\dot x \\
\dot y \\
\dot \psi
\end{pmatrix}
=
\begin{pmatrix}
v \cos( \delta + \psi) \\
v \sin( \delta + \psi) \\
v / l_r \sin( \delta ), \\
\end{pmatrix}
$
with the state vector
$ X = [ x, y, \psi ]^T $.
Herein, $x$ and $y$ denote the coordinates of the vehicle front axle in Cartesian space and $\psi$ the vehicle body orientation angle. The system inputs are the steering angle $\delta$ and the vehicle velocity $v$. Finally, the parameter $l_r$ denotes the wheelbase, i.e., the distance between the front and rear axles.
```python
x, y, v, delta, psi, l_r, T_s, n = sp.symbols('x y v delta psi l_r T_s n')
x_dot = v * sp.cos( delta + psi )
y_dot = v * sp.sin( delta + psi )
psi_dot = v / l_r * sp.sin( delta )
# system function f
f = sp.Matrix([ x_dot, y_dot, psi_dot ])
# state vector
X_bic = sp.Matrix( [ x, y, psi ])
# input vector
U_bic = sp.Matrix( [ delta, v ])
```
```python
f
```
$\displaystyle \left[\begin{matrix}v \cos{\left(\delta + \psi \right)}\\v \sin{\left(\delta + \psi \right)}\\\frac{v \sin{\left(\delta \right)}}{l_{r}}\end{matrix}\right]$
# Discretization of the continunous model
By applying Euler-forward discretization
$ {X}[k+1] = \underbrace{ {X}[k] + T_s \dot{X} }_{f_{dscr}} $,
the continuous system is time-discretized with the sampling time $T_s$, yielding the discrete system function $f_{dscr}$.
```python
# apply Euler forward
f_dscr = sp.Matrix( [x,y,psi]) + T_s * f
```
```python
f_dscr
```
$\displaystyle \left[\begin{matrix}T_{s} v \cos{\left(\delta + \psi \right)} + x\\T_{s} v \sin{\left(\delta + \psi \right)} + y\\\frac{T_{s} v \sin{\left(\delta \right)}}{l_{r}} + \psi\end{matrix}\right]$
# Analytically compute the Jacobian matrices
A linearization of the non-linear system function around a dynamic set point is calculated by deriving the Jacobian matrices w.r.t. the state vector $X$ and the system inputs $\delta$ and $v$:
continuous case
$
A = \frac{ \partial f }{ \partial X},
\qquad
B = \frac{ \partial f }{ \partial [ \delta, v ]^T },
$
discrete-time case
$
A_{dscr} = \frac{ \partial f_{dscr} }{ \partial X},
\qquad
B_{dscr} = \frac{ \partial f_{dscr} }{ \partial [ \delta, v ]^T },
$
```python
# continuous system matrices
A = f.jacobian(X_bic)
B = f.jacobian(U_bic)
# discrete system matrices
A_dscr = f_dscr.jacobian(X_bic)
B_dscr = f_dscr.jacobian(U_bic)
```
```python
A_dscr
```
$\displaystyle \left[\begin{matrix}1 & 0 & - T_{s} v \sin{\left(\delta + \psi \right)}\\0 & 1 & T_{s} v \cos{\left(\delta + \psi \right)}\\0 & 0 & 1\end{matrix}\right]$
```python
B_dscr
```
$\displaystyle \left[\begin{matrix}- T_{s} v \sin{\left(\delta + \psi \right)} & T_{s} \cos{\left(\delta + \psi \right)}\\T_{s} v \cos{\left(\delta + \psi \right)} & T_{s} \sin{\left(\delta + \psi \right)}\\\frac{T_{s} v \cos{\left(\delta \right)}}{l_{r}} & \frac{T_{s} \sin{\left(\delta \right)}}{l_{r}}\end{matrix}\right]$
# Create functions that generate the matrices A, B, and the system function f
Create Python functions with which the symbolically derived matrices and the system function can be evaluated numerically.
```python
variables = (T_s,l_r, x,y,psi,v,delta)
array2mat = [{'ImmutableDenseMatrix': np.matrix}, 'numpy']
A_dscr_fn = lambdify( variables, A_dscr, modules=array2mat)
B_dscr_fn = lambdify( variables, B_dscr, modules=array2mat)
f_dscr_fn = lambdify( variables, f_dscr, modules=array2mat)
```
```python
A_dscr_fn(0.01, 3.0, 0.1,0.2,0.4,10,0.1)
```
matrix([[ 1. , 0. , -0.04794255],
[ 0. , 1. , 0.08775826],
[ 0. , 0. , 1. ]])
```python
B_dscr_fn(0.01, 3.0, 0.1,0.2,0.4,10,0.1)
```
matrix([[-0.04794255, 0.00877583],
[ 0.08775826, 0.00479426],
[ 0.03316681, 0.00033278]])
```python
f_dscr_fn(0.01, 3.0, 0.1,0.2,0.4,10,0.1)
```
matrix([[0.18775826],
[0.24794255],
[0.40332778]])
```python
```
```python
```
# Run a simulation to generate test data
Set up a simulation of a vehicle (bicycle model). The vehicle is controlled to follow a given path. In addition, an intended lateral distance $\Delta l$ is modulated by applying a pre-defined profile to the reference $\Delta l_r$.
```python
path_transform = pt.LateralPathTransformer(wheelbase=3.0)
```
compiling system store_input_data (level 1)...
compiling system tracker_loop (level 3)...
compiling system Subsystem1000 (level 2)...
compiling system controller (level 2)...
compiling system simulation_model (level 2)...
compiling system process_data (level 1)...
compiling system simulation (level 0)...
Generated code will be written to generated/tmp1 .
```python
lateral_profile = mp.generate_one_dimensional_motion(Ts=Ts, T_phase1=1, T_phase2=3, T_phase3=1, T_phase4=3, T_phase5=1)
mp.plot_lateral_profile(lateral_profile)
```
```python
output_path = path_transform.run_lateral_path_transformer( track_data, lateral_profile )
plt.figure(figsize=(8,4), dpi=100)
plt.plot( output_path['X'], output_path['Y']+0.1 )
plt.plot( track_data['X'], track_data['Y'] )
plt.legend(['manipulated (output) path', 'original (input) path'])
plt.grid()
plt.xlabel('x [m]')
plt.ylabel('y [m]')
plt.show()
```
```python
output_path.keys()
```
dict_keys(['D', 'X', 'Y', 'PSI', 'K', 'D_STAR', 'V_DELTA_DOT', 'V_DELTA', 'V_PSI_DOT', 'V_PSI', 'V_VELOCITY'])
# Add noise to the model (sensing noise)
Simulate measurement noise which is, e.g., introduced by GPS.
```python
# X/Y positioning noise (normal distribution)
N = len( output_path['X'] )
eta_x = np.random.normal(0, 0.1, N) * 1
eta_y = np.random.normal(0, 0.1, N) * 1
x_meas = eta_x + output_path['X']
y_meas = eta_y + output_path['Y']
psi_meas = output_path['V_PSI']
psi_dot_meas = output_path['V_PSI_DOT']
v_meas = output_path['V_VELOCITY']
plt.figure(figsize=(12,8), dpi=70)
plt.plot(x_meas, y_meas)
plt.plot(track_data['X'], track_data['Y'])
plt.show()
plt.figure(figsize=(12,8), dpi=70)
plt.plot(psi_meas)
plt.show()
```
# Extended Kalman filter
The extended Kalman filter is applied to the linearized model described by the matrices $A_{dscr}$ and $B_{dscr}$, given the system function $f_{dscr}$. The simulated data serves as measured data and is the input to the filter.
```python
f_dscr
```
$\displaystyle \left[\begin{matrix}T_{s} v \cos{\left(\delta + \psi \right)} + x\\T_{s} v \sin{\left(\delta + \psi \right)} + y\\\frac{T_{s} v \sin{\left(\delta \right)}}{l_{r}} + \psi\end{matrix}\right]$
```python
A_dscr
```
$\displaystyle \left[\begin{matrix}1 & 0 & - T_{s} v \sin{\left(\delta + \psi \right)}\\0 & 1 & T_{s} v \cos{\left(\delta + \psi \right)}\\0 & 0 & 1\end{matrix}\right]$
```python
B_dscr
```
$\displaystyle \left[\begin{matrix}- T_{s} v \sin{\left(\delta + \psi \right)} & T_{s} \cos{\left(\delta + \psi \right)}\\T_{s} v \cos{\left(\delta + \psi \right)} & T_{s} \sin{\left(\delta + \psi \right)}\\\frac{T_{s} v \cos{\left(\delta \right)}}{l_{r}} & \frac{T_{s} \sin{\left(\delta \right)}}{l_{r}}\end{matrix}\right]$
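As a reminder, the loop below implements the standard extended Kalman filter prediction and update equations, with $F$ being the Jacobian $A_{dscr}$ evaluated at the current estimate, $H$ the output matrix selecting $(x, y)$, $Q$ the process noise covariance and $R$ the measurement noise covariance:
\begin{align}
X_{k|k-1} &= f_{dscr}\left( X_{k-1|k-1}, \delta_k, v_k \right) \\
P_{k|k-1} &= F_k \, P_{k-1|k-1} \, F_k^T + Q \\
e_k &= z_k - H \, X_{k|k-1} \\
S_k &= H \, P_{k|k-1} \, H^T + R \\
K_k &= P_{k|k-1} \, H^T S_k^{-1} \\
X_{k|k} &= X_{k|k-1} + K_k \, e_k \\
P_{k|k} &= (I - K_k H) \, P_{k|k-1}
\end{align}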
The filter is implemented as a loop over the measurement samples:
```python
l_r = 3.0
# allocate space to store the filter results
results = {'delta' : np.zeros(N), 'x' : np.zeros(N), 'y' : np.zeros(N), 'psi' : np.zeros(N) }
# the guess/estimate of the initial states
X = np.matrix([ [0.5], [0.5], [0.1] ])
P = np.matrix([ [0.1, 0, 0 ],
[0, 0.1, 0 ],
[0, 0, 0.1 ] ])
# covariance of the noise w additive to the states
Q = 0.00001*np.matrix([ [1, 0, 0 ],
[0, 1, 0 ],
[0, 0, 1 ] ])
# covariance of the noise v in the measured system output signal
R = np.matrix([ [0.1, 0 ],
[0 , 0.1 ] ])
for i in range(0,N):
# measured input signals
v = v_meas[i]
x = x_meas[i]
y = y_meas[i]
psi_dot = psi_dot_meas[i]
    # recover the steering angle by inverting the yaw-rate relation psi_dot = v/l_r * sin(delta)
delta = math.asin( psi_dot * l_r / v )
# system output vector (x, y)
z = np.matrix([ [x], [y] ])
    # prediction step using the non-linear model (f_dscr)
# x(k-1|k-1) --> x(k|k-1)
X[0] = X[0] + Ts * ( v * math.cos( X[2] + delta ) )
X[1] = X[1] + Ts * ( v * math.sin( X[2] + delta ) )
X[2] = X[2] + Ts * ( v / l_r * math.sin(delta) )
# optionally use the auto-generated python function for evaluation
# X = f_dscr_fn( Ts, l_r, float(X[0]), float(X[1]), float(X[2]), v, delta )
    # evaluate Jacobian matrices A_dscr and B_dscr
F = np.matrix([ [1, 0, -Ts*v*math.sin(delta+X[2]) ],
[0, 1, Ts*v*math.cos(delta+X[2]) ],
[0, 0, 1 ] ])
# optionally use the auto-generated python function for evaluation
# F = A_dscr_fn( Ts, l_r, float(X[0]), float(X[1]), float(X[2]), v, delta )
B = np.matrix([ [-Ts*v*math.sin(delta+X[2]), Ts*math.cos(delta+X[2]) ],
[ Ts*v*math.cos(delta+X[2]), Ts*math.sin(delta+X[2]) ],
[Ts*v/l_r * math.cos(delta), Ts/l_r * math.sin(delta) ] ])
# optionally use the auto-generated python function for evaluation
# B = B_dscr_fn( Ts, l_r, float(X[0]), float(X[1]), float(X[2]), v, delta )
# the system output matrix: returns X and Y when multiplied with the state vector X
# which are compared to the measurements
H = np.matrix([ [1,0,0],
[0,1,0] ])
    # predicted state covariance P(k|k-1)
P = F*P*F.transpose() + Q
# estimation output residual vector
e = z - H*X
# Kalman gain
S = H*P*H.transpose() + R
K = P*H.transpose() * np.linalg.inv( S )
    # a posteriori state X(k|k)
X = X + K*e
    # a posteriori covariance
P = (np.eye(3) - K*H) * P
# store results
results['delta'][i] = delta
results['x'][i] = X[0]
results['y'][i] = X[1]
results['psi'][i] = X[2]
# show results
plt.figure(figsize=(12,8), dpi=79)
#plt.plot(x_meas, y_meas, '+')
plt.plot(output_path['X'], output_path['Y'], 'g')
plt.plot(results['x'], results['y'], 'r')
plt.show()
plt.figure(figsize=(12,8), dpi=70)
plt.plot(results['psi'])
plt.plot(output_path['V_PSI'], 'g')
plt.show()
```
```python
```
```python
%matplotlib widget
from mayavi import mlab
```
```python
import matplotlib.pyplot as plt
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
import scipy as sp
import scipy.linalg
import sympy as sy
sy.init_printing()
np.set_printoptions(precision=3)
np.set_printoptions(suppress=True)
```
```python
mlab.init_notebook(backend='x3d')
```
We start with plotting basics in the Python environment and, along the way, review systems of linear equations.
# <font face="gotham" color="purple"> Visualisation of A System of Two Linear Equations </font>
Consider a linear system of two equations:
\begin{align}
x+y&=6\\
x-y&=-4
\end{align}
Easy to solve: $(x, y)^T = (1, 5)^T$. Let's plot the linear system.
```python
x = np.linspace(-5, 5, 100)
y1 = -x + 6
y2 = x + 4
fig, ax = plt.subplots(figsize = (12, 7))
ax.scatter(1, 5, s = 200, zorder=5, color = 'r', alpha = .8)
ax.plot(x, y1, lw =3, label = '$x+y=6$')
ax.plot(x, y2, lw =3, label = '$x-y=-4$')
ax.plot([1, 1], [0, 5], ls = '--', color = 'b', alpha = .5)
ax.plot([-5, 1], [5, 5], ls = '--', color = 'b', alpha = .5)
ax.set_xlim([-5, 5])
ax.set_ylim([0, 12])
ax.legend()
s = '$(1,5)$'
ax.text(1, 5.5, s, fontsize = 20)
ax.set_title('Solution of $x+y=6$, $x-y=-4$', size = 22)
ax.grid()
```
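For a quick numerical cross-check of this solution (a small sketch using `np.linalg.solve`, with NumPy already imported as `np`):
```python
A = np.array([[1., 1.],
              [1., -1.]])   # coefficients of x+y=6 and x-y=-4
b = np.array([6., -4.])
print(np.linalg.solve(A, b))  # expected: [1. 5.]
```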
# <font face="gotham" color="purple"> How to Draw a Plane </font>
Before drawing a plane, let's refresh the logic of Matplotlib 3D plotting. This should be familiar to you if you are a MATLAB user.
First, create meshgrids.
```python
x, y = [-1, 0, 1], [-1, 0, 1]
X, Y = np.meshgrid(x, y)
```
Mathematically, meshgrids are the coordinates of the <font face="gotham" color="red">Cartesian product</font> of the two axes. To illustrate, we can plot all the coordinates of these meshgrids
```python
fig, ax = plt.subplots(figsize = (12, 7))
ax.scatter(X, Y, s = 200, color = 'red')
ax.axis([-2, 3.01, -2.01, 2])
ax.spines['left'].set_position('zero') # alternative position is 'center'
ax.spines['right'].set_color('none')
ax.spines['bottom'].set_position('zero')
ax.spines['top'].set_color('none')
ax.grid()
plt.show()
```
Try a more complicated meshgrid.
```python
x, y = np.arange(-3, 4, 1), np.arange(-3, 4, 1)
X, Y = np.meshgrid(x, y)
fig, ax = plt.subplots(figsize = (12, 12))
ax.scatter(X, Y, s = 200, color = 'red', zorder = 3)
ax.axis([-5, 5, -5, 5])
ax.spines['left'].set_position('zero') # alternative position is 'center'
ax.spines['right'].set_color('none')
ax.spines['bottom'].set_position('zero')
ax.spines['top'].set_color('none')
ax.grid()
```
Now consider a function $z = f(x, y)$, where $z$ adds a third dimension. Though Matplotlib is not meant for sophisticated 3D graphics, basic 3D plotting is still acceptable.
For example, we define a simple plane as
$$z= x + y$$
Then plot $z$
```python
Z = X + Y
fig = plt.figure(figsize = (9,9))
ax = fig.add_subplot(111, projection = '3d')
ax.scatter(X, Y, Z, s = 100, label = '$z=x+y$')
ax.legend()
plt.show()
```
Or we can plot it as a surface, Matplotlib will automatically interpolate values among the Cartesian coordinates such that the graph will look like a surface.
```python
fig = plt.figure(figsize = (9, 9))
ax = fig.add_subplot(111, projection = '3d')
ax.plot_surface(X, Y, Z, cmap ='viridis') # MATLAB default color map
ax.set_xlabel('x-axis')
ax.set_ylabel('y-axis')
ax.set_zlabel('z-axis')
ax.set_title('$z=x+y$', size = 18)
plt.show()
```
# <font face="gotham" color="purple"> Visualisation of A System of Three Linear Equations </font>
We have reviewed on plotting planes, now we are ready to plot several planes all together.
Consider this system of linear equations
\begin{align}
x_1- 2x_2+x_3&=0\\
2x_2-8x_3&=8\\
-4x_1+5x_2+9x_3&=-9
\end{align}
And the solution is $(x_1, x_2, x_3)^T = (29, 16, 3)^T$. Let's reproduce the system visually.
```python
x1 = np.linspace(25, 35, 20)
x2 = np.linspace(10, 20, 20)
X1, X2 = np.meshgrid(x1, x2)
fig = plt.figure(figsize = (9, 9))
ax = fig.add_subplot(111, projection = '3d')
X3 = 2*X2 - X1
ax.plot_surface(X1, X2, X3, cmap ='viridis', alpha = 1)
X3 = .25*X2 - 1
ax.plot_surface(X1, X2, X3, cmap ='summer', alpha = 1)
X3 = -5/9*X2 + 4/9*X1 - 1
ax.plot_surface(X1, X2, X3, cmap ='spring', alpha = 1)
ax.scatter(29, 16, 3, s = 200, color = 'black')
plt.show()
```
We are certain there is a solution, yet the graph does not show the intersection of the planes. The problem originates from Matplotlib's rendering algorithm, which is not designed for drawing genuine 3D graphics: it merely projects 3D objects onto a 2D canvas to imitate 3D features.
Mayavi is much more capable at rendering 3D graphics, so we give an example here. If it is not installed, run ```conda install -c anaconda mayavi```.
```python
mlab.clf()
X1, X2 = np.mgrid[-10:10:21*1j, -5:10:21*1j]
X3 = 6 - X1 - X2
mlab.mesh(X1, X2, X3,colormap="spring")
X3 = 3 - 2*X1 + X2
mlab.mesh(X1, X2, X3,colormap="winter")
X3 = 3*X1 + 2*X2 -4
mlab.mesh(X1, X2, X3,colormap="summer")
mlab.axes()
mlab.outline()
mlab.points3d(1, 2, 3, color = (.8, 0.2, .2), )
mlab.title('A System of Linear Equations')
```
## <font face="gotham" color="purple"> Visualisation of An Inconsistent System </font>
Now let's visualise the linear system that does not have a solution.
\begin{align}
x+y+z&=1\\
x-y-2z&=2\\
2x-z&=1
\end{align}
Rearrange the system to solve for $z$:
\begin{align}
z&=1-x-y\\
z&=\frac{x}{2}-\frac{y}{2}-1\\
z&=2x-1
\end{align}
```python
mlab.clf()
X, Y = np.mgrid[-5:5:21*1j, -5:5:21*1j]
Z = 1 - X - Y
mlab.mesh(X, Y, Z,colormap="spring")
Z = X/2 - Y/2 - 1
mlab.mesh(X, Y, Z,colormap="summer")
Z = 2*X - 1
mlab.mesh(X, Y, Z,colormap="autumn")
mlab.axes()
mlab.outline()
mlab.title('A Inconsistent System of Linear Equations')
```
## <font face="gotham" color="purple"> Visualisation of A System With Infinite Numbers of Solutions </font>
Our system of equations is given
\begin{align}
y-z=&4\\
2x+y+2z=&4\\
2x+2y+z=&8
\end{align}
Rearrange to solve for $z$
\begin{align}
z=&y-4\\
z=&2-x-\frac{y}{2}\\
z=&8-2x-2y
\end{align}
```python
mlab.clf()
X, Y = np.mgrid[-2:2:21*1j, 2:6:21*1j]
Z = Y - 4
mlab.mesh(X, Y, Z,colormap="spring")
Z = 2 - X - Y/2
mlab.mesh(X, Y, Z,colormap="summer")
Z = 8 - 2*X - 2*Y
mlab.mesh(X, Y, Z,colormap="autumn")
mlab.axes()
mlab.outline()
mlab.title('A System of Linear Equations With Infinite Number of Solutions')
```
The solution of the system is $(x,y,z)=(-3z/2,z+4,z)^T$, where $z$ is a **free variable**.
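A quick symbolic check (a small sketch) that this parametric solution satisfies all three equations for every value of $z$:
```python
z = sy.symbols('z', real = True)
x_sol, y_sol = -3*z/2, z + 4
# each expression below should simplify to zero
print(sy.simplify(y_sol - z - 4),
      sy.simplify(2*x_sol + y_sol + 2*z - 4),
      sy.simplify(2*x_sol + 2*y_sol + z - 8))
```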
The solution is an infinite line in $\mathbb{R}^3$, to visualise the solution requires setting a range of $x$ and $y$, for instance we can set
\begin{align}
-2 \leq x \leq 2\\
2 \leq y \leq 6
\end{align}
which means
\begin{align}
-2\leq -\frac32z\leq 2\\
2\leq z+4 \leq 6
\end{align}
We can pick one inequality to set the range of $z$, e.g. second inequality: $-2 \leq z \leq 2$.
Then plot the planes and the solutions together.
```python
mlab.clf()
X, Y = np.mgrid[-2:2:21*1j, 2:6:21*1j]
Z = Y - 4
mlab.mesh(X, Y, Z,colormap="spring")
Z = 2 - X - Y/2
mlab.mesh(X, Y, Z,colormap="summer")
Z = 8 - 2*X - 2*Y
mlab.mesh(X, Y, Z,colormap="autumn")
ZL = np.linspace(-2, 2, 20) # ZL means Z for line, we have chosen the range [-2, 2]
X = -3*ZL/2
Y = ZL + 4
mlab.plot3d(X, Y, ZL)
mlab.axes()
mlab.outline()
mlab.title('A System of Linear Equations With Infinite Number of Solutions')
```
# <font face="gotham" color="purple"> Reduced Row Echelon Form </font>
For easy demonstration, we will be using SymPy frequently in lectures. SymPy is a very powerful symbolic computation library; we will see its basic features as the lectures move forward.
We define a SymPy matrix:
```python
M = sy.Matrix([[5, 0, 11, 3], [7, 23, -3, 7], [12, 11, 3, -4]]); M
```
$\displaystyle \left[\begin{matrix}5 & 0 & 11 & 3\\7 & 23 & -3 & 7\\12 & 11 & 3 & -4\end{matrix}\right]$
Think of it as an **augmented matrix** which combines the coefficients of a linear system. With row operations, we can solve the system quickly. Let's turn it into **reduced row echelon form**.
```python
M_rref = M.rref(); M_rref # .rref() is the SymPy method for row reduced echelon form
```
$\displaystyle \left( \left[\begin{matrix}1 & 0 & 0 & - \frac{2165}{1679}\\0 & 1 & 0 & \frac{1358}{1679}\\0 & 0 & 1 & \frac{1442}{1679}\end{matrix}\right], \ \left( 0, \ 1, \ 2\right)\right)$
Take out the first element in the big parentheses, i.e. the rref matrix.
```python
M_rref = np.array(M_rref[0]);M_rref
```
array([[1, 0, 0, -2165/1679],
[0, 1, 0, 1358/1679],
[0, 0, 1, 1442/1679]], dtype=object)
If you don't like fractions, convert it into float type.
```python
M_rref.astype(float)
```
array([[ 1. , 0. , 0. , -1.289],
[ 0. , 1. , 0. , 0.809],
[ 0. , 0. , 1. , 0.859]])
The last column of the rref matrix is the solution of the system.
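As a sanity check (a small sketch), we can multiply the coefficient part of `M` by the computed solution and compare with the right-hand side:
```python
coeffs, rhs = M[:, :3], M[:, 3]
solution = M.rref()[0][:, 3]      # last column of the rref
coeffs * solution - rhs           # should be the zero vector
```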
## <font face="gotham" color="purple"> Example: rref and Visualisation </font>
Let's use the ```.rref()``` method to compute the solution of a system and then visualise it. Consider the system:
\begin{align}
3x+6y+2z&=-13\\
x+2y+z&=-5\\
-5x-10y-2z&=19
\end{align}
Extract the augmented matrix into a SymPy matrix:
```python
A = sy.Matrix([[3, 6, 2, -13], [1, 2, 1, -5], [-5, -10, -2, 19]]);A
```
$\displaystyle \left[\begin{matrix}3 & 6 & 2 & -13\\1 & 2 & 1 & -5\\-5 & -10 & -2 & 19\end{matrix}\right]$
```python
A_rref = A.rref(); A_rref
```
$\displaystyle \left( \left[\begin{matrix}1 & 2 & 0 & -3\\0 & 0 & 1 & -2\\0 & 0 & 0 & 0\end{matrix}\right], \ \left( 0, \ 2\right)\right)$
In case you are wondering what $(0, 2)$ means: these are the indices of the pivot columns; in the augmented matrix above, the pivots reside in the $0$th and $2$nd columns.
Because the coefficient matrix does not have full rank, the solution can only be written in general (parametric) form
\begin{align}
x + 2y & = -3\\
z &= -2\\
y &= free
\end{align}
Let's pick 3 different values of $y$, for instance $(3, 5, 7)$, to calculate $3$ special solutions:
```python
point1 = (-2*3-3, 3, -2)
point2 = (-2*5-3, 5, -2)
point3 = (-2*7-3, 7, -2)
special_solution = np.array([point1, point2, point3]); special_solution # each row is a special solution
```
array([[ -9, 3, -2],
[-13, 5, -2],
[-17, 7, -2]])
We can visualise the general solution and the 3 special solutions together.
```python
y = np.linspace(2, 8, 20) # y is the free variable
x = -3 - 2*y
z = np.full((len(y), ), -2) # z is a constant
```
```python
fig = plt.figure(figsize = (12,9))
ax = fig.add_subplot(111, projection='3d')
ax.plot(x, y, z, lw = 3, color = 'red')
ax.scatter(special_solution[:,0], special_solution[:,1], special_solution[:,2], s = 200)
ax.set_title('General Solution and Special Solution of the Linear Sytem', size= 16)
plt.show()
```
## <font face="gotham" color="purple"> Example: A Symbolic Solution </font>
Consider a system where all right-hand side values are indeterminate:
\begin{align}
x + 2y - 3z &= a\\
4x - y + 8z &= b\\
2x - 6y - 4z &= c
\end{align}
We define $a, b, c$ as SymPy objects, then extract the augmented matrix
```python
a, b, c = sy.symbols('a, b, c', real = True)
A = sy.Matrix([[1, 2, -3, a], [4, -1, 8, b], [2, -6, -4, c]]); A
```
$\displaystyle \left[\begin{matrix}1 & 2 & -3 & a\\4 & -1 & 8 & b\\2 & -6 & -4 & c\end{matrix}\right]$
We can immediately obtain the symbolic solution by using the ```.rref()``` method.
```python
A_rref = A.rref(); A_rref
```
$\displaystyle \left( \left[\begin{matrix}1 & 0 & 0 & \frac{2 a}{7} + \frac{b}{7} + \frac{c}{14}\\0 & 1 & 0 & \frac{16 a}{91} + \frac{b}{91} - \frac{10 c}{91}\\0 & 0 & 1 & - \frac{11 a}{91} + \frac{5 b}{91} - \frac{9 c}{182}\end{matrix}\right], \ \left( 0, \ 1, \ 2\right)\right)$
Of course, we can substitute values of $a$, $b$ and $c$ to get a specific solution.
```python
vDict = {a: 3, b: 6, c: 7}
A_rref = A_rref[0].subs(vDict);A_rref # define a dictionary for special values to substitute in
```
$\displaystyle \left[\begin{matrix}1 & 0 & 0 & \frac{31}{14}\\0 & 1 & 0 & - \frac{16}{91}\\0 & 0 & 1 & - \frac{69}{182}\end{matrix}\right]$
## <font face="gotham" color="purple"> Example: Polynomials </font>
Consider this question: how do we find a cubic polynomial that passes through each of these points: $(1,3)$, $(2, -2)$, $(3, -5)$, and $(4, 0)$?
The form of cubic polynomial is
\begin{align}
y=a_0+a_1x+a_2x^2+a_3x^3
\end{align}
We substitute all the points:
\begin{align}
(x,y)&=(1,3)\qquad\longrightarrow\qquad \;\; 3=a_0+a_1+a_2+a_3 \\
(x,y)&=(2,-2)\qquad\longrightarrow\qquad -2=a_0+2a_1+4a_2+8a_3\\
(x,y)&=(3,-5)\qquad\longrightarrow\qquad -5=a_0+3a_1+9a_2+27a_3\\
(x,y)&=(4,0)\qquad\longrightarrow\qquad \;\; 0=a_0+4a_1+16a_2+64a_3
\end{align}
It turns out to be a linear system; the rest should be familiar already.
The augmented matrix is
```python
A = sy.Matrix([[1, 1, 1, 1, 3], [1, 2, 4, 8, -2], [1, 3, 9, 27, -5], [1, 4, 16, 64, 0]]); A
```
$\displaystyle \left[\begin{matrix}1 & 1 & 1 & 1 & 3\\1 & 2 & 4 & 8 & -2\\1 & 3 & 9 & 27 & -5\\1 & 4 & 16 & 64 & 0\end{matrix}\right]$
```python
A_rref = A.rref(); A_rref
```
$\displaystyle \left( \left[\begin{matrix}1 & 0 & 0 & 0 & 4\\0 & 1 & 0 & 0 & 3\\0 & 0 & 1 & 0 & -5\\0 & 0 & 0 & 1 & 1\end{matrix}\right], \ \left( 0, \ 1, \ 2, \ 3\right)\right)$
```python
A_rref = np.array(A_rref[0]); A_rref
```
array([[1, 0, 0, 0, 4],
[0, 1, 0, 0, 3],
[0, 0, 1, 0, -5],
[0, 0, 0, 1, 1]], dtype=object)
The last column is the solution, i.e. the coefficients of the cubic polynomial.
```python
poly_coef = A_rref.astype(float)[:,-1]; poly_coef
```
array([ 4., 3., -5., 1.])
Cubic polynomial form is:
\begin{align}
y = 4 + 3x - 5x^2 + x^3
\end{align}
Since we have the specific form of the cubic polynomial, we can plot it
```python
x = np.linspace(-5, 5, 40)
y = poly_coef[0] + poly_coef[1]*x + poly_coef[2]*x**2 + poly_coef[3]*x**3
```
```python
fig, ax = plt.subplots(figsize = (8, 8))
ax.plot(x, y, lw = 3, color ='red')
ax.scatter([1, 2, 3, 4], [3, -2, -5, 0], s = 100, color = 'blue', zorder = 3)
ax.grid()
ax.set_xlim([0, 5])
ax.set_ylim([-10, 10])
ax.text(1, 3.5, '$(1, 3)$', fontsize = 15)
ax.text(1.5, -2.5, '$(2, -2)$', fontsize = 15)
ax.text(2.7, -4, '$(3, -5)$', fontsize = 15)
ax.text(4.1, 0, '$(4, 0)$', fontsize = 15)
plt.show()
```
Now that you know the trick, try another 5 points: $(1,2)$, $(2,5)$, $(3,8)$, $(4,6)$, $(5, 9)$. The polynomial form is
\begin{align}
y=a_0+a_1x+a_2x^2+a_3x^3+a_4x^4
\end{align}
The augmented matrix is
```python
A = sy.Matrix([[1, 1, 1, 1, 1, 2],
[1, 2, 4, 8, 16, 5],
[1, 3, 9, 27, 81, 8],
[1, 4, 16, 64, 256, 6],
[1, 5, 25,125, 625, 9]]); A
```
$\displaystyle \left[\begin{matrix}1 & 1 & 1 & 1 & 1 & 2\\1 & 2 & 4 & 8 & 16 & 5\\1 & 3 & 9 & 27 & 81 & 8\\1 & 4 & 16 & 64 & 256 & 6\\1 & 5 & 25 & 125 & 625 & 9\end{matrix}\right]$
```python
A_rref = A.rref()
A_rref = np.array(A_rref[0])
coef = A_rref.astype(float)[:,-1];coef
```
array([ 19. , -37.417, 26.875, -7.083, 0.625])
```python
x = np.linspace(0, 6, 100)
y = coef[0] + coef[1]*x + coef[2]*x**2 + coef[3]*x**3 + coef[4]*x**4
```
```python
fig, ax = plt.subplots(figsize= (8, 8))
ax.plot(x, y, lw =3)
ax.scatter([1, 2, 3, 4, 5], [2, 5, 8, 6, 9], s= 100, color = 'red', zorder = 3)
ax.grid()
```
# <font face="gotham" color="purple"> Solving The System of Linear Equations By NumPy </font>
Set up the system $A x = b$, generate a random $A$ and $b$
```python
A = np.round(10 * np.random.rand(5, 5))
b = np.round(10 * np.random.rand(5,))
```
```python
x = np.linalg.solve(A, b);x
```
array([-2.283, 1.431, 0.677, -0.718, 1.908])
Let's verify that $Ax = b$
```python
A@x - b
```
array([-0., 0., 0., 0., -0.])
They are effectively zeros: only tiny round-off residuals remain (suppressed by the print options), which is why some entries show up as $-0.$
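A more robust check than reading off the residuals is `np.allclose` (a small sketch):
```python
print(np.allclose(A @ x, b))   # True when the residual is within numerical tolerance
```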
```python
import numpy
from matplotlib import pyplot as plot
import matplotlib.cm as cm
def iter_count(C, max_iter):
X = C
for n in range(max_iter):
if abs(X) > 2.:
return n
X = X ** 2 + C
return max_iter
N = 32
max_iter = 64
xmin, xmax, ymin, ymax = -2.2, .8, -1.5, 1.5
X = numpy.linspace(xmin, xmax, N)
Y = numpy.linspace(ymin, ymax, N)
Z = numpy.empty((N, N))
for i, y in enumerate(Y):
for j, x in enumerate(X):
Z[i, j] = iter_count(complex(x, y), max_iter)
# The optional parameter extent specifies the coordinate system for the data stored in the 2D array
plot.imshow(Z, cmap = cm.binary, extent = (xmin, xmax, ymin, ymax), interpolation = 'bicubic')
plot.show()
```
```python
import numpy
from matplotlib import pyplot as plot
import matplotlib.cm as cm
def iter_count(C, max_iter):
X = C
for n in range(max_iter):
if abs(X) > 2.:
return n
X = X ** 2 + C
return max_iter
N = 512
max_iter = 64
xmin, xmax, ymin, ymax = -2.2, .8, -1.5, 1.5
X = numpy.linspace(xmin, xmax, N)
Y = numpy.linspace(ymin, ymax, N)
Z = numpy.empty((N, N))
for i, y in enumerate(Y):
for j, x in enumerate(X):
Z[i, j] = iter_count(complex(x, y), max_iter)
plot.imshow(Z,
cmap = cm.binary,
interpolation = 'bicubic',
extent=(xmin, xmax, ymin, ymax))
cb = plot.colorbar(orientation='horizontal', shrink=.75)
cb.set_label('iteration count')
plot.show()
```
```python
import numpy
from numpy.random import uniform, seed
from matplotlib import pyplot as plot
from matplotlib.mlab import griddata
import matplotlib.cm as cm
def iter_count(C, max_iter):
X = C
for n in range(max_iter):
if abs(X) > 2.:
return n
X = X ** 2 + C
return max_iter
max_iter = 64
xmin, xmax, ymin, ymax = -2.2, .8, -1.5, 1.5
sample_count = 2 ** 12
A = uniform(xmin, xmax, sample_count)
B = uniform(ymin, ymax, sample_count)
C = [iter_count(complex(a, b), max_iter) for a, b in zip(A, B)]
N = 512
X = numpy.linspace(xmin, xmax, N)
Y = numpy.linspace(ymin, ymax, N)
Z = griddata(A, B, C, X, Y, interp = 'linear')
plot.scatter(A, B, color = (0., 0., 0., .5), s = .5)
plot.imshow(Z,
cmap = cm.binary,
interpolation = 'bicubic',
extent=(xmin, xmax, ymin, ymax))
plot.show()
```
```python
import numpy, sympy
from sympy.abc import x, y
from matplotlib import pyplot as plot
import matplotlib.patches as patches
import matplotlib.cm as cm
def cylinder_stream_function(U = 1, R = 1):
r = sympy.sqrt(x ** 2 + y ** 2)
theta = sympy.atan2(y, x)
return U * (r - R ** 2 / r) * sympy.sin(theta)
def velocity_field(psi):
u = sympy.lambdify((x, y), psi.diff(y), 'numpy')
v = sympy.lambdify((x, y), -psi.diff(x), 'numpy')
return u, v
psi = cylinder_stream_function()
U_func, V_func = velocity_field(psi)
xmin, xmax, ymin, ymax = -3, 3, -3, 3
Y, X = numpy.ogrid[ymin:ymax:128j, xmin:xmax:128j]
U, V = U_func(X, Y), V_func(X, Y)
M = (X ** 2 + Y ** 2) < 1.
U = numpy.ma.masked_array(U, mask = M)
V = numpy.ma.masked_array(V, mask = M)
shape = patches.Circle((0, 0), radius = 1., lw = 2., fc = 'w', ec = 'k', zorder = 0)
plot.gca().add_patch(shape)
plot.streamplot(X, Y, U, V, color = U ** 2 + V ** 2, cmap = cm.binary)
plot.axes().set_aspect('equal')
plot.show()
```
```python
import numpy
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plot
# Dataset generation
a, b, c = 10., 28., 8. / 3.
def lorenz_map(X, dt = 1e-2):
X_dt = numpy.array([a * (X[1] - X[0]),
X[0] * (b - X[2]) - X[1],
X[0] * X[1] - c * X[2]])
return X + dt * X_dt
points = numpy.zeros((2000, 3))
X = numpy.array([.1, .0, .0])
for i in range(points.shape[0]):
points[i], X = X, lorenz_map(X)
# Plotting
fig = plot.figure()
ax = fig.gca(projection = '3d')
ax.set_xlabel('X axis')
ax.set_ylabel('Y axis')
ax.set_zlabel('Z axis')
ax.set_title('Lorenz Attractor a=%0.2f b=%0.2f c=%0.2f' % (a, b, c))
'''
ax.scatter(points[:, 0], points[:, 1], points[:, 2],
marker = 's',
edgecolor = '.5',
facecolor = '.5')
'''
ax.scatter(points[:, 0], points[:, 1], points[:, 2],
zdir = 'z',
c = '.5')
plot.show()
```
```python
import numpy
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plot
a, b, c = 10., 28., 8. / 3.
def lorenz_map(X, dt = 1e-2):
X_dt = numpy.array([a * (X[1] - X[0]),
X[0] * (b - X[2]) - X[1],
X[0] * X[1] - c * X[2]])
return X + dt * X_dt
points = numpy.zeros((10000, 3))
X = numpy.array([.1, .0, .0])
for i in range(points.shape[0]):
points[i], X = X, lorenz_map(X)
fig = plot.figure()
ax = fig.gca(projection = '3d')
ax.plot(points[:, 0], points[:, 1], points[:, 2], c = 'k')
plot.show()
```
```python
import numpy
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plot
x = numpy.linspace(-3, 3, 256)
y = numpy.linspace(-3, 3, 256)
X, Y = numpy.meshgrid(x, y)
Z = numpy.sinc(numpy.sqrt(X ** 2 + Y ** 2))
fig = plot.figure()
ax = fig.gca(projection = '3d')
#ax.plot_surface(X, Y, Z, color = 'w')
#ax.plot_surface(X, Y, Z, cmap=cm.gray)
ax.plot_surface(X, Y, Z, cmap=cm.gray, linewidth=0, antialiased=False)
plot.show()
```
```python
```
```python
```
## Getting Started
In the following, we would like to introduce *pymoo* by presenting an example optimization scenario. This guide goes through the most important steps to get started with our framework. First, we present an example optimization problem to be solved using *pymoo*. Second, we show how to formulate the optimization problem in our framework and how to instantiate an algorithm object to be used for optimization. Then, a termination criterion for the algorithm is defined and the optimization method is called. Finally, we quickly show a possible post-processing step which analyzes the optimization run performance.
### Multi-Objective Optimization
In general, multi-objective optimization deals with several objective functions that are to be optimized subject to inequality and equality constraints <cite data-cite="multi_objective_book"></cite>. The goal is to find a set of solutions that do not have any constraint violation and are as good as possible regarding all objective values. The problem definition in its general form is given by:
\begin{align}
\begin{split}
\min \quad& f_{m}(x) \quad \quad \quad \quad m = 1,..,M \\[4pt]
\text{s.t.} \quad& g_{j}(x) \leq 0 \quad \; \; \, \quad j = 1,..,J \\[2pt]
\quad& h_{k}(x) = 0 \quad \; \; \quad k = 1,..,K \\[4pt]
\quad& x_{i}^{L} \leq x_{i} \leq x_{i}^{U} \quad i = 1,..,N \\[2pt]
\end{split}
\end{align}
The formulation above defines a multi-objective optimization problem with $N$ variables, $M$ objectives, $J$ inequality and $K$ equality constraints. Moreover, for each variable $x_i$ lower and upper variable boundaries ($x_i^L$ and $x_i^U$) are defined.
### Example Optimization Problem
In the following, we investigate an example bi-objective optimization problem with two constraints.
The problem was selected to be complex enough for demonstration purposes, yet not so difficult that the overall idea gets lost. Its definition is given by:
\begin{align}
\begin{split}
\min \;\; & f_1(x) = (x_1^2 + x_2^2) \\
\max \;\; & f_2(x) = -(x_1-1)^2 - x_2^2 \\[1mm]
\text{s.t.} \;\; & g_1(x) = 2 \, (x_1 - 0.1) \, (x_1 - 0.9) \leq 0\\
& g_2(x) = 20 \, (x_1 - 0.4) \, (x_1 - 0.6) \geq 0\\[1mm]
& -2 \leq x_1 \leq 2 \\
& -2 \leq x_2 \leq 2
\end{split}
\end{align}
It consists of two objectives ($M=2$) where $f_1(x)$ is minimized and $f_2(x)$ maximized. The optimization is subject to two inequality constraints ($J=2$), where $g_1(x)$ is formulated as a less-than and $g_2(x)$ as a greater-than constraint. The problem is defined with respect to two variables ($N=2$), $x_1$ and $x_2$, which both are in the range $[-2,2]$. The problem does not contain any equality constraints ($K=0$).
```python
import numpy as np
X1, X2 = np.meshgrid(np.linspace(-2, 2, 500), np.linspace(-2, 2, 500))
F1 = X1**2 + X2**2
F2 = (X1-1)**2 + X2**2
G = X1**2 - X1 + 3/16
G1 = 2 * (X1[0] - 0.1) * (X1[0] - 0.9)
G2 = 20 * (X1[0] - 0.4) * (X1[0] - 0.6)
import matplotlib.pyplot as plt
plt.rc('font', family='serif')
levels = [0.02, 0.1, 0.25, 0.5, 0.8]
plt.figure(figsize=(7, 5))
CS = plt.contour(X1, X2, F1, levels, colors='black', alpha=0.5)
CS.collections[0].set_label("$f_1(x)$")
CS = plt.contour(X1, X2, F2, levels, linestyles="dashed", colors='black', alpha=0.5)
CS.collections[0].set_label("$f_2(x)$")
plt.plot(X1[0], G1, linewidth=2.0, color="green", linestyle='dotted')
plt.plot(X1[0][G1<0], G1[G1<0], label="$g_1(x)$", linewidth=2.0, color="green")
plt.plot(X1[0], G2, linewidth=2.0, color="blue", linestyle='dotted')
plt.plot(X1[0][X1[0]>0.6], G2[X1[0]>0.6], label="$g_2(x)$",linewidth=2.0, color="blue")
plt.plot(X1[0][X1[0]<0.4], G2[X1[0]<0.4], linewidth=2.0, color="blue")
plt.plot(np.linspace(0.1,0.4,100), np.zeros(100),linewidth=3.0, color="orange")
plt.plot(np.linspace(0.6,0.9,100), np.zeros(100),linewidth=3.0, color="orange")
plt.xlim(-0.5, 1.5)
plt.ylim(-0.5, 1)
plt.xlabel("$x_1$")
plt.ylabel("$x_2$")
plt.legend(loc='upper center', bbox_to_anchor=(0.5, 1.12),
ncol=4, fancybox=True, shadow=False)
plt.tight_layout()
plt.show()
```
The figure above shows the contours of the problem. The contour lines of the objective function $f_1(x)$ is represented by a solid and $f_2(x)$ by a dashed line. The constraints $g_1(x)$ and $g_2(x)$ are parabolas which intersect the $x_1$-axis at $(0.1, 0.9)$ and $(0.4, 0.6)$. The pareto-optimal set is illustrated by a thick orange line. Through the combination of both constraints the pareto-set is split into two parts.
Analytically, the pareto-optimal set is given by $PS = \{(x_1, x_2) \,|\, (0.1 \leq x_1 \leq 0.4) \lor (0.6 \leq x_1 \leq 0.9) \, \land \, x_2 = 0\}$ and the Pareto-front by $f_2 = (\sqrt{f_1} - 1)^2$ where $f_1$ is defined in $[0.01,0.16]$ and $[0.36,0.81]$.
### Problem Definition
In *pymoo*, we consider pure minimization problems for optimization in all our modules. However, without loss of generality, an objective that is supposed to be maximized can be multiplied by $-1$ and minimized instead. Therefore, we minimize $-f_2(x)$ instead of maximizing $f_2(x)$ in our optimization problem. Furthermore, all constraint functions need to be formulated as $\leq 0$ constraints.
The feasibility of a solution can, therefore, be expressed by:
$$ \begin{cases}
\text{feasible,} \quad \quad \sum_i^n \langle g_i(x)\rangle = 0\\
\text{infeasible,} \quad \quad \quad \text{otherwise}\\
\end{cases}
$$
$$
\text{where} \quad \langle g_i(x)\rangle =
\begin{cases}
0, \quad \quad \; \text{if} \; g_i(x) \leq 0\\
g_i(x), \quad \text{otherwise}\\
\end{cases}
$$
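A quick way to make this concrete (a plain NumPy sketch, independent of *pymoo* internals) is to compute the total constraint violation by clipping each constraint value at zero and summing:

```python
import numpy as np

def constraint_violation(g):
    """Sum of the positive parts <g_i(x)>; zero means the solution is feasible."""
    return np.clip(np.asarray(g, dtype=float), 0.0, None).sum()

print(constraint_violation([-0.3, -1.2]))  # 0.0 -> feasible
print(constraint_violation([0.5, -1.2]))   # 0.5 -> infeasible
```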
For this reason, $g_2(x)$ needs to be multiplied by $-1$ in order to flip the $\geq$ to a $\leq$ relation. We recommend the normalization of constraints to give equal importance to each of them.
For $g_1(x)$, the coefficient results in $2 \cdot (-0.1) \cdot (-0.9) = 0.18$ and for $g_2(x)$ in $20 \cdot (-0.4) \cdot (-0.6) = 4.8$, respectively. We achieve normalization of constraints by dividing $g_1(x)$ and $g_2(x)$ by their corresponding coefficients.
Finally, the optimization problem to be optimized using *pymoo* is defined by:
\begin{align}
\label{eq:getting_started_pymoo}
\begin{split}
\min \;\; & f_1(x) = (x_1^2 + x_2^2) \\
\min \;\; & f_2(x) = (x_1-1)^2 + x_2^2 \\[1mm]
\text{s.t.} \;\; & g_1(x) = 2 \, (x_1 - 0.1) \, (x_1 - 0.9) \, / \, 0.18 \leq 0\\
& g_2(x) = - 20 \, (x_1 - 0.4) \, (x_1 - 0.6) \, / \, 4.8 \leq 0\\[1mm]
& -2 \leq x_1 \leq 2 \\
& -2 \leq x_2 \leq 2
\end{split}
\end{align}
Next, the derived problem formulation is implemented in Python. Each optimization problem in *pymoo* has to inherit from the *Problem* class. First, by calling the `super()` function, problem properties such as the number of variables `n_var`, objectives `n_obj` and constraints `n_constr` are initialized. Furthermore, the lower (`xl`) and upper (`xu`) variable boundaries are supplied as NumPy arrays. Additionally, the evaluation function `_evaluate` of the superclass needs to be overridden.
The method takes a two-dimensional NumPy array `x` with $n$ rows and $m$ columns as input. Each row represents an individual and each column an optimization variable. After doing the necessary calculations, the objective values have to be added to the dictionary `out` with the key `F` and the constraints with the key `G`.
```python
import autograd.numpy as anp
import numpy as np
from pymoo.util.misc import stack
from pymoo.model.problem import Problem
class MyProblem(Problem):
def __init__(self):
super().__init__(n_var=2,
n_obj=2,
n_constr=2,
xl=anp.array([-2,-2]),
xu=anp.array([2,2]))
def _evaluate(self, x, out, *args, **kwargs):
f1 = x[:,0]**2 + x[:,1]**2
f2 = (x[:,0]-1)**2 + x[:,1]**2
g1 = 2*(x[:, 0]-0.1) * (x[:, 0]-0.9) / 0.18
g2 = - 20*(x[:, 0]-0.4) * (x[:, 0]-0.6) / 4.8
out["F"] = anp.column_stack([f1, f2])
out["G"] = anp.column_stack([g1, g2])
# --------------------------------------------------
# Pareto-front - not necessary but used for plotting
# --------------------------------------------------
def _calc_pareto_front(self, flatten=True, **kwargs):
f1_a = np.linspace(0.1**2, 0.4**2, 100)
f2_a = (np.sqrt(f1_a) - 1)**2
f1_b = np.linspace(0.6**2, 0.9**2, 100)
f2_b = (np.sqrt(f1_b) - 1)**2
a, b = np.column_stack([f1_a, f2_a]), np.column_stack([f1_b, f2_b])
return stack(a, b, flatten=flatten)
# --------------------------------------------------
# Pareto-set - not necessary but used for plotting
# --------------------------------------------------
def _calc_pareto_set(self, flatten=True, **kwargs):
x1_a = np.linspace(0.1, 0.4, 50)
x1_b = np.linspace(0.6, 0.9, 50)
x2 = np.zeros(50)
a, b = np.column_stack([x1_a, x2]), np.column_stack([x1_b, x2])
return stack(a,b, flatten=flatten)
problem = MyProblem()
```
Because we consider a test problem where the optimal solutions in design and objective space are known, we have implemented the `_calc_pareto_front` and `_calc_pareto_set` functions in order to observe the convergence of the algorithm later on. For the optimization run itself, these methods do not need to be overridden. So, no worries if you are investigating benchmark or real-world optimization problems.
Moreover, implementations of many test optimization problems already exist. For example, the test problem *ZDT1* can be instantiated by:
```python
from pymoo.factory import get_problem
zdt1 = get_problem("zdt1")
```
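As a quick sanity check — sketched under the assumption that the returned object offers the same `evaluate` interface as the custom problem above — it can be evaluated on a batch of random designs:

```python
import numpy as np

# ZDT1 uses 30 variables in [0, 1] by default; evaluate five random designs
X = np.random.random((5, zdt1.n_var))
F = zdt1.evaluate(X)
print(F.shape)  # one row of two objective values per design
```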
Our framework has various single- and many-objective optimization test problems already implemented. Furthermore, a more advanced guide for custom problem definitions is available. In case problem functions are computationally expensive, parallelization of the evaluation functions might be an option.
[Optimization Test Problems](problems/index.ipynb) |
[Define a Custom Problem](problems/custom.ipynb) |
[Parallelization](problems/parallelization.ipynb)
[Callback](misc/callback.ipynb)
### Initialize an Algorithm
Moreover, we need to initialize a method to optimize the problem.
In *pymoo* factory methods create an `algorithm` object to be used for optimization. For each of those methods an API documentation is available and through supplying different parameters, algorithms can be customized in a plug-and-play manner.
Depending on the optimization problem different algorithms can be used to optimize the problem. Our framework offers various [Algorithms](algorithms/index.ipynb) which can be used to solve problems with different characteristics.
In general, the choice of a suitable algorithm for an optimization problem is a challenge in itself. Whenever problem characteristics are known beforehand, we recommend exploiting them through customized operators.
However, in our case the optimization problem is rather simple, but the aspect of having two objectives and two constraints should be considered. For this reason, we decided to use [NSGA-II](algorithms/nsga2.ipynb) with its default configuration and only minor modifications. We chose a population size of 40 (`pop_size=40`) and, instead of generating the same number of offspring, decided to create only 10 per generation (`n_offsprings=10`). This is a greedier variant that improves the convergence of rather simple optimization problems without difficulties such as the existence of local Pareto fronts.
Moreover, we enable a duplicate check (`eliminate_duplicates=True`), which makes sure that the mating produces offspring that are different from each other and from the existing population with respect to their design-space values. To illustrate the customization aspect, we list the other, unmodified default operators in the code snippet below.
```python
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_sampling, get_crossover, get_mutation
algorithm = NSGA2(
pop_size=40,
n_offsprings=10,
sampling=get_sampling("real_random"),
crossover=get_crossover("real_sbx", prob=0.9, eta=15),
mutation=get_mutation("real_pm", eta=20),
eliminate_duplicates=True
)
```
The `algorithm` object contains the implementation of NSGA-II with the custom settings supplied to the factory method.
### Define a Termination Criterion
Furthermore, a termination criterion needs to be defined to finally start the optimization procedure. Different kinds of [Termination Criteria](misc/termination_criterion.ipynb) are available. Here, since the problem is rather simple, we simply run the algorithm for a fixed number of generations.
```python
from pymoo.factory import get_termination
termination = get_termination("n_gen", 40)
```
Instead of the number of generations (or iterations), other criteria, such as the number of function evaluations or the improvement in design or objective space from the last to the current generation, can be used.
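For instance — assuming this *pymoo* version exposes it through the same factory function — a budget of function evaluations could be used instead:

```python
from pymoo.factory import get_termination

# stop after at most 1000 function evaluations instead of 40 generations
termination_eval = get_termination("n_eval", 1000)
```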
### Optimize
Finally, we are solving the problem with the algorithm and termination criterion we have defined.
```python
from pymoo.optimize import minimize
res = minimize(problem,
algorithm,
termination,
seed=1,
pf=problem.pareto_front(use_cache=False),
save_history=True,
verbose=True)
```
==========================================================================================
n_gen | n_eval | cv (min) | cv (avg) | igd | gd | hv
==========================================================================================
1 | 40 | 0.00000E+00 | 2.36399E+01 | 0.323661577 | 1.114456088 | 0.259340233
2 | 50 | 0.00000E+00 | 1.17898E+01 | 0.323661577 | 1.335274628 | 0.259340233
3 | 60 | 0.00000E+00 | 5.657379846 | 0.320526332 | 1.926659946 | 0.259340233
4 | 70 | 0.00000E+00 | 2.416757355 | 0.300166590 | 1.617753498 | 0.266749155
5 | 80 | 0.00000E+00 | 0.969701447 | 0.165500768 | 1.535620684 | 0.287348889
6 | 90 | 0.00000E+00 | 0.183302529 | 0.165500768 | 1.443230755 | 0.287348889
7 | 100 | 0.00000E+00 | 0.020538438 | 0.156128592 | 1.404304867 | 0.293954680
8 | 110 | 0.00000E+00 | 0.000279181 | 0.073420213 | 1.248034651 | 0.394476116
9 | 120 | 0.00000E+00 | 0.00000E+00 | 0.072646964 | 0.793678104 | 0.398800177
10 | 130 | 0.00000E+00 | 0.00000E+00 | 0.070217746 | 0.571516864 | 0.398800177
11 | 140 | 0.00000E+00 | 0.00000E+00 | 0.055648038 | 0.313333192 | 0.407781774
12 | 150 | 0.00000E+00 | 0.00000E+00 | 0.046858363 | 0.165736198 | 0.411202862
13 | 160 | 0.00000E+00 | 0.00000E+00 | 0.043001686 | 0.075260262 | 0.416874083
14 | 170 | 0.00000E+00 | 0.00000E+00 | 0.041120015 | 0.056460681 | 0.417527396
15 | 180 | 0.00000E+00 | 0.00000E+00 | 0.035881270 | 0.045362876 | 0.418684299
16 | 190 | 0.00000E+00 | 0.00000E+00 | 0.031726648 | 0.031181960 | 0.426721723
17 | 200 | 0.00000E+00 | 0.00000E+00 | 0.027885895 | 0.018989167 | 0.431163180
18 | 210 | 0.00000E+00 | 0.00000E+00 | 0.026692335 | 0.009421017 | 0.431945172
19 | 220 | 0.00000E+00 | 0.00000E+00 | 0.026021835 | 0.007962640 | 0.433105085
20 | 230 | 0.00000E+00 | 0.00000E+00 | 0.023344056 | 0.008017948 | 0.435418583
21 | 240 | 0.00000E+00 | 0.00000E+00 | 0.023130354 | 0.007340408 | 0.436471441
22 | 250 | 0.00000E+00 | 0.00000E+00 | 0.023130616 | 0.007523541 | 0.436474250
23 | 260 | 0.00000E+00 | 0.00000E+00 | 0.023019215 | 0.008866366 | 0.436699102
24 | 270 | 0.00000E+00 | 0.00000E+00 | 0.023152933 | 0.008694703 | 0.436703727
25 | 280 | 0.00000E+00 | 0.00000E+00 | 0.022693903 | 0.009290452 | 0.437347050
26 | 290 | 0.00000E+00 | 0.00000E+00 | 0.019819708 | 0.009987094 | 0.439780517
27 | 300 | 0.00000E+00 | 0.00000E+00 | 0.016667584 | 0.006936027 | 0.442909098
28 | 310 | 0.00000E+00 | 0.00000E+00 | 0.018568007 | 0.004785434 | 0.443178540
29 | 320 | 0.00000E+00 | 0.00000E+00 | 0.018465458 | 0.003341749 | 0.443266952
30 | 330 | 0.00000E+00 | 0.00000E+00 | 0.016865096 | 0.002818623 | 0.449364577
31 | 340 | 0.00000E+00 | 0.00000E+00 | 0.015600645 | 0.002744785 | 0.450427370
32 | 350 | 0.00000E+00 | 0.00000E+00 | 0.014513849 | 0.002663537 | 0.451657874
33 | 360 | 0.00000E+00 | 0.00000E+00 | 0.012592618 | 0.001895409 | 0.454111680
34 | 370 | 0.00000E+00 | 0.00000E+00 | 0.012569395 | 0.001749484 | 0.454202052
35 | 380 | 0.00000E+00 | 0.00000E+00 | 0.009111864 | 0.001880313 | 0.455049071
36 | 390 | 0.00000E+00 | 0.00000E+00 | 0.009118849 | 0.001944459 | 0.455025500
37 | 400 | 0.00000E+00 | 0.00000E+00 | 0.009041905 | 0.002021041 | 0.455062331
38 | 410 | 0.00000E+00 | 0.00000E+00 | 0.009146514 | 0.001935355 | 0.455157243
39 | 420 | 0.00000E+00 | 0.00000E+00 | 0.009146514 | 0.001935355 | 0.455157243
40 | 430 | 0.00000E+00 | 0.00000E+00 | 0.008957385 | 0.001827609 | 0.455298401
The [Result](misc/results.ipynb) object provides the corresponding X and F values and some more information.
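For example, the decision variables and objective values of the obtained solutions can be inspected directly (a short sketch using only the attributes mentioned above):

```python
print("Design variables of the obtained solutions:")
print(res.X)
print("Corresponding objective values:")
print(res.F)
```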
### Visualize
The optimization results are illustrated below (design and objective space). The solid line represents the analytically derived Pareto set and front in the corresponding space, and the circles mark the solutions found by the algorithm. It can be observed that the algorithm was able to converge, and a set of nearly optimal solutions was obtained.
```python
from pymoo.visualization.scatter import Scatter
# get the pareto-set and pareto-front for plotting
ps = problem.pareto_set(use_cache=False, flatten=False)
pf = problem.pareto_front(use_cache=False, flatten=False)
# Design Space
plot = Scatter(title = "Design Space", axis_labels="x")
plot.add(res.X, s=30, facecolors='none', edgecolors='r')
plot.add(ps, plot_type="line", color="black", alpha=0.7)
plot.do()
plot.apply(lambda ax: ax.set_xlim(-0.5, 1.5))
plot.apply(lambda ax: ax.set_ylim(-2, 2))
plot.show()
# Objective Space
plot = Scatter(title = "Objective Space")
plot.add(res.F)
plot.add(pf, plot_type="line", color="black", alpha=0.7)
plot.show()
```
Visualization is an important post-processing step in multi-objective optimization. Although it seems to be pretty easy for our example optimization problem, it becomes much more difficult in higher dimensions where trade-offs between solutions are not easily observable. For visualizations in higher dimensions, various more advanced [Visualizations](visualization/index.ipynb) are implemented in our framework.
### Performance Tracking
If the optimization scenario is repetitive, it makes sense to track the performance of the algorithm. Because we have stored the history of the optimization run, we can now analyze the convergence over time. To measure the performance, we need to decide which metric to use. Here, we use the hypervolume. Of course, other [Performance Indicators](misc/performance_indicator.ipynb) are available as well.
```python
import matplotlib.pyplot as plt
from pymoo.performance_indicator.hv import Hypervolume
# create the performance indicator object with reference point (1, 1)
metric = Hypervolume(ref_point=np.array([1.0, 1.0]))
# collect the population in each generation
pop_each_gen = [a.pop for a in res.history]
# keep only the objective values of the feasible solutions in each generation
obj_and_feasible_each_gen = [pop[pop.get("feasible")[:,0]].get("F") for pop in pop_each_gen]
# calculate the hypervolume metric for each generation
hv = [metric.calc(f) for f in obj_and_feasible_each_gen]
# visualize the convergence curve
plt.plot(np.arange(len(hv)), hv, '-o')
plt.title("Convergence")
plt.xlabel("Generation")
plt.ylabel("Hypervolume")
plt.show()
```
We hope you have enjoyed the getting started guide. For more topics, we refer to the sections covered on the [landing page](https://pymoo.org). If you have any question or concern, do not hesitate to [contact us](contact.rst).
# **Process Optimization (COQ897)**
# *Prof. Argimiro R. Secchi*
$\
$
Second Problem Set - 2020
$\
$
***José Rodrigues Torraca Neto***
$\
$
4) Engineer Fiona, responsible for operating the Solvent Extraction Unit (SEU) of a chemical plant, was assigned the task of finding operating conditions that would make the SEU profitable, in order to avoid its shutdown. The economic evaluation carried out by Eng. Fiona resulted in the following profit function:
>$L(\boldsymbol{x})=a-\frac{b}{x_{1}}-cx_{2}-d\frac{x_{1}}{x_{2}}$
, where $x_{1}$ and $x_{2}$ are the mass ratios of product leaving
each extraction stage in the raffinate stream,
with $x_{1}\leq 0.02$ and $x_{2}\leq x_{1}$,
and $a = 129.93, b = 0.5, c = 4000, d = 25$ are constants.
The current operating condition is:
$x_{1} = 0.015$ and $x_{2} = 0.001$.
(a) What is the value of the profit function at the current condition?
(b) What is the
maximum-profit condition found by Eng. Fiona, and what is the value of the profit
function at this new condition, knowing that the solution was unconstrained?
(c) Show that the new condition is
indeed a maximum point;
(d) After several months of operation at this new condition, a
solvent shortage in the market increased its price fourfold, changing the
constants of the profit function to $a = 279.72, b = 2.0, c = 4000, d = 100$. If the plant
kept operating at the conditions found in (b), what would the value of the
profit function be? What decision did Eng. Fiona make under this new market
condition? Why?
$\\
$
## ***Solution:***
***(a)*** The problem reduces to evaluating the profit function:
>$L(\boldsymbol{x})=a-\frac{b}{x_{1}}-cx_{2}-d\frac{x_{1}}{x_{2}}$
, with the constants $a = 129.93, b = 0.5, c = 4000, d = 25$
and setting:
$x_{1} = 0.015$ and $x_{2} = 0.001 \\
$
```
# Define the profit function:
def f(x1, x2, a=129.93, b=0.5, c=4000, d=25):
return a-(b/x1)-(c*x2)-(d*(x1/x2))
# Evaluate at the given x1 and x2:
result = f(0.015,0.001)
print(result)
```
-282.4033333333333
***Answer (a):*** The value of the profit function at the current condition is $L(x) = -282.4$.
***(b)*** The problem consists of finding the maximum-profit condition $(x_{1},x_{2})^{max}$ and the value of the profit function at this condition:
>$L(\boldsymbol{x})=a-\frac{b}{x_{1}}-cx_{2}-d\frac{x_{1}}{x_{2}}$
, with the constants $a = 129.93, b = 0.5, c = 4000, d = 25$
and without constraints (but bounding $x_{1}$ and $x_{2}$ to positive values no greater than 1, to be physically consistent, since they are mass ratios).
```
import numpy as np
import scipy.integrate
import matplotlib.pyplot as plt
%matplotlib inline
from scipy.optimize import minimize
# Inspect the shape of the surface:
from matplotlib import cm
x1 = np.linspace(1E-3, 2E-2, 50)
x2 = np.linspace(1E-3, 2E-2, 50)
X, Y = np.meshgrid(x1, x2)
Z = 129.93 - 0.5/X - 4000*Y - 25*(X/Y)
fig, ax = plt.subplots(subplot_kw={'projection': '3d'})
ax.plot_surface(X, Y, Z, cmap=cm.rainbow)
ax.set_xlabel('$x1$')
ax.set_ylabel('$x2$')
ax.set_zlabel('$L(x1,x2)$');
```
```
## Conceptually, maximization is analogous to minimization: to find
# the maximum of a function f, it is enough to find the minimum of -f.
## To perform the minimization we use scipy.optimize.minimize.
## Its basic usage is:
# scipy.optimize.minimize(fun, x0)
# where the arguments are:
# fun: function to be minimized, defined beforehand (f);
# x0: initial guess for the minimum.
## The minimize function also provides an interface to several
# constrained-minimization algorithms. As an example, the
# Sequential Least SQuares Programming (SLSQP) algorithm is used here.
## This algorithm handles optimization problems with equality (eq)
# and inequality (ineq) constraints.
## Define the objective function - profit (func):
def func(x, a=129.93, b=0.5, c=4000, d=25, sign=-1.0):
return sign*(a - b/x[0] - c*x[1] - d*x[0]/x[1])
## Define the derivatives of f with respect to x1 and x2 (func_deriv):
def func_deriv(x, a=129.93, b=0.5, c=4000, d=25, sign=-1.0):
dfdx0 = sign*(0 + b/(x[0])**2 + 0 - d/x[1])
dfdx1 = sign*(0 + 0 - c + d*x[0]/(x[1])**2)
return np.array([ dfdx0, dfdx1 ])
# Note that, since 'minimize' only minimizes functions,
# the sign parameter is introduced to multiply the objective function
# (and its derivative) by -1, in order to perform a maximization.
## Define the bounds (physical limits of the problem:
# 0 < x1 <= 1; 0 < x2 <= 1) as
# a scipy 'Bounds' object:
from scipy.optimize import Bounds
bounds = Bounds([1E-9, 1E-9], [1.0, 1.0])
# We had to impose [1E-9, 1E-9] instead of [0, 0], otherwise the method
# ends up dividing by zero.
## (OPTIONAL) Next, the constraints are defined as a sequence of
# Python dictionaries with the keys type, fun and jac (default x >= 0).
## Define the inequality constraints x1 <= 0.02, x2 <= x1:
ineq_cons = {'type': 'ineq',
'fun' : lambda x: np.array([0.02 - x[0],
x[0] - x[1]]),
'jac' : lambda x: np.array([[-1.0, 0.0],
[1.0, -1.0]])}
# Python has a powerful built-in concept called a LAMBDA expression
# (or lambda form). The idea is simple: it is a function assigned
# to an object. Because it uses the reserved word lambda, the object
# behaves like a function, and we can use anonymous functions inside
# other functions.
```
```
# Now an optimization without the inequality constraints can be run
# (only with the physical limits in 'bounds'):
x0 = np.array([0.99, 0.1])
res = minimize(func, x0, jac=func_deriv,
method='SLSQP', options={'ftol': 1e-9, 'disp': True},
bounds = bounds)
print(res.x)
```
Optimization terminated successfully. (Exit mode 0)
Current function value: -19.409055040270935
Iterations: 31
Function evaluations: 55
Gradient evaluations: 27
[0.01357208 0.00921004]
```
# And the constrained optimization (OPTIONAL - constraints = ineq_cons):
x0 = np.array([0.5, 0.5])
res = minimize(func, x0, jac=func_deriv, constraints=ineq_cons,
method='SLSQP', options={'ftol': 1e-9, 'disp': True},
bounds = bounds)
print(res.x)
```
Optimization terminated successfully. (Exit mode 0)
Current function value: -19.409055040739943
Iterations: 14
Function evaluations: 21
Gradient evaluations: 14
[0.01357208 0.00921008]
```
# Result in scientific notation:
scientific_notation1 = "{:.2e}".format(res.x[0])
scientific_notation2 = "{:.2e}".format(res.x[1])
print(scientific_notation1 + "," + scientific_notation2)
```
1.36e-02,9.21e-03
```
# Value of the objective (profit) function at the maximum point:
# Remember to switch the sign back to sign=1.0:
def func(x, a=129.93, b=0.5, c=4000, d=25, sign=1.0):
return sign*(a - b/x[0] - c*x[1] - d*x[0]/x[1])
result = func([1.36E-2, 9.21E-3])
print(result)
```
19.40889889506292
***Answer (b):*** The maximum-profit condition is $(x_{1}; \ x_{2})^{max} = (1.36 \cdot 10^{-2}; \ 9.21 \cdot 10^{-3})$ and the value of the profit function at this new condition is $L(x) = 19.41$.
***(c)*** We can verify that the point found is indeed a maximum by analyzing the Hessian matrix $H(x)$:
Since we already computed the Jacobian in the previous item, we have:
>$ L(\boldsymbol{x})=a-\frac{b}{x_{1}}-cx_{2}-d\frac{x_{1}}{x_{2}}
\\
\nabla L(x) =
\begin{pmatrix}
\frac{b}{x_{1}^{2}}-\frac{d}{x_{2}} \\
-c+d\frac{x_{1}}{x_{2}^{2}}
\end{pmatrix}
\\
$
Computing the Hessian matrix:
>$
H(x) =
\begin{pmatrix}
-\frac{2b}{x_{1}^{3}} & \ +\frac{d}{x_{2}^{2}} \\
\frac{d}{x_{2}^{2}} & \ -2d\frac{x_{1}}{x_{2}^{3}}
\end{pmatrix}
\\
$
We can evaluate the Hessian matrix directly at the optimal point $ x^* =
\begin{pmatrix}
1.36 \cdot 10^{-2} \\
9.21 \cdot 10^{-3}
\end{pmatrix}:
\\
$
>$
H(x^*) =
\begin{pmatrix}
-400001 & 294722 \\
294722 & -868613
\end{pmatrix}
\\
$
Since the matrix $H(x^*)$ is only symmetric, and not diagonal at this point, we must compute its eigenvalues $(\lambda)$:
```
H = np.array([[-400001, 294722],
[294722, -868613]])
sigma = np.linalg.eigvals(H)
sigma
```
array([ -257796.23133594, -1010817.76866406])
```
# Check whether the eigenvalues are positive:
import numpy as np
def is_pos_def(x):
return np.all(np.linalg.eigvals(x) > 0)
is_pos_def(H)
```
False
$
\\
$
>$
\lambda =
\begin{pmatrix}
-257796 \\
-1010818
\end{pmatrix}
\\
$
***Answer (c):*** Since all of its eigenvalues are negative, the matrix $H(x^*)$ is negative definite at the point $ x^* =
\begin{pmatrix}
1.36 \cdot 10^{-2} \\
9.21 \cdot 10^{-3}
\end{pmatrix};
\\
$ which implies a local maximum point.
***(c) Alternative method:*** Although Python is not natively a symbolic-computation language, it is possible to import a library that provides symbolic methods ***(Sympy)*** and then compute the Jacobian and Hessian matrices directly, when the problems are simple enough:
Rendering Sympy equations requires MathJax to be available in each cell output. The following function makes that happen:
```
from IPython.display import Math, HTML
def enable_sympy_in_cell():
display(HTML(""))
get_ipython().events.register('pre_run_cell', enable_sympy_in_cell)
```
```
from sympy import symbols, Function, hessian, Matrix, init_printing
from sympy.abc import x, y
init_printing()
## Define the function L(x1, x2):
x1, x2, a, b, c, d = symbols('x1 x2 a b c d')
f = a-(b/x1)-(c*x2)-(d*(x1/x2))
f
```
```
Matrix([f]).jacobian([x1, x2])
```
$$\left[\begin{matrix}\frac{b}{x_{1}^{2}} - \frac{d}{x_{2}} & - c + \frac{d x_{1}}{x_{2}^{2}}\end{matrix}\right]$$
```
hessian(f, (x1,x2))
```
$$\left[\begin{matrix}- \frac{2 b}{x_{1}^{3}} & \frac{d}{x_{2}^{2}}\\\frac{d}{x_{2}^{2}} & - \frac{2 d}{x_{2}^{3}} x_{1}\end{matrix}\right]$$
```
## Substitute the constants 'a,b,c,d' into the original profit function:
import sympy as sp
from sympy import *
import numpy as np
x1, x2 = sp.symbols('x1 x2', real=True)
f = 129.93-(0.5/x1)-(4000*x2)-(25*(x1/x2))
F = sp.Matrix([f])
F
```
$$\left[\begin{matrix}- \frac{25 x_{1}}{x_{2}} - 4000 x_{2} + 129.93 - \frac{0.5}{x_{1}}\end{matrix}\right]$$
```
##Calculando a função hessiana:
H = hessian(f, (x1,x2))
H
```
$$\left[\begin{matrix}- \frac{1.0}{x_{1}^{3}} & \frac{25}{x_{2}^{2}}\\\frac{25}{x_{2}^{2}} & - \frac{50 x_{1}}{x_{2}^{3}}\end{matrix}\right]$$
```
## Evaluate the Hessian at the optimal point [0.01357208, 0.00921008]:
Hp = hessian(f, [x1,x2]).subs([(x1,0.01357208), (x2,0.00921008)])
Hp
```
$$\left[\begin{matrix}-400000.714671238 & 294722.439673709\\294722.439673709 & -868612.765371583\end{matrix}\right]$$
```
## Finally, compute the eigenvalues of H at the optimal point:
Hp.eigenvals()
```
These eigenvalues are negative, and the notation :1 means they have algebraic multiplicity 1.
***(c) Additional:*** We can also show that the point found is a maximum by plotting the objective function $L(x)$ as a 3D surface and as contour plots:
```
import numpy as np
import scipy.integrate
import matplotlib.pyplot as plt
%matplotlib inline
from scipy.optimize import minimize
# Inspect the shape of the surface:
from matplotlib import cm
x1 = np.linspace(1E-2, 2E-2, 50)
x2 = np.linspace(5E-3, 2E-2, 50)
X, Y = np.meshgrid(x1, x2)
Z = 129.93 - 0.5/X - 4000*Y - 25*(X/Y)
fig, ax = plt.subplots(subplot_kw={'projection': '3d'})
ax.plot_surface(X, Y, Z, cmap=cm.rainbow)
ax.set_xlabel('$x1$')
ax.set_ylabel('$x2$')
ax.set_zlabel('$L(x1,x2)$');
```
```
# Contour plot (with colorbar) - with the maximum point marked:
plt.contourf(X, Y, Z, 50, cmap='RdGy')
plt.colorbar();
plt.scatter([1.36E-2], [9.21E-3])
plt.annotate("(1.36E-2, 9.21E-3)", (1.36E-2, 9.21E-3))
plt.show()
```
```
# Contour plot (with colorbar) - with labels:
contours = plt.contour(X, Y, Z, 5, colors='black')
plt.clabel(contours, inline=True, fontsize=8)
plt.imshow(Z, extent=[0.005, 0.025, 0.0025, 0.025], origin='lower',
cmap='rainbow', alpha=0.5)
plt.colorbar();
```
```
# Contour plot (default) - with labels:
fig, ax = plt.subplots()
CS = ax.contour(X, Y, Z, [10,15,18,19, 19.4], cmap='jet')
ax.clabel(CS, inline=1, fontsize=10)
ax.set_title('Contour plot with labels')
ax.set_xlabel('$x1$')
ax.set_ylabel('$x2$')
```
***(d)*** We have the same form of the objective function $L(x)$, but with different constants:
$a=279.72;\ b=2.0; \ c=4000; \ d=100$.
```
# Value of the new objective (profit) function at the maximum point from part (b):
# Remember to switch the sign back to sign=1.0:
def func(x, a=279.72, b=2.0, c=4000, d=100, sign=1.0):
return sign*(a - b/x[0] - c*x[1] - d*x[0]/x[1])
result = func([1.36E-2, 9.21E-3])
print(result)
```
-51.844404419748344
```
## Now run a maximization for the new objective function:
def func(x, a=279.72, b=2.0, c=4000, d=100, sign=-1.0):
return sign*(a - b/x[0] - c*x[1] - d*x[0]/x[1])
## Define the derivatives of f with respect to x1 and x2 (func_deriv):
def func_deriv(x, a=279.72, b=2.0, c=4000, d=100, sign=-1.0):
dfdx0 = sign*(0 + b/(x[0])**2 + 0 - d/x[1])
dfdx1 = sign*(0 + 0 - c + d*x[0]/(x[1])**2)
return np.array([ dfdx0, dfdx1 ])
## Define the bounds:
from scipy.optimize import Bounds
bounds = Bounds([1E-9, 1E-9], [1.0, 1.0])
## Define the inequality constraints x1 <= 0.02, x2 <= x1:
ineq_cons = {'type': 'ineq',
'fun' : lambda x: np.array([0.02 - x[0],
x[0] - x[1]]),
'jac' : lambda x: np.array([[-1.0, 0.0],
[1.0, -1.0]])}
```
```
# Now an optimization without the inequality constraints can be run
# (only with the physical limits in 'bounds'):
x0 = np.array([0.5, 0.5])
res = minimize(func, x0, jac=func_deriv,
method='SLSQP', options={'ftol': 1e-9, 'disp': True},
bounds = bounds)
print(res.x)
```
Optimization terminated successfully. (Exit mode 0)
Current function value: -1.2246699827823875
Iterations: 16
Function evaluations: 27
Gradient evaluations: 16
[0.02154434 0.02320799]
```
# And the constrained optimization (OPTIONAL - constraints = ineq_cons):
x0 = np.array([0.5, 0.5])
res = minimize(func, x0, jac=func_deriv, constraints=ineq_cons,
method='SLSQP', options={'ftol': 1e-9, 'disp': True},
bounds = bounds)
print(res.x)
```
Optimization terminated successfully. (Exit mode 0)
Current function value: 0.2800000000520271
Iterations: 9
Function evaluations: 14
Gradient evaluations: 9
[0.02 0.02]
```
# Value of the objective (profit) function at the maximum point (unconstrained):
# Remember to switch the sign back to sign=1.0:
def func(x, a=279.72, b=2.0, c=4000, d=100, sign=1.0):
return sign*(a - b/x[0] - c*x[1] - d*x[0]/x[1])
result = func([2.15E-2, 2.32E-2])
print(result)
```
1.2243303929431022
```
# Value of the objective (profit) function at the maximum point (constrained):
# Remember to switch the sign back to sign=1.0:
def func(x, a=279.72, b=2.0, c=4000, d=100, sign=1.0):
return sign*(a - b/x[0] - c*x[1] - d*x[0]/x[1])
result = func([2E-2, 2E-2])
print(result)
```
-0.2799999999999727
```
import numpy as np
import scipy.integrate
import matplotlib.pyplot as plt
%matplotlib inline
from scipy.optimize import minimize
# Inspect the shape of the surface:
from matplotlib import cm
x1 = np.linspace(1E-2, 5E-2, 50)
x2 = np.linspace(1E-2, 1E-1, 50)
X, Y = np.meshgrid(x1, x2)
Z = 279.72 - 2.0/X - 4000*Y - 100*(X/Y)
fig, ax = plt.subplots(subplot_kw={'projection': '3d'})
ax.plot_surface(X, Y, Z, cmap=cm.rainbow)
ax.set_xlabel('$x1$')
ax.set_ylabel('$x2$')
ax.set_zlabel('$L(x1,x2)$');
```
***Answer (d):*** The new value of the profit function at the condition from part (b) would be $L(x) = -51.84$.
Eng. Fiona has two options:
i) **Shut down the plant.** Because with the new objective function and the imposed constraints $x_{1}\leq 0.02$ and $x_{2}\leq x_{1}$, the best achievable profit is still negative $(-0.28)$, which is economically unviable.
ii) **Keep operating the plant**, if it is possible to relax the imposed constraints $x_{1}\leq 0.02$ and $x_{2}\leq x_{1}$. In that (unconstrained) case a positive profit of $1.22$ is achievable, at $x = (2.15 \cdot 10^{-2}, 2.32 \cdot 10^{-2})$. Even so, this profit seems too low to be economically viable.
$\\
$
$\\
$
5) Determine the dimensions of the rectangular parallelepiped, whose diagonal has
length $d$, that has the largest possible volume.
$\\
$
## ***Solution:***
The problem consists of maximizing the objective function representing the volume of the parallelepiped, with dimensions $(a, \ b, \ c)$ and diagonal $d$:
The objective function representing the volume, in terms of the dimensions $(a, \ b, \ c)$, can be expressed as:
>$V(a,b,c) = a\cdot b\cdot c$
To express the diagonal $d$ in terms of the dimensions, we can use some geometric relations (see figure - interior triangle):
>$d^{2}=c^{2}+f^{2} \\
f^{2}=a^{2}+b^{2} \\
d^{2}=c^{2}+b^{2}+a^{2}$
To define a constraint, we can solve the last equation for one of the dimensions $(c)$:
>$c=\sqrt{d^{2}-a^{2}-b^{2}}$
Then, we can rewrite the volume as:
>$V(a,b)=ab\sqrt{d^{2}-a^{2}-b^{2}}$
Now, we need to maximize V under the constraints $(a,b >0)$ and $(a^{2}+b^{2}\leq d^{2})$.
We can then find the critical points by differentiating $V$ with respect to $a$ and $b$:
>$V_{a}(a,b)=b\sqrt{d^{2}-a^{2}-b^{2}}-\frac{a^{2}b}{\sqrt{d^{2}-a^{2}-b^{2}}} = \frac{1}{\sqrt{d^{2}-a^{2}-b^{2}}}(d^{2}-2a^{2}-b^{2})b, \\
V_{b}(a,b)=a\sqrt{d^{2}-a^{2}-b^{2}}-\frac{ab^{2}}{\sqrt{d^{2}-a^{2}-b^{2}}}=\frac{1}{\sqrt{d^{2}-a^{2}-b^{2}}}(d^{2}-a^{2}-2b^{2})a$
Recalling that $a,b \neq 0$, as stated above, and setting the two expressions to zero, the following relations must hold:
>$(d^{2}-2a^{2}-b^{2}) =0,\\
(d^{2}-a^{2}-2b^{2}) =0$
Rewriting the first equation as $b^{2} =d^{2}-2a^{2}$ and substituting into the second one, we get:
>$
d^{2}-a^{2}-2(d^{2}-2a^{2}) = 0 \\
3a^{2} = d^{2} \\
a = \frac{d}{\sqrt{3}}$
Substituting $a$ into the first equation gives $b = \frac{d}{\sqrt{3}}$.
Substituting $a$ and $b$ into $c=\sqrt{d^{2}-a^{2}-b^{2}}$ gives $c = \frac{d}{\sqrt{3}}$.
Therefore, the maximum volume is obtained when:
>$a=b=c=\frac{d}{\sqrt{3}} \\
V_{max}=\left (\frac{d}{\sqrt{3}} \right )^{3} = \frac{d^{3}}{3\sqrt{3}}$
***Answer (5):*** The maximum volume is $V_{max}= \frac{d^{3}}{3\sqrt{3}}$, with dimensions $a=b=c=\frac{d}{\sqrt{3}}$.
# ***Alternative solution:***
The problem can also be solved by means of Lagrange multipliers:
With $f(a,b,c)=abc;$ $ \ \ g(a,b,c)=a^{2} + b^{2} + c^{2} - d^{2}=0$
Then,
>$\bigtriangledown f(a,b,c)=\left \langle bc,ac,ab \right \rangle, \\
\bigtriangledown g(a,b,c)=\left \langle 2a,2b,2c \right \rangle$
And the equations are:
>$bc=2a\lambda ,\\
ac=2b\lambda ,\\
ab=2c\lambda .$
Hence,
>$abc = 2a^{2}
\lambda =2b^{2}\lambda =2c^{2}\lambda $,
which we obtain by multiplying each equation by $a,b,c,$ respectively.
Then, since $a,b,c>0,$ we have either $\lambda=0$ or $a=b=c.$
If $\lambda=0,$ then $a,b$ or $c=0$, which cannot happen.
So we must have $a=b=c$; and, from the constraint equation, this means that:
>$
a=b=c=\frac{d}{\sqrt{3}}
$
And we again obtain:
>$
V_{max}=\left (\frac{d}{\sqrt{3}} \right )^{3} = \frac{d^{3}}{3\sqrt{3}}$
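As a quick check of this result, the candidate point can be substituted back symbolically (a minimal SymPy sketch):

```
import sympy as sp

d = sp.symbols('d', positive=True)
a = b = c = d / sp.sqrt(3)
# the constraint holds and the volume equals d^3/(3*sqrt(3))
print(sp.simplify(a**2 + b**2 + c**2 - d**2))      # 0
print(sp.simplify(a*b*c - d**3/(3*sp.sqrt(3))))    # 0
```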
```
```
```python
%matplotlib inline
import sympy.physics.mechanics as mech
from sympy import S,Rational,pi
import sympy as sp
```
```python
l,t,m,g= sp.symbols(r'l t m g')
q1 = mech.dynamicsymbols('q_1')
q1d = mech.dynamicsymbols('q_1', 1)
# Create and initialize the reference frame
N = mech.ReferenceFrame('N')
pointN = mech.Point('N*')
pointN.set_vel(N, 0)
# Create the points
point1 = pointN.locatenew('p_1', l*(sp.sin(q1)*N.x-sp.cos(q1)*N.y))
# Set the points' velocities
point1.set_vel(N, point1.pos_from(pointN).dt(N))
# Create the particles
particle1 = mech.Particle('P_1',point1,m)
# Set the particles' potential energy
particle1.potential_energy = particle1.mass*g*point1.pos_from(pointN).dot(N.y)
# Define forces not coming from a potential function
forces=None
# Construct the Lagrangian
L = mech.Lagrangian(N, particle1)
# Create the LagrangesMethod object
LM = mech.LagrangesMethod(L, [q1], hol_coneqs=None, forcelist=forces, frame=N)
# Form Lagranges Equations
ELeqns = LM.form_lagranges_equations()
sp.simplify(ELeqns)
# # Holonomic Constraint Equations
# f_c = Matrix([q1**2 + q2**2 - L**2,q1**2 + q2**2 - L**2])
```
```python
sp.simplify(LM.rhs())
```
$\displaystyle \left[\begin{matrix}\frac{d}{d t} \operatorname{q_{1}}{\left(t \right)}\\- \frac{g \sin{\left(\operatorname{q_{1}}{\left(t \right)} \right)}}{l}\end{matrix}\right]$
```python
from numpy import array, linspace, sin, cos
from pydy.system import System
import numpy as np
sys = System(LM,constants={
m:1.0,l:1.0,g:9.81},
initial_conditions={
q1:1.5,q1d:0.0},
times=linspace(0.0, 10.0, 1000))
y = sys.integrate()
```
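A minimal way to visualize the resulting trajectory (assuming, as returned by `System.integrate`, that the first state column holds $q_1$):

```python
import matplotlib.pyplot as plt

plt.plot(sys.times, y[:, 0])   # pendulum angle q_1(t)
plt.xlabel('t [s]')
plt.ylabel('$q_1$ [rad]')
plt.show()
```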
```python
```
Euler Problem 187
=================
A composite is a number containing at least two prime factors. For example, 15 = 3 × 5; 9 = 3 × 3; 12 = 2 × 2 × 3.
There are ten composites below thirty containing precisely two, not necessarily distinct, prime factors: 4, 6, 9, 10, 14, 15, 21, 22, 25, 26.
How many composite integers, n < 10<sup>8</sup>, have precisely two, not necessarily distinct, prime factors?
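One way to count them — the approach used in the code below — is to pair each semiprime with its smaller prime factor $p < 10^4 = \sqrt{10^8}$ and count the admissible larger factors with the prime-counting function $\pi$:

$$
\#\{\, n < 10^8 : n = p\,q,\ p \le q \text{ prime} \,\} \;=\; \sum_{p < 10^4} \left( \pi\!\left(\tfrac{10^8}{p}\right) - \pi(p) + 1 \right)
$$

In the code, `i` is the zero-based index of $p$ in the sieve, so $i = \pi(p) - 1$ and `primepi(10**8 / p) - i` is exactly the summand above.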
```python
from sympy import sieve, primepi
print(sum(primepi(10**8 / p) - i for i, p in enumerate(sieve.primerange(1, 10**4))))
```
17427258
```python
```
# Algorithms Exercise 2
## Imports
```python
%matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns
import numpy as np
```
## Peak finding
Write a function `find_peaks` that finds and returns the indices of the local maxima in a sequence. Your function should:
* Properly handle local maxima at the endpoints of the input array.
* Return a Numpy array of integer indices.
* Handle any Python iterable as input.
```python
def find_peaks(a):
    """Find the indices of the local maxima in a sequence."""
    # YOUR CODE HERE
    a = np.asarray(list(a))          # accept any Python iterable
    s = []                           # local list, so repeated calls start fresh
    if a[0] > a[1]:                  # the first element is a peak if it beats the second
        s.append(0)
    for x in range(1, len(a) - 1):   # interior elements: bigger than both neighbours
        if a[x] > a[x - 1] and a[x] > a[x + 1]:
            s.append(x)
    if a[-1] > a[-2]:                # the last element is a peak if it beats the one before it
        s.append(len(a) - 1)
    return np.array(s, dtype=int)    # return a NumPy array of integer indices

# quick manual check (the asserts in the next cell pass now that `s` is no longer a module-level list)
p1 = find_peaks([2,0,1,0,2,0,1])
p1
```

array([0, 2, 4, 6])
```python
p1 = find_peaks([2,0,1,0,2,0,1])
assert np.allclose(p1, np.array([0,2,4,6]))
p2 = find_peaks(np.array([0,1,2,3]))
assert np.allclose(p2, np.array([3]))
p3 = find_peaks([3,2,1,0])
assert np.allclose(p3, np.array([0]))
```
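For comparison, here is an equivalent vectorized formulation (a sketch for cross-checking, not part of the graded solution above):

```python
import numpy as np

def find_peaks_vectorized(a):
    a = np.asarray(list(a))
    rises = np.r_[True, a[1:] > a[:-1]]   # beats the left neighbour (or is the left endpoint)
    falls = np.r_[a[:-1] > a[1:], True]   # beats the right neighbour (or is the right endpoint)
    return np.flatnonzero(rises & falls)

assert np.allclose(find_peaks_vectorized([2,0,1,0,2,0,1]), [0, 2, 4, 6])
```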
Here is a string with the first 10000 digits of $\pi$ (after the decimal). Write code to perform the following:
* Convert that string to a Numpy array of integers.
* Find the indices of the local maxima in the digits of $\pi$.
* Use `np.diff` to find the distances between consequtive local maxima.
* Visualize that distribution using an appropriately customized histogram.
```python
from sympy import pi, N
pi_digits_str = str(N(pi, 10001))[2:]
```
```python
# YOUR CODE HERE
# num=[]
# pi_digits_str[0]
# for i in range(len(pi_digits_str)):
# num[i]=pi_digits_str[i]
f=plt.figure(figsize=(12,8))
plt.title("Histogram of Distances between Peaks in Pi")
plt.ylabel("Number of Occurences")
plt.xlabel("Distance from Previous Peak")
plt.tick_params(direction='out')
plt.box(True)
plt.grid(False)
test=np.array(list(pi_digits_str),dtype=np.int)
peaks=find_peaks(test)
dist=np.diff(peaks)
plt.hist(dist,bins=range(15));
```
```python
assert True # use this for grading the pi digits histogram
```
# Logistic Regression
```python
from IPython.display import Image
Image('../../../python_for_probability_statistics_and_machine_learning.jpg')
```
The Bernoulli distribution we studied earlier answers the question of which of
two outcomes ($Y \in \lbrace 0,1 \rbrace$) would be selected with probability,
$p$.
$$
\mathbb{P}(Y) = p^Y (1-p)^{ 1-Y }
$$
We also know how to solve the corresponding likelihood function for
the maximum likelihood estimate of $p$ given observations of the output,
$\lbrace Y_i \rbrace_{i=1}^n$. However, now we want to include other factors in
our estimate of $p$. For example, suppose we observe not just the outcomes, but
a corresponding continuous variable, $x$. That is, the observed data is now
$\lbrace (x_i,Y_i) \rbrace_{i=1}^n$ How can we incorporate $x$ into our
estimation of $p$?
The most straightforward idea is to model $p= a x + b$ where $a,b$ are
parameters of a fitted line. However, because $p$ is a probability with value
bounded between zero and one, we need to wrap this estimate in another function
that can map the entire real line into the $[0,1]$ interval. The logistic
(a.k.a. sigmoid) function has this property,
$$
\theta(s) = \frac{e^s}{1+e^s}
$$
Thus, the new parameterized estimate for $p$ is the following,
<!-- Equation labels as ordinary links -->
<div id="eq:prob"></div>
$$
\begin{equation}
\hat{p} = \theta(a x+b)= \frac{e^{a x + b}}{1+e^{a x + b}}
\label{eq:prob} \tag{1}
\end{equation}
$$
This is usually expressed using the *logit* function,
$$
\texttt{logit}(t)= \log \frac{t}{1-t}
$$
as,
$$
\texttt{logit}(p) = b + a x
$$
More continuous variables can be accommodated easily as
$$
\texttt{logit}(p) = b + \sum_k a_k x_k
$$
This can be further extended beyond the binary case to multiple
target labels. The maximum likelihood estimate of this uses
numerical optimization methods that are implemented in Scikit-learn.
Let's construct some data to see how this works. In the following, we assign
class labels to a set of randomly scattered points in the two-dimensional
plane,
```python
%matplotlib inline
import numpy as np
from matplotlib.pylab import subplots
v = 0.9
@np.vectorize
def gen_y(x):
if x<5: return np.random.choice([0,1],p=[v,1-v])
else: return np.random.choice([0,1],p=[1-v,v])
xi = np.sort(np.random.rand(500)*10)
yi = gen_y(xi)
```
**Programming Tip.**
The `np.vectorize` decorator used in the code above makes it easy to avoid
looping in code that uses Numpy arrays by embedding the looping semantics
inside of the so-decorated function. Note, however, that this does not
necessarily accelerate the wrapped function. It's mainly for convenience.
[Figure](#fig:logreg_001) shows a scatter plot of the data we constructed in
the above code, $\lbrace (x_i,Y_i) \rbrace$. As constructed, it is more
likely that large values of $x$ correspond to $Y=1$. On the other hand, values
of $x \in [4,6]$ of either category are heavily overlapped. This means that $x$
is not a particularly strong indicator of $Y$ in this region.
[Figure](#fig:logreg_002) shows the fitted logistic regression curve against the
same
data. The points along the curve are the probabilities that each point lies in
either of the two categories. For large values of $x$ the curve is near one,
meaning that the probability that the associated $Y$ value equals one is high. On
the other extreme, small values of $x$ mean that this probability is close to
zero. Because there are only two possible categories, this means that the
probability of $Y=0$ is thereby higher. The region in the middle, corresponding
to intermediate probabilities, reflects the ambiguity between the two categories
because of the overlap in the data for this region. Thus, logistic regression
cannot make a strong case for one category here.
The following code fits the logistic regression model,
```python
fig,ax=subplots()
_=ax.plot(xi,yi,'o',color='gray',alpha=.3)
_=ax.axis(ymax=1.1,ymin=-0.1)
_=ax.set_xlabel(r'$X$',fontsize=22)
_=ax.set_ylabel(r'$Y$',fontsize=22)
```
```python
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr.fit(np.c_[xi],yi)
```
LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,
intercept_scaling=1, max_iter=100, multi_class='ovr', n_jobs=1,
penalty='l2', random_state=None, solver='liblinear', tol=0.0001,
verbose=0, warm_start=False)
```python
fig,ax=subplots()
xii=np.linspace(0,10,20)
_=ax.plot(xii,lr.predict_proba(np.c_[xii])[:,1],'k-',lw=3)
_=ax.plot(xi,yi,'o',color='gray',alpha=.3)
_=ax.axis(ymax=1.1,ymin=-0.1)
_=ax.set_xlabel(r'$x$',fontsize=20)
_=ax.set_ylabel(r'$\mathbb{P}(Y)$',fontsize=20)
```
<!-- dom:FIGURE: [fig-machine_learning/logreg_001.png, width=500 frac=0.75]
This scatterplot shows the binary $Y$ variables and the corresponding $x$ data
for each category. <div id="fig:logreg_001"></div> -->
<!-- begin figure -->
<div id="fig:logreg_001"></div>
<p>This scatterplot shows the binary $Y$ variables and the corresponding $x$
data for each category.</p>
<!-- end figure -->
<!-- dom:FIGURE: [fig-machine_learning/logreg_002.png, width=500 frac=0.75]
This shows the fitted logistic regression on the data shown in
[Figure](#fig:logreg_001). The points along the curve are the probabilities that
each point lies in either of the two categories. <div
id="fig:logreg_002"></div> -->
<!-- begin figure -->
<div id="fig:logreg_002"></div>
<p>This shows the fitted logistic regression on the data shown in
[Figure](#fig:logreg_001). The points along the curve are the probabilities that
each point lies in either of the two categories.</p>
<!-- end figure -->
For a deeper understanding of logistic regression, we need to alter our
notation slightly and once again use our projection methods. More generally we
can rewrite Equation [eq:prob](#eq:prob) as the following,
<!-- Equation labels as ordinary links -->
<div id="eq:probbeta"></div>
$$
\begin{equation}
p(\mathbf{x}) = \frac{1}{1+\exp(-\boldsymbol{\beta}^T \mathbf{x})}
\label{eq:probbeta} \tag{2}
\end{equation}
$$
where $\boldsymbol{\beta}, \mathbf{x}\in \mathbb{R}^n$. From our
prior work on projection we know that the signed perpendicular distance between
$\mathbf{x}$ and the linear boundary described by $\boldsymbol{\beta}$ is
$\boldsymbol{\beta}^T \mathbf{x}/\Vert\boldsymbol{\beta}\Vert$. This means
that the probability that is assigned to any point in $\mathbb{R}^n$ is a
function of how close that point is to the linear boundary described by the
following equation,
$$
\boldsymbol{\beta}^T \mathbf{x} = 0
$$
But there is something subtle hiding here. Note that
for any $\alpha\in\mathbb{R}$,
$$
\alpha\boldsymbol{\beta}^T \mathbf{x} = 0
$$
describes the *same* hyperplane. This means that we can multiply
$\boldsymbol{\beta}$ by an arbitrary scalar and still get the same geometry.
However, because of $\exp(-\alpha\boldsymbol{\beta}^T \mathbf{x})$ in Equation
[eq:probbeta](#eq:probbeta), this scaling determines the intensity of the
probability
attributed to $\mathbf{x}$. This is illustrated in [Figure](#fig:logreg_003).
The panel on the left shows two categories (squares/circles) split by the
dotted line that is determined by $\boldsymbol{\beta}^T\mathbf{x}=0$. The
background colors shows the probabilities assigned to points in the plane. The
right panel shows that by scaling with $\alpha$, we can increase the
probabilities of class membership for the given points, given the exact same
geometry. The points near the boundary have lower probabilities because they
could easily be on the opposite side. However, by scaling by $\alpha$, we can
raise those probabilities to any desired level at the cost of driving the
points further from the boundary closer to one. Why is this a problem? By
driving the probabilities arbitrarily using $\alpha$, we can overemphasize the
training set at the cost of out-of-sample data. That is, we may wind up
insisting on emphatic class membership of yet unseen points that are close to
the boundary that otherwise would have more equivocal probabilities (say, near
$1/2$). Once again, this is another manifestation of bias/variance trade-off.
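A small numerical illustration of this effect (a sketch; the boundary $\boldsymbol{\beta}^T \mathbf{x}=0$ is unchanged while the assigned probability saturates):

```python
import numpy as np
beta = np.array([1., -1., 1.])     # last entry is the affine offset
x = np.array([-0.4, 0.5, 1.])      # a point with a small margin from the boundary
for alpha in [1, 5, 50]:
    p = 1/(1 + np.exp(-alpha*beta.dot(x)))
    print(alpha, p)                # roughly 0.52, 0.62, 0.99
```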
<!-- dom:FIGURE: [fig-machine_learning/logreg_003.png, width=500 frac=1.25]
Scaling can arbitrarily increase the probabilities of points near the decision
boundary. <div id="fig:logreg_003"></div> -->
<!-- begin figure -->
<div id="fig:logreg_003"></div>
<p>Scaling can arbitrarily increase the probabilities of points near the
decision boundary.</p>
<!-- end figure -->
Regularization is a method that controls this effect by penalizing the size of
$\beta$ as part of its solution. Algorithmically, logistic regression works by
iteratively solving a sequence of weighted least squares problems. Regression
adds a $\Vert\boldsymbol{\beta}\Vert/C$ term to the least squares error. To see
this in action, let's create some data from a logistic regression and see if we
can recover it using Scikit-learn. Let's start with a scatter of points in the
two-dimensional plane,
```python
x0,x1=np.random.rand(2,20)*6-3
X = np.c_[x0,x1,x1*0+1] # stack as columns
```
Note that `X` has a third column of all ones. This is a
trick to allow the corresponding line to be offset from the origin
in the two-dimensional plane. Next, we create a linear boundary
and assign the class probabilities according to proximity to the
boundary.
```python
beta = np.array([1,-1,1]) # last coordinate for affine offset
prd = X.dot(beta)
probs = 1/(1+np.exp(-prd/np.linalg.norm(beta)))
c = (prd>0) # boolean array class labels
```
This establishes the training data. The next block
creates the logistic regression object and fits the data.
```python
lr = LogisticRegression()
_=lr.fit(X[:,:-1],c)
```
Note that we have to omit the third dimension because of
how Scikit-learn internally breaks down the components of the
boundary. The resulting code extracts the corresponding
$\boldsymbol{\beta}$ from the `LogisticRegression` object.
```python
betah = np.r_[lr.coef_.flat,lr.intercept_]
```
**Programming Tip.**
The Numpy `np.r_` object provides a quick way to stack Numpy
arrays horizontally instead of using `np.hstack`.
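For instance,

```python
import numpy as np
a, b = np.array([1, 2]), np.array([3, 4])
print(np.r_[a, b])          # [1 2 3 4]
print(np.hstack([a, b]))    # same result
```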
The resulting boundary is shown in the left panel in
[Figure](#fig:logreg_004). The crosses and triangles represent the two classes
we
created above, along with the separating gray line. The logistic regression
fit produces the dotted black line. The dark circle is the point that logistic
regression categorizes incorrectly. The regularization parameter is $C=1$ by
default. Next, we can change the strength of the regularization parameter as in
the following,
```python
lr = LogisticRegression(C=1000)
```
and then re-fit the data to produce the right panel in
[Figure](#fig:logreg_004). By increasing the regularization
parameter, we essentially nudged the fitting algorithm to
*believe* the data more than the general model. That is, by doing
this we accepted more variance in exchange for better bias.
<!-- dom:FIGURE: [fig-machine_learning/logreg_004.png, width=500 frac=1.25] The
left panel shows the resulting boundary (dashed line) with $C=1$ as the
regularization parameter. The right panel is for $C=1000$. The gray line is the
boundary used to assign the class membership for the synthetic data. The dark
circle is the point that logistic regression categorizes incorrectly. <div
id="fig:logreg_004"></div> -->
<!-- begin figure -->
<div id="fig:logreg_004"></div>
<p>The left panel shows the resulting boundary (dashed line) with $C=1$ as the
regularization parameter. The right panel is for $C=1000$. The gray line is the
boundary used to assign the class membership for the synthetic data. The dark
circle is the point that logistic regression categorizes incorrectly.</p>
<!-- end figure -->
## Generalized Linear Models
Logistic regression is one example of a wider class of generalized linear
models that embed non-linear transformations in the fitting process. Let's back
up and break down logistic regression into smaller parts. As usual, we want to
estimate the conditional expectation $\mathbb{E}(Y\vert X=\mathbf{x})$. For
plain linear regression, we have the following approximation,
$$
\mathbb{E}(Y\vert X=\mathbf{x})\approx\boldsymbol{\beta}^T\mathbf{x}
$$
For notation sake, we call $r(x):=\mathbb{E}(Y\vert X=\mathbf{x})$
the response. For logistic regression, because $Y\in\left\{0,1\right\}$, we
have $\mathbb{E}(Y\vert X=\mathbf{x})=\mathbb{P}(Y\vert X=\mathbf{x})$ and the
transformation makes $r(\mathbf{x})$ linear.
$$
\begin{align*}
\eta(\mathbf{x}) &= \boldsymbol{\beta}^T\mathbf{x} \\\
&= \log \frac{r(\mathbf{x})}{1-r(\mathbf{x})} \\\
&= g(r(\mathbf{x}))
\end{align*}
$$
where $g$ is defined as the logistic *link* function.
The $\eta(x)$ function is the linear predictor. Now that we have
transformed the original data space using the logistic function to
create the setting for the linear predictor, why don't we just do
the same thing for the $Y_i$ data? That is, for plain linear
regression, we usually take data, $\left\{X_i,Y_i\right\}$ and
then use it to fit an approximation to $\mathbb{E}(Y\vert X=x)$.
If we are transforming the conditional expectation using the
logarithm, which we are approximating using $Y_i$, then why don't
we correspondingly transform the binary $Y_i$ data? The answer is
that if we did so then we would get the logarithm of zero (i.e.,
infinity) or one (i.e., zero), which is not workable. The
alternative is to use a linear Taylor approximation, like we did
earlier with the delta method, to expand the $g$ function around
$r(x)$, as in the following,
$$
\begin{align*}
g(Y) &\approx \log\frac{r(x)}{1-r(x)} + \frac{Y-r(x)}{r(x)-r(x)^2} \\\
&= \eta(x)+ \frac{Y-r(x)}{r(x)-r(x)^2}
\end{align*}
$$
The interesting part is the $Y-r(x)$ term, because this is where the
class label data enters the problem. The expectation $\mathbb{E}(Y-r(x)\vert
X)=0$ so we can think of this differential as additive noise that dithers
$\eta(x)$. The variance of $g(Y)$ is the following,
$$
\begin{align*}
\mathbb{V}(g(Y)\vert X)&= \mathbb{V}(\eta(x)\vert X)+\frac{1}{(r(x)(1-r(x)))^2}
\mathbb{V}(Y-r(x)\vert X) \\\
&=\frac{1}{(r(x)(1-r(x)))^2} \mathbb{V}(Y-r(x)\vert X)
\end{align*}
$$
Note that $\mathbb{V}(Y\vert X)=r(x)(1-r(x))$ because $Y$ is
a binary variable. Ultimately, this boils down to the following,
$$
\mathbb{V}(g(Y)\vert X)=\frac{1}{r(x)(1-r(x))}
$$
Note that the variance is a function of $x$, which means it is
*heteroskedastic*. The iterative minimum-variance-finding algorithm that
computes $\boldsymbol{\beta}$ therefore downplays $x$ where $r(x)\approx 0$ and
$r(x)\approx 1$, because the variance blows up there, and gives the most weight
where $r(x)\approx 0.5$, the equivocal points close to the boundary.
For generalized linear models, the above sequence is the same and consists of
three primary ingredients: the linear predictor ($\eta(x)$), the link function
($g(x)$), and the *dispersion scale function*, $V_{ds}$ such that
$\mathbb{V}(Y\vert X)=\sigma^2 V_{ds}(r(x))$. For logistic regression, we have
$V_{ds}(r(x))=r(x)(1-r(x))$ and $\sigma^2=1$. Note that absolute knowledge of
$\sigma^2$ is not important because the iterative algorithm needs only a
relative proportional scale. To sum up, the iterative algorithm takes a linear
prediction for $\eta(x_i)$, computes the transformed responses, $g(Y_i)$,
calculates the weights $w_i=\left[g^\prime(r(x_i))^2 V_{ds}(r(x_i))
\right]^{-1}$, and then does a weighted linear regression of $g(y_i)$ onto
$x_i$ with the weights $w_i$ to compute the next $\boldsymbol{\beta}$.
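The loop just summarized can be written out compactly. The following is a minimal, illustrative sketch of that iteration for the logistic link (the function name `irls_logistic`, the convergence tolerance, and the clipping constant are our own choices, not from the text); it uses the working response $\eta + (Y-r)/(r - r^2)$ and the weights $r(1-r)$ discussed above.
```python
import numpy as np

def irls_logistic(X, y, n_iter=25, tol=1e-8):
    # X is n-by-p and includes a constant column; y holds 0/1 class labels
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta                          # linear predictor eta(x_i)
        r = 1.0/(1.0 + np.exp(-eta))            # current estimate of r(x_i)
        v = np.clip(r*(1.0 - r), 1e-10, None)   # dispersion scale V_ds(r) = r(1-r), clipped for stability
        z = eta + (y - r)/v                     # transformed (working) response g(Y_i)
        W = np.diag(v)                          # weights w_i = [g'(r_i)^2 V_ds(r_i)]^{-1} = r_i(1-r_i)
        beta_new = np.linalg.solve(X.T @ W @ X, X.T @ W @ z)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta
```
Run on data like the two-class set constructed earlier, its coefficients should closely track `LogisticRegression` with a large `C` (i.e., weak regularization).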
More details can be found in the following [[fox2015applied]](#fox2015applied),
[[lindsey1997applying]](#lindsey1997applying),
[[campbell2009generalized]](#campbell2009generalized).
<!-- # *Applied Predictive Modeling by Kuhn*, p. 283, -->
<!-- # Logit function, odds ratio -->
<!-- # *generalized linear models by Rodriguez*, p.72 -->
<!-- # *Scikit-learn cookbook*, p.78 -->
```python
from datascience import *
import sympy
import matplotlib.pyplot as plt
import matplotlib as mpl
import matplotlib.patches as patches
plt.style.use('seaborn-muted')
mpl.rcParams['figure.dpi'] = 200
%matplotlib inline
from IPython.display import display
import numpy as np
import pandas as pd
solve = lambda x,y: sympy.solve(x-y)[0] if len(sympy.solve(x-y))==1 else "Not Single Solution"
import warnings
warnings.filterwarnings('ignore')
```
# Market Equilibria
We will now explore the relationship between price and quantity of oranges produced between 1924 and 1938. Since the data {cite}`01demand-fruits` is from the 1920s and 1930s, it is important to remember that the prices are much lower than what they would be today because of inflation, competition, innovations, and other factors. For example, in 1924, a ton of oranges would have cost \$6.63; that same amount in 2019 is \$100.78.
```python
fruitprice = Table.read_table('fruitprice.csv')
fruitprice
```
<table border="1" class="dataframe">
<thead>
<tr>
<th>Year</th> <th>Pear Price</th> <th>Pear Unloads (Tons)</th> <th>Plum Price</th> <th>Plum Unloads</th> <th>Peach Price</th> <th>Peach Unloads</th> <th>Orange Price</th> <th>Orange Unloads</th> <th>NY Factory Wages</th>
</tr>
</thead>
<tbody>
<tr>
<td>1924</td> <td>8.04 </td> <td>18489 </td> <td>8.86 </td> <td>6582 </td> <td>4.96 </td> <td>41880 </td> <td>6.63 </td> <td>21258 </td> <td>27.22 </td>
</tr>
<tr>
<td>1925</td> <td>5.67 </td> <td>21919 </td> <td>7.27 </td> <td>5526 </td> <td>4.87 </td> <td>38772 </td> <td>9.19 </td> <td>15426 </td> <td>28.03 </td>
</tr>
<tr>
<td>1926</td> <td>5.44 </td> <td>29328 </td> <td>6.68 </td> <td>5742 </td> <td>3.35 </td> <td>46516 </td> <td>7.2 </td> <td>24762 </td> <td>28.89 </td>
</tr>
<tr>
<td>1927</td> <td>7.15 </td> <td>17082 </td> <td>8.09 </td> <td>5758 </td> <td>5.7 </td> <td>32500 </td> <td>8.63 </td> <td>22766 </td> <td>29.14 </td>
</tr>
<tr>
<td>1928</td> <td>5.81 </td> <td>20708 </td> <td>7.41 </td> <td>6000 </td> <td>4.13 </td> <td>46820 </td> <td>10.71 </td> <td>18766 </td> <td>29.34 </td>
</tr>
<tr>
<td>1929</td> <td>7.6 </td> <td>13071 </td> <td>10.86 </td> <td>3504 </td> <td>6.7 </td> <td>36990 </td> <td>6.36 </td> <td>35702 </td> <td>29.97 </td>
</tr>
<tr>
<td>1930</td> <td>5.06 </td> <td>22068 </td> <td>6.23 </td> <td>7998 </td> <td>6.35 </td> <td>29680 </td> <td>10.5 </td> <td>23718 </td> <td>28.68 </td>
</tr>
<tr>
<td>1931</td> <td>5.4 </td> <td>19255 </td> <td>6.86 </td> <td>5638 </td> <td>3.91 </td> <td>50940 </td> <td>5.81 </td> <td>39263 </td> <td>26.35 </td>
</tr>
<tr>
<td>1932</td> <td>4.06 </td> <td>17293 </td> <td>6.09 </td> <td>7364 </td> <td>4.57 </td> <td>27642 </td> <td>4.71 </td> <td>38553 </td> <td>21.98 </td>
</tr>
<tr>
<td>1933</td> <td>4.78 </td> <td>11063 </td> <td>5.86 </td> <td>8136 </td> <td>3.57 </td> <td>35560 </td> <td>4.6 </td> <td>36540 </td> <td>22.26 </td>
</tr>
</tbody>
</table>
<p>... (5 rows omitted)</p>
## Finding the Equilibrium
An important concept in economics is the market equilibrium. This is the point at which the demand and supply curves meet and represents the "optimal" level of production and price in that market.
```{admonition} Definition
The **market equilibrium** is the price and quantity at which the demand and supply curves intersect. The price and resulting transaction quantity at the equilibrium is what we would predict to observe in the market.
```
Let's walk through how to find the market equilibrium using the market for oranges as an example.
### Data Preprocessing
Because we are only examining the relationship between prices and quantity for oranges, we can create a new table with the relevant columns: `Year`, `Orange Price`, and `Orange Unloads`. Here, `Orange Price` is measured in dollars, while `Orange Unloads` is measured in tons.
```python
oranges_raw = fruitprice.select("Year", "Orange Price", "Orange Unloads")
oranges_raw
```
<table border="1" class="dataframe">
<thead>
<tr>
<th>Year</th> <th>Orange Price</th> <th>Orange Unloads</th>
</tr>
</thead>
<tbody>
<tr>
<td>1924</td> <td>6.63 </td> <td>21258 </td>
</tr>
<tr>
<td>1925</td> <td>9.19 </td> <td>15426 </td>
</tr>
<tr>
<td>1926</td> <td>7.2 </td> <td>24762 </td>
</tr>
<tr>
<td>1927</td> <td>8.63 </td> <td>22766 </td>
</tr>
<tr>
<td>1928</td> <td>10.71 </td> <td>18766 </td>
</tr>
<tr>
<td>1929</td> <td>6.36 </td> <td>35702 </td>
</tr>
<tr>
<td>1930</td> <td>10.5 </td> <td>23718 </td>
</tr>
<tr>
<td>1931</td> <td>5.81 </td> <td>39263 </td>
</tr>
<tr>
<td>1932</td> <td>4.71 </td> <td>38553 </td>
</tr>
<tr>
<td>1933</td> <td>4.6 </td> <td>36540 </td>
</tr>
</tbody>
</table>
<p>... (5 rows omitted)</p>
Next, we will rename our columns. In this case, let's rename `Orange Unloads` to `Quantity` and `Orange Price` to `Price` for brevity and understandability.
```python
oranges = oranges_raw.relabel("Orange Unloads", "Quantity").relabel("Orange Price", "Price")
oranges
```
<table border="1" class="dataframe">
<thead>
<tr>
<th>Year</th> <th>Price</th> <th>Quantity</th>
</tr>
</thead>
<tbody>
<tr>
<td>1924</td> <td>6.63 </td> <td>21258 </td>
</tr>
<tr>
<td>1925</td> <td>9.19 </td> <td>15426 </td>
</tr>
<tr>
<td>1926</td> <td>7.2 </td> <td>24762 </td>
</tr>
<tr>
<td>1927</td> <td>8.63 </td> <td>22766 </td>
</tr>
<tr>
<td>1928</td> <td>10.71</td> <td>18766 </td>
</tr>
<tr>
<td>1929</td> <td>6.36 </td> <td>35702 </td>
</tr>
<tr>
<td>1930</td> <td>10.5 </td> <td>23718 </td>
</tr>
<tr>
<td>1931</td> <td>5.81 </td> <td>39263 </td>
</tr>
<tr>
<td>1932</td> <td>4.71 </td> <td>38553 </td>
</tr>
<tr>
<td>1933</td> <td>4.6 </td> <td>36540 </td>
</tr>
</tbody>
</table>
<p>... (5 rows omitted)</p>
### Visualize the Relationship
Let's first take a look to see what the relationship between price and quantity is. We would expect to see a downward-sloping relationship between price and quantity; if a product's price increases, consumers will purchase less, and if a product's price decreases, then consumers will purchase more.
We will create a scatterplot between the points.
```python
oranges.scatter("Quantity", "Price", width=5, height=5)
plt.title("Demand Curve for Oranges", fontsize = 16);
```
The visualization shows a negative relationship between quantity and price, which is in line with our expectations: as the price increases, fewer consumers will purchase oranges, so the quantity demanded will decrease. This corresponds to a leftward movement along the demand curve. Alternatively, as the price decreases, the quantity sold will increase because consumers want to maximize their purchasing power and buy more oranges; this is shown by a rightward movement along the curve.
### Fit a Polynomial
We will now quantify our demand curve using NumPy's [`np.polyfit` function](https://numpy.org/doc/stable/reference/generated/numpy.polyfit.html). Recall that `np.polyfit` returns an array of size 2, where the first element is the slope and the second is the $y$-intercept.
For this exercise, we will be expressing demand and supply as quantities in terms of price.
```python
np.polyfit(oranges.column("Price"), oranges.column("Quantity"), 1)
```
array([-3432.84670093, 53625.8748401 ])
This shows that the demand curve is $D(P) = -3433 P+ 53626$. The slope is -3433 and $y$-intercept is 53626. That means that as price increases by 1 unit (in this case, \$1), quantity decreases by 3433 units (in this case, 3433 tons).
### Create the Demand Curve
We will now use SymPy to write out this demand curve. To do so, we start by creating a symbol `P` that we can use to create the equation.
```python
P = sympy.Symbol("P")
demand = -3432.846 * P + 53625.87
demand
```
$\displaystyle 53625.87 - 3432.846 P$
### Create the Supply Curve
As you've learned, the supply curve is the relationship between the price of a good or service and the quantity of that good or service that the seller is willing to supply. It shows how much of a good suppliers are willing and able to supply at different prices. In this case, as the price of the oranges increases, the quantity of oranges that orange manufacturers are willing to supply increases. They capture the producer's side of market decisions and are upward-sloping.
Let's now assume that the supply curve is given by $S(P) = 4348P$. (Note that this supply curve is not based on data.)
```python
supply = 4348 * P
supply
```
$\displaystyle 4348 P$
This means that as the price of oranges increases by 1, the quantity supplied increases by 4348. At a price of 0, no oranges are supplied.
### Find the Price Equilibrium
With the supply and demand curves known, we can solve for the equilibrium.
The equilibrium is the point where the supply curve and demand curve intersect, and denotes the price and quantity of the good transacted in the market.
The equilibrium consists of 2 components: the quantity equilibrium and the price equilibrium.
The price equilibrium is the price at which the supply curve and demand curve intersect: the price of the good that consumers desire to purchase at is equivalent to the price of the good that producers want to sell at. There is no shortage or surplus of the product at this price.
Let's find the price equilibrium. To do this, we will use the provided `solve` function. This is a custom function that leverages some SymPy magic and will be provided to you in assignments.
```python
P_star = solve(demand, supply)
P_star
```
$\displaystyle 6.89203590457901$
This means that the price of oranges that consumers want to purchase at and producers want to provide is about \$6.89.
### Find the Quantity Equilibrium
Similarly, the quantity equilibrium is the quantity at which the amount of the good that consumers desire to purchase is equivalent to the amount that producers supply; there is no shortage or surplus of the good at this quantity.
```python
demand.subs(P, P_star)
supply.subs(P, P_star)
```
$\displaystyle 29966.5721131095$
This means that the number of tons of oranges that consumers want to purchase and producers want to provide in this market is about 29,967 tons of oranges.
### Visualize the Market Equilibrium
Now that we have our demand and supply curves and price and quantity equilibria, we can visualize them on a graph to see what they look like.
There are 2 pre-made functions we will use: `plot_equation` and `plot_intercept`.
- `plot_equation`: It takes in the equation we made previously (either demand or supply) and visualizes the equations between the different prices we give it
- `plot_intercept`: It takes in two different equations (demand and supply), finds the point at which the two intersect, and creates a scatter plot of the result
```python
def plot_equation(equation, price_start, price_end, label=None):
plot_prices = [price_start, price_end]
plot_quantities = [equation.subs(list(equation.free_symbols)[0], c) for c in plot_prices]
plt.plot(plot_quantities, plot_prices, label=label)
def plot_intercept(eq1, eq2):
ex = sympy.solve(eq1-eq2)[0]
why = eq1.subs(list(eq1.free_symbols)[0], ex)
plt.scatter([why], [ex], zorder=10, color="tab:orange")
return (ex, why)
```
We can leverage these functions and the equations we made earlier to create a graph that shows the market equilibrium.
```python
mpl.rcParams['figure.dpi'] = 150
plot_equation(demand, 2, 10, label = "Demand")
plot_equation(supply, 2, 10, label = "Supply")
plt.ylim(0,13)
plt.title("Orange Supply and Demand in 1920's and 1930's", fontsize = 15)
plt.xlabel("Quantity (Tons)", fontsize = 14)
plt.ylabel("Price ($)", fontsize = 14)
plot_intercept(supply, demand)
plt.legend(loc = "upper right", fontsize = 12)
plt.show()
```
You can also practice on your own and download additional data sets [here](http://users.stat.ufl.edu/~winner/datasets.html), courtesy of the University of Florida's Statistics Department.
## Movements Away from Equilibrium
What happens to market equilibrium when either supply or demand shifts due to an exogenous shock?
Let's assume that consumers now prefer Green Tea as their hot beverage of choice more so than before. We have an outward shift of the demand curve - quantity demanded is greater at every price. The market is no longer in equilibrium.
```{figure} fig1-demand.png
---
width: 500px
name: demand-shift
---
A shift in the demand curve
```
At the same price level (the former equilibrium price), there is a shortage of Green Tea. The amount demanded by consumers exceeds that supplied by producers: $Q_D > Q_S$. This is a seller's market, as the excess quantity demanded gives producers leverage (or market power) over consumers. They are able to increase the price of Green Tea to clear the shortage. As prices increase, consumers who were willing and able to purchase tea at the previous equilibrium price would leave the market, reducing quantity demanded. $Q_S$ and $Q_D$ move up along their respective curves until the new equilibrium is achieved where $Q_S = Q_D$.
This dual effect of increasing $Q_S$ and $Q_D$ is sometimes referred to as the "invisible hand". Sans government intervention, it clears out the shortage or surplus in the market, resulting in the eventual convergence to a new equilibrium level of quantity $Q^*$ and price $P^*$.
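To see this numerically with the orange-market curves defined above, we can shift the demand curve outward by an arbitrary amount (the 5,000-ton increase below is purely illustrative, not an estimate) and re-solve; both the equilibrium price and quantity rise, as the discussion predicts.
```python
# Illustrative outward demand shift; the 5000-ton increase is an arbitrary choice
demand_shifted = demand + 5000
P_star_new = solve(demand_shifted, supply)
Q_star_new = supply.subs(P, P_star_new)
P_star_new, Q_star_new
```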
# Machine Learning Calculus
```python
from sympy import *
import numpy as np
import math
x = Symbol('x')
y = Symbol('y')
```
```python
limit(1/x**2,x,0)
```
oo
```python
# Find the limit of Functions
from sympy import Limit, Symbol, S
Limit(1/x, x, S.Infinity)
```
Limit(1/x, x, oo, dir='-')
```python
# To find the value
l = Limit(1/x, x, S.Infinity)
l.doit()
```
0
```python
# Differentiation
diff(15*x**100-3*x**12+5*x-46)
```
1500*x**99 - 36*x**11 + 5
```python
# Constant
diff(99)
```
0
```python
# Multiplication by Constant
diff(3*x)
```
3
```python
# Power Rule
diff(x**3)
```
3*x**2
```python
# Sum Rule
diff(x**2+3*x)
```
2*x + 3
```python
# Difference Rule
diff(x**2-3*x)
```
2*x - 3
```python
# Product Rule
diff(x**2*x)
```
3*x**2
```python
# Chain Rule
diff(ln(x**2))
```
2/x
```python
# Example
diff(9*(x+x**2))
```
18*x + 9
```python
# Matrix Calculus
a = diff(3*x**2*y,x)
b = diff(3*x**2*y,y)
c = diff(2*x+8*y**7,x)
d = diff(2*x+y**8,y)
```
```python
print(a)
print(b)
print(c)
print(d)
```
6*x*y
3*x**2
2
8*y**7
```python
import numpy as np
Matrix_Calculus = np.matrix([[a, c], [b,d]])
Matrix_Calculus
```
matrix([[6*x*y, 2],
[3*x**2, 8*y**7]], dtype=object)
```python
# Chain Rule
diff(ln(sin(x**3)**2))
```
6*x**2*cos(x**3)/sin(x**3)
```python
# Sigmoid
# d/dx S(x)=S(x)(1−S(x))
diff(1/(1+math.e**-x),x)
```
1.0*2.71828182845905**(-x)/(1 + 2.71828182845905**(-x))**2
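The result above is written in terms of the numeric constant 2.718…, which obscures the identity stated in the comment. A quick check with sympy's `exp` (instead of `math.e`) confirms that $\frac{d}{dx}S(x) = S(x)(1-S(x))$:
```python
# Verify d/dx S(x) = S(x)*(1 - S(x)) using sympy's exp instead of math.e
sig = 1/(1 + exp(-x))
simplify(diff(sig, x) - sig*(1 - sig))  # returns 0, confirming the identity
```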
```python
# Integral
import sympy
sympy.init_printing()
sympy.integrate(2*x, (x, 1, 0))
```
```python
# Jacobian
from sympy import sin, cos, Matrix
from sympy.abc import rho, phi
X = Matrix([rho*cos(phi), rho*sin(phi), rho**2])
Y = Matrix([rho, phi])
X.jacobian(Y)
```
$$\left[\begin{matrix}\cos{\left (\phi \right )} & - \rho \sin{\left (\phi \right )}\\\sin{\left (\phi \right )} & \rho \cos{\left (\phi \right )}\\2 \rho & 0\end{matrix}\right]$$
```python
# Example Calculating the Jacobian
# Equation
eq = x**2*y + 3/4*x*y + 10
eq
```
```python
# Jacobian matrix
x, y, z = symbols('x y z')
Matrix([sin(x) + y, cos(y) + x, z]).jacobian([x, y, z])
```
$$\left[\begin{matrix}\cos{\left (x \right )} & 1 & 0\\1 & - \sin{\left (y \right )} & 0\\0 & 0 & 1\end{matrix}\right]$$
```python
# (x,y)=(0,0)
x1 = -y
x2 = x - 2*y * (2-x**2)
J = sympy.Matrix([x1,x2])
J.jacobian([x,y])
```
$$\left[\begin{matrix}0 & -1\\4 x y + 1 & 2 x^{2} - 4\end{matrix}\right]$$
```python
J.jacobian([x,y]).subs([(x,0), (y,0)])
```
$$\left[\begin{matrix}0 & -1\\1 & -4\end{matrix}\right]$$
```python
# Derivatives of x
de_x = diff(x**2*y + 3/4*x*y + 10, x)
de_x
```
```python
# Derivatives of y
de_y = diff(x**2*y + 3/4*x*y + 10, y)
de_y
```
```python
# Example
F = sympy.Matrix([de_x,de_y])
F.jacobian([x,y])
```
$$\left[\begin{matrix}2 y & 2 x + 0.75\\2 x + 0.75 & 0\end{matrix}\right]$$
```python
F.jacobian([x,y]).subs([(x,0), (y,0)])
```
$$\left[\begin{matrix}0 & 0.75\\0.75 & 0\end{matrix}\right]$$
```python
# Hessian
from sympy import Function, hessian, pprint
from sympy.abc import x, y
f = Function('f')(x, y)
g1 = Function('g')(x, y)
g2 = x**2+y**2
pprint(hessian(f, (x,y), [g1, g2]))
```
⎡ ∂ ∂ ⎤
⎢ 0 0 ──(g(x, y)) ──(g(x, y)) ⎥
⎢ ∂x ∂y ⎥
⎢ ⎥
⎢ 0 0 2⋅x 2⋅y ⎥
⎢ ⎥
⎢ 2 2 ⎥
⎢∂ ∂ ∂ ⎥
⎢──(g(x, y)) 2⋅x ───(f(x, y)) ─────(f(x, y))⎥
⎢∂x 2 ∂y ∂x ⎥
⎢ ∂x ⎥
⎢ ⎥
⎢ 2 2 ⎥
⎢∂ ∂ ∂ ⎥
⎢──(g(x, y)) 2⋅y ─────(f(x, y)) ───(f(x, y)) ⎥
⎢∂y ∂y ∂x 2 ⎥
⎣ ∂y ⎦
```python
import numpy as np
import matplotlib.pyplot as plt
def tanh(x):
return np.tanh(x)
X = np.linspace(-5, 5, 100)
plt.plot(X, tanh(X),'b')
plt.xlabel('X Axis')
plt.ylabel('Y Axis')
plt.title('Neural Networks - Activation Function')
plt.grid()
plt.text(4, 0.8, r'$\sigma(x)=\tanh{(x)}$', fontsize=16)
plt.show()
```
<Figure size 640x480 with 1 Axes>
```python
def sigma(x):
return (np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))
X = np.linspace(-5, 5, 100)
plt.plot(X, sigma(X),'b')
plt.xlabel('X Axis')
plt.ylabel('Y Axis')
plt.title('Sigmoid Function')
plt.grid()
plt.text(4, 0.8, r'$\sigma(x)=\frac{e^{x}-e^{-x}}{e^{x}+e^{-x}}$', fontsize=16)
plt.show()
```
```python
def derivatives_sigma(x):
return 1 / (np.cosh(x))**2
X = np.linspace(-5, 5, 100)
plt.plot(X, derivatives_sigma(X),'b')
plt.xlabel('X Axis')
plt.ylabel('Y Axis')
plt.title('Sigmoid Function')
plt.grid()
plt.text(4, 0.8, r'$\sigma(x)=\frac{1}{cosh^2(x)}$', fontsize=16)
plt.show()
```
```python
def derivatives_sigma(x):
return 4 / (np.exp(x)+np.exp(-x))**2
X = np.linspace(-5, 5, 100)
plt.plot(X, derivatives_sigma(X),'b')
plt.xlabel('X Axis')
plt.ylabel('Y Axis')
plt.title('Sigmoid Function')
plt.grid()
plt.text(4, 0.8, r'$\sigma(x)=\frac{4}{(e^{x}+e^{-x})^2}$', fontsize=16)
plt.show()
```
```python
# Newton_Raphson
# f(x) - the function of the polynomial
from sympy import *
def f(x):
function = x**3 - x - 1
return function
def derivative(x): #function to find the derivative of the polynomial
derivative = diff(f(x), x)
return derivative
def Newton_Raphson(x):
return (x - (f(x) / derivative(x)))
Newton_Raphson(x)
```
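`Newton_Raphson(x)` above returns the symbolic update rule. As a quick illustration (the starting value 1.5 and the six iterations are arbitrary choices), iterating it numerically converges to the real root of $x^3 - x - 1$, roughly 1.3247:
```python
# Iterate the symbolic Newton-Raphson update from an arbitrary starting guess
x_k = 1.5
for _ in range(6):
    x_k = float(Newton_Raphson(x).subs(x, x_k))
print(x_k, float(f(x_k)))  # x_k is approximately 1.3247, f(x_k) is approximately 0
```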
```python
# Advanced Chain Rule
# F(y)=ln(1−5y2+y3)
from sympy import *
y = symbols('y')
F = symbols('F', cls=Function)
Derivative(ln(1 - 5*y**2 + y**3)).doit()
```
```python
diff(ln(1 - 5*y**2 + y**3))
```
### This IPython Notebook is for performing a fit and generating a figure of the spectrum of sample VG12, in the mesh region with 49+/-6 nm gap. This version is modified to fit for two gaps.
The filename of the figure is **[TBD].pdf**.
Author: Michael Gully-Santiago, `gully@astro.as.utexas.edu`
Date: January 25, 2015
```python
%pylab inline
import emcee
import triangle
import pandas as pd
import seaborn as sns
from astroML.decorators import pickle_results
```
Populating the interactive namespace from numpy and matplotlib
```python
sns.set_context("paper", font_scale=2.0, rc={"lines.linewidth": 2.5})
sns.set(style="ticks")
```
Read in the data. We want "VG12"
```python
df = pd.read_csv('../data/cln_20130916_cary5000.csv', index_col=0)
df = df[df.index > 1250.0]
```
```python
plt.plot(df.index[::4], df.run11[::4]/100.0, label='On-mesh')
plt.plot(df.index, df.run10/100.0, label='Off-mesh')
plt.plot(df.index, df.run12/100.0, label='Shard2')
plt.plot(df.index, df.run9/100.0, label='DSP')
plt.plot(df.index, df.run15/100.0, label='VG08')
plt.plot(df.index, df.run17/100.0, label='VG08 alt')
#plt.plot(x, T_gap_Si_withFF_fast(x, 65.0, 0.5, n1)/T_DSP, label='Model')
plt.legend(loc='best')
plt.ylim(0.80, 1.05)
```
Import all the local models, saved locally as `etalon.py`. See the paper for derivations of these equations.
```python
from etalon import *
np.random.seed(78704)
```
```python
# Introduce the Real data, decimate the data.
x = df.index.values[::4]
N = len(x)
# Define T_DSP for the model
T_DSP = T_gap_Si(x, 0.0)
n1 = sellmeier_Si(x)
# Define uncertainty
yerr = 0.0004*np.ones(N)
iid_cov = np.diag(yerr ** 2)
# Select the spectrum of interest
# Normalize the spectrum by measured DSP Si wafer.
y = df.run11.values[::4]/100.0
```
Define the likelihood. In this case we are using two different gap sizes, but fixed fill factor.
\begin{equation}
T_{mix} = 0.5 \times T_{e}(d_M + \epsilon) + 0.5 \times T_{e}(\epsilon)
\end{equation}
```python
def lnlike(dM, eps, lna, lns):
a, s = np.exp(lna), np.exp(lns)
off_diag_terms = a**2 * np.exp(-0.5 * (x[:, None] - x[None, :])**2 / s**2)
C = iid_cov + off_diag_terms
sgn, logdet = np.linalg.slogdet(C)
if sgn <= 0:
return -np.inf
T_mix = 0.5 * (T_gap_Si_withFF_fast(x, dM+eps, 1.0, n1) + T_gap_Si_withFF_fast(x, eps, 1.0, n1))/T_DSP
r = y - T_mix
return -0.5 * (np.dot(r, np.linalg.solve(C, r)) + logdet)
```
Define the prior. We want to put a Normal prior on $d_M$:
$d_M \sim \mathcal{N}(\hat{d_M}, \sigma_{d_M})$
```python
def lnprior(dM, eps, lna, lns):
prior = -0.5 * ((49.0-dM)/6.0)**2.0
if not (31.0 < dM < 67 and 0.0 < eps < 60.0 and -12 < lna < -2 and 0 < lns < 10):
return -np.inf
return prior
```
Combine likelihood and prior to obtain the posterior.
```python
def lnprob(p):
lp = lnprior(*p)
if not np.isfinite(lp):
return -np.inf
return lp + lnlike(*p)
```
Set up `emcee`.
```python
@pickle_results('SiGaps_12_VG12_twoGaps-sampler.pkl')
def hammer_time(ndim, nwalkers, dM_Guess, eps_Guess, a_Guess, s_Guess, nburnins, ntrials):
# Initialize the walkers
p0 = np.array([dM_Guess, eps_Guess, np.log(a_Guess), np.log(s_Guess)])
pos = [p0 + 1.0e-2*p0 * np.random.randn(ndim) for i in range(nwalkers)]
sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob)
pos, lp, state = sampler.run_mcmc(pos, nburnins)
sampler.reset()
pos, lp, state = sampler.run_mcmc(pos, ntrials)
return sampler
```
Set up the initial conditions
```python
np.random.seed(78704)
ndim, nwalkers = 4, 32
dM_Guess = 49.0
eps_Guess = 15.0
a_Guess = 0.0016
s_Guess = 25.0
nburnins = 200
ntrials = 700
```
Run the burn-in phase. Run the full MCMC. Pickle the results.
```python
sampler = hammer_time(ndim, nwalkers, dM_Guess, eps_Guess, a_Guess, s_Guess, nburnins, ntrials)
```
@pickle_results: computing results and saving to 'SiGaps_12_VG12_twoGaps-sampler.pkl'
warning: cache file 'SiGaps_12_VG12_twoGaps-sampler.pkl' exists
- args match: False
- kwargs match: True
Linearize $a$ and $s$ for easy inspection of the values.
```python
chain = sampler.chain
samples_lin = copy(sampler.flatchain)
samples_lin[:, 2:] = np.exp(samples_lin[:, 2:])
```
Inspect the chain.
```python
fig, axes = plt.subplots(4, 1, figsize=(5, 6), sharex=True)
fig.subplots_adjust(left=0.1, bottom=0.1, right=0.96, top=0.98,
wspace=0.0, hspace=0.05)
[a.plot(np.arange(chain.shape[1]), chain[:, :, i].T, "k", alpha=0.5)
for i, a in enumerate(axes)]
[a.set_ylabel("${0}$".format(l)) for a, l in zip(axes, ["d_M", "\epsilon", "\ln a", "\ln s"])]
axes[-1].set_xlim(0, chain.shape[1])
axes[-1].set_xlabel("iteration");
```
Linearize $a$ and $s$ for graphical purposes.
Make a triangle corner plot.
```python
fig = triangle.corner(samples_lin,
labels=map("${0}$".format, ["d_M", "\epsilon", "a", "s"]),
quantiles=[0.16, 0.84])
```
```python
fig = triangle.corner(samples_lin[:,0:2],
labels=map("${0}$".format, ["d_M", "\epsilon"]),
quantiles=[0.16, 0.84])
plt.savefig("VG12_twoGaps_cornerb.pdf")
```
Calculate confidence intervals.
```python
dM_mcmc, eps_mcmc, a_mcmc, s_mcmc = map(lambda v: (v[1], v[2]-v[1], v[1]-v[0]),
zip(*np.percentile(samples_lin, [16, 50, 84],
axis=0)))
dM_mcmc, eps_mcmc, a_mcmc, s_mcmc
```
((50.610580678662942, 5.1769311919194294, 5.9127063792974255),
(12.239520192129437, 4.5876370711722956, 4.3801270211579979),
(0.0016357814253734524, 0.00044741010755174103, 0.00031281275486805585),
(81.843785041902038, 9.9105375741532953, 9.3345864173309678))
```python
print "{:.0f}^{{+{:.0f}}}_{{-{:.0f}}}".format(*dM_mcmc)
print "{:.0f}^{{+{:.0f}}}_{{-{:.0f}}}".format(*eps_mcmc)
```
51^{+5}_{-6}
12^{+5}_{-4}
Overlay draws from the Gaussian Process.
```python
plt.figure(figsize=(6,3))
for dM, eps, a, s in samples_lin[np.random.randint(len(samples_lin), size=60)]:
off_diag_terms = a**2 * np.exp(-0.5 * (x[:, None] - x[None, :])**2 / s**2)
C = iid_cov + off_diag_terms
fit = 0.5*(T_gap_Si_withFF_fast(x, dM+eps, 1.0, n1)+T_gap_Si_withFF_fast(x, eps, 1.0, n1))/T_DSP
vec = np.random.multivariate_normal(fit, C)
plt.plot(x, vec,"-b", alpha=0.06)
plt.step(x, y,color="k", label='Measurement')
fit = 0.5*(T_gap_Si_withFF_fast(x, dM_mcmc[0]+eps_mcmc[0], 1, n1)+T_gap_Si_withFF_fast(x, eps_mcmc[0], 1, n1))/T_DSP
fit_label = 'Model with $d_M={:.0f}$ nm, $\epsilon={:.0f}$'.format(dM_mcmc[0], eps_mcmc[0])
plt.plot(x, fit, '--', color=sns.xkcd_rgb["pale red"], alpha=1.0, label=fit_label)
fit1 = T_gap_Si_withFF_fast(x, 43, 0.5, n1)/T_DSP
fit2 = T_gap_Si_withFF_fast(x, 55, 0.5, n1)/T_DSP
fit2_label = 'Model with $d_M={:.0f}\pm{:.0f}$ nm, $\epsilon={:.0f}$'.format(49, 6, 0)
plt.fill_between(x, fit1, fit2, alpha=0.6, color=sns.xkcd_rgb["green apple"])
plt.plot([-10, -9], [-10, -9],"-", alpha=0.85, color=sns.xkcd_rgb["green apple"], label=fit2_label)
plt.plot([-10, -9], [-10, -9],"-b", alpha=0.85, label='Draws from GP')
plt.plot([0, 5000], [1.0, 1.0], '-.k', alpha=0.5)
plt.fill_between([1200, 1250], 2.0, 0.0, hatch='\\', alpha=0.4, color='k', label='Si absorption cutoff')
plt.xlabel('$\lambda$ (nm)');
plt.ylabel('$T_{gap}$');
plt.xlim(1200, 2501);
plt.ylim(0.9, 1.019);
plt.legend(loc='lower right')
plt.savefig("VG12_twoGapsb.pdf", bbox_inches='tight')
```
The end.
# Pairs in a trace
Progress done between 2021-04-17 and 2021-04-22 by jgil@eso.org
## Some definitions
An *alphabet* $\Sigma$ is any finite set, and its elements are called *symbols*. A *trace* $T$ is any concatenation of symbols in $\Sigma$, written as $T \in \Sigma^{*}$. The symbols in $\Sigma$ that are also in a trace $T$ are written as $\Sigma(T)$
## Notion of a pair
In any trace we can precisely determine which symbols appear in pairs by rewriting the trace as repetitions. For example in $T=ababab$, $T$ can be written as a repetition of the subtrace $ab$: $T= (ab)(ab)(ab) = 3 \times (ab)$. Note that if any other symbols exist in the trace, the pairing condition of $ab$ remains; for example $(ab)$ is paired in both $T_1=ababab$ and $T_2=abXabYab$.
Let the set $\Sigma$ be an alphabet, $T \in \Sigma^{*}$ a trace, and $a,b \in \Sigma; a \neq b$ a disjoint ordered pair. We say that $(a,b)$ is **paired in $T$** if there exists $n \ge 0$ such that $T \cap \{a,b\} = n \times (ab)$, i.e. the symbols in $T$ restricted to $a$ and $b$ can be written as a repetition of the trace $(ab)$, possibly with $n=0$.
## Pair-cardinality
The pair-cardinality captures the repetitions of $ab$ in $T$.
**Definition**: Given $T \in \Sigma^{*}$ and $a,b \in \Sigma$ we define the function $n_T(ab)$ as:
$$
\begin{align}
n_T: & \Sigma^2 \rightarrow \mathbb{N} \cup \{- \infty \} \\
n_T(ab) & = \left\{
\begin{array}{rcl} n & \text{ if } a \neq b \text{ and } T \cap \{a,b\} = n \times (ab) \\
-\infty & \text{ otherwise } \end{array}\right.
\end{align}
$$
Now we can define more precisely the notion of pairs.
**Definition**: A pair $(a,b)$ is **paired in T** if $n_T(ab) \ge 0$, and $(a,b)$ is **unpaired in T** if $n_T(ab) \lt 0$.
**Definition**: The set of all pairs in $T$ is defined as $\mathcal{P}_T = \{ (ab) | (a,b) \text{ is paired in } T \}$
### Examples
```python
def pair_cardinality(x, y, T):
'''
Returns the cardinality if (x,y) is pair in T, otherwise it returns -1
'''
if x==y:
return -1
intersection=[a for a in T if a==x or a==y ]
cardinality=int(len(intersection) / 2)
return cardinality if [x,y]*cardinality==intersection else -1
```
Let be $\Sigma = \{a,b,x,y,m,n\}$ and $T=axbabyabyx$.
```python
T='axbabyabyx'
```
$(x,y)$ is unpaired in T because $T \cap \{x,y\} = xyyx$ cannot be written as repetitions of $(xy)$. Its cardinality $n_T(xy) = - \infty $
```python
[i for i in T if i in ('x', 'y')]
```
['x', 'y', 'y', 'x']
```python
pair_cardinality('x', 'y', T)
```
-1
The pair $(a,b)$ is paired in $T$ and $n_T(ab)=3$, because $T \cap \{a,b\} = ababab = 3 \times (ab)$.
```python
[i for i in T if i in ('a', 'b')]
```
['a', 'b', 'a', 'b', 'a', 'b']
```python
pair_cardinality('a', 'b', T)
```
3
## Properties of pairs
**Property**: If $(a,b)$ is paired in $T$ with $n_T(ab) \ge 1$, then it is easy to prove that $(b,a)$ is unpaired in $T$, because there does not exist any $n$ that verifies $T \cap \{a,b\} = n \times (ba)$.
```python
pair_cardinality('b', 'a', T)
```
-1
**Property**: If both $m$, $n$ are not in $\Sigma(T)$, then $T \cap \{m,n\} = 0 \times (mn)$ and its cardinality is equal to 0, which means that there is no evidence that $(m,n)$ is unpaired in T.
```python
pair_cardinality('m', 'n', T)
```
0
**Property**: If $a \in \Sigma(T); x \in \Sigma \setminus \Sigma(T)$ i.e. the first symbol is in $T$ and the second not in $T$, then both $(a,x)$ and $(x,a)$ are unpaired in $T$.
```python
pair_cardinality('a', 'm', T)
```
-1
```python
pair_cardinality('m', 'a', T)
```
-1
## All pairs in T
Strategy: $\forall a,b \in \Sigma(T); a \neq b$ test if $n_T(ab) \ge 0$
```python
T='axbabyabyx'
```
```python
list( set( [x for x in T] ) )
```
['a', 'b', 'y', 'x']
```python
import pandas as pd
```
```python
def pairs_in_trace(T):
Pairs_in_T = {}
# The alphabet in T
Sigma_T = list( set( [x for x in T] ) )
for i in range( len(Sigma_T) -1 ):
a = Sigma_T[i]
# By def (a,a) is not paired in T
Pairs_in_T[ (a,a) ] = -1
for b in Sigma_T[i+1:]:
Pairs_in_T[ (a,b) ] = pair_cardinality(a, b, T)
# asymmetric paired property: if (a,b) is paired in T the (b,a) is not
if Pairs_in_T[ (a,b) ]>=0:
Pairs_in_T[ (b,a) ] = -1
else:
Pairs_in_T[ (b,a) ] = pair_cardinality(b, a, T)
return Pairs_in_T
```
```python
# Note that the sequence 1234 can be destroyed if a 4321 is added.
T='123abcabc44321'
```
```python
pairs = pairs_in_trace(T)
P=pd.DataFrame.from_dict( { a[0]+a[1]:b for a,b in pairs.items() }, orient='index', columns=['pairs'] )
# Show only pairs with n_T >= 0
P[ P['pairs'] >= 0 ].T
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>bc</th>
<th>ac</th>
<th>ab</th>
</tr>
</thead>
<tbody>
<tr>
<th>pairs</th>
<td>2</td>
<td>2</td>
<td>2</td>
</tr>
</tbody>
</table>
</div>
```python
P=pairs_in_trace(T)
# See how just (a,b) is pair and the rest has n_T=-1
pd.DataFrame.from_dict( { a[0]+a[1]:b for a,b in P.items() }, orient='index', columns=['pairs'] ).T
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>33</th>
<th>32</th>
<th>23</th>
<th>31</th>
<th>13</th>
<th>34</th>
<th>43</th>
<th>3c</th>
<th>c3</th>
<th>3b</th>
<th>...</th>
<th>4a</th>
<th>a4</th>
<th>cc</th>
<th>cb</th>
<th>bc</th>
<th>ca</th>
<th>ac</th>
<th>bb</th>
<th>ba</th>
<th>ab</th>
</tr>
</thead>
<tbody>
<tr>
<th>pairs</th>
<td>-1</td>
<td>-1</td>
<td>-1</td>
<td>-1</td>
<td>-1</td>
<td>-1</td>
<td>-1</td>
<td>-1</td>
<td>-1</td>
<td>-1</td>
<td>...</td>
<td>-1</td>
<td>-1</td>
<td>-1</td>
<td>-1</td>
<td>2</td>
<td>-1</td>
<td>2</td>
<td>-1</td>
<td>-1</td>
<td>2</td>
</tr>
</tbody>
</table>
<p>1 rows × 48 columns</p>
</div>
```python
T='123abcabc4'
pairs = pairs_in_trace(T)
P=pd.DataFrame.from_dict( { a[0]+a[1]:b for a,b in pairs.items() }, orient='index', columns=['pairs'] )
# Show only pairs with n_T >= 0
P[ P['pairs'] >= 0 ].T
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>23</th>
<th>13</th>
<th>34</th>
<th>12</th>
<th>24</th>
<th>14</th>
<th>bc</th>
<th>ac</th>
<th>ab</th>
</tr>
</thead>
<tbody>
<tr>
<th>pairs</th>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>2</td>
<td>2</td>
<td>2</td>
</tr>
</tbody>
</table>
</div>
## Sequences
(Write here a discussion about notation, path, simple paths, sequences in literature, and be clear what a "sequence" is for us)
**Definition**: A sequence $S \in \Sigma^*$ is a trace whose symbols are different, $ s_i \neq s_j \forall i \neq j$. Similar to pairs, $S$ is a **sequence in $T$** if there exists $n \ge 0$ such that $T \cap \Sigma(S) = n \times S$, i.e., the trace restricted to symbols of $S$ is an n-repetition of $S$.
The cardinality of $S$ in $T$ is defined in the same way as for pairs, and extended to any trace:
**Definition**: Let be $S$ any trace in $\Sigma^*$, then $n_T(S) = n$ for some suitable $n$ if $S$ is a sequence in $T$, or $- \infty$ if $S$ is not a sequence in T. For completeness we also define $n_T( S=s_1 ) = n_T( \emptyset ) = -\infty$, the cases for one symbol and empty sequence.
**Definition**: The set of all sequences in $T$ is denoted as $\mathcal{S}_T = \{ S | S \text{ is sequence in } T \}$
Note that if $abcd$ is a sequence in $T$, then also $ab$, $abc$, $bcd$, $ad$ and all other subtraces are sequences in $T$.
**Definition**: A sequence $S$ is called a **maximal sequence in $T$** if it is not a combination of other sequences in $T$: $\nexists R, S' \in \mathcal{S}_T; S \neq R $ such that $\Sigma(S') = \Sigma(S) \cup \Sigma(R)$.
**Definition**: The set of all maximal sequences in T is denoted as $\overline{ \mathcal{S}_T } = \{ S | S \text{ is maximal sequence in } T \}$
If $S=ab$ we recover the definitions of pairs: $n_T(S) = n_T(ab)$. Also, any sequence in T can be written in terms of its pairs. Therefore, all pair properties carry over to sequences.
```python
def sequence_cardinality(S, T):
'''
Returns the cardinality if S=abc...z is a sequence in T, otherwise it returns -1
'''
# Force list
if type(S)==type(''):
S = list(S.strip())
# two or more symbols only
if len(S) < 2:
return -1
# Check all symbols are different
if len(S) != len(set(S)):
return -1
intersection=[a for a in T if a in S ]
cardinality=int(len(intersection) / len(S) )
return cardinality if S*cardinality==intersection else -1
```
```python
T='123abcabc4'
```
```python
sequence_cardinality('abc', T)
```
2
```python
sequence_cardinality('ab', T)
```
2
```python
sequence_cardinality('a', T)
```
-1
```python
sequence_cardinality('ba', T)
```
-1
```python
# This sequence has cardinality 0
sequence_cardinality('MNO', T)
```
0
This sequence has $n_T(S) < 0$ because one of its symbols is not in T
```python
sequence_cardinality('abcM', T)
```
-1
```python
sequence_cardinality('1234', T)
```
1
```python
sequence_cardinality('abc', T)
```
2
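The maximal sequences defined earlier are not computed anywhere in this notebook. The brute-force sketch below is only an illustration: it relies on one possible reading of the definition, namely that a sequence is maximal when no other sequence in $T$ uses a strict superset of its symbols, and the helper names and the `max_len` cut-off are our own choices.
```python
from itertools import permutations

def sequences_in_trace(T, max_len=4):
    # Brute force: every ordering of up to max_len distinct symbols that is a sequence in T
    symbols = sorted(set(T))
    found = []
    for k in range(2, max_len + 1):
        for S in permutations(symbols, k):
            if sequence_cardinality(list(S), T) > 0:
                found.append(S)
    return found

def maximal_sequences(T, max_len=4):
    # One reading of maximality: no other found sequence uses a strict superset of the symbols
    seqs = sequences_in_trace(T, max_len)
    symbol_sets = [set(S) for S in seqs]
    return [S for S, s in zip(seqs, symbol_sets)
            if not any(s < other for other in symbol_sets)]

maximal_sequences('123abcabc4')
```
For `'123abcabc4'` this reading yields the orderings `('1','2','3','4')` and `('a','b','c')`, matching the intuition that the digit sequence and the letter sequence cannot be merged or extended.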
## Properties of sequences
If $S = s_1 ... s_m$ , $m \ge 2$ is a sequence in $T$, then the following properties can be verified.
**Property**: $(s_i, s_j)$ are paired in $T$ for $i \lt j$ and unpaired for $i \ge j$.
**Property**: the cardinality is equal for the sequence and its pairs. $n_T(S) = n_T(s_i, s_j)$ for all $i \lt j$
```python
T='a1b2ca3bc4a5bc'
S='abc'
```
```python
print( "Cardinality of S in T: {}".format( sequence_cardinality(S,T) ) )
print( "Cardinality of ab in T: {}".format( pair_cardinality('a', 'b', T) ) )
print( "Cardinality of bc in T: {}".format( pair_cardinality('b', 'c', T) ) )
print( "Cardinality of ac in T: {}".format( pair_cardinality('a', 'c', T) ) )
print( "Cardinality of ba in T: {} (should be negative)".format( pair_cardinality('b', 'a', T) ) )
```
Cardinality of S in T: 3
Cardinality of ab in T: 3
Cardinality of bc in T: 3
Cardinality of ac in T: 3
Cardinality of ba in T: -1 (should be negative)
## Lemma: Consecutive sequence order
When describing a sequence based on the pairs which lie in it, an interesting property emerges. Clearly there are exactly $m-1$ pairs in $S=s_1 ... s_m$ of the form $(s_i, s_m)$ because all its symbols are different by definition. And this is applicable to any subtrace of $S$. A stronger claim can be proved: there are no such pairs ending in $s_m$ in $T$ outside $S$ with the same cardinality.
**Lemma**: $s_j$ is the $j$-th element of a sequence $S$ in $T$ $\iff$ there are exactly $j-1$ pairs $(x, s_j)$ paired in $T$ where $n_T(S) = n_T(x, s_j)$. All such $x$ lie inside $S$.
(proof easy but pending) **however, please study $aXYbaYXb$ before singing and dancing**
Below are some examples of the lemma.
```python
T='a1b2ca3bc4a5bc'
S='abc'
pairs = pairs_in_trace(T)
```
```python
# Show all pairs in T
P=pd.DataFrame.from_dict( { a[0]+a[1]:b for a,b in pairs.items() }, orient='index', columns=['pairs'] )
P[ P['pairs'] >= 0 ].T
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>23</th>
<th>35</th>
<th>13</th>
<th>34</th>
<th>25</th>
<th>12</th>
<th>24</th>
<th>15</th>
<th>45</th>
<th>14</th>
<th>bc</th>
<th>ac</th>
<th>ab</th>
</tr>
</thead>
<tbody>
<tr>
<th>pairs</th>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>3</td>
<td>3</td>
<td>3</td>
</tr>
</tbody>
</table>
</div>
```python
print( "Cardinality of S in T: {}".format( sequence_cardinality(S,T) ) )
```
Cardinality of S in T: 3
```python
# All pairs ending in 'a' for S=abc
[ (x,sj,n) for (x,sj), n in pairs.items()
if sj == 'a'
and sequence_cardinality(S, T) == pair_cardinality(x, sj, T) ]
```
[]
```python
# All pairs ending in 'b' for S=abc
[ (x,sj,n) for (x,sj), n in pairs.items()
if sj == 'b'
and sequence_cardinality(S, T) == pair_cardinality(x, sj, T) ]
```
[('a', 'b', 3)]
```python
# All pairs ending in 'c' for S=abc
[ (x,sj,n) for (x,sj), n in pairs.items()
if sj == 'c'
and sequence_cardinality(S, T) == pair_cardinality(x, sj, T) ]
```
[('b', 'c', 3), ('a', 'c', 3)]
## Complexity of getting all pairs in T
The time seems to be bounded by $O( |T| \cdot |\Sigma(T)|^2 )$: it grows roughly quadratically with the size of the alphabet and only linearly with the length of the trace, as the timings below suggest.
```python
print('Extract all pairs: fixed length\n-------')
N=500
for i in range(1,6):
T = list(range(100*i)) * int(N/(10*i) )
print( "\nlength={}, symbols={}".format(len(T), 100*i))
%time pairs_in_trace( T )
```
Extract all pairs: fixed length
-------
length=5000, symbols=100
CPU times: user 1.32 s, sys: 7.27 ms, total: 1.33 s
Wall time: 1.33 s
length=5000, symbols=200
CPU times: user 5.2 s, sys: 20.7 ms, total: 5.22 s
Wall time: 5.24 s
length=4800, symbols=300
CPU times: user 11 s, sys: 17.8 ms, total: 11 s
Wall time: 11 s
length=4800, symbols=400
CPU times: user 19.1 s, sys: 22.6 ms, total: 19.1 s
Wall time: 19.1 s
length=5000, symbols=500
CPU times: user 30.8 s, sys: 22.2 ms, total: 30.8 s
Wall time: 30.9 s
```python
print('\nExtract all pairs: fixed symbols\n-------')
S=100
for N in range(1,11):
T = list(range(S)) * (N*10)
print( "\nlength={}, symbols={}".format(len(T), S))
%time pairs_in_trace( T )
```
Extract all pairs: fixed symbols
-------
length=1000, symbols=100
CPU times: user 268 ms, sys: 3.57 ms, total: 271 ms
Wall time: 269 ms
length=2000, symbols=100
CPU times: user 503 ms, sys: 1.99 ms, total: 504 ms
Wall time: 504 ms
length=3000, symbols=100
CPU times: user 752 ms, sys: 1.54 ms, total: 754 ms
Wall time: 753 ms
length=4000, symbols=100
CPU times: user 1.02 s, sys: 2.78 ms, total: 1.03 s
Wall time: 1.03 s
length=5000, symbols=100
CPU times: user 1.29 s, sys: 4.45 ms, total: 1.3 s
Wall time: 1.3 s
length=6000, symbols=100
CPU times: user 1.54 s, sys: 4.27 ms, total: 1.54 s
Wall time: 1.54 s
length=7000, symbols=100
CPU times: user 1.87 s, sys: 10.2 ms, total: 1.88 s
Wall time: 1.89 s
length=8000, symbols=100
CPU times: user 2.07 s, sys: 6.73 ms, total: 2.08 s
Wall time: 2.08 s
length=9000, symbols=100
CPU times: user 2.29 s, sys: 5.83 ms, total: 2.29 s
Wall time: 2.3 s
length=10000, symbols=100
CPU times: user 2.51 s, sys: 3.94 ms, total: 2.52 s
Wall time: 2.52 s
```python
```
```python
import sympy as sp
import numpy as np
import cloudpickle
```
```python
q = q0, q1 = sp.symbols('q:2')
sympars = m1, g = sp.symbols('m_1, g') # Same order should be used inside MechanicalSystem Class
order = ['m1', 'g']
G = sp.Matrix([sp.cos(q0)*g, sp.sin(q1)*m1])
```
```python
to_save = {'order': order,
'G' : sp.lambdify([q, *sympars], G)}
with open('gfile.txt', 'wb') as gfile:
test = cloudpickle.dumps(to_save)
gfile.write(test)
with open('gfile.txt', 'rb') as gfile:
model = cloudpickle.load(gfile)
```
```python
# In main script
pars = {'m1' : 1, 'g' : 9.81} # order is arbitrary
# In systemsym
constant_names = model['order']
constant_value = [pars[name] for name in constant_names] # Corresponding numbers in same order as above
Luse = lambda q: model['G'](q, *constant_value) # in systemsym class
Luse([0,3.14/2]) # in equations of motion
```
array([[ 9.81 ],
[ 0.99999968]])
[](https://colab.research.google.com/github/mravanba/comp551-notebooks/blob/master/EMforGaussianMixture.ipynb)
# Expectation Maximization for Gaussian Mixture Model
In K-means each datapoint is assigned to one cluster. An alternative is for each datapoint ($n$) to have a distribution over clusters -- that is $r_{n,k} \in [0,1]$ and $\sum_k r_{n,k} = 1$. To do this we can assume each cluster has a Gaussian distribution $\mathcal{N}(\mu_k, \Sigma_k)$. So we assume the data-distrubtion is a **mixture of Gaussians**
$$
p(x; \pi, \{\mu_k, \Sigma_k\}) = \sum_k \pi_k \mathcal{N}(x; \mu_k, \Sigma_k)
$$
where $\pi_k= p(z=k)$ with $\sum_k \pi_k = 1$ defining the weight of each Gaussian in the mixture. These weights should sum to one so that we have a valid pdf. To maximize the logarithm of the marginal-likelihood
$\ell(\pi, \{\mu_k, \Sigma_k\}) = \sum_n \log \left ( \sum_k \pi_k \mathcal{N}(x^{(n)}; \mu_k, \Sigma_k) \right)$, we set its partial derivative wrt various parameters to zero. This gives us the value of these parameters in terms of membership probabilities, aka *responsibilities*: $r_{n,k} = p(z=k|x^{(n)})= \frac{\pi_k \mathcal{N}(x^{(n)}; \mu_k, \Sigma_k)}{\sum_c \pi_c \mathcal{N}(x^{(n)}; \mu_c, \Sigma_c)}$. Since responsibilities are functions of model parameters, we perform an iterative updating of these two values:
1. update responsibilites given the model parameters
2. given the responsibilities $r_{n,k}$, update the parameters $\mu_k, \Sigma_k$ and $\pi$. As we said these updates are given by taking the derivative of the log-likelihood:
- New $\pi_k$ is easy to estimate, it is proportional to the $\sum_n r_{n,k}$. Since $\pi_k$ should sum to one we set
$$
\pi_k = \frac{\sum_n r_{n,k}}{\sum_{n,c} r_{n,c}}
$$
- For $\mu_k$ and $\Sigma_k$ we need to estimate mean and covariance of a Gaussian using *weighted* samples, where the (unnormalized) weights for the $k^{th}$ Gaussian in the mixture are $r_{n,k} \forall n$.
\begin{align}
\mu_k &= \frac{\sum_n r_{n,k} x^{(n)}}{\sum_n r_{n,k}} \\
\Sigma_k &= \frac{ \sum_n r_{n,k} (x^{(n)} - \mu_k)(x^{(n)} - \mu_k)^\top }{\sum_n r_{n,k}}
\end{align}
The above gives us weighted mean and weighted covariance. We then repeat steps 1 and 2 similar to K-means.
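Before the full implementation, it is worth checking that numpy's weighted routines reproduce these update formulas; the toy data and responsibilities below are arbitrary, purely for the check.
```python
import numpy as np
rng = np.random.default_rng(0)
x_toy = rng.normal(size=(6, 2))      # arbitrary toy data: 6 points in 2 dimensions
r_toy = rng.random(6)                # arbitrary (unnormalized) responsibilities for one cluster
mu_w = np.average(x_toy, axis=0, weights=r_toy)
d = x_toy - mu_w
# weighted covariance, matching sum_n r_n (x_n - mu)(x_n - mu)^T / sum_n r_n
manual_cov = (r_toy[:, None, None] * (d[:, :, None] * d[:, None, :])).sum(axis=0) / r_toy.sum()
np.allclose(manual_cov, np.cov(x_toy, aweights=r_toy, rowvar=False, bias=True))  # True
```
(`np.cov`'s default `bias=False` applies a small-sample correction to the denominator, so `bias=True` is used here to match the maximum-likelihood formula exactly.)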
Let's implement EM for Gaussian Mixture Model (GMM) below. We re-use our previous implementation of multivariate Gaussian in GMM.
```python
import numpy as np
#%matplotlib notebook
%matplotlib inline
import matplotlib.pyplot as plt
from IPython.core.debugger import set_trace
import scipy as sp
np.random.seed(1234)
```
```python
#For more comments on multivariate Gaussian class refer to: https://github.com/mravanba/comp551-notebooks/blob/master/Gaussian.ipynb
class Gaussian():
def __init__(self, mu=0, sigma=0):
self.mu = np.atleast_1d(mu)
if np.array(sigma).ndim == 0:
self.Sigma = np.atleast_2d(sigma**2)
else:
self.Sigma = sigma
def density(self, x):
N,D = x.shape
xm = (x-self.mu[None,:])
normalization = (2*np.pi)**(-D/2) * np.linalg.det(self.Sigma)**(-1/2)
quadratic = np.sum((xm @ np.linalg.inv(self.Sigma)) * xm,axis=1)
return normalization * np.exp(-.5 * quadratic)
class GMM:
def __init__(self, K=5, max_iters=200, epsilon=1e-5):
self.K = K #Number of Gaussians
self.max_iters = max_iters #maximum number of iteration we want to run the mixture model
self.epsilon = epsilon #small value used as tolerance and for alleviating the singularity problem
def fit(self, x):
N,D = x.shape
init_centers = np.random.choice(N, self.K) #generate K random values from [0,N-1]
pi = np.ones(self.K)/self.K #initialize the weight of the Gaussians
mu = x[init_centers] #select K data points to initialize the mean parameter of K Gaussians shape:K X D
sigma = np.tile(np.diag(np.var(x,axis=0))[None,:,:], (self.K, 1,1)) #initialize the sigma parameter by computing the variance of the data and making a diagonal matrix from it
#Note that the tile function copies the sigma K times for all the Gausssians
r = np.zeros((N,self.K)) #initialize the responsibilities to zero
ll = -np.inf #initialize the log likelihood to negative inifinity
for t in range(self.max_iters):
#update the responsibilities
for i in range(self.K):
r[:,i] = pi[i] * Gaussian(mu[i], sigma[i]).density(x) #dimension N of the expression
#normalize them over number of Gaussians
r_norm = r / np.sum(r, axis=1, keepdims=True) #keepdims preserves the 2nd dimension for broadcasting during the division
#update the parameters of the gaussian using MLE
for i in range(self.K):
mu[i,:] = np.average(x, axis=0, weights=r_norm[:,i]) #computes the weighted average where the weights are given by responsibilities of the i-th Gaussian
sigma[i,:,:] = np.cov(x, aweights=r_norm[:,i], rowvar=False) + self.epsilon * np.eye(D)  # epsilon on the diagonal avoids singular covariances
#Note that we compute the weighted covariance natrix where the weights are given by responsibilities of the i-th Gaussian
#We set rowvar False as our data is in the first axis with the feature variables in the second and we want to compute variance along columns
#An epsilon variance is added in all the features to make sure it doesn't reach singularity where a Gaussian fit a single data point with zero variance in all the features
#update the weight of Gaussians
pi = np.sum(r_norm, axis=0)
pi /= np.sum(pi) #normalize it
#calculate the new log likelihood
ll_new = np.mean(np.log(np.sum(r, axis=1)))
#check if the log likelihood differenc is within the tolerance
if np.abs(ll_new - ll) < self.epsilon:
print(f'converged after {t} iterations, average log-likelihood {ll_new}')
break
ll = ll_new
return mu, sigma, r
```
In the implementation above note that we add a diagonal matrix with small constant values ($\epsilon$) on the diagonal to the empirical covariance matrix. This is to avoid degeneracy. If the model centers one of the Gaussians on a single data-point and makes its covariance very small, it can arbitrarily increase the likelihood of the data (see Bishop p.434). This addition of epsilon to the diagonal prevents this degeneracy.
Now let's apply this to the Iris dataset
```python
from sklearn import datasets
dataset = datasets.load_iris()
x, y = dataset['data'][:,:2], dataset['target']
gmm = GMM(K=3)
mu, sigma, resp = gmm.fit(x)
resp /= np.sum(resp,axis=1,keepdims=True)
# plotting the result
fig, axes = plt.subplots(ncols=3, nrows=1, constrained_layout=True, figsize=(15, 5))
axes[0].scatter(x[:,0], x[:,1], c=resp, s=2)
axes[0].scatter(mu[:,0], mu[:,1], marker='x')
x0v = np.linspace(np.min(x[:,0]), np.max(x[:,0]), 200)
x1v = np.linspace(np.min(x[:,1]), np.max(x[:,1]), 200)
x0,x1 = np.meshgrid(x0v, x1v)
x_all = np.vstack((x0.ravel(),x1.ravel())).T
#Get the densities
p0 = Gaussian(mu[0], sigma[0]).density(x_all)
p1 = Gaussian(mu[1], sigma[1]).density(x_all)
p2 = Gaussian(mu[2], sigma[2]).density(x_all)
p = np.vstack([p0,p1,p2]).T
p /= np.sum(p,axis=1,keepdims=True)
#axes[0].contour(x0, x1, p0.reshape(x0.shape))
axes[0].tricontour(x_all[:,0], x_all[:,1], p0)
axes[0].tricontour(x_all[:,0], x_all[:,1], p1)
axes[0].tricontour(x_all[:,0], x_all[:,1], p2)
axes[0].set_title('Mixture of Gaussians found using EM')
axes[1].scatter(x[:,0], x[:,1], c=y, s=2)
axes[1].scatter(x_all[:,0], x_all[:,1], c=p, marker='.', alpha=.01)
axes[1].set_title('Cluster Responsibilities')
axes[2].scatter(x[:,0], x[:,1], c=y, s=2)
axes[2].set_title('class labels')
plt.show()
```
A mixture of Gaussians where the membership of each point is unobserved is an example of a **latent variable model**.
A latent variable model $p(x,z; \theta)$ assumes unobserved or latent variables $z$ that help explain the observations $x$. In this setup we still want to maximize the **marginal likelihood** of data ($z$ is marginalized out):
$$
\max_\theta \sum_n \log p(x^{(n)}; \theta) = \max_\theta \sum_n \log \sum_z p(x^{(n)}, z; \theta)
$$
EM can be used for learning with *latent variable models* or when we have *missing data*. The general approach is similar to what we saw here:
- computing the posterior $p(z | x^{(n)}; \theta) \forall n$ (Expectation or **E step**)
- maximizing the expected log-likelihood using this probabilistic *completion* of the data (Maximization or **M step**)
```python
# https://arxiv.org/pdf/quant-ph/0104030.pdf
# ^^^ Need to be able to prepare arbitrary state!
import numpy as np
import cirq
from functools import reduce
from sympy import *
```
```python
# https://arxiv.org/pdf/quant-ph/9503016.pdf
```
See the bottom of p. 663 of [this paper](https://rdo.psu.ac.th/sjstweb/journal/27-3/18mathices.pdf).
## Roots of diagonalizable matrices
In this section, we consider an nth root of a diagonalizable matrix.
- Theorem 2.1: Let A be an $m\times m$ complex matrix. If A is diagonalizable, then A has an nth root, for any positive integer n.
Proof:
Let $A$ be a diagonalizable matrix, i.e., there exists a non-singular matrix S such that $A = SDS^{-1}$, where $D=[d_{ij}]_{m\times m}$ is a diagonal matrix.
Let $D^{\frac{1}{n}}=[d_{ij}^{\frac{1}{n}}]_{m \times m}$, where $d_{ij}^{\frac{1}{n}}$ is an n-th root of $d_{ij}$.
So $A = S (D^{\frac{1}{n}})^{n} S^{-1} = (SD^{\frac{1}{n}}S^{-1})^{n}$. Therefore an n-th root of A exists.
https://math.stackexchange.com/questions/1168438/the-nth-root-of-the-2x2-square-matrix
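As a quick numerical sanity check of the construction above (a minimal NumPy-only sketch, independent of the sympy-based class below), we can diagonalise the Pauli-X matrix, take the element-wise square root of its eigenvalues and confirm that $V^2 = X$:
```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli-X
vals, vecs = np.linalg.eig(X)                   # X = S D S^{-1}
D_root = np.diag(vals ** 0.5)                   # principal square roots of the eigenvalues
V = vecs @ D_root @ np.linalg.inv(vecs)         # V = S D^{1/2} S^{-1}
print(np.allclose(V @ V, X))                    # expected: True
```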
```python
X=Matrix(cirq.X._unitary_())
X
```
```python
X = Matrix([[0,1],[1j,0]])
from sympy.physics.quantum import Dagger
Dagger(X)
```
```python
np.array(X)
```
```python
from numpy import linalg as LA
from sympy import *
from sympy.physics.quantum import Dagger
# SYMPY calculation gets exact diagonal!!! (note matrices are Hermitian)
class Build_V_Gate():
# V^{n} = U
def __init__(self,U, n_power):
self.U=U
self.n = n_power
self.D = None
self.V = None
self.V_dag = None
def _diagonalise_U(self):
# find diagonal matrix:
U_matrix = Matrix(self.U)
self.S, self.D = U_matrix.diagonalize()
self.S_inv = self.S**-1
# where U = S D S^{-1}
if not np.allclose(np.array(self.S*(self.D*self.S_inv), complex), self.U):
raise ValueError('U != SDS-1')
def Get_V_gate_matrices(self):
if self.D is None:
self._diagonalise_U()
# D_nth_root = np.power(self.D, 1/self.n)
D_nth_root = self.D**(1/self.n)
# self.V = np.array(self.S,complex).dot(np.array(D_nth_root,complex)).dot(np.array(self.S_inv,complex))
# self.V_dag = self.V.conj().transpose()
self.V = self.S * D_nth_root * self.S_inv
self.V_dag = Dagger(self.V)
if not np.allclose(reduce(np.matmul, [np.array(self.V, complex) for _ in range(self.n)]), self.U, atol=1e-10):
raise ValueError('U != V^{}'.format(self.n))
return np.array(self.V, complex), np.array(self.V_dag, complex)
n_root=2
mat = cirq.X._unitary_()
aa = Build_V_Gate(mat, n_root)
V, V_dag = aa.Get_V_gate_matrices()
reduce(np.matmul, [V for _ in range(n_root)])
```
```python
# from numpy import linalg as LA
# # NUMPY VERSION... NOT as good!
# class Build_V_Gate():
# # V^{n} = U
# def __init__(self,U, n_power):
# self.U=U
# self.n = n_power
# self.D = None
# self.V = None
# self.V_dag = None
# def _diagonalise_U(self):
# val,vec = np.linalg.eig(self.U)
# #sorting
# idx = val.argsort()[::-1]
# val_sorted = val[idx]
# vec_sorted = vec[:,idx]
# # find diagonal matrix:
# vec_sorted_inv = np.linalg.inv(vec_sorted)
# self.D = vec_sorted_inv.dot(self.U.dot(vec_sorted))
# self.S=vec_sorted
# self.S_inv = vec_sorted_inv
# # where U = S D S^{-1}
# if not np.allclose(self.S.dot(self.D).dot(self.S_inv), self.U):
# raise ValueError('U != SDS-1')
# def Get_V_gate_matrices(self):
# if self.D is None:
# self._diagonalise_U()
# D_nth_root = np.power(self.D, 1/self.n)
# # D_nth_root = np.sqrt(self.D)
# self.V = self.S.dot(D_nth_root).dot(self.S_inv)
# self.V_dag = self.V.conj().transpose()
# if not np.allclose(reduce(np.matmul, [self.V for _ in range(self.n)]), self.U, atol=1e-1):
# raise ValueError('U != V^{}'.format(self.n))
# return self.V, self.V_dag
# mat = cirq.X._unitary_()
# aa = Build_V_Gate(mat, 2)
# V, V_dag = aa.Get_V_gate_matrices()
# np.around(V.dot(V), 3)
```
```python
# aa = Build_V_Gate(mat, 4)
# V, V_dag = aa.Get_V_gate_matrices()
# np.around(((V.dot(V)).dot(V)).dot(V), 3)
```
```python
```
```python
class My_V_gate(cirq.SingleQubitGate):
"""
    Single-qubit gate defined by an explicit unitary matrix (or its adjoint).
    Args:
        V (np.ndarray): unitary applied when dagger_gate is False.
        V_dag (np.ndarray): unitary applied when dagger_gate is True.
        dagger_gate (bool): if True, apply V^dagger instead of V.
"""
def __init__(self, V, V_dag, dagger_gate = False):
self.V = V
self.V_dag = V_dag
self.dagger_gate = dagger_gate
def _unitary_(self):
if self.dagger_gate:
return self.V_dag
else:
return self.V
def num_qubits(self):
return 1
def _circuit_diagram_info_(self,args):
if self.dagger_gate:
return 'V^{†}'
else:
return 'V'
def __str__(self):
if self.dagger_gate:
return 'V^{†}'
else:
return 'V'
def __repr__(self):
return self.__str__()
```
```python
n_root=4
mat = cirq.X._unitary_()
aa = Build_V_Gate(mat, n_root)
V, V_dag = aa.Get_V_gate_matrices()
GATE = My_V_gate(V, V_dag, dagger_gate=True)
circuit = GATE.on(cirq.LineQubit(2))
cirq.Circuit(circuit)
```
```python
```
```python
def int_to_Gray(num, n_qubits):
# https://en.wikipedia.org/wiki/Gray_code
# print(np.binary_repr(num, n_qubits)) # standard binary form!
# The operator >> is shift right. The operator ^ is exclusive or
gray_int = num^(num>>1)
return np.binary_repr(gray_int,n_qubits)
### example... note that grey code reversed as indexing from left to right: [0,1,-->, N-1]
for i in range(2**3):
print(int_to_Gray(i, 3)[::-1])
int_to_Gray(6, 4)
```
```python
def check_binary_str_parity(binary_str):
"""
Returns 0 for EVEN parity
Returns 1 for ODD parity
"""
parity = sum(map(int,binary_str))%2
return parity
check_binary_str_parity('0101')
```
```python
# NOTE pg 17 of 'Elementary gates for quantum computation' (Barenco et al., arXiv:quant-ph/9503016)
class n_control_U(cirq.Gate):
"""
"""
def __init__(self, V, V_dag, list_of_control_qubits, list_control_vals, U_qubit):
self.V = V
self.V_dag = V_dag
if len(list_of_control_qubits)!=len(list_control_vals):
raise ValueError('incorrect qubit control bits or incorrect number of control qubits')
self.list_of_control_qubits = list_of_control_qubits
self.list_control_vals = list_control_vals
self.U_qubit = U_qubit
self.n_ancilla=len(list_of_control_qubits)
def flip_control_to_zero(self):
for index, control_qubit in enumerate(self.list_of_control_qubits):
if self.list_control_vals[index]==0:
yield cirq.X.on(control_qubit)
def _get_gray_control_lists(self):
grey_cntrl_bit_lists=[]
n_ancilla = len(self.list_of_control_qubits)
for grey_index in range(1, 2**n_ancilla):
gray_control_str = int_to_Gray(grey_index, n_ancilla)[::-1] # note reversing order
control_list = list(map(int,gray_control_str))
parity = check_binary_str_parity(gray_control_str)
grey_cntrl_bit_lists.append((control_list, parity))
return grey_cntrl_bit_lists
def _decompose_(self, qubits):
## flip if controlled on zero
X_flip = self.flip_control_to_zero()
yield X_flip
## perform controlled gate
n_ancilla = len(self.list_of_control_qubits)
grey_control_lists = self._get_gray_control_lists()
for control_index, binary_control_tuple in enumerate(grey_control_lists):
binary_control_seq, parity = binary_control_tuple
control_indices = np.where(np.array(binary_control_seq)==1)[0]
control_qubit = control_indices[-1]
if parity==1:
gate = self.V.controlled(num_controls=1, control_values=[1]).on(self.list_of_control_qubits[control_qubit], self.U_qubit)
# gate= 'V'
else:
gate = self.V_dag.controlled(num_controls=1, control_values=[1]).on(self.list_of_control_qubits[control_qubit], self.U_qubit)
# gate= 'V_dagg'
if control_index==0:
yield gate
# print(gate, control_qubit)
else:
for c_index in range(len(control_indices[:-1])):
yield cirq.CNOT(self.list_of_control_qubits[control_indices[c_index]], self.list_of_control_qubits[control_indices[c_index+1]])
# print('CNOT', control_indices[c_index], control_indices[c_index+1])
# print(gate, control_qubit)
yield gate
for c_index in list(range(len(control_indices[:-1])))[::-1]:
# print('CNOT', control_indices[c_index], control_indices[c_index+1])
yield cirq.CNOT(self.list_of_control_qubits[control_indices[c_index]], self.list_of_control_qubits[control_indices[c_index+1]])
## unflip if controlled on zero
X_flip = self.flip_control_to_zero()
yield X_flip
def _circuit_diagram_info_(self, args):
# return cirq.CircuitDiagramInfo(
# wire_symbols=tuple([*['@' for _ in range(len(self.list_of_control_qubits))],'U']),exponent=1)
return cirq.protocols.CircuitDiagramInfo(
wire_symbols=tuple([*['@' if bit==1 else '(0)' for bit in self.list_control_vals],'U']),
exponent=1)
def num_qubits(self):
return len(self.list_of_control_qubits) + 1 #(+1 for U_qubit)
```
```python
n_control_qubits=6
n_power = 2**(n_control_qubits-2)
```
```python
cirq.X._unitary_()
```
```python
cirq.LineQubit.range(3)
```
```python
### setup V gate ##
n_control_qubits=3
n_power = 2**(n_control_qubits-1)
theta= np.pi/2
# U_GATE_MATRIX = np.array([
# [np.cos(theta), np.sin(theta)],
# [np.sin(theta), -1* np.cos(theta)]
# ])
U_GATE_MATRIX = cirq.X._unitary_()
get_v_gate_obj = Build_V_Gate(U_GATE_MATRIX, n_power)
V, V_dag = get_v_gate_obj.Get_V_gate_matrices()
V_gate_DAGGER = My_V_gate(V, V_dag, dagger_gate=True)
V_gate = My_V_gate(V, V_dag, dagger_gate=False)
circuit = V_gate_DAGGER.on(cirq.LineQubit(2))
cirq.Circuit(circuit)
## setup n-control-U ###
list_of_control_qubits = cirq.LineQubit.range(3)
list_control_vals=[0,1,0]
U_qubit = cirq.LineQubit(3)
xx = n_control_U(V_gate, V_gate_DAGGER, list_of_control_qubits, list_control_vals, U_qubit)
Q_circuit = cirq.Circuit(cirq.decompose_once(
(xx(*cirq.LineQubit.range(xx.num_qubits())))))
print(cirq.Circuit((xx(*cirq.LineQubit.range(xx.num_qubits())))))
Q_circuit
```
```python
```
```python
op = cirq.X.controlled(num_controls=3, control_values=[0, 1, 0]).on(*[cirq.LineQubit(0), cirq.LineQubit(1), cirq.LineQubit(2)], cirq.LineQubit(3))
print(cirq.Circuit(op))
np.allclose(cirq.Circuit(op).unitary(), Q_circuit.unitary(), atol=1e-9)
```
```python
### setup V gate ##
n_control_qubits=2
n_power = 2**(n_control_qubits-1)
theta= np.pi/2
U_GATE_MATRIX = cirq.X._unitary_()
get_v_gate_obj = Build_V_Gate(U_GATE_MATRIX, n_power)
V, V_dag = get_v_gate_obj.Get_V_gate_matrices()
V_gate_DAGGER = My_V_gate(V, V_dag, dagger_gate=True)
V_gate = My_V_gate(V, V_dag, dagger_gate=False)
circuit = V_gate_DAGGER.on(cirq.LineQubit(2))
cirq.Circuit(circuit)
## setup n-control-U ###
list_of_control_qubits = cirq.LineQubit.range(2)
list_control_vals=[0,1]
U_qubit = cirq.LineQubit(2)
xx = n_control_U(V_gate, V_gate_DAGGER, list_of_control_qubits, list_control_vals, U_qubit)
Q_circuit = cirq.Circuit(cirq.decompose_once(
(xx(*cirq.LineQubit.range(xx.num_qubits())))))
print(cirq.Circuit((xx(*cirq.LineQubit.range(xx.num_qubits())))))
Q_circuit
```
```python
op = cirq.X.controlled(num_controls=2, control_values=[0, 1]).on(*[cirq.LineQubit(0), cirq.LineQubit(1)], cirq.LineQubit(2))
print(cirq.Circuit(op))
np.allclose(cirq.Circuit(op).unitary(), Q_circuit.unitary(), atol=1e-6)
```
```python
cirq.X.controlled(num_controls=2, control_values=[0,1]).on(
*list(cirq.LineQubit.range(3)))
```
```python
op = cirq.X.controlled(num_controls=3, control_values=[0, 0, 1]).on(*[cirq.LineQubit(1), cirq.LineQubit(2), cirq.LineQubit(3)], cirq.LineQubit(4))
print(cirq.Circuit(op))
```
```python
def int_to_Gray(num, n_qubits):
# https://en.wikipedia.org/wiki/Gray_code
# print(np.binary_repr(num, n_qubits)) # standard binary form!
# The operator >> is shift right. The operator ^ is exclusive or
gray_int = num^(num>>1)
return np.binary_repr(gray_int,n_qubits)
### example... note that grey code reversed as indexing from left to right: [0,1,-->, N-1]
for i in range(2**3):
print(int_to_Gray(i, 3)[::-1])
int_to_Gray(6, 4)
```
000
100
110
010
011
111
101
001
'0101'
```python
def check_binary_str_parity(binary_str):
"""
Returns 0 for EVEN parity
Returns 1 for ODD parity
"""
parity = sum(map(int,binary_str))%2
return parity
check_binary_str_parity('0101')
```
0
```python
```
```python
class My_U_Gate(cirq.SingleQubitGate):
"""
Description
Args:
theta (float): angle to rotate by in radians.
number_control_qubits (int): number of control qubits
"""
def __init__(self, theta):
self.theta = theta
def _unitary_(self):
Unitary_Matrix = np.array([
[np.cos(self.theta), np.sin(self.theta)],
[np.sin(self.theta), -1* np.cos(self.theta)]
])
return Unitary_Matrix
def num_qubits(self):
return 1
def _circuit_diagram_info_(self,args):
# return cirq.CircuitDiagramInfo(
# wire_symbols=tuple([*['@' for _ in range(self.num_control_qubits-1)],' U = {} rad '.format(self.theta.__round__(4))]),exponent=1)
return ' U = {} rad '.format(self.theta.__round__(4))
def __str__(self):
return ' U = {} rad '.format(self.theta.__round__(4))
def __repr__(self):
return ' U_arb_state_prep'
```
```python
class My_V_gate(cirq.SingleQubitGate):
"""
    Single-qubit gate defined by an explicit unitary matrix (or its adjoint).
    Args:
        V_mat (np.ndarray): unitary applied when dagger_gate is False.
        V_dag_mat (np.ndarray): unitary applied when dagger_gate is True.
        dagger_gate (bool): if True, apply V^dagger instead of V.
"""
def __init__(self, V_mat, V_dag_mat, dagger_gate = False):
self.V_mat = V_mat
self.V_dag_mat = V_dag_mat
self.dagger_gate = dagger_gate
def _unitary_(self):
if self.dagger_gate:
return self.V_dag_mat
else:
return self.V_mat
def num_qubits(self):
return 1
def _circuit_diagram_info_(self,args):
if self.dagger_gate:
return 'V^{†}'
else:
return 'V'
def __str__(self):
if self.dagger_gate:
return 'V^{†}'
else:
return 'V'
def __repr__(self):
return self.__str__()
```
```python
from sympy import *
from sympy.physics.quantum import Dagger
# NOTE pg 17 of 'Elementary gates for quantum computation' (Barenco et al., arXiv:quant-ph/9503016)
class n_control_U(cirq.Gate):
"""
"""
def __init__(self, list_of_control_qubits, list_control_vals, U_qubit, U_cirq_gate, n_control_qubits):
self.U_qubit = U_qubit
self.U_cirq_gate = U_cirq_gate
if len(list_of_control_qubits)!=len(list_control_vals):
raise ValueError('incorrect qubit control bits or incorrect number of control qubits')
self.list_of_control_qubits = list_of_control_qubits
self.list_control_vals = list_control_vals
self.n_ancilla=len(list_of_control_qubits)
self.D = None
self.n_root = 2**(n_control_qubits-1)
self.n_control_qubits = n_control_qubits
self.V_mat = None
self.V_dag_mat = None
def _diagonalise_U(self):
# find diagonal matrix:
U_matrix = Matrix(self.U_cirq_gate._unitary_())
self.S, self.D = U_matrix.diagonalize()
self.S_inv = self.S**-1
# where U = S D S^{-1}
if not np.allclose(np.array(self.S*(self.D*self.S_inv), complex), self.U_cirq_gate._unitary_()):
raise ValueError('U != SDS-1')
def Get_V_gate_matrices(self, check=True):
if self.D is None:
self._diagonalise_U()
D_nth_root = self.D**(1/self.n_root)
V_mat = self.S * D_nth_root * self.S_inv
V_dag_mat = Dagger(V_mat)
self.V_mat = np.array(V_mat, complex)
self.V_dag_mat = np.array(V_dag_mat, complex)
if check:
V_power_n = reduce(np.matmul, [self.V_mat for _ in range(self.n_root)])
if not np.allclose(V_power_n, self.U_cirq_gate._unitary_()):
raise ValueError('V^{n} != U')
def flip_control_to_zero(self):
for index, control_qubit in enumerate(self.list_of_control_qubits):
if self.list_control_vals[index]==0:
yield cirq.X.on(control_qubit)
def _get_gray_control_lists(self):
grey_cntrl_bit_lists=[]
n_ancilla = len(self.list_of_control_qubits)
for grey_index in range(1, 2**n_ancilla):
gray_control_str = int_to_Gray(grey_index, n_ancilla)[::-1] # note reversing order
control_list = list(map(int,gray_control_str))
parity = check_binary_str_parity(gray_control_str)
grey_cntrl_bit_lists.append((control_list, parity))
return grey_cntrl_bit_lists
def _decompose_(self, qubits):
if (self.V_mat is None) or (self.V_dag_mat is None):
self.Get_V_gate_matrices()
V_gate_DAGGER = My_V_gate(self.V_mat, self.V_dag_mat, dagger_gate=True)
V_gate = My_V_gate(self.V_mat, self.V_dag_mat, dagger_gate=False)
## flip if controlled on zero
X_flip = self.flip_control_to_zero()
yield X_flip
## perform controlled gate
n_ancilla = len(self.list_of_control_qubits)
grey_control_lists = self._get_gray_control_lists()
for control_index, binary_control_tuple in enumerate(grey_control_lists):
binary_control_seq, parity = binary_control_tuple
control_indices = np.where(np.array(binary_control_seq)==1)[0]
control_qubit = control_indices[-1]
if parity==1:
gate = V_gate.controlled(num_controls=1, control_values=[1]).on(self.list_of_control_qubits[control_qubit], self.U_qubit)
else:
gate = V_gate_DAGGER.controlled(num_controls=1, control_values=[1]).on(self.list_of_control_qubits[control_qubit], self.U_qubit)
if control_index==0:
yield gate
else:
for c_index in range(len(control_indices[:-1])):
yield cirq.CNOT(self.list_of_control_qubits[control_indices[c_index]], self.list_of_control_qubits[control_indices[c_index+1]])
yield gate
for c_index in list(range(len(control_indices[:-1])))[::-1]:
yield cirq.CNOT(self.list_of_control_qubits[control_indices[c_index]], self.list_of_control_qubits[control_indices[c_index+1]])
## unflip if controlled on zero
X_flip = self.flip_control_to_zero()
yield X_flip
def _circuit_diagram_info_(self, args):
# return cirq.protocols.CircuitDiagramInfo(
# wire_symbols=tuple([*['@' if bit==1 else '(0)' for bit in self.list_control_vals],'U']),
# exponent=1)
return cirq.protocols.CircuitDiagramInfo(
wire_symbols=tuple([*['@' if bit==1 else '(0)' for bit in self.list_control_vals],self.U_cirq_gate.__str__()]),
exponent=1)
def num_qubits(self):
return len(self.list_of_control_qubits) + 1 #(+1 for U_qubit)
def check_Gate_gate_decomposition(self, tolerance=1e-9):
"""
function compares single and two qubit gate construction of n-controlled-U
against perfect n-controlled-U gate
tolerance is how close unitary matrices are required
"""
# decomposed into single and two qubit gates
decomposed = self._decompose_(None)
n_controlled_U_quantum_Circuit = cirq.Circuit(decomposed)
# print(n_controlled_U_quantum_Circuit)
# perfect gate
perfect_circuit_obj = self.U_cirq_gate.controlled(num_controls=self.n_control_qubits, control_values=self.list_control_vals).on(
*self.list_of_control_qubits, self.U_qubit)
perfect_circuit = cirq.Circuit(perfect_circuit_obj)
# print(perfect_circuit)
if not np.allclose(n_controlled_U_quantum_Circuit.unitary(), perfect_circuit.unitary(), atol=tolerance):
raise ValueError('V^{n} != U')
else:
# print('Correct decomposition')
return True
```
```python
## setup
n_control_qubits=2
theta= np.pi/4
U_gate = My_U_Gate(theta)
list_of_control_qubits = cirq.LineQubit.range(2)
list_control_vals=[0,1]
U_qubit = cirq.LineQubit(2)
xx = n_control_U(list_of_control_qubits, list_control_vals, U_qubit, U_gate, n_control_qubits)
Q_circuit = cirq.Circuit(cirq.decompose_once(
(xx(*cirq.LineQubit.range(xx.num_qubits())))))
# NOT decomposing:
print(cirq.Circuit((xx(*cirq.LineQubit.range(xx.num_qubits())))))
# decomposing
Q_circuit
```
0: ───(0)────────────────
│
1: ───@──────────────────
│
2: ─── U = 0.7854 rad ───
<pre style="overflow: auto; white-space: pre;">0: ───X───@───@───────────@───X───
│ │ │
1: ───────┼───X───@───────X───@───
│ │ │
2: ───────V───────V^{†}───────V───</pre>
```python
xx.check_Gate_gate_decomposition(tolerance=1e-15)
```
True
```python
U_single_qubit = My_U_Gate(theta)
perfect_circuit_obj = U_single_qubit.controlled(num_controls=n_control_qubits, control_values=list_control_vals).on(
*list_of_control_qubits, U_qubit)
perfect_circuit = cirq.Circuit(perfect_circuit_obj)
perfect_circuit
```
<pre style="overflow: auto; white-space: pre;">0: ───(0)────────────────
│
1: ───@──────────────────
│
2: ─── U = 0.7854 rad ───</pre>
```python
np.allclose(Q_circuit.unitary(), perfect_circuit.unitary(), atol=1e-6)
```
True
# Infectious disease modelling
## The effect of transient behaviour on final epidemic size
With the recent Coronavirus pandemic, a lot of effort has been put into modelling the spread of infectious diseases. The simplest model, the SIR (Susceptible-Infectious-Recovered) model introduced by <cite>Kermack and McKendrick (1927)</cite>, is an ODE system of the form
$$\begin{align}
\dot S_i &= -\lambda_i(t)S_i \\
\dot I_i &= \lambda_i(t)S_i - \gamma I_i \\
\dot R_i &= \gamma I_i
\end{align}$$
where $S_i,I_i,R_i$ are the susceptible, infectious and recovered parts of the population $N$ in compartment $i$, such that $\sum_i S_i+I_i+R_i=N$. In its most basic form, this model has only one compartment (i.e. the index $i\in \{1\}$). With $n$ compartments, however, the linearised response about the initial state can be governed by a non-normal matrix, depending on the contact structure. We first consider the 2 compartment model $$
\begin{align}
\lambda_1(t) = \beta(C_{11}\frac{I_1}{f_1} + C_{12}\frac{I_2}{f_1})\\
\lambda_2(t) = \beta(C_{21}\frac{I_1}{f_2} + C_{22}\frac{I_2}{f_2})
\end{align}$$
where our contact matrix obeys $f_1 C_{12} = f_2 C_{21}$ where $f_i = \frac{N_i}{N}$ is the fraction of the population in each compartment. If $f_1 = f_2$ then $C$ is symmetric and hence normal. However, there is the possibility for non-normality which we investigate here.
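Before running the full simulation, here is a minimal self-contained sketch (plain NumPy; the parameter values mirror the ones used in the cells below but are otherwise arbitrary) of how to test whether the linearised dynamics matrix is normal, i.e. whether it commutes with its transpose:
```python
import numpy as np

beta, gamma = 0.02, 0.007                  # infection and recovery rates (illustrative)
fi = np.array([0.25, 0.75])                # population fractions of the two compartments
C = np.array([[1.0, 4.0],
              [4.0 * fi[1] / fi[0], 1.0]]) # contact matrix, built as in the cell below
A = ((beta * C - gamma * np.identity(2)).T * fi).T / fi   # linearised dynamics matrix
print(np.allclose(A @ A.T, A.T @ A))       # False: the dynamics matrix is non-normal
```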
```python
import pyross
import numpy as np
import matplotlib.pyplot as plt
import scipy.linalg as spl
```
```python
M = 2 # number of compartments (groups)
# Ni = 1000*np.ones(M)
N = 100000 # total population
Ni = np.zeros((M)) # population in each group
fi = np.zeros((M)) # fraction of population in each group
# set the age structure
fi = np.array((0.25, 0.75))
for i in range(M):
Ni[i] = fi[i]*N
beta = 0.02 # infection rate
gamma = 0.007
gIa = gamma # recovery rate of asymptomatic infectives
gIs = gamma # recovery rate of symptomatic infectives
alpha = 0 # fraction of asymptomatic infectives
fsa = 1 # Fraction by which symptomatic individuals do not self isolate
Ia0 = np.array([0,0]) # the SIR model has only one kind of infective
Is0 = np.array([1,.1]) # we take these to be symptomatic
R0 = np.array([0,0]) # and assume there are no recovered individuals initially
S0 = Ni # so that the initial susceptibles are obtained from S + Ia + Is + R = N
### No f_i present here
# set the contact structure
C11, C22, C12 = 1,1,4
C = np.array(([C11, C12], [C12*fi[1]/fi[0], C22]))
# if Ni[0]*C[0,1]!=Ni[1]*C[1,0]:
# raise Exception("invalid contact matrix")
# there is no contact structure
def contactMatrix(t):
return C
# duration of simulation and data file
Tf = 160; Nt=160;
# instantiate model
parameters = {'alpha':alpha, 'beta':beta, 'gIa':gIa, 'gIs':gIs,'fsa':fsa}
model = pyross.deterministic.SIR(parameters, M, Ni)
# simulate model
data = model.simulate(S0, Ia0, Is0, contactMatrix, Tf, Nt)
```
```python
# matrix for linearised dynamics
C=contactMatrix(0)
A=((beta*C-gamma*np.identity(len(C))).T*fi).T/fi
mcA=pyross.contactMatrix.characterise_transient(A, ord=1)
AP = A-np.max(np.linalg.eigvals(A))*np.identity(len(A))
mcAA = pyross.contactMatrix.characterise_transient(AP,ord=1)
print(mcAA)
```
[0.0000000e+00+0.j 2.3476927e-01+0.j 3.0905325e+00+0.j 1.1283435e+03+0.j
6.9333333e-01+0.j]
Kreiss constant of $\Gamma = A-\lambda_{\max}(A)\,I$ is ~3.09
```python
# plot the data and obtain the epidemic curve
Sa = data['X'][:,:1].flatten()
Sk = data['X'][:,1:M].flatten()
St=Sa+Sk
# Ia = data['X'][:,1].flatten()
Isa = data['X'][:,2*M:2*M+1].flatten()
Isk = data['X'][:,2*M+1:3*M].flatten()
It = Isa + Isk
# It = np.sqrt(Isa**2 + Isk**2)
t = data['t']
fig = plt.figure(num=None, figsize=(10, 8), dpi=80, facecolor='w', edgecolor='k')
plt.rcParams.update({'font.size': 22})
plt.fill_between(t, 0, St/N, color="#348ABD", alpha=0.3)
plt.plot(t, St/N, '-', color="#348ABD", label='$S$', lw=4)
plt.fill_between(t, 0, It/N, color='#A60628', alpha=0.3)
plt.plot(t, It/N, '-', color='#A60628', label='$I$', lw=4)
Rt=N-St-It; plt.fill_between(t, 0, Rt/N, color="dimgrey", alpha=0.3)
plt.plot(t, Rt/N, '-', color="dimgrey", label='$R$', lw=4)
plt.autoscale(enable=True, axis='x', tight=True)
###Estimate from Kreiss constant
plt.plot(t,mcAA[2]*It[0]*np.exp(mcA[0]*t)/N,'-', color="green",
label='$Estimate$', lw=4)
# plt.ylim([0,1])
plt.yscale('log')
# plt.xlim([0,60])
plt.xlabel("time")
plt.ylabel("% of population")
plt.legend(fontsize=26); plt.grid()
```
```python
# plot the data and obtain the epidemic curve
Sa = data['X'][:,:1].flatten()
Sk = data['X'][:,1:M].flatten()
St=Sa+Sk
Isa = data['X'][:,2*M:2*M+1].flatten()
Isk = data['X'][:,2*M+1:3*M].flatten()
It = Isa + Isk
t = data['t']
fig = plt.figure(num=None, figsize=(10, 8), dpi=80, facecolor='w', edgecolor='k')
plt.rcParams.update({'font.size': 22})
plt.fill_between(t, 0, St/N, color="#348ABD", alpha=0.3)
plt.plot(t, St/N, '-', color="#348ABD", label='$S$', lw=4)
plt.fill_between(t, 0, It/N, color='#A60628', alpha=0.3)
plt.plot(t, It/N, '-', color='#A60628', label='$I$', lw=4)
Rt=N-St-It; plt.fill_between(t, 0, Rt/N, color="dimgrey", alpha=0.3)
plt.plot(t, Rt/N, '-', color="dimgrey", label='$R$', lw=4)
###Estimate from Kreiss constant
plt.plot(t,mcAA[2]*It[0]*np.exp(mcA[0]*t)/N,'-', color="green",
label='$Estimate$', lw=4)
plt.ylim([0,1])
# plt.yscale('log')
plt.legend(fontsize=26); plt.grid()
plt.autoscale(enable=True, axis='x', tight=True)
# plt.xlim([0,100])
```
The only pitfall is that we need to specify the order of the spectral norm. Most physics processes naturally take place in an $L2$ norm space. However, here we are interested in $I_{total} = \sum_n I_n$ which is an $L1$ norm. This option can be specified in `characterise_transient`.
# Gauss integration of Finite Elements
```python
from __future__ import division
from sympy.utilities.codegen import codegen
from sympy import *
from sympy import init_printing
from IPython.display import Image
init_printing()
```
```python
r, s, t, x, y, z = symbols('r s t x y z')
k, m, n = symbols('k m n', integer=True)
rho, nu, E = symbols('rho, nu, E')
```
## Predefinition
The constitutive model tensor in Voigt notation (plane strain) is
$$C = \frac{(1 - \nu) E}{(1 - 2\nu) (1 + \nu) }
\begin{pmatrix}
1 & \frac{\nu}{1-\nu} & 0\\
\frac{\nu}{1-\nu} & 1 & 0\\
0 & 0 & \frac{1 - 2\nu}{2(1 - \nu)}
\end{pmatrix}$$
But for simplicity we are going to use
$$\hat{C} = \frac{C (1 - 2\nu) (1 + \nu)}{E} =
\begin{pmatrix}
1-\nu & \nu & 0\\
\nu & 1-\nu & 0\\
0 & 0 & \frac{1 - 2\nu}{2}
\end{pmatrix} \enspace ,$$
since we can always multiply by that factor afterwards to obtain the correct stiffness matrices.
```python
C = Matrix([[1 - nu, nu, 0],
[nu, 1 - nu, 0],
[0, 0, (1 - 2*nu)/2]])
C_factor = E/(1-2*nu)/(1 + nu)
C
```
## Interpolation functions
The enumeration that we are using for the elements is shown below
```python
Image(filename='../img_src/4node_element_enumeration.png', width=300)
```
This leads to the following shape functions
```python
N = S(1)/4*Matrix([(1 + r)*(1 + s),
(1 - r)*(1 + s),
(1 - r)*(1 - s),
(1 + r)*(1 - s)])
N
```
Thus, the interpolation matrix renders
```python
H = zeros(2,8)
for i in range(4):
H[0, 2*i] = N[i]
H[1, 2*i + 1] = N[i]
H.T # Transpose of the interpolation matrix
```
And the mass matrix integrand is
$$M_\text{int}=H^TH$$
```python
M_int = H.T*H
```
## Derivatives interpolation matrix
```python
dHdr = zeros(2,4)
for i in range(4):
dHdr[0,i] = diff(N[i],r)
dHdr[1,i] = diff(N[i],s)
jaco = eye(2) # Jacobian matrix, identity for now
dHdx = jaco*dHdr
B = zeros(3,8)
for i in range(4):
B[0, 2*i] = dHdx[0, i]
B[1, 2*i+1] = dHdx[1, i]
B[2, 2*i] = dHdx[1, i]
B[2, 2*i+1] = dHdx[0, i]
B
```
And the stiffness matrix integrand is
$$K_\text{int} = B^T C B$$
```python
K_int = B.T*C*B
```
## Analytic integration
The mass matrix is obtained by integrating the product of the interpolation matrix with itself, i.e.
$$\begin{align*}
M &= \int\limits_{-1}^{1}\int\limits_{-1}^{1} M_\text{int} dr\, ds\\
&= \int\limits_{-1}^{1}\int\limits_{-1}^{1} H^T H\, dr\, ds \enspace .
\end{align*}$$
```python
M = zeros(8,8)
for i in range(8):
for j in range(8):
M[i,j] = rho*integrate(M_int[i,j],(r,-1,1), (s,-1,1))
M
```
The stiffness matrix is obtained by integrating the product of the interpolator-derivatives (displacement-to-strains) matrix with the constitutive tensor and itself, i.e.
$$\begin{align*}
K &= \int\limits_{-1}^{1}\int\limits_{-1}^{1} K_\text{int} dr\, ds\\
&= \int\limits_{-1}^{1}\int\limits_{-1}^{1} B^T C\, B\, dr\, ds \enspace .
\end{align*}$$
```python
K = zeros(8,8)
for i in range(8):
for j in range(8):
K[i,j] = integrate(K_int[i,j], (r,-1,1), (s,-1,1))
K
```
We can automatically generate code for `Fortran`, `C` or `Octave/Matlab`, although it will be useful only for non-distorted elements.
```python
K_local = MatrixSymbol('K_local', 8, 8)
code = codegen(("local_stiff", Eq(K_local, simplify(K))), "f95")
print(code[0][1])
```
!******************************************************************************
!* Code generated with sympy 0.7.6 *
!* *
!* See http://www.sympy.org/ for more information. *
!* *
!* This file is part of 'project' *
!******************************************************************************
subroutine local_stiff(nu, K_local)
implicit none
REAL*8, intent(in) :: nu
REAL*8, intent(out), dimension(1:8, 1:8) :: K_local
K_local(1, 1) = -2.0d0/3.0d0*nu + 1.0d0/2.0d0
K_local(2, 1) = 1.0d0/8.0d0
K_local(3, 1) = (1.0d0/6.0d0)*nu - 1.0d0/4.0d0
K_local(4, 1) = (1.0d0/2.0d0)*nu - 1.0d0/8.0d0
K_local(5, 1) = (1.0d0/3.0d0)*nu - 1.0d0/4.0d0
K_local(6, 1) = -1.0d0/8.0d0
K_local(7, 1) = (1.0d0/6.0d0)*nu
K_local(8, 1) = -1.0d0/2.0d0*nu + 1.0d0/8.0d0
K_local(1, 2) = 1.0d0/8.0d0
K_local(2, 2) = -2.0d0/3.0d0*nu + 1.0d0/2.0d0
K_local(3, 2) = -1.0d0/2.0d0*nu + 1.0d0/8.0d0
K_local(4, 2) = (1.0d0/6.0d0)*nu
K_local(5, 2) = -1.0d0/8.0d0
K_local(6, 2) = (1.0d0/3.0d0)*nu - 1.0d0/4.0d0
K_local(7, 2) = (1.0d0/2.0d0)*nu - 1.0d0/8.0d0
K_local(8, 2) = (1.0d0/6.0d0)*nu - 1.0d0/4.0d0
K_local(1, 3) = (1.0d0/6.0d0)*nu - 1.0d0/4.0d0
K_local(2, 3) = -1.0d0/2.0d0*nu + 1.0d0/8.0d0
K_local(3, 3) = -2.0d0/3.0d0*nu + 1.0d0/2.0d0
K_local(4, 3) = -1.0d0/8.0d0
K_local(5, 3) = (1.0d0/6.0d0)*nu
K_local(6, 3) = (1.0d0/2.0d0)*nu - 1.0d0/8.0d0
K_local(7, 3) = (1.0d0/3.0d0)*nu - 1.0d0/4.0d0
K_local(8, 3) = 1.0d0/8.0d0
K_local(1, 4) = (1.0d0/2.0d0)*nu - 1.0d0/8.0d0
K_local(2, 4) = (1.0d0/6.0d0)*nu
K_local(3, 4) = -1.0d0/8.0d0
K_local(4, 4) = -2.0d0/3.0d0*nu + 1.0d0/2.0d0
K_local(5, 4) = -1.0d0/2.0d0*nu + 1.0d0/8.0d0
K_local(6, 4) = (1.0d0/6.0d0)*nu - 1.0d0/4.0d0
K_local(7, 4) = 1.0d0/8.0d0
K_local(8, 4) = (1.0d0/3.0d0)*nu - 1.0d0/4.0d0
K_local(1, 5) = (1.0d0/3.0d0)*nu - 1.0d0/4.0d0
K_local(2, 5) = -1.0d0/8.0d0
K_local(3, 5) = (1.0d0/6.0d0)*nu
K_local(4, 5) = -1.0d0/2.0d0*nu + 1.0d0/8.0d0
K_local(5, 5) = -2.0d0/3.0d0*nu + 1.0d0/2.0d0
K_local(6, 5) = 1.0d0/8.0d0
K_local(7, 5) = (1.0d0/6.0d0)*nu - 1.0d0/4.0d0
K_local(8, 5) = (1.0d0/2.0d0)*nu - 1.0d0/8.0d0
K_local(1, 6) = -1.0d0/8.0d0
K_local(2, 6) = (1.0d0/3.0d0)*nu - 1.0d0/4.0d0
K_local(3, 6) = (1.0d0/2.0d0)*nu - 1.0d0/8.0d0
K_local(4, 6) = (1.0d0/6.0d0)*nu - 1.0d0/4.0d0
K_local(5, 6) = 1.0d0/8.0d0
K_local(6, 6) = -2.0d0/3.0d0*nu + 1.0d0/2.0d0
K_local(7, 6) = -1.0d0/2.0d0*nu + 1.0d0/8.0d0
K_local(8, 6) = (1.0d0/6.0d0)*nu
K_local(1, 7) = (1.0d0/6.0d0)*nu
K_local(2, 7) = (1.0d0/2.0d0)*nu - 1.0d0/8.0d0
K_local(3, 7) = (1.0d0/3.0d0)*nu - 1.0d0/4.0d0
K_local(4, 7) = 1.0d0/8.0d0
K_local(5, 7) = (1.0d0/6.0d0)*nu - 1.0d0/4.0d0
K_local(6, 7) = -1.0d0/2.0d0*nu + 1.0d0/8.0d0
K_local(7, 7) = -2.0d0/3.0d0*nu + 1.0d0/2.0d0
K_local(8, 7) = -1.0d0/8.0d0
K_local(1, 8) = -1.0d0/2.0d0*nu + 1.0d0/8.0d0
K_local(2, 8) = (1.0d0/6.0d0)*nu - 1.0d0/4.0d0
K_local(3, 8) = 1.0d0/8.0d0
K_local(4, 8) = (1.0d0/3.0d0)*nu - 1.0d0/4.0d0
K_local(5, 8) = (1.0d0/2.0d0)*nu - 1.0d0/8.0d0
K_local(6, 8) = (1.0d0/6.0d0)*nu
K_local(7, 8) = -1.0d0/8.0d0
K_local(8, 8) = -2.0d0/3.0d0*nu + 1.0d0/2.0d0
end subroutine
```python
code = codegen(("local_stiff", Eq(K_local, simplify(K))), "C")
print(code[0][1])
```
/******************************************************************************
* Code generated with sympy 0.7.6 *
* *
* See http://www.sympy.org/ for more information. *
* *
* This file is part of 'project' *
******************************************************************************/
#include "local_stiff.h"
#include <math.h>
void local_stiff(double nu, double *K_local) {
K_local[0] = -2.0L/3.0L*nu + 1.0L/2.0L;
K_local[1] = 1.0L/8.0L;
K_local[2] = (1.0L/6.0L)*nu - 1.0L/4.0L;
K_local[3] = (1.0L/2.0L)*nu - 1.0L/8.0L;
K_local[4] = (1.0L/3.0L)*nu - 1.0L/4.0L;
K_local[5] = -1.0L/8.0L;
K_local[6] = (1.0L/6.0L)*nu;
K_local[7] = -1.0L/2.0L*nu + 1.0L/8.0L;
K_local[8] = 1.0L/8.0L;
K_local[9] = -2.0L/3.0L*nu + 1.0L/2.0L;
K_local[10] = -1.0L/2.0L*nu + 1.0L/8.0L;
K_local[11] = (1.0L/6.0L)*nu;
K_local[12] = -1.0L/8.0L;
K_local[13] = (1.0L/3.0L)*nu - 1.0L/4.0L;
K_local[14] = (1.0L/2.0L)*nu - 1.0L/8.0L;
K_local[15] = (1.0L/6.0L)*nu - 1.0L/4.0L;
K_local[16] = (1.0L/6.0L)*nu - 1.0L/4.0L;
K_local[17] = -1.0L/2.0L*nu + 1.0L/8.0L;
K_local[18] = -2.0L/3.0L*nu + 1.0L/2.0L;
K_local[19] = -1.0L/8.0L;
K_local[20] = (1.0L/6.0L)*nu;
K_local[21] = (1.0L/2.0L)*nu - 1.0L/8.0L;
K_local[22] = (1.0L/3.0L)*nu - 1.0L/4.0L;
K_local[23] = 1.0L/8.0L;
K_local[24] = (1.0L/2.0L)*nu - 1.0L/8.0L;
K_local[25] = (1.0L/6.0L)*nu;
K_local[26] = -1.0L/8.0L;
K_local[27] = -2.0L/3.0L*nu + 1.0L/2.0L;
K_local[28] = -1.0L/2.0L*nu + 1.0L/8.0L;
K_local[29] = (1.0L/6.0L)*nu - 1.0L/4.0L;
K_local[30] = 1.0L/8.0L;
K_local[31] = (1.0L/3.0L)*nu - 1.0L/4.0L;
K_local[32] = (1.0L/3.0L)*nu - 1.0L/4.0L;
K_local[33] = -1.0L/8.0L;
K_local[34] = (1.0L/6.0L)*nu;
K_local[35] = -1.0L/2.0L*nu + 1.0L/8.0L;
K_local[36] = -2.0L/3.0L*nu + 1.0L/2.0L;
K_local[37] = 1.0L/8.0L;
K_local[38] = (1.0L/6.0L)*nu - 1.0L/4.0L;
K_local[39] = (1.0L/2.0L)*nu - 1.0L/8.0L;
K_local[40] = -1.0L/8.0L;
K_local[41] = (1.0L/3.0L)*nu - 1.0L/4.0L;
K_local[42] = (1.0L/2.0L)*nu - 1.0L/8.0L;
K_local[43] = (1.0L/6.0L)*nu - 1.0L/4.0L;
K_local[44] = 1.0L/8.0L;
K_local[45] = -2.0L/3.0L*nu + 1.0L/2.0L;
K_local[46] = -1.0L/2.0L*nu + 1.0L/8.0L;
K_local[47] = (1.0L/6.0L)*nu;
K_local[48] = (1.0L/6.0L)*nu;
K_local[49] = (1.0L/2.0L)*nu - 1.0L/8.0L;
K_local[50] = (1.0L/3.0L)*nu - 1.0L/4.0L;
K_local[51] = 1.0L/8.0L;
K_local[52] = (1.0L/6.0L)*nu - 1.0L/4.0L;
K_local[53] = -1.0L/2.0L*nu + 1.0L/8.0L;
K_local[54] = -2.0L/3.0L*nu + 1.0L/2.0L;
K_local[55] = -1.0L/8.0L;
K_local[56] = -1.0L/2.0L*nu + 1.0L/8.0L;
K_local[57] = (1.0L/6.0L)*nu - 1.0L/4.0L;
K_local[58] = 1.0L/8.0L;
K_local[59] = (1.0L/3.0L)*nu - 1.0L/4.0L;
K_local[60] = (1.0L/2.0L)*nu - 1.0L/8.0L;
K_local[61] = (1.0L/6.0L)*nu;
K_local[62] = -1.0L/8.0L;
K_local[63] = -2.0L/3.0L*nu + 1.0L/2.0L;
}
```python
code = codegen(("local_stiff", Eq(K_local, simplify(K))), "Octave")
print(code[0][1])
```
function K_local = local_stiff(nu)
%LOCAL_STIFF Autogenerated by sympy
% Code generated with sympy 0.7.6
%
% See http://www.sympy.org/ for more information.
%
% This file is part of 'project'
K_local = [-2*nu/3 + 1/2 1/8 nu/6 - 1/4 nu/2 - 1/8 nu/3 - 1/4 -1/8 nu/6 -nu/2 + 1/8;
1/8 -2*nu/3 + 1/2 -nu/2 + 1/8 nu/6 -1/8 nu/3 - 1/4 nu/2 - 1/8 nu/6 - 1/4;
nu/6 - 1/4 -nu/2 + 1/8 -2*nu/3 + 1/2 -1/8 nu/6 nu/2 - 1/8 nu/3 - 1/4 1/8;
nu/2 - 1/8 nu/6 -1/8 -2*nu/3 + 1/2 -nu/2 + 1/8 nu/6 - 1/4 1/8 nu/3 - 1/4;
nu/3 - 1/4 -1/8 nu/6 -nu/2 + 1/8 -2*nu/3 + 1/2 1/8 nu/6 - 1/4 nu/2 - 1/8;
-1/8 nu/3 - 1/4 nu/2 - 1/8 nu/6 - 1/4 1/8 -2*nu/3 + 1/2 -nu/2 + 1/8 nu/6;
nu/6 nu/2 - 1/8 nu/3 - 1/4 1/8 nu/6 - 1/4 -nu/2 + 1/8 -2*nu/3 + 1/2 -1/8;
-nu/2 + 1/8 nu/6 - 1/4 1/8 nu/3 - 1/4 nu/2 - 1/8 nu/6 -1/8 -2*nu/3 + 1/2];
end
We can check some numerical values for $E=8/3$ Pa, $\nu=1/3$ and $\rho=1$ kg/m$^3$, where we can multiply by the factor that we took away from the stiffness tensor
```python
(C_factor*K).subs([(E, S(8)/3), (nu, S(1)/3)])
```
```python
M.subs(rho, 1)
```
## Gauss integration
As stated before, the analytic expressions for the mass and stiffness matrices is useful for non-distorted elements. In the general case, a mapping between distorted elements and these _canonical_ elements is used to simplify the integration domain. When this transformation is done, the functions to be integrated are more convoluted and we should use numerical integration like _Gauss-Legendre quadrature_.
The Gauss-Legendre quadrature approximates the integral:
$$ \int_{-1}^1 f(x)\,dx \approx \sum_{i=1}^n w_i f(x_i)$$
The nodes $x_i$ of an order $n$ quadrature rule are the roots of $P_n$
and the weights $w_i$ are given by:
$$w_i = \frac{2}{\left(1-x_i^2\right) \left(P'_n(x_i)\right)^2}$$
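As a quick check of this rule (a minimal sketch; the cubic integrand below is an arbitrary test function), the 2-point rule with nodes $\pm 1/\sqrt{3}$ and unit weights already integrates any polynomial of degree $\leq 3$ exactly:
```python
x1, x2 = -sqrt(S(1)/3), sqrt(S(1)/3)   # 2-point Gauss-Legendre nodes; both weights are 1
f_test = 5*r**3 + 2*r**2 - r + 1       # arbitrary cubic test function in r
quad = f_test.subs(r, x1) + f_test.subs(r, x2)
print(simplify(quad - integrate(f_test, (r, -1, 1))))   # expected: 0
```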
For the first four orders, the weights and nodes are
```python
wts = [[2], [1,1], [S(5)/9, S(8)/9, S(5)/9],
[(18+sqrt(30))/36,(18+sqrt(30))/36, (18-sqrt(30))/36, (18-sqrt(30))/36]
]
pts = [[0], [-sqrt(S(1)/3), sqrt(S(1)/3)],
[-sqrt(S(3)/5), 0, sqrt(S(3)/5)],
[-sqrt(S(3)/7 - S(2)/7*sqrt(S(6)/5)), sqrt(S(3)/7 - S(2)/7*sqrt(S(6)/5)),
-sqrt(S(3)/7 + S(2)/7*sqrt(S(6)/5)), sqrt(S(3)/7 + S(2)/7*sqrt(S(6)/5))]]
```
And the numerical integral is computed as
```python
def stiff_num(n):
"""Compute the stiffness matrix using Gauss quadrature
Parameters
----------
n : int
Order of the polynomial.
"""
if n>4:
raise Exception("Number of points not valid")
K_num = zeros(8,8)
for x_i, w_i in zip(pts[n-1], wts[n-1]):
for y_j, w_j in zip(pts[n-1], wts[n-1]):
K_num = K_num + w_i*w_j*K_int.subs([(r,x_i), (s,y_j)])
return simplify(K_num)
```
```python
K_num = stiff_num(3)
K_num - K
```
### Best approach
A better approach is to use library routines for computing the Gauss-Legendre nodes and weights
```python
from sympy.integrals.quadrature import gauss_legendre
```
```python
x, w = gauss_legendre(5, 15)
print(x)
print(w)
```
[-0.906179845938664, -0.538469310105683, 0, 0.538469310105683, 0.906179845938664]
[0.236926885056189, 0.478628670499366, 0.568888888888889, 0.478628670499366, 0.236926885056189]
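For instance (a minimal sketch reusing `K_int`, `zeros` and `simplify` from the cells above; `stiff_num_leggauss` and the choice of NumPy's `leggauss` are assumptions of this sketch, not part of the original notebook), the hard-coded tables can be replaced by generated nodes and weights:
```python
import numpy as np

def stiff_num_leggauss(nq):
    """Stiffness matrix via Gauss-Legendre quadrature with nq points per direction."""
    pts, wts = np.polynomial.legendre.leggauss(nq)   # nodes and weights on [-1, 1]
    K_num = zeros(8, 8)
    for x_i, w_i in zip(pts, wts):
        for y_j, w_j in zip(pts, wts):
            K_num = K_num + w_i*w_j*K_int.subs([(r, x_i), (s, y_j)])
    return simplify(K_num)
```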
## References
[1] http://en.wikipedia.org/wiki/Gaussian_quadrature
```python
from IPython.core.display import HTML
def css_styling():
styles = open('../styles/custom_barba.css', 'r').read()
return HTML(styles)
css_styling()
```
```python
import numpy as np
import matplotlib.pyplot as plt
from sympy import *
import warnings
warnings.filterwarnings('ignore')
rho = 5 # natural rate of interest
alpha = 0.2 # capital share
beta = 0.99 # intertemporal discount factor
sigma = 1 # inverse of the intertemporal elasticity of substitution in consumption
theta = 3/4 # degree of nominal rigidity (fraction of firms keeping prices fixed within a quarter)
phi = 5 # inverse of the intertemporal elasticity of labor supply (its inverse is the Frisch labor supply elasticity)
e = 9 # elasticity of substitution between goods
lam = (1-theta)/theta*(1-beta*theta)*(1-alpha)/(1-alpha+alpha*e);
psi_ya = (1+phi)/(sigma*(1-alpha)+phi+alpha)
k = (sigma+(phi+alpha)/(1-alpha))*lam # NKPC parameter: pi_t = E pi_{t+1} + k x_t + u_t
Theta = k/e # weight of the output gap relative to inflation in the welfare loss function
gamma = Theta/(k**2+Theta*(1+beta)) # \hat{p}_t = \gamma*\hat{p}_{t-1}+\gamma\beta\hat{p}_{t+1} +\kappa\Lambda/Theta \delta/(1-\beta\delta\rho_u)u_t
delta = (1-np.sqrt(1-4*gamma**2*beta))/(2*gamma*beta) # first-order autoregressive coefficient of prices obtained by iterating the equation above
Lambda = lam/e**2 # first-order coefficient on output fluctuations in the welfare loss function when the equilibrium path differs from the efficient path
T = 12 # total number of periods
tz = 5 # the natural rate is negative for the first tz periods, then returns to its normal level rho
xpi = np.zeros((2,T+1)) # periods 0 through 12
er = 4 # magnitude of the negative real interest rate
# The following two matrices describe the dynamics of the output gap (here the gap relative to efficient output, not natural output) and inflation
A = Matrix([[1,1/sigma],[k,beta+k/sigma]])
B = Matrix([[1/sigma],[k/sigma]])
# nominal and real interest rates
r = np.append(np.ones(tz+1)*-er,np.ones(T-tz)*rho)
it = np.append(np.zeros(tz+1),np.ones(T-tz)*rho)
# Case 1: discretion
# The central bank optimizes period by period instead of over the whole horizon; once the shock ends, the output gap and inflation immediately return to 0.
# Since each period's decision depends only on that period's conditions, this is called "discretion".
for i in range(tz+1):
    xpi[:,tz-i:tz+1-i] = A@xpi[:,tz+1-i:tz+2-i]-er*B
# Case 2: credible commitment (forward guidance)
# The central bank minimizes the discounted sum of the loss function. Although the output gap is negative in more periods, the initial impact of the shock is much smaller; by the concavity of the utility function, total welfare loss is lower than under discretion.
tc = 7 # the interest rate is only allowed to rise after period 7
xpi_c = np.zeros((2,T+1))
M = np.linalg.inv(np.array([[-k,1+beta*(1-delta)],[beta*(1-delta)+k**2/Theta,-k/Theta]]))@np.array([[beta*(1-delta),(1-delta)/sigma],[0,(1-delta)/Theta]])
H = Matrix([[1,1/beta/sigma],[k,1/beta*(1+k/sigma)]])
J = np.array([[0,1],[Theta,k]])
# Period tc+1 is a turning point; its state variables are to be determined
x = Symbol('x')
y = Symbol('y')
# Iterate the dynamic equations (and initial values) to solve for the state variables of period tc+1
xpi_c = Matrix(xpi_c)
xpi_c[:,tc+1:tc+2] = np.array([[x],[y]])
for i in range(tc-tz):
    xpi_c[:,tc-i:tc+1-i] = A@xpi_c[:,tc+1-i:tc+2-i]+rho*B
for i in range(tz+1):
    xpi_c[:,tz-i:tz+1-i] = A@xpi_c[:,tz+1-i:tz+2-i]-er*B
summ = Matrix(np.zeros_like(xpi_c[:,tc+1:tc+2]))
for j in range(tc+1):
    summ += H**j@J@Matrix(xpi_c[:,tc-j:tc+1-j])
sol = solve(M@summ+xpi_c[:,tc+1:tc+2],[x,y])
xpi_c[:,tc+1:tc+2] = np.array([[sol[x]],[sol[y]]])
# Solve again for the state variables of each period
for i in range(tc-tz):
    xpi_c[:,tc-i:tc+1-i] = A@xpi_c[:,tc+1-i:tc+2-i]+rho*B
for i in range(tz+1):
    xpi_c[:,tz-i:tz+1-i] = A@xpi_c[:,tz+1-i:tz+2-i]-er*B
# Compute the Lagrange multipliers for each period, and use the period tc+1 multipliers to compute the state variables of all later periods
xis = Matrix(np.zeros((2,T+2))) # indexing starts at period -1
for i in range(tc+2):
    xis[:,i+1:i+2] = H@xis[:,i:i+1]-J@xpi_c[:,i:i+1]
for i in range(T-tc-1):
    xpi_c[0,tc+2+i] = k*delta**(i+1)/Theta*xis[0,tc+2]
    xpi_c[1,tc+2+i] = (1-delta)*delta**i*xis[0,tc+2]
xpi_c = np.asarray(xpi_c)
# compute the nominal interest rate
it_c = r.copy()
for i in range(len(r)-1):
    it_c[i] += xpi_c[1,i+1]+sigma*(xpi_c[0,i+1]-xpi_c[0,i])
# plots
fig,axes = plt.subplots(2,2,figsize=(9,7))
axes[0,0].plot(xpi[0],'-o',label='discretion')
axes[0,0].plot(xpi_c[0],'-o',label='commitment')
axes[0,0].axhline(0,alpha=0.3,ls='--')
axes[0,0].legend()
axes[0,0].set_title('Output')
axes[0,1].plot(xpi[1],'-o',label='discretion')
axes[0,1].plot(xpi_c[1],'-o',label = 'commitment')
axes[0,1].axhline(0,alpha=0.3,ls='--')
axes[0,1].set_title('Inflation')
axes[1,0].plot(it,'-o',label = 'discretion')
axes[1,0].plot(it_c,'-o',label='commitment')
axes[1,0].set_title('Nominal rate')
axes[1,1].plot(r,'-o')
axes[1,1].set_title('Natural rate')
for ax in axes.flatten():
    ax.set_xticks(np.linspace(0,T,T//2+1))
```
```python
print(gamma,delta,lam,Lambda,k,Theta,psi_ya)
```
0.03111119620156227 0.031141065074019782 0.4467076923076922 0.005514909781576447 3.3503076923076915 0.37225641025641015 1.0
```python
theta**4
```
0.31640625
## [Problem 41](https://projecteuler.net/problem=41)
Pandigital prime
```python
import itertools
from sympy import isprime
seq = [7,6,5,4,3,2,1]
```
```python
def list2num(permutated_list):
    # convert a permutation of the digits 1..7 into the corresponding 7-digit integer
    num = 0
    for i in range(7):
        num += permutated_list[i]*10**(6-i)
    return num
```
```python
list7 = list(itertools.permutations(seq))
int7 = [int(''.join([str(_) for _ in __])) for __ in list7]
```
```python
for i in int7:
if isprime(i):
print(i)
break
```
7652413
## [Problem 42](https://projecteuler.net/problem=42)
Coded triangle numbers
```python
import csv,string
f = open('./data/p042.txt', "r")
file = list(csv.reader(f))[0]
```
```python
def word2num(word):
counter = 0
for alphabet in word:
counter += string.ascii_uppercase.index(alphabet)+1
return counter
```
```python
num_list = [word2num(word) for word in file]
```
```python
triangle_list = [int(n*(n+1)/2) for n in range(1,20)]
```
```python
counter = 0
for num in num_list:
if [num] == list(set([num]) & set(triangle_list)):
counter += 1
counter
```
162
## [Problem 43](https://projecteuler.net/problem=43)
Sub-string divisibility
```python
import copy
ten_str={str(i) for i in range(10)}
```
```python
pandigital_numbers=[]
tmps=[]
for i in range(1,59):
num=str(i*17).zfill(3)
if len(set(num))==3:
pandigital_numbers.append(num)
for x in [13,11,7,5,3,2]:
tmps=[]
for num in pandigital_numbers:
for i in ten_str-set(num):
tmp=str(i)+num
if int(tmp[:3])%x==0:
tmps.append(tmp)
pandigital_numbers=copy.copy(tmps)
pandigital_numbers=[str(list(ten_str-set(num))[0])+num for num in pandigital_numbers]
```
```python
sum_pandigital=0
for num in pandigital_numbers:
sum_pandigital+=int(num)
print(sum_pandigital)
```
16695334890
## [Problem 44](https://projecteuler.net/problem=44)
Pentagon numbers
```python
import numpy as np
from tqdm import tqdm
```
```python
def p(n):
return int(n*(3*n-1)/2)
def factorize(n):
answer=[]
max_factor=int(np.sqrt(n))
for i in range(1,max_factor+1):
if n%i==0:
answer.append([i,n//i])
return answer
def check_squred(n):
m=int(np.sqrt(n))
if m-n/m==0.0:
return True
else:
return False
def jk_list_from_m(m):
jk_list=[]
n=3*m*m-m
factors=factorize(n)
for factor in factors:
q,p=factor
j,k=int(((p+1)/3+q)/2),int(((p+1)/3-q)/2)
if 3*(j*j-k*k)-j+k==n and k>0:
jk_list.append([j,k])
return jk_list
def check_valid_jk(jk):
j,k=jk
n=3*(j*j+k*k)-j-k
l=int((1+np.sqrt(1+12*n))/6)
if 3*l*l-l-n==0:
return True
else:
return False
```
```python
max_m=5000
for m in tqdm(range(1,max_m)):
jk_list=jk_list_from_m(m)
for jk in jk_list:
if check_valid_jk(jk):
print(m,p(m))
```
46%|████▌ | 2287/4999 [00:00<00:00, 3221.80it/s]
1912 5482660
100%|██████████| 4999/4999 [00:02<00:00, 2133.62it/s]
## [Problem 45](https://projecteuler.net/problem=45)
Triangular, pentagonal, and hexagonal
```python
import numpy as np
from tqdm import tqdm
```
```python
def check_valid_n(n):
n=4*n*n-2*n
m=int((1+np.sqrt(1+12*n))/6)
if 3*m*m-m-n==0:
l=int((-1+np.sqrt(1+4*n))/2)
if l*l+l-n==0:
return True
else:
return False
else:
return False
```
```python
max_n=100000
for n in tqdm(range(1,max_n)):
if check_valid_n(n):
print(n,2*n*n-n)
```
37%|███▋ | 36701/99999 [00:00<00:00, 180175.55it/s]
1 1
143 40755
27693 1533776805
100%|██████████| 99999/99999 [00:00<00:00, 199493.64it/s]
## [Problem 46](https://projecteuler.net/problem=46)
Goldbach's other conjecture
```python
from sympy import sieve
import numpy as np
from itertools import product
from tqdm import tqdm
```
```python
max_num=10**5
goldbach_conjecture_list=[i for i in tqdm(range(2,max_num)) if i not in sieve and i%2==1]
```
100%|██████████| 99998/99998 [00:06<00:00, 15029.80it/s]
```python
for goldbach_conjecture in tqdm(goldbach_conjecture_list):
max_square_root=int(np.sqrt(goldbach_conjecture/2))
check_prime_list=[goldbach_conjecture-2*i*i for i in range(1,max_square_root+1)]
check_list=[num in sieve for num in check_prime_list]
check={False}==set(check_list)
if check:
print(goldbach_conjecture)
break
```
5%|▌ | 2104/40408 [00:02<01:44, 365.06it/s]
5777
## [Problem 47](https://projecteuler.net/problem=47)
Distinct primes factors
```python
from sympy import factorint
```
```python
def len_factor(n):
return len(list(factorint(n).keys()))
```
```python
n=210
flag=True
while flag:
four_nums=[n,n+1,n+2,n+3]
prime_nums=[len_factor(n) for n in four_nums]
flag=False
for i in range(4):
if prime_nums[i]!=4:
flag=True
n=four_nums[i]+1
continue
```
```python
n,prime_nums
```
(134043, [4, 4, 4, 4])
## [Problem 48](https://projecteuler.net/problem=48)
Self powers
```python
from scipy.special import comb
```
```python
answer=0
for i in range(1,11):
answer+=i**i
for i in range(11,1001):
x=i-10
for k in range(10):
answer+=(10**k)*(x**(i-k))*comb(i,k,True)
```
```python
answer%10**10
```
9110846700
## [Problem 49](https://projecteuler.net/problem=49)
Prime permutations
```python
from sympy import sieve
from itertools import permutations
from itertools import combinations
from tqdm import tqdm
import numpy as np
from copy import copy
```
```python
def permutate_num(n):
nums=[]
for xs in permutations(str(n)):
num=''
for x in xs:
num+=x
nums.append(int(num))
return nums
def increases(num_list):
n=len(num_list)
for i,j,k in combinations(range(n),3):
if num_list[k]-num_list[j]==num_list[j]-num_list[i]:
print(str(num_list[i])+str(num_list[j])+str(num_list[k]))
```
```python
prime_list=list(sieve.primerange(10**3,10**4))
prime_set=set(prime_list)
```
```python
possibly_permutation=[]
for prime in tqdm(prime_list):
permutate_list=permutate_num(prime)
permutate_prime_list=list(sorted(list(set([x for x in permutate_list if x in set(prime_list)]))))
if len(permutate_prime_list)>2:
possibly_permutation.append(permutate_prime_list)
```
100%|██████████| 1061/1061 [00:00<00:00, 1143.42it/s]
```python
permutation_list=[]
for nums in possibly_permutation:
if nums not in permutation_list:
permutation_list.append(nums)
```
```python
for permutation in tqdm(permutation_list):
increases(permutation)
```
100%|██████████| 174/174 [00:00<00:00, 20055.20it/s]
148748178147
296962999629
## [Problem 50](https://projecteuler.net/problem=50)
Consecutive prime sum
```python
from sympy import sieve
import numpy as np
from tqdm import tqdm
```
```python
prime_list=[i for i in sieve.primerange(1,10**6)]
dp=[set(prime_list)]
```
```python
newArray=prime_list.copy()
for i in tqdm(range(600)):
newArray=np.array(prime_list[i+1:])+np.array(newArray[:-1])
dp.append(set(newArray))
```
100%|██████████| 600/600 [00:08<00:00, 69.36it/s]
```python
for i in range(600):
intersection=dp[600-i-1] & dp[0]
if len(intersection)>0:
print(intersection)
break
```
{997651}
Polynomial Regression
==========
In this week's programming exercise, you are asked to implement the basic building blocks for polynomial regression step-by-step. We will do the following:
- **a)** Load a very simple, noisy dataset and inspect it.
- **b)** Construct a design matrix for polynomial regression of degree m: the Vandermonde matrix.
- **c)** Calculate the Moore-Penrose pseudoinverse of the design matrix.
- **d)** Calculate a vector of coefficients that minimizes the squared error of an n-degree polynomial on our given set of measurements (data).
- **e)** Use this coefficient (weight) vector to construct a polynomial that predicts the underlying function the noisy data is drawn from.
- **f)** With the work we have done before, we look at a polynomials of different degrees we fit using the provided data.
Before you start, make sure that you have downloaded the file *poly_data.csv* from stud.ip and put it in the same folder as this notebook! You are supposed to implement the functions yourself!
```python
# the usual imports.
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
a)
------------
1. Use the numpy function **loadtxt** to load the data from *poly_data.csv* into a variable data. Data should now be a $2\times n$ **ndarray** matrix. You can check the type and size of data yourself using the **type** function and the **shape** attribute of the matrix.
2. The first row and second row correspond to the [independent and dependent variable](https://en.wikipedia.org/wiki/Dependent_and_independent_variables) respectively. Store them in two different, new variables **X** (independent) and **Y** (dependent).
3. Use a scatterplot to take a look at the data. It has been generated by sampling a function $f$ and adding Gaussian noise:
\begin{align}
y_i &= f(x_i) + \epsilon \\
\epsilon &\sim \mathcal{N}(0, \sigma^2)
\end{align}
You can execute the second cell below to take a look at $f(x)$.
```python
# ~~ your code for a) here ~~
data = np.loadtxt('poly_data.csv', delimiter = ',') # load data from csv; values are separated by ","
X = data[0] #store independent variables in X
Y = data[1] #store dependent variable in Y
plt.plot(X,Y,'o',color = "royalblue")
# setting and labeling your axis
plt.ylabel('dependent variable')
plt.xlabel('independent variable')
plt.xlim((0,10));
```
```python
# Taking a look at f(x)
def target_func(x):
return 1.5*np.sin(x) - np.cos(x/2) + 0.5
x = np.linspace(0,10, 101)
y = target_func(x)
plt.plot(x,y);
```
b)
--------
In the lecture, you have derived the formula for linear regression with arbitrary basis functions and normal distributed residuals $\epsilon$. Here, we choose polynomial basis functions and therefore will try and approximate the function above via a polynomial of degree $m$:
$$y = \alpha_0 + \alpha_1x + \alpha_2x^2 + \alpha_3x^3 + \dots + \alpha_mx^m + \epsilon$$
Due to our choice of basis functions, this is called polynomial regression.
The simplest version of polynomial regression uses monomial basis functions $\{1, x, x^2, x^3, \dots \}$ in the design matrix. Such a matrix is called the [Vandermonde matrix](https://en.wikipedia.org/wiki/Vandermonde_matrix) in linear algebra. Implement a function that takes the observed, independent variables $x_i$ stored in **X** and constructs a design matrix of the following form:
$$ \Phi = \begin{bmatrix} 1 & x_1 & x_1^2 & \dots & x_1^m \\ 1 & x_2 & x_2^2 & \dots & x_2^m \\ 1 & x_3 & x_3^2 & \dots & x_3^m \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_n & x_n^2 & \dots & x_n^m \end{bmatrix}$$
We have provided the function's doc string as well as two quick lines that test your implementation in the notebook cell below.
```python
def poly_dm(x, m):
"""
Generate a design matrix with monomial basis functions.
Parameters
----------
x : array_like
1-D input array.
m : int
degree of the monomial used for the last column.
Returns
-------
phi : ndarray
Design matrix.
The columns are ``x^0, x^1, ..., x^m``.
"""
    # create an array of shape (len(x), m+1) filled with zeros
    # note: we need m+1 columns because the degrees run from 0 to m
    design_matrix = np.zeros((len(x),m+1))
    for i in range(m+1): # fill every column with the input values x
        design_matrix[:,i] = x
    design_matrix[:,0] = 1 # set the whole first column to 1 (the x^0 terms)
    for index, x in np.ndenumerate(design_matrix): # for every entry x with index (row, column)
        exponent = index[1] # the column number is the degree of the monomial
        design_matrix[index] = x ** exponent # raise the entry to that degree
return design_matrix #return the design matrix
try:
print('poly_dm:',(lambda a=np.random.rand(10):'O.K.'if np.allclose(poly_dm(a,3),np.vander(a,4,True))else'Something went wrong! (Your result does not match!)')())
except:
print('poly_dm: Something went horribly wrong! (an error was thrown)')
example_array = np.array([1,2,3,4,5])
poly_dm(example_array,3)
```
poly_dm: O.K.
array([[ 1., 1., 1., 1.],
[ 1., 2., 4., 8.],
[ 1., 3., 9., 27.],
[ 1., 4., 16., 64.],
[ 1., 5., 25., 125.]])
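A small design note (not required for the exercise): the same design matrix can be built without explicit loops, either with NumPy broadcasting or with **np.vander** and `increasing=True`. The sketch below only cross-checks the implementation above.

```python
# Loop-free constructions of the same design matrix (sketch).
x_demo = np.array([1, 2, 3, 4, 5], dtype=float)
phi_broadcast = x_demo[:, None] ** np.arange(4)     # columns x^0 ... x^3 via broadcasting
phi_vander = np.vander(x_demo, 4, increasing=True)  # same matrix from np.vander
print(np.allclose(phi_broadcast, poly_dm(x_demo, 3)),
      np.allclose(phi_vander, poly_dm(x_demo, 3)))
```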
c)
--------
According to the lecture, it is quite useful to calculate the Moore-Penrose pseudoinverse $A^\dagger$ of a matrix:
$$ A^\dagger = (A^T A)^{-1}A^T$$
where $M^T$ means transpose of matrix $M$ and $M^{-1}$ denotes its inverse.
According to the docstring in the cell below, implement a function that returns $A^\dagger$ for a matrix $A$, and test your implementation against the small test that is included.
```python
def pseudoinverse(A):
"""
Compute the (Moore-Penrose) pseudo-inverse of a matrix.
Parameters
----------
A : (M, N) array_like
Matrix to be pseudo-inverted.
Returns
-------
A_plus : (N, M) ndarray
The pseudo-inverse of `a`.
"""
#print("Our original design-matrix:")
#print(A)
#print("Transposed:")
A_transposed = A.transpose()
#print(A_transposed)
#Matrix product of A and A_transposed:
A_to_be_inversed = np.matmul(A_transposed,A)
#print("Matrix product of A and A_transposed:")
#print(A_to_be_inversed)
#The inverse of the Matrix product
A_inversed = np.linalg.inv(A_to_be_inversed)
#print("The inverse of the Matrix product")
#print(A_inversed)
#The Resulting pseudo_inverse of our given matrix
pseudo_inverse = np.matmul(A_inversed, A_transposed)
#print("The Resulting pseudo_inverse of our given matrix")
#print(pseudo_inverse)
return pseudo_inverse
# the lines below test the pseudo_inverse function
try:
print('pseudo_inverse:',(lambda m=np.random.rand(9,5):'Good Job!'if np.allclose(pseudoinverse(m),np.linalg.pinv(m))else'Not quite! (Your result does not match!)')())
except:
print('pseudo_inverse: Absolutely not! (an error was thrown)')
```
pseudo_inverse: Good Job!
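One caveat worth noting: the formula $(A^TA)^{-1}A^T$ only exists when $A^TA$ is invertible, i.e. when $A$ has full column rank, and explicitly inverting $A^TA$ can be numerically delicate. NumPy's **np.linalg.pinv** computes the pseudoinverse via an SVD instead. The sketch below checks one of the defining Moore-Penrose conditions, $A A^\dagger A = A$, for a random tall matrix.

```python
# Quick check of the Moore-Penrose condition A @ A_plus @ A == A (sketch).
A = np.random.rand(9, 5)                       # tall matrix, almost surely full column rank
A_plus = pseudoinverse(A)
print(np.allclose(A @ A_plus @ A, A))          # should print True
print(np.allclose(A_plus, np.linalg.pinv(A)))  # SVD-based reference implementation
```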
d)
-------
To estimate the parameters $\alpha_i$ up to a chosen degree $m$, we call the vector containing all the $\alpha_i$ **w** and solve the following formula presented in class:
\begin{align}
y &= \Phi w \\
w &= \Phi^\dagger y
\end{align}
where $\Phi$ is the design matrix and $\Phi^\dagger$ its pseudoinverse and $y$ is the vector of dependent variables we observed in our dataset and stored in **Y**.
Implement a function that calculates $w$ according to the docstring given below. Again, a short test of your implementation is provided.
```python
def poly_regress(x, y, deg):
"""
Least squares polynomial fit.
Parameters
----------
x : array, shape (M,)
x-coordinates of the M sample points.
y : array, shape (M,)
y-coordinates of the sample points.
deg : int
Degree of the fitting polynomial.
Returns
-------
w : array, shape (deg+1,)
Polynomial coefficients, highest power last.
"""
# variable to store our design matrix
design_matrix = poly_dm(x, deg)
    # variable to store our pseudo-inverse
    inverse_dm = pseudoinverse(design_matrix)
    # the coefficient vector w is the matrix product of the design matrix's pseudo-inverse and y
    our_parameters = np.matmul(inverse_dm, y)
#print("The Parameters are")
#print(our_parameters)
return our_parameters
# the lines below test the poly_regress function
try:
print('poly_regress:',(lambda a1=np.random.rand(9),a2=np.random.rand(9):'Ace!'if
np.allclose(poly_regress(a1,a2,2),np.polyfit(a1,a2,2)[::-1])else'Almost! (Your result does not match!)')())
except:
print('poly_regress: Not nearly! (an error was thrown)')
```
poly_regress: Ace!
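A brief aside on numerics: forming $(\Phi^T\Phi)^{-1}$ explicitly is fine here, but for high degrees the normal equations become ill-conditioned. In practice one would let **np.linalg.lstsq** (or **np.polyfit**) solve the least-squares problem directly. The sketch below shows the equivalent call; the chosen degree is arbitrary and the coefficients should match up to numerical precision.

```python
# Least-squares fit without explicitly forming the pseudoinverse (sketch).
deg = 3
phi = poly_dm(X, deg)
w_lstsq, *_ = np.linalg.lstsq(phi, Y, rcond=None)   # solves min ||phi @ w - Y||^2
print(np.allclose(w_lstsq, poly_regress(X, Y, deg)))
```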
e)
--------
The last function we will write will use the vector of coefficients we can now calculate to construct a polynomial function and evaluate it at any point we choose to. Remember, the form of this polynomial is given by:
$$y = \alpha_0 + \alpha_1x + \alpha_2x^2 + \alpha_3x^3 + \dots + \alpha_mx^m$$
This is the model we assumed above, but we do not need to include the noise term here! Again, the function is specified in a docstring and tested in the little try/except block below. *Hint:* The degree of the polynomial you need to evaluate is inherently given by the length of **w**, the number of coefficients.
```python
def polynom(x, w):
""" Evaluate a polynomial.
Parameters
----------
x : 1d array
Points to evaluate.
w : 1d array
Coefficients of the monomials.
Returns
-------
y : 1d array
Polynomial evaluated for each cell of x.
"""
    y_values = np.zeros(len(x))  # create an array of zeros with the same length as x
    for i in range(len(x)):  # for every entry in x
        for j in range(len(w)):  # for every coefficient in w
            y_values[i] += w[j] * (x[i] ** j)  # add coefficient j times x_i to the power of j
    return y_values  # return the evaluated polynomial values
# the lines below test the polynom function
try:
print('polynom:',(lambda a1=np.random.rand(9),a2=np.random.rand(9):'OK'if np.allclose(polynom(a1,a2),np.polyval(a2[::-1],a1))else'Slight failure! (Your result does not match!)')())
except:
print('polynom: Significant failure! (an error was thrown)')
```
polynom: OK
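As a side note, the double loop can be replaced by a single matrix-vector product with the design matrix from b), or by Horner's scheme, which needs fewer multiplications. The helper names below (`polynom_matmul`, `polynom_horner`) are purely illustrative sketches and should agree with **polynom**.

```python
# Two loop-free ways to evaluate the polynomial (sketch).
def polynom_matmul(x, w):
    # y = Phi(x) @ w, reusing the design matrix from b)
    return poly_dm(np.asarray(x), len(w) - 1) @ np.asarray(w)

def polynom_horner(x, w):
    # Horner's scheme: start with the highest coefficient and fold downwards
    y = np.zeros_like(np.asarray(x, dtype=float))
    for coeff in w[::-1]:
        y = y * np.asarray(x) + coeff
    return y

x_test, w_test = np.random.rand(9), np.random.rand(4)
print(np.allclose(polynom_matmul(x_test, w_test), polynom(x_test, w_test)),
      np.allclose(polynom_horner(x_test, w_test), polynom(x_test, w_test)))
```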
f)
------
f, as in finally. We can now use all the functions we have written above to investigate how well a polynomial of degree $m$ fits the noisy data we are given. For $m \in \{1,2,10\}$, estimate a polynomial function on the data. Evaluate the three functions on a vector of equidistant points between 0 and 10 (*linearly spaced*). Additionally, plot the original target function $f(x)$, as well as the scatter plot of the data samples. Make sure every graph and the scatter appear in the same plot. Label each graph by adding a label argument to the **plt.plot** function. This allows the use of the **legend()** function and makes the plot significantly more understandable!
```python
plt.figure(figsize = (14, 10))
generated_numbers = np.linspace(0, 10, 1001)  # generates 1001 evenly spaced numbers between 0 and 10
target_function = target_func(generated_numbers) # variable to store given target function
plt.scatter(X, Y, color = "white") #generate scatter plot with white dots
plt.plot(generated_numbers, target_function, label = 'target function') #plot target function with our generated numbers
ax = plt.gca() #select graph axis (used for backgroundcolor)
ax.set_facecolor("black") #set background color to black
for degree in [1,2,10]: # generate our polynomial given our data_set for degree 1, 2 and 10
w = poly_regress(X, Y, degree) # call above defined functions
y = polynom(generated_numbers, w)
plt.plot(generated_numbers, y, label = 'degree: ' + str(degree)) # plot polynomials and add labels for the legend
"""
for degree in [5]:
w = poly_regress(X, Y, degree)
y = polynom(generated_numbers, w)
plt.plot(generated_numbers, y, label = 'degree: ' + str(degree))
"""
plt.xlim((0, 10)) # zoom in
plt.legend(); # show legend
```
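To back up the visual impression with numbers, the sketch below computes the mean squared error of each fit, both on the noisy observations and against the (normally unknown) target function on a dense grid. The degrees are the same ones plotted above; note that a low error on the data alone does not necessarily mean a good approximation of $f(x)$.

```python
# Mean squared error of each fit (sketch).
grid = np.linspace(0, 10, 1001)
for degree in [1, 2, 10]:
    w = poly_regress(X, Y, degree)
    mse_data = np.mean((polynom(X, w) - Y) ** 2)                     # error on the noisy samples
    mse_true = np.mean((polynom(grid, w) - target_func(grid)) ** 2)  # error against f(x) itself
    print(f'degree {degree:2d}: MSE on data = {mse_data:.4f}, MSE vs f(x) = {mse_true:.4f}')
```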