### Considered model: four-bar linkage (= two link manipulator + one link manipulator + rigid coupling)
```python
%load_ext ipydex.displaytools
%matplotlib inline
import os
import sympy as sp
import numpy as npy # numpy as `npy` because the name `np` is reused below for the number of p-coordinates
from sympy import sin, cos, pi
from sympy.interactive import printing
import symbtools as st
import symbtools.modeltools as mt
import symbtools.visualisation as vt
from symbtools.modeltools import Rz # rotation matrix in the xy plane (c, -s; s, c)
import scipy.integrate as sc_integrate
import matplotlib.pyplot as pl
from assimulo.solvers import IDA # Imports the solver IDA from Assimulo
from assimulo.problem import Implicit_Problem # Imports the problem formulation from Assimulo
import ipydex
plt = pl
printing.init_printing(1)
```
Could not find GLIMDA.
```python
t = sp.Symbol('t') # time variable
# coordinates
np = 2
nq = 1
n = np + nq
pp = st.symb_vector("p1:{}".format(np+1)) ##:T
qq = st.symb_vector("q1:{}".format(nq+1)) ##:T
aa = st.symb_vector("a1:{}".format(nq+1)) ##:T
ttheta = st.row_stack(pp, qq) ##:T
tthetad = st.time_deriv(ttheta, ttheta) ##:T
tthetadd = st.time_deriv(ttheta, ttheta, order=2) ##:T
st.make_global(ttheta, tthetad, tthetadd)
```
```python
params = sp.symbols('s1, s2, s3, m1, m2, m3, J1, J2, J3, l1, l2, l3, l4, kappa1, kappa2, g')
parameter_values = list(dict(s1=1/2, s2=1/2, s3=1/2, m1=1, m2=1, m3=3, J1=1/12 , J2=1/12, J3=1/12,
l1=0.8, l2=1.5, l3=1.5, l4=2, kappa1=3/2 , kappa2=14.715, g=9.81).items())
st.make_global(params)
# ttau = sp.symbols('tau')
tau1, tau2 = ttau = st.symb_vector("tau1, tau2")
```
```python
Rz(q1) # rotation matrix
```
Specify the geometry (joints G and centers of masses S)
```python
# unit vectors
ex = sp.Matrix([1, 0])
ey = sp.Matrix([0, 1])
# base points B1 and B2
B1 = sp.Matrix([0, 0])
B2 = sp.Matrix([l4, 0])
# Coordinates two link manipulator
S1 = Rz(q1)*ex*s1 ##:
G1 = Rz(q1)*ex*l1 ##:
S2 = G1 + Rz(q1 + p1)*ex*s2 ##:
G2 = G1 + Rz(q1 + p1)*ex*l2 ##:
# one link manipulator
G2b = B2 + Rz(p2)*ex*l3 ##:
S3 = B2 + Rz(p2)*ex*s3 ##:
constraints = sp.Matrix([G2 - G2b]) ##:
# Time derivative
Sd1, Sd2, Sd3 = st.col_split(st.time_deriv(st.col_stack(S1, S2, S3), ttheta))
```
```python
# kinetic energy
T_rot = (J1*qdot1**2 + J2*(qdot1 + pdot1)**2 + J3*(pdot2)**2)/2
T_trans = ( m1*Sd1.T*Sd1 + m2*Sd2.T*Sd2 + m3*Sd3.T*Sd3 )/2
T = T_rot + T_trans[0] ##:
# potential energy
V = m1*g*S1[1] + m2*g*S2[1] + m3*g*S3[1] ##:
```
---
The next cell draws the four-bar linkage and lets you interactively test the kinematics of the mechanism.
Use the sliders to adjust the coordinates. Activate the checkbox to respect the linkage constraints. Note that you can choose the keyword argument `free_vars` to be one of `q1`, `p1`, `p2`.
---
```python
# useful to test recent development of the lib
import importlib as il
il.reload(vt)
```
<module 'symbtools.visualisation' from '/media/workcard/workstickdir/projekte/rst_python/symbtools-TUD-RST-Account/symbtools/visualisation.py'>
```python
vis = vt.Visualiser(ttheta, xlim=(-2, 4), ylim=(-3, 3))
vis.add_linkage(st.col_stack(B1, G1, G2,).subs(parameter_values))
vis.add_linkage(st.col_stack(G2b, B2).subs(parameter_values))
cnstrs = constraints.subs(parameter_values)
# Note: There are two possibilities for the kinematic chain to close (two solutions to the algebraic constraints)
# prefer "upward solution"
vis.interact(p1=(-4, 4, .1, pi/2), free_vars=q1, constraints=cnstrs)
# prefer "downward solution"
# vis.interact(p1=(-4, 4, .1, -pi/2), p2=(-4, 4, .1, -2.1), free_vars=q1, constraints=cnstrs)
```
interactive(children=(Checkbox(value=False, description='solve constraints (fmin)'), FloatSlider(value=1.57079…
```python
print(list(npy.arange(3)))
```
[0, 1, 2]
```python
external_forces = [0 , 0, tau1]
%time mod = mt.generate_symbolic_model(T, V, ttheta, external_forces, constraints=constraints)
```
CPU times: user 5.32 s, sys: 0 ns, total: 5.32 s
Wall time: 5.33 s
```python
# condition that the end effectors of the two manipulators are at the same place (x and y)
mod.constraints
```
```python
# ODE part of the equations
mod.eqns
```
### Creation of DAE System
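For orientation, the structure behind the following cells is that of constrained Lagrangian mechanics (the generic textbook form; the exact internal arrangement produced by `symbtools.modeltools` is not spelled out here). With the Lagrangian $L = T - V$ and the holonomic constraint $c(\theta) = 0$ enforced by Lagrange multipliers $\lambda$, the equations of motion become a differential-algebraic system:
\begin{align*}
\frac{d}{dt}\frac{\partial L}{\partial \dot{\theta}} - \frac{\partial L}{\partial \theta} &= Q + J_c^T \lambda, \qquad J_c = \frac{\partial c}{\partial \theta}, \\
c(\theta) &= 0
\end{align*}
where $\theta = (p_1, p_2, q_1)$ and $Q$ collects the external generalized forces (here only the torque $\tau_1$ acting on $q_1$).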
```python
mod.constraints
```
```python
# generate a dae object
dae = mod.calc_dae_eq(parameter_values)
dae.generate_eqns_funcs()
```
```python
# show dae variables
dae.yy ##:T
dae.yyd ##:T
```
### Calculate consistent initial values by optimization (with given hints)
```python
yy0, yyd0 = dae.calc_consistent_init_vals(p1=0.3) ##:
t0 = 0
# check if all values are almost zero (-> initial values fulfill the model)
assert npy.allclose(dae.model_func(t0, yy0, yyd0), 0)
```
Optimization terminated successfully.
Current function value: 0.000000
Iterations: 91
Function evaluations: 176
Optimization terminated successfully.
Current function value: 0.000000
Iterations: 33
Function evaluations: 67
(yy0, yyd0) := (array([ 0.3 , 1.74961317, 0.50948621, 0. , 0. , 0. , -0.27535424, 0.5455313 ]),
array([ 0. , 0. , 0. , 23.53968609, 2.82766884, -14.48960943, -0. , 0. ]))
---
```python
model = Implicit_Problem(dae.model_func, yy0, yyd0, t0)
model.name = 'fourbar linkage'
# indicate which components of y are differential (1) and which are purely algebraic (0)
# model.algvar = dae.diff_alg_vars ##:
sim = IDA(model)
# sim.suppress_alg = True #Necessary to set to True for >1 index problem
# values <= 30 produce lots of output
sim.verbosity = 0
```
```python
tfinal = 10.0 # Specify the final time
ncp = 500 # Number of communication points (number of return points)
# takes about 48 seconds
tt_sol, yy_sol, yyd_sol = sim.simulate(tfinal, ncp)
```
No initialization defined for the problem.
No finalization defined for the problem.
Final Run Statistics: fourbar linkage
Number of steps : 3504
Number of function evaluations : 5621
Number of Jacobian evaluations : 199
Number of function eval. due to Jacobian eval. : 1592
Number of error test failures : 100
Number of nonlinear iterations : 5621
Number of nonlinear convergence failures : 0
Solver options:
Solver : IDA (BDF)
Maximal order : 5
Suppressed algebr. variables : False
Tolerances (absolute) : 1e-06
Tolerances (relative) : 1e-06
Simulation interval : 0.0 - 10.0 seconds.
Elapsed simulation time: 9.14327409299949 seconds.
```python
ttheta_sol = yy_sol[:, :mod.dae.ntt]
ttheta_d_sol = yy_sol[:, mod.dae.ntt:mod.dae.ntt*2]
```
```python
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 7)); plt.sca(ax1)
ax1.plot(tt_sol, ttheta_sol)
ax1.set_title("angles")
ax2.plot(tt_sol, ttheta_d_sol)
ax2.set_title("angular velocities")
```
#### Visualization and Animation
```python
joint_pos_expr = st.col_stack(B1, G1, G2, B2).subs(parameter_values) ##:
joint_pos_func = st.expr_to_func(mod.tt, joint_pos_expr, keep_shape=True)
```
```python
# Create object for Animation
simanim = vt.SimAnimation(mod.xx[:3], tt_sol, yy_sol[:, :3], figsize=(14, 7))
simanim.add_visualiser(vis)
# plot first frame
simanim.plot_frame(0)
```
```python
# this might need to be adapted on other systems
plt.rcParams["animation.codec"] = "libvpx-vp9" # codec for webm
plt.rcParams['animation.ffmpeg_path'] = os.path.join(os.getenv("CONDA_PREFIX"), "bin", "ffmpeg")
# plt.rcParams['animation.ffmpeg_path'] = '/usr/bin/ffmpeg'
# writer = animation.FFMpegWriter(fps=30)
fname="fourbar_linkage_free_movement2.webm"
# this might take some tens of seconds
# simanim.save(fname, dpi=30)
```
```python
vt.display_video_file(fname)
```
---
*Source: `docs/demo_notebooks/modeltools/four-bar_linkage_model_and_simulation.ipynb` from the Xabo-RB/symbtools repository (BSD-3-Clause).*
# Case 1: No Water Layer
* Author: **Team G15**
* Attempt: 3
## Analysis
### To find
1. Temperature of Roof Surface $(T_s)$
2. Total heat flux entering the house through the roof, $(q_t)$ when no water layer is present
### Nomenclature
* $T_s$ = roof surface temperature (outside)
* $T_a$ = ambient air temperature (outside)
* $T_r$ = room temperature (inside)
* $Nu_a$ = Nusselt number of air
* $Ra_a$ = Rayleigh number of air
* $Re_a$ = Reynolds number of air
* $Pr_a$ = Prandtl number of air
* $\alpha_a$ = thermal diffusivity of air
* $k_a$ = thermal conductivity of air
* $h_r$ = free convection coefficient of room air
* $\nu_a$ = kinematic viscosity of air
* Roof layers:
* 1: Concrete
* 2: Brick
* 3: Lime
* $k_i$ = thermal conductivity of $i^{th}$ roof layer
* $L_i$ = length of $i^{th}$ roof layer
* $q_{r}$ = radiative heat transfer (per unit area)
* $q_{c}$ = convective heat transfer (per unit area)
* $q_{t}$ = net heat transfer into the room (per unit area)
* $\beta$ = coefficient of thermal expansion
* $S$ = Intensity of Solar Radiation (i.e. solar constant)
### Assumptions
* Steady state with room maintained at fixed ambient temperature
### Equations
#### Energy balance,
$$ q_t = q_c + q_r $$
#### Radiation heat transfer,
\begin{align*}
q_r &= \tau_s\cdot S - h_r \cdot (T_a - T_s) \\
\\
h_r &= \epsilon_s\cdot \sigma\cdot \frac{(\overline T_s)^4 - (\overline T_a - 12)^4}{\overline T_a - \overline T_s}
\end{align*}
#### Convection heat transfer,
\begin{align*}
q_c &= h_c\cdot (T_a - T_s) \\
\\
h_c &= \frac{k_a}{L_s}\cdot Nu_a \\
\\
Nu_a &= 0.15\cdot Ra_a^{1/3} + 0.664\cdot Re_a^{1/2}\cdot Pr_a^{1/3} \\
\\
Re_a &= \frac{v_a\cdot L_s}{\nu_a} \\
\\
Ra_a &= \frac{g\cdot \beta\cdot (T_s - T_a)\cdot L_s^3}{\nu_a\cdot \alpha_a}
\end{align*}
#### Total heat transfer,
\begin{align*}
q_t &= \frac{T_s - T_r}{R_{net}} \\
\\
R_{net} &= \frac{1}{h_r} + \sum_{i=1}^{3} \frac{L_i}{k_i}
\end{align*}
### Properties
#### Outside Air
* Mild breeze $v_a = 2.78\ m/s$
* $T_a \in [305, 320] K$
* $T_f = 320K$
* $\beta = \frac{1}{T_f} = 0.0031\ K^{-1}$
* Table A.4, air ($T_f$):
* $\nu = 18 \cdot 10^{-6}\ m^2/s$
* $\alpha = 25 \cdot 10^{-6}\ m^2/s$
* $Pr = 0.702$
* $k = 27.7 \cdot 10^{-3}\ W/m\cdot K$
* $S = 1366\ W/m^2$
#### Roof
* $L_s = 5\ m$ (characteristic length of the roof surface)
* $\epsilon_s = 0.9$ (concrete surface)
* $\tau_s=0.9$
* $t = 0.2\ m$ thick with,
* Cement = $5\ cm$
* Brick = $10\ cm$
* Lime = $5\ cm$
* $K_i$, Conductivity of each layer,
* Cement = $0.72\ W/m\cdot K$
* Brick = $0.71\ W/m\cdot K$
* Lime = $0.73\ W/m\cdot K$
#### Inside air
* $T_r = 300K$ (Room Temperature)
* $h_r = 8.4\ W/m^2\cdot K$
### Tools used
* **Python**
* **SymPy** for creating symbolic equations and solving them
* **NumPy**
* **Matplotlib** for plotting results
## Solving (Python Code)
### Initialize Values
```python
%matplotlib inline
import sympy as sp
import numpy as np
import matplotlib.pyplot as plt
# Initialize matplotlib
plt.rc('text', usetex=True) # Unnecessary
plt.style.use('ggplot')
plt.rcParams['grid.color'] = '#C0C0C0'
```
#### Outside Air
* Table A.4 used (from reference #2)
```python
v_a = 2.78 # Velocity (m / s)
# Temperatures
T_f = 320.0 # (K)
beta = 1/T_f # (K)
T_a = np.array([305.0, 310.0, 315.0, 320.0]) # (K)
T_a_avg = 273 + 37 # (K)
# Universal Constants
sigma = 5.67e-8 # Stefan Boltzmann constant (W / m^2 * K^4)
g = 9.8 # (m / s^2)
S = 1366 # Solar constant
# Table A.4, air @ T = T_f
nu_a = 18e-6 # kinematic viscosity (m^2 / s)
alpha_a = 25e-6 # (m^2 / s)
k_a = 27.7e-3 # thermal conductivity (W / m * K)
Pr_a = 0.702
```
#### Roof Layers
```python
# Temperatures
T_s = sp.symbols('T_s') # Roof surface temp (K)
T_s_avg = 273.0 + 35.0 # (K)
# Surface
L_s = 5 # Dimensions (m)
tau_s = 0.9 # Roof's solar absorptivity
epsilon_s = 0.9 # Emissivity of roof surface (concrete)
# Layer 1: Concrete
k_1 = 0.72 # (W / m * K)
L_1 = 0.05 # (m)
# Layer 2: Brick
k_2 = 0.71 # (W / m * K)
L_2 = 0.10 # (m)
# Layer 3: Lime
k_3 = 0.73 # (W / m * K)
L_3 = 0.05 # (m)
```
#### Inside Air
```python
h_r = 8.4 # (W / m^2 * K)
T_r = 300 # (K)
```
### Equations
#### Radiation Heat
```python
h_r = epsilon_s * sigma * (T_s_avg**4 - (T_a_avg - 12)**4)/(T_a_avg - T_s_avg) # (W / m^2 * K)
q_r = tau_s * S - h_r * (T_a - T_s) # (W / m^2)
# Example at T_a = 310K and T_s = 314K
q_r_test = q_r[1].replace(T_s, 314)
print('Approximate value of q_r = %.2f W/m^2' % (q_r_test))
```
Approximate value of q_r = 1343.00 W/m^2
#### Convection Heat
* From the analysis below, we can neglect free convection in comparison with forced convection
##### Free Convection
```python
Ra_a = (g * beta * (T_s - T_a) * L_s**3) / (nu_a * alpha_a)
Nu_a_fr = 0.15 * Ra_a**(1/3)
h_c_fr = k_a / L_s * Nu_a_fr
# Example at T_a = 310K and T_s = 314K
h_c_fr_test = h_c_fr[1].replace(T_s, 314)
print('Approximate value of free convection coefficient = %.2f W/K*m^2' % (h_c_fr_test))
```
Approximate value of free convection coefficient = 2.69 W/K*m^2
##### Forced Convection
```python
Re_a = v_a * L_s / nu_a
Nu_a_fo = 0.664 * Re_a**1/2 * Pr_a**1/3
h_c_fo = k_a / L_s * Nu_a_fo
# Example at T_a = 310K and T_s = 314K
print('Approximate value of forced convection coefficient = %.2f W/K*m^2' % (h_c_fo))
```
Approximate value of forced convection coefficient = 332.36 W/K*m^2
##### Total Convection
```python
h_c = h_c_fo # Neglicting free convection
q_c = h_c * (T_a - T_s) # (W / m^2)
# Example at T_a = 310K and T_s = 314K
q_c_test = q_c[1].replace(T_s, 314)
print('Approximate value of q_c = %.2f W/m^2' % (q_c_test))
```
Approximate value of q_c = -1329.43 W/m^2
#### Total Heat:
```python
R = 1/h_r + L_1/k_1 + L_2/k_2 + L_3/k_3 # (m^2 * K / W)
q_t = (T_s - T_r) / R # (W / m^2)
# Example at T_a = 310K and T_s = 314K
q_t_test = q_t.replace(T_s, 314)
print('Approximate value of q_t = %.2f W/m^2' % (q_t_test))
```
Approximate value of q_t = 44.59 W/m^2
### Solving
\begin{align*}
q_c + q_r &= q_t
\\
\therefore\hspace{3pt} q_c + q_r - q_t &= 0
\end{align*}
#### Calculate $T_s$
```python
eq = q_c + q_r - q_t
n = len(eq)
T_s_calc = np.empty(n, dtype=object)
for i in range(n):
T_s_calc[i] = round(sp.solve(eq[i], T_s)[0], 2)
for i in range(n):
print('T_s = %.1f K for T_a = %.1f K' % (T_s_calc[i], T_a[i]))
```
T_s = 309.0 K for T_a = 305.0 K
T_s = 313.9 K for T_a = 310.0 K
T_s = 318.9 K for T_a = 315.0 K
T_s = 323.8 K for T_a = 320.0 K
#### Calculate $q_t$
```python
q_t_calc_1 = np.empty(n, dtype=object)
for i in range(n):
q_t_calc_1[i] = q_t.replace(T_s, T_s_calc[i])
for i in range(n):
print('Heat entering = %.1f W/m^2 for T_a = %.1f K' % (q_t_calc_1[i], T_a[i]))
```
Heat entering = 28.5 W/m^2 for T_a = 305.0 K
Heat entering = 44.3 W/m^2 for T_a = 310.0 K
Heat entering = 60.0 W/m^2 for T_a = 315.0 K
Heat entering = 75.8 W/m^2 for T_a = 320.0 K
### Plot
* Total Heat Flux Entering ($q_t$) vs Outside Air Temp ($T_a$)
```python
def make_plot(x, y, xlabel, ylabel, title):
plt.plot(x, y, color='#1F77B4cc', marker='o')
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
plt.xlabel(xlabel, fontsize=20)
plt.ylabel(ylabel, fontsize=20)
plt.title(title, fontsize=18, pad=15)
```
```python
fig = plt.figure(figsize=(10, 6))
make_plot(x=T_a, y=q_t_calc_1, xlabel='$T_a$', ylabel='$q_t$',
title='Total Heat Flux Entering ($q_t$) vs Outside Air Temp ($T_a$)')
```
# Case 2: Water Layer
* Author: **Team G15**
* Attempt: 3
## Analysis
### To find
1. Temperature of Water Surface $(T_w)$
2. Total heat flux entering the house through the roof, $(q_t)$ when a water layer is present
### Nomenclature
* $S$ = Intensity of Solar Radiation (i.e. solar constant)
* $v_w$ = water velocity
* $v_a$ = wind velocity
* $\epsilon_w$ = emissivity of water surface
* $\sigma$ = Stefan-Boltzmann constant $(5.67*10^{-8}\ W/m^2K^4)$
* $T_r$ = room temperature (inside)
* $T_w$ = water surface temperature (outside)
* $T_a$ = ambient air temperature (outside)
* $\overline T_w$ = average water surface temperature (outside)
* $\overline T_a$ = average air temperature (outside)
* $\tau_w$ = fraction of solar radiation absorbed by water
* $k_w$ = thermal conductivity of water
* $L_w$ = length of water layer
* $h_w$ = convection coefficient of water layer
* $h_r$ = radiative heat transfer coefficient
* $h_c$ = convective heat transfer coefficient
* $h_e$ = evaporative heat transfer coefficient
### Assumptions
1. Steady state with room maintained at fixed ambient temperature
2. Water is still ($v_w = 0$) but gentle breeze is present ($v_a = 10\ km/h$)
3. Dry Surroundings
### Equations
#### Energy balance,
$$ q_t = q_c + q_r - q_e $$
#### Radiation heat transfer,
\begin{align*}
q_r &= \tau_w\cdot S - h_r \cdot (T_a - T_w) \\
\\
h_r &= \epsilon_w\cdot \sigma\cdot \frac{(\overline T_w)^4 - (\overline T_a - 12)^4}{\overline T_a - \overline T_w}
\end{align*}
#### Convection heat transfer,
\begin{align*}
q_c &= h_c\cdot (T_a - T_w) \\
\\
h_c &= 5.678 \cdot (1 + 0.85\cdot(v_a - v_w))
\end{align*}
#### Evaporative heat transfer,
\begin{align*}
q_e &= 0.013\cdot h_c\cdot (p(\overline T_w) - \gamma\cdot p(\overline T_a)) \\
\\
p(T) &= R_1\cdot T + R_2
\end{align*}
#### Total heat transfer,
\begin{align*}
q_t &= \frac{T_w - T_r}{R_{net}} \\
\\
R_{net} &= \frac{1}{h_r} + \sum_{i=1}^{3} \frac{L_i}{k_i} + \frac{1}{h_{w}} \\
\\
h_w &= \frac{k_w}{L_w}\cdot (0.14\cdot(Gr\cdot Pr)^{1/3} + 0.644\cdot (Pr\cdot Re)^{1/3}) \\
\\
Gr &= \frac{g\cdot\beta\cdot(T_w-T_a)\cdot(L_w)^{3}}{\nu^2}
\end{align*}
### Properties
#### Outside Air
* Mild breeze $v_a = 2.78\ m/s$
* $T_a \in [305, 320] K$
* $T_f = 320K$
* $\beta = \frac{1}{T_f} = 0.0031\ K^{-1}$
* Table A.4, air ($T_f$):
* $\nu = 18 \cdot 10^{-6}\ m^2/s$
* $\alpha = 25 \cdot 10^{-6}\ m^2/s$
* $Pr = 0.702$
* $k = 27.7 \cdot 10^{-3}\ W/m\cdot K$
* $S = 1366\ W/m^2$
* $R_1=325\ Pa/^\circ C$ and $R_2 = -5155\ Pa$ (*from reference* **#1**)
* $\gamma=0.27$ (approx average over a day)
#### Water layer
* $L_w = 0.1\ m$ (approx thickness of water layer)
* Table A.6, water ($T_w$):
* $\nu = 18 \cdot 10^{-6}\ m^2/s$
* Still water $v_w = 0$
* $\epsilon_w = 0.95$
* $\tau_w=0.6$
#### Roof
* $t = 0.2\ m$ thick with,
* Cement = $5\ cm$
* Brick = $10\ cm$
* Lime = $5\ cm$
* $K_i$, Conductivity of each layer,
* Cement = $0.72\ W/m\cdot K$
* Brick = $0.71\ W/m\cdot K$
* Lime = $0.73\ W/m\cdot K$
#### Inside air
* $T_r = 300K$ (Room Temperature)
* $h_r = 8.4\ W/m^2\cdot K$
### Tools used
* **Python**
* **SymPy** for creating symbolic equations and solving them
* **NumPy**
* **Matplotlib** for plotting results
## Solving (Python Code)
### Initialize Values
#### Outside Air
* Saturation pressure of water approximated as p = R_1\*T + R_2, with T in °C (the code below converts from kelvin)
```python
v_a = 2.78 # Velocity (m / s)
# Temperatures
T_f = 320 # (K)
beta = 1/T_f # (K)
T_a = np.array([305.0, 310.0, 315.0, 320.0]) # (K)
T_a_avg = 273 + 37 # (K)
# Constants
sigma = 5.67e-8 # Stefan Boltzmann constant (W / m^2 * K^4)
g = 9.8 # (m / s^2)
R_1 = 325 # (N / m^2 °C)
R_2 = -5155 # (N / m^2)
gamma = 0.27
S = 1366 # Solar constant
def p(T): # Saturation pressure of water as a function of temperature (N / m^2)
return R_1 * (T-273) + R_2
```
#### Water Layer
```python
v_w = 0 # Velocity (m / s)
L_w = 5 # Dimensions (m)
# Temperatures
T_w = sp.symbols('T_w') # (K)
T_w_avg = 273 + 32 # (K)
# Constants
epsilon_w = 0.95 # Emissivity of water surface
tau_w = 0.6 # Water's solar absorptivity
```
* Table A.6 used (*from reference* **#2**)
* Upon analysing the data below, we can approximate $h_w$ as roughly $950\ W/m^2\cdot K$
```python
rho_w = 990 # density (kg / m^3)
k_w = 0.63 # thermal conductivity (W / m * K)
mu_w = 1e-6 * np.array([769, 695, 631, 577]) # dynamic viscosity (N * s / m^2)
nu_w = mu_w / rho_w # kinematic viscosity (m^2 / s)
Pr_w = np.array([5.20, 4.62, 4.16, 3.77]) # Prandtl number
Re_w = 0 # Reynolds number, still water
Gr_w = g * beta * (T_a - T_w) * L_w**3 / nu_w**2 # Grashof number
# Water free convection coeffecient
h_w = (k_w/L_w) * (0.14 * (Gr_w*Pr_w)**(1/3) + 0.644 * (Pr_w*Re_w)**(1/3))
# Example at T_a = 310K and T_w = 306K
h_w_test = h_w[1].replace(T_w, 306)
print('Approximate min value of h_w = %.2f W/K*m^2' % (h_w_test))
```
Approximate min value of h_w = 923.62 W/K*m^2
#### Roof Layers
```python
# Layer 1: Concrete
k_1 = 0.72 # (W / m * K)
L_1 = 0.05 # (m)
# Layer 2: Brick
k_2 = 0.71 # (W / m * K)
L_2 = 0.10 # (m)
# Layer 3: Lime
k_3 = 0.73 # (W / m * K)
L_3 = 0.05 # (m)
```
#### Inside Air
```python
h_r = 8.4 # (W / m^2 * K)
T_r = 300 # (K)
```
### Equations
#### Radiation Heat
```python
h_r = epsilon_w * sigma * (T_w_avg**4 - (T_a_avg - 12)**4)/(T_a_avg - T_w_avg) # (W / m^2 * K)
q_r = tau_w * S - h_r * (T_a - T_w) # (W / m^2)
# Example at T_a = 310K and T_w = 306K
q_r_test = q_r[1].replace(T_w, 306)
print('Approximate value of q_r = %.2f W/m^2' % (q_r_test))
```
Approximate value of q_r = 786.53 W/m^2
#### Convection Heat
* Forced convection and free convection both have been used
```python
h_c = 5.678 * (1 + 0.85 * (v_a - v_w))
print('h_c = %.2f W/K*m^2' % (h_c))
q_c = h_c * (T_a - T_w) # (W / m^2)
# Example at T_a = 310K and T_w = 306K
q_c_test = q_c[1].replace(T_w, 306)
print('Approximate value of q_c = %.2f W/m^2' % (q_c_test))
```
h_c = 19.10 W/K*m^2
Approximate value of q_c = 76.38 W/m^2
#### Evaporation Heat:
```python
q_e = 0.013 * h_c * (p(T_w_avg) - gamma * p(T_a_avg)) # function p defined above, (W / m^2)
# Example at T_a = 310K and T_w = 306K
print('Approximate value of q_e = %.2f' % (q_e))
```
Approximate value of q_e = 841.55
#### Total Heat:
```python
h_w = 1200 # from above approximation (W / m^2 * K)
R = 1/h_r + L_1/k_1 + L_2/k_2 + L_3/k_3 + 1/h_w # (m^2 * K / W)
q_t = (T_w - T_r) / R # (W / m^2)
# Example at T_a = 310K and T_w = 306K
q_t_test = q_t.replace(T_w, 306)
print('Approximate value of q_t = %.2f W/m^2' % (q_t_test))
```
Approximate value of q_t = 14.98 W/m^2
### Solving
\begin{align*}
q_c + q_r - q_e &= q_t
\\
\therefore\hspace{3pt} q_c + q_r - q_e - q_t &= 0
\end{align*}
#### Calculate $T_w$
```python
eq = q_c + q_r - q_e - q_t
n = len(eq)
T_w_calc = np.empty(n, dtype=object)
for i in range(n):
T_w_calc[i] = round(sp.solve(eq[i], T_w)[0], 2)
for i in range(n):
print('T_w = %.1f K for T_a = %.1f K' % (T_w_calc[i], T_a[i]))
```
T_w = 302.4 K for T_a = 305.0 K
T_w = 306.5 K for T_a = 310.0 K
T_w = 310.5 K for T_a = 315.0 K
T_w = 314.6 K for T_a = 320.0 K
#### Calculate $q_t$
```python
q_t_calc_2 = np.empty(n, dtype=object)
for i in range(n):
q_t_calc_2[i] = q_t.replace(T_w, T_w_calc[i])
for i in range(n):
print('Heat entering = %.1f W/m^2 for T_a = %.1f K' % (q_t_calc_2[i], T_a[i]))
```
Heat entering = 6.0 W/m^2 for T_a = 305.0 K
Heat entering = 16.2 W/m^2 for T_a = 310.0 K
Heat entering = 26.3 W/m^2 for T_a = 315.0 K
Heat entering = 36.5 W/m^2 for T_a = 320.0 K
### Plot
* Temp Drop Due to Water ($T_a - T_w$) vs Outside Air Temp ($T_a$)
* Total Heat Flux Entering ($q_t$) vs Outside Air Temp ($T_a$)
```python
fig = plt.figure(figsize=(16, 6))
ax1 = fig.add_subplot(121)
make_plot(x=T_a, y=T_a-T_w_calc, xlabel='$T_a$', ylabel='$T_w$',
title='Temp Drop Due to Water ($T_a - T_w$) vs Outside Air Temp ($T_a$)')
ax2 = fig.add_subplot(122)
make_plot(x=T_a, y=q_t_calc_2, xlabel='$T_a$', ylabel='$q_t$',
title='Total Heat Flux Entering ($q_t$) vs Outside Air Temp ($T_a$)')
fig.tight_layout(w_pad=10)
```
## References
1. A. Shrivastava *et al*. ["Evaporative cooling model..."](https://github.com/relaxxpls/CL246-G15/blob/main/docs/papers/Experimental_validation_of_a_thermal_mod.pdf) (1984)
2. F. Incropera *et al*. ["Fundamentals of Heat and Mass Transfer"](https://books.google.co.in/books?id=5cgbAAAAQBAJ&newbks=0&hl=en&source=newbks_fb&redir_esc=y)
---
*Source: `src/final.ipynb` from the relaxxpls/Optimize-Heating-Indoors repository (MIT).*
# Step-by-step NMO correction
Devito is equally useful as a framework for other stencil computations in general; for example, computations where all array indices are affine functions of loop variables. The Devito compiler is also capable of generating
arbitrarily nested, possibly irregular, loops. This key feature is needed to support many complex algorithms used in engineering and scientific practice, including applications in image processing, cellular automata, and machine learning. This tutorial, a step-by-step NMO correction, is one such example.
In reflection seismology, normal moveout (NMO) describes the effect that the distance between a seismic source and a receiver (the offset) has on the arrival time of a reflection in the form of an increase of time with offset. The relationship between arrival time and offset is hyperbolic.
Based on the field geometry information, each individual trace is assigned to the midpoint between the shot and receiver locations associated with that trace. Those traces with the same midpoint location are grouped together, making up a common midpoint gather (CMP).
Consider a reflection event on a CMP gather. The difference between the two-way time at a given offset and the two-way zero-offset time is called normal moveout (NMO). Reflection traveltimes must be corrected for NMO prior to summing the traces in the CMP gather along the offset axis. The normal moveout depends on velocity above the reflector, offset, two-way zero-offset time associated with the reflection event, dip of the reflector, the source-receiver azimuth with respect to the true-dip direction, and the degree of complexity of the near-surface and the medium above the reflector.
# Seismic modelling with devito
Before the NMO correction, we will describe the setup of seismic modelling with Devito in a simple 2D case. We will create a physical model of our domain and define a source and a corresponding set of receivers for the forward model. But first, we initialize some basic utilities.
```python
import numpy as np
import sympy as sp
from devito import *
```
We will create a simple velocity model here by hand for demonstration purposes. This model essentially consists of three layers, each with a different velocity: 1.5km/s in the top layer, 2.5km/s in the middle layer and 4.5 km/s in the bottom layer.
```python
#NBVAL_IGNORE_OUTPUT
from examples.seismic import Model, plot_velocity
shape = (301, 501) # Number of grid point (nx, ny, nz)
spacing = (10., 10) # Grid spacing in m. The domain size is now 3km by 5km
origin = (0., 0) # What is the location of the top left corner.
# Define a velocity profile. The velocity is in km/s
v = np.empty(shape, dtype=np.float32)
v[:,:100] = 1.5
v[:,100:350] = 2.5
v[:,350:] = 4.5
# With the velocity and model size defined, we can create the seismic model that
# encapsulates these properties. We also define the size of the absorbing layer as 10 grid points
model = Model(vp=v, origin=origin, shape=shape, spacing=spacing, space_order=4, nbl=40, bcs="damp")
plot_velocity(model)
```
Next we define the position and wave signal of our source, as well as the locations of our receivers. To generate the wavelet for our source we require the discretized values of time that we are going to use to model the shot, which depend on the grid spacing used in our model. We will use one source and 250 receivers. The source is located at (400, 20). The receivers start at the same horizontal position as the source and are spread evenly across the rest of the domain, all at a depth of 20 m.
```python
from examples.seismic import TimeAxis
t0 = 0. # Simulation starts a t=0
tn = 2400. # Simulation last 2.4 second (2400 ms)
dt = model.critical_dt # Time step from model grid spacing
time_range = TimeAxis(start=t0, stop=tn, step=dt)
nrcv = 250 # Number of Receivers
```
```python
#NBVAL_IGNORE_OUTPUT
from examples.seismic import RickerSource
f0 = 0.010 # Source peak frequency is 10Hz (0.010 kHz)
src = RickerSource(name='src', grid=model.grid, f0=f0,
npoint=1, time_range=time_range)
# Define the wavefield with the size of the model and the time dimension
u = TimeFunction(name="u", grid=model.grid, time_order=2, space_order=4)
# We can now write the PDE
pde = model.m * u.dt2 - u.laplace + model.damp * u.dt
stencil = Eq(u.forward, solve(pde, u.forward))
src.coordinates.data[:, 0] = 400 # Source coordinates
src.coordinates.data[:, -1] = 20. # Depth is 20m
```
```python
#NBVAL_IGNORE_OUTPUT
from examples.seismic import Receiver
rec = Receiver(name='rec', grid=model.grid, npoint=nrcv, time_range=time_range)
rec.coordinates.data[:,0] = np.linspace(src.coordinates.data[0, 0], model.domain_size[0], num=nrcv)
rec.coordinates.data[:,-1] = 20. # Depth is 20m
# Finally we define the source injection and receiver read function to generate the corresponding code
src_term = src.inject(field=u.forward, expr=src * dt**2 / model.m)
# Create interpolation expression for receivers
rec_term = rec.interpolate(expr=u.forward)
op = Operator([stencil] + src_term + rec_term, subs=model.spacing_map)
op(time=time_range.num-1, dt=model.critical_dt)
```
Operator `Kernel` run in 0.85 s
PerformanceSummary([(PerfKey(name='section0', rank=None),
PerfEntry(time=0.7779790000000012, gflopss=0.0, gpointss=0.0, oi=0.0, ops=0, itershapes=[])),
(PerfKey(name='section1', rank=None),
PerfEntry(time=0.028586000000000156, gflopss=0.0, gpointss=0.0, oi=0.0, ops=0, itershapes=[])),
(PerfKey(name='section2', rank=None),
PerfEntry(time=0.03306400000000026, gflopss=0.0, gpointss=0.0, oi=0.0, ops=0, itershapes=[]))])
Since we are modelling horizontal layers, we can group these traces and apply an NMO correction to this set of traces.
```python
offset = []
data = []
for i, coord in enumerate(rec.coordinates.data):
off = (src.coordinates.data[0, 0] - coord[0])
offset.append(off)
data.append(rec.data[:,i])
```
Auxiliary function for plotting traces:
```python
#NBVAL_IGNORE_OUTPUT
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib import cm
from mpl_toolkits.axes_grid1 import make_axes_locatable
mpl.rc('font', size=16)
mpl.rc('figure', figsize=(8, 6))
def plot_traces(rec, xb, xe, t0, tn, colorbar=True):
scale = np.max(rec)/100
extent = [xb, xe, 1e-3*tn, t0]
plot = plt.imshow(rec, cmap=cm.gray, vmin=-scale, vmax=scale, extent=extent)
plt.xlabel('X position (km)')
plt.ylabel('Time (s)')
# Create aligned colorbar on the right
if colorbar:
ax = plt.gca()
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
plt.colorbar(plot, cax=cax)
plt.show()
```
# Common Midpoint Gather
At this point, we have a dataset composed of the receiver traces. If our model weren't purely horizontal, we would have to sort these traces by common midpoint prior to NMO correction.
```python
plot_traces(np.transpose(data), rec.coordinates.data[0][0]/1000, rec.coordinates.data[nrcv-1][0]/1000, t0, tn)
```
# NMO Correction
We can correct the measured traveltime of a reflected wave $t$ at a given offset $x$ to obtain the traveltime at normal incidence $t_0$ by applying the following equation:
\begin{equation*}
t = \sqrt{t_0^2 + \frac{x^2}{V_{nmo}^2}}
\end{equation*}
in which $V_{nmo}$ is the NMO velocity. This equation results from the Pythagorean theorem, and is only valid for horizontal reflectors. There are variants of this equation with different degrees of accuracy, but we'll use this one for simplicity.
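Written as a plain NumPy function, the relationship is simply (an illustrative sketch; the function and variable names here are not part of the Devito implementation below):
```python
import numpy as np

def nmo_traveltime(t0, offset, v_nmo):
    # Hyperbolic NMO traveltime: arrival time of a reflection at a given offset,
    # for a reflection whose two-way zero-offset time is t0.
    return np.sqrt(t0**2 + (offset / v_nmo)**2)
```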
For the NMO correction we use a grid of size (samples × traces).
```python
ns = time_range.num # Number of samples in each trace
grid = Grid(shape=(ns, nrcv)) # Construction of grid with samples X traces dimension
```
In this example we will use a constant velocity guide. The guide will be arranged in a SparseFunction with the number of points equal to the number of samples in the traces.
```python
vnmo = 1500
vguide = SparseFunction(name='v', grid=grid, npoint=ns)
vguide.data[:] = vnmo
```
The computed offset for each trace will be arranged in another SparseFunction with the number of points equal to the number of traces.
```python
off = SparseFunction(name='off', grid=grid, npoint=nrcv)
off.data[:] = offset
```
The previously modelled traces will be arranged in a SparseFunction with the same dimensions as the grid.
```python
amps = SparseFunction(name='amps', grid=grid, npoint=ns*nrcv, dimensions=grid.dimensions, shape=grid.shape)
amps.data[:] = np.transpose(data)
```
Now, we define SparseFunctions with the same dimensions as the grid, describing the NMO traveltime equation. The $t_0$ SparseFunction isn't offset dependent, so the number of points is equal to the number of samples.
```python
sample, trace = grid.dimensions
t_0 = SparseFunction(name='t0', grid=grid, npoint=ns, dimensions=[sample], shape=[grid.shape[0]])
tt = SparseFunction(name='tt', grid=grid, npoint=ns*nrcv, dimensions=grid.dimensions, shape=grid.shape)
snmo = SparseFunction(name='snmo', grid=grid, npoint=ns*nrcv, dimensions=grid.dimensions, shape=grid.shape)
s = SparseFunction(name='s', grid=grid, dtype=np.intc, npoint=ns*nrcv, dimensions=grid.dimensions,
shape=grid.shape)
```
The equation relates two traveltimes: the one we measure ($t$) and the one we want to know ($t_0$). But the data in our CMP gather are actually a matrix of amplitudes measured as a function of time ($t$) and offset. Our NMO-corrected gather will also be a matrix of amplitudes, but as a function of time ($t_0$) and offset. So what we really have to do is transform one matrix of amplitudes into the other.
With Devito `Eq` objects we describe the NMO traveltime equation, and use an `Operator` to compute the traveltime and the corresponding sample index for each trace.
```python
#NBVAL_IGNORE_OUTPUT
dtms = model.critical_dt/1000 # Time discretization in ms
E1 = Eq(t_0, sample*dtms)
E2 = Eq(tt, sp.sqrt(t_0**2 + (off[trace]**2)/(vguide[sample]**2) ))
E3 = Eq(s, sp.floor(tt/dtms))
op1 = Operator([E1, E2, E3])
op1()
```
Operator `Kernel` run in 0.01 s
PerformanceSummary([(PerfKey(name='section0', rank=None),
PerfEntry(time=0.001971, gflopss=0.0, gpointss=0.0, oi=0.0, ops=0, itershapes=[]))])
With the computed sample indices, we zero out all indices that fall outside the valid range and shift each amplitude to its corrected sample.
```python
#NBVAL_IGNORE_OUTPUT
s.data[s.data >= time_range.num] = 0
E4 = Eq(snmo, amps[s[sample, trace], trace])
op2 = Operator([E4])
op2()
stack = snmo.data.sum(axis=1) # We can stack traces and create a ZO section!!!
plot_traces(snmo.data, rec.coordinates.data[0][0]/1000, rec.coordinates.data[nrcv-1][0]/1000, t0, tn)
```
# References:
https://library.seg.org/doi/full/10.1190/tle36020179.1
https://wiki.seg.org/wiki/Normal_moveout
https://en.wikipedia.org/wiki/Normal_moveout
---
*Source: `examples/seismic/tutorials/10_nmo_correction.ipynb` from the kristiantorres/devito repository (MIT).*
# Minimizing memory usage: a matrix-free iterative solver
## How to deal with dense BEM matrices?
[In the previous section, I explained how to directly discretize a free surface using TDEs](sa_tdes). A downside of this approach is that the surface matrix can get very large very quickly. If I make the width of an element half as large, then there will be 2x many elements per dimension and 4x as many elements overall. And because the interaction matrix is dense, 4x as many elements leads to 16x as many matrix entries. In other words, $n$, the number of elements, scales like $O(h^2)$ in terms of the element width $h$. And the number of matrix rows or columns is exactly $3n$ (the 3 comes from the vector nature of the problem). That requires storing $9n^2$ entries. And, even worse, using a direct solver (LU decomposition, Gaussian elimination, etc) with such a matrix requires time like $O(n^3)$. Even for quite small problems with 10,000 elements, the cost of storage and solution get very large. And without an absolutely enormous machine or a distributed parallel implementation, solving a problem with 200,000 elements will just not be possible. On the other hand, in an ideal world, it would be nice to be able to solve problems with millions or even tens or hundreds of millions of elements.
Fundamentally, the problem is that the interaction matrix is dense. There are two approaches for resolving this problem:
1. Don't store the matrix!
2. Compress the matrix by taking advantage of low rank sub-blocks.
Eventually approach #2 will be critical since it is scalable up to very large problems. And that's exactly what I'll do in the next sections where I'll investigate low-rank methods and hierarchical matrices (H-matrices). However, here, I'll demonstrate approach #1 by using a matrix-free iterative solver. Ultimately, this is just a small patch on a big problem and it won't be a sustainable solution. But, it's immediately useful when you don't have a working implementation, are running into RAM constraints and are okay with a fairly slow solution. It's also useful to introduce iterative linear solvers since they are central to solving BEM linear systems.
When we solve a linear system without storing the matrix, [the method is called "matrix-free"](https://en.wikipedia.org/wiki/Matrix-free_methods). Generally, we'll just recompute any matrix entry whenever we need. How does this do algorithmically? The storage requirements drop to just the $O(n)$ source and observation info instead of the $O(n^2)$ dense matrix. And, as I'll demonstrate, for some problems, the runtime will drop to $O(n^2)$ instead of $O(n^3)$ because solving linear systems will be possible with a fixed and fairly small number of matrix-vector products.
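The idea in its simplest form, shown here with a toy kernel rather than the TDE kernel used below (all names in this sketch are illustrative and not part of `cutde`): compute the product a block of rows at a time, regenerating each block of matrix entries on the fly and discarding it immediately.
```python
import numpy as np

def toy_matvec_free(obs_pts, src_pts, v, block=1024):
    # Computes y = K @ v without ever storing K, where K[i, j] = 1 / (1 + |obs_i - src_j|).
    out = np.zeros(obs_pts.shape[0])
    for start in range(0, obs_pts.shape[0], block):
        end = min(start + block, obs_pts.shape[0])
        # Recompute one block of rows of the kernel matrix...
        diff = obs_pts[start:end, None, :] - src_pts[None, :, :]
        K_block = 1.0 / (1.0 + np.linalg.norm(diff, axis=2))
        # ...use it for this slice of the product, then let it be freed.
        out[start:end] = K_block.dot(v)
    return out
```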
## A demonstration on a large mesh.
To get started, I'll just copy the code to set up the linear system for the South America problem from the previous section. But, as a twist, I'm going to use a mesh with several times more elements. This surface mesh has 28,388 elements. As a result, the matrix would have 3x that many rows and columns and would require 58 GB of memory to store. That's still small enough that it could be stored on a medium-sized workstation. But, it's too big for my personal computer!
```python
import cutde
import numpy as np
import matplotlib.pyplot as plt
from pyproj import Transformer
plt.rcParams["text.usetex"] = True
%config InlineBackend.figure_format='retina'
(surf_pts_lonlat, surf_tris), (fault_pts_lonlat, fault_tris) = np.load(
"sa_mesh16_7216.npy", allow_pickle=True
)
```
```python
print("Memory required to store this matrix: ", (surf_tris.shape[0] * 3) ** 2 * 8 / 1e9)
```
Memory required to store this matrix: 58.023255168
```python
transformer = Transformer.from_crs(
"+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs",
"+proj=geocent +datum=WGS84 +units=m +no_defs",
)
surf_pts_xyz = np.array(
transformer.transform(
surf_pts_lonlat[:, 0], surf_pts_lonlat[:, 1], surf_pts_lonlat[:, 2]
)
).T.copy()
fault_pts_xyz = np.array(
transformer.transform(
fault_pts_lonlat[:, 0], fault_pts_lonlat[:, 1], fault_pts_lonlat[:, 2]
)
).T.copy()
surf_tri_pts_xyz = surf_pts_xyz[surf_tris]
surf_xyz_to_tdcs_R = cutde.compute_efcs_to_tdcs_rotations(surf_tri_pts_xyz)
fault_centers_lonlat = np.mean(fault_pts_lonlat[fault_tris], axis=1)
fault_lonlat_to_xyz_T = cutde.compute_projection_transforms(
fault_centers_lonlat, transformer
)
fault_tri_pts_xyz = fault_pts_xyz[fault_tris]
fault_xyz_to_tdcs_R = cutde.compute_efcs_to_tdcs_rotations(fault_tri_pts_xyz)
fault_tri_pts_lonlat = fault_pts_lonlat[fault_tris]
fault_tdcs2_to_lonlat_R = cutde.compute_efcs_to_tdcs_rotations(fault_tri_pts_lonlat)
strike_lonlat = fault_tdcs2_to_lonlat_R[:, 0, :]
dip_lonlat = fault_tdcs2_to_lonlat_R[:, 1, :]
strike_xyz = np.sum(fault_lonlat_to_xyz_T * strike_lonlat[:, None, :], axis=2)
strike_xyz /= np.linalg.norm(strike_xyz, axis=1)[:, None]
dip_xyz = np.sum(fault_lonlat_to_xyz_T * dip_lonlat[:, None, :], axis=2)
dip_xyz /= np.linalg.norm(dip_xyz, axis=1)[:, None]
ft = np.float32
# The normal vectors for each triangle are the third rows of the XYZ->TDCS rotation matrices.
Vnormal = surf_xyz_to_tdcs_R[:, 2, :]
surf_centers_xyz = np.mean(surf_tri_pts_xyz, axis=1)
surf_tri_pts_xyz_conv = surf_tri_pts_xyz.astype(ft)
# The rotation matrix from TDCS to XYZ is the transpose of XYZ to TDCS.
# The inverse of a rotation matrix is its transpose.
surf_tdcs_to_xyz_R = np.transpose(surf_xyz_to_tdcs_R, (0, 2, 1)).astype(ft)
```
Proceeding like the previous section, the next step would be to construct our surface-to-surface left hand side matrix. But, instead, I'm just going to compute the action of that matrix without ever storing the entire matrix. Essentially, each matrix entry will be recomputed whenever it is needed. The `cutde.disp_free` and `cutde.strain_free` functions were written for this purpose.
First, let's check that the `cutde.disp_free` matrix free TDE computation is doing what I said it does. That is, it should be computing a matrix vector product. Since our problem is too big to generate the full matrix in memory, I'll just use the first 100 elements for this test.
First, I'll compute the matrix form. This should look familiar! I multiply the matrix by a random slip vector.
```python
test_centers = (surf_centers_xyz - 1.0 * Vnormal)[:100].astype(ft)
mat = cutde.disp_matrix(test_centers, surf_tri_pts_xyz_conv[:100], 0.25).reshape(
(300, 300)
)
slip = np.random.rand(mat.shape[1]).astype(ft)
correct_disp = mat.dot(slip)
```
And now the matrix free version. Note that the slip is passed to the `disp_free` function. This makes sense since it is required for a matrix-vector product even though it is not required to construct the matrix with `cutde.disp_matrix`.
```python
test_disp = cutde.disp_free(
test_centers, surf_tri_pts_xyz_conv[:100], slip.reshape((-1, 3)), 0.25
)
```
And let's calculate the error... It looks good for the first element. For 32-bit floats, this is machine precision.
```python
err = correct_disp.reshape((-1, 3)) - test_disp
err[0]
```
array([ 4.4703484e-08, -1.7881393e-07, -5.4389238e-07], dtype=float32)
```python
np.mean(np.abs(err)), np.max(np.abs(err))
```
(2.9506162e-07, 1.2218952e-06)
Okay, now that I've shown that `cutde.disp_free` is trustworthy, let's construct the full action of the left-hand side matrix. We need to transform all the rotation and extrapolation steps into a form that makes sense in an "on the fly" setting where we're not storing a matrix.
```python
offsets = [2.0, 1.0]
offset_centers = [(surf_centers_xyz - off * Vnormal).astype(ft) for off in offsets]
surf_xyz_to_tdcs_R = surf_xyz_to_tdcs_R.astype(ft)
# The extrapolate to the boundary step looked like:
# lhs = 2 * eps_mats[1] - eps_mats[0]
# This array stores the coefficients so that we can apply that formula
# on the fly.
extrapolation_mult = [-1, 2]
def matvec(x):
# Step 1) Rotate slip into the TDE-centric coordinate system.
slip_xyz = x.reshape((-1, 3)).astype(ft)
slip_tdcs = np.ascontiguousarray(
np.sum(surf_xyz_to_tdcs_R * slip_xyz[:, None, :], axis=2)
)
# Step 2) Compute the two point extrapolation to the boundary.
out = np.zeros_like(offset_centers[0])
for i, off in enumerate(offsets):
out += extrapolation_mult[i] * cutde.disp_free(
offset_centers[i], surf_tri_pts_xyz_conv, slip_tdcs, 0.25
)
out = out.flatten()
# Step 3) Don't forget the diagonal Identity matrix term!
out += x
return out
```
```python
%%time
matvec(np.random.rand(surf_tris.shape[0] * 3))
```
CPU times: user 8.16 s, sys: 0 ns, total: 8.16 s
Wall time: 8.16 s
array([-0.07595471, 0.14888829, 0.00571918, ..., 0.11337244, 0.06799013, 0.37440816], dtype=float32)
Great! We computed a matrix-free matrix-vector product! This little snippet below will demonstrate that the memory usage is still well under 1 GB proving that we're not storing a matrix anywhere.
```python
import os, psutil
process = psutil.Process(os.getpid())
print(process.memory_info().rss / 1e9)
```
0.178192384
## Iterative linear solution
Okay, so how do we use this matrix-vector product to solve the linear system? Because the entire matrix is never in memory, direct solvers like LU decomposition or Cholesky decomposition are no longer an option. But, iterative linear solvers are still an option. The [conjugate gradient (CG) method](https://en.wikipedia.org/wiki/Conjugate_gradient_method) is a well-known example of an iterative solver. However, CG requires a symmetric positive definite matrix. Because our columns come from integrals over elements but our rows come from observation points, there is an inherent asymmetry to the boundary element matrices we are producing here. [GMRES](https://en.wikipedia.org/wiki/Generalized_minimal_residual_method) is an iterative linear solver that tolerates asymmetry. It's specifically a type of ["Krylov subspace"](https://en.wikipedia.org/wiki/Krylov_subspace) iterative linear solver and as such requires only the set of vectors:
\begin{equation}
\{b, Ab, A^2b, ..., A^nb\}
\end{equation}
As such, only an implementation of the matrix-vector product $Ab$ is required, since the later iterates can be computed by repeated matrix-vector products. For example, $A^2b = A(Ab)$.
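In code, that means the first few Krylov vectors can be generated with nothing but repeated calls to the matrix-vector product (a sketch; `rhs` is the right-hand side vector computed in the next cell):
```python
# Build {b, Ab, A^2 b, A^3 b} using only matrix-vector products.
krylov_vectors = [rhs]
for _ in range(3):
    krylov_vectors.append(matvec(krylov_vectors[-1]))
```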
To start, we compute the right-hand side which is nothing new or fancy.
```python
slip = np.sum(fault_xyz_to_tdcs_R * dip_xyz[:, None, :], axis=2)
rhs = cutde.disp_free(
surf_centers_xyz.astype(ft), fault_pts_xyz[fault_tris].astype(ft), slip, 0.25
).flatten()
```
/home/tbent/Dropbox/active/eq/cutde/cutde/fullspace.py:69: UserWarning: The slips input array has type float64 but needs to be converted to dtype float32. Converting slips to float32 may be expensive.
warnings.warn(
/home/tbent/Dropbox/active/eq/cutde/cutde/fullspace.py:78: UserWarning: The slips input array has Fortran ordering. Converting to C ordering. This may be expensive.
warnings.warn(
Now, the fun stuff: Here, I'll use the [`scipy` implementation of GMRES](https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.linalg.gmres.html). First, we need to use the `scipy.sparse.linalg.LinearOperator` interface to wrap our `matvec` function in a form that the `gmres` function will recognize as something that represents a linear system that can be solved.
```python
import time
import scipy.sparse.linalg as spla
# The number of rows and columns
n = surf_tris.shape[0] * 3
# The matrix vector product function that serves as the "backend" for the LinearOperator.
# This is just a handy wrapper around matvec to track the number of matrix-vector products
# used during the linear solve process.
def M(x):
M.n_iter += 1
start = time.time()
out = matvec(x)
print("n_matvec", M.n_iter, "took", time.time() - start)
return out
M.n_iter = 0
lhs = spla.LinearOperator((n, n), M, dtype=rhs.dtype)
lhs.shape
```
(85164, 85164)
And then we can pass that `LinearOperator` as the left hand side of a system of equations to `gmres`. I'm also going to pass a simple callback that will print the current residual norm at each step of the iterative solver and require a solution tolerance of `1e-4`.
```python
np.linalg.norm(rhs)
```
26.985018
```python
soln = spla.gmres(
lhs,
rhs,
tol=1e-4,
atol=1e-4,
restart=100,
maxiter=1,
callback_type="pr_norm",
callback=lambda x: print(x),
)
soln = soln[0].reshape((-1, 3))
```
n_matvec 1 took 8.181740522384644
n_matvec 2 took 8.217356204986572
0.21284150158882
n_matvec 3 took 8.110730171203613
0.026337904611568017
n_matvec 4 took 8.143013715744019
0.0058701005433943266
n_matvec 5 took 8.101205348968506
0.0016042871687649372
n_matvec 6 took 8.221377849578857
0.0004220627227557851
n_matvec 7 took 8.234569311141968
9.5677343550821e-05
As the figures below demonstrate, only eight matrix-vector products got us a great solution!
```python
inverse_transformer = Transformer.from_crs(
"+proj=geocent +datum=WGS84 +units=m +no_defs",
"+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs",
)
surf_xyz_to_lonlat_T = cutde.compute_projection_transforms(
surf_centers_xyz, inverse_transformer
)
surf_xyz_to_lonlat_T /= np.linalg.norm(surf_xyz_to_lonlat_T, axis=2)[:, :, None]
soln_lonlat = np.sum(surf_xyz_to_lonlat_T * soln[:, None, :], axis=2)
plt.figure(figsize=(13, 6))
for d in range(3):
plt.subplot(1, 3, 1 + d)
cntf = plt.tripcolor(
surf_pts_lonlat[:, 0], surf_pts_lonlat[:, 1], surf_tris, soln_lonlat[:, d]
)
plt.colorbar(cntf)
plt.axis("equal")
plt.xlim([-85, -70])
plt.ylim([-50, 10])
plt.title(
["$u_{\\textrm{east}}$", "$u_{\\textrm{north}}$", "$u_{\\textrm{up}}$"][d]
)
plt.show()
```
## Performance and convergence
An important thing to note about the solution above is that only a few matrix-vector products are required to get to a high level of accuracy. GMRES (and many other iterative linear and nonlinear optimization algorithms) converges at a rate controlled by the condition number of the matrix {cite:p}`Saad1986`. So in order to productively use an iterative linear solver, we need to have a matrix with a small condition number. It turns out that these free surface self-interaction matrices have condition numbers that are very close to 1.0, meaning that all the eigenvalues are very similar in magnitude. As a result, a highly accurate solution with GMRES requires fewer than ten matrix-vector products even for very large matrices.
Because of this dependence on the condition number, in the worst case, iterative solvers are not faster than a direct solver. However, suppose that we need only 10 matrix-vector products. Then, the runtime is approximately $10(2n^2)$ because each matrix-vector product requires $2n^2$ operations (one multiplication and one addition per matrix entry). As a result, GMRES is solving the problem in $O(n^2)$ instead of the $O(n^3)$ asymptotic runtime of direct methods like LU decomposition. So, in addition to requiring less memory, the matrix free method here forced us into actually using a faster linear solver. Of course, LU decomposition comes out ahead again if we need to solve many linear systems with the same left hand side and different right hand sides. That is not the case here but would be relevant for many other problems (e.g. problems involving time stepping).
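To put rough numbers on that comparison for the 85,164-row system solved above (a back-of-the-envelope estimate only, assuming about ten matrix-vector products and the classic $\tfrac{2}{3}n^3$ operation count for dense LU):
```python
n = 85164                    # rows/columns in the South America problem above
gmres_ops = 10 * 2 * n**2    # ~10 matvecs at ~2n^2 operations each
lu_ops = (2 / 3) * n**3      # dense LU decomposition
print(f"GMRES ~ {gmres_ops:.1e} ops, LU ~ {lu_ops:.1e} ops, ratio ~ {lu_ops / gmres_ops:.0f}x")
```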
The mess of code below builds a few figures that demonstrate these points regarding performance and accuracy as a function of the number of elements.
```python
import time
fault_L = 1000.0
fault_H = 1000.0
fault_D = 0.0
fault_pts = np.array(
[
[-fault_L, 0, -fault_D],
[fault_L, 0, -fault_D],
[fault_L, 0, -fault_D - fault_H],
[-fault_L, 0, -fault_D - fault_H],
]
)
fault_tris = np.array([[0, 1, 2], [0, 2, 3]], dtype=np.int64)
results = []
for n_els_per_dim in [2, 4, 8, 16, 32, 48]:
surf_L = 4000
mesh_xs = np.linspace(-surf_L, surf_L, n_els_per_dim + 1)
mesh_ys = np.linspace(-surf_L, surf_L, n_els_per_dim + 1)
mesh_xg, mesh_yg = np.meshgrid(mesh_xs, mesh_ys)
surf_pts = np.array([mesh_xg, mesh_yg, 0 * mesh_yg]).reshape((3, -1)).T.copy()
surf_tris = []
nx = ny = n_els_per_dim + 1
idx = lambda i, j: i * ny + j
for i in range(n_els_per_dim):
for j in range(n_els_per_dim):
x1, x2 = mesh_xs[i : i + 2]
y1, y2 = mesh_ys[j : j + 2]
surf_tris.append([idx(i, j), idx(i + 1, j), idx(i + 1, j + 1)])
surf_tris.append([idx(i, j), idx(i + 1, j + 1), idx(i, j + 1)])
surf_tris = np.array(surf_tris, dtype=np.int64)
surf_tri_pts = surf_pts[surf_tris]
surf_centroids = np.mean(surf_tri_pts, axis=1)
fault_surf_mat = cutde.disp_matrix(surf_centroids, fault_pts[fault_tris], 0.25)
rhs = np.sum(fault_surf_mat[:, :, :, 0], axis=2).flatten()
start = time.time()
eps_mats = []
offsets = [0.002, 0.001]
offset_centers = [
np.mean(surf_tri_pts, axis=1) - off * np.array([0, 0, 1]) for off in offsets
]
for i, off in enumerate(offsets):
eps_mats.append(cutde.disp_matrix(offset_centers[i], surf_pts[surf_tris], 0.25))
lhs = 2 * eps_mats[1] - eps_mats[0]
lhs_reordered = np.empty_like(lhs)
lhs_reordered[:, :, :, 0] = lhs[:, :, :, 1]
lhs_reordered[:, :, :, 1] = lhs[:, :, :, 0]
lhs_reordered[:, :, :, 2] = lhs[:, :, :, 2]
lhs_reordered = lhs_reordered.reshape(
(surf_tris.shape[0] * 3, surf_tris.shape[0] * 3)
)
lhs_reordered += np.eye(lhs_reordered.shape[0])
direct_build_time = time.time() - start
start = time.time()
soln = np.linalg.solve(lhs_reordered, rhs).reshape((-1, 3))
direct_solve_time = time.time() - start
def matvec(x):
extrapolation_mult = [-1, 2]
slip = np.empty((surf_centroids.shape[0], 3))
xrshp = x.reshape((-1, 3))
slip[:, 0] = xrshp[:, 1]
slip[:, 1] = xrshp[:, 0]
slip[:, 2] = xrshp[:, 2]
out = np.zeros_like(offset_centers[0])
for i, off in enumerate(offsets):
out += extrapolation_mult[i] * cutde.disp_free(
offset_centers[i], surf_tri_pts, slip, 0.25
)
return out.flatten() + x
n = surf_tris.shape[0] * 3
def M(x):
M.n_iter += 1
return matvec(x)
M.n_iter = 0
lhs = spla.LinearOperator((n, n), M, dtype=rhs.dtype)
start = time.time()
soln_iter = spla.gmres(lhs, rhs, tol=1e-4)[0].reshape((-1, 3))
iterative_runtime = time.time() - start
l1_err = np.mean(np.abs((soln_iter - soln) / soln))
results.append(
dict(
l1_err=l1_err,
n_elements=surf_tris.shape[0],
iterations=M.n_iter,
direct_build_time=direct_build_time,
direct_solve_time=direct_solve_time,
iterative_runtime=iterative_runtime,
direct_memory=rhs.nbytes + lhs_reordered.nbytes,
iterative_memory=rhs.nbytes,
)
)
```
```python
import pandas as pd
results_df = pd.DataFrame({k: [r[k] for r in results] for k in results[0].keys()})
results_df["direct_runtime"] = (
results_df["direct_build_time"] + results_df["direct_solve_time"]
)
results_df
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>l1_err</th>
<th>n_elements</th>
<th>iterations</th>
<th>direct_build_time</th>
<th>direct_solve_time</th>
<th>iterative_runtime</th>
<th>direct_memory</th>
<th>iterative_memory</th>
<th>direct_runtime</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.000390</td>
<td>8</td>
<td>6</td>
<td>0.001730</td>
<td>0.000297</td>
<td>0.196446</td>
<td>4800</td>
<td>192</td>
<td>0.002027</td>
</tr>
<tr>
<th>1</th>
<td>0.000052</td>
<td>32</td>
<td>7</td>
<td>0.001941</td>
<td>0.000353</td>
<td>0.045686</td>
<td>74496</td>
<td>768</td>
<td>0.002294</td>
</tr>
<tr>
<th>2</th>
<td>0.000371</td>
<td>128</td>
<td>7</td>
<td>0.007796</td>
<td>0.001663</td>
<td>0.152800</td>
<td>1182720</td>
<td>3072</td>
<td>0.009458</td>
</tr>
<tr>
<th>3</th>
<td>0.000520</td>
<td>512</td>
<td>7</td>
<td>0.085798</td>
<td>0.037337</td>
<td>0.748936</td>
<td>18886656</td>
<td>12288</td>
<td>0.123135</td>
</tr>
<tr>
<th>4</th>
<td>0.000599</td>
<td>2048</td>
<td>7</td>
<td>1.271554</td>
<td>1.452238</td>
<td>2.974715</td>
<td>302039040</td>
<td>49152</td>
<td>2.723793</td>
</tr>
<tr>
<th>5</th>
<td>0.000887</td>
<td>4608</td>
<td>7</td>
<td>6.120567</td>
<td>13.449475</td>
<td>11.854967</td>
<td>1528934400</td>
<td>110592</td>
<td>19.570042</td>
</tr>
</tbody>
</table>
</div>
```python
plt.rcParams["text.usetex"] = False
```
```python
plt.figure(figsize=(8, 4))
plt.subplot(1, 2, 1)
plt.plot(results_df["n_elements"], results_df["direct_runtime"], label="direct")
plt.plot(results_df["n_elements"], results_df["iterative_runtime"], label="iterative")
plt.legend()
plt.title("Run time (secs)")
plt.subplot(1, 2, 2)
plt.plot(results_df["n_elements"], results_df["direct_memory"] / 1e6, label="direct")
plt.plot(
results_df["n_elements"], results_df["iterative_memory"] / 1e6, label="iterative"
)
plt.legend()
plt.title("Memory usage (MB)")
plt.show()
```
| 092c30e207552524c39528baed202ae640096e79 | 413,849 | ipynb | Jupyter Notebook | tutorials/tdes/free_matvec.ipynb | tbenthompson/BIE_book | ["MIT"] |
### Core collapse
Heat transfer due to self-interactions can lead to core collapse of SIDM halos, particularly at high cross sections. This process can be accelerated if the halos exist in a tidal field (i.e. are subhalos).
To run this notebook, you'll need to install the software package pyHalo https://github.com/dangilman/pyHalo
```python
from sidmpy.CrossSections.power_law import PowerLaw
from sidmpy.CrossSections.tchannel import TChannel
from sidmpy.CrossSections.velocity_independent import VelocityIndependentCrossSection
from sidmpy.core_collapse_timescale import *
from sidmpy.Solver.util import nfw_velocity_dispersion, nfw_circular_velocity
from matplotlib import cm
import matplotlib.pyplot as plt
plt.rcParams['axes.linewidth'] = 2.5
plt.rcParams['xtick.major.width'] = 2.5
plt.rcParams['xtick.major.size'] = 8
plt.rcParams['xtick.minor.size'] = 5
plt.rcParams['ytick.major.width'] = 2.5
plt.rcParams['ytick.major.size'] = 8
plt.rcParams['ytick.minor.size'] = 4
plt.rcParams['ytick.labelsize'] = 15
plt.rcParams['xtick.labelsize'] = 15
```
## Timescales for subhalos and field halos
Eventually, all SIDM halos should undergo core collapse. The timescale for this process depends on the strength of the interaction cross section and on other processes like tidal stripping. Depending on these factors, it can be much longer than, comparable to, or shorter than the age of the Universe.
We will use a characteristic timescale for core collapse
\begin{equation}
t_c = \frac{1}{3 \rho_s \langle \sigma\left(v\right) v \rangle }
\end{equation}
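As a rough, hedged sanity check of this timescale (independent of the `sidmpy` helper functions used below, and treating $\langle \sigma\left(v\right) v \rangle \approx (\sigma/m)\,v$ with $\rho_s$ in $M_\odot/\mathrm{kpc}^3$, $\sigma/m$ in $\mathrm{cm^2/g}$ and $v$ in km/s; the example numbers are arbitrary):

```python
# Hedged sketch: direct evaluation of t_c = 1 / (3 rho_s <sigma(v) v>),
# not the sidmpy implementation. Unit conversions below are assumptions.
import numpy as np

M_SUN_G = 1.989e33   # g
KPC_CM = 3.086e21    # cm
GYR_S = 3.156e16     # s

def collapse_timescale_gyr(rho_s_msun_kpc3, sigma_over_m_cm2_g, v_kms):
    rho_cgs = rho_s_msun_kpc3 * M_SUN_G / KPC_CM**3   # g / cm^3
    sigma_v = sigma_over_m_cm2_g * v_kms * 1e5        # cm^3 / (g s)
    return 1.0 / (3.0 * rho_cgs * sigma_v) / GYR_S

# e.g. rho_s ~ 1e7 Msun/kpc^3, sigma/m = 5 cm^2/g, v ~ 30 km/s -> roughly 1 Gyr
print(collapse_timescale_gyr(1e7, 5.0, 30.0))
```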
### Core collapse timescale as a function of halo mass and redshift.
#### Note: this cell requires the software package pyHalo
```python
from pyHalo.Halos.lens_cosmo import LensCosmo
import numpy as np
from sidmpy.Solver.util import nfw_velocity_dispersion, nfw_mass_from_velocity_dispersion
import matplotlib.pyplot as plt
from sidmpy.sidmpy import solve_rho_with_interpolation, solve_sigmav_with_interpolation
help(evolution_timescale_scattering_rate)
lc = LensCosmo()
t1 = []
t2 = []
t3 = []
kwargs_cross_section_1 = {'norm': 35., 'v_ref': 10}
kwargs_cross_section_2 = {'norm': 20., 'v_ref': 25}
kwargs_cross_section_3 = {'norm': 5., 'v_ref': 100}
cross_section_1 = TChannel(**kwargs_cross_section_1)
cross_section_2 = TChannel(**kwargs_cross_section_2)
cross_section_3 = TChannel(**kwargs_cross_section_3)
cmap = cm.gist_heat
z = 0.
m = np.logspace(6, 10, 100)
for mi in m:
    c = lc.NFW_concentration(mi, z, scatter=False)
    rhos, rs, _ = lc.NFW_params_physical(mi, c, z)
    v1 = solve_sigmav_with_interpolation(mi, z, 0., 'TCHANNEL', kwargs_cross_section_1)
    v2 = solve_sigmav_with_interpolation(mi, z, 0., 'TCHANNEL', kwargs_cross_section_2)
    v3 = solve_sigmav_with_interpolation(mi, z, 0., 'TCHANNEL', kwargs_cross_section_3)
    t0 = evolution_timescale_scattering_rate(rhos, v1, cross_section_1, rescale=1.)
    t1.append(t0)
    t0 = evolution_timescale_scattering_rate(rhos, v2, cross_section_2, rescale=1.)
    t2.append(t0)
    t0 = evolution_timescale_scattering_rate(rhos, v3, cross_section_3, rescale=1.)
    t3.append(t0)
fig = plt.figure(1)
fig.set_size_inches(6, 6)
ax = plt.subplot(111)
ax.plot(np.log10(m), t1, color='k', lw=5, label='norm=35, v_ref=10')
ax.plot(np.log10(m), t2, color=cmap(0.4), lw=5, label='norm=20, v_ref=25')
ax.plot(np.log10(m), t3, color=cmap(0.8), lw=5, label='norm=5, v_ref=100')
ax.set_xlabel(r'$M_{200} \left[M_{\odot}\right]$', fontsize=18)
ax.set_ylabel('evolution timescale '+r'$t_0 \ \left[\rm{Gyr}\right]$', fontsize=16)
ax.set_xticks([6, 7, 8, 9, 10])
ax.set_xticklabels([r'$10^{6}$', r'$10^{7}$', r'$10^{8}$', r'$10^{9}$', r'$10^{10}$'], fontsize=20)
ax.set_yticks([0., 0.5, 1., 1.5, 2, 2.5, 3])
ax.set_ylim(0, 3)
ax.set_xlim(6,10)
ax.legend(frameon=False, fontsize=14)
```
```python
```
| d1cc2cfcf239f98c06bff1d254c5d82e797c0b69 | 41,864 | ipynb | Jupyter Notebook | example_notebooks/core_collapse.ipynb | jhod0/SIDMpy | ["MIT"] |
# Method of averaging
Find the solution of a 2nd-order system with damping using the method of averaging. The governing equation in dimensionless form:
$$\ddot{x}+\frac{1}{Q}\dot{x}+x=a\cos{\Omega t}$$
Since the equation describes a forced (non-free) vibration, consider a solution of the general form:
\begin{aligned}
x&=r\cos(\Omega t + \phi) \\
&=r\cos\phi\cdot\cos\Omega t-r\sin\phi\cdot\sin\Omega t \\
&=u\cos\Omega t + v\sin\Omega t
\end{aligned}
where $u$ and $v$ are functions that vary slowly in time in comparison with $\cos\Omega t$, such that
$$\dot{u}\cos\Omega t + \dot{v}\sin\Omega t = 0$$
Accounting for that, the derivatives are
\begin{align}
\dot{x}&=\Omega (-u\sin\Omega t+ v\cos\Omega t) \\
\ddot{x}&=\Omega(-\dot{u}\sin\Omega t +\dot{v}\cos\Omega t) -\Omega^2x
\end{align}
Substituting them to the main equation
\begin{equation}
\begin{pmatrix}
\cos\Omega t & \sin\Omega t \\
-\sin\Omega t & \cos\Omega t
\end{pmatrix}
\begin{pmatrix}
\dot{u} \\ \dot{v}
\end{pmatrix}=
\begin{pmatrix}
0 \\ F(t,x,\dot{x})
\end{pmatrix}
\end{equation}
where
$$F(t,x,\dot{x})=-\frac{1-\Omega^2}{\Omega}x-\frac{1}{Q\Omega}\dot{x}+\frac{a}{\Omega}\cos\Omega t$$
Finally, eliminate the trigonometry by averaging $\dot{u}$ and $\dot{v}$ over one period
\begin{aligned}
\dot{u}&\approx-\frac{1}{T}\int_0^T F(t,x,\dot{x}) \sin(\Omega t)dt \\
\dot{v}&\approx\frac{1}{T}\int_0^T F(t,x,\dot{x})\cos(\Omega t) dt
\end{aligned}
```python
import sympy as sp
```
```python
a, W, Q, t, u, v = sp.symbols('a, \Omega, Q, t, u, v')
```
```python
x = u * sp.cos(W*t) + v * sp.sin(W*t)
dx = -W * u * sp.sin(W*t) + W * v * sp.cos(W*t)
```
```python
F = -(1-W**2)/W * x - 1/Q/W * dx + a/W*sp.cos(W*t)
sp.collect(F.expand(),(sp.cos(W*t), sp.sin(W*t)))
```
$\displaystyle \left(\Omega v - \frac{v}{\Omega} + \frac{u}{Q}\right) \sin{\left(\Omega t \right)} + \left(\Omega u + \frac{a}{\Omega} - \frac{u}{\Omega} - \frac{v}{Q}\right) \cos{\left(\Omega t \right)}$
```python
T = 2*sp.pi / W
```
```python
du = (sp.integrate(-sp.sin(W*t)*F, (t,0,T))/T).expand()
du
```
$\displaystyle - \frac{\Omega v}{2} + \frac{v}{2 \Omega} - \frac{u}{2 Q}$
```python
dv = (sp.integrate(sp.cos(W*t)*F, (t,0,T))/T).expand()
dv
```
$\displaystyle \frac{\Omega u}{2} + \frac{a}{2 \Omega} - \frac{u}{2 \Omega} - \frac{v}{2 Q}$
The amplitude-frequency response, for instance, can be obtained from the stationary solution:
```python
sol = sp.solve((du,dv), [u,v])
```
```python
sp.sqrt((sol[u]**2+sol[v]**2)/a**2).simplify()
```
$\displaystyle \sqrt{\frac{Q^{2} \left(Q^{2} \left(\Omega^{2} - 1\right)^{2} + \Omega^{2}\right)}{\left(Q^{2} \Omega^{4} - 2 Q^{2} \Omega^{2} + Q^{2} + \Omega^{2}\right)^{2}}}$
Simplifying this expression manually:
$$|K(\Omega)| = \frac{1}{\sqrt{(1-\Omega^2)^2+\frac{1}{Q^2}\Omega^2}}$$
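A quick numeric spot-check of this manual simplification against the sympy result from the cell above (substituting $a=1$ is harmless here because the ratio does not depend on $a$; the values of $Q$ and $\Omega$ are arbitrary):

```python
import numpy as np
Q_val, W_val = 5.0, 1.3
K_manual = 1/np.sqrt((1 - W_val**2)**2 + (W_val/Q_val)**2)
K_sympy = sp.sqrt((sol[u]**2 + sol[v]**2)/a**2).subs({Q: Q_val, W: W_val, a: 1})
print(K_manual, float(K_sympy))  # the two values should agree
```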
For comparison, the same result can be derived in a more traditional way:
$$\ddot{x}+\frac{1}{Q}\dot{x}+x=y$$
let $x=X\exp(i\Omega t)$, $y=Y\exp(i\Omega t)$, then
$$((1-\Omega^2)+\frac{1}{Q}i\Omega)X=Y$$
the frequency response is
$$K=\frac{X}{Y}=\frac{1}{(1-\Omega^2)+\frac{1}{Q}i\Omega}$$
taking the magnitude of the response recovers the same expression
$$|K| = \frac{1}{\sqrt{(1-\Omega^2)^2+\frac{1}{Q^2}\Omega^2}}$$
```python
from sympy.plotting import plot
```
```python
fig = plot(1/sp.sqrt((1-W**2)**2+W**2/100),
1/sp.sqrt((1-W**2)**2+W**2/25),
1/sp.sqrt((1-W**2)**2+W**2/4),
(W,0,2),
show=False)
fig[0].line_color='blue'
fig[1].line_color='green'
fig[2].line_color='orange'
fig[0].label='Q=10'
fig[1].label='Q=5'
fig[2].label='Q=2'
fig.axis_center = 1, 0
fig.ylabel = '|K|'
fig.xlabel = '$\Omega$'
fig.legend = True
fig.show()
```
```python
```
| b44da790c8f7404cb0faa32419b7ca401a8560f7 | 93,749 | ipynb | Jupyter Notebook | duffing-oscillator/linear-averaging.ipynb | vr050714/nonlinear-vibration-seminar | ["CC0-1.0"] |
Computer Applications in Physics (Rechneranwendungen in der Physik) - Exercise No. 4: Orbits
Santiago.R
```python
import numpy as np
import sympy as sy
from scipy import optimize
import matplotlib.pyplot as plt
from scipy import integrate
```
# Simulation of the Earth's orbit $U_E(t)$
First, the relevant constants and initial parameters are defined:
```python
# Constants
G = 6.67430e-11     # in m^3/(kg*s^2)
m_sonne = 1.989e30  # mass of the Sun in kg
m_erde = 5.972e24   # mass of the Earth in kg (neglected in this solution, since the Sun is assumed stationary at (x, y) = (0, 0))
r_ES = 1.496e11     # Earth-Sun distance in m
v_E = 29.78e3       # orbital speed of the Earth in m/s
# Initial parameters of the Earth
y0_erde = r_ES
x0_erde = 0
v_x0_erde = -v_E    # negative so that the orbit runs counterclockwise
v_y0_erde = 0
```
To create a plot of the Earth's orbit $U_E(t)$ around the Sun, the equations of motion must first be formulated. They are determined by the only force relevant in this system, the gravitational force $\vec{F_g}=-G \cdot \frac{M \cdot m}{|r|^2} \cdot \hat{r}$. With the unit vector $\hat{r}=\frac{\vec{r_e}}{|r|}$ and $|r|=\sqrt{x^2+y^2}$, this can be rewritten as $\vec{F_g}=-G \cdot \frac{M \cdot m}{\sqrt{x^2+y^2}^{3}} \cdot \vec{r_e}$. The $x$- and $y$-components of the acting gravitational force can then be parametrized as $F_x=-G \cdot \frac{M \cdot m}{\sqrt{x^2+y^2}^{3}} \cdot x$ and $F_y=-G \cdot \frac{M \cdot m}{\sqrt{x^2+y^2}^{3}} \cdot y$, which leads to the coupled differential equations $\frac{d^2x}{dt^2}=-\frac{GM}{\sqrt{x^2+y^2}^{3}} \cdot x$ and $\frac{d^2y}{dt^2}=-\frac{GM}{\sqrt{x^2+y^2}^{3}} \cdot y$. Numerically integrating these differential equations over time $t$ then yields the $x$- and $y$-values of the orbit $U_E(t)$ at every integrated time step $t$.
```python
def dgl_erde_sonne(i, t):
    x, y, v_x, v_y = i  # input: the state vector [x, y, v_x, v_y]
    g = G*m_sonne/np.sqrt(x**2+y**2)**3  # gravitational factor GM/|r|^3 at the point (x, y)
    return [v_x, v_y, -x*g, -y*g]
t = np.linspace(0, 31536000, 50000)  # one year expressed in seconds (SI units)
startparameter = [x0_erde, y0_erde, v_x0_erde, v_y0_erde]
s_t = integrate.odeint(dgl_erde_sonne, startparameter, t)
x,y,_,_ = s_t.T
plt.plot(0,0,'oy', ms=8, label = 'Sun')
plt.plot(x,y, label = "Earth's orbit"); plt.axis('equal');
plt.xlabel("x-axis in 10^11 meters")
plt.ylabel("y-axis in 10^11 meters")
plt.legend(loc='upper right')
plt.show()
```
# Simulation for different tolerances [1e-1, 1e-5]
```python
t = np.linspace(0, 31536000, 50000)  # one year expressed in seconds (SI units)
startparameter = [x0_erde, y0_erde, v_x0_erde, v_y0_erde]
for i in range(1,6,1):
    s_t = integrate.odeint(dgl_erde_sonne, startparameter, t, rtol=10**(-i))
    x,y,_,_ = s_t.T
    plt.plot(x,y, label = 10**(-i)); plt.axis('equal');
plt.plot(0,0,'oy', ms=8, label = 'Sun')
plt.xlabel("x-axis in 10^11 meters")
plt.ylabel("y-axis in 10^11 meters")
plt.legend(title='Relative tolerance',loc='upper right')
plt.show()
```
# Simulation for different initial values $r_0$
```python
# New initial values
y1_erde=0.6*r_ES
y2_erde=1.4*r_ES
# Integration for y1
t = np.linspace(0, 4*31536000, 50000)  # the time span (4 years) expressed in seconds (SI units)
startparameter1 = [x0_erde, y1_erde, v_x0_erde, v_y0_erde]
s_t1 = integrate.odeint(dgl_erde_sonne, startparameter1, t)
# Integration for y2
startparameter2 = [x0_erde, y2_erde, v_x0_erde, v_y0_erde]
s_t2 = integrate.odeint(dgl_erde_sonne, startparameter2, t)
# Plots
x1,y1,_,_ = s_t1.T
x2,y2,_,_ = s_t2.T
plt.plot(0,0,'oy', ms=8, label = 'Sun')
plt.plot(x1,y1, label = '0.6*r_ES'); plt.axis('equal');
plt.plot(x2,y2, label = '1.4*r_ES'); plt.axis('equal');
plt.xlabel("x-axis in 10^11 meters")
plt.ylabel("y-axis in 10^11 meters")
plt.legend(title='Initial values',loc='upper right')
plt.show()
```
| 0210a28d586aff0f0fceb0991879bde1b7a5b18e | 88,825 | ipynb | Jupyter Notebook | Umlaufbahnen/Umlaufbahnen.ipynb | RemovedMoney326/Rechneranwendungen-in-der-Physik-Uebungen | ["MIT"] |
<a href="https://colab.research.google.com/github/justin-hsieh/DS-Unit-1-Sprint-3-Statistical-Tests-and-Experiments/blob/master/LS_DS_132_Sampling_Confidence_Intervals_and_Hypothesis_Testing.ipynb" target="_parent"></a>
# Lambda School Data Science Module 142
## Sampling, Confidence Intervals, and Hypothesis Testing
## Prepare - examine other available hypothesis tests
If you had to pick a single hypothesis test for your toolbox, the t-test would probably be the best choice - but the good news is you don't have to pick just one! Here are some of the others to be aware of:
```
import numpy as np
from scipy.stats import chisquare # One-way chi square test
# Chi square can take any crosstab/table and test the independence of rows/cols
# The null hypothesis is that the rows/cols are independent -> low chi square
# The alternative is that there is a dependence -> high chi square
# Be aware! Chi square does *not* tell you direction/causation
ind_obs = np.array([[1, 1], [2, 2]]).T
print(ind_obs)
print(chisquare(ind_obs, axis=None))
dep_obs = np.array([[16, 18, 16, 14, 12, 12], [32, 24, 16, 28, 20, 24]]).T
print(dep_obs)
print(chisquare(dep_obs, axis=None))
```
```
# Distribution tests:
# We often assume that something is normal, but it can be important to *check*
# For example, later on with predictive modeling, a typical assumption is that
# residuals (prediction errors) are normal - checking is a good diagnostic
from scipy.stats import normaltest
# Poisson models arrival times and is related to the binomial (coinflip)
sample = np.random.poisson(5, 1000)
print(normaltest(sample)) # Pretty clearly not normal
```
```
# Kruskal-Wallis H-test - compare the median rank between 2+ groups
# Can be applied to ranking decisions/outcomes/recommendations
# The underlying math comes from chi-square distribution, and is best for n>5
from scipy.stats import kruskal
x1 = [1, 3, 5, 7, 9]
y1 = [2, 4, 6, 8, 10]
print(kruskal(x1, y1)) # x1 is a little better, but not "significantly" so
x2 = [1, 1, 1]
y2 = [2, 2, 2]
z = [2, 2] # Hey, a third group, and of different size!
print(kruskal(x2, y2, z)) # x clearly dominates
```
And there's many more! `scipy.stats` is fairly comprehensive, though there are even more available if you delve into the extended world of statistics packages. As tests get increasingly obscure and specialized, the importance of knowing them by heart becomes small - but being able to look them up and figure them out when they *are* relevant is still important.
## T-test Assumptions
<https://statistics.laerd.com/statistical-guides/independent-t-test-statistical-guide.php>
- Independence of means
Are the means of our voting data independent (do not affect the outcome of one another)?
The best way to increase the likelihood of our means being independent is to randomly sample (which we did not do).
```
from scipy.stats import ttest_ind
?ttest_ind
```
- "Homogeneity" of Variance?
Is the magnitude of the variance between the two roughly the same?
I think we're OK on this one for the voting data, although it could be better, since one party's sample was larger than the other's.
If we suspect this to be a problem, then we can use Welch's t-test instead.
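For reference, Welch's variant is just the `equal_var=False` flag of `ttest_ind`; the two samples below are made up purely for illustration:

```
import numpy as np
from scipy.stats import ttest_ind
rng = np.random.RandomState(42)
group_a = rng.normal(0, 1, size=50)
group_b = rng.normal(0, 2, size=80)  # different variance and sample size
print(ttest_ind(group_a, group_b, equal_var=False))  # Welch's t-test
```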
```
?ttest_ind
```
- "Dependent Variable" (sample means) are Distributed Normally
<https://stats.stackexchange.com/questions/9573/t-test-for-non-normal-when-n50>
Lots of statistical tests depend on normal distributions. We can test for normality using Scipy as was shown above.
This assumption is often made even when it only weakly holds. If you strongly suspect that things are not normally distributed, you can transform your data to make it look more normal and then run your test. This problem typically goes away for large sample sizes (yay Central Limit Theorem), which is often why you don't hear it brought up; people declare the assumption to be satisfied either way.
## Central Limit Theorem
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
sample_means = []
for x in range(0,3000):
    coinflips = np.random.binomial(n=1, p=.5, size=30)
    one_sample = coinflips
    sample_means.append(coinflips.mean())
print(len(sample_means))
print(sample_means)
```
```
df = pd.DataFrame({'a': one_sample})
df.head()
```
```
df.a.hist()
```
```
ax = plt.hist(sample_means, bins=30)
plt.title('Distribution of 3000 sample means \n (of 30 coinflips each)');
```
What does the Central Limit Theorem state? That no matter the initial distribution of the population, the distribution of sample means will approximate a normal distribution as $n \rightarrow \infty$.
This has very important implications for hypothesis testing and is precisely the reason why the t-distribution begins to approximate the normal distribution as our sample size increases.
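As a quick sanity check (reusing `normaltest` from earlier), we can test the simulated sample means directly; unlike the Poisson draws above, they should look far more Gaussian:

```
from scipy.stats import normaltest
print(normaltest(sample_means))  # a large p-value means no evidence against normality
```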
## Standard Error of the Mean
What does it mean to "estimate" the population mean?
```
import numpy as np
import pandas as pd
lambda_heights = np.random.uniform(4,6.5, size=2000)
print(len(lambda_heights))
lambda_heights
```
```
print("Population Mean:", lambda_heights.mean())
print("Population Standard Deviation:", lambda_heights.std())
```
```
population = pd.DataFrame({'heights': lambda_heights})
print(population.shape)
population.head()
```
```
sample = population.sample(100)
print(sample.shape)
sample.head()
```
```
print("Sample Mean 1:", sample['heights'].mean())
```
```
sample = population.sample(100)
print(sample.shape)
sample.head()
```
```
print("Sample Mean 2:", sample['heights'].mean())
```
## Build and Interpret a Confidence Interval
```
from scipy import stats


def confidence_interval(data, confidence=0.95):
    """
    Calculate a confidence interval around a sample mean for given data.
    Using t-distribution and two-tailed test, default 95% confidence.
    Arguments:
    data - iterable (list or numpy array) of sample observations
    confidence - level of confidence for the interval
    Returns:
    tuple of (mean, lower bound, upper bound)
    """
    data = np.array(data)
    mean = np.mean(data)
    n = len(data)
    stderr = stats.sem(data)
    interval = stderr * stats.t.ppf((1 + confidence) / 2.0, n - 1)
    return (mean, mean - interval, mean + interval)
```
```
coinflips_100 = np.random.binomial(n=1, p=.5, size=100)
sample_std = np.std(coinflips_100)
print("sample standard deviation:", sample_std)
sample_size = len(coinflips_100)
print("sample size:", sample_size)
```
```
standard_error = sample_std / (sample_size**(.5))
print("standard error:", standard_error)
```
```
from scipy import stats
stderr = stats.sem(coinflips_100, ddof=0)
stderr
```
### What confidence level do we want our confidence interval to represent?
```
t = stats.t.ppf(.975 , sample_size-1)
```
```
sample_mean = coinflips_100.mean()
```
```
confidence_interval = (sample_mean - t*stderr, sample_mean + t*stderr)
margin_of_error = t*stderr
print("Sample Mean", sample_mean)
print("Margin of Error:", margin_of_error)
print("Confidence Interval:", confidence_interval)
```
```
confidence_interval[0]
```
```
confidence_interval[1]
```
## Graphically Represent a Confidence Interval
```
import seaborn as sns
sns.kdeplot(coinflips_100)
plt.axvline(x=confidence_interval[0], color='red')
plt.axvline(x=confidence_interval[1], color='red')
plt.axvline(x=sample_mean, color='k');
```
## Relationship between Confidence Intervals and T-tests
Confidence Interval == Bounds of statistical significance for our t-test
A sample mean that falls inside of our confidence interval will "FAIL TO REJECT" our null hypothesis
A sample mean that falls outside of our confidence interval will "REJECT" our null hypothesis
```
from scipy.stats import t, ttest_1samp
```
```
import numpy as np
coinflip_means = []
for x in range(0,100):
    coinflips = np.random.binomial(n=1, p=.5, size=30)
    coinflip_means.append(coinflips.mean())
print(coinflip_means)
```
```
# Sample Size
n = len(coinflip_means)
# Degrees of Freedom
dof = n-1
# The Mean of Means:
mean = np.mean(coinflip_means)
# Sample Standard Deviation
sample_std = np.std(coinflip_means, ddof=1)
# Standard Error
std_err = sample_std/n**.5
CI = t.interval(.95, dof, loc=mean, scale=std_err)
print("95% Confidence Interval: ", CI)
```
```
'''You can roll your own CI calculation pretty easily.
The only thing that's a little bit challenging
is understanding the t stat lookup'''
# 95% confidence interval
t_stat = t.ppf(.975, dof)
print("t Statistic:", t_stat)
CI = (mean-(t_stat*std_err), mean+(t_stat*std_err))
print("Confidence Interval", CI)
```
t Statistic: 1.9842169515086827
Confidence Interval (0.48189276007256693, 0.5181072399274331)
A null hypothesis that's just inside of our confidence interval == fail to reject
```
ttest_1samp(coinflip_means, .49)
```
A null hypothesis that's just outside of our confidence interval == reject
```
ttest_1samp(coinflip_means, .4818927)
```
## Run a $\chi^{2}$ Test "by hand" (Using Numpy)
```
df = pd.read_csv('https://raw.githubusercontent.com/ryanleeallred/datasets/master/adult.csv', na_values=" ?")
print(df.shape)
df.head()
```
```
df.describe()
```
```
df.describe(exclude='number')
```
```
cut_points = [0, 9, 19, 29, 39, 49, 1000]
label_names = ['0-9', '10-19', '20-29', '30-39', '40-49', '50+']
df['hours_per_week_categories'] = pd.cut(df['hours-per-week'], cut_points, labels=label_names)
df.head()
```
```
df['sex'].value_counts()
```
```
df['hours_per_week_categories'].value_counts()
```
```
df = df.sort_values(by='hours_per_week_categories', ascending=True)
df.head()
```
```
contingency_table = pd.crosstab(df['sex'], df['hours_per_week_categories'], margins=True)
contingency_table
```
```
femalecount = contingency_table.iloc[0][0:6].values
femalecount
```
```
malecount = contingency_table.iloc[1][0:6].values
malecount
```
```
import matplotlib.pyplot as plt
import seaborn as sns
#Plots the bar chart
fig = plt.figure(figsize=(10, 5))
sns.set(font_scale=1.8)
categories = ["0-9","10-19","20-29","30-39","40-49","50+"]
p1 = plt.bar(categories, malecount, 0.55, color='#d62728')
p2 = plt.bar(categories, femalecount, 0.55, bottom=malecount)
plt.legend((p2[0], p1[0]), ('Female', 'Male'))
plt.xlabel('Hours per Week Worked')
plt.ylabel('Count')
plt.show()
```
## Expected Value Calculation
\begin{align}
\text{expected}_{i,j} = \frac{(\text{row}_{i}\ \text{total})\,(\text{column}_{j}\ \text{total})}{\text{total observations}}
\end{align}
```
# Get Row Sums
row_sums = contingency_table.iloc[0:2, 6].values
col_sums = contingency_table.iloc[2, 0:6].values
print(row_sums)
print(col_sums)
```
```
total = contingency_table.loc['All','All']
total
```
32561
```
expected = []
for i in range(len(row_sums)):
    expected_row = []
    for column in col_sums:
        expected_val = column*row_sums[i]/total
        expected_row.append(expected_val)
    expected.append(expected_row)
expected = np.array(expected)
print(expected.shape)
print(expected)
```
```
observed = pd.crosstab(df['sex'], df['hours_per_week_categories']).values
print(observed.shape)
observed
```
## Chi-Squared Statistic with Numpy
\begin{align}
\chi^2 = \sum \frac{(observed_{i}-expected_{i})^2}{(expected_{i})}
\end{align}
For the $observed$ values we will just use a version of our contingency table without the margins as a numpy array. In this way, if our observed values array and our expected values array are the same shape, then we can subtract them and divide them directly which makes the calculations a lot cleaner. No for loops!
```
chi_squared = ((observed - expected)**2/(expected)).sum()
print(f"Chi-Squared: {chi_squared}")
```
```
# Calculate Degrees of Freedom
dof = (len(row_sums)-1)*(len(col_sums)-1)
print(f"Degrees of Freedom: {dof}")
```
## Run a $\chi^{2}$ Test using Scipy
```
chi_squared, p_value, dof, expected = stats.chi2_contingency(observed)
print(f"Chi-Squared: {chi_squared}")
print(f"P-value: {p_value}")
print(f"Degrees of Freedom: {dof}")
print("Expected: \n", np.array(expected))
```
Null Hypothesis: Hours worked per week bins is **independent** of sex.
Due to a p-value of effectively 0, we REJECT the null hypothesis that hours worked per week and sex are independent, and conclude that there is an association between hours worked per week and sex.
## Assignment - Build a confidence interval
A confidence interval refers to a neighborhood around some point estimate, the size of which is determined by the desired p-value. For instance, we might say that 52% of Americans prefer tacos to burritos, with a 95% confidence interval of +/- 5%.
52% (0.52) is the point estimate, and +/- 5% (the interval $[0.47, 0.57]$) is the confidence interval. "95% confidence" means a p-value $\leq 1 - 0.95 = 0.05$.
In this case, the confidence interval includes $0.5$ - which is the natural null hypothesis (that half of Americans prefer tacos and half burritos, thus there is no clear favorite). So in this case, we could use the confidence interval to report that we've failed to reject the null hypothesis.
But providing the full analysis with a confidence interval, including a graphical representation of it, can be a helpful and powerful way to tell your story. Done well, it is also more intuitive to a layperson than simply saying "fail to reject the null hypothesis" - it shows that in fact the data does *not* give a single clear result (the point estimate) but a whole range of possibilities.
How is a confidence interval built, and how should it be interpreted? It does *not* mean that 95% of the data lies in that interval - instead, the frequentist interpretation is "if we were to repeat this experiment 100 times, we would expect the average result to lie in this interval ~95 times."
For a 95% confidence interval and a normal(-ish) distribution, you can simply remember that +/-2 standard deviations contains 95% of the probability mass, and so the 95% confidence interval based on a given sample is centered at the mean (point estimate) and has a range of +/- 2 (or technically 1.96) standard deviations.
Different distributions/assumptions (90% confidence, 99% confidence) will require different math, but the overall process and interpretation (with a frequentist approach) will be the same.
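As a small aside (not part of the original lecture code), the 1.96 factor is just the two-tailed critical value of the standard normal, which `scipy.stats.norm.ppf` returns directly:

```
from scipy import stats
print(stats.norm.ppf(0.975))  # ~1.96 for 95% confidence
print(stats.norm.ppf(0.95))   # ~1.64 for 90% confidence
print(stats.norm.ppf(0.995))  # ~2.58 for 99% confidence
```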
Your assignment - using the data from the prior module ([congressional voting records](https://archive.ics.uci.edu/ml/datasets/Congressional+Voting+Records)):
### Confidence Intervals:
1. Generate and numerically represent a confidence interval
2. Graphically (with a plot) represent the confidence interval
3. Interpret the confidence interval - what does it tell you about the data and its distribution?
### Chi-squared tests:
4. Take a dataset that we have used in the past in class that has **categorical** variables. Pick two of those categorical variables and run a chi-squared tests on that data
- By hand using Numpy
- In a single line using Scipy
Stretch goals:
1. Write a summary of your findings, mixing prose and math/code/results. *Note* - yes, this is by definition a political topic. It is challenging but important to keep your writing voice *neutral* and stick to the facts of the data. Data science often involves considering controversial issues, so it's important to be sensitive about them (especially if you want to publish).
2. Apply the techniques you learned today to your project data or other data of your choice, and write/discuss your findings here.
3. Refactor your code so it is elegant, readable, and can be easily run for all issues.
```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from scipy import stats
from scipy.stats import t, ttest_1samp
```
```
df1 = pd.read_csv('house-votes-84.data')
df1.columns = ['Class Name', 'Handicapped_Infants','Water_project_cost_sharing',
'Adoption_of_the_budget_res',
'Physician_fee_freeze','El_Salvador_aid', 'Religious_Groups_in_schools',
'Anti_Satellite_test_ban','Aid_to_Nicaraguan_contras','Mx_missile',
'Immigration','Synfuels_corporation_cutback','Education_spending',
'Superfund_right_to_sue','Crime','Duty_free_exports','export_Admin_act_SA']
df1.replace({'y': 1, 'n': 0,'?': np.nan}, inplace=True)
df1.head()
```
```
dems = df1[df1['Class Name'] == 'democrat']
reps = df1[df1['Class Name'] == 'republican']
#Sample Mean and size
dem_crime = dems['Crime'].dropna()
sample_size = len(dem_crime)
# Sample Standard Deviation and Standard Error
sample_std = np.std(dem_crime)
standard_e = sample_std / (sample_size**(0.5))
# T-stat, margin of error, confidence interval
t = stats.t.ppf(.975 , sample_size-1)
margin_of_error = t*standard_e
confidence_interval = (dem_crime.mean() - t*standard_e, dem_crime.mean() + t*standard_e)
print("Sample Mean", dem_crime.mean())
print("Margin of Error:", margin_of_error)
print("Confidence Interval:", confidence_interval)
```
Sample Mean 0.35019455252918286
Margin of Error: 0.05859842293726726
Confidence Interval: (0.2915961295919156, 0.4087929754664501)
```
sns.kdeplot(dem_crime, color='orange')
plt.axvline(x=confidence_interval[0], color='blue')
plt.axvline(x=confidence_interval[1], color='blue')
plt.axvline(x=dem_crime.mean(), color='g');
```
Using the data in the "Crime" column for Democrats, the 95% confidence interval was calculated to be:
(0.2915961295919156, 0.4087929754664501)
Any sample mean that falls within this interval will fail to reject the null hypothesis, whereas any sample mean that falls outside of it will reject the null hypothesis.
## Chi-Squared Test
```
cut_points = [0, 19, 39, 59, 79, 99, 1000]
label_names = ['0-19', '20-39', '40-59', '60-79', '80-99', '100+']
df['age_cats'] = pd.cut(df['age'], cut_points, labels=label_names)
df.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>age</th>
<th>workclass</th>
<th>fnlwgt</th>
<th>education</th>
<th>education-num</th>
<th>marital-status</th>
<th>occupation</th>
<th>relationship</th>
<th>race</th>
<th>sex</th>
<th>capital-gain</th>
<th>capital-loss</th>
<th>hours-per-week</th>
<th>country</th>
<th>salary</th>
<th>age_cats</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>39</td>
<td>State-gov</td>
<td>77516</td>
<td>Bachelors</td>
<td>13</td>
<td>Never-married</td>
<td>Adm-clerical</td>
<td>Not-in-family</td>
<td>White</td>
<td>Male</td>
<td>2174</td>
<td>0</td>
<td>40</td>
<td>United-States</td>
<td><=50K</td>
<td>20-39</td>
</tr>
<tr>
<th>1</th>
<td>50</td>
<td>Self-emp-not-inc</td>
<td>83311</td>
<td>Bachelors</td>
<td>13</td>
<td>Married-civ-spouse</td>
<td>Exec-managerial</td>
<td>Husband</td>
<td>White</td>
<td>Male</td>
<td>0</td>
<td>0</td>
<td>13</td>
<td>United-States</td>
<td><=50K</td>
<td>40-59</td>
</tr>
<tr>
<th>2</th>
<td>38</td>
<td>Private</td>
<td>215646</td>
<td>HS-grad</td>
<td>9</td>
<td>Divorced</td>
<td>Handlers-cleaners</td>
<td>Not-in-family</td>
<td>White</td>
<td>Male</td>
<td>0</td>
<td>0</td>
<td>40</td>
<td>United-States</td>
<td><=50K</td>
<td>20-39</td>
</tr>
<tr>
<th>3</th>
<td>53</td>
<td>Private</td>
<td>234721</td>
<td>11th</td>
<td>7</td>
<td>Married-civ-spouse</td>
<td>Handlers-cleaners</td>
<td>Husband</td>
<td>Black</td>
<td>Male</td>
<td>0</td>
<td>0</td>
<td>40</td>
<td>United-States</td>
<td><=50K</td>
<td>40-59</td>
</tr>
<tr>
<th>4</th>
<td>28</td>
<td>Private</td>
<td>338409</td>
<td>Bachelors</td>
<td>13</td>
<td>Married-civ-spouse</td>
<td>Prof-specialty</td>
<td>Wife</td>
<td>Black</td>
<td>Female</td>
<td>0</td>
<td>0</td>
<td>40</td>
<td>Cuba</td>
<td><=50K</td>
<td>20-39</td>
</tr>
</tbody>
</table>
</div>
```
df1 = pd.crosstab(df['salary'], df['age_cats'], margins = True)
df1
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th>age_cats</th>
<th>0-19</th>
<th>20-39</th>
<th>40-59</th>
<th>60-79</th>
<th>80-99</th>
<th>All</th>
</tr>
<tr>
<th>salary</th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<th><=50K</th>
<td>1655</td>
<td>13849</td>
<td>7224</td>
<td>1887</td>
<td>105</td>
<td>24720</td>
</tr>
<tr>
<th>>50K</th>
<td>2</td>
<td>2818</td>
<td>4369</td>
<td>636</td>
<td>16</td>
<td>7841</td>
</tr>
<tr>
<th>All</th>
<td>11593</td>
<td>1657</td>
<td>16667</td>
<td>2523</td>
<td>121</td>
<td>32561</td>
</tr>
</tbody>
</table>
</div>
```
row_sums = df1.iloc[0:2, 5].values
col_sums = df1.iloc[2, 0:5].values
total = df1.loc['All','All']
expected = []
for i in range(len(row_sums)):
    expected_row = []
    for column in col_sums:
        expected_val = column*row_sums[i]/total
        expected_row.append(expected_val)
    expected.append(expected_row)
expected = np.array(expected)
print(expected.shape)
print(expected)
observed = pd.crosstab(df['salary'], df['age_cats']).values
print(observed.shape)
observed
```
(2, 5)
[[ 8801.29480053 1257.97856331 12653.42710605 1915.43748656
91.86204355]
[ 2791.70519947 399.02143669 4013.57289395 607.56251344
29.13795645]]
(2, 5)
array([[ 1655, 13849, 7224, 1887, 105],
[ 2, 2818, 4369, 636, 16]])
```
chi_squared = ((observed - expected)**2/(expected)).sum()
print(f"Chi-Squared: {chi_squared}")
dof = (len(row_sums)-1)*(len(col_sums)-1)
print(f"Degrees of Freedom: {dof}")
```
Chi-Squared: 2172.824317596846
Degrees of Freedom: 4
```
chi_squared, p_value, dof, expected = stats.chi2_contingency(observed)
print(f"Chi-Squared: {chi_squared}")
print(f"P-value: {p_value}")
print(f"Degrees of Freedom: {dof}")
print("Expected: \n", np.array(expected))
```
Chi-Squared: 2172.824317596846
P-value: 0.0
Degrees of Freedom: 4
Expected:
[[ 1257.97856331 12653.42710605 8801.29480053 1915.43748656
91.86204355]
[ 399.02143669 4013.57289395 2791.70519947 607.56251344
29.13795645]]
## Resources
- [Interactive visualize the Chi-Squared test](https://homepage.divms.uiowa.edu/~mbognar/applets/chisq.html)
- [Calculation of Chi-Squared test statistic](https://en.wikipedia.org/wiki/Pearson%27s_chi-squared_test)
- [Visualization of a confidence interval generated by R code](https://commons.wikimedia.org/wiki/File:Confidence-interval.svg)
- [Expected value of a squared standard normal](https://math.stackexchange.com/questions/264061/expected-value-calculation-for-squared-normal-distribution) (it's 1 - which is why the expected value of a Chi-Squared with $n$ degrees of freedom is $n$, as it's the sum of $n$ squared standard normals)
| be30451b50ef143574c77c2db0854c015ca855fe | 72,410 | ipynb | Jupyter Notebook | LS_DS_132_Sampling_Confidence_Intervals_and_Hypothesis_Testing.ipynb | justin-hsieh/DS-Unit-1-Sprint-3-Statistical-Tests-and-Experiments | ["MIT"] |
# Building blocks of Reservoir Computing
```python
from pyrcn.base.blocks import InputToNode
from sklearn.datasets import make_blobs
```
```python
# Generate a toy dataset
U, y = make_blobs(n_samples=100, n_features=10)
```
## Input-to-Node
The "Input-to-Node" component describes the connections from the input features to the reservoir and the activation functions of the reservoir neurons. Normally, the input weight matrix $\mathbf{W}^{\mathrm{in}}$ has the dimension of $N^{\mathrm{res}}\times N^{\mathrm{in}}$, where $N^{\mathrm{res}}$ and $N^{\mathrm{in}}$ are the size of the reservoir and dimension of the input feature vector $\mathbf{u}[n]$ with the time index $n$, respectively. With
\begin{align}
\label{eq:InputToNode}
\mathbf{r}'[n] = f'(\mathbf{W}^{\mathrm{in}}\mathbf{u}[n] + \mathbf{w}^{\mathrm{bi}}) \text{ , }
\end{align}
we can describe the non-linear projection of the input features $\mathbf{u}[n]$ into the high-dimensional reservoir space $\mathbf{r}'[n]$ via the non-linear input activation function $f'(\cdot)$.
The values inside the input weight matrix are usually initialized randomly from a uniform distribution on the interval $[-1, 1]$ and are afterwards scaled using the input scaling factor $\alpha_{\mathrm{u}}$. Since a high-dimensional input feature space and/or a large reservoir size $N^{\mathrm{res}}$ leads to a huge input weight matrix and to expensive computations to feed the feature vectors into the reservoir, it was shown that it is sufficient to have only a very small number of connections from the input nodes to the nodes inside the reservoir. Each node of the reservoir may therefore be connected to only $K^{\mathrm{in}}$ ($\ll N^{\mathrm{in}}$) randomly selected input entries. This makes $\mathbf{W}^{\mathrm{in}}$ typically very sparse and makes feeding the feature vectors into the reservoir potentially more efficient.
The bias weights $\mathbf{w}^{\mathrm{bi}}$ with dimension $N^{\mathrm{res}}$ are typically initialized by fixed random values from a uniform distribution between $\pm 1$ and multiplied by the hyper-parameter $\alpha_{\mathrm{bi}}$.
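Before handing this over to PyRCN below, here is a minimal NumPy sketch of the equation above; the sizes, seed and scaling values are arbitrary assumptions, and this is not the PyRCN implementation:

```python
import numpy as np

rng = np.random.default_rng(42)
N_in, N_res = 10, 50
alpha_u, alpha_bi = 1.0, 0.1                        # input_scaling, bias_scaling
W_in = alpha_u * rng.uniform(-1, 1, (N_res, N_in))  # dense for simplicity (no k_in sparsity)
w_bi = alpha_bi * rng.uniform(-1, 1, N_res)
u_n = rng.normal(size=N_in)                         # one input vector u[n]
r_prime = np.tanh(W_in @ u_n + w_bi)                # r'[n] = f'(W_in u[n] + w_bi)
print(r_prime.shape)
```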
```python
# _ _ _ _ _ _ _ _
# | |
# ----| Input-to-Node |------
# u[n]|_ _ _ _ _ _ _ _|r'[n]
# U R_i2n
input_to_node = InputToNode(hidden_layer_size=50,
k_in=5, input_activation="tanh",
input_scaling=1.0, bias_scaling=0.1)
R_i2n = input_to_node.fit_transform(U)
print(U.shape, R_i2n.shape)
```
## Node-to-Node
The "Node-to-Node" component describes the connections inside the reservoir. The output of "Input-to-Node" $\mathbf{r}'[n]$ together with the output of "Node-to-Node" from the previous time step $\mathbf{r}[n-1]$ are used to compute the new output of "Node-to-Node" $\mathbf{r}[n]$ using
\begin{align}
\label{eq:NodeToNode}
\mathbf{r}[n] = (1-\lambda)\mathbf{r}[n-1] + \lambda f(\mathbf{r}'[n] + \mathbf{W}^{\mathrm{res}}\mathbf{r}[n-1])\text{ , }
\end{align}
which is a leaky integration of the time-dependent reservoir states $\mathbf{r}[n]$. $f(\cdot)$ acts as the non-linear reservoir activation function of the neurons in "Node-to-Node". The leaky integration is equivalent to a first-order lowpass filter. Depending on the leakage $\lambda \in (0, 1]$, the reservoir states are globally smoothed.
The reservoir weight matrix $\mathbf{W}^{\mathrm{res}}$ is a square matrix of the size $N^{\mathrm{res}}$. These weights are typically initialized from a standard normal distribution. The Echo State Property (ESP) requires that the states of all reservoir neurons decay in a finite time for a finite input pattern. In order to fulfill the ESP, the reservoir weight matrix is typically normalized by its largest absolute eigenvalue and rescaled to a spectral radius $\rho$, because it was shown that the ESP holds as long as $\rho \le 1$. The spectral radius and the leakage together shape the temporal memory of the reservoir. As with "Input-to-Node", the reservoir weight matrix gets huge in case of large reservoir sizes $N^{\mathrm{res}}$; it can then be sufficient to connect each node of the reservoir to only $K^{\mathrm{rec}}$ ($\ll N^{\mathrm{res}}$) randomly selected other nodes in the reservoir, and to set the remaining weights to zero.
To incorporate some information from the future inputs, bidirectional RCNs have been introduced.
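And a corresponding hedged NumPy sketch of the leaky-integrator update (again not the PyRCN internals; rescaling by the largest absolute eigenvalue is one common way of setting the spectral radius):

```python
import numpy as np

rng = np.random.default_rng(0)
N_res, leakage, rho = 50, 0.9, 1.0
W_res = rng.normal(size=(N_res, N_res))
W_res *= rho / np.max(np.abs(np.linalg.eigvals(W_res)))  # rescale to spectral radius rho
r_prev = np.zeros(N_res)                                  # r[n-1]
r_prime = rng.normal(size=N_res)                          # r'[n] from "Input-to-Node"
r_new = (1 - leakage) * r_prev + leakage * np.tanh(r_prime + W_res @ r_prev)
print(r_new.shape)
```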
```python
from pyrcn.base.blocks import NodeToNode
```
```python
# _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
# | | | |
# ----| Input-to-Node |------| Node-to-Node |------
# u[n]|_ _ _ _ _ _ _ _|r'[n] |_ _ _ _ _ _ _ |r[n]
# U R_i2n R_n2n
# Initialize, fit and apply NodeToNode
node_to_node = NodeToNode(hidden_layer_size=50,
reservoir_activation="tanh",
spectral_radius=1.0, leakage=0.9,
bidirectional=False)
R_n2n = node_to_node.fit_transform(R_i2n)
print(U.shape, R_n2n.shape)
```
## Node-to-Output
The "Node-to-Output" component is the mapping of the reservoir state $\mathbf{r}[n]$ to the output $\mathbf{y}[n]$ of the network. In conventional RCNs, this mapping is trained using (regularized) linear regression. To that end, all reservoir states $\mathbf{r}[n]$ are concatenated into the reservoir state collection matrix $\mathbf{R}$. As linear regression usually contains an intercept term, every reservoir state $\mathbf{r}[n]$ is expanded by a constant of 1. All desired outputs $\mathbf{d}[n]$ are collected into the desired output collection matrix $\mathbf{D}$. Then, the mapping matrix $\mathbf{W}^{\mathrm{out}}$ can be computed using
\begin{align}
\label{eq:linearRegression}
\mathbf{W}^{\mathrm{out}} = \mathbf{D}\mathbf{R}^{\mathrm{T}}\left(\mathbf{R}\mathbf{R}^{\mathrm{T}} + \epsilon\mathbf{I}\right)^{-1} \text{,}
\end{align}
where $\epsilon$ is a regularization parameter.
The size of the output weight matrix $N^{\mathrm{out}}\times (N^{\mathrm{res}} + 1)$ or $N^{\mathrm{out}}\times (2 \times N^{\mathrm{res}} + 1)$ in case of a bidirectional "Node-to-Node" determines the total number of free parameters to be trained in the neural network.
After training, the output $\mathbf{y}[n]$ can be computed using Equation
\begin{align}
\label{eq:readout}
\mathbf{y}[n] = \mathbf{W}^{\mathrm{out}}\mathbf{r}[n] \text{ . }
\end{align}
Note that, in general, other training methodologies could be used to compute output weights.
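A hedged sketch of that closed-form ridge solution, assuming the conventions from the text ($\mathbf{R}$ of shape $(N^{\mathrm{res}}+1) \times T$ with a constant row of ones appended, $\mathbf{D}$ of shape $N^{\mathrm{out}} \times T$):

```python
import numpy as np

def ridge_readout(R, D, eps=1e-6):
    """W_out = D R^T (R R^T + eps I)^{-1}; afterwards y[n] = W_out @ r[n]."""
    return D @ R.T @ np.linalg.inv(R @ R.T + eps * np.eye(R.shape[0]))
```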
```python
from sklearn.linear_model import Ridge
```
```python
# _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
# | | | | | |
# ----|Input-to-Node |-----|Node-to-Node |-----|Node-to-Output |
# u[n]| _ _ _ _ _ _ _|r'[n]|_ _ _ _ _ _ _|r[n] | _ _ _ _ _ _ _ |
# U R_i2n R_n2n |
# |
# y[n] | y_pred
# Initialize, fit and apply NodeToOutput
y_pred = Ridge().fit(R_n2n, y).predict(R_n2n)
print(y_pred.shape)
```
## Predicting the Mackey-Glass equation
Set up and train vanilla RCNs for predicting the Mackey-Glass time series with the same settings as used to introduce ESNs. The minimum working example shows the simplicity of implementing a model with PyRCN and the inter-operability with scikit-learn; it needs only four lines of code to load the Mackey-Glass dataset that is part of PyRCN and only two lines to fit the different RCN models, respectively. Instead of the default incremental regression, we have customized the ```ELMRegressor()``` by using ```Ridge``` from scikit-learn.
```python
from sklearn.linear_model import Ridge as skRidge
from pyrcn.echo_state_network import ESNRegressor
from pyrcn.extreme_learning_machine import ELMRegressor
from pyrcn.datasets import mackey_glass
```
```python
# Load the dataset
X, y = mackey_glass(n_timesteps=5000)
# Define Train/Test lengths
trainLen = 1900
X_train, y_train = X[:trainLen], y[:trainLen]
X_test, y_test = X[trainLen:], y[trainLen:]
# Initialize and train an ELMRegressor and an ESNRegressor
esn = ESNRegressor().fit(X=X_train.reshape(-1, 1), y=y_train)
elm = ELMRegressor(regressor=skRidge()).fit(X=X_train.reshape(-1, 1), y=y_train)
print("Fitted models")
```
# Build Reservoir Computing Networks with PyRCN
By combining the building blocks introduced above, a vast number of different RCNs can be constructed. In this section, we build two important variants of RCNs, namely ELMs and ESNs.
## Extreme Learning Machines
The vanilla ELM as a single-layer feedforward network consists of an "Input-to-Node" and a "Node-to-Output" module and is trained in two steps:
1. Compute the high-dimensional reservoir states $\mathbf{R}'$, which is the collection of reservoir states $\mathbf{r}'[n]$.
2. Compute the output weights $\mathbf{W}^{\mathrm{out}}$ with $\mathbf{R}'$.
```python
U, y = make_blobs(n_samples=100, n_features=10)
from pyrcn.extreme_learning_machine import ELMRegressor
```
```python
# Vanilla ELM for regression tasks with input_scaling
# _ _ _ _ _ _ _ _ _ _ _ _ _ _
# | | | |
# ----|Input-to-Node |-----|Node-to-Output |------
# u[n]| _ _ _ _ _ _ _|r'[n]| _ _ _ _ _ _ _ |y[n]
# y_pred
#
vanilla_elm = ELMRegressor(input_scaling=0.9)
vanilla_elm.fit(U, y)
print(vanilla_elm.predict(U))
```
Example of how to construct an ELM with a BIP "Input-to-Node" ELMs with PyRCN.
```python
from pyrcn.base.blocks import BatchIntrinsicPlasticity
# Custom ELM with BatchIntrinsicPlasticity
# _ _ _ _ _ _ _ _ _ _ _ _ _ _
# | | | |
# ----| BIP |-----|Node-to-Output |------
# u[n]| _ _ _ _ _ _ _|r'[n]| _ _ _ _ _ _ _ |y[n]
# y_pred
#
bip_elm = ELMRegressor(input_to_node=BatchIntrinsicPlasticity(),
regressor=Ridge(alpha=1e-5))
bip_elm.fit(U, y)
print(bip_elm.predict(U))
```
Hierarchical or Ensemble ELMs can then be built using multiple "Input-to-Node" modules in parallel or in a cascade. This is possible when using using scikit-learn's ```sklearn.pipeline.Pipeline``` (cascading) or ```sklearn.pipeline.FeatureUnion``` (ensemble).
```python
from sklearn.pipeline import Pipeline, FeatureUnion
```
```python
# ELM with cascaded InputToNode and default regressor
# _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
# | (bip) | | (base) | | |
# ----|Input-to-Node1|-----|Input-to-Node2|-----|Node-to-Output |
# u[n]| _ _ _ _ _ _ _| | _ _ _ _ _ _ _|r'[n]| _ _ _ _ _ _ _ |
# |
# |
# y[n] | y_pred
#
i2n = Pipeline([('bip', BatchIntrinsicPlasticity()),
('base', InputToNode(bias_scaling=0.1))])
casc_elm = ELMRegressor(input_to_node=i2n).fit(U, y)
# Ensemble of InputToNode with activations
# _ _ _ _ _ _ _
# | (i) |
# |----|Input-to-Node1|-----|
# | | _ _ _ _ _ _ _| | _ _ _ _ _ _ _
# | -----| |
# -----o r'[n]|Node-to-Output |------
# u[n] | _ _ _ _ _ _ _ |-----| _ _ _ _ _ _ _ |y[n]
# | | (th) | | y_pred
# |----|Input-to-Node2|-----|
# | _ _ _ _ _ _ _|
#
i2n = FeatureUnion([('i', InputToNode(input_activation="identity")),
('th', InputToNode(input_activation="tanh"))])
ens_elm = ELMRegressor(input_to_node=i2n)
ens_elm.fit(U, y)
print(casc_elm, ens_elm)
```
## Echo State Networks
ESNs, as variants of RNNs, consist of an "Input-to-Node", a "Node-to-Node" and a "Node-to-Output" module and are trained in three steps.
1. Compute the neuron input states $\mathbf{R}'$, which is the collection of reservoir states $\mathbf{r}'[n]$. Note that here the activation function $f'(\cdot)$ is typically linear.
2. Compute the reservoir states $\mathbf{R}$, which is the collection of reservoir states $\mathbf{r}[n]$. Note that here the activation function $f(\cdot)$ is typically non-linear.
3. Compute the output weights $\mathbf{W}^{\mathrm{out}}$ using
1. Linear regression with $\mathbf{R}$ when considering an ESN.
2. Backpropagation or other optimization algorithm when considering a CRN or when using an ESN with non-linear outputs.
What follows is an example of how to construct such a vanilla ESN with PyRCN, where the ```ESNRegressor``` internally passes the input features through "Input-to-Node" and "Node-to-Node", and trains "Node-to-Output" using ```pyrcn.linear_model.IncrementalRegression```.
```python
from pyrcn.echo_state_network import ESNRegressor
```
```python
# Vanilla ESN for regression tasks with spectral_radius and leakage
# _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
# | | | | | |
# ----|Input-to-Node |-----|Node-to-Node |-----|Node-to-Output |
# u[n]| _ _ _ _ _ _ _|r'[n]|_ _ _ _ _ _ _|r[n] | _ _ _ _ _ _ _ |
# |
# |
# y[n] | y_pred
#
vanilla_esn = ESNRegressor(spectral_radius=1, leakage=0.9)
vanilla_esn.fit(U, y)
print(vanilla_esn.predict(U))
```
As for ELMs, various unsupervised learning techniques can be used to pre-train "Input-to-Node" and "Node-to-Node".
```python
from pyrcn.base.blocks import HebbianNodeToNode
```
```python
# Custom ESN with BatchIntrinsicPlasticity and HebbianNodeToNode
# _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
# | (bip) | | (hebb) | | |
# ----|Input-to-Node |-----|Node-to-Node |-----|Node-to-Output |
# u[n]| _ _ _ _ _ _ _|r'[n]|_ _ _ _ _ _ _|r[n] | _ _ _ _ _ _ _ |
# |
# |
# y[n] | y_pred
#
bip_esn = ESNRegressor(input_to_node=BatchIntrinsicPlasticity(),
node_to_node=HebbianNodeToNode(),
regressor=Ridge(alpha=1e-5))
bip_esn.fit(U, y)
print(bip_esn.predict(U))
```
The term "Deep ESN" can refer to different approaches of hierarchical ESN architectures:
Example of how to construct a rather complex ESN consisting of two layers. It is built out of two small parallel reservoirs in the first layer and a large reservoir in the second layer.
```python
# Multilayer ESN
# u[n]
# |
# |
# _________o_________
# | |
# _ _ _ | _ _ _ _ _ _ | _ _ _
# | (i) | | (i) |
# |Input-to-Node1| |Input-to-Node2|
# | _ _ _ _ _ _ _| | _ _ _ _ _ _ _|
# |r1'[n] | r2'[n]
# _ _ _ | _ _ _ _ _ _ | _ _ _
# | (th) | | (th) |
# | Node-to-Node1| | Node-to-Node2|
# | _ _ _ _ _ _ _| | _ _ _ _ _ _ _|
# |r1[n] | r2[n]
# |_____ _____|
# | |
# _ | _ _ _ | _
# | |
# | Node-to-Node3 |
# | _ _ _ _ _ _ _ |
# |
# r3[n]|
# _ _ _ | _ _ _
# | |
# |Node-to-Output |
# | _ _ _ _ _ _ _ |
# |
# y[n]|
l1 = Pipeline([('i2n1', InputToNode(hidden_layer_size=100)),
('n2n1', NodeToNode(hidden_layer_size=100))])
l2 = Pipeline([('i2n2', InputToNode(hidden_layer_size=400)),
('n2n2', NodeToNode(hidden_layer_size=400))])
i2n = FeatureUnion([('l1', l1),
('l2', l2)])
n2n = NodeToNode(hidden_layer_size=500)
layered_esn = ESNRegressor(input_to_node=i2n,
node_to_node=n2n)
layered_esn.fit(U, y)
print(layered_esn.predict(U))
```
## Complex example: Optimize the hyper-parameters of RCNs
Example for a sequential parameter optimization with PyRCN. Therefore, a model with initial parameters and various search steps are defined. Internally, ```SequentialSearchCV``` will perform the list of optimization steps sequentially.
```python
from sklearn.metrics import make_scorer
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import TimeSeriesSplit
from sklearn.model_selection import RandomizedSearchCV, \
GridSearchCV
from scipy.stats import uniform
from pyrcn.model_selection import SequentialSearchCV
from pyrcn.datasets import mackey_glass
```
```python
# Load the dataset
X, y = mackey_glass(n_timesteps=5000)
X_train, X_test = X[:1900], X[1900:]
y_train, y_test = y[:1900], y[1900:]
# Define initial ESN model
esn = ESNRegressor(bias_scaling=0, spectral_radius=0, leakage=1)
# Define optimization workflow
scorer = make_scorer(mean_squared_error, greater_is_better=False)
step_1_params = {'input_scaling': uniform(loc=1e-2, scale=1),
'spectral_radius': uniform(loc=0, scale=2)}
kwargs_1 = {'n_iter': 200, 'n_jobs': -1, 'scoring': scorer,
'cv': TimeSeriesSplit()}
step_2_params = {'leakage': [0.2, 0.4, 0.7, 0.9, 1.0]}
kwargs_2 = {'verbose': 5, 'scoring': scorer, 'n_jobs': -1,
'cv': TimeSeriesSplit()}
searches = [('step1', RandomizedSearchCV, step_1_params, kwargs_1),
('step2', GridSearchCV, step_2_params, kwargs_2)]
# Perform the search
esn_opti = SequentialSearchCV(esn, searches).fit(X_train.reshape(-1, 1), y_train)
print(esn_opti)
```
## Programming pattern for sequence processing
This complex use case requires serious hyper-parameter tuning. To keep the code example simple, we did not include the optimization here and instead refer interested readers to the Jupyter Notebook [^1] that was developed to produce these results.
[^1]: https://github.com/TUD-STKS/PyRCN/blob/master/examples/digits.ipynb
```python
from sklearn.base import clone
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_validate
from sklearn.model_selection import ParameterGrid
from sklearn.metrics import make_scorer
from pyrcn.echo_state_network import ESNClassifier
from pyrcn.metrics import accuracy_score
from pyrcn.datasets import load_digits
```
```python
# Load the dataset
X, y = load_digits(return_X_y=True, as_sequence=True)
print("Number of digits: {0}".format(len(X)))
print("Shape of digits {0}".format(X[0].shape))
# Divide the dataset into training and test subsets
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
random_state=42)
print("Number of digits in training set: {0}".format(len(X_train)))
print("Shape of the first digit: {0}".format(X_train[0].shape))
print("Number of digits in test set: {0}".format(len(X_test)))
print("Shape of the first digit: {0}".format(X_test[0].shape))
# These parameters were optimized using SequentialSearchCV
esn_params = {'input_scaling': 0.1,
'spectral_radius': 1.2,
'input_activation': 'identity',
'k_in': 5,
'bias_scaling': 0.5,
'reservoir_activation': 'tanh',
'leakage': 0.1,
'k_rec': 10,
'alpha': 1e-5,
'decision_strategy': "winner_takes_all"}
b_esn = ESNClassifier(**esn_params)
param_grid = {'hidden_layer_size': [50, 100, 200, 400, 500],
'bidirectional': [False, True]}
for params in ParameterGrid(param_grid):
    esn_cv = cross_validate(clone(b_esn).set_params(**params),
                            X=X_tr, y=y_tr,
                            scoring=make_scorer(accuracy_score))
    esn = clone(b_esn).set_params(**params).fit(X_tr, y_tr, n_jobs=-1)
    acc_score = accuracy_score(y_te, esn.predict(X_te))
```
```python
```
| 202a66e00a627fa22fd2900dd4f9b57745622e9f | 29,364 | ipynb | Jupyter Notebook | examples/PyRCN_Intro.ipynb | michael-schindler/PyRCN | ["BSD-3-Clause"] |
# Holonomic Robot
Robots come in a variety of types and configurations: wheeled, tracked, legged, flying, etc. Common wheeled robots typically have two directly driven wheels with a caster wheel to make the robot stable. Some designs omit the caster wheel and instead employ a control system to keep the robot upright (the inverted pendulum problem), resembling a Segway scooter. All of these two-wheeled robots are nonholonomic systems.
A **nonholonomic system** in physics and mathematics is a system whose state depends on the path taken in order to achieve it. A car is a typical example of a nonholonomic system. It can occupy any location (x,y) in any orientation ($\phi$), but the path from one location/orientation to another position/orientation is not always linear. Thus you have to parallel park.
A **holonomic system** is not constrained by this. Thus it can move anywhere at will.
```python
%matplotlib inline
```
```python
from __future__ import division
from __future__ import print_function
import numpy as np
from numpy.linalg import norm
from math import cos, sin, pi
import matplotlib.pyplot as plt
```
## Soccer
These types of robots are great for soccer, especially the goalie. Typical configurations use 3 or 4 motors. Omni wheels allow slip perpendicular to the motor axis.
Due to these constraints, a holonomic robot which could travel in any direction and immediately change its position and orientation is much more desirable. There are a variety of different wheels which make this type of robot possible such as mecanum or omni wheels.
Omni wheels operate like standard wheels in that the force is produced normal to the motor's axis of rotation and as a result of friction. However, there are a series of smaller wheels which ring the main wheel and allow the wheel to slip in the direction of the motor rotational axis. Note that no force is produced parallel to the motor axis, just slippage.
## Dynamics
The dynamics for a holonomic robot with 4 omnidirectional wheels can be derived using the Euler-Lagrange formulation ($\mathcal{L}$), which relates a system's kinetic ($T$) and potential ($V$) energies to a set of generalized coordinates ($q$) and generalized forces ($Q$):
\begin{equation}
\newcommand{\dpar}[2]{\frac{\partial #1}{\partial #2}}
\end{equation}
\begin{equation}
\mathcal{L}=T-V \\
\frac{d}{dt} \left\{ \dpar{ \mathcal{L} }{\dot q} \right\} - \dpar{ \mathcal{L} }{q} = Q \\
T = \frac{1}{2}M v_w^2+ \frac{1}{2}J \dot \psi^2 + \frac{1}{2} J_w (\dot \theta_1^2 + \dot \theta_2^2 + \dot \theta_3^2 + \dot \theta_4^2) \\
V = 0
\end{equation}
However, the dynamics must be calculated from an inertial reference frame (${W}$) and take into account the rotating body frame dynamics (${B'}$). Now, assume the body frame is offset from the center of mass (CM) by $x_m$ and $y_m$ which compose a vector $r_m$. Thus the velocity of the robot in the rotating frame would be:
\begin{equation*}
v_w = v_{B'} + \dot \psi \times r_m \\
v_w = v_{B'} +
\begin{bmatrix}
0 & 0 & \dot \psi
\end{bmatrix}^T
\times
\begin{bmatrix}
x_m & y_m & 0
\end{bmatrix}^T
=
\begin{bmatrix}
\dot x & \dot y & 0
\end{bmatrix}^T +
\begin{bmatrix}
-y_m \dot \psi & x_m \dot \psi & 0
\end{bmatrix}^T \\
v_{B'} = \begin{bmatrix}
\dot x & \dot y & 0
\end{bmatrix}^T
\end{equation*}
where $v_{B'}$ is the speed of the body frame. Now substituting that into
the above kinetic energy equation $T$, we get:
\begin{equation}
T = \frac{1}{2}M( ( \dot x - \dot \psi y_m )^2 + (\dot y + \dot \psi x_m)^2)+ \dots \\
T = \frac{1}{2}M( \dot x^2 - 2 \dot \psi y_m \dot x +\dot \psi^2 y_m^2 + \dot y^2 + 2 \dot \psi x_m \dot y + \dot \psi^2 x_m^2)+ \frac{1}{2}J \dot \psi^2 + \frac{1}{2} J_w (\dot \theta_1^2 + \dot \theta_2^2 + \dot \theta_3^2 + \dot \theta_4^2) \\
\frac{d}{dt} \left\{ \dpar{ \mathcal{L} }{\dot x} \right\} = M ( \ddot x - \ddot \psi y - \dot \psi \dot y ) \hspace{1cm} \dpar{ \mathcal{L} }{x} = M(\dot \psi \dot y + \dot \psi^2 x) \\
\frac{d}{dt} \left\{ \dpar{ \mathcal{L} }{\dot y} \right\} = M (\ddot y + \ddot \psi x + \dot \psi \dot x) \hspace{1cm} \dpar{ \mathcal{L} }{y} = M( -\dot \psi \dot x + \dot \psi^2 y) \\
\frac{d}{dt} \left\{ \dpar{ \mathcal{L} }{\dot \psi} \right\} = J \ddot \psi \hspace{1cm} \dpar{ \mathcal{L} }{\psi} = 0 \\
\frac{d}{dt} \left\{ \dpar{ \mathcal{L} }{\dot \theta} \right\} = J_w \sum \limits_{i=1}^4 \ddot \theta_i \hspace{1cm} \dpar{ \mathcal{L} }{\theta} = 0
\end{equation}
Now we make the following assumptions: ${B'}$ is coincident with
${B}$, $x_m = 0$, $y_m = 0$, $\dot x = v_x$,
$\dot y = v_y$
\begin{equation*}
F_x = M (\ddot x - 2 \dot \psi \dot y ) \\
F_y = M (\ddot y + 2 \dot \psi \dot x) \\
T = J \ddot \psi \\
\tau_w = J_w \ddot \theta_1 \hspace{1cm}
\tau_w = J_w \ddot \theta_2 \hspace{1cm}
\tau_w = J_w \ddot \theta_3 \hspace{1cm}
\tau_w = J_w \ddot \theta_4
\end{equation*}
\begin{equation*}
\begin{bmatrix}
F_x \\
F_y \\
T
\end{bmatrix} =
\begin{bmatrix}
M & 0 & 0 \\
0 & M & 0 \\
0 & 0 & J
\end{bmatrix}
\begin{bmatrix}
\ddot x \\
\ddot y \\
\ddot \psi
\end{bmatrix} +
\begin{bmatrix}
0 & -2M \dot \psi & 0 \\
2M \dot \psi & 0 & 0 \\
0 & 0 & 0
\end{bmatrix}
\begin{bmatrix}
\dot x \\
\dot y \\
\dot \psi
\end{bmatrix}
= \mathcal{M} \ddot X + \mathcal{O} \dot X = Q
\end{equation*}
## World Coordinates
Now the dynamics derived so far are all in the body frame and we could stop
here and develop a controller which performs velocity control. However, position
control is more useful and a transform needs to be performed to move the
velocities and accelerations into the world frame.
\begin{equation}
\dot X^W = R_B^W \dot X^B \\
R_B^W =
\begin{bmatrix}
\cos \psi & \sin \psi & 0 \\
-\sin \psi & \cos \psi & 0 \\
0 & 0 & 1
\end{bmatrix} \\
\ddot X^W = \dot R_B^W \dot X^B + R_B^W \ddot X^B \\
\dot R_B^W = \dot \psi
\begin{bmatrix}
-\sin \psi & \cos \psi & 0 \\
-\cos \psi & -\sin \psi & 0 \\
0 & 0 & 0
\end{bmatrix}
\end{equation}
Now, substituting this into the dynamics, gives dynamics in the world
coordinate system of:
\begin{equation}
F = \mathcal{M} (\dot R \dot X + R \ddot X ) + \mathcal{O} R \dot X \\
F = \mathcal{M} R \ddot X + (\mathcal{M} \dot R + \mathcal{O} R) \dot X
\end{equation}
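A small numerical sketch of this transform is given below; the rotation matrices follow the definitions above, while the heading angle and body-frame velocity are arbitrary example values.
```python
# Body-to-world transform sketch (example values only)
import numpy as np

def R_bw(psi):
    # rotation matrix R_B^W as defined above
    return np.array([[ np.cos(psi), np.sin(psi), 0.0],
                     [-np.sin(psi), np.cos(psi), 0.0],
                     [ 0.0,         0.0,         1.0]])

def R_bw_dot(psi, psi_dot):
    # time derivative of R_B^W
    return psi_dot * np.array([[-np.sin(psi),  np.cos(psi), 0.0],
                               [-np.cos(psi), -np.sin(psi), 0.0],
                               [ 0.0,          0.0,         0.0]])

psi = np.deg2rad(30.0)               # example heading
Xdot_B = np.array([1.0, 0.0, 0.1])   # example body-frame velocity [vx, vy, psi_dot]
print(R_bw(psi) @ Xdot_B)            # velocity expressed in the world frame
```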
## External Forces and Torques
Now summing the forces into their body referenced $x$ and $y$ directions and the torque about the $z$ axis, gives us:
\begin{equation}
\sum F_x=f_1 \sin(\phi) - f_2 \sin(\phi) - f_3 \sin(\phi) + f_4 \sin(\phi) \\
\sum F_y=f_1 \cos(\phi) + f_2 \cos(\phi) - f_3 \cos(\phi) - f_4 \cos(\phi) \\
\sum T=L(f_1+f_2+f_3+f_4)
\end{equation}
Additionally, we can simplify this by assuming all of the angles are the same
(e.g., $\phi_1 = \phi_2 = \phi_3 = \phi_4$) and can now put this into a
matrix form:
\begin{equation}
\begin{bmatrix}
F_x \\
F_y \\
T
\end{bmatrix} =
\begin{bmatrix}
\sin(\phi) & 0 & 0 \\
0 & \cos(\phi) & 0 \\
0 & 0 & L
\end{bmatrix}
\begin{bmatrix}
1 & -1 & -1 & 1\\
1 & 1 & -1 & -1\\
1 & 1 & 1& 1
\end{bmatrix}
\begin{bmatrix}
f_1 \\
f_2 \\
f_3 \\
f_4
\end{bmatrix}
\end{equation}
where $\phi$ is again the angle of the motors, $f_i$ is the magnitude of the force produced by the motors, and $L$ is the radius of the robot.
The motor forces are recovered by inverting this relation, $f = pinv(A(\phi))\begin{bmatrix} F_x & F_y & T\end{bmatrix}^T$, where $pinv()$ is defined as the pseudoinverse since $A(\phi)$ is not a square matrix. Finally, substituting these into the original equation, we can calculate the torques given the desired accelerations.
\begin{equation}
\begin{bmatrix} \tau_1 \\ \tau_2 \\ \tau_3 \\ \tau_4 \end{bmatrix} = \frac {M r_w} {4}
\begin{bmatrix}
-1 & 1 & 1 \\
-1 & -1 & 1 \\
1 & -1 & 1 \\
1 & 1 & 1
\end{bmatrix}
\begin{bmatrix}
\frac{1}{\sin(\phi)} & 0 & 0 \\
0 & \frac{1}{\cos(\phi)} & 0 \\
0 & 0 & \frac{1}{2}
\end{bmatrix}
\begin{bmatrix}
a_x \\
a_y \\
R \dot \omega
\end{bmatrix}
\end{equation}
Now looking at this equation, we notice that $\phi$ can not be equal to 0, 90, 180, 270, or 360 otherwise we get a singularity in the $A(\phi)$ matrix. This however is not an issue in the real world, since the motors would occupy the same physical space and the robot would essentially only have 2 and not 4 motors.
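The torque relation above can be evaluated directly; the sketch below is only an illustration, with the mass `M`, wheel radius `r_w`, and the desired accelerations chosen as arbitrary example values.
```python
# Wheel torques from desired accelerations (example values only)
import numpy as np

def wheel_torques(ax, ay, R_omega_dot, phi, M=5.0, r_w=0.03):
    # sign pattern of the four motors, as in the matrix above
    B = np.array([[-1.0,  1.0, 1.0],
                  [-1.0, -1.0, 1.0],
                  [ 1.0, -1.0, 1.0],
                  [ 1.0,  1.0, 1.0]])
    A_inv = np.diag([1.0/np.sin(phi), 1.0/np.cos(phi), 0.5])
    a = np.array([ax, ay, R_omega_dot])
    return (M * r_w / 4.0) * B @ A_inv @ a

print(wheel_torques(1.0, 0.0, 0.0, np.deg2rad(45.0)))
```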
## Holonomic Robot Kinematics
Number of equivalent motors for any direction under linear movement only, no rotational movement allowed.
Now performing a similar exercise to what was done with the dynamics, and looking
at the geometry of the motors, the velocity of motor 1 is given by
$v_1 = -\sin(\phi) v_x + \cos(\phi) v_y + L \omega$. Performing this for
each wheel gives:
\begin{equation}
\begin{bmatrix}
v_1 \\
v_2 \\
v_3 \\
v_4
\end{bmatrix} =
\begin{bmatrix}
-\sin(\phi) & \cos(\phi) & L \\
-\sin(\phi) & -\cos(\phi) & L \\
\sin(\phi) & -\cos(\phi) & L \\
\sin(\phi) & \cos(\phi) & L
\end{bmatrix}
\begin{bmatrix}
v_x \\
v_y \\
\omega
\end{bmatrix} =
\begin{bmatrix}
-1 & 1 & 1 \\
-1 & -1 & 1 \\
1 & -1 & 1 \\
1 & 1 & 1
\end{bmatrix}
\begin{bmatrix}
\sin(\phi) & 0 & 0 \\
0 & \cos(\phi) & 0 \\
0 & 0 & L
\end{bmatrix}
\begin{bmatrix}
v_x \\
v_y \\
\omega
\end{bmatrix}
\end{equation}
Now setting $\omega$ to zero and calculating only linear movement, we can
determine the number of equivalent motors, as plotted below. For example, setting
$\phi$ to 30 $^\circ$ and traveling in the x direction only
($\begin{bmatrix} v_x & v_y & \omega \end{bmatrix}^T = \begin{bmatrix}1& 0 & 0 \end{bmatrix}^T$),
the above equation simplifies to $4 \sin(30)$ or 2 equivalent motors.
Repeating for the y direction results in $4 \cos(30)$ or 3.46 equivalent
motors.
Now it is interesting to note that when $\phi$ is set to 30 $^\circ$,
the robot has more equivalent motors when going forward or backwards, while a
$\phi$ of 60 $^\circ$ provides more equivalent motors moving left or right.
When the motors are angled at 45 $^\circ$, movement is equally
optimized for both forward/backward and left/right motion ($4 \sin(45)$ is 2.83 motors).
The equivalent-motor plot below tells us that no matter how the 4 motors are
oriented in a realistic configuration, the robot will never have the equivalent
use of all 4 motors. Movement in one direction or another can be optimized, but
then a sacrifice is made in another direction. This fact is intuitively obvious.
Another issue is that these results are also ideal. This logic assumes that the wheels
will not slip and have good traction in any orientation. Unfortunately, real-world
results do not mimic this situation and the robot's performance will be reduced.
## Equivalent Motors
Since this robot has 4 motors with omni wheels, in certain configurations, you can get more power/speed in certain directions than others. Let's look at a configuration with the wheels oriented at the $\phi$ angles of: 30, 45, and 60 degrees.
```python
def motors(phi, angles):
"""
in:
phi - orientation of motors
angles - array of angles from 0 ... 2*pi
out:
array of results
"""
phi = phi*pi/180
ans = []
for angle in angles:
a = np.array([
[-sin(phi), cos(phi), 1.0],
[-sin(phi), -cos(phi), 1.0],
[ sin(phi), -cos(phi), 1.0],
[ sin(phi), cos(phi), 1.0]
])
b = np.array([
cos(angle),
sin(angle),
0.0
])
v = sum(abs(a.dot(b)))
ans.append(v)
return ans
```
```python
theta = np.arange(0,2*pi,0.1)
em30 = motors(30, theta)
em45 = motors(45, theta)
em60 = motors(60, theta)
```
```python
plt.polar(theta, em30, label='30');
plt.polar(theta, em45, label='45');
plt.polar(theta, em60, label='60');
plt.grid(True)
plt.legend(loc='upper left');
```
As seen above, the 30 and 60 degree orientations favor certain directions, while 45 degrees gives better balanced, all-around movement.
## Control
Using the equations above to transform desired movement into motor actions, the robot can move in any direction. A basic summary of robot movement to motor direction is shown above.
# References
---
* Alexander Gloye, Raul Rojas, Holonomic Control of a Robot with an Omnidirectional Drive, accepted for publication by Künstliche Intelligenz, Springer-Verlag, 2006.
* http://en.wikipedia.org/wiki/Non-holonomic_system
* http://en.wikipedia.org/wiki/Lagrangian_mechanics
* http://www.kornylak.com
* R. Balakrishna, Ashitava Ghosal, "Modeling of Slip for Wheeled Mobile Robots," IEEE Transactions on Robotics and Automation, Vol. 11, No. 1, February 1995, pp. 126-132
* J. Agullo, S. Cardona, and J. Vivancos, “Kinematics of vehicles with directional sliding wheels,” Mechanisms and Muchine Theory, vol. 22, no. 4, pp. 295-301, 1987.
* Pseudoinverse: for m > n: $A_{left}^{-1}=(A^TA)^{-1}A^T$ or m < n: $A_{right}^{-1} = A^T(AA^T)^{-1}$ such that $AA^{-1}=I$ or $A^{-1}A=I$
* Masayoshi Wada (2010). Motion Control of a Four-wheel-drive Omnidirectional Wheelchair with High Step Climbing Capability, Climbing and Walking Robots, Behnam Miripour (Ed.), ISBN: 978-953-307-030-8, [InTech](http://www.intechopen.com/books/climbing-and-walking-robots/motion-control-of-a-four-wheeldrive-omnidirectional-wheelchair-with-high-step-climbing-capability)
-----------
<a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/"></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike 4.0 International License</a>.
## Exercise List 02 - Foundations of Mathematics for Computing.
# Universidade Federal do ABC - UFABC
## Centro de Matemática, Computação e Cognição - CMCC
## Course: Foundations of Mathematics for Computing - CCM-007
Instructor: Prof. Dr. Saul Leite
Student: Bruno Aristimunha.
Santo André, February 26, 2019
### First Part of the Course
#### Objectives
The objective of this list is to explore and apply the concepts from class about the **Taylor polynomial**.
To reach this objective, solving the exercises is required.
```
import matplotlib.pylab as plt
import sympy as sy
import numpy as np
from scipy.special import factorial
```
___
## Exercise 01.
Find the Taylor polynomials of degree 1, 2, 3, and 4 for the function $f (x) = e^{x^{2}}$, expanding around $x_0 = 0$. Using Python, plot the polynomials and the function.
___
$f(x) = e^{x^{2}}$
$f'(x) = \frac{d(e^{x^{2}})}{dx} = e^{x^{2}}\cdot 2x = 2 \cdot x \cdot e^{x^{2}}$
$f^{''}(x) = \frac{d (2 \cdot x \cdot e^{x^{2}})}{dx} = 2 \cdot (e^{x^{2}} + x\cdot e^{x^{2}} \cdot 2x) = 2\cdot e^{x^{2}} + 4\cdot x^2 \cdot e^{x^{2}} = 2 \cdot e^{x^2} (1 + 2x^2) $
$f^{(3)}(x) = \frac{d (2\cdot e^{x^2} \cdot (1+2x^2))}{dx} = 2 \cdot e^{x^{2}} \cdot 2x (1+2x^2) + 2 \cdot e^{x^{2}} (4x) = 4 x e^{x^2} + 8x^3 \cdot e^{x^2} + 8\cdot x \cdot e^{x^{2}} $
$\qquad \quad = 12x\cdot e^{x^{2}} + 8 x^3 e^{x^2} = 4 x e^{x^2} (3 + 2x^2)$
$f^{(4)}(x) = 12 e^{x^2} + 12xe^{x^2}\cdot 2x + 24 x^2 e^{x^2} + 8x^3 e^{x^2}\cdot 2x $
$f^{(4)}(x) = 12 e^{x^2} + 48x^2 e^{x^2} + 16x^4 e^{x^2}$
For $x_0 = 0$:
> $f(x_0) = f(0) = e^{0^2} = 1$
> $f'(x_0) = f'(0) = 2\cdot 0 \cdot e^{0^{2}} = 0 $
> $f''(x_0) = 2 \cdot e^{0^{2}} (1 + 2\cdot 0^{2}) = 2\cdot 1 = 2 $
> $f^{(3)}(x_0) = 4\cdot 0 \cdot e^{0^{2}} \cdot (3+2\cdot 0^2) = 0$
> $f^{(4)}(x_0) = 16\cdot 0^{4} \cdot e^{0^{2}} + 48\cdot 0^2 \cdot e^{0^2} + 12\cdot e^{0^{2}} = 12$
Degree 1:
>$P_1 (x) = f(x_0) + f'(x_0)(x-x_0)$
>$P_1 (x) = 1 + 0\cdot (x-0) = 1$
Degree 2:
>$P_2(x) = f(x_0) + f'(x_0)(x-x_0) + f^{''}(x_0) \frac{(x-x_0)^2}{2}$
>$P_2(x) = 1 + 0\cdot (x-0) + 2 \frac{(x-0)^2}{2} = 1 + 0 + x^2 = x^2+1$
Degree 3 (the third-order term vanishes):
>$P_3(x) = 1+ x^2 $
Degree 4:
> $P_4(x) = 1 +x^2 +12 \cdot \frac{x^{4}}{24} = 1 + x^2 + \frac{x^4}{2}$
```
plt.style.use("ggplot")
x1 = np.linspace(-1, 1, 10)
f = [lambda x: np.e**x**2, # função original
lambda x: 1, # polinômio de Taylor de grau 1
lambda x: x**2 + 1, # polinômio de Taylor de grau 2
lambda x: 1 + x**2, # polinômio de Taylor de grau 3
lambda x: 1 + x**2 +x**4 # polinômio de Taylor de grau 4
]
```
```
plt.plot(x1, list(map(f[0], x1)), label='original function', c='r')
plt.plot(x1, list(map(f[1], x1)), label='degree 1', c='b')
plt.plot(x1, list(map(f[2], x1)), label='degree 2', c='y')
plt.plot(x1, list(map(f[3], x1)), label='degree 3', c='g')
plt.plot(x1, list(map(f[4], x1)), label='degree 4', c='k')
plt.legend(loc='upper center', fontsize='x-large')
plt.show()
```
---
## Exercise 02.
E2: Use Taylor polynomials to show that $(1 + t)^n = \sum_{j=0}^n {n\choose j} \cdot t^{j}$, for $n$ an integer greater than 1, where ${n \choose j}$ is the binomial coefficient.
---
Let:
$f(t)=(1+t)^n\\$
$f'(t) = n \cdot (1+t)^{n-1}\\$
$f''(t) = n \cdot (n-1)\cdot (1+t)^{n-2}\\$
$f'''(t) = n \cdot (n-1) \cdot (n-2) \cdot (1+t)^{n-3}$
From the general form of the Taylor polynomial we have:
$\sum_{k=0}^n f^{(k)}(t_0) \frac{(t-t_0)^k}{k!}$
Starting from the Taylor polynomial, take an arbitrary power function $f(x) = x^n$. Expanding the polynomial, we get:
>$f(x+\varepsilon) = x^n + n x^{n-1} \varepsilon + n(n-1)x^{n-2}\frac{\varepsilon^2}{2!} + \dots + n! x^{0} \frac{\varepsilon^{n}}{n!}$
Let $x = 1$:
> $1 + n \varepsilon +n(n-1) \frac{\varepsilon^2}{2!} + n(n-1)(n-2) \frac{\varepsilon^3}{3!} + \dots + n! \frac{\varepsilon^n}{n!}$
Rearranging algebraically, we have:
> $\frac{n!}{n!} + \frac{n!}{1!(n-1)!}\varepsilon + \frac{n!}{2!(n-2)!}\varepsilon^2 + \frac{n!}{3!(n-3)!}\varepsilon^3 + \dots + \frac{n!}{n!\,0!} \varepsilon^n$
That is:
>$\sum^n_{j=0} {n \choose j} \varepsilon^j$.
With $\varepsilon = t$, QED.
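As a quick sanity check (not required by the exercise), sympy can verify the identity for a specific value of $n$; the value $n=5$ below is an arbitrary choice.
```
import sympy as sy

t_sym = sy.Symbol('t')
n_val = 5  # arbitrary example value
lhs = sy.expand((1 + t_sym)**n_val)
rhs = sum(sy.binomial(n_val, j) * t_sym**j for j in range(n_val + 1))
print(sy.simplify(lhs - rhs))  # expected output: 0
```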
---
## Exercise 03.
E3: Using Python, create a function to compute $f (x) = \sin(x)$ using Taylor polynomials around $x_0 = 0$. The polynomial must have the smallest possible degree such that the approximation error is below $10^{-7}$ for any value of $x$. (Hint: the value of $x$, the argument of your function, can be transformed before evaluating the polynomial.)
---
First we need to find the minimum degree of the Taylor polynomial that approximates $\sin$ with a remainder below $10^{-7}$.
$R_{n+1} \leq 10^{-7}$, for all $x\in[-\pi,\pi]$
Observe:
$|R_n(x)| = \left| f^{(n+1)} (\xi_n) \frac{x^{n+1}}{(n+1)!} \right|$
$|R_{n+1}(x)| \leq |\sin(\xi)| \left| \frac{x^{n+1}}{(n+1)!} \right|$
$|R_{n+1}(x) |\leq \max_{t\in [-\pi,\pi]}|\sin(t)| \cdot \max_{y\in [-\pi,\pi]} \left| \frac{y^{n+1}}{(n+1)!} \right|$
Although the derivative of the $\sin$ function depends on the $n$ of the remainder, its maximum over this interval will be $1$.
$|R_{n+1}(x) |\leq 1 \cdot \left| \frac{\pi^{n+1}}{(n+1)!} \right|$, for all $x \in [-\pi, \pi]$
Therefore:
$(n+1)! \geq 10^7 \left( \pi^{n+1} \right)$
```
n = 1
condicao = False
while not condicao:
condicao = (factorial(n+1) >=10**7 *(np.pi)**(n+1))
print("teste com n =",n,"é: ", condicao)
n = n+1
```
teste com n = 1 é: False
teste com n = 2 é: False
teste com n = 3 é: False
teste com n = 4 é: False
teste com n = 5 é: False
teste com n = 6 é: False
teste com n = 7 é: False
teste com n = 8 é: False
teste com n = 9 é: False
teste com n = 10 é: False
teste com n = 11 é: False
teste com n = 12 é: False
teste com n = 13 é: False
teste com n = 14 é: False
teste com n = 15 é: False
teste com n = 16 é: False
teste com n = 17 é: False
teste com n = 18 é: True
Thus, our Taylor polynomial needs terms up to degree 18; since the even-order terms of $\sin$ vanish at $x_0=0$, it has 9 non-zero terms (the highest being $x^{17}$).
```
from sympy.functions import sin
#Implementation adapted from: http://firsttimeprogrammer.blogspot.com/2015/03/taylor-series-with-python-and-sympy.html
# Define the variable and the function to approximate
x = sy.Symbol('x')
f_sen = sin(x)
# Taylor approximation at x0 of the function 'function'
def taylor(function,x0,n):
i = 0
p = 0
while i <= n:
p = p + (function.diff(x,i).subs(x,x0))/(factorial(i))*(x-x0)**i
i += 1
return p
```
```
x2 = np.arange(-np.pi,np.pi, 0.00001)
taylor(f_sen,0,18)
```
2.81145725434552e-15*x**17 - 7.64716373181982e-13*x**15 + 1.60590438368216e-10*x**13 - 2.50521083854417e-8*x**11 + 2.75573192239859e-6*x**9 - 0.000198412698412698*x**7 + 0.00833333333333333*x**5 - 0.166666666666667*x**3 + 1.0*x
```
f_taylor_18 = lambda x: 2.81145725434552e-15*x**17 - 7.64716373181982e-13*x**15 + 1.60590438368216e-10*x**13 - 2.50521083854417e-8*x**11 + 2.75573192239859e-6*x**9 - 0.000198412698412698*x**7 + 0.00833333333333333*x**5 - 0.166666666666667*x**3 + 1.0*x
```
```
plt.plot(f_taylor_18(x2), c='b')
plt.plot(np.sin(x2), c='r')
```
Sum of the differences over the interval:
### Within the expected error :) The value of $\pi$ used will be this one:
```
sum(f_taylor_18(x2)-np.sin(x2))
```
-1.1898375539887341e-08
### Given the periodicity of the $\sin$ function, for each received value it is enough to take its modulo $2\pi$.
Testing with twice the value of $\pi$, we have:
```
entrada = float(input("Informe o valor de x que você deseja calcular sen(x)"))
```
Informe o valor de x que você deseja calcular sen(x)-6.283185307179586
```
f_taylor_18(entrada%(2*np.pi))
```
0.0
# Homework 14: Nonlinear Equations
### Problem 1
Use fsolve to find the roots of the polynomial $f(x) = 2x^2 + 3x - 10$.
```python
import numpy as np
from scipy.optimize import fsolve
```
```python
```
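One possible sketch of a solution; the two starting guesses below are my own choice and roughly bracket the two roots.
```python
f = lambda x: 2.0*x**2 + 3.0*x - 10.0
roots = fsolve(f, [-5.0, 2.0])   # one guess near each root
print(roots)
```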
### Problem 2
Use fsolve to find the solution of the following two equations:
\begin{align}
f(x,y) &= 2x^{2/3}+y^{2/3}-9^{1/3} \\
g(x,y) &= \frac{x^2}{4} + \sqrt{y} - 1.
\end{align}
Use an initial guess of $x_0=1$, $y_0$ = 1.
```python
```
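One possible sketch of a solution with the initial guess $x_0=1$, $y_0=1$; it assumes the iterates stay in the positive quadrant (if they do not, the fractional powers can be rewritten with `np.cbrt`).
```python
def F(vars):
    x, y = vars
    f = 2.0*x**(2.0/3.0) + y**(2.0/3.0) - 9.0**(1.0/3.0)
    g = x**2/4.0 + np.sqrt(y) - 1.0
    return [f, g]

sol = fsolve(F, [1.0, 1.0])
print(sol)
```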
### Problem 3
```python
# import or install wget
try:
import wget
except:
try:
from pip import main as pipmain
except:
from pip._internal import main as pipmain
pipmain(['install','wget'])
import wget
# retrieve thermoData.yaml
url = 'https://apmonitor.com/che263/uploads/Main/thermoData.yaml'
filename = wget.download(url)
print('')
print('Retrieved thermoData.yaml')
```
100% [..............................................................................] 14985 / 14985
Retrieved thermoData.yaml
Compute the adiabatic flame temperature for a stoichiometric methane-air flame. The code is given below. There is a thermo class that is modified from your last homework. Also, you'll need thermoData.yaml again. Then there is a function to define. Fill in the blanks as indicated. You should also read all of the code given below and make sure you understand it.
**Equation Summary:**
* Your function (started for you below) is: ```f_flame(Ta) = 0```.
* That is, $f_{flame}(T_a) = 0 = H_r(T_r) - H_p(T_a) = 0$.
* $T_a$ is the unknown.
* $T_r = 300\,K$
* $H_r(T_r) = y_{CH4}h_{CH4}(T_r) + y_{O2}h_{O2}(T_r) + y_{N2}h_{N2}(T_r)$.
* $H_p(T_a) = y_{CO2}h_{CO2}(T_a) + y_{H2O}h_{H2O}(T_a) + y_{N2}h_{N2}(T_a)$.
* $y_i = m_i/m_t$.
* $m_i = n_iM_i$.
* $n_i$ and $M_i$ are given.
* $m_t = \sum_im_i$.
* **Do these separately for reactants and products** That is: $m_t = m_{O2}+m_{N2}+m_{CH4}$ for the reactants. (Also $m_t$ is the same for products since mass is conserved.)
* $h_i$ is computed using the thermo class. So, if ```t_CO2``` is my thermo class object for $CO_2$, then ```h_CO2=t_CO2.h_mass(T)```.
**Description:**
* We have a chemical reaction:
* $CH_4 + 2O_2 + 7.52N_2 \rightarrow CO_2 + 2H_2O$ + 7.52$N_2$.
* You can think of the burning as potential energy stored in the reactant bonds being released as kinetic energy in the products so the product temperature is higher.
* Adiabatic means there is no enthalpy loss. You can think of enthalpy as energy. This means the products have the same enthalpy as the reactants. And this is just a statement that energy is conserved, like mass is.
* The idea is to take a known reactant temperature, find the reactant enthalpy (which is an easy explicit equation you can calculate directly), then set the product enthalpy equal to the reactant enthalpy and find the corresponding product temperature (which is a harder nonlinear solve).
* $T_r\rightarrow h_r = h_p \rightarrow T_p$.
* The reactants start at room temperature, $T=300\,K$, so we can compute their enthalpy.
* We know the moles of reactants: $n_{ch4}=1$, $n_{O2}=2$, $n_{N2}=7.52$.
* So, we can compute the corresponding masses using the molecular weights.
* Then we sum the masses of each species to get the total mass, and compute the mass fractions.
* Then we can compute the enthalpy as $h=\sum_iy_ih_i$. That is, the total enthalpy is the sum of the enthalpy per unit mass of each species times the mass fraction of each species.
* For reactants we have $h_r = y_{CH4}h_{CH4}+y_{O2}h_{O2}+y_{N2}h_{N2}$, where $h_i$ are evaluated using the class function h_mass(T), and T=300 for reactants.
* Now, $h_p=h_r$. For products, we have $h_p = y_{CO2}h_{CO2}+y_{H2O}h_{H2O}+y_{N2}h_{N2}$, where we evaluate the class function h_mass(Tp), where Tp is the product temperature we are trying to compute.
* Solving for $T_p$ amounts to solving $f(T_p)=0$, where $$f(T_p) = h_p - \left( y_{CO2}h_{CO2}(T_p)+y_{H2O}h_{H2O}(T_p)+y_{N2}h_{N2}(T_p) \right).$$
```python
import numpy as np
from scipy.optimize import fsolve
import yaml
class thermo:
def __init__(self, species, MW) :
"""
species: input string name of species in thermoData.yaml
M: input (species molecular weight, kg/kmol)
"""
self.Rgas = 8314.46 # J/kmol*K
self.M = MW
with open("thermoData.yaml") as yfile :
            yfile = yaml.safe_load(yfile)  # safe_load avoids the Loader requirement in newer PyYAML
self.a_lo = yfile[species]["a_lo"]
self.a_hi = yfile[species]["a_hi"]
self.T_lo = 300.
self.T_mid = 1000.
self.T_hi = 3000.
#--------------------------------------------------------
def h_mole(self,T) :
"""
return enthalpy in units of J/kmol
T: input (K)
"""
if T<=self.T_mid and T>=self.T_lo :
a = self.a_lo
elif T>self.T_mid and T<=self.T_hi :
a = self.a_hi
else :
print ("ERROR: temperature is out of range")
hrt = a[0] + a[1]/2.0*T + a[2]/3.0*T*T + a[3]/4.0*T**3.0 + a[4]/5.0*T**4.0 + a[5]/T
return hrt * self.Rgas * T
#--------------------------------------------------------
def h_mass(self,T) :
"""
return enthalpy in units of J/kg
T: input (K)
"""
return self.h_mole(T)/self.M
```
```python
def f_flame(Ta) :
"""
We are solving for hp = sum_i y_i*h_i. In f=0 form this is f = hp - sum_i y_i*h_i
We know the reactant temperature, so we can compute enthalpy (h). Then we know hp = hr (adiabatic).
Vary T until sum_i y_i*h_i = hp.
Steps:
1. Given moles --> mass --> mass fractions.
2. Make thermo classes for each species.
3. Compute hr = sum_i y_i*h_i.
... Do this for the reactants, then products.
"""
no2 = 2. # kmol
nch4 = 1.
nn2 = 7.52
nco2 = 1.
nh2o = 2.
Mo2 = 32. # kg/kmol
Mch4 = 16.
Mn2 = 28.
Mco2 = 44.
Mh2o = 18.
mo2 = no2*Mo2 # mass
mch4 = nch4*Mch4 # mass
mn2 = nn2*Mn2 # mass
mh2o = nh2o*Mh2o
mco2 = nco2*Mco2
t_o2 = thermo("O2",Mo2) # thermo object; use as: t_o2.h_mass(T) to get h_O2, etc.
t_ch4 = thermo("CH4",Mch4)
t_n2 = thermo("N2",Mn2)
t_co2 = thermo("CO2",Mco2)
t_h2o = thermo("H2O",Mh2o)
#-------- Reactants
# TO DO: compute total mass, then mass fractions
# TO DO: Set reactant temperature, then compute reactant enthalpy
#---------- Products
# TO DO: Set the product enthalpy = reactant enthalpy
# TO DO: Set the product mass fractions
# TO DO: Compute the enthalpy of the products corresponding to the current Tp
# Then return the function: f(Tp) = hp - hp_based_on_current_Tp
```
```python
# TO DO: Set a guess temperature, then solve for the product temperature
```
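A minimal sketch of one way to complete the TO DO items, written as a separate function so the template above stays intact; it reuses the `thermo` class and the species data defined above, and the 2000 K starting guess is my own choice.
```python
def f_flame_sketch(Ta):
    Ta = float(Ta)                       # fsolve passes a length-1 array
    no2, nch4, nn2, nco2, nh2o = 2., 1., 7.52, 1., 2.
    Mo2, Mch4, Mn2, Mco2, Mh2o = 32., 16., 28., 44., 18.
    mo2, mch4, mn2 = no2*Mo2, nch4*Mch4, nn2*Mn2
    mco2, mh2o = nco2*Mco2, nh2o*Mh2o
    t_o2, t_ch4, t_n2 = thermo("O2", Mo2), thermo("CH4", Mch4), thermo("N2", Mn2)
    t_co2, t_h2o = thermo("CO2", Mco2), thermo("H2O", Mh2o)
    # Reactants: total mass, mass fractions, enthalpy at Tr = 300 K
    mt = mo2 + mch4 + mn2                # total mass (conserved, so same for products)
    yo2, ych4, yn2 = mo2/mt, mch4/mt, mn2/mt
    Tr = 300.0
    hr = ych4*t_ch4.h_mass(Tr) + yo2*t_o2.h_mass(Tr) + yn2*t_n2.h_mass(Tr)
    # Products: adiabatic, so hp = hr; product mass fractions from product masses
    hp = hr
    yco2, yh2o, yn2p = mco2/mt, mh2o/mt, mn2/mt
    hp_Ta = yco2*t_co2.h_mass(Ta) + yh2o*t_h2o.h_mass(Ta) + yn2p*t_n2.h_mass(Ta)
    return hp - hp_Ta

Ta = fsolve(f_flame_sketch, 2000.0)      # guess temperature of 2000 K
print("Adiabatic flame temperature: {0:.1f} K".format(Ta[0]))
```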
### Problem 4
**Example: Solve a system of 6 equations in 6 unknowns**
This is solving a parallel pipe network where we have three pipes that are connected at the beginning and the end. The pipes can be of different lengths and diameter and pipe roughness. Given the total flow rate, and the pipe properties, find the flow rate through each of three parallel pipes.
* **Unknowns: three flow rates: $Q_1$, $Q_2$, $Q_3$**.
* We need ***three equations***.
* We'll label the pipes 1, 2, and 3.
* **Eq. 1:** $Q_{tot} = Q_1+Q_2+Q_3$.
* That is, the total flow rate is just the sum through each pipe.
* Because the pipes are connected, the pressure drop across each pipe is the same:
* **Eq. 2:** $\Delta P_1=\Delta P_2,$
* **Eq. 3:** $\Delta P_1=\Delta P_3$
* Now we need to relate the pressure drop equations to the unknowns. The pressure is related to the flow rate by:
* $\Delta P=\frac{fL\rho v^2}{2D}$, and we use $Q=Av=\frac{\pi}{4}D^2v\rightarrow v=\frac{4Q}{\pi D^2}$, where $Q$ is volumetric flow rate. Then, substitute for v to get: $$\Delta P=\frac{fL\rho}{2D}\left(\frac{4Q}{\pi D^2}\right)^2$$
* Here, $f$ is the friction factor in the pipe. We treat it as an unknown so we have **three more unknowns: $f_1$, $f_2$, $f_3$**. The Colebrook equation relates $f$ to $Q$ for given pipe properties. So, we have **three more equations**.
* Here are the **six equations** in terms of the **six unknowns: $Q_1$, $Q_2$, $Q_3$, $f_1$, $f_2$, $f_3$**.
1. $Q_1+Q_2+Q_3-Q_{tot} = 0$.
2. $\frac{f_1L_1\rho}{2D_1}\left(\frac{4Q_1}{\pi D_1^2}\right)^2 - \frac{f_2L_2\rho}{2D_2}\left(\frac{4Q_2}{\pi D_2^2}\right)^2 = 0$
3. $\frac{f_1L_1\rho}{2D_1}\left(\frac{4Q_1}{\pi D_1^2}\right)^2 - \frac{f_3L_3\rho}{2D_3}\left(\frac{4Q_3}{\pi D_3^2}\right)^2 = 0$
 4. The Colebrook equation relating $f_1$ to $Q_1$:
$$\frac{1}{\sqrt{f_1}}+2\log_{10}\left(\frac{\epsilon_1}{3.7D_1} + \frac{2.51\mu\pi D_1}{\rho 4Q_1\sqrt{f_1}}\right) = 0.$$
 5. The Colebrook equation relating $f_2$ to $Q_2$.
 6. The Colebrook equation relating $f_3$ to $Q_3$.
* All units are SI.
```python
def F_pipes(x) :
Q1 = x[0] # rename the vars so we can read our equations below.
Q2 = x[1]
Q3 = x[2]
f1 = x[3]
f2 = x[4]
f3 = x[5]
Qt = 0.01333 # Given total volumetric flow rate
e1 = 0.00024 # pipe roughness (m) (epsilon in the equation)
e2 = 0.00012
e3 = 0.0002
L1 = 100 # pipe length (m)
L2 = 150
L3 = 80
D1 = 0.05 # pipe diameter (m)
D2 = 0.045
D3 = 0.04
mu = 1.002E-3 # viscosity (kg/m*s)
rho = 998. # density (kg/m3)
F = np.zeros(6) # initialize the function array
# TO DO: Define the functions here
return F
#--------------------------------------
# TO DO: make a guess array for the unknowns: Q1, Q2, Q3, f1, f2, f3
# (use Q3 = Qtot-Q1-Q2 in your guess, for consistency)
# TO DO: Solve the problem and print the results.
```
```python
```
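A minimal sketch of one way to fill in the blanks, again kept as a separate function; the six residuals follow the numbered equations above, the pipe data are copied from `F_pipes`, and the starting guesses are my own choice.
```python
def F_pipes_sketch(x):
    Q1, Q2, Q3, f1, f2, f3 = x
    Qt = 0.01333
    e1, e2, e3 = 0.00024, 0.00012, 0.0002
    L1, L2, L3 = 100., 150., 80.
    D1, D2, D3 = 0.05, 0.045, 0.04
    mu, rho = 1.002E-3, 998.

    def dP(f, L, D, Q):                  # pressure drop of one pipe
        return f*L*rho/(2.0*D) * (4.0*Q/(np.pi*D**2))**2

    def colebrook(f, e, D, Q):           # Colebrook residual of one pipe
        return 1.0/np.sqrt(f) + 2.0*np.log10(e/(3.7*D)
                                             + 2.51*mu*np.pi*D/(rho*4.0*Q*np.sqrt(f)))

    F = np.zeros(6)
    F[0] = Q1 + Q2 + Q3 - Qt
    F[1] = dP(f1, L1, D1, Q1) - dP(f2, L2, D2, Q2)
    F[2] = dP(f1, L1, D1, Q1) - dP(f3, L3, D3, Q3)
    F[3] = colebrook(f1, e1, D1, Q1)
    F[4] = colebrook(f2, e2, D2, Q2)
    F[5] = colebrook(f3, e3, D3, Q3)
    return F

Q1g, Q2g = 0.005, 0.005                  # guesses; Q3 = Qtot - Q1 - Q2 for consistency
guess = np.array([Q1g, Q2g, 0.01333 - Q1g - Q2g, 0.02, 0.02, 0.02])
sol = fsolve(F_pipes_sketch, guess)
print("Q1, Q2, Q3 =", sol[:3])
print("f1, f2, f3 =", sol[3:])
```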
# Fundamentals of Data Science
Winter Semester 2021
## Prof. Fabio Galasso, Guido D'Amely, Alessandro Flaborea, Luca Franco, Muhammad Rameez Ur Rahman and Alessio Sampieri
<galasso@di.uniroma1.it>, <damely@di.uniroma1.it>, <flaborea@di.uniroma1.it>, <franco@diag.uniroma1.it>, <rahman@di.uniroma1.it>, <alessiosampieri27@gmail.com>
## Exercise 2: Classification
In Exercise 2, you will re-derive and implement logistic regression and optimize the parameters with Gradient Descent and with Newton's method. Also, in this exercise you will re-derive and implement Gaussian Discriminant Analysis.
We will use datasets generated from the make_classification function from the SkLearn library. Its first output contains the feature values $x^{(i)}_1$ and $x^{(i)}_2$ for the $i$-th data sample $x^{(i)}$. The second contains the ground truth label $y^{(i)}$ for each corresponding data sample.
The completed exercise should be handed in as a single notebook file. Use Markdown to provide equations. Use the code sections to provide your scripts and the corresponding plots.
Submit it by sending an email to galasso@di.uniroma1.it, flaborea@di.uniroma1.it, franco@diag.uniroma1.it and alessiosampieri27@gmail.com by Wednesday November 17th 2021, 23:59.
## Notation
- $x^i$ is the $i^{th}$ feature vector
- $y^i$ is the expected outcome for the $i^{th}$ training example
- $m$ is the number of training examples
- $n$ is the number of features
Let's start by setting up our Python environment and importing the required libraries:
```python
%matplotlib inline
import numpy as np # imports a fast numerical programming library
import scipy as sp # imports stats functions, amongst other things
import matplotlib as mpl # this actually imports matplotlib
import matplotlib.cm as cm # allows us easy access to colormaps
import matplotlib.pyplot as plt # sets up plotting under plt
import pandas as pd # lets us handle data as dataframes
from sklearn.datasets import make_classification
import seaborn as sns
# sets up pandas table display
pd.set_option('display.width', 500)
pd.set_option('display.max_columns', 100)
pd.set_option('display.notebook_repr_html', True)
import seaborn as sns # sets up styles and gives us more plotting options
```
## Question 1: Logistic Regression with Gradient Ascent **(10 Points)**
### Code and Theory
#### Exercise 1.a **(3 Points)** Equations for the log likelihood, its gradient, and the gradient ascent update rule.
Write and simplify the likelihood $L(\theta)$ and log-likelihood $l(\theta)$ of the parameters $\theta$.
Recall the probabilistic interpretation of the hypothesis $h_\theta(x)= P(y=1|x;\theta)$ and that $h_\theta(x)=\frac{1}{1+\exp(-\theta^T x)}$.
Also derive the gradient $\frac{\delta l(\theta)}{\delta \theta_j}$ of $l(\theta)$ and write the gradient update equation.
Question: Are we looking for a local minimum or a local maximum using the gradient ascent rule?
################# Do not write above this line #################
Assuming that $y$ can take only two different values $(0,1)$, the distribution of $y$ can be represented by a $Ber(h_{\theta}(x))$; here $m$ denotes the number of examples.
* Write and simplify the likelihood $L(\theta)$ and log-likelihood $l(\theta)$ of the parameters $\theta$.
We can write the $L(\theta)$ as follow:
$ L(\theta) = \mathbb{P}(\vec{y} | x;\theta) = \prod_{i = 1}^{m}\mathbb{P}(y^{(i)} | x^{(i)};\theta) = \prod_{i = 1}^{m}(h_{\theta}(x^{(i)}))^{y^{(i)}}\times (1-(h_{\theta}(x^{(i)}))^{1-y^{(i)}} $
Then for $l(\theta)$ we have:
$ l(\theta) = \log(L(\theta)) = \sum_{i=1}^{m}y^{(i)} \log(h_{\theta}(x^{(i)})) + (1-y^{(i)}) \log (1-h_{\theta}(x^{(i)}))$
* Recall the probabilistic interpretation of the hypothesis $h_\theta(x)= P(y=1|x;\theta)$ and that $h_\theta(x)=\frac{1}{1+\exp(-\theta^T x)}$.
$h_\theta(x)= P(y=1|x;\theta)$ is the probability of predicting class 1 given the features $x$, parametrized by $\theta$.
$h_\theta(x)=\frac{1}{1+\exp(-\theta^T x)}$: in linear regression the hypothesis could take any value in $\mathbb{R}$, but now we want to classify into two classes $\{0,1\}$, so we need to map $\theta^T x$ into $[0,1]$; to do so we apply the sigmoid function.
* Also derive the gradient $\frac{\delta l(\theta)}{\delta \theta_j}$ of $l(\theta)$ and write the gradient update equation.
The gradient $\frac{\delta l(\theta)}{\delta \theta_j}$ of $l(\theta)$ is:
$ \frac{\delta l(\theta)}{\delta \theta_j} = \sum_{i=1}^{m}(y^{(i)}-h_{\theta}(x^{(i)}))\times x_{j}^{(i)} $
The gradient ascent update equation is:
$ \theta_{j} := \theta_{j} + \alpha \sum_{i=1}^{m}(y^{(i)}-h_{\theta}(x^{(i)}))\times x_{j}^{(i)} $
* Question: Are we looking for a local minimum or a local maximum using the gradient ascent rule?
We are looking for a local maximum, because we want to maximize $l(\theta)$, which is a concave function.
################# Do not write below this line #################
#### Exercise 1.b **(7 Points)** Implementation of logistic regression with Gradient Ascent
Code up the equations above to learn the logistic regression parameters. The dataset used here is created using the make_classification function present in the SkLearn library. $x^{(i)}_1$ and $x^{(i)}_2$ represent the two features for the $i$-th data sample $x^{(i)}$ and $y^{(i)}$ is its ground truth label.
```python
X, y = make_classification(n_samples=500, n_features=2, n_informative=2, n_redundant=0, n_classes=2, random_state=5)
X.shape, y.shape
```
((500, 2), (500,))
```python
sns.scatterplot(x=X[:,0], y=X[:,1], hue=y);
```
Adding a column of 1's to $X$ to take into account the zero intercept
```python
x = np.hstack([np.ones((X.shape[0], 1)), X])
```
```python
[x[:5,:],x[-5:,:]] # Plot the first and last 5 lines of x, now containing features x0 (constant=1), x1 and x2
```
[array([[ 1. , 2.25698215, -1.34710915],
[ 1. , 1.43699308, 1.28420453],
[ 1. , 0.57927295, 0.23690172],
[ 1. , 0.42538132, -0.24611145],
[ 1. , 1.13485101, -0.61162683]]),
array([[ 1. , 1.56638944, 0.81749944],
[ 1. , -1.94913831, -1.90601147],
[ 1. , 1.53440506, -0.11687238],
[ 1. , -0.39243599, 1.39209018],
[ 1. , -0.11881249, 0.96973739]])]
```python
[y[:5],y[-5:]] # Plot the first and last 5 lines of y
```
[array([1, 1, 1, 0, 1]), array([1, 0, 0, 0, 1])]
Define the sigmoid function "sigmoid", the function to compute the gradient of the log likelihood "grad_l" and the gradient ascent algorithm.
################# Do not write above this line #################
```python
def sigmoid(x):
'''
Function to compute the sigmoid of a given input x.
Input:
x: it's the input data matrix. The shape is (N, H)
Output:
g: The sigmoid of the input x
'''
#####################################################
## YOUR CODE HERE ##
#####################################################
g = 1/(1 + np.exp(-x))
return g
def log_likelihood(theta,features,target):
'''
Function to compute the log likehood of theta according to data x and label y
Input:
theta: it's the model parameter matrix.
features: it's the input data matrix. The shape is (N, H)
target: the label array
Output:
log_g: the log likehood of theta according to data x and label y
'''
# theta^T*x
h = np.dot(features, theta)
# apply sigmoid function
h = sigmoid(h)
dim = features.shape
m = dim[0] # number of training example
# log_likelihood
log_l = (1/m)*np.sum( target*np.log(h)+(1-target)*np.log(1-h) )
return log_l
def predictions(features, theta):
'''
Function to compute the predictions for the input features
Input:
theta: it's the model parameter matrix.
features: it's the input data matrix. The shape is (N, H)
Output:
preds: the predictions of the input features
'''
#####################################################
## YOUR CODE HERE ##
#####################################################
# predictions = our hypothesis
h = np.dot(features, theta)
preds = sigmoid(h)
return preds
def update_theta(theta, target, preds, features, lr):
'''
Function to compute the gradient of the log likelihood
and then return the updated weights
Input:
theta: the model parameter matrix.
target: the label array
preds: the predictions of the input features
features: it's the input data matrix. The shape is (N, H)
lr: the learning rate
Output:
theta: the updated model parameter matrix.
'''
#####################################################
## YOUR CODE HERE ##
#####################################################
# calculate the derivative of the log_likelihood for all theta
dlog = np.dot(features.T, target - preds)
dim = features.shape
m = dim[0] # number of training example
# 1 time gradiant ascent rule: update theta
theta = theta + lr*dlog/m
return theta
def gradient_ascent(theta, features, target, lr, num_steps):
'''
Function to execute the gradient ascent algorithm
Input:
theta: the model parameter matrix.
target: the label array
num_steps: the number of iterations
features: the input data matrix. The shape is (N, H)
lr: the learning rate
Output:
theta: the final model parameter matrix.
log_likelihood_history: the values of the log likelihood during the process
'''
log_likelihood_history = np.zeros(num_steps)
#####################################################
## YOUR CODE HERE ##
#####################################################
# iter over the num_steps and compute the gradient ascent
for i in range(0,num_steps):
preds = predictions(features, theta)
theta = update_theta(theta, target, preds, features, lr)
# save the log_likelihood history
log_likelihood_history[i] = log_likelihood(theta,features,target)
return theta, log_likelihood_history
```
################# Do not write below this line #################
Check your grad_l implementation:
grad_l applied to the theta_test (defined below) should provide a value for log_l_test close to the target_value (defined below); in other words the error_test should be 0, up to machine error precision.
```python
target_value = -1.630501731599431
output_test = log_likelihood(np.array([-7,4,1]),x,y)
error_test=np.abs(output_test-target_value)
print("{:f}".format(error_test))
```
0.000000
Let's now apply the function gradient_ascent and print the final theta as well as theta_history
```python
# Initialize theta0
theta0 = np.zeros(x.shape[1])
# Run Gradient Ascent method
n_iter=1000
theta_final, log_l_history = gradient_ascent(theta0,x,y,lr=0.5,num_steps=n_iter)
print(theta_final)
```
[-0.46097042 2.90036399 0.23146846]
Let's plot the log likelihood over iterations
```python
fig,ax = plt.subplots(num=2)
ax.set_ylabel('l(Theta)')
ax.set_xlabel('Iterations')
_=ax.plot(range(len(log_l_history)),log_l_history,'b.')
```
Plot the data and the decision boundary:
```python
# Generate vector to plot decision boundary
x1_vec = np.linspace(X[:,0].min(),X[:,1].max(),2)
print(x1_vec)
# Plot raw data
sns.scatterplot(x=X[:,0], y=X[:,1], hue=y, data=X)
# Plot decision boundary
plt.plot(x1_vec,(-x1_vec*theta_final[1]-theta_final[0])/theta_final[2], color="red")
plt.ylim(X[:,1].min()-1,X[:,1].max()+1)
# Save the theta_final value for later comparisons
theta_GA = theta_final.copy()
```
################# Do not write above this line #################
Discuss these two points:
1. You have implemented the gradient ascent rule. Could we have also used gradient descent instead for the proposed problem? Why/Why not?
In the form in which we wrote the likelihood $l(\theta)$ we cannot use gradient descent directly, because we want to maximize this function, which is concave. The only way to use gradient descent in this analysis is to apply it to $-l(\theta)$: changing the sign gives a convex function to minimize.
2. Let's deeply analyze how the learning rate $\alpha$ and the number of iterations affect the final results. Run the algorithm you have written for different values of $\alpha$ and the number of iterations and look at the outputs you get. Is the decision boundary influenced by these parameters change? Why do you think these parameters are affecting/not affecting the results?
We observe that for $\alpha \to 0$ the update leaves $\theta_j$ essentially unchanged and $l(\theta) \to \ln(1/2)$: for very small values of $\alpha$ the update of $\theta$ is very slow and the parameters remain approximately zero, so $\theta^T x \to 0$, i.e., the decision boundary approaches the vertical axis $x = 0$.
While setting $\alpha \to 0$ we also modified the number of iterations, and we noticed that increasing the iterations increases the precision: the log-likelihood history gives a smoother curve and more accuracy.
################# Do not write below this line #################
## Question 2: Logistic Regression with non linear boundaries (7 points)
#### Exercise 2.a **(4 Points)** Polynomial features for logistic regression
Define new features, e.g. of 2nd and 3rd degrees, and learn a logistic regression classifier by using the new features, by using the gradient ascent optimization algorithm you defined in Question 1.
In particular, we would consider a polynomial boundary with equation:
$f(x_1, x_2) = c_0 + c_1 x_1 + c_2 x_2 + c_3 x_1^2 + c_4 x_2^2 + c_5 x_1 x_2 + c_6 x_1^3 + c_7 x_2^3 + c_8 x_1^2 x_2 + c_9 x_1 x_2^2$
We would therefore compute 7 new features: 3 new ones for the quadratic terms and 4 new ones for the cubic terms.
Create new arrays by stacking x and the new 7 features (in the order x1x1, x2x2, x1x2, x1x1x1, x2x2x2, x1x1x2, x1x2x2). In particular create x_new_quad by additionally stacking with x the quadratic features, and x_new_cubic by additionally stacking with x the quadratic and the cubic features.
```python
from sklearn.datasets import make_classification
X, y = make_classification(n_samples=500, n_features=2, n_informative=2, n_redundant=0, n_classes=2, random_state=5)
X.shape, y.shape
```
((500, 2), (500,))
```python
x = np.hstack([np.ones((X.shape[0], 1)), X])
```
```python
import seaborn as sns
import matplotlib.pyplot as plt
sns.scatterplot(x=X[:,0], y=X[:,1], hue=y);
```
```python
# First extract features x1 and x2 from x and reshape them to x1 vector arrays
x1 = x[:,1]
x2 = x[:,2]
x1 = x1.reshape(x1.shape[0], 1)
x2 = x2.reshape(x2.shape[0], 1)
print(x[:5,:]) # For visualization of the first 5 values
print(x1[:5,:]) # For visualization of the first 5 values
print(x2[:5,:]) # For visualization of the first 5 values
```
[[ 1. 2.25698215 -1.34710915]
[ 1. 1.43699308 1.28420453]
[ 1. 0.57927295 0.23690172]
[ 1. 0.42538132 -0.24611145]
[ 1. 1.13485101 -0.61162683]]
[[2.25698215]
[1.43699308]
[0.57927295]
[0.42538132]
[1.13485101]]
[[-1.34710915]
[ 1.28420453]
[ 0.23690172]
[-0.24611145]
[-0.61162683]]
################# Do not write above this line #################
Your code here
```python
def new_features(x, degree=2):
'''
Function to create n-degree features from the input
Input:
x: the initial features
degree: the maximum degree you want the features
Output:
features: the final features.
2nd degree features must have the order [x, x1x1, x1x2, x2x2]
3nd degree features must have the order [x, x1x1, x1x2, x2x2, x1x1x1, x1x1x2, x1x2x2, x2x2x2]
'''
# Initialize features as x0
features = np.ones(x[:,1].shape[0])
# take x1 and x2
x1 = x[:,1]
x2 = x[:,2]
# initialize array that store the feature at time-step i
tmp_new = x[:,1:3]
# initialize tmp_old
tmp_old = np.array([])
# Generating the new_features
if degree < 2:
return x
else:
for i in range(1, degree):
# intialize tmp array
array_1 = np.empty((tmp_new.shape[0], tmp_new.shape[1]))
x1 = x1.T
# multiply the feature of previous degree with x1
np.multiply(tmp_new, x1[:,np.newaxis], out = array_1)
x2 = x2.T
# same for x2
tmp_new = np.column_stack((array_1, np.multiply(tmp_new, x2[:,np.newaxis])[:,-1]))
# store the new feature
tmp_old = np.column_stack((tmp_old, tmp_new)) if tmp_old.size else tmp_new
# stack the desire feature
features = np.column_stack((x, tmp_old))
return features
```
################# Do not write below this line #################
```python
x_new_quad = new_features(x, degree=2)
x_new_cubic = new_features(x, degree=3)
#reordering output features
temp = np.copy(x_new_quad[:, -1])
x_new_quad[:, -1] = x_new_quad[:, -2]
x_new_quad[:, -2] = temp
temp = np.copy(x_new_cubic[:, -1])
x_new_cubic[:, -1] = x_new_cubic[:, -2]
x_new_cubic[:, -2] = x_new_cubic[:, -3]
x_new_cubic[:, -3] = temp
```
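As an optional cross-check, sklearn's `PolynomialFeatures` generates the same sets of monomials, although in a different column order, so only the shapes are compared here.
```python
from sklearn.preprocessing import PolynomialFeatures

# include_bias=True adds the constant column, matching the x0 column above
poly_quad = PolynomialFeatures(degree=2, include_bias=True).fit_transform(X)
poly_cubic = PolynomialFeatures(degree=3, include_bias=True).fit_transform(X)
print(poly_quad.shape, x_new_quad.shape)    # both (500, 6)
print(poly_cubic.shape, x_new_cubic.shape)  # both (500, 10)
```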
Now use the gradient ascent optimization algorithm to learn theta by maximizing the log-likelihood, both for the case of x_new_quad and x_new_cubic.
```python
# Initialize theta0, in case of quadratic features
theta0_quad = np.zeros(x_new_quad.shape[1])
theta_final_quad, log_l_history_quad = gradient_ascent(theta0_quad,x_new_quad,y,lr=0.5,num_steps=n_iter)
# Initialize theta0, in case of quadratic and cubic features
theta0_cubic = np.zeros(x_new_cubic.shape[1])
# Run Newton's method, in case of quadratic and cubic features
theta_final_cubic, log_l_history_cubic = gradient_ascent(theta0_cubic,x_new_cubic,y,lr=0.5,num_steps=n_iter)
# check and compare with previous results
print(theta_final_quad)
print(theta_final_cubic)
```
[ 0.07548038 3.32398303 0.27106753 -0.51873505 -0.34088582 -0.04846172]
[ 0.82395453 2.41957969 1.66049077 -1.13927962 -0.20657699 -0.96074035
0.34384121 -0.64078458 0.95132383 1.40927121]
```python
# Plot the log likelihood values in the optimization iterations, in one of the two cases.
fig,ax = plt.subplots(num=2)
ax.set_ylabel('l(Theta)')
ax.set_xlabel('Iterations')
_=ax.plot(range(len(log_l_history_quad)),log_l_history_quad,'b.')
```
#### Exercise 2.b **(3 Points)** Plot the computed non-linear boundary and discuss the questions
First, define a boundary_function to compute the boundary equation for the input feature vectors $x_1$ and $x_2$, according to estimated parameters theta, both in the case of quadratic (theta_final_quad) and of quadratic and cubic features (theta_final_cubic). Refer for the equation to the introductory part of Question 2.
################# Do not write above this line #################
Your code here
```python
def boundary_function(x1_vec, x2_vec, theta_final):
x1_vec, x2_vec = np.meshgrid(x1_vec,x2_vec)
if len(theta_final) == 6:
# boundary function value for features up to quadratic
c_0, c_1, c_2, c_3, c_4, c_5 = theta_final
f = c_0 + c_1*x1_vec + c_2*x2_vec + c_3*(x1_vec**2) + c_4*(x2_vec**2) + c_5*(x1_vec*x2_vec)
elif len(theta_final) == 10:
# boundary function value for features up to cubic
c_0, c_1, c_2, c_3, c_4, c_5, c_6, c_7, c_8, c_9 = theta_final
f = c_0 + c_1*x1_vec + c_2*x2_vec + c_3*(x1_vec**2) + c_4*(x2_vec**2) + c_5*(x1_vec*x2_vec) + c_6*(x1_vec**3) + c_7*(x2_vec**3) + c_8*(x1_vec**2 * x2_vec) + c_9*(x1_vec*x2_vec**2)
else:
raise("Number of Parameters is not correct")
return x1_vec, x2_vec, f
```
################# Do not write below this line #################
Now plot the decision boundaries corresponding to the theta_final_quad and theta_final_cubic solutions.
```python
x1_vec = np.linspace(X[:,0].min()-1,X[:,0].max()+1,200);
x2_vec = np.linspace(X[:,1].min()-1,X[:,1].max()+1,200);
x1_vec, x2_vec, f = boundary_function(x1_vec, x2_vec, theta_final_quad)
sns.scatterplot(x=X[:,0], y=X[:,1], hue=y, data=X);
plt.contour(x1_vec, x2_vec, f, colors="red", levels=[0])
plt.show()
```
```python
x1_vec = np.linspace(X[:,0].min()-1,X[:,0].max()+1,200);
x2_vec = np.linspace(X[:,1].min()-1,X[:,1].max()+1,200);
x1_vec, x2_vec, f = boundary_function(x1_vec, x2_vec, theta_final_cubic)
sns.scatterplot(x=X[:,0], y=X[:,1], hue=y, data=X);
plt.contour(x1_vec, x2_vec, f, colors="red", levels=[0])
plt.show()
```
#### Confusion Matrix
Here you can see the confusion matrices related to the three models you've implemented.
```python
from sklearn.metrics import confusion_matrix
```
```python
## logistic regression with linear buondary
z = np.dot(x,theta_final)
probabilities = sigmoid(z)
y_hat = np.array(list(map(lambda x: 1 if x>0.5 else 0, probabilities)))
confusion_matrix(y, y_hat)
```
array([[218, 35],
[ 22, 225]], dtype=int64)
```python
## logistic regression with non linear buondary - quadratic
z = np.dot(x_new_quad,theta_final_quad)
probabilities = sigmoid(z)
y_hat = np.array(list(map(lambda x: 1 if x>0.5 else 0, probabilities)))
confusion_matrix(y, y_hat)
```
array([[220, 33],
[ 15, 232]], dtype=int64)
```python
## logistic regression with non linear buondary - cubic
z = np.dot(x_new_cubic,theta_final_cubic)
probabilities = sigmoid(z)
y_hat = np.array(list(map(lambda x: 1 if x>0.5 else 0, probabilities)))
confusion_matrix(y, y_hat)
```
array([[225, 28],
[ 11, 236]], dtype=int64)
################# Do not write above this line #################
Write now your considerations. Discuss in particular:
- Look back at the plots you have generated. What can you say about the differences between the linear, quadratic, and cubic decision boundaries? Can you say if the model is improving in performances, increasing the degree of the polynomial? Do you think you can incur in underfitting increasing more and more the degree?
The decision boundary becomes more precise when we increase the degree; in fact, we can see that the model predicts the classes of our training data very well.
There could be problems if we increase the degree of the decision boundary too much, because we could encounter the phenomenon of "overfitting" (rather than "underfitting"), in which the boundary fits the training data too closely, so that when we supply new data the classifier will fail.
In conclusion, an excessively high degree could negatively affect our model and generate an overfitting problem.
- Let's now delve into some quantitative analysis. The three tables you have generated represent the confusion matrix for the model you have implemented in the first two questions. What can you say about actual performances? Does the increase of the degree have a high effect on the results?
The performance of our model increases with the degree chosen for the logistic regression features.
In the three confusion matrices we can observe that the number of true positives and true negatives increases and, at the same time, the false positives and false negatives decrease; in conclusion, the model works better with higher degrees.
################# Do not write below this line #################
## Question 3: Multinomial Classification (Softmax Regression) **(13 Points)**
### Code and Theory **(10 Points)**
### Report **(3 Points)**
#### Exercise 3.a **(4 Points)**
In the multinomial classification we generally have $K>2$ classes. So the label for the $i$-th sample $X_i$ is $y_i\in\{1,...,K\}$, where $i=1,...,N$. The output class for each sample is estimated by returning a score $s_i$ for each of the K classes. This results in a vector of scores of dimension K.
In this exercise we'll use the *Softmax Regression* model, which is the natural extension of *Logistic Regression* for the case of more than 2 classes. The score array is given by the linear model:
\begin{equation}
s_i = X_i \theta
\end{equation}
Scores may be interpreted probabilistically, upon application of the function *softmax*. The position in the vector with the highest probability will be predicted as the output class. The probability of the class k for the $i$-th data sample is:
\begin{equation}
p_{ik} = \frac{\exp(X_i \theta_k)}{\sum_{j=1}^K \exp(X_i \theta_j)}
\end{equation}
We will adopt the *Cross Entropy* loss and optimize the model via *Gradient Descent*.
In the first of this exercise we have to:
- Write the equations of the Cross Entropy loss for the Softmax regression model;
- Compute the equation for the gradient of the Cross Entropy loss for the model, in order to use it in the gradient descent algorithm.
#### A bit of notation
* N: is the number of samples
* K: is the number of classes
* X: is the input dataset and it has shape (N, H) where H is the number of features
* y: is the output array with the labels; it has shape (N, 1)
* $\theta$: is the parameter matrix of the model; it has shape (H, K)
################# Do not write above this line #################
Your equations here.
\begin{equation}
L(\theta) = -\sum_{k=1}^K p(y_k) \log(p_{ik}) = -\log(p_{iC}) = -\log\Bigg(\frac{\exp(X_i \theta_C)}{\sum_{j=1}^K \exp(X_i \theta_j)}\Bigg)
\end{equation}
where $p(y_k)$ is the one-hot encoding of the label of sample $i$ and $C$ is its correct class.
\begin{equation}
\nabla_{\theta_k} L(\theta) = \frac{1}{m}\sum_{i=1}^m (p_{ik}-p(y_k)^{(i)})\times x^{(i)}
\end{equation}
Credits for the derivative equation (we did not derive it during the lessons): https://medium.datadriveninvestor.com/softmax-classifier-using-gradient-descent-and-early-stopping-7a2bb99f8500#:~:text=We%20will%20be%20using%20the%20following%20function%20for%20cross%2Dentropy%3A
################# Do not write below this line #################
#### Exercise 3.b **(4 Points)**
Now we will implement the code for the equations. Let's implement the functions:
- softmax
- CELoss
- CELoss gradient
- gradient descent
We generate a toy dataset with *sklearn* library. Do not change anything outside the parts provided of your own code (else the provided checkpoint will not work).
```python
from sklearn.datasets import make_classification
X, y = make_classification(n_samples=300, n_features=7, n_informative=7, n_redundant=0, n_classes=3, random_state=1)
X.shape, y.shape
```
((300, 7), (300,))
As a hint for the implementation of your functions: consider the labels $y$ as one-hot vectors. This will allow matrix operations (element-wise multiplication and summation).
```python
import scipy
import numpy as np
def class2OneHot(vec):
out_sparse = scipy.sparse.csr_matrix((np.ones(vec.shape[0]), (vec, np.array(range(vec.shape[0])))))
out_onehot = np.array(out_sparse.todense()).T
return out_onehot
y_onehot = class2OneHot(y)
```
Let's visualize the generated dataset. As the visualization method we use *Principal Component Analysis* (PCA). PCA summarizes the high-dimensional feature vector of each sample into 2 features, which we can illustrate with a 2D plot. Looking at the following plot, the 3 generated classes do not seem separable.
```python
from sklearn.decomposition import PCA
import pandas as pd
pca = PCA(n_components=2)
principalComponents = pca.fit_transform(X)
principalDf = pd.DataFrame(data = principalComponents, columns = ['pc1', 'pc2'])
finalDf = pd.concat([principalDf, pd.DataFrame(y, columns = ['target'])], axis = 1)
```
```python
import seaborn as sns
import matplotlib.pyplot as plt
sns.scatterplot(x='pc1', y='pc2', hue='target', data=finalDf);
```
################# Do not write above this line #################
```python
def softmax(theta, X):
'''
Function to compute associated probability for each sample and each class.
Input:
theta: it's the model parameter matrix. The shape is (H, K)
X: it's the input data matrix. The shape is (N, H)
Output:
softmax: it's the matrix containing probability for each sample and each class. The shape is (N, K)
'''
#####################################################
## YOUR CODE HERE ##
#####################################################
# calculate the normalization term of the softmax
normalization = np.sum( np.exp(np.matmul(X,theta)) , axis = 1)
# apply the softmax formula
softmax = np.exp(np.matmul(X,theta)) / normalization[: , np.newaxis]
return softmax
def CELoss(theta, X, y_onehot):
'''
Function to compute softmax regression model and Cross Entropy loss.
Input:
theta: it's the model parameter matrix. The shape is (H, K)
X: it's the input data matrix. The shape is (N, H)
y_onehot: it's the label array in encoded as one hot vector. The shape is (N, K)
Output:
loss: The scalar that is the mean error for each sample.
'''
#####################################################
## YOUR CODE HERE ##
#####################################################
# calculate the cross entropy using the formula
psoft = np.log(softmax(theta, X))
loss = - np.sum ( np.multiply( y_onehot, psoft ), axis = 1)
# taking the mean of the loss values
loss = np.mean(loss)
return loss
def CELoss_jacobian(theta, X, y_onehot):
'''
Function to compute gradient of the cross entropy loss with respect the parameters.
Input:
theta: it's the model parameter matrix. The shape is (H, K)
X: it's the input data matrix. The shape is (N, H)
y_onehot: it's the label array in encoded as one hot vector. The shape is (N, K)
Output:
jacobian: A matrix with the partial derivatives of the loss. The shape is (H, K)
'''
#####################################################
## YOUR CODE HERE ##
#####################################################
psoft = softmax(theta, X)
m = X.shape[0]
# calculate the jacobian matrix using the formula
jacobian = (1/m)*np.dot(X.T, psoft-y_onehot)
return jacobian
def gradient_descent(theta, X, y_onehot, alpha=0.01, iterations=100):
'''
Function to compute gradient of the cross entropy loss with respect the parameters.
Input:
theta: it's the model parameter matrix. The shape is (H, K)
X: it's the input data matrix. The shape is (N, H)
y_onehot: it's the label array in encoded as one hot vector. The shape is (N, K)
alpha: it's the learning rate, so it determines the speed of each step of the GD algorithm
iterations: it's the total number of step the algorithm performs
Output:
theta: it's the updated matrix of the parameters after all the iterations of the optimization algorithm. The shape is (H, K)
loss_history: it's an array with the computed loss after each iteration
'''
# We initialize an empty array to be filled with loss value after each iteration
loss_history = np.zeros(iterations)
# With a for loop we compute the steps of GD algo
for it in range(iterations):
loss_history[it] = CELoss(theta, X, y_onehot)
# gradient descent
theta = theta - alpha * CELoss_jacobian(theta, X, y_onehot)
return theta, loss_history
```
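As an optional sanity check (an addition, not required by the template), the analytical gradient can be compared against a finite-difference approximation, reusing the `CELoss` and `CELoss_jacobian` functions and the `X`, `y_onehot` arrays defined above:
```python
# Optional: numerical gradient check for CELoss_jacobian
def numerical_gradient(theta, X, y_onehot, eps=1e-6):
    num_grad = np.zeros_like(theta)
    for h in range(theta.shape[0]):
        for k in range(theta.shape[1]):
            theta_plus, theta_minus = theta.copy(), theta.copy()
            theta_plus[h, k] += eps
            theta_minus[h, k] -= eps
            # central finite difference of the loss w.r.t. theta[h, k]
            num_grad[h, k] = (CELoss(theta_plus, X, y_onehot) -
                              CELoss(theta_minus, X, y_onehot)) / (2 * eps)
    return num_grad

theta_check = np.random.rand(X.shape[1], y_onehot.shape[1])
diff = np.abs(CELoss_jacobian(theta_check, X, y_onehot) -
              numerical_gradient(theta_check, X, y_onehot)).max()
print("max abs difference between analytical and numerical gradient:", diff)
```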
################# Do not write below this line #################
```python
# Initialize a theta matrix with random parameters
theta0 = np.random.rand(X.shape[1], len(np.unique(y)))
print("Initial Loss with initialized theta is:", CELoss(theta0, X, y_onehot))
# Run Gradient Descent method
n_iter = 1000
theta_final, log_l_history = gradient_descent(theta0, X, y_onehot, alpha=0.01, iterations=n_iter)
```
Initial Loss with initialized theta is: 1.5982827200914658
```python
theta_final
```
array([[ 0.46609239, 0.52911426, 0.54754199],
[ 0.70465698, 0.57336488, 0.67684666],
[-0.01248112, -0.00863701, 0.63646858],
[ 0.18673016, -0.12676055, 0.96504178],
[ 0.90906325, 0.37700336, 0.43062136],
[ 0.71457232, 0.70276477, 0.49710281],
[ 0.66020061, -0.37887338, 0.88270064]])
```python
loss = CELoss(theta_final, X, y_onehot)
loss
```
0.5856876967961754
```python
fig,ax = plt.subplots(num=2)
ax.set_ylabel('loss')
ax.set_xlabel('Iterations')
_=ax.plot(range(len(log_l_history)), log_l_history,'b.')
```
#### Exercise 3.c **(2 Points)**
Let's now evaluate the goodness of the learnt model based on accuracy:
\begin{equation}
Accuracy = \frac{Number\ of\ correct\ predictions}{Total\ number\ of\ predictions}
\end{equation}
Implement the compute_accuracy function. You may compare the accuracy achieved with the learnt model vs. a random model (random $\Theta$) or one based on a $\Theta$ filled with zeros.
################# Do not write above this line #################
```python
def compute_accuracy(theta, X, y):
'''
Function to compute accuracy metrics of the softmax regression model.
Input:
theta: it's the final parameter matrix. The one we learned after all the iterations of the GD algorithm. The shape is (H, K)
X: it's the input data matrix. The shape is (N, H)
y: it's the label array. The shape is (N, 1)
Output:
accuracy: Score of the accuracy.
'''
# we compute the hypothesis h(theta) i.e the softmax
h = softmax(theta,X)
# take the index of the maximum score in each row, which corresponds to the predicted class
pred = np.argmax(h, axis = 1)
# count the correct predictions by comparing with the true label vector
fav = np.sum(pred == y)
accuracy = fav/len(y)
return accuracy
```
################# Do not write below this line #################
```python
compute_accuracy(theta_final, X, y)
```
0.7933333333333333
```python
theta0 = np.random.rand(X.shape[1], len(np.unique(y)))
compute_accuracy(theta0, X, y)
```
0.44
```python
compute_accuracy(np.zeros((X.shape[1], len(np.unique(y)))), X, y)
```
0.3333333333333333
### Report **(3 Points)**
Experiment with different values of the learning rate $\alpha$ and the number of iterations. Observe how the loss plot, the convergence rate and the resulting accuracy metric change. Also report the execution time of each run. For this last step you could use %%time at the beginning of the cell to display the time needed by the algorithm. A helper loop is sketched after the report table below.
```python
%%time
# Initialize a theta matrix with random parameters
theta0 = np.random.rand(X.shape[1], len(np.unique(y)))
print("Initial Loss with initialized theta is:", CELoss(theta0, X, y_onehot))
# Run Gradient Descent method
n_iter = 100
theta_final, log_l_history = gradient_descent(theta0, X, y_onehot, alpha=0.001, iterations=n_iter)
```
Initial Loss with initialized theta is: 1.7431235982316517
Wall time: 16 ms
**Write your Report here**
| LR | Iter | Accuracy | Time |
|---|---|---|---|
| | | | |
| | | | |
| | | | |
| ... | | | |
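A small helper loop (an addition, not part of the provided template; the learning-rate/iteration pairs below are arbitrary examples) that can be used to fill in the table above. It times each run with `time.time()` instead of the `%%time` magic and reuses the functions defined earlier:
```python
import time

for alpha, n_iter in [(0.001, 100), (0.01, 100), (0.01, 1000), (0.1, 1000)]:
    theta0 = np.random.rand(X.shape[1], len(np.unique(y)))
    start = time.time()
    theta_run, _ = gradient_descent(theta0, X, y_onehot, alpha=alpha, iterations=n_iter)
    elapsed = time.time() - start
    acc = compute_accuracy(theta_run, X, y)
    print(f"LR={alpha}, iterations={n_iter}, accuracy={acc:.3f}, time={elapsed*1000:.1f} ms")
```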
## Question 4: Multinomial Naive Bayes **(6 Points)**
### Code and Theory
The Naive Bayes classifier is a probabilistic machine learning model often used for classification tasks, e.g. document classification problems.
In the multinomial Naive Bayes classification you generally have $K>2$ classes, and the features are assumed to be generated from a multinomial distribution.
##### __*Example Data*__
General models treat the input data as values. In the case of MultinomialNB, which is used mainly in the field of document classification, the data represent how many times each feature $X_i$ occurs in the sample. Basically, it is a count of features within each document.
Taking into account $D=3$ documents and a vocabulary consisting of $N=4$ words, the data are considered as follows.
| | $w_1$ | $w_2$ | $w_3$ | $w_4$ |
|---|---|---|---|---|
| $d_1$ | 3 | 0 | 1 | 1 |
| $d_2$ | 2 | 1 | 3 |0|
| $d_3$ | 2 | 2 | 0 |2|
By randomly generating the class to which each document belongs we have $y=[1,0,1]$
##### __*A bit of notation*__
- $Y =\{y_1, y_2, ... , y_{|Y|}\}$: set of classes
- $V =\{w_1, w_2, ... , w_{|V|}\}$: set of vocabulary
- $D =\{d_1, d_2, ... , d_{|D|}\}$: set of documents
- $N_{yi}$: count of a specific word $w_i$ in each unique class, e.g. for $y=1$ you select $D_1$ and $D_3$, then for third column you have $N_{y,3}=1$
- $N_y$: total count of features for a specific class, e.g. for $y=1$ you sum all the row values whose label is 1, so $N_y=11$ (both quoted values are verified numerically in the short sketch after this list)
- $n$: total number of features (words in vocabulary)
- $\alpha$: smoothing parameters
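To make the notation concrete, here is a small numeric check (an addition to the assignment text) that reproduces the values quoted above, $N_{y,3}=1$ and $N_y=11$ for class $y=1$, from the toy table:
```python
import numpy as np

docs = np.array([[3, 0, 1, 1],   # d1
                 [2, 1, 3, 0],   # d2
                 [2, 2, 0, 2]])  # d3
labels = np.array([1, 0, 1])

N_yi_class1 = docs[labels == 1].sum(axis=0)  # per-word counts for class y=1
N_y_class1 = N_yi_class1.sum()               # total count for class y=1
print(N_yi_class1)  # [5 2 1 3] -> the third entry is N_{y,3} = 1
print(N_y_class1)   # 11
```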
##### __*Task*__
Find the class $y$ to which the document is most likely to belong given the words $w$.
Use the Bayes formula and the posterior probability for it.
Bayes Formula:
\begin{equation}
P(A|B) = \frac{P(A)*P(B|A)}{P(B)}
\end{equation}
Where:
- P(A): Prior probability of A
- P(B): Prior probability of B
- P(B|A): Likelihood, the product of the per-feature posterior probabilities; for multinomial Naive Bayes the per-feature term is:
\begin{equation}
P(B|A) = \left(\frac{N_{yi}+\alpha}{N_{y}+\alpha*n\_features}\right)^{X_{doc,i}}
\end{equation}
**Reminder: do not change any part of this notebook outside the assigned work spaces**
#### Generate random dataset
```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
X, y = make_classification(n_samples=300, n_features=7, n_informative=7, n_redundant=0, n_classes=3, random_state=1)
X = np.floor(X)-np.min(np.floor(X))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train.shape, y_train.shape
```
((240, 7), (240,))
#### Step0: $N_y$ and $N_{yi}$
```python
def feature_count(X, y, classes, n_classes, n_features):
'''
Function to compute the count of a specific word in each unique class and the total count.
Input:
X: it's the input data matrix.
y: label array
classes: unique values of y
n_classes: number of classes
n_features: it's the number of word in Vocabulary.
Output:
N_yi: count of a specific word $w_i$ in each unique class
N_y: total count of features for a specific class
'''
N_yi = np.zeros((n_classes, n_features)) # feature count
N_y = np.zeros((n_classes)) # total count
#####################################################
## YOUR CODE HERE ##
#####################################################
# iterate over the classes
for class_ in classes:
# create a mask to find the document belonging to the
# selected class
select_class = np.where(y == class_)
# sum over the columns of X (i.e the word fixed)
N_yi[class_] = np.sum(X[select_class], axis = 0)
# sum over the rows of N_yi (i.e. over the all words of a class)
N_y = np.sum(N_yi, axis = 1)
return N_yi, N_y
```
```python
n_samples_train, n_features = X_train.shape
classes = np.unique(y_train)
n_classes = 3
alpha = 0.1
N_yi, N_y = feature_count(X_train, y_train, classes, n_classes, n_features)
```
#### Step1: Prior Probability
The probability of a document being in a specific category from the given set of documents.
######################################
Your equations here:
\begin{equation}
P(y_j) = \frac{\sum_{i=1}^{|D|} \mathbb{1}(\{ y^{(i)} = j\}) }{|D|} \hspace{0.3cm} \forall y_j \in Y = \{y_0, \ldots , y_{|Y|} \}
\end{equation}
Where: <br>
* $|D|$ = number of documents
* $y^{(i)}$ = is the class of $document_i$
We are counting the relative frequence of $document_i$ to be classified as $j$.
######################################
```python
def prior_(X, y, n_classes, n_samples):
"""
Calculates prior for each unique class in y.
Input:
X: it's the input data matrix.
y: label array
n_classes: number of classes
n_samples: number of documents
Output:
P: prior probability for each class. Shape: (, n_classes)
"""
classes = np.unique(y)
P = np.zeros(n_classes)
# Implement Prior Probability P(A)
#####################################################
## YOUR CODE HERE ##
#####################################################
for class_ in classes:
P[class_] = np.sum(y==class_) / n_samples
return P
```
```python
prior_prob = prior_(X_train, y_train, n_classes, n_samples_train)
print(prior_prob)
```
[0.3 0.34583333 0.35416667]
#### Step2
Posterior Probability: The conditional probability of a word occurring in a document given that the document belongs to a particular category.
\begin{equation}
P(w_i|y_j) = \left(\frac{N_{yi}+\alpha}{N_{y}+\alpha*n\_features}\right)^{X_{doc,i}}
\end{equation}
Likelihood for a single document:
######################################
Your equations here:
\begin{equation}
P(w|y_j) = \prod_{i = 1}^{n\_features} P(w_i|y_j)
\end{equation}
We use the fact that $ w = (w_1, \dots ,w_{n\_features} )$ has independent components (the naive assumption).
######################################
```python
def posterior_(x_i, i, h, N_y, N_yi, n_features, alpha):
"""
Calculates posterior probability. aka P(w_i|y_j) using equation in the notebook.
Input:
x_i: feature x_i
i: feature index.
h: a class in y
N_yi: count of a specific word in each unique class
N_y: total count of features for a specific class
n_features: it's the number of word in Vocabulary.
alpha: smoothing parameter
Output:
posterior: P(xi | y). Float.
"""
# Implement Posterior Probability
#####################################################
## YOUR CODE HERE ##
#####################################################
# formula
posterior = ( (N_yi[h,i] + alpha) / (N_y[h] + alpha * n_features) )**x_i
return posterior
def likelihood_(x, h, N_y, N_yi, n_features, alpha):
"""
Calculates Likelihood P(w|j_i).
Input:
x: a row of test data. Shape(n_features,)
h: a class in y
N_yi: count of a specific word in each unique class
N_y: total count of features for a specific class
n_features: it's the number of word in Vocabulary.
alpha: smoothing parameter
Output:
likelihood: Float.
"""
tmp = []
for i in range(x.shape[0]):
tmp.append(posterior_(x[i], i, h, N_y, N_yi, n_features, alpha))
# Implement Likelihood
#####################################################
## YOUR CODE HERE ##
#####################################################
likelihood = np.prod(tmp)
return likelihood
```
```python
# Example of likelihood for first document
likelihood_(X_test[0], 0, N_y, N_yi, n_features, alpha)
```
2.7754694679413126e-53
#### Step3
Joint likelihood that, given the words, the document belongs to a specific class
######################################
Your equations here:
\begin{equation}
P(y_i|w) = \frac{P(y_i)P(w|y_j)}{P(w)}
\end{equation}
Where $ P(w) = 1/|D| $
######################################
Finally, from the probability that the document is in each class given the words, take the argument corresponding to the maximum value.
\begin{equation}
y(D) = argmax_{y \in Y} \frac{P(y|w)}{\sum_{j}P(y_j|w)}
\end{equation}
```python
def joint_likelihood(X, prior_prob, classes, n_classes, N_y, N_yi, n_features, alpha):
"""
Calculates the joint probability P(y_i|w) for each class and makes it probability.
Then take the argmax.
Input:
X: test data
prior_prob:
classes:
n_classes:
N_yi: count of a specific word in each unique class
N_y: total count of features for a specific class
n_features: it's the number of word in Vocabulary.
alpha: smoothing parameter
Output:
predicted_class: Predicted class of the documents. Int. Shape: (,#documents)
"""
samples, features = X.shape
predict_proba = np.zeros((samples,n_classes))
# Calculate Joint Likelihood of each row for each class, then normalize in order to make them probabilities
# Finally take the argmax to have the predicted class for each document
#####################################################
## YOUR CODE HERE ##
#####################################################
# Calculate the likelihood for each document and classes
likelihood = np.zeros((samples, n_classes))
for d in range(samples):
for class_ in classes:
likelihood[d][class_] = likelihood_(X[d], class_, N_y, N_yi, n_features, alpha)
# Calculate the joint likelihood
Pw = 1/samples
predicted_proba = (1/Pw)*(np.multiply(prior_prob,likelihood))
# normalize
predicted_proba = predicted_proba / np.sum(predicted_proba, axis = 1)[:, np.newaxis]
# argmax
predicted_class = np.argmax(predicted_proba, axis = 1)
return predicted_class
```
```python
yhat = joint_likelihood(X_test, prior_prob, classes, n_classes, N_y, N_yi, n_features, alpha)
```
#### Step4: Calculate the Accuracy Score
```python
print('Accuracy: ', np.round(accuracy_score(yhat, y_test),3))
```
Accuracy: 0.717
**Sanity Check**
Here we use a function from the sklearn library, one of the most widely used in machine learning. MultinomialNB() implements the required algorithm, so the result of your implementation should be equal to the output of the following function.
```python
from sklearn import naive_bayes
clf = naive_bayes.MultinomialNB(alpha=0.1)
clf.fit(X_train,y_train)
sk_y = clf.predict(X_test)
print('Accuracy: ', np.round(accuracy_score(sk_y, y_test),3))
```
Accuracy: 0.717
| 88a7ad252b42273fe9d4de37d91d5f93ed6269dc | 408,962 | ipynb | Jupyter Notebook | FDS_2_Giulio.ipynb | obster991/FDS---HW2 | e9c85adc0f194a8d0973442c2ab18ea3a29d4803 | ["MIT"] | 1 | 2021-12-03T10:51:19.000Z | 2021-12-03T10:51:19.000Z | FDS_2_Giulio.ipynb | obster991/FDS-HW2 | e9c85adc0f194a8d0973442c2ab18ea3a29d4803 | ["MIT"] | null | null | null | FDS_2_Giulio.ipynb | obster991/FDS-HW2 | e9c85adc0f194a8d0973442c2ab18ea3a29d4803 | ["MIT"] | 1 | 2021-11-05T17:04:35.000Z | 2021-11-05T17:04:35.000Z | 163.978348 | 53,420 | 0.882911 | true | 12,525 | Qwen/Qwen-72B | 1. YES, 2. YES | 0.749087 | 0.890294 | 0.666908 | __label__eng_Latn | 0.957889 | 0.387782 |
<a href="https://colab.research.google.com/github/jmamath/UVS-Probabilite-Statistiques/blob/master/Risque_empirique_et_biais_d'un_estimateur.ipynb" target="_parent"></a>
## 0. Introduction
Learning objectives:
1. Be able to compute the quadratic risk of an estimator.
1. Be able to compute the bias of an estimator.
1. Reinforce the understanding that an estimator is a random variable.
Let's start by importing a few modules.
```
import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt
import seaborn as sns
# settings for seaborn plotting style
sns.set(color_codes=True)
# settings for seaborn plot sizes
sns.set(rc={'figure.figsize':(5,5)})
```
/usr/local/lib/python3.6/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
import pandas.util.testing as tm
## 1. Quadratic risk of an estimator
The quadratic risk is one of the crudest measures of an estimator's performance. However, it provides a first way to compare different estimators.
Let's consider the case where the dataset is drawn from a standard normal distribution $\mathcal{N}(0,1)$.
```
# Create a Standard Gaussian and sample from it
num_sample = 10000
# generate random numbers from N(0,1)
data_normal = norm.rvs(size=num_sample,loc=0,scale=1)
ax = sns.distplot(data_normal,
bins=100,
kde=True,
color='skyblue',
hist_kws={"linewidth": 20,'alpha':1})
ax.set(xlabel='Loi normale', ylabel='Fréquence')
# data = g.sample([num_sample]).numpy()
```
When only the mean is known, the maximum-likelihood estimator of the variance can be useful because it incorporates the value of the mean.
### Questions
* Implement the estimators of the mean and of the variance obtained from maximum likelihood, which we will denote $T_{n}$ and $S_{n}$
* Implement an empirical variance estimator $S_{n}^{2}$
* Implement a corrected empirical variance estimator $\hat S_{n}^{2}$
As a reminder, writing the normal distribution as $\mathcal{N}(\mu,\sigma^{2})$:
\begin{align}
T_{n} &= \frac{1}{n} \sum_{i=1}^{n} x_{i} \\
S_{n} &= \frac{1}{n} \sum_{i=1}^{n} (x_{i}-\mu)^{2}\\
S_{n}^{2} &= \frac{1}{n} \sum_{i=1}^{n} (x_{i}-T_{n})^{2}\\
\hat S_{n}^{2} &= \frac{1}{n-1} \sum_{i=1}^{n} (x_{i}-T_{n})^{2}
\end{align}
You will use the functions below, which you have to fill in.
```
def estimateur_moyenne_vraisemblance(data):
"""
Input:
data: Tableau de nombre: le jeu de données
Output:
moyenne: estimation de la moyenne
"""
length = len(data)
moyenne = 0
for i in range(length):
moyenne = moyenne + data[i]
return moyenne / length
def estimateur_variance_vraisemblance(data, moyenne):
"""
Input:
data: Tableau de nombre: le jeu de données
moyenne: la moyenne connue
Output:
var: estimation de la variance
"""
length = len(data)
variance = 0
for i in range(length):
variance = variance + (data[i]-moyenne)**2
return variance / length
def estimateur_variance_empirique(data):
"""
Input:
data: Tableau de nombre: le jeu de données
Output:
var: estimation de la variance
"""
length = len(data)
moyenne = estimateur_moyenne_vraisemblance(data)
variance = 0
for i in range(length):
variance = variance + (data[i]-moyenne)**2
return variance / length
def estimateur_variance_empirique_corrigee(data):
"""
Input:
data: Tableau de nombre: le jeu de données
Output:
var: estimation de la variance
"""
length = len(data)
moyenne = estimateur_moyenne_vraisemblance(data)
variance = 0
for i in range(length):
variance = variance + (data[i]-moyenne)**2
return variance / (length-1)
```
```
# generate random numbers from N(0,1)
num_sample = 5
data_normal = norm.rvs(size=num_sample, loc=0, scale=1)
# Estimations
moyenne = estimateur_moyenne_vraisemblance(data_normal)
var_vraisemblance = estimateur_variance_vraisemblance(data_normal, 0)
var_empirique = estimateur_variance_empirique(data_normal)
print("Moyenne: {}",moyenne)
print("Variance vraisemblance: ", var_vraisemblance)
print("Variance empirique: ", var_empirique)
```
Moyenne: {} -0.16555148152533544
Variance vraisemblance: 0.1391702385661771
Variance empirique: 0.11176294553094361
Since we know the true parameter values, we can compute the quadratic risk of our estimators:
* Recall the formula for the quadratic risk.
* Run the following code cell to estimate the mean of the sample `data_normal` with $T_{n}$ and its variance with $S_{n}$, $S_{n}^{2}$ and $\hat S_{n}^{2}$ respectively.
* Compute the quadratic risk associated with the estimators $T_{n}, S_{n}, S_{n}^{2}, \hat S_{n}^{2}$ using the `risque_quad` function, which you have to fill in.
* Compare the different variance estimators. Which one has the lowest risk?
```
```
```
# 1 Generer des donnees aleatoires suivant une loi normale from N(0,1)
num_sample = 5
data_normal = norm.rvs(size=num_sample, loc=0, scale=1)
# 2. Obtenir une estimation de la variance avec chaque estimateur
variance_vraisemblance = estimateur_variance_vraisemblance(data_normal, 0)
variance_empirique = estimateur_variance_empirique(data_normal)
variance_empirique_corrigee = estimateur_variance_empirique_corrigee(data_normal)
# 3. Calculer le risque empirique pour chaque estimateur
def risque_quad(estimation, parametre):
"""
Input:
estimation: Float, estimation donnee par un estimateur
parametre: Float, la veritable valeur du parametre
Return:
risque_quadratique: Float. Risque quadratique de l'estimateur
"""
return (estimation-parametre)**2
# On connait la variance de nos donnees: 1
risque_var_empirique = risque_quad(variance_empirique, 1)
risque_var_vraisemblance = risque_quad(variance_vraisemblance, 1)
risque_var_empirique_corrige = risque_quad(variance_empirique_corrigee, 1)
print("Risque quadratique variance empirique", risque_var_empirique)
print("Risque quadratique variance vraisemblance", risque_var_vraisemblance)
print("Risque quadratique variance empirique corrige", risque_var_empirique_corrige)
```
Risque quadratique variance empirique 0.7837840635598172
Risque quadratique variance vraisemblance 0.77402836881981
Risque quadratique variance empirique corrige 0.7338402250781214
When the mean of the data is known, we can use $S_{n}$ to estimate the variance. Otherwise, we must use estimators that rely only on the data, such as $S_{n}^{2}$ and $\hat S_{n}^{2}$.
## 2. Bias and variance of an estimator
We now want to compute the bias of an estimator. We will focus on variance estimators, because we have seen that the empirical variance estimator is biased. We will illustrate what this means.
In what follows we keep considering the random variable $X \sim \mathcal{N}(0,1)$.
### Question
* Recall the definition of the bias of an estimator
* Fill in the code of `esperance_estimateur` below to compute the expectation of an estimator; you will use this function to compute the bias.
```
def esperance_estimateur(data, estimateur, taille_echantillon, iteration):
"""
Input:
data: Jeux de données complet
estimateur: Estimateur à évaluer
taille_echantillon: nombre d'éléments à tirer dans le jeu de données pour effectuer une estimation
iteration: Nombre d'itération pour estimer l'espérance.
Output:
biais: le biais de l'estimateur
"""
estimation = 0
for i in range(iteration):
echantillon = np.random.choice(data, taille_echantillon)
estimation = estimation + estimateur(echantillon)
esperance_estimateur = estimation / iteration
return esperance_estimateur
```
We are going to compare several variance estimators; in what follows, the subscript denotes the number of samples used for each estimate.
* Compare the bias of the empirical variance $S_{10}^{2}$ and of the corrected empirical variance $\hat S_{10}^{2}$.
* Compare the bias of the empirical variance $S_{100}^{2}$ and of the corrected empirical variance $\hat S_{100}^{2}$.
What do you observe?
```
# Create a Standard Gaussian and sample from it
num_sample = 10000
# generate random numbers from N(0,1)
data_normal = norm.rvs(size=num_sample,loc=0,scale=1)
```
```
esperance_var_emp = esperance_estimateur(data_normal, estimateur_variance_empirique, 15, 50)
esperance_var_corr = esperance_estimateur(data_normal, estimateur_variance_empirique_corrigee, 15, 50)
```
0.9324079338421366
```
print("Biais estimateur variance empirique", (1-esperance_var_emp))
print("Biais estimateur variance empirique corrigee", 1-esperance_var_corr)
```
Biais estimateur variance empirique 0.0675920661578634
Biais estimateur variance empirique corrigee -0.02453540939903709
## 3. Outlook
We have seen the notion of the bias of an estimator; next we will study the convergence properties of estimators. We will illustrate the convergence of an estimator, and thus see why big data often yields better estimators.
```
```
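As a starting point, here is a minimal sketch (an addition to the notebook) of the convergence mentioned above: the corrected empirical variance computed on larger and larger samples gets closer to the true variance of 1. It reuses `norm` and `estimateur_variance_empirique_corrigee` defined earlier.
```
# Illustration: the corrected empirical variance approaches the true variance (1)
# as the sample size grows.
for n in [10, 100, 1000, 10000]:
    echantillon = norm.rvs(size=n, loc=0, scale=1)
    print(n, estimateur_variance_empirique_corrigee(echantillon))
```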
| d412f60e8d115da1b72ddc074736c6ea6d73f4de | 33,249 | ipynb | Jupyter Notebook | Risque_empirique_et_biais_d'un_estimateur.ipynb | jmamath/UVS-Probabilite-Statistiques | b5874aa10a13d4ad6893cedad1c56e63f8caa7df | ["MIT"] | null | null | null | Risque_empirique_et_biais_d'un_estimateur.ipynb | jmamath/UVS-Probabilite-Statistiques | b5874aa10a13d4ad6893cedad1c56e63f8caa7df | ["MIT"] | null | null | null | Risque_empirique_et_biais_d'un_estimateur.ipynb | jmamath/UVS-Probabilite-Statistiques | b5874aa10a13d4ad6893cedad1c56e63f8caa7df | ["MIT"] | 1 | 2021-03-26T21:36:50.000Z | 2021-03-26T21:36:50.000Z | 62.264045 | 14,754 | 0.71921 | true | 2,733 | Qwen/Qwen-72B | 1. YES, 2. YES | 0.897695 | 0.808067 | 0.725398 | __label__fra_Latn | 0.870318 | 0.523675 |
# Basics of Differentiation
## The relationship between differentiation and function minimization
In the previous chapter we mentioned that differentiation is useful for minimizing an objective function. In this section, let's first build an intuitive understanding using a concrete example.
As an example, consider the problem of finding where a function whose graph dips downward, as in the figure below, attains its minimum value.
Consider the straight line that touches the graph of this function at some point $\theta_{1}$ (the **tangent line**) ([Note 1](#note1)).
Suppose the slope of this tangent line is $+3$, a positive value. Then the tangent line gets higher as we move to the right and lower as we move to the left.
Near $\theta_{1}$, the graph of the function and the tangent line are so close that they are almost indistinguishable.
Therefore, just like the tangent line, the function increases as we move to the right of $\theta_{1}$ and decreases as we move to the left.
Next, consider the tangent line at another point $\theta_{2}$ on the graph.
This time suppose the slope of the tangent line is $-1$. Since the slope is negative, the tangent line slopes downward to the right.
Because the graph is very close to the tangent line, the graph also slopes downward to the right near $\theta_{2}$.
Finally, consider the point $\theta_{3}$ located exactly at the bottom of the valley of the graph.
As the figure shows, the function attains its minimum value at this point.
At the same time, the tangent line touching the graph at $\theta_{3}$ is horizontal, i.e. its **slope is 0**.
Summarizing the observations so far, the slope of the tangent line and the behavior of the function are related as follows.
- If the slope of the tangent line at a point is positive, the graph rises to the right near that point (= it gets lower as we move left)
- If the slope of the tangent line at a point is negative, the graph falls to the right near that point (= it gets lower as we move right)
- The slope of the tangent line at a point where the function attains its minimum is $0$
In other words, when searching for the point that minimizes a function, the points where the slope of the tangent line is $0$ are candidates for the answer ([Note 2](#note2)).
As explained in this chapter, **differentiation allows us to compute the slope of the tangent line**.
This shows that differentiation is a useful tool for function minimization problems.
In the following, we introduce the definition of the derivative and related formulas. We also cover differentiation of functions with multiple input variables (partial differentiation).
## Slope of the line through two points
To investigate the relationship between the slope of the tangent line and the derivative, we first consider the problem of finding the slope of the line passing through two points.
Since the slope of a line is "the change in $y$ (vertical) divided by the change in $x$ (horizontal)", the slope $a$ of the line in the figure above is given by
$$
a = \dfrac{f(x_{2}) - f(x_{1})}{x_{2}-x_{1}}
$$
## Slope of the tangent line
To find the slope of the tangent line at the point $x_1$ in the figure above, we move the other point $x_2$ closer and closer to $x_1$. However, if $x_1$ and $x_2$ become exactly equal, the changes in both the $x$ and $y$ directions become $0$, and the slope cannot be computed as $0/0$.
We therefore need to see how the slope of the line behaves as $x_2$ approaches $x_1$ arbitrarily closely while the two points remain distinct. To express this mathematically, we need the notion of a **limit**.
With a limit, we ask what value a function of a variable approaches as that variable approaches some value arbitrarily closely.
For a function $f$, the value that $f(h)$ approaches as the variable $h$ approaches $a$ is written using the symbol $\lim$ as
$$
\displaystyle \lim _{h\rightarrow a} f(h)
$$
For example, as $h$ gets arbitrarily close to $0$, $3h$ also gets arbitrarily close to $0$.
Therefore
$$
\displaystyle \lim _{h\rightarrow 0} 3h=0
$$
More generally, for a natural number $n$ and a constant $c$,
$$
\displaystyle \lim _{h\rightarrow a} c h^n=c a^n
$$
holds. From this alone, you might think that $\lim _{h\rightarrow a} f(h)$ is simply the value $f(a)$ obtained by substituting $a$ for $h$. However, for the slope of the line we are considering, such a substitution produces the $0/0$ form seen above, so the limit cannot be obtained by simple substitution.
We will explain how to compute limits in more detail later.
Now let's find the slope $a$ of the tangent line at a point $x$ as in the figure below.
By combining the formula for the slope of the line through two points with a limit, we can obtain the slope of the tangent line.
First, consider the point $x + h$, a distance $h$ away from $x$, and compute the slope of the line through the two points.
Then, by letting $h$ approach $0$, we obtain the tangent line touching the graph at the single point $x$.
In formulas,
$$
\begin{aligned}
a
&= \lim_{h \rightarrow 0} \frac{f(x + h) - f(x)}{(x + h) - x} \\
&= \lim_{h \rightarrow 0} \frac{f(x + h) - f(x)}{h} \\
\end{aligned}
$$
Looking closely, this expression determines a single value for each choice of $x$; in other words, it is a function of $x$ (note that it is not a function of $h$, because we take the limit $h\to 0$).
This expression is called the **derivative** of $f$ and is written $f'(x)$. That is,
$$
f'(x)= \lim_{h \rightarrow 0} \frac{f(x + h) - f(x)}{h}
$$
Finding the derivative is called **differentiation**.
As for notation, $f'(x)$ may also be written as
$$
\frac{d}{dx}f (x)
$$
or
$$
\frac{df}{dx} (x)
$$
Either notation is acceptable.
The symbol $d$ represents taking an increment; for example, $dx$ denotes the change in $x$.
Although this notation is more verbose, it makes the expression precise when there are several variables such as $x$ and $y$, because it makes explicit which variable we differentiate with respect to.
## Differentiation formulas
The most straightforward way to compute the derivative of a function is to calculate it directly from the definition.
However, by combining the formulas introduced below, we can compute the derivatives of complicated functions without going back to the definition each time.
In the following, $c$ denotes a constant and $x$ a variable.
First, we introduce the following three formulas.
$$
\begin{align}
\left( c\right)^{'} &= 0 \\
\left( x\right)^{'} &= 1\\
\left( x^{2}\right)^{'} &= 2x
\end{align}
$$
The left-hand side of the first formula is the derivative of the constant function $f(x) = c$ with respect to $x$.
The graph of a constant function is parallel to the $x$ axis, so the tangent line at any point has slope $0$ (the graph and the tangent line are the same line). This means the derivative is $0$, which is the geometric interpretation of the first formula.
Similarly, the second formula means that for the graph of $y=x$ the tangent line at any point has slope $1$ (here too the graph and the tangent line coincide).
To get used to working with limits, let's prove the third formula. From the definition of the derivative, the left-hand side is
$$
\displaystyle \lim _{h\to 0} \frac{(x+h)^2 - x^2}{(x+h) - x}
$$
This can be computed as
$$
\begin{align}
\lim _{h\to 0} \frac{(x+h)^2 - x^2}{(x+h) - x} = \lim _{h\to 0} \frac{2xh + h^2}{h} = \lim _{h\to 0} (2x + h)
\end{align}
$$
Letting $h\to 0$, that is, making $h$ arbitrarily close to $0$, $2x + h$ approaches $2x$. This establishes $\left( x^{2}\right)^{'} = 2x$.
The following formulas are also used frequently.
$$
\begin{align}
\left( x^{n} \right)^{'} &= nx^{n-1} \\
\left( e^{ax} \right)^{'} &= ae^{ax}
\end{align}
$$
Here $a$ is a constant and $e$ is a special constant called the **base of the natural logarithm**, or **Napier's constant**, approximately $2.71828\cdots$. Note that setting $n=1, 2$ in the first formula recovers the formulas for the derivatives of $x$ and $x^2$ introduced at the beginning of this section.
$e^{ax}$ is often written as $\exp(ax)$. With this notation, the last formula can also be written as
$$
\bigl( \exp(ax) \bigr)^{'} = a\exp(ax)
$$
## Linearity
Differentiation has a property called **linearity**.
Let's see what this property means through concrete examples.
Thanks to linearity,
$$
(3x)' = 3 \times (x)'
$$
a constant factor can be pulled outside the differentiation, as above.
Also, as in
$$
\left( 3x^{2} + 4x - 5 \right)' = \left( 3x^{2} \right)' + \left( 4x \right)' - \left( 5 \right)'
$$
additions and subtractions can be differentiated term by term.
These two properties together are called linearity.
Let's practice computing derivatives a little more.
$$
\begin{aligned}
\left( 3x^{2} + 4x + 5 \right)' &= \left( 3x^{2} \right)' + \left( 4x \right)' + \left( 5 \right)' \\
&= 3 \times \left( x^{2} \right)' + 4 \times \left( x \right)' + 5 \times \left( 1 \right)' \\
&= 3 \times 2x + 4 \times 1 + 5 \times 0 \\
&= 6x + 4
\end{aligned}
$$
Linearity can be summarized by the following formulas.
$$
\begin{align}
\left( cf(x) \right)^{'} &= c f'(x) \\
\left( f(x) + g(x) \right)^{'} &= f^{'}(x) + g^{'}(x) \\
\end{align}
$$
For a function written as a product of two functions, the following formula (the product rule) holds.
$$
\bigl( f(x) g(x) \bigr)^{'} = f^{'}(x)g(x) + f(x)g^{'}(x)
$$
If we know the derivative of the function $f$ and the derivative of the function $g$, we can compute the derivative of the product $fg$.
## Differentiating composite functions
The **composition** of the functions $y = f(x)$ and $z = g(y)$ is the function obtained by applying $f$ first and then $g$, that is, $z = g(f(x))$.
The neural networks used in deep learning stack many layers to represent complicated functions. Viewing each layer as one function, a neural network can be seen as a **composite function** built from many functions (layers).
When differentiating composite functions, the formula introduced next (the rule for differentiating composite functions) is useful.
This formula is also called the **chain rule**.
The chain rule is not just a formula for conveniently differentiating composite functions; it also plays an essential role in understanding backpropagation, the method used to train neural networks.
As a simple example, consider computing
$$
\left\{ (3x + 4)^{2} \right\}'
$$
This expression consists of an inner part, $3x+4$, and an outer part, $(\cdot)^{2}$.
We could expand it as $(9x^2 + 24x + 16)'$ and then differentiate, but expanding becomes tedious as the exponent grows to cubes, fourth powers, and so on.
This is where the differentiation of composite functions becomes useful.
The derivative of a composite function is obtained by differentiating the inner part and the outer part separately and multiplying the results.
When differentiating the outer part, we regard its argument as the input and differentiate with respect to that input.
Now let's differentiate the function $(3x+4)^2$ concretely.
First, setting the inner function to $u = (3x+4)$, we view the expression as
$$
\left\{ (3x + 4)^{2} \right\}' = (u^{2})'
$$
At this point we need to treat the notation $(\cdot)'$ a little more carefully.
Since there are now two variables, $x$ and $u$, the notation $(\cdot)'$ does not tell us whether we are differentiating with respect to $x$ or to $u$.
So, although it looks a bit more involved, we use the $d$ notation introduced earlier to make explicit which variable we differentiate with respect to.
The rule for differentiating composite functions can be summarized as the following formula.
$$
\frac{d}{dx} f(g(x)) = \frac{df(u)}{du}\frac{du}{dx}
$$
Here $u = g(x)$.
It may be easier to understand by looking at an actual application rather than at the formula itself.
Using the chain rule, the derivative of $(3x+4)^2$ is computed as follows.
Note that the chain rule is applied in the second line.
$$
\begin{aligned}
\left\{ (3x + 4)^{2} \right\}' &= \frac{d}{dx} \left\{ (3x + 4)^{2} \right\} \\
&= \frac{du}{dx} \frac{d}{du} (u^2) \\
&= \frac{d}{dx} (3x + 4) \cdot \frac{d}{du} (u^{2}) \\
&= 3 \cdot 2u \\
&= 6u = 6(3x + 4) = 18x + 24 \\
\end{aligned}
$$
If you are curious, check that this matches the result obtained by expanding $(3x + 4)^2$ first and then differentiating term by term.
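As a quick check (an addition to the original text), the same result can be obtained with SymPy:
```python
import sympy as sym

x = sym.Symbol('x')
print(sym.expand(sym.diff((3*x + 4)**2, x)))  # 18*x + 24
```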
## Partial derivatives
In machine learning it is rare to predict an output variable $y$ from a single input variable $x$; in most cases we work with **multivariable functions** that predict $y$ from several input variables $x_1, x_2, \dots, x_M$.
For example, when predicting rent, we expect a more accurate prediction by considering not only the size of the room but also the distance to the station, the crime rate of the neighborhood, and so on.
A function $f(x_1, x_2, \dots, x_M)$ that takes several inputs $x_1, x_2, \dots, x_M$ is called a multivariable function.
Differentiating such a multivariable function while focusing on a single input $x_m$ is called **partial differentiation** and is written
$$
\frac{\partial}{\partial x_{m}} f(x_1, x_2, \dots, x_M)
$$
The symbol for differentiation changes from $d$ to $\partial$. With this notation, $\frac{\partial}{\partial x_m}$ means treating every variable other than $x_m$ as a constant and differentiating with respect to $x_m$ only ([Note 3](#note3)).
Let's check the concrete flow of the computation in the following example.
$$
\begin{aligned}
\frac{\partial}{\partial x_1}
\left( 3x_1+4x_2 \right)
&= \frac{\partial}{\partial x_1}
\left( 3x_1 \right) + \frac{\partial}{\partial x_1} \left( 4x_2 \right) \\
&= 3 \times \frac{\partial}{\partial x_1} \left( x_1 \right) + 4 \times \frac{\partial}{\partial x_1} x_2 \\
&= 3 \times 1 + 4 \times 0 \\
&= 3
\end{aligned}
$$
The same formulas as for ordinary differentiation apply to partial differentiation. In this case we focus only on $x_1$ and treat $x_2$ as a constant, which is why the partial derivative of $x_2$ with respect to $x_1$ is set to $0$ between the second and third lines above (recall that the derivative of a constant is $0$).
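This too can be verified with SymPy (an addition to the original text):
```python
import sympy as sym

x1, x2 = sym.symbols('x1 x2')
print(sym.diff(3*x1 + 4*x2, x1))  # 3
```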
<hr />
<div class="alert alert-info">
**Note 1**
For the graphs of the functions considered here, we assume that exactly one tangent line can be drawn at every point of the graph. If, for example, the graph had a sharp corner at the bottom of the valley, several tangent lines could be drawn there. We do not consider such cases here; imagine that the graph is a smooth curve as in the figure.
[▲ Back to top](#ref_note1)
</div>
<div class="alert alert-info">
**Note 2**
Here we considered a "valley" of the graph, but the slope of the tangent line is also $0$ at a "peak", so a point where the tangent slope is $0$ does not necessarily give the minimum of the function.
[▲ Back to top](#ref_note2)
</div>
<div class="alert alert-info">
**Note 3**
If an input variable is not independent of the other input variables, it cannot be treated as a constant. Such cases do not appear in this material, however.
[▲ Back to top](#ref_note3)
</div>
| d3008eb4d659f81e4d9477883afbb7af1a2d417d | 13,282 | ipynb | Jupyter Notebook | ja/04_Basics_of_Differential_ja.ipynb | Kazumaitani/tutorials | df1a9020f5fb347d784aed5a234b0f10703fef03 | ["BSD-3-Clause"] | null | null | null | ja/04_Basics_of_Differential_ja.ipynb | Kazumaitani/tutorials | df1a9020f5fb347d784aed5a234b0f10703fef03 | ["BSD-3-Clause"] | null | null | null | ja/04_Basics_of_Differential_ja.ipynb | Kazumaitani/tutorials | df1a9020f5fb347d784aed5a234b0f10703fef03 | ["BSD-3-Clause"] | null | null | null | 30.888372 | 180 | 0.522587 | true | 6,547 | Qwen/Qwen-72B | 1. YES, 2. YES | 0.833325 | 0.721743 | 0.601446 | __label__jpn_Jpan | 0.621504 | 0.235692 |
```python
from sympy.physics.mechanics import *
from sympy import symbols, atan, cos, Matrix
```
```python
q = dynamicsymbols('q:2')
qd = dynamicsymbols('q:2', level=1)
l = symbols('l:2')
m = symbols('m:2')
g, t = symbols('g, t')
```
```python
# Compose World Frame
N = ReferenceFrame('N')
A = N.orientnew('A', 'axis', [q[0], N.z])
B = N.orientnew('B', 'axis', [q[1], N.z])
A.set_ang_vel(N, qd[0] * N.z)
B.set_ang_vel(N, qd[1] * N.z)
```
```python
O = Point('O')
P = O.locatenew('P', l[0] * A.x)
R = P.locatenew('R', l[1] * B.x)
```
```python
O.set_vel(N, 0)
P.v2pt_theory(O, N, A)
R.v2pt_theory(P, N, B)
```
$\displaystyle l_{0} \dot{q}_{0}\mathbf{\hat{a}_y} + l_{1} \dot{q}_{1}\mathbf{\hat{b}_y}$
```python
ParP = Particle('ParP', P, m[0])
ParR = Particle('ParR', R, m[1])
```
```python
FL = [(P, m[0] * g * N.x), (R, m[1] * g * N.x)]
```
```python
# Calculate the lagrangian, and form the equations of motion
Lag = Lagrangian(N, ParP, ParR)
LM = LagrangesMethod(Lag, q, forcelist=FL, frame=N)
lag_eqs = LM.form_lagranges_equations()
```
```python
lag_eqs
```
$\displaystyle \left[\begin{matrix}g l_{0} m_{0} \sin{\left(\operatorname{q_{0}}{\left(t \right)} \right)} + g l_{0} m_{1} \sin{\left(\operatorname{q_{0}}{\left(t \right)} \right)} + l_{0}^{2} m_{0} \frac{d^{2}}{d t^{2}} \operatorname{q_{0}}{\left(t \right)} + l_{0}^{2} m_{1} \frac{d^{2}}{d t^{2}} \operatorname{q_{0}}{\left(t \right)} + l_{0} l_{1} m_{1} \left(\sin{\left(\operatorname{q_{0}}{\left(t \right)} \right)} \sin{\left(\operatorname{q_{1}}{\left(t \right)} \right)} + \cos{\left(\operatorname{q_{0}}{\left(t \right)} \right)} \cos{\left(\operatorname{q_{1}}{\left(t \right)} \right)}\right) \frac{d^{2}}{d t^{2}} \operatorname{q_{1}}{\left(t \right)} - l_{0} l_{1} m_{1} \left(- \sin{\left(\operatorname{q_{0}}{\left(t \right)} \right)} \cos{\left(\operatorname{q_{1}}{\left(t \right)} \right)} + \sin{\left(\operatorname{q_{1}}{\left(t \right)} \right)} \cos{\left(\operatorname{q_{0}}{\left(t \right)} \right)}\right) \frac{d}{d t} \operatorname{q_{0}}{\left(t \right)} \frac{d}{d t} \operatorname{q_{1}}{\left(t \right)} + l_{0} l_{1} m_{1} \left(- \sin{\left(\operatorname{q_{0}}{\left(t \right)} \right)} \cos{\left(\operatorname{q_{1}}{\left(t \right)} \right)} \frac{d}{d t} \operatorname{q_{0}}{\left(t \right)} + \sin{\left(\operatorname{q_{0}}{\left(t \right)} \right)} \cos{\left(\operatorname{q_{1}}{\left(t \right)} \right)} \frac{d}{d t} \operatorname{q_{1}}{\left(t \right)} + \sin{\left(\operatorname{q_{1}}{\left(t \right)} \right)} \cos{\left(\operatorname{q_{0}}{\left(t \right)} \right)} \frac{d}{d t} \operatorname{q_{0}}{\left(t \right)} - \sin{\left(\operatorname{q_{1}}{\left(t \right)} \right)} \cos{\left(\operatorname{q_{0}}{\left(t \right)} \right)} \frac{d}{d t} \operatorname{q_{1}}{\left(t \right)}\right) \frac{d}{d t} \operatorname{q_{1}}{\left(t \right)}\\g l_{1} m_{1} \sin{\left(\operatorname{q_{1}}{\left(t \right)} \right)} + l_{0} l_{1} m_{1} \left(\sin{\left(\operatorname{q_{0}}{\left(t \right)} \right)} \sin{\left(\operatorname{q_{1}}{\left(t \right)} \right)} + \cos{\left(\operatorname{q_{0}}{\left(t \right)} \right)} \cos{\left(\operatorname{q_{1}}{\left(t \right)} \right)}\right) \frac{d^{2}}{d t^{2}} \operatorname{q_{0}}{\left(t \right)} - l_{0} l_{1} m_{1} \left(\sin{\left(\operatorname{q_{0}}{\left(t \right)} \right)} \cos{\left(\operatorname{q_{1}}{\left(t \right)} \right)} - \sin{\left(\operatorname{q_{1}}{\left(t \right)} \right)} \cos{\left(\operatorname{q_{0}}{\left(t \right)} \right)}\right) \frac{d}{d t} \operatorname{q_{0}}{\left(t \right)} \frac{d}{d t} \operatorname{q_{1}}{\left(t \right)} + l_{0} l_{1} m_{1} \left(- \sin{\left(\operatorname{q_{0}}{\left(t \right)} \right)} \cos{\left(\operatorname{q_{1}}{\left(t \right)} \right)} \frac{d}{d t} \operatorname{q_{0}}{\left(t \right)} + \sin{\left(\operatorname{q_{0}}{\left(t \right)} \right)} \cos{\left(\operatorname{q_{1}}{\left(t \right)} \right)} \frac{d}{d t} \operatorname{q_{1}}{\left(t \right)} + \sin{\left(\operatorname{q_{1}}{\left(t \right)} \right)} \cos{\left(\operatorname{q_{0}}{\left(t \right)} \right)} \frac{d}{d t} \operatorname{q_{0}}{\left(t \right)} - \sin{\left(\operatorname{q_{1}}{\left(t \right)} \right)} \cos{\left(\operatorname{q_{0}}{\left(t \right)} \right)} \frac{d}{d t} \operatorname{q_{1}}{\left(t \right)}\right) \frac{d}{d t} \operatorname{q_{0}}{\left(t \right)} + l_{1}^{2} m_{1} \frac{d^{2}}{d t^{2}} \operatorname{q_{1}}{\left(t \right)}\end{matrix}\right]$
```python
lag_eqs.simplify()
```
```python
lag_eqs
```
$\displaystyle \left[\begin{matrix}l_{0} \left(g m_{0} \sin{\left(\operatorname{q_{0}}{\left(t \right)} \right)} + g m_{1} \sin{\left(\operatorname{q_{0}}{\left(t \right)} \right)} + l_{0} m_{0} \frac{d^{2}}{d t^{2}} \operatorname{q_{0}}{\left(t \right)} + l_{0} m_{1} \frac{d^{2}}{d t^{2}} \operatorname{q_{0}}{\left(t \right)} + l_{1} m_{1} \sin{\left(\operatorname{q_{0}}{\left(t \right)} - \operatorname{q_{1}}{\left(t \right)} \right)} \left(\frac{d}{d t} \operatorname{q_{1}}{\left(t \right)}\right)^{2} + l_{1} m_{1} \cos{\left(\operatorname{q_{0}}{\left(t \right)} - \operatorname{q_{1}}{\left(t \right)} \right)} \frac{d^{2}}{d t^{2}} \operatorname{q_{1}}{\left(t \right)}\right)\\l_{1} m_{1} \left(g \sin{\left(\operatorname{q_{1}}{\left(t \right)} \right)} - l_{0} \sin{\left(\operatorname{q_{0}}{\left(t \right)} - \operatorname{q_{1}}{\left(t \right)} \right)} \left(\frac{d}{d t} \operatorname{q_{0}}{\left(t \right)}\right)^{2} + l_{0} \cos{\left(\operatorname{q_{0}}{\left(t \right)} - \operatorname{q_{1}}{\left(t \right)} \right)} \frac{d^{2}}{d t^{2}} \operatorname{q_{0}}{\left(t \right)} + l_{1} \frac{d^{2}}{d t^{2}} \operatorname{q_{1}}{\left(t \right)}\right)\end{matrix}\right]$
```python
```
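A minimal sketch (not part of the original notebook) of how the equations of motion above might be integrated numerically. It assumes the `LM`, `q`, `qd`, `l`, `m` and `g` objects defined earlier; the parameter values and initial conditions are arbitrary placeholders, and details may need adjusting depending on the SymPy version.
```python
from sympy import lambdify
import numpy as np
from scipy.integrate import odeint

MM = LM.mass_matrix_full      # mass matrix of the full first-order system
FF = LM.forcing_full          # corresponding forcing vector

parameters = {l[0]: 1.0, l[1]: 1.0, m[0]: 1.0, m[1]: 1.0, g: 9.81}  # placeholder values
state = q + qd                # state vector [q0, q1, q0', q1']
MM_func = lambdify(state, MM.subs(parameters))
FF_func = lambdify(state, FF.subs(parameters))

def rhs(y, t):
    # solve MM * y' = FF for the state derivatives
    return np.linalg.solve(MM_func(*y), FF_func(*y)).flatten()

y0 = [np.pi / 2, np.pi / 2, 0.0, 0.0]          # initial angles and angular rates
trajectory = odeint(rhs, y0, np.linspace(0.0, 10.0, 1000))
```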
| e447edf79e05251a5242710b90e96719ab84f0f0 | 10,178 | ipynb | Jupyter Notebook | InvertedPendulumCart/triplePendulum/double_pendulum.ipynb | Chachay/PythonRobotics | d9114af5637295c73ce8eec511baec38a819af53 | ["MIT"] | null | null | null | InvertedPendulumCart/triplePendulum/double_pendulum.ipynb | Chachay/PythonRobotics | d9114af5637295c73ce8eec511baec38a819af53 | ["MIT"] | null | null | null | InvertedPendulumCart/triplePendulum/double_pendulum.ipynb | Chachay/PythonRobotics | d9114af5637295c73ce8eec511baec38a819af53 | ["MIT"] | 1 | 2021-08-03T10:55:52.000Z | 2021-08-03T10:55:52.000Z | 55.923077 | 3,763 | 0.525545 | true | 2,346 | Qwen/Qwen-72B | 1. YES, 2. YES | 0.919643 | 0.740174 | 0.680696 | __label__kor_Hang | 0.272501 | 0.419816 |
# Lab Manual
## Propagation of Uncertainties
***
### The Sophisticated Approach
We can use calculus to tell us how much a function varies if we change one of its parameters. For the following, we’ll assume that we are trying to compute some value, Y , using the inputs A, B, and C (and occasionally a fourth input, D). This could be the perimeter of a rectangle, in which case the perimeter, Y , is the sum of four variables:
$$Y(A, B, C, D) = A + B + C + D$$
In the case of a volume measurement, the volume, Y , would be the product of length, width, and height:
$$Y(A, B, C) = ABC$$
The uncertainties in our measured values will be given by δA, δB, and δC. The uncertainty in the
computed value is, then, δY .
If we have a function which is the sum or difference of a number of variables,
$$Y = A \pm B \pm C$$
Then we can compute the uncertainty in Y with the following formula:
\begin{equation}
(\delta Y)^2 = (\delta A)^2 + (\delta B)^2 + (\delta C)^2
\tag{R1}
\end{equation}
If the function is some kind of product of our three variables,
$$ Y = ABC \quad \textrm{or} \quad Y = \frac{AB}{C} \quad \textrm{or} \quad Y = \frac{A}{BC} \quad \textrm{or} \quad Y = \frac{1}{ABC}$$
Then we can compute the uncertainty in Y via:
\begin{equation}
(\frac{\delta Y}{Y})^2 = (\frac{\delta A}{A})^2 + (\frac{\delta B}{B})^2 + (\frac{\delta C}{C})^2
\tag{R2}
\end{equation}
Finally, if our function is a power of one of our variables (for example, the area of a circle),
$$Y = A^\alpha$$
Then we can compute the uncertainty by
\begin{equation}
\frac{\delta Y}{Y} = |\alpha| \frac{\delta A}{A}
\tag{R3}
\end{equation}
Let’s try our volume example. Because V = xyz, we use ([R2](#mjx-eqn)),
\begin{align*}
\frac{\delta V}{V} &= \sqrt{\left(\frac{\delta x}{x}\right)^2 + \left(\frac{\delta y}{y}\right)^2 + \left(\frac{\delta z}{z}\right)^2} \\
&= \sqrt{\left(\frac{0.5}{100}\right)^2 + \left(\frac{0.5}{50}\right)^2 + \left(\frac{0.5}{20}\right)^2} \\
\frac{\delta V}{1 \times 10^5} &= 0.027386 \\
\delta V &= 2738.6
\end{align*}
Therefore we may write
$$V = (1.000 \pm 0.027) \times 10^5 \ \textrm{cm}^3 \quad \textrm{or} \quad V = (0.1000 \pm 0.0027) \ \textrm{m}^3$$
Or, in more modern notation,
$$V = 1.000(27) \times 10^5 \ \textrm{cm}^3 \quad \textrm{or} \quad V = 0.1000(27) \ \textrm{m}^3$$
Although this method seems complicated, it gives the best estimate of our uncertainties. After you’ve used it a few times, it will become second nature.
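Because this manual accompanies a computational course, here is a short Python sketch (an addition; the side lengths 100 cm, 50 cm and 20 cm with ±0.5 cm uncertainties are taken from the example above) that reproduces the calculation:
```python
import numpy as np

x, y, z = 100.0, 50.0, 20.0   # cm, measured side lengths from the example
dx = dy = dz = 0.5            # cm, uncertainty on each measurement

V = x * y * z
frac_dV = np.sqrt((dx / x)**2 + (dy / y)**2 + (dz / z)**2)
print(f"V = {V:.4g} cm^3, delta V = {V * frac_dV:.4g} cm^3")  # about 1e5 and 2.7e3
```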
### Combining Formulae
What happens if we have something like the volume of a cylinder:
$$ V = \pi r^2 h$$
Here we must work with both ([R3](#mjx-eqn)) and ([R2](#mjx-eqn)). The key is to look at the formula as the product of $r^2$ and *h* ($\pi$ is exact and has no uncertainty). This requires us to use ([R2](#mjx-eqn)). But to get $\delta r^2$, we’ll need ([R3](#mjx-eqn)).
Let’s start with ([R2](#mjx-eqn)):
\begin{equation}
(\frac{\delta V}{V})^2 = (\frac{\delta r^2}{r^2})^2 + (\frac{\delta h}{h})^2
\nonumber
\end{equation}
Now, ([R3](#mjx-eqn)) tells us that
\begin{equation}
\frac{\delta \left(r^2 \right)}{r^2} = 2 \frac{\delta r}{r}
\nonumber
\end{equation}
So we can make the substitution:
\begin{equation}
\left(\frac{\delta V}{V}\right)^2 = \left(2 \frac{\delta r}{r} \right)^2 + \left(\frac{\delta h}{h}\right)^2
\nonumber
\end{equation}
Although it is tempting to multiply through by $V^2 = (\pi r^2 h)^2$ on both sides before we compute, it’s usually easier if we don’t. The fractional errors ($ \delta x/x$) are used in so many places that it makes sense to use those more often than not.
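A sketch of the same bookkeeping for the cylinder; the values of r, h and their uncertainties below are placeholders for illustration only:
```python
import numpy as np

r, dr = 2.0, 0.05    # cm, placeholder radius and its uncertainty
h, dh = 10.0, 0.1    # cm, placeholder height and its uncertainty

V = np.pi * r**2 * h
frac_dV = np.sqrt((2 * dr / r)**2 + (dh / h)**2)   # combine (R3) inside (R2)
print(f"V = {V:.4g} cm^3 +/- {V * frac_dV:.2g} cm^3")
```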
| 7b260cbd29ad0eaca2dfea61331f6e585a7925d7 | 5,275 | ipynb | Jupyter Notebook | LabManual.ipynb | JNichols-19/PhysicsLabs | 289cb0d07408afde252fe2cabad17fc0b4d987c8 | ["MIT"] | null | null | null | LabManual.ipynb | JNichols-19/PhysicsLabs | 289cb0d07408afde252fe2cabad17fc0b4d987c8 | ["MIT"] | null | null | null | LabManual.ipynb | JNichols-19/PhysicsLabs | 289cb0d07408afde252fe2cabad17fc0b4d987c8 | ["MIT"] | null | null | null | 36.37931 | 317 | 0.529858 | true | 1,202 | Qwen/Qwen-72B | 1. YES, 2. YES | 0.855851 | 0.870597 | 0.745102 | __label__eng_Latn | 0.983613 | 0.569453 |
<a href="https://colab.research.google.com/github/alirezash97/Time-frequency-analysis-course/blob/main/TayPaper/Tay2005.ipynb" target="_parent"></a>
```
import math
import numpy as np
import matplotlib.pyplot as plt
import sympy as sym
from sympy import *
```
```
def K(x):
output = []
for i in range(int((filter_length-1)/2)):
combination_statement = math.factorial(filter_length) / (math.factorial(i) * math.factorial(filter_length-i))
second_statement = np.multiply( np.power(x, i), np.power( (1-x), filter_length-i ))
final_statement = np.multiply(combination_statement, second_statement)
output.append(final_statement)
return np.sum(output)
######################################
```
```
def kl(x, l):
combination_statement = math.factorial(filter_length) / (math.factorial(l) * math.factorial(filter_length-l))
second_statement = np.multiply(np.power(x, l), np.power((1-x), (filter_length-l)))
tirth_statement = np.multiply( np.power(x, (filter_length-l)), np.power((1-x), l))
final_statement = np.multiply(combination_statement, (second_statement - tirth_statement))
return final_statement
#####################################
```
```
def B(x, alpha_list):
sigma = []
for l in range( Vanishing_moments, int((filter_length-1)/2) ):
sigma.append(np.multiply( kl(x, l), alpha_list[l]))
final_equation = K(x) - np.sum(sigma)
return final_equation
```
```
def main_function():
# inputs
global filter_length
global Vanishing_moments
filter_length = int(input("Please enter filter length: "))
Vanishing_moments = int(input("Please enter the number of vanishing moments: "))
while int(((filter_length-1)/2-Vanishing_moments)) %2 != 0:
Vanishing_moments = int(input("Please enter another number for vanishing moments: "))
else:
pass
global number_of_pin
number_of_pin = int(1/2*((filter_length - 1) /2-Vanishing_moments))
print("You have to choose %d"%number_of_pin, "pins")
global zero_pinning
zero_pinning = []
for i in range(number_of_pin):
temp = float(input("Enter %dth pin: " %(i+1)))
zero_pinning.append(temp)
#############
# create symbols
global alpha_list
alpha_list = []
for i in range(1, filter_length+1):
alpha_list.append(sym.symbols('alpha%d'%i))
global x_list
x_list = []
for i in range(len(zero_pinning)):
x_list.append(sym.symbols('x%d'%i))
#############
# create equations
global my_equations
my_equations = []
for i in range(len(x_list)):
Eq1 = sym.Eq(B(x_list[i], alpha_list), 0)
my_equations.append(Eq1)
Eq2 = sym.Eq(diff(B(x_list[i], alpha_list), x_list[i]))
my_equations.append(Eq2)
##############
# replace x with zero pinning values
global replaced_equations
replaced_equations = []
for i, equation in enumerate(my_equations):
replaced = equation.subs(x_list[math.floor(i/2)], zero_pinning[math.floor(i/2)])
replaced_equations.append(replaced)
###############
# find alphas using equations
global alpha_results
alpha_results = solve([i for i in replaced_equations], [j for j in alpha_list[Vanishing_moments : int((filter_length-1)/2)]])
###############
# plot
my_array = []
for key in alpha_results:
my_array.append(alpha_results[key])
alpha_values = np.zeros((len(alpha_list)))
alpha_values[Vanishing_moments : int((filter_length-1)/2)] = my_array
x = np.linspace(0, 1, num=100)
fx = []
for i in range(len(x)):
fx.append(B(x[i], alpha_values))
plt.plot(x, fx)
return alpha_values, alpha_results
```
```
alphas_list, alpha_results = main_function()
print(alpha_results)
```
```
alphas_list, alpha_results = main_function()
print(alpha_results)
```
```
alphas_list, alpha_results = main_function()
print(alpha_results)
```
```
# spectral factorization
z = sym.symbols('z')
spectoral_factorization = np.multiply(-1/4*z, np.power((1-np.power(z, -1)), 2))
based_on_z = B(spectoral_factorization, alphas_list)
print(based_on_z)
```
-0.000663369854565131*z**9*(1 - 1/z)**18*(0.25*z*(1 - 1/z)**2 + 1)**3 - 0.00613771873923674*z**8*(1 - 1/z)**16*(0.25*z*(1 - 1/z)**2 + 1)**4 + 3.5048497472446*z**4*(1 - 1/z)**8*(0.25*z*(1 - 1/z)**2 + 1)**8 - 0.720337075701222*z**3*(1 - 1/z)**6*(0.25*z*(1 - 1/z)**2 + 1)**9 + 4.125*z**2*(1 - 1/z)**4*(0.25*z*(1 - 1/z)**2 + 1)**10 - 3.0*z*(1 - 1/z)**2*(0.25*z*(1 - 1/z)**2 + 1)**11 + 1.0*(0.25*z*(1 - 1/z)**2 + 1)**12
```
```
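As a follow-up sketch (an addition to the original notebook), the expression `based_on_z` printed above can be evaluated on the unit circle to inspect the magnitude response of the designed product filter B(z); the plotting choices below are illustrative only.
```
# Evaluate B(z) on the unit circle z = exp(j*omega) and plot its magnitude.
B_func = sym.lambdify(z, based_on_z, modules='numpy')
omega = np.linspace(0, np.pi, 512)
response = B_func(np.exp(1j * omega))

plt.plot(omega / np.pi, np.abs(response))
plt.xlabel('normalized frequency (x pi rad/sample)')
plt.ylabel('|B(e^{j omega})|')
plt.show()
```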
| 3a4427b9d6ba658cb6a00491bb64f4d87a977e89 | 46,354 | ipynb | Jupyter Notebook | TayPaper/Tay2005.ipynb | alirezash97/Time-frequency-analysis-course | 9e326ba32a43d411a338e68611ef5e8e75b78fa3 | ["MIT"] | null | null | null | TayPaper/Tay2005.ipynb | alirezash97/Time-frequency-analysis-course | 9e326ba32a43d411a338e68611ef5e8e75b78fa3 | ["MIT"] | null | null | null | TayPaper/Tay2005.ipynb | alirezash97/Time-frequency-analysis-course | 9e326ba32a43d411a338e68611ef5e8e75b78fa3 | ["MIT"] | null | null | null | 127.69697 | 12,346 | 0.840726 | true | 1,353 | Qwen/Qwen-72B | 1. YES, 2. YES | 0.888759 | 0.803174 | 0.713828 | __label__eng_Latn | 0.391609 | 0.496793 |
## Data Analytics
### Bootstrap Confidence Intervals in Python
#### Michael Pyrcz, Associate Professor, University of Texas at Austin
##### [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1)
### Bootstrap Confidence Intervals in Python
Here's a simple workflow demonstrating the bootstrap for modeling workflows. This should help you get started with this important data analytics method for evaluating and integrating uncertainty in any sample statistic or model.
#### Bootstrap
The uncertainty in an estimated population parameter from a sample, represented as a range, lower and upper bound, based on a specified probability interval known as the **confidence level**.
* one source of uncertainty is the paucity of data.
* do 200 or even fewer sample data provide a precise (and accurate) estimate of the mean? standard deviation? skew? 13th percentile / P13? 3rd central moment? experimental variogram? mutual information? Shannon entropy? etc.
Would it be useful to know the uncertainty due to limited sampling?
* what is the impact of uncertainty in the mean porosity e.g. 20%+/-2%?
**Bootstrap** is a method to assess the uncertainty in a sample statistic by repeated random sampling with replacement.
Assumptions
* sufficient, representative sampling; identical, independent samples
Limitations
1. assumes the samples are representative
2. assumes stationarity
3. only accounts for uncertainty due to too few samples, e.g. no uncertainty due to changes away from data
4. does not account for boundary of area of interest
5. assumes the samples are independent
6. does not account for other local information sources
The Bootstrap Approach (Efron, 1982)
Statistical resampling procedure to calculate uncertainty in a calculated statistic from the data itself.
* Does this work? Prove it to yourself; for uncertainty in the mean, the analytical solution is the standard error:
\begin{equation}
\sigma^2_\overline{x} = \frac{\sigma^2_s}{n}
\end{equation}
Extremely powerful - could calculate uncertainty in any statistic! e.g. P13, skew etc.
* It would not be possible to access uncertainty in a general statistic without the bootstrap.
* Advanced forms account for spatial information and sampling strategy (game theory and Journel's spatial bootstrap, 1993).
Steps:
1. assemble a sample set, must be representative, reasonable to assume independence between samples
2. optional: build a cumulative distribution function (CDF)
* may account for declustering weights, tail extrapolation
* could use analogous data to support
3. For $\ell = 1, \ldots, L$ realizations, do the following:
* For $i = 1, \ldots, n$ data, do the following:
* Draw a random sample with replacement from the sample set or Monte Carlo simulate from the CDF (if available).
4. Calculate a realization of the summary statistic of interest from the $n$ samples, e.g. $m^\ell$, $\sigma^2_{\ell}$. Return to step 3 for another realization.
5. Compile and summarize the $L$ realizations of the statistic of interest.
* calculate and display the histogram or CDF to see the full distribution
* report the percentiles representing the lower and upper confidence intervals, e.g. for a confidence level of 95%, the 2.5 and 97.5 percentiles.
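Here is a minimal, self-contained sketch of these steps (on synthetic data, not the project dataset) for the uncertainty in the mean, compared against the analytical standard error:
```python
import numpy as np

np.random.seed(73073)
sample = np.random.normal(loc=0.20, scale=0.03, size=50)    # synthetic 'porosity' sample

L = 1000                                                    # number of bootstrap realizations
boot_means = np.array([np.mean(np.random.choice(sample, size=len(sample), replace=True))
                       for _ in range(L)])

lower, upper = np.percentile(boot_means, [2.5, 97.5])       # 95% confidence interval
print('bootstrap 95% CI for the mean :', lower, upper)
print('bootstrap std of the mean     :', boot_means.std())
print('analytical standard error     :', sample.std(ddof=1) / np.sqrt(len(sample)))
```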
This is a very powerful method. Let's demonstrate a bunch of measures.
* when available, I compare to the distributions from the analytic expressions.
Let's demonstrate for a variety of parameters / statistics:
* mean / arithmetic average
* proportion
* interquartile range
* coefficient of variation
* correlation coefficient
#### Objective
The objective is to remove the hurdles of subsurface modeling workflow construction by providing building blocks and sufficient examples. This is not a coding class per se, but we need the ability to 'script' workflows working with numerical methods.
#### Getting Started
Here's the steps to get setup in Python with the GeostatsPy package:
1. Install Anaconda 3 on your machine (https://www.anaconda.com/download/).
2. From Anaconda Navigator (within Anaconda3 group), go to the environment tab, click on base (root) green arrow and open a terminal.
3. In the terminal type: pip install geostatspy.
4. Open Jupyter and in the top block get started by copy and pasting the code block below from this Jupyter Notebook to start using the geostatspy functionality.
You will need to copy the data file to your working directory. They are available here:
* Tabular data - sample_data_biased.csv at https://git.io/fh0CW
We need some standard packages. These should have been installed with Anaconda 3.
```python
import numpy as np # ndarrys for gridded data
import pandas as pd # DataFrames for tabular data
import os # set working directory, run executables
import matplotlib.pyplot as plt # plotting
import matplotlib # plotting
import seaborn as sns # plotting
from scipy import stats # summary statistics
import math # trig etc.
import scipy.signal as signal # kernel for moving window calculation
import random # random sampling
from scipy.stats import gaussian_kde # for PDF calculation
from scipy.stats import t # Student's t distribution for analytical solution
```
#### Set the working directory
I always like to do this so I don't lose files and to simplify subsequent read and writes (avoid including the full address each time).
```python
os.chdir("c:/PGE383") # set the working directory
```
#### Set the Random Number Seed
For repeatability, set the random number seed.
* this ensures that the same random samples are drawn each run and everyone in the class will have the same results.
* change this seed number for a new set of random realizations
```python
seed = 73073
np.random.seed(seed = seed)
```
#### Loading Tabular Data
Here's the command to load our comma delimited data file in to a Pandas' DataFrame object.
```python
df = pd.read_csv('sample_data_biased.csv') # load our data table
```
Let's drop some samples so that we increase the variations in bootstrap samples for our demonstration below.
```python
df = df.sample(frac = 0.02) # randomly retain 2% of the samples (6 of them) to reduce the size of the dataset
print('Using ' + str(len(df)) + ' number of samples')
```
Using 6 number of samples
Visualizing the DataFrame would be useful and we already learned about these methods in this demo (https://git.io/fNgRW).
We can preview the DataFrame by printing a slice or by utilizing the 'head' DataFrame member function (with a nice and clean format, see below). With the slice we could look at any subset of the data table and with the head command, add parameter 'n=13' to see the first 13 rows of the dataset.
```python
df.head() # display the first 5 samples in the table as a preview
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>X</th>
<th>Y</th>
<th>Facies</th>
<th>Porosity</th>
<th>Perm</th>
</tr>
</thead>
<tbody>
<tr>
<th>131</th>
<td>320</td>
<td>99</td>
<td>1</td>
<td>0.141749</td>
<td>13.154686</td>
</tr>
<tr>
<th>180</th>
<td>750</td>
<td>679</td>
<td>0</td>
<td>0.074619</td>
<td>0.089169</td>
</tr>
<tr>
<th>120</th>
<td>900</td>
<td>559</td>
<td>1</td>
<td>0.133474</td>
<td>13.104697</td>
</tr>
<tr>
<th>276</th>
<td>730</td>
<td>79</td>
<td>1</td>
<td>0.134691</td>
<td>15.160833</td>
</tr>
<tr>
<th>13</th>
<td>300</td>
<td>400</td>
<td>1</td>
<td>0.118448</td>
<td>8.495566</td>
</tr>
</tbody>
</table>
</div>
#### Summary Statistics for Tabular Data
The table includes X and Y coordinates (meters), Facies 1 and 0 (1 is sandstone and 0 interbedded sand and mudstone), Porosity (fraction), and permeability as Perm (mDarcy).
There are a lot of efficient methods to calculate summary statistics from tabular data in DataFrames. The describe command provides count, mean, minimum, maximum, and quartiles all in a nice data table. We use transpose just to flip the table so that features are on the rows and the statistics are on the columns.
```python
df.describe().transpose()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>count</th>
<th>mean</th>
<th>std</th>
<th>min</th>
<th>25%</th>
<th>50%</th>
<th>75%</th>
<th>max</th>
</tr>
</thead>
<tbody>
<tr>
<th>X</th>
<td>6.0</td>
<td>628.333333</td>
<td>253.725574</td>
<td>300.000000</td>
<td>422.500000</td>
<td>740.000000</td>
<td>765.000000</td>
<td>900.000000</td>
</tr>
<tr>
<th>Y</th>
<td>6.0</td>
<td>467.500000</td>
<td>351.097565</td>
<td>79.000000</td>
<td>174.250000</td>
<td>479.500000</td>
<td>649.000000</td>
<td>989.000000</td>
</tr>
<tr>
<th>Facies</th>
<td>6.0</td>
<td>0.666667</td>
<td>0.516398</td>
<td>0.000000</td>
<td>0.250000</td>
<td>1.000000</td>
<td>1.000000</td>
<td>1.000000</td>
</tr>
<tr>
<th>Porosity</th>
<td>6.0</td>
<td>0.112892</td>
<td>0.030695</td>
<td>0.074373</td>
<td>0.085576</td>
<td>0.125961</td>
<td>0.134387</td>
<td>0.141749</td>
</tr>
<tr>
<th>Perm</th>
<td>6.0</td>
<td>8.457717</td>
<td>6.605513</td>
<td>0.089169</td>
<td>2.679906</td>
<td>10.800132</td>
<td>13.142189</td>
<td>15.160833</td>
</tr>
</tbody>
</table>
</div>
#### Visualizing Tabular Data with Location Maps
It is natural to set the x and y coordinate and feature ranges manually. e.g. do you want your color bar to go from 0.05887 to 0.24230 exactly? Also, let's pick a color map for display. I heard that plasma is known to be friendly to the color blind as the color and intensity vary together (hope I got that right, it was an interesting Twitter conversation started by Matt Hall from Agile if I recall correctly). We will assume a study area of 0 to 1,000m in x and y and omit any data outside this area.
```python
xmin = 0.0; xmax = 1000.0 # range of x values
ymin = 0.0; ymax = 1000.0 # range of y values
pormin = 0.05; pormax = 0.25; # range of porosity values
permmin = 0.01; permmax = 10000
nx = 100; ny = 100; csize = 10.0
cmap = plt.cm.plasma # color map
cumul_prob = np.linspace(0.0,1.0,100) # list of cumulative probabilities
```
Let's try out location maps, histograms and scatter plots.
```python
plt.subplot(231)
plt.scatter(df['X'],df['Y'],s = 20,c = df['Porosity'],cmap = plt.cm.inferno,linewidths = 0.3,edgecolor = 'black',alpha = 0.8,vmin = pormin,vmax = pormax)
plt.colorbar(); plt.xlabel('X (m)'); plt.ylabel('Y (m)'); plt.title('Porosity Location Map')
plt.subplot(234)
plt.hist(df['Porosity'],color = 'red',alpha = 0.3,edgecolor='black',bins = np.linspace(pormin,pormax,int(len(df)/3)))
plt.xlabel('Porosity (fraction)'); plt.ylabel('Frequency'); plt.title('Porosity Histogram')
plt.subplot(232)
plt.scatter(df['X'],df['Y'],s = 20,c = df['Perm'],cmap = plt.cm.inferno,linewidths = 0.3,edgecolor = 'black',alpha = 0.8,vmin = permmin,vmax = permmax,norm=matplotlib.colors.LogNorm())
plt.colorbar(); plt.xlabel('X (m)'); plt.ylabel('Y (m)'); plt.title('Permeability Location Map')
plt.subplot(235)
plt.hist(df['Perm'],color = 'red',alpha = 0.3,edgecolor='black',bins=np.logspace(np.log10(permmin),np.log10(permmax),int(len(df)/3)))
#sns.kdeplot(x=df['Perm'],color = 'black',alpha = 0.2,levels = 1,log_scale = True,bw_adjust = 1)
plt.xlabel('Permeability (mD)'); plt.ylabel('Frequency'); plt.title('Permeability Histogram'); plt.xscale('log')
plt.subplot(233)
plt.scatter(df['Porosity'],df['Perm'],s = 20,color = 'red',alpha = 0.3,edgecolor='black')
#plt.contour(df['Porosity'],df['Perm'] Z, colors='black');
plt.ylabel('Permeability (mD)'); plt.xlabel('Porosity (fraction)'); plt.title('Permeability-Porosity Scatter Plot')
plt.yscale('log')
sns.kdeplot(x=df['Porosity'],y=df['Perm'],color = 'black',alpha = 0.2,levels = 4)
plt.subplots_adjust(left=0.0, bottom=0.0, right=3.0, top=2.2, wspace=0.2, hspace=0.2)
plt.show()
```
### Bootstrap Method in Python
If you are new to bootstrap and Python, here's the simplest possible code for bootstrap (a small reusable helper is also sketched right after this list).
* specify the number of bootstrap realizations, $L$
* declare a list to store the bootstrap realizations of the statistic of interest
* loop over L bootstrap realizations
* draw $n$ random samples with replacement (Monte Carlo simulation) for a new realization of the data
* calculate the realization of the statistic from the realization of the data
* summarize the resulting uncertainty model, histogram, summary statistics etc.
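A small reusable helper, added as a sketch (the function name `bootstrap_stat` is an assumption, not from the original), that wraps this recipe so any statistic can be bootstrapped with one call:
```python
# Added sketch: generic bootstrap of an arbitrary statistic.
def bootstrap_stat(values, stat_func, L=1000):
    """Return L bootstrap realizations of stat_func computed on resampled values."""
    realizations = []
    for k in range(L):                                        # loop over bootstrap realizations
        samples = random.choices(list(values), k=len(values)) # n samples with replacement
        realizations.append(stat_func(samples))               # statistic of this dataset realization
    return realizations

# e.g. uncertainty in the P13 percentile of porosity:
p13_real = bootstrap_stat(df['Porosity'].values, lambda s: np.percentile(s, 13))
```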
#### Bootstrap of the Sample Mean, Arithmetic Average
Let's demonstrate bootstrap for uncertainty in the arithmetic average with a simple workflow.
```python
L = 1000                                                        # set the number of bootstrap realizations
por_avg_real = []                                               # declare an empty list to store the bootstrap realizations of the statistic
for k in range(0,L):                                            # loop over the L bootstrap realizations
    samples = random.choices(df['Porosity'].values, k=len(df))  # n random samples with replacement
    por_avg_real.append(np.average(samples))                    # calculate the statistic of interest from the new bootstrap dataset
```
We will compare the bootstrap uncertainty in the sample arithmetic average distribution with the analytical expression:
\begin{equation}
\overline{x} \pm t_{\frac{\alpha}{2},n-1}\sqrt{\frac{s^2}{n}}
\end{equation}
The remaining code in this block is just a super cool set of plots with the results.
```python
# Bootstrap method for uncertainty in the sample mean
L = 1000 # set the number of bootstrap realizations
por_avg_real = [] # declare an empty list to store the bootstrap realizations of the statistic
for k in range(0,L): # loop over the L bootstrap realizations
samples = random.choices(df['Porosity'].values, k=len(df)) # n Monte Carlo simulations
por_avg_real.append(np.average(samples)) # calculate the statistic of interest from the new bootstrap dataset
# Analytical solution for the uncertainty in the sample mean, Student's t distribution with the correct mean and standard deviation
analytical = t.pdf(np.linspace(pormin,pormax,100), len(df)-1,loc=np.average(df['Porosity']),scale=math.sqrt(np.var(df['Porosity'].values)/len(df)))
# Plot of the original data and bootstap and analytical results
fig = plt.subplot(131)
plt.hist(df['Porosity'],color = 'red',alpha = 0.3,edgecolor='black',bins = np.linspace(pormin,pormax,int(len(df)/3)))
#plt.plot([np.average(df['Porosity']),np.average(df['Porosity'])],[0,100])
plt.axvline(x=np.average(df['Porosity']),linestyle="--",c='black')
plt.text(np.average(df['Porosity'])+0.005, 8.8, r'Average = ' + str(round(np.average(df['Porosity']),3)), fontsize=12)
plt.text(np.average(df['Porosity'])+0.005, 8.3, r'St.Dev. = ' + str(round(np.std(df['Porosity']),3)), fontsize=12)
plt.text(np.average(df['Porosity'])+0.005, 7.8, r'P10 = ' + str(round(np.percentile(df['Porosity'],10),3)), fontsize=12)
plt.text(np.average(df['Porosity'])+0.005, 7.3, r'P90 = ' + str(round(np.percentile(df['Porosity'],90),3)), fontsize=12)
plt.xlabel('Porosity (fraction)'); plt.ylabel('Frequency'); plt.title('Original Porosity Histogram')
plt.subplot(132)
plt.hist(por_avg_real,color = 'red',alpha = 0.2,edgecolor = 'black',bins=np.linspace(pormin,pormax,100),label = 'bootstrap',density = True) # plot the distribution, could also calculate any summary statistics
plt.plot(np.linspace(pormin,pormax,100),analytical,color = 'black',label = 'analytical',alpha=0.4)
plt.axvline(x=np.average(por_avg_real),linestyle="--",c='black')
plt.fill_between(np.linspace(pormin,pormax,100), 0, analytical, where = np.linspace(pormin,pormax,100) <= np.percentile(por_avg_real,10), facecolor='red', interpolate=True, alpha = 0.5)
plt.fill_between(np.linspace(pormin,pormax,100), 0, analytical, where = np.linspace(pormin,pormax,100) >= np.percentile(por_avg_real,90), facecolor='red', interpolate=True, alpha = 0.5)
plt.text(np.average(por_avg_real)+0.009, L*0.07, r'Average = ' + str(round(np.average(por_avg_real),3)), fontsize=12)
plt.text(np.average(por_avg_real)+0.009, L*0.066, r'St.Dev. = ' + str(round(np.std(por_avg_real),3)), fontsize=12)
plt.text(np.average(por_avg_real)+0.009, L*0.062, r'P90 = ' + str(round(np.percentile(por_avg_real,90),3)), fontsize=12)
plt.text(np.average(por_avg_real)+0.009, L*0.058, r'P10 = ' + str(round(np.percentile(por_avg_real,10),3)), fontsize=12)
plt.xlabel('Bootstrap Realizations and Analytical Uncertainty Distribution for Mean'); plt.ylabel('Frequency'); plt.title('Distribution of Average')
plt.legend()
fig = plt.subplot(133)
plt.hist(df['Porosity'],color = 'red',alpha = 0.2,edgecolor='grey',bins = np.linspace(pormin,pormax,int(len(df)/3)))
#plt.plot([np.average(df['Porosity']),np.average(df['Porosity'])],[0,100])
plt.axvline(x=np.average(df['Porosity']),c='black')
plt.axvline(x=np.percentile(por_avg_real,90),linestyle="--",c='black')
plt.axvline(x=np.percentile(por_avg_real,10),linestyle="--",c='black')
plt.text(np.percentile(por_avg_real,90)+0.009,8.8, r'Average = ' + str(round(np.average(por_avg_real),3)), fontsize=12)
plt.text(np.percentile(por_avg_real,90)+0.009,8.3, r'Average P90 = ' + str(round(np.percentile(por_avg_real,90),3)), fontsize=12)
plt.text(np.percentile(por_avg_real,90)+0.009,7.8, r'Average P10 = ' + str(round(np.percentile(por_avg_real,10),3)), fontsize=12)
plt.xlabel('Porosity (fraction)'); plt.ylabel('Frequency'); plt.title('Porosity Histogram with Uncertainty in Average')
plt.subplots_adjust(left=0.0, bottom=0.0, right=2.0, top=1.2, wspace=0.2, hspace=0.2)
plt.show()
```
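As an added aside (a sketch, not part of the original workflow), the analytical interval above can also be evaluated directly with `scipy.stats.t.interval`, reusing the same mean and scale convention as the cell above:
```python
# Added sketch: analytical confidence interval for the mean porosity,
# x_bar +/- t_{alpha/2, n-1} * sqrt(s^2/n), here at a 90% confidence level.
n = len(df)
xbar = np.average(df['Porosity'])
scale = math.sqrt(np.var(df['Porosity'].values)/n)   # same scale as used for the analytical PDF above
lower, upper = t.interval(0.90, n-1, loc=xbar, scale=scale)
print('90%% confidence interval for the mean porosity: %.3f to %.3f' % (lower, upper))
```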
#### Bootstrap of the Proportion
Let's demonstrate another case for which we have the analytical solution.
We will compare the bootstrap results with the analytical expression:
\begin{equation}
\hat{p} \pm t_{\frac{\alpha}{2},n-1}\sqrt{\frac{\hat{p}(1-\hat{p})}{n}}
\end{equation}
```python
#
L = 100000 # set the number of bootstrap realizations
sand_prop_real = [] # declare an empty list to store the bootstrap realizations of the statistic
shale_prop_real = []
for k in range(0,L): # loop over the L bootstrap realizations
samples = random.choices(df['Facies'].values, k=len(df)) # n Monte Carlo simulations
sand_prop_real.append(samples.count(1)/len(df)) # calculate the statistic of interest from the new bootstrap dataset
shale_prop_real.append(samples.count(0)/len(df)) # calculate the statistic of interest from the new bootstrap dataset
sand_prop = np.sum(df['Facies'] == 1)/len(df); shale_prop = np.sum(df['Facies'] == 0)/len(df)
analytical_shale = t.pdf(np.linspace(0.0,1.0,100), len(df)-1,loc=shale_prop,scale=math.sqrt(shale_prop*(1.0-shale_prop)/len(df)))
fig = plt.subplot(121)
barlist = plt.bar(x=['shale','sand'],height = [1-sand_prop,sand_prop],color = 'red',alpha = 0.3,edgecolor='black')
barlist[0].set_color('blue')
plt.text(-.3, shale_prop+0.02, r'Prop. Shale = ' + str(round(shale_prop,2)), fontsize=12)
plt.text(0.7, sand_prop+0.02, r'Prop. Sand = ' + str(round(sand_prop,2)), fontsize=12)
plt.xlabel('Rock Type / Facies'); plt.ylabel('Proportion'); plt.title('Facies Histogram')
plt.ylim([0,1]); plt.yticks(np.arange(0, 1.1, 0.1))
plt.subplot(122)
plt.hist(sand_prop_real,color = 'red',alpha = 0.2,edgecolor = 'black',bins=np.linspace(0.0,1.0,40),label = 'sand',density = True) # plot the distribution, could also calculate any summary statistics
analytical = t.pdf(np.linspace(0.0,1.0,100), len(df)-1,loc=sand_prop,scale=math.sqrt(sand_prop*(1.0-sand_prop)/len(df)))
plt.plot(np.linspace(0.0,1.0,100),analytical,color = 'black',label = 'analytical',alpha=0.4)
plt.axvline(x=sand_prop,linestyle="--",c='black')
plt.fill_between(np.linspace(0.0,1.0,100), 0, analytical, where = np.linspace(0.0,1.0,100) <= np.percentile(sand_prop_real,10), facecolor='red', interpolate=True, alpha = 0.5)
plt.fill_between(np.linspace(0.0,1.0,100), 0, analytical, where = np.linspace(0.0,1.0,100) >= np.percentile(sand_prop_real,90), facecolor='red', interpolate=True, alpha = 0.5)
plt.text(np.average(sand_prop_real)+0.009, 7.0, r'Prop. = ' + str(round(sand_prop,2)), fontsize=12)
plt.text(np.average(sand_prop_real)+0.009, 6.4, r'Prop. P90 = ' + str(round(np.percentile(sand_prop_real,90),2)), fontsize=12)
plt.text(np.average(sand_prop_real)+0.009, 5.8, r'Prop. P10 = ' + str(round(np.percentile(sand_prop_real,10),2)), fontsize=12)
plt.hist(shale_prop_real,color = 'blue',alpha = 0.2,edgecolor = 'black',bins=np.linspace(0.0,1.0,40),label = 'shale',density = True) # plot the distribution, could also calculate any summary statistics
plt.plot(np.linspace(0.0,1.0,100),analytical_shale,color = 'black',alpha=0.4)
plt.axvline(x=shale_prop,linestyle="--",c='black')
plt.fill_between(np.linspace(0.0,1.0,100), 0, analytical_shale, where = np.linspace(0.0,1.0,100) <= np.percentile(shale_prop_real,10), facecolor='blue', interpolate=True, alpha = 0.5)
plt.fill_between(np.linspace(0.0,1.0,100), 0, analytical_shale, where = np.linspace(0.0,1.0,100) >= np.percentile(shale_prop_real,90), facecolor='blue', interpolate=True, alpha = 0.5)
plt.text(np.average(shale_prop_real)+0.009, 7.0, r'Prop. = ' + str(round(shale_prop,2)), fontsize=12)
plt.text(np.average(shale_prop_real)+0.009, 6.4, r'Prop. P90 = ' + str(round(np.percentile(shale_prop_real,90),2)), fontsize=12)
plt.text(np.average(shale_prop_real)+0.009, 5.8, r'Prop. P10 = ' + str(round(np.percentile(shale_prop_real,10),2)), fontsize=12)
plt.xlabel('Bootstrap Realizations and Analytical Uncertainty Distributions for Proportion'); plt.ylabel('Frequency'); plt.title('Distribution of Bootstrap Proportions')
plt.legend(loc = 'upper left')
plt.subplots_adjust(left=0.0, bottom=0.0, right=2.0, top=1.2, wspace=0.2, hspace=0.2)
plt.show()
```
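Similarly (an added sketch, not in the original), the analytical interval for the sand proportion can be compared directly against the bootstrap percentiles computed above:
```python
# Added sketch: t-based interval for the sand proportion vs. bootstrap percentiles.
# An 80% central interval corresponds to the P10 and P90 reported above.
n = len(df)
scale = math.sqrt(sand_prop*(1.0-sand_prop)/n)
lower, upper = t.interval(0.80, n-1, loc=sand_prop, scale=scale)
print('Analytical P10 / P90 for sand proportion: %.2f / %.2f' % (lower, upper))
print('Bootstrap  P10 / P90 for sand proportion: %.2f / %.2f' %
      (np.percentile(sand_prop_real,10), np.percentile(sand_prop_real,90)))
```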
#### Bootstrap of the Interquartile Range
To prove that we can bootstrap any statistic let's select a more complicated measure of dispersion, the interquartile range.
```python
L = 1000 # set the number of bootstrap realizations
iqr_real = [] # declare an empty list to store the bootstrap realizations of the statistic
for k in range(0,L): # loop over the L bootstrap realizations
samples = random.choices(df['Porosity'].values, k=len(df)) # n Monte Carlo simulations
iqr_real.append(np.percentile(samples,75) - np.percentile(samples,25)) # calculate the statistic of interest from the new bootstrap dataset
iqr = np.percentile(df['Porosity'],75) - np.percentile(df['Porosity'],25)
plt.subplot(111)
plt.hist(iqr_real,color = 'red',alpha = 0.2,edgecolor = 'black',bins=np.linspace(0.0,0.125,100),label = 'bootstrap',density = True) # plot the distribution, could also calculate any summary statistics
plt.axvline(x=iqr,c='black')
plt.axvline(x=np.percentile(iqr_real,10),linestyle="--",c='black')
plt.axvline(x=np.percentile(iqr_real,90),linestyle="--",c='black')
sns.kdeplot(x=iqr_real,color = 'grey',alpha = 0.1,levels = 1,bw_adjust = 1)
plt.text(np.percentile(iqr_real,90)+0.009, L*0.07, r'Average = ' + str(round(np.average(iqr_real),3)), fontsize=12)
plt.text(np.percentile(iqr_real,90)+0.009, L*0.066, r'St.Dev. = ' + str(round(np.std(iqr_real),3)), fontsize=12)
plt.text(np.percentile(iqr_real,90)+0.009, L*0.062, r'P90 = ' + str(round(np.percentile(iqr_real,90),3)), fontsize=12)
plt.text(np.percentile(iqr_real,90)+0.009, L*0.058, r'P10 = ' + str(round(np.percentile(iqr_real,10),3)), fontsize=12)
plt.xlabel('Bootstrap Realizations of Interquartile Range'); plt.ylabel('Frequency'); plt.title('Distribution of Interquartile Range')
# plt.legend()
plt.subplots_adjust(left=0.0, bottom=0.0, right=2.0, top=1.2, wspace=0.2, hspace=0.2)
plt.show()
```
#### Bootstrap of the Coefficient of Variation
Here's another statistic that requires multiple measures from each bootstrap realization of the data.
* this reinforces that we bootstrap dataset realizations and then calculate the statistic on this dataset realization
For the coefficient of variation we will:
* calculate a bootstrap realization of the dataset with $n$ samples with replacement
* calculate the mean and standard deviation from this bootstrapped realization of the dataset
* calculate a bootstrap realization of the coefficient of variation as the standard deviation divided by the mean
Repeat this $L$ times and then evaluate the resulting distribution.
```python
L = 1000 # set the number of bootstrap realizations
cv_real = [] # declare an empty list to store the bootstrap realizations of the statistic
for k in range(0,L): # loop over the L bootstrap realizations
samples = random.choices(df['Porosity'].values, k=len(df)) # n Monte Carlo simulations
cv_real.append(np.std(samples)/np.average(samples)) # calculate the statistic of interest from the new bootstrap dataset
cv = np.std(df['Porosity'])/np.average(df['Porosity'])
plt.subplot(111)
plt.hist(cv_real,color = 'red',alpha = 0.2,edgecolor = 'black',bins=np.linspace(0.1,0.4,100),label = 'bootstrap',density = True) # plot the distribution, could also calculate any summary statistics
plt.axvline(x=cv,c='black')
plt.axvline(x=np.percentile(cv_real,10),linestyle="--",c='black')
plt.axvline(x=np.percentile(cv_real,90),linestyle="--",c='black')
sns.kdeplot(x=cv_real,color = 'grey',alpha = 0.1,levels = 1,bw_adjust = 1)
plt.text(np.percentile(cv_real,90)+0.009, 16, r'Average = ' + str(round(np.average(cv_real),3)), fontsize=12)
plt.text(np.percentile(cv_real,90)+0.009, 15, r'St.Dev. = ' + str(round(np.std(cv_real),3)), fontsize=12)
plt.text(np.percentile(cv_real,90)+0.009, 14, r'P90 = ' + str(round(np.percentile(cv_real,90),3)), fontsize=12)
plt.text(np.percentile(cv_real,90)+0.009, 13, r'P10 = ' + str(round(np.percentile(cv_real,10),3)), fontsize=12)
plt.xlabel('Bootstrap Realizations of Coefficient of Variation'); plt.ylabel('Frequency'); plt.title('Distribution of Coefficient of Variation')
# plt.legend()
plt.subplots_adjust(left=0.0, bottom=0.0, right=2.0, top=1.2, wspace=0.2, hspace=0.2)
plt.show()
```
#### Bootstrap of the Correlation Coefficient
Here's a statistic that requires us to work with multiple, paired features at once.
* this reinforces that we bootstrap for a new realization of the dataset, a set of samples with all their features.
For the correlation coefficient we will:
* calculate a bootstrap realization of the dataset with $n$ samples with replacement as a new DataFrame with all features
* calculate the correlation coefficient between two paired features
Repeat this $L$ times and then evaluate the resulting distribution.
```python
L = 1000 # set the number of bootstrap realizations
corr_real = [] # declare an empty list to store the bootstrap realizations of the statistic
for k in range(0,L): # loop over the L bootstrap realizations
samples = df.sample(n=len(df),replace=True,random_state = seed + k) # n random samples with replacement as a new DataFrame
corr_real.append(samples[['Porosity','Perm']].corr()['Porosity'][1]) # calculate the statistic of interest from the new bootstrap dataset
corr = df[['Porosity','Perm']].corr()['Porosity'][1]
plt.subplot(111)
plt.hist(corr_real,color = 'red',alpha = 0.2,edgecolor = 'black',bins=np.linspace(0.5,1.0,100),label = 'bootstrap',density = True) # plot the distribution, could also calculate any summary statistics
plt.axvline(x=corr,c='black')
plt.axvline(x=np.percentile(corr_real,10),linestyle="--",c='black')
plt.axvline(x=np.percentile(corr_real,90),linestyle="--",c='black')
sns.kdeplot(x=corr_real,color = 'grey',alpha = 0.1,levels = 1,bw_adjust = 1)
# plt.text(np.percentile(corr_real,90)+0.009, 16, r'Average = ' + str(round(np.average(corr_real),3)), fontsize=12)
# plt.text(np.percentile(corr_real,90)+0.009, 15, r'St.Dev. = ' + str(round(np.std(corr_real),3)), fontsize=12)
# plt.text(np.percentile(corr_real,90)+0.009, 14, r'P90 = ' + str(round(np.percentile(corr_real,90),3)), fontsize=12)
# plt.text(np.percentile(corr_real,90)+0.009, 13, r'P10 = ' + str(round(np.percentile(corr_real,10),3)), fontsize=12)
plt.xlabel('Bootstrap Realizations of Correlation Coefficient'); plt.ylabel('Frequency'); plt.title('Distribution of Correlation Coefficient')
# plt.legend()
plt.subplots_adjust(left=0.0, bottom=0.0, right=2.0, top=1.2, wspace=0.2, hspace=0.2)
plt.show()
```
#### Comments
This was a basic demonstration of bootstrap.
* you can use bootstrap to calculate uncertainty in any statistic!
* you are calculating a set of realizations of the statistic, representing uncertainty due to small sample size
* note the assumptions of bootstrap, including stationarity and representativity
* remember, get a dataset realization by bootstrap and then calculate the realization of the statistic from the dataset realization
* if your statistic has multiple inputs (e.g. P25 and P75), calculate each from the same bootstrap realization of the dataset.
I have other demonstrations on the basics of working with DataFrames, ndarrays, univariate statistics, plotting data, declustering, data transformations, trend modeling and many other workflows available at https://github.com/GeostatsGuy/PythonNumericalDemos and https://github.com/GeostatsGuy/GeostatsPy.
I hope this was helpful,
*Michael*
#### The Author:
### Michael Pyrcz, Associate Professor, University of Texas at Austin
*Novel Data Analytics, Geostatistics and Machine Learning Subsurface Solutions*
With over 17 years of experience in subsurface consulting, research and development, Michael has returned to academia driven by his passion for teaching and enthusiasm for enhancing engineers' and geoscientists' impact in subsurface resource development.
For more about Michael check out these links:
#### [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1)
#### Want to Work Together?
I hope this content is helpful to those that want to learn more about subsurface modeling, data analytics and machine learning. Students and working professionals are welcome to participate.
* Want to invite me to visit your company for training, mentoring, project review, workflow design and / or consulting? I'd be happy to drop by and work with you!
* Interested in partnering, supporting my graduate student research or my Subsurface Data Analytics and Machine Learning consortium (co-PIs including Profs. Foster, Torres-Verdin and van Oort)? My research combines data analytics, stochastic modeling and machine learning theory with practice to develop novel methods and workflows to add value. We are solving challenging subsurface problems!
* I can be reached at mpyrcz@austin.utexas.edu.
I'm always happy to discuss,
*Michael*
Michael Pyrcz, Ph.D., P.Eng. Associate Professor The Hildebrand Department of Petroleum and Geosystems Engineering, Bureau of Economic Geology, The Jackson School of Geosciences, The University of Texas at Austin
#### More Resources Available at: [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1)
```python
```
| e0dbcc4f35cf241e60e131381502e74b602d3c49 | 327,278 | ipynb | Jupyter Notebook | PythonDataBasics_BootstrapConfidence.ipynb | AndrewAnnex/PythonNumericalDemos | 11a7dec09bf08527b358fa95119811b6f73023b5 | ["MIT"] | 403 | 2017-10-15T02:07:38.000Z | 2022-03-30T15:27:14.000Z | PythonDataBasics_BootstrapConfidence.ipynb | AndrewAnnex/PythonNumericalDemos | 11a7dec09bf08527b358fa95119811b6f73023b5 | ["MIT"] | 4 | 2019-08-21T10:35:09.000Z | 2021-02-04T04:57:13.000Z | PythonDataBasics_BootstrapConfidence.ipynb | AndrewAnnex/PythonNumericalDemos | 11a7dec09bf08527b358fa95119811b6f73023b5 | ["MIT"] | 276 | 2018-06-27T11:20:30.000Z | 2022-03-25T16:04:24.000Z | 327.933868 | 76,520 | 0.913315 | true | 9,322 | Qwen/Qwen-72B | 1. YES 2. YES | 0.699254 | 0.760651 | 0.531888 | __label__eng_Latn | 0.848989 | 0.074084 |
(rotational_motion)=
# Rotational Motion
## Angular Momentum Theorem
From Newton's second law:
\\[\vec{F}=ma=m\frac{d\vec{v}}{dt}=\frac{d(m\vec{v})}{dt}\\]
The product \\(m\vec{v}\\) is called **linear momentum**, written as \\(\vec{p}\\).
Taking the cross product of both sides with the position vector \\(\vec{r}\\):
\\[\vec{r}\times\vec{F} = \vec{r}\times(\frac{d\vec{p}}{dt})\\]
The product \\(\vec{r}\times\vec{F}\\) is called the **torque**, denoted by \\(\vec{\tau}\\). From the rules of cross product, the magnitude of \\(\vec{\tau}\\) is given by:
\\[|\tau|=rF_\theta\\]
where \\(F_\theta\\) is the **tangential component** of \\(\vec{F}\\).
The direction of \\(\vec{\tau}\\) is perpendicular to the plane formed by \\(r\\) and \\(F\\).
Therefore:
\\[\vec{\tau}=\vec{r}\times\frac{d\vec{p}}{dt}\\]
From the product rule of differentiation:
\\[\frac{d(\vec{r}\times\vec{p})}{dt}=\vec{r}\times\frac{d\vec{p}}{dt}+\frac{d\vec{r}}{dt}\times\vec{p}\\]
Since \\(\frac{d\vec{r}}{dt}=\vec{v}\\) and \\(m\vec{v}=\vec{p}\\), the second term on the right-hand side of the equation above becomes:
\\[\vec{v}\times(m\vec{v})=m(\vec{v}\times\vec{v})=0\\]
Thus:
\\[\vec{\tau}=\frac{d(\vec{r}\times\vec{p})}{dt}\\]
The product \\(\vec{r}\times\vec{p}\\) is defined as the angular momentum \\(\vec{L}\\), and this is the **angular momentum theorem**:
\\[\vec{\tau}=\frac{d\vec{L}}{dt}\\]
## Moment of inertia
The moment of inertia of a body \\(I\\) composed of N discrete sub-bodies is defined as:
\\[I=\sum_{i=1}^{N}m_ir_i^2\\]
where \\(m_i\\) is the mass of each sub-body, and \\(r_i\\) is the distance of that sub-body from the axis of rotation.
With this definition, the angular momentum equation reduces to:
\\[\vec{\tau}=I\frac{d^2\theta}{dt^2}\\]
where \\(\frac{d^2\theta}{dt^2}\\) represents angular acceleration.
For a continuous body, its moment of inertia \\(I\\) is defined as:
\\[I=\int r^2dm=\int\rho(r)r^2dV\\]
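As a quick illustration (an added sketch, not part of the original notes), this integral definition gives the familiar \\(I = \frac{1}{2}MR^2\\) for a uniform solid cylinder about its axis, which is the value used in the tutorial problem below:
```python
# Added sketch: I = integral of r**2 dm for a uniform solid cylinder of mass M,
# radius R and height h about its symmetry axis, built from thin shells
# dm = rho * 2*pi*r*h * dr.
from sympy import Symbol, integrate, pi, simplify

M = Symbol('M', positive=True)
R = Symbol('R', positive=True)
h = Symbol('h', positive=True)
r = Symbol('r', positive=True)

rho = M / (pi * R**2 * h)                            # uniform density
I_cyl = integrate(rho * 2*pi*r*h * r**2, (r, 0, R))  # sum the shells from r = 0 to R
print(simplify(I_cyl))                               # -> M*R**2/2
```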
## Tutorial Problem 4.5
A cable is wrapped several times around a uniform, solid circular cylinder that can rotate about its axis. The cylinder has radius \\(R\\), and mass \\(m\\). The cable is pulled with a force of magnitude \\(F\\). Assuming that the cable unwinds without stretching or slipping, what will be the angular acceleration of the cylinder?
Ignore the weight of the cable.
```python
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib import animation, rc
from IPython.display import HTML
from sympy import Symbol, integrate
F = Symbol('F') # force
M = Symbol('M') # mass of cylinder
R = Symbol('R') # distance from the centre of the cylinder to where the force is applied
t = Symbol('t') # time
# x component of N = -F
# y component of N = Mg
# only F contributes to net torque
# sub into angular momentum equation
# for cylinder, I=MR**2/2
angular_acceleration = -(2*F)/(M*R)
angular_velocity = integrate(angular_acceleration, t)
angle = integrate(angular_velocity, t)
print("angular acceleration = ", angular_acceleration)
print("angular velocity = ", angular_velocity)
print("angle = ", angle)
```
angular acceleration = -2*F/(M*R)
angular velocity = -2*F*t/(M*R)
angle = -F*t**2/(M*R)
```python
# substitute arbitrary numbers
F = 10 # N
M = 10 # kg
R = 1 # m
t = np.linspace(0, 10, 500) # create list of time from 0 to 10 seconds
a_a = -(2*F)/(M*R) # angular acceleration
a_v = -2 * F * t / (M * R) # angular velocity
a = -(F * t**2) / (M * R) # angle
print("Angular acceleration = %.2f rad/s2" % (a_a))
X = R * np.cos(a) # x coordinate
Y = R * np.sin(a) # y coordinate
length = R * (-a / (2 * np.pi)) # length of string
l = np.zeros((len(t), len(t))) # store data in matrix to make animation
for i in range(len(t)):
for j in range(i+1):
l[i][j] = length[j]
# plot angular velocity over time
fig = plt.figure(figsize=(6,4))
plt.plot(t, a_v, 'k')
plt.xlabel('time (s)')
plt.ylabel('angular velocity (rad/s)')
plt.title('Angular velocity over time', fontsize=14)
plt.grid(True)
plt.show()
```
```python
nframes = len(t)
# Plot background axes
fig, ax = plt.subplots(figsize=(10,2))
# plot lines
line1, = ax.plot([], [], 'ro', lw=2)
line2, = ax.plot([], [], 'k', lw=0.5)
line3, = ax.plot([], [], 'k', lw=2)
# customise axis
ax.set_xlim(-2,18)
ax.set_ylim(-2,2)
ax.set_title('Motion of cylinder and string')
lines = [line1, line2, line3]
# Plot background for each frame
def init():
for line in lines:
line.set_data([], [])
return lines
# Set what data to plot in each frame
def animate(i):
x1 = X[i]
y1 = Y[i]
lines[0].set_data(x1, y1)
x2 = X
y2 = Y
lines[1].set_data(x2, y2)
x3 = l[i]
y3 = 1
lines[2].set_data(x3, y3)
return lines
# Call the animator
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=nframes, interval=10, blit=True)
```
```python
HTML(anim.to_html5_video())
```
### References
Course notes from Lecture 4 of the module ESE 95011 Mechanics
| ab81fda6ef30a6529cf47b9651814ce6b9d08fd7 | 109,210 | ipynb | Jupyter Notebook | notebooks/a_modules/mechanics/4_Rotational_Motion.ipynb | primer-computational-mathematics/book | 305941b4f1fc4f15d472fd11f2c6e90741fb8b64 | ["MIT"] | 3 | 2020-08-02T07:32:14.000Z | 2021-11-16T16:40:43.000Z | notebooks/a_modules/mechanics/4_Rotational_Motion.ipynb | primer-computational-mathematics/book | 305941b4f1fc4f15d472fd11f2c6e90741fb8b64 | ["MIT"] | 5 | 2020-07-27T10:45:26.000Z | 2020-08-12T15:09:14.000Z | notebooks/a_modules/mechanics/4_Rotational_Motion.ipynb | primer-computational-mathematics/book | 305941b4f1fc4f15d472fd11f2c6e90741fb8b64 | ["MIT"] | 4 | 2020-08-05T13:57:32.000Z | 2022-02-02T19:03:57.000Z | 94.065461 | 20,264 | 0.839612 | true | 1,584 | Qwen/Qwen-72B | 1. YES 2. YES | 0.884039 | 0.867036 | 0.766494 | __label__eng_Latn | 0.905954 | 0.619154 |
```python
# https://colab.research.google.com/github/kassbohm/tm-snippets/blob/master/ipynb/TM_A/TM_0/trafo_tensor/trafo_tensor_2.ipynb
from sympy.physics.units import *
from sympy import *
# Input:
# 1.
# (i1, i2, i3)-components of symmetric tensor:
T11, T12, T13 = -1, 4, 0
T22, T23 = 5, 0
T33 = 0
# 2.
# (i1, i2, i3)-components of unit vector:
d1, d2, d3 = 0, 1, 0
pprint("\nInput 1: (i1, i2, i3)-tensor-components:")
T = Matrix([
[T11, T12, T13],
[T12, T22, T23],
[T13, T23, T33]
])
pprint(T)
pprint("\nInput 2: (i1, i2, i3)-unit-vector-components:")
d = Matrix([d1, d2, d3])
tmp = d.norm()
assert(tmp==1)
pprint("\nOutput: Tensor-component in unit-vector-direction:")
tmp = d.transpose()*T*d
pprint(tmp)
# Input 1: (i1, i2, i3)-tensor-components:
# ⎡-1 4 0⎤
# ⎢ ⎥
# ⎢4 5 0⎥
# ⎢ ⎥
# ⎣0 0 0⎦
#
# Input 2: (i1, i2, i3)-unit-vector-components:
#
# Output: Tensor-component in unit-vector-direction:
# [5]
```
| b754a9f4da103715e9d7f349b3af49e0f44764cf | 2,246 | ipynb | Jupyter Notebook | ipynb/TM_A/TM_0/trafo_tensor/trafo_tensor_2.ipynb | kassbohm/tm-snippets | 5e0621ba2470116e54643b740d1b68b9f28bff12 | ["MIT"] | null | null | null | ipynb/TM_A/TM_0/trafo_tensor/trafo_tensor_2.ipynb | kassbohm/tm-snippets | 5e0621ba2470116e54643b740d1b68b9f28bff12 | ["MIT"] | null | null | null | ipynb/TM_A/TM_0/trafo_tensor/trafo_tensor_2.ipynb | kassbohm/tm-snippets | 5e0621ba2470116e54643b740d1b68b9f28bff12 | ["MIT"] | null | null | null | 28.43038 | 138 | 0.435886 | true | 409 | Qwen/Qwen-72B | 1. YES 2. YES | 0.927363 | 0.824462 | 0.764576 | __label__eng_Latn | 0.102466 | 0.614698 |
## ph237 Caltech 2018
### Gravitational Radiation
#### Assignment #1
```python
# standard preamble for ph237 notebooks
%matplotlib notebook
from __future__ import division, print_function
import numpy as np
#from mpl_toolkits.mplot3d import Axes3D
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib import cm
plt.style.use('seaborn-paper')
#plt.style.use('fivethirtyeight')
import scipy.constants as const
from astropy.constants import M_sun
mpl.rcParams.update({'text.usetex': False,
'lines.linewidth': 2.5,
'font.size': 18,
'xtick.labelsize': 'medium',
'ytick.labelsize': 'medium',
'axes.labelsize': 'small',
'axes.grid': True,
'grid.alpha': 0.73,
'lines.markersize': 12,
'legend.borderpad': 0.2,
'legend.fancybox': True,
'legend.fontsize': 13,
'legend.framealpha': 0.7,
'legend.handletextpad': 0.1,
'legend.labelspacing': 0.2,
'legend.loc': 'best',
'savefig.dpi': 100,
'figure.figsize': (9,6),
'pdf.compression': 9})
```
## Problem 1: Fabry-Perot Cavity
**There exists a Fabry-Perot cavity with a length of 4000 m. The power transmission, $T_I$, of the input mirror is 1% and the transmission of the end mirror, $T_E$, is 10 ppm. The cavity is illuminated from the input side with a 100 W laser having a wavelength of 437 nm.**
### Part (a)
** Using the steady state fields approach, solve for the transmitted power as a function of cavity length.**
The Fabry-Perot Cavity is schematically drawn below
To use the steady state fields approach, we make the ansatz that the field is a superposition of a left-going wave and a right going wave to the left of the cavity (with amplitudes $E_{\rm in}$ and $E_{\rm R}$ in the figure), a superposition of a left-going wave and a right going wave inside the cavity (with amplitudes $E_{1}$ and $E_{2}$ in the figure) and a right-going wave to the right of the cavity ( (with amplitude $E_{\rm T}$).
To use the steady state field approach we have to make sure the ansatz is consistent with the reflectivity and transmissivity at each mirror. This means we solve the system of equations (for $E_1$, $E_2$, $E_{\rm R}$ and $E_{\rm T}$ as a function of L , $\omega$, and $E_{\rm In}$)
\begin{align}
E_{\rm R}&=-r_I E_{\rm In} \\
E_1&=t_I E_{\rm In}-r_I E_2 \\
E_2& =-r_E e^{-2i\phi}E_1 \\
E_{\rm T}&=t_E e^{-i\phi}E_1,
\end{align}
where the phase $\phi =\omega_0L/c$ and the lower case $t$ and $r$ are the amplitude transmissivity and reflectivity (which are the square roots of the energy transmissivity and reflectivity).
This solution for the transmitted field is
\begin{align}
E_{\rm T}=\frac{t_Et_I e^{-i\phi}}{1-r_Ir_E e^{-2i\phi}}E_{\rm In}
\end{align}
The power is proportional to the field modulus squared $P\propto |E|^2$; Hence the transmitted power is
\begin{align}
P_T&=\left |\frac{t_Et_I e^{-i\phi}}{1-r_Ir_E e^{-2i\phi}} \right |^2P_{\rm In} \nonumber \\
&=\frac{T_ET_I}{1+R_IR_E-2r_Ir_E\cos(2\phi)}P_{\rm In}
,
\end{align}
where $P_{\rm In}$ is the laser power,
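To make the length dependence concrete, here is an added sketch (not part of the original solution) that evaluates this closed-form $P_T$ over a microscopic length offset $\delta L$ about an integer number of wavelengths, using the stated cavity parameters ($T_I = 1\%$, $T_E = 10$ ppm, $\lambda = 437$ nm, 100 W input):
```python
# Added sketch: closed-form transmitted power vs. microscopic cavity length offset.
# With L = (integer number of wavelengths) + dL, the phase obeys cos(2*phi) = cos(4*pi*dL/lam).
import numpy as np
import matplotlib.pyplot as plt

lam = 437e-9                          # laser wavelength [m]
TI, TE = 0.01, 1e-5                   # mirror power transmissivities
rI, rE = np.sqrt(1-TI), np.sqrt(1-TE) # amplitude reflectivities
Pin = 100.0                           # input power [W]

dL = np.linspace(-lam/2, lam/2, 2000) # length offset from resonance [m]
PT = TI*TE / (1 + (1-TI)*(1-TE) - 2*rI*rE*np.cos(4*np.pi*dL/lam)) * Pin

plt.figure()
plt.semilogy(dL*1e9, PT)
plt.xlabel('cavity length offset [nm]')
plt.ylabel('transmitted power [W]')
plt.title('Closed-form transmitted power vs. cavity length')
```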
### Part b
Draw a diagram of the cavity, label each of the nodes, and write down the Adjacency Matrix, $A$, for the Fabry-Perot cavity. Solve for the System Matrix, $G$ using Mathematica or Python or otherwise, and show how one of the elements of $G$ can be used to find the solution in part a)
We translate the optical system into a directed graph
and then write down the adjacency matrix, defined as
$A_{ij}=\text{value of connection connecting node j to node i}$
as well as the system matrix $G=(1-A)^{-1}$
```python
# Define adjacency matrix for FP
def A(w):
M = np.zeros([7,7],complex)
prop = np.exp(-1j*ph(w))
M[1,0] = tI
M[1,5] = -rI
M[2,1] = prop
M[3,2] = tE
M[4,2] = -rE
M[5,4] = prop
M[6,0] = rI
M[6,5] = tI
return M
# function that calculates system matrix G = (1-A)^(-1) for FP
def G(w):
return np.linalg.inv(np.identity(7,complex) - A(w))
```
Recall that the vector of fields $\vec E =(E_0, \dots, E_6)^T$ obeys
\begin{align}
\vec E =A \vec E +\vec E_{\rm Inj},
\end{align}
where the vector of injected fields is (in this case) $\vec E_{\rm Inj}=(E_{\rm In},\dots)^T$. This means that we can use the system matrix $G=(1-A)^{-1}$ to find the field at any of the node locations via
\begin{align}
\vec E =G \vec E_{\rm Inj} .
\end{align}
Performing the matrix multiplication, we see that
\begin{align}
E_i=G_{i0}E_{\rm In}.
\end{align}
# Problem 2: Frequency Response
**In this problem we will compute the frequency response of a LIGO-like interferometer to gravitational waves. Assume that the results from above still hold.**
## Part (a)
**Assume that we now drive the end mirror with a sinusoidal modulation having an amplitude of $x_0$ and a frequency, $\omega$. Write down an expression for the fields reflected from the mirror, utilizing the Jacobi-Anger expansion.**
As is shown below,
suppose the central location of the mirror is $x=L$ and it is driven so that its position becomes $x=L+\delta L$, with $\delta L =x_0 \cos \omega t$. We calculate the values of the reflected field $E_{\rm R}$ (referred to at $x=0$) in terms of the ingoing field $E_{\rm In}$ (also referred to at $x=0$).
At the mirror location $x=L+\delta L$ the reflected field is simply $-r$ times the ingoing field evaluated at the mirror. The propagation to the mirror from $x=0$ contributes a phase factor $e^{-i\phi}$ and the propagation from the mirror to $x=0$, also contributes a phase factor of $e^{-i\phi}$, where $\phi=\omega_0(L+\delta L)/c=\omega_0(L+x_0\cos\omega t)/c$ and $\omega_0$ is the light frequency. Hence the reflected field is
\begin{align}
E_R&=-r_E e^{-2i\phi}E_{\rm In}
=-r_E e^{-2i\omega_0L/c}e^{-2i\omega_0 x_0 \cos (\omega t)/c} E_{\rm In}
\end{align}
Taylor expanding in small $x_0$ (at fancy restaurants they call this utilizing the Jacobi-Anger expansion)
\begin{align}
E_R=-r_E e^{-2i\omega_0 L/c}\left[1-i\frac{x_0\omega_0}{c}(e^{i\omega t}+e^{-i\omega t})\right]E_{\rm In} \label{eq:side}
\end{align}
Note if we restore the time dependent factor $e^{i\omega_0t}$ that the first term has a time dependence $e^{i\omega_0t}$ (and is simply the reflected field when there is no modulation) while the second two terms are sideband fields with the time dependence $e^{i(\omega_0\pm\omega)t}$.
## Part b
**Use your knowledge of the frequency dependent System Matrix derived above to compute an expression for the transmitted power. Make a plot of the transfer function of the transmitted power as a function of modulation frequency (the y-axis should be in units of Watts/meter).
Hint Remember that the transmitted field will be the sum of the DC fields (computed above) and the AC fields}**
Now consider the Fabry Perot Cavity of problem one and imagine that we modulate the end mirror with $\delta L =x_0 \cos\omega t$. From part (a), we know that this produces sideband fields at node 4, i.e
\begin{align}
E_4=-r_E E_2+ir_E\frac{x_0\omega_0}{c}(e^{i\omega t}+e^{-i\omega t})E_{2},
\end{align}
As we are working to first order in the amplitude modulation, we can take $x_0=0$ when we evaluate $E_2$ the second term. From problem 1, when $x_0=0$, we know how to evaluate $E_2$ in terms of $E_{\rm In}$ and the system matrix $E_2=G_{20}(\omega_0)E_{\rm in}$. Hence
\begin{align}
E_4=-r_E E_2+ir_E\frac{x_0\omega_0}{c}(e^{i\omega t}+e^{-i\omega t})G_{20}(\omega_0)E_{\rm in}.
\end{align}
Here we are now writing the system matrix $G(\omega)$ as a function of frequency, with the frequency dependence coming from the frequency dependent phase $\phi(\omega)=\omega L/c$. Thus we see that we are effectively injecting fields at the sideband frequencies $\omega_0\pm \omega$ at node 4. The system matrix evaluated at the sideband frequencies also governs how light at the sideband frequencies propagates through the optical system. Thus we can write the complete field (with it's time dependence) as
\begin{align}
\vec E(t)&=G(\omega_0)
\begin{bmatrix}
E_{\rm In} \\ 0 \\0 \\0 \\0 \\0 \\0
\end{bmatrix}
e^{i\omega_0 t} \nonumber \\
&+G(\omega_0 +\omega)
\begin{bmatrix}
0 \\ 0 \\ 0 \\0 \\ ir_EG_{20}(\omega_0)E_{\rm In}x_0\omega_0/c \\ 0 \\0
\end{bmatrix}
e^{i(\omega_0+\omega)t}
+G(\omega_0 -\omega)
\begin{bmatrix}
0 \\ 0 \\ 0 \\0 \\ ir_EG_{20}(\omega_0)E_{\rm In}x_0\omega_0/c \\ 0 \\0
\end{bmatrix}
e^{i(\omega_0-\omega)t}
\end{align}
Performing the matrix multiplication yields a transmitted field (including all of the time dependence) of
\begin{align}
E_T(t)&=G_{30}(\omega_0)E_{\rm In}e^{i\omega_0 t} \nonumber \\
&+G_{34}(\omega_0+\omega)ir_EG_{20}(\omega_0)x_0\frac{\omega_0}{c}E_{\rm In}e^{i(\omega+\omega_0)t}
+G_{34}(\omega_0-\omega)ir_EG_{20}(\omega_0)x_0\frac{\omega_0}{c}E_{\rm In}e^{i(-\omega+\omega_0)t} \label{eq:ETmod}
\end{align}
Thus we see that the transmitted field also has components at the carrier frequency $\omega_0$ and and the sideband frequencies $\omega_0\pm \omega$.
This produces an output power with a DC component (computed in problem 1) and a slowly varying (relative to the carrier frequency) modulation at $\omega$. Namely, anytime the complex electric field is of the form
\begin{align}
E(t)=E_0e^{i\omega_0 t}+E_+e^{i(\omega_0+\omega)t}+E_-e^{i(\omega_0-\omega)t},
\end{align}
with $E_\pm\ll E_0$, then
the power is
\begin{align}
P&\propto |E|^2 \nonumber \\
&=|E_0|^2+E_0 e^{i\omega_0 t}(E_+^*e^{-i(\omega_0+\omega)t}+E_-^*e^{-i(\omega_0-\omega)t}) +E_0^* e^{-i\omega_0 t}(E_+e^{i(\omega_0+\omega)t}+E_-e^{i(\omega_0-\omega)t}) +\mathcal{O}(E_\pm^2) \nonumber \\
&=|E_0|^2+e^{i\omega t}\left[E_0^*E_++E_0E_-^*\right]+e^{-i\omega t}\left[E_0E_+^*+E_0^*E_-\right] \nonumber \\
&=|E_0|^2 +2A\cos(\omega t+\delta),
\end{align}
where $A$ and $\delta$ are the amplitude and phase of $E_0^*E_++E_0E_-^*=Ae^{i\delta}$.
The transmitted field is of this form with
\begin{align}
&E_0=G_{30}(\omega_0)E_{\rm In},& &E_{\pm}=ir_Ex_0\frac{\omega_0}{c}G_{34}(\omega_0\pm \omega)G_{20}(\omega_0)E_{\rm In}&
\end{align}
Note both $E_0$ and $E_{\pm}$ are proportional to $E_{\rm in}$. Note that both $A$ and $|E_0|^2$ are proportional to $|E_{\rm In}|^2$ or equivalently the input power $P_{\rm In}$. Hence, we can write
\begin{align}
P_T=|G_{30}(\omega_0)|^2P_{\rm In}+\Delta P\cos(\omega t+\Phi),
\end{align}
where
\begin{align}
\Delta P e^{i\Phi}&=2G^*_{30}(\omega_0)ir_Ex_0\frac{\omega_0}{c}G_{34}(\omega_0+\omega)G_{20}(\omega_0)+2G_{30}(\omega_0)(ir_Ex_0\frac{\omega_0}{c}G_{34}(\omega_0-\omega)G_{20}(\omega_0))^*P_{\rm In} \nonumber \\
&=2ir_Ex_0\frac{\omega_0}{c}\left[G^*_{30}(\omega_0)G_{34}(\omega_0+\omega)G_{20}(\omega_0)-G_{30}(\omega_0)G^*_{34}(\omega_0-\omega)G^*_{20}(\omega_0)\right]P_{\rm in},
\end{align}
We consider the transfer function from modulation amplitude to power to be $\Delta P/x_0$.
The constants characterizing the Fabry-Perot cavity in this problem are below. Note that the length of the cavity is micro-tuned to be slightly off resonance.
```python
#Defining constants
c = const.c # speed of light
lam = 437e-9 # laser wavelength
w0 = 2*np.pi*c/lam # laser frequency
L0 = 4000 # initial length guess
L = round(L0/lam)*lam # length of Fabry Perot cavity (tuned to int # of waves)
L += 10e-12 # add small offset so that there is a linear readout
TI = 0.014 # power transmissivity of input mirror
TE = 1e-5 # power transmissivity of end mirror
tI = np.sqrt(TI) # amplitude transmissivity of input mirror
tE = np.sqrt(TE) # amplitdue transmissivity of end mirror
RI = 1-TI # energy reflectivity of input mirror
RE = 1-TE # energy reflectivity of end mirror
rI = np.sqrt(RI) # amplitude reflectivity of input mirror
rE = np.sqrt(RE) # amplitude reflectivity of end mirror
Pin = 1 # laser power
def ph(w): # phase accumulated over a half round trip in the FP cavity
return w*L/c
```
A function that computes $\frac{\Delta P}{x_0} e^{i\Phi}$. The transfer function for the modulation amplitude is the absolute value of this function, and the transfer function for the phase is its argument.
```python
# define a function that computes Delta P/ x_0 in eq. 22 for FP
def P_trans(w):
wc = w0
z = 2j*rE*(2*np.pi/lam)*Pin * (np.conj(G(wc)[3,0]) * (G(wc+w)[3,4] * G(wc)[2,0]) -
G(wc)[3,0] * np.conj(G(wc-w)[3,4] * G(wc)[2,0]))
return z
```
Bode plots for the magnitude of the transfer function
```python
#plot (zoomed out) for FP
f = np.logspace(0, 5, 1300)
omega = 2*np.pi*f
#y = list(map(Transfer, omega))
y = np.zeros_like(omega, complex)
for i in range(len(omega)):
y[i] = P_trans(omega[i])
```
```python
fig1,ax = plt.subplots(2,1, sharex=True, figsize=(8,7))
ax[0].loglog(f, np.abs(y),
rasterized=True)
ax[0].set_title(r'Single Fabry-Perot transfer function')
ax[0].set_ylabel(r'$\Delta P/x_0 [W/m]$')
ax[1].semilogx(f, np.angle(y, deg=True),
rasterized=True)
ax[1].set_ylabel(r'Phase [deg]')
ax[1].set_xlabel(r'Frequency [Hz]')
ax[1].set_yticks(np.arange(-180,181,45))
plt.savefig("Figures/2bwide.pdf", bbox_inches='tight')
```
A close-up of the resonances. The yellow lines denote the resonance frequencies of the static cavity
```python
# plot (zoomed in) for FP
f = np.linspace(0, 1e5, 1000)
y = np.zeros_like(f, complex)
for i in range(len(f)):
y[i] = P_trans(2*np.pi*f[i])
plt.figure(221)
plt.semilogy(f/1000, np.abs(y),
rasterized=True)
plt.axvline(c/2/L/1000, color='xkcd:tangerine', alpha=0.5, lw=5)
plt.axvline(c/1/L/1000, color='xkcd:shit', alpha=0.5, lw=5)
plt.xlabel(r'Frequency [kHz]')
plt.ylabel(r'$\Delta P/x_0 [W/m]$')
plt.savefig("Figures/2bclose.pdf", bbox_inches='tight')
```
## Part c
**Now write down a larger Adjacency Matrix which represents a Michelson interferometer with Fabry-Perot cavities in place of the usual end mirrors. Assume that there is a small asymmetry in the Michelson, such that the distance from the beamsplitter to one of the FP cavities is 100 pm larger than the distance to the other cavity.
Make a Bode plot of the transfer function as in part b), but instead of the transmission of the FP cavity, use the anti-symmetric (detection) port as the readout.**
The optical layout of the Michelson interferometer is
The corresponding directed graph is
We take the mirrors in each Fabry-Perot cavity to be identical. We assume the y-axis FP cavity is located a distance d from the beam splitter\footnote{The field at the antisymmetric port depends on d when $\Delta \neq 0$, but that the power does not. Hence we will set $d=0$ in our numerical computations of the power.} and the the x-cavity is located a distance $d+\Delta$ from the beam splitter with $\Delta =100\,pm$. We take the transmissivity and reflectivity of the beam splitter to be $t_{\rm BS}=r_{\rm BS}=1/\sqrt{2}$.
```python
# part 2c
# extra parameters for 2c
TBS = 0.5
tBS = np.sqrt(TBS) #beam splitter tranmissivity
rBS = np.sqrt(1 - TBS) #beam splitter reflectivity
# distance to the y FP cavity.
# The field at the antisymmetric port depends on d when Del is not zero, but the power doesn't
d = 0
Del = 1e-10 #difference between distance to x cavity and the distance to the y cavity
```
We now imagine that the x-axis end mirror is shaken about its central location with $\delta L= x_0\cos\omega t$. Using the results of problem 2 (a) and the logic of 2 (b), this means that the field at the anti-symmetric port is now the field at the carrier frequency plus the result of injecting the field (including the full time dependence)
\begin{align}
E_{side}(t)=ir_E\frac{x_0\omega_0}{c}(e^{i(\omega_0 +\omega)t}+e^{+i(\omega_0-\omega )t})G_{20}(\omega_0)E_{\rm in}
\end{align}
in node 3. Again using the system matrix to propagate the fields, the field at the antisymmetric port is
\begin{align}
E_{AS}(t)&=G_{12,0}(\omega_0)E_{\rm In}e^{i\omega_0 t} \nonumber \\
&+G_{12,3}(\omega_0+\omega)ir_EG_{20}(\omega_0)x_0\frac{\omega_0}{c}E_{\rm In}e^{i(\omega+\omega_0)t}
+G_{12,3}(\omega_0-\omega)ir_EG_{20}(\omega_0)x_0\frac{\omega_0}{c}E_{\rm In}e^{i(-\omega+\omega_0)t}, \label{eq:EASmod}
\end{align}
which is exactly the same form as the transmitted field from the modulated Fabry-Perot cavity, except that we have relabeled the elements of the system matrix to correspond to the correct nodes.
\begin{align}
P_T=|G_{12,0}(\omega_0)|^2P_{\rm In}+\Delta P\cos(\omega t+\Phi),
\end{align}
where
\begin{align}
\Delta P e^{i\Phi}&=2G^*_{12,0}(\omega_0)ir_Ex_0\frac{\omega_0}{c}G_{12,3}(\omega_0+\omega)G_{20}(\omega_0)+2G_{12,0}(\omega_0)(ir_Ex_0\frac{\omega_0}{c}G_{12,3}(\omega_0-\omega)G_{20}(\omega_0))^*P_{\rm In} \nonumber \\
&=2ir_Ex_0\frac{\omega_0}{c}\left[G^*_{12,0}(\omega_0)G_{12,3}(\omega_0+\omega)G_{20}(\omega_0)-G_{12,0}(\omega_0)G^*_{12,3}(\omega_0-\omega)G^*_{20}(\omega_0)\right]P_{\rm in},
\end{align}
The nonzero components of the adjacency matrix A are
\begin{align}
&A_{10}=t_{BS}e^{-i\phi_x} & \nonumber \\
&A_{21}=t_I, & &A_{23}=-r_Ie^{-i\phi}& \nonumber \\
&A_{32}=-r_Ee^{-i\phi}& \nonumber \\
&A_{42}=t_Ee^{-i\phi}& \nonumber \\
&A_{51}=r_I, & &A_{53}=t_I e^{-i\phi}& \nonumber \\
&A_{60}=-r_{BS}e^{-i\phi_y}& \nonumber \\
&A_{76}=t_I,& &A_{7,9}=-r_Ie^{-i\phi}& \nonumber \\
&A_{87}=t_E e^{-i\phi},& \nonumber \\
&A_{97}=-r_E e^{-i\phi},& \nonumber \\
&A_{10,6}=r_I,&
&A_{10,9}=t_I e^{-i\phi},& \nonumber \\
&A_{11,5}=t_{BS}e^{-i\phi_x},&
&A_{11,10}=-r_{BS}e^{-i\phi_y}& \nonumber \\
&A_{12,5}=r_{BS}e^{-i\phi_x},&
&A_{12,10}=t_{BS}e^{-i\phi_y}&,
\end{align}
where the phases are
\begin{align}
\phi(\omega)&=\omega L/c \nonumber \\
\phi_y(\omega)&=\omega d/c \nonumber \\
\phi_x(\omega) &=\omega(d+\Delta)/c
\end{align}
We compute the adjacency matrix, the system matrix and the power transfer funciton $\frac{\Delta P}{x_0}e^{i\Phi}$
```python
def phx(w): #phase accumulated travelling to the x FP cavity
return w*(d+Del)/c
def phy(w): #phase accumulated travelling to the y FP cavity
    return w*(d)/c
# plot (zoomed out) for FP
# make x be a list of f rather than \omega, so we can plot transmitted power versus f
f = np.logspace(0, 5, 1000)
#y = list(map(Transfer, 2*np.pi*f))
```
```python
#Define adjacency matrix for Michelson
def A(w):
M = np.zeros([13,13],complex)
M[1,0] = tBS*np.exp(-1j*phx(w))
M[2,1] = tI
M[2,3] = -rI*np.exp(-1j*ph(w))
M[3,2] = -rE*np.exp(-1j*ph(w))
M[4,2] = tE*np.exp(-1j*phx(w))
M[5,1] = +rI
M[5,3] = tI*np.exp(-1j*ph(w))
M[6,0] = -rBS*np.exp(-1j*phy(w))
M[7,6] = tI
M[7,9] = -rI*np.exp(-1j*ph(w))
M[8,7] = tE*np.exp(-1j*ph(w))
M[9,7] = -rE*np.exp(-1j*ph(w))
M[10,6] = +rI
M[10,9] = tI*np.exp(-1j*ph(w))
M[11,5] = tBS*np.exp(-1j*phx(w))
M[11,10] = -rBS*np.exp(-1j*phy(w))
M[12,5] = rBS*np.exp(-1j*phx(w))
M[12,10] = tBS*np.exp(-1j*phy(w))
return M
# function that calculates system matrix G = (1-A)^(-1) for Michelson
def G(w):
return np.linalg.inv(np.identity(13,complex) - A(w))
#define a function that computes Delta P/ x_0 in eq. 26 for Michelson
def P_dark(w):
z = 2j * rE * Pin* (w0/c) * (np.conj(G(w0)[12,0])* G(w0+w)[12,3] * G(w0)[2,0] -
G(w0)[12,0] * np.conj(G(w0-w)[12,3] * G(w0)[2,0]))
return z
```
Bode plots
```python
#plot (zoomed out) for Fabry-Perot Michelson
f = np.logspace(0, 5, 1000)
#y = list(map(Transfer, omega))
y = np.zeros_like(f, complex)
for i in range(len(f)):
y[i] = P_dark(2*np.pi*f[i])
```
```python
fig23, ax = plt.subplots(2,1, sharex=True, figsize=(8,7))
ax[0].loglog(f, np.abs(y),
rasterized=True, c='xkcd:Burple')
ax[0].set_title(r'Michelson w/ Fabry-Perot arms')
ax[0].set_ylabel(r'$\Delta P/x_0 [W/m]$')
ax[1].semilogx(f, np.angle(y, deg=True),
rasterized=True, c='xkcd:primary blue')
ax[1].set_ylabel(r'Phase [deg]')
ax[1].set_xlabel(r'Frequency [Hz]')
ax[1].set_yticks(np.arange(-180,181,45))
plt.savefig("Figures/2cwide.pdf")
```
Close-up of the resonances. The yellow lines denote the resonance frequencies of the static Fabry-Perot cavity
```python
#plot (zoomed in) for Michelson
# make x be a list of f rather than \omega, so we can plot transmitted power versus f
f = np.linspace(0, 1e5, 1000)
y = np.zeros_like(f, complex)
for i in range(len(f)):
y[i] = P_dark(2*np.pi*f[i])
plt.figure(227)
plt.semilogy(f/1000, np.abs(y),
rasterized=True)
plt.axvline(c/2/L/1000, color='xkcd:tangerine', alpha=0.5, lw=5)
plt.axvline(c/1/L/1000, color='xkcd:shit', alpha=0.5, lw=5)
plt.xlabel(r'Frequency [kHz]')
plt.ylabel(r'$\Delta P/x_0 [W/m]$')
plt.savefig("Figures/2cclose.pdf")
```
```python
```
| f6ac4d2db138152dcbf45d48ba482fff5cb1c091 | 28,681 | ipynb | Jupyter Notebook | Assignments/2018/a1.ipynb | rxa254/ph237 | a9b3b0360966268537e53b7cc073187596fc91ff | ["MIT"] | 1 | 2020-04-05T22:58:54.000Z | 2020-04-05T22:58:54.000Z | Assignments/2018/a1.ipynb | rxa254/ph237 | a9b3b0360966268537e53b7cc073187596fc91ff | ["MIT"] | 10 | 2018-04-16T01:10:34.000Z | 2020-05-08T05:37:09.000Z | Assignments/2018/a1.ipynb | rxa254/ph237 | a9b3b0360966268537e53b7cc073187596fc91ff | ["MIT"] | 1 | 2018-04-05T17:37:09.000Z | 2018-04-05T17:37:09.000Z | 43.522003 | 544 | 0.557477 | true | 7,209 | Qwen/Qwen-72B | 1. YES 2. YES | 0.882428 | 0.819893 | 0.723497 | __label__eng_Latn | 0.934561 | 0.519257 |
# Visualizing stationary paths of a functional
Last revised: 02-Feb-2019 by Dick Furnstahl [furnstahl.1@osu.edu]
Consider the functional
$\begin{align}
S = \int_{x_1}^{x_2} f[y(x), y'(x), x] \, dx
\end{align}$
with $y_1 = y(x_1)$ and $y_2 = y(x_2)$ fixed. We denote by $y^*(x)$ the path that minimizes $S$ (or, more generally, makes it stationary). Then we consider the class of candidate paths $y(x)$ given by
$\begin{align}
y(x) = y^*(x) + \alpha \eta(x)
\end{align}$
where $\eta(x)$ is some function that vanishes at the endpoints: $\eta(x_1) = \eta(x_2) = 0$. We can derive the Euler-Lagrange equations by minimizing $S(\alpha)$ with respect to $\alpha$.
Here we visualize this problem by considering a particular $S$, choosing among some possible $\eta(x)$ definitions, and seeing how $S$ is minimized with respect to $\alpha$. We will also allow for an incorrect determination of $y^*(x)$, in which case we expect that the minimum alpha will give us a reasonable reproduction of the true $y^*(x)$. The variation of $\alpha$ and the choice of functions will be made using widgets from `ipywidgets`.
## Looking at a plot of the functional evaluation versus $\alpha$
\[We'll use `%matplotlib notebook` so that we can modify figures without redrawing them.\]
```python
%matplotlib notebook
```
```python
import numpy as np
import matplotlib.pyplot as plt
import ipywidgets as widgets
from IPython.display import display
```
### Functional from Taylor problem 6.9
This problem states: "Find the equation of the path from the origin $O$ to the point $P(1,1)$ in the $xy$ plane that makes the integral $\int_O^P (y'^2 + y y' + y^2)\,dx$ stationary." The answer from solving the Euler-Lagrange equation is $y^*(x) = \sinh(x)/\sinh(1)$.
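As a quick check of that answer (added here for reference): with $f = y'^2 + y y' + y^2$ we have $\partial f/\partial y = y' + 2y$ and $\frac{d}{dx}\frac{\partial f}{\partial y'} = \frac{d}{dx}(2y' + y) = 2y'' + y'$, so the Euler-Lagrange equation reduces to $y'' = y$. The solution satisfying $y(0) = 0$ and $y(1) = 1$ is $y^*(x) = \sinh(x)/\sinh(1)$, which is exactly the function coded below.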
```python
def y_star(x):
"""Path that minimizes the functional in Taylor problem 6.9."""
return np.sinh(x) / np.sinh(1.)
```
```python
delta_x = 0.001
x_pts = np.arange(0., 1., delta_x)
fig = plt.figure(figsize=(6,3),
num='Visualizing stationary paths of a functional')
ax1 = fig.add_subplot(1,2,1)
ax2 = fig.add_subplot(1,2,2)
def setup_figure():
ax1.set_title('Show paths')
ax1.plot(x_pts, y_star(x_pts), color='black', lw=2)
ax1.set_xlabel('x')
ax1.set_ylabel('y(x)')
ax2.set_title('Evaluate functional')
ax2.set_xlabel(r'$\alpha$')
ax2.set_ylabel('functional')
ax2.set_xlim(-0.4, 0.4)
ax2.set_ylim(1.5, 3.)
#ax2.axvline(0., color='black', alpha=0.3)
ax2.axhline(evaluate_functional(x_pts, y_star(x_pts)),
color='black', alpha=0.3)
fig.tight_layout()
def evaluate_functional(x_pts, y_pts):
"""Given arrays of x and y points, evaluate the functional from 6.9."""
# The numpy gradient function takes the derivative of an array y_pts
# that is a function of points x in x_pts.
y_deriv_pts = np.gradient(y_pts, x_pts)
f = y_deriv_pts**2 + y_pts * y_deriv_pts + y_pts**2
# Use the numpy trapezoid rule (trapz) to do the integral over f.
return np.trapz(f, x_pts)
def make_path(alpha, ax1_passed, ax2_passed,
base_function='exact', eta_function='sine'):
"""Given a base function, which may be the exact y^*(x) or a guess that
is not correct, generate and plot the path corresponding to adding
alpha*eta(x) to the base function, with eta(x) chosen among some
functions that vanish at the endpoints in x.
"""
# map x_pts to zero to 1 (it may already be there)
x_mapped_pts = (x_pts - x_pts[0]) / (x_pts[-1] - x_pts[0])
# Choices for the base function
if (base_function == 'exact'):
base = lambda x : y_star(x)
elif (base_function == 'guess 1'):
base = lambda x : np.sinh(2.*x) / np.sinh(2.)
elif (base_function == 'guess 2'):
base = lambda x : x**3
if (eta_function == 'sine'):
eta = lambda x : np.sin(np.pi * x)
elif (eta_function == 'parabola'):
eta = lambda x : 4. * x * (1. - x)
y_new_pts = base(x_pts) + alpha * eta(x_mapped_pts)
ax1_passed.plot(x_pts, y_new_pts, color='red', lw=1)
ax2_passed.plot(alpha, evaluate_functional(x_pts, y_new_pts), '.',
color='red')
def reset_graph(event):
ax1.clear()
ax2.clear()
setup_figure()
button = widgets.Button(
description='reset graph'
)
button.on_click(reset_graph)
widgets.interact(make_path,
alpha=widgets.FloatSlider(min=-1., max=1., step=.05,
value=0.0, description=r'$\alpha$',
continuous_update=False),
ax1_passed=widgets.fixed(ax1),
ax2_passed=widgets.fixed(ax2),
base_function=widgets.Dropdown(options=['exact', 'guess 1',
'guess 2'],
value='exact',
description='base function'),
eta_function=widgets.Dropdown(options=['sine', 'parabola'],
value='sine',
description=r'$\eta(x)$')
)
setup_figure()
button
```
<IPython.core.display.Javascript object>
interactive(children=(FloatSlider(value=0.0, continuous_update=False, description='$\\alpha$', max=1.0, min=-1…
Button(description='reset graph', style=ButtonStyle())
```python
```
|
4528a20383a3e9862a97ae430eabfd1653cf1fbe
| 149,782 |
ipynb
|
Jupyter Notebook
|
2020_week_6/.ipynb_checkpoints/visualizing_stationary_functional_v1-checkpoint.ipynb
|
CLima86/Physics_5300_CDL
|
d9e8ee0861d408a85b4be3adfc97e98afb4a1149
|
[
"MIT"
] | null | null | null |
2020_week_6/.ipynb_checkpoints/visualizing_stationary_functional_v1-checkpoint.ipynb
|
CLima86/Physics_5300_CDL
|
d9e8ee0861d408a85b4be3adfc97e98afb4a1149
|
[
"MIT"
] | null | null | null |
2020_week_6/.ipynb_checkpoints/visualizing_stationary_functional_v1-checkpoint.ipynb
|
CLima86/Physics_5300_CDL
|
d9e8ee0861d408a85b4be3adfc97e98afb4a1149
|
[
"MIT"
] | null | null | null | 143.744722 | 106,275 | 0.826034 | true | 1,473 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.817574 | 0.91611 | 0.748988 |
__label__eng_Latn
| 0.91674 | 0.578482 |
```python
import opt_mo
import numpy as np
import sympy as sym
import itertools
sym.init_printing()
```
```python
p_1, p_2, p_3, p_4 = sym.symbols("p_1, p_2, p_3, p_4")
q_1, q_2, q_3, q_4 = sym.symbols("q_1, q_2, q_3, q_4")
```
```python
p = (p_1, p_2, p_3, p_4)
q = (q_1, q_2, q_3, q_4)
```
```python
pi_1, pi_2, pi_3, pi_4 = sym.symbols("pi_1, pi_2, pi_3, pi_4")
pi = (pi_1, pi_2, pi_3, pi_4)
```
**Theorem 1 Proof**
As described in Section 2, the utility of a memory-one player against another is given by the steady-state distribution of $M$
multiplied by the payoffs.
```python
M = opt_mo.mem_one_match_markov_chain(player=p, opponent=q)
```
```python
ss = opt_mo.steady_states(M, pi)
```
```python
v = sym.Matrix([[ss[pi_1]], [ss[pi_2]], [ss[pi_3]], [ss[pi_4]]])
```
```python
utility = v.dot(np.array([3, 0, 5, 1]))
```
```python
expr = utility.factor()
```
```python
numerator, denominator = sym.fraction(expr)
```
```python
numerator
```
```python
numerator_elements = [[numerator.coeff(f1 * f2) * f1 * f2 for f2 in p] for f1 in p]
```
```python
flat_elements = list(itertools.chain.from_iterable(numerator_elements))
```
```python
cross_prod = sum(flat_elements) / 2
```
```python
cross_prod
```
```python
sym.latex(sum(flat_elements) / 2).replace("\\left", "").replace("\\right", "")
```
'p_{1} p_{2} (q_{1} q_{2} - 5 q_{1} q_{4} - q_{1} - q_{2} q_{3} + 5 q_{3} q_{4} + q_{3}) + p_{1} p_{3} (- q_{1} q_{3} + q_{2} q_{3}) + p_{1} p_{4} (5 q_{1} q_{3} - 5 q_{3} q_{4}) + p_{2} p_{3} (- q_{1} q_{2} + q_{1} q_{3} + 3 q_{2} q_{4} + q_{2} - 3 q_{3} q_{4} - q_{3}) + p_{2} p_{4} (- 5 q_{1} q_{3} + 5 q_{1} q_{4} + 3 q_{2} q_{3} - 3 q_{2} q_{4} + 2 q_{3} - 2 q_{4}) + p_{3} p_{4} (- 3 q_{2} q_{3} + 3 q_{3} q_{4})'
```python
linear_expr = numerator.subs({p_2: 0, p_3: 0, p_4: 0}).coeff(p_1) * p_1
linear_expr += numerator.subs({p_1: 0, p_3: 0, p_4: 0}).coeff(p_2) * p_2
linear_expr += numerator.subs({p_1: 0, p_2: 0, p_4: 0}).coeff(p_3) * p_3
linear_expr += numerator.subs({p_1: 0, p_2: 0, p_3: 0}).coeff(p_4) * p_4
```
```python
linear_expr
```
```python
sym.latex(linear_expr).replace("\\left", "").replace("\\right", "")
```
'p_{1} (- q_{1} q_{2} + 5 q_{1} q_{4} + q_{1}) + p_{2} (q_{2} q_{3} - q_{2} - 5 q_{3} q_{4} - q_{3} + 5 q_{4} + 1) + p_{3} (q_{1} q_{2} - q_{2} q_{3} - 3 q_{2} q_{4} - q_{2} + q_{3}) + p_{4} (- 5 q_{1} q_{4} + 3 q_{2} q_{4} + 5 q_{3} q_{4} - 5 q_{3} + 2 q_{4})'
```python
constant = numerator.subs({p_2: 0, p_3: 0, p_4: 0, p_1: 0})
```
```python
constant
```
```python
sym.latex(constant)
```
'q_{2} - 5 q_{4} - 1'
```python
((constant + linear_expr + cross_prod) - numerator).simplify()
```
**Denominator**
```python
denominator_elements = [[denominator.coeff(f1 * f2) * f1 * f2 for f2 in p] for f1 in p]
```
```python
flat_elements = list(itertools.chain.from_iterable(denominator_elements))
```
```python
cross_prod = sum(flat_elements) / 2
```
```python
cross_prod
```
```python
sym.latex(cross_prod).replace("\\left", "").replace("\\right", "")
```
'p_{1} p_{2} (q_{1} q_{2} - q_{1} q_{4} - q_{1} - q_{2} q_{3} + q_{3} q_{4} + q_{3}) + p_{1} p_{3} (- q_{1} q_{3} + q_{1} q_{4} + q_{2} q_{3} - q_{2} q_{4}) + p_{1} p_{4} (- q_{1} q_{2} + q_{1} q_{3} + q_{1} + q_{2} q_{4} - q_{3} q_{4} - q_{4}) + p_{2} p_{3} (- q_{1} q_{2} + q_{1} q_{3} + q_{2} q_{4} + q_{2} - q_{3} q_{4} - q_{3}) + p_{2} p_{4} (- q_{1} q_{3} + q_{1} q_{4} + q_{2} q_{3} - q_{2} q_{4}) + p_{3} p_{4} (q_{1} q_{2} - q_{1} q_{4} - q_{2} q_{3} - q_{2} + q_{3} q_{4} + q_{4})'
```python
linear_expr = denominator.subs({p_2: 0, p_3: 0, p_4: 0}).coeff(p_1) * p_1
linear_expr += denominator.subs({p_1: 0, p_3: 0, p_4: 0}).coeff(p_2) * p_2
linear_expr += denominator.subs({p_1: 0, p_2: 0, p_4: 0}).coeff(p_3) * p_3
linear_expr += denominator.subs({p_1: 0, p_2: 0, p_3: 0}).coeff(p_4) * p_4
```
```python
sym.latex(linear_expr).replace("\\left", "").replace("\\right", "")
```
'p_{1} (- q_{1} q_{2} + q_{1} q_{4} + q_{1}) + p_{2} (q_{2} q_{3} - q_{2} - q_{3} q_{4} - q_{3} + q_{4} + 1) + p_{3} (q_{1} q_{2} - q_{2} q_{3} - q_{2} + q_{3} - q_{4}) + p_{4} (- q_{1} q_{4} + q_{2} + q_{3} q_{4} - q_{3} + q_{4} - 1)'
```python
constant = denominator.subs({p_2: 0, p_3: 0, p_4: 0, p_1: 0})
```
```python
constant
```
```python
sym.latex(constant)
```
'q_{2} - q_{4} - 1'
```python
sym.Matrix(Q_num)
```
```python
numerator.collect(p_1 * p_2).collect(p_1 * p_3).collect(p_1 * p_4).collect(p_2 * p_3).collect(p_2 * p_4)
```
```python
```
```python
element = [[numerator.coeff(f1 * f2) * f1 * f2 for f2 in p] for f1 in p]
```
```python
flat_elements = list(itertools.chain.from_iterable(element))
```
```python
flat_elements
```
<function __main__.<lambda>(l)>
```python
expr = 0
for i in range(len(p)):
    for j in range(i + 1, len(p)):
        expr += numerator.coeff(p[i] * p[j]) * p[i] * p[j]
```
```python
expr
```
```python
(numerator - expr).simplify()
```
```python
```
|
1551641618bec275ea5d9dc1db0d63e382937555
| 79,103 |
ipynb
|
Jupyter Notebook
|
nbs/0.0.2 Proofs.ipynb
|
trallard/Memory-size-in-the-prisoners-dilemma
|
d674b3c6950beb3c4e0cc22230a1529c3959afb4
|
[
"MIT"
] | 2 |
2020-03-31T16:34:06.000Z
|
2020-04-01T14:36:42.000Z
|
nbs/0.0.2 Proofs.ipynb
|
trallard/Memory-size-in-the-prisoners-dilemma
|
d674b3c6950beb3c4e0cc22230a1529c3959afb4
|
[
"MIT"
] | null | null | null |
nbs/0.0.2 Proofs.ipynb
|
trallard/Memory-size-in-the-prisoners-dilemma
|
d674b3c6950beb3c4e0cc22230a1529c3959afb4
|
[
"MIT"
] | null | null | null | 106.751687 | 11,224 | 0.806189 | true | 2,259 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.843895 | 0.661923 | 0.558593 |
__label__eng_Latn
| 0.096635 | 0.13613 |
# Module II: Word Vectors (Word Embeddings) and CBOW 02
We will see how to prepare the data in order to apply:
- Forward propagation.
- Cross-entropy loss.
- Backpropagation.
- Gradient descent.
```python
import numpy as np
```
## Forward propagation
<div style="text-align:center;"> Figure 2 </div>
```python
N = 3
V = 5
# Initializing the network weights
```
```python
```
### Initializing the weights and biases
```python
W1 = np.array([[ 0.41687358, 0.08854191, -0.23495225, 0.28320538, 0.41800106],
[ 0.32735501, 0.22795148, -0.23951958, 0.4117634 , -0.23924344],
[ 0.26637602, -0.23846886, -0.37770863, -0.11399446, 0.34008124]])
W2 = np.array([[-0.22182064, -0.43008631, 0.13310965],
[ 0.08476603, 0.08123194, 0.1772054 ],
[ 0.1871551 , -0.06107263, -0.1790735 ],
[ 0.07055222, -0.02015138, 0.36107434],
[ 0.33480474, -0.39423389, -0.43959196]])
b1 = np.array([[ 0.09688219],
[ 0.29239497],
[-0.27364426]])
b2 = np.array([[ 0.0352008 ],
[-0.36393384],
[-0.12775555],
[-0.34802326],
[-0.07017815]])
```
Add the helper functions seen in the previous notebooks
```python
def get_dict(data):
words = sorted(list(set(data)))
n = len(words)
idx = 0
# return these correctly
word2Ind = {}
Ind2word = {}
for k in words:
word2Ind[k] = idx
Ind2word[idx] = k
idx += 1
return word2Ind, Ind2word
def get_windows(words, C):
i = C
while i < len(words) - C:
center_word = words[i]
context_words = words[(i - C):i] + words[(i+1):(i+C+1)]
yield context_words, center_word
i += 1
def word_to_one_hot_vector(word, word2Ind, V):
one_hot_vector = np.zeros(V)
one_hot_vector[word2Ind[word]] = 1
return one_hot_vector
def context_words_to_vector(context_words, word2Ind, V):
context_words_vectors = [word_to_one_hot_vector(w, word2Ind, V) for w in context_words]
context_words_vectors = np.mean(context_words_vectors, axis=0)
return context_words_vectors
def get_training_example(words, C, word2Ind, V):
for context_words, center_word in get_windows(words, C):
yield context_words_to_vector(context_words, word2Ind, V), word_to_one_hot_vector(center_word, word2Ind, V)
```
```python
words = ['i', 'am', 'happy', 'because', 'i', 'am', 'learning']
```
```python
word2Ind, Ind2word = get_dict(words)
```
```python
Ind2word
```
{0: 'am', 1: 'because', 2: 'happy', 3: 'i', 4: 'learning'}
```python
word2Ind
```
{'am': 0, 'because': 1, 'happy': 2, 'i': 3, 'learning': 4}
## Training data
```python
training_examples = get_training_example(words, 2, word2Ind, V)
```
```python
training_examples
```
<generator object get_training_example at 0x7fc9f562b9e0>
```python
x_array, y_array = next(training_examples)
```
```python
x_array
```
array([0.25, 0.25, 0. , 0.5 , 0. ])
```python
y_array
```
array([0., 0., 1., 0., 0.])
```python
x = x_array.copy()
```
```python
x.reshape(V,1)
```
array([[0.25],
[0.25],
[0. ],
[0.5 ],
[0. ]])
```python
x.shape=(V,1)
x
```
array([[0.25],
[0.25],
[0. ],
[0.5 ],
[0. ]])
```python
y = y_array.copy()
```
```python
y.shape = (V,1)
```
```python
y
```
array([[0.],
[0.],
[1.],
[0.],
[0.]])
```python
def relu(z):
result = z.copy()
result[result<0]=0
return result
def softmax(z):
e_z= np.exp(z)
sum_ez = np.sum(e_z)
return e_z / sum_ez
```
## Forward
### Hidden layer values
\begin{align}
\mathbf{z_1} = \mathbf{W_1}\mathbf{x} + \mathbf{b1} \\
\mathbf{h} = \mathbf{ReLu}(\mathbf{z_1)} \\
\end{align}
```python
z1 = np.dot(W1, x) + b1
```
```python
z1
```
array([[ 0.36483875],
[ 0.63710329],
[-0.3236647 ]])
```python
h = relu(z1)
h
```
array([[0.36483875],
[0.63710329],
[0. ]])
### Output layer values
\begin{align}
\mathbf{z_2} = \mathbf{W_2}\mathbf{h} + \mathbf{b2} \\
\mathbf{\hat{y}} = \mathbf{softmax}(\mathbf{z_2)} \\
\end{align}
```python
z2 = np.dot(W2, h) + b2
```
```python
y_hat = softmax(z2)
```
```python
y_hat
```
array([[0.18519074],
[0.19245626],
[0.23107446],
[0.18236353],
[0.20891502]])
```python
y_hat.argmax()
```
    2
```python
y
```
array([[0.],
[0.],
[1.],
[0.],
[0.]])
```python
Ind2word[y_hat.argmax()]
```
'happy'
### Cross-entropy loss
$$ J = -\sum\limits_{k=1}^{V} y_k \log{\hat{y}_k}$$
```python
def cross_entropy_loss(y_predicted, y_actual):
loss = np.sum(-np.log(y_predicted)*y_actual)
return loss
```
```python
cross_entropy_loss(y_hat, y)
```
1.4650152923611106
```python
```
### Backpropagation
The formulas we need in order to implement backpropagation are:
\begin{align}
\frac{\partial J}{\partial \mathbf{W_1}} &= \rm{ReLU}\left ( \mathbf{W_2^\top} (\mathbf{\hat{y}} - \mathbf{y})\right )\mathbf{x}^\top \tag{7}\\
\frac{\partial J}{\partial \mathbf{W_2}} &= (\mathbf{\hat{y}} - \mathbf{y})\mathbf{h^\top} \tag{8}\\
\frac{\partial J}{\partial \mathbf{b_1}} &= \rm{ReLU}\left ( \mathbf{W_2^\top} (\mathbf{\hat{y}} - \mathbf{y})\right ) \tag{9}\\
\frac{\partial J}{\partial \mathbf{b_2}} &= \mathbf{\hat{y}} - \mathbf{y} \tag{10}
\end{align}
Compute the partial derivative of the loss function with respect to $\mathbf{b_2}$ and store the result in `grad_b2`.
$$\frac{\partial J}{\partial \mathbf{b_2}} = \mathbf{\hat{y}} - \mathbf{y} \tag{10}$$
```python
grad_b2 = y_hat - y
```
```python
grad_b2
```
array([[ 0.18519074],
[ 0.19245626],
[-0.76892554],
[ 0.18236353],
[ 0.20891502]])
Compute the partial derivative of the loss with respect to $\mathbf{W_2}$ and store it in `grad_W2`.
$$\frac{\partial J}{\partial \mathbf{W_2}} = (\mathbf{\hat{y}} - \mathbf{y})\mathbf{h^\top} \tag{8}$$
```python
grad_w2 = np.dot((y_hat - y), h.T)
grad_w2
```
array([[ 0.06756476, 0.11798563, 0. ],
[ 0.0702155 , 0.12261452, 0. ],
[-0.28053384, -0.48988499, -0. ],
[ 0.06653328, 0.1161844 , 0. ],
[ 0.07622029, 0.13310045, 0. ]])
**Now compute the derivative with respect to $\mathbf{b_1}$ and store the result in `grad_b1`.**
$$\frac{\partial J}{\partial \mathbf{b_1}} = \rm{ReLU}\left ( \mathbf{W_2^\top} (\mathbf{\hat{y}} - \mathbf{y})\right ) \tag{9}$$
```python
grad_b1 = relu(np.dot(W2.T, (y_hat - y)))
grad_b1
```
array([[0. ],
[0. ],
[0.17045858]])
**Finally, compute the partial derivative of the loss with respect to $\mathbf{W_1}$ and store it in `grad_W1`.**
$$\frac{\partial J}{\partial \mathbf{W_1}} = \rm{ReLU}\left ( \mathbf{W_2^\top} (\mathbf{\hat{y}} - \mathbf{y})\right )\mathbf{x}^\top \tag{7}$$
```python
grad_w1 = np.dot(relu(np.dot(W2.T, grad_b2)), x.T)
grad_w1
```
array([[0. , 0. , 0. , 0. , 0. ],
[0. , 0. , 0. , 0. , 0. ],
[0.04261464, 0.04261464, 0. , 0.08522929, 0. ]])
Expected result
array([[0. , 0. , 0. , 0. , 0. ],
[0. , 0. , 0. , 0. , 0. ],
[0.04261464, 0.04261464, 0. , 0.08522929, 0. ]])
## Gradient descent
During the gradient descent step, you will update the weights and biases by subtracting $\alpha$ times the gradient from the original matrices and vectors, using the following formulas.
\begin{align}
\mathbf{W_1} &:= \mathbf{W_1} - \alpha \frac{\partial J}{\partial \mathbf{W_1}} \tag{11}\\
\mathbf{W_2} &:= \mathbf{W_2} - \alpha \frac{\partial J}{\partial \mathbf{W_2}} \tag{12}\\
\mathbf{b_1} &:= \mathbf{b_1} - \alpha \frac{\partial J}{\partial \mathbf{b_1}} \tag{13}\\
\mathbf{b_2} &:= \mathbf{b_2} - \alpha \frac{\partial J}{\partial \mathbf{b_2}} \tag{14}\\
\end{align}
```python
alpha = 0.03
```
```python
```
**Now compute the new values of $\mathbf{W_2}$ (to be stored in `W2_new`), $\mathbf{b_1}$ (in `b1_new`), and $\mathbf{b_2}$ (in `b2_new`).**
\begin{align}
\mathbf{W_2} &:= \mathbf{W_2} - \alpha \frac{\partial J}{\partial \mathbf{W_2}} \tag{12}\\
\mathbf{b_1} &:= \mathbf{b_1} - \alpha \frac{\partial J}{\partial \mathbf{b_1}} \tag{13}\\
\mathbf{b_2} &:= \mathbf{b_2} - \alpha \frac{\partial J}{\partial \mathbf{b_2}} \tag{14}\\
\end{align}
```python
# Update the weights
w1_new = W1 - alpha*grad_w1
w2_new = W2 - alpha*grad_w2
b1_new = b1 - alpha*grad_b1
b2_new = b2 - alpha*grad_b2
```
```python
W1
```
array([[ 0.41687358, 0.08854191, -0.23495225, 0.28320538, 0.41800106],
[ 0.32735501, 0.22795148, -0.23951958, 0.4117634 , -0.23924344],
[ 0.26637602, -0.23846886, -0.37770863, -0.11399446, 0.34008124]])
```python
w1_new
```
array([[ 0.41687358, 0.08854191, -0.23495225, 0.28320538, 0.41800106],
[ 0.32735501, 0.22795148, -0.23951958, 0.4117634 , -0.23924344],
[ 0.25359163, -0.25125325, -0.37770863, -0.13956325, 0.34008124]])
```python
```
## Option 1: extract the embeddings from W1
```python
w1_new
```
array([[ 0.41687358, 0.08854191, -0.23495225, 0.28320538, 0.41800106],
[ 0.32735501, 0.22795148, -0.23951958, 0.4117634 , -0.23924344],
[ 0.25359163, -0.25125325, -0.37770863, -0.13956325, 0.34008124]])
```python
for i in range(V):
print(Ind2word[i])
```
am
because
happy
i
learning
```python
word2Ind
```
{'am': 0, 'because': 1, 'happy': 2, 'i': 3, 'learning': 4}
```python
# Each column of W1 holds the embedding of one vocabulary word.
for word, idx in word2Ind.items():
    print(f'{word} ----> {w1_new[:, idx]}')
```
## Option 2: extract the embeddings from W2
```python
w2_new
```
array([[-0.24209007, -0.465482 , 0.13310965],
[ 0.06370138, 0.04444758, 0.1772054 ],
[ 0.27131525, 0.08589287, -0.1790735 ],
[ 0.05059224, -0.0550067 , 0.36107434],
[ 0.31193865, -0.43416402, -0.43959196]])
```python
```
## Option 3: extract the embeddings from W1 and W2
```python
# W1 has shape (N, V) and W2 has shape (V, N), so transpose W1 before averaging;
# row i of W3 is then the embedding of the word with index i.
W3 = (W1.T + W2) / 2
```
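A quick lookup sketch (added for illustration; it assumes the transposed averaging above, so that row $i$ of `W3` is the embedding of the word with index $i$):

```python
# Retrieve the embedding of a single word from the averaged matrix.
print('happy ---->', W3[word2Ind['happy']])
```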
```python
```
```python
def gradient_descent(xtrain, ytrain, N, V, numiter, alpha):
    # Assumes xtrain (V,1) is the averaged context vector and ytrain (V,1) the
    # one-hot center word, as in the single example above.
    W1 = np.random.rand(N, V)
    W2 = np.random.rand(V, N)
    b1 = np.random.rand(N, 1)
    b2 = np.random.rand(V, 1)
    for i in range(numiter):
        # Forward pass
        z1 = np.dot(W1, xtrain) + b1
        h = relu(z1)
        yhat = softmax(np.dot(W2, h) + b2)
        cost = cross_entropy_loss(yhat, ytrain)
        # Backpropagation
        grad_b2 = yhat - ytrain
        grad_w2 = np.dot(grad_b2, h.T)
        grad_b1 = relu(np.dot(W2.T, grad_b2))
        grad_w1 = np.dot(grad_b1, xtrain.T)
        # Update the weights
        W1 = W1 - alpha * grad_w1
        W2 = W2 - alpha * grad_w2
        b1 = b1 - alpha * grad_b1
        b2 = b2 - alpha * grad_b2
    return W1, W2, b1, b2
```
|
6a3bce63e0634d09fc6459c85030c07af3e99bc3
| 25,514 |
ipynb
|
Jupyter Notebook
|
Module_II/Notebooks_Clase/02-word-embeddings-cbow.ipynb
|
edgmz28/NLP_Course
|
c7638828727a685b4601b0f96d4990b94ba6820a
|
[
"MIT"
] | null | null | null |
Module_II/Notebooks_Clase/02-word-embeddings-cbow.ipynb
|
edgmz28/NLP_Course
|
c7638828727a685b4601b0f96d4990b94ba6820a
|
[
"MIT"
] | null | null | null |
Module_II/Notebooks_Clase/02-word-embeddings-cbow.ipynb
|
edgmz28/NLP_Course
|
c7638828727a685b4601b0f96d4990b94ba6820a
|
[
"MIT"
] | null | null | null | 22.032815 | 243 | 0.450851 | true | 4,181 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.890294 | 0.879147 | 0.782699 |
__label__spa_Latn
| 0.106933 | 0.656805 |
# Homework and bake-off: word-level entailment with neural networks
```python
__author__ = "Christopher Potts"
__version__ = "CS224u, Stanford, Spring 2020"
```
## Contents
1. [Overview](#Overview)
1. [Set-up](#Set-up)
1. [Data](#Data)
1. [Edge disjoint](#Edge-disjoint)
1. [Word disjoint](#Word-disjoint)
1. [Baseline](#Baseline)
1. [Representing words: vector_func](#Representing-words:-vector_func)
1. [Combining words into inputs: vector_combo_func](#Combining-words-into-inputs:-vector_combo_func)
1. [Classifier model](#Classifier-model)
1. [Baseline results](#Baseline-results)
1. [Homework questions](#Homework-questions)
1. [Hypothesis-only baseline [2 points]](#Hypothesis-only-baseline-[2-points])
1. [Alternatives to concatenation [2 points]](#Alternatives-to-concatenation-[2-points])
1. [A deeper network [2 points]](#A-deeper-network-[2-points])
1. [Your original system [3 points]](#Your-original-system-[3-points])
1. [Bake-off [1 point]](#Bake-off-[1-point])
## Overview
The general problem is word-level natural language inference.
Training examples are pairs of words $(w_{L}, w_{R}), y$ with $y = 1$ if $w_{L}$ entails $w_{R}$, otherwise $0$.
The homework questions below ask you to define baseline models for this and develop your own system for entry in the bake-off, which will take place on a held-out test-set distributed at the start of the bake-off. (Thus, all the data you have available for development is available for training your final system before the bake-off begins.)
## Set-up
See [the first notebook in this unit](nli_01_task_and_data.ipynb) for set-up instructions.
```python
from collections import defaultdict
import json
import numpy as np
import os
import pandas as pd
from torch_shallow_neural_classifier import TorchShallowNeuralClassifier
import nli
import utils
```
```python
DATA_HOME = 'data'
NLIDATA_HOME = os.path.join(DATA_HOME, 'nlidata')
wordentail_filename = os.path.join(
NLIDATA_HOME, 'nli_wordentail_bakeoff_data.json')
GLOVE_HOME = os.path.join(DATA_HOME, 'glove.6B')
```
## Data
I've processed the data into two different train/test splits, in an effort to put some pressure on our models to actually learn these semantic relations, as opposed to exploiting regularities in the sample.
* `edge_disjoint`: The `train` and `dev` __edge__ sets are disjoint, but many __words__ appear in both `train` and `dev`.
* `word_disjoint`: The `train` and `dev` __vocabularies are disjoint__, and thus the edges are disjoint as well.
These are very different problems. For `word_disjoint`, there is real pressure on the model to learn abstract relationships, as opposed to memorizing properties of individual words.
```python
with open(wordentail_filename) as f:
wordentail_data = json.load(f)
```
The outer keys are the splits plus a list giving the vocabulary for the entire dataset:
```python
wordentail_data.keys()
```
dict_keys(['edge_disjoint', 'vocab', 'word_disjoint'])
### Edge disjoint
```python
wordentail_data['edge_disjoint'].keys()
```
dict_keys(['dev', 'train'])
This is what the split looks like; all three have this same format:
```python
wordentail_data['edge_disjoint']['dev'][: 5]
```
[[['sweater', 'stroke'], 0],
[['constipation', 'hypovolemia'], 0],
[['disease', 'inflammation'], 0],
[['herring', 'animal'], 1],
[['cauliflower', 'outlook'], 0]]
Let's test to make sure no edges are shared between `train` and `dev`:
```python
nli.get_edge_overlap_size(wordentail_data, 'edge_disjoint')
```
0
As we expect, a *lot* of vocabulary items are shared between `train` and `dev`:
```python
nli.get_vocab_overlap_size(wordentail_data, 'edge_disjoint')
```
2916
This is a large percentage of the entire vocab:
```python
len(wordentail_data['vocab'])
```
8470
Here's the distribution of labels in the `train` set. It's highly imbalanced, which will pose a challenge for learning. (I'll go ahead and reveal that the `dev` set is similarly distributed.)
```python
def label_distribution(split):
return pd.DataFrame(wordentail_data[split]['train'])[1].value_counts()
```
```python
label_distribution('edge_disjoint')
```
0 14650
1 2745
Name: 1, dtype: int64
### Word disjoint
```python
wordentail_data['word_disjoint'].keys()
```
dict_keys(['dev', 'train'])
In the `word_disjoint` split, no __words__ are shared between `train` and `dev`:
```python
nli.get_vocab_overlap_size(wordentail_data, 'word_disjoint')
```
0
Because no words are shared between `train` and `dev`, no edges are either:
```python
nli.get_edge_overlap_size(wordentail_data, 'word_disjoint')
```
0
The label distribution is similar to that of `edge_disjoint`, though the overall number of examples is a bit smaller:
```python
label_distribution('word_disjoint')
```
0 7199
1 1349
Name: 1, dtype: int64
## Baseline
Even in deep learning, __feature representation is vital and requires care!__ For our task, feature representation has two parts: representing the individual words and combining those representations into a single network input.
### Representing words: vector_func
Let's consider two baseline word representations methods:
1. Random vectors (as returned by `utils.randvec`).
1. 50-dimensional GloVe representations.
```python
def randvec(w, n=50, lower=-1.0, upper=1.0):
"""Returns a random vector of length `n`. `w` is ignored."""
return utils.randvec(n=n, lower=lower, upper=upper)
```
```python
# Any of the files in glove.6B will work here:
glove_dim = 50
glove_src = os.path.join(GLOVE_HOME, 'glove.6B.{}d.txt'.format(glove_dim))
# Creates a dict mapping strings (words) to GloVe vectors:
GLOVE = utils.glove2dict(glove_src)
def glove_vec(w):
"""Return `w`'s GloVe representation if available, else return
a random vector."""
return GLOVE.get(w, randvec(w, n=glove_dim))
```
### Combining words into inputs: vector_combo_func
Here we decide how to combine the two word vectors into a single representation. In more detail, where `u` is a vector representation of the left word and `v` is a vector representation of the right word, we need a function `vector_combo_func` such that `vector_combo_func(u, v)` returns a new input vector `z` of dimension `m`. A simple example is concatenation:
```python
def vec_concatenate(u, v):
"""Concatenate np.array instances `u` and `v` into a new np.array"""
return np.concatenate((u, v))
```
`vector_combo_func` could instead be vector average, vector difference, etc. (even combinations of those) – there's lots of space for experimentation here; [homework question 2](#Alternatives-to-concatenation-[1-point]) below pushes you to do some exploration.
### Classifier model
For a baseline model, I chose `TorchShallowNeuralClassifier`:
```python
net = TorchShallowNeuralClassifier(hidden_dim=50, max_iter=100)
```
### Baseline results
The following puts the above pieces together, using `vector_func=glove_vec`, since `vector_func=randvec` seems so hopelessly misguided for `word_disjoint`!
```python
word_disjoint_experiment = nli.wordentail_experiment(
train_data=wordentail_data['word_disjoint']['train'],
assess_data=wordentail_data['word_disjoint']['dev'],
model=net,
vector_func=glove_vec,
vector_combo_func=vec_concatenate)
print("macro-f1: {0}".format(word_disjoint_experiment['macro-F1']))
```
Finished epoch 100 of 100; error is 0.02750065876170993
precision recall f1-score support
0 0.919 0.939 0.929 1910
1 0.406 0.335 0.367 239
accuracy 0.872 2149
macro avg 0.662 0.637 0.648 2149
weighted avg 0.862 0.872 0.866 2149
macro-f1: 0.6477534575895934
## Homework questions
Please embed your homework responses in this notebook, and do not delete any cells from the notebook. (You are free to add as many cells as you like as part of your responses.)
### Hypothesis-only baseline [2 points]
During our discussion of SNLI and MultiNLI, we noted that a number of research teams have shown that hypothesis-only baselines for NLI tasks can be remarkably robust. This question asks you to explore briefly how this baseline affects the 'edge_disjoint' and 'word_disjoint' versions of our task.
For this problem, submit two functions:
1. A `vector_combo_func` function called `hypothesis_only` that simply throws away the premise, using the unmodified hypothesis (second) vector as its representation of the example.
1. A function called `run_hypothesis_only_evaluation` that does the following:
1. Loops over the two conditions 'word_disjoint' and 'edge_disjoint' and the two `vector_combo_func` values `vec_concatenate` and `hypothesis_only`, calling `nli.wordentail_experiment` to train on the conditions 'train' portion and assess on its 'dev' portion, with `glove_vec` as the `vector_func`. So that the results are consistent, use an `sklearn.linear_model.LogisticRegression` with default parameters as the model.
1. Returns a `dict` mapping `(condition_name, function_name)` pairs to the 'macro-F1' score for that pair, as returned by the call to `nli.wordentail_experiment`. (Tip: you can get the `str` name of your function `hypothesis_only` with `hypothesis_only.__name__`.)
The test functions `test_hypothesis_only` and `test_run_hypothesis_only_evaluation` will help ensure that your functions have the desired logic.
```python
##### YOUR CODE HERE
def hypothesis_only(u, v):
"""Just return the hypothesis part"""
return v
def run_hypothesis_only_evaluation():
##### YOUR CODE HERE
from sklearn.linear_model import LogisticRegression
eval_results = {}
net = LogisticRegression()
for condition_name in ['edge_disjoint', 'word_disjoint']:
for vec_combo_func in [vec_concatenate, hypothesis_only]:
result = nli.wordentail_experiment(
train_data=wordentail_data[condition_name]['train'],
assess_data=wordentail_data[condition_name]['dev'],
model=net,
vector_func=glove_vec,
vector_combo_func=vec_combo_func)
print("macro-f1: {0}".format(result['macro-F1']))
eval_results[(condition_name, vec_combo_func.__name__)] = result['macro-F1']
return eval_results
```
```python
def test_hypothesis_only(hypothesis_only):
v = hypothesis_only(1, 2)
assert v == 2
```
```python
test_hypothesis_only(hypothesis_only)
```
```python
def test_run_hypothesis_only_evaluation(run_hypothesis_only_evaluation):
results = run_hypothesis_only_evaluation()
assert ('word_disjoint', 'vec_concatenate') in results, \
"The return value of `run_hypothesis_only_evaluation` does not have the intended kind of keys"
assert isinstance(results[('word_disjoint', 'vec_concatenate')], float), \
"The values of the `run_hypothesis_only_evaluation` result should be floats"
```
```python
test_run_hypothesis_only_evaluation(run_hypothesis_only_evaluation)
```
precision recall f1-score support
0 0.875 0.969 0.920 7376
1 0.574 0.230 0.328 1321
accuracy 0.857 8697
macro avg 0.725 0.600 0.624 8697
weighted avg 0.830 0.857 0.830 8697
macro-f1: 0.6242497026339122
precision recall f1-score support
0 0.872 0.975 0.920 7376
1 0.584 0.199 0.297 1321
accuracy 0.857 8697
macro avg 0.728 0.587 0.609 8697
weighted avg 0.828 0.857 0.826 8697
macro-f1: 0.6086585700699786
precision recall f1-score support
0 0.902 0.979 0.939 1910
1 0.474 0.151 0.229 239
accuracy 0.887 2149
macro avg 0.688 0.565 0.584 2149
weighted avg 0.854 0.887 0.860 2149
macro-f1: 0.5837810695455686
precision recall f1-score support
0 0.893 0.989 0.939 1910
1 0.382 0.054 0.095 239
accuracy 0.885 2149
macro avg 0.638 0.522 0.517 2149
weighted avg 0.836 0.885 0.845 2149
macro-f1: 0.5169358178053831
### Alternatives to concatenation [2 points]
We've so far just used vector concatenation to represent the premise and hypothesis words. This question asks you to explore two simple alternatives:
1. Write a function `vec_diff` that, for a given pair of vector inputs `u` and `v`, returns the element-wise difference between `u` and `v`.
1. Write a function `vec_max` that, for a given pair of vector inputs `u` and `v`, returns the element-wise max values between `u` and `v`.
You needn't include your uses of `nli.wordentail_experiment` with these functions, but we assume you'll be curious to see how they do!
```python
def vec_diff(u, v):
##### YOUR CODE HERE
return u - v
def vec_max(u, v):
##### YOUR CODE HERE
return np.maximum(u, v)
```
```python
def test_vec_diff(vec_diff):
u = np.array([10.2, 8.1])
v = np.array([1.2, -7.1])
result = vec_diff(u, v)
expected = np.array([9.0, 15.2])
assert np.array_equal(result, expected), \
"Expected {}; got {}".format(expected, result)
```
```python
test_vec_diff(vec_diff)
```
```python
def test_vec_max(vec_max):
u = np.array([1.2, 8.1])
v = np.array([10.2, -7.1])
result = vec_max(u, v)
expected = np.array([10.2, 8.1])
assert np.array_equal(result, expected), \
"Expected {}; got {}".format(expected, result)
```
```python
test_vec_max(vec_max)
```
```python
if 'IS_GRADESCOPE_ENV' not in os.environ:
net = TorchShallowNeuralClassifier(hidden_dim=50, max_iter=100)
print(net)
for vec_combo_func in [vec_diff, vec_max, vec_concatenate, hypothesis_only]:
result = nli.wordentail_experiment(
train_data=wordentail_data['word_disjoint']['train'],
assess_data=wordentail_data['word_disjoint']['dev'],
model=net,
vector_func=glove_vec,
vector_combo_func=vec_combo_func)
print("macro-f1: {0}".format(result['macro-F1']))
```
TorchShallowNeuralClassifier(
hidden_dim=50,
hidden_activation=Tanh(),
batch_size=1028,
max_iter=100,
eta=0.01,
optimizer=<class 'torch.optim.adam.Adam'>,
l2_strength=0)
Finished epoch 100 of 100; error is 0.28842502646148205
precision recall f1-score support
0 0.909 0.842 0.874 1910
1 0.205 0.326 0.252 239
accuracy 0.785 2149
macro avg 0.557 0.584 0.563 2149
weighted avg 0.831 0.785 0.805 2149
macro-f1: 0.5630849852522789
Finished epoch 100 of 100; error is 1.056394025683403
precision recall f1-score support
0 0.910 0.890 0.900 1910
1 0.252 0.297 0.273 239
accuracy 0.824 2149
macro avg 0.581 0.593 0.586 2149
weighted avg 0.837 0.824 0.830 2149
macro-f1: 0.586104297300003
Finished epoch 100 of 100; error is 0.02869832795113325
precision recall f1-score support
0 0.927 0.924 0.925 1910
1 0.409 0.423 0.416 239
accuracy 0.868 2149
macro avg 0.668 0.673 0.671 2149
weighted avg 0.870 0.868 0.869 2149
macro-f1: 0.6705681430526949
Finished epoch 100 of 100; error is 1.5673212707042694
precision recall f1-score support
0 0.903 0.919 0.911 1910
1 0.248 0.213 0.229 239
accuracy 0.840 2149
macro avg 0.575 0.566 0.570 2149
weighted avg 0.830 0.840 0.835 2149
macro-f1: 0.5700959707451074
### A deeper network [2 points]
It is very easy to subclass `TorchShallowNeuralClassifier` if all you want to do is change the network graph: all you have to do is write a new `define_graph`. If your graph has new arguments that the user might want to set, then you should also redefine `__init__` so that these values are accepted and set as attributes.
For this question, please subclass `TorchShallowNeuralClassifier` so that it defines the following graph:
$$\begin{align}
h_{1} &= xW_{1} + b_{1} \\
r_{1} &= \textbf{Bernoulli}(1 - \textbf{dropout\_prob}, n) \\
d_{1} &= r_1 * h_{1} \\
h_{2} &= f(d_{1}) \\
h_{3} &= h_{2}W_{2} + b_{2}
\end{align}$$
Here, $r_{1}$ and $d_{1}$ define a dropout layer: $r_{1}$ is a random binary vector of dimension $n$, where the probability of a value being $1$ is given by $1 - \textbf{dropout_prob}$. $r_{1}$ is multiplied element-wise by our first hidden representation, thereby zeroing out some of the values. The result is fed to the user's activation function $f$, and the result of that is fed through another linear layer to produce $h_{3}$. (Inside `TorchShallowNeuralClassifier`, $h_{3}$ is the basis for a softmax classifier, so no activation function is applied to it.)
For your implementation, please use `nn.Sequential`, `nn.Linear`, and `nn.Dropout` to define the required layers.
For comparison, using this notation, `TorchShallowNeuralClassifier` defines the following graph:
$$\begin{align}
h_{1} &= xW_{1} + b_{1} \\
h_{2} &= f(h_{1}) \\
h_{3} &= h_{2}W_{2} + b_{2}
\end{align}$$
The following code starts this sub-class for you, so that you can concentrate on `define_graph`. Be sure to make use of `self.dropout_prob`
For this problem, submit just your completed `TorchDeepNeuralClassifier`. You needn't evaluate it, though we assume you will be keen to do that!
You can use `test_TorchDeepNeuralClassifier` to ensure that your network has the intended structure.
```python
import torch.nn as nn
class TorchDeepNeuralClassifier(TorchShallowNeuralClassifier):
def __init__(self, dropout_prob=0.7, **kwargs):
self.dropout_prob = dropout_prob
super().__init__(**kwargs)
def define_graph(self):
"""Complete this method!
Returns
-------
an `nn.Module` instance, which can be a free-standing class you
        write yourself, as in `torch_rnn_classifier`, or the output of
`nn.Sequential`, as in `torch_shallow_neural_classifier`.
"""
##### YOUR CODE HERE
return nn.Sequential(
nn.Linear(self.input_dim, self.hidden_dim),
nn.Dropout(p=self.dropout_prob),
self.hidden_activation,
nn.Linear(self.hidden_dim, self.n_classes_))
##### YOUR CODE HERE
if 'IS_GRADESCOPE_ENV' not in os.environ:
net = TorchDeepNeuralClassifier()
for vec_combo_func in [vec_diff, vec_max, vec_concatenate, hypothesis_only]:
result = nli.wordentail_experiment(
train_data=wordentail_data['word_disjoint']['train'],
assess_data=wordentail_data['word_disjoint']['dev'],
model=net,
vector_func=glove_vec,
vector_combo_func=vec_combo_func)
print("macro-f1: {0}".format(result['macro-F1']))
```
Finished epoch 100 of 100; error is 3.103418618440628
precision recall f1-score support
0 0.919 0.919 0.919 1910
1 0.351 0.351 0.351 239
accuracy 0.856 2149
macro avg 0.635 0.635 0.635 2149
weighted avg 0.856 0.856 0.856 2149
macro-f1: 0.6351563013428553
Finished epoch 100 of 100; error is 3.0264479219913483
precision recall f1-score support
0 0.904 0.982 0.941 1910
1 0.534 0.163 0.250 239
accuracy 0.891 2149
macro avg 0.719 0.573 0.596 2149
weighted avg 0.863 0.891 0.864 2149
macro-f1: 0.5956472654290015
Finished epoch 100 of 100; error is 2.4502106606960297
precision recall f1-score support
0 0.905 0.987 0.944 1910
1 0.621 0.172 0.269 239
accuracy 0.896 2149
macro avg 0.763 0.579 0.607 2149
weighted avg 0.873 0.896 0.869 2149
macro-f1: 0.6065023627413547
Finished epoch 100 of 100; error is 2.7340420484542847
precision recall f1-score support
0 0.895 0.989 0.940 1910
1 0.447 0.071 0.123 239
accuracy 0.887 2149
macro avg 0.671 0.530 0.531 2149
weighted avg 0.845 0.887 0.849 2149
macro-f1: 0.5311554770666994
```python
def test_TorchDeepNeuralClassifier(TorchDeepNeuralClassifier):
dropout_prob = 0.55
assert hasattr(TorchDeepNeuralClassifier(), "dropout_prob"), \
"TorchDeepNeuralClassifier must have an attribute `dropout_prob`."
try:
inst = TorchDeepNeuralClassifier(dropout_prob=dropout_prob)
except TypeError:
raise TypeError("TorchDeepNeuralClassifier must allow the user "
"to set `dropout_prob` on initialization")
inst.input_dim = 10
inst.n_classes_ = 5
graph = inst.define_graph()
assert len(graph) == 4, \
"The graph should have 4 layers; yours has {}".format(len(graph))
expected = {
0: 'Linear',
1: 'Dropout',
2: 'Tanh',
3: 'Linear'}
for i, label in expected.items():
name = graph[i].__class__.__name__
assert label in name, \
"The {} layer of the graph should be a {} layer; yours is {}".format(i, label, name)
assert graph[1].p == dropout_prob, \
"The user's value for `dropout_prob` should be the value of `p` for the Dropout layer."
```
```python
test_TorchDeepNeuralClassifier(TorchDeepNeuralClassifier)
```
### Your original system [3 points]
This is a simple dataset, but our focus on the 'word_disjoint' condition ensures that it's a challenging one, and there are lots of modeling strategies one might adopt.
You are free to do whatever you like. We require only that your system differ in some way from those defined in the preceding questions. They don't have to be completely different, though. For example, you might want to stick with the model but represent examples differently, or the reverse.
Keep in mind that, for the bake-off evaluation, the 'edge_disjoint' portions of the data are off limits. You can, though, train on the combination of the 'word_disjoint' 'train' and 'dev' portions. You are free to use different pretrained word vectors and the like. Please do not introduce additional entailment datasets into your training data, though.
Please embed your code in this notebook so that we can rerun it.
In the cell below, please provide a brief technical description of your original system, so that the teaching team can gain an understanding of what it does. This will help us to understand your code and analyze all the submissions to identify patterns and strategies.
```python
# Enter your system description in this cell.
# Tried out different systems:
# system_0 : Original system.
# Uses bidirectional RNN classifier (0.68)
# Choosing this as original system.
# system_1 : Variations of vector_combo_func (0.70)
# system_2 : Retrofit GLOVE using WordNet (0.64)
# system_3 : Data augmentation of 'entails' class using WordNet to avoid imbalance during training. (0.48)
# My peak score was: 0.68
if 'IS_GRADESCOPE_ENV' not in os.environ:
from nltk.corpus import wordnet as wn
from retrofitting import Retrofitter
from torch_rnn_classifier import TorchRNNClassifier
def get_wordnet_edges():
edges = defaultdict(set)
for ss in wn.all_synsets():
lem_names = {lem.name() for lem in ss.lemmas()}
for lem in lem_names:
edges[lem] |= lem_names
return edges
wn_edges = get_wordnet_edges()
# Idea: Bidirectional RNN classifier
def system_0_original():
# Data------------
with open(wordentail_filename) as f:
wordentail_data = json.load(f)
print("Distribution of labels : \n{0}".format(pd.DataFrame(wordentail_data['word_disjoint']['train'])[1].value_counts()))
# Model-----------
X_glove = pd.DataFrame(GLOVE)
X_glove['$UNK'] = 0
X_glove = X_glove.T
vocab = list(X_glove.index)
embedding = X_glove.values
net = TorchRNNClassifier(vocab=vocab, embedding=embedding, bidirectional=True)
# Exp-------------
result = nli.wordentail_experiment(
train_data=wordentail_data['word_disjoint']['train'],
assess_data=wordentail_data['word_disjoint']['dev'],
model=net,
vector_func=lambda x: np.array([x]),
vector_combo_func=vec_concatenate)
return result['macro-F1']
#############################################################################
# Idea: Variations of vector_combo_func.
def system_1():
# Data------------
with open(wordentail_filename) as f:
wordentail_data = json.load(f)
print("Distribution of labels : \n{0}".format(pd.DataFrame(wordentail_data['word_disjoint']['train'])[1].value_counts()))
def vec_merge(u, v):
"""Merge different feature reps including array diff, max, avg etc."""
return np.concatenate((u, v, vec_diff(u, v), vec_max(u,v)))
# Model-----------
net = TorchShallowNeuralClassifier(hidden_dim=50, max_iter=100)
print(net)
# Exp-------------
result = nli.wordentail_experiment(
train_data=wordentail_data['word_disjoint']['train'],
assess_data=wordentail_data['word_disjoint']['dev'],
model=net,
vector_func=glove_vec,
vector_combo_func=vec_merge)
return result['macro-F1']
#######################################################################
# Idea: Retrofit GLOVE using WordNet
def system_2():
# Data------------
with open(wordentail_filename) as f:
wordentail_data = json.load(f)
X_glove = pd.DataFrame(GLOVE).T
print(X_glove.shape)
def convert_edges_to_indices(edges, Q):
lookup = dict(zip(Q.index, range(Q.shape[0])))
index_edges = defaultdict(set)
for start, finish_nodes in edges.items():
s = lookup.get(start)
if s:
f = {lookup[n] for n in finish_nodes if n in lookup}
if f:
index_edges[s] = f
return index_edges
wn_index_edges = convert_edges_to_indices(wn_edges, X_glove)
wn_retro = Retrofitter(verbose=True)
X_retro = wn_retro.fit(X_glove, wn_index_edges)
print(X_retro.shape)
def retro_vec(w):
"""Return `w`'s Retrofitted representation if available, else return
a random vector."""
return X_retro.loc[w].values if w in X_retro.index else randvec(w, n=glove_dim)
# Model-----------
net = TorchShallowNeuralClassifier(hidden_dim=50, max_iter=100)
print(net)
# Exp-------------
result = nli.wordentail_experiment(
train_data=wordentail_data['word_disjoint']['train'],
assess_data=wordentail_data['word_disjoint']['dev'],
model=net,
vector_func=retro_vec,
vector_combo_func=vec_concatenate)
return result['macro-F1']
################################################################
# Idea: Data augmentation of 'entails' class using wordnet
def system_3():
# Data------------
with open(wordentail_filename) as f:
wordentail_data = json.load(f)
x_train = wordentail_data['word_disjoint']['train']
print("Existing distribution of labels : \n{0}".format(pd.DataFrame(x_train)[1].value_counts()))
# get wordnet edges
def get_wordnet_edges():
edges = defaultdict(set)
for ss in wn.all_synsets():
lem_names = {lem.name() for lem in ss.lemmas()}
for lem in lem_names:
edges[lem] |= lem_names
return edges
wn_edges = get_wordnet_edges()
# data augmentation of positive entailments.
positive_entailments = []
for premise_hypothesis, label in x_train:
if label == 1:
positive_entailments.append(premise_hypothesis)
print("Current count of positives: {0}".format(len(positive_entailments)))
positive_entailments_ex = []
for premise_hypothesis in positive_entailments:
premise = premise_hypothesis[0]
hypothesis = premise_hypothesis[1]
for wn_premise in wn_edges[premise]:
if premise == wn_premise:
continue
for wn_hypothesis in wn_edges[hypothesis]:
if wn_hypothesis == hypothesis:
continue
positive_entailments_ex.append([wn_premise, wn_hypothesis])
print("New count of positives to add: {0}".format(len(positive_entailments_ex)))
x_train.extend([[item, 1] for item in positive_entailments_ex])
print("New distribution of labels : \n{0}".format(pd.DataFrame(wordentail_data['word_disjoint']['train'])[1].value_counts()))
# Model-----------
net = TorchShallowNeuralClassifier(hidden_dim=50, max_iter=100)
# Exp-------------
result = nli.wordentail_experiment(
train_data=wordentail_data['word_disjoint']['train'],
assess_data=wordentail_data['word_disjoint']['dev'],
model=net,
vector_func=glove_vec,
vector_combo_func=vec_concatenate)
return result['macro-F1']
###################################################################
print("System 0 (Original) Score:{0}".format(system_0_original()))
print("="*100)
# print("System 1 Score:{0}".format(system_1()))
# print("="*100)
# print("System 2 Score:{0}".format(system_2()))
# print("="*100)
# print("System 3 Score:{0}".format(system_3()))
# print("="*100)
####################################################################
# Please do not remove this comment.
```
Distribution of labels :
0 7199
1 1349
Name: 1, dtype: int64
Finished epoch 100 of 100; error is 0.02302360360044986
precision recall f1-score support
0 0.932 0.910 0.921 1910
1 0.396 0.469 0.429 239
accuracy 0.861 2149
macro avg 0.664 0.690 0.675 2149
weighted avg 0.872 0.861 0.866 2149
System 0 (Original) Score:0.6750996412104682
====================================================================================================
## Bake-off [1 point]
The goal of the bake-off is to achieve the highest macro-average F1 score on __word_disjoint__, on a test set that we will make available at the start of the bake-off. The announcement will go out on the discussion forum. To enter, you'll be asked to run `nli.bake_off_evaluation` on the output of your chosen `nli.wordentail_experiment` run.
The cells below this one constitute your bake-off entry.
The rules described in the [Your original system](#Your-original-system-[3-points]) homework question are also in effect for the bake-off.
Systems that enter will receive the additional homework point, and systems that achieve the top score will receive an additional 0.5 points. We will test the top-performing systems ourselves, and only systems for which we can reproduce the reported results will win the extra 0.5 points.
Late entries will be accepted, but they cannot earn the extra 0.5 points. Similarly, you cannot win the bake-off unless your homework is submitted on time.
The announcement will include the details on where to submit your entry.
```python
# Enter your bake-off assessment code into this cell.
# Please do not remove this comment.
##### YOUR CODE HERE
```
```python
# On an otherwise blank line in this cell, please enter
# your macro-avg f1 value as reported by the code above.
# Please enter only a number between 0 and 1 inclusive.
# Please do not remove this comment.
##### YOUR CODE HERE
```
|
d48f79b4547a02c24f64014c8b3680fbfd344d6a
| 50,864 |
ipynb
|
Jupyter Notebook
|
hw_wordentail-Copy2.ipynb
|
abgoswam/cs224u
|
33e1a22d1c9586b473f43b388163a74264e9258a
|
[
"Apache-2.0"
] | null | null | null |
hw_wordentail-Copy2.ipynb
|
abgoswam/cs224u
|
33e1a22d1c9586b473f43b388163a74264e9258a
|
[
"Apache-2.0"
] | null | null | null |
hw_wordentail-Copy2.ipynb
|
abgoswam/cs224u
|
33e1a22d1c9586b473f43b388163a74264e9258a
|
[
"Apache-2.0"
] | null | null | null | 33.136156 | 574 | 0.518579 | true | 8,776 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.665411 | 0.824462 | 0.548606 |
__label__eng_Latn
| 0.892008 | 0.112924 |
>>> Work in Progress (Following are the lecture notes of Prof Fei-Fei Li/Prof Justin Johnson/Prof Serena Yeung - CS231n - Stanford. This is my interpretation of their excellent teaching and I take full responsibility of any misinterpretation or misinformation provided herein.)
### Lecture 3: Loss Function and Optimization
#### Outline:
- Loss functions
- A loss function quantifies the unhappiness with the scores across the training data
- a function that takes in a W and tells us how bad quantitatively is that W
- minimize the loss on training example
- different types of loss
- Optimization
- Come up with a way of efficient procedure to calculate W
- efficiently come up with the procedure of searching through the space of all possible Ws and come up with what is the correct value of W that is the least bad
#### Loss function
- Given a dataset $\{(x_{i}, y_{i})\}_{i=1}^{N}$, where $x_{i}$ is image and $y_{i}$ is (integer) label
- Loss over the dataset is sum of loss over examples:
> $L(W) = \frac{1}{N}\sum\limits_{i}L_{i}(f(x_{i},W),y_{i}) + \lambda R(W)$
- where 1st term is the data loss
- 2nd term is the regularization correction - making the model simple
- binary SVM - has 2 classes - each example will be classified as positive or negative example
- multinomial SVM - handle multiple classes
#### Multiclass SVM loss
- Given a dataset $\{(x_{i}, y_{i})\}_{i=1}^{N}$, where $x_{i}$ is image and $y_{i}$ is (integer) label
- and scores vector $s = f(x_{i}, W)$
- predicted scores that are coming from the classifier
- $y_{i}$ is the ground truth label
- $s_{y_{i}}$ denotes score of the true class for the ith example in training set
- $s_{1}$ and $s_{2}$ will be cat and dog score respectively
- SVM loss has the form - **Hinge Loss**:
> \begin{equation}\\
\begin{aligned}\\
L_{i} &= \sum\limits_{j \neq y_{i}}
\begin{cases}
0 & \text{if $s_{y_{i}} \geq s_{j} + 1$}\\
s_{j} - s_{y_{i}} + 1 & \text{otherwise}\\
\end{cases}\\
&= \sum\limits_{j \neq y_{i}} max(0, s_{j} - s_{y_{i}} + 1)
\end{aligned}\\
\end{equation}\\
$\tiny{\text{YouTube-Stanford-CS231n-Justin Johnson}}$
- If the true score is high, that is good. Otherwise, we will have to incur some loss and that would be bad.
*(Worked-example figures omitted here — source: Stanford CS231n lecture slides, Justin Johnson.)*
- Why +1?
- We care about the relative scores
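A minimal numpy sketch of this loss for a single example (the class scores below are toy values, not taken from a real classifier):

```python
import numpy as np

scores = np.array([3.2, 5.1, -1.7])   # class scores s for one example
y_true = 0                            # index of the correct class

margins = np.maximum(0, scores - scores[y_true] + 1.0)
margins[y_true] = 0                   # the sum runs over j != y_i only
L_i = margins.sum()
print(L_i)                            # 2.9 for these toy scores
```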
#### Regularization - 2nd term
> $L(W) = \frac{1}{N}\sum\limits_{i}L_{i}(f(x_{i},W),y_{i}) + \lambda R(W)$
> where $\lambda$ is the regularization strength (hyperparameter)
- Types of regularization:
- L2 regularization - weight decay - Euclidean norm or squared norm - penalize the Euclidean norm of this weight vector
> $R(W) = \sum_{k}\sum_{l}W^{2}_{k,l}$
- L1 regularization - nice property of encouraging sparsity in matrix W
> $R(W) = \sum_{k}\sum_{l}|W_{k,l}|$
- Elastic net (L1 + L2) regularization - combination of L1 and L2
> $R(W) = \sum_{k}\sum_{l}\beta W^{2}_{k,l} + |W_{k,l}|$
- Max norm regularization - penalizes the max norm rather than L1 and L2 norm
- Dropout regularization - specific to deep learning
- Fancier regularization: Batch normalization, stochastic depth
- Goal of regularization term is that it penalizes the complexity of the model rather than explicitly trying to fit the training data
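A small sketch of how the L2 and L1 penalties score two toy weight vectors (values chosen only for illustration):

```python
import numpy as np

w_spread = np.array([0.25, 0.25, 0.25, 0.25])
w_sparse = np.array([1.0, 0.0, 0.0, 0.0])

# L2 prefers the spread-out weights (0.25 vs 1.0); L1 rates both the same here,
# but in general it pushes weights toward exact zeros (sparsity).
print(np.sum(w_spread**2), np.sum(w_sparse**2))
print(np.sum(np.abs(w_spread)), np.sum(np.abs(w_sparse)))
```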
#### Softmax Classifier (Multinomial Logistic Regression)
- Multiclass SVM
- there was no interpretation of loss function
- the model f spits out scores for the classes, which didn't actually had much interpretation
- all we cared about was the score of correct class must be greater than score of incorrect class
- Multinomial Logistic Regression
- in this case, the scores will have meaning
> Softmax function $P(Y=k \mid X=x_{i}) = \frac{e^{s_{k}}}{\sum_{j}e^{s_{j}}}$
> where scores $s = f(x_{i}; W)$ = unnormalized log probabilities of the classes
- the probability of softmax function sum to 1
- To maximize the log likelihood, or (for a loss function) to minimize the negative log likelihood of the correct class:
> $L_{i} = -\log P(Y=y_{i} \mid X=x_{i})$
- more weight (i.e., probability of 1) should be on the cat and 0 probability for all other classes
- computed probability distribution coming out of the softmax function should match this target probability distribution that has all the mass on the correct class
- use KL divergence
- maximum likelihood estimate
- Goal is the probability of true class is high and as close to 1
- Loss function will be the -log of the probability of true class
> $L_{i} = -\log \frac{e^{s_{y_{i}}}}{\sum_{j}e^{s_{j}}}$
- Calculation steps:
- Calculate unnormalized log probabilities, as above
- calculate exponent of it(unnormalized probabilities)
- calculate normalized value (probabilities)
- calculate negative log (softmax loss function value) (or multinomial logistic regression)
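The calculation steps above as a small numpy sketch (toy scores, same as in the hinge-loss example):

```python
import numpy as np

scores = np.array([3.2, 5.1, -1.7])       # unnormalized log probabilities
y_true = 0

unnorm = np.exp(scores)                   # unnormalized probabilities
probs = unnorm / unnorm.sum()             # normalized probabilities
L_i = -np.log(probs[y_true])              # negative log of the true-class probability
print(probs, L_i)
```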
#### Optimization
- find bottom of the valley
- use types of iterative method
- types
- random search
- depends on luck
- follow the slope.
- use local geometry, which way will take me little bit down
- gradient is the vector of (partial derivatives) along each dimension
- slope in any direction is the dot product of the direction with the gradient
- direction of steepest descent is negative gradient
- use finite differences
- adv
- easy to write
- disadv
- approximate
- can be very slow if size is large
- in practice, it is never used
- instead compute analytic gradient
- calculate dW in one step instead of looping over iteratively
- adv
- exact, fast
- disadv
- error prone
- in practice, always use analytic gradient, but check implementation with numerical gradient - gradient check
*(Figures omitted — source: Stanford CS231n lecture slides, Justin Johnson.)*
- gradient descent
- most used
- initialize W random
- compute loss and gradient
- update the weights in opposite of the gradient direction
- gradient points to the direction of greatest increase
- minus gradient points in the direction of greatest decrease
- take small step in the direction of minus gradient
- repeat till it converges
- step size or learning rate is a hyperparameter
- tells us how far we step in the direction of gradient
- step size is the first hyperparameter we check
- model size and regularization can be dealt later, but step size should be the primary focus
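A sketch of the gradient check mentioned above, on a toy loss whose analytic gradient is known (the loss and weights here are illustrative only):

```python
import numpy as np

def numerical_gradient(f, w, h=1e-5):
    """Centered finite-difference estimate of the gradient of f at w."""
    grad = np.zeros_like(w)
    for i in range(w.size):
        e = np.zeros_like(w)
        e[i] = h
        grad[i] = (f(w + e) - f(w - e)) / (2 * h)
    return grad

loss = lambda w: np.sum(w**2)           # toy loss; analytic gradient is 2w
w = np.array([1.0, -2.0, 0.5])
print(numerical_gradient(loss, w))      # should closely match 2*w = [2, -4, 1]
```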
#### Stochastic Gradient Descent (SGD)
> $L(W) = \frac{1}{N}\sum\limits_{i}L_{i}(f(x_{i},W),y_{i}) + \lambda R(W)$
> $\nabla_{W}L(W) = \frac{1}{N}\sum\limits_{i}\nabla_{W} L_{i}(f(x_{i},W),y_{i}) + \lambda\nabla_{W} R(W)$
- Vanilla Minibatch Gradient Descent
- minibatch of size 32/64/128
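A self-contained minibatch SGD sketch on a toy linear-regression objective (not the image classifier from the lecture; the data, step size, and batch size are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                # toy inputs
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=1000)  # noisy targets

W = np.zeros(5)
step_size, batch_size = 1e-2, 64
for _ in range(500):
    idx = rng.integers(0, len(X), size=batch_size)   # sample a minibatch
    Xb, yb = X[idx], y[idx]
    grad = 2 * Xb.T @ (Xb @ W - yb) / batch_size     # gradient of the batch MSE
    W -= step_size * grad                            # vanilla SGD update
print(np.round(W - w_true, 2))                       # should be close to zero
```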
#### Image features
- Feeding raw pixels directly into linear classifiers does not work too well
- Prior to deep neural network popularity, two stage approach was used
- first, take your image, compute various feature representations
- then concatenate these feature vectors to give some feature representation of image
- trick is to use right feature transform for the problem statement
- example
- color histogram
- count how many pixels fall into each bucket
- tells us what type of color exist in image
- histogram of oriented gradients (HoG)
- dominant edge direction of each pixel
- compute histogram over these different edge orientation in bucket
- tells us what type of edge exist in image
- was used for object recognition in the past
- bag of words (comes from NLP)
- in NLP, number of words in a paragraph are counted
- apply same concept in images
- no straightforward analogy of words and images
- create your own version of vocabulary of visual words
- get sample
- ConvNets
*Source: chandrabsingh/learnings, cs231n_cnn/lec03-lossFun-optimization.ipynb (MIT license)*
# Exercise 3 - Simulation of conditional distributions
### Julian Ferres - Student ID No. 101483
## Problem statement:
Let $X \sim N(0,1)$, truncated to the interval $[-1,1]$
Think of $m(x) = E[Y | X=x]$ as:
\begin{equation}
m(x) := \left\{
\begin{array}{ll}
\frac{(x + 2)^2}{2} & \text{if } -1\leq x<-0.5 \\
\frac{x}{2}+0.875 & \text{if } -0.5 \leq x \leq 0\\
-5(x-0.2)^2 +1.075 & \text{if } 0 < x \leq 0.5 \\
x + 0.125 & \text{if } 0.5 \leq x < 1
\end{array}
\right.
\end{equation}
Given $x$, the conditional distribution of $Y - m(x)$ is $N(0, \sigma ^2(x))$,
with $\sigma(x)=0.2-0.1 \cos(2x)$
- We are asked to simulate $200$ points $(X,Y)$ and plot them in the plane. We will also need
the $200$ ordered pairs for later analysis
- Reconstruct $m(x)$ from the $200$ points. To do so:
partition $[-1,1]$ into intervals of length $h$ and, on each interval, find the polynomial $f$ of degree $M$ that minimizes the mean squared error $$ \frac{1}{n} \sum |f(X_i)-Y_i|^2$$
Use:
1. $h = 0.5$ , $M=1$
2. $h = 0.1$ , $M=1$
3. $h = 0.25$ , $M=2$
4. $h = 0.5$ , $M=2$
## Solution:
#### Import all the libraries and initialize functions
```python
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from math import cos, pi
from scipy.stats import truncnorm
```
```python
m1 = lambda x: (x+2)**2/2
m2 = lambda x: x/2 + 0.875
m3 = lambda x: -5*(x-0.2)**2 + 1.075
m4 = lambda x: x + 0.125
```
```python
def m(x):
if -1 <= x < -0.5:
return m1(x)
if -0.5 <= x < 0:
return m2(x)
if 0 <= x < 0.5:
return m3(x)
if 0.5 <= x < 1:
return m4(x)
m = np.vectorize(m)
```
```python
x_0 = np.linspace(-1,1,1000) # generate 1000 values between -1 and 1 to plot a 'smooth' m(x)
y_0 = m(x_0)
```
#### Truncated normal
```python
a , b = -1 , 1 # limits of the truncated normal
```
```python
x1 = np.linspace(truncnorm.ppf(0.01, a, b),
                truncnorm.ppf(0.99, a, b), 200) # generate 200 quantiles of the truncated normal
plt.plot(x1, truncnorm.pdf(x1, a, b),
       'r-', lw=3, alpha=0.75, label='Truncated normal')
plt.title("Density Plot of X",fontsize='15')
plt.legend(loc='best', frameon= True)
plt.grid()
```
```python
x1 = truncnorm.rvs(a, b, size=200)
# draw the sample from the distribution of X
```
```python
sigma = np.vectorize(lambda x : 0.2 - 0.1 * cos(2*pi*x))
normal = np.vectorize(np.random.normal)
y1 = normal( m(x1),sigma(x1))
```
```python
fig, ax = plt.subplots(figsize=(11,7))
plt.plot(x_0, y_0, 'g-', linewidth = 5, label = 'Función m(x)=E[Y|X=x]')
plt.legend(loc='best', frameon= True)
plt.plot(x1, y1, 'ro' ,markersize= 5, alpha = 0.5 ,label = 'Dispersion (X,Y)')
plt.legend(loc='best', frameon= True)
plt.title("Scatter Plot de (X,Y) y Line plot de m(x)", fontsize='15')
plt.xlabel('X')
plt.ylabel('Y')
plt.show()
```
#### The sample of the $200$ pairs with distribution $(X,Y)$ is stored in the output variable
## Reconstructing the regression
#### With h=0.5 and M=1
```python
partition = [[],[],[],[]]
for i in range(200):
partition[int(2*(x1[i]+1))].append(i)
```
```python
polinomio_a_trozos = []
cuadrado_de_los_errores1 = 0
for i in range(4):
x_aux , y_aux = [x1[j] for j in partition[i]],[y1[j] for j in partition[i]]
z = np.polyfit(x_aux,y_aux,1)
polinomio_a_trozos.append(np.poly1d(z))
    # add up the squared errors for each piece of the polynomial
for j in range(len(x_aux)):
cuadrado_de_los_errores1 += (polinomio_a_trozos[i](x_aux[j])-y_aux[j])**2
```
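The same fit is repeated below for the other $(h, M)$ settings; as a sketch (not part of the original solution), a generic helper along these lines would avoid the duplication:

```python
def piecewise_fit(x, y, h, M):
    # fit a degree-M polynomial on each sub-interval of length h inside [-1, 1]
    n_intervals = int(2 / h)
    models, squared_error = [], 0.0
    for i in range(n_intervals):
        idx = [j for j in range(len(x)) if int((x[j] + 1) / h) == i]
        x_i = [x[j] for j in idx]
        y_i = [y[j] for j in idx]
        p = np.poly1d(np.polyfit(x_i, y_i, M))
        models.append(p)
        squared_error += sum((p(xj) - yj) ** 2 for xj, yj in zip(x_i, y_i))
    return models, (squared_error / len(x)) ** 0.5   # fitted pieces and the RMSE estimate

# e.g. piecewise_fit(x1, y1, h=0.5, M=1) reproduces the fit above
```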
```python
xp=[]
xp.append(np.linspace(-1, -0.5, 200))
xp.append(np.linspace(-0.5,0, 200))
xp.append(np.linspace(0, 0.5, 200))
xp.append(np.linspace(0.5,1, 200))
```
```python
fig, ax = plt.subplots(figsize=(11,7))
plt.plot(x1, y1, 'ro', linewidth = 5, alpha = 0.5 ,label = 'Dispersion X,Y')
plt.legend(loc='best', frameon= True)
for i in range(4):
plt.plot(xp[i], polinomio_a_trozos[i](xp[i]) ,'b-', linewidth = 5 )
plt.plot(x_0, y_0, 'g-', linewidth = 5, alpha = 0.75 ,label = 'Función m(x)=E[Y|X=x]')
plt.legend(loc='best', frameon= True)
plt.title("Estimación m(x) con h=0.5 y M=1", fontsize='15')
plt.xlabel('X')
plt.ylabel('Y')
plt.show()
```
The estimate seems to fit the regression function well; nevertheless, the mean squared error is high because the model is not overfitting
the sample.
#### Estimate of the mean squared error
```python
(cuadrado_de_los_errores1 / 200)**0.5
```
0.21568051286590767
#### With h=0.1 and M=1
```python
partition = [[] for i in range(20)]
for i in range(200):
partition[int(10*(x1[i]+1))].append(i)
```
```python
polinomio_a_trozos = []
cuadrado_de_los_errores2 = 0
for i in range(20):
x_aux , y_aux = [x1[j] for j in partition[i]],[y1[j] for j in partition[i]]
z = np.polyfit(x_aux,y_aux,1)
polinomio_a_trozos.append(np.poly1d(z))
    # add up the squared errors for each piece of the polynomial
for j in range(len(x_aux)):
cuadrado_de_los_errores2 += (polinomio_a_trozos[i](x_aux[j])-y_aux[j])**2
```
```python
xp=[]
for i in range(20):
xp.append(np.linspace(-1+i*(1/10), -0.9+i*(1/10), 200))
```
```python
fig, ax = plt.subplots(figsize=(11,7))
plt.plot(x1, y1, 'ro', linewidth = 5, alpha = 0.5 ,label = 'Dispersion X,Y')
plt.legend(loc='best', frameon= True)
for i in range(20):
plt.plot(xp[i], polinomio_a_trozos[i](xp[i]) ,'b-', linewidth = 5 )
plt.plot(x_0, y_0, 'g-', linewidth = 5, alpha = 0.75,label = 'Función m(x)=E[Y|X=x]')
plt.legend(loc='best', frameon= True)
plt.title("Estimación m(x) con h=0.1 y M=1", fontsize='15')
plt.xlabel('X')
plt.ylabel('Y')
plt.show()
```
A clear case of overfitting can be observed: the mean squared error is moderately low, but the regression is not estimated correctly.
#### Estimate of the mean squared error
```python
(cuadrado_de_los_errores2 / 200)**0.5
```
0.196030417748454
#### With h=0.25 and M=2
```python
partition = [[] for i in range(8)]
for i in range(200):
partition[int(4*(x1[i]+1))].append(i)
```
```python
polinomio_a_trozos = []
cuadrado_de_los_errores3 = 0
for i in range(8):
x_aux , y_aux = [x1[j] for j in partition[i]],[y1[j] for j in partition[i]]
z = np.polyfit(x_aux,y_aux,2)
polinomio_a_trozos.append(np.poly1d(z))
    # add up the squared errors for each piece of the polynomial
for j in range(len(x_aux)):
cuadrado_de_los_errores3 += (polinomio_a_trozos[i](x_aux[j])-y_aux[j])**2
```
```python
xp=[]
for i in range(8):
xp.append(np.linspace(-1+i*(1/4), -1+(i+1)*(1/4), 200))
```
```python
fig, ax = plt.subplots(figsize=(11,7))
plt.plot(x1, y1, 'ro', linewidth = 5,alpha = 0.5, label ='Dispersion X,Y')
plt.legend(loc='best', frameon= True)
for i in range(8):
plt.plot(xp[i], polinomio_a_trozos[i](xp[i]) ,'b-', linewidth = 5 )
plt.plot(x_0, y_0, 'g-', linewidth = 5,alpha = 0.75 ,label = 'Función m(x)=E[Y|X=x]')
plt.legend(loc='best', frameon= True)
plt.title("Estimación m(x) con h=0.25 y M=2", fontsize='15')
plt.xlabel('X')
plt.ylabel('Y')
plt.show()
```
A clear case of overfitting can be observed: the mean squared error is moderately low, but the regression is not estimated correctly.
#### Estimate of the mean squared error
```python
(cuadrado_de_los_errores3 / 200)**0.5
```
0.20118730938942803
#### With h=0.5 and M=2
```python
partition = [[] for i in range(4)]
for i in range(200):
partition[int(2*(x1[i]+1))].append(i)
```
```python
polinomio_a_trozos = []
cuadrado_de_los_errores4 = 0
for i in range(4):
x_aux , y_aux = [x1[j] for j in partition[i]],[y1[j] for j in partition[i]]
z = np.polyfit(x_aux,y_aux,2)
polinomio_a_trozos.append(np.poly1d(z))
    # add up the squared errors for each piece of the polynomial
for j in range(len(x_aux)):
cuadrado_de_los_errores4 += (polinomio_a_trozos[i](x_aux[j])-y_aux[j])**2
```
```python
xp=[]
for i in range(4):
xp.append(np.linspace(-1+i*(1/2), -1+(i+1)*(1/2), 200))
```
```python
fig, ax = plt.subplots(figsize=(11,7))
plt.plot(x1, y1, 'ro', linewidth = 5,alpha = 0.5, label = 'Dispersion X,Y')
plt.legend(loc='best', frameon= True)
for i in range(4):
plt.plot(xp[i], polinomio_a_trozos[i](xp[i]) ,'b-', linewidth = 5)
plt.plot(x_0, y_0, 'g-', linewidth = 5,alpha = 0.75 ,label = 'Función m(x)=E[Y|X=x]')
plt.legend(loc='best', frameon= True)
plt.title("Estimación m(x) con h=0.5 y M=2", fontsize='15')
plt.xlabel('X')
plt.ylabel('Y')
plt.show()
```
We see that the MSE is slightly higher than in the overfitting cases, yet the regression is predicted quite accurately.
#### Estimate of the mean squared error
```python
(cuadrado_de_los_errores4 / 200)**0.5
```
0.21280621089954807
```python
(cuadrado_de_los_errores1 / 200)**0.5 , (cuadrado_de_los_errores2 / 200)**0.5 , (cuadrado_de_los_errores3 / 200)**0.5 , (cuadrado_de_los_errores4 / 200)**0.5
```
(0.21568051286590767,
0.196030417748454,
0.20118730938942803,
0.21280621089954807)
```python
```
Link to the GitHub repo: https://github.com/julianferres/Aprendizaje-Estadistico.git
*Source: julianferres/Aprendizaje-Estadistico, Ejercicios/06-Reconstruir Regresion (Penultima Clase)/.ipynb_checkpoints/EstimacionRegresion-checkpoint.ipynb (MIT license)*
[Open in Colab](https://colab.research.google.com/github/desabuh/elliptic_curves_cryptography_plots/blob/master/plot_for_eliptic_curves.ipynb)
```python
import matplotlib.pyplot as plt
import math
import numpy as np
%matplotlib inline
from ipywidgets import interact
from sympy import nsolve
```
TOTIENT FUNCTION
```python
def gcd(a, b):
if b==0:
return a
return gcd(b, a % b)
```
```python
plt.gca().spines['top'].set_visible(False)
plt.gca().spines['right'].set_visible(False)
plt.xlabel("N")
plt.ylabel("phi(N)")
plt.scatter([*range(1,2500)], [sum(gcd(n, i) == 1 for i in range(1,n)) for n in range(1, 2500)], s = 1, c='green');
plt.show()
```
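The gcd-counting approach above is simple but slow for large $N$; for comparison, a sketch of the totient via Euler's product formula $\phi(n) = n \prod_{p \mid n}\left(1 - \frac{1}{p}\right)$:

```python
def phi(n):
    # Euler's product formula over the distinct prime factors of n
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:                      # leftover prime factor
        result -= result // m
    return result

print([phi(n) for n in range(1, 11)])  # 1, 1, 2, 2, 4, 2, 6, 4, 6, 4
```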
GENERAL TO WEIERSTRASS FORM
```python
@interact(a = (-10,10,0.1), b=(-10,10,0.1), c=(-10,10,0.1), d=(-10,10,0.1), e=(-10,10,0.1))
def ell_curve(a, b, c, d, e):
mx2, mx1 = np.ogrid[-10:10:0.1,-15:15:0.1]
def evaluate_general(x,y):
return np.power(y,2) + a*x*y + b*y - np.power(x, 3) - c * np.power(x,2) - d*x - e
def transform_coord(x,y):
return x - ((a**2 + 4*c) / 12), y - (a / 2)*x + ((a**3 + 4*a*c - 12 * b) / 24)
def evaluate_normal(x,y):
x, y = transform_coord(x,y)
return np.power(y,2) - np.power(x,3) - d*x - e
plt.contour(mx1.ravel(), mx2.ravel(), evaluate_general(mx1, mx2), [0], colors="blue")
plt.contour(mx1.ravel(), mx2.ravel(), evaluate_normal(mx1, mx2), [0], colors="red")
plt.show()
```
FINITE CURVE
```python
def display_finite_curve(a, b, N):
def is_point(x, y, a, b, N):
return (y**2) % N == (x**3+ a*x + b) % N
points = [(x,y) for x in range(N) for y in range(N) if is_point(x,y,a,b,N)]
plt.text(-5,-5,s = "p = {}\n a = {}\n b= {}".format(N,a,b),c = "black",bbox={'facecolor': 'green', 'alpha': 0.5})
plt.scatter(list(zip(*points))[0], list(zip(*points))[1], s=10)
```
```python
display_finite_curve(1, -1, 39)
```
*Source: desabuh/elliptic_curves_cryptography_plots, plot_for_eliptic_curves.ipynb (Apache-2.0 license)*
# __Functions__
A function is a block of code that helps automate the development of an algorithm, saving lines of code.
_Structure:_
```
def nombre_funcion(entradas):   # function_name(inputs)
    (desarrollo)                # (body)
    return salidas              # return outputs
```
Functions are the basis of Object-Oriented Programming (OOP).
#### Particulars:
* Like conditionals and loops, a function requires indentation (horizontal spaces) to identify which statements belong to the function being defined.
* Variables declared __inside__ a function are classified as __local__ variables. So far, we have worked with _global_ variables (see the example right after this list).
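A small illustration of the difference between local and global variables (a hypothetical example, not part of the original notes):

```python
total = 10           # global variable

def add_local(x):
    total = x + 1    # local variable: it only exists inside the function
    return total

print(add_local(5))  # 6
print(total)         # 10 -- the global variable was not modified
```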
```python
# Create an addition function for two numbers
def suma_sencilla(x, y):
return x+y
```
```python
# We can call and use the addition function as follows
print(suma_sencilla(1,1))
```
```python
# Addition function for several numbers
def suma(nums):
s = 0
for num in nums:
s += num
return s
```
```python
print(suma([1,2,3,4]))
```
```python
# A function can have several inputs and several outputs
def sum_div(x,y,z):
return x+y, x/z
print(sum_div(1,2,3))
```
### __Example:__
Write a compact algorithm that allows adding items to the shopping cart, removing items, and computing the purchase total.
```python
productos = {
'Martillo': 1000,
'Destornillador': 5000,
'Arandelas': 100,
'Tuercas': 50,
'Pernos': 200
}
```
```python
# FUNCTIONS
def totalCarrito(lista_compras):
total = 0
for item in lista_compras:
subtotal = productos[item[0]]*int(item[1])
total += subtotal
return total
def resumenCompra(lista_compras):
msg = "Resumen de compra:\nArtículo\tCantidad\tSubtotal\n"
for item in lista_compras:
msg += item[0] + "\t" + str(item[1]) + "\t\t" + str(totalCarrito([item,])) + "\n"
msg += "TOTAL\t" + str(totalCarrito(lista_compras))
return msg
def eliminarArticulo(lista_compras, articulo):
for item in lista_compras:
if articulo in item:
lista_compras.remove(item)
break
return lista_compras
# INTERACTIVE PART
listaCompras = []
while True:
art = input("\n¿Qué artículo deseas comprar? ")
cant = input("¿Cuántas unidades desas? ")
listaCompras.append((art, cant))
print(resumenCompra(listaCompras))
if input("¿Deseas eliminar un artículo del carrito? (s/n) ") == "s":
listaCompras = eliminarArticulo(listaCompras, input("Escribe el nombre del artículo: "))
print(resumenCompra(listaCompras))
if input("¿Deseas seguir comprando? (s/n) ") == "n":
break
print("----------------------------------------------------------------")
```
## __Exercise:__
Without using the `factorial` function from the `math` library, write an algorithm that solves the following mathematical equation:
$$
\begin{equation}
y = \frac{x^2+5}{2 x!}
\end{equation}
$$
Remember that:
$$
\begin{equation}
x! = x(x-1)(x-2) \cdots 1
\end{equation}
$$
```python
```
*Source: judrodriguezgo/DesarrolloWeb, Python/Colab/FuncionesPython.ipynb (MIT license)*
# Under- and overfitting, model selection
## Preliminaries
In the first set of exercises you had to implement the training and evaluation of the linear regression and $k$-NN methods from scratch in order to practice your `numpy` skills. From this set of exercises onward, you can use the implementations provided in `scikit-learn` or other higher-level libraries. We start this set of exercises by demonstrating some of the features of `scikit-learn`.
For example, implementation of linear regression model fitting with an analytical solution for the parameters is provided by the class `sklearn.linar_model.LinearRegression`. You can train a linear regression model in the following way:
```python
import numpy as np
from sklearn import datasets, linear_model
# load the diabetes dataset
diabetes = datasets.load_diabetes()
# use only one feature
X = diabetes.data[:, np.newaxis, 2]
y = diabetes.target
# split the data into training/testing sets
X_train = X[:-20]
X_test = X[-20:]
# split the targets into training/testing sets
y_train = y[:-20]
y_test = y[-20:]
# create linear regression object
model = linear_model.LinearRegression()
# train the model using the training dataset
model.fit(X_train, y_train)
```
LinearRegression(copy_X=True, fit_intercept=True, n_jobs=None, normalize=False)
Let's visualize the training dataset and the learned regression model.
```python
%matplotlib inline
import matplotlib.pyplot as plt
fig = plt.figure()
plt.plot(X_train, y_train, 'r.', markersize=12)
X_edge = np.array([np.min(X_train, 0), np.max(X_train, 0)])
plt.plot(X_edge, model.predict(X_edge), 'b-')
plt.legend(('Data', 'Linear regression'), loc='lower right')
plt.title('Linear regression')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.show()
```
Once trained, the model can be used to make predictions on the test data:
```python
# Make predictions using the testing dataset
prediction = model.predict(X_test)
```
The next step (not shown here) is to evaluate the performance of the trained model.
Note that the `scikit-learn` interface works by first initializing an object from the class that implements the machine learning model (linear regression in this case) and then fitting the initialized model using the data in the training set. Finally, the trained (fitted) model can be used to make predictions on unseen data. In fact, all models implemented in this library follow the same *initialize-fit-predict* programming interface. For example, a $k$-NN classifier can be trained in the following way:
```python
from sklearn.model_selection import train_test_split
from sklearn import datasets, neighbors
breast_cancer = datasets.load_breast_cancer()
X = breast_cancer.data
y = breast_cancer.target
# make use of the train_test_split() utility function instead
# of manually dividing the data
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=40)
# initialize a 3-NN classifier
model = neighbors.KNeighborsClassifier(n_neighbors=3)
# train the model using the training dataset
model.fit(X_train, y_train)
# make predictions using the testing dataset
prediction = model.predict(X_test)
```
Note that the features in the breast cancer dataset have different scales (some have on average very small absolute values, and some very large), which means that the distance metric used by $k$-NN will be dominated by the features with large values. You can use any of the feature transformation methods implemented in `scikit-learn` to scale the features. For example, you can use the `sklearn.preprocessing.StandardScaler` method to transform all features to have zero mean and unit variance:
```python
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(X_train)
X_train_scaled = scaler.transform(X_train)
```
The scaler has its own parameters which are the means and standard deviations of the features estimated from the training set. If you train a model with the scaled features, you will have to remember to also apply the scaling transformation every time you make a prediction on new unseen and unscaled data. This is somewhat prone to error. One option for making the code more robust is to create a processing pipeline that includes the scaling and $k$-NN models in a sequence:
```python
from sklearn.pipeline import Pipeline
knn = neighbors.KNeighborsClassifier(n_neighbors=3)
model = Pipeline([
("scaler", scaler),
("knn", knn)
])
# train the model using the training dataset
model.fit(X_train, y_train)
# make predictions using the testing dataset
prediction = model.predict(X_test)
```
If you are curious, more information about the design of the `scikit-learn` application programming interface (API) can be found [in this paper](https://arxiv.org/pdf/1309.0238.pdf).
## Exercises
### Bias-variance decomposition
Show that the mean squared error of the estimate of a parameter can be decomposed into an expression that includes both the bias and variance (Eq. 5.53-5.54 in "Deep learning" by Goodfellow et al.).
\begin{align}
MSE(\hat{\theta}) & = E[(\hat{\theta} - \theta)^2] \\
& = E[(\hat{\theta}-E[\hat{\theta}]+E[\hat{\theta}]-\theta)^2] \\
& = E[(\hat{\theta}-E[\hat{\theta}])^2+2(\hat{\theta}-E[\hat{\theta}])(E[\hat{\theta}]-\theta)+(E[\hat{\theta}]-\theta)^2] \\
& = E[(\hat{\theta}-E[\hat{\theta}])^2]+E[2(\hat{\theta}-E[\hat{\theta}])(E[\hat{\theta}]-\theta)]+E[(E[\hat{\theta}]-\theta)^2] \\
& = E[(\hat{\theta}-E[\hat{\theta}])^2]+2(E[\hat{\theta}]-\theta)E[\hat{\theta}-E[\hat{\theta}]]+(E[\hat{\theta}]-\theta)^2 &E[\hat{\theta}]-\theta = constant \\
& = E[(\hat{\theta}-E[\hat{\theta}])^2]+2(E[\hat{\theta}]-\theta)(E[\hat{\theta}]-E[\hat{\theta}])+(E[\hat{\theta}]-\theta)^2 &E[\hat{\theta}] = constant \\
& = E[(\hat{\theta}-E[\hat{\theta}])^2] + (E[\hat{\theta}]-\theta)^2 \\
& = Var(\hat{\theta}) + Bias(\hat{\theta},\theta)^2 \\
\end{align}
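As a quick numerical sanity check of this decomposition (not part of the original exercise), we can estimate the variance of a Gaussian with the biased $1/n$ estimator and verify that the MSE matches the estimator's variance plus its squared bias:

```python
import numpy as np

rng = np.random.default_rng(0)
true_var, n, trials = 4.0, 10, 100000

samples = rng.normal(0, np.sqrt(true_var), size=(trials, n))
est = samples.var(axis=1)            # biased estimator (divides by n)

mse = np.mean((est - true_var) ** 2)
bias = np.mean(est) - true_var       # theoretical bias: -true_var / n = -0.4
variance = np.var(est)
print(mse, variance + bias ** 2)     # the two numbers should be (nearly) equal
```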
### Polynomial regression
For this exercise we will be using generated data to better show the effects of the different polynomial orders.
The data is created using the `generate_dataset` function defined below.
```python
import numpy as np
%matplotlib inline
def generate_dataset(n=100, degree=1, noise=1, factors=None):
# Generates a dataset by adding random noise to a randomly
# generated polynomial function.
x = np.random.uniform(low=-1, high=1, size=n)
factors = np.random.uniform(0, 10, degree+1)
y = np.zeros(x.shape)
for idx in range(degree+1):
y += factors[idx] * (x ** idx)
# add noise
y += np.random.normal(-noise, noise, n)
return x, y
# load generated data
np.random.seed(0)
X, y = generate_dataset(n=100, degree=4, noise=1.5)
plt.plot(X, y, 'r.', markersize=12)
```
Implement polynomial regression using the `sklearn.preprocessing.PolynomialFeatures` transformation. Using the `sklearn.model_selection.GridSearchCV` class, perform a grid search of the polynomial order hyperparameter space with cross-validation and report the performance on an independent test set.
Plot a learning curve that show the validation accuracy as a function of the polynomial order.
<p><font color='#770a0a'>Which models have a high bias, and which models have high variance? Motivate your answer.</font><p>
Repeat this experiment, this time using the diabetes dataset instead of the generated data.
```python
import matplotlib.pyplot as plt
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
import operator
#%% split data in train and test
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=40)
#%%
X_train = X_train[:,np.newaxis]
y_train = y_train[:, np.newaxis]
X_test = X_test[:,np.newaxis]
y_test = y_test[:,np.newaxis]
poly = PolynomialFeatures(degree=2)
X_poly = poly.fit_transform(X_train)
pol = LinearRegression()
pol.fit(X_poly, y_train)
y_poly_pred = pol.predict(X_poly)
plt.scatter(X_train, y_train, s=10, label='training data') # plot the data values
# sort the x values for plotting
sort_axis = operator.itemgetter(0)
sorted_zip = sorted(zip(X_train,y_poly_pred), key=sort_axis)
X_train2, y_poly_pred = zip(*sorted_zip)
# plot the regression
plt.plot(X_train2, y_poly_pred, color='r', label='model 2nd degree polynomial')
plt.legend(loc = 'upper left')
plt.show()
```
```python
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
def PolynomialRegression(degree=2, **kwargs):
return make_pipeline(PolynomialFeatures(degree), LinearRegression(**kwargs))
param = {'polynomialfeatures__degree':np.arange(20)}
grid = GridSearchCV(PolynomialRegression(), param, cv = 5) #find the optimum degree, 5-fold cross-validation
grid.fit(X_train,y_train)
values = grid.cv_results_ #results
mean_test_score = values['mean_test_score']
polymodel = grid.best_estimator_ #the model with the best estimator
performance = polymodel.score(X_test,y_test) #performance score on the independent test set
y_predicted = polymodel.fit(X_train,y_train).predict(X_train)
sorted_zip = sorted(zip(X_train,y_predicted), key=operator.itemgetter(0))
X_train_plt, y_predicted_plt = zip(*sorted_zip)
plt.scatter(X_train,y_train, label='Train data')
plt.plot(X_train_plt,y_predicted_plt,'r-',markersize=10, label= 'Model')
plt.title('Best estimated model: 3rd degree polynomial')
plt.legend(loc = 'upper left')
plt.xlabel('x')
plt.ylabel('y')
plt.show()
```
```python
plt.plot(np.arange(20),values['mean_test_score'])
plt.title('Validation accuracy dependence on degree of polynomial')
plt.xlabel('degree of polynomial')
plt.ylabel('Validation accuracy')
plt.show()
print(polymodel)
print ("The optimum degree polynomial that fits this data is 3, with mean test score {}".format(mean_test_score[3]))
print("The accuracy on the independent test set for degree = 3 is {}".format(performance))
```
The models with a polynomial degree lower than the optimum (degree < 3) have a high bias. Low polynomial orders are insufficient to describe the data in a generalized manner. Using such few polynomials would result in underfitting.
The models with a polynomial degree higher than the optimum (degree > 3) have a high variance. High polynomial orders describe the training data very well, but are too specific, so the model cannot be used to predict new data because it is overfitted. The model should contain fewer polynomial terms to generalize better.
NOTE: In the following code polynomial orders are taken 0 to 5 and not to 20, to reduce the running time.
Other ways of reducing running time while still trying more polynomial degrees (for example 10, which will run for approx. 1 min) are using fewer than 4 cross-validation folds or decreasing the size of the training dataset. However, in general, a higher number of cross-validation folds is beneficial for finding the right hyperparameter (polynomial degree), and a big training dataset increases the accuracy of the model.
```python
diabetes = datasets.load_diabetes()
Xdb = diabetes.data[:]
ydb = diabetes.target[:]
Xdb_train, Xdb_test, ydb_train, ydb_test = train_test_split(Xdb, ydb,test_size=0.20, random_state=30) #test_size may be changed
def PolynomialRegression_db(degree=2, **kwargs):
return make_pipeline(StandardScaler(), PolynomialFeatures(degree), LinearRegression(**kwargs))
poldegrees = 5 #define the polynomial degrees that are validated
param = {'polynomialfeatures__degree':np.arange(poldegrees)}
grid_db = GridSearchCV(PolynomialRegression_db(), param, cv = 4) #find the optimum degree, n-fold cross-validation (cv)
import timeit
start = timeit.default_timer()
grid_db.fit(Xdb_train,ydb_train)
stop = timeit.default_timer()
print('Time: ', stop - start)
values_db = grid_db.cv_results_ #results
mean_test_score_db = values_db['mean_test_score']
polymodel_db = grid_db.best_estimator_ #the model with the best estimator
performance_db = polymodel_db.score(Xdb_test,ydb_test)
plt.plot(np.arange(poldegrees),values_db['mean_test_score'])
plt.title('Validation accuracy diabetes dataset')
plt.xlabel('degree of polynomial')
plt.ylabel('Validation accuracy')
plt.show()
print(polymodel_db)
print ("The optimum degree polynomial that fits this data is 1, with mean test score {}".format(mean_test_score_db[1]))
print("The accuracy on the independent test set for degree = 1 is {}".format(performance_db))
```
### ROC curve analysis
A common method to evaluate binary classifiers is the receiver operating characteristic (ROC) curve. Similar to the week one practicals, implement a $k$-NN classifier on the breast cancer dataset; however, this time use the $k$-NN pipeline from the preliminaries. Train the model for different values of $k$ and evaluate their respective performance with an ROC curve, using the `sklearn.metrics.roc_curve` function.
```python
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn import datasets, neighbors
from sklearn.pipeline import Pipeline
from sklearn import metrics
import matplotlib.pyplot as plt
breast_cancer = datasets.load_breast_cancer()
X = breast_cancer.data
y = breast_cancer.target
scaler = StandardScaler()
# make use of the train_test_split() utility function instead
# of manually dividing the data
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=40)
# compute the model for different k-values
for k in range(3,len(X_train)+1,50): # k should always be odd; this range yields 3, 53, 103, ...
knn = neighbors.KNeighborsClassifier(n_neighbors=k)
model = Pipeline([
("scaler", scaler),
("knn", knn)
])
# train the model using the training dataset
model.fit(X_train, y_train)
prediction = model.predict(X_test)
#compute false positive rate and true positive rate
fpr,tpr,tresholds = metrics.roc_curve(y_test, prediction, pos_label=1)
#calculate the area under the curve
auc = metrics.roc_auc_score(y_test,prediction)
#plot the calculated values
plt.plot(fpr,tpr, label=' value of k = ' +str(k))
plt.plot()
plt.xlabel('1-Specifity')
plt.ylabel('Sensitivity')
plt.title('Receiver Operating Characteristic')
plt.legend(loc='lower right')
plt.show()
```
The ROC curve plots the true positive rate vs the false positive rate. The greater the area under the curve, the better the classifier. From the figure above it can be concluded that the best classifier is found for k = 3. A random classifier is found for k = 403.
### $F_1$ score and Dice similarity coefficient
The Dice similarity coefficient is a very popular evaluation measure for image segmentation applications. Assuming that $A$ is the ground truth segmentation of an object represented as a binary image, and $B$ is the binary output of an image segmentation method, the Dice similarity coefficient is computed as:
$\text{Dice}(A,B) = \frac{2|A\cap B|}{|A| + |B|}$
where $|\cdot|$ represents the cardinality of the objects (e.g. $|A|$ is the number of non-zero pixels in the ground truth segmentation).
For example, the Dice similarity can be computed in the following way:
```python
import numpy as np
import matplotlib.pyplot as plt
# generate some test objects
A = np.zeros((32, 32))
A[10:-10, 10:-10] = 1
B = np.zeros((32, 32))
B[5:-15, 5:-15] = 1
dice = 2*np.sum(A*B)/(np.sum(A)+np.sum(B))
# display the results
plt.plot()
plt.imshow(A)
plt.imshow(B, alpha=0.7)
print('The dice-score is = ',dice)
```
<p><font color='#770a0a'>Show that the $F_1$ score, which is the harmonic mean of precision and recall, is equivalent to the Dice similarity coefficient</font><p>
```python
from sklearn.metrics import f1_score
from sklearn.metrics import classification_report
A = np.zeros((32, 32))
A[10:-10, 10:-10] = 1
B = np.zeros((32, 32))
B[5:-15, 5:-15] = 1
print('The f1 score = ',f1_score(A,B,average='weighted'))
```
The f1 score = 0.3402777777777778
Here $A$ is the ground truth and $B$ the predicted segmentation, and $|A \cap B|$ is the number of overlapping (true positive) pixels.
Why are the Dice score and the $F_1$ score the same?
$F_1 = \frac{2 \cdot \text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}} = \frac{2}{\frac{1}{\text{recall}} + \frac{1}{\text{precision}}}$
where $\text{precision} = \frac{|A \cap B|}{|B|}$ and $\text{recall} = \frac{|A \cap B|}{|A|}$.
Substituting these into the $F_1$ formula gives:
$F_1 = \frac{2}{\frac{|A|}{|A \cap B|} + \frac{|B|}{|A \cap B|}}$
For the Dice similarity coefficient:
$\text{Dice}(A,B) = \frac{2|A \cap B|}{|A|+|B|} = \frac{2}{\frac{|A|}{|A \cap B|} + \frac{|B|}{|A \cap B|}}$
So the two expressions are identical, which is why the $F_1$ score equals the Dice similarity coefficient.
```python
```
*Source: ndkruif/8dm40-machine-learning10, practicals/week_2.ipynb (MIT license)*
# Generative models - variational auto-encoders
### Author: Philippe Esling (esling@ircam.fr)
In this course we will cover
1. A [quick recap](#recap) on simple probability concepts
2. A formal introduction to [Variational Auto-Encoders](#vae) (VAEs)
3. An explanation of the [implementation](#implem) of VAEs
4. Some [modifications and tips to improve the reconstruction](#improve) of VAEs **(exercise)**
<a id="recap"> </a>
## Quick recap on probability
The field of probability aims to model random or uncertain events. Hence, a random variable $X$ denotes a quantity that is uncertain, such as the result of an experiment (flipping a coin) or the measurement of an uncertain property (measuring the temperature). If we observe several occurrences of the variable $\{\mathbf{x}_{i}\}_{i=1}$, it might take different values on each occasion, but some values may occur more often than others. This information is captured by the _probability distribution_ $p(\mathbf{x})$ of the random variable.
To understand these concepts graphically, we will rely on the `torch.distributions` module of PyTorch.
```python
import torch
import torch.nn as nn
import torch.distributions as distrib
import torchvision
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from helper_plot import hdr_plot_style
hdr_plot_style()
```
### Probability distributions
#### Discrete distributions
Let $\mathbf{x}$ be a discrete random variable with range $R_{X}=\{x_1,\cdots,x_n\}$ (finite or countably infinite). The function
\begin{equation}
p_{X}(x_{i})=p(X=x_{i}), \forall i\in\{1,\cdots,n\}
\end{equation}
is called the probability mass function (PMF) of $X$.
Hence, the PMF defines the probabilities of all possible values for a random variable. The above notation allows us to express that the PMF is defined for the random variable $X$, so that $p_{X}(1)$ gives the probability that $X=1$. For discrete random variables, the PMF is also called the *probability distribution*. The PMF is a probability measure, therefore it satisfies all the corresponding properties
- $0 \leq p_{X}(x_i) \leq 1, \forall x_i$
- $\sum_{x_i\in R_{X}} p_{X}(x_i) = 1$
- $\forall A \subset R_{X}, p(X \in A)=\sum_{x_a \in A}p_{X}(x_a)$
A very simple example of discrete distribution is the `Bernoulli` distribution. With this distribution, we can model a coin flip. If we throw the coin a very large number of times, we hope to see on average an equal amount of _heads_ and _tails_.
```python
bernoulli = distrib.Bernoulli(0.5)
samples = bernoulli.sample((10000,))
plt.figure(figsize=(10,8))
sns.distplot(samples)
plt.title("Samples from a Bernoulli (coin toss)")
plt.show()
```
However, we can also _sample_ from the distribution to have individual values of a single throw. In that case, we obtain a series of separate events that _follow_ the distribution
```python
vals = ['heads', 'tails']
samples = bernoulli.sample((10,))
for s in samples:
print('Coin is tossed on ' + vals[int(s)])
```
#### Continuous distributions
The same ideas apply to _continuous_ random variables, which can model for instance the height of human beings. If we try to guess the height of someone that we do not know, there is a higher probability that this person will be around 1m70, instead of 20cm or 3m. For the rest of this course, we will use the shorthand notation $p(\mathbf{x})$ for the distribution $p(\mathbf{x}=x_{i})$, which expresses for a real-valued random variable $\mathbf{x}$, evaluated at $x_{i}$, the probability that $\mathbf{x}$ takes the value $x_i$.
One notorious example of such distributions is the Gaussian (or Normal) distribution, which is defined as
\begin{equation}
p(x)=\mathcal{N}(\mu,\sigma)=\frac{1}{\sqrt{2\pi\sigma^{2}}}e^{-\frac{(x-\mu)^{2}}{2\sigma^{2}}}
\end{equation}
Similarly as before, we can observe the behavior of this distribution with the following code
```python
normal = distrib.Normal(loc=0., scale=1.)
samples = normal.sample((10000,))
plt.figure(figsize=(10,8))
sns.distplot(samples)
plt.title("Samples from a standard Normal")
plt.show()
```
### Comparing distributions (KL divergence)
$
\newcommand{\R}{\mathbb{R}}
\newcommand{\bb}[1]{\mathbf{#1}}
\newcommand{\bx}{\bb{x}}
\newcommand{\by}{\bb{y}}
\newcommand{\bz}{\bb{z}}
\newcommand{\KL}[2]{\mathcal{D}_{\text{KL}}\left[#1 \| #2\right]}$
Originally defined in the field of information theory, the _Kullback-Leibler (KL) divergence_ (usually denoted $\KL{p(\bx)}{q(\bx)}$) is a dissimilarity measure between two probability distributions $p(\bx)$ and $q(\bx)$. In the view of information theory, it can be understood as the cost in number of bits necessary for coding samples from $p(\bx)$ by using a code optimized for $q(\bx)$ rather than the code optimized for $p(\bx)$. In the view of probability theory, it represents the amount of information lost when we use $q(\bx)$ to approximate the true distribution $p(\bx)$.
Given two probability distributions $p(\bx)$ and $q(\bx)$, the Kullback-Leibler divergence of $q(\bx)$ _from_ $p(\bx)$ is defined to be
\begin{equation}
\KL{p(\bx)}{q(\bx)}=\int_{\R} p(\bx) \log \frac{p(\bx)}{q(\bx)}d\bx
\end{equation}
Note that this dissimilarity measure is *asymmetric*, therefore we have
\begin{equation}
\KL{p(\bx)}{q(\bx)}\neq \KL{q(\bx)}{p(\bx)}
\end{equation}
This asymmetry also describes an interesting behavior of the KL divergence, depending on the order in which it is evaluated. The KL divergence can either be a _mode-seeking_ or a _mode-covering_ measure.
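This asymmetry can be checked numerically with `torch.distributions` (the two Gaussians below are arbitrary choices):

```python
import torch.distributions as distrib

p = distrib.Normal(loc=0., scale=1.)
q = distrib.Normal(loc=1., scale=2.)
print(distrib.kl_divergence(p, q))   # D_KL[p || q]
print(distrib.kl_divergence(q, p))   # D_KL[q || p], a different value
```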
<a id="vae"></a>
## Variational auto-encoders
As we have seen in the previous AE course, VAEs are also a form of generative model. However, they are defined from a sounder probabilistic perspective: the aim is to find the underlying probability distribution of the data $p(\mathbf{x})$ based on a set of examples $\mathbf{x}\in\mathbb{R}^{d_{x}}$. To do so, we consider *latent variables* defined in a lower-dimensional space $\mathbf{z}\in\mathbb{R}^{d_{z}}$ ($d_{z} \ll d_{x}$) with the joint probability distribution $p(\mathbf{x}, \mathbf{z}) = p(\mathbf{x} \vert \mathbf{z})p(\mathbf{z})$. Evaluating $p(\mathbf{x})$ then requires marginalizing out the latent variables, $p(\mathbf{x})=\int p(\mathbf{x} \vert \mathbf{z})p(\mathbf{z})d\mathbf{z}$. Unfortunately, for complex distributions this integral cannot be computed in closed form.
### Variational inference
The idea of *variational inference* (VI) allows to solve this problem through *optimization* by assuming a simpler approximate distribution $q_{\phi}(\mathbf{z}\vert\mathbf{x})\in\mathcal{Q}$ from a family $\mathcal{Q}$ of approximate densities. Hence, the goal is to minimize the difference between this approximation and the real distribution. Therefore, this turns into the optimization problem of minimizing the Kullback-Leibler (KL) divergence between the parametric approximation and the original density
$$
q_{\phi}^{*}(\mathbf{z}\vert \mathbf{x})=\text{argmin}_{q_{\phi}(\mathbf{z} \vert \mathbf{x})\in\mathcal{Q}} \mathcal{D}_{KL} \big[ q_{\phi}\left(\mathbf{z} \vert \mathbf{x}\right) \parallel p\left(\mathbf{z} \vert \mathbf{x}\right) \big]
\tag{2}
$$
By developing this KL divergence and re-arranging terms (the detailed development can be found in [3](#reference1)), we obtain
$$
\log{p(\mathbf{x})} - D_{KL} \big[ q_{\phi}(\mathbf{z} \vert \mathbf{x}) \parallel p(\mathbf{z} \vert \mathbf{x}) \big] =
\mathbb{E}_{\mathbf{z}} \big[ \log{p(\mathbf{x} \vert \mathbf{z})}\big] - D_{KL} \big[ q_{\phi}(\mathbf{z} \vert \mathbf{x}) \parallel p(\mathbf{z}) \big]
\tag{3}
$$
This formulation describes the quantity we want to maximize $\log p(\mathbf{x})$ minus the error we make by using an approximate $q$ instead of $p$. Therefore, we can optimize this alternative objective, called the *evidence lower bound* (ELBO)
$$
\begin{equation}
\mathcal{L}_{\theta, \phi} = \mathbb{E} \big[ \log{ p_\theta (\mathbf{x|z}) } \big] - \beta \cdot D_{KL} \big[ q_\phi(\mathbf{z|x}) \parallel p_\theta(\mathbf{z}) \big]
\end{equation}
\tag{4}
$$
We can see that this equation involves $q_{\phi}(\mathbf{z} \vert \mathbf{x})$ which *encodes* the data $\mathbf{x}$ into the latent representation $\mathbf{z}$ and a *decoder* $p(\mathbf{x} \vert \mathbf{z})$, which allows generating a data vector $\mathbf{x}$ given a latent configuration $\mathbf{z}$. Hence, this structure defines the *Variational Auto-Encoder* (VAE).
The VAE objective can be interpreted intuitively. The first term increases the likelihood of the data generated given a configuration of the latent, which amounts to minimize the *reconstruction error*. The second term represents the error made by using a simpler posterior distribution $q_{\phi}(\mathbf{z} \vert \mathbf{x})$ compared to the true prior $p_{\theta}(\mathbf{z})$. Therefore, this *regularizes* the choice of approximation $q$ so that it remains close to the true posterior distribution [3].
### Reparametrization trick
Now, while this formulation has some very interesting properties, it involves sampling operations, where we need to draw the latent point $\mathbf{z}$ from the distribution $q_{\phi}(\mathbf{z}\vert\mathbf{x})$. The simplest choice for this variational approximate posterior is a multivariate Gaussian with a diagonal covariance structure (which leads to independent Gaussians on every dimension, called the *mean-field* family) so that
$$
\text{log}q_\phi(\mathbf{z}\vert\mathbf{x}) = \text{log}\mathcal{N}(\mathbf{z};\mathbf{\mu}^{(i)},\mathbf{\sigma}^{(i)})
\tag{5}
$$
where the mean $\mathbf{\mu}^{(i)}$ and standard deviation $\mathbf{\sigma}^{(i)}$ of the approximate posterior are different for each input point and are produced by our encoder parametrized by its variational parameters $\phi$. Now the KL divergence between this distribution and a simple prior $\mathcal{N}(\mathbf{0}, \mathbf{I})$ can be very simply obtained with
$$
D_{KL} \big[ q_\phi(\mathbf{z|x}) \parallel \mathcal{N}(\mathbf{0}, \mathbf{I}) \big] = -\frac{1}{2}\sum_{j=1}^{D}\left(1+\text{log}((\sigma^{(i)}_j)^2)-(\mu^{(i)}_j)^2-(\sigma^{(i)}_j)^2\right)
\tag{6}
$$
While this looks convenient, we will still have to perform gradient descent through a sampling operation, which is non-differentiable. To solve this issue, we can use the *reparametrization trick*, which takes the sampling operation outside of the gradient flow by considering $\mathbf{z}^{(i)}=\mathbf{\mu}^{(i)}+\mathbf{\sigma}^{(i)}\odot\mathbf{\epsilon}^{(l)}$ with $\mathbf{\epsilon}^{(l)}\sim\mathcal{N}(\mathbf{0}, \mathbf{I})$
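A minimal standalone sketch of the reparametrization trick together with the diagonal-Gaussian KL term of equation (6), using made-up encoder outputs:

```python
import torch

mu = torch.zeros(8, 2, requires_grad=True)     # stand-in for encoder means (batch of 8, 2 latent dims)
sigma = torch.ones(8, 2, requires_grad=True)   # stand-in for encoder standard deviations

eps = torch.randn_like(sigma)                  # noise sampled outside of the gradient path
z = mu + sigma * eps                           # differentiable w.r.t. mu and sigma

# analytical KL between N(mu, sigma^2) and N(0, I), cf. equation (6)
kl = -0.5 * torch.sum(1 + torch.log(sigma ** 2) - mu ** 2 - sigma ** 2)

loss = z.pow(2).sum() + kl                     # stand-in for reconstruction term + KL term
loss.backward()
print(mu.grad.shape, sigma.grad.shape)         # gradients flow back through the sampling step
```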
<a id="implem"> </a>
## VAE implementation
As we have seen, VAEs can be simply implemented by decomposing the above series of operations into an `encoder` which represents the distribution $q_\phi(\mathbf{z}\vert\mathbf{x})$, from which we will sample some values $\tilde{\mathbf{z}}$ (using the reparametrization trick) and compute the Kullback-Leibler (KL) divergence. Then, we use these values as input to a `decoder` which represents the distribution $p_\theta(\mathbf{x}\vert\mathbf{z})$ so that we can produce a reconstruction $\tilde{\mathbf{x}}$ and compute the reconstruction error.
Therefore, we can define the VAE based on our previous implementation of the AE that we recall here
```python
class AE(nn.Module):
def __init__(self, encoder, decoder, encoding_dim):
super(AE, self).__init__()
self.encoding_dims = encoding_dim
self.encoder = encoder
self.decoder = decoder
def forward(self, x):
encoded = self.encoder(x)
decoded = self.decoder(encoded)
return decoded
```
In order to move to a probabilistic version, we need to add the latent space sampling mechanism and change the behavior of our `forward` function. This process is implemented in the following `VAE` class.
Note that we purposely rely on an implementation of the `encode` function where the `encoder` first produces an intermediate representation of size `encoding_dims`. Then, this representation goes through two separate functions for encoding $\mathbf{\mu}$ and $\mathbf{\sigma}$. This provides a clearer implementation but also the added bonus that we can ensure that $\mathbf{\sigma} > 0$
```python
class VAE(AE):
def __init__(self, encoder, decoder, encoding_dims, latent_dims):
super(VAE, self).__init__(encoder, decoder, encoding_dims)
self.latent_dims = latent_dims
self.mu = nn.Sequential(nn.Linear(self.encoding_dims, self.latent_dims), nn.ReLU())
self.sigma = nn.Sequential(nn.Linear(self.encoding_dims, self.latent_dims), nn.Softplus())
def encode(self, x):
######################
# YOUR CODE GOES HERE
######################
return mu, sigma
def decode(self, z):
return self.decoder(z)
def forward(self, x):
# Encode the inputs
z_params = self.encode(x)
# Obtain latent samples and latent loss
z_tilde, kl_div = self.latent(x, z_params)
# Decode the samples
x_tilde = self.decode(z_tilde)
return x_tilde.reshape(-1, 1, 28, 28), kl_div
def latent(self, x, z_params):
######################
# YOUR CODE GOES HERE
######################
return z, kl_div
```
Now the interesting aspect of VAEs is that we can define any parametric function as `encoder` and `decoder`, as long as we can optimize them. Here, we will rely on simple feed-forward neural networks, but these can be largely more complex (with limitations that we will discuss later in the tutorial).
```python
def construct_encoder_decoder(nin, n_latent = 16, n_hidden = 512, n_classes = 1):
# Encoder network
encoder = nn.Sequential(
nn.Flatten(),
nn.Linear(nin, n_hidden), nn.ReLU(),
nn.Linear(n_hidden, n_hidden), nn.ReLU(),
nn.Linear(n_hidden, n_hidden), nn.ReLU(),
)
# Decoder network
decoder = nn.Sequential(
nn.Linear(n_latent, n_hidden), nn.ReLU(),
nn.Linear(n_hidden, n_hidden), nn.ReLU(),
nn.Linear(n_hidden, nin * n_classes), nn.Sigmoid()
)
return encoder, decoder
```
### Evaluating the error
In the definition of the `VAE` class, we directly included the computation of the $D_{KL}$ term to regularize our latent space. However, remember that the complete loss of equation (4) also contains a *reconstruction loss* which compares our reconstructed output to the original data.
While there are several options to compare the error between two elements, there are usually two preferred choices among the generative literature depending on how we consider our problem
1. If we consider each dimension (pixel) to be a binary unit (following a Bernoulli distribution), we can rely on the `binary cross entropy` between the two distributions
2. If we turn our problem to a set of classifications, where each dimension can belong to a given set of *intensity classes*, then we can compute the `multinomial loss` between the two distributions
In the following, we define both error functions and regroup them in the `reconstruction_loss` call (depending on the `num_classes` considered). However, as the `multinomial loss` requires a large computational overhead, and for the sake of simplicity, we will train all our first models by relying on the `binary cross entropy`
```python
# Reconstruction criterion
recons_criterion = torch.nn.MSELoss(reduction='sum')
def compute_loss(model, x):
######################
# YOUR CODE GOES HERE
######################
return full_loss
def train_step(model, x, optimizer):
# Compute the loss.
loss = compute_loss(model, x)
# Before the backward pass, zero all of the network gradients
optimizer.zero_grad()
# Backward pass: compute gradient of the loss with respect to parameters
loss.backward()
# Calling the step function to update the parameters
optimizer.step()
return loss
```
### Optimizing a VAE on a real dataset
For this tutorial, we are going to take a quick shot at a real-life problem by trying to train our VAEs on the `FashionMNIST` dataset. This dataset can be natively used in PyTorch by relying on the `torchvision.datasets` classes as follows
```python
dataset_dir = './data'
# Going to use 80%/20% split for train/valid
valid_ratio = 0.2
# Load the dataset for the training/validation sets
train_valid_dataset = torchvision.datasets.FashionMNIST(root=dataset_dir, train=True, transform=torchvision.transforms.ToTensor(), download=True)
# Split it into training and validation sets
nb_train = int((1.0 - valid_ratio) * len(train_valid_dataset))
nb_valid = int(valid_ratio * len(train_valid_dataset))
train_dataset, valid_dataset = torch.utils.data.dataset.random_split(train_valid_dataset, [nb_train, nb_valid])
# Load the test set
test_dataset = torchvision.datasets.FashionMNIST(root=dataset_dir, transform=torchvision.transforms.ToTensor(),train=False)
# Prepare
num_threads = 4 # Loading the dataset is using 4 CPU threads
batch_size = 128 # Using minibatches of 128 samples
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=batch_size, num_workers=num_threads)
valid_loader = torch.utils.data.DataLoader(dataset=valid_dataset, batch_size=batch_size, shuffle=False, num_workers=num_threads)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,batch_size=batch_size,shuffle=False,num_workers=num_threads)
```
The `FashionMNIST` dataset is composed of simple 28x28 black and white images of different items of clothings (such as shoes, bags, pants and shirts). We put a simple function here to display one batch of the test set (note that we keep a fixed batch from the test set in order to evaluate the different variations that we will try in this tutorial).
```python
print("The train set contains {} images, in {} batches".format(len(train_loader.dataset), len(train_loader)))
print("The validation set contains {} images, in {} batches".format(len(valid_loader.dataset), len(valid_loader)))
print("The test set contains {} images, in {} batches".format(len(test_loader.dataset), len(test_loader)))
nsamples = 10
classes_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal','Shirt', 'Sneaker', 'Bag', 'Ankle boot']
imgs_test, labels = next(iter(test_loader))
fig = plt.figure(figsize=(20,5))
for i in range(nsamples):
ax = plt.subplot(1,nsamples, i+1)
plt.imshow(imgs_test[i, 0, :, :], vmin=0, vmax=1.0, cmap=matplotlib.cm.gray)
ax.set_title("{}".format(classes_names[labels[i]]), fontsize=15)
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
```
Now based on our proposed implementation, the optimization aspects are defined in a very usual way
```python
# Using Bernoulli or Multinomial loss
num_classes = 1
# Number of hidden and latent
n_hidden = 512
n_latent = 2
# Compute input dimensionality
nin = imgs_test.shape[2] * imgs_test.shape[3]
# Construct encoder and decoder
encoder, decoder = construct_encoder_decoder(nin, n_hidden = n_hidden, n_latent = n_latent, n_classes = num_classes)
# Build the VAE model
model = VAE(encoder, decoder, n_hidden, n_latent)
# Construct the optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```
Now all that is left to do is train the model. We define here the training loop that we will reuse along the future implementations and variations of VAEs and flows. Note that it is set to run for only a few `epochs` (and, in the original version, it only considers a subsample of the full dataset at each epoch) so that you can test the different models very quickly on any CPU or laptop.
```python
def generate_and_save_images(model, epoch, test_sample):
predictions, _ = model(test_sample)
fig = plt.figure(figsize=(4, 4))
for i in range(predictions.shape[0]):
plt.subplot(4, 4, i + 1)
plt.imshow(predictions[i, 0, :, :].detach(), cmap='gray')
plt.axis('off')
# Tight_layout minimizes the overlap between 2 sub-plots
#plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))
plt.show()
epochs = 50
test_sample = imgs_test[0:16, :, :, :]
# Keep track of the loss at each epoch (used for plotting later)
losses_kld = torch.zeros(epochs, 1)
for epoch in range(1, epochs + 1):
    full_loss = torch.Tensor([0])
    # Forward pass: compute predicted y by passing x to the model.
    for i, (x, _) in enumerate(train_loader):
        full_loss += train_step(model, x, optimizer).detach()
    #for i, (x, _) in enumerate(valid_loader):
    #    train_step(model, x, optimizer)
    losses_kld[epoch - 1] = full_loss
    print('Epoch: {}, Training loss: {}'.format(epoch, full_loss))
    generate_and_save_images(model, epoch, test_sample)
```
### Evaluating generative models
In order to evaluate our upcoming generative models, we will rely on the computation of the Negative Log-Likelihood. This code for the following `evaluate_nll_bpd` is inspired by the [Sylvester flow repository](https://github.com/riannevdberg/sylvester-flows)
```python
from scipy.special import logsumexp
def evaluate_nll_bpd(data_loader, model, batch = 500, R = 5):
# Set of likelihood tests
likelihood_test = []
# Go through dataset
    for batch_idx, (x, _) in enumerate(data_loader):
        for j in range(x.shape[0]):
            a = []
            for r in range(0, R):
                cur_x = x[j].unsqueeze(0)
                # Repeat it as a batch (without overwriting the outer batch variable x)
                x_rep = cur_x.expand(batch, *cur_x.size()[1:]).contiguous()
                x_rep = x_rep.view(batch, -1)
                x_tilde, kl_div = model(x_rep)
                # reconstruction_loss is assumed to be the per-batch (non-averaged)
                # reconstruction criterion discussed in the text
                rec = reconstruction_loss(x_tilde, x_rep, average=False)
                a_tmp = (rec + kl_div)
                a.append(- a_tmp.cpu().data.numpy())
# calculate max
a = np.asarray(a)
a = np.reshape(a, (a.shape[0] * a.shape[1], 1))
likelihood_x = logsumexp(a)
likelihood_test.append(likelihood_x - np.log(len(a)))
likelihood_test = np.array(likelihood_test)
nll = - np.mean(likelihood_test)
# Compute the bits per dim (but irrelevant for binary data)
bpd = nll / (np.prod(nin) * np.log(2.))
return nll, bpd
```
Now we can evaluate our VAE model more formally as follows.
```python
# Plot final loss
plt.figure()
plt.plot(losses_kld[:, 0].numpy());
# Evaluate log-likelihood and bits per dim
nll, _ = evaluate_nll_bpd(test_loader, model)
print('Negative Log-Likelihood : ' + str(nll))
```
We can also evaluate the latent space of our model, which should be organized (being the overall point of using a VAE instead of a common AE).
```python
x = np.linspace(-3, 3, 8)
y = np.linspace(-3, 3, 8)
fig = plt.figure(figsize=(10, 8))
for i in range(8):
for j in range(8):
plt.subplot(8, 8, (i * 8) + j + 1)
final_tensor = torch.zeros(2)
final_tensor[0] = x[i]
final_tensor[1] = y[j]
plt.imshow(model.decode(final_tensor).detach().reshape(28, 28), cmap='gray')
plt.axis('off')
```
### Limitations of VAEs - (**exercise**)
Although VAEs are extremely powerful tools, they still have some limitations. Here we list the three most important and known limitations (all of them are still debated and topics of active research).
1. **Blurry reconstructions.** As can be witnessed directly in the results of the previous vanilla VAE implementation, the reconstructions appear to be blurry. The precise origin of this phenomenon is still debated, but the proposed explanations are
1. The use of the KL regularization
2. High variance regions of the latent space
3. The reconstruction criterion (expectation)
4. The use of simplistic latent distributions
2. **Posterior collapse.** The previous *blurry reconstructions* issue can be mitigated by using a more powerful decoder. However, relying on a decoder with a large capacity causes the phenomenon of *posterior collapse* where the latent space becomes useless. A nice intuitive explanation can be found [here](https://ermongroup.github.io/blog/a-tutorial-on-mmd-variational-autoencoders/)
3. **Simplistic Gaussian approximation**. In the derivation of the VAE objective, recall that the KL divergence term needs to be computed analytically. Therefore, this forces us to rely on quite simplistic families. However, the Gaussian family might be too simplistic to model real world data
In the present tutorial, we show how normalizing flows can be used to mostly solve the third limitation, while also addressing the first two problems. Indeed, we will see that normalizing flows also lead to sharper reconstructions and help prevent posterior collapse
<a id="improve"></a>
## Improving the quality of VAEs
As we discussed in the previous section, several known issues have been reported when using the vanilla VAE implementation. We listed some of the major issues as being
1. **Blurry reconstructions.**
2. **Posterior collapse.**
3. **Simplistic Gaussian approximation**.
Here, we discuss some recent developments that were proposed in the VAE literature and simple adjustments that can be made to (at least partly) alleviate these issues. However, note that some more advanced proposals such as PixelVAE [5](#reference1) and VQ-VAE [6](#reference1) can lead to even larger improvements in quality.
### Reducing the bluriness of reconstructions
In this tutorial, we relied on extremely simple decoder functions, to show how we could easily define VAEs and normalizing flows together. However, the capacity of the decoder obviously directly influences the quality of the final reconstruction. Therefore, we could address this issue naively by using deep networks and of course convolutional layers as we are currently dealing with images.
First, you need to construct a more complex encoder and decoder.
```python
def construct_encoder_decoder_complex(nin, n_latent = 16, n_hidden = 512, n_params = 0, n_classes = 1):
######################
# YOUR CODE GOES HERE
######################
# Encoder network
encoder = ...
# Decoder network
decoder = ...
return encoder, decoder
```
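One possible sketch is given below. It assumes the same interface as the simple MLP version used earlier (the encoder maps a flattened image to `n_hidden` features and the decoder maps a latent code back to a flattened image), assumes square inputs (28x28 for MNIST), and only handles the Bernoulli case (`n_classes = 1`); a multinomial decoder would instead need `n_classes` output channels and no final sigmoid.
```python
import torch
import torch.nn as nn

class Reshape(nn.Module):
    # small helper so we can reshape inside nn.Sequential
    def __init__(self, *shape):
        super().__init__()
        self.shape = shape
    def forward(self, x):
        return x.view(x.size(0), *self.shape)

def construct_encoder_decoder_complex(nin, n_latent = 16, n_hidden = 512, n_params = 0, n_classes = 1):
    size = int(nin ** 0.5)  # assumes square images
    # Encoder network (two strided convolutions, then a dense layer)
    encoder = nn.Sequential(
        Reshape(1, size, size),
        nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        nn.Flatten(),
        nn.Linear(64 * (size // 4) * (size // 4), n_hidden), nn.ReLU())
    # Decoder network (dense layers, then two transposed convolutions)
    decoder = nn.Sequential(
        nn.Linear(n_latent, n_hidden), nn.ReLU(),
        nn.Linear(n_hidden, 64 * (size // 4) * (size // 4)), nn.ReLU(),
        Reshape(64, size // 4, size // 4),
        nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
        nn.ConvTranspose2d(32, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        nn.Flatten())
    return encoder, decoder
```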
### Preventing posterior collapse with Wasserstein-VAE-MMD (InfoVAE)
As we discussed earlier, the reason behind posterior collapse mostly relates to the KL divergence criterion (a nice intuitive explanation can be found [here](https://ermongroup.github.io/blog/a-tutorial-on-mmd-variational-autoencoders/)). This can be mitigated by relying on a different criterion, such as regularizing the latent distribution by using the *Maximum Mean Discrepancy* (MMD) instead of the KL divergence. This model was independently proposed as the *InfoVAE* and later also as the *Wasserstein-VAE*.
Here we provide a simple implementation of the `InfoVAEMMD` class based on our previous implementations.
```python
######################
# YOUR CODE GOES HERE
######################
def compute_kernel(x, y):
return ...
def compute_mmd(x, y):
return ...
class InfoVAEMMD(VAE):
def __init__(self, encoder, decoder):
super(InfoVAEMMD, self).__init__(encoder, decoder)
def latent(self, x, z_params):
return ...
```
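A minimal sketch with an RBF (Gaussian) kernel, one common choice for the MMD, is shown below. In the `latent` method one would then sample `z` with the reparameterization trick and return `compute_mmd(torch.randn_like(z), z)` in place of the KL term; the exact signature depends on the `VAE` class defined earlier, so only the kernel and MMD parts are sketched here.
```python
import torch

def compute_kernel(x, y):
    # RBF kernel between all pairs of rows of x and y
    x_size, y_size, dim = x.size(0), y.size(0), x.size(1)
    x_tiled = x.unsqueeze(1).expand(x_size, y_size, dim)
    y_tiled = y.unsqueeze(0).expand(x_size, y_size, dim)
    return torch.exp(-((x_tiled - y_tiled) ** 2).mean(2) / dim)

def compute_mmd(x, y):
    # MMD estimate: E[k(x,x)] + E[k(y,y)] - 2 E[k(x,y)]
    return (compute_kernel(x, x).mean()
            + compute_kernel(y, y).mean()
            - 2 * compute_kernel(x, y).mean())
```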
### Putting it all together
Here we combine all these ideas (except for the MMD, which is not adequate here since the flow definition already regularizes the latent space without the KL divergence) to perform a more advanced optimization on the dataset. Hence, we will rely on the complex encoder and decoder with gated convolutions, the multinomial loss, and the normalizing flows in order to improve the overall quality of our reconstructions.
```python
# Size of latent space
n_latent = 16
# Number of hidden units
n_hidden = 256
# Rely on Bernoulli or multinomial
num_classes = 128
######################
# YOUR CODE GOES HERE
######################
# Construct encoder and decoder
encoder, decoder = ...
# Create VAE or (InfoVAEMMD - WAE) model
model_flow_p = ...
# Create optimizer algorithm
optimizer = ...
# Add learning rate scheduler
scheduler = ...
# Launch our optimization
losses_flow_param = ...
```
*NB*: It seems that the multinomial version has a hard time converging. Although I only let this run for 200 epochs and only on a subsample of 5000 examples, it might simply need more time, but this might also come from a mistake somewhere in my code ... If you spot something odd, please let me know :)
### References
<a id="reference1"></a>
[1] Rezende, Danilo Jimenez, and Shakir Mohamed. "Variational inference with normalizing flows." _arXiv preprint arXiv:1505.05770_ (2015). [link](http://arxiv.org/pdf/1505.05770)
[2] Kingma, Diederik P., Tim Salimans, and Max Welling. "Improving Variational Inference with Inverse Autoregressive Flow." _arXiv preprint arXiv:1606.04934_ (2016). [link](https://arxiv.org/abs/1606.04934)
[3] Kingma, D. P., & Welling, M. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114. (2013). [link](https://arxiv.org/pdf/1312.6114)
[4] Rezende, D. J., Mohamed, S., & Wierstra, D. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082. (2014). [link](https://arxiv.org/pdf/1401.4082)
[5] Gulrajani, I., Kumar, K., Ahmed, F., Taiga, A. A., Visin, F., Vazquez, D., & Courville, A. (2016). Pixelvae: A latent variable model for natural images. arXiv preprint arXiv:1611.05013. [link](https://arxiv.org/pdf/1611.05013)
[6] Van den Oord, A., & Vinyals, O. (2017). Neural discrete representation learning. In NIPS 2017 (pp. 6306-6315). [link](http://papers.nips.cc/paper/7210-neural-discrete-representation-learning.pdf)
### Inspirations and resources
https://blog.evjang.com/2018/01/nf1.html
https://github.com/ex4sperans/variational-inference-with-normalizing-flows
https://akosiorek.github.io/ml/2018/04/03/norm_flows.html
https://github.com/abdulfatir/normalizing-flows
https://github.com/riannevdberg/sylvester-flows
|
e18e11a8a03ae202fda9c013da3ab1b932440c9e
| 39,188 |
ipynb
|
Jupyter Notebook
|
10a_variational_auto_encoders.ipynb
|
piptouque/atiam_ml
|
9da637eae179237d30a15dd9ce3e95a2a956c385
|
[
"MIT"
] | null | null | null |
10a_variational_auto_encoders.ipynb
|
piptouque/atiam_ml
|
9da637eae179237d30a15dd9ce3e95a2a956c385
|
[
"MIT"
] | null | null | null |
10a_variational_auto_encoders.ipynb
|
piptouque/atiam_ml
|
9da637eae179237d30a15dd9ce3e95a2a956c385
|
[
"MIT"
] | null | null | null | 47.044418 | 698 | 0.616337 | true | 7,734 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.872347 | 0.819893 | 0.715232 |
__label__eng_Latn
| 0.978203 | 0.500055 |
# Non-linear equations: introduction
This notebook is based on Chapter 1 of
<a id='thebook'></a>
> Süli, Endre and Mayers, David F. _An introduction to numerical analysis_. Cambridge University Press, Cambridge, 2003.
<https://doi.org/10.1017/CBO9780511801181>
(ebook in [Helka](https://helka.helsinki.fi/permalink/358UOH_INST/1h3k2rg/alma9926836783506253))
In addition, some examples are taken from Chapters 1 and 2 of
> Scott, L. Ridgway.
_Numerical analysis_. Princeton University Press, Princeton, NJ, 2011.
The equation $x^2 = 2$ has two solutions $x = \pm \sqrt{2}$, but how do we compute an approximation to $\sqrt{2}$ as a floating-point number?
This problem has a long tradition, see the Babylonian clay tablet [YBC 7289](https://en.wikipedia.org/wiki/YBC_7289).
Perhaps the first algorithm used for approximating $\sqrt{q}$, with $q > 0$, is the [Babylonian method](https://en.wikipedia.org/wiki/Methods_of_computing_square_roots#Babylonian_method):
1. Start with an initial guess $x_0$
2. Set
$$
x_{n+1} = \frac 1 2 (x_n + \frac q {x_n})
$$
3. Repeat step 2 until the desired accuracy is achieved
```python
def babylonian_method(x, q):
return 0.5*(x + q/x)
```
```python
import numpy as np
max_steps = 5
xs = np.zeros(max_steps)
x = 1.5
for n in range(max_steps):
x = babylonian_method(x, 2)
xs[n] = x
xs
```
```python
xtrue = np.sqrt(2)
errs = np.abs(xs - xtrue)
import pandas as pd
df = pd.DataFrame({
'x': xs,
'error': errs,
})
df.index = range(1, max_steps+1)
df.index.name = 'n'
df.style.format({'error': '{:.1e}'})
```
# Fixed-point iteration
The Babylonian method is a fixed point iteration of the form
$$
x_{n+1} = f(x_n), \qquad n=0,1,\dots
$$
If the sequence $x_n$ converges to a point $\xi$ and $f$ is continuous, then $\xi$ is a fixed point of $f$, that is, $\xi = f(\xi)$. Indeed
$$
\xi = \lim_{n \to \infty} x_{n+1} = \lim_{n \to \infty} f(x_n) = f(\xi).
$$
Note that while Gaussian elimination terminates after a finite number of steps, a fixed-point iteration may require an infinite number of steps to converge.
In the case of the Babylonian method
$$
f(x) = \frac 1 2 (x + \frac q {x})
$$
and
$$
\xi = f(\xi)
\quad\iff\quad
\xi = \frac q {\xi}
\quad\iff\quad
\xi^2 = q
\quad\iff\quad
\xi = \pm \sqrt{q}.
$$
One may wonder if the simpler choice $f(x) = \frac q {x}$ would work as well.
After all $\xi = f(\xi)$ is equivalent to $\xi = \pm \sqrt{q}$ also for this $f$.
```python
def non_method(x, q):
return q / x
max_steps = 5
xs = np.zeros(max_steps)
x = 1.5 # initial guess
for n in range(max_steps):
x = non_method(x, 2)
xs[n] = x
xs
```
Consider a closed interval $I = [a,b]$ with $a < b$.
A function $f : I \to \mathbb R$ is a _contraction_ if there is $0 < L < 1$ such that
\begin{equation}\tag{1}
|f(x) - f(y)| \le L |x - y|, \qquad x,y \in I.
\end{equation}
The [mean value theorem](https://en.wikipedia.org/wiki/Mean_value_theorem) implies that (1) holds with $L = \max_{x \in I} |f'(x)|$ whenever $f$ is differentiable.
## Theorem: Banach fixed-point
> Suppose that $f : I \to I$ is a contraction. Then there is a unique fixed point $\xi \in I$ of $f$, and the sequence $x_{n+1} = f(x_{n})$ converges to $\xi$ for any initial guess $x_0 \in I$.
For a proof, see Theorem 1.3 of [the book](#thebook).
## Theorem: local convergence
> Let $\xi$ be a fixed point of $f : \mathbb R \to \mathbb R$ and suppose that $f$ is continuously differentiable near $\xi$. If $|f'(\xi)| < 1$, then there is an open interval $I$ containing $\xi$ such that the sequence $x_{n+1} = f(x_{n})$ converges to $\xi$ for any initial guess $x_0 \in I$.
_Proof_. The continuity of $f'$ near $\xi$, together with $|f'(\xi)| < 1$, implies that there are $\epsilon > 0$ and $0 <\delta < 1$ such that
$$
|f'(x)| \le \delta, \qquad x \in (\xi - \epsilon, \xi + \epsilon) =: I.
$$
The mean value theorem implies
$$
|f(x) - f(y)| \le L |x - y|, \qquad x,y \in I,
$$
with $L = \delta$. In particular,
$$
|f(x) - \xi| = |f(x) - f(\xi)| \le \delta |x - \xi| < \delta \epsilon < \epsilon, \qquad x \in I.
$$
Hence $f$ maps $I$ to itself and we can apply the Banach fixed-point theorem. $\blacksquare$
In the case of the Babylonian method
$$
f(x) = \frac 1 2 (x + \frac q {x})
\quad \text{and} \quad
f'(x) = \frac 1 2 (1 - \frac q {x^2}).
$$
Thus $f'(\xi) = 0$ at the fixed point $\xi = \sqrt{q}$,
and the method converges locally (that is, for an initial guess near $\xi$).
On the other hand, in the case of the "non-method"
$$
f(x) = \frac q {x},
\quad \text{and} \quad
f'(x) = - \frac q {x^2}.
$$
Thus $f'(\xi) = -1$ at $\xi = \sqrt{q}$,
and $f$ is **not** a contraction on any open interval containing $\xi$.
## Definition: order of convergence
> Suppose that a sequence $x_n$ converges to $\xi$ in $\mathbb R$.
If there are $p > 1$ and $\mu > 0$ such that
>
>\begin{align}
\tag{1}
\lim_{n \to \infty} \frac{|x_{n+1} - \xi|}{|x_{n} - \xi|^p} = \mu,
\end{align}
>
> then $x_n$ is said to converge with order $p$.
If (1) holds with $p = 1$ and $0 < \mu < 1$,
then $x_n$ is said to converge linearly.
Finally, if (1) holds with $p = 1$ and $\mu = 1$,
then $x_n$ is said to converge sublinearly.
## Example: different orders of convergence
Let $\lambda \in (0,1)$. Then
* sequence $x_n = \lambda^n$ converges linearly to zero,
* sequence $x_n = \lambda^{2^n}$ converges quadratically (that is, with order 2) to zero,
* sequence $x_n = 1/n$ converges sublinearly to zero,
* sequence $x_n = \lambda^{n!}$ converges superpolynomially to zero in the sense that for all $p > 1$
$$
\frac{x_{n+1}}{x_n^p} \to 0.
$$
```python
from matplotlib import pyplot as plt
lam = 0.5
def f(n):
return lam**n
N = 10
ns = np.arange(1, N+1)
plt.semilogy(ns, f(ns));
```
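We can also estimate the order of convergence of the Babylonian iterates computed above directly from the errors `errs`, using $p \approx \log(e_{n+1}/e_n)/\log(e_n/e_{n-1})$. (This is an illustrative addition; only the first few iterates give clean estimates before rounding errors dominate.)
```python
e = errs[errs > 0]  # Babylonian errors computed earlier; drop exact zeros
p_est = np.log(e[2:] / e[1:-1]) / np.log(e[1:-1] / e[:-2])
p_est  # should be close to 2 (quadratic convergence)
```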
## Theorem: local _linear_ convergence
> Let $f$, $\xi$ and $I$ be as in the local convergence theorem.
Suppose that $f'(\xi) \ne 0$. Then the sequence $x_{n+1} = f(x_{n})$ converges linearly to $\xi$ for any initial guess $x_0 \in I$.
_Proof_. The _local convergence_ theorem implies that $x_n \to \xi$. The mean value theorem implies that there is $\tilde x_n$ between $\xi$ and $x_n$ such that
$$
f(x_n) - f(\xi) = f'(\tilde x_n)(x_n - \xi).
$$
As $x_n \to \xi$, also $\tilde x_n \to \xi$. Recalling that $f'$ is continuous near $\xi$, we have
$$
\frac{|x_{n+1} - \xi|}{|x_{n} - \xi|}
=
\frac{|f(x_n) - f(\xi)|}{|x_{n} - \xi|}
=
|f'(\tilde x_n)| \to |f'(\xi)| =: \mu.
$$
To conclude, we observe that $0 < \mu < 1$.
$\blacksquare$
## Theorem: local _higher-order_ convergence
> Let $f$, $\xi$ and $I$ be as in the local convergence theorem.
Suppose that $f$ has continuous derivatives up to order $p \ge 2$ near $\xi$,
and that $f'(\xi) = \dots = f^{(p - 1)}(\xi) = 0$
and $f^{(p)}(\xi) \ne 0$. Then the sequence $x_{n+1} = f(x_{n})$ converges with order $p$ to $\xi$ for any initial guess $x_0 \in I$.
_Proof_. The _local convergence_ theorem implies that $x_n \to \xi$.
[Taylor's theorem](https://en.wikipedia.org/wiki/Taylor's_theorem#Explicit_formulas_for_the_remainder), with Lagrange form of the remainder, says that there is $\tilde x_n$ between $\xi$ and $x_n$ such that
$$
f(x_n) - f(\xi) = \frac{f^{(p)}(\tilde x_n)}{p!}(x_n - \xi)^p.
$$
As $x_n \to \xi$, also $\tilde x_n \to \xi$. Recalling that $f^{(p)}$ is continuous near $\xi$, we have
$$
\frac{|x_{n+1} - \xi|}{|x_{n} - \xi|^p}
=
\frac{|f(x_n) - f(\xi)|}{|x_{n} - \xi|^p}
=
\frac{|f^{(p)}(\tilde x_n)|}{p!} \to \frac{|f^{(p)}(\xi)|}{p!} \ne 0.
$$
$\blacksquare$
# Relaxation and Newton's method
The problem to solve $\phi(x) = 0$ can be rewritten as the problem to find a fixed point $x = f(x)$.
Indeed, these two problems are equivalent if
$f(x) = x - \phi(x)$.
## Theorem: relaxation
> Let $\phi : \mathbb R \to \mathbb R$ be continuously differentiable near a point $\xi \in \mathbb R$.
Suppose that $\phi(\xi) = 0$ and $\phi'(\xi) > 0$. Then there
are an open interval $I$ containing $\xi$ and $\lambda > 0$ such that the relaxation iteration
>
> $$
x_{n+1} = x_n - \lambda \phi(x_n), \quad n=0,1\dots
$$
>
> converges to $\xi$ for any initial guess $x_0 \in I$.
If instead $\phi(\xi) = 0$ and $\phi'(\xi) < 0$, then we can apply the theorem to $-\phi$.
_Proof_. The function $f(x) = x - \lambda \phi(x)$ satisfies
$$
-1 < f'(\xi) = 1 - \lambda \phi'(\xi) < 1
$$
for small $\lambda > 0$, and we can apply the _local convergence_ theorem.
$\blacksquare$
Note that the choice $\lambda = 1/\phi'(\xi)$ leads to $f'(\xi) = 0$
and gives a method with at least quadratic convergence.
Of course, we typically don't know $\xi$ (we are solving for it), and hence can not make this optimal choice in practice.
Let's apply relaxation to the "non-method" $f(x) = q / x$. In this case
$$
\phi(x) = x - \frac q x, \quad \phi'(x) = 1 + \frac q {x^2}.
$$
In particular, $\phi'(\xi) = 2$ at $\xi = \sqrt{q}$.
Taking $\lambda = 1/\phi'(\xi) = 1/2$ gives the Babylonian method
$$
x_{n+1} = x_n - \frac 1 2 \phi(x_n) = \frac 1 2 (x_n + \frac q {x_n}).
$$
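As a quick illustration (an addition to the notes), the relaxation iteration is easy to implement directly; with $\lambda = 1/2$ it reproduces the Babylonian method, and smaller values of $\lambda$ still converge, only more slowly.
```python
def relaxation(phi, x0, lam, max_steps=20):
    x = x0
    for n in range(max_steps):
        x = x - lam*phi(x)
    return x

q = 2
phi = lambda x: x - q/x
print(relaxation(phi, 1.5, 0.5))  # lam = 1/phi'(xi) = 1/2 gives the Babylonian method
print(relaxation(phi, 1.5, 0.2))  # a smaller lam still converges, just linearly
```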
Newton's method can be viewed as a generalization of the relaxation iteration where we let $\lambda$ be non-constant, and take $\lambda = 1/\phi'(x_n)$:
$$
x_{n+1} = x_n - \frac {\phi(x_n)}{\phi'(x_n)}, \quad n=0,1\dots
$$
Put differently, we take $\phi'(x_n)$ as a proxy for $\phi'(\xi)$. Another way to arrive at Newton's method is to replace $\phi(x)$ in the equation $\phi(x) = 0$
by its first order Taylor polynomial at $x = x_n$, that is,
$$
0 = \phi(x) \approx \phi(x_n) + \phi'(x_n) (x - x_n).
$$
Then solving for $x$ gives $x = x_{n+1}$ with $x_{n+1}$ as above.
Let us apply Newton's method to the equation $x^2 = q$ and take $\phi(x) = x^2 - q$.
Then
$$
x_{n+1} = x_n - \frac {x_n^2 - q} {2 x_n} = \frac 1 2 (x_n + \frac q {x_n}),
$$
the Babylonian method once again.
## Theorem: convergence of Newton's method
>Suppose that $\phi : \mathbb R \to \mathbb R$ has continuous derivatives up to order 3 near a point $\xi \in \mathbb R$, and that $\phi(\xi) = 0$ and $\phi'(\xi) \ne 0$. Then there
is an open interval $I$ containing $\xi$ such that Newton's method
>
>$$
x_{n+1} = x_n - \frac {\phi(x_n)}{\phi'(x_n)}, \quad n=0,1\dots
$$
>
>converges at least quadratically to $\xi$ for any initial guess $x_0 \in I$.
_Proof_. The function
$$
f(x) = x - \frac {\phi(x)}{\phi'(x)}
$$
satisfies $f'(\xi) = 0$, and we can apply the _local higher-order convergence_ theorem.
$\blacksquare$
A slightly sharper proof shows that the assumptions can be weakened to $\phi$ having continuous derivatives up to order 2 near $\xi$, see Theorem 1.8 of the [the book](#thebook).
## Example: Kepler's equation
In orbital mechanics, [Kepler's equation](https://en.wikipedia.org/wiki/Kepler%27s_equation) relates various geometric properties of the orbit of a body subject to a central force. It reads
$$
M = E - e \sin E
$$
where $M$ is the mean anomaly, $E$ is the eccentric anomaly, and $e$ is the eccentricity.
The first published use by Newton of his eponymous method in an iterative form, and applied to a nonpolynomial equation, is in the second and third editions of
his _Philosophiae Naturalis Principia Mathematica_, where it is applied to Kepler's equation. For more details, see Section 6 of
>Ypma, Tjalling J.
_Historical development of the Newton-Raphson method_.
SIAM Rev. 37 (1995), no. 4, 531–551.
<https://doi.org/10.1137/1037125>
Let us consider the following calculation that is reproduced from p. 148 of
> Duffett-Smith, Peter and Zwart, Jonathan. _Practical Astronomy with your Calculator or Spreadsheet_. Cambridge University Press, Cambridge, UK, 2011.
(ebook in [Helka](https://helka.helsinki.fi/permalink/358UOH_INST/1h3k2rg/alma9932214283506253))
```python
# Parameters from the book by Duffett-Smith and Zwart
M = 6.108598
E0 = 5.31
e = 0.9673 # eccentricity of the orbit of Halley is given on p. 145
# Solve Kepler's equation E - e sin(E) - M = 0
def f(E):
return E - e*np.sin(E) - M
def fprime(E):
return 1 - e*np.cos(E)
def newton_demo(f, x0, fprime, max_steps = 5):
'''Newton's method with a fixed number of steps'''
x = x0
for n in range(max_steps):
x = x - f(x)/fprime(x)
return x
E = newton_demo(f, E0, fprime)
print(f'newton_demo: {E = }')
import scipy.optimize as opt
E = opt.newton(f, E0, fprime)
print(f'opt.newton: {E = }')
```
## Example: global behavior of Newton's method
Consider
$$
f(x) = x \exp(- x^2).
$$
The only solution to $f(x) = 0$ is $x = 0$, and $f'(0) = 1$. Hence Newton's method converges starting from an initial guess $x_0$ close enough to the origin. But it does not converge if the initial guess is not good enough.
```python
def f(x):
return x * np.exp(-x**2)
xs = np.linspace(-4, 4)
plt.plot(xs, f(xs));
x0 = 1
plt.plot([x0, x0], [-0.5, 0.5], 'r');
```
```python
def fprime(x):
return (1 - 2*x**2) * np.exp(-x**2)
x = newton_demo(f, x0, fprime, max_steps=100)
print(f'{x0 = }, {x = }')
```
# On the optimization sub-package of SciPy
We have already seen [newton](https://docs.scipy.org/doc/scipy/reference/reference/generated/scipy.optimize.newton.html), the implementation of Newton's method in SciPy. Calling `newton` without giving the derivative makes SciPy use the secant method, described in Section 1.5 of [the book](#thebook). The bisection method is given by [bisect](https://docs.scipy.org/doc/scipy/reference/reference/generated/scipy.optimize.bisect.html) and is described in Section 1.6 of [the book](#thebook).
Most of the methods in the optimization sub-package are outside the scope of this course. Some of them are described in the optimization course at UH. For more information on the sub-package see the [tutorial](https://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html).
```python
# Compare Newton, secant and bisection
def f(x):
return np.exp(x) - x - 2
def fprime(x):
return np.exp(x) - 1
x0, x1 = 1, 3
x, rres = opt.newton(f, x0, fprime, full_output=True)
print(f" Newton's method converged to {x} in {rres.iterations} steps")
x, rres = opt.newton(f, x0, x1=x1, full_output=True)
print(f' The secant method converged to {x} in {rres.iterations} steps')
x, rres = opt.bisect(f, x0, x1, full_output=True)
print(f"The bisection method converged to {x} in {rres.iterations} steps")
```
```python
```
|
daf1ff351ad48290988ed5131dce57f42de7041d
| 138,438 |
ipynb
|
Jupyter Notebook
|
nonlinear-eqs/lecture.ipynb
|
rmsb/notebooks
|
fd06c9d068acc0ed738ca60a5a727c9581357e5f
|
[
"CC-BY-4.0"
] | 2 |
2022-01-11T13:08:56.000Z
|
2022-01-11T13:09:47.000Z
|
nonlinear-eqs/lecture.ipynb
|
rmsb/notebooks
|
fd06c9d068acc0ed738ca60a5a727c9581357e5f
|
[
"CC-BY-4.0"
] | null | null | null |
nonlinear-eqs/lecture.ipynb
|
rmsb/notebooks
|
fd06c9d068acc0ed738ca60a5a727c9581357e5f
|
[
"CC-BY-4.0"
] | 2 |
2022-01-18T09:48:10.000Z
|
2022-01-24T18:03:41.000Z
| 157.137344 | 112,456 | 0.881882 | true | 4,898 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.782662 | 0.785309 | 0.614632 |
__label__eng_Latn
| 0.965013 | 0.266325 |
# <span style="color:#2c061f"> PS6: Solving the Solow-model</span>
<br>
## <span style="color:#374045"> Introduction to Programming and Numerical Analysis </span>
*Oluf Kelkjær*
### **Today's Plan**
1. Dataproject
1. Working with equations
* Scipy's `linalg`
* `Sympy`
2. Let's work on PS6
## Dataproject
Expect feedback from me before the next exercise class!
Remember to do peer feedback - **deadline**: April 24th, 23:59
## Scipy's `linalg`
Linalg is one of scipy's submodules.
It can do basically anything within the realm of linear algebra:
- Basic stuff: determinant, invert, norm
- Matrix decompositions (LU, Cholesky etc.)
- Solve a system of equations
- Find eigenvalues
## An example:
Let's solve for x
$$Ax = b$$
```python
import numpy as np
from scipy import linalg
np.random.seed(1900)
A = np.random.uniform(size=(5,5))
b = np.random.uniform(size=5)
print(f'Matrix A:\n{A}\n\nMatrix b:\n {b}')
```
Matrix A:
[[0.33224607 0.71427591 0.37834749 0.24908241 0.83598633]
[0.02005845 0.32670359 0.05606653 0.4008206 0.13288711]
[0.88711192 0.15490098 0.01708181 0.95781716 0.58999632]
[0.83959058 0.7146372 0.58705537 0.40933648 0.14603168]
[0.16407166 0.65717511 0.146494 0.67717016 0.47425348]]
Matrix b:
[0.78485347 0.85159023 0.84757586 0.42016935 0.20991113]
```python
# Solve using LU factorization:
# split A into a lower and an upper triangular matrix plus a permutation matrix -> speed
# LU factorize A using linalg
LU,piv = linalg.lu_factor(A)
# Solve using linalg
x = linalg.lu_solve((LU,piv),b)
print(x)
```
[-15.33189031 -24.00998148 40.02675108 15.24193293 4.89008792]
```python
# or you could use a regular solve
print(linalg.solve(A,b))
```
[-15.33189031 -24.00998148 40.02675108 15.24193293 4.89008792]
## What do we use it for?
In the first question of the 2020 exam, you had to implement the OLS estimator using linear algebra. Recall that
$$\hat{\beta}=(X^{'}X)^{-1}X^{'}y$$
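A minimal sketch of what this could look like on simulated data (the variable names and the simulated dataset below are only for illustration, not the exam's):
```python
# OLS estimator via linear algebra on simulated data
N, K = 100, 3
X = np.random.normal(size=(N, K))
beta_true = np.array([0.5, -1.0, 2.0])
y = X @ beta_true + np.random.normal(scale=0.1, size=N)

beta_hat = linalg.inv(X.T @ X) @ X.T @ y   # (X'X)^{-1} X'y
# numerically, linalg.solve(X.T @ X, X.T @ y) is preferable to forming the inverse
print(beta_hat)
```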
## Symbolic Python
`SymPy` is a Python library for symbolic mathematics and lets you solve equations **analytically**! (*like* WolframAlpha or Symbolab)
Say that you want to implement the utility function of a standard OLG agent. We assume agents derive utility from consumption in both periods:
$$U_t = u(c_{1t})+\frac{1}{1+\rho}u(c_{2t+1})$$
We assume log-preferences
```python
import sympy as sm
# Initialize variabels in Sympy
c1,c2 = sm.symbols('c_1t'), sm.symbols('c_2t+1')
rho = sm.symbols('rho')
# Setup utility in sympy
uc1 = sm.ln(c1)
uc2 = sm.ln(c2)
U = uc1 + 1/(1+rho) * uc2
U
```
$\displaystyle \log{\left(c_{1t} \right)} + \frac{\log{\left(c_{2t+1} \right)}}{\rho + 1}$
With `sympy` we are able to do many calculations. Say that we need the derivate of $U$ w.r.t. $c_{2t+1}$:
```python
# We just use SymPy's .diff() method:
sm.diff(U,c2)
```
$\displaystyle \frac{1}{c_{2t+1} \left(\rho + 1\right)}$
Another cool feature is that you can turn your SymPy equations into python functions. This can really tie your model projects together:
* Solve model analytically with SymPy
* Convert your solution to a python function e.g. the law-of-motion in OLG
* Find steady state level of capital using an optimizer
How is it done?
```python
# Use SymPy's lambdify method which takes an iterable of arguments in our case the consumptions and rho
# and of course the function in our case U
util = sm.lambdify((c1,c2,rho),U)
# Compute some utility
util(7,8,0.1)
```
3.836311550582437
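As a small end-to-end illustration (a hypothetical example, not part of the problem set): substitute a budget constraint $c_{2t+1} = (1+r)(w - c_{1t})$ into $U$, solve the first-order condition analytically, and lambdify the result.
```python
r, w = sm.symbols('r w')
U_sub = U.subs(c2, (1+r)*(w - c1))      # impose the budget constraint
foc = sm.diff(U_sub, c1)                # first-order condition
c1_star = sm.solve(foc, c1)[0]          # optimal first-period consumption
c1_func = sm.lambdify((w, r, rho), c1_star)
c1_star, c1_func(1.0, 0.04, 0.1)        # with log utility, r drops out of c1
```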
## Let's work on PS6 :)
|
44c62ed132b74b9f80d5db1b7d496bc40bbc1c21
| 8,630 |
ipynb
|
Jupyter Notebook
|
Slides/exc_11/class11.ipynb
|
OlufKelk/IPNA
|
27abbcd0de9d2da33169f1caf9604ebb58682c61
|
[
"MIT"
] | null | null | null |
Slides/exc_11/class11.ipynb
|
OlufKelk/IPNA
|
27abbcd0de9d2da33169f1caf9604ebb58682c61
|
[
"MIT"
] | null | null | null |
Slides/exc_11/class11.ipynb
|
OlufKelk/IPNA
|
27abbcd0de9d2da33169f1caf9604ebb58682c61
|
[
"MIT"
] | null | null | null | 23.972222 | 146 | 0.529432 | true | 1,179 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.890294 | 0.859664 | 0.765354 |
__label__eng_Latn
| 0.869806 | 0.616505 |
# Linear programming
> Linear programming is the field of mathematical optimization devoted to maximizing or minimizing (optimizing) linear functions, called the objective function, such that the variables of that function are subject to a set of constraints expressed as a system of equations or inequalities that are also linear.
**References:**
- https://es.wikipedia.org/wiki/Programaci%C3%B3n_lineal
- https://docs.scipy.org/doc/scipy-0.18.1/reference/optimize.html
## 1. Historical notes
- 1826: Joseph Fourier anticipates linear programming. Carl Friedrich Gauss solves linear equations by "Gaussian" elimination.
- 1902: Gyula Farkas devises a method for solving systems of inequalities.
- It was not until World War II that linear programming was posed as a mathematical model for planning expenditures and returns, so as to reduce war costs and increase enemy losses. It was kept secret until 1947 (postwar).
- 1947: George Dantzig publishes the simplex algorithm and John von Neumann develops duality theory. Leonid Kantorovich is known to have also formulated the theory independently.
- It was used by many industries for day-to-day planning.
**Up to this point, exponential solution times. What follows, polynomial time.**
- 1979: Leonid Khachiyan designs the so-called ellipsoid algorithm, with which he showed that the linear programming problem can be solved efficiently, that is, in polynomial time.
- 1984: Narendra Karmarkar introduces the interior-point method for solving linear programming problems.
**Mention computational complexity.**
## 2. Motivation
When optimizing a function of several variables subject to constraints, the method of Lagrange multipliers can always be applied. However, this method becomes computationally very demanding as the number of variables grows.
Therefore, when the function to be optimized and the constraints are linear, the solution methods that can be developed are computationally efficient, which makes it useful to draw this distinction.
## 3. Linear programming problems
According to the above, a linear programming problem can be written in the following form:
\begin{equation}
\begin{array}{ll}
\min_{x_1,\dots,x_n} & c_1x_1+\dots+c_nx_n \\
\text{s.t. } & a^{eq}_{j,1}x_1+\dots+a^{eq}_{j,n}x_n=b^{eq}_j \text{ for } 1\leq j\leq m_1 \\
& a_{k,1}x_1+\dots+a_{k,n}x_n\leq b_k \text{ for } 1\leq k\leq m_2,
\end{array}
\end{equation}
where:
- $x_i$ for $i=1,\dots,n$ are the unknowns or decision variables,
- $c_i$ for $i=1,\dots,n$ are the coefficients of the function to be optimized,
- $a^{eq}_{j,i}$ for $j=1,\dots,m_1$ and $i=1,\dots,n$ are the coefficients of the equality constraints,
- $a_{k,i}$ for $k=1,\dots,m_2$ and $i=1,\dots,n$ are the coefficients of the inequality constraints,
- $b^{eq}_j$ for $j=1,\dots,m_1$ are known values that must be met exactly, and
- $b_k$ for $k=1,\dots,m_2$ are known values that must not be exceeded.
Equivalently, the problem can be written as
\begin{equation}
\begin{array}{ll}
\min_{\boldsymbol{x}} & \boldsymbol{c}^T\boldsymbol{x} \\
\text{s.t. } & \boldsymbol{A}_{eq}\boldsymbol{x}=\boldsymbol{b}_{eq} \\
& \boldsymbol{A}\boldsymbol{x}\leq\boldsymbol{b},
\end{array}
\end{equation}
where:
- $\boldsymbol{x}=\left[x_1\quad\dots\quad x_n\right]^T$,
- $\boldsymbol{c}=\left[c_1\quad\dots\quad c_n\right]^T$,
- $\boldsymbol{A}_{eq}=\left[\begin{array}{ccc}a^{eq}_{1,1} & \dots & a^{eq}_{1,n}\\ \vdots & \ddots & \vdots\\ a^{eq}_{m_1,1} & \dots & a^{eq}_{m_1,n}\end{array}\right]$,
- $\boldsymbol{A}=\left[\begin{array}{ccc}a_{1,1} & \dots & a_{1,n}\\ \vdots & \ddots & \vdots\\ a_{m_2,1} & \dots & a_{m_2,n}\end{array}\right]$,
- $\boldsymbol{b}_{eq}=\left[b^{eq}_1\quad\dots\quad b^{eq}_{m_1}\right]^T$, and
- $\boldsymbol{b}=\left[b_1\quad\dots\quad b_{m_2}\right]^T$.
**Note:** the problem $\max_{\boldsymbol{x}}\boldsymbol{g}(\boldsymbol{x})$ is equivalent to $\min_{\boldsymbol{x}}-\boldsymbol{g}(\boldsymbol{x})$.
## 4. Basic example
A company produces two products ($X_1$ and $X_2$) using two machines ($A$ and $B$). Each unit of $X_1$ that is produced requires 50 minutes on machine $A$ and 30 minutes on machine $B$. Each unit of $X_2$ that is produced requires 24 minutes on machine $A$ and 33 minutes on machine $B$.
At the beginning of the week there are 30 units of $X_1$ and 90 units of $X_2$ in inventory. The available usage time of machine $A$ is 40 hours and that of machine $B$ is 35 hours.
The demand for $X_1$ in the current week is 75 units and that for $X_2$ is 95 units. The company's policy is to maximize the combined sum of units of $X_1$ and $X_2$ in inventory at the end of the week.
Formulate the problem of deciding how much of each product to make during the week as a linear programming problem.
### Solution
Let:
- $x_1$ be the number of units of $X_1$ to be produced during the week, and
- $x_2$ be the number of units of $X_2$ to be produced during the week.
Note that what we want is to maximize $x_1+x_2$.
Constraints:
1. The available usage time of machine $A$ is 40 hours: $50x_1+24x_2\leq 40(60)\Rightarrow 50x_1+24x_2\leq 2400$.
2. The available usage time of machine $B$ is 35 hours: $30x_1+33x_2\leq 35(60)\Rightarrow 30x_1+33x_2\leq 2100$.
3. The demand for $X_1$ in the current week is 75 units: $x_1+30\geq 75\Rightarrow x_1\geq 45\Rightarrow -x_1\leq -45$.
4. The demand for $X_2$ in the current week is 95 units: $x_2+90\geq 95\Rightarrow x_2\geq 5\Rightarrow -x_2\leq -5$.
Finally, the problem can be expressed in the form described above as:
\begin{equation}
\begin{array}{ll}
\min_{x_1,x_2} & -x_1-x_2 \\
\text{s.t. } & 50x_1+24x_2\leq 2400 \\
& 30x_1+33x_2\leq 2100 \\
& -x_1\leq -45 \\
& -x_2\leq -5,
\end{array}
\end{equation}
or, equivalently,
\begin{equation}
\begin{array}{ll}
\min_{\boldsymbol{x}} & \boldsymbol{c}^T\boldsymbol{x} \\
\text{s.t. } & \boldsymbol{A}_{eq}\boldsymbol{x}=\boldsymbol{b}_{eq} \\
& \boldsymbol{A}\boldsymbol{x}\leq\boldsymbol{b},
\end{array}
\end{equation}
with
- $\boldsymbol{c}=\left[-1 \quad -1\right]^T$,
- $\boldsymbol{A}=\left[\begin{array}{cc}50 & 24 \\ 30 & 33\\ -1 & 0\\ 0 & -1\end{array}\right]$, and
- $\boldsymbol{b}=\left[2400\quad 2100\quad -45\quad -5\right]^T$.
From now on, we will prefer the vector/matrix notation.
This problem is simple since it involves only two variables, so a graphical solution is viable.
```python
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
```
```python
def res1(x1):
return (2400-50*x1)/24
def res2(x1):
return (2100-30*x1)/33
```
```python
x1 = np.linspace(40, 50)
r1 = res1(x1)
r2 = res2(x1)
```
```python
plt.figure(figsize = (8,6))
plt.plot(x1, res1(x1), 'b--', label = 'res1')
plt.plot(x1, res2(x1), 'r--', label = 'res2')
plt.plot([45, 45], [0, 25], 'k', label = 'res3')
plt.plot([40, 50], [5, 5], 'm', label = 'res4')
plt.fill_between(np.array([45.0, 45.6]), res1(np.array([45.0, 45.6])), 5*np.ones(2))
plt.text(44.8,4.9,'$(45,5)$',fontsize=10)
plt.text(44.75,6.25,'$(45,6.25)$',fontsize=10)
plt.text(45.6,5.1,'$(45.6,5)$',fontsize=10)
plt.legend(loc = 'best')
plt.xlabel('$x_1$')
plt.ylabel('$x_2$')
plt.axis([44, 46, 4, 7])
plt.show()
```
## 5. How do we solve it in Python?
### 5.1 The `SciPy` library
`SciPy` is open-source, `Python`-based software for mathematics, science, and engineering.
In particular, the following are some of the core packages:
- `NumPy`
- **The `SciPy` library**
- `SymPy`
- `matplotlib`
- `pandas`
The **`SciPy` library** is one of the main packages and provides several efficient numerical routines, among them routines for numerical integration and optimization.
In this class, and for the rest of the module, we will be using the `optimize` module of the `SciPy` library.
```python
import scipy.optimize as opt
```
The `optimize` module we just imported contains several functions for optimization and root finding ($f(x)=0$). Among them is the `linprog` function.
```python
help(opt.linprog)
```
Help on function linprog in module scipy.optimize._linprog:
linprog(c, A_ub=None, b_ub=None, A_eq=None, b_eq=None, bounds=None, method='simplex', callback=None, options=None)
Minimize a linear objective function subject to linear
equality and inequality constraints.
Linear Programming is intended to solve the following problem form:
Minimize: c^T * x
Subject to: A_ub * x <= b_ub
A_eq * x == b_eq
Parameters
----------
c : array_like
Coefficients of the linear objective function to be minimized.
A_ub : array_like, optional
2-D array which, when matrix-multiplied by x, gives the values of the
upper-bound inequality constraints at x.
b_ub : array_like, optional
1-D array of values representing the upper-bound of each inequality
constraint (row) in A_ub.
A_eq : array_like, optional
2-D array which, when matrix-multiplied by x, gives the values of the
equality constraints at x.
b_eq : array_like, optional
1-D array of values representing the RHS of each equality constraint
(row) in A_eq.
bounds : sequence, optional
``(min, max)`` pairs for each element in ``x``, defining
the bounds on that parameter. Use None for one of ``min`` or
``max`` when there is no bound in that direction. By default
bounds are ``(0, None)`` (non-negative)
If a sequence containing a single tuple is provided, then ``min`` and
``max`` will be applied to all variables in the problem.
method : str, optional
Type of solver. At this time only 'simplex' is supported
:ref:`(see here) <optimize.linprog-simplex>`.
callback : callable, optional
If a callback function is provide, it will be called within each
iteration of the simplex algorithm. The callback must have the signature
`callback(xk, **kwargs)` where xk is the current solution vector
and kwargs is a dictionary containing the following::
"tableau" : The current Simplex algorithm tableau
"nit" : The current iteration.
"pivot" : The pivot (row, column) used for the next iteration.
"phase" : Whether the algorithm is in Phase 1 or Phase 2.
"basis" : The indices of the columns of the basic variables.
options : dict, optional
A dictionary of solver options. All methods accept the following
generic options:
maxiter : int
Maximum number of iterations to perform.
disp : bool
Set to True to print convergence messages.
For method-specific options, see `show_options('linprog')`.
Returns
-------
A `scipy.optimize.OptimizeResult` consisting of the following fields:
x : ndarray
The independent variable vector which optimizes the linear
programming problem.
fun : float
Value of the objective function.
slack : ndarray
The values of the slack variables. Each slack variable corresponds
to an inequality constraint. If the slack is zero, then the
corresponding constraint is active.
success : bool
Returns True if the algorithm succeeded in finding an optimal
solution.
status : int
An integer representing the exit status of the optimization::
0 : Optimization terminated successfully
1 : Iteration limit reached
2 : Problem appears to be infeasible
3 : Problem appears to be unbounded
nit : int
The number of iterations performed.
message : str
A string descriptor of the exit status of the optimization.
See Also
--------
show_options : Additional options accepted by the solvers
Notes
-----
This section describes the available solvers that can be selected by the
'method' parameter. The default method is :ref:`Simplex <optimize.linprog-simplex>`.
Method *Simplex* uses the Simplex algorithm (as it relates to Linear
Programming, NOT the Nelder-Mead Simplex) [1]_, [2]_. This algorithm
should be reasonably reliable and fast.
.. versionadded:: 0.15.0
References
----------
.. [1] Dantzig, George B., Linear programming and extensions. Rand
Corporation Research Study Princeton Univ. Press, Princeton, NJ, 1963
.. [2] Hillier, S.H. and Lieberman, G.J. (1995), "Introduction to
Mathematical Programming", McGraw-Hill, Chapter 4.
.. [3] Bland, Robert G. New finite pivoting rules for the simplex method.
Mathematics of Operations Research (2), 1977: pp. 103-107.
Examples
--------
Consider the following problem:
Minimize: f = -1*x[0] + 4*x[1]
Subject to: -3*x[0] + 1*x[1] <= 6
1*x[0] + 2*x[1] <= 4
x[1] >= -3
where: -inf <= x[0] <= inf
This problem deviates from the standard linear programming problem.
In standard form, linear programming problems assume the variables x are
non-negative. Since the variables don't have standard bounds where
0 <= x <= inf, the bounds of the variables must be explicitly set.
There are two upper-bound constraints, which can be expressed as
dot(A_ub, x) <= b_ub
The input for this problem is as follows:
>>> c = [-1, 4]
>>> A = [[-3, 1], [1, 2]]
>>> b = [6, 4]
>>> x0_bounds = (None, None)
>>> x1_bounds = (-3, None)
>>> from scipy.optimize import linprog
>>> res = linprog(c, A_ub=A, b_ub=b, bounds=(x0_bounds, x1_bounds),
... options={"disp": True})
Optimization terminated successfully.
Current function value: -22.000000
Iterations: 1
>>> print(res)
fun: -22.0
message: 'Optimization terminated successfully.'
nit: 1
slack: array([ 39., 0.])
status: 0
success: True
x: array([ 10., -3.])
Note the actual objective value is 11.428571. In this case we minimized
the negative of the objective function.
### 5.2 Solution of the basic example with linprog
We have already obtained the graphical solution. Let us compare it with the solution given by `linprog`...
```python
import numpy as np
```
```python
# Solve...
c = np.array([-1, -1])
A = np.array([[50, 24], [30, 33], [-1, 0], [0, -1]])
b = np.array([2400, 2100, -45, -5])
```
```python
resultado = opt.linprog(c, A_ub=A, b_ub=b)
```
```python
resultado
```
fun: -51.25
message: 'Optimization terminated successfully.'
nit: 4
slack: array([ 0. , 543.75, 0. , 1.25])
status: 0
success: True
x: array([ 45. , 6.25])
```python
```
The quantities of $X_1$ and $X_2$ that must be produced in order to maximize the inventory at the end of the week, subject to the machine-usage-time and demand constraints, are:
$$x_1=45$$
$$x_2=6.25$$
## 6. Optimizing a bond investment
### 6.1 Example 1
**Reference:**
```python
from IPython.display import YouTubeVideo
YouTubeVideo('gukxBus8lOs')
```
The goal of this problem is to determine the best investment strategy, given different types of bonds, the maximum amount that can be invested in each bond, the percentage return, and the years to maturity. There is also a fixed amount of money available ($\$750,000$). At least half of this money must be invested in bonds with 10 or more years to maturity. At most $25\%$ of this amount can be invested in each individual bond. Finally, there is another constraint that does not allow more than $35\%$ to be invested in high-risk bonds.
There are six (6) investment options, labeled with the corresponding letters $A_i$:
1. $A_1$: (Rate of return = $8.65\%$; Years to maturity = 11, Risk = Low)
1. $A_2$: (Rate of return = $9.50\%$; Years to maturity = 10, Risk = High)
1. $A_3$: (Rate of return = $10.00\%$; Years to maturity = 6, Risk = High)
1. $A_4$: (Rate of return = $8.75\%$; Years to maturity = 10, Risk = Low)
1. $A_5$: (Rate of return = $9.25\%$; Years to maturity = 7, Risk = High)
1. $A_6$: (Rate of return = $9.00\%$; Years to maturity = 13, Risk = Low)
What we want, then, is to maximize the return yielded by the investment.
This problem can be solved with linear programming. Formally, it can be described as:
$$\max_{A_1,A_2,...,A_6}\sum^{6}_{i=1} A_iR_i,$$
where $A_i$ represents the amount invested in the option, and $R_i$ represents the corresponding rate of return.
Set up the constraints...
```python
# Solve...
c = -np.array([8.65, 9.5, 10, 8.75, 9.25, 9])/100
A = np.array([[-1, -1, 0, -1, 0, -1], [0, 1, 1, 0, 1, 0], [1, 1, 1, 1, 1, 1]])
b = np.array([-.5, .35, 1])*750000
bounds = (0, 0.25*750000)
```
```python
res_bonos = opt.linprog(c, A_ub=A, b_ub=b, bounds=(bounds, bounds, bounds, bounds, bounds, bounds))
```
```python
res_bonos.x
```
array([ 112500., 75000., 187500., 187500., 0., 187500.])
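As a quick sanity check (an illustrative addition, not part of the original notes), we can verify that the reported allocation respects the constraints of the problem:
```python
# Sanity check of the Example 1 solution (illustrative)
x_opt = res_bonos.x
print(x_opt.sum())                          # total invested, must not exceed 750000
print(x_opt[[0, 1, 3, 5]].sum() / 750000)   # fraction in bonds with >= 10 years to maturity (must be >= 0.5)
print(x_opt[[1, 2, 4]].sum() / 750000)      # fraction in high-risk bonds (must be <= 0.35)
print(-c @ x_opt)                           # total return of the investment
```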
### 6.2 Example 2
**Reference:**
- https://la.mathworks.com/help/optim/ug/maximize-long-term-investments-using-linear-programming.html
Suppose we have an initial amount of money $C_0$ to invest over a time period of $T$ years in $N$ zero-coupon bonds. Each bond pays an interest rate that compounds annually, and pays the principal plus the compounded interest at the end of its maturity period. The objective is to maximize the total amount of money after $T$ years.
We can include the constraint that no individual investment may exceed a given fraction of the total capital.
Suppose that:
- The initial capital is $C_0=\$1000$.
- The time period is $T=5$ years.
- The number of bonds is $N=4$.
To model money that is not invested, we have an option $B_0$ available every year with a maturity period of one year and an interest rate of $0\%$.
- Bond 1, denoted by $B_1$, can be bought in year 1, has a maturity period of 4 years, and an interest rate of $2\%$.
- Bond 2, denoted by $B_2$, can be bought in year 5, has a maturity period of 1 year, and an interest rate of $4\%$.
- Bond 3, denoted by $B_3$, can be bought in year 2, has a maturity period of 4 years, and an interest rate of $6\%$.
- Bond 4, denoted by $B_4$, can be bought in year 2, has a maturity period of 3 years, and an interest rate of $6\%$.
By splitting the option $B_0$ (not investing) into 5 bonds with a maturity period of 1 year and an interest rate of $0\%$, this problem can be modeled equivalently as if there were a total of 9 bonds available: $B_k$ for $k=1,\dots,9$.
<font color=red>See the graphical representation on the board and set up the problem</font>
```python
# Solve...
rho = np.array([0, 0, 0, 0, 0, 2, 4, 6, 6])
mad = np.array([1, 1, 1, 1, 1, 4, 1, 4, 3])
r = (1+rho/100)**mad
# Matrices for linprog
c = -np.array([0, 0, 0, 0, r[4], 0, r[6], r[7], 0])
Aeq = np.array([[1, 0, 0, 0, 0, 1, 0, 0, 0],
[-r[0], 1, 0, 0, 0, 0, 0, 1, 1],
[0, -r[1], 1, 0, 0, 0, 0, 0, 0],
[0, 0, -r[2], 1, 0, 0, 0, 0, 0],
[0, 0, 0, -r[3], 1, -r[5], 1, 0, -r[8]]])
beq = np.array([1000, 0, 0, 0, 0])
```
```python
res_bonos2 = opt.linprog(c, A_eq=Aeq, b_eq=beq)
```
```python
res_bonos2
```
fun: -1262.4769600000004
message: 'Optimization terminated successfully.'
nit: 7
slack: array([], dtype=float64)
status: 0
success: True
x: array([ 1000., 0., 0., 0., 0., 0., 0., 1000.,
0.])
```python
res_bonos2.x
```
array([ 1000., 0., 0., 0., 0., 0., 0., 1000.,
0.])
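As a quick sanity check (an illustrative addition, not part of the original worked example), we can verify that the reported solution satisfies the equality constraints and recompute the final wealth:
```python
# Sanity check of the Example 2 solution (illustrative):
# the equality constraints should be satisfied, and -c.x is the final wealth.
print(Aeq @ res_bonos2.x - beq)   # should be (numerically) zero
print(-c @ res_bonos2.x)          # final amount after 5 years, ~1262.48
```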
<footer id="attribution" style="float:right; color:#808080; background:#fff;">
Created with Jupyter by Esteban Jiménez Rodríguez.
</footer>
|
6b8a8e56d9efcb31c8510b54a6dfeaa9fca37df2
| 78,367 |
ipynb
|
Jupyter Notebook
|
Modulo4/Clase16_IntroProgLineal.ipynb
|
esjimenezro/SimRC2018-1
|
26d1e6c611f3887550cdaf32af65ae689cc8cd81
|
[
"MIT"
] | null | null | null |
Modulo4/Clase16_IntroProgLineal.ipynb
|
esjimenezro/SimRC2018-1
|
26d1e6c611f3887550cdaf32af65ae689cc8cd81
|
[
"MIT"
] | null | null | null |
Modulo4/Clase16_IntroProgLineal.ipynb
|
esjimenezro/SimRC2018-1
|
26d1e6c611f3887550cdaf32af65ae689cc8cd81
|
[
"MIT"
] | 13 |
2018-01-22T16:27:05.000Z
|
2021-06-10T22:09:13.000Z
| 97.471393 | 27,729 | 0.802034 | true | 6,495 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.839734 | 0.885631 | 0.743695 |
__label__spa_Latn
| 0.817352 | 0.566184 |
```python
%matplotlib inline
```
# Scaling the regularization parameter for SVCs
The following example illustrates the effect of scaling the
regularization parameter when using `svm` for
`classification <svm_classification>`.
For SVC classification, we are interested in a risk minimization for the
equation:
\begin{align}C \sum_{i=1, n} \mathcal{L} (f(x_i), y_i) + \Omega (w)\end{align}
where
- $C$ is used to set the amount of regularization
- $\mathcal{L}$ is a `loss` function of our samples
and our model parameters.
- $\Omega$ is a `penalty` function of our model parameters
If we consider the loss function to be the individual error per
sample, then the data-fit term, or the sum of the error for each sample, will
increase as we add more samples. The penalization term, however, will not
increase.
When using, for example, `cross validation <cross_validation>`, to
set the amount of regularization with `C`, there will be a
different number of samples between the main problem and the smaller problems
within the folds of the cross validation.
Since our loss function is dependent on the number of samples, the latter
will influence the selected value of `C`.
The question that arises is `How do we optimally adjust C to
account for the different number of training samples?`
The figures below are used to illustrate the effect of scaling our
`C` to compensate for the change in the number of samples, in the
case of using an `l1` penalty, as well as the `l2` penalty.
l1-penalty case
-----------------
In the `l1` case, theory says that prediction consistency
(i.e. that under given hypothesis, the estimator
learned predicts as well as a model knowing the true distribution)
is not possible because of the bias of the `l1`. It does say, however,
that model consistency, in terms of finding the right set of non-zero
parameters as well as their signs, can be achieved by scaling
`C1`.
l2-penalty case
-----------------
The theory says that in order to achieve prediction consistency, the
penalty parameter should be kept constant
as the number of samples grow.
Simulations
------------
The two figures below plot the values of `C` on the `x-axis` and the
corresponding cross-validation scores on the `y-axis`, for several different
fractions of a generated data-set.
In the `l1` penalty case, the cross-validation-error correlates best with
the test-error, when scaling our `C` with the number of samples, `n`,
which can be seen in the first figure.
For the `l2` penalty case, the best result comes from the case where `C`
is not scaled.
.. topic:: Note:
Two separate datasets are used for the two different plots. The reason
behind this is the `l1` case works better on sparse data, while `l2`
is better suited to the non-sparse case.
```python
print(__doc__)
# Author: Andreas Mueller <amueller@ais.uni-bonn.de>
# Jaques Grobler <jaques.grobler@inria.fr>
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn.svm import LinearSVC
from sklearn.model_selection import ShuffleSplit
from sklearn.model_selection import GridSearchCV
from sklearn.utils import check_random_state
from sklearn import datasets
rnd = check_random_state(1)
# set up dataset
n_samples = 100
n_features = 300
# l1 data (only 5 informative features)
X_1, y_1 = datasets.make_classification(n_samples=n_samples,
n_features=n_features, n_informative=5,
random_state=1)
# l2 data: non sparse, but less features
y_2 = np.sign(.5 - rnd.rand(n_samples))
X_2 = rnd.randn(n_samples, n_features // 5) + y_2[:, np.newaxis]
X_2 += 5 * rnd.randn(n_samples, n_features // 5)
clf_sets = [(LinearSVC(penalty='l1', loss='squared_hinge', dual=False,
tol=1e-3),
np.logspace(-2.3, -1.3, 10), X_1, y_1),
(LinearSVC(penalty='l2', loss='squared_hinge', dual=True,
tol=1e-4),
np.logspace(-4.5, -2, 10), X_2, y_2)]
colors = ['navy', 'cyan', 'darkorange']
lw = 2
for fignum, (clf, cs, X, y) in enumerate(clf_sets):
# set up the plot for each regressor
plt.figure(fignum, figsize=(9, 10))
for k, train_size in enumerate(np.linspace(0.3, 0.7, 3)[::-1]):
param_grid = dict(C=cs)
# To get nice curve, we need a large number of iterations to
# reduce the variance
grid = GridSearchCV(clf, refit=False, param_grid=param_grid,
cv=ShuffleSplit(train_size=train_size,
n_splits=250, random_state=1))
grid.fit(X, y)
scores = grid.cv_results_['mean_test_score']
scales = [(1, 'No scaling'),
((n_samples * train_size), '1/n_samples'),
]
for subplotnum, (scaler, name) in enumerate(scales):
plt.subplot(2, 1, subplotnum + 1)
plt.xlabel('C')
plt.ylabel('CV Score')
grid_cs = cs * float(scaler) # scale the C's
plt.semilogx(grid_cs, scores, label="fraction %.2f" %
train_size, color=colors[k], lw=lw)
plt.title('scaling=%s, penalty=%s, loss=%s' %
(name, clf.penalty, clf.loss))
plt.legend(loc="best")
plt.show()
```
|
28fd91a7f2e481e099a913d6b309b824b228518b
| 6,433 |
ipynb
|
Jupyter Notebook
|
lab03/svm/plot_svm_scale_c.ipynb
|
cruxiu/MLStudies
|
2b0a9ac7dbede4200080666dfdcba6a2f65f93af
|
[
"MIT"
] | 1 |
2019-08-22T01:35:16.000Z
|
2019-08-22T01:35:16.000Z
|
lab03/svm/plot_svm_scale_c.ipynb
|
cruxiu/MLStudies
|
2b0a9ac7dbede4200080666dfdcba6a2f65f93af
|
[
"MIT"
] | null | null | null |
lab03/svm/plot_svm_scale_c.ipynb
|
cruxiu/MLStudies
|
2b0a9ac7dbede4200080666dfdcba6a2f65f93af
|
[
"MIT"
] | null | null | null | 119.12963 | 2,855 | 0.640914 | true | 1,346 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.658417 | 0.774583 | 0.509999 |
__label__eng_Latn
| 0.98891 | 0.023228 |
We consider polar coordinates, in which (by the mapping theorem) raindrops form a point process with $\lambda r$ drops/cm.
$$
\begin{align}
P[\Lambda^1([0,r] \times [0, 1] \times R^2)=0] &=exp(-\int_{0}^r \int_{0}^{1} \int_{R^2}\lambda p(x,y)dxdydtdr)\\ &= exp(-\lambda t 2\pi \int_0^r \rho e^{-\rho}d\rho)\\
&= exp(-\lambda t 2\pi(1-e^{-r}(r+1)))\\
\end{align}
$$
|
d6e1691a91ac623e4fed2238bf60cd1a50d14874
| 992 |
ipynb
|
Jupyter Notebook
|
2015_Fall/MATH-578B/Homework3/Untitled.ipynb
|
NeveIsa/hatex
|
c5cfa2410d47c7e43a476a8c8a9795182fe8f836
|
[
"MIT"
] | 19 |
2015-09-10T02:45:33.000Z
|
2022-02-10T03:20:47.000Z
|
2015_Fall/MATH-578B/Homework3/Untitled.ipynb
|
NeveIsa/hatex
|
c5cfa2410d47c7e43a476a8c8a9795182fe8f836
|
[
"MIT"
] | 1 |
2015-09-16T23:11:00.000Z
|
2015-09-23T21:21:52.000Z
|
2015_Fall/MATH-578B/Homework3/Untitled.ipynb
|
saketkc/hatex
|
c5cfa2410d47c7e43a476a8c8a9795182fe8f836
|
[
"MIT"
] | 12 |
2015-09-25T19:06:45.000Z
|
2022-02-10T03:21:09.000Z
| 24.195122 | 194 | 0.507056 | true | 162 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.967899 | 0.774583 | 0.749719 |
__label__eng_Latn
| 0.545679 | 0.58018 |
# North American Einstein Toolkit School 2021: NRPy+ Tutorial
### Authors: Leo Werneck, Terrence Pierre Jacques, Patrick Nelson, Zach Etienne, & Thiago Assumpção
## You can access this tutorial by going to https://tinyurl.com/ETKSchoolNRPytutorial
<table><tr><td><a href=http://nrpyplus.net>NRPy+</a></td><td><a href=https://www.wvu.edu>West Virginia University</a></td><td><a href=https://www.uidaho.edu>University of Idaho</a></td><td><a href=https://einsteintoolkit.org>Einstein Toolkit</a></td><td><a href=https://www.nsf.gov>NSF</a></td></tr></table>
<a id='toc'></a>
# Table of Contents
$$\label{toc}$$
This tutorial notebook is organized as follows:
0. [Step 0](#introduction): **Introduction**
1. [Step 0.a](#maxwell_equations): Maxwell's equations in vacuum, flat space, and Cartesian coordinates
1. [Step 0.b](#potential_formulation_maxwell_equations): Potential formulation of Maxwell's equations
1. [Step 0.c](#hyperbolicity_maxwell_equations): Improving the hyperbolicity of Maxwell's equations
1. [Step 0.d](#systems_I_and_II): Summary
1. [Step 1](#implementation): **Part I - Implementing Maxwell's equations using NRPy+**
1. [Step 1.a](#initializenrpy): Initialize core Python/NRPy+ modules
1. [Step 1.b](#gridfunction_declaration): Gridfunction declaration
1. [Step 1.c](#finite_differences): Finite difference derivatives
1. [Step 1.d](#system_I_implementation): Implementation of system I
1. [Step 1.d.i](#system_I_evolution_eqs): *System I evolution equations*
1. [Step 1.d.ii](#system_I_constraint_eqs): *System I constraint equation*
1. [Step 1.e](#system_II_implementation): Implementation of system II
1. [Step 1.e.i](#system_II_evolution_eqs): *System II evolution equations*
1. [Step 1.e.ii](#system_II_constraint_eqs): *System II constraint equations*
1. [Step 2](#thorn_writing): **Part II - Einstein Toolkit thorn writing**
1. [Step 2.a](#generating_c_code_kernels): Generating C code kernels for Maxwell's equations
1. [Step 2.a.i](#system_I_c_code_generation): *System I C code generation*
1. [Step 2.a.ii](#system_II_c_code_generation): *System II C code generation*
1. [Step 2.a.iii](#zero_rhss): *C code to initialize right-hand sides to zero*
1. [Step 2.b](#rhs_driver): The right-hand side driver function
1. [Step 2.c](#constraints_driver): The constraints driver function
1. [Step 2.d](#mol_registration): Registering gridfunctions for the Method of Lines
1. [Step 2.e](#gridfunction_symmetries): Set gridfunction symmetries
1. [Step 2.f](#boundary_conditions): Boundary condition configuration
1. [Step 2.g](#nrpy_banner): `NRPy+` banner
1. [Step 2.h](#ccl_files): Thorn configuration files
1. [Step 2.h.i](#param_ccl): *Generating the `param.ccl` file*
1. [Step 2.h.ii](#interface_ccl): *Generating the `interface.ccl` file*
1. [Step 2.h.iii](#schedule_ccl): *Generating the `schedule.ccl` file*
1. [Step 2.h.iv](#configuration_ccl): *Generating the `configuration.ccl` file*
1. [Step 2.h.v](#make_code_defn): *Generating the `make.code.defn` file*
1. [Step 3](#results): **Part III - Results**
1. [Step 4](#latex_pdf_output): **Output this notebook to $\LaTeX$-formatted PDF file**
<a id='introduction'></a>
# Step 0: Introduction \[Back to [top](#toc)\]
$$\label{introduction}$$
In this tutorial we will provide a hands-on introduction on how to use the [NRPy+ infrastructure](http://nrpyplus.net) to write thorns for the [Einstein Toolkit (ETK)](https://einsteintoolkit.org). Although we will give an overview of NRPy+ and provide a high level description of some of its modules, we do not aim to thoroughly describe every aspect of the infrastructure. For more details on NRPy+'s inner workings, we refer the reader to:
* [NRPy+'s webpage](http://nrpyplus.net)
* [NRPy+ tutorial from the 2020 ETK workshop by Zach Etienne](https://www.youtube.com/watch?v=TIPiW5-mPOM)
The material covered by this tutorial notebook is also based on many of the pedagogical `Jupyter` notebooks that form `NRPy+`'s documentation. We mention here the ones that are most useful as **additional reading material**:
* [Terrence's presentation on `MaxwellVacuum` thorns](https://lsu.app.box.com/s/9hrfzrh8dkmvpba4uqatnthjwafwahnv)
* [NRPy+ indexed expressions tutorial notebook](Tutorial-Indexed_Expressions.ipynb)
* [NRPy+ finite differences tutorial notebook](Tutorial-Finite_Difference_Derivatives.ipynb)
* [How NRPy+ computs finite differences coefficients tutorial notebook](Tutorial-How_NRPy_Computes_Finite_Difference_Coeffs.ipynb)
* [NRPy+ MaxwellVacuum formulation in Cartesian coordinates tutorial notebook](Tutorial-VacuumMaxwell_formulation_Cartesian.ipynb)
* [NRPy+ MaxwellVacuum formulation in Curvilinear coordinates tutorial notebook](Tutorial-VacuumMaxwell_formulation_Curvilinear.ipynb)
* [NRPy+ MaxwellVacuum right-hand sides tutorial notebook](Tutorial-VacuumMaxwell_Cartesian_RHSs.ipynb)
* [NRPy+ MaxwellVacuum ETK thorn tutorial notebook](Tutorial-ETK_thorn-MaxwellVacuum.ipynb)
<hr style="width:100%;height:3px;color:black"/>
<a id='maxwell_equations'></a>
## Step 0.a: Maxwell's equations in vacuum, flat space, and Cartesian coordinates \[Back to [top](#toc)\]
$$\label{maxwell_equations}$$
In this tutorial we will generate an ETK thorn to solve [Maxwell's equations](https://en.wikipedia.org/wiki/Maxwell%27s_equations) in flat space using Cartesian coordinates. In [Gaussian](https://en.wikipedia.org/wiki/Gaussian_units) and $c = 1$ units, the system of equations we are interested in is
$$
\begin{align}
\vec{\nabla} \cdot \vec{E} &= 0, \\
\vec{\nabla} \cdot \vec{B} &= 0, \\
\frac{\partial \vec{E}}{\partial t} &= \vec{\nabla} \times \vec{B}, \\
\frac{\partial \vec{B}}{\partial t} &= -\vec{\nabla} \times \vec{E},
\end{align}
$$
where $\vec{E}$ is the electric field, $\vec{B}$ is the magnetic field, and $\vec{\nabla}\cdot\vec{V}$ and $\vec{\nabla}\times\vec{V}$ are the divergence and the curl of the vector $\vec{V}$, respectively.
The last two equations involve time derivatives of $\vec{E}$ and $\vec{B}$ and therefore are referred to as *evolution* equations. The first two equations must be satisfied for all times, and are referred to as *constraint* equations.
<a id='potential_formulation_maxwell_equations'></a>
## Step 0.b: Potential formulation of Maxwell's equations \[Back to [top](#toc)\]
$$\label{potential_formulation_maxwell_equations}$$
A formulation of Maxwell's equations that is particularly useful for numerical integration can be found by introducing auxiliary variables. We start by introducing a new vector quantity, $\vec{A}$, known as the *magnetic* or *vector* potential, such that
$$
\vec{B} = \vec{\nabla}\times\vec{A}.
$$
Note that the "no-magnetic monopole" constraint is automatically satisfied:
$$
\vec{\nabla} \cdot \vec{B} = \vec{\nabla} \cdot \bigl(\vec{\nabla}\times\vec{A}\bigr) = 0.
$$
[Faraday's law](https://en.wikipedia.org/wiki/Faraday%27s_law_of_induction), which is the evolution equation for $\vec{B}$, can then be written as
$$
\frac{\partial}{\partial t}\bigl(\vec{\nabla}\times\vec{A}\bigr) = -\vec{\nabla} \times \vec{E} \implies \vec{\nabla}\times\left(\frac{\partial\vec{A}}{\partial t} + \vec{E}\right) = 0 \implies \frac{\partial\vec{A}}{\partial t} + \vec{E} = -\vec{\nabla}\Phi,
$$
where $\Phi$ is an arbitrary scalar function known as the *electric* (or *scalar*) potential. Finally, [Ampère's law](https://en.wikipedia.org/wiki/Ampère%27s_circuital_law), the evolution equation for $\vec{E}$, can be written as
$$
\frac{\partial\vec{E}}{\partial t} = \vec{\nabla}\times\bigl(\vec{\nabla}\times\vec{A}\bigr) = -\nabla^{2}\vec{A} + \vec{\nabla}\bigl(\vec{\nabla}\cdot\vec{A}\bigr).
$$
Thus Maxwell's equations now read
$$
\begin{align}
\vec{\nabla} \cdot \vec{E} &= 0, \\
\frac{\partial\vec{E}}{\partial t} &= -\nabla^{2}\vec{A} + \vec{\nabla}\bigl(\vec{\nabla}\cdot\vec{A}\bigr),\\
\frac{\partial\vec{A}}{\partial t} &= -\vec{E} - \vec{\nabla}\Phi.
\end{align}
$$
Note that we now end up with 7 dynamical fields, namely $\left(\Phi,\vec{A},\vec{E}\right)$, but only 6 evolution equations. We can remedy this by adopting a particular gauge. We will choose here the [Lorenz gauge](https://en.wikipedia.org/wiki/Lorenz_gauge_condition), which reads
$$
\frac{\partial\Phi}{\partial t} = -\vec{\nabla}\cdot\vec{A}.
$$
Thus we arrive at the evolution equations
$$
\boxed{
\begin{align}
\frac{\partial\vec{E}}{\partial t} &= -\nabla^{2}\vec{A} + \vec{\nabla}\bigl(\vec{\nabla}\cdot\vec{A}\bigr)\\
\frac{\partial\vec{A}}{\partial t} &= -\vec{E} - \vec{\nabla}\Phi\\
\frac{\partial\Phi}{\partial t} &= -\vec{\nabla}\cdot\vec{A}
\end{align}
}\ ,
$$
plus the constraints
$$
\boxed{ \vec{\mathcal{C}} \equiv \vec{\nabla} \cdot \vec{E} = 0}\ ,
$$
which must be satisfied at all times.
<a id='hyperbolicity_maxwell_equations'></a>
## Step 0.c: Improving the hyperbolicity of Maxwell's equations \[Back to [top](#toc)\]
$$\label{hyperbolicity_maxwell_equations}$$
If we take a time derivative of the evolution equation for the vector potential and plug in the evolution equation for the electric field and the scalar potential we find
$$
\frac{\partial^{2}\vec{A}}{\partial t^{2}} = -\frac{\partial\vec{E}}{\partial t} - \vec{\nabla}\left(\frac{\partial\Phi}{\partial t}\right) = \nabla^{2}\vec{A} - \vec{\nabla}\bigl(\vec{\nabla}\cdot\vec{A}\bigr) - \vec{\nabla}\left(\frac{\partial\Phi}{\partial t}\right).
$$
Notice that this is almost a wave equation for the vector potential $\vec{A}$, but we have a mixed derivative term on the right-hand side that spoils this behaviour. We can thus introduce a new auxiliary variable defined as
$$
\Gamma \equiv \vec{\nabla}\cdot\vec{A},
$$
so that we have
$$
\frac{\partial^{2}\vec{A}}{\partial t^{2}} = \nabla^{2}\vec{A} - \vec{\nabla}\Gamma - \vec{\nabla}\left(\frac{\partial\Phi}{\partial t}\right).
$$
The resulting system of equations now becomes
$$
\boxed{
\begin{align}
\frac{\partial\vec{E}}{\partial t} &= -\nabla^{2}\vec{A} + \vec{\nabla}\Gamma\\
\frac{\partial\vec{A}}{\partial t} &= -\vec{E} - \vec{\nabla}\Phi\\
\frac{\partial\Phi}{\partial t} &= -\Gamma\\
\frac{\partial\Gamma}{\partial t} &= -\nabla^{2}\Phi
\end{align}
}\ ,
$$
while the constraints that must be satisfied are
$$
\boxed{
\begin{align}
\mathcal{C} &\equiv \vec{\nabla} \cdot \vec{E} = 0\\
\mathcal{G} &\equiv \Gamma - \vec{\nabla}\cdot\vec{A} = 0
\end{align}
}\ .
$$
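As a quick consistency check (not needed for the implementation), taking a time derivative of the evolution equations for $\vec{A}$ and $\Phi$ and substituting the remaining evolution equations shows that both potentials now satisfy genuine wave equations,
$$
\begin{align}
\frac{\partial^{2}\vec{A}}{\partial t^{2}} &= -\frac{\partial\vec{E}}{\partial t} - \vec{\nabla}\left(\frac{\partial\Phi}{\partial t}\right) = \bigl(\nabla^{2}\vec{A} - \vec{\nabla}\Gamma\bigr) + \vec{\nabla}\Gamma = \nabla^{2}\vec{A},\\
\frac{\partial^{2}\Phi}{\partial t^{2}} &= -\frac{\partial\Gamma}{\partial t} = \nabla^{2}\Phi,
\end{align}
$$
which is precisely the improved hyperbolic behaviour we are after.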
<a id='systems_I_and_II'></a>
## Step 0.d: Summary \[Back to [top](#toc)\]
$$\label{systems_I_and_II}$$
Switching to index notation, we write
$$
\frac{\partial}{\partial t} \equiv \partial_{t}\ ;\ \frac{\partial}{\partial x^{i}} \equiv \partial_{i},
$$
where $\vec{x} \equiv x^{i} = (x,y,z)$.
We are thus interested in solving Maxwell's equations in vacuum (i.e. without source terms), in flat space, and in Cartesian coordinates, and we will do so using:
$$
\text{System I:}\
\boxed{
\begin{align}
\color{blue}{\partial_{t}E^{i}} &\, \color{blue}{= -\partial^{j}\partial_{j}A^{i} + \partial^{i}\partial_{j}A^{j}}\\
\color{blue}{\partial_{t}A^{i}} &\, \color{blue}{= -E^{i} - \partial^{i}\Phi}\\
\color{blue}{\partial_{t}\Phi} &\, \color{blue}{= -\partial_{i}A^{i}}\\
\color{red}{\partial_{i}E^{i}} &\, \color{red}{= 0}
\end{align}
}
\ ,
$$
and
$$
\text{System II:}\
\boxed{
\begin{align}
\color{blue}{\partial_{t}E^{i}} &\, \color{blue}{= -\partial^{j}\partial_{j}A^{i} + \partial^{i}\Gamma}\\
\color{blue}{\partial_{t}A^{i}} &\, \color{blue}{= -E^{i} - \partial^{i}\Phi}\\
\color{blue}{\partial_{t}\Phi} &\, \color{blue}{= -\Gamma}\\
\color{blue}{\partial_{t}\Gamma} &\, \color{blue}{= -\partial^{i}\partial_{i}\Phi}\\
\color{red}{\partial_{i}E^{i}} &\, \color{red}{= 0}\\
\color{red}{\Gamma} &\, \color{red}{= \partial_{i}A^{i}}
\end{align}
}\ .
$$
In the color coding above, $\color{blue}{\text{blue}}$ equations are the $\color{blue}{\text{evolution}}$ equations, while $\color{red}{\text{red}}$ equations are the $\color{red}{\text{constraint}}$ equations.
<hr style="width:100%;height:3px;color:black"/>
<a id='implementation'></a>
# Step 1: Part I - Implementing Maxwell's equations using NRPy+ \[Back to [top](#toc)\]
$$\label{implementation}$$
In this part of the tutorial we will focus on the implementation of Maxwell's equations. At the end of this part, we will have written a series of `Python` functions which can be used to generate symbolic expressions for Maxwell's equations. We will go over:
* Basic `NRPy+` syntax
* Handling indexed expressions
* Finite difference derivatives
* Implementation of Maxwell's equations
<a id='initializenrpy'></a>
## Step 1.a: Initialize core Python/NRPy+ modules \[Back to [top](#toc)\]
$$\label{initializenrpy}$$
Let's start by importing all the needed modules from NRPy+:
```python
# Step 1.a: Import core Python/NRPy+ modules
# Step 1.a.i: Import needed Python modules
import shutil, os, sys # Standard Python modules for multiplatform OS-level functions
import sympy as sp # SymPy: a Python library for symbolic mathematics
# Step 1.a.ii: Add the "nrpy_core" directory to the path
base_dir = os.getcwd()
nrpy_core_dir = os.path.join(base_dir,"..","..")
sys.path.append(nrpy_core_dir)
# Step 1.b: Load needed NRPy+ modules
from outputC import lhrh, outCfunction # NRPy+: Core C code output module
import finite_difference as fin # NRPy+: Finite difference C code generation module
import NRPy_param_funcs as par # NRPy+: Parameter interface
import grid as gri # NRPy+: Functions having to do with numerical grids
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import reference_metric as rfm # NRPy+: Reference metric support
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
import NRPy_logo as logo # NRPy+: contains NRPy+ logo in ASCII art
```
<a id='gridfunction_declaration'></a>
## Step 1.b: Gridfunction declaration \[Back to [top](#toc)\]
$$\label{gridfunction_declaration}$$
When solving Maxwell's equations numerically with `NRPy+`, we will *discretize* space onto a *grid*. Assuming we choose a cube of length $L$ and discretize the $(x,y,z)$-directions using $(N_{x},N_{y},N_{z})$ points, respectively, we introduce the notation
$$
\begin{alignat}{2}
x \to x_{i} &\equiv x_{\rm min} + i \cdot \Delta x,\ &i=0,1,\ldots,N_{x}-1\\
y \to y_{j} &\equiv y_{\rm min} + j \cdot \Delta y,\ &j=0,1,\ldots,N_{y}-1\\
z \to z_{k} &\equiv z_{\rm min} + k \cdot \Delta z,\ &k=0,1,\ldots,N_{z}-1
\end{alignat}
$$
where $(\Delta x,\Delta y,\Delta z)$ are known as the *step sizes*. Our particular choice of discretization does not avoid the origin and is known as *vertex-centered*.
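For concreteness, here is a minimal sketch of such a vertex-centered grid in one dimension (illustrative only; it assumes `NumPy` is available and is not part of the thorn code):
```python
# Illustrative only: a vertex-centered 1D grid with Nx points covering [x_min, x_max].
import numpy as np

x_min, x_max, Nx = -1.0, 1.0, 5
dx = (x_max - x_min)/(Nx - 1)      # step size for a vertex-centered grid
x  = x_min + dx*np.arange(Nx)      # x_i = x_min + i*dx, i = 0, ..., Nx-1
print(x)                           # [-1.  -0.5  0.   0.5  1. ]  -- note that the origin lies on the grid
```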
A particular function $f(x,y,z)$ can be evaluated on the points in the grid. We introduce the notation
$$
f_{ijk} \equiv f(x_{i},y_{j},z_{k}).
$$
$f_{ijk}$, i.e. the function $f(x,y,z)$ evaluated at the points of our numerical grid, is referred to as a *gridfunction*. Notice that the indices $i,j,k$ in this case are *not* spacetime indices, but simply provide useful bookkeeping of where we are on the numerical grid.
In order to implement systems I and II, we will need the following *gridfunctions*:
* The electric field, $E^{i} = \bigl(E^{x},E^{y},E^{z}\bigr)$;
* The vector potential, $A^{i} = \bigl(A^{x},A^{y},A^{z}\bigr)$;
* The scalar potential, $\Phi$;
* The auxiliary scalar, $\Gamma$, which is only used in system II.
This means we need a total of 7 gridfunctions for system I and 8 gridfunctions for system II. We will now learn how to register [`SymPy`](https://www.sympy.org) variables which correspond to the gridfunctions above.
Notice first that the gridfunctions above fall into two distinct categories: scalar gridfunctions, $(\Phi,\Gamma)$, and 3-vector gridfunctions, $(E^{i},A^{i})$. Because of this, their declaration within `NRPy+` is also slightly different.
><font size=4>Scalar gridfunctions are registered using the `NRPy+` module [`grid.py`](#FIXME)</font>
><font size=4>Indexed expression gridfunctions are registered using the `NRPy+` module [`indexedexp.py`](#FIXME)</font>
We handle contravariant ("Up") and covariant ("Down") indices by appending "U"'s and "D"'s to the names of indexed expression variables. For example, in standard `NRPy+` notation, the variable that represents the tensor $M^{ab}_{\ \ \ \ \ \ cd}$ should be defined using
$$
\underbrace{M^{ab}_{\ \ \ \ \ \ cd}}_{\text{Indexed expression}} \leftrightarrow \underbrace{\text{MUUDD}}_{\text{NRPy+ variable}}.
$$
Variables that represent 4-dimensional indexed expressions should have a "4" in their names. For example, the 4-dimensional indexed expression $V_{\mu}$ is declared as
$$
\underbrace{V_{\mu}}_{\text{Indexed expression}} \leftrightarrow \underbrace{\text{V4D}}_{\text{NRPy+ variable}}.
$$
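To make the naming convention concrete, here is a minimal sketch of how such variables could be declared with the `indexedexp.py` module imported above (these particular variables are hypothetical and are not used by the `MaxwellVacuum` thorn; they are plain `SymPy` symbols, not registered gridfunctions):
```python
# Hypothetical declarations illustrating the naming convention only (not used below):
MUUDD = ixp.declarerank4("MUUDD", "nosym")  # M^{ab}_{cd}: rank-4, two "Up" and two "Down" indices
V4D   = ixp.declarerank1("V4D", DIM=4)      # V_mu: 4-dimensional rank-1 indexed expression
print(MUUDD[0][1][2][0], V4D[3])            # individual components are SymPy symbols
```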
```python
# Step 1.b: Gridfunction declaration
def declare_MaxwellVacuum_gridfunctions_if_not_declared_already():
# Step 1.b.i: This if statement simply prevents multiple
# declarations of the same gridfunctions,
# which would result in errors.
for i in range(len(gri.glb_gridfcs_list)):
if "Phi" in gri.glb_gridfcs_list[i].name:
Phi,Gamma = sp.symbols("Phi Gamma",real=True)
EU = ixp.declarerank1("EU",DIM=3)
AU = ixp.declarerank1("AU",DIM=3)
return Phi,Gamma,EU,AU
# Step 1.b.i: Scalar gridfunctions: Phi and Gamma
Phi,Gamma = gri.register_gridfunctions("EVOL",["Phi","Gamma"])
# Step 1.b.ii: 3-vector gridfunctions: E^{i} and A^{i}
EU = ixp.register_gridfunctions_for_single_rank1("EVOL","EU",DIM=3)
AU = ixp.register_gridfunctions_for_single_rank1("EVOL","AU",DIM=3)
return Phi,Gamma,EU,AU
Phi,Gamma,EU,AU = declare_MaxwellVacuum_gridfunctions_if_not_declared_already()
```
<a id='finite_differences'></a>
## Step 1.c: Finite difference derivatives \[Back to [top](#toc)\]
$$\label{finite_differences}$$
`NRPy+` handles derivatives numerically using [finite differences](https://en.wikipedia.org/wiki/Finite_difference). For example, to approximate $\partial_{x}f(x)$ we start from the Taylor expansions
$$
f(x\pm\Delta x) = f(x) \pm \Delta x \partial_{x}f(x) + \frac{(\Delta x)^{2}}{2}\partial_{x}^{2}f(x) + \mathcal{O}\bigl(\Delta x^{3}\bigr),
$$
from which we can construct the *second-order accurate centered finite difference* approximation
$$
\underbrace{\partial_{x}f(x)}_{\text{Derivative}} = \underbrace{\frac{f(x+\Delta x) - f(x-\Delta x)}{2\Delta x}}_{\text{FD approximation}} + \mathcal{O}\bigl(\Delta x^{2}\bigr).
$$
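Before turning to `NRPy+`, here is a small `SymPy` check (purely illustrative) that the centered stencil above is indeed second-order accurate:
```python
# Illustrative only: expand the centered difference in powers of dx.
import sympy as sp

x, dx = sp.symbols("x dx")
f = sp.Function("f")

centered = (f(x + dx) - f(x - dx))/(2*dx)
# The leading term is f'(x); the error enters at O(dx^2).
print(sp.series(centered, dx, 0, 3))
```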
In `NRPy+` the derivatives are replaced by the finite difference approximation only when we are generating the C code from the symbolic expressions. So let us begin our discussion by explaining how to represent derivatives in our symbolic expressions.
From the point of view of the programmer, a derivative of an object should be considered simply as another indexed expression. For example, we can think of the second derivative of the vector $P^{i}$ as the new object $p^{k}_{\ \ ij}$, i.e.
$$
p^{k}_{\ \ ij} \equiv \partial_{i}\partial_{j}P^{k}.
$$
This means that the declaration of derivatives is also performed using the [`indexedexp.py`](FIXME) module. However, there is a special notation to distinguish "derivative indices" ($i$ and $j$ in the example above) from "regular indices" ($k$ in the example above). This distinction must be introduced so that the [`outputC.py`](FIXME) module correctly identifies derivatives and replaces them with the appropriate finite difference expression. This special notation consists of appending "_d" (lowercase!) to the end of the variable's name, followed by the derivative indices. For example:
$$
\partial_{i}\partial_{j}P^{k} \equiv \underbrace{P^{k}_{\ ,ij}}_{\text{Indexed expression}} \leftrightarrow \underbrace{\text{PU_dDD}}_{\text{NRPy+ variable}}.
$$
The underscore preceding the derivative indices is introduced to distinguish *numerical* derivatives (i.e. those which we evaluate using finite differences) from *analytic* derivatives (i.e. those which we evaluate using `SymPy` and exact, symbolic differentiation).
Here we provide a simple example that evaluates the derivatives of a scalar $\psi$ using finite differences. This variable will not be used again, so we prepend its name with an underscore to identify it as a dummy variable. Note also that we will remove $\psi$ from the list of gridfunctions after using it, thus avoiding undesirable leftover gridfunctions when generating the ETK thorn.
[Using fourth-order accurate finite differences, the expressions we should obtain for the first and second derivatives](https://en.wikipedia.org/wiki/Finite_difference_coefficient) of $\psi$ along the $x$-direction are, respectively,
$$
\begin{align}
\partial_{x}\psi &= \frac{1}{\Delta x}\left(\frac{1}{12}\psi_{i-2} - \frac{2}{3}\psi_{i-1} + \frac{2}{3}\psi_{i+1} - \frac{1}{12}\psi_{i+2}\right),\\
\partial_{x}^{2}\psi &= \frac{1}{\Delta x^{2}}\left(-\frac{1}{12}\psi_{i-2} + \frac{4}{3}\psi_{i-1} - \frac{5}{2}\psi_{i} + \frac{4}{3}\psi_{i+1} - \frac{1}{12}\psi_{i+2}\right).
\end{align}
$$
Let us now demonstrate that `NRPy+` gives us the above expressions.
```python
# Step 1.c: Finite differences
# Step 1.c.i: Declare the gridfunction psi and
# its first and second derivatives
_psi = gri.register_gridfunctions("EVOL","psi")
_psi_dD = ixp.declarerank1("psi_dD")
_psi_dDD = ixp.declarerank2("psi_dDD","sym01") # <- symmetric under i <-> j
# Step 1.c.ii: Now get the finite difference expression
par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER",4)
# Step 1.c.iii: Generate the C code
print(fin.FD_outputC("returnstring",
[lhrh(lhs="First__derivative",rhs=_psi_dD[0]),
lhrh(lhs="Second_derivative",rhs=_psi_dDD[0][0])],
params="outCverbose=False"))
# Step 1.c.iv: Remove the gridfunction psi from the list of gridfunctions
for gf in gri.glb_gridfcs_list:
if gf.name == "psi":
gri.glb_gridfcs_list.remove(gf)
```
{
/*
* NRPy+ Finite Difference Code Generation, Step 1 of 2: Read from main memory and compute finite difference stencils:
*/
const double psi_i0m2_i1_i2 = in_gfs[IDX4(PSIGF, i0-2,i1,i2)];
const double psi_i0m1_i1_i2 = in_gfs[IDX4(PSIGF, i0-1,i1,i2)];
const double psi = in_gfs[IDX4(PSIGF, i0,i1,i2)];
const double psi_i0p1_i1_i2 = in_gfs[IDX4(PSIGF, i0+1,i1,i2)];
const double psi_i0p2_i1_i2 = in_gfs[IDX4(PSIGF, i0+2,i1,i2)];
const double FDPart1_Rational_2_3 = 2.0/3.0;
const double FDPart1_Rational_1_12 = 1.0/12.0;
const double FDPart1_Rational_5_2 = 5.0/2.0;
const double FDPart1_Rational_4_3 = 4.0/3.0;
const double psi_dD0 = invdx0*(FDPart1_Rational_1_12*(psi_i0m2_i1_i2 - psi_i0p2_i1_i2) + FDPart1_Rational_2_3*(-psi_i0m1_i1_i2 + psi_i0p1_i1_i2));
const double psi_dDD00 = ((invdx0)*(invdx0))*(FDPart1_Rational_1_12*(-psi_i0m2_i1_i2 - psi_i0p2_i1_i2) + FDPart1_Rational_4_3*(psi_i0m1_i1_i2 + psi_i0p1_i1_i2) - FDPart1_Rational_5_2*psi);
/*
* NRPy+ Finite Difference Code Generation, Step 2 of 2: Evaluate SymPy expressions and write to main memory:
*/
First__derivative = psi_dD0;
Second_derivative = psi_dDD00;
}
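Note that the rational constants `1/12`, `2/3`, `4/3`, and `5/2` in the generated kernel are exactly the fourth-order coefficients quoted above, with `invdx0` playing the role of $1/\Delta x$.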
<a id='system_I_implementation'></a>
## Step 1.d: Implementation of system I \[Back to [top](#toc)\]
$$\label{system_I_implementation}$$
We now implement system I, derived in [Step 0.b](#potential_formulation_maxwell_equations) above, which reads
$$
\text{System I:}\
\boxed{
\begin{align}
\color{blue}{\partial_{t}E^{i}} &\, \color{blue}{= -\partial^{j}\partial_{j}A^{i} + \partial^{i}\partial_{j}A^{j}}\\
\color{blue}{\partial_{t}A^{i}} &\, \color{blue}{= -E^{i} - \partial^{i}\Phi}\\
\color{blue}{\partial_{t}\Phi} &\, \color{blue}{= -\partial_{i}A^{i}}\\
\color{red}{\partial_{i}E^{i}} &\, \color{red}{= 0}
\end{align}
}
\ .
$$
In the color coding above, $\color{blue}{\text{blue}}$ equations are $\color{blue}{\text{evolution}}$ equations, while $\color{red}{\text{red}}$ equations are $\color{red}{\text{constraint}}$ equations.
<a id='system_I_evolution_eqs'></a>
### Step 1.d.i: System I evolution equations \[Back to [top](#toc)\]
$$\label{system_I_evolution_eqs}$$
We will now implement the evolution equations of system I, namely
$$
\begin{align}
\partial_{t}E^{i} &= -\partial^{j}\partial_{j}A^{i} + \partial^{i}\partial_{j}A^{j},\\
\partial_{t}A^{i} &= -E^{i} - \partial^{i}\Phi,\\
\partial_{t}\Phi &= -\partial_{i}A^{i}.
\end{align}
$$
```python
# Step 1.d: Implementation of system I
# Step 1.d.i: System I evolution equations
def MaxwellVacuum_system_I_evolution_equations():
# Step 1.d.i.1: Declare Maxwell gridfunctions
_Phi,_Gamma,_EU,_AU = declare_MaxwellVacuum_gridfunctions_if_not_declared_already()
# Step 1.d.i.2: Declare all derivatives appearing on the
# right-hand sides of the evolution equations
# of system I
AU_dDD = ixp.declarerank3("AU_dDD","sym12")
AU_dD = ixp.declarerank2("AU_dD","nosym")
Phi_dD = ixp.declarerank1("Phi_dD")
# Step 1.d.i.3: Right-hand side of E^{i}:
#
# partial_{t}E^{i} = -partial^{j}partial_{j}A^{i} + partial^{i}partial_{j}A^{j}
ErhsU = ixp.zerorank1()
for i in range(DIM):
for j in range(DIM):
ErhsU[i] += -AU_dDD[i][j][j] + AU_dDD[j][j][i]
# Step 1.d.i.4: Right-hand side of A^{i}:
#
# partial_{t}A^{i} = -E^{i} - partial^{i}Phi
ArhsU = ixp.zerorank1()
for i in range(DIM):
ArhsU[i] = -EU[i] - Phi_dD[i]
# Step 1.d.i.5: Right-hand side of Phi:
#
# partial_{t}Phi = -partial_{i}A^{i}
Phi_rhs = sp.sympify(0)
for i in range(DIM):
Phi_rhs += -AU_dD[i][i]
return ErhsU,ArhsU,Phi_rhs
```
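A minimal usage sketch (assuming the cells above have been executed and that the global `DIM` has been set, as is done in Step 2.a below):
```python
# Illustrative only: build and inspect the symbolic right-hand sides of system I.
DIM = 3   # set here for illustration; Step 2.a sets grid::DIM and reads it back into DIM
ErhsU, ArhsU, Phi_rhs = MaxwellVacuum_system_I_evolution_equations()
print("partial_t Phi =", Phi_rhs)    # a SymPy expression in the AU_dD symbols
print("partial_t A^x =", ArhsU[0])   # -EU0 - Phi_dD0 (up to term ordering)
```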
<a id='system_I_constraint_eqs'></a>
### Step 1.d.ii: System I constraint equation \[Back to [top](#toc)\]
$$\label{system_I_constraint_eqs}$$
We now implement the constraint equation
$$
\mathcal{C} = \partial_{i}E^{i}.
$$
Notice that $\mathcal{C}=0$ *analytically*, but this constraint *is not* enforced during an evolution. It can instead be used as a diagnostic of how well our numerical solution satisfies Maxwell's equations.
```python
# Step 1.d.ii: System I constraint equation
def MaxwellVacuum_system_I_constraint_equation():
# Step 1.d.ii.1: Declare Maxwell gridfunctions
_Phi,_Gamma,_EU,_AU = declare_MaxwellVacuum_gridfunctions_if_not_declared_already()
# Step 1.d.ii.2: Declare all derivatives appearing on the
# right-hand sides of the constraint equations
# of system I
EU_dD = ixp.declarerank2("EU_dD", "nosym")
# Step 1.d.ii.3: Constraint equation (Gauss' law in vacuum):
#
# C = partial_{i}E^{i}
C = sp.sympify(0)
for i in range(DIM):
C += EU_dD[i][i]
return C
```
<a id='system_II_implementation'></a>
## Step 1.e: Implementation of system II \[Back to [top](#toc)\]
$$\label{system_II_implementation}$$
We now implement system II, derived in [Step 0.c](#hyperbolicity_maxwell_equations) above, which reads
$$
\text{System II:}\
\boxed{
\begin{align}
\color{blue}{\partial_{t}E^{i}} &\, \color{blue}{= -\partial^{j}\partial_{j}A^{i} + \partial^{i}\Gamma}\\
\color{blue}{\partial_{t}A^{i}} &\, \color{blue}{= -E^{i} - \partial^{i}\Phi}\\
\color{blue}{\partial_{t}\Phi} &\, \color{blue}{= -\Gamma}\\
\color{blue}{\partial_{t}\Gamma} &\, \color{blue}{= -\partial^{i}\partial_{i}\Phi}\\
\color{red}{\partial_{i}E^{i}} &\, \color{red}{= 0}\\
\color{red}{\Gamma} &\, \color{red}{= \partial_{i}A^{i}}
\end{align}
}\ .
$$
In the color coding above, $\color{blue}{\text{blue}}$ equations are $\color{blue}{\text{evolution}}$ equations, while $\color{red}{\text{red}}$ equations are $\color{red}{\text{constraint}}$ equations.
<a id='system_II_evolution_eqs'></a>
### Step 1.e.i: System II evolution equations \[Back to [top](#toc)\]
$$\label{system_II_evolution_eqs}$$
We will now implement the evolution equations of system II, namely
$$
\begin{align}
\partial_{t}E^{i} &= -\partial^{j}\partial_{j}A^{i} + \partial^{i}\Gamma,\\
\partial_{t}A^{i} &= -E^{i} - \partial^{i}\Phi,\\
\partial_{t}\Phi &= -\Gamma,\\
\partial_{t}\Gamma &= -\partial^{i}\partial_{i}\Phi.
\end{align}
$$
```python
# Step 1.e: Implementation of system II
# Step 1.e.i: System II evolution equations
def MaxwellVacuum_system_II_evolution_equations():
# Step 1.e.i.1: Declare Maxwell gridfunctions
_Phi,Gamma,_EU,_AU = declare_MaxwellVacuum_gridfunctions_if_not_declared_already()
# Step 1.e.i.2: Declare all derivatives appearing on the
# right-hand sides of the evolution equations
# of system II
AU_dDD = ixp.declarerank3("AU_dDD","sym12")
Phi_dD = ixp.declarerank1("Phi_dD")
Phi_dDD = ixp.declarerank2("Phi_dDD","sym01")
Gamma_dD = ixp.declarerank1("Gamma_dD")
# Step 1.e.i.3: Right-hand side of E^{i}:
#
# partial_{t}E^{i} = -partial^{j}partial_{j}A^{i} + partial^{i}Gamma
ErhsU = ixp.zerorank1()
for i in range(DIM):
ErhsU[i] += Gamma_dD[i]
for j in range(DIM):
ErhsU[i] -= AU_dDD[i][j][j]
# Step 1.e.i.4: Right-hand side of A^{i}:
#
# partial_{t}A^{i} = -E^{i} - partial^{i}Phi
ArhsU = ixp.zerorank1()
for i in range(DIM):
ArhsU[i] = -EU[i] - Phi_dD[i]
# Step 1.e.i.5: Right-hand side of Phi:
#
# partial_{t}Phi = -Gamma
Phi_rhs = -Gamma
# Step 1.e.i.6: Right-hand side of Gamma:
#
# partial_{t}Gamma = -partial^{i}partial_{i}Phi
Gamma_rhs = sp.sympify(0)
for i in range(DIM):
Gamma_rhs -= Phi_dDD[i][i]
return ErhsU,ArhsU,Phi_rhs,Gamma_rhs
```
<a id='system_II_constraint_eqs'></a>
### Step 1.e.ii: System II constraint equations \[Back to [top](#toc)\]
$$\label{system_II_constraint_eqs}$$
We now implement the constraint equations
$$
\begin{align}
\mathcal{C} &= \partial_{i}E^{i},\\
\mathcal{G} &= \Gamma - \partial_{i}A^{i}.
\end{align}
$$
Notice that $\mathcal{C}=0$ and $\mathcal{G}=0$ *analytically*, but these constraints *are not* enforced during an evolution. They can instead be used as diagnostics of how well our numerical solution satisfies Maxwell's equations.
```python
# Step 1.e.ii: System II constraint equations
def MaxwellVacuum_system_II_constraint_equations():
# Step 1.e.ii.1: Declare Maxwell gridfunctions
_Phi,_Gamma,_EU,_AU = declare_MaxwellVacuum_gridfunctions_if_not_declared_already()
# Step 1.e.ii.2: Declare all derivatives appearing on the
# right-hand sides of the constraint equations
# of system II
EU_dD = ixp.declarerank2("EU_dD", "nosym")
AU_dD = ixp.declarerank2("AU_dD", "nosym")
# Step 1.e.ii.3: Gauss' law in vacuum:
#
# C = partial_{i}E^{i}
C = sp.sympify(0)
for i in range(DIM):
C += EU_dD[i][i]
# Step 1.e.ii.4: Gamma-constraint:
#
# G = Gamma - partial_{i}A^{i}
G = Gamma
for i in range(DIM):
G += -AU_dD[i][i]
return C,G
```
<hr style="width:100%;height:3px;color:black"/>
<a id='thorn_writing'></a>
# Step 2: Part II - Einstein Toolkit thorn writing \[Back to [top](#toc)\]
$$\label{thorn_writing}$$
In this part of the tutorial we will focus on writing the `MaxwellVacuum` ETK thorn. We will go over:
* Generating C code kernels from the symbolic expressions, including a function to initialize the right-hand sides to zero
* Writing all additional functions that are needed by the thorn:
* The right-hand sides driver function
* The constraints driver function
* Registering the evolved gridfunctions and their right-hand sides so that they can be used by the [Method of Lines](https://einsteintoolkit.org/thornguide/CactusNumerical/MoL/documentation.html) thorn
* Specifying gridfunction symmetries
* Boundary conditions
* Printing the `NRPy+` logo when using this thorn
* Writing the thorn's configuration (`*.ccl`) files
<a id='generating_c_code_kernels'></a>
## Step 2.a: Generating C code kernels for Maxwell's equations \[Back to [top](#toc)\]
$$\label{generating_c_code_kernels}$$
We now focus on generating the C code kernels that are needed by the `MaxwellVacuum` thorn. These will be source files containing a single C function each. The functions can be used to compute, for example, the right-hand sides of Maxwell's evolution equations, and the source code which composes the core of the function's body is the result of converting the `SymPy` symbolic expressions into highly optimized C code.
We will then focus on how to generate this highly optimized C code in `NRPy+` using the `outputC.py` module. Because we want our symbolic derivatives to be replaced by finite difference approximations, we will use the wrapper function `FD_outputC()`, defined in the `finite_differences.py` module.
```python
# Step 2.a: Generating C code kernels for Maxwell's equations
# Step 2.a.0.i: Set the thorn's directories
Thorndir = "MaxwellVacuum"
Ccodesdir = os.path.join(Thorndir,"src")
shutil.rmtree(Thorndir, ignore_errors=True)
cmd.mkdir(Thorndir)
cmd.mkdir(Ccodesdir)
# Step 2.a.0.ii: Copy SIMD compiler intrinsics files
cmd.mkdir(os.path.join(Ccodesdir,"SIMD"))
shutil.copy(os.path.join(nrpy_core_dir,"SIMD","SIMD_intrinsics.h"),os.path.join(Ccodesdir,"SIMD"))
# Step 2.a.0.iii: Enable rfm precompute
cmd.mkdir(os.path.join(Ccodesdir,"rfm_files"))
par.set_parval_from_str("reference_metric::enable_rfm_precompute","True")
par.set_parval_from_str("reference_metric::rfm_precompute_Ccode_outdir",os.path.join(Ccodesdir,"rfm_files"))
# Step 2.a.0.iv: Set gridfunction memory access mode
par.set_parval_from_str("grid::GridFuncMemAccess","ETK")
# Step 2.a.0.v: Set coordinate system and generate reference_metric files
CoordSystem = "Cartesian"
par.set_parval_from_str("reference_metric::CoordSystem",CoordSystem)
rfm.reference_metric()
# Step 2.a.0.vi: Set number of spatial dimensions
par.set_parval_from_str("grid::DIM",3)
DIM = par.parval_from_str("grid::DIM")
# Step 2.a.0.vii: Master C code generating function
def MaxwellVacuum_C_code_generation_function(name,desc,gf_and_expr_list,make_code_defn_list=None):
outfile = os.path.join(Ccodesdir,name+".c")
outCfunction(
includes = ["math.h","cctk.h","cctk_Arguments.h","cctk_Parameters.h","SIMD/SIMD_intrinsics.h"],
outfile = outfile,
desc = desc,
name = name,
enableCparameters = False,
params = "CCTK_ARGUMENTS",
preloop = """
DECLARE_CCTK_ARGUMENTS;
const CCTK_REAL NOSIMDinvdx0 = 1.0/CCTK_DELTA_SPACE(0);
const REAL_SIMD_ARRAY invdx0 = ConstSIMD(NOSIMDinvdx0);
const CCTK_REAL NOSIMDinvdx1 = 1.0/CCTK_DELTA_SPACE(1);
const REAL_SIMD_ARRAY invdx1 = ConstSIMD(NOSIMDinvdx1);
const CCTK_REAL NOSIMDinvdx2 = 1.0/CCTK_DELTA_SPACE(2);
const REAL_SIMD_ARRAY invdx2 = ConstSIMD(NOSIMDinvdx2);
#pragma omp parallel for
for(int i2=cctk_nghostzones[2];i2<cctk_lsh[2]-cctk_nghostzones[2];i2++) {
#include "rfm_files/rfm_struct__SIMD_inner_read2.h"
for(int i1=cctk_nghostzones[1];i1<cctk_lsh[1]-cctk_nghostzones[1];i1++) {
#include "rfm_files/rfm_struct__SIMD_inner_read1.h"
for(int i0=cctk_nghostzones[0];i0<cctk_lsh[0]-cctk_nghostzones[0];i0+=SIMD_width) {
#include "rfm_files/rfm_struct__SIMD_inner_read0.h"
""",
body =fin.FD_outputC("returnstring",gf_and_expr_list,
params="outCverbose=False,includebraces=False,enable_SIMD=True,preindent=8"),
postloop = """
} // for(int i0=cctk_nghostzones[0];i0<cctk_lsh[0]-cctk_nghostzones[0];i0+=SIMD_width)
} // for(int i1=cctk_nghostzones[1];i1<cctk_lsh[1]-cctk_nghostzones[1];i1++)
} // for(int i2=cctk_nghostzones[2];i2<cctk_lsh[2]-cctk_nghostzones[2];i2++)
""")
if make_code_defn_list is not None:
make_code_defn_list.append(name+".c")
```
<a id='system_I_c_code_generation'></a>
### Step 2.a.i: System I C code generation \[Back to [top](#toc)\]
$$\label{system_I_c_code_generation}$$
We will now generate the C code kernels for [System I](#systems_I_and_II). We will generate a total of 8 source files containing:
* Functions to evaluate the right-hand sides of the evolution equations in System I using:
* 2nd order finite differences
* 4th order finite differences
* 6th order finite differences
* 8th order finite differences
* Functions to evaluate the constraint equation in System I using:
* 2nd order finite differences
* 4th order finite differences
* 6th order finite differences
* 8th order finite differences
```python
# Step 2.a.i: System I C code generation
# Step 2.a.i.1: Clear list of registered gridfunctions
gri.glb_gridfcs_list = []
# Step 2.a.i.2: Declare an auxiliary gridfunction "C" to store Gauss' law
_CGF = gri.register_gridfunctions("AUX","C")
# Step 2.a.i.3: Store the current (default) finite difference order
FD_order_orig = par.parval_from_str("finite_difference::FD_CENTDERIVS_ORDER")
# Step 2.a.i.4: Set which finite difference orders we want
FD_order_min = 2
FD_order_max = 8
FD_order_step = 2
# Step 2.a.i.5: Generate the C code kernels
make_code_defn_list = []
for FD_order in range(FD_order_min,FD_order_max+1,FD_order_step):
# Update finite difference order
par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER",FD_order)
# Evolution equations
name = "MaxwellVacuum_System_I_RHSs_FD_order_%d"%(FD_order)
desc = "Right-hand sides of the evolution equations in system I - FD order = %d"%(FD_order)
ErhsU,ArhsU,Phi_rhs = MaxwellVacuum_system_I_evolution_equations()
gf_and_expr_list = [lhrh(lhs=gri.gfaccess("rhs_gfs","EU0"),rhs=ErhsU[0]),
lhrh(lhs=gri.gfaccess("rhs_gfs","EU1"),rhs=ErhsU[1]),
lhrh(lhs=gri.gfaccess("rhs_gfs","EU2"),rhs=ErhsU[2]),
lhrh(lhs=gri.gfaccess("rhs_gfs","AU0"),rhs=ArhsU[0]),
lhrh(lhs=gri.gfaccess("rhs_gfs","AU1"),rhs=ArhsU[1]),
lhrh(lhs=gri.gfaccess("rhs_gfs","AU2"),rhs=ArhsU[2]),
lhrh(lhs=gri.gfaccess("rhs_gfs","Phi"),rhs=Phi_rhs)]
MaxwellVacuum_C_code_generation_function(name,desc,gf_and_expr_list,make_code_defn_list=make_code_defn_list)
# Constraint equations
name = "MaxwellVacuum_System_I_constraints_FD_order_%d"%(FD_order)
desc = "Constraint equations in system I - FD order = %d"%(FD_order)
C = MaxwellVacuum_system_I_constraint_equation()
gf_and_expr_list = lhrh(lhs=gri.gfaccess("aux_gfs","C"),rhs=C)
MaxwellVacuum_C_code_generation_function(name,desc,gf_and_expr_list,make_code_defn_list=make_code_defn_list)
# Step 2.a.i.6: Restore original FD_order
par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER",FD_order_orig)
```
Output C function MaxwellVacuum_System_I_RHSs_FD_order_2() to file MaxwellVacuum/src/MaxwellVacuum_System_I_RHSs_FD_order_2.c
Output C function MaxwellVacuum_System_I_constraints_FD_order_2() to file MaxwellVacuum/src/MaxwellVacuum_System_I_constraints_FD_order_2.c
Output C function MaxwellVacuum_System_I_RHSs_FD_order_4() to file MaxwellVacuum/src/MaxwellVacuum_System_I_RHSs_FD_order_4.c
Output C function MaxwellVacuum_System_I_constraints_FD_order_4() to file MaxwellVacuum/src/MaxwellVacuum_System_I_constraints_FD_order_4.c
Output C function MaxwellVacuum_System_I_RHSs_FD_order_6() to file MaxwellVacuum/src/MaxwellVacuum_System_I_RHSs_FD_order_6.c
Output C function MaxwellVacuum_System_I_constraints_FD_order_6() to file MaxwellVacuum/src/MaxwellVacuum_System_I_constraints_FD_order_6.c
Output C function MaxwellVacuum_System_I_RHSs_FD_order_8() to file MaxwellVacuum/src/MaxwellVacuum_System_I_RHSs_FD_order_8.c
Output C function MaxwellVacuum_System_I_constraints_FD_order_8() to file MaxwellVacuum/src/MaxwellVacuum_System_I_constraints_FD_order_8.c
<a id='system_II_c_code_generation'></a>
### Step 2.a.ii: System II C code generation \[Back to [top](#toc)\]
$$\label{system_II_c_code_generation}$$
We will now generate the C code kernels for [System II](#systems_I_and_II). We will generate a total of 8 source files containing:
* Functions to evaluate the right-hand sides of the evolution equations in System II using:
* 2nd order finite differences
* 4th order finite differences
* 6th order finite differences
* 8th order finite differences
* Functions to evaluate the constraint equation in System II using:
* 2nd order finite differences
* 4th order finite differences
* 6th order finite differences
* 8th order finite differences
```python
# Step 2.a.ii: System II C code generation
# Step 2.a.ii.1: Clear list of registered gridfunctions
gri.glb_gridfcs_list = []
# Step 2.a.ii.2: Declare auxiliary gridfunctions "C" and "G" to store Gauss' law and the Gamma-constraint
_CGF,_GGF = gri.register_gridfunctions("AUX",["C","G"])
# Step 2.a.ii.3: Store the current (default) finite difference order
FD_order_orig = par.parval_from_str("finite_difference::FD_CENTDERIVS_ORDER")
# Step 2.a.ii.4: Set which finite difference orders we want
FD_order_min = 2
FD_order_max = 8
FD_order_step = 2
# Step 2.a.ii.5: Generate the C code kernels
for FD_order in range(FD_order_min,FD_order_max+1,FD_order_step):
# Update finite difference order
par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER",FD_order)
# Evolution equations
desc = "Right-hand sides of the evolution equations in system II - FD order = %d"%(FD_order)
name = "MaxwellVacuum_System_II_RHSs_FD_order_%d"%(FD_order)
ErhsU,ArhsU,Phi_rhs,Gamma_rhs = MaxwellVacuum_system_II_evolution_equations()
gf_and_expr_list = [lhrh(lhs=gri.gfaccess("rhs_gfs","EU0" ),rhs=ErhsU[0]),
lhrh(lhs=gri.gfaccess("rhs_gfs","EU1" ),rhs=ErhsU[1]),
lhrh(lhs=gri.gfaccess("rhs_gfs","EU2" ),rhs=ErhsU[2]),
lhrh(lhs=gri.gfaccess("rhs_gfs","AU0" ),rhs=ArhsU[0]),
lhrh(lhs=gri.gfaccess("rhs_gfs","AU1" ),rhs=ArhsU[1]),
lhrh(lhs=gri.gfaccess("rhs_gfs","AU2" ),rhs=ArhsU[2]),
lhrh(lhs=gri.gfaccess("rhs_gfs","Phi" ),rhs=Phi_rhs),
lhrh(lhs=gri.gfaccess("rhs_gfs","Gamma"),rhs=Gamma_rhs)]
MaxwellVacuum_C_code_generation_function(name,desc,gf_and_expr_list,make_code_defn_list=make_code_defn_list)
# Constraint equations
desc = "Constraint equations in system II - FD order = %d"%(FD_order)
name = "MaxwellVacuum_System_II_constraints_FD_order_%d"%(FD_order)
C,G = MaxwellVacuum_system_II_constraint_equations()
gf_and_expr_list = [lhrh(lhs=gri.gfaccess("aux_gfs","C"),rhs=C),
lhrh(lhs=gri.gfaccess("aux_gfs","G"),rhs=G)]
MaxwellVacuum_C_code_generation_function(name,desc,gf_and_expr_list,make_code_defn_list=make_code_defn_list)
# Step 2.a.ii.6: Restore original FD_order
par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER",FD_order_orig)
```
Output C function MaxwellVacuum_System_II_RHSs_FD_order_2() to file MaxwellVacuum/src/MaxwellVacuum_System_II_RHSs_FD_order_2.c
Output C function MaxwellVacuum_System_II_constraints_FD_order_2() to file MaxwellVacuum/src/MaxwellVacuum_System_II_constraints_FD_order_2.c
Output C function MaxwellVacuum_System_II_RHSs_FD_order_4() to file MaxwellVacuum/src/MaxwellVacuum_System_II_RHSs_FD_order_4.c
Output C function MaxwellVacuum_System_II_constraints_FD_order_4() to file MaxwellVacuum/src/MaxwellVacuum_System_II_constraints_FD_order_4.c
Output C function MaxwellVacuum_System_II_RHSs_FD_order_6() to file MaxwellVacuum/src/MaxwellVacuum_System_II_RHSs_FD_order_6.c
Output C function MaxwellVacuum_System_II_constraints_FD_order_6() to file MaxwellVacuum/src/MaxwellVacuum_System_II_constraints_FD_order_6.c
Output C function MaxwellVacuum_System_II_RHSs_FD_order_8() to file MaxwellVacuum/src/MaxwellVacuum_System_II_RHSs_FD_order_8.c
Output C function MaxwellVacuum_System_II_constraints_FD_order_8() to file MaxwellVacuum/src/MaxwellVacuum_System_II_constraints_FD_order_8.c
<a id='zero_rhss'></a>
### Step 2.a.iii: C code to initialize right-hand sides to zero \[Back to [top](#toc)\]
$$\label{zero_rhss}$$
This is a simple function that is used to initialize the right-hand side gridfunctions to zero.
```python
# Step 2.a.iii: C code to initialize right-hand sides to zero
zero = sp.sympify(0)
name = "MaxwellVacuum_zero_rhss"
desc = "Set right-hand sides of the evolution equations to zero"
gf_and_expr_list = [lhrh(lhs=gri.gfaccess("rhs_gfs","EU0" ),rhs=zero),
lhrh(lhs=gri.gfaccess("rhs_gfs","EU1" ),rhs=zero),
lhrh(lhs=gri.gfaccess("rhs_gfs","EU2" ),rhs=zero),
lhrh(lhs=gri.gfaccess("rhs_gfs","AU0" ),rhs=zero),
lhrh(lhs=gri.gfaccess("rhs_gfs","AU1" ),rhs=zero),
lhrh(lhs=gri.gfaccess("rhs_gfs","AU2" ),rhs=zero),
lhrh(lhs=gri.gfaccess("rhs_gfs","Phi" ),rhs=zero),
lhrh(lhs=gri.gfaccess("rhs_gfs","Gamma"),rhs=zero)]
outfile = os.path.join(Ccodesdir,name+".c")
outCfunction(
includes = ["math.h","cctk.h","cctk_Arguments.h","cctk_Parameters.h","SIMD/SIMD_intrinsics.h"],
outfile = outfile,
desc = desc,
name = name,
enableCparameters=False,
params = "CCTK_ARGUMENTS",
preloop = """
DECLARE_CCTK_ARGUMENTS;
#pragma omp parallel for
for(int i2=0;i2<cctk_lsh[2];i2++) {
for(int i1=0;i1<cctk_lsh[1];i1++) {
for(int i0=0;i0<cctk_lsh[0];i0+=SIMD_width) {
""",
body =fin.FD_outputC("returnstring",gf_and_expr_list,
params="outCverbose=False,includebraces=False,enable_SIMD=True,preindent=8"),
postloop = """
} // for(int i0=0;i0<cctk_lsh[0];i0+=SIMD_width)
} // for(int i1=0;i1<cctk_lsh[1];i1++)
} // for(int i2=0;i2<cctk_lsh[2];i2++)
""")
make_code_defn_list.append(name+".c")
```
Output C function MaxwellVacuum_zero_rhss() to file MaxwellVacuum/src/MaxwellVacuum_zero_rhss.c
<a id='rhs_driver'></a>
## Step 2.b: The right-hand side driver function \[Back to [top](#toc)\]
$$\label{rhs_driver}$$
This function is used to select which of the right-hand sides functions we have defined above should be called. There are two parameters in the `MaxwellVacuum` thorn (which we will declare below) that steer the behaviour of the right-hand side driver:
* `FD_order`
* `which_system`
The first of them, `FD_order`, selects which finite difference order we want to use when evaluating the right-hand sides of the evolution equations. The second of them, `which_system`, selects whether we will use the evolution equations in System I or System II.
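For example, a run that evolves System II with eighth-order-accurate stencils would set `MaxwellVacuum::which_system = "SystemII"` and `MaxwellVacuum::FD_order = 8` in its parameter file.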
```python
# Step 2.b: The right-hand side driver function
# Step 2.b.i: Get the names of the RHSs functions
fct_names = []
for fct in make_code_defn_list:
if "RHSs" in fct:
fct_names.append(fct.split(".")[0])
# Step 2.b.ii: External function declarations needed by our driver function
include_string = ""
for fct_name in fct_names:
include_string += "extern void "+fct_name+"(CCTK_ARGUMENTS);\n"
# Step 2.b.iii: Right-hand side evaluation driver function
driver_function_string = """
#include <math.h>
#include "cctk.h"
#include "cctk_Arguments.h"
#include "cctk_Parameters.h"
#include "SIMD/SIMD_intrinsics.h"
"""+include_string+"""
void MaxwellVacuum_RHSs(CCTK_ARGUMENTS) {
DECLARE_CCTK_ARGUMENTS;
DECLARE_CCTK_PARAMETERS;
if( CCTK_EQUALS(which_system,"SystemI")) {
if(FD_order == 2) {
MaxwellVacuum_System_I_RHSs_FD_order_2(CCTK_PASS_CTOC);
}
else if(FD_order == 4) {
MaxwellVacuum_System_I_RHSs_FD_order_4(CCTK_PASS_CTOC);
}
else if(FD_order == 6) {
MaxwellVacuum_System_I_RHSs_FD_order_6(CCTK_PASS_CTOC);
}
else if(FD_order == 8) {
MaxwellVacuum_System_I_RHSs_FD_order_8(CCTK_PASS_CTOC);
}
else {
CCTK_VError(__LINE__,__FILE__,CCTK_THORNSTRING,
"Error: unsupported FD_order: %d",FD_order);
}
}
else if( CCTK_EQUALS(which_system,"SystemII")) {
if(FD_order == 2) {
MaxwellVacuum_System_II_RHSs_FD_order_2(CCTK_PASS_CTOC);
}
else if(FD_order == 4) {
MaxwellVacuum_System_II_RHSs_FD_order_4(CCTK_PASS_CTOC);
}
else if(FD_order == 6) {
MaxwellVacuum_System_II_RHSs_FD_order_6(CCTK_PASS_CTOC);
}
else if(FD_order == 8) {
MaxwellVacuum_System_II_RHSs_FD_order_8(CCTK_PASS_CTOC);
}
else {
CCTK_VError(__LINE__,__FILE__,CCTK_THORNSTRING,
"Error: unsupported FD_order: %d. Supported orders are: 2, 4, 6, and 8.",FD_order);
}
}
else {
CCTK_VError(__LINE__,__FILE__,CCTK_THORNSTRING,
"Error: unsupported which_system: %s. Supported systems are \\"SystemI\\" and \\"SystemII\\".",which_system);
}
}
"""
# Step 2.b.iv: Write the right-hand side evaluation driver function to file
name = "MaxwellVacuum_RHSs.c"
outfile = os.path.join(Ccodesdir,name)
with open(outfile,"w") as file:
file.write(driver_function_string)
print("Wrote file \"%s\""%outfile)
make_code_defn_list.append(name)
```
Wrote file "MaxwellVacuum/src/MaxwellVacuum_RHSs.c"
<a id='constraints_driver'></a>
## Step 2.c: The constraints driver function \[Back to [top](#toc)\]
$$\label{constraints_driver}$$
This function is used to select which of the constraint functions we have generated above should be called. There are two parameters in the `MaxwellVacuum` thorn (which we will declare below) that steer the behaviour of the constraints driver:
* `FD_order`
* `which_system`
The first of them, `FD_order`, selects which finite difference order we want to use when evaluating the constraint equations. The second of them, `which_system`, selects whether we will use the constraint equations in System I or System II.
```python
# Step 2.c: The constraints driver function
# Step 2.c.i: Get the names of the constraints functions
fct_names = []
for fct in make_code_defn_list:
if "constraint" in fct:
fct_names.append(fct.split(".")[0])
# Step 2.c.ii: External function declarations needed by our driver function
include_string = ""
for fct_name in fct_names:
include_string += "extern void "+fct_name+"(CCTK_ARGUMENTS);\n"
# Step 2.c.iii: Constraints driver function
driver_function_string = """
#include <math.h>
#include "cctk.h"
#include "cctk_Arguments.h"
#include "cctk_Parameters.h"
#include "SIMD/SIMD_intrinsics.h"
"""+include_string+"""
void MaxwellVacuum_constraints(CCTK_ARGUMENTS) {
DECLARE_CCTK_ARGUMENTS;
DECLARE_CCTK_PARAMETERS;
if( CCTK_EQUALS(which_system,"SystemI")) {
if(FD_order == 2) {
MaxwellVacuum_System_I_constraints_FD_order_2(CCTK_PASS_CTOC);
}
else if(FD_order == 4) {
MaxwellVacuum_System_I_constraints_FD_order_4(CCTK_PASS_CTOC);
}
else if(FD_order == 6) {
MaxwellVacuum_System_I_constraints_FD_order_6(CCTK_PASS_CTOC);
}
else if(FD_order == 8) {
MaxwellVacuum_System_I_constraints_FD_order_8(CCTK_PASS_CTOC);
}
else {
CCTK_VError(__LINE__,__FILE__,CCTK_THORNSTRING,
"Error: unsupported FD_order: %d",FD_order);
}
}
else if( CCTK_EQUALS(which_system,"SystemII")) {
if(FD_order == 2) {
MaxwellVacuum_System_II_constraints_FD_order_2(CCTK_PASS_CTOC);
}
else if(FD_order == 4) {
MaxwellVacuum_System_II_constraints_FD_order_4(CCTK_PASS_CTOC);
}
else if(FD_order == 6) {
MaxwellVacuum_System_II_constraints_FD_order_6(CCTK_PASS_CTOC);
}
else if(FD_order == 8) {
MaxwellVacuum_System_II_constraints_FD_order_8(CCTK_PASS_CTOC);
}
else {
CCTK_VError(__LINE__,__FILE__,CCTK_THORNSTRING,
"Error: unsupported FD_order: %d. Supported orders are: 2, 4, 6, and 8.",FD_order);
}
}
else {
CCTK_VError(__LINE__,__FILE__,CCTK_THORNSTRING,
"Error: unsupported which_system: %s. Supported systems are \\"SystemI\\" and \\"SystemII\\".",which_system);
}
}
"""
# Step 2.c.iv: Write the constraints driver function to file
name = "MaxwellVacuum_constraints.c"
outfile = os.path.join(Ccodesdir,name)
with open(outfile,"w") as file:
file.write(driver_function_string)
print("Wrote file \"%s\""%outfile)
make_code_defn_list.append(name)
```
Wrote file "MaxwellVacuum/src/MaxwellVacuum_constraints.c"
<a id='mol_registration'></a>
## Step 2.d: Registering gridfunctions for the Method of Lines \[Back to [top](#toc)\]
$$\label{mol_registration}$$
We now register the gridfunctions of the `MaxwellVacuum` thorn so that they can be used by the [Method of Lines](https://einsteintoolkit.org/thornguide/CactusNumerical/MoL/documentation.html) ETK thorn.
```python
make_code_defn_list.append("MaxwellVacuum_MoL_registration.c")
```
```python
%%writefile $Ccodesdir/MaxwellVacuum_MoL_registration.c
//--------------------------------------------------------------------------
// Register with the Method of Lines time stepper
// (MoL thorn, found in arrangements/CactusBase/MoL)
// MoL documentation located in arrangements/CactusBase/MoL/doc
//--------------------------------------------------------------------------
#include <stdio.h>
#include "cctk.h"
#include "cctk_Parameters.h"
#include "cctk_Arguments.h"
#include "Symmetry.h"
void MaxwellVacuum_MoL_registration(CCTK_ARGUMENTS)
{
DECLARE_CCTK_ARGUMENTS;
DECLARE_CCTK_PARAMETERS;
CCTK_INT group, rhs, ierr=0;
// Register evolution & RHS gridfunction groups with MoL, so it knows
group = CCTK_GroupIndex("MaxwellVacuum::evol_variables");
rhs = CCTK_GroupIndex("MaxwellVacuum::evol_variables_rhs");
ierr += MoLRegisterEvolvedGroup(group, rhs);
if(ierr) CCTK_ERROR("Problems registering with MoL");
}
```
Writing MaxwellVacuum/src/MaxwellVacuum_MoL_registration.c
<a id='gridfunction_symmetries'></a>
## Step 2.e: Set gridfunction symmetries \[Back to [top](#toc)\]
$$\label{gridfunction_symmetries}$$
This function tells the [`CartGrid3D`](https://einsteintoolkit.org/thornguide/CactusBase/CartGrid3D/documentation.html) ETK thorn the symmetry properties of each gridfunction used by the `MaxwellVacuum` thorn, i.e. the parity of each gridfunction across the coordinate planes. For example,
$$
f(-x,y,z) = s f(x,y,z),
$$
with $s=+1\ (-1)$ meaning the function is even (odd) in $x$.
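The generated C code below hard-codes these parities based on each gridfunction's name; the following hypothetical helper (not part of the thorn) illustrates the same logic for the rank-0 and rank-1 gridfunctions used here:
```python
# Hypothetical helper, for illustration only: infer parities from gridfunction names.
def parity_from_gridfunction_name(gfname):
    sym  = [1, 1, 1]          # default: scalar, even across all three coordinate planes
    base = gfname[:-2]        # strip the trailing "GF"
    if base[-1].isdigit():    # rank-1: e.g. "EU0" is odd across the x=0 plane
        sym[int(base[-1])] = -1
    return sym

print(parity_from_gridfunction_name("EU0GF"))  # [-1, 1, 1]
print(parity_from_gridfunction_name("PhiGF"))  # [ 1, 1, 1]
```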
```python
# Step 2.e: Set gridfunction symmetries
# Step 2.e.i: Get list of evolved and auxiliary gridfunctions
evol_gfs_list = []
aux_gfs_list = []
for i in range(len(gri.glb_gridfcs_list)):
if gri.glb_gridfcs_list[i].gftype == "EVOL":
evol_gfs_list.append(gri.glb_gridfcs_list[i].name+"GF")
if gri.glb_gridfcs_list[i].gftype == "AUX":
aux_gfs_list.append( gri.glb_gridfcs_list[i].name+"GF")
# Step 2.e.ii: Sort gridfunction lists
evol_gfs_list.sort()
aux_gfs_list.sort()
# Step 2.e.iii: Get list of all (evol+aux) gridfunctions
full_gfs_list = []
full_gfs_list.extend(evol_gfs_list)
full_gfs_list.extend(aux_gfs_list)
# Step 2.e.iv: Write the function to a string
outstr = """
#include "cctk.h"
#include "cctk_Arguments.h"
#include "cctk_Parameters.h"
#include "Symmetry.h"
void MaxwellVacuum_Symmetry_registration(CCTK_ARGUMENTS) {
DECLARE_CCTK_ARGUMENTS;
DECLARE_CCTK_PARAMETERS;
// Stores gridfunction parity across x=0, y=0, and z=0 planes, respectively
int sym[3];
// Next register parities for each gridfunction based on its name
// (to ensure this algorithm is robust, gridfunctions with integers
// in their base names are forbidden in NRPy+).
"""
outstr += ""
for gfname in full_gfs_list:
gfname_without_GFsuffix = gfname[:-2]
outstr += """
// Default to scalar symmetry:
sym[0] = 1; sym[1] = 1; sym[2] = 1;
// Now modify sym[0], sym[1], and/or sym[2] as needed
// to account for gridfunction parity across
// x=0, y=0, and/or z=0 planes, respectively
"""
# If gridfunction name does not end in a digit, by NRPy+ syntax, it must be a scalar
if gfname_without_GFsuffix[len(gfname_without_GFsuffix) - 1].isdigit() == False:
outstr += " // (this gridfunction is a scalar -- no need to change default sym[]'s!)\n"
elif len(gfname_without_GFsuffix) > 2:
# Rank-1 indexed expression (e.g., vector)
if gfname_without_GFsuffix[len(gfname_without_GFsuffix) - 2].isdigit() == False:
if int(gfname_without_GFsuffix[-1]) > 2:
print("Error: Found invalid gridfunction name: "+gfname)
sys.exit(1)
symidx = gfname_without_GFsuffix[-1]
if int(symidx) < 3: outstr += " sym[" + symidx + "] = -1;\n"
# Rank-2 indexed expression
elif gfname_without_GFsuffix[len(gfname_without_GFsuffix) - 2].isdigit() == True:
if len(gfname_without_GFsuffix) > 3 and gfname_without_GFsuffix[len(gfname_without_GFsuffix) - 3].isdigit() == True:
print("Error: Found a Rank-3 or above gridfunction: "+gfname+", which is at the moment unsupported.")
print("It should be easy to support this if desired.")
sys.exit(1)
symidx0 = gfname_without_GFsuffix[-2]
if int(symidx0) >= 0: outstr += " sym[" + symidx0 + "] *= -1;\n"
symidx1 = gfname_without_GFsuffix[-1]
if int(symidx1) >= 0: outstr += " sym[" + symidx1 + "] *= -1;\n"
else:
print("Don't know how you got this far with a gridfunction named "+gfname+", but I'll take no more of this nonsense.")
print(" Please follow best-practices and rename your gridfunction to be more descriptive")
sys.exit(1)
outstr += " SetCartSymVN(cctkGH, sym, \"MaxwellVacuum::" + gfname + "\");\n"
outstr += "}\n"
# Step 2.e.v: Write the function string to file
name = "MaxwellVacuum_Symmetry_registration.c"
outfile = os.path.join(Ccodesdir,name)
with open(outfile,"w") as file:
file.write(outstr)
print("Wrote file \"%s\""%outfile)
# Step 2.e.vi: Add function name to the list of functions
# to be added to the make.code.defn file
make_code_defn_list.append(name)
```
Wrote file "MaxwellVacuum/src/MaxwellVacuum_Symmetry_registration.c"
<a id='boundary_conditions'></a>
## Step 2.f: Boundary condition configuration \[Back to [top](#toc)\]
$$\label{boundary_conditions}$$
```python
# Step 2.f: Register with the boundary conditions thorns.
# Step 2.f.i Set BC type to "none" for all variables
# Since we choose NewRad boundary conditions, we must register all
# gridfunctions to have boundary type "none". This is because
# NewRad is seen by the rest of the Toolkit as a modification to the
# RHSs.
# Step 2.f.ii: This code is based on Kranc's McLachlan/ML_Maxwell/src/Boundaries.cc code.
outstr = """
#include "cctk.h"
#include "cctk_Arguments.h"
#include "cctk_Parameters.h"
#include "cctk_Faces.h"
#include "util_Table.h"
#include "Symmetry.h"
// Set `none` boundary conditions on Maxwell RHSs, as these are set via NewRad.
void MaxwellVacuum_BoundaryConditions_evolved_gfs(CCTK_ARGUMENTS) {
DECLARE_CCTK_ARGUMENTS;
DECLARE_CCTK_PARAMETERS;
CCTK_INT ierr CCTK_ATTRIBUTE_UNUSED = 0;
"""
for gf in evol_gfs_list:
outstr += """
ierr = Boundary_SelectVarForBC(cctkGH, CCTK_ALL_FACES, 1, -1, "MaxwellVacuum::"""+gf+"""", "none");
if (ierr < 0) CCTK_ERROR("Failed to register BC for MaxwellVacuum::"""+gf+"""!");
"""
outstr += """
}
// Set `none` boundary conditions on Maxwell constraints
void MaxwellVacuum_BoundaryConditions_aux_gfs(CCTK_ARGUMENTS) {
DECLARE_CCTK_ARGUMENTS;
DECLARE_CCTK_PARAMETERS;
CCTK_INT ierr CCTK_ATTRIBUTE_UNUSED = 0;
"""
for gf in aux_gfs_list:
outstr += """
ierr = Boundary_SelectVarForBC(cctkGH, CCTK_ALL_FACES, cctk_nghostzones[0], -1, "MaxwellVacuum::"""+gf+"""", "none");
if (ierr < 0) CCTK_ERROR("Failed to register BC for MaxwellVacuum::"""+gf+"""!");
"""
outstr += "}\n"
# Step 2.f.iii: Write the function string to file
name = "MaxwellVacuum_BoundaryConditions_evolved_gfs.c"
outfile = os.path.join(Ccodesdir,name)
with open(outfile,"w") as file:
file.write(outstr)
print("Wrote file \"%s\""%outfile)
# Step 2.f.iv: Add function name to the list of functions
# to be added to the make.code.defn file
make_code_defn_list.append(name)
# Step 2.f.v: Set C code for calling NewRad BCs
# As explained in lean_public/LeanMaxwellMoL/src/calc_mwev_rhs.F90,
# the function NewRad_Apply takes the following arguments:
# NewRad_Apply(cctkGH, var, rhs, var0, v0, radpower),
# which implement the boundary condition:
# var = var_at_infinite_r + u(r-var_char_speed*t)/r^var_radpower
# Obviously for var_radpower>0, var_at_infinite_r is the value of
# the variable at r->infinity. var_char_speed is the propagation
# speed at the outer boundary, and var_radpower is the radial
# falloff rate.
outstr = """
#include <math.h>
#include "cctk.h"
#include "cctk_Arguments.h"
#include "cctk_Parameters.h"
void MaxwellVacuum_NewRad(CCTK_ARGUMENTS) {
DECLARE_CCTK_ARGUMENTS;
DECLARE_CCTK_PARAMETERS;
"""
for gf in evol_gfs_list:
var_at_infinite_r = "0.0"
var_char_speed = "1.0"
var_radpower = "3.0"
outstr += " NewRad_Apply(cctkGH, "+gf+", "+gf.replace("GF","")+"_rhsGF, "+var_at_infinite_r+", "+var_char_speed+", "+var_radpower+");\n"
outstr += "}\n"
# Step 2.f.vi: Write the function string to file
name = "MaxwellVacuum_NewRad.c"
outfile = os.path.join(Ccodesdir,name)
with open(outfile,"w") as file:
file.write(outstr)
print("Wrote file \"%s\""%outfile)
# Step 2.f.vii: Add function name to the list of functions
# to be added to the make.code.defn file
make_code_defn_list.append(name)
```
Wrote file "MaxwellVacuum/src/MaxwellVacuum_BoundaryConditions_evolved_gfs.c"
Wrote file "MaxwellVacuum/src/MaxwellVacuum_NewRad.c"
<a id='nrpy_banner'></a>
## Step 2.g: `NRPy+` banner \[Back to [top](#toc)\]
$$\label{nrpy_banner}$$
```python
# Step 2.g: Function that prints the NRPy+ banner
# Step 2.g.i: Write the function to a string
outstr = """
#include <stdio.h>
void MaxwellVacuum_Banner() {
"""
logostr = logo.print_logo(print_to_stdout=False)
outstr += " printf(\"MaxwellVacuum: another Einstein Toolkit thorn generated by\\n\");\n"
for line in logostr.splitlines():
outstr += " printf(\""+line+"\\n\");\n"
outstr += "}\n"
# Step 2.g.ii: Write the function string to file
name = "MaxwellVacuum_Banner.c"
outfile = os.path.join(Ccodesdir,name)
with open(outfile,"w") as file:
file.write(outstr)
print("Wrote file \"%s\""%outfile)
# Step 2.g.iii: Add function name to the list of functions
# to be added to the make.code.defn file
make_code_defn_list.append(name)
```
Wrote file "MaxwellVacuum/src/MaxwellVacuum_Banner.c"
<a id='ccl_files'></a>
## Step 2.h: Thorn configuration files \[Back to [top](#toc)\]
$$\label{ccl_files}$$
<a id='make_code_defn'></a>
### Step 2.h.i: Generating the `make.code.defn` file \[Back to [top](#toc)\]
$$\label{make_code_defn}$$
The `make.code.defn` file specifies which files must be compiled during the ETK build. The general format is:
```
# Files to be compiled
SRCS = file_1.ext \
file_2.ext \
file_3.ext
# Subdirectories containing source files
SUBDIRS = subdirectory_1 \
subdirectory_2
```
or, equivalently,
```
# Files to be compiled
SRCS = file_1.ext file_2.ext file_3.ext
# Subdirectories containing source files
SUBDIRS = subdirectory_1 subdirectory_2
```
```python
# Step 2.h: Thorn configuration files
# Step 2.h.i: Generating the make.code.defn file
# Step 2.h.i.1: Write the make.code.defn file to string
make_code_defn_list.sort()
make_code_defn_string = """
# Main make.code.defn file for thorn MaxwellVacuum
# Source files in this directory
SRCS = """
for i in range(len(make_code_defn_list)):
fct = make_code_defn_list[i]
if i == 0:
make_code_defn_string += fct+" \\\n"
elif i > 0 and i < len(make_code_defn_list)-1:
make_code_defn_string += " "+fct+" \\\n"
else:
make_code_defn_string += " "+fct
# Step 2.h.i.2: Write the make.code.defn file
outfile = os.path.join(Ccodesdir,"make.code.defn")
with open(outfile,"w") as file:
file.write(make_code_defn_string)
print("Wrote file \"%s\""%outfile)
```
Wrote file "MaxwellVacuum/src/make.code.defn"
<a id='param_ccl'></a>
### Step 2.h.ii: Generating the `param.ccl` file \[Back to [top](#toc)\]
$$\label{param_ccl}$$
The `param.ccl` file contains the parameters defined by our thorn, as well as those which we wish to use in our thorn but which are defined by other thorns. The general structure of an entry in the `param.ccl` file is:
<pre>
<code>
<font color='purple'>PARAMETER_TYPE</font> <font color='blue'>parameter_name</font> <font color='red'>"Description of the parameter"</font>
{
<font color='green'><allowed or disallowed values></font> :: <font color='red'>"Description of the value"</font>
} <font color='green'>parameter_default_value</font>
</code>
</pre>
Typical values of `PARAMETER_TYPE` are `CCTK_REAL`, `CCTK_INT`, `CCTK_STRING`, and `BOOLEAN`. The `<allowed or disallowed>` values can actually span multiple lines.
Let us now look at the simplest of examples: a parameter `key`, which is an integer that can take any value and defaults to 1. This is achieved by writing:
<pre>
<code>
<font color='purple'>CCTK_INT</font> <font color='blue'>key</font> <font color='red'>"A very simple integer parameter"</font>
{
<font color='green'>*:*</font> :: <font color='red'>"Can be anything"</font>
} <font color='green'>1</font>
</code>
</pre>
As mentioned before, our thorn contains two parameters: `FD_order` and `which_system`, which are of types `CCTK_INT` and `CCTK_STRING`, respectively. It also shares parameters with the `Method of Lines` ETK thorn.
```python
%%writefile $Thorndir/param.ccl
# This param.ccl file was automatically generated by NRPy+.
# You are advised against modifying it directly; instead
# modify the Python code that generates it.
shares: MethodOfLines
restricted:
CCTK_INT FD_order "Finite-differencing order"
{
2:2 :: "finite-differencing order = 2"
4:4 :: "finite-differencing order = 4"
6:6 :: "finite-differencing order = 6"
8:8 :: "finite-differencing order = 8"
} 4
CCTK_STRING which_system "Which system to evolve"
{
"SystemI" :: "Evolve system I"
"SystemII" :: "Evolve system II"
} "SystemII"
```
Writing MaxwellVacuum/param.ccl
<a id='interface_ccl'></a>
### Step 2.h.iii: Generating the `interface.ccl` file \[Back to [top](#toc)\]
$$\label{interface_ccl}$$
We will now generate the `interface.ccl` file for the `MaxwellVacuum` thorn. This file specifies how the thorn interacts with the other thorns in the toolkit. For example, if we want to implement a function that can be used by other thorns, then we include its prototype in this file. Similarly, if we want to use a function or parameter defined by another thorn, we must declare that here.
This is also the place to define the gridfunctions that we need for our thorn, i.e. $\bigl(E^{i},A^{i},\Phi,\Gamma,\mathcal{C},\mathcal{G}\bigr)$.
The [official Einstein Toolkit documentation](https://einsteintoolkit.org/usersguide/UsersGuide.html#x1-179000D2.2) defines what must/should be included in an `interface.ccl` file in detail.
```python
%%writefile $Thorndir/interface.ccl
# This interface.ccl file was automatically generated by NRPy+.
# You are advised against modifying it directly; instead
# modify the Python code that generates it.
# With "implements", we give our thorn its unique name.
implements: MaxwellVacuum
# By "inheriting" other thorns, we tell the Toolkit that we
# will rely on variables/functions that exist within those
# thorns.
inherits: Boundary grid MethodofLines
# Needed functions and #include's:
USES INCLUDE: Symmetry.h
USES INCLUDE: Boundary.h
# Needed Method of Lines function
CCTK_INT FUNCTION MoLRegisterEvolvedGroup(CCTK_INT IN EvolvedIndex, CCTK_INT IN RHSIndex)
REQUIRES FUNCTION MoLRegisterEvolvedGroup
# Needed Boundary Conditions function
CCTK_INT FUNCTION GetBoundarySpecification(CCTK_INT IN size, \
CCTK_INT OUT ARRAY nboundaryzones, \
CCTK_INT OUT ARRAY is_internal, \
CCTK_INT OUT ARRAY is_staggered, \
CCTK_INT OUT ARRAY shiftout)
USES FUNCTION GetBoundarySpecification
CCTK_INT FUNCTION SymmetryTableHandleForGrid(CCTK_POINTER_TO_CONST IN cctkGH)
USES FUNCTION SymmetryTableHandleForGrid
CCTK_INT FUNCTION Boundary_SelectVarForBC(CCTK_POINTER_TO_CONST IN GH, \
CCTK_INT IN faces, \
CCTK_INT IN boundary_width, \
CCTK_INT IN table_handle, \
CCTK_STRING IN var_name, \
CCTK_STRING IN bc_name)
USES FUNCTION Boundary_SelectVarForBC
# Needed for EinsteinEvolve/NewRad outer boundary condition driver:
CCTK_INT FUNCTION NewRad_Apply(CCTK_POINTER_TO_CONST IN cctkGH, \
CCTK_REAL ARRAY IN var, \
CCTK_REAL ARRAY INOUT rhs, \
CCTK_REAL IN var0, \
CCTK_REAL IN v0, \
CCTK_INT IN radpower)
REQUIRES FUNCTION NewRad_Apply
# Tell the Toolkit that we want all gridfunctions
# to be visible to other thorns by using
# the keyword "public". Note that declaring these
# gridfunctions *does not* allocate memory for them;
# that is done by the schedule.ccl file.
public:
CCTK_REAL evol_variables type = GF Timelevels=3
{
AU0GF,AU1GF,AU2GF,EU0GF,EU1GF,EU2GF,GammaGF,PhiGF
} "Maxwell evolved gridfunctions"
CCTK_REAL evol_variables_rhs type = GF Timelevels=1 TAGS='InterpNumTimelevels=1 prolongation="none"'
{
AU0_rhsGF,AU1_rhsGF,AU2_rhsGF,EU0_rhsGF,EU1_rhsGF,EU2_rhsGF,Gamma_rhsGF,Phi_rhsGF
} "right-hand-side storage for Maxwell evolved gridfunctions"
CCTK_REAL aux_variables type = GF Timelevels=3
{
CGF,GGF
} "Auxiliary gridfunctions for Maxwell diagnostics"
```
Writing MaxwellVacuum/interface.ccl
<a id='schedule_ccl'></a>
### Step 2.h.iv: Generating the `schedule.ccl` file \[Back to [top](#toc)\]
$$\label{schedule_ccl}$$
We will now generate the `schedule.ccl` file for the `MaxwellVacuum` thorn. This file specifies:
* Which variables/gridfunctions we need to allocate memory to
* What function calls are performed by the toolkit scheduler and the order in which these function calls happen
* Function scope (global or local)
* (Optional for now) which gridfunctions our functions read from and write to
Official documentation on constructing ETK `schedule.ccl` files is found [here](https://einsteintoolkit.org/usersguide/UsersGuide.html#x1-187000D2.4).
```python
%%writefile $Thorndir/schedule.ccl
# This schedule.ccl file was automatically generated by NRPy+.
# You are advised against modifying it directly; instead
# modify the Python code that generates it.
# Next allocate storage for all 3 gridfunction groups used in MaxwellVacuum
STORAGE: evol_variables[3] # Evolution variables
STORAGE: evol_variables_rhs[1] # Variables storing right-hand-sides
STORAGE: aux_variables[3] # Diagnostics variables
# The following scheduler is based on Lean/LeanMaxwellMoL/schedule.ccl
schedule MaxwellVacuum_Banner at STARTUP
{
LANG: C
OPTIONS: meta
} "Output ASCII art banner"
schedule MaxwellVacuum_Symmetry_registration at BASEGRID
{
LANG: C
OPTIONS: Global
} "Register symmetries, the CartGrid3D way."
schedule MaxwellVacuum_zero_rhss at BASEGRID after MaxwellVacuum_Symmetry_registration
{
LANG: C
} "Idea from Lean: set all rhs functions to zero to prevent spurious nans"
# MoL: registration
schedule MaxwellVacuum_MoL_registration in MoL_Register
{
LANG: C
OPTIONS: META
} "Register variables for MoL"
# MoL: compute RHSs, etc
schedule MaxwellVacuum_RHSs in MoL_CalcRHS as MaxwellVacuum_RHS
{
LANG: C
} "MoL: Evaluate Maxwell RHSs"
schedule MaxwellVacuum_NewRad in MoL_CalcRHS after MaxwellVacuum_RHS
{
LANG: C
} "NewRad boundary conditions, scheduled right after RHS eval."
schedule MaxwellVacuum_BoundaryConditions_evolved_gfs in MoL_PostStep
{
LANG: C
OPTIONS: LEVEL
SYNC: evol_variables
} "Apply boundary conditions and perform AMR+interprocessor synchronization"
schedule GROUP ApplyBCs as MaxwellVacuum_ApplyBCs in MoL_PostStep after MaxwellVacuum_BoundaryConditions_evolved_gfs
{
} "Group for applying boundary conditions"
# Compute divergence and Gamma constraints
schedule MaxwellVacuum_constraints in MoL_PseudoEvolution
{
LANG: C
OPTIONS: Local
} "Compute Maxwell (divergence and Gamma) constraints"
```
Writing MaxwellVacuum/schedule.ccl
<hr style="width:100%;height:3px;color:black"/>
<a id='code_validation'></a>
# Step 3: Part III - Code validation (Courtesy: Terrence Pierre Jacques) \[Back to [top](#toc)\]
$$\label{code_validation}$$
**Code tests adopting fourth-order finite differencing, coupled to 2nd order Iterative Crank-Nicholson method-of-lines for time integration**
Inside the directory *`test_results/`* are the files used for this convergence test:
**maxwell_toroidaldipole-0.125_OB4.par & maxwell_toroidaldipole-0.0625_OB4.par** : ETK parameter files needed for performing the tests. These parameter files set up a toroidal dipole field propagating in a 3D numerical grid that extends from -4. to +4. along the x-, y-, and z-axes (in units of $c=1$). The parameter files are identical, except the latter has grid resolution that is twice as high as the former (so the errors should drop in the higher resolution case by a factor of $2^2$, since we adopt fourth-order finite differencing coupled to 2nd order Iterative Crank-Nicholson time integration.)
**Second-order code validation test results:**
The plot below shows the discrepancy between numerical and exact solutions to the x-components of system I $\vec{E}$ and $\vec{A}$ at two different resolutions, at t = 2.0 (to not have errors at the boundary propagate too far inward): dashed is low resolution ($\Delta x_{\rm low}=0.125$) and solid is high resolution ($\Delta x_{\rm high}=0.0625$). Since this test adopts **fourth**-order finite differencing for spatial derivatives and **second**-order Iterative Crank-Nicholson timestepping, we would expect this error to drop by a factor of approximately $(\Delta x_{\rm low}/\Delta x_{\rm high})^2 = (0.125/0.0625)^2 = 2^2=4$ when going from low to high resolution, and after rescaling the error in the low-resolution case, we see that indeed it overlaps the high-resolution result quite nicely, confirming second-order convergence. We note that we also observe convergence for all other evolved variables (in both systems) with a nonzero exact solution.
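As a minimal illustration of the rescaling described above (not part of the actual test scripts; the arrays below are placeholders standing in for the measured errors), the factor-of-four check amounts to:
```python
# Illustrative sketch of the convergence rescaling; the error arrays are
# placeholders, NOT the actual test output.
import numpy as np

dx_low, dx_high = 0.125, 0.0625
expected_factor = (dx_low/dx_high)**2   # = 4 for second-order convergence

x = np.linspace(-4.0, 4.0, 65)          # grid along the x-axis
err_low  = 1.0e-3*np.exp(-x**2)         # stand-in for |numerical - exact| at low resolution
err_high = err_low/expected_factor      # stand-in for the high-resolution error

# Second-order convergence means err_low/expected_factor should overlap err_high.
print(np.max(np.abs(err_low/expected_factor - err_high)))
```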
```python
from IPython.display import Image
Image("test_results/Ex-convergence.png", width=700, height=700)
```
```python
Image("test_results/Ax-convergence.png", width=700, height=700)
```
Because System I is weakly hyperbolic (see [Tutorial-VacuumMaxwell_formulation_Cartesian](Tutorial-VacuumMaxwell_formulation_Cartesian.ipynb) for more discussion), zero speed error nodes of the constraint violation sit on our numerical grid, adding to the errors of our evolution variables. In contrast, System II is strongly hyperbolic, and the error nodes propagate away at the speed of light, leading to more stable evolution of the evolution variables. The plot below demonstrates the qualitative behavior for both systems.
Contrast these plots to Figure 1 in [Knapp, Walker, & Baumgarte (2002)](https://arxiv.org/abs/gr-qc/0201051); we observe excellent qualitative agreement.
```python
Image("test_results/constraintviolation.png")
```
<hr style="width:100%;height:3px;color:black"/>
<a id='latex_pdf_output'></a>
# Step 4: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
$$\label{latex_pdf_output}$$
The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
[ETK_Workshop_2021-NRPy_tutorial.pdf](ETK_Workshop_2021-NRPy_tutorial.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
```python
# Step 4: Generate a PDF version of this tutorial notebook
# Step 4.a: First copy the latex_nrpy_style.tplx from the
# nrpy_core directory to the base directory
src_file = os.path.join(nrpy_core_dir,"latex_nrpy_style.tplx")
dst_file = os.path.join(base_dir ,"latex_nrpy_style.tplx")
shutil.copyfile(src_file,dst_file)
# Step 4.b: Now generate the PDF file
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("ETK_Workshop_2021-NRPy_tutorial")
# Step 4.c: Clean up by removing the latex_nrpy_style.tplx from
# the base directory
cmd.delete_existing_files(dst_file)
```
Created ETK_Workshop_2021-NRPy_tutorial.tex, and compiled LaTeX file to PDF
file ETK_Workshop_2021-NRPy_tutorial.pdf
# **`PoissonGeometry`**
## **Downloading from PyPi**
Run the following:
```
!pip install poissongeometry==0.1.2
!pip install galgebra==0.4.3
```
To verify that the `PoissonGeometry` module was installed correctly, run:
```
def test_poissongeometry():
""" This method verifies if the module was installed correctly """
try:
import poisson
result = 'The module was installed correctly'
except:
result = 'The module was NOT installed correctly'
return result
test_poissongeometry()
```
**More information**: see [Github](https://github.com/appliedgeometry/poissongeometry) repository or [PyPi](https://pypi.org/project/poissongeometry/) page.
## **Preparing the Environment to Work with `PoissonGeometry`**
### Obtaining $\LaTeX$ format
With this code you can print the results of certain functions in `PoissonGeometry` with $\LaTeX$ typography:
```
import sympy
def custom_latex_printer(exp, **options):
from google.colab.output._publish import javascript
url = "https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.3/latest.js?config=default"
javascript(url=url)
return sympy.printing.latex(exp, **options)
sympy.init_printing(use_latex="mathjax", latex_printer=custom_latex_printer)
```
### $\LaTeX$ code
* To print the result of a function in $\LaTeX$ code we just need to add the flag `latex_format=True`, whose default value is `False`:
function_name(param_1, ..., param_n, latex_format=True)
### Syntax
* A scalar function in `PoissonGeometry` is written using *string literal expressions*.
For example, the function $f:\mathbb{R}^{3} \to \mathbb{R}$ given by
$$f(x_1,x_2,x_3) = ax_1^2 + bx_2^2 + cx_3^2, \quad a,b,c \in \mathbb{R}$$
should be written exactly as follows:
```
"a*x1**2 + b*x2**2 + c*x3**2"
```
Here, `x1, x2, x3` are symbolic variables that `PoissonGeometry` defines by default and that represent in this case the coordinates $(x_1,x_2,x_3)$.
**Remark.** All characters that are not local coordinates are treated as (symbolic) parameters: for example `a`, `b`, `c` above.
**Note.** Python supports the following basic operations:
| Expression | Description || Expression | Description |
| :--------: | ------------ || :--------: | -------------- |
| + | Addition || * | Multiplication |
| - | Subtraction || ** | Power |
| / | Division ||
* A multivector field or a differential form in `Poisson Geometry` is written using *dictionaries* with *tuples of integers* as **keys** and *string* type **values**.
For example, in $\mathbb{R}^{3}$:
* The vector field $x_1\frac{\partial}{\partial x_1} + x_2\frac{\partial}{\partial x_2} + x_3 \frac{\partial}{\partial x_3}$ should be written as \\
```{(1,):'x1', (2,):'x2', (3,):'x3'}```
**Note:** Commas on keys are important!
* The bivector field $x_1\frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_2} - x_2\frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_3} + x_3 \frac{\partial}{\partial x_2}\wedge \frac{\partial}{\partial x_3}$ should be written as \\
```{(1,2):'x1', (1,3):'-x2', (2,3):'x3'}```
* The 3-multivector field $x_1\frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_2}\wedge \frac{\partial}{\partial x_3}$ should be written as \\
```{(1,2,3):'x1'}```
**Remarks:**
1. In `Python`, a variable \\
```{key_1: value_1, ..., key_n: value_n}``` \\
is called a *dictionary*. Each `key_1,...,key_n` is called a *key* of the dictionary and each `value_1,...,value_n` is called the *value* of the corresponding key.
2. In our case, each key is a `tuple` and each value a `string` type variable.
When we have a multivector $A$ of degree $a$ on $\mathbb{R}^{m}$ $$A = \sum_{1 \leq i_1 < i_2 < \cdots < i_a \leq m} A^{i_1 i_2 \cdots i_a}\,\frac{\partial}{\partial{x_{i_1}}} \wedge \frac{\partial}{\partial{x_{i_2}}} \wedge \cdots \wedge \frac{\partial}{\partial{x_{i_a}}},$$ \\
the keys of the dictionary are tuples $(i_1,i_2,\ldots,i_a)$ corresponding to the ordered indices $i_1 i_2 \cdots i_a$ of $A^{i_1 i_2 \cdots i_a}$ and the values the corresponding string expression of the coefficient (scalar function) $A^{i_1 i_2 \cdots i_a}$.
**Note.** We can only write the keys and values of *non-zero coefficients*.
3. We can change the order of the indices in each `tuple` by adding a minus sign to the corresponding `string` value.
For example,
* The bivector field $x_1\frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_2}$ can be written as \\
```{(1,2): 'x1'}```
or as
```{(2,1): '-x1'}```
where this last dictionary corresponds to the bivector field $-x_1\frac{\partial}{\partial x_2}\wedge \frac{\partial}{\partial x_1}$.
**Note.** Although we have the option of disregarding the index order, we recommend not doing so, to avoid possible computation errors.
4. The syntax for differential forms is the same as for multivectors.
For example, the differential 2-form on $\mathbb{R}^4$ $$-\mathrm{d}x_{1} \wedge \mathrm{d}x_{2} - (x_1 + x_4)\mathrm{d}x_{3} \wedge \mathrm{d}x_{4}$$ is written as \\
```{(1,2):'-1', (3,4): '-(x1 + x4)'}```
5. Finally, in `PoissonGeometry` we use the following notation:
* `Dxi` is equivalent to $\frac{\partial}{\partial x_{i}}$.
* `dxi` is equivalent to $\mathrm{d}x_{i}$.
This assignment is given because in `SymPy` it's not possible to define variables $\frac{\partial}{\partial x_{i}}$ or $\mathrm{d}x_{i}$.
## **Testing the `PoissonGeometry` Class**
### Instantiating and knowing the Class
First, it is necessary to instantiate the class. For this, we must tell `PoissonGeometry` the dimension and the symbol used to name the coordinates.
For example, if we want to work in dimension 4 and use $z$ to name the coordinates:
```
# We import the class and give it a short name for simplicity.
from poisson.poisson import PoissonGeometry as pg
# We declare the variables and the dimension
pg4 = pg(4, variable="z")
```
**Remark:** By default, `variable`=`"x"`.
To know the dimension in which we are working write:
```
pg4.dim
```
To know the (current) coordinates write:
```
pg4.coordinates
```
In addition, `PoissonGeometry` builds a basis $\left\{\frac{\partial}{\partial x_{1}},...,\frac{\partial}{\partial x_{n}}\right\}$ of vector fields. In our running example, for $n=4$.
```
pg4.Dx_basis
```
Moreover, it's possible to operate on this basis. For example, to compute wedge products:
```
Dz1, Dz2, Dz3, Dz4 = pg4.Dx_basis
print(F'Wedge product of Dz1 with Dz2: {Dz1 ^ Dz2}') # The wedge product in Galgebra is denoted with the symbol ^
print(F'Wedge product of Dz1 with Dz1: {Dz1 ^ Dz1}')
```
The following operations can be carried out in `Galgebra`:
| Expression | Description | Expression | Description |
| :----------: | ---------- | :----------: | ---------- |
| * | Multiplication | ^ | Wedge Product |
| + | Addition | - | Subtraction |
## **Functions of `PoissonGeometry`**
###Function: `bivector_to_matrix`
COMPUTES THE MATRIX OF A BIVECTOR FIELD
For example, the matrix of the bivector field on $\mathbb{R}^4$
$$\Pi = x_3\frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_2} - x_2\frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_3} + x_1 \frac{\partial}{\partial x_2}\wedge \frac{\partial}{\partial x_3},$$
can be computed as follows:
```
from poisson.poisson import PoissonGeometry
pg4 = PoissonGeometry(4)
# We entered the bivector field
bivector = {(1,2): 'x3', (1,3): '-x2', (2,3): 'x1'}
pg4.bivector_to_matrix(bivector)
```
**Remember.** If we want the $\LaTeX$ code of the previous matrix we just need to add the flag `latex_format=True`, whose default value is `False`:
```
print(pg4.bivector_to_matrix(bivector, latex_format=True))
```
* The 'power' of the `latex_format`: you can just copy and paste the result into a `.tex` file
###Function: `sharp_morphism`
COMPUTES THE IMAGE OF A DIFFERENTIAL 1-FORM UNDER THE VECTOR BUNDLE MORPHISM 'SHARP' INDUCED BY A BIVECTOR FIELD
For example, consider the Lie-Poisson bivector field on
$\mathbb{R}^{3}$
$$\Pi = x_3\frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_2} - x_2\frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_3} + x_1 \frac{\partial}{\partial x_2}\wedge \frac{\partial}{\partial x_3},$$
associated to the Lie algebra $\mathfrak{so}(3)$ [4], and the differential 1-form
$$\alpha = x_1 \mathrm{d}x_{1} + x_{2} \mathrm{d}x_{2} + x_{3} \mathrm{d}x_{3}.$$
To compute *$\Pi^{\natural}(\alpha)$*, run:
```
from poisson.poisson import PoissonGeometry
pg3 = PoissonGeometry(3)
# We entered the bivector field and the 1-form.
bivector = {(1,2): 'x3', (1,3): '-x2', (2,3): 'x1'}
alpha = {(1,): 'x1', (2,): 'x2', (3,): 'x3'}
pg3.sharp_morphism(bivector, alpha)
```
Therefore, $\Pi^{\natural}(\alpha)=0$.
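Since $\Pi^{\natural}(\alpha)=0$, the 1-form $\alpha$ lies in the kernel of $\Pi$. As a quick cross-check one could also call the `is_in_kernel` function described further below (a sketch; it should simply confirm the result):
```
pg3.is_in_kernel(bivector, alpha)
```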
###Function: `hamiltonian_vf`
COMPUTES THE HAMILTONIAN VECTOR FIELD OF A SCALAR FUNCTION WITH RESPECT TO A POISSON BIVECTOR FIELD
For example, consider the Poisson bivector field on $\mathbb{R}^{6}$,
$$\Pi = \frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_4} + \frac{\partial}{\partial x_2}\wedge \frac{\partial}{\partial x_5} + \frac{\partial}{\partial x_3}\wedge \frac{\partial}{\partial x_6},$$ and the function
$$h = -\frac{1}{x_{2}-x_{1}}-\frac{1}{x_{3}-x_{1}}-\frac{1}{x_{3}-x_{2}} +\frac{1}{2} (x_{4}^{2} + x_{5}^{2} + x_{6}^{2}).$$
The Hamiltonian vector field of $h$ with respect to $\Pi$ is given by
\begin{align}
X_{h} &= - x_4\frac{\partial}{\partial{x_1}}- x_5\frac{\partial}{\partial{x_2}} - x_6\frac{\partial}{\partial{x_3}} + \left[ \frac{1}{(x_1-x_3)|x_1-x_3|} + \frac{1}{(x_1-x_2)|x_1-x_2|} \right]\frac{\partial}{\partial{x_4}} \\
&+ \left[ \frac{1}{(x_2-x_3)|x_2-x_3|} + \frac{1}{(x_1-x_2)|x_1-x_2|} \right]\frac{\partial}{\partial{x_5}} -\left[ \frac{1}{(x_2-x_3)|x_2-x_3|} + \frac{1}{(x_1-x_3)|x_1-x_3|} \right]\frac{\partial}{\partial{x_6}}.
\end{align}
This is the vector field associated with the Hamiltonian system of a particular case of the three-body problem [3].
To compute this vector field with `PoissonGeometry`, run:
```
# This module is for Python readable printing
import pprint
pp = pprint.PrettyPrinter(indent=2)
# Instantiate the PoissonGeometry class
from poisson.poisson import PoissonGeometry
pg6 = PoissonGeometry(6)
bivector = {(1,4): '1', (2,5): '1', (3,6): '1'}
h = '- 1/sqrt((x2 - x1)**2) - 1/sqrt((x3 - x1)**2) - 1/sqrt((x3 - x2)**2)+ 1/2*(x4**2 + x5**2 + x6**2)'
pp.pprint(pg6.hamiltonian_vf(bivector, h))
```
We must remember that we declared `Dxi` $\equiv \frac{\partial}{\partial x_{i}}$.
### Function: `lichnerowicz_poisson_operator`
COMPUTES THE IMAGE OF A MULTIVECTOR FIELD UNDER THE COBOUNDARY OPERATOR INDUCED BY A POISSON BIVECTOR FIELD
RELATIVE TO THE SCHOUTEN-NIJENHUIS BRACKET FOR MULTIVECTOR FIELDS
Consider the bivector field on $\mathbb{R}^{3}$
$$\Pi = x_{1}\frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_2} - ax_{1}x_{3} \frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_3} + x_3(2x_{1} - ax_{2}) \frac{\partial}{\partial x_2}\wedge \frac{\partial}{\partial x_3},$$ and the 3-multivector field $$A = (bx_{2}^{2}x_{3} + c)\frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_2}\wedge \frac{\partial}{\partial x_3}.$$
Computing the Schouten-Nijenhuis bracket of two multivector fields is a complicated affair. The `PoissonGeometry` class can do this computation very quickly:
```
from poisson.poisson import PoissonGeometry
pg3 = PoissonGeometry(3)
P = {(1,2): 'x1**2', (1,3): '-a*x1*x3', (2,3): 'x3*(2*x1 - a*x2)'}
A = {(1,2,3): 'b*x2**2*x3 + c'}
pg3.lichnerowicz_poisson_operator(P, A)
```
Therefore, $A$ is a cocycle of $\Pi$. The formal cohomology group of degree 3 of the Lichnerowicz-Poisson complex of $\Pi$ is given by (see [1]):
$$\mathscr{H}^{3}(\Pi) \simeq\ \mathbb{R} \cdot \frac{\partial}{\partial{x_{1}}} \wedge \frac{\partial}{\partial{x_{2}}} \wedge \frac{\partial}{\partial{x_{3}}} \bigoplus \mathbb{R} \cdot x_{2}^2x_{3}\,\frac{\partial}{\partial{x_{1}}} \wedge \frac{\partial}{\partial{x_{2}}} \wedge \frac{\partial}{\partial{x_{3}}}.$$
### Function: `curl_operator`
Consider the following (Flaschka-Ratiu) Poisson bivector field on $\mathbb{R}^6$,
$$\Pi = x_{1}x_{2}\frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_2} - x_{1}x_{3} \frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_3} + x_2 x_3 \frac{\partial}{\partial x_2}\wedge \frac{\partial}{\partial x_3} + \frac{\partial}{\partial x_3}\wedge \frac{\partial}{\partial x_4} - \frac{\partial}{\partial x_3}\wedge \frac{\partial}{\partial x_5} + \frac{\partial}{\partial x_4}\wedge \frac{\partial}{\partial x_6}.$$
With `curl_operator` we can calculate the divergence of $\Pi$ with respect to the volume form $f\Omega_{0}$ on $\mathbb{R}^6$, where $\Omega_{0}$ is the Euclidean volume form and $f$ is a nowhere-vanishing function:
```
from poisson.poisson import PoissonGeometry
pg6 = PoissonGeometry(6)
bivector = {(1,2): 'x1*x2', (1,3): '-x1*x3', (2,3): 'x2*x3', (3,4): '1', (3,5): '-1', (4,6): '1'}
pg6.curl_operator(bivector, 1)
```
So the divergence of $\Pi$ is trivial.
__Remark.__ The parameter `1` in `curl_operator` means that $f \equiv 1$; therefore the divergence is taken with respect to the Euclidean volume form $\Omega_{0}$ on $\mathbb{R}^6$.
### Function: `poisson_bracket`
COMPUTES THE POISSON BRACKET, INDUCED BY A POISSON BIVECTOR FIELD, OF TWO SCALAR FUNCTIONS
For example, consider the Lie-Poisson bivector field on $\mathbb{R}^{3}$,
$$\Pi = -x_3\frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_2} - x_2\frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_3} + x_1 \frac{\partial}{\partial x_2}\wedge \frac{\partial}{\partial x_3},$$
associated to the Lie algebra $\mathfrak{sl}(2)$ [7]. It is well known that $\{x_{1},x_{2}\}_{\Pi} = -x_{3}$, $\{x_{2},x_{3}\}_{\Pi} = x_{1}$ and $\{x_{3},x_{1}\}_{\Pi} = x_{2}$ are the commutation relations of this Lie algebra:
```
from poisson.poisson import PoissonGeometry
pg3 = PoissonGeometry(3)
bivector ={(1,2): '-x3', (1,3): '-x2', (2,3): 'x1'}
x1_x2 = pg3.poisson_bracket(bivector, 'x1', 'x2')
x2_x3 = pg3.poisson_bracket(bivector, 'x2', 'x3')
x3_x1 = pg3.poisson_bracket(bivector, 'x3', 'x1')
print(F'{{x1, x2}} = {x1_x2}')
print(F'{{x2, x3}} = {x2_x3}')
print(F'{{x3, x1}} = {x3_x1}')
```
###Function: `modular_vf`
Consider the bivector field on $\mathbb{R}^{4}$,
$$\Pi = 2x_{4}\frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_3} + 2x_{3} \frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_4} - 2x_{4} \frac{\partial}{\partial x_2}\wedge \frac{\partial}{\partial x_3} + 2x_{3} \frac{\partial}{\partial x_2}\wedge \frac{\partial}{\partial x_4} + (x_{1}-x_{2}) \frac{\partial}{\partial x_3}\wedge \frac{\partial}{\partial x_4}.$$ This bivector field is a particular case of a family of Poisson bivector fields that arise in the analysis of the orbital stability of the Pais-Uhlenbeck oscillator.
* The function `modular_vf` computes the modular vector field of $\Pi$ with respect to a volume form $f\Omega_{0}$ on $\mathbb{R}^4$ (here $f$ is a nowhere-vanishing function and $\Omega_{0}$ is the Euclidean volume form on $\mathbb{R}^4$):
```
from poisson.poisson import PoissonGeometry
pg4 = PoissonGeometry(4)
bivector ={(1,3):'2*x4', (1,4): '2*x3', (2,3): '-2*x4', (2,4): '2*x3', (3,4):'x1-x2'}
pg4.modular_vf(bivector, 1)
```
So in this case the modular vector field of $\Pi$ with respect to the Euclidean volume form is trivial.
__Note__: The second argument (here `1`, as in `curl_operator`) can take any nowhere-vanishing function $f$, in order to modify the volume form.
For example, with the bivector field on $\mathbb{R}^3$, $$\Pi = x_{3}\frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_2} -x_{2} \frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_3} + x_{1}\frac{\partial}{\partial x_2}\wedge \frac{\partial}{\partial x_3},$$
and the function $f(x_{1},x_{2},x_{3}) = \mathrm{exp}(x_1 + x_2 + x_3)$, let's calculate the modular vector field of $\Pi$ with respect to the volume form $f\Omega_{0}$:
```
from poisson.poisson import PoissonGeometry
pg3 = PoissonGeometry(3)
bivector = {(1,2): 'x3', (1,3): '-x2', (2,3): 'x1'}
function = 'exp(x1 + x2 + x3)'
pg3.modular_vf(bivector, function)
```
Therefore the modular vector field of $\Pi$ with respect to the volume form $f\Omega_{0}$ is given by
\begin{equation*}
(x_{3} - x_{2})\frac{\partial}{\partial{x_{1}}} + (x_{1} - x_{3})\frac{\partial}{\partial{x_{2}}} + (x_{2} - x_{1})\frac{\partial}{\partial{x_{3}}}.
\end{equation*}
###Function: `flaschka_ratiu_bivector`
COMPUTES THE FLASCHKA-RATIU BIVECTOR FIELD AND THE CORRESPONDING SYMPLECTIC FORM OF A 'MAXIMAL' SET OF SCALAR FUNCTIONS
For example, consider the functions
$$f(x_1, x_2, x_3, x_4) = x_4$$
and
$$g(x_1, x_2, x_3, x_4) = -x_1^2 + x_2^2 + x_3^2$$
which locally describe a 'broken' singularity of a Lefschetz foliation on a 4-dimensional manifold [6].
We can construct a Poisson bivector field $\Pi$ in such a way that the functions $f$ and $g$ are Casimirs of $\Pi$:
```
from poisson.poisson import PoissonGeometry
pg4 = PoissonGeometry(4)
casimirs = ['x4', '-x1**2 + x2**2 + x3**2']
pg4.flaschka_ratiu_bivector(casimirs)
```
To obtain the symplectic form of $\Pi$ on 2-dimensional leaves, we add the flag `symplectic_form=True`
```
bivector, symplectic_form = pg4.flaschka_ratiu_bivector(casimirs, symplectic_form=True)
print(f'Poisson bivector field: {bivector}')
print(f'Symplectic form: {symplectic_form}')
```
###Function: `linear_normal_form_R3`
COMPUTES A NORMAL FORM OF ANY LIE-POISSON BIVECTOR FIELD ON R^3
For example, consider the Lie-Poisson bivector field on $\mathbb{R}^{3}$,
$$\Pi = -10x_{3}\frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_2} +10x_{2} \frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_3} -10x_{1}\frac{\partial}{\partial x_2}\wedge \frac{\partial}{\partial x_3}$$
To compute a normal form of $\Pi$, run:
```
from poisson.poisson import PoissonGeometry
pg3 = PoissonGeometry(3)
bivector = {(1,2): '-10*x3', (1,3): '10*x2', (2,3): '-10*x1'}
pg3.linear_normal_form_R3(bivector)
```
Note that $\Pi$ is a multiple ($-10$) of the Lie-Poisson bivector field on $\mathbb{R}^{3}$ induced by the Lie algebra $\mathfrak{so}(3)$. Therefore, they belong to the same equivalence class.
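As a consistency check (a sketch; the expected answer simply confirms the equivalence just stated), one could compare $\Pi$ directly with the standard $\mathfrak{so}(3)$ bivector field using the `isomorphic_lie_poisson_R3` function described below:
```
so3_bivector = {(1,2): 'x3', (1,3): '-x2', (2,3): 'x1'}
pg3.isomorphic_lie_poisson_R3(bivector, so3_bivector)
```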
### Function: `one_forms_bracket`
COMPUTES THE LIE BRACKET OF TWO DIFFERENTIAL 1-FORMS INDUCED BY A POISSON BIVECTOR FIELD
For example, consider the following bivector on $\mathbb{R}^{4}$,
$$\Pi = \big( x_3^2 + x_4^2 \big)\frac{\partial}{\partial{x_{1}}} \wedge \frac{\partial}{\partial{x_{2}}}
+ \big( x_2x_3 - x_1x_4\big)\frac{\partial}{\partial{x_{1}}} \wedge \frac{\partial}{\partial{x_{3}}}
- \big( x_1x_3 + x_2x_4\big)\frac{\partial}{\partial{x_{1}}} \wedge \frac{\partial}{\partial{x_{4}}}
+ \big( x_1x_3 + x_2x_4\big)\frac{\partial}{\partial{x_{2}}} \wedge \frac{\partial}{\partial{x_{3}}}
+ \big( x_2x_3 - x_1x_4\big)\frac{\partial}{\partial{x_{2}}} \wedge \frac{\partial}{\partial{x_{4}}}
+ \big( x_1^2 + x_2^2 \big)\frac{\partial}{\partial{x_{3}}} \wedge \frac{\partial}{\partial{x_{4}}},$$
and the 1-forms
$$\alpha = x_1\mathrm{d}{x_1} - x_2\mathrm{d}{x_2} + x_3\mathrm{d}{x_3} - x_4\mathrm{d}{x_4} \quad \mathrm{and} \quad \beta = x_2\mathrm{d}{x_1} + x_1\mathrm{d}{x_2} + x_4\mathrm{d}{x_3} + x_3\mathrm{d}{x_4}$$.
To compute the Lie bracket, induced by $\Pi$, of $\alpha$ and $\beta$, run:
```
from poisson.poisson import PoissonGeometry
pg4 = PoissonGeometry(4)
bivector ={(1,2): 'x3**2 + x4**2', (1,3): 'x2*x3 - x1*x4', (1,4): '-x1*x3 - x2*x4', (2,3): 'x1*x3 + x2*x4', (2,4): 'x2*x3 - x1*x4', (3,4): 'x1**2 + x2**2'}
alpha = {(1,): 'x1', (2,): '-x2', (3,): 'x3', (4,): '-x4'}
beta = {(1,): 'x2', (2,): 'x1', (3,): 'x4', (4,): 'x3'}
pg4.one_forms_bracket(bivector, alpha, beta)
```
So $\{\alpha, \beta\}_{\Pi} = 0$.
## **Applications**
### Function: `gauge_transformation`
COMPUTES THE GAUGE TRANSFORMATION OF A BIVECTOR FIELD WITH RESPECT TO A GIVEN DIFFERENTIAL 2-FORM
For example, consider an arbitrary bivector field on $\mathbb{R}^3$,
$$\Pi=\Pi_{12} \frac{\partial}{\partial x_{1}} \wedge \frac{\partial}{\partial x_{2}} + \Pi_{13} \frac{\partial}{\partial x_{1}} \wedge \frac{\partial}{\partial x_{3}} + \Pi_{23} \frac{\partial}{\partial x_{2}} \wedge \frac{\partial}{\partial x_{3}}$$
and a differential $2$-form,
$$\lambda = \lambda_{12} \mathrm{d}x_{1}\wedge \mathrm{d}x_{2} + \lambda_{13} \mathrm{d}x_{1}\wedge \mathrm{d}x_{3} + \lambda_{23} \mathrm{d}x_{2}\wedge \mathrm{d}x_{3}$$
To compute the gauge transformation of $\Pi$ induced by $\lambda$, run:
```
import pprint
pp = pprint.PrettyPrinter(indent=2)
from poisson.poisson import PoissonGeometry
pg3 = PoissonGeometry(3)
P = {(1,2): 'P12', (1,3): 'P13', (2,3): 'P23'}
L = {(1,2): 'L12', (1,3): 'L13', (2,3): 'L23'}
gauge_bivector, determinant = pg3.gauge_transformation(P, L)
print('L-gauge transformation of P:')
pp.pprint(gauge_bivector)
print(f'\nIt\'s well-defined on the open subset \n {{{determinant} != 0}} \n of R^3')
```
* Therefore we obtain [5]:
**Proposition 3.1** Let $\Pi$ be a bivector field on a 3-dimensional smooth manifold $M$. Then, given a differential 2-form $\lambda$ on $M$, the $\lambda$-gauge transformation $\overline{\Pi}$ of $\Pi$ is well defined on the open subset of $M$
\begin{equation*}
\{F := \big\langle \lambda,\Pi \big\rangle + 1 \neq 0 \} \subseteq M.
\end{equation*}
Moreover, $\overline{\Pi}$ is given by
\begin{equation*}
\overline{\Pi} = \tfrac{1}{F}\Pi.
\end{equation*}
If $\Pi$ is a Poisson bivector field and $\lambda$ is closed along the leaves of $\Pi$, then $\overline{\Pi}$ is also Poisson.
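As a concrete, illustrative check of Proposition 3.1, one can apply `gauge_transformation` to the $\mathfrak{so}(3)$ Lie-Poisson bivector field with the constant 2-form $\lambda = \mathrm{d}x_1 \wedge \mathrm{d}x_2$; by the proposition the result should be $\Pi$ divided by $F = \langle \lambda, \Pi \rangle + 1$ (here $x_3 + 1$, up to the pairing convention used by the package):
```
from poisson.poisson import PoissonGeometry
pg3 = PoissonGeometry(3)
P = {(1,2): 'x3', (1,3): '-x2', (2,3): 'x1'}
L = {(1,2): '1'}
gauge_bivector, determinant = pg3.gauge_transformation(P, L)
print(gauge_bivector)
print(determinant)
```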
### Function: `jacobiator`
COMPUTES THE SCHOUTEN-NIJENHUIS BRACKET OF A BIVECTOR FIELD WITH ITSELF
For example, consider the following $4$-parametric bivector field on $\mathbb{R}^{4}$
$$ \Pi=a_1 x_2 \frac{\partial}{\partial x_{1}} \wedge \frac{\partial}{\partial x_{2}} + a_2 x_3 \frac{\partial}{\partial x_{1}} \wedge \frac{\partial}{\partial x_{3}} + a_3 x_4 \frac{\partial}{\partial x_{1}} \wedge \frac{\partial}{\partial x_{4}} + a_4 x_1 \frac{\partial}{\partial x_{2}} \wedge \frac{\partial}{\partial x_{3}}, $$
We can use the `jacobiator` function to determine for which parameter values it yields Poisson bivector fields on $\mathbb{R}^{4}$:
```
from poisson.poisson import PoissonGeometry
pg4 = PoissonGeometry(4)
bivector = {(1,2): 'a1*x2', (1,3): 'a2*x3', (1,4): 'a3*x4', (2,3): 'a4*x1'}
pg4.jacobiator(bivector)
```
Therefore,
\begin{equation*}
[\hspace{-0.065cm}[ \Pi,\Pi ]\hspace{-0.065cm}] = -2a_{4}(a_{1}+a_{2})x^1\,\frac{\partial}{\partial{x^{1}}} \wedge \frac{\partial}{\partial{x^{2}}} \wedge \frac{\partial}{\partial{x^{3}}} - 2a_{3}a_{4}x^4\frac{\partial}{\partial{x^{2}}} \wedge \frac{\partial}{\partial{x^{3}}} \wedge \frac{\partial}{\partial{x^{4}}}.
\end{equation*}
Hence, we have two cases, explained in the following lemma [5].
**Lemma 3.2** If $a_{4}=0$, then $\Pi$ determines a 3-parametric family of Poisson bivector fields on $\mathbb{R}^{4}_{x}$:
\begin{equation}
\Pi \,=\, a_{1}x^2\,\frac{\partial}{\partial{x^{1}}} \wedge \frac{\partial}{\partial{x^{2}}}
+ a_{2}x^3\,\frac{\partial}{\partial{x^{1}}} \wedge \frac{\partial}{\partial{x^{3}}}
+ a_{3}x^4\,\frac{\partial}{\partial{x^{1}}} \wedge \frac{\partial}{\partial{x^{4}}}.
\end{equation}
If $a_{2}=-a_{1}$ and $a_{3}=0$, then $\Pi$ determines a 2-parametric family of Poisson bivector fields on $\mathbb{R}^{4}_{x}$:
\begin{equation}
\Pi \,=\, a_{1}x^2\,\frac{\partial}{\partial{x^{1}}} \wedge \frac{\partial}{\partial{x^{2}}} - a_{1}x^3\,\frac{\partial}{\partial{x^{1}}} \wedge \frac{\partial}{\partial{x^{3}}} + a_{4}x^1\,\frac{\partial}{\partial{x^{2}}} \wedge \frac{\partial}{\partial{x^{3}}}.
\end{equation}
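To double-check the first case of Lemma 3.2, one may drop the $a_{4}$ term from the dictionary and confirm that the Schouten-Nijenhuis bracket vanishes (a quick sketch reusing the same `pg4` instance):
```
bivector_a4_zero = {(1,2): 'a1*x2', (1,3): 'a2*x3', (1,4): 'a3*x4'}
pg4.jacobiator(bivector_a4_zero)
```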
## **'Test-Type' Functions**
Functions which allow us to verify whether a given geometric object on a Poisson manifold satisfies a certain property.
### Function: `is_homogeneous_unimodular`
VERIFIES WHETHER A HOMOGENEOUS POISSON BIVECTOR FIELD ON R^m IS UNIMODULAR, OR NOT
For example, consider the Poisson bivector field on $\mathbb{R}^{4}$,
$$\Pi = 2x_{4}\frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_3} + 2x_{3} \frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_4} - 2x_{4} \frac{\partial}{\partial x_2}\wedge \frac{\partial}{\partial x_3} + 2x_{3} \frac{\partial}{\partial x_2}\wedge \frac{\partial}{\partial x_4} + (x_{1}-x_{2}) \frac{\partial}{\partial x_3}\wedge \frac{\partial}{\partial x_4}.$$
This Poisson bivector field arises in the analysis of the orbital stability of the Pais-Uhlenbeck oscillator on $\mathbb{R}^{4}$ [2]. As we saw above, $\Pi$ has a trivial modular vector field relative to the Euclidean volume form on $\mathbb{R}^{4}$, so it is a unimodular Poisson bivector field on $\mathbb{R}^{4}$. We can verify this with:
```
from poisson.poisson import PoissonGeometry
pg4 = PoissonGeometry(4)
P ={(1,3): '-2*x4', (1,4): '2*x3', (2,3): '-2*x4', (2,4): '2*x3', (3,4): 'x1 + x2'}
pg4.is_homogeneous_unimodular(P)
```
### Function: `isomorphic_lie_poisson_R3`
VERIFIES WHETHER TWO LIE-POISSON BIVECTOR FIELDS ON R^3 ARE ISOMORPHIC, OR NOT
For example, the bivector fields
$$\Pi_{1} = x_{3}\frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_2} -x_{2} \frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_3} + x_{1}\frac{\partial}{\partial x_2}\wedge \frac{\partial}{\partial x_3},$$
and
$$\Pi_{2} = -x_{3}\frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_2} -x_{2} \frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_3} + x_{1}\frac{\partial}{\partial x_2}\wedge \frac{\partial}{\partial x_3},$$ \\
which are induced by the Lie algebras $\mathfrak{so}(3)$ and $\mathfrak{sl}(2)$, respectively, are NOT isomorphic:
```
from poisson.poisson import PoissonGeometry
pg3 = PoissonGeometry(3)
P1 ={(1,2): 'x3', (1,3): '-x2', (2,3): 'x1'}
P2 ={(1,2): '-x3', (1,3): '-x2', (2,3): 'x1'}
pg3.isomorphic_lie_poisson_R3(P1, P2)
```
### Function: `is_poisson_bivector`
VERIFIES IF A GIVEN BIVECTOR FIELD IS A POISSON BIVECTOR FIELD OR NOT
For example, we can verify that the bivector field on $\mathbb{R}^{4}$,
$$\Pi = x_{2}\frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_2} + x_{3} \frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_3} + x_{4} \frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_4} + x_{1} \frac{\partial}{\partial x_2}\wedge \frac{\partial}{\partial x_3}$$
is NOT a Poisson bivector field:
```
from poisson.poisson import PoissonGeometry
pg4 = PoissonGeometry(4)
bivector = {(1,2): 'x2', (1,3): 'x3', (1,4): 'x4', (2,3): 'x1'}
pg4.is_poisson_bivector(bivector)
```
###Function: `is_in_kernel`
VERIFIES WHETHER A DIFFERENTIAL 1-FORM BELONGS TO THE KERNEL OF A (POISSON) BIVECTOR FIELD
For example, for the quadratic Flaschka-Ratiu bivector field on $\mathbb{R}^{4}$ [6]
\begin{align*}
\Pi &= (x_{3}^{2}+x_{4}^{2})\frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_2} + (x_{2}x_{3} - x_{1}x_{4}) \frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_3} - (x_{1}x_{3} + x_{2}x_{4}) \frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_4} + (x_{1}x_{3} + x_{2}x_{4}) \frac{\partial}{\partial x_2}\wedge \frac{\partial}{\partial x_3} \\ &+ (x_{2}x_{3} - x_{1}x_{4}) \frac{\partial}{\partial x_2}\wedge \frac{\partial}{\partial x_4} + (x_{1}^{2} + x_{2}^{2}) \frac{\partial}{\partial x_3}\wedge \frac{\partial}{\partial x_4},
\end{align*}
we can verify that the differential 1-form $\alpha = x_{1}\mathrm{d}x_{1} - x_{2}\mathrm{d}x_{2} + x_{3}\mathrm{d}x_{3} - x_{4}\mathrm{d}x_{4}$ belongs to the kernel of $\Pi$. In other words, that $\Pi^{\#}(\alpha)=0$:
```
from poisson.poisson import PoissonGeometry
pg4 = PoissonGeometry(4)
bivector = {(1,2): 'x3**2 + x4**2', (1,3): 'x2*x3 - x1*x4', (1,4): '-x1*x3 - x2*x4',
(2,3): 'x1*x3 + x2*x4', (2,4): 'x2*x3 - x1*x4', (3,4): 'x1**2 + x2**2'}
alpha ={(1,): 'x1', (2,): '-x2', (3,): 'x3', (4,): '-x4'}
pg4.is_in_kernel(bivector, alpha)
```
### Functions: `is_casimir` and `is_poisson_vf`
GIVEN A POISSON BIVECTOR FIELD P, WITH THESE FUNCTIONS WE CAN VERIFY WHETHER A SCALAR FUNCTION IS A CASIMIR FUNCTION OF P
OR WHETHER A VECTOR FIELD IS A POISSON VECTOR FIELD FOR P, RESPECTIVELY
Consider the Lie-Poisson bivector field on $\mathbb{R}^{3}$
$$\Pi = -x_3\frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_2} - x_2\frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_3} + x_1 \frac{\partial}{\partial x_2}\wedge \frac{\partial}{\partial x_3},$$
associated to the Lie algebra $\mathfrak{sl}(2)$ and a Casimir function $K$ of $\Pi$ given by
$$K = x_{1}^{2} + x_{2}^{2} - x_{3}^{2}$$:
```
from poisson.poisson import PoissonGeometry
pg3 = PoissonGeometry(3)
bivector = {(1,2): '-x3', (1,3): '-x2', (2,3): 'x1'}
K = 'x1**2 + x2**2 - x3**2'
pg3.is_casimir(bivector, K)
```
This verifies that $K$ is indeed a Casimir function of $\Pi$. Now consider the function $f:\mathbb{R}\to\mathbb{R}$ defined by
$$f(t) := \left\{
\begin{array}{ll}
e^{-\frac{1}{t^2}} & \hbox{if} \quad t>0, \\
0 & \hbox{otherwise.}
\end{array}
\right.$$
Let's define a smooth function $F$ by $F:=f\circ K$. Then the vector field
$$ W := \frac{x_1 F}{x_1^2 + x_2^2}\,\frac{\partial}{\partial{x_1}} \,+\, \frac{x_2 F}{x_1^2 + x_2^2} \frac{\partial}{\partial{x_2}}$$
is a Poisson vector field of $\Pi$:
```
from poisson.poisson import PoissonGeometry
pg3 = PoissonGeometry(3)
bivector = {(1,2): '-x3', (1,3): '-x2', (2,3): 'x1'}
W = {(1,): 'x1*exp(-1/(x1**2 + x2**2 - x3**2))/(x1**2 + x2**2)', (2,): 'x2*exp(-1/(x1**2 + x2**2 - x3**2))/(x1**2 + x2**2)', (3,): 0}
pg3.is_poisson_vf(bivector, W)
```
Notice that $W$ **IS NOT** a **Hamiltonian** field of $\Pi$ [8].
### Function: `is_poisson_pair`
VERIFIES WHETHER A COUPLE OF POISSON BIVECTOR FIELDS FORM A POISSON PAIR.
Consider
$$ \Pi = ax_1 x_2\frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_2} - b x_1 x_3\frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_3} + b x_2 x_3 \frac{\partial}{\partial x_2}\wedge \frac{\partial}{\partial x_3}, $$
and
$$ \Psi = x_3^{2} \frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_2}.$$
Then:
```
from poisson.poisson import PoissonGeometry
pg3 = PoissonGeometry(3)
Pi = {(1,2): 'a*x1*x2', (1,3): '-b*x1*x3', (2,3): 'b*x2*x3'}
Psi = {(1,2): 'x3**2'}
pg3.is_poisson_pair(Pi, Psi)
```
Therefore, $\Pi$ and $\Psi$ form a Poisson pair.
## __Bibliography__
[1] M. Ammar, G. Kass, M. Masmoudi, N. Poncin, *Strongly R-Matrix Induced Tensors, Koszul Cohomology, and Arbitrary-Dimensional Quadratic Poisson Cohomology*, Pac. J. Math. 245, 1-23 (2010)
[2] M. Avendaño-Camacho, J. A. Vallejo and Yu. Vorobiev, *A Perturbation Theory Approach to the Stability of the Pais-Uhlenbeck Oscillator*, J. Math. Phys. 58, (2017)
[3] P. G. Breen, C. N. Foley, T. Boekholt, S. P. Zwart, *Newton vs the Machine: Solving the Chaotic Three-Body Problem Using Deep Neural Networks*, arXiv:1910.07291 [astro-ph.GA]
[4] J. P. Dufour and N. T. Zung, *Poisson Structures and their Normal Forms*, Progress in Mathematics, 242, Birkhäuser Verlag, Basel, (2005)
[5] M. Evangelista-Alvarado, J. C. Ruíz-Pantaleón, P. Suárez-Serrato, *On Computational Poisson Geometry I: Symbolic Foundations*
[6] L. C. Garcia-Naranjo, P. Suárez-Serrato and R. Vera, *Poisson Structures on Smooth 4-manifolds*, Lett. Math. Phys. 105, 1533-1550 (2015)
[7] C. Laurent-Gengoux, A. Pichereau and P. Vanhaecke, *Poisson Structures*,
Grundlehren der mathematischen Wissenschaften, 347, Springer-Verlag Berlin Heidelberg, (2013)
[8] N. Nakanishi, *On the Structure of Infinitesimal Automorphisms of Linear Poisson Manifolds I*, J. Math. Kyoto Univ. 31, 71-82 (1991)
# 1. 1-D example with 3 parameters
The following example shows how to construct the kernel, automatically, from a symbolic expression defining the linear differential operator in **1D**.
We consider the following operator, for an unknown *u*
$$
\mathcal{L}^{\mu, \alpha, \beta} u := \mu u + \alpha \partial_x u + \beta \partial_{xx} u
$$
```python
# imports
from mlhiphy.calculus import dx, dy, dz
from mlhiphy.calculus import Constant
from mlhiphy.calculus import Unknown
from mlhiphy.kernels import compute_kernel, generic_kernel
from sympy import expand
from sympy import symbols
from sympy import exp
```
```python
x, x_i, x_j = symbols('x x_i x_j')
u = Unknown('u')
alpha = Constant('alpha')
beta = Constant('beta')
mu = Constant('mu')
theta = Constant('theta')
expr = mu * u + alpha * dx(u) + beta * dx(dx(u))
```
```python
kuu = generic_kernel(expr, u, (x_i, x_j))
```
```python
from IPython.display import Math
from sympy import latex
Math(latex(expand(kuu)))
```
$$\alpha^{2} \frac{\partial^{2}}{\partial x_{i}\partial x_{j}} u{\left (x_{i},x_{j} \right )} + \alpha \beta \frac{\partial^{3}}{\partial x_{i}^{2}\partial x_{j}} u{\left (x_{i},x_{j} \right )} + \alpha \beta \frac{\partial^{3}}{\partial x_{i}\partial x_{j}^{2}} u{\left (x_{i},x_{j} \right )} + \alpha \mu \frac{\partial}{\partial x_{i}} u{\left (x_{i},x_{j} \right )} + \alpha \mu \frac{\partial}{\partial x_{j}} u{\left (x_{i},x_{j} \right )} + \beta^{2} \frac{\partial^{4}}{\partial x_{i}^{2}\partial x_{j}^{2}} u{\left (x_{i},x_{j} \right )} + \beta \mu \frac{\partial^{2}}{\partial x_{i}^{2}} u{\left (x_{i},x_{j} \right )} + \beta \mu \frac{\partial^{2}}{\partial x_{j}^{2}} u{\left (x_{i},x_{j} \right )} + \mu^{2} u{\left (x_{i},x_{j} \right )}$$
```python
# RBF kernel
kuu = theta * exp(-0.5*((x_i - x_j)**2))
```
```python
kuf = compute_kernel(expr, kuu, x_i)
kfu = compute_kernel(expr, kuu, x_j)
kff = compute_kernel(expr, kuu, (x_i, x_j))
```
```python
Math(latex(expand(kuf)))
```
$$- 1.0 \alpha \theta x_{i} e^{- 0.5 x_{i}^{2}} e^{- 0.5 x_{j}^{2}} e^{1.0 x_{i} x_{j}} + 1.0 \alpha \theta x_{j} e^{- 0.5 x_{i}^{2}} e^{- 0.5 x_{j}^{2}} e^{1.0 x_{i} x_{j}} + 1.0 \beta \theta x_{i}^{2} e^{- 0.5 x_{i}^{2}} e^{- 0.5 x_{j}^{2}} e^{1.0 x_{i} x_{j}} - 2.0 \beta \theta x_{i} x_{j} e^{- 0.5 x_{i}^{2}} e^{- 0.5 x_{j}^{2}} e^{1.0 x_{i} x_{j}} + 1.0 \beta \theta x_{j}^{2} e^{- 0.5 x_{i}^{2}} e^{- 0.5 x_{j}^{2}} e^{1.0 x_{i} x_{j}} - 1.0 \beta \theta e^{- 0.5 x_{i}^{2}} e^{- 0.5 x_{j}^{2}} e^{1.0 x_{i} x_{j}} + \mu u{\left (x_{i} \right )}$$
```python
Math(latex(expand(kfu)))
```
$$1.0 \alpha \theta x_{i} e^{- 0.5 x_{i}^{2}} e^{- 0.5 x_{j}^{2}} e^{1.0 x_{i} x_{j}} - 1.0 \alpha \theta x_{j} e^{- 0.5 x_{i}^{2}} e^{- 0.5 x_{j}^{2}} e^{1.0 x_{i} x_{j}} + 1.0 \beta \theta x_{i}^{2} e^{- 0.5 x_{i}^{2}} e^{- 0.5 x_{j}^{2}} e^{1.0 x_{i} x_{j}} - 2.0 \beta \theta x_{i} x_{j} e^{- 0.5 x_{i}^{2}} e^{- 0.5 x_{j}^{2}} e^{1.0 x_{i} x_{j}} + 1.0 \beta \theta x_{j}^{2} e^{- 0.5 x_{i}^{2}} e^{- 0.5 x_{j}^{2}} e^{1.0 x_{i} x_{j}} - 1.0 \beta \theta e^{- 0.5 x_{i}^{2}} e^{- 0.5 x_{j}^{2}} e^{1.0 x_{i} x_{j}} + \mu u{\left (x_{j} \right )}$$
```python
Math(latex(expand(kff)))
```
$$- 1.0 \alpha^{2} \theta x_{i}^{2} e^{- 0.5 x_{i}^{2}} e^{- 0.5 x_{j}^{2}} e^{1.0 x_{i} x_{j}} + 2.0 \alpha^{2} \theta x_{i} x_{j} e^{- 0.5 x_{i}^{2}} e^{- 0.5 x_{j}^{2}} e^{1.0 x_{i} x_{j}} - 1.0 \alpha^{2} \theta x_{j}^{2} e^{- 0.5 x_{i}^{2}} e^{- 0.5 x_{j}^{2}} e^{1.0 x_{i} x_{j}} + 1.0 \alpha^{2} \theta e^{- 0.5 x_{i}^{2}} e^{- 0.5 x_{j}^{2}} e^{1.0 x_{i} x_{j}} + 1.0 \beta^{2} \theta x_{i}^{4} e^{- 0.5 x_{i}^{2}} e^{- 0.5 x_{j}^{2}} e^{1.0 x_{i} x_{j}} - 4.0 \beta^{2} \theta x_{i}^{3} x_{j} e^{- 0.5 x_{i}^{2}} e^{- 0.5 x_{j}^{2}} e^{1.0 x_{i} x_{j}} + 6.0 \beta^{2} \theta x_{i}^{2} x_{j}^{2} e^{- 0.5 x_{i}^{2}} e^{- 0.5 x_{j}^{2}} e^{1.0 x_{i} x_{j}} - 6.0 \beta^{2} \theta x_{i}^{2} e^{- 0.5 x_{i}^{2}} e^{- 0.5 x_{j}^{2}} e^{1.0 x_{i} x_{j}} - 4.0 \beta^{2} \theta x_{i} x_{j}^{3} e^{- 0.5 x_{i}^{2}} e^{- 0.5 x_{j}^{2}} e^{1.0 x_{i} x_{j}} + 12.0 \beta^{2} \theta x_{i} x_{j} e^{- 0.5 x_{i}^{2}} e^{- 0.5 x_{j}^{2}} e^{1.0 x_{i} x_{j}} + 1.0 \beta^{2} \theta x_{j}^{4} e^{- 0.5 x_{i}^{2}} e^{- 0.5 x_{j}^{2}} e^{1.0 x_{i} x_{j}} - 6.0 \beta^{2} \theta x_{j}^{2} e^{- 0.5 x_{i}^{2}} e^{- 0.5 x_{j}^{2}} e^{1.0 x_{i} x_{j}} + 3.0 \beta^{2} \theta e^{- 0.5 x_{i}^{2}} e^{- 0.5 x_{j}^{2}} e^{1.0 x_{i} x_{j}} + 2.0 \beta \mu \theta x_{i}^{2} e^{- 0.5 x_{i}^{2}} e^{- 0.5 x_{j}^{2}} e^{1.0 x_{i} x_{j}} - 4.0 \beta \mu \theta x_{i} x_{j} e^{- 0.5 x_{i}^{2}} e^{- 0.5 x_{j}^{2}} e^{1.0 x_{i} x_{j}} + 2.0 \beta \mu \theta x_{j}^{2} e^{- 0.5 x_{i}^{2}} e^{- 0.5 x_{j}^{2}} e^{1.0 x_{i} x_{j}} - 2.0 \beta \mu \theta e^{- 0.5 x_{i}^{2}} e^{- 0.5 x_{j}^{2}} e^{1.0 x_{i} x_{j}} + \mu^{2} u{\left (x_{i},x_{j} \right )}$$
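Since the computed kernels are ordinary SymPy expressions, they can be turned into fast numerical functions with `lambdify`. Below is a minimal sketch for the base RBF kernel `kuu` (the value `theta = 2.0` is an arbitrary choice, and we assume a `Constant` behaves like a SymPy symbol under `subs`); `kuf` and `kff` can be handled the same way once their remaining symbolic `u` terms are substituted:
```python
import numpy as np
from sympy import lambdify

# Build a numerical Gram matrix from the symbolic RBF kernel kuu.
# theta = 2.0 is an arbitrary illustrative value.
kuu_num = lambdify((x_i, x_j), kuu.subs(theta, 2.0), 'numpy')

xi = np.linspace(-1.0, 1.0, 5)
K = kuu_num(xi[:, None], xi[None, :])   # 5 x 5 kernel matrix
print(K.shape)
```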
```python
from IPython.core.display import HTML
def css_styling():
styles = open("../styles/custom.css", "r").read()
return HTML(styles)
css_styling()
```
# Solving orbital equations with different algorithms
This notebook was adapted from `Orbit_games.ipynb`.
We consider energy plots and orbital solutions in polar coordinates for the general potential energy
$\begin{align}
U(r) = k r^n
\end{align}$
for different ODE solution algorithms. The `solve_ivp` function can itself be specified to use different solution methods (with the `method` keyword). Here we will set it by default to use 'RK23', which is a variant on the Runge-Kutta second-order algorithm. Second-order in this context means that the accuracy of a calculation will improve by a factor of $10^2 = 100$ if $\Delta t$ is reduced by a factor of ten.
We will compare it with the crudest algorithm, Euler's method, which is first order, and a second-order algorithm called Leapfrog, which is designed to be precisely <em>time-reversal invariant</em>. This property keeps the energy error bounded (approximate conservation of energy over long times), which is not true of the other algorithms we will consider.
To solve the differential equations for orbits, we have defined the $\mathbf{y}$
and $d\mathbf{y}/dt$ vectors as
$\begin{align}
\mathbf{y} = \left(\begin{array}{c} r(t) \\ \dot r(t) \\ \phi(t) \end{array} \right)
\qquad
\frac{d\mathbf{y}}{dt}
= \left(\begin{array}{c} \dot r(t) \\ \ddot r(t) \\ \dot\phi(t) \end{array} \right)
= \left(\begin{array}{c} \dot r(t) \\
-\frac{1}{\mu}\frac{dU_{\rm eff}(r)}{dr} \\
\frac{l}{\mu r^2} \end{array} \right)
\end{align}$
where we have substituted the differential equations for $\ddot r$ and $\dot\phi$.
Then Euler's method can be written as a simple prescription to obtain $\mathbf{y}_{i+1}$
from $\mathbf{y}_i$, where the subscripts label the elements of the `t_pts` array:
$\mathbf{y}_{i+1} = \mathbf{y}_i + \left(d\mathbf{y}/dt\right)_i \Delta t$, or, by components:
$\begin{align}
r_{i+1} &= r_i + \frac{d\mathbf{y}_i[0]}{dt} \Delta t \\
\dot r_{i+1} &= \dot r_{i} + \frac{d\mathbf{y}_i[1]}{dt} \Delta t \\
\phi_{i+1} &= \phi_i + \frac{d\mathbf{y}_i[2]}{dt} \Delta t
\end{align}$
**Look at the** `solve_ode_Euler` **method below and verify the algorithm is correctly implemented.**
The leapfrog method does better by evaluating $\dot r$ at a halfway time step before and after the $r$ evaluation,
which is both more accurate and incorporates time reversal:
$\begin{align}
\dot r_{i+1/2} &= \dot r_{i} + \frac{d\mathbf{y}_i[1]}{dt} \Delta t/2 \\
r_{i+1} &= r_i + \dot r_{i+1/2} \Delta t \\
\dot r_{i+1} &= \dot r_{i+1/2} + \frac{d\mathbf{y}_{i+1}[1]}{dt} \Delta t/2 \\
\phi_{i+1} &= \phi_i + \frac{d\mathbf{y}_i[2]}{dt} \Delta t
\end{align}$
**Look at the** `solve_ode_Leapfrog` **method below and verify the algorithm is correctly implemented.**
A third method is the second-order Runge-Kutta algorithm, which we invoke from `solve_ivp` as `RK23`.
It does not use a fixed time-step as in our "homemade" implementations, so there is not a direct
comparison, but we can still check if it conserves energy.
**Run the notebook. You are to turn in and comment on the "Change in energy with time" plot at the end.
Where do you see energy conserved or not conserved? Show that Euler is first order and leapfrog is second
order by changing $\Delta t$; describe what you did and what you found.**
**Try another potential to see if you get the same general conclusions.**
```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp
```
```python
# Change the common font size
font_size = 14
plt.rcParams.update({'font.size': font_size})
```
```python
class Orbit:
"""
Potentials and associated differential equations for central force motion
with the potential U(r) = k r^n. Several algorithms for integration of
ordinary differential equations are now available.
"""
def __init__(self, m_1=1., m_2=1., G=1,):
self.m_1 = m_1
self.m_2 = m_2
self.G = G
def dz_dt(self, t, z):
"""
This function returns the right-hand side of the diffeq:
[dz/dt d^2z/dt^2]
Parameters
----------
t : float
time
z : float
8-component vector with
z[0] = x_1(t), z[1] = x_dot_1(t)
z[2] = y_1(t), z[3] = y_dot_1(t)
z[4] = x_2(t), z[5] = x_dot_2(t)
z[6] = y_2(t), z[7] = y_dot_2(t)
"""
r_12=np.sqrt( (z[0]-z[4])**2 + (z[2]-z[6])**2 )
return [ z[1], self.G * self.m_2 * (z[4] - z[0]) / r_12 ** 3,\
z[3], self.G * self.m_2 * (z[6] - z[2]) / r_12 ** 3,\
z[5], -self.G * self.m_1 * (z[4] - z[0]) / r_12 ** 3,\
z[7], -self.G * self.m_1 * (z[6] - z[2]) / r_12 ** 3 ]
def solve_ode(self, t_pts, z_0,
abserr=1.0e-8, relerr=1.0e-8):
"""
Solve the ODE given initial conditions.
Use solve_ivp with the option of specifying the method.
Specify smaller abserr and relerr to get more precision.
"""
solution = solve_ivp(self.dz_dt, (t_pts[0], t_pts[-1]),
z_0, t_eval=t_pts, method='RK23',
atol=abserr, rtol=relerr)
x_1, x_dot_1, y_1, y_dot_1, x_2, x_dot_2, y_2, y_dot_2 = solution.y
return x_1, x_dot_1, y_1, y_dot_1, x_2, x_dot_2, y_2, y_dot_2
def solve_ode_Leapfrog(self, t_pts, z_0):
"""
Solve the ODE given initial conditions with the Leapfrog method.
"""
delta_t = t_pts[1] - t_pts[0]
x_1_0, x_dot_1_0, y_1_0, y_dot_1_0, x_2_0, x_dot_2_0, y_2_0, y_dot_2_0 = z_0
# initialize the arrays with zeros
num_t_pts = len(t_pts)
x_1 = np.zeros(num_t_pts)
x_dot_1 = np.zeros(num_t_pts)
x_dot_1_half = np.zeros(num_t_pts)
y_1 = np.zeros(num_t_pts)
y_dot_1 = np.zeros(num_t_pts)
y_dot_1_half = np.zeros(num_t_pts)
x_2 = np.zeros(num_t_pts)
x_dot_2 = np.zeros(num_t_pts)
x_dot_2_half = np.zeros(num_t_pts)
y_2 = np.zeros(num_t_pts)
y_dot_2 = np.zeros(num_t_pts)
y_dot_2_half = np.zeros(num_t_pts)
# initial conditions
x_1[0] = x_1_0
x_dot_1[0] = x_dot_1_0
y_1[0] = y_1_0
y_dot_1[0] = y_dot_1_0
x_2[0] = x_2_0
x_dot_2[0] = x_dot_2_0
y_2[0] = y_2_0
y_dot_2[0] = y_dot_2_0
# step through the differential equation
for i in np.arange(num_t_pts - 1):
t = t_pts[i]
z = [x_1[i], x_dot_1[i], y_1[i], y_dot_1[i],\
x_2[i], x_dot_2[i], y_2[i], y_dot_2[i] ]
out=self.dz_dt(t,z)
x_dot_1_half[i] = x_dot_1[i] + out[1] * delta_t/2.
x_1[i+1] = x_1[i] + x_dot_1_half[i] * delta_t
y_dot_1_half[i] = y_dot_1[i] + out[3] * delta_t/2.
y_1[i+1] = y_1[i] + y_dot_1_half[i] * delta_t
x_dot_2_half[i] = x_dot_2[i] + out[5] * delta_t/2.
x_2[i+1] = x_2[i] + x_dot_2_half[i] * delta_t
y_dot_2_half[i] = y_dot_2[i] + out[7] * delta_t/2.
y_2[i+1] = y_2[i] + y_dot_2_half[i] * delta_t
z = [x_1[i+1], x_dot_1[i], y_1[i+1], y_dot_1[i],\
x_2[i+1], x_dot_2[i], y_2[i+1], y_dot_2[i] ]
out = self.dz_dt(t,z)
x_dot_1[i+1] = x_dot_1_half[i] + out[1] * delta_t/2.
y_dot_1[i+1] = y_dot_1_half[i] + out[3] * delta_t/2.
x_dot_2[i+1] = x_dot_2_half[i] + out[5] * delta_t/2.
y_dot_2[i+1] = y_dot_2_half[i] + out[7] * delta_t/2.
return x_1, x_dot_1, y_1, y_dot_1, x_2, x_dot_2, y_2, y_dot_2
```
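The discussion above refers to a `solve_ode_Euler` method, which did not survive the adaptation of this class to Cartesian two-body coordinates. A minimal stand-alone sketch of Euler's method written against the same `dz_dt` right-hand side (the function name and interface here are mine, not part of the original class) is:
```python
def solve_ode_Euler(orbit, t_pts, z_0):
    """
    Forward-Euler integration of orbit.dz_dt with a fixed step delta_t.
    Returns the same eight arrays as Orbit.solve_ode, in the same order.
    """
    delta_t = t_pts[1] - t_pts[0]
    num_t_pts = len(t_pts)
    z = np.zeros((num_t_pts, 8))
    z[0] = z_0
    for i in range(num_t_pts - 1):
        dz_dt = np.array(orbit.dz_dt(t_pts[i], z[i]))
        z[i+1] = z[i] + dz_dt * delta_t   # y_{i+1} = y_i + (dy/dt)_i * Delta t
    return tuple(z[:, k] for k in range(8))
```
It can be called exactly like the other solvers, e.g. `x_1, x_dot_1, y_1, y_dot_1, x_2, x_dot_2, y_2, y_dot_2 = solve_ode_Euler(o1, t_pts, z_0)`, which makes it easy to compare its energy drift against Leapfrog and RK23.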
```python
def start_stop_indicies(t_pts, plot_start, plot_stop):
start_index = (np.fabs(t_pts-plot_start)).argmin() # index in t_pts array
stop_index = (np.fabs(t_pts-plot_stop)).argmin() # index in t_pts array
return start_index, stop_index
```
```python
def plot_y_vs_x(x, y, axis_labels=None, label=None, title=None,
color=None, linestyle=None, semilogy=False, loglog=False,
ax=None):
"""
Generic plotting function: return a figure axis with a plot of y vs. x,
with line color and style, title, axis labels, and line label
"""
if ax is None: # if the axis object doesn't exist, make one
ax = plt.gca()
if (semilogy):
line, = ax.semilogy(x, y, label=label,
color=color, linestyle=linestyle)
elif (loglog):
line, = ax.loglog(x, y, label=label,
color=color, linestyle=linestyle)
else:
line, = ax.plot(x, y, label=label,
color=color, linestyle=linestyle)
    if label is not None: # if a label is passed, show the legend
ax.legend()
    if title is not None: # set a title if one is passed
ax.set_title(title)
if axis_labels is not None: # set x-axis and y-axis labels if passed
ax.set_xlabel(axis_labels[0])
ax.set_ylabel(axis_labels[1])
return ax, line
```
# Start The Plots
The first plot is a standard run using 'RK23', the second uses the Leapfrog method, and the third is a simple plot with the initial conditions chosen so that the two bodies move in opposite directions.
```python
#Labels for individual plot axes
orbit_labels=(r'$x$',r'$y$')
#common plotting time
t_start=0.
t_end=10.
delta_t=0.01
t_pts=np.arange(t_start, t_end+delta_t,delta_t)
G = 1.
m_1 = 1.
m_2 = 5.
o1 = Orbit(m_1, m_2, G)
#Initial conditions with the velocity of the center of mass to be 0
x_1_0, x_dot_1_0 = 1., -1.
y_1_0, y_dot_1_0 = 1., 1.
x_2_0, x_dot_2_0 = -(m_1 / m_2) * x_1_0, -(m_1 / m_2) * x_dot_1_0
y_2_0, y_dot_2_0 = -(m_1 / m_2) * y_1_0, -(m_1 / m_2) * y_dot_1_0
z_0 = [x_1_0, x_dot_1_0, y_1_0, y_dot_1_0,\
x_2_0, x_dot_2_0, y_2_0, y_dot_2_0]
x_1,x_dot_1, y_1, y_dot_1, x_2, x_dot_2, y_2, y_dot_2 = o1.solve_ode(t_pts,z_0)
```
```python
fig = plt.figure(figsize=(7,5))
ax = fig.add_subplot(1,1,1)
ax.plot(x_1, y_1, color='blue', label=r'$m_1$')
ax.plot(x_2, y_2, color='red', label=r'$m_2$')
start, stop = start_stop_indicies(t_pts, t_start, t_end)
ax.set_title('Simple gravitational orbit')
ax.legend()
ax.set_aspect(1)
fig.tight_layout()
fig.savefig('Gravitation_orbit.png')
```
## Plot orbit and check energy conservation
```python
#common plotting time
t_start=0.
t_end=20.
delta_t=0.001
t_pts=np.arange(t_start, t_end+delta_t,delta_t)
G = 10.
m_1 = 10.
m_2 = 1.
o2 = Orbit(m_1, m_2, G)
#Initial conditions with the velocity of the center of mass to be 0
x_1_0, x_dot_1_0 = 0.1, 0.
y_1_0, y_dot_1_0 = 0., 0.75
x_2_0, x_dot_2_0 = -(m_1 / m_2) * x_1_0, -(m_1 / m_2) * x_dot_1_0
y_2_0, y_dot_2_0 = -(m_1 / m_2) * y_1_0, -(m_1 / m_2) * y_dot_1_0
z_0 = [x_1_0, x_dot_1_0, y_1_0, y_dot_1_0,\
x_2_0, x_dot_2_0, y_2_0, y_dot_2_0]
x_1,x_dot_1, y_1, y_dot_1, x_2, x_dot_2, y_2, y_dot_2 = o2.solve_ode_Leapfrog(t_pts,z_0)
```
```python
fig = plt.figure(figsize=(5,5))
ax = fig.add_subplot(1,1,1)
ax.plot(x_1, y_1, color='blue', label=r'$m_1$')
ax.plot(x_2, y_2, color='red', label=r'$m_2$')
start, stop = start_stop_indicies(t_pts, t_start, t_end)
ax.set_title('Simple gravitational orbit')
ax.legend()
ax.set_aspect(1)
fig.tight_layout()
fig.savefig('Gravitation_orbit_LeapFrog.png')
```
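The heading above promises an energy check; here is a minimal sketch of one (not in the original notebook). It reuses the Leapfrog solution arrays, `t_pts`, `m_1`, `m_2` and `G` from the cells above, and uses the two-body energy $E = \tfrac12 m_1 \dot{\vec r}_1^{\,2} + \tfrac12 m_2 \dot{\vec r}_2^{\,2} - G m_1 m_2 / r_{12}$, which a symplectic scheme such as Leapfrog should keep nearly constant.
```python
# Total energy E(t) for the two-body solution computed above with Leapfrog.
KE = 0.5 * m_1 * (x_dot_1**2 + y_dot_1**2) + 0.5 * m_2 * (x_dot_2**2 + y_dot_2**2)
r_12 = np.sqrt((x_1 - x_2)**2 + (y_1 - y_2)**2)
E_tot = KE - G * m_1 * m_2 / r_12

fig = plt.figure(figsize=(7, 3))
ax = fig.add_subplot(1, 1, 1)
ax.plot(t_pts, E_tot / np.abs(E_tot[0]), color='green')
ax.set_xlabel('t')
ax.set_ylabel(r'$E(t)/|E(0)|$')
ax.set_title('Relative total energy (Leapfrog)')
fig.tight_layout()
```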
```python
#common plotting time
t_start=0.
t_end=50.
delta_t=0.01
t_pts=np.arange(t_start, t_end+delta_t,delta_t)
G = 10.
m_1 = 1.
m_2 = 1.
o2 = Orbit(m_1, m_2, G)
#Initial conditions with the velocity of the center of mass to be 0
x_1_0, x_dot_1_0 = 1., 0.
y_1_0, y_dot_1_0 = 0., 1.
x_2_0, x_dot_2_0 = -(m_1 / m_2) * x_1_0, -(m_1 / m_2) * x_dot_1_0
y_2_0, y_dot_2_0 = -(m_1 / m_2) * y_1_0, -(m_1 / m_2) * y_dot_1_0
z_0 = [x_1_0, x_dot_1_0, y_1_0, y_dot_1_0,\
x_2_0, x_dot_2_0, y_2_0, y_dot_2_0]
x_1,x_dot_1, y_1, y_dot_1, x_2, x_dot_2, y_2, y_dot_2 = o2.solve_ode(t_pts,z_0)
```
```python
fig = plt.figure(figsize=(5,5))
ax = fig.add_subplot(1,1,1)
ax.plot(x_1, y_1, color='blue', label=r'$m_1$')
ax.plot(x_2, y_2, color='red', label=r'$m_2$')
start, stop = start_stop_indicies(t_pts, t_start, t_end)
ax.set_title('Simple gravitational orbit')
ax.legend()
ax.set_aspect(1)
fig.tight_layout()
fig.savefig('Simple orbits.png')
```
# Analysis of a quantum wave packet
## Introduction
This script studies the initialization of the wavepacket and its initial properties. In what follows we will consider an harmonic potential and the wave packet of an electron. We are interested in the following properties:
* probability density
* average position
* momentum
* kinetic energy
* potential energy (for an harmonic potential)
* total energy
The first part is concerned with the theory: we calculate the physical quantities of interest by hand and use Sympy to check them. The second part provides an implementation using Numpy, in which the physical quantities are computed numerically. Finally, a comparison between the theory and the numerical results is performed.
```python
import numpy as np
import sympy as sp
from sympy import simplify, factor
sp.init_printing() # Init pretty print as default
```
## Theory
### First properties
We initialize the wavefunction with a wavepacket centered in $x_0$ spatially localized within $\sigma$ of $x_0$ with an initial momentum $p_0$:
\begin{align}
\psi(x,t=0) = \psi(x) = &
\frac{1}{(2\pi)^{1/4} \sqrt{\sigma}}
e^{- \frac 1 2 \frac{(x-x_0)^2}{2\sigma^2}}
e^{i p_0(x-x_0)/\hbar} \\
|\psi|^2= &
\frac{1}{\sigma \sqrt{2 \pi}}
e^{- \frac{(x-x_0)^2}{2\sigma^2}}
\end{align}
Notice that $|\psi|^2$ is the expression of the probability density function (PDF) of a [normal distribution](https://en.wikipedia.org/wiki/Normal_distribution) $\mathcal N (x_0, \sigma^2)$, which will prove useful to compute integrals.
We can first check that the wave packet is indeed normalized. This is also coherent with the fact that $|\psi|^2$ is a PDF.
\begin{equation}
\int_\mathbb{R} |\psi|^2 dx = 1
\end{equation}
Secondly, the average position is easily calculated, again thanks to the properties of the normal distribution:
\begin{equation}
\left\langle x \right\rangle = \int_\mathbb{R} x |\psi|^2 dx = x_0
\end{equation}
### Momentum
Let us now turn our attention to the average momentum. The associated observable is $\hat{p} = \frac{\hbar}{i} \frac{d}{dx}$. We can compute the following expressions:
\begin{align}
\frac{d\psi}{dx} = & \left(- \frac{x-x_0}{2 \sigma^2} + i \frac{p_0}{\hbar}\right) \psi\\
= & \left(i \frac{p_0}{\hbar} - \frac{x-x_0}{2 \sigma^2} \right) \psi\\
\frac{\hbar}{i} \frac{d\psi}{dx} = & \left(p_0 + i \hbar \frac{x-x_0}{2 \sigma^2} \right) \psi
\end{align}
Let us use sympy to check the result of the first derivative:
```python
# Define some symbols
x, x0, sigma, hbar, m, w = sp.symbols('x x_0 sigma \hbar m omega', real = True, positive = True)
p0 = sp.symbols('p_0', real=True)
pi = sp.pi
i = sp.I
psi = 1/((2*sp.pi)**sp.Rational(1,4) * sp.sqrt(sigma)) * sp.exp(-sp.Rational(1,2) * (x-x0)**2/(2*sigma**2)) * sp.exp(i*p0*(x-x0)/hbar)
psi2 = sp.Abs(psi)**2
dpsidx = simplify(sp.diff(psi, x))
dpsidx_theory = psi*(i*p0/hbar - (x-x0)/(2*sigma**2))
```
We can check that the expression derived previously are correct by looking at the difference of our expression and the one computed by sympy, and the result should be 0:
```python
simplify(dpsidx_theory-dpsidx)
```
We can then compute the average momentum:
\begin{align}
\left\langle p \right\rangle = & \int_\mathbb{R} \psi^* \frac{\hbar} i \frac{d\psi}{dx} dx = \int_\mathbb{R} \left(p_0 + i \hbar \frac{x-x_0}{2 \sigma^2} \right) |\psi|^2 dx\\
= & p_0 \int_\mathbb{R} |\psi|^2 dx +
\frac{i \hbar}{2 \sigma^2} \left( \int_\mathbb{R} x |\psi|^2 dx - x_0 \int_\mathbb{R}|\psi|^2 dx \right) \\
= & p_0 + \frac{i \hbar}{2 \sigma^2} (x_0 - x_0 \times 1) = p_0
\end{align}
Below is the result from sympy for comparison:
```python
sp.integrate(sp.conjugate(psi) * hbar/i * dpsidx, (x, -sp.oo, sp.oo))
```
#### Kinetic energy
We can then turn our attention to the kinetic energy. Since the observable is $\hat{E_c} = \frac{1}{2m} \hat{p}^2 = \frac{-\hbar^2}{2m} \frac{d^2}{dx^2}$, we first need to compute the second derivative $\frac{d^2\psi}{dx^2}$. This is fairly easy since we know that $\frac{d\psi}{dx} = \left(i \frac{p_0}{\hbar} - \frac{x-x_0}{2 \sigma^2} \right) \psi$:
\begin{align}
\frac{d^2\psi}{dx^2} = & - \frac{1}{2\sigma^2} \psi(x) + \left(i \frac{p_0}{\hbar} - \frac{x-x_0}{2 \sigma^2} \right) \frac{d\psi}{dx} \\
= & - \frac{1}{2\sigma^2} \psi(x) + \left(i \frac{p_0}{\hbar} - \frac{x-x_0}{2 \sigma^2} \right)^2 \psi \\
= & \left(- \frac{1}{2\sigma^2} - \frac{p_0^2}{\hbar^2} -2i \frac{p_0}{\hbar} \frac{x-x_0}{2\sigma^2} + \left(\frac{x-x_0}{2\sigma^2} \right)^2 \right) \psi(x)\\
= & \left(- \left(\frac{p_0^2}{\hbar^2} + \frac{1}{2\sigma^2} \right) + \left(\frac{x-x_0}{2\sigma^2} \right)^2 -2i \frac{p_0}{\hbar} \frac{x-x_0}{2\sigma^2} \right) \psi(x)
\end{align}
Once again let us use sympy to check the result of the second derivative:
```python
d2psidx2 = simplify(sp.diff(dpsidx, x))
d2psidx2_theory = psi * (-(p0**2/hbar**2 + 1 / (2*sigma**2)) + (x-x0)**2/(2*sigma**2)**2 -2*i*p0/hbar*(x-x0)/(2*sigma**2))
simplify(d2psidx2_theory-d2psidx2)
```
We are now in a position to compute the average kinetic energy:
\begin{align}
\left\langle E_c \right\rangle
= & \frac{-\hbar^2}{2m} \int_\mathbb{R} \psi^* \left(- \left(\frac{p_0^2}{\hbar^2} + \frac{1}{2\sigma^2} \right) + \left(\frac{x-x_0}{2\sigma^2} \right)^2 -2i \frac{p_0}{\hbar} \frac{x-x_0}{2\sigma^2} \right) \psi(x) dx \\
= & \frac{-\hbar^2}{2m} \int_\mathbb{R} \left(- \left(\frac{p_0^2}{\hbar^2} + \frac{1}{2\sigma^2} \right) + \left(\frac{x-x_0}{2\sigma^2} \right)^2 -2i \frac{p_0}{\hbar} \frac{x-x_0}{2\sigma^2} \right) |\psi(x)|^2 dx \\
= & \frac{\hbar^2}{2m} \left(\frac{p_0^2}{\hbar^2} + \frac{1}{2\sigma^2} \right) \int_\mathbb{R} |\psi(x)|^2 dx
- \frac{\hbar^2}{2m} \frac{1}{4 \sigma^4} \int_\mathbb{R} \left(x-x_0\right)^2 |\psi(x)|^2 dx
+ \frac{\hbar^2}{2m} 2i \frac{p_0}{\hbar} \frac{1}{2\sigma^2} \int_\mathbb{R} \left(x-x_0 \right) |\psi(x)|^2 dx\\
= & \left(\frac{p_0^2}{2m} + \frac{\hbar^2}{2m} \frac{1}{2\sigma^2} \right) \int_\mathbb{R} |\psi(x)|^2 dx
- \frac{\hbar^2}{2m} \frac{1}{4 \sigma^4} \int_\mathbb{R} \left(x-x_0\right)^2 |\psi(x)|^2 dx \\
= & \frac{p_0^2}{2m} + \frac{\hbar^2}{2m} \frac{1}{2\sigma^2}
- \frac{\hbar^2}{2m} \frac{1}{4 \sigma^4} \sigma^2 \\
= & \frac{p_0^2}{2m} + \frac{\hbar^2}{2m} \left( \frac{1}{2\sigma^2} - \frac{1}{4 \sigma^2}\right) \\
= & \frac{p_0^2}{2m} + \frac{\hbar^2}{2m} \frac{1}{4\sigma^2}
\end{align}
Let's see what sympy has to say:
```python
sp.integrate(sp.conjugate(psi) * (-hbar**2/(2*m)) * d2psidx2, (x, -sp.oo, sp.oo))
```
#### Potential energy
Given an harmonic potential $V(x) = \frac 1 2 m \omega^2 (x-x_0)^2$ centered in $x_0$, the average potential energy is easy to find:
\begin{align}
\left\langle V \right\rangle = & \int_\mathbb{R} \psi^* V(x) \psi dx
= \int_\mathbb{R} \psi^* \frac 1 2 m \omega^2 (x-x_0)^2 \psi dx\\
= & \int_\mathbb{R}\frac 1 2 m \omega^2 (x-x_0)^2 |\psi|^2 dx
= \frac 1 2 m \omega^2 \int_\mathbb{R}(x-x_0)^2 |\psi|^2 dx\\
= & \frac 1 2 m \omega^2 \sigma^2
\end{align}
Let's check with sympy:
```python
V = sp.Rational(1,2) * m * w**2 * (x-x0)**2
sp.integrate(sp.conjugate(psi) * V *psi, (x, -sp.oo, sp.oo))
```
#### Total energy
The total energy $\mathcal E$ is the sum of the kinetic energy and the potential energy. Hence:
\begin{align}
\mathcal E = &\left\langle E_c \right\rangle + \left\langle V \right\rangle \\
= & \frac{p_0^2}{2m} + \frac{\hbar^2}{2m} \frac{1}{4\sigma^2} + \frac 1 2 m \omega^2 \sigma^2
\end{align}
## Implementation
### Simulation parameters
The settings below define the characteristics of the spatial grid used.
```python
Nx = 1000 # Number of grid points
x_start = 000e-9 # Position to start the computations from
x_end = 100e-9 # Position at which to end the computations
x = np.linspace(x_start,x_end,Nx, dtype="float128") # Position vector
dx = x[2] - x[1] # Spatial precision
dx
```
1.00100100100100100176e-10
### Physical constants
We study the behavior of an electron in a semiconductor, for instance silicon; hence we use its effective mass instead of the bare electron mass.
```python
me = 9.10938291e-31 # Electron mass
meff = 0.19*me # Electron effective mass
m = meff
hbar = 1.054571726e-34 # Reduced Planck's constant
e = 1.602176565e-19 # Elementary charge
h = hbar*2*np.pi # Planck's constant
```
### Potential characteristic
We assume an harmonic potential with a level spacing of $\Delta E = 10$ meV. The energy levels of an harmonic oscillator are given by $E_n = \left(n + \frac 1 2 \right) \hbar \omega$, so the level spacing is $\Delta E = \hbar \omega$, which gives us the value of the parameter $\omega$ to use. Note the $e$ factor when defining `omega` to convert `deltaE` in Joules.
```python
deltaE = 10e-3 # Level spacing of the QD (eV)
omega = e*deltaE / hbar # s^-1
```
Now that $\omega$ is defined, we can compute the energy levels, for instance $E_0 = \frac{\hbar \omega}{2}$:
```python
E_0 = hbar*omega/2
E_0
```
Which converted in meV gives:
```python
E_0 / e * 1e3
```
### Wave packet initialization
The code below defines the center $x_0$ of the wave packet and its spatial width.
```python
x0 = 25e-9 # Position of the center
sigma = 10e-9 # Width of the Gaussian
```
How should we choose the value of the momentum? We could simply pick the speed of the wave packet $v$, which then determines the momentum as $p = mv$. Instead, let us try to find the momentum such that the energy of the wave packet is exactly that of the ground state of the harmonic oscillator, $E_0$.
From the previous section, the total energy of the wave packet is $\mathcal E = \left\langle E_c \right\rangle + \left\langle V \right\rangle$ which depends on $p_0$. We can inverse that relation to get $p_0$:
\begin{align}
\mathcal E = & \frac{p_0^2}{2m} + \frac{\hbar^2}{2m} \frac{1}{4\sigma^2} + \frac 1 2 m \omega^2 \sigma^2 \\
2m \mathcal E = & p_0^2 + \frac{\hbar^2}{4\sigma^2} + m^2 \omega^2 \sigma^2 \\
p_0^2 = & 2m \mathcal E - \frac{\hbar^2}{4\sigma^2} - m^2 \omega^2 \sigma^2
\end{align}
For $\mathcal E = E_0= \frac{\hbar \omega}{2}$, we get:
\begin{align}
p_0^2 = & 2m \frac{\hbar \omega}{2} - \frac{\hbar^2}{4\sigma^2} - m^2 \omega^2 \sigma^2\\
p_0^2 = & m \hbar \omega - \frac{\hbar^2}{4\sigma^2} - m^2 \omega^2 \sigma^2\\
\end{align}
When is this quantity positive? Let's look at each term:
```python
m*hbar*omega, hbar**2/(4*sigma**2), m**2 * omega**2 * sigma**2
```
We conclude that, with our choice of parameters $\omega$ and $\sigma$, $p_0^2$ is negative and the momentum is therefore not real. To see which values would work better, let us plot the surface $p_0^2(\omega, \sigma)$ and look for the region where it is non-negative.
```python
%matplotlib notebook
from mpl_toolkits.mplot3d import Axes3D
# Axes3D import has side effects, it enables using projection='3d' in add_subplot
import matplotlib.pyplot as plt
from matplotlib import cm
def p0_squared(omega, sigma):
return m*hbar*omega - hbar**2/(4*sigma**2) - m**2 * omega**2 * sigma**2
fig = plt.figure(figsize=(5, 5))
ax = fig.add_subplot(111, projection='3d')
deltaE_range = np.linspace(0, 100, 10) * 1e-3
omega_range = e*deltaE_range / hbar
sigma_range = np.linspace(1e-9, 100e-9, 10)
X, Y = np.meshgrid(omega_range, sigma_range)
zs = np.array(p0_squared(np.ravel(X), np.ravel(Y)))
Z = zs.reshape(X.shape)
ax.plot_surface(X, Y, Z, cmap=cm.jet)
ax.set_xlabel('$\omega$')
ax.set_ylabel('$\sigma$')
ax.set_zlabel('$p_0^2$')
plt.title("Max value: %e" % max(zs))
plt.show()
```
<IPython.core.display.Javascript object>
```python
p0 = m * 1e-9 * 1e-9/1e-12
```
The value of the momentum is quite abstract. However wavelength and group velocity are easier to understand. Luckily we can get these two values very easily from the momentum. The associated wavelength and group velocity are given below in nm and nm/ps respectively:
```python
wavelength = 2*np.pi*hbar/p0
v = p0/m
wavelength * 1e9, v * 1e9/1e12
```
Finally, we define the wave function $\psi$ and its associated probability density $|\psi|^2$:
```python
psi = 1/((2*np.pi)**(1/4)*np.sqrt(sigma)) * np.exp(-1/2 *(x-x0)**2/(2*sigma**2)) * np.exp(1j*p0*(x-x0)/hbar)
prob = abs(psi)**2
```
### Verification of theoretical properties
What follow are some checks to verify the initial conditions of the wavepacket. We begin by computing derivatives of the wavefunction:
```python
# First spatial derivative of psi
dpsi0dx = np.zeros(Nx, dtype="complex256")
# dpsi0dx(1:Nx-1) = (psi(2:Nx) - psi(1:Nx-1)) / dx
# dpsi0dx(1) = (psi(2) - psi(1)) / (dx)
dpsi0dx[2:Nx-1] = (psi[3:Nx] - psi[1:Nx-2]) / (2*dx)
# dpsi0dx(Nx) = (psi(Nx) - psi(Nx-1)) / (dx)
# Second spatial derivative of psi
d2psi0dx2 = np.zeros(Nx, dtype="complex256")
# d2psi0dx2(1:Nx-1) = (dpsi0dx(2:Nx) - dpsi0dx(1:Nx-1)) / dx
# d2psi0dx2(1) = (dpsi0dx(2) - dpsi0dx(1)) / dx
d2psi0dx2[2:Nx-1] = (dpsi0dx[3:Nx] - dpsi0dx[1:Nx-2]) / (2*dx)
# d2psi0dx2(Nx) = (dpsi0dx(Nx) - dpsi0dx(Nx-1)) / dx
```
We can then check the physical quantities we computed in the theory section:
```python
total_probability = np.trapz(prob * dx)
total_probability
```
0.99378996877180826765
```python
average_position = np.trapz(x * prob * dx);
x0, average_position
```
(2.5e-08, 2.5020039908209068836e-08)
```python
average_momentum = np.trapz(np.conj(psi) * hbar / 1j * dpsi0dx * dx);
p0, average_momentum
```
(1.7307827529000001e-37, (1.7195488195355279624e-37+9.594754951586863072e-29j))
```python
kinetic_energy = p0**2/(2*m) + hbar**2/(2*m) * 1/(4*sigma**2)
average_kinetic_energy = np.trapz(np.conj(psi) * -hbar**2/(2*m) * d2psi0dx2 * dx)
kinetic_energy, average_kinetic_energy
```
(8.031926041954226e-23, (7.978957460834612074e-23+4.857173437671517596e-35j))
```python
potential_energy = 1/2 * m * omega**2 * sigma**2
V = 1/2 * m * omega**2 * (x-x0)**2
average_potential_energy = np.trapz(V * prob * dx)
potential_energy, average_potential_energy
```
(1.9974736850373786e-21, 1.897536138239393811e-21)
```python
total_energy = kinetic_energy + potential_energy;
average_total_energy = average_kinetic_energy + average_potential_energy;
E_0, total_energy, average_total_energy
```
(8.010882825e-22,
2.077792945456921e-21,
(1.9773257128477399318e-21+4.857173437671517596e-35j))
## Conclusion
Overall, the physical quantities follow the theoretical predictions whenever the momentum is chosen consistently with the other parameters. The remaining difficulty is choosing the initial value of the momentum as a function of the parameters (namely $\sigma$ and $\Delta E$); the numerical precision also appears to depend on the value of $p_0$.
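As a follow-up (a minimal sketch, not part of the original notebook), the choice of $\sigma$ can be made explicit: at total energy $E_0 = \hbar\omega/2$, $p_0^2$ factors into a negative perfect square, so it can never be strictly positive and vanishes only when $\sigma$ equals the harmonic-oscillator ground-state width. The snippet checks this with Sympy and evaluates that width with the numerical `m`, `hbar` and `omega` defined earlier.
```python
import numpy as np
import sympy as sp

m_s, hbar_s, omega_s, sigma_s = sp.symbols('m hbar omega sigma', positive=True)
p0_squared_sym = m_s*hbar_s*omega_s - hbar_s**2/(4*sigma_s**2) - m_s**2*omega_s**2*sigma_s**2
# Expected factorization: -(2*m*omega*sigma**2 - hbar)**2 / (4*sigma**2)
print(sp.factor(p0_squared_sym))

# The only width with p0^2 >= 0 at energy E_0 (and then p0 = 0):
sigma_gs = np.sqrt(hbar / (2 * m * omega))
print(sigma_gs)  # roughly a few nanometres for the parameters above
```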
# Chapter 4
## Perceptron learning algorithm
```python
# Plotting
import matplotlib.pyplot as plt
# Randomness
from random import choice
# For the mathematical operations
from numpy import array, dot, random, linspace, zeros
# Very important, otherwise the plot is not displayed
%matplotlib inline
# Training data
# Per row: the binary input data and the desired binary output,
# as a list of tuples.
# Position 0 of the input vector is the bias neuron
training_data_set = [
    (array([1,0,0]), 0),
    (array([1,0,1]), 1),
    (array([1,1,0]), 1),
    (array([1,1,1]), 1),
]
# The Heaviside step function as a lambda function
heaviside = lambda x: 0 if x < 0 else 1
# Seed the random number generator so that
# the results are reproducible
random.seed( 18 ) # some arbitrary value
# Initialize an array of length 3 with zeros
w = zeros(3)
# The number of iterations. Found empirically by trial and error
iterations = 25
# Start of the training
def fit(iterations, training_data_set, w):
    """ Learning in the perceptron
        iterations: the number of training steps
        training_data_set: the training examples
        w: the initial weights
    """
    errors = []
    weights = []
    for i in range(iterations):
        # pick a random training example
        training_data = choice(training_data_set)
        x = training_data[0]
        y = training_data[1]
        # Compute the predicted output: weighted sum followed
        # by the step function
        y_hat = heaviside(dot(w, x))
        # Compute the error as the difference between desired and
        # actual output
        error = y - y_hat
        # Collect the errors for the output
        errors.append(error)
        # Weight update = the learning step; x_i is either 0 or 1
        w += error * x
        # Collect a snapshot of the weights for later output
        # (a copy, so the stored values are not changed by later updates)
        weights.append(w.copy())
    # Return the errors and weights
    return errors, weights
# Train
# We collect the errors/weights in every step for the graphical output
errors, weights = fit(iterations, training_data_set, w)
# Print the final weight vector
w = weights[iterations-1]
print("Weight vector at the end of training:")
print(w)
# Evaluation after training
print("Evaluation at the end of training:")
for x, y in training_data_set:
    y_hat = heaviside(dot(x, w))
    print("{}: {} -> {}".format(x, y, y_hat))
#-------------------------------------------------------
# Plot of the error per training example :-)
# Starting figure number
fignr = 1
# Print size in inches
plt.figure(fignr, figsize=(10,10))
# Plot the errors
plt.plot(errors)
# Grid
plt.style.use('seaborn-whitegrid')
# Labels
plt.xlabel('Iterations')
# y-axis label using LaTeX
plt.ylabel(r"$(y - \hat y)$")
```
## Estimator and Predictor
```python
# Numpy helps us with the arrays
import numpy as np
# These are our base classes
from sklearn.base import BaseEstimator, ClassifierMixin
# Validation routines for the consistency of the data, etc.
from sklearn.utils.validation import check_X_y, check_is_fitted, check_random_state
# For storing the distinct target values
from sklearn.utils.multiclass import unique_labels
# Very important, otherwise the plot is not displayed
%matplotlib inline
# Our estimator, suitably named, together with the base classes
class PerceptronEstimator(BaseEstimator, ClassifierMixin):
    # Initialization
    def __init__(self, n_iterations=20, random_state=None):
        """ Initialization of the objects
            n_iterations: number of iterations for learning
            random_state: to guarantee reproducibility, a
                numpy.random.RandomState object is constructed,
                seeded with random_state
        """
        # The number of iterations
        self.n_iterations = n_iterations
        # The seed for the random number generator
        self.random_state = random_state
        # The errors during learning, buffered for the plot
        self.errors = []
    # A step function, named after the mathematician and physicist
    # Oliver Heaviside
    def heaviside(self, x):
        """ A step function
            x: the value at which the step function is evaluated
        """
        if x < 0:
            result = 0
        else:
            result = 1
        return result
    # Learning
    def fit(self, X=None, y=None):
        """ Training
            X: array-like structure of shape [N,D], where
               N = rows = number of training examples and
               D = columns = number of features
            y: array of shape [N], with N as above
        """
        # Create the random number generator (RNG)
        self.random_state_ = check_random_state(self.random_state)
        # Weight initialization
        # np.size(.,1) = number of columns
        self.w = self.random_state_.random_sample(np.size(X,1))
        # Check that X and y have the correct shape: X.shape[0] = y.shape[0]
        X, y = check_X_y(X, y)
        # Store the distinct target values
        self.classes_ = unique_labels(y)
        # Store the training data for the later check in the predict method
        self.X_ = X
        self.y_ = y
        # Learning
        for i in range(self.n_iterations):
            # random shuffling, for batch size = 1
            # np.size(.,0) = number of rows
            rand_index = self.random_state_.randint(0,np.size(X,0))
            # A random input vector
            x_ = X[rand_index]
            # The corresponding output
            y_ = y[rand_index]
            # Compute the predicted output:
            # weighted sum followed by the step function
            y_hat = self.heaviside(dot(self.w, x_))
            # Compute the error as the difference between desired and
            # actual output
            error = y_ - y_hat
            # Collect the errors for the output
            self.errors.append(error)
            # Weight update = the learning step
            self.w += error * x_
        # Return the estimator so that calls can be chained
        return self
    # Prediction
    def predict(self, x):
        """ Evaluate a vector
            x: a test input vector
        """
        # Check whether fit has already been called
        # (the data were set in the fit method)
        check_is_fitted(self, ['X_', 'y_'])
        # Evaluate, forward path
        y_hat = self.heaviside(dot(self.w,x))
        return y_hat
    # Plot
    def plot(self):
        """ Plot the error
            Plot the errors stored in the error array
        """
        # Starting figure number
        fignr = 1
        # Print size in inches
        plt.figure(fignr, figsize=(5,5))
        # Plot the errors
        plt.plot(self.errors)
        # Grid
        plt.style.use('seaborn-whitegrid')
        # Labels
        plt.xlabel('Iterations')
        plt.ylabel(r"$(y - \hat y)$")
# Training data
X = np.array([[1,0,0], [1,0,1], [1,1,0],[1,1,1]])
y = np.array([0,1,1,1])
# Learning
Perceptron = PerceptronEstimator(20,10)
Perceptron.fit(X,y)
# Test data
x = np.array([1,0,0])
# Evaluation
for index, x in enumerate(X):
    p = Perceptron.predict(x)
    print("{}: {} -> {}".format(x, y[index],p))
# Show the graph
Perceptron.plot()
```
## Scikit-Learn Perceptron
```python
# Scikit-Learn Perceptron
import numpy as np
# The already familiar Iris dataset
from sklearn.datasets import load_iris
# Ladies and gentlemen: the Perceptron
from sklearn.linear_model import Perceptron
# Load the Iris dataset
iris = load_iris()
# The input vectors for learning:
# 150 samples, here using 2 features
X = iris.data[:,(2,3)] # petal length, petal width
# The desired values
y = iris.target
# Instantiate the Perceptron
# random_state = seed for the random number generator
# max_iter = maximum number of iterations
# tol = stopping criterion
Perceptron = Perceptron(random_state=49,max_iter=100000,tol=None)
# Learn, please
Perceptron.fit(X,y)
# and evaluate: Iris-setosa, Iris-versicolor, Iris-virginica
y_prediction = Perceptron.predict([ [1.4,0.2], [3.5,1.0], [6.0,2.5]])
# and of course do not forget the output
print(y_prediction)
```
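As a quick follow-up (a small sketch, not in the original), the fitted model can also be scored on the full training set:
```python
# Mean accuracy of the fitted Perceptron on the training data.
print("Training accuracy: {:.3f}".format(Perceptron.score(X, y)))
```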
## Adaline
```python
from sklearn.base import BaseEstimator, ClassifierMixin
# Validation routines
from sklearn.utils.validation import check_X_y, check_array, check_is_fitted, check_random_state
# Storing
from sklearn.utils.multiclass import unique_labels
# Very important, otherwise the plot is not displayed
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from random import choice
import math
import sympy
class AdalineEstimator(BaseEstimator, ClassifierMixin):
    # Init
    def __init__(self, eta=.001, n_iterations=500, random_state=None):
        """ Initialization of the objects
            eta: learning rate
            n_iterations: number of iterations for learning
            random_state: to guarantee reproducibility, a numpy.random.RandomState
                object is constructed, seeded with random_state
        """
        # The number of iterations
        self.n_iterations = n_iterations
        # The learning rate
        self.eta = eta
        # The seed for the random number generator
        self.random_state = random_state
        # The errors during learning
        self.errors = []
        # Weights for the computation in the neural network
        self.w = []
        # All weights for the plot, used to draw the separating lines
        self.wAll = []
    # The weighted input
    def net_i(self, x):
        """ Compute the weighted input w*x
            x: a vector
        """
        return np.dot(x, self.w)
    # Activation function, as discussed in Chapter 1
    def activation(self, x):
        """ Linear activation function
        """
        return self.net_i(x)
    # Output function; the output can be 1 or -1,
    # in contrast to the perceptron, which outputs 1 or 0
    def output(self, x):
        """ Output function
        """
        if self.activation(x) >= 0.0:
            return 1
        else:
            return -1
    # Learning
    def fit(self, X=None, y=None):
        """ Training
            X: array-like structure of shape [N,D], where
               N = rows = number of training examples and
               D = columns = number of features
            y: array of shape [N], with N as above
        """
        # Create the random number generator (RNG)
        self.random_state_ = check_random_state(self.random_state)
        # Weight initialization
        self.w = self.random_state_.random_sample(np.size(X,1)) # np.size(.,1) = number of columns
        # Check that X and y have the correct shape: X.shape[0] = y.shape[0]
        X, y = check_X_y(X, y)
        # Store the training data for later use
        self.X_ = X
        self.y_ = y
        # Learning with gradient descent
        for i in range(self.n_iterations):
            # random shuffling, for batch size = 1
            rand_index = self.random_state_.randint(0,np.size(X,0)) # np.size(.,0) = number of rows
            # A random input vector
            x_ = X[rand_index]
            # The corresponding output (+1,-1)
            y_ = y[rand_index]
            # compute the net input
            net_j = np.dot(x_, self.w)
            # error between desired output and net input
            error = (y_ - net_j)**2
            self.errors.append(error)
            # Online learning
            # Variant 1: spelled out step by step; comment out when using variant 2
            for j in range(3):
                weight = {}
                self.w[j] += self.eta * x_[j] * (y_ - net_j)
                weight[0] = self.w[0]
                weight[1] = self.w[1]
                weight[2] = self.w[2]
                self.wAll.append(weight) # 3 * n_iterations
            # Variant 2: more compact; comment out when using variant 1
            # self.w += self.eta * x_ * (y_ - net_j)
            # self.wAll.append(self.w)
    # Prediction
    def predict(self,x):
        """ Evaluate a vector
            x: a test input vector
        """
        # Check whether fit has been called
        check_is_fitted(self, ['X_', 'y_'])
        return self.output(x)
    # Display
    def plot(self):
        x1 = []
        x2 = []
        colors = []
        for i in range(self.X_.shape[0]):
            x1.append(self.X_[i][1])
            x2.append(self.X_[i][2])
            y = self.y_[i]
            if y == 1:
                colors.append('r') # red
            else:
                colors.append('b') # blue
        # Errors
        plt.plot(self.errors)
        # Learning curve
        plt.figure(1)
        plt.show()
        # Scatter
        plt.figure(2)
        plt.scatter(x1, x2, c=colors)
        # Result line
        x1Line = np.linspace(0.0, 1.0, 2)
        x2Line = lambda x1, w0, w1, w2: (-x1*w1 - w0) / w2
        alpha = 0.0
        for idx, weight in enumerate(self.wAll):
            # alpha = transparency: the closer to the final result, the darker
            if (idx % 500 == 0):
                alpha = 1.0 #( idx / len(self.wAll) )
                plt.plot(x1Line, x2Line(x1Line,weight[0],weight[1],weight[2]), alpha=alpha , linestyle='solid', label=str(idx), linewidth=1.5)
        # Resulting line
        plt.plot(x1Line, x2Line(x1Line,weight[0],weight[1],weight[2]), alpha=alpha , linestyle='solid', label=str(idx), linewidth=2.0)
        plt.legend(loc='best', shadow=True)
### MAIN ###
# Create the random number generator (RNG)
random_state = check_random_state(10)
# Initialization
I = []
o = []
# Build the dataset
for x in random_state.random_sample(20):
    y = random_state.random_sample()
    I.append([1, x, y+1.0]) # If +0.0 instead, the two sets overlap
    o.append(1)
for x in random_state.random_sample(20):
    y = random_state.random_sample()
    I.append([1, x, y-1.0]) # If +0.0 instead, the two sets overlap
    o.append(-1)
# build numpy arrays
X = np.array(I)
y = np.array(o)
# Let the estimator do its job
Adaline = AdalineEstimator(eta=0.01,n_iterations=500, random_state=10)
Adaline.fit(X,y)
Adaline.plot()
```
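A short usage sketch (not part of the original): classify the training points with the trained Adaline and report a simple accuracy figure. `Adaline`, `X` and `y` are the objects created in the cell above.
```python
# Predict every training point and compare with the desired labels.
predictions = np.array([Adaline.predict(x_i) for x_i in X])
print("Training accuracy: {:.2f}".format(np.mean(predictions == y)))
```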
<a href="https://colab.research.google.com/github/pedroblossbraga/MachineLearning-Pre-Processing-with-Python/blob/master/Effects_of_transformations_in_XGBoostRegressor_example.ipynb" target="_parent"></a>
## Tests with different data transformations applied to XGBoost
Samples:
- original data
- linearly transformed data
- minmax scaled data
- pseudo-random data
```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.impute import SimpleImputer
from xgboost import XGBRegressor
from sklearn.metrics import mean_absolute_error
```
```python
from sklearn.metrics import mean_squared_error
```
```python
import matplotlib.pyplot as plt
import seaborn as sns
import statistics
from IPython.display import display
```
```python
import warnings
warnings.filterwarnings("ignore")
```
```python
from sklearn.datasets import load_boston
X, y = load_boston(return_X_y=True)
```
```python
train_X, test_X, train_y, test_y = train_test_split(X, y, test_size=0.2)
```
```python
# handle missing values
df_imputer = SimpleImputer()
train_X = df_imputer.fit_transform(train_X)
test_X = df_imputer.transform(test_X)
```
```python
def test_XGBoost(train_X, train_y, test_X, test_y, plot_residuals=True):
    # instantiate the XGBoost model
model = XGBRegressor()
    # fit the model
model.fit(train_X, train_y, verbose=False)
    # make predictions on the test dataset
predictions = model.predict(test_X)
print("Mean Absolute Error: {:.2f}".format(mean_absolute_error(predictions, test_y)))
if plot_residuals:
plt.figure(figsize=(15,3))
plt.subplot(1,2,1)
plt.title("predictions")
sns.distplot(predictions)
plt.axvline(statistics.mean(predictions), color='red')
plt.subplot(1,2,2)
plt.title(r'residuals $\epsilon = |\hat{y} - y|$')
sns.distplot(abs(predictions-test_y))
plt.axvline(statistics.mean(abs(predictions-test_y)), color='red')
plt.show()
return mean_absolute_error(y_pred=predictions, y_true=test_y), mean_squared_error(y_true=test_y, y_pred=predictions)
```
### original data
```python
mae0, mse0 = test_XGBoost(train_X, train_y, test_X, test_y)
```
### linearly transformed
\begin{equation}
X_1 = \{x_j + k\}_{j=1}^N, k \in \mathbb{N}
\end{equation}
```python
k=20
mae1, mse1 = test_XGBoost(train_X+k, train_y+k, test_X+k, test_y+k)
```
### Min-Max scaled
\begin{equation}
\hat{X} = \left\{ \frac{x_j - min \{ X \} }{max \{ X \} - min \{ X \} } \right\}_{j=1}^N
\end{equation}
```python
def minmaxscale(v):
return (v - v.min(axis=0))/(v.max(axis=0)-v.min(axis=0))
mae2, mse2 = test_XGBoost(
minmaxscale(train_X),
minmaxscale(train_y),
minmaxscale(test_X),
minmaxscale(test_y)
)
```
```python
import numpy as np
def randomize_matrix(X):
X_ = X.copy()
if len(X_.shape)==1: # vector
for i in range(X_.shape[0]):
X_[i] = np.random.randint(-20, 20)
else: # matrix
for lin in range(X_.shape[0]):
for col in range(X_.shape[1]):
X_[lin][col] = np.random.randint(-20, 20)
return X_
```
### Pseudo-random data
```python
mae3, mse3 = test_XGBoost(
randomize_matrix(train_X),
randomize_matrix(train_y),
randomize_matrix(test_X),
randomize_matrix(test_y)
)
```
```python
erros = {
'MAE': [mae0, mae1, mae2, mae3],
'MSE': [mse0, mse1, mse2, mse3],
'transf': ['original', 'linear', 'minmax', 'pseudo-random']
}
display(pd.DataFrame(erros))
```
|   | MAE       | MSE        | transf        |
|---|-----------|------------|---------------|
| 0 | 1.947568  | 7.555698   | original      |
| 1 | 1.960215  | 8.868490   | linear        |
| 2 | 0.057456  | 0.006035   | minmax        |
| 3 | 11.327111 | 170.878044 | pseudo-random |
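Note that the min-max row is not directly comparable to the others: its errors are measured in the scaled units of the target. A minimal sketch (not in the original, reusing `mae2`, `mse2` and the Boston target `y` from above) rescales them back to roughly the original units; this is only approximate, since the train and test targets were scaled with their own ranges.
```python
# Rescale the min-max errors back to (approximately) the original target units.
y_range = y.max() - y.min()
print("MAE in original units: {:.3f}".format(mae2 * y_range))
print("MSE in original units: {:.3f}".format(mse2 * y_range**2))
```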
# Numerical norm bounds for cartpole
For a cartpole system with state $y = \begin{bmatrix}x & \dot{x} & \varphi & \dot{\varphi} \end{bmatrix}^T$ we have
\begin{equation}
\dot{y} = \begin{bmatrix}
\dot{x} \\
\frac{u + m_p \sin\varphi (l \dot{\varphi}^2 - g \cos\varphi)}{m_c + m_p \sin^2\varphi} \\
\dot{\varphi} \\
\frac{-u\cos\varphi - m_p l \dot{\varphi}^2 \cos\varphi \sin\varphi + (m_c+m_p)g\sin\varphi}{l(m_c + m_p \sin^2\varphi)}
\end{bmatrix}.
\end{equation}
Evaluating the corresponding Jacobian at 0 (in the state and action space) yields:
\begin{equation}
\nabla_{\begin{bmatrix} y & u \end{bmatrix}^T} f(0)\begin{bmatrix} y \\ u \end{bmatrix} = \begin{bmatrix} \dot{x} \\ \frac{-m_p g \varphi + u}{m_c}\\
\dot{\varphi} \\
\frac{g(m_c + m_p)\varphi - u}{l m_c}
\end{bmatrix}
\end{equation}
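As a quick sanity check (a minimal sketch with sympy, not part of the original pipeline), we can recover this Jacobian symbolically from the dynamics above:
```python
import sympy as sp

# Symbols carry an _s suffix to avoid clashing with the numerical
# g, l, mc, mp defined later in the notebook.
x_s, xd_s, phi_s, phid_s, u_s = sp.symbols('x xdot varphi varphidot u')
mc_s, mp_s, l_s, g_s = sp.symbols('m_c m_p l g', positive=True)

den = mc_s + mp_s * sp.sin(phi_s)**2
f = sp.Matrix([
    xd_s,
    (u_s + mp_s * sp.sin(phi_s) * (l_s * phid_s**2 - g_s * sp.cos(phi_s))) / den,
    phid_s,
    (-u_s * sp.cos(phi_s) - mp_s * l_s * phid_s**2 * sp.cos(phi_s) * sp.sin(phi_s)
     + (mc_s + mp_s) * g_s * sp.sin(phi_s)) / (l_s * den),
])
J0 = f.jacobian([x_s, xd_s, phi_s, phid_s, u_s]).subs(
    {x_s: 0, xd_s: 0, phi_s: 0, phid_s: 0, u_s: 0})
sp.simplify(J0)  # should reproduce the linearization displayed above
```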
We want to find an NLDI of the form
\begin{equation}
\dot{y} = \nabla_{\begin{bmatrix} y & u \end{bmatrix}^T} f(0) \begin{bmatrix} y \\ u \end{bmatrix} + I p, \;\; \|p\| \leq \|Cy + Du\|
\end{equation}
To find $C$ and $D$, we determine an entry-wise norm bound. That is, for $i=1,\ldots,4$, we want to find $F_i$ such that for all $y$ with $y_{\text{min}} \leq y \leq y_{\text{max}}$ and all $u$ with $u_{\text{min}} \leq u \leq u_{\text{max}}$:
\begin{equation}
(\nabla f_i(0) \begin{bmatrix} y \\ u \end{bmatrix} - \dot{y}_i)^2 \leq \begin{bmatrix} y \\ u \end{bmatrix}^T F_i \begin{bmatrix} y \\ u \end{bmatrix}
\end{equation}
and then write
\begin{equation}
\|\dot{y} - \nabla f(0)\begin{bmatrix} y \\ u \end{bmatrix}\|_2 \leq \|\begin{bmatrix} F_1^{1/2} \\ F_2^{1/2} \\ F_3^{1/2} \\ F_4^{1/2} \end{bmatrix} \begin{bmatrix} y \\ u \end{bmatrix} \| = \|\begin{bmatrix} F_1^{1/2} \\ F_2^{1/2} \\ F_3^{1/2} \\ F_4^{1/2} \end{bmatrix}_{:, :n} y + \begin{bmatrix} F_1^{1/2} \\ F_2^{1/2} \\ F_3^{1/2} \\ F_4^{1/2} \end{bmatrix}_{:, n:n+m} u \|,
\end{equation}
where $n = \dim(y) = 4$, $m = \dim(u) = 1$.
```python
import numpy as np
import cvxpy as cp
import scipy.linalg as sla
```
```python
g = 9.81
l = 1
mc = 1
mp = 1
```
## Define max and min values
```python
# State is: [y, u] = [x, xdot, phi, phidot, u]^T
s_max = np.array([1.2, 1.0, 0.1, 1.0, 10])
s_min = np.array([-1.2, -1.0, -0.1, -1.0, -10])
x_max, xdot_max, phi_max, phidot_max, u_max = s_max
x_min, xdot_min, phi_min, phidot_min, u_min = s_min
n = 4
m = 1
nm = n+m
x_idx, xdot_idx, phi_idx, phidot_idx, u_idx = range(nm)
```
## Find element-wise bounds
### $f_1$
No error -- linearization is the same as original
### $f_2$
```python
gridnum = 50
phi = np.linspace(phi_min, phi_max, gridnum)
phidot = np.linspace(phidot_min, phidot_max, gridnum)
u = np.linspace(u_min, u_max, gridnum)
Phi, Phidot, U = np.meshgrid(phi, phidot, u)
v2 = np.ravel(( (-mp*g*Phi + U)/mc -
(U + mp*np.sin(Phi)*(l*Phidot**2 - g*np.cos(Phi)))/(mc + mp*np.sin(Phi)**2) )**2)
U2 = np.array([np.ravel(Phi*Phi),
np.ravel(Phidot*Phidot),
np.ravel(U*U),
2*np.ravel(Phi*Phidot),
2*np.ravel(Phi*U),
2*np.ravel(Phidot*U)]).T
```
```python
c2 = cp.Variable(6)
cp.Problem(cp.Minimize(cp.max(U2@c2 - v2)), [U2@c2 >= v2, c2[:3]>=0]).solve(verbose=True, solver=cp.MOSEK)
```
Problem
Name :
Objective sense : min
Type : LO (linear optimization problem)
Constraints : 250003
Cones : 0
Scalar variables : 7
Matrix variables : 0
Integer variables : 0
Optimizer started.
Problem
Name :
Objective sense : min
Type : LO (linear optimization problem)
Constraints : 250003
Cones : 0
Scalar variables : 7
Matrix variables : 0
Integer variables : 0
Optimizer - threads : 2
Optimizer - solved problem : the dual
Optimizer - Constraints : 7
Optimizer - Cones : 0
Optimizer - Scalar variables : 122381 conic : 0
Optimizer - Semi-definite variables: 0 scalarized : 0
Factor - setup time : 0.06 dense det. time : 0.00
Factor - ML order time : 0.00 GP order time : 0.00
Factor - nonzeros before factor : 28 after factor : 28
Factor - dense dim. : 0 flops : 6.01e+06
ITE PFEAS DFEAS GFEAS PRSTATUS POBJ DOBJ MU TIME
0 1.5e+00 9.1e+04 2.2e+01 0.00e+00 0.000000000e+00 1.625858229e+01 2.1e+00 0.39
1 3.6e-01 2.1e+04 5.2e+00 3.53e+01 4.163932841e-02 1.532790485e-01 4.9e-01 0.44
2 6.7e-02 4.0e+03 9.8e-01 1.68e+00 3.515579040e-02 4.940871768e-02 9.3e-02 0.48
3 2.7e-03 5.0e+02 1.5e-01 1.56e+00 2.985705958e-02 3.168708727e-02 1.1e-02 0.50
4 9.3e-04 1.7e+02 5.2e-02 1.50e+00 2.376829379e-02 2.427345456e-02 3.8e-03 0.53
5 5.9e-04 1.1e+02 3.2e-02 1.29e+00 2.174807682e-02 2.204270508e-02 2.4e-03 0.57
6 5.8e-04 1.1e+02 3.2e-02 1.17e+00 2.174438032e-02 2.203805958e-02 2.4e-03 0.60
7 3.4e-04 6.2e+01 1.9e-02 1.17e+00 2.063719469e-02 2.080234539e-02 1.4e-03 0.63
8 3.4e-04 6.2e+01 1.9e-02 1.05e+00 2.066347025e-02 2.082840854e-02 1.4e-03 0.67
9 2.3e-04 4.1e+01 1.3e-02 1.06e+00 2.047042791e-02 2.057992248e-02 9.4e-04 0.69
10 1.9e-04 3.4e+01 1.0e-02 1.03e+00 2.047127610e-02 2.056123143e-02 7.7e-04 0.73
11 1.3e-05 2.4e+00 7.2e-04 1.03e+00 2.039796556e-02 2.040419103e-02 5.4e-05 0.75
12 2.7e-06 4.9e-01 1.5e-04 1.00e+00 2.035598433e-02 2.035726696e-02 1.1e-05 0.78
13 5.0e-07 9.0e-02 2.8e-05 1.00e+00 2.035544856e-02 2.035568516e-02 2.0e-06 0.80
14 4.5e-08 8.2e-03 2.5e-06 1.00e+00 2.035505409e-02 2.035507569e-02 1.9e-07 0.83
15 4.6e-12 8.3e-07 2.5e-10 1.00e+00 2.035504100e-02 2.035504101e-02 1.9e-11 0.86
Basis identification started.
Primal basis identification phase started.
Primal basis identification phase terminated. Time: 0.00
Dual basis identification phase started.
Dual basis identification phase terminated. Time: 0.00
Basis identification terminated. Time: 0.04
Optimizer terminated. Time: 0.99
Interior-point solution summary
Problem status : PRIMAL_AND_DUAL_FEASIBLE
Solution status : OPTIMAL
Primal. obj: 2.0355041003e-02 nrm: 2e+00 Viol. con: 4e-14 var: 0e+00
Dual. obj: 2.0355041005e-02 nrm: 3e-01 Viol. con: 0e+00 var: 2e-12
Basic solution summary
Problem status : PRIMAL_AND_DUAL_FEASIBLE
Solution status : OPTIMAL
Primal. obj: 2.0355041003e-02 nrm: 2e+00 Viol. con: 1e-11 var: 0e+00
Dual. obj: 2.0355041002e-02 nrm: 4e-01 Viol. con: 1e-17 var: 1e-14
0.020355041002912407
```python
c2 = c2.value
c2
```
array([ 1.73328132e+00, 1.28470585e-02, 2.62093606e-05, 5.11706867e-13,
-6.42813415e-03, 1.55358430e-15])
```python
C2 = np.zeros((nm,nm))
C2[phi_idx, phi_idx] = c2[0]/2
C2[phidot_idx, phidot_idx] = c2[1]/2
C2[u_idx, u_idx] = c2[2]/2
C2[phi_idx, phidot_idx] = c2[3]
C2[phi_idx, u_idx] = c2[4]
C2[phidot_idx, u_idx] = c2[5]
C2 += C2.T
```
```python
np.linalg.eig(C2)[0]
```
array([1.28470585e-02, 1.73330516e+00, 2.36962697e-06, 0.00000000e+00,
0.00000000e+00])
```python
gam2 = np.real(sla.sqrtm(C2))
gam2
```
array([[ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00],
[ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00],
[ 0.00000000e+00, 0.00000000e+00, 1.31653239e+00,
3.57965622e-13, -4.87685586e-03],
[ 0.00000000e+00, 0.00000000e+00, 3.57965622e-13,
1.13344865e-01, 2.87142270e-14],
[ 0.00000000e+00, 0.00000000e+00, -4.87685586e-03,
2.87142270e-14, 1.55744585e-03]])
### $f_3$
No error -- linearization is the same as original
### $f_4$
```python
gridnum = 50
phi = np.linspace(phi_min, phi_max, gridnum)
phidot = np.linspace(phidot_min, phidot_max, gridnum)
u = np.linspace(u_min, u_max, gridnum)
""
Phi, Phidot, U = np.meshgrid(phi, phidot, u)
""
v4 = np.ravel(( (g*(mc+mp)*Phi - U)/(l*mc) -
(U*np.cos(Phi) + mp*l*(Phidot**2)*np.cos(Phi)*np.sin(Phi) - (mc+mp)*g*np.sin(Phi))/(
-l*(mc + mp*np.sin(Phi)**2))
)**2)
U4 = np.array([np.ravel(Phi*Phi),
np.ravel(Phidot*Phidot),
np.ravel(U*U),
2*np.ravel(Phi*Phidot),
2*np.ravel(Phi*U),
2*np.ravel(Phidot*U)]).T
```
```python
c4 = cp.Variable(6)
cp.Problem(cp.Minimize(cp.max(U4@c4 - v4)), [U4@c4 >= v4, c4[:3]>=0]).solve(verbose=True, solver=cp.MOSEK)
```
Problem
Name :
Objective sense : min
Type : LO (linear optimization problem)
Constraints : 250003
Cones : 0
Scalar variables : 7
Matrix variables : 0
Integer variables : 0
Optimizer started.
Problem
Name :
Objective sense : min
Type : LO (linear optimization problem)
Constraints : 250003
Cones : 0
Scalar variables : 7
Matrix variables : 0
Integer variables : 0
Optimizer - threads : 2
Optimizer - solved problem : the dual
Optimizer - Constraints : 7
Optimizer - Cones : 0
Optimizer - Scalar variables : 122381 conic : 0
Optimizer - Semi-definite variables: 0 scalarized : 0
Factor - setup time : 0.06 dense det. time : 0.00
Factor - ML order time : 0.00 GP order time : 0.00
Factor - nonzeros before factor : 28 after factor : 28
Factor - dense dim. : 0 flops : 6.01e+06
ITE PFEAS DFEAS GFEAS PRSTATUS POBJ DOBJ MU TIME
0 1.6e+00 9.2e+04 3.9e+01 0.00e+00 0.000000000e+00 2.778799058e+01 2.1e+00 0.41
1 3.0e-01 1.7e+04 7.4e+00 2.87e+01 6.326255012e-02 2.890161114e-01 4.1e-01 0.46
2 3.7e-02 2.2e+03 9.2e-01 1.47e+00 5.633976511e-02 7.704543125e-02 5.0e-02 0.50
3 1.1e-03 4.5e+02 2.7e-01 1.58e+00 4.680001018e-02 5.190181357e-02 1.0e-02 0.53
4 3.0e-04 1.3e+02 7.6e-02 1.37e+00 3.869236676e-02 3.989925610e-02 2.8e-03 0.56
5 1.5e-04 6.1e+01 3.7e-02 1.34e+00 3.354007279e-02 3.406089289e-02 1.4e-03 0.59
6 1.5e-04 6.1e+01 3.7e-02 1.16e+00 3.354082164e-02 3.406080416e-02 1.4e-03 0.62
7 1.1e-04 4.7e+01 2.8e-02 1.16e+00 3.276598661e-02 3.315636257e-02 1.1e-03 0.66
8 8.2e-05 3.4e+01 2.0e-02 1.09e+00 3.244980364e-02 3.273125973e-02 7.7e-04 0.68
9 7.8e-05 3.2e+01 1.9e-02 1.04e+00 3.248495838e-02 3.275085703e-02 7.2e-04 0.71
10 9.4e-06 3.9e+00 2.3e-03 1.03e+00 3.232963484e-02 3.236145378e-02 8.7e-05 0.74
11 1.9e-06 8.0e-01 4.8e-04 1.01e+00 3.227233826e-02 3.227891845e-02 1.8e-05 0.77
12 1.4e-07 5.8e-02 3.5e-05 1.00e+00 3.226362929e-02 3.226411067e-02 1.3e-06 0.80
13 4.9e-09 2.0e-03 1.2e-06 1.00e+00 3.226272756e-02 3.226274403e-02 4.5e-08 0.82
14 8.0e-13 3.3e-07 2.0e-10 1.00e+00 3.226271785e-02 3.226271786e-02 7.4e-12 0.84
Basis identification started.
Primal basis identification phase started.
Primal basis identification phase terminated. Time: 0.00
Dual basis identification phase started.
Dual basis identification phase terminated. Time: 0.00
Basis identification terminated. Time: 0.03
Optimizer terminated. Time: 0.97
Interior-point solution summary
Problem status : PRIMAL_AND_DUAL_FEASIBLE
Solution status : OPTIMAL
Primal. obj: 3.2262717854e-02 nrm: 3e+00 Viol. con: 1e-14 var: 0e+00
Dual. obj: 3.2262717857e-02 nrm: 5e-01 Viol. con: 0e+00 var: 2e-11
Basic solution summary
Problem status : PRIMAL_AND_DUAL_FEASIBLE
Solution status : OPTIMAL
Primal. obj: 3.2262717854e-02 nrm: 3e+00 Viol. con: 1e-12 var: 0e+00
Dual. obj: 3.2262717853e-02 nrm: 5e-01 Viol. con: 0e+00 var: 7e-15
0.032262717853947874
```python
c4 = c4.value
c4
```
array([ 2.86341906e+00, 1.43252388e-02, 8.28430784e-05, -1.71779544e-12,
-1.05858385e-02, -1.88719950e-14])
```python
C4 = np.zeros((nm,nm))
C4[phi_idx, phi_idx] = c4[0]/2
C4[phidot_idx, phidot_idx] = c4[1]/2
C4[u_idx, u_idx] = c4[2]/2
C4[phi_idx, phidot_idx] = c4[3]
C4[phi_idx, u_idx] = c4[4]
C4[phidot_idx, u_idx] = c4[5]
C4 += C4.T
```
```python
np.linalg.eig(C4)[0]
```
array([1.43252388e-02, 2.86345820e+00, 4.37074558e-05, 0.00000000e+00,
0.00000000e+00])
```python
gam4 = np.real(sla.sqrtm(C4))
gam4
```
array([[ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00],
[ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00],
[ 0.00000000e+00, 0.00000000e+00, 1.69215254e+00,
-9.48768893e-13, -6.23141107e-03],
[ 0.00000000e+00, 0.00000000e+00, -9.48768893e-13,
1.19688090e-01, -1.96197871e-13],
[ 0.00000000e+00, 0.00000000e+00, -6.23141107e-03,
-1.96197871e-13, 6.63419885e-03]])
## Final system
```python
A = np.array([[0, 1, 0, 0],
[0, 0, -mp*g/mc, 0],
[0, 0, 0, 1],
[0, 0, g*(mc+mp)/(l*mc), 0]])
B = np.expand_dims(np.array([0, 1/mc, 0, -1/(l*mc)]), 1)
# G_inv = np.array([[1, 0, 0, 0],
# [0, a2[0], 0, a2[1]],
# [0, 0, 1, 0],
# [0, a4[0], 0, a4[1]]])
# G = np.linalg.inv(G_inv)
G = np.eye(4)
F = np.vstack([gam2, gam4])
C = F[:,:n]
D = F[:,n:]
```
### Check correctness
```python
from IPython.core.debugger import set_trace
```
```python
def xdot_f(state, u):
# x = state[:, 0]
x_dot = state[:, 1]
theta = state[:, 2]
theta_dot = state[:, 3]
u = u.flatten()
sin_theta = np.sin(theta)
cos_theta = np.cos(theta)
temp = 1/(mc + mp * (sin_theta * sin_theta))
x_ddot = temp * (u + mp * np.sin(theta) * (l * (theta_dot**2)
- g * cos_theta))
theta_ddot = -(1/l) * temp * (u * cos_theta
+ mp * l * (theta_dot**2) * cos_theta * sin_theta
- (mc + mp) * g * sin_theta)
return np.vstack([x_dot, x_ddot, theta_dot, theta_ddot]).T
```
```python
prop = np.random.random((10000, nm))
rand_ss = s_max*prop + s_min*(1-prop)
rand_ys = rand_ss[:,:n]
rand_us = rand_ss[:,n:]
fx = xdot_f(rand_ys, rand_us)
# print(np.linalg.norm((fx - rand_ys@A.T - rand_us@B.T)@np.linalg.inv(G).T, axis=1) <= \
# np.linalg.norm(rand_ys@C.T + rand_us@D.T, axis=1))
print((np.linalg.norm((fx - rand_ys@A.T - rand_us@B.T)@np.linalg.inv(G).T, axis=1) <= \
np.linalg.norm(rand_ys@C.T + rand_us@D.T, axis=1)).all())
ratio = np.linalg.norm(rand_ys@C.T + rand_us@D.T, axis=1)/np.linalg.norm(
(fx - rand_ys@A.T - rand_us@B.T)@np.linalg.inv(G).T, axis=1)
print(ratio.max())
print(ratio.mean())
print(np.median(ratio))
```
True
84078.10099634247
69.84146506508074
5.992302081027291
### Save
```python
np.save('A.npy', A)
np.save('B.npy', B)
np.save('G.npy', G)
np.save('C.npy', C)
np.save('D.npy', D)
```
## Check if robust LQR solves
```python
import scipy.linalg as la
```
```python
Q = np.random.randn(n, n)
Q = Q.T @ Q
# Q = np.eye(n)
R = np.random.randn(m, m)
R = R.T @ R
# R = np.eye(m)
```
```python
alpha = 0.001
n, m = B.shape
wq = C.shape[0]
S = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((m, n))
mu = cp.Variable()
R_sqrt = la.sqrtm(R)
f = cp.trace(S @ Q) + cp.matrix_frac(Y.T @ R_sqrt, S)
cons_mat = cp.bmat((
(A @ S + S @ A.T + cp.multiply(mu, G @ G.T) + B @ Y + Y.T @ B.T + alpha * S, S @ C.T + Y.T @ D.T),
(C @ S + D @ Y, -cp.multiply(mu, np.eye(wq)))
))
cons = [S >> 0, mu >= 1e-2] + [cons_mat << 0]
cp.Problem(cp.Minimize(f), cons).solve(solver=cp.MOSEK, verbose=True)
K = np.linalg.solve(S.value, Y.value.T).T
```
Problem
Name :
Objective sense : min
Type : CONIC (conic optimization problem)
Constraints : 259
Cones : 0
Scalar variables : 31
Matrix variables : 3
Integer variables : 0
Optimizer started.
Problem
Name :
Objective sense : min
Type : CONIC (conic optimization problem)
Constraints : 259
Cones : 0
Scalar variables : 31
Matrix variables : 3
Integer variables : 0
Optimizer - threads : 2
Optimizer - solved problem : the primal
Optimizer - Constraints : 237
Optimizer - Cones : 1
Optimizer - Scalar variables : 17 conic : 16
Optimizer - Semi-definite variables: 3 scalarized : 130
Factor - setup time : 0.00 dense det. time : 0.00
Factor - ML order time : 0.00 GP order time : 0.00
Factor - nonzeros before factor : 2.28e+04 after factor : 2.28e+04
Factor - dense dim. : 0 flops : 2.96e+06
ITE PFEAS DFEAS GFEAS PRSTATUS POBJ DOBJ MU TIME
0 1.0e+00 1.0e+01 1.0e+00 0.00e+00 0.000000000e+00 0.000000000e+00 1.0e+00 0.01
1 1.9e-01 1.9e+00 1.4e-01 -2.81e-01 -2.281270483e-01 7.099763692e-03 1.9e-01 0.04
2 4.4e-02 4.3e-01 1.4e-02 8.16e-01 -1.864024979e-02 7.937994766e-03 4.3e-02 0.05
3 5.6e-03 5.6e-02 5.5e-04 1.07e+00 1.033468323e-02 1.059336267e-02 5.5e-03 0.06
4 1.5e-03 1.5e-02 1.1e-04 9.37e-01 6.738530686e-02 6.977829923e-02 1.5e-03 0.07
5 4.3e-04 4.2e-03 3.2e-05 6.82e-03 1.795084991e-01 1.833424876e-01 4.2e-04 0.07
6 8.6e-05 8.5e-04 9.4e-06 -1.86e-01 4.157593139e-01 4.266345494e-01 8.5e-05 0.08
7 1.9e-05 1.9e-04 4.2e-06 -7.94e-01 1.197600207e+00 1.244557314e+00 1.9e-05 0.09
8 2.9e-06 2.9e-05 6.6e-07 -5.18e-01 2.986824762e+00 3.040107799e+00 2.9e-06 0.09
9 6.2e-07 6.2e-06 7.6e-08 3.93e-01 4.314244561e+00 4.329628937e+00 6.1e-07 0.10
10 8.3e-08 8.3e-07 4.0e-09 8.88e-01 4.623217394e+00 4.625594375e+00 8.2e-08 0.10
11 8.8e-09 8.7e-08 1.6e-10 9.85e-01 4.682839944e+00 4.683176564e+00 8.6e-09 0.11
12 1.1e-09 2.1e-10 2.0e-14 9.98e-01 4.688770677e+00 4.688771600e+00 2.3e-11 0.12
13 5.0e-10 5.2e-10 7.3e-15 1.00e+00 4.688778732e+00 4.688779198e+00 1.2e-11 0.12
14 4.7e-10 2.5e-10 4.8e-15 1.00e+00 4.688780771e+00 4.688781120e+00 8.8e-12 0.13
15 4.7e-10 2.5e-10 4.8e-15 1.00e+00 4.688780771e+00 4.688781120e+00 8.8e-12 0.14
16 4.5e-10 2.3e-10 4.6e-15 1.00e+00 4.688780962e+00 4.688781301e+00 8.5e-12 0.15
17 4.4e-10 1.9e-10 4.4e-15 1.00e+00 4.688781055e+00 4.688781389e+00 8.4e-12 0.15
18 4.4e-10 1.8e-10 4.4e-15 1.00e+00 4.688781077e+00 4.688781410e+00 8.4e-12 0.16
19 4.4e-10 1.8e-10 4.4e-15 1.00e+00 4.688781083e+00 4.688781415e+00 8.4e-12 0.17
20 4.4e-10 1.8e-10 4.4e-15 1.00e+00 4.688781085e+00 4.688781417e+00 8.4e-12 0.17
21 4.4e-10 1.8e-10 4.4e-15 1.00e+00 4.688781085e+00 4.688781417e+00 8.4e-12 0.18
22 4.4e-10 1.8e-10 4.4e-15 1.00e+00 4.688781085e+00 4.688781417e+00 8.4e-12 0.18
Optimizer terminated. Time: 0.20
Interior-point solution summary
Problem status : PRIMAL_AND_DUAL_FEASIBLE
Solution status : OPTIMAL
Primal. obj: 4.6887810853e+00 nrm: 3e+00 Viol. con: 3e-06 var: 0e+00 barvar: 0e+00
Dual. obj: 4.6887814174e+00 nrm: 4e+03 Viol. con: 0e+00 var: 1e-08 barvar: 4e-08
```python
print((rand_ys@K.T).max())
print((rand_ys@K.T).min())
print((rand_ys@K.T).mean())
print(np.median(rand_ys@K.T))
print(abs(rand_ys@K.T).mean())
print(np.median(abs(rand_ys@K.T)))
```
33.38321398489339
-33.42043056485856
0.014336330683547194
-0.047685520942526205
10.38160108794252
9.56790890554478
## Get a sense for u size in non-robust case
```python
import control
```
```python
K, S, _ = control.lqr(A, B, Q, R)
K = np.array(-K)
```
```python
print((rand_ys@K.T).max())
print((rand_ys@K.T).min())
print((rand_ys@K.T).mean())
print(np.median(rand_ys@K.T))
print(abs(rand_ys@K.T).mean())
print(np.median(abs(rand_ys@K.T)))
```
30.167968588820685
-30.21452996513471
0.015277241681704091
-0.019151745702437273
9.448202543340196
8.76269849972503
```python
c2 = cp.Variable(6)
c4 = cp.Variable(6)
a2 = cp.Variable(2)
a4 = cp.Variable(2)
prob2 = cp.Problem(cp.Minimize(cp.max(U2@c2 - v2 * a2[0])), [U2@c2 >= v2 * a2[0] + v4 * a2[1], c2[:3]>=0, a2 >= 0, cp.sum(a2) == 1])
prob2.solve(verbose=True, solver=cp.MOSEK)
prob4 = cp.Problem(cp.Minimize(cp.max(U2@c4 - v4 * a4[1])), [U4@c4 >= v2 * a4[0] + v4 * a4[1], c4[:3]>=0, a4 >= 0, cp.sum(a4) == 1])
prob4.solve(verbose=True, solver=cp.MOSEK)
# cp.Problem(cp.Minimize(cp.max(U4@c4 - v4)), [U4@c4 >= v4, c4[:3]>=0]).solve(verbose=True, solver=cp.MOSEK)
a2 = a2.value
a4 = a4.value
```
Problem
Name :
Objective sense : min
Type : LO (linear optimization problem)
Constraints : 250006
Cones : 0
Scalar variables : 9
Matrix variables : 0
Integer variables : 0
Optimizer started.
Problem
Name :
Objective sense : min
Type : LO (linear optimization problem)
Constraints : 250006
Cones : 0
Scalar variables : 9
Matrix variables : 0
Integer variables : 0
Optimizer - threads : 2
Optimizer - solved problem : the dual
Optimizer - Constraints : 9
Optimizer - Cones : 0
Optimizer - Scalar variables : 125257 conic : 0
Optimizer - Semi-definite variables: 0 scalarized : 0
Factor - setup time : 0.07 dense det. time : 0.00
Factor - ML order time : 0.00 GP order time : 0.00
Factor - nonzeros before factor : 44 after factor : 45
Factor - dense dim. : 0 flops : 9.00e+06
ITE PFEAS DFEAS GFEAS PRSTATUS POBJ DOBJ MU TIME
0 2.0e+00 1.3e+05 2.0e+00 0.00e+00 0.000000000e+00 0.000000000e+00 4.0e+00 0.53
1 2.0e+00 1.2e+05 2.0e+00 1.37e+01 1.224002180e-02 2.304281952e-01 3.9e+00 0.61
2 3.1e-01 1.9e+04 3.1e-01 1.01e+01 3.753776839e-02 3.391134664e-02 6.2e-01 0.65
3 3.2e-02 2.0e+03 3.2e-02 1.30e+00 3.398683590e-02 3.371474003e-02 6.5e-02 0.69
4 7.9e-03 5.0e+02 7.9e-03 1.35e+00 2.819589589e-02 2.813767667e-02 1.6e-02 0.73
5 2.6e-03 1.6e+02 2.6e-03 1.48e+00 2.245436813e-02 2.243856695e-02 5.2e-03 0.77
6 2.6e-03 1.6e+02 2.6e-03 1.23e+00 2.245877548e-02 2.244298639e-02 5.2e-03 0.80
7 1.6e-03 1.0e+02 1.6e-03 1.24e+00 2.102925627e-02 2.101975397e-02 3.3e-03 0.86
8 1.6e-03 1.0e+02 1.6e-03 1.09e+00 2.102538773e-02 2.101589908e-02 3.3e-03 0.91
9 1.4e-03 9.0e+01 1.4e-03 1.13e+00 2.089475367e-02 2.088642708e-02 2.9e-03 0.97
10 1.4e-03 8.7e+01 1.4e-03 1.10e+00 2.084917784e-02 2.084116391e-02 2.8e-03 1.00
11 1.2e-03 7.5e+01 1.2e-03 1.09e+00 2.079637629e-02 2.078949437e-02 2.4e-03 1.04
12 6.0e-04 3.7e+01 6.0e-04 1.07e+00 2.052228711e-02 2.051887928e-02 1.2e-03 1.07
13 1.1e-04 6.9e+00 1.1e-04 1.03e+00 2.040319965e-02 2.040256450e-02 2.2e-04 1.10
14 1.9e-05 1.2e+00 1.9e-05 1.01e+00 2.035764978e-02 2.035754016e-02 3.8e-05 1.14
15 2.4e-06 1.5e-01 2.4e-06 1.00e+00 2.035567233e-02 2.035565854e-02 4.7e-06 1.18
16 2.3e-07 1.5e-02 2.3e-07 1.00e+00 2.035509001e-02 2.035508864e-02 4.7e-07 1.21
17 7.3e-10 4.6e-05 7.3e-10 1.00e+00 2.035504100e-02 2.035504100e-02 1.5e-09 1.24
18 7.3e-10 4.6e-05 7.3e-10 1.00e+00 2.035504100e-02 2.035504100e-02 1.5e-09 1.28
Basis identification started.
Primal basis identification phase started.
Primal basis identification phase terminated. Time: 0.00
Dual basis identification phase started.
Dual basis identification phase terminated. Time: 0.00
Basis identification terminated. Time: 0.05
Optimizer terminated. Time: 1.45
Interior-point solution summary
Problem status : PRIMAL_AND_DUAL_FEASIBLE
Solution status : OPTIMAL
Primal. obj: 2.0355041001e-02 nrm: 2e+00 Viol. con: 8e-12 var: 0e+00
Dual. obj: 2.0355040997e-02 nrm: 3e-01 Viol. con: 0e+00 var: 9e-10
Basic solution summary
Problem status : PRIMAL_AND_DUAL_FEASIBLE
Solution status : OPTIMAL
Primal. obj: 2.0355041001e-02 nrm: 2e+00 Viol. con: 3e-11 var: 0e+00
Dual. obj: 2.0355041002e-02 nrm: 4e-01 Viol. con: 0e+00 var: 9e-16
Problem
Name :
Objective sense : min
Type : LO (linear optimization problem)
Constraints : 250006
Cones : 0
Scalar variables : 9
Matrix variables : 0
Integer variables : 0
Optimizer started.
Problem
Name :
Objective sense : min
Type : LO (linear optimization problem)
Constraints : 250006
Cones : 0
Scalar variables : 9
Matrix variables : 0
Integer variables : 0
Optimizer - threads : 2
Optimizer - solved problem : the dual
Optimizer - Constraints : 9
Optimizer - Cones : 0
Optimizer - Scalar variables : 125257 conic : 0
Optimizer - Semi-definite variables: 0 scalarized : 0
Factor - setup time : 0.07 dense det. time : 0.00
Factor - ML order time : 0.00 GP order time : 0.00
Factor - nonzeros before factor : 44 after factor : 45
Factor - dense dim. : 0 flops : 9.00e+06
ITE PFEAS DFEAS GFEAS PRSTATUS POBJ DOBJ MU TIME
0 2.0e+00 1.3e+05 2.0e+00 0.00e+00 0.000000000e+00 0.000000000e+00 4.0e+00 0.58
1 1.9e+00 1.2e+05 1.9e+00 9.21e+00 1.988813072e-02 1.237263300e-01 3.7e+00 0.65
2 2.1e-01 1.3e+04 2.1e-01 3.87e+00 4.440114845e-02 8.870150714e-02 4.3e-01 0.69
3 2.2e-02 1.4e+03 2.2e-02 1.23e+00 4.389134968e-02 5.036073443e-02 4.5e-02 0.72
4 3.9e-03 2.4e+02 3.9e-03 1.22e+00 3.789448769e-02 3.843203431e-02 7.7e-03 0.75
5 3.6e-03 2.2e+02 3.6e-03 1.28e+00 3.728573314e-02 3.776977959e-02 7.1e-03 0.79
6 1.3e-03 8.1e+01 1.3e-03 1.31e+00 3.254960215e-02 3.268404911e-02 2.6e-03 0.83
7 7.4e-04 4.6e+01 7.4e-04 1.15e+00 3.128041100e-02 3.135063874e-02 1.5e-03 0.87
8 7.4e-04 4.6e+01 7.4e-04 1.08e+00 3.128047126e-02 3.135060056e-02 1.5e-03 0.90
9 5.4e-04 3.4e+01 5.4e-04 1.08e+00 3.082784339e-02 3.087616144e-02 1.1e-03 0.94
10 3.1e-04 2.0e+01 3.1e-04 1.06e+00 3.055534700e-02 3.058187761e-02 6.3e-04 0.97
11 9.3e-05 5.8e+00 9.3e-05 1.02e+00 3.043514736e-02 3.044098303e-02 1.9e-04 1.01
12 1.6e-05 9.8e-01 1.6e-05 1.00e+00 3.039998840e-02 3.040078916e-02 3.1e-05 1.04
13 3.0e-06 1.9e-01 3.0e-06 1.00e+00 3.039772651e-02 3.039783211e-02 6.1e-06 1.08
14 1.5e-07 9.5e-03 1.5e-07 1.00e+00 3.039467957e-02 3.039468462e-02 3.0e-07 1.11
15 9.6e-10 6.0e-05 9.6e-10 1.00e+00 3.039464239e-02 3.039464241e-02 1.9e-09 1.13
16 9.6e-10 6.0e-05 9.6e-10 1.00e+00 3.039464239e-02 3.039464241e-02 1.9e-09 1.16
Basis identification started.
Primal basis identification phase started.
Primal basis identification phase terminated. Time: 0.01
Dual basis identification phase started.
Dual basis identification phase terminated. Time: 0.00
Basis identification terminated. Time: 0.06
Optimizer terminated. Time: 1.36
Interior-point solution summary
Problem status : PRIMAL_AND_DUAL_FEASIBLE
Solution status : OPTIMAL
Primal. obj: 3.0394642393e-02 nrm: 2e+00 Viol. con: 1e-11 var: 0e+00
Dual. obj: 3.0394642413e-02 nrm: 5e-01 Viol. con: 0e+00 var: 1e-09
Basic solution summary
Problem status : PRIMAL_AND_DUAL_FEASIBLE
Solution status : OPTIMAL
Primal. obj: 3.0394642393e-02 nrm: 2e+00 Viol. con: 1e-09 var: 0e+00
Dual. obj: 3.0394642071e-02 nrm: 4e-01 Viol. con: 0e+00 var: 7e-15
```python
C_old = np.load('C.npy')
C - C_old
```
array([[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.]])
```python
```
Source notebook: problem_gen/cartpole/Numerical Norm Bounds.ipynb (repo: locuslab/robust-nn-control, license: Apache-2.0)
```python
import numpy as np
import scipy.misc
from scipy.fftpack import dct, idct
import sys
from PIL import Image
import matplotlib
import matplotlib.pyplot as plt
import random
from tqdm._tqdm_notebook import tqdm_notebook
from scipy.fftpack import dct, idct
import seaborn as sns
from skimage.metrics import structural_similarity as ssim
import pandas as pd
import sympy
%matplotlib inline
class ImageLoader:
def __init__(self, FILE_PATH):
self.img = np.array(Image.open(FILE_PATH))
        # number of 8x8 block rows
        self.row_blocks_count = self.img.shape[0] // 8
        # number of 8x8 block columns
        self.col_blocks_count = self.img.shape[1] // 8
def get_points(self, POINT):
Row = random.randint(0, len(self.img) - POINT - 1)
Col = random.randint(0, len(self.img) - 1)
return self.img[Row : Row + POINT, Col]
def get_block(self, col, row):
return self.img[col * 8 : (col + 1) * 8, row * 8 : (row + 1) * 8]
# plt.rcParams['font.family'] ='sans-serif'  # font to use
# plt.rcParams["font.sans-serif"] = "Source Han Sans"
plt.rcParams["font.family"] = "Source Han Sans JP"  # font to use
plt.rcParams["xtick.direction"] = "in"  # x-axis tick marks point inwards ('in'), outwards ('out') or both ('inout')
plt.rcParams["ytick.direction"] = "in"  # y-axis tick marks point inwards ('in'), outwards ('out') or both ('inout')
plt.rcParams["xtick.major.width"] = 1.0  # width of the x-axis major tick lines
plt.rcParams["ytick.major.width"] = 1.0  # width of the y-axis major tick lines
plt.rcParams["font.size"] = 12  # font size
plt.rcParams["axes.linewidth"] = 1.0  # axis edge linewidth (frame thickness)
matplotlib.font_manager._rebuild()
MONO_DIR_PATH = "../../Mono/"
AIRPLANE = ImageLoader(MONO_DIR_PATH + "airplane512.bmp")
BARBARA = ImageLoader(MONO_DIR_PATH + "barbara512.bmp")
BOAT = ImageLoader(MONO_DIR_PATH + "boat512.bmp")
GOLDHILL = ImageLoader(MONO_DIR_PATH + "goldhill512.bmp")
LENNA = ImageLoader(MONO_DIR_PATH + "lenna512.bmp")
MANDRILL = ImageLoader(MONO_DIR_PATH + "mandrill512.bmp")
MILKDROP = ImageLoader(MONO_DIR_PATH + "milkdrop512.bmp")
SAILBOAT = ImageLoader(MONO_DIR_PATH + "sailboat512.bmp")
```
```python
n_bar = 5
N = 32
```
# MSDS
```python
def msds(N,arr):
w_e = 0
e_e = 0
n_e = 0
s_e = 0
nw_e = 0
ne_e = 0
sw_e = 0
se_e = 0
for row in range(arr.shape[0] // N):
for col in range(arr.shape[1] // N):
f_block = arr[row * N : (row + 1) * N, col * N : (col + 1) * N]
# w
if col == 0:
w_block = np.fliplr(f_block)
else:
w_block = arr[row * N : (row + 1) * N, (col - 1) * N : col * N]
# e
if col == arr.shape[1] // N - 1:
e_block = np.fliplr(f_block)
else:
e_block = arr[row * N : (row + 1) * N, (col + 1) * N : (col + 2) * N]
# n
if row == 0:
n_block = np.flipud(f_block)
else:
n_block = arr[(row - 1) * N : row * N, col * N : (col + 1) * N]
# s
if row == arr.shape[0] // N - 1:
s_block = np.flipud(f_block)
else:
s_block = arr[(row + 1) * N : (row + 2) * N, col * N : (col + 1) * N]
w_d1 = f_block[:, 0] - w_block[:, N-1]
e_d1 = f_block[:, N-1] - e_block[:, 0]
n_d1 = f_block[0, :] - n_block[N-1, :]
s_d1 = f_block[N-1, :] - s_block[0, :]
w_d2 = (w_block[:, N-1] - w_block[:, N-2] + f_block[:, 1] - f_block[:, 0]) / 2
e_d2 = (e_block[:, 1] - e_block[:, 0] + f_block[:, N-1] - f_block[:, N-2]) / 2
n_d2 = (n_block[N-1, :] - n_block[N-2, :] + f_block[1, :] - f_block[0, :]) / 2
s_d2 = (s_block[1, :] - s_block[0, :] + f_block[N-1, :] - f_block[N-2, :]) / 2
w_e += np.sum((w_d1 - w_d2) ** 2 )
e_e += np.sum((e_d1 - e_d2) ** 2 )
n_e += np.sum((n_d1 - n_d2) ** 2)
s_e += np.sum((s_d1 - s_d2) ** 2)
# nw
if row == 0 or col == 0:
nw_block = np.flipud(np.fliplr(f_block))
else:
nw_block = arr[(row - 1) * N : row * N, (col - 1) * N : col * N]
# ne
if row == 0 or col == arr.shape[1] // N - 1:
ne_block = np.flipud(np.fliplr(f_block))
else:
ne_block = arr[(row-1) * N : row * N, (col + 1) * N : (col + 2) * N]
# sw
if row == arr.shape[0] // N -1 or col == 0:
sw_block = np.flipud(np.fliplr(f_block))
else:
sw_block = arr[row * N : (row+1) * N, (col-1) * N : col * N]
# se
if row == arr.shape[0]//N-1 or col == arr.shape[0] // N -1:
se_block = np.flipud(np.fliplr(f_block))
else:
se_block = arr[(row + 1) * N : (row + 2) * N, (col+1) * N : (col + 2) * N]
nw_g1 = f_block[0, 0] - nw_block[N-1, N-1]
ne_g1 = f_block[0, N-1] - ne_block[N-1, 0]
sw_g1 = f_block[N-1, 0] - sw_block[0, N-1]
se_g1 = f_block[N-1, N-1] - se_block[0, 0]
nw_g2 = (nw_block[N-1,N-1] - nw_block[N-2,N-2] + f_block[1,1] - f_block[0,0])/2
ne_g2 = (ne_block[N-1,0] - ne_block[N-2,1] + f_block[1,N-2] - f_block[0,N-1])/2
sw_g2 = (sw_block[0,N-1] - nw_block[1,N-2] + f_block[N-2,1] - f_block[N-1,0])/2
se_g2 = (nw_block[0,0] - nw_block[1,1] + f_block[N-2,N-2] - f_block[N-1,N-1])/2
nw_e += (nw_g1 - nw_g2) ** 2
ne_e += (ne_g1 - ne_g2) ** 2
sw_e += (sw_g1 - sw_g2) ** 2
se_e += (se_g1 - se_g2) ** 2
MSDSt = (w_e + e_e + n_e + s_e + nw_e + ne_e + sw_e + se_e)/ ((arr.shape[0]/N)**2)
MSDS1 = (w_e + e_e + n_e + s_e)/ ((arr.shape[0]/N)**2)
MSDS2 = (nw_e + ne_e + sw_e + se_e)/ ((arr.shape[0]/N)**2)
return MSDSt, MSDS1, MSDS2
```
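A quick sanity check of `msds` can be done on a small random image whose side length is a multiple of the block size `N`; the array below is illustrative only, the real evaluation further down uses the 512x512 test images.
```python
# illustrative check: msds expects a 2D array whose dimensions are multiples of N
demo_img = np.random.rand(2 * N, 2 * N) * 255
MSDSt_demo, MSDS1_demo, MSDS2_demo = msds(N, demo_img)
print(MSDSt_demo, MSDS1_demo, MSDS2_demo)
```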
```python
class DMLCT:
def __init__(self, n_bar, N):
self.n_bar = n_bar
self.N = N
self.x_l = (2 * np.arange(N) + 1) / (2 * N)
self.s_l = np.arange(n_bar) / (n_bar - 1)
self.xi = (np.arange(n_bar + 1) - 0.5) / (n_bar - 1)
self.lambda_kh = self.get_lambda_kh(self.n_bar)
self.w_k_j = self.get_w_k_j(self.n_bar, self.N)
self.W_L_k_kh = self.get_W_L_k_kh(self.n_bar, self.N)
self.W_k_kh = self.get_W_k_kh(self.n_bar, self.N)
self.W_R_k_kh = self.get_W_R_k_kh(self.n_bar, self.N)
def Lagrange_j(self, j):
x = sympy.Symbol("x")
L_x = 1.0
for l in range(self.n_bar):
if l != j:
L_x *= (x - self.s_l[l]) / (self.s_l[j] - self.s_l[l])
return sympy.integrate(L_x)
def get_lambda_kh(self, n_bar):
lambda_kh = np.ones(n_bar)
lambda_kh[0] = np.sqrt(1 / 2)
return lambda_kh
def get_w_k_j(self, n_bar, N):
L_j = np.zeros((n_bar, N))
x = sympy.Symbol("x")
for j in range(n_bar):
temp = []
Lj = self.Lagrange_j(j)
for k in range(N):
temp.append(Lj.subs(x, self.x_l[k]))
L_j[j] = np.array(temp)
w_k_j = np.zeros((n_bar, N))
for j in range(n_bar):
w_k_j[j] = scipy.fftpack.dct(L_j[j], norm="ortho")
return w_k_j
def get_W_L_k_kh(self, n_bar, N):
W_L_k_kh = np.zeros((n_bar - 1, N))
lambda_kh = self.get_lambda_kh(n_bar)
for kh in range(n_bar - 1):
W_L_k_kh[kh] = (
(1 - n_bar)
* np.sqrt(2 / N)
* lambda_kh[kh]
* np.cos(np.pi * kh * (self.xi[0] + 1))
* self.w_k_j[0]
)
return W_L_k_kh
def get_W_k_kh(self, n_bar, N):
W_k_kh = np.zeros((n_bar - 1, N))
for kh in range(n_bar - 1):
sum_sin = np.zeros(N)
for j in range(1, n_bar - 2 + 1):
sum_sin += np.sin(np.pi * kh * self.s_l[j]) * self.w_k_j[j]
W_k_kh[kh] = (
(n_bar - 1)
* np.sqrt(2 / N)
* self.lambda_kh[kh]
* (
np.cos(np.pi * kh * self.xi[1])
* (self.w_k_j[0] - (-1) ** (kh) * self.w_k_j[n_bar - 1])
- 2 * np.sin((np.pi * kh) / (2 * (n_bar - 1))) * sum_sin
)
)
return W_k_kh
def get_W_R_k_kh(self, n_bar, N):
W_R_k_kh = np.zeros((n_bar - 1, N))
for kh in range(n_bar - 1):
W_R_k_kh[kh] = (
(n_bar - 1)
* np.sqrt(2 / N)
* self.lambda_kh[kh]
* np.cos(np.pi * kh * (self.xi[n_bar] - 1))
* self.w_k_j[n_bar - 1]
)
return W_R_k_kh
```
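As a shape check (illustrative, not part of the original pipeline): for `n_bar` interpolation points and block size `N`, the weight tables built above have `n_bar` rows (`w_k_j`) and `n_bar - 1` rows (the three `W` tables), each of length `N`.
```python
dmlct_demo = DMLCT(n_bar, N)   # note: the sympy integration inside takes a moment
print(dmlct_demo.w_k_j.shape)    # (n_bar, N)
print(dmlct_demo.W_k_kh.shape)   # (n_bar - 1, N)
print(dmlct_demo.W_L_k_kh.shape, dmlct_demo.W_R_k_kh.shape)
```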
```python
def get_F_L_k_horizontal(arr, N, row, col):
# w
if col == 0:
# w_block = np.zeros(N)
w_block = arr[row, col * N : (col + 1) * N]
else:
w_block = arr[row, (col - 1) * N : col * N]
return w_block
```
```python
def get_F_R_k_horizontal(arr, N, row, col):
# e
if col == arr.shape[1] // N - 1:
# e_block = np.zeros(N)
e_block = arr[row, col * N : (col + 1) * N]
else:
e_block = arr[row, (col + 1) * N : (col + 2) * N]
return e_block
```
```python
def get_F_L_k_vertical(arr, N, row, col):
# n
if row == 0:
# n_block = np.zeros(N)
n_block = arr[row * N : (row + 1) * N, col]
else:
n_block = arr[(row - 1) * N : row * N, col]
return n_block
```
```python
def get_F_R_k_vertical(arr, N, row, col):
# s
if row == arr.shape[0] // N - 1:
# s_block = np.zeros(N)
s_block = arr[row * N : (row + 1) * N, col]
else:
s_block = arr[(row + 1) * N : (row + 2) * N, col]
return s_block
```
```python
# dmlct = DMLCT(n_bar, N)
```
```python
IMG = LENNA
# IMG = ImageLoader(MONO_DIR_PATH + "LENNA.bmp")
```
```python
Fk = np.zeros(IMG.img.shape)
```
# Forward transform
## Vertical direction
### DCT
```python
for row in range(IMG.img.shape[0] // N):
for col in range(IMG.img.shape[1]):
eight_points = IMG.img[N * row : N * (row + 1), col]
c = scipy.fftpack.dct(eight_points, norm="ortho")
Fk[N * row : N * (row + 1), col] = c
```
### Residual
```python
dmlct = DMLCT(n_bar, N)
for row in range(IMG.img.shape[0] // N):
for col in range(IMG.img.shape[1]):
        # F is a view into Fk, so it is modified in place
F = Fk[N * row : N * (row + 1), col]
F_L = get_F_L_k_vertical(Fk, N, row, col)
F_R = get_F_R_k_vertical(Fk, N, row, col)
U_k_n_bar = np.zeros(N)
for kh in range(n_bar - 2 + 1):
U_k_n_bar += (
F_L[kh] * dmlct.W_L_k_kh[kh]
+ F[kh] * dmlct.W_k_kh[kh]
+ F_R[kh] * dmlct.W_R_k_kh[kh]
)
        # for n_bar = 4: keep indices 0,1,2 and rewrite 3,4,5,6,7
F[n_bar - 2 + 1 :] -= U_k_n_bar[n_bar - 2 + 1 :]
```
```python
# keep index 0
for k in reversed(range(1, n_bar - 2 + 1)):
dmlct = DMLCT(k+1, N)
for row in range(IMG.img.shape[0] // N):
for col in range(IMG.img.shape[1]):
            # F is a view into Fk, so it is modified in place
F = Fk[N * row : N * (row + 1), col]
F_L = get_F_L_k_vertical(Fk, N, row, col)
F_R = get_F_R_k_vertical(Fk, N, row, col)
U_k_n_bar = np.zeros(N)
for kh in range((k + 1) - 2 + 1):
U_k_n_bar += (
F_L[kh] * dmlct.W_L_k_kh[kh]
+ F[kh] * dmlct.W_k_kh[kh]
+ F_R[kh] * dmlct.W_R_k_kh[kh]
)
F[k] -= U_k_n_bar[k]
```
## Horizontal direction
### DCT
```python
for row in range(Fk.shape[0]):
for col in range(Fk.shape[1] // N):
eight_points = Fk[row, N * col : N * (col + 1)]
c = scipy.fftpack.dct(eight_points, norm="ortho")
Fk[row, N * col : N * (col + 1)] = c
```
### Residual
```python
dmlct = DMLCT(n_bar, N)
for row in range(IMG.img.shape[0]):
for col in range(IMG.img.shape[1] // N):
F = Fk[row, N * col : N * (col + 1)]
F_L = get_F_L_k_horizontal(Fk, N, row, col)
F_R = get_F_R_k_horizontal(Fk, N, row, col)
U_k_n_bar = np.zeros(N)
for kh in range(n_bar - 2 + 1):
U_k_n_bar += (
F_L[kh] * dmlct.W_L_k_kh[kh]
+ F[kh] * dmlct.W_k_kh[kh]
+ F_R[kh] * dmlct.W_R_k_kh[kh]
)
        # for n_bar = 4: keep indices 0,1,2 and rewrite 3,4,5,6,7
F[n_bar - 2 + 1 :] -= U_k_n_bar[n_bar - 2 + 1 :]
```
```python
# keep index 0
for k in reversed(range(1, n_bar - 2 + 1)):
dmlct = DMLCT(k+1, N)
for row in range(IMG.img.shape[0]):
for col in range(IMG.img.shape[1] // N):
F = Fk[row, N * col : N * (col + 1)]
F_L = get_F_L_k_horizontal(Fk, N, row, col)
F_R = get_F_R_k_horizontal(Fk, N, row, col)
U_k_n_bar = np.zeros(N)
for kh in range((k + 1) - 2 + 1):
U_k_n_bar += (
F_L[kh] * dmlct.W_L_k_kh[kh]
+ F[kh] * dmlct.W_k_kh[kh]
+ F_R[kh] * dmlct.W_R_k_kh[kh]
)
F[k] -= U_k_n_bar[k]
```
```python
plt.imshow(Fk)
# pd.DataFrame(Fk).to_csv("DMLCT_lenna_hiroya_coef.csv",header=False,index=False)
```
# Saving the coefficients
```python
Fk_Ori = np.copy(Fk)
```
# Inverse transform
```python
recover = np.zeros(IMG.img.shape)
```
## Horizontal direction
### Residual
```python
for k in range(1, n_bar - 2 + 1):
dmlct = DMLCT(k+1, N)
for row in range(IMG.img.shape[0]):
for col in range(IMG.img.shape[1] // N):
F = Fk[row, N * col : N * (col + 1)]
F_L = get_F_L_k_horizontal(Fk, N, row, col)
F_R = get_F_R_k_horizontal(Fk, N, row, col)
U_k_n_bar = np.zeros(N)
for kh in range((k + 1) - 2 + 1):
U_k_n_bar += (
F_L[kh] * dmlct.W_L_k_kh[kh]
+ F[kh] * dmlct.W_k_kh[kh]
+ F_R[kh] * dmlct.W_R_k_kh[kh]
)
F[k] += U_k_n_bar[k]
```
```python
dmlct = DMLCT(n_bar, N)
for row in range(IMG.img.shape[0]):
for col in range(IMG.img.shape[1] // N):
F = Fk[row, N * col : N * (col + 1)]
F_L = get_F_L_k_horizontal(Fk, N, row, col)
F_R = get_F_R_k_horizontal(Fk, N, row, col)
U_k_n_bar = np.zeros(N)
for kh in range(n_bar - 2 + 1):
U_k_n_bar += (
F_L[kh] * dmlct.W_L_k_kh[kh]
+ F[kh] * dmlct.W_k_kh[kh]
+ F_R[kh] * dmlct.W_R_k_kh[kh]
)
        # for n_bar = 4: keep indices 0,1,2 and rewrite 3,4,5,6,7
F[n_bar - 2 + 1 :] += U_k_n_bar[n_bar - 2 + 1 :]
```
### IDCT
```python
for row in range(Fk.shape[0]):
for col in range(Fk.shape[1] // N):
F = Fk[row, N * col : N * col + N]
data = scipy.fftpack.idct(F, norm="ortho")
        # after writing back into Fk, the vertical direction is processed next
Fk[row, N * col : N * col + N] = data
```
## Vertical direction
### Residual
```python
for k in range(1, n_bar - 2 + 1):
dmlct = DMLCT(k+1, N)
for row in range(IMG.img.shape[0] // N):
for col in range(IMG.img.shape[1]):
            # F is a view into Fk, so it is modified in place
F = Fk[N * row : N * (row + 1), col]
F_L = get_F_L_k_vertical(Fk, N, row, col)
F_R = get_F_R_k_vertical(Fk, N, row, col)
U_k_n_bar = np.zeros(N)
for kh in range((k + 1) - 2 + 1):
U_k_n_bar += (
F_L[kh] * dmlct.W_L_k_kh[kh]
+ F[kh] * dmlct.W_k_kh[kh]
+ F_R[kh] * dmlct.W_R_k_kh[kh]
)
F[k] += U_k_n_bar[k]
```
```python
dmlct = DMLCT(n_bar, N)
for row in range(IMG.img.shape[0] // N):
for col in range(IMG.img.shape[1]):
        # F is a view into Fk, so it is modified in place
F = Fk[N * row : N * (row + 1), col]
F_L = get_F_L_k_vertical(Fk, N, row, col)
F_R = get_F_R_k_vertical(Fk, N, row, col)
U_k_n_bar = np.zeros(N)
for kh in range(n_bar - 2 + 1):
U_k_n_bar += (
F_L[kh] * dmlct.W_L_k_kh[kh]
+ F[kh] * dmlct.W_k_kh[kh]
+ F_R[kh] * dmlct.W_R_k_kh[kh]
)
        # for n_bar = 4: keep indices 0,1,2 and rewrite 3,4,5,6,7
F[n_bar - 2 + 1 :] += U_k_n_bar[n_bar - 2 + 1 :]
```
### IDCT
```python
for row in range(Fk.shape[0] // N):
for col in range(Fk.shape[1]):
F = Fk[N * row : N * (row + 1), col]
data = scipy.fftpack.idct(F, norm="ortho")
        # reconstructed image
recover[N * row : N * (row + 1), col] = data
```
```python
plt.imshow(recover.astype("u8"), cmap="gray")
```
The image is recovered!
# Quantization table
```python
Q = 108
Q_Luminance = np.ones((N,N)) * Q
```
# Quantization
```python
Fk = np.copy(Fk_Ori)
Q_Fk = np.zeros(Fk.shape)
for row in range(IMG.img.shape[0] // N):
for col in range(IMG.img.shape[1] // N):
block = Fk[row * N : (row + 1) * N, col * N : (col + 1) * N]
        # quantize
block = np.round(block / Q_Luminance)
        # dequantize
block = block * Q_Luminance
Q_Fk[row * N : (row+1)*N, col * N : (col+1)*N] = block
```
```python
Fk = np.copy(Q_Fk)
Q_recover = np.zeros(Q_Fk.shape)
```
## Horizontal direction
### Residual
```python
for k in range(1, n_bar - 2 + 1):
dmlct = DMLCT(k+1, N)
for row in range(IMG.img.shape[0]):
for col in range(IMG.img.shape[1] // N):
F = Fk[row, N * col : N * (col + 1)]
F_L = get_F_L_k_horizontal(Fk, N, row, col)
F_R = get_F_R_k_horizontal(Fk, N, row, col)
U_k_n_bar = np.zeros(N)
for kh in range((k + 1) - 2 + 1):
U_k_n_bar += (
F_L[kh] * dmlct.W_L_k_kh[kh]
+ F[kh] * dmlct.W_k_kh[kh]
+ F_R[kh] * dmlct.W_R_k_kh[kh]
)
F[k] += U_k_n_bar[k]
```
```python
dmlct = DMLCT(n_bar, N)
for row in range(IMG.img.shape[0]):
for col in range(IMG.img.shape[1] // N):
F = Fk[row, N * col : N * (col + 1)]
F_L = get_F_L_k_horizontal(Fk, N, row, col)
F_R = get_F_R_k_horizontal(Fk, N, row, col)
U_k_n_bar = np.zeros(N)
for kh in range(n_bar - 2 + 1):
U_k_n_bar += (
F_L[kh] * dmlct.W_L_k_kh[kh]
+ F[kh] * dmlct.W_k_kh[kh]
+ F_R[kh] * dmlct.W_R_k_kh[kh]
)
        # for n_bar = 4: keep indices 0,1,2 and rewrite 3,4,5,6,7
F[n_bar - 2 + 1 :] += U_k_n_bar[n_bar - 2 + 1 :]
```
### IDCT
```python
for row in range(Fk.shape[0]):
for col in range(Fk.shape[1] // N):
F = Fk[row, N * col : N * col + N]
data = scipy.fftpack.idct(F, norm="ortho")
        # after writing back into Fk, the vertical direction is processed next
Fk[row, N * col : N * col + N] = data
```
## Vertical direction
### Residual
```python
for k in range(1, n_bar - 2 + 1):
dmlct = DMLCT(k+1, N)
for row in range(IMG.img.shape[0] // N):
for col in range(IMG.img.shape[1]):
            # F is a view into Fk, so it is modified in place
F = Fk[N * row : N * (row + 1), col]
F_L = get_F_L_k_vertical(Fk, N, row, col)
F_R = get_F_R_k_vertical(Fk, N, row, col)
U_k_n_bar = np.zeros(N)
for kh in range((k + 1) - 2 + 1):
U_k_n_bar += (
F_L[kh] * dmlct.W_L_k_kh[kh]
+ F[kh] * dmlct.W_k_kh[kh]
+ F_R[kh] * dmlct.W_R_k_kh[kh]
)
F[k] += U_k_n_bar[k]
```
```python
dmlct = DMLCT(n_bar, N)
for row in range(IMG.img.shape[0] // N):
for col in range(IMG.img.shape[1]):
        # F is a view into Fk, so it is modified in place
F = Fk[N * row : N * (row + 1), col]
F_L = get_F_L_k_vertical(Fk, N, row, col)
F_R = get_F_R_k_vertical(Fk, N, row, col)
U_k_n_bar = np.zeros(N)
for kh in range(n_bar - 2 + 1):
U_k_n_bar += (
F_L[kh] * dmlct.W_L_k_kh[kh]
+ F[kh] * dmlct.W_k_kh[kh]
+ F_R[kh] * dmlct.W_R_k_kh[kh]
)
        # for n_bar = 4: keep indices 0,1,2 and rewrite 3,4,5,6,7
F[n_bar - 2 + 1 :] += U_k_n_bar[n_bar - 2 + 1 :]
```
### IDCT
```python
for row in range(Fk.shape[0] // N):
for col in range(Fk.shape[1]):
F = Fk[N * row : N * (row + 1), col]
data = scipy.fftpack.idct(F, norm="ortho")
        # reconstructed image
Q_recover[N * row : N * (row + 1), col] = data
```
```python
Q_recover = np.round(Q_recover)
```
```python
plt.imshow(Q_recover, cmap="gray")
plt.imsave("DMLCT_16x16_n"+str(n_bar)+ "_LENNA.png",Q_recover,cmap="gray")
```
# Entropy (information content)
```python
qfk = pd.Series(Q_Fk.flatten())
pro = qfk.value_counts() / qfk.value_counts().sum()
pro.head()
```
-0.0 0.981468
108.0 0.007042
-108.0 0.007008
216.0 0.001083
-216.0 0.000996
dtype: float64
```python
S = 0
for pi in pro:
S -= pi * np.log2(pi)
S
```
0.1806067711578881
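The same Shannon entropy (in bits per coefficient) can be cross-checked with SciPy's stats module, assuming it is available in this environment:
```python
# cross-check of the hand-rolled entropy loop
from scipy.stats import entropy
entropy(pro.values, base=2)
```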
# PSNR
```python
MSE = np.sum(np.sum(np.power((IMG.img - Q_recover),2)))/(Q_recover.shape[0] * Q_recover.shape[1])
PSNR = 10 * np.log10(255 * 255 / MSE)
PSNR
```
28.991609974679005
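As a cross-check (an assumption about the environment: it requires scikit-image >= 0.16, the same version range already used above for SSIM), skimage's PSNR helper should give a value close to the one computed by hand:
```python
from skimage.metrics import peak_signal_noise_ratio
peak_signal_noise_ratio(IMG.img, Q_recover.astype(IMG.img.dtype), data_range=255)
```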
# MSSIM
```python
MSSIM = ssim(IMG.img,Q_recover.astype(IMG.img.dtype),gaussian_weights=True,sigma=1.5,K1=0.01,K2=0.03)
MSSIM
```
0.7870765007405786
```python
dmlct = DMLCT(n_bar, N)
```
# MSDS
```python
MSDSt, MSDS1, MSDS2 = msds(N,Q_recover)
```
```python
MSDS1
```
39983.40234375
```python
MSDS2
```
6007.1181640625
```python
```
Source notebook: DMLCT/32x32/DMLCT 32x32.ipynb (repo: Hiroya-W/TPHLCT_and_DMLCT, license: MIT)
```python
import sympy as sm
```
```python
x = sm.symbols('x')
```
```python
y1 = sm.Pow(sm.E, x)
```
```python
y1.diff(x)
```
$\displaystyle e^{x}$
```python
y1.integrate(x)
```
$\displaystyle e^{x}$
```python
%matplotlib inline
fig = sm.plotting.plot(y1, x)
```
```python
```
Source notebook: ch02_HomeWork_1.ipynb (repo: loucadgarbon/pattern-recognition, license: MIT)
# <span style="color:#2c061f"> Inaugural project: Getting an Insurance</span>
<br>
## <span style="color:#374045"> Introduction to Programming and Numerical Analysis </span>
*Oluf Kelkjær*
### **Today's Plan**
1. Objectives
2. Remember to...
3. Important formal requirement
4. Question hints
## Objectives
* Apply simple numerical solution and simulation methods
* Solve the problem using SciPy and numpy
* Structure a code project
* Notebook, .py-files, readme etc.
* Document code
* Document and explain your code using what you learned in **lecture 5**
* Present results in text form and in figures
* Create nice tables and figures + create nice markdown cells
## Remember (general hint) (1/4)
Create a python-file for your functions
**Remember** autoreload
```python
import pythonfile as py
%load_ext autoreload
%autoreload 2
```
## Remember (general hint) (2/4)
Document your code - at the exam you might have forgotten what you did and why
```python
import numpy as np
# 1. Initiliaze vectors of goods and container for utilities
psis = np.linspace(-10, -5, 5)
utility = np.empty((psis.shape)) # empty array with same shape as psi's
# 2. Calculate utility for all psis (relative risk aversion parameter
for i, psi in enumerate(psis):
# i. Calculate utility using functions module
u_psi = py.utility(1,psi)
# ii. store the utility at the respective place (i)
utility[i] = u_psi
# iii. print result
print(f'u(1,psi={psi:.2f})={u_psi:.2f}')
```
u(1,psi=-10.00)=-0.11
u(1,psi=-8.75)=-0.13
u(1,psi=-7.50)=-0.15
u(1,psi=-6.25)=-0.19
u(1,psi=-5.00)=-0.25
## Remember (general hint) (3/4)
Incoporate markdown cells in your project that describes the model, intuition, etc.
Remember that you can write equations in markdown using either LaTeX syntax, `$$ math $$`:
$$ y_i = \beta_0 + \beta_1 x_i + \epsilon_i $$
or `\begin{equation} math \end{equation}`
\begin{align} y_i = \beta_0 + \beta_1 x_i + \epsilon_i \end{align}
Other formatting?
* An indent followed by `*` or `1.` creates list
1. list item 1
2. list item 2
* list item 3
* `** text **` produces **text**
* `*text*` produces *text*
* `#` creates a large header, `##` smaller header and so on
* Google it!
## Remember (general hint) (4/4)
**Always** add docstrings to your functions.
The function name by itself isn't always as informative as you'd want
```python
def utility(z,psi):
""" Calculates utility of given some assets, z, and relative risk aversion, psi.
Args:
z (float): the agents assets
psi (float): the agents relative risk aversion
Returns:
(float): utility of assets
"""
return (z**(1+psi))/(1+psi)
```
## Important requirement!
**ALWAYS** restart your *kernel* and *run all cells* **before** handing in! This goes for the mandatory assignments as well as the exam. It is crucial!
If your notebook only runs because crucial old variables are still in memory, then I cannot run your notebook!!
**Red flag** if the notebook can't be run from start to end!!
## Getting an insurance
Uninsured agents expected utility:
<p style="text-align: center;">$V_0 = pu(y-x)+(1-p)u(y)$</p>
Insured agent expected utility:
<p style="text-align: center;">$V(q;\pi)=pu(y-x+q-\pi(p,q))+(1-p)u(y-\pi(p,q))$</p>
Insurance company requires premium (in order to not go broke):
<p style="text-align: center;">$\pi(p,q) = pq$</p>
Agents utility given by:
<p style="text-align: center;">$u(z) = \frac{z^{1+\psi}}{1+\psi}$</p>
Parameters are:
<p style="text-align: center;">$y=1, p = 0.2, \psi = -2$</p>
## Hints for question 1
Once the *utility function*, the *insurance premium* and the *insured agent's expected utility* have been defined, we're ready to solve **question 1**.
Find optimal coverage, $q^*$, for $x$ in range $[0.01,0.9]$:
<p style="text-align: center;">$q^\ast = \underset{q\in[0,x]}{\operatorname{argmax}} V(q;\pi)$</p>
1. create `x` array in range `[0.01,0.9]` and an empty array for optimal `q's` to be stored.
2. Loop over the `x`'s and define an objective function for each `x`.
    * You can solve the objective function using the `minimize_scalar` function and store $q^*(x)$
    * Remember that $q\in[0,x]$, which means your solver should be **bounded**
    * Remember, remember: we're maximizing, so minimize the negative of the objective
Plot the `x` and `q` arrays and comment!
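A minimal sketch of steps 1-2 (assuming your own module defines the insured agent's expected utility as `V(q, x)`; that name is illustrative, not part of the assignment):
```python
from scipy import optimize
xs = np.linspace(0.01, 0.9, 100)
qs = np.empty(xs.size)
for i, x in enumerate(xs):
    obj = lambda q: -V(q, x)  # minimize the negative to maximize V
    res = optimize.minimize_scalar(obj, bounds=(0, x), method='bounded')
    qs[i] = res.x
```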
## Hints for question 2:
Find $\tilde{\pi}$ such that $V(q;\tilde{\pi})=V_0$ at each point in $q$-array.
1. Construct $q$ and (empty) $\pi$-array
2. Create new objective function $V(q;\tilde{\pi})=V_0\rightarrow V(q;\tilde{\pi})-V_0=0$
3. Loop through the $q$'s
* Can use `optimize.root` (solves objective function = 0) **seen in lecture 3**
* store results in $\pi$-array
4. plot $q$ and $\pi$-array and comment!
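A minimal sketch of steps 2-3, with `optimize` imported as above and the illustrative names `V(q, pi)` and `V0` assumed to exist:
```python
qs = np.linspace(0.01, 0.6, 50)
pis = np.empty(qs.size)
for i, q in enumerate(qs):
    obj = lambda pi: V(q, pi) - V0       # root of V(q, pi) - V0 = 0
    pis[i] = optimize.root(obj, x0=q * 0.2).x[0]
```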
## Hints for question 3:
Create a function for Monte Carlo integration - **seen in lecture 4**.
You can advantageously use two different `SimpleNamespace()`
One for model parameters and the other for different policy parameters
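A minimal sketch of such a Monte Carlo routine (the names `par`, `pol`, `alpha`, `beta`, `gamma` and the utility `u` are illustrative assumptions, not part of the assignment text):
```python
from types import SimpleNamespace
par = SimpleNamespace(y=1.0, p=0.2, psi=-2.0)         # model parameters
pol = SimpleNamespace(alpha=2.0, beta=7.0, N=10_000)  # policy/simulation parameters

def mc_expected_utility(gamma, pi, par, pol):
    x = np.random.beta(pol.alpha, pol.beta, size=pol.N)  # simulated losses
    z = par.y - (1 - gamma) * x - pi                     # assets with coverage ratio gamma
    return np.mean(u(z, par.psi))                        # MC estimate of expected utility
```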
## Hints for question 4:
Define new objective function (MC under no coverage - MC under coverage)
Solve the objective function using `optimize.root` with `method='broyden1'`
Source notebook: Slides/exc_6/6.class.ipynb (repo: OlufKelk/IPNA, license: MIT)
```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import pyplot
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import confusion_matrix
import math
import keras
from keras import optimizers
from keras.models import Sequential
from keras.layers import Dense
import warnings
warnings.filterwarnings("ignore")
%matplotlib inline
```
Using TensorFlow backend.
```python
df = pd.read_csv("Churn_Modelling.csv")
```
# Pre-processing
```python
df.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>RowNumber</th>
<th>CustomerId</th>
<th>Surname</th>
<th>CreditScore</th>
<th>Geography</th>
<th>Gender</th>
<th>Age</th>
<th>Tenure</th>
<th>Balance</th>
<th>NumOfProducts</th>
<th>HasCrCard</th>
<th>IsActiveMember</th>
<th>EstimatedSalary</th>
<th>Exited</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>1</td>
<td>15634602</td>
<td>Hargrave</td>
<td>619</td>
<td>France</td>
<td>Female</td>
<td>42</td>
<td>2</td>
<td>0.00</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>101348.88</td>
<td>1</td>
</tr>
<tr>
<th>1</th>
<td>2</td>
<td>15647311</td>
<td>Hill</td>
<td>608</td>
<td>Spain</td>
<td>Female</td>
<td>41</td>
<td>1</td>
<td>83807.86</td>
<td>1</td>
<td>0</td>
<td>1</td>
<td>112542.58</td>
<td>0</td>
</tr>
<tr>
<th>2</th>
<td>3</td>
<td>15619304</td>
<td>Onio</td>
<td>502</td>
<td>France</td>
<td>Female</td>
<td>42</td>
<td>8</td>
<td>159660.80</td>
<td>3</td>
<td>1</td>
<td>0</td>
<td>113931.57</td>
<td>1</td>
</tr>
<tr>
<th>3</th>
<td>4</td>
<td>15701354</td>
<td>Boni</td>
<td>699</td>
<td>France</td>
<td>Female</td>
<td>39</td>
<td>1</td>
<td>0.00</td>
<td>2</td>
<td>0</td>
<td>0</td>
<td>93826.63</td>
<td>0</td>
</tr>
<tr>
<th>4</th>
<td>5</td>
<td>15737888</td>
<td>Mitchell</td>
<td>850</td>
<td>Spain</td>
<td>Female</td>
<td>43</td>
<td>2</td>
<td>125510.82</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>79084.10</td>
<td>0</td>
</tr>
</tbody>
</table>
</div>
As can be seen, the variables <b>"RowNumber", "CustomerId", and "Surname"</b> are not useful for the model. <br>
Their values identify individual customers rather than describe them, so they carry no predictive information.
```python
X = df.iloc[:, 3:13]
y = df.iloc[:, 13]
```
### Creating dummy variables
Machine learning models cannot process text categories directly, so columns that contain categories need to be converted into dummy variables. <br>
For example, the variable <b>"Gender"</b> is transformed into a single dummy variable, with 0 denoting female and 1 denoting male. Variables that <br>
include more than two categories are split into several columns. <br>
For instance, the <b>"Geography"</b> variable has three levels, namely Germany, Spain, and France, and is therefore transformed into two binary columns. <br>
In general, the number of dummy columns created from a categorical variable is <b>c-1</b>, where c is the number of levels (the first level is dropped to avoid perfect collinearity).
```python
cat = [1,2]
cat_cols = pd.get_dummies(X.iloc[:, cat], drop_first=True)
```
```python
cat_cols.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Geography_Germany</th>
<th>Geography_Spain</th>
<th>Gender_Male</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>1</th>
<td>0</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<th>2</th>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>3</th>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>4</th>
<td>0</td>
<td>1</td>
<td>0</td>
</tr>
</tbody>
</table>
</div>
```python
X.drop(X.columns[cat], axis=1, inplace=True)
```
```python
X = pd.concat([X,cat_cols], axis=1)
```
```python
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 42)
```
Standardize the variables for the Neural Network
```python
sc = StandardScaler()
X_train = pd.DataFrame(sc.fit_transform(X_train),columns = X_train.columns)
X_test = pd.DataFrame(sc.transform(X_test),columns = X_test.columns)
```
An Artificial Neural Network consists of an input layer, one or more hidden layers, and an output layer. <br>
The input layer corresponds to a row/observation that travels through the network and produces a specific output value. <br>
So, in the image below,
<b>X1, X2, and X3</b> are the independent variables for one row, and <b>W1, W2, and W3</b> are the weights for each variable. Note that this image does not depict the Neural Network used for this problem; it rather shows a network for a dataset with 3 independent variables and one dependent variable.
The neuron in the middle, applies an activation function based on the signals it receives. There are different activation functions, but one of the most common activation functions for the hidden layers is the Rectified Linear Unit (ReLU).
\begin{align}
f (x) = max(x,0)
\end{align} <br>
What the ReLU does, is that it returns x, if the $\sum_{i=1}^{n} W_{i}X_{i}$ is above zero and 0 otherwise, where $x = \sum_{i=1}^{n} W_{i}X_{i}$
```python
# rectified linear function
def rectified(x):
return max(0.0, x)
# define a series of inputs
series_in = [x for x in range(-10, 11)]
# calculate outputs for our inputs
series_out = [rectified(x) for x in series_in]
# line plot of raw inputs to rectified outputs
pyplot.plot(series_in, series_out)
pyplot.xlabel('x')
pyplot.ylabel('y')
pyplot.title('ReLU Function')
pyplot.show()
```
The Neural Network above includes only an input layer and an output layer. For a binary classification problem such as this one, where the customer will either churn or not, the sigmoid activation function for the output layer is a more proper function. <br>
\begin{align}
f(x) = \frac{1}{1 + e^{-x}}
\end{align}
The Sigmoid function returns values ranging from 0 to 1, which is useful because in this situation, the dependent variable is either 0 (not churn) or 1 (churn). <br>
Later, by applying a threshold (e.g., 0.5), we can assign values above this threshold as 1 and below as 0.
```python
def sigmoid(x):
return (1/(1 + math.exp(-x)))
# define a series of inputs
series_in = [x for x in range(-10, 11)]
# calculate outputs for our inputs
series_out = [sigmoid(x) for x in series_in]
# line plot of raw inputs to sigmoid outputs
pyplot.plot(series_in, series_out)
pyplot.xlabel('x')
pyplot.ylabel('y')
pyplot.title('Sigmoid Function')
pyplot.show()
```
The Neural Network below, includes an input layer with 3 neurons, a hidden layer with 4 neurons, and an output layer. <br>
In this example, the activation function of the hidden layer can be the ReLU function whereas for the output layer, it can be the sigmoid function.
### Neural Network
The ANN is built using the Keras framework, which uses TensorFlow as its backend. <br>
In the code below, the number of nodes in the input layers needs to be defined. This is the <b>"input_dim"</b> argument, which corresponds to the number of independent variables. Next, the <b>"units"</b> argument, defines the number of neurons in the hidden layer. Note that the architecture below shows a Neural Network with two hidden layers. The input layer and the first hidden layer are both defined in one line of code, while the rest of the layers are defined later. <br>
This means that the hidden layers have 6 neurons each with a <i>ReLU</i> activation function as described above while the output layer has 1 neuron and a <i>sigmoid</i> activation function as this is a binary classification problem. <br>
<br>
Next, the ANN model is compiled, where the <b>optimizer</b>, the <b>loss</b>, and the <b>metric</b> are defined. The optimizer argument defines the optimizer that will be used to update the weights of the model. Among the options are <i>Stochastic Gradient Descent</i>, <i>Adaptive Moment Estimation (Adam)</i>, and <i>Root Mean Squared Propagation (RMSProp)</i>. The most popular optimizers are Adam and RMSProp. <br>
For this problem, the <i>binary cross entropy (logarithmic)</i> loss is used.
\begin{align}
\small
\text{Logarithmic Loss} = - \frac{1}{n} \; \sum_{i=1}^{n} \bigg(y^{i} \, \log\phi \bigg( \sum_{j=1}^{m}w_{j}x_{j}^{i} \bigg) + (1-y^{i})\log \bigg(1 - \phi \bigg(\sum_{j=1}^{m}w_{j}x_{j}^{i} \bigg) \bigg) \bigg)
\end{align}
where $\phi$ is the sigmoid function, i is the index of the observation and j is the index of the feature. <br>
<br>
The <b>metric</b> argument specifies which metric is reported during training; in this case it is the accuracy.
### Model Architecture
```python
classifier = Sequential()
classifier.add(Dense(units=6, activation='relu',input_dim=11))
classifier.add(Dense(units=6, activation='relu'))
classifier.add(Dense(units=1, activation='sigmoid'))
classifier.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
```
WARNING:tensorflow:From C:\Users\Hella\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
The architecture of the current ANN is shown below.
```python
print(classifier.summary())
```
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_1 (Dense) (None, 6) 72
_________________________________________________________________
dense_2 (Dense) (None, 6) 42
_________________________________________________________________
dense_3 (Dense) (None, 1) 7
=================================================================
Total params: 121
Trainable params: 121
Non-trainable params: 0
_________________________________________________________________
None
After defining the model architecture, in order to fit the model to the training data, the batch size as well as the number of epochs need to be defined. <br>
The batch size essentially shows after how many observations the weights will be updated. For instance, with a batch size of 32, 32 observations will pass through the ANN and after that, the weights will be updated. <br>
<br>
Next, the number of epochs shows how many times all the observations will pass through the neural network.
```python
classifier.fit(X_train, y_train, batch_size=32, epochs=50)
```
WARNING:tensorflow:From C:\Users\Hella\Anaconda3\lib\site-packages\tensorflow\python\ops\math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Epoch 1/50
8000/8000 [==============================] - 4s 477us/step - loss: 0.5187 - acc: 0.7902
Epoch 2/50
8000/8000 [==============================] - 1s 132us/step - loss: 0.4584 - acc: 0.7945
Epoch 3/50
8000/8000 [==============================] - 1s 137us/step - loss: 0.4435 - acc: 0.7945
Epoch 4/50
8000/8000 [==============================] - 1s 128us/step - loss: 0.4356 - acc: 0.7943
Epoch 5/50
8000/8000 [==============================] - 1s 119us/step - loss: 0.4285 - acc: 0.8103
Epoch 6/50
8000/8000 [==============================] - 1s 117us/step - loss: 0.4218 - acc: 0.8197
Epoch 7/50
8000/8000 [==============================] - 1s 121us/step - loss: 0.4162 - acc: 0.8216
Epoch 8/50
8000/8000 [==============================] - 1s 121us/step - loss: 0.4115 - acc: 0.8233
Epoch 9/50
8000/8000 [==============================] - 1s 117us/step - loss: 0.4076 - acc: 0.8239
Epoch 10/50
8000/8000 [==============================] - 1s 117us/step - loss: 0.4039 - acc: 0.8249
Epoch 11/50
8000/8000 [==============================] - 1s 125us/step - loss: 0.4004 - acc: 0.8246
Epoch 12/50
8000/8000 [==============================] - 1s 133us/step - loss: 0.3975 - acc: 0.8261
Epoch 13/50
8000/8000 [==============================] - 1s 131us/step - loss: 0.3945 - acc: 0.8265
Epoch 14/50
8000/8000 [==============================] - 1s 131us/step - loss: 0.3922 - acc: 0.8267
Epoch 15/50
8000/8000 [==============================] - 1s 133us/step - loss: 0.3897 - acc: 0.8277
Epoch 16/50
8000/8000 [==============================] - 1s 133us/step - loss: 0.3870 - acc: 0.8281
Epoch 17/50
8000/8000 [==============================] - 1s 130us/step - loss: 0.3847 - acc: 0.8301
Epoch 18/50
8000/8000 [==============================] - 1s 121us/step - loss: 0.3821 - acc: 0.8286
Epoch 19/50
8000/8000 [==============================] - 1s 118us/step - loss: 0.3796 - acc: 0.8339
Epoch 20/50
8000/8000 [==============================] - 1s 118us/step - loss: 0.3773 - acc: 0.8410
Epoch 21/50
8000/8000 [==============================] - 1s 128us/step - loss: 0.3749 - acc: 0.8415
Epoch 22/50
8000/8000 [==============================] - 1s 120us/step - loss: 0.3726 - acc: 0.8427
Epoch 23/50
8000/8000 [==============================] - 1s 119us/step - loss: 0.3701 - acc: 0.8449
Epoch 24/50
8000/8000 [==============================] - 1s 126us/step - loss: 0.3676 - acc: 0.8474
Epoch 25/50
8000/8000 [==============================] - 1s 118us/step - loss: 0.3660 - acc: 0.8471
Epoch 26/50
8000/8000 [==============================] - 1s 117us/step - loss: 0.3637 - acc: 0.8514
Epoch 27/50
8000/8000 [==============================] - 1s 117us/step - loss: 0.3615 - acc: 0.8500
Epoch 28/50
8000/8000 [==============================] - 1s 121us/step - loss: 0.3604 - acc: 0.8540
Epoch 29/50
8000/8000 [==============================] - 1s 124us/step - loss: 0.3580 - acc: 0.8538
Epoch 30/50
8000/8000 [==============================] - 1s 119us/step - loss: 0.3567 - acc: 0.8526
Epoch 31/50
8000/8000 [==============================] - 1s 118us/step - loss: 0.3550 - acc: 0.8561
Epoch 32/50
8000/8000 [==============================] - 1s 134us/step - loss: 0.3533 - acc: 0.8575
Epoch 33/50
8000/8000 [==============================] - 1s 117us/step - loss: 0.3517 - acc: 0.8578
Epoch 34/50
8000/8000 [==============================] - 1s 118us/step - loss: 0.3506 - acc: 0.8578
Epoch 35/50
8000/8000 [==============================] - 1s 118us/step - loss: 0.3492 - acc: 0.8594
Epoch 36/50
8000/8000 [==============================] - 1s 119us/step - loss: 0.3487 - acc: 0.8592
Epoch 37/50
8000/8000 [==============================] - 1s 117us/step - loss: 0.3473 - acc: 0.8606
Epoch 38/50
8000/8000 [==============================] - 1s 124us/step - loss: 0.3469 - acc: 0.8587
Epoch 39/50
8000/8000 [==============================] - 1s 119us/step - loss: 0.3459 - acc: 0.8584
Epoch 40/50
8000/8000 [==============================] - 1s 118us/step - loss: 0.3450 - acc: 0.8589
Epoch 41/50
8000/8000 [==============================] - 1s 122us/step - loss: 0.3446 - acc: 0.8604
Epoch 42/50
8000/8000 [==============================] - 1s 117us/step - loss: 0.3436 - acc: 0.8594
Epoch 43/50
8000/8000 [==============================] - 1s 118us/step - loss: 0.3432 - acc: 0.8591
Epoch 44/50
8000/8000 [==============================] - 1s 119us/step - loss: 0.3429 - acc: 0.8590
Epoch 45/50
8000/8000 [==============================] - 1s 122us/step - loss: 0.3421 - acc: 0.8592
Epoch 46/50
8000/8000 [==============================] - 1s 118us/step - loss: 0.3418 - acc: 0.8608
Epoch 47/50
8000/8000 [==============================] - 1s 118us/step - loss: 0.3415 - acc: 0.8599
Epoch 48/50
8000/8000 [==============================] - 1s 127us/step - loss: 0.3407 - acc: 0.8596
Epoch 49/50
8000/8000 [==============================] - 1s 128us/step - loss: 0.3407 - acc: 0.8595
Epoch 50/50
8000/8000 [==============================] - 1s 122us/step - loss: 0.3407 - acc: 0.8601
<keras.callbacks.History at 0x2b8a8328b00>
Furthermore, as explained previously, the model will make predictions on the test set, and a threshold will be set in order to classify the customers that are going to churn and those that are not. <br>
The threshold that is applied here is 0.5.
```python
y_pred = classifier.predict(X_test)
y_pred = (y_pred > 0.5)
```
Finally, with a <b>confusion matrix</b>, we can see how many observations were correctly predicted. More specifically, a confusion matrix gives the counts of <b>True Positives, True Negatives, False Positives, and False Negatives</b>. It is also useful when the dataset is severely unbalanced, because it shows whether the model simply predicts every observation into one class. Accuracy values on such unbalanced datasets can be misleading, as they can be very high even for a trivial classifier. <br>
<br>
For this particular problem, the dataset is balanced.
```python
labels = [0, 1]
cm = confusion_matrix(y_test, y_pred, labels)
print(cm)
print('Accuracy: ',(cm[0,0]+cm[1,1])/len(y_test)*100)
```
[[1528 79]
[ 206 187]]
Accuracy: 85.75
For this classification problem regarding customer churn, the model correctly predicted that 1528 customers will not churn and 187 customers will churn. In addition, it falsely predicted that 79 customers will churn and that 206 will not churn. The <b>accuracy</b> of the model is <b>85.75%</b>. Below, a plot of the confusion matrix is shown.
```python
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(cm)
plt.title('Confusion matrix of the classifier')
fig.colorbar(cax)
ax.set_xticklabels([''] + ['Not Churn', 'Churn'])
ax.set_yticklabels([''] + ['Not Churn', 'Churn'])
plt.xlabel('Predicted')
plt.ylabel('True')
plt.show()
```
Source notebook: churn.ipynb (repo: SteliosGian/Simple-ANN, license: MIT)
```
%matplotlib inline
from __future__ import division
from IPython.html.widgets import interact,fixed
from scipy import stats
import sympy
from sympy import S, init_printing, Sum, summation,expand
from sympy import stats as st
from sympy.abc import k
import re
```
## Maximum likelihood estimation
$$\hat{p} = \frac{1}{n} \sum_{i=1}^n X_i$$
$$ \mathbb{E}(\hat{p}) = p $$
$$ \mathbb{V}(\hat{p}) = \frac{p(1-p)}{n} $$
```
sympy.plot( k*(1-k),(k,0,1),ylabel='$n^2 \mathbb{V}$',xlabel='p',fontsize=28)
```
```
b=stats.bernoulli(p=.8)
samples=b.rvs(100)
print var(samples)
print mean(samples)
```
0.1659
0.79
```
def slide_figure(n=100,m=1000,p=0.8):
fig,ax=subplots()
ax.axis(ymax=25)
b=stats.bernoulli(p=p)
v=iter(b.rvs,2)
samples=b.rvs(n*1000)
tmp=(samples.reshape(n,-1).mean(axis=0))
ax.hist(tmp,normed=1)
ax1=ax.twinx()
ax1.axis(ymax=25)
ax1.plot(linspace(0,1,200),stats.norm(mean(tmp),std(tmp)).pdf(linspace(0.0,1,200)),lw=3,color='r')
```
```
interact(slide_figure, n=(100,500),p=(0.01,1,.05),m=fixed(500));
```
## Maximum A-Posteriori (MAP) Estimation
```
from sympy.abc import p,n,k
sympy.plot(st.density(st.Beta('p',3,3))(p),(p,0,1) ,xlabel='p')
```
### Maximize the MAP function to get form of estimator
```
obj=sympy.expand_log(sympy.log(p**k*(1-p)**(n-k) * st.density(st.Beta('p',6,6))(p)))
```
```
sol=sympy.solve(sympy.simplify(sympy.diff(obj,p)),p)[0]
print sol
```
(k + 5)/(n + 10)
$ \hat{p}_{MAP} = \frac{(5+\sum_{i=1}^n X_i )}{(n + 10)} $
with corresponding expectation
$ \mathbb{E} = \frac{(5+n p )}{(n + 10)} $
which is a biased estimator. The variance of this estimator is the following:
$$ \mathbb{V}(\hat{p}_{MAP}) = \frac{n (1-p) p}{(n+10)^2} $$
compare this to the variance of the maximum likelihood estimator, which is reproduced here:
$$ \mathbb{V}(\hat{p}_{ML}) = \frac{p(1-p)}{n} $$
```
n=5
def show_bias(n=30):
sympy.plot(p,(5+n*p)/(n+10),(p,0,1),aspect_ratio=1,title='more samples reduce bias')
interact(show_bias,n=(10,500,10));
```
Compute the variance of the MAP estimator
```
sum(sympy.var('x1:10'))
expr=((5+(sum(sympy.var('x1:10'))))/(n+10))
```
```
def apply_exp(expr):
tmp=re.sub('x[\d]+\*\*2','p',str(expand(expr)))
tmp=re.sub('x[\d]+\*x[\d]+','p**2',tmp)
tmp=re.sub('x[\d]+','p',tmp)
return sympy.sympify(tmp)
```
```
ex2 = apply_exp(expr**2)
print ex2
```
8*p**2/25 + 11*p/25 + 1/9
```
tmp=sympy.simplify(ex2 - (apply_exp(expr))**2 )
sympy.plot(tmp,p*(1-p)/10,(p,0,1))
```
### General case
```
def generate_expr(num_samples=10,alpha=6):
n = sympy.symbols('n')
obj=sympy.expand_log(sympy.log(p**k*(1-p)**(n-k) * st.density(st.Beta('p',alpha,alpha))(p)))
sol=sympy.solve(sympy.simplify(sympy.diff(obj,p)),p)[0]
expr=sol.replace(k,(sum(sympy.var('x1:%d'%(num_samples)))))
expr=expr.subs(n,num_samples)
ex2 = apply_exp(expr**2)
ex = apply_exp(expr)
return (ex,sympy.simplify(ex2-ex**2))
```
```
num_samples=10
X_bias,X_v = generate_expr(num_samples,alpha=2)
p1=sympy.plot(X_v,(p,0,1),show=False,line_color='b',ylim=(0,.03),xlabel='p')
X_bias,X_v = generate_expr(num_samples,alpha=6)
p2=sympy.plot(X_v,(p,0,1),show=False,line_color='r',xlabel='p')
p3=sympy.plot(p*(1-p)/num_samples,(p,0,1),show=False,line_color='g',xlabel='p')
p1.append(p2[0])
p1.append(p3[0])
p1.show()
```
```
p1=sympy.plot(n*(1-p)*p/(n+10)**2,(p,0,1),show=False,line_color='b',ylim=(0,.05),xlabel='p',ylabel='variance')
p2=sympy.plot((1-p)*p/n,(p,0,1),show=False,line_color='r',ylim=(0,.05),xlabel='p')
p1.append(p2[0])
p1.show()
```
```
def show_variance(n=5):
p1=sympy.plot(n*(1-p)*p/(n+10)**2,(p,0,1),show=False,line_color='b',ylim=(0,.05),xlabel='p',ylabel='variance')
p2=sympy.plot((1-p)*p/n,(p,0,1),show=False,line_color='r',ylim=(0,.05),xlabel='p')
p1.append(p2[0])
p1.show()
interact(show_variance,n=(10,120,2));
```
The obvious question is what is the value of a biased estimator? The key fact is that the MAP estimator is biased, yes, but it is biased according to the prior probability of $\theta$. Suppose that the true parameter $p=1/2$ which is exactly at the peak of the prior probability function. In this case, what is the bias?
$ \mathbb{E} = \frac{(5+n p )}{(n + 10)} -p \rightarrow 0$
and the variance of the MAP estimator at this point is the following:
$$ \frac{n p (1-p)}{(n+10)^2}\bigg|_{p=1/2} = \frac{n}{4(n+10)^2} < \frac{1}{4n} $$
which is strictly smaller than the corresponding variance of the maximum likelihood estimator at $p=1/2$.
```
pv=.60
nsamp=30
fig,ax=subplots()
rv = stats.bernoulli(pv)
map_est=(rv.rvs((nsamp,1000)).sum(axis=0)+5)/(nsamp+10);
ml_est=(rv.rvs((nsamp,1000)).sum(axis=0))/(nsamp);
_,bins,_=ax.hist(map_est,bins=20,alpha=.3,normed=True,label='MAP');
ax.hist(ml_est,bins=20,alpha=.3,normed=True,label='ML');
ax.vlines(map_est.mean(),0,12,lw=3,linestyle=':',color='b')
ax.vlines(pv,0,12,lw=3,color='r',linestyle=':')
ax.vlines(ml_est.mean(),0,12,lw=3,color='g',linestyle=':')
ax.legend()
```
```
```
Source notebook: 4-assets/BOOKS/Jupyter-Notebooks/Overflow/MAP_Estimation.ipynb (repo: impastasyndrome/Lambda-Resource-Static-Assets, license: MIT)
# Raumluftqualität 2.0 (Indoor Air Quality 2.0)
## Time evolution of the CO_2 concentration in rooms
A well-ventilated, empty room initially contains exactly as much CO_2 as the outdoor air.
Once people enter the room and release CO_2, the CO_2 concentration slowly increases. The level at which it eventually settles depends on the outdoor-air flow rate with which the room is ventilated.
In a completely unventilated room, the CO_2 produced by the occupants accumulates more and more in the room air, with the same amount of CO_2 released per unit of time.
### Example:
A room with a floor area of $15 \rm m^2$ and a ceiling height of $2.5 \rm m$ is occupied by 2 people, each exhaling $30\,{\frac{\ell}{h}}$ of CO_2. The CO_2 concentration of the outdoor air is 400 ppM. Up to 1200 ppM of CO_2 is admissible in the room.
Plot the time evolution of the CO_2 concentration in a diagram.
Given:
Room volume: $V_{\rm ra} = 15 {\rm m^2}\cdot 2.5 {\rm m} = 37.5 {\rm m^3}$
CO_2 production: $\dot V_{\rm sch} = 2\cdot 30 {\rm \dfrac{\ell}{h}} = 60\,000\, {\rm\dfrac{cm^3}{h}}$
This yields the rate of change
$
\dot k = \cfrac{\dot V_{\rm sch}}{V_{\rm ra}}
= {\rm\dfrac{60\,000\,cm^3}{37.5\, m^3\cdot h}}
= {\rm 1600 \dfrac{ppM}{h}}
$
For the CO_2 concentration in the room this gives:
\begin{align}
k(t)&= 400 {\rm ppM} + 1600\,{\rm\dfrac{ppM}{h}}\, t
\end{align}
This result is plotted in the following cells:
```python
import matplotlib.pyplot as plt
%config InlineBackend.figure_format = 'retina'
import pandas as pd
import numpy as np
```
```python
lt = np.linspace(0,120,13) # 10-minute steps
df = pd.DataFrame(
{
't': lt,
'k': 400 + 1600*lt/60 # 60min = 1h
}
)
display(df.T)
ax=df.plot(x='t',y='k', label='$k = k(t)$')
ax.axhline(1200,c='r')
ax.grid()
ax.set(
xlim=(0,120),xlabel='Zeit $t$ in $min$',
ylabel='CO_2-Konzentration $k$ in $\mathrm{ppM}$'
);
```
The admissible CO_2 concentration is reached after only a short time (about 30 minutes). After roughly one hour the indoor air quality is unacceptable.
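The crossing time can also be read off directly from $k(t) = 1200\,\mathrm{ppM}$:
```python
t_limit = (1200 - 400)/1600  # hours until the 1200 ppM limit is reached
t_limit*60                   # -> 30 minutes
```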
### Exercise
In a building ($400 \rm m^2$ floor area, $3.50 \rm m$ room height), 120 people perform moderately heavy physical work. Assuming the building is not ventilated, compute how the CO_2 concentration in the room develops. The CO_2 concentration of the outdoor air is 400 ppM.
After what time is the admissible CO_2 concentration of 1200 ppM exceeded?
Plot the process in a diagram.
```python
A_ra = 400 # m**2
h_ra = 3.5 # m
V_ra = A_ra*h_ra
V_ra
```
1400.0
```python
n = 120 # Personen
dV_co2_person = 30e-3 # 30 l/h
dV_co2 = n*dV_co2_person
dV_co2
```
3.5999999999999996
```python
k_0 = k_au = 400e-6 # 400 ppM
k_zul = 1200e-6 # 1200 ppM
lt = np.linspace(0,2) # time interval of two hours
df = pd.DataFrame(
{
't': lt,
'k': (k_0 + dV_co2/V_ra*lt)*1e6 # in ppM
}
)
df.head().T
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
</tr>
</thead>
<tbody>
<tr>
<th>t</th>
<td>0.0</td>
<td>0.040816</td>
<td>0.081633</td>
<td>0.122449</td>
<td>0.163265</td>
</tr>
<tr>
<th>k</th>
<td>400.0</td>
<td>504.956268</td>
<td>609.912536</td>
<td>714.868805</td>
<td>819.825073</td>
</tr>
</tbody>
</table>
</div>
```python
ax = df.plot(x='t',y='k',label='$k=k(t)$')
ax.set(
xlim=(0,4),xlabel='Zeit $t$ in Stunden',
ylim=(0,3500),ylabel='CO_2-Konzentration in ppM'
)
ax.axhline(k_zul*1e6,c='r')
ax.grid()
```
After one hour (calculated by hand):
```python
(k_au + dV_co2/V_ra)*1e6
```
2971.4285714285716
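To answer the exercise's question directly, the crossing time can be computed from the variables already defined (a small sketch; the result is roughly 0.31 h, i.e. about 19 minutes):
```python
# Time at which k(t) = k_0 + (dV_co2/V_ra)*t reaches the limit k_zul
t_limit = (k_zul - k_0) / (dV_co2 / V_ra)   # in hours
t_limit, t_limit * 60                       # hours, minutes
```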
|
d48528ada807fca0f40a0c44d0e0ce7aab2ae48e
| 93,187 |
ipynb
|
Jupyter Notebook
|
Notebooks/Notebook_2-Lsg_a.ipynb
|
w-meiners/rlt-rlq
|
215594ebdbcc364d81d29fa620389e006864cc4d
|
[
"MIT"
] | null | null | null |
Notebooks/Notebook_2-Lsg_a.ipynb
|
w-meiners/rlt-rlq
|
215594ebdbcc364d81d29fa620389e006864cc4d
|
[
"MIT"
] | null | null | null |
Notebooks/Notebook_2-Lsg_a.ipynb
|
w-meiners/rlt-rlq
|
215594ebdbcc364d81d29fa620389e006864cc4d
|
[
"MIT"
] | null | null | null | 213.242563 | 44,136 | 0.90239 | true | 1,558 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.787931 | 0.828939 | 0.653147 |
__label__deu_Latn
| 0.91338 | 0.35581 |
<a href="https://colab.research.google.com/github/nickwotton/MQP2019/blob/master/Nick/NN_linearfunction01.ipynb" target="_parent"></a>
# Attempt to Improve Solving a Linear Function using a Neural Network
Given code to use a neural network to fit a linear function, try to optimize the code to get a better fit, i.e. the data points completely overlap on the plot.
```python
import torch
import torch.nn as nn
import numpy as np
import torch.nn.functional as F
import matplotlib.pyplot as plt
```
## Define the Function
Here we define our function $f(x)=ax+b$ with coefficient $a=1$ and intercept term $b=2$.
Then we test the equation with a test value of 2.
```python
# target function
a = 1.
b = 2.
f = lambda x: a*x+b
#test
f(2.)
```
4.0
## Create Model
Next, we create the neural network model. This is done first by setting the inner and outer dimensions with variables. Next we code the model and vary the internal dimensions to attempt to improve the model. At this level, this is essentially a simple linear algebra exercise:
If we have input $x$, internal parameters $a,b$, and solution $f(x)$ then in the one-dimensional case we have:
\begin{equation}
\left(
a_{1}x+b_{1}
\right)
a_{2} + b_{2}
= f(x)
\end{equation}
However, we want to get a better estimate for the true equation. So we increase the interior dimension which corresponds to the number of neurons inside the network. For example, we raised the inner dimension to 3. In matrix form we have:
\begin{equation}
\left(
\begin{bmatrix} x \end{bmatrix}
\begin{bmatrix} a_{1} & a_{2} & a_{3} \end{bmatrix}
+
\begin{bmatrix} b_{1} & b_{2} & b_{3} \end{bmatrix}
\right)
\begin{bmatrix} a_{4} \\ a_{5} \\ a_{6} \end{bmatrix}
+
\begin{bmatrix} b_{4} \\ \end{bmatrix}
=
\begin{bmatrix} f(x) \end{bmatrix}
\end{equation}
Graphically, we can render this second neural network as:
What we discovered here is that the ReLU activation was slowing down training; since our target function is linear, we can simply remove it.
Additionally, we discerned that the higher the inner dimension, that is, the more nodes in each layer, the smaller the error and the better the performance.
```python
#model
#nn.Linear
in_dim = 1
out_dim = 1
model = nn.Sequential(
nn.Linear(in_dim, 30),
# nn.ReLU(),
nn.Linear(30, out_dim)
)
```
Here we define the Loss function as the Mean Squared Error(MSE).
Note that by doing so, we are essentially 'cheating' the system. In most applications, we would not know the function $f$ so we would be unable to find the MSE.
```python
#loss function
criterion = nn.MSELoss()
```
Next we choose a learning rate and an optimization method. The learning rate sets the step size of each parameter update during gradient descent. The methods we tried were SGD and Adam.
```python
#optimizer
learning_rate = 0.001
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
```
## Train the Model
First we create the training data. This is a batch of random points that we pass through our function $f$.
```python
#training data
batch_size = 1000
x_train = torch.randn(batch_size, 1)
y_train = f(x_train)
```
Once we have the training data, we pass this collection of inputs and solutions into the model. With each iteration we calculate the loss and attempt to optimize the model to further reduce the loss.
In this code we print out the loss every 10 iterations.
```python
# Train the model
num_epochs = 500
for epoch in range(num_epochs):
# Forward pass
outputs = model(x_train)
loss = criterion(outputs, y_train)
# Backward and optimize
optimizer.zero_grad()
loss.backward()
optimizer.step()
if (epoch+1) % 10 == 0:
print ('Epoch [{}/{}], Loss: {:.4f}'.format(epoch+1,
num_epochs, loss.item()))
```
Epoch [10/500], Loss: 3.6593
Epoch [20/500], Loss: 2.1498
Epoch [30/500], Loss: 1.2767
Epoch [40/500], Loss: 0.7678
Epoch [50/500], Loss: 0.4685
Epoch [60/500], Loss: 0.2906
Epoch [70/500], Loss: 0.1835
Epoch [80/500], Loss: 0.1180
Epoch [90/500], Loss: 0.0773
Epoch [100/500], Loss: 0.0515
Epoch [110/500], Loss: 0.0348
Epoch [120/500], Loss: 0.0239
Epoch [130/500], Loss: 0.0166
Epoch [140/500], Loss: 0.0116
Epoch [150/500], Loss: 0.0082
Epoch [160/500], Loss: 0.0058
Epoch [170/500], Loss: 0.0041
Epoch [180/500], Loss: 0.0030
Epoch [190/500], Loss: 0.0021
Epoch [200/500], Loss: 0.0015
Epoch [210/500], Loss: 0.0011
Epoch [220/500], Loss: 0.0008
Epoch [230/500], Loss: 0.0006
Epoch [240/500], Loss: 0.0004
Epoch [250/500], Loss: 0.0003
Epoch [260/500], Loss: 0.0002
Epoch [270/500], Loss: 0.0002
Epoch [280/500], Loss: 0.0001
Epoch [290/500], Loss: 0.0001
Epoch [300/500], Loss: 0.0001
Epoch [310/500], Loss: 0.0000
Epoch [320/500], Loss: 0.0000
Epoch [330/500], Loss: 0.0000
Epoch [340/500], Loss: 0.0000
Epoch [350/500], Loss: 0.0000
Epoch [360/500], Loss: 0.0000
Epoch [370/500], Loss: 0.0000
Epoch [380/500], Loss: 0.0000
Epoch [390/500], Loss: 0.0000
Epoch [400/500], Loss: 0.0000
Epoch [410/500], Loss: 0.0000
Epoch [420/500], Loss: 0.0000
Epoch [430/500], Loss: 0.0000
Epoch [440/500], Loss: 0.0000
Epoch [450/500], Loss: 0.0000
Epoch [460/500], Loss: 0.0000
Epoch [470/500], Loss: 0.0000
Epoch [480/500], Loss: 0.0000
Epoch [490/500], Loss: 0.0000
Epoch [500/500], Loss: 0.0000
## Testing the Model
Now that we have a trained model with low loss, we want to attempt to replicate the function. To do this we get another random sample of numbers. This sample is passed into both our function $f$ and the model.
We then graph both sets of points on a scatter plot. Since the model is highly accurate now, the two sets of points completely overlap.
```python
#test
x_ = torch.randn(50,1)
y_ = f(x_)
plt.scatter(x_.detach().numpy(), y_.detach().numpy(), label='true')
y_pred = model(x_)
plt.scatter(x_.detach().numpy(), y_pred.detach().numpy(), label='pred')
plt.legend()
```
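Because there is no nonlinearity between the two layers, the trained network is itself just an affine map. As a sanity check (a sketch that assumes the `model` trained above, with its two `Linear` layers at indices 0 and 1), we can collapse the layers and compare against the true coefficients $a=1$, $b=2$:
```python
# Collapse the two linear layers into a single effective map y = a_eff*x + b_eff
W1, b1 = model[0].weight.data, model[0].bias.data   # shapes (30, 1) and (30,)
W2, b2 = model[1].weight.data, model[1].bias.data   # shapes (1, 30) and (1,)
a_eff = (W2 @ W1).item()
b_eff = (W2 @ b1 + b2).item()
print(a_eff, b_eff)   # should be close to 1.0 and 2.0
```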
|
368c8844a3be12957deefdd2853a917567f1779f
| 22,180 |
ipynb
|
Jupyter Notebook
|
Nick/NN_linearfunction01.ipynb
|
vitaltavares/MQP2019
|
d7bda131450907beef3c9d619acf36d6c1ced2a9
|
[
"MIT"
] | 1 |
2019-09-13T16:27:14.000Z
|
2019-09-13T16:27:14.000Z
|
Nick/NN_linearfunction01.ipynb
|
vitaltavares/MQP2019
|
d7bda131450907beef3c9d619acf36d6c1ced2a9
|
[
"MIT"
] | 40 |
2019-09-02T17:49:28.000Z
|
2020-04-06T13:03:43.000Z
|
Nick/NN_linearfunction01.ipynb
|
vitaltavares/MQP2019
|
d7bda131450907beef3c9d619acf36d6c1ced2a9
|
[
"MIT"
] | 3 |
2019-09-11T01:51:56.000Z
|
2019-12-03T20:13:16.000Z
| 51.701632 | 8,682 | 0.659784 | true | 1,925 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.91848 | 0.808067 | 0.742194 |
__label__eng_Latn
| 0.885838 | 0.562697 |
# Optimal Portfolio Selection I
In the last class we saw that:
- The LAC (capital allocation line) describes the possible risk-return choices between a risk-free asset and a risky asset.
- Its slope equals the Sharpe ratio of the risky asset.
- The optimal capital allocation for any investor is the tangency point of the investor's indifference curve with the LAC (it depends on individual preferences - risk aversion).
For all of the above, we assumed that we already had the optimal (risky) portfolio.
In the following analysis:
**Objectives:**
- What is the optimal portfolio of risky assets?
- What is the best portfolio of risky assets?
- It is a mean-variance efficient portfolio.
- Problem: given a set of risky assets, how do we construct the best combination?
*Reference:*
- Lecture notes from the course "Portfolio Selection and Risk Management", Rice University, available on Coursera.
___
## 1. Maximizing the Sharpe Ratio
### What happens if we have two risky assets?
When we have two or more risky assets, different LACs are available. What do their slopes mean?
<font color=blue> See on the board.</font>
Question:
- What is it that we want?
**Conclusion:**
- The best portfolio of assets does not depend on individual preferences, and therefore it is the same for everyone.
- That best portfolio maximizes the Sharpe ratio.
- We will call this portfolio the mean-variance efficient portfolio (EMV, after its Spanish acronym).
**Main idea: the optimal portfolio of risky assets is independent of the investor's preferences.**
- The EMV portfolio determines the optimal portfolio of risky assets.
- We will all hold the same portfolio of risky assets (EMV), and each of us will combine it with the risk-free asset according to our own preferences (risk aversion).
- The LAC combining the risk-free asset and the EMV portfolio becomes the set of efficient portfolios.
Then, the following steps must be followed:
1. Build the mean-variance frontier.
2. Find the portfolio that maximizes the Sharpe ratio (the EMV portfolio).
3. Build the efficient frontier (LAC) from the point $(0,r_f)$ to the point $(\sigma_s,E[r_s])$ of the EMV portfolio.
4. Combine according to your preferences.
___
## 2. Analytical solution of the EMV portfolio: the two-asset case.
We want to solve the following problem:
\begin{align}
\max_{w_1,w_2} &\quad \frac{E[r_p]-r_f}{\sigma_p}\\
\text{s.t.} &\quad E[r_p]=w_1E[r_1]+w_2E[r_2]\\
&\quad \sigma_p=\sqrt{w_1^2\sigma_1^2+w_2^2\sigma_2^2+2w_1w_2\rho_{12}\sigma_1\sigma_2}\\
&\quad w_1+w_2=1, \quad w_1,w_2\geq0
\end{align}
which is equivalent to
\begin{align}
\max_{w_1} &\quad \frac{w_1E[r_1]+(1-w_1)E[r_2]-r_f}{\sqrt{w_1^2\sigma_1^2+(1-w_1)^2\sigma_2^2+2w_1(1-w_1)\rho_{12}\sigma_1\sigma_2}}\\
\text{s.t.} &\quad 0\leq w_1\leq1
\end{align}
**Activity.**
The above is a problem of maximizing a function of one variable on a closed domain. It should not pose any difficulty.
Find the analytical solution to this problem.
Whoever does it first, and comes to the board to explain it, gets a homework or quiz grade raised to 100.
You should arrive at:
$$w_{1,EMV}=\frac{(E[r_1]-r_f)\sigma_2^2-(E[r_2]-r_f)\sigma_{12}}{(E[r_2]-r_f)\sigma_1^2+(E[r_1]-r_f)\sigma_2^2-((E[r_1]-r_f)+(E[r_2]-r_f))\sigma_{12}}.$$
If nobody has done it within 30 min., I will do it myself.
**Note:**
- Just as we obtained an expression for the weight of the minimum-variance portfolio with two assets, we obtain an expression for the weight of the mean-variance efficient portfolio.
- These activities are certainly a good exercise, and they can be replicated using multivariable techniques (Lagrange multipliers) when there are more than two assets.
- However, the complexity of the problem grows considerably with the number of variables, and the analytical solution stops being viable once we recall that a well-diversified portfolio consists of roughly 50-60 assets.
- In those cases, this problem is solved with numerical routines that carry out the optimization for us.
- That is why I show you how to solve this problem with numerical optimizers, because they are a viable solution that scales to more variables; see the sketch below.
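The following is a minimal sketch (not part of the original material) of that numerical approach for $N$ risky assets; it assumes a vector of expected returns `Er`, a covariance matrix `Sigma` and a risk-free rate `rf`, and the helper name `emv_weights` is ours:
```python
# Sketch: EMV portfolio for N risky assets by numerically maximizing the Sharpe ratio
import numpy as np
from scipy.optimize import minimize

def emv_weights(Er, Sigma, rf):
    n = len(Er)
    def neg_sharpe(w):
        ret = w @ Er                      # portfolio expected return
        vol = np.sqrt(w @ Sigma @ w)      # portfolio volatility
        return -(ret - rf) / vol
    cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1},)   # weights sum to 1
    bnds = tuple((0, 1) for _ in range(n))                   # no short sales
    res = minimize(neg_sharpe, np.ones(n) / n, bounds=bnds, constraints=cons)
    return res.x
```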
## 3. Illustrative example.
We return to the example of the stock markets of the $G5$ countries: EU (US), RU (UK), Francia (France), Alemania (Germany) and Japon (Japan), as labeled in the data below.
```python
# Import pandas and numpy
import pandas as pd
import numpy as np
```
```python
# Annualized summary of expected returns and volatilities
annual_ret_summ = pd.DataFrame(columns=['EU', 'RU', 'Francia', 'Alemania', 'Japon'], index=['Media', 'Volatilidad'])
annual_ret_summ.loc['Media'] = np.array([0.1355, 0.1589, 0.1519, 0.1435, 0.1497])
annual_ret_summ.loc['Volatilidad'] = np.array([0.1535, 0.2430, 0.2324, 0.2038, 0.2298])
annual_ret_summ.round(4)
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>EU</th>
<th>RU</th>
<th>Francia</th>
<th>Alemania</th>
<th>Japon</th>
</tr>
</thead>
<tbody>
<tr>
<th>Media</th>
<td>0.1355</td>
<td>0.1589</td>
<td>0.1519</td>
<td>0.1435</td>
<td>0.1497</td>
</tr>
<tr>
<th>Volatilidad</th>
<td>0.1535</td>
<td>0.243</td>
<td>0.2324</td>
<td>0.2038</td>
<td>0.2298</td>
</tr>
</tbody>
</table>
</div>
```python
# Correlation matrix
corr = pd.DataFrame(data= np.array([[1.0000, 0.5003, 0.4398, 0.3681, 0.2663],
[0.5003, 1.0000, 0.5420, 0.4265, 0.3581],
[0.4398, 0.5420, 1.0000, 0.6032, 0.3923],
[0.3681, 0.4265, 0.6032, 1.0000, 0.3663],
[0.2663, 0.3581, 0.3923, 0.3663, 1.0000]]),
columns=annual_ret_summ.columns, index=annual_ret_summ.columns)
corr.round(4)
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>EU</th>
<th>RU</th>
<th>Francia</th>
<th>Alemania</th>
<th>Japon</th>
</tr>
</thead>
<tbody>
<tr>
<th>EU</th>
<td>1.0000</td>
<td>0.5003</td>
<td>0.4398</td>
<td>0.3681</td>
<td>0.2663</td>
</tr>
<tr>
<th>RU</th>
<td>0.5003</td>
<td>1.0000</td>
<td>0.5420</td>
<td>0.4265</td>
<td>0.3581</td>
</tr>
<tr>
<th>Francia</th>
<td>0.4398</td>
<td>0.5420</td>
<td>1.0000</td>
<td>0.6032</td>
<td>0.3923</td>
</tr>
<tr>
<th>Alemania</th>
<td>0.3681</td>
<td>0.4265</td>
<td>0.6032</td>
<td>1.0000</td>
<td>0.3663</td>
</tr>
<tr>
<th>Japon</th>
<td>0.2663</td>
<td>0.3581</td>
<td>0.3923</td>
<td>0.3663</td>
<td>1.0000</td>
</tr>
</tbody>
</table>
</div>
We will also assume that the risk-free rate is $r_f=5\%$.
```python
# Risk-free rate
rf = 0.05
```
So we will assume that we have available the assets corresponding to the US (EU) and Japan (Japon) stock markets, plus the risk-free asset.
#### 1. Build the minimum-variance frontier
```python
# Vector of w values from 0 to 1 in N steps
N = 101
w = np.linspace(0, 1, N)
# Individual expected returns
# Asset1: EU (US), Asset2: Japon (Japan)
E1 = annual_ret_summ.loc['Media', 'EU']
E2 = annual_ret_summ.loc['Media', 'Japon']
# Individual volatilities
s1 = annual_ret_summ.loc['Volatilidad', 'EU']
s2 = annual_ret_summ.loc['Volatilidad', 'Japon']
# Correlation
r12 = corr.loc['EU', 'Japon']
# Covariance
s12 = s1 * s2 * r12
```
```python
# DataFrame of portfolios:
# 1. Index: i
# 2. Columns 1-2: w, 1-w
# 3. Columns 3-4: E[r], sigma
# 4. Column 5: Sharpe ratio
portafolios = pd.DataFrame({'w': w,
'1-w': 1 - w,
'Media': w * E1 + (1 - w) * E2,
'Vol': ((w * s1)**2 + ((1 - w) * s2)**2 + 2 * w * (1 - w) * s12) ** 0.5
})
portafolios['RS'] = (portafolios['Media'] - rf) / portafolios['Vol']
portafolios.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>w</th>
<th>1-w</th>
<th>Media</th>
<th>Vol</th>
<th>RS</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.00</td>
<td>1.00</td>
<td>0.149700</td>
<td>0.229800</td>
<td>0.433856</td>
</tr>
<tr>
<th>1</th>
<td>0.01</td>
<td>0.99</td>
<td>0.149558</td>
<td>0.227916</td>
<td>0.436820</td>
</tr>
<tr>
<th>2</th>
<td>0.02</td>
<td>0.98</td>
<td>0.149416</td>
<td>0.226041</td>
<td>0.439814</td>
</tr>
<tr>
<th>3</th>
<td>0.03</td>
<td>0.97</td>
<td>0.149274</td>
<td>0.224176</td>
<td>0.442839</td>
</tr>
<tr>
<th>4</th>
<td>0.04</td>
<td>0.96</td>
<td>0.149132</td>
<td>0.222322</td>
<td>0.445894</td>
</tr>
</tbody>
</table>
</div>
```python
# Import plotting libraries
from matplotlib import pyplot as plt
%matplotlib inline
```
```python
# Scatter plot of the portfolios, colored
# according to their Sharpe ratio (RS)
plt.figure(figsize=(6, 4))
plt.plot(s1, E1, 'ro', ms=10, label='EU')
plt.plot(s2, E2, 'bo', ms=10, label='Japón')
plt.scatter(portafolios['Vol'], portafolios['Media'], c=portafolios['RS'], cmap='inferno', label='Portafolios')
plt.xlabel('Volatilidad $\sigma$')
plt.ylabel('Rendimiento esperado $E[r]$')
plt.grid()
plt.legend(loc='best')
plt.colorbar()
```
#### 2. Find the portfolio that maximizes the Sharpe ratio (EMV)
First, we find this portfolio with the formula we obtained:
$$w_{1,EMV}=\frac{(E[r_1]-r_f)\sigma_2^2-(E[r_2]-r_f)\sigma_{12}}{(E[r_2]-r_f)\sigma_1^2+(E[r_1]-r_f)\sigma_2^2-((E[r_1]-r_f)+(E[r_2]-r_f))\sigma_{12}}.$$
```python
# Formula we obtained
w_EMV = ((E1 - rf) * s2**2 - (E2 - rf) * s12) / ((E2 - rf) * s1**2 + (E1 - rf) * s2**2 - (E1 - rf + E2 - rf) * s12)
w_EMV
```
0.6983139170512034
Now, using the function scipy.optimize.minimize
```python
# Import the minimize function from the optimize module
from scipy.optimize import minimize
```
```python
# Objective function (negative Sharpe ratio)
def menos_RS(w, E1, E2, s1, s2, s12, rf):
Erp = w * E1 + (1 - w) * E2
sp = ((w * s1)**2 + ((1 - w) * s2)**2 + 2 * w *(1 - w) * s12)**0.5
RS = (Erp - rf) / sp
return -RS
```
```python
# Initial guess
w0 = 0.5
# Bounds on the variables
bnd = ((0, 1),)
```
```python
# Numerical optimization
res = minimize(fun=menos_RS,
x0=w0,
args=(E1, E2, s1, s2, s12, rf),
bounds=bnd
)
# Result
res
```
fun: array([-0.63087253])
hess_inv: <1x1 LbfgsInvHessProduct with dtype=float64>
jac: array([3.39728246e-06])
message: b'CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL'
nfev: 10
nit: 4
status: 0
success: True
x: array([0.69831582])
```python
w_EMV, res.x
```
(0.6983139170512034, array([0.69831582]))
With the above, we can obtain the expected return and volatility of the EMV portfolio
```python
# Expected return and volatility of the EMV portfolio
E_EMV = w_EMV * E1 + (1 - w_EMV) * E2
s_EMV = ((w_EMV * s1)**2 + ((1 - w_EMV) * s2)**2 + 2 * w_EMV * (1 - w_EMV) * s12)**0.5
E_EMV, s_EMV
```
(0.13978394237787292, 0.14231708951606933)
```python
```
```python
# Scatter plot of the portfolios, colored
# according to their Sharpe ratio, plus the EMV portfolio
plt.figure(figsize=(6, 4))
plt.plot(s1, E1, 'ro', ms=10, label='EU')
plt.plot(s2, E2, 'bo', ms=10, label='Japón')
plt.scatter(portafolios['Vol'], portafolios['Media'], c=portafolios['RS'], cmap='inferno', label='Portafolios')
plt.plot(s_EMV, E_EMV, 'g*', ms=10, label='Port. EMV')
plt.xlabel('Volatilidad $\sigma$')
plt.ylabel('Rendimiento esperado $E[r]$')
plt.grid()
plt.legend(loc='best')
plt.colorbar()
```
#### 3. Build the LAC
Now we draw the LAC (capital allocation line), combining the EMV portfolio with the risk-free asset:
```python
# Vector of wp values from 0 to 1.5
wp = np.linspace(0, 1.5)
```
```python
# DataFrame of the CAL:
# 1. Index: i
# 2. Columns 1-2: wp, wrf
# 3. Columns 3-4: E[r], sigma
# 4. Column 5: Sharpe ratio
LAC = pd.DataFrame({'w(EMV)': wp,
'w(rf)': 1 - wp,
'Media': wp * E_EMV + (1 - wp) * rf,
'Vol': wp * s_EMV
})
LAC['RS'] = (LAC['Media'] - rf) / LAC['Vol']
LAC.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>w(EMV)</th>
<th>w(rf)</th>
<th>Media</th>
<th>Vol</th>
<th>RS</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.000000</td>
<td>1.000000</td>
<td>0.050000</td>
<td>0.000000</td>
<td>NaN</td>
</tr>
<tr>
<th>1</th>
<td>0.030612</td>
<td>0.969388</td>
<td>0.052748</td>
<td>0.004357</td>
<td>0.630873</td>
</tr>
<tr>
<th>2</th>
<td>0.061224</td>
<td>0.938776</td>
<td>0.055497</td>
<td>0.008713</td>
<td>0.630873</td>
</tr>
<tr>
<th>3</th>
<td>0.091837</td>
<td>0.908163</td>
<td>0.058245</td>
<td>0.013070</td>
<td>0.630873</td>
</tr>
<tr>
<th>4</th>
<td>0.122449</td>
<td>0.877551</td>
<td>0.060994</td>
<td>0.017427</td>
<td>0.630873</td>
</tr>
</tbody>
</table>
</div>
```python
# Scatter plot of the portfolios, colored
# according to their Sharpe ratio, plus the EMV portfolio and the LAC
plt.figure(figsize=(6, 4))
plt.plot(s1, E1, 'ro', ms=10, label='EU')
plt.plot(s2, E2, 'bo', ms=10, label='Japón')
plt.scatter(portafolios['Vol'], portafolios['Media'], c=portafolios['RS'], cmap='inferno', label='Portafolios')
plt.plot(s_EMV, E_EMV, 'g*', ms=10, label='Port. EMV')
plt.plot(LAC['Vol'], LAC['Media'], '--k', lw=2, label='LAC')
plt.xlabel('Volatilidad $\sigma$')
plt.ylabel('Rendimiento esperado $E[r]$')
plt.grid()
plt.legend(loc='best')
plt.colorbar()
plt.axis([0.13, 0.16, 0.13, 0.15])
```
#### 4. Optimal combination according to preferences
With the data above, and the characterization of risk aversion, the optimal combination of the EMV portfolio and the risk-free asset is chosen according to:
$$w^\ast=\frac{E[r_s-r_f]}{\gamma\sigma_s^2}.$$
```python
# For gamma=7
g = 7
w_cap = (E_EMV - rf) / (g * s_EMV**2)
w_cap
```
0.6332665142475
```python
w_cap * w_EMV, w_cap * (1 - w_EMV), 1 - w_cap
```
(0.44221882010153346, 0.1910476941459666, 0.3667334857525)
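With these weights, the expected return and volatility of the complete (optimal) portfolio follow directly; a short sketch reusing the variables defined above:
```python
# Expected return and volatility of the optimal complete portfolio for gamma = 7
E_port = rf + w_cap * (E_EMV - rf)
s_port = w_cap * s_EMV
E_port, s_port
```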
<footer id="attribution" style="float:right; color:#808080; background:#fff;">
Created with Jupyter by Esteban Jiménez Rodríguez.
</footer>
|
ff3dace4604663ed249595cbd6ad833a8611079c
| 123,331 |
ipynb
|
Jupyter Notebook
|
Modulo3/Clase13_SeleccionOptimaPortI.ipynb
|
HKael/porinvv2020
|
f6f56a516a25786018321c4537b4680b307d28a9
|
[
"MIT"
] | null | null | null |
Modulo3/Clase13_SeleccionOptimaPortI.ipynb
|
HKael/porinvv2020
|
f6f56a516a25786018321c4537b4680b307d28a9
|
[
"MIT"
] | null | null | null |
Modulo3/Clase13_SeleccionOptimaPortI.ipynb
|
HKael/porinvv2020
|
f6f56a516a25786018321c4537b4680b307d28a9
|
[
"MIT"
] | null | null | null | 119.045367 | 34,316 | 0.845902 | true | 5,820 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.596433 | 0.867036 | 0.517129 |
__label__spa_Latn
| 0.73043 | 0.039793 |
## 1) Numpy Arrays vs Lists
```python
import numpy as np
```
```python
L = [1,2,3]
A = np.array([1,2,3])
L, A
```
([1, 2, 3], array([1, 2, 3]))
```python
L + [5]
```
[1, 2, 3, 5]
```python
L
```
[1, 2, 3]
```python
A + [5]
```
array([6, 7, 8])
The + operator works differently on lists and numpy arrays. <br>
While on lists it performs an append (concatenation), on numpy arrays it is treated as vector addition.
```python
L * 2
```
[1, 2, 3, 1, 2, 3]
```python
A * 2
```
array([2, 4, 6])
The * operator works differently on lists and numpy arrays.<br>
While it repeats a list, on numpy arrays it is treated as element-wise multiplication between vectors.
The dot product is given by:
$$\begin{align}
a \cdot b &= a^{T}b \\
&= \sum\limits_{i=0}^{n} a_ib_i\\
&= |a||b|\cos{\theta}
\end{align}
$$
```python
dot_list = 0
for e in L:
dot_list += e*e
dot_list
```
14
```python
(A*A).sum(), np.dot(A,A)
```
(14, 14)
```python
a = np.array([1,3,4])
b = np.array([2,4,1])
```
$$
\cos{\theta} = \frac{a\cdot b}{|a||b|}
$$
```python
cos = a.dot(b)/(np.linalg.norm(a)*np.linalg.norm(b))
cos, np.arccos(cos)
```
(0.7703288865196434, 0.6914395540229785)
### Speed Comparison: Dot Product
Compare the speed of computing the dot product using a `for loop` versus the function `numpy.dot()`.<br>
Two 1x100 vectors are created for the test, and each method is timed over 1000 repetitions.
```python
from datetime import datetime
```
```python
a = np.random.randn(100)
b = np.random.randn(100)
T = 1000
```
```python
def for_dot(a,b):
result = 0
for e,f in zip(a,b):
result += e*f
return result
t0 = datetime.now()
for t in range(T):
for_dot(a,b)
dt1 = datetime.now() - t0
t0 = datetime.now()
for t in range(T):
a.dot(b)
dt2 = datetime.now() - t0
dt1,dt2,dt1.total_seconds()/dt2.total_seconds()
```
(datetime.timedelta(microseconds=79336),
datetime.timedelta(microseconds=1879),
42.222458754656735)
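The same comparison can also be made with the standard `timeit` module (a sketch; the exact numbers will vary by machine):
```python
import timeit
t_loop = timeit.timeit(lambda: for_dot(a, b), number=T)
t_numpy = timeit.timeit(lambda: a.dot(b), number=T)
t_loop / t_numpy   # speed-up factor of the vectorized version
```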
### Creating Arrays
```python
Z = np.zeros(10)
Z
```
array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
```python
np.zeros((10,10))
```
array([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])
```python
np.ones((10,10))
```
array([[1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]])
```python
np.random.random((10,10))
# Uniform random numbers with 0<x<1
```
array([[0.8727353 , 0.7101418 , 0.99712466, 0.6913042 , 0.18928091,
0.3129924 , 0.8707541 , 0.76127674, 0.04484837, 0.20726868],
[0.58037897, 0.85199027, 0.12931742, 0.69984037, 0.54857979,
0.19323108, 0.03747739, 0.28693611, 0.13446493, 0.92038251],
[0.76115359, 0.82189855, 0.59756655, 0.59055865, 0.98912974,
0.53747975, 0.14158241, 0.48285143, 0.36049548, 0.74296014],
[0.24990797, 0.54769394, 0.61527197, 0.10531474, 0.71674935,
0.00848703, 0.63823206, 0.51600374, 0.45159856, 0.12775354],
[0.12094621, 0.26349439, 0.0460745 , 0.02141862, 0.83742369,
0.77048361, 0.84818114, 0.58709173, 0.03627597, 0.48886898],
[0.90964074, 0.94232793, 0.66774458, 0.86564113, 0.25928235,
0.53727432, 0.50713657, 0.13698081, 0.94057636, 0.12699759],
[0.85851277, 0.9387563 , 0.20228787, 0.3772616 , 0.04864407,
0.73258449, 0.06612805, 0.80031866, 0.10438979, 0.18995695],
[0.80844273, 0.23237812, 0.28653701, 0.25980382, 0.13690201,
0.15392922, 0.10574834, 0.48174979, 0.47251255, 0.73375288],
[0.00802248, 0.66516244, 0.53422215, 0.2853337 , 0.75394833,
0.83927944, 0.082604 , 0.2291872 , 0.68355877, 0.53528026],
[0.55539544, 0.17080853, 0.61780461, 0.95634259, 0.57328085,
0.64070114, 0.44743496, 0.71544752, 0.44835268, 0.7934082 ]])
```python
G = np.random.randn(10,10)
G
# Random numbers from a Gaussian distribution
```
array([[-0.29325655, 0.6514455 , -0.59313231, 0.45996943, 1.3397887 ,
0.1356255 , 1.33221726, 0.36584929, 0.40187569, 1.30479922],
[ 0.69069344, 0.16975026, -0.14295021, -1.62121735, 1.43983921,
-0.74673648, -0.70051634, -0.19065805, 0.99060591, 1.67003266],
[-1.85002381, -0.79447583, -1.47101947, -0.77519736, 0.60512255,
1.69136405, -0.37414819, -0.41072259, 1.37359439, 0.34048061],
[ 0.07398756, -0.68716856, 0.7761272 , 1.70610709, -0.09549806,
-0.4563812 , 0.6250297 , 0.76567729, 0.65995112, -1.46688799],
[-1.15924409, 0.91724841, -0.34138872, -2.58894929, -0.23099132,
0.3138421 , 0.49306887, -0.94478341, -0.6026703 , 0.73182024],
[-0.35001544, -1.2157117 , -0.52556314, 0.14471904, 0.30952623,
0.84092323, 1.36910066, 0.95006108, 1.30409573, 1.00748436],
[-0.457104 , 0.10644546, 1.2413489 , 1.57673014, 0.92612354,
1.21189605, -1.28566399, 0.46028265, 0.32939659, -0.75769981],
[ 1.20889008, -0.74653187, -2.11225921, 1.06619329, 0.58199165,
1.46652055, 0.51299538, -0.22632982, -0.70686765, -0.35628585],
[-0.57797704, -0.60259817, 1.3371527 , -0.70509169, 1.16501866,
0.38331481, -0.69265983, -0.27892637, -0.89107526, 0.18796167],
[-1.39126114, 1.09006032, 0.36083266, 0.04547842, 0.51360868,
0.71687125, 0.26616739, 0.18450924, -0.97733634, -0.14369357]])
```python
G.mean()
## Mean
```
0.11352944229300384
```python
G.var()
## Variance
```
0.8612373657553634
```python
np.linalg.inv(G)
## Inverse of the matrix
```
array([[-0.12888897, 0.20176253, -0.37353808, 0.0865699 , 0.40538869,
0.15471414, 0.36799712, 0.2736015 , -0.16607605, -0.28100721],
[ 0.05492664, 0.15958189, -0.1694259 , 0.03874689, 0.31843647,
-0.08811236, 0.33860869, 0.02826085, -0.42919972, 0.14344601],
[ 0.22426181, -0.23724135, -0.2569901 , 0.03846767, 0.45572269,
0.09262413, 0.45431058, -0.13187685, 0.19296182, -0.56831628],
[ 0.56757278, -0.4976866 , 0.12779579, -0.29184054, -0.31054629,
-0.27194361, 0.03941485, -0.17247039, 0.13287871, -0.44301028],
[ 0.08329942, 0.38087176, 0.06966686, 0.40665812, 0.01732694,
-0.15537987, -0.06283071, 0.20244253, 0.17223829, 0.25016165],
[-0.08987573, -0.09772335, -0.21197512, -0.11253102, 0.5388706 ,
0.31441403, 0.60904867, 0.20640591, -0.12924144, -0.24885146],
[ 0.34369076, -0.18141415, -0.16873184, 0.3443168 , 0.53783069,
0.07429966, 0.10817637, 0.09360505, 0.060279 , -0.36590588],
[-1.0628979 , 0.78289266, -0.17901836, 0.0330127 , -0.6220006 ,
0.68497583, -0.38634435, 0.10424462, -0.39669954, 1.58077643],
[ 0.16104689, 0.0660416 , 0.05603093, 0.25848534, 0.41338857,
-0.01003739, 0.39605931, -0.06990624, -0.254451 , -0.48900586],
[ 0.43319019, -0.40228234, -0.00552417, -0.72580933, -0.32709565,
0.07877201, -0.01662383, -0.28436166, 0.18246469, -0.38697404]])
```python
## Inverse * Matrix = I
G.dot(np.linalg.inv(G))
```
array([[ 1.00000000e+00, -1.11022302e-16, 1.73472348e-16,
0.00000000e+00, 3.33066907e-16, -3.05311332e-16,
7.97972799e-17, 5.55111512e-17, 1.11022302e-16,
-3.33066907e-16],
[-1.11022302e-16, 1.00000000e+00, -3.81639165e-17,
0.00000000e+00, 2.22044605e-16, 1.38777878e-16,
1.63064007e-16, 5.55111512e-17, 0.00000000e+00,
-4.44089210e-16],
[-2.77555756e-17, -1.11022302e-16, 1.00000000e+00,
0.00000000e+00, 2.63677968e-16, 4.85722573e-17,
3.55618313e-17, -5.55111512e-17, -4.16333634e-17,
1.38777878e-16],
[-2.22044605e-16, -4.44089210e-16, 1.17961196e-16,
1.00000000e+00, 0.00000000e+00, -2.63677968e-16,
1.42247325e-16, -1.11022302e-16, 0.00000000e+00,
-6.66133815e-16],
[ 3.88578059e-16, -3.33066907e-16, 1.45716772e-16,
-1.11022302e-16, 1.00000000e+00, -2.42861287e-16,
1.56125113e-16, -5.55111512e-17, 1.66533454e-16,
-6.10622664e-16],
[ 1.66533454e-16, -3.33066907e-16, 9.28077060e-17,
2.22044605e-16, 1.11022302e-16, 1.00000000e+00,
-1.49186219e-16, -5.55111512e-17, 1.94289029e-16,
-4.44089210e-16],
[-2.22044605e-16, -3.33066907e-16, -6.41847686e-17,
-1.11022302e-16, -2.22044605e-16, 1.24900090e-16,
1.00000000e+00, -8.32667268e-17, 3.05311332e-16,
1.11022302e-15],
[-5.55111512e-17, 0.00000000e+00, -1.66533454e-16,
0.00000000e+00, -2.22044605e-16, 4.16333634e-17,
-2.44596010e-16, 1.00000000e+00, 1.38777878e-16,
1.66533454e-16],
[ 6.93889390e-17, -1.80411242e-16, -6.24500451e-17,
-2.77555756e-17, 8.32667268e-17, 9.88792381e-17,
8.80372164e-17, -1.31838984e-16, 1.00000000e+00,
-1.11022302e-16],
[ 0.00000000e+00, -3.46944695e-17, 1.91903785e-17,
5.55111512e-17, 2.98372438e-16, -1.71737624e-16,
1.60895602e-16, -9.02056208e-17, -1.76941795e-16,
1.00000000e+00]])
Outer product
```python
a = np.array([1,3,6])
b = np.array([3,1,9])
c = np.outer(a,b)
c
```
array([[ 3, 1, 9],
[ 9, 3, 27],
[18, 6, 54]])
```python
np.trace(G)
```
-1.8252494059901754
```python
## Computing the covariance of a fake dataset, 3 features and 100 samples
X = np.random.randn(100,3)
```
```python
cov = np.cov(X.T)
```
```python
cov
```
array([[ 0.9277445 , 0.03838728, -0.06852152],
[ 0.03838728, 1.03476599, -0.02533857],
[-0.06852152, -0.02533857, 1.09690852]])
```python
np.linalg.eig(cov)
```
(array([0.89761064, 1.13507008, 1.02673828]),
array([[-0.93302255, 0.35190946, 0.07502439],
[ 0.20672837, 0.35361487, 0.91226088],
[-0.29450349, -0.86666964, 0.40268031]]))
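A quick check that these really are eigenpairs of the covariance matrix (the columns of the second array are the eigenvectors); a small sketch:
```python
w, v = np.linalg.eig(cov)
np.allclose(cov @ v, v * w)   # cov @ v[:, i] == w[i] * v[:, i] for every column i
```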
```python
a = np.array([[1,3,4],[2,2,5],[5,5,4]])
b = np.array([3,np.pi,np.e])
```
```python
x = np.linalg.inv(a).dot(b)
x
```
array([-0.20115547, 0.26145185, 0.60419998])
```python
np.linalg.solve(a,b)
```
array([-0.20115547, 0.26145185, 0.60419998])
#### Solving a problem
Admission to a fair costs R\\$1.50 for children and R\\$4.00 for adults. On a certain day 2200 people entered the fair and total ticket sales were R\\$5050.00. How many children and how many adults went to the fair?
Let $x_1$ be the number of children and $x_2$ the number of adults:
$$
\begin{cases}
x_1 + x_2 &= 2200 \\ 1.5 x_1 + 4x_2 &= 5050
\end{cases}
$$
$$Ax = b$$
$$
\begin{pmatrix}
1 & 1 \\
1.5 & 4
\end{pmatrix}
\cdot
\begin{pmatrix}
x_1 \\ x_2
\end{pmatrix}
=
\begin{pmatrix}
2200 \\ 5050
\end{pmatrix}
$$
```python
A = np.array([[1,1],[1.5,4]])
b = np.array([2200,5050])
x = np.linalg.solve(A,b)
x[0],x[1]
```
(1500.0, 700.0)
```python
```
|
5d7b5909ba135d857ef6e538f623461433883dba
| 22,847 |
ipynb
|
Jupyter Notebook
|
numpy-stack-python/Numpy.ipynb
|
mirandagil/extra-courses
|
51858f5089b10b070de43ea3809697760aa261ec
|
[
"MIT"
] | null | null | null |
numpy-stack-python/Numpy.ipynb
|
mirandagil/extra-courses
|
51858f5089b10b070de43ea3809697760aa261ec
|
[
"MIT"
] | null | null | null |
numpy-stack-python/Numpy.ipynb
|
mirandagil/extra-courses
|
51858f5089b10b070de43ea3809697760aa261ec
|
[
"MIT"
] | null | null | null | 25.903628 | 251 | 0.463299 | true | 5,670 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.893309 | 0.715424 | 0.639095 |
__label__krc_Cyrl
| 0.370962 | 0.323162 |
# Basic Symbolic Quantum Mechanics
```python
from sympy import init_printing
init_printing(use_latex=True)
```
```python
from sympy import sqrt, symbols, Rational, srepr
from sympy import expand, Eq, Symbol, simplify, exp, sin
from sympy.physics.quantum import *
from sympy.physics.quantum.qubit import *
from sympy.physics.quantum.gate import *
from sympy.physics.quantum.grover import *
from sympy.physics.quantum.qft import QFT, IQFT, Fourier
from sympy.physics.quantum.circuitplot import circuit_plot
```
/Users/bgranger/anaconda/lib/python3.5/site-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')
/Users/bgranger/anaconda/lib/python3.5/site-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')
## Bras and kets
Symbolic kets can be created using the `Ket` class as seen here:
```python
phi, psi = Ket('phi'), Ket('psi')
```
These ket instances are fully symbolic and behave exactly like the corresponding mathematical entities.
For example, one can form a linear combination using addition and scalar multiplication:
```python
alpha = Symbol('alpha', complex=True)
beta = Symbol('beta', complex=True)
state = alpha*psi + beta*phi; state
```
Bras can be created using the `Bra` class directly or by using the `Dagger` class
on an expression involving kets:
```python
ip = Dagger(state)*state; ip
```
Because this is a standard SymPy expression, we can use standard SymPy functions and methods
for manipulating expressions. Here we use `expand` to multiply this expression out, followed
by `qapply`, which identifies inner and outer products in an expression.
```python
qapply(expand(ip))
```
## Operators
SymPy also has a full set of classes for handling symbolic operators. Here we create three operators,
one of which is hermitian:
```python
A = Operator('A')
B = Operator('B')
C = HermitianOperator('C')
```
When used in arithmetic expressions SymPy knows that operators do not commute under
multiplication/composition as is seen by expanding a polynomial of operators:
```python
expand((A+B)**2)
```
Commutators of operators can also be created:
```python
comm = Commutator(A*B,B+C); comm
```
The `expand` function has custom logic for expanding commutators using standard commutator
relations:
```python
comm.expand(commutator=True)
```
Any commutator can be performed ($[A,B]\rightarrow AB-BA$) using the `doit` method:
```python
_.doit().expand()
```
The `Dagger` class also works with operators and is aware of the properties of unitary
and hermitian operators:
```python
Dagger(_)
```
## Tensor products
Symbolic tensor products of operators and states can also be created and manipulated:
```python
op = TensorProduct(A,B+C)
state = TensorProduct(psi,phi)
op*state
```
Once a tensor product has been created, it can be simplified,
```python
tensor_product_simp(_)
```
and expanded:
```python
expand(_)
```
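The qubit and gate classes imported at the top follow the same pattern. As a small illustrative example (not tied to the cells above), applying a Hadamard gate to a computational-basis qubit:
```python
from sympy.physics.quantum.qubit import Qubit
from sympy.physics.quantum.gate import H

qapply(H(0)*Qubit('0'))   # sqrt(2)/2*|0> + sqrt(2)/2*|1>
```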
|
8334c14bc0c7604b2b4cf17902abbee3e67d4fd1
| 28,209 |
ipynb
|
Jupyter Notebook
|
notebooks/dirac_notation.ipynb
|
gvvynplaine/quantum_notebooks
|
58783823596465fe2d6c494c2cc3a53ae69a9752
|
[
"BSD-3-Clause"
] | 42 |
2017-10-17T22:44:27.000Z
|
2022-03-28T06:26:46.000Z
|
notebooks/dirac_notation.ipynb
|
gvvynplaine/quantum_notebooks
|
58783823596465fe2d6c494c2cc3a53ae69a9752
|
[
"BSD-3-Clause"
] | 2 |
2017-10-09T05:16:41.000Z
|
2018-09-22T03:08:29.000Z
|
notebooks/dirac_notation.ipynb
|
gvvynplaine/quantum_notebooks
|
58783823596465fe2d6c494c2cc3a53ae69a9752
|
[
"BSD-3-Clause"
] | 12 |
2017-10-09T04:22:19.000Z
|
2022-03-28T06:25:21.000Z
| 54.881323 | 2,372 | 0.764189 | true | 784 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.936285 | 0.843895 | 0.790126 |
__label__eng_Latn
| 0.988847 | 0.674061 |
# Imports and Simulation Parameters
```python
import numpy as np
import math
import cmath
import scipy
import scipy.integrate
import sys
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
hbar = 1.0 / (2.0 * np.pi)
ZERO_TOLERANCE = 10**-6
```
/Users/joseph/anaconda/lib/python2.7/site-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')
```python
MAX_VIBRATIONAL_STATES = 200
STARTING_GROUND_STATES = 5
STARTING_EXCITED_STATES = 5
time_scale_set = 10 #will divide the highest energy to give us the time step
low_frequency_cycles = 25.0 #will multiply the frequency of the lowest frequency mode to get the max time
```
```python
#See if a factorial_Storage dictionary exists already and if not, create one
try:
a = factorial_storage
except:
factorial_storage = {}
```
# Defining Parameters of the System
```python
energy_g = 0
energy_gamma = .1
energy_e = 0
energy_epsilon = .31
Huang_Rhys_Parameter = .80
S = Huang_Rhys_Parameter
#bookkeeping
overlap_storage = {}
electronic_energy_gap = energy_e + .5*energy_epsilon - (energy_g + .5 * energy_gamma)
min_energy = energy_g + energy_gamma * .5
```
```python
mu_0 = 1.0
```
If we set the central frequency of a pulse at the 0->0 transition, and we define $\tau$ as the desired ratio of the 0->1 transition amplitude to the 0->0 transition amplitude, then the required pulse width is
\begin{align}
\sigma &= \sqrt{-\frac{2 \ln (\tau)}{\omega_{\epsilon}^2}}
\end{align}
```python
def blank_wavefunction(number_ground_states, number_excited_states):
return np.zeros((number_ground_states + number_excited_states))
def perturbing_function(time):
# stdev = 30000.0 * dt #very specific to 0->0 transition!
stdev = 3000.0 * dt #clearly has a small amount of amplitude on 0->1 transition
center = 6 * stdev
return np.cos(electronic_energy_gap*(time - center) / hbar)*np.exp( - (time - center)**2 / (2 * stdev**2)) / stdev
def time_function_handle_from_tau(tau_proportion):
stdev = np.sqrt( -2.0 * np.log(tau_proportion) / (energy_epsilon/hbar)**2)
center = 6 * stdev
return center, stdev, lambda t: np.cos(electronic_energy_gap*(t - center) / hbar)*np.exp( - (t - center)**2 / (2 * stdev**2)) / stdev
def time_function_handle_from_tau_and_kappa(tau_proportion, kappa_proportion):
stdev = np.sqrt( -2.0 * np.log(tau_proportion) / (energy_epsilon/hbar)**2)
center = 6 * stdev
return center, stdev, lambda t: kappa_proportion * energy_gamma * np.cos(electronic_energy_gap*(t - center) / hbar)*np.exp( - (t - center)**2 / (2 * stdev**2)) / stdev
def perturbing_function_define_tau(time, tau_proportion):
center, stdev, f = time_function_handle_from_tau(tau_proportion)
return f(time)
```
```python
```
```python
```
# Defining Useful functions
$ O_{m}^{n} = \left(-1\right)^{n} \sqrt{\frac{e^{-S}S^{m+n}}{m!n!}} \sum_{j=0}^{\min \left( m,n \right)} \frac{m!n!}{j!(m-j)!(n-j)!}(-1)^j S^{-j} $
```python
def factorial(i):
if i in factorial_storage:
return factorial_storage[i]
if i <= 1:
return 1.0
else:
out = factorial(i - 1) * i
factorial_storage[i] = out
return out
def ndarray_factorial(i_array):
return np.array([factorial(i) for i in i_array])
```
```python
def overlap_function(ground_quantum_number, excited_quantum_number):
m = ground_quantum_number
n = excited_quantum_number
if (m,n) in overlap_storage:
return overlap_storage[(m,n)]
output = (-1)**n
output *= math.sqrt(math.exp(-S) * S**(m + n) /(factorial(m) * factorial(n)) )
j_indeces = np.array(range(0, min(m,n) + 1))
j_summation = factorial(m) * factorial(n) * np.power(-1.0, j_indeces) * np.power(S, -j_indeces)
j_summation = j_summation / (ndarray_factorial(j_indeces) * ndarray_factorial( m - j_indeces) * ndarray_factorial(n - j_indeces) )
output *= np.sum(j_summation)
overlap_storage[(m,n)] = output
return output
```
# Solving the Differential Equation
\begin{align*}
\left(\frac{d G_a(t)}{dt} + \frac{i}{\hbar}\Omega_{(a)}\right) &=-E(t)\frac{i}{\hbar} \sum_{b} E_b(t) \mu_{a}^{b}\\
\left(\frac{d E_b(t)}{dt} + \frac{i}{\hbar} \Omega^{(b)} \right) &=-E(t)\frac{i}{\hbar} \sum_{a} G_a(t) \mu_{a}^{b}
\end{align*}
Or in a more compact form:
\begin{align*}
\frac{d}{dt}\begin{bmatrix}
G_a(t) \\
E_b(t)
\end{bmatrix}
= -\frac{i}{\hbar}
\begin{bmatrix}
\Omega_{(a)} & E(t) \mu_{a}^{b} \\
E(t) \mu_{a}^{b} & \Omega^{b}
\end{bmatrix}
\cdot
\begin{bmatrix}
G_a(t) \\
E_b(t)
\end{bmatrix}
\end{align*}
```python
def ode_diagonal_matrix(number_ground_states, number_excited_states):
#Define the Matrix on the RHS of the above equation
ODE_DIAGONAL_MATRIX = np.zeros((number_ground_states + number_excited_states, number_ground_states + number_excited_states), dtype=np.complex)
#set the diagonals
for ground_i in range(number_ground_states):
ODE_DIAGONAL_MATRIX[ground_i, ground_i] = -1.0j * (energy_g + energy_gamma * (ground_i + .5)) / hbar
for excited_i in range(number_excited_states):
excited_index = excited_i + number_ground_states #the offset since the excited states comes later
ODE_DIAGONAL_MATRIX[excited_index, excited_index] = -1.0j * (energy_e + energy_epsilon * (excited_i + .5)) / hbar
return ODE_DIAGONAL_MATRIX
#now for the off-diagonals
def mu_matrix(c, number_ground_states, number_excited_states):
MU_MATRIX = np.zeros((number_ground_states, number_excited_states), dtype = np.complex)
for ground_a in range(number_ground_states):
for excited_b in range(number_excited_states):
new_mu_entry = overlap_function(ground_a, excited_b)
if ground_a >0:
new_mu_entry += c * math.sqrt(ground_a) * overlap_function(ground_a - 1, excited_b)
new_mu_entry += c * math.sqrt(ground_a+1) * overlap_function(ground_a + 1, excited_b)
MU_MATRIX[ground_a, excited_b] = new_mu_entry
return MU_MATRIX
def ode_off_diagonal_matrix(c_value, number_ground_states, number_excited_states):
output = np.zeros((number_ground_states + number_excited_states, number_ground_states + number_excited_states), dtype=np.complex)
MU_MATRIX = mu_matrix(c_value, number_ground_states, number_excited_states)
output[0:number_ground_states, number_ground_states:] = -1.0j * mu_0 * MU_MATRIX / hbar
output[number_ground_states:, 0:number_ground_states] = -1.0j * mu_0 * MU_MATRIX.T / hbar
return output
def IR_transition_dipoles(number_ground_states, number_excited_states):
"outputs matrices to calculate ground and excited state IR emission spectra. Can be combined for total"
output_g = np.zeros((number_ground_states + number_excited_states, number_ground_states + number_excited_states), dtype=np.complex)
output_e = np.zeros((number_ground_states + number_excited_states, number_ground_states + number_excited_states), dtype=np.complex)
for ground_a in range(number_ground_states):
try:
output_g[ground_a, ground_a + 1] = math.sqrt(ground_a + 1)
output_g[ground_a + 1, ground_a] = math.sqrt(ground_a + 1)
except:
pass
try:
output_g[ground_a, ground_a - 1] = math.sqrt(ground_a)
output_g[ground_a - 1, ground_a] = math.sqrt(ground_a)
except:
pass
for excited_a in range(number_excited_states):
matrix_index_e = number_ground_states + excited_a -1 #because of how 'number_ground_states' is defined
try:
output_e[matrix_index_e, matrix_index_e + 1] = math.sqrt(excited_a + 1)
output_e[matrix_index_e + 1, matrix_index_e] = math.sqrt(excited_a + 1)
except:
pass
try:
output_e[matrix_index_e, matrix_index_e - 1] = math.sqrt(excited_a)
output_e[matrix_index_e - 1, matrix_index_e] = math.sqrt(excited_a)
except:
pass
return output_g, output_e
```
\begin{align*}
\mu(x) &= \mu_0 \left(1 + \lambda x \right) \\
&= \mu_0 \left(1 + c\left(a + a^{\dagger} \right) \right) \\
\mu_{a}^{b} &= \mu_0\left(O_{a}^{b} + c\left(\sqrt{a}O_{a-1}^{b} + \sqrt{a+1}O_{a+1}^{b}\right) \right)
\end{align*}
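As a quick sanity check (a sketch using the functions defined above; the test values `c_test`, `a_idx`, `b_idx` are arbitrary), a single entry of `mu_matrix` can be compared against this formula:
```python
# Check MU_MATRIX[a, b] == O_a^b + c*(sqrt(a) O_{a-1}^b + sqrt(a+1) O_{a+1}^b)
c_test = 0.3
a_idx, b_idx = 2, 1
M = mu_matrix(c_test, 4, 4)
expected = (overlap_function(a_idx, b_idx)
            + c_test * (math.sqrt(a_idx) * overlap_function(a_idx - 1, b_idx)
                        + math.sqrt(a_idx + 1) * overlap_function(a_idx + 1, b_idx)))
np.isclose(M[a_idx, b_idx], expected)   # expect True
```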
```python
```
```python
class VibrationalStateOverFlowException(Exception):
def __init__(self):
pass
```
```python
```
```python
def propagate_amplitude_to_end_of_perturbation(c_value, ratio_01_00, kappa=1, starting_ground_states=STARTING_GROUND_STATES, starting_excited_states=STARTING_EXCITED_STATES):
center_time, stdev, time_function = time_function_handle_from_tau_and_kappa(ratio_01_00, kappa)
ending_time = center_time + 8.0 * stdev
number_ground_states = starting_ground_states
number_excited_states = starting_excited_states
while number_excited_states + number_ground_states < MAX_VIBRATIONAL_STATES:
#define time scales
max_energy = energy_e + energy_epsilon * (.5 + number_excited_states) + kappa * energy_gamma * mu_0
dt = 1.0 / (time_scale_set * max_energy)
ODE_DIAGONAL = ode_diagonal_matrix(number_ground_states, number_excited_states)
ODE_OFF_DIAGONAL = ode_off_diagonal_matrix(c_value, number_ground_states, number_excited_states)
def ODE_integrable_function(time, coefficient_vector):
ODE_TOTAL_MATRIX = ODE_OFF_DIAGONAL * time_function(time) + ODE_DIAGONAL
return np.dot(ODE_TOTAL_MATRIX, coefficient_vector)
#define the starting wavefuntion
initial_conditions = blank_wavefunction(number_ground_states, number_excited_states)
initial_conditions[0] = 1
#create ode solver
current_time = 0.0
ode_solver = scipy.integrate.complex_ode(ODE_integrable_function)
ode_solver.set_initial_value(initial_conditions, current_time)
#Run it
results = []
try: #this block catches an overflow into the highest ground or excited vibrational state
while current_time < ending_time:
# print(current_time, ZERO_TOLERANCE)
#update time, perform solution
current_time = ode_solver.t+dt
new_result = ode_solver.integrate(current_time)
results.append(new_result)
#make sure solver was successful
if not ode_solver.successful():
raise Exception("ODE Solve Failed!")
#make sure that there hasn't been substantial leakage to the highest excited states
re_start_calculation = False
if abs(new_result[number_ground_states - 1])**2 >= ZERO_TOLERANCE:
number_ground_states +=1
# print("Increasing Number of Ground vibrational states to %i " % number_ground_states)
re_start_calculation = True
if abs(new_result[-1])**2 >= ZERO_TOLERANCE:
number_excited_states +=1
# print("Increasing Number of excited vibrational states to %i " % number_excited_states)
re_start_calculation = True
if re_start_calculation:
raise VibrationalStateOverFlowException()
except VibrationalStateOverFlowException:
#Move on and re-start the calculation
continue
#Finish calculating
results = np.array(results)
return results, number_ground_states, number_excited_states
raise Exception("NEEDED TOO MANY VIBRATIONAL STATES! RE-RUN WITH DIFFERENT PARAMETERS!")
```
```python
def get_average_quantum_number_time_series(c_value, ratio_01_00, kappa=1, starting_ground_states=STARTING_GROUND_STATES, starting_excited_states=STARTING_EXCITED_STATES):
results, number_ground_states, number_excited_states = propagate_amplitude_to_end_of_perturbation(c_value, ratio_01_00, kappa, starting_ground_states, starting_excited_states)
probabilities = np.abs(results)**2
#calculate the average_vibrational_quantum_number series
average_ground_quantum_number = probabilities[:,0:number_ground_states].dot(np.array(range(number_ground_states)) )
average_excited_quantum_number = probabilities[:,number_ground_states:].dot(np.array(range(number_excited_states)))
return average_ground_quantum_number, average_excited_quantum_number, results, number_ground_states, number_excited_states
```
```python
def IR_emission_spectrum_after_excitation(c_value, ratio_01_00, kappa=1, starting_ground_states=STARTING_GROUND_STATES, starting_excited_states=STARTING_EXCITED_STATES):
center_time, stdev, time_function = time_function_handle_from_tau_and_kappa(ratio_01_00, kappa)
perturbation_ending_time = center_time + 8.0 * stdev
simulation_ending_time = perturbation_ending_time + low_frequency_cycles * hbar/min_energy
number_ground_states = starting_ground_states
number_excited_states = starting_excited_states
while number_excited_states + number_ground_states < MAX_VIBRATIONAL_STATES:
ir_transDipole_g, ir_transDipole_e = IR_transition_dipoles(number_ground_states, number_excited_states)
time_emission_g = [0]
time_emission_e = [0]
#define time scales
e = energy_e + energy_epsilon * (.5 + number_excited_states)
g = energy_g + energy_gamma* (.5 + number_ground_states)
plus = e + g
minus = e - g
J = kappa * energy_gamma * mu_0
max_split_energy = plus + math.sqrt(minus**2 + 4 * J**2)
max_energy = max_split_energy * .5
dt = 1.0 / (time_scale_set * max_energy)
time_values = np.arange(0, simulation_ending_time, dt)
ODE_DIAGONAL = ode_diagonal_matrix(number_ground_states, number_excited_states)
ODE_OFF_DIAGONAL = ode_off_diagonal_matrix(c_value, number_ground_states, number_excited_states)
def ODE_integrable_function(time, coefficient_vector):
ODE_TOTAL_MATRIX = ODE_OFF_DIAGONAL * time_function(time) + ODE_DIAGONAL
return np.dot(ODE_TOTAL_MATRIX, coefficient_vector)
def ODE_jacobean(time, coefficient_vector):
ODE_TOTAL_MATRIX = ODE_OFF_DIAGONAL * time_function(time) + ODE_DIAGONAL
return ODE_TOTAL_MATRIX
#define the starting wavefuntion
initial_conditions = blank_wavefunction(number_ground_states, number_excited_states)
initial_conditions[0] = 1
#create ode solver
current_time = 0.0
try:
del ode_solver
except:
pass
# ode_solver = scipy.integrate.complex_ode(ODE_integrable_function)
ode_solver = scipy.integrate.complex_ode(ODE_integrable_function, jac = ODE_jacobean)
# ode_solver.set_integrator("lsoda")
ode_solver.set_integrator("vode", with_jacobian=True)
ode_solver.set_initial_value(initial_conditions, current_time)
#Run it
results = []
try: #this block catches an overflow into the highest ground or excited vibrational state
while current_time < simulation_ending_time:
# print(current_time, ZERO_TOLERANCE)
#update time, perform solution
current_time = ode_solver.t+dt
new_result = ode_solver.integrate(current_time)
results.append(new_result)
#make sure solver was successful
if not ode_solver.successful():
raise Exception("ODE Solve Failed!")
if current_time < perturbation_ending_time:
#make sure that there hasn't been substantial leakage to the highest excited states
re_start_calculation = False
if abs(new_result[number_ground_states - 1])**2 >= ZERO_TOLERANCE:
number_ground_states +=1
# print("Increasing Number of Ground vibrational states to %i " % number_ground_states)
re_start_calculation = True
if abs(new_result[-1])**2 >= ZERO_TOLERANCE:
number_excited_states +=1
# print("Increasing Number of excited vibrational states to %i " % number_excited_states)
re_start_calculation = True
if re_start_calculation:
raise VibrationalStateOverFlowException()
#calculate IR emission
time_emission_g.append(np.conj(new_result).T.dot(ir_transDipole_g.dot(new_result)))
time_emission_e.append(np.conj(new_result).T.dot(ir_transDipole_e.dot(new_result)))
#on to next time value...
except VibrationalStateOverFlowException:
#Move on and re-start the calculation
continue
#Finish calculating
results = np.array(results)
n_t = len(time_emission_e)
time_emission_g = np.array(time_emission_g)
time_emission_e = np.array(time_emission_e)
filter_x = np.array(range(n_t))
filter_center = n_t / 2.0
filter_sigma = n_t / 10.0
filter_values = np.exp(-(filter_x - filter_center)**2 / (2 * filter_sigma**2))
frequencies = np.fft.fftshift(np.fft.fftfreq(time_emission_g.shape[0], d= dt))
frequency_emission_g = dt * np.fft.fftshift(np.fft.fft(time_emission_g * filter_values))
frequency_emission_e = dt * np.fft.fftshift(np.fft.fft(time_emission_e * filter_values))
return results, frequencies, frequency_emission_g, frequency_emission_e, number_ground_states, number_excited_states
raise Exception("NEEDED TOO MANY VIBRATIONAL STATES! RE-RUN WITH DIFFERENT PARAMETERS!")
```
```python
c_values = np.logspace(-2.5, np.log10(1), 15)
tau_values = np.logspace(-3, np.log10(.9), 3)
kappa_values = np.logspace(-1, np.log10(5), 22)
number_calcs = c_values.shape[0] * tau_values.shape[0] * kappa_values.shape[0]
heating_results_ground = np.zeros((kappa_values.shape[0], tau_values.shape[0], c_values.shape[0]))
ir_amplitudes = np.zeros(heating_results_ground.shape)
heating_results_excited = np.zeros(heating_results_ground.shape)
# Keep track of the IR Spectrum
n_g = STARTING_GROUND_STATES
n_e = STARTING_EXCITED_STATES
counter = 1
# we will use the value of c as a bellweather for how many starting states to work with.
c_to_ng = {}
c_to_ne = {}
for i_kappa, kappa in enumerate(kappa_values):
# as we increase in both tau and
for i_tau, tau in enumerate(tau_values):
for i_c, c in enumerate(c_values):
try:
n_g = c_to_ng[c]
n_e = c_to_ne[c]
except:
n_g = STARTING_GROUND_STATES
n_e = STARTING_EXCITED_STATES
c_to_ng[c] = n_g
c_to_ne[c] = n_e
sys.stdout.flush()
sys.stdout.write("\r%i / %i Calculating kappa=%f, c=%f, tau=%f at n_g = %i and n_e=%i..." %(counter, number_calcs, kappa, c, tau, n_g, n_e))
# print("\r%i / %i Calculating kappa=%f, c=%f, tau=%f at n_g = %i and n_e=%i..." %(counter, number_calcs, kappa, c, tau, n_g, n_e))
# n_bar_g, n_bar_e, results, num_g, num_e = get_average_quantum_number_time_series(c,
# tau,
# kappa,
# starting_ground_states = n_g,
# starting_excited_states = n_e)
# heating_results_ground[i_kappa, i_tau, i_c] = n_bar_g[-1]
# heating_results_excited[i_kappa, i_tau, i_c] = n_bar_e[-1]
_, frequencies, emission_g, emission_e, num_g, num_e = IR_emission_spectrum_after_excitation(c,
tau,
kappa,
starting_ground_states = n_g,
starting_excited_states = n_e)
if num_g > c_to_ng[c]:
c_to_ng[c] = num_g
if num_e > c_to_ne[c]:
c_to_ne[c] = num_e
vibrational_frequency_index = np.argmin(np.abs(energy_gamma - frequencies))
ir_power = np.abs(emission_g[vibrational_frequency_index])**2
ir_amplitudes[i_kappa, i_tau, i_c] = ir_power
counter +=1
# plt.figure()
# plt.title(r"$\kappa{}, \tau={}, c={}".format(kappa, tau, c))
# plt.plot(frequencies, emission_g)
# plt.xlim(0, 2)
```
990 / 990 Calculating kappa=5.000000, c=1.000000, tau=0.900000 at n_g = 75 and n_e=60...
```python
print(c_to_ng)
print(c_to_ne)
```
{0.056234132519034911: 11, 0.010857111194022039: 9, 0.1279802213997954: 14, 1.0: 96, 0.43939705607607904: 37, 0.004770582696143927: 9, 0.016378937069540647: 9, 0.024709112279856043: 10, 0.0031622776601683794: 9, 0.084834289824407216: 12, 0.0071968567300115215: 9, 0.29126326549087383: 24, 0.037275937203149402: 10, 0.19306977288832505: 18, 0.6628703161826448: 58}
{0.056234132519034911: 9, 0.010857111194022039: 9, 0.1279802213997954: 11, 1.0: 79, 0.43939705607607904: 27, 0.004770582696143927: 9, 0.016378937069540647: 9, 0.024709112279856043: 9, 0.0031622776601683794: 9, 0.084834289824407216: 9, 0.0071968567300115215: 9, 0.29126326549087383: 18, 0.037275937203149402: 9, 0.19306977288832505: 13, 0.6628703161826448: 45}
```python
# decreased dt, does that keep these parameters from failing? last one is 600 / 800 Calculating kappa=8.858668, c=0.750000, tau=0.850000
for i_kappa, kappa in enumerate(kappa_values):
for i_tau, tau in enumerate(tau_values):
ir_power = ir_amplitudes[i_kappa, i_tau, :]
plt.loglog(c_values, ir_power, "*-")
plt.xlabel(r"$c$")
plt.figure()
for i_kappa, kappa in enumerate(kappa_values):
for i_c, c in enumerate(c_values):
ir_power = ir_amplitudes[i_kappa, :, i_c]
plt.loglog(tau_values, ir_power, "*-")
plt.xlabel(r"$\tau$")
plt.figure()
for i_tau, tau in enumerate(tau_values):
for i_c, c in enumerate(c_values):
# for i_c in [0,-1]:
ir_power = ir_amplitudes[:, i_tau, i_c]
# plt.loglog(kappa_values, ir_power, ["blue", "red"][i_c])
plt.loglog(kappa_values, ir_power)
plt.xlabel(r"$\kappa$")
# plt.xlim(-.1, 1.1)
# plt.ylim(-10, 500)
```
```python
c_log = np.log10(c_values)
tau_log = np.log10(tau_values)
kappa_log = np.log10(kappa_values)
log_ir_amplitudes = np.log(ir_amplitudes)
num_levels = 100
contours = np.linspace(np.min(log_ir_amplitudes), np.max(log_ir_amplitudes), num_levels)
```
```python
# for i_kappa, kappa in enumerate(kappa_values):
# ir_power = log_ir_amplitudes[i_kappa, :, :]
# plt.figure()
# plt.contourf(c_log, tau_log, ir_power, contours)
# plt.title(r"$\kappa = {}$".format(kappa))
# plt.ylabel(r"$c$")
# plt.xlabel(r"$\tau$")
# plt.colorbar()
for i_tau, tau in enumerate(tau_values):
ir_power = log_ir_amplitudes[:, i_tau, :]
plt.figure()
plt.contourf(c_log, kappa_log, ir_power, contours)
plt.title(r"$\tau = {}$".format(tau))
plt.ylabel(r"$\kappa$")
plt.xlabel(r"$c$")
plt.colorbar()
```
```python
for i_c, c in enumerate(c_values):
ir_power = log_ir_amplitudes[:, :, i_c]
plt.figure()
plt.contourf(tau_log, kappa_log, ir_power, contours)
plt.title(r"$c = {}$".format(c))
plt.ylabel(r"$\kappa$")
plt.xlabel(r"$\tau$")
plt.colorbar()
```
```python
```
```python
```
```python
```
```python
```
```python
```
```python
```
```python
```
```python
```
```python
```
```python
```
```python
```
|
f78e1dca400e4cd563668bc8e7ec791742f10f7c
| 781,223 |
ipynb
|
Jupyter Notebook
|
MolmerSorenson/code/.ipynb_checkpoints/HeatingCalculations-checkpoint.ipynb
|
jgoodknight/dissertation
|
012ad400e1246d2a7e63cc640be4f7b4bf56db00
|
[
"MIT"
] | 1 |
2020-04-21T06:20:42.000Z
|
2020-04-21T06:20:42.000Z
|
MolmerSorenson/code/.ipynb_checkpoints/HeatingCalculations-checkpoint.ipynb
|
jgoodknight/dissertation
|
012ad400e1246d2a7e63cc640be4f7b4bf56db00
|
[
"MIT"
] | null | null | null |
MolmerSorenson/code/.ipynb_checkpoints/HeatingCalculations-checkpoint.ipynb
|
jgoodknight/dissertation
|
012ad400e1246d2a7e63cc640be4f7b4bf56db00
|
[
"MIT"
] | null | null | null | 675.798443 | 169,492 | 0.936834 | true | 6,489 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.868827 | 0.654895 | 0.56899 |
__label__eng_Latn
| 0.360963 | 0.160284 |
# Phase Shifts and Virtual Z gates
```python
import numpy as np
from pulser import Pulse, Sequence, Register
from pulser.devices import MockDevice
```
## Introduction
Under the right circumstances, phase shifts can be a great way of implementing so-called *virtual-Z gates*. Let's see how these arise and how we can use them to our advantage.
Consider an arbitrary 2x2 unitary matrix (up to a global phase) and a possible decomposition in terms of rotations around the $x$ and $z$ axes ($R_X$ and $R_Z$) of the Bloch sphere:
$$U(\gamma, \theta, \phi) = R_Z(\gamma)R_X(\theta)R_Z(\phi)$$
Our goal is to be able to apply this transformation on our qubit states through our pulses. A pulse that is on-resonance with its target transition (i.e. has detuning $\delta = 0$) can be described as an arbitrary rotation around an axis contained in the Bloch sphere's equator, $\hat{n}= (\cos \phi, -\sin \phi, 0)$. From the general form of such a rotation, $R_{\hat{n}}(\theta) = \exp\left(-i \frac{\theta}{2} \hat{n}\cdot \vec{\sigma}\right)$, we arrive at:
$$
\begin{align}
R_{\hat{n}(\phi)}(\theta) &= \exp\left(-i \frac{\theta}{2} \left[\cos(\phi)\sigma_x -\sin(\phi)\sigma_y\right]\right)\\
&= e^{i\frac{\phi}{2}\sigma_z}e^{-i\frac{\theta}{2}\sigma_x}e^{-i\frac{\phi}{2}\sigma_z}\\
&= R_Z(-\phi)R_X(\theta)R_Z(\phi)
\end{align}
$$
Here, we have two free parameters: the angle of rotation $\theta$, which is determined by the integral of the amplitude waveform, and $\phi$, the pulse's phase. Thus, we can see that a pulse is a particular case of the arbitrary single-qubit gate $U$, where $\gamma = -\phi$, i.e.:
$$ R_{\hat{n}(\phi)}(\theta) = U(-\phi, \theta, \phi) $$
Thus, to reach the desired arbitrary single-qubit gate, we need an extra $R_Z$ gate, such that:
$$ U(\gamma, \theta, \phi) = R_Z(\gamma + \phi) R_Z(-\phi)R_X(\theta)R_Z(\phi) = R_Z(\gamma + \phi) U(-\phi, \theta, \phi)$$
Now, how do we implement such a gate? In fact, to physically change the phase of the qubit's state in this reference frame with a single pulse, we would have to be able to apply a detuned pulse of zero amplitude, which is no pulse at all! Instead, what we can do is change the frame of rotation such that the phase gate is applied *virtually*.
To understand how this can be done, we first have to realise that this last phase gate is irrelevant if it is the last one applied to the qubit before it is measured - a change of the phase between the $\sigma_z$ eigenstates will produce no change in the measurement outcome, since that occurs in the same basis. But what if it is not the last gate that we apply on this qubit? In that case, we can describe the situation as a new arbitrary gate being applied after the existent one, i.e.
$$ U(\alpha, \beta, \nu)~~U(\gamma, \theta, \phi) = R_Z(\alpha + \nu) R_Z(-\nu)R_X(\beta)R_Z(\nu) ~~ R_Z(k)~
U(-\phi, \theta, \phi) $$
where we define the *carry*, $k=\gamma + \phi$, as the phase of the unrealized phase gate. Now, we can restructure the previous expression such that:
$$
\begin{align}
& R_Z(\alpha + \nu) R_Z(-\nu)R_X(\beta)R_Z(\nu)~~ R_Z(k) ~U(-\phi, \theta, \phi) = \\
&= R_Z(\alpha + \nu + k)~~ R_Z(-\nu - k)R_X(\beta)R_Z(\nu + k) ~~U(-\phi, \theta, \phi) \\
&= R_Z(k') ~~U(-\nu - k, \beta, \nu + k)~~U(-\phi, \theta, \phi), ~~~k' = \alpha + \nu + k
\end{align}
$$
As shown, the previously existent phase gate of angle $k$ can be realized as a **shift** on the phase of the second pulse, $\nu \rightarrow \nu + k$. In this way, we go back to a situation where we have a phase gate at the end (with an updated carry $k'$). We can repeat this process until the moment we measure the qubit, at which point, as we've seen, the remaining phase gate is redundant.
This is the **virtual-Z gate**: the ability to perform phase gates through the adjustment of a qubit's phase reference.
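To make the bookkeeping concrete, here is a minimal, framework-independent sketch of how such a phase reference could be tracked in plain Python (the `PhaseTracker` class and its method names are purely illustrative and are not part of Pulser's API):

```python
import numpy as np

class PhaseTracker:
    """Track the accumulated carry k and shift each pulse's phase by it."""

    def __init__(self):
        self.carry = 0.0  # accumulated virtual-Z angle, the carry k

    def virtual_z(self, gamma):
        # A Z rotation is never played physically; it only updates the carry.
        self.carry = (self.carry + gamma) % (2 * np.pi)

    def physical_phase(self, requested_phase):
        # The phase actually programmed on the next pulse: nu -> nu + k
        return (requested_phase + self.carry) % (2 * np.pi)

tracker = PhaseTracker()
tracker.virtual_z(np.pi)                  # e.g. the R_Z(pi) left over from a Hadamard
print(tracker.physical_phase(np.pi / 2))  # next pi/2-phase pulse is played at 3*pi/2
```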
## Phase shifts in Pulser
As shown above, implementing virtual-Z gates requires tracking the *carry* and adapting the phase of each provided pulse accordingly. Although this could certainly be done externally, Pulser offers a convenient way to automatically update a qubit's phase reference (i.e. the *carry*) through **phase shifts**.
A phase shift in Pulser is defined by three things: the *value* of the shift, the *target* qubits and the *basis*. A phase shift of *value* $\phi$ corresponds to a change in the *target*'s phase reference from $k \rightarrow k + \phi$.
It is important to realise that, unlike all other aspects so far, phase shifts are associated with a qubit's transition, not with a channel. Therefore, in principle, every qubit can keep a different phase reference, as a result of being subjected to different phase shifts throughout a sequence.
### Programming a Hadamard gate
To exemplify the need for phase shifts, let's try to encode a Hadamard gate using only resonant pulses.
In our decomposition of a unitary matrix, the Hadamard is (you can check this for yourself):
$$ H = U\left(\frac{\pi}{2},\frac{\pi}{2}, \frac{\pi}{2}\right) = R_Z(\pi)~~U\left(-\frac{\pi}{2},\frac{\pi}{2}, \frac{\pi}{2}\right) $$
meaning that we have to apply a $\frac{\pi}{2}$-pulse with phase $\phi=\frac{\pi}{2}$, followed by a phase shift of $\pi$.
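As a quick sanity check of this identity (plain numpy, independent of Pulser), we can build the rotation matrices and verify that $U(\pi/2, \pi/2, \pi/2)$ equals the Hadamard up to a global phase:

```python
import numpy as np

def Rz(angle):
    return np.array([[np.exp(-1j * angle / 2), 0],
                     [0, np.exp(1j * angle / 2)]])

def Rx(angle):
    return np.array([[np.cos(angle / 2), -1j * np.sin(angle / 2)],
                     [-1j * np.sin(angle / 2), np.cos(angle / 2)]])

def U(gamma, theta, phi):
    return Rz(gamma) @ Rx(theta) @ Rz(phi)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
candidate = U(np.pi / 2, np.pi / 2, np.pi / 2)
global_phase = candidate[0, 0] / H[0, 0]          # strip the global phase
print(np.allclose(candidate / global_phase, H))   # expected: True
```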
But first, let's create a simple setup.
```python
reg = Register({'q0': (0, 0)})
device = MockDevice
seq = Sequence(reg, device)
seq.available_channels
```
{'rydberg_global': Rydberg.Global(Max Absolute Detuning: 1000 rad/µs, Max Amplitude: 200 rad/µs, Basis: 'ground-rydberg'),
'rydberg_local': Rydberg.Local(Max Absolute Detuning: 1000 rad/µs, Max Amplitude: 200 rad/µs, Target time: 0 ns, Max targets: 2000, Basis: 'ground-rydberg'),
'raman_global': Raman.Global(Max Absolute Detuning: 1000 rad/µs, Max Amplitude: 200 rad/µs, Basis: 'digital'),
'raman_local': Raman.Local(Max Absolute Detuning: 1000 rad/µs, Max Amplitude: 200 rad/µs, Target time: 0 ns, Max targets: 2000, Basis: 'digital'),
'mw_global': Microwave.Global(Max Absolute Detuning: 1000 rad/µs, Max Amplitude: 200 rad/µs, Basis: 'XY')}
```python
seq.declare_channel('ch0', 'raman_local', initial_target = 'q0')
```
```python
# Defining the waveform for a pi/2 pulse
from pulser.waveforms import BlackmanWaveform
pi2_wf = BlackmanWaveform(1000, np.pi/2) # Duration: 1us, Area: pi/2
pi2_wf.draw()
```
```python
# 2. Create the pi/2 pulse
pi_2 = Pulse.ConstantDetuning(pi2_wf, detuning=0, phase=np.pi/2)
pi_2.draw()
```
```python
#3. Applying the H gate
seq.add(pi_2, 'ch0') # The first pi/2-pulse
# Now the phase shift of pi on 'q0', for the 'digital' basis, which is usually where phase shifts are useful
seq.phase_shift(np.pi, 'q0', basis='digital')
seq.draw(draw_phase_shifts=True)
```
This produced the desired effect: we have a $\frac{\pi}{2}$ pulse with $\phi=\frac{\pi}{2}$ followed by a phase shift of $\pi$. However, the need to specify the target qubit and basis is mildly inconvenient. Moreover, it would be even better if the entire Hadamard could be in the same pulse object. Fortunately, there's a way.
The `Pulse` object has an optional argument called `post_phase_shift`, with which the user can order a phase shift to be applied immediately after the physical pulse. In this case, the target and basis are **implicitly defined** to be the channel's current target and basis.
Here's how we could define the Hadamard in a single pulse:
```python
h = Pulse.ConstantDetuning(pi2_wf, detuning=0, phase=np.pi/2, post_phase_shift=np.pi)
seq.add(h, 'ch0')
seq.draw(draw_phase_shifts=True)
```
Notice how the two pulse shapes are naturally identical, and are both followed by the adequate $\pi$ phase shift. However, we expect to see an adjustment on the phase of the second pulse, simply because it is following a phase shift. To inspect the phase of each pulse, we can print the sequence:
```python
print(seq)
```
Channel: ch0
t: 0 | Initial targets: q0 | Phase Reference: 0.0
t: 0->1000 | Pulse(Amp=Blackman(Area: 1.57), Detuning=0 rad/µs, Phase=1.57) | Targets: q0
t: 1000->2000 | Pulse(Amp=Blackman(Area: 1.57), Detuning=0 rad/µs, Phase=4.71) | Targets: q0
Here, we see that the phase of the second pulse has the appropriate adjustment of $\pi$. What happens if we apply the Hadamard yet again?
```python
seq.add(h, 'ch0')
print(seq)
```
Channel: ch0
t: 0 | Initial targets: q0 | Phase Reference: 0.0
t: 0->1000 | Pulse(Amp=Blackman(Area: 1.57), Detuning=0 rad/µs, Phase=1.57) | Targets: q0
t: 1000->2000 | Pulse(Amp=Blackman(Area: 1.57), Detuning=0 rad/µs, Phase=4.71) | Targets: q0
t: 2000->3000 | Pulse(Amp=Blackman(Area: 1.57), Detuning=0 rad/µs, Phase=1.57) | Targets: q0
That's right, the phase of the third pulse is back to $\frac{\pi}{2}$ because, at the moment the third Hadamard was constructed, there had been two $\pi$ phase shifts, which means the reference is back to $\phi=0$.
### Phase shifts with multiple channels and different targets
Now, let's construct a more complex example where there are multiple channels and multiple qubits.
```python
reg = Register({'q0': (0, 0), 'q1': (5, 5)})
device = MockDevice
seq = Sequence(reg, device)
seq.declare_channel('raman', 'raman_local', initial_target = 'q0')
seq.declare_channel('ryd1', 'rydberg_local', initial_target = 'q0')
seq.declare_channel('ryd2', 'rydberg_local', initial_target = 'q0')
seq.declared_channels
```
{'raman': Raman.Local(Max Absolute Detuning: 1000 rad/µs, Max Amplitude: 200 rad/µs, Target time: 0 ns, Max targets: 2000, Basis: 'digital'),
'ryd1': Rydberg.Local(Max Absolute Detuning: 1000 rad/µs, Max Amplitude: 200 rad/µs, Target time: 0 ns, Max targets: 2000, Basis: 'ground-rydberg'),
'ryd2': Rydberg.Local(Max Absolute Detuning: 1000 rad/µs, Max Amplitude: 200 rad/µs, Target time: 0 ns, Max targets: 2000, Basis: 'ground-rydberg')}
We see that we have two qubits and three channels, all `Local`, with `raman` acting on the `digital` basis and the other two on the `ground-rydberg` basis. Let's use the Hadamard from before and add it to channels `raman` and `ryd1`, which are both targeting `q0` on different bases:
```python
seq.add(h, 'raman')
seq.add(h, 'ryd1')
seq.draw(draw_phase_shifts=True)
```
We can see that the pulse in channel `ryd1` waited for the pulse in `raman` to be finished (because they are acting on the same target). We also notice that the phase shift in `raman` does not appear in the other channels, because they act on **different bases**. We can check this by printing the sequence:
```python
print(seq)
```
Channel: raman
t: 0 | Initial targets: q0 | Phase Reference: 0.0
t: 0->1000 | Pulse(Amp=Blackman(Area: 1.57), Detuning=0 rad/µs, Phase=1.57) | Targets: q0
Channel: ryd1
t: 0 | Initial targets: q0 | Phase Reference: 0.0
t: 0->1000 | Delay
t: 1000->2000 | Pulse(Amp=Blackman(Area: 1.57), Detuning=0 rad/µs, Phase=1.57) | Targets: q0
Channel: ryd2
t: 0 | Initial targets: q0 | Phase Reference: 0.0
Here, we confirm that the phase of the pulse in `ryd1` is $\frac{\pi}{2}$, which indicates a phase reference of $\phi=0$ as expected. What about when the phase shift targets the same qubit, the same basis, but the pulses are on different channels? In that case, we expect that the channel is irrelevant, and we can already see evidence of that in how the phase shift at the end of the Hadamard on `ryd1` also appears in `ryd2`. We can confirm this by adding another pulse to `ryd2` (e.g. the `pi_2` pulse we defined before) and then printing the sequence:
```python
seq.add(pi_2, 'ryd2')
print(seq)
```
Channel: raman
t: 0 | Initial targets: q0 | Phase Reference: 0.0
t: 0->1000 | Pulse(Amp=Blackman(Area: 1.57), Detuning=0 rad/µs, Phase=1.57) | Targets: q0
Channel: ryd1
t: 0 | Initial targets: q0 | Phase Reference: 0.0
t: 0->1000 | Delay
t: 1000->2000 | Pulse(Amp=Blackman(Area: 1.57), Detuning=0 rad/µs, Phase=1.57) | Targets: q0
Channel: ryd2
t: 0 | Initial targets: q0 | Phase Reference: 0.0
t: 0->2000 | Delay
t: 2000->3000 | Pulse(Amp=Blackman(Area: 1.57), Detuning=0 rad/µs, Phase=4.71) | Targets: q0
Notice how the `pi_2` pulse has a phase of $\frac{3\pi}{2}$: its phase of $\frac{\pi}{2}$ plus the shift of $\pi$ accrued by the Hadamard in channel `ryd1`.
Let's now shift our attention towards what happens when the basis and the channel stay the same, but the target qubit changes. By now, you can already predict that qubit `q1` has a phase reference of $\phi=0$ on both bases, since all phase shifts so far were always targeting `q0`. We can see this if we change the target on some channels and apply pulses again:
```python
seq.target('q1', 'raman')
seq.add(h, 'raman')
seq.target('q1', 'ryd1')
seq.add(h, 'ryd1')
print(seq)
seq.draw(draw_phase_shifts=True)
```
We can see how the second pulses on the `raman` and `ryd1` channels are exactly identical to the first ones, except for the target. Had the target not changed, they would have their phase shifted by $\pi$, just like we saw earlier. We also see how, this time, the phase shift in `ryd1` does not appear in `ryd2` like before because they have **different targets**. Notice what happens if we also make `ryd2` target `q1`:
```python
seq.target('q1', 'ryd2')
seq.draw(draw_phase_shifts=True)
```
As you can see, channel `ryd2` starts off with a phase of $\phi = \pi$, which it picked up from the phase shift induced by the Hadamard on `ryd1`. On the other hand, the Hadamard in the `raman` channel did not affect the phase of `ryd2` (we would have $\phi=0$ in that case) because, again, it acts on a different basis.
|
749326fa77f7e381f7b4f9f2fac850da53cc6703
| 418,304 |
ipynb
|
Jupyter Notebook
|
tutorials/composition/Phase Shifts and Virtual Z gates.ipynb
|
LaurentAjdnik/Pulser
|
ab0bc7e1f0712b2ecbb737f2bbd3c6ae49ac8763
|
[
"Apache-2.0"
] | null | null | null |
tutorials/composition/Phase Shifts and Virtual Z gates.ipynb
|
LaurentAjdnik/Pulser
|
ab0bc7e1f0712b2ecbb737f2bbd3c6ae49ac8763
|
[
"Apache-2.0"
] | null | null | null |
tutorials/composition/Phase Shifts and Virtual Z gates.ipynb
|
LaurentAjdnik/Pulser
|
ab0bc7e1f0712b2ecbb737f2bbd3c6ae49ac8763
|
[
"Apache-2.0"
] | null | null | null | 694.857143 | 102,476 | 0.946596 | true | 4,165 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.857768 | 0.815232 | 0.69928 |
__label__eng_Latn
| 0.99043 | 0.462994 |
# Simon's algorithm (Overview)
We explain Simon's algorithm.
In this algorithm, for a function $f_s(x)$ with $n$-bit input and output ($s$ is any $n$-bit sequence), one of the following is assumed to be true.
Case 1: Always return different outputs for different inputs (one-to-one correspondence)
Case 2: For input $x, x'$, if $x' = x\oplus s$, then $f_s(x) = f_s(x')$. That is, it returns the same output for two inputs.
This algorithm determines whether the oracle is case 1 or case 2 above.
The concrete quantum circuit is as follows. The contents of $U_f$ are shown for case 2 above with $s=1001$.
The number of qubits is $2 n$.
Check the state.
$$
\begin{align}
\lvert \psi_1\rangle &= \frac{1}{\sqrt{2^n}} \biggl(\otimes^n H\lvert 0\rangle \biggr) \lvert 0\rangle^{\otimes n} \\
&= \frac{1}{\sqrt{2^n}} \sum_{x=0}^{2^n-1} \lvert x\rangle \lvert 0\rangle^{\otimes n}
\end{align}
$$
Next, consider $\lvert \psi_2 \rangle$.
Here, for $f_s(x)$, we have the following oracle gate $U_f$.
$$
U_f \lvert x \rangle \lvert 0 \rangle = \lvert x \rangle \lvert f_s(x) \rangle
$$
Using this $U_f$, we get
$$
\lvert \psi_2 \rangle = \frac{1}{\sqrt{2^n}} \sum_{x=0}^{2^n-1} \lvert x\rangle \lvert f_s(x)\rangle
$$
Therefore, $\lvert \psi_3 \rangle$ is as follows
$$
\lvert \psi_3 \rangle = \frac{1}{2^n} \sum_{x=0}^{2^n-1}\sum_{y=0}^{2^n-1} (-1)^{x\cdot y} \lvert y\rangle \lvert f_s(x)\rangle
$$
Now, consider what the measurement result of $\lvert y \rangle$ would be if $f_s(x)$ were as follows.
Case 1: Always return different outputs for different inputs (one-to-one correspondence)
All measurement results are obtained with equal probability.
Case 2: For input $x, x'$, if $x' = x\oplus s$, then $f_s(x) = f_s(x')$. That is, it returns the same output for two inputs.
Notice the amplitude $A(y, x)$ of the state $\lvert y \rangle \lvert f_s(x) \rangle = \lvert y \rangle \lvert f_s(x\oplus s) \rangle$.
$$
A(y, x) = \frac{1}{2^n} \{(-1)^{x\cdot y} + (-1)^{(x\oplus s) \cdot y}\}
$$
As you can see from the equation, the amplitude of $y$ such that $y\cdot s \equiv 1 \bmod2$ is $0$ due to cancellation.
Therefore, only $y$ is measured such that $y\cdot s \equiv 0 \bmod2$.
In both case 1 and case 2, once we obtain $n$ such different $y$ by measurement (excluding $00\ldots0$), we can determine $s'$ such that $y\cdot s' \equiv 0 \bmod 2$ for all of those $y$.
In case 1, $s'$ is completely random.
However, in case 2 the recovered $s'$ equals $s$, so $f_s(s') = f_s(0 \oplus s') = f_s(0)$ always holds.
Thus, except for the unlucky case where case 1 happens to yield an $s'$ with $f_s(s') = f_s(0)$ (which occurs with probability $1 / 2^n$), we can check whether $s'$ came from case 1 or case 2 by evaluating the oracle gate.
The oracle can be determined from the above.
Finally, we consider the implementation of the oracle gate $U_f$.
In case 1, it is enough that the output has a one-to-one correspondence with the input $x$.
For simplicity, let's consider a circuit that randomly inserts $X$ gates.
Case 2 is a bit more complicated.
First, the $CX$ gate creates the following state.
$$
\lvert \psi_{1a} \rangle = \frac{1}{\sqrt{2^n}} \sum_{x=0}^{2^n-1} \lvert x\rangle \lvert x\rangle
$$
Next, for the lowest index $i'$ where $s_{i'}=1$, XOR the auxiliary register with $s$ only if $x_{i'} = 0$.
As a result, we get the following $\lvert\psi_2\rangle$.
$$
\begin{align}
\lvert \psi_{2} \rangle &= \frac{1}{\sqrt{2^n}} \biggl(\sum_{\{x_{i'}=0\}} \lvert x\rangle \lvert x \oplus s\rangle + \sum_{\{x_{i'}=1\}} \lvert x\rangle \lvert x\rangle \biggr) \\
&= \frac{1}{\sqrt{2^n}} \sum_{x=0}^{2^n-1} \lvert x\rangle \lvert f_s(x)\rangle
\end{align}
$$
We can confirm that $f_s(x)$ satisfies case 2 by calculation.
Let's implement this with blueqat.
```python
from blueqat import Circuit
import numpy as np
```
Prepare functions for the two types of oracle gates $U_f$.
```python
def oracle_1(c, s):
_n = len(s)
for i in range(_n):
if np.random.rand() > 0.5:
c.x[i]
for i in range(_n):
c.cx[i, i + _n]
def oracle_2(c, s):
_n = len(s)
flag = 0
for i, si in enumerate(reversed(s)):
c.cx[i, i + _n]
if si == '1' and flag == 0:
c.x[i]
for j, sj in enumerate(s):
if sj == '1':
c.cx[i, j + _n]
c.x[i]
flag = 1
```
The following is the main body of the algorithm.
First, use a random number to determine the oracle (one of the two types) and the $s$ you want to find.
(In the following, the values are fixed to reproduce the quantum circuit shown in the figure above.)
```python
n = 4
N = np.random.randint(1, 2**n-1)
s = bin(N)[2:].zfill(n)
which_oracle = np.random.rand()
### to reproduce the quantum circuit shown in the figure above ###
### Erasing these two lines will randomly determine s and oracle###
s = "1001"
which_oracle = 0
######
c = Circuit(n * 2)
c.h[:n]
if which_oracle > 0.5:
oracle_1(c, s)
oracle = "oracle 1"
else:
oracle_2(c, s)
oracle = "oracle 2"
c.h[:n].m[:n]
res = c.run(shots = 1000)
```
```python
res
```
Counter({'10010000': 111,
'00100000': 120,
'10110000': 128,
'11110000': 123,
'01100000': 137,
'01000000': 133,
'00000000': 115,
'11010000': 133})
Extract $n$ results other than '00...0' from the sampling result.
```python
res_list = list(res.keys())
_res_list = []
for i in res_list:
if i[:n] != '0'*4:
_res_list.append(i[:n])
if len(_res_list) == 4:
break
print(_res_list)
```
['1001', '0010', '1011', '1111']
Find $s'$ from the extracted result.
(Here, we simply look for an $s'$ that matches the condition by brute force; it can also be found efficiently with linear algebra over GF(2), and a sketch of that approach follows the cell below.)
If the oracle in case 2 is selected, the resulting $s'$ should be equal to $s$.
```python
for i in range(2**n):
l = bin(i)[2:].zfill(n)
flag = 1
for sampled in _res_list:
mod = np.sum(np.array(list(l), dtype = np.int64) * np.array(list(sampled), dtype = np.int64)) % 2
if mod:
flag = 0
break
if flag:
output_s = l
```
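As mentioned above, the brute-force search can be replaced by Gaussian elimination over GF(2). Below is a minimal sketch of that approach (the helper `null_space_gf2` is illustrative; it assumes the `_res_list` and `n` defined above):

```python
def null_space_gf2(bitstrings, n_bits):
    """Return a non-zero s (as a bitstring) with y . s = 0 (mod 2) for every y."""
    M = [[int(ch) for ch in y] for y in bitstrings]
    pivots = {}          # pivot column -> row holding that pivot
    row = 0
    for col in range(n_bits):
        pivot = next((r for r in range(row, len(M)) if M[r][col]), None)
        if pivot is None:
            continue
        M[row], M[pivot] = M[pivot], M[row]
        for r in range(len(M)):
            if r != row and M[r][col]:
                M[r] = [(a + b) % 2 for a, b in zip(M[r], M[row])]
        pivots[col] = row
        row += 1
    free_cols = [c for c in range(n_bits) if c not in pivots]
    if not free_cols:
        return None      # only the trivial solution exists
    s = [0] * n_bits
    s[free_cols[0]] = 1  # set one free variable to 1 and back-substitute
    for col, r in pivots.items():
        s[col] = M[r][free_cols[0]]
    return ''.join(str(b) for b in s)

print(null_space_gf2(_res_list, n))  # should agree with the brute-force result
```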
```python
print("s' =", output_s)
print("s =", s)
```
s' = 1001
s = 1001
```python
```
|
d465672b71771321550c89919a55ca904d30b45f
| 11,260 |
ipynb
|
Jupyter Notebook
|
tutorial/103_simon.ipynb
|
Blueqat/blueqat-tutorials
|
be863d1a6834ce6aa8a7cec0c886d7e3b4caabd1
|
[
"Apache-2.0"
] | 7 |
2021-11-22T19:18:09.000Z
|
2022-01-30T22:38:03.000Z
|
tutorial/103_simon.ipynb
|
Blueqat/blueqat-tutorials
|
be863d1a6834ce6aa8a7cec0c886d7e3b4caabd1
|
[
"Apache-2.0"
] | 20 |
2021-11-23T22:41:58.000Z
|
2022-01-30T17:46:46.000Z
|
tutorial/103_simon.ipynb
|
Blueqat/blueqat-tutorials
|
be863d1a6834ce6aa8a7cec0c886d7e3b4caabd1
|
[
"Apache-2.0"
] | 3 |
2022-01-04T22:29:13.000Z
|
2022-01-30T08:38:20.000Z
| 28.871795 | 213 | 0.500444 | true | 2,101 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.828939 | 0.831143 | 0.688967 |
__label__eng_Latn
| 0.976199 | 0.439032 |
# Logistic Regression (Regularised)
## Introduction
In this example, we will implement regularized logistic regression
to predict whether microchips from a fabrication plant pass quality assurance (QA). During QA, each microchip goes through various tests to ensure it is functioning correctly.
Suppose you are the product manager of the factory and you have the
test results for some microchips on two different tests. From these two tests,
you would like to determine whether the microchips should be accepted or
rejected. To help you make the decision, you have a dataset of test results
on past microchips, from which you can build a logistic regression model.
## Visualizing the data
Before starting to implement any learning algorithm, it is always good to
visualize the data if possible.
The file 'data/ex2data2.txt' contains the dataset for our logistic regression problem.
Here we will load the data and display it on a 2-dimensional plot, where the axes are the two test scores, and the positive and
negative examples are shown with different markers.
```python
# initial imports
import numpy as np
from matplotlib import pyplot as plt
%matplotlib notebook
import seaborn as sns
# setting graph properties
plt.rcParams['figure.dpi'] = 300 # setting figure dpi for better quality graphs
plt.rcParams['figure.figsize'] = [10,8]
sns.set(context="notebook", style="white") # graph styling using seaborn
%config InlineBackend.figure_format = 'pdf'
```
```python
# imports from my models designed for these examples
from models.data_preprocessing import add_bias_unit, map_feature, feature_normalize
from models.logistic_regression import cost_function, predict, gradient_descent, gradient_function, sigmoid
from models.plotter import plot_decision_boundary
```
```python
print('Loading data ...')
data = np.loadtxt('data/ex2data2.txt', delimiter=',')
X = data[:, :-1] # (118, 2)
y = data[:, -1, np.newaxis] # (118, 1)
```
Loading data ...
```python
print('Plotting data with + indicating (y = 1) examples and o indicating (y = 0) examples.')
"""
Example plotting for multiple markers
x = np.array([1,2,3,4,5,6])
y = np.array([1,3,4,5,6,7])
m = np.array(['o','+','+','o','x','+'])
unique_markers = set(m)  # or you can use: np.unique(m)
for um in unique_markers:
mask = m == um
# mask is now an array of booleans that van be used for indexing
plt.scatter(x[mask], y[mask], marker=um)
"""
fig, ax = plt.subplots()
y_slim = y.ravel()
# plotting y=1 values
ax.scatter(x=X[y_slim == 1, 0], y=X[y_slim == 1, 1], marker='+', c='black', s=50, label='Accepted')
# plotting y=0 values
# X[y_slim == 0, 0] is logical indexing with rows with y=0 only
ax.scatter(x=X[y_slim == 0, 0], y=X[y_slim == 0, 1], marker='o', c='xkcd:light yellow', s=25, label='Rejected', edgecolor='k')
# labels
ax.set_xlabel('Microchip Test 1')
ax.set_ylabel('Microchip Test 2')
# Specified in plot order
ax.legend()
```
Figure shows that our dataset cannot be separated into positive and
negative examples by a straight-line through the plot. Therefore, a straightforward application of logistic regression will not perform well on this dataset
since logistic regression will only be able to find a linear decision boundary.
## Feature Mapping
One way to fit the data better is to create more features from each data
point. In the function map_feature(), we will map the features into
all polynomial terms of $x_{1}$ and $x_{2}$ up to the sixth power.
$$
mapFeature(x, 6) =
\begin{bmatrix}
1 \\
x_{1} \\
x_{2} \\
x_{1}^{2} \\
x_{1} x_{2} \\
x_{2}^{2} \\
x_{1}^{3} \\
\vdots \\
x_{1}x_{2}^{5} \\
x_{2}^{6}\\
\end{bmatrix}
$$
As a result of this mapping, our vector of two features (the scores on
two QA tests) has been transformed into a 28-dimensional vector. A logistic
regression classifier trained on this higher-dimension feature vector will have
a more complex decision boundary and will appear nonlinear when drawn in
our 2-dimensional plot.
While the feature mapping allows us to build a more expressive classifier,
it is also more susceptible to overfitting. In the next parts of the example, we
will implement regularized logistic regression to fit the data, and you will see how regularization can help combat the overfitting problem.
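The `map_feature` helper imported from `models.data_preprocessing` is assumed to implement this polynomial expansion. A minimal sketch of such a function (the name `map_feature_sketch` is illustrative, not the actual implementation) might look like this:

```python
import numpy as np

def map_feature_sketch(X, degree=6):
    """Map two features x1, x2 to all polynomial terms up to `degree`,
    prepending a column of ones for the intercept term."""
    x1, x2 = X[:, 0], X[:, 1]
    out = [np.ones(X.shape[0])]
    for i in range(1, degree + 1):
        for j in range(i + 1):
            out.append(x1 ** (i - j) * x2 ** j)
    return np.column_stack(out)  # for degree=6 this has 28 columns
```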
```python
# Adding Polynomial Features
# Note that map_feature also adds a column of ones for us, so the intercept term is handled
X = map_feature(X, degree=6)
```
### Cost Function and Gradient
The **regularized** cost function for logistic regression is:
$$
J(\theta) = \frac{1}{m}\sum_{i=1}^{m}\left[ -y^{(i)} \log\left( h_{\theta} \left( x^{(i)} \right) \right) - \left( 1 - y^{(i)} \right) \log \left( 1 - h_{\theta} \left( x^{(i)} \right) \right) \right]+\frac{\lambda}{2m} \sum^{n}_{j=1}\theta_{j}^{2}
$$
Note that you should not regularize the parameter $\theta_0$ (the intercept term); in the Python code below this corresponds to `theta[0]`. The gradient
of the cost function is a vector whose $j$-th element is defined as follows:
$$\begin{align}
\frac{\delta J(\theta)}{\delta \theta_{j}} = \frac{1}{m} \sum_{i=1}^{m} \left( h_{\theta} \left( x^{(i)} \right) - y^{(i)} \right)x_{j}^{(i)} && for j=0\\
\frac{\delta J(\theta)}{\delta \theta_{j}} = \left(\frac{1}{m} \sum_{i=1}^{m} \left( h_{\theta} \left( x^{(i)} \right) - y^{(i)} \right)x_{j}^{(i)}\right)+\frac{\lambda}{m}\theta_{j} && for j >= 1 \\
\end{align}
$$
Note that while this gradient looks identical to the linear regression gradient, the formula is actually different because linear and logistic regression have different definitions of $h_{\theta}(x)$.
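The imported `cost_function` and `gradient_function` are assumed to implement these formulas. A minimal numpy sketch of regularized versions (with `_sketch` suffixes to distinguish them from the actual `models.logistic_regression` implementations) could be:

```python
import numpy as np

def sigmoid_sketch(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost_function_sketch(theta, X, y, lamda=1.0):
    m = y.shape[0]
    h = sigmoid_sketch(X @ theta.reshape(-1, 1))
    unregularized = -(y * np.log(h) + (1 - y) * np.log(1 - h)).sum() / m
    penalty = lamda / (2 * m) * np.sum(theta[1:] ** 2)  # theta_0 is not regularized
    return unregularized + penalty

def gradient_function_sketch(theta, X, y, lamda=1.0):
    m = y.shape[0]
    h = sigmoid_sketch(X @ theta.reshape(-1, 1))
    grad = (X.T @ (h - y)).ravel() / m
    grad[1:] += lamda / m * theta[1:]  # skip the intercept term
    return grad
```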
```python
# initial sizes
m, n = X.shape
# Initialize fitting parameters
initial_theta = np.zeros([n, 1])
# Compute and display initial cost and gradient
cost = cost_function(initial_theta, X, y, regularized=False)
grad = gradient_function(initial_theta, X, y, regularized=False)
print('Cost at initial theta (zeros): {}'.format(cost))
print('Expected cost (approx): 0.693')
print('Gradient at initial theta (zeros) - first five values only:')
print(grad[:5])
print('Expected gradients (approx) - first five values only:')
print('[0.0085 0.0188 0.0001 0.0503 0.0115]')
test_theta = np.ones([X.shape[1], 1])
cost = cost_function(test_theta, X, y, lamda=10, regularized=True)
grad = gradient_function(test_theta, X, y, lamda=10, regularized=True)
print('Cost at test theta (with lambda = 10): {}'.format(cost))
print('Expected cost (approx): 3.16')
print('Gradient at test theta - first five values only: {}'.format(grad[:5]))
print('Expected gradients (approx) - first five values only:')
print('[0.3460 0.1614 0.1948 0.2269 0.0922]')
```
Cost at initial theta (zeros): [[0.69314718]]
Expected cost (approx): 0.693
Gradient at initial theta (zeros) - first five values only:
[8.47457627e-03 7.77711864e-05 3.76648474e-02 2.34764889e-02
3.93028171e-02]
Expected gradients (approx) - first five values only:
[0.0085 0.0188 0.0001 0.0503 0.0115]
Cost at test theta (with lambda = 10): [[3.16450933]]
Expected cost (approx): 3.16
Gradient at test theta - first five values only: [0.34604507 0.19479576 0.24438558 0.18346846 0.19994895]
Expected gradients (approx) - first five values only:
[0.3460 0.1614 0.1948 0.2269 0.0922]
## Learning parameters using scipy.minimize
In earlier examples, we found the optimal parameters of a linear regression model by implementing gradient descent: we wrote a cost function, calculated its gradient, and then took gradient descent steps accordingly.
This time, instead of taking gradient descent steps, we will use the function minimize from the scipy.optimize module.
Scipy's minimize() function provides a common interface to unconstrained and constrained minimization algorithms for multivariate scalar functions in scipy.optimize.
For logistic regression, we need to optimize the cost function J(θ) with parameters θ. Concretely, we are going to use minimize to find the best parameters θ for the logistic regression cost function, given a fixed dataset (of X and y values).
You will pass to minimize() the following inputs:
- Our predefined cost_function which returns cost while taking X and theta as arguments.
- A gradient_function which returns the derivatives of the $\theta$ values passed to it.
- The initial values of the parameters we are trying to optimize.
- The additional arguments (the dataset (X, y), the regularization parameter $\lambda$ and the `regularized` flag) which are passed on to both the cost and gradient functions.
- A callback function which is called after each iteration with the $\theta$ value obtained at that iteration; we use this callback to store the theta values for every iteration.
The minimize() function returns an OptimizeResult object which contains the final theta values, the termination status, the final cost, etc.
More information about the minimize function can be found in the documentation <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html">here</a>.
```python
X_norm, mu, sigma = feature_normalize(X[:, 1:])
X_norm = add_bias_unit(X_norm)
from scipy.optimize import minimize
theta_history = [] # for plotting cost vs iteration graph
def theta_store(value, *args):
theta_history.append(value)
initial_theta = np.zeros(n)
op_result = minimize(fun=cost_function, x0=initial_theta, jac=gradient_function, args=(X, y, 1, True), method='tnc', callback=theta_store)
print('Cost at theta found by Gradient descent: {}'.format(op_result.fun))
print('theta: {}'.format(op_result.x))
# convert theta_history into J_history by evaluating the cost at each stored theta
J_history = np.array([float(np.asarray(cost_function(th, X, y, 1, True)).ravel()[0])
                      for th in theta_history])
# plot J_history
fig1, ax1 = plt.subplots()
ax1.plot(range(J_history.size), J_history)
ax1.set_xlabel('Iterations')
ax1.set_ylabel('Cost')
fig2, ax2 = plt.subplots()
X_old = data[:, :-1] # (118, 2)
y_old = data[:, -1, np.newaxis] # (118, 1)
y_slim = y_old.ravel()
# plotting y=1 values
ax2.scatter(x=X_old[y_slim == 1, 0], y=X_old[y_slim == 1, 1], marker='+', c='black', s=50, label='Accepted')
# plotting y=0 values
# X[y_slim == 0, 0] is logical indexing with rows with y=0 only
ax2.scatter(x=X_old[y_slim == 0, 0], y=X_old[y_slim == 0, 1], marker='o', c='xkcd:light yellow', s=25, label='Rejected', edgecolor='k')
# labels
ax2.set_xlabel('Microchip Test 1')
ax2.set_ylabel('Microchip Test 2')
# Specified in plot order
ax2.legend()
theta = op_result.x[:,np.newaxis]
plot_decision_boundary(theta=theta, X=X, y=y, hypothesis=sigmoid, precision=0.1, fig=fig2, ax=ax2, feature_map=(map_feature, 6))
```
## Influence of lambda values
In this part of the example, we will get to try out different regularization parameters for the dataset to understand how regularization prevents overfitting.
### No regularization (Overfitting) ($\lambda = 0$)
```python
initial_theta = np.zeros(n)
op_result = minimize(fun=cost_function, x0=initial_theta, jac=gradient_function, args=(X, y, 0.0001, True), method='tnc') # lambda is effectively zero in the args tuple
fig3, ax3 = plt.subplots()
# plotting y=1 values
ax3.scatter(x=X_old[y_slim == 1, 0], y=X_old[y_slim == 1, 1], marker='+', c='black', s=50, label='Accepted')
# plotting y=0 values
# X[y_slim == 0, 0] is logical indexing with rows with y=0 only
ax3.scatter(x=X_old[y_slim == 0, 0], y=X_old[y_slim == 0, 1], marker='o', c='xkcd:light yellow', s=25, label='Rejected', edgecolor='k')
# labels
ax3.set_xlabel('Microchip Test 1')
ax3.set_ylabel('Microchip Test 2')
# Specified in plot order
ax3.legend()
theta = op_result.x[:,np.newaxis]
plot_decision_boundary(theta=theta, X=X, y=y, hypothesis=sigmoid, precision=0.1, fig=fig3, ax=ax3, feature_map=(map_feature, 6))
```
Notice the changes in the decision boundary as you vary λ. With a small
λ, you should find that the classifier gets almost every training example
correct, but draws a very complicated boundary, thus overfitting the data. This is not a good decision boundary: for example, it predicts
that a point at x = (−0.25, 1.5) is accepted (y = 1), which seems to be an incorrect decision given the training set.
### Too much regularization (Underfitting) ($\lambda = 100$)
```python
initial_theta = np.zeros(n)
op_result = minimize(fun=cost_function, x0=initial_theta, jac=gradient_function, args=(X, y, 100, True), method='TNC') # lambda is 100 in the args tuple
fig4, ax4 = plt.subplots()
# plotting y=1 values
ax4.scatter(x=X_old[y_slim == 1, 0], y=X_old[y_slim == 1, 1], marker='+', c='black', s=50, label='Accepted')
# plotting y=0 values
# X[y_slim == 0, 0] is logical indexing with rows with y=0 only
ax4.scatter(x=X_old[y_slim == 0, 0], y=X_old[y_slim == 0, 1], marker='o', c='xkcd:light yellow', s=25, label='Rejected', edgecolor='k')
# labels
ax4.set_xlabel('Microchip Test 1')
ax4.set_ylabel('Microchip Test 2')
# Specified in plot order
ax4.legend()
theta = op_result.x[:,np.newaxis]
plot_decision_boundary(theta=theta, X=X, y=y, hypothesis=sigmoid, precision=0.1, fig=fig4, ax=ax4, feature_map=(map_feature, 6))
```
With a larger λ, you should see a plot that shows a simpler decision
boundary which still separates the positives and negatives fairly well. However, if λ is set to too high a value, you will not get a good fit and the decision boundary will not follow the data well, thus underfitting it.
|
a744f44668549edf4ff86de1240e217b5c0c72e0
| 118,575 |
ipynb
|
Jupyter Notebook
|
Logistic_Regression/Logistic Regression (with regularization).ipynb
|
Mr-MayankThakur/Machine-learning-Implementations-with-Numpy
|
453bb15c9089d42b52ff0ff09d8c66def137ec4e
|
[
"MIT"
] | null | null | null |
Logistic_Regression/Logistic Regression (with regularization).ipynb
|
Mr-MayankThakur/Machine-learning-Implementations-with-Numpy
|
453bb15c9089d42b52ff0ff09d8c66def137ec4e
|
[
"MIT"
] | null | null | null |
Logistic_Regression/Logistic Regression (with regularization).ipynb
|
Mr-MayankThakur/Machine-learning-Implementations-with-Numpy
|
453bb15c9089d42b52ff0ff09d8c66def137ec4e
|
[
"MIT"
] | null | null | null | 204.792746 | 25,776 | 0.899667 | true | 3,743 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.835484 | 0.90053 | 0.752378 |
__label__eng_Latn
| 0.976412 | 0.586358 |
# Taylor Problem 16.14 version B
We'll plot at various times a wave $u(x,t)$ that is defined by its initial shape at $t=0$ from $x=0$ to $x=L$, using a Fourier sine series to write the result at a general time t:
$\begin{align}
u(x,t) = \sum_{n=1}^{\infty} B_n \sin(k_n x)\cos(\omega_n t)
\;,
\end{align}$
with $k_n = n\pi/L$ and $\omega_n = k_n c$, where $c$ is the wave speed. Here the coefficients are given by
$\begin{align}
B_n = \frac{2}{L}\int_0^L u(x,0) \sin\frac{n\pi x}{L} \, dx
\;.
\end{align}$
* Created 28-Mar-2019. Last revised 04-Apr-2019 by Dick Furnstahl (furnstahl.1@osu.edu).
* This version sums over all n integers, even and odd.
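For the triangular initial profile used below (a symmetric triangle of unit height peaked at $x = L/2$, which is what the `B_coeff` method of the class assumes), the coefficient integral evaluates in closed form to

$$
B_n =
\begin{cases}
\dfrac{8\,(-1)^m}{(n\pi)^2}, & n = 2m+1 \ \text{(odd)}, \\
0, & n \ \text{even},
\end{cases}
$$

which is exactly what `B_coeff` returns.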
```python
%matplotlib inline
```
```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import animation, rc
from IPython.display import HTML
```
First define a class that encapsulates the $t=0$ wave form (here a triangle) and the shape at any later time $t$, given the wave speed `c_wave`.
```python
class uTriangular():
"""
documentation
"""
def __init__(self, x_pts, num_terms=20, c_wave=1., L=1.):
self.x_pts = x_pts
self.num_terms = num_terms
self.c = c_wave
self.L = L
def B_coeff(self, n):
"""Fourier coefficient for the nth term in the expansion, which is only
non-zero if n is odd, or n = 2*m + 1.
"""
if n % 2 == 1: # n is odd
m = (n - 1)/2
return (-1)**m * 8. / (n * np.pi)**2
else: # n is even
return 0.
def k(self, n):
"""Wave number for n
"""
return n * np.pi / self.L
def u_wave(self, t):
"""Returns the wave from the sum of Fourier components.
"""
y_pts = np.zeros(len(self.x_pts)) # define y_pts as the same size as x_pts
for n in np.arange(0, self.num_terms):
y_pts += self.B_coeff(n) * np.sin(self.k(n) * self.x_pts) \
* np.cos(self.k(n) * self.c * t)
return y_pts
```
```python
```
```python
```
```python
```
First look at the initial ($t=0$) wave form.
```python
L = 1.
num_n = 40
c_wave = 1
omega_1 = np.pi * c_wave / L
tau = 2.*np.pi / omega_1
# Set up the array of x points (whatever looks good)
x_min = 0.
x_max = L
delta_x = 0.01
x_pts = np.arange(x_min, x_max, delta_x)
u_triangular_1 = uTriangular(x_pts, num_n, c_wave, L)
u_triangular_2 = uTriangular(x_pts, num_n/8, c_wave, L)
# Make a figure showing the initial wave.
t_now = 0.
fig = plt.figure(figsize=(6,4), num='Standing wave')
ax = fig.add_subplot(1,1,1)
ax.set_xlim(x_min, x_max)
gap = 0.1
ax.set_ylim(-1. - gap, 1. + gap)
ax.set_xlabel(r'$x$')
ax.set_ylabel(r'$u(x, t=0)$')
ax.set_title(rf'$t = {t_now:.1f}$')
line, = ax.plot(x_pts,
u_triangular_1.u_wave(t_now),
color='blue', lw=2)
line2, = ax.plot(x_pts,
u_triangular_2.u_wave(t_now),
color='red', lw=2)
fig.tight_layout()
```
Next make some plots at an array of time points.
```python
t_array = tau * np.arange(0., 1.125, .125)
fig_array = plt.figure(figsize=(12,12), num='Standing wave')
for i, t_now in enumerate(t_array):
ax_array = fig_array.add_subplot(3, 3, i+1)
ax_array.set_xlim(x_min, x_max)
gap = 0.1
ax_array.set_ylim(-1. - gap, 1. + gap)
ax_array.set_xlabel(r'$x$')
ax_array.set_ylabel(r'$u(x, t)$')
ax_array.set_title(rf'$t = {t_now/tau:.3f}\tau$')
ax_array.plot(x_pts,
                  u_triangular_1.u_wave(t_now),
color='blue', lw=2)
fig_array.tight_layout()
fig_array.savefig('Taylor_Problem_16p14.png',
bbox_inches='tight')
```
Now it is time to animate!
```python
# Set up the t mesh for the animation. The maximum value of t shown in
# the movie will be t_min + delta_t * frame_number
t_min = 0. # You can make this negative to see what happens before t=0!
t_max = 2.*tau
delta_t = t_max / 100.
t_pts = np.arange(t_min, t_max + delta_t, delta_t)
```
We use the cell "magic" `%%capture` to keep the figure from being shown here. If we didn't, the animated version below would be blank.
```python
%%capture
fig_anim = plt.figure(figsize=(6,3), num='Triangular wave')
ax_anim = fig_anim.add_subplot(1,1,1)
ax_anim.set_xlim(x_min, x_max)
gap = 0.1
ax_anim.set_ylim(-1. - gap, 1. + gap)
# By assigning the first return from plot to line_anim, we can later change
# the values in the line.
line_anim, = ax_anim.plot(x_pts,
                          u_triangular_1.u_wave(t_min),
color='blue', lw=2)
fig_anim.tight_layout()
```
```python
def animate_wave(i):
"""This is the function called by FuncAnimation to create each frame,
numbered by i. So each i corresponds to a point in the t_pts
array, with index i.
"""
t = t_pts[i]
    y_pts = u_triangular_1.u_wave(t)
line_anim.set_data(x_pts, y_pts) # overwrite line_anim with new points
return (line_anim,) # this is needed for blit=True to work
```
```python
frame_interval = 80. # time between frames
frame_number = 101 # number of frames to include (index of t_pts)
anim = animation.FuncAnimation(fig_anim,
animate_wave,
init_func=None,
frames=frame_number,
interval=frame_interval,
blit=True,
repeat=False)
```
```python
HTML(anim.to_jshtml()) # animate using javascript
```
```python
```
|
9b808e57a3855a492ba361edb67fa9223a3ae05b
| 36,023 |
ipynb
|
Jupyter Notebook
|
2020_week_11/Taylor_problem_16p14_B.ipynb
|
CLima86/Physics_5300_CDL
|
d9e8ee0861d408a85b4be3adfc97e98afb4a1149
|
[
"MIT"
] | null | null | null |
2020_week_11/Taylor_problem_16p14_B.ipynb
|
CLima86/Physics_5300_CDL
|
d9e8ee0861d408a85b4be3adfc97e98afb4a1149
|
[
"MIT"
] | null | null | null |
2020_week_11/Taylor_problem_16p14_B.ipynb
|
CLima86/Physics_5300_CDL
|
d9e8ee0861d408a85b4be3adfc97e98afb4a1149
|
[
"MIT"
] | null | null | null | 98.155313 | 17,956 | 0.823557 | true | 1,682 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.835484 | 0.798187 | 0.666872 |
__label__eng_Latn
| 0.879634 | 0.387698 |
```python
import numpy as np
import sympy as sy
import matplotlib.pyplot as plt
%matplotlib inline
```
```python
s = sy.symbols('s', real=False)
# Reversed Bessel polynomials: denominators of the 2nd- and 4th-order Bessel filters
A2 = s**2 + 3*s + 3
A4 = s**4 + 10*s**3 + 45*s**2 + 105*s + 105
# Cascade of two identical 2nd-order sections, for comparison with the true 4th-order filter
A22 = sy.simplify(sy.expand(A2*A2))
```
```python
# Poles of the cascaded filter (double poles) and of the true 4th-order Bessel filter
sol22 = sy.solve(A22, s)
sol4 = sy.solve(A4, s)
```
```python
sol22
```
[-3/2 - sqrt(3)*I/2, -3/2 + sqrt(3)*I/2]
```python
sol4
```
[-5/2 - sqrt(-5 + 15/(2*(-75/16 + 15*sqrt(35)*I/16)**(1/3)) + 2*(-75/16 + 15*sqrt(35)*I/16)**(1/3))/2 - sqrt(-10 - 2*(-75/16 + 15*sqrt(35)*I/16)**(1/3) + 10/sqrt(-5 + 15/(2*(-75/16 + 15*sqrt(35)*I/16)**(1/3)) + 2*(-75/16 + 15*sqrt(35)*I/16)**(1/3)) - 15/(2*(-75/16 + 15*sqrt(35)*I/16)**(1/3)))/2,
-5/2 + sqrt(-10 - 2*(-75/16 + 15*sqrt(35)*I/16)**(1/3) + 10/sqrt(-5 + 15/(2*(-75/16 + 15*sqrt(35)*I/16)**(1/3)) + 2*(-75/16 + 15*sqrt(35)*I/16)**(1/3)) - 15/(2*(-75/16 + 15*sqrt(35)*I/16)**(1/3)))/2 - sqrt(-5 + 15/(2*(-75/16 + 15*sqrt(35)*I/16)**(1/3)) + 2*(-75/16 + 15*sqrt(35)*I/16)**(1/3))/2]
```python
# Compare pole locations: cascaded 2nd-order sections vs the true 4th-order Bessel filter
for r in sol22:
plt.plot(sy.N(sy.re(r)), sy.N(sy.im(r)), 'x', markersize=12)
for r in sol4:
plt.plot(np.real(sy.N(r)), np.imag(sy.N(r)), 'x', markersize=12)
```
```python
```
|
ac42e2caac2165c9ffe52eaee16cde9b9d4d9175
| 19,927 |
ipynb
|
Jupyter Notebook
|
sampling-and-aliasing/notebooks/PS7-Bessel-filters.ipynb
|
kjartan-at-tec/mr2007-computerized-control
|
16e35f5007f53870eaf344eea1165507505ab4aa
|
[
"MIT"
] | 2 |
2020-11-07T05:20:37.000Z
|
2020-12-22T09:46:13.000Z
|
sampling-and-aliasing/notebooks/PS7-Bessel-filters.ipynb
|
alfkjartan/control-computarizado
|
5b9a3ae67602d131adf0b306f3ffce7a4914bf8e
|
[
"MIT"
] | 4 |
2020-06-12T20:44:41.000Z
|
2020-06-12T20:49:00.000Z
|
sampling-and-aliasing/notebooks/PS7-Bessel-filters.ipynb
|
kjartan-at-tec/mr2007-computerized-control
|
16e35f5007f53870eaf344eea1165507505ab4aa
|
[
"MIT"
] | 1 |
2021-03-14T03:55:27.000Z
|
2021-03-14T03:55:27.000Z
| 119.323353 | 5,368 | 0.74246 | true | 589 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.91118 | 0.828939 | 0.755312 |
__label__kor_Hang
| 0.09501 | 0.593175 |
<div align='center' ></div>
<div align='right'></div>
```python
%matplotlib inline
```
```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sympy as sym
import solowpy
```
# 4. Solving the model
## 4.1 Solow model as an initial value problem
The Solow model can be formulated as an initial value problem (IVP) as follows.
$$ \dot{k}(t) = sf(k(t)) - (g + n + \delta)k(t),\ t\ge t_0,\ k(t_0) = k_0 \tag{4.1.0} $$
The solution to this IVP is a function $k(t)$ describing the time-path of capital stock (per unit effective labor). Our objective in this section will be to explore methods for approximating the function $k(t)$. The methods we will learn are completely general and can be used to solve any IVP. Those interested in learning more about these methods should start by reading Chapter 10 of [*Numerical Methods for Economists*](http://mitpress.mit.edu/books/numerical-methods-economics) by Ken Judd before proceeding to John Butcher's excellent book entitled [*Numerical Methods for solving Ordinary Differential Equations*](http://www.shivanian.com/teaching%20avtivities/Butcher.pdf).
Before discussing numerical methods we should stop and consider whether or not there are any special cases (i.e., combinations of parameters) for which the initial value problem defined in 4.1.0 might have an analytic solution. Analytic results can be very useful in building intuition about the economic mechanisms at play in a model and are invaluable for debugging code.
## 4.2 Analytic methods
### 4.2.1 Analytic solution for a model with Cobb-Douglas production
The Solow model with Cobb-Douglas production happens to have a completely [general analytic solution](https://github.com/davidrpugh/numerical-methods/raw/master/labs/lab-1/solow-analytic-solution.pdf):
$$ k(t) = \left[\left(\frac{s}{n+g+\delta}\right)\left(1 - e^{-(n + g + \delta) (1-\alpha) t}\right) + k_0^{1-\alpha}e^{-(n + g + \delta) (1-\alpha) t}\right]^{\frac{1}{1-\alpha}} \tag{4.2.0}$$
This analytic result is available via the `analytic_solution` method of the `solowpy.CobbDouglasModel` class.
```python
solowpy.CobbDouglasModel.analytic_solution?
```
### Example: Computing the analytic trajectory
We can compute an analytic solution for our Solow model like so...
```python
# define model parameters
cobb_douglas_params = {'A0': 1.0, 'L0': 1.0, 'g': 0.02, 'n': 0.03, 's': 0.15,
'delta': 0.05, 'alpha': 0.33}
# create an instance of the solow.Model class
cobb_douglas_model = solowpy.CobbDouglasModel(params=cobb_douglas_params)
```
```python
# specify some initial condition
k0 = 0.5 * cobb_douglas_model.steady_state
# grid of t values for which we want the value of k(t)
ti = np.linspace(0, 100, 10)
# generate a trajectory!
cobb_douglas_model.analytic_solution(ti, k0)
```
array([[ 0. , 0.91578343],
[ 11.11111111, 1.37081903],
[ 22.22222222, 1.60723533],
[ 33.33333333, 1.72380474],
[ 44.44444444, 1.78011125],
[ 55.55555556, 1.80706521],
[ 66.66666667, 1.81991504],
[ 77.77777778, 1.8260292 ],
[ 88.88888889, 1.82893579],
[ 100. , 1.83031695]])
...and we can make a plot of this solution like so...
```python
fig, ax = plt.subplots(1, 1, figsize=(8,6))
# compute the solution
ti = np.linspace(0, 100, 1000)
analytic_traj = cobb_douglas_model.analytic_solution(ti, k0)
# plot this trajectory
ax.plot(ti, analytic_traj[:,1], 'r-')
# equilibrium value of capital stock (per unit effective labor)
ax.axhline(cobb_douglas_model.steady_state, linestyle='dashed',
color='k', label='$k^*$')
# axes, labels, title, etc
ax.set_xlabel('Time, $t$', fontsize=20, family='serif')
ax.set_ylabel('$k(t)$', rotation='horizontal', fontsize=20, family='serif')
ax.set_title('Analytic solution to a Solow model\nwith Cobb-Douglas production',
fontsize=25, family='serif')
ax.legend(loc=0, frameon=False, bbox_to_anchor=(1.0, 1.0))
ax.grid('on')
plt.show()
```
### 4.2.2 Linearized solution to general model
In general there will not be closed-form solutions for the Solow model. The standard approach to obtaining general analytical results for the Solow model is to linearize the equation of motion for capital stock (per unit effective labor). Linearizing the equation of motion of capital (per unit effective labor) amounts to taking a first-order [Taylor approximation](http://en.wikipedia.org/wiki/Taylor_series) of equation 4.1.0 around its long-run equilibrium $k^*$:
$$ \dot{k}(t) \approx -\lambda (k(t) - k^*),\ t \ge t_0,\ k(t_0)=k_0 \tag{4.2.1}$$
where the *speed of convergence*, $\lambda$, is defined as
$$ \lambda = -\frac{\partial \dot{k}(k(t))}{\partial k(t)}\bigg|_{k(t)=k^*} \tag{4.2.2} $$
The solution to the linear differential equation 4.2.1 is
$$ k(t) = k^* + e^{-\lambda t}(k_0 - k^*). \tag{4.2.3} $$
To complete the solution it remains to find an expression for the speed of convergence $\lambda$:
\begin{align}
\lambda \equiv -\frac{\partial \dot{k}(k(t))}{\partial k(t)}\bigg|_{k(t)=k^*} =& -[sf'(k^*) - (g + n+ \delta)] \\
=& (g + n+ \delta) - sf'(k^*) \\
=& (g + n + \delta) - (g + n + \delta)\frac{k^*f'(k^*)}{f(k^*)} \\
=& (1 - \alpha_K(k^*))(g + n + \delta) \tag{4.2.4}
\end{align}
where the elasticity of output with respect to capital, $\alpha_K(k)$, is
$$\alpha_K(k) = \frac{k^*f'(k^*)}{f(k^*)}. \tag{4.2.5}$$
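For the Cobb-Douglas case $\alpha_K(k^*) = \alpha$, so the speed of convergence can be computed directly from the parameter values defined earlier. A quick check, using the `cobb_douglas_params` dictionary from above:

```python
# lambda = (1 - alpha) * (g + n + delta) for Cobb-Douglas production
alpha = cobb_douglas_params['alpha']
g, n, delta = (cobb_douglas_params[key] for key in ('g', 'n', 'delta'))
lambda_speed = (1 - alpha) * (g + n + delta)
half_life = np.log(2) / lambda_speed
print("Speed of convergence: {:.4f} per unit time".format(lambda_speed))
print("Half-life of deviations from k*: {:.1f} time units".format(half_life))
```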
### Example: Computing the linearized trajectory
One can compute a linear approximation of the model solution using the `linearized_solution` method of the `solowpy.Model` class as follows.
```python
# specify some initial condition
k0 = 0.5 * cobb_douglas_model.steady_state
# grid of t values for which we want the value of k(t)
ti = np.linspace(0, 100, 10)
# generate a trajectory!
cobb_douglas_model.linearized_solution(ti, k0)
```
array([[ 0. , 0.91578343],
[ 11.11111111, 1.39657145],
[ 22.22222222, 1.62494486],
[ 33.33333333, 1.73342179],
[ 44.44444444, 1.78494813],
[ 55.55555556, 1.80942305],
[ 66.66666667, 1.82104859],
[ 77.77777778, 1.82657069],
[ 88.88888889, 1.82919369],
[ 100. , 1.8304396 ]])
### 4.2.3 Accuracy of the linear approximation
```python
# initial condition
t0, k0 = 0.0, 0.5 * cobb_douglas_model.steady_state
# grid of t values for which we want the value of k(t)
ti = np.linspace(t0, 100, 1000)
# generate the trajectories
analytic = cobb_douglas_model.analytic_solution(ti, k0)
linearized = cobb_douglas_model.linearized_solution(ti, k0)
```
```python
fig, ax = plt.subplots(1, 1, figsize=(8,6))
ax.plot(ti, analytic[:,1], 'r-', label='Analytic')
ax.plot(ti, linearized[:,1], 'b-', label='Linearized')
# demarcate k*
ax.axhline(cobb_douglas_model.steady_state, linestyle='dashed',
color='k', label='$k^*$')
# axes, labels, title, etc
ax.set_xlabel('Time, $t$', fontsize=20, family='serif')
ax.set_ylabel('$k(t)$', rotation='horizontal', fontsize=20, family='serif')
ax.set_title('Analytic vs. linearized solutions', fontsize=25, family='serif')
ax.legend(loc='best', frameon=False, prop={'family':'serif'},
bbox_to_anchor=(1.0, 1.0))
ax.grid('on')
fig.show()
```
## 4.3 Finite-difference methods
Four of the best, most widely used ODE integrators have been implemented in the `scipy.integrate` module (they are called `dopri5`, `dop853`, `lsoda`, and `vode`). Each of these integrators uses some type of adaptive step-size control: the integrator adaptively adjusts the step size $h$ in order to keep the approximation error below some user-specified threshold. The cells below contain code which compares the approximation error of the forward Euler with those of [`lsoda`](http://computation.llnl.gov/casc/nsde/pubs/u113855.pdf) and [`vode`](http://jeffreydk.site50.net/papers/BDFmethodpaper.pdf). Instead of simple linear interpolation (i.e., `k=1`), I set `k=3` for cubic [B-spline](http://en.wikipedia.org/wiki/B-spline) interpolation.
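For reference, here is a minimal sketch of how the Solow ODE could be integrated directly with `scipy.integrate.ode`, independent of the model's `ivp` attribute used below (it assumes the Cobb-Douglas model and parameters defined earlier):

```python
from scipy.integrate import ode

def k_dot(t, k, s, alpha, g, n, delta):
    """Right-hand side of the Solow equation of motion with Cobb-Douglas production."""
    return s * k**alpha - (g + n + delta) * k

p = cobb_douglas_params
solver = ode(k_dot).set_integrator('lsoda')
solver.set_initial_value(0.5 * cobb_douglas_model.steady_state, 0.0)
solver.set_f_params(p['s'], p['alpha'], p['g'], p['n'], p['delta'])

ts, ks = [solver.t], [solver.y[0]]
while solver.successful() and solver.t < 100.0:
    solver.integrate(solver.t + 1.0)   # lsoda chooses its own internal steps
    ts.append(solver.t)
    ks.append(solver.y[0])
```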
Finally, we can plot analytic trajectories for different initial conditions. Note that the analytic solutions converge to the long-run equilibrium no matter the initial condition of capital stock (per unit of effective labor), providing a nice graphical demonstration that the Solow model is globally stable.
```python
fig, ax = plt.subplots(1, 1, figsize=(8,6))
# lower and upper bounds for initial conditions
k_star = cobb_douglas_model.steady_state
k_l = 0.5 * k_star
k_u = 2.0 * k_star
for k0 in np.linspace(k_l, k_u, 5):
# compute the solution
ti = np.linspace(0, 100, 1000)
    analytic_traj = cobb_douglas_model.analytic_solution(ti, k0)
# plot this trajectory
ax.plot(ti, analytic_traj[:,1])
# equilibrium value of capital stock (per unit effective labor)
ax.axhline(k_star, linestyle='dashed', color='k', label='$k^*$')
# axes, labels, title, etc
ax.set_xlabel('Time, $t$', fontsize=15, family='serif')
ax.set_ylabel('$k(t)$', rotation='horizontal', fontsize=20, family='serif')
ax.set_title('Analytic solution to a Solow model\nwith Cobb-Douglas production',
fontsize=20, family='serif')
ax.legend(loc=0, frameon=False, bbox_to_anchor=(1.0, 1.0))
ax.grid('on')
plt.show()
```
```python
k0 = 0.5 * cobb_douglas_model.steady_state
numeric_trajectory = cobb_douglas_model.ivp.solve(t0=0, y0=k0, h=0.5, T=100, integrator='dopri5')
ti = numeric_trajectory[:,0]
linearized_trajectory = cobb_douglas_model.linearized_solution(ti, k0)
```
### 4.3.2 Accuracy of finite-difference methods
```python
t0, k0 = 0.0, 0.5
numeric_soln = cobb_douglas_model.ivp.solve(t0, k0, T=100, integrator='lsoda')
```
```python
fig, ax = plt.subplots(1, 1, figsize=(8,6))
# compute and plot the numeric approximation
t0, k0 = 0.0, 0.5
numeric_soln = cobb_douglas_model.ivp.solve(t0, k0, T=100, integrator='lsoda')
ax.plot(numeric_soln[:,0], numeric_soln[:,1], 'bo', markersize=3.0)
# compute and plot the analytic solution
ti = np.linspace(0, 100, 1000)
analytic_soln = cobb_douglas_model.analytic_solution(ti, k0)
ax.plot(ti, analytic_soln[:,1], 'r-')
# equilibrium value of capital stock (per unit effective labor)
k_star = cobb_douglas_model.steady_state
ax.axhline(k_star, linestyle='dashed', color='k', label='$k^*$')
# axes, labels, title, etc
ax.set_xlabel('Time, $t$', fontsize=15, family='serif')
ax.set_ylabel('$k(t)$', rotation='horizontal', fontsize=20, family='serif')
ax.set_title('Numerical approximation of the solution',
fontsize=20, family='serif')
ax.legend(loc=0, frameon=False, bbox_to_anchor=(1.0, 1.0))
ax.grid('on')
plt.show()
```
```python
ti = np.linspace(0, 100, 1000)
interpolated_soln = cobb_douglas_model.ivp.interpolate(numeric_soln, ti, k=3)
```
```python
fig, ax = plt.subplots(1, 1, figsize=(8,6))
# compute and plot the numeric approximation
ti = np.linspace(0, 100, 1000)
interpolated_soln = cobb_douglas_model.ivp.interpolate(numeric_soln, ti, k=3)
ax.plot(ti, interpolated_soln[:,1], 'b-')
# compute and plot the analytic solution
analytic_soln = cobb_douglas_model.analytic_solution(ti, k0)
ax.plot(ti, analytic_soln[:,1], 'r-')
# equilibrium value of capital stock (per unit effective labor)
k_star = cobb_douglas_model.steady_state
ax.axhline(k_star, linestyle='dashed', color='k', label='$k^*$')
# axes, labels, title, etc
ax.set_xlabel('Time, $t$', fontsize=15, family='serif')
ax.set_ylabel('$k(t)$', rotation='horizontal', fontsize=20, family='serif')
ax.set_title('Numerical approximation of the solution',
fontsize=20, family='serif')
ax.legend(loc=0, frameon=False, bbox_to_anchor=(1.0, 1.0))
ax.grid('on')
plt.show()
```
```python
```
```python
ti = np.linspace(0, 100, 1000)
residual = cobb_douglas_model.ivp.compute_residual(numeric_soln, ti, k=3)
```
```python
# extract the raw residuals
capital_residual = residual[:, 1]
# typically, normalize residual by the level of the variable
norm_capital_residual = np.abs(capital_residual) / interpolated_soln[:,1]
# create the plot
fig = plt.figure(figsize=(8, 6))
plt.plot(ti, norm_capital_residual, 'b-', label='$k(t)$')
plt.axhline(np.finfo('float').eps, linestyle='dashed', color='k', label='Machine eps')
plt.xlabel('Time', fontsize=15)
plt.ylim(1e-16, 1)
plt.ylabel('Residuals (normalized)', fontsize=15, family='serif')
plt.yscale('log')
plt.title('Residual', fontsize=20, family='serif')
plt.grid()
plt.legend(loc=0, frameon=False, bbox_to_anchor=(1.0,1.0))
plt.show()
```
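A single summary number for this accuracy check is the worst-case normalized residual over the evaluation grid. A minimal sketch, reusing the `norm_capital_residual` array from the cell above (only the name `max_norm_residual` is new here):
```python
# worst-case normalized residual over the evaluation grid
max_norm_residual = np.nanmax(norm_capital_residual)
print("Maximum normalized residual: {:.2e}".format(max_norm_residual))
```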
```python
```
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W1D5_DimensionalityReduction/student/W1D5_Tutorial3.ipynb" target="_parent"></a>
# Neuromatch Academy: Week 1, Day 5, Tutorial 3
# Dimensionality Reduction and reconstruction
__Content creators:__ Alex Cayco Gajic, John Murray
__Content reviewers:__ Roozbeh Farhoudi, Matt Krause, Spiros Chavlis, Richard Gao, Michael Waskom
---
# Tutorial Objectives
In this notebook we'll learn to apply PCA for dimensionality reduction, using a classic dataset that is often used to benchmark machine learning algorithms: MNIST. We'll also learn how to use PCA for reconstruction and denoising.
Overview:
- Perform PCA on MNIST
- Calculate the variance explained
- Reconstruct data with different numbers of PCs
- (Bonus) Examine denoising using PCA
You can learn more about MNIST dataset [here](https://en.wikipedia.org/wiki/MNIST_database).
```python
# @title Video 1: PCA for dimensionality reduction
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="oO0bbInoO_0", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```
Video available at https://youtube.com/watch?v=oO0bbInoO_0
---
# Setup
Run these cells to get the tutorial started.
```python
# Imports
import numpy as np
import matplotlib.pyplot as plt
```
```python
# @title Figure Settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
```
```python
# @title Helper Functions
def plot_variance_explained(variance_explained):
"""
Plots eigenvalues.
Args:
variance_explained (numpy array of floats) : Vector of variance explained
for each PC
Returns:
Nothing.
"""
plt.figure()
plt.plot(np.arange(1, len(variance_explained) + 1), variance_explained,
'--k')
plt.xlabel('Number of components')
plt.ylabel('Variance explained')
plt.show()
def plot_MNIST_reconstruction(X, X_reconstructed):
"""
Plots 9 images in the MNIST dataset side-by-side with the reconstructed
images.
Args:
X (numpy array of floats) : Data matrix each column
corresponds to a different
random variable
X_reconstructed (numpy array of floats) : Data matrix each column
corresponds to a different
random variable
Returns:
Nothing.
"""
plt.figure()
ax = plt.subplot(121)
k = 0
for k1 in range(3):
for k2 in range(3):
k = k + 1
plt.imshow(np.reshape(X[k, :], (28, 28)),
extent=[(k1 + 1) * 28, k1 * 28, (k2 + 1) * 28, k2 * 28],
vmin=0, vmax=255)
plt.xlim((3 * 28, 0))
plt.ylim((3 * 28, 0))
plt.tick_params(axis='both', which='both', bottom=False, top=False,
labelbottom=False)
ax.set_xticks([])
ax.set_yticks([])
plt.title('Data')
plt.clim([0, 250])
ax = plt.subplot(122)
k = 0
for k1 in range(3):
for k2 in range(3):
k = k + 1
plt.imshow(np.reshape(np.real(X_reconstructed[k, :]), (28, 28)),
extent=[(k1 + 1) * 28, k1 * 28, (k2 + 1) * 28, k2 * 28],
vmin=0, vmax=255)
plt.xlim((3 * 28, 0))
plt.ylim((3 * 28, 0))
plt.tick_params(axis='both', which='both', bottom=False, top=False,
labelbottom=False)
ax.set_xticks([])
ax.set_yticks([])
plt.clim([0, 250])
plt.title('Reconstructed')
plt.tight_layout()
def plot_MNIST_sample(X):
"""
Plots 9 images in the MNIST dataset.
Args:
X (numpy array of floats) : Data matrix each column corresponds to a
different random variable
Returns:
Nothing.
"""
fig, ax = plt.subplots()
k = 0
for k1 in range(3):
for k2 in range(3):
k = k + 1
plt.imshow(np.reshape(X[k, :], (28, 28)),
extent=[(k1 + 1) * 28, k1 * 28, (k2+1) * 28, k2 * 28],
vmin=0, vmax=255)
plt.xlim((3 * 28, 0))
plt.ylim((3 * 28, 0))
plt.tick_params(axis='both', which='both', bottom=False, top=False,
labelbottom=False)
plt.clim([0, 250])
ax.set_xticks([])
ax.set_yticks([])
plt.show()
def plot_MNIST_weights(weights):
"""
Visualize PCA basis vector weights for MNIST. Red = positive weights,
blue = negative weights, white = zero weight.
Args:
weights (numpy array of floats) : PCA basis vector
Returns:
Nothing.
"""
fig, ax = plt.subplots()
cmap = plt.cm.get_cmap('seismic')
plt.imshow(np.real(np.reshape(weights, (28, 28))), cmap=cmap)
plt.tick_params(axis='both', which='both', bottom=False, top=False,
labelbottom=False)
plt.clim(-.15, .15)
plt.colorbar(ticks=[-.15, -.1, -.05, 0, .05, .1, .15])
ax.set_xticks([])
ax.set_yticks([])
plt.show()
def add_noise(X, frac_noisy_pixels):
"""
Randomly corrupts a fraction of the pixels by setting them to random values.
Args:
X (numpy array of floats) : Data matrix
frac_noisy_pixels (scalar) : Fraction of noisy pixels
Returns:
(numpy array of floats) : Data matrix + noise
"""
X_noisy = np.reshape(X, (X.shape[0] * X.shape[1]))
N_noise_ixs = int(X_noisy.shape[0] * frac_noisy_pixels)
noise_ixs = np.random.choice(X_noisy.shape[0], size=N_noise_ixs,
replace=False)
X_noisy[noise_ixs] = np.random.uniform(0, 255, noise_ixs.shape)
X_noisy = np.reshape(X_noisy, (X.shape[0], X.shape[1]))
return X_noisy
def change_of_basis(X, W):
"""
Projects data onto a new basis.
Args:
X (numpy array of floats) : Data matrix each column corresponding to a
different random variable
W (numpy array of floats) : new orthonormal basis columns correspond to
basis vectors
Returns:
(numpy array of floats) : Data matrix expressed in new basis
"""
Y = np.matmul(X, W)
return Y
def get_sample_cov_matrix(X):
"""
Returns the sample covariance matrix of data X.
Args:
X (numpy array of floats) : Data matrix each column corresponds to a
different random variable
Returns:
(numpy array of floats) : Covariance matrix
"""
X = X - np.mean(X, 0)
cov_matrix = 1 / X.shape[0] * np.matmul(X.T, X)
return cov_matrix
def sort_evals_descending(evals, evectors):
"""
Sorts eigenvalues and eigenvectors in decreasing order. Also aligns first two
eigenvectors to be in first two quadrants (if 2D).
Args:
evals (numpy array of floats) : Vector of eigenvalues
evectors (numpy array of floats) : Corresponding matrix of eigenvectors
each column corresponds to a different
eigenvalue
Returns:
(numpy array of floats) : Vector of eigenvalues after sorting
(numpy array of floats) : Matrix of eigenvectors after sorting
"""
index = np.flip(np.argsort(evals))
evals = evals[index]
evectors = evectors[:, index]
if evals.shape[0] == 2:
if np.arccos(np.matmul(evectors[:, 0],
1 / np.sqrt(2) * np.array([1, 1]))) > np.pi / 2:
evectors[:, 0] = -evectors[:, 0]
if np.arccos(np.matmul(evectors[:, 1],
1 / np.sqrt(2)*np.array([-1, 1]))) > np.pi / 2:
evectors[:, 1] = -evectors[:, 1]
return evals, evectors
def pca(X):
"""
Performs PCA on multivariate data. Eigenvalues are sorted in decreasing order
Args:
X (numpy array of floats) : Data matrix each column corresponds to a
different random variable
Returns:
(numpy array of floats) : Data projected onto the new basis
(numpy array of floats) : Vector of eigenvalues
(numpy array of floats) : Corresponding matrix of eigenvectors
"""
X = X - np.mean(X, 0)
cov_matrix = get_sample_cov_matrix(X)
evals, evectors = np.linalg.eigh(cov_matrix)
evals, evectors = sort_evals_descending(evals, evectors)
score = change_of_basis(X, evectors)
return score, evectors, evals
def plot_eigenvalues(evals, limit=True):
"""
Plots eigenvalues.
Args:
(numpy array of floats) : Vector of eigenvalues
Returns:
Nothing.
"""
plt.figure()
plt.plot(np.arange(1, len(evals) + 1), evals, 'o-k')
plt.xlabel('Component')
plt.ylabel('Eigenvalue')
plt.title('Scree plot')
if limit:
plt.show()
```
---
# Section 1: Perform PCA on MNIST
The MNIST dataset consists of 70,000 images of individual handwritten digits. Each image is a 28x28 pixel grayscale image. For convenience, each 28x28 pixel image is often unravelled into a single 784 (=28*28) element vector, so that the whole dataset is represented as a 70,000 x 784 matrix. Each row represents a different image, and each column represents a different pixel.
Run the following cell to load the MNIST dataset and plot the first nine images.
```python
from sklearn.datasets import fetch_openml
mnist = fetch_openml(name='mnist_784')
X = mnist.data
plot_MNIST_sample(X)
```
The MNIST dataset has an extrinsic dimensionality of 784, much higher than the 2-dimensional examples used in the previous tutorials! To make sense of this data, we'll use dimensionality reduction. But first, we need to determine the intrinsic dimensionality $K$ of the data. One way to do this is to look for an "elbow" in the scree plot, to determine which eigenvalues are significant.
## Exercise 1: Scree plot of MNIST
In this exercise you will examine the scree plot in the MNIST dataset.
**Steps:**
- Perform PCA on the dataset and examine the scree plot.
- When do the eigenvalues appear (by eye) to reach zero? (**Hint:** use `plt.xlim` to zoom into a section of the plot).
```python
help(pca)
help(plot_eigenvalues)
```
Help on function pca in module __main__:
pca(X)
Performs PCA on multivariate data. Eigenvalues are sorted in decreasing order
Args:
X (numpy array of floats) : Data matrix each column corresponds to a
different random variable
Returns:
(numpy array of floats) : Data projected onto the new basis
(numpy array of floats) : Vector of eigenvalues
(numpy array of floats) : Corresponding matrix of eigenvectors
Help on function plot_eigenvalues in module __main__:
plot_eigenvalues(evals, limit=True)
Plots eigenvalues.
Args:
(numpy array of floats) : Vector of eigenvalues
Returns:
Nothing.
```python
#################################################
## TO DO for students: perform PCA and plot the eigenvalues
#################################################
# perform PCA
score, evectors, evals = pca(X)
# plot the eigenvalues
plot_eigenvalues(evals, limit=False)
plt.xlim((0,100)) # limit x-axis up to 100 for zooming
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D5_DimensionalityReduction/solutions/W1D5_Tutorial3_Solution_a876e927.py)
*Example output:*
---
# Section 2: Calculate the variance explained
The scree plot suggests that most of the eigenvalues are near zero, with fewer than 100 having large values. Another common way to determine the intrinsic dimensionality is by considering the variance explained. This can be examined with a cumulative plot of the fraction of the total variance explained by the top $K$ components, i.e.,
\begin{equation}
\text{var explained} = \frac{\sum_{i=1}^K \lambda_i}{\sum_{i=1}^N \lambda_i}
\end{equation}
The intrinsic dimensionality is often quantified by the $K$ necessary to explain a large proportion of the total variance of the data (often a defined threshold, e.g., 90%).
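As a quick check of this criterion, the number of components needed to reach a chosen threshold can be read off the cumulative curve. A minimal sketch, assuming `variance_explained` is the (non-decreasing) cumulative fraction you will compute in Exercise 2 below:
```python
# number of components needed to explain at least 90% of the variance;
# np.searchsorted works because the cumulative fraction is non-decreasing
threshold = 0.9
K_threshold = np.searchsorted(variance_explained, threshold) + 1
print(f"{K_threshold} components explain at least {threshold:.0%} of the variance")
```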
## Exercise 2: Plot the explained variance
In this exercise you will plot the explained variance.
**Steps:**
- Fill in the function below to calculate the fraction of variance explained as a function of the number of principal components. **Hint:** use `np.cumsum`.
- Plot the variance explained using `plot_variance_explained`.
**Questions:**
- How many principal components are required to explain 90% of the variance?
- How does the intrinsic dimensionality of this dataset compare to its extrinsic dimensionality?
```python
help(plot_variance_explained)
```
Help on function plot_variance_explained in module __main__:
plot_variance_explained(variance_explained)
Plots eigenvalues.
Args:
variance_explained (numpy array of floats) : Vector of variance explained
for each PC
Returns:
Nothing.
```python
def get_variance_explained(evals):
"""
Calculates variance explained from the eigenvalues.
Args:
evals (numpy array of floats) : Vector of eigenvalues
Returns:
(numpy array of floats) : Vector of variance explained
"""
#################################################
## TO DO for students: calculate the explained variance using the equation
## from Section 2.
# Comment once you've filled in the function
  # raise NotImplementedError("Student exercise: calculate explained variance!")
#################################################
# cumulatively sum the eigenvalues
csum = np.cumsum(evals)
# normalize by the sum of eigenvalues
variance_explained = csum/sum(evals)
return variance_explained
#################################################
## TO DO for students: call the function and plot the variance explained
#################################################
# calculate the variance explained
variance_explained = get_variance_explained(evals)
# Uncomment to plot the variance explained
plot_variance_explained(variance_explained)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D5_DimensionalityReduction/solutions/W1D5_Tutorial3_Solution_0f5f51b9.py)
*Example output:*
---
# Section 3: Reconstruct data with different numbers of PCs
Now we have seen that the top 100 or so principal components of the data can explain most of the variance. We can use this fact to perform *dimensionality reduction*, i.e., by storing the data using only 100 components rather than the values of all 784 pixels. Remarkably, we will be able to reconstruct much of the structure of the data using only the top 100 components. To see this, recall that to perform PCA we projected the data $\bf X$ onto the eigenvectors of the covariance matrix:
\begin{equation}
\bf S = X W
\end{equation}
Since $\bf W$ is an orthogonal matrix, ${\bf W}^{-1} = {\bf W}^T$. So by multiplying both sides on the right by ${\bf W}^T$ we can rewrite this equation as
\begin{equation}
{\bf X = S W}^T.
\end{equation}
This now gives us a way to reconstruct the data matrix from the scores and loadings. To reconstruct the data from a low-dimensional approximation, we just have to truncate these matrices. Let ${\bf S}_{1:K}$ and ${\bf W}_{1:K}$ denote the matrices obtained by keeping only the first $K$ columns of ${\bf S}$ and ${\bf W}$, respectively. Then our reconstruction is:
\begin{equation}
{\bf \hat X = S}_{1:K} ({\bf W}_{1:K})^T.
\end{equation}
## Exercise 3: Data reconstruction
Fill in the function below to reconstruct the data using different numbers of principal components.
**Steps:**
* Fill in the following function to reconstruct the data based on the weights and scores. Don't forget to add the mean!
* Make sure your function works by reconstructing the data with all $K=784$ components. The two images should look identical.
```python
help(plot_MNIST_reconstruction)
```
Help on function plot_MNIST_reconstruction in module __main__:
plot_MNIST_reconstruction(X, X_reconstructed)
Plots 9 images in the MNIST dataset side-by-side with the reconstructed
images.
Args:
X (numpy array of floats) : Data matrix each column
corresponds to a different
random variable
X_reconstructed (numpy array of floats) : Data matrix each column
corresponds to a different
random variable
Returns:
Nothing.
```python
def reconstruct_data(score, evectors, X_mean, K):
"""
Reconstruct the data based on the top K components.
Args:
score (numpy array of floats) : Score matrix
evectors (numpy array of floats) : Matrix of eigenvectors
X_mean (numpy array of floats) : Vector corresponding to data mean
K (scalar) : Number of components to include
Returns:
(numpy array of floats) : Matrix of reconstructed data
"""
#################################################
## TO DO for students: Reconstruct the original data in X_reconstructed
# Comment once you've filled in the function
# raise NotImplementedError("Student excercise: reconstructing data function!")
#################################################
# Reconstruct the data from the score and eigenvectors
# Don't forget to add the mean!!
X_reconstructed = score[:,:K]@evectors[:,:K].T+X_mean
return X_reconstructed
K = 784
#################################################
## TO DO for students: Calculate the mean and call the function, then plot
## the original and the reconstructed data
#################################################
# Reconstruct the data based on all components
X_mean = X.mean(axis=0)
X_reconstructed = reconstruct_data(score, evectors, X_mean, K)
# Plot the data and reconstruction
plot_MNIST_reconstruction(X, X_reconstructed)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D5_DimensionalityReduction/solutions/W1D5_Tutorial3_Solution_e3395916.py)
*Example output:*
## Interactive Demo: Reconstruct the data matrix using different numbers of PCs
Now run the code below and experiment with the slider to reconstruct the data matrix using different numbers of principal components.
**Steps**
* How many principal components are necessary to reconstruct the numbers (by eye)? How does this relate to the intrinsic dimensionality of the data?
* Do you see any information in the data with only a single principal component?
```python
# @title
# @markdown Make sure you execute this cell to enable the widget!
def refresh(K=100):
X_reconstructed = reconstruct_data(score, evectors, X_mean, K)
plot_MNIST_reconstruction(X, X_reconstructed)
plt.title('Reconstructed, K={}'.format(K))
_ = widgets.interact(refresh, K=(1, 784, 10))
```
interactive(children=(IntSlider(value=100, description='K', max=784, min=1, step=10), Output()), _dom_classes=…
## Exercise 4: Visualization of the weights
Next, let's take a closer look at the first principal component by visualizing its corresponding weights.
**Steps:**
* Enter `plot_MNIST_weights` to visualize the weights of the first basis vector.
* What structure do you see? Which pixels have a strong positive weighting? Which have a strong negative weighting? What kinds of images would this basis vector differentiate?
* Try visualizing the second and third basis vectors. Do you see any structure? What about the 100th basis vector? 500th? 700th?
```python
help(plot_MNIST_weights)
```
Help on function plot_MNIST_weights in module __main__:
plot_MNIST_weights(weights)
Visualize PCA basis vector weights for MNIST. Red = positive weights,
blue = negative weights, white = zero weight.
Args:
weights (numpy array of floats) : PCA basis vector
Returns:
Nothing.
```python
#################################################
## TO DO for students: plot the weights calling the plot_MNIST_weights function
#################################################
# Plot the weights of the first principal component
plot_MNIST_weights(evectors[:, 0])
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D5_DimensionalityReduction/solutions/W1D5_Tutorial3_Solution_f358e413.py)
*Example output:*
---
# Summary
* In this tutorial, we learned how to use PCA for dimensionality reduction by selecting the top principal components. This can be useful as the intrinsic dimensionality ($K$) is often less than the extrinsic dimensionality ($N$) in neural data. $K$ can be inferred by choosing the number of eigenvalues necessary to capture some fraction of the variance.
* We also learned how to reconstruct an approximation of the original data using the top $K$ principal components. In fact, an alternate formulation of PCA is to find the $K$ dimensional space that minimizes the reconstruction error.
* Noise tends to inflate the apparent intrinsic dimensionality, however the higher components reflect noise rather than new structure in the data. PCA can be used for denoising data by removing noisy higher components.
* In MNIST, the weights corresponding to the first principal component appear to discriminate between a 0 and 1. We will discuss the implications of this for data visualization in the following tutorial.
---
# Bonus: Examine denoising using PCA
In this lecture, we saw that PCA finds an optimal low-dimensional basis to minimize the reconstruction error. Because of this property, PCA can be useful for denoising corrupted samples of the data.
## Exercise 5: Add noise to the data
In this exercise you will add salt-and-pepper noise to the original data and see how that affects the eigenvalues.
**Steps:**
- Use the function `add_noise` to add noise to 20% of the pixels.
- Then, perform PCA and plot the variance explained. How many principal components are required to explain 90% of the variance? How does this compare to the original data?
```python
help(add_noise)
```
Help on function add_noise in module __main__:
add_noise(X, frac_noisy_pixels)
Randomly corrupts a fraction of the pixels by setting them to random values.
Args:
X (numpy array of floats) : Data matrix
frac_noisy_pixels (scalar) : Fraction of noisy pixels
Returns:
(numpy array of floats) : Data matrix + noise
```python
###################################################################
# Insert your code here to:
# Add noise to the data
# Plot noise-corrupted data
# Perform PCA on the noisy data
# Calculate and plot the variance explained
###################################################################
np.random.seed(2020) # set random seed
X_noisy = add_noise(X, 0.2)
score_noisy, evectors_noisy, evals_noisy = pca(X_noisy)
variance_explained_noisy = get_variance_explained(evals_noisy)
plot_MNIST_sample(X_noisy)
plot_variance_explained(variance_explained_noisy)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D5_DimensionalityReduction/solutions/W1D5_Tutorial3_Solution_d4a41b8c.py)
*Example output:*
## Exercise 6: Denoising
Next, use PCA to perform denoising by projecting the noise-corrupted data onto the basis vectors found from the original dataset. By taking the top K components of this projection, we can reduce noise in dimensions orthogonal to the K-dimensional latent space.
**Steps:**
- Subtract the mean of the noise-corrupted data.
- Project the data onto the basis found with the original dataset (`evectors`, not `evectors_noisy`) and take the top $K$ components.
- Reconstruct the data as normal, using the top 50 components.
- Play around with the amount of noise and K to build intuition.
```python
###################################################################
# Insert your code here to:
# Subtract the mean of the noise-corrupted data
# Project onto the original basis vectors evectors
# Reconstruct the data using the top 50 components
# Plot the result
###################################################################
X_noisy_mean = X_noisy.mean(axis=0)
score_noisy_proj = change_of_basis(X_noisy - X_noisy_mean, evectors)
X_reconstructed = reconstruct_data(score_noisy_proj, evectors, X_noisy_mean, 50)
plot_MNIST_reconstruction(X_noisy, X_reconstructed)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D5_DimensionalityReduction/solutions/W1D5_Tutorial3_Solution_e3ee8262.py)
*Example output:*
###### Content under Creative Commons Attribution license CC-BY 4.0, code under MIT license (c)2014 L.A. Barba, G.F. Forsyth.
# Coding Assignment: Rocket
The equations of motion for a rocket in purely vertical flight are given by
\begin{align}
\frac{dh}{dt} &= v\\
(m_s+m_p) \frac{dv}{dt}& = -(m_s+m_p)g + \dot{m}_pv_e - \frac{1}{2}\rho v|v|AC_D
\end{align}
- $h$ is the altitude of the rocket
- $m_s = 50kg$ is the weight of the rocket shell
- $g = 9.81 \frac{m}{s^2}$
- $\rho = 1.091 \frac{kg}{m^3}$ is the average air density (assumed constant throughout flight)
- $A = \pi r^2$ is the maximum cross sectional area of the rocket, where $r = 0.5 m$
- $v_e = 325 \frac{m}{s}$ is the exhaust speed
- $C_D = 0.15 $ is the drag coefficient
- $m_{po} = 100 kg$ at time $t = 0$ is the initial weight of the rocket propellant
The mass of the remaining propellant is given by:
$$m_p = m_{po} - \int^t_0 \dot{m}_p d\tau$$
where $\dot{m}_p$ is the time-varying burn rate given by the following figure:
*Figure: Propellant burn rate*
Using Euler's method with a timestep of $\Delta t=0.1s$, create a Python script to calculate the altitude and velocity of the rocket from launch until crash down.
## Assessment:
To check your answers, you can register for [MAE 6286: Practical Numerical Methods with Python](http://openedx.seas.gwu.edu/courses/GW/MAE6286/2014_fall/about).
1. At time $t=3.2s$, what is the mass (in kg) of rocket propellant remaining in the rocket?
2. What is the maximum speed of the rocket in $\frac{m}{s}$?
At what time does this occur (in seconds)?
What is the altitude at this time (in meters)?
3. What is the rocket's maximum altitude during flight (in meters)? At what time (in seconds) does this occur?
4. At what time (in seconds) does the rocket impact the ground? What is the velocity of the rocket (in $\frac{m}{s}$) at time of impact?
## Derivation of the rocket equations
In case you are kind of confused about the rocket equations, here we show how to get to them.
We will start from the widely used [Reynolds' transport theorem](http://en.wikipedia.org/wiki/Reynolds_transport_theorem). For an extensive property $N$ in a volume $\Omega$, with corresponding intensive property $\eta = N/m$, the Reynolds' transport theorem reads
$$
\frac{DN}{Dt} = \frac{\partial}{\partial t} \int_\Omega \eta \rho {\rm dV} + \oint_{\partial \Omega} \eta \rho \mathbf{V} \cdot \mathbf{n} {\rm d} S
$$
where $m$ is mass, dV is an element of volume, $\mathbf{V}$ the velocity, $\mathbf{n}$ the unit normal vector pointing out of the control volume and $\rho$ density. We will use a control volume that encloses the rocket, and moves with it. If the velocity of the control volume is $\mathbf{V}_{\rm CV}$, we can write the velocity as $\mathbf{V} = \mathbf{V}_{\rm CV} + \mathbf{V}^\prime$, and the Reynolds' transport theorem becomes
$$
\frac{DN}{Dt} = \frac{\partial}{\partial t} \int_\Omega \eta \rho {\rm dV} + \oint_{\partial \Omega} \eta \rho \mathbf{V}_{\rm CV} \cdot \mathbf{n} {\rm d} S + \oint_{\partial \Omega} \eta \rho \mathbf{V}^\prime \cdot \mathbf{n} {\rm d} S.
$$
The second term in the right hand side of this equation can be rewritten using the divergence theorem
$$
\oint_{\partial \Omega} \eta \rho \mathbf{V}_{\rm CV} \cdot \mathbf{n} {\rm d} S = \int_{\Omega} \eta \rho \nabla \cdot \mathbf{V}_{\rm CV} {\rm dV},
$$
but for a non-deforming control volume, $\nabla \cdot \mathbf{V}_{\rm CV} = 0$, yielding
$$
\frac{DN}{Dt} = \frac{\partial}{\partial t} \int_\Omega \eta \rho {\rm dV} + \oint_{\partial \Omega} \eta \rho \mathbf{V}^\prime \cdot \mathbf{n} {\rm d} S.
$$
Replacing $N = m \mathbf{V}^\prime$, we get the momentum conservation equation with a moving non-deforming control volume:
$$
\frac{D(m\mathbf{V}^\prime)}{Dt} = \frac{\partial}{\partial t} \int_\Omega \mathbf{V}^\prime \rho {\rm dV} + \oint_{\partial \Omega} \mathbf{V}^\prime \rho (\mathbf{V}^\prime \cdot \mathbf{n}) {\rm d} S.
$$
For non-accelerating control volume, we can apply Newton's second law to compute the forces:
$$
\frac{D(m\mathbf{V}^\prime)}{Dt} = \frac{D(m\mathbf{V})}{Dt} = \frac{D\mathbf{p}}{Dt} = \sum \mathbf{F}.
$$
However, in this case the control volume is moving with the accelerating rocket, and we need to be a bit more careful. We can write $\frac{D(m\mathbf{V}^\prime)}{Dt} = \frac{D(m\mathbf{V})}{Dt} - \frac{D(m\mathbf{V}_{\rm CV})}{Dt}$, and considering the control volume is not deforming, $\frac{D(m\mathbf{V}_{\rm CV})}{Dt} = m\mathbf{a}_{\rm CV}$, where $\mathbf{a}_{\rm CV}$ is the acceleration of the control volume. On the other hand, $\mathbf{V}$ is taken from an inertial (non-accelerating) frame of reference, and Newton's second law is valid there, then
$$
\frac{D(m\mathbf{V}^\prime)}{Dt} = \frac{D(m\mathbf{V})}{Dt} - \frac{D(m\mathbf{V}_{\rm CV})}{Dt} = \sum \mathbf{F} - m \mathbf{a}_{\rm CV}.
$$
This way we get a momentum equation for an accelerating control volume:
$$
\sum \mathbf{F} - m \mathbf{a}_{\rm CV} = \frac{\partial}{\partial t} \int_\Omega \mathbf{V}^\prime \rho {\rm dV} + \oint_{\partial \Omega} \mathbf{V}^\prime \rho (\mathbf{V}^\prime \cdot \mathbf{n}) {\rm d} S.
$$
Let's examine this last equation to make it specific for the rocket problem. First, the rocket is subject to two forces: gravity and drag. This makes the sum of forces:
$$
\sum \mathbf{F} = (m_c + m_p) \mathbf{g} - \frac{1}{2} \rho_a \mathbf{v} |\mathbf{v}| A C_D.
$$
where $\mathbf{v}$ is the velocity of the rocket, equal to the velocity of the control volume $\mathbf{V}_{\rm CV}$, and $\rho_a$ is the density of air.
Now, let's deal with the volume integral. Check out Figure 1: the control volume encloses the whole rocket, and the border is slightly away from the nozzle. We can make a cut on the control volume to analyze it in two parts: the top part consisting of the container only, and the bottom part from the nozzle down. The top part is a large container that moves with the control volume, and the relative velocities of the propellant inside are negligible. In the bottom part, the fluid is moving quite fast, but the amount of fluid there is constant in time. These two considerations allow us to drop the volume integral.
#### Figure 1. Control volume of the rocket.
Done with the volume integral, now the surface integral. If we consider that the burnt propellant is coming out of the control volume with a constant velocity profile, we can rewrite the surface integral as
$$
\oint_{\partial \Omega} \mathbf{V}^\prime \rho_p (\mathbf{V}^\prime \cdot \mathbf{n}) {\rm d} S = \mathbf{V}^\prime \oint_{\partial \Omega} \rho_p (\mathbf{V}^\prime \cdot \mathbf{n}) {\rm d} S = \mathbf{V}^\prime \dot{m}_p = \mathbf{v}_e \dot{m}_p,
$$
where $\mathbf{v}_e$ is the exhaust velocity of the burnt propellant coming out of the nozzle with respect to the control volume, $\rho_p$ is the density of the propellant, and $\dot{m}_p$ is the mass flow rate of propellant. By saying that $\rho = \rho_p$ we are considering that the mass of air is negligible compared to the mass of propellant coming out of the nozzle.
Finally, the momentum equation for the rocket is
$$
(m_c + m_p) \mathbf{g} - \frac{1}{2} \rho_a \mathbf{v} |\mathbf{v}| A C_D - (m_c + m_p)\frac{{\rm d} \mathbf{v}}{{\rm d}t} = \mathbf{v}_e \dot{m}_p.
$$
In practical terms, we're only interested in the $y$ component. Vectors $\mathbf{g}$ and $\mathbf{v}_e$ are pointing down, which gives the scalar equation:
$$
-(m_c + m_p) g - \frac{1}{2} \rho_a v^2 A C_D - (m_c + m_p)\frac{{\rm d} v}{{\rm d}t} = -v_e \dot{m}_p.
$$
This is the equation you will work with for your assignment!
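As a starting point for the assignment, here is a minimal forward-Euler sketch of the two ODEs above. The parameter values come from the problem statement, but the burn-rate function `mdot_p(t)` is only a placeholder: its actual time profile is given by the figure above and must be filled in, and this sketch is not presented as the reference solution.

```python
import numpy as np

# parameters from the problem statement
m_s = 50.0        # rocket shell mass [kg]
g = 9.81          # gravitational acceleration [m/s^2]
rho = 1.091       # average air density [kg/m^3]
r = 0.5           # maximum radius [m]
A = np.pi * r**2  # maximum cross-sectional area [m^2]
v_e = 325.0       # exhaust speed [m/s]
C_D = 0.15        # drag coefficient
m_po = 100.0      # initial propellant mass [kg]

def mdot_p(t):
    """Placeholder burn rate [kg/s]; replace with the profile from the figure."""
    return 0.0

dt = 0.1                      # time step [s]
T = 60.0                      # total simulated time [s]; extend until crash down
N = int(T / dt) + 1
t = np.linspace(0.0, T, N)
h = np.zeros(N)               # altitude [m]
v = np.zeros(N)               # velocity [m/s]
m_p = m_po                    # remaining propellant mass [kg]

for n in range(N - 1):
    m = m_s + m_p
    # dv/dt from the momentum equation derived above
    dvdt = -g + (mdot_p(t[n]) * v_e - 0.5 * rho * v[n] * abs(v[n]) * A * C_D) / m
    h[n + 1] = h[n] + dt * v[n]               # Euler step for altitude
    v[n + 1] = v[n] + dt * dvdt               # Euler step for velocity
    m_p = max(m_p - dt * mdot_p(t[n]), 0.0)   # deplete the propellant
```

From the stored `h` and `v` arrays one can then read off the quantities asked for in the assessment (maximum speed, maximum altitude, time and velocity of impact).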
---
###### The cell below loads the style of the notebook.
```
from IPython.core.display import HTML
css_file = '../../styles/numericalmoocstyle.css'
HTML(open(css_file, "r").read())
```
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W3D1_RealNeurons/W3D1_Tutorial3.ipynb" target="_parent"></a>
# Neuromatch Academy: Week 3, Day 1, Tutorial 3
# Real Neurons: Synaptic transmission - Models of static and dynamic synapses
## Tutorial Objectives
Synapses connect neurons into neural networks or circuits. Specialized electrical synapses make direct, physical connections between neurons. In this tutorial, however, we will focus on chemical synapses, which are more common in the brain. These synapses do not physically join neurons. Instead, eliciting a spike in the presynaptic cell causes a chemical, or neurotransmitter, to be released into a small space between the neurons called the synaptic cleft. Once the chemical diffuses across that space, it causes changes in the membrane of the postsynaptic cell. In this tutorial, we will model chemical synaptic transmission and study some interesting effects produced by static synapses that do not change and dynamic ones that change their effects based on the spiking history of the presynaptic neurons.
First, we will start by writing code to simulate static synapses.
Next, we will extend the model to include synapses whose synaptic strength is dependent on the recent spike history: synapses can either progressively increase or decrease the size of their effects on the postsynaptic neuron, based on the recent firing rate of its pre-synaptic partners. This feature of synapses in the brain is called **Short-Term Plasticity** and causes synapses to undergo *Facilitation* or *Depression*.
Our goals for this tutorial are to:
- simulate static synapses and study how excitation and inhibition affect the patterns in the neurons' spiking output
- define mean- or fluctuation-driven regimes
- simulate short-term dynamics of synapses (facilitation and depression)
- study how a change in presynaptic firing history affects the synaptic weights (i.e., PSP amplitude)
# Setup
```python
# Imports
import matplotlib.pyplot as plt # import matplotlib
import numpy as np # import numpy
import time # import time
import ipywidgets as widgets # interactive display
from scipy.stats import pearsonr # import pearson correlation
```
```python
#@title Figure Settings
%matplotlib inline
fig_w, fig_h = (8, 6)
my_fontsize = 18
my_params = {'axes.labelsize': my_fontsize,
'axes.titlesize': my_fontsize,
'figure.figsize': [fig_w, fig_h],
'font.size': my_fontsize,
'legend.fontsize': my_fontsize-4,
'lines.markersize': 8.,
'lines.linewidth': 2.,
'xtick.labelsize': my_fontsize-2,
'ytick.labelsize': my_fontsize-2}
plt.rcParams.update(my_params)
my_layout = widgets.Layout()
```
```python
#@title Helper functions
def my_GWN(pars, mu, sig, myseed=False):
"""
Args:
pars : parameter dictionary
mu : noise baseline (mean)
sig : noise amplitute (standard deviation)
myseed : random seed. int or boolean
the same seed will give the same random number sequence
Returns:
I : Gaussian White Noise (GWN) input
"""
# Retrieve simulation parameters
dt, range_t = pars['dt'], pars['range_t']
Lt = range_t.size
# set random seed
# you can fix the seed of the random number generator so that the results
# are reliable. However, when you want to generate multiple realizations
# make sure that you change the seed for each new realization.
if myseed:
np.random.seed(seed=myseed)
else:
np.random.seed()
# generate GWN
# we divide here by 1000 to convert units to seconds.
I = mu + sig * np.random.randn(Lt) / np.sqrt(dt/1000.)
return I
def Poisson_generator(pars, rate, n, myseed=False):
"""
Generates poisson trains
Args:
pars : parameter dictionary
rate : noise amplitute [Hz]
n : number of Poisson trains
myseed : random seed. int or boolean
Returns:
pre_spike_train : spike train matrix, ith row represents whether
there is a spike in ith spike train over time
(1 if spike, 0 otherwise)
"""
# Retrieve simulation parameters
dt, range_t = pars['dt'], pars['range_t']
Lt = range_t.size
# set random seed
if myseed:
np.random.seed(seed=myseed)
else:
np.random.seed()
# generate uniformly distributed random variables
u_rand = np.random.rand(n, Lt)
# generate Poisson train
poisson_train = 1. * (u_rand<rate*dt/1000.)
return poisson_train
def my_illus_LIFSYN(pars, v_fmp, v):
"""
  Illustration of FMP and membrane voltage
Args:
pars : parameters dictionary
v_fmp : free membrane potential, mV
v : membrane voltage, mV
Returns:
plot of membrane voltage and FMP, alongside with the spiking threshold
and the mean FMP (dashed lines)
"""
plt.figure(figsize=(14., 5))
plt.plot(pars['range_t'], v_fmp, 'r', lw=1., label = 'Free mem. pot.', zorder=2)
plt.plot(pars['range_t'], v, 'b', lw=1., label = 'True mem. pot', zorder=1, alpha=0.7)
plt.axhline(-55, 0, 1, color='k', lw=2., ls='--',label = 'Spike Threshold', zorder=1)
plt.axhline(np.mean(v_fmp),0, 1, color='r', lw=2., ls='--',label = 'Mean Free Mem. Pot.', zorder=1)
plt.xlabel('Time (ms)')
plt.ylabel('V (mV)');
plt.legend(loc=[1.02, 0.68])
plt.show()
def my_illus_STD(Poi_or_reg=False, rate=20., U0=0.5, tau_d=100., tau_f=50., plot_out=True):
"""
Only for one presynaptic train
Args:
Poi_or_reg : Poisson or regular input spiking trains
rate : Rate of input spikes, Hz
U0 : synaptic release probability at rest
tau_d : synaptic depression time constant of x [ms]
tau_f : synaptic facilitation time constantr of u [ms]
plot_out : whether ot not to plot, True or False
Returns:
"""
T_simu = 10.0 * 1000/(1.0*rate) # 10 spikes in the time window
pars = default_pars(T=T_simu)
dt, range_t = pars['dt'], pars['range_t']
if Poi_or_reg:
# Poisson type spike train
pre_spike_train = Poisson_generator(pars, rate, n=1)
pre_spike_train = pre_spike_train.sum(axis=0)
else:
# Regular firing rate
isi_num = int((1e3/rate)/dt) # number of dt
pre_spike_train = np.zeros(len(pars['range_t']))
pre_spike_train[::isi_num] = 1.
u, R, g = dynamic_syn(g_bar=1.2, tau_syn=5., U0=U0, tau_d=tau_d, tau_f=tau_f,
pre_spike_train=pre_spike_train, dt=pars['dt'])
if plot_out:
plt.figure(figsize=(12, 6))
plt.subplot(2,2,1)
plt.plot(pars['range_t'], R, 'b', label='R')
plt.plot(pars['range_t'], u, 'r', label='u')
plt.legend(loc='best')
plt.xlim((0,pars['T']))
plt.ylabel(r'$R$ or $u$ (a.u)')
plt.subplot(2,2,3)
spT = pre_spike_train>0
t_sp = pars['range_t'][spT] #spike times
plt.plot(t_sp, 0.*np.ones(len(t_sp)), 'k|', ms=18, markeredgewidth=2)
plt.xlabel('Time (ms)');
plt.xlim((0,pars['T']))
plt.yticks([])
plt.title('Presynaptic spikes')
plt.subplot(1,2,2)
plt.plot(pars['range_t'], g, 'r', label='STP synapse')
plt.xlabel('Time (ms)')
plt.ylabel('g (nS)')
plt.xlim((0,pars['T']))
plt.tight_layout()
if not Poi_or_reg:
return g[isi_num], g[9*isi_num]
def plot_volt_trace(pars, v, sp):
"""
  Plot trajectory of membrane potential for a single neuron
Args:
pars : parameter dictionary
v : volt trajetory
sp : spike train
Returns:
figure of the membrane potential trajetory for a single neuron
"""
V_th = pars['V_th']
dt, range_t = pars['dt'], pars['range_t']
if sp.size:
sp_num = (sp/dt).astype(int)-1
v[sp_num] += 10
plt.plot(pars['range_t'], v, 'b')
plt.axhline(V_th, 0, 1, color='k', ls='--', lw=1.)
plt.xlabel('Time (ms)')
plt.ylabel('V (mV)')
```
In the `Helper Functions` cell:
- Gaussian white noise generator: `my_GWN(pars, mu, sig, myseed=False)`
- Poissonian spike train generator: `Poisson_generator(pars, rate, n, myseed=False)`
```python
#@title Default value function: `default_pars( **kwargs)`
def default_pars( **kwargs):
pars = {}
### typical neuron parameters###
pars['V_th'] = -55. # spike threshold [mV]
pars['V_reset'] = -75. #reset potential [mV]
pars['tau_m'] = 10. # membrane time constant [ms]
pars['g_L'] = 10. #leak conductance [nS]
pars['V_init'] = -65. # initial potential [mV]
pars['E_L'] = -75. #leak reversal potential [mV]
pars['tref'] = 2. # refractory time (ms)
### simulation parameters ###
pars['T'] = 400. # Total duration of simulation [ms]
pars['dt'] = .1 # Simulation time step [ms]
### external parameters if any ###
for k in kwargs:
pars[k] = kwargs[k]
pars['range_t'] = np.arange(0, pars['T'], pars['dt']) # Vector of discretized time points [ms]
return pars
```
## Static synapses
```python
#@title Video: Static and dynamic synapses
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='S82kACA5P0M', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```
Video available at https://youtube.com/watch?v=S82kACA5P0M
## Simulate synaptic conductance dynamics
The GWN input used in Tutorials 1 and 2 is quite unphysiological compared to the inputs that a real neuron receives. Synaptic input _in vivo_ consists of a mixture of **excitatory** neurotransmitters, which depolarize the cell and drive it towards spike threshold, and **inhibitory** neurotransmitters that hyperpolarize it, driving it away from spike threshold. Each of these chemicals causes specific ion channels on the postsynaptic neuron to open, resulting in a change in that neuron's conductance and therefore, the flow of current into or out of the cell.
This process can be modelled by assuming that the presynaptic neuron's spiking activity produces transient changes in the postsynaptic neuron's conductance ($g_{\rm syn}(t)$). Typically, the conductance transient is modelled as an exponential function.
Such conductance transients can be generated using a simple ordinary differential equation (ODE):
\begin{eqnarray}
\frac{dg_{\rm syn}(t)}{dt} &=& \bar{g}_{\rm syn} \sum_k \delta(t-t_k) -g_{\rm syn}(t)/\tau_{\rm syn}
\end{eqnarray}
where $\bar{g}_{\rm syn}$ is the maximum conductance elicited by each incoming spike -- this is often referred to as synaptic weight--and $\tau_{\rm syn}$ is the synaptic time constant. Note that the summation runs over all spikes received by the neuron at time $t_k$.
Ohm's law allows us to convert conductance changes into current as:
\begin{align}
I_{\rm syn}(t) = -g_{\rm syn}(t)(V(t)-E_{\rm syn}) \\
\end{align}
The reversal potential $E_{\rm syn}$ describes the direction of current flow and the excitatory or inhibitory nature of the synapse.
**Thus, incoming spikes are filtered by an exponential-shaped kernel, effectively low-pass filtering the input. In other words, synaptic input is not white noise but it is in fact colored noise, where the color (spectrum) of the noise is determined by the synaptic time constants of both excitatory and inhibitory synapses.**
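To see this low-pass filtering directly, one can integrate the conductance equation above with the forward Euler method for a single presynaptic Poisson train. This is only a small illustrative sketch; it reuses the `default_pars` and `Poisson_generator` helpers defined above, and the values of `g_bar` and `tau_syn` are arbitrary choices for plotting.
```python
pars = default_pars(T=200.)                      # 200 ms simulation
spk = Poisson_generator(pars, rate=20, n=1)[0]   # one presynaptic train at 20 Hz
g_bar, tau_syn = 1.0, 5.0                        # illustrative values [nS], [ms]
g_syn = np.zeros(spk.size)
for it in range(spk.size - 1):
  # forward Euler step of dg/dt = -g/tau_syn + g_bar * sum_k delta(t - t_k)
  g_syn[it + 1] = g_syn[it] - (pars['dt'] / tau_syn) * g_syn[it] + g_bar * spk[it + 1]

plt.plot(pars['range_t'], g_syn)
plt.xlabel('Time (ms)')
plt.ylabel(r'$g_{\rm syn}$ (nS)')
plt.show()
```
Each presynaptic spike produces a jump of size $\bar{g}_{\rm syn}$ followed by an exponential decay with time constant $\tau_{\rm syn}$.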
In a neuronal network, the total synaptic input current $I_{\rm syn}$ is the sum of both excitatory and inhibitory inputs. Assuming the total excitatory and inhibitory conductances received at time $t$ are $g_E(t)$ and $g_I(t)$, and their corresponding reversal potentials are $E_E$ and $E_I$, respectively, then the total synaptic current can be described as:
\begin{align}
I_{\rm syn}(V(t),t) = -g_E(t) (V-E_E) - g_I(t) (V-E_I). \\
\end{align}
Accordingly, the membrane potential dynamics of the LIF neuron under synaptic current drive become:
\begin{align}
\tau_m\frac{dV(t)}{dt} = -(V(t)-E_L) - \frac{g_E(t)}{g_L} (V(t)-E_E) - \frac{g_I(t)}{g_L} (V(t)-E_I) + \frac{I_{\rm inj}}{g_L}.\quad (2)
\end{align}
$I_{\rm inj}$ is an external current injected in the neuron which is under experimental control; it can be GWN, DC or anything else.
We will use Eq. (2) to simulate the conductance-based LIF neuron model below.
In the previous tutorials, we saw how the output of a single neuron (spike count/rate and spike time irregularity) changed when we stimulated the neuron with DC and GWN, respectively. Now, we are in a position to study how the neuron behaves when it is bombarded with both excitatory and inhibitory spike trains -- as happens *in vivo*.
What kind of input is a neuron receiving? When we do not know, we choose the simplest option. The simplest model of input spikes assumes that every input spike arrives independently of other spikes, i.e., we assume that the input is Poissonian.
### Simulate LIF neuron with conductance-based synapses
We are now ready to simulate a LIF neuron with conductance-based synaptic inputs! The following code defines the LIF neuron with synaptic input modelled as conductance transients.
```python
#@title Conductance-based LIF: `run_LIF_cond`
def run_LIF_cond(pars, I_inj, pre_spike_train_ex, pre_spike_train_in):
"""conductance-based LIF dynamics
  Args:
pars : parameter dictionary
I_inj : injected current [pA]. The injected current here can be a value or an array
pre_spike_train_ex : spike train input from presynaptic excitatory neuron
pre_spike_train_in : spike train input from presynaptic inhibitory neuron
Returns:
rec_spikes : spike times
rec_v : mebrane potential
gE : postsynaptic excitatory conductance
gI : postsynaptic inhibitory conductance
"""
# Retrieve parameters
V_th, V_reset = pars['V_th'], pars['V_reset']
tau_m, g_L = pars['tau_m'], pars['g_L']
V_init, E_L = pars['V_init'], pars['E_L']
gE_bar, gI_bar = pars['gE_bar'], pars['gI_bar']
VE, VI = pars['VE'], pars['VI']
tau_syn_E, tau_syn_I = pars['tau_syn_E'], pars['tau_syn_I']
tref = pars['tref']
dt, range_t = pars['dt'], pars['range_t']
Lt = range_t.size
# Initialize
tr = 0.
v = np.zeros(Lt)
v[0] = V_init
gE = np.zeros(Lt)
gI = np.zeros(Lt)
I = I_inj * np.ones(Lt) #ensure I has length Lt
if pre_spike_train_ex.max() == 0:
pre_spike_train_ex_total = np.zeros(Lt)
else:
pre_spike_train_ex_total = pre_spike_train_ex.sum(axis=0) * np.ones(Lt)
if pre_spike_train_in.max() == 0:
pre_spike_train_in_total = np.zeros(Lt)
else:
pre_spike_train_in_total = pre_spike_train_in.sum(axis=0) * np.ones(Lt)
# simulation
rec_spikes = [] # recording spike times
for it in range(Lt-1):
if tr > 0:
v[it] = V_reset
tr = tr-1
elif v[it] >= V_th: #reset voltage and record spike event
rec_spikes.append(it)
v[it] = V_reset
tr = tref/dt
#update the synaptic conductance
gE[it+1] = gE[it] - (dt/tau_syn_E)*gE[it] + gE_bar*pre_spike_train_ex_total[it+1]
gI[it+1] = gI[it] - (dt/tau_syn_I)*gI[it] + gI_bar*pre_spike_train_in_total[it+1]
#calculate the increment of the membrane potential
dv = (-(v[it]-E_L) - (gE[it+1]/g_L)*(v[it]-VE) - \
(gI[it+1]/g_L)*(v[it]-VI) + I[it]/g_L) * (dt/tau_m)
#update membrane potential
v[it+1] = v[it] + dv
rec_spikes = np.array(rec_spikes) * dt
return v, rec_spikes, gE, gI
```
#### Exercise 1: Measure the Mean free membrane potential
Let's simulate the conductance-based LIF neuron with presynaptic spike trains generated by a `Poisson_generator` with rate 10 Hz for both excitatory and inhibitory inputs. Here, we choose 80 excitatory presynaptic spike trains and 20 inhibitory ones.
Previously, we've already learned that CV$_{\rm ISI}$ can describe the irregularity of the output spike pattern. Now, we will introduce the **Free Membrane Potential (FMP)**, which is the membrane potential of the neuron when its spike threshold is removed. Although this is completely artificial, calculating this quantity allows us to get an idea of how strong the input is. We are mostly interested in knowing the mean and standard deviation (std.) of the FMP.
```python
# Exercise 1
# To complete the exercise, uncomment the code and fill the missing parts (...)
pars = default_pars(T=1000.)
# Add parameters
pars['gE_bar'] = 2.4 # [nS]
pars['VE'] = 0. # [mV] excitatory reversal potential
pars['tau_syn_E'] = 2.4 # [ms]
pars['gI_bar'] = 3. # [nS]
pars['VI'] = -80. # [mV] inhibitory reversal potential
pars['tau_syn_I'] = 5. # [ms]
# generate presynaptic spike trains
pre_spike_train_ex = Poisson_generator(pars, rate=10, n=80)
pre_spike_train_in = Poisson_generator(pars, rate=10, n=20)
# simulate conductance-based LIF model
v, rec_spikes, gE, gI = run_LIF_cond(pars, 0, pre_spike_train_ex, pre_spike_train_in)
dt, range_t = pars['dt'], pars['range_t']
if rec_spikes.size:
sp_num = (rec_spikes/dt).astype(int)-1
v[sp_num] = 10 # draw nicer spikes
####################################################################
## TODO for students: Try to measure the free membrane potential #
####################################################################
# In order to measure the free membrane potential, first,
# you should prevent the firing of the LIF neuron
# How to prevent a LIF neuron from firing? Increase the threshold pars['V_th'].
# pars['V_th'] = ...
# v_fmp, _, _, _ = ...
# comment this out when you've filled
# raise NotImplementedError("Student exercise: measure the FMP")
# uncomment when you have filled in the exercise
# my_illus_LIFSYN(pars, v_fmp, v)
```
```python
# to_remove solutions
pars = default_pars(T=1000.)
#Add parameters
pars['gE_bar'] = 2.4 # [nS]
pars['VE'] = 0. # [mV] excitatory reversal potential
pars['tau_syn_E'] = 2. # [ms]
pars['gI_bar'] = 2.4 # [nS]
pars['VI'] = -80. # [mV] inhibitory reversal potential
pars['tau_syn_I'] = 5. # [ms]
#generate presynaptic spike trains
pre_spike_train_ex = Poisson_generator(pars, rate=10, n=80)
pre_spike_train_in = Poisson_generator(pars, rate=10, n=20)
# simulate conductance-based LIF model
v, rec_spikes, gE, gI = run_LIF_cond(pars, 0, pre_spike_train_ex, pre_spike_train_in)
dt, range_t = pars['dt'], pars['range_t']
if rec_spikes.size:
sp_num = (rec_spikes/dt).astype(int)-1
v[sp_num] = 10 # draw nicer spikes
#measure FMP
pars['V_th'] = 1e3
v_fmp, _, _, _ = run_LIF_cond(pars, 0, pre_spike_train_ex, pre_spike_train_in)
# plotting
with plt.xkcd():
my_illus_LIFSYN(pars, v_fmp, v)
```
### Parameter Exploration: different ratio of excitation and inhibition
In the following, we can investigate how varying the ratio of excitatory to inhibitory inputs changes the firing rate and the spike time regularity (see the output text).
To change both the excitatory and inhibitory inputs, we will vary their firing rates. *However, if you wish, you can vary the strength and/or the number of these connections as well.*
```python
#@title Conductance-based LIF Explorer with different E/I input
my_layout.width = '450px'
@widgets.interact(
inh_rate = widgets.FloatSlider(20., min=10., max=60., step=5., layout=my_layout),
exc_rate = widgets.FloatSlider(10., min=2., max=20., step=2., layout=my_layout)
)
def EI_isi_regularity(exc_rate, inh_rate):
pars = default_pars(T=1000.)
#Add parameters
pars['gE_bar'] = 3. # [nS]
pars['VE'] = 0. # [mV] excitatory reversal potential
pars['tau_syn_E'] = 2. # [ms]
pars['gI_bar'] = 3. # [nS]
pars['VI'] = -80. # [mV] inhibitory reversal potential
pars['tau_syn_I'] = 5. # [ms]
pre_spike_train_ex = Poisson_generator(pars, rate=exc_rate, n=80)
pre_spike_train_in = Poisson_generator(pars, rate=inh_rate, n=20) # 4:1
# Lets first simulate a neuron with identical input but with no spike threshold
# by setting the threshold to a very high value
# so that we can look at the free membrane potential
pars['V_th'] = 1e3
v_fmp, rec_spikes, gE, gI = run_LIF_cond(pars, 0, pre_spike_train_ex, pre_spike_train_in)
  # Now simulate a LIF neuron with a regular spike threshold
pars['V_th'] = -55.
v, rec_spikes, gE, gI = run_LIF_cond(pars, 0, pre_spike_train_ex, pre_spike_train_in)
dt, range_t = pars['dt'], pars['range_t']
if rec_spikes.size:
sp_num = (rec_spikes/dt).astype(int)-1
v[sp_num] = 10 #draw nicer spikes
spike_rate = 1e3*len(rec_spikes)/pars['T']
#print('Spike rate = %.3f (sp/s)' % spike_rate)
cv_isi = 0.
if len(rec_spikes)>3:
isi = np.diff(rec_spikes)
cv_isi = np.std(isi)/np.mean(isi)
#print('CV ISI = %.3f' % (cv_isi))
#print('Mean of Free Mem Pot = %.3f' % (np.mean(v_fmp)))
#print('STD of Free Mem Pot = %.3f' % (np.std(v_fmp)))
print('\n')
plt.figure(figsize=(15., 10))
plt.subplot(2,1,1)
plt.text(500, -35, 'Spike rate = %.3f (sp/s), Mean of Free Mem Pot = %.3f' % (spike_rate, np.mean(v_fmp)),
fontsize=16, fontweight='bold', horizontalalignment='center', verticalalignment='bottom')
plt.text(500, -38.5, 'CV ISI = %.3f, STD of Free Mem Pot = %.3f' % (cv_isi, np.std(v_fmp)),
fontsize=16, fontweight='bold', horizontalalignment='center', verticalalignment='bottom')
plt.plot(pars['range_t'], v_fmp, 'r', lw=1., label = 'Free mem. pot.', zorder=2)
plt.plot(pars['range_t'], v, 'b', lw=1., label = 'mem. pot with spk thr', zorder=1, alpha=0.7)
plt.axhline(pars['V_th'], 0, 1, color='k', lw=1., ls='--',label = 'Spike Threshold', zorder=1)
plt.axhline(np.mean(v_fmp),0, 1, color='r', lw=1., ls='--',label = 'Mean Free Mem. Pot.', zorder=1)
plt.ylim(-76, -39)
plt.xlabel('Time (ms)')
plt.ylabel('V (mV)')
plt.legend(loc=[1.02, 0.68])
plt.subplot(2,2,3)
plt.plot(pars['range_t'][::3], gE[::3], 'r', lw=1.)
plt.xlabel('Time (ms)')
plt.ylabel(r'$g_E$ (nS)')
plt.subplot(2,2,4)
plt.plot(pars['range_t'][::3], gI[::3], 'b', lw=1.)
plt.xlabel('Time (ms)')
plt.ylabel(r'$g_I$ (nS)')
plt.tight_layout()
#_ = widgets.interact(EI_isi_regularity, inh_rate = (10., 60., 5.), exc_rate = (5.,20.,2))
```
### Mean-driven and Fluctuation-driven regimes
If we look at the figure above, we note that when the mean FMP is above the spike threshold, the fluctuations in the FMP are rather small and the neuron spikes in a fairly regular fashion. This regime, in which the mean FMP is above the spike threshold, is called the **mean-driven regime**.
When the mean FMP is below the spike threshold, the fluctuations in the FMP are large and the neuron's spikes are driven by these fluctuations. As a consequence, the neuron spikes in a more Poisson-like fashion. This regime, in which the mean FMP is below the spike threshold and spikes are driven by the fluctuations, is called the **fluctuation-driven regime**.
#### Think!
- How much can you increase the spike pattern variability? Under what condition(s) might the neuron also respond with Poisson-type spikes? Note that we injected Poisson-type spikes.
- Link to the balance of excitation and inhibition. One definition of excitation-inhibition balance is that the mean free membrane potential remains constant as excitatory and inhibitory input rates are increased. What do you think happens to the neuron's firing rate as we change excitatory and inhibitory rates while keeping the neuron in balance? See [Kuhn, Aertsen, and Rotter (2004)](https://www.jneurosci.org/content/jneuro/24/10/2345.full.pdf) for much more on this.
## Short-term synaptic plasticity
Short-term plasticity (STP) is a phenomenon in which synaptic efficacy changes over time in a way that reflects the history of presynaptic activity. Two types of STP, with opposite effects on synaptic efficacy, have been experimentally observed. They are known as Short-Term Depression (STD) and Short-Term Facilitation (STF).
The mathematical model of STP is based on the concept of a limited pool of synaptic resources available for transmission ($R$), such as, for example, the overall amount of synaptic vesicles at the presynaptic terminals. The amount of presynaptic resource changes in a dynamic fashion depending on the recent history of spikes.
Following a presynaptic spike, (i) the fraction $u$ (release probability) of the available pool to be utilized increases due to spike-induced calcium influx to the presynaptic terminal, after which (ii) $u$ is consumed to increase the post-synaptic conductance. Between spikes, $u$ decays back to zero with time constant $\tau_f$ and $R$ recovers to 1 with time constant $\tau_d$. In summary, the dynamics of excitatory (subscript E) STP is given by:
\begin{eqnarray}
&& \frac{du_E}{dt} &= -\frac{u_E}{\tau_f} + U_0(1-u_E^-)\delta(t-t_{\rm sp}) \\[.5mm]
&& \frac{dR_E}{dt} &= \frac{1-R_E}{\tau_d} - u_E^+ R_E^- \delta(t-t_{\rm sp}) \qquad (6)\\[.5mm]
&& \frac{dg_E(t)}{dt} &= -\frac{g_E}{\tau_E} + \bar{g}_E u_E^+ R_E^- \sum_k \delta(t-t_{\rm k}),
\end{eqnarray}
where $U_0$ is a constant determining the increment of $u$ produced by a spike. $u_E^-$ and $R_E^-$ denote the corresponding values just before the spike arrives, whereas $u_E^+$ refers to the moment right after the spike. $\bar{g}_E$ denotes the maximum excitatory conductance, and $g_E(t)$ is obtained by summing over all spike times $k$. Similarly, one can obtain the dynamics of inhibitory STP.
The interplay between the dynamics of $u$ and $R$ determines whether the joint effect of $uR$ is dominated by depression or facilitation. In the parameter regime of $\tau_d \gg \tau_f$ and large $U_0$, an initial spike incurs a large drop in $R$ that takes a long time to recover; therefore the synapse is STD-dominated. In the regime of $\tau_d \ll \tau_f$ and small $U_0$, the synaptic efficacy is increased gradually by spikes, and consequently, the synapse is STF-dominated. This phenomenological model successfully reproduces the kinetic dynamics of depressed and facilitated synapses observed in many cortical areas.
### Exercise 2: Compute $du$, $dR$ and $dg$
As we learned in several previous tutorials, the Euler numerical integration method involves the calculation of each derivative at step $n$:
\begin{eqnarray}
du_E &=& -\frac{u_E[t]}{\tau_f} dt + U_0(1-u_E[t])\cdot \text{sp_or_not[t+dt]} \\
dR_E &=& \frac{1-R_E[t]}{\tau_d} dt - u_E[t+dt]R_E[t]\cdot \text{sp_or_not[t+dt]} \\
dg_E &=& -\frac{g_E[t]}{\tau_{E}} dt + \bar{g}_Eu_E[t+dt]R_E[t]\cdot \text{sp_or_not[t+dt]}
\end{eqnarray}
where $\text{sp_or_not}=1$ if there's a spike in the time window $dt$, and $\text{sp_or_not}=0$ otherwise. In addition, note that any spike train generated by our `Poisson_generator` is binary. Then, the values are updated:
\begin{aligned}
\begin{eqnarray}
u_E[t+dt] = u_E[t] + du_E\\
R_E[t+dt] = R_E[t] + dR_E\\
g_E[t+dt] = g_E[t] + dg_E
\end{eqnarray}
\end{aligned}
Similarly, one can obtain the dynamics of inhibitory conductance.
```python
# Exercise 2
def dynamic_syn(g_bar, tau_syn, U0, tau_d, tau_f, pre_spike_train, dt):
"""
Short-term synaptic plasticity
Args:
g_bar : synaptic conductance strength
tau_syn : synaptic time constant [ms]
U0 : synaptic release probability at rest
tau_d : synaptic depression time constant of x [ms]
tau_f : synaptic facilitation time constant of u [ms]
pre_spike_train : total spike train (number) input
from presynaptic neuron
dt : time step [ms]
Returns:
u : usage of releasable neurotransmitter
R : fraction of synaptic neurotransmitter resources available
g : postsynaptic conductance
"""
Lt = len(pre_spike_train)
# Initialize
u = np.zeros(Lt)
R = np.zeros(Lt)
R[0] = 1.
g = np.zeros(Lt)
# simulation
for it in range(Lt-1):
#########################################################################
## TODO for students: compute du, dx and dg, remove NotImplementedError #
#########################################################################
# Note pre_spike_train[i] is binary, which is sp_or_not in the $i$th timebin
# comment this out when you've finished filling this out.
raise NotImplementedError("Student exercise: compute the STP dynamics")
# du = ...
u[it+1] = u[it] + du
# dR = ...
R[it+1] = R[it] + dR
# dg = ...
g[it+1] = g[it] + dg
return u, R, g
# Uncomment this line after completing the dynamic_syn function
# _ = my_illus_STD(Poi_or_reg=False, rate=20., U0=0.5, tau_d=100., tau_f=50.) # Poi_or_reg=False:regular spike train
```
```python
# to_remove solution
def dynamic_syn(g_bar, tau_syn, U0, tau_d, tau_f, pre_spike_train, dt):
"""
Short-term synaptic plasticity
Args:
g_bar : synaptic conductance strength
tau_syn : synaptic time constant [ms]
U0 : synaptic release probability at rest
tau_d : synaptic depression time constant of x [ms]
tau_f : synaptic facilitation time constant of u [ms]
pre_spike_train : total spike train (number) input
from presynaptic neuron
dt : time step [ms]
Returns:
u : usage of releasable neurotransmitter
R : fraction of synaptic neurotransmitter resources available
g : postsynaptic conductance
"""
Lt = len(pre_spike_train)
# Initialize
u = np.zeros(Lt)
R = np.zeros(Lt)
R[0] = 1.
g = np.zeros(Lt)
# simulation
for it in range(Lt-1):
du = - (dt/tau_f)*u[it] + U0 * (1.0-u[it]) * pre_spike_train[it+1]
u[it+1] = u[it] + du
dR = (dt/tau_d)*(1.0-R[it]) - u[it+1]*R[it] * pre_spike_train[it+1]
R[it+1] = R[it] + dR
dg = - (dt/tau_syn)*g[it] + g_bar*R[it]*u[it+1] * pre_spike_train[it+1]
g[it+1] = g[it] + dg
return u, R, g
with plt.xkcd():
_ = my_illus_STD(Poi_or_reg=False, rate=20., U0=0.5, tau_d=100., tau_f=50.)
```
### Parameter Exploration
Below is an interactive demo that shows how short-term synaptic depression (STD) changes for different firing rates of the presynaptic spike train, and how the amplitude of the synaptic conductance $g$ changes with every incoming spike until it reaches its stationary state.
Does it matter if the neuron fires in a Poisson manner, rather than regularly?
**Note:** `Poi_or_Reg=1`: for *Poisson-type* and `Poi_or_Reg=0`: for *regular* presynaptic spikes.
```python
#@title STD Explorer with input rate
def my_STD_diff_rate(rate, Poi_or_Reg):
_ = my_illus_STD(Poi_or_reg=Poi_or_Reg, rate=rate)
_ = widgets.interact(my_STD_diff_rate, rate = (10., 100.1, 5.), Poi_or_Reg = (0, 1, 1))
```
### Synaptic depression and presynaptic firing rate
Once, I asked an experimentalist about the experimental values of the PSP amplitude produced by a connection between two neocortical excitatory neurons. She asked: "At what frequency?" I was confused, but you will understand her question now that you know that PSP amplitude depends on the spike history, and therefore on the spike rate of presynaptic neuron.
Here, we will study how the ratio of the synaptic conductances corresponding to the first and the tenth spike changes as a function of the presynaptic firing rate.
For computational efficiency, we assume that the presynaptic spikes are regular. This assumption means that we do not have to run multiple trials.
```python
#@title STD conductance ratio with different input rate
# Regular firing rate
input_rate = np.arange(5., 40.1, 5.)
g_1 = np.zeros(len(input_rate)) # record the PSP at the 1st spike
g_2 = np.zeros(len(input_rate)) # record the PSP at the 10th spike
for ii in range(len(input_rate)):
g_1[ii], g_2[ii] = my_illus_STD(Poi_or_reg=False, rate=input_rate[ii], \
plot_out=False, U0=0.5, tau_d=100., tau_f=50)
plt.figure(figsize=(11, 4.5))
plt.subplot(1,2,1)
plt.plot(input_rate,g_1,'m-o',label = '1st Spike')
plt.plot(input_rate,g_2,'c-o',label = '10th Spike')
plt.xlabel('Rate [Hz]')
plt.ylabel('Conductance [nS]')
plt.legend()
plt.subplot(1,2,2)
plt.plot(input_rate,g_2/g_1,'b-o',)
plt.xlabel('Rate [Hz]')
plt.ylabel(r'Conductance ratio $g_{10}/g_{1}$')
plt.tight_layout()
```
### Parameter Exploration of short-term synaptic facilitation (STF)
Below, we see an illustration of a short-term facilitation example. Take note of the change in the synaptic variables: `U_0`, `tau_d`, and `tau_f`.
- for STD, `U0=0.5, tau_d=100., tau_f=50.`
- for STF, `U0=0.2, tau_d=100., tau_f=750.`
Also notice how the input rate affects the STF.
```python
#@title STF Explorer with input rate
def my_STD_diff_rate(rate, Poi_or_Reg):
_ = my_illus_STD(Poi_or_reg=Poi_or_Reg, rate=rate, U0=0.2, tau_d=100., tau_f=750.)
_ = widgets.interact(my_STD_diff_rate, rate = (4., 40.1, 2.), Poi_or_Reg = (0, 1, 1))
```
### Synaptic facilitation and presynaptic firing rate
Here, we will study how the ratio of the synaptic conductance corresponding to the $1^{st}$ and $10^{th}$ spike changes as a function of the presynaptic rate.
```python
#@title STF conductance ratio with different input rate
# Regular firing rate
input_rate = np.arange(2., 40.1, 2.)
g_1 = np.zeros(len(input_rate)) # record the PSP at the 1st spike
g_2 = np.zeros(len(input_rate)) # record the PSP at the 10th spike
for ii in range(len(input_rate)):
g_1[ii], g_2[ii] = my_illus_STD(Poi_or_reg=False, rate=input_rate[ii], \
plot_out=False, U0=0.2, tau_d=100., tau_f=750.)
plt.figure(figsize=(11, 4.5))
plt.subplot(1,2,1)
plt.plot(input_rate,g_1,'m-o',label = '1st Spike')
plt.plot(input_rate,g_2,'c-o',label = '10th Spike')
plt.xlabel('Rate [Hz]')
plt.ylabel('Conductance [nS]')
plt.legend()
plt.subplot(1,2,2)
plt.plot(input_rate,g_2/g_1,'b-o',)
plt.xlabel('Rate [Hz]')
plt.ylabel(r'Conductance ratio $g_{10}/g_{1}$')
plt.tight_layout()
```
### Think!
Why does the ratio of the first and second-to-last spike conductances change in a non-monotonic fashion for synapses with STF, even though it decreases monotonically for synapses with STD?
### Conductance-based LIF with STP
Previously, we looked only at how presynaptic firing rate affects the presynaptic resource availability and thereby the synaptic conductance. It is straightforward to imagine that, while the synaptic conductances are changing, the output of the postsynaptic neuron will change as well.
So, let's put the STP on synapses impinging on an LIF neuron and see what happens.
```python
#@title `run_LIF_cond_STP`
def run_LIF_cond_STP(pars, I_inj, pre_spike_train_ex, pre_spike_train_in):
'''
conductance-based LIF dynamics
Expects:
pars : parameter dictionary
I_inj : injected current [pA]. The injected current here can be a value or an array
pre_spike_train_ex : spike train input from presynaptic excitatory neuron (binary)
pre_spike_train_in : spike train input from presynaptic inhibitory neuron (binary)
Returns:
v : membrane potential
rec_spikes : spike times
uE, RE, gE : STP variables and conductances of the excitatory synapses
uI, RI, gI : STP variables and conductances of the inhibitory synapses
'''
# Retrieve parameters
V_th, V_reset = pars['V_th'], pars['V_reset']
tau_m, g_L = pars['tau_m'], pars['g_L']
V_init, V_L = pars['V_init'], pars['E_L']
gE_bar, gI_bar = pars['gE_bar'], pars['gI_bar']
U0E, tau_dE, tau_fE = pars['U0_E'], pars['tau_d_E'], pars['tau_f_E']
U0I, tau_dI, tau_fI = pars['U0_I'], pars['tau_d_I'], pars['tau_f_I']
VE, VI = pars['VE'], pars['VI']
tau_syn_E, tau_syn_I = pars['tau_syn_E'], pars['tau_syn_I']
tref = pars['tref']
dt, range_t = pars['dt'], pars['range_t']
Lt = range_t.size
nE = pre_spike_train_ex.shape[0]
nI = pre_spike_train_in.shape[0]
# compute conductance Excitatory synapses
uE = np.zeros((nE, Lt))
RE = np.zeros((nE, Lt))
gE = np.zeros((nE, Lt))
for ie in range(nE):
u, R, g = dynamic_syn(gE_bar, tau_syn_E, U0E, tau_dE, tau_fE,
pre_spike_train_ex[ie, :], dt)
uE[ie, :], RE[ie, :], gE[ie, :] = u, R, g
gE_total = gE.sum(axis=0)
# compute conductance Inhibitory synapses
uI = np.zeros((nI, Lt))
RI = np.zeros((nI, Lt))
gI = np.zeros((nI, Lt))
for ii in range(nI):
u, R, g = dynamic_syn(gI_bar, tau_syn_I, U0I, tau_dI, tau_fI,
pre_spike_train_in[ii, :], dt)
uI[ii, :], RI[ii, :], gI[ii, :] = u, R, g
gI_total = gI.sum(axis=0)
# Initialize
v = np.zeros(Lt)
v[0] = V_init
I = I_inj * np.ones(Lt) #ensure I has length Lt
# simulation
rec_spikes = [] # recording spike times
tr = 0.
for it in range(Lt-1):
if tr >0:
v[it] = V_reset
tr = tr-1
elif v[it] >= V_th: #reset voltage and record spike event
rec_spikes.append(it)
v[it] = V_reset
tr = tref/dt
#calculate the increment of the membrane potential
dv = (-(v[it]-V_L) - (gE_total[it+1]/g_L)*(v[it]-VE) - \
(gI_total[it+1]/g_L)*(v[it]-VI) + I[it]/g_L) * (dt/tau_m)
#update membrane potential
v[it+1] = v[it] + dv
rec_spikes = np.array(rec_spikes) * dt
return v, rec_spikes, uE, RE, gE, uI, RI, gI
```
### Simulation of a postsynaptic neuron with STP synapses driven by Poisson type spike trains
Here we have assumed that both excitatory and inhibitory synapses show short-term depression. Change the nature of synapses and study how spike pattern variability changes.
In the interactive demo, `tau_d = 500*tau_ratio (ms)` and `tau_f = 300*tau_ratio (ms)`.
You should compare the output of this neuron with what you observed in the previous tutorial when synapses were assumed to be static.
_Note: it will take a slightly longer time to run each case._
```python
#@title LIF_STP Explorer
def LIF_STP(tau_ratio):
pars = default_pars(T=1000)
pars['gE_bar'] = 1.2*4 #[nS]
pars['VE'] = 0. #[mV]
pars['tau_syn_E'] = 5. #[ms]
pars['gI_bar'] = 1.6*4 #[nS]
pars['VI'] = -80. #[mV]
pars['tau_syn_I'] = 10. #[ms]
# here we assume that both Exc and Inh synapses have synaptic depression
pars['U0_E'] = 0.45
pars['tau_d_E'] = 500. * tau_ratio #[ms]
pars['tau_f_E'] = 300. * tau_ratio #[ms]
pars['U0_I'] = 0.45
pars['tau_d_I'] = 500. * tau_ratio #[ms]
pars['tau_f_I'] = 300. * tau_ratio #[ms]
pre_spike_train_ex = Poisson_generator(pars, rate=15, n=80)
pre_spike_train_in = Poisson_generator(pars, rate=15, n=20) # 4:1
v, rec_spikes, uE, RE, gE, uI, RI, gI \
= run_LIF_cond_STP(pars, 0, pre_spike_train_ex, pre_spike_train_in)
t_plot_range = pars['range_t']>200
plt.figure(figsize=(11., 7))
plt.subplot(2,1,1)
plot_volt_trace(pars, v, rec_spikes)
plt.subplot(2,2,3)
plt.plot(pars['range_t'][t_plot_range], gE.sum(axis=0)[t_plot_range], 'r')
plt.xlabel('Time (ms)')
plt.ylabel(r'$g_E$ (nS)')
plt.subplot(2,2,4)
plt.plot(pars['range_t'][t_plot_range], gI.sum(axis=0)[t_plot_range], 'b')
plt.xlabel('Time (ms)')
plt.ylabel(r'$g_I$ (nS)')
plt.tight_layout()
_ = widgets.interact(LIF_STP, tau_ratio = (0.2, 1.1, 0.2))
```
### Optional
Vary the parameters of the above simulation and observe spiking pattern of the postsynaptic neuron.
Will the neuron show higher irregularity if the synapses have STP? If yes, what should be the nature of STP on static and dynamic synapses, respectively?
**Task**: Calculate the CV$_{\rm ISI}$ for different `tau_ratio` values after simulating the LIF neuron with STP (Hint: `run_LIF_cond_STP` helps you quantify the irregularity).
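One possible starting point is sketched below. This is not part of the original tutorial: it assumes the helper functions `default_pars`, `Poisson_generator`, and `run_LIF_cond_STP` defined above are in scope, the parameter values simply mirror the interactive demo, and the grid of `tau_ratio` values is an arbitrary illustrative choice.
```python
# Hedged sketch: CV_ISI as a function of tau_ratio (parameter values follow the demo above)
import numpy as np

def cv_isi(spike_times):
    """Coefficient of variation of the inter-spike intervals."""
    if len(spike_times) < 3:
        return np.nan                      # too few spikes to estimate variability
    isi = np.diff(spike_times)
    return isi.std() / isi.mean()

for tau_ratio in (0.2, 0.4, 0.6, 0.8, 1.0):
    pars = default_pars(T=1000)
    pars['gE_bar'], pars['VE'], pars['tau_syn_E'] = 1.2*4, 0., 5.
    pars['gI_bar'], pars['VI'], pars['tau_syn_I'] = 1.6*4, -80., 10.
    pars['U0_E'] = pars['U0_I'] = 0.45
    pars['tau_d_E'] = pars['tau_d_I'] = 500. * tau_ratio  # [ms]
    pars['tau_f_E'] = pars['tau_f_I'] = 300. * tau_ratio  # [ms]
    pre_ex = Poisson_generator(pars, rate=15, n=80)
    pre_in = Poisson_generator(pars, rate=15, n=20)       # 4:1 ratio as in the demo
    v, rec_spikes, *_ = run_LIF_cond_STP(pars, 0, pre_ex, pre_in)
    print('tau_ratio = %.1f  ->  CV_ISI = %.2f' % (tau_ratio, cv_isi(rec_spikes)))
```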
### Functional implications of short-term dynamics of synapses
As you have seen above, if the firing rate is stationary, the synaptic conductance quickly reaches a fixed point. On the other hand, if the firing rate transiently changes, synaptic conductance will vary -- even if the change is as short as a single inter-spike-interval. Such small changes can be observed in a single neuron when input spikes are regular and periodic. If the input spikes are Poissonian then one may have to perform an average over several neurons.
_Come up with other functions that short-term dynamics of synapses can be used to implement and implement them._
## Summary
Congratulations! You have just finished Tutorial 3 (one to go!). Here, we saw how to model conductance-based synapses and also how to incorporate their short-term dynamics.
We covered:
- static synapses and how excitation and inhibition affect the neuronal output
- mean- or fluctuation-driven regimes
- short-term dynamics of synapses (both facilitation and depression)
Finally, we incorporated all the aforementioned tools to study how a change in presynaptic firing history affects the synaptic weights!
Next, you will learn about another form of synaptic plasticity based on both pre- and postsynaptic spike times.
# Solution for compulsory exercise 2 - TKT4196 2021
```python
import numpy as np
import scipy as sp
from scipy.stats import multivariate_normal as mvn
import matplotlib.pyplot as plt
import scipy.stats
import time
import sympy as sym
fontsizes=18
plt.rcParams.update({'font.size': fontsizes})
plt.rcParams.update({"font.family": "serif"})
plt.rcParams.update({"mathtext.fontset" : "cm"})
plt.rcParams.update({'font.serif': 'Times New Roman'})
plt.close('all')
```
Save all basic values given, awaiting transformation to u-space
```python
# =============================================================================
# INPUT
mu_dt = 1.12
V_dt = 0.32
std_dt = mu_dt * V_dt
mu_df = 1.1
V_df= 0.2
std_df = mu_df * V_df
mu_N1 = 40
V_N1 = 0.3
std_N1 = mu_N1 * V_N1
mu_N2 = 60
V_N2 = 0.2
std_N2 = mu_N2 * V_N2
mu_G1 = 400000 #N
V_G1 = 0.1
std_G1 = mu_G1 * V_G1 #N
mu_Q = 500000 #N
V_Q = 0.35
std_Q = mu_Q * V_Q #N
l1 = 5000 #mm
rho_rc = 25*1e-6 #N/mm^3
A = lambda d: np.pi/4 * d**2 # Defining function for area of the pile [mm2]
C = lambda d: np.pi * d # Defining function for circumference of the pile [mm]
G2 = lambda h,d: A(d) * h * rho_rc # Defining function for self weight of the pile [N]
k1 = 0.01 # constant in front of N_1 in c_1 [N/mm2]
k2 = 0.005 # constant in front of N_2 in c_2 [N/mm2]
```
Function calculating parameters from moments as defined in Exercise 3:
```python
def addparams(X):
moments = X['moments']
if X['type'] == 'Normal':
theta = X['moments']
if X['type'] == 'Lognormal':
sigma_ln = np.sqrt(np.log(moments[1]**2/moments[0]**2+1))
mu_ln = np.log(moments[0])-0.5*sigma_ln**2
theta = [mu_ln,sigma_ln]
if X['type'] == 'Gumbel':
a = np.pi/np.sqrt(6)/moments[1]
b = moments[0] - 0.5772/a
theta = [a,b]
X['theta']=theta
```
### Transformed input
We use dictionaries here to allow type to be defined and utilised
```python
#TRANSFORMED INPUT
G1 = {}
G1['type'] = "Gumbel"
G1['moments'] = [mu_G1,std_G1]
addparams(G1)
Q = {}
Q['type'] = "Gumbel"
Q['moments'] = [mu_Q,std_Q]
addparams(Q)
N1 = {}
N1['type'] = "Lognormal"
N1['moments'] = [mu_N1,std_N1]
addparams(N1)
N2 = {}
N2['type'] = "Lognormal"
N2['moments'] = [mu_N2,std_N2]
addparams(N2)
dt = {}
dt['type'] = "Normal"
dt['moments'] = [mu_dt,std_dt]
addparams(dt)
df = {}
df['type'] = "Normal"
df['moments'] = [mu_df,std_df]
addparams(df)
```
Limit state function in the X-space
$$ g_X(R_t,R_s,G_1,G_2,Q) = (R_t + R_s) - (G_1 + G_2 + Q) $$
substituting in the random variables:
$$ g_X(\delta_t, \delta_f, N_1, N_2, G_1, Q_1) = \delta_t \, A \cdot 10 \, k_2 \, N_2 + \delta_f \, C \cdot 0.4 \left(k_1 \, l_1 \, N_1 + k_2 \, (h-l_1) \, N_2\right) - (G_1 + A \, \rho_{rc} \, h + Q)$$
where:
$A$: Cross sectional area
$C$: Cross sectional circumference
```python
gX = lambda x,h,d: x[0]*A(d)*10*k2*x[3]+x[1]*C(d)*0.4*(k1*x[2]*l1+(h-l1)*k2*x[3])-(x[4]+A(d)*rho_rc*h+x[5])
```
Recall that we turn our limit state function from a function of $x$, to a function of $u$
$$g=g(x)=g(x(u)) \Rightarrow g=g(u)$$
To do this mapping, we need the to know the function $x(u)$
We recall that
$$x = F_{X_i}^{-1}(\Phi(u))$$
Note that $F_{X_i}$ depends on the distribution type of variable $X_i$, whilst $\Phi$ is always the same.
The compendium covers the distribution types which we represent variables with in this course, we program their $x(u)$:
```python
def x2u(X):
#X is here a dictionary, which describes the type and stores its parameters
if X['type'] == 'Normal':
# If X is normal distributed, use Eq. (3.8) (isolated x)
x = lambda u: u*X['theta'][1] + X['theta'][0]
# X['theta'][1] = sigma
# X['theta'][0] = mu
if X['type'] == 'Lognormal':
# If X is lognormal distributed, use Eq. (3.26)
x = lambda u: np.exp(X['theta'][1]*u+X['theta'][0])
# X['theta'][1] = sigma_L
# X['theta'][0] = mu_L
if X['type'] == 'Gumbel':
# If X is Gumbel distributed, use Eq. (3.27)
x = lambda u: X['theta'][1]-1/X['theta'][0]*np.log(-np.log(sp.stats.norm.cdf(u)))
# X['theta'][1] = b
# X['theta'][0] = a
return x
```
We want to differentiate the limit state with respects to $u$ in our iteration. We can do this by utilizing the chain rule:
$$\frac{\partial g}{\partial u} = \frac{\partial g}{ \partial x}\frac{\partial x}{\partial u}$$
$$ \frac{\partial x}{\partial u} = \frac{\partial}{\partial u} \left( F_{X}^{-1}(\Phi(u)) \right) $$
```python
def dxdufun(X):
if X['type'] == "Normal":
dxdu = lambda u: X['theta'][1]
if X['type'] == "Lognormal":
dxdu = lambda u: X['theta'][1] * np.exp(X['theta'][0]+X['theta'][1] * u)
if X['type'] == "Gumbel":
dxdu = lambda u: - sp.stats.norm.pdf(u)/(X['theta'][0] * sp.stats.norm.cdf(u)*np.log(sp.stats.norm.cdf(u)))
return dxdu
```
We create a list of the transformed functions for each variable to make the code easier to look at.
```python
U = [x2u(dt), x2u(df), x2u(N1), x2u(N2), x2u(G1), x2u(Q)]
```
We can now write our LSF as a function of u:
```python
gU = lambda u,h,d: gX([U[0](u[0]),
U[1](u[1]),
U[2](u[2]),
U[3](u[3]),
U[4](u[4]),
U[5](u[5])],h,d)
```
Next, we define the function that gives us the next, improved $\alpha$-vector
```python
def alpha_next(u,h,d):
dgdu = np.zeros(6)
dgdu[0] = ( A(d)*10*k2*U[3](u[3])*dxdufun(dt)(u[0]) )
dgdu[1] = ( C(d)*0.4*(k1*U[2](u[2])*l1+(h-l1)*k2*U[3](u[3]))*dxdufun(df)(u[1]) )
dgdu[2] = ( C(d)*0.4*k1*l1*U[1](u[1])*dxdufun(N1)(u[2]) )
dgdu[3] = ( (A(d)*10*k2*U[0](u[0])+k2*C(d)*0.4*(h-l1)*U[1](u[1]))*dxdufun(N2)(u[3]) )
dgdu[4] = ( -dxdufun(G1)(u[4]) )
dgdu[5] = ( -dxdufun(Q)(u[5]) )
k = np.sqrt(sum(dgdu**2)) #Normalisation factor, makes ||alpha|| = 1
alpha = -dgdu/k
return alpha
```
Recall that this does not correspond exactly to eq. (3.10), as that equation requires us to insert the previous values of $\beta$ and $\boldsymbol{\alpha}$; in other words, a new $\boldsymbol{\alpha}$ is not found until we do so. This means that we need a starting value, $\boldsymbol{\alpha}_0$.
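Written out, the fixed-point iteration implemented by `alpha_next` above and `beta_next` below is (our summary of the code, paraphrasing the compendium's eq. (3.10)):
$$\boldsymbol{\alpha}^{(k+1)} = -\frac{\nabla_{\!u}\, g\!\left(\beta^{(k)}\boldsymbol{\alpha}^{(k)}\right)}{\left\lVert \nabla_{\!u}\, g\!\left(\beta^{(k)}\boldsymbol{\alpha}^{(k)}\right)\right\rVert}, \qquad g\!\left(\beta^{(k+1)}\boldsymbol{\alpha}^{(k+1)}\right) = 0,$$
repeated until $\beta$ changes by less than the chosen tolerance.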
```python
alpha0 = np.array([-1/np.sqrt(6),-1/np.sqrt(6),-1/np.sqrt(6),1/np.sqrt(6),1/np.sqrt(6),1/np.sqrt(6)])
```
Now we need a way to solve $g_u(\beta \cdot \mathbf{\alpha}) = 0$ with respect to $\beta$. This can be done in Python by utilizing `fsolve`
```python
from scipy.optimize import fsolve
```
Since `fsolve` is a numerical solver it needs a starting guess for $\beta$. The logical guess would be the previous $\beta$ we calculated. This also means we need a starting value for $\beta$, $\beta_0$:
```python
beta0=4
```
```python
def beta_next(alpha, beta_prev,h,d):
equation = lambda beta: gU(beta*alpha,h,d)
beta_new = fsolve(equation, x0=beta_prev) #Finds beta that gives equation = 0
return beta_new
```
```python
def FORM_tol(h,d,alpha_start, beta_start, tol=1e-5, itreturn=False):
alphas = [alpha_start]
beta1 = beta_next(alpha_start, beta0, h, d)
betas = [beta_start, beta1] #Need at least two values of beta for the next line to work, first beta is therefore calculated manually
while abs((betas[-1] - betas[-2])/betas[-1]) >= tol:
alpha_new = alpha_next(betas[-1]*alphas[-1],h,d) #calculates new, improved alpha, using the last calculated values (by using [-1])
alphas.append(alpha_new) #adds the new, improved alpha to the list of alphas
beta_new = beta_next(alphas[-1],betas[-1],h,d) #calculates new, improved beta
betas.append(beta_new) #adds the new, improved beta to the list of betas
if itreturn:
return alphas[-1], betas[-1], len(alphas)-1 #returns the final values, and number of iterations
else:
return alphas[-1], betas[-1] #returns only the final values
```
### Monte Carlo simulation
We use Monte Carlo as a way to test the quality of our results
```python
def MCS(h,d,n_sim):
n_sim = int(n_sim)
# Defining arrays with realisations for each randomly distributed variable
df_mcs = np.random.normal(loc=mu_df, scale=std_df, size=n_sim)
dt_mcs = np.random.normal(loc=mu_dt, scale=std_dt, size=n_sim)
N1_mcs = np.random.lognormal(N1['theta'][0], N1['theta'][1], size=n_sim)
N2_mcs = np.random.lognormal(N2['theta'][0], N2['theta'][1], size=n_sim)
G1_mcs = np.random.gumbel(G1['theta'][1], 1/G1['theta'][0], size=n_sim)
Q_mcs = np.random.gumbel(Q['theta'][1], 1/Q['theta'][0], size=n_sim)
l2 = h - l1
c1 = k1 * N1_mcs
c2 = k2 * N2_mcs
f1 = 0.4 * c1
f2 = 0.4 * c2
Rs1 = df_mcs * C(d) * l1 * f1
Rs2 = df_mcs * C(d) * l2 * f2
Rs = Rs1 + Rs2 # Collect all random variables that contribute to shaft resistance
Rt = dt_mcs * A(d) * 10 * c2 # Collect all random variables that contribute to tip resistance
g = Rt + Rs - (G1_mcs + G2(h,d) + Q_mcs) # LSF for MCS
fails = 0
for el in g:
if el <= 0:
fails+=1
pf = fails/n_sim
beta = -sp.stats.norm.ppf(pf)
return beta, pf
```
## Task 1
We test our FORM algorithm by comparing the results with Monte Carlo simulation.
We choose two combinations,
- $h = 6000 \text{ mm}, \phi = 800 \text{ mm}$
- $h = 8000 \text{ mm}, \phi = 700 \text{ mm}$
```python
alpha1, beta1 = FORM_tol(6000,800,alpha_start=alpha0, beta_start=beta0)
beta1_mcs = MCS(6000, 800, 1e6)[0]
alpha2, beta2 = FORM_tol(8000,700,alpha_start=alpha0, beta_start=beta0)
beta2_mcs = MCS(8000, 700, 1e6)[0]
print(u'h = 6000 mm, \u03C6 = 800 mm:')
print(u'\u03B2-FORM = %.5f' % beta1)
print(u'\u03B2-MCS = %.5f' % beta1_mcs)
print()
print(u'h = 8000 mm, \u03C6 = 700 mm:')
print(u'\u03B2-FORM = %.5f' % beta2)
print(u'\u03B2-MCS = %.5f' % beta2_mcs)
```
h = 6000 mm, φ = 800 mm:
β-FORM = 3.91698
β-MCS = 3.87188
h = 8000 mm, φ = 700 mm:
β-FORM = 4.03157
β-MCS = 3.96304
## Task 2
```python
f1 = lambda h,d: FORM_tol(h, d, alpha_start = alpha0, beta_start = beta0)[1] #Only returns the beta-value
```
```python
beta = f1(10000,600)
pf = sp.stats.norm.cdf(-beta)
print('Reliability index = %.5f' % beta)
print('Probability of failure = %.3e' % pf)
```
Reliability index = 3.93919
Probability of failure = 4.088e-05
## Task 3
```python
n=20 # Resolution of the plot
h_dum = np.linspace(5000, 20000,n) # Dummy arrays used for plotting
d_dum = np.linspace(200, 1000,n)
hh, dd = np.meshgrid(h_dum,d_dum)
beta_dum = np.zeros_like(hh)
for i in range(n):
for j in range(n):
beta_dum[i,j]=f1(hh[i,j],dd[i,j])
```
```python
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax1 = fig.add_subplot(111, projection='3d')
ax1.plot_surface(hh/1000,dd/1000, beta_dum,cmap=plt.cm.coolwarm)
# Enhance the plot
ax1.set_xlim(h_dum[0]/1000,h_dum[-1]/1000)
ax1.set_ylim(d_dum[0]/1000,d_dum[-1]/1000)
ax1.set_xlabel(r'$h$ [m]',fontsize=15)
ax1.set_ylabel(r'$\phi$ [m]',fontsize=15)
ax1.set_title(r'$\beta(h,\phi)$',fontsize=20)
plt.xticks(fontsize=12)
plt.yticks(fontsize=12)
ax1.zaxis.set_tick_params(labelsize=12)
plt.tight_layout()
plt.show()
```
## Task 4
```python
# Optimization problem
# Cost Inputs: =================================================
C_0 = 50e3 # Fixed construction cost [NOK]
c_pile = 1e-5 # Cost of pile pr mm3 [NOK/mm3]
N_F = 5 # Expected fatalities given failure
i_e = 0.03 # Interest rate
i_s = 0.025 # Societal interest rate
T_sl = 50 # Service life [years]
SWTP = 32.1e6 # NOK
H = 20*C_0 # Failure cost
D = 2*C_0 # Demolition cost
w = 1/T_sl # Obsolescence rate
```
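For reference, the renewal-based cost model evaluated below can be summarized as follows (this is our reading of the implementation, with $p_{f,1}$ the annual probability of failure from FORM and $\omega$ the obsolescence rate `w`):
$$\mathrm{E}[C_{tot}(h,\phi)] = C_c + (C_c + H)\,\frac{p_{f,1}}{i_e} + (C_c + D)\,\frac{\omega}{i_e}, \qquad C_c = C_0 + c_{pile}\,V(h,\phi),$$
where the second term is the expected failure cost and the third the expected obsolescence cost.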
```python
n=20 # Again, resolution of plots
h_dum = np.linspace(5e3, 20e3,n)
d_dum = np.linspace(0.2e3, 1e3,n)
hh, dd = np.meshgrid(h_dum,d_dum)
PF_1 = lambda h, d: scipy.stats.norm.cdf(-f1(h,d)) # Annual probability of failure
pf_1 = np.zeros_like(hh)
C_1 = np.zeros_like(hh)
V = lambda h,d: A(d)*h # Volume of pile given height and diameter
for i in range(n):
for j in range(n):
pf_1[i,j] = PF_1(hh[i,j], dd[i,j])
C_1[i,j] = V(hh[i,j], dd[i,j]) * c_pile
```
```python
C_c = C_0 + C_1 # Construction cost
EC_f = (C_c + H)* pf_1/i_e # Infinite renewal expected failure cost
EC_obs = (C_c + D)*w/i_e # Infinite renewal expected obsolescence cost
EC_tot = C_c + EC_f + EC_obs # Expected total cost
```
```python
from matplotlib import ticker
def plotZ(Z,Tittel,pos):
ax1 = fig.add_subplot(pos,projection='3d')
ax1.plot_surface(hh/1000,dd/1000,Z,cmap=plt.cm.coolwarm)
ax1.set_xlim(h_dum[0]/1000,h_dum[-1]/1000)
ax1.set_ylim(d_dum[0]/1000,d_dum[-1]/1000)
ax1.set_xlabel('h [m]',fontsize=15)
ax1.set_ylabel('d [m]',fontsize=15)
ax1.set_zlabel("NOK",rotation=90,labelpad=5,fontsize=15)
ax1.set_title(Tittel,fontsize=20)
formatter = ticker.ScalarFormatter(useMathText=True)
formatter.set_scientific(True)
formatter.set_powerlimits((-1,1))
ax1.zaxis.set_major_formatter(formatter)
ax1.xaxis.set_tick_params(labelsize=12)
ax1.yaxis.set_tick_params(labelsize=12)
ax1.zaxis.set_tick_params(labelsize=12)
```
```python
fig = plt.figure(figsize=(10,10))
plotZ(C_c,'Construction costs',221)
plotZ(EC_obs,"Obsolescence cost",222)
plotZ(EC_f,"Expected cost of failure ",223)
plotZ(EC_tot,"Total expected cost",224)
plt.tight_layout()
fig.subplots_adjust(wspace=.1,hspace=.2)
plt.show()
```
## Task 5
```python
n=50
d_opt = 700
h_dum = np.linspace(5e3, 10e3,n+1)
C_1 = lambda h: c_pile * V(h, d_opt)
C_c = lambda h: C_0 + C_1(h)
EC_obs = lambda h: (C_c(h) + D)*w/i_e
EC_f = lambda h: (C_c(h)+H) * PF_1(h, d_opt)*1/i_e
f2 = lambda h: float(C_c(h) + EC_obs(h) + EC_f(h) )
fig = plt.figure(figsize=(10,5))
ax = fig.add_subplot(111)
ax.plot(h_dum/1000,np.vectorize(f2)(h_dum),'k')
ax.set_xlabel(r'$h$ [m]')
ax.set_xlim(h_dum[0]/1000,h_dum[-1]/1000)
ax.set_ylabel(r'E[$C_{tot}]$ [NOK]')
ax.grid(which='major')
ax.minorticks_on()
ax.grid(which='minor', alpha=0.2)
fig.tight_layout()
plt.show()
```
```python
res=scipy.optimize.minimize(f2,6e3,method='nelder-mead',options={'xatol': 1e-2})
h_opt=float(res.x)
ECtot_opt = float(res.fun)
print(res)
print('\nOptimal height: %.0f mm \nOptimal expected total costs: %.0f NOK' % (h_opt, ECtot_opt))
```
final_simplex: (array([[5801.11083984],
[5801.10168457]]), array([193676.69550286, 193676.69550306]))
fun: 193676.6955028569
message: 'Optimization terminated successfully.'
nfev: 34
nit: 17
status: 0
success: True
x: array([5801.11083984])
Optimal height: 5801 mm
Optimal expected total costs: 193677 NOK
```python
def hjelpelinjer(ax, x,y, c='b',label=None):
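# hjelpelinjer (Norwegian for 'guide lines'): draws dashed guide lines from the axes to the point (x,y) and marks it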
ax.plot([x,x],[0,y],(c + '--'))
ax.plot([0,x],[y,y],(c + '--'))
ax.plot(x,y,(c + 'o'),label=label)
fig = plt.figure(figsize=(5,5))
ax = fig.add_subplot(111)
ax.plot(h_dum/1000,np.vectorize(f2)(h_dum),'k')
hjelpelinjer(ax, h_opt/1000,f2(h_opt))
ax.set_xlabel(r'$h$ [m]')
ax.set_xlim(5,7)
ax.set_ylabel(r'E[$C_{tot}]$ [NOK]',fontsize=20)
ax.set_ylim(ECtot_opt*0.95, ECtot_opt*1.05)
ax.grid(which='major')
ax.minorticks_on()
ax.grid(which='minor', alpha=0.2)
fig.tight_layout()
plt.show()
```
## Task 6
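A brief note on what the code below checks (our summary of the implementation, not a quote from the exercise text): the design height is deemed acceptable once further risk reduction is no longer efficient, i.e. where
$$-\frac{\mathrm{d}P_{f,1}}{\mathrm{d}h} \;\leq\; K_1 = \frac{C_1\,(i_s + \omega)}{\mathrm{SWTP}\cdot N_F}, \qquad C_1 = c_{pile}\,\frac{\pi}{4}\,d_{opt}^{2},$$
with $C_1$ the construction cost per unit height of the pile. The smallest height satisfying this with equality is $h_{min}$, and the plot marks $h \geq h_{min}$ as the acceptable region.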
```python
h_dum = np.linspace(5000,20000,50)
C1 = c_pile*np.pi/4*d_opt**2
K1 = C1*(i_s + w)/(SWTP*N_F)
#Numerical differentation:
dh = 1
dPdh = lambda h: (PF_1(h+dh, d_opt) - PF_1(h-dh, d_opt)) / (2*dh)
#Finding the height, h_min, which gives -dPdh = K1
equation = lambda h: -dPdh(h) - K1
h_min = float(fsolve(equation, 12000))
fig = plt.figure(figsize=(10,10))
ax1 = fig.add_subplot(211)
ax1.plot(h_dum,-np.vectorize(dPdh)(h_dum),'k',label=r'$-\frac{\mathrm{d}P_{f,1}}{\mathrm{d}h}$')
ax1.plot([h_dum[0],h_dum[-1]],[K1,K1],'r--',label='$K_1$')
ax1.plot(h_min,K1,'ro')
ax1.set_xlabel('$h$ [mm]')
ax1.set_ylabel('[mm$^{-1}$]')
ax1.set_xlim(h_dum[0],h_dum[-1])
ax1.set_ylim(K1*0.1, K1*20)
ax1.minorticks_on()
ax1.legend()
ax2 = fig.add_subplot(212)
ax2.plot(h_dum, np.vectorize(f2)(h_dum),'k')
ax2.axvspan(h_min,h_dum[-1],facecolor='g',alpha=0.1,label='Acceptable region')
ax2.set_xlim(h_dum[0],h_dum[-1])
ax2.set_ylim(f2(h_min)*0.8,f2(h_min)*1.2)
ax2.set_xlabel('$h$ [mm]')
ax2.set_ylabel(r'E[$C_{tot}]$ [NOK]')
hjelpelinjer(ax2, h_opt, f2(h_opt),'b')
hjelpelinjer(ax2, h_min, f2(h_min),'g')
ax2.minorticks_on()
ax2.legend(loc='lower right')
fig.tight_layout()
plt.show()
if h_min <= h_opt:
res = 'acceptable'
else:
res = 'not acceptable'
print('Minimal accepted height: %.0f mm' % h_min)
print('The optimal design is %s.' %res)
```
```python
```
# Start-to-Finish Example: Unit Testing `GiRaFFE_NRPy`: Boundary Conditions
## Author: Patrick Nelson
## This module Validates the Boundary Conditions routines for `GiRaFFE_NRPy`.
**Notebook Status:** <font color='green'><b>Validated</b></font>
**Validation Notes:** This module will validate the routines in [Tutorial-GiRaFFE_NRPy-BCs](Tutorial-GiRaFFE_NRPy-BCs.ipynb).
### NRPy+ Source Code for this module:
* [GiRaFFE_NRPy/GiRaFFE_NRPy_BCs.py](../../edit/in_progress/GiRaFFE_NRPy/GiRaFFE_NRPy_BCs.py) [\[**tutorial**\]](Tutorial-GiRaFFE_NRPy-BCs.ipynb) Generates the driver to apply boundary conditions to the vector potential, Valencia velocity, and $[\sqrt{\gamma} \Phi]$.
## Introduction:
This notebook validates the routines that apply boundary conditions to the vector potential $A_i$, the Valencia velocity $\bar{v}^i$, and $[\sqrt{\gamma} \Phi]$ by filling the ghost zones of the numerical grid.
It is, in general, good coding practice to unit test functions individually to verify that they produce the expected and intended output. We will generate test data with arbitrarily-chosen analytic functions and calculate gridfunctions at the cell centers on a small numeric grid, filling only the interior points. We will then apply the boundary-conditions routines to fill the ghost zones and compare the result against the exact analytic values at those points.
When this notebook is run, the significant digits of agreement between the approximate and exact values in the ghost zones will be evaluated. If the agreement falls below a threshold, the point, quantity, and level of agreement are reported [here](#compile_run).
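The significant digits of agreement are computed with the same formula used by the `SDA` macro in the C code below:
$$\mathrm{SDA}(a,b) = 1 - \log_{10}\!\left(\frac{2\,\lvert a-b\rvert}{\lvert a\rvert + \lvert b\rvert}\right).$$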
<a id='toc'></a>
# Table of Contents
$$\label{toc}$$
This notebook is organized as follows
1. [Step 1](#setup): Set up core functions and parameters for unit testing the BCs algorithm
1. [Step 1.a](#expressions) Write expressions for the gridfunctions we will test
1. [Step 1.b](#ccodekernels) Generate C functions to calculate the gridfunctions
1. [Step 1.c](#free_parameters) Set free parameters in the code
1. [Step 2](#mainc): `BCs_unit_test.c`: The Main C Code
1. [Step 2.a](#compile_run): Compile and run the code
1. [Step 3](#convergence): Code validation: Verify that relative error in numerical solution converges to zero at the expected order
1. [Step 4](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file
<a id='setup'></a>
# Step 1: Set up core functions and parameters for unit testing the BCs algorithm \[Back to [top](#toc)\]
$$\label{setup}$$
We'll start by appending the relevant paths to `sys.path` so that we can access sympy modules in other places. Then, we'll import NRPy+ core functionality and set up a directory in which to carry out our test.
```python
import os, sys # Standard Python modules for multiplatform OS-level functions
# First, we'll add the parent directory to the list of directories Python will check for modules.
nrpy_dir_path = os.path.join("..")
if nrpy_dir_path not in sys.path:
sys.path.append(nrpy_dir_path)
nrpy_dir_path = os.path.join("..","..")
if nrpy_dir_path not in sys.path:
sys.path.append(nrpy_dir_path)
from outputC import outCfunction, lhrh # NRPy+: Core C code output module
import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends
import finite_difference as fin # NRPy+: Finite difference C code generation module
import NRPy_param_funcs as par # NRPy+: Parameter interface
import grid as gri # NRPy+: Functions having to do with numerical grids
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import reference_metric as rfm # NRPy+: Reference metric support
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
out_dir = "Validation/"
cmd.mkdir(out_dir)
thismodule = "Start_to_Finish_UnitTest-GiRaFFE_NRPy-BCs"
# Set the finite-differencing order to 2
par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER", 2)
```
<a id='expressions'></a>
## Step 1.a: Write expressions for the gridfunctions we will test \[Back to [top](#toc)\]
$$\label{expressions}$$
Now, we'll choose some functions with arbitrary forms to generate test data. We'll need to set seven gridfunctions, so expressions are being pulled from several previously written unit tests.
\begin{align}
A_x &= dy + ez + f \\
A_y &= mx + nz + o \\
A_z &= sx + ty + u. \\
\bar{v}^x &= ax + by + cz \\
\bar{v}^y &= bx + cy + az \\
\bar{v}^z &= cx + ay + bz \\
[\sqrt{\gamma} \Phi] &= 1 - (x+2y+z) \\
\end{align}
```python
a,b,c,d,e,f,g,h,l,m,n,o,p,q,r,s,t,u = par.Cparameters("REAL",thismodule,["a","b","c","d","e","f","g","h","l","m","n","o","p","q","r","s","t","u"],1e300)
M_PI = par.Cparameters("#define",thismodule,["M_PI"], "")
AD = ixp.register_gridfunctions_for_single_rank1("EVOL","AD",DIM=3)
ValenciavU = ixp.register_gridfunctions_for_single_rank1("AUXEVOL","ValenciavU",DIM=3)
psi6Phi = gri.register_gridfunctions("EVOL","psi6Phi")
par.set_parval_from_str("reference_metric::CoordSystem","Cartesian")
rfm.reference_metric()
x = rfm.xxCart[0]
y = rfm.xxCart[1]
z = rfm.xxCart[2]
AD[0] = d*y + e*z + f
AD[1] = m*x + n*z + o
AD[2] = s*x + t*y + u
ValenciavU[0] = a#*x + b*y + c*z
ValenciavU[1] = b#*x + c*y + a*z
ValenciavU[2] = c#*x + a*y + b*z
psi6Phi = sp.sympify(1) - (x + sp.sympify(2)*y + z)
```
<a id='ccodekernels'></a>
## Step 1.b: Generate C functions to calculate the gridfunctions \[Back to [top](#toc)\]
$$\label{ccodekernels}$$
Here, we will use the NRPy+ function `outCfunction()` to generate C code that will calculate our metric gridfunctions over an entire grid; note that we call the function twice, once over just the interior points, and once over all points. This will allow us to compare against exact values in the ghostzones. We will also call the function to generate the boundary conditions function we are testing.
```python
metric_gfs_to_print = [\
lhrh(lhs=gri.gfaccess("evol_gfs","AD0"),rhs=AD[0]),\
lhrh(lhs=gri.gfaccess("evol_gfs","AD1"),rhs=AD[1]),\
lhrh(lhs=gri.gfaccess("evol_gfs","AD2"),rhs=AD[2]),\
lhrh(lhs=gri.gfaccess("auxevol_gfs","ValenciavU0"),rhs=ValenciavU[0]),\
lhrh(lhs=gri.gfaccess("auxevol_gfs","ValenciavU1"),rhs=ValenciavU[1]),\
lhrh(lhs=gri.gfaccess("auxevol_gfs","ValenciavU2"),rhs=ValenciavU[2]),\
lhrh(lhs=gri.gfaccess("evol_gfs","psi6Phi"),rhs=psi6Phi),\
]
desc = "Calculate test data on the interior grid for boundary conditions"
name = "calculate_test_data"
outCfunction(
outfile = os.path.join(out_dir,name+".h"), desc=desc, name=name,
params ="const paramstruct *restrict params,REAL *restrict xx[3],REAL *restrict auxevol_gfs,REAL *restrict evol_gfs",
body = fin.FD_outputC("returnstring",metric_gfs_to_print,params="outCverbose=False").replace("IDX4","IDX4S"),
loopopts="InteriorPoints,Read_xxs")
desc = "Calculate test data at all points for comparison"
name = "calculate_test_data_exact"
outCfunction(
outfile = os.path.join(out_dir,name+".h"), desc=desc, name=name,
params ="const paramstruct *restrict params,REAL *restrict xx[3],REAL *restrict auxevol_gfs,REAL *restrict evol_gfs",
body = fin.FD_outputC("returnstring",metric_gfs_to_print,params="outCverbose=False").replace("IDX4","IDX4S"),
loopopts="AllPoints,Read_xxs")
import GiRaFFE_NRPy.GiRaFFE_NRPy_BCs as BC
BC.GiRaFFE_NRPy_BCs(os.path.join(out_dir,"boundary_conditions"))
```
Output C function calculate_test_data() to file Validation/calculate_test_data.h
Output C function calculate_test_data_exact() to file Validation/calculate_test_data_exact.h
<a id='free_parameters'></a>
## Step 1.c: Set free parameters in the code \[Back to [top](#toc)\]
$$\label{free_parameters}$$
We also need to create the files that interact with NRPy's C parameter interface.
```python
# Step 3.d.i: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
# par.generate_Cparameters_Ccodes(os.path.join(out_dir))
# Step 3.d.ii: Set free_parameters.h
with open(os.path.join(out_dir,"free_parameters.h"),"w") as file:
file.write("""
// Override parameter defaults with values based on command line arguments and NGHOSTS.
params.Nxx0 = atoi(argv[1]);
params.Nxx1 = atoi(argv[2]);
params.Nxx2 = atoi(argv[3]);
params.Nxx_plus_2NGHOSTS0 = params.Nxx0 + 2*NGHOSTS;
params.Nxx_plus_2NGHOSTS1 = params.Nxx1 + 2*NGHOSTS;
params.Nxx_plus_2NGHOSTS2 = params.Nxx2 + 2*NGHOSTS;
// Step 0d: Set up space and time coordinates
// Step 0d.i: Declare \Delta x^i=dxx{0,1,2} and invdxx{0,1,2}, as well as xxmin[3] and xxmax[3]:
const REAL xxmin[3] = {-1.0,-1.0,-1.0};
const REAL xxmax[3] = { 1.0, 1.0, 1.0};
params.dxx0 = (xxmax[0] - xxmin[0]) / ((REAL)params.Nxx_plus_2NGHOSTS0-1.0);
params.dxx1 = (xxmax[1] - xxmin[1]) / ((REAL)params.Nxx_plus_2NGHOSTS1-1.0);
params.dxx2 = (xxmax[2] - xxmin[2]) / ((REAL)params.Nxx_plus_2NGHOSTS2-1.0);
params.invdx0 = 1.0 / params.dxx0;
params.invdx1 = 1.0 / params.dxx1;
params.invdx2 = 1.0 / params.dxx2;
\n""")
# Generates declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
par.generate_Cparameters_Ccodes(os.path.join(out_dir))
```
<a id='mainc'></a>
# Step 2: `BCs_unit_test.c`: The Main C Code \[Back to [top](#toc)\]
$$\label{mainc}$$
```python
%%writefile $out_dir/BCs_unit_test.c
// These are common packages that we are likely to need.
#include "stdio.h"
#include "stdlib.h"
#include "math.h"
#include "string.h" // Needed for strncmp, etc.
#include "stdint.h" // Needed for Windows GCC 6.x compatibility
#include <time.h> // Needed to set a random seed.
#define REAL double
#include "declare_Cparameters_struct.h"
const int NGHOSTS = 3;
REAL a,b,c,d,e,f,g,h,l,m,n,o,p,q,r,s,t,u;
// Standard NRPy+ memory access:
#define IDX4S(g,i,j,k) \
( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) + Nxx_plus_2NGHOSTS2 * (g) ) ) )
// Standard formula to calculate significant digits of agreement:
#define SDA(a,b) 1.0-log10(2.0*fabs(a-b)/(fabs(a)+fabs(b)))
// Give gridfunctions their names:
#define VALENCIAVU0GF 0
#define VALENCIAVU1GF 1
#define VALENCIAVU2GF 2
#define NUM_AUXEVOL_GFS 3
#define AD0GF 0
#define AD1GF 1
#define AD2GF 2
#define STILDED0GF 3
#define STILDED1GF 4
#define STILDED2GF 5
#define PSI6PHIGF 6
#define NUM_EVOL_GFS 7
#include "calculate_test_data.h"
#include "calculate_test_data_exact.h"
#include "boundary_conditions/GiRaFFE_boundary_conditions.h"
int main(int argc, const char *argv[]) {
paramstruct params;
#include "set_Cparameters_default.h"
// Step 0c: Set free parameters, overwriting Cparameters defaults
// by hand or with command-line input, as desired.
#include "free_parameters.h"
#include "set_Cparameters-nopointer.h"
// We'll define our grid slightly different from how we normally would. We let our outermost
// ghostzones coincide with xxmin and xxmax instead of the interior of the grid. This means
// that the ghostzone points will have identical positions so we can do convergence tests of them.
// Step 0e: Set up cell-centered Cartesian coordinate grids
REAL *xx[3];
xx[0] = (REAL *)malloc(sizeof(REAL)*Nxx_plus_2NGHOSTS0);
xx[1] = (REAL *)malloc(sizeof(REAL)*Nxx_plus_2NGHOSTS1);
xx[2] = (REAL *)malloc(sizeof(REAL)*Nxx_plus_2NGHOSTS2);
for(int j=0;j<Nxx_plus_2NGHOSTS0;j++) xx[0][j] = xxmin[0] + ((REAL)(j))*dxx0;
for(int j=0;j<Nxx_plus_2NGHOSTS1;j++) xx[1][j] = xxmin[1] + ((REAL)(j))*dxx1;
for(int j=0;j<Nxx_plus_2NGHOSTS2;j++) xx[2][j] = xxmin[2] + ((REAL)(j))*dxx2;
//for(int i=0;i<Nxx_plus_2NGHOSTS0;i++) printf("xx[0][%d] = %.15e\n",i,xx[0][i]);
// This is the array to which we'll write the NRPy+ variables.
REAL *auxevol_gfs = (REAL *)malloc(sizeof(REAL) * NUM_AUXEVOL_GFS * Nxx_plus_2NGHOSTS2 * Nxx_plus_2NGHOSTS1 * Nxx_plus_2NGHOSTS0);
REAL *evol_gfs = (REAL *)malloc(sizeof(REAL) * NUM_EVOL_GFS * Nxx_plus_2NGHOSTS2 * Nxx_plus_2NGHOSTS1 * Nxx_plus_2NGHOSTS0);
// And another for exact data:
REAL *auxevol_exact_gfs = (REAL *)malloc(sizeof(REAL) * NUM_AUXEVOL_GFS * Nxx_plus_2NGHOSTS2 * Nxx_plus_2NGHOSTS1 * Nxx_plus_2NGHOSTS0);
REAL *evol_exact_gfs = (REAL *)malloc(sizeof(REAL) * NUM_EVOL_GFS * Nxx_plus_2NGHOSTS2 * Nxx_plus_2NGHOSTS1 * Nxx_plus_2NGHOSTS0);
// Generate some random coefficients. Leave the random seed on its default for consistency between trials.
a = (double)(rand()%20)/5.0;
f = (double)(rand()%20)/5.0;
m = (double)(rand()%20)/5.0;
b = (double)(rand()%10-5)/100.0;
c = (double)(rand()%10-5)/100.0;
d = (double)(rand()%10-5)/100.0;
g = (double)(rand()%10-5)/100.0;
h = (double)(rand()%10-5)/100.0;
l = (double)(rand()%10-5)/100.0;
n = (double)(rand()%10-5)/100.0;
o = (double)(rand()%10-5)/100.0;
p = (double)(rand()%10-5)/100.0;
// First, calculate the test data on our grid, along with the comparison:
calculate_test_data(¶ms,xx,auxevol_gfs,evol_gfs);
calculate_test_data_exact(¶ms,xx,auxevol_exact_gfs,evol_exact_gfs);
// Run the BCs driver on the test data to fill in the ghost zones:
apply_bcs_potential(¶ms,evol_gfs);
apply_bcs_velocity(¶ms,auxevol_gfs);
/*char filename[100];
sprintf(filename,"out%d-numer.txt",Nxx0);
FILE *out2D = fopen(filename, "w");
for(int i2=0;i2<Nxx_plus_2NGHOSTS2;i2++) for(int i1=0;i1<Nxx_plus_2NGHOSTS1;i1++) for(int i0=0;i0<Nxx_plus_2NGHOSTS0;i0++) {
// We print the difference between approximate and exact numbers.
fprintf(out2D,"%.16e\t %e %e %e\n",
//auxevol_gfs[IDX4S(VALENCIAVU2GF,i0,i1,i2)]-auxevol_exact_gfs[IDX4S(VALENCIAVU2GF,i0,i1,i2)],
evol_gfs[IDX4S(AD2GF,i0,i1,i2)]-evol_exact_gfs[IDX4S(AD2GF,i0,i1,i2)],
xx[0][i0],xx[1][i1],xx[2][i2]
);
}
fclose(out2D);*/
int all_agree = 1;
for(int i0=0;i0<Nxx_plus_2NGHOSTS0;i0++){
for(int i1=0;i1<Nxx_plus_2NGHOSTS1;i1++){
for(int i2=0;i2<Nxx_plus_2NGHOSTS2;i2++){
if(SDA(evol_gfs[IDX4S(AD0GF, i0,i1,i2)],evol_exact_gfs[IDX4S(AD0GF, i0,i1,i2)])<10.0){
printf("Quantity AD0 only agrees with the original GiRaFFE to %.2f digits at i0,i1,i2=%d,%d,%d!\n",
SDA(evol_gfs[IDX4S(AD0GF, i0,i1,i2)],evol_exact_gfs[IDX4S(AD0GF, i0,i1,i2)]),i0,i1,i2);
all_agree=0;
}
if(SDA(evol_gfs[IDX4S(AD1GF, i0,i1,i2)],evol_exact_gfs[IDX4S(AD1GF, i0,i1,i2)])<10.0){
printf("Quantity AD1 only agrees with the original GiRaFFE to %.2f digits at i0,i1,i2=%d,%d,%d!\n",
SDA(evol_gfs[IDX4S(AD1GF, i0,i1,i2)],evol_exact_gfs[IDX4S(AD1GF, i0,i1,i2)]),i0,i1,i2);
all_agree=0;
}
if(SDA(evol_gfs[IDX4S(AD2GF, i0,i1,i2)],evol_exact_gfs[IDX4S(AD2GF, i0,i1,i2)])<10.0){
printf("Quantity AD2 only agrees with the original GiRaFFE to %.2f digits at i0,i1,i2=%d,%d,%d!\n",
SDA(evol_gfs[IDX4S(AD2GF, i0,i1,i2)],evol_exact_gfs[IDX4S(AD2GF, i0,i1,i2)]),i0,i1,i2);
all_agree=0;
}
if(SDA(auxevol_gfs[IDX4S(VALENCIAVU0GF, i0,i1,i2)],auxevol_exact_gfs[IDX4S(VALENCIAVU0GF, i0,i1,i2)])<10.0){
printf("Quantity ValenciavU0 only agrees with the original GiRaFFE to %.2f digits at i0,i1,i2=%d,%d,%d!\n",
SDA(auxevol_gfs[IDX4S(VALENCIAVU0GF, i0,i1,i2)],auxevol_exact_gfs[IDX4S(VALENCIAVU0GF, i0,i1,i2)]),i0,i1,i2);
all_agree=0;
}
if(SDA(auxevol_gfs[IDX4S(VALENCIAVU1GF, i0,i1,i2)],auxevol_exact_gfs[IDX4S(VALENCIAVU1GF, i0,i1,i2)])<10.0){
printf("Quantity ValenciavU1 only agrees with the original GiRaFFE to %.2f digits at i0,i1,i2=%d,%d,%d!\n",
SDA(auxevol_gfs[IDX4S(VALENCIAVU1GF, i0,i1,i2)],auxevol_exact_gfs[IDX4S(VALENCIAVU1GF, i0,i1,i2)]),i0,i1,i2);
all_agree=0;
}
if(SDA(auxevol_gfs[IDX4S(VALENCIAVU2GF, i0,i1,i2)],auxevol_exact_gfs[IDX4S(VALENCIAVU2GF, i0,i1,i2)])<10.0){
printf("Quantity ValenciavU2 only agrees with the original GiRaFFE to %.2f digits at i0,i1,i2=%d,%d,%d!\n",
SDA(auxevol_gfs[IDX4S(VALENCIAVU2GF, i0,i1,i2)],auxevol_exact_gfs[IDX4S(VALENCIAVU2GF, i0,i1,i2)]),i0,i1,i2);
all_agree=0;
}
if(SDA(evol_gfs[IDX4S(PSI6PHIGF, i0,i1,i2)],evol_exact_gfs[IDX4S(PSI6PHIGF, i0,i1,i2)])<10.0){
printf("psi6Phi = %.15e,%.15e\n",evol_gfs[IDX4S(PSI6PHIGF, i0,i1,i2)],evol_exact_gfs[IDX4S(PSI6PHIGF, i0,i1,i2)]);
//printf("Quantity psi6Phi only agrees with the original GiRaFFE to %.2f digits at i0,i1,i2=%d,%d,%d!\n",
// SDA(evol_gfs[IDX4S(PSI6PHIGF, i0,i1,i2)],evol_exact_gfs[IDX4S(PSI6PHIGF, i0,i1,i2)]),i0,i1,i2);
all_agree=0;
}
}
}
}
if(all_agree) printf("All quantities agree at all points!\n");
}
```
Writing Validation//BCs_unit_test.c
<a id='compile_run'></a>
## Step 2.a: Compile and run the code \[Back to [top](#toc)\]
$$\label{compile_run}$$
Now that we have our file, we can compile it and run the executable.
```python
import time
results_file = "out.txt"
print("Now compiling, should take ~2 seconds...\n")
start = time.time()
cmd.C_compile(os.path.join(out_dir,"BCs_unit_test.c"), os.path.join(out_dir,"BCs_unit_test"))
end = time.time()
print("Finished in "+str(end-start)+" seconds.\n\n")
print("Now running...\n")
start = time.time()
cmd.Execute(os.path.join("Validation","BCs_unit_test"),"2 2 2",file_to_redirect_stdout=os.path.join(out_dir,results_file))
# To do a convergence test, we'll also need a second grid with twice the resolution.
# cmd.Execute(os.path.join("Validation","BCs_unit_test"),"9 9 9",file_to_redirect_stdout=os.path.join(out_dir,results_file))
end = time.time()
print("Finished in "+str(end-start)+" seconds.\n\n")
```
Now compiling, should take ~2 seconds...
Compiling executable...
(EXEC): Executing `gcc -Ofast -fopenmp -march=native -funroll-loops Validation/BCs_unit_test.c -o Validation/BCs_unit_test -lm`...
(BENCH): Finished executing in 0.4094369411468506 seconds.
Finished compilation.
Finished in 0.4185829162597656 seconds.
Now running...
(EXEC): Executing `taskset -c 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 ./Validation/BCs_unit_test 2 2 2`...
(BENCH): Finished executing in 0.20866155624389648 seconds.
Finished in 0.2214832305908203 seconds.
Now, we will interpret our output and verify that we produced the correct results.
```python
with open(os.path.join(out_dir,results_file),"r") as file:
output = file.readline()
print(output)
if output!="All quantities agree at all points!\n": # If this isn't the first line of this file, something went wrong!
sys.exit(1)
```
All quantities agree at all points!
<a id='latex_pdf_output'></a>
# Step 4: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
$$\label{latex_pdf_output}$$
The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
[Tutorial-Start_to_Finish_UnitTest-GiRaFFE_NRPy-BCs.pdf](Tutorial-Start_to_Finish_UnitTest-GiRaFFE_NRPy-BCs.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
```python
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-Start_to_Finish_UnitTest-GiRaFFE_NRPy-BCs",location_of_template_file=os.path.join(".."))
```
Created Tutorial-Start_to_Finish_UnitTest-GiRaFFE_NRPy-BCs.tex, and
compiled LaTeX file to PDF file Tutorial-Start_to_Finish_UnitTest-
GiRaFFE_NRPy-BCs.pdf
## GeostatsPy: Bootstrap for Subsurface Data Analytics in Python
### Michael Pyrcz, Associate Professor, University of Texas at Austin
#### [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1)
### PGE 383 Exercise: Bootstrap for Subsurface Data Analytics in Python
Here's a simple workflow, demonstration of bootstrap for subsurface modeling workflows. This should help you get started with building subsurface models that integrate uncertainty in the sample statistics.
#### Bootstrap
Uncertainty in the sample statistics
* one source of uncertainty is the paucity of data.
* do 200 or even less wells provide a precise (and accurate estimate) of the mean? standard deviation? skew? P13?
Would it be useful to know the uncertainty in these statistics due to limited sampling?
* what is the impact of uncertainty in the mean porosity e.g. 20%+/-2%?
**Bootstrap** is a method to assess the uncertainty in a sample statistic by repeated random sampling with replacement.
Assumptions
* sufficient, representative sampling; identical, independent samples
Limitations
1. assumes the samples are representative
2. assumes stationarity
3. only accounts for uncertainty due to too few samples, e.g. no uncertainty due to changes away from data
4. does not account for boundary of area of interest
5. assumes the samples are independent
6. does not account for other local information sources
The Bootstrap Approach (Efron, 1982)
Statistical resampling procedure to calculate uncertainty in a calculated statistic from the data itself.
* Does this work? Prove it to yourself; for uncertainty in the mean, the analytical solution is the standard error:
\begin{equation}
\sigma^2_\overline{x} = \frac{\sigma^2_s}{n}
\end{equation}
Extremely powerful - we could calculate uncertainty in any statistic! e.g. P13, skew etc.
* It would not be possible to access general uncertainty in any statistic without bootstrap.
* Advanced forms account for spatial information and sampling strategy (game theory and Journel’s spatial bootstrap, 1993).
Steps:
1. assemble a sample set, must be representative, reasonable to assume independence between samples
2. optional: build a cumulative distribution function (CDF)
* may account for declustering weights, tail extrapolation
* could use analogous data to support
3. For $\ell = 1, \ldots, L$ realizations, do the following:
* For $i = 1, \ldots, n$ data, do the following:
* Draw a random sample with replacement from the sample set or Monte Carlo simulate from the CDF (if available).
4. Calculate a realization of the summary statistic of interest from the $n$ samples, e.g. $m^\ell$, $\sigma^2_{\ell}$. Return to 3 for another realization.
5. Compile and summarize the $L$ realizations of the statistic of interest.
This is a very powerful method. Let's try it out.
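Before working with the dataset, here is a minimal, self-contained sketch of the resampling loop described above. The synthetic sample set and the choices of $n$ and $L$ are purely illustrative (they are not values from this workflow); the last line checks the bootstrap standard deviation of the mean against the standard error formula.
```python
import numpy as np

np.random.seed(73073)                                # repeatable demonstration
samples = np.random.normal(0.13, 0.04, 50)           # illustrative synthetic sample set, n = 50

n = len(samples); L = 1000                           # n data, L bootstrap realizations
boot_means = np.zeros(L)
for l in range(L):                                   # step 3: loop over realizations
    resample = np.random.choice(samples, size=n, replace=True)  # draw n samples with replacement
    boot_means[l] = resample.mean()                  # step 4: statistic of interest (here the mean)

# step 5: summarize the L realizations
print('Bootstrap mean of the mean       = ' + str(round(np.mean(boot_means), 4)))
print('Bootstrap st.dev. of the mean    = ' + str(round(np.std(boot_means), 4)))
print('Standard error, sigma_s/sqrt(n)  = ' + str(round(np.std(samples)/np.sqrt(n), 4)))
```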
#### Objective
In the PGE 383: Stochastic Subsurface Modeling class I want to provide hands-on experience with building subsurface modeling workflows. Python provides an excellent vehicle to accomplish this. I have coded a package called GeostatsPy with GSLIB: Geostatistical Library (Deutsch and Journel, 1998) functionality that provides basic building blocks for building subsurface modeling workflows.
The objective is to remove the hurdles of subsurface modeling workflow construction by providing building blocks and sufficient examples. This is not a coding class per se, but we need the ability to 'script' workflows working with numerical methods.
#### Getting Started
Here's the steps to get setup in Python with the GeostatsPy package:
1. Install Anaconda 3 on your machine (https://www.anaconda.com/download/).
2. From Anaconda Navigator (within Anaconda3 group), go to the environment tab, click on base (root) green arrow and open a terminal.
3. In the terminal type: pip install geostatspy.
4. Open Jupyter and in the top block get started by copy and pasting the code block below from this Jupyter Notebook to start using the geostatspy functionality.
You will need to copy the data file to your working directory. They are available here:
* Tabular data - sample_data_biased.csv at https://git.io/fh0CW
There are examples below with these functions. You can go here to see a list of the available functions, https://git.io/fh4eX, other example workflows and source code.
```python
import geostatspy.GSLIB as GSLIB # GSLIB utilies, visualization and wrapper
import geostatspy.geostats as geostats # GSLIB methods convert to Python
```
We will also need some standard packages. These should have been installed with Anaconda 3.
```python
import numpy as np # ndarrys for gridded data
import pandas as pd # DataFrames for tabular data
import os # set working directory, run executables
import matplotlib.pyplot as plt # for plotting
from scipy import stats # summary statistics
import math # trig etc.
import scipy.signal as signal # kernel for moving window calculation
import random
```
#### Set the working directory
I always like to do this so I don't lose files and to simplify subsequent read and writes (avoid including the full address each time).
```python
os.chdir("c:/PGE383") # set the working directory
```
#### Loading Tabular Data
Here's the command to load our comma delimited data file in to a Pandas' DataFrame object.
```python
df = pd.read_csv('sample_data_biased.csv') # load our data table (wrong name!)
```
Let's drop some samples so that we increase the variations in bootstrap samples for our demonstration below.
```python
df = df.sample(frac = 0.9) # randomly retain 90% of the samples to reduce the size of the dataset
print('Using ' + str(len(df)) + ' number of samples')
```
Using 260 number of samples
Visualizing the DataFrame would be useful and we already learned about these methods in this demo (https://git.io/fNgRW).
We can preview the DataFrame by printing a slice or by utilizing the 'head' DataFrame member function (with a nice and clean format, see below). With the slice we could look at any subset of the data table and with the head command, add parameter 'n=13' to see the first 13 rows of the dataset.
```python
print(df.iloc[0:5,:]) # display first 5 samples in the table as a preview
df.head(n=13) # we could also use this command for a table preview
```
X Y Facies Porosity Perm
209 530 789 1 0.154871 51.991055
146 670 49 0 0.118435 11.076648
196 500 699 1 0.121085 9.033307
57 420 874 1 0.205473 954.697202
76 840 949 1 0.141307 18.635578
|     |   X |   Y | Facies | Porosity |        Perm |
|-----|-----|-----|--------|----------|-------------|
| 209 | 530 | 789 | 1      | 0.154871 | 51.991055   |
| 146 | 670 | 49  | 0      | 0.118435 | 11.076648   |
| 196 | 500 | 699 | 1      | 0.121085 | 9.033307    |
| 57  | 420 | 874 | 1      | 0.205473 | 954.697202  |
| 76  | 840 | 949 | 1      | 0.141307 | 18.635578   |
| 273 | 820 | 919 | 1      | 0.104364 | 2.766127    |
| 90  | 430 | 809 | 1      | 0.181352 | 212.452297  |
| 197 | 490 | 419 | 1      | 0.098454 | 2.881800    |
| 53  | 410 | 884 | 1      | 0.218254 | 1923.286133 |
| 256 | 410 | 669 | 1      | 0.118083 | 22.382541   |
| 101 | 740 | 379 | 1      | 0.132215 | 29.031980   |
| 181 | 640 | 669 | 0      | 0.105467 | 1.389270    |
| 252 | 200 | 359 | 0      | 0.077625 | 0.540486    |
#### Summary Statistics for Tabular Data
The table includes X and Y coordinates (meters), Facies 1 and 0 (1 is sandstone and 0 interbedded sand and mudstone), Porosity (fraction), and permeability as Perm (mDarcy).
There are a lot of efficient methods to calculate summary statistics from tabular data in DataFrames. The describe command provides count, mean, minimum, maximum, and quartiles all in a nice data table. We use transpose just to flip the table so that features are on the rows and the statistics are on the columns.
```python
df.describe().transpose()
```
|          | count | mean       | std        | min      | 25%        | 50%        | 75%        | max         |
|----------|-------|------------|------------|----------|------------|------------|------------|-------------|
| X        | 260.0 | 474.769231 | 255.099997 | 0.000000 | 300.000000 | 430.000000 | 670.000000 | 990.000000  |
| Y        | 260.0 | 530.242308 | 304.354191 | 9.000000 | 269.000000 | 544.000000 | 839.000000 | 999.000000  |
| Facies   | 260.0 | 0.807692   | 0.394874   | 0.000000 | 1.000000   | 1.000000   | 1.000000   | 1.000000    |
| Porosity | 260.0 | 0.134177   | 0.038388   | 0.058548 | 0.106235   | 0.124065   | 0.153088   | 0.228790    |
| Perm     | 260.0 | 215.012711 | 576.830397 | 0.075819 | 3.416283   | 12.826386  | 62.273419  | 5308.842566 |
#### Visualizing Tabular Data with Location Maps
It is natural to set the x and y coordinate and feature ranges manually. e.g. do you want your color bar to go from 0.05887 to 0.24230 exactly? Also, let's pick a color map for display. I heard that plasma is known to be friendly to the color blind as the color and intensity vary together (hope I got that right, it was an interesting Twitter conversation started by Matt Hall from Agile if I recall correctly). We will assume a study area of 0 to 1,000m in x and y and omit any data outside this area.
```python
xmin = 0.0; xmax = 1000.0 # range of x values
ymin = 0.0; ymax = 1000.0 # range of y values
pormin = 0.05; pormax = 0.25; # range of porosity values
nx = 100; ny = 100; csize = 10.0
cmap = plt.cm.plasma # color map
```
Let's try out locmap. This is a reimplementation of GSLIB's locmap program that uses matplotlib. I hope you find it simpler than coding the plot directly in matplotlib; if you want to get more advanced and build custom plots, look at the source. If you improve it, send me the new code. Any help is appreciated. To see the parameters, just type the command name:
```python
GSLIB.locmap
```
<function geostatspy.GSLIB.locmap(df, xcol, ycol, vcol, xmin, xmax, ymin, ymax, vmin, vmax, title, xlabel, ylabel, vlabel, cmap, fig_name)>
Now we can populate the plotting parameters and visualize the porosity data.
```python
plt.subplot(111)
GSLIB.locmap_st(df,'X','Y','Porosity',xmin,xmax,ymin,ymax,pormin,pormax,'Well Data - Porosity','X(m)','Y(m)','Porosity (fraction)',cmap)
plt.subplots_adjust(left=0.0, bottom=0.0, right=1.0, top=1.2, wspace=0.2, hspace=0.2)
plt.show()
```
#### Declustering
Let's calculate some declustering weights. There is a demonstration on declustering here https://git.io/fhgJl if you need more information.
```python
wts, cell_sizes, dmeans = geostats.declus(df,'X','Y','Porosity',iminmax = 1, noff= 10, ncell=100,cmin=10,cmax=2000)
df['Wts'] = wts # add weights to the sample data DataFrame
df.head() # preview to check the sample data DataFrame
def weighted_avg_and_std(values, weights): # function to calculate weighted mean and st. dev., from Eric O Lebigot, stack overflow,
average = np.average(values, weights=weights)
variance = np.average((values-average)**2, weights=weights)
return (average, math.sqrt(variance))
sample_avg, sample_stdev = weighted_avg_and_std(df['Porosity'],df['Wts'])
print('Declustered mean = ' + str(round(sample_avg,3)) + ' and declustered standard deviation = ' + str(round(sample_stdev,3)))
```
There are 260 data with:
mean of 0.13417733651923078
min and max 0.058547873 and 0.228790002
standard dev 0.03831442273384987
Declustered mean = 0.121 and declustered standard deviation = 0.032
##### A Couple of Bootstrap Realizations
We will attempt bootstrap by hand and manually loop over $L$ realizations, drawing $n$ samples to calculate the summary statistics of interest, mean and variance. The choices function from the random package simplifies sampling with replacement from a set of samples with weights.
This command returns a list of k samples drawn with replacement from the 'Porosity' column of our DataFrame (df), accounting for the data weights in column 'Wts'.
```python
samples1 = random.choices(df['Porosity'].values, weights=df['Wts'].values, cum_weights=None, k=len(df))
```
It is instructive to look at a couple of these realizations from the original declustered data set.
```python
samples1 = random.choices(df['Porosity'].values, weights=df['Wts'].values, cum_weights=None, k=len(df))
samples2 = random.choices(df['Porosity'].values, weights=df['Wts'].values, cum_weights=None, k=len(df))
print('Bootstrap means, realization 1 = ' + str(np.average(samples1)) + ' and realization 2 = ' + str(np.average(samples2)))
plt.subplot(131)
GSLIB.hist_st(df['Porosity'],pormin,pormax,False,False,20,df['Wts'],'Porosity (fraction)','Declustered Porosity')
plt.subplot(132)
GSLIB.hist_st(samples1,pormin,pormax,False,False,20,None,'Bootstrap Sample - Realization 1','Bootstrap Porosity 1')
plt.subplot(133)
GSLIB.hist_st(samples2,pormin,pormax,False,False,20,None,'Bootstrap Sample - Realization 2','Bootstrap Porosity 2')
plt.subplots_adjust(left=0.0, bottom=0.0, right=3.0, top=1.2, wspace=0.2, hspace=0.2)
plt.show()
```
Note that the bootstrap distributions vary quite a bit from the original.
#### Summarizations Over Bootstrap Realizations
Let's make a loop to conduct $L$ resamples and calculate the average and standard deviation for each ($m^\ell$, $\sigma^2_{\ell}$, for $\ell = 0,\dots,L-1$). We then summarize over these $L$ realizations.
I did not find any built-in, concise functions to accomplish this, i.e. with a single line of code, so we are going to do it by hand.
To understand this code there are just a couple of Python concepts that you need to add to your Python arsenal.
1. declaring arrays - NumPy has a lot of great array (ndarray) functionality. There are built-in functions to make an ndarray of any length (and dimension). This includes 'zeros', 'ones' and 'rand', so when we use this code:
```python
mean = np.zeros(L); stdev = np.zeros(L)
```
we're making arrays of length $L$ pre-populated with zeros.
2. for loops - when we use the command below, we instruct the computer to loop over all the indented code below the command for $l = 0,1,2,\ldots,L-1$. On each pass the $l$ variable increments, so we can use it to save each result to a different index in the arrays mean and stdev. Note that Python arrays are indexed starting at 0 and stop at the length - 1.
```python
for l in range(0, L):
```
Inside the loop we run each bootstrap resampled realization, calculate the average and standard deviation, and store them in the arrays that we already declared.
```python
L = 1000 # set the number of realizations
mean = np.zeros(L); stdev = np.zeros(L) # declare arrays to hold the realizations of the statistics
for l in range(0, L): # loop over realizations
samples = random.choices(df['Porosity'].values, weights=df['Wts'].values, cum_weights=None, k=len(df))
mean[l] = np.average(samples)
stdev[l] = np.std(samples)
plt.subplot(121)
GSLIB.hist_st(mean,0.11,0.15,False,False,50,None,'Average Porosity (fraction)','Bootstrap Uncertainty in Porosity Average')
plt.subplot(122)
GSLIB.hist_st(stdev,0.015,0.045,False,False,50,None,'Standard Deviation Porosity (fraction)','Bootstrap Uncertainty in Porosity Standard Deviation')
plt.subplots_adjust(left=0.0, bottom=0.0, right=3.0, top=1.2, wspace=0.2, hspace=0.2)
plt.show()
print('Summary Statistics for Bootstrap Porosity Mean Realizations:')
print(stats.describe(mean))
print('P10 ' + str(round(np.percentile(mean,10),3)) + ', P50 ' + str(round(np.percentile(mean,50),3)) + ', P90 ' + str(round(np.percentile(mean,90),3)))
print('\nSummary Statistics for Bootstrap Porosity Standard Deviation Realizations:')
print(stats.describe(stdev))
print('P10 ' + str(round(np.percentile(stdev,10),3)) + ', P50 ' + str(round(np.percentile(stdev,50),3)) + ', P90 ' + str(round(np.percentile(stdev,90),3)))
```
#### Comments
This was a basic demonstration of bootstrap. Much more could be done: you could replace the statistics, average and standard deviation, with any other statistics, for example P90, kurtosis, P13 etc. I have other demonstrations on the basics of working with DataFrames, ndarrays, univariate statistics, plotting data, declustering, data transformations, trend modeling and many other workflows available at https://github.com/GeostatsGuy/PythonNumericalDemos and https://github.com/GeostatsGuy/GeostatsPy.
I hope this was helpful,
*Michael*
Michael Pyrcz, Ph.D., P.Eng. Associate Professor The Hildebrand Department of Petroleum and Geosystems Engineering, Bureau of Economic Geology, The Jackson School of Geosciences, The University of Texas at Austin
#### More Resources Available at: [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1)
# Electronics Worked Example
This notebook starts to explore a worked example of how we can use a package like `lcapy` to support the production of technical educational materials in a way that guarantees correctness and provides opportunities for developing self-test activities.
The aim is to be able to generate a set of materials where all the equations and all the calculations are rendered mechanically from an original circuit description. This would then guarantee their correctness relative to the original circuit.
From an original circuit description, we can generate:
- a rendered circuit diagram;
- symbolic expressions derived from simple automated analysis of the circuit;
- automatically generated problem solutions for calculations based around particular component values and voltages etc.
## `lcapy`
As described elsewhere, `lcapy` is a linear circuit analysis package that can be used to describe, display and analyse the behaviour of a wide range of linear analogue electrical circuits.
`lcapy` uses a circuit description that can be used both to generate a circuit diagram and as the basis for a wide range of analyses.
```python
%%capture
try:
import lcapy
except:
%pip install lcapy
# https://github.com/dawbarton/pdf2svg
#!brew install pdf2svg
```
Circuits are defined by generating a graph that connects numbered nodes with edges representing a particular component. Layout instructions describe the orientation of the edge, and labels can be added as required.
In actual use, it would make sense to build up a library of common circuit blocks that can be visually reviewed, and then the corresponding definition copied and reused, with modification as required.
```python
from lcapy import Circuit
cct = Circuit("""
V1 1 0; down
R1 1 2; left=2, i=I_1, v=V_{R_1}
R2 1 3; right=2, i=I_2, v=V_{R_2}
L1 2 0_1; down, i=I_1, v=V_{L_1}
L2 3 0_3; down, i=I_1, v=V_{L_2}
W 0 0_3; right
W 0 0_1; left""")
# Render the circuit
cct.draw(scale=3)
# We can also save a generated circuit diagram to an image file
#cct.draw(scale=3, filename='circuit_diagram.svg')
```
## Example Production Workflow
To provide an example of how we might use `lcapy`, consider the following teaching example. *The following section is a reworking of http://www.open.edu/openlearn/science-maths-technology/introduction-electronics/content-section-3.1 .*
### Voltage Divider
Voltage dividers are widely used in electronic circuits to create a reference voltage, or to reduce the amplitude of a signal. The figure below shows a voltage divider. The value of $V_{out}$ can be calculated from the values of $V_S$, $R_1$ and $R_2$.
```python
import lcapy
from lcapy import Circuit
#We can create a schematic for the voltage divider using lcapy
#This has the advantage that circuit description is also a model
#The model can be analysed and used to calculate voltages and currents, for example,
# across components if component values and the source voltage are defined
#Figure: A voltage divider circuit
sch='''
VS 1 0 ; down
W 1 2 ; right, size=2
R1 2 3 ; down
R2 3 4; down
W 3 5; right
P1 5 6; down,v=V_{out}
W 4 6; right
W 4 0; left
'''
#Demonstrate that we can write the description to a file
fn="voltageDivider.sch"
with open(fn, "w") as text_file:
text_file.write(sch)
# and then create the circuit model from the (persisted) file
cct = Circuit(fn)
```
The schema is also retrievable from the circuit object:
```python
cct.sch
```
VS 1 0 ; down
W 1 2 ; right, size=2
R1 2 3 ; down
R2 3 4; down
W 3 5; right
P1 5 6; down,v=V_{out}
W 4 6; right
W 4 0; left
```python
#Draw the circuit diagram that corresponds to the schematic description
cct.draw(style='american', draw_nodes=False, label_nodes=False) #american, british, european
#Draw function is defined in https://github.com/mph-/lcapy/blob/master/lcapy/schematic.py
#The styles need tweaking to suit OU convention - this requires a minor patch to lcapy
#Styles defined in https://github.com/mph-/lcapy/blob/master/lcapy/schematic.py#Schematic.tikz_draw
```
In the first instance, let’s assume that $V_{out}$ is not connected to anything (for voltage dividers it is always assumed that negligible current flows through the output). This means that, according to Kirchhoff’s first law, the current flowing through $R_1$ is the same as the current flowing through $R_2$. Ohm’s law allows you to calculate the current through $R_1$. It is the potential difference across that resistor, divided by its resistance. Since the voltage is distributed over two resistors, the potential drop over $R_1$ is $V_{R_1}=V_S - V_{out}$.
The equation at the end of the last paragraph is written explicitly as LateX.
But we can also analyse the circuit using `lcapy` to see what the equation *should* be by deriving it automatically:
The voltage across $R_2$, $V_{out}$, is given as:
```python
cct.R1.v
#We can't do anything about the order of the variables in the output expression, unfortunately
#It would be neater if sympy sorted fractional terms last but it doesn't...
```
We can get an expression for the output voltage, $V_{out}$, or its calculated value in a couple of ways:
```{note}
The circuit's node numbers can be displayed on the schematic if required: simply set `label_nodes=True` in the `draw()` statement.
```
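For instance, re-running the earlier `draw()` call with `label_nodes=True` (shown purely as an illustration) displays the node numbers on the schematic:
```python
# Same drawing options as before, but with the node labels switched on
cct.draw(style='american', draw_nodes=False, label_nodes=True)
```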
For example, we can find the voltage across the appropriately numbered nodes by getting the open circuit voltage across nodes 3 and 4 (that is, across R2).
```python
cct.Voc(3,4)['t']
```
The output voltage can also be obtained by direct reference to the appropriate component:
```python
cct.R2.v
```
We can create a reference for our voltages as symbolic elements.
For example, we can get an expression for $V_{out}$:
```python
#sympy is a symbolic maths package
from sympy import Symbol, Eq
#If we add .expr to the voltages, we can get the sympy representation of voltage and current equations
# that are automatically derived from the model.
vout_expr = cct.R2.v.expr
vout_expr
```
We can also reference this value via a string:
```python
out_component = 'R2'
cct[out_component].v.expr
```
```python
cct.R2.relname
```
'R2'
And an expression for the voltage across $R_1$:
```python
v_r1_expr = cct.R1.v.expr
v_r1_expr
```
Working with sympy symbols, we can perform a substitution if expressions match exactly.
In this case, we can swap in $V_{out}$ for the expression returned from the analysis to give us an expression in the form we want for $R_1$:
```python
vout = Symbol('V_out')
v_r1_expr.subs(vout_expr, vout)
```
We can now create an equation to declare the relationship between a symbol for $R_1$ and the expression that defines its value:
```python
#I don't know how to get the symbols from the circuit as sympy symbols
# so create them explicitly
v_r1 = Symbol("V_{R_1}")
# Define the equation
Eq( v_r1, v_r1_expr.subs(vout_expr, vout) )
```
```{note}
In a traditional set of teaching materials, we might work through the derivations of various formulae describing how to calculate particulalr quantities.
In order to render symbols correctly, we might author our document using LaTeX. Jupyter notebooks are quite comfortable with rendering LaTeX inline, or as part of a multiline block, so let's create an example of the sort of proof we might see in a traditionally authored text.
*The following expressions are hand written using LaTeX*
```
The current through $R_1$ ($I_{R_1}$) is given by $I_{R_1}=\displaystyle\frac{(V_S-V_{out})}{R_1}$
Similarly, the current through $R_2$ is given by $I_{R_2}=\displaystyle\frac{V_{out}}{R_2}$
Kirchhoff’s first law tells you that $I_{R_1}=I_{R_2}$, and therefore
$\displaystyle\frac{V_{out}}{R_2}=\frac{(V_S-V_{out})}{R_1}$
Multiplying both sides by $R_1$ and by $R_2$ gives
$R_1V_{out}=R_2(V_S-V_{out})$
Then multiplying out the brackets on the right-hand side gives
$R_1V_{out}=R_2V_S-R_2V_{out}$
This can be rearranged to
$R_1V_{out}+R_2V_{out}=R_2V_S$
giving
$(R_1+R_2)V_{out}=R_2V_S$
and therefore the fundamental result is obtained:
$V_{out}=\displaystyle\frac{R_2V_S}{(R_1+R_2)}$
```{note}
One thing we might note at this point is that by authoring in the proof in the same document as the circuit diagram, we reduce the "distance" between the circuit diagram and the derivation that purports to relate to it but there is still the potential for error.
The distance between the image and the derivation also means that if we change the labeling on the diagram, for example, the derivation will no longer properly describe the components as labeled in the diagram.
```
From our symbolic expressions, we can render an equation for $V_{out}$ very straightforwardly:
```python
#We can find this from quantities we have derived through analysis of the presented circuit
Eq(vout, vout_expr)
```
Recall that the expression `vout_expr` was calculated automatically from the original circuit representation, which was also used to render the circuit diagram. The expression is guaranteed to be correct for the circuit we have defined, although it may not be presented in the order of the form we would like it to be in.
As well as accessing the voltage across a component, we can also access an expression for the current flowing through a component.
For example, let's get the expression for the current flowing through $R_1$ and assign it to an appropriate symbol:
```python
Eq(Symbol('I_{R_1}'), cct.R1.i.expr)
# The following equation is generated by the symbolic analysis...
```
```{note}
There is still distance in each of these uses of `Eq()`, where we "casually" state the equivalence of left and right hand components. Could we derive the symbol for the left hand side automatically, somehow?
```
We can also get an expression for the current through $R_2$:
```python
#We get the following from the circuit analysis, as above...
cct.R2.i.expr
# We note that the circuit analysis returns equal expressions for I_R_1 and I_R_2
# which gives some sort of reinforcement to the idea of Kirchoff's Law...
# The following equation is generated by the symbolic analysis...
```
Being able to render expressions that describe the values of particular quantities in algebraic terms means we may be able to create derivations that are "necessarily" true.
But we can do more than that, and also start to insert numbers into the expressions.
Consider the following exercise:
#### Exercise
Suppose $V_S= 24 V$ and $R_2 = 100\Omega$. You want $V_{out} = 6 V$. What value of $R_1$ do you need?
#### Answer
```{note}
In a traditional set of materials, we are likely to write out the steps to the solution manually (a process which is subject to error and likely to need checking for correctness as well as sense) and then manually substitute in the required values and perform the calculation (which requires more checking).
```
Rearranging the equation for $V_{out}$ gives
$V_{out}(R_1+R_2)=R_2V_S$
and therefore:
$(R_1+R_2)=\displaystyle\frac{R_2V_S}{V_{out}}$
which means the equation for $R_1$ is:
$R_1=\displaystyle\frac{R_2V_S}{V_{out}}-R_2$
Substituting in the values given and performing the calculation gives us the required solution:
$R_1=\displaystyle\frac{100\Omega \times 24V}{6V}-100\Omega = 400\Omega-100\Omega=300\Omega$
```{note}
Once again, all the above steps are manually derived and presented using hand-crafted LaTeX.
```
How might we set about creating an equivalent solution, but guaranteed to be correct?
We essentially want to solve for $R_1$ in the following expression (which was automatically derived by analysis of the presented circuit representation):
```python
Eq(vout, vout_expr)
```
The steps we need to take are a simplification of the expression, with value substitution:
```python
from sympy import sympify
#This is clunky - is there a proper way of substituting values into lcapy expressions?
Eq(6, sympify(str(vout_expr)).subs([('VS', 24), ('R2', 100)]))
```
Rearranging, we need to solve the following for $R_1$ (the expression below is generated automatically through symbolic analysis):
```python
Eq(vout_expr - vout, 0)
```
`sympy` can solve such equations for us directly, so let's see how it solves things symbolically for $R_1$:
```python
from sympy import solve
Eq(Symbol('R1'),solve(sympify(str(vout_expr-vout)),'R1')[0])
#The following equation is generated by the symbolic analysis...
```
To solve the equation numerically, we can substitute values into the `sympy` expression as follows:
```python
solve(sympify(str(vout_expr-vout)).subs([('VS',24), ('R2',100),('V_out',6)]),'R1')[0]
```
We can craft an equation to display the solution for us:
```python
VALUES = [('VS', 24),
('R2', 100),
('V_out', 6)]
SOLVE_FOR = 'R1'
Eq(Symbol(SOLVE_FOR), solve(sympify(str(vout_expr-vout)), SOLVE_FOR)[0].subs(VALUES))
#The following result is calculated by the symbolic analysis...
```
A key point about this is that we can script in different component values and display the correct output.
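For instance, here is a minimal sketch that re-uses `vout_expr`, `vout` and the sympy helpers from above with a different, purely illustrative set of component values:
```python
# Hypothetical alternative values: VS = 12 V, R2 = 200 Ohm, desired V_out = 3 V
VALUES_ALT = [('VS', 12), ('R2', 200), ('V_out', 3)]
Eq(Symbol('R1'), solve(sympify(str(vout_expr - vout)), 'R1')[0].subs(VALUES_ALT))
# Expected result: R1 = 600 (Ohm)
```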
We can also do partial solutions which might be useful if we want to build up the steps to a solution, with partial substitution of values along the way:
```python
Vs=20; Vout=5
R1=solve(sympify(str(vout_expr-vout)),'R1')[0].subs([('VS',Vs),('V_out',Vout)])
print('For V_S={Vs}V and V_out={Vout}V, we need R1={R1}.'.format(Vs=Vs,Vout=Vout,R1=R1))
```
For V_S=20V and V_out=5V, we need R1=3*R2.
Alternatively, we can create a function to solve for any single missing value.
The following will calculate the relevant solution:
```python
def soln(cct, out_component=None, values=None):
if values is None:
values={'VS':24, 'R1':'', 'R2':100, 'V_out':6}
outval=[v for v in values if not values[v]]
invals=[(v, values[v]) for v in values if values[v] ]
if len(outval)!=1 or len(invals)!=3:
return 'oops'
outval = outval[0]
out_component = out_component if out_component else outval
vout_expr = cct[out_component].v.expr
print(vout_expr)
print(invals)
return 'Value of {} is {}'.format(outval,
solve(sympify(str(vout_expr-vout)).subs(invals), outval)[0])
```
For example, using the default values baked into the function:
```python
soln(cct, 'R2')
```
R2*VS/(R1 + R2)
[('VS', 24), ('R2', 100), ('V_out', 6)]
'Value of R1 is 300'
Or using values we pass in:
```python
soln(cct, 'R2', {'VS':24, 'R2':'', 'R1':300, 'V_out':6})
```
R2*VS/(R1 + R2)
[('VS', 24), ('R1', 300), ('V_out', 6)]
'Value of R2 is 100'
With these pieces in place, we can also consider how we might automatically generate circuit diagrams with specified component values, and then perform, or check, calculations against them.
For example, the following function wraps a specific circuit definition and allows specific component values to be passed in:
```python
#We can also explore a simple thing to check the value from a circuit analysis
def cct1(V='24',R1='100',R2='100'):
R1 = '' if R1 and float(R1) <=0 else R1
sch='''
VS 1 0 {V}; down
W 1 2 ; right, size=2
R1 2 3 {R1}; down
R2 3 4 {R2}; down
W 3 5; right, size=2
P1 5 6; down,v=V_{{out}}
W 4 6; right, size=2
W 4 0; left
'''.format(V=V,R1=R1,R2=R2)
cct = Circuit()
cct.add(sch)
cct.draw(label_nodes=False)
#The output voltage, V_out is the voltage across R2
txt='The output voltage, $V_{{out}}$ across $R_2$ is {}V.'.format(cct.R2.v if R1 else V)
return cct
```
We can render the circuit simply by calling the function:
```python
_ = cct1()
```
It's trivial to make an interactive widget built around the previous function that will create a diagram for us with components labeled according to values specified by input sliders.
```{note}
In a live Jupyter backed UI, the following example will render several interactive widgets that allow component values to be specified; once selected, the circuit will be drawn and a missing value calculated automatically.
```
```python
from ipywidgets import interact_manual
cct_w = None
@interact_manual
def i_cct1(V='24', R1='', R2='100'):
global cct_w
cct_w = cct1(V=V,R1=R1,R2=R2)
#We could then select R and V values and extend the function
# to calculate V_out automatically
```
interactive(children=(Text(value='24', description='V'), Text(value='', description='R1'), Text(value='100', d…
```python
i_cct1()
```
```python
# We could also plot V_out vs R_1 for given V_S and R_2?
```
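As a possible follow-up to the comment above, here is a sketch of such a plot; the component values are illustrative assumptions and the relation $V_{out}=\frac{R_2 V_S}{R_1+R_2}$ is the one derived earlier:
```python
# Sketch: V_out against R1 for fixed (assumed) V_S and R_2, using the divider relation above
import numpy as np
import matplotlib.pyplot as plt

VS = 24.0                           # assumed supply voltage (V)
R2 = 100.0                          # assumed fixed resistor (Ohm)
R1 = np.linspace(1.0, 1000.0, 500)  # sweep R1
V_out = R2 * VS / (R1 + R2)

plt.plot(R1, V_out)
plt.xlabel('$R_1$ (Ohm)')
plt.ylabel('$V_{out}$ (V)')
plt.title('Voltage divider output vs $R_1$ ($V_S$=24 V, $R_2$=100 Ohm)')
plt.show()
```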
```python
cct_w.sch
```
VS 1 0 24; down
W 1 2 ; right, size=2
R1 2 3 ; down
R2 3 4 100; down
W 3 5; right, size=2
P1 5 6; down,v=V_{out}
W 4 6; right, size=2
W 4 0; left
```python
```
Interpreting Poisson distribution
=================================
The material in this subsection is adapted from [[1]](#references).
The Poisson distribution can be viewed as an approximation to a binomial distribution. Recall that a binomial distribution with parameters *n* and *p* specifies the probability of having *k* successes in *n* trials when the probability of success in each trial is *p*:
\begin{equation}
P(X=k) = \binom{n}{k} p^k {(1-p)}^{n-k}
\end{equation}
The expected value of this distribution is *np*.
Sometimes, *n* is extremely large and *p* is extremely small; still *np* is within a reasonable range. For example consider the particles discharged from a radioactive material in a short period of time.
```
from IPython.display import SVG
SVG('particles.svg')
```
In this example, the number of particles is very large, and the probability for each individual particle to emit is very small. Still the product of these values is a mid-range number that shows how many particles on average are discharged in that certain period.
Having extreme values for *n* and *p*, we can not directly use the binomial distribution. Let $\lambda = np$. We can approximate the binomial distribution as follows:
\begin{align}
P(X = k) &= \frac{n!}{(n-k)!k!}p^k(1-p)^{n-k} \\
&= \frac{n!}{(n-k)!k!} (\frac{\lambda}{n})^k (1-\frac{\lambda}{n})^{n-k} \\
&= \frac{n(n-1)\ldots(n-k+1)}{n^k}\cdot\frac{\lambda^k}{k!}\cdot\frac{(1-\lambda/n)^n}{(1-\lambda/n)^k}
\end{align}
For large *n* and moderate $\lambda$,
\begin{align}
(1-\frac{\lambda}{n})^n &\simeq e^{-\lambda} \\
\frac{n(n-1)\ldots(n-k+1)}{n^k} &\simeq 1 \\
(1-\frac{\lambda}{n})^k &\simeq 1
\end{align}
Hence, for large $n$ and moderate $\lambda$,
\begin{equation}
P(X=k) = e^{-\lambda}\frac{\lambda^k}{k!}
\end{equation}
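As a quick numerical sanity check of this approximation (assuming `scipy` is available), we can compare the two probability mass functions for a large *n* and small *p*:
```
# Compare binomial(n, p) with Poisson(lambda = n*p) for large n and small p
from scipy.stats import binom, poisson

n, p = 10000, 0.0003      # lambda = n*p = 3
lam = n * p
for k in range(6):
    print(k, round(binom.pmf(k, n, p), 6), round(poisson.pmf(k, lam), 6))
```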
Below is an excerpt from [[1]](#references):
>In other words, if *n* independent trials, each of which results in a success with probability *p*, are performed, then, when *n* is large and *p* is small enough to make *np* moderate, the number of successes occurring is approximately a Poisson random variable with parameter $\lambda = np$. This value $\lambda$ (which will later be shown to equal the expected number of successes) will usually be determined empirically. Some examples of random variables that generally obey the Poisson probability law are as follows:
1. The number of misprints on a page (or a group of pages) of a book
2. The number of people in a community who survive to age 100
3. The number of wrong telephone numbers that are dialed in a day
4. The number of packages of dog biscuits sold in a particular store each day
5. The number of customers entering a post office on a given day
6. The number of vacancies occurring during a year in the federal judicial system
7. The number of $\alpha$-particles discharged in a fixed period of time from some radioactive material
Assumptions about the distribution of trips
===========================================
In our work, we assume that the number of trips from source **s** to destination **d** during the time period **t** follows a Poisson distribution. The meaning of this assumption in terms of the original binomial distribution is that:
**There are many individuals in area _s_. The probability for each individual to take a taxi to area _d_ during period _t_ is very small. However the product of these two values (number of individuals times the probability of trip for each individual) is a moderate number.**
```
SVG('people.svg')
```
The starting core assumption about the distribution of trips from region *s* to region *d* during period *t* is that $\lambda_{s,d,t} = N_{s} \times p_{s,d,t}$, where $N_s$ is the number of individuals residing in region $s$ in period $t$, and $p_{s,d,t}$ is the probability for each of them to take a taxi for destination $d$ in this period.
The simplest approach for estimating the $\lambda$ parameters is to directly obtain them from the data. In our work, we take a different approach, that is decomposing $\lambda$'s into basic constituents and then estimating the values for these constituents.
The first step in decomposition is to consider the trips as a sum of independent Poisson distributions. For example, we can decompose the morning trips from area **A** to area **B** into three sets of independent trips:
- *workers*: Those who live in A and work in B
- *kids*: The children living in A who have to go to their schools in B
- *shoppers*: Individuals living in A who go to B to receive a service
## Some of the assumptions about the *working* trips
Let us now study the *worker* group a bit closer. Travel by this group follows a Poisson distribution with parameter $\lambda_w$. Based on the binomial interpretation of the Poisson distribution, we have $\lambda_w = N_A \times p^w_{B}$, where $N_A$ is the number of residents of region A and $p^w_{B}$ is the probability for a resident to *have a job in region B and take a taxi to their job*.
A distinguishing characteristic of our approach is specification of Poisson parameters by means of logical rules. Consider the following rule that specifies a rate for working trips in the morning:
```
trips(From, To, Time) ~ poisson(50) :-
morning(Time),
residential(From),
business(To).
```
The problem with this rule is that it treats all sources and destinations equally, as long as the source is *residential* and the destination is *business*. However, $\lambda_w$ varies for different sources and destinations. Consider our previous example ($\lambda_w = N_A \times p^w_B$). In this example, $\lambda_w$ depends on:
1. the population of region A ($N_A$).
2. the probability for a resident of A to have a job in B
The following rule takes these factors into account (A detailed justification will follow [later](#details)):
```
trips(From, To, Time) ~ poisson(Param) :-
morning(Time),
residential(From, Density),
job_ratio(From, To, Ratio),
Param is Density*Ratio*200.
```
However, to exploit this rule, we need to know the distribution of workers in each source area for each target area. If we do not have sufficient data about such distributions, it is simpler to assume that the jobs in source areas are distributed according to the number of jobs in target areas:
```
trips(From, To, Time) ~ poisson(Param) :-
morning(Time),
residential(From, Density1),
business(To, Density2),
Param is Density1*Density2*150.
```
## Some of the assumptions about the *shopping* trips
Similar to the *workers*, we can have a binomial interpretation for shoppers: $\lambda_s = N_A \times p^s_{B}$, where $N_A$ is the number of residents of region A and $p^s_{B}$ is the probability for a resident to *require a service in region B and take a taxi to B*.
Again, the simplest model does not account for the dependency of $N_A$ and $p^s_B$ on the characteristics of A and B:
```
trips(From, To, Time) ~ poisson(10) :-
morning(Time),
residential(From),
business(To).
```
To incorporate these characteristics, we use the *gravity model* (more details in another note). Based on this model, each source and destination is viewed as a charged mass, and the probability of a movement is expressed in terms of the weights of the bodies and their distance:
```
trips(From, To, Time) ~ poisson(Param) :-
morning(Time),
residential(From, Density1),
shopping(To, Density2),
inv_distance(From, To, Inverse_Dist),
Param is Density1*Density2*Inverse_Dist*80.
```
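To make the arithmetic in this rule concrete, here is a rough Python sketch of how such a rate could be computed and sampled; all density, distance and base-rate values below are made-up placeholders:
```
# Rough sketch: compute the gravity-model rate and sample a trip count (placeholder values)
import numpy as np

density_from = 0.8          # residential density of the source area (assumed)
density_to = 0.6            # shopping density of the destination area (assumed)
inv_distance = 1.0 / 2.5    # inverse distance between the two areas (assumed)
base_rate = 80              # base parameter, to be learned from data

lam = density_from * density_to * inv_distance * base_rate
trips = np.random.poisson(lam)
print(lam, trips)
```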
## Peripheral factors
Recall that in our interpretation of $\lambda$, the probability $p^s_{\text{dest}}$ means the probability for each individual (residing in the source area) to take a taxi to the destination area. This probability is influenced by two factors:
1. The individual has the intention to go to the destination area
2. The individual takes a taxi to reach there
In previous rules, we assumed that *densities* reflect both factors. However, sometimes peripheral factors influence one (or both) of these factors. For example, we can assume that when it rains, the probability of a shopper taking a taxi increases by a constant factor:
```
trips(From, To, Time) ~ poisson(Param) :-
evening(Time),
shopping(From, Density1),
residential(To, Density2),
inv_distance(From, To, Inverse_Dist),
rain_factor(From, To, Time, Rain_F),
Param is Density1*Density2*Inverse_Dist*Rain_F*80.
rain_factor(_, _, Time, 1) :-
    not(rains_at(Time)).
rain_factor(From, To, Time, 1.4) :-
shopping(From, _),
residential(To, _),
rains_at(Time).
```
Note that by adding such rules, we have other factors to learn (e.g. the impact of rain) which might complicate our learning algorithm.
The meaning of densities
========================
<a id='details'></a>
In the previous section, instead of directly specifying the parameter for each distribution, we expressed the parameter as a product of multiple factors (e.g. *density1 $\times$ density2 $\times$ 80*). Here I will try to justify this choice.
Let variable $X$ be the number of workers taking a taxi from area **A** to area **B** in the morning. We are assuming that $X$ is a Poisson random variable and its parameter can be obtained via this rule:
```
trips(From, To, Time) ~ poisson(Param) :-
morning(Time),
residential(From, Density1),
business(To, Density2),
Param is Density1*Density2*150.
```
Denoting this Poisson parameter by $\lambda$ and the densities by $d_A$ and $d_B$, this rule says that $\lambda = d_A \times d_B \times 150$. Recalling the binomial interpretation of Poisson distribution, this rule implies that:
\begin{equation}
N_A \times p_{A \to B} = d_A \times d_B \times 150
\end{equation}
(Once again, $N_A$ is the number of individuals residing in *A*, and $p_{A \to B}$ is the probability for each of them to take a taxi to *B* for the purpose of work).
Now we are free in our interpretation of the densities, as long as the above equality holds. For example, if we take the density of a residential area ($d_A$) equal to its population, the formula changes into:
\begin{equation}
p = d_B \times 150
\end{equation}
So, the probability for each resident of **A** to have a job in **B** AND take a taxi to their job is reflected by two numbers: $d_B$, which is chosen by the modeler, and 150, which is learned from the data.
But how do we choose $d_B$? It follows that by making simplifying assumptions, we can use the number of jobs in **B** as its density. Assume that for an individual, the chance of having a job is independent of where they live. Also assume that each individual is equally likely to have any of the jobs in the city (regardless of where they live). Denoting the population of A by $P_A$ and the number of jobs in B by $J_B$, we can write $p_{A \to B}$ as $c_1 \times P_A \times c_2 \times J_B$, where $c_1$ is the probability of having a job, and $c_2$ is $\frac{1}{\text{#all jobs}}$.
However, if we are learning the base parameter (150) from the data, we do not need to include the constants $c_1$ and $c_2$, as they will be automatically reflected in the learned parameter. Following the same kind of logic, we can further simplify the model and use areas of **A** and **B** as $d_A$ and $d_B$.
<a id='references'></a>
### References
- [1] Ross, S. (2012). A first course in probability. Pearson.
|
f51297024548b6d4e045a4b55ed71a9d4a6c0a6a
| 83,488 |
ipynb
|
Jupyter Notebook
|
Hanoi/Poisson_Assumptions.ipynb
|
Behrouz-Babaki/notebooks
|
9e84263879ba17e8581b8bdee60aaf3006fa4a73
|
[
"Unlicense"
] | null | null | null |
Hanoi/Poisson_Assumptions.ipynb
|
Behrouz-Babaki/notebooks
|
9e84263879ba17e8581b8bdee60aaf3006fa4a73
|
[
"Unlicense"
] | null | null | null |
Hanoi/Poisson_Assumptions.ipynb
|
Behrouz-Babaki/notebooks
|
9e84263879ba17e8581b8bdee60aaf3006fa4a73
|
[
"Unlicense"
] | null | null | null | 152.07286 | 864 | 0.653627 | true | 2,922 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.91848 | 0.874077 | 0.802823 |
__label__eng_Latn
| 0.999103 | 0.703559 |
```python
# HIDDEN
# flake8: noqa
import sys
import os
projroot = lambda p: p if os.path.isfile(os.path.join(p, "Pipfile")) or p == '/' else projroot(os.path.dirname(p))
sys.path.append(projroot(os.getcwd()))
import pandas as pd
import urllib
import math
import numpy as np
from typing import Tuple, Any
from IPython.display import display, Math, Latex, display_latex, HTML
```
# Bayes formula
```python
# HIDDEN
display(Latex(r"""
\begin{equation}
P(A \mid B) = \frac{P(B \mid A) \, P(A)}{P(B)}
\end{equation}"""))
```
\begin{equation}
P(A \mid B) = \frac{P(B \mid A) \, P(A)}{P(B)}
\end{equation}
Over the years I've encountered numerous probability puzzles. These puzzles usually run counter to human intuition, with surprising answers. The common thread behind these puzzles is that they can all be solved using Bayes' theorem in a consistent way. Below we will go through three examples.
# Example 1
The probability of a disease in the population is 1%. A diagnostic test is 90% accurate on a diseased person, with a 5% misdiagnosis rate on a healthy person. If a randomly selected person is positively diagnosed, what's the probability that they actually have the disease?
- Let A: has disease
- Let B: positively diagnosed
- $P(B \mid A)$: probability of positively diagnosed given having disease = 0.9
- $P(A)$: has disease = 0.01
- $P(B)$: positively diagnosed = 0.01 * 0.9 + 0.99 * 0.05 =0.0585
- $P(A \mid B)$: = (0.9 * 0.01) / 0.0585 = 0.154 = 15%
A positive diagnosis results in a 15% probability of having the actual disease. This seems low, but makes sense intuitively: because the disease is rare, a randomly chosen person is more likely to be disease free and misdiagnosed. A common-sense thing to do is to get a second test. Now we can replace $P(A)$ with 0.15 instead of 0.01; reapplying Bayes' theorem means a second positive diagnosis results in roughly a 77% chance of having the disease.
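A small sketch to verify these numbers (the helper function below is ad hoc, not from any library):
```python
# Apply Bayes' rule for one positive test, then reuse the posterior as the new prior
def bayes_update(prior, sensitivity, false_positive_rate):
    evidence = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / evidence

first = bayes_update(0.01, 0.9, 0.05)    # ~0.154 after one positive test
second = bayes_update(first, 0.9, 0.05)  # ~0.766 after a second positive test
print(round(first, 3), round(second, 3))
```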
# Example 2
Monty Hall. There are 3 closed doors, 1 with a prize behind it. The contestant randomly selects 1 door. The host, knowing what's behind the doors, picks one of the remaining two doors that is empty and reveals it to the contestant. The host then asks the contestant if he/she wants to switch. Should the contestant switch?
Before we reframe the question in Bayes, lets fix the parameters: say contestant picks door1, and
host reveals door3. So the question becomes
1. $P(Prize=door1 \mid Open=door3)$: probability of prize behind door1 given host opens door3
2. $P(Prize=door2 \mid Open=door3)$: probability of prize behind door2 given host opens door3
Which of above two is greater? If 2 is greater, then definitely switch!
Note $P(Prize=door3 \mid Open=door3)$ is not among consideration, by the question definition this is eliminated from
the probability space.
Next we solve for 1 and 2
1. $P(Prize=door1 \mid Open=door3)$ = $\frac{P(Open=door3 \mid Prize=door1) P(Prize=door1)}{P(Open=door3)}$
- $P(Open=door3 \mid Prize=door1)$, probability of host open door3 given prize is behind door1, = 1/2
- $P(Prize=door1)$, probability of prize is behind door1, = 1/3
- $P(Open=door3)$, probability of host open door3 in general, = 1/2
- result = (1/2 * 1/3) / 1/2 = 1/3
    - This also matches intuition: the contestant picked 1 of 3 doors at random and stayed with the pick, so the probability of winning should remain 1/3.
2. $P(Prize=door2 \mid Open=door3)$ = $\frac{P(Open=door3 \mid Prize=door2) P(Prize=door2)}{P(Open=door3)}$
- $P(Open=door3 \mid Prize=door2)$, probability of host open door3 given prize is behind door2, = 1. In other words, the host has no choice.
- $P(Prize=door2)$, probability of prize is behind door2, = 1/3
- $P(Open=door3)$, probability of host open door3 in general, = 1/2
- result = (1 * 1/3) / 1/2 = 2/3
2/3 > 1/3, switch is a good idea!
Another way to think of this problem is that when the host removed an empty door, the host removed the uncertainty:
2 doors out of three by definition have a 2/3 chance of containing the prize; now that the empty door is removed, the entire 2/3 probability is assigned to the only remaining closed door.
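A Monte Carlo sketch (an ad-hoc simulation, purely to check the numbers above) gives the same answer:
```python
# Simulate the Monty Hall game with and without switching
import random

def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)
        pick = random.randrange(3)
        # host opens a door that is neither the contestant's pick nor the prize
        opened = next(d for d in range(3) if d not in (pick, prize))
        if switch:
            pick = next(d for d in range(3) if d not in (pick, opened))
        wins += (pick == prize)
    return wins / trials

print(play(switch=False), play(switch=True))   # roughly 1/3 and 2/3
```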
Now suppose the question is modified as such: the host's toddler son wandered in and accidentally knocked over door3, which is revealed to be empty. Does switching doors enhance the contestant's probability of winning?
1. $P(Prize=door1 \mid Open=door3)$ = $\frac{P(Open=door3 \mid Prize=door1) P(Prize=door1)}{P(Open=door3)}$
- $P(Open=door3 \mid Prize=door1)$, probability of toddler knocks door3 given prize is behind door1, = 1/2
- $P(Prize=door1)$, probability of prize is behind door1, = 1/3
- $P(Open=door3)$, probability of toddler knocks over door3 in general, = 1/2
- result = (1/2 * 1/3) / 1/2 = 1/3
2. $P(Prize=door2 \mid Open=door3)$ = $\frac{P(Open=door3 \mid Prize=door2) P(Prize=door2)}{P(Open=door3)}$
- $P(Open=door3 \mid Prize=door2)$, probability of toddler knocks door3 given prize is behind door2, = 1/2
- $P(Prize=door2)$, probability of prize is behind door2, = 1/3
- $P(Open=door3)$, probability of toddler knocks over door3 in general, = 1/2
- result = (1/2 * 1/3) / 1/2 = 1/3
1/3 = 1/3, switching makes no difference!
From contestant's perspective, both scenarios appear the same. However, the actual probability of winning the prize
after switching the door varies a great deal depending on the **circumstances** behind the scenario: whether the host
chose the empty door with or without the knowledge of the prize location. Contemplate on this for a moment,
then proceed to the next example.
# Example 3
## Version 1
We have a random sample of parents with exactly two children. We then ask one parent if at least one of the child
is a daughter. If the parent says yes, what's the probability that the other child is also a daughter?
If the other child is also a daughter, then both children are daughters, we denote this as P(DD). We then denote that
P(D) is at least one child is daughter. So we are seeking $P(DD | D)$.
$P(DD | D)$ = $\frac{P(D \mid DD) P(DD)}{P(D)}$
- $P(D \mid DD)$, probability one is daughter given both are daughters: 1
- $P(DD)$, probability both are daughters: 1/4
- $P(D)$, probability at least one daughter = 1 - probability of both sons = 1 - 1/4 = 3/4
- result = (1 * 1/4) / (3/4) = 1/3
The probability of the other child is a daughter is 1/3.
## Version 2
Same as version 1, except we ask each parent if they have a daughter named Lucy. If the parent says yes, what's the
probability that the other child is also a daughter?
Now we are looking for $P(DD | Lucy)$.
$P(DD | Lucy)$ = $\frac{P(Lucy \mid DD) P(DD)}{P(Lucy)}$
- $P(DD)$, probability both are daughters: 1/4
- $P(Lucy)$, probability of a girl is named Lucy, lets assume it's L (the actual value doesn't really matter).
- $P(Lucy \mid DD)$, probability of at least one girl is named Lucy given there are two daughters, this is 2 * $P(Lucy)$ = 2 * L.
- result = (2 \* P(Lucy) * 1/4) / P(Lucy) = (2 * L * 1/4) / L = 1/2
So the probability that the other child is also a daughter is 1/2.
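A quick simulation sketch of this version (the 5% naming probability below is an arbitrary assumption) agrees with the 1/2 result:
```python
# Condition on "at least one daughter named Lucy" and check the fraction of two-daughter families
import random

def p_both_daughters_given_lucy(n=200_000, p_lucy=0.05):
    both = with_lucy = 0
    for _ in range(n):
        kids = [(random.random() < 0.5,              # child is a daughter?
                 random.random() < p_lucy)           # child is named Lucy (only counts for daughters)
                for _ in range(2)]
        if any(girl and lucy for girl, lucy in kids):
            with_lucy += 1
            both += all(girl for girl, _ in kids)
    return both / with_lucy

print(round(p_both_daughters_given_lucy(), 2))       # close to 0.5
```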
This example also illustrates the essence of Bayesian reasoning: the posterior probability is affected by the input (priors), or
our knowledge of the world!
|
c9557b16ae35b73054f75c27a2028d6623af1f85
| 8,750 |
ipynb
|
Jupyter Notebook
|
bayes.ipynb
|
mhzed/dsblog
|
f63d80fdb50c7846d052cb2393c218fc39755d4a
|
[
"MIT"
] | null | null | null |
bayes.ipynb
|
mhzed/dsblog
|
f63d80fdb50c7846d052cb2393c218fc39755d4a
|
[
"MIT"
] | null | null | null |
bayes.ipynb
|
mhzed/dsblog
|
f63d80fdb50c7846d052cb2393c218fc39755d4a
|
[
"MIT"
] | null | null | null | 8,750 | 8,750 | 0.663657 | true | 2,263 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.835484 | 0.923039 | 0.771184 |
__label__eng_Latn
| 0.99809 | 0.630051 |
<table border="0">
<tr>
<td>
</td>
<td>
</td>
</tr>
</table>
# Double Machine Learning: Use Cases and Examples
Double Machine Learning (DML) is an algorithm that applies arbitrary machine learning methods
to fit the treatment and response, then uses a linear model to predict the response residuals
from the treatment residuals.
The EconML SDK implements the following DML classes:
* DMLCateEstimator: suitable for estimating heterogeneous treatment effects.
* SparseLinearDMLCateEstimator: suitable for the case when $W$ is high dimensional vector and both the first stage and second stage estimate are linear.
In ths notebook, we show the performance of the DML on both synthetic data and observational data.
**Notebook contents:**
1. Example usage with single continuous treatment synthetic data
2. Example usage with multiple continuous treatment synthetic data
3. Example usage with single continuous treatment observational data
4. Example usage with multiple continuous treatment, multiple outcome observational data
```python
import econml
```
```python
## Ignore warnings
import warnings
warnings.filterwarnings('ignore')
```
```python
# Main imports
from econml.dml import DMLCateEstimator,SparseLinearDMLCateEstimator
# Helper imports
import numpy as np
from itertools import product
from sklearn.linear_model import Lasso, LassoCV, LogisticRegression, LogisticRegressionCV,LinearRegression,MultiTaskElasticNet,MultiTaskElasticNetCV
from sklearn.ensemble import RandomForestRegressor,RandomForestClassifier
from sklearn.preprocessing import PolynomialFeatures
import matplotlib.pyplot as plt
import matplotlib
%matplotlib inline
```
## 1. Example Usage with Single Continuous Treatment Synthetic Data
### 1.1. DGP
We use the data generating process (DGP) from [here](https://arxiv.org/abs/1806.03467). The DGP is described by the following equations:
\begin{align}
T =& \langle W, \beta\rangle + \eta, & \;\eta \sim \text{Uniform}(-1, 1)\\
Y =& T\cdot \theta(X) + \langle W, \gamma\rangle + \epsilon, &\; \epsilon \sim \text{Uniform}(-1, 1)\\
W \sim& \text{Normal}(0,\, I_{n_w})\\
X \sim& \text{Uniform}(0,1)^{n_x}
\end{align}
where $W$ is a matrix of high-dimensional confounders and $\beta, \gamma$ have high sparsity.
For this DGP,
\begin{align}
\theta(x) = \exp(2\cdot x_1).
\end{align}
```python
# Treatment effect function
def exp_te(x):
return np.exp(2*x[0])
```
```python
# DGP constants
np.random.seed(123)
n=1000
n_w=30
support_size=5
n_x=1
# Outcome support
support_Y=np.random.choice(range(n_w),size=support_size,replace=False)
coefs_Y=np.random.uniform(0,1,size=support_size)
epsilon_sample=lambda n: np.random.uniform(-1,1,size=n)
# Treatment support
support_T=support_Y
coefs_T=np.random.uniform(0,1,size=support_size)
eta_sample=lambda n: np.random.uniform(-1,1,size=n)
# Generate controls, covariates, treatments and outcomes
W=np.random.normal(0,1,size=(n,n_w))
X=np.random.uniform(0,1,size=(n,n_x))
# Heterogeneous treatment effects
TE=np.array([exp_te(x_i) for x_i in X])
T=np.dot(W[:,support_T],coefs_T)+eta_sample(n)
Y=TE*T+np.dot(W[:,support_Y],coefs_Y)+epsilon_sample(n)
# Generate test data
X_test = np.array(list(product(np.arange(0, 1, 0.01), repeat=n_x)))
```
### 1.2. Train Estimator
We train models in three different ways, and compare their performance.
#### 1.2.1. Default Setting
```python
est = DMLCateEstimator(model_y=RandomForestRegressor(),model_t=RandomForestRegressor())
est.fit(Y, T, X, W)
te_pred = est.const_marginal_effect(X_test)
```
#### 1.2.2. Polynomial Features for Heterogeneity
```python
est = DMLCateEstimator(model_y=RandomForestRegressor(),model_t=RandomForestRegressor(),featurizer=PolynomialFeatures(degree=2))
est.fit(Y, T, X, W)
te_pred1=est.const_marginal_effect(X_test)
```
#### 1.2.3. Polynomial Features with regularization
```python
est = DMLCateEstimator(model_y=RandomForestRegressor(),model_t=RandomForestRegressor(),model_final=LassoCV(),featurizer=PolynomialFeatures(degree=10))
est.fit(Y, T, X, W)
te_pred2=est.const_marginal_effect(X_test)
```
### 1.3. Performance Visualization
```python
plt.figure(figsize=(10,6))
plt.plot(X_test,te_pred,label='DML default')
plt.plot(X_test,te_pred1,label='DML polynomial degree=2')
plt.plot(X_test,te_pred2,label='DML polynomial degree=10 with Lasso')
expected_te=np.array([exp_te(x_i) for x_i in X_test])
plt.plot(X_test,expected_te,'b--',label='True effect')
plt.ylabel('Treatment Effect')
plt.xlabel('x')
plt.legend()
plt.show()
```
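As an optional check (not part of the original workflow), we can compare each estimate against the true effect numerically, reusing `expected_te` from the plotting cell above:
```python
# Rough accuracy comparison of the three fits against the true treatment effect
for name, pred in [('default', te_pred), ('poly degree=2', te_pred1), ('poly degree=10 + Lasso', te_pred2)]:
    mse = np.mean((np.array(pred).reshape(-1) - expected_te) ** 2)
    print(name, 'MSE vs true effect:', round(float(mse), 4))
```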
## 2. Example Usage with Multiple Continuous Treatment Synthetic Data
### 2.1. DGP
We use the data generating process (DGP) from [here](https://arxiv.org/abs/1806.03467), and modify the treatment to generate multiple treatments. The DGP is described by the following equations:
\begin{align}
T =& \langle W, \beta\rangle + \eta, & \;\eta \sim \text{Uniform}(-1, 1)\\
Y =& T\cdot \theta_{1}(X) + T^{2}\cdot \theta_{2}(X) + \langle W, \gamma\rangle + \epsilon, &\; \epsilon \sim \text{Uniform}(-1, 1)\\
W \sim& \text{Normal}(0,\, I_{n_w})\\
X \sim& \text{Uniform}(0,1)^{n_x}
\end{align}
where $W$ is a matrix of high-dimensional confounders and $\beta, \gamma$ have high sparsity.
For this DGP,
\begin{align}
\theta_{1}(x) = \exp(2\cdot x_1)\\
\theta_{2}(x) = x_1^{2}\\
\end{align}
```python
# DGP constants
np.random.seed(123)
n=1000
n_w=30
support_size=5
n_x=1
# Outcome support
support_Y=np.random.choice(range(n_w),size=support_size,replace=False)
coefs_Y=np.random.uniform(0,1,size=support_size)
epsilon_sample=lambda n: np.random.uniform(-1,1,size=n)
# Treatment support
support_T=support_Y
coefs_T=np.random.uniform(0,1,size=support_size)
eta_sample=lambda n: np.random.uniform(-1,1,size=n)
# Generate controls, covariates, treatments and outcomes
W=np.random.normal(0,1,size=(n,n_w))
X=np.random.uniform(0,1,size=(n,n_x))
# Heterogeneous treatment effects
TE1=np.array([exp_te(x_i) for x_i in X])
TE2=np.array([x_i**2 for x_i in X]).flatten()
T=np.dot(W[:,support_T],coefs_T)+eta_sample(n)
Y=TE1*T+TE2*T**2+np.dot(W[:,support_Y],coefs_Y)+epsilon_sample(n)
# Generate test data
X_test = np.array(list(product(np.arange(0, 1, 0.01), repeat=n_x)))
```
### 2.2. Train Estimator
```python
est = DMLCateEstimator(model_y=RandomForestRegressor(),model_t=RandomForestRegressor())
```
```python
T=T.reshape(-1,1)
est.fit(Y, np.concatenate((T, T**2), axis=1), X, W)
```
```python
te_pred = est.const_marginal_effect(X_test)
```
### 2.3. Performance Visualization
```python
plt.figure(figsize=(10,6))
plt.plot(X_test, te_pred[:,0], label='DML estimate1')
plt.plot(X_test, te_pred[:,1], label='DML estimate2')
expected_te1 = np.array([exp_te(x_i) for x_i in X_test])
expected_te2=np.array([x_i**2 for x_i in X_test]).flatten()
plt.plot(X_test, expected_te1, '--', label='True effect1')
plt.plot(X_test, expected_te2, '--', label='True effect2')
plt.ylabel("Treatment Effect")
plt.xlabel("x")
plt.legend()
plt.show()
```
## 3. Example Usage with Single Continuous Treatment Observational Data
We applied our technique to Dominick’s dataset, a popular historical dataset of store-level orange juice prices and sales provided by University of Chicago Booth School of Business.
The dataset comprises a large number of covariates $W$, but researchers might only be interested in learning the elasticity of demand as a function of a few variables $x$ such
as income or education.
We applied the `DMLCateEstimator` to estimate orange juice price elasticity
as a function of income, and our results unveil the natural phenomenon that lower-income consumers are more price-sensitive.
### 3.1. Data
```python
# A few more imports
import os
import pandas as pd
import urllib.request
from sklearn.preprocessing import StandardScaler
```
```python
# Import the data
file_name = "oj_large.csv"
if not os.path.isfile(file_name):
print("Downloading file (this might take a few seconds)...")
urllib.request.urlretrieve("https://msalicedatapublic.blob.core.windows.net/datasets/OrangeJuice/oj_large.csv", file_name)
oj_data = pd.read_csv(file_name)
```
```python
oj_data.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>store</th>
<th>brand</th>
<th>week</th>
<th>logmove</th>
<th>feat</th>
<th>price</th>
<th>AGE60</th>
<th>EDUC</th>
<th>ETHNIC</th>
<th>INCOME</th>
<th>HHLARGE</th>
<th>WORKWOM</th>
<th>HVAL150</th>
<th>SSTRDIST</th>
<th>SSTRVOL</th>
<th>CPDIST5</th>
<th>CPWVOL5</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>2</td>
<td>tropicana</td>
<td>40</td>
<td>9.018695</td>
<td>0</td>
<td>3.87</td>
<td>0.232865</td>
<td>0.248935</td>
<td>0.11428</td>
<td>10.553205</td>
<td>0.103953</td>
<td>0.303585</td>
<td>0.463887</td>
<td>2.110122</td>
<td>1.142857</td>
<td>1.92728</td>
<td>0.376927</td>
</tr>
<tr>
<th>1</th>
<td>2</td>
<td>tropicana</td>
<td>46</td>
<td>8.723231</td>
<td>0</td>
<td>3.87</td>
<td>0.232865</td>
<td>0.248935</td>
<td>0.11428</td>
<td>10.553205</td>
<td>0.103953</td>
<td>0.303585</td>
<td>0.463887</td>
<td>2.110122</td>
<td>1.142857</td>
<td>1.92728</td>
<td>0.376927</td>
</tr>
<tr>
<th>2</th>
<td>2</td>
<td>tropicana</td>
<td>47</td>
<td>8.253228</td>
<td>0</td>
<td>3.87</td>
<td>0.232865</td>
<td>0.248935</td>
<td>0.11428</td>
<td>10.553205</td>
<td>0.103953</td>
<td>0.303585</td>
<td>0.463887</td>
<td>2.110122</td>
<td>1.142857</td>
<td>1.92728</td>
<td>0.376927</td>
</tr>
<tr>
<th>3</th>
<td>2</td>
<td>tropicana</td>
<td>48</td>
<td>8.987197</td>
<td>0</td>
<td>3.87</td>
<td>0.232865</td>
<td>0.248935</td>
<td>0.11428</td>
<td>10.553205</td>
<td>0.103953</td>
<td>0.303585</td>
<td>0.463887</td>
<td>2.110122</td>
<td>1.142857</td>
<td>1.92728</td>
<td>0.376927</td>
</tr>
<tr>
<th>4</th>
<td>2</td>
<td>tropicana</td>
<td>50</td>
<td>9.093357</td>
<td>0</td>
<td>3.87</td>
<td>0.232865</td>
<td>0.248935</td>
<td>0.11428</td>
<td>10.553205</td>
<td>0.103953</td>
<td>0.303585</td>
<td>0.463887</td>
<td>2.110122</td>
<td>1.142857</td>
<td>1.92728</td>
<td>0.376927</td>
</tr>
</tbody>
</table>
</div>
```python
# Prepare data
Y = oj_data['logmove'].values
T = np.log(oj_data["price"]).values
scaler = StandardScaler()
W1 = scaler.fit_transform(oj_data[[c for c in oj_data.columns if c not in ['price', 'logmove', 'brand', 'week', 'store','INCOME']]].values)
W2 = pd.get_dummies(oj_data[['brand']]).values
W = np.concatenate([W1, W2], axis=1)
X=scaler.fit_transform(oj_data[['INCOME']].values)
```
```python
## Generate test data
min_income = -1
max_income = 1
delta = (1 - (-1)) / 100
X_test = np.arange(min_income, max_income + delta - 0.001, delta).reshape(-1,1)
```
### 3.2. Train Estimator
```python
est = DMLCateEstimator(model_y=RandomForestRegressor(),model_t=RandomForestRegressor())
est.fit(Y, T, X, W)
te_pred=est.const_marginal_effect(X_test)
```
### 3.3. Performance Visualization
```python
# Plot Orange Juice elasticity as a function of income
plt.figure(figsize=(10,6))
plt.plot(X_test, te_pred, label="OJ Elasticity")
plt.xlabel(r'Scale(Income)')
plt.ylabel('Orange Juice Elasticity')
plt.legend()
plt.title("Orange Juice Elasticity vs Income")
plt.show()
```
### 3.4. Bootstrap Confidence Intervals
```python
from econml.bootstrap import BootstrapEstimator
boot_est=BootstrapEstimator(DMLCateEstimator(model_y=RandomForestRegressor(),
model_t=RandomForestRegressor()),n_bootstrap_samples=20)
boot_est.fit(Y, T, X, W)
te_pred_interval = boot_est.const_marginal_effect_interval(X_test, lower=1, upper=99)
```
```python
# Plot Orange Juice elasticity as a function of income
plt.figure(figsize=(10,6))
plt.plot(X_test.flatten(), te_pred, label="OJ Elasticity")
plt.fill_between(X_test.flatten(), te_pred_interval[0], te_pred_interval[1], alpha=.5, label="1-99% CI")
plt.xlabel(r'Scale(Income)')
plt.ylabel('Orange Juice Elasticity')
plt.title("Orange Juice Elasticity vs Income")
plt.legend()
plt.show()
```
## 4. Example Usage with Multiple Continuous Treatment, Multiple Outcome Observational Data
We use the same data, but in this case we want to fit the demand of multiple brands as a function of the price of each of them, i.e. fit the matrix of cross-price elasticities. This can be done by simply setting $Y$ to be the vector of demands and $T$ to be the vector of prices. Then we can obtain the matrix of cross-price elasticities.
\begin{align}
Y=[Logmove_{tropicana},Logmove_{minute.maid},Logmove_{dominicks}] \\
T=[Logprice_{tropicana},Logprice_{minute.maid},Logprice_{dominicks}] \\
\end{align}
### 4.1. Data
```python
# Import the data
oj_data = pd.read_csv(file_name)
```
```python
# Prepare data
oj_data['price']=np.log(oj_data["price"])
# Transform dataset.
# For each store in each week, get a vector of logmove and a vector of logprice for each brand.
# Other features are store specific, will be the same for all brands.
groupbylist=["store","week","AGE60","EDUC","ETHNIC","INCOME","HHLARGE","WORKWOM","HVAL150",
"SSTRDIST","SSTRVOL","CPDIST5","CPWVOL5"]
oj_data1=pd.pivot_table(oj_data,index=groupbylist,columns=oj_data.groupby(groupbylist).cumcount(),
values=['logmove','price'],aggfunc='sum').reset_index()
oj_data1.columns=oj_data1.columns.map('{0[0]}{0[1]}'.format)
oj_data1=oj_data1.rename(index=str,columns={"logmove0": "logmove_T", "logmove1": "logmove_M",
"logmove2":"logmove_D","price0":"price_T","price1":"price_M","price2":"price_D"})
# Define Y,T,X,W
Y = oj_data1[['logmove_T',"logmove_M","logmove_D"]].values
T=oj_data1[['price_T',"price_M","price_D"]].values
scaler = StandardScaler()
W=scaler.fit_transform(oj_data1[[c for c in groupbylist if c not in ['week', 'store','INCOME']]].values)
X=scaler.fit_transform(oj_data1[['INCOME']].values)
```
```python
## Generate test data
min_income = -1
max_income = 1
delta = (1 - (-1)) / 100
X_test = np.arange(min_income, max_income + delta - 0.001, delta).reshape(-1,1)
```
### 4.2. Train Estimator
```python
est = DMLCateEstimator(model_y=MultiTaskElasticNetCV(cv=3),model_t=MultiTaskElasticNetCV(cv=3))
est.fit(Y, T, X, W)
te_pred=est.const_marginal_effect(X_test)
```
### 4.3. Performance Visualization
```python
# Plot Orange Juice elasticity as a function of income
plt.figure(figsize=(18,10))
dic={0:"Tropicana",1:"Minute.maid",2:"Dominicks"}
for i in range(3):
for j in range(3):
plt.subplot(3,3, 3*i+j+1)
plt.plot(X_test, te_pred[:,i,j],color="C{}".format(str(3*i+j)),label="OJ Elasticity {} to {}".format(dic[j],dic[i]))
plt.xlabel(r'Scale(Income)')
plt.ylabel('Orange Juice Elasticity')
plt.legend()
plt.suptitle("Orange Juice Elasticity vs Income",fontsize=16)
plt.show()
```
**Findings**: Looking at the diagonal of the matrix, the effect of a brand's own price on its sales is always negative across all brands, but people with higher income are less price-sensitive. By contrast, for the off-diagonal entries, the effect of other brands' prices on a given brand's sales is always positive, and the way income modulates this effect differs across competitors. In addition, compared to the previous plot, the negative own-price effects for each brand are all larger than the effect estimated when considering all brands together, which means we would have underestimated the effect of price changes on demand.
### 4.4. Bootstrap Confidence Intervals
```python
from econml.bootstrap import BootstrapEstimator
boot_est=BootstrapEstimator(DMLCateEstimator(model_y=MultiTaskElasticNetCV(cv=3),model_t=MultiTaskElasticNetCV(cv=3))
,n_bootstrap_samples=20)
boot_est.fit(Y,T,X,W)
te_pred_interval = boot_est.const_marginal_effect_interval(X_test, lower=1, upper=99)
```
```python
# Plot Orange Juice elasticity as a function of income
plt.figure(figsize=(18,10))
dic={0:"Tropicana",1:"Minute.maid",2:"Dominicks"}
for i in range(3):
for j in range(3):
plt.subplot(3,3, 3*i+j+1)
plt.plot(X_test, te_pred[:,i,j],color="C{}".format(str(3*i+j)),label="OJ Elasticity {} to {}".format(dic[j],dic[i]))
plt.fill_between(X_test.flatten(), te_pred_interval[0][:, i, j],te_pred_interval[1][:, i,j], color="C{}".format(str(3*i+j)),alpha=.5, label="1-99% CI")
plt.xlabel(r'Scale(Income)')
plt.ylabel('Orange Juice Elasticity')
plt.legend()
plt.suptitle("Orange Juice Elasticity vs Income",fontsize=16)
plt.show()
```
**Note**: For expository purposes, we only use 20 bootstrap samples here to compute the confidence interval. More samples are needed for the bootstrap to be valid.
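If more reliable intervals are needed, the same pattern applies with a larger number of bootstrap replicates, at the cost of a proportionally longer fit time. A minimal sketch (the value 200 is an arbitrary illustrative choice, not a recommendation from the original notebook):
```python
# Sketch: same estimator as above, but with more bootstrap replicates.
boot_est_more = BootstrapEstimator(
    DMLCateEstimator(model_y=MultiTaskElasticNetCV(cv=3),
                     model_t=MultiTaskElasticNetCV(cv=3)),
    n_bootstrap_samples=200)  # illustrative; larger than the 20 used above
boot_est_more.fit(Y, T, X, W)
te_pred_interval = boot_est_more.const_marginal_effect_interval(X_test, lower=1, upper=99)
```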
# Integrating the moment equations - Protein
(c) 2017 Manuel Razo. This work is licensed under a [Creative Commons Attribution License CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/). All code contained herein is licensed under an [MIT license](https://opensource.org/licenses/MIT)
---
```python
import cloudpickle
# Our numerical workhorses
import numpy as np
import scipy as sp
import pandas as pd
# Import matplotlib stuff for plotting
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import matplotlib as mpl
# Seaborn, useful for graphics
import seaborn as sns
# Import the utils for this project
import chann_cap_utils as chann_cap
# Set PBoC plotting style
chann_cap.set_plotting_style()
# Magic function to make matplotlib inline; other style specs must come AFTER
%matplotlib inline
# This enables SVG graphics inline (only use with static plots (non-Bokeh))
%config InlineBackend.figure_format = 'svg'
tmpdir = '../../tmp/'
figdir = '../../fig/moment_dynamics_numeric/'
```
### $\LaTeX$ macros
$\newcommand{kpon}{k^{(p)}_{\text{on}}}$
$\newcommand{kpoff}{k^{(p)}_{\text{off}}}$
$\newcommand{kron}{k^{(r)}_{\text{on}}}$
$\newcommand{kroff}{k^{(r)}_{\text{off}}}$
$\newcommand{rm}{r _m}$
$\newcommand{gm}{\gamma _m}$
$\newcommand{mm}{\left\langle m \right\rangle}$
$\newcommand{ee}[1]{\left\langle #1 \right\rangle}$
$\newcommand{bb}[1]{\mathbf{#1}}$
## Numerically integrating the moment equations.
In this notebook we will numerically integrate the differential equations for both the mRNA and protein distribution moments for the two- and three-state promoter.
Up to this stage we wrote the chemical master equation in matrix notation and did some tricks when it came to computing the $n^{\text{th}}$ moment. Now we will use the equations we obtained for the time derivative of the mRNA distribution moments and try to solve them.
### Defining the moment equations.
We will define a single function that captures all moments needed to compute the protein third moment $\ee{\bb{p}^3}$: the first three moments of the mRNA distribution, the cross-correlation terms between protein and mRNA, and the three moments of the protein distribution. For reference, let's write down all the equations.
### mRNA moments
\begin{align}
% zeroth moment
{d \ee{\bb{m}^0} \over dt} &= \mathbf{K}_m \left\langle \mathbf{m}^0 \right\rangle.\\
% first moment
{d \ee{\bb{m}^1}\over dt} &=
\left( \mathbf{K}_m - \mathbf{\Gamma}_m \right)\ee{\bb{m}^1}
+ \mathbf{R}_m \left\langle \mathbf{m}^0 \right\rangle.\\
% second moment
{d\ee{\bb{m}^2} \over dt} &= \left( \mathbf{K}_m - 2 \mathbf{\Gamma}_m \right)
\left\langle \mathbf{m}^2 \right\rangle
+ \left( 2 \mathbf{R}_m + \mathbf{\Gamma}_m \right)
\left\langle \mathbf{m}^1 \right\rangle
+ \mathbf{R}_m
\left\langle \mathbf{m}^0 \right\rangle\\
% third moment
{d\ee{\bb{m}^3} \over dt} &= \left( \mathbf{K}_m - 3 \mathbf{\Gamma}_m \right)
\left\langle \mathbf{m}^3 \right\rangle
+ \left( 3 \mathbf{R}_m + 3 \mathbf{\Gamma}_m \right)
\left\langle \mathbf{m}^2 \right\rangle
+ \left( 3 \mathbf{R}_m - \mathbf{\Gamma}_m \right)
\left\langle \mathbf{m}^1 \right\rangle
+ \mathbf{R}_m
\left\langle \mathbf{m}^0 \right\rangle
\end{align}
### Protein moments (and cross correlations)
\begin{align}
% first moment
{d \ee{\bb{p}^1}\over dt} &=
\left( \mathbf{K} - \mathbf{\Gamma}_p \right) \ee{\bb{p}^1}
+ \mathbf{R}_p \left\langle \mathbf{m} \right\rangle.\\
% <mp>
{d \ee{\bb{mp}}\over dt} &=
\left( \mathbf{K} - \mathbf{\Gamma}_m - \mathbf{\Gamma}_p \right)
\left\langle \mathbf{mp} \right\rangle
+ \mathbf{R}_m \left\langle \mathbf{p} \right\rangle
+ \mathbf{R}_p \left\langle \mathbf{m}^2 \right\rangle \\
% second moment
{d \ee{\bb{p}^2}\over dt} &=
\left( \mathbf{K} - 2 \mathbf{\Gamma}_p \right)
\left\langle \mathbf{p}^2 \right\rangle
+ \mathbf{\Gamma}_p \left\langle \mathbf{p} \right\rangle
+ \mathbf{R}_p \mm
+ 2 \mathbf{R}_p \left\langle \mathbf{mp} \right\rangle \\
% <m^2p>
{d \ee{\bb{m}^2\bb{p}}\over dt} &=
\left( \bb{K} - 2 \bb{\Gamma}_m - \bb{\Gamma}_p \right)
\ee{\bb{m}^2\bb{p}} +
\bb{R}_m \ee{\bb{p}} +
\left( \bb{R}_m + \bb{\Gamma}_m \right) \ee{\bb{mp}} +
\bb{R}_p \ee{\bb{m}^3}.\\
% <mp^2>
{d \ee{\bb{m}\bb{p}^2}\over dt} &=
\left( \bb{K} - \bb{\Gamma}_m - 2 \bb{\Gamma}_p \right)
\ee{\bb{mp}^2} +
\bb{R}_m \ee{\bb{p}^2} +
2 \bb{R}_p \ee{\bb{m}^2\bb{p}} +
\bb{R}_p \ee{\bb{m}^2} +
\bb{\Gamma}_p \ee{\bb{mp}}.\\
% third moment
{d \ee{\bb{p}^3}\over dt} &=
\left( \bb{K} - 3 \bb{\Gamma}_p \right) \ee{\bb{p}^3} +
3 \bb{\Gamma}_p \ee{\bb{p}^2} -
\bb{\Gamma}_p \ee{\bb{p}} +
3 \bb{R}_p \ee{\bb{mp}^2} +
3 \bb{R}_p \ee{\bb{mp}} +
\bb{R}_p \ee{\bb{m}}.
\end{align}
Let's now implement these equations in a function.
```python
def dpdt(mp, t, Kmat, Rm, Gm, Rp, Gp):
'''
function to integrate all mRNA and protein moment dynamics
using scipy.integrate.odeint
Parameters
----------
    mp : array-like.
        Array containing all moments (mRNA, protein and cross correlations)
        Unregulated:
        mp[0] = m0_P (RNAP bound)
        mp[1] = m0_E (Empty promoter)
        mp[2] = m1_P (RNAP bound)
        mp[3] = m1_E (Empty promoter)
        mp[4] = m2_P (RNAP bound)
        mp[5] = m2_E (Empty promoter)
        mp[6] = m3_P (RNAP bound)
        mp[7] = m3_E (Empty promoter)
        mp[8] = p1_P (RNAP bound)
        mp[9] = p1_E (Empty promoter)
        mp[10] = mp_P (RNAP bound)
        mp[11] = mp_E (Empty promoter)
        mp[12] = p2_P (RNAP bound)
        mp[13] = p2_E (Empty promoter)
        mp[14] = m2p_P (RNAP bound)
        mp[15] = m2p_E (Empty promoter)
        mp[16] = mp2_P (RNAP bound)
        mp[17] = mp2_E (Empty promoter)
        mp[18] = p3_P (RNAP bound)
        mp[19] = p3_E (Empty promoter)
---------
Regulated:
mp[0] = m0_P (RNAP bound)
mp[1] = m0_E (Empty promoter)
mp[2] = m0_R (Repressor bound)
mp[3] = m1_P (RNAP bound)
mp[4] = m1_E (Empty promoter)
mp[5] = m1_R (Repressor bound)
mp[6] = m2_P (RNAP bound)
mp[7] = m2_E (Empty promoter)
mp[8] = m2_R (Repressor bound)
mp[9] = m3_P (RNAP bound)
mp[10] = m3_E (Empty promoter)
mp[11] = m3_R (Repressor bound)
mp[12] = p1_P (RNAP bound)
mp[13] = p1_E (Empty promoter)
mp[14] = p1_R (Repressor bound)
mp[15] = mp_P (RNAP bound)
mp[16] = mp_E (Empty promoter)
mp[17] = mp_R (Repressor bound)
mp[18] = p2_P (RNAP bound)
mp[19] = p2_E (Empty promoter)
mp[20] = p2_R (Repressor bound)
mp[21] = m2p_P (RNAP bound)
mp[22] = m2p_E (Empty promoter)
mp[23] = m2p_R (Repressor bound)
mp[24] = mp2_P (RNAP bound)
mp[25] = mp2_E (Empty promoter)
mp[26] = mp2_R (Repressor bound)
mp[27] = p3_P (RNAP bound)
mp[28] = p3_E (Empty promoter)
mp[29] = p3_R (Repressor bound)
t : array-like.
Time array
Kmat : array-like.
Matrix containing the transition rates between the promoter states.
Rm : array-like.
Matrix containing the mRNA production rate at each of the states.
Gm : array-like.
Matrix containing the mRNA degradation rate at each of the states.
Rp : array-like.
Matrix containing the protein production rate at each of the states.
Gp : array-like.
Matrix containing the protein degradation rate at each of the states.
Returns
-------
dynamics of all mRNA and protein moments
'''
# Obtain the zeroth and first moment based on the size
# of the Kmat matrix
if Kmat.shape[0] == 2:
m0 = mp[0:2]
m1 = mp[2:4]
m2 = mp[4:6]
m3 = mp[6:8]
p1 = mp[8:10]
mp1 = mp[10:12]
p2 = mp[12:14]
m2p = mp[14:16]
mp2 = mp[16:18]
p3 = mp[18::]
elif Kmat.shape[0] == 3:
m0 = mp[0:3]
m1 = mp[3:6]
m2 = mp[6:9]
m3 = mp[9:12]
p1 = mp[12:15]
mp1 = mp[15:18]
p2 = mp[18:21]
m2p = mp[21:24]
mp2 = mp[24:27]
p3 = mp[27::]
# Initialize array to save all dynamics
dmpdt = np.array([])
# Compute the moment equations for the:
#=== mRNA ===#
# Zeroth moment
dm0dt_eq = np.dot(Kmat, m0)
dmpdt = np.append(dmpdt, dm0dt_eq)
# <m1>
dm1dt_eq = np.dot((Kmat - Gm), m1) + np.dot(Rm, m0)
dmpdt = np.append(dmpdt, dm1dt_eq)
# <m2>
dm2dt_eq = np.dot((Kmat - 2 * Gm), m2) + np.dot((2 * Rm + Gm), m1) +\
np.dot(Rm, m0)
dmpdt = np.append(dmpdt, dm2dt_eq)
# <m3>
dm3dt_eq = np.dot((Kmat - 3 * Gm), m3) +\
np.dot((3 * Rm + 3 * Gm), m2) +\
np.dot((3 * Rm - Gm), m1) +\
np.dot(Rm, m0)
dmpdt = np.append(dmpdt, dm3dt_eq)
#=== protein and correlations ===#
# <p1>
dp1dt_eq = np.dot((Kmat - Gp), p1) + np.dot(Rp, m1)
dmpdt = np.append(dmpdt, dp1dt_eq)
# <mp>
dmpdt_eq = np.dot((Kmat - Gm - Gp), mp1) +\
np.dot(Rm, p1) +\
np.dot(Rp, m2)
dmpdt = np.append(dmpdt, dmpdt_eq)
# <p2>
dp2dt_eq = np.dot((Kmat - 2 * Gp), p2) +\
np.dot(Gp, p1) +\
np.dot(Rp, m1) +\
np.dot((2 * Rp), mp1)
dmpdt = np.append(dmpdt, dp2dt_eq)
# <m2p>
dm2pdt_eq = np.dot((Kmat - 2 * Gm - Gp), m2p) +\
np.dot(Rm, p1) +\
np.dot((Rm + Gm), mp1) +\
np.dot(Rp, m3)
dmpdt = np.append(dmpdt, dm2pdt_eq)
# <mp2>
dmp2dt_eq = np.dot((Kmat - Gm - 2 * Gp), mp2) +\
np.dot(Rm, p2) +\
np.dot((2 * Rp), m2p) +\
np.dot(Rp, m2) +\
np.dot(Gp, mp1)
dmpdt = np.append(dmpdt, dmp2dt_eq)
# <p3>
dp3dt_eq = np.dot((Kmat - 3 * Gp), p3) +\
np.dot((3 * Gp), p2) -\
np.dot(Gp, p1) +\
np.dot((3 * Rp), mp2) +\
np.dot((3 * Rp), mp1) +\
np.dot(Rp, m1)
dmpdt = np.append(dmpdt, dp3dt_eq)
return dmpdt
```
Let's define a function that will become quite handy in future implementations of these dynamics. This function will take the output of `odeint` function and transform it into a tidy `DataFrame`.
```python
def dynamics_to_df(sol, t):
'''
Takes the output of the dpdt function and the vector time and returns
a tidy pandas DataFrame with the GLOBAL moments.
Parameters
----------
sol : array-like.
Array with 20 or 30 columns containing the dynamics of the mRNA and
protein distribution moments.
t : array-like.
Time array used for integrating the differential equations
Returns
-------
tidy dataframe with the GLOBAL moments
'''
# Define names of dataframe columns
names = ['time', 'm1', 'm2', 'm3', 'p1', 'mp', 'p2', 'm2p', 'mp2', 'p3']
# Initialize matrix to save global moments
mat = np.zeros([len(t), len(names)])
# Save time array in matrix
mat[:, 0] = t
# List index for columns depending on number of elements in matrix
idx = np.arange(int(sol.shape[1]) / 10,
sol.shape[1], int(sol.shape[1]) / 10)
# Loop through index and compute global moments
for i, index in enumerate(idx):
# Compute and save global moment
mat[:, i+1] = np.sum(sol[:, int(index):int(index + sol.shape[1] / 10)],
axis=1)
return pd.DataFrame(mat, columns=names)
```
## Two-state promoter
Having defined these functions let's first test them with the two-state unregulated promoter.
Let's define the necessary parameters.
```python
# List the parameters fit for the lacUV5 promoter
par_UV5 = dict(kp_on=5.5, kp_off=28.9, rm=87.6, gm=1)
# define protein degradation rate in units of mRNA degradation rate
gp = 0.000277 / 0.00284
par_UV5['gp'] = gp
# define rp based on the mean protein copy number per mRNA
par_UV5['rp'] = 1000 * par_UV5['gp']
kp_on = par_UV5['kp_on']
kp_off = par_UV5['kp_off']
rm = par_UV5['rm']
gm = par_UV5['gm']
gp = par_UV5['gp']
rp = par_UV5['rp']
```
Now we will define the state transition matrix $\mathbf{K}$, the mRNA production matrix $\mathbf{R}_m$, the mRNA degradation matrix $\mathbf{\Gamma}_m$, the protein production matrix $\mathbf{R}_p$, and the protein degradation matrix $\mathbf{\Gamma}_p$.
```python
# Define the rate constant matrix
Km_unreg = np.array([[-kp_off, kp_on],
[kp_off, -kp_on]])
# Define the mRNA production matrix
Rm_unreg = np.array([[rm, 0],
[0, 0]])
# Define the mRNA degradation matrix
Gm_unreg = np.array([[gm, 0],
[0, gm]])
# Define the protein production matrix
Rp_unreg = np.array([[rp, 0],
[0, rp]])
# Define the protein degradation matrix
Gp_unreg = np.array([[gp, 0],
[0, gp]])
```
Now we define initial conditions and time array to use for integrating the ODEs
```python
# Define time on which to perform integration
t = np.linspace(0, 60, 101)
# Define initial conditions
m0_init = [0.5, 0.5]
m1_init = [0, 0]
m2_init = [0, 0]
m3_init = [0, 0]
p1_init = [0, 0]
mp_init = [0, 0]
p2_init = [0, 0]
m2p_init = [0, 0]
mp2_init = [0, 0]
p3_init = [0, 0]
# Solve equation
mp_sol = sp.integrate.odeint(dpdt,
m0_init + m1_init + m2_init + m3_init +
p1_init + mp_init + p2_init + m2p_init +
mp2_init + p3_init,
t,
args=(Km_unreg, Rm_unreg, Gm_unreg,
Rp_unreg, Gp_unreg))
mp_sol.shape
```
(101, 20)
Let's test the function converting our ODE numerical integration to a `DataFrame`.
```python
df_sol = dynamics_to_df(mp_sol, t)
df_sol.head().round(1)
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>time</th>
<th>m1</th>
<th>m2</th>
<th>m3</th>
<th>p1</th>
<th>mp</th>
<th>p2</th>
<th>m2p</th>
<th>mp2</th>
<th>p3</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.000000e+00</td>
</tr>
<tr>
<th>1</th>
<td>0.6</td>
<td>6.8</td>
<td>73.7</td>
<td>1021.0</td>
<td>234.8</td>
<td>2403.3</td>
<td>91597.4</td>
<td>25649.9</td>
<td>972671.1</td>
<td>3.926675e+07</td>
</tr>
<tr>
<th>2</th>
<td>1.2</td>
<td>10.1</td>
<td>137.7</td>
<td>2303.1</td>
<td>710.9</td>
<td>9025.1</td>
<td>689141.1</td>
<td>106746.1</td>
<td>8337666.5</td>
<td>6.751009e+08</td>
</tr>
<tr>
<th>3</th>
<td>1.8</td>
<td>11.8</td>
<td>180.3</td>
<td>3277.7</td>
<td>1298.2</td>
<td>18041.3</td>
<td>2104606.7</td>
<td>217080.1</td>
<td>26566097.8</td>
<td>3.262837e+09</td>
</tr>
<tr>
<th>4</th>
<td>2.4</td>
<td>12.8</td>
<td>206.0</td>
<td>3903.1</td>
<td>1928.2</td>
<td>27894.1</td>
<td>4417669.7</td>
<td>332033.6</td>
<td>56570169.8</td>
<td>9.351843e+09</td>
</tr>
</tbody>
</table>
</div>
For comparison let's compute the *steady-state* values of both the mRNA and the protein levels and save it as a pandas `Series`.
```python
with open('./two_state_moments_lambdify.dill', 'rb') as file:
mom_unreg_dict = cloudpickle.load(file)
# Import regulated states
with open('./three_state_moments_lambdify.dill', 'rb') as file:
mom_reg_dict = cloudpickle.load(file)
```
```python
# Define names of dataframe index
names = ['m1', 'm2', 'm3', 'p1', 'p2', 'p3']
# Initialize array to save steady state
mp_ss = np.zeros(len(names))
# Compute mRNA steady state moments
mfunc = [mom_unreg_dict[(1, 0)],
mom_unreg_dict[(2, 0)],
mom_unreg_dict[(3, 0)]]
for i, f in enumerate(mfunc):
mp_ss[i] = f(kp_on, kp_off, rm, gm, rp, gp)
# Compute the protein steady state moments
pfunc = [mom_unreg_dict[(0, 1)],
mom_unreg_dict[(0, 2)],
mom_unreg_dict[(0, 3)]]
for i, f in enumerate(pfunc):
mp_ss[i + 3] = f(kp_on, kp_off, rm, gm, rp, gp)
df_ss = pd.Series(mp_ss, index=names)
df_ss
```
m1 1.400581e+01
m2 2.392858e+02
m3 4.756025e+03
p1 1.400581e+04
p2 2.000841e+08
p3 2.914073e+12
dtype: float64
With this we can now plot the dynamics of all moments normalized by the steady-state value in order to compare the response time of the mRNA and protein moments.
```python
# Initialize figure
fig = plt.figure(figsize=(7, 5))
# Define axis to have two on the bottom and one on the top
ax = list()
ax.append(plt.subplot2grid((2, 4), (0, 1), colspan=2, rowspan=1))
ax.append(plt.subplot2grid((2, 4), (1, 0), colspan=2, rowspan=1))
ax.append(plt.subplot2grid((2, 4), (1, 2), colspan=2, rowspan=1))
# Define array with names of moments to plot
moments_m = ['m1', 'm2', 'm3']
moments_p = ['p1', 'p2', 'p3']
# Loop through groups
for i in range(len(moments_m)):
# Plot moments moment
ax[i].plot(df_sol['time'], df_sol[moments_m[i]] / df_ss[moments_m[i]],
label='mRNA')
ax[i].plot(df_sol['time'], df_sol[moments_p[i]] / df_ss[moments_p[i]],
label='protein')
# Label axis
ax[i].set_xlabel('time ($\gamma_m^{-1}$ units)')
ax[i].set_ylabel(r'$\left\langle x(t)^{:d} \right\rangle$ / '.format(i+1) +\
r'$\left\langle x_{{ss}}^{:d} \right\rangle$'.format(i+1))
# Include legend
ax[0].legend(fontsize=12)
plt.tight_layout()
# Save figure
plt.savefig(figdir + 'mp_unreg_dynamics.pdf', bbox_inches='tight')
```
---
## Three-state promoter
Let's now test the function for the regulated promoter. We need to define the parameters for the regulated promoter.
```python
# List the parameters fit for the lacUV5 regulated promoter
par_UV5_reg = dict(kp_on=5.5, kp_off=28.9, rm=87.6, gm=1,
Nns=4.6E6, ka=139, ki=0.53, epsilon=4.5)
# Define the k0 parameters in units of the mRNA degradation time
k0_norm = 2.7E-3 / 0.00284
par_UV5_reg['k0'] = k0_norm
# define protein degradation rate in units of mRNA degradation rate
gp = 0.000277 / 0.00284
par_UV5_reg['gp'] = gp
# define rp based on the mean protein copy number per mRNA
par_UV5_reg['rp'] = 1000 * par_UV5_reg['gp']
kp_on = par_UV5_reg['kp_on']
kp_off = par_UV5_reg['kp_off']
rm = par_UV5_reg['rm']
gm = par_UV5_reg['gm']
ka = par_UV5_reg['ka']
ki = par_UV5_reg['ki']
epsilon = par_UV5_reg['epsilon']
Nns = par_UV5_reg['Nns']
```
Now we will define the mRNA production matrix $\mathbf{R}_m$, the mRNA degradation matrix $\mathbf{\Gamma}_m$, the protein production matrix $\mathbf{R}_p$, and the protein degradation matrix $\mathbf{\Gamma}_p$.
```python
# Define the production matrix
Rm_reg = np.array([[rm, 0, 0],
[0, 0, 0],
[0, 0, 0]])
# Define the degradation matrix
Gm_reg = np.array([[gm, 0, 0],
[0, gm, 0],
[0, 0, gm]])
# Define the production matrix
Rp_reg = np.array([[rp, 0, 0],
[0, rp, 0],
[0, 0, rp]])
# Define the production matrix
Gp_reg = np.array([[gp, 0, 0],
[0, gp, 0],
[0, 0, gp]])
```
Let's compute the moments for the $R = 260$ strains and all operators in the absence of inducer.
```python
# Define repressors per cell
rep = 260
par_UV5_reg['rep'] = rep
# Calculate the repressor on rate including the MWC model
kr_on = k0_norm * rep * chann_cap.p_act(0, ka, ki, epsilon)
# Define energies
operators = ['Oid', 'O1', 'O2', 'O3']
energies = [-17, -15.3, -13.9, -9.7]
energy_dict = dict(zip(operators, energies))
```
Having defined the parameters that do not depend on the operator let's now loop through operators and integrate the equations. We will also compute the steady-state solution to each of the moments and save it also in a tidy `DataFrame`.
```python
# Define time on which to perform integration
t = np.linspace(0, 60, 200)
# Define initial conditions
m0_init = [0.3, 0.3, 0.4]
m1_init = [0, 0, 0]
m2_init = [0, 0, 0]
m3_init = [0, 0, 0]
p1_init = [0, 0, 0]
mp_init = [0, 0, 0]
p2_init = [0, 0, 0]
m2p_init = [0, 0, 0]
mp2_init = [0, 0, 0]
p3_init = [0, 0, 0]
# Initialize DataFrame to save the solutions
df_sol_reg = pd.DataFrame() # Dynamics
df_ss_reg = pd.DataFrame() # Steady-state
# Loop through operators integrating the equation at each point
for op in operators:
eRA = energy_dict[op]
# Compute the repressor off-rate based on the on-rate and
# the binding energy
kr_off = chann_cap.kr_off_fun(eRA, k0_norm, kp_on, kp_off, Nns)
#=== Compute dynamics ===#
# Define the rate constant matrix
Km_reg = np.array([[-kp_off, kp_on, 0],
[kp_off, -(kp_on + kr_on), kr_off],
[0, kr_on, -kr_off]])
# Solve equation
sol = sp.integrate.odeint(dpdt,
m0_init + m1_init + m2_init + m3_init +\
p1_init + mp_init + p2_init + m2p_init +\
mp2_init + p3_init,
t,
args=(Km_reg, Rm_reg, Gm_reg,
Rp_reg, Gp_reg))
# Convert to pandas DataFrame and append the extra information
df = dynamics_to_df(sol, t)
df['operator'] = [op] * len(t)
df['energy'] = [eRA] * len(t)
df['repressor'] = [rep] * len(t)
df['IPTG'] = [0] * len(t)
# Append to global data frame
df_sol_reg = pd.concat([df_sol_reg, df], ignore_index=True)
#=== Compute steady-state ===#
# Define names of dataframe index
names = ['m1', 'm2', 'm3', 'p1', 'p2', 'p3']
# Initialize array to save steady state
mp_ss = np.zeros(len(names))
# Compute mRNA steady state moments
mfunc = [mom_reg_dict[(1, 0)],
mom_reg_dict[(2, 0)],
mom_reg_dict[(3, 0)]]
for i, f in enumerate(mfunc):
mp_ss[i] = f(kr_on, kr_off, kp_on, kp_off, rm, gm, rp, gp)
# Compute the protein steady state moments
pfunc = [mom_reg_dict[(0, 1)],
mom_reg_dict[(0, 2)],
mom_reg_dict[(0, 3)]]
for i, f in enumerate(pfunc):
mp_ss[i + 3] = f(kr_on, kr_off, kp_on, kp_off, rm, gm, rp, gp)
df_ss = pd.Series(mp_ss, index=names)
df_ss['operator'] = op
df_ss['energy'] = eRA
df_ss['repressor'] = rep
df_ss['IPTG'] = 0
# Append to global data frame
df_ss_reg = df_ss_reg.append(df_ss, ignore_index=True)
df_sol_reg.head().round(1)
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>time</th>
<th>m1</th>
<th>m2</th>
<th>m3</th>
<th>p1</th>
<th>mp</th>
<th>p2</th>
<th>m2p</th>
<th>mp2</th>
<th>p3</th>
<th>operator</th>
<th>energy</th>
<th>repressor</th>
<th>IPTG</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>Oid</td>
<td>-17.0</td>
<td>260</td>
<td>0</td>
</tr>
<tr>
<th>1</th>
<td>0.3</td>
<td>0.7</td>
<td>4.3</td>
<td>39.0</td>
<td>21.4</td>
<td>115.5</td>
<td>3206.7</td>
<td>896.9</td>
<td>22407.9</td>
<td>588456.7</td>
<td>Oid</td>
<td>-17.0</td>
<td>260</td>
<td>0</td>
</tr>
<tr>
<th>2</th>
<td>0.6</td>
<td>0.5</td>
<td>2.6</td>
<td>18.1</td>
<td>39.1</td>
<td>166.3</td>
<td>11432.7</td>
<td>1083.1</td>
<td>67858.5</td>
<td>4439009.7</td>
<td>Oid</td>
<td>-17.0</td>
<td>260</td>
<td>0</td>
</tr>
<tr>
<th>3</th>
<td>0.9</td>
<td>0.4</td>
<td>1.5</td>
<td>8.7</td>
<td>51.7</td>
<td>169.0</td>
<td>20515.4</td>
<td>882.7</td>
<td>96283.4</td>
<td>11184808.5</td>
<td>Oid</td>
<td>-17.0</td>
<td>260</td>
<td>0</td>
</tr>
<tr>
<th>4</th>
<td>1.2</td>
<td>0.3</td>
<td>0.9</td>
<td>4.4</td>
<td>60.5</td>
<td>151.7</td>
<td>28562.6</td>
<td>638.6</td>
<td>104397.2</td>
<td>18855645.2</td>
<td>Oid</td>
<td>-17.0</td>
<td>260</td>
<td>0</td>
</tr>
</tbody>
</table>
</div>
```python
df_ss_reg.round(1)
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>IPTG</th>
<th>energy</th>
<th>m1</th>
<th>m2</th>
<th>m3</th>
<th>operator</th>
<th>p1</th>
<th>p2</th>
<th>p3</th>
<th>repressor</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.0</td>
<td>-17.0</td>
<td>0.0</td>
<td>0.1</td>
<td>0.5</td>
<td>Oid</td>
<td>17.2</td>
<td>6575.2</td>
<td>4.683003e+06</td>
<td>260.0</td>
</tr>
<tr>
<th>1</th>
<td>0.0</td>
<td>-15.3</td>
<td>0.1</td>
<td>0.4</td>
<td>2.7</td>
<td>O1</td>
<td>93.6</td>
<td>42900.8</td>
<td>3.402951e+07</td>
<td>260.0</td>
</tr>
<tr>
<th>2</th>
<td>0.0</td>
<td>-13.9</td>
<td>0.4</td>
<td>1.6</td>
<td>11.7</td>
<td>O2</td>
<td>372.1</td>
<td>273361.5</td>
<td>2.948103e+08</td>
<td>260.0</td>
</tr>
<tr>
<th>3</th>
<td>0.0</td>
<td>-9.7</td>
<td>9.0</td>
<td>112.2</td>
<td>1725.9</td>
<td>O3</td>
<td>9038.9</td>
<td>84479685.7</td>
<td>8.153365e+11</td>
<td>260.0</td>
</tr>
</tbody>
</table>
</div>
Having this let's plot all moments
```python
# Initialize figure
fig = plt.figure(figsize=(7, 5.5))
# Define axis to have two on the bottom and one on the top
ax = list()
ax.append(plt.subplot2grid((2, 4), (0, 1), colspan=2, rowspan=1))
ax.append(plt.subplot2grid((2, 4), (1, 0), colspan=2, rowspan=1))
ax.append(plt.subplot2grid((2, 4), (1, 2), colspan=2, rowspan=1))
# Define array with names of moments to plot
moments_m = ['m1', 'm2', 'm3']
moments_p = ['p1', 'p2', 'p3']
# Define operator colors
colors = sns.color_palette('colorblind', n_colors=len(operators))
color_dict = dict(zip(operators, colors))
# Loop through moments
for i in range(len(moments_m)):
# Loop through operators
for op in operators:
# Extract operator data
data_sol = df_sol_reg[df_sol_reg['operator'] == op]
data_ss = df_ss_reg[df_ss_reg['operator'] == op]
# Plot moments moment
ax[i].plot(data_sol['time'],
data_sol[moments_m[i]] / data_ss[moments_m[i]].values,
label=op, color=color_dict[op])
ax[i].plot(data_sol['time'],
data_sol[moments_p[i]] / data_ss[moments_p[i]].values,
label='', color=color_dict[op], linestyle='--')
# Label axis
ax[i].set_xlabel('time ($\gamma_m^{-1}$ units)')
ax[i].set_ylabel(r'$\left\langle x(t)^{:d} \right\rangle$ / '.format(i+1) +\
r'$\left\langle x_{{ss}}^{:d} \right\rangle$'.format(i+1))
ax[i].set_ylim([0, 1.5])
# Include legend
first_legend = ax[0].legend(loc=0, ncol=2, title='operator')
# Add secondary label to distinguish protein vs mRNA.
ax2 = ax[0].add_artist(first_legend)
# "plot" a solid and a dashed line for the legend
line1, = ax[0].plot([], [], color='black', label='mRNA')
line2, = ax[0].plot([], [], color='black', linestyle='--', label='protein')
# Create another legend.
ax[0].legend(handles=[line1, line2], loc='upper right', frameon=False)
plt.tight_layout()
# Save figure
plt.savefig(figdir + 'mp_reg_dynamics.pdf', bbox_inches='tight')
```
All dynamics seem to converge to the predicted steady state. This tells us that the numerical integration of the moment differential equations is working as expected.
## Multi-promoter dynamics.
It is known that, depending on the position of a gene relative to the replication origin and on the cell growth rate, a cell can carry multiple copies of that gene. For our specific experimental case, with a doubling time of $\approx$ 60 min, the *galK* locus has an average of 1.66 copies of the promoter over the cell cycle. What this means is that cells spend 2/3 of the cell cycle ($\approx$ 40 min) with one copy of the promoter and the rest ($\approx$ 20 min) with two copies.
The way we account for this in our model is to assume that both promoters have the same mRNA production rate $r_m$; during the two-copy phase cells therefore have a production rate of $2 r_m$, with the rest of the parameters remaining the same.
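A minimal sketch of how this piecewise scheme could be implemented with the machinery defined above is given below. The phase durations are illustrative placeholders (in units of the mRNA degradation time), not the calibrated cell-cycle values:
```python
# Sketch: piecewise integration over the cell cycle for the unregulated
# two-state promoter, reusing the dpdt function and matrices defined above.
# t_single and t_double are illustrative placeholders, not calibrated values.
t_single = np.linspace(0, 10, 101)  # single-copy phase (placeholder length)
t_double = np.linspace(0, 5, 51)    # two-copy phase (placeholder length)

# Initial condition: stacked two-state moment vector (20 entries), as above
mp0 = [0.5, 0.5] + [0] * 18

# Phase 1: one promoter copy, mRNA production rate r_m
sol_single = sp.integrate.odeint(dpdt, mp0, t_single,
                                 args=(Km_unreg, Rm_unreg, Gm_unreg,
                                       Rp_unreg, Gp_unreg))

# Phase 2: two promoter copies, mRNA production rate 2 r_m, starting from
# the end point of phase 1; all other parameters unchanged
sol_double = sp.integrate.odeint(dpdt, sol_single[-1, :], t_double,
                                 args=(Km_unreg, 2 * Rm_unreg, Gm_unreg,
                                       Rp_unreg, Gp_unreg))
```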
_Assignment 3 - Water Resources Analysis (Analisis Sumber Daya Air), by Taruma S.M. (25017046)_
# Assignment 3 - Water Resources Analysis
> A rectangular channel is $2.5\ m$ wide and has a bed slope of $1:400$.
If the Chezy constant is 30 in SI units, determine the normal depth for a discharge of $0.80\ m^3/s$. Find the solution using the **Interval Halving, Newton-Raphson, and Secant methods**.
> Hint: $Q = AC\sqrt{RS},\ A=by_n, \text{and}\ \large{y_{n1} = y_{n0}-\frac{f(y_{n0})}{f'(y_{n0})}}$, where the initial value $y_{n0}$ is determined by assuming a flow velocity of $v = 1\ m/s$.
```python
# Ignore this block code.
# Define Function for printing result
def cetak(name, val, unit = ""):
print('| {:>15s} = {:>10.5f} {:<{n}s}|'.format(name, val, unit, n = pad-15-10-7))
def new_line(pad = 50):
print('='*pad)
pad = 50
```
***
## Derivation and Determination of Initial Values
### The Function and Its Derivative
The function $f(y)$ is obtained from:
$$\begin{aligned}
Q &= A C \sqrt{R S} &\leftrightarrow 0 = A C \sqrt{R S} - Q \\
f(y) &= A C \sqrt{R S} - Q = 0 &
\end{aligned}$$
The derivative of $f(y)$ is:
$$\begin{aligned}f(y) &= C b y \sqrt{\frac{b s y}{b + 2 y}} - Q \\
f'(y) &= C b \sqrt{\frac{b s y}{b + 2 y}} + \frac{C}{s} \sqrt{\frac{b s y}{b + 2 y}} \left(b + 2 y\right) \left(- \frac{b s y}{\left(b + 2 y\right)^{2}} + \frac{b s}{2 \left(b + 2 y\right)}\right) \end{aligned}$$
Simplified, $f'(y)$ becomes:
$$\begin{aligned} f'(y) &= \frac{C b \sqrt{\frac{b s y}{b + 2 y}}}{2 \left(b + 2 y\right)} \left(3 b + 4 y\right) \end{aligned}$$
> Note: the derivative is obtained from the code below using _sympy_.
```python
# Mencari turunan fungsi menggunakan library sympy
from sympy import symbols, simplify, sqrt, latex
sb, sy, sC, sS, sQ = symbols('b, y, C, s, Q')
sA = sb * sy
sP = sb + 2* sy
sR = sA/sP
sfy = sA*sC*sqrt(sR*sS) - sQ
# Mencari Turunan f(y)
print('Persamaan Q: \n\tf(y)=', sfy, '\n==\n In Latex:', latex(sfy), '\n')
sfy_d = sfy.diff(sy)
print('Turunan Persamaan Q terhadap y : \n\tf\'(y)=', sfy_d, '\n==\n In Latex:', latex(sfy_d), '\n')
print('Bentuk sederhana : \n\tf\'(y)=', simplify(sfy_d), '\n==\n In Latex:', latex(simplify(sfy_d)), '\n')
# Mendefinisikan fungsi dalam python
def f(y):
return float(sfy.subs({sS: S, sC: C, sb: b, sy: y, sQ: Q}))
def fd(y):
return float(sfy_d.subs({sS: S, sC: C, sb: b, sy: y}))
```
Persamaan Q:
f(y)= C*b*y*sqrt(b*s*y/(b + 2*y)) - Q
==
In Latex: C b y \sqrt{\frac{b s y}{b + 2 y}} - Q
Turunan Persamaan Q terhadap y :
f'(y)= C*b*sqrt(b*s*y/(b + 2*y)) + C*sqrt(b*s*y/(b + 2*y))*(b + 2*y)*(-b*s*y/(b + 2*y)**2 + b*s/(2*(b + 2*y)))/s
==
In Latex: C b \sqrt{\frac{b s y}{b + 2 y}} + \frac{C}{s} \sqrt{\frac{b s y}{b + 2 y}} \left(b + 2 y\right) \left(- \frac{b s y}{\left(b + 2 y\right)^{2}} + \frac{b s}{2 \left(b + 2 y\right)}\right)
Bentuk sederhana :
f'(y)= C*b*sqrt(b*s*y/(b + 2*y))*(3*b + 4*y)/(2*(b + 2*y))
==
In Latex: \frac{C b \sqrt{\frac{b s y}{b + 2 y}}}{2 \left(b + 2 y\right)} \left(3 b + 4 y\right)
***
### Determining the initial values $y_{n0}$ and $y_{n1}$
#### The value of $y_{n0}$
Assuming a flow velocity of $v = 1\ m/s$, $y_{n0}$ can be found from the equation $Q = VA$:
$$Q = VA \leftrightarrow Q = V\ (by_{n0})$$
Substituting $V = 1\ m/s$, $b = 2.5\ m$, and $Q = 0.8\ m^3/s$, the value of $y_{n0}$ is:
$$\begin{aligned} V = 1\ m/det, b = 2.5\ m, Q = 0.8\ m^3/det &\rightarrow& Q &= V\ (b\ y_{n0}) \\
&& 0.8 &= 1\ (2.5\ y_{n0}) \\
&& y_{n0} &= \frac{0.8}{2.5} = 0.32\ m
\end{aligned}$$
#### The value of $y_{n1}$
The value of $y_{n1}$ is obtained from the equation given in the hint, $y_{n1} = y_{n0}-\frac{f(y_{n0})}{f'(y_{n0})}$:
$$\begin{aligned} y_{n0} = 0.32\ m, f(y_{n0}) = -0.1942946, f'(y_{n0}) = 2.646344 &\rightarrow& y_{n1} &= y_{n0}-\frac{f(y_{n0})}{f'(y_{n0})} \\
&& y_{n1} &= 0.32 - \frac{-0.1942946}{2.646344} \\
&& y_{n1} &= 0.39342\ m
\end{aligned}$$
### The values of $y_{n0}$ and $y_{n1}$
In summary, $y_{n0} = 0.32\ m$ and $y_{n1} = 0.39342\ m$. Note that:
- For the Interval Halving method, the lower bound is $x_a = y_{n0}$ and the upper bound is $x_b = y_{n1}$.
- For the Newton-Raphson method, the initial value is $x_k = y_{n0}$.
- For the Secant method, $x_n = y_{n1}$ and $x_{n-1} = y_{n0}$.
Note: the results above are obtained from the Python calculation below.
```python
# Diketahui:
b = 2.5 # (m) - Lebar Saluran
S = 1/400 # (m/m) - Kemiringan dasar Saluran
C = 30 # (..) - Chezy
Q = 0.80 # (m3/det) - Debit Aliran
# Asumsikan
V = 1 # (m3/det)
new_line(pad)
print('|{:^{n}s}|'.format('Diketahui', n=pad-2))
new_line(pad)
cetak('b', b, 'm')
cetak('S', S, 'm/m')
cetak('C', C)
cetak('Q', Q, 'm^3/det')
new_line(pad)
# Menentukan harga awal
# Q = V.A -> Q = V.b.y -> y = Q/(V.b)
yn0 = Q/(V*b)
fy, fdy = f(yn0), fd(yn0)
yn1 = yn0 - fy/fdy
print()
new_line(pad)
print('|{:^{n}s}|'.format('Mencari nilai y_{n0} dan y_{n1}', n=pad-2))
new_line(pad)
cetak('y_{n0}', yn0, 'm')
cetak('f(y_{n0})',fy)
cetak('f\'(y_{n0})',fdy)
cetak('y_{n1}', yn1, 'm')
new_line(pad)
```
==================================================
| Diketahui |
==================================================
| b = 2.50000 m |
| S = 0.00250 m/m |
| C = 30.00000 |
| Q = 0.80000 m^3/det |
==================================================
==================================================
| Mencari nilai y_{n0} dan y_{n1} |
==================================================
| y_{n0} = 0.32000 m |
| f(y_{n0}) = -0.19429 |
| f'(y_{n0}) = 2.64634 |
| y_{n1} = 0.39342 m |
==================================================
*******
## Numerical Solution (Interval Halving, Newton, Secant)
The code is adapted from the exercise notebook [Interval-Halving, Newton-Rhapson, Secant (Minggu 15)](https://github.com/taruma/belajar-tsa/blob/master/ansis/Interval-Halving%2C%20Newton-Rhapson%2C%20Secant%20(Minggu%2015).ipynb), which can also be viewed with nbviewer [Interval-Halving, Newton-Rhapson, Secant (Minggu 15)](https://nbviewer.jupyter.org/github/taruma/belajar-tsa/blob/master/ansis/Interval-Halving%2C%20Newton-Rhapson%2C%20Secant%20%28Minggu%2015%29.ipynb), and has been modified as needed.
***
### Interval Halving Method
#### Procedure
The solution using the _Interval Halving_ method proceeds as follows:
- The lower and upper bounds are taken from the previous calculation of $y_{n0}$ and $y_{n1}$:
$$\begin{aligned} &\text{Lower bound: }& x_a &= y_{n0} &\text{Upper bound: }& x_b &=&\ y_{n1} \\
&& x_a &= 0.32\ m && x_b &=&\ 0.39342\ m \end{aligned}$$
- Check that $f(x_a)\,f(x_b) < 0$. This step ensures that the root lies between $x_a$ and $x_b$; indeed, the root is found to lie between $x_a$ and $x_b$:
$$\begin{aligned} f(x_a) &=\ -0.19429 ; f(x_b) &=\ 0.00704 \\
f(x_a)f(x_b) &< 0 &\rightarrow \text{OK} \end{aligned}$$
- Compute the midpoint $(x_h)$ of $x_a$ and $x_b$:
$$x_h = \frac{x_a + x_b}{2}$$
- Determine the next bounds: $x_h$ becomes the upper bound when $f(x_a)f(x_h)<0$, and the lower bound when $f(x_h)f(x_b)<0$.
$$\begin{aligned} x_b \leftarrow x_h &: \text{if } f(x_a)f(x_h)<0 \text{ TRUE} \\
x_a \leftarrow x_h &: \text{if } f(x_h)f(x_b)<0 \text{ TRUE}
\end{aligned}$$
```python
x1, x2 = yn0, yn1
iterasi = 20
xa, xb = x1, x2
pad = 70
new_line(pad)
print('Periksa nilai akarnya berada diantara xa dan xb')
if f(xa)*f(xb) < 0:
print('f(x_a) x f(x_b) < 0 === OK \n\tdengan nilai f(x_a) = {fxa:12.5f} dan f(x_b) = {fxb:12.5f}'.format(
fxa=f(xa), fxb=f(xb)))
else:
print('f(x_a) x f(x_b) < 0 === NOT OK \n\ttidak akan ditemukan akarnya diantara {xa} dan {xb}\n\tdengan nilai f(x_a) = {fxa} dan f(x_b) = {fxb}'.format(
xa=xa, xb=xb, fxa=f(xa), fxb=f(xb)))
new_line(pad)
print('\n')
lebar_text = 12*7 + 8
print('='*lebar_text)
print('|{:^{n:d}s}|'.format('Solusi Numerik Metoda Interval Halving', n=lebar_text-2))
print('='*lebar_text)
print('| {:^{n:d}s} | {:^{n:d}s} | {:^{n:d}s} | {:^{n:d}s} | {:^{n:d}s} | {:^{n:d}s} | {:^{n:d}s} |'.format(
'n', 'x_a', 'x_b', 'f(x_a)', 'f(x_b)', 'x_h', 'f(x_h)', n=10))
print('='*lebar_text)
for i in range(1, iterasi+1):
fxa, fxb = f(xa), f(xb)
xh = (xa+xb)/2
fxh = f(xh)
print('| {:^{n:d}d} | {:^ {f1:d}.{f2:d}f} | {:^ {f1:d}.{f2:d}f} | {:^ {f1:d}.{f2:d}f} | {:^ {f1:d}.{f2:d}f} | {:^ {f1:d}.{f2:d}f} | {:^ {f1:d}.{f2:d}f} |'.format(
i, xa, xb, f(xa), f(xb), xh, f(xh), n=10, f1=10, f2=7))
if fxa*fxh < 0:
xb = xh
elif fxb*fxh < 0:
xa = xh
print('='*lebar_text)
print('Maka diperoleh nilai akar-akarnya = {:10.6f} dengan hasil f(x_h) = {:15.10f}'.format(xh, f(xh)))
```
======================================================================
Periksa nilai akarnya berada diantara xa dan xb
f(x_a) x f(x_b) < 0 === OK
dengan nilai f(x_a) = -0.19429 dan f(x_b) = 0.00704
======================================================================
============================================================================================
| Solusi Numerik Metoda Interval Halving |
============================================================================================
| n | x_a | x_b | f(x_a) | f(x_b) | x_h | f(x_h) |
============================================================================================
| 1 | 0.3200000 | 0.3934200 | -0.1942946 | 0.0070430 | 0.3567100 | -0.0953224 |
| 2 | 0.3567100 | 0.3934200 | -0.0953224 | 0.0070430 | 0.3750650 | -0.0445413 |
| 3 | 0.3750650 | 0.3934200 | -0.0445413 | 0.0070430 | 0.3842425 | -0.0188469 |
| 4 | 0.3842425 | 0.3934200 | -0.0188469 | 0.0070430 | 0.3888313 | -0.0059261 |
| 5 | 0.3888313 | 0.3934200 | -0.0059261 | 0.0070430 | 0.3911256 | 0.0005525 |
| 6 | 0.3888313 | 0.3911256 | -0.0059261 | 0.0005525 | 0.3899785 | -0.0026883 |
| 7 | 0.3899785 | 0.3911256 | -0.0026883 | 0.0005525 | 0.3905521 | -0.0010683 |
| 8 | 0.3905521 | 0.3911256 | -0.0010683 | 0.0005525 | 0.3908389 | -0.0002580 |
| 9 | 0.3908389 | 0.3911256 | -0.0002580 | 0.0005525 | 0.3909822 | 0.0001472 |
| 10 | 0.3908389 | 0.3909822 | -0.0002580 | 0.0001472 | 0.3909105 | -0.0000554 |
| 11 | 0.3909105 | 0.3909822 | -0.0000554 | 0.0001472 | 0.3909464 | 0.0000459 |
| 12 | 0.3909105 | 0.3909464 | -0.0000554 | 0.0000459 | 0.3909285 | -0.0000048 |
| 13 | 0.3909285 | 0.3909464 | -0.0000048 | 0.0000459 | 0.3909374 | 0.0000206 |
| 14 | 0.3909285 | 0.3909374 | -0.0000048 | 0.0000206 | 0.3909330 | 0.0000079 |
| 15 | 0.3909285 | 0.3909330 | -0.0000048 | 0.0000079 | 0.3909307 | 0.0000016 |
| 16 | 0.3909285 | 0.3909307 | -0.0000048 | 0.0000016 | 0.3909296 | -0.0000016 |
| 17 | 0.3909296 | 0.3909307 | -0.0000016 | 0.0000016 | 0.3909302 | -0.0000000 |
| 18 | 0.3909302 | 0.3909307 | -0.0000000 | 0.0000016 | 0.3909304 | 0.0000008 |
| 19 | 0.3909302 | 0.3909304 | -0.0000000 | 0.0000008 | 0.3909303 | 0.0000004 |
| 20 | 0.3909302 | 0.3909303 | -0.0000000 | 0.0000004 | 0.3909302 | 0.0000002 |
============================================================================================
Maka diperoleh nilai akar-akarnya = 0.390930 dengan hasil f(x_h) = 0.0000001907
#### Numerical Solution with the Interval Halving Method
Using the procedure above with $20$ iterations, we obtain $y_n = 0.390930\ m$ with $f(x_h) = 0.0000001907$. From the calculation table above, the root is already found at step $16$ if the target error is $\epsilon = 0.000001$.
***
### Newton-Raphson Method
#### Procedure
The numerical solution using the _Newton-Raphson_ method starts from:
- The initial value $x_k$ uses $y_{n0}$, so $x_k = 0.32\ m$.
- The Newton-Raphson method requires the derivative of the function $f(y)$. The equations used are:
$$\begin{aligned} f(y) &=& C b y \sqrt{\frac{b s y}{b + 2 y}} - Q \\
f'(y) &=& \frac{C b \sqrt{\frac{b s y}{b + 2 y}}}{2 \left(b + 2 y\right)} \left(3 b + 4 y\right)
\end{aligned}$$
with $C = 30, b = 2.5\ m, s = \frac{1}{400}\ m/m$.
- The root $x_{k+1}$ is obtained by iterating over $k$ with the equation:
$$x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}$$
```python
# Defining function
iterasi = 10 # Iteration
xk = yn0 # x_k
lebar_text = 12*5 + 6
print('='*lebar_text)
print('|{:^{n:d}s}|'.format('Solusi Numerik Metoda Newton Rhapson', n=lebar_text-2))
print('='*lebar_text)
print('| {:^{n:d}s} | {:^{n:d}s} | {:^{n:d}s} | {:^{n:d}s} | {:^{n:d}s} |'.format(
'k', 'x_k', 'f(x_k)', 'f\'(x_k)', 'x_{k+1}', n=10))
print('='*lebar_text)
for i in range(1, iterasi+1):
xk_p1 = xk - f(xk)/fd(xk)
print('| {:^{n:d}d} | {:^ {f1:d}.{f2:d}f} | {:^ {f1:d}.{f2:d}f} | {:^ {f1:d}.{f2:d}f} | {:^ {f1:d}.{f2:d}f} | '.format(i,
xk, f(xk), fd(xk), xk_p1, n=10, f1=10, f2=6))
xk = xk_p1
print('='*lebar_text)
print('Maka diperoleh nilai akar-akarnya = {:10.6f} dengan hasil f(x_k) = {:15.10f}'.format(xk, f(xk)))
```
==================================================================
| Solusi Numerik Metoda Newton Rhapson |
==================================================================
| k | x_k | f(x_k) | f'(x_k) | x_{k+1} |
==================================================================
| 1 | 0.320000 | -0.194295 | 2.646344 | 0.393420 |
| 2 | 0.393420 | 0.007043 | 2.831491 | 0.390933 |
| 3 | 0.390933 | 0.000007 | 2.825843 | 0.390930 |
| 4 | 0.390930 | 0.000000 | 2.825838 | 0.390930 |
| 5 | 0.390930 | 0.000000 | 2.825838 | 0.390930 |
| 6 | 0.390930 | 0.000000 | 2.825838 | 0.390930 |
| 7 | 0.390930 | 0.000000 | 2.825838 | 0.390930 |
| 8 | 0.390930 | 0.000000 | 2.825838 | 0.390930 |
| 9 | 0.390930 | 0.000000 | 2.825838 | 0.390930 |
| 10 | 0.390930 | 0.000000 | 2.825838 | 0.390930 |
==================================================================
Maka diperoleh nilai akar-akarnya = 0.390930 dengan hasil f(x_k) = 0.0000000000
#### Numerical Solution with the Newton-Raphson Method
Using the procedure above with $10$ iterations, we obtain $y_n = 0.390930\ m$ with $f(x_k) = 0.0000000000$. From the calculation table above, the root is already found at step $4$ if the target error is $\epsilon = 0.000001$.
***
### Secant Method
#### Procedure
The numerical solution using the _Secant_ method starts from:
- Determine $x_0 = x_{n-1}$ and $x_1 = x_n$ from the values of $y_{n0}$ and $y_{n1}$:
$$\begin{aligned}x_0 &= x_{n-1} &= y_{n0} &= 0.32\ m \\
x_1 &= x_{n} &= y_{n1} &= 0.39342\ m\end{aligned}$$
- The _Secant_ method only requires the function $f(y)$. The equation used is:
$$\begin{aligned} f(y) &=& C b y \sqrt{\frac{b s y}{b + 2 y}} - Q \\
\end{aligned}$$
with $C = 30, b = 2.5\ m, s = \frac{1}{400}\ m/m$.
- The root $x_{n+1}$ is obtained by iterating over $n$ with the equation:
$$x_{n+1} = x_n - f(x_n)\frac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})}$$
```python
x0, x1 = yn1, yn0
lebar_text = 10*5+12+2*6+7
print('='*lebar_text)
print('|{:^{n:d}s}|'.format(
'Solusi Numerik Menggunakan Metoda Secant', n=lebar_text-2))
print('='*(10*5+12+2*6+7))
print('| {:^10s} | {:^10s} | {:^10s} | {:^10s} | {:^10s} | {:^12s} |'.format(
'n', 'x_{n-1}', 'x_n', 'N_n', 'D_n', 'x_{n+1}-x_n'))
print('='*(10*5+12+2*6+7))
iterasi = 6
xn = x1
xn_m1 = x0
xn_p1 = 0
for n in range(1, iterasi+1):
Nn = f(xn)*(xn-xn_m1)
Dn = f(xn)-f(xn_m1)
xn_p1 = xn - Nn / Dn
dx = xn_p1-xn
print('| {:^10d} | {:^ 10.6f} | {:^ 10.6f} | {:^ 10.6f} | {:^ 10.6f} | {:^ 12.6f} |'.format(n, xn_m1, xn, Nn, Dn, dx))
xn_m1 = xn
xn = xn_p1
print('='*(10*5+12+2*6+7))
print('Maka diperoleh nilai akar-akarnya = {:10.6f} dengan f(x_{{n+1}}) = {:15.10f}'.format(xn, f(xn)))
```
=================================================================================
| Solusi Numerik Menggunakan Metoda Secant |
=================================================================================
| n | x_{n-1} | x_n | N_n | D_n | x_{n+1}-x_n |
=================================================================================
| 1 | 0.393420 | 0.320000 | 0.014265 | -0.201338 | 0.070852 |
| 2 | 0.320000 | 0.390852 | -0.000016 | 0.194073 | 0.000081 |
| 3 | 0.390852 | 0.390933 | 0.000000 | 0.000229 | -0.000002 |
| 4 | 0.390933 | 0.390930 | 0.000000 | -0.000007 | 0.000000 |
| 5 | 0.390930 | 0.390930 | -0.000000 | 0.000000 | 0.000000 |
| 6 | 0.390930 | 0.390930 | 0.000000 | 0.000000 | 0.000000 |
=================================================================================
Maka diperoleh nilai akar-akarnya = 0.390930 dengan f(x_{n+1}) = 0.0000000000
#### Numerical Solution with the _Secant_ Method
Using the procedure above with $6$ iterations, we obtain $y_n = 0.390930\ m$ with $f(x_{n+1}) = 0.0000000000$. From the calculation table above, the root is already found at step $4$ if the target error is $\epsilon = 0.000001$.
***
## Conclusion
Summary of solving the problem with the three methods (_Interval-Halving, Newton-Raphson, Secant_):
$$\begin{array}{|l|c|c|c|c|}
\hline
\text{Method} & \text{Iterations Run} & y_{n} & f(y_{n}) & \text{Iterations needed for } \epsilon = 0.000001 \\
\hline
\text{Interval-Halving} & 20 & 0.390930 & 0.0000001907 & 16 \\
\text{Newton-Raphson} & 10 & 0.390930 & 0.0000000000 & 4 \\
\text{Secant} & 6 & 0.390930 & 0.0000000000 & 4 \\\hline
\end{array}
$$
Of the three methods, _Newton-Raphson_ and _Secant_ require the fewest iterations for $\epsilon = 1\times10^{-6}$; however, _Newton-Raphson_ needs the derivative $f'(y)$, which can be difficult to obtain by hand for complicated expressions, whereas the _Secant_ method only uses $f(y)$ itself.
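As a quick cross-check (not part of the original assignment), the normal depth can also be verified with a standard bracketing root finder applied to the same `f(y)` defined above; a minimal sketch, assuming the bracket $[0.1, 1.0]$ contains the root:
```python
# Sketch: verify the normal depth with scipy's Brent root finder, using the
# f(y) defined earlier in this notebook. The bracket [0.1, 1.0] is an
# illustrative guess on which f changes sign.
from scipy import optimize

y_root = optimize.brentq(f, 0.1, 1.0)
print('y_n =', y_root)  # expected to be close to 0.390930 m
```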
# Sympy derivatives of alphar from VTPR
```python
from __future__ import division
from sympy import *
init_session(quiet=True)
```
```python
T,delta,rho_r,b_m,c_m,a_m,R_u = symbols('T,delta,rho_r,b_m,c_m,a_m,R_u')
W = symbols('W', cls=Function)(delta)
alphar = -log(1-delta*rho_r*(b_m-c_m)) - sqrt(2)*a_m/(4*R_u*T*b_m)*log(W);
display(alphar)
for ndelta in range(1,5):
ss = simplify(diff(alphar, delta, ndelta))
display(ss)
W =(1+delta*rho_r*(b_m*(1+sqrt(2)+c_m))) / (1+delta*rho_r*(b_m*(1-sqrt(2)+c_m)))
for ndelta in range(1,5):
display(diff(W,delta,ndelta))
```
# Hamiltonian Monte Carlo (HMC)
```python
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from functools import partial
```
## Use of auxiliary variables
Slice sampling is a simple MCMC algorithm that introduces the idea of auxiliary variables. The motivation for slice sampling is that if we can sample uniformly from the region under the graph of the target distribution, we will have random samples from the target distribution. In the univariate case, the algorithm is as follows
- start with some $x$ where $p(x) \ne 0$
- repeat
- sample $y$ (auxiliary variable) uniformly from 0 to $f(x)$
- draw a horizontal line at $y$ within $p(x)$ (this may consist of multiple intervals)
- sample $x$ from the horizontal segments
The auxiliary $y$ variable allows us to sample $(x, y)$ points that are in the region under the graph of the target distribution. Only the $x$ variable is used for the Monte Carlo samples - the $y$ variables are simply discarded. This works because the joint distribution is constructed so that it factors $p(x, y) = p(y \mid x) p(x)$ and so projecting leaves just $p(x)$. The slice sampler is a Markov Chain Monte Carlo method since the next $(x, y)$ position depends only on the current position. Like Gibbs sampling, there is no tuning process and all proposals are accepted. For slice sampling, you either need the inverse distribution function or some way to estimate it. Later we will see that Hamiltonian Monte Carlo also uses auxiliary variables to generate a new proposal in an analogous way.
A toy example illustrates the process - Suppose we want to draw random samples from the posterior distribution $\mathcal{N}(0, 1)$ using slice sampling
Start with some value $x$
- sample $y$ from $\mathcal{U}(0, f(x))$ - this is the horizontal "slice" that gives the method its name
- sample the next $x$ from $f^{-1}(y)$ - this is typically done numerically
- repeat
Will sketch picture in class to show what is going on.
### A simple slice sampler example
```python
import scipy.stats as stats
```
```python
dist = stats.norm(5, 3)
w = 0.5
x = dist.rvs()
niters = 1000
xs = []
while len(xs) < niters:
y = np.random.uniform(0, dist.pdf(x))
lb = x
rb = x
while y < dist.pdf(lb):
lb -= w
while y < dist.pdf(rb):
rb += w
x = np.random.uniform(lb, rb)
if y > dist.pdf(x):
if np.abs(x-lb) < np.abs(x-rb):
lb = x
else:
            rb = x
else:
xs.append(x)
```
```python
plt.hist(xs, 20)
pass
```
Notes on the slice sampler:
- the slice may consist of disjoint pieces for multimodal distributions
- the slice can be a rectangular hyperslab for multivariable posterior distributions
- sampling from the slice (i.e. finding the boundaries at level $y$) is non-trivial and may involve iterative rejection steps - see figure below (from Wikimedia) for a typical approach - the blue bars represent disjoint pieces of the true slice through a bimodal distribution and the black lines are the proposal distribution approximating the true slice
## Hamiltonian Monte Carlo (HMC)
HMC uses an auxiliary variable corresponding to the momentum of particles in a potential energy well to generate proposal distributions that can make use of gradient information in the posterior distribution. For reversibility to be maintained, the total energy of the particle has to be conserved - hence we are interested in Hamiltonian systems. The main attraction of HMC is that it works much better than other methods when variables of interest are highly correlated. Because we have to solve problems involving momentum, we need to understand how to numerically solve differential equations in a way that is both accurate (i.e. second order) and preserves total energy (necessary for a Hamiltonian system).
Example adapted from [MCMC: Hamiltonian Monte Carlo (a.k.a. Hybrid Monte Carlo)](https://theclevermachine.wordpress.com/2012/11/18/mcmc-hamiltonian-monte-carlo-a-k-a-hybrid-monte-carlo/)
### Hamiltonian systems
In a Hamiltonian system, we consider particles with position $x$ and momentum (or velocity if we assume unit mass) $v$. The total energy of the system $H(x, v) = K(v) + U(x)$, where $K$ is the kinetic energy and $U$ is the potential energy, is conserved. Such a system satisfies the following Hamiltonian equations
$$
\begin{align}
\frac{dx}{dt} &= \frac{\partial H}{\partial v} \\
\frac{dv}{dt} &= -\frac{\partial H}{\partial x}
\end{align}
$$
Since $K$ depends only on $v$ and $U$ depends only on $x$, we have
$$
\begin{align}
\frac{dx}{dt} &= \frac{\partial K}{\partial v} \\
\frac{dv}{dt} &= -\frac{\partial U}{\partial x}
\end{align}
$$
#### Harmonic oscillator
We will consider solving a classical Hamiltonian system - that of a undamped spring governed by the second order differential equation
$$
x'' + x = 0
$$
We convert this to two first order ODEs by using a dummy variable $x' = v$ to get
$$
\begin{align}
x' &= v \\
v' &= -x
\end{align}
$$
From the Hamiltonian equations above, this is equivalent to a system with kinetic energy $K(v) = \frac{1}{2}v^2$ and potential energy $U(x) = \frac{1}{2}x^2$.
Writing in matrix form,
$$
\pmatrix{ x' \\ v' } = \pmatrix{0 & 1 \\ -1 & 0} \pmatrix{x \\ v} = A \pmatrix{x \\ v}
$$
and in general, for the state vector $x$,
$$
x' = Ax
$$
We note that $A$ is anti- or skew-symmetric ($A^T = -A$), and hence has purely imaginary eigenvalues. Solving $|A - \lambda I| = 0$, we see that the eigenvalues and eigenvectors are $i, \pmatrix{1\\i}$ and $-i, \pmatrix{1\\-i}$. Since the eigenvalues are purely imaginary, the solution for the initial conditions $(x,v) = (1, 0)$ is $x(t) = e^{it}$ and the orbit just goes around a circle with a period of $2\pi$, neither growing nor decaying. Another way of seeing this is that the Hamiltonian $H(x, v)$, the sum of the potential energy ($U(x) = \frac{1}{2}x^2$) and the kinetic energy ($K(v) = \frac{1}{2}v^2$), is constant; in vector form, $x^T x = \text{constant}$.
### Finite difference methods
We want to find a finite difference approximation to $u' = Au$ that is **accurate** and **preserves total energy**. If total energy is not preserved, the orbit will either spiral in towards zero or outwards away from the unit circle. If the accuracy is poor, the orbit will not be close to its starting value after $t = 2\pi$. This gives us an easy way to visualize how good our numerical scheme is. We can also compare the numerical scheme to the Taylor series to evaluate its accuracy.
#### Forward Euler
The simplest finite difference scheme for integrating ODEs is the forward Euler
$$
\frac{u_{n+1} - u_n}{\Delta t} = A u_n
$$
Rearranging terms, we get
$$
u_{n+1} = u_n + \Delta t A u_n = \left( I + \Delta t A \right) u_n
$$
Since the eigenvalues of $A$ are $\pm i$, we see that the eigenvalues of the forward Euler matrix are $1 \pm i$. Since the absolute value of the eigenvalues is greater than 1, we expect **growing** solutions - i.e. the solution will spiral away from the unit circle.
```python
import scipy.linalg as la
```
```python
def f_euler(A, u, N):
orbit = np.zeros((N,2))
dt = 2*np.pi/N
for i in range(N):
u = u + dt * A @ u
orbit[i] = u
return orbit
```
```python
A = np.array([[0,1],[-1,0]])
u = np.array([1.0,0.0])
N = 64
orbit = f_euler(A, u, N)
```
##### Accuracy
```python
la.norm(np.array([1.0,0.0]) - orbit[-1])
```
0.3600318484671193
##### Conservation of energy
```python
plt.plot([p @ p for p in orbit])
pass
```
```python
ax = plt.subplot(111)
plt.plot(orbit[:, 0], orbit[:,1], 'o')
ax.axis('square')
plt.axis([-1.5, 1.5, -1.5, 1.5])
pass
```
##### Accuracy and conservation of energy
We can see that forward Euler is not very accurate and also does not preserve energy since the orbit spirals away from the unit circle.
#### The trapezoidal method
The trapezoidal method uses the following scheme
$$
\frac{u_{n+1} - u_n}{\Delta t} = \frac{1}{2} ( A u_{n+1} + A u_{n})
$$
This is an implicit scheme (because $u_{n+1}$ appears on the RHS) whose solution is
$$
u_{n+1} = \left(I - \frac{\Delta t}{2} A \right)^{-1} \left(I + \frac{\Delta t}{2} A \right) u_{n} = B u_n
$$
By inspection, we see that the eigenvalues are the complex conjugates of
$$
\frac{1 + \frac{\Delta t}{2} i}{1 - \frac{\Delta t}{2} i}
$$
whose absolute value is 1 - hence, energy is conserved. If we expand the matrix $B$ using the geometric series and compare with the Taylor expansion, we see that the trapezoidal method has local truncation error $O(h^3)$ and hence accuracy $O(h^2)$, where $h$ is the time step.
```python
def trapezoidal(A, u, N):
p = len(u)
orbit = np.zeros((N,p))
dt = 2*np.pi/N
for i in range(N):
u = la.inv(np.eye(p) - dt/2 * A) @ (np.eye(p) + dt/2 * A) @ u
orbit[i] = u
return orbit
```
```python
A = np.array([[0,1],[-1,0]])
u = np.array([1.0,0.0])
N = 64
orbit = trapezoidal(A, u, N)
```
##### Accuracy
```python
la.norm(np.array([1.0,0.0]) - orbit[-1])
```
0.005039305635733781
##### Conservation of energy
```python
plt.plot([p @ p for p in orbit])
pass
```
```python
ax = plt.subplot(111)
plt.plot(orbit[:, 0], orbit[:,1], 'o')
ax.axis('square')
plt.axis([-1.5, 1.5, -1.5, 1.5])
pass
```
#### The leapfrog method
The leapfrog method uses a second order difference to update $u_n$. The algorithm simplifies to the following explicit scheme:
- First take one half-step for v
- Then take a full step for u
- Then take one final half step for v
It performs as well as the trapezoidal method, with the advantage of being an explicit scheme and cheaper to calculate, so the leapfrog method is used in HMC.
```python
def leapfrog(A, u, N):
orbit = np.zeros((N,2))
dt = 2*np.pi/N
for i in range(N):
u[1] = u[1] + dt/2 * A[1] @ u
u[0] = u[0] + dt * A[0] @ u
u[1] = u[1] + dt/2 * A[1] @ u
orbit[i] = u
return orbit
```
##### If we don't care about the intermediate steps, it is more efficient to just take 1/2 steps at the beginning and end
```python
def leapfrog2(A, u, N):
dt = 2*np.pi/N
u[1] = u[1] + dt/2 * A[1] @ u
for i in range(N-1):
u[0] = u[0] + dt * A[0] @ u
u[1] = u[1] + dt * A[1] @ u
u[0] = u[0] + dt * A[0] @ u
u[1] = u[1] + dt/2 * A[1] @ u
return u
```
```python
A = np.array([[0,1],[-1,0]])
u = np.array([1.0,0.0])
N = 64
```
```python
orbit = leapfrog(A, u, N)
```
##### Accuracy
```python
la.norm(np.array([1.0,0.0]) - orbit[-1])
```
0.0025229913808033464
##### Conservation of energy
```python
plt.plot([p @ p for p in orbit])
pass
```
```python
ax = plt.subplot(111)
plt.plot(orbit[:, 0], orbit[:,1], 'o')
ax.axis('square')
plt.axis([-1.5, 1.5, -1.5, 1.5])
pass
```
### From Hamiltonians to probability distributions
The physical analogy considers the negative log likelihood of the target distribution $p(x)$ to correspond to a potential energy well, with a collection of particles moving on the surface of the well. The state of each particle is given only by its position and momentum (or velocity if we assume unit mass for each particle). In a Hamiltonian system, the total energy $H(x, v) = U(x) + K(v)$ is conserved. From statistical mechanics, the probability of each state is related to the total energy of the system
$$
\begin{align}
p(x, v) & \propto e^{-H(x, v)} \\
&= e^{-U(x) - K(v)} \\
&= e^{-U(x)}e^{-K(v)} \\
& \propto p(x) \, p(v)
\end{align}
$$
Since the joint distribution factorizes $p(x, v) = p(x)\, p(v)$, we can select an initial random $v$ for a particle, numerically integrate using a finite difference method such as the leapfrog and then use the updated $x^*$ as the new proposal. The acceptance ratio for the new $x^*$ is
$$
\frac{ e^{ -U(x^*)-K(v^*) }} { e^{-U(x)-K(v)} } = e^{U(x)-U(x^*)+K(v)-K(v^*)}
$$
If our finite difference scheme were exact, the acceptance ratio would be 1, since energy is conserved under Hamiltonian dynamics. However, as we have seen, the leapfrog method does not conserve energy perfectly, so an accept/reject step is still needed.
#### Example of HMC
We will explore how HMC works when the target distribution is bivariate normal centered at zero
$$
x \sim N(0, \Sigma)
$$
In practice of course, the target distribution will be the posterior distribution and depend on both data and distributional parameters.
The potential energy or negative log likelihood is proportional to
$$
U(x) = \frac{x^T\Sigma^{-1} x}{2}
$$
The kinetic energy is given by
$$
K(v) = \frac{v^T v}{2}
$$
where the initial $v_0$ is chosen at random from the unit normal at each step.
To find the time updates, we use the Hamiltonian equations and find the first derivatives of total energy with respect to $x$ and $v$
$$
\begin{align}
x' &= \frac{\partial K}{\partial v} = v \\
v' &= -\frac{\partial U}{\partial x} = -\Sigma^{-1} x
\end{align}
$$
giving us the block matrix
$$
A = \pmatrix{0 & 1 \\ -\Sigma^{-1} & 0}
$$
By using the first derivatives, we are making use of the gradient information on the log posterior to guide the proposal distribution.
##### This is what the target distribution should look like
```python
sigma = np.array([[1,0.8],[0.8,1]])
mu = np.zeros(2)
ys = np.random.multivariate_normal(mu, sigma, 1000)
sns.kdeplot(ys[:,0], ys[:,1])
plt.axis([-3.5,3.5,-3.5,3.5])
pass
```
##### This is the HMC posterior
```python
def E(A, u0, v0, u, v):
    """Energy difference H(old) - H(new), with H = U + K, U(x) = x.T A x / 2 and K(v) = v.T v / 2."""
    return 0.5*(u0 @ A @ u0 + v0 @ v0) - 0.5*(u @ A @ u + v @ v)
```
```python
def leapfrog(A, u, v, h, N):
"""Leapfrog finite difference scheme."""
v = v - h/2 * A @ u
for i in range(N-1):
u = u + h * v
v = v - h * A @ u
u = u + h * v
v = v - h/2 * A @ u
return u, v
```
```python
niter = 100
h = 0.01
N = 100
tau = la.inv(sigma)
orbit = np.zeros((niter+1, 2))
u = np.array([-3,3])
orbit[0] = u
for k in range(niter):
v0 = np.random.normal(0,1,2)
u, v = leapfrog(tau, u, v0, h, N)
# accept-reject
u0 = orbit[k]
    a = np.exp(E(tau, u0, v0, u, v))
r = np.random.rand()
if r < a:
orbit[k+1] = u
else:
orbit[k+1] = u0
```
```python
sns.kdeplot(orbit[:, 0], orbit[:, 1])
plt.plot(orbit[:,0], orbit[:,1], alpha=0.2)
plt.scatter(orbit[:1,0], orbit[:1,1], c='red', s=30)
plt.scatter(orbit[1:,0], orbit[1:,1], c=np.arange(niter)[::-1], cmap='Reds')
plt.axis([-3.5,3.5,-3.5,3.5])
pass
```
```python
```
|
e90c194b43abf48b977c5d176ec0f5c40e905115
| 197,176 |
ipynb
|
Jupyter Notebook
|
notebooks/S10E_HMC.ipynb
|
taotangtt/sta-663-2018
|
67dac909477f81d83ebe61e0753de2328af1be9c
|
[
"BSD-3-Clause"
] | null | null | null |
notebooks/S10E_HMC.ipynb
|
taotangtt/sta-663-2018
|
67dac909477f81d83ebe61e0753de2328af1be9c
|
[
"BSD-3-Clause"
] | null | null | null |
notebooks/S10E_HMC.ipynb
|
taotangtt/sta-663-2018
|
67dac909477f81d83ebe61e0753de2328af1be9c
|
[
"BSD-3-Clause"
] | null | null | null | 218.356589 | 58,620 | 0.905972 | true | 4,356 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.855851 | 0.874077 | 0.74808 |
__label__eng_Latn
| 0.990691 | 0.576373 |
# Data Transformation
## Logistic Regression - on [Titanic Dataset](https://www.kaggle.com/c/titanic)
- Models the probability that an object belongs to a class
- Values range from 0 to 1
- A threshold on that probability decides which class an object is assigned to (see the short sketch after the formula below)
- The sigmoid is an S-shaped curve
$
\begin{align}
\sigma(t) = \frac{1}{1 + e^{-t}}
\end{align}
$
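As a quick illustration (not part of the Titanic analysis itself; the scores and the 0.5 threshold below are made-up values), the sigmoid maps any real-valued score to a probability, which a threshold then turns into a class label:
```python
import numpy as np

def sigmoid(t):
    """Map any real-valued score t to a probability in (0, 1)."""
    return 1 / (1 + np.exp(-t))

scores = np.array([-3.0, -0.5, 0.0, 1.2, 4.0])   # arbitrary example scores
probs = sigmoid(scores)                           # probabilities in (0, 1)
labels = (probs >= 0.5).astype(int)               # 0.5 is the usual default threshold
print(probs)    # [0.047 0.378 0.5   0.769 0.982] (rounded)
print(labels)   # [0 0 1 1 1]
```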
#### Read the data
```python
import pandas as pd
df_train = pd.read_csv('../data/titanic_train.csv')
```
```python
df_train.head(8)
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>PassengerId</th>
<th>Survived</th>
<th>Pclass</th>
<th>Name</th>
<th>Sex</th>
<th>Age</th>
<th>SibSp</th>
<th>Parch</th>
<th>Ticket</th>
<th>Fare</th>
<th>Cabin</th>
<th>Embarked</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>1</td>
<td>0</td>
<td>3</td>
<td>Braund, Mr. Owen Harris</td>
<td>male</td>
<td>22.0</td>
<td>1</td>
<td>0</td>
<td>A/5 21171</td>
<td>7.2500</td>
<td>NaN</td>
<td>S</td>
</tr>
<tr>
<th>1</th>
<td>2</td>
<td>1</td>
<td>1</td>
<td>Cumings, Mrs. John Bradley (Florence Briggs Th...</td>
<td>female</td>
<td>38.0</td>
<td>1</td>
<td>0</td>
<td>PC 17599</td>
<td>71.2833</td>
<td>C85</td>
<td>C</td>
</tr>
<tr>
<th>2</th>
<td>3</td>
<td>1</td>
<td>3</td>
<td>Heikkinen, Miss. Laina</td>
<td>female</td>
<td>26.0</td>
<td>0</td>
<td>0</td>
<td>STON/O2. 3101282</td>
<td>7.9250</td>
<td>NaN</td>
<td>S</td>
</tr>
<tr>
<th>3</th>
<td>4</td>
<td>1</td>
<td>1</td>
<td>Futrelle, Mrs. Jacques Heath (Lily May Peel)</td>
<td>female</td>
<td>35.0</td>
<td>1</td>
<td>0</td>
<td>113803</td>
<td>53.1000</td>
<td>C123</td>
<td>S</td>
</tr>
<tr>
<th>4</th>
<td>5</td>
<td>0</td>
<td>3</td>
<td>Allen, Mr. William Henry</td>
<td>male</td>
<td>35.0</td>
<td>0</td>
<td>0</td>
<td>373450</td>
<td>8.0500</td>
<td>NaN</td>
<td>S</td>
</tr>
<tr>
<th>5</th>
<td>6</td>
<td>0</td>
<td>3</td>
<td>Moran, Mr. James</td>
<td>male</td>
<td>NaN</td>
<td>0</td>
<td>0</td>
<td>330877</td>
<td>8.4583</td>
<td>NaN</td>
<td>Q</td>
</tr>
<tr>
<th>6</th>
<td>7</td>
<td>0</td>
<td>1</td>
<td>McCarthy, Mr. Timothy J</td>
<td>male</td>
<td>54.0</td>
<td>0</td>
<td>0</td>
<td>17463</td>
<td>51.8625</td>
<td>E46</td>
<td>S</td>
</tr>
<tr>
<th>7</th>
<td>8</td>
<td>0</td>
<td>3</td>
<td>Palsson, Master. Gosta Leonard</td>
<td>male</td>
<td>2.0</td>
<td>3</td>
<td>1</td>
<td>349909</td>
<td>21.0750</td>
<td>NaN</td>
<td>S</td>
</tr>
</tbody>
</table>
</div>
## Data Statistics
#### Describing the statistics for numerical features
```python
df_train.describe()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>PassengerId</th>
<th>Survived</th>
<th>Pclass</th>
<th>Age</th>
<th>SibSp</th>
<th>Parch</th>
<th>Fare</th>
</tr>
</thead>
<tbody>
<tr>
<th>count</th>
<td>891.000000</td>
<td>891.000000</td>
<td>891.000000</td>
<td>714.000000</td>
<td>891.000000</td>
<td>891.000000</td>
<td>891.000000</td>
</tr>
<tr>
<th>mean</th>
<td>446.000000</td>
<td>0.383838</td>
<td>2.308642</td>
<td>29.699118</td>
<td>0.523008</td>
<td>0.381594</td>
<td>32.204208</td>
</tr>
<tr>
<th>std</th>
<td>257.353842</td>
<td>0.486592</td>
<td>0.836071</td>
<td>14.526497</td>
<td>1.102743</td>
<td>0.806057</td>
<td>49.693429</td>
</tr>
<tr>
<th>min</th>
<td>1.000000</td>
<td>0.000000</td>
<td>1.000000</td>
<td>0.420000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>25%</th>
<td>223.500000</td>
<td>0.000000</td>
<td>2.000000</td>
<td>20.125000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>7.910400</td>
</tr>
<tr>
<th>50%</th>
<td>446.000000</td>
<td>0.000000</td>
<td>3.000000</td>
<td>28.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>14.454200</td>
</tr>
<tr>
<th>75%</th>
<td>668.500000</td>
<td>1.000000</td>
<td>3.000000</td>
<td>38.000000</td>
<td>1.000000</td>
<td>0.000000</td>
<td>31.000000</td>
</tr>
<tr>
<th>max</th>
<td>891.000000</td>
<td>1.000000</td>
<td>3.000000</td>
<td>80.000000</td>
<td>8.000000</td>
<td>6.000000</td>
<td>512.329200</td>
</tr>
</tbody>
</table>
</div>
#### Find the count of the non-NaN values per feature
```python
df_train.count()
```
PassengerId 891
Survived 891
Pclass 891
Name 891
Sex 891
Age 714
SibSp 891
Parch 891
Ticket 891
Fare 891
Cabin 204
Embarked 889
dtype: int64
## What features can be removed?
### Remove features that are not related to your outcome
```python
df_train.drop(['Name', 'Ticket'], axis=1, inplace=True)
```
### Remove column with missing data
```python
df_train.drop(['Cabin'], axis=1, inplace=True)
```
## Data Imputation - Filling in missing values
- Select a percentage threshold of missing values that you are willing to accommodate (a quick check is sketched below)
- Around 1/5th to 1/3rd of the data (20% to 33.3%) is a common choice
- If more than 50% of the data is missing, you would be generating data for the majority of your dataset - not a good thing to do
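A quick way to apply such a threshold (a sketch, not part of the original workflow) is to look at the fraction of missing values per column:
```python
# Fraction of missing values per column, sorted from worst to best
missing_frac = df_train.isnull().mean().sort_values(ascending=False)
print(missing_frac)
# In the Titanic training set, Age is roughly 20% missing (worth imputing) and
# Embarked about 0.2% (those rows can simply be dropped); Cabin, dropped above,
# was roughly 77% missing, well past any reasonable threshold.
```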
```python
from matplotlib import pyplot as plt
import seaborn as sns
plt.figure(figsize=(7,5))
sns.boxplot(x='Pclass',y='Age',data=df_train)
```
```python
from matplotlib import pyplot as plt
import seaborn as sns
plt.figure(figsize=(7,5))
sns.boxplot(x='Sex',y='Age',data=df_train)
```
```python
from matplotlib import pyplot as plt
import seaborn as sns
plt.figure(figsize=(7,5))
sns.boxplot(x='Embarked',y='Age',data=df_train)
```
```python
def add_age(cols):
Age = cols[0]
Pclass = cols[1]
if pd.isnull(Age):
return int(df_train[df_train["Pclass"] == Pclass]["Age"].mean())
else:
return Age
```
```python
df_train['Age'] = df_train[['Age', 'Pclass']].apply(add_age,axis=1)
```
```python
df_train.count()
```
PassengerId 891
Survived 891
Pclass 891
Sex 891
Age 891
SibSp 891
Parch 891
Fare 891
Embarked 889
dtype: int64
### Drop Rows
```python
df_train.dropna(inplace=True)
```
```python
df_train.count()
```
PassengerId 889
Survived 889
Pclass 889
Sex 889
Age 889
SibSp 889
Parch 889
Fare 889
Embarked 889
dtype: int64
## Categorical to Numerical
#### Convert the categorical values to numeric
- Find the columns that are explicitly categorical - like male, female
- Find the columns that, although numerical, represent categorical features
### One-Hot Encoding
- A technique that creates a separate binary feature for each distinct value of a categorical column
```python
import numpy as np
col = 'Sex'
print(np.unique(df_train[col]))
```
['female' 'male']
```python
import numpy as np
col = 'Embarked'
print(np.unique(df_train[col]))
```
['C' 'Q' 'S']
```python
import numpy as np
col = 'Pclass'
print(np.unique(df_train[col]))
```
[1 2 3]
#### pd.get_dummies()
```python
# sex = pd.get_dummies(df_train["Sex"],drop_first=True)
# embarked = pd.get_dummies(df_train["Embarked"],drop_first=True)
# pclass = pd.get_dummies(df_train["Pclass"],drop_first=True)
sex = pd.get_dummies(df_train["Sex"])
embarked = pd.get_dummies(df_train["Embarked"])
pclass = pd.get_dummies(df_train["Pclass"])
```
### Drop the columns that were used for transformation
```python
df_train.drop(['Sex', 'Embarked', 'Pclass', 'PassengerId'], axis=1, inplace=True)
```
```python
df_train.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Survived</th>
<th>Age</th>
<th>SibSp</th>
<th>Parch</th>
<th>Fare</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0</td>
<td>22.0</td>
<td>1</td>
<td>0</td>
<td>7.2500</td>
</tr>
<tr>
<th>1</th>
<td>1</td>
<td>38.0</td>
<td>1</td>
<td>0</td>
<td>71.2833</td>
</tr>
<tr>
<th>2</th>
<td>1</td>
<td>26.0</td>
<td>0</td>
<td>0</td>
<td>7.9250</td>
</tr>
<tr>
<th>3</th>
<td>1</td>
<td>35.0</td>
<td>1</td>
<td>0</td>
<td>53.1000</td>
</tr>
<tr>
<th>4</th>
<td>0</td>
<td>35.0</td>
<td>0</td>
<td>0</td>
<td>8.0500</td>
</tr>
</tbody>
</table>
</div>
### Add encoded columns to the training dataset
```python
df_train = pd.concat([df_train,pclass,sex,embarked],axis=1)
```
```python
df_train.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Survived</th>
<th>Age</th>
<th>SibSp</th>
<th>Parch</th>
<th>Fare</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>female</th>
<th>male</th>
<th>C</th>
<th>Q</th>
<th>S</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0</td>
<td>22.0</td>
<td>1</td>
<td>0</td>
<td>7.2500</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<th>1</th>
<td>1</td>
<td>38.0</td>
<td>1</td>
<td>0</td>
<td>71.2833</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>2</th>
<td>1</td>
<td>26.0</td>
<td>0</td>
<td>0</td>
<td>7.9250</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<th>3</th>
<td>1</td>
<td>35.0</td>
<td>1</td>
<td>0</td>
<td>53.1000</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<th>4</th>
<td>0</td>
<td>35.0</td>
<td>0</td>
<td>0</td>
<td>8.0500</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
# Save the transformed file as a pickle file
```python
df_train.shape
```
(889, 13)
```python
import pickle as pkl
df_train.to_pickle('../data/titanic_tansformed.pkl')
```
## Logistic Regression
```python
data = df_train.drop("Survived",axis=1)
label = df_train["Survived"]
```
```python
data.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Age</th>
<th>SibSp</th>
<th>Parch</th>
<th>Fare</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>female</th>
<th>male</th>
<th>C</th>
<th>Q</th>
<th>S</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>22.0</td>
<td>1</td>
<td>0</td>
<td>7.2500</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<th>1</th>
<td>38.0</td>
<td>1</td>
<td>0</td>
<td>71.2833</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>2</th>
<td>26.0</td>
<td>0</td>
<td>0</td>
<td>7.9250</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<th>3</th>
<td>35.0</td>
<td>1</td>
<td>0</td>
<td>53.1000</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<th>4</th>
<td>35.0</td>
<td>0</td>
<td>0</td>
<td>8.0500</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
```python
from sklearn.cross_validation import train_test_split
data_train, data_test, label_train, label_test = train_test_split(data, label, test_size = 0.3, random_state = 101)
```
/Users/talat/anaconda3/lib/python3.6/site-packages/sklearn/cross_validation.py:41: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.
"This module will be removed in 0.20.", DeprecationWarning)
```python
from sklearn.linear_model import LogisticRegression
# Run Logistic Regression
log_regr = LogisticRegression()
log_regr.fit(data_train, label_train)
predictions = log_regr.predict(data_test)
```
### Accuracy
```python
print('Accuracy', log_regr.score(data_test, label_test))
print('Coefficients', log_regr.coef_)
print('Intercept', log_regr.intercept_)
```
Accuracy 0.8277153558052435
Coefficients [[-0.03855005 -0.24328718 -0.10076536 0.00231142 1.18429317 0.2997331
-0.8678696 1.56887514 -0.95271847 0.39520621 0.20701836 0.0139321 ]]
Intercept [0.61615666]
### Precision Recall
```python
from sklearn.metrics import classification_report
print(classification_report(label_test, predictions))
```
precision recall f1-score support
0 0.82 0.92 0.87 163
1 0.85 0.68 0.76 104
avg / total 0.83 0.83 0.82 267
## Cross Validation
```python
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
# skf = StratifiedKFold(n_splits=5)
log_regr = LogisticRegression()
log_regr.fit(data_train, label_train)
score = log_regr.score(data_train, label_train)
print('Train accuracy score', score)
score_cv = cross_val_score(log_regr, data_train, label_train, cv=10, scoring='accuracy')
print('Cross Val Accuracy for each run', score_cv)
print('CrossVal Accuracy', score_cv.mean())
```
Train accuracy score 0.8086816720257235
Cross Val Accuracy for each run [0.76190476 0.6984127 0.79365079 0.87301587 0.80952381 0.77777778
0.78688525 0.81967213 0.91803279 0.7704918 ]
CrossVal Accuracy 0.8009367681498828
## AUC - Receiver Operating Characteristic
- Measures how well a model can distinguish between classes
- The higher the AUC, the better the model
$
\begin{align}
\text{True Positive Rate} = \frac{TP}{TP + FN}
\end{align}
$
<br>
$
\begin{align}
\text{False Positive Rate} = 1 - \frac{TN}{TN + FP} = \frac{FP}{TN + FP}
\end{align}
$
```python
from sklearn import metrics
fpr, tpr, threshold = metrics.roc_curve(label_test, log_regr.predict(data_test))
roc_auc = metrics.auc(fpr, tpr)
print('AUC-ROC: ', roc_auc)
```
    AUC-ROC:  0.8014688532326569
```python
import matplotlib.pyplot as plt
plt.title('Receiver Operating Characteristic')
plt.plot(fpr, tpr, 'b', label = 'AUC = %0.2f' % roc_auc)
plt.legend(loc = 'lower right')
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
```
```python
```
|
cd8438a471ec419fb1ff2f5b36aacce015db6ceb
| 86,419 |
ipynb
|
Jupyter Notebook
|
classification/notebooks/04 - Data Transformation.ipynb
|
pshn111/Machine-Learning-Package
|
fbbaa44daf5f0701ea77e5b62eb57ef822e40ab2
|
[
"MIT"
] | null | null | null |
classification/notebooks/04 - Data Transformation.ipynb
|
pshn111/Machine-Learning-Package
|
fbbaa44daf5f0701ea77e5b62eb57ef822e40ab2
|
[
"MIT"
] | null | null | null |
classification/notebooks/04 - Data Transformation.ipynb
|
pshn111/Machine-Learning-Package
|
fbbaa44daf5f0701ea77e5b62eb57ef822e40ab2
|
[
"MIT"
] | null | null | null | 56.96704 | 20,888 | 0.700321 | true | 6,430 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.928409 | 0.810479 | 0.752456 |
__label__eng_Latn
| 0.238335 | 0.586539 |
# Image compression for acoustic transmissions.
```python
__author__ = 'andrea munafo'
```
This notebook investigates some image compression methods using neural networks.
The point is to obtain a dimensionality reduction that can be supported over very bandwidth-constrained media, such as acoustic links.
This description of autoencoders is based on [Understanding Variational Autoencoders](
https://towardsdatascience.com/understanding-variational-autoencoders-vaes-f70510919f73), on [Autoencoding Variational Bayes](https://arxiv.org/pdf/1312.6114.pdf) and on [Tutorial on variational autoencoders](https://arxiv.org/abs/1606.05908).
and is summarised here to provide background for the implementations reported below.
All implementations are applied to the MNIST dataset.
## What is dimensionality reduction?
Dimensionality reduction is the process of reducing the number of features that describe some data.
This reduction can be done either by selecting a subset of the existing features (feature selection) or by deriving a new set of features from the original ones (feature extraction).
Applications include data visualisation, data storage, and reducing the cost of heavy computation and communications.
First, let’s call encoder the process that produces the “new features” representation from the “old features” representation (by selection or by extraction) and decoder the reverse process.
Dimensionality reduction can then be interpreted as data compression, where the encoder compresses the data (from the initial space to the encoded space, also called latent space) whereas the decoder decompresses it.
Of course, depending on the initial data distribution, the latent space dimension and the encoder definition, this compression can be lossy, meaning that a part of the information is lost during the encoding process and cannot be recovered when decoding.
The problem can be formulated as finding the optimal encoder $e$ in the set of possible encoders $E$, and the optimal decoder $d$ in the set of possible decoders $D$, such that:
$(e^*, d^*) = \underset{e \in E,\, d \in D}{\operatorname{argmin}} \; f\big(x, d(e(x))\big)$
where $f(\cdot)$ is the function that defines the reconstruction error measure between the input data $x$ and the encoded-decoded data $d(e(x))$.
One way to solve this problem is using Principal Components Analysis (PCA).
The idea of PCA is to build $n_e$ new features that are linear combinations of the $n_d$ original features.
This is done so that the projections of the original data on the subspace defined by these new features are as close as possible to the initial data (in term of euclidean distance).
This is shown in the figure below:
For example, the encoded version of point B is obtained as the projection of B onto the line that minimises the distance from all the points.
It can be shown that the unitary eigenvectors corresponding to the $n_e$ greatest eigenvalues (in norm) of the feature covariance matrix $C$ are orthogonal (or can be chosen to be so) and define the best subspace of dimension $n_e$ to project the data on with minimal approximation error.
These $n_e$ eigenvectors can then be chosen as the new features.
In the PCA context, the problem of dimension reduction can be expressed as an eigenvalue/eigenvector problem.
Moreover, it can also be shown that the decoder matrix is the transposed of the encoder matrix.
Encoding and decoding matrices obtained with PCA define one of the solutions.
However, several bases can be chosen to describe the same optimal subspace and, so, several encoder/decoder pairs can give the optimal reconstruction error.
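As a minimal sketch of this linear encoder/decoder pair (an illustration with made-up toy data, not part of the notebook's pipeline), PCA can be written in a few lines of numpy:
```python
import numpy as np

def pca_encoder_decoder(X, n_e):
    """Return a linear encode/decode pair built from the top n_e principal components of X."""
    mean = X.mean(axis=0)
    C = np.cov(X - mean, rowvar=False)               # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)             # eigh: C is symmetric
    W = eigvecs[:, np.argsort(eigvals)[::-1][:n_e]]  # top-n_e eigenvectors as columns
    encode = lambda x: (x - mean) @ W                # n_d -> n_e
    decode = lambda z: z @ W.T + mean                # n_e -> n_d (decoder = transpose of encoder)
    return encode, decode

X = np.random.randn(500, 10) @ np.random.randn(10, 10)   # correlated toy data
encode, decode = pca_encoder_decoder(X, n_e=3)
X_hat = decode(encode(X))
print(np.mean((X - X_hat) ** 2))                          # reconstruction error
```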
Now, let’s assume that both the encoder and the decoder are deep and non-linear. In such a case, the more complex the architecture is, the larger the dimensionality reduction the autoencoder can achieve while keeping the reconstruction loss low. Intuitively, if our encoder and our decoder have enough degrees of freedom, we can reduce any initial dimensionality to 1. Indeed, an encoder with “infinite power” could theoretically take our N initial data points and encode them as 1, 2, 3, … up to N (or more generally, as N integers on the real axis) and the associated decoder could make the reverse transformation, with no loss during the process.
Here, we should however keep two things in mind. First, an important dimensionality reduction with no reconstruction loss often comes with a price: the lack of interpretable and exploitable structures in the latent space (lack of regularity). Second, most of the time the final purpose of dimensionality reduction is not to only reduce the number of dimensions of the data but to reduce this number of dimensions while keeping the major part of the data structure information in the reduced representations. For these two reasons, the dimension of the latent space and the “depth” of autoencoders (that define degree and quality of compression) have to be carefully controlled and adjusted depending on the final purpose of the dimensionality reduction.
The concept is shown in the picture below. We would like to reduce the dimensionality while keeping some of the structure of the data.
# Autoencoders
```python
import os
import pylab as plt
import numpy as np
import torch
import torchvision
from torch import nn
from torch.autograd import Variable
from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision.datasets import MNIST
from torchvision.utils import save_image
```
```python
print(torch.__version__)
```
1.3.0
```python
import pathlib
pathlib.Path("../results/01-autoencoder/ae").mkdir(parents=True, exist_ok=True)
pathlib.Path("../results/01-autoencoder/convae").mkdir(parents=True, exist_ok=True)
pathlib.Path("../results/01-autoencoder/vae").mkdir(parents=True, exist_ok=True)
pathlib.Path("../saved-mdls/01-autoencoder").mkdir(parents=True, exist_ok=True)
```
```python
num_epochs = 100
batch_size = 64
learning_rate = 1e-3
device = 'cuda'
```
```python
def toImg(x, mu=0.5, std=1):
"""Converts x to an image shape. It works for batches of inputs."""
x = mu * (x + std)
x = x.clamp(0, 1)
x = x.view(x.size(0), 1, 28, 28)
return x
```
## Get the data
### Define some transforms to normalise the images
```python
ds_mean = 0.1307
ds_std = 0.3081
img_transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((ds_mean,), (ds_std,)) # The first tuple (0.5, 0.5, 0.5) is the mean for all three channels and the second (0.5, 0.5, 0.5) is the standard deviation for all three channels.
])
```
### Grab the dataset
```python
train_ds = MNIST('./data', train=True, transform=img_transform, download=True)
valid_ds = MNIST('./data', train=False, transform=img_transform, download=True)
```
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to ./data/MNIST/raw/train-images-idx3-ubyte.gz
100.1%
Extracting ./data/MNIST/raw/train-images-idx3-ubyte.gz to ./data/MNIST/raw
Downloading http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz to ./data/MNIST/raw/train-labels-idx1-ubyte.gz
113.5%
Extracting ./data/MNIST/raw/train-labels-idx1-ubyte.gz to ./data/MNIST/raw
Downloading http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz to ./data/MNIST/raw/t10k-images-idx3-ubyte.gz
100.4%
Extracting ./data/MNIST/raw/t10k-images-idx3-ubyte.gz to ./data/MNIST/raw
Downloading http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz to ./data/MNIST/raw/t10k-labels-idx1-ubyte.gz
180.4%
Extracting ./data/MNIST/raw/t10k-labels-idx1-ubyte.gz to ./data/MNIST/raw
Processing...
Done!
```python
# plt.imshow(train_ds.data[1])
```
```python
train_dl = DataLoader(train_ds, batch_size=batch_size, shuffle=True)
valid_dl = DataLoader(valid_ds, batch_size=batch_size, shuffle=True)
```
## Define the model
Defining an Autoencoder model is not difficult. The following class shows one potential option as a combination of Linear and ReLU layers.
```python
class Autoencoder(nn.Module):
def __init__(self):
super(Autoencoder, self).__init__()
self.encoder = nn.Sequential(
nn.Linear(28 * 28, 128),
nn.ReLU(True),
nn.Linear(128, 64),
nn.ReLU(True),
nn.Linear(64, 12),
nn.ReLU(True),
nn.Linear(12, 3))
self.decoder = nn.Sequential(
nn.Linear(3, 12),
nn.ReLU(True),
nn.Linear(12, 64),
nn.ReLU(True),
nn.Linear(64, 128),
nn.ReLU(True),
nn.Linear(128, 28 * 28),
nn.Tanh())
def forward(self, x):
x = self.encoder(x)
x = self.decoder(x)
return x
def encode(self, x):
x = self.encoder(x)
return x
def decode(self, x):
x = self.decoder(x)
return x
```
## Train the model
```python
model = Autoencoder()
```
```python
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(
model.parameters(), lr=learning_rate, weight_decay=1e-5)
```
```python
if device == 'cuda':
model.cuda()
```
```python
for epoch in range(num_epochs):
for x, y in train_dl:
x = x.view(x.size(0), -1).to(device) # resize to be a vector bsx28*28
# x = Variable(x).cuda()
output = model(x)
loss = loss_fn(output, x)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print('epoch [{}/{}], loss:{:.4f}'.format(epoch + 1, num_epochs, loss.data))
if epoch % 10 == 0:
ipic = toImg(x.cpu().data)
opic = toImg(output.cpu().data)
        save_image(opic, '../results/01-autoencoder/ae/image_{}_o.png'.format(epoch))
        save_image(ipic, '../results/01-autoencoder/ae/image_{}_i.png'.format(epoch))
```
epoch [1/100], loss:0.5602
epoch [2/100], loss:0.5473
epoch [3/100], loss:0.5371
epoch [4/100], loss:0.5408
epoch [5/100], loss:0.4876
epoch [6/100], loss:0.4854
epoch [7/100], loss:0.4918
epoch [8/100], loss:0.5050
epoch [9/100], loss:0.5683
epoch [10/100], loss:0.4808
epoch [11/100], loss:0.4911
epoch [12/100], loss:0.4622
epoch [13/100], loss:0.5145
epoch [14/100], loss:0.5030
epoch [15/100], loss:0.4437
epoch [16/100], loss:0.5159
epoch [17/100], loss:0.5257
epoch [18/100], loss:0.4939
epoch [19/100], loss:0.4832
epoch [20/100], loss:0.4450
epoch [21/100], loss:0.4416
epoch [22/100], loss:0.4433
epoch [23/100], loss:0.4908
epoch [24/100], loss:0.4583
epoch [25/100], loss:0.4830
epoch [26/100], loss:0.4803
epoch [27/100], loss:0.4719
epoch [28/100], loss:0.4317
epoch [29/100], loss:0.4332
epoch [30/100], loss:0.5100
epoch [31/100], loss:0.4836
epoch [32/100], loss:0.4686
epoch [33/100], loss:0.4797
epoch [34/100], loss:0.5120
epoch [35/100], loss:0.4823
epoch [36/100], loss:0.4763
epoch [37/100], loss:0.4901
epoch [38/100], loss:0.4556
epoch [39/100], loss:0.4851
epoch [40/100], loss:0.4474
epoch [41/100], loss:0.4293
epoch [42/100], loss:0.4440
epoch [43/100], loss:0.4943
epoch [44/100], loss:0.5166
epoch [45/100], loss:0.5518
epoch [46/100], loss:0.5341
epoch [47/100], loss:0.5265
epoch [48/100], loss:0.5320
epoch [49/100], loss:0.4872
epoch [50/100], loss:0.4489
epoch [51/100], loss:0.5037
epoch [52/100], loss:0.4680
epoch [53/100], loss:0.4607
epoch [54/100], loss:0.4225
epoch [55/100], loss:0.4512
epoch [56/100], loss:0.5028
epoch [57/100], loss:0.4769
epoch [58/100], loss:0.5159
epoch [59/100], loss:0.4600
epoch [60/100], loss:0.4927
epoch [61/100], loss:0.4544
epoch [62/100], loss:0.4826
epoch [63/100], loss:0.5090
epoch [64/100], loss:0.5271
epoch [65/100], loss:0.5107
epoch [66/100], loss:0.4235
epoch [67/100], loss:0.4578
epoch [68/100], loss:0.4514
epoch [69/100], loss:0.4575
epoch [70/100], loss:0.4789
epoch [71/100], loss:0.4688
epoch [72/100], loss:0.5022
epoch [73/100], loss:0.4651
epoch [74/100], loss:0.4402
epoch [75/100], loss:0.4553
epoch [76/100], loss:0.5108
epoch [77/100], loss:0.4504
epoch [78/100], loss:0.4580
epoch [79/100], loss:0.5241
epoch [80/100], loss:0.4497
epoch [81/100], loss:0.4625
epoch [82/100], loss:0.4352
epoch [83/100], loss:0.5229
epoch [84/100], loss:0.5016
epoch [85/100], loss:0.4678
epoch [86/100], loss:0.4940
epoch [87/100], loss:0.4866
epoch [88/100], loss:0.5085
epoch [89/100], loss:0.4982
epoch [90/100], loss:0.4447
epoch [91/100], loss:0.4906
epoch [92/100], loss:0.4760
epoch [93/100], loss:0.4070
epoch [94/100], loss:0.4130
epoch [95/100], loss:0.4456
epoch [96/100], loss:0.4077
epoch [97/100], loss:0.4823
epoch [98/100], loss:0.4681
epoch [99/100], loss:0.4975
epoch [100/100], loss:0.4695
We are now done training. We can save the model so that we can use it later on.
```python
# As suggested in https://pytorch.org/tutorials/beginner/saving_loading_models.html
torch.save(model, '../saved-mdls/01-autoencoder/autoencoder_{}e.pt'.format(epoch+1))
```
/opt/anaconda3/lib/python3.7/site-packages/torch/serialization.py:256: UserWarning: Couldn't retrieve source code for container of type Autoencoder. It won't be checked for correctness upon loading.
"type " + obj.__name__ + ". It won't be checked "
# Using and validating the simple autoencoder
```python
# Model class must be defined somewhere
model = torch.load('../saved-mdls/01-autoencoder/autoencoder_100e.pt')  # the file saved above
model.eval()
```
```python
for count, (x, _) in enumerate(valid_dl):
    x = x.view(x.size(0), -1).to(device)  # flatten and move to the same device as the model
    encoding = model.encode(x)
    decoding = model.decode(encoding)
    # save (some) results
    if count % 10 == 0:
        ipic = toImg(x.cpu().data)
        opic = toImg(decoding.cpu().data)
        save_image(ipic, '../results/01-autoencoder/ae/val-{}-enc.png'.format(count))
        save_image(opic, '../results/01-autoencoder/ae/val-{}-dec.png'.format(count))
```
Let's see how each image was encoded.
The next line prints the encodings for the last batch.
```python
print('encoded output: {}'.format(encoding.detach().cpu().numpy()))
```
If we want to analyse the results inline we can grab the encoding/decoding variables and play with them.
```python
idx = 0
```
```python
plt.imshow(decoding[idx].cpu().data.view(28, 28))
```
```python
f, axarr = plt.subplots(1,2)
axarr[0].imshow(x[idx].cpu().data.view(28,28))
axarr[1].imshow(decoding[idx].cpu().data.view(28, 28))
```
## Encoding / Decoding single images
To do this, I am going to use the validation set again, but one can do the same with any image.
```python
model.eval()
```
Autoencoder(
(encoder): Sequential(
(0): Linear(in_features=784, out_features=128, bias=True)
(1): ReLU(inplace)
(2): Linear(in_features=128, out_features=64, bias=True)
(3): ReLU(inplace)
(4): Linear(in_features=64, out_features=12, bias=True)
(5): ReLU(inplace)
(6): Linear(in_features=12, out_features=3, bias=True)
)
(decoder): Sequential(
(0): Linear(in_features=3, out_features=12, bias=True)
(1): ReLU(inplace)
(2): Linear(in_features=12, out_features=64, bias=True)
(3): ReLU(inplace)
(4): Linear(in_features=64, out_features=128, bias=True)
(5): ReLU(inplace)
(6): Linear(in_features=128, out_features=784, bias=True)
(7): Tanh()
)
)
```python
idx = 1001 # select an index in the validation set
```
```python
# Define the transforms to normalise the image
pic_tt = transforms.ToTensor() # Converts a PIL Image or numpy.ndarray (H x W x C) in the range [0, 255] to a torch.FloatTensor of shape (C x H x W) in the range [0.0, 1.0]
pic_n = transforms.Normalize((ds_mean,), (ds_std,))
if torch.__version__ != '1.3.0':
pic_data = valid_ds.test_data[idx].numpy()
pic_data = pic_data[:, :, None] # (H x W x C)
print('pic_data.dtype: {}'.format(pic_data.dtype))
else:
pic_data = valid_ds.data[idx].numpy()
pic = pic_tt(pic_data)
pic = pic_n(pic)
# print stats
print('pic mean: {}, std: {}'.format(pic.mean(), pic.std()))
# print pic shape
print('\npic shape (C x H x W): {}'.format(pic.shape))
```
pic_data.dtype: uint8
pic mean: 0.02145381271839142, std: 1.0174704790115356
pic shape (C x H x W): torch.Size([1, 28, 28])
```python
plt.imshow(pic.view(28,28))
```
### Run the model to encode
```python
encoding = model.encode(pic.to(device).view(1, -1)) # note that we need to reshape the pic to be: bsx28*28
# print encoding result
print('encoded output: {}'.format(encoding.detach().cpu().numpy()))
```
encoded output: [[-9.222232 9.739955 -7.7938967]]
### Run the decoder and show the result
```python
decoding = model.decode(encoding)
```
```python
plt.imshow(decoding.cpu().data.view(28, 28))
```
## Generating new images
```python
encoding = torch.tensor([[17., -10., 0.]]).to(device)  # a hand-picked point in the latent space
decoding = model.decode(encoding)
plt.imshow(decoding.cpu().data.view(28, 28))
```
# Convolutional Autoencoders
Alternatively, one can define a Convolutional Autoencoder, where the network is composed of a sequence of convolutional layers.
```python
class ConvAutoencoder(nn.Module):
def __init__(self):
super(ConvAutoencoder, self).__init__()
self.encoder = nn.Sequential(
nn.Conv2d(1, 16, 3, stride=3, padding=1), # b, 16, 10, 10
nn.ReLU(True),
nn.MaxPool2d(2, stride=2), # b, 16, 5, 5
nn.Conv2d(16, 8, 3, stride=2, padding=1), # b, 8, 3, 3
nn.ReLU(True),
nn.MaxPool2d(2, stride=1) # b, 8, 2, 2
)
self.decoder = nn.Sequential(
nn.ConvTranspose2d(8, 16, 3, stride=2), # b, 16, 5, 5
nn.ReLU(True),
nn.ConvTranspose2d(16, 8, 5, stride=3, padding=1), # b, 8, 15, 15
nn.ReLU(True),
nn.ConvTranspose2d(8, 1, 2, stride=2, padding=1), # b, 1, 28, 28
nn.Tanh()
)
def forward(self, x):
x = self.encoder(x)
x = self.decoder(x)
return x
def encode(self, x):
x = self.encoder(x)
return x
def decode(self, x):
x = self.decoder(x)
return x
```
```python
model = ConvAutoencoder()
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate, weight_decay=1e-5)
if device == 'cuda': model.cuda()
for epoch in range(num_epochs):
for x, y in train_dl:
        x = x.to(device)  # move the batch to the device; the conv layers take the image as-is (no flattening)
output = model(x)
loss = loss_fn(output, x)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print('epoch [{}/{}], loss:{:.4f}'.format(epoch + 1, num_epochs, loss.data))
if epoch % 10 == 0:
ipic = toImg(x.cpu().data)
opic = toImg(output.cpu().data)
save_image(opic, '../results/01-autoencoder/convae/image_{}_conv_o.png'.format(epoch))
save_image(ipic, '../results/01-autoencoder/convae/image_{}_conv_i.png'.format(epoch))
```
epoch [1/100], loss:0.5109
epoch [2/100], loss:0.4628
epoch [3/100], loss:0.5199
epoch [4/100], loss:0.4988
epoch [5/100], loss:0.4752
epoch [6/100], loss:0.4783
epoch [7/100], loss:0.4565
epoch [8/100], loss:0.4774
epoch [9/100], loss:0.4367
epoch [10/100], loss:0.5051
epoch [11/100], loss:0.4323
epoch [12/100], loss:0.4938
epoch [13/100], loss:0.4625
epoch [14/100], loss:0.4442
epoch [15/100], loss:0.4359
epoch [16/100], loss:0.4216
epoch [17/100], loss:0.4128
epoch [18/100], loss:0.5216
epoch [19/100], loss:0.4163
epoch [20/100], loss:0.4229
epoch [21/100], loss:0.4854
epoch [22/100], loss:0.4435
epoch [23/100], loss:0.4405
epoch [24/100], loss:0.3971
epoch [25/100], loss:0.4503
epoch [26/100], loss:0.4532
epoch [27/100], loss:0.4384
epoch [28/100], loss:0.4189
epoch [29/100], loss:0.4507
epoch [30/100], loss:0.4159
epoch [31/100], loss:0.4487
epoch [32/100], loss:0.4177
epoch [33/100], loss:0.4478
epoch [34/100], loss:0.4493
epoch [35/100], loss:0.4836
epoch [36/100], loss:0.4739
epoch [37/100], loss:0.4223
epoch [38/100], loss:0.4179
epoch [39/100], loss:0.4717
epoch [40/100], loss:0.4035
epoch [41/100], loss:0.4273
epoch [42/100], loss:0.4396
epoch [43/100], loss:0.4240
epoch [44/100], loss:0.4513
epoch [45/100], loss:0.4021
epoch [46/100], loss:0.4341
epoch [47/100], loss:0.3915
epoch [48/100], loss:0.4056
epoch [49/100], loss:0.4917
epoch [50/100], loss:0.4463
epoch [51/100], loss:0.4041
epoch [52/100], loss:0.4991
epoch [53/100], loss:0.4324
epoch [54/100], loss:0.4214
epoch [55/100], loss:0.4178
epoch [56/100], loss:0.4286
epoch [57/100], loss:0.4134
epoch [58/100], loss:0.4234
epoch [59/100], loss:0.4489
epoch [60/100], loss:0.4033
epoch [61/100], loss:0.4821
epoch [62/100], loss:0.4093
epoch [63/100], loss:0.4131
epoch [64/100], loss:0.4737
epoch [65/100], loss:0.4279
epoch [66/100], loss:0.4410
epoch [67/100], loss:0.4481
epoch [68/100], loss:0.3976
epoch [69/100], loss:0.4504
epoch [70/100], loss:0.3888
epoch [71/100], loss:0.3833
epoch [72/100], loss:0.4279
epoch [73/100], loss:0.3964
epoch [74/100], loss:0.3978
epoch [75/100], loss:0.4131
epoch [76/100], loss:0.4012
epoch [77/100], loss:0.4147
epoch [78/100], loss:0.4499
epoch [79/100], loss:0.3835
epoch [80/100], loss:0.4146
epoch [81/100], loss:0.4270
epoch [82/100], loss:0.3776
epoch [83/100], loss:0.4476
epoch [84/100], loss:0.4090
epoch [85/100], loss:0.4182
epoch [86/100], loss:0.4433
epoch [87/100], loss:0.4532
epoch [88/100], loss:0.4058
epoch [89/100], loss:0.4659
epoch [90/100], loss:0.4265
epoch [91/100], loss:0.4224
epoch [92/100], loss:0.4296
epoch [93/100], loss:0.4599
epoch [94/100], loss:0.4055
epoch [95/100], loss:0.4219
epoch [96/100], loss:0.4640
epoch [97/100], loss:0.4497
epoch [98/100], loss:0.4238
epoch [99/100], loss:0.4529
epoch [100/100], loss:0.3765
```python
# As suggested in https://pytorch.org/tutorials/beginner/saving_loading_models.html
filename = '../saved-mdls/01-autoencoder/convautoencoder_{}e.pt'.format(epoch+1)
torch.save(model, filename)
print('model saved as: {}'.format(filename))
```
model saved as: ./saved-mdls/01-autoencoder/convautoencoder_100e.pt
/opt/anaconda3/lib/python3.7/site-packages/torch/serialization.py:256: UserWarning: Couldn't retrieve source code for container of type ConvAutoencoder. It won't be checked for correctness upon loading.
"type " + obj.__name__ + ". It won't be checked "
```python
model.eval()
```
ConvAutoencoder(
(encoder): Sequential(
(0): Conv2d(1, 16, kernel_size=(3, 3), stride=(3, 3), padding=(1, 1))
(1): ReLU(inplace)
(2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(3): Conv2d(16, 8, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(4): ReLU(inplace)
(5): MaxPool2d(kernel_size=2, stride=1, padding=0, dilation=1, ceil_mode=False)
)
(decoder): Sequential(
(0): ConvTranspose2d(8, 16, kernel_size=(3, 3), stride=(2, 2))
(1): ReLU(inplace)
(2): ConvTranspose2d(16, 8, kernel_size=(5, 5), stride=(3, 3), padding=(1, 1))
(3): ReLU(inplace)
(4): ConvTranspose2d(8, 1, kernel_size=(2, 2), stride=(2, 2), padding=(1, 1))
(5): Tanh()
)
)
```python
idx = 1001 # select an index in the validation set
```
```python
# Define the transforms to normalise the image
pic_tt = transforms.ToTensor() # Converts a PIL Image or numpy.ndarray (H x W x C) in the range [0, 255] to a torch.FloatTensor of shape (C x H x W) in the range [0.0, 1.0]
pic_n = transforms.Normalize((ds_mean,), (ds_std,))
if torch.__version__ != '1.3.0':
pic_data = valid_ds.test_data[idx].numpy()
pic_data = pic_data[:, :, None] # (H x W x C)
print('pic_data.dtype: {}'.format(pic_data.dtype))
else:
pic_data = valid_ds.data[idx].numpy()
pic = pic_tt(pic_data)
pic = pic_n(pic)
# print stats
print('pic mean: {}, std: {}'.format(pic.mean(), pic.std()))
# print pic shape
print('\npic shape (C x H x W): {}'.format(pic.shape))
```
pic_data.dtype: uint8
pic mean: 0.02145381271839142, std: 1.0174704790115356
pic shape (C x H x W): torch.Size([1, 28, 28])
```python
plt.imshow(pic.view(28,28))
```
### Run the model to encode
```python
print(pic.to(device)[None, :, :, :].shape)
```
torch.Size([1, 1, 28, 28])
```python
encoding = model.encode(pic.to(device)[None, :, :, :]) # add a batch dimension so the shape is (1, 1, 28, 28)
# print encoding result
print('encoded output: {}'.format(encoding.detach().cpu().numpy()))
```
encoded output: [[[[10.795247 6.077096 ]
[ 5.69029 0. ]]
[[ 2.3519301 2.3519301 ]
[ 2.3519301 2.3519301 ]]
[[ 6.9652967 6.9652967 ]
[ 0.5121887 0.5121887 ]]
[[ 1.8640186 7.8633065 ]
[ 5.015077 9.722703 ]]
[[ 0.46509135 0.46509135]
[ 8.607014 8.607014 ]]
[[ 9.282421 5.930196 ]
[ 4.88288 1.791863 ]]
[[ 0. 7.888112 ]
[ 1.9434392 6.8836126 ]]
[[12.865752 12.865752 ]
[12.865752 12.865752 ]]]]
### Run the decoder and show the result
```python
decoding = model.decode(encoding)
```
```python
plt.imshow(decoding.cpu().data.view(28, 28))
```
# Variational Autoencoders
We have discussed the problem of dimensionality reduction, and used encoder-decoder architectures to address it.
At this point, one question might arise: what is the link between autoencoders and content generation?
We could be tempted to think that, if the latent space is regular enough (well “organized” by the encoder during the training process), we could take a point randomly from that latent space and decode it to get new content. The decoder would then act more or less like the generator of a Generative Adversarial Network.
As done before, we can generate new data by decoding points that are randomly sampled from the latent space. The quality and relevance of generated data depend on the regularity of the latent space.
The problem is that the regularity of the latent space for autoencoders depends on the distribution of the data in the initial space, the dimension of the latent space and the architecture of the encoder.
It is difficult to ensure that the encoder will organize the latent space in a smart way compatible with the generative process we just described.
This lack of structure among the encoded data in the latent space is normal, since during training we never enforced such an organisation of the latent space.
We only trained the model to encode and decode so as to minimise the reconstruction loss.
To use the decoder of our autoencoder for generative purpose, we have to be sure that the latent space is regular enough. One possible solution to obtain such regularity is to introduce explicit regularisation during the training process.
A variational autoencoder is an autoencoder whose training is regularised to avoid overfitting and ensure that the latent space has good properties that enable generative process.
Just as a standard autoencoder, a variational autoencoder is an architecture composed of both an encoder and a decoder and that is trained to minimise the reconstruction error between the encoded-decoded data and the initial data. However, in order to introduce some regularisation of the latent space, we proceed to a slight modification of the encoding-decoding process: instead of encoding an input as a single point, we encode it as a distribution over the latent space.
To do this, the loss function that is minimised when training a VAE is composed of a “reconstruction term” (on the final layer), that tends to make the encoding-decoding scheme as performant as possible, and a “regularisation term” (on the latent layer), that tends to regularise the organisation of the latent space by making the distributions returned by the encoder close to a standard normal distribution.
One way to measure the difference between probability distributions is the Kullback-Leibler divergence.
In practice, the encoded distributions are chosen to be normal so that the encoder can be trained to return the mean and the covariance matrix that describe these Gaussians.
The reason why an input is encoded as a distribution with some variance instead of a single point is that it makes possible to express very naturally the latent space regularisation: the distributions returned by the encoder are enforced to be close to a standard normal distribution.
Moreover, the Kullback-Leibler divergence between two Gaussian distributions has a closed form that can be directly expressed in terms of the means and the covariance matrices of the two distributions.
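For the common special case of a diagonal Gaussian $\mathcal{N}(\mu, \sigma^2 I)$ compared against the standard normal prior $\mathcal{N}(0, I)$ (the setting typically used in VAE implementations), this closed form reduces to:
$$
KL\big(\mathcal{N}(\mu, \sigma^2 I),\, \mathcal{N}(0, I)\big) = \frac{1}{2} \sum_{j} \left( \sigma_j^2 + \mu_j^2 - 1 - \log \sigma_j^2 \right)
$$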
## Intuition
The regularity that is expected from the latent space in order to make generative process possible can be expressed through two main properties:
- continuity (two close points in the latent space should not give two completely different contents once decoded)
- completeness (for a chosen distribution, a point sampled from the latent space should give “meaningful” content once decoded)
A point of the latent space that lies halfway between the means of two encoded distributions coming from different training data should be decoded into something that is somewhere between the data that gave the first distribution and the data that gave the second distribution, as it may be sampled by the autoencoder in both cases.
## Probabilistic framework
Variational autoencoders encode inputs as distributions (e.g. a mean and a standard deviation) and not as simple points (or vectors). The regularisation of the latent space is obtained by constraining the distribution to be as close as possible to a Gaussian.
$x$ : data
$z$ : encoded representation of $x$
We also assume that $z$ cannot be directly observed. However, the latent representation $z$ can be sampled from the prior distribution $p(z)$.
The data $x$ can be sampled from the conditional likelihood distribution: $p(x|z)$.
Within this framework, we can consider the probabilistic versions of encoders and decoders (instead of the deterministic ones that we described before).
- The “probabilistic decoder” is defined by $p(x|z)$, that describes the distribution of the decoded variable given the encoded one.
- The “probabilistic encoder” is defined by $p(z|x)$, that describes the distribution of the encoded variable given the decoded one.
Where of course, the link between the two is obtained through the Bayes' theorem: $p(z|x) = p(x|z)p(z)/p(x)$.
The previous equation also means that the encoder $p(z|x)$ depends on the prior $p(z)$, i.e. on the distribution assumed for the latent representation $z$.
Let's assume:
$p(z) = \mathcal{N}(0, I)$
$p(x|z) = \mathcal{N}(f(z), cI)$, where $f \in F$ (a family of functions), $I$ is the identity matrix and $c > 0$ is a constant
If we consider $f$ to be well defined, fixed (and known), and if we know $p(x)$, we have all the elements needed to solve this problem and calculate $p(z|x)$. However, calculating $p(x)$ is often intractable, as $p(x)=\int p(x|u)p(u)\,du$.
### Variational inference
Variational inference (VI) can be used to approximate complex distributions and hence we can use it to approximate $p(z|x)$.
The idea is to set a parametrised family of distribution (for example the family of Gaussians, whose parameters are the mean and the covariance) and to look for the best approximation of the target distribution among this family.
The best element in the family is the one that minimises a given approximation error (most of the time the Kullback-Leibler divergence between the approximation and the target).
This can be done using gradient descent over the parameters that describe the family [REF].
To do so, we approximate our target distribution $p(z|x)$ with a Gaussian:
$p(z|x) \approx \mathcal{N}(g(x), h(x))$,
This means that we have chosen a Gaussian to approximate $p(z|x)$, whose mean and covariance are defined by two functions, $g$ and $h$, of the data $x$.
The problem then becomes that of finding the best approximation within this family by optimising the functions $g$ and $h$ (or rather, their parameters) to minimise the Kullback-Leibler (KL) divergence between the approximation $\mathcal{N}(g(x), h(x))$ and the target $p(z|x)$, or:
\begin{equation*}
(g^*, h^*) = \underset{g,h}{\operatorname{argmin}} \, KL(\mathcal{N}(g(x), h(x)), p(z|x)).
\label{eq:optimal-encoder} \tag{1}
\end{equation*}
This would give us the optimal encoder $p^*(z|x) = \mathcal{N}(g^*(x), h^*(x))$.
Eq. (1), with a little bit of work (see [Understanding Variational Autoencoders](
https://towardsdatascience.com/understanding-variational-autoencoders-vaes-f70510919f73)) can be written as:
\begin{equation*}
(g^*, h^*) = \underset{g,h\in (G \times H)}{\operatorname{argmax}} \{
\mathbb{E}_{z \sim \mathcal{N}(g(x), h(x))} [ \log p(x|z) ] - KL( \mathcal{N}(g(x), h(x)), p(z)) \}
\label{eq:optimal-encoder-2} \tag{2}
\end{equation*}
### Comment
All this reasoning was based on the assumption that we know $f(z)$. Indeed, given $f(z)$, we can compute $p(x|z)$ (the decoder) and then obtain the encoder $p(z|x)$ by approximating it via variational inference.
However, in practice, $f(z)$ is not known and needs to be chosen. Remember that the objective is to find an encoder-decoder pair with a regularised latent space, and that the decoder $p(x|z)$ is defined by the function $f(z)$.
We have also realised that the encoder $p(z|x)$ depends on the prior $p(z)$ (Bayes' theorem).
As $p(z|x)$ can be approximated (by variational inference) from $p(z)$ and $p(x|z)$, and given that $p(z)$ has been set to a simple standard Gaussian,
we only have two quantities left to choose in our model: the constant $c$ (which defines the variance of the likelihood $p(x|z)$) and the function $f$ (which defines its mean).
In short:
- the encoder-decoder performance depends on the function $f(z)$
- the regularity of the latent space depends on the prior $p(z)$
This makes sense intuitively, as $f(z)$ defines the decoder, and $p(z)$ defines the latent space distribution.
## The decoder
Different functions $f(z)$ define different decoders.
We would like to find the best $f$, i.e. the one that maximises the encoding/decoding performance:
We want to choose the function $f$ that maximises the expected log-likelihood of $x$ (the data) given $z$ (its latent representation, or encoding) when $z$ is sampled from $p^*(z|x)$, where $p^*(z|x)$ is the optimal encoder calculated as before.
For a given input $x$, we want to:
- maximise the probability of obtaining $\hat x = x$
- when we sample $z$ from the distribution $p^*(z|x)$ (encoding)
- and then sample $\hat x$ from the distribution $p(x|z)$ (decoding).
We can write this as:
\begin{equation}
f^* = \underset{f\in F}{\operatorname{argmax}}
\mathbb{E}_{z \sim p^*(z|x)} [ \log p(x|z) ]
\label{eq:optimal-decoder} \tag{3}
\end{equation}
Note that, if $p(x|z) = \mathcal{N}(f(z), cI)$, then, up to an additive constant,
$\log p(x|z) = - \frac{||x-f(z)||^2}{2c}$
## Encoder/Decoder
If we now put everything together (Eq.(1), (2), (3)):
\begin{equation*}
(f^*, g^*, h^*) = \underset{f,g,h\in (F \times G \times H)}{\operatorname{argmax}} \{
\mathbb{E}_{z \sim \mathcal{N}(g(x), h(x))} [ \log p(x|z) ] - KL( \mathcal{N}(g(x), h(x)), p(z)) \}
\label{eq:optimal-vae} \tag{4}
\end{equation*}
with $\log p(x|z) = - \frac{||x-f(z)||^2}{2c}$
*Comments:*
The previous Eq. (4) includes:
- the reconstruction error between $x$ and $f(z)$ (objective 1)
- the regularisation term given by the $KL$ divergence between $\mathcal{N}(g(x), h(x))$ and $p(z)$ (which is a standard Gaussian) (objective 2).
The constant $c$ balances these two objectives:
the higher $c$ is, the higher the variance we assume around $f(z)$ for the probabilistic decoder, and the more we favour the regularisation term over the reconstruction term.
The opposite holds if $c$ is small.
## Implementing a variational encoder with neural nets
We can’t easily optimise over the entire space of functions.
One way to solve this problem is to constrain the optimisation domain and decide to express $f$, $g$ and $h$ as neural networks.
In this case, $F$, $G$ and $H$ correspond respectively to the families of functions defined by the networks architectures and the optimisation is done over the parameters of these networks.
In practice, $g$ and $h$, which together define the encoder $p(z|x)$, are defined to share the initial part of their network architecture.
**Figure: Encoder part of the VAE (from [Understanding Variational Autoencoders](
https://towardsdatascience.com/understanding-variational-autoencoders-vaes-f70510919f73))**
Moreover, to simplify the problem, $\mathcal{N}(g(x), h(x))$ is assumed to be a multidimensional Gaussian with a diagonal covariance matrix.
Under this assumption, the function $h(x)$, which defines the covariance matrix, only needs to return the vector of its diagonal elements.
To summarise, the output of the encoder part is a Gaussian whose mean $g(x)$ and covariance $h(x)$ are both functions of $x$.
The decoder network $f$ instead needs to model the mean of $p(x|z)$, which we defined as a Gaussian with fixed covariance $cI$.
As usual, the overall architecture is then obtained by concatenating the encoder and the decoder parts.
## Reparametrisation
The last element that remains is related to the fact that we need to sample from the encoded distribution $p(z|x)$ in a way that we can backpropagate the error through the network.
The reparametrisation trick allows us to do exactly that: it makes gradient descent possible despite the random sampling that occurs halfway through the architecture.
The real trick is to understand that $z$ is a random variable sampled from a Gaussian distribution with mean $g(x)$ and variance $h(x)$.
This means that we can express $z$ as:
$z=h(x)\, \eta + g(x)$,
with $\eta \sim \mathcal{N}(0, I)$, where, with a slight abuse of notation, $h(x)$ here denotes the standard deviation (the square root of the variance returned by the encoder).
In this way, we have sandboxed the sampling to only happen on the variable $\eta$, while we can do gradient descent over $h(x)$ and $g(x)$.
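As a minimal illustration (not part of the original derivation), the following sketch uses two stand-in tensors for the encoder outputs $g(x)$ (mean) and $h(x)$ (standard deviation) and checks that gradients indeed flow through the deterministic path while the randomness stays confined to $\eta$:
```python
import torch

# Stand-ins for the encoder outputs: g(x) is the mean, h(x) the standard deviation.
g = torch.tensor([0.5, -1.0], requires_grad=True)
h = torch.tensor([1.2, 0.3], requires_grad=True)

eta = torch.randn(2)      # random draw; no gradient required here
z = h * eta + g           # reparametrised sample

z.sum().backward()        # backpropagation works through the sampling step
print(g.grad)             # tensor([1., 1.])
print(h.grad)             # equal to eta: the gradient w.r.t. the std is the noise draw
```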
## Objective function
Finally, the _objective function_ of the variational autoencoder is defined by Eq. (4), where the expectation operator is replaced, most of the time, by a Monte Carlo approximation (often a single draw).
The final architecture is sketched out in the next figure (from [Understanding Variational Autoencoders](
https://towardsdatascience.com/understanding-variational-autoencoders-vaes-f70510919f73)):
**Figure: architecture of a variational autoencoder (C = $\frac{1}{2c}$)**
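In terms of the quantities used in the implementation below (a diagonal Gaussian encoder with mean $\mu(x)$ and variance $\sigma^2(x)$, and a single reparametrised draw $z = \mu(x) + \sigma(x)\odot\eta$), the per-sample loss that is minimised can be written, up to constants, as
\begin{equation*}
\mathcal{L}(x) = C\,\|x - f(z)\|^2 + \frac{1}{2}\sum_i\left(\sigma_i^2(x) + \mu_i^2(x) - 1 - \log \sigma_i^2(x)\right).
\end{equation*}
Note that the implementation below replaces the squared-error reconstruction term with a binary cross-entropy term, which is a common variant.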
# Implementing a variational autoencoder
## Grab the data
```python
num_epochs = 100
batch_size = 128
learning_rate = 1e-3
ds_mean = 0.1307
ds_std = 0.3081
img_transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((ds_mean,), (ds_std,))  # normalise with the dataset mean and std (MNIST is single-channel)
])
### Grab the dataset
train_ds = MNIST('./data', train=True, transform=img_transform, download=True)
valid_ds = MNIST('./data', train=False, transform=img_transform, download=True)
# plt.imshow(train_ds.data[1])
train_dl = DataLoader(train_ds, batch_size=batch_size, shuffle=True)
valid_dl = DataLoader(valid_ds, batch_size=batch_size, shuffle=True)
```
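A quick optional check (not in the original notebook) to confirm the shapes produced by the loader:
```python
xb, yb = next(iter(train_dl))
print(xb.shape, yb.shape)  # expected: torch.Size([128, 1, 28, 28]) torch.Size([128])
```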
## Define the model
```python
class VAE(nn.Module):
def __init__(self):
super(VAE, self).__init__()
self.base_encoder = nn.Sequential(
nn.Linear(28 * 28, 400),
nn.ReLU(True)
)
self.mean_encoder = nn.Sequential(
self.base_encoder,
nn.Linear(400, 20)
)
self.var_encoder = nn.Sequential(
self.base_encoder,
nn.Linear(400, 20)
)
self.decoder = nn.Sequential(
nn.Linear(20, 400),
nn.ReLU(True),
nn.Linear(400, 784),
nn.Sigmoid()
)
def forward(self, x):
mu, logvar = self.mean_encoder(x), self.var_encoder(x)
z = self.reparametrize(mu, logvar)
return self.decoder(z), mu, logvar
def encode(self, x):
return self.mean_encoder(x), self.var_encoder(x)
def decode(self, x):
mu, logvar = x[0], x[1]
z = self.reparametrize(mu, logvar)
return self.decoder(z), mu, logvar
def reparametrize(self, mu, logvar):
"""This function implements the reparametrisation trick:
z = h(x)*eta+g(x), eta is Gaussian
"""
std = logvar.mul(0.5).exp_()
# sample the normal distribution
if torch.cuda.is_available():
eta = torch.cuda.FloatTensor(std.size()).normal_()
else:
eta = torch.FloatTensor(std.size()).normal_()
return eta.mul(std).add_(mu)
```
```python
model = VAE()
optimizer = torch.optim.Adam(
model.parameters(), lr=learning_rate, weight_decay=1e-5)
if torch.cuda.is_available():
model.cuda()
model.train()
```
VAE(
(base_encoder): Sequential(
(0): Linear(in_features=784, out_features=400, bias=True)
(1): ReLU(inplace)
)
(mean_encoder): Sequential(
(0): Sequential(
(0): Linear(in_features=784, out_features=400, bias=True)
(1): ReLU(inplace)
)
(1): Linear(in_features=400, out_features=20, bias=True)
)
(var_encoder): Sequential(
(0): Sequential(
(0): Linear(in_features=784, out_features=400, bias=True)
(1): ReLU(inplace)
)
(1): Linear(in_features=400, out_features=20, bias=True)
)
(decoder): Sequential(
(0): Linear(in_features=20, out_features=400, bias=True)
(1): ReLU(inplace)
(2): Linear(in_features=400, out_features=784, bias=True)
(3): Sigmoid()
)
)
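Before training, a small sanity check (not part of the original notebook) can be used to confirm the shapes flowing through the model:
```python
# Run a dummy batch of four flattened images through the VAE and inspect the outputs.
dummy = torch.zeros(4, 28 * 28)
if torch.cuda.is_available():
    dummy = dummy.cuda()
x_hat, mu, logvar = model(dummy)
print(x_hat.shape, mu.shape, logvar.shape)  # (4, 784), (4, 20), (4, 20)
```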
```python
mse = nn.MSELoss(reduction='sum')  # 'sum' is the modern equivalent of the deprecated size_average=False
bce = nn.BCELoss(reduction='sum')
def loss_f(x_hat, x, mu, logvar):
"""
x_hat: generated img.
x: origin img.
mu: latent mean
logvar: latent log variance (log(sigma^2))
"""
    # Reconstruction term: the negative expected log-likelihood -E[log p(x|z)],
    # i.e. ||x - f(z)||^2/(2c) up to constants; here BCE is used in place of the squared error.
    # Note: BCE assumes targets in [0, 1]; since the inputs are normalised here, the loss
    # can become negative (as seen in the training log below).
    approx_err = bce(x_hat, x)
    # The KL divergence between two Gaussians has a closed form:
    # KL(N(mu, sigma^2) || N(0, I)) = -0.5 * sum(1 + log(sigma^2) - mu^2 - sigma^2)
    _kl = mu.pow(2).add_(logvar.exp()).mul_(-1).add_(1).add_(logvar)
    KL = torch.sum(_kl).mul_(-0.5)
return approx_err + KL
```
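As a quick check of the KL term (again not in the original notebook), an encoded distribution with zero mean and unit variance should incur no regularisation penalty:
```python
# When mu = 0 and logvar = 0 the encoded distribution equals the prior N(0, I),
# so the closed-form KL term evaluates to zero.
_mu, _logvar = torch.zeros(1, 20), torch.zeros(1, 20)
_kl = _mu.pow(2).add_(_logvar.exp()).mul_(-1).add_(1).add_(_logvar)
print(torch.sum(_kl).mul_(-0.5))  # zero
```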
## Train
```python
model.train()
num_epochs = 100
for epoch in range(num_epochs):
train_loss = 0
for batch_idx, (x, y) in enumerate(train_dl):
        x = x.view(x.size(0), -1)  # flatten each image: batch_size x 784
if torch.cuda.is_available():
x = x.cuda()
y_hat, mu, logvar = model(x)
loss = loss_f(y_hat, x, mu, logvar)
optimizer.zero_grad()
loss.backward()
optimizer.step()
train_loss += loss.data.item()
if batch_idx % 100 == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch,
batch_idx * len(x),
len(train_dl.dataset), 100. * batch_idx / len(train_dl),
loss.data.item() / len(x)))
print('>> Epoch: {} Average loss: {:.4f}'.format(
epoch, train_loss / len(train_dl.dataset)))
if epoch % 10 == 0:
save_image(toImg(x.cpu().data), '../results/01-autoencoder/vae/image_{}_i.png'.format(epoch))
save_image(toImg(y_hat.cpu().data), '../results/01-autoencoder/vae/image_{}_o.png'.format(epoch))
```
Train Epoch: 0 [0/60000 (0%)] Loss: 548.798096
Train Epoch: 0 [12800/60000 (21%)] Loss: -10400.995117
Train Epoch: 0 [25600/60000 (43%)] Loss: -13520.972656
Train Epoch: 0 [38400/60000 (64%)] Loss: -15273.527344
Train Epoch: 0 [51200/60000 (85%)] Loss: -15952.123047
>> Epoch: 0 Average loss: -12749.0210
Train Epoch: 1 [0/60000 (0%)] Loss: -16029.354492
Train Epoch: 1 [12800/60000 (21%)] Loss: -16067.035156
Train Epoch: 1 [25600/60000 (43%)] Loss: -16273.087891
Train Epoch: 1 [38400/60000 (64%)] Loss: -16572.113281
Train Epoch: 1 [51200/60000 (85%)] Loss: -16374.041016
>> Epoch: 1 Average loss: -16285.1504
Train Epoch: 2 [0/60000 (0%)] Loss: -16515.455078
Train Epoch: 2 [12800/60000 (21%)] Loss: -16544.494141
Train Epoch: 2 [25600/60000 (43%)] Loss: -16339.613281
Train Epoch: 2 [38400/60000 (64%)] Loss: -16133.729492
Train Epoch: 2 [51200/60000 (85%)] Loss: -16039.532227
>> Epoch: 2 Average loss: -16280.5579
Train Epoch: 3 [0/60000 (0%)] Loss: -16026.655273
Train Epoch: 3 [12800/60000 (21%)] Loss: -16169.336914
Train Epoch: 3 [25600/60000 (43%)] Loss: -16350.911133
Train Epoch: 3 [38400/60000 (64%)] Loss: -16016.779297
Train Epoch: 3 [51200/60000 (85%)] Loss: -16301.458008
>> Epoch: 3 Average loss: -16044.4319
Train Epoch: 4 [0/60000 (0%)] Loss: -16284.871094
Train Epoch: 4 [12800/60000 (21%)] Loss: -16331.748047
Train Epoch: 4 [25600/60000 (43%)] Loss: -15999.494141
Train Epoch: 4 [38400/60000 (64%)] Loss: -15769.664062
Train Epoch: 4 [51200/60000 (85%)] Loss: -15759.160156
>> Epoch: 4 Average loss: -15910.8688
Train Epoch: 5 [0/60000 (0%)] Loss: -16160.987305
Train Epoch: 5 [12800/60000 (21%)] Loss: -16652.466797
Train Epoch: 5 [25600/60000 (43%)] Loss: -15019.367188
Train Epoch: 5 [38400/60000 (64%)] Loss: -15418.454102
Train Epoch: 5 [51200/60000 (85%)] Loss: -15785.478516
>> Epoch: 5 Average loss: -15739.4408
Train Epoch: 6 [0/60000 (0%)] Loss: -15510.125977
Train Epoch: 6 [12800/60000 (21%)] Loss: -15574.984375
Train Epoch: 6 [25600/60000 (43%)] Loss: -15398.647461
Train Epoch: 6 [38400/60000 (64%)] Loss: -15153.815430
Train Epoch: 6 [51200/60000 (85%)] Loss: -15735.817383
>> Epoch: 6 Average loss: -15629.2851
Train Epoch: 7 [0/60000 (0%)] Loss: -15564.513672
Train Epoch: 7 [12800/60000 (21%)] Loss: -15361.147461
Train Epoch: 7 [25600/60000 (43%)] Loss: -15257.525391
Train Epoch: 7 [38400/60000 (64%)] Loss: -15549.546875
Train Epoch: 7 [51200/60000 (85%)] Loss: -15512.875977
>> Epoch: 7 Average loss: -15618.3374
Train Epoch: 8 [0/60000 (0%)] Loss: -15687.753906
Train Epoch: 8 [12800/60000 (21%)] Loss: -15689.877930
Train Epoch: 8 [25600/60000 (43%)] Loss: -15374.916016
Train Epoch: 8 [38400/60000 (64%)] Loss: -15942.115234
Train Epoch: 8 [51200/60000 (85%)] Loss: -14915.119141
>> Epoch: 8 Average loss: -15655.1822
Train Epoch: 9 [0/60000 (0%)] Loss: -14978.934570
Train Epoch: 9 [12800/60000 (21%)] Loss: -15950.153320
Train Epoch: 9 [25600/60000 (43%)] Loss: -16483.396484
Train Epoch: 9 [38400/60000 (64%)] Loss: -15933.653320
Train Epoch: 9 [51200/60000 (85%)] Loss: -16408.232422
>> Epoch: 9 Average loss: -15699.3873
Train Epoch: 10 [0/60000 (0%)] Loss: -15647.618164
Train Epoch: 10 [12800/60000 (21%)] Loss: -16330.379883
Train Epoch: 10 [25600/60000 (43%)] Loss: -15949.466797
Train Epoch: 10 [38400/60000 (64%)] Loss: -16028.821289
Train Epoch: 10 [51200/60000 (85%)] Loss: -15954.647461
>> Epoch: 10 Average loss: -15707.1465
Train Epoch: 11 [0/60000 (0%)] Loss: -15906.910156
Train Epoch: 11 [12800/60000 (21%)] Loss: -16228.386719
Train Epoch: 11 [25600/60000 (43%)] Loss: -15131.728516
Train Epoch: 11 [38400/60000 (64%)] Loss: -15891.877930
Train Epoch: 11 [51200/60000 (85%)] Loss: -16029.520508
>> Epoch: 11 Average loss: -15761.8662
Train Epoch: 12 [0/60000 (0%)] Loss: -15814.916016
Train Epoch: 12 [12800/60000 (21%)] Loss: -15640.537109
Train Epoch: 12 [25600/60000 (43%)] Loss: -15681.725586
Train Epoch: 12 [38400/60000 (64%)] Loss: -15612.296875
Train Epoch: 12 [51200/60000 (85%)] Loss: -15969.593750
>> Epoch: 12 Average loss: -15755.1473
Train Epoch: 13 [0/60000 (0%)] Loss: -15734.547852
Train Epoch: 13 [12800/60000 (21%)] Loss: -15912.906250
Train Epoch: 13 [25600/60000 (43%)] Loss: -15386.783203
Train Epoch: 13 [38400/60000 (64%)] Loss: -15979.217773
Train Epoch: 13 [51200/60000 (85%)] Loss: -15417.867188
>> Epoch: 13 Average loss: -15720.0671
Train Epoch: 14 [0/60000 (0%)] Loss: -15656.513672
Train Epoch: 14 [12800/60000 (21%)] Loss: -16088.073242
Train Epoch: 14 [25600/60000 (43%)] Loss: -15439.639648
Train Epoch: 14 [38400/60000 (64%)] Loss: -15694.291992
Train Epoch: 14 [51200/60000 (85%)] Loss: -15735.992188
>> Epoch: 14 Average loss: -15731.8497
Train Epoch: 15 [0/60000 (0%)] Loss: -16031.658203
Train Epoch: 15 [12800/60000 (21%)] Loss: -15363.505859
Train Epoch: 15 [25600/60000 (43%)] Loss: -15335.965820
Train Epoch: 15 [38400/60000 (64%)] Loss: -15611.947266
Train Epoch: 15 [51200/60000 (85%)] Loss: -15649.885742
>> Epoch: 15 Average loss: -15741.5805
Train Epoch: 16 [0/60000 (0%)] Loss: -15287.258789
Train Epoch: 16 [12800/60000 (21%)] Loss: -15810.520508
Train Epoch: 16 [25600/60000 (43%)] Loss: -15941.052734
Train Epoch: 16 [38400/60000 (64%)] Loss: -15806.700195
Train Epoch: 16 [51200/60000 (85%)] Loss: -16056.616211
>> Epoch: 16 Average loss: -15741.6051
Train Epoch: 17 [0/60000 (0%)] Loss: -15775.357422
Train Epoch: 17 [12800/60000 (21%)] Loss: -15751.205078
Train Epoch: 17 [25600/60000 (43%)] Loss: -16333.038086
Train Epoch: 17 [38400/60000 (64%)] Loss: -16151.360352
Train Epoch: 17 [51200/60000 (85%)] Loss: -16332.794922
>> Epoch: 17 Average loss: -15736.9253
Train Epoch: 18 [0/60000 (0%)] Loss: -15776.823242
Train Epoch: 18 [12800/60000 (21%)] Loss: -15912.367188
Train Epoch: 18 [25600/60000 (43%)] Loss: -15632.143555
Train Epoch: 18 [38400/60000 (64%)] Loss: -15429.823242
Train Epoch: 18 [51200/60000 (85%)] Loss: -16084.385742
>> Epoch: 18 Average loss: -15756.3129
Train Epoch: 19 [0/60000 (0%)] Loss: -15728.345703
Train Epoch: 19 [12800/60000 (21%)] Loss: -15664.066406
Train Epoch: 19 [25600/60000 (43%)] Loss: -15685.458984
Train Epoch: 19 [38400/60000 (64%)] Loss: -15815.618164
Train Epoch: 19 [51200/60000 (85%)] Loss: -14626.311523
>> Epoch: 19 Average loss: -15748.6681
Train Epoch: 20 [0/60000 (0%)] Loss: -15371.667969
Train Epoch: 20 [12800/60000 (21%)] Loss: -15351.987305
Train Epoch: 20 [25600/60000 (43%)] Loss: -15516.359375
Train Epoch: 20 [38400/60000 (64%)] Loss: -15894.355469
Train Epoch: 20 [51200/60000 (85%)] Loss: -15774.041016
>> Epoch: 20 Average loss: -15764.9767
Train Epoch: 21 [0/60000 (0%)] Loss: -16372.184570
Train Epoch: 21 [12800/60000 (21%)] Loss: -15606.973633
Train Epoch: 21 [25600/60000 (43%)] Loss: -15101.914062
Train Epoch: 21 [38400/60000 (64%)] Loss: -15878.699219
Train Epoch: 21 [51200/60000 (85%)] Loss: -15291.368164
>> Epoch: 21 Average loss: -15746.8408
Train Epoch: 22 [0/60000 (0%)] Loss: -15855.291992
Train Epoch: 22 [12800/60000 (21%)] Loss: -16354.753906
Train Epoch: 22 [25600/60000 (43%)] Loss: -16244.956055
Train Epoch: 22 [38400/60000 (64%)] Loss: -16145.729492
Train Epoch: 22 [51200/60000 (85%)] Loss: -15999.291992
>> Epoch: 22 Average loss: -15754.9529
Train Epoch: 23 [0/60000 (0%)] Loss: -15501.169922
Train Epoch: 23 [12800/60000 (21%)] Loss: -15767.891602
Train Epoch: 23 [25600/60000 (43%)] Loss: -15450.406250
Train Epoch: 23 [38400/60000 (64%)] Loss: -16731.027344
Train Epoch: 23 [51200/60000 (85%)] Loss: -15620.701172
>> Epoch: 23 Average loss: -15784.9061
Train Epoch: 24 [0/60000 (0%)] Loss: -15639.106445
Train Epoch: 24 [12800/60000 (21%)] Loss: -16171.797852
Train Epoch: 24 [25600/60000 (43%)] Loss: -16093.580078
Train Epoch: 24 [38400/60000 (64%)] Loss: -15425.541992
Train Epoch: 24 [51200/60000 (85%)] Loss: -15944.828125
>> Epoch: 24 Average loss: -15775.7069
Train Epoch: 25 [0/60000 (0%)] Loss: -15554.327148
Train Epoch: 25 [12800/60000 (21%)] Loss: -15968.566406
Train Epoch: 25 [25600/60000 (43%)] Loss: -15659.537109
Train Epoch: 25 [38400/60000 (64%)] Loss: -15878.447266
Train Epoch: 25 [51200/60000 (85%)] Loss: -16219.924805
>> Epoch: 25 Average loss: -15797.3291
Train Epoch: 26 [0/60000 (0%)] Loss: -15969.476562
Train Epoch: 26 [12800/60000 (21%)] Loss: -15485.171875
Train Epoch: 26 [25600/60000 (43%)] Loss: -15726.582031
Train Epoch: 26 [38400/60000 (64%)] Loss: -15938.704102
Train Epoch: 26 [51200/60000 (85%)] Loss: -15694.555664
>> Epoch: 26 Average loss: -15808.7790
Train Epoch: 27 [0/60000 (0%)] Loss: -15663.542969
Train Epoch: 27 [12800/60000 (21%)] Loss: -15626.659180
Train Epoch: 27 [25600/60000 (43%)] Loss: -16402.958984
Train Epoch: 27 [38400/60000 (64%)] Loss: -15684.948242
Train Epoch: 27 [51200/60000 (85%)] Loss: -15679.839844
>> Epoch: 27 Average loss: -15828.3127
Train Epoch: 28 [0/60000 (0%)] Loss: -15980.693359
Train Epoch: 28 [12800/60000 (21%)] Loss: -15336.357422
Train Epoch: 28 [25600/60000 (43%)] Loss: -15683.900391
Train Epoch: 28 [38400/60000 (64%)] Loss: -15997.925781
Train Epoch: 28 [51200/60000 (85%)] Loss: -16086.597656
>> Epoch: 28 Average loss: -15855.4075
Train Epoch: 29 [0/60000 (0%)] Loss: -15888.863281
Train Epoch: 29 [12800/60000 (21%)] Loss: -15684.571289
Train Epoch: 29 [25600/60000 (43%)] Loss: -15961.804688
Train Epoch: 29 [38400/60000 (64%)] Loss: -15849.370117
Train Epoch: 29 [51200/60000 (85%)] Loss: -15698.311523
>> Epoch: 29 Average loss: -15878.0026
Train Epoch: 30 [0/60000 (0%)] Loss: -16085.376953
Train Epoch: 30 [12800/60000 (21%)] Loss: -16326.175781
Train Epoch: 30 [25600/60000 (43%)] Loss: -15949.460938
Train Epoch: 30 [38400/60000 (64%)] Loss: -16597.437500
Train Epoch: 30 [51200/60000 (85%)] Loss: -16148.072266
>> Epoch: 30 Average loss: -15926.7902
Train Epoch: 31 [0/60000 (0%)] Loss: -15720.159180
Train Epoch: 31 [12800/60000 (21%)] Loss: -15995.453125
Train Epoch: 31 [25600/60000 (43%)] Loss: -16108.129883
Train Epoch: 31 [38400/60000 (64%)] Loss: -16219.212891
Train Epoch: 31 [51200/60000 (85%)] Loss: -15874.607422
>> Epoch: 31 Average loss: -15964.4744
Train Epoch: 32 [0/60000 (0%)] Loss: -15681.047852
Train Epoch: 32 [12800/60000 (21%)] Loss: -15873.328125
Train Epoch: 32 [25600/60000 (43%)] Loss: -15883.899414
Train Epoch: 32 [38400/60000 (64%)] Loss: -16537.060547
Train Epoch: 32 [51200/60000 (85%)] Loss: -15421.623047
>> Epoch: 32 Average loss: -16004.1485
Train Epoch: 33 [0/60000 (0%)] Loss: -16307.760742
Train Epoch: 33 [12800/60000 (21%)] Loss: -16323.730469
Train Epoch: 33 [25600/60000 (43%)] Loss: -15635.327148
Train Epoch: 33 [38400/60000 (64%)] Loss: -15912.753906
Train Epoch: 33 [51200/60000 (85%)] Loss: -16238.466797
>> Epoch: 33 Average loss: -16063.7850
Train Epoch: 34 [0/60000 (0%)] Loss: -16172.517578
Train Epoch: 34 [12800/60000 (21%)] Loss: -15760.675781
Train Epoch: 34 [25600/60000 (43%)] Loss: -16394.808594
Train Epoch: 34 [38400/60000 (64%)] Loss: -16498.275391
Train Epoch: 34 [51200/60000 (85%)] Loss: -15712.812500
>> Epoch: 34 Average loss: -16138.8057
Train Epoch: 35 [0/60000 (0%)] Loss: -16025.337891
Train Epoch: 35 [12800/60000 (21%)] Loss: -16540.029297
Train Epoch: 35 [25600/60000 (43%)] Loss: -15970.066406
Train Epoch: 35 [38400/60000 (64%)] Loss: -15916.967773
Train Epoch: 35 [51200/60000 (85%)] Loss: -16363.433594
>> Epoch: 35 Average loss: -16232.1018
Train Epoch: 36 [0/60000 (0%)] Loss: -15838.261719
Train Epoch: 36 [12800/60000 (21%)] Loss: -15826.887695
Train Epoch: 36 [25600/60000 (43%)] Loss: -16310.962891
Train Epoch: 36 [38400/60000 (64%)] Loss: -16837.082031
Train Epoch: 36 [51200/60000 (85%)] Loss: -16399.076172
>> Epoch: 36 Average loss: -16367.7808
Train Epoch: 37 [0/60000 (0%)] Loss: -15912.750000
Train Epoch: 37 [12800/60000 (21%)] Loss: -16463.654297
Train Epoch: 37 [25600/60000 (43%)] Loss: -16507.283203
Train Epoch: 37 [38400/60000 (64%)] Loss: -16087.551758
Train Epoch: 37 [51200/60000 (85%)] Loss: -16615.046875
>> Epoch: 37 Average loss: -16458.7341
Train Epoch: 38 [0/60000 (0%)] Loss: -17020.490234
Train Epoch: 38 [12800/60000 (21%)] Loss: -16587.585938
Train Epoch: 38 [25600/60000 (43%)] Loss: -16319.344727
Train Epoch: 38 [38400/60000 (64%)] Loss: -16382.368164
Train Epoch: 38 [51200/60000 (85%)] Loss: -16504.347656
>> Epoch: 38 Average loss: -16632.4627
Train Epoch: 39 [0/60000 (0%)] Loss: -16381.448242
Train Epoch: 39 [12800/60000 (21%)] Loss: -16697.320312
Train Epoch: 39 [25600/60000 (43%)] Loss: -16804.128906
Train Epoch: 39 [38400/60000 (64%)] Loss: -16615.097656
Train Epoch: 39 [51200/60000 (85%)] Loss: -16981.410156
>> Epoch: 39 Average loss: -16810.5873
Train Epoch: 40 [0/60000 (0%)] Loss: -16861.273438
Train Epoch: 40 [12800/60000 (21%)] Loss: -17466.968750
Train Epoch: 40 [25600/60000 (43%)] Loss: -17402.136719
Train Epoch: 40 [38400/60000 (64%)] Loss: -16999.201172
Train Epoch: 40 [51200/60000 (85%)] Loss: -17495.255859
>> Epoch: 40 Average loss: -17060.1137
Train Epoch: 41 [0/60000 (0%)] Loss: -17004.109375
Train Epoch: 41 [12800/60000 (21%)] Loss: -17359.103516
Train Epoch: 41 [25600/60000 (43%)] Loss: -17511.921875
Train Epoch: 41 [38400/60000 (64%)] Loss: -17389.585938
Train Epoch: 41 [51200/60000 (85%)] Loss: -17074.890625
>> Epoch: 41 Average loss: -17282.8759
Train Epoch: 42 [0/60000 (0%)] Loss: -17634.613281
Train Epoch: 42 [12800/60000 (21%)] Loss: -17447.382812
Train Epoch: 42 [25600/60000 (43%)] Loss: -17465.964844
Train Epoch: 42 [38400/60000 (64%)] Loss: -17410.089844
Train Epoch: 42 [51200/60000 (85%)] Loss: -17276.812500
>> Epoch: 42 Average loss: -17441.8183
Train Epoch: 43 [0/60000 (0%)] Loss: -17767.224609
Train Epoch: 43 [12800/60000 (21%)] Loss: -17329.523438
Train Epoch: 43 [25600/60000 (43%)] Loss: -17272.613281
Train Epoch: 43 [38400/60000 (64%)] Loss: -17227.455078
Train Epoch: 43 [51200/60000 (85%)] Loss: -17635.570312
>> Epoch: 43 Average loss: -17500.9153
Train Epoch: 44 [0/60000 (0%)] Loss: -17490.695312
Train Epoch: 44 [12800/60000 (21%)] Loss: -17791.738281
Train Epoch: 44 [25600/60000 (43%)] Loss: -17543.478516
Train Epoch: 44 [38400/60000 (64%)] Loss: -17350.167969
Train Epoch: 44 [51200/60000 (85%)] Loss: -17349.462891
>> Epoch: 44 Average loss: -17593.8134
Train Epoch: 45 [0/60000 (0%)] Loss: -17580.316406
Train Epoch: 45 [12800/60000 (21%)] Loss: -17385.085938
Train Epoch: 45 [25600/60000 (43%)] Loss: -17382.095703
Train Epoch: 45 [38400/60000 (64%)] Loss: -17429.287109
Train Epoch: 45 [51200/60000 (85%)] Loss: -17536.187500
>> Epoch: 45 Average loss: -17564.7666
Train Epoch: 46 [0/60000 (0%)] Loss: -17598.343750
Train Epoch: 46 [12800/60000 (21%)] Loss: -17388.191406
Train Epoch: 46 [25600/60000 (43%)] Loss: -17293.800781
Train Epoch: 46 [38400/60000 (64%)] Loss: -17492.017578
Train Epoch: 46 [51200/60000 (85%)] Loss: -17604.408203
>> Epoch: 46 Average loss: -17465.0792
Train Epoch: 47 [0/60000 (0%)] Loss: -17610.906250
Train Epoch: 47 [12800/60000 (21%)] Loss: -17191.480469
Train Epoch: 47 [25600/60000 (43%)] Loss: -17691.476562
Train Epoch: 47 [38400/60000 (64%)] Loss: -17332.033203
Train Epoch: 47 [51200/60000 (85%)] Loss: -17568.968750
>> Epoch: 47 Average loss: -17531.3173
Train Epoch: 48 [0/60000 (0%)] Loss: -17307.283203
Train Epoch: 48 [12800/60000 (21%)] Loss: -17492.416016
Train Epoch: 48 [25600/60000 (43%)] Loss: -17337.798828
Train Epoch: 48 [38400/60000 (64%)] Loss: -17336.730469
Train Epoch: 48 [51200/60000 (85%)] Loss: -17419.738281
>> Epoch: 48 Average loss: -17327.9903
Train Epoch: 49 [0/60000 (0%)] Loss: -17431.869141
Train Epoch: 49 [12800/60000 (21%)] Loss: -17365.650391
Train Epoch: 49 [25600/60000 (43%)] Loss: -17182.134766
Train Epoch: 49 [38400/60000 (64%)] Loss: -16896.330078
Train Epoch: 49 [51200/60000 (85%)] Loss: -17013.939453
>> Epoch: 49 Average loss: -17136.9201
Train Epoch: 50 [0/60000 (0%)] Loss: -17015.525391
Train Epoch: 50 [12800/60000 (21%)] Loss: -16782.992188
Train Epoch: 50 [25600/60000 (43%)] Loss: -17026.062500
Train Epoch: 50 [38400/60000 (64%)] Loss: -17217.783203
Train Epoch: 50 [51200/60000 (85%)] Loss: -17103.363281
>> Epoch: 50 Average loss: -17111.1771
Train Epoch: 51 [0/60000 (0%)] Loss: -17389.720703
Train Epoch: 51 [12800/60000 (21%)] Loss: -17199.408203
Train Epoch: 51 [25600/60000 (43%)] Loss: -17102.441406
Train Epoch: 51 [38400/60000 (64%)] Loss: -17427.525391
Train Epoch: 51 [51200/60000 (85%)] Loss: -16863.777344
>> Epoch: 51 Average loss: -17124.5835
Train Epoch: 52 [0/60000 (0%)] Loss: -16963.330078
Train Epoch: 52 [12800/60000 (21%)] Loss: -16904.535156
Train Epoch: 52 [25600/60000 (43%)] Loss: -17380.410156
Train Epoch: 52 [38400/60000 (64%)] Loss: -16948.828125
Train Epoch: 52 [51200/60000 (85%)] Loss: -17177.294922
>> Epoch: 52 Average loss: -17001.3476
Train Epoch: 53 [0/60000 (0%)] Loss: -16871.671875
Train Epoch: 53 [12800/60000 (21%)] Loss: -16856.982422
Train Epoch: 53 [25600/60000 (43%)] Loss: -16822.964844
Train Epoch: 53 [38400/60000 (64%)] Loss: -16449.992188
Train Epoch: 53 [51200/60000 (85%)] Loss: -17396.875000
>> Epoch: 53 Average loss: -16953.9299
Train Epoch: 54 [0/60000 (0%)] Loss: -17157.412109
Train Epoch: 54 [12800/60000 (21%)] Loss: -16841.775391
Train Epoch: 54 [25600/60000 (43%)] Loss: -16886.615234
Train Epoch: 54 [38400/60000 (64%)] Loss: -16942.320312
Train Epoch: 54 [51200/60000 (85%)] Loss: -16818.152344
>> Epoch: 54 Average loss: -16996.4448
Train Epoch: 55 [0/60000 (0%)] Loss: -16912.093750
Train Epoch: 55 [12800/60000 (21%)] Loss: -16525.378906
Train Epoch: 55 [25600/60000 (43%)] Loss: -17093.421875
Train Epoch: 55 [38400/60000 (64%)] Loss: -17006.072266
Train Epoch: 55 [51200/60000 (85%)] Loss: -16766.722656
>> Epoch: 55 Average loss: -16809.1004
Train Epoch: 56 [0/60000 (0%)] Loss: -16664.152344
Train Epoch: 56 [12800/60000 (21%)] Loss: -16771.865234
Train Epoch: 56 [25600/60000 (43%)] Loss: -16739.525391
Train Epoch: 56 [38400/60000 (64%)] Loss: -16652.421875
Train Epoch: 56 [51200/60000 (85%)] Loss: -17241.503906
>> Epoch: 56 Average loss: -16875.5294
Train Epoch: 57 [0/60000 (0%)] Loss: -16731.355469
Train Epoch: 57 [12800/60000 (21%)] Loss: -16529.789062
Train Epoch: 57 [25600/60000 (43%)] Loss: -16789.970703
Train Epoch: 57 [38400/60000 (64%)] Loss: -16686.048828
Train Epoch: 57 [51200/60000 (85%)] Loss: -17631.421875
>> Epoch: 57 Average loss: -16833.4013
Train Epoch: 58 [0/60000 (0%)] Loss: -16920.191406
Train Epoch: 58 [12800/60000 (21%)] Loss: -17039.376953
Train Epoch: 58 [25600/60000 (43%)] Loss: -16618.492188
Train Epoch: 58 [38400/60000 (64%)] Loss: -16558.892578
Train Epoch: 58 [51200/60000 (85%)] Loss: -16741.796875
>> Epoch: 58 Average loss: -16736.3788
Train Epoch: 59 [0/60000 (0%)] Loss: -16661.085938
Train Epoch: 59 [12800/60000 (21%)] Loss: -16362.338867
Train Epoch: 59 [25600/60000 (43%)] Loss: -16555.357422
Train Epoch: 59 [38400/60000 (64%)] Loss: -16383.298828
Train Epoch: 59 [51200/60000 (85%)] Loss: -16890.437500
>> Epoch: 59 Average loss: -16539.4592
Train Epoch: 60 [0/60000 (0%)] Loss: -16634.007812
Train Epoch: 60 [12800/60000 (21%)] Loss: -16557.408203
Train Epoch: 60 [25600/60000 (43%)] Loss: -16968.511719
Train Epoch: 60 [38400/60000 (64%)] Loss: -16812.199219
Train Epoch: 60 [51200/60000 (85%)] Loss: -16647.976562
>> Epoch: 60 Average loss: -16661.2688
Train Epoch: 61 [0/60000 (0%)] Loss: -17091.906250
Train Epoch: 61 [12800/60000 (21%)] Loss: -16241.574219
Train Epoch: 61 [25600/60000 (43%)] Loss: -16217.345703
Train Epoch: 61 [38400/60000 (64%)] Loss: -16891.384766
Train Epoch: 61 [51200/60000 (85%)] Loss: -16775.207031
>> Epoch: 61 Average loss: -16609.7100
Train Epoch: 62 [0/60000 (0%)] Loss: -17134.527344
Train Epoch: 62 [12800/60000 (21%)] Loss: -16760.613281
Train Epoch: 62 [25600/60000 (43%)] Loss: -16567.193359
Train Epoch: 62 [38400/60000 (64%)] Loss: -16474.320312
Train Epoch: 62 [51200/60000 (85%)] Loss: -16455.552734
>> Epoch: 62 Average loss: -16746.5140
Train Epoch: 63 [0/60000 (0%)] Loss: -17373.203125
Train Epoch: 63 [12800/60000 (21%)] Loss: -16973.087891
Train Epoch: 63 [25600/60000 (43%)] Loss: -16784.939453
Train Epoch: 63 [38400/60000 (64%)] Loss: -17112.052734
Train Epoch: 63 [51200/60000 (85%)] Loss: -16774.017578
>> Epoch: 63 Average loss: -16980.7624
Train Epoch: 64 [0/60000 (0%)] Loss: -17215.111328
Train Epoch: 64 [12800/60000 (21%)] Loss: -16998.052734
Train Epoch: 64 [25600/60000 (43%)] Loss: -16724.621094
Train Epoch: 64 [38400/60000 (64%)] Loss: -16849.472656
Train Epoch: 64 [51200/60000 (85%)] Loss: -16990.341797
>> Epoch: 64 Average loss: -16977.7651
Train Epoch: 65 [0/60000 (0%)] Loss: -16878.310547
Train Epoch: 65 [12800/60000 (21%)] Loss: -16966.154297
Train Epoch: 65 [25600/60000 (43%)] Loss: -16739.839844
Train Epoch: 65 [38400/60000 (64%)] Loss: -16463.107422
Train Epoch: 65 [51200/60000 (85%)] Loss: -17488.527344
>> Epoch: 65 Average loss: -16794.3002
Train Epoch: 66 [0/60000 (0%)] Loss: -16805.675781
Train Epoch: 66 [12800/60000 (21%)] Loss: -16661.839844
Train Epoch: 66 [25600/60000 (43%)] Loss: -16651.310547
Train Epoch: 66 [38400/60000 (64%)] Loss: -17002.986328
Train Epoch: 66 [51200/60000 (85%)] Loss: -17010.789062
>> Epoch: 66 Average loss: -16781.7397
Train Epoch: 67 [0/60000 (0%)] Loss: -17037.335938
Train Epoch: 67 [12800/60000 (21%)] Loss: -16572.474609
Train Epoch: 67 [25600/60000 (43%)] Loss: -16418.179688
Train Epoch: 67 [38400/60000 (64%)] Loss: -17102.197266
Train Epoch: 67 [51200/60000 (85%)] Loss: -16854.257812
>> Epoch: 67 Average loss: -16788.2711
Train Epoch: 68 [0/60000 (0%)] Loss: -16557.568359
Train Epoch: 68 [12800/60000 (21%)] Loss: -16540.048828
Train Epoch: 68 [25600/60000 (43%)] Loss: -16928.925781
Train Epoch: 68 [38400/60000 (64%)] Loss: -16511.935547
Train Epoch: 68 [51200/60000 (85%)] Loss: -16502.919922
>> Epoch: 68 Average loss: -16653.5732
Train Epoch: 69 [0/60000 (0%)] Loss: -17173.617188
Train Epoch: 69 [12800/60000 (21%)] Loss: -16657.203125
Train Epoch: 69 [25600/60000 (43%)] Loss: -16737.843750
Train Epoch: 69 [38400/60000 (64%)] Loss: -16758.394531
Train Epoch: 69 [51200/60000 (85%)] Loss: -16613.031250
>> Epoch: 69 Average loss: -16693.8696
Train Epoch: 70 [0/60000 (0%)] Loss: -16670.583984
Train Epoch: 70 [12800/60000 (21%)] Loss: -16123.583984
Train Epoch: 70 [25600/60000 (43%)] Loss: -17051.111328
Train Epoch: 70 [38400/60000 (64%)] Loss: -16375.357422
Train Epoch: 70 [51200/60000 (85%)] Loss: -16055.228516
>> Epoch: 70 Average loss: -16574.8070
Train Epoch: 71 [0/60000 (0%)] Loss: -17300.732422
Train Epoch: 71 [12800/60000 (21%)] Loss: -16737.800781
Train Epoch: 71 [25600/60000 (43%)] Loss: -16490.570312
Train Epoch: 71 [38400/60000 (64%)] Loss: -16617.035156
Train Epoch: 71 [51200/60000 (85%)] Loss: -17057.218750
>> Epoch: 71 Average loss: -16812.1958
Train Epoch: 72 [0/60000 (0%)] Loss: -16844.027344
Train Epoch: 72 [12800/60000 (21%)] Loss: -17196.638672
Train Epoch: 72 [25600/60000 (43%)] Loss: -17155.652344
Train Epoch: 72 [38400/60000 (64%)] Loss: -16815.871094
Train Epoch: 72 [51200/60000 (85%)] Loss: -16868.396484
>> Epoch: 72 Average loss: -17024.8841
Train Epoch: 73 [0/60000 (0%)] Loss: -16993.830078
Train Epoch: 73 [12800/60000 (21%)] Loss: -16626.347656
Train Epoch: 73 [25600/60000 (43%)] Loss: -16572.027344
Train Epoch: 73 [38400/60000 (64%)] Loss: -16964.886719
Train Epoch: 73 [51200/60000 (85%)] Loss: -16832.048828
>> Epoch: 73 Average loss: -16820.0449
Train Epoch: 74 [0/60000 (0%)] Loss: -16726.564453
Train Epoch: 74 [12800/60000 (21%)] Loss: -17005.005859
Train Epoch: 74 [25600/60000 (43%)] Loss: -16827.363281
Train Epoch: 74 [38400/60000 (64%)] Loss: -16812.699219
Train Epoch: 74 [51200/60000 (85%)] Loss: -17007.255859
>> Epoch: 74 Average loss: -16855.9665
Train Epoch: 75 [0/60000 (0%)] Loss: -17013.611328
Train Epoch: 75 [12800/60000 (21%)] Loss: -16890.156250
Train Epoch: 75 [25600/60000 (43%)] Loss: -16731.054688
Train Epoch: 75 [38400/60000 (64%)] Loss: -16648.509766
Train Epoch: 75 [51200/60000 (85%)] Loss: -16863.269531
>> Epoch: 75 Average loss: -16755.0141
Train Epoch: 76 [0/60000 (0%)] Loss: -16395.185547
Train Epoch: 76 [12800/60000 (21%)] Loss: -16981.302734
Train Epoch: 76 [25600/60000 (43%)] Loss: -16936.773438
Train Epoch: 76 [38400/60000 (64%)] Loss: -16885.177734
Train Epoch: 76 [51200/60000 (85%)] Loss: -16895.365234
>> Epoch: 76 Average loss: -16824.4441
Train Epoch: 77 [0/60000 (0%)] Loss: -16717.771484
Train Epoch: 77 [12800/60000 (21%)] Loss: -16638.263672
Train Epoch: 77 [25600/60000 (43%)] Loss: -16863.125000
Train Epoch: 77 [38400/60000 (64%)] Loss: -16758.496094
Train Epoch: 77 [51200/60000 (85%)] Loss: -16392.998047
>> Epoch: 77 Average loss: -16704.3087
Train Epoch: 78 [0/60000 (0%)] Loss: -16794.507812
Train Epoch: 78 [12800/60000 (21%)] Loss: -16618.425781
Train Epoch: 78 [25600/60000 (43%)] Loss: -16530.740234
Train Epoch: 78 [38400/60000 (64%)] Loss: -16447.982422
Train Epoch: 78 [51200/60000 (85%)] Loss: -16567.976562
>> Epoch: 78 Average loss: -16514.9580
Train Epoch: 79 [0/60000 (0%)] Loss: -16274.107422
Train Epoch: 79 [12800/60000 (21%)] Loss: -15965.790039
Train Epoch: 79 [25600/60000 (43%)] Loss: -16976.404297
Train Epoch: 79 [38400/60000 (64%)] Loss: -16533.996094
Train Epoch: 79 [51200/60000 (85%)] Loss: -16301.074219
>> Epoch: 79 Average loss: -16574.6297
Train Epoch: 80 [0/60000 (0%)] Loss: -16861.246094
Train Epoch: 80 [12800/60000 (21%)] Loss: -16715.041016
Train Epoch: 80 [25600/60000 (43%)] Loss: -16621.761719
Train Epoch: 80 [38400/60000 (64%)] Loss: -16638.947266
Train Epoch: 80 [51200/60000 (85%)] Loss: -16915.890625
>> Epoch: 80 Average loss: -16756.9602
Train Epoch: 81 [0/60000 (0%)] Loss: -17538.429688
Train Epoch: 81 [12800/60000 (21%)] Loss: -17190.730469
Train Epoch: 81 [25600/60000 (43%)] Loss: -17047.847656
Train Epoch: 81 [38400/60000 (64%)] Loss: -16756.128906
Train Epoch: 81 [51200/60000 (85%)] Loss: -16829.742188
>> Epoch: 81 Average loss: -16969.9282
Train Epoch: 82 [0/60000 (0%)] Loss: -17068.365234
Train Epoch: 82 [12800/60000 (21%)] Loss: -16725.025391
Train Epoch: 82 [25600/60000 (43%)] Loss: -16801.292969
Train Epoch: 82 [38400/60000 (64%)] Loss: -16457.238281
Train Epoch: 82 [51200/60000 (85%)] Loss: -16701.607422
>> Epoch: 82 Average loss: -16702.8252
Train Epoch: 83 [0/60000 (0%)] Loss: -17212.253906
Train Epoch: 83 [12800/60000 (21%)] Loss: -16595.898438
Train Epoch: 83 [25600/60000 (43%)] Loss: -16469.062500
Train Epoch: 83 [38400/60000 (64%)] Loss: -16930.515625
Train Epoch: 83 [51200/60000 (85%)] Loss: -16346.410156
>> Epoch: 83 Average loss: -16681.1220
Train Epoch: 84 [0/60000 (0%)] Loss: -16859.251953
Train Epoch: 84 [12800/60000 (21%)] Loss: -16471.759766
Train Epoch: 84 [25600/60000 (43%)] Loss: -16511.751953
Train Epoch: 84 [38400/60000 (64%)] Loss: -16661.656250
Train Epoch: 84 [51200/60000 (85%)] Loss: -16598.855469
>> Epoch: 84 Average loss: -16600.5367
Train Epoch: 85 [0/60000 (0%)] Loss: -16690.406250
Train Epoch: 85 [12800/60000 (21%)] Loss: -16376.686523
Train Epoch: 85 [25600/60000 (43%)] Loss: -17155.906250
Train Epoch: 85 [38400/60000 (64%)] Loss: -16336.201172
Train Epoch: 85 [51200/60000 (85%)] Loss: -16828.732422
>> Epoch: 85 Average loss: -16589.3381
Train Epoch: 86 [0/60000 (0%)] Loss: -16457.542969
Train Epoch: 86 [12800/60000 (21%)] Loss: -16272.050781
Train Epoch: 86 [25600/60000 (43%)] Loss: -16127.432617
Train Epoch: 86 [38400/60000 (64%)] Loss: -16412.441406
Train Epoch: 86 [51200/60000 (85%)] Loss: -15926.803711
>> Epoch: 86 Average loss: -16343.3976
Train Epoch: 87 [0/60000 (0%)] Loss: -16248.798828
Train Epoch: 87 [12800/60000 (21%)] Loss: -16318.914062
Train Epoch: 87 [25600/60000 (43%)] Loss: -16081.777344
Train Epoch: 87 [38400/60000 (64%)] Loss: -17063.484375
Train Epoch: 87 [51200/60000 (85%)] Loss: -16452.720703
>> Epoch: 87 Average loss: -16501.6742
Train Epoch: 88 [0/60000 (0%)] Loss: -16617.968750
Train Epoch: 88 [12800/60000 (21%)] Loss: -16762.095703
Train Epoch: 88 [25600/60000 (43%)] Loss: -16962.550781
Train Epoch: 88 [38400/60000 (64%)] Loss: -17069.365234
Train Epoch: 88 [51200/60000 (85%)] Loss: -17031.849609
>> Epoch: 88 Average loss: -16713.8787
Train Epoch: 89 [0/60000 (0%)] Loss: -16634.357422
Train Epoch: 89 [12800/60000 (21%)] Loss: -16780.300781
Train Epoch: 89 [25600/60000 (43%)] Loss: -17194.730469
Train Epoch: 89 [38400/60000 (64%)] Loss: -16642.835938
Train Epoch: 89 [51200/60000 (85%)] Loss: -16811.687500
>> Epoch: 89 Average loss: -16672.3675
Train Epoch: 90 [0/60000 (0%)] Loss: -16412.855469
Train Epoch: 90 [12800/60000 (21%)] Loss: -16578.832031
Train Epoch: 90 [25600/60000 (43%)] Loss: -16710.269531
Train Epoch: 90 [38400/60000 (64%)] Loss: -16771.345703
Train Epoch: 90 [51200/60000 (85%)] Loss: -16795.388672
>> Epoch: 90 Average loss: -16634.6537
Train Epoch: 91 [0/60000 (0%)] Loss: -16775.830078
Train Epoch: 91 [12800/60000 (21%)] Loss: -16958.330078
Train Epoch: 91 [25600/60000 (43%)] Loss: -16458.888672
Train Epoch: 91 [38400/60000 (64%)] Loss: -16233.967773
Train Epoch: 91 [51200/60000 (85%)] Loss: -16599.865234
>> Epoch: 91 Average loss: -16741.0825
Train Epoch: 92 [0/60000 (0%)] Loss: -17074.335938
Train Epoch: 92 [12800/60000 (21%)] Loss: -16677.009766
Train Epoch: 92 [25600/60000 (43%)] Loss: -16861.306641
Train Epoch: 92 [38400/60000 (64%)] Loss: -16204.630859
Train Epoch: 92 [51200/60000 (85%)] Loss: -16753.148438
>> Epoch: 92 Average loss: -16639.2805
Train Epoch: 93 [0/60000 (0%)] Loss: -16560.656250
Train Epoch: 93 [12800/60000 (21%)] Loss: -17046.308594
Train Epoch: 93 [25600/60000 (43%)] Loss: -16815.138672
Train Epoch: 93 [38400/60000 (64%)] Loss: -16509.044922
Train Epoch: 93 [51200/60000 (85%)] Loss: -16683.046875
>> Epoch: 93 Average loss: -16743.7790
Train Epoch: 94 [0/60000 (0%)] Loss: -16674.994141
Train Epoch: 94 [12800/60000 (21%)] Loss: -16944.917969
Train Epoch: 94 [25600/60000 (43%)] Loss: -16855.167969
Train Epoch: 94 [38400/60000 (64%)] Loss: -16546.781250
Train Epoch: 94 [51200/60000 (85%)] Loss: -16807.929688
>> Epoch: 94 Average loss: -16767.7904
Train Epoch: 95 [0/60000 (0%)] Loss: -17421.816406
Train Epoch: 95 [12800/60000 (21%)] Loss: -16699.886719
Train Epoch: 95 [25600/60000 (43%)] Loss: -16530.775391
Train Epoch: 95 [38400/60000 (64%)] Loss: -16462.689453
Train Epoch: 95 [51200/60000 (85%)] Loss: -16422.068359
>> Epoch: 95 Average loss: -16710.3526
Train Epoch: 96 [0/60000 (0%)] Loss: -17130.708984
Train Epoch: 96 [12800/60000 (21%)] Loss: -16741.439453
Train Epoch: 96 [25600/60000 (43%)] Loss: -16863.515625
Train Epoch: 96 [38400/60000 (64%)] Loss: -16242.845703
Train Epoch: 96 [51200/60000 (85%)] Loss: -16575.855469
>> Epoch: 96 Average loss: -16707.6568
Train Epoch: 97 [0/60000 (0%)] Loss: -16601.457031
Train Epoch: 97 [12800/60000 (21%)] Loss: -16559.931641
Train Epoch: 97 [25600/60000 (43%)] Loss: -16509.224609
Train Epoch: 97 [38400/60000 (64%)] Loss: -16780.644531
Train Epoch: 97 [51200/60000 (85%)] Loss: -16513.462891
>> Epoch: 97 Average loss: -16538.1087
Train Epoch: 98 [0/60000 (0%)] Loss: -17080.683594
Train Epoch: 98 [12800/60000 (21%)] Loss: -16576.988281
Train Epoch: 98 [25600/60000 (43%)] Loss: -16717.597656
Train Epoch: 98 [38400/60000 (64%)] Loss: -16582.261719
Train Epoch: 98 [51200/60000 (85%)] Loss: -16598.505859
>> Epoch: 98 Average loss: -16717.3681
Train Epoch: 99 [0/60000 (0%)] Loss: -16601.992188
Train Epoch: 99 [12800/60000 (21%)] Loss: -16578.425781
Train Epoch: 99 [25600/60000 (43%)] Loss: -16143.469727
Train Epoch: 99 [38400/60000 (64%)] Loss: -16556.958984
Train Epoch: 99 [51200/60000 (85%)] Loss: -16283.593750
>> Epoch: 99 Average loss: -16547.9954
```python
# As suggested in https://pytorch.org/tutorials/beginner/saving_loading_models.html
filename = '../saved-mdls/01-autoencoder/vae_{}e.pt'.format(epoch+1)
torch.save(model, filename)
print('model saved as: {}'.format(filename))
```
model saved as: ./saved-mdls/01-autoencoder/vae_100e.pt
/opt/anaconda3/lib/python3.7/site-packages/torch/serialization.py:256: UserWarning: Couldn't retrieve source code for container of type VAE. It won't be checked for correctness upon loading.
"type " + obj.__name__ + ". It won't be checked "
## Using the Variational autoencoder
```python
# Model class must be defined somewhere
model = torch.load('./saved-mdls/01-autoencoder/vae_100e.pt',
map_location=torch.device('cpu'))
model.eval()
```
/Users/andreamunafo/opt/anaconda3/envs/compiling-ai/lib/python3.6/site-packages/torch/serialization.py:493: SourceChangeWarning: source code of class 'torch.nn.modules.activation.ReLU' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
warnings.warn(msg, SourceChangeWarning)
/Users/andreamunafo/opt/anaconda3/envs/compiling-ai/lib/python3.6/site-packages/torch/serialization.py:493: SourceChangeWarning: source code of class 'torch.nn.modules.activation.Sigmoid' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
warnings.warn(msg, SourceChangeWarning)
VAE(
(base_encoder): Sequential(
(0): Linear(in_features=784, out_features=400, bias=True)
(1): ReLU(inplace=True)
)
(mean_encoder): Sequential(
(0): Sequential(
(0): Linear(in_features=784, out_features=400, bias=True)
(1): ReLU(inplace=True)
)
(1): Linear(in_features=400, out_features=20, bias=True)
)
(var_encoder): Sequential(
(0): Sequential(
(0): Linear(in_features=784, out_features=400, bias=True)
(1): ReLU(inplace=True)
)
(1): Linear(in_features=400, out_features=20, bias=True)
)
(decoder): Sequential(
(0): Linear(in_features=20, out_features=400, bias=True)
(1): ReLU(inplace=True)
(2): Linear(in_features=400, out_features=784, bias=True)
(3): Sigmoid()
)
)
Choose an image
```python
idx = 1010
```
Define the transforms to normalise the image (as done during training)
```python
# Define the transforms to normalise the image
pic_tt = transforms.ToTensor() # Converts a PIL Image or numpy.ndarray (H x W x C) in the range [0, 255] to a torch.FloatTensor of shape (C x H x W) in the range [0.0, 1.0]
pic_n = transforms.Normalize((ds_mean,), (ds_std,))
if torch.__version__ != '1.3.0':
pic_data = valid_ds.test_data[idx].numpy()
pic_data = pic_data[:, :, None] # (H x W x C)
print('pic_data.dtype: {}'.format(pic_data.dtype))
else:
pic_data = valid_ds.data[idx].numpy()
pic = pic_tt(pic_data)
pic = pic_n(pic)
# print stats
print('pic mean: {}, std: {}'.format(pic.mean(), pic.std()))
# print pic shape
print('\npic shape (C x H x W): {}'.format(pic.shape))
```
pic mean: 0.0921572670340538, std: 1.0990101099014282
pic shape (C x H x W): torch.Size([1, 28, 28])
```python
plt.imshow(pic.view(28,28))
```
```python
# check size
print(pic.to(device)[None, :, :, :].shape)
print(pic.to(device).view(1, -1).shape)
```
torch.Size([1, 1, 28, 28])
torch.Size([1, 784])
### Encode
```python
device = 'cpu'
mean_encoding, var_encoding = model.encode(pic.to(device).view(1, -1))  # reshape the image to (batch_size, 28*28) before encoding
# print encoding result
print('encoded mean: {}'.format(mean_encoding.detach().cpu().numpy()))
print('encoded var: {}'.format(var_encoding.detach().cpu().numpy()))
```
encoded mean: [[-0.5592429 1.4444453 1.1667817 -0.18782139 1.0589974 -0.55701214
-0.07533638 0.1629481 0.64944714 5.653008 0.3813714 -0.9495977
1.085453 -1.5471504 -3.5718756 1.2370685 -0.7264838 0.53405297
1.5231957 -1.6848253 ]]
encoded var: [[-5.571127 -6.57798 -6.2529154 -5.8869295 -5.8825827 -5.62205
-5.6434155 -6.084442 -6.771547 -5.3372474 -5.7165003 -6.550778
-5.552048 -6.1818843 -5.855712 -5.3861585 -6.335955 -6.6406946
-6.5768113 -6.869463 ]]
### Decode
```python
decoding, _, _ = model.decode((mean_encoding, var_encoding))
```
```python
plt.imshow(decoding.cpu().data.view(28, 28))
```
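Because the prior over the latent space is a standard Gaussian, new digits can also be generated by decoding random latent vectors directly, without any input image. A minimal sketch, assuming the 20-dimensional latent space defined above:
```python
with torch.no_grad():
    z = torch.randn(1, 20)        # sample from the prior p(z) = N(0, I)
    generated = model.decoder(z)  # decode, bypassing the encoder
plt.imshow(generated.view(28, 28))
```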
## fin.
```python
```
# Distance Metrics
by: Team Old Nation
```python
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.metrics import pairwise_distances
import sympy as sym
import seaborn as sns
import numpy as np
%matplotlib inline
```
```python
# Reading in csv
df = pd.read_csv("2022_Project_distance_Matrix.csv")
df
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>ARFL</th>
<th>Argonne</th>
<th>Boeing</th>
<th>Delta Dental</th>
<th>Ford</th>
<th>Hope Village</th>
<th>Kellogg's</th>
<th>Neogen</th>
<th>Old Nation</th>
<th>Qside</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>7</td>
<td>4</td>
<td>9</td>
<td>8</td>
<td>3</td>
<td>0</td>
<td>6</td>
<td>2</td>
<td>5</td>
<td>1</td>
</tr>
<tr>
<th>1</th>
<td>0</td>
<td>1</td>
<td>3</td>
<td>2</td>
<td>4</td>
<td>8</td>
<td>6</td>
<td>5</td>
<td>7</td>
<td>9</td>
</tr>
<tr>
<th>2</th>
<td>3</td>
<td>2</td>
<td>1</td>
<td>6</td>
<td>0</td>
<td>8</td>
<td>5</td>
<td>4</td>
<td>7</td>
<td>9</td>
</tr>
<tr>
<th>3</th>
<td>8</td>
<td>7</td>
<td>3</td>
<td>9</td>
<td>4</td>
<td>5</td>
<td>0</td>
<td>6</td>
<td>1</td>
<td>2</td>
</tr>
<tr>
<th>4</th>
<td>0</td>
<td>2</td>
<td>1</td>
<td>8</td>
<td>7</td>
<td>3</td>
<td>9</td>
<td>6</td>
<td>4</td>
<td>5</td>
</tr>
<tr>
<th>5</th>
<td>3</td>
<td>2</td>
<td>0</td>
<td>6</td>
<td>9</td>
<td>7</td>
<td>8</td>
<td>4</td>
<td>1</td>
<td>5</td>
</tr>
<tr>
<th>6</th>
<td>3</td>
<td>4</td>
<td>2</td>
<td>9</td>
<td>8</td>
<td>6</td>
<td>0</td>
<td>5</td>
<td>7</td>
<td>1</td>
</tr>
<tr>
<th>7</th>
<td>1</td>
<td>2</td>
<td>5</td>
<td>3</td>
<td>0</td>
<td>6</td>
<td>9</td>
<td>7</td>
<td>8</td>
<td>4</td>
</tr>
<tr>
<th>8</th>
<td>6</td>
<td>1</td>
<td>0</td>
<td>5</td>
<td>2</td>
<td>9</td>
<td>4</td>
<td>3</td>
<td>7</td>
<td>8</td>
</tr>
<tr>
<th>9</th>
<td>3</td>
<td>1</td>
<td>2</td>
<td>6</td>
<td>0</td>
<td>8</td>
<td>9</td>
<td>4</td>
<td>5</td>
<td>7</td>
</tr>
<tr>
<th>10</th>
<td>1</td>
<td>7</td>
<td>8</td>
<td>0</td>
<td>9</td>
<td>6</td>
<td>5</td>
<td>4</td>
<td>3</td>
<td>2</td>
</tr>
<tr>
<th>11</th>
<td>5</td>
<td>4</td>
<td>3</td>
<td>6</td>
<td>0</td>
<td>7</td>
<td>2</td>
<td>1</td>
<td>9</td>
<td>8</td>
</tr>
<tr>
<th>12</th>
<td>3</td>
<td>4</td>
<td>6</td>
<td>9</td>
<td>7</td>
<td>2</td>
<td>5</td>
<td>8</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<th>13</th>
<td>8</td>
<td>9</td>
<td>7</td>
<td>5</td>
<td>4</td>
<td>1</td>
<td>6</td>
<td>3</td>
<td>0</td>
<td>2</td>
</tr>
<tr>
<th>14</th>
<td>0</td>
<td>4</td>
<td>1</td>
<td>7</td>
<td>2</td>
<td>8</td>
<td>5</td>
<td>3</td>
<td>6</td>
<td>9</td>
</tr>
<tr>
<th>15</th>
<td>4</td>
<td>3</td>
<td>6</td>
<td>5</td>
<td>9</td>
<td>1</td>
<td>7</td>
<td>8</td>
<td>2</td>
<td>0</td>
</tr>
<tr>
<th>16</th>
<td>2</td>
<td>4</td>
<td>1</td>
<td>7</td>
<td>0</td>
<td>8</td>
<td>9</td>
<td>3</td>
<td>5</td>
<td>6</td>
</tr>
<tr>
<th>17</th>
<td>1</td>
<td>2</td>
<td>5</td>
<td>6</td>
<td>0</td>
<td>9</td>
<td>3</td>
<td>4</td>
<td>7</td>
<td>8</td>
</tr>
<tr>
<th>18</th>
<td>0</td>
<td>5</td>
<td>9</td>
<td>2</td>
<td>1</td>
<td>3</td>
<td>8</td>
<td>4</td>
<td>7</td>
<td>6</td>
</tr>
<tr>
<th>19</th>
<td>2</td>
<td>3</td>
<td>1</td>
<td>7</td>
<td>4</td>
<td>6</td>
<td>9</td>
<td>0</td>
<td>5</td>
<td>8</td>
</tr>
<tr>
<th>20</th>
<td>3</td>
<td>8</td>
<td>7</td>
<td>0</td>
<td>2</td>
<td>4</td>
<td>9</td>
<td>6</td>
<td>1</td>
<td>5</td>
</tr>
<tr>
<th>21</th>
<td>3</td>
<td>2</td>
<td>1</td>
<td>5</td>
<td>8</td>
<td>7</td>
<td>9</td>
<td>0</td>
<td>4</td>
<td>6</td>
</tr>
<tr>
<th>22</th>
<td>0</td>
<td>3</td>
<td>1</td>
<td>7</td>
<td>6</td>
<td>9</td>
<td>5</td>
<td>2</td>
<td>4</td>
<td>8</td>
</tr>
<tr>
<th>23</th>
<td>1</td>
<td>0</td>
<td>3</td>
<td>2</td>
<td>9</td>
<td>8</td>
<td>7</td>
<td>6</td>
<td>5</td>
<td>4</td>
</tr>
<tr>
<th>24</th>
<td>6</td>
<td>7</td>
<td>8</td>
<td>9</td>
<td>5</td>
<td>3</td>
<td>4</td>
<td>0</td>
<td>2</td>
<td>1</td>
</tr>
<tr>
<th>25</th>
<td>2</td>
<td>4</td>
<td>0</td>
<td>9</td>
<td>7</td>
<td>8</td>
<td>3</td>
<td>1</td>
<td>5</td>
<td>6</td>
</tr>
<tr>
<th>26</th>
<td>2</td>
<td>6</td>
<td>8</td>
<td>7</td>
<td>9</td>
<td>5</td>
<td>0</td>
<td>3</td>
<td>4</td>
<td>1</td>
</tr>
<tr>
<th>27</th>
<td>2</td>
<td>0</td>
<td>1</td>
<td>5</td>
<td>9</td>
<td>4</td>
<td>8</td>
<td>6</td>
<td>3</td>
<td>7</td>
</tr>
<tr>
<th>28</th>
<td>3</td>
<td>5</td>
<td>0</td>
<td>6</td>
<td>1</td>
<td>8</td>
<td>7</td>
<td>2</td>
<td>9</td>
<td>4</td>
</tr>
<tr>
<th>29</th>
<td>7</td>
<td>3</td>
<td>9</td>
<td>5</td>
<td>8</td>
<td>0</td>
<td>6</td>
<td>4</td>
<td>1</td>
<td>2</td>
</tr>
<tr>
<th>30</th>
<td>5</td>
<td>4</td>
<td>6</td>
<td>7</td>
<td>8</td>
<td>1</td>
<td>9</td>
<td>3</td>
<td>0</td>
<td>2</td>
</tr>
<tr>
<th>31</th>
<td>6</td>
<td>8</td>
<td>3</td>
<td>7</td>
<td>9</td>
<td>0</td>
<td>4</td>
<td>5</td>
<td>2</td>
<td>1</td>
</tr>
<tr>
<th>32</th>
<td>4</td>
<td>7</td>
<td>6</td>
<td>9</td>
<td>8</td>
<td>0</td>
<td>3</td>
<td>5</td>
<td>1</td>
<td>2</td>
</tr>
<tr>
<th>33</th>
<td>7</td>
<td>3</td>
<td>8</td>
<td>5</td>
<td>6</td>
<td>2</td>
<td>9</td>
<td>4</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<th>34</th>
<td>4</td>
<td>5</td>
<td>9</td>
<td>8</td>
<td>7</td>
<td>1</td>
<td>6</td>
<td>3</td>
<td>0</td>
<td>2</td>
</tr>
<tr>
<th>35</th>
<td>7</td>
<td>5</td>
<td>9</td>
<td>3</td>
<td>8</td>
<td>0</td>
<td>4</td>
<td>6</td>
<td>1</td>
<td>2</td>
</tr>
<tr>
<th>36</th>
<td>2</td>
<td>1</td>
<td>0</td>
<td>5</td>
<td>4</td>
<td>8</td>
<td>6</td>
<td>3</td>
<td>7</td>
<td>9</td>
</tr>
<tr>
<th>37</th>
<td>1</td>
<td>7</td>
<td>3</td>
<td>0</td>
<td>9</td>
<td>6</td>
<td>8</td>
<td>2</td>
<td>5</td>
<td>4</td>
</tr>
<tr>
<th>38</th>
<td>5</td>
<td>0</td>
<td>1</td>
<td>3</td>
<td>2</td>
<td>4</td>
<td>9</td>
<td>6</td>
<td>7</td>
<td>8</td>
</tr>
</tbody>
</table>
</div>
```python
temp = np.zeros([10,10])
for i in range(len(df.columns)):
col1 = df.columns[i]
for j in range(len(df.columns)):
col2 = df.columns[j]
temp[i][j] = np.sum(df[df[col1]==0][col2])/len(df[df[col1]==0][col2])
data = pd.DataFrame(temp, columns=df.columns)
data = data.set_index(data.columns)
data.head()
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>ARFL</th>
<th>Argonne</th>
<th>Boeing</th>
<th>Delta Dental</th>
<th>Ford</th>
<th>Hope Village</th>
<th>Kellogg's</th>
<th>Neogen</th>
<th>Old Nation</th>
<th>Qside</th>
</tr>
</thead>
<tbody>
<tr>
<th>ARFL</th>
<td>0.000000</td>
<td>3.000000</td>
<td>3.000000</td>
<td>5.200000</td>
<td>4.000000</td>
<td>6.200000</td>
<td>6.600000</td>
<td>4.000000</td>
<td>5.600000</td>
<td>7.400000</td>
</tr>
<tr>
<th>Argonne</th>
<td>2.666667</td>
<td>0.000000</td>
<td>1.666667</td>
<td>3.333333</td>
<td>6.666667</td>
<td>5.333333</td>
<td>8.000000</td>
<td>6.000000</td>
<td>5.000000</td>
<td>6.333333</td>
</tr>
<tr>
<th>Boeing</th>
<td>3.200000</td>
<td>2.600000</td>
<td>0.000000</td>
<td>6.200000</td>
<td>4.600000</td>
<td>8.000000</td>
<td>5.600000</td>
<td>2.600000</td>
<td>5.800000</td>
<td>6.400000</td>
</tr>
<tr>
<th>Delta Dental</th>
<td>1.666667</td>
<td>7.333333</td>
<td>6.000000</td>
<td>0.000000</td>
<td>6.666667</td>
<td>5.333333</td>
<td>7.333333</td>
<td>4.000000</td>
<td>3.000000</td>
<td>3.666667</td>
</tr>
<tr>
<th>Ford</th>
<td>2.500000</td>
<td>2.500000</td>
<td>2.833333</td>
<td>5.666667</td>
<td>0.000000</td>
<td>7.666667</td>
<td>6.166667</td>
<td>3.833333</td>
<td>6.833333</td>
<td>7.000000</td>
</tr>
</tbody>
</table>
</div>
```python
data_dist = sym.Matrix(pairwise_distances(data))
data_dist[3,3] = 0.0  # this entry was essentially zero, but rounding errors made it slightly larger
data_dist
```
$\displaystyle \left[\begin{matrix}2.38418579101563 \cdot 10^{-7} & 6.05750223552029 & 5.29150262212918 & 9.31092548210614 & 5.17622771266232 & 12.7561749752816 & 11.4282301536347 & 6.8818602136341 & 12.5153905252693 & 12.2784363825367\\6.05750223552029 & 0.0 & 6.88379740162846 & 10.0111049451208 & 8.65062618157399 & 12.4867930230304 & 12.6051136007927 & 8.61523198888006 & 12.3237575438662 & 11.0453610171873\\5.29150262212918 & 6.88379740162846 & 2.38418579101563 \cdot 10^{-7} & 11.4391141848193 & 5.77523448297412 & 13.4610549363711 & 10.5560935535411 & 6.2010751755912 & 13.3676849154968 & 13.4208792558461\\9.31092548210614 & 10.0111049451208 & 11.4391141848193 & 0.0 & 11.9698695806503 & 10.4076894650062 & 11.9536140513607 & 9.5102284117914 & 9.72325391351407 & 10.816653826392\\5.17622771266232 & 8.65062618157399 & 5.77523448297412 & 11.9698695806503 & 0.0 & 14.4090249496626 & 12.4922198009623 & 8.56024402559751 & 14.0904814206849 & 14.5143607047182\\12.7561749752816 & 12.4867930230304 & 13.4610549363711 & 10.4076894650062 & 14.4090249496626 & 3.37174788087152 \cdot 10^{-7} & 8.32452868202025 & 9.7802522121535 & 4.04907396820557 & 5.77234787586473\\11.4282301536347 & 12.6051136007927 & 10.5560935535411 & 11.9536140513607 & 12.4922198009623 & 8.32452868202025 & 0.0 & 9.84321537348893 & 10.4396413305779 & 8.9876458418085\\6.8818602136341 & 8.61523198888006 & 6.2010751755912 & 9.5102284117914 & 8.56024402559751 & 9.7802522121535 & 9.84321537348893 & 0.0 & 8.74880944281373 & 11.0905365064094\\12.5153905252693 & 12.3237575438662 & 13.3676849154968 & 9.72325391351407 & 14.0904814206849 & 4.04907396820557 & 10.4396413305779 & 8.74880944281373 & 0.0 & 6.73609679265374\\12.2784363825367 & 11.0453610171873 & 13.4208792558461 & 10.816653826392 & 14.5143607047182 & 5.77234787586473 & 8.9876458418085 & 11.0905365064094 & 6.73609679265374 & 0.0\end{matrix}\right]$
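As a quick cross-check (not part of the original analysis), any entry of this matrix is simply the Euclidean distance between the corresponding rows of `data`; for example:
```python
# Euclidean distance between the ARFL and Argonne rows, computed by hand.
d01 = np.sqrt(((data.iloc[0] - data.iloc[1])**2).sum())
print(d01)  # should match data_dist[0, 1] (about 6.0575)
```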
```python
plt.figure(figsize=(10,10))
sns.heatmap(np.corrcoef(pairwise_distances(data)), annot=True, xticklabels=list(data.columns), yticklabels=list(data.columns))
```
Based on the distance matrix and correlation plot, our project is most similar to Hope Village and most different from Ford and Boeing. Ford and Boeing are doing image analysis of some sort, along with heavy machine learning. We, on the other hand, are using census data to learn more about the demographics of Old Nation's customer base. Hope Village has a very similar project, as they are also working with census data along with housing data.
```python
```
<center><h1>Mathematical Optimization for Engineers</h1></center>
<center><h2>Lab 2</h2></center>
<center><h2>Basic math</h2></center>
$$\newcommand{\mr}[1]{\mathrm{#1}}
\newcommand{\D}{\displaystyle}
\newcommand{\bm}[1]{\boldsymbol{#1}}
\newcommand{\bx}{\mathbf{ x}}
\newcommand{\f}{\mathbf{{f}}}
\newcommand{\g}{\mathbf{ g}}
\newcommand{\h}{\mathbf{ h}}
\newcommand{\R}{\mathbb R}
\newcommand{\A}{\mathbf{ A}}
\newcommand{\br}{\boldsymbol{r}}
\newcommand{\bp}{\boldsymbol{p}}
\newcommand{\bnabla}{\mathbf{\nabla}}
$$
In this lab, we will learn about Jacobians, gradients and Hessian matrices.
<u>Notation</u>: Please note that throughout the course, we will denote matrices and vectors with boldface letters. Their components will be denoted by normal letters with subscripts. For example,
$$
\bx = \left(\begin{array}{c}
x_1 \\
\vdots \\
x_n
\end{array} \right)
$$
## Jacobian
Let $\ \mathbf \f:\R^n \rightarrow \R^m,\,\bx \mapsto \f(\bx)$ be a continuously differentiable function, where
$$
\bx = \left(\begin{array}{c}
x_1 \\
\vdots \\
x_n
\end{array} \right) , \qquad \f(\bx) = \left(\begin{array}{c}
f_1(\bx) \\
\vdots \\
f_m(\bx)
\end{array} \right).
$$
The Jacobian $\f'(\bx)$ is defined by the matrix
$$
\f'(\bx) = \left( \begin{array}{ccc}
\frac{\partial f_1(\bx)}{\partial x_1} & \cdots & \frac{\partial f_1(\bx)}{\partial x_n} \\
\vdots & \ddots & \vdots \\
\frac{\partial f_m(\bx)}{\partial x_1} & \cdots & \frac{\partial f_m(\bx)}{\partial x_n}
\end{array} \right).
$$
## Gradient
Now, let $f:\R^n \rightarrow \R,\,\bx \mapsto f(\bx)$ be a scalar-valued continuously differentiable function
with vector-valued arguments.
The gradient $\bnabla f(\bx)$ and Jacobian $f'(\bx)$ are defined as the column vector and row vector, respectively,
$$
\bnabla f(\bx) = \left(\begin{array}{c}
\frac{\partial f(\bx)}{\partial x_1} \\
\vdots \\
\frac{\partial f(\bx)}{\partial x_n}
\end{array} \right), \qquad f'(\bx) = \left(\begin{array}{ccc}
\frac{\partial f(\bx)}{\partial x_1} &
\cdots &
\frac{\partial f(\bx)}{\partial x_n}
\end{array}\right).
$$
Note that $\bnabla f(\bx)^T=f'(\bx)$.<br>
<br>
The same relationship between the gradient (the transposed Jacobian) and the Jacobian also holds for a general $\f:\R^n \rightarrow \R^m,\,\bx \mapsto \f(\bx)$.
### Gradient of the scalar product of two functions
Let $\g,\h:\R^n\rightarrow\R^n$ be two continuously differentiable functions. We want to compute the
gradient of $f:\R^n\rightarrow \R,\;\bx \mapsto f(\bx) = \g(\bx)^T\h(\bx)$, where $\bx\in\R^n$.
We have
$$
f(\bx) = \g(\bx)^T\h(\bx) = \sum_{i=1}^{n} g_i(\bx)h_i(\bx).
$$
The derivative with respect to $x_j$ ($1 \le j \le n$) is computed by the application of the product rule
\begin{equation}\label{eq:1}
\frac{\partial f(\bx)}{\partial x_j} = \sum_{i=1}^{n} \left(\frac{\partial g_i(\bx)}{\partial x_j}h_i(\bx) +g_i(\bx)\frac{\partial h_i(\bx)}{\partial x_j}\right).
\end{equation}
With the notations
$$
\frac{\partial \g(\bx)}{\partial x_j} = \left(\begin{array}{c}
\frac{\partial g_1(\bx)}{\partial x_j} \\
\vdots \\
\frac{\partial g_n(\bx)}{\partial x_j}
\end{array} \right) \quad \text{and} \quad
\frac{\partial \h(\bx)}{\partial x_j}
= \left(\begin{array}{c}
\frac{\partial h_1(\bx)}{\partial x_j} \\
\vdots \\
\frac{\partial h_n(\bx)}{\partial x_j}
\end{array} \right), \text{ respectively},
$$
we can rewrite the equation as
$$
\frac{\partial f(\bx)}{\partial x_j} = \frac{\partial \g(\bx)}{\partial x_j} ^T \h(\bx) +
\g(\bx)^T \frac{\partial \h(\bx)}{\partial x_j} = \h(\bx)^T\frac{\partial \g(\bx)}{\partial x_j}
+\g(\bx)^T \frac{\partial \h(\bx)}{\partial x_j}
$$
Finally,
$$
\bnabla f(\bx)^T = f'(\bx) = \h(\bx)^T \g'(\bx) + \g(\bx)^T\h'(\bx).
$$
$$
\implies \bnabla f(\bx) = \bnabla \g(\bx) \, \h(\bx) + \bnabla \h(\bx) \, \g(\bx).
$$
### Derivative of quadratic form
If $\A\in\R^{n\times n}$ is a symmetric matrix, the function
$$
f:\R^n\rightarrow \R,\quad\bx\mapsto f(\bx) = \bx^T\,\A\,\bx
$$ is called a quadratic form. <br>
<br>
With the definitions,
$$
\g(\bx) := \bx \ \text{and } \ \h(\bx):= \A\,\bx,
$$
we have $f(\bx) = \g(\bx)^T\h(\bx)$, i.e., exactly the situation as above.
With $\g'(\bx) = \mathbf{I}$, where $\mathbf{I}$ denotes the identity matrix, and $\h'(\bx) = \A$, the product rule above (together with the symmetry of $\A$) gives the gradient of $f$ as
$$
\bnabla f(\bx) = 2\,\A\,\bx.
$$
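A quick `sympy` check of this identity (a small sketch; the particular symmetric matrix chosen here is arbitrary):
```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
x = sp.Matrix([x1, x2])
A = sp.Matrix([[2, 1],
               [1, 3]])                      # symmetric by construction

f = (x.T * A * x)[0, 0]                      # quadratic form as a scalar
grad_f = sp.Matrix([sp.diff(f, v) for v in x])

sp.simplify(grad_f - 2 * A * x)              # -> zero vector
```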
### Example 1
Let the function $f:\R^2 \rightarrow \R$ be defined as
$ f(x,y) = \frac{1}{2} (x^2 + \alpha y^2), \alpha \in \R $
<u>Task 1</u>: Find all stationary points of the function $f$.
<u>Task 2</u>: Calculate the Hessian of the function $f$ for arbitrary $x$ and $y$.
<u>Task 3</u>: What are the eigenvalues of the Hessian of the function $f$ with respect to $x$ and $y$?
<u>Task 4</u>: Characterize the stationary points for positive and negative $\alpha$.
<u>Task 5</u>: Characterize the convexity of the function for every $\alpha$.
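A possible `sympy` sketch for checking your answers to these tasks (the symbol names are arbitrary):
```python
import sympy as sp

x, y, alpha = sp.symbols('x y alpha', real=True)
f = sp.Rational(1, 2) * (x**2 + alpha * y**2)

grad = [sp.diff(f, x), sp.diff(f, y)]            # [x, alpha*y]
stationary = sp.solve(grad, [x, y], dict=True)   # (0, 0) for alpha != 0
H = sp.hessian(f, (x, y))                        # diag(1, alpha)
stationary, H, H.eigenvals()
```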
### Example 2: Rosenbrock function
The Rosenbrock function is a famous test function used in optimization. <br>
<br>
It is defined by:
$f:\R^2 \rightarrow \R$, $ f(x,y) = (a-x)^2 + b(y-x^2)^2 $ with $a=1, b=100$
<u>Task 1</u>: Find all stationary points of the function $f$.
<u>Task 2</u>: Calculate the Hessian of the function $f$ for the stationary points found. (Hint: use Python)
<u>Task 3</u>: What are the eigenvalues of the Hessian of the function $f$? (Hint: use Python)
```python
def rosenbrock(x):
return ((x[0]-1)**2 + 100*(x[1]-x[0]**2)**2)
```
```python
# compute the Hessian using autograd, evaluated at the stationary point (1, 1)
import autograd
import autograd.numpy as anp

x0 = anp.array([1.0, 1.0])  # stationary point of the Rosenbrock function (a=1, b=100)
hessian_rosenbrock = autograd.hessian(rosenbrock)(x0)
hessian_rosenbrock
```
```python
# compute the eigenvalues of the Hessian using numpy
import numpy as np

np.linalg.eigvals(hessian_rosenbrock)  # both eigenvalues are positive -> (1, 1) is a minimum
```
# Chap2: An Asset Selling Problem
This chapter addresses a basic question: **when to sell an asset?**
In this case, we use a simple class of policies known as *policy function approximations*.
**Mathematical Model**
* **State Variable**: $S_t$ captures all the information we need at time $t$ (e.g. the physical state $R_t$, other information $I_t$, and the belief state $B_t$).
* **Decision Variable**: $x_t$, where $x_0$ is the design decision, while $x_t$ for $t>0$ represents control variables.
* **Exogenous Information**: The variable $W_t$ captures information that first becomes available between $t-1$ and $t$ from outside of our system.
* **Transition Function**:
$$
S_{t+1} = S^M(S_t, x_t,W_{t+1})
$$
* **Objective Function**: When there are elements of the *problem* that depend on the state variables, we represent costs/contribution as:
$$
C(S_{t},x_{t}) \ or \ C(S_t,x_t, W_{t+1})
$$
**Set of learning problems**: problems in which we are trying to learn about a function that is not itself a function of the state variable; in this case the state variable consists purely of the belief state $B_t$.
## Narrative
We are holding a block of shares of stock, looking for an opportunity to sell. If we sell at time $t$, we receive a price that varies according to some random process over time. Once we sell the stock, the process stops.
## Basic Model
### State Variable:
Two state variables:
* The **Physical State** which says whether or not we are still holding the assest
* The **Information State** the price of stock
* The physical State:
$$
R_t=\left\{
\begin{array}{@{}ll@{}}
1, & \text{if we are holding the stock at time t}\ \\
0, & \text{if we are not longer holding the stock at time t}
\end{array}\right.
$$
If we sell the stock, we receive the price per share $p_t$. This means our state variable is:
$$
S_t=(R_t, p_t)
$$
### Decision Variable:
$$
\begin{equation}
x_t=\left\{
\begin{array}{@{}ll@{}}
1, & \text{if we sell the stock at time t}\ \\
0, & \text{if we do not sell the stock at time t}
\end{array}\right.
\end{equation}
$$
We can only sell the stock if we hold it:
$$
x_t \leq R_t
$$
#### Policy:
What is a policy? <br> **A policy defines how we make decisions given the state:**
$$
X^{\pi}(S_t)
$$
An example policy might be to sell if the price drops below some limit; thus we could write:
$$
\begin{equation}
X^{sell-low}(S_t|\theta^{low})=\left\{
\begin{array}{@{}ll@{}}
1, & \text{if}\ p_t<\theta^{low} \ \text{and}\ R_t=1 \\
1, & \text{if}\ t= T\ \text{and} \ R_t = 1 \\
0, & \text{otherwise}
\end{array}\right.
\end{equation}
$$
### Exogenous Function
The only random process in our basic model is the change in price. Alternatively, we can simply let $W_{t+1}$ be the new price itself, in which case we would write:
$$
W_{t+1} = (p_{t+1})
$$
### Transition Function
This is the equation that describes how the state evolves. The transition equation is:
$$
R_{t+1} = R_{t} - x_{t}
$$
The price evolves over time according to:
$$p_{t+1} = p_t + \hat{p}_{t+1}$$
**Transition Function**
$$
S_{t+1} = S^M(S_t, X^{\pi}(S_t), W_{t+1})
$$
If we use our policy $X^{\pi}(S_t)$, and if we choose a sample path $\omega$ that determines the sequence $W_1,W_2,W_3,\ldots,W_T$, then the simulation of the process can be written as:
$$
(S_0, x_0 = X^{\pi}(S_0), W_1(\omega), S_1, x_1 = X^{\pi}(S_1), W_2(\omega), \ldots, x_{T-1}, W_T(\omega), S_T)
$$
### Objective Function
The performance metric is how much we earn from selling our stock.
$$
C(S_t,x_t) = p_tx_t
$$
We can now formulate the optimization problem:
$$
\max_{x_0,x_1,\ldots,x_{T-1}} \ \sum_{t=0}^{T-1} p_t x_t
$$
Constraints of the Optimization:
$$
\sum_{t=0}^{T-1} x_t = 1, \\
x_t \leq 1, \\
x_t \geq 0,
$$
**Including Uncertainty**:
We are simulating a policy following a sample path $\omega$ of price $p_1(\omega), p_2(\omega),...$
$$
S_{t+1}(\omega) = S^{M}(S_t(\omega), X^{\pi}(S_t(\omega)), W_{t+1}(\omega))
$$
If we follow policy $\pi$ along this sample path, we can compute the performance:
$$
\hat{F}^{\pi}(\omega) = \sum_{t=0}^{T-1} p_t(\omega)X^{\pi}(S_t(\omega))
$$
This is for one sample path. We can simulate over a sample of $N$ paths ($\omega^{1},\ldots,\omega^n,\ldots,\omega^N$) and take the average:
$$
\overline{F}^{\pi} = \frac{1}{N} \sum_{n=1}^{N}\hat{F}^{\pi}(\omega^n)
$$
Finally, we write our optimization problem in terms of finding the best policy:
$$
\max_{\pi \in \Pi} \ \overline{F}^{\pi}
$$
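As a rough illustration of this Monte Carlo evaluation (a toy sketch only — the random-walk price model, the policy signature, and all numbers here are assumptions, not the model implemented in `AssetSellingModel` below):
```python
import numpy as np

def run_one_path(policy, T=20, p0=16.0, sigma=1.0, rng=None):
    """Simulate one sample path omega and return the contribution F_hat of the policy."""
    rng = rng if rng is not None else np.random.default_rng()
    price, holding = p0, 1
    for t in range(T):
        if holding and policy(price, holding, t, T - 1):
            return price                         # C(S_t, x_t) = p_t * x_t
        price += rng.normal(0.0, sigma)          # p_{t+1} = p_t + p_hat_{t+1}
    return price                                 # fallback: forced sale at the horizon

def estimate_F_bar(policy, n_paths=1000, seed=0):
    rng = np.random.default_rng(seed)
    return np.mean([run_one_path(policy, rng=rng) for _ in range(n_paths)])

# example: a "sell-low" rule with theta_low = 15 (sell below 15, or at the final time)
estimate_F_bar(lambda p, R, t, T: R == 1 and (p < 15.0 or t == T))
```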
## Designing Policy
### Sell_low Policy:
In this policy we *sell* as soon as the price drops below a threshold $\theta^{low}$.
$$
X^{sell-low}(S_t|\theta^{low}) =\left\{
\begin{array}{@{}ll@{}}
1, & \text{if}\ p_t < \theta^{low} \ \text{and}\ R_t=1 \\
1, & \text{if}\ t=T \ \text{and}\ R_t=1 \\
0, & \text{otherwise}
\end{array}\right.
$$
### High_Low policy:
In this policy, we will sell the asset if the price jumps **too high** or **too low**:
$$
X^{high-low}(S_t|\theta^{high-low}) =\left\{
\begin{array}{@{}ll@{}}
1, & \text{if}\ \left(p_t < \theta^{low} \ \text{or} \ p_t > \theta^{high}\right) \ \text{and}\ R_t=1 \\
1, & \text{if}\ t=T \ \text{and}\ R_t=1 \\
0, & \text{otherwise}
\end{array}\right.
$$
### Track Policy:
Perhaps we just want to sell when the stock rises above a tracking signal. To do this, first create a smoothed estimate of the price:
$$\overline{p}_t = (1- \alpha)\overline{p}_{t-1} + \alpha \hat{p}_t$$
Now consider a tracking policy that we might write:
$$
X^{track}(S_t | \theta^{track}) =\left\{
\begin{array}{@{}ll@{}}
1, & \text{if}\ p_t > \overline{p}_t + \theta^{track} \ \text{and}\ R_t=1 \\
1, & \text{if}\ t=T \ \text{and}\ R_t=1 \\
0, & \text{otherwise}
\end{array}\right.
$$
For this policy, we need to tweak our model, because we now need $\overline{p}_{t}$ in order to make a decision. This means we would now write our state as:
$$
S_t = (R_t, p_t, \overline{p}_t)
$$
**Classes of policies**:
$F = \{\text{sell-low}, \text{high-low}, \text{track}\}$
**Parameter for each policy**:
$\theta^f$, for each $f \in F$
**Search over policies**:
$\pi \in \Pi$
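As a rough sketch (not the implementation in `AssetSellingPolicy` used below), these three rules could be written as plain functions of the state; the dictionary keys and function names here are assumptions:
```python
def sell_low_rule(state, t, T, theta_low):
    # sell if the price has dropped below theta_low, or if we reach the horizon
    return int(state['resource'] == 1 and (state['price'] < theta_low or t == T))

def high_low_rule(state, t, T, theta_low, theta_high):
    # sell if the price leaves the band [theta_low, theta_high], or at the horizon
    return int(state['resource'] == 1 and
               (state['price'] < theta_low or state['price'] > theta_high or t == T))

def track_rule(state, t, T, theta_track):
    # sell if the price rises theta_track above its smoothed estimate, or at the horizon
    return int(state['resource'] == 1 and
               (state['price'] > state['price_smoothed'] + theta_track or t == T))
```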
## Coding Part:
### Import libraries
```python
import pandas as pd
import numpy as np
from collections import namedtuple
import matplotlib.pyplot as plt
from copy import copy
import math
import seaborn as sns
sns.set_theme(style="darkgrid")
sns.set(rc={'figure.figsize':(12,8)})
```
### Change the directory to get the module
```python
import os
os.chdir("/home/peyman/Documents/PhD_UiS/seqdec_powell_repo/Chap2_Assett_Selling/function")
os.getcwd()
```
'/home/peyman/Documents/PhD_UiS/seqdec_powell_repo/Chap2_Assett_Selling/function'
### Import modules from function
```python
from AssetSellingModel import AssetSellingModel
from AssetSellingPolicy import AssetSellingPolicy
```
### Change the directory to get the parameters
```python
import os
os.chdir("/home/peyman/Documents/PhD_UiS/seqdec_powell_repo/Chap2_Assett_Selling/data")
os.getcwd()
```
'/home/peyman/Documents/PhD_UiS/seqdec_powell_repo/Chap2_Assett_Selling/data'
### Policy Parameters:
```python
sheet1= pd.read_excel("asset_selling_policy_parameters_edit.xlsx", sheet_name="Sheet1", usecols=["policy", "param1","param2"])
```
```python
sheet1
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>policy</th>
<th>param1</th>
<th>param2</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>sell_low</td>
<td>2</td>
<td>NaN</td>
</tr>
<tr>
<th>1</th>
<td>high_low</td>
<td>4</td>
<td>10.0</td>
</tr>
<tr>
<th>2</th>
<td>track</td>
<td>0</td>
<td>4.0</td>
</tr>
</tbody>
</table>
</div>
```python
params = zip(sheet1["param1"], sheet1["param2"])
param_list = list(params)
param_list
```
[(2, nan), (4, 10.0), (0, 4.0)]
### Full grid Policy Parameters:
```python
sheet2 = pd.read_excel("asset_selling_policy_parameters_edit.xlsx", sheet_name="Sheet2")
```
```python
sheet2.dtypes
```
low_min int64
low_max int64
high_min int64
high_max int64
increment_size float64
dtype: object
```python
sheet2  # display the full-grid search parameter ranges
```
### Parameters of the Dynamic Model:
```python
sheet3 = pd.read_excel("asset_selling_policy_parameters_edit.xlsx", sheet_name="Sheet3")
sheet3
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Policy</th>
<th>TimeHorizon</th>
<th>DiscountFactor</th>
<th>InitialPrice</th>
<th>InitialBias</th>
<th>UpStep</th>
<th>DownStep</th>
<th>Variance</th>
<th>Iterations</th>
<th>PrintStep</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>track</td>
<td>40</td>
<td>0.99</td>
<td>16</td>
<td>Up</td>
<td>1</td>
<td>-1</td>
<td>2</td>
<td>10</td>
<td>40</td>
</tr>
</tbody>
</table>
</div>
### Bias Term:
```python
biasdf = pd.read_excel("asset_selling_policy_parameters_edit.xlsx", sheet_name="Sheet4")
biasdf_edit=biasdf.set_index('additional')
biasdf_edit
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Up</th>
<th>Neutral</th>
<th>Down</th>
</tr>
<tr>
<th>additional</th>
<th></th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<th>Up</th>
<td>0.9</td>
<td>0.1</td>
<td>0.0</td>
</tr>
<tr>
<th>Neutral</th>
<td>0.2</td>
<td>0.6</td>
<td>0.2</td>
</tr>
<tr>
<th>Down</th>
<td>0.0</td>
<td>0.1</td>
<td>0.9</td>
</tr>
</tbody>
</table>
</div>
### Track Policy:
```python
policy_selected = sheet3['Policy'][0]
print("The selected policy is: {}".format(policy_selected))
T = sheet3['TimeHorizon'][0]
print("The T value is: {}".format(T))
initPrice = sheet3['InitialPrice'][0]
print("The initial price is: {}".format(initPrice))
initBias = sheet3['InitialBias'][0]
print("The initial Bias is: {}".format(initBias))
exog_params = {'UpStep':sheet3['UpStep'][0],'DownStep':sheet3['DownStep'][0],'Variance':sheet3['Variance'][0],'biasdf'
:biasdf_edit}
print("The exog_params is: {}".format(exog_params))
nIterations = sheet3['Iterations'][0]
print("The nIteration is: {}".format(nIterations))
printStep = sheet3['PrintStep'][0]
print("The printStep is: {}".format(printStep))
printIterations = [0]
print("The printIterations is: {}".format(printIterations))
printIterations.extend(list(reversed(range(nIterations-1,0,-printStep))))
```
The selected policy is: track
The T value is: 40
The initial price is: 16
The initial Bias is: Up
The exog_params is: {'UpStep': 1, 'DownStep': -1, 'Variance': 2, 'biasdf': Up Neutral Down
additional
Up 0.9 0.1 0.0
Neutral 0.2 0.6 0.2
Down 0.0 0.1 0.9}
The nIteration is: 10
The printStep is: 40
The printIterations is: [0]
```python
policy_names = ['sell_low', 'high_low', 'track']
print("policy_names are: {}".format(policy_names))
state_names = ['price', 'resource','bias']
print("state_names are: {}".format(state_names))
init_state = {'price': initPrice, 'resource': 1,'bias':initBias}
print("init_state are: {}".format(init_state))
decision_names = ['sell', 'hold']
print("decision_names are: {}".format(decision_names))
```
policy_names are: ['sell_low', 'high_low', 'track']
state_names are: ['price', 'resource', 'bias']
init_state are: {'price': 16, 'resource': 1, 'bias': 'Up'}
decision_names are: ['sell', 'hold']
```python
biasdf = exog_params["biasdf"]
print("the biasdf_edit is \n .{}".format(biasdf))
print(type(biasdf))
biasdf= biasdf.cumsum(axis =1)
print("the biasdf_edit is \n .{}".format(biasdf))
```
the biasdf_edit is
. Up Neutral Down
additional
Up 0.9 0.1 0.0
Neutral 0.2 0.6 0.2
Down 0.0 0.1 0.9
<class 'pandas.core.frame.DataFrame'>
the biasdf_edit is
. Up Neutral Down
additional
Up 0.9 1.0 1.0
Neutral 0.2 0.8 1.0
Down 0.0 0.1 1.0
```python
M = AssetSellingModel(state_names, decision_names, init_state,exog_params,T)
```
```python
P = AssetSellingPolicy(M, policy_names)
```
### Policy Evaluation
```python
t = 0
prev_price = init_state['price']
```
```python
policy_info = {'sell_low': param_list[0],
'high_low': param_list[1],
'track': param_list[2] + (prev_price,)}
```
```python
policy_info
```
{'sell_low': (2, nan), 'high_low': (4, 10.0), 'track': (0, 4.0, 16)}
```python
policy_selected
```
'track'
```python
#if (not policy_selected =='full_grid'):
print("Selected policy {}, time horizon {}, initial price {} and number of iterations {}".format(policy_selected,T,initPrice,nIterations))
contribution_iterations=[P.run_policy(param_list, policy_info, policy_selected, t) for ite in list(range(nIterations))]
contribution_iterations = pd.Series(contribution_iterations)
print("Contribution per iteration: ")
print(contribution_iterations)
cum_avg_contrib = contribution_iterations.expanding().mean()
print("Cumulative average contribution per iteration: ")
print(cum_avg_contrib)
#plotting the results
#fig, axsubs = plt.subplots(1,2,sharex=True,sharey=True)
#fig.suptitle("Asset selling using policy {} with parameters {} and T {}".format(policy_selected,policy_info[policy_selected],T) )
#i = np.arange(0, nIterations, 1)
#axsubs[0].plot(i, cum_avg_contrib, 'g')
#axsubs[0].set_title('Cumulative average contribution')
#axsubs[1].plot(i, contribution_iterations, 'g')
#axsubs[1].set_title('Contribution per iteration')
# Create a big subplot
#ax = fig.add_subplot(111, frameon=False)
# hide tick and tick label of the big axes
#plt.tick_params(labelcolor='none', top=False, bottom=False, left=False, right=False)
#ax.set_ylabel('USD', labelpad=0) # Use argument `labelpad` to move label downwards.
#ax.set_xlabel('Iterations', labelpad=10)
#plt.show()
```
Selected policy track, time horizon 40, initial price 16 and number of iterations 10
time=0, obj=0.0, s.resource=1, s.price=16, x=Decision(sell=0, hold=1)
coin 0.4139840589481404 curr_bias Up new_bias Up
smoothed_price = 19.275851851363697
time=1, obj=0.0, s.resource=1, s.price=16.818962962840924, x=Decision(sell=0, hold=1)
coin 0.8487800961190239 curr_bias Up new_bias Up
smoothed_price = 12.919965111987779
time=2, obj=0.0, s.resource=1, s.price=15.844213500127637, x=Decision(sell=1, hold=0)
coin 0.45203620488154816 curr_bias Up new_bias Up
obj=15.844213500127637, state.resource=0
time=0, obj=0.0, s.resource=1, s.price=16, x=Decision(sell=0, hold=1)
coin 0.2720800519673423 curr_bias Up new_bias Up
smoothed_price = 29.300677434463253
time=1, obj=0.0, s.resource=1, s.price=19.325169358615813, x=Decision(sell=0, hold=1)
coin 0.905534061916374 curr_bias Up new_bias Neutral
smoothed_price = 11.678950072957633
time=2, obj=0.0, s.resource=1, s.price=17.41361453720127, x=Decision(sell=1, hold=0)
coin 0.3755268704762992 curr_bias Neutral new_bias Neutral
obj=17.41361453720127, state.resource=0
time=0, obj=0.0, s.resource=1, s.price=16, x=Decision(sell=0, hold=1)
coin 0.09168318648059992 curr_bias Up new_bias Up
smoothed_price = 30.17968207409804
time=1, obj=0.0, s.resource=1, s.price=19.54492051852451, x=Decision(sell=0, hold=1)
coin 0.9746533545172668 curr_bias Up new_bias Neutral
smoothed_price = 9.10715798582386
time=2, obj=0.0, s.resource=1, s.price=16.93547988534935, x=Decision(sell=1, hold=0)
coin 0.845463715621411 curr_bias Neutral new_bias Down
obj=16.93547988534935, state.resource=0
time=0, obj=0.0, s.resource=1, s.price=16, x=Decision(sell=0, hold=1)
coin 0.7718855570743978 curr_bias Up new_bias Up
smoothed_price = 13.590645854568095
time=1, obj=0.0, s.resource=1, s.price=15.397661463642024, x=Decision(sell=1, hold=0)
coin 0.058599417274914245 curr_bias Up new_bias Up
obj=15.397661463642024, state.resource=0
time=0, obj=0.0, s.resource=1, s.price=16, x=Decision(sell=0, hold=1)
coin 0.5630457710972723 curr_bias Up new_bias Up
smoothed_price = 14.404641398215574
time=1, obj=0.0, s.resource=1, s.price=15.601160349553894, x=Decision(sell=1, hold=0)
coin 0.6337684851009756 curr_bias Up new_bias Up
obj=15.601160349553894, state.resource=0
time=0, obj=0.0, s.resource=1, s.price=16, x=Decision(sell=0, hold=1)
coin 0.32637382885748367 curr_bias Up new_bias Up
smoothed_price = 29.23727426734112
time=1, obj=0.0, s.resource=1, s.price=19.30931856683528, x=Decision(sell=0, hold=1)
coin 0.010789716774533442 curr_bias Up new_bias Up
smoothed_price = 24.356969142318995
time=2, obj=0.0, s.resource=1, s.price=20.57123121070621, x=Decision(sell=0, hold=1)
coin 0.10655404328873319 curr_bias Up new_bias Up
smoothed_price = 32.917325150498336
time=3, obj=0.0, s.resource=1, s.price=23.65775469565424, x=Decision(sell=0, hold=1)
coin 0.2744609047476718 curr_bias Up new_bias Up
smoothed_price = 20.209678891873452
time=4, obj=0.0, s.resource=1, s.price=22.795735744709045, x=Decision(sell=1, hold=0)
coin 0.7876352836985036 curr_bias Up new_bias Up
obj=22.795735744709045, state.resource=0
time=0, obj=0.0, s.resource=1, s.price=16, x=Decision(sell=0, hold=1)
coin 0.5732916894272274 curr_bias Up new_bias Up
smoothed_price = 27.318028871373315
time=1, obj=0.0, s.resource=1, s.price=18.82950721784333, x=Decision(sell=0, hold=1)
coin 0.25230738226248894 curr_bias Up new_bias Up
smoothed_price = 32.205300235020715
time=2, obj=0.0, s.resource=1, s.price=22.173455472137675, x=Decision(sell=0, hold=1)
coin 0.34703831077049807 curr_bias Up new_bias Up
smoothed_price = 22.184929865392462
time=3, obj=0.0, s.resource=1, s.price=22.176324070451372, x=Decision(sell=0, hold=1)
coin 0.020311694923608736 curr_bias Up new_bias Up
smoothed_price = 36.75498055259952
time=4, obj=0.0, s.resource=1, s.price=25.82098819098841, x=Decision(sell=0, hold=1)
coin 0.5773741485181105 curr_bias Up new_bias Up
smoothed_price = 19.512647218984497
time=5, obj=0.0, s.resource=1, s.price=24.243902947987433, x=Decision(sell=1, hold=0)
coin 0.9296106478220897 curr_bias Up new_bias Neutral
obj=24.243902947987433, state.resource=0
time=0, obj=0.0, s.resource=1, s.price=16, x=Decision(sell=0, hold=1)
coin 0.4326408307147147 curr_bias Up new_bias Up
smoothed_price = 29.78638009471284
time=1, obj=0.0, s.resource=1, s.price=19.44659502367821, x=Decision(sell=0, hold=1)
coin 0.9844689697230611 curr_bias Up new_bias Neutral
smoothed_price = 9.047636235056132
time=2, obj=0.0, s.resource=1, s.price=16.84685532652269, x=Decision(sell=1, hold=0)
coin 0.8262653217404742 curr_bias Neutral new_bias Down
obj=16.84685532652269, state.resource=0
time=0, obj=0.0, s.resource=1, s.price=16, x=Decision(sell=0, hold=1)
coin 0.4433318705856356 curr_bias Up new_bias Up
smoothed_price = 4.415644002958089
time=1, obj=0.0, s.resource=1, s.price=13.103911000739522, x=Decision(sell=1, hold=0)
coin 0.6753561026946597 curr_bias Up new_bias Up
obj=13.103911000739522, state.resource=0
time=0, obj=0.0, s.resource=1, s.price=16, x=Decision(sell=0, hold=1)
coin 0.8662383958568906 curr_bias Up new_bias Up
smoothed_price = 41.35203449737756
time=1, obj=0.0, s.resource=1, s.price=22.33800862434439, x=Decision(sell=0, hold=1)
coin 0.9282091767826981 curr_bias Up new_bias Neutral
smoothed_price = 24.822358317891954
time=2, obj=0.0, s.resource=1, s.price=22.959096047731283, x=Decision(sell=0, hold=1)
coin 0.3886605993398763 curr_bias Neutral new_bias Neutral
smoothed_price = 28.44497845663615
time=3, obj=0.0, s.resource=1, s.price=24.3305666499575, x=Decision(sell=0, hold=1)
coin 0.7922183625790814 curr_bias Neutral new_bias Neutral
smoothed_price = 8.076789104517587
time=4, obj=0.0, s.resource=1, s.price=20.26712226359752, x=Decision(sell=1, hold=0)
coin 0.7101424377623707 curr_bias Neutral new_bias Neutral
obj=20.26712226359752, state.resource=0
Contribution per iteration:
0 15.844214
1 17.413615
2 16.935480
3 15.397661
4 15.601160
5 22.795736
6 24.243903
7 16.846855
8 13.103911
9 20.267122
dtype: float64
Cumulative average contribution per iteration:
0 15.844214
1 16.628914
2 16.731103
3 16.397742
4 16.238426
5 17.331311
6 18.318824
7 18.134828
8 17.575837
9 17.844966
dtype: float64
## Plot Results
```python
df_cont_per_ite = pd.DataFrame({"iteration_number": list(range(nIterations)), "contribution_iterations":contribution_iterations})
sns.lineplot(data=df_cont_per_ite, x= "iteration_number", y="contribution_iterations")
```
## Full grid Search
```python
policy_selected =='full_grid'
# obtain the theta values to carry out a full grid search
grid_search_theta_values = P.grid_search_theta_values(sheet2['low_min'], sheet2['low_max'], sheet2['high_min'], sheet2['high_max'], sheet2['increment_size'])
# use those theta values to calculate corresponding contribution values
```
```python
len(grid_search_theta_values[0])
```
25
```python
%%capture
policy_selected =='full_grid'
# obtain the theta values to carry out a full grid search
grid_search_theta_values = P.grid_search_theta_values(sheet2['low_min'], sheet2['low_max'], sheet2['high_min'], sheet2['high_max'], sheet2['increment_size'])
# use those theta values to calculate corresponding contribution values
contribution_iterations = [P.vary_theta(param_list, policy_info, "high_low", t, grid_search_theta_values[0]) for ite in list(range(nIterations))]
contribution_iterations_arr = np.array(contribution_iterations)
cum_sum_contrib = contribution_iterations_arr.cumsum(axis=0)
nElem = np.arange(1,cum_sum_contrib.shape[0]+1).reshape((cum_sum_contrib.shape[0],1))
cum_avg_contrib=cum_sum_contrib/nElem
#print("cum_avg_contrib")
#print(cum_avg_contrib)
# plot those contribution values on a heat map
P.plot_heat_map_many(cum_avg_contrib, grid_search_theta_values[1], grid_search_theta_values[2], printIterations)
```
```python
print("cum_avg_contrib")
print(cum_avg_contrib)
```
cum_avg_contrib
[[18.47367058 20.02044278 23.71558344 24.04911983 25.64664546 18.46354228
19.9971443 23.4944788 8.82900535 27.39145963 18.41058556 20.46051754
21.80163486 25.30872441 24.51162131 18.97575447 13.24130896 13.83837746
14.65914641 24.51858946 22.05931859 19.54746608 23.29884974 13.64768737
26.03581103]
[19.69100318 20.37305553 23.23858313 23.76467669 25.84529101 18.23380197
16.47730129 17.92318102 10.78275225 26.04158751 15.6912722 20.2448214
17.87449738 19.47026892 24.74667952 16.78197067 16.76903764 14.10385934
18.63771033 24.42103201 22.06444139 17.12797049 23.73444868 14.77986095
20.09595631]
[20.69994569 16.83950464 22.74294447 19.56126436 20.52459656 18.47919587
15.11744387 16.20921065 15.00940661 25.51281981 13.80659895 20.25218014
19.14019475 17.30568466 21.05307615 17.63382212 18.0583466 13.60198246
20.07523909 21.21275022 20.9849638 16.32976729 20.95614325 15.05129774
17.81028782]
[20.18013059 17.80921563 22.83681797 20.66946341 21.67762855 18.64562133
16.51153134 15.06668557 13.78071309 25.56287196 13.84389485 20.76422187
19.73595235 18.96932916 21.87421549 18.05258783 18.52434386 15.50891778
18.55859893 22.63139844 19.56249686 16.15022656 19.50527058 15.13432515
17.22395803]
[20.20517103 18.45097802 22.52135779 18.84335523 19.48817699 18.84645425
17.12043862 16.58986652 15.60289554 25.29226768 14.71355315 19.09216357
20.18432852 17.72401005 20.26431727 18.53066645 18.82637145 15.34413496
19.59040908 21.06536502 19.48544474 16.04541057 20.42913257 14.70689882
16.94170926]
[19.9262659 18.67286269 20.75302403 19.71022848 18.21367716 19.19238006
17.51795722 17.68624384 16.78180395 22.78350727 15.48395009 19.30591146
19.01314863 17.08034736 20.95328276 18.66106818 18.10879285 14.98659596
20.39322653 20.0443103 19.9037171 16.94296788 19.57782471 14.52283041
18.59472501]
[19.8137429 18.9086271 20.84268636 18.39076378 17.23701861 19.36730219
18.20099241 18.23276394 17.66079438 21.17920167 16.1442936 18.43430402
19.43508475 17.9552651 21.50933229 18.57660873 18.42222031 16.25785103
21.01294484 19.29514058 19.22370624 16.51033466 18.79126479 14.48088114
17.95088124]
[19.62653733 17.82007762 20.94883455 17.34353318 18.63876886 19.81500259
17.46284492 18.58482107 16.70445351 20.04914113 16.41550734 18.8476955
19.63813038 17.28721213 20.53508951 18.99281685 17.86891367 17.53757442
20.159934 20.00713271 18.72468824 16.88411792 18.1474659 14.48123896
18.91527785]
[19.63648073 18.1478174 21.24734214 16.45992462 19.69289125 19.82953391
16.79495559 19.30543893 16.28062686 19.00743174 16.61822087 18.12487532
19.85294926 16.80365162 19.69257256 19.14099563 18.57933051 18.03082425
20.63793609 19.24262662 18.34680726 16.33745161 17.85696122 14.5706945
18.48843037]
[19.63661504 18.36680492 21.42992756 17.12582509 20.30051109 19.84620636
17.08335876 19.55338491 15.79131993 18.31310674 17.06211989 18.52285707
20.03049054 17.48311031 20.27369434 19.18396337 18.20648271 18.56249637
19.88403096 18.75795701 18.08166846 16.30207016 18.1781855 14.65780872
19.2417764 ]]
```python
P.plot_heat_map_many(cum_avg_contrib, grid_search_theta_values[1], grid_search_theta_values[2], printIterations)
```
```python
type(cum_avg_contrib)
```
numpy.ndarray
```python
grid_search_theta_values[1]
```
array([[12.],
[13.],
[14.],
[15.],
[16.]])
```python
ax = sns.heatmap(cum_avg_contrib)
```
```python
len(cum_avg_contrib)
```
10
```python
from sympy import symbols, cos, sin, pi, simplify, pprint, tan, expand_trig, sqrt, trigsimp, atan2
from sympy.matrices import Matrix
```
```python
q1, q2, q3, q4, q5, q6= symbols('q1:7')
alpha, beta, gamma = symbols('alpha beta gamma', real = True)
px, py, pz = symbols('px py pz', real = True)
```
```python
R03 = Matrix([
[sin(q2 + q3)*cos(q1), cos(q1)*cos(q2 + q3), -sin(q1)],
[sin(q1)*sin(q2 + q3), sin(q1)*cos(q2 + q3), cos(q1)],
[ cos(q2 + q3), -sin(q2 + q3), 0]])
R03T = Matrix([
[sin(q2 + q3)*cos(q1), sin(q1)*sin(q2 + q3), cos(q2 + q3)],
[cos(q1)*cos(q2 + q3), sin(q1)*cos(q2 + q3), -sin(q2 + q3)],
[ -sin(q1), cos(q1), 0]])
```
```python
R36 = Matrix([
[-sin(q4)*sin(q6) + cos(q4)*cos(q5)*cos(q6), -sin(q4)*cos(q6) - sin(q6)*cos(q4)*cos(q5), -sin(q5)*cos(q4)],
[ sin(q5)*cos(q6), -sin(q5)*sin(q6), cos(q5)],
[-sin(q4)*cos(q5)*cos(q6) - sin(q6)*cos(q4), sin(q4)*sin(q6)*cos(q5) - cos(q4)*cos(q6), sin(q4)*sin(q5)]])
```
```python
R0u = Matrix([
[1.0*cos(alpha)*cos(beta), -1.0*sin(alpha)*cos(gamma) + sin(beta)*sin(gamma)*cos(alpha), 1.0*sin(alpha)*sin(gamma) + sin(beta)*cos(alpha)*cos(gamma)],
[1.0*sin(alpha)*cos(beta), sin(alpha)*sin(beta)*sin(gamma) + 1.0*cos(alpha)*cos(gamma), sin(alpha)*sin(beta)*cos(gamma) - 1.0*sin(gamma)*cos(alpha)],
[ -1.0*sin(beta), 1.0*sin(gamma)*cos(beta), 1.0*cos(beta)*cos(gamma)]])
```
```python
Rgu_eval = Matrix([[0, 0, 1], [0, -1.00000000000000, 0], [1.00000000000000, 0, 0]])
RguT_eval = Matrix([[0, 0, 1], [0, -1.00000000000000, 0], [1.00000000000000, 0, 0]])
```
```python
# Total transform wrt gripper given
# yaw (alpha), pitch (beta), roll (beta)
# position px, py, pz
T0g_b = Matrix([
[1.0*sin(alpha)*sin(gamma) + sin(beta)*cos(alpha)*cos(gamma), 1.0*sin(alpha)*cos(gamma) - 1.0*sin(beta)*sin(gamma)*cos(alpha), 1.0*cos(alpha)*cos(beta), px],
[sin(alpha)*sin(beta)*cos(gamma) - 1.0*sin(gamma)*cos(alpha), -1.0*sin(alpha)*sin(beta)*sin(gamma) - 1.0*cos(alpha)*cos(gamma), 1.0*sin(alpha)*cos(beta), py],
[ 1.0*cos(beta)*cos(gamma), -1.0*sin(gamma)*cos(beta), -1.0*sin(beta), pz],
[ 0, 0, 0, 1]])
```
```python
# Total transform wrt gripper given
# angles q1, q2, q3, q4, q5, q6
T0g_a = Matrix([
[((sin(q1)*sin(q4) + sin(q2 + q3)*cos(q1)*cos(q4))*cos(q5) + sin(q5)*cos(q1)*cos(q2 + q3))*cos(q6) - (-sin(q1)*cos(q4) + sin(q4)*sin(q2 + q3)*cos(q1))*sin(q6), -((sin(q1)*sin(q4) + sin(q2 + q3)*cos(q1)*cos(q4))*cos(q5) + sin(q5)*cos(q1)*cos(q2 + q3))*sin(q6) + (sin(q1)*cos(q4) - sin(q4)*sin(q2 + q3)*cos(q1))*cos(q6), -(sin(q1)*sin(q4) + sin(q2 + q3)*cos(q1)*cos(q4))*sin(q5) + cos(q1)*cos(q5)*cos(q2 + q3), -0.303*sin(q1)*sin(q4)*sin(q5) + 1.25*sin(q2)*cos(q1) - 0.303*sin(q5)*sin(q2 + q3)*cos(q1)*cos(q4) - 0.054*sin(q2 + q3)*cos(q1) + 0.303*cos(q1)*cos(q5)*cos(q2 + q3) + 1.5*cos(q1)*cos(q2 + q3) + 0.35*cos(q1)],
[ ((sin(q1)*sin(q2 + q3)*cos(q4) - sin(q4)*cos(q1))*cos(q5) + sin(q1)*sin(q5)*cos(q2 + q3))*cos(q6) - (sin(q1)*sin(q4)*sin(q2 + q3) + cos(q1)*cos(q4))*sin(q6), -((sin(q1)*sin(q2 + q3)*cos(q4) - sin(q4)*cos(q1))*cos(q5) + sin(q1)*sin(q5)*cos(q2 + q3))*sin(q6) - (sin(q1)*sin(q4)*sin(q2 + q3) + cos(q1)*cos(q4))*cos(q6), -(sin(q1)*sin(q2 + q3)*cos(q4) - sin(q4)*cos(q1))*sin(q5) + sin(q1)*cos(q5)*cos(q2 + q3), 1.25*sin(q1)*sin(q2) - 0.303*sin(q1)*sin(q5)*sin(q2 + q3)*cos(q4) - 0.054*sin(q1)*sin(q2 + q3) + 0.303*sin(q1)*cos(q5)*cos(q2 + q3) + 1.5*sin(q1)*cos(q2 + q3) + 0.35*sin(q1) + 0.303*sin(q4)*sin(q5)*cos(q1)],
[ -(sin(q5)*sin(q2 + q3) - cos(q4)*cos(q5)*cos(q2 + q3))*cos(q6) - sin(q4)*sin(q6)*cos(q2 + q3), (sin(q5)*sin(q2 + q3) - cos(q4)*cos(q5)*cos(q2 + q3))*sin(q6) - sin(q4)*cos(q6)*cos(q2 + q3), -sin(q5)*cos(q4)*cos(q2 + q3) - sin(q2 + q3)*cos(q5), -0.303*sin(q5)*cos(q4)*cos(q2 + q3) - 0.303*sin(q2 + q3)*cos(q5) - 1.5*sin(q2 + q3) + 1.25*cos(q2) - 0.054*cos(q2 + q3) + 0.75],
[ 0, 0, 0, 1]])
```
```python
def get_hypotenuse(a, b):
return sqrt(a*a + b*b)
def get_cosine_law_angle(a, b, c):
cos_gamma = (a*a + b*b - c*c) / (2*a*b)
sin_gamma = sqrt(1 - cos_gamma * cos_gamma)
gamma = atan2(sin_gamma, cos_gamma)
return gamma
def get_wrist_center(gripper_point, R0g, dg = 0.303):
xu, yu, zu = gripper_point
nx, ny, nz = R0g[0, 2], R0g[1, 2], R0g[2, 2]
xw = xu - dg * nx
yw = yu - dg * ny
zw = zu - dg * nz
return xw, yw, zw
def get_first_three_angles(wrist_center):
    x, y, z = wrist_center

    # arm geometry constants (link lengths / offsets)
    a1, a2, a3 = 0.35, 1.25, -0.054
    d1, d4 = 0.75, 1.5
    l = 1.50097168527591    # sqrt(d4**2 + a3**2): distance from joint 3 to the wrist center
    phi = 1.53481186671284  # atan2(d4, -a3): fixed angle of that joint-3-to-wrist link

    # project the wrist center onto the plane of links 2 and 3
    x_prime = get_hypotenuse(x, y)
    mx = x_prime - a1
    my = z - d1
    m = get_hypotenuse(mx, my)
    alpha = atan2(my, mx)

    # triangle with sides a2, l, m: interior angles from the cosine law
    gamma = get_cosine_law_angle(l, a2, m)
    beta = get_cosine_law_angle(m, a2, l)

    q1 = atan2(y, x)
    q2 = pi/2 - beta - alpha
    q3 = -(gamma - phi)

    return q1, q2, q3
def get_last_three_angles(R):
sin_q4 = R[2, 2]
cos_q4 = -R[0, 2]
sin_q5 = sqrt(R[0, 2]**2 + R[2, 2]**2)
cos_q5 = R[1, 2]
sin_q6 = -R[1, 1]
cos_q6 = R[1, 0]
q4 = atan2(sin_q4, cos_q4)
q5 = atan2(sin_q5, cos_q5)
q6 = atan2(sin_q6, cos_q6)
return q4, q5, q6
```
```python
def ik(x, y, z, roll, pitch, yaw, debug = False):
# input: given position and orientation of the gripper wrt to URDFrame
# output: angles q1, q2, q3, q4, q5, q6
gripper_point = x, y, z
R0u_eval = R0u.evalf(subs = {alpha: yaw, beta: pitch, gamma: roll})
R0g_eval = R0u_eval * RguT_eval
wrist_center = get_wrist_center(gripper_point, R0g_eval, dg = 0.303)
j1, j2, j3 = get_first_three_angles(wrist_center)
R03T_eval = R03T.evalf(subs = {q1: j1.evalf(), q2: j2.evalf(), q3: j3.evalf()})
R36_eval = R03T_eval * R0g_eval
j4, j5, j6 = get_last_three_angles(R36_eval)
j1 = j1.evalf()
j2 = j2.evalf()
j3 = j3.evalf()
j4 = j4.evalf()
j5 = j5.evalf()
j6 = j6.evalf()
if debug:
print()
print("\n x:", x, "\n y:", y, "\n z:", z)
print("\n roll:", roll, "\n pitch:", pitch, "\n yaw:", yaw)
print()
print(" j1:", j1, "\n j2:", j2, "\n j3:", j3)
print(" j4:", j4, "\n j5:", j5, "\n j6:", j6)
print()
print("wrist_center", wrist_center)
print()
print("evaluated R0g:")
pprint(R0g_eval)
print()
print("Total transform wrt gripper: given yaw (alpha), pitch (beta), roll (beta), px, py, pz")
pprint(T0g_b.evalf(subs = {
gamma: roll, beta: pitch, alpha: yaw, px: x, py: y, pz: z
}))
print()
print("Total transform wrt gripper: given angles q1, q2, q3, q4, q5, q6")
pprint(T0g_a.evalf(subs = {
q1: j1, q2: j2, q3: j3, q4: j4, q5: j5, q6: j6
}))
return j1, j2, j3, j4, j5, j6
```
```python
qs = ik(x = 0.49792, y = 1.3673, z = 2.4988,
roll = 0.366, pitch = -0.078, yaw = 2.561, debug = True)
```
x: 0.49792
y: 1.3673
z: 2.4988
roll: 0.366
pitch: -0.078
yaw: 2.561
j1: 1.01249809363771
j2: -0.275800363737724
j3: -0.115686651053751
j4: 1.63446527240323
j5: 1.52050002599430
j6: -0.815781306199679
wrist_center (0.750499428337951, 1.20160389781975, 2.47518995758694)
evaluated R0g:
⎡0.257143295038827 0.48887208255965 -0.833595473062543⎤
⎢ ⎥
⎢0.259329420712765 0.796053601157403 0.54685182237706 ⎥
⎢ ⎥
⎣0.93092726749696 -0.356795110642117 0.0779209320563015⎦
Total transform wrt gripper: given yaw (alpha), pitch (beta), roll (beta), px, py, pz
⎡0.257143295038827 0.48887208255965 -0.833595473062543 0.49792⎤
⎢ ⎥
⎢0.259329420712765 0.796053601157403 0.54685182237706 1.3673 ⎥
⎢ ⎥
⎢0.93092726749696 -0.356795110642117 0.0779209320563015 2.4988 ⎥
⎢ ⎥
⎣ 0 0 0 1.0 ⎦
Total transform wrt gripper: given angles q1, q2, q3, q4, q5, q6
⎡0.257143295038827 0.48887208255965 -0.833595473062543 0.497919999999998⎤
⎢ ⎥
⎢0.259329420712765 0.796053601157403 0.54685182237706 1.3673 ⎥
⎢ ⎥
⎢0.93092726749696 -0.356795110642117 0.0779209320563016 2.49880000000001 ⎥
⎢ ⎥
⎣ 0 0 0 1.0 ⎦
```python
qs = ik(x = 2.3537, y = -0.1255546, z = 2.841452,
roll = 0.131008, pitch = -0.10541, yaw = 0.0491503, debug = True)
# -0.07, 0.41, -1.07, 0.32, 0.46, 0
```
x: 2.3537
y: -0.1255546
z: 2.841452
roll: 0.131008
pitch: -0.10541
yaw: 0.0491503
j1: -0.0682697289101386
j2: 0.434273483083027
j3: -1.13476160607020
j4: 0.206486955261342
j5: 0.604353673052791
j6: -0.0272724984420472
wrist_center (2.05274568076731, -0.140358517861560, 2.80957188470638)
evaluated R0g:
⎡-0.0977692193362551 0.0624374999845176 0.993248578325719 ⎤
⎢ ⎥
⎢-0.135600778861087 -0.989558155295888 0.0488578147246194⎥
⎢ ⎥
⎣ 0.985927790724374 -0.129908490419534 0.105214901959143 ⎦
Total transform wrt gripper: given yaw (alpha), pitch (beta), roll (beta), px, py, pz
⎡-0.0977692193362551 0.0624374999845176 0.993248578325719 2.3537 ⎤
⎢ ⎥
⎢-0.135600778861087 -0.989558155295888 0.0488578147246194 -0.1255546⎥
⎢ ⎥
⎢ 0.985927790724374 -0.129908490419534 0.105214901959143 2.841452 ⎥
⎢ ⎥
⎣ 0 0 0 1.0 ⎦
Total transform wrt gripper: given angles q1, q2, q3, q4, q5, q6
⎡-0.0977692193362551 0.0624374999845176 0.993248578325719 2.35369999999999
⎢
⎢-0.135600778861087 -0.989558155295888 0.0488578147246194 -0.1255546
⎢
⎢ 0.985927790724374 -0.129908490419534 0.105214901959143 2.841452
⎢
⎣ 0 0 0 1.0
⎤
⎥
⎥
⎥
⎥
⎥
⎦
```python
qs = ik(x = 2.3537, y = -0.1255546, z = 2.841452,
yaw = 0.131008, pitch = -0.10541, roll = 0.0491503, debug = True)
# -0.07, 0.41, -1.07, 0.32, 0.46, 0
```
x: 2.3537
y: -0.1255546
z: 2.841452
roll: 0.0491503
pitch: -0.10541
yaw: 0.131008
j1: -0.0800813023075095
j2: 0.439897727669121
j3: -1.14290690404898
j4: 0.363106480125535
j5: 0.626901765463759
j6: -0.226818468786107
wrist_center (2.05496387941051, -0.164916872597119, 2.80957188470638)
evaluated R0g:
⎡-0.0977692193362551 0.135600778861087 0.985927790724374⎤
⎢ ⎥
⎢-0.0624374999845176 -0.989558155295888 0.129908490419534⎥
⎢ ⎥
⎣ 0.993248578325719 -0.0488578147246194 0.105214901959143⎦
Total transform wrt gripper: given yaw (alpha), pitch (beta), roll (beta), px, py, pz
⎡-0.0977692193362551 0.135600778861087 0.985927790724374 2.3537 ⎤
⎢ ⎥
⎢-0.0624374999845176 -0.989558155295888 0.129908490419534 -0.1255546⎥
⎢ ⎥
⎢ 0.993248578325719 -0.0488578147246194 0.105214901959143 2.841452 ⎥
⎢ ⎥
⎣ 0 0 0 1.0 ⎦
Total transform wrt gripper: given angles q1, q2, q3, q4, q5, q6
⎡-0.0977692193362552 0.135600778861087 0.985927790724374 2.3536999999999
⎢
⎢-0.0624374999845176 -0.989558155295888 0.129908490419534 -0.1255545999999
⎢
⎢ 0.993248578325719 -0.0488578147246194 0.105214901959143 2.8414520000000
⎢
⎣ 0 0 0 1.0
9 ⎤
⎥
99⎥
⎥
1 ⎥
⎥
⎦
```python
```
# Lecture 24 - Adaptive Learning Rates
# Backpropagation in a Nutshell
Suppose you have data $\{x_i\}_{i=1}^N \in \mathbb{R}^D$ with labels $\{d_i\}_{i=1}^N \in \mathbb{R}^K$. You want to find a mapper that learns the input samples $x_i$ and maps them to their label responses $d_i$, i.e. *classification*.
Consider the following objective function:
$$J(w) = \frac{1}{2} \sum_{j=1}^p e_j^2 = \frac{1}{2} \sum_{j=1}^p \left(d_j-y_j\right)^2$$
where $p$ is the dimensionality of your desired values.
1. **Forward Pass:**
For any neuron $i$ receiving input signals from neurons $j$:
$$y_i = \phi\left(\sum_{j=1}^M w_{ij}x_j\right)$$
where $\phi(\bullet)$ is a pre-defined activation function. (The weights and biases of the network have been initialized to some random value.)
2. **Backward Pass:** Compute the gradient by defining local errors at every layer and neuron
$$\delta_i = \phi'(v_i)\sum_j \delta_j w_{ij}$$
$$\Delta w_{ij} = - \eta \delta_j y_i$$
and update the parameters:
$$w_{ij}^{(n+1)} = w_{ij}^{(n)} + \Delta w_{ij}$$
where $\eta$ is the learning rate.
# Adaptive Linear Systems
In regression, we found an analytic solution for the *optimal* weights of our regressor.
For some data $\{x_i\}_{i=1}^N$ and labels $\{d_i\}_{i=1}^N$, we found that the optimal weights are:
$$w^* = (X^TX)^{-1} X^Td$$
using the mean squared error function $J = \frac{1}{2N} \sum_{i=1}^N (d_i - wx_i)^2$
Note that if we were operating in the feature space of $X$, $\phi(X)$, the optimal weights are:
$$w^* = (\phi(X)^T\phi(X))^{-1} \phi(X)^Td$$
Assuming the data $X$ is demeaned, $R = X^TX$ is called the *covariance* of the input data and $P = X^Td$ is called the *cross-covariance* of the input data with the desired signal.
Such an analytic solution only exists if the model is a *linear function of the parameters*, i.e., $y = Xw$ or $y = \phi(X) w$, which is not the case for an MLP output!
* We can find a solution by performing a *search* of the performance surface (governed by the error function $J$).
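As a quick numerical illustration of the analytic solution (toy data; the sizes and noise level are arbitrary choices):
```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 200, 3
X = rng.normal(size=(N, D))                  # (demeaned) input data
w_true = np.array([1.5, -2.0, 0.5])
d = X @ w_true + 0.1 * rng.normal(size=N)    # desired signal

R = X.T @ X                                  # covariance (up to a 1/N factor)
P = X.T @ d                                  # cross-covariance with the desired signal
w_star = np.linalg.solve(R, P)               # w* = (X^T X)^{-1} X^T d
print(w_star)                                # close to w_true
```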
# The Error Surface for a Linear Neuron
* The mean squared error function $J = \frac{1}{2N} \sum_{i=1}^N e_i^2$ is a *convex* function and therefore we can apply convex optimization techniques to search for the *minima* of this function.
* The most common method to optimize the least squares error is the *steepest descent*.
* The error surface lies in a space with a horizontal axis for each weight and one vertical axis for the error.
* For a linear neuron with a squared error, it is a quadratic bowl.
* Vertical cross-sections are parabolas.
* Horizontal cross-sections are ellipses.
* For multi-layer, non-linear nets the error surface is much more complicated.
* But locally, a piece of a quadratic bowl is usually a very good approximation.
## Convergence Speed of Full-Batch Learning
* Going "downhill" reduces the error, but the direction of steepest descent does not point at the minimum unless the ellipse is a circle.
* The gradient is big in the direction in which only want to travel a small distance.
* The gradient is small in the direction in which we want to travel a large distance
* Even for non-linear multi-layer nets, the error surface is locally quadratic, so the same speed issues apply.
## Learning Rate
$$w^{(t+1)} = w^{(t)} - \eta \nabla J(w^{(t)})$$
* If the learning rate is big, the weights slosh to and fro across the ravine.
* If the learning rate is too big, this oscillation diverges.
* What we would like to achieve:
* Move quickly in direction with small but consistent gradients.
* Move slowly in directions with big but inconsistent gradients.
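A toy illustration of these effects on an elongated quadratic bowl (the curvatures and learning rates below are arbitrary choices):
```python
import numpy as np

H = np.diag([1.0, 50.0])          # quadratic bowl J(w) = 0.5 * w^T H w, very different curvatures

def run_gd(eta, steps=100):
    w = np.array([1.0, 1.0])
    for _ in range(steps):
        w = w - eta * (H @ w)     # gradient of J is H w
    return w

print(run_gd(eta=0.035))          # converges, but crawls along the low-curvature direction
print(run_gd(eta=0.041))          # diverges: oscillation along the steep direction (eta > 2/50)
```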
## Stochastic Gradient Descent - Online Learning
* If the dataset is redundant, the gradient on the first half is almost identical to the gradient on the second half.
* So instead of computing the full gradient, update the weights using the gradient on the first half and then get a gradient for the new weights on the second half.
* The extreme version of this approach updates weights after each point. It's called **online learning**.
* Mini-batches are usually better than online.
* Less computation is used for updating the weights.
* Computing the gradient for many cases simultaneously uses matrix-matrix multiplications which are very efficient, especially with GPUs.
* Mini-batches need to be balanced for classes.
## Full-Batch vs Mini-Batch Learning
* If we use the full gradient computed from all the training cases, **batch learning**, there are many clever ways to speed up (e.g. non-linear conjugate gradient).
* The optimization community has studied the general problem of optimizing smooth non-linear functions for many years.
* Multilayer neural nets are not typical of the problems they study so their methods may need a lot of modifications.
* For large neural networks with very large and highly redundant training sets, it is nearly always best to use **mini-batch learning**.
* The mini-batches may need to be quite big when adapting fancy methods.
* Big mini-batches are more computationally efficient.
# Mini-Batch Gradient Descent
* We start by guessing a learning rate.
* If the error keeps getting worse or oscillates wildly, reduce the learning rate.
* If the error is falling fairly consistently but slowly, increase the learning rate.
* We can write a simple routine to automate this way of adjusting the learning rate.
* Towards the end of mini-batch learning it nearly always helps to turn down the learning rate.
* This removes fluctuations in the final weights caused by the variations between mini-batches.
* Turn down the learning rate when the error stops decreasing.
* Use the error on a separate validation set.
# A Bag of Tricks for Mini-Batch Gradient Descent
<span style="color:blue">**Initializing the Weights**</span>
* If two hidden units have exactly the same bias and exactly the same incoming and outgoing weights, they will always get exactly the same gradient.
* So they can never learn to be different features.
* We break symmetry by initializing the weights to have small random values.
* If a hidden unit has a big *fan-in*, small changes on many of its incoming weights can cause the learning to overshoot.
* We generally want smaller incoming weights when the fan-in is big, so initialize the weights to be proportional to $1/\sqrt{\text{fan-in}}$.
<span style="color:blue">**Shifting the Input (Demeaning)**</span>
* When using the steepest descent, shifting the input values makes a big difference!
* It usually helps to transform each component of the input vector so that it has zero mean over the whole training set.
* The hyperbolic tangent produces hidden activations that are roughly zero mean.
* In this respect it's better than the logistic.
<span style="color:blue">**Scalling the Input (Unit Variance)**</span>
* When using steepest descent, scaling then input values makes a big difference.
* It usually helps to transform each component of the input vector over the whole training set.
<span style="color:blue">**Decorrelate the Input components**</span>
* For a linear neuron, we get a big win by decorrelating each component of the input from the other input components.
* There are several different ways to decorrelate inputs. A reasonable method is to use *Principal Component Analysis (PCA)*.
* Drop the principal components with the smallest eigenvalues.
* This achieves some dimensionality reduction.
* Divide the remaining principal components by the square roots of their eigenvalues. For a linear neuron, this converts an axis aligned elliptical error surface into a circular one.
* For a circular error surface, the gradient points straight towards the minimum.
<span style="color:orange">**Common Problems**</span>
**Plateau**
* If we start with a very big learning rate, the weights of each hidden unit will all become very big and positive or very big and negative.
* The error derivatives for the hidden units will all become tiny and the error will not decrease.
* This is usually a *plateau*, but it can be misunderstood as a local minimum.
**Turning Down Learning Rate**
* Turning down the learning rate reduces the random fluctuations in the error due to the different gradients on different mini-batches.
* So we get a quick win.
* But then we get slower learning.
* So, don't turn down the learning rate too soon!
# Optimization for Training Networks
Here are a few ways to speed up mini-batch learning:
**Momentum**
* Instead of using the gradient to change *position* of the weight *particle*, use it to change the *velocity*.
**Adaptive Learning Rate**
* Use separate adaptive learning rates for each parameter.
* Slowly adjust the rate using the consistency of the gradient for that parameter.
**RMSProp (Root Mean Square Propagation)**
* Use separate adaptive learning rates for each parameter.
* Divide the learning rate for a weight by a running average of the magnitudes of recent gradients for that weight.
* This is the mini-batch version of just using the sign of the gradient (method called **RProp** designed for full-batch learning).
**ADAM (Adaptive Moment Estimation)**
* Use separate adaptive learning rates for each parameter.
* Adaptation of RMSProp, running averages of both gradient and second moments are used
# The Momentum Learning
* The momentum learning can be applied to full-batch learning or mini-batch learning.
* Probably the commonest recipe to learn deep neural nets is to use stochastic gradient descent with mini-batches combined with momentum.
* Imagine a ball on the error surface. The location of the ball in the horizontal plane represents the weight vector.
* The ball starts off by following the gradient, but once it has velocity, it no longer does steepest descent.
* Its momentum makes it keep going in the previous direction.
* We need to introduce *viscosity* so that the velocity dies off when we are getting closer to the solution.
* It damps oscillations in directions of high curvature by combining gradients with opposite signs.
* It builds up speed in directions with a gentle but consistent gradient.
\begin{align}
\Delta w_{ji}^{(t)} &= \alpha \Delta w_{ji}^{(t-1)} - \eta \nabla J(w_{ji}^{(t)})\\
&= \alpha \Delta w_{ji}^{(t-1)} - \eta \delta_j^{(t)} y_i^{(t)}
\end{align}
The effect of the gradient is to increment the previous velocity (in the downhill direction). The velocity also decays by $\alpha$, which is slightly less than 1 (generally, $\alpha=0.9$).
* At the beginning of learning there may be very large gradients.
* So it pays off to use a small momentum (e.g. $\alpha=0.5$).
* Once the large gradients have disappeared and the weights are stuck in a ravine the momentum can be smoothly raised to its final value (e.g. $\alpha=0.9$ or even $0.99$).
* This allows us to learn at a rate that would have caused divergent oscillations without momentum (case of increased learning rate only).
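A minimal sketch of gradient descent with momentum, using the same toy quadratic bowl as above (the hyperparameter values are arbitrary):
```python
import numpy as np

def gd_momentum(grad_fn, w0, eta=0.02, alpha=0.9, steps=200):
    w = np.asarray(w0, dtype=float)
    velocity = np.zeros_like(w)
    for _ in range(steps):
        velocity = alpha * velocity - eta * grad_fn(w)   # decay the old velocity, add the new gradient step
        w = w + velocity
    return w

H = np.diag([1.0, 50.0])
print(gd_momentum(lambda w: H @ w, w0=[1.0, 1.0]))       # converges along both directions
```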
## Nesterov's Accelerated Gradient Descent
* The standard momentum *first* computes the gradient at the current location and *then* takes a big jump in the direction of the updated accumulated gradient.
* Ilya Sutskever (2012 unpublished) suggested a new form of momentum that often works better.
* Inspired by the Nesterov method for optimizing convex functions.
* *First* make a big jump in the direction of the previous accumulated gradient.
* *Then* measure the gradient where you end up and make a correction.
$$\Delta w_{ji}^{(t)} = \alpha \Delta w_{ji}^{(t-1)} - \eta \nabla J\left(w_{ji}^{(t-1)} + \alpha \Delta w_{ji}^{(t-1)}\right)$$
# (General) Adaptive Learning Rate
* In multilayer neural networks, the appropriate learning rates can vary widely between weights:
* The magnitude of the gradient are often very different for different layers, especially if the initial weights are small.
* The fan-in of a unit determines the size of the "overshoot" effects caused by simultaneously changing many of the incoming weights of a unit to correct the same error.
* So use a global learning rate multiplied by an appropriate local gain that is determined empirically for each weight.
* Start with a local gain of 1 for every weight.
* Increase the local gain if the gradient for that weight does not change sign.
* Use small additive increases and multiplicative decreases.
* This ensures that big gains decay rapidly when oscillations start.
* If the gradient is totally random the gain will hover around 1 when we increase by plus $\delta$ half the time and decrease by times $1-\delta$ half the time.
$$\Delta w_{ji} = -\eta g_{ji} \nabla J(w_{ji})$$
\begin{align}
\text{If } &\left(\nabla J(w_{ji}^{(t)}) \times \nabla J(w_{ji}^{(t-1)})\right) \geq 0 \\
&\text{then } g_{ji}(t) = g_{ji}(t-1) + \delta \\
&\text{else } g_{ji}(t) = g_{ji}(t-1) \times (1-\delta) \\
\end{align}
* Need to limit the gains to lie in some reasonable range, e.g. $[0.1,10]$ or $[0.01,100]$.
* Use full batch learning or very large mini-batches.
* This ensures that changes in the sign of the gradient are not mainly due to the sampling error of a mini-batch.
* Adaptive learning rates can be combined with momentum (Jacobs, 1989).
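A sketch of one such update with per-weight gains (additive increase, multiplicative decrease); the clipping range follows the suggestion above, and the function shape is an assumption:
```python
import numpy as np

def adaptive_gain_step(w, grad, prev_grad, gains, eta=0.01, delta=0.05):
    """One full-batch update with a separate gain per weight."""
    same_sign = grad * prev_grad >= 0
    gains = np.where(same_sign, gains + delta, gains * (1.0 - delta))
    gains = np.clip(gains, 0.1, 10.0)        # keep the gains in a reasonable range
    w = w - eta * gains * grad
    return w, gains
```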
# RProp
* RProp stands for *Resilient BackPropagation*.
* The magnitude of the gradient can be very different for different weights and can change during learning.
* This makes it hard to choose a single global learning rate.
* For full-batch learning, we can deal with this variation by only using the sign of the gradient.
* The weight updates are all of the same magnitude.
* This escapes from plateaus with tiny gradients quickly.
$$\Delta w_{ji} = -\eta g_{ji}\text{sign}\left( \nabla J(w_{ji})\right)$$
\begin{align}
\text{If } &\left(\nabla J(w_{ji}^{(t)}) \times \nabla J(w_{ji}^{(t-1)})\right) \geq 0 \\
& \text{then } g_{ji}(t) = g_{ji}(t-1) \times \delta_1 \\
& \text{else } g_{ji}(t) = g_{ji}(t-1) \times \delta_2
\end{align}
* RProp combines the idea of only using the sign of the gradient with the idea of adapting the learning rate separately for each weight.
* Increase the learning rate for a weight multiplicatively if the signs of its last two gradients agree.
* Otherwise decrease the step size multiplicatively.
* Use full batch learning or very big mini-batches.
* This ensures that changes in the sign of the gradient are not mainly due to the sampling error of a mini-batch.
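A compact sketch of an RProp-style update that uses only the sign of the (full-batch) gradient together with a per-weight step size. The increase/decrease factors (1.2 and 0.5) and the step-size limits are common illustrative choices, not values prescribed in the text.
```python
import numpy as np

def grad_J(w):
    # Placeholder full-batch gradient of a toy quadratic loss
    return np.array([10.0, 1.0]) * w

w = np.array([1.0, 1.0])
step = np.full_like(w, 0.1)       # individual step size per weight
prev_grad = np.zeros_like(w)
inc, dec = 1.2, 0.5               # delta_1 > 1 and delta_2 < 1

for _ in range(100):
    g = grad_J(w)
    sign_product = g * prev_grad
    step = np.where(sign_product > 0, step * inc,
                    np.where(sign_product < 0, step * dec, step))
    step = np.clip(step, 1e-6, 1.0)
    w = w - step * np.sign(g)     # only the sign of the gradient is used
    prev_grad = g

print(w)
```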
# RMSProp
* RProp is equivalent to using the gradient but also dividing by the size of the gradient.
* The problem with mini-batch RProp is that we divide by a different number for each mini-batch.
* RMSProp keeps a moving average of the squared gradient for each weight.
$$v_{ji}^{(t)} = \gamma v_{ji}^{(t-1)} + (1-\gamma) \left(\nabla J(w_{ji}^{(t)})\right)^2$$
$$w_{ji}^{(t+1)} = w_{ji}^{(t)} - \frac{\eta}{\sqrt{v_{ji}^{(t)}}} \nabla J(w_{ji}^{(t)}) $$
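The corresponding RMSProp sketch, keeping a running mean of the squared gradient per weight; `grad_J` and the hyperparameters are placeholders, and a small $\epsilon$ is added in the denominator for numerical safety.
```python
import numpy as np

def grad_J(w):
    # Placeholder gradient of a toy quadratic loss
    return np.array([10.0, 1.0]) * w

w = np.array([1.0, 1.0])
mean_sq = np.zeros_like(w)        # moving average of the squared gradient
eta, gamma, eps = 0.01, 0.9, 1e-8

for _ in range(500):
    g = grad_J(w)
    mean_sq = gamma * mean_sq + (1 - gamma) * g**2
    w = w - eta * g / (np.sqrt(mean_sq) + eps)

print(w)
```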
# ADAM
* ADAM is a combination of RMSProp and momentum.
* ADAM keeps both moving average of the gradient and the squared gradient.
* ADAM includes bias corrections to the estimates of both the first-order moments (the momentum term) and the (uncentered) second-order moments to account for their initialization at the origin.
* Thus, unlike ADAM, the RMSProp second-order moment estimate may have a high bias early in training.
$$m_w^{(t+1)} = \beta_1 m_w^{(t)} + (1-\beta_1)\nabla J(w^{(t)})$$
$$v_w^{(t+1)} = \beta_2 v_w^{(t)} + (1-\beta_2)\left(\nabla J(w^{(t)})\right)^2$$
$$\hat{m}_w = \frac{m_w^{(t+1)}}{1 - \beta_1^{t+1}}\text{ (bias correction)}$$
$$\hat{v}_w = \frac{v_w^{(t+1)}}{1 - \beta_2^{t+1}}\text{ (bias correction)}$$
$$w^{(t+1)} = w^{(t)} - \eta \frac{\hat{m}_w}{\sqrt{\hat{v}_w} + \epsilon}$$
* Commonly used values are $\beta_1 = 0.9$, $\beta_2 = 0.999$, and $\epsilon = 10^{-8}$.
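And the same toy problem with the ADAM update from the equations above, including both bias corrections; the hyperparameters follow the commonly used values just quoted, everything else is a placeholder.
```python
import numpy as np

def grad_J(w):
    # Placeholder gradient of a toy quadratic loss
    return np.array([10.0, 1.0]) * w

w = np.array([1.0, 1.0])
m = np.zeros_like(w)              # first moment (the momentum term)
v = np.zeros_like(w)              # second moment (uncentered)
eta, beta1, beta2, eps = 0.01, 0.9, 0.999, 1e-8

for t in range(1, 1001):
    g = grad_J(w)
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g**2
    m_hat = m / (1 - beta1**t)    # bias corrections for the zero initialization
    v_hat = v / (1 - beta2**t)
    w = w - eta * m_hat / (np.sqrt(v_hat) + eps)

print(w)
```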
# Summary of Learning Methods for Neural Networks
* For small datasets (e.g. 10,000 samples) or bigger datasets without much redundancy, use a full-batch method.
* AdaGrad, RProp, ...
* For big, redundant datasets use mini-batches.
* Try gradient descent with momentum.
* Try RMSProp.
* Try ADAM.
*Why is there no simple recipe?*
* There are lots of different network architectures
* Tasks differ a lot
* Some require very accurate weights, some don't.
* Some have many very rare cases (e.g. words).
# Recommended Reading
Chapter 8 "Optimization for Training Deep Models" from *Deep Learning* by Ian Goodfellow, Yoshua Bengio, and Aaron Courville
* http://www.deeplearningbook.org/
*Source: 22_LearningRate/22_LearningRate.ipynb (repo: zhengyul9/lecture; licenses: CC-BY-4.0, MIT)*
# Spectral Analysis of Deterministic Signals
*This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing. Please direct questions and suggestions to [Sascha.Spors@uni-rostock.de](mailto:Sascha.Spors@uni-rostock.de).*
## Zero-Padding
### Concept
Let's assume a signal $x_N[k]$ of finite length $N$, for instance a windowed signal $x_N[k] = x[k] \cdot \text{rect}_N[k]$. The discrete Fourier transformation (DFT) of $x_N[k]$ reads
\begin{equation}
X_N[\mu] = \sum_{k=0}^{N-1} x_N[k] \; w_N^{\mu k}
\end{equation}
where $w_N = \mathrm{e}^{-\mathrm{j} \frac{2 \pi}{N}}$ denotes the kernel of the DFT. For a sampled time-domain signal, the distance in frequency between two neighboring coefficients is given as $\Delta f = \frac{f_s}{N}$, where $f_s = \frac{1}{T}$ denotes the sampling frequency. Hence, if $N$ is increased the distance between neighboring frequencies is decreased. This leads to the concept of zero-padding in spectral analysis. Here the signal $x_N[k]$ of finite length is filled up with (M-N) zero values to a total length $M \geq N$
\begin{equation}
x_M[k] = \begin{cases}
x_N[k] & \mathrm{for} \; k=0,1,\dots,N-1 \\
0 & \mathrm{for} \; k=N,N+1,\dots,M-1
\end{cases}
\end{equation}
Appending zeros does not change the contents of the signal itself. However, the DFT $X_M[\mu]$ of $x_M[k]$ has now a decreased distance between neighboring frequencies $\Delta f = \frac{f_s}{M}$.
The question arises what influence zero-padding has on the spectrum and if it can enhance spectral analysis. On first sight it seems that the frequency resolution is higher, however do we get more information on the signal? In order to discuss this, a short numerical example is evaluated followed by a derivation of the mathematical relations between the spectrum $X_M[k]$ with zero-padding and $X_N[k]$ without zero-padding.
#### Example
The following example computes and plots the magnitude spectra $|X[\mu]|$ of a truncated complex exponential signal $x_N[k] = \mathrm{e}^{\,\mathrm{j}\,\Omega_0\,k} \cdot \text{rect}_N[k]$ and its zero-padded version $x_M[k]$.
```python
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
N = 16 # length of the signal
M = 32 # length of zero-padded signal
Om0 = 5.33*(2*np.pi/N) # frequency of exponential signal
# DFT of the exponential signal
xN = np.exp(1j*Om0*np.arange(N))
XN = np.fft.fft(xN)
# DFT of the zero-padded exponential signal
xM = np.concatenate((xN, np.zeros(M-N)))
XM = np.fft.fft(xM)
# plot spectra
plt.figure(figsize = (10, 6))
plt.subplot(121)
plt.stem(np.arange(N),np.abs(XN))
plt.title(r'DFT$_{%d}$ of $e^{j \Omega_0 k}$ without zero-padding' %N)
plt.xlabel(r'$\mu$')
plt.ylabel(r'$|X_N[\mu]|$')
plt.axis([0, N, 0, 18])
plt.grid()
plt.subplot(122)
plt.stem(np.arange(M),np.abs(XM))
plt.title(r'DFT$_{%d}$ of $e^{j \Omega_0 k}$ with zero-padding' %M)
plt.xlabel(r'$\mu$')
plt.ylabel(r'$|X_M[\mu]|$')
plt.axis([0, M, 0, 18])
plt.grid()
```
**Exercise**
* Check the two spectra carefully for relations. Are there common coefficients for the case $M = 2 N$?
* Increase the length `M` of the zero-padded signal $x_M[k]$. Can you gain additional information from the spectrum?
Every second coefficient of the zero-padded spectrum (the DFT length has been doubled) coincides with a coefficient of the original spectrum; the coefficients in between have been added. With longer zero-padding, the sampled maximum of the main lobe of the window gets closer to its true maximum.
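As a practical side note, `np.fft.fft` also accepts the transform length as its second argument and then zero-pads the signal internally, so the manual concatenation used above can be written more compactly:
```python
import numpy as np

N, M = 16, 32
Om0 = 5.33*(2*np.pi/N)
xN = np.exp(1j*Om0*np.arange(N))

XM_manual = np.fft.fft(np.concatenate((xN, np.zeros(M-N))))  # explicit zero-padding
XM_direct = np.fft.fft(xN, n=M)                              # zero-padding via the length argument

print(np.allclose(XM_manual, XM_direct))  # True
```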
### Interpolation of the Discrete Fourier Transformation
Lets step back to the discrete-time Fourier transformation (DTFT) of the finite-length signal $x_N[k]$ without zero-padding
\begin{equation}
X_N(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \sum_{k = -\infty}^{\infty} x_N[k] \, \mathrm{e}^{\,-\mathrm{j}\,\Omega\,k} = \sum_{k=0}^{N-1} x_N[k] \,\mathrm{e}^{-\,\mathrm{j}\,\Omega\,k}
\end{equation}
The discrete Fourier transformation (DFT) is derived by sampling $X_N(\mathrm{e}^{\mathrm{j}\,\Omega})$ at $\Omega = \mu \frac{2 \pi}{N}$
\begin{equation}
X_N[\mu] = X_N(\mathrm{e}^{\,\mathrm{j}\, \Omega}) \big\vert_{\Omega = \mu \frac{2 \pi}{N}} = \sum_{k=0}^{N-1} x_N[k] \, \mathrm{e}^{\,-\mathrm{j}\, \mu \frac{2\pi}{N}\,k}
\end{equation}
Since the DFT coefficients $X_N[\mu]$ are sampled equidistantly from the DTFT $X_N(\mathrm{e}^{\,\mathrm{j}\, \Omega})$, we can reconstruct the DTFT of $x_N[k]$ from the DFT coefficients by interpolation. Introduce the inverse DFT of $X_N[\mu]$
\begin{equation}
x_N[k] = \frac{1}{N} \sum_{\mu = 0}^{N-1} X_N[\mu] \; \mathrm{e}^{\,\mathrm{j}\,\frac{2 \pi}{N} \mu \,k}
\end{equation}
into the DTFT
\begin{equation}
X_N(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \sum_{k=0}^{N-1} x_N[k] \; \mathrm{e}^{-\,\mathrm{j}\, \Omega\, k} =
\sum_{\mu=0}^{N-1} X_N[\mu] \cdot \frac{1}{N} \sum_{k=0}^{N-1} \mathrm{e}^{-\mathrm{j}\, k \,(\Omega - \frac{2 \pi}{N} \mu)}
\end{equation}
reveals the relation between $X_N(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ and $X_N[\mu]$. The last sum over $k$ constitutes a [geometric series](https://en.wikipedia.org/wiki/Geometric_series) and can be rearranged to
\begin{equation}
X_N(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \sum_{\mu=0}^{N-1} X_N[\mu] \cdot \frac{1}{N} \cdot \frac{1-\mathrm{e}^{-\mathrm{j}(\Omega-\frac{2\pi}{N}\mu)N}}{1-\mathrm{e}^{-\mathrm{j}(\Omega-\frac{2\pi}{N}\mu)}}
\end{equation}
By factorizing the last fraction to
\begin{equation}
X_N(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \sum_{\mu=0}^{N-1} X_N[\mu] \cdot \frac{1}{N} \cdot \frac{\mathrm{e}^{-\mathrm{j}\frac{(\Omega-\frac{2\pi}{N}\mu)N}{2}}}{\mathrm{e}^{-\mathrm{j}\frac{\Omega-\frac{2\pi}{N}\mu}{2}}} \cdot \frac{\mathrm{e}^{\mathrm{j}\frac{(\Omega-\frac{2\pi}{N}\mu)N}{2}}-\mathrm{e}^{-\mathrm{j}\frac{(\Omega-\frac{2\pi}{N}\mu)N}{2}}}{\mathrm{e}^{\mathrm{j}\frac{\Omega-\frac{2\pi}{N}\mu}{2}}-\mathrm{e}^{-\mathrm{j}\frac{\Omega-\frac{2\pi}{N}\mu}{2}}}
\end{equation}
and making use of [Euler's identity](https://en.wikipedia.org/wiki/Euler%27s_identity) $2\mathrm{j}\cdot\sin(x)=\mathrm{e}^{\mathrm{j} x}-\mathrm{e}^{-\mathrm{j} x}$ this can be simplified to
\begin{equation}
X_N(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \sum_{\mu=0}^{N-1} X_N[\mu] \cdot \mathrm{e}^{-\mathrm{j}\frac{(\Omega-\frac{2\pi}{N}\mu)(N-1)}{2}} \cdot \frac{1}{N} \cdot \frac{\sin(N\frac{\Omega-\frac{2\pi}{N}\mu}{2})}{\sin(\frac{\Omega-\frac{2\pi}{N}\mu}{2})}
\end{equation}
The last fraction can be written in terms of the $N$-th order periodic sinc function (aliased sinc function, [Dirichlet kernel](https://en.wikipedia.org/wiki/Dirichlet_kernel)), which is defined as
\begin{equation}
\text{psinc}_N (\Omega) = \frac{1}{N} \frac{\sin(\frac{N}{2} \Omega)}{ \sin(\frac{1}{2} \Omega)}
\end{equation}
According to this definition, the periodic sinc function is not defined at $\Omega = 2 \pi \,n$ for $n \in \mathbb{Z}$. This is resolved by applying [L'Hôpital's rule](https://en.wikipedia.org/wiki/L%27H%C3%B4pital%27s_rule) which results in $\text{psinc}_N (2 \pi \,n) = 1$ for $n \in \mathbb{Z}$.
Using the periodic sinc function, the DTFT $X_N(\mathrm{e}^{\,\mathrm{j}\, \Omega})$ of a finite-length signal $x_N[k]$ can be derived from its DFT $X_N[\mu]$ by
\begin{equation}
X_N(\mathrm{e}^{\,\mathrm{j}\, \Omega}) = \sum_{\mu=0}^{N-1} X_N[\mu] \cdot \mathrm{e}^{-\,\mathrm{j}\, \frac{( \Omega - \frac{2 \pi}{N} \mu ) (N-1)}{2}} \cdot \text{psinc}_N ( \Omega - \frac{2 \pi}{N} \mu )
\end{equation}
#### Example
This example illustrates the
1. periodic sinc function, and
2. interpolation of $X_N(\mathrm{e}^{\,\mathrm{j}\, \Omega})$ from $X_N[\mu]$ for an exponential signal using above relation.
```python
N = 16 # order of periodic sinc function
M = 1024 # number of frequency points
Om = np.linspace(-np.pi, np.pi, M)
# definition of periodic sinc function
def psinc(x, N):
x = np.asanyarray(x)
y = np.where(x == 0, 1.0e-20, x)
return 1/N * np.sin(N/2*y)/np.sin(1/2*y)
# plot psinc
plt.figure(figsize = (10, 8))
plt.plot(Om, psinc(Om, 16))
plt.xlabel(r'$\Omega$')
plt.ylabel(r'$\mathrm{psinc}_N (\Omega)$')
plt.grid()
```
```python
N = 16 # length of the signal
M = 1024 # number of frequency points for DTFT
Om0 = 5.33*(2*np.pi/N) # frequency of exponential signal
# DFT of the exponential signal
xN = np.exp(1j*Om0*np.arange(N))
XN = np.fft.fft(xN)
# interpolation of DTFT from DFT coefficients
Xi = np.asarray(np.zeros(M), dtype=complex)
for mu in np.arange(M):
Omd = 2*np.pi/M*mu-2*np.pi*np.arange(N)/N
interpolator = psinc(Omd, N) * np.exp(-1j*Omd*(N-1)/2)
Xi[mu] = np.sum(XN * interpolator)
# plot spectra
plt.figure(figsize = (10, 8))
ax1 = plt.gca()
plt.plot(np.arange(M)*2*np.pi/M, abs(Xi), 'r', label=r'$|X_N(e^{j \Omega})|$')
plt.stem(np.arange(N)*2*np.pi/N, abs(XN), basefmt = ' ', label=r'$|X_N[\mu]|$')
plt.title(r'DFT $X_N[\mu]$ and interpolated DTFT $X_N(e^{j \Omega})$', y=1.08)
plt.ylim([-0.5, N+2]);
plt.legend()
ax1.set_xlabel(r'$\Omega$')
ax1.set_xlim([0, 2*np.pi])
ax1.grid()
ax2 = ax1.twiny()
ax2.set_xlim([0, N])
ax2.set_xlabel(r'$\mu$', color='C0')
ax2.tick_params('x', colors='C0')
```
### Relation between Discrete Fourier Transformations with and without Zero-Padding
It was already outlined above that the DFT is related to the DTFT by sampling. Hence, the DFT $X_M[\mu]$ is given by sampling the DTFT $X_M(\mathrm{e}^{\mathrm{j}\, \Omega})$ at $\Omega = \frac{2 \pi}{M} \mu$. Since the zero-padded signal $x_M[k]$ differs from $x_N[k]$ only with respect to the additional zeros, the DTFTs of both are equal
\begin{equation}
X_M(\mathrm{e}^{\mathrm{j}\, \Omega}) = X_N(\mathrm{e}^{\mathrm{j}\, \Omega})
\end{equation}
The desired relation between the DFTs $X_N[\mu]$ and $X_M[\mu]$ of the signal $x_N[k]$ and its zero-padded version $x_M[k]$ can be found by sampling the interpolated DTFT $X_N(\mathrm{e}^{\mathrm{j}\, \Omega})$ at $\Omega = \frac{2 \pi}{M} \mu$
\begin{equation}
X_M[\mu] = \sum_{\eta=0}^{N-1} X_N[\eta] \cdot \mathrm{e}^{\,-\mathrm{j}\, \frac{( \frac{2 \pi}{M} \mu - \frac{2 \pi}{N} \eta ) (N-1)}{2}} \cdot \text{psinc}_N \left( \frac{2 \pi}{M} \mu - \frac{2 \pi}{N} \eta \right)
\end{equation}
for $\mu = 0, 1, \dots, M-1$.
Above equation relates the spectrum $X_N[\mu]$ of the original signal $x_N[k]$ to the spectrum $X_M[\mu]$ of the zero-padded signal $x_M[k]$. It essentially constitutes a bandlimited interpolation of the coefficients $X_N[\mu]$.
All spectral information of a signal of finite length $N$ is already contained in its spectrum derived from a DFT of length $N$. By applying zero-padding and a longer DFT, the frequency resolution is only virtually increased. The additional coefficients are related to the original ones by bandlimited interpolation. In general, zero-padding does not bring additional insights in spectral analysis. It may bring a benefit in special applications, for instance when estimating the frequency of an isolated harmonic signal from its spectrum. This is illustrated in the following example.
Zero-padding is also used to make a circular convolution equivalent to a linear convolution. However, there is a different reasoning behind this. Details are discussed in a [later section](../nonrecursive_filters/fast_convolution.ipynb#Linear-Convolution-by-Periodic-Convolution).
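The following short sketch illustrates that remark: after zero-padding both sequences to length $N+M-1$, the periodic (circular) convolution computed via the DFT coincides with the linear convolution.
```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([1.0, -1.0, 0.5])
L = len(x) + len(h) - 1     # length needed for the circular convolution to equal the linear one

linear = np.convolve(x, h)
circular_padded = np.real(np.fft.ifft(np.fft.fft(x, L) * np.fft.fft(h, L)))

print(np.allclose(linear, circular_padded))  # True
```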
#### Example
The following example shows that the coefficients $X_M[\mu]$ of the spectrum of the zero-padded signal $x_M[k]$ can be derived by interpolation from the spectrum $X_N[\mu]$.
```python
N = 16 # length of the signal
M = 32 # number of points for interpolated DFT
Om0 = 5.33*(2*np.pi/N) # frequency of exponential signal
# periodic sinc function
def psinc(x, N):
x = np.asanyarray(x)
y = np.where(x == 0, 1.0e-20, x)
return 1/N * np.sin(N/2*y)/np.sin(1/2*y)
# DFT of the exponential signal
xN = np.exp(1j*Om0*np.arange(N))
XN = np.fft.fft(xN)
# interpolation of DFT coefficients
XM = np.asarray(np.zeros(M), dtype=complex)
for mu in np.arange(M):
Omd = 2*np.pi/M*mu-2*np.pi*np.arange(N)/N
interpolator = psinc(Omd, N) * np.exp(-1j*Omd*(N-1)/2)
XM[mu] = np.sum(XN * interpolator)
# plot spectra
plt.figure(figsize = (10, 6))
plt.subplot(121)
plt.stem(np.arange(N),np.abs(XN))
plt.title(r'DFT of $e^{j \Omega_0 k}$ without zero-padding')
plt.xlabel(r'$\mu$')
plt.ylabel(r'$|X_N[\mu]|$')
plt.axis([0, N, 0, 18])
plt.grid()
plt.subplot(122)
plt.stem(np.arange(M),np.abs(XM))
plt.title(r'Interpolated spectrum')
plt.xlabel(r'$\mu$')
plt.ylabel(r'$|X_M[\mu]|$')
plt.axis([0, M, 0, 18])
plt.grid()
```
**Exercise**
* Compare the interpolated spectrum to the spectrum with zero padding from the first example.
* Estimate the frequency $\Omega_0$ of the exponential signal from the interpolated spectrum. How could you further increase the accuracy of your estimate?
The interpolated spectrum is the same as the spectrum with zero padding from the first example. The estimated frequency from the interpolated spectrum is $\Omega_0=\frac{2\pi}{M}\mu=\frac{2\pi}{32}\cdot11$. A better estimate can be obtained by increasing the number of points for the interpolated DFT or by further zero-padding of the time domain signal.
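A small sketch of such an estimate: heavily zero-pad the signal, locate the magnitude maximum of the interpolated spectrum, and convert the bin index to a normalized frequency. The padding length is an arbitrary choice for this illustration.
```python
import numpy as np

N = 16
Om0 = 5.33*(2*np.pi/N)                 # true frequency
xN = np.exp(1j*Om0*np.arange(N))

M = 4096                               # long zero-padded DFT for a fine frequency grid
XM = np.fft.fft(xN, n=M)
mu_max = np.argmax(np.abs(XM[:M//2]))  # search the lower half of the spectrum
Om_est = 2*np.pi*mu_max/M

print(Om0, Om_est)                     # the estimate approaches the true frequency
```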
**Copyright**
This notebook is provided as [Open Educational Resource](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Sascha Spors, Digital Signal Processing - Lecture notes featuring computational examples, 2016-2017*.
*Source: spectral_analysis_deterministic_signals/zero_padding.ipynb (repo: ganlubbq/digital-signal-processing-lecture; license: MIT)*
# Homework 5
## Due Date: Tuesday, October 3rd at 11:59 PM
# Problem 1
We discussed documentation and testing in lecture and also briefly touched on code coverage. You must write tests for your code for your final project (and in life). There is a nice way to automate the testing process called continuous integration (CI).
This problem will walk you through the basics of CI and show you how to get up and running with some CI software.
### Continuous Integration
The idea behind continuous integration is to automate away the testing of your code.
We will be using it for our projects.
The basic workflow goes something like this:
1. You work on your part of the code in your own branch or fork
2. On every commit you make and push to GitHub, your code is automatically tested on a fresh machine on Travis CI. This ensures that there are no specific dependencies on the structure of your machine that your code needs to run and also ensures that your changes are sane
3. Now you submit a pull request to `master` in the main repo (the one you're hoping to contribute to). The repo manager creates a branch off `master`.
4. This branch is also set to run tests on Travis. If all tests pass, then the pull request is accepted and your code becomes part of master.
We use GitHub to integrate our roots library with Travis CI and Coveralls. Note that this is not the only workflow people use. Google git..github..workflow and feel free to choose another one for your group.
### Part 1: Create a repo
Create a public GitHub repo called `cs207test` and clone it to your local machine.
**Note:** No need to do this in Jupyter.
### Part 2: Create a roots library
Use the example from lecture 7 to create a file called `roots.py`, which contains the `quad_roots` and `linear_roots` functions (along with their documentation).
Also create a file called `test_roots.py`, which contains the tests from lecture.
All of these files should be in your newly created `cs207test` repo. **Don't push yet!!!**
```python
%%file cs207test/roots.py
def linear_roots(a=1.0, b=0.0):
"""Returns the roots of a linear equation: ax+ b = 0.
INPUTS
=======
a: float, optional, default value is 1
Coefficient of linear term
b: float, optional, default value is 0
Coefficient of constant term
RETURNS
========
roots: 1-tuple of real floats
Has the form (root) unless a = 0
in which case a ValueError exception is raised
EXAMPLES
=========
>>> linear_roots(1.0, 2.0)
-2.0
"""
if a == 0:
raise ValueError("The linear coefficient is zero. This is not a linear equation.")
else:
return ((-b / a))
def quad_roots(a=1.0, b=2.0, c=0.0):
"""Returns the roots of a quadratic equation: ax^2 + bx + c = 0.
INPUTS
=======
a: float, optional, default value is 1
Coefficient of quadratic term
b: float, optional, default value is 2
Coefficient of linear term
c: float, optional, default value is 0
Constant term
RETURNS
========
roots: 2-tuple of complex floats
Has the form (root1, root2) unless a = 0
in which case a ValueError exception is raised
EXAMPLES
=========
>>> quad_roots(1.0, 1.0, -12.0)
((3+0j), (-4+0j))
"""
import cmath # Can return complex numbers from square roots
if a == 0:
raise ValueError("The quadratic coefficient is zero. This is not a quadratic equation.")
else:
sqrtdisc = cmath.sqrt(b * b - 4.0 * a * c)
r1 = -b + sqrtdisc
r2 = -b - sqrtdisc
return (r1 / 2.0 / a, r2 / 2.0 / a)
```
```python
%%file cs207test/test_roots.py
import roots
def test_quadroots_result():
assert roots.quad_roots(1.0, 1.0, -12.0) == ((3+0j), (-4+0j))
def test_quadroots_types():
try:
roots.quad_roots("", "green", "hi")
except TypeError as err:
assert(type(err) == TypeError)
def test_quadroots_zerocoeff():
try:
roots.quad_roots(a=0.0)
except ValueError as err:
assert(type(err) == ValueError)
def test_linearoots_result():
assert roots.linear_roots(2.0, -3.0) == 1.5
def test_linearroots_types():
try:
roots.linear_roots("ocean", 6.0)
except TypeError as err:
assert(type(err) == TypeError)
def test_linearroots_zerocoeff():
try:
roots.linear_roots(a=0.0)
except ValueError as err:
assert(type(err) == ValueError)
```
### Part 3: Create an account on Travis CI and Start Building
#### Part A:
Create an account on Travis CI and set your `cs207test` repo up for continuous integration once this repo can be seen on Travis.
#### Part B:
Create an instruction to Travis to make sure that
1. python is installed
2. its python 3.5
3. pytest is installed
The file should be called `.travis.yml` and should have the contents:
```yml
language: python
python:
- "3.5"
before_install:
- pip install pytest pytest-cov
script:
- pytest
```
You should also create a configuration file called `setup.cfg`:
```cfg
[tool:pytest]
addopts = --doctest-modules --cov-report term-missing --cov roots
```
#### Part C:
Push the new changes to your `cs207test` repo.
At this point you should be able to see your build on Travis and if and how your tests pass.
```python
%%file cs207test/.travis.yml
language: python
python:
- "3.5"
before_install:
- pip install pytest pytest-cov
script:
- pytest
```
```python
%%file cs207test/setup.cfg
[tool:pytest]
addopts = --doctest-modules --cov-report term-missing --cov roots
```
### Part 4: Coveralls Integration
In class, we also discussed code coverage. Just like Travis CI runs tests automatically for you, Coveralls automatically checks your code coverage. One minor drawback of Coveralls is that it can only work with public GitHub accounts. However, this isn't too big of a problem since your projects will be public.
#### Part A:
Create an account on [`Coveralls`](https://coveralls.zendesk.com/hc/en-us), connect your GitHub, and turn Coveralls integration on.
#### Part B:
Update your `.travis.yml` file as follows:
```yml
language: python
python:
- "3.5"
before_install:
- pip install pytest pytest-cov
- pip install coveralls
script:
- py.test
after_success:
- coveralls
```
Be sure to push the latest changes to your new repo.
```python
%%file cs207test/.travis.yml
language: python
python:
- "3.5"
before_install:
- pip install pytest pytest-cov
- pip install coveralls
script:
- py.test
after_success:
- coveralls
```
### Part 5: Update README.md in repo
You can have your GitHub repo reflect the build status on Travis CI and the code coverage status from Coveralls. To do this, you should modify the `README.md` file in your repo to include some badges. Put the following at the top of your `README.md` file:
```
[](https://travis-ci.org/dsondak/cs207testing.svg?branch=master)
[](https://coveralls.io/github/dsondak/cs207testing?branch=master)
```
Of course, you need to make sure that the links are to your repo and not mine. You can find embed code on the Coveralls and Travis CI sites.
---
# Problem 2
Write a Python module for reaction rate coefficients. Your module should include functions for constant reaction rate coefficients, Arrhenius reaction rate coefficients, and modified Arrhenius reaction rate coefficients. Here are their mathematical forms:
\begin{align}
&k_{\textrm{const}} = k \tag{constant} \\
&k_{\textrm{arr}} = A \exp\left(-\frac{E}{RT}\right) \tag{Arrhenius} \\
&k_{\textrm{mod arr}} = A T^{b} \exp\left(-\frac{E}{RT}\right) \tag{Modified Arrhenius}
\end{align}
Test your functions with the following parameters: $A = 10^7$, $b=0.5$, $E=10^3$. Use $T=10^2$.
A few additional comments / suggestions:
* The Arrhenius prefactor $A$ is strictly positive
* The modified Arrhenius parameter $b$ must be real
* $R = 8.314$ is the ideal gas constant. It should never be changed (except to convert units)
* The temperature $T$ must be positive (assuming a Kelvin scale)
* You may assume that units are consistent
* Document each function!
* You might want to check for overflows and underflows
**Recall:** A Python module is a `.py` file which is not part of the main execution script. The module contains several functions which may be related to each other (like in this problem). Your module will be importable via the execution script. For example, suppose you have called your module `reaction_coeffs.py` and your execution script `kinetics.py`. Inside of `kinetics.py` you will write something like:
```python
import reaction_coeffs
# Some code to do some things
# :
# :
# :
# Time to use a reaction rate coefficient:
reaction_coeffs.const() # Need appropriate arguments, etc
# Continue on...
# :
# :
# :
```
Be sure to include your module in the same directory as your execution script.
```python
%%file reaction_coeffs.py
def const(k=1.0):
"""Returns the constant coefficient k.
INPUTS
=======
a: float, optional, default value is 1.0
Constant reaction coefficient
RETURNS
========
Constant k
unless k is not an int or float
in which case a TypeError exception is raised
EXAMPLES
=========
>>> const(1.0)
1.0
"""
if type(k) != int and type(k) != float:
raise TypeError("Input needs to be a number!")
else:
return k
def arr(A=10^7,T=10^2,E=10^3,R=8.314):
"""Returns the constant coefficient k.
INPUTS
=======
A: float, optional, default value is 10^7
Arrhenius prefactor A, needs to be positive
T: float, optional, default value is 10^2
temperature, must be positive
R: float, fixed value 8.314, should not change
RETURNS
========
Constant k
unless any of A,T, and E are not numbers
in which case a TypeError exception is raised
EXAMPLES
=========
>>> arr(10^7,10^2,10^2)
11.526748719357375
"""
if A < 0:
raise ValueError("The Arrhenius constant must be strictly positive")
elif T < 0:
raise ValueError("Temperature must be positive")
elif R != 8.314:
raise ValueError("Unless in a stargate, the universal gas constant is 8.314")
elif type(A) != int and type(A) != float:
raise TypeError("parameters need to be either type int or type float")
    elif type(T) != int and type(T) != float:
raise TypeError("parameters need to be either type int or type float")
elif type(E) != int and type(E) != float:
raise TypeError("parameters need to be either type int or type float")
else:
import numpy as np
k=(A)*np.exp(-E/(R*T))
return k
def mod_arr(A=10^7,b=0.5,T=10^2,E=10^3,R=8.314):
"""Returns the constant coefficient k.
INPUTS
=======
A: float, optional, default value is 10^7
Arrhenius prefactor A, needs to be positive
T: float, optional, default value is 10^2
temperature, must be positive
R: float, fixed value 8.314, should not change
b: float, optional, default value is 0.5, must be a real number
RETURNS
========
Constant k
unless any of A,T,b and E are not numbers
in which case a TypeError exception is raised
EXAMPLES
=========
>>> mod_arr()
32.116059468138779
"""
if A < 0:
raise ValueError("The Arrhenius constant must be strictly positive")
elif T < 0:
raise ValueError("Temperature must be positive")
elif isinstance(b,complex):
raise ValueError("b must be a real number")
elif R != 8.314:
raise ValueError("Unless in a stargate, the universal gas constant is 8.314")
elif type(A) != int and type(A) != float:
raise TypeError("parameters need to be either type int or type float")
    elif type(T) != int and type(T) != float:
raise TypeError("parameters need to be either type int or type float")
elif type(E) != int and type(E) != float:
raise TypeError("parameters need to be either type int or type float")
else:
import numpy as np
k=(A)*(T**b)*np.exp(-E/(R*T))
return k
```
Overwriting reaction_coeffs.py
```python
import reaction_coeffs
print(reaction_coeffs.const())
print(reaction_coeffs.arr())
print(reaction_coeffs.mod_arr())
```
1.0
11.3547417175
32.1160594681
```python
!pytest
```
============================= test session starts ==============================
platform darwin -- Python 3.6.1, pytest-3.0.7, py-1.4.33, pluggy-0.4.0
rootdir: /Users/filipmichalsky/cs207_filip_michalsky/homeworks/HW5, inifile:
collected 13 items

test_multi_progress_r.py ..
test_multi_reaction_r.py ...
test_single_progress_r.py ..
cs207test/test_roots.py ......

========================== 13 passed in 0.24 seconds ===========================
---
# Problem 3
Write a function that returns the **progress rate** for a reaction of the following form:
\begin{align}
\nu_{A} A + \nu_{B} B \longrightarrow \nu_{C} C.
\end{align}
Order your concentration vector so that
\begin{align}
\mathbf{x} =
\begin{bmatrix}
\left[A\right] \\
\left[B\right] \\
\left[C\right]
\end{bmatrix}
\end{align}
Test your function with
\begin{align}
\nu_{i} =
\begin{bmatrix}
2.0 \\
1.0 \\
0.0
\end{bmatrix}
\qquad
\mathbf{x} =
\begin{bmatrix}
1.0 \\
2.0 \\
3.0
\end{bmatrix}
\qquad
k = 10.
\end{align}
You must document your function and write some tests in addition to the one suggested. You choose the additional tests, but you must have at least one doctest in addition to a suite of unit tests.
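For reference, plugging the suggested test values into the progress rate formula gives
\begin{align}
\omega = k \, x_A^{\nu_A} x_B^{\nu_B} x_C^{\nu_C} = 10 \cdot 1.0^{2} \cdot 2.0^{1} \cdot 3.0^{0} = 20,
\end{align}
which is the value the tests below check for.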
```python
import numpy as np
```
```python
%%file single_progress_r.py
def progress_rate(v_i,x,k):
"""Returns the progress rate of a reaction
INPUTS
=======
v_i: vector of stochiometric coefficients
x: vector of concentrations (first dimension needs to match v_i),second dimension is 1
k: float, reaction rate coefficient
RETURNS
========
Progress rate of the reaction
k*[A]^v_i[a] * [B]^v_i[b] * [C]^v_i[c]
unless first dimension of x does not match first dimension of v_i
in which case a ValueError exception is raised
EXAMPLES
=========
>>> import numpy as np
>>> progress_rate(np.array([2,1,0]),np.array([1,2,3]),10)
20
"""
import numpy as np
#check that the first dimension matches
if x.shape[0] != v_i.shape[0]:
raise ValueError("First dimension of x and v_i need to match")
else:
product_X_to_vi=1
#loop through the rows to multiply each x by its stochiometric coeffs
for i,j in zip(v_i,x):
product_X_to_vi=product_X_to_vi*(j**i)
progress_rate=k*product_X_to_vi
return progress_rate
if __name__ == "__main__":
import doctest
doctest.testmod()
```
Overwriting single_progress_r.py
```python
import single_progress_r
import numpy as np
single_progress_r.progress_rate(np.array([2,1,0]),np.array([1,2,3]),10)
```
20
```python
import single_progress_r
import numpy as np
single_progress_r.progress_rate(np.array([2,1]),np.array([[2,1],[2,1],[2,1]]),10)
```
```python
%%file test_single_progress_r.py
import single_progress_r
import numpy as np
#!pytest --doctest-modules
def test_progress_rate_result():
assert single_progress_r.progress_rate(np.array([2,1,0]),np.array([1,2,3]),10) == 20
def test_progress_rate_shapes():
try:
single_progress_r.progress_rate(np.array([2,1]),np.array([[2,1],[2,1],[2,1]]),10)
except ValueError as err:
assert(type(err) == ValueError)
#def test_arr_result():
# assert reaction_coeffs.arr(A=10^7,E=10^3,T=10^2) == XX
```
Overwriting test_single_progress_r.py
```bash
%%bash
python -m pytest -v test_single_progress_r.py
```
============================= test session starts ==============================
platform darwin -- Python 3.6.1, pytest-3.0.7, py-1.4.33, pluggy-0.4.0 -- /Users/filipmichalsky/anaconda/bin/python
cachedir: .cache
rootdir: /Users/filipmichalsky/cs207_filip_michalsky/homeworks/HW5, inifile:
collecting ... collected 2 items
test_single_progress_r.py::test_progress_rate_result PASSED
test_single_progress_r.py::test_progress_rate_shapes PASSED
=========================== 2 passed in 0.19 seconds ===========================
```bash
%%bash
python multi_progress_r.py -v
#doctest of created file
```
Trying:
import numpy as np
Expecting nothing
ok
Trying:
progress_rate_multi(np.array([[1.0,2.0],[2.0,0.0],[0.0,2.0]]),np.array([[0.0,0.0],[0.0,1.0],[2.0,1.0]]),np.array([1.0,2.0,1.0]),10)[0]
Expecting:
40.0
ok
Trying:
progress_rate_multi(np.array([[1.0,2.0],[2.0,0.0],[0.0,2.0]]),np.array([[0.0,0.0],[0.0,1.0],[2.0,1.0]]),np.array([1.0,2.0,1.0]),10)[1]
Expecting:
10.0
ok
1 items had no tests:
__main__
1 items passed all tests:
3 tests in __main__.progress_rate_multi
3 tests in 2 items.
3 passed and 0 failed.
Test passed.
---
# Problem 4
Write a function that returns the **progress rate** for a system of reactions of the following form:
\begin{align}
\nu_{11}^{\prime} A + \nu_{21}^{\prime} B \longrightarrow \nu_{31}^{\prime\prime} C \\
\nu_{12}^{\prime} A + \nu_{32}^{\prime} C \longrightarrow \nu_{22}^{\prime\prime} B + \nu_{32}^{\prime\prime} C
\end{align}
Note that $\nu_{ij}^{\prime}$ represents the stoichiometric coefficient of reactant $i$ in reaction $j$ and $\nu_{ij}^{\prime\prime}$ represents the stoichiometric coefficient of product $i$ in reaction $j$. Therefore, in this convention, I have ordered my vector of concentrations as
\begin{align}
\mathbf{x} =
\begin{bmatrix}
\left[A\right] \\
\left[B\right] \\
\left[C\right]
\end{bmatrix}.
\end{align}
Test your function with
\begin{align}
\nu_{ij}^{\prime} =
\begin{bmatrix}
1.0 & 2.0 \\
2.0 & 0.0 \\
0.0 & 2.0
\end{bmatrix}
\qquad
\nu_{ij}^{\prime\prime} =
\begin{bmatrix}
0.0 & 0.0 \\
0.0 & 1.0 \\
2.0 & 1.0
\end{bmatrix}
\qquad
\mathbf{x} =
\begin{bmatrix}
1.0 \\
2.0 \\
1.0
\end{bmatrix}
\qquad
k = 10.
\end{align}
You must document your function and write some tests in addition to the one suggested. You choose the additional tests, but you must have at least one doctest in addition to a suite of unit tests.
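For reference, the two progress rates for these test values are
\begin{align}
\omega_1 &= k \, x_A^{1} x_B^{2} x_C^{0} = 10 \cdot 1.0 \cdot 2.0^{2} = 40, \\
\omega_2 &= k \, x_A^{2} x_B^{0} x_C^{2} = 10 \cdot 1.0^{2} \cdot 1.0^{2} = 10,
\end{align}
which match the doctest values used below.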
Please note that I have already generalized the progress rate formula; the progress rate itself uses only the reactant coefficients, and for the reaction rates later on I only need to subtract the reactant coefficients $\nu_{ij}^{\prime}$ from the product coefficients $\nu_{ij}^{\prime\prime}$.
```python
%%file multi_progress_r.py
import numpy as np
def progress_rate_multi(v_ij_reac,v_ij_proc,x,k):
"""Returns the progress rate of a set of reactions
INPUTS
=======
v_ij_reac: vector of stochiometric coefficients of reactants
v_ij_proc: vector of stochiometric coefficients of products
x: vector of concentrations (first dimension needs to match v_i),second dimension is 1
k: float
RETURNS
========
Progress rates of the reactions (an array with one entry per reaction)
according to the formula k*[A]^v_i[a] * [B]^v_i[b] * [C]^v_i[c],
where the v_i are the reactant stoichiometric coefficients.
A ValueError is raised if the shapes of v_ij_reac and v_ij_proc differ,
or if the first dimension of x does not match the first dimension of the coefficient arrays.
EXAMPLES
=========
>>> import numpy as np
>>> progress_rate_multi(np.array([[1.0,2.0],[2.0,0.0],[0.0,2.0]]),np.array([[0.0,0.0],[0.0,1.0],[2.0,1.0]]),np.array([1.0,2.0,1.0]),10)[0]
40.0
>>> progress_rate_multi(np.array([[1.0,2.0],[2.0,0.0],[0.0,2.0]]),np.array([[0.0,0.0],[0.0,1.0],[2.0,1.0]]),np.array([1.0,2.0,1.0]),10)[1]
10.0
"""
import numpy as np
#calculate the effective stochiometric coefficients on the reactants side
#check length of product coeffs and reac coeffs is the same
if v_ij_proc.shape != v_ij_reac.shape:
raise ValueError("Dimensions of stochiometric coeffs need to match")
#check that the first dimension matches
elif x.shape[0] != v_ij_proc.shape[0]:
raise ValueError("First dimension of x and v_i need to match")
else:
v_ij = v_ij_reac
product_X_to_vij = 1
#loop through the rows to multiply each x by its stochiometric coeffs
for i,j in zip(v_ij,x):
            product_X_to_vij=product_X_to_vij*(j**i)
progress_rate_multi=k*product_X_to_vij
return progress_rate_multi
if __name__ == "__main__":
import doctest
doctest.testmod()
```
Overwriting multi_progress_r.py
```python
import multi_progress_r
multi_progress_r.progress_rate_multi(np.array([[1.0,2.0],[2.0,0.0],[0.0,2.0]]),np.array([[0.0,0.0],[0.0,1.0],[2.0,1.0]]),np.array([1.0,2.0,1.0]),10)
```
array([ 40., 10.])
```python
a=np.array([0.25,2.0])
b=np.array([1.,1.])
a*b
```
array([ 0.25, 2. ])
```python
import multi_progress_r
v_ij_reac = np.array([[1.0,2.0],[2.0,0.0],[0.0,2.0]])
v_ij_proc = np.array([[0.0,0.0],[0.0,1.0],[2.0,1.0]])
x = np.array([1.0,2.0,1.0])
k = 10
assert multi_progress_r.progress_rate_multi(v_ij_reac , v_ij_proc, x , k )[0] == 40
assert multi_progress_r.progress_rate_multi(v_ij_reac , v_ij_proc, x , k )[1] == 10
```
```python
#sample fail of the function based on wrong input
import multi_progress_r
multi_progress_r.progress_rate_multi(np.array([2,1,1]),np.array([2,1]),np.array([[2,1],[2,1],[2,1]]),10)
```
```python
%%file test_multi_progress_r.py
import multi_progress_r
import numpy as np
import doctest
doctest.testmod(verbose=True)
def progress_rate_result():
#test the result og the function
v_ij_reac = np.array([[1.0,2.0],[2.0,0.0],[0.0,2.0]])
v_ij_proc = np.array([[0.0,0.0],[0.0,1.0],[2.0,1.0]])
x = np.array([1.0,2.0,1.0])
k = 10
assert multi_progress_r.progress_rate_multi(v_ij_reac , v_ij_proc, x , k )[0] == 40.0
    assert multi_progress_r.progress_rate_multi(v_ij_reac , v_ij_proc, x , k )[1] == 10.0
def test_progress_rate_shapes_with_x():
#test whether shape of coeffs v_i matches shape of concentrations x
try:
multi_progress_r.progress_rate_multi(np.array([2,1]),np.array([2,1]),np.array([[2,1],[2,1],[2,1]]),10)
except ValueError as err:
assert(type(err) == ValueError)
def test_progress_rate_shapes_coeffs():
#test whether the shape of v_ij's matches
try:
multi_progress_r.progress_rate_multi(np.array([2,1,1]),np.array([2,1]),np.array([[2,1],[2,1],[2,1]]),10)
except ValueError as err:
assert(type(err) == ValueError)
```
Overwriting test_multi_progress_r.py
```bash
%%bash
python multi_progress_r.py -v
```
Trying:
import numpy as np
Expecting nothing
ok
Trying:
progress_rate_multi(np.array([[1.0,2.0],[2.0,0.0],[0.0,2.0]]),np.array([[0.0,0.0],[0.0,1.0],[2.0,1.0]]),np.array([1.0,2.0,1.0]),10)[0]
Expecting:
40.0
ok
Trying:
progress_rate_multi(np.array([[1.0,2.0],[2.0,0.0],[0.0,2.0]]),np.array([[0.0,0.0],[0.0,1.0],[2.0,1.0]]),np.array([1.0,2.0,1.0]),10)[1]
Expecting:
10.0
ok
1 items had no tests:
__main__
1 items passed all tests:
3 tests in __main__.progress_rate_multi
3 tests in 2 items.
3 passed and 0 failed.
Test passed.
```python
!pytest
```
============================= test session starts ==============================
platform darwin -- Python 3.6.1, pytest-3.0.7, py-1.4.33, pluggy-0.4.0
rootdir: /Users/filipmichalsky/cs207_filip_michalsky/homeworks/HW5, inifile:
collected 13 items

test_multi_progress_r.py ..
test_multi_reaction_r.py ...
test_single_progress_r.py ..
cs207test/test_roots.py ......

========================== 13 passed in 0.23 seconds ===========================
---
# Problem 5
Write a function that returns the **reaction rate** of a system of irreversible reactions of the form:
\begin{align}
\nu_{11}^{\prime} A + \nu_{21}^{\prime} B &\longrightarrow \nu_{31}^{\prime\prime} C \\
\nu_{32}^{\prime} C &\longrightarrow \nu_{12}^{\prime\prime} A + \nu_{22}^{\prime\prime} B
\end{align}
Once again $\nu_{ij}^{\prime}$ represents the stoichiometric coefficient of reactant $i$ in reaction $j$ and $\nu_{ij}^{\prime\prime}$ represents the stoichiometric coefficient of product $i$ in reaction $j$. In this convention, I have ordered my vector of concentrations as
\begin{align}
\mathbf{x} =
\begin{bmatrix}
\left[A\right] \\
\left[B\right] \\
\left[C\right]
\end{bmatrix}
\end{align}
Test your function with
\begin{align}
\nu_{ij}^{\prime} =
\begin{bmatrix}
1.0 & 0.0 \\
2.0 & 0.0 \\
0.0 & 2.0
\end{bmatrix}
\qquad
\nu_{ij}^{\prime\prime} =
\begin{bmatrix}
0.0 & 1.0 \\
0.0 & 2.0 \\
1.0 & 0.0
\end{bmatrix}
\qquad
\mathbf{x} =
\begin{bmatrix}
1.0 \\
2.0 \\
1.0
\end{bmatrix}
\qquad
k = 10.
\end{align}
You must document your function and write some tests in addition to the one suggested. You choose the additional tests, but you must have at least one doctest in addition to a suite of unit tests.
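For reference, with these test values the progress rates are $\omega_1 = 10 \cdot 1.0 \cdot 2.0^{2} = 40$ and $\omega_2 = 10 \cdot 1.0^{2} = 10$, so the reaction rates follow as
\begin{align}
\frac{d[A]}{dt} &= -\omega_1 + \omega_2 = -30, \\
\frac{d[B]}{dt} &= -2\,\omega_1 + 2\,\omega_2 = -60, \\
\frac{d[C]}{dt} &= \omega_1 - 2\,\omega_2 = 20,
\end{align}
which is what the script below returns.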
```python
#%%file multi_reaction_r.py
import numpy as np
def reaction_rate_multi(v_ij_reac,v_ij_proc,x,k):
"""Returns the rection rate for a set of species
INPUTS
=======
v_ij_reac: vector of stochiometric coefficients of reactants
v_ij_proc: vector of stochiometric coefficients of products
x: vector of concentrations (first dimension needs to match v_i),second dimension is 1
k: float
RETURNS
========
Reaction rates of the species (an array with one entry per species)
computed from the progress rates k*[A]^v_i[a] * [B]^v_i[b] * [C]^v_i[c] and
dXi/dt = sum_j (v_ij_proc - v_ij_reac) * progress rate of reaction j.
A ValueError is raised if the shapes of v_ij_reac and v_ij_proc differ,
or if the first dimension of x does not match the first dimension of the coefficient arrays.
EXAMPLES
=========
>>> import numpy as np
>>> reaction_rate_multi(np.array([[1.0,2.0],[2.0,0.0],[0.0,2.0]]),np.array([[0.0,1.0],[0.0,2.0],[1.0,0.0]]),np.array([1.0,2.0,1.0]),10)[0][0]
-50.0
>>> reaction_rate_multi(np.array([[1.0,2.0],[2.0,0.0],[0.0,2.0]]),np.array([[0.0,1.0],[0.0,2.0],[1.0,0.0]]),np.array([1.0,2.0,1.0]),10)[1][0]
-60.0
>>> reaction_rate_multi(np.array([[1.0,2.0],[2.0,0.0],[0.0,2.0]]),np.array([[0.0,1.0],[0.0,2.0],[1.0,0.0]]),np.array([1.0,2.0,1.0]),10)[2][0]
20.0
"""
import numpy as np
#calculate the effective stochiometric coefficients on the reactants side
#check length of product coeffs and reac coeffs is the same
if v_ij_proc.shape != v_ij_reac.shape:
raise ValueError("Dimensions of stochiometric coeffs need to match")
#check that the first dimension matches
elif x.shape[0] != v_ij_proc.shape[0]:
raise ValueError("First dimension of x and v_i need to match")
else:
v_ij = v_ij_reac
product_X_to_vij = 1
#loop through the rows to multiply each x by its stochiometric coeffs
for i,j in zip(v_ij,x):
product_X_to_vij=product_X_to_vij*(j**i)
product_X_to_vij=product_X_to_vij.reshape(len(product_X_to_vij),1)
progress_rate_multi=k*product_X_to_vij
reaction_rates=np.dot((v_ij_proc-v_ij_reac),progress_rate_multi)
return reaction_rates
if __name__ == "__main__":
import doctest
doctest.testmod()
```
```python
k=np.array([[10],[10],[10]])
w_ij=np.array([[1],[2],[3]])
print(k.shape,w_ij.shape)
```
(3, 1) (3, 1)
```python
#from multi_reaction_r import reaction_rate_multi
import multi_reaction_r
multi_reaction_r.reaction_rate_multi(np.array([[1.0,0.0],[2.0,0.0],[0.0,2.0]]),np.array([[0.0,1.0],[0.0,2.0],[1.0,0.0]]),np.array([1.0,2.0,1.0]),np.array([[10],[10]]))
```
array([[-30.],
[-60.],
[ 20.]])
```bash
%%bash
python multi_reaction_r.py -v
```
Trying:
import numpy as np
Expecting nothing
ok
Trying:
reaction_rate_multi(np.array([[1.0,2.0],[2.0,0.0],[0.0,2.0]]),np.array([[0.0,1.0],[0.0,2.0],[1.0,0.0]]),np.array([1.0,2.0,1.0]),10)[0][0]
Expecting:
-50.0
ok
Trying:
reaction_rate_multi(np.array([[1.0,2.0],[2.0,0.0],[0.0,2.0]]),np.array([[0.0,1.0],[0.0,2.0],[1.0,0.0]]),np.array([1.0,2.0,1.0]),10)[1][0]
Expecting:
-60.0
ok
Trying:
reaction_rate_multi(np.array([[1.0,2.0],[2.0,0.0],[0.0,2.0]]),np.array([[0.0,1.0],[0.0,2.0],[1.0,0.0]]),np.array([1.0,2.0,1.0]),10)[2][0]
Expecting:
20.0
ok
1 items had no tests:
__main__
1 items passed all tests:
4 tests in __main__.reaction_rate_multi
4 tests in 2 items.
4 passed and 0 failed.
Test passed.
```python
%%file test_multi_reaction_r.py
import multi_reaction_r as mr
import numpy as np
#import doctest
#doctest.testmod(verbose=True)
def test_reaction_rate_result():
#test the result og the function
v_ij_reac = np.array([[1.0,2.0],[2.0,0.0],[0.0,2.0]])
v_ij_proc = np.array([[0.0,1.0],[0.0,2.0],[1.0,0.0]])
x = np.array([1.0,2.0,1.0])
k = 10
assert mr.reaction_rate_multi(v_ij_reac , v_ij_proc, x , k )[0][0] == -50.0
assert mr.reaction_rate_multi(v_ij_reac , v_ij_proc, x , k )[1][0] == -60.0
assert mr.reaction_rate_multi(v_ij_reac , v_ij_proc, x , k )[2][0] == 20.0
def test_progress_rate_shapes_with_x():
#test whether shape of coeffs v_i matches shape of concentrations x
try:
mr.reaction_rate_multi(np.array([2,1]),np.array([2,1]),np.array([[2,1],[2,1],[2,1]]),10)
except ValueError as err:
assert(type(err) == ValueError)
def test_progress_rate_shapes_coeffs():
#test whether the shape of v_ij's matches
try:
mr.reaction_rate_multi(np.array([2,1,1]),np.array([2,1]),np.array([[2,1],[2,1],[2,1]]),10)
except ValueError as err:
assert(type(err) == ValueError)
```
Overwriting test_multi_reaction_r.py
```bash
%%bash
python -m pytest -v test_multi_reaction_r.py
```
============================= test session starts ==============================
platform darwin -- Python 3.6.1, pytest-3.0.7, py-1.4.33, pluggy-0.4.0 -- /Users/filipmichalsky/anaconda/bin/python
cachedir: .cache
rootdir: /Users/filipmichalsky/cs207_filip_michalsky/homeworks/HW5, inifile:
collecting ... collected 3 items
test_multi_reaction_r.py::test_reaction_rate_result PASSED
test_multi_reaction_r.py::test_progress_rate_shapes_with_x PASSED
test_multi_reaction_r.py::test_progress_rate_shapes_coeffs PASSED
=========================== 3 passed in 0.22 seconds ===========================
---
# Problem 6
Put parts 3, 4, and 5 in a module called `chemkin`.
Next, pretend you're a client who needs to compute the reaction rates at three different temperatures ($T = \left\{750, 1500, 2500\right\}$) of the following system of irreversible reactions:
\begin{align}
2H_{2} + O_{2} \longrightarrow 2OH + H_{2} \\
OH + HO_{2} \longrightarrow H_{2}O + O_{2} \\
H_{2}O + O_{2} \longrightarrow HO_{2} + OH
\end{align}
The client also happens to know that reaction 1 is a modified Arrhenius reaction with $A_{1} = 10^{8}$, $b_{1} = 0.5$, $E_{1} = 5\times 10^{4}$, reaction 2 has a constant reaction rate parameter $k = 10^{4}$, and reaction 3 is an Arrhenius reaction with $A_{3} = 10^{7}$ and $E_{3} = 10^{4}$.
You should write a script that imports your `chemkin` module and returns the reaction rates of the species at each temperature of interest given the following species concentrations:
\begin{align}
\mathbf{x} =
\begin{bmatrix}
H_{2} \\
O_{2} \\
OH \\
HO_{2} \\
H_{2}O
\end{bmatrix} =
\begin{bmatrix}
2.0 \\
1.0 \\
0.5 \\
1.0 \\
1.0
\end{bmatrix}
\end{align}
You may assume that these are elementary reactions.
```python
%%file chemkin1.py
import numpy as np
def const(k=1.0):
"""Returns the constant coefficient k.
INPUTS
=======
a: float, optional, default value is 1.0
Constant reaction coefficient
RETURNS
========
Constant k
unless k is not an int or float
in which case a TypeError exception is raised
EXAMPLES
=========
>>> const(1.0)
1.0
"""
if type(k) != int and type(k) != float:
raise TypeError("Input needs to be a number!")
else:
return k
def arr(A=10^7,T=10^2,E=10^3,R=8.314):
"""Returns the constant coefficient k.
INPUTS
=======
A: float, optional, default value is 10^7
Arrhenius prefactor A, needs to be positive
T: float, optional, default value is 10^2
temperature, must be positive
R: float, fixed value 8.314, should not change
RETURNS
========
Constant k
unless any of A,T, and E are not numbers
in which case a TypeError exception is raised
EXAMPLES
=========
>>> arr(10^7,10^2,10^2)
11.526748719357375
"""
if A < 0:
raise ValueError("The Arrhenius constant must be strictly positive")
elif T < 0:
raise ValueError("Temperature must be positive")
elif R != 8.314:
raise ValueError("Unless in a stargate, the universal gas constant is 8.314")
elif type(A) != int and type(A) != float:
raise TypeError("parameters need to be either type int or type float")
    elif type(T) != int and type(T) != float:
raise TypeError("parameters need to be either type int or type float")
elif type(E) != int and type(E) != float:
raise TypeError("parameters need to be either type int or type float")
else:
import numpy as np
k=(A)*np.exp(-E/(R*T))
return k
def mod_arr(A=10^7,b=0.5,T=10^2,E=10^3,R=8.314):
"""Returns the constant coefficient k.
INPUTS
=======
A: float, optional, default value is 10^7
Arrhenius prefactor A, needs to be positive
T: float, optional, default value is 10^2
temperature, must be positive
R: float, fixed value 8.314, should not change
b: float, optional, default value is 0.5, must be a real number
RETURNS
========
Constant k
unless any of A,T,b and E are not numbers
in which case a TypeError exception is raised
EXAMPLES
=========
>>> mod_arr()
32.116059468138779
"""
if A < 0:
raise ValueError("The Arrhenius constant must be strictly positive")
elif T < 0:
raise ValueError("Temperature must be positive")
elif isinstance(b,complex):
raise ValueError("b must be a real number")
elif R != 8.314:
raise ValueError("Unless in a stargate, the universal gas constant is 8.314")
elif type(A) != int and type(A) != float:
raise TypeError("parameters need to be either type int or type float")
    elif type(T) != int and type(T) != float:
raise TypeError("parameters need to be either type int or type float")
elif type(E) != int and type(E) != float:
raise TypeError("parameters need to be either type int or type float")
else:
import numpy as np
k=(A)*(T**b)*np.exp(-E/(R*T))
return k
def reaction_rate_multi(v_ij_reac,v_ij_proc,x,k):
"""Returns the rection rate for a set of species
INPUTS
=======
v_ij_reac: vector of stochiometric coefficients of reactants
v_ij_proc: vector of stochiometric coefficients of products
x: vector of concentrations (first dimension needs to match v_i),second dimension is 1
k: float
RETURNS
========
Reaction rates of the species (an array with one entry per species)
computed from the progress rates k*[A]^v_i[a] * [B]^v_i[b] * [C]^v_i[c] and
dXi/dt = sum_j (v_ij_proc - v_ij_reac) * progress rate of reaction j.
A ValueError is raised if the shapes of v_ij_reac and v_ij_proc differ,
or if the first dimension of x does not match the first dimension of the coefficient arrays.
EXAMPLES
=========
>>> import numpy as np
>>> reaction_rate_multi(np.array([[1.0,2.0],[2.0,0.0],[0.0,2.0]]),np.array([[0.0,1.0],[0.0,2.0],[1.0,0.0]]),np.array([1.0,2.0,1.0]),10)[0][0]
-50.0
>>> reaction_rate_multi(np.array([[1.0,2.0],[2.0,0.0],[0.0,2.0]]),np.array([[0.0,1.0],[0.0,2.0],[1.0,0.0]]),np.array([1.0,2.0,1.0]),10)[1][0]
-60.0
>>> reaction_rate_multi(np.array([[1.0,2.0],[2.0,0.0],[0.0,2.0]]),np.array([[0.0,1.0],[0.0,2.0],[1.0,0.0]]),np.array([1.0,2.0,1.0]),10)[2][0]
20.0
"""
import numpy as np
#calculate the effective stochiometric coefficients on the reactants side
#check length of product coeffs and reac coeffs is the same
if v_ij_proc.shape != v_ij_reac.shape:
raise ValueError("Dimensions of stochiometric coeffs need to match")
#check that the first dimension matches
elif x.shape[0] != v_ij_proc.shape[0]:
raise ValueError("First dimension of x and v_i need to match")
else:
v_ij = v_ij_reac
product_X_to_vij = 1
#loop through the rows to multiply each x by its stochiometric coeffs
for i,j in zip(v_ij,x):
product_X_to_vij=product_X_to_vij*(j**i)
product_X_to_vij=product_X_to_vij.reshape(len(product_X_to_vij),1)
progress_rate_multi=k*product_X_to_vij
reaction_rates=np.dot((v_ij_proc-v_ij_reac),progress_rate_multi)
return reaction_rates
if __name__ == "__main__":
import doctest
doctest.testmod()
```
Overwriting chemkin1.py
```bash
%%bash
python chemkin1.py -v
```
Trying:
arr(10^7,10^2,10^2)
Expecting:
11.526748719357375
ok
Trying:
const(1.0)
Expecting:
1.0
ok
Trying:
mod_arr()
Expecting:
32.116059468138779
ok
Trying:
import numpy as np
Expecting nothing
ok
Trying:
reaction_rate_multi(np.array([[1.0,2.0],[2.0,0.0],[0.0,2.0]]),np.array([[0.0,1.0],[0.0,2.0],[1.0,0.0]]),np.array([1.0,2.0,1.0]),10)[0][0]
Expecting:
-50.0
ok
Trying:
reaction_rate_multi(np.array([[1.0,2.0],[2.0,0.0],[0.0,2.0]]),np.array([[0.0,1.0],[0.0,2.0],[1.0,0.0]]),np.array([1.0,2.0,1.0]),10)[1][0]
Expecting:
-60.0
ok
Trying:
reaction_rate_multi(np.array([[1.0,2.0],[2.0,0.0],[0.0,2.0]]),np.array([[0.0,1.0],[0.0,2.0],[1.0,0.0]]),np.array([1.0,2.0,1.0]),10)[2][0]
Expecting:
20.0
ok
1 items had no tests:
__main__
4 items passed all tests:
1 tests in __main__.arr
1 tests in __main__.const
1 tests in __main__.mod_arr
4 tests in __main__.reaction_rate_multi
7 tests in 5 items.
7 passed and 0 failed.
Test passed.
```python
import chemkin1
import numpy as np
vij_reac = np.array([[2.0,0.0,0.0],[1.0,0.0,1.0],[0.0,1.0,0.0],[0.0,1.0,0.0],[0.0,0.0,1.0]])
vij_proc = np.array([[1.0,0.0,0.0],[0.0,1.0,0.0],[2.0,0.0,1.0],[0.0,0.0,1.0],[0.0,1.0,0.0]])
x = np.array([2.0,1.0,0.5,1.0,1.0])
species = np.array([['H2'],['O2'],['OH'],['HO2'],['H2O']])
reaction_rate =[]
T = [750,1500,2500]
for temp in T:
k1= chemkin1.mod_arr(A=10**7,b=0.5,T=temp,E=(5*(10**4)),R=8.314)
k2= chemkin1.const(10**4)
k3= chemkin1.arr(A=10**8,T=temp,E=5*(10**4),R=8.314)
k=np.array([[k1],[k2],[k3]])
this_reaction_rate = chemkin1.reaction_rate_multi(vij_reac,vij_proc,x,k)
reaction_rate.append(this_reaction_rate)
print("Temperature ",temp)
print("Reaction rates of species")
for i,j in zip(species,this_reaction_rate):
print(i[0],': ',j[0])
```
Temperature 750
Reaction rates of species
H2 : -360707.78728
O2 : -388635.752574
OH : 749343.539854
HO2 : 27927.9652935
H2O : -27927.9652935
Temperature 1500
Reaction rates of species
H2 : -28111762.0765
O2 : -29921368.5157
OH : 58033130.5922
HO2 : 1809606.43925
H2O : -1809606.43925
Temperature 2500
Reaction rates of species
H2 : -180426142.596
O2 : -189442449.726
OH : 369868592.322
HO2 : 9016307.12982
H2O : -9016307.12982
---
# Problem 7
Get together with your project team, form a GitHub organization (with a descriptive team name), and give the teaching staff access. You can have has many repositories as you like within your organization. However, we will grade the repository called **`cs207-FinalProject`**.
Within the `cs207-FinalProject` repo, you must set up Travis CI and Coveralls. Make sure your `README.md` file includes badges indicating how many tests are passing and the coverage of your code.
*Source: HW5/HW5-Final.ipynb (repo: filip-michalsky/CS207_Systems_Development; license: MIT)*
<a href="https://colab.research.google.com/github/tbeucler/2022_ML_Earth_Env_Sci/blob/main/Lab_Notebooks/S2_2_Training_Models.ipynb" target="_parent"></a>
#**Chapter 4 – Training Models**
<table align="left">
<td align=middle>
<a target="_blank" href="https://github.com/ageron/handson-ml2/blob/master/04_training_linear_models.ipynb"> Open the original notebook <br></a>
</td>
</table>
Let's begin like in the last notebook: importing a few common modules, ensuring MatplotLib plots figures inline and preparing a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so once again we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20.
You don't need to worry about understanding everything that is written in this section.
```python
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Is this notebook running on Colab or Kaggle?
IS_COLAB = "google.colab" in sys.modules
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
# Common imports
import numpy as np
import os
# To make this notebook's output stable across runs
rnd_seed = 42
rnd_gen = np.random.default_rng(rnd_seed)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "classification"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
```
In this notebook we will be working with the [*Iris Flower Dataset*](https://en.wikipedia.org/wiki/Iris_flower_data_set), in which the length and width of both the sepals and petals of three types of Iris flowers were recorded. For reference, these are pictures of the three flowers: <br>
<center> In order: Iris Setosa, Iris Versicolor, and Iris Virginica </center>
Photo Credits:[Kosaciec szczecinkowaty Iris setosa](https://en.wikipedia.org/wiki/File:Kosaciec_szczecinkowaty_Iris_setosa.jpg) by [Radomil Binek](https://commons.wikimedia.org/wiki/User:Radomil) licensed under [CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/deed.en); [Blue flag flower close-up (Iris versicolor)](https://en.wikipedia.org/wiki/File:Iris_versicolor_3.jpg)by Danielle Langlois licensed under [CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/deed.en); [image of Iris virginica shrevei](https://en.wikipedia.org/wiki/File:Iris_virginica.jpg) by [Frank Mayfield](https://www.flickr.com/photos/33397993@N05) licensed under [CC BY-SA 2.0](https://creativecommons.org/licenses/by-sa/2.0/deed.en).
<br><br>
As you can imagine, this dataset is normally used to train *multiclass*/*multinomial* classification algorithms and not *binary* classification algorithms, since there *are* more than 2 classes.
"*Three classes, even!*" - an observant TA
For this exercise, however, we will be implementing the binary classification algorithm referred to as the *logistic regression* algorithm (also called logit regression).
```python
# Let's load the Iris Dataset
from sklearn import datasets
iris = datasets.load_iris()
# Print out some information about the data
print(f'Keys in Iris dictionary: \n{list(iris.keys())}\n\n')
print(iris.DESCR)
# And load the petal lengths and widths as our input data
X = iris["data"][:, (2, 3)] # petal length, petal width
y = iris["target"]
print(iris['data_module'])
# The target data labels Setosa as 0, Versicolor as 1, and Virginica as 2. For
# this exercise we will be using only the Versicolor and Virgina sets.
bin_indices = np.logical_or(y==1,y==2)
bin_X = X[bin_indices]
bin_y = (y[bin_indices]==2).astype(np.uint8) # convert to binary
```
We now have a set of binary classification data we can use to train an algorithm.
As we saw during our reading, we need to define three things in order to train our algorithm: the type of algorithm we will train, the cost function (which will tell us how close our prediction is to the truth), and a method for updating the parameters in our model according to the value of the cost function (e.g., the gradient descent method).
Let's begin by defining the type of algorithm we will use. We will train a logistic regression model to differentiate between two classes. A reminder of how the logistic regression algorithm works is given below.
<br><br><br>
The logistic regression algorithm will thus take an input $t$ that is a linear combination of the features:
<a name="logit"></a>
<center> $t_{\small{n}} = \beta_{\small{0}} + \beta_{\small{1}} \cdot X_{1,n} + \beta_{\small{2}} \cdot X_{2,n}$ </center>
where
* $n$ is the ID of the sample
* $X_{\small{1}}$ represents the petal length
* $X_{\small{2}}$ represents the petal width
This input is then fed into the logistic function, $\sigma$:
\begin{align}
\sigma: t\mapsto \dfrac{1}{1+e^ {-t}}
\end{align}
Let's plot it below to remember the shape of the function
```python
t = np.arange(-4,4,.1)
def logistic(in_val):
# Return the value of the logistic function
return 1/(1 + np.exp(- in_val))
fig, ax = plt.subplots()
ax.axvline(0, c='black', alpha=1)
ax.axhline(0, c='black', alpha=1)
[ax.axhline(y_val, c='black', alpha=0.5, linestyle='dotted') for y_val in (0.5,1)]
plt.autoscale(axis='x', tight=True)
ax.plot(t, logistic(t));
ax.set_xlabel('$t$')
ax.set_ylabel('$\\sigma\\ \\left(t\\right)$')
fig.tight_layout()
```
With the logistic function, we define inputs resulting in $\sigma\geq.5$ as belonging to the ***one*** class, and any value below that is considered to belong to the ***zero*** class.
We now have a function which lets us map the value of the petal length and width to the class to which the observation belongs (i.e., whether the length and width correspond to Iris Versicolor or Iris Virginica). However, there is a parameter vector **$\theta$** with a number of parameters that we do not have a value for: <br> $\theta = [ \beta_{\small{0}}, \beta_{\small{1}}$, $\beta_{\small{2}} ]$
**Q1) Set up an array of random numbers between 0 and 1 representing the $\theta$ vector.**
Hint: Use `rnd_gen`! If you're not sure how to use it, consult the `default_rng` documentation [at this link](https://numpy.org/doc/stable/reference/random/generator.html). For instance, you may use the `random` method of `rnd_gen`.
```python
# Write your code here
```
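One possible way to do this (a minimal sketch; the variable name `theta` is just a suggestion):

```python
# A possible solution sketch for Q1, using the rnd_gen generator created above
theta = rnd_gen.random(3)  # three uniform values in [0, 1): beta_0, beta_1, beta_2
theta
```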
In order to determine whether a set of $\beta$ values is better than the other, we need to quantify well the values are able to predict the class. This is where the cost function comes in.
The cost function, $c$, will return a value close to zero when the prediction, $\hat{p}$, is correct and a large value when it is wrong. In a binary classification problem, we can use the log loss function. For a single prediction and truth value, it is given by:
\begin{align}
\text{c}(\hat{p},y) = \left\{
\begin{array}{cl}
-\log(\hat{p})& \text{if}\; y=1\\
-\log(1-\hat{p}) & \text{if}\; y=0
\end{array}
\right.
\end{align}
However, we want to apply the cost function to an n-dimensional set of predictions and truth values. Thankfully, we can find the average value of the log loss function $J$ for an n-dimensional set of $\hat{p}$ & $y$ as follows:
\begin{align}
\text{J}(\mathbf{\hat{p}},y) = - \dfrac{1}{n} \sum_{i=1}^{n}
\left[ y_i\cdot \log\left( \hat{p}_i \right) +
\left( 1 - y_i \right) \cdot \log\left( 1-\hat{p}_i \right) \right]
\end{align}
We now have a formula that can be used to calculate the average cost over the training set of data.
**Q2) Define a log_loss function that takes in an arbitrarily large set of prediction and truths**
Hint 1: You need to encode the function $J$ above, for which Numpy's functions may be quite convenient (e.g., [`log`](https://numpy.org/doc/stable/reference/generated/numpy.log.html), [`mean`](https://numpy.org/doc/stable/reference/generated/numpy.mean.html), etc.)
Hint 2: Asserting the dimensions of the vector is a good way to check that your function is working correctly. [Here's a tutorial on how to use `assert`](https://swcarpentry.github.io/python-novice-inflammation/10-defensive/index.html#assertions). For instance, to assert that two vectors `X` and `Y` have the same dimension, you may use:
```
assert X.shape==Y.shape
```
Now let's code 💻
```python
def log_loss(p_hat, y, epsilon=1e-7):
# Write your code here.
# We can also run into problems if p_hat = 0, so add an _epsilon_ term
# when evaluating log(p_hat).
log_p_hat = epsilon + ___________
J = _____________________________
# After calculating J, assert that J has the same shape as p_hat and y
assert __________
assert __________
return
```
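If you want to check your work, here is one possible solution sketch (not the only valid one); it adds `epsilon` inside the logarithms to avoid evaluating `log(0)`:

```python
# Possible solution sketch for Q2 (one of several valid implementations)
def log_loss(p_hat, y, epsilon=1e-7):
    # the predictions and the truths must have the same shape
    assert p_hat.shape == y.shape
    # add epsilon inside the logs to avoid log(0)
    losses = y * np.log(p_hat + epsilon) + (1 - y) * np.log(1 - p_hat + epsilon)
    assert losses.shape == y.shape
    # return the average log loss J
    return -np.mean(losses)
```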
We now have a way of quantifying how good our predictions are. The final thing needed for us to train our algorithm is figuring out a way to update the parameters in a way that improves the average quality of our predictions.
<br><br>**Warning**: we'll go into a bit of math below <br><br>
Let's look at the change in a single parameter within $\theta$: $\beta_1$ (given $X_{1,i} = X_1$, $\;\hat{p}_{i} = \hat{p}$, $\;y_{i} = y$). If we want to know what the effect of changing the value of $\beta_1$ will have on the log loss function we can find this with the partial derivative:
<center>$
\dfrac{\partial J}{\partial \beta_1}
$</center>
This may not seem very helpful by itself - after all, $\beta_1$ isn't even in the expression of $J$. But if we use the chain rule, we can rewrite the expression as:
<center>
$\dfrac{\partial J}{\partial \hat{p}} \cdot
\dfrac{\partial \hat{p}}{\partial t} \cdot
\dfrac{\partial t}{\partial \beta_1}$
</center>
We'll spare you the math (feel free to verify it yourself, however!):
<center>$\dfrac{\partial J}{\partial \hat{p}} = \dfrac{\hat{p} - y}{\hat{p}(1-\hat{p})}, \quad
\dfrac{\partial \hat{p}}{\partial t} = \hat{p} (1-\hat{p}), \quad
\dfrac{\partial t}{\partial \beta_1} = X_1 $
</center>
and thus
<center>$
\dfrac{\partial J}{\partial \beta_1} = (\hat{p} - y) \cdot X_1
$</center>
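For completeness, the first factor can be checked directly from the single-sample loss $c$ defined above:
\begin{align}
\frac{\partial c}{\partial \hat{p}}
= \frac{\partial}{\partial \hat{p}}\left[-y\log(\hat{p}) - (1-y)\log(1-\hat{p})\right]
= -\frac{y}{\hat{p}} + \frac{1-y}{1-\hat{p}}
= \frac{\hat{p}-y}{\hat{p}(1-\hat{p})}
\end{align}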
We can calculate the partial derivative for each parameter in $\theta$ which, as you may have realized, is simply the $\theta$ gradient of $J$: $\nabla_{\theta}(J)$
With all of this information, we can now write $\nabla_{\theta} J$ in terms of the error, the feature vector, and the number of samples we're training on!
<a name="grad_eq"></a>
<center>$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\theta^{(k)}}) = \dfrac{1}{n} \sum\limits_{i=1}^{n}{ \left ( \hat{p}^{(k)}_{i} - y_{i} \right ) \mathbf{X}_{i}}$</center>
Note that here $k$ represents the iteration of the parameters we are currently on.
We now have a gradient we can calculate and use in the batch gradient descent method! The updated parameters will thus be:
<a name="grad_descent"></a>
\begin{align}
{\mathbf{\theta}^{(k+1)}} = {\mathbf{\theta}^{(k)}} - \eta\,\nabla_{\theta^{(k)}}J(\theta^{(k)})
\end{align}
Where $\eta$ is the learning rate parameter. It's also worth pointing out that $\;\hat{p}^{(k)}_i = \sigma\left(\theta^{(k)}, X_i\right) $
In order to easily calculate the input to the logistic regression, we'll multiply the $\theta$ vector with the X data, and as we have a non-zero bias $\beta_0$ we'd like to have an X matrix whose first column is filled with ones.
\begin{align}
X_{\small{with\ bias}} = \begin{pmatrix}
1 & X_{1,0} & X_{2,0}\\
1 & X_{1,1} & X_{2,1}\\
&...&\\
1 & X_{1,n} & X_{2,n}
\end{pmatrix}
\end{align}
<br>
**Q3) Prepare the `X_with_bias` matrix (remember to use the `bin_X` data and not just `X`). Write a function called `predict` that takes in the parameter vector $\theta$ and the `X_with_bias` matrix and evaluates the logistic function for each of the samples.**
Hint 1: You recently learned how to initialize arrays in the `Numpy` notebook [at this link](https://nbviewer.org/github/tbeucler/2022_ML_Earth_Env_Sci/blob/main/Lab_Notebooks/S1_2_Numpy.ipynb). There are many ways to add a columns of 1 to `bin_X`, for instance using [`np.concatenate`](https://numpy.org/doc/stable/reference/generated/numpy.concatenate.html) or [`np.append`](https://numpy.org/doc/stable/reference/generated/numpy.append.html).
Hint 2: To clarify, the function `predict` calculates $\hat{p}$ from $\beta$ and $\boldsymbol{X}$.
Hint 3: In practice, to calculate the logistic function for each sample, you may follow the equations [higher up in the notebook](#logit) and (1) calculate $t$ from $\beta$ and $\boldsymbol{X_{\mathrm{with\ bias}}}$ before (2) applying the logistic function $\sigma$ to $t$.
```python
# Prepare the X_with_bias matrix
```
```python
# Write your function predict here
```
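One possible sketch for these two cells (assuming `bin_X` and the `logistic` function defined earlier):

```python
# Possible solution sketch for Q3
# Prepend a column of ones to bin_X so that theta[0] acts as the bias beta_0
X_with_bias = np.concatenate([np.ones((bin_X.shape[0], 1)), bin_X], axis=1)

def predict(X, theta):
    # t = X @ theta is the linear combination, sigma(t) is the prediction p_hat
    t = X @ theta
    return logistic(t)
```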
**Q4) Now that you have a `predict` function, write a `gradient_calc` function that calculates the gradient for the logistic function.**
Hint 1: You'll have to feed `theta`, `X`, and `y` to the `gradient_calc` function.
Hint 2: You can use [this equation](#grad_eq) to calculate the gradient of the cost function.
```python
# Write your code here
```
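One possible sketch of the gradient computation, reusing the `predict` sketch above:

```python
# Possible solution sketch for Q4
def gradient_calc(theta, X, y):
    # gradient of the average log loss: (1/n) * X^T (p_hat - y)
    p_hat = predict(X, theta)
    return X.T @ (p_hat - y) / len(y)
```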
We can now write a function that will train a logistic regression algorithm!
Your `logistic_regression` function needs to:
* Take in a set of training input/output data, validation input/output data, a number of iterations to train for, a set of initial parameters $\theta$, and a learning rate $\eta$
* At each iteration:
* Generate a set of predictions on the training data. Hint: You may use your function `predict` on inputs `X_train` from the training set.
* Calculate and store the loss function for the training data at each iteration. Hint: You may use your function `log_loss` on inputs `X_train` and outputs `y_train` from the training set.
  * Calculate the gradient. Hint: You may use your function `gradient_calc`.
* Update the $\theta$ parameters. Hint: You need to implement [this equation](#grad_descent).
* Generate a set of predictions on the validation data using the updated parameters. Hint: You may use your function `predict` on inputs `X_valid` from the validation set.
* Calculate and store the loss function for the validation data. Hint: You may use your function `log_loss` on inputs `X_valid` and outputs `y_valid` from the validation set.
* Bonus: Calculate and store the accuracy of the model on the training and validation data as a metric!
* Return the final set of parameters $\theta$ & the stored training/validation loss function values (and the accuracy, if you did the bonus)
**Q5) Write the `logistic_regression` function**
```python
# Write your code here
```
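If you get stuck, here is one possible sketch of such a training loop; it assumes the `predict`, `log_loss`, and `gradient_calc` sketches from above and returns the losses in the format described in the note that follows:

```python
# Possible solution sketch for Q5 (one of many valid implementations)
def logistic_regression(X_train, y_train, X_valid, y_valid, n_iterations, theta, eta):
    train_losses, valid_losses = [], []
    for _ in range(n_iterations):
        # predictions and loss on the training data
        p_hat_train = predict(X_train, theta)
        train_losses.append(log_loss(p_hat_train, y_train))
        # gradient descent update of the parameters
        gradient = gradient_calc(theta, X_train, y_train)
        theta = theta - eta * gradient
        # predictions and loss on the validation data with the updated parameters
        p_hat_valid = predict(X_valid, theta)
        valid_losses.append(log_loss(p_hat_valid, y_valid))
    # losses[0] is the training loss, losses[1] the validation loss;
    # theta holds the 3 final coefficients (beta_0, beta_1, beta_2)
    return [train_losses, valid_losses], theta
```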
**¡¡¡Important Note!!!**
The notebook assumes that you will return
1. a Losses list, where Losses[0] is the training loss and Losses[1] is the validation loss
2. a tuple with the 3 final coefficients ($\beta_0$, $\beta_1$, $\beta_2$)
The code for visualizing the bonus accuracy is not included - but it should be simple enough to do in a way similar to that which is done with the losses.
---------------------
Now that we have our logistic regression function, we're all set to train our algorithm! Or are we?
There's an important data step that we've neglected up to this point - we need to split the data into the train, validation, and test datasets.
```python
test_ratio = 0.2
validation_ratio = 0.2
total_size = len(X_with_bias)
test_size = int(total_size * test_ratio)
validation_size = int(total_size * validation_ratio)
train_size = total_size - test_size - validation_size
rnd_indices = rnd_gen.permutation(total_size)
X_train = X_with_bias[rnd_indices[:train_size]]
y_train = bin_y[rnd_indices[:train_size]]
X_valid = X_with_bias[rnd_indices[train_size:-test_size]]
y_valid = bin_y[rnd_indices[train_size:-test_size]]
X_test = X_with_bias[rnd_indices[-test_size:]]
y_test = bin_y[rnd_indices[-test_size:]]
```
Now we're ready!
**Q6) Train your logistic regression algorithm. Use 5000 iterations, $\eta$=0.1**
Hint: It's time to use the `logistic_regression` function you defined in Q5.
```python
# Complete the code
losses, coeffs =
```
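For example, with the hypothetical `logistic_regression` signature sketched above (adapt the call to however you defined yours):

```python
# Example call, assuming the logistic_regression sketch above and the initial theta from Q1
losses, coeffs = logistic_regression(X_train, y_train, X_valid, y_valid,
                                     n_iterations=5000, theta=theta, eta=0.1)
```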
Let's see how our model did while learning!
```python
# Produce the Loss Function Visualization Graphs
fig, ax = plt.subplots(figsize=(18,8))
ax.plot(losses[0], color='blue', label='Training', linewidth=3);
ax.plot(losses[1], color='orange', label='Validation', linewidth=3);
ax.legend();
ax.set_ylabel('Log Loss')
ax.set_xlabel('Iterations')
ax.set_title('Loss Function Graph')
ax.autoscale(axis='x', tight=True)
fig.tight_layout();
# Let's get predictions from our model for the training, validation, and testing
# datasets
y_hat_train = (predict(X_train, coeffs)>=.5).astype(int)
y_hat_valid = (predict(X_valid, coeffs)>=.5).astype(int)
y_hat_test = (predict(X_test, coeffs)>=.5).astype(int)
y_sets = [ [y_hat_train, y_train],
[y_hat_valid, y_valid],
[y_hat_test, y_test] ]
def accuracy_score(y_hat, y):
assert(y_hat.size==y.size)
return (y_hat == y).sum()/y.size
accuracies = [accuracy_score(y_set[0], y_set[1]) for y_set in y_sets]
printout= (f'Training Accuracy:{accuracies[0]:.1%} \n'
f'Validation Accuracy:{accuracies[1]:.1%} \n')
# Add the testing accuracy only once you're sure that your model works!
print(printout)
```
Congratulations on training a logistic regression algorithm from scratch! Once you're done with the upcoming environmental science applications notebook, feel free to come back to take a look at the challenges 😀
## Challenges
* **C1)** Add L2 Regularization to training function
* **C2)** Add early stopping to the training algorithm! Stop training when the accuracy is >=90%
* **C3)** Implement a softmax regression model (It's multiclass logistic regression 🙂)
|
ead37fc16174c0312b44dcfd6242954419929e80
| 28,128 |
ipynb
|
Jupyter Notebook
|
Lab_Notebooks/S2_2_Training_Models.ipynb
|
Raphbub/2022_ML_Earth_Env_Sci
|
5ab5aeab073b5fe4626ba94e1a3e5d1af5caaadf
|
[
"MIT"
] | null | null | null |
Lab_Notebooks/S2_2_Training_Models.ipynb
|
Raphbub/2022_ML_Earth_Env_Sci
|
5ab5aeab073b5fe4626ba94e1a3e5d1af5caaadf
|
[
"MIT"
] | null | null | null |
Lab_Notebooks/S2_2_Training_Models.ipynb
|
Raphbub/2022_ML_Earth_Env_Sci
|
5ab5aeab073b5fe4626ba94e1a3e5d1af5caaadf
|
[
"MIT"
] | null | null | null | 44.506329 | 746 | 0.557132 | true | 4,970 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.685949 | 0.76908 | 0.52755 |
__label__eng_Latn
| 0.975801 | 0.064005 |
Dynamics of offset diff drive vehicle
Equations are from the paper written by Masayoshi Wada et al.
https://www.jstage.jst.go.jp/article/jrsj1983/18/8/18_8_1166/_pdf
This notebook is written by Yosuke Matsusaka
# Preparation
We use sympy to transform the equation:
```python
!sudo pip install -U sympy
```
[33mDEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. pip 21.0 will drop support for Python 2.7 in January 2021. More details about Python 2 support in pip, can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support[0m
[33mWARNING: The directory '/home/developer/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.[0m
Requirement already up-to-date: sympy in /usr/local/lib/python2.7/dist-packages (1.5.1)
Requirement already satisfied, skipping upgrade: mpmath>=0.19 in /usr/local/lib/python2.7/dist-packages (from sympy) (1.1.0)
```python
from sympy import *
```
```python
init_printing()
```
# Symbols
The following symbols are used to describe the internal state of the vehicle:
```python
(r, s, W, theta) = symbols(r"r s W \theta_s")
```
$r$ : radius of each wheel
$W$: wheel separation
$s$ : wheel offset
$\theta_s$ : current angle of the steering joint (described later)
Joint space parameters:
```python
(wr, wl, ws) = symbols('w_r w_l w_s')
uv = Matrix([wr, wl, ws])
```
$w_r$ : angular velocity of the right wheel joint
$w_l$ : angular velocity of the left wheel joint
$w_s$: angular velocity of the steering joint
Cartesian space parameters:
```python
(vx, vy, vz) = symbols('v_x v_y v_z')
xv = Matrix([vx, vy, vz])
```
$v_x$ : x-axis velocity
$v_y$ : y-axis velocity
$v_z$ : z-axis velocity
# Dynamics of diff drive vehicle
```python
diff_drive = Matrix([
[r/2, r/2],
[0, 0],
[r/W, -r/W]
])
Eq(xv, MatMul(diff_drive, Matrix(uv[0:2]), evaluate=False), evaluate=False)
```
$\displaystyle \left[\begin{matrix}v_{x}\\v_{y}\\v_{z}\end{matrix}\right] = \left[\begin{matrix}\frac{r}{2} & \frac{r}{2}\\0 & 0\\\frac{r}{W} & - \frac{r}{W}\end{matrix}\right] \left[\begin{matrix}w_{r}\\w_{l}\end{matrix}\right]$
The x-axis velocity is generated from the combination of the left and right wheel velocities.
The z-axis velocity is generated from the difference of the left and right wheel velocities.
We can naturally understand that we cannot directly obtain a y-axis velocity from this mechanism.
# Dynamics of offset diff drive vehicle
Here, we apply offset to the diff drive vehicle described above.
We first, create offset transformation matrix:
```python
offset = Matrix([
[1, 0, 0],
[0, 1, s],
[0, 0, 1]
])
offset
```
$\displaystyle \left[\begin{matrix}1 & 0 & 0\\0 & 1 & s\\0 & 0 & 1\end{matrix}\right]$
Apply offset to the jacobian matrix:
```python
offset_diff_drive = offset * diff_drive
Eq(offset_diff_drive, MatMul(offset, diff_drive, evaluate=False), evaluate=False)
```
$\displaystyle \left[\begin{matrix}\frac{r}{2} & \frac{r}{2}\\\frac{r s}{W} & - \frac{r s}{W}\\\frac{r}{W} & - \frac{r}{W}\end{matrix}\right] = \left[\begin{matrix}1 & 0 & 0\\0 & 1 & s\\0 & 0 & 1\end{matrix}\right] \left[\begin{matrix}\frac{r}{2} & \frac{r}{2}\\0 & 0\\\frac{r}{W} & - \frac{r}{W}\end{matrix}\right]$
Now, the equation will be as below:
```python
Eq(xv, MatMul(offset_diff_drive, Matrix(uv[0:2]), evaluate=False), evaluate=False)
```
$\displaystyle \left[\begin{matrix}v_{x}\\v_{y}\\v_{z}\end{matrix}\right] = \left[\begin{matrix}\frac{r}{2} & \frac{r}{2}\\\frac{r s}{W} & - \frac{r s}{W}\\\frac{r}{W} & - \frac{r}{W}\end{matrix}\right] \left[\begin{matrix}w_{r}\\w_{l}\end{matrix}\right]$
What is interesting here is that we get a y-axis velocity just by adding the offset!
By calculating the inverse matrix, we can realize omnidirectional movement from this mechanism:
```python
inv_offset_diff_drive = Matrix(offset_diff_drive[0:2, :]).inv()
Eq(Matrix(uv[0:2]), MatMul(inv_offset_diff_drive, Matrix(xv[0:2]), evaluate=False), evaluate=False)
```
$\displaystyle \left[\begin{matrix}w_{r}\\w_{l}\end{matrix}\right] = \left[\begin{matrix}\frac{1}{r} & \frac{W}{2 r s}\\\frac{1}{r} & - \frac{W}{2 r s}\end{matrix}\right] \left[\begin{matrix}v_{x}\\v_{y}\end{matrix}\right]$
Unfortunately, the z-axis velocity is determined by the holonomic constraint and is not directly controllable. But we will add a steering joint to our vehicle to realize complete omnidirectional movement.
# Dynamics of offset diff drive vehicle with a steering joint
Here, we will add a steering joint to our vehicle to get a controllable z-axis velocity:
```python
rotate = Matrix([
[cos(theta), -sin(theta), 0],
[sin(theta), cos(theta), 0],
[0, 0, 1]
])
rotate
```
$\displaystyle \left[\begin{matrix}\cos{\left(\theta_s \right)} & - \sin{\left(\theta_s \right)} & 0\\\sin{\left(\theta_s \right)} & \cos{\left(\theta_s \right)} & 0\\0 & 0 & 1\end{matrix}\right]$
By applying the rotation, jacobian matrix will be as follows:
```python
abs_offset_diff_drive = rotate * offset_diff_drive
Eq(abs_offset_diff_drive, MatMul(rotate, offset_diff_drive, evaluate=False), evaluate=False)
```
$\displaystyle \left[\begin{matrix}\frac{r \cos{\left(\theta_s \right)}}{2} - \frac{r s \sin{\left(\theta_s \right)}}{W} & \frac{r \cos{\left(\theta_s \right)}}{2} + \frac{r s \sin{\left(\theta_s \right)}}{W}\\\frac{r \sin{\left(\theta_s \right)}}{2} + \frac{r s \cos{\left(\theta_s \right)}}{W} & \frac{r \sin{\left(\theta_s \right)}}{2} - \frac{r s \cos{\left(\theta_s \right)}}{W}\\\frac{r}{W} & - \frac{r}{W}\end{matrix}\right] = \left[\begin{matrix}\cos{\left(\theta_s \right)} & - \sin{\left(\theta_s \right)} & 0\\\sin{\left(\theta_s \right)} & \cos{\left(\theta_s \right)} & 0\\0 & 0 & 1\end{matrix}\right] \left[\begin{matrix}\frac{r}{2} & \frac{r}{2}\\\frac{r s}{W} & - \frac{r s}{W}\\\frac{r}{W} & - \frac{r}{W}\end{matrix}\right]$
Final jacobian matrix is as follows:
```python
fwd_dynamics = BlockMatrix([abs_offset_diff_drive, Matrix([0, 0, -1])]).as_explicit()
Eq(xv,MatMul(fwd_dynamics, uv, evaluate=False), evaluate=False)
```
$\displaystyle \left[\begin{matrix}v_{x}\\v_{y}\\v_{z}\end{matrix}\right] = \left[\begin{matrix}\frac{r \cos{\left(\theta_s \right)}}{2} - \frac{r s \sin{\left(\theta_s \right)}}{W} & \frac{r \cos{\left(\theta_s \right)}}{2} + \frac{r s \sin{\left(\theta_s \right)}}{W} & 0\\\frac{r \sin{\left(\theta_s \right)}}{2} + \frac{r s \cos{\left(\theta_s \right)}}{W} & \frac{r \sin{\left(\theta_s \right)}}{2} - \frac{r s \cos{\left(\theta_s \right)}}{W} & 0\\\frac{r}{W} & - \frac{r}{W} & -1\end{matrix}\right] \left[\begin{matrix}w_{r}\\w_{l}\\w_{s}\end{matrix}\right]$
Inverse jacobian matrix will let us know how to calculate each joint velocity to realize omnidirectional movement:
```python
inv_dynamics = fwd_dynamics.inv().simplify()
Eq(uv, MatMul(inv_dynamics, xv, evaluate=False), evaluate=False)
```
$\displaystyle \left[\begin{matrix}w_{r}\\w_{l}\\w_{s}\end{matrix}\right] = \left[\begin{matrix}\frac{- \frac{W \sin{\left(\theta_s \right)}}{2} + s \cos{\left(\theta_s \right)}}{r s} & \frac{\frac{W \cos{\left(\theta_s \right)}}{2} + s \sin{\left(\theta_s \right)}}{r s} & 0\\\frac{\frac{W \sin{\left(\theta_s \right)}}{2} + s \cos{\left(\theta_s \right)}}{r s} & \frac{- \frac{W \cos{\left(\theta_s \right)}}{2} + s \sin{\left(\theta_s \right)}}{r s} & 0\\- \frac{\sin{\left(\theta_s \right)}}{s} & \frac{\cos{\left(\theta_s \right)}}{s} & -1\end{matrix}\right] \left[\begin{matrix}v_{x}\\v_{y}\\v_{z}\end{matrix}\right]$
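Before moving on, a quick numeric sanity check can be reassuring (a minimal sketch; the parameter values below are arbitrary assumptions, not taken from the paper): substituting numbers into both Jacobians, their product should reduce to the identity matrix.

```python
# Minimal numeric sanity check; the parameter values here are arbitrary assumptions
subs_vals = {r: 0.05, W: 0.3, s: 0.1, theta: pi/6}
J_fwd = fwd_dynamics.subs(subs_vals)
J_inv = inv_dynamics.subs(subs_vals)
# the product should simplify to the 3x3 identity matrix
simplify(J_fwd * J_inv)
```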
```python
```
|
211e90cc46b4271b86f9aa0d861ccbecaa6abd17
| 21,104 |
ipynb
|
Jupyter Notebook
|
offset-diff-drive-dynamics.ipynb
|
devrt/offset-diff-drive-controller
|
70eab6eee6ca6f4d5f3e0938ee0b088fc4ce8ba9
|
[
"BSD-3-Clause"
] | null | null | null |
offset-diff-drive-dynamics.ipynb
|
devrt/offset-diff-drive-controller
|
70eab6eee6ca6f4d5f3e0938ee0b088fc4ce8ba9
|
[
"BSD-3-Clause"
] | 1 |
2020-07-10T04:58:12.000Z
|
2020-07-10T04:59:01.000Z
|
offset-diff-drive-dynamics.ipynb
|
devrt/offset-diff-drive-controller
|
70eab6eee6ca6f4d5f3e0938ee0b088fc4ce8ba9
|
[
"BSD-3-Clause"
] | null | null | null | 33.7664 | 840 | 0.396607 | true | 2,555 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.795658 | 0.658418 | 0.523875 |
__label__eng_Latn
| 0.708965 | 0.055467 |
<a href="https://colab.research.google.com/github/rovle/prirucnik-za-prezivljavanje/blob/main/LUMEN_Evaluation_radionica.ipynb" target="_parent"></a>
# LUMEN Data Science Evaluation workshop
In this workshop we will briefly demonstrate causal inference using the DoWhy package.
First, we install and import the required packages.
```python
!pip install dowhy
from numpy.random import beta, uniform, binomial
from collections import defaultdict
import matplotlib.pyplot as plt
from dowhy import CausalModel
import pandas as pd
import numpy as np
import warnings
pd.set_option('display.float_format', lambda x: '%.2f' % x)
warnings.filterwarnings("ignore")
plt.style.use('ggplot')
np.random.seed(42)
```
Requirement already satisfied: dowhy in /usr/local/lib/python3.7/dist-packages (0.6)
Requirement already satisfied: pandas>=0.24 in /usr/local/lib/python3.7/dist-packages (from dowhy) (1.1.5)
Requirement already satisfied: scikit-learn in /usr/local/lib/python3.7/dist-packages (from dowhy) (0.22.2.post1)
Requirement already satisfied: pydot>=1.4 in /usr/local/lib/python3.7/dist-packages (from dowhy) (1.4.2)
Requirement already satisfied: statsmodels in /usr/local/lib/python3.7/dist-packages (from dowhy) (0.10.2)
Requirement already satisfied: sympy>=1.4 in /usr/local/lib/python3.7/dist-packages (from dowhy) (1.7.1)
Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from dowhy) (1.4.1)
Requirement already satisfied: networkx>=2.0 in /usr/local/lib/python3.7/dist-packages (from dowhy) (2.5.1)
Requirement already satisfied: numpy>=1.15 in /usr/local/lib/python3.7/dist-packages (from dowhy) (1.19.5)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.7/dist-packages (from pandas>=0.24->dowhy) (2.8.1)
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas>=0.24->dowhy) (2018.9)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/dist-packages (from scikit-learn->dowhy) (1.0.1)
Requirement already satisfied: pyparsing>=2.1.4 in /usr/local/lib/python3.7/dist-packages (from pydot>=1.4->dowhy) (2.4.7)
Requirement already satisfied: patsy>=0.4.0 in /usr/local/lib/python3.7/dist-packages (from statsmodels->dowhy) (0.5.1)
Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.7/dist-packages (from sympy>=1.4->dowhy) (1.2.1)
Requirement already satisfied: decorator<5,>=4.3 in /usr/local/lib/python3.7/dist-packages (from networkx>=2.0->dowhy) (4.4.2)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.7.3->pandas>=0.24->dowhy) (1.15.0)
We define our dataset. It is a (made-up) dataset of sales of certain products. The features are
- whether the product is the **boss's favorite** (Sefov_favorit), a binary feature,
- whether the product is **advertised** (Reklamiran), a binary feature,
- the **price** (Cijena) of the product, a float from 10 to 1000,
- the proportion of the product's units that are **damaged** (Osteceni), a float from 0 to 1,
- whether the product is **sold out** (Rasprodani), a binary feature.
```python
def generate_dataset_prodaje(koliko_utjece=0):
sefov_favorit = binomial(1, 0.3, size=2000)
reklamiran = beta(3, 5, size=2000) + sefov_favorit*0.2
reklamiran = np.vectorize(round)(reklamiran)
cijena = ( uniform(10, 800, size=2000)
+ 100*reklamiran
+ 100*sefov_favorit )
osteceni = beta(2, 20, size=2000) + 0.1*(1 - cijena/1000)
rasprodani = ( beta(2, 2, size=2000)
+ reklamiran*koliko_utjece
- (cijena/1000)**3
- osteceni*0.3 )
rasprodani = np.vectorize(round)(rasprodani)
rasprodani = np.clip(rasprodani, 0, 1)
return reklamiran, cijena, osteceni, rasprodani, sefov_favorit
```
```python
df = pd.DataFrame(np.transpose(generate_dataset_prodaje(0)), columns=["Reklamiran", "Cijena", "Osteceni", "Rasprodani",
"Sefov_favorit"])
```
Let's take a quick look at the dataset:
```python
df.describe()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Reklamiran</th>
<th>Cijena</th>
<th>Osteceni</th>
<th>Rasprodani</th>
<th>Sefov_favorit</th>
</tr>
</thead>
<tbody>
<tr>
<th>count</th>
<td>2000.00</td>
<td>2000.00</td>
<td>2000.00</td>
<td>2000.00</td>
<td>2000.00</td>
</tr>
<tr>
<th>mean</th>
<td>0.36</td>
<td>470.25</td>
<td>0.14</td>
<td>0.24</td>
<td>0.30</td>
</tr>
<tr>
<th>std</th>
<td>0.48</td>
<td>245.84</td>
<td>0.06</td>
<td>0.43</td>
<td>0.46</td>
</tr>
<tr>
<th>min</th>
<td>0.00</td>
<td>10.12</td>
<td>0.02</td>
<td>0.00</td>
<td>0.00</td>
</tr>
<tr>
<th>25%</th>
<td>0.00</td>
<td>266.03</td>
<td>0.10</td>
<td>0.00</td>
<td>0.00</td>
</tr>
<tr>
<th>50%</th>
<td>0.00</td>
<td>468.51</td>
<td>0.13</td>
<td>0.00</td>
<td>0.00</td>
</tr>
<tr>
<th>75%</th>
<td>1.00</td>
<td>670.93</td>
<td>0.18</td>
<td>0.00</td>
<td>1.00</td>
</tr>
<tr>
<th>max</th>
<td>1.00</td>
<td>998.90</td>
<td>0.43</td>
<td>1.00</td>
<td>1.00</td>
</tr>
</tbody>
</table>
</div>
```python
df[['Cijena', 'Osteceni']].hist(figsize=(16, 5), bins=35, xlabelsize=8, ylabelsize=8)
```
```python
fig, axes = plt.subplots(1, 3)
fig.set_size_inches(20, 5)
binarne_znacajke = ['Sefov_favorit', 'Reklamiran', 'Rasprodani']
for it, ix in enumerate(binarne_znacajke):
df[[ix]].value_counts().plot.bar(ax=axes[it])
axes[it].set_xticklabels(['Nije', 'Jest'])
```
We define the causal graph:
```python
causal_graph = """
digraph {
Reklamiran;
Cijena;
Osteceni;
Rasprodani;
Sefov_favorit;
Reklamiran -> Cijena; Cijena -> Osteceni;
Osteceni -> Rasprodani; Cijena -> Rasprodani;
Reklamiran->Rasprodani; Sefov_favorit -> Cijena;
Sefov_favorit -> Reklamiran;
}
"""
```
We define the causal model and draw the causal graph:
```python
df[['Reklamiran', 'Rasprodani']] = df[['Reklamiran', 'Rasprodani']].applymap(bool)
model= CausalModel(
data = df,
graph=causal_graph.replace("\n", " "),
treatment='Reklamiran',
outcome='Rasprodani')
```
```python
model.view_model()
```
We estimate the causal effect:
```python
estimands = model.identify_effect(proceed_when_unidentifiable=True)
estimate = model.estimate_effect(estimands,method_name = "backdoor.linear_regression")
print(estimate.value)
```
-0.07123667677450779
## Refutation
A key step of causal inference is trying to refute the result we obtained. We use several techniques for this:
### Random common cause
We add an independent feature as a common cause of the dataset features; if the causal inference is correct, the estimate should not change significantly.
```python
refutel = model.refute_estimate(estimands,estimate, "random_common_cause")
print(refutel)
```
Refute: Add a Random Common Cause
Estimated effect:-0.07305299634183735
New effect:-0.01952386024522915
### Placebo treatment
We replace the treatment feature (Reklamiran) with a random feature; if the causal inference is correct, the new estimate should be close to zero.
```python
refutel = model.refute_estimate(estimands,estimate, "placebo_treatment_refuter")
print(refutel)
```
Refute: Use a Placebo Treatment
Estimated effect:-0.07305299634183735
New effect:-0.00200747857073064
p value:0.49
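### Data subset
We can also re-estimate the effect on random subsets of the data; if the inference is sound, the estimate should stay roughly the same. A minimal sketch, assuming the `data_subset_refuter` method name is available in the installed DoWhy version:

```python
# Data subset refuter (assumes this built-in refuter exists in the installed DoWhy version)
refutel = model.refute_estimate(estimands, estimate, "data_subset_refuter")
print(refutel)
```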
## Alexa play "All Together Now"
```python
metode = ["backdoor.linear_regression",
"backdoor.propensity_score_stratification"]
rezultati = defaultdict(list)
for znacajnost in np.arange(-0.1, 0.5, 0.01):
for metoda in metode:
df = pd.DataFrame(np.transpose(generate_dataset_prodaje(znacajnost)),
columns=["Reklamiran", "Cijena", "Osteceni",
"Rasprodani", "Sefov_favorit"])
df[['Reklamiran', 'Rasprodani']] = df[['Reklamiran',
'Rasprodani']].applymap(bool)
model= CausalModel(
data = df,
graph=causal_graph.replace("\n", " "),
treatment='Reklamiran',
outcome='Rasprodani')
estimands = model.identify_effect(proceed_when_unidentifiable=True)
estimate = model.estimate_effect(estimands,
method_name = metoda)
rezultati[metoda].append((znacajnost, estimate.value))
```
```python
fig, axes = plt.subplots(1, 2)
fig.set_size_inches(20, 5)
for ix, metoda in enumerate(metode):
tocke = rezultati[metoda]
axes[ix].scatter(x = [p[0] for p in tocke],
y = [p[1] for p in tocke])
axes[ix].plot(np.arange(-0.1, 0.6, 0.1),
np.arange(-0.1, 0.6, 0.1),
linestyle='--', color='blue')
axes[ix].set_title(metoda)
axes[ix].set_xlabel('Faktor značajnosti')
axes[ix].set_ylabel('Estimacija kauzalnosti')
```
```python
```
|
2a56d1cf1397f841bed247529890fa0755649957
| 117,564 |
ipynb
|
Jupyter Notebook
|
LUMEN_Evaluation_radionica.ipynb
|
rovle/prirucnik-za-prezivljavanje
|
bf79a84b9479eab97c78fa0ecc47c3a77b32f689
|
[
"MIT"
] | null | null | null |
LUMEN_Evaluation_radionica.ipynb
|
rovle/prirucnik-za-prezivljavanje
|
bf79a84b9479eab97c78fa0ecc47c3a77b32f689
|
[
"MIT"
] | null | null | null |
LUMEN_Evaluation_radionica.ipynb
|
rovle/prirucnik-za-prezivljavanje
|
bf79a84b9479eab97c78fa0ecc47c3a77b32f689
|
[
"MIT"
] | null | null | null | 172.381232 | 50,646 | 0.869008 | true | 3,194 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.721743 | 0.705785 | 0.509396 |
__label__eng_Latn
| 0.117188 | 0.021826 |
# How does a car's suspension work?
> A first approximation to the model of a car's suspension is to consider the *damped harmonic oscillator*.
Reference:
- https://es.wikipedia.org/wiki/Oscilador_arm%C3%B3nico#Oscilador_arm.C3.B3nico_amortiguado
A **model** that describes the behavior of the mechanical system above is
\begin{equation}
m\frac{d^2 x}{dt^2}=-c\frac{dx}{dt}-kx
\end{equation}
where $c$ is the damping constant and $k$ is the elasticity constant. <font color=red> Review the modeling </font>
Documentation of the packages we will use today.
- https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.odeint.html
- https://docs.scipy.org/doc/scipy/reference/index.html
___
In `python` there is a function called <font color = blue>_odeint_</font> from the <font color = blue>_integrate_</font> package of the <font color = blue>_scipy_</font> library, which integrates first-order vector systems of the form
\begin{equation}
\frac{d\boldsymbol{y}}{dt} = \boldsymbol{f}(t,\boldsymbol{y}); \qquad \text{ with }\quad \boldsymbol{y}\in\mathbb{R}^n,\quad \boldsymbol{f}:\mathbb{R}_{+}\times\mathbb{R}^n\to\mathbb{R}^n
\end{equation}
with initial conditions $\boldsymbol{y}(0) = \boldsymbol{y}_{0}$. Note that <font color=red> $\boldsymbol{y}$ represents a vector of $n$ components</font>.
Now, if we look closely, the *damped harmonic oscillator* model we obtained is a second-order ordinary differential equation (ODE). No problem: we can convert it into a system of first-order equations as follows:
1. Select the vector $\boldsymbol{y}=\left[y_1\quad y_2\right]^T$, with $y_1=x$ and $y_2=\frac{dx}{dt}$.
2. Note that $\frac{dy_1}{dt}=\frac{dx}{dt}=y_2$ and $\frac{dy_2}{dt}=\frac{d^2x}{dt^2}=-\frac{c}{m}\frac{dx}{dt}-\frac{k}{m}x=-\frac{c}{m}y_2-\frac{k}{m}y_1$.
3. Then the second-order model can be represented as the following first-order vector system:
\begin{equation}
\frac{d\boldsymbol{y}}{dt}=\left[\begin{array}{c}\frac{dy_1}{dt} \\ \frac{dy_2}{dt}\end{array}\right]=\left[\begin{array}{c}y_2 \\ -\frac{k}{m}y_1-\frac{c}{m}y_2\end{array}\right]=\left[\begin{array}{cc}0 & 1 \\-\frac{k}{m} & -\frac{c}{m}\end{array}\right]\boldsymbol{y}.
\end{equation}
```python
# Primero importamos todas las librerias, paquetes y/o funciones que vamos a utlizar
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
```
```python
# Definimos los parámetros k, m y c
k, m, c = 3, 1, 0.5
# Función f(y,t) que vamos a integrar
def amortiguado(y,t,k,m,c):
y1 = y[0]
y2 = y[1]
return np.array([y2, -(k/m)*y1-(c/m)*y2])
# Condiciones iniciales
y0 = np.array([1,1])
# Especificamos los puntos de tiempo donde queremos la solución
t = np.linspace(0, 30, 300)
# Solución numérica
y = odeint(amortiguado, y0, t, args=(k,m,c))
```
How does odeint return the solutions?
```python
# Averiguar la forma de solución
y.shape
```
(300, 2)
```python
# Mostrar la solución
y
```
array([[ 1.00000000e+00, 1.00000000e+00],
[ 1.08255354e+00, 6.44400045e-01],
[ 1.12926028e+00, 2.87265734e-01],
[ 1.14050185e+00, -6.08345277e-02],
[ 1.11768327e+00, -3.90114960e-01],
[ 1.06314152e+00, -6.91821494e-01],
[ 9.80031347e-01, -9.58438604e-01],
[ 8.72192532e-01, -1.18384948e+00],
[ 7.44003548e-01, -1.36344733e+00],
[ 6.00226559e-01, -1.49419684e+00],
[ 4.45848722e-01, -1.57464647e+00],
[ 2.85924657e-01, -1.60489362e+00],
[ 1.25424603e-01, -1.58650595e+00],
[-3.09074594e-02, -1.52240341e+00],
[-1.78682133e-01, -1.41670627e+00],
[-3.13976227e-01, -1.27455532e+00],
[-4.33425216e-01, -1.10191089e+00],
[-5.34294753e-01, -9.05337619e-01],
[-6.14530179e-01, -6.91782064e-01],
[-6.72783643e-01, -4.68349828e-01],
[-7.08419119e-01, -2.42089037e-01],
[-7.21496257e-01, -1.97860630e-02],
[-7.12734597e-01, 1.92221059e-01],
[-6.83460139e-01, 3.88206930e-01],
[-6.35536705e-01, 5.63197023e-01],
[-5.71284864e-01, 7.13077112e-01],
[-4.93391403e-01, 8.34672057e-01],
[-4.04812458e-01, 9.25793147e-01],
[-3.08673486e-01, 9.85254145e-01],
[-2.08169142e-01, 1.01285709e+00],
[-1.06466032e-01, 1.00934976e+00],
[-6.61104632e-03, 9.76357430e-01],
[ 8.85523259e-02, 9.16292147e-01],
[ 1.76457556e-01, 8.32243416e-01],
[ 2.54876953e-01, 7.27854260e-01],
[ 3.21970545e-01, 6.07187080e-01],
[ 3.76321163e-01, 4.74583783e-01],
[ 4.16955391e-01, 3.34524538e-01],
[ 4.43350454e-01, 1.91489371e-01],
[ 4.55427534e-01, 4.98265382e-02],
[ 4.53532357e-01, -8.63687837e-02],
[ 4.38404262e-01, -2.13362439e-01],
[ 4.11135226e-01, -3.27873347e-01],
[ 3.73120515e-01, -4.27147733e-01],
[ 3.26002856e-01, -5.09014244e-01],
[ 2.71612061e-01, -5.71919246e-01],
[ 2.11902105e-01, -6.14942256e-01],
[ 1.48887646e-01, -6.37791984e-01],
[ 8.45818567e-02, -6.40784133e-01],
[ 2.09373508e-02, -6.24802410e-01],
[-4.02082314e-02, -5.91244778e-01],
[-9.71805876e-02, -5.41957213e-01],
[-1.48509956e-01, -4.79157531e-01],
[-1.92964291e-01, -4.05352053e-01],
[-2.29573832e-01, -3.23247831e-01],
[-2.57646763e-01, -2.35663301e-01],
[-2.76775942e-01, -1.45440011e-01],
[-2.86836931e-01, -5.53580104e-02],
[-2.87977823e-01, 3.19428361e-02],
[-2.80601553e-01, 1.14033541e-01],
[-2.65341593e-01, 1.88757055e-01],
[-2.43032054e-01, 2.54278333e-01],
[-2.14673367e-01, 3.09122495e-01],
[-1.81394746e-01, 3.52200544e-01],
[-1.44414712e-01, 3.82822476e-01],
[-1.05000916e-01, 4.00698037e-01],
[-6.44304938e-02, 4.05925681e-01],
[-2.39520746e-02, 3.98970667e-01],
[ 1.52495098e-02, 3.80633436e-01],
[ 5.20849165e-02, 3.52009701e-01],
[ 8.55876270e-02, 3.14443793e-01],
[ 1.14936335e-01, 2.69477011e-01],
[ 1.39471973e-01, 2.18792684e-01],
[ 1.58709138e-01, 1.64159774e-01],
[ 1.72341858e-01, 1.07376701e-01],
[ 1.80243801e-01, 5.02170689e-02],
[ 1.82463197e-01, -5.62123894e-03],
[ 1.79212881e-01, -5.85622380e-02],
[ 1.70855985e-01, -1.07192185e-01],
[ 1.57887912e-01, -1.50293175e-01],
[ 1.40915311e-01, -1.86869344e-01],
[ 1.20632813e-01, -2.16165278e-01],
[ 9.77983079e-02, -2.37676424e-01],
[ 7.32075851e-02, -2.51151649e-01],
[ 4.76690987e-02, -2.56588223e-01],
[ 2.19795918e-02, -2.54219734e-01],
[-3.09873955e-03, -2.44497674e-01],
[-2.68589505e-02, -2.28067501e-01],
[-4.86674160e-02, -2.05740192e-01],
[-6.79788598e-02, -1.78460311e-01],
[-8.43480530e-02, -1.47271718e-01],
[-9.74380008e-02, -1.13282048e-01],
[-1.07024548e-01, -7.76270369e-02],
[-1.12997443e-01, -4.14357813e-02],
[-1.15358006e-01, -5.79788809e-03],
[-1.14213629e-01, 2.82666093e-02],
[-1.09769431e-01, 5.98338166e-02],
[-1.02317451e-01, 8.80984204e-02],
[-9.22238117e-02, 1.12391550e-01],
[-7.99143379e-02, 1.32193748e-01],
[-6.58591253e-02, 1.47142925e-01],
[-5.05565614e-02, 1.57037300e-01],
[-3.45172979e-02, 1.61833489e-01],
[-1.82486470e-02, 1.61640028e-01],
[-2.23983493e-03, 1.56706740e-01],
[ 1.30514820e-02, 1.47410493e-01],
[ 2.72111227e-02, 1.34237906e-01],
[ 3.98784229e-02, 1.17765682e-01],
[ 5.07542116e-02, 9.86392579e-02],
[ 5.96065967e-02, 7.75504737e-02],
[ 6.62744819e-02, 5.52149751e-02],
[ 7.06688393e-02, 3.23500227e-02],
[ 7.27717970e-02, 9.65333856e-03],
[ 7.26336834e-02, -1.22164388e-02],
[ 7.03682113e-02, -3.26571938e-02],
[ 6.61460356e-02, -5.11382596e-02],
[ 6.01869530e-02, -6.72125140e-02],
[ 5.27510391e-02, -8.05254290e-02],
[ 4.41290354e-02, -9.08209578e-02],
[ 3.46323078e-02, -9.79442377e-02],
[ 2.45826886e-02, -1.01841190e-01],
[ 1.43025071e-02, -1.02555178e-01],
[ 4.10509572e-03, -1.00220965e-01],
[-5.71397588e-03, -9.50562807e-02],
[-1.48847166e-02, -8.73513683e-02],
[-2.31693954e-02, -7.74569047e-02],
[-3.03679467e-02, -6.57707453e-02],
[-3.63220085e-02, -5.27239153e-02],
[-4.09175312e-02, -3.87663328e-02],
[-4.40859626e-02, -2.43526600e-02],
[-4.58040350e-02, -9.92871470e-03],
[-4.60922337e-02, 4.08119800e-03],
[-4.50120538e-02, 1.72856986e-02],
[-4.26621872e-02, 2.93362230e-02],
[-3.91737998e-02, 3.99351709e-02],
[-3.47050959e-02, 4.88421445e-02],
[-2.94353420e-02, 5.58782266e-02],
[-2.35585689e-02, 6.09282254e-02],
[-1.72771482e-02, 6.39409421e-02],
[-1.07954393e-02, 6.49275402e-02],
[-4.31368716e-03, 6.39581692e-02],
[ 1.97765943e-03, 6.11570151e-02],
[ 7.90306528e-03, 5.66960110e-02],
[ 1.33063359e-02, 5.07874454e-02],
[ 1.80542610e-02, 4.36757518e-02],
[ 2.20394035e-02, 3.56287536e-02],
[ 2.51819948e-02, 2.69286518e-02],
[ 2.74309256e-02, 1.78630283e-02],
[ 2.87638445e-02, 8.71613240e-03],
[ 2.91864079e-02, -2.39306389e-04],
[ 2.87307434e-02, -8.74952811e-03],
[ 2.74532105e-02, -1.65862707e-02],
[ 2.54315592e-02, -2.35522285e-02],
[ 2.27615988e-02, -2.94853359e-02],
[ 1.95534995e-02, -3.42618035e-02],
[ 1.59278522e-02, -3.77978836e-02],
[ 1.20116157e-02, -4.00503712e-02],
[ 7.93407486e-03, -4.10158893e-02],
[ 3.82292821e-03, -4.07290402e-02],
[-1.99386584e-04, -3.92595299e-02],
[-4.01903279e-03, -3.67084026e-02],
[-7.53369787e-03, -3.32035400e-02],
[-1.06550374e-02, -2.88945878e-02],
[-1.33105865e-02, -2.39474939e-02],
[-1.54451110e-02, -1.85388367e-02],
[-1.70213884e-02, -1.28501164e-02],
[-1.80204191e-02, -7.06218181e-03],
[-1.84410945e-02, -1.34994748e-03],
[-1.82993527e-02, 4.12245437e-03],
[-1.76268770e-02, 9.20596955e-03],
[-1.64693946e-02, 1.37702627e-02],
[-1.48846455e-02, 1.77066315e-02],
[-1.29400987e-02, 2.09301420e-02],
[-1.07104931e-02, 2.33809612e-02],
[-8.27528669e-03, 2.50248869e-02],
[-5.71609021e-03, 2.58530983e-02],
[-3.11416444e-03, 2.58811736e-02],
[-5.48049501e-04, 2.51474380e-02],
[ 1.90860958e-03, 2.37107251e-02],
[ 4.18898848e-03, 2.16476415e-02],
[ 6.23470843e-03, 1.90494555e-02],
[ 7.99714491e-03, 1.60186882e-02],
[ 9.43837476e-03, 1.26655558e-02],
[ 1.05317686e-02, 9.10435258e-03],
[ 1.12622197e-02, 5.44989437e-03],
[ 1.16260257e-02, 1.81412182e-03],
[ 1.16304384e-02, -1.69705203e-03],
[ 1.12929114e-02, -4.98657608e-03],
[ 1.06400909e-02, -7.96862948e-03],
[ 9.70658499e-03, -1.05706460e-02],
[ 8.53355543e-03, -1.27347635e-02],
[ 7.16718837e-03, -1.44188132e-02],
[ 5.65709128e-03, -1.55968093e-02],
[ 4.05466582e-03, -1.62589055e-02],
[ 2.41151165e-03, -1.64108831e-02],
[ 7.77900648e-04, -1.60732421e-02],
[-7.98635319e-04, -1.52798610e-02],
[-2.27456622e-03, -1.40763419e-02],
[-3.61144889e-03, -1.25181215e-02],
[-4.77680726e-03, -1.06683391e-02],
[-5.74479375e-03, -8.59560656e-03],
[-6.49662364e-03, -6.37173957e-03],
[-7.02078378e-03, -4.06946867e-03],
[-7.31301267e-03, -1.76027952e-03],
[-7.37607266e-03, 4.87642282e-04],
[-7.21932418e-03, 2.61124781e-03],
[-6.85812738e-03, 4.55422110e-03],
[-6.31309922e-03, 6.26831530e-03],
[-5.60924821e-03, 7.71436370e-03],
[-4.77502635e-03, 8.86299514e-03],
[-3.84132387e-03, 9.69501328e-03],
[-2.84044197e-03, 1.02014636e-02],
[-1.80507533e-03, 1.03833963e-02],
[-7.67331710e-04, 1.02513437e-02],
[ 2.42181483e-04, 9.82454909e-03],
[ 1.19518410e-03, 9.12997398e-03],
[ 2.06644020e-03, 8.20112874e-03],
[ 2.83435051e-03, 7.07676734e-03],
[ 3.48140918e-03, 5.79949187e-03],
[ 3.99451765e-03, 4.41431307e-03],
[ 4.36515405e-03, 2.96720883e-03],
[ 4.58940006e-03, 1.50372425e-03],
[ 4.66782792e-03, 6.76642050e-05],
[ 4.60526827e-03, -1.30011880e-03],
[ 4.41045781e-03, -2.56277695e-03],
[ 4.09559101e-03, -3.68835291e-03],
[ 3.67579419e-03, -4.65048450e-03],
[ 3.16853576e-03, -5.42889919e-03],
[ 2.59299920e-03, -6.00971481e-03],
[ 1.96943622e-03, -6.38553482e-03],
[ 1.31851777e-03, -6.55534650e-03],
[ 6.60707493e-04, -6.52424133e-03],
[ 1.56706421e-05, -6.30296331e-03],
[-5.98265584e-04, -5.90731860e-03],
[-1.16458455e-03, -5.35746407e-03],
[-1.66897588e-03, -4.67710064e-03],
[-2.09965002e-03, -3.89260437e-03],
[-2.44756023e-03, -3.03212050e-03],
[-2.70653293e-03, -2.12464979e-03],
[-2.87331116e-03, -1.19915451e-03],
[-2.94750598e-03, -2.83709131e-04],
[-2.93146652e-03, 5.95279962e-04],
[-2.83007955e-03, 1.41377000e-03],
[-2.65049987e-03, 2.15067188e-03],
[-2.40183020e-03, 2.78832655e-03],
[-2.09475750e-03, 3.31285466e-03],
[-1.74116178e-03, 3.71437984e-03],
[-1.35370872e-03, 3.98712298e-03],
[-9.45439348e-04, 4.12937155e-03],
[-5.29367996e-04, 4.14332975e-03],
[-1.18101593e-04, 4.03486555e-03],
[ 2.76511191e-04, 3.81315686e-03],
[ 6.43690822e-04, 3.49025933e-03],
[ 9.73991278e-04, 3.08061156e-03],
[ 1.25951079e-03, 2.60049377e-03],
[ 1.49405297e-03, 2.06746256e-03],
[ 1.67322061e-03, 1.49976805e-03],
[ 1.79445903e-03, 9.15786421e-04],
[ 1.85704068e-03, 3.33473308e-04],
[ 1.86199702e-03, -2.30143606e-04],
[ 1.81200277e-03, -7.59419712e-04],
[ 1.71121630e-03, -1.24048634e-03],
[ 1.56508289e-03, -1.66156660e-03],
[ 1.38011504e-03, -2.01322342e-03],
[ 1.16364754e-03, -2.28852126e-03],
[ 9.23581944e-04, -2.48310780e-03],
[ 6.68126447e-04, -2.59521496e-03],
[ 4.05540031e-04, -2.62558383e-03],
[ 1.43887408e-04, -2.57732093e-03],
[-1.09190231e-04, -2.45570220e-03],
[-3.46677662e-04, -2.26787852e-03],
[-5.62359431e-04, -2.02261209e-03],
[-7.50965013e-04, -1.72991853e-03],
[-9.08277890e-04, -1.40071545e-03],
[-1.03120674e-03, -1.04646479e-03],
[-1.11782219e-03, -6.78801085e-04],
[-1.16735638e-03, -3.09181726e-04],
[-1.18016783e-03, 5.14368857e-05],
[-1.15767707e-03, 3.92899575e-04],
[-1.10227236e-03, 7.06110899e-04],
[-1.01719178e-03, 9.83250186e-04],
[-9.06387198e-04, 1.21793999e-03],
[-7.74373028e-04, 1.40536142e-03],
[-6.26065857e-04, 1.54232050e-03],
[-4.66620193e-04, 1.62726244e-03],
[-3.01265866e-04, 1.66023638e-03],
[-1.35150539e-04, 1.64281632e-03],
[ 2.68080687e-05, 1.57797952e-03],
[ 1.80054177e-04, 1.46995057e-03],
[ 3.20511446e-04, 1.32401743e-03],
[ 4.44678792e-04, 1.14632296e-03],
[ 5.49705086e-04, 9.43642342e-04],
[ 6.33441051e-04, 7.23153279e-04],
[ 6.94468231e-04, 4.92205466e-04],
[ 7.32104638e-04, 2.58096254e-04],
[ 7.46388941e-04, 2.78588802e-05],
[ 7.38044078e-04, -1.91931321e-04],
[ 7.08422500e-04, -3.95327716e-04],
[ 6.59435631e-04, -5.77156322e-04],
[ 5.93470317e-04, -7.33129892e-04]])
- $y$ is a matrix with n rows and 2 columns.
- The first column of $y$ corresponds to $y_1$.
- The second column of $y$ corresponds to $y_2$.
How do we extract the results $y_1$ and $y_2$ separately?
```python
# Extraer y1 y y2
y1 = y[:,0]
y2 = y[:,1]
```
### To work through together in class...
- Plot $y_1$ vs. $t$ and $y_2$ vs. $t$ in the same figure... what do you observe?
```python
# Gráfica
plt.figure(figsize=(8,6))
plt.plot(t, y1, 'b', lw=3, label='Posición [m]: $y_1(t)$')
plt.plot(t, y2, 'r', lw=3, label='Velocidad [m/s]: $y_2(t)$')
plt.xlabel("Tiempo [s] $t$")
plt.legend(loc="best")
plt.grid()
plt.show()
```
- Plot $y_2/\omega_0$ vs. $y_1$... how do these plots complement each other? Conclusions?
```python
# Gráfica
omega0=(k/m)**0.5
plt.figure(figsize=(8,6))
plt.plot(y1, y2/omega0, 'b', lw=3)
plt.xlabel("Posición $y_1$")
plt.ylabel("Velocidad normalizada $y_2/\omega_0$")
plt.grid()
plt.show()
```
## Depending on the parameters, 3 types of solutions
We had
\begin{equation}
m\frac{d^2 x}{dt^2} + c\frac{dx}{dt} + kx = 0
\end{equation}
if we recall that $\omega_0 ^2 = \frac{k}{m}$ and define $\frac{c}{m}\equiv 2\Gamma$, we obtain
\begin{equation}
\frac{d^2 x}{dt^2} + 2\Gamma \frac{dx}{dt}+ \omega_0^2 x = 0
\end{equation}
<font color=blue>The behavior is determined by the roots of the characteristic equation. See the board...</font>
### Underdamped
If $\omega_0^2 > \Gamma^2$, the motion is *underdamped* oscillatory motion.
```python
omega0 = (k/m)**0.5
Gamma = c/(2*m)
```
```python
omega0**2, Gamma**2
```
(2.9999999999999996, 0.0625)
```python
omega0**2 > Gamma**2
```
True
So the first case we had already presented corresponds to underdamped motion.
```python
# Gráfica, de nuevo
plt.figure(figsize=(8,6))
plt.plot(t, y1, 'b', lw=3, label='Posición [m]: $y_1(t)$')
plt.plot(t, y2, 'r', lw=3, label='Velocidad [m/s]: $y_2(t)$')
plt.xlabel("Tiempo [s] $t$")
plt.legend(loc="best")
plt.grid()
plt.show()
```
### Overdamped
If $\omega_0^2 < \Gamma^2$, the motion is *overdamped*.
```python
# Nuevas constantes
k = .1 # Constante del muelle
m = 1.0 # Masa
c = 1 # Constante de amortiguación
```
Simulate and plot...
```python
omega0 = np.sqrt(k/m)
Gamma = c/(2*m)
```
```python
omega0**2, Gamma**2
```
(0.1, 0.25)
```python
omega0**2<Gamma**2
```
True
```python
# Simular
y = odeint(amortiguado, y0, t, args=(k,m,c))
y1s = y[:,0]
y2s = y[:,1]
```
```python
# Graficar
plt.figure(figsize=(8,6))
plt.plot(t, y1s, 'b', lw=3, label='Posición [m]: $y_1(t)$')
plt.plot(t, y2s, 'r', lw=3, label='Velocidad [m/s]: $y_2(t)$')
plt.xlabel("Tiempo [s] $t$")
plt.legend(loc="best")
plt.grid()
plt.show()
```
### Critical damping
If $\omega_0^2 = \Gamma^2$, the motion is *critically damped*.
```python
# Nuevas constantes
k = .0625 # Constante del muelle
m = 1.0 # Masa
c = .5 # Constante de amortiguación
```
Simulate and plot...
```python
omega0 = np.sqrt(k/m)
Gamma = c/(2*m)
```
```python
omega0**2, Gamma**2
```
(0.0625, 0.0625)
```python
omega0**2 == Gamma**2
```
True
```python
# Simular
y = odeint(amortiguado, y0, t, args=(k,m,c))
y1c = y[:,0]
y2c = y[:,1]
```
```python
# Graficar
plt.figure(figsize=(8,6))
plt.plot(t, y1c, 'b', lw=3, label='Posición [m]: $y_1(t)$')
plt.plot(t, y2c, 'r', lw=3, label='Velocidad [m/s]: $y_2(t)$')
plt.xlabel("Tiempo [s] $t$")
plt.legend(loc="best")
plt.grid()
plt.show()
```
In summary, we then have:
```python
tt = t
fig, ((ax1, ax2, ax3), (ax4, ax5, ax6)) = plt.subplots(2, 3, sharex='col',
sharey='row',figsize =(10,6))
ax1.plot(tt, y1, c = 'k')
ax1.set_title('Amortiguado', fontsize = 14)
ax1.set_ylabel('Posición', fontsize = 14)
ax2.plot(tt, y1s, c = 'b')
ax2.set_title('Sobreamortiguado', fontsize = 14)
ax3.plot(tt, y1c, c = 'r')
ax3.set_title('Crítico', fontsize = 16)
ax4.plot(tt, y2, c = 'k')
ax4.set_ylabel('Velocidad', fontsize = 14)
ax4.set_xlabel('tiempo', fontsize = 14)
ax5.plot(tt, y2s, c = 'b')
ax5.set_xlabel('tiempo', fontsize = 14)
ax6.plot(tt, y2c, c = 'r')
ax6.set_xlabel('tiempo', fontsize = 14)
plt.show()
```
> **Homework**. What does the phase space look like for the different cases, as well as for different initial conditions?
> In a figure like the previous one, make phase-plane plots for the different motions and for four different sets of initial conditions (a minimal setup sketch is given below)
- y0 = [1, 1]
- y0 = [1, -1]
- y0 = [-1, 1]
- y0 = [-1, -1]
Do the above in a new jupyter notebook named Tarea7_ApellidoNombre.ipynb and upload it to the designated space.
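A minimal sketch of how such a phase-plane figure could be set up (reusing `amortiguado`, `t`, and `odeint` from above; the parameter triples are the ones used earlier in this notebook):

```python
# Minimal phase-plane sketch for the three damping regimes (parameters as used above)
init_conds = [np.array([1, 1]), np.array([1, -1]), np.array([-1, 1]), np.array([-1, -1])]
casos = {'Underdamped': (3, 1, 0.5),
         'Overdamped': (0.1, 1.0, 1.0),
         'Critically damped': (0.0625, 1.0, 0.5)}
fig, axes = plt.subplots(1, 3, figsize=(15, 4))
for ax, (nombre, (k, m, c)) in zip(axes, casos.items()):
    for y0 in init_conds:
        sol = odeint(amortiguado, y0, t, args=(k, m, c))
        ax.plot(sol[:, 0], sol[:, 1])
    ax.set_title(nombre)
    ax.set_xlabel('$y_1$')
    ax.set_ylabel('$y_2$')
plt.show()
```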
<footer id="attribution" style="float:right; color:#808080; background:#fff;">
Created with Jupyter by Lázaro Alonso. Modified by Esteban Jiménez Rodríguez.
</footer>
|
17f823ead7fec64d7d56e65954465bd84f202c0a
| 221,771 |
ipynb
|
Jupyter Notebook
|
Modulo2/Clase13_OsciladorAmortiguado.ipynb
|
lilianaavila/SimMat2018-2
|
85ef5d977c536276902c917ac5cd3f1820627fa7
|
[
"MIT"
] | 1 |
2022-01-29T04:16:12.000Z
|
2022-01-29T04:16:12.000Z
|
Modulo2/Clase13_OsciladorAmortiguado.ipynb
|
lilianaavila/SimMat2018-2
|
85ef5d977c536276902c917ac5cd3f1820627fa7
|
[
"MIT"
] | 1 |
2020-08-14T17:44:49.000Z
|
2020-08-14T17:48:39.000Z
|
Modulo2/Clase13_OsciladorAmortiguado.ipynb
|
lilianaavila/SimMat2018-2
|
85ef5d977c536276902c917ac5cd3f1820627fa7
|
[
"MIT"
] | 3 |
2019-01-31T18:08:31.000Z
|
2019-01-31T18:13:26.000Z
| 215.520894 | 36,724 | 0.889891 | true | 9,491 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.822189 | 0.737158 | 0.606083 |
__label__krc_Cyrl
| 0.106802 | 0.246465 |
# Unsupervised learning
- In unsupervised machine learning problems there is no target variable (no ground-truth answers), only a feature description of the objects. The goal is to uncover hidden patterns in the data and to understand how the objects are organized and whether the data has any structure.
- **Clustering**: k-means, DBSCAN, ...
- **Visualization**: PCA, t-SNE, ...
- The clustering task is to split the objects of the sample into several clusters so that objects within one cluster are similar to each other, while objects in different clusters are dissimilar.
# 1. k-means
Clusters are represented by centroids: each cluster is defined by the coordinates of its center, and each point is assigned to the cluster whose center is closest to it.
**We minimize the total sum of squared distances from each point to the mean of its assigned cluster**
Iterative algorithm:
1. take k random centers (the *k means*)
2. assign every object to its nearest cluster
3. recompute the cluster coordinates so that each center lies at the center of mass of the points assigned to it, i.e. for each centroid take all of its points and compute their mean
4. again find the nearest centroid for every point
- If no point changed its assignment, we stop and keep the clusters (a minimal NumPy sketch of one such iteration is given below).
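To make the update rule concrete, here is a minimal NumPy sketch of a single k-means iteration (for illustration only; below we use scikit-learn's implementation):

```python
import numpy as np

def kmeans_step(X, centers):
    # assign every point to its nearest center
    distances = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    labels = distances.argmin(axis=1)
    # move each center to the mean of its assigned points
    # (assumes every cluster keeps at least one point)
    new_centers = np.array([X[labels == j].mean(axis=0) for j in range(len(centers))])
    return labels, new_centers
```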
### Pros:
+ Very simple algorithm
+ Works even on large datasets
### Cons:
- The number of clusters k has to be set by hand
- Does not always find the clusters correctly (it only finds convex clusters)
- Strongly depends on the initialization of the cluster centers (different results from run to run)
```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn import datasets
from sklearn.metrics import accuracy_score
```
```python
np.random.seed(42)
```
```python
X1 = np.random.normal(loc=[0,-10], size=(100,2))
X2 = np.random.normal(loc=[-10, 0], size=(100,2))
X3 = np.random.normal(loc=[0, 0], size=(100,2))
X = np.vstack((X1,X2,X3))
y = np.array([1]*100 + [2]*100 + [3]*100)
```
```python
k_means = KMeans(n_clusters=3)
```
```python
clusters = k_means.fit_predict(X)
```
```python
plt.scatter(X[:, 0], X[:, 1], c=clusters)
plt.title('K-Means')
plt.show()
```
Let's try different numbers of clusters:
```python
plt.figure(figsize=(12, 12))
for n_c in range(2, 8):
k_means = KMeans(n_clusters=n_c)
clusters = k_means.fit_predict(X)
plt.subplot(3, 2, n_c-1)
plt.scatter(X[:, 0], X[:, 1], c=clusters)
plt.title('n_clusters = {}'.format(n_c))
plt.show()
```
### Evaluating the quality of a partition, choosing the number of clusters, SSE, elbow method
We are given a data matrix $X$ and a number $k$ of presumed clusters. The goal of clustering is to represent the data as a set of clusters $C=\{C_1, C_2, \ldots, C_k\}$. Each cluster has its own center:
\begin{equation}
\mu_i = \frac{1}{n_i} \sum \limits_{x_j \in C_i} x_j
\end{equation}
where $n_i = |C_i|$ is the number of points in cluster $C_i$.
Thus, given some clusters $C=\{C_1, C_2, \ldots, C_k\}$, we need to evaluate the quality of the partition. For this we compute the sum of squared errors (**SSE, sum of squared error**):
\begin{equation}
SSE(C) = \sum \limits_{i=1}^{k} \sum \limits_{x_j \in C_i} ||x_j - \mu_i||^2
\end{equation}
This is the sum of squared distances between each centroid and every object of its cluster.
The goal is to find
\begin{equation}
C^* = arg\min\limits_C \{SSE(C)\}
\end{equation}
The number of clusters is chosen at the "elbow" (bend) of the SSE curve.
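A minimal sketch of the elbow method (an illustration added here), using the blob data `X` generated above and the `inertia_` attribute of a fitted `KMeans` model, which equals the SSE of the resulting partition:
```python
sse = []
k_values = range(1, 11)
for k in k_values:
    km = KMeans(n_clusters=k, random_state=42)
    km.fit(X)
    sse.append(km.inertia_)  # inertia_ is the SSE of the fitted partition

plt.plot(k_values, sse, 'o-')
plt.xlabel('number of clusters k')
plt.ylabel('SSE')
plt.title('Elbow method')
plt.show()
```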
# 2. DBSCAN
Idea: we want to find dense groups of points without knowing the number of clusters in advance.
Two parameters have to be set:
- the neighbourhood radius of a point, $\epsilon$
- minPts, the minimal number of points in the $\epsilon$-neighbourhood
Then all points can be divided into three types:
- Core points: points that have at least minPts neighbours in their $\epsilon$-neighbourhood
- Border points: points that are not core points but are reachable from core points
- Noise points: all the rest, with fewer than minPts neighbours in their $\epsilon$-neighbourhood
- minPts = 4
- each point has its own $\epsilon$-neighbourhood
- point A and all the red points are core points, because the $\epsilon$-neighbourhood around each of them contains 4 or more points (including the point itself)
- points B and C are border points: they are not core points, but they are reachable from A through other core points, so they also belong to the cluster
- point N is noise
### Algorithm:
1. take the next point and look for neighbours in its $\epsilon$-neighbourhood
2. if there are at least minPts of them, start the search for the connected component from this core point (breadth-first search)
3. otherwise mark the point as noise; it may later be attached to some core point as a border point
We start with point P2 and minPts = 2. The nearest neighbours of P2 are P1 and P3, and they lie inside its neighbourhood. In total there are 3 points in the neighbourhood, which is more than minPts, so P2 is a core point. From it we start the search for the connected component, so we check its neighbours P1 and P3.
The neighbourhood of P1 contains two points, so it is also a core point, but no new points are found from it.
Now we check point P3. Its neighbourhood contains three points (so it is core), and P4 is a new point. That is, starting from P2, point P4 can be reached through a chain of core points.
P4 is also a core point. We reach a new point P5. We check P5 and find no more unvisited points. This means the cluster has been found.
We take the next unlabelled point P6 and look for neighbours in its neighbourhood. There are none, so it is noise.
### Advantages:
+ No need to specify the number of clusters
+ Clusters can have arbitrary shape
+ Can work with noisy data
### Disadvantages:
- Slow on large datasets
- Sensitive to the choice of hyperparameters (the hierarchical HDBSCAN is often a better choice)
```python
from sklearn.cluster import DBSCAN
```
```python
from sklearn.cluster import DBSCAN
plt.figure(figsize=(12, 36))
i = 1
for sample in [2, 5, 10]:
for e in [0.2, 1, 3, 5, 10]:
dbscan = DBSCAN(eps=e, min_samples=sample)
clusters = dbscan.fit_predict(X)
plt.subplot(9, 2, i)
plt.scatter(X[:, 0], X[:, 1], c=clusters)
plt.title('eps = {}, min_samples = {}'.format(e, sample))
i += 1
i += 1
plt.show()
```
- If the radius e is small and the required number of points is small, DBSCAN finds many outliers
- If we increase the radius and keep the number of points unchanged, some of them remain outliers
- If we increase the radius further, the outliers disappear
- If we increase the radius even more, only one cluster is found, and so on
### K-means and DBSCAN on more complex data
```python
n_samples = 1500
X = datasets.make_circles(n_samples=n_samples,
factor = 0.5,
noise = 0.05)
X = X[0]
X.shape
```
(1500, 2)
```python
dbscan = DBSCAN(eps=0.1, min_samples=5)
clusters = dbscan.fit_predict(X)
```
```python
plt.scatter(X[:, 0], X[:, 1], c=clusters)
plt.title('DBSCAN');
```
```python
k_means = KMeans(n_clusters=2)
clusters = k_means.fit_predict(X)
plt.scatter(X[:, 0], X[:, 1], c = clusters);
```
# Example
```python
from sklearn.datasets import load_digits
```
```python
digits = load_digits()
X, y = digits['data'], digits['target']
```
```python
plt.imshow(X[1].reshape(8, 8), cmap='gray')
```
```python
km = KMeans(n_clusters=10)
```
```python
clusters = km.fit_predict(X)
```
```python
pred = np.zeros(X.shape[0])
```
```python
for i in range(10):
bc = np.bincount(y[clusters == i])
    pred[clusters == i] = bc.argmax()  # the dominant (most frequent) label within the cluster
```
```python
from sklearn.metrics import accuracy_score
```
```python
accuracy_score(y, pred)
```
0.7924318308291597
Clustering errors for the digit zero:
```python
incorrect_indices = np.where(np.logical_and(pred == 0, y!=0))[0]
```
```python
for i in range(2):
plt.imshow(X[incorrect_indices[i]].reshape(8, 8), cmap='gray')
plt.title('Real digit is {}'.format(y[incorrect_indices[i]]))
plt.show()
```
```python
correct_indices = np.where(np.logical_and(pred == 0, y == 0))[0]
```
```python
for i in range(2):
plt.imshow(X[correct_indices[i]].reshape(8, 8), cmap='gray')
```
## Other clustering algorithms
# Homework 2
## Task 1
1. Implement k-means
2. Visualize the convergence of the cluster centers
3. Evaluate $SSE$ for $k = 1, \ldots, 10$ and plot $SSE$ as a function of the number of clusters.
### The k-means algorithm
The algorithm receives a data matrix $D$, the number of clusters $k$, and a stopping criterion $\epsilon$:
1. t = 0
2. randomly initialize $k$ cluster centers: $\mu_1^t, \mu_2^t, \ldots, \mu_k^t \in R^d$;
3. repeat
4. $t = t + 1$;
5. $C_j = \emptyset$ for all $j = 1, \ldots, k$
6. for each $x_j \in D$
7. $j^* = arg\min\limits_{i} \{||x_j - \mu_i^{t-1}||^2\}$ // assign $x_j$ to the nearest center
8. $C_{j^*} = C_{j^*} \cup \{x_j\}$
9. for each $i = 1$ to $k$
10. $\mu_i = \frac{1}{|C_i|} \sum_{x_j \in C_i} x_j$
11. until $\sum_{i=1}^k ||\mu_i^{t} - \mu_i^{t-1}||^2 \leq \epsilon$
```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
X, Y = make_blobs(n_samples = 1000, n_features=2, centers=5, cluster_std = 1.2, random_state=17)
plt.scatter(X[:,0], X[:,1])
```
## Task 2
1. Explore the data with pandas. Look at the features and their distributions (for example, plot the distribution of cars by year, fuel type, etc.).
2. Cluster the data with KMeans from sklearn.cluster. Find the optimal number of clusters.
3. Analyse the resulting clusters:
Example: the first cluster contains all-wheel-drive cars of German manufacture with automatic transmission, low mileage and high price; the second contains right-hand-drive Japanese cars, ...
```python
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
df = pd.read_csv('data/2.csv', encoding='cp1251')
df = df.drop(columns=['Модель', 'Цвет'])
df.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Марка</th>
<th>Год</th>
<th>Состояние</th>
<th>Пробег</th>
<th>Объем</th>
<th>Топливо</th>
<th>Мощность</th>
<th>Кузов</th>
<th>Привод</th>
<th>КПП</th>
<th>Руль</th>
<th>Хозяев в ПТС</th>
<th>Цена</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>Volkswagen</td>
<td>2013.0</td>
<td>БУ</td>
<td>42000.0</td>
<td>1200.0</td>
<td>бензин</td>
<td>105.0</td>
<td>хэтчбек</td>
<td>передний</td>
<td>автомат</td>
<td>левый</td>
<td>1 владелец</td>
<td>689196.0</td>
</tr>
<tr>
<td>1</td>
<td>Skoda</td>
<td>2012.0</td>
<td>БУ</td>
<td>62000.0</td>
<td>1800.0</td>
<td>бензин</td>
<td>152.0</td>
<td>кроссовер</td>
<td>полный</td>
<td>механика</td>
<td>левый</td>
<td>1 владелец</td>
<td>639196.0</td>
</tr>
<tr>
<td>2</td>
<td>Renault</td>
<td>2015.0</td>
<td>БУ</td>
<td>4700.0</td>
<td>1600.0</td>
<td>бензин</td>
<td>106.0</td>
<td>хэтчбек</td>
<td>передний</td>
<td>механика</td>
<td>левый</td>
<td>1 владелец</td>
<td>629196.0</td>
</tr>
<tr>
<td>3</td>
<td>Nissan</td>
<td>2012.0</td>
<td>БУ</td>
<td>70000.0</td>
<td>1600.0</td>
<td>бензин</td>
<td>110.0</td>
<td>хэтчбек</td>
<td>передний</td>
<td>автомат</td>
<td>левый</td>
<td>1 владелец</td>
<td>479196.0</td>
</tr>
<tr>
<td>4</td>
<td>УАЗ</td>
<td>2014.0</td>
<td>БУ</td>
<td>50000.0</td>
<td>2700.0</td>
<td>бензин</td>
<td>128.0</td>
<td>внедорожник</td>
<td>полный</td>
<td>механика</td>
<td>левый</td>
<td>1 владелец</td>
<td>599196.0</td>
</tr>
</tbody>
</table>
</div>
*Source notebook: 02_kmeans_dbscan.ipynb (repo: aicherepanov/ml-course-2021, license: MIT)*
# <span style="color:#2c061f"> Problem Set 3: Loading and Structuring data</span>
<br>
## <span style="color:#374045"> Introduction to Programming and Numerical Analysis </span>
*Oluf Kelkjær*
### **Today's Plan**
1. Introduction to Pandas
2. Monte Carlo integration briefly
### Introduction to Pandas
`Pandas` is a powerful library when dealing with data.
`Pandas` is built on top of `Numpy`, which means that a lot of the `Numpy` structure is used or replicated in `Pandas`.
The core element of Pandas is the `DataFrame`. It looks like a 'classic' dataset and can store heterogeneous tabular data.
The `DataFrame` is a `Class` with many methods!
```python
import pandas as pd
import numpy as np
data = {"A": 1.0,
"B": pd.Timestamp("20130102"),
"C": pd.Series(1, index=list(range(4)), dtype="float32"),
"D": np.array([3] * 4, dtype="int32"),
"E": pd.Categorical(["test", "train", "test", "train"]),
"F": "foo"}
df = pd.DataFrame(data)
df.head() # df.tail(x) last x rows, df.sample(x) x random rows
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>A</th>
<th>B</th>
<th>C</th>
<th>D</th>
<th>E</th>
<th>F</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>1.0</td>
<td>2013-01-02</td>
<td>1.0</td>
<td>3</td>
<td>test</td>
<td>foo</td>
</tr>
<tr>
<th>1</th>
<td>1.0</td>
<td>2013-01-02</td>
<td>1.0</td>
<td>3</td>
<td>train</td>
<td>foo</td>
</tr>
<tr>
<th>2</th>
<td>1.0</td>
<td>2013-01-02</td>
<td>1.0</td>
<td>3</td>
<td>test</td>
<td>foo</td>
</tr>
<tr>
<th>3</th>
<td>1.0</td>
<td>2013-01-02</td>
<td>1.0</td>
<td>3</td>
<td>train</td>
<td>foo</td>
</tr>
</tbody>
</table>
</div>
Almost every data-wrangling action that you can do in `SQL`, `Excel` and `R (data.table)` can also be done in `pandas`.
### Accessing data in DataFrame
Multiple ways to go about:
```python
df["A"]
df.A
df.loc[:,"A"] # .loc needs names. First input is rows, second is columns. : means take all
df.iloc[:,0] # .iloc needs index
df["A"].all() == df.A.all() == df.loc[:,"A"].all() == df.iloc[:,0].all()
```
True
### Creating new columns
You can add new columns to the DataFrame and math operations is allowed:
```python
df["C/D"] = df['C'] / df['D']
df['I'] = [1,2,3,4]
df
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>A</th>
<th>B</th>
<th>C</th>
<th>D</th>
<th>E</th>
<th>F</th>
<th>C/D</th>
<th>I</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>1.0</td>
<td>2013-01-02</td>
<td>1.0</td>
<td>3</td>
<td>test</td>
<td>foo</td>
<td>0.333333</td>
<td>1</td>
</tr>
<tr>
<th>1</th>
<td>1.0</td>
<td>2013-01-02</td>
<td>1.0</td>
<td>3</td>
<td>train</td>
<td>foo</td>
<td>0.333333</td>
<td>2</td>
</tr>
<tr>
<th>2</th>
<td>1.0</td>
<td>2013-01-02</td>
<td>1.0</td>
<td>3</td>
<td>test</td>
<td>foo</td>
<td>0.333333</td>
<td>3</td>
</tr>
<tr>
<th>3</th>
<td>1.0</td>
<td>2013-01-02</td>
<td>1.0</td>
<td>3</td>
<td>train</td>
<td>foo</td>
<td>0.333333</td>
<td>4</td>
</tr>
</tbody>
</table>
</div>
### Subsetting DataFrame
Sometimes you only need specific parts of a DataFrame. To subset, the `.loc` method is often used:
```python
# Subset DataFrames
boolean_array = df['E'] == 'test'
print(boolean_array)
df_new = df.loc[boolean_array,['B','E','C/D']] # only want rows where boolean array is True + specified columns
df_new
```
0 True
1 False
2 True
3 False
Name: E, dtype: bool
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>B</th>
<th>E</th>
<th>C/D</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>2013-01-02</td>
<td>test</td>
<td>0.333333</td>
</tr>
<tr>
<th>2</th>
<td>2013-01-02</td>
<td>test</td>
<td>0.333333</td>
</tr>
</tbody>
</table>
</div>
### Pandas wrapped up
These functions will get you far.
**Remember** that the answers to the PS are suggested answers - what matters is getting the right result.
However, don't overcomplicate things.
For your next project (**Data Project**) - you will be using `pandas`.
If you're spending time on the **Inaugural Project** today, fear not.
You will also be dealing with `pandas` in the next Problem Set
## Numerical Integration
**General** problem:
\begin{equation}
\mathbb{E}[g(x)]=\int_{x \in X}^{} g(x)f(x)dx
\end{equation}
where $g:\mathbb{R}\rightarrow \mathbb{R}$ is some function and $f(x)$ is the PDF for $x$.
**General** solution:
Relying on the **LLN**, we can **approximate** the true integral with a finite sample, i.e. turn it into a discrete sum:
\begin{equation}
\mathbb{E}[g(x)]\approx \sum_{i=1}^{N} g(x_i) w_{i}
\end{equation}
In **Monte Carlo integration** we draw $N$ (pseudo-)random $x_i$ from $f(x)$ and use equal weights $w_i=\frac{1}{N}$ (so the weights sum to one).
This means the integral can be approximated by
\begin{equation}
\mathbb{E}[g(x)]\approx \frac{1}{N} \sum_{i=1}^{N} g(x_i)
\end{equation}
**In conclusion:** the most likely values of $x$ carry the most weight because they are sampled most often - thus gaining the appropriate weight in MC integration. Taking the mean is therefore sufficient.
**Question 3** of Inaugural project is presented as integral - you should use previous logic.
Cookbook:
1. Draw `x` from beta distribution
2. evaluate `u( )` as seen in the question
3. Return its mean
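A minimal sketch of this recipe, where the Beta parameters, the utility function `u`, and the sample size are placeholders rather than the actual Inaugural Project specification:
```python
import numpy as np

np.random.seed(2021)

rho = 2                                  # hypothetical risk-aversion parameter
u = lambda x: x**(1 - rho) / (1 - rho)   # hypothetical utility function

# 1. draw x from a beta distribution (parameters are placeholders)
x = np.random.beta(a=2, b=7, size=100_000)

# 2. evaluate u() on the draws and 3. return the mean as the MC estimate
mc_estimate = np.mean(u(x))
print(mc_estimate)
```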
*Source notebook: Slides/exc_7/7.class.ipynb (repo: OlufKelk/IPNA, license: MIT)*
# The Black-Scholes-Merton model
The stock price $\DeclareMathOperator*{\argmin}{argmin}
\DeclareMathOperator*{\argmax}{argmax}
\DeclareMathOperator*{\plim}{plim}
\newcommand{\ffrac}{\displaystyle \frac}
\newcommand{\d}[1]{\displaystyle{#1}}
\newcommand{\space}{\text{ }}
\newcommand{\bspace}{\;\;\;\;}
\newcommand{\bbspace}{\;\;\;\;\;\;\;\;}
\newcommand{\QQQ}{\boxed{?\:}}
\newcommand{\void}{\left.\right.}
\newcommand{\Tran}[1]{{#1}^{\mathrm{T}}}
\newcommand{\CB}[1]{\left\{ #1 \right\}}
\newcommand{\SB}[1]{\left[ #1 \right]}
\newcommand{\P}[1]{\left( #1 \right)}
\newcommand{\abs}[1]{\left| #1 \right|}
\newcommand{\norm}[1]{\left\| #1 \right\|}
\newcommand{\given}[1]{\left. #1 \right|}
\newcommand{\using}[1]{\stackrel{\mathrm{#1}}{=}}
\newcommand{\asim}{\overset{\text{a}}{\sim}}
\newcommand{\RR}{\mathbb{R}}
\newcommand{\EE}{\mathbb{E}}
\newcommand{\II}{\mathbb{I}}
\newcommand{\NN}{\mathbb{N}}
\newcommand{\ZZ}{\mathbb{Z}}
\newcommand{\QQ}{\mathbb{Q}}
\newcommand{\PP}{\mathbb{P}}
\newcommand{\AcA}{\mathcal{A}}
\newcommand{\FcF}{\mathcal{F}}
\newcommand{\AsA}{\mathscr{A}}
\newcommand{\FsF}{\mathscr{F}}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\I}[1]{\mathrm{I}\left( #1 \right)}
\newcommand{\N}[1]{\mathcal{N}\left( #1 \right)}
\newcommand{\Exp}[1]{\mathrm{E}\left[ #1 \right]}
\newcommand{\Var}[1]{\mathrm{Var}\left[ #1 \right]}
\newcommand{\Avar}[1]{\mathrm{Avar}\left[ #1 \right]}
\newcommand{\Cov}[1]{\mathrm{Cov}\left( #1 \right)}
\newcommand{\Corr}[1]{\mathrm{Corr}\left( #1 \right)}
\newcommand{\ExpH}{\mathrm{E}}
\newcommand{\VarH}{\mathrm{Var}}
\newcommand{\AVarH}{\mathrm{Avar}}
\newcommand{\CovH}{\mathrm{Cov}}
\newcommand{\CorrH}{\mathrm{Corr}}
\newcommand{\ow}{\text{otherwise}}
\newcommand{\FSD}{\text{FSD}}
\newcommand{\SSD}{\text{SSD}}S_t$ satisfies the ***Black-Scholes-Merton model***, $\dd S_t = \mu S_t \dd t + \sigma S_t \dd W_t$. And from the preceding result, we have
$$S_t = S_0 \cdot \exp\CB{\P{\mu - \ffrac{\sigma^2}{2}}t +\sigma W_t},\bspace \Exp{S_t} = S_0 e^{\mu t}$$
Here, $\mu$ represents the **expected return** of the stock, $\sigma$ measures the riskness of the stock, and we call it the ***volatility***. Normally, the higher the *risk*, the higher the *expected return*.
## Estimating Volatility from Historical Data
Usually the time interval is fixed. So for $i=1,2,\dots,n$, define $\Delta_t = t_i - t_{i-1}$. Then let
$$X_i = \log\P{\ffrac{S_{t_{\void_i}}}{S_{t_{\void_{i-1}}}}} \Longrightarrow \bar X = \ffrac{1}{n} \sum_{i=1}^n X_i,\bspace s = \sqrt{\ffrac{1}{\mathbf{n-1}} \sum\nolimits_{i=1}^n \P{X_i - \bar X}^2}$$
From the expression of $S_t$, we find the distribution of $X_i$, and the *unbiased estimate* of $\sigma$, ($ \Exp{s^2} = \sigma^2 \Delta_t$)
$$X_i = \log\P{\ffrac{S_{t_{\void_i}}}{S_{t_{\void_{i-1}}}}} \sim \N{\P{\mu - \ffrac{\sigma^2}{2}}\Delta_t,\sigma^2 \Delta_t},\bspace \hat\sigma = \ffrac{s}{\sqrt{\Delta_t}}$$
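A minimal sketch of this estimator, assuming a hypothetical array of daily closing prices and $\Delta_t = 1/252$ years (one trading day):
```python
import numpy as np

# hypothetical daily closing prices
prices = np.array([100.0, 101.2, 100.7, 102.3, 103.0, 102.1, 104.0])
dt = 1/252  # one trading day, in years

X = np.log(prices[1:] / prices[:-1])   # log-returns X_i
s = np.std(X, ddof=1)                  # sample std with the 1/(n-1) factor
sigma_hat = s / np.sqrt(dt)            # annualized volatility estimate
print(sigma_hat)
```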
$Assumptions$
1. NO transaction costs or taxes.
2. All securities are *perfectly divisible*.
3. *Short selling* of stocks is permitted.
4. The assets pay NO dividends.
5. NO riskless arbitrage opportunies.
6. Trading takes place *continuously*.
7. The stock price $S_t$ satisfies the Black-Scholes-Merton model, where the parameters $\mu$, $\sigma$, and risk-free interest rate $r$ are *constant*.
## Derivation of the Black-Scholes-Merton Differential Equation
### Method 1: Dynamic Replication (No arbitrage)
The model is assumed to be $\dd S_t = \mu S_t \dd t + \sigma S_t \dd W_t$, and we let $\pi_t$ denote the amount of money (not shares) held in the stock at time $t$, which is required to be an *adapted process* with respect to $\FcF_t$.
Let $V_t^\pi \equiv V_t$ denote the ***wealth process*** corresponding to the portfolio strategy $\pi$. Besides, this portfolio strategy is required to be **self-financing**, meaning that the amount held in the bank at time $t$ is equal to $V_t - \pi_t$. If we let $B_t = e^{rt}$ denote the bond price at time $t$, using no-arbitrage,
$$\dd V_t = \ffrac{\pi_t}{S_t} \dd S_t + \ffrac{V_t - \pi_t}{B_t}\dd B_t$$
With this we can determine the value of the investor's wealth by combine the two differential equation together:
$$\dd V_t = \SB{rV_t + \P{\mu-r} \pi_t}\dd t + \sigma\pi_t \dd W_t$$
Using the portfolio strategy $\pi$ to replicate the European call:
$$\begin{align}
V_T &= c\P{T,S_T} = \max\P{S_T-K,0}\Longrightarrow V_t = c\P{t,S_t}, \bspace \forall \; t\leq T\\[0.8em]
\dd c\P{t,S_t} &\using{\text{Itô}} \P{\ffrac{\partial c}{\partial t} + \mu S_t \ffrac{\partial c}{\partial S_t} + \ffrac{1}{2}\sigma^2 S_t^2 \ffrac{\partial^2 c}{\partial S_t^2}}\dd t + \sigma S_t\ffrac{\partial c}{\partial S_t}\;\dd W_t\\
\dd V_t &= \SB{rV_t + \P{\mu-r} \pi_t}\dd t + \sigma\pi_t \dd W_t
\end{align}$$
Comparing the $\dd W_t$ terms of the last two equations, we find that the **number of shares** needs to be the **delta** of the option; this is known as ***dynamic replication***.
$$\ffrac{\pi_t}{S_t} = \ffrac{\partial c}{\partial S_t}$$
And then we compare the $\dd t$ term, and this leads to the ***Black-Scholes-Merton equation***.
$$ \ffrac{\partial c}{\partial t} + \mu S_t \ffrac{\partial c}{\partial S_t} + \ffrac{1}{2}\sigma^2 S_t^2 \ffrac{\partial^2 c}{\partial S_t^2} = rc + \P{\mu-r} \pi_t $$
with terminal condition $c\P{T,S_T} = \max\P{S_T -K,0}$. Then, plugging the identity $\ffrac{\pi_t}{S_t} = \ffrac{\partial c}{\partial S_t}$ into this, we obtain the equivalent equation (with the same terminal condition):
$$\ffrac{\partial c}{\partial t} + r S_t \ffrac{\partial c}{\partial S_t} + \ffrac{1}{2}\sigma^2 S_t^2 \ffrac{\partial^2 c}{\partial S_t^2} - rc = 0$$
$Remark$
>$\mu$ is not in the final equation.
### Method 2: Riskless Hedging Principle
Portfolio: short selling of $1$ unit of a *European call option* and long holding of $\Delta$ units of the asset. The portfolio value at time $t$ is now
$$\Pi\P{t,S_t} = -c\P{t,S_t}+\Delta S_t$$
$Remark$
>Here the $\Delta$ is the stock portion, the same as the dynamic replication, $\Delta = \ffrac{\pi_t}{S_t} = \ffrac{\partial c}{\partial S_t}$.
The differential financial gain over $[t, t+\dd t)$ is $\dd\Pi\P{t,S_t} = -\dd c + \Delta \dd S_t$, or, more specifically, using Itô,
$$\begin{align}
\dd\Pi\P{t,S_t}&=-\P{\ffrac{\partial c}{\partial t} + \mu S_t \ffrac{\partial c}{\partial S_t} + \ffrac{1}{2}\sigma^2 S_t^2 \ffrac{\partial^2 c}{\partial S_t^2}}\dd t - \sigma S_t\ffrac{\partial c}{\partial S_t}\;\dd W_t + \Delta \P{\mu S_t\dd t + \sigma S_t \dd W_t}\\
&= \P{ - \ffrac{\partial c}{\partial t} + \mu S_t\P{\Delta - \ffrac{\partial c}{\partial S_t}} - \ffrac{1}{2}\sigma^2 S_t^2 \ffrac{\partial^2 c}{\partial S_t^2}}\dd t + \sigma S_t\P{\Delta - \ffrac{\partial c}{\partial S_t}}\dd W_t
\end{align}$$
Then from the **riskless hedging principle**, we have $\dd\Pi\P{t,S_t} = r\cdot\Pi\P{t,S_t}\dd t$ and thus
$$\P{ - \ffrac{\partial c}{\partial t} + \mu S_t\P{\Delta - \ffrac{\partial c}{\partial S_t}} - \ffrac{1}{2}\sigma^2 S_t^2 \ffrac{\partial^2 c}{\partial S_t^2}}\dd t + \sigma S_t\P{\Delta - \ffrac{\partial c}{\partial S_t}}\dd W_t = r\cdot\Pi\P{t,S_t}\dd t = r\cdot\P{-c+\Delta S_t}\dd t$$
Match the $\dd W_t$ terms on both sides, we have $\Delta = \ffrac{\partial c}{\partial S_t}$. Match the $\dd t$ terms on both sides, again, we have the **Black-Scholes-Merton equation**, by replacing $\Delta$ with $\ffrac{\partial c}{\partial S_t}$:
$$\ffrac{\partial c}{\partial t} + \ffrac{1}{2} \sigma^2 S_t^2 \ffrac{\partial^2 c}{\partial S_t^2} + rS_t \ffrac{\partial c}{\partial S_t} - rc = 0$$
### Method 3: Risk-Neutral Valuation
Following **Girsanov's Theorem**, to find the measure $\QQ$ under which the discounted stock price $\CB{ \tilde S_t}_{t\geq 0}$ is a martingale, our conclusion is
- $\given{\ffrac{\dd\QQ}{\dd\PP}}_{\FcF_t} = \exp\CB{-\theta W_t - \ffrac{1}{2}\theta^2 t}$, where $\theta = \ffrac{\mu - r}{\sigma}$
- Under $\QQ$ measure, $\dd S_t = rS_t\dd t + \sigma S_t \dd \widetilde W_t$, $\dd \tilde S_t = \sigma \tilde S_t \dd \widetilde W_t$
And then we obtain the risk-neutral valuation formula:
$$\begin{align}
c\P{t,S_t} &= \ExpH_t^\QQ\SB{e^{-r\P{T-t}}c\P{T,S_T}}\\
&= \ExpH_t^\QQ\SB{e^{-r\P{T-t}} %\max
\P{S_T - K}^+}\\
&= \ExpH^\QQ\SB{e^{-r\P{T-t}} %\max
\P{S_T - K}^+ \mid S_t}
\end{align}$$
With this we apply the **Feynman-Kac theorem** and obtain (meaning these methods are actually equivalent!)
$$\ffrac{\partial c}{\partial t} + \ffrac{1}{2} \sigma^2 S_t^2 \ffrac{\partial^2 c}{\partial S_t^2} + rS_t \ffrac{\partial c}{\partial S_t} - rc = 0$$
## Black-Scholes Formula
For *european call* option, we solve the equation derived from each method and obtain:
$$c\P{t,S_t} = S_t\Phi\P{d_1} - Ke^{-r\P{T-t}} \Phi\P{d_2}\equiv S_t\N{d_1} - Ke^{-r\P{T-t}} \N{d_2}\\
\begin{align}
\N{x} &\equiv \Phi\P{x} = P\CB{X\leq x} = \ffrac{1}{\sqrt{2\pi}} \int_{-\infty}^x \exp\CB{-\ffrac{u^2}{2}} \; \dd u,\bspace X\sim \N{0,1}\\
d_1 &= \ffrac{1}{\sigma\sqrt{T-t}} \P{\log\ffrac{S_t}{K} + \P{r\,\mathbf+\,\ffrac{1}{2}\sigma^2}\P{T-t}}\\
d_2 &= \ffrac{1}{\sigma\sqrt{T-t}} \P{\log\ffrac{S_t}{K} + \P{r\,\mathbf-\,\ffrac{1}{2}\sigma^2}\P{T-t}} = d_1 - \sigma \sqrt{T-t}
\end{align}$$
$Proof$
>We can solve the **PDE**, and to see the essays in ref folder. Or, use the probabilistic approach. In method 3, European call option price can expressed as expectation under risk neutral measure $\QQ$
>
>$$\begin{align}
c\P{t,S_t} &= \ExpH^\QQ\SB{e^{-r\P{T-t}} %\max
\P{S_T-K}^+\mid S_t}\\
&= \ExpH^\QQ_t\SB{e^{-r\P{T-t}} %\max
\P{S_T-K}^+}\\
&=e^{-r\P{T-t}}\P{\ExpH^\QQ_t\SB{S_T \cdot \mathbf{1}_{S_{\void_T}>K}}-K\cdot\ExpH^\QQ_t\SB{ \mathbf{1}_{S_{ \void_T} >K}}}
\end{align}$$
>
>Note that $S_T = S_t \exp\CB{\P{r- \ffrac{1}{2}\sigma^2}\P{T-t} + \sigma\P{\widetilde W_T -\widetilde W_t}}$, then
>$$S_T\mid \FcF_t \sim S_t \exp\CB{\P{r- \ffrac{1}{2}\sigma^2}\P{T-t} + \sigma\sqrt{T-t}\cdot Z},\bspace Z\sim \N{0,1}$$
>
>$$\begin{align}
\ExpH^\QQ_t\SB{\mathbf{1}_{S_{\void_T}>K}} &= \QQ\P{S_t \exp\CB{\P{r- \ffrac{1}{2}\sigma^2}\P{T-t} + \sigma\sqrt{T-t}\cdot Z} >K}\\
&= \QQ\P{Z>\ffrac{1}{\sigma\sqrt{T-t}} \P{\log\ffrac{K}{S_t} - \P{r-\ffrac{1}{2}\sigma^2}\P{T-t}}}\\
&= \QQ\P{Z>-d_2} = \QQ\P{Z<d_2} = \N{d_2}
\end{align}$$
>
>$$\begin{align}
\ExpH^\QQ_t\SB{S_T \cdot \mathbf{1}_{S_{\void_T}>K}} &= \ExpH^\QQ_t\SB{S_T \cdot \mathbf{1}_{Z>-d_{\void_2}}}\\
&= S_t\int_{-d_{\void_2}}^{+\infty} \exp\CB{\P{r-\ffrac{1}{2}\sigma^2}\P{T-t} + \sigma \sqrt{T-t}\cdot z} \ffrac{1}{\sqrt{2\pi}} \exp\CB{-\ffrac{z^2}{2}}\;\dd z\\
&= S_t e^{r\P{T-t}}\int_{-d_{\void_2}}^{+\infty}\ffrac{1}{\sqrt{2\pi}}\exp\CB{-\ffrac{ \P{z - \sigma\sqrt{T-t}}^2 }{2}}\;\dd z\\
&\bspace\text{let } u = z-\sigma\sqrt{T-t},\text{ and the lower bond or the integral:}\\
&\bspace-d_2 \to -\P{d_2 - \sigma\sqrt{T-t}} = -d_1,\text{ so that}\\
&= S_t e^{r\P{T-t}}\int_{-d_{\void_1}}^{+\infty}\ffrac{1}{\sqrt{2\pi}}\exp\CB{-\ffrac{u^2 }{2}}\;\dd u\\
&= S_t e^{r\P{T-t}} \N{d_1}
\end{align}$$
>
>So finally the formula: $c\P{t,S_t} = S_t\N{d_1} - Ke^{-r\P{T-t}} \N{d_2}$
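As a quick illustration (not part of the original derivation), a minimal Python sketch of this pricing formula, using `scipy.stats.norm.cdf` for $\N{\cdot}$ and purely hypothetical parameter values:
```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, r, sigma, tau):
    """Black-Scholes price of a European call with time to maturity tau = T - t."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return S * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2)

# hypothetical parameter values
print(bs_call(S=100, K=100, r=0.05, sigma=0.2, tau=1.0))  # approximately 10.45
```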
$Remark$
>- the Delta Hedging, $\Delta = \ffrac{\partial c}{\partial S_t} = \N{d_1}$.
>- $\ExpH^\QQ_t\SB{\mathbf{1}_{S_{\void_T}>K}} = \N{d_2}$ is the probability that the European call option will be exercised in a risk-neutral world.
>- the drift $\mu$ is not contained in the formula, meaning that the price of a European option does not depend on $\mu$, which is hard to estimate in practice.
>- From $\dd S_t = \mu S_t \dd t + \sigma S_t \dd W_t$ to $\dd S_t = r S_t \dd t + \sigma S_t \dd \widetilde W_t$, we need to assume $\widetilde W_t = W_t + \ffrac{\mu-r}{\sigma}t$. We call $\ffrac{\mu-r}{\sigma}$, denoted as $\theta$, the ***Sharpe ratio***, ***market price of risk***, ***risk premium*** or ***relative risk***.
***
For a european put option, the **Black-Scholes-Merton equation** is:
$$\ffrac{\partial p}{\partial t} + \ffrac{1}{2} \sigma^2 S_t^2 \ffrac{\partial^2 p}{\partial S_t^2} + rS_t \ffrac{\partial p}{\partial S_t} - rp = 0$$
with **terminal condition** $p\P{T,S_T} = \max\P{K-S_T,0}$. Similarly, we drive the formula for $p$:
$$p\P{t,S_t} = Ke^{-r\P{T-t}} \N{-d_2} - S_t\N{-d_1}$$
We can also derive this by the put-call parity:
$$\begin{align}
p\P{t,S_t} &= c\P{t,S_t} + Ke^{-r\P{T-t}} - S_t \\
&= S_t\N{d_1} - Ke^{-r\P{T-t}} \N{d_2} + Ke^{-r\P{T-t}} - S_t\\
&= -S_t\P{1-\N{d_1}} + Ke^{-r\P{T-t}}\P{1-\N{d_2}}\\
&= Ke^{-r\P{T-t}} \N{-d_2} - S_t\N{-d_1}
\end{align}$$
## Volatility
In BS model, the input data consists of the parameters $S_t$, $r$, $T$, $t$, and $\sigma$. To apply the model, we need to estimate the parameter $\sigma$. We could use ***historic volatility***, or ***implied volatility***.
### [Historic Volatility](#Estimating-Volatility-from-Historical-Data)
The standard approach is to use historical data for a period of the *same length as the time to maturity*. For example, if we are evaluating a European call with $6$ months left to maturity, we would retrieve the data of the last $6$ months.
### Implied Volatility
Implied volatility is the value of $\sigma$ that matches the theoretical BS price of the option with the observed market price of the option. We first let the benchmark (observed) option price be $\tilde c$ and denote the BS pricing formula for European call options by $c\P{t,S_t, T,r,\sigma, K}$. Then the implied volatility can be found by solving
$$\tilde c = c\P{t,S_t, T,r,\sigma, K}$$
Newton's method is a practical way to find the root, and there is only one solution because $\ffrac{\partial c}{\partial \sigma} > 0$. In a later chapter, we will see that the **implied volatility** exhibits a smile pattern as a function of the **strike price** $K$.
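A minimal sketch of this root-finding step, assuming the hypothetical `bs_call` helper sketched earlier and using `scipy.optimize.brentq` (a robust bracketing alternative to Newton's method) on a made-up observed price:
```python
from scipy.optimize import brentq

def implied_vol(c_obs, S, K, r, tau):
    """Volatility sigma such that bs_call(S, K, r, sigma, tau) = c_obs."""
    return brentq(lambda sigma: bs_call(S, K, r, sigma, tau) - c_obs, 1e-6, 5.0)

# hypothetical observed market price
print(implied_vol(c_obs=10.45, S=100, K=100, r=0.05, tau=1.0))  # approximately 0.20
```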
## Variations on Black-Scholes-Merton
### Dividends Paid Continuously
Assume the stock pays a dividend at a continuous rate $q$. In the infinitesimal time interval $[t,t+\dd t)$, the holder of the stock receives a dividend payment of $qS_t\dd t$. Thus, the cumulative dividend up to time $t$ is $\d{\int_0^t qS_u\;\dd u}$
#### Method 1: Combination of dynamic replication and risk-neutral valuation
We first write the total value of holding $1$ share of stock,
$$G_t := S_t + \int_0^t qS_u\;\dd u$$
Therefore, the wealth process of investing in this stock and the bond is
$$\begin{align}
\dd V_t &= \ffrac{\pi_t}{S_t}\dd G_t + \ffrac{V_t - \pi_t}{B_t}\dd B_t\neq \ffrac{\pi_t}{S_t}\dd S_t + \ffrac{V_t - \pi_t}{B_t}\dd B_t\\
&= \P{r V_t + \P{\mu+q-r}\pi_t}\dd t + \sigma \pi_t \dd W_t
\end{align}$$
To get the **discounted wealth process** to be a martingale, that is $\dd V_t = rV_t \dd t + \sigma\pi_t \dd \widetilde W_t$, we now need to have $\widetilde W_t = W_t + \ffrac{\mu+q-r}{\sigma}t$. This makes the stock dynamics
$$\dd S_t = \P{r-q} S_t \dd t + \sigma S_t \dd \widetilde W_t = \mu S_t \dd t + \sigma S_t \dd \widetilde W_t$$
Then the price of the European call option with continuous dividends paid is $\ExpH_t^\QQ \SB{e^{-r\P{T-t}}\P{S_T - K}^+}$. By **Feynman-Kac theorem**, the pricing PDE becomes
$$\ffrac{\partial c}{\partial t} + \ffrac{1}{2} \sigma^2 S_t^2 \ffrac{\partial^2 c}{\partial S_t^2} + \P{r-q}S_t \ffrac{\partial c}{\partial S_t} - rc = 0$$
Solve the equation we have
$$c_{\text{dividend}}\P{t,S_t} = S_t e^{-q\P{T-t}}\N{d_1} -Ke^{-r\P{T-t}}\N{d_2}$$
where
$$\begin{align}
d_1 &= \ffrac{1}{\sigma\sqrt{T-t}} \P{\log\P{\ffrac{S_t}{K}} + \P{r-q\,\mathbf+\,\ffrac{\sigma^2}{2}} \P{T-t}}\\
d_2 &= \ffrac{1}{\sigma\sqrt{T-t}} \P{\log\P{\ffrac{S_t}{K}} + \P{r-q\,\mathbf-\,\ffrac{\sigma^2}{2}} \P{T-t}}\\
&= d_1 - \sigma\sqrt{T-t}
\end{align}$$
#### Method 2: Risk-nuetral valuation
First recall that $\dd S_t = \mu S_t \dd t + \sigma S_t \dd W_t$ and $S_t = S_0 \cdot \exp\CB{\P{\mu - \ffrac{\sigma^2}{2}}t +\sigma W_t}$. Now given extra dividends, $\CB{S_t}_{t\geq 0}$ doesn't represent the true value of the stock at time $t$. In other words, if we buy the stock at time $0$ for $S_0$, $S_t$ is not *tradable* at time $t$, since the value of the holding is the sum of $S_t-S_0$ and the cumulative dividends.
However, this situation can be handled using the strategy of immediately reinvesting the dividend in the stock. The infinitesimal payout $qS_t \dd t$ will buy $q\dd t$ units of stock. Thus, at time $t$, we hold
$$Y_t = e^{qt} S_t = S_0 \exp\CB{\P{\mu - \ffrac{\sigma^2}{2} + q}t + \sigma W_t}$$
To price the option, we need to find the measure $\QQ$ under which the discounted process $\CB{\tilde Y_t}_{t\geq 0}$ is a martingale. First we write
$$\dd Y_t = \P{\mu + q}Y_t\dd t + \sigma Y_t \dd W_t \Rightarrow \dd \tilde Y_t = \P{\mu +q -r}\tilde Y_t\dd t + \sigma \tilde Y_t \dd W_t \equiv \sigma \tilde Y_t \dd \widetilde W_t $$
Since under $\QQ$ measure $\dd Y_t = rY_t\dd t + \sigma Y_t \dd\widetilde W_t$, we define $\QQ$
$$\given{\ffrac{\dd \QQ}{\dd \PP}}_{\FcF_t} = \exp\CB{-\theta W_t - \ffrac{1}{2}\theta^2 t}, \bspace \theta = \ffrac{\mu + q - r}{\sigma}$$
Under this, we have
$$\begin{align}
c\P{t,S_t} &= \ExpH^\QQ\SB{e^{-r\P{T-t}}c\P{T,S_T}\mid \FcF_t}\\
&= \ExpH^\QQ_t \SB{e^{-r\P{T-t}}\P{S_T-K}^+}\\
&= \ExpH^\QQ_t \SB{e^{-r\P{T-t}-qT}\P{Y_T-Ke^{qT}}^+}
\end{align}$$
$Remark$
>Let $S_t$ denote the stock price with dividend and $S_t^0$ represent the stock price without dividend. Then
>
>$$\begin{align}
\dd S_t &= \P{r-q} S_t \dd t + \sigma S_t \dd \widetilde W_t\\
\dd S_t^0 &= r S_t^0 \dd t + \sigma S_t^0 \dd \widetilde W_t\\
\end{align}$$
>
>Note that $\dd\P{S_t^0 e^{q\P{T-t}}} = S_t^0 e^{q\P{T-t}} \P{\P{r-q}\dd t + \sigma\dd\widetilde W_t }$. So if we let $S_t = S_t^0 e^{q\P{T-t}}$, such that $S_T^0 = S_T$, the option price with dividend would be
>
>$$\begin{align}
c_{\text{dividend}} &= \ExpH^\QQ_t\SB{e^{-r\P{T-t}} \P{S_T-K}^+ }\\
&= \ExpH^\QQ_t\SB{e^{-r\P{T-t}} \P{S_T^0-K}^+ }\\
&= c\P{t,S_t^0} = c\P{t,S_t e^{-q\P{T-t}}}
\end{align}$$
>
>Thus, $c_{\text{dividend}} = c\P{t,S_t e^{-q\P{T-t}}}$. The *price of call option with dividend* is **equal** to the *price of call option without dividend, but with the stock price $S_t$ replaced by $S_t e^{-q\P{T-t}}$*.
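As a quick numerical illustration of this relation (an assumption-laden sketch reusing the hypothetical `bs_call` helper from above, with made-up parameter values):
```python
import numpy as np

S, K, r, sigma, tau, q = 100, 100, 0.05, 0.2, 1.0, 0.03  # hypothetical values

# call on a dividend-paying stock, priced via the no-dividend formula
# with the spot replaced by S*exp(-q*(T-t))
c_div = bs_call(S * np.exp(-q * tau), K, r, sigma, tau)
print(c_div)
```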
### Currency Options in the B-S-M Model
Consider a currency option that gives its owner the right to buy $1$ unit of a foreign currency for $K$ units of the domestic currency at maturity $T$. Thus the payoff, evaluated in the domestic currency, is equal to $\P{R_T - K}^+$, where $R_T$ is the exchange rate at time $T$, $i.e.$, the domestic value of one unit of foreign currency at time $T$.
We now assume the **exchange rate process** $\CB{R_t}_{t\geq 0}$ satisfies the **Black-Scholes model**,
$$\dd R_t = R_t \SB{\mu_R \dd t + \sigma_R \dd W_t} = \mu_R R_t \dd t + \sigma_R R_t \dd W_t$$
Let the foreign risk-free rate be $r_f$, then the price of the European call currency option is the same as in the case of a dividend-paying underlying, but with $q$ replaced by $r_f$, and with the *dividend* replaced by *interest*. Thus, the domestic value of the one unit of the foreign account is
$$\dd R_t^{\,f} := \dd\P{R_t^{\,f} e^{r_{\void_f} \cdot t}} = \P{\mu_R + r_f}R_t^{\,f}\dd t + \sigma_R R_t^{\,f} \dd W_t $$
To replicate, we consider the wealth dynamic (in domestic currency) of a portfolio with $\pi_t$ in the foreign account and the rest in the domestic account, that is
$$\dd V_t = \ffrac{\pi_t}{R_t^{\,f}} \dd R_t^{\,f} + \ffrac{V_t - \pi_t}{B_t} \dd B_t= \P{r_d V_t + \pi_t \P{\mu_R + r_f - r_d}}\dd t + \pi_t \sigma_R \dd W_t$$
where $r_d$ is the domestic risk-free rate. This dynamics is the same as in the case of a dividend-paying underlying with $q$ replaced by $r_f$. Thus, applying the risk-neutral method, we obtain the option price,
$$\begin{align}
c\P{t,R_t} &= R_t \exp\CB{-r_f\P{T-t}}\N{d_1} - K\exp\CB{-r_d\P{T-t}}\N{d_2}\\
d_1 &= \ffrac{1}{\sigma_R\sqrt{T-t}} \P{\log\P{\ffrac{R_t}{K}} + \P{r_d - r_f + \ffrac{\sigma^2}{2}}\P{T-t}}\\
d_2 &= \ffrac{1}{\sigma_R\sqrt{T-t}} \P{\log\P{\ffrac{R_t}{K}} + \P{r_d - r_f - \ffrac{\sigma^2}{2}}\P{T-t}}
\end{align}$$
*Source notebook: FinMath/Models and Pricing of Financial Derivatives/Chap_04_The_Black-Scholes-Merton_model.ipynb (repo: XavierOwen/Notes, license: MIT)*
<a href="https://colab.research.google.com/github/SergeiSa/Computational-Intelligence-Slides-Fall-2020/blob/master/Google%20Colab%20notebooks/practice%2001/practice_01_fss_jacobian_2.ipynb" target="_parent"></a>
# **Practice 1 Q&A: Fundamental Subspaces and Jacobian Mapping**
## **Goals for today**
---
During today practice we will:
* Exploit a structure of linear mapping between Joint and Task spaces velocities.
* Understand what is physical interpretation of fundamental subspaces of the manipulator Jacobian.
* Obtain null space motion of 3 DoF plane manipulator with fixed end effector
## **Four Fundamental Subspaces. Recall**
---
>As we have studied in the lectures, there are four fundamental subspaces accompanying any linear operator (matrix) $\mathbf{A}^{m \times n}$, namely:
>* **Column** space (range, image): $\mathcal{C}(\mathbf{A}) \in \mathbb{R}^m$
>* **Null** space (kernel): $\mathcal{N}(\mathbf{A}) \in \mathbb{R}^n$
>* **Row** space: $\mathcal{R}(\mathbf{A}) = \mathcal{C}(\mathbf{A}^T) \in \mathbb{R}^n$
>* **Left null** space: $\mathcal{N}(\mathbf{A}^T) \in \mathbb{R}^m$
---
## **Jacobian Force Mapping**
>* In your homework you were working out the physical interpretation of the remaining fundamental subspaces:
\begin{equation}
\boldsymbol{\tau} = \mathbf{J}(\mathbf{q})^T\mathbf{F}
\end{equation}
where:
* $\mathbf{F} \in \mathbb{R}^m$ task space force imposed on the end-effector
* $\boldsymbol{\tau} \in \mathbb{R}^n$ joint space torques (control effort in the actuators)
>
Similarly to what we did for velocities one can deduce that:
* The row space of $\mathbf{J}$ is the subspace $\mathcal{R}(\mathbf{J}) \in \mathbb{R}^n$ of the joint torques $\boldsymbol{\tau}$ that can balance forces applied to end-effector $\mathbf{F}$, in the given manipulator
posture $\mathbf{q}$
* The left null space of $\mathbf{J}$ is the subspace $\mathcal{N}(\mathbf{J}^T) \in \mathbb{R}^m$ of the end-effector forces $\mathbf{F}$ that do not require any balancing joint torques $\boldsymbol{\tau}$, in the given manipulator posture $\mathbf{q}$.
<p></p>
Let us consider the following examples.
## **Examples: Zero Torques**
Consider the 1-DoF planar manipulator (pendulum):
<p></p>
The question is: which force vectors do not require any balancing torque?
It is obvious that a force along the pendulum link will not produce any torque; however, let us check this fact by analyzing the left null space of the Jacobian matrix.
Solution of forward kinematics:
\begin{equation}
\boldsymbol{x} =
\ell
\begin{bmatrix}
\cos q\\
\sin q
\end{bmatrix}
\end{equation}
Thus the Jacobian calculated as:
\begin{equation}
\mathbf{J} =
\frac{\partial \boldsymbol{x}}{\partial q}
=
\ell
\begin{bmatrix}
-\sin q\\
\cos q
\end{bmatrix}
\end{equation}
```
from numpy import cos, sin, zeros, sum, pi, dot, array
from scipy.linalg import null_space
q = 0
l = 1
# jacobian
jac = l*array([-sin(q), cos(q)])
F_0 = null_space([jac.transpose()])
print(F_0)
```
[[1.]
[0.]]
As expected, the force is along the link.
Let us add a linear (prismatic) joint at the base:
<p></p>
Jacobian become:
\begin{equation}
\mathbf{J} =
\frac{\partial \boldsymbol{x}}{\partial \mathbf{q}}
=
\begin{bmatrix}
1 & -\ell_1\sin q_2\\
0 & \ell_1\cos q_2
\end{bmatrix}
\end{equation}
Let's implement Jacobian matrix:
```
def jacobian(q):
    return array([[1, -l*sin(q[1])], [0, l*cos(q[1])]])  # only the second column scales with the link length l1 (= l)
q = [1,pi/2]
F_0 = null_space(jacobian(q).transpose())
print(F_0)
```
[[3.061617e-17]
[1.000000e+00]]
If the link is in the vertical posture, we do not need any balancing torque $\boldsymbol{\tau}$ to withstand a vertical force $\mathbf{F} = \mathbf{e}_y\lambda, \forall \lambda$.
Let's check other postures:
```
q = [1,pi/4]
F_0 = null_space(jacobian(q).transpose())
print(F_0)
```
[]
>**QUESTION**:
As you can see, the left null space is empty; what does that mean physically?
>**HW EXERCISE**:
> Consider cable driven manipulator:
> <p></p>
>
> with relationships between cable tensions $\boldsymbol{\tau}$ and force acting on end effector $\mathbf{F}$ given by:
>
>\begin{equation}
\mathbf{F} = \mathbf{W}\boldsymbol{\tau}
\end{equation}
>
>where: $\mathbf{W} \in \mathbb{R}^{m \times n}$ is so called wire matrix that play role of the Jacobian.
>
> Taking into account that the tensions must remain positive, $\tau_i >0,\forall i$, do the following:
>
>* Answer: what is the minimal number of actuators that supports an arbitrary end-effector force?
>* Formulate a criterion that allows you to find cable tensions that do not produce any force on the end effector; do it for the minimal number of cables (slack is not allowed). Tip: use the null space $\mathcal{N}(\mathbf{W})$
>* Come up with a geometrical representation of the proposed criterion.
*Source notebook: Google Colab notebooks/Practice Fundumental Subspaces 2020 part 1/practice_01_fss_jacobian_2.ipynb (repo: kahlflekzy/Computational-Intelligence-Slides-Spring-2022, license: MIT)*
<center>
<h1> INF285 - Computación Científica </h1>
<h2> ODE </h2>
<h2> <a href="#acknowledgements"> [S]cientific [C]omputing [T]eam </a> </h2>
<h2> Version: 1.18</h2>
</center>
## Some algorithms in this jupyter notebook have been obtained from the textbook "Lloyd N. Trefethen, Spectral Methods in MATLAB, SIAM, Philadelphia, 2000" and translated to Python.
<div id='toc' />
## Table of Contents
* [Introduction](#intro)
* [Initial Value Problems](#IVP)
* [Three solvers](#solver)
* [Stability Regions](#stab)
* [Convergence](#conver)
* [High order, higher dimensions and dynamical systems](#high)
* [Boundary Value Problems](#BVP)
* [Shooting Method](#MD)
* [Finite Differences](#DD)
* [Acknowledgements](#acknowledgements)
```python
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib import pyplot #
import numpy as np
from scipy.integrate import odeint
from pylab import * #
from numpy import linalg as LA
from matplotlib.legend_handler import HandlerLine2D
from scipy.linalg import toeplitz
from scipy.optimize import root
from ipywidgets import interact, RadioButtons, Checkbox
import sympy as sym
import matplotlib as mpl
mpl.rcParams['font.size'] = 14
mpl.rcParams['axes.labelsize'] = 20
mpl.rcParams['xtick.labelsize'] = 14
mpl.rcParams['ytick.labelsize'] = 14
from mpl_toolkits.axes_grid1 import make_axes_locatable
sym.init_printing()
# Global parameter that controls the figure's size
L = 15
# Source: https://github.com/tclaudioe/Scientific-Computing/blob/master/SC5/02%20Differentiation%20matrices.ipynb
def plot_matrices_with_values(ax,M,FLAG_test=True):
N=M.shape[0]
cmap = plt.get_cmap('GnBu')
im = ax.matshow(M, cmap=cmap)
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
plt.colorbar(im, cax=cax)
if FLAG_test:
for i in np.arange(0, N):
for j in np.arange(0, N):
ax.text(i, j, '{:.2f}'.format(M[i,j]), va='center', ha='center', color='r')
return
```
<div id='intro' />
# Introduction
[Back to TOC](#toc)
In this jupyter notebook we will study several numerical methods for solving ordinary differential equations (ODEs).
In particular, we will study Initial Value Problems (IVP) and Boundary Value Problems (BVP), their principal numerical solution methods, and their stability and convergence.
<div id='IVP' />
# Initial Value Problems
[Back to TOC](#toc)
An IVP corresponds to an ordinary differential equation of the form:
$$ \dot{y}(t) = f(t,y(t)), \quad t\in]t_0,T]$$
subject to an initial condition,
$$ y(t_0) = y_0. $$
<div id='solver' />
# Three solvers
[Back to TOC](#toc)
Here we list a few solvers; just notice that we are only highlighting their one-time-step version:
* <b>Euler's Method: </b>
\begin{align*}
y_{i+1} &= y_i + h\,f(t_i,y_i)
\end{align*}
* <b>Runge-Kutta of second order (RK2) </b>
\begin{align*}
k_{1} &= f(t_i,y_i) \\
y_{i+1} &= y_i + h\,f\left(t_i + \dfrac{h}{2}, y_i + \dfrac{h}{2}\,k_1\right)
\end{align*}
* <b>Runge-Kutta of 4th order (RK4) </b>
\begin{align*}
k_{1} &= h \, f(t_i,y_i) \\
k_{2} &= h \, f(t_i + \dfrac{h}{2}, y_i + \dfrac{1}{2}k_1) \\
k_{3} &= h \, f(t_i + \dfrac{h}{2}, y_i + \dfrac{1}{2}k_2) \\
k_{4} &= h \, f(t_i + h, y_i + k_3) \\
y_{i+1} &= y_i + \dfrac{1}{6}(k_1 + 2k_2 + 2k_3 + k_4)
\end{align*}
For all these methods, $h$ is the time step by which the solution is advanced.
The previous algorithms only advance the solution one time step, so they must be applied repeatedly over the corresponding number of time steps so that the solution goes from the initial time $t_0$ to the final time $T$.
```python
# Forward Euler Method
def eulerMethod_one_step(yi,ti,f,h):
return yi+h*f(ti,yi)
# Runge-Kutta of Second order
def RK2_one_step(yi,ti,f,h):
k1=f(ti,yi)
return yi+h*f(ti+h/2.0,yi+h/2.0*k1)
# Runge-Kutta
def RK4_one_step(yi,ti,f,h):
k1=f(ti,yi)
k2=f(ti+h/2.0,yi+(h/2.0)*k1)
k3=f(ti+h/2.0,yi+(h/2.0)*k2)
k4=f(ti+h,yi+h*k3)
return yi+(h/6.0)*(k1+2.0*k2+2.0*k3+k4)
```
Here we define the solvers that advance the solution over several time steps.
These are the versions that will be used later on in the code.
```python
# These implementations have been upgraded with respect to the ones included in the class notes to handle both scalar and vector-valued initial conditions.
def eulerMethod(t0,T,N,y0,f):
t = np.linspace(t0,T,N+1)
h = (T-t0)/N
if isinstance(y0,(int,float)):
y = np.zeros(N+1)
else:
y = np.zeros((N+1,len(y0)))
y[0] = y0
for i in np.arange(N):
y[i+1] = y[i]+f(t[i],y[i])*h
return t, y
# Challenge: Make backwardEulerMethod work!
'''def backwardEulerMethod(t0,T,N,y0,f):
t = np.linspace(t0,T,N+1)
h = (T-t0)/N
y = np.zeros(N+1)
y[0] = y0
for i in np.arange(N):
f_hat= lambda x: x - y[i]-f(t[i+1],x)*h
# You must select a solver, "find_root" is just a generic name. The input used are the anonymous function "f_hat" and the initial guess y_i.
x = find_root(f_hat,y[i])
y[i+1] = x
return t, y'''
def RK2(t0,T,N,y0,f):
t = np.linspace(t0,T,N+1)
h = (T-t0)/N
if isinstance(y0,(int,float)):
y = np.zeros(N+1)
else:
y = np.zeros((N+1,len(y0)))
y[0] = y0
for i in np.arange(N):
k1 = f(t[i],y[i])
y[i+1] = y[i]+f(t[i]+h/2,y[i]+k1*h/2)*h
return t, y
def RK4(t0,T,N,y0,f):
t = np.linspace(t0,T,N+1)
h = (T-t0)/N
if isinstance(y0,(int,float)):
y = np.zeros(N+1)
else:
y = np.zeros((N+1,len(y0)))
y[0] = y0
for i in np.arange(N):
k1 = f(t[i],y[i])
k2 = f(t[i]+h/2,y[i]+k1*h/2)
k3 = f(t[i]+h/2,y[i]+k2*h/2)
k4 = f(t[i]+h,y[i]+k3*h)
y[i+1] = y[i]+(k1+2*k2+2*k3+k4)*h/6
return t, y
```
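As one possible way to complete the `backwardEulerMethod` challenge above (a sketch, not the official solution), we can use `scipy.optimize.root`, which is already imported at the top of this notebook, to solve the implicit equation at every step:
```python
from scipy.optimize import root

def backwardEulerMethod(t0,T,N,y0,f):
    t = np.linspace(t0,T,N+1)
    h = (T-t0)/N
    y = np.zeros(N+1)
    y[0] = y0
    for i in np.arange(N):
        # Implicit equation at each step: x - y_i - h*f(t_{i+1}, x) = 0
        f_hat = lambda x: x - y[i] - f(t[i+1],x)*h
        sol = root(f_hat, y[i])  # use y_i as the initial guess
        y[i+1] = sol.x[0]
    return t, y
```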
### 1D Example
In this example we will solve the logistic equation $\dot{y}(t) = y(t)(1-y(t))$ for several initial conditions $y_0$.
```python
# Logistic Equation
def f(t,y):
return np.array(y*(1-y))
# First Example
def change_N_and_y0_1st_Example(N=20,y0=0.1, ODEsolver=eulerMethod):
t0 = 0
T = 4
t, y = ODEsolver(t0,T,N,y0,f)
fig = plt.figure()
ax = fig.gca()
ax.axis([0, 4, -1, 2])
# Plotting stationary states for this problem.
t = np.linspace(t0,T,N+1)
plt.plot(t,t*0+1,'b-',lw=5, alpha=0.5, label='Stationary state y=1')
plt.plot(t,t*0,'r-',lw=5, alpha=0.5, label='Stationary state y=0')
# Plotting numerical solution.
plt.plot(t,y,'.',label=r'$y_i$')
plt.grid(True)
ax.legend(loc='lower left', ncol=1, fancybox=True, shadow=True, numpoints=1, bbox_to_anchor=(1,0))
plt.show()
radio_button_ODEsolvers=RadioButtons(
options=[('Euler\'s Method',eulerMethod),('RK2',RK2),('RK4',RK4)],
value=eulerMethod,
description='ODE solver:',
disabled=False
)
interact(change_N_and_y0_1st_Example,N=(3,100,1),y0=(-0.5,2,0.1), ODEsolver=radio_button_ODEsolvers)
```
Now, solve several initial conditions at the same time.
```python
def plot_log_eq(N=30,ODEsolver=eulerMethod):
t0 = 0
T = 4
fig = plt.figure(figsize=(L/2,L/2))
fig.clf()
ax = fig.gca()
ax.axis("equal")
ax.grid(True)
ax.set_title("Numerical Approximations and Direction Field")
ax.axis([0, 4, -2, 2])
# Plotting the stationary states
t = np.linspace(t0,T,N+1)
plt.plot(t,t*0+1,'b-',lw=5, alpha=0.5, label='Stationary state y=1')
plt.plot(t,t*0,'r-',lw=5, alpha=0.5, label='Stationary state y=0')
plt.legend(loc='best')
# Solving over time for each method.
for y0 in np.linspace(-0.5,5,40):
t_times, y_output = ODEsolver(t0,T,N,y0,f)
if y0>=1:
plt.plot(t_times,y_output,'k-',alpha=0.2)
else:
plt.plot(t_times,y_output,'k-',alpha=0.8)
plt.plot(t_times,y_output,'.')
X,Y = np.meshgrid(np.arange(0,4,.2), np.arange(-2,2,.2))
theta = np.arctan(f(0,Y))
U = np.cos(theta)
V = np.sin(theta)
plt.quiver(X,Y,U,V,alpha=0.5)
interact(plot_log_eq,N=(5,100,1),ODEsolver=radio_button_ODEsolvers)
```
<div id='stab' />
## Stability Regions
[Back to TOC](#toc)
To perform a linear stability analysis, we will consider the following ODE $\dot{y} = \lambda y$, with $y(0)=1$ and $\lambda\in\mathbb{C}$, for which we know its exact solution $y(t)=\exp(\lambda\,t)$.
Thus, solving the previous ODE with the Euler method, we have,
\begin{align*}
y_{i+1} &= y_i + \lambda h y_i \\
y_{i+1} &= (1 + \lambda h ) y_i \\
y_{i+1} &= (1 + \lambda h )^{i+1} y_0
\end{align*}
Given that the real part of $\lambda$ is negative and $h$ is positive, we then require that $\left|1+\lambda h \right| <1$. The set of values of $\lambda h$ that satisfies this condition is called the stability region of the Euler method.
Following a similar approach, we obtain the following constraints for RK2 and RK4:
- RK2:
$$ \left|1+\lambda h + \dfrac{(\lambda h)^2}{2!} \right| <1 $$
- RK4:
$$ \left| 1+\lambda h + \dfrac{(\lambda h)^2}{2!} + \dfrac{(\lambda h)^3}{3!} + \dfrac{(\lambda h)^4}{4!} \right| <1 $$
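As a quick illustration (added here for clarity), we can check numerically whether a given $z=\lambda\,h$ lies inside each stability region by evaluating the corresponding stability polynomial:
```python
def inside_stability_region(z):
    R_euler = 1 + z
    R_rk2   = 1 + z + z**2/2
    R_rk4   = 1 + z + z**2/2 + z**3/6 + z**4/24
    return {'Euler': abs(R_euler) < 1, 'RK2': abs(R_rk2) < 1, 'RK4': abs(R_rk4) < 1}

# Example: lambda = -3 (a decaying mode) and two different time steps
print(inside_stability_region(-3*0.1))  # h = 0.1: z = -0.3, stable for all three methods
print(inside_stability_region(-3*0.9))  # h = 0.9: z = -2.7, Euler and RK2 unstable, RK4 still stable
```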
```python
def zplot2(z, ax=plt.gca(), lw=1.5, line_color='k', label=''):
ax.plot(np.real(z), np.imag(z), line_color, lw=lw, label=label)
def runge_kutta_stability_regions():
z = np.exp(1j * np.pi * np.arange(201)/100.)
r = z-1
d = 1-1./z;
# Order 1
W1, W2, W4 = [0], [0], [0]
for zi in z[1:]:
W1.append( W1[-1]-(1.+W1[-1]-zi) )
for zi in z[1:]:
W2.append( W2[-1]-(1+W2[-1]+.5*W2[-1]**2-zi**2)/(1+W2[-1]) )
for zi in z[1:]:
num = (1+W4[-1]+.5*W4[-1]**2+W4[-1]**3/6+W4[-1]**4/24-zi**4)
den = (1+W4[-1]+W4[-1]**2/2+W4[-1]**3/6.)
W4.append( W4[-1] - num/den )
return W1, W2, W4
W1,W2,W4=runge_kutta_stability_regions()
```
```python
fig = plt.figure(figsize=(L/2,L/2))
ax=fig.gca()
zplot2(W1,ax,line_color='r',label='Euler')
zplot2(W2,ax,line_color='g',label='RK2')
zplot2(W4,ax,line_color='b',label='RK4')
ax.axis("equal")
ax.axis([-5, 2, -3.5, 3.5])
ax.grid("on")
ax.legend(loc='best')
ax.set_title("Stability Regions of some Runge-Kutta ODE solvers")
```
### So, how do we use the stability region for each method?
The key point here is that we can't define $h$ a priori since we may not know the corresponding $\lambda$, thus, in principle, we can estimate $\lambda$ or just try the solver for different $h$.
```python
def plot_log_eq_stability_region(y0=1.2, N=20, T=10, ODEsolver=eulerMethod):
# How did w get this? Why do I need it?
def f_1D_prime(y):
return np.array(1-2*y)
t0=0
if ODEsolver==eulerMethod:
c1,c2,c3='b','k','k'
elif ODEsolver==RK2:
c1,c2,c3='k','b','k'
else:
c1,c2,c3='k','k','b'
t_times, y_output = ODEsolver(t0,T,N,y0,f)
fig = plt.figure(figsize=(L,L/3))
ax1 = fig.add_subplot(131)
ax2 = fig.add_subplot(132)
ax3 = fig.add_subplot(133)
ax1.axis('equal')
ax1.grid(True)
ax1.set_title('Numerical Approximation')
ax1.axis([0, T, -0.5, 2])
ax1.plot(t_times,y_output,'k-')
ax1.plot(t_times,y_output,'b.')
# The next code is to plot in logscale unstable numerical simulations
h = (T-t0)/N
y_all=h*f_1D_prime(y_output)
y_pos=y_all>0
y_neg=np.logical_not(y_pos)
ax3.semilogy(t_times[y_pos],y_all[y_pos],'.r',ms=20)
ax3.semilogy(t_times[y_neg],-y_all[y_neg],'.b',ms=20)
ax3.grid(True)
ax3.set_title(r'$\log10(|y_i|)$')
zplot2(W1,ax2,line_color=c1)
zplot2(W2,ax2,line_color=c2)
zplot2(W4,ax2,line_color=c3)
k_lambdah=h*f_1D_prime(y_output)
ax2.plot(np.real(k_lambdah),np.imag(k_lambdah),'.r',ms=20)
ax2.axis('equal')
ax2.axis([-5, 2, -3.5, 3.5])
ax2.grid(True)
ax2.set_title('Stability Region and $k=\lambda\,h$')
interact(plot_log_eq_stability_region,y0=(-2,5,0.1),N=(2,100,1), T=(1,20,1), ODEsolver=radio_button_ODEsolvers)
```
<div id='conver' />
## Convergence
[Back to TOC](#toc)
This is very important! Let's study it!
To talk about the convergence, we need to have in mind the order of each method.
In particular, $y(T)$ denotes the exact solution of the ODE at the final time $T$, and $y_N$ denotes the numerical approximation of the solution at the final time $T=t_N$.
In simple words, we can say the following:
- Euler's method is a first order method since $y(T)=y_N+\mathcal{O}(h)$. This means that if $h$ is reduced by half, then the error is reduced by half.
- RK2 is a second order method since $y(T)=y_N+\mathcal{O}(h^2)$. This means that if $h$ is reduced by half, then the error is reduced by a factor of 4.
- RK4 is a fourth order method since $y(T)=y_N+\mathcal{O}(h^4)$. This means that if $h$ is reduced by half, then the error is reduced by a factor of 16.
To validate this, we will solve numerically the following problem,
\begin{align*}
\dot{y}(t)&=-3\,y(t)+6\,t+5, \quad t\in]0,0.5]\\
y(0)&=3,
\end{align*}
for which we know the solution $y(t)=2\exp(-3t)+2t+1$.
```python
t0 = 0
T = 0.5
y0 = 3.0
def f(t,y):
return np.array(-3*y+6*t+5)
y_sol = lambda t: 2.*np.exp(-3.*t)+2.*t+1
```
```python
fig = plt.figure(figsize=(L/2,L/2))
fig.clf()
ax = fig.gca()
ax.axis('equal')
ax.grid(True)
ax.set_title('Convergence Analysis')
N_list = np.logspace(1,5,5,dtype=int)
for N in N_list:
h = (T-t0)/N
t_times, y_output = eulerMethod(t0,T,N,y0,f)
plt.loglog(h,abs(y_output[-1]-y_sol(t_times[-1])),'b.',ms=20,label='Euler',alpha=.5)
t_times, y_output = RK2(t0,T,N,y0,f)
plt.loglog(h,abs(y_output[-1]-y_sol(t_times[-1])),'rs',ms=20,label='RK2',alpha=.5)
t_times, y_output = RK4(t0,T,N,y0,f)
plt.loglog(h,abs(y_output[-1]-y_sol(t_times[-1])),'gd',ms=20,label='RK4',alpha=.5)
if N==N_list[0]:
plt.legend(numpoints=1, loc='lower right')
h_list=T/N_list
plt.loglog(h_list,h_list,'k-')
plt.text(10**(-6),10**(-3),r'$\mathcal{O}(h)$', fontsize=25, horizontalalignment='center', color='b')
plt.loglog(h_list,np.power(h_list,2.),'k-')
plt.text(10**(-7),10**(-8),r'$\mathcal{O}(h^2)$', fontsize=25, horizontalalignment='center', color='r')
plt.loglog(h_list[:-2],np.power(h_list[:-2],4.),'k-')
plt.text(10**(0),10**(-11),r'$\mathcal{O}(h^4)$', fontsize=25, horizontalalignment='center', color='g')
plt.xlabel(r'$h$')
plt.ylabel(r'Error$=|y(T)-y_{N}|$')
```
<div id='high' />
# High order, higher dimensions and dynamical systems
[Back to TOC](#toc)
Why do we study higher order problems?
Because they are so cool!
### The Van der Pol oscillator
The equation that models the Van der Pol oscillator is the following:
$$\ddot{y}-\mu\,(1-y^2)\,\dot{y} + y = 0,$$
with $y(0)=2$ and $\dot{y}(0)=0$,
where $\mu$ is a scalar parameter that regulates the non-linearity of the problem and the strength of the damping.
See https://en.wikipedia.org/wiki/Van_der_Pol_oscillator.
So, since it has a second derivative with respect to time, we need to translate it into a first-order dynamical system with the following change of variables:
\begin{align*}
y_1(t) &= y(t),\\
y_2(t) &= \dot{y}(t).
\end{align*}
Thus,
\begin{align*}
\dot{y}_1 &= \dot{y} = y_2,\\
\dot{y}_2 &= \ddot{y} = \mu (1-y^2) \dot{y} - y = \mu (1-y_1^2) y_2 - y_1,
\end{align*}
with $y_1(0)=2$ and $y_2(0)=0$.
```python
def f_vdp(t,y,mu=10):
y1 = y[0]
y2 = y[1]
return np.array([y2, mu*(1-y1**2)*y2-y1])
def f_vdp_eig_jacobian(t,y,mu=10):
J=[[0,1],[-(2*mu*y[0]*y[1]+1), mu*(1-y[0]**2)]]
lambs,vs= LA.eig(J)
return lambs
```
```python
def plotting_vdp(y01=2, y02=0, N=20000, T=200, mu=10, ODEsolver=eulerMethod):
y0=np.array([y01, y02])
f= lambda t,y: f_vdp(t,y,mu)
t0=0
t_times, y_output = ODEsolver(t0,T,N,y0,f)
fig = plt.figure(figsize=(L,L/2))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
ax1.grid(True)
ax1.set_title("Numerical Approximation")
ax1.plot(t_times,y_output[:,0],'-', label=r'$y_1(t)$', alpha=0.5)
ax1.set(xlabel='t')
ax1.plot(t_times,y_output[:,1],'-r', label=r'$y_2(t)$', alpha=0.5)
ax1.legend(loc='best')
ax2.grid(True)
ax2.set_title("Phase Portrait")
ax2.plot(y_output[:,0],y_output[:,1],'-')
ax2.set(xlabel='$y=y_1$', ylabel='$\dot{y}=y_2$')
interact(plotting_vdp,y01=(-3,3,0.1), y02=(-3,3,0.1),
N=(10,10000,10),T=(10,200,10),
mu=(1,50,0.1),ODEsolver=radio_button_ODEsolvers)
```
interactive(children=(FloatSlider(value=2.0, description='y01', max=3.0, min=-3.0), FloatSlider(value=0.0, des…
<function __main__.plotting_vdp(y01=2, y02=0, N=20000, T=200, mu=10, ODEsolver=<function eulerMethod at 0x7fd920c4fd30>)>
```python
def plot_sol_eig_phase_portrait_2eq(t_times,y_output,f_jacobian,h,c1,c2,c3):
fig = plt.figure(figsize=(L,L/3))
ax1 = fig.add_subplot(131)
ax2 = fig.add_subplot(132)
ax3 = fig.add_subplot(133)
ax1.grid(True)
ax1.set_title('Numerical Approximation')
ax1.plot(t_times,y_output[:,0],'-')
ax1.plot(t_times,y_output[:,1],'r-')
zplot2(W1,ax2,line_color=c1)
zplot2(W2,ax2,line_color=c2)
zplot2(W4,ax2,line_color=c3)
for i in range(1,t_times.size):
k_lambdah=h*f_jacobian(t_times[i],y_output[i,:])
ax2.plot(np.real(k_lambdah[0]),np.imag(k_lambdah[0]),'.r',ms=10,alpha=.4)
ax2.plot(np.real(k_lambdah[1]),np.imag(k_lambdah[1]),'sm',ms=10,alpha=.4)
ax2.axis('equal')
ax2.axis([-5, 2, -3.5, 3.5])
ax2.grid('on')
ax3.grid(True)
ax3.set_title('Phase Portrait')
ax3.plot(y_output[:,0],y_output[:,1],'-')
ax3.set(xlabel='$y_1$', ylabel='$y_2$')
```
### What about the stability region for dynamical systems?
In this case we need to compute the eigenvalues of the Jacobian matrix of $\mathbf{F}(\mathbf{y})$.
So each eigenvalue $\lambda_i$ multiplied by $h$ has to lie inside the stability region whenever the real part of $\lambda_i$ is negative, for $i\in\{1,2,\dots,\text{Dimension of Dynamical System}\}$.
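For example, Euler's method is stable when $|1+h\,\lambda_i|\le 1$. A minimal sketch of this check at one state of the Van der Pol system, reusing `f_vdp_eig_jacobian` defined above (the state and step size are chosen only for illustration):
```python
# Minimal sketch: check whether h*lambda_i lies inside Euler's stability region
# |1 + h*lambda| <= 1 for the Van der Pol system at a chosen state.
h_check = 40/1000                      # step size similar to the interactive example below
y_state = np.array([2.0, 0.0])         # a sample state
for lam in f_vdp_eig_jacobian(0, y_state, mu=10):
    z = h_check*lam
    print(z, 'inside Euler region' if abs(1 + z) <= 1 else 'outside Euler region')
```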
```python
def plotting_vpd_as_system(y01=2, y02=0, N=1000, T=40, mu=10, ODEsolver=eulerMethod):
y0=np.array([y01, y02])
f= lambda t,y: f_vdp(t,y,mu)
t0 = 0
t_times, y_output = ODEsolver(t0,T,N,y0,f)
h = (T-t0)/N
if ODEsolver==eulerMethod:
c1,c2,c3='b','k','k'
elif ODEsolver==RK2:
c1,c2,c3='k','b','k'
else:
c1,c2,c3='k','k','b'
plot_sol_eig_phase_portrait_2eq(t_times,y_output,f_vdp_eig_jacobian,h,c1,c2,c3)
interact(plotting_vpd_as_system, y01=(-3,3,0.1), y02=(-3,3,0.1),
N=(10,4000,10),T=(10,200,10),
mu=(1,50,0.1),ODEsolver=radio_button_ODEsolvers)
```
interactive(children=(FloatSlider(value=2.0, description='y01', max=3.0, min=-3.0), FloatSlider(value=-3.0, de…
<function __main__.plotting_vpd_as_system(y01=2, y02=0, N=1000, T=40, mu=10, ODEsolver=<function eulerMethod at 0x7fd920c4fd30>)>
### Lotka-Volterra model
This instance models the evolution of the average population of two species living in a closed system.
Species number 1 is assumed to have an unlimited supply of food, while species number 2 feeds on species number 1.
We will denote the average population of species number 1 as $y_1(t)$ and the average population of species number 2 as $y_2(t)$.
So, if species number 2 is absent from the system, species number 1 grows exponentially, i.e. $\dot{y}_1=y_1$.
Conversely, if species number 1 is absent from the system, species number 2 decays exponentially, i.e. $\dot{y}_2=-y_2$.
However, we are not interested in these extreme cases: how can we understand the dynamics when the two species interact?
This is modeled by the Lotka-Volterra dynamical system:
\begin{align*}
\dot{y}_1(t)&=(1-y_2(t)/\mu_2)\,y_1(t),\\
\dot{y}_2(t)&=-(1-y_1(t)/\mu_1)\,y_2(t),\\
y_1(0)&=400,\\
y_2(0)&=100,
\end{align*}
where $\mu_1$ and $\mu_2$ are normalization constants.
```python
mu1=300
mu2=200
def f_predprey(t,y,mu1=mu1,mu2=mu2):
y1 = y[0]
y2 = y[1]
return np.array([(1-y2/mu2)*y1, -(1-y1/mu1)*y2])
def f_predprey_jacobian(t,y,mu1=mu1,mu2=mu2):
J=[[1-y[1]/mu2,-y[0]/mu2],[y[1]/mu1, -(1-y[0]/mu1)]]
lambs,vs= LA.eig(J)
return lambs
```
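As a quick check (a minimal sketch), the system has a non-trivial equilibrium at $y_1=\mu_1$, $y_2=\mu_2$: both right-hand sides vanish there, so a trajectory started at that point stays constant while nearby trajectories cycle around it.
```python
# Minimal sketch: at the non-trivial equilibrium (mu1, mu2) both derivatives vanish.
print(f_predprey(0, np.array([mu1, mu2])))   # expected: [0. 0.]
```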
```python
def plotting_predprey(y01=70, y02=100, N=1000, T=30, ODEsolver=eulerMethod):
y0=np.array([y01, y02])
f= lambda t,y: f_predprey(t,y)
t0 = 0
t_times, y_output = ODEsolver(t0,T,N,y0,f)
h = (T-t0)/N
if ODEsolver==eulerMethod:
c1,c2,c3='b','k','k'
elif ODEsolver==RK2:
c1,c2,c3='k','b','k'
else:
c1,c2,c3='k','k','b'
plot_sol_eig_phase_portrait_2eq(t_times,y_output,f_predprey_jacobian,h,c1,c2,c3)
interact(plotting_predprey,y01=(0,1000,10),y02=(0,1000,10),
N=(10,1000,10),T=(10,200,10),
ODEsolver=radio_button_ODEsolvers)
```
interactive(children=(IntSlider(value=70, description='y01', max=1000, step=10), IntSlider(value=100, descript…
<function __main__.plotting_predprey(y01=70, y02=100, N=1000, T=30, ODEsolver=<function eulerMethod at 0x7fd920c4fd30>)>
## Now solving the following equation
\begin{align*}
\dot{y}_1(t) &= y_2(t),\\
\dot{y}_2(t) &= -y_1(t),
\end{align*}
with $y_1(0)=1$ and $y_2(0)=0$.
What functions are these? Do you recognize them?
```python
def f_trig(t,y):
y1 = y[0]
y2 = y[1]
return np.array([y2, -y1])
def f_trig_jacobian(t,y):
return np.array([0.+1.j, 0.-1.j])
def plotting_f_trig(y01=1, y02=0, N=100, T=2*np.pi, ODEsolver=eulerMethod):
y0=np.array([y01, y02])
f= lambda t,y: f_trig(t,y)
t0 = 0
t_times, y_output = ODEsolver(t0,T,N,y0,f)
h = (T-t0)/N
if ODEsolver==eulerMethod:
c1,c2,c3='b','k','k'
elif ODEsolver==RK2:
c1,c2,c3='k','b','k'
else:
c1,c2,c3='k','k','b'
plot_sol_eig_phase_portrait_2eq(t_times,y_output,f_trig_jacobian,h,c1,c2,c3)
interact(plotting_f_trig,y01=(-2,2,0.1),y02=(-2,2,0.1),
N=(10,1000,10),T=(1,10*np.pi,0.1),
ODEsolver=radio_button_ODEsolvers)
```
interactive(children=(FloatSlider(value=1.0, description='y01', max=2.0, min=-2.0), FloatSlider(value=0.0, des…
<function __main__.plotting_f_trig(y01=1, y02=0, N=100, T=6.283185307179586, ODEsolver=<function eulerMethod at 0x7fd920c4fd30>)>
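Without giving the answer away, note that the exact solution of this system conserves the quantity $y_1^2+y_2^2$; monitoring how well a method preserves it is a simple quality check. A minimal sketch, reusing `RK4` and `f_trig` from above:
```python
# Minimal sketch: the exact solution conserves y1^2 + y2^2; a good method keeps it close to 1.
t_chk, y_chk = RK4(0, 2*np.pi, 100, np.array([1.0, 0.0]), f_trig)
invariant = y_chk[:, 0]**2 + y_chk[:, 1]**2
print(invariant.min(), invariant.max())
```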
## Last but not least, solving:
\begin{align*}
\dot{y}(t) &= \lambda\,y(t),\\
\end{align*}
with $y(0)=1$.
What function is this? Do you recognize it?
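For this scalar equation Euler's method has a closed form, $y_N=(1+h\,\lambda)^N\,y_0$, which is a handy way to see how the numerical solution approaches the exact one as $N$ grows. A minimal sketch (the values below are chosen only for illustration):
```python
# Minimal sketch: Euler's closed form (1 + h*lam)^N * y0 versus the exact solution exp(lam*T)*y0.
lam_chk, T_chk, y0_chk = -1.0, 10.0, 1.0
for N_chk in [10, 100, 1000]:
    h_chk = T_chk/N_chk
    print(N_chk, (1 + h_chk*lam_chk)**N_chk * y0_chk, np.exp(lam_chk*T_chk)*y0_chk)
```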
```python
def f_exp(t,y,lam=-1):
return lam*y
def f_exp_jac(t,y,lam=-1):
return lam
def plotting_f_exp(y0=1, N=100, T=10, lam=-1, ODEsolver=eulerMethod):
f= lambda t,y: f_exp(t,y, lam)
t0 = 0
t_times, y_output = ODEsolver(t0,T,N,y0,f)
h = (T-t0)/N
if ODEsolver==eulerMethod:
c1,c2,c3='b','k','k'
elif ODEsolver==RK2:
c1,c2,c3='k','b','k'
else:
c1,c2,c3='k','k','b'
# Plotting
fig = plt.figure(figsize=(L*2/3,L/3))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
ax1.grid(True)
ax1.set_title('Numerical Approximation')
ax1.plot(t_times,y_output[:],'.b')
ax1.plot(t_times,y_output[:],'k-',alpha=0.5)
#ax1.axis([0, T, -2, 2])
zplot2(W1,ax2,line_color=c1)
zplot2(W2,ax2,line_color=c2)
zplot2(W4,ax2,line_color=c3)
for i in range(1,t_times.size):
k_lambdah=h*f_exp_jac(t_times[i],y_output[i],lam)
ax2.plot(np.real(k_lambdah),np.imag(k_lambdah),'.r',ms=10,alpha=.4)
ax2.axis('equal')
ax2.axis([-5, 2, -3.5, 3.5])
ax2.grid('on')
interact(plotting_f_exp,y0=(-2,2,0.1),
N=(5,1000,10),T=(1,20,1), lam=(-2,2,0.1),
ODEsolver=radio_button_ODEsolvers)
```
interactive(children=(FloatSlider(value=1.0, description='y0', max=2.0, min=-2.0), IntSlider(value=100, descri…
<function __main__.plotting_f_exp(y0=1, N=100, T=10, lam=-1, ODEsolver=<function eulerMethod at 0x7fd920c4fd30>)>
<div id='BVP' />
# Boundary Value Problems
[Back to TOC](#toc)
An instance of a BVP corresponds to an ordinary differential equation of the following form:
$$ y''(x) = f(x,y(x),y'(x)),$$
with the following boundary conditions,
\begin{align*}
y(a) &= y_a,\\
y(b) &= y_b.
\end{align*}
To understand them better, let's find a numerical approximation of the following problem:
\begin{align*}
y''(x) &= 4\,y(x),\\
y(0) &= y_0,\\
y(1) &= y_n.
\end{align*}
<div id='MD' />
## Shooting Method
[Back to TOC](#toc)
This method takes advantage of the fact that we already know how to handle IVPs and uses that knowledge to find a numerical approximation of a BVP.
The main steps are the following:
- We have a BVP on the form:
\begin{align*}
y''(x) &= f(x,y(x),y'(x)),\\
y(a) &= y_a,\\
y(b) &= y_b.
\end{align*}
- Let's consider the following steps to translate the BVP to an IVP:
1. $x \rightarrow t$, this implies that $t_0=a$ and $T=b$.
2. \begin{align*}
\ddot{y}(t) &= f(t,y(t),\dot{y}(t)),\\
y(t_0) &= y_a,\\
\dot{y}(t_0) &= \alpha.
\end{align*}
**Warning**: We actually don't know $\alpha$, but we do need it since we are treating the previous problem as an IVP. What we do have is $y(b)=y(T)=y_b$.
3. So, since we don't know $\alpha$, we need to find it by solving the 1D root-finding problem $F(\alpha) = y_N^{(\alpha)} - y_b = 0$, where $y_N^{(\alpha)}$ denotes the IVP approximation at time $t=T$ obtained using $\alpha$ as the initial condition for $\dot{y}(t_0)$.
4. After we find the root of $F(\alpha)$, we can say we have found a numerical approximation of $y(x)$, since the approximation satisfies the ODE and both boundary conditions on the interval $[a,b]$. This is wonderful!
Notice, however, that the function $F(\alpha)$ is the output of the execution of an algorithm, so we can only expect it to be continuous.
This makes it an interesting place to apply the Bisection method!
The main disadvantage of this algorithm is that we need to solve several IVPs until we find the right one; the good news is that we are re-using previous knowledge on a new type of problem!
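Before the automatic approach further below (which uses a `root` function), here is a minimal, self-contained sketch of the shooting idea with plain bisection; the helper names, the number of iterations and the bracket $[-5,5]$ are illustrative assumptions only:
```python
# Minimal sketch of the shooting method for y'' = 4*y, y(0)=1, y(1)=3, via bisection on alpha.
def rhs_sketch(t, y):
    return np.array([y[1], 4*y[0]])

def F_sketch(alpha, yb=3.0, N=200):
    # integrate the IVP with initial slope alpha and compare y1(T) with the right boundary value
    _, y_out = RK4(0, 1, N, np.array([1.0, alpha]), rhs_sketch)
    return y_out[-1, 0] - yb

a_lo, a_hi = -5.0, 5.0               # bracket assumed to contain a sign change of F_sketch
for _ in range(50):                  # plain bisection
    a_mid = 0.5*(a_lo + a_hi)
    if F_sketch(a_lo)*F_sketch(a_mid) <= 0:
        a_hi = a_mid
    else:
        a_lo = a_mid
print(a_mid)                         # the missing slope alpha (about -0.42 here)
```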
```python
# BVP: y''(x)= 4*y(x), y(0)=1, and y(1)=3, x \in [0,1].
# Dynamical system:
# y1=y -> y1'=y'=y2
# y2=y' -> y2'=y''=4*y=4*y1.
# y1(0) = 1, (and we know y(1)=3)
# y1(1) = 3.
# y2(0) = alpha, we don't know it.
def f(t,y):
return np.array([y[1],4*y[0]])
```
```python
def shooting_method_101(alpha=-1, N=50, ya=1, yb=3, ODEsolver=eulerMethod):
t0 = 0
T = 1
y0 = np.array([ya,alpha])
t_times, y_output = ODEsolver(t0,T,N,y0,f)
h = (T-t0)/N
fig = plt.figure(figsize=(L/2,L/2))
ax = fig.gca()
plt.grid(True)
plt.title("Numerical Approximation")
plt.plot(t_times,y_output[:,0],'r.',ms=12,alpha=0.5,label=r'$y1_i$')
plt.plot(t_times,y_output[:,1],'m.',ms=12,alpha=0.5,label=r'$y2_i$')
plt.plot(0,1,'*k',ms=16,label='Left BC')
plt.plot(1,3,'*g',ms=16,label='Right BC')
plt.plot(0,alpha,'darkorange',ms=16,label=r'$\alpha$', marker=r'$\alpha$')
plt.legend(loc='best')
interact(shooting_method_101, alpha=(-2,2,0.1),
N=(5,100,1), ya=(-5,5,0.1), yb=(-5,5,0.1),
ODEsolver=radio_button_ODEsolvers)
```
interactive(children=(FloatSlider(value=-1.0, description='alpha', max=2.0, min=-2.0), IntSlider(value=50, des…
<function __main__.shooting_method_101(alpha=-1, N=50, ya=1, yb=3, ODEsolver=<function eulerMethod at 0x7fd920c4fd30>)>
### Now, let's find $\alpha$ automatically
```python
y0 = np.array([1, 1.1])
```
```python
def F(alpha, ya=1, yb=3, N=100, ODEsolver=eulerMethod):
t0 = 0
T = 1
y0 = np.zeros(2)
y0[0] = ya
y0[1] = alpha
t_times, y_output = ODEsolver(t0,T,N,y0,f)
# Notice that we don't use the absolute value of the error since
# we want to look at this problem as a root-finding problem
# and not as a minimization problem, which could be another approach.
return y_output[-1,0]-yb
```
```python
t0 = 0
T = 1
ya = 1
yb = 3
N = 100
ODEsolver = RK4
F_root = lambda alpha: F(alpha, ya=ya, yb=yb, N=N, ODEsolver=ODEsolver)
alpha = root(F_root, 0.).x[0]
y0 = np.array([ya,alpha])
t_times, y_output = ODEsolver(t0,T,N,y0,f)
fig = plt.figure(figsize=(L/2,L/2))
plt.grid(True)
plt.title("Numerical Approximation")
plt.plot(t_times,y_output[:,0],'r.',ms=12,alpha=0.5,label=r'$y1_i$')
plt.plot(t_times,y_output[:,1],'m.',ms=12,alpha=0.5,label=r'$y2_i$')
plt.plot(0,ya,'*k',ms=16,label='Left BC')
plt.plot(1,yb,'*g',ms=16,label='Right BC')
plt.plot(0,alpha,'darkorange',ms=16,label=r'$\alpha$', marker=r'$\alpha$')
plt.legend(loc='best')
print(alpha)
```
### Let's take a look at $F(\alpha, \dots)$ and find the 'BUG'!
```python
ODEsolver = eulerMethod
several_alphas = np.linspace(-1,0,100)
Fv = np.vectorize(lambda x: F(x,ODEsolver=ODEsolver))
F_alphas = Fv(several_alphas)
plt.figure(figsize=(10,10))
plt.plot(several_alphas,F_alphas,'.b',label=r'$F(\alpha)$')
plt.plot(several_alphas,F_alphas*0,'-g',alpha=0.8,linewidth=3,label=r'$0$')
plt.grid(True)
plt.plot(alpha,F(alpha,ODEsolver=ODEsolver),'.r',ms=20,label=r'Root: $\alpha$='+str(alpha))
plt.xlabel(r'$\alpha$')
plt.ylabel(r'$F(\alpha)=y_N^{(\alpha)} - y_b$')
plt.legend(loc='best')
plt.show()
```
<div id='DD' />
## Finite Differences
[Back to TOC](#toc)
This method opens a new door of algorithms for solving BVPs.
The main idea is to approximate each derivative of $y(x)$ by a finite difference approximation and then solve the associated linear (or possibly non-linear) system of equations.
Here we list a few finite difference approximations for the first derivative and one for the second derivative:
- Forward Difference:
$$ y'(x) = \dfrac{y(x+h) - y(x)}{h} + O(h)$$
- Backward Difference:
$$ y'(x) = \dfrac{y(x) - y(x-h)}{h} + O(h)$$
- Central Difference:
$$ y'(x) = \dfrac{y(x+h) - y(x-h)}{2h} + O(h^2)$$
And for the second derivative:
\begin{align*}
y''(x) &= \dfrac{\text{Forward Difference } - \text{ Backward Difference}}{h} + O(h^2) \\
y''(x) &= \dfrac{\dfrac{y(x+h) - y(x)}{h} - \dfrac{y(x) - y(x-h)}{h}}{h} + O(h^2) \\
y''(x) &= \dfrac{y(x+h) - 2\,y(x) + y(x-h)}{h^2} + O(h^2) \\
\end{align*}
Notice that in all the previous cases we first need to define the function on a grid; the grid does not have to be regular, but for simplicity we will consider it regular.
Recall that the BVP is defined on $[a,b]$, so the regular grid is defined as follows: $x_i=a+i\,h$, where $h=\dfrac{b-a}{N}$.
This also helps us define the approximation of the function over the grid, which means the following:
- Traditionally we say $y(x)$ is the function defined on $x\in[a,b]$.
- Now, we will have a grid function $y_i\approx y(x_i)$ for $i\in\{0,1,2,\dots,N\}$. In particular, the vector $\mathbf{y}$ will store all the scalar values $y_i$ and the vector $\mathbf{x}$ will store the grid values $x_i$. Thus, in this discrete world, $\mathbf{y}$ approximates the continuous function $y(x)$ on $[a,b]$.
On the other hand, the finite difference operator over the finite grid will be then defined as follows:
- Forward Difference: $ y'(x_i)=\dfrac{y_{i+1} - y_i}{h} + O(h) $.
- Backward Difference: $ y'(x_i) = \dfrac{y_i - y_{i-1}}{h} + O(h) $.
- Central Difference: $ y'(x_i) = \dfrac{y_{i+1} - y_{i-1}}{2h} + O(h^2)$.
- Second derivative finite difference: $y''(x_i) = \dfrac{y_{i+1} - 2\,y_i + y_{i-1}}{h^2} + O(h^2)$.
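As a quick sanity check of these operators (a minimal sketch on a smooth test function), the second-derivative stencil applied to $\sin(x)$ should reproduce $-\sin(x)$ with an error that drops by about a factor of $4$ each time $h$ is halved:
```python
# Minimal sketch: verify the O(h^2) second-derivative stencil on y(x) = sin(x), y''(x) = -sin(x).
x0 = 1.0
for h_chk in [0.1, 0.05, 0.025]:
    d2 = (np.sin(x0 + h_chk) - 2*np.sin(x0) + np.sin(x0 - h_chk))/h_chk**2
    print(h_chk, abs(d2 + np.sin(x0)))
```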
Now, let's apply them!
```python
def solve_finite_difference_eq(M=4,ya=1,yb=3):
# Spatial discretization,
# 'M' represents the number of intervals to be used.
h=(1.-0.)/M
# Building Finite Difference Discretization
deltas=-(2.+4.*(h**2.))
A=toeplitz(np.append(np.array([deltas, 1.]), np.zeros(M-3)))
# Building RHS
b=np.append(-ya, np.zeros(M-3))
b=np.append(b,-yb)
# Solving the linear system of equations
y=np.linalg.solve(A, b)
# Adding back the boundary conditions into the solution
y=np.append([ya], y)
y=np.append(y,[yb])
x_FD = np.linspace(0,1,M+1) # xi, i.e. the spatial grid.
y_FD = y # yi, i.e. the approximation of y(x_i).
return x_FD, y_FD, A, b
def plot_solution_finite_difference_eq(M=4,ya=1,yb=3, FLAG_test=True):
# Solving with Finite Difference
x_FD, y_FD, A, b = solve_finite_difference_eq(M,ya,yb)
# Plotting
fig = plt.figure(figsize=(L,L/2))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
# Plotting the discrete solution
ax1.grid(True)
ax1.set_title("FD")
ax1.plot(x_FD,y_FD,'.',ms=20)
ax1.grid("on")
# Plotting the pattern and coefficients of the tri-diagonal matrix
plot_matrices_with_values(ax2,A,FLAG_test)
interact(plot_solution_finite_difference_eq, M=(3,25,1),
ya=(-5,5,0.1), yb=(-5,5,0.1),
FLAG_test=Checkbox(value=True, description='Show numerical values'))
```
interactive(children=(IntSlider(value=4, description='M', max=25, min=3), FloatSlider(value=1.0, description='…
<function __main__.plot_solution_finite_difference_eq(M=4, ya=1, yb=3, FLAG_test=True)>
## Comparing the Shooting Method with Finite Differences
```python
def shooting_method_vs_FD(alpha=-0.420306048726636, N=50, ya=1, yb=3, ODEsolver=eulerMethod, M=4):
# Shooting Method
t0 = 0
T = 1
y0 = np.array([ya,alpha])
t_times, y_output = ODEsolver(t0,T,N,y0,f)
h = (T-t0)/N
# Finite Differences
x_FD, y_FD, A, b = solve_finite_difference_eq(M,ya,yb)
fig = plt.figure(figsize=(L/2,L/2))
ax = fig.gca()
plt.grid(True)
plt.title("Numerical Approximations")
plt.plot(t_times,y_output[:,0],'r.',ms=12,alpha=0.5,label=r'SM: $y1_i$')
plt.plot(t_times,y_output[:,1],'m.',ms=12,alpha=0.5,label=r'SM: $y2_i$')
plt.plot(x_FD,y_FD,'b.',ms=20,label=r'FD: $y_i$')
plt.plot(0,1,'*k',ms=16,label='Left BC')
plt.plot(1,3,'*g',ms=16,label='Right BC')
plt.plot(0,alpha,'r',ms=16,label=r'$\alpha$', marker=r'$\alpha$')
plt.legend(loc='best')
interact(shooting_method_vs_FD, alpha=(-2,2,0.01),
N=(5,100,1), ya=(-5,5,0.1), yb=(-5,5,0.1),
ODEsolver=radio_button_ODEsolvers,
M=(3,25,1))
```
interactive(children=(FloatSlider(value=-0.420306048726636, description='alpha', max=2.0, min=-2.0, step=0.01)…
<function __main__.shooting_method_vs_FD(alpha=-0.420306048726636, N=50, ya=1, yb=3, ODEsolver=<function eulerMethod at 0x7fd920c4fd30>, M=4)>
In summary,
- Both methods find a reasonable approximation.
- The Shooting Method requires solving several IVPs until it finds the missing slope $\alpha$, but it re-uses previous knowledge.
- Finite Differences requires solving a linear (or possibly non-linear) system of equations, but it needs only a few points to get a reasonable approximation.
- Therefore, depending on the numerical problem that needs to be solved, one or the other may be the better choice!
<div id='acknowledgements' />
# Acknowledgements
[Back to TOC](#toc)
- _Material creado por profesor Claudio Torres_ (`ctorres@inf.utfsm.cl`) _y ayudantes: Alvaro Salinas y Martín
Villanueva. DI UTFSM. Abril 2016._
- **DISCLAIMER**: El presente notebook ha sido creado para el curso **ILI286 - Computación Científica 2**, del [Departamento de Informática](http://www.inf.utfsm.cl/), [Universidad Técnica Federico Santa María](http://www.utfsm.cl/). El material ha sido creado por Claudio Torres <ctorres@inf.utfsm.cl> y Sebastian Flores <sebastian.flores@usm.cl>, y es distribuido sin restricciones. En caso de encontrar un error, por favor no dude en contactarnos.
- [Update 2015] Se ha actualizado los notebooks a Python 3 e includio el "magic" "%matplotlib inline" antes de cargar matplotlib para que los gráficos se generen en el notebook.
- [Update 2016] (Álvaro) Modificaciones mayores al formato original. Agregado contexto: Introducción, Tabla de Contenidos, Explicaciones de cada método.
- [Update 2019] (C. Torres) Small changes. Fixing issue with title of sections and identation. Adding 'interact' to the logistic equation! Adding interact to everything, work in progress. All done, Enjoy!
- _Update July 2021 - v1.16 - C.Torres_ : Updating format and translating to English. Several updates in several functions! Major update!
- _Update July 2021 - v1.17 - C.Torres_ : Minor update, removing commented code.
- _Update July 2021 - v1.18 - C.Torres_ : Removing warning and changing the name of some variables.
```python
```
|
b55dd5b91a479193fda51a875775401fc938d57f
| 195,825 |
ipynb
|
Jupyter Notebook
|
SC1v2/11_ODE.ipynb
|
luciofondon98/Scientific-Computing
|
c9da76ee6aa7d165c26814875e3d1c109719ff37
|
[
"BSD-3-Clause"
] | null | null | null |
SC1v2/11_ODE.ipynb
|
luciofondon98/Scientific-Computing
|
c9da76ee6aa7d165c26814875e3d1c109719ff37
|
[
"BSD-3-Clause"
] | null | null | null |
SC1v2/11_ODE.ipynb
|
luciofondon98/Scientific-Computing
|
c9da76ee6aa7d165c26814875e3d1c109719ff37
|
[
"BSD-3-Clause"
] | null | null | null | 114.383762 | 39,924 | 0.83376 | true | 12,925 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.737158 | 0.795658 | 0.586526 |
__label__eng_Latn
| 0.663814 | 0.201026 |
```python
import numpy as np
import sympy as sy
sy.init_printing()
```
# <font face="gotham" color="red"> Similarity </font>
If $A = PBP^{-1}$, we say $A$ is <font face="gotham" color="red">similar</font> to $B$; decomposing $A$ into $PBP^{-1}$ is also called a <font face="gotham" color="red">similarity transformation</font>.
<font face="gotham" color="red">If $n\times n$ matrices $A$ and $B$ are similar, they have the same eigenvalues</font>.
The <font face="gotham" color="red">diagonalization</font>, which we will explain, is a process of finding similar matrices.
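A minimal sketch of this fact with a concrete (and arbitrary) choice of $B$ and $P$:
```python
# Minimal sketch: similar matrices share eigenvalues.
B = sy.Matrix([[2, 1], [0, 3]])
P = sy.Matrix([[1, 1], [0, 1]])          # any invertible matrix works here
A_similar = P * B * P.inv()
B.eigenvals(), A_similar.eigenvals()     # both give {2: 1, 3: 1}
```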
# <font face="gotham" color="purple"> Diagonalizable Matrix</font>
Let $A$ be an $n\times n$ matrix. If there exists an $n\times n$ invertible matrix $P$ and a diagonal matrix $D$, such that
$$
A=PDP^{-1}
$$
then matrix $A$ is called a diagonalizable matrix.
And further, <font face="gotham" color="red">the columns of $P$ are linearly independent eigenvectors of $A$, and its corresponding eigenvalues are on the diagonal of $D$. In other words, $A$ is diagonalizable if and only if the dimension of eigenspace basis is $n$</font>.
It's easy to show why this equation holds.
Let
$$
P = \left[\begin{array}{llll}
{v}_{1} & {v}_{2} & \cdots & {v}_{n}
\end{array}\right]\\
$$
$$
D = \left[\begin{array}{cccc}
\lambda_{1} & 0 & \cdots & 0 \\
0 & \lambda_{2} & \cdots & 0 \\
\vdots & \vdots & & \vdots \\
0 & 0 & \cdots & \lambda_{n}
\end{array}\right]
$$
where $v_i, i \in (1, 2, ...n)$ is an eigenvector of $A$, $\lambda_i, i \in (1, 2, ...n)$ is an eigenvalue of $A$.
$$
AP = A\left[\begin{array}{llll}
{v}_{1} & {v}_{2} & \cdots & {v}_{n}
\end{array}\right]=\left[\begin{array}{llll}
A {v}_{1} & A {v}_{2} & \cdots & A {v}_{n}
\end{array}\right]
$$
$$P D=P\left[\begin{array}{cccc}
\lambda_{1} & 0 & \cdots & 0 \\
0 & \lambda_{2} & \cdots & 0 \\
\vdots & \vdots & & \vdots \\
0 & 0 & \cdots & \lambda_{n}
\end{array}\right]=\left[\begin{array}{lllll}
\lambda_{1} {v}_{1} & \lambda_{2} {v}_{2} & \cdots & \lambda_{n} {v}_{n}
\end{array}\right]$$
We know that $A{v}_i = \lambda_i{v}_i$, i.e.
$$
AP = PD
$$
Since $P$ has all independent eigenvectors, then
$$
A = PDP^{-1}
$$
# <font face="gotham" color="purple"> Diagonalizing a Matrix</font>
Consider a matrix
$$A=\left[\begin{array}{rrr}
1 & 3 & 3 \\
-3 & -5 & -3 \\
3 & 3 & 1
\end{array}\right]$$
diagonalize the matrix $A$.
Following these steps:
1. Find the eigenvalues of $A$
2. Find the eigenvectors of $A$
3. Construct $P$.
4. Construct $D$ from the corresponding columns of $P$.
```python
A = sy.Matrix([[1,3,3], [-3, -5, -3], [3,3,1]])
eig = sy.matrices.MatrixEigen.eigenvects(A)
eig
```
Construct $P$
```python
P = sy.zeros(3, 3)
P[:, 0] = eig[0][2][0]
P[:, 1] = eig[0][2][1]
P[:, 2] = eig[1][2][0]
P
```
Construct $D$
```python
D = sy.diag(eig[0][0], eig[0][0], eig[1][0])
D
```
We can verify if $PDP^{-1}=A$ holds:
```python
P * D * P.inv() == A
```
Of course, we don't need to go through this process separately. There is a ```diagonalize``` method in SymPy.
```python
P, D = A.diagonalize()
```
```python
P
```
```python
D
```
Sometimes we just want to test whether a matrix is diagonalizable; in that case, use ```is_diagonalizable``` in SymPy.
```python
A.is_diagonalizable()
```
|
4b1c5fe5effbae2e00ec1a1c3fcd1e0cf30a94b6
| 9,173 |
ipynb
|
Jupyter Notebook
|
Mathematics/Linear Algebra/0.0 MIT-18.06 - Jupyter/Strang-Book-Linear_Algebra_With_Python-Jupyter-Notebooks/Chapter 13 - Diagonalization.ipynb
|
okara83/Becoming-a-Data-Scientist
|
f09a15f7f239b96b77a2f080c403b2f3e95c9650
|
[
"MIT"
] | null | null | null |
Mathematics/Linear Algebra/0.0 MIT-18.06 - Jupyter/Strang-Book-Linear_Algebra_With_Python-Jupyter-Notebooks/Chapter 13 - Diagonalization.ipynb
|
okara83/Becoming-a-Data-Scientist
|
f09a15f7f239b96b77a2f080c403b2f3e95c9650
|
[
"MIT"
] | null | null | null |
Mathematics/Linear Algebra/0.0 MIT-18.06 - Jupyter/Strang-Book-Linear_Algebra_With_Python-Jupyter-Notebooks/Chapter 13 - Diagonalization.ipynb
|
okara83/Becoming-a-Data-Scientist
|
f09a15f7f239b96b77a2f080c403b2f3e95c9650
|
[
"MIT"
] | 2 |
2022-02-09T15:41:33.000Z
|
2022-02-11T07:47:40.000Z
| 27.79697 | 1,483 | 0.527635 | true | 1,195 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.91611 | 0.868827 | 0.795941 |
__label__eng_Latn
| 0.868255 | 0.687569 |
```python
from IPython.core.display import HTML, Image
css_file = 'style.css'
HTML(open(css_file, 'r').read())
```
<link href='http://fonts.googleapis.com/css?family=Alegreya+Sans:100,300,400,500,700,800,900,100italic,300italic,400italic,500italic,700italic,800italic,900italic' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Arvo:400,700,400italic' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=PT+Mono' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Shadows+Into+Light' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Philosopher:400,700,400italic,700italic' rel='stylesheet' type='text/css'>
<style>
@font-face {
font-family: "Computer Modern";
src: url('http://mirrors.ctan.org/fonts/cm-unicode/fonts/otf/cmunss.otf');
}
/* Formatting for header cells */
.text_cell_render h1 {
font-family: 'Philosopher', sans-serif;
font-weight: 400;
font-size: 2.2em;
line-height: 100%;
color: rgb(0, 80, 120);
margin-bottom: 0.1em;
margin-top: 0.1em;
display: block;
}
.text_cell_render h2 {
font-family: 'Philosopher', serif;
font-weight: 400;
font-size: 1.9em;
line-height: 100%;
color: rgb(245,179,64);
margin-bottom: 0.1em;
margin-top: 0.1em;
display: block;
}
.text_cell_render h3 {
font-family: 'Philosopher', serif;
margin-top:12px;
margin-bottom: 3px;
font-style: italic;
color: rgb(94,127,192);
}
.text_cell_render h4 {
font-family: 'Philosopher', serif;
}
.text_cell_render h5 {
font-family: 'Alegreya Sans', sans-serif;
font-weight: 300;
font-size: 16pt;
color: grey;
font-style: italic;
margin-bottom: .1em;
margin-top: 0.1em;
display: block;
}
.text_cell_render h6 {
font-family: 'PT Mono', sans-serif;
font-weight: 300;
font-size: 10pt;
color: grey;
margin-bottom: 1px;
margin-top: 1px;
}
.CodeMirror{
font-family: "PT Mono";
font-size: 100%;
}
</style>
## Library import
```python
from sympy import init_printing, Matrix, symbols, eye, Rational
init_printing()
```
# LU decomposition of a matrix A
In this notebook, we will decompose the matrix $A$ into an upper and a lower triangular matrix, such that multiplying these returns $A$. This is shown in (1), where $L$ is the lower triangular matrix and $U$ is the upper triangular matrix.
$$ A = LU \tag{1}$$
## Turning a matrix of coefficients into _U_pper triangular form
Consider the following matrix of coefficients shown in (2).
$$ \begin{bmatrix} 1 & -2 & 1 \\ 3 & 2 & -2 \\ 6 & -1 & -1 \end{bmatrix} \tag{2}$$
We need to convert this into upper triangular form. A generic $ 3 \times 3$ upper triangular matrix is shown in (3).
$$ \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a_{22} & a_{23} \\ 0 & 0 & a_{33} \end{bmatrix} \tag{3} $$
Note that all entries _below_ the main diagonal are $0$. This is an _upper triangular matrix_.
To get our matrix in (2) into upper triangular form, successive elementary row operations follow, which, remember, are nothing other than multiplications by elementary matrices. An elementary matrix is an identity matrix on which one elementary row operation was performed. Below, we create the $3 \times 3$ matrix in (2) and save it as a `sympy` matrix object named `A`.
```python
A = Matrix([[1, -2, 1], [3, 2, -2], [6, -1, -1]])
A
```
The `eye()` function returns an identity matrix.
```python
eye(3) # Identity matrix of size 3 x 3
```
We have to multiply the first _pivot_ (the $1$ in row $1$, column $1$) by $-3$ to get rid of the $3$ in row $2$, column $1$ (we call the resulting elementary matrix `E21`, referring to row $2$, column $1$). Then we add this scaled row $1$ to row $2$. Row $1$ of the identity matrix scaled by $-3$ is $\left( -3,0,0 \right) $ (but we leave row $1$ as $\left( 1,0,0 \right)$ in `E21`). Adding it to row $2$ leaves $ \left( -3,1,0 \right) $.
To see why this is so, if we multiply row $1$ by $-3$ we have $ \left( -3, 0, 0 \right) $. Adding this to row $2$, which is $ \left( 0,1,0 \right) $, results in $ \left( -3,1,0 \right) $ and hence `E21` being what we see below.
```python
E21 = Matrix([[1, 0, 0], [-3, 1, 0], [0, 0, 1]])
E21
```
Now we left-multiply $A$ by the elementary matrix `E21`. In matrix notation, we would write $E_{21} A$.
```python
E21 * A # The resulting matrix after multiplication by E21
```
Good, we now have a leading $0$ in row $2$. We follow the same steps to get rid of the leading $6$ in row $3$, column $1$. Multiplying row $1$ (of the identity matrix) by $-6$ and adding this new row to row $3$ yields elementary matrix `E31`.
```python
E31 = Matrix([[1, 0, 0], [0, 1, 0], [-6, 0, 1]])
E31
```
Now for the left-multiplication of $E_{21} A$ by `E31` so that we have $E_{31} E_{21} A$.
```python
E31 * E21 * A # This got rid of the leading 6 in row 3
```
Now the $8$ in row $2$, column $2$ is the _pivot_ and we need to get rid of the $11$ in row $3$, column $2$. Unfortunately, we have an $8$ and an $11$ to deal with, so we will have to combine two elementary row operations: first $-11$ times row $2$ of the identity matrix $ \therefore \left( 0,-11,0 \right) $, added to $8$ times row $3$ $ \therefore \left( 0,0,8 \right) $, such that we have $ \left( 0,-11,8 \right) $. Below it is saved as `E32`.
```python
E32 = Matrix([[1, 0 , 0], [0, 1, 0], [0, -11, 8]])
E32
```
```python
U = E32 * E31 * E21 * A
U # We call is U for upper triangular
```
The matrix is now in upper triangular form, achived by the elementary matrixes shown in (4).
$$ { E }_{ 32 } { E }_{ 31 } { E }_{ 21 } A=U \tag{4} $$
## Calculating the _L_ower triangular form
Note, to reverse the process above, we would have to do the multiplication shown in (5).
$$ { \left( { E }_{ 21 } \right) }^{ -1 }{ \left( { E }_{ 31 } \right) }^{ -1 }{ \left( { E }_{ 32 } \right) }^{ -1 }\left( { E }_{ 32 } \right) \left( { E }_{ 31 } \right) \left( { E }_{ 21 } \right) A=A \tag{5} $$
The inverse of a matrix can be calculated using the `sympy` method `.inv()`.
We can check this with a Boolean logic, using the `==` symbol, which checks if the left- and right-hand sides re equal.
```python
E21.inv() * E31.inv() * E32.inv() * E32 * E31 * E21 * A == A # The Boolean double equal signs asks: Is the
# left-hand side equal to the right-hand side?
```
True
Multiplying the inverse elementary matrices by the elementary matrices brings us back to the identity matrix.
```python
E21.inv() * E31.inv() * E32.inv() * E32 * E31 * E21
```
We left-multiply both sides of (4) by these inverse elementary matrices as shown in (6).
$$ { \left( { E }_{ 21 } \right) }^{ -1 }{ \left( { E }_{ 31 } \right) }^{ -1 }{ \left( { E }_{ 32 } \right) }^{ -1 }\left( { E }_{ 32 } \right) \left( { E }_{ 31 } \right) \left( { E }_{ 21 } \right) A={ \left( { E }_{ 21 } \right) }^{ -1 }{ \left( { E }_{ 31 } \right) }^{ -1 }{ \left( { E }_{ 32 } \right) }^{ -1 }U \tag{6} $$
The multiplication of these inverse elementary matrices results in a _l_ower triangular matrix, $L$, such that we have (1).
```python
L = E21.inv() * E31.inv() * E32.inv()
L
```
Let's check if this is so.
```python
A == L * U # Checking this with a Boolean question
```
True
```python
A, L * U # They are identical
```
## Doing this in one go using sympy
The `sympy` library provides the `.LUdecomposition()` method for rectangular matrices. It returns three values.
```python
L, U, _ = A.LUdecomposition()
```
```python
L
```
```python
U # Note the difference from the U above
```
$$\left[\begin{matrix}1 & -2 & 1\\0 & 8 & -5\\0 & 0 & - \frac{1}{8}\end{matrix}\right]$$
Note the subtle difference between this $U$ and the one calculated above. It simply has row $3$ divided by $8$. It makes no difference as shown below.
```python
L * U # Back to A
```
### What's special about L?
Our method only works when no row interchange happens. It also only works when doing the conventional operation of subtracting a scalar multiple of one row from another row, leaving the positive scalar. (This is opposed to the negatives I often use in my head, allowing me to add the two rows.)
Note the $3$ (in row $2$, column $1$) and the $6$ (in row $3$, column $1$). They are the row multipliers we had to use for `E21` and `E31`. The
$\frac{11}{8}$ is what we did for `E32` (we just did it in two steps so as not to use fractions).
## Row exchanges
Sometimes, we have to allow row exchanges, i.e. if the pivot contains a $0$.
As an example, from a $ 3 \times 3 $ identity matrix we could have the following.
```python
eye(3)
```
Exchanging rows $1$ and $2$.
```python
Matrix([[0, 1, 0], [1, 0, 0], [0, 0, 1]])
```
```python
A, Matrix([[0, 1, 0], [1, 0, 0], [0, 0, 1]]) * A # Showing row exchange
```
By the way, how many permutations of row exchanges are there? The answer is $n!$, where $n$ is the number of rows.
## Example problems
### Example problem 01
1. Perform LU decomposition of:
$$ \begin{bmatrix} 1 & 0 & 1 \\ a & a & a \\ b & b & a \end{bmatrix} $$
2. For which values of $a$ and $b$ do $L$ and $U$ exist?
#### Solution
```python
a, b = symbols('a b')
```
```python
A = Matrix([[1, 0, 1], [a, a, a], [b, b, a]])
A
```
```python
L,U, _ = A.LUdecomposition()
```
```python
L, U
```
Checking.
```python
L * U == A
```
True
For existence, it is clear that $ a \ne 0 $: not only to avoid division by $0$, but because otherwise we would have a row of $0$'s and a $0$ in the pivot position, row $2$, column $2$. Furthermore, $a \ne b$, for the same reasons.
## Hints and tips
```python
E21, E21.inv()
```
To take the inverse of an elementary matrix, simply change the sign of the off-diagonal elements and multiply each element by 1 over the determinant (more about the determinant later). The determinant is easy to do for these $ n=3 $ square matrices, since the top row is $ \left( 1,0,0 \right) $.
```python
E32, E32.inv()
```
$$\begin{pmatrix}\left[\begin{matrix}1 & 0 & 0\\0 & 1 & 0\\0 & -11 & 8\end{matrix}\right], & \left[\begin{matrix}1 & 0 & 0\\0 & 1 & 0\\0 & \frac{11}{8} & \frac{1}{8}\end{matrix}\right]\end{pmatrix}$$
By keeping track of the elementary matrices it is easy to get $L$ and $U$. It is also easy to get the inverses of $L$ and $U$. This means it is easy to calculate the values of a column vector $ \underline{x} $ when we have $ A \underline{x} = \underline{b} $ as shown in (7).
$$ \begin{align} Ax &= LUx=b\\ Ux &= { L }^{ -1 }b\\ x &= { U }^{ -1 }{ L }^{ -1 }b \end{align} \tag{7} $$
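A minimal sketch of equation (7), reusing the numerical matrix from the beginning of this notebook and an arbitrary right-hand side $b$:
```python
# Minimal sketch: solve A x = b through the LU factors, as in equation (7).
A_num = Matrix([[1, -2, 1], [3, 2, -2], [6, -1, -1]])
L_num, U_num, _ = A_num.LUdecomposition()
b_vec = Matrix([1, 2, 3])                 # arbitrary right-hand side
x_vec = U_num.inv() * (L_num.inv() * b_vec)
x_vec, A_num * x_vec - b_vec              # the residual is the zero vector
```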
```python
```
|
b0c6728f50c396bd2b02c55730bdccf14b3f6de8
| 58,910 |
ipynb
|
Jupyter Notebook
|
Mathematics/Linear Algebra/0.0 MIT-18.06 - Jupyter/Lecture_4_LU_decomposition_of_A.ipynb
|
okara83/Becoming-a-Data-Scientist
|
f09a15f7f239b96b77a2f080c403b2f3e95c9650
|
[
"MIT"
] | null | null | null |
Mathematics/Linear Algebra/0.0 MIT-18.06 - Jupyter/Lecture_4_LU_decomposition_of_A.ipynb
|
okara83/Becoming-a-Data-Scientist
|
f09a15f7f239b96b77a2f080c403b2f3e95c9650
|
[
"MIT"
] | null | null | null |
Mathematics/Linear Algebra/0.0 MIT-18.06 - Jupyter/Lecture_4_LU_decomposition_of_A.ipynb
|
okara83/Becoming-a-Data-Scientist
|
f09a15f7f239b96b77a2f080c403b2f3e95c9650
|
[
"MIT"
] | 2 |
2022-02-09T15:41:33.000Z
|
2022-02-11T07:47:40.000Z
| 49.629318 | 3,412 | 0.705449 | true | 3,504 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.600188 | 0.855851 | 0.513672 |
__label__eng_Latn
| 0.980474 | 0.031761 |
# Axion-electron coupling limits vs axion mass
Axion mass-PQ scale relation:
\begin{equation}
m_{a}=5.70(7) \mu \mathrm{eV}\left(\frac{10^{12} \,\mathrm{GeV}}{f_{a}}\right)
\end{equation}
Axion-electron coupling
\begin{equation}
g_{ae} \equiv \frac{C_{ae} m_{e}}{f_{a}}=8.943 \times 10^{-11} C_{ae}\frac{m_a}{\mathrm{eV}}
\end{equation}
Model dependent constant:
\begin{equation}
C_{ae} =
\begin{cases}
2\times 10^{-4} & {\rm KSVZ} \\
[0.024,\frac{1}{3}] & {\rm DFSZ\,I} \\
[-\frac{1}{3},0] & {\rm DFSZ\,II}
\end{cases}
\end{equation}
In DFSZ the lepton mass can come from either coupling to $H_u$ or $H_d$, so $C_{ae} = -C_{au}$ or $C_{ad}$. The range of values for DFSZ I and II come from the perturbativity of the Yukawa couplings with sets the range $0.28<v_{u} / v_{d}<140$ for the Higgs vevs.
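A minimal numerical sketch of the coupling formula above (the mass and $C_{ae}$ values below are chosen only for illustration):
```python
# Minimal sketch: g_ae = 8.943e-11 * C_ae * (m_a / eV)
def g_ae(m_a_eV, C_ae):
    return 8.943e-11 * C_ae * m_a_eV

print(g_ae(1e-5, 2e-4))   # KSVZ-like C_ae at m_a = 10 micro-eV
print(g_ae(1e-5, 1/3))    # upper end of DFSZ I
```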
```python
%matplotlib inline
from numpy import loadtxt
import matplotlib.pyplot as plt
from PlotFuncs import FigSetup,BlackHoleSpins,AxionElectron, MySaveFig
fig,ax = FigSetup(Shape='Rectangular',ylab=r'$|g_{ae}|$',mathpazo=True,\
g_min=1e-15,g_max=1e-10,m_min=1e-9,m_max=1e5,xtick_rotation=0)
AxionElectron.QCDAxion(ax)
AxionElectron.LUX(ax)
AxionElectron.PandaX(ax)
AxionElectron.XENON1T(ax)
AxionElectron.SolarBasin(ax)
AxionElectron.SuperCDMS(ax)
AxionElectron.EDELWEISS(ax)
AxionElectron.RedGiants(ax)
AxionElectron.SolarNu(ax)
AxionElectron.QUAX(ax)
MySaveFig(fig,'AxionElectron')
```
```python
fig,ax = FigSetup(Shape='Rectangular',ylab=r'$|g_{ae}|$',mathpazo=True,\
g_min=1e-15,g_max=1e-10,m_min=1e-9,m_max=1e5,xtick_rotation=0)
AxionElectron.QCDAxion(ax)
AxionElectron.LUX(ax)
AxionElectron.PandaX(ax)
AxionElectron.XENON1T(ax)
AxionElectron.SolarBasin(ax)
AxionElectron.SuperCDMS(ax)
AxionElectron.EDELWEISS(ax,text_on=False)
AxionElectron.RedGiants(ax)
AxionElectron.SolarNu(ax)
AxionElectron.QUAX(ax)
AxionElectron.DARWIN(ax)
AxionElectron.LZ(ax)
AxionElectron.Semiconductors(ax)
AxionElectron.Magnon(ax)
AxionElectron.MagnonScan(ax)
MySaveFig(fig,'AxionElectron_with_Projections')
```
```python
fig,ax = FigSetup(Shape='Rectangular',ylab=r'$|g_{ae}|$',mathpazo=True,\
g_min=1e-15,g_max=1e-10,m_min=1e0,m_max=5e5,xtick_rotation=0)
AxionElectron.QCDAxion(ax,DFSZ_on=True,KSVZ_on=False,Hadronic_on=False)
plt.gcf().text(0.16,0.23,r'{\bf DFSZ axions}',fontsize=30,rotation=37,alpha=1)
AxionElectron.LUX(ax,text_pos=[1.2e0,7e-12])
AxionElectron.PandaX(ax,fs=25,text_pos=[1.1e3,2.8e-13],rotation=0,alpha=0.8,rotation_mode='anchor')
AxionElectron.XENON1T(ax,text_shift=[2,1.4],fs=22)
AxionElectron.SolarBasin(ax)
AxionElectron.SuperCDMS(ax,text_pos=[5e1,2.5e-11],rotation=-72)
AxionElectron.EDELWEISS(ax,text_pos=[3.5e4,2.2e-11],fs=20,rotation=42,text_col='w')
AxionElectron.RedGiants(ax,text_pos=[1.2e0,2e-13])
AxionElectron.SolarNu(ax,text_pos=[1.2e0,4e-11])
AxionElectron.GERDA(ax,rotation=25)
AxionElectron.DARWIN(ax,text_pos=[1.2e3,2.5e-14],rotation=10,rotation_mode='anchor')
AxionElectron.LZ(ax,fs=25,rotation=30,rotation_mode='anchor')
MySaveFig(fig,'AxionElectron_UndergroundDetectorsCloseup')
```
```python
```
|
14817a17b75c5e943a39278ad07c7eae0444a81b
| 551,869 |
ipynb
|
Jupyter Notebook
|
AxionElectron.ipynb
|
cajohare/AxionLimits
|
0f230025c2fb87ed3e054347d844f182ac62811c
|
[
"MIT"
] | 42 |
2019-07-09T18:05:09.000Z
|
2022-03-13T02:54:14.000Z
|
AxionElectron.ipynb
|
cajohare/AxionLimits
|
0f230025c2fb87ed3e054347d844f182ac62811c
|
[
"MIT"
] | 1 |
2021-07-02T13:12:07.000Z
|
2021-07-23T00:32:13.000Z
|
AxionElectron.ipynb
|
cajohare/AxionLimits
|
0f230025c2fb87ed3e054347d844f182ac62811c
|
[
"MIT"
] | 12 |
2020-04-07T17:28:07.000Z
|
2022-03-16T10:42:36.000Z
| 2,904.573684 | 211,804 | 0.963165 | true | 1,179 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.779993 | 0.682574 | 0.532403 |
__label__yue_Hant
| 0.57368 | 0.075279 |
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W1D5_DimensionalityReduction/W1D5_Tutorial1.ipynb" target="_parent"></a>
# Tutorial 1: Geometric view of data
**Week 1, Day 5: Dimensionality Reduction**
**By Neuromatch Academy**
__Content creators:__ Alex Cayco Gajic, John Murray
__Content reviewers:__ Roozbeh Farhoudi, Matt Krause, Spiros Chavlis, Richard Gao, Michael Waskom, Siddharth Suresh, Natalie Schaworonkow, Ella Batty
**Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs**
<p align='center'></p>
---
# Tutorial Objectives
*Estimated timing of tutorial: 50 minutes*
In this notebook we'll explore how multivariate data can be represented in different orthonormal bases. This will help us build intuition that will be helpful in understanding PCA in the following tutorial.
Overview:
- Generate correlated multivariate data.
- Define an arbitrary orthonormal basis.
- Project the data onto the new basis.
```python
# @title Tutorial slides
# @markdown These are the slides for the videos in all tutorials today
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/kaq2x/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
```
```python
# @title Video 1: Geometric view of data
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1Af4y1R78w", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="THu9yHnpq9I", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
Tab(children=(Output(), Output()), _titles={'0': 'Youtube', '1': 'Bilibili'})
---
# Setup
```python
# Imports
import numpy as np
import matplotlib.pyplot as plt
```
```python
# @title Figure Settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
```
```python
# @title Plotting Functions
def plot_data(X):
"""
Plots bivariate data. Includes a plot of each random variable, and a scatter
plot of their joint activity. The title indicates the sample correlation
calculated from the data.
Args:
X (numpy array of floats) : Data matrix each column corresponds to a
different random variable
Returns:
Nothing.
"""
fig = plt.figure(figsize=[8, 4])
gs = fig.add_gridspec(2, 2)
ax1 = fig.add_subplot(gs[0, 0])
ax1.plot(X[:, 0], color='k')
plt.ylabel('Neuron 1')
plt.title('Sample var 1: {:.1f}'.format(np.var(X[:, 0])))
ax1.set_xticklabels([])
ax2 = fig.add_subplot(gs[1, 0])
ax2.plot(X[:, 1], color='k')
plt.xlabel('Sample Number')
plt.ylabel('Neuron 2')
plt.title('Sample var 2: {:.1f}'.format(np.var(X[:, 1])))
ax3 = fig.add_subplot(gs[:, 1])
ax3.plot(X[:, 0], X[:, 1], '.', markerfacecolor=[.5, .5, .5],
markeredgewidth=0)
ax3.axis('equal')
plt.xlabel('Neuron 1 activity')
plt.ylabel('Neuron 2 activity')
plt.title('Sample corr: {:.1f}'.format(np.corrcoef(X[:, 0], X[:, 1])[0, 1]))
plt.show()
def plot_basis_vectors(X, W):
"""
Plots bivariate data as well as new basis vectors.
Args:
X (numpy array of floats) : Data matrix each column corresponds to a
different random variable
W (numpy array of floats) : Square matrix representing new orthonormal
basis each column represents a basis vector
Returns:
Nothing.
"""
plt.figure(figsize=[4, 4])
plt.plot(X[:, 0], X[:, 1], '.', color=[.5, .5, .5], label='Data')
plt.axis('equal')
plt.xlabel('Neuron 1 activity')
plt.ylabel('Neuron 2 activity')
plt.plot([0, W[0, 0]], [0, W[1, 0]], color='r', linewidth=3,
label='Basis vector 1')
plt.plot([0, W[0, 1]], [0, W[1, 1]], color='b', linewidth=3,
label='Basis vector 2')
plt.legend()
plt.show()
def plot_data_new_basis(Y):
"""
Plots bivariate data after transformation to new bases.
Similar to plot_data but with colors corresponding to projections onto
basis 1 (red) and basis 2 (blue). The title indicates the sample correlation
calculated from the data.
Note that samples are re-sorted in ascending order for the first
random variable.
Args:
Y (numpy array of floats): Data matrix in new basis each column
corresponds to a different random variable
Returns:
Nothing.
"""
fig = plt.figure(figsize=[8, 4])
gs = fig.add_gridspec(2, 2)
ax1 = fig.add_subplot(gs[0, 0])
ax1.plot(Y[:, 0], 'r')
plt.xlabel
plt.ylabel('Projection \n basis vector 1')
plt.title('Sample var 1: {:.1f}'.format(np.var(Y[:, 0])))
ax1.set_xticklabels([])
ax2 = fig.add_subplot(gs[1, 0])
ax2.plot(Y[:, 1], 'b')
plt.xlabel('Sample number')
plt.ylabel('Projection \n basis vector 2')
plt.title('Sample var 2: {:.1f}'.format(np.var(Y[:, 1])))
ax3 = fig.add_subplot(gs[:, 1])
ax3.plot(Y[:, 0], Y[:, 1], '.', color=[.5, .5, .5])
ax3.axis('equal')
plt.xlabel('Projection basis vector 1')
plt.ylabel('Projection basis vector 2')
plt.title('Sample corr: {:.1f}'.format(np.corrcoef(Y[:, 0], Y[:, 1])[0, 1]))
plt.show()
```
---
# Section 1: Generate correlated multivariate data
```python
# @title Video 2: Multivariate data
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1xz4y1D7ES", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="jcTq2PgU5Vw", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
Tab(children=(Output(), Output()), _titles={'0': 'Youtube', '1': 'Bilibili'})
This video describes the covariance matrix and the multivariate normal distribution.
<details>
<summary> <font color='blue'>Click here for text recap of video </font></summary>
To gain intuition, we will first use a simple model to generate multivariate data. Specifically, we will draw random samples from a *bivariate normal distribution*. This is an extension of the one-dimensional normal distribution to two dimensions, in which each $x_i$ is marginally normal with mean $\mu_i$ and variance $\sigma_i^2$:
\begin{align}
x_i \sim \mathcal{N}(\mu_i,\sigma_i^2).
\end{align}
Additionally, the joint distribution for $x_1$ and $x_2$ has a specified correlation coefficient $\rho$. Recall that the correlation coefficient is a normalized version of the covariance, and ranges between -1 and +1:
\begin{align}
\rho = \frac{\text{cov}(x_1,x_2)}{\sqrt{\sigma_1^2 \sigma_2^2}}.
\end{align}
For simplicity, we will assume that the mean of each variable has already been subtracted, so that $\mu_i=0$ for both $i=1$ and $i=2$. The remaining parameters can be summarized in the covariance matrix, which for two dimensions has the following form:
\begin{align}
{\bf \Sigma} =
\begin{pmatrix}
\text{var}(x_1) & \text{cov}(x_1,x_2) \\
\text{cov}(x_1,x_2) &\text{var}(x_2)
\end{pmatrix}.
\end{align}
In general, $\bf \Sigma$ is a symmetric matrix with the variances $\text{var}(x_i) = \sigma_i^2$ on the diagonal, and the covariances on the off-diagonal. Later, we will see that the covariance matrix plays a key role in PCA.
</details>
## Coding Exercise 1: Draw samples from a distribution
We have provided code to draw random samples from a zero-mean bivariate normal distribution with a specified covariance matrix (`get_data`). Throughout this tutorial, we'll imagine these samples represent the activity (firing rates) of two recorded neurons on different trials. Fill in the function below to calculate the covariance matrix given the desired variances and correlation coefficient. The covariance can be found by rearranging the equation above:
\begin{align}
\text{cov}(x_1,x_2) = \rho \sqrt{\sigma_1^2 \sigma_2^2}.
\end{align}
```python
# @markdown Execute this cell to get helper function `get_data`
def get_data(cov_matrix):
"""
Returns a matrix of 1000 samples from a bivariate, zero-mean Gaussian.
Note that samples are sorted in ascending order for the first random variable
Args:
cov_matrix (numpy array of floats): desired covariance matrix
Returns:
(numpy array of floats) : samples from the bivariate Gaussian, with each
column corresponding to a different random
variable
"""
mean = np.array([0, 0])
X = np.random.multivariate_normal(mean, cov_matrix, size=1000)
indices_for_sorting = np.argsort(X[:, 0])
X = X[indices_for_sorting, :]
return X
help(get_data)
```
Help on function get_data in module __main__:
get_data(cov_matrix)
Returns a matrix of 1000 samples from a bivariate, zero-mean Gaussian.
Note that samples are sorted in ascending order for the first random variable
Args:
cov_matrix (numpy array of floats): desired covariance matrix
Returns:
(numpy array of floats) : samples from the bivariate Gaussian, with each
column corresponding to a different random
variable
```python
def calculate_cov_matrix(var_1, var_2, corr_coef):
"""
Calculates the covariance matrix based on the variances and correlation
coefficient.
Args:
var_1 (scalar) : variance of the first random variable
var_2 (scalar) : variance of the second random variable
corr_coef (scalar) : correlation coefficient
Returns:
(numpy array of floats) : covariance matrix
"""
#################################################
## TODO for students: calculate the covariance matrix
# Fill out function and remove
#raise NotImplementedError("Student excercise: calculate the covariance matrix!")
#################################################
# Calculate the covariance from the variances and correlation
cov = corr_coef * np.sqrt(var_1 * var_2)
cov_matrix = np.array([[var_1, cov], [cov, var_2]])
return cov_matrix
# Set parameters
np.random.seed(2020) # set random seed
variance_1 = 1
variance_2 = 1
corr_coef = 0.8
# Compute covariance matrix
cov_matrix = calculate_cov_matrix(variance_1, variance_2, corr_coef)
# Generate data with this covariance matrix
X = get_data(cov_matrix)
# Visualize
plot_data(X)
```
```python
# to_remove solution
def calculate_cov_matrix(var_1, var_2, corr_coef):
"""
Calculates the covariance matrix based on the variances and correlation
coefficient.
Args:
var_1 (scalar) : variance of the first random variable
var_2 (scalar) : variance of the second random variable
corr_coef (scalar) : correlation coefficient
Returns:
(numpy array of floats) : covariance matrix
"""
# Calculate the covariance from the variances and correlation
cov = corr_coef * np.sqrt(var_1 * var_2)
cov_matrix = np.array([[var_1, cov], [cov, var_2]])
return cov_matrix
# Set parameters
np.random.seed(2020) # set random seed
variance_1 = 1
variance_2 = 1
corr_coef = 0.8
# Compute covariance matrix
cov_matrix = calculate_cov_matrix(variance_1, variance_2, corr_coef)
# Generate data with this covariance matrix
X = get_data(cov_matrix)
# Visualize
with plt.xkcd():
plot_data(X)
```
## Interactive Demo 1: Correlation effect on data
We'll use the function you just completed but now we can change the correlation coefficient via slider. You should get a feel for how changing the correlation coefficient affects the geometry of the simulated data.
1. What effect do negative correlation coefficient values have?
2. What correlation coefficient results in a circular data cloud?
Note that we sort the samples according to neuron 1's firing rate, meaning the plot of neuron 1 firing rate over sample number looks clean and pretty unchanging when compared to neuron 2.
```python
# @markdown Execute this cell to enable widget
def _calculate_cov_matrix(var_1, var_2, corr_coef):
# Calculate the covariance from the variances and correlation
cov = corr_coef * np.sqrt(var_1 * var_2)
cov_matrix = np.array([[var_1, cov], [cov, var_2]])
return cov_matrix
@widgets.interact(corr_coef = widgets.FloatSlider(value=.2, min=-1, max=1, step=0.1))
def visualize_correlated_data(corr_coef=0):
variance_1 = 1
variance_2 = 1
# Compute covariance matrix
cov_matrix = _calculate_cov_matrix(variance_1, variance_2, corr_coef)
# Generate data with this covariance matrix
X = get_data(cov_matrix)
# Visualize
plot_data(X)
```
interactive(children=(FloatSlider(value=0.2, description='corr_coef', max=1.0, min=-1.0), Output()), _dom_clas…
```python
# to_remove explanation
"""
1) Negative correlation coefficients reverse the direction of the relationship: now higher neuron 1
activities are associated with lower neuron 2 activities.
2) A correlation coefficient of 0 (no correlation) results in a circular looking data cloud.
"""
```
'\n\n1) Negative correlation coefficients reverse the direction of the relationship: now higher neuron 1\nactivities are associated with lower neuron 2 activities.\n\n2) A correlation coefficient of 0 (no correlation) results in a circular looking data cloud.\n\n'
---
# Section 2: Define a new orthonormal basis
*Estimated timing to here from start of tutorial: 20 min*
```python
# @title Video 3: Orthonormal bases
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1wT4y1E71g", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="PC1RZELnrIg", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
Tab(children=(Output(), Output()), _titles={'0': 'Youtube', '1': 'Bilibili'})
This video shows that data can be represented in many ways using different bases. It also explains how to check if your favorite basis is orthonormal.
<details>
<summary> <font color='blue'>Click here for text recap of video </font></summary>
Next, we will define a new orthonormal basis of vectors ${\bf u} = [u_1,u_2]$ and ${\bf w} = [w_1,w_2]$. As we learned in the video, two vectors are orthonormal if:
1. They are orthogonal (i.e., their dot product is zero):
\begin{align}
{\bf u\cdot w} = u_1 w_1 + u_2 w_2 = 0
\end{align}
2. They have unit length:
\begin{align}
||{\bf u} || = ||{\bf w} || = 1
\end{align}
</details>
In two dimensions, it is easy to make an arbitrary orthonormal basis. All we need is a random vector ${\bf u}$, which we have normalized. If we now define the second basis vector to be ${\bf w} = [-u_2,u_1]$, we can check that both conditions are satisfied:
\begin{align}
{\bf u\cdot w} = - u_1 u_2 + u_2 u_1 = 0
\end{align}
and
\begin{align}
{|| {\bf w} ||} = \sqrt{(-u_2)^2 + u_1^2} = \sqrt{u_1^2 + u_2^2} = 1,
\end{align}
where we used the fact that ${\bf u}$ is normalized. So, with an arbitrary input vector, we can define an orthonormal basis, which we will write in matrix by stacking the basis vectors horizontally:
\begin{align}
{{\bf W} } =
\begin{pmatrix}
u_1 & w_1 \\
u_2 & w_2
\end{pmatrix}.
\end{align}
## Coding Exercise 2: Find an orthonormal basis
In this exercise you will fill in the function below to define an orthonormal basis, given a single arbitrary 2-dimensional vector as an input.
**Steps**
* Modify the function `define_orthonormal_basis` to first normalize the first basis vector $\bf u$.
* Then complete the function by finding a basis vector $\bf w$ that is orthogonal to $\bf u$.
* Test the function using initial basis vector ${\bf u} = [3,1]$. Plot the resulting basis vectors on top of the data scatter plot using the function `plot_basis_vectors`. (For the data, use $\sigma_1^2 =1$, $\sigma_2^2 =1$, and $\rho = .8$).
```python
def define_orthonormal_basis(u):
"""
Calculates an orthonormal basis given an arbitrary vector u.
Args:
u (numpy array of floats) : arbitrary 2-dimensional vector used for new
basis
Returns:
(numpy array of floats) : new orthonormal basis
columns correspond to basis vectors
"""
#################################################
## TODO for students: calculate the orthonormal basis
# Fill out function and remove
#raise NotImplementedError("Student excercise: implement the orthonormal basis function")
#################################################
# Normalize vector u
u = u / np.linalg.norm(u)
# Calculate vector w that is orthogonal to u
w = np.array([-u[1], u[0]])
# Put in matrix form
W = np.column_stack([u, w])
return W
# Set up parameters
np.random.seed(2020) # set random seed
variance_1 = 1
variance_2 = 1
corr_coef = 0.8
u = np.array([3, 1])
# Compute covariance matrix
cov_matrix = calculate_cov_matrix(variance_1, variance_2, corr_coef)
# Generate data
X = get_data(cov_matrix)
# Get orthonomal basis
W = define_orthonormal_basis(u)
# Visualize
plot_basis_vectors(X, W)
```
```python
# to_remove solution
def define_orthonormal_basis(u):
"""
Calculates an orthonormal basis given an arbitrary vector u.
Args:
u (numpy array of floats) : arbitrary 2-dimensional vector used for new
basis
Returns:
(numpy array of floats) : new orthonormal basis
columns correspond to basis vectors
"""
# Normalize vector u
u = u / np.sqrt(u[0] ** 2 + u[1] ** 2)
  # Calculate vector w that is orthogonal to u
w = np.array([-u[1], u[0]])
# Put in matrix form
W = np.column_stack([u, w])
return W
# Set up parameters
np.random.seed(2020) # set random seed
variance_1 = 1
variance_2 = 1
corr_coef = 0.8
u = np.array([3, 1])
# Compute covariance matrix
cov_matrix = calculate_cov_matrix(variance_1, variance_2, corr_coef)
# Generate data
X = get_data(cov_matrix)
# Get orthonormal basis
W = define_orthonormal_basis(u)
# Visualize
with plt.xkcd():
plot_basis_vectors(X, W)
```
---
# Section 3: Project data onto new basis
*Estimated timing to here from start of tutorial: 35 min*
```python
# @title Video 4: Change of basis
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1LK411J7NQ", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="Mj6BRQPKKUc", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
Tab(children=(Output(), Output()), _titles={'0': 'Youtube', '1': 'Bilibili'})
Finally, we will express our data in the new basis that we have just found. Since $\bf W$ is orthonormal, we can project the data into our new basis using simple matrix multiplication :
\begin{align}
{\bf Y = X W}.
\end{align}
We will explore the geometry of the transformed data $\bf Y$ as we vary the choice of basis.
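Because $\bf W$ is orthonormal, ${\bf W}{\bf W}^T = {\bf I}$, so the change of basis can always be undone: ${\bf X} = {\bf Y}{\bf W}^T$. The sketch below (using stand-in data rather than the tutorial's `X`) illustrates that no information is lost by the projection:

```python
import numpy as np

X_demo = np.random.randn(100, 2)                      # stand-in data matrix
u = np.array([3.0, 1.0]); u = u / np.linalg.norm(u)
W = np.column_stack([u, [-u[1], u[0]]])               # orthonormal basis as in Exercise 2

Y_demo = X_demo @ W          # project onto the new basis
X_back = Y_demo @ W.T        # rotate back to the original coordinates
print(np.allclose(X_back, X_demo))   # True: the transformation is invertible
```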
## Coding Exercise 3: Change to orthonormal basis
In this exercise you will fill in the function below to change data to an orthonormal basis.
**Steps**
* Complete the function `change_of_basis` to project the data onto the new basis.
* Plot the projected data using the function `plot_data_new_basis`.
* What happens to the correlation coefficient in the new basis? Does it increase or decrease?
* What happens to variance?
```python
def change_of_basis(X, W):
"""
Projects data onto new basis W.
Args:
X (numpy array of floats) : Data matrix each column corresponding to a
different random variable
W (numpy array of floats) : new orthonormal basis columns correspond to
basis vectors
Returns:
(numpy array of floats) : Data matrix expressed in new basis
"""
#################################################
  ## TODO for students: project the data onto the new basis W
  # Fill out function and remove
  #raise NotImplementedError("Student exercise: implement change of basis")
#################################################
# Project data onto new basis described by W
Y = X @ W
return Y
# Project data to new basis
Y = change_of_basis(X, W)
# Visualize
plot_data_new_basis(Y)
```
```python
# to_remove solution
def change_of_basis(X, W):
"""
Projects data onto new basis W.
Args:
X (numpy array of floats) : Data matrix each column corresponding to a
different random variable
W (numpy array of floats) : new orthonormal basis columns correspond to
basis vectors
Returns:
(numpy array of floats) : Data matrix expressed in new basis
"""
# Project data onto new basis described by W
Y = X @ W
return Y
# Project data to new basis
Y = change_of_basis(X, W)
# Visualize
with plt.xkcd():
plot_data_new_basis(Y)
```
## Interactive Demo 3: Play with the basis vectors
To see what happens to the correlation as we change the basis vectors, run the cell below. The parameter $\theta$ controls the angle of $\bf u$ in degrees. Use the slider to rotate the basis vectors.
1. What happens to the projected data as you rotate the basis?
2. How does the correlation coefficient change? How does the variance of the projection onto each basis vector change?
3. Are you able to find a basis in which the projected data is **uncorrelated**?
```python
# @markdown Make sure you execute this cell to enable the widget!
def refresh(theta=0):
u = [1, np.tan(theta * np.pi / 180)]
W = define_orthonormal_basis(u)
Y = change_of_basis(X, W)
plot_basis_vectors(X, W)
plot_data_new_basis(Y)
_ = widgets.interact(refresh, theta=(0, 90, 5))
```
interactive(children=(IntSlider(value=0, description='theta', max=90, step=5), Output()), _dom_classes=('widge…
```python
# to_remove explanation
"""
1) As you rotate the basis vectors, the look of the cloud of projected data changes.
Specifically, the correlations and variances of the projected data change.
2) As the angle increases, the correlation coefficient decreases, goes to 0, then starts
to become negative. It changes from 0.8 to -.8 when the angle is changed by 90 degrees.
With theta of 0, the data projected onto the two basis vectors has equal variances of 1.
As theta increases, these variances become unequal: the projected data has larger and larger variance
for basis vector 1 and smaller variance for basis vector 2. Past theta of 45, the trend reverses and
the variances start becoming more similar.
3) If theta is 45 degrees, the projected data is uncorrelated.
"""
```
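The claims in this explanation can also be checked numerically. The sketch below is not part of the tutorial; instead of reusing `get_data`/`calculate_cov_matrix`, it builds its own toy data set with variances 1 and correlation 0.8 and prints the correlation of the projected data for several rotation angles — it crosses zero at $\theta = 45$ degrees.

```python
import numpy as np

rng = np.random.default_rng(2020)
Sigma = np.array([[1.0, 0.8],
                  [0.8, 1.0]])        # covariance with unit variances and correlation 0.8
X_toy = rng.multivariate_normal(mean=[0, 0], cov=Sigma, size=5000)

for theta_deg in (0, 15, 30, 45, 60, 90):
    theta = np.deg2rad(theta_deg)
    u = np.array([np.cos(theta), np.sin(theta)])   # first basis vector
    w = np.array([-u[1], u[0]])                    # orthogonal second basis vector
    W = np.column_stack([u, w])
    Y = X_toy @ W
    rho = np.corrcoef(Y[:, 0], Y[:, 1])[0, 1]
    print(f"theta = {theta_deg:2d} deg -> correlation of projected data: {rho:+.2f}")
```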
---
# Summary
*Estimated timing of tutorial: 50 minutes*
- In this tutorial, we learned that multivariate data can be visualized as a cloud of points in a high-dimensional vector space. The geometry of this cloud is shaped by the covariance matrix.
- Multivariate data can be represented in a new orthonormal basis using the dot product. These new basis vectors correspond to specific mixtures of the original variables - for example, in neuroscience, they could represent different ratios of activation across a population of neurons.
- The projected data (after transforming into the new basis) will generally have a different geometry from the original data. In particular, choosing basis vectors that are aligned with the spread of the cloud of points decorrelates the data.
- These concepts - covariance, projections, and orthonormal bases - are key for understanding PCA, which will be our focus in the next tutorial.
---
# Notation
\begin{align}
x_i &\quad \text{data point for dimension } i\\
\mu_i &\quad \text{mean along dimension } i\\
\sigma_i^2 &\quad \text{variance along dimension } i \\
\bf u, \bf w &\quad \text{orthonormal basis vectors}\\
\rho &\quad \text{correlation coefficient}\\
\bf \Sigma &\quad \text{covariance matrix}\\
\bf X &\quad \text{original data matrix}\\
\bf W &\quad \text{projection matrix}\\
\bf Y &\quad \text{transformed data}\\
\end{align}
```python
```
|
f2d19328e81927a4b6a2712e60b70cffc195be91
| 788,488 |
ipynb
|
Jupyter Notebook
|
tutorials/W1D5_DimensionalityReduction/ED_W1D5_Tutorial1.ipynb
|
eduardojdiniz/CompNeuro
|
20269e66540dc4e802273735c97323020ee37406
|
[
"CC-BY-4.0",
"BSD-3-Clause"
] | null | null | null |
tutorials/W1D5_DimensionalityReduction/ED_W1D5_Tutorial1.ipynb
|
eduardojdiniz/CompNeuro
|
20269e66540dc4e802273735c97323020ee37406
|
[
"CC-BY-4.0",
"BSD-3-Clause"
] | null | null | null |
tutorials/W1D5_DimensionalityReduction/ED_W1D5_Tutorial1.ipynb
|
eduardojdiniz/CompNeuro
|
20269e66540dc4e802273735c97323020ee37406
|
[
"CC-BY-4.0",
"BSD-3-Clause"
] | null | null | null | 575.118891 | 184,336 | 0.943909 | true | 7,026 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.746139 | 0.70253 | 0.524185 |
__label__eng_Latn
| 0.939933 | 0.056187 |
# Example program: the matrix-valued exponential function
For a real-valued matrix $A$ we consider an initial value problem for the linear ODE
\begin{align}
\dot {\vec u}(t) &= A {\vec u(t)},\quad t>0\\
\vec u(0)&= \vec u_0 .
\end{align}
Its solution is given explicitly by $\vec u(t) = \exp(A t) \vec u_0$.
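If $A$ is diagonalizable, $A = V \Lambda V^{-1}$ with eigenvector matrix $V$ and diagonal eigenvalue matrix $\Lambda$, then (a standard identity, added here for clarity)
\begin{align}
\exp(A t) = V \exp(\Lambda t)\, V^{-1},
\end{align}
so the solution is a superposition of modes $\exp(\lambda_i t)$. This is precisely why the eigenvalues of $A$ govern the behaviour discussed below.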
In the lecture we saw that the behaviour of the solution is determined by the **eigenvalues of A**. In the following we concentrate on the special case
$$A := \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \mathbb R^{2\times 2}$$
Much can already be deduced from this case.
First we define the matrix (**TODO:** feel free to change it!):
```octave
A = [3 1.0; -1.0 -3]
```
A =
3 1
-1 -3
The eigenvalues are obtained as the roots of the characteristic polynomial:
$$ 0= \det(A - \lambda I) = \begin{vmatrix} a-\lambda & b \\ c & d-\lambda \end{vmatrix} = (a-\lambda )(d-\lambda )-bc = \lambda^2 - \lambda (a+d) + (ad - bc)$$
Using the quadratic ($pq$-) formula one sees that we get either (i) two real eigenvalues $\lambda_1, \lambda_2 \in\mathbb R$
or (ii) a pair of complex-conjugate eigenvalues
$$\lambda_{1/2} = \alpha \pm i \beta \in \mathbb C,$$
with real and imaginary parts $\alpha, \beta \in \mathbb R$. We compute them with Octave:
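Written out, the quadratic formula gives
$$\lambda_{1/2} = \frac{a+d}{2} \pm \sqrt{\left(\frac{a+d}{2}\right)^2 - (ad-bc)},$$
so the eigenvalues are real if the discriminant is non-negative and form a complex-conjugate pair otherwise.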
```octave
[V,Lambda]=eig(A);
Lambda
V
alpha = real(Lambda(1,1))
alpha2 = real(Lambda(2,2))
beta = abs(imag(Lambda(1,1)))
```
Lambda =
Diagonal Matrix
2.8284 0
0 -2.8284
V =
0.98560 -0.16910
-0.16910 0.98560
alpha = 2.8284
alpha2 = -2.8284
beta = 0
For an eigenvalue $\lambda = \alpha + i \beta$, Euler's formula gives
$$
\exp(\lambda t) = \exp(\alpha t) \cdot \exp(i \beta t) = \exp(\alpha t) \cdot \left( \cos (\beta t) + i \sin (\beta t) \right)
$$
Thus the real part $\alpha$ determines the growth behaviour $\exp(\alpha t)$, while the imaginary part $\beta$ leads to oscillations $\sin (\beta t)$ and $\cos (\beta t)$. Accordingly, we define characteristic **time constants** below: $1/|\mathrm{Re}\,\lambda|$ for growth/decay and $2 \pi/|\mathrm{Im}\,\lambda|$ for oscillations. To be able to resolve the fastest process in each case, we take the minimum of the two.
```octave
tGrowth = abs(1.0/real(Lambda(1,1)))
tGrowth2 = abs(1.0/real(Lambda(2,2)))
if (beta > 0)
  # One growth/decay process with oscillation
tOscillation = 2.0*pi/beta
tChar = min(tGrowth,tOscillation)
else
  # Two growth/decay processes
tChar=min(tGrowth, tGrowth2)
endif
```
tGrowth = 0.35355
tGrowth2 = 0.35355
tChar = 0.35355
Although the explicit solution of the ODE is known, we solve it numerically ;-):
```octave
function udot=f_rhs(u,t, A)
udot = A*u;
endfunction
u0 = [10,40];
tdiskret = linspace(0, 3.0*tChar, 500);
uDiskret = lsode(@(u,t) f_rhs(u,t,A), u0, tdiskret);
```
We now also want to visualize the solution graphically:
```octave
# Function that plots the vector field
function plot_field_2D(myrhs, spaceX1, spaceX2, name)
for j=1:length(spaceX2)
for i=1:length(spaceX1)
vec = myrhs([spaceX1(i);spaceX2(j)], 0.0);
dX1(j,i) = vec(1);
dX2(j,i) = vec(2);
end
end
quiver(spaceX1,spaceX2,dX1,dX2);
endfunction
```
```octave
# Plot centered around the origin
min1 = min(uDiskret(:,1));
max1 = max(uDiskret(:,1));
abs1 = max(abs(min1), abs(max1));
min2 = min(uDiskret(:,2));
max2 = max(uDiskret(:,2));
abs2 = max(abs(min2), abs(max2));
spaceX1=linspace(-abs1, abs1, 20);
spaceX2=linspace(-abs2, abs2, 20);
hold on
xlabel("Komponente u_1")
ylabel("Komponente u_2")
plot_field_2D(@(u,t) f_rhs(u,t,A), spaceX1, spaceX2, "Zeitliche Entwicklung")
plot(uDiskret(:,1), uDiskret(:,2), "Color", "black")
plot (u0(1), u0(2), "Color", "black")
legend("Richtungsfeld", "Loesung", "Anfangswert")
```
The sign of $\mathrm{Re}\,\lambda$ decides whether we have a growth or a decay process. For complex $\lambda$ we obtain, depending on the sign of the real part, a damped ($\mathrm{Re}\,\lambda<0$) or amplified ($\mathrm{Re}\,\lambda>0$) oscillation.
```octave
xlabel("Zeit t")
ylabel("Komponente u_1, u_2")
hold on
plot(tdiskret, uDiskret(:,1), "Color", "blue")
plot(tdiskret, uDiskret(:,2), "Color", "red")
# For complex eigenvalues we plot the exponential envelope
if (beta > 0)
    uGrowth = norm(u0)*exp(alpha*tdiskret);   # envelope exp(Re(lambda)*t): decays for alpha < 0, grows for alpha > 0
plot(tdiskret, uGrowth, "Color", "Black")
plot(tdiskret, -uGrowth, "Color", "Black")
legend("Komponente u_1", "Komponente u_2", "Einhuellende")
else
legend("Komponente u_1", "Komponente u_2")
endif
```
```octave
```
|
d999a1bf94ff4db2647b92e7023c96c59e6c792e
| 48,728 |
ipynb
|
Jupyter Notebook
|
jupyter/beispiel05-Matrixexponential2x2.ipynb
|
anaegel/modsim-sommer2020
|
26906435115447718ee081dbce59e16bcce03513
|
[
"BSD-3-Clause"
] | null | null | null |
jupyter/beispiel05-Matrixexponential2x2.ipynb
|
anaegel/modsim-sommer2020
|
26906435115447718ee081dbce59e16bcce03513
|
[
"BSD-3-Clause"
] | null | null | null |
jupyter/beispiel05-Matrixexponential2x2.ipynb
|
anaegel/modsim-sommer2020
|
26906435115447718ee081dbce59e16bcce03513
|
[
"BSD-3-Clause"
] | 1 |
2020-08-07T23:11:26.000Z
|
2020-08-07T23:11:26.000Z
| 158.723127 | 26,498 | 0.877504 | true | 1,733 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.885631 | 0.857768 | 0.759666 |
__label__deu_Latn
| 0.830156 | 0.603292 |
# CFL and Nyquist conditions
**Nicolás Guarín-Zapata**
This notebook interactively illustrates the CFL and Nyquist-Shannon conditions.
This document is based on a Notebook from ["Practical Numerical Methods with Python"](http://openedx.seas.gwu.edu/courses/GW/MAE6286/2014_fall/about) written by L.A. Barba, G.F. Forsyth, C. Cooper (2014).
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from matplotlib import rcParams
rcParams['font.family'] = 'serif'
rcParams['font.size'] = 16
```
## CFL Condition
The *one-dimensional linear convection equation* is the simplest case for studying the CFL condition. Here it is:
\begin{equation}\frac{\partial u}{\partial t} + c \frac{\partial u}{\partial x} = 0\end{equation}
The equation represents a *wave* propagating with speed $c$ in the $x$ direction, without change of shape. For that reason, it's sometimes called the *one-way wave equation* (sometimes also the *advection equation*).
With an initial condition $u(x,0)=u_0(x)$, the equation has an exact solution given by:
\begin{equation}u(x,t)=u_0(x-ct).
\end{equation}
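The helper function below advances this equation with a forward difference in time and a backward difference in space (this matches the update `u[1:] = un[1:] - c*dt/dx*(un[1:] - un[0:-1])` used in the code):
\begin{equation}
u_i^{n+1} = u_i^n - c\,\frac{\Delta t}{\Delta x}\left(u_i^n - u_{i-1}^n\right) .
\end{equation}
This scheme is stable only when the CFL number $\sigma = c\,\Delta t/\Delta x$ satisfies $\sigma \leq 1$, which is what the experiments below probe by refining the grid at fixed $\Delta t$.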
```
def linearconv(nx):
"""Solve the linear convection equation.
Solves the equation d_t u + c d_x u = 0 where
* the wavespeed c is set to 1
* the domain is x \in [0, 2]
* 20 timesteps are taken, with \Delta t = 0.025
* the initial data is the hat function
Produces a plot of the results
Parameters
----------
nx : integer
        number of grid points
Returns
-------
None : none
"""
dx = 2./(nx-1)
nt = 20
dt = .025
c = 1
u = np.ones(nx)
    u[int(.5/dx) : int(1/dx + 1)] = 2
un = np.ones(nx)
for n in range(nt):
un = u.copy()
u[1:] = un[1:] -c*dt/dx*(un[1:] -un[0:-1])
u[0] = 1.0
plt.plot(np.linspace(0,2,nx), u, color='#003366', ls='-', lw=3)
plt.ylim(0,2.5);
```
### Different spatial discretizations
```
linearconv(11)
```
```
linearconv(21)
```
```
linearconv(41)
```
```
linearconv(81)
```
```
linearconv(91)
```
```
linearconv(201)
```
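To make the connection to the CFL condition explicit, the short sketch below (not in the original notebook) recomputes $\sigma = c\,\Delta t/\Delta x$ for the grids used above, with the same hard-coded $c = 1$ and $\Delta t = 0.025$:
```
c, dt = 1.0, 0.025                     # wavespeed and timestep hard-coded in linearconv
for nx in (11, 21, 41, 81, 91, 201):
    dx = 2.0 / (nx - 1)
    sigma = c * dt / dx                # CFL number
    status = "stable" if sigma <= 1 else "unstable"
    print("nx = %3d: dx = %.4f, sigma = %.3f -> %s" % (nx, dx, sigma, status))
```
With these values, nx = 81 sits exactly at the stability limit $\sigma = 1$, while nx = 91 and nx = 201 exceed it.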
### Nyquist-Shannon condition
```
from scipy import interpolate
n = 120
x = np.linspace(-np.pi, np.pi, n)
y = np.sin(5*x)
plt.plot(x,y, lw=2);
plt.xlim(-np.pi, np.pi);
f = interpolate.interp1d(x, y, kind='quadratic')
```
```
def plot_smooth(x, y, n):
f = interpolate.interp1d(x, y, kind='quadratic')
xnew = np.linspace(-np.pi, np.pi, n)
plt.plot(x,y, 'ro', xnew, f(xnew) , '--r', lw=2);
```
Now we are going to play a little bit with the sampling rate
```
```
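Before varying the sampling rate, it helps to note what the Nyquist-Shannon criterion predicts here: $\sin(5x)$ completes 5 periods on $[-\pi, \pi]$, so we need more than 2 samples per period — roughly more than 10 samples over the whole interval — to avoid aliasing. A small bookkeeping sketch (not part of the original notebook):
```
cycles = 5                      # sin(5*x) has 5 full periods on [-pi, pi]
for n in (5, 8, 10, 15, 20, 30, 50):
    samples_per_period = n / float(cycles)
    verdict = "aliased" if samples_per_period <= 2 else "resolved"
    print("n = %2d: %.1f samples per period -> %s" % (n, samples_per_period, verdict))
```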
```
n = 5
xnew = np.linspace(-np.pi, np.pi, n)
ynew = f(xnew)
plt.plot(x,y, lw=2);
plot_smooth(xnew, ynew, 100)
plt.ylim(-1.5,1.5)
```
```
n = 8
xnew = np.linspace(-np.pi, np.pi, n)
plt.plot(x,y, lw=2);
plot_smooth(xnew, f(xnew), 100)
plt.ylim(-1.5,1.5)
```
```
n = 10
xnew = np.linspace(-np.pi, np.pi, n)
plt.plot(x,y, lw=2);
plot_smooth(xnew, f(xnew), 100)
plt.ylim(-1.5,1.5)
```
```
n = 15
xnew = np.linspace(-np.pi, np.pi, n)
plt.plot(x,y, lw=2);
plot_smooth(xnew, f(xnew), 100)
plt.ylim(-1.5,1.5)
```
```
n = 20
xnew = np.linspace(-np.pi, np.pi, n)
plt.plot(x,y, lw=2);
plot_smooth(xnew, f(xnew), 100)
plt.ylim(-1.5,1.5)
```
```
n = 30
xnew = np.linspace(-np.pi, np.pi, n)
plt.plot(x,y, lw=2);
plot_smooth(xnew, f(xnew), 100)
plt.ylim(-1.5,1.5)
```
```
n = 50
xnew = np.linspace(-np.pi, np.pi, n)
plt.plot(x,y, lw=2);
plot_smooth(xnew, f(xnew), 100)
plt.ylim(-1.5,1.5)
```
---
###### The cell below loads the style of the notebook.
```
from IPython.core.display import HTML
css_file = 'styles/numericalmoocstyle.css'
HTML(open(css_file, "r").read())
```
<link href='http://fonts.googleapis.com/css?family=Alegreya+Sans:100,300,400,500,700,800,900,100italic,300italic,400italic,500italic,700italic,800italic,900italic' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Arvo:400,700,400italic' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=PT+Mono' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Shadows+Into+Light' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Nixie+One' rel='stylesheet' type='text/css'>
<style>
@font-face {
font-family: "Computer Modern";
src: url('http://mirrors.ctan.org/fonts/cm-unicode/fonts/otf/cmunss.otf');
}
#notebook_panel { /* main background */
background: rgb(245,245,245);
}
div.cell { /* set cell width */
width: 750px;
}
div #notebook { /* centre the content */
background: #fff; /* white background for content */
width: 1000px;
margin: auto;
padding-left: 0em;
}
#notebook li { /* More space between bullet points */
margin-top:0.8em;
}
/* draw border around running cells */
div.cell.border-box-sizing.code_cell.running {
border: 1px solid #111;
}
/* Put a solid color box around each cell and its output, visually linking them*/
div.cell.code_cell {
background-color: rgb(256,256,256);
border-radius: 0px;
padding: 0.5em;
margin-left:1em;
margin-top: 1em;
}
div.text_cell_render{
font-family: 'Alegreya Sans' sans-serif;
line-height: 140%;
font-size: 125%;
font-weight: 400;
width:600px;
margin-left:auto;
margin-right:auto;
}
/* Formatting for header cells */
.text_cell_render h1 {
font-family: 'Nixie One', serif;
font-style:regular;
font-weight: 400;
font-size: 45pt;
line-height: 100%;
color: rgb(0,51,102);
margin-bottom: 0.5em;
margin-top: 0.5em;
display: block;
}
.text_cell_render h2 {
font-family: 'Nixie One', serif;
font-weight: 400;
font-size: 30pt;
line-height: 100%;
color: rgb(0,51,102);
margin-bottom: 0.1em;
margin-top: 0.3em;
display: block;
}
.text_cell_render h3 {
font-family: 'Nixie One', serif;
margin-top:16px;
font-size: 22pt;
font-weight: 600;
margin-bottom: 3px;
font-style: regular;
color: rgb(102,102,0);
}
.text_cell_render h4 { /*Use this for captions*/
font-family: 'Nixie One', serif;
font-size: 14pt;
text-align: center;
margin-top: 0em;
margin-bottom: 2em;
font-style: regular;
}
.text_cell_render h5 { /*Use this for small titles*/
font-family: 'Nixie One', sans-serif;
font-weight: 400;
font-size: 16pt;
color: rgb(163,0,0);
font-style: italic;
margin-bottom: .1em;
margin-top: 0.8em;
display: block;
}
.text_cell_render h6 { /*use this for copyright note*/
font-family: 'PT Mono', sans-serif;
font-weight: 300;
font-size: 9pt;
line-height: 100%;
color: grey;
margin-bottom: 1px;
margin-top: 1px;
}
.CodeMirror{
font-family: "PT Mono";
font-size: 90%;
}
</style>
|
f2ef90e5a9d1febb284d8435a66acc25d89b9ba7
| 303,586 |
ipynb
|
Jupyter Notebook
|
2014/ce795/CFL_Nyquist_conditions.ipynb
|
nicoguaro/talks
|
01e9ddc4a44952a1ea8b1d8acf3cbc17ddbc31e2
|
[
"MIT"
] | null | null | null |
2014/ce795/CFL_Nyquist_conditions.ipynb
|
nicoguaro/talks
|
01e9ddc4a44952a1ea8b1d8acf3cbc17ddbc31e2
|
[
"MIT"
] | null | null | null |
2014/ce795/CFL_Nyquist_conditions.ipynb
|
nicoguaro/talks
|
01e9ddc4a44952a1ea8b1d8acf3cbc17ddbc31e2
|
[
"MIT"
] | null | null | null | 421.647222 | 32,319 | 0.923867 | true | 2,182 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.859664 | 0.845942 | 0.727226 |
__label__eng_Latn
| 0.463387 | 0.527922 |