# Modeling spiking neural networks with Brian
```python
# If Brian2 is not yet installed, the following will install it
# (otherwise it will print a number of "Requirement already satisfied" lines)
%pip install brian2
```
Requirement already satisfied: brian2 in /mnt/data/anaconda2/envs/datasci_course/lib/python3.7/site-packages (2.4.1)
Requirement already satisfied: sympy>=1.2 in /mnt/data/anaconda2/envs/datasci_course/lib/python3.7/site-packages (from brian2) (1.6.2)
Requirement already satisfied: setuptools>=24.2 in /mnt/data/anaconda2/envs/datasci_course/lib/python3.7/site-packages (from brian2) (49.6.0.post20201009)
Requirement already satisfied: pyparsing in /mnt/data/anaconda2/envs/datasci_course/lib/python3.7/site-packages (from brian2) (2.4.7)
Requirement already satisfied: jinja2>=2.7 in /mnt/data/anaconda2/envs/datasci_course/lib/python3.7/site-packages (from brian2) (2.11.2)
Requirement already satisfied: cython>=0.29 in /mnt/data/anaconda2/envs/datasci_course/lib/python3.7/site-packages (from brian2) (0.29.21)
Requirement already satisfied: numpy>=1.15 in /mnt/data/anaconda2/envs/datasci_course/lib/python3.7/site-packages (from brian2) (1.19.2)
Requirement already satisfied: mpmath>=0.19 in /mnt/data/anaconda2/envs/datasci_course/lib/python3.7/site-packages (from sympy>=1.2->brian2) (1.1.0)
Requirement already satisfied: MarkupSafe>=0.23 in /mnt/data/anaconda2/envs/datasci_course/lib/python3.7/site-packages (from jinja2>=2.7->brian2) (1.1.1)
Note: you may need to restart the kernel to use updated packages.
Let's first import "everything" from the Brian 2 package.
This also provides access to the scientific computing package numpy (imported as np), and to the package pyplot from the plotting library matplotlib (imported as plt).
We also ask the notebook to include plots directly in the notebook (instead of showing them in a separate window). Note that lines starting with % are commands specific to the Jupyter notebook; they won't work in a Python script, for example.
We also switch off Brian's "code generation" mechanism, which improves the performance for complex models by generating/compiling/executing C++ code "behind the scenes". For the simple models that we are covering in this tutorial, this is not necessary – it even slows things down due to the need for compilation.
```python
from brian2 import *
prefs.codegen.target = 'numpy'
```
```python
%matplotlib inline
%xmode minimal
```
Exception reporting mode: Minimal
## Neurons
Let's create a group of integrate-and-fire neurons. We use the same equation as in last week's tutorial
$$
\tau\frac{\mathrm{d}V}{\mathrm{d}t} = E_L - V + I_\mathrm{stim}
$$
If $V > V_\mathrm{threshold}$: emit a spike and set $V \leftarrow V_\mathrm{reset}$.
```python
start_scope()
N = 3
E_L = -50*mV
V_threshold = -30*mV
V_reset = -55*mV
C = 70*pF
g_L = 10*nS
tau = C/g_L
duration = 100*ms
neurons = NeuronGroup(N, '''dV/dt = (E_L - V + I_stim)/tau : volt
I_stim : volt (constant) # note, this is rescaled with g_L already''',
threshold='V>V_threshold', reset='V=V_reset', method='euler')
# Initialize values
neurons.V = V_reset
# neurons.I_stim = ...
# record membrane potential and spikes
v_mon = StateMonitor(neurons, 'V', record=True)
spike_mon = SpikeMonitor(neurons)
# run simulation
run(duration)
```
We could plot things directly by calling `plt.plot`, but by calling `plt.subplots` first we can easily arrange things in subplots.
```python
fig, ax = plt.subplots(2, 1, sharex=True)
ax[0].plot(spike_mon.t/ms, spike_mon.i, 'o')
ax[1].plot(v_mon.t/ms, v_mon.V.T/mV);
```
### f/I curve
```python
# Same code as before. What do we have to change to plot an f/I curve?
start_scope()
N = 3
E_L = -50*mV
V_threshold = -30*mV
V_reset = -55*mV
C = 70*pF
g_L = 10*nS
tau = C/g_L
duration = 100*ms
neurons = NeuronGroup(N, '''dV/dt = (E_L - V + I_stim)/tau : volt
I_stim : volt (constant) # note, this is rescaled with g_L already''',
threshold='V>V_threshold', reset='V=V_reset', method='euler')
# Initialize values
neurons.V = V_reset
# neurons.I_stim = ...
# record membrane potential and spikes
v_mon = StateMonitor(neurons, 'V', record=True)
spike_mon = SpikeMonitor(neurons)
# run simulation
run(duration)
```
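One possible way to turn the template above into an f/I curve is sketched below (this is our own sketch, not the official solution of the exercise): simulate more neurons, give each of them a different rescaled stimulus `I_stim`, run for a longer duration, and divide the spike counts from the `SpikeMonitor` by the simulated time. The variable names `stim_values` and `rates` are our own choices.
```python
start_scope()
N = 20
E_L = -50*mV
V_threshold = -30*mV
V_reset = -55*mV
C = 70*pF
g_L = 10*nS
tau = C/g_L
duration = 1*second
neurons = NeuronGroup(N, '''dV/dt = (E_L - V + I_stim)/tau : volt
                            I_stim : volt (constant)''',
                      threshold='V>V_threshold', reset='V=V_reset', method='euler')
neurons.V = V_reset
# one different (rescaled) stimulus per neuron
stim_values = np.linspace(0, 50, N)*mV
neurons.I_stim = stim_values
spike_mon = SpikeMonitor(neurons)
run(duration)
# firing rate = number of spikes per neuron divided by the simulated time
rates = spike_mon.count / duration
plt.plot(stim_values/mV, rates/Hz, 'o-')
plt.xlabel('$I_\\mathrm{stim}$ (mV)')
plt.ylabel('firing rate (Hz)')
```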
## Synapses
As mentioned in the lecture, synapses can be modeled at various levels of detail. Here, we will use an "exponential current-based" synapse. Let us first try this out with some artificial stimulation:
```python
start_scope()
E_L = -50*mV
V_threshold = -30*mV
V_reset = -55*mV
C = 70*pF
g_L = 10*nS
tau = C/g_L
tau_syn = 2*ms
duration = 100*ms
# Artificial stimulation
exc_stim = SpikeGeneratorGroup(1, [0, 0], [10, 85]*ms)
inh_stim = SpikeGeneratorGroup(1, [0, 0], [30, 80]*ms)
# 3 Neurons all connected to the same artificial input spike trains
# The first neuron only gets excitatory input, the second only inhibitory, the third gets both
neurons = NeuronGroup(3, '''dV/dt = (E_L - V + I_syn)/tau : volt
dI_syn/dt = -I_syn/tau_syn : volt # decay between spikes''',
threshold='V>V_threshold', reset='V=V_reset', method='euler')
# Initialize values
neurons.V = E_L
# Introduce synaptic connections
exc_syn = Synapses(exc_stim, neurons, on_pre='I_syn_post += 1*mV') # "on_pre" = "for each pre-synaptic spike"
exc_syn.connect(i=[0, 0], j=[0, 2]) # i = "source index", j = "target index"
inh_syn = Synapses(inh_stim, neurons, on_pre='I_syn_post -= 1*mV')
inh_syn.connect(i=[0, 0], j=[1, 2])
# record membrane potential and spikes
v_mon = StateMonitor(neurons, 'V', record=True)
spike_mon = SpikeMonitor(neurons)
# run simulation
run(duration)
```
```python
fig, axs = plt.subplots(3, 1, sharex=True)
axs[0].plot(v_mon.t/ms, v_mon.V[0]/mV)
axs[1].plot(v_mon.t/ms, v_mon.V[1]/mV)
axs[2].plot(v_mon.t/ms, v_mon.V[2]/mV);
```
### Network
```python
# This does not do anything interesting yet
start_scope()
N_E = 4000 # excitatory neurons
N_I = 1000 # inhibitory neurons
E_L = -50*mV
V_threshold = -30*mV
V_reset = -55*mV
C = 70*pF
g_L = 10*nS
tau = C/g_L
tau_syn = 2*ms
w_e = 1*mV
w_i = 1*mV
duration = 1000*ms
# Artificial stimulation
exc_stim = SpikeGeneratorGroup(1, [0, 0], [10, 85]*ms)
inh_stim = SpikeGeneratorGroup(1, [0, 0], [30, 80]*ms)
# Now a full network of N_E + N_I neurons
neurons = NeuronGroup(N_E + N_I,
'''dV/dt = (E_L - V + I_syn)/tau : volt
dI_syn/dt = -I_syn/tau_syn : volt # decay between spikes''',
threshold='V>V_threshold', reset='V=V_reset', method='euler')
# Initialize values
neurons.V = E_L
exc_neurons = neurons[:N_E] # Uses "slicing" to get subpopulations of neurons
inh_neurons = neurons[N_E:]
# Connect neurons to each other
exc_syn = Synapses(exc_neurons, neurons, on_pre='I_syn_post += w_e') # "on_pre" = "for each pre-synaptic spike"
exc_syn.connect(p=0.02) # 2% connection probability between each pair of neurons
inh_syn = Synapses(inh_neurons, neurons, on_pre='I_syn_post -= w_i')
inh_syn.connect(p=0.02)
# record spikes of all neurons, but the membrane potential only of 1 neuron
v_mon = StateMonitor(neurons, 'V', record=0)
spike_mon = SpikeMonitor(neurons)
# run simulation
run(duration, report='text') # for simulations that take longer to run
```
Starting simulation at t=0. s for a duration of 1. s
1. s (100%) simulated in 1s
```python
fig, ax = plt.subplots(2, 1, sharex=True, figsize=(12, 6))
ax[0].plot(spike_mon.t/ms, spike_mon.i, '.')
ax[1].plot(v_mon.t/ms, v_mon.V.T/mV);
```
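As a quick sanity check (our addition, not part of the original notebook), the average firing rate per neuron can be read off the spike monitor:
```python
# Mean firing rate across the network, from the spike counts of the monitor
rates = spike_mon.count / duration
print('mean firing rate: %.1f Hz' % np.mean(rates / Hz))
```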
*Source: `tutorials/T08-Spiking-Neural-Networks-Empty.ipynb`, repository `mgraupe/DataSciPy2021`, license CC-BY-4.0.*
This notebook is an attempt to obtain the broadband directivity functions, as Lasse suggested.
In Madsen & Wahlberg 2007 (Deep Sea Research), the authors describe the integration of the directivity function over the signal's frequency range as a way to get the broadband directivity function. Let's try it out here.
2021-05-21
```python
import matplotlib.pyplot as plt
import numpy as np
import sympy
from sympy import Integral, symbols, pi, lambdify
import beamshapes
from beamshapes import piston_in_infinite_baffle as pib
from beamshapes.utilities import dB
import tqdm
```
```python
%matplotlib notebook
```
```python
a,k,theta,kmin,kmax = symbols('a k theta kmin kmax')
```
This is the famous piston in an infinite baffle directivity function.
```python
pib.d_theta
```
$\displaystyle \frac{2 J_{1}\left(a k \sin{\left(\theta \right)}\right)}{a k \sin{\left(\theta \right)}}$
Let's now integrate it with respect to k from kmin to kmax
```python
Integral(pib.d_theta, (k,kmin,kmax))
```
$\displaystyle \int\limits_{kmin}^{kmax} \frac{2 J_{1}\left(a k \sin{\left(\theta \right)}\right)}{a k \sin{\left(\theta \right)}}\, dk$
```python
# make SymPy solve the integral
integral_solution = Integral(pib.d_theta, (k,kmin,kmax)).doit()
integral_solution
```
$\displaystyle \frac{kmax \sqrt{\sin^{2}{\left(\theta \right)}} {{}_{1}F_{2}\left(\begin{matrix} \frac{1}{2} \\ \frac{3}{2}, 2 \end{matrix}\middle| {- \frac{a^{2} kmax^{2} \sin^{2}{\left(\theta \right)}}{4}} \right)}}{\sin{\left(\theta \right)}} - \frac{kmin \sqrt{\sin^{2}{\left(\theta \right)}} {{}_{1}F_{2}\left(\begin{matrix} \frac{1}{2} \\ \frac{3}{2}, 2 \end{matrix}\middle| {- \frac{a^{2} kmin^{2} \sin^{2}{\left(\theta \right)}}{4}} \right)}}{\sin{\left(\theta \right)}}$
So, it looks like SymPy could solve this integral - which is nice! Let's convert the expression into a function and get some outputs for broadband beamshapes!
```python
broadband_directivity = lambdify([kmin,kmax,a,theta,], integral_solution, 'sympy')
```
```python
broadband_directivity(100,200,0.1,0.001)
```
$\displaystyle 200.0 {{}_{1}F_{2}\left(\begin{matrix} 0.5 \\ 1.5, 2 \end{matrix}\middle| {-9.99999666666711 \cdot 10^{-5}} \right)} - 100.0 {{}_{1}F_{2}\left(\begin{matrix} 0.5 \\ 1.5, 2 \end{matrix}\middle| {-2.49999916666678 \cdot 10^{-5}} \right)}$
```python
thetas = np.linspace(-np.pi/2, np.pi/2,50)
kmin_v, kmax_v = 50,250
av = 0.01
bbd_dtheta = np.array(np.abs([broadband_directivity(kmin_v, kmax_v, av, each).evalf() for each in thetas]), 'float32')
db_dtheta = dB(bbd_dtheta)
db_dtheta -= np.max(db_dtheta) # normalise to on-axis
```
```python
plt.figure()
a0 = plt.subplot(111, projection='polar')
plt.plot(thetas, db_dtheta)
a0.set_theta_zero_location("N");a0.set_thetamax(90);a0.set_thetamin(-90);plt.yticks(np.arange(-12,2,2));
plt.title(f'broadband beamshape, k:{kmin_v}-{kmax_v}')
```
Text(0.5, 1.0, 'broadband beamshape, k:50-250')
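As a numerical cross-check (our own addition, assuming that `pib.d_theta` depends only on the symbols `a`, `k` and `theta` as displayed above, and that `scipy` is available), the same broadband values can be obtained by integrating the single-frequency directivity over `k` with `scipy.integrate.quad`:
```python
# Numerical check of the symbolic integral at a few off-axis angles
from scipy import integrate
d_theta_num = lambdify([k, a, theta], pib.d_theta, 'scipy')  # 'scipy' maps besselj to scipy.special
for each_theta in [0.2, 0.7, 1.2]:
    quad_val, _ = integrate.quad(d_theta_num, kmin_v, kmax_v, args=(av, each_theta))
    sym_val = float(broadband_directivity(kmin_v, kmax_v, av, each_theta).evalf())
    print(f'theta={each_theta:.1f}: quad={quad_val:.6f}, sympy={sym_val:.6f}')
```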
Let's compare this to the single-frequency beamshapes for $k$ between 50 and 250
```python
onefreq_dthetas = []
kvalues = np.arange(50,300,50)
for kv in tqdm.tqdm(kvalues):
_, dtheta_onefreq = pib.piston_in_infinite_baffle_directivity(thetas, {'k':kv, 'a':av})
onefreq_dthetas.append(dtheta_onefreq)
```
100%|████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:52<00:00, 10.58s/it]
```python
plt.figure()
a1 = plt.subplot(111, projection='polar')
for each, kv in zip(onefreq_dthetas,kvalues):
a1.plot(thetas, each,label=str(kv))
a1.set_theta_zero_location("N");a1.set_thetamax(90);a1.set_thetamin(-90);plt.yticks(np.arange(-12,2,2));
plt.legend();
```
### Extending this exercise to other models...To be done.
*Source: `beamshapes/workshop/broadband directivity functions.ipynb`, repository `faroit/bat_beamshapes`, license MIT.*
# Reinforcement Learning for Ion Trap Quantum Computers
This exercise is a short extension of the **Ion Trap Reinforcement Learning Environment** where we are going to employ a Projective Simulation (PS) agent to find short laser-pulse sequences mapping an initially unentangled state $|000\rangle$ onto a GHZ-like state:
\begin{align}
|\mathrm{GHZ}\rangle = \frac{1}{\sqrt{d}}\sum_{i=0}^{d-1}|iii\rangle.\nonumber
\end{align}
We will consider three qutrits, i.e., $d=3$ for simplicity but you may choose to extend this at your own leisure.
More formally, we do not want to find GHZ states exactly but those states which are maximally entangled. We consider $n$ $d$-level states to be maximally entangled if they have a *Schmidt rank vector* (SRV) of $(d,...,d)$ where the $i$th entry is the rank of the reduced density matrix $\rho_i=\mathrm{tr}_{\bar{i}}(\rho)$ where $\bar{i}$ is the complement of $\{i\}$ in $\{1,...,n\}$.
Luckily, you don't really have to take care of this since this is already the default settings of the environment which we are going to load now:
```python
from ion_trap import IonTrapEnv
```
That was easy. According to the docs in the `init` method, the class allows the following kwargs:
* `num_ions` (int): The number of ions. Defaults to 3.
* `dim` (int): The local (odd) dimension of an ion. Defaults to 3.
* `goal` (list): List of SRVs that are rewarded. Defaults to `[[3,3,3]]`.
* `phases` (dict): The phases defining the laser gate set. Defaults to `{'pulse_angles': [np.pi/2], 'pulse_phases': [0, np.pi/2, np.pi/6], 'ms_phases': [-np.pi/2]}`
* `max_steps` (int): The maximum number of allowed time steps. Defaults to 10.
If you want to change anything, you need to provide kwargs in the form of a `dict` with the desired arguments, as follows: `IonTrapEnv(**{ 'max_steps': 20 })`.
Indeed, let us make a small change. Since this is just supposed to be a small-scale test, let us reduce the number of allowed phases and therefore the number of possible actions.
```python
import numpy as np
KWARGS = {'phases': {'pulse_angles': [np.pi/2], 'pulse_phases': [np.pi/2], 'ms_phases': [-np.pi/2]}}
env = IonTrapEnv(**KWARGS)
```
Next, we need to get the reinforcement learning agent that is to learn some pulse sequences. We have a simple PS agent for you in store:
```python
from ps import PSAgent
```
For the args of this class the docs say the following:
* `num_actions` (int): The number of available actions.
* `glow` (float, optional): The glow (or eta) parameter. Defaults to 0.1
* `damp` (float, optional): The damping (or gamma) parameter. Defaults to 0.
* `softmax` (float, optional): The softmax (or beta) parameter. Defaults to 0.1.
We don't know the number of actions at this point, but possibly want to keep all the other default parameters. Let's ask the environment how many actions there are and initialize the agent accordingly.
```python
num_actions = env.num_actions
agent = PSAgent(num_actions)
```
Fantastic, we have everything ready for a first run. Let's do that. The interaction between an environment and an agent is standardized through the [*openAI* `gym`](https://github.com/openai/gym) environments. In terms of code, the interaction is a simple loop: the agent observes, chooses an action, and learns from the resulting reward.
Indeed, every reinforcement learning environment should provide at least two methods:
* `reset()`: Resets the environment to its initial state. *Returns* the initial observation.
* `step(action)`: Performs an action (given by an action index) on the environment. *Returns* the new observation, an associated reward and a bool value `done` which indicates whether a terminal state has been reached.
The agent on the other hand, supports the following two main methods:
* `predict(observation)`: Given an observation, the agent predicts an action. *Returns* an action index.
* `train(reward)`: Uses the current reward to update the internal network.
Knowing that the `IonTrapEnv` has been built according to this standard and the agent features the two methods above, we can start coding the interaction between agent and environment:
```python
# data set for performance evaluation
DATA_STEPS = []
# maximum number of episodes
NUM_EPISODES = 5000
for i in range(NUM_EPISODES):
# initial observation from environment
observation = env.reset()
#bool: whether or not the environment has finished the episode
done = False
#int: the current time step in this episode
num_steps = 0
action_seq = []
while not done:
# increment counter
num_steps += 1
# predict action
action = agent.predict(observation)
action_seq.append(action)
# perform action on environment and receive observation and reward
observation, reward, done = env.step(action)
# learn from reward
agent.train(reward)
# gather statistics
if done:
DATA_STEPS.append(num_steps)
print(action_seq)
```
[0, 1, 5, 3, 0]
And this is all the code that is needed to have an agent interact with our environment! In `DATA_STEPS` we have gathered the data that keeps track of the length of pulse sequences that generate GHZ-like states. We can use `matplotlib` to visualize the performance of the agent over time:
```python
import matplotlib.pyplot as plt
import numpy as np
x_axis = np.arange(len(DATA_STEPS))
plt.plot(x_axis, DATA_STEPS)
plt.ylabel('Length of pulse sequence')
plt.xlabel('Episode')
```
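To make the trend easier to see, the noisy per-episode curve can be smoothed, for example with a simple moving average (our own addition, window size chosen arbitrarily):
```python
# Moving average over a fixed window of episodes
window = 50
smoothed = np.convolve(DATA_STEPS, np.ones(window)/window, mode='valid')
plt.plot(np.arange(len(smoothed)), smoothed)
plt.ylabel('Length of pulse sequence (moving average)')
plt.xlabel('Episode')
```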
We have witnessed an agent learning! The agent was able to push the gate sequences down to 5 laser pulses consisting of two Molmer-Sorensen gates and three single-ion laser pulses.
Note that this is of course not conclusive because it is a single agent. Nevertheless, it has obviously learned and we can expect future agents to fare similarly. **Good work!**
*Source: `Reinforcement Learning for Ion Traps.ipynb`, repository `HendrikPN/rl-ion-trap-tutorial`, license MIT.*
Exercise material of the MSc-level course **Numerical Methods in Geotechnical Engineering**.
Held at Technische Universität Bergakademie Freiberg.
Comments to:
*Prof. Dr. Thomas Nagel
Chair of Soil Mechanics and Foundation Engineering
Geotechnical Institute
Technische Universität Bergakademie Freiberg.*
https://tu-freiberg.de/en/soilmechanics
```python
import numpy as np
import matplotlib.pyplot as plt
#Some plot settings
import plot_functions.plot_settings
#FEM routines
%run plot_functions/fem_routines.ipynb
```
# Liquid-solid phase transition in soils: ground freezing
## Governing differential equation
We consider the problem as one dimensional with piecewise constant properties:
$$
\varrho \dot{h} = \lambda T_{,zz}
$$
$[\lambda] = \text{W m}^{-1} \text{K}^{-1}$: heat conductivity
$[\varrho] = \text{kg m}^{-3}$: density
$[h] = \text{J kg}^{-1}$: specific enthalpy
Let the ice volume fraction be given as an equilibrium function
$$
\phi_\text{I} = \phi \left[1 + e^{k(T - T_\text{m})} \right]^{-1}
$$
so that the apparent ice density is
$$
\varrho_\text{I} = \phi \varrho_\text{IR} \left[1 + e^{k(T - T_\text{m})} \right]^{-1}
$$
With $h(T,\varrho_\text{I})$ we then find the energy balance
$$
\left[ C - L \frac{\partial \varrho_\text{I}}{\partial T} \right] \dot{T} = \lambda T_{,zz}
$$
with
$[L] = \text{J kg}^{-1}$: specific latent heat of fusion
$[C] = \text{J m}^{-3} \text{K}^{-1}$: volumetric heat capacity.
We further use
\begin{align}
C &= (1 - \phi) c_{p\text{S}} \varrho_\text{SR} + \phi_\text{I} c_{p\text{I}} \varrho_\text{IR} + (\phi - \phi_\text{I}) c_{p\text{W}} \varrho_\text{WR}
\\
\lambda &= (1 - \phi) \lambda_\text{SR} + \phi_\text{I} \lambda_\text{IR} + (\phi - \phi_\text{I}) \lambda_\text{WR}
\end{align}
## Weak form
The temperature can have (the essential/Dirichlet) boundary conditions in the form:
$$
T = \bar{T}\ \forall z \in \partial \Omega_\mathrm{D}
$$
We now introduce a test function $\eta$ which vanishes where the temperature is given
$$
\eta = 0\ \forall z \in \partial \Omega_\mathrm{D}
$$
and construct the weak form (using integration by parts):
\begin{align}
0 &= \int \limits_0^H \eta \left[(C - L \partial_T \varrho_\text{I}) \dot{T} - \lambda T_{,zz} \right] \text{d}z
\\
&= \int \limits_0^H \left[\eta (C - L \partial_T \varrho_\text{I}) \dot{T} - \left( \eta \lambda T_{,z} \right)_{,z} + \eta_{,z} \lambda T_{,z} \right] \, \text{d}z
\\
&= \int \limits_0^H \left[\eta (C - L \partial_T \varrho_\text{I}) \dot{T} + \eta_{,z} \lambda T_{,z} \right] \, \text{d}z + \left[ \eta q_z \right]^H_0
\end{align}
where the natural/Neumann boundary conditions have appeared.
```python
import sympy as sp
k,T,Tm = sp.symbols('k T T_m')
f = 1/(1+sp.exp(k*(T-Tm)))
sp.diff(f,T)
```
$\displaystyle - \frac{k e^{k \left(T - T_{m}\right)}}{\left(e^{k \left(T - T_{m}\right)} + 1\right)^{2}}$
```python
def phi_I(T,phi=0.35,k=4,T_m=273.15):
return phi / (1 + np.exp(k*(T-(T_m-4/k))))
def dphi_I_dT(T,phi=0.35,k=4,T_m=273.15):
temp = np.exp(k*(T-(T_m-4/k)))
return -phi*k * temp / (1 + temp)**2
```
```python
T = np.linspace(-10,10,100)
fig, ax = plt.subplots()
ax.plot(T,phi_I(T+273.15),label='$k=4$')
ax.plot(T,phi_I(T+273.15,k=2),label='$k=2$')
ax.plot(T,phi_I(T+273.15,k=0.5),label='$k=0.5$')
#ax.plot(T,dphi_I_dT(T+273.15),label='$k=0.5$')
ax.axvline(0,ls=':')
ax.set_xlabel('$T$ / °C')
ax.set_ylabel('$\\phi_\\mathrm{I}$')
ax.legend()
fig.tight_layout();
```
```python
def heat_capacity(T,phi=0.35,k=4,T_m=273.15):
c_pW = 4186. #J/kgK
c_pI = 2052. #J/kgK
c_pS = 500. #J/kgK
rho_W = 1000. #kg/m³
rho_I = 900. #kg/m³
rho_S = 2600. #kg/m³
L = 334.e3 #J/kg
phi_ice = phi_I(T,phi,k,T_m)
rho_cp = rho_S * (1-phi) * c_pS + rho_W * (phi - phi_ice) * c_pW + rho_I * phi_ice * c_pI
return rho_cp - dphi_I_dT(T,phi,k,T_m)*L*rho_I
```
```python
def heat_conductivity(T,phi=0.35,k=4,T_m=273.15):
lam_W = 0.3 #W/mK
lam_I = 2.5 #W/mK
lam_S = 2.2 #W/mK
phi_ice = phi_I(T,phi,k,T_m)
return lam_S * (1-phi) + lam_W * (phi - phi_ice) + lam_I * phi_ice
```
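Before assembling the FE system, it can be instructive to plot the effective coefficients themselves (this check is an addition to the original exercise): the apparent heat capacity shows the latent-heat peak around the phase-change temperature, while the conductivity increases as water turns into ice.
```python
T_plot = np.linspace(263.15, 283.15, 500)
fig, ax = plt.subplots(ncols=2, figsize=(12, 4))
ax[0].plot(T_plot - 273.15, heat_capacity(T_plot))
ax[0].set_xlabel('$T$ / °C')
ax[0].set_ylabel('apparent heat capacity / J m$^{-3}$ K$^{-1}$')
ax[1].plot(T_plot - 273.15, heat_conductivity(T_plot))
ax[1].set_xlabel('$T$ / °C')
ax[1].set_ylabel('$\\lambda$ / W m$^{-1}$ K$^{-1}$')
fig.tight_layout();
```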
## Time and space discretization, Picard iterations
Using a backward Euler approach for simplicity we find the time-discrete weak form as ($T_{n+1} \equiv T$)
$$
0 = \int \limits_0^H \left[\eta (C - L \partial_T \varrho_\text{I}) \frac{T - T_n}{\Delta t} + \eta_{,z} \lambda T_{,z} \right] \, \text{d}z + \left[ \eta q_z \right]^H_0
$$
Introducing standard FE approximations
$$
T \approx N_i \hat{T}_i, \quad \eta \approx N_i \hat{\eta}_i, \quad \frac{\partial T}{\partial z} \approx \nabla N_i \hat{T}_i, \quad \frac{\partial \eta}{\partial z} \approx \nabla N_i \hat{\eta}_i
$$
This yields
$$
\begin{align}
0 &= \int \limits_0^H \left[ N_i \hat{\eta}_i (C - L \partial_T \varrho_\text{I}) \frac{N_k \hat{T}_k - N_k \hat{T}_{n,k}}{\Delta t} + \nabla N_i \hat{\eta}_i \lambda \nabla N_k \hat{T}_k \right] \, \text{d}z + \left[ N_i \hat{\eta}_i q_z \right]^H_0
\end{align}
$$
Now we bring all quantities associated with the unknown temperature to the left-hand side (LHS) and all known quantities to the RHS:
$$
\hat{\eta}_i \int \limits_0^H \left[ N_i \frac{(C - L \partial_T \varrho_\text{I})}{\Delta t} N_k + \nabla N_i \lambda \nabla N_k \right] \, \text{d}z\ \hat{T}_k = \hat{\eta}_i \left[ N_{n_\text{n}} \bar{q}_z|_{z=H} \delta_{n_\text{n}i} - N_{n_\text{n}} \bar{q}_z|_{z=0} \delta_{i0} \right] + \hat{\eta}_i \int \limits_0^H N_i \frac{(C - L \partial_T \varrho_\text{I})}{\Delta t} N_k \, \text{d}z\ \hat{T}_{n,k}
$$
can be simplified by realizing that the nodal test function values are arbitrary and thus
$$
\int \limits_0^H \left[ N_i \frac{(C - L \partial_T \varrho_\text{I})}{\Delta t} N_k + \nabla N_i \lambda \nabla N_k \right] \, \text{d}z\ \hat{T}_k = N_{n_\text{n}} \bar{q}_z|_{z=H} \delta_{n_\text{n}i} - N_{n_\text{n}} \bar{q}_z|_{z=0} \delta_{i0} + \int \limits_0^H N_i \frac{(C - L \partial_T \varrho_\text{I})}{\Delta t} N_k \, \text{d}z\ \hat{T}_{n,k}
$$
which leaves us with $n_\text{n}$ equations for the $n_\text{n}$ unknown nodal temperatures $\hat{T}_k$. If any coefficients in the above are taken as temperature-dependent (as is the case here for $C$ and $\lambda$), the system can be solved repeatedly using Picard iterations. For strong non-linearities, a Newton linearization would typically be used.
## Local assembler
```python
def local_assembler(elem,dt,prev_sol,sol,mass_lumping=False):
element_order = elem._line_element__nnodes
K_loc = np.zeros((element_order,element_order))
M_loc = np.zeros((element_order,element_order))
b_loc = np.zeros(element_order)
z_nodes = elem._line_element__coords
for i in range(elem._line_element__quad_degree):
#local integration point coordinate
xi = elem._line_element__quad_points[i]
#shape function
N = shape_function(element_order,xi)
#gradient of shape function
dN_dX = grad_shape_function(elem,xi)
#determinant of Jacobian
detJ = np.abs(element_jacobian(elem,xi))
#integration weight
w = elem._line_element__quad_weights[i]
#global integration point coordinate (for spatially varying properties)
#z_glob = np.dot(N,z_nodes)
#evaluation of local material/structural properties
#E = Stiffness(z_glob)
#evaluation of local body force
#CV = ConsolidationCoeff(z_glob)
T_prev = np.dot(N,prev_sol)#T_prev in integration point
T = np.dot(N,sol)#T in integration point
C = heat_capacity(T)
lam = heat_conductivity(T)
#assembly of local stiffness matrix
M_loc = np.outer(N,N) * C / dt
if (mass_lumping):
M_loc = np.diag(M_loc.sum(0)) #diagonal of column sum
K_loc += (np.outer(dN_dX,dN_dX) * lam + M_loc)* w * detJ
#assembly of local RHS
b_loc += N * C * T_prev/dt * w * detJ
return K_loc,b_loc
```
## Time loop and problem solution
We now establish the time loop and in each time step perform the global assembly, prescribe a fixed temperature of 263.15 K at the top node and leave the remaining boundary insulated (no heat flux).
```python
def time_loop(dt,nodes,elements,solution,mass_lumping=False):
#Startwerte
t_end = 366*24*60*60 #s
absolute_tolerance = 1.e-6
max_iter = 100
iteration_counter = np.array([0])
apply_initial_conditions(solution,283.15)
y = [solution] #create a list that will hold the solution vectors at all time points
times = np.array([0.])
#
while times[-1]+dt < t_end: #repeat the loop as long as the final time step is below the end point
times = np.append(times,times[-1]+dt) #here define the next time point as the previous time point plus the time increment dt
y_old = y[-1] #Starting value for recursive update
i = 0
#
while True:
K, f = global_assembler(nodes,elements,y[-1],y_old,dt,mass_lumping)
#f = apply_Neumann_bc(f,len(nodes)-1,0)
K, f = apply_Dirichlet_bc(K, f, len(nodes)-1, 263.15)#fixed temperature top
solution = np.linalg.solve(K,f)
i += 1
if (np.abs(np.linalg.norm(solution) - np.linalg.norm(y_old)) < absolute_tolerance or i > max_iter): #if change is below tolerance, stop iterations
break
y_old = solution #preparation of next recursion
y.append(solution) #append the new found solution to the solution vector
iteration_counter = np.append(iteration_counter,i) #store how much iterations this time step took to converge
return times, y,iteration_counter
```
```python
#spatial discretization
H = 10.
nel = 20
n_per_el = 3
nodes,elements,solution=generate_mesh(H,nel,n_per_el)
```
```python
times, sols, iters = time_loop(24*60*60,nodes,elements,solution)
```
```python
fig, ax = plt.subplots(ncols=2,figsize=(18,5))
ax[0].set_xlabel('$z$ / m')
ax[0].set_ylabel('$T$ / °C')
ax[0].plot(nodes, sols[0]-273.15, marker='o', label='$t = %i$ d' %(times[0]/60/60))
ax[0].plot(nodes, sols[1]-273.15, marker='o', label='$t = %i$ d' %(times[1]/60/60/24))
ax[0].plot(nodes, sols[10]-273.15, marker='o', label='$t = %i$ d' %(times[10]/60/60/24))
ax[0].plot(nodes, sols[30]-273.15, marker='o', label='$t = %i$ d' %(times[30]/60/60/24))
ax[0].plot(nodes, sols[200]-273.15, marker='o', label='$t = %i$ d' %(times[200]/60/60/24))
ax[0].plot(nodes, sols[-1]-273.15, marker='o', label='$t = %i$ d' %(times[-1]/60/60/24))
ax[1].set_xlabel('$z$ / m')
ax[1].set_ylabel('$\\phi_\\mathrm{I}$')
ax[1].plot(nodes, phi_I(sols[0]), marker='o')
ax[1].plot(nodes, phi_I(sols[1]), marker='o')
ax[1].plot(nodes, phi_I(sols[10]), marker='o')
ax[1].plot(nodes, phi_I(sols[30]), marker='o')
ax[1].plot(nodes, phi_I(sols[200]), marker='o')
ax[1].plot(nodes, phi_I(sols[365]), marker='o')
fig.legend(loc='upper center',ncol=6);
```
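As a post-processing example (our own addition), the position of the freezing front can be estimated from the stored solutions. We assume here that the temperature profile decreases monotonically from the warm bottom ($z=0$) towards the cooled top ($z=H$), so that the 0 °C isotherm can be located by linear interpolation.
```python
# Frost penetration depth measured from the cooled top boundary
front_depth = []
for sol in sols:
    if sol.min() < 273.15 < sol.max():
        # reverse so that temperature is increasing (assumes a monotonic profile)
        z_front = np.interp(273.15, sol[::-1], nodes[::-1])
        front_depth.append(H - z_front)
    else:
        front_depth.append(np.nan)
fig, ax = plt.subplots()
ax.plot(np.array(times)/(24*60*60), front_depth)
ax.set_xlabel('$t$ / d')
ax.set_ylabel('frost penetration depth / m')
fig.tight_layout();
```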
*Source: `10_freeze_thaw.ipynb`, repository `dominik-kern/Numerical_Methods_Introduction`, license MIT.*
# A Brief Overview of Numerical-Derivative Non-GIAO RHF Magnetizability Calculations
> Created: 2020-08-27
In this document, we discuss a program that computes the non-GIAO RHF magnetizability numerically, using PySCF and its interface to libcint. The document borrows heavily from the PySCF code [magnetizability/rhf.py](https://github.com/pyscf/pyscf/blob/master/pyscf/prop/magnetizability/rhf.py) and [nmr/rhf.py](https://github.com/pyscf/pyscf/blob/master/pyscf/prop/nmr/rhf.py). Some of the notation follows Atkins and Friedman [^Atkins-Friedman.Oxford.2010].
The molecule `mol` used in the discussion is a non-symmetric ammonia molecule in a minimal basis set. The gauge origin is placed at the coordinate origin `coord_orig`. The RHF calculation is stored in the instance `mf`, and the magnetizability calculation in the instance `mf_mag`.
```python
from pyscf import gto, scf
from pyscf.prop import nmr, magnetizability
import numpy as np
np.set_printoptions(precision=5, linewidth=150, suppress=True)
```
```python
mol = gto.Mole()
mol.atom = """
N 0. 0. 0.
H 0. 1. 0.2
H 0.1 0.3 1.5
H 0.9 0.4 -.2
"""
mol.basis = "STO-3G"
mol.verbose = 0
mol.build()
coord_orig = np.zeros(3)
```
Its SCF energy is
```python
mf = scf.RHF(mol).run()
mf.e_tot
```
-55.253540514686556
Its magnetizability tensor $\xi_{ts}$ is (where $t, s \in \{ x, y, z \}$ denote the three Cartesian directions; note that the gauge origin is chosen at the coordinate origin here, and a different choice of origin would give very different results)
```python
mf_mag = magnetizability.RHF(mf)
mf_mag.gauge_orig = coord_orig
mf_mag.kernel()
```
array([[-4.94475, 0.21773, -0.08268],
[ 0.21773, -4.27801, 0.49885],
[-0.08268, 0.49885, -4.15348]])
## Basic Concepts
### Molecular energy as a function of the external perturbation
We note that the magnetizability characterizes the energy change of a molecule placed in a constant external magnetic field $\boldsymbol{\mathscr{B}}$ (a three-dimensional vector):
$$
E_\mathrm{tot} (\boldsymbol{\mathscr{B}}) = E_\mathrm{tot}^{(0)} + E_\mathrm{tot}^{(1)} \boldsymbol{\mathscr{B}} + E_\mathrm{tot}^{(2)} \boldsymbol{\mathscr{B}}^2 + \cdots
$$
Following the usual physics convention, for an external magnetic-field perturbation we have (Atkins and Friedman, eq 13.34)
$$
E_\mathrm{tot}^{(2)} = - \frac{1}{2} \boldsymbol{\mathscr{B}}^\dagger \boldsymbol{\xi} \boldsymbol{\mathscr{B}} = - \frac{1}{2} \sum_{t, s \in \{ x, y, z \}} \mathscr{B}_t \xi_{ts} \mathscr{B}_s
$$
where $\boldsymbol{\xi}$ is a symmetric matrix (a rank-2 tensor, as shown in the code above). We use $\boldsymbol{\xi}$ (Atkins and Friedman, eq 13.34, termed *magnetizability*) rather than $\boldsymbol{\chi}$ (Atkins and Friedman, eq 13.3c, termed *magnetic susceptibility*) to denote the magnetizability. The magnetizability can therefore be expressed as (in matrix-element and matrix form)
$$
\xi_{ts} = - \frac{\partial^2 E_\mathrm{tot} (\boldsymbol{\mathscr{B}})}{\partial \mathscr{B}_t \partial \mathscr{B}_s},
\quad \boldsymbol{\xi} = - \boldsymbol{\nabla}_{\boldsymbol{\mathscr{B}}} \boldsymbol{\nabla}_{\boldsymbol{\mathscr{B}}}^\dagger E_\mathrm{tot} (\boldsymbol{\mathscr{B}})
$$
### The Hamiltonian as an operator of the external perturbation
The energy is obtained as the expectation value of the Hamiltonian over the wavefunction at the variational minimum:
$$
E_\mathrm{tot} (\boldsymbol{\mathscr{B}}) = \langle \Psi (\boldsymbol{\mathscr{B}}) | \hat H (\boldsymbol{\mathscr{B}}) | \Psi (\boldsymbol{\mathscr{B}}) \rangle
$$
where
$$
\hat H (\boldsymbol{\mathscr{B}}) = \sum_{i} \hat h (\boldsymbol{\mathscr{B}}, \boldsymbol{r}_i) + \hat V_\mathrm{ee} + \hat V_\mathrm{NN}
$$
The operator above is the total many-electron Hamiltonian of the system; $\hat h (\boldsymbol{\mathscr{B}})$ is the one-electron Core Hamiltonian, $\hat V_\mathrm{ee}$ the electron-electron repulsion operator, and $\hat V_\mathrm{NN}$ the nuclear repulsion operator. Note that since we do not use GIAOs, $\hat V_\mathrm{ee}$ is just the ordinary electron repulsion operator and is unaffected by the external field $\boldsymbol{\mathscr{B}}$; with GIAOs, an additional contribution from this term may need to be considered.
$$
\hat h (\boldsymbol{\mathscr{B}}) = \hat h {}^{(0)} + \hat h {}^{(1)} (\boldsymbol{\mathscr{B}}) + \hat h {}^{(2)} (\boldsymbol{\mathscr{B}})
$$
$\hat h {}^{(0)}$ is the field-free operator (the same operator used in the SCF calculation). The remaining operators are (Atkins and Friedman, eq 13.26, eq 13.29)
$$
\begin{align}
\hat h {}^{(1)} (\boldsymbol{\mathscr{B}}) &= \frac{1}{2} \boldsymbol{\mathscr{B}} \cdot \boldsymbol{r} \times \boldsymbol{\hat{p}} \\
\hat h {}^{(2)} (\boldsymbol{\mathscr{B}}) &= \frac{1}{8} \big( \boldsymbol{\mathscr{B}}^2 \boldsymbol{r}^2 - (\boldsymbol{\mathscr{B}} \cdot \boldsymbol{r})^2 \big)
\end{align}
$$
where
$$
\boldsymbol{r} = \begin{pmatrix} x \\ y \\ z \end{pmatrix}, \quad
\boldsymbol{\hat p} = \begin{pmatrix}
\displaystyle - i \frac{\partial}{\partial x} \\
\displaystyle - i \frac{\partial}{\partial y} \\
\displaystyle - i \frac{\partial}{\partial z}
\end{pmatrix} = -i \nabla \boldsymbol{r}
$$
With these definitions,
$$
E_\mathrm{tot}^{(2)} = 2 \langle \Psi^{(0)} (\boldsymbol{\mathscr{B}}) | \sum_i \hat h {}^{(1)} (\boldsymbol{\mathscr{B}}, \boldsymbol{r}_i) | \Psi^{(1)} (\boldsymbol{\mathscr{B}}) \rangle + \langle \Psi^{(0)} | \sum_i \hat h {}^{(2)} (\boldsymbol{\mathscr{B}}, \boldsymbol{r}_i) | \Psi^{(0)} \rangle
$$
The first term is called the paramagnetic term, the second the diamagnetic term. $\Psi^{(0)}$ is an eigenstate of the unperturbed Hamiltonian $\hat H {}^{(0)}$, and $\Psi^{(1)} (\boldsymbol{\mathscr{B}})$ is the first-order perturbed wavefunction; analytically, it is represented in the program by the U matrix obtained from the CP-HF equations. We will not discuss the analytic approach here, but it is helpful to be aware of the distinction between these two terms.
## Program Implementation of the Core Hamiltonian
We know that in PySCF, overriding the Core Hamiltonian method of an SCF instance is enough to compute energies under an external-field perturbation. This is explained in the pyxdh dipole-moment [documentation](https://py-xdh.readthedocs.io/zh_CN/latest/numdiff/num_dip.html). Here we do something similar.
### Paramagnetic term
In PySCF, the paramagnetic term $\hat h {}^{(1)} (\boldsymbol{\mathscr{B}}) = \frac{1}{2} \boldsymbol{\mathscr{B}} \cdot \boldsymbol{r} \times \boldsymbol{\hat{p}}$ has a corresponding integral `hcore_1` ($h_{t \mu \nu}^{(1)}$; note that it does not include the scalar $\mathscr{B}_t$)
$$
h_{t \mu \nu}^{(1)} \cdot \mathscr{B}_t = \langle \mu | \hat h {}^{(1)} (\mathscr{B}_t) | \nu \rangle
$$
```python
hcore_1 = - 0.5 * mol.intor("int1e_cg_irxp") * 1j
hcore_1.shape, hcore_1.dtype
```
((3, 8, 8), dtype('complex128'))
The code above may look odd because complex numbers appear. Let us explain it piece by piece.
**Integral string**
We used the integral string `int1e_cg_irxp`. Its meaning can be understood from the [auto_intor.cl](https://github.com/sunqm/libcint/blob/abf6948fa17e5b4ecbd26de05bf4b1d7b2b2fe3c/scripts/auto_intor.cl#L12) script:
```lisp
'("int1e_cg_irxp" (#C(0 1) \| rc cross p))
```
The right-hand side gives the explicit form of the integral; the notation is documented in the [README](https://github.com/sunqm/libcint/blob/master/README) file and means
$$
\mathtt{int1e\_cg\_irxp} = i \langle \mu | \boldsymbol{r} \times \boldsymbol{\hat p} | \nu \rangle
$$
Its dimensions are $(t, \mu, \nu)$, but the first dimension arises from the vector cross product, so it is not directly related to the dimension of $\boldsymbol{r}$ or $\boldsymbol{\hat p}$. If we write the angular-momentum operator as $\boldsymbol{\hat l} = \boldsymbol{r} \times \boldsymbol{\hat p}$, the integral above can be written as
$$
\mathtt{int1e\_cg\_irxp}_{t \mu \nu} = i \langle \mu | \hat l_t | \nu \rangle
$$
**Antisymmetry and Hermiticity**
We should note that $\mathtt{int1e\_cg\_irxp}_{t \mu \nu}$ is an antisymmetric matrix (i.e., it changes sign when $\mu$ and $\nu$ are exchanged)
```python
np.allclose(mol.intor("int1e_cg_irxp"), - mol.intor("int1e_cg_irxp").swapaxes(-1, -2))
```
True
This is because the $\nabla$ operator itself is antisymmetric. However, note that the momentum operator additionally carries the imaginary unit $-i$; the resulting matrix is therefore Hermitian, i.e., equal to the conjugate of its transpose. The $h_{t \mu \nu}^{(1)}$ we defined has exactly this property:
```python
np.allclose(hcore_1, hcore_1.swapaxes(-1, -2).conj())
```
True
Therefore, we say that `hcore_1` $h_{t \mu \nu}^{(1)}$ is Hermitian.
### Diamagnetic term
The diamagnetic term $\hat h {}^{(2)} (\boldsymbol{\mathscr{B}}) = \frac{1}{8} \big( \boldsymbol{\mathscr{B}}^2 \boldsymbol{r}^2 - (\boldsymbol{\mathscr{B}} \cdot \boldsymbol{r})^2 \big)$ requires a little more work to generate. PySCF can produce the tensor `int1e_rr`:
$$
\mathtt{int1e\_rr}_{ts \mu \nu} = \langle \mu | ts | \nu \rangle
$$
We define `hcore_2` $h_{ts \mu \nu}^{(2)}$ as
$$
h_{ts \mu \nu}^{(2)} = \frac{1}{8} \big( \delta_{ts} \langle \mu | x^2 + y^2 + z^2 | \nu \rangle - \langle \mu | ts | \nu \rangle \big)
$$
```python
with mol.with_common_orig(coord_orig):
int1e_rr = mol.intor("int1e_rr").reshape(3, 3, mol.nao, mol.nao)
hcore_2 = 1/8 * (np.einsum("ts, uv -> tsuv", np.eye(3), int1e_rr.diagonal(0, 0, 1).sum(-1)) - int1e_rr)
```
This tensor has the following property:
$$
h_{ts \mu \nu}^{(2)} \cdot \mathscr{B}_t \mathscr{B}_s = \langle \mu | \hat h {}^{(2)} (\mathscr{B}_t, \mathscr{B}_s) | \nu \rangle
$$
### Core Hamiltonian implementation
Finally, we can write the Core Hamiltonian under the external magnetic-field perturbation and compute the molecular energy under this perturbation. To speed up the calculation, the unperturbed SCF density is used as the initial guess `dm_guess`. The Core Hamiltonian reads
$$
h_{\mu \nu} (\boldsymbol{\mathscr{B}}) = h_{\mu \nu} (\mathscr{B}_x, \mathscr{B}_y, \mathscr{B}_z)
= h_{\mu \nu}^{(0)} + \sum_{t} h_{t \mu \nu}^{(1)} \mathscr{B}_t + \sum_{ts} h_{ts \mu \nu}^{(2)} \mathscr{B}_t \mathscr{B}_s
$$
```python
dm_guess = mf.make_rdm1()
def hcore_mag_field(dev_xyz):
mf = scf.RHF(mol)
def hcore(mol_):
hcore_total = np.asarray(scf.rhf.get_hcore(mol_), dtype=np.complex128)
hcore_total += np.einsum("tuv, t -> uv", hcore_1, dev_xyz)
hcore_total += np.einsum("tsuv, t, s -> uv", hcore_2, dev_xyz, dev_xyz)
return hcore_total
mf.get_hcore = hcore
return mf.kernel(dm=dm_guess)
```
The argument `dev_xyz` of the function above holds the three Cartesian components of the applied field, in a.u.
For example, if a field of $\mathscr{B}_x = 1 \, \mathsf{a.u.}$ is applied along $x$ and $\mathscr{B}_y = 2 \, \mathsf{a.u.}$ along $y$ (i.e., $\boldsymbol{\mathscr{B}} = (\mathscr{B}_x, \mathscr{B}_y, \mathscr{B}_z) = (1, 2, 0) \, \mathsf{a.u.}$), the following call returns the SCF energy $E_\mathrm{tot} (\mathscr{B}_x, \mathscr{B}_y, \mathscr{B}_z)$:
```python
hcore_mag_field((1, 2, 0))
```
-57.997213868466154
## Magnetizability from Numerical Derivatives
With a program for $E_\mathrm{tot} (\mathscr{B}_x, \mathscr{B}_y, \mathscr{B}_z)$ at hand, we can now take numerical derivatives. A simple central-difference scheme suffices; for two distinct variables $x \neq y$ (and sufficiently small $h$),
$$
\frac{\partial^2 f}{\partial x \partial y} \simeq \frac{1}{4 h^2} \big[ f(x + h, y + h) - f(x - h, y + h) - f(x + h, y - h) + f(x - h, y - h) \big]
$$
and for a second derivative with respect to the same variable,
$$
\frac{\partial^2 f}{\partial x^2} \simeq \frac{1}{h^2} \big[ f(x + h) - 2 f(x) + f(x - h) \big]
$$
The program below evaluates the second numerical derivatives according to these two formulas. The expansion point is $(\mathscr{B}_x, \mathscr{B}_y, \mathscr{B}_z) = (0, 0, 0)$, i.e., the field-free SCF energy `eng_origin`, and the step size is `interval` $h = 10^{-3} \, \mathsf{a.u.}$. Note that, by convention,
$$
\xi_{ts} = - \frac{\partial^2 E_\mathrm{tot} (\boldsymbol{\mathscr{B}})}{\partial \mathscr{B}_t \partial \mathscr{B}_s}
$$
so the magnetizability `num_polar` $\xi_{ts}$ obtained this way must be multiplied by $-1$.
```python
eng_origin = hcore_mag_field((0, 0, 0))
interval = 1e-3
num_polar = np.zeros((3, 3))
for t in range(3):
for s in range(3):
if t != s:
dev_xyzs = np.zeros((4, 3))
dev_xyzs[0, t] = dev_xyzs[0, s] = dev_xyzs[1, t] = dev_xyzs[2, s] = interval
dev_xyzs[3, t] = dev_xyzs[3, s] = dev_xyzs[2, t] = dev_xyzs[1, s] = -interval
num_polar[t, s] = (
+ hcore_mag_field(dev_xyzs[0])
- hcore_mag_field(dev_xyzs[1])
- hcore_mag_field(dev_xyzs[2])
+ hcore_mag_field(dev_xyzs[3])
) / (4 * interval**2)
else:
dev_xyzs = np.zeros((2, 3))
dev_xyzs[0, t], dev_xyzs[1, t] = interval, -interval
num_polar[t, t] = (
+ hcore_mag_field(dev_xyzs[0])
                + hcore_mag_field(dev_xyzs[1])
- eng_origin * 2
) / (interval ** 2)
num_polar *= -1
```
```python
num_polar
```
array([[-4.94475, 0.21773, -0.08268],
[ 0.21773, -4.27801, 0.49885],
[-0.08268, 0.49885, -4.15348]])
Finally, we compare with PySCF's analytic result:
```python
mf_mag.kernel()
```
array([[-4.94475, 0.21773, -0.08268],
[ 0.21773, -4.27801, 0.49885],
[-0.08268, 0.49885, -4.15348]])
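As a simple consistency check (added here), the two tensors can also be compared programmatically; they should agree to within the accuracy of the finite-difference scheme:
```python
np.allclose(num_polar, mf_mag.kernel(), atol=1e-4)
```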
[^Atkins-Friedman.Oxford.2010]: Atkins, P. W.; Friedman, R. S. *Molecular Quantum Mechanics*; Oxford University Press, 2010.
*Source: `source/QC_Notes/Prop_Series/Mag_NoGIAO_NumDeriv.ipynb`, repository `ajz34/ajz34.readthedocs.io`, license MIT.*
# Real Analysis
### 1. Sequences
For each of the patterns below, define a formula for $n$-th term $x_n$ of the sequence. Then, plot the first 20 terms of the sequences. Combine all three plots into a figure using the subplot command. Label everything.
a) 4, 8, 12, 16,...
b) 1, 6, 11, 16, 21, ...
c) 1/2, 2/3, 3/4, 4/5, ...
d) 1/2, -1/4, 1/8, -1/16, ...
e) 1, 1, 2, 3, 5, 8, ..
####Let a= first term, n=number of terms, d= difference between 2 consecutive terms, Then,
####a) nth term, Xn = a + (n-1)d = 4+(n-1)4 = 4+4n-4 = 4n
####b) nth term, Xn = a + (n-1)d = 1+(n-1)5 = 1+5n-5 = 5n-4
####c) nth term, Xn = $ \frac{n}{n+1}$
####d) nth term, Xn = $ \frac{(-1)^{n+1}}{2^n}$
####e) nth term, Xn = Xn-1 + Xn-2 (Fibonacci series)
```python
import matplotlib.pyplot as plt
%matplotlib inline
A1 = [] #Initializing an empty array for storing number of iterations value, n
B1, B2, B3, B4, B5 = [], [], [], [], [] #Initializing empty arrays for storing sequence output
a1 = 4 #Initializing first term for P1.a
#Loop for getting the sequence for P1.a
for n in range(1, 21):
a1 = 4*n
A1.append(n)
B1.append(a1)
print(B1,end=',\n')
a2 = 1 #Initializing first term for P1.b
#Loop for getting the sequence for P1.b
for n in range(1, 21):
a2 = (5*n)-4
B2.append(a2)
print(B2,end=',\n')
a3 = 1/2 #Initializing first term for P1.c
#Loop for getting the sequence for P1.c
for n in range(1, 21):
a3 = n/(n+1)
B3.append(a3)
print(B3, end=',\n')
a4 = 1/2 #Initializing first term for P1.d
#Loop for getting the sequence for P1.d
for n in range(1, 21):
    a4 = ((-1)**(n+1))/(2**n)
B4.append(a4)
print(B4, end=',\n')
#Initializing sequence for P1.e
a = 0
b = 1
c = 0
B5.append(b)
#Loop for getting the sequence for P1.e
for n in range(19):
c = a+b
a = b
b = c
B5.append(c)
print(B5, end=',\n')
#Plotting all the sequences
fig,ax = plt.subplots(1,5, figsize = (30,5))
fig.suptitle('Plotting all the sequences', color='b')
ax[0].plot(A1,B1, 'r', label='4n'); ax[0].set_title('P1.a) 4, 8, 12, 16,...')
ax[1].plot(A1,B2, 'c', label='5n-4'); ax[1].set_title('P1.b) 1, 6, 11, 16, 21, ...')
ax[2].plot(A1,B3, 'g', label='n/(n+1)'); ax[2].set_title('P1.c) 1/2, 2/3, 3/4, 4/5, ...')
ax[3].plot(A1,B4, 'y', label='(-1)^(n+1)/2^n'); ax[3].set_title('P1.d) 1/2, -1/4, 1/8, -1/16, ...')
ax[4].plot(A1,B5, 'm', label='Fibonacci Series'); ax[4].set_title('P1.e) 1, 1, 2, 3, 5, 8, ..')
for i in range(5):
ax[i].legend()
ax[i].set_xlabel('Input-n')
ax[i].set_ylabel('X_n')
```
### 2. Sequence Convergence
Show that the following sequences have the given limits using either an $\varepsilon-\delta$ argument, or an appropriate theorem from the lecture notes.
a) $\lim \frac{3n+1}{2n+5} = \frac{3}{2}$
b) $\lim \frac{2n}{n+2} = 2$
c) $\lim\left(\frac{1}{n} - \frac{1}{n+1} \right) = 0$
d) $\lim \frac{n + 5}{n^{2}} = 0$
e) $\lim \left(\frac{1}{3n}\right)^{2} = 0$
P2.a. In this example, we have,
$f(x) = \frac {3n+1}{2n+5}$ and $Limit, L = \frac {3}{2}$. For any $\varepsilon > 0, \left|f(x) - \frac{3}{2}\right| < \varepsilon$ proves the convergence.
\begin{aligned} \left| \frac {3n+1}{2n+5} - \frac {3}{2} \right| &= \left| \frac{(6n+2)-(6n+15)}{2(2n+5)}\right| < \varepsilon \\ &=) \left| \frac{-13}{4n+10}\right|<\varepsilon \\ &=) \left|\frac{-13}{4n}\right| < \varepsilon\\&=) \frac{13}{4n} < \varepsilon \\ &=) \frac{13}{4\varepsilon}<n\\&=)for\ n>N, we\ get, \ N=\left \lceil \frac{13}{4\varepsilon } \right \rceil \ satisfies\ the\ condition.
\end{aligned}
When we plot the difference of function and limit, $f(x)-L$ with $N\geq65$ and $\varepsilon=0.05$ it shows the convergence of $f(x)$.
```python
#P2.a
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
n = np.linspace(65 , 100, 200)
#Plotting the sequence for N>=65 and ep=0.05
plt.plot(n, 3/2-(3*n+1)/(2*n+5), 'r', label='x_n=Convergence')
plt.title('P2.a')
plt.xlabel('Input-n')
plt.ylabel('x_n')
plt.legend()
```
P2.b. In this example, we have,
$function, f(x) = \frac {2n}{n+2}$ and $Limit, L = 2$. For any $\varepsilon > 0 , \left|f(x) - 2\right| < \varepsilon$ proves the convergence.
\begin{aligned} \left| \frac {2n}{n+2} - 2 \right| &= \left| \frac{2n-2(n+2)}{n+2}\right| = \left| \frac{4}{n+2}\right|< \varepsilon \\&=)\frac {4}{n+2} < \varepsilon \\ &=) \frac {n+2}{4} > \frac{1}{\varepsilon} \\ &=) n > \frac{4}{\varepsilon}-2\\& =)for\ n>N, we\ get, N=\left \lceil \frac{4}{\varepsilon}-2\right \rceil\ satisfies\ the\ condition.
\end{aligned}
When we plot the difference of function and limit, $f(x)-L$ with $N\geq78$ and $\varepsilon=0.05$ it shows the convergence of $f(x)$.
```python
#P2.b
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
n = np.linspace(78 , 100, 200)
#Plotting the sequence for N>=78 and ep=0.05
plt.plot(n, 2-(2*n)/(n+2), 'b', label='x_n=Convergence')
plt.title('P2.b')
plt.xlabel('Input-n')
plt.ylabel('x_n')
plt.legend()
```
P2.c. In this example, we have,
$function, f(x) = \frac {1}{n}-\frac {1}{n+1}$ and $Limit, L = 0$. For any $\varepsilon > 0 , \left|f(x) - 0\right| < \varepsilon$ proves the convergence.
\begin{aligned} \left| \frac {1}{n}-\frac {1}{n+1} \right| &< \left| \frac{1}{n}\right|< \varepsilon \\&=)\frac {1}{n} < \varepsilon \\ &=)\frac {1}{\varepsilon} < n, \ for\ n>N, we\ get, N=\left \lceil \frac{1}{\varepsilon}\right \rceil\ satisfies\ the\ condition.
\end{aligned}
When we plot the difference of function and limit, $f(x)-L$ with $N\geq20$ and $\varepsilon=0.05$ it shows the convergence of $f(x)$.
```python
#P2.c
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
n = np.linspace(20 , 100, 200)
#Plotting the sequence for N>=20 and ep=0.05
plt.plot(n, 1/n-1/(n+1), 'g', label='x_n=Convergence')
plt.title('P2.c')
plt.xlabel('Input-n')
plt.ylabel('x_n')
plt.legend()
```
P2.d. In this example, we have,
$function, f(x) = \frac {n+5}{n^2}$ and $Limit, L = 0$. For any $\varepsilon > 0 , \left|f(x) - 0\right| < \varepsilon$ proves the convergence.
\begin{aligned} \left| \frac {n+5}{n^2} - 0\right| &< \varepsilon \\ &=)\ for\ n \geq 5:\ \frac {n+5}{n^2} \leq \frac{2n}{n^2} = \frac{2}{n} < \varepsilon \\&=) \frac{2}{\varepsilon} < n, \ for\ n>N, we\ get, N=\max\left(5, \left \lceil \frac {2}{\varepsilon}\right \rceil\right)\ satisfies\ the\ condition.
\end{aligned}
When we plot the difference of function and limit, $f(x)-L$ with $N\geq40$ and $\varepsilon=0.05$ it shows the convergence of $f(x)$.
```python
#P2.d
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
n = np.linspace(40 , 100, 200)
#Plotting the sequence for N>=40 and ep=0.05
plt.plot(n, (n+5)/(n**2), 'm', label='x_n=Convergence')
plt.title('P2.d')
plt.xlabel('Input-n')
plt.ylabel('x_n')
plt.legend()
```
P2.e. In this example, we have,
$function, f(x) = (\frac {1}{3n})^2$ and $Limit, L = 0$. For any $\varepsilon > 0 , \left|f(x) - 0\right| < \varepsilon$ proves the convergence.
\begin{aligned} \left| (\frac {1}{3n})^2-0\right| &< \varepsilon \\&\left| \frac {1}{9n^2}\right| < \varepsilon \\&=) \frac{1}{9n^2}< \varepsilon\\&=) \frac{1}{9\varepsilon}< n^2 \\ &=)\frac {1}{3\sqrt\varepsilon} < n, \ for\ n>N, we\ get, N=\left \lceil \frac {1}{3\sqrt\varepsilon}\right \rceil\ satisfies\ the\ condition.
\end{aligned}
When we plot the difference of function and limit, $f(x)-L$ with $N\geq1.5$ and $\varepsilon=0.05$ it shows the convergence of $f(x)$.
```python
#P2.e
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
n = np.linspace(1.5 , 100, 200)
#Plotting the sequence for N>=1.5 and ep=0.05
plt.plot(n, ((1/(3*n))**2), 'y', label='x_n=Convergence')
plt.title('P2.e')
plt.xlabel('Input-n')
plt.ylabel('x_n')
plt.legend()
```
### 3. Proving non-convergence
Show that the sequence $(x_n)$ where $x_n = (-1)^n$ does not converge to any number $c$. Use the definition of a limit for sequences. Given $\varepsilon > 0$, you have to show that there does not exist an $m$ such that for all $n>m$ such that $c-\varepsilon < x_n < c+\varepsilon$.
In this example, lets assume,
$f(x) = x_n = (-1)^n$ converges to any number $c$. For any $$\varepsilon > 0 , \left|f(x) - c\right| < \varepsilon$$ proves the convergence.
$$\left| (-1)^n-c\right| < \varepsilon$$ Let $\varepsilon=\frac{1}{2}$. By the definition of convergence, there is an $N$ such that for any $n>N$, $$\left| (-1)^n-c\right| < \frac{1}{2}$$ This means that the distance between any two points $(−1)^n$ and $(−1)^m$, for $m,n>N$, is at most $2\cdot\frac{1}{2}=1$, since both are within $\frac{1}{2}$ of $c$.
But this cannot be true as the distance between any $(−1)^n$ and $(−1)^{n+1}$ is 2 for n>N. Therefore the sequence does not converge for $\varepsilon=\frac{1}{2}$ and hence given sequence is not convergent (seen in the plot as well).
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
n = np.linspace(1 , 20, 20)
#Plotting the sequence
plt.plot(n, (-1)**n, 'b', label='x_n=Non-Convergence')
plt.title('P3')
plt.xlabel('Input-n')
plt.ylabel('x_n')
plt.legend()
```
### 4. Sequences via recurrence relations
Let $x_0 = 1$ and $x_{i+1} = - a x_i$ where $0 < a < 1$.
a) Plot the first 20 points in $(x_n)$ when $a=\frac{1}{2}$.
b) Show that $(x_n)$ converges to 0.
```python
import matplotlib.pyplot as plt
%matplotlib inline
x = 1 #Initializing the sequence
a = 1/2 #Initializing the constant
A , B= [],[] #Intializing empty arrays for storing number of iterations and sequence outputs
#Loop for getting the sequence
for n in range(20):
x = -a*x
A.append(n)
B.append(x)
print(n+1,")", x, end=',')
#Plotting the sequence
plt.plot(A, B, 'r', label='-ax, a=1/2')
plt.title('P4. Sequences via recurrence relations')
plt.xlabel('Input-n')
plt.ylabel('x_n')
plt.legend()
```
For the given conditions: $x_0 = 1$ and $x_{i+1} = - a x_i$ where $0 < a < 1$, when we select $n=20, a=\frac{1}{2}$ and plot the sequence, it is clearly observed that the sequence $(x_n)$ converges to 0.
### 5. Sequences via recurrence relations - 2
Let $x_0 = 1$ and $x_{i+1} = - a x_i + \frac{1}{5}$ where $0 < a < 1$.
a) Plot the first 20 points in $(x_n)$ when $a=\frac{1}{3}$.
b) Show that $(x_n)$ converges to 0.
```python
import matplotlib.pyplot as plt
%matplotlib inline
x = 1 #Initializing the sequence
a = 1/3 #Initializing the constant
A , B= [],[] #Intializing empty arrays for storing number of iterations and sequence outputs
#Loop for getting the sequence
for n in range(20):
x = -a*x+(1/5)
A.append(n)
B.append(x)
print(n+1,")", x, end=',')
#Plotting the sequence
plt.plot(A, B, 'm', label='-ax+(1/5),a=1/3')
plt.title('P5. Sequences via recurrence relations-2')
plt.xlabel('Input-n')
plt.ylabel('x_n')
plt.legend()
```
For the given conditions: $x_0 = 1$ and $x_{i+1} = - a x_i+ \frac{1}{5}$ where $0 < a < 1$, when we select $n=20$ and $a=\frac{1}{3}$ and plot the sequence, it is clearly observed that the sequence $(x_n)$ converges to a constant value= $ 0.15$.
### 6. Sequences via recurrence relations - 3
Let $x_0 = 1$ and $x_{i+1} = - a x_i$ where $0 < a \leq 1$.
a) Plot the first 20 points in $(x_n)$ when $a=1$.
b) What happens with the convergance of the given sequences $(x_n)$?
```python
import matplotlib.pyplot as plt
%matplotlib inline
x = 1 #Initializing the sequence
a = 1 #Initializing the constant
A , B= [],[] #Intializing empty arrays for storing number of iterations and sequence outputs
#Loop for getting the sequence
for n in range(20):
x = -a*x
A.append(n)
B.append(x)
print(n+1,")",x, end=',')
#Plotting the sequence
plt.plot(A, B, 'g', label='-ax, a=1')
plt.title('P6. Sequences via recurrence relations-3')
plt.xlabel('Input-n')
plt.ylabel('x_n')
plt.legend()
```
For the given conditions: $x_0 = 1$ and $x_{i+1} = - a x_i$ where $0 < a \leq 1$, when we select $n=20$ and $a=1$ and plot the sequence, it is clearly observed that the sequence $(x_n)$ is not convergent; it becomes cyclic and oscillates between $-1$ and $1$.
### 7. Sequences via recurrence relations - 7
Let $x_0 = 1$ and $x_{i+1} = a x_i$ where $a \geq 1$.
a) Plot the first 20 points in $(x_n)$ when $a= \frac{3}{2}$.
b) What happens with the convergance of the given sequences $(x_n)$?
```python
import matplotlib.pyplot as plt
%matplotlib inline
x = 1 #Initializing the sequence
a = 3/2 #Initializing the constant
A , B= [],[] #Intializing empty arrays for storing number of iterations and sequence outputs
#Loop for getting the sequence
for n in range(20):
x = a*x
A.append(n)
B.append(x)
print(n+1,')',x, end=',')
#Plotting the sequence
plt.plot(A, B, 'y', label='ax, a=3/2')
plt.title('P7. Sequences via recurrence relations-7')
plt.xlabel('Input-n')
plt.ylabel('x_n')
plt.legend()
```
For the given conditions: $x_0 = 1$ and $x_{i+1} = a x_i$ where $a \geq 1$, when we select $n=20$ and $a=\frac{3}{2}$ and plot the sequence, it is clearly observed that the sequence $(x_n)$ is not convergent as it moves towards infinity.
### 8. Definition of Derivative
Use the definition of the derivative to find the derivatives of the following functions.
a) $f(x) = x^3$, for any $x$
b) $g(x) = 1/x$, for $x \neq 0$
c) $h(x) = \sqrt{x}$, for $x > 0$
d) $k(x) = x^{-1/2}$ for $x > 0$.
$$a) f(x)=x^3$$
$$Using\:Power\:Rule,\frac{dx^n}{dx}=nx^{n-1}, for\: n=3$$
$$df(x)/dx= 3x^2$$
$$----------------------------------------------$$
$$b) g(x)=1/x\:or\: x^{-1}$$
$$Using\:Power\:Rule,\frac{dx^n}{dx}=nx^{n-1}, for\: n=-1$$
$$dg(x)/dx= -1x^{-2}\:or\: \frac{-1}{x^2}$$
$$----------------------------------------------$$
$$c) h(x)=\sqrt{x}\: or\: x^{1/2}$$
$$Using\:Power\:Rule,\frac{dx^n}{dx}=nx^{n-1}, for\: n=1/2$$
$$dh(x)/dx= (1/2)*x^{-1/2}\: or\: \frac{1}{2\sqrt{x}}$$
$$----------------------------------------------$$
$$d) k(x)=x^{-1/2}$$
$$Using\:Power\:Rule,\frac{dx^n}{dx}=nx^{n-1}, for\: n=-1/2$$
$$dk(x)/dx= (-1/2)*x^{-3/2}\: or\: \frac{-1}{2x\sqrt{x}}$$
$$----------------------------------------------$$
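As a cross-check (added here), SymPy can evaluate the same derivatives directly from the limit definition of the difference quotient:
```python
import sympy as sm
x, h = sm.symbols('x h', positive=True)
# f'(x) = lim_{h->0} (f(x+h) - f(x)) / h, for each of the four functions above
for func in [x**3, 1/x, sm.sqrt(x), x**sm.Rational(-1, 2)]:
    derivative = sm.limit((func.subs(x, x + h) - func)/h, h, 0)
    print(func, '->', sm.simplify(derivative))
```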
### 9. The Chain Rule
Use the chain rule to find these deriviatives. Verify your results with `sympy`.
a) $f(x) = \frac{1}{1+x^2}$
b) $h(x) = (\sin x^k)^m$ for $m,k \in \mathbb{N}$
$a)f(x)= \frac{1}{1+x^2}$
$$Let,u = x^2, f = \frac{1}{1+u} or (1+u)^{-1}$$
$$Using\:Power\:Rule,du/dx=2x\:,df/du=-1*(1+u)^{-2}\:or\:\frac{-1}{(1+u)^2}$$
$$Using\:Chain\:Rule,\frac{df}{dx}=\frac{df}{du}\frac{du}{dx}=\frac{-2x}{(1+u)^2}=\frac{-2x}{(1+x^2)^2}$$
$$----------------------------------------------$$
$b)h(x)= (\sin x^k)^m$
$$Let,u = x^k, v = \sin u, f = v^m$$
$$Using\:Power\:Rule,du/dx=kx^{k-1}\:,dv/du=\cos u\:,df/dv=mv^{m-1}$$
$$Using\:Chain\:Rule,\frac{df}{dx}=\frac{df}{dv}\frac{dv}{du}\frac{du}{dx}=mv^{m-1}\cdot\cos u\cdot kx^{k-1}$$
$$=mkx^{k-1}\,(\sin x^k)^{m-1}\cos x^k$$
$$----------------------------------------------$$
```python
#Code for P9.a
import sympy as sm
#Initializing the symbol
x = sm.symbols("x")
sm.diff(1/(1+x**2),x) #Performing the differentiation
```
$\displaystyle - \frac{2 x}{\left(x^{2} + 1\right)^{2}}$
```python
#Code for P9.b
import sympy as sm
#Initializing the symbols
x = sm.symbols("x")
k = sm.symbols("k")
m = sm.symbols("m")
#Build the composition f(v(u(x))) = (sin(x**k))**m by successive substitution
u = x**k
v = sm.sin(x)
f = x**m
vou = v.subs(x,u)
fov = f.subs(x,vou)
dfov = sm.diff(fov,x) #Performing the differentiation
dfov
```
$\displaystyle \frac{k m x^{k} \sin^{m}{\left(x^{k} \right)} \cos{\left(x^{k} \right)}}{x \sin{\left(x^{k} \right)}}$
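The automatic result above is equivalent to the hand-derived expression; combining the powers of $x$ and of $\sin(x^k)$ makes this explicit (assuming `dfov` from the previous cell is still defined):
```python
#Code for simplifying the result of P9.b
sm.powsimp(dfov)  # should reduce to k*m*x**(k-1)*sin(x**k)**(m-1)*cos(x**k)
```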
# Local Interaction
**Tomohiro Kusano**
*Graduate School of Economics, University of Tokyo*
This notebook demonstrates how to study local interaction model using the **`localint`** Python library.
```python
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt
import random
from matplotlib.animation import FuncAnimation
from __future__ import division
from localint import LocalInteraction
from IPython.display import Image
import io
import base64
from IPython.display import HTML
```
**Note:** We don't use `%matplotlib inline` here because, if we did, `animation` (a function defined later in this notebook) would not work in an ordinary environment.
## Local Interaction Game
Let $\chi$ be a finite set of players and $P:\chi \times \chi \to \mathbb{R}_+$ be a function such that
* $P(x,x) = 0$ for all $x \in \chi$
* $P(x,y) = P(y,x)$ for all $x,y \in \chi$.
A *local interaction system* is the undirected graph induced by $(\chi, P)$. Note that $P$ can be represented by a matrix, which will be introduced as "adjacency matrix" in the next section, since $\chi$ is finite here.
For example, $(\chi, P)$, where $\chi = \{0,1,2\}$ and
\begin{equation*}
P =
\begin{bmatrix}
0 & 1 & 0 \\
0 & 0 & 2 \\
3 & 0 & 0
\end{bmatrix}
\end{equation*}
represents the following local interaction system.
```python
Image(filename='./localint_materials/figure_1.png')
```
The integer on each edge denotes the corresponding weight.
In each period, given the local interaction system, each player plays the game against a belief, i.e., a distribution over the action space constructed from the weights on the edges and the actions the other players are currently taking.
For example, consider the system above. Suppose that each player has two actions (0 and 1), and that Players 1 and 2 are taking actions 0 and 1, respectively. Given the system and the other players' actions, Player 0 forms the belief $(1, 3)$, meaning that the ratio of the probability that Player 0 meets a player taking action 0 to the probability of meeting a player taking action 1 is 1:3.
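As a small numerical illustration (plain `numpy`, independent of `localint`), here is Player 0's expected payoff for each action against the belief $(1, 3)$, using the coordination-game payoff matrix that is introduced in the next section (an assumption made just for this example):
```python
import numpy as np

belief = np.array([1, 3]) / 4.0       # Player 0's belief (1, 3), normalized
payoff_matrix = np.array([[4, 0],     # coordination game from the next section
                          [2, 3]])
expected_payoffs = payoff_matrix.dot(belief)
print(expected_payoffs)               # [1.  , 2.75] -> best response is action 1
```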
## The `LocalInteraction` class
The **`LocalInteraction`** class requires two parameters, **payoff matrix** and **adjacency matrix**.
### Payoff Matrix
Payoff matrix must be 2-dimensional square numpy array. In a game-theoretic model, it means that both the set of actions and the payoff function are the same across all players.
For instance, consider a coordination game where the payoff table is given by the following:
1$\backslash$2 | $A$ | $B$
------------- |---------------| ---------
$A$ | 4, 4 | 0, 2
$B$ | 2, 0 | 3, 3
Note that this payoff table implies that the game is symmetric. Because of this symmetry, it suffices to record only one player's payoffs, as follows:
```python
payoff_matrix = np.asarray([[4, 0],
[2, 3]])
print(payoff_matrix)
```
[[4 0]
[2 3]]
### Adjacency Matrix
Adjacency matrix represents how the nodes in the system are connected. In particular, in the context of the local interaction model, it represents whether each pair of players interacts and how strong the interaction of them is if they are connected.
Let's consider an adjacency matrix given by the following:
\begin{equation}
[a_{ij}] =
\begin{bmatrix}
0 &1 &3\\
2 &0 &1\\
3 &2 &0
\end{bmatrix}
\end{equation}
```python
adj_matrix = np.asarray([[0, 1, 3],
[2, 0, 1],
[3, 2, 0]])
print(adj_matrix)
```
[[0 1 3]
[2 0 1]
[3 2 0]]
For example, $a_{12}(=1)$ denotes the weight that player 1 places on player 2's action. Note that the weight that player 2 places on player 1's action ($a_{21}=2$) is different; that is, the **`LocalInteraction`** class allows the adjacency matrix to be asymmetric.
### Creating a `LocalInteraction`
Now that we have two parameters, `payoff_matrix` and `adj_matrix`, we can create a `LocalInteraction`:
```python
li = LocalInteraction(payoff_matrix, adj_matrix)
```
```python
li.players[0]
```
Player in a 2-player normal form game with payoff matrix:
[[4, 0], [2, 3]]
The adjacency matrix is saved in the form of [`csr_matrix`](https://docs.scipy.org/doc/scipy-0.15.1/reference/sparse.html).
```python
li.adj_matrix
```
<3x3 sparse matrix of type '<type 'numpy.int32'>'
with 6 stored elements in Compressed Sparse Row format>
### Initializing current actions
Initially, `current_actions` is an $N$-dimensional zero vector, where $N$ is the number of players.
```python
li.N, li.current_actions
```
(3, array([0, 0, 0]))
To initialize `current_actions`, we can use `set_init_actions`:
```python
init_actions = [1, 0, 1]
li.set_init_actions(init_actions)
```
```python
li.current_actions
```
array([1, 0, 1])
If we don't specify the list of the players' actions, `set_init_actions` randomly sets `current_actions`.
```python
li.set_init_actions()
```
```python
li.current_actions
```
array([1, 1, 0])
## Examples
In this section, we give you a couple of examples for typical graphs, and analyze the local interaction models corresponding to those graphs.
In order to show those results graphically, we have to define functions to draw a graph and generate an animation.
```python
def draw_graph(graph_dict, figsize=(16,10), node_size=200, linewidth=2):
fig = plt.figure(figsize=figsize, facecolor='w')
nx.draw_networkx_nodes(graph_dict['G'], graph_dict['pos'],
node_size=node_size, node_color='w')
nx.draw_networkx_edges(graph_dict['G'], graph_dict['pos'],
alpha=0.5, width=linewidth, arrows=False)
plt.axis('off')
plt.show()
```
```python
def animation(li, init_actions=None, pos='circular', node_size=200,
node_colors=None, linewidth=2, interval=200, figsize=(16,10)):
num_actions = li.num_actions
    if node_colors is None:
        # default color cycle (the old mpl.rcParams['axes.color_cycle'] is deprecated)
        node_colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
num_colors = len(node_colors)
if num_colors < num_actions:
raise ValueError('{0} colors required '.format(num_actions) +
'(only {0} provided)'.format(num_colors))
G = nx.DiGraph(li.adj_matrix)
if isinstance(pos, dict):
pos = pos
else:
try:
layout_func = getattr(nx, '{0}_layout'.format(pos))
pos = layout_func(G)
except:
raise ValueError(
"pos must be a dictionary of node-position pairs, or one of " +
"{'circular', 'random', 'shell', 'spring', 'spectral'}")
def get_fig(n):
for i in range(num_actions):
nodelist = np.where(li.current_actions == i)[0].tolist()
nx.draw_networkx_nodes(G, pos, node_size=node_size,
nodelist=nodelist,
node_color=node_colors[i])
li.play()
return fig
li.set_init_actions(init_actions)
fig = plt.figure(figsize=figsize, facecolor='w')
nx.draw_networkx_edges(G, pos, alpha=0.5, width=linewidth, arrows=False)
anim = FuncAnimation(fig, get_fig, interval=interval)
plt.axis('off')
plt.show()
plt.close()
```
### 2-actions case
For convenience, we focus on a coordination game, which is given by the following:
```python
coordination_game = np.array([[11, 0],
[9, 8]])
```
Also, let `node_colors_2` be a list whose $i$-th ($i = 0, 1$) element denotes a color of players taking action $i$:
```python
node_colors_2 = ['b', 'y']
```
In fact, in this case action 1, which leads to the risk-dominant but Pareto-inefficient outcome when both players take it, is *contagious* in a sense that we will not define formally; you will see what it means in the simulations below.
#### Circle
We first examine one of the simplest graph, called "circle graph".
```python
N = 100
circle = {}
G = nx.cycle_graph(n=N)
circle['G'] = G
circle['adj_matrix'] = nx.adjacency_matrix(G)
circle['pos'] = nx.circular_layout(G)
```
Note that we have to specify not only the graph and the adjacency matrix but also positions of nodes since `draw_graph` and `animation` require it.
```python
draw_graph(circle)
```
```python
li_coor = LocalInteraction(coordination_game, circle['adj_matrix'])
```
```python
init_actions = np.zeros(li_coor.N, dtype=int)
init_actions[[0, -1]] = 1
animation(li_coor, init_actions=init_actions, pos=circle['pos'],
node_colors=node_colors_2, interval=100)
```
You can see that action 1 spreads across all nodes as time goes on.
#### Two-dimensional lattice
We next examine another simple graph, the two-dimensional lattice. The simulation procedure is the same as for the circle graph, except that specifying the node positions is more tedious in this case.
```python
N = 100
lattice2d = {}
m, n = 10, 10
G = nx.grid_2d_graph(m, n)
lattice2d['adj_matrix'] = nx.adjacency_matrix(G)
lattice2d['G'] = nx.Graph(lattice2d['adj_matrix'])
lattice2d['pos'] = {}
for i, (x, y) in enumerate(G.nodes()):  # nodes() replaces the removed nodes_iter()
lattice2d[(x, y)] = i
lattice2d['pos'][i] = (x/(m-1), y/(n-1))
```
```python
draw_graph(lattice2d)
```
```python
li_coor = LocalInteraction(coordination_game, lattice2d['adj_matrix'])
```
```python
# m, n = 10, 10
init_actions = np.zeros(li_coor.N, dtype=int)
for node in [(m//2-i, n//2-j) for i in range(2) for j in range(2)]:
init_actions[lattice2d[node]] = 1
animation(li_coor, init_actions=init_actions, pos=lattice2d['pos'],
node_colors=node_colors_2, figsize=(14,8), interval=500)
```
### 3-actions case
The `localint` module works in the 3-action case as well. Let's consider the following game, which is called the "Bilingual Game":
```python
def bilingual_game(e, a=11, b=0, c=9, d=8):
A = np.array([[a , a , b],
[a-e, a-e, d-e],
[c , d , d]])
return A
```
```python
bg = bilingual_game(e=0.1)
bg
```
array([[ 11. , 11. , 0. ],
[ 10.9, 10.9, 7.9],
[ 9. , 8. , 8. ]])
```python
node_colors_3 = ['b', 'r', 'y']
```
We show that even action 0, which leads to the Pareto-efficient outcome, can be contagious in this case.
#### Circle
```python
li_bg = LocalInteraction(bg, circle['adj_matrix'])
```
```python
init_actions = np.ones(li_bg.N, dtype=int) * 2
init_actions[[0, 1, -2, -1]] = 0
animation(li_bg, init_actions=init_actions, pos=circle['pos'],
node_colors=node_colors_3, interval=100)
```
#### Two-dimensional lattice
```python
li_bg = LocalInteraction(bg, lattice2d['adj_matrix'])
```
```python
# m, n = 10, 10
init_actions = np.ones(li_bg.N, dtype=int) * 2
for node in [(m//2-i, n//2-j) for i in range(2) for j in range(2)]:
init_actions[lattice2d[node]] = 0
animation(li_bg, init_actions=init_actions, pos=lattice2d['pos'],
node_colors=node_colors_3, interval=500)
```
# Bayesian Statistics for Physicists
<em>Note: This notebook was presented at the EMMI workshop <a href="https://indico.gsi.de/event/7534/">"Uncertainty Quantification (UQ) at the Extremes (ISNET-6)"</a> on 9-October-2018.
It has been replaced for further development by the multipart BSFP_pn_xxx.ipynb notebooks.</em>
## <a name="Overview">Overview</a>
A brief, hands-on introduction to the basics of Bayesian statistics in a manner adapted to the general intuition and experience of physicists. We use a Jupyter notebook with Python (scipy, numpy, matplotlib) to allow for active visualization of examples, hands-on follow-ups, and readily extended content. You can find the notebook and associated files at
https://github.com/furnstahl/Bayes_for_physicists.
This is not an exhaustive guide to Bayesian statistics, but a selected sampling of topics that come up regularly in physics applications, with links to more information.
Most of the examples were adapted from code found on the web.
Please contribute suggestions, comments, links, code, ...
Last revised: 13-Oct-2018 by Dick Furnstahl [furnstahl.1@osu.edu].
<hr>
## <a name="Contents">Contents</a>
<ul>
<li><a href="#Overview">Overview</a>
<li><a href="#Python">Python set up</a>
<li><a href="#Basics">Bayesian basics</a>
[<a href="#Rules">Rules</a>]
[<a href="#Networks">Networks</a>]
[<a href="#Discrepancy">Model discrepancy</a>]
<li><a href="#Priors">Choosing priors</a>
[<a href="#MaxEntropy">Maximum entropy</a>]
[<a href="#ConjPriors">Conjugate priors</a>]
<li><a href="#Updating">Bayesian updating examples</a>
<li><a href="#Sampling">Sampling</a>
[<a href="#Multivariate">Multivariate gaussian</a>]
[<a href="#MCMC">MCMC</a>]
<li><a href="#Evidence">Model selection: Bayes ratio and evidence</a>
<li><a href="#GPs">Gaussian processes</a>
<li><a href="#Appendices">Appendices</a>:
[<a href="#References">References</a>]
[<a href="#Vocabulary">Vocabulary</a>]
[<a href="#Notation">Notation</a>]
</ul>
<hr>
## <a name="Python">Python/Jupyter set up</a>
We recommend installing the standard Anaconda Python3 package (from https://www.anaconda.com/download), which is available for Windows, Mac OS X, and Linux. Anaconda will provide scipy, numpy, matplotlib, and jupyter notebooks (and more!).
<span class="blue">You can start this notebook from the <a href="https://docs.anaconda.com/anaconda/navigator/">Anaconda Navigator</a> or from the command line (go to the directory with this notebook and type: <code>jupyter notebook</code>).</span>
It is convenient to use the Jupyter notebook extensions "Code Folding" and "Collapsible Headings", which can be turned on using the jupyter_nbextensions_configurator extension after installing via <br>
<code>conda install -c conda-forge jupyter_contrib_nbextensions</code>
<br> (see https://jupyter-contrib-nbextensions.readthedocs.io/en/latest/ for a complete list and full instructions).
Other packages you will need to install for this notebook:
<ul>
<li>pymc3 [conda install -c conda-forge pymc3]
<li>emcee [conda install -c astropy emcee]
<li>corner [conda install -c astropy corner]
</ul>
```python
# set up for plots in this notebook using matplotlib (there are other plotting choices)
%matplotlib inline
```
```python
import numpy as np
import scipy.stats as stats
from scipy.stats import norm, uniform
import matplotlib.pyplot as plt
#plt.style.use('seaborn') # pretty matplotlib plots
import corner
import pymc3 as pm
```
/Users/furnstah/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
```python
# make font adjustments
#plt.rcParams['font.size'] = 12
#plt.rcParams['legend.fontsize'] = 'medium'
#plt.rcParams['figure.titlesize'] = 'medium'
plt.rcdefaults() # revert to defaults for now
```
```python
%%html
<!-- Use html cell magic to add css styling -->
<style>
em {
color: red;
}
dd {
margin-left: 15px;
}
.red{color: red}
.blue{color: blue}
</style>
```
<!-- Use html cell magic to add css styling -->
<style>
em {
color: red;
}
dd {
margin-left: 15px;
}
.red{color: red}
.blue{color: blue}
</style>
```python
#%%javascript
#IPython.OutputArea.auto_scroll_threshold = 9999;
```
## <a name="Basics">Bayesian basics</a>
### Why should physicists use Bayesian statistics?
cf. <a href="https://www.astro.princeton.edu/~strauss/AST303/bayesian_paper.pdf">Why isn't every physicist a Bayesian?</a> from 1995.
<ul>
<li>Includes conventional physics statistics (e.g., for parameter estimation)
<li>Calculate what you really want, e.g., probability for some parameter vs. frequentist confidence interval
<li>Assumptions are made explicit (in the form of priors)
<li>Allows us to account for "nuisance parameters"
<li>Well suited for theory errors, which are generally systematic
<li>Clear prescription for combining different types of errors
<li>Model selection: compare different theoretical models (or combine!)
<li>Model checking: we can test if our UQ model works and study sensitivities
<li><em>Statistics as diagnostic and discovery tools for physics</em>
<li> **[add your own favorite reasons]**
</ul>
### Everything is a pdf (probability density function)
Physicists are used to multidimensional normalized pdfs as wave functions squared, e.g. probability density for particle 1 at $x_1$ and particle 2 at $x_2$:
<span class="red">
$$
|\Psi(x_1, x_2)|^2 \Longrightarrow p(x_1,x_2) \equiv p(\textbf{x})
\quad \mbox{with}\quad \textbf{x}
\equiv \{x_1,x_2\}
$$
</span>
(Other notation for generic pdfs: $p(\textbf{x}) = P(\textbf{x}) = \textrm{pr}(\textbf{x}) = \textrm{prob}(\textbf{x}) = \ldots$ )
$p(x_1,x_2)$ is the <em>joint probability density</em> of $x_1$ and $x_2$. <br>
What is the probability to find particle 1 at $x_1$ and particle 2 anywhere? $\int\! |\Psi(x_1,x_2)|^2 dx_2$ <br>
The <em>marginal probability density</em> of $x_1$ is:
$\color{blue}{p(x_1) = \int\! p(x_1,x_2)\,dx_2}$. <br>
"Marginalizing" = "integrating out" (eliminates "nuisance parameters" from posterior).
Just as with "Lagrangian", we will not always be careful about saying probability vs. probability density.
In Bayesian statistics there are pdfs (or pmfs if discrete) for data, experimental <i>and</i> theoretical uncertainties, fit parameters, hyperparameters (parameters of priors, defined below), events (Will it rain tomorrow?), etc. Even for a quantity with a definite value $x_0$, we can use $p(x) = \delta(x-x_0)$.
### Visualization of pdfs
#### Matplotlib plotting definitions
```python
def dist_stuff(dist):
"""
Find the median, mean, and 68%/95% credible intervals
for the given 1-d distribution (from stats).
"""
median = [dist.median(), dist.pdf(dist.median())]
mean = [dist.mean(), dist.pdf(dist.mean())]
cred68 = dist.interval(0.68)
cred95 = dist.interval(0.95)
return median, mean, cred68, cred95
def dist_mode(dist, x):
"""
Find the mode (maximum) of the 1-d distribution.
"""
x_max_index = dist.pdf(x).argmax()
mode = [x[x_max_index], dist.pdf(x[x_max_index])]
return mode
def dist_plot(dist_label, x_dist, dist, plot_num):
"""
Plot the distribution, indicating median, mean, mode
and 68%/95% probability intervals.
"""
colors = ('blue', 'blue', 'blue')
median, mean, cred68, cred95 = dist_stuff(dist)
mode = dist_mode(dist, x_dist)
plt.subplot(1,3,plot_num)
plt.plot(x_dist,dist.pdf(x_dist), label=dist_label, color=colors[plot_num-1])
text_x = 0.2*(x_dist[-1]-x_dist[0])
text_x_mid = (x_dist[-1]+x_dist[0])/2
text_y = mode[1]*1.15
plt.annotate('median', xy=median, xytext=(text_x_mid+text_x, text_y),
arrowprops=dict(facecolor='black', shrink=0.05))
plt.annotate('mode', xy=mode, xytext=(text_x_mid-text_x, text_y),
arrowprops=dict(facecolor='red', shrink=0.05))
plt.annotate('mean', xy=mean, xytext=(text_x_mid, text_y),
arrowprops=dict(facecolor='blue', shrink=0.05))
plt.xlabel('x')
plt.ylabel('p(x)')
plt.fill_between(x_dist, 0, dist.pdf(x_dist),
where=((x_dist > cred68[0]) & (x_dist < cred68[1])),
facecolor='blue', alpha=0.2)
plt.fill_between(x_dist, 0, dist.pdf(x_dist),
where=((x_dist > cred95[0]) & (x_dist < cred95[1])),
facecolor='blue', alpha=0.1)
plt.legend();
```
#### Some standard pdfs: normal and beta distributions
```python
%matplotlib inline
# Make some standard plots
plt.figure(figsize=(15,5))
# Standard normal distribution
x_norm = np.linspace(-4, 4, 500)
mu = 0 # mean
sigma = 1.0 # standard deviation
norm_dist = stats.norm(mu, sigma) # the normal distribution
norm_label='normal pdf' + '\n' + r'$\mu=${:1.1f},'.format(mu) \
+ '\n' + r'$\sigma=${:1.1f}'.format(sigma)
dist_plot(norm_label, x_norm, norm_dist, 1)
# beta distribution
x_beta = np.linspace(-0.1, 1.1, 500)
a1 = .5
b1 = 10
beta_dist = stats.beta(a1, b1)
beta1_label='beta pdf' + '\n' + r'$a=${:1.1f}'.format(a1) \
+ ',\n$b=${:1.1f}'.format(b1)
dist_plot(beta1_label, x_beta, beta_dist, 2)
# another beta distribution
#x_beta = np.linspace(-0.1, 1.1, 500)
a2 = 10
b2 = 10
beta2_dist = stats.beta(a2, b2)
beta2_label='beta pdf' + '\n' + r'$a=${:1.1f}'.format(a2) \
+ ',\n$b=${:1.1f}'.format(b2)
dist_plot(beta2_label, x_beta, beta2_dist, 3)
```
The 68%/95% probability regions are shown in dark/light shading. When applied to posteriors, these are known as <em>credible intervals</em> or DoBs (degree of belief intervals) or Bayesian confidence intervals. The horizontal extent on the $x$-axis translates into the vertical extent of the error bar or error band for $x$.
#### More standard pdfs: Student t
```python
%matplotlib inline
# Make some plots of the Student t distribution
plt.figure(figsize=(15,5))
x_t = np.linspace(-5, 5, 500)
nu1 = 1
t1_dist = stats.t(nu1) # the Student t distribution
t1_label='t pdf' + '\n' + r'$\nu=${:1.1f}'.format(nu1)
dist_plot(t1_label, x_t, t1_dist, 1)
nu2 = 5
t2_dist = stats.t(nu2) # the Student t distribution
t2_label='t pdf' + '\n' + r'$\nu=${:1.1f}'.format(nu2)
dist_plot(t2_label, x_t, t2_dist, 2)
nu3 = 100
t3_dist = stats.t(nu3) # the Student t distribution
t3_label='t pdf' + '\n' + r'$\nu=${:1.1f}'.format(nu3)
dist_plot(t3_label, x_t, t3_dist, 3)
```
Note the "heavy tails" in the t distribution as $\nu$ gets small. As $\nu$ gets large, the distribution approaches a standard normal (Gaussian) distribution.
#### Projected posterior plots
Here we use the [corner package](https://corner.readthedocs.io/en/latest/api.html) to make some projected posterior plots.
```python
%matplotlib inline
# examples of corner plots
ndim, nsamples = 2, 100000
#np.random.seed(42)
# generate some fake data from a normal distribution
norm_samples = stats.norm.rvs(size=ndim * nsamples).reshape([nsamples, ndim])
#figure = corner.corner(norm_samples)
figure1 = corner.corner(norm_samples, labels=[r"$x$", r"$y$", r"$\log \alpha$", r"$\Gamma \, [\mathrm{parsec}]$"],
quantiles=[0.16, 0.5, 0.84],
show_titles=True, title_kwargs={"fontsize": 12})
ax = figure1.get_axes()
figure1.set_size_inches(5,5)
ndim, nsamples = 2, 100000
#np.random.seed(42)
# generate some fake data from a beta distribution
a = 4
b = 20
beta_samples = stats.beta(a,b).rvs(size=ndim * nsamples).reshape([nsamples, ndim])
#figure = corner.corner(beta_samples)
figure2 = corner.corner(beta_samples, labels=[r"$x$", r"$y$", r"$\log \alpha$", r"$\Gamma \, [\mathrm{parsec}]$"],
quantiles=[0.16, 0.5, 0.84],
show_titles=True, title_kwargs={"fontsize": 12})
figure2.set_size_inches(5,5)
```
```python
%matplotlib inline
# now more than one mode (all random)
ndim, nsamples = 4, 50000
np.random.seed(1234)
data1 = np.random.randn(ndim * 4 * nsamples // 5).reshape([4 * nsamples // 5, ndim])
mean = 4*np.random.rand(ndim)
data2 = (mean[None, :] + np.random.randn(ndim * nsamples // 5).reshape([nsamples // 5, ndim]))
samples = np.vstack([data1, data2])
#figure = corner.corner(samples)
figure = corner.corner(samples, labels=[r"$x$", r"$y$", r"$\log \alpha$", r"$\Gamma \, [\mathrm{parsec}]$"],
quantiles=[0.16, 0.5, 0.84],
show_titles=True, title_kwargs={"fontsize": 12})
figure.set_size_inches(5,5)
```
### Sampling of 1d pdfs in Python
```python
%matplotlib inline
def plot_hist(name, x_dist, dist, num_samples, num_bins, plot_num):
plt.subplot(1,3,plot_num)
samples = dist.rvs(size=num_samples)
count, bins, ignored = plt.hist(samples, num_bins, density=True,
color='blue', alpha=0.7)
plt.plot(x_dist,dist.pdf(x_dist), linewidth=2, color='r')
title_string = name + ' samples = {:d}'.format(num_samples)
plt.title(title_string)
mu, sigma = 0, 1.0 # mean and standard deviation
x_dist = np.linspace(-4, 4, 500)
name = r'normal $\mu=${:1.1f}, $\sigma=${:1.1f}'.format(mu,sigma)
plt.figure(figsize=(15,5))
num_bins = 50
num_samples = 100
norm_dist = stats.norm(mu, sigma)
plot_hist(name, x_dist, norm_dist, num_samples, num_bins, 1)
num_samples = 1000
norm_dist = stats.norm(mu, sigma)
plot_hist(name, x_dist, norm_dist, num_samples, num_bins, 2)
num_samples = 10000
norm_dist = stats.norm(mu, sigma)
plot_hist(name, x_dist, norm_dist, num_samples, num_bins, 3)
```
<hr>
### Bayes' Rule: Interaction of prior and likelihood
$A$ and $B$ are generic propositions and $I$ is "information" (things we know). $p(A \mid B)$ means the probability of $A$ given $B$ (or <em>contingent</em> or <em>conditional</em> on $B$).
A particular case is a vector of parameters $\textbf{a} = \{a_1, a_2, \cdots\}$ for a theoretical model and some data it describes. Here Bayes' Rule is being used for <em>parameter estimation</em>.
$$
\newcommand{\avec}{\textbf{a}}
p(A \mid B,I) =
\frac{p(B \mid A,I)\, p(A \mid I)}{p(B \mid I)}
\ \Longrightarrow\
\overbrace{p(\avec \mid \textrm{data},I)}^{\textrm{posterior}} =
\frac{\color{red}{\overbrace{p(\textrm{data} \mid \avec,I)}^{\textrm{likelihood}}} \times
\color{blue}{\overbrace{p(\avec \mid I)}^{\textrm{prior}}}}
{\color{darkgreen}{\underbrace{p(\textrm{data} \mid I)}_{\textrm{evidence}}}}
$$
Common notation in statistics: $\boldsymbol{\theta}$ for parameters, $\mathcal{L}$ or $L$ for the likelihood, $\pi(\boldsymbol{\theta})$ for the prior.
<hr>
$$\overbrace{p(\avec \mid \textrm{data},I)}^{\textrm{posterior}} \propto \color{red}{\overbrace{p(\textrm{data} \mid \avec,I)}^{\textrm{likelihood}}} \times
\color{blue}{\overbrace{p(\avec \mid I)}^{\textrm{prior}}}$$
Left: likelihood overwhelms prior. Right: prior is returned (restricts domain)
<div style="float:left"></div>
<div style="float:left"></div>
<div style="clear: both"></div>
Note: these are one-dimensional projections of multi-dimensional pdfs. <br>
<em>Here we don't need to calculate the evidence separately; just normalize the numerator.</em>
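As a concrete, minimal illustration of "just normalize the numerator": a one-parameter posterior evaluated on a grid, with a Gaussian likelihood for a single datum and a flat prior (all numbers here are illustrative assumptions).
```python
%matplotlib inline
# Posterior on a grid: multiply prior by likelihood, then normalize numerically
mu_grid = np.linspace(-3, 3, 601)                        # grid of parameter values
flat_prior = np.ones_like(mu_grid)                       # uniform prior on the grid
x_obs, sigma_obs = 1.2, 1.0                              # single datum with known width
likelihood = stats.norm(mu_grid, sigma_obs).pdf(x_obs)   # p(x_obs | mu) for each mu
posterior = likelihood * flat_prior
posterior /= np.trapz(posterior, mu_grid)                # normalization = evidence
plt.plot(mu_grid, posterior)
plt.xlabel(r'$\mu$')
plt.ylabel(r'$p(\mu \mid x_{\rm obs})$');
```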
<hr>
### <a name="Rules">Bayesian rules of probability as principles of logic</a>
Notation: $p(x \mid I)$ is the probability (or pdf) of $x$ being true
given information $I$
<ol>
<li> <b>Sum rule:</b> If set $\{x_i\}$ is exhaustive and exclusive,
$$ \sum_i p(x_i \mid I) = 1 \quad \longrightarrow \quad \color{red}{\int\!dx\, p(x \mid I) = 1}
$$ </li>
<ul>
<li> cf. complete and orthonormal </li>
<li> implies <em>marginalization</em> (cf. inserting complete set of states or integrating out variables)
$$
p(x \mid I) = \sum_j p(x,y_j \mid I)
\quad \longrightarrow \quad
\color{red}{p(x \mid I) = \int\!dy\, p(x,y \mid I)}
$$
</li>
</ul>
<li> <b>Product rule:</b> expanding a joint probability of $x$ and $y$
$$
\color{red}{ p(x,y \mid I) = p(x \mid y,I)\,p(y \mid I)
= p(y \mid x,I)\,p(x \mid I)}
$$
</li>
<ul>
<li> If $x$ and $y$ are <em>mutually independent</em>: $p(x \mid y,I)
= p(x \mid I)$, then
$$
p(x,y \mid I) \longrightarrow p(x \mid I)\,p(y \mid I)
$$
</li>
<li> Rearranging the second equality yields <em> Bayes' Rule (or Theorem)</em>
$$
\color{blue}{p(x \mid y,I) = \frac{p(y \mid x,I)\,
p(x \mid I)}{p(y \mid I)}}
$$
</li>
</ul>
</ol>
See <a href="https://www.amazon.com/Algebra-Probable-Inference-Richard-Cox/dp/080186982X/ref=sr_1_1?s=books&ie=UTF8&qid=1538835666&sr=1-1">Cox</a> for the proof.
### Bayesian model checking: one example
<span class="red">How can you evaluate whether your Bayesian predictions are working?</span>
Cf. checking whether a least-squares fit to data with Gaussian noise misses about 1/3 of your 1-$\sigma$ error bars.
More generally: are the residuals normally distributed?
<em>Are your Bayesian credible intervals consistent with observed successes?</em> Check with a <em>calibration</em> or <em>empirical coverage</em> or <em>credible interval diagnostic</em> plot.
<div style="float:left"></div>
<div style="float:left"></div>
<div style="clear: both"></div>
<span class="blue">To be discussed: other ways to do Bayesian model checking.</span>
### <a name="Netwoks">Networks</a>
A Bayesian network is a graphical model that makes conditional dependence explicit through the edges in a directed graph. <span class="red">(More on this soon!)</span>
<div style="float:left"></div>
<div style="float:right"></div>
<div style="clear: both"></div>
### <a name="Discrepancy">Model discrepancy</a>
$\newcommand{\yexp}{\textbf{y}_{\rm exp}}$
$\newcommand{\yth}{\textbf{y}_{\rm th}}$
$\newcommand{\ytrue}{\textbf{y}_{\rm true}}$
The main goal of Bayesian parameter estimation is the calculation of a joint posterior pdf for the model parameters given a set of experimental data and any other information we have. This task begins with a <em>statistical model</em>:
$$ \yexp = \yth + \Delta \yth + \Delta \yexp $$
where $\yexp$ are the experimental measurements of an observable $y$
and $\yth$ are the corresponding theoretical (model) calculations.
In terms of the true results $\ytrue$, we can decompose this as
$$ \yexp = \ytrue + \Delta\yexp\;; \qquad
\ytrue = \yth + \Delta\yth \;.
$$
The model for the experimental uncertainty $\Delta\yexp$ is usually stochastic noise
$$ \Delta\yexp \sim \mathcal{N}(0,\Sigma_{\rm exp}) $$
with zero mean and $\Sigma_{\rm exp}$ typically uncorrelated (so the covariance matrix is diagonal). Systematic uncertainties can also be incorporated.
The "new" feature here is $\Delta\yth$, which is the model discrepancy or model defect. Its role is to account statistically for the deficiencies of the model. It could be a truncation error from an expansion or a model of the observed residuals in a set of training data.
By including $\Delta\yth$, we can suppress overfitting and deal with underfitting.
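A minimal sketch of how this enters a likelihood in practice: if both $\Delta\yexp$ and $\Delta\yth$ are modeled as Gaussian, their covariance matrices simply add. The function below is a generic Gaussian log-likelihood under that assumption, not tied to any particular model.
```python
# Gaussian log-likelihood with experimental and theory covariances added
def log_likelihood(y_exp, y_th, cov_exp, cov_th):
    """log p(y_exp | y_th) assuming Gaussian Delta y_exp and Delta y_th."""
    residual = y_exp - y_th
    cov = cov_exp + cov_th                           # combined covariance matrix
    chi2 = residual @ np.linalg.solve(cov, residual) # r^T cov^{-1} r
    sign, log_det = np.linalg.slogdet(2 * np.pi * cov)
    return -0.5 * (chi2 + log_det)
```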
<p>[Return to <a href="#Contents">Contents</a>]</p>
<hr>
## <a name="Priors">Choosing priors</a>
A key feature of Bayesian statistics is the choice of the prior probability distribution. We should incorporate all of the information we have, but no more. What are the options for choosing priors? What are the subtleties? Places to use an <em>informative prior:</em>
<ul>
<li>your dataset is small, but there is related information available from other systems;
<li>your model is very flexible but you want to prevent overfitting (so use priors that prefer
values close to zero);
<li>you want to stay away from particular regions of parameter space: maybe strictly unphysical such as
negative cross sections or acausal, or maybe
counter to well-motivated assumptions such as parameters are of order unity;
<li>the posterior from a previous experiment can become a prior ==> Bayesian updating;
<li>other examples??
</ul>
### <a name="Uniform">Subtlety with uniform prior</a>
An example from the 2016 workshop on [Bayesian Methods in Astronomy](https://github.com/jakevdp/BayesianAstronomy/blob/master/Index.ipynb) considers fitting a straight line $y = mx+b$ to noisy data. If you take the prior on the slope $m$ to be uniform (flat), thinking that this is non-informative, then look at this plot, which samples lines with uniformly distributed slopes:
```python
%matplotlib inline
xx = np.linspace(-1, 1,11)
for slope in np.linspace(0, 20, 100):
plt.plot(xx, slope * xx, '-k', linewidth=1)
plt.axis([-1, 1, -1, 1])
plt.gca().set_aspect('equal');
```
The density of the lines indicates the relative probability of different slopes. Summary point: flat priors are not necessarily minimally informative. For the slope we probably want a prior that does not artificially over-weight large slopes; see http://arxiv.org/abs/1411.5018 for some discussion. For example, we might use a flat prior on the *angle* the line makes with the x-axis, which implies
$$
P(m) \propto (1 + m^2)^{-3/2}
$$
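For comparison, here is the same picture with the slopes drawn from a flat prior on the *angle* instead, i.e., sample the angle uniformly and take its tangent (a small sketch mirroring the cell above):
```python
%matplotlib inline
xx = np.linspace(-1, 1, 11)
# uniform in the angle the line makes with the x-axis
for theta in np.linspace(-0.49 * np.pi, 0.49 * np.pi, 100):
    plt.plot(xx, np.tan(theta) * xx, '-k', linewidth=1)
plt.axis([-1, 1, -1, 1])
plt.gca().set_aspect('equal');
```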
### <a name="MaxEnt">Maximum entropy for priors (appealing to physicists!)</a>
<ul>
<li> Basic idea: identify least informative $p(x)$ from maximizing entropy:
$$
S[p(x)] = -\int\!dx\, p(x)\, \log\left[\frac{p(x)}{m(x)}\right]
$$
subject to constraints from the prior information.
<ul>
<li> $m(x)$ is an appropriate measure (often uniform, but see Sivia)
<li> Use Lagrange multiplier(s) to maximize
</ul>
<li> One constraint is always normalization: $\int\!dx\, p(x) = 1$
$\Longrightarrow$ alone it leads to uniform $p(x)$ (actually to $m(x)$)
<li>
If the mean $\mu$ and variance $\sigma^2$ are known, then maximize (may be clearer to do with discretized variables)
\begin{align}
Q[p(x)] &= -\int\! dx\, p(x)\, \log\left[\frac{p(x)}{m(x)}\right]
+ \lambda_0 \left[ 1 - \int\! dx\, p(x) \right] \\
& \quad + \lambda_1 \left[\sigma^2 - \int\! dx\, (x-\mu)^2 p(x) \right]
\end{align}
Then
$$
\frac{\delta Q}{\delta p(x)} = 0 \quad\mbox{and}\quad
m(x) = \mbox{const.}
\ \Longrightarrow\
p(x \mid \mu,\sigma) = \frac{1}{\sigma\sqrt{2 \pi}}
e^{-(x-\mu)^2/2\sigma^2}
$$
<li>
For $\textbf{x} = \{x_1, \cdots, x_N\}$, we find $p(\textbf{x})$
is the familiar least-squares likelihood function. See Sivia for many more details.
If you want a "scale invariant" prior, this often means a flat prior on the *logarithm* of the parameter. This is an example of a [Jeffreys Prior](https://en.wikipedia.org/wiki/Jeffreys_prior).
<p>[Return to <a href="#Contents">Contents</a>]</p>
<hr>
### <a name="ConjPriors">Conjugate priors</a>
Bayes theorem tells us that the posterior follows from multiplying the prior by the likelihood and normalizing:
$$
p(\theta\,|\,x) = \frac{p(x\,|\,\theta)\, p(\theta)}
{\int\!d\theta'\, p(x\,|\,\theta')\, p(\theta')}
$$
If the resulting $p(\theta\,|\,x)$ is in the same family of pdfs as $p(\theta)$, then $p(\theta)$ is said to be a conjugate prior for the likelihood $p(x\,|\,\theta)$.
The likelihood is often a fixed form. <em>If there is freedom to choose a conjugate prior, the Bayesian updating of the posterior is given in closed form.</em>
Comments:
<ul>
<li>Suppose we are flipping a coin and seek the posterior for the probability of heads $\theta \in [0,1]$. Let $x$ be the number of successes (heads) in $N$ trials. The likelihood is binomial:
$$ p(x\mid\theta) = \binom{N}{x}
\theta^{x} (1-\theta)^{N-x} \;.
$$
If we choose the prior to be a <i>beta distribution</i> with <em>hyperparameters</em> $a$ and $b$ (which are parameters of the prior as opposed to the model),
$$ p(\theta\mid a,b) = \textrm{Beta}(\theta\mid a,b)
= \frac{\theta^{a-1}(1-\theta)^{b-1}}{B(a,b)}
$$
with beta function $B(a,b)$, then the posterior is of the same form ($\bar x \equiv N - x$):
$$ p(\theta\mid x,a,b) = \frac{\theta^{x+a-1}(1-\theta)^{\bar x+b-1}}{B(x+a,\bar x+b)} = \textrm{Beta}(\theta\mid x+a,\bar x +b)
$$
This posterior can be used as the prior for more samples; the hyperparameters just add additional information (check that it doesn't matter if you analyze the data all at once or sequentially because the tosses are independent).
<li>See the [Wikipedia article](https://en.wikipedia.org/wiki/Conjugate_prior) for a big table of conjugate prior pairs.
<li>If the likelihood is a normal distribution with known variance, the conjugate prior is also normal. If it is a normal distribution with known mean, the inverse gamma distribution is a conjugate prior.
</ul>
<p>[Return to <a href="#Contents">Contents</a>]</p>
<hr>
## <a name="Updating">Bayesian updating examples</a>
### Determining the bias of a coin
```python
%matplotlib inline
# adapted from https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers
from IPython.core.pylabtools import figsize
import numpy as np
from matplotlib import pyplot as plt
figsize(11, 15)
import scipy.stats as stats
# If the coin is fair, prob_heads = 0.5 but you can set it to what you want.
prob_heads = 0.7
# hyperparameters for several different priors
# prior 1 is uniform in [0,1]
alpha_1 = 1
beta_1 = 1
# prior 2 is concentrated near 0.5 with very small tails
alpha_2 = 30
beta_2 = 30
# prior 3 is peaked at ends, but allows for probability everywhere
alpha_3 = .2
beta_3 = .2
# Calculate Bayesian updating using the conjugate prior for binomial, which is a beta distribution
dist = stats.beta
n_trials = [0, 1, 2, 3, 4, 5, 8, 15, 50, 500, 1000, 2000]
data = stats.bernoulli.rvs(prob_heads, size=n_trials[-1]) # heads or tails, 1 or 0
x = np.linspace(0, 1, 301) # mesh for posterior plots
for k, N in enumerate(n_trials): # enumerate creates a tuple with a counter for each n_trials entry
heads = data[:N].sum() # add up the number of 1s = number of heads
# update using the conjugate prior
y_1 = dist.pdf(x, alpha_1 + heads, beta_1 + N - heads) # beta(x,alpha+heads,beta+(N-heads))
y_2 = dist.pdf(x, alpha_2 + heads, beta_2 + N - heads)
y_3 = dist.pdf(x, alpha_3 + heads, beta_3 + N - heads)
y_max = np.max([y_1.max(), y_2.max()])
# now make the plots!
    sx = plt.subplot(len(n_trials)//2, 2, k+1)  # integer division for subplot index
plt.xlabel("$p$, probability of heads")
plt.setp(sx.get_yticklabels(), visible=False)
plt.yticks([])
plt.plot(x, y_1, label="uniform prior")
plt.fill_between(x, 0, y_1, color="blue", alpha=0.1)
plt.plot(x, y_2, label="informative prior", color="r")
plt.fill_between(x, 0, y_2, color="red", alpha=0.1)
plt.plot(x, y_3, label="anti prior", color="g")
plt.fill_between(x, 0, y_3, color="green", alpha=0.1)
plt.vlines(prob_heads, 0, 1.1*y_max, color="k", linestyles="--", lw=2)
plt.annotate("observe {:d} tosses,\n {:d} heads".format(N, heads), xy=(0.05,0.35),
xycoords='axes fraction', horizontalalignment='left',verticalalignment='top')
leg = plt.legend()
leg.get_frame().set_alpha(0.4)
plt.autoscale(tight=True)
figure_title = "Bayesian updating of posterior probabilities for biased coin with actual p(heads) = %1.2f" % prob_heads
plt.suptitle(figure_title,
y=1.02,
fontsize=14)
plt.tight_layout()
```
## <a name="Sampling">Sampling</a>
### <a name="Multivariate">Sampling from multivariate normal distributions</a>
Suppose we have a univariate normal distribution
$$
x \sim \mathcal{N}(\mu,\sigma^2)
\ \Longleftrightarrow\
p(x\mid\mu,\sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}
e^{-(x-\mu)^2/2\sigma^2}
$$
If we have a way to generate <em>standard normals</em> $\mathcal{N}(0,1)$, then we can sample $x$ from
$$
x \sim \mu + \sigma \mathcal{N}(0,1) \;.
$$
So there is a simple shift by $\mu$ and then we scale the normal draw by $\sigma$, the square root of the variance.
$\newcommand{\xvec}{\textbf{x}}$
$\newcommand{\muvec}{\boldsymbol{\mu}}$
The general <em>multivariate Gaussian distribution</em> is
$$
\xvec \sim \mathcal{N}(\muvec,\Sigma)
\ \Longleftrightarrow\
p(\xvec\mid \muvec,\Sigma) = \frac{1}{\sqrt{\det(2\pi\Sigma)}} e^{-\frac12(\xvec-\muvec)^{\rm T}\Sigma^{-1}(\xvec-\muvec)}
$$
The generalization to sample it will be to shift by $\muvec$ and scale by some square root of the covariance matrix $\Sigma$:
$$
\xvec \sim \muvec + B \mathcal{N}(0,I) \;,
$$
where $I$ is the identity matrix and $B B^\intercal = \Sigma$ (e.g., a Cholesky decomposition).
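A minimal `numpy` sketch of this shift-and-scale recipe, checked against the sample covariance (the particular $\boldsymbol{\mu}$ and $\Sigma$ below are arbitrary choices; `np.random.multivariate_normal` does the same job in one call):
```python
# Sample a 2-d Gaussian by shifting and scaling standard normal draws
mu_vec = np.array([1.0, -0.5])
Sigma = np.array([[1.0, 0.8],
                  [0.8, 2.0]])              # symmetric, positive definite
B = np.linalg.cholesky(Sigma)               # B @ B.T = Sigma
z = np.random.normal(size=(10000, 2))       # standard normal draws N(0, I)
samples = mu_vec + z @ B.T                  # x ~ mu + B N(0, I)
print(np.cov(samples, rowvar=False))        # should be close to Sigma
```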
<p>[Return to <a href="#Contents">Contents</a>]</p>
<hr>
### <a name="MCMC">MCMC sampling</a>
Great examples of MCMC sampling are at http://elevanth.org/blog/2017/11/28/build-a-better-markov-chain/.
<!-- Let's take a look at a
<a href="http://www.physics.ohio-state.edu/~ntg/MCMC_javascript_visualizations.html">simplified version</a>
-->
Here are the individual simplified simulations:
<ul>
<li>Metropolis-Hastings <a href="http://elevanth.org/mcmcdemo2/applet.html#RandomWalkMH,standard">2D Gaussian</a>
and <a href="http://elevanth.org/mcmcdemo2/applet.html#RandomWalkMH,donut">donut</a>
</li>
<li>Hamiltonian Monte Carlo <a href="http://elevanth.org/mcmcdemo2/applet.html#HamiltonianMC,standard">2D Gaussian</a>
and <a href="http://elevanth.org/mcmcdemo2/applet.html#HamiltonianMC,donut">donut</a>
</li>
<li>Hamiltonian Monte Carlo <a href="http://elevanth.org/mcmcdemo2/applet.html#HamiltonianMC,standard">2D Gaussian with
U-turn</a>
</li>
<li>NUTS sampler <a href="http://elevanth.org/mcmcdemo2/applet.html#NaiveNUTS,standard">2D Gaussian</a>
and <a href="http://elevanth.org/mcmcdemo2/applet.html#NaiveNUTS,multimodal">multi-modal</a>
</li>
</ul>
The detailed simulator can be found at https://chi-feng.github.io/mcmc-demo/, which also links to the github repository with the javascript source.
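For reference, the random-walk Metropolis-Hastings step animated in those demos takes only a few lines of Python. Below is a minimal sketch for a generic unnormalized log-density; it is meant to illustrate the algorithm, not to replace tuned samplers such as those in `emcee` or `pymc3`, and the "donut" target is an illustrative choice.
```python
def metropolis(log_prob, x0, n_steps=5000, step_size=0.5):
    """Random-walk Metropolis-Hastings with isotropic Gaussian proposals."""
    x = np.asarray(x0, dtype=float)
    chain = np.empty((n_steps, x.size))
    log_p = log_prob(x)
    for i in range(n_steps):
        x_proposal = x + step_size * np.random.normal(size=x.size)
        log_p_proposal = log_prob(x_proposal)
        # accept with probability min(1, p_proposal / p_current)
        if np.log(np.random.rand()) < log_p_proposal - log_p:
            x, log_p = x_proposal, log_p_proposal
        chain[i] = x
    return chain

# Try it on a 2-d "donut" density, log p(x) = -(|x| - 3)^2 / 0.5
donut_log_prob = lambda x: -(np.sqrt(np.sum(x**2)) - 3.0)**2 / 0.5
chain = metropolis(donut_log_prob, x0=[3.0, 0.0])
corner.corner(chain, labels=[r"$x$", r"$y$"]);
```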
```javascript
%%javascript
IPython.OutputArea.auto_scroll_threshold = 9999;
```
<IPython.core.display.Javascript object>
```python
# At present this adversely affects the menu bar
from IPython.display import display,HTML
display(HTML(filename="./MCMC_javascript_visualizations.html"))
```
<article id="post-730" class="post-730 post type-post status-publish format-standard hentry category-statistics last">
<h1 id="post-title">Markov Chains: Comparing different samplers</h1>
<p><em><strong>Abstract</strong>: If you are still using a Gibbs sampler, you are working too hard for too little result. Newer, better algorithms trade random walks for frictionless flow.</em>
[Adapted from a <a href="http://elevanth.org/blog/2017/11/28/build-a-better-markov-chain/">blog entry
by Richard McElreath</a> (@rlmcelreath).]
</p>
<h3>Metropolis, Hastings, and the Random Walk</h3>
<p>The simplest and least reliable way of building a Markov chain is the <b>Metropolis-Hastings algorithm</b>.
Below are embedded the MCMC simulations written by Chi Feng, found <a href="https://chi-feng.github.io/mcmc-demo/" rel="noopener" target="_blank">here</a>. The target distribution is a benign two-dimensional Gaussian—a nice Gaussian hill. You are looking down on it, with its peak in the center. The Markov chain wanders around this hill, making random proposals to move away from its current position. These proposals are represented by the arrows. Green arrows are accepted proposals. The chain moves to the new location. Red arrows are rejections.
</p>
<center><object data="./MCMC_javascript_visualizations_files/applet.html#RandomWalkMH,standard" type="text/html" width="600" height="400"></object></center>
<p>As samples build up, the target distribution takes shape, as seen in the histograms on the bottom and left margins. The algorithm works, even though it blindly stumbles around the target, doing a kind of random walk.</p>
<p>And that is exactly the problem. A major problem is that Metropolis-Hastings is a bit too random. So it spends a lot of time re-exploring the same parts of the target, and unless it is tuned just right, it will also reject a lot of proposals (the red arrows), wasting computer time.</p>
<p>Here’s another simulation, this time with a harder target: a donut. The donut target might look weird, but it represents a very common target, by analogy. In high dimensions—once there are many parameters—the target distribution occupies a narrow ring (in high-dimensional space). Most of the probability mass is not near the center. Now look what happens to Metropolis-Hastings:<br>
</p>
<center><object data="./MCMC_javascript_visualizations_files/applet.html#RandomWalkMH,donut" type="text/html" width="600" height="400"></object></center>
<p>Notice how the Markov chain tends to get stuck in specific regions of the donut. Not only that, but it rejects a lot of proposals (the red arrows), so it wastes a lot of computer time doing nothing. Given enough time, it can explore the entire target. But it might take a very long time indeed. When we don’t know what the target looks like in the first place, this kind of behavior is not only annoying, but also hazardous to inference.</p>
<p>The fundamental problem with Metropolis-Hastings, and with Gibbs-Sampling as a special case, is that it is just too random. In simple targets, that isn’t so bad. But in even moderately complex targets, it means inference often isn’t reliable. It tends to get stuck in narrow regions of the target. There must be a better way.</p>
<h3>Better Living Through Physics</h3>
<div id="attachment_761" style="width: 410px" class="wp-caption alignright"><a href="./MCMC_javascript_visualizations_files/tumblr_n84kkuafX11tfhyyio1_500.gif"></a><p class="wp-caption-text">Can your Markov chain do this?</p></div>
<p>If there’s a random way to do something, there’s usually a less random way that is both better and requires more thought. Instead of making random proposals, suppose instead that you run a physics simulation. Your vector of parameters is now a particle in <em>n</em>-dimensional space. The surface in this space is a giant <em>n</em>-dimensional bowl. The shape of the bowl is determined by the shape of the logarithm of the target distribution. If the target is a nice Gaussian, for example, then the log-Gaussian is a smooth parabolic bowl like this (in one-dimension):</p>
<p></p>
<p>To make things a little crazier, suppose that this surface is frictionless. Now what we do is flick the particle in a random direction. It will frictionlessly flow across the bowl, eventually turning around. If we take samples of the particle’s position along its path, before flicking it off in another random trajectory, then we can learn about the shape of the surface. </p>
<p>This is the principle behind <b>Hamiltonian Monte Carlo</b>. It will be easier to see it in action. Here is another simulation, this time using Hamiltonian Monte Carlo, again on the two-dimensional Gaussian target. The paths are flicks of the particle, and the green arrows again represent accepted proposals.
<center><object data="./MCMC_javascript_visualizations_files/applet.html#HamiltonianMC,standard" type="text/html" width="600" height="400"></object></center>
<p>Now the proposals are both within the high-probability region of the target—so many fewer proposals are rejected—and the proposals can get far away from their starting point, so that the chain efficiently explores the whole shape of the target in less time. Effectively, it flows across the target and maps out its whole shape much faster.</p>
<p>The cost of all this elegance is needing more information about the target. Hamiltonian Monte Carlo does a lot more calculation. But it also needs fewer samples to get a good image of the target. Where this really counts is with the donut. Let’s revisit it:
</p>
<center><object data="./MCMC_javascript_visualizations_files/applet.html#HamiltonianMC,donut" type="text/html" width="600" height="400"></object></center>
<p>Now instead of getting stuck, the chain sweeps around the target. Even though all the chain knows at any moment in time is the local shape of the target—it can’t see the whole distribution like you can here—it still manages to glide smoothly around it. This shouldn’t be so amazing—a ball doesn’t know the shape of the surface it rolls on, yet its path is governed by it.</p>
<h3>Stan is NUTS</h3>
<p>There are still some improvements to be had. Hamiltonian Monte Carlo needs to be told how many steps to take in its simulated paths. The step number determines how long the path continues before a new flick is made in a new random direction. If it takes too few steps, then it ends up with samples too similar to one another. If it takes too many, it can also end up with samples too similar to one another. Why? Because the path eventually makes a U-turn. Here’s an example:
</p>
<center><object data="./MCMC_javascript_visualizations_files/applet.html#HamiltonianMC,standard,65" type="text/html" width="600" height="400"></object></center>
<p>The path goes on long enough that it often eats its own tail—it makes an unfortunate U-turn. The algorithm still works, but it isn’t very efficient, because it again explores in local spaces. We can of course tune the number of steps by hand, but that is not so easy when the target distribution is complex.</p>
<p>The <b>No U-Turn Sampler</b> (NUTS) is an approach for adaptively finding a good number of steps. The NUTS algorithm tries to figure out when the path starts to turn around. In order to do this efficiently, it needs to simulate the path in both directions. It looks pretty weird:
</p>
<center><object data="./MCMC_javascript_visualizations_files/applet.html#NaiveNUTS,standard" type="text/html" width="600" height="400"></object></center>
<p>Notice how the path grows in both directions. This is the algorithm figuring out when the path turns around. When the path starts to turn around, NUTS stops the simulation and takes a sample. Then it flicks the particle in another random direction and starts another simulation. There are lots of little adaptive nudges in these algorithms that help them explore the target more efficiently. An implementation like Stan (<a href="http://mc-stan.org/" rel="noopener" target="_blank">mc-stan.org</a>) uses an advanced version of NUTS that is even slicker.</p>
<h3>Problems Remain</h3>
<p>Hamiltonian algorithms still have limitations. Some targets are still hard to explore. Here’s an example: a multimodal target. While the paths do a good job exploring each lump of probability mass, they have trouble transitioning among them.
</p>
<center><object data="./MCMC_javascript_visualizations_files/applet.html#NaiveNUTS,multimodal" type="text/html" width="600" height="400"></object></center>
<p>Targets like this are not so unusual. They arise in many classification and latent variable models. With some clever coding, you can collapse some modes together. But you have to realize what is going on, first. Other issues arise with models that contain very steep changes in log-probability. Wizards are working on solutions to these problems. But even before solutions are found, Hamiltonian samplers are typically much better than Gibbs zombies.</p>
<h3>Read More</h3>
<p>The BUGS project’s history is summarized in: Lunn, Spiegelhalter, and Best. (2009). “The BUGS project: Evolution, critique and future directions.” <a href="https://doi.org/10.1002%2Fsim.3680">doi:10.1002/sim.3680</a></p>
<p>Hamiltonian Monte Carlo was originally called “Hybrid Monte Carlo”: Duane, Kennedy, Pendleton, and Roweth (1987) “Hybrid Monte Carlo”. <a href="https://doi.org/10.1016%2F0370-2693%2887%2991197-X">doi:10.1016/0370-2693(87)91197-X</a></p>
<p>The No U-Turn Sampler (NUTS) was first described by Hoffman and Gelman (2011) “The No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo.” <a href="https://arxiv.org/abs/1111.4246">arxiv.org/abs/1111.4246</a></p>
<p>Michael Betancourt’s “Conceptual Introduction to Hamiltonian Monte Carlo” is only slightly technical and worth your time. <a href="https://arxiv.org/abs/1701.02434" rel="noopener" target="_blank">arxiv.org/abs/1701.02434</a></p>
<p>[Return to <a href="#Contents">Contents</a>]</p>
<hr>
## <a name="Evidence">Model selection and evidence</a>
$\newcommand{\thetavec}{\boldsymbol{\theta}}$
Determine the evidence for different models $M_1$ and $M_2$ via <em>marginalization</em>: integrate over all possible sets of parameters ${\thetavec}$ of each model, using the same data $D$ and information $I$.
The evidence ratio for two different models:
$$
\frac{p(M_1\mid D, I)}{p(M_2\mid D, I)}
= \frac{p(D\mid M_1, I)\,p(M_1\mid I)}{p(D\mid M_2, I)\,p(M_2\mid I)}
$$
The Bayes ratio (also called the Bayes factor), which implements Occam’s razor:
$$
\frac{p(D\mid M_1, I)}{p(D\mid M_2, I)}
= \frac{\int\!d\thetavec_1\, p(D\mid\thetavec_1,M_1,I)
\,p(\thetavec_1\mid M_1,I)}
{\int\!d\thetavec_2\, p(D\mid\thetavec_2,M_2,I)
\,p(\thetavec_2\mid M_2,I)}
$$
Example: what order polynomial underlies the noisy data?
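Here is a sketch (not part of the original notes) of how the Bayes ratio answers that question numerically: estimate the evidence $p(D\mid M)$ for polynomials of increasing degree by Monte Carlo marginalization over the coefficients. The quadratic "true" model, the uniform coefficient priors, and the noise level are assumptions made only for this illustration, and simple Monte Carlo marginalization becomes inefficient once a model has more than a few parameters.
```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic noisy data from an assumed quadratic "true" model
x = np.linspace(-1, 1, 20)
sigma_noise = 0.2
y = 0.5 + 1.0 * x - 2.0 * x**2 + rng.normal(0, sigma_noise, size=x.size)

def log_likelihood(coeffs, x, y, sigma):
    """Gaussian log likelihood of the data given polynomial coefficients."""
    model = np.polynomial.polynomial.polyval(x, coeffs)
    return -0.5 * np.sum((y - model)**2 / sigma**2 + np.log(2 * np.pi * sigma**2))

def log_evidence(degree, n_samples=50_000, prior_width=5.0):
    """Estimate log p(D|M) by averaging the likelihood over draws from a
    uniform prior on each coefficient in [-prior_width, prior_width]."""
    coeffs = rng.uniform(-prior_width, prior_width, size=(n_samples, degree + 1))
    logL = np.array([log_likelihood(c, x, y, sigma_noise) for c in coeffs])
    return np.logaddexp.reduce(logL) - np.log(n_samples)

for deg in range(4):
    print(f"degree {deg}: log evidence ~ {log_evidence(deg):.1f}")
```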
<p>[Return to <a href="#Contents">Contents</a>]</p>
<hr>
## <a name="GPs">Gaussian processes (GPs)</a>
### Overview of GPs
GP: the natural generalization of multivariate Gaussian random variables to infinite (countable or continuous) index sets. They look like random functions, but with characteristic degrees of smoothness, correlation lengths, and ranges. Here are some examples with different "kernels" and different parameters that dictate those features (figure from J. Melendez):
#### Explanations of GPs from the web
The following is adapted from a blog entry by Katherine Bailey at http://katbailey.github.io/post/gaussian-processes-for-dummies/.
"Here’s how Kevin Murphy explains it in the excellent textbook <i>Machine Learning: A Probabilistic Perspective</i>:"
'A GP defines a prior over functions, which can be converted into a posterior over functions once we have seen some data. Although it might seem difficult to represent a distribution over a function, it turns out that we only need to be able to define a distribution over the function’s values at a finite, but arbitrary, set of points, say $x_1, \ldots, x_N$. A GP assumes that $p(f(x_1),\ldots,f(x_N))$ is jointly Gaussian, with some mean $\mu(x)$ and covariance $\Sigma(x)$ given by $\Sigma_{ij} = \kappa(x_i,x_j)$, where $\kappa$ is a positive definite kernel function. The key idea is that if $x_i$ and $x_j$ are deemed by the kernel to be similar, then we expect the output of the function at those points to be similar, too.'
So it is important to stress that we are really only dealing with a discrete set of points. Thus the physicist-friendly idea of a continuum limit of masses on springs may be preferred to more abstract notions in function space.
It should also be sufficient to consider the bivariate case, because the generalization from one to two variables is really where the new feature of correlation comes in. Generalizing further really doesn't introduce anything new.
### Bivariate normal case
$\newcommand{\xvec}{\textbf{x}}$
$\newcommand{\muvec}{\boldsymbol{\mu}}$
The general multivariate Gaussian distribution is
$$
p(\xvec\mid \muvec,\Sigma) = \frac{1}{\sqrt{\det(2\pi\Sigma)}} e^{-\frac12(\xvec-\muvec)^{\rm T}\Sigma^{-1}(\xvec-\muvec)}
$$
For the <em>bivariate</em> case we can parameterize the mean vector and covariance matrix as
$$
\muvec = \left( \begin{array}{c}
\mu_x \\ \mu_y
\end{array} \right)
\;, \qquad
\Sigma = \left( \begin{array}{cc}
\sigma_x^2 & \rho\sigma_x\sigma_y \\
\rho\sigma_x\sigma_y & \sigma_y^2
\end{array}
\right)
$$
The covariance matrix must be positive definite, which implies $\color{red}{\rho^2\lt 1}$.
If we take $\mu_x = \mu_y = 0$ and $\sigma_x = \sigma_y = \sigma$ for clarity,
so that
$$
\Sigma = \sigma^2 \left(\begin{array}{cc}
1 & \rho \\
\rho & 1
\end{array}
\right)
$$
and
$$
p(x,y\mid \sigma,\rho) = \frac{1}{2\pi\sigma^2\sqrt{1-\rho^2}}
   \exp\left(-\frac{x^2 + y^2 - 2\rho x y }{2\sigma^2 (1-\rho^2)}
   \right)
\;.
$$
It's clear that contours of equal probability have $x^2 + y^2 - 2\rho xy = \mbox{constant}$, so they are ellipses. The value of $\rho$ determines the eccentricity of the ellipse.
If $\rho=0$, $x$ and $y$ are independent (uncorrelated) and we have a circle. As $\rho$ approaches $+1$, $x$ and $y$ are increasingly correlated (toward straight line at $45^\circ$), while for $\rho$ approaching $-1$ they become increasingly anti-correlated (toward straight line at $-45^\circ$).
For reference, the Cholesky decomposition of $\Sigma$ is
$$
\Sigma = \sigma^2\left( \begin{array}{cc}
1 & \rho \\
\rho & 1
\end{array}
\right)
=
\sigma^2\left( \begin{array}{cc}
1 & 0 \\
\rho & \sqrt{1-\rho^2}
\end{array}
\right)
\left( \begin{array}{cc}
1 & \rho \\
0 & \sqrt{1-\rho^2}
\end{array}
\right)
$$
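As a quick numerical check (a sketch added here, not part of the original notes), the Cholesky factor above turns independent standard normals into a correlated pair with the intended $\sigma$ and $\rho$:
```python
import numpy as np

rng = np.random.default_rng(42)
sigma, rho = 1.0, 0.8

# L such that Sigma = sigma^2 L L^T, exactly as written above
L = sigma * np.array([[1.0, 0.0],
                      [rho, np.sqrt(1 - rho**2)]])

z = rng.normal(size=(2, 100_000))   # independent standard normals
x, y = L @ z                        # correlated pair

print("sample standard deviations:", x.std(), y.std())   # both ~ sigma
print("sample correlation:", np.corrcoef(x, y)[0, 1])    # ~ rho
```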
### Example code for generating and plotting GPs
The following code is adapted from a blog post by Katherine Bailey entitled <a href="http://katbailey.github.io/post/gaussian-processes-for-dummies/"><i>Gaussian Processes for Dummies</i></a>. First we generate several instances of draws from a Gaussian process with a squared exponential kernel function, which is the covariance between $x$ and $x'$:
$$ \kappa_{\rm SE}(x,x') = \sigma^2 e^{-(x-x')^2/2l^2} $$
So we can see that $\sigma$ controls the vertical extent of the functions while $l$ controls
how rapidly they wiggle. Comparing to our expression above for the bivariate normal case,
we see that $\rho$ is given by $e^{-(x-x')^2/2l^2}$. So when $x$ and $x'$ are close,
$\rho \approx 1$ and the value of the function is highly correlated. When $x$ and $x'$ are far apart, $\rho \rightarrow 0$, and they become independent (thus $l$ plays the role of a correlation length).
Let's generate some GPs with this kernel! For the function $f(x)$ we write draws as
$$
f(x) \sim \mathcal{GP}[\mu(x),\kappa(x,x')]
$$
where $\mu(x)$ is the mean at each $x$ and $\kappa(x,x')$ is the covariance between $x$ and $x'$. In practice we have a finite set of $N$ points $\textbf{x} = \{x_i\}_{i=1}^{N}$ with corresponding function values $\textbf{f}=\{f(x_i)\}_{i=1}^{N}$.
We form the mean vector $\boldsymbol{\mu} = \mu(\textbf{x})$ and the covariance matrix $K_{ij} = \kappa(x_i,x_j)$. Then
$$ \textbf{f} \mid \textbf{x} \sim \mathcal{N}(\boldsymbol{\mu},K)
$$
are draws from a multivariate normal distribution. Try it:
```python
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# Define the squared exponential kernel function for the covariance
# We take the variance to be 1.
def sqr_exp_kernel(a, b, length_param):
sqdist = np.sum(a**2,1).reshape(-1,1) + np.sum(b**2,1) - 2*np.dot(a, b.T)
return np.exp(-sqdist / (2*length_param**2))
# Grid of x points
npts = 500
x_min = -5
x_max = +5
Xtest = np.linspace(x_min, x_max, npts).reshape(-1,1)
length_param = .5 # this is "l" (correlation length)
K_ss = sqr_exp_kernel(Xtest, Xtest, length_param)
# Get Cholesky decomposition (square root) of the covariance matrix
nugget = 1e-12 # size of nugget will depend on how many points are used
# 1e-12 for 500; 1e-13 for 100; 1e-15 for 50
L = np.linalg.cholesky(K_ss + nugget*np.eye(npts))
# Sample 3 sets of standard normals for our test points,
# multiply them by the square root of the covariance matrix
# Note: mean mu = 0 here implicitly.
f_prior = np.dot(L, np.random.normal(size=(npts,3)))
# Now let's plot the 3 sampled functions.
plt.plot(Xtest, f_prior)
plt.axis([-5, 5, -3, 3])
plt.title('Three samples from the GP prior with l = {:1.1f}'.format(length_param))
plt.show()
```
Now we train it on some data (see references for details):
```python
# Noiseless training data
Xtrain = np.array([-4, -3, -2, -1, 1]).reshape(5,1)
ytrain = np.sin(Xtrain)
#ytrain = np.array([0,0,0,0,0]).reshape(5,1)
# Apply the same kernel function to our training points
nugget_train = 5e-5
K = sqr_exp_kernel(Xtrain, Xtrain, length_param)
L = np.linalg.cholesky(K + nugget_train*np.eye(len(Xtrain)))
# Compute the mean at our test points.
K_s = sqr_exp_kernel(Xtrain, Xtest, length_param)
Lk = np.linalg.solve(L, K_s)
mu = np.dot(Lk.T, np.linalg.solve(L, ytrain)).reshape((npts,))
# Compute the standard deviation so we can plot it
s2 = np.diag(K_ss) - np.sum(Lk**2, axis=0)
stdv = np.sqrt(s2)
# Draw samples from the posterior at our test points.
nugget_test = 1e-6
L = np.linalg.cholesky(K_ss + nugget_test*np.eye(npts) - np.dot(Lk.T, Lk))
f_post = mu.reshape(-1,1) + np.dot(L, np.random.normal(size=(npts,3)))
plt.plot(Xtrain, ytrain, 'bs', ms=8)
plt.plot(Xtest, f_post)
plt.gca().fill_between(Xtest.flat, mu-2*stdv, mu+2*stdv, color="#dddddd")
plt.plot(Xtest, mu, 'r--', lw=2)
plt.axis([-5, 5, -3, 3])
plt.title('Three samples from the GP posterior')
plt.show()
```
### Other demos for Gaussian Processes (and other regression)
<ul>
<li>
Gaussian process regression, where you can add data points, play with the hyperparameters, and then see the inference for the curve. It’s by Tomi Peltola:
http://www.tmpl.fi/gp/
<li>
This simulation shows how a GP prior is a distribution over functions, and how observing data conditions the prior to obtain the GP posterior.
http://rpradeep.webhop.net/gpr/
</ul>
<p>[Return to <a href="#Contents">Contents</a>]</p>
<hr>
## <a name="Appendices">Appendices</a>
### <a name="References">References</a>
Please suggest additional references (with links).
### Physics-oriented pedagogical articles and texts
<ul>
<li>R. Trotta,
<a href="https://www.tandfonline.com/doi/abs/10.1080/00107510802066753"><i>Bayes in the sky: Bayesian inference and model selection in cosmology</i></a>, Contemp. Phys. <b>49</b>, 71 (2008)
[<a href="https://arxiv.org/abs/0803.4089">arXiv:0803.4089</a>].
<li>D.S. Sivia and J. Skilling,
<a href="https://www.amazon.com/Data-Analysis-Bayesian-Devinderjit-Sivia/dp/0198568320/ref=mt_paperback?_encoding=UTF8&me=&qid="><i>Data Analysis: A Bayesian Tutorial, 2nd edition</i></a>, (Oxford University Press, 2006).
<li>P. Gregory,
<a href="https://www.amazon.com/Bayesian-Logical-Analysis-Physical-Sciences/dp/0521150124/ref=sr_1_1?s=books&ie=UTF8&qid=1538587731&sr=1-1&keywords=gregory+bayesian"><i>Bayesian Logical Data Analysis for the Physical Sciences: A Comparative Approach with Mathematica® Support</i></a>, (Cambridge University Press, 2010).
</ul>
### Standard statistics references
<ul>
<li>A. Gelman et al.,
<a href="https://www.amazon.com/Bayesian-Analysis-Chapman-Statistical-Science/dp/1439840954/ref=sr_1_1?ie=UTF8&qid=1538589213&sr=8-1&keywords=gelman+bayesian+data+analysis"><i>Bayesian Data Analysis, 3rd edition</i>, (Chapman and Hall/CRC, 2013).
</ul>
### BUQEYE references
<ul>
<li>R.J. Furnstahl, D.R. Phillips and S. Wesolowski,
<i>A recipe for EFT uncertainty quantification in nuclear physics</i>,
J. Phys. G <b>42</b>, 034028 (2015), [<a href="https://arxiv.org/abs/1407.0657">arXiv:1407.0657</a>].
<li> R.J. Furnstahl, N. Klco, D.R. Phillips and S.Wesolowski,
<i>Quantifying truncation errors in effective field theory</i>,
Phys. Rev. C <b>92</b>, 024005 (2015)
[<a href="https://arxiv.org/abs/1506.01343">arXiv:1506.01343</a>].
<li>S. Wesolowski, N. Klco, R.J. Furnstahl, D.R. Phillips and A. Thapaliya,
<i>Bayesian parameter estimation for effective field theories</i>,
J. Phys. G <b>43</b>, 074001 (2016)
[<a href="https://arxiv.org/abs/1511.03618">arXiv:1511.03618</a>].
<li> J.A. Melendez, S. Wesolowski and R.J. Furnstahl,
<i>Bayesian truncation errors in chiral effective field theory: nucleon-nucleon observables</i>,
Phys. Rev. C <b>96</b>, 024003 (2017)
[<a href="https://arxiv.org/abs/1704.03308">arXiv:1704.03308</a>].
<li> S. Wesolowski, R.J. Furnstahl, J.A. Melendez and D.R. Phillips,
<i>Exploring Bayesian parameter estimation for chiral effective field theory using nucleon-nucleon phase shifts</i>,
[<a href="https://arxiv.org/abs/1808.08211">arXiv:1808.08211</a>].
</ul>
### Github repositories
Please suggest more!
<ul>
<li>https://github.com/jakevdp/BayesianAstronomy Materials for the Bayesian Methods in Astronomy workshop at the 227th American Astronomical Society meeting. Includes Jupyter notebooks and useful exercises.
<li>http://people.duke.edu/~ccc14/sta-663-2018/ STA 663: Computational Statistics and Statistical Computing (2018) at Duke University. Lots of good things here!
</ul>
### <a name="Vocabulary">Vocabulary</a>
Plan: build up a good set of definitions with appropriate links. Please add more words/phrases!
<dl>
<dt>conjugate prior </dt>
<dd>If the probability distribution family (e.g., beta distributions) for the posterior pdf is the same as for the prior pdf, the latter is said to be a <a href="https://en.wikipedia.org/wiki/Conjugate_prior">conjugate prior</a>. This means that the updating by Bayes' rule can be carried out analytically. Some Bayesian practitioners are strongly opposed to the use of conjugate priors (see <a href="https://github.com/jakevdp/BayesianAstronomy/blob/master/Index.ipynb"> comments here</a>). </dd>
<!--
<dt>contingent </dt>
<dd> </dd>
-->
<dt>credible vs. confidence interval </dt>
<dd>This is a contrast between Bayesian and frequentist statistics. For a frequentist, a parameter has a true value, which is fixed and not a distribution. A 95% confidence interval means that with a large number of repeated trials, 95% of the calculated confidence intervals would include the true value. This is clearly hard to think about! A Bayesian 95% credible interval is the range of the posterior for the parameter (which is treated as a random variable) that contains 95% of the probability. So there is a 95% probability that the parameter is in that interval.
</dd>
<dt>evidence </dt>
<dd>In the standard context of inferring parameters $\boldsymbol{\theta}$ given data $D$ and information $I$, the evidence is $p(D\mid I) = \int\! d\boldsymbol{\theta}\, p(D \mid \boldsymbol{\theta},I)\,p(\boldsymbol{\theta}\mid I)$. This is also called the Fully Marginalized Likelihood or FML. The expression shows that it is the integral of the likelihood over <i>all</i> $\boldsymbol{\theta}$, weighted by the prior. This is typically an expensive integral to do. In the context of model fitting (i.e., parameter estimation), it acts as a normalization constant and in most cases can be ignored because the normalization can be found directly (or only relative probabilities are needed). </dd>
<dt>gaussian process </dt>
<dd>
From [Wikipedia](https://en.wikipedia.org/wiki/Gaussian_process): "In probability theory and statistics, a Gaussian process is a stochastic process (a collection of random variables indexed by time or space), such that every finite collection of those random variables has a multivariate normal distribution, i.e. every finite linear combination of them is normally distributed. The distribution of a Gaussian process is the joint distribution of all those (infinitely many) random variables, and as such, it is a distribution over functions with a continuous domain, e.g. time or space."
</dd>
<dt>hierarchical model </dt>
<dd>A model with hyperparameters. See [Wikipedia](https://en.wikipedia.org/wiki/Bayesian_hierarchical_modeling). </dd>
<dt>hyperparameter </dt>
<dd>A parameter of a prior distribution. </dd>
<dt>iid (independently and identically distributed) </dt>
<dd>A set of random variables is iid (or i.i.d. or IID) if each random variable has the same probability distribution and all are mutually independent.
</dd>
<dt>likelihood </dt>
<dd>Usually in the form $p(D\mid \boldsymbol{\theta},I)$, where $\boldsymbol{\theta}$ are the parameters of our model, $D$ is the data, and $I$ is any other information we use. This is the probability of observing our actual data given the model (with the particular parameters $\boldsymbol{\theta}$). It is the same quantity that is maximized in frequentist maximum-likelihood approaches. </dd>
<dt>MAP estimate</dt>
<dd><a href="https://en.wikipedia.org/wiki/Maximum_a_posteriori_estimation">Maximum a posteriori estimate</a>. This is the mode (maximum) of the posterior distribution for the quantity of interest. If the prior is uniform, the MAP estimate equals the maximum likelihood estimate. </dd>
<dt>maximum entropy </dt>
<dd>A method used to determine priors. </dd>
<dt>MCMC </dt>
<dd>Markov-chain Monte Carlo. A generic name for stochastic sampling methods. </dd>
<dt>model selection and model averaging </dt>
<dd>Model selection compares competing models, for example via the evidence ratio described in the <a href="#Evidence">Model selection and evidence</a> section; model averaging combines the predictions of several models, weighted by their posterior probabilities. </dd>
<dt>nugget </dt>
<dd>
For Gaussian process (GP) calculations or any sampling of a multivariate normal distribution, one typically needs to find the Cholesky decomposition of the covariance matrix. However, this matrix can become ill-conditioned (too-small or negative eigenvalues). A standard solution is to add a small number, called a nugget, to the diagonal of the covariance matrix. For GP regression, this is equivalent to adding (or increasing, if already present) the data noise.
</dd>
<dt>nuisance parameter </dt>
<dd>A nuisance parameter is a parameter in your model whose value you don't care about for the posterior. So you integrate it out (marginalize). </dd>
<dt>overfitting and underfitting</dt>
<dd>This example from http://scikit-learn.org/stable/auto_examples/model_selection/plot_underfitting_overfitting.html of fitting polynomials to nonlinear functions illustrates overfitting and underfitting. The true function is a cosine with noise added. A polynomial of degree 1 is an inadequate model for this data; this is underfitting. The polynomial of degree 15 tries to fit the noise; this is overfitting.
</dd>
<dt>point estimate (cf. interval estimate) </dt>
<dd>A point estimate is a single value to characterize a
posterior. It could be the mode, mean, median or something
else. An interval estimate is more natural in Bayesian statistics, because the full posterior is the real target. Giving a series of credible intervals often conveys much of the information about the posterior.</dd>
<dt>posterior </dt>
<dd>This is the quantity of the left side of Bayes' rule, the thing we want to compute. Often in the form $p(\boldsymbol{\theta}\mid D,I)$, where $\boldsymbol{\theta}$ are the parameters of our model, $D$ is the data, and $I$ is any other information we use. It is our knowledge of the model given the data and any relevant background knowledge (which include the choice of model). </dd>
<dt>prior </dt>
<dd>A pdf that encodes what is known about the answer (e.g., parameters) before any data is used. The notation consistent with our definitions of <i>posterior</i> and <i>likelihood</i> is $p(\boldsymbol{\theta}\mid I)$, where $\boldsymbol{\theta}$ are the parameters of our model and $I$ is any other information we use (e.g., some of the parameters must be positive or less than a known magnitude because of physics reasons).
See also <i>conjugate prior</i> and <i>maximum entropy</i>.
</dd>
<dt>residual</dt>
<dd>The difference of theory prediction and experimental data.
</dd>
</dl>
### <a name="Notation">Notation</a> <span class="red">[still coming . . .]</span>
Plan: build up a dictionary of notation with appropriate links and examples (with code).
univariate normal distribution
$$\mathcal{N}(\mu,\sigma^2)$$
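As a sketch of one such entry (the format here is an assumption about the planned dictionary), drawing samples from $\mathcal{N}(\mu,\sigma^2)$ and checking the first two moments:
```python
import numpy as np

mu, sigma = 1.0, 2.0
samples = np.random.default_rng(0).normal(loc=mu, scale=sigma, size=100_000)
print(samples.mean(), samples.std())   # approximately mu and sigma
```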
<hr>
<p>[Return to <a href="#Contents">Contents</a>]</p>
<hr>
```python
```
# Policy Gradient
The policy gradient method for reinforcement learning learns the policy directly, not the Q-function. Compared to Q-learning, it is more straightforward to use with continuous state and action spaces.
## Log derivative trick
In reinforcement learning we are trying to find a policy that maximizes the sum of rewards over episode.
$$
R = r_1 + r_2 + ... + r_n
$$
$$
\pi^* = argmax_{\pi} E[R]
$$
The expectation here is over environment transitions $P(s_{t+1}|s_t, a_t)$ and actions chosen by the policy $\pi_{\theta}(a_t|s_t)$, where $\theta$ denotes the parameters (network weights) of the policy. In the general case this can be seen as maximization of the expected total reward $R(\tau)$ of a trajectory $\tau = \langle s_0, a_0, r_1, s_1, a_1, \ldots, r_n, s_n\rangle$, where the only thing we can change is the probability of the trajectory, denoted by $p_{\theta}(\tau)$:
$$
\max E_{\tau \sim p_{\theta}(\tau)}[R(\tau)]
$$
While the probability of trajectory $\tau$ depends both on environment dynamics $P(s_{t+1}|s_t, a_t)$ and policy $\pi_{\theta}(a_t|s_t)$, we only have control over the latter. Therefore we want to change the policy parameters $\theta$ in a way that would maximize the expectation (average value) of $R(\tau)$. We can do this by taking gradient steps with respect to the $\theta$:
$$
\nabla_{\theta} E_{\tau \sim p_{\theta}(\tau)}[R(\tau)]
$$
Computing this gradient is not straightforward, but it turns out we can rewrite the expression so that the gradient can be estimated with Monte Carlo sampling:
$$
\begin{align}
\nabla_{\theta} E_{\tau \sim p_{\theta}(\tau)}[R(\tau)] &= \nabla_{\theta} \sum_{\tau} p_{\theta}(\tau) R(\tau) & \text{definition of expectation} \\
& = \sum_{\tau} \nabla_{\theta} p_{\theta}(\tau) R(\tau) & \text{swap sum and gradient} \\
& = \sum_{\tau} p_{\theta}(\tau) \frac{\nabla_{\theta} p_{\theta}(\tau)}{p_{\theta}(\tau)} R(\tau) & \text{both multiply and divide by } p_{\theta}(\tau) \\
& = \sum_{\tau} p_{\theta}(\tau) \nabla_{\theta} \log p_{\theta}(\tau) R(\tau) & \text{use the fact that } \nabla_{\theta} \log(x) = \frac{1}{x} \nabla_{\theta} x \\
& = E_{\tau \sim p_{\theta}(\tau)}[\nabla_{\theta} \log p_{\theta}(\tau) R(\tau)] & \text{definition of expectation}
\end{align}
$$
Because the expectation is still over $p_{\theta}(\tau)$, we can sample trajectory $\tau$ as usual, compute its total reward with $R(\tau)$ and this multiplied with $\nabla_{\theta} \log p_{\theta}(\tau)$ is the unbiased estimate of the gradient.
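As a sanity check of the estimator (a toy example added here, not part of the original derivation), the same trick can be applied to a one-dimensional Gaussian "policy" where the true gradient is known in closed form: for $x \sim \mathcal{N}(\theta, 1)$ and reward $R(x) = x^2$, we have $E[R] = \theta^2 + 1$, so the true gradient is $2\theta$, and $\nabla_\theta \log \mathcal{N}(x\mid\theta,1) = x - \theta$.
```python
import numpy as np

rng = np.random.default_rng(0)
theta = 1.5
x = rng.normal(theta, 1.0, size=1_000_000)

# score-function (log-derivative) estimate of the gradient of E[x^2] w.r.t. theta
grad_estimate = np.mean((x - theta) * x**2)
print(grad_estimate, "vs analytic", 2 * theta)
```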
## Policy gradient formula
With a little more work we can show that $\nabla_{\theta} \log p_{\theta}(\tau)$ does not depend on the environment dynamics $P(s_{t+1}|s_t, a_t)$, only on the policy $\pi_{\theta}(a_t|s_t)$:
$$
\begin{align}
p_{\theta}(\tau) &= p(s_0)\prod_{t=0}^n\pi_{\theta}(a_t|s_t)P(s_{t+1}|s_t,a_t) & \text{probability of a trajectory}\\
\log p_{\theta}(\tau) &= \log p(s_0) + \sum_{t=0}^n \left[ \log \pi_{\theta}(a_t|s_t) + \log P(s_{t+1}|s_t,a_t) \right] & \text{log probability of a trajectory}\\
\nabla_{\theta} \log p_{\theta}(\tau) &= \sum_{t=0}^n \nabla_{\theta} \log \pi_{\theta}(a_t|s_t) & \text{environment dynamics does not depend on }\theta
\end{align}
$$
Thanks to the logarithm we can work with a sum instead of a product. In the sum, all the derivatives of the environment dynamics probabilities are zero, because they do not depend on the policy parameters $\theta$. The final form of the policy gradient formula is as follows:
$$
\nabla_{\theta} E[R] = E\left[\sum_{t=0}^n \nabla_{\theta} \log \pi_{\theta}(a_t|s_t) R \right]
$$
## Reduction of variance
While the above derivation may be mathematically elegant, the resulting gradient estimator is very noisy. It turns out there are multiple ways to reduce the variance of the gradient.
### Use return instead of total reward
Because the goodness of an action cannot possibly depend on actions and rewards before it, we can safely replace total reward $R$ with return $R_t$:
$$
\nabla_{\theta} E[R] = E\left[\sum_{t=0}^n \nabla_{\theta} \log \pi_{\theta}(a_t|s_t) R_t \right]
$$
### Use constant baseline
If we subtract a constant from return in policy gradient formula, it does not affect the result (because derivative of constant is zero) and it can substantially reduce the variance, therefore reducing the time to convergence.
$$
\nabla_{\theta} E[R] = E\left[\sum_{t=0}^n \nabla_{\theta} \log \pi_{\theta}(a_t|s_t) (R_t - b) \right]
$$
The quantity $R_t -b$ is sometimes called an **advantage**. Indeed it measures "how much better was the return this time compared to the average".
### Use state value as a baseline
Even better than using constant baseline is to use state value $V(s_t)$ as a baseline ([why we can do this?](https://ai.stackexchange.com/questions/7896/why-is-baseline-conditional-on-state-at-some-timestep-unbiased)):
$$
\nabla_{\theta} E[R] = E\left[\sum_{t=0}^n \nabla_{\theta} \log \pi_{\theta}(a_t|s_t) (R_t - V(s_t)) \right]
$$
Indeed "average return from state $s_t$" is basically the definition of state value $V(s_t)$ and therefore it is the best possible baseline.
## Intuition
While the mathematical derivation of policy gradient might be a bit involved, the intuition behind it is pretty straightforward. First note that without the advantage part, the policy gradient formula is just normal log-likelihood maximization, meaning it increases the probability of chosen actions. Advantages just weight whether the probability is increased or decreased and how much:
* if $R_t - b$ is positive, then current return was better than usual and the probability of the action is *increased*.
* if $R_t - b$ is negative, then the current return was worse than usual and the probability of the action is instead *decreased*.
For example, with a discrete action space where a softmax produces the action probabilities, decreasing the probability of one action necessarily increases the probabilities of the other actions (a small demonstration follows below). In other words, negative advantages say: explore more, try other actions.
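A tiny demonstration of this point (toy logits, not taken from the CartPole code below): lowering one action's logit under a softmax necessarily raises the probabilities of the other actions.
```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

logits = np.array([1.0, 0.5, -0.2])
print("before:", softmax(logits))

logits[0] -= 0.5   # a negative advantage effectively pushes this logit down
print("after: ", softmax(logits))   # action 0 goes down, actions 1 and 2 go up
```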
## Implementation
When looking at the policy gradient formulas, it may not be evident how to implement it without digging into the depths of TensorFlow. Actually the implementation can be very simple, with a small modification to supervised learning loss function. Remember that most loss functions used in supervised learning actually try to maximize the log-likelihood of the target value:
* categorical cross-entropy loss function is $L = -\sum_i \log(p_{ik})$. By minimizing the negative log probability, it actually tries to maximize the log probability of the target class $k$ of sample $i$.
* mean squared error loss function is $L = \sum_i (\hat{y}_i-y_i)^2$. By minimizing the mean squared error, it [actually tries to](https://www.jessicayung.com/mse-as-maximum-likelihood/) maximize the log probability of $y_i$ under Gaussian distribution with mean $\hat{y}_i$ and fixed standard deviation.
For the general case we can rewrite those loss functions as
$$
L = -\sum_i \log p_{\theta}(y_i|x_i)
$$
where $x_i$ is the input to the network, $y_i$ is the target output and $\theta$ represents the parameters (weights) of the network. All we need to do, is to augment this loss function with weights for individual samples:
$$
L = -\sum_i \log p_{\theta}(y_i|x_i) \alpha_i
$$
Sample weighting is used in supervised learning to fight with class imbalance. It is already implemented in Keras as `sample_weight` parameter to `Model.fit()` method. Note that the gradient of the loss function now looks a lot like policy gradient formula:
$$
\nabla_{\theta} L = -\sum_i \nabla_{\theta} \log p_{\theta}(y_i|x_i) \alpha_i
$$
Because sample weights $\alpha_i$ are constants, they pass through the gradient operation as just multipliers.
Now the only thing left is to replace supervised learning terminology with reinforcement learning terms:
$$
\begin{align}
p_{\theta}(y_i|x_i) &= \pi_{\theta}(a_t|s_t)\\
\alpha_i &= R_t - b\\
i &= t
\end{align}
$$
This gives us the policy gradient loss function, with the only difference of negative sign, because loss functions are usually minimized, not maximized:
$$
L = -\sum_{t=0}^n \log \pi_{\theta}(a_t|s_t) (R_t - b)
$$
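For concreteness, here is a toy evaluation of this loss with made-up probabilities and advantages; it is just a negative log-likelihood weighted per timestep, which is exactly what the `sample_weight` argument implements in the Keras code below.
```python
import numpy as np

# probabilities the policy assigned to the actions that were actually taken
pi_a_given_s = np.array([0.7, 0.2, 0.9, 0.4])
# advantages R_t - b for the same timesteps
advantages = np.array([1.2, -0.8, 0.3, -0.1])

loss = -np.sum(np.log(pi_a_given_s) * advantages)
print(loss)
```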
## Algorithm
Finally the policy gradient algorithm goes like this:
```
repeat
collect a trajectory by sampling actions from policy
calculate returns per time step
calculate advantages per time step
train the policy network with
states as inputs
actions as outputs
advantages as sample weights
until termination
```
# CartPole Example
```python
import gym
import numpy as np
import matplotlib.pyplot as plt
from keras.models import Model
from keras.layers import Input, Dense
from keras.optimizers import RMSprop
```
Using TensorFlow backend.
```python
# create the CartPole environment
env = gym.make('CartPole-v0')
print("Observation space:", env.observation_space)
print("Action space:", env.action_space)
```
Observation space: Box(4,)
Action space: Discrete(2)
```python
# create a model with two hidden layers
x = Input(shape=env.observation_space.shape)
h1 = Dense(64, activation='tanh')(x)
h2 = Dense(64, activation='tanh')(h1)
p = Dense(env.action_space.n, activation='softmax')(h2)
# use RMSProp optimizer and categorical crossentropy loss
model = Model(x, p)
model.compile(optimizer=RMSprop(0.003), loss='sparse_categorical_crossentropy')
model.summary()
```
WARNING:tensorflow:From /home/tambet/miniconda3/envs/nn/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) (None, 4) 0
_________________________________________________________________
dense_1 (Dense) (None, 64) 320
_________________________________________________________________
dense_2 (Dense) (None, 64) 4160
_________________________________________________________________
dense_3 (Dense) (None, 2) 130
=================================================================
Total params: 4,610
Trainable params: 4,610
Non-trainable params: 0
_________________________________________________________________
```python
# calculate discounted returns (reward-to-go) per timestep;
# assumes the trajectory ends with done=True, so `ret` is initialized
# on the first iteration of the reversed loop
def calculate_returns(rewards, dones, discount=0.9):
returns = []
for reward, done in zip(reversed(rewards), reversed(dones)):
if done:
ret = reward
else:
ret = reward + discount * ret
returns.insert(0, ret)
return returns
# buffers to keep average returns per timestep
timestep_returns = np.zeros(env.spec.max_episode_steps)
timestep_counts = np.zeros(env.spec.max_episode_steps)
# calculate baselines
def calculate_baselines(returns):
# use simple timestep-dependent average as baseline
timestep_returns[:len(returns)] += returns
timestep_counts[:len(returns)] += 1
baselines = timestep_returns[:len(returns)] / timestep_counts[:len(returns)]
return baselines
# calculate advantages
def calculate_advantages(returns, baselines):
# calculate advantages
advantages = returns - baselines
# normalize advantages
advantages /= np.std(advantages) + 0.000001
return advantages
```
```python
# reset statistics
episode_rewards = []
episode_lengths = []
# do 100 episodes
for i in range(100):
states = []
actions = []
rewards = []
dones = []
episode_reward = 0
episode_length = 0
# collect a trajectory
state = env.reset()
done = False
while not done:
# predict action probabilities from state
p = model.predict_on_batch(state[np.newaxis])
# sample action from probabilities
action = np.random.choice(env.action_space.n, p=p[0])
# log state and action
states.append(state)
actions.append(action)
# step environment
state, reward, done, info = env.step(action)
#env.render()
# log reward and done
rewards.append(reward)
dones.append(done)
# sum rewards per episode
episode_reward += reward
episode_length += 1
# record reward statistics
episode_rewards.append(episode_reward)
episode_lengths.append(episode_length)
print("Episode ", i + 1, "Episode reward:", episode_reward, "Episode length:", episode_length)
# calculate returns
returns = calculate_returns(rewards, dones)
# calculate baselines
baselines = calculate_baselines(returns)
# calculate advantages
advantages = calculate_advantages(returns, baselines)
# train the network, skip training if all advantages are zeros
if np.any(advantages):
model.train_on_batch(np.array(states), np.array(actions), sample_weight=advantages)
```
Episode 1 Episode reward: 25.0 Episode length: 25
Episode 2 Episode reward: 11.0 Episode length: 11
WARNING:tensorflow:From /home/tambet/miniconda3/envs/nn/lib/python3.7/site-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Episode 3 Episode reward: 20.0 Episode length: 20
Episode 4 Episode reward: 35.0 Episode length: 35
Episode 5 Episode reward: 33.0 Episode length: 33
Episode 6 Episode reward: 46.0 Episode length: 46
Episode 7 Episode reward: 45.0 Episode length: 45
Episode 8 Episode reward: 60.0 Episode length: 60
Episode 9 Episode reward: 26.0 Episode length: 26
Episode 10 Episode reward: 30.0 Episode length: 30
Episode 11 Episode reward: 45.0 Episode length: 45
Episode 12 Episode reward: 30.0 Episode length: 30
Episode 13 Episode reward: 38.0 Episode length: 38
Episode 14 Episode reward: 19.0 Episode length: 19
Episode 15 Episode reward: 69.0 Episode length: 69
Episode 16 Episode reward: 46.0 Episode length: 46
Episode 17 Episode reward: 41.0 Episode length: 41
Episode 18 Episode reward: 74.0 Episode length: 74
Episode 19 Episode reward: 54.0 Episode length: 54
Episode 20 Episode reward: 34.0 Episode length: 34
Episode 21 Episode reward: 54.0 Episode length: 54
Episode 22 Episode reward: 99.0 Episode length: 99
Episode 23 Episode reward: 130.0 Episode length: 130
Episode 24 Episode reward: 45.0 Episode length: 45
Episode 25 Episode reward: 200.0 Episode length: 200
Episode 26 Episode reward: 200.0 Episode length: 200
Episode 27 Episode reward: 79.0 Episode length: 79
Episode 28 Episode reward: 200.0 Episode length: 200
Episode 29 Episode reward: 48.0 Episode length: 48
Episode 30 Episode reward: 33.0 Episode length: 33
Episode 31 Episode reward: 200.0 Episode length: 200
Episode 32 Episode reward: 200.0 Episode length: 200
Episode 33 Episode reward: 159.0 Episode length: 159
Episode 34 Episode reward: 200.0 Episode length: 200
Episode 35 Episode reward: 200.0 Episode length: 200
Episode 36 Episode reward: 200.0 Episode length: 200
Episode 37 Episode reward: 165.0 Episode length: 165
Episode 38 Episode reward: 139.0 Episode length: 139
Episode 39 Episode reward: 187.0 Episode length: 187
Episode 40 Episode reward: 127.0 Episode length: 127
Episode 41 Episode reward: 194.0 Episode length: 194
Episode 42 Episode reward: 183.0 Episode length: 183
Episode 43 Episode reward: 200.0 Episode length: 200
Episode 44 Episode reward: 200.0 Episode length: 200
Episode 45 Episode reward: 200.0 Episode length: 200
Episode 46 Episode reward: 192.0 Episode length: 192
Episode 47 Episode reward: 200.0 Episode length: 200
Episode 48 Episode reward: 200.0 Episode length: 200
Episode 49 Episode reward: 172.0 Episode length: 172
Episode 50 Episode reward: 191.0 Episode length: 191
Episode 51 Episode reward: 200.0 Episode length: 200
Episode 52 Episode reward: 200.0 Episode length: 200
Episode 53 Episode reward: 200.0 Episode length: 200
Episode 54 Episode reward: 189.0 Episode length: 189
Episode 55 Episode reward: 200.0 Episode length: 200
Episode 56 Episode reward: 200.0 Episode length: 200
Episode 57 Episode reward: 196.0 Episode length: 196
Episode 58 Episode reward: 200.0 Episode length: 200
Episode 59 Episode reward: 179.0 Episode length: 179
Episode 60 Episode reward: 200.0 Episode length: 200
Episode 61 Episode reward: 200.0 Episode length: 200
Episode 62 Episode reward: 200.0 Episode length: 200
Episode 63 Episode reward: 200.0 Episode length: 200
Episode 64 Episode reward: 200.0 Episode length: 200
Episode 65 Episode reward: 200.0 Episode length: 200
Episode 66 Episode reward: 200.0 Episode length: 200
Episode 67 Episode reward: 200.0 Episode length: 200
Episode 68 Episode reward: 200.0 Episode length: 200
Episode 69 Episode reward: 200.0 Episode length: 200
Episode 70 Episode reward: 200.0 Episode length: 200
Episode 71 Episode reward: 200.0 Episode length: 200
Episode 72 Episode reward: 200.0 Episode length: 200
Episode 73 Episode reward: 200.0 Episode length: 200
Episode 74 Episode reward: 200.0 Episode length: 200
Episode 75 Episode reward: 200.0 Episode length: 200
Episode 76 Episode reward: 200.0 Episode length: 200
Episode 77 Episode reward: 163.0 Episode length: 163
Episode 78 Episode reward: 200.0 Episode length: 200
Episode 79 Episode reward: 200.0 Episode length: 200
Episode 80 Episode reward: 200.0 Episode length: 200
Episode 81 Episode reward: 200.0 Episode length: 200
Episode 82 Episode reward: 200.0 Episode length: 200
Episode 83 Episode reward: 200.0 Episode length: 200
Episode 84 Episode reward: 200.0 Episode length: 200
Episode 85 Episode reward: 200.0 Episode length: 200
Episode 86 Episode reward: 200.0 Episode length: 200
Episode 87 Episode reward: 200.0 Episode length: 200
Episode 88 Episode reward: 200.0 Episode length: 200
Episode 89 Episode reward: 200.0 Episode length: 200
Episode 90 Episode reward: 200.0 Episode length: 200
Episode 91 Episode reward: 200.0 Episode length: 200
Episode 92 Episode reward: 200.0 Episode length: 200
Episode 93 Episode reward: 200.0 Episode length: 200
Episode 94 Episode reward: 40.0 Episode length: 40
Episode 95 Episode reward: 172.0 Episode length: 172
Episode 96 Episode reward: 200.0 Episode length: 200
Episode 97 Episode reward: 200.0 Episode length: 200
Episode 98 Episode reward: 200.0 Episode length: 200
Episode 99 Episode reward: 200.0 Episode length: 200
Episode 100 Episode reward: 200.0 Episode length: 200
```python
# plot episode rewards and lengths
# because in this environment you get reward 1 for each timestep you are alive,
# then episode reward matches with episode length
plt.figure(figsize=(13, 5))
plt.subplot(1, 2, 1)
plt.plot(episode_rewards)
plt.title("Episode rewards")
plt.subplot(1, 2, 2)
plt.plot(episode_lengths)
plt.title("Episode lengths")
```
```python
# plot average timestep returns. remember that these are discounted!
plt.plot(timestep_returns / timestep_counts)
```
```python
# visualize one episode
state = env.reset()
done = False
env.render()
while not done:
p = model.predict(state[np.newaxis])
action = np.argmax(p[0])
state, reward, done, info = env.step(action)
env.render()
env.close()
```
## Final words
If you have access to environment dynamics function $P(s'|s,a)$ and reward function $R(s, a, s')$ and both are differentiable, you could theoretically optimize $E[R]$ explicitly. In practice there are some instability issues, for details see [these slides](http://rail.eecs.berkeley.edu/deeprlcourse/static/slides/lec-13.pdf) or [this paper](https://arxiv.org/abs/1510.09142). Still, it is a good idea to make use of environment model, whenever it is available (or learnable).
```python
```
```python
import networkx as nx
from networkx.algorithms.shortest_paths.weighted import dijkstra_path
from networkx.algorithms.shortest_paths.generic import shortest_path_length
import geopandas as gpd
from geopy.distance import geodesic
from shapely.geometry import Point, LineString
import geojson
import osmnx as ox
import pandas as pd
import numpy as np
import math
import json
import gurobipy as gp
from gurobipy import GRB
from tqdm import tqdm
from tqdm._tqdm_notebook import tqdm_notebook
tqdm_notebook.pandas()
from keplergl import KeplerGl
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
```
```python
zones = gpd.read_file('taxi_zones/taxi_zones.shp')
table = pd.read_csv('taxi+_zone_lookup.csv')
zones = zones.to_crs("EPSG:4326")
```
```python
def dist(pt1, pt2):
return geodesic(pt1, pt2).miles
```
```python
zones['Centroid'] = zones['geometry'].apply(lambda x: x.centroid)
# centroid coordinates as (lat, lon) tuples, the order expected by geopy's geodesic
x_list = [(lat, lon) for (lat, lon) in zones['Centroid'].apply(lambda x: (x.xy[1][0], x.xy[0][0]))]
pi_list = [geometry.area for geometry in zones['geometry']]
dQ = dist
epsilon = 0.5
delta = 1.1
```
## Greedy Geometric Spanner Algorithm
```python
G = nx.Graph()
n = len(x_list)
G.add_nodes_from(range(n))
weighted_edges = [(i, j, dQ(x_list[i], x_list[j])) for i in range(n) for j in range(n)]
G.add_weighted_edges_from(weighted_edges)
sorted_edges = sorted(weighted_edges, key=lambda x: x[2])
spanner = nx.Graph()
spanner.add_nodes_from(range(n))
stretch = 1.1
for edge in tqdm(sorted_edges):
source, target, weight = edge
if not nx.has_path(spanner, source, target):
spanner.add_weighted_edges_from([edge])
else:
spanner_weight = shortest_path_length(spanner, source, target, weight='weight')
if spanner_weight < weight * stretch:
continue
else:
spanner.add_weighted_edges_from([edge])
```
100%|██████████| 69169/69169 [01:26<00:00, 800.31it/s]
```python
# Verify the constructed graph spanner
for edge in weighted_edges:
source, target, weight = edge
spanner_weight = shortest_path_length(spanner, source, target, weight='weight')
if spanner_weight > weight * 1.1:
print('invalid')
```
```python
len(G.edges)
```
34716
```python
len(spanner.edges)
```
1373
```python
num_edge_list = []
for stretch in tqdm(np.arange(1, 2.01, 0.01)):
G = nx.Graph()
n = len(x_list)
G.add_nodes_from(range(n))
weighted_edges = [(i, j, dQ(x_list[i], x_list[j])) for i in range(n) for j in range(n)]
G.add_weighted_edges_from(weighted_edges)
sorted_edges = sorted(weighted_edges, key=lambda x: x[2])
spanner = nx.Graph()
spanner.add_nodes_from(range(n))
for edge in sorted_edges:
source, target, weight = edge
if not nx.has_path(spanner, source, target):
spanner.add_weighted_edges_from([edge])
else:
spanner_weight = shortest_path_length(spanner, source, target, weight='weight')
if spanner_weight < weight * stretch:
continue
else:
spanner.add_weighted_edges_from([edge])
num_edge_list.append(len(spanner.edges))
```
```python
plt.figure(figsize=(8, 6))
sns.lineplot(x=np.arange(1, 2.01, 0.01), y=num_edge_list)
plt.xlabel('Delta (stretch factor)', fontsize=16);
plt.ylabel('Number of edges in the spanner', fontsize=16);
```
## Graph Spanner Version for approximate optimal utility
\begin{align}
Minimize: \ &\sum_{x, z\in \mathcal{X}} \pi_x k_{xz} d_Q(x, z) \\
Subject\ to:\ & k_{xz} \leq e^{\frac{\epsilon}{\delta} d_G(x, x')} k_{x' z} & z\in \mathcal{X}, (x, x') \in E \tag{1} \\
& \sum_{z \in \mathcal{X}} k_{xz} = 1 & x\in\mathcal{X} \tag{2} \\
& k_{xz} \geq 0 &x,z\in \mathcal{X} \tag{3}
\end{align}
**The algorithm we used for constructing the graph spanner is from** <span style="color:gray">*Althöfer, I., Das, G., Dobkin, D. et al. On sparse spanners of weighted graphs. Discrete Comput Geom 9, 81–100 (1993). https://doi.org/10.1007/BF02189308*</span> **implemented by** <span style="color:gray">*Ao Qu, github: https://github.com/quao627/GeoDifferentialPrivacy*</span>
```python
def optql_graph_spanner(x_list, pi_list, spanner, dQ, epsilon=0.5):
print(f'Start building a linear program for {len(x_list)} locations...')
pre_prob = np.array(pi_list) / sum(pi_list) # normalize probability distribution
threshold = math.exp(epsilon / delta)
# define a model
model = gp.Model('OptQL')
# add variables accessed as (0, 0), (0, 1), (1, 1), ...
variables = model.addVars(n, n, lb=0.0, ub=1.0, name='k')
# set objective function
model.setObjective(gp.quicksum(pre_prob[i] * variables[i, j] * dQ(x_list[i], x_list[j]) \
for i in range(n) for j in range(n)), GRB.MINIMIZE)
# add constraints (1)
print('Adding differential privacy constraints...')
model.addConstrs(variables[i, k] <= pow(threshold, dQ(x_list[i], x_list[j])) * variables[j, k] \
for (i, j) in spanner.edges for k in range(n))
model.addConstrs(variables[i, k] <= pow(threshold, dQ(x_list[i], x_list[j])) * variables[j, k] \
for (j, i) in spanner.edges for k in range(n))
# add constraints (2)
print('Add probability sum constraints...')
model.addConstrs(gp.quicksum(variables.select(i, '*')) == 1 for i in range(n))
    # constraints (3) are already satisfied
# optimize the model
model.optimize()
# build a matrix to store the stochastic matrix
variables = model.getAttr('x', variables)
matrix = np.zeros((n, n))
for key, value in variables.items():
matrix[key] = value
# get post-process probability distribution
post_prob = pre_prob @ matrix
return matrix, pre_prob, post_prob
```
```python
p_matrix, pre_prob, post_prob = optql_graph_spanner(x_list, pi_list, spanner, dQ, epsilon=epsilon)
```
Start building a linear program for 263 locations...
Adding differential privacy constraints...
Add probability sum constraints...
Gurobi Optimizer version 9.1.1 build v9.1.1rc0 (mac64)
Thread count: 4 physical cores, 8 logical processors, using up to 8 threads
Optimize a model with 722461 rows, 69169 columns and 1236889 nonzeros
Model fingerprint: 0x478468b7
Coefficient statistics:
Matrix range [1e+00, 2e+02]
Objective range [3e-05, 7e-01]
Bounds range [1e+00, 1e+00]
RHS range [1e+00, 1e+00]
Concurrent LP optimizer: dual simplex and barrier
Showing barrier log only...
Presolve removed 138338 rows and 0 columns
Presolve removed 0 rows and 138338 columns
Presolve time: 2.30s
Presolved: 69169 rows, 653292 columns, 1306058 nonzeros
Ordering time: 0.40s
Barrier statistics:
Dense cols : 263
Free vars : 263
AA' NZ : 3.611e+05
Factor NZ : 5.201e+06 (roughly 300 MBytes of memory)
Factor Ops : 8.375e+08 (less than 1 second per iteration)
Threads : 3
Objective Residual
Iter Primal Dual Primal Dual Compl Time
0 -4.62418304e+04 -0.00000000e+00 8.89e-01 1.60e+01 3.08e-01 4s
1 -2.03107232e+03 8.91592160e+00 1.20e-02 2.63e-01 6.93e-03 4s
2 -1.07898758e+03 1.02132602e+01 5.27e-03 4.05e-02 2.14e-03 4s
3 -1.20914209e+02 1.04274830e+01 1.03e-04 9.41e-14 1.97e-04 5s
4 -8.51164205e+01 1.01167444e+01 7.44e-05 4.62e-14 1.43e-04 5s
5 -6.05486177e+01 9.76574648e+00 5.49e-05 6.04e-14 1.06e-04 5s
6 -5.17965529e+01 9.60765748e+00 4.76e-05 3.91e-14 9.22e-05 6s
7 -4.86841654e+01 9.49779956e+00 4.50e-05 4.09e-14 8.73e-05 6s
8 -2.55234257e+01 8.95106172e+00 2.62e-05 6.39e-14 5.18e-05 6s
9 -1.91086428e+01 8.45086998e+00 2.09e-05 4.97e-14 4.14e-05 7s
10 -1.28849806e+01 7.89753830e+00 1.57e-05 5.68e-14 3.12e-05 7s
11 -7.37732188e+00 7.35963528e+00 1.09e-05 5.86e-14 2.21e-05 7s
12 -4.38154612e+00 6.74337646e+00 8.11e-06 4.80e-14 1.67e-05 8s
13 -1.51677505e+00 5.76590589e+00 5.22e-06 7.11e-14 1.09e-05 8s
14 3.29424585e-01 5.03095719e+00 3.27e-06 5.51e-14 7.06e-06 8s
15 1.64508468e+00 4.46899586e+00 1.88e-06 4.26e-14 4.24e-06 9s
16 1.90931288e+00 4.21063538e+00 1.54e-06 3.55e-14 3.45e-06 9s
17 2.18350278e+00 3.99557685e+00 1.14e-06 4.26e-14 2.72e-06 9s
18 2.39777286e+00 3.79166799e+00 8.31e-07 3.91e-14 2.09e-06 9s
19 2.63037230e+00 3.50524344e+00 5.15e-07 4.62e-14 1.31e-06 10s
20 2.81528154e+00 3.33574956e+00 2.84e-07 5.33e-14 7.81e-07 10s
21 2.89618832e+00 3.25050559e+00 1.92e-07 6.22e-14 5.32e-07 11s
22 2.93811919e+00 3.21498000e+00 1.46e-07 3.73e-14 4.16e-07 11s
23 2.96482342e+00 3.18913781e+00 1.19e-07 3.20e-14 3.37e-07 11s
24 2.98494446e+00 3.17130297e+00 9.83e-08 3.55e-14 2.80e-07 11s
25 2.99118481e+00 3.15844699e+00 9.21e-08 4.80e-14 2.51e-07 12s
26 3.01801247e+00 3.13577287e+00 6.62e-08 3.91e-14 1.77e-07 12s
27 3.02961938e+00 3.12690380e+00 5.51e-08 5.33e-14 1.46e-07 12s
28 3.04219996e+00 3.12065376e+00 4.31e-08 6.75e-14 1.18e-07 13s
29 3.05074763e+00 3.11138769e+00 3.49e-08 5.51e-14 9.10e-08 13s
30 3.05992307e+00 3.10622606e+00 2.61e-08 6.04e-14 6.95e-08 13s
31 3.06435444e+00 3.10338445e+00 2.18e-08 6.93e-14 5.86e-08 14s
32 3.06829356e+00 3.10029914e+00 1.79e-08 9.24e-14 4.80e-08 14s
33 3.07090723e+00 3.09798948e+00 1.53e-08 9.06e-14 4.07e-08 15s
34 3.07429797e+00 3.09537409e+00 1.19e-08 1.74e-13 3.16e-08 15s
35 3.07633768e+00 3.09380124e+00 9.86e-09 1.40e-13 2.62e-08 15s
36 3.07861333e+00 3.09272578e+00 7.59e-09 8.35e-14 2.12e-08 16s
37 3.07985927e+00 3.09117210e+00 6.27e-09 6.39e-14 1.70e-08 16s
38 3.08074146e+00 3.09082195e+00 5.35e-09 9.77e-14 1.51e-08 16s
39 3.08133484e+00 3.08964740e+00 4.72e-09 1.74e-13 1.25e-08 17s
40 3.08174335e+00 3.08911877e+00 4.28e-09 2.45e-13 1.11e-08 17s
41 3.08242241e+00 3.08853849e+00 3.55e-09 9.77e-14 9.18e-09 17s
42 3.08302138e+00 3.08819439e+00 2.90e-09 9.24e-14 7.76e-09 18s
43 3.08314112e+00 3.08767644e+00 2.77e-09 2.81e-13 6.81e-09 18s
44 3.08368264e+00 3.08744793e+00 2.20e-09 3.69e-13 5.65e-09 18s
45 3.08390572e+00 3.08728535e+00 1.95e-09 3.94e-13 5.07e-09 19s
46 3.08417990e+00 3.08716516e+00 1.66e-09 3.36e-13 4.48e-09 19s
47 3.08436089e+00 3.08686723e+00 1.46e-09 1.78e-13 3.76e-09 19s
48 3.08448817e+00 3.08666724e+00 1.32e-09 1.08e-13 3.27e-09 20s
49 3.08462529e+00 3.08649902e+00 1.17e-09 1.28e-13 2.81e-09 20s
50 3.08482102e+00 3.08642634e+00 9.64e-10 1.03e-13 2.41e-09 21s
51 3.08484816e+00 3.08633061e+00 9.33e-10 1.14e-13 2.23e-09 21s
52 3.08505572e+00 3.08625352e+00 7.05e-10 1.55e-13 1.80e-09 21s
53 3.08516027e+00 3.08620709e+00 5.86e-10 2.03e-13 1.57e-09 22s
54 3.08521066e+00 3.08615239e+00 5.31e-10 2.11e-13 1.41e-09 22s
55 3.08524382e+00 3.08612420e+00 4.93e-10 1.81e-13 1.32e-09 22s
56 3.08530695e+00 3.08607319e+00 4.20e-10 3.55e-13 1.15e-09 23s
57 3.08534179e+00 3.08600347e+00 3.79e-10 3.20e-13 9.93e-10 23s
58 3.08536132e+00 3.08599556e+00 3.58e-10 3.11e-13 9.52e-10 24s
59 3.08540200e+00 3.08594623e+00 3.11e-10 2.93e-13 8.17e-10 24s
60 3.08542995e+00 3.08590885e+00 2.79e-10 4.03e-13 7.19e-10 25s
61 3.08545441e+00 3.08588562e+00 2.52e-10 4.39e-13 6.47e-10 25s
62 3.08549423e+00 3.08586187e+00 2.07e-10 9.04e-13 5.52e-10 26s
63 3.08551089e+00 3.08581835e+00 1.89e-10 3.55e-13 4.62e-10 26s
64 3.08553288e+00 3.08580077e+00 1.64e-10 2.91e-13 4.02e-10 26s
65 3.08555444e+00 3.08578354e+00 1.39e-10 3.43e-13 3.44e-10 27s
66 3.08558973e+00 3.08577050e+00 9.92e-11 2.13e-13 2.71e-10 27s
67 3.08559745e+00 3.08576519e+00 9.06e-11 3.13e-13 2.52e-10 27s
68 3.08560798e+00 3.08574408e+00 7.89e-11 4.05e-13 2.04e-10 28s
69 3.08561403e+00 3.08573579e+00 7.21e-11 5.29e-13 1.83e-10 28s
70 3.08562321e+00 3.08572551e+00 6.19e-11 5.72e-13 1.54e-10 28s
71 3.08563147e+00 3.08571854e+00 5.28e-11 4.17e-13 1.31e-10 29s
72 3.08563911e+00 3.08570914e+00 4.45e-11 4.73e-13 1.05e-10 29s
73 3.08564370e+00 3.08570375e+00 3.94e-11 5.10e-13 9.01e-11 30s
74 3.08565650e+00 3.08569947e+00 2.52e-11 6.04e-13 6.45e-11 30s
75 3.08566103e+00 3.08569637e+00 2.02e-11 1.10e-12 5.30e-11 31s
76 3.08566512e+00 3.08569245e+00 1.58e-11 1.17e-12 4.10e-11 31s
77 3.08566884e+00 3.08569041e+00 1.18e-11 3.76e-12 3.24e-11 32s
78 3.08567096e+00 3.08568756e+00 1.04e-11 3.77e-12 2.49e-11 32s
79 3.08567303e+00 3.08568598e+00 7.63e-12 3.10e-12 1.94e-11 32s
80 3.08567420e+00 3.08568468e+00 6.46e-12 2.76e-12 1.57e-11 33s
81 3.08567469e+00 3.08568345e+00 5.89e-12 2.67e-12 1.31e-11 33s
82 3.08567805e+00 3.08568184e+00 5.63e-12 2.53e-12 5.68e-12 34s
83 3.08567843e+00 3.08568128e+00 4.61e-12 2.45e-12 4.27e-12 34s
84 3.08567869e+00 3.08568091e+00 8.13e-12 3.37e-12 3.34e-12 34s
85 3.08567879e+00 3.08568086e+00 1.06e-11 3.24e-12 3.10e-12 35s
86 3.08567888e+00 3.08568083e+00 8.85e-12 3.18e-12 2.93e-12 35s
87 3.08567894e+00 3.08568073e+00 1.27e-11 2.90e-12 2.69e-12 35s
88 3.08567911e+00 3.08568059e+00 1.75e-11 2.35e-12 2.22e-12 35s
89 3.08567924e+00 3.08568054e+00 1.94e-11 6.79e-12 1.96e-12 36s
90 3.08567938e+00 3.08568035e+00 2.29e-11 9.74e-11 1.46e-12 36s
91 3.08567941e+00 3.08568027e+00 2.12e-11 8.15e-11 1.29e-12 36s
92 3.08567952e+00 3.08568010e+00 1.59e-11 5.79e-10 8.60e-13 36s
93 3.08567965e+00 3.08567980e+00 1.79e-11 4.70e-11 2.19e-13 37s
94 3.08567966e+00 3.08567978e+00 1.14e-11 1.36e-11 1.68e-13 37s
95 3.08567977e+00 3.08567977e+00 1.24e-09 8.53e-10 1.32e-14 37s
Barrier solved model in 95 iterations and 37.27 seconds
Optimal objective 3.08567977e+00
Crossover log...
12710 DPushes remaining with DInf 1.8303188e-01 38s
3334 DPushes remaining with DInf 1.7529785e-01 40s
1371 DPushes remaining with DInf 1.7289629e-01 45s
477 DPushes remaining with DInf 1.6723214e-01 51s
127 DPushes remaining with DInf 1.6104527e-01 55s
0 DPushes remaining with DInf 1.3116419e-01 58s
300326 PPushes remaining with PInf 4.0751134e-02 58s
199720 PPushes remaining with PInf 5.1742165e-03 65s
172453 PPushes remaining with PInf 1.8943377e-03 70s
123212 PPushes remaining with PInf 1.0140784e-03 73s
103271 PPushes remaining with PInf 5.9071235e-04 76s
67155 PPushes remaining with PInf 1.8196838e-04 82s
45245 PPushes remaining with PInf 1.3676637e-04 85s
22987 PPushes remaining with PInf 5.3076216e-05 91s
9315 PPushes remaining with PInf 3.5564546e-05 96s
2108 PPushes remaining with PInf 6.5698968e+00 100s
0 PPushes remaining with PInf 4.5676605e+00 103s
Push phase complete: Pinf 4.5676605e+00, Dinf 9.3034723e+02 103s
Iteration Objective Primal Inf. Dual Inf. Time
312288 3.0856766e+00 0.000000e+00 9.303472e+02 103s
312892 3.0856837e+00 0.000000e+00 8.219638e+02 109s
313496 3.0856898e+00 0.000000e+00 9.265972e+01 115s
314100 3.0856914e+00 0.000000e+00 6.867347e-02 120s
314704 3.0856916e+00 0.000000e+00 2.317480e-03 126s
314948 3.0856796e+00 1.213708e-03 0.000000e+00 130s
Extra simplex iterations from dual to original model: 1
315071 3.0856796e+00 0.000000e+00 0.000000e+00 134s
Solved with barrier
Solved in 315071 iterations and 134.28 seconds
Optimal objective 3.085679569e+00
```python
plt.figure(figsize=(12, 8))
sns.heatmap(p_matrix, vmin=0, vmax=0.3)
```
```python
edges = [(x_list[i], x_list[j]) for i, j in spanner.edges]
edges = list(zip(*edges))
edges = pd.DataFrame({'Source_lon': list(zip(*edges[0]))[1],
'Source_lat': list(zip(*edges[0]))[0],
'Target_lon': list(zip(*edges[1]))[1],
'Target_lat': list(zip(*edges[1]))[0]})
map1 = KeplerGl()
prob_diff = post_prob - pre_prob
tmp = zones.copy(deep=True).drop('Centroid', axis=1)
tmp['prob_diff'] = prob_diff
tmp['pre_prob'] = pre_prob
tmp['post_prob'] = post_prob
equalizer = ox.geocode_to_gdf('Beijing, China').iloc[0, 0]
tmp.append(pd.Series(), ignore_index=True)
tmp.loc[len(tmp), 'geometry'] = equalizer
tmp.loc[len(tmp)-1, 'pre_prob'] = max(pre_prob.max(), post_prob.max())
tmp.loc[len(tmp)-2, 'post_prob'] = max(pre_prob.max(), post_prob.max())
map1.add_data(tmp, name='Differential Privacy on Boston Postal Zones')
map1.add_data(edges, name='Edges selected by the spanner')
with open('spanner_comparison_map_config.json', 'r') as f:
config = json.load(f)
map1.config = config
map1
```
User Guide: https://docs.kepler.gl/docs/keplergl-jupyter
KeplerGl(config={'version': 'v1', 'config': {'visState': {'filters': [], 'layers': [{'id': 'oj0unz', 'type': '…
```python
with open('spanner_comparison_map_config.json', 'w') as f:
json.dump(map1.config, f)
```
## Experiments on NYC taxi data
Please replace the data directory with any taxi trip data released by the NYC TLC (https://www1.nyc.gov/site/tlc/about/tlc-trip-record-data.page).
```python
trips = pd.read_csv('yellow_tripdata_2010-07.csv')
```
```python
def encrypt_trip(pickup_point, dropoff_point, p_matrix):
n = p_matrix.shape[0]
for index, row in zones[['zone', 'geometry']].iterrows():
geometry = row['geometry']
if pickup_point.within(geometry):
pickup_zone = row['zone']
pickup_index = index
if dropoff_point.within(geometry):
dropoff_zone = row['zone']
dropoff_index = index
encrypted_pickup_index = np.random.choice(range(n), p=p_matrix[pickup_index])
encrypted_dropoff_index = np.random.choice(range(n), p=p_matrix[dropoff_index])
encrypted_pickup_zone = zones.loc[encrypted_pickup_index, 'zone']
encrypted_dropoff_zone = zones.loc[encrypted_dropoff_index, 'zone']
print(f"Original OD Pair: {pickup_zone}, {dropoff_zone}, Encryped OD Pair: {encrypted_pickup_zone}, {encrypted_dropoff_zone}")
return encrypted_pickup_zone, encrypted_dropoff_zone
```
```python
pickup_latitude, pickup_longitude, dropoff_latitude, dropoff_longitude = tuple(trips.loc[1, ['pickup_latitude', 'pickup_longitude', 'dropoff_latitude', 'dropoff_longitude']])
pickup_point = Point([pickup_longitude, pickup_latitude])
dropoff_point = Point([dropoff_longitude, dropoff_latitude])
encrypt_trip(pickup_point, dropoff_point, p_matrix)
```
Original OD Pair: Clinton West, Clinton West, Encrypted OD Pair: Lenox Hill East, UN/Turtle Bay South
('Lenox Hill East', 'UN/Turtle Bay South')
```python
def find_zone(lon, lat):
point = Point([lon, lat])
for index, zone in enumerate(zones['geometry']):
if point.within(zone):
return index
return None
pickup_zones = trips.progress_apply(lambda x: find_zone(x['pickup_longitude'], x['pickup_latitude']), axis=1)
```
HBox(children=(FloatProgress(value=0.0, max=14656519.0), HTML(value='')))
# Questions & Concerns
* How should we determine the prior distribution for the linear program?
* The zones assigned by TLC are probably not fine-grained enough.
* Are there things we need to do to verify the differential privacy algorithm? e.g., post-distribution, data utility
* What is the next step after applying this algorithm to the NYC taxi dataset?
# Next Step:
* Demand Prediction (OD demand) -> Compare difference
```python
```
|
11ad0b34d7f8e34ba7c78862c05daf4bbe9ebd6f
| 155,071 |
ipynb
|
Jupyter Notebook
|
NewYorkTaxi/.ipynb_checkpoints/Greedy_Graph_Spanner-checkpoint.ipynb
|
quao627/GeoDifferentialPrivacy
|
fab0973c7d038cf6dcaa3fc3f00ecde742ae6700
|
[
"MIT"
] | 1 |
2021-06-08T22:49:13.000Z
|
2021-06-08T22:49:13.000Z
|
NewYorkTaxi/Greedy_Graph_Spanner.ipynb
|
quao627/GeoDifferentialPrivacy
|
fab0973c7d038cf6dcaa3fc3f00ecde742ae6700
|
[
"MIT"
] | null | null | null |
NewYorkTaxi/Greedy_Graph_Spanner.ipynb
|
quao627/GeoDifferentialPrivacy
|
fab0973c7d038cf6dcaa3fc3f00ecde742ae6700
|
[
"MIT"
] | null | null | null | 207.037383 | 105,608 | 0.889457 | true | 8,839 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.760651 | 0.70253 | 0.53438 |
__label__eng_Latn
| 0.188351 | 0.079873 |
Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Lorena A. Barba, Gilbert F. Forsyth 2015. Thanks to NSF for support via CAREER award #1149784.
[@LorenaABarba](https://twitter.com/LorenaABarba)
12 steps to Navier-Stokes
=====
***
We continue our journey to solve the Navier-Stokes equation with Step 4. But don't continue unless you have completed the previous steps! In fact, this next step will be a combination of the two previous ones. The wonders of *code reuse*!
Step 4: Burgers' Equation
----
***
You can read about Burgers' Equation on its [wikipedia page](http://en.wikipedia.org/wiki/Burgers'_equation).
Burgers' equation in one spatial dimension looks like this:
$$\frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} = \nu \frac{\partial ^2u}{\partial x^2}$$
As you can see, it is a combination of non-linear convection and diffusion. It is surprising how much you learn from this neat little equation!
We can discretize it using the methods we've already detailed in Steps [1](http://nbviewer.ipython.org/urls/github.com/barbagroup/CFDPython/blob/master/lessons/01_Step_1.ipynb) to [3](http://nbviewer.ipython.org/urls/github.com/barbagroup/CFDPython/blob/master/lessons/04_Step_3.ipynb). Using forward difference for time, backward difference for space and our 2nd-order method for the second derivatives yields:
$$\frac{u_i^{n+1}-u_i^n}{\Delta t} + u_i^n \frac{u_i^n - u_{i-1}^n}{\Delta x} = \nu \frac{u_{i+1}^n - 2u_i^n + u_{i-1}^n}{\Delta x^2}$$
As before, once we have an initial condition, the only unknown is $u_i^{n+1}$. We will step in time as follows:
$$u_i^{n+1} = u_i^n - u_i^n \frac{\Delta t}{\Delta x} (u_i^n - u_{i-1}^n) + \nu \frac{\Delta t}{\Delta x^2}(u_{i+1}^n - 2u_i^n + u_{i-1}^n)$$
### Initial and Boundary Conditions
To examine some interesting properties of Burgers' equation, it is helpful to use different initial and boundary conditions than we've been using for previous steps.
Our initial condition for this problem is going to be:
\begin{eqnarray}
u &=& -\frac{2 \nu}{\phi} \frac{\partial \phi}{\partial x} + 4 \\\
\phi &=& \exp \bigg(\frac{-x^2}{4 \nu} \bigg) + \exp \bigg(\frac{-(x-2 \pi)^2}{4 \nu} \bigg)
\end{eqnarray}
This has an analytical solution, given by:
\begin{eqnarray}
u &=& -\frac{2 \nu}{\phi} \frac{\partial \phi}{\partial x} + 4 \\\
\phi &=& \exp \bigg(\frac{-(x-4t)^2}{4 \nu (t+1)} \bigg) + \exp \bigg(\frac{-(x-4t -2 \pi)^2}{4 \nu(t+1)} \bigg)
\end{eqnarray}
Our boundary condition will be:
$$u(0) = u(2\pi)$$
This is called a *periodic* boundary condition. Pay attention! This will cause you a bit of headache if you don't tread carefully.
### Saving Time with SymPy
The initial condition we're using for Burgers' Equation can be a bit of a pain to evaluate by hand. The derivative $\frac{\partial \phi}{\partial x}$ isn't too terribly difficult, but it would be easy to drop a sign or forget a factor of $x$ somewhere, so we're going to use SymPy to help us out.
[SymPy](http://sympy.org/en/) is the symbolic math library for Python. It has a lot of the same symbolic math functionality as Mathematica with the added benefit that we can easily translate its results back into our Python calculations (it is also free and open source).
Start by loading the SymPy library, together with our favorite library, NumPy.
```python
import numpy
import sympy
```
We're also going to tell SymPy that we want all of its output to be rendered using $\LaTeX$. This will make our Notebook beautiful!
```python
from sympy import init_printing
init_printing(use_latex=True)
```
Start by setting up symbolic variables for the three variables in our initial condition and then type out the full equation for $\phi$. We should get a nicely rendered version of our $\phi$ equation.
```python
x, nu, t = sympy.symbols('x nu t')
phi = sympy.exp(-(x-4*t)**2/(4*nu*(t+1))) + sympy.exp(-(x-4*t-2*numpy.pi)**2/(4*nu*(t+1)))
phi
```
It's maybe a little small, but that looks right. Now to evaluate our partial derivative $\frac{\partial \phi}{\partial x}$ is a trivial task.
```python
phiprime = phi.diff(x)
phiprime
```
If you want to see the unrendered version, just use the Python print command.
```python
print(phiprime)
```
-(-8*t + 2*x)*exp(-(-4*t + x)**2/(4*nu*(t + 1)))/(4*nu*(t + 1)) - (-8*t + 2*x - 12.5663706143592)*exp(-(-4*t + x - 6.28318530717959)**2/(4*nu*(t + 1)))/(4*nu*(t + 1))
### Now what?
Now that we have the Pythonic version of our derivative, we can finish writing out the full initial condition equation and then translate it into a usable Python expression. For this, we'll use the *lambdify* function, which takes a SymPy symbolic equation and turns it into a callable function.
```python
from sympy.utilities.lambdify import lambdify
u = -2*nu*(phiprime/phi)+4
print(u)
```
-2*nu*(-(-8*t + 2*x)*exp(-(-4*t + x)**2/(4*nu*(t + 1)))/(4*nu*(t + 1)) - (-8*t + 2*x - 12.5663706143592)*exp(-(-4*t + x - 6.28318530717959)**2/(4*nu*(t + 1)))/(4*nu*(t + 1)))/(exp(-(-4*t + x - 6.28318530717959)**2/(4*nu*(t + 1))) + exp(-(-4*t + x)**2/(4*nu*(t + 1)))) + 4
### Lambdify
To lambdify this expression into a useable function, we tell lambdify which variables to request and the function we want to plug them in to.
```python
ufunc = lambdify((t, x, nu), u)
print(ufunc(1,4,3))
```
3.4917066420644494
### Back to Burgers' Equation
Now that we have the initial conditions set up, we can proceed and finish setting up the problem. We can generate the plot of the initial condition using our lambdify-ed function.
```python
from matplotlib import pyplot
%matplotlib inline
###variable declarations
nx = 101
nt = 100
dx = 2*numpy.pi/(nx-1)
nu = .07
dt = dx*nu
x = numpy.linspace(0, 2*numpy.pi, nx)
#u = numpy.empty(nx)
un = numpy.empty(nx)
t = 0
u = numpy.asarray([ufunc(t, x0, nu) for x0 in x])
u
```
array([ 4. , 4.06283185, 4.12566371, 4.18849556, 4.25132741,
4.31415927, 4.37699112, 4.43982297, 4.50265482, 4.56548668,
4.62831853, 4.69115038, 4.75398224, 4.81681409, 4.87964594,
4.9424778 , 5.00530965, 5.0681415 , 5.13097336, 5.19380521,
5.25663706, 5.31946891, 5.38230077, 5.44513262, 5.50796447,
5.57079633, 5.63362818, 5.69646003, 5.75929189, 5.82212374,
5.88495559, 5.94778745, 6.0106193 , 6.07345115, 6.136283 ,
6.19911486, 6.26194671, 6.32477856, 6.38761042, 6.45044227,
6.51327412, 6.57610598, 6.63893783, 6.70176967, 6.76460125,
6.82742866, 6.89018589, 6.95176632, 6.99367964, 6.72527549,
4. , 1.27472451, 1.00632036, 1.04823368, 1.10981411,
1.17257134, 1.23539875, 1.29823033, 1.36106217, 1.42389402,
1.48672588, 1.54955773, 1.61238958, 1.67522144, 1.73805329,
1.80088514, 1.863717 , 1.92654885, 1.9893807 , 2.05221255,
2.11504441, 2.17787626, 2.24070811, 2.30353997, 2.36637182,
2.42920367, 2.49203553, 2.55486738, 2.61769923, 2.68053109,
2.74336294, 2.80619479, 2.86902664, 2.9318585 , 2.99469035,
3.0575222 , 3.12035406, 3.18318591, 3.24601776, 3.30884962,
3.37168147, 3.43451332, 3.49734518, 3.56017703, 3.62300888,
3.68584073, 3.74867259, 3.81150444, 3.87433629, 3.93716815, 4. ])
```python
pyplot.figure(figsize=(11,7), dpi=100)
pyplot.plot(x,u, marker='o', lw=2)
pyplot.xlim([0,2*numpy.pi])
pyplot.ylim([0,10]);
```
This is definitely not the hat function we've been dealing with until now. We call it a "saw-tooth function". Let's proceed forward and see what happens.
### Periodic Boundary Conditions
One of the big differences between Step 4 and the previous lessons is the use of *periodic* boundary conditions. If you experiment with Steps 1 and 2 and make the simulation run longer (by increasing `nt`) you will notice that the wave will keep moving to the right until it no longer even shows up in the plot.
With periodic boundary conditions, when a point gets to the right-hand side of the frame, it *wraps around* back to the front of the frame.
Recall the discretization that we worked out at the beginning of this notebook:
$$u_i^{n+1} = u_i^n - u_i^n \frac{\Delta t}{\Delta x} (u_i^n - u_{i-1}^n) + \nu \frac{\Delta t}{\Delta x^2}(u_{i+1}^n - 2u_i^n + u_{i-1}^n)$$
What does $u_{i+1}^n$ *mean* when $i$ is already at the end of the frame?
Think about this for a minute before proceeding.
```python
for n in range(nt):
un = u.copy()
for i in range(1, nx-1):
u[i] = un[i] - un[i] * dt/dx *(un[i] - un[i-1]) + nu*dt/dx**2*\
(un[i+1]-2*un[i]+un[i-1])
u[0] = un[0] - un[0] * dt/dx * (un[0] - un[-2]) + nu*dt/dx**2*\
(un[1]-2*un[0]+un[-2])
u[-1] = un[-1] - un[-1] * dt/dx * (un[-1] - un[-2]) + nu*dt/dx**2*\
(un[0]-2*un[-1]+un[-2])
u_analytical = numpy.asarray([ufunc(nt*dt, xi, nu) for xi in x])
```
```python
pyplot.figure(figsize=(11,7), dpi=100)
pyplot.plot(x,u, marker='o', lw=2, label='Computational')
pyplot.plot(x, u_analytical, label='Analytical')
pyplot.xlim([0,2*numpy.pi])
pyplot.ylim([0,10])
pyplot.legend();
```
***
What next?
----
The subsequent steps, from 5 to 12, will be in two dimensions. But it is easy to extend the 1D finite-difference formulas to the partial derivatives in 2D or 3D. Just apply the definition — a partial derivative with respect to $x$ is the variation in the $x$ direction *while keeping $y$ constant*.
Before moving on to [Step 5](http://nbviewer.ipython.org/urls/github.com/barbagroup/CFDPython/blob/master/lessons/07_Step_5.ipynb), make sure you have completed your own code for steps 1 through 4 and you have experimented with the parameters and thought about what is happening. Also, we recommend that you take a slight break to learn about [array operations with NumPy](http://nbviewer.ipython.org/urls/github.com/barbagroup/CFDPython/blob/master/lessons/07_Step_5.ipynb).
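As a small preview of what those array operations buy you (this sketch is ours, not part of the original lesson): on a purely periodic grid *without* the duplicated endpoint, the inner loop and the two separate boundary lines above collapse into a single vectorized update using `numpy.roll`.

```python
# Assumes un is a 1D array on a periodic grid WITHOUT the duplicated
# endpoint, and that dt, dx and nu are defined as above.
u_vectorized = (un
                - un * dt / dx * (un - numpy.roll(un, 1))
                + nu * dt / dx**2 * (numpy.roll(un, -1) - 2 * un + numpy.roll(un, 1)))
```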
```python
from IPython.core.display import HTML
def css_styling():
styles = open("../styles/custom.css", "r").read()
return HTML(styles)
css_styling()
```
<link href='http://fonts.googleapis.com/css?family=Fenix' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Alegreya+Sans:100,300,400,500,700,800,900,100italic,300italic,400italic,500italic,700italic,800italic,900italic' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Source+Code+Pro:300,400' rel='stylesheet' type='text/css'>
<style>
@font-face {
font-family: "Computer Modern";
src: url('http://mirrors.ctan.org/fonts/cm-unicode/fonts/otf/cmunss.otf');
}
div.cell{
width:800px;
margin-left:16% !important;
margin-right:auto;
}
h1 {
font-family: 'Alegreya Sans', sans-serif;
}
h2 {
font-family: 'Fenix', serif;
}
h3{
font-family: 'Fenix', serif;
margin-top:12px;
margin-bottom: 3px;
}
h4{
font-family: 'Fenix', serif;
}
h5 {
font-family: 'Alegreya Sans', sans-serif;
}
div.text_cell_render{
font-family: 'Alegreya Sans',Computer Modern, "Helvetica Neue", Arial, Helvetica, Geneva, sans-serif;
line-height: 135%;
font-size: 120%;
width:600px;
margin-left:auto;
margin-right:auto;
}
.CodeMirror{
font-family: "Source Code Pro";
font-size: 90%;
}
/* .prompt{
display: None;
}*/
.text_cell_render h1 {
font-weight: 200;
font-size: 50pt;
line-height: 100%;
color:#CD2305;
margin-bottom: 0.5em;
margin-top: 0.5em;
display: block;
}
.text_cell_render h5 {
font-weight: 300;
font-size: 16pt;
color: #CD2305;
font-style: italic;
margin-bottom: .5em;
margin-top: 0.5em;
display: block;
}
.warning{
color: rgb( 240, 20, 20 )
}
</style>
|
797b64f15adc8d7f038421f868ce7be8d2529ed7
| 67,456 |
ipynb
|
Jupyter Notebook
|
lessons/05_Step_4.ipynb
|
Wallace-dyfq/CFD-Julia-12-steps-to-Navier-Stokes
|
ab9229f22178b29e85a38f8fb02d0b4678b53129
|
[
"CC-BY-3.0"
] | 33 |
2017-04-10T23:10:21.000Z
|
2022-02-05T00:35:56.000Z
|
lessons/05_Step_4.ipynb
|
Wallace-dyfq/CFD-Julia-12-steps-to-Navier-Stokes
|
ab9229f22178b29e85a38f8fb02d0b4678b53129
|
[
"CC-BY-3.0"
] | 1 |
2020-07-04T12:48:28.000Z
|
2020-07-04T12:48:28.000Z
|
lessons/05_Step_4.ipynb
|
Wallace-dyfq/CFD-Julia-12-steps-to-Navier-Stokes
|
ab9229f22178b29e85a38f8fb02d0b4678b53129
|
[
"CC-BY-3.0"
] | 16 |
2018-01-26T19:19:21.000Z
|
2022-03-26T21:35:16.000Z
| 107.757188 | 24,568 | 0.807563 | true | 4,094 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.870597 | 0.849971 | 0.739983 |
__label__eng_Latn
| 0.876793 | 0.557559 |
# Optimization of a State-to-State Transfer in a Lambda System in the RWA
```python
# NBVAL_IGNORE_OUTPUT
%load_ext watermark
import os
import numpy as np
import scipy
import matplotlib
import matplotlib.pylab as plt
import krotov
import qutip
from qutip import Qobj
%watermark -v --iversions
```
Python implementation: CPython
Python version : 3.8.1
IPython version : 7.24.1
matplotlib: 3.4.2
krotov : 1.2.1+dev
numpy : 1.20.3
qutip : 4.6.1
scipy : 1.6.3
$\newcommand{tr}[0]{\operatorname{tr}}
\newcommand{diag}[0]{\operatorname{diag}}
\newcommand{abs}[0]{\operatorname{abs}}
\newcommand{pop}[0]{\operatorname{pop}}
\newcommand{aux}[0]{\text{aux}}
\newcommand{opt}[0]{\text{opt}}
\newcommand{tgt}[0]{\text{tgt}}
\newcommand{init}[0]{\text{init}}
\newcommand{lab}[0]{\text{lab}}
\newcommand{rwa}[0]{\text{rwa}}
\newcommand{bra}[1]{\langle#1\vert}
\newcommand{ket}[1]{\vert#1\rangle}
\newcommand{Bra}[1]{\left\langle#1\right\vert}
\newcommand{Ket}[1]{\left\vert#1\right\rangle}
\newcommand{Braket}[2]{\left\langle #1\vphantom{#2}\mid{#2}\vphantom{#1}\right\rangle}
\newcommand{ketbra}[2]{\vert#1\rangle\!\langle#2\vert}
\newcommand{op}[1]{\hat{#1}}
\newcommand{Op}[1]{\hat{#1}}
\newcommand{dd}[0]{\,\text{d}}
\newcommand{Liouville}[0]{\mathcal{L}}
\newcommand{DynMap}[0]{\mathcal{E}}
\newcommand{identity}[0]{\mathbf{1}}
\newcommand{Norm}[1]{\lVert#1\rVert}
\newcommand{Abs}[1]{\left\vert#1\right\vert}
\newcommand{avg}[1]{\langle#1\rangle}
\newcommand{Avg}[1]{\left\langle#1\right\rangle}
\newcommand{AbsSq}[1]{\left\vert#1\right\vert^2}
\newcommand{Re}[0]{\operatorname{Re}}
\newcommand{Im}[0]{\operatorname{Im}}$
This example illustrates the use of complex-valued control fields. This is
accomplished by rewriting the Hamiltonian as the sum of two independent
controls (real and imaginary parts). We consider a 3-level system in a
$\Lambda$ configuration, and seek control pulses that implement a
(phase-sensitive) state-to-state transition $\ket{1} \rightarrow \ket{3}$.
## The rotating wave Hamiltonian
The system consists of three levels $\ket{1}$, $\ket{2}$ and $\ket{3}$ with
energy levels $E_{1}, E_{2}$ and $E_{3}$ which interact with a pair of laser
pulses $\epsilon_{P}(t)$ ("pump laser") and $\epsilon_{S}(t)$ ("Stokes laser"),
respectively, see Chapter 15.4.2 in ["Introduction to Quantum Mechanics: A
Time-Dependent Perspective" by David Tannor][Tannor] for details.
[Tannor]: http://www.weizmann.ac.il/chemphys/tannor/Book/
In the lab frame, the Hamiltonian reads
$$
\Op{H}_{\text{lab}} = \begin{pmatrix}
E_1 & -\mu_{12} \epsilon_P(t) & 0 \\
-\mu_{12} \epsilon_P(t) & E_2 & - \mu_{23} \epsilon_S(t) \\
0 & -\mu_{23} \epsilon_S(t) & E_2
\end{pmatrix}\,.
$$
with the dipole values $\mu_{12}$, $\mu_{23}$ describing the coupling to the
(real-valued) control fields $\epsilon_P(t)$, $\epsilon_S(t)$. The "rotating
frame" is defined as
$$\ket{\Psi_{\text{rot}}} = \Op{U}_0^\dagger \ket{\Psi_{\text{lab}}}$$
with the transformation
$$\op{U}_{0} = \ketbra{1}{1}
e^{-i\left(E_{2} - \omega_{P} \right)t} + \ketbra{2}{2} e^{-iE_{2}t} +
\ketbra{3}{3} e^{-i\left(E_{2}-\omega_{S}\right)t}\,,$$
where $\omega_{P}$ and $\omega_{S}$ are the two central frequencies defining
the rotating frame.
The condition of having to fulfill the Schrödinger equation in the rotating
frame implies a rotating frame Hamiltonian defined as
$$\op{H}_{\text{rot}} = \op{U}_{0}^{\dagger} \op{H}_{\text{lab}} \op{U}_{0} - i \op{U}_{0}^{\dagger} \dot{\op{U}}_{0}\,.$$
Note that most textbooks use $\Op{U}$ instead of $\Op{U}^\dagger$, and thus the
adjoint of the above equation to define the rotating frame transformation, but
we follow the example of Tannor's book here.
The rotating frame Hamiltonian reads
$$
\Op{H}_\text{rot} = \begin{pmatrix}
E_1 + \omega_P - E_2 & -\mu_{12} \epsilon_P(t) e^{-i \omega_P t} & 0 \\
-\mu_{12} \epsilon_P(t) e^{+i \omega_P t} & 0 & - \mu_{23} \epsilon_S(t) e^{-i \omega_S t}\\
0 & -\mu_{23} \epsilon_S(t) e^{+i \omega_S t} & E3 + \omega_S -E_2
\end{pmatrix}\,.
$$
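As a quick sanity check (our own addition, not part of the original derivation), the matrix above can be reproduced symbolically by evaluating $\op{U}_{0}^{\dagger} \op{H}_{\text{lab}} \op{U}_{0} - i \op{U}_{0}^{\dagger} \dot{\op{U}}_{0}$ in SymPy; all variable names in this sketch are ours.

```python
import sympy as sym

t = sym.symbols('t', real=True)
E1, E2, E3, wP, wS, mu12, mu23 = sym.symbols(
    'E_1 E_2 E_3 omega_P omega_S mu_12 mu_23', real=True
)
epsP = sym.Function('epsilon_P')(t)  # lab-frame pump field (symbolic)
epsS = sym.Function('epsilon_S')(t)  # lab-frame Stokes field (symbolic)

H_lab = sym.Matrix([
    [E1, -mu12 * epsP, 0],
    [-mu12 * epsP, E2, -mu23 * epsS],
    [0, -mu23 * epsS, E3],
])
U0 = sym.diag(
    sym.exp(-sym.I * (E2 - wP) * t),
    sym.exp(-sym.I * E2 * t),
    sym.exp(-sym.I * (E2 - wS) * t),
)

# H_rot = U0^dagger H_lab U0 - i U0^dagger dU0/dt; the diagonal should
# reduce to (E1 + wP - E2, 0, E3 + wS - E2) and the off-diagonal terms
# should pick up the e^{±i w t} phase factors shown above.
H_rot = sym.simplify(U0.H * H_lab * U0 - sym.I * U0.H * U0.diff(t))
H_rot
```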
We can now write the fields as
$$
\begin{split}
\mu_{12} \epsilon_{P}(t)
&= \Omega_{P}^{(1)}(t) \cos{(\omega_P t)} - \Omega_{P}^{(2)}(t) \sin{(\omega_P t)} \\
&= \Omega_{P}^{(1)}(t) \left( e^{i \omega_P t} + e^{-i \omega_P t}\right)
+ i \Omega_{P}^{(2)}(t) \left( e^{i \omega_P t} - e^{-i \omega_P t} \right) \,,
\end{split}
$$
and similarly for $\epsilon_{S}(t)$, where we have split each field into two
arbitrary (real-valued) auxiliary fields $\Omega_{P}^{(1)}(t),
\Omega_{P}^{(2)}(t)$, and $\Omega_{S}^{(1)}(t), \Omega_{S}^{(2)}(t)$. This
rewriting is suggestive of controls being spectrally centered around $\omega_P$
and $\omega_S$, respectively, in which case any oscillations in
$\Omega_{P,S}^{(1,2)}(t)$ are on a much slower time scale than $\omega_{P, S}$.
Mathematically, however, *any* control field can be written in the above form.
Thus, we have not placed any restriction on the controls at this time.
Plugging this into $\Op{H}_\text{rot}$ and invoking the rotating wave
approximation that neglects all fast oscillating terms $\propto e^{\pm i 2
\omega_{P,S} t}$, we find
$$
\Op{H}_\text{RWA} = \begin{pmatrix}
\Delta_P & -\frac{1}{2} \Omega_P(t) & 0 \\
-\frac{1}{2} \Omega_P^*(t) & 0 & -\frac{1}{2} \Omega_S(t) \\
0 & -\frac{1}{2} \Omega_S^*(t) & \Delta_S
\end{pmatrix}\,,
$$
with the detunings $\Delta_P \equiv E_1 + \omega_P - E_2$, $\Delta_S \equiv E_3
+ \omega_S - E_2$ and the complex-valued control fields $\Omega_P(t) \equiv
\Omega_{P}^{(1)}(t) + i \Omega_{P}^{(2)}(t)$ and $\Omega_S(t) \equiv
\Omega_{S}^{(1)}(t) + i \Omega_{S}^{(2)}(t)$, illustrated in the following
diagram:
Most textbooks (including Tannor's) only allow control fields of the form
$\epsilon_{P,S}(t) \propto \Omega_{P,S}(t) \cos{(\omega_{P,S} t)}$ with the
pulse envelopes $\Omega_{P,S}(t) \in \mathbb{R}^+$. This will result in the
same $\Op{H}_\text{RWA}$ as above, but with the positive real-valued envelopes
instead of the complex-valued $\Omega_{P,S}(t)$. However, this restriction is
unnecessary: complex-valued control fields in the RWA are more general and
entirely physical, with the relation to the real-valued field in the lab
frame as defined above. The spectra of the optimized pulses are free to deviate
from the frequencies of the rotating frame, limited only by the numerical
resolution of the time grid and the RWA.
The `krotov` package requires that all control pulses are real-valued.
Therefore, the real and imaginary parts of $\Omega_{P}$ and $\Omega_{S}$ are
treated as independent Hamiltonians, and we write
$$
\Op{H}_\text{RWA}
= \Op{H_0}
+ \Omega_{P}^{(1)}(t) \Op{H}_{P,\text{re}}
+ \Omega_{P}^{(2)}(t) \Op{H}_{P,\text{im}}
+ \Omega_{S}^{(1)}(t) \Op{H}_{S,\text{re}}
+ \Omega_{S}^{(2)}(t) \Op{H}_{S,\text{im}}
$$
for the purpose of the optimization, with
$$
\begin{align}
\Op{H_0} &= \Delta_P \ketbra{1}{1} + \Delta_S \ketbra{3}{3}\,, \\
\Op{H}_{P,\text{re}} &= -\frac{1}{2} \left(\ketbra{1}{2} + \ketbra{2}{1}\right)\,, \\
\Op{H}_{P,\text{im}} &= -\frac{i}{2} \left(\ketbra{1}{2} - \ketbra{2}{1}\right)\,, \\
\Op{H}_{S,\text{re}} &= -\frac{1}{2} \left(\ketbra{2}{3} + \ketbra{3}{2}\right)\,, \\
\Op{H}_{S,\text{im}} &= -\frac{i}{2} \left(\ketbra{2}{3} - \ketbra{3}{2}\right)\,.
\end{align}
$$
## Guess controls
We choose the initial guess for the four control fields based on the intuition
of the "stimulated Raman adiabatic passage" (STIRAP) scheme. STIRAP allows to
transfer the population in $\ket{1}$ $\ket{3}$ without having to pass through
$\ket{2}$; it requires the Stokes-pulse to precede but overlap the pump-pulse.
Here, we leave it up to Krotov's method to find appropriate pulses for a
STIRAP-like transfer (without requiring that the $\ket{2}$ level remains
unpopulated). We start from a low intensity real-valued $\Omega_S(t)$ pulse
with a Blackman shape, followed by an overlapping real-valued $\Omega_P(t)$ of
the same shape. The entire scheme is in the time interval [0, 5].
```python
def Omega_P1(t, args):
"""Guess for the real part of the pump pulse"""
Ω0 = 5.0
return Ω0 * krotov.shapes.blackman(t, t_start=2.0, t_stop=5.0)
def Omega_P2(t, args):
"""Guess for the imaginary part of the pump pulse"""
return 0.0
def Omega_S1(t, args):
"""Guess for the real part of the Stokes pulse"""
Ω0 = 5.0
return Ω0 * krotov.shapes.blackman(t, t_start=0.0, t_stop=3.0)
def Omega_S2(t, args):
"""Guess for the imaginary part of the Stokes pulse"""
return 0.0
```
We can now instantiate the Hamiltonian including these guess controls:
```python
def hamiltonian(E1=0.0, E2=10.0, E3=5.0, omega_P=9.5, omega_S=4.5):
"""Lambda-system Hamiltonian in the RWA"""
# detunings
ΔP = E1 + omega_P - E2
ΔS = E3 + omega_S - E2
H0 = Qobj([[ΔP, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, ΔS]])
HP_re = -0.5 * Qobj([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
HP_im = -0.5 * Qobj([[0.0, 1.0j, 0.0], [-1.0j, 0.0, 0.0], [0.0, 0.0, 0.0]])
HS_re = -0.5 * Qobj([[0.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
HS_im = -0.5 * Qobj([[0.0, 0.0, 0.0], [0.0, 0.0, 1.0j], [0.0, -1.0j, 0.0]])
return [
H0,
[HP_re, Omega_P1],
[HP_im, Omega_P2],
[HS_re, Omega_S1],
[HS_im, Omega_S2],
]
```
```python
H = hamiltonian()
```
## Target state in the rotating frame
The basis states of the $\Lambda$-system are defined as
```python
ket1 = qutip.Qobj(np.array([1.0, 0.0, 0.0]))
ket2 = qutip.Qobj(np.array([0.0, 1.0, 0.0]))
ket3 = qutip.Qobj(np.array([0.0, 0.0, 1.0]))
```
We would like to implement a phase-sensitive transition $\ket{1} \rightarrow
\ket{3}$ *in the lab frame*. Since we are defining the dynamics in the RWA,
this means we have to adjust the target state to be in the rotating frame as
well (the initial state at $t=0$ is not affected by the RWA).
As defined earlier, the states in the rotating frame are obtained from the
states in the lab frame by the transformation $\ket{\Psi_{\text{rot}}} =
\Op{U}_0^\dagger \ket{\Psi_{\text{lab}}}$. In our case, this means that we get
$\ket{3}$ with an additional phase factor:
```python
def rwa_target_state(ket3, E2=10.0, omega_S=4.5, T=5):
return np.exp(1j * (E2 - omega_S) * T) * ket3
```
```python
psi_target = rwa_target_state(ket3)
```
We can now instantiate the control objective:
```python
objective = krotov.Objective(initial_state=ket1, target=psi_target, H=H)
objective
```
Objective[|Ψ₀(3)⟩ to |Ψ₁(3)⟩ via [H₀[3,3], [H₁[3,3], u₁(t)], [H₂[3,3], u₂(t)], [H₃[3,3], u₃(t)], [H₄[3,3], u₄(t)]]]
## Simulate dynamics under the guess field
We use a time grid with 500 steps between $t=0$ and $T=5$:
```python
tlist = np.linspace(0, 5, 500)
```
Before propagating, we visually verify the guess pulses we defined earlier:
```python
def plot_pulse(pulse, tlist, label):
fig, ax = plt.subplots()
if callable(pulse):
pulse = np.array([pulse(t, args=None) for t in tlist])
ax.plot(tlist, pulse)
ax.set_xlabel('time')
ax.set_ylabel('%s pulse amplitude' % label)
plt.show(fig)
```
```python
plot_pulse(H[1][1], tlist, 'Ωₚ')
plot_pulse(H[3][1], tlist, 'Ωₛ')
```
The imaginary parts are zero:
```python
assert np.all([H[2][1](t, None) == 0 for t in tlist])
assert np.all([H[4][1](t, None) == 0 for t in tlist])
```
We introduce projectors $\op{P}_{i} =
\ketbra{i}{i}$ for each of the three energy levels, allowing us to plot the population dynamics:
```python
proj1 = qutip.ket2dm(ket1)
proj2 = qutip.ket2dm(ket2)
proj3 = qutip.ket2dm(ket3)
```
```python
guess_dynamics = objective.mesolve(tlist, e_ops=[proj1,proj2,proj3])
```
```python
def plot_population(result):
fig, ax = plt.subplots()
ax.plot(result.times, result.expect[0], label='1')
ax.plot(result.times, result.expect[1], label='2')
ax.plot(result.times, result.expect[2], label='3')
ax.legend()
ax.set_xlabel('time')
ax.set_ylabel('population')
plt.show(fig)
```
```python
plot_population(guess_dynamics)
```
We find that our guess pulses are too disjoint to implement the STIRAP scheme.
Thus, the Stokes pulse has no effect, whilst the pump pulse merely transfers
population out of $\ket{1}$ into $\ket{2}$ and back again.
## Optimize
In order to invoke `optimize_pulses`, we must define the required parameters
for each control, a pulse shape (used to ensure that the controls remain 0 at
$t=0$ and $t=T$), and the parameter $\lambda_a$ that determines the overall
magnitude of the pulse updates in each iteration.
```python
def S(t):
"""Scales the Krotov methods update of the pulse value at the time t"""
return krotov.shapes.flattop(
t, t_start=0.0, t_stop=5.0, t_rise=0.3, func='sinsq'
)
```
```python
pulse_options = {
H[1][1]: dict(lambda_a=0.5, update_shape=S),
H[2][1]: dict(lambda_a=0.5, update_shape=S),
H[3][1]: dict(lambda_a=0.5, update_shape=S),
H[4][1]: dict(lambda_a=0.5, update_shape=S)
}
```
We now run the optimization, using the phase-sensitive functional $J_{T,
\text{re}} = 1 - \Re\Braket{\Psi(T)}{\Psi_{\tgt}}$, printing the integrated
pulse update for each control in each iteration. The optimization stops when
$J_T$ falls below $10^{-3}$, changes by less than $10^{-5}$, or after at most
15 iterations. We also check for monotonic convergence.
```python
opt_result = krotov.optimize_pulses(
[objective],
pulse_options,
tlist,
propagator=krotov.propagators.expm,
chi_constructor=krotov.functionals.chis_re,
info_hook=krotov.info_hooks.print_table(
J_T=krotov.functionals.J_T_re,
show_g_a_int_per_pulse=True,
unicode=False,
),
check_convergence=krotov.convergence.Or(
krotov.convergence.value_below(1e-3, name='J_T'),
krotov.convergence.delta_below(1e-5),
krotov.convergence.check_monotonic_error,
),
iter_stop=15,
)
```
iter. J_T g_a_int_1 g_a_int_2 g_a_int_3 g_a_int_4 g_a_int J Delta J_T Delta J secs
0 1.01e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 1.01e+00 n/a n/a 0
1 6.72e-01 8.60e-02 2.87e-04 8.17e-02 3.72e-04 1.68e-01 8.40e-01 -3.37e-01 -1.68e-01 1
2 4.02e-01 7.20e-02 4.21e-04 6.22e-02 4.20e-04 1.35e-01 5.37e-01 -2.70e-01 -1.35e-01 1
3 2.22e-01 4.91e-02 4.64e-04 3.99e-02 3.88e-04 8.98e-02 3.12e-01 -1.80e-01 -8.98e-02 1
4 1.17e-01 2.89e-02 3.87e-04 2.29e-02 3.01e-04 5.25e-02 1.69e-01 -1.05e-01 -5.25e-02 1
5 6.00e-02 1.56e-02 2.69e-04 1.23e-02 2.10e-04 2.84e-02 8.84e-02 -5.69e-02 -2.84e-02 1
6 3.05e-02 8.08e-03 1.71e-04 6.37e-03 1.39e-04 1.48e-02 4.52e-02 -2.95e-02 -1.48e-02 1
7 1.54e-02 4.08e-03 1.06e-04 3.24e-03 9.10e-05 7.51e-03 2.30e-02 -1.50e-02 -7.51e-03 1
8 7.85e-03 2.04e-03 6.65e-05 1.63e-03 5.99e-05 3.79e-03 1.16e-02 -7.59e-03 -3.79e-03 1
9 4.02e-03 1.02e-03 4.31e-05 8.14e-04 4.01e-05 1.91e-03 5.94e-03 -3.83e-03 -1.91e-03 1
10 2.09e-03 5.05e-04 2.88e-05 4.07e-04 2.73e-05 9.68e-04 3.05e-03 -1.94e-03 -9.68e-04 1
11 1.10e-03 2.52e-04 1.99e-05 2.03e-04 1.88e-05 4.94e-04 1.59e-03 -9.87e-04 -4.94e-04 1
12 5.90e-04 1.26e-04 1.40e-05 1.02e-04 1.31e-05 2.54e-04 8.45e-04 -5.09e-04 -2.54e-04 1
```python
opt_result
```
Krotov Optimization Result
--------------------------
- Started at 2021-11-07 05:51:35
- Number of objectives: 1
- Number of iterations: 12
- Reason for termination: Reached convergence: J_T < 0.001
- Ended at 2021-11-07 05:51:59 (0:00:24)
We dump the result of the optimization to disk for later use in the [Ensemble
Optimization for Robust Pulses](08_example_ensemble.ipynb).
```python
if not os.path.isfile('lambda_rwa_opt_result.dump'):
opt_result.dump('lambda_rwa_opt_result.dump')
```
The optimized complex pulses look as follows:
```python
def plot_pulse_amplitude_and_phase(pulse_real, pulse_imaginary,tlist):
ax1 = plt.subplot(211)
ax2 = plt.subplot(212)
amplitudes = [np.sqrt(x*x + y*y) for x,y in zip(pulse_real,pulse_imaginary)]
phases = [np.arctan2(y,x)/np.pi for x,y in zip(pulse_real,pulse_imaginary)]
ax1.plot(tlist,amplitudes)
ax1.set_xlabel('time')
ax1.set_ylabel('pulse amplitude')
ax2.plot(tlist,phases)
ax2.set_xlabel('time')
ax2.set_ylabel('pulse phase (π)')
plt.show()
print("pump pulse amplitude and phase:")
plot_pulse_amplitude_and_phase(
opt_result.optimized_controls[0], opt_result.optimized_controls[1], tlist)
print("Stokes pulse amplitude and phase:")
plot_pulse_amplitude_and_phase(
opt_result.optimized_controls[2], opt_result.optimized_controls[3], tlist)
```
We can convert the complex controls in the rotating frame back into the
real-valued pulses in the lab frame:
```python
def plot_physical_field(pulse_re, pulse_im, tlist, case=None):
if case == 'pump':
w = 9.5
elif case == 'stokes':
w = 4.5
else:
print('Error: selected case is not a valid option')
return
ax = plt.subplot(111)
ax.plot(tlist,pulse_re*np.cos(w*tlist)-pulse_im*np.sin(w*tlist), 'r')
ax.set_xlabel('time', fontsize = 16)
if case == 'pump':
ax.set_ylabel(r'$\mu_{12}\,\epsilon_{P}$')
elif case == 'stokes':
ax.set_ylabel(r'$ \mu_{23}\,\epsilon_{S}$')
plt.show()
print('Physical electric pump pulse in the lab frame:')
plot_physical_field(
opt_result.optimized_controls[0], opt_result.optimized_controls[1], tlist, case = 'pump')
print('Physical electric Stokes pulse in the lab frame:')
plot_physical_field(
opt_result.optimized_controls[2], opt_result.optimized_controls[3], tlist, case = 'stokes')
```
Lastly, we check the population dynamics to verify that we indeed implement the
desired state-to-state transfer:
```python
opt_dynamics = opt_result.optimized_objectives[0].mesolve(
tlist, e_ops=[proj1, proj2, proj3])
```
```python
plot_population(opt_dynamics)
```
|
9af12b015c87aa4c7d48100f47996139ed737060
| 179,091 |
ipynb
|
Jupyter Notebook
|
docs/notebooks/02_example_lambda_system_rwa_complex_pulse.ipynb
|
mcditoos/krotov
|
6a70cc791fa21186997ad2ca5a72f6d30574e7a0
|
[
"BSD-3-Clause"
] | null | null | null |
docs/notebooks/02_example_lambda_system_rwa_complex_pulse.ipynb
|
mcditoos/krotov
|
6a70cc791fa21186997ad2ca5a72f6d30574e7a0
|
[
"BSD-3-Clause"
] | null | null | null |
docs/notebooks/02_example_lambda_system_rwa_complex_pulse.ipynb
|
mcditoos/krotov
|
6a70cc791fa21186997ad2ca5a72f6d30574e7a0
|
[
"BSD-3-Clause"
] | 1 |
2021-11-26T17:01:29.000Z
|
2021-11-26T17:01:29.000Z
| 139.045807 | 21,052 | 0.869843 | true | 6,728 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.763484 | 0.66888 | 0.510679 |
__label__eng_Latn
| 0.760808 | 0.024808 |
# Further information
(difference_between_a_list_and_a_tuple)=
## What is the difference between a Python list and a Python tuple?
Two of the most used Python iterables are lists and tuples. In practice they
have a number of similarities: they are both ordered collections of objects that
can be used in list comprehensions as well as in other ways.
- Tuples are **immutable**
- Lists are **mutable**
This means that once created tuples cannot be changed and lists can.
As a general rule of thumb: if you do not need to modify your iterable then use
a tuple as they are more computationally efficient.
This blog post is a good explanation of the difference:
<https://www.afternerd.com/blog/difference-between-list-tuple/>
## Why does the sum of booleans count the `True`s?
In the tutorial and elsewhere we create a list of booleans and then take the
sum. Here are some of the steps:
```python
samples = ("Red", "Red", "Blue")
```
```python
booleans = [sample == "Red" for sample in samples]
booleans
```
[True, True, False]
When we take the `sum` of that list we get a numeric value:
```python
sum(booleans)
```
2
This has in fact counted the `True` values as 1 and the `False` values as 0.
```python
int(True)
```
1
```python
int(False)
```
0
## What is the difference between `print` and `return`?
In functions you see we use the `return` statement. This does two things:
1. Assigns a value to the function run;
2. Ends the function.
The `print` statement **only** displays the output.
As an example let us create the following set:
$$
S = \{f(x)\text{ for }x \in \{0, \pi / 4, \pi / 2, 3\pi / 4\}\}
$$
where $f(x)= \cos^2(x)$.
The correct way to do this is:
```python
import sympy as sym
def f(x):
"""
Return the square of the cosine of x
"""
return sym.cos(x) ** 2
S = [f(x) for x in (0, sym.pi / 4, sym.pi / 2, 3 * sym.pi / 4)]
S
```
[1, 1/2, 0, 1/2]
If we replaced the `return` statement in the function definition with a `print` we obtain:
```python
def f(x):
"""
Return the square of the cosine of x
"""
print(sym.cos(x) ** 2)
S = [f(x) for x in (0, sym.pi / 4, sym.pi / 2, 3 * sym.pi / 4)]
```
1
1/2
0
1/2
We see now that as the function has been run it displays the output.
**However** if we look at what `S` is we see that the function has not returned
anything:
```python
S
```
[None, None, None, None]
Here are some other materials on this subject:
- <https://www.tutorialspoint.com/Why-would-you-use-the-return-statement-in-Python>
- <https://pythonprinciples.com/blog/print-vs-return/>
## How does Python sample randomness?
When using the Python random module we are in fact generating a pseudo random
process. True randomness is actually not common.
Pseudo randomness is an important area of mathematics as strong algorithms that
create unpredictable sequences of numbers are vital to cryptographic security.
The specific algorithm used in Python for randomness is called the Mersenne
twister; it is a state-of-the-art algorithm.
You can read more about this here:
<https://docs.python.org/3/library/random.html#module-random>.
## What is the difference between a docstring and a comment?
In Python it is possible to write statements that are ignored using the `#`
symbol. This creates something called a "comment". For example:
```python
# create a list to represent the tokens in a bag
bag = ["Red", "Red", "Blue"]
```
A docstring however is something that is "attached" to a function and can be
accessed by Python.
If we rewrite the function to sample the experiment of the tutorial without a
docstring but using comments we will have:
```python
def sample_experiment(bag):
# Select a token
selected_token = pick_a_token(container=bag)
# If the token is red then the probability of selecting heads is 2/3
if selected_token == "Red":
probability_of_selecting_heads = 2 / 3
# Otherwise it is 1 / 2
else:
probability_of_selecting_heads = 1 / 2
# Select a coin according to the probability.
if random.random() < probability_of_selecting_heads:
coin = "Heads"
else:
coin = "Tails"
# Return both the selected token and the coin.
return selected_token, coin
```
Now if we try to access the help for the function we will not get it:
```python
help(sample_experiment)
```
Help on function sample_experiment in module __main__:
sample_experiment(bag)
Furthermore, if you look at the code with comments you will see that because of
the choice of variable names the comments are in fact redundant.
In software engineering it is generally accepted that comments indicate that
your code is not clear and so it is preferable to write clear documentation
explaining why something is done through docstrings.
```python
def sample_experiment(bag):
"""
This samples a token from a given bag and then
selects a coin with a given probability.
If the sampled token is red then the probability
of selecting heads is 2/3 otherwise it is 1/2.
This function returns both the selected token
and the coin face.
"""
selected_token = pick_a_token(container=bag)
if selected_token == "Red":
probability_of_selecting_heads = 2 / 3
else:
probability_of_selecting_heads = 1 / 2
if random.random() < probability_of_selecting_heads:
coin = "Heads"
else:
coin = "Tails"
return selected_token, coin
```
Here are some resources on this:
- <https://blog.codinghorror.com/coding-without-comments/>
- <https://visualstudiomagazine.com/articles/2013/07/26/why-commenting-code-is-still-bad.aspx>
|
5d3ebafe7bee242009fbad0b630c8c150cb9dfb5
| 10,887 |
ipynb
|
Jupyter Notebook
|
book/tools-for-mathematics/06-probability/why/.main.md.bcp.ipynb
|
11michalis11/pfm
|
c91b1eda70d7cde3fbe065db4667f84853947850
|
[
"MIT"
] | 8 |
2020-09-24T21:02:41.000Z
|
2020-10-14T08:37:21.000Z
|
book/tools-for-mathematics/06-probability/why/.main.md.bcp.ipynb
|
11michalis11/pfm
|
c91b1eda70d7cde3fbe065db4667f84853947850
|
[
"MIT"
] | 87 |
2020-09-21T15:54:23.000Z
|
2021-12-19T23:26:15.000Z
|
book/tools-for-mathematics/06-probability/why/.main.md.bcp.ipynb
|
11michalis11/pfm
|
c91b1eda70d7cde3fbe065db4667f84853947850
|
[
"MIT"
] | 3 |
2020-10-02T09:21:27.000Z
|
2021-07-08T14:46:27.000Z
| 24.52027 | 100 | 0.527602 | true | 1,454 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.682574 | 0.810479 | 0.553212 |
__label__eng_Latn
| 0.997957 | 0.123626 |
# Lecture 13: Integral Transforms, D/FFT and Electron Microscopy
----
## Reading and Reference
* Advanced Engineering Mathematics, E. Kreyszig, John Wiley and Sons, 2010
* Numerical Recipes, W. Press, Cambridge University Press, 1986
* M. De Graef and M. McHenry, Structure of Materials, Cambridge University Press, 2nd ed.
* C. Hammond, The Basics of Crystallography and Diffraction, Oxford Science Publications, 4th ed.
## What to Learn?
* The definition of an integral transform
* The algorithm for computing the discrete Fourier transform
* How diffraction patterns can be used to create phase contrast images in electron microscopy
## What to Do?
* Compute the Fourier transform of different aperture functions.
* Select different regions of a Fourier transform to reconstruct a simulated TEM image and an image of your choosing.
### Introduction to Integral Transforms
----
An integral transform maps a function of one independent variable into a function of another independent variable using a _kernel_:
$$g(\alpha) = \int_{a}^{b} f(t) K(\alpha,t) dt $$
The function $f(t)$ is transformed to a new function $g(\alpha)$ through the definite integral. A similarity to the dot product of functions is evident in this form and this operation can be thought of as a mapping or projection of $f(t)$ into a different independent variable $\alpha$. Existence, integrability and inversion of integral transform operations are important in the study of this topic, although not covered in these notes.
Two examples of integral transforms, the Laplace and Fourier, are discussed in this lecture. It is typical to use the Laplace transform to remove the time dependence from Fick's second law in diffusion problems. The Fourier transform is used in the study of diffraction under certain conditions.
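As a small illustration of an integral transform other than the Fourier transform used below, `sympy` can compute a Laplace transform directly. This example is our own and is not needed for the rest of the lecture.

```python
import sympy as sp

t, s, a = sp.symbols('t s a', positive=True)

# The Laplace transform of exp(-a*t) should be 1/(s + a)
sp.laplace_transform(sp.exp(-a * t), t, s, noconds=True)
```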
To assist in this lecture some special symbols in `Python` and `sympy` are reviewed:
```python
import sympy as sp
sp.init_printing(use_latex=True)
```
```python
# symbols we will need below
x,y,z,t,c = sp.symbols('x y z t c')
# note the special declaration that omega is a positive number
omega = sp.symbols('omega', positive=True)
```
### Complex Number Review
----
A reminder that $i$ is the square root of negative one and this is how you specify $i$ in `Sympy` and that is different than the complex data type in `Python`.
```python
sp.I**2
```
The natural logarithm of $e$ is $1$:
```python
sp.log(sp.E)
```
In SymPy there are two ways to deal with integration. If you would like to represent an unevaluated integral, you can use the `Integral` function. If you want to compute the integration of an expression you can use the `integrate` function.
```python
sp.Integral(sp.E**(sp.I*omega*t),t)
```
```python
# 'omega', positive=True
sp.integrate(sp.E**(sp.I*omega*t),t)
```
Where we assume there is no zero frequency (as we are dividing by $\omega$) - hence the assumption `positive=True` in the symbol definition above. (Try replacing $\omega$ with $y$ and inspect the expression returned by `integrate`.)
### The Fourier Transform
----
As the domain of the periodicity increases, the frequency spectrum required to represent the function becomes more finely divided. Recall the argument of the trigonometric terms in the functions of the Fourier series:
$$ \frac{n \pi (\omega +c)}{d} $$
where n is the order of the frequency component, c the offset relative to the origin, and d the domain width. If we let the domain width go to infinity (implying that the function is not periodic) then an integral sum is required rather than a discrete summation. The, infinte, non-periodic function and its frequency spectrum are related by the Fourier transform defined by:
$$ \hat{f}(\omega) = \sqrt{\frac{1}{2\pi}} \int^{+\infty}_{-\infty} f(t) \exp[-i \omega t] dt $$
This results in a mapping of the function f(t) into frequency space.
The real or complex and even or odd nature of the function $f(t)$ determines if the transformed function is even, odd, real, or complex. For the purposes of materials crystal structures in this lecture we will be using even and real functions.
### Diffraction from An Aperture
----
A useful physical problem requiring use of the Fourier transform is diffraction. In this problem we will use a top-hat function to represent the location of an infinity of wave sources from an aperture. We use the `sp.Piecewise` function to generate a "tophat" function for the Fourier transform.
```python
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = 8, 4
p = sp.Piecewise((0,x<-1),(1,x<1),(0,True))
sp.plot(p);
```
At some distance from the aperture we place a detector that measures the combined intensity of all the wave sources; however, due to the finite width of the slit, each wave travels a different distance to the detector. The phase difference between the waves at the detector is given by the Fourier transform of the aperture function when the [Fraunhofer](https://en.wikipedia.org/wiki/Fraunhofer_diffraction_equation) approximation is valid.
This aperture function is even and real so we expect our transformed function to also be even and real. We use the definition of the integral transform above to write an explicit integral statement of the Fourier transform of the top-hat function above. The integral is $1$ between $c$ and $-c$ and zero elsewhere - so we can integrate **just the non-zero part**. This is integrated as:
```python
sp.Integral(1*sp.exp(-sp.I*2*omega*x),(x,-c,c))
```
Calling explicitly for the integration and assigning the result to `a`:
```python
a = sp.sqrt(1/(2*sp.pi))*sp.integrate(1*sp.exp(-sp.I*2*omega*x),(x,-c,c))
a
```
This does not (at first glance) appear to be a real function due to the two exponential terms, but we can use some of the algebraic methods built into `SymPy` to help. We can ask for this form using sines and cosines with the `rewrite` method. Furthermore - we can simplify it further with the expand function. Trial and error may be required to determine the best combination and ordering of algebraic manipulations.
```python
solution = sp.expand(a.rewrite(sp.sin))
solution
```
Here we can use the `subs` (substitution) method to set the value of `c`. I plotted the square of the function since the intensity of a diffracted wave is related to the time averaged energy transferred by the wave. This is proportional to the amplitude squared. As our function is real valued, we can just plot the square.
```python
sp.plot(solution.subs(c,1));
```
```python
sp.plot(solution.subs(c,1)**2);
```
### Diffraction from Two Apertures
----
We could perform the same integration over two top-hat functions and plot those results.
```python
compositeIntegral = sp.sqrt(1/(2*sp.pi))*sp.Integral(1*sp.exp(-sp.I*2*omega*x),(x,1,2)) + \
sp.sqrt(1/(2*sp.pi))*sp.Integral(1*sp.exp(-sp.I*2*omega*x),(x,-2,-1))
compositeIntegral
```
```python
om = compositeIntegral.doit()
om
```
The diffracted intensity from this pair of slits would appear as:
```python
sp.plot(om.rewrite(sp.sin).expand()**2)
```
Or we could wrap this calculation in a function to explore other parameters:
```python
def diffractionFunction(d=4.0, w=1.0):
result = sp.sqrt(1/(2*sp.pi))*sp.Integral(1*sp.exp(-sp.I*2*omega*x),\
(x,-(d+w),-(d-w))) + \
sp.sqrt(1/(2*sp.pi))*sp.Integral(1*sp.exp(-sp.I*2*omega*x),\
(x,(d-w),(d+w)))
return result.doit()
```
```python
sp.expand(diffractionFunction(10.,2.).rewrite(sp.sin))
```
### DIY: Complex Numbers
----
Perform the Fourier transformation on an odd or complex valued function. Plot the real and imaginary parts of both the target function and the transformed functions.
### DIY: The Airy Disk
----
Solve for the diffracted intensity in two dimensions from a circular aperture. It may be easier to do this as a discrete problem using the DFT below.
### The Discrete Fourier Transform
----
The discrete Fourier Transform is defined [here](http://en.wikipedia.org/wiki/Discrete_Fourier_transform) and is regarded as one of the most important advances in computing science in the 20th century. Other resources such as Numerical Recipes, the Python help files and many other websites detail the calculation and implementations.
It is often instructive to review other implementations of the DFT to help you gain experience. I will be modeling this implementation after Jake Vanderplas' blog article [here](http://jakevdp.github.io/blog/2013/08/28/understanding-the-fft/). Following the notion in the blog article:
Forward DFT:
$$X_k = \sum_{n=0}^{N-1} x_n \cdot e^{-i~2\pi~k~n~/~N}$$
Inverse DFT:
$$x_n = \frac{1}{N}\sum_{k=0}^{N-1} X_k e^{i~2\pi~k~n~/~N}$$
In this section of the notebook, we use Vanderplas' description and implementation.
----
For simplicity, we'll concern ourself only with the forward transform, as the inverse transform can be implemented in a very similar manner. Taking a look at the DFT expression above, we see that it is nothing more than a straightforward linear operation: a matrix-vector multiplication of $\vec{x}$,
$$\vec{X} = M \cdot \vec{x}$$
with the matrix $M$ given by
$$M_{kn} = e^{-i~2\pi~k~n~/~N}$$
With this in mind, we can compute the DFT using simple matrix multiplication as follows:
```python
import numpy as np
def DFT_slow(x):
"""Compute the discrete Fourier Transform of the 1D array x"""
x = np.asarray(x, dtype=float)
N = x.shape[0]
n = np.arange(N)
k = n.reshape((N, 1))
M = np.exp(-2j * np.pi * k * n / N)
return np.dot(M, x)
```
We can use the "all close" function to check if the result from `DFT_slow` and `Numpy` are close:
```python
x_signal = np.random.random(1024)
np.allclose(DFT_slow(x_signal), np.fft.fft(x_signal))
```
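For completeness, here is a matching sketch of the inverse transform mentioned above (our addition, following the same matrix construction with the opposite sign in the exponent and a $1/N$ normalization):

```python
def IDFT_slow(X):
    """Compute the inverse discrete Fourier Transform of the 1D array X"""
    X = np.asarray(X, dtype=complex)
    N = X.shape[0]
    n = np.arange(N)
    k = n.reshape((N, 1))
    M = np.exp(2j * np.pi * k * n / N)
    return np.dot(M, X) / N

# round trip: inverse transforming the transform recovers the original signal
np.allclose(IDFT_slow(DFT_slow(x_signal)), x_signal)
```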
----
I think it would be instructive to symbolically expand the matrix above so that it is clear how `n*k` leads to a two dimensional matrix. Switching to `sympy` symbols to expose the details we can do the following:
```python
import sympy as sp
from sympy import Matrix
import numpy as np
sp.init_printing()
```
* `x` is the input vector.
* `k` is the wavenumber or frequency.
* `n` is the component of the input vector.
```python
x = sp.Matrix(sp.symbols('x0:5'))
n = sp.Matrix(sp.symbols('n0:5')).T
k = sp.Matrix(sp.symbols('k0:5'))
N = sp.symbols('N')
M = (-sp.I*2*sp.pi*k*n/N).applyfunc(sp.exp)
```
```python
M*x
```
Each frequency element is projected into each point of the input vector - the matrix links `k` and `n`. So - the contribution at each point is a sum of each frequency contribution, similar to the dot product of functions.
### DFT with Numpy Functions
----
In this section we use the `FFT` submodule of `numpy` to help in the computation of the DFT.
```python
?np.fft # This gives us information on the conventions used in the return values of the functions.
```
```python
?np.fft.fft # This is the main DFT function we will use.
```
```python
?np.fft.fftfreq # This is a helper function to prepare a vector of frequencies.
```
```python
?np.arange # Points in an evenly spaced interval.
```
This approach is derived from a nice discussion on FFT found on the blog Glowing Python.
First we will divide up time into `samplingInterval` sized chunks between 0 and 1. This will aid in getting the x-axis scaled correctly so that frequency can be read directly off the DFT result. If you take `samplingInterval` in seconds, then `samplingRate` is in Hz. Notice the approach here - we could have done this all in one line, but by intelligently naming our variables and exposing the details of our thoughts the code is more readable:
```python
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
samplingRate = 150.0
samplingInterval = 1.0/samplingRate
timeVector = np.arange(0, 1, samplingInterval)
# Print out the first few elements so you can see what is going on:
timeVector[0:10:]
```
Next we decide on the frequency of our signal and create a list to have a signal to work with.
```python
signalFrequency = 10.0;
ourSignal = np.sin(2*np.pi*signalFrequency*timeVector) + 0.5*np.sin(2*np.pi*(2*signalFrequency)*timeVector)
```
Plotting the input function for clarity:
```python
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 0.8, 0.8])
axes.plot(timeVector, ourSignal, 'r')
axes.set_xlabel('Time')
axes.set_ylabel('Signal')
axes.set_title('Our Modest Signal');
```
Using `numpy` to compute the DFT:
```python
n = ourSignal.size
frequencies = np.fft.fftfreq(n, d=1.0/samplingRate)
spectrum = np.abs(np.fft.fft(ourSignal))
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 0.8, 0.8])
axes.scatter(frequencies, spectrum, c='r', marker='s', alpha=0.4)
axes.set_xlabel('Frequency')
axes.set_ylabel('Amplitude')
axes.set_title('Our Amplitude Spectrum');
```
### Interactive Microscopy Demonstration (Optional)
Original developed by C. Carter, translated to Python by D. Lewis
---
Transmission electron microscopy utilizes diffraction to determine crystal structures and develop contrast in images. In this section of the lecture we will simulate the diffraction pattern of an atomic structure. Using this diffraction pattern we will simulate using a diffraction aperture to reconstruct a phase contrast image.
```python
%matplotlib inline
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from numpy.fft import *
def atomic_func(x,y):
param = 64.0
return (1+np.sin(4*(x+y)*2*np.pi/param))*(1+np.sin(2*(x-2*y)*2*np.pi/param))/4
def aperture(X, Y, xoffset, yoffset, size):
return (X-xoffset)**2+(Y-yoffset)**2 > size**2
```
We define two functions above:
* `atomic_func` is used to provide an image function periodic in two dimensions from which the diffraction pattern will be constructed. This can be thought of as the density of electrons in a solid that is used to approximate a crystal structure.
* `aperture` returns a Boolean array that will be used to mask the diffraction pattern so that individual frequencies can be selected for image reconstruction. `aperture` will return `True` or `False`.
```python
x = np.arange(0.0,256.0,1.0)
y = np.arange(0.0,256.0,1.0)
X,Y = np.meshgrid(x, y)
Z = atomic_func(X,Y)
```
The `Z` array holds the atomic image function.
```python
P = np.zeros(Z.shape,dtype=complex)
K = np.zeros(Z.shape,dtype=complex)
K = fftshift(fft2(Z, norm='ortho'))
P = np.copy(K)
P[np.where(aperture(X, Y, 128, 128, 3) & aperture(X, Y, 150, 128, 3))] = 0
```
The `P` array holds the processed Fourier spectrum. The values of `P` are set to zero when they are outside the aperture. We use the `K` array to hold the full Fourier spectrum of the image.
In this cell we create two more `numpy` arrays (there are other ways to do this) that have the same shape as Z. The `P` array we use to hold the processed Fourier spectrum. The processing uses `numpy`'s Boolean indexing to set values in P equal to zero when they are "outside" the aperture. When we get to the images below you'll see what is meant.
Because assigning one NumPy array to another name does not copy the underlying data (both names refer to the same array), we need an explicit copy of K so that we can modify one without changing the other.
From this processed spectrum we will create an image. The K array holds the whole Fourier spectrum.
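A minimal demonstration of why the explicit `np.copy` matters (a toy example, unrelated to the spectrum itself):

```python
reference = np.zeros(4)
alias = reference                # no copy: both names refer to the same data
independent = np.copy(reference)

alias[0] = 1.0
print(reference)                 # [1. 0. 0. 0.]  (changed through the alias)
print(independent)               # [0. 0. 0. 0.]  (the copy is unaffected)
```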
```python
Im = fftshift(ifft2(P))
```
Above we reprocess `P` into the image `Im`.
```python
fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(30,9))
axes[0].imshow(Z, origin='lower')
axes[1].imshow(abs(K),origin='lower', cmap=plt.get_cmap('pink'))
aperture1 = plt.Circle((128,128),3**2,color='r', fill = False)
aperture2 = plt.Circle((150,128),3**2,color='y', fill = False)
axes[1].add_artist(aperture1)
axes[1].add_artist(aperture2)
axes[2].imshow(abs(Im)**2, origin='lower')
plt.show()
```
### Homework
----
Apply the DFT to an image of your choosing. Select the low frequency part of the DFT and regenerate the image (i.e. take the inverse FFT) from only these selected frequencies. Use a Boolean selection to zero out parts of the frequency spectrum before you convert back. To read an image in from disk, you can use the `imread` function from SciPy's `ndimage` module (note that it has been removed from recent SciPy releases; `matplotlib.pyplot.imread` or the `imageio` package are alternatives):
```python
from scipy.ndimage import imread
img = imread('./images/pattern2.jpg', mode='L')
```
Checking the data type of `img` will prove helpful.
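For reference, a minimal sketch of the kind of low-pass filtering this homework asks for (assuming `img` is a 2D grayscale array; the 30-pixel cutoff radius is an arbitrary choice):
```python
from numpy.fft import fft2, ifft2, fftshift, ifftshift

spectrum = fftshift(fft2(img))           # shift the zero frequency to the center
ny, nx = img.shape
Y, X = np.ogrid[:ny, :nx]
# Boolean mask: True inside a circle of radius 30 pixels around the spectrum center
low_pass = (X - nx/2)**2 + (Y - ny/2)**2 < 30**2
spectrum[~low_pass] = 0                  # zero out the high frequencies
img_lowpass = np.abs(ifft2(ifftshift(spectrum)))
```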
### Summary
----
* Integral transforms map one function space into another function space. You can find books that include tables of Laplace and Fourier transforms. Many other transforms exist - but the principle is the same.
* The DFT organizes amplitude information in predictable yet non-intuitive ways. Read the documentation for the functions you use!
* Integral transforms are a means for reducing the complexity of certain ODEs and PDEs.
* Diffraction and diffusion are two example applications where integral transforms can be employed.
### Reading Assignments and Practice
----
* Pam Champness' book on electron diffraction is a (relatively) easy read on diffraction. You can always have a look at Cullity, Hammond, or any other book on structure and X-ray/electron characterization.
* Practice taking the FFT of signals you construct by hand. This is a good step when you are debugging a problem. You should always have a test case available to determine if your work is correct.
## The index of a saddle point
We consider a *n* degree-of-freedom Hamiltonian of the following form:
\begin{equation}
H(q, p) = \sum_{i=1}^{n} \frac{p_i^2}{2} + V(q), \quad (q,p) \in \mathbb{R}^n \times \mathbb{R}^n,
\label{ham_int}
\end{equation}
where $q \in \mathbb{R}^n$ denote the configuration space variables and $p \in \mathbb{R}^n$ denote the corresponding conjugate momentum variables. This Hamiltonian function gives rise to the corresponding Hamilton's differential equations (or just ''Hamilton's equations'') having the following form:
\begin{eqnarray}
\dot{q}_i & = & p_i, \nonumber \\
\dot{p}_i & = & -\frac{\partial V}{\partial q_i} (q), \quad i=1, \ldots , n.
\label{hameq_int}
\end{eqnarray}
These are a set of *2n* first order differential equations defined on the phase space
$\mathbb{R}^n \times \mathbb{R}^n$.
A critical point of the potential energy function is a point $\bar{q} \in \mathbb{R}^n$ satisfying the following equations:
\begin{equation}
\frac{\partial V}{\partial q_i} (\bar{q}) =0, \quad i=1, \ldots n.
\end{equation}
Once a critical point of the potential energy function is located, we want to ''classify'' it. This is done by examining the second derivative of the potential energy function evaluated at the critical point. The second derivative matrix is referred to as the *Hessian matrix*, and it is given by:
\begin{equation}
\frac{\partial^2 V}{\partial q_i \partial q_j} (\bar{q}), \quad i,j=1, \ldots, n,
\label{hessian}
\end{equation}
which is a $n \times n$ symmetric matrix. Hence \eqref{hessian} has *n* real eigenvalues, which we denote by:
\begin{equation}
\sigma_k, \quad k=1, \ldots, n.
\label{eiv_Hess}
\end{equation}
However, returning to dynamics as given by Hamilton's equations \eqref{hameq_int}, the point $(\bar{q}, 0)$ is an equilibrium point of Hamilton's equations, i.e. when this point is substituted into the right-hand-side of \eqref{hameq_int} we obtain $(\dot{q}_1, \ldots, \dot{q}_n, \dot{p}_1, \ldots, \dot{p}_n) = (0, \ldots, 0, 0, \ldots, 0)$, i.e. the point $(\bar{q}, 0)$ does not change in time.
Next, we want to determine the nature of the stability of this equilibrium point. Linearized stability is determined by computing the Jacobian of the right hand side of \eqref{hameq_int}, which we will denote by $M$, evaluating it at the equilibrium point $(\bar{q}, 0)$, and determining its eigenvalues. The following calculation is from {% cite ezra2004impenetrable --file reaction_dynamics %}.
The Jacobian of the Hamiltonian vector field \eqref{hameq_int} evaluated at $(\bar{q}, 0)$ is given by:
\begin{equation}
M =
\left(
\begin{array}{cc}
0_{n\times n} & \rm{id}_{n \times n} \\
-\frac{\partial^2 V}{\partial q_i \partial q_j} (\bar{q}) & 0_{n\times n}
\end{array}
\right),
\end{equation}
which is a $2n \times 2n$ matrix. The eigenvalues of $M$, denoted by $\lambda$, are given by the solutions of the following characteristic equation:
\begin{equation}
{\rm det} \, \left( M - \lambda \, {\rm id}_{2n \times 2n} \right) =0,
\label{eivM}
\end{equation}
where ${\rm id}_{2n \times 2n}$ denoted the $2n \times 2n$ identity matrix. Writing \eqref{eivM} in detail (i.e. using the explicit expression for the Jacobian of \eqref{hameq_int}) gives:
\begin{equation}
{\rm det} \,
\left(
\begin{array}{cc}
-\lambda \, \rm{id}_{n \times n} & \rm{id}_{n \times n} \\
-\frac{\partial^2 V}{\partial q_i \partial q_j} (\bar{q}) & -\lambda \rm{id}_{n \times n}
\end{array}
\right) = {\rm det} \, \left(\lambda^2 \, \rm{id}_{n \times n} + \frac{\partial^2 V}{\partial q_i \partial q_j} (\bar{q}) \right) =0.
\end{equation}
We can conclude from this calculation that the eigenvalues of the $n \times n$ symmetric matrix $\frac{\partial^2 V}{\partial q_i \partial q_j} (\bar{q})$ are $-\lambda^2$, where $\lambda$ are the eigenvalues of the $2n \times 2n$ matrix $M$. Hence, the eigenvalues of $M$ occur in pairs, denoted by
$\lambda_k, \, \lambda_{k+n}, \, k=1, \ldots n$, which have the form:
\begin{equation}
\lambda_k, \, \lambda_{k+n} = \pm \sqrt{-\sigma_k}, \quad k=1, \ldots, n,
\end{equation}
where $\sigma_k$ are the eigenvalues of the Hessian of the potential energy evaluated at the critical point $\bar{q}$ as denoted in \eqref{eiv_Hess}. Hence,
we see that the existence of equilibrium points of Hamilton's equations of ''saddle-like stability'' implies that there must be *at least* one negative eigenvalue of \eqref{hessian}. In fact, we have the following classification of the linearized stability of saddle-type equilibrium points of Hamilton's equations in terms of the critical points of the potential energy surface.
+ **Index 1 saddle.** One eigenvalue of \eqref{hessian} is negative, the rest are positive. We will assume that none of the eigenvalues of \eqref{hessian} are zero. Zero eigenvalues give rise to special cases that must be dealt with separately. In the mathematics literature, these are often referred to as *saddle-center-$\cdots$-center equilibria*, with the number of center-$\cdots$-center terms equal to the number of pairs of pure imaginary eigenvalues.
+ **Index 2 saddle.** Two eigenvalues of \eqref{hessian} are negative, the rest are positive,
and in general,
+ **Index k saddle.** *k* eigenvalues of \eqref{hessian} are negative, the rest are positive ($k \le n$).
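As a quick numerical check of the eigenvalue relationship above (a small sketch, not part of the original text), consider an arbitrarily chosen quadratic potential with an index-1 saddle at the origin:
```python
import numpy as np

# Hessian of V(q) = -q1**2/2 + q2**2/2 at the critical point q = 0
hess = np.array([[-1.0, 0.0],
                 [ 0.0, 1.0]])
sigma = np.linalg.eigvalsh(hess)   # eigenvalues of the Hessian

# Jacobian M of Hamilton's equations evaluated at (qbar, 0)
n = hess.shape[0]
M = np.block([[np.zeros((n, n)), np.eye(n)],
              [-hess,            np.zeros((n, n))]])
lam = np.linalg.eigvals(M)

print(sigma)                  # [-1.  1.]
print(np.sort_complex(lam))   # ±1 (real pair) and ±1j (imaginary pair), i.e. lambda = ±sqrt(-sigma)
```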
## References
{% bibliography --file reaction_dynamics --cited %}
$$\newcommand{\ket}[1]{\left|{#1}\right\rangle}$$
$$\newcommand{\bra}[1]{\left\langle{#1}\right|}$$
$$\newcommand{\braket}[2]{\left\langle{#1}\middle|{#2}\right\rangle}$$
## Search on a hypercube
The SKW search algorithm is based on a discrete quantum walk on a hypercube and is a convenient algorithm for searching an unstructured database. In this algorithm, the walk is performed on an n-dimensional hypercube and the size of the database is $N = 2^n$.
We place the $N$ data items on the vertices of the hypercube and represent each vertex with an n-bit string. Two vertices x and y are connected only if they differ in exactly one bit, i.e. the Hamming weight of their difference is one,
\begin{equation}
|\bar{x} \oplus \bar{y}| = 1.
\end{equation}
Fig. 1) The data mapping on the hypercube is displayed for $n = 3$. Three-component bit strings represent the vertices, and the edges are labeled by which bit must be flipped to reach the adjacent vertex.
For an $n$-bit string, e.g. $\textbf{x} = (0010000010)$, the Hamming weight is defined as follows [[S07]](https://arxiv.org/abs/0712.0625):
\begin{equation}
|\bar{x}| = \sum_{i=1}^n x_i.
\end{equation}
Since the hypercube is symmetric and each vertex is connected to n edges, the quantum walk uses an n-dimensional coin. The Hilbert space and the system state are defined as $H = H^n \otimes H^{2^n}$ and $\ket{d,\bar{x}}$ respectively, where $d$ denotes the state of the coin. The conditional shift operator maps the state $\ket{d,\bar{x}}$ to the state $\ket{d,\bar{x} \oplus \bar{e}_d}$, where $\bar{e}_d$ is the $d$-th basis vector of the hypercube (a vector whose components are all zero except the $d$-th one). Therefore, the unitary shift operator is defined as follows [[S08]](https://arxiv.org/abs/quant-ph/0205083):
\begin{equation}
S = \sum_{d=1}^{n} \sum_{\bar{x}} \ket{d,\bar{x} \oplus \bar{e}_d} \bra{d,\bar{x}}
\end{equation}
According to this equation, the walker's position moves to the adjacent vertex along edge $d$. The coin operator must also be a unitary transformation. In a standard quantum walk, only one coin operator is used, applied to all vertices of the graph (i.e., the coin operator does not change from one vertex to another), so the coin operator can be written in separable form:
\begin{equation}
C = C_0 \otimes I
\end{equation}
Choosing a suitable coin operator is important for the algorithm. Since we want all edges leading to the target vertex to be treated equally, the probability of the walker remaining in its current state $\ket{d,\bar{x}}$ must be the same for all values of $d$, and likewise the amplitude of transitions from $\ket{d,\bar{x}}$ to $\ket{d^{'},\bar{x}}$ must be the same for all $d$. The only unitary matrices that satisfy these conditions are $G$, $-G$, $I$ and $-I$. An appropriate and efficient choice is the Grover diffusion operator, a unitary operator for which the probability amplitudes spread through the quantum walk at the maximum possible speed [[S09]](http://stephanhoyer.com/pubs/thesis.pdf). This operator is defined as follows:
\begin{equation}
C_0 = G = -I_n + 2\ket{s^c} \bra{s^c},
\end{equation}
Fig. 2) Quantum walk mapping on a hypercube for a one-dimensional walk, in n = 3.
Or in the matrix form:
\begin{equation}
G = \begin{pmatrix}
\frac{2}{n} - 1 & ... & \frac{2}{n}\\
: & ... & : \\
\frac{2}{n} & ... & \frac{2}{n} -1
\end{pmatrix},
\end{equation}
In which
\begin{equation}
\ket{s^c} = \frac{1}{\sqrt{n}} \sum_{d=1}^n \ket{d},
\end{equation}
where $\sum_{d=1}^n \ket{d}\bra{d} = I_n$.
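As a small illustration (a numpy sketch, not part of the original text), the Grover coin for $n = 3$ can be constructed and checked explicitly:
```python
import numpy as np

n = 3
s_c = np.ones(n) / np.sqrt(n)              # the equal-superposition coin state |s^c>
G = 2.0 * np.outer(s_c, s_c) - np.eye(n)   # Grover diffusion: G = 2|s^c><s^c| - I

print(G)                                    # 2/n - 1 on the diagonal, 2/n elsewhere
print(np.allclose(G @ G.T, np.eye(n)))      # True: G is unitary (real and symmetric)
```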
A classical random walk on a hypercube has a high degree of symmetry. For example, if we start from a given vertex, any permutation of the vertices that preserves Hamming weight (distance from the starting vertex) leaves the walk unchanged. This symmetry requires that all vertices with the same Hamming weight carry the same probability.
Classically, vertices of equal weight are therefore equivalent, so we can lump them together and reduce the random walk on the hypercube to a biased random walk on a line [[S10]](https://arxiv.org/abs/quant-ph/0303081). The probability of a transition from vertex $i$ to vertex $i+1$ is $P_{i,i+1} = \frac{d - i}{d}$. In addition to being uniform and maximally diffusive, the Grover coin also preserves this permutation symmetry: it is invariant under simultaneous permutations of its rows and columns. It is, however, not a balanced coin, meaning that the probability that the coin keeps its direction is not equal to the probability that it changes direction [[S10]](https://arxiv.org/abs/quant-ph/0303081). To analyze the dynamics of the quantum walk we need to consider the evolution operator $U = S\,(C_0 \otimes I)$. The general state of the system after $t$ steps can be written as follows,
\begin{equation}
\ket{\psi (t)} = \sum_{d=1}^n \sum_{\bar{x} = 0}^{2^n - 1} \psi_{d,\bar{x}} (t) \ket{d,\bar{x}},
\end{equation}
where $\psi_{d,\bar{x}} (t)$ is the probability amplitude and the usual normalization condition holds. In general, this equation is very difficult to solve directly, but the Fourier transform can be used to diagonalize the evolution operator efficiently. In the computational basis, the discrete Fourier transform on this space is as follows:
\begin{equation}
\ket{\bar{k}} = \frac{1}{\sqrt{2^n}} \sum_{\bar{x} = 0}^{2^n - 1} (-1)^{\bar{k} \cdot \bar{x}} \ket{\bar{x}},
\end{equation}
where $\ket{\bar{k}}$ denotes the Fourier-transformed basis state. Now, by diagonalizing the evolution operator, the following eigenvalues can be obtained,
\begin{equation}
e^{\pm i \omega_k} = 1 - \frac{2k}{n} \pm 2i \sqrt{\frac{k}{n} (1 - (k/n))}.
\end{equation}
Therefore, the corresponding eigenvectors of the evolution operator can be written as follows
\begin{equation}
\ket{v_k},\ket{v_k}^* = \sum_{\bar{x},d} (-1)^{\bar{k}\cdot\bar{x}} \frac{2^{-\frac{n}{2}}}{\sqrt{2}} \ket{d,\bar{x}} \times \begin{cases}
1/\sqrt{k} & k_d = 1 \\
\mp i / \sqrt{n - k} & k_d = 0
\end{cases}
\end{equation}
Because of the symmetry of the hypercube, the vertices can always be relabeled so that the target vertex is $\bar{x}_{tg} = \bar{0}$. Hence, without loss of generality, we assume the marked vertex is $\bar{0}$; its actual location is unimportant. The perturbation of the coin operator, which introduces the spatial dependence of the new operator, is then described as follows
\begin{equation}
C^{'} = C_0 \otimes (I_{2^n} - \ket{\bar{x}_{tg}} \bra{\bar{x}_{tg}}) + C_1 \otimes \ket{\bar{x}_{tg}} \bra{\bar{x}_{tg}},
\end{equation}
In this expression, the first term applies the coin $C_0$ to every vertex except the target vertex, while the second term applies the marking coin $C_1$ (taken to be $C_1 = -I$ in the SKW algorithm) at the target. With these definitions, the perturbed evolution operator is
\begin{align*}
U^{'} &= S\,C^{'} \\
&= S\,\left(G \otimes I - (G + I) \otimes \ket{\textbf{0}}\bra{\textbf{0}}\right)\\
&= U - 2\,S\,(\ket{s^C}\bra{s^C} \otimes \ket{\textbf{0}}\bra{\textbf{0}})
\end{align*}
Analyzing the effect of this perturbation leads to the quantum walk search algorithm: a walker that starts in the uniform superposition over the whole space is driven, by this small perturbation of the coin, to concentrate at the marked vertex. This is the opposite of the standard quantum walk, in which the walker starts at a single vertex and spreads throughout the space.
Fig. 3) Simulation of the search algorithm on the hypercube for n = 2, when the marked vertex is $\ket{0}$ [[S09]](http://stephanhoyer.com/pubs/thesis.pdf).
Fig. 4) Quantum circuit of the quantum walk search algorithm on the n-dimensional hypercube network.
# Decoupling!
```python
%matplotlib inline
from __future__ import division
import numpy as np
import matplotlib.pyplot as pl
import matplotlib as mpl
#----- MATPLOTLIB paramaters ---------
mpl.rcParams.update({'font.size': 18,'font.family':'serif'})
mpl.rcParams['xtick.major.size'] = 7
mpl.rcParams['xtick.major.width'] = 1
mpl.rcParams['xtick.minor.size'] = 3
mpl.rcParams['xtick.minor.width'] = 1
mpl.rcParams['ytick.major.size'] = 7
mpl.rcParams['ytick.major.width'] = 1
mpl.rcParams['ytick.minor.size'] = 3
mpl.rcParams['ytick.minor.width'] = 1
#--------------------------------------
from scipy.interpolate import interp1d
from scipy.integrate import quad, odeint, solve_ivp, ode
import sympy as sp
x1 = sp.symbols('x1')
```
**Set up the equations of motion**
```python
def h_fun(s):
return sp.sqrt(s**-3 + s**-4)
deriv = sp.diff(h_fun(x1), x1)
def hprime_fun(s):
return float(deriv.subs(x1, s))
def jacob(y,s):
chi = y[0]
chiprime = y[1]
h = 1.0*h_fun(s)
hp = 1.0*hprime_fun(s)
return [[0,1],[(s*hp + h)/(s**2*h) - 2*np.sign(chi)*chi**-3*(L*(s*h)**2)**-1, -s*(s*hp + h)/(s**2*h) ]]
def dyds(y,s):
chi = y[0]
chiprime = y[1]
h = 1.0*h_fun(s)
hp = 1.0*hprime_fun(s)
#term2 = 0.0
#if (chi/L > 1e-5):
# term2 = (L*(s*h)**2*(chi**2))**-1
eqn = -((s*chiprime - chi)*(s*hp + h)/(s**2*h) + np.sign(chi)*(L*(s*h*chi)**2)**-1)#
return [chiprime, eqn]
```
```python
```
**Solving the equations of motion**
```python
L = 0.001
s0 = 1e-4*L
#Initial conditions
chi_init = [s0, 1.0]
s_list = np.logspace(np.log10(s0), np.log10(0.75*L), 100)
ys,output = odeint(dyds, chi_init, s_list, Dfun=jacob, full_output=True)
#print ys
```
```python
```
```python
pl.figure()
pl.plot(s_list/L, ys[:,0]/L)
pl.xlabel(r"$s/\lambda$")
pl.ylabel(r"$\chi/\lambda$")
pl.axvline(1.0/3.0, linestyle='--', color='k')
pl.title(r"$\lambda = " + str(L)+"$")
pl.show()
pl.figure()
pl.plot(s_list/L, ys[:,1]/L)
pl.xlabel(r"$s/\lambda$")
pl.ylabel(r"$\chi'/\lambda$")
pl.title(r"$\lambda = " + str(L)+"$")
pl.show()
```
```python
chi_interp = interp1d(s_list, ys[:,0])
y_list = np.linspace(1e-3*L, 0.5*L)
integ1 = lambda y: y**-4*h_fun(y)**-1*(chi_interp(y))**2
integ2 = lambda y: (1+y**(3.0/2.0))**2*y**-4*h_fun(y)**-1*(chi_interp(y))**2
pl.figure()
pl.plot(y_list, np.vectorize(integ1)(y_list), label='Without halo')
pl.plot(y_list, np.vectorize(integ2)(y_list), label='With halo')
pl.legend()
pl.show()
print(quad(integ1, s0, 0.5*L)[0]/L)
print(quad(integ2, s0, 0.5*L)[0]/L)
```
**Solving it a different way...**
```python
# scipy's `ode` interface expects a right-hand side with signature f(t, y),
# while `odeint` above used f(y, t), so we wrap dyds to swap the arguments
r = ode(lambda t, y: dyds(y, t)).set_integrator('dopri5', safety=0.5, beta=0.1)
r.set_initial_value(chi_init, s0)
dt = 1e-2*L
t1 = 2.0*L
r_list = [s0,]
t_list = [s0,]
while r.successful() and r.t < t1:
res = r.integrate(r.t+dt)
    t_list = np.append(t_list, r.t)  # r.t has already been advanced to the new time by integrate()
print(r.y)
r_list = np.append(r_list,r.y[0])
#print(r.t+dt, r.integrate(r.t+dt))
```
```python
pl.figure()
pl.plot(t_list,r_list)
pl.ylim(-1, 1)
pl.show()
```
```python
```
# Laplace
Notebook to perform Laplace transformations
Author: Lucas Schneider
---
## Initialization
```python
import sympy as sym
import numpy as np
from sympy.integrals import laplace_transform
from sympy.integrals import inverse_laplace_transform
from IPython.display import display
from IPython.display import Math
from sympy.interactive import printing
import scipy.signal as sig
# sym.init_printing()
```
```python
t_var = sym.symbols('t', real=True)
s_var = sym.symbols('s')
d_domain = 'RR'
d_digits = 5
```
```python
def round_expr(expr, num_digits = d_digits):
return expr.xreplace({n : round(n, num_digits) for n in expr.atoms(sym.Number)})
```
---
## Direct Transform
```python
# Other symbols declaration
phi = sym.symbols('φ', real=True)
```
```python
# Input
expression = sym.cos(t_var)
expression *= sym.Heaviside(t_var)
print(str(expression).replace('**','^'))
expression
```
```python
U = laplace_transform(expression, t_var, s_var)
U[0]
```
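As a quick sanity check (not part of the original notebook), applying the inverse transform to the result should recover the time-domain expression we started from:
```python
# Expected to return cos(t)*Heaviside(t), up to how SymPy chooses to present it
inverse_laplace_transform(U[0], s_var, t_var)
```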
---
## Inverse Transform
### Functions definition
```python
def poly_from_list(num_list, den_list, domain = d_domain, print_latex=True, print_string=False):
'''
Transforms the list of coefficients into sympy polynomial objects
Parameters
----------
num_list : list of floats
den_list : list of floats
domain : string
print_latex : bool
print_string : bool
Returns
-------
num_pol : sympy.Poly
den_pol : sympy.Poly
'''
num_pol = sym.Poly(num_list, s_var, domain=domain)
den_pol = sym.Poly(den_list, s_var, domain=domain)
F = num_pol / den_pol
F_str = str(F).replace('**','^').replace('I', 'i').replace('exp', 'e^')
if print_latex:
display(Math(f'F(s) = {printing.default_latex(F)}'))
if print_string:
print(f'String for Wolfram: {F_str}\n')
return num_pol, den_pol
```
```python
def residue(num_pol, den_pol, print_latex=True, print_string=False):
'''
    Calculates the partial fractions and residues for the given rational polynomial
    Parameters
    ----------
    num_pol : list of floats
    den_pol : list of floats
print_latex : bool
print_string : bool
Returns
-------
residues: list of floats
Resiudes of the fraction part
poles: list of floats
Poles of the fraction part
multi: list of floats
Poles's multiplicity
complete : list of floats
Coefficients of complete part
'''
    r, p, k = sig.residue(num_pol, den_pol)  # scipy expects coefficient lists, highest order first
m=[]
last_pole = None
last_s = 1
for pole in p:
if pole == last_pole:
m.append(last_multi + 1)
last_multi += 1
else:
m.append(1)
last_multi = 1
last_pole = pole
# TODO: Print residues table (with Pandas?)
return r, p, m, k
```
```python
def partial_fractions(residues, poles, multi, complete, domain=d_domain):
'''
Display the partial fraction representation from the residues
Parameters
----------
residues: list of floats
poles: list of floats
multi: list of floats
complete : list of floats
domain : string
'''
terms = []
terms.append(sym.Poly(complete, s_var, domain=domain).as_expr())
for i in range(len(residues)):
residue = residues[i]
pole = poles[i]
mult = multi[i]
term = sym.Mul(residue, 1/(s_var - pole)**mult)
terms.append(term)
    display(Math(f'F(s) = {printing.default_latex(round_expr(sum(terms)))}'))
```
```python
def conjugate_seen(pole, multi, poles_seen):
'''
Checks if an specific pole or its conjugate has already been computed before, with the same multiplicity
Parameters
----------
    pole : complex
multi : int
poles_seen : list of tuples, with pole and its multiplicity
Returns
-------
True or False
'''
for pole_seen, multi_seen in poles_seen:
if (pole == pole_seen.conjugate()) and (multi == multi_seen):
return True
return False
def ILP_from_residues(residues, poles, multi, complete, complex_simplify=True, print_latex=True, print_string=False):
'''
Perform the Inverse Laplace Transform for the given residues
Parameters
----------
residues: list of floats
poles: list of floats
multi: list of floats
complete : list of floats
complex_simplify : bool
print_latex : bool
print_string : bool
Returns
-------
f : sympy.Expr
'''
f_terms_complete = [complete[i] * sym.diff(sym.DiracDelta(t_var), t_var, len(complete) -1 -i) for i in range(len(complete))]
f_terms_fraction = []
complex_poles_pairs = []
for i in range(len(residues)):
residue = complex(residues[i])
pole = complex(poles[i])
mult = multi[i]
if (abs(pole.imag) > 0.) and complex_simplify:
term = 0
if not conjugate_seen(pole, mult, complex_poles_pairs):
if pole.imag < 0.:
Ak = residue.conjugate()
pk = pole.conjugate()
else:
Ak = residue
pk = pole
term = sym.Mul(t_var ** (mult-1) / sym.factorial(mult-1), 2 * sym.sqrt(Ak.real ** 2 + Ak.imag ** 2), sym.exp(pk.real * t_var), sym.cos(sym.Add(pk.imag*t_var, sym.arg(Ak)), evaluate=False), evaluate=False)
complex_poles_pairs.append((pole, mult))
else:
continue
else:
term = sym.Mul(residues[i] / sym.factorial(mult - 1), t_var ** (mult - 1), sym.exp(poles[i] * t_var), evaluate=False)
f_terms_fraction.append(term)
f = sum(f_terms_complete) + sym.Mul(sum(f_terms_fraction), sym.Heaviside(t_var), evaluate=False)
f = round_expr(f)
f_str = str(f).replace('**','^').replace('I', 'i').replace('exp', 'e^')
if print_latex:
display(Math(f'f(t) = {printing.default_latex(f)}'))
if print_string:
print(f'String for Wolfram: {f_str}\n')
return f
```
```python
def evaluate_f(f, t, print_latex = True):
f_t = f.evalf(subs={t_var: t})
if print_latex:
display(Math(f'f({t}) = {f_t}'))
return f_t
```
### Rational functions
Define the polynomials
```python
N_list = [4,0]
D_list = [1,2,16,32]
polys = poly_from_list(N_list, D_list, print_string=True)
```
$\displaystyle F(s) = \frac{4.0 s}{1.0 s^{3} + 2.0 s^{2} + 16.0 s + 32.0}$
String for Wolfram: 4.0*s/(1.0*s^3 + 2.0*s^2 + 16.0*s + 32.0)
```python
res = residue(N_list, D_list)
partial_fractions(*res)
f_res = ILP_from_residues(*res, complex_simplify=True, print_string=False)
```
$\displaystyle F(s) = \frac{0.2 + 0.4 i}{s + 4.0 i} + \frac{0.2 - 0.4 i}{s - 4.0 i} - \frac{0.4}{s + 2.0}$
$\displaystyle f(t) = \left(0.89443 \cos{\left(4.0 t - 1.10715 \right)} - 0.4 e^{- 2.0 t}\right) \theta\left(t\right)$
### Any function
```python
# Use the polynomials from above
polys = poly_from_list(N_list, D_list, domain='QQ', print_latex=False)
F_func = polys[0] / polys[1]
# Define function with sympy expression
# F_func = (sym.cos(sym.pi/4) * s_var -2*sym.sin(sym.pi/4))/(s_var**2 + 4)
display(Math(f'F(s) = {printing.default_latex(F_func)}'))
```
$\displaystyle F(s) = \frac{s + 3}{s^{3} + 3 s^{2} + 2 s}$
```python
f_func = inverse_laplace_transform(F_func, s_var, t_var).simplify()
display(Math(f'f(t) = {printing.default_latex(f_func)}'))
# result = evaluate_f(f_res, 0.5)
```
$\displaystyle f(t) = \frac{\left(3 e^{2 t} - 4 e^{t} + 1\right) e^{- 2 t} \theta\left(t\right)}{2}$
```python
```
# Fitting a straight line to data
_Inspired by [Hogg et al. 2010](https://arxiv.org/abs/1008.4686) and [@jakevdp's notes](https://github.com/jakevdp/ESAC-stats-2014)_.
Python imports we'll need later...
```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import minimize
%matplotlib inline
rnd = np.random.RandomState(seed=42)
```
---
# Intro and choice of objective function
I want to start with a problem that everyone is probably familiar with or has at least seen before. The problem is this: we observe $N$ independent data points $\boldsymbol{y}=\{y_1,y_2,...y_N\}$ with uncertainties $\boldsymbol{\sigma}=\{\sigma_1,\sigma_2,...\sigma_N\}$ at perfectly-measured values $\boldsymbol{x}=\{x_1,x_2,...x_N\}$. We have reason to believe that the these data were generated by a process that is well-represented by a straight-line, and the only reason that the data deviate from this straight line is because of uncorrelated, Gaussian measurement noise in the $y$-direction. Let's first generate some data that meet these qualifications:
```python
n_data = 16 # number of data points
a_true = 1.255 # randomly chosen truth
b_true = 4.507
```
---
### Exercise 1:
1. Randomly generate an array of uniformly-distributed `x` values from the domain `(0,2)`.
2. Sort the values in ascending order.
```python
# Fill in your solution here
x = rnd.rand(n_data) * 2
x = np.sort(x)
```
Execute the code below and verify that it executes:
```python
# evaluate the true model at the given x values
y = a_true*x + b_true
# Heteroscedastic Gaussian uncertainties only in y direction
y_err = rnd.uniform(0.1, 0.2, size=n_data) # randomly generate uncertainty for each datum
y = y + rnd.normal(0, y_err) # add noise to y data
```
```python
plt.errorbar(x, y, y_err, marker='o', linestyle='none')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.tight_layout()
```
---
Now let's forget that we did that -- we know nothing about the model parameters, except that we think the true values of the data are well-described by a linear relation! We would like to measure the "best-fit" parameters of this model (for a straight line, the slope and intercept $(a,b)$) given the data above. In math, our model for the data $y$ is:
$$
\begin{align}
y &= f(x \,;\, a, b) + {\rm noise}\\
f(x \,;\, a, b) &= a\,x + b
\end{align}
$$
For a given set of parameters, $(a,b)$, we can evaluate our model $f(x \,;\, a, b)$ at a given $x$ location to compute the value of $y$ that we would expect in the absence of noise. For example, for the $n$th datum and for a given set of parameter values $(a,b)$:
$$
\tilde{y}_n = f(x_n \,;\, a, b)
$$
Now, we somehow want to search through all possible values of $a,b$ to find the "best" values, given the data, with some definition of "best." When we say this word, we are implying that we want to _optimize_ (find the maximum or minimum) some _objective function_ (a function that takes our data, our model, and returns a quantification of "best", usually as a scalar). Numerically, this scalar objective function can be any function (though you probably want it to be convex) and you will see different choices in practice. You have some leeway in this choice depending on whether your goal is _prediction_, _discovery_, or _data compression_.
However, for _inference_—the typical use-case for us as scientists—you don't have this freedom: one of the conclusions of this talk is going to be that __you have no choice about what "best" means__! Before we get there, though, let's explore what seem like reasonable choices.
Here are a few desirable features we'd like any objective function to have:
1. For a given set of parameters, we should compare our predicted values to the measured values and base our objective function on the differences
2. The scalar value should be dimensionless (the value of the objective function shouldn't care if we use kilometers vs. parsecs)
3. Data points that have larger errors should contribute less to the objective function (if a datum has a large offset from the predicted value, it shouldn't matter _if_ the datum has a large uncertainty)
4. Convexity
To meet these three criteria, whatever objective function we choose should operate on the (dimensionless) quantities:
$$
\chi_n = \frac{y_n - \tilde{y}_n(x_n; a,b)}{\sigma_n}
$$
i.e. the difference between our predicted values $\tilde{y}$ and the observed $y$ values, weighted by the inverse uncertainties $\sigma$. The uncertainties have the same units as the data, so this is a dimensionless quantity. It also has the nice property that, as we wanted, points with large uncertainties are _downweighted_ relative to points with small uncertainties. Here are some ideas for objective functions based on this scalar:
- __Weighted absolute deviation__: the sum of the absolute values
$\sum_n^N \, \left|\chi_n\right|$
- __Weighted squared deviation__: the sum of the squares
$\sum_n^N \, \chi_n^2$
- __Weighted absolute deviation to some power__ $p$:
$\sum_n^N \, \left|\chi_n\right|^p $
_(Note: don't show this to statisticians or they will get me fired. To a statistician, $\chi^2$ is a distribution not a statistic...but astronomers seem to use this terminology.)_
For simplicity, let's just compare two of these: the absolute deviation and the squared deviation.
---
### Exercise 2:
Implement the functions to compute the weighted deviations below
```python
# FILL IN THESE FUNCTIONS:
def line_model(pars, x):
return pars[0] * x + pars[1]
def weighted_absolute_deviation(pars, x, y, y_err):
chi = np.divide(y - line_model(pars,x), y_err)
return sum(np.abs(chi))
def weighted_squared_deviation(pars, x, y, y_err):
chi_squared = np.divide(y - line_model(pars,x), y_err)**2
return sum(chi_squared)
```
Verify that you've correctly implemented your functions by executing the following cell:
```python
_pars = [1., -10.]
_x = np.arange(16)
_y = _x
_yerr = np.ones_like(_x)
truth = np.array([-10., -9., -8., -7., -6., -5., -4., -3., -2., -1., 0., 1., 2., 3., 4., 5.])
assert np.allclose(line_model(_pars, _x), truth), 'Error in line_model() function!'
assert weighted_absolute_deviation(_pars, _x, _y, _yerr) == 160., 'Error in weighted_absolute_deviation() function!'
assert weighted_squared_deviation(_pars, _x, _y, _yerr) == 1600., 'Error in weighted_squared_deviation() function!'
```
---
We can demonstrate that these are convex (over some domain) by computing the objective function values over a grid of parameter values (a grid in $a, b$):
```python
# make a 256x256 grid of parameter values centered on the true values
a_grid = np.linspace(a_true-2., a_true+2, 256)
b_grid = np.linspace(b_true-2., b_true+2, 256)
a_grid,b_grid = np.meshgrid(a_grid, b_grid)
ab_grid = np.vstack((a_grid.ravel(), b_grid.ravel())).T
```
```python
fig,axes = plt.subplots(1, 2, figsize=(9,5.1), sharex=True, sharey=True)
for i,func in enumerate([weighted_absolute_deviation, weighted_squared_deviation]):
func_vals = np.zeros(ab_grid.shape[0])
for j,pars in enumerate(ab_grid):
func_vals[j] = func(pars, x, y, y_err)
axes[i].pcolormesh(a_grid, b_grid, func_vals.reshape(a_grid.shape),
cmap='Blues', vmin=func_vals.min(), vmax=func_vals.min()+256) # arbitrary scale
axes[i].set_xlabel('$a$')
# plot the truth
axes[i].plot(a_true, b_true, marker='o', zorder=10, color='#de2d26')
axes[i].axis('tight')
axes[i].set_title(func.__name__, fontsize=14)
axes[0].set_ylabel('$b$')
fig.tight_layout()
```
There are minima in both cases near the true values of the parameters (good), but the functions clearly look different. Which one should we choose for finding the best parameters?
In order to pick between these two, or any of the arbitrary objective functions we could have chosen, we have to _justify_ using one function over the others. In what follows, we'll justify optimizing the sum of the squared deviations (so-called "least-squares fitting") by thinking about the problem _probabilistically_, rather than procedurally.
### Least-squares fitting
Let's review the assumptions we made above in generating our data:
1. The data were generated by a straight line
2. Uncorrelated, _known_ Gaussian uncertainties in $y$ cause deviations between the data and predictions
3. The data points are independent
4. The $x$ data are known perfectly, or at least their uncertainties are _far smaller_ than the uncertainties in $y$
First off, these assumptions tell us that for each datum $(x_n, y_n)$ there is some true $y_{n,{\rm true}}$, and because of limitations in our observing process we can't observe the truth, but we know that the values we do observe will be Gaussian (Normal) distributed around the true value. _(Note: This assumption tends to be a good or at least a conservative approximation in practice, but there are certainly more complex situations when, e.g., you have asymmetric uncertainties, or error distributions with large tails!)_. In math:
$$
\begin{align}
p(y \,|\, y_{\rm true}) &= \mathcal{N}(y \,|\, y_{\rm true}, \sigma^2) \\
\mathcal{N}(y \,|\, y_{\rm true}, \sigma^2) &= (2\pi \sigma^2)^{-1/2} \, \exp\left(-\frac{1}{2} \frac{(y-y_{\rm true})^2}{\sigma^2} \right)
\end{align}
$$
This is the likelihood of observing a particular $y$ given the true $y_{\rm true}$. Note that in our model, all of the $y_{\rm true}$'s must lie on a line. It is also interesting that the argument of the normal distribution looks a lot like $\chi^2$!
What about considering two data points, $y_1$ and $y_2$? Now we need to write down the _joint_ probability
$$
p(y_1, y_2 \,|\, y_{1,{\rm true}}, \sigma_1, y_{2,{\rm true}}, \sigma_2)
$$
But, note that in assumption 3 above, we are assuming the data are independent. In that case, the random error in one point does not affect the random error in any other point, so the joint probability can be turned into a product:
$$
p(\{y_n\} \,|\, \{y_{n,{\rm true}}\}, \{\sigma_n\}) = \prod_n^N \, p(y_n \,|\, y_{n,{\rm true}}, \sigma_n)
$$
This is the full expression for the likelihood of the observed data given the true $y$ values. Recall that these true values, according to our assumptions, must lie on a line with some parameters, and we're trying to infer those parameters! We can compute a particular $y_{n,{\rm true}}$ using $x_n$ and a given set of model parameters $a, b$. With that in mind, we can write the likelihood instead as:
$$
p(\{y_n\} \,|\, a, b, \{x_n\}, \{\sigma_n\}) = \prod_n^N \, p(y_n \,|\, a, b, x_n, \sigma_n)
$$
So what are the "best" values of the parameters $a, b$? They are the ones that _maximize_ this likelihood!
The product on the right of the likelihood is a product over exponentials (well, Gaussians), which can be annoying to deal with. But, maximizing the likelihood is equivalent to maximizing the _log_-likelihood -- so we can get rid of the product and all of those exponentials by taking the log of both sides:
$$
\begin{align}
\ln p(\{y_n\} \,|\, a, b, \{x_n\}, \{\sigma_n\}) &= \sum_n^N \, \ln\left[p(y_n \,|\, a, b, x_n, \sigma_n)\right] \\
&= \sum_n^N \ln \left[(2\pi \sigma_n^2)^{-1/2} \,
\exp\left(-\frac{1}{2} \frac{(y_n-(a\,x_n+b))^2}{\sigma_n^2} \right) \right] \\
&= -\frac{N}{2}\ln(2\pi)
- \frac{1}{2} \sum_n^N \left[\frac{(y_n-(a\,x_n+b))^2}{\sigma_n^2} + \ln{\sigma_n^2} \right]
\end{align}
$$
In this case, the uncertainties are known and constant, so to maximize this expression we only care that (abbreviating the likelihood as $\mathcal{L}$):
$$
\begin{align}
\ln \mathcal{L} &= - \frac{1}{2} \sum_n^N \left[\frac{(y_n-(a\,x_n+b))^2}{\sigma_n^2}\right] + {\rm const.} \\
&= - \frac{1}{2} \sum_n^N \, \chi_n^2 + {\rm const.} \\
\end{align}
$$
Apparently, _minimizing_ the sum of the weighted squared deviations is equivalent to _maximizing_ the (log) likelihood derived from thinking about the probability of the data! That is great because (a) it directly gives us the uncertainties on the inferred model parameters, and (b) it's an analytic way to solve this problem using linear algebra which is _really_ fast!
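Before turning to the analytic linear-algebra solution, a quick numerical check (a small sketch, not one of the original exercises): minimizing `weighted_squared_deviation` with `scipy.optimize.minimize` (imported at the top) should land close to the true parameters.
```python
result = minimize(weighted_squared_deviation, x0=[1., 1.],
                  args=(x, y, y_err), method='Nelder-Mead')
print(result.x)  # should be close to (a_true, b_true) = (1.255, 4.507)
```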
### Least-squares / maximum likelihood with matrix calculus
Using linear algebra, we can simplify and generalize a lot of the expressions above. In what follows, all vectors are column vectors and are represented by lower-case bold symbols. Matrices are upper-case bold symbols.
We'll start by writing our model as a matrix equation. To do that, we need a way to, for a given set of parameters, compute the set of predicted $y$'s. This is done by defining the parameter vector, $\boldsymbol{\theta}$, and a matrix typically called the _design matrix_, $\boldsymbol{X}$:
$$
\boldsymbol{\theta} = \begin{bmatrix} b \\ a \end{bmatrix} \quad
\boldsymbol{X} = \begin{bmatrix} 1 & x_1 \\ 1 & x_2 \\ \vdots & \vdots \\ 1 & x_N \end{bmatrix}
$$
(note the order of the parameters!). With these definitions, the vector of predicted $y$ values is just
$$
\boldsymbol{y}_{\rm pred} = \boldsymbol{X} \, \boldsymbol{\theta}
$$
so the deviation vector between the prediction and the data is just $(\boldsymbol{y}-\boldsymbol{X} \, \boldsymbol{\theta})$ where
$$
\boldsymbol{y} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_N \end{bmatrix}
$$
But how do we include the uncertainties? We'll pack the list of uncertainties (variances) into the trace of a 2D, $N \times N$ matrix called the _covariance matrix_. Because we are assuming the uncertainties are independent, the off-diagonal terms are all zero:
$$
\boldsymbol{\Sigma} = \begin{bmatrix}
\sigma_1^2 & 0 & \dots & 0 \\
0 & \sigma_2^2 & \dots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \sigma_N^2
\end{bmatrix}
$$
With these matrices, we can write the expression for $\chi^2$ (and therefore the log-likelihood) very concisely:
$$
\begin{align}
\chi^2 &= \left(\boldsymbol{y} - \boldsymbol{X}\,\boldsymbol{\theta}\right)^\mathsf{T} \,
\boldsymbol{\Sigma}^{-1} \,
\left(\boldsymbol{y} - \boldsymbol{X}\,\boldsymbol{\theta}\right) \\
\ln\mathcal{L} &= -\frac{1}{2}\left[N\,\ln(2\pi)
+ \ln|\boldsymbol{\Sigma}|
+ \left(\boldsymbol{y} - \boldsymbol{X}\,\boldsymbol{\theta}\right)^\mathsf{T} \,
\boldsymbol{\Sigma}^{-1} \,
\left(\boldsymbol{y} - \boldsymbol{X}\,\boldsymbol{\theta}\right)
\right]
\end{align}
$$
In this form, the terms in the $\chi^2$ have a nice geometric interpretation: This looks like a distance between the data and the model computed with the metric $\boldsymbol{\Sigma}$.
If you solve for the optimum of the log-likelihood function (take the derivative with respect to $\boldsymbol{\theta}$ and set equal to 0), you find that:
$$
\newcommand{\trpo}[1]{{#1}^{\mathsf{T}}}
\newcommand{\bs}[1]{\boldsymbol{#1}}
\bs{\theta}_{\rm best} = \left[\trpo{\bs{X}} \, \bs{\Sigma}^{-1} \, \bs{X}\right]^{-1} \,
\trpo{\bs{X}} \, \bs{\Sigma}^{-1} \, \bs{y}
$$
Getting the best-fit parameters just requires a few simple linear algebra operations! As an added bonus, we also get the _uncertainties_ on the parameters. The $2\times2$ covariance matrix for the best-fit parameters is given by the matrix:
$$
\newcommand{\trpo}[1]{{#1}^{\mathsf{T}}}
\newcommand{\bs}[1]{\boldsymbol{#1}}
C = \left[\trpo{\bs{X}} \, \bs{\Sigma}^{-1} \, \bs{X}\right]^{-1}
$$
That means we can just write out the linear algebra explicitly and use `numpy.linalg` to solve it for us!
### Exercise 3:
Implement the necessary linear algebra to solve for the best-fit parameters and the parameter covariance matrix, defined above. Call these `best_pars` and `pars_Cov`, respectively.
```python
X = np.vstack((np.ones(n_data), x)).T
sigma = np.diag(y_err**2)  # the covariance matrix has the *variances* on its diagonal
sigma_inv = np.linalg.inv(sigma)

pars_Cov = np.linalg.inv(X.T @ sigma_inv @ X)
best_pars = pars_Cov @ X.T @ sigma_inv @ y
print(best_pars)
```
[4.52632121 1.22928402]
```python
np.polyfit(x,y,deg=1)
```
array([1.22042911, 4.54541 ])
Now let's look at the covariance matrix of the parameters (the uncertainty in the parameters) and plot the 1 and 2-sigma error ellipses:
```python
# some tricks to get info we need to plot an ellipse, aligned with
# the eigenvectors of the covariance matrix
eigval,eigvec = np.linalg.eig(pars_Cov)
angle = np.degrees(np.arctan2(eigvec[1,0], eigvec[0,0]))
w,h = 2*np.sqrt(eigval)
```
```python
from matplotlib.patches import Ellipse
fig,ax = plt.subplots(1, 1, figsize=(5,5))
for n in [1,2]:
ax.add_patch(Ellipse(best_pars, width=n*w, height=n*h, angle=angle,
fill=False, linewidth=3-n, edgecolor='#555555',
label=r'{}$\sigma$'.format(n)))
ax.plot(b_true, a_true, marker='o', zorder=10, label='truth')
ax.plot(best_pars[0], best_pars[1], marker='o', zorder=9, label='estimate')
ax.set_xlabel('$b$')
ax.set_ylabel('$a$')
ax.legend(loc='best')
fig.tight_layout()
```
There we have it! The best-fit parameters and their errors for the straight-line fit, optimized with the only justifyable objective function, directly from a few linear algebra calculations.
This approach can be generalized somewhat (e.g. to account for correlated errors as off-diagonal elements in the covariance matrix $\Sigma$), but it only works **for models with linear parameters**, meaning that parameters enter linearly in the model function $f$.
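To close the loop visually (a small addition, not part of the original notebook), we can overplot the maximum-likelihood line on the data; recall that `best_pars` stores the intercept first and the slope second.
```python
plt.errorbar(x, y, y_err, marker='o', linestyle='none', label='data')
x_grid = np.linspace(x.min(), x.max(), 128)
plt.plot(x_grid, best_pars[1]*x_grid + best_pars[0], color='k', label='best fit')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.legend(loc='best')
plt.tight_layout()
```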
---
# Bonus material (peruse at your leisure)
## The Bayesian approach
Let's review what we did so far. We found that standard weighted least squares fitting is a justified approach to estimating the best-fit parameters because it optimizes the likelihood of the data under the assumptions of our model; it optimizes a _justified scalar objective function_. We then fit our straight-line model to the data using and got back a point-estimate of the best parameters along with a covariance matrix describing the uncertainties in the parameters. This is the way of the _frequentist_. What we're going to do now is see what happens if we switch to a Bayesian methodology instead. While the two methods end up looking mathematically identical, there are fundamental philosophical differences that can lead to very different interpretations and implementations when models are more complex than the toy example we use above.
As Bayesians, we aren't interested in a point-estimate of the best parameters, but rather we're interested in the inferred distribution of possible parameter values (the _posterior probability distribution function_ over parameters). So how do we write down or solve for this posterior pdf? Before we get to that, let's take a look at a fundamental equation of Bayesian statistics, [Bayes' theorem](https://en.wikipedia.org/wiki/Bayes'_theorem), which we'll derive using the joint probability of $A$ and $B$ which are conditional on some other information $I$ that, right now, we don't care about. For example, $A$ could be the time it takes to get from here to NYC, $B$ could be the amount of traffic on the road, and $I$ could include the information that we're driving a car and not walking. Bayes' theorem as expressed below is not controversial -- Bayesians and Frequentists agree that this is just how joint and conditional probabilities work. We start by writing down the joint probability of $A$ and $B$, then factor it in two ways into conditional proabilities:
$$
p(A,B \,|\, I) = p(A\,|\,B, I)\,p(B\,|\, I) = p(B\,|\,A, I)\,p(A \,|\, I)
$$
Now we look at the right two expressions, and divide by one of the marginal probabilities to get:
$$
p(A\,|\,B, I) = \frac{p(B\,|\,A, I)\,p(A \,|\, I)}{p(B\,|\, I)}
$$
Ok, so that's all fine. Now let's replace $A$ and $B$ with variables that represent, from our example above, our data $D=(\{x_n\},\{y_n\},\{\sigma_n\})$ and our model parameters $\boldsymbol{\theta}$:
$$
p(\boldsymbol{\theta}\,|\,D, I) = \frac{p(D\,|\,\boldsymbol{\theta}, I)\,p(\boldsymbol{\theta} \,|\, I)}{p(D\,|\, I)}
$$
In just switching the meaning of the variables, this expression becomes controversial! Frequentists would object to the above for two main reasons:
1. The term on the left hand side is a probability over parameters given the data (the _posterior_ pdf) $p(\boldsymbol{\theta}\,|\,D, I)$. This is something that a frequentist would say cannot exist - there is only one true vector of parameters that we are trying to learn, not a distribution!
2. The right-most term in the numerator is a probability over parameters _with no dependence on the data_ (the _prior_ pdf). This encapsulates all of our prior knowledge about the parameters before we did the experiment and observed some data. This is perhaps the aspect of Bayesian inference that frequentists most disagree with.
The differences above result from the fact that probability means something different to Frequentists and Bayesians. Bayesians think of probability as representing a _degree of belief_ about something, whereas a frequentist thinks of a probability as related to _limiting frequencies of occurrence_ in repeated trials or observations. This is a rich topic and I highly recommend reading [this series of blogposts](http://jakevdp.github.io/blog/2014/03/11/frequentism-and-bayesianism-a-practical-intro/) by Jake Vanderplas to learn more. For now, let's put on Bayesian hats and take a look at the implications of the expression above.
(_It's good to remember that we're all friends. The differences are based on philosophy and so can lead to some heated discussions and debates, but remember we're all trying to do science -- we're on the same team!_)
## Bayes' theorem and Bayesian inference
Let's decompose Bayes' theorem (as applied to modeling and inference). The four terms in Bayes' theorem above have names that are good to be familiar with:
- $p(\boldsymbol{\theta}\,|\,D, I)$ - __posterior probability__:
This is the thing we are after when we do Bayesian inference or model fitting. We want to know what the distribution of possible parameter values is, given the data we observe and any prior information or assumptions $I$.
- $p(D\,|\,\boldsymbol{\theta}, I)$ - __likelihood__:
This is the likelihood of the data given a particular set of model parameters. We've already seen this object and used it above to find the best-fit model parameters by maximizing this function. In a Bayesian context, it can also be thought of as a distribution -- it's a distribution that generates new datasets given a model instance. For that reason, we typically refer to models that produce a likelihood as _generative models_ because they specify how to generate new data sets that look like the one you observe. As we saw above when we wrote the likelihood function for a straight line model and data with Gaussian errors, the likelihood usually contains a component that can be interpreted as the _noise model_.
- $p(\boldsymbol{\theta} \,|\, I)$ - __prior probability__
This contains any relevant information about our parameters that we know before observing the data. This can include physical constraints, previous measurements, or anything, really. This flexibility is what makes the prior a somewhat controversial object. In practice, the prior only really matters if it is much narrower than the likelihood function. If the prior is broad with respect to the likelihood, the information in the likelihood makes the prior almost irrelevant. However, there are several subtleties to choosing priors that need to be considered. As an example, one subtlety comes from the choice of coordinates for the model parameters: a prior that is broad and flat in a parameter $\alpha$ won't be broad and flat if you change variables to $\beta = \alpha^2$.
- $p(D\,|\, I)$ - __evidence__ or __fully marginalized likelihood__ (FML)
In many cases the evidence is simply a normalization constant and, for some of the most relevant algorithms used in inference, can be ignored. This term involves an integral over all of parameter space that can be very difficult to compute:
$$
p(D\,|\, I) = \int \,\mathrm{d}\boldsymbol{\theta} \, p(D\,|\,\boldsymbol{\theta}, I) \, p(\boldsymbol{\theta} \,|\, I)
$$
If you need to do Bayesian model selection (e.g., decide between models with different parameters), you unfortunately need to compute this quantity. But if you only _think_ you need the FML, beware!
So how do we make use of all of this, in practice?
Let's return to our example of fitting a line to data with the same data as above. In some sense, we are almost done once we write down an expression for the posterior pdf. If we ignore the FML, this amounts to multiplying a likelihood by a prior pdf. Well, we've already done the most important part: we already wrote down the likelihood function! This is often the hardest part and what we spend the most time doing as scientists (well, assuming you're not building the instrument to observe the data!). We now need to define a prior pdf over the model parameters. Here we have some flexibility. Two possibilities you can always consider:
1. A completely uninformative prior, based on dimensionality, symmetry, or entropy arguments (sometimes, this will mean using a _flat prior_ or _uniform prior_)
2. An empirical prior, based on previous _independent data_ that constrains this model (e.g., a previous measurement of the model parameters from an earlier dataset)
For simplicity, we're going to assume a flat prior over both slope and intercept. Note that for this problem, this is [_not_ an uninformative prior](http://jakevdp.github.io/blog/2014/06/14/frequentism-and-bayesianism-4-bayesian-in-python/). For now, we'll assume that the data are informative enough that the small bias we introduce by using this prior is negligible. Let's now define the functions we'll need, and recall that
$$
\ln\mathcal{L} = -\frac{1}{2}\left[N\,\ln(2\pi)
+ \ln|\boldsymbol{\Sigma}|
+ \left(\boldsymbol{y} - \boldsymbol{X}\,\boldsymbol{\theta}\right)^\mathsf{T} \,
\boldsymbol{\Sigma}^{-1} \,
\left(\boldsymbol{y} - \boldsymbol{X}\,\boldsymbol{\theta}\right)
\right]
$$
### Exercise 4:
Implement the log-prior method (`ln_prior`) on the model class below.
#### Solution:
```python
class StraightLineModel(object):
def __init__(self, x, y, y_err):
"""
We store the data as attributes of the object so we don't have to
keep passing it in to the methods that compute the probabilities.
"""
self.x = np.asarray(x)
self.y = np.asarray(y)
self.y_err = np.asarray(y_err)
def ln_likelihood(self, pars):
"""
We don't need to pass in the data because we can access it from the
attributes. This is basically the same as the weighted squared
deviation function, but includes the constant normalizations for the
Gaussian likelihood.
"""
N = len(self.y)
dy = self.y - line_model(pars, self.x)
ivar = 1 / self.y_err**2 # inverse-variance
return -0.5 * (N*np.log(2*np.pi) + np.sum(2*np.log(self.y_err)) + np.sum(dy**2 * ivar))
def ln_prior(self, pars):
"""
The prior only depends on the parameters, so we don't need to touch
the data at all. We're going to implement a flat (uniform) prior
over the ranges:
a : [0, 100]
b : [-50, 50]
"""
a, b = pars # unpack parameters
ln_prior_val = 0. # we'll add to this
if a < 0 or a > 100.:
return -np.inf
else:
ln_prior_val += np.log(1E-2) # normalization, log(1/100)
if b < -50 or b > 50.:
return -np.inf
else:
ln_prior_val += np.log(1E-2) # normalization, log(1/100)
return ln_prior_val
def ln_posterior(self, pars):
"""
Up to a normalization constant, the log of the posterior pdf is just
the sum of the log likelihood plus the log prior.
"""
lnp = self.ln_prior(pars)
if np.isinf(lnp): # short-circuit if the prior is infinite (don't bother computing likelihood)
return lnp
lnL = self.ln_likelihood(pars)
lnprob = lnp + lnL
if np.isnan(lnprob):
return -np.inf
return lnprob
def __call__(self, pars):
return self.ln_posterior(pars)
```
```python
model = StraightLineModel(x, y, y_err)
```
Now we'll repeat what we did above to map out the value of the log-posterior over a 2D grid of parameter values. Because we used a flat prior, you'll notice it looks identical to the visualization of the `weighted_squared_deviation` -- only the likelihood has any slope to it!
```python
def evaluate_on_grid(func, a_grid, b_grid, args=()):
a_grid,b_grid = np.meshgrid(a_grid, b_grid)
ab_grid = np.vstack((a_grid.ravel(), b_grid.ravel())).T
func_vals = np.zeros(ab_grid.shape[0])
for j,pars in enumerate(ab_grid):
func_vals[j] = func(pars, *args)
return func_vals.reshape(a_grid.shape)
```
```python
fig,axes = plt.subplots(1, 3, figsize=(14,5.1), sharex=True, sharey=True)
# make a 256x256 grid of parameter values centered on the true values
a_grid = np.linspace(a_true-5., a_true+5, 256)
b_grid = np.linspace(b_true-5., b_true+5, 256)
ln_prior_vals = evaluate_on_grid(model.ln_prior, a_grid, b_grid)
ln_like_vals = evaluate_on_grid(model.ln_likelihood, a_grid, b_grid)
ln_post_vals = evaluate_on_grid(model.ln_posterior, a_grid, b_grid)
for i,vals in enumerate([ln_prior_vals, ln_like_vals, ln_post_vals]):
axes[i].pcolormesh(a_grid, b_grid, vals,
cmap='Blues', vmin=vals.max()-1024, vmax=vals.max()) # arbitrary scale
axes[0].set_title('log-prior', fontsize=20)
axes[1].set_title('log-likelihood', fontsize=20)
axes[2].set_title('log-posterior', fontsize=20)
for ax in axes:
ax.set_xlabel('$a$')
# plot the truth
ax.plot(a_true, b_true, marker='o', zorder=10, color='#de2d26')
ax.axis('tight')
axes[0].set_ylabel('$b$')
fig.tight_layout()
```
### Exercise 5:
Subclass the `StraightLineModel` class and implement a new prior. Replace the flat prior above with an uncorrelated 2D Gaussian centered on $(\mu_a,\mu_b) = (3., 5.5)$ with root-variances $(\sigma_a,\sigma_b) = (0.05, 0.05)$. Compare the 2D grid plot with the flat prior to the one with a Gaussian prior.
#### Solution:
```python
class StraightLineModelGaussianPrior(StraightLineModel): # verbose names are a good thing!
def ln_prior(self, pars):
a, b = pars # unpack parameters
ln_prior_val = 0. # we'll add to this
# prior on a is a Gaussian with mean, stddev = (3, 0.05)
ln_prior_val += -0.5*(a - 3.)**2/0.05**2 # this is not normalized properly, but that's ok
# prior on b is a Gaussian with mean, stddev = (5.5, 0.05)
ln_prior_val += -0.5*(b - 5.5)**2/0.05**2 # this is not normalized properly, but that's ok
return ln_prior_val
```
```python
model_Gprior = StraightLineModelGaussianPrior(x, y, y_err)
```
```python
fig,axes = plt.subplots(1, 3, figsize=(14,5.1), sharex=True, sharey=True)
ln_prior_vals2 = evaluate_on_grid(model_Gprior.ln_prior, a_grid, b_grid)
ln_like_vals2 = evaluate_on_grid(model_Gprior.ln_likelihood, a_grid, b_grid)
ln_post_vals2 = evaluate_on_grid(model_Gprior.ln_posterior, a_grid, b_grid)
for i,vals in enumerate([ln_prior_vals2, ln_like_vals2, ln_post_vals2]):
axes[i].pcolormesh(a_grid, b_grid, vals,
cmap='Blues', vmin=vals.max()-1024, vmax=vals.max()) # arbitrary scale
axes[0].set_title('log-prior', fontsize=20)
axes[1].set_title('log-likelihood', fontsize=20)
axes[2].set_title('log-posterior', fontsize=20)
for ax in axes:
ax.set_xlabel('$a$')
# plot the truth
ax.plot(a_true, b_true, marker='o', zorder=10, color='#de2d26')
ax.axis('tight')
axes[0].set_ylabel('$b$')
fig.tight_layout()
```
---
Now what do we do? The answer depends a bit on your intentions. If you'd like to propagate the posterior pdf (as in, pass it on to other scientists to use your results), what do you do if the posterior pdf isn't analytic? And what numbers do you put in your abstract? One option is to draw samples from your posterior pdf and compute summary statistics (e.g., median and quantiles) using the samples. That's the approach we're going to take.
## MCMC
One of the most common and powerful classes of methods people use for generating these samples is Markov Chain Monte Carlo (MCMC), but there are other options (e.g., brute-force or Monte Carlo rejection sampling). MCMC methods are useful because they scale reasonably to higher dimensions (well, at least better than brute-force). A disadvantage of these methods comes from the "Markov Chain" part of the name: there is always some correlation between nearby steps in a chain of samples, so you have to compute second-order statistics (e.g., the autocorrelation) on the samples to check whether they behave like fair, effectively independent samples from the target distribution (your posterior pdf).
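To make the correlation between nearby steps concrete, here is a minimal sketch (an addition to this notebook, not part of the original exercise) of one such second-order statistic, the normalized autocorrelation of a 1D trace of samples; applying it to a column of the `chain` array produced below shows how far apart two steps need to be before they are roughly independent:
```python
import numpy as np

def autocorrelation(trace, max_lag=100):
    """Normalized autocorrelation of a 1D chain of samples, for lags 0..max_lag-1."""
    x = np.asarray(trace, dtype=float)
    x = x - x.mean()
    var = np.sum(x**2)
    return np.array([np.sum(x[:x.size - lag] * x[lag:]) / var
                     for lag in range(max_lag)])

# For white noise the autocorrelation drops to ~0 after lag 0;
# an MCMC trace typically decays much more slowly.
print(autocorrelation(np.random.normal(size=5000), max_lag=5))
```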
The simplest MCMC algorithm is known as Metropolis-Hastings. I'm not going to explain it in detail, but in pseudocode, it is:
- Start from some position in parameter space, $\theta_0$ with posterior probability $\pi_0$
- Iterate from 1 to $N_{\rm steps}$:
    - Sample an offset $\delta\theta_0$ from some proposal distribution
- Compute a new parameter value using this offset, $\theta_{\rm new} = \theta_0 + \delta\theta_0$
    - Evaluate the posterior probability at the new parameter vector, $\pi_{\rm new}$
- Sample a uniform random number, $r \sim \mathcal{U}(0,1)$
- if $\pi_{\rm new}/\pi_0 > 1$ or $\pi_{\rm new}/\pi_0 > r$:
- store $\theta_{\rm new}$
- replace $\theta_0,\pi_0$ with $\theta_{\rm new},\pi_{\rm new}$
- else:
- store $\theta_0$ again
The proposal distribution has to be chosen and tuned by hand. We'll use a spherical / uncorrelated Gaussian distribution with root-variances set by hand:
```python
def sample_proposal(*sigmas):
return np.random.normal(0., sigmas)
def run_metropolis_hastings(p0, n_steps, model, proposal_sigmas):
"""
Run a Metropolis-Hastings MCMC sampler to generate samples from the input
log-posterior function, starting from some initial parameter vector.
Parameters
----------
p0 : iterable
Initial parameter vector.
n_steps : int
Number of steps to run the sampler for.
model : StraightLineModel instance (or subclass)
A callable object that takes a parameter vector and computes
the log of the posterior pdf.
proposal_sigmas : list, array
A list of standard-deviations passed to the sample_proposal
function. These are like step sizes in each of the parameters.
"""
p0 = np.array(p0)
if len(proposal_sigmas) != len(p0):
raise ValueError("Proposal distribution should have same shape as parameter vector.")
# the objects we'll fill and return:
chain = np.zeros((n_steps, len(p0))) # parameter values at each step
ln_probs = np.zeros(n_steps) # log-probability values at each step
# we'll keep track of how many steps we accept to compute the acceptance fraction
n_accept = 0
# evaluate the log-posterior at the initial position and store starting position in chain
ln_probs[0] = model(p0)
chain[0] = p0
# loop through the number of steps requested and run MCMC
for i in range(1,n_steps):
# proposed new parameters
step = sample_proposal(*proposal_sigmas)
new_p = chain[i-1] + step
# compute log-posterior at new parameter values
new_ln_prob = model(new_p)
# log of the ratio of the new log-posterior to the previous log-posterior value
ln_prob_ratio = new_ln_prob - ln_probs[i-1]
if (ln_prob_ratio > 0) or (ln_prob_ratio > np.log(np.random.uniform())):
chain[i] = new_p
ln_probs[i] = new_ln_prob
n_accept += 1
else:
chain[i] = chain[i-1]
ln_probs[i] = ln_probs[i-1]
acc_frac = n_accept / n_steps
return chain, ln_probs, acc_frac
```
Now we'll run the sampler! Let's start from some arbitrary position allowed by our prior.
### Exercise 6:
Choose a starting position, values for `a` and `b` to start the MCMC from. In general, a good way to do this is to sample from the prior pdf. Generate values for `a` and `b` by sampling from a uniform distribution over the domain we defined above. Then, run the MCMC sampler from this initial position for 8192 steps. Play around with ("tune" as they say) the `proposal_sigmas` until you get an acceptance fraction around ~40%.
#### Solution:
```python
p0 = [6.,6.]
chain,_,acc_frac = run_metropolis_hastings(p0, n_steps=8192, model=model,
proposal_sigmas=[0.058,0.05])
print("Acceptance fraction: {:.1%}".format(acc_frac))
```
Acceptance fraction: 40.1%
Let's look at the chain returned, the parameter value positions throughout the sampler run:
```python
fig,ax = plt.subplots(1, 1, figsize=(5,5))
ax.pcolormesh(a_grid, b_grid, ln_post_vals,
cmap='Blues', vmin=ln_post_vals.max()-128, vmax=ln_post_vals.max()) # arbitrary scale
ax.axis('tight')
fig.tight_layout()
ax.plot(a_true, b_true, marker='o', zorder=10, color='#de2d26')
ax.plot(chain[:512,0], chain[:512,1], marker='', color='k', linewidth=1.)
ax.set_xlabel('$a$')
ax.set_ylabel('$b$')
```
We can also look at the individual parameter traces, i.e. the 1D functions of parameter value vs. step number for each parameter separately:
```python
fig,axes = plt.subplots(len(p0), 1, figsize=(5,7), sharex=True)
for i in range(len(p0)):
axes[i].plot(chain[:,i], marker='', drawstyle='steps')
axes[0].axhline(a_true, color='r', label='true')
axes[0].legend(loc='best')
axes[0].set_ylabel('$a$')
axes[1].axhline(b_true, color='r')
axes[1].set_ylabel('$b$')
fig.tight_layout()
```
From these trace plots, we can see by eye that it takes the sampler a few hundred steps to converge. When we look at the samples returned or when we compute our summary statistics, we don't want to include these parameter values! In addition, there is likely some correlation between nearby steps. We can attempt to remove some of the correlated steps by _thinning_ the chain, i.e. by downsampling. We can do both simultaneously using Python indexing tricks. Certainly by step 2000 the chains look converged, so from there on we'll keep only every 8th step:
```python
good_samples = chain[2000::8]
good_samples.shape
```
(774, 2)
We're left with 774 samples; we hope these are approximately uncorrelated, converged samples from the posterior pdf (there are other ways we can check, but these are out of scope for this workshop). Now you have to choose what summary statistics to report. You have some options, but a reasonable choice is to report the median, 16th, and 84th percentiles:
```python
low,med,hi = np.percentile(good_samples, [16, 50, 84], axis=0)
upper, lower = hi-med, med-low
disp_str = ""
for i,name in enumerate(['a', 'b']):
fmt_str = '{name}={val:.2f}^{{+{plus:.2f}}}_{{-{minus:.2f}}}'
disp_str += fmt_str.format(name=name, val=med[i], plus=upper[i], minus=lower[i])
disp_str += r'\quad '
from IPython import display
disp_str = "${}$".format(disp_str)
display.Latex(data=disp_str)
```
$a=1.24^{+0.06}_{-0.06}\quad b=4.51^{+0.06}_{-0.07}\quad $
Recall that the true values are:
```python
a_true, b_true
```
(1.255, 4.507)
We've now done this problem the Bayesian way as well! Now, instead of drawing the "best-fit" line over the data, we can take a handful of samples and plot a line for each of the samples, as a way to visualize the uncertainty we have in the model parameters:
```python
plt.figure(figsize=(6,5))
plt.errorbar(x, y, y_err, linestyle='none', marker='o', ecolor='#666666')
x_grid = np.linspace(x.min()-0.1, x.max()+0.1, 128)
for pars in good_samples[:128]: # only plot 128 samples
plt.plot(x_grid, line_model(pars, x_grid),
marker='', linestyle='-', color='#3182bd', alpha=0.1, zorder=-10)
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.tight_layout()
```
Or, we can plot the samples using a _corner plot_ to visualize the structure of the 2D and 1D (marginal) posteriors:
```python
# uncomment and run this line if the import fails:
# !source activate statsseminar; pip install corner
import corner
```
```python
fig = corner.corner(chain[2000:], bins=32, labels=['$a$', '$b$'], truths=[a_true, b_true])
```
---
## Finally, the problem you came here for: fitting a straight line to data with intrinsic scatter
We made it! We're now ready to do the problem we set out to do. In the initial model, we assumed that we knew the uncertainties in our measurements exactly and that the data were drawn from a one-dimensional line. We're now going to relax that assumption and assume that either (a) the data uncertainties have been underestimated or (b) there is intrinsic scatter in the true model (in the absence of other information, these two ideas are degenerate). Let's first generate some data. We'll assume the latter of the two ideas, and we'll further assume that the model line is convolved with an additional Gaussian in the $y$ direction, with the new parameter being the intrinsic width of the relation expressed as a variance $V$:
```python
V_true = 0.5**2
n_data = 42
# we'll keep the same parameters for the line as we used above
```
```python
x = rnd.uniform(0, 2., n_data)
x.sort() # sort the values in place
y = a_true*x + b_true
# Heteroscedastic Gaussian uncertainties only in y direction
y_err = rnd.uniform(0.1, 0.2, size=n_data) # randomly generate uncertainty for each datum
# add Gaussian intrinsic width
y = rnd.normal(y, np.sqrt(y_err**2 + V_true)) # re-sample y data with noise and intrinsic scatter
```
```python
plt.errorbar(x, y, y_err, linestyle='none', marker='o', ecolor='#666666')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.tight_layout()
```
Let's first naively fit the data assuming no intrinsic scatter using least-squares:
```python
X = np.vander(x, N=2, increasing=True)
Cov = np.diag(y_err**2)
Cinv = np.linalg.inv(Cov)
```
```python
best_pars = np.linalg.inv(X.T @ Cinv @ X) @ (X.T @ Cinv @ y)
pars_Cov = np.linalg.inv(X.T @ Cinv @ X)
```
```python
plt.errorbar(x, y, y_err, linestyle='none', marker='o', ecolor='#666666')
x_grid = np.linspace(x.min()-0.1, x.max()+0.1, 128)
plt.plot(x_grid, line_model(best_pars[::-1], x_grid), marker='', linestyle='-', label='best-fit line')
plt.plot(x_grid, line_model([a_true, b_true], x_grid), marker='', linestyle='-', label='true line')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.legend(loc='best')
plt.tight_layout()
```
The covariance matrix for the parameters is:
```python
pars_Cov
```
array([[ 0.00199108, -0.0014514 ],
[-0.0014514 , 0.00142195]])
We clearly get a biased result and yet _very_ precise measurements of the parameters when we don't take into account the intrinsic scatter. What we need to do now is modify our model to include the scatter as a free parameter. Unfortunately, it enters the model non-linearly so there is no solution using linear algebra or least-squares. Instead, we just write a new likelihood function and optimize it numerically. One choice we'll make is to use the parameter $\ln{V}$ instead of $V$ for reasons I'll explain later. To implement the new model, we'll subclass our `StraightLineModel` class and define new likelihood and prior functions.
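As a brief aside, one way to see why sampling in $\ln{V}$ is convenient: any real value of $\ln{V}$ maps to a strictly positive variance, so the sampler never has to handle an invalid $V \leq 0$, and a flat prior in $\ln{V}$ corresponds to a scale-invariant prior on $V$ itself:
$$
p(\ln V) = \mathrm{const} \quad \Longrightarrow \quad p(V) = p(\ln V)\,\left|\frac{\mathrm{d}\ln V}{\mathrm{d}V}\right| \propto \frac{1}{V}
$$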
### Exercise 7:
Subclass the `StraightLineModel` class and implement new prior and likelihood functions (`ln_prior` and `ln_likelihood`). Our model will now have 3 parameters: `a`, `b`, and `lnV`, the log of the intrinsic scatter variance. Use flat priors on all of these parameters. In fact, we'll be even lazier and drop the constant normalization terms: if a parameter vector is within the ranges below, return 0. (i.e., log(1.)); otherwise return -infinity:
#### Solution:
```python
class StraightLineIntrinsicScatterModel(StraightLineModel):
def ln_prior(self, pars):
""" The prior only depends on the parameters """
a, b, lnV = pars
# flat priors on a, b, lnV
if a < -10 or a > 10 or b < -100. or b > 100. or lnV < -10. or lnV > 10.:
return -np.inf
# this is only valid up to a numerical constant
return 0.
def ln_likelihood(self, pars):
""" The likelihood function evaluation requires a particular set of model parameters and the data """
a,b,lnV = pars
V = np.exp(lnV)
N = len(y)
dy = y - line_model([a,b], self.x)
ivar = 1 / (self.y_err**2 + V) # inverse-variance now includes intrinsic scatter
return -0.5 * (N*np.log(2*np.pi) - np.sum(np.log(ivar)) + np.sum(dy**2 * ivar))
```
```python
scatter_model = StraightLineIntrinsicScatterModel(x, y, y_err)
```
```python
x0 = [5., 5., 0.] # starting guess for the optimizer
# we have to minimize the negative log-likelihood to maximize the likelihood
result_ml_scatter = minimize(lambda *args: -scatter_model.ln_likelihood(*args),
x0=x0, method='BFGS')
result_ml_scatter
```
fun: 36.77010525530865
hess_inv: array([[ 0.02207535, -0.02292126, 0.00034518],
[-0.02292126, 0.0317851 , -0.00038581],
[ 0.00034518, -0.00038581, 0.05651076]])
jac: array([0., 0., 0.])
message: 'Optimization terminated successfully.'
nfev: 130
nit: 18
njev: 26
status: 0
success: True
x: array([ 1.10279177, 4.74635002, -1.17285508])
```python
plt.errorbar(x, y, y_err, linestyle='none', marker='o', ecolor='#666666')
x_grid = np.linspace(x.min()-0.1, x.max()+0.1, 128)
plt.plot(x_grid, line_model(result_ml_scatter.x[:2], x_grid), marker='', linestyle='-', label='best-fit line')
plt.plot(x_grid, line_model([a_true, b_true], x_grid), marker='', linestyle='-', label='true line')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.legend(loc='best')
plt.tight_layout()
```
```python
V_true, np.exp(result_ml_scatter.x[2])
```
(0.25, 0.3094820831577057)
It looks like the maximum likelihood estimate is a little bit better, and we get a reasonable measurement of the intrinsic scatter, but none of this gives us a handle on the uncertainty. How do we quantify the uncertainty in the (now three) parameters? We'll just run MCMC.
### Exercise 8:
To quantify our uncertainty in the parameters, we'll run MCMC using the new model. Run MCMC for 65536 steps and visualize the resulting chain. Make sure the acceptance fraction is between ~25-50%.
#### Solution:
```python
p0 = [6., 6., -1.]
chain,_,acc_frac = run_metropolis_hastings(p0, n_steps=2**16, model=scatter_model,
proposal_sigmas=[0.15,0.15,0.2])
acc_frac
```
0.3381500244140625
```python
fig,axes = plt.subplots(len(p0), 1, figsize=(5,7), sharex=True)
for i in range(len(p0)):
axes[i].plot(chain[:,i], marker='', drawstyle='steps')
axes[0].axhline(a_true, color='r', label='true')
axes[0].legend(loc='best')
axes[0].set_ylabel('$a$')
axes[1].axhline(b_true, color='r')
axes[1].set_ylabel('$b$')
axes[2].axhline(np.log(V_true), color='r')
axes[2].set_ylabel(r'$\ln V$')
fig.tight_layout()
```
```python
fig = corner.corner(chain[2000:], bins=32, labels=['$a$', '$b$', r'$\ln V$'],
truths=[a_true, b_true, np.log(V_true)])
```
Now we'll again compute the percentiles for the 1D, marginal distributions:
```python
good_samples = chain[2000::8]
good_samples.shape
```
(7942, 3)
```python
low,med,hi = np.percentile(good_samples, [16, 50, 84], axis=0)
upper, lower = hi-med, med-low
disp_str = ""
for i,name in enumerate(['a', 'b', r'\ln V']):
fmt_str = '{name}={val:.2f}^{{+{plus:.2f}}}_{{-{minus:.2f}}}'
disp_str += fmt_str.format(name=name, val=med[i], plus=upper[i], minus=lower[i])
disp_str += r'\quad '
disp_str = "${}$".format(disp_str)
display.Latex(data=disp_str)
```
$a=1.11^{+0.15}_{-0.15}\quad b=4.74^{+0.18}_{-0.18}\quad \ln V=-1.11^{+0.25}_{-0.24}\quad $
Compare this to the diagonal elements of the covariance matrix we got from ignoring the intrinsic scatter and doing least-squares fitting:
```python
disp_str = ""
for i,name in zip([1,0], ['a', 'b']):
fmt_str = r'{name}={val:.2f} \pm {err:.2f}'
disp_str += fmt_str.format(name=name, val=best_pars[i], err=np.sqrt(pars_Cov[i,i]))
disp_str += r'\quad '
disp_str = "${}$".format(disp_str)
display.Latex(data=disp_str)
```
$a=1.06 \pm 0.04\quad b=4.81 \pm 0.04\quad $
The parameter uncertainties estimated from the MCMC samples are much larger -- this reflects our uncertainty about the intrinsic scatter of the points. Precision is highly model dependent.
(Source notebook: day3/Fitting-a-line.ipynb, repository EBerzin/usrp-sciprog, MIT license.)
########################################################
### This file is used to generate Table 7-8, Fig 4-5 ###
########################################################
- [Forward Problem](#Forward-Problem)
- [Verify Assumption 1](#Verify-Assumption-1)
- [Table 7](#Table-7)
- [Table 8](#Table-8)
- [Verify Lemma 1](#Verify-Lemma-1)
- [Left plot in Figure 4](#Left-plot-in-Figure-4)
- [Verify Theorem 3.1](#Verify-Theorem-3.1)
- [Right plot in Figure 4](#Right-plot-in-Figure-4)
- [Inverse Problem](#Inverse-Problem)
- [Verify Assumption 2](#Verify-Assumption-2)
- [Verify Theorem 4.2](#Verify-Theorem-4.2)
- [Figure 5](#Figure-5)
<font color=red>**Note (important!)**
Since this file requires data from the Data directory, you will need to generate the data files first; there are two ways to do so.</font>
- Way 1: run 'GenerateData.ipynb' or 'GenerateData.py' to get QoI values at quadrature points in order to get coefficients in PCE; and all the ratio evaluations
- Way 2: run 'GenerateData_ParallelVersion.ipynb' and set multiple processors
```python
import os
import scipy.io as sio #for the i/o
import numpy as np
import numpy.polynomial.hermite_e as H
from math import factorial
from scipy.stats import norm
from scipy.stats import gaussian_kde as kde
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import pyplot as plt
import ipywidgets as widgets
from ipywidgets import interact, interactive, fixed, interact_manual
from matplotlib import cm
%matplotlib inline
```
```python
####### Plot Formatting ######
plt.rc('lines', linewidth = 1.5)
plt.rc('xtick', labelsize = 14)
plt.rc('ytick', labelsize = 14)
plt.rc('legend',fontsize=14)
# plt.rcParams["font.family"] = "serif"
plt.rcParams['axes.labelsize'] = 20
plt.rcParams['axes.titlesize'] = 12
plt.rcParams['lines.markersize'] = 6
plt.rcParams['figure.figsize'] = (8.0, 6.0)
```
Problem:
\begin{align*}
- \nabla\cdot(A\nabla u) &= (e^{\lambda_1}\lambda_1^2\pi^2 + e^{\lambda_2}\lambda_2^2\pi^2)u \\
u &= 0 \, \text{ on } \Gamma_0 \,\text{( Left edge)}\\
(A\nabla u)\cdot n &= -e^{\lambda_2}\lambda_2\pi \sin\lambda_1\pi x\sin \lambda_2\pi y \, \text{ on } \Gamma_1 \, \text{( Top edge)}\\
(A\nabla u)\cdot n &= e^{\lambda_2}\lambda_2\pi \sin\lambda_1\pi x\sin \lambda_2\pi y \, \text{ on } \Gamma_2 \,\text{( Bottom edge)}\\
(A\nabla u)\cdot n &= e^{\lambda_1}\lambda_1\pi \cos\lambda_1\pi x\cos \lambda_2\pi y \, \text{ on } \Gamma_3 \,\text{( Right edge)}\\
\end{align*}
where
$$ A = \begin{bmatrix} e^{\lambda_1} & 0 \\ 0 & e^{\lambda_2} \end{bmatrix} $$
and $\Omega = [0,1]\times [0,1]$.
<font color = red>**Exact solution:**
$$ u = \sin \lambda_1\pi x \cos \lambda_2 \pi y$$
</font>
QoI is:
\begin{align*}
Q(\lambda_1,\lambda_2) &= \frac{1}{(b-a)(d-c)}\int_{c}^{d} \int_{a}^{b} u(x,y,\lambda_1,\lambda_2)\, dx\, dy \\
&= \frac{1}{(b-a)(d-c)} \int_{a}^{b} \sin \lambda_1\pi x \, dx \int_{c}^{d} \cos \lambda_2 \pi y \, dy\\
&= \frac{1}{(b-a)(d-c)} \left( \frac{\cos \lambda_1\pi a - \cos\lambda_1\pi b}{\lambda_1\pi}\right) \left( \frac{\sin \lambda_2\pi d - \sin \lambda_2\pi c}{\lambda_2\pi}\right) \\
&= \frac{(\cos \lambda_1\pi a - \cos\lambda_1\pi b)(\sin \lambda_2\pi d - \sin\lambda_2\pi c) }{(b-a)(d-c)\lambda_1\lambda_2\pi^2}
\end{align*}
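As a quick sanity check of this closed-form expression (a sketch added here, not part of the original notebook), one can compare it against direct numerical integration of $u$ over the averaging window; the window values below match the defaults used later in `Qexact`, and the sample values of $\lambda_1,\lambda_2$ are only illustrative:
```python
# Sketch: compare the closed-form QoI with direct numerical integration.
import numpy as np
from scipy.integrate import dblquad

def Q_closed_form(l1, l2, a=0.4, b=0.6, c=0.4, d=0.6):
    return ((np.cos(l1*np.pi*a) - np.cos(l1*np.pi*b))
            * (np.sin(l2*np.pi*d) - np.sin(l2*np.pi*c))
            / ((b-a)*(d-c)*l1*l2*np.pi**2))

def Q_numeric(l1, l2, a=0.4, b=0.6, c=0.4, d=0.6):
    u = lambda y, x: np.sin(l1*np.pi*x) * np.cos(l2*np.pi*y)
    val, _ = dblquad(u, a, b, lambda x: c, lambda x: d)  # x in [a,b], y in [c,d]
    return val / ((b-a)*(d-c))

print(Q_closed_form(0.1, -0.2), Q_numeric(0.1, -0.2))  # the two values should agree closely
```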
### When standard normal
$$ \Phi_{ij}(x,y) = \Phi_i(x) \Phi_j(y)$$
$$ (\Phi_{ij}, \Phi_{kl}) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \Phi_{ij}(x,y)\Phi_{kl}(x,y) e^{-\frac{x^2+y^2}{2}}\, dx\, dy = (\sqrt{2\pi})^2 i! j! \delta_{ik}\delta_{jl}$$
$$ Q(\lambda_1, \lambda_2) = \sum_{i,j=0}^{\infty} q_{ij} \Phi_{ij}(\lambda_1,\lambda_2)$$
$$ q_{ij} = \frac{1}{(\Phi_{ij},\Phi_{ij})} \biggr(Q(\lambda_1, \lambda_2), \ \Phi_{ij}(\lambda_1,\lambda_2)\biggr) = \frac{1}{(\sqrt{2\pi})^2 i! j! } \biggr(Q(\lambda_1, \lambda_2), \ \Phi_{ij}(\lambda_1,\lambda_2)\biggr)$$
**Gauss-Hermite Quadrature**
\begin{align*}
\biggr(Q(\lambda_1, \lambda_2), \ \Phi_{ij}(\lambda_1,\lambda_2)\biggr) &= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} Q(\lambda_1,\lambda_2) \Phi_i(\lambda_1) e^{-\frac{\lambda_1^2}{2}} \, d\lambda_1 \Phi_j(\lambda_2) e^{-\frac{\lambda_2^2}{2}} \, d\lambda_2\\
&= \int_{-\infty}^{\infty} \biggr( \sum_{k=1}^n w_k Q(\lambda_1^{(k)}, \lambda_2)\Phi_i(\lambda_1^{(k)}) \biggr) \Phi_j(\lambda_2) e^{-\frac{\lambda_2^2}{2}} \, d\lambda_2\\
&= \sum_{l=1}^n w_l \biggr( \sum_{k=1}^n w_k Q(\lambda_1^{(k)}, \lambda_2^{(l)})\Phi_i(\lambda_1^{(k)}) \biggr) \Phi_j(\lambda_2^{(l)})
\end{align*}
### When generalized normal
**Quadrature Rule**
<font color = red>
If
$$ \int_{-\infty}^{\infty} f(x) e^{-\frac{x^2}{2}}\, dx \approx \sum_{i=1}^n w_if(x_i)$$
we will have
$$ \int_{-\infty}^{\infty} f(x) e^{-\frac{(x-\mu)^2}{2\sigma^2}}\, dx \approx \sigma \sum_{i=1}^n w_if(\mu + \sigma x_i)$$
</font >
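A quick numerical illustration of this shifted and scaled rule (the test function $f(x)=x^2$ and the values of $\mu,\sigma$ below are only for illustration); `hermegauss` from `numpy.polynomial.hermite_e` returns nodes and weights for exactly the weight $e^{-x^2/2}$:
```python
# Sketch: check the shifted/scaled Gauss-Hermite rule on f(x) = x**2.
# Exact value of int x^2 exp(-(x-mu)^2/(2 sigma^2)) dx is sqrt(2*pi)*sigma*(mu^2 + sigma^2).
import numpy as np
import numpy.polynomial.hermite_e as H

mu, sigma = 0.3, 0.1                 # illustrative values
x, w = H.hermegauss(20)              # nodes/weights for the weight exp(-x^2/2)
approx = sigma * np.sum(w * (mu + sigma*x)**2)
exact = np.sqrt(2*np.pi) * sigma * (mu**2 + sigma**2)
print(approx, exact)                 # should agree to machine precision
```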
If $\lambda_1$, $\lambda_2$ don't follow standard normal $N(0,1)$, assume $\lambda_1\sim N(\mu_1, \sigma_1^2)$, $\lambda_2\sim N(\mu_2, \sigma_2^2)$, we will have
$$ Q(\lambda_1, \lambda_2) = \sum_{i,j=0}^{\infty} q_{ij} \Phi_{ij}\left(\frac{\lambda_1 - \mu_1}{\sigma_1},\frac{\lambda_2 - \mu_2}{\sigma_2}\right)$$
We still use the $\Phi_{ij}$ defined above, but in order to make the basis orthogonal with respect to the new measure, we use basis functions of the form
$$ \Phi_{ij}\left(\frac{\lambda_1 - \mu_1}{\sigma_1},\frac{\lambda_2 - \mu_2}{\sigma_2}\right) $$
then
\begin{align*}
&\,\,\, \left( \Phi_{ij}\left(\frac{\lambda_1 - \mu_1}{\sigma_1},\frac{\lambda_2 - \mu_2}{\sigma_2}\right), \Phi_{kl}\left(\frac{\lambda_1 - \mu_1}{\sigma_1},\frac{\lambda_2 - \mu_2}{\sigma_2}\right) \right)\\
&= \left( \Phi_{i}\left(\frac{\lambda_1 - \mu_1}{\sigma_1}\right)\Phi_{j}\left(\frac{\lambda_2 - \mu_2}{\sigma_2}\right), \Phi_{k}\left(\frac{\lambda_1 - \mu_1}{\sigma_1}\right) \Phi_l\left(\frac{\lambda_2 - \mu_2}{\sigma_2}\right) \right)\\
&= \int_{R} \int_{R} \Phi_{i}\left(\frac{\lambda_1 - \mu_1}{\sigma_1}\right)\Phi_{j}\left(\frac{\lambda_2 - \mu_2}{\sigma_2}\right) \Phi_{k}\left(\frac{\lambda_1 - \mu_1}{\sigma_1}\right) \Phi_l\left(\frac{\lambda_2 - \mu_2}{\sigma_2}\right)e^{-\frac{(\lambda_1-\mu_1)^2}{2\sigma_1^2}} e^{-\frac{(\lambda_2-\mu_2)^2}{2\sigma_2^2}} d\lambda_1 d\lambda_2 \\
&= \sigma_1\sigma_2 \int_{R} \int_{R} \Phi_{i}(y_1)\Phi_j(y_2)\Phi_{k}(y_1)\Phi_l(y_2) e^{-\frac{y_1^2}{2}}e^{-\frac{y_2^2}{2}}\, dy_1\, dy_2 \\
&= \sigma_1\sigma_2 2\pi i! j! \delta_{ik}\delta_{jl}
\end{align*}
Since
$$ Q(\lambda_1, \lambda_2) = \sum_{i,j=0}^{\infty} q_{ij} \Phi_{ij}\left(\frac{\lambda_1 - \mu_1}{\sigma_1},\frac{\lambda_2 - \mu_2}{\sigma_2}\right)$$
then
\begin{align*}
q_{ij} &= \frac{1}{\left(\Phi_{ij}\left(\frac{\lambda_1 - \mu_1}{\sigma_1},\frac{\lambda_2 - \mu_2}{\sigma_2}\right),\Phi_{ij}\left(\frac{\lambda_1 - \mu_1}{\sigma_1},\frac{\lambda_2 - \mu_2}{\sigma_2}\right)\right)} \biggr(Q(\lambda_1, \lambda_2), \ \Phi_{ij}\left(\frac{\lambda_1 - \mu_1}{\sigma_1},\frac{\lambda_2 - \mu_2}{\sigma_2}\right)\biggr) \\
&= \frac{1}{\sigma_1 \sigma_2 2\pi i! j! } \biggr(Q(\lambda_1, \lambda_2), \ \Phi_{ij}\left(\frac{\lambda_1 - \mu_1}{\sigma_1},\frac{\lambda_2 - \mu_2}{\sigma_2}\right)\biggr)\\
&= \frac{1}{\sigma_1 \sigma_2 2\pi i! j! } \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} Q(\lambda_1,\lambda_2) \Phi_i\left(\frac{\lambda_1-\mu_1}{\sigma_1}\right) e^{-\frac{(\lambda_1-\mu_1)^2}{2\sigma_1^2}} \, d\lambda_1 \Phi_j\left(\frac{\lambda_2-\mu_2}{\sigma_2}\right) e^{-\frac{(\lambda_2-\mu_2)^2}{2\sigma_2^2}} \, d\lambda_2\\
&= \frac{1}{\sigma_1 \sigma_2 2\pi i! j! } \int_{-\infty}^{\infty} \sigma_1 \biggr( \sum_{k=1}^n w_k Q(\mu_1+\sigma_1\lambda_1^{(k)}, \lambda_2)\Phi_i(\lambda_1^{(k)}) \biggr) \Phi_j\left(\frac{\lambda_2 - \mu_2}{\sigma_2}\right) e^{-\frac{(\lambda_2-\mu_2)^2}{2\sigma_2^2}} \, d\lambda_2\\
&= \frac{1}{\sigma_1 \sigma_2 2\pi i! j! } \sigma_2 \sum_{l=1}^n w_l \sigma_1 \biggr( \sum_{k=1}^n w_k Q(\mu_1+\sigma_1\lambda_1^{(k)}, \mu_2+\sigma_2\lambda_2^{(l)})\Phi_i(\lambda_1^{(k)}) \biggr) \Phi_j(\lambda_2^{(l)})\\
&= \frac{1}{2\pi i! j! } \sum_{l=1}^n w_l \biggr( \sum_{k=1}^n w_k Q(\mu_1+\sigma_1\lambda_1^{(k)}, \mu_2+\sigma_2\lambda_2^{(l)})\Phi_i(\lambda_1^{(k)}) \biggr) \Phi_j(\lambda_2^{(l)})
\end{align*}
Assume
$$ \lambda_1 \sim N(\mu_1, \sigma_1^2) = N(0, 0.1^2) \ \ \ \lambda_2 \sim N(\mu_2, \sigma_2^2) = N(0, 0.1^2) $$
```python
proc_size = 25 # this number is determined by the number of data files
```
```python
mu1 = 0
mu2 = 0
sigma1 = 0.1
sigma2 = 0.1
```
## Get $Q_n(\lambda_1,\lambda_2)$ (need to compute coef $q_{ij}$)
```python
def Hermite_2d(i,j,x,y):
'''
Phi_{i,j}(x,y) = Phi_i(x) * Phi_j(y) (left: 2d; right: 1d)
'''
c = np.zeros((20,20))
c[i,j] = 1
return H.hermeval2d(x, y, c)
Q_FEM_quad = np.zeros(int(400)) #already include information of mu1, mu2, sigma1, sigma2
for i in range(proc_size):
filename = os.path.join(os.getcwd(), "Data", "Q_FEM_quad_") + str(i) + ".mat"
partial_data = sio.loadmat(filename)
Q_FEM_quad += partial_data['Q_FEM'].reshape(int(400))
def Phi(n):
#define H_n
coeffs = [0]*(n+1)
coeffs[n] = 1
return coeffs
def q(i,j):
'''
    compute coefficient q_{ij}
    Set up Gauss-Hermite quadrature; the weight function is exp(-x^2/2)
'''
x, w=H.hermegauss(20)
Q=sum([w[ldx]*sum([w[kdx] * Q_FEM_quad[ldx*20+kdx] * H.hermeval(x[kdx],Phi(i)) for kdx in range(20)])*H.hermeval(x[ldx],Phi(j)) for ldx in range(20)])
q= Q/(2*np.pi*factorial(i)*factorial(j))
return q
qij = np.zeros((10,10))
for i in range(10):
for j in range(10):
qij[i,j] = q(i,j)
def Q(n,x,y):
result = 0
for i in range(n+1):
for j in range(n+1):
if i+j <=n:
result += qij[i,j]*Hermite_2d(i,j,(x-mu1)/sigma1,(y-mu2)/sigma2)
return result
def Qexact(x,y,a=0.4,b=0.6,c=0.4,d=0.6):
sol = (np.cos(x*np.pi*a)-np.cos(x*np.pi*b))*(np.sin(y*np.pi*d)-np.sin(y*np.pi*c))/((b-a)*(d-c)*x*y*np.pi**2)
return sol
```
```python
#Visualize the error between PCE and exact
fig = plt.figure(figsize=(8,7))
ax = fig.gca(projection='3d')
# Make data.
X = np.arange(-0.5, 0.5, 0.1)
Y = np.arange(-0.5, 0.5, 0.1)
X, Y = np.meshgrid(X, Y)
# Z = Q(1, X, Y)
# Z = Q(5,X,Y) - Qexact(X,Y)
Z = Qexact(X,Y)
# Plot the surface.
surf = ax.plot_surface(X, Y, Z, cmap=cm.coolwarm,
linewidth=0, antialiased=False)
# Customize the z axis.
ax.set_zlim(-0.5, 0.5) #Qexact zlim(-0.5,0.5); Qi-Q_exact zlim(-1.2,1.2)
ax.set_xlabel("$\lambda_1$", fontsize = 16)
ax.set_ylabel("$\lambda_2$", fontsize = 16)
ax.tick_params(direction='out', length=6, width=2, pad=1 ,colors='black', \
grid_color='r', grid_alpha=0.5, labelsize=14) #pad:dist b/t label and axis
cbaxes = fig.add_axes([0.88, 0.25, 0.03, 0.5]) #1st:horizontal, 2&4th:vertical, 3rd:width
fig.colorbar(surf, shrink=5, aspect=14, cax=cbaxes)
plt.show()
# fig.savefig("images/2Q_exact_5std") #2Q_exact_5std, 2Q1_5std
```
## Forward Problem
Assume
$$ \lambda_1 \sim N(\mu_1, \sigma_1^2) = N(0, 0.1^2) \ \ \ \lambda_2 \sim N(\mu_2, \sigma_2^2) = N(0, 0.1^2) $$
### Verify Assumption 1
```python
##### Generate data in Tables 7 and 8 #####
def assumption1(n, J):
np.random.seed(123456)
lam1sample = np.random.normal(mu1, sigma1, J)
lam2sample = np.random.normal(mu2, sigma2, J)
pfprior_sample_n = Q(n, lam1sample, lam2sample)
pfprior_dens_n = kde(pfprior_sample_n)
x = np.linspace(-1, 1, 1000)
return np.round(np.max(np.abs(np.gradient(pfprior_dens_n(x), x))), 2), np.round(np.max(pfprior_dens_n(x)), 2)
size_J = [int(1E3), int(1E4), int(1E5)]
degree_n = [1, 2, 3, 4, 5]
Bound_matrix, Lip_Bound_matrix = np.zeros((3, 5)), np.zeros((3, 5))
for i in range(3):
for j in range(5):
n, J = degree_n[j], size_J[i]
Lip_Bound_matrix[i, j] = assumption1(n, J)[0]
Bound_matrix[i, j] = assumption1(n, J)[1]
```
#### Table 7
```python
###########################################
################ Table 7 ##################
###########################################
print('Table 7')
print('Bound under certain n and J values')
print(Bound_matrix)
```
Table 7
Bound under certain n and J values
[[2.63 2.63 2.61 2.61 2.61]
[2.64 2.64 2.61 2.61 2.61]
[2.6 2.6 2.57 2.57 2.57]]
#### Table 8
```python
###########################################
################ Table 8 ##################
###########################################
print('Table 8')
print('Lipschitz bound under certain n and J values')
print(Lip_Bound_matrix)
```
Table 8
Lipschitz bound under certain n and J values
[[12.11 12.11 11.95 11.95 11.95]
[11.47 11.47 11.41 11.41 11.41]
[11.07 11.07 10.86 10.86 10.86]]
```python
#### Use plot to show the difference between the exact pushforward and approximate pushforward #####
fig = plt.figure()
def plot_pushforward(n,J):
#pfprior_dens = kde(Q_FEM)
np.random.seed(123456)
lam1sample = np.random.normal(mu1,sigma1,J)
lam2sample = np.random.normal(mu2,sigma2,J)
pfprior_sample = Qexact(lam1sample,lam2sample)
pfprior_dens = kde(pfprior_sample)
pfprior_sample_n = Q(n,lam1sample,lam2sample)
pfprior_dens_n = kde(pfprior_sample_n)
fig.clear()
qplot = np.linspace(-1,1, num=1000)
plt.plot(qplot,pfprior_dens(qplot),color='r', linestyle='-.', linewidth=4,label="$\pi_{\mathcal{D}}^{Q}$")
plt.plot(qplot,pfprior_dens_n(qplot),label="$\pi_{\mathcal{D}}^{Q_n}$")
plt.title('Lipschitz const. = %4.2f and Bound = %2.2f' %(np.max(np.abs(np.gradient(pfprior_dens_n(qplot), qplot))),
np.max(pfprior_dens_n(qplot))))
plt.legend()
interact(plot_pushforward,
n = widgets.IntSlider(value=int(1),min=int(1),max=int(5),step=1),
J = widgets.IntSlider(value=int(1E3),min=int(1E3),max=int(1E5),step=int(1E3)))
```
### Verify Lemma 1
**Print out Monte Carlo Approximation of $ \|\pi_{\mathcal{D}}^Q(q)-\pi_{\mathcal{D}}^{Q_n}(q)\|_{L^r(\mathcal{D_c})} $ where $r>0$ and $D_c=[-1,1]$**
```python
##### Generate data for the left plot of Fig 4 #####
# Define push-forward densities
N_kde = int(1E4)
N_mc = int(1E4)
np.random.seed(123456)
lam1sample = np.random.normal(mu1,sigma1,N_kde)
lam2sample = np.random.normal(mu2,sigma2,N_kde)
pfprior_dens = kde(Qexact(lam1sample,lam2sample))
def pfprior_dens_n(n,x):
pfprior_sample_n = Q(n,lam1sample,lam2sample)
pdf = kde(pfprior_sample_n)
return pdf(x)
```
```python
# **Print out Monte Carlo Approximation of $ \|\pi_{\mathcal{D}}^Q(q)-\pi_{\mathcal{D}}^{Q_n}(q)\|_{L^r(\mathcal{D_c})} $ where $r>0$ and $D_c=[-1,1]$**
np.random.seed(123456)
qsample = np.random.uniform(-1,1,N_mc)
def error_r_onD(r,n):
diff = (np.mean((np.abs(pfprior_dens_n(n,qsample) - pfprior_dens(qsample)))**r))**(1/r)
return diff
error_r_D = np.zeros((5,5))
for i in range(5):
for j in range(5):
error_r_D[i,j] = error_r_onD(i+1,j+1)
```
```python
np.set_printoptions(linewidth=110)
print('L^r error on data space for Forward Problem',end='\n\n')
print(error_r_D)
```
L^r error on data space for Forward Problem
[[5.78985998e-03 5.78986860e-03 3.66465835e-05 3.66458716e-05 4.68447031e-06]
[1.00098540e-02 1.00098559e-02 6.48003418e-05 6.48018329e-05 6.83125067e-06]
[1.29985177e-02 1.29985144e-02 8.36155928e-05 8.36174583e-05 7.98012310e-06]
[1.51923261e-02 1.51923210e-02 9.70172096e-05 9.70187251e-05 8.72619888e-06]
[1.68583067e-02 1.68583012e-02 1.07115873e-04 1.07116859e-04 9.26298356e-06]]
```python
#### To make it cleaner, create Directory "images" to store all the figures ####
imagepath = os.path.join(os.getcwd(),"images")
os.makedirs(imagepath,exist_ok=True)
```
#### Left plot in Figure 4
```python
###########################################
######### The left plot of Fig 4 ##########
###########################################
fig = plt.figure()
plt.xlim([0,6])
marker = ['-D', '-o', '-v', '-s', '-.']
for i in range(5):
    plt.semilogy([1,2,3,4,5],error_r_D[i,:],marker[i],label='r = ' + str(i+1))
plt.xlabel('Order of PCE (n)')
plt.ylabel('$L^r$'+' Error in Push-Forward on '+'$\mathcal{D}$')
plt.legend()
# fig.savefig("images/2forward_error_D.png")
fig.savefig("images/Fig4(Left).png")
```
### Verify Theorem 3.1
**Print out Monte Carlo Approximation of $ \|\pi_{\mathcal{D}}^Q(Q(\lambda))-\pi_{\mathcal{D}}^{Q_n}(Q_n(\lambda))\|_{L^2(\Lambda)} $**
```python
##### Generate data for the right plot of Fig 4 #####
np.random.seed(123456)
lam1_seed = np.random.normal(mu1,sigma1,int(1E4))
lam2_seed = np.random.normal(mu2,sigma2,int(1E4)) #int(1E4) since Q_FEM size
error_2_Lam = np.zeros(5)
for i in range(5):
pfprior_sample = Qexact(lam1_seed,lam2_seed)
error_2_Lam[i] = (np.mean((np.abs(pfprior_dens_n(i+1,Q(i+1,lam1_seed,lam2_seed))\
- pfprior_dens(pfprior_sample)))**2))**(1/2)
```
```python
np.set_printoptions(linewidth=110)
print('L^2 error on parameter space for Forward Problem',end='\n\n')
print(error_2_Lam)
```
L^2 error on parameter space for Forward Problem
[2.78836391e-02 2.78836376e-02 2.33315440e-04 2.33317057e-04 1.82851667e-05]
#### Right plot in Figure 4
```python
############################################
######### The right plot of Fig 4 ##########
############################################
fig = plt.figure()
plt.xlim([0,6])
plt.semilogy([1,2,3,4,5], error_2_Lam, '-s' )#, label='$L^2(\Lambda)$ error')
plt.xlabel('Order of PCE (n)')
plt.ylabel('$L^2$'+' Error in Push-Forward on '+'$\Lambda$');
# fig.savefig("images/2forward_error_lam.png")
fig.savefig("images/Fig4(Right).png")
```
## Inverse Problem
Compute $\pi_{\Lambda}^u$ and $\pi_{\Lambda}^{u,n}$
Observed pdf is $\pi_{\mathcal{D}} \sim N(0.3,0.1^2)$
Guess is $\lambda_1\sim N(0,0.1^2)$, $\lambda_2\sim N(0,0.1^2)$
Verify Result of Theorem 4.2:
Assume $Q_n(\lambda)\to Q(\lambda)$ in $L^p(\Lambda)$ and $\pi_{\Lambda}^{init}\in L^p(\mathcal{D})$. If Assumptions 1 and 2 hold and $\{\pi_{\mathcal{D}}^{Q_n}\}$ are uniformly integrable in $L^p(\mathcal{D})$, then
\begin{equation}
\pi_{\Lambda}^{u,n}(\lambda) \to \pi_{\Lambda}^{u}(\lambda) \text{ in } L^p(\Lambda)
\end{equation}
```python
def rejection_sampling(r):
N = r.size # size of proposal sample set
check = np.random.uniform(low=0,high=1,size=N) # create random uniform weights to check r against
M = np.max(r)
new_r = r/M # normalize weights
idx = np.where(new_r>=check)[0] # rejection criterion
return idx
def pdf_obs(x):
return norm.pdf(x, loc=0.1, scale=0.1)
```
```python
#### Use plot to show the difference between the pushforward of the init and the observed #####
plt.figure()
xx = np.linspace(-1,1,100)
plt.plot(xx,pdf_obs(xx),label="$\pi_{\mathcal{D}}^{obs}$")
plt.plot(xx,pfprior_dens(xx), label="$\pi_{\mathcal{D}}^{Q(init)}$")
plt.xlabel("$\mathcal{D}$")
plt.legend();
# fig.savefig("images/2obs_pushforward.png")
```
### Verify Assumption 2
```python
def Meanr(n):
if n==0:
pfprior_sample = Qexact(lam1_seed,lam2_seed)
r = pdf_obs(pfprior_sample)/pfprior_dens(pfprior_sample)
else:
pfprior_sample_n = Q(n,lam1_seed,lam2_seed)
r = pdf_obs(pfprior_sample_n)/pfprior_dens_n(n,pfprior_sample_n)
return np.mean(r)
Expect_r = np.zeros(6)
for i in range(6):
Expect_r[i] = Meanr(i)
```
```python
print('Expected ratio for verifying Assumption 2')
print(Expect_r[1:])
```
Expected ratio for verifying Assumption 2
[1.00179843 1.00179844 1.00197 1.00197 1.00197088]
### Verify Theorem 4.2
Print out Monte Carlo Approximation of $\|\pi_{\Lambda}^{u,n}(\lambda)-\pi_{\Lambda}^u(\lambda)\|_{L^2(\Lambda)} $
\begin{align*}
\|\pi_{\Lambda}^{u,n}(\lambda)-\pi_{\Lambda}^u(\lambda)\|^2_{L^2(\Lambda)} &= \int (\pi^i(\lambda))^2 (r_n(\lambda) - r(\lambda))^2\, d\mu_{\Lambda}\\
&= \mathbb{E}_i (\pi^i(\lambda)(r_n(\lambda) - r(\lambda))^2)\\
&\approx \frac{1}{N} \sum_{j=1}^N \pi^i(\lambda^{(j)})(r_n(\lambda^{(j)}) -r(\lambda^{(j)}) )^2
\end{align*}
```python
##### Load data for Fig 5 #####
# Print out Monte Carlo Approximation of $\|\pi_{\Lambda}^{u,n}(\lambda)-\pi_{\Lambda}^u(\lambda)\|_{L^2(\Lambda)} $
init_eval = np.zeros(int(1E4))
for i in range(int(1E4)):
init_eval[i] = norm.pdf(lam1_seed[i], loc=0.1, scale=0.1)*norm.pdf(lam2_seed[i], loc=0.1, scale=0.1)
r = np.zeros(int(1E4))
for i in range(proc_size):
filename = os.path.join(os.getcwd(), "Data", "r_") + str(i) + ".mat"
partial_data = sio.loadmat(filename)
r += partial_data['r'].reshape(int(1E4))
rn = np.zeros((6,int(1E4)))
for i in range(6):
for j in range(proc_size):
filename = os.path.join(os.getcwd(), "Data", "r") + str(i+1) + '_' + str(j) + ".mat"
partial_data = sio.loadmat(filename)
rn[i,:] += partial_data['r'].reshape(int(1E4))
```
```python
error_Update = np.zeros(5)
for i in range(5):
error_Update[i] = (np.mean(init_eval*(rn[i,:] - r)**2))**(1/2)
```
```python
np.set_printoptions(linewidth=110)
print('L^2 Error for Inverse Problem',end='\n\n')
print(error_Update)
```
L^2 Error for Inverse Problem
[4.25879362e-01 4.25878780e-01 5.37649557e-03 5.37799511e-03 3.87794081e-04]
#### Figure 5
```python
###########################################
################ Figure 5 #################
###########################################
fig = plt.figure()
plt.xlim([0,6])
plt.semilogy([1,2,3,4,5], error_Update, '-s')#, label='$L^2(\Lambda)$ error')
plt.xlabel('Order of PCE (n)')
plt.ylabel('$L^2$'+' Error in Update')
# fig.savefig("images/2inverse_error.png")
fig.savefig("images/Fig5.png")
```
(Source notebook: PDE example.ipynb, repository User-zwj/Lp, MIT license.)
```python
import os
import time
import handcalcs.render
import numpy as np
import pandas as pd
import pandas_profiling
import matplotlib.pyplot as plt
import seaborn as sns
import plotly.graph_objects as go
import plotly.express as px
from IPython.display import display
from autoviz.AutoViz_Class import AutoViz_Class
from sklearn.preprocessing import FunctionTransformer, OneHotEncoder, MinMaxScaler
from sklearn.impute import SimpleImputer
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split, GridSearchCV, KFold, cross_val_score
from xgboost import XGBClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC, LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import Perceptron
from sklearn.linear_model import SGDClassifier
from sklearn.tree import DecisionTreeClassifier
from pycaret.classification import *
from pycaret.datasets import get_data
%matplotlib inline
```
```python
def load_data(name='test'):
path = os.path.join('../data', name + '.csv')
data = pd.read_csv(path)
return data
```
```python
train_data = load_data('train')
test_data = load_data('test')
df_all = train_data.append(test_data, ignore_index=True)
```
```python
display(df_all.head(40))
```
```python
display(df_all.describe())
```
```python
display(df_all.info())
```
```python
df_all.isnull().info()
```
```python
print(df_all.columns)
```
```python
sns.distplot(df_all['Age'], color='black')
plt.show()
age_cat = pd.cut(df_all['Age'], bins=[0, 10, 20, 30, 40, 50, 60, 70, 80], labels=[
'0-10', '10-20', '20-30', '30-40', '40-50', '50-60', '60-70', '70-80'])
fig, axes = plt.subplots(1, 2, figsize=(10, 6))
sns.countplot(age_cat[df_all['Sex'] == 'female'], color='black', ax=axes[0])
sns.countplot(age_cat[(df_all['Survived'] == 1) & (
df_all['Sex'] == 'female')], color='pink', ax=axes[0]).set_title('Female')
sns.countplot(age_cat[df_all['Sex'] == 'male'], color='black', ax=axes[1])
sns.countplot(age_cat[(df_all['Survived'] == 1) & (
df_all['Sex'] == 'male')], color='blue', ax=axes[1]).set_title('Male')
plt.show()
```
```python
sns.countplot(df_all['Pclass'])
plt.show()
```
```python
grouped = df_all.groupby(['Sex', 'Pclass'])
# display(grouped['Age'].median())
ax = grouped['Age'].median().plot(kind='bar', color='black')
ax.set(ylabel='Median Age')
plt.show()
```
```python
df_all.drop('PassengerId', axis=1, inplace=True)
df_all.drop('Ticket', axis=1, inplace=True)
df_all.drop('Name', axis=1, inplace=True)
df_all.loc[df_all['Cabin'].isnull(), 'Cabin'] = 0
df_all.loc[df_all['Cabin'] != 0, 'Cabin'] = 1
df_all['Embarked'].fillna(
df_all['Embarked'].value_counts().index[0], inplace=True)
df_all['Fare'].fillna(df_all['Fare'].median(), inplace=True)
df_all['Age'] = grouped['Age'].apply(lambda x: x.fillna(x.median()))
df_all['Sex'] = df_all['Sex'].map({'male': 0, 'female': 1})
df_all = pd.get_dummies(df_all, columns=['Embarked'])
```
```python
display(df_all.head(4))
report = pandas_profiling.ProfileReport(df_all)
display(report)
```
```python
AV = AutoViz_Class()
report_av = AV.AutoViz('../data/train.csv')
```
```python
scaler = MinMaxScaler()
X = df_all.drop('Survived', axis=1).iloc[:891].values
y = (df_all['Survived'].iloc[:891].values).astype(int)
X = scaler.fit_transform(X)
X_test = df_all.drop('Survived', axis=1).iloc[891:].values
X_test = scaler.transform(X_test)
y_test = (df_all['Survived'].iloc[891:].values).astype(int)
```
```python
lr = LogisticRegression()
lr.fit(X, y)
Y_pred = lr.predict(X_test)
lr.score(X, y)
```
```python
svc = SVC()
svc.fit(X, y)
Y_pred = svc.predict(X_test)
svc.score(X, y)
```
```python
knn = KNeighborsClassifier(n_neighbors = 3)
knn.fit(X, y)
Y_pred = knn.predict(X_test)
knn.score(X, y)
```
```python
gaussian = GaussianNB()
gaussian.fit(X, y)
Y_pred = gaussian.predict(X_test)
gaussian.score(X, y)
```
```python
perceptron = Perceptron()
perceptron.fit(X, y)
Y_pred = perceptron.predict(X_test)
perceptron.score(X, y)
```
```python
linear_svc = LinearSVC()
linear_svc.fit(X, y)
Y_pred = linear_svc.predict(X_test)
linear_svc.score(X, y)
```
```python
sgd = SGDClassifier()
sgd.fit(X, y)
Y_pred = sgd.predict(X_test)
sgd.score(X, y)
```
```python
decision_tree = DecisionTreeClassifier()
decision_tree.fit(X, y)
Y_pred = decision_tree.predict(X_test)
decision_tree.score(X, y)
```
```python
random_forest = RandomForestClassifier(n_estimators=100)
random_forest.fit(X, y)
Y_pred = random_forest.predict(X_test)
random_forest.score(X, y)
```
```python
```
## Logistic Regression
\begin{equation}
P\left[Y=y\ \big|\ x;\omega\right]\approx\sigma(\omega^Tx)
\end{equation}
where
\begin{equation}
\sigma(t)=\frac{1}{1+\exp^{-t}}
\end{equation}
$\omega$ can be obtained using maximum likelihood estimation, i.e. minimizing the negative log-likelihood:
\begin{equation}
J(\omega) = -\frac{1}{m}\sum\limits_{i=1}^{m}\left[y_i\log(\sigma(\omega^Tx_i))+(1-y_i)\log(1-\sigma(\omega^Tx_i))\right]
\end{equation}
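For reference, the gradient of this cost with respect to $\omega$ is the quantity a gradient-based optimizer uses; the hand-written code below applies the per-sample, multi-class (softmax) analogue of this expression:
\begin{equation}
\nabla_\omega J(\omega) = \frac{1}{m}\sum\limits_{i=1}^{m}\left(\sigma(\omega^Tx_i)-y_i\right)x_i
\end{equation}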
```python
def logistic_regression(X, y, alpha=1e-3, num_iter=30, random_state=1):
np.random.seed(random_state)
d, m = X.shape
K = np.max(y)+1 # 0~c-1 => 1~c
w = np.random.randn(d, K)
def softmax(x):
s = np.exp(x)/np.sum(np.exp(x))
return s
def one_hot(y, k):
"""
y=[0,1,2,1]
k=3
return:
[[1. 0. 0.]
[0. 1. 0.]
[0. 0. 1.]
[0. 1. 0.]]
"""
y_one_hot = np.eye(k)[y]
return y_one_hot
def h(x, w):
p = softmax(w.T@x)
return p
def cost(pred, y):
c = np.sum(-one_hot(y, K).T*np.log(pred))
return c
def grad(w, x, y):
Y = one_hot(y, K).T
b = h(x, w)-Y
b = np.reshape(b, (-1, 1))
x = x.reshape((-1, 1))
g = x@b.T
return g
for i in range(num_iter):
for j in np.random.permutation(m):
gradient = grad(w, X[:, j], y[j])
w -= alpha*gradient
return w
```
## Least Square Ridge Classifier
$X\in R^{m\times d},Y\in R^{m\times K}$
#### closed form solution
\begin{equation}
\begin{aligned}
J(\omega) &= \|X\omega-Y\|_2^2+\lambda\|\omega\|_F^2 \\
&=(X\omega-Y)^T\cdot(X\omega-Y)+\lambda\omega^T\omega \\
&=(\omega^TX^T-Y^T)\cdot(X\omega-Y)+\lambda\omega^T\omega \\
&=\omega^TX^TX\omega-\omega^TX^TY-Y^TX\omega+Y^TY+\lambda\omega^T\omega
\end{aligned}
\end{equation}
To minimize: $\frac{\partial J}{\partial\omega}=0$
\begin{equation}
\begin{aligned}
&\frac{\partial J}{\partial\omega}=2X^TX\omega-2X^TY+2\lambda\omega=0 \\
\implies & 2(X^TX+\lambda I)\omega=2X^TY \\
\implies & \omega=(X^TX+\lambda I)^{-1}X^TY
\end{aligned}
\end{equation}
```python
def ridge_classifier(X, y, lambd=1e-4):
d, m = X.shape
k = np.max(y)+1
w = np.linalg.inv(X@X.T+lambd*np.eye(d))@X@np.eye(k)[y]
return w
```
```python
def error(X, y, w):
m = np.shape(y)
y_pred = w.T @ X
y_pred = np.argmax(y_pred, axis=0)
err = np.sum(y_pred == y) / m
return err
```
```python
scores_lr = []
scores_ls = []
fold = 1
for tr, val in KFold(n_splits=5, shuffle=True, random_state=42).split(X, y):
X_train = X[tr]
X_val = X[val]
y_train = y[tr]
y_val = y[val]
best_W_LR = logistic_regression(
X_train.T, y_train, alpha=1e-3, num_iter=300, random_state=42)
val_acc_LR = error(X_val.T, y_val, best_W_LR)
scores_lr.append(val_acc_LR)
print(f'Validation acc LR: Fold {fold}:', val_acc_LR)
W_LS = ridge_classifier(X_train.T, y_train, lambd=1e-4)
val_acc_LS = error(X_val.T, y_val, W_LS)
scores_ls.append(val_acc_LS)
print(f'Validation acc LS: Fold {fold}:', val_acc_LS)
fold += 1
print('-------------------------------')
print("Accuracy Logistic Regression: %0.2f (+/- %0.2f)" %
(np.mean(scores_lr), np.std(scores_lr) * 2))
print("Accuracy Least Squares Ridge: %0.2f (+/- %0.2f)" %
(np.mean(scores_ls), np.std(scores_ls) * 2))
```
```python
def test_clfs(clfs):
for clf in clfs:
start = time()
clf = clf(random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(str(clf), 'results:')
print('Accuracy')
```
```python
data = train_data.drop('Name',axis=1).drop('Ticket',axis=1).drop('PassengerId',axis=1)
clf = setup(data=data,target='Survived')
top3 = compare_models(n_select=3,exclude=['catboost'])
```
```python
tuned_top3 = [tune_model(i) for i in top3]
```
```python
bagged_tuned_top3 = [ensemble_model(i, method = 'Bagging') for i in tuned_top3]
```
```python
blender = blend_models(estimator_list = top3)
```
```python
stacker = stack_models(estimator_list = top3[1:], meta_model = top3[0])
```
```python
best_model = automl(optimize = 'Accuracy')
```
```python
save_model(best_model, 'model')
```
```python
plot_model(best_model,plot='boundary')
```
```python
evaluate_model(best_model)
```
```python
predict_model(best_model)
```
```python
model = finalize_model(best_model)
```
```python
y_test_pred = predict_model(best_model, data=test_data)[['PassengerId','Label']]
y_test_pred['Survived']=y_test_pred['Label']
y_test_pred.drop('Label',axis=1,inplace=True)
print(y_test_pred)
y_test_pred.to_csv('../data/my_submission.csv',header=True,index=None,encoding='utf-8')
```
```python
```
(Source notebook: kaggle/Getting Started/Titanic/Wayne/main.ipynb, repository wangyendt/tianshi_ai_contests, MIT license.)
# AMath 583 Lab 5
## Where to find this notebook:
* This notebook will be in \$UWHPSC/labs/lab5/lab5a.ipynb if you have cloned the class repository. You may need to "git pull" to update.
* Or you can bring it down to your computer (or SageMathCloud project) by typing this in a terminal shell:
* wget http://faculty.washington.edu/rjl/classes/am583s2014/lab5a.ipynb
## Announcements:
* Please sit at assigned tables.
* If you're having problems with notebooks in SMC, note that you can also use IPython from a terminal.
* Homework 2 will be posted by Thursday. Part of the homework will be to read the paper "Best Practices for Scientific Computing" by G. Wilson, D. A. Aruliah, C. T. Brown, et al.,
  which can be found at <http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001745>
that can be found at <http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001745>
```
%pylab inline
```
## SymPy
SymPy is a package of Python tools for symbolic calculations in algebra and calculus. This is notebook has a very brief demo of some features, but you can learn much more about it from various on-line resources such as:
* <http://docs.sympy.org/latest/tutorial/intro.html#what-is-symbolic-computation>
* <http://nbviewer.ipython.org/github/jrjohansson/scientific-python-lectures/blob/master/Lecture-5-Sympy.ipynb>
You can do "from sympy import *" to get all of sympy in your namespace, but note that this will overwrite standard numpy variables such as pi and functions such as sin, cos, exp, sqrt, with sympy versions. Since we may want to mix symbolic and numerical computing, we will be explicit about what's coming from SymPy:
```
import sympy as S # shorthand
```
```
S.init_printing() # so symbolic math is printed nicely
```
### Define a polynomial and factor it:
```
x = S.symbols('x') # defines a symbol
f = x**3 - 3*x**2 - 18*x + 40 # a new symbolic expression
f # print it nicely
```
```
S.factor(f)
```
### We can also differentiate:
```
S.diff(f,x) # differentiate f with respect to x
```
### A messier function and its derivative:
```
f = (x**3 * S.exp(5*x)*S.cos(S.pi*x)) / (1 + S.sqrt(x))
f
```
```
g = S.diff(f,x,n=3)
g
```
Note that if you "print g" it does not come out so pretty, but this might be more useful if you want to cut and paste this into a Fortran program, for example:
```
print g
```
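SymPy can also do such conversions for you. For example, `lambdify` turns the symbolic expression into a numerical function (and `sympy.fcode` would print Fortran source for it); this cell is an extra illustration, not part of the original lab:
```
g_func = S.lambdify(x, g, modules='numpy')  # numerical version of the symbolic 3rd derivative
print(g_func(0.2))                          # should match the value of S.N(g2) computed below
```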
### Evaluate this derivative at the point $x = 0.2$:
```
g2 = g.subs(x, 0.2)
g2
```
Note this substituted 0.2 for $x$, but the special symbol $\pi$ has not be converted to a number. The SymPy function N converts special constants like $\pi$ and $e$ to numerical values:
```
S.N(g2)
```
You can specify how many digits to use when it does this substitution, e.g.
```
S.N(S.pi,n=50)
```
## Symbolic differentiation for a Newton iteration.
In the last two labs you implemented Newton's method for different functions and probably had to compute the derivatives by hand. Here's a way to write and fvals function using SymPy:
```
def fvals(x, debug=False):
from sympy import symbols,diff,sqrt
# First specify f symbolically and differentiate using SymPy:
xs = symbols('xs')
fs = xs**2 - 2.
fprimes = diff(fs, xs)
# Now evaluate numerically at the value x passed in to this function:
f = fs.subs(xs, x)
fprime = fprimes.subs(xs, x)
# The next lines are just for illustrating that this is working:
if debug:
print "fs = ",fs
print "fprimes = ",fprimes
print "x = ",x
return f, fprime
# Try it out:
fv = fvals(3., debug=True)
print "fvals returns: ", fv
```
## Try this out with Newton's method:
```
def newton(fvals, x0, tol):
xk = x0
kmax = 30 # maximum number of iterations
print " k xk f(xk)"
for k in range(kmax):
fxk, fpxk = fvals(xk) # evaluate f(x) and f'(x)
print "%4i %22.15f %22.15f" % (k, xk, fxk)
if abs(fxk) < tol:
break #leave the loop if tolerance satisfied
xk = xk - fxk / fpxk # update xk using Newton's method
return xk
```
```
newton(fvals, 2, 1e-10)
```
## Exercises:
* Use Newton's method to find a root of $f(x) = (x^2 - 2)\exp(-0.1 x^2) + 0.5$ from Lab 4.
* Let $f(x) = \sqrt{\cos(2\pi x) e^x}$. Compute $f''(0.1)$, the second derivative evaluated at $x=0.1$. This value is needed for the Lab 5 Quiz.
Remember you can get help in IPython by adding a ? to the end of an object and running the cell...
```
S.integrate?
```
```
S.diff?
```
(Source notebook: uwhpsc/labs/lab5/lab5a.ipynb, repository philipwangdk/HPC, MIT license.)
# Optimization of Sao Paulo traffic
Lead author: Jules Deschamps.
This notebook presents a simple use case of *information geometry*, in the context of *traffic optimization* in Sao Paulo.
We rely on a dataset listing all traffic jams in Sao Paulo for the past two decades (their location, date, their size, their duration, i.e. how long the traffic was jammed) to propose a solution involving information geometry.
This analysis relies heavily on the geometry of the *Gamma manifold*, which is particularly adapted to addressing this situation, as seen later on.
<center>
Figure 1: Sao Paulo: A city with 180 km traffic jams -- BBC News
</center>
# 1. Introduction and Motivation
40% of São Paulo residents own a motor vehicle. While this is lower than cities in the United States, it is still higher than most other Latin American cities, and São Paulo’s infrastructure was not built to accommodate such a large number of private vehicles. As The Urban Mobility Research Network of São Paulo found, some São Paulo residents spend one month per year in traffic, or 2.4 hours per day. As car ownership increases, and with it further congestion, this time spent in traffic will only grow. In that regard, considering the increase in car ownership and air pollution, even though widening roads only brings a temporary solution, it can relieve residents of some of the absurd amount of time they spend in traffic.
In the role of Sao Paulo's city planners, we have been granted a certain amount of resources to solve the congestion problem of Sao Paulo. The issue at hand becomes that of choosing which roads to renovate. More formally, the goal is eventually to reduce the mean expected congestion time in traffic.
### Setup
```python
import os
import subprocess
geomstats_gitroot_path = subprocess.check_output(
["git", "rev-parse", "--show-toplevel"], universal_newlines=True
)
os.chdir(geomstats_gitroot_path[:-1])
print("Working directory: ", os.getcwd())
```
Working directory: C:\Users\Jules\Documents\geomstats
```python
import matplotlib.pyplot as plt
import geomstats.backend as gs
import pandas as pd
```
# 2. Dataset description
We have at our disposal a dataset (accessible [here](https://www.kaggle.com/datasets/danlessa/sao-paulo-traffic-jams-since-2001)) containing traffic jam size measurements by CET at several locations on São Paulo between 2001 and 2019, with more than 5M entries.
Available columns:
- passage (str) - Name of the passage
- direction (str)
- type (str) - Indicates if the passage is an expressway (E)
- region (str) - São Paulo region
- timestamp (datetime) - When the traffic jam was measured (UTC-4)
- jam_size (int) - Traffic jam in meters
- segment (str) - Where the passage is located
Our modeling will not take into account the fact that many of the passages/roads must have been renovated between 2001 and 2019. Similarly, the dataset does not offer information on the width of the roads (even though we could estimate it from the type of the road), and therefore on their flow rate: this is an obvious limitation of our analysis, but it is easy to fix if the relevant data can be accessed.
## Pre-processing the dataset
```python
from geomstats.datasets.utils import load_sao_paulo
df, jam_count = load_sao_paulo()
```
INFO: Data has already been downloaded... using cached file ('C:\Users\Jules\.geomstats_data\jam.zip').
Some of the columns of the dataset are not necessary for our study: __index__, __type__ and __segment__ do not seem to add any value to the table in our case. In addition, the times (__timestamp__) at which jams occur are not relevant in that specific format: it would make much more sense to have the duration of a given jam. We also decide to drop the __jam_size__ column.
Additionally, we would want to transform the original dataset so as to access a more relevant table, with features:
- name of the road (primary key = passage + direction for instance, segments are regrouped within same key)
- date (day only)
- duration of the traffic jam (in h)
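The helper used below performs this transformation for us. Purely as an illustration, a rough sketch of the kind of reshaping involved could look as follows, under two assumptions that are not stated in the original notebook: the raw table is loaded in a DataFrame `raw` with the columns listed above, and jams are measured at a fixed half-hour interval (consistent with the durations being multiples of 0.5 h):
```python
raw["name"] = raw["passage"] + " " + raw["direction"]   # primary key: passage + direction
raw["date"] = raw["timestamp"].dt.date
jammed = raw[raw["jam_size"] > 0]
df_sketch = (
    jammed.groupby(["name", "date"])
    .size()                # number of jammed measurements per road and day
    .mul(0.5)              # half an hour per measurement -> duration in hours
    .rename("duration")
    .reset_index()
)
```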
```python
df
```
|        | name                                | date       | duration |
|--------|-------------------------------------|------------|----------|
| 0      | Abraão Ribeiro, Av Dr (F) Bairro... | 2005-01-12 | 1.0      |
| 1      | Abraão Ribeiro, Av Dr (F) Bairro... | 2005-01-20 | 1.5      |
| 2      | Abraão Ribeiro, Av Dr (F) Bairro... | 2005-01-21 | 1.5      |
| 3      | Abraão Ribeiro, Av Dr (F) Bairro... | 2005-02-24 | 4.5      |
| 4      | Abraão Ribeiro, Av Dr (F) Bairro... | 2005-02-28 | 2.0      |
| ...    | ...                                 | ...        | ...      |
| 650519 | Xangai, Vd unico/...                | 2004-01-16 | 3.5      |
| 650520 | Xangai, Vd unico/...                | 2004-03-24 | 4.0      |
| 650521 | Xangai, Vd unico/...                | 2004-03-26 | 2.0      |
| 650522 | Xangai, Vd unico/...                | 2004-04-29 | 4.0      |
| 650523 | Xangai, Vd unico/...                | 2004-06-01 | 3.0      |

650524 rows × 3 columns
Above is the table __df__ of all traffic jams and their durations, for each day.
```python
jam_count
```
{'Abraão Ribeiro, Av Dr (F) Bairro/Centro ': 185,
'Abraão Ribeiro, Av Dr (F) Centro/Bairro ': 28,
'Abraão de Morais, Av Prof/Imig Santos/São Paulo ': 1870,
'Abraão de Morais, Av Prof/Imig São Paulo/Santos ': 852,
'Abraão de Morais, Av Prof/Imigrantes (F)Santos/São Paulo ': 396,
'Abraão de Morais, Av Prof/Imigrantes (F)São Paulo/Santos ': 107,
'Adolfo Pinheiro e Lgo 13/05 Bairro/Centro ': 391,
'Adolfo Pinheiro e Lgo 13/05 Centro/Bairro ': 657,
'Aliomar Baleeiro, Vd Min Anchieta/Imigrantes ': 3831,
'Aliomar Baleeiro, Vd Min Imigrantes/Anchieta ': 3191,
'Aliomar Baleeiro, Vd Min (F) Anchieta/Imigrantes ': 1,
'Alvarenga, R único// ': 1919,
'Amaro, Al Sto único// ': 1000,
'Amaro, Av Sto (F) Bairro/Centro ': 647,
'Amaro, Av Sto (F) Centro/Bairro ': 316,
'Amaro, Av Sto (Pavao/Nebraska) (F) Bairro/Centro ': 41,
'Amaro, Av Sto (Pavao/Nebraska) (F) Centro/Bairro ': 23,
'Amaro, Av Sto - DEC SA (F) Bairro/Centro ': 6,
'Amaro, Av Sto - DEC SA (F) Centro/Bairro ': 8,
'Amaro, Av Sto DEC IB Bairro/Centro ': 1814,
'Amaro, Av Sto DEC IB Centro/Bairro ': 1280,
'Amaro, Av Sto DEC SA Bairro/Centro ': 830,
'Amaro, Av Sto DEC SA Centro/Bairro ': 803,
'Anchieta, Via Santos/São Paulo ': 836,
'Anchieta, Via São Paulo/Santos ': 1293,
'Angélica, Av Bairro/Centro ': 19,
'Angélica, Av Centro/Bairro ': 29,
'Angélica, Av (F) Centro/Bairro ': 6,
'Antonio Nakashima, Vd único// ': 1432,
'Antártica, Vd Limão/Sumaré ': 1331,
'Antártica, Vd Sumaré/Limão ': 1308,
'Antônio Joaquim de Moura Andrade, Av Ibirapuera/Marginal ': 7071,
'Antônio Joaquim de Moura Andrade, Av Marginal/Ibirapuera ': 2428,
'Arcoverde, R Card //unico ': 238,
'Arcoverde, R Cardeal (F) único// ': 1270,
'Aricanduva, Av/Elev/Pt Itaquera/Marginal ': 4929,
'Aricanduva, Av/Elev/Pt Marginal/Itaquera ': 3891,
'Aricanduva/Elevado/Ponte( F) Itaquera/Marginal ': 2258,
'Aricanduva/Elevado/Ponte( F) Marginal/Itaquera ': 1315,
'Arnaldo, Av Dr Consolação/Sumare ': 2596,
'Arnaldo, Av Dr Sumare/Consolação ': 5998,
'Arthur da Costa e Silva, Elev Pres Lapa/Penha ': 4972,
'Arthur da Costa e Silva, Elev Pres Penha/Lapa ': 3743,
'Ary Torres, Eng. Pte único// ': 3517,
'Asc.Reis/R.Berta(Local) Bairro/Centro ': 2614,
'Asc.Reis/R.Berta(Local) Centro/Bairro ': 998,
'Ataliba Leonel, Av. Gal. Bairro/Centro ': 56,
'Ataliba Leonel, Av. Gal. Centro/Bairro ': 110,
'Atilio Fontana, Pte Capital/Interior ': 71,
'Atilio Fontana, Pte Interior/Capital ': 3640,
'Atlantica, AV Bairro/Centro ': 485,
'Atlantica, AV Centro/Bairro ': 24,
'Ayrton Senna I, Tn (NÃO USAR) Centro/Bairro ': 2,
'Ayrton Senna I, Túnel unico// ': 7159,
'Ayrton Senna II, Túnel unico// ': 3815,
'Bandeirantes, Av dos Imigrantes/Marginal ': 7714,
'Bandeirantes, Av dos Marginal/Imigrantes ': 6796,
'Bento, Lgo São //unico ': 9,
'Bernardino/Verg/Noe/Domingos/Jabaquara Bairro/Centro ': 2153,
'Bernardino/Verg/Noe/Domingos/Jabaquara Centro/Bairro ': 519,
'Bernardo Goldfarb, Pte Bairro/Centro ': 1079,
'Bernardo Goldfarb, Pte Centro/Bairro ': 174,
'Brasil, Av Ibirapuera/Pinheiros ': 246,
'Brasil, Av Pinheiros/Ibirapuera ': 537,
'Brás Leme, Av / Pte Casa Verde Bairro/Centro ': 2002,
'Brás Leme, Av / Pte Casa Verde Centro/Bairro ': 419,
'Butantã, R //unico ': 206,
'Butantã, R (F) unico// ': 2032,
'CJardim/Europa, Av/Colômbia, R (F) Bairro/Centro ': 1,
'Caetano Alvares, Av. Eng. Bairro/Centro ': 26,
'Caetano Alvares, Av. Eng. Centro/Bairro ': 85,
'Camargo, R único// ': 1430,
'Carlos Caldeira Filho, Av Bairro/Centro ': 1209,
'Carlos Caldeira Filho, Av Centro/Bairro ': 345,
'Carrão, Av Cons (F) Bairro/Centro ': 488,
'Carrão, Av Cons (F) Centro/Bairro ': 61,
'Carrão, Av Cons Bairro/Centro ': 264,
'Carrão, Av Cons Centro/Bairro ': 116,
'Casa Verde, Pte e Av Bras Leme (F) Bairro/Centro ': 874,
'Casa Verde, Pte e Av Bras Leme (F) Centro/Bairro ': 324,
'Catiguá, R / Melo Peixoto, R (F) Bairro/Centro ': 1339,
'Catiguá, R / Melo Peixoto, R (F) Centro/Bairro ': 21,
'Celso Garcia, Av Bairro/Centro ': 124,
'Celso Garcia, Av Centro/Bairro ': 549,
'Chucri Zaidan, Av Dr Bandeirantes/Morumbi ': 1706,
'Chucri Zaidan, Av Dr Morumbi/Bandeirantes ': 285,
'Chucri Zaidan, Av Dr (F) Bandeirantes/Morumbi ': 461,
'Chucri Zaidan, Av Dr (F) Morumbi/Bandeirantes ': 380,
'Cidade Jardim / Europa / Colômbia Bairro/Centro ': 6013,
'Cidade Jardim / Europa / Colômbia Centro/Bairro ': 5448,
'Cidade Universitária, Pt PanAmericana/USP ': 3050,
'Cidade Universitária, Pt USP/PanAmericana ': 179,
'Cidade Universitária, Pte (F) PanAmericana/USP ': 1744,
'Cidade Universitária, Pte (F) USP/PanAmericana ': 116,
'Clelia, R (F) único// ': 172,
'Clélia, R //unico ': 391,
'Consolação, R da Bairro/Centro ': 3618,
'Consolação, R da Centro/Bairro ': 3679,
'Consolação, R da (F) Bairro/Centro ': 2257,
'Consolação, R da (F) Centro/Bairro ': 2784,
'Copa-Afonso de S. Souza/Harry DannembergAricanduva/Itaquera ': 1,
'Copa-Aguia de Haia A Alvim/S Miguel ': 11,
'Copa-Aguia de Haia S Miguel/A Alvim ': 4,
'Copa-Aguia de Haia (F) S Miguel/A Alvim ': 1,
'Copa-Campanella Bairro/Centro ': 10,
'Copa-Campanella Centro/Bairro ': 11,
'Copa-Itaquera/Lider Itaquera/Vila Formosa ': 6,
'Copa-Itaquera/Lider Vila Formosa/Itaquera ': 14,
'Copa-Jacu Pessêgo-N. Trabalhadores A Senna/Maua ': 4,
'Copa-Jacu Pessêgo-N. Trabalhadores Maua/A Senna ': 8,
'Copa-Luiz Ayres Bairro/Centro ': 2,
'Copa-Luiz Ayres Centro/Bairro ': 11,
'Copa-Pires do Rio Bairro/Centro ': 7,
'Copa-Pires do Rio Centro/Bairro ': 8,
'Corifeu de A Marques, Av Bairro/Centro ': 663,
'Corifeu de A Marques, Av Centro/Bairro ': 141,
'Corifeu de Azevedo Marques, Av (F) Bairro/Centro ': 73,
'Corifeu de Azevedo Marques, Av (F) Centro/Bairro ': 4,
'Cruzeiro do Sul, Pt e Av Ipiranga/Santana ': 1377,
'Cruzeiro do Sul, Pt e Av Santana/Ipiranga ': 2409,
'Cruzeiro do Sul, Pte e Av (F) Ipiranga/Santana ': 397,
'Cruzeiro do Sul, Pte e Av (F) Santana/Ipiranga ': 421,
'Dianópolis, Av //unico ': 395,
'Dianópolis, Av (F) unico// ': 181,
'Diário Popular, Vd único// ': 189,
'Dom Pedro (Av Exterior) Pq //unico ': 15,
'Dom Pedro (Av do Exterior), Parque (F) unico// ': 43,
'Edgar Facó, Av. Bairro/Centro ': 1197,
'Edgar Facó, Av. Centro/Bairro ': 160,
'Eliseu de Almeida, Av Bairro/Centro ': 821,
'Eliseu de Almeida, Av Centro/Bairro ': 75,
'Ermano Marchetti, Av Barra Funda/Lapa ': 710,
'Ermano Marchetti, Av Lapa/Barra Funda ': 606,
'Escola Politécnica, Av Bairro/Centro ': 752,
'Escola Politécnica, Av Centro/Bairro ': 39,
'Estado, Av do - DEC CT Ipiranga/Santana ': 7017,
'Estado, Av do - DEC CT Santana/Ipiranga ': 4633,
'Estado, Av do - DEC VILA PRUDENTE Ipiranga/Santana ': 3409,
'Estado, Av do - DEC VILA PRUDENTE Santana/Ipiranga ': 3920,
'Estela, R unico// ': 41,
'Eusébio M/Francisco Morato, Av Prof Bairro/Centro ': 5021,
'Eusébio M/Francisco Morato, Av Prof Centro/Bairro ': 2887,
'Eusébio Stevaux, Vd único// ': 1445,
'F1 - Jacinto Júlio, Av Bairro/Centro ': 1,
'F1 - Jacinto Júlio, Av Centro/Bairro ': 3,
'F1 - Jangadeiro, Av Bairro/Centro ': 4,
'F1 - Jangadeiro, Av Centro/Bairro ': 2,
'F1 - João Paulo da Silva, Av Bairro/Centro ': 2,
'F1 - Miguel Yunes/Pte Vitorino Goulart Interlagos/Marginal ': 2,
'F1 - Papini, Av Prof Centro/Bairro ': 3,
'F1 - Rio Bonito, Av Bairro/Centro ': 2,
'F1 - Rio Bonito, Av Centro/Bairro ': 1,
'F1 - Rubens Montanaro de Borba, Av Bairro/Centro ': 1,
'F1 - Teotonio Vilela, Av Sen Bairro/Centro ': 3,
'F1 - Teotonio Vilela, Av Sen Centro/Bairro ': 3,
'Faria Lima, Av Brig Itaim/Pinheiros ': 3606,
'Faria Lima, Av Brig Pinheiros/Itaim ': 5154,
'Fernando Vieira de Mello Túnel(Reboucas)Bairro/Centro ': 5663,
'Fernando Vieira de Mello Túnel(Reboucas)Centro/Bairro ': 5516,
'Ferradura unico// ': 698,
'Figueira, R da unico// ': 802,
'Francisco Matarazzo, Av Bairro/Centro ': 1170,
'Francisco Matarazzo, Av Centro/Bairro ': 1316,
'Francisco Matarazzo, Av (F) Bairro/Centro ': 1895,
'Francisco Matarazzo, Av (F) Centro/Bairro ': 2070,
'Francisco Mesquita, Av Dr S. Caetano/Sao Paulo ': 2214,
'Francisco Mesquita, Av Dr Sao Paulo/S. Caetano ': 168,
'Francisco Morato, Av Prof Bairro/Centro ': 1751,
'Francisco Morato, Av Prof Centro/Bairro ': 1145,
'Frederico Eduardo Mayr, Vd unico// ': 861,
'Freguesia, Pte Freguesia/Lapa ': 775,
'Freguesia, Pte Lapa/Freguesia ': 323,
'Freguesia/Com Martinelli, Pte (F) Freguesia/Lapa ': 507,
'Freguesia/Com Martinelli, Pte (F) Lapa/Freguesia ': 110,
'Gabriel, Av São (F) Bairro/Centro ': 127,
'Gabriel, Av São (F) Centro/Bairro ': 271,
'Gabriel, Av São Bairro/Centro ': 114,
'Gabriel, Av São Centro/Bairro ': 428,
'Gastão Vidigal, Av Dr Marginal/Pinheiros ': 804,
'Gastão Vidigal, Av Dr Pinheiros/Marginal ': 670,
'Gastão Vidigal, Av Dr (F) Lapa/Pinheiros ': 163,
'Gastão Vidigal, Av Dr (F) Pinheiros/Lapa ': 132,
'Gasômetro, R e Vd único// ': 1324,
'Gazeta do Ipiranga, Vd unico// ': 301,
'Grande Sao Paulo, Vd Ipiranga/Vila Prudente ': 3851,
'Grande Sao Paulo, Vd Vila Prudente/Ipiranga ': 1965,
'Groenlandia, R unico// ': 2879,
'Guadalajara, Vd Belem/Mooca ': 50,
'Guadalajara, Vd Mooca/Belem ': 20,
'Guaicurus, R //unico ': 143,
'Guaicurus, R (F) unico// ': 111,
'Guarapiranga, Av Bairro/Centro ': 755,
'Guarapiranga, Av Centro/Bairro ': 751,
'Guido Caloi, Av Bairro/Centro ': 292,
'Guido Caloi, Av Centro/Bairro ': 423,
'Guilherme Dumont Vilares, Av Dr Campo Limpo/Morato ': 16,
'Guilherme Dumont Vilares, Av Dr Morato/Campo Limpo ': 7,
'Heitor Penteado, R Bairro/Centro ': 202,
'Heitor Penteado, R Centro/Bairro ': 179,
'Ibirapuera, Av Bairro/Centro ': 2225,
'Ibirapuera, Av Centro/Bairro ': 3705,
'Ibirapuera, Av (F) Bairro/Centro ': 2433,
'Ibirapuera, Av (F) Centro/Bairro ': 2911,
'Ibitirama, R unico// ': 171,
'Iguatemi, R //unico ': 336,
'Iguatemi, R (F) único// ': 437,
'Inajar de Souza, Av Freguesia/Lapa ': 794,
'Inajar de Souza, Av Lapa/Freguesia ': 34,
'Inajar de Souza, Av (F) Freguesia/Lapa ': 16,
'Interlagos, Av I Bairro/Centro ': 2008,
'Interlagos, Av I Centro/Bairro ': 1110,
'Ipiranga, Av unico// ': 1523,
'Itapecerica, Est de Bairro/Centro ': 871,
'Itapecerica, Est de Centro/Bairro ': 145,
'Itapecirica, Est de (F) Bairro/Centro ': 274,
'Itapecirica, Est de (F) Centro/Bairro ': 86,
'Itápolis, R //unico ': 6,
'Jacinto Júlio, Av Bairro/Centro ': 2,
'Jaguare, Av Bairro/Centro ': 353,
'Jaguare, Av Centro/Bairro ': 287,
'Jaguaré, Pte Jaguaré/Lapa ': 314,
'Jaguaré, Pte Lapa/Jaguaré ': 828,
'Jangadeiro, Av Bairro/Centro ': 1,
'Jangadeiro, Av Centro/Bairro ': 4,
'Jose Colassuono, Vd unico// ': 1534,
'Jose Felix, R Campo Limpo/Morato ': 6,
'Jose Felix, R Morato/Campo Limpo ': 2,
'José Diniz, Av Ver (F) Bairro/Centro ': 915,
'José Diniz, Av Ver (F) Centro/Bairro ': 1726,
'José Diniz, Av Ver Bairro/Centro ': 2743,
'José Diniz, Av Ver Centro/Bairro ': 3017,
'José Garzotti, Av. Pe Teotônio/Batista Botelho ': 1,
'José Maria, Av Pe Bairro/Centro ': 267,
'José Maria, Av Pe Centro/Bairro ': 18,
'João De Luca, Ver Diadema/Marginal ': 122,
'João De Luca, Ver Marginal/Diadema ': 68,
'João Dias, Av Bairro/Centro ': 2689,
'João Dias, Av Centro/Bairro ': 2667,
'João Dias, Av (F) Bairro/Centro ': 1580,
'João Dias, Av (F) Centro/Bairro ': 1350,
'João Goulart, Elev Pres Lapa/Penha ': 173,
'João Goulart, Elev Pres Penha/Lapa ': 95,
'João Jorge Saad,Vd (Cebolinha) Centro/Bairro ': 384,
'João Mendes, Pça //unico ': 642,
'João Paulo da Silva, Av Bairro/Centro ': 5,
'João Paulo da Silva, Av Centro/Bairro ': 2,
'João, Av São único// ': 185,
'Julio de Mesquita, Pte Lapa/Piqueri ': 29,
'Julio de Mesquita, Pte Limão/Pompéia ': 667,
'Julio de Mesquita, Pte Piqueri/Lapa ': 77,
'Julio de Mesquita, Pte Pompéia/Limão ': 55,
'Juntas Provisórias, R das Ipiranga/Vila Prudente ': 2782,
'Juntas Provisórias, R das Vila Prudente/Ipiranga ': 1425,
'Juscelino Kubitschek, Av Pres Ibirapuera/Pinheiros ': 7635,
'Juscelino Kubitschek, Av Pres Pinheiros/Ibirapuera ': 8411,
'Jânio Quadros, Pres. Pte (Vila Maria) Bairro/Centro ': 114,
'Jânio Quadros, Pres. Pte (Vila Maria) Centro/Bairro ': 68,
'Jânio Quadros, Túnel unico// ': 4050,
'Lapa, Vd Lapa/Piqueri ': 306,
'Lapa, Vd Piqueri/Lapa ': 684,
'Liberdade/ Vergueiro, Av Bairro/Centro ': 841,
'Liberdade/ Vergueiro, Av Centro/Bairro ': 735,
'Ligação - Dec HG (F) Lapa/Penha ': 2672,
'Ligação - Dec HG (F) Penha/Lapa ': 2958,
'Ligação Leste-Oeste Lapa/Penha ': 4424,
'Ligação Leste-Oeste Penha/Lapa ': 4357,
'Limão / Av. Ordem e Progresso, Pte Limão/Sumaré ': 3797,
'Limão / Av. Ordem e Progresso, Pte Sumaré/Limão ': 182,
'Limão, Pt/Ordem e Progresso, Av (N USAR)Limão/Sumaré ': 3,
'Lineu de Paula Machado, Av Bairro/Centro ': 1190,
'Lineu de Paula Machado, Av Butanta/Morumbi ': 849,
'Lineu de Paula Machado, Av Centro/Bairro ': 70,
'Lineu de Paula Machado, Av Joquei/USP ': 62,
'Lineu de Paula Machado, Av Morumbi/Butanta ': 79,
'Lineu de Paula Machado, Av USP/Joquei ': 597,
'Luis Antonio, Av Brig Bairro/Centro ': 433,
'Luis Antonio, Av Brig Centro/Bairro ': 171,
'Luis Antonio, Av Brig. Dec-PA (F) Bairro/Centro ': 357,
'Luis Antonio, Av Brig. Dec-PA (F) Centro/Bairro ': 20,
'Luis Carlos Berrini, Av Eng Bandeirantes/Morumbi ': 1593,
'Luis Carlos Berrini, Av Eng Morumbi/Bandeirantes ': 967,
'Luis Carlos Berrini,Eng Av (F) Bandeirantes/Morumbi ': 1514,
'Luis Carlos Berrini,Eng Av (F) Morumbi/Bandeirantes ': 906,
'Luis, Av. São //unico ': 195,
'Luiz Ignácio de Anhaia Mello, Av Prof Bairro/Centro ': 4055,
'Luiz Ignácio de Anhaia Mello, Av Prof Centro/Bairro ': 2964,
'Luiz Ignácio de Anhaia Mello, Av Prof Sapopem./Vila Prudente ': 706,
'Luiz Ignácio de Anhaia Mello, Av Prof Vila Prudente/Sapopem. ': 360,
'M Paula, R/Jacarei/n Julho, Vd (F) //unico ': 2,
'M.M.D.C, R único// ': 1597,
'Manuel de Teffé, R Bairro/Centro ': 7,
'Marginal Pinheiros Castelo/Interlagos ': 10332,
'Marginal Pinheiros Interlagos/Castelo ': 9675,
'Marginal Tietê A.Senna/Castelo Branco ': 10776,
'Marginal Tietê Castelo/A.Senna ': 9885,
'Marginal Tietê - Pista Central A.Senna/Castelo Branco ': 2883,
'Marginal Tietê - Pista Central Castelo/A.Senna ': 2974,
'Maria Coelho Aguiar, Av Bairro/Centro ': 746,
'Maria Coelho Aguiar, Av Centro/Bairro ': 2554,
'Maria Maluf, CV Anchieta/Imigrantes ': 4512,
'Maria Maluf, CV Imigrantes/Anchieta ': 5019,
'Maria Paula/Vd Jacareí/Vd 9 de Julho único// ': 2778,
'Matriz, R da unico// ': 14,
'Max Feffer Túnel (Cidade Jardim) Bairro/Centro ': 4793,
'Max Feffer Túnel (Cidade Jardim) Centro/Bairro ': 2667,
'Melo Peixoto, R Bairro/Centro ': 1244,
'Melo Peixoto, R Centro/Bairro ': 1352,
'Melo Peixoto, R (F) Bairro/Centro ': 286,
'Melo Peixoto, R (F) Centro/Bairro ': 126,
'Mercúrio, Av único// ': 3250,
'Miguel Estefano,VD Abraão de Morais/Cursino ': 26,
'Miguel Estefano,VD Cursino/Abraão de Morais ': 10,
'Miguel Yunes/Pte Vitorino Goulart Interlagos/Marginal ': 6,
'Miguel Yunes/Pte Vitorino Goulart Marginal/Interlagos ': 3,
'Morumbi, Av Campo Limpo/Santo Amaro ': 194,
'Morumbi, Av Morumbi/Santo Amaro ': 1134,
'Morumbi, Av Santo Amaro/Campo Limpo ': 196,
'Morumbi, Av Santo Amaro/Morumbi ': 358,
'Morumbi, Av. e Pte Aeroporto/Marginal ': 2175,
'Morumbi, Av. e Pte Marginal/Aeroporto ': 2632,
'M´Boi Mirim, Est Bairro/Centro ': 8,
'M´Boi Mirim, Est Centro/Bairro ': 1,
'Natanael, R Mj (F) Estadio/Jardins ': 279,
'Nova Morumbi, Pte (F) unico// ': 1026,
'Nove de Julho, Av Bairro/Centro ': 4115,
'Nove de Julho, Av Centro/Bairro ': 5088,
'Nove de Julho, Av (F) Bairro/Centro ': 2488,
'Nove de Julho, Av (F) Centro/Bairro ': 2439,
'Nove de Julho, Av - DEC PA (F) Bairro/Centro ': 297,
'Nove de Julho, Av - DEC PA (F) Centro/Bairro ': 719,
'Olivia Guedes Penteado, R Bairro/Centro ': 2,
'Oscar Americano, R Eng Bairro/Centro ': 2037,
'Oscar Americano, R Eng Centro/Bairro ': 1767,
'Oscar Americano, R Eng (F) Bairro/Centro ': 445,
'Oscar Americano, R Eng (F) Centro/Bairro ': 62,
'Outeiro, Av NSra do Batista Botelho/Teotônio ': 1,
'Pacaembu / Mj Natanael / Abraao Ribeiro Estádio/Marginal ': 1390,
'Pacaembu / Mj Natanael / Abraao Ribeiro Marginal/Estádio ': 4199,
'Pacaembu, Vd (F) Bairro/Centro ': 330,
'Pacaembu, Vd (F) Centro/Bairro ': 279,
'Pacheco e Chaves, Cap. Vd Ipiranga/Vila Prudente ': 188,
'Pacheco e Chaves, Cap. Vd Vila Prudente/Ipiranga ': 1791,
'Pacheco e Chaves, R Cap Bairro/Centro ': 310,
'Pacheco e Chaves, R Cap Centro/Bairro ': 36,
'Papini, Av Prof Bairro/Centro ': 5,
'Papini, Av Prof Centro/Bairro ': 20,
'Paulina, Vd Dona //unico ': 2644,
'Paulina, Vd Dona (F) único// ': 1935,
'Paulista, Av Consolação/Paraiso ': 6247,
'Paulista, Av Paraiso/Consolação ': 9888,
'Paulo Eiró, R único// ': 2,
'Paulo VI, Av Limão/Sumaré ': 72,
'Paulo VI, Av Sumaré/Limão ': 36,
'Pedro Alvares Cabral, Av Pinheiros/Vila Mariana ': 680,
'Pedro Alvares Cabral, Av Vila Mariana/Pinheiros ': 1231,
'Pedro Alvares Cabral, Av (F) Pinheiros/Vila Mariana ': 791,
'Pedro Alvares Cabral, Av (F) Vila Mariana/Pinheiros ': 1975,
'Pedro I, Av. Dom Bairro/Centro ': 13,
'Pedro I, Av. Dom Centro/Bairro ': 18,
'Pinedo, Av de unico// ': 1,
'Piqueri, Pt Lapa/Piqueri ': 579,
'Piqueri, Pt Piqueri/Lapa ': 903,
'Piqueri, Pte (F) Lapa/Piqueri ': 278,
'Piqueri, Pte (F) Piqueri/Lapa ': 1054,
'Pirajussara, Av Bairro/Centro ': 17,
'Pirajussara, Av Centro/Bairro ': 53,
'Pompeia, Vd Marginal/Pompeia ': 2452,
'Pompeia, Vd Pompeia/Marginal ': 384,
'Queiroz Filho /Jaguaré, Pte Bairro/Centro ': 554,
'Queiroz Filho /Jaguaré, Pte Centro/Bairro ': 1276,
'Queiroz, Av. Sen. //unico ': 956,
'Radial Leste - DEC BR Bairro/Centro ': 6210,
'Radial Leste - DEC BR Centro/Bairro ': 4474,
'Radial Leste - DEC MO Bairro/Centro ': 7150,
'Radial Leste - DEC MO Centro/Bairro ': 4722,
'Raimundo Pereira Magalhaes, Av Bairro/Centro ': 47,
'Raimundo Pereira Magalhaes, Av Centro/Bairro ': 5,
'Raimundo Pereira de Magalhães - Norte Bairro/Centro ': 12,
'Raimundo Pereira de Magalhães - Norte Centro/Bairro ': 3,
'Raimundo Pereira de Magalhães - Sul Bairro/Centro ': 4,
'Raimundo Pereira de Magalhães - Sul Centro/Bairro ': 5,
'Rangel Pestana, Av (F) unico// ': 163,
'Rangel Pestana, Av DEC BR //unico ': 1134,
'Rangel Pestana, Av DEC CT Bairro/Centro ': 548,
'Rangel Pestana, Av DEC CT Centro/Bairro ': 66,
'Raposo Tavares, Via Capital/Interior ': 632,
'Raposo Tavares, Via Interior/Capital ': 3652,
'Reação, R único// ': 1388,
'Rebouças/ Eusébio Matoso, Av Bairro/Centro ': 8167,
'Rebouças/ Eusébio Matoso, Av Centro/Bairro ': 8720,
'Remédios, Pte Lapa/Remédios ': 1471,
'Remédios, Pte Remédios/Lapa ': 1965,
'Republica da Armenia, Vd único// ': 4197,
'República do Líbano, Av Bairro/Centro ': 750,
'República do Líbano, Av Centro/Bairro ': 788,
'República do Líbano, Av (F) Bairro/Centro ': 248,
'República do Líbano, Av (F) Centro/Bairro ': 129,
'Ribeiro Lacerda, R Abraão de Morais/Cursino ': 2,
'Ribeiro Lacerda, R Cursino/Abraão de Morais ': 21,
'Ricardo Jafet, Av Bairro/Centro ': 159,
'Ricardo Jafet, Av Centro/Bairro ': 228,
'Rio Bonito, Av Centro/Bairro ': 1,
'Rio Branco, Br do unico// ': 122,
'Rio Branco, Av Bairro/Centro ': 195,
'Rio Branco, Av Centro/Bairro ': 342,
'Robert Kennedy, Av Bairro/Centro ': 164,
'Robert Kennedy, Av Centro/Bairro ': 37,
'Roberto Abreu Sodré, Vd Bairro/Centro ': 2243,
'Roberto Abreu Sodré, Vd Centro/Bairro ': 169,
'Roque Petroni Júnior, Av Diadema/Marginal ': 1253,
'Roque Petroni Júnior, Av Marginal/Diadema ': 683,
'Roque Petroni Júnior, Av (F) Diadema/Marginal ': 881,
'Roque Petroni Júnior, Av (F) Marginal/Diadema ': 496,
'Rudge, Av/Orlando Murgel, Vd Bairro/Centro ': 484,
'Rudge, Av/Orlando Murgel, Vd Centro/Bairro ': 541,
'Rudge, Av/Orlando Murgel, Vd (F) Bairro/Centro ': 233,
'Rudge, Av/Orlando Murgel, Vd (F) Centro/Bairro ': 232,
'S.Vicente, Av Marques de (F) Barra Funda/Lapa ': 90,
'S.Vicente, Av Marques de (F) Lapa/Barra Funda ': 526,
'Sabará, Av NSra do Bairro/Centro ': 6,
'Sabará, Av NSra do Centro/Bairro ': 1,
'Salim F Maluf, Av/Tatuapé, Pt (N USAR) Marginal/Vila Prudente ': 1,
'Salim Farah Maluf, Av/Tatuapé, Pte Marginal/Vila Prudente ': 5335,
'Salim Farah Maluf, Av/Tatuapé, Pte Vila Prudente/Marginal ': 3669,
'Sapetuba, R único// ': 1637,
'Sebastião Camargo, Túnel unico// ': 4718,
'Socorro, Pte Bairro/Centro ': 1365,
'Socorro, Pte Centro/Bairro ': 1640,
'Sumaré, Av Limão/Sumaré ': 149,
'Sumaré, Av Sumaré/Limão ': 255,
'Susana Rodrigues, R unico// ': 444,
'São Vicente, Av Marq de Barra Funda/Lapa ': 856,
'São Vicente, Av Marq de Lapa/Barra Funda ': 1933,
'Sé, Pça da //unico ': 205,
'Tabapua,R (F) unico// ': 196,
'Tabapuã, R //unico ': 39,
'Tabapuã, R único// ': 482,
'Tajurás, Av dos Bairro/Centro ': 2201,
'Tajurás, Av dos Centro/Bairro ': 297,
'Tancredo Neves, Av Anchieta/Imigrantes ': 395,
'Tancredo Neves, Av Imigrantes/Anchieta ': 336,
'Teodoro Sampaio, R unico// ': 1651,
'Teotonio Vilela, Av Sen Bairro/Centro ': 41,
'Teotonio Vilela, Av Sen Centro/Bairro ': 7,
'Transamérica, Pte unico// ': 1208,
'Trib de Justiça, Túnel Ibirapuera/Marginal ': 6386,
'Trib de Justiça, Túnel Marginal/Ibirapuera ': 3397,
'Trinta e Um de Março, Vd unico// ': 1540,
'Vale/P.Maia/Tirad/S.Dumont Aeroporto/Santana ': 5988,
'Vale/P.Maia/Tirad/S.Dumont Santana/Aeroporto ': 9089,
'Vale/P.Maia/Tirad/S.Dumont (NÃO USAR) Santana/Aeroporto ': 1,
'Valerio, Av São Bairro/Centro ': 1471,
'Valerio, Av São Centro/Bairro ': 24,
'Vicente Rao, Av Prof Diadema/Marginal ': 362,
'Vicente Rao, Av Prof Marginal/Diadema ': 245,
'Vicente Rao, Av Prof (F) Diadema/Marginal ': 5,
'Vicente Rao, Av Prof (F) Marginal/Diadema ': 2,
'Vila Guilherme, Pte Bairro/Centro ': 211,
'Vila Guilherme, Pte Centro/Bairro ': 92,
'Vila Matilde, Vd Penha/Vl Matilde ': 19,
'Vila Matilde, Vd Vl Matilde/Penha ': 32,
'Vinte Três/R Berta/M Guim (NÃO USAR) Aeroporto/Santana ': 2,
'Vinte Três/R Berta/M Guimarães Aeroporto/Santana ': 9678,
'Vinte Três/R Berta/M Guimarães Santana/Aeroporto ': 10112,
'Vinte e Cinco de Março, Vd único// ': 140,
'Vital Brasil, Av Bairro/Centro ': 2983,
'Vital Brasil, Av Centro/Bairro ': 459,
'Vitor Manzini, Av Bairro/Centro ': 199,
'Vitor Manzini, Av Centro/Bairro ': 806,
'Washington Luis, Av Bairro/Centro ': 3511,
'Washington Luis, Av Centro/Bairro ': 2900,
'XXX Campo Limpo/Morato ': 7,
'XXX Morato/Campo Limpo ': 5,
'Xangai, Vd unico// ': 84}
__jam_count__ is the dictionary listing all counts of traffic jams for each road.
The following graph plots the distribution of traffic jam counts between 2001 and 2019 for each road. It seems more sensible to focus our renovations on the roads most impacted by traffic jams, so we may drop the roads with a small jam count.
```python
sorted_jam_count = sorted(jam_count, key = jam_count.get)
```
```python
fig = plt.figure(figsize=(12,8))
plt.plot(gs.sort(list(jam_count.values())))
plt.xlabel("road n°")
plt.ylabel("number of traffic jams between 2001 and 2019")
plt.title("Number of traffic jams between 2001 and 2019 for each road in increasing order")
plt.show()
```
```python
list_jam_count = gs.sort(list(jam_count.values()))
cdf = [0.0]
for count in list_jam_count:
    cdf.append(cdf[-1] + count)  # running total of jam counts
cdf = gs.array(cdf[1:]) / cdf[-1]  # normalize to obtain the cumulative distribution
```
```python
fig = plt.figure(figsize=(12,8))
plt.plot(cdf)
plt.xlabel("road n°")
plt.title("Cumulative distribution function of traffic jams in SP")
plt.show()
```
The 180 most congested roads account for 90% of all traffic jams in Sao Paulo between 2001 and 2019. That is where we will focus our renovation efforts.
```python
roads_to_renovate = sorted_jam_count[-180:]
```
# 3. Mathematical modeling
In the following 2 sections, we establish a precise framework, listing the hypotheses and simplifications supporting our model. In particular:
- 3.1. gives an introduction to the Gamma manifold and explains how each road can be represented by a point on it.
- 3.2. justifies the use of information geometry to tackle the problem at hand, by seeing a renovation effort on a given road as a tangent vector based at its associated point.
## 3.1. Road representation: introduction to the Gamma manifold.
The modeling of the study relies heavily on the representation of a traffic jam as a random variable.
In fact, the waiting time in a given traffic jam can be predicted by a Gamma distribution.
### 3.1.1. Hypotheses
We consider that a traffic jam has a fixed exit rate, meaning that on average, in a given unit of time, the same number of cars will exit the jam. The waiting time to exit the traffic jam once a car is at the head of the lane is independent of the other cars and depends on the road only.
In addition, switching lanes rarely helps, and if so, only to a negligible extent; furthermore, a car entering the traffic jam will almost always choose the least crowded lane (all drivers are a priori mentally sane). These two observations allow us to reduce the modeling of a whole traffic jam to that of a single lane, although only in representation, because cars next to each other will have the same behavior. This means that in our modeling the width of the road is not taken into account, as mentioned in the introduction.
Both of these hypotheses are central to the model.
### 3.1.2. Model
In a traffic jam, you wait until every car in front of you has exited the traffic jam, meaning that the waiting time for a car entering the jam is merely the sum of the exit times of all the cars in front.
As a $\nu$-exponential process predicts the waiting time until the very first event (where $\nu$ is a rate per unit of time), a $(k,\, \nu)$-Gamma process predicts the waiting time until the $k$-th event: mathematically, it is the sum of $k$ i.i.d. $\nu$-exponential processes. In the context of congestion time in a traffic jam, we are summing exit times, hence the connection between waiting time and the Gamma distribution.
Therefore, the congestion time of the jam follows a Gamma distribution associated to the road. Its parameters are:
- $k$, the length of the car lane (jam size) in arbitrary units;
- $\nu$, the exit time rate of the traffic jam, i.e. the number of cars (in the same arbitrary unit) that exit the traffic jam in a given amount of time, so essentially the speed of the traffic jam.
By arbitrary units we mean that there exists a number $n$ of cars, common to every road, such that $n$ cars exit the jam every $\frac{1}{\nu}$ unit of time on average (with $\nu$ depending on the road). From this we deduce that a road with lane length $k$ in these units is in fact as long as $kn$ cars.
For a given road $r$, we denote by $T_r$ the congestion time that cars have to wait when the traffic is jammed: $T_r \rightsquigarrow G(k_r, \nu_r)$, with density: $$\forall t>0, \, f(t) = \frac{\nu_r^{k_r}}{\Gamma(k_r)} t^{k_r-1} e^{-\nu_r t}.$$
As a road $x_r$ can be represented by the two parameters $k_r$ and $\nu_r$, we can consider our space of study to be the space of such parameters, i.e. $(\mathbb{R}_+^*)^2$.
In the following, we denote the Gamma distribution parameters by $(\kappa_r, \gamma_r)$, where $\kappa_r = k_r$ (expected jam size) and $\gamma_r = \frac{k_r}{\nu_r}$ is the expected congestion time (the mean of the Gamma distribution). The space of study is still $(\mathbb{R}_+^*)^2$, and we instantiate it in the next cell.
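As a quick sanity check of this reparametrization (an illustration with made-up numbers, not part of the dataset): if $T \rightsquigarrow G(k, \nu)$, then its mean is $k/\nu = \gamma$ and its variance is $k/\nu^2 = \gamma^2/\kappa$, which is the formula used at the very end of the notebook.
```python
import numpy as np

rng = np.random.default_rng(0)
k, nu = 3.0, 2.0                         # shape and rate of a hypothetical road
kappa, gamma = k, k / nu                 # reparametrized coordinates
sample = rng.gamma(shape=k, scale=1.0 / nu, size=200_000)
print(sample.mean(), gamma)              # both close to 1.5
print(sample.var(), gamma**2 / kappa)    # both close to 0.75
```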
```python
import matplotlib.pyplot as plt
from geomstats.information_geometry.gamma import *
space = GammaDistributions()
```
For instance, on the following graph we are representing 3 roads.
```python
road1 = gs.array([1.0,1.0])
road2 = gs.array([2.0,1.0])
road3 = gs.array([1.0,2.0])
fig = plt.figure(figsize=(12,8))
plt.scatter(*road1, label = 'road1')
plt.scatter(*road2, label = 'road2')
plt.scatter(*road3, label = 'road3')
plt.xlabel("$\\kappa$ (expected jam size)")
plt.ylabel("$\\gamma$ (expected time spent in traffic)")
plt.title("3 roads subject to traffic jams, represented as points on the Gamma manifold.")
plt.legend()
plt.xlim(0,5)
plt.ylim(0,5)
plt.show()
```
Here:
- road 1 is $(\kappa_1=1,\gamma_1=1)$;
- road 2 is $(\kappa_2=2,\gamma_2=1)$;
- road 3 is $(\kappa_3=1,\gamma_3=2)$.
This means that cars on road 1 will spend half as much time as cars on road 3 in the case of a traffic jam, on average. On the other hand, cars on road 1 and road 2 will spend the same time in traffic on average, but the line is twice as long on road 2.
## 3.2. Mathematical representation of renovation efforts
### 3.2.1. Hypotheses
Renovating a road initially aims at reducing the expected time spent in traffic. This means that for a given road $x_r = (\kappa_r, \gamma_r)$, we want to reduce $\gamma_r$ as efficiently as possible. However, the efficiency of the renovation in that regard heavily depends on the road: one can argue that it is more efficient to renovate a road where traffic jams are frequent than a road on which the traffic is almost fluid. This is where information geometry comes in handy: the Gamma manifold is a Riemannian manifold, so its metric is point-dependent.
By seeing renovation as an effort in reducing the expected time in traffic, we can model the advancement of the renovation as the geodesic departing from the point representation of the road, and with initial tangent vector in the direction and orientation $-\gamma$. This reflects the fact that the advancement of the renovation will follow the most natural path, i.e. the distribution of the waiting time of the associated road will change as little as possible throughout the renovation.
### 3.2.2. Model
We decide to model a renovation effort of budget/effort $r_i$ on road $x_r$ as the tangent vector $r_i \left(\begin{array}{c} 0 \\ -1 \end{array}\right)_{x_r}$, where $\left(\begin{array}{c} 0 \\ -1 \end{array}\right)_{x}$ is the unit tangent vector at $x$ with direction and orientation $-\gamma$. The amount of effort/resources invested in the renovation of a given road is directly represented by the norm of the tangent vector.
Investing as much as $r_i$ renovation resources on road $x_r$ will result in having the renovated road $x_r' = \exp_{x_r} \left( r_i \times \left(\begin{array}{c} 0 \\ -1 \end{array}\right)_{x_r} \right)$, where $\exp_{x}$ is the exponential map at $x$.
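As a minimal illustration of this model (the road and the budget below are made up for the example), a single renovation is just one call to the exponential map:
```python
x_r = gs.array([2.0, 3.0])                                   # hypothetical road: kappa = 2, gamma = 3 h
unit_effort = space.metric.normalize(gs.array([0.0, -1.0]), x_r)
x_r_renovated = space.metric.exp(0.5 * unit_effort, x_r)     # renovation with budget r_i = 0.5
x_r_renovated                                                # gamma decreases, kappa increases
```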
The following plot shows a comparison of similar renovations undertaken on the roads in the previous example.
```python
fig = plt.figure(figsize=(12,8))
t = gs.linspace(0,1,10)
plt.scatter(*road1, label = 'road 1')
plt.scatter(*road2, label = 'road 2')
plt.scatter(*road3, label = 'road 3')
effort = gs.array([0.0, -1.0])
effort1 = space.metric.normalize(effort, road1)
renovation1 = space.metric.geodesic(initial_point=road1, initial_tangent_vec=effort1)
renovation1 = renovation1(t)
plt.plot(*gs.transpose(renovation1), label = 'advancement of renovation effort on road 1')
effort2 = space.metric.normalize(effort, road2)
renovation2 = space.metric.geodesic(initial_point=road2, initial_tangent_vec=effort2)
renovation2 = renovation2(t)
plt.plot(*gs.transpose(renovation2), label = 'advancement of renovation effort on road 2')
effort3 = space.metric.normalize(effort, road3)
renovation3 = space.metric.geodesic(initial_point=road3, initial_tangent_vec=effort3)
renovation3 = renovation3(t)
plt.plot(*gs.transpose(renovation3), label = 'advancement of renovation effort on road 3')
plt.xlabel("$\\kappa$ (expected jam size)")
plt.ylabel("$\\gamma$ (expected time spent in traffic)")
plt.title("Comparison of different renovation efforts")
plt.legend()
plt.axis("equal")
plt.show()
print(f"Road 1 renovation: expected waiting time has decreased from {road1[1]} to {str(renovation1[-1,1])[:5]}, expected jam size has increased from {road1[0]} to {str(renovation1[-1,0])[:5]}.")
print(f"Road 2 renovation: expected waiting time has decreased from {road2[1]} to {str(renovation2[-1,1])[:5]}, expected jam size has increased from {road2[0]} to {str(renovation2[-1,0])[:5]}.")
print(f"Road 3 renovation: expected waiting time has decreased from {road3[1]} to {str(renovation3[-1,1])[:5]}, expected jam size has increased from {road3[0]} to {str(renovation3[-1,0])[:5]}.")
```
We observe that it is much more efficient to renovate road 3 than road 1 in terms of absolute gain in expected waiting time. This was expected, given that road 1 is much more fluid than road 3. In terms of relative time gain, however, the result is the same: this is specific to Gamma distributions. In addition, renovating road 3 is more efficient than renovating road 2, in both absolute and relative time gain. We furthermore observe that investing similar efforts in renovating roads 3 and 2 results in different evolutions of the expected jam size: it increases by 44% in the first case and by as much as 50% in the second. This becomes delicate, especially considering that the expected car line on road 2 was already long.
We notice that renovations increase the expected jam size: this can be interpreted as the fact that a renovated road allows drivers to go faster, so the lane becomes longer and, in a sense, the traffic becomes more diluted. This can be observed in the following plot: renovations increase both the expected jam size and the expected exit rate, opening the road up to much more traffic.
```python
fig = plt.figure(figsize=(12,8))
road1 = space.natural_to_standard(road1)
road2 = space.natural_to_standard(road2)
road3 = space.natural_to_standard(road3)
plt.scatter(*road1, label = 'road 1')
plt.scatter(*road2, label = 'road 2')
plt.scatter(*road3, label = 'road 3')
renovation1 = space.standard_to_natural(renovation1)
renovation2 = space.standard_to_natural(renovation2)
renovation3 = space.standard_to_natural(renovation3)
plt.plot(*gs.transpose(renovation1), label = 'advancement of renovation effort on road 1')
plt.plot(*gs.transpose(renovation2), label = 'advancement of renovation effort on road 2')
plt.plot(*gs.transpose(renovation3), label = 'advancement of renovation effort on road 3')
plt.xlabel("$\\kappa$ (expected jam size)")
plt.ylabel("$\\nu$ (expected exit time rate in traffic)")
plt.title("Comparison of different renovation efforts in natural coordinates")
plt.legend()
plt.show()
```
The fact that these results match our observations and the expected consequences of renovations supports the use of information geometry to model the situation. For instance, a Euclidean model of the situation would make no sense: all renovations would have the same impact even though they are applied to different roads, because the norm of a tangent vector (i.e. the renovation effort) would be independent of its base point (the road).
Therefore, the key to optimizing Sao Paulo's traffic lies in maximizing the efficiency of the renovations, given a limited amount of renovation resources.
## 3.3. Optimization problem
The aim is to minimize the mean expected congestion time in Sao Paulo, weighted by the frequencies $f_i$ of traffic jams $1 \leq i \leq n$, under the constraint of a total quantity of resources $r$. This reads:
\begin{equation}
\begin{cases}
\min_{(r_i)} \sum_{i=1}^n f_i \times \exp_{x_i} \left( r_i \times \left(\begin{array}{c} 0 \\ -1 \end{array}\right)_{x_i} \right)_{\gamma} \\
\forall i \in \{1,...,n\}, r_i \geq 0 \\
\sum_{1 \leq i \leq n} r_i = r \\
\end{cases},
\end{equation}
where:
- $(x_i)$ are the roads;
- $\left(\begin{array}{c} 0 \\ -1 \end{array}\right)_{x_i}$ is the unit tangent vector at $x_i$ with direction and orientation $-\gamma$;
- $\exp_{x_i}$ is the exponential map at $x_i$;
- for $x \in G$ (the Gamma manifold), $x_{\gamma}$ is its $\gamma$ coordinate;
- $r_i$ is the resource allocated for renovating road $i$.
#### Remark
We could rewrite the problem in a simpler way analytically, making use of the following results:
- the relative efficiency of renovation (i.e. the ratio of expected congestion times) does not depend on the original expected congestion time of the road ($\gamma$);
- similarly, the length of the car lane of the renovated road does not depend on the original expected congestion time of the road ($\gamma$).
However, we will not use these results here, and will instead solve the problem computationally.
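As a quick numerical sanity check of the first claim (an illustration with made-up roads): two roads with the same $\kappa$ but different $\gamma$, given the same effort, should end up with the same ratio of new to old expected congestion time.
```python
effort = gs.array([0.0, -1.0])
for road in [gs.array([2.0, 1.0]), gs.array([2.0, 4.0])]:
    vec = space.metric.normalize(effort, road)
    renovated = space.metric.exp(0.7 * vec, road)
    print(float(renovated[1] / road[1]))   # the two ratios should (approximately) agree
```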
# 4. Dataset processing
First, we associate to each road eligible for renovation its Gamma distribution parameters, through a maximum likelihood fit of the traffic jam durations: __df__ gives, for each road $r$, a sample of size $n_r$ of all the traffic jams and their durations from 2001 to 2019.
```python
names, kappas, gammas = [], [], []
for road in roads_to_renovate:
    frame = df.loc[df["name"] == road]
    sample = frame["duration"]
    try:
        kappa, gamma = space.maximum_likelihood_fit(sample)
        if not gs.any(gs.isnan([kappa, gamma])):
            names.append(road)
            kappas.append(kappa)
            gammas.append(gamma)
    except Exception:
        # skip roads for which the maximum likelihood fit fails
        continue
```
Having focused on the 180 most congested roads pays off now, as the estimates of the Gamma parameters of the roads are much more reliable. Accounting for all the roads would introduce outliers into our set of roads, rendering the computation far more complex. In addition, roads with a negligible count of traffic jams over such a long time span do not necessarily call for renovation.
That is why, for the rest of this analysis, we adopt the following simplification: the roads eligible for renovation represent SP's roads subject to traffic jams, i.e. exactly the dataset we want to work on.
```python
dict_parameters = {"name": names, "kappa": kappas, "gamma": gammas}
data = pd.DataFrame.from_dict(dict_parameters)
```
To each of the roads eligible for renovation we associate a weight proportional to the number of traffic jams between 2001 and 2019.
```python
good_points = list(data["name"])
weights = list(map(jam_count.get, good_points))
weights = weights / gs.sum(weights)
```
The 180 most congestioned roads of SP can be represented as follows on the Gamma manifold.
```python
kappa, gamma = data["kappa"], data["gamma"]
fig = plt.figure(figsize=(12,8))
mean = gs.array([gs.sum(weights*kappa), gs.sum(weights*gamma)])
plt.scatter(kappa, gamma, label='road eligible for renovation')
plt.scatter(*mean, color = 'r', s=100, label='mean road eligible for renovation')
plt.xlabel("$\\kappa$ (number of cars in arbitrary units)")
plt.ylabel("$\\gamma$ (expected time spent in traffic in hours)")
plt.title("Sao Paulo's roads most subject to traffic jams, as represented as points on the Gamma manifold.")
plt.legend()
plt.show()
```
We observe that the vast majority of traffic jams in SP can take from 2 to 6+ hours of congestion time. On the most impactful roads (eligible for renovation), the mean waiting time is 3h 24min.
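The figure quoted above can be recovered directly from the fitted parameters (using the `weights` and `gamma` defined in the cells above):
```python
mean_time = float(gs.sum(weights * gamma))   # weighted mean expected congestion time, in hours
print(f"{int(mean_time)} h {round((mean_time % 1) * 60)} min")
```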
# 5. Solving the problem at hand
Arbitrarily (and for computational purposes), we are allocated a total of 10 units of resources to distribute among the renovations. It might seem like the total amount of resources should not matter much, since the original aim is simply to decide how to split it between the roads eligible for renovation, but renovation effects are not linear in the effort invested.
```python
total_resources = 10
```
```python
points = gs.transpose(gs.stack([kappa, gamma]))
n_points = len(points)
```
We optimize the allocation of resources for renovation here:
```python
from scipy.optimize import minimize  # needed below; not imported in the earlier cells

original_SP = gs.sum(gs.einsum("...,...j->...j", weights, points), axis=0)
def rebuilding(point, resources):
n_points = point.shape[0] if len(point.shape)>1 else 1
vec = gs.tile([gs.array([0.0,-1.0])], (n_points, 1))
norm = resources * total_resources
tangent_vec = gs.einsum("...,...j->...j", norm, vec)
end_point = space.metric.exp(tangent_vec, point, n_steps=100)
return end_point
def objective(resources):
end_points = rebuilding(points, resources)
gammas = end_points[:,1]
return gs.mean(weights*gammas)
objective_with_grad = gs.autodiff.value_and_grad(objective, to_numpy=True)
resources = total_resources * weights
res = minimize(
objective_with_grad,
resources,
method="SLSQP",
constraints=({'type': 'ineq', 'fun': lambda x: total_resources - gs.sum(x)},
{'type': 'ineq', 'fun': lambda x: x.min()},
),
jac=True,
options={"disp": False, "maxiter": 100},
tol=gs.atol,
)
resources = res.x
new_points = rebuilding(points, resources)
fig = plt.figure(figsize=(16,12))
plt.scatter(points[:,0], points[:,1], label = 'original points', s=20)
plt.scatter(*original_SP, label = 'original SP', s=50)
plt.scatter(new_points[:,0], new_points[:,1], label = 'points after renovation', s=20)
for i in range(n_points):
plt.arrow(points[i,0], points[i,1], (new_points - points)[i,0], (new_points - points)[i,1], head_width=.01, linestyle ="", length_includes_head = True)
percentage = resources[i] * 100 / total_resources
if percentage > 2:
plt.text(points[i,0], points[i,1], f"{str(percentage)[:5]} %")
new_SP = gs.sum(gs.einsum("...,...j->...j", weights, new_points), axis=0)
plt.scatter(*new_SP, label = 'SP after renovation', s=50)
plt.arrow(*original_SP, *(new_SP - original_SP), head_width=.05, linestyle = "-", length_includes_head = True)
plt.xlabel("$\\kappa$ (expected jam size)")
plt.ylabel("$\\gamma$ (expected time spent in traffic in hours)")
plt.title("Optimization of SP's traffic")
plt.legend()
plt.show()
```
Above, the percentages represent the proportion of the total resources that has been allocated to the renovation of each road: they are shown when greater than 2%.
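As a possible follow-up (not part of the original analysis), the roads receiving the largest shares of the budget can also be listed explicitly:
```python
top_allocations = sorted(zip(names, resources), key=lambda item: item[1], reverse=True)[:5]
for name, share in top_allocations:
    print(f"{share * 100 / total_resources:5.2f} %  {name}")
```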
```python
original_size, original_time = original_SP
new_size, new_time = new_SP
relative_time_reduction = (original_time - new_time) / original_time
original_variance, new_variance = original_time**2 / original_size, new_time**2 / new_size
relative_variance_reduction = (original_variance - new_variance) / original_variance
print(f"Mean expected congestion time has been reduced by as much as {str(relative_time_reduction*100)[:5]} % in Sao Paulo :)")
print(f"Variance in congestion time has been reduced by as much as {str(relative_variance_reduction*100)[:5]} % in Sao Paulo :)")
```
# Conclusion
We have managed to substantially reduce the mean expected congestion time in SP, by as much as 25%, without a great increase in the expected jam sizes! We have also halved the variance of the congestion time, making very long traffic jams rarer. This is a great success!
[source notebook: notebooks/18_real_world_applications__sao_paulo_traffic_optimization.ipynb | repo: lpereira95/geomstats | Jupyter Notebook, 291,070 bytes | license: MIT]
# Solving Ax = b
This work by Jephian Lin is licensed under a [Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/).
```python
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
import sympy
```
## Main idea
Let $A$ be an $m\times n$ matrix. The solution set of $A{\bf x} = {\bf b}$ is the intersection of the affine planes $\langle {\bf r}_i, {\bf x}\rangle = b_i$ for all $i=1,\ldots,m$, where ${\bf r}_i$ is the $i$-th row of $A$ and $b_i$ is the $i$-th entry of ${\bf b}$.
Therefore, the solution set of $A{\bf x} = {\bf b}$ is an affine space (a shifted space).
The solutions set of $A{\bf x} = {\bf b}$ is of the form:
general solutions = particular solution + homogeneous solutions
(a shifted space) (a vector) (a space)
Here "general solution" stands for all solutions of $A{\bf x} = {\bf b}$,
"particular solution" stands for one arbitrary solution of $A{\bf x} = {\bf b}$, and
"homogeneous solutions" stands for all solutions of $A{\bf x} = {\bf 0}$.
Every matrix leads to its **reduced echelon form** after some **row operations**. If $\left[\begin{array}{cc}R | {\bf r}\end{array}\right]$ is the reduced echelon form of $\left[\begin{array}{cc}A | {\bf b}\end{array}\right]$, then $A{\bf x} = {\bf b}$ and $R{\bf x} = {\bf r}$ have exactly the same solutions.
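As a small illustration of this fact (separate from the exercises below, with made-up matrices), here is the reduced echelon form of an augmented matrix for a consistent system:
```python
import sympy
A = sympy.Matrix([[1, 1],
                  [1, 2],
                  [1, 3]])
b = sympy.Matrix([6, 8, 10])
Ab = A.col_insert(2, b)      # the augmented matrix [ A | b ]
R, pivots = Ab.rref()        # reduced echelon form and pivot column indices
R, pivots
```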
## Side stories
- $A{\bf x} = {\bf b} \iff {\bf b}\in\operatorname{Col}(A)$
- matrix inverse
## Experiments
###### Exercise 1
This exercise helps you to visualize the affine space $A{\bf x} = {\bf b}$.
Let
```python
A = np.array([[1,1,1],
[1,1,1]])
b = np.array([5,5])
```
###### 1(a)
Use the techniques you learned in Lesson 2 to draw some random solutions of $A{\bf x} = {\bf b}$.
What is the nullity of $A$? What is the "dimension" of the affine space?
Hint:
```python
xs = 5*np.random.randn(3,10000)
mask = (np.abs(b[:,np.newaxis] - A.dot(xs)) < 0.1).all(axis = 0)
```
```python
### your answer here
```
###### 1(b)
It is known that
```python
p = np.array([5,0,0])
```
is a particular solution of $A{\bf x} = {\bf b}$.
Add a vector of `p` upon your previous drawing.
```python
### your answer here
```
###### 1(c)
Do the same for
```python
b = np.array([5,6])
```
How many solutions are there?
```python
### your answer here
```
###### Exercise 2
This exercise helps you to visualize the affine space $A{\bf x} = {\bf b}$.
Let
```python
A = np.array([[1,1,1],
[1,1,1]])
b = np.array([5,5])
```
###### 2(a)
Draw the grid using the columns of $A$ and draw a vector for $b$.
Is $b$ in the column space of $A$?
```python
### your answer here
```
###### 2(b)
Do the same for
```python
b = np.array([5,6])
```
Is $b$ in the column space of $A$?
```python
### your answer here
```
#### Remark
Whether a particular solution exists depends only on whether ${\bf b}$ is in the column space of $A$ or not.
We say an equation $A{\bf x} = {\bf b}$ is **consistent** if it has at least one particular solution.
Whether the homogeneous solutions contains only the trivial solution ${\bf 0}$ depends only on $A$.
This table summarize the number of solutions of $A{\bf x} = {\bf b}$.
hom \ par | consistent | inconsistent
--------- | ---------- | ------------
trivial | one | none
nontrivial | infinite | none
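A compact way to check which column of this table applies (an illustration, not one of the exercises, with a made-up system): $A{\bf x} = {\bf b}$ is consistent exactly when $A$ and the augmented matrix have the same rank.
```python
import sympy
A = sympy.Matrix([[1, 1],
                  [1, 2],
                  [1, 3]])
b = sympy.Matrix([6, 8, 11])
Ab = A.col_insert(2, b)
A.rank(), Ab.rank()    # the ranks differ here, so this particular system is inconsistent
```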
## Exercises
##### Exercise 3
Let
```python
A = sympy.Matrix([[1,1],
[-1,0],
[0,-1]])
b = sympy.Matrix([3,-2,-1])
Ab = A.col_insert(2,b)
```
###### 3(a)
Calculate the reduced echelon form of `Ab` .
Can you tell if `b` is in the column space of `A` ?
```python
### your answer here
```
###### 3(b)
Let
```python
b = sympy.Matrix([1,2,3])
```
and update `Ab` .
Can you tell if `b` is in the column space of `A` ?
```python
### your answer here
```
##### Exercise 4
Let
```python
A = sympy.Matrix([[1,1,1],
[1,2,4],
[1,3,9]])
b1 = sympy.Matrix([1,0,0])
```
###### 4(a)
If a matrix has no free variable, then the homogeneous solution is trivial.
Find the unique solution of $A{\bf x} = {\bf b}_1$.
```python
### your answer here
```
###### 4(b)
Let
```python
b2 = sympy.Matrix([0,1,0])
Ab = A.col_insert(3,b1)
Abb = Ab.col_insert(4,b2)
```
Can you use `Abb` to solve the solutions of $A{\bf x} = {\bf b}_1$ and $A{\bf x} = {\bf b}_2$ at once?
```python
### your answer here
```
###### 4(c)
Let
```python
b3 = sympy.Matrix([0,0,1])
```
Solve the solutions of $A{\bf x} = {\bf b}_1$, $A{\bf x} = {\bf b}_2$, and $A{\bf x} = {\bf b}_3$ at once.
```python
### your answer here
```
###### 4(d)
Let
$$ B = \begin{bmatrix}
| & ~ & | \\
{\bf b}_1 & \cdots & {\bf b}_3 \\
| & ~ & |
\end{bmatrix}.$$
Find a matrix $X$ such that $AX = B$.
When $B$ is the identity matrix
$$ I_n = \begin{bmatrix}
1 & ~ & ~ \\
~ & \ddots & ~ \\
~ & ~ & 1
\end{bmatrix},$$
the matrix $X$ with $AX = I_n$ is called the **inverse** of $A$, denoted by $A^{-1}$.
```python
### your answer here
```
###### 4(e)
Compare your answer in 4(d) with the output of `np.linalg.inv(A)` .
```python
### your answer here
```
##### Exercise 5
Let
```python
A = sympy.Matrix([[1,3,3,18],
[5,15,16,95],
[-5,-15,-15,-90]])
R,pvts = A.rref()
```
###### 5(a)
Let $B$ be the matrix whose columns are the columns of $A$ corresponding to the leading variables.
Pick a column of $A$ corresponding a free variable.
Check that the column is in the column space of $B$.
(If yes, this means this column is redundant for generating the column space of $A$.)
```python
### your answer here
```
###### 5(b)
Check if $B$ itself has any redundant column.
```python
### your answer here
```
#### Remark
Let $S = \{{\bf u}_1, \ldots, {\bf u}_n\}$ be a collection of vectors and $A$ the matrix whose columns are $S$.
We say $S$ is **linearly independent** if one of the following equivalent conditions holds:
- $c_1{\bf u}_1 + \cdots + c_n{\bf u}_n = {\bf 0}$ only has the trivial solution $c_1 = \cdots = c_n = 0$.
- $A{\bf x} = {\bf 0}$ only has the trivial solution ${\bf x} = {\bf 0}$.
- $A$ has no free variable.
Moreover, if a space $V$ is equal to $\operatorname{span}(S)$ and $S$ is linearly independent, then we say $S$ is a **basis** of the space $V$.
##### Exercise 6
Let
```python
A = sympy.Matrix([[1,1,1],
[-1,0,0],
[0,-1,0],
[0,0,-1]])
```
Check if the columns of $A$ form a linearly independent set.
```python
### your answer here
```
##### Exercise 7
```python
A = sympy.Matrix([[1,3,3,18],
[5,15,16,95],
[-5,-15,-15,-90]])
R,pvts = A.rref()
```
Check what `A.nullspace()`, `A.rowspace()`, and `A.columnspace()` return and think about their meaning.
```python
### your answer here
```
#### Remark
Since it is impossible to output a space, the three commands in Exercise 7 in fact output only a basis of each space, which is enough.
**Nullspace**: its basis consists of the ${\bf h}$'s from the previous lesson.
**Rowspace**: its basis consists of the rows of $R$ corresponding to the pivots.
**Columnspace**: its basis consists of the columns of $A$ corresponding to the pivots.
[source notebook: Mathematics/Linear Algebra/0.0b Supplement of MIT Course - Concept Explanations with Problems_and_Questions to Solve/06-Solving-Ax-=-b.ipynb | repo: okara83/Becoming-a-Data-Scientist | Jupyter Notebook, 13,721 bytes | license: MIT]
```python
import sympy as sp  # load the sympy library for symbolic computation.
sp.init_printing()
```
# Computing the derivative of a single-variable function
To compute a first derivative, we use the $diff()$ function from the Sympy library.
For example, to compute the derivative of $\quad\sin(x)$:
sp.diff(sp.sin(x),x)
which indeed returns: $\quad \quad \cos(x)$.
It is not strictly necessary here to specify that we differentiate with respect to the variable $x$, but it is preferable to do so in order to respect the order of the arguments, which will be useful below. A slightly more complicated example:
sp.diff(sp.sin(x)*sp.exp(x)/x)
which gives $\quad \quad \frac{e^x\sin(x)}{x} + e^x\cos(x) − \frac{e^x\sin(x)}{x^2}$.
The source document is available here:
[Click here](http://www.tangentex.com/CalculSymbolique.htm#Par6 "Source")
## Define the symbol, the variable of your function.
```python
x = sp.Symbol('x')
```
```python
sp.diff(sp.sin(x),x)
```
```python
sp.diff(sp.sin(x)*sp.exp(x)/x)
```
# Application 1 from our course: let's check it together
Warning: sympy's _diff()_ function only shows the final result; it can only be used to check your calculations!!!
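If you want to display the derivative before evaluating it, sympy's `Derivative` object can be combined with `.doit()` (a small aside, not required for the exercises):
```python
d = sp.Derivative(sp.sin(x) * sp.exp(x), x)   # the derivative, left unevaluated
d.doit()                                      # evaluates it, giving the same result as sp.diff
```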
```python
# the function f:
f = -7*x + 2
f
```
```python
sp.diff(f)
```
```python
# the function g:
g = x**2 + sp.sqrt(x)
g
```
```python
sp.diff(g)
```
```python
# the function h:
h = -5/x
h
```
```python
sp.diff(h)
```
```python
# the function i:
i = x**3/6 -8*x + 1
i
```
```python
sp.diff(i)
```
```python
# the function j:
j = x**2 + 2/(3*x**2)
j
```
```python
sp.diff(j)
```
# Application 2 from our course: let's check again
The product-rule example is checked here:
$f(x) = 3x\sqrt{x}$
```python
sp.diff(3*x*sp.sqrt(x))
```
Application 2 is checked here:
```python
sp.diff(x**2*sp.sqrt(x))
```
```python
sp.diff(5/(x**2 - x))
```
```python
sp.diff((7*x + 1)**3)
```
```python
sp.diff(sp.sqrt(2*x - 2))
```
## Work on your own on the Versailles `EULER-WIMS` platform
```python
import IPython.display as ipd
```
```python
ipd.IFrame(src="http://acver.fr/gi8",width = 900,height = 400)
```
[source notebook: Chapitre_08_Sympy.ipynb | repo: Ngom/python_math_premiere | Jupyter Notebook, 36,431 bytes | license: MIT]
```python
from ipywidgets import interactive, interact
import matplotlib.pyplot as plt
import numpy as np
import ipywidgets as widgets
import sympy as sym
import seaborn as sns
import plotly.graph_objects as go
from plotly.offline import init_notebook_mode, iplot
from numba import jit
init_notebook_mode(connected=True)
jit(nopython=True, parallel=True)  # note: this bare call has no effect; numba's jit only compiles functions it decorates
sns.set()
```
```python
class plot():
def __init__(self, preWidgetN):
self.N = preWidgetN
x,y,n ,k = sym.symbols('x, y,n,k', real=True)
X=np.linspace(0, 10, 100)
f = sym.Sum((-1)**k*(x**(2*k+1))/(sym.factorial(2*k+1)),(k,0, n))
#f = sym.Sum((-1)**k*(x**(2*k))/(sym.factorial(2*k)),(k,0, n))
#print(sym.latex(f))
f = f.subs(n, self.N.value)
f = sym.lambdify(x, f)
self.trace1 = go.Scatter(x=X, y=np.sin(X),
mode='lines+markers',
name='sin'
)
self.trace2 = go.Scatter(x=X, y=f(X),
mode='lines',
name=r'$\sum_{k=0}^{%s} \frac{\left(-1\right)^{k} x^{2 k + 1}}{\left(2 k + 1\right)!}$' %(self.N.value)
)
layout = go.Layout(template='plotly_dark')
self.fig = go.FigureWidget(data=[self.trace1, self.trace2],
layout = layout,
layout_yaxis_range=[-3 , 3]
)
def sineSeries(self, change):
x,y,n ,k = sym.symbols('x, y,n,k', real=True)
X=np.linspace(0, 10, 100)
f = sym.Sum((-1)**k*(x**(2*k+1))/(sym.factorial(2*k+1)),(k,0, n))
#f = sym.Sum((-1)**k*(x**(2*k))/(sym.factorial(2*k)),(k,0, n))
f = f.subs(n, self.N.value)
f = sym.lambdify(x, f)
with self.fig.batch_update():
self.fig.data[1].x = X
self.fig.data[1].y = f(X)
self.fig.data[1].name = r'$\sum_{k=0}^{%s} \frac{\left(-1\right)^{k} x^{2 k + 1}}{\left(2 k + 1\right)!}$' %(self.N.value)
return
def show(self):
self.N.observe(self.sineSeries, names='value')
display(self.N, self.fig)
return
```
```python
N = widgets.IntSlider(min=0, max=20, step=1, value=0, description='partial sum order')
p = plot(N)
p.show()
```
IntSlider(value=0, description='partial sum order', max=20)
FigureWidget({
'data': [{'mode': 'lines+markers',
'name': 'sin',
'type': 'scat…
```python
```
Source: zolabar/Interactive-Calculus, `.ipynb_checkpoints/interactive_sinus-checkpoint.ipynb` (MIT license)
# On the Reduced Gas Pressure in Spots
What gas temperature is expected within a spot if _only_ the reduction in gas pressure owing to the presence of a magnetic pressure term is accounted for?
We'll begin under the assumption that the gas is subject to a polytropic equation of state such that
\begin{equation}
P_{\rm gas} = K \rho^{\gamma},
\end{equation}
where $P_{\rm gas}$ is the thermal gas pressure, $K$ is a constant of proportionality, $\rho$ is the gas density, and $\gamma$ is the ratio of specific heats. We will also assume, for simplicity, that the magnetic field is in equipartition with the gas, meaning that the internal energy of the gas, $U_{\rm gas}$, is equal to the energy of the magnetic field, $U_{\rm mag}$. Finally, if we assume the gas can be described as ideal, then we know that $\gamma = 5/3$, $P_{\rm gas} = nkT$, and $U_{\rm gas} = 3nkT/2 = 3P_{\rm gas}/2$.
It's possible to re-write the ideal gas equation of state by substituting the gas (mass) density for the gas number density using the mean molecular weight, $\mu$. We then have that $n = \rho/(\mu m_p)$, where $m_p$ is the mass of a proton. Thus,
\begin{equation}
P_{\rm gas} = \frac{\rho}{\mu m_p}kT.
\end{equation}
Utilizing Equation (1), however, we find that
\begin{equation}
P_{\rm gas}^{(\gamma - 1)/\gamma} = \frac{k}{K^{1/\gamma} \mu m_p}T = \mathbb{K} T.
\end{equation}
where we have collected all constants to define a new constant $\mathbb{K}$. Note that the mean molecular weight is here taken to be constant with $\mu = 1$ (pure hydrogen gas), for simplicity.
Neglecting magnetic tension forces, the magnetic pressure is equal to the magnetic energy and is spatially isotropic, $P_{\rm mag} = U_{\rm mag} = B^2 / 8\pi$. Since the magnetic field is in energy equipartition with the gas, we have
\begin{equation}
U_{\rm mag} = \frac{B^2}{8\pi} = \frac{3}{2}P_{\rm gas},
\end{equation}
which provides a convenient estimate of the magnetic field strength at any given point in the gas. If we assume that the total pressure at a given point must be the same, regardless of the presence of a magnetic field, then we can write that
\begin{equation}
P_{\rm tot} = P_{\rm gas} + \frac{B^2}{8\pi} = \frac{5}{2} P_{\rm gas},
\end{equation}
which must be equivalent to the total pressure when no magnetic field is present (i.e., the gas pressure in the absence of a magnetic field),
\begin{equation}
P_{\rm tot} \equiv P_{\rm gas,\, 0} = \frac{5}{2}P_{\rm gas}
\end{equation}
where $P_{\rm gas,\, 0}$ is the gas pressure in the absence of a magnetic field.
Under these approximations, we can estimate the change in gas temperature caused by the presence of a magnetic field, neglecting effects related to the _transport_ of energy. Using Equation (3), we can write
\begin{equation}
\frac{T_{\rm gas}}{T_{\rm gas,\, 0}} = \left(\frac{2}{5}\right)^{(\gamma - 1)/\gamma} = \left(\frac{2}{5}\right)^{2/5} \approx 0.693.
\end{equation}
In the case of the Sun, if we take the background photospheric temperature to be approximately 5779 K, then a rough estimate for the temperature within a sunspot would be 4000 K, or 1779 K cooler than the background photosphere. Quite surprisingly, this agrees with rough estimates for the temperatures within starspot umbra (Solanki 2003).
If, instead, the magnetic pressure were equal to the gas pressure within a spot, the umbral temperature ratio would be equal to
\begin{equation}
\frac{T_{\rm gas}}{T_{\rm gas,\, 0}} = \left(\frac{1}{2}\right)^{(\gamma - 1)/\gamma} = \left(\frac{1}{2}\right)^{2/5} \approx 0.758,
\end{equation}
which implies spot temperatures of order 4300 K. Again, consistent with observations of umbral temperatures.
One can also estimate the change in density resulting from the decrease in gas pressure and temperature. From Equation (1),
\begin{equation}
\frac{\rho_{\rm gas}}{\rho_{\rm gas\, 0}} = \left(\frac{P_{\rm gas}}{P_{\rm gas\, 0}}\right)^{1/\gamma},
\end{equation}
which yields a density ratio of approximately 58% for an ideal gas with $\gamma = 5/3$. When typical values for the density in the solar photosphere ($\log\rho \sim -6.4$ near $\tau = 1$) are used, this leads to a density change of approximately $2\times10^{-7}$ g cm$^{-3}$. This is consistent with density changes observed in 3D radiation MHD simulations by Kitiashvili et al. (2010).
Given the agreement between this simple estimate of umbral temperatures and estimates from starspot observations, what further consequences might lead to the exclusion of this hypothesis for the cooler nature of sunspots?
Development from the presence of a strong concentration of magnetic flux to the appearance of sunspot pores, if governed by the thermal evolution of the gas, must occur on a timescale related to the sound crossing time. It is only over this timescale that the gas can effectively communicate information about the presence of the magnetic field. Typical sunspot pores are on the order of 1 Mm in size (Bray & Loughhead 1964) and the adiabatic speed of sound at the solar surface is on the order of 8 km/s. The latter is not exact, but provides a starting point.
\begin{equation}
\tau = \frac{R_{\rm pore}}{c_s} \sim \frac{10^3}{10^1} \textrm{ s} = 10^2 \textrm{ s},
\end{equation}
or approximately 2 minutes. The sound crossing time is, therefore, quite small compared to the pore formation timescale, which is estimated to be between several hours and several days. Limitations imposed by the sound crossing time are therefore not significant.
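A quick numerical check of this estimate, using the representative values quoted above (a 1 Mm pore radius and an 8 km/s sound speed), is sketched below.
```python
R_pore = 1.0e8   # pore radius in cm (1 Mm)
c_s = 8.0e5      # adiabatic sound speed in cm/s (8 km/s)
print("Sound crossing time (s): {:5.1f}".format(R_pore/c_s))
```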
It is possible, however, that the process of magnetic suppression of convection and the cooling of the photospheric layers are intertwined. Suppression of convection is the result of the interaction of convective flows with the magnetic field via Lorentz forces. These forces are the same forces that lead to the isotropic pressure that could potentially be responsible for the cooling of the layers by offsetting the gas pressure required to maintain hydrostatic equilibrium.
---
## Potential description
The presence of a magnetic field in a plasma would cause a general cooling of the surface layers, in the event that the system was left to equilibrate. Since the sound crossing time is on the order of 2 minutes, this is expected to occur quite rapidly. However, the surface is not static, as convective updrafts are constantly supplying fresh, warm material to the surface. Convection in the near surface layers has an overturn time of order several minutes (solar granulation timescale), of the same order as the time required for the material to arrive in equilibrium with the magnetic field. Therefore, it may be unlikely for the magnetic field to efficiently cool the gas until the system is unable to supply warm material from deeper layers. This is particularly important since, as a glob of plasma may cool, it will have a tendency to sink while convection occurs. Suppression of convection starves the upper layers of warm material and prevents cooler material from traveling inward, permitting the gas in the near surface layers to equilibrate to a cooler temperature, as dictated by the additional pressure contribution to the equation of state.
Needs validation.
---
## Temperature Contrast Estimates for Starspots
Using the above theoretical development, we find that the predicted temperature contrast for other stars is the same as for sunspots. However, measurements of starspot temperature contrasts reveal a correlation between the contrast and the effective temperature of the star (Berdyugina 2005).
```python
import numpy as np
```
```python
# confirm sunspot estimate above
Bz = np.sqrt(12.*np.pi*10**4.92)
print "Sunspot Bz (G): {:8.3e}".format(Bz)
print "Sunspot umbral temperature (K): {:6.1f}".format(5779.*0.4**0.4)
```
Sunspot Bz (G): 1.771e+03
Sunspot umbral temperature (K): 4005.7
While the temperature ratio appears fixed in Equations (7) and (8), variation in the derived values can be estimated from the variation in $\gamma = c_p / c_v$. There will also be variation in the specific relation between the gas internal energy and the gas pressure, but this is more difficult to quantify from stellar evolution model output.
To address this problem, a small grid of models was run with masses in the range $0.1$ — $0.9 M_{\odot}$ with a mass resolution of $0.1 M_{\odot}$. Output from the code was modified to yield $\gamma$ directly as a function of radius, temperature, pressure, and density.
```python
%matplotlib inline
import matplotlib.pyplot as plt
```
```python
Teffs = np.arange(3000., 6100., 250.)
Tspot_fixed_gamma = Teffs*(0.4)**0.4
# gamma where tau ~ 1 (note, no 0.2 Msun point)
Gammas = np.array([1.22, 1.29, 1.30, 1.29, 1.36, 1.55, 1.63, 1.65])
ModelT = np.array([3.51, 3.57, 3.59, 3.61, 3.65, 3.71, 3.74, 3.77]) # tau = 1
ModelT = 10**ModelT
ModelTeff = 10**np.array([3.47, 3.53, 3.55, 3.57, 3.60, 3.65, 3.69, 3.73]) # T = Teff
Tratio = 0.4**((Gammas - 1.0)/Gammas)
Tspot_physical_gamma = np.array([ModelT[i]*0.4**((Gammas[i] - 1.0)/Gammas[i]) for i in range(len(ModelT))])
# smoothed curve
from scipy.interpolate import interp1d
icurve = interp1d(ModelT, Tspot_physical_gamma, kind='cubic')
Tphot_smoothed = np.arange(3240., 5880., 20.)
Tspot_smoothed = icurve(Tphot_smoothed)
# approximate Berdyugina data
DeltaT = np.array([ 350., 450., 700., 1000., 1300., 1650., 1850.])
BerdyT = np.array([3300., 3500., 4000., 4500., 5000., 5500., 5800.])
BSpotT = BerdyT - DeltaT
print(Tratio)
```
[ 0.84769638 0.81384297 0.80940837 0.81384297 0.78462644 0.72242951
0.70177027 0.69700478]
```python
fig, ax = plt.subplots(1, 1, figsize=(8,4))
ax.set_xlabel('Photospheric Temperature (K)', fontsize=20.)
ax.set_ylabel('Spot Temperature (K)', fontsize=20.)
ax.tick_params(which='major', axis='both', length=10., labelsize=16.)
ax.grid(True)
ax.plot(Teffs, Tspot_fixed_gamma, '--', lw=2, dashes=(10., 10.), markersize=9.0, c='#1e90ff')
ax.plot(BerdyT, BSpotT, '-', lw=2 , dashes=(25., 15.), c='#000080')
ax.fill_between(BerdyT, BSpotT - 200., BSpotT + 200., facecolor='#000080', alpha=0.1, edgecolor='#eeeeee')
ax.plot(ModelT, Tspot_physical_gamma, 'o', lw=2, markersize=9.0, c='#800000')
ax.plot(Tphot_smoothed, Tspot_smoothed, '-', lw=3, c='#800000')
```
```python
fig, ax = plt.subplots(1, 1, figsize=(8,4))
ax.set_xlabel('Photospheric Temperature (K)', fontsize=20.)
ax.set_ylabel('T(phot) - T(spot) (K)', fontsize=20.)
ax.tick_params(which='major', axis='both', length=10., labelsize=16.)
ax.grid(True)
ax.plot(Teffs, Teffs - Tspot_fixed_gamma, '--', lw=2, dashes=(10., 10.), markersize=9.0, c='#1e90ff')
ax.plot(BerdyT, DeltaT, '-', lw=2 , dashes=(25., 15.), c='#000080')
ax.fill_between(BerdyT, DeltaT - 200., DeltaT + 200., facecolor='#000080', alpha=0.1, edgecolor='#eeeeee')
ax.plot(ModelT, ModelT - Tspot_physical_gamma, 'o', lw=2, markersize=9.0, c='#800000')
ax.plot(Tphot_smoothed, Tphot_smoothed - Tspot_smoothed, '-', lw=3, c='#800000')
```
Results of this simple model indicate that equilibration of a polytropic gas within a magnetic structure located near the photosphere ($\tau_{\rm ross} = 1$) provides a reasonable approximation to observed spot temperatures from low-mass M dwarfs up to solar-type stars. Above 5400 K, the gas is sufficiently ideal that the model-predicted relationship (red line) is asymptotic to the case of a purely ideal gas (small-dashed light-blue line). Below that temperature, the simple model traces the relationship provided by Berdyugina (2005). Difficulties below 4000 K may be the result of model inaccuracies, stemming either from the atmospheric structure or from the simple approximation of energy equipartition, or of observational complications that arise when measuring M dwarf photospheric temperatures.
We can also estimate umbral magnetic field strengths by extracting the model gas pressure at the same optical depth ($\tau_{\rm ross} = 1$),
```python
# log(Pressure)
p_gas = np.array([6.37, 5.90, 5.80, 5.70, 5.60, 5.45, 5.30, 5.15])
B_field_Eeq = np.sqrt(12.*np.pi*0.4*10**p_gas)/1.0e3 # in kG
B_field_Peq = np.sqrt( 8.*np.pi*10**p_gas)/1.0e3
# smooth curves
icurve = interp1d(ModelT, B_field_Eeq, kind='cubic')
B_field_Eeq_smooth = icurve(np.arange(3240., 5880., 20.))
icurve = interp1d(ModelT, B_field_Peq, kind='cubic')
B_field_Peq_smooth = icurve(np.arange(3240., 5880., 20.))
```
```python
fig, ax = plt.subplots(1, 1, figsize=(8,4))
ax.set_xlabel('Photospheric Temperature (K)', fontsize=20.)
ax.set_ylabel('Equipartition Magnetic Field (kG)', fontsize=20.)
ax.tick_params(which='major', axis='both', length=10., labelsize=16.)
ax.grid(True)
ax.plot(ModelT, B_field_Eeq, 'o', lw=2, markersize=9.0, c='#800000')
ax.plot(np.arange(3240., 5880., 20.), B_field_Eeq_smooth, '-', lw=2, c='#800000')
ax.plot(ModelT, B_field_Peq, 'o', lw=2, markersize=9.0, c='#1e90ff')
ax.plot(np.arange(3240., 5880., 20.), B_field_Peq_smooth, '-', lw=2, dashes=(20., 5.), c='#1e90ff')
```
The two curves represent two different approximations, one is energy equipartition (red curve) and the other that the magnetic pressure is precisely equal to the gas pressure (blue curve). These values do not represent surface averaged magnetic field strengths, but the strengths of local concentrations of magnetic flux. Based on energy equipartition, we do not expect spot magnetic field strengths to be considerably larger than those estimated from the red curve.
Finally, we can estimate a "curve of cooling" relating the strength of a magnetic field to the temperature within the magnetic structure. Since the properties of the photospheric layers differ so strongly from star to star as a function of effective temperature, it is helpful to compute curves at several effective temperatures characteristic of a low-mass M dwarf, a late K dwarf, and an early K/G dwarf.
```python
B_field_strengths = np.arange(0.1, 8.1, 0.1)*1.0e3
log_P_phot = np.array([5.15, 5.60, 5.90])
Gamma_phot = np.array([1.65, 1.36, 1.29])
Exponents = (Gamma_phot - 1.0)/Gamma_phot
print(Exponents)
fig, ax = plt.subplots(3, 1, figsize=(8, 12), sharex=True)
i = 0
ax[2].set_xlabel('Spot Magnetic Field (kG)', fontsize=20.)
for axis in ax:
B_field_eq = np.sqrt(8.*np.pi*(0.4*10**log_P_phot[i]))
axis.grid(True)
axis.set_ylabel('$T_{\\rm spot} / T_{\\rm phot}$', fontsize=20.)
axis.tick_params(which='major', axis='both', length=10., labelsize=16.)
axis.plot(B_field_strengths/1.0e3, (1.0 - B_field_strengths**2/(8.*np.pi*10**log_P_phot[i]))**Exponents[i],
'-', lw=3, c='#800000')
axis.plot(B_field_eq/1.0e3, (1.0 - B_field_eq**2/(8.0*np.pi*10**log_P_phot[i]))**Exponents[i],
'o', markersize=12.0, c='#555555')
i += 1
```
### M Dwarfs
Shulyak et al. (2010, 2014) have measured the distribution of magnetic fields on the surfaces of M dwarfs by modeling FeH bands in high resolution, high S/N Stokes I spectra. They find M dwarf spectra are typically best fit by a uniform 1 kG magnetic field everywhere on the star, but with the addition of local concentrations across the surface. These local concentrations can reach upward of 7 – 8 kG. We now ask, do the stars that possess these fields lie close to the locus defined by the models?
```python
Shulyak_max_B = np.array([[3400., 100., 6.5, 0.5],
[3400., 100., 6.0, 0.5],
[3300., 100., 7.5, 0.5],
[3100., 100., 6.5, 0.5],
[3100., 50., 1.0, 0.5]])
```
```python
fig, ax = plt.subplots(1, 1, figsize=(8,4))
ax.set_xlabel('Photospheric Temperature (K)', fontsize=20.)
ax.set_ylabel('Equipartition Magnetic Field (kG)', fontsize=20.)
ax.tick_params(which='major', axis='both', length=10., labelsize=16.)
ax.grid(True)
ax.fill_between(np.arange(3240., 5880., 20.), B_field_Peq_smooth, B_field_Eeq_smooth, facecolor='#555555', alpha=0.3, edgecolor='#eeeeee')
ax.errorbar(Shulyak_max_B[:,0], Shulyak_max_B[:,2], xerr=Shulyak_max_B[:,1], yerr=Shulyak_max_B[:,3], lw=2, fmt='o', c='#555555')
```
We find that the data from Shulyak et al. (2014) for the maximum magnetic field strengths (in local concentrations) of active stars are on the order of those expected from either energy or pressure equipartition. The fact that two stars have weaker magnetic fields only indicates that there were no strong local concentrations, similar to the surface of the quiet Sun.
Note, however, that there is some uncertainty in this comparison. Shulyak et al. (2014) quote the effective temperature, which for M dwarfs is not necessarily equal to the photospheric temperature. Model atmospheres predict that the "effective temperature" for an M dwarf occurs in the optically thin layers above the opaque photosphere. Thus, it is more likely that the photospheric temperature for the Shulyak sample is _greater_ than those quoted above.
If we instead convert the quoted effective temperatures to photospheric temperatures using stellar model atmospheres, we find
```python
Shulyak_max_B = np.array([[3685., 100., 6.5, 0.5],
[3685., 100., 6.0, 0.5],
[3578., 100., 7.5, 0.5],
[3379., 100., 6.5, 0.5],
[3379., 50., 1.0, 0.5]])
fig, ax = plt.subplots(1, 1, figsize=(8,4))
ax.set_xlabel('Photospheric Temperature (K)', fontsize=20.)
ax.set_ylabel('Equipartition Magnetic Field (kG)', fontsize=20.)
ax.tick_params(which='major', axis='both', length=10., labelsize=16.)
ax.grid(True)
ax.fill_between(np.arange(3240., 5880., 20.), B_field_Peq_smooth, B_field_Eeq_smooth, facecolor='#555555', alpha=0.3, edgecolor='#eeeeee')
ax.errorbar(Shulyak_max_B[:,0] + 100., Shulyak_max_B[:,2], xerr=Shulyak_max_B[:,1], yerr=Shulyak_max_B[:,3], lw=2, fmt='o', c='#555555')
```
which shows that localized concentrations of magnetic fields exceed those estimated by energy or pressure equipartition.
In general, these comparisons are further complicated by uncertainties regarding the formation height of molecules and atomic features with respect to the background plasma. Molecules may reveal the magnetic field strength at an optical depth (in the ambient plasma) below $\tau = 1$, in which case the gas pressure used to compute the equipartition magnetic field would be larger.
Source: gfeiden/Notebook, `Daily/20150803_pressure_reduc_spots.ipynb` (MIT license)
<h1> Creating the Best Fantasy Basketball Lineup </h1>
<h2> Introduction </h2>
Fantasy basketball, and fantasy sports in general, is an extremely popular game where participants compete against each other by virtually selecting players from real upcoming games to create a lineup. Typically, the goal is to assemble the best all-star team, meaning participants are allowed to select players from different professional teams that compete in the same league to play in each coach's virtual team.
A basketball team typically consists of five players: a point guard (PG), a shooting guard (SG), a small forward (SF), a power forward (PF), and a center (C). Each player, assigned to these positions, can have different characteristics and objectives in the basketball game. A shooting guard might specialize in scoring, while a point guard's objective might be to run the game-plan and distribute the ball efficiently. To help compare players across different positions we need to find a common metric to assess each player's performance -- and we'll use fantasy points to do this.
In a follow-on example we will extend the optimization model to reflect actual fantasy basketball competitions found on major sites.
<h2> Objective and Prerequisites </h2>
In this example, we'll take on the role of a basketball coach and learn how to find the optimal lineup of National Basketball Association (NBA) players in the context of fantasy basketball. The goal is to select the five players who are going to perform the best in the NBA games played on December 25, 2017, by simultaneously satisfying player and position eligibility and budget constraints.
We aim to demonstrate that machine learning and mathematical optimization are closely connected, and how they can be integrated to derive optimal decisions. Simply starting from a dataset, we will apply machine learning techniques to make predictions about the performance of basketball players. But when taking into account the problem constraints, directly utilizing these forecasts to make player selections isn't straightforward. This is where mathematical optimization and the Gurobi Optimizer shine: by formulating and solving a mixed-integer programming (MIP) problem, we can find the provably best feasible lineup, maximizing the lineup's **total fantasy points**.
This example is for beginners in mathematical optimization who may have some experience in data handling and predictive modeling and experience using Python.
The presented problem requires the installation of the following Python packages:
- **pandas** for data analysis and manipulation
- **numpy** for calculations
- **matplotlib** and **seaborn** for plotting and visualizing information
- **scikit-learn** for accessing data science predictive tools
- **gurobipy** for utilizing Gurobi to formulate and solve the optimization problem
<h2> Problem Statement and Solution Approach</h2>
Given the regular season historical performances (box scores) of NBA players for the seasons 2016-2017 and 2017-2018, the eligible positions for each of the players, and a budget restriction on how much total salary we are allowed to spend for our team, select the five NBA players who are going to have the best performance on the NBA games played on Christmas day of the 2017-2018 season.
Among the eligible players that are going to play that day, we need to select a point guard, a shooting guard, a small forward, a power forward, and a center.
The solution to the problem consists of two components: 1) **fantasy points forecast** and 2) **lineup optimization**.
<h3> Fantasy Points Forecast </h3>
Let's start by loading the necessary libraries to solve our problem.
```python
%pip install gurobipy
```
```python
import pandas as pd #importing pandas
import numpy as np #importing numpy
import matplotlib.pyplot as plt #importing matplotlib
import seaborn as sns #importing seaborn
from sklearn.model_selection import train_test_split #importing scikit-learn's function for data splitting
from sklearn.linear_model import LinearRegression #importing scikit-learn's linear regressor function
from sklearn.neural_network import MLPRegressor #importing scikit-learn's neural network function
from sklearn.ensemble import GradientBoostingRegressor #importing scikit-learn's gradient booster regressor function
from sklearn.metrics import mean_squared_error #importing scikit-learn's root mean squared error function for model evaluation
from sklearn.model_selection import cross_validate #improting scikit-learn's cross validation function
from gurobipy import Model, GRB, quicksum #importing Gurobi
```
We are going to use two publicly available datasets from [Kaggle](https://www.kaggle.com). The [first dataset](https://www.kaggle.com/pablote/nba-enhanced-stats) contains the historical information of the players' and teams' performances and will be used for training and testing our predictive model. The [second dataset](https://www.kaggle.com/alandu20/daily-fantasy-basketball-draftkings) includes the salaries of players, as well as their eligible position. Note that *salaries* here are determined by sites that host the fantasy sports contests and have nothing to do with what the players are actually paid.
We have slightly preprocessed the two datasets to have consistency in the players' names as well as including solely the columns that we would need for the purposes of this project. Also, we utilize observations where athletes have played for at least 3 minutes in a basketball game. Both of the updated versions of the datasets are available in our **repository**.
We begin by reading the boxscores dataset, which includes the regular season performances for all players from season 2016-2017 up to the games played on December 25th, 2017. For games before that date, we calculated each player's total fantasy points using box score values as follows:
<font size="3">$FP = Points + 1.5 \times Assists - 0.5 \times Turnovers + Steals + 2 \times Blocks + 1.25 \times Rebounds + 0.5 \times ThreePointer$</font>
This value is already calculated in the **boxscores_dataset.csv** file.
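For reference, the calculation can be sketched as a simple row-wise function like the one below. The column name for made three-pointers (`play3PM`) is only a placeholder, since that column is not referenced elsewhere in this notebook; the remaining column names match those used later on.
```python
# Sketch of the fantasy point calculation (already stored in boxscores_dataset.csv);
# 'play3PM' (three-pointers made) is a placeholder column name.
def fantasy_points(row):
    return (row['playPTS'] + 1.5*row['playAST'] - 0.5*row['playTO'] + row['playSTL']
            + 2*row['playBLK'] + 1.25*row['playTRB'] + 0.5*row['play3PM'])
```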
```python
boxscores = pd.read_csv('https://raw.githubusercontent.com/yurchisin/modeling-examples/master/fantasy_basketball_1_2/boxscores_dataset.csv') #load boxscores dataset
boxscores = boxscores[(boxscores.playMin>=3) | (boxscores.playMin.isnull())]
```
In general, fantasy points are connected with features related to the utilization and efficiency of each player and we can examine some of these relationships through visualization. Below are a few scatterplots, but feel free to add more visuals to get a deeper understanding of what's happening in the data.
```python
fig, (FGA, FGM, FTM, Min) = plt.subplots(1, 4, figsize=(14,5))
fig.tight_layout()
FGA.scatter(boxscores['playFGA'], boxscores['FantasyPoints'], c='blue', alpha = .2)
FGM.scatter(boxscores['playFGM'], boxscores['FantasyPoints'], c='lightblue', alpha = .2)
FTM.scatter(boxscores['playFTM'], boxscores['FantasyPoints'], c='coral', alpha = .2)
Min.scatter(boxscores['playMin'], boxscores['FantasyPoints'], c='purple', alpha = .2)
FGA.set_xlabel('Field Goal Attempts')
FGM.set_xlabel('Field Goals Made')
FTM.set_xlabel('Free Throws Made')
Min.set_xlabel('Minutes Played')
FGA.set_ylabel('Fantasy Points');
```
A distribution plot of the true fantasy points of the players for the previous games is shown below:
```python
hplot = sns.histplot(boxscores['FantasyPoints'], color="blue", label="Fantasy Points", kde=True, stat="density", linewidth=0, bins=20)
hplot.set_xlabel("Fantasy Points", fontsize = 12)
hplot.set_ylabel("Density", fontsize = 12)
sns.set(rc={"figure.figsize":(14, 5)})
```
When building our predictive model, fantasy points scored in a game is a good target (or dependent) variable. But what about the model's features?
One approach is to use boxscore data for a player's previous several games to find the average of key stats over that span. We will begin with the average of the three previous games for: points, assists, turnovers, steals, blocks, total rebounds, field goal attempts, free throw attempts, 2-pointer \%, 3-pointer \%, free throw \%, minutes played, days-off and the true fantasy points.
For example, this calculation for the assists at the $k^{th}$ game is:
<font size="4">$Average \hspace{0.1cm} Assists_k = \frac{Assists_{k-1}+Assists_{k-2}+Assists_{k-3}}{3}$</font>
```python
horizon=3
for column_name in ['playPTS','playAST','playTO','playSTL','playBLK','playTRB','playFGA','playFTA','play2P%','play3P%','playFT%','playMin','teamDayOff','FantasyPoints']:
boxscores['moving' + column_name] = boxscores.groupby(['playDispNm'])[column_name].transform(lambda x: x.rolling(horizon, 1).mean().shift(1)) #lagged moving average of numeric features
```
This gives us a set of features to start with, though we still would want to perform feature selection for our predictive model. Changing the number of lag games may improve your predictive model -- so give it a try!
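If you want to experiment with the window length, the feature construction can be wrapped in a small helper that mirrors the cell above; this is just a convenience sketch.
```python
def add_rolling_features(df, columns, horizon):
    """Add lagged rolling-mean features (window of `horizon` games) for each column."""
    for column_name in columns:
        df['moving' + column_name] = df.groupby(['playDispNm'])[column_name].transform(
            lambda x: x.rolling(horizon, 1).mean().shift(1))
    return df
# e.g. boxscores = add_rolling_features(boxscores, ['playPTS', 'playAST'], horizon=5)
```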
Because we are using a lagged average, there is not any historical information for the first game played for each player, so we drop these observations.
```python
boxscores.dropna(subset = ["movingplayPTS"], inplace=True) #drop the first observation for each player
```
To start with feature selection, we can look at the correlation between our newly derived variables, which is shown below:
```python
sns.set(rc = {'figure.figsize':(15,8)})
sns.heatmap(boxscores[['movingplayPTS', 'movingplayAST','movingplayTO','movingplaySTL','movingplayBLK','movingplayTRB','movingplayFGA','movingplayFTA','movingplay2P%','movingplay3P%','movingplayFT%','movingplayMin','movingteamDayOff','movingFantasyPoints']].corr(),annot=True)
plt.show()
```
We observe that some of the features are highly correlated. For example, the average of free throw attempts with the points average.
We are also investigating the utilization of two categorical variables. Specifically, if the current game is home or away and if the player will start or come from the bench. We transform the categorical variables to 0-1 variables.
```python
boxscores['dummyTeamLoc'] = pd.get_dummies(data=boxscores['teamLoc'],drop_first=True) #1 if the game is a home game, 0 if it is an away game
boxscores['dummyplayStat'] = pd.get_dummies(data=boxscores['playStat'],drop_first=True) #1 if the player starts, 0 if the player comes from the bench
```
Now that the dataset has been updated, we move into forecasting the fantasy points for the players who are going to play on December 25, 2017.
```python
forecasting_data = boxscores[boxscores.gmDate != '2017-12-25'] #for model training, we exclude observation on December 25, 2017
```
We split the data set using 80\% of the data for training and 20\% for testing. For forecasting we try three models: linear regression, a neural network, and a gradient boosting regressor. In addition to the heat map, we ran statistical significance tests and examined multicollinearity using variance inflation factors (VIF). Cross-validation is also used to identify potential overfitting.
The features that will be used for our predictive models are the average of the player's: assists, turnovers, steals, blocks, total rebounds, free throw attempts, free throw %, as well as whether the player will start or will come from the bench.
```python
X = forecasting_data[['movingplayAST','movingplayTO','movingplaySTL','movingplayBLK','movingplayTRB','movingplayFTA','movingplayFT%','dummyplayStat']] #select the features that will be used for model training
y = forecasting_data['FantasyPoints'] #target set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=4) #dataset splitting
linear_regressor = LinearRegression() #load linear regressor
linear_regressor.fit(X_train, y_train) #train the linear regression model
linear_regression_validation = cross_validate(linear_regressor, X_train, y_train, cv=5, return_train_score=True, return_estimator=True)
mlp = MLPRegressor(hidden_layer_sizes=(5,5), activation='relu') #load neural network
mlp.fit(X_train,y_train) #train the neural network with a ReLU function and two hidden layers with 5 nodes each
mlp_validation = cross_validate(mlp, X_train, y_train, cv=5, return_train_score=True, return_estimator=True)
gb = GradientBoostingRegressor() #load a gradient boosting regressor
gb.fit(X_train, y_train) #train a gradient boosting model
gb_validation = cross_validate(gb, X_train, y_train, cv=5, return_train_score=True, return_estimator=True)
```
/Users/yurchisin/opt/anaconda3/lib/python3.9/site-packages/sklearn/neural_network/_multilayer_perceptron.py:614: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (200) reached and the optimization hasn't converged yet.
warnings.warn(
```python
gb_validation['test_score'] #print CV test score across each fold
```
array([0.50957097, 0.49910472, 0.48128309, 0.49759688, 0.50076644])
We observe that the model performs consistently across each of the folds. Now, to evaluate the performance of the models we calculate their mean squared error (MSE) values.
```python
linear_regression_predictions = linear_regressor.predict(X_test) #make predictions based on the test set for the linear regression model
mlp_predictions = mlp.predict(X_test) #make predictions based on the test set for the neural network model
gb_predictions = gb.predict(X_test) #make predictions based on the test set for the gradient boosting model
linear_regression_mse = mean_squared_error(y_test, linear_regression_predictions) #calculate the MSE for the linear regression model
mlp_mse = mean_squared_error(y_test, mlp_predictions) #calculate the MSE for the neural network model
gb_mse = mean_squared_error(y_test, gb_predictions) #calculate the MSE for the gradient boosting model
results = {'Linear Regression':[linear_regression_mse],'ReLU Neural Network':[mlp_mse],'Gradient Boosting Regressor':[gb_mse]}
modeling_results = pd.DataFrame(data=results,index=['MSE'])
modeling_results
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Linear Regression</th>
<th>ReLU Neural Network</th>
<th>Gradient Boosting Regressor</th>
</tr>
</thead>
<tbody>
<tr>
<th>MSE</th>
<td>91.077312</td>
<td>90.16697</td>
<td>90.670528</td>
</tr>
</tbody>
</table>
</div>
Each of the model types performs similarly well, as shown by their mean squared error values. Also, plots of the predicted value error with respect to the predicted fantasy points are shown in the following figures:
```python
fig, (LR, FNN, GBR) = plt.subplots(1, 3,figsize=(15,5));
fig.tight_layout()
LR.scatter(x = linear_regression_predictions, y = y_test - linear_regression_predictions,color='red',alpha=0.06)
FNN.scatter(x = mlp_predictions, y = y_test - mlp_predictions, color='green',alpha=0.06)
GBR.scatter(x = gb_predictions, y = y_test - gb_predictions, color='blue',alpha=0.06)
LR.set_xlabel('Linear Regression Predicted Fantasy Points')
FNN.set_xlabel('Neural Network Predicted Fantasy Points')
GBR.set_xlabel('Gradient Boosting Predicted Fantasy Points')
LR.set_ylabel('Linear Regression Residual')
FNN.set_ylabel('Neural Network Residual')
GBR.set_ylabel('Gradient Boosting Residual')
```
As we see from the above plots and the test scores, our models can only partially explain the variance of the fantasy points. That is to be expected since there are so many factors that affect the performance of the players. By a slight margin the gradient boosting regressor had the lowest MSE, so we'll refit that model to the complete data set and use that to predict fantasy points.
After that, we'll append the predicted fantasy points to the salary data to set up the optimization part of the solution.
```python
gb_final = GradientBoostingRegressor(random_state=4)
gb_final.fit(X, y)
optimization_dataset = boxscores
optimization_dataset['PredictedFantasyPoints'] = gb_final.predict(boxscores[['movingplayAST','movingplayTO','movingplaySTL','movingplayBLK','movingplayTRB','movingplayFTA','movingplayFT%','dummyplayStat']])
player_results = pd.read_csv('https://raw.githubusercontent.com/yurchisin/modeling-examples/master/fantasy_basketball_1_2/target_games.csv')
player_list = list(player_results['Player'].unique())
col = pd.DataFrame()
for player in player_list:
player_flag = player
optimization_data_per_player = optimization_dataset.loc[(optimization_dataset['playDispNm']==player)&(optimization_dataset['gmDate']=='2017-12-25')]
col = col.append(optimization_data_per_player)
player_results['PredictedFantasyPoints'] = col['PredictedFantasyPoints'].values
```
We also calculate another column which is the predicted fantasy points divided by the salary of each player.
```python
pd.set_option('display.expand_frame_repr', False)
player_results['Points/Salary Ratio'] = 1000*player_results['PredictedFantasyPoints']/player_results['Salary'] #we multiply the fantasy points vs salary ratio by 1000 for better visualization
player_results.sort_values(by='PredictedFantasyPoints',ascending=False).head(5)
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Player</th>
<th>Pos</th>
<th>Team</th>
<th>Opp</th>
<th>Salary</th>
<th>PredictedFantasyPoints</th>
<th>Points/Salary Ratio</th>
</tr>
</thead>
<tbody>
<tr>
<th>4</th>
<td>Joel Embiid</td>
<td>C</td>
<td>PHI</td>
<td>@ NYK</td>
<td>9500</td>
<td>51.313689</td>
<td>5.401441</td>
</tr>
<tr>
<th>0</th>
<td>James Harden</td>
<td>PG</td>
<td>HOU</td>
<td>@ OKC</td>
<td>11100</td>
<td>48.809577</td>
<td>4.397259</td>
</tr>
<tr>
<th>1</th>
<td>LeBron James</td>
<td>SF</td>
<td>CLE</td>
<td>@ GSW</td>
<td>11000</td>
<td>48.149718</td>
<td>4.377247</td>
</tr>
<tr>
<th>2</th>
<td>Russell Westbrook</td>
<td>PG</td>
<td>OKC</td>
<td>vs HOU</td>
<td>10900</td>
<td>44.007224</td>
<td>4.037360</td>
</tr>
<tr>
<th>3</th>
<td>Kevin Durant</td>
<td>SF</td>
<td>GSW</td>
<td>vs CLE</td>
<td>10500</td>
<td>43.438575</td>
<td>4.137007</td>
</tr>
</tbody>
</table>
</div>
So how many potential lineups are there? A lot.
$25PG \times 23SF \times 22SG \times 19PF \times 9C = 2,163,150$ combinations
Using brute force to calculate the total fantasy points for each lineup would take a very, very long time. We also need to consider the salary constraint (e.g. $30,000). A greedy approach (i.e. always taking the best player available) would select the first three players that have the highest fantasy points (Embiid, James, and Harden), but then realize that the salary cap will be violated. Using the points to salary ratio can also lead to the same issue.
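You can check the salary problem with the greedy pick directly from the prediction table, for example:
```python
# Combined salary of the three players with the highest predicted fantasy points
top3 = player_results.sort_values(by='PredictedFantasyPoints', ascending=False).head(3)
print('Top-3 salary total:', top3.Salary.sum(), '(budget: 30000)')
```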
This is where optimization becomes most advantageous, due to its ability to explore the space of options very efficiently and to prove that the resulting solution is optimal.
<h3> Optimal Lineup Selection </h3>
Now that we have predictions for the players in our slate of NBA games we can work on determining our optimal lineup.
First, we'll make some definitions as these are helpful for us to set up our optimization model. For starters, the subscript $i$ will be used to denote individual players across the set of all players we can choose from.
**Input Parameters**
$p_{i}$: the predicted fantasy points of player $i$
$s_{i}$: the salary of player $i$
$S$: our total available salary
Here is the code to set up the indices and parameters:
```python
indices = player_results.Player
points = dict(zip(indices, player_results.PredictedFantasyPoints))
salaries = dict(zip(indices, player_results.Salary))
S = 30000
m = Model(); # this defines the model that we'll add to as we finish the formulation
```
**Decision Variables**
The goal of this problem is to determine whether each player is in our final lineup. This is an example of when *binary variables* are used in optimization, to represent a **yes** or **no** decision we want to make. This is modeled as $y_{i} = 1$ if player $i$ is selected, and $y_{i} = 0$ otherwise.
Here is where we add a set of decision variables to our model in gurobipy:
```python
y = m.addVars(player_results.Player, vtype=GRB.BINARY, name="y")
```
**Objective Function**
The objective of our problem is to maximize the total fantasy points of our lineup. Since we are using binary variables, if a player is selected ($y_{i} = 1$), then their predicted points will contribute to our lineup using $p_{i} \cdot y_{i}$. If $y_{i} = 0$ then $p_{i} \cdot y_{i} = 0$ as well. Summing across all players will give us the function to maximize, written as:
\begin{align}
Max \hspace{0.2cm} Z = \sum p_{i} \cdot y_{i}
\end{align}
Below is one way to write a summation using the quicksum function in gurobipy (other formulations also work), which we add to our model as the objective:
```python
# since we are maximizing points the last argument here is GRB.MAXIMIZE
m.setObjective(quicksum(points[i]*y[i] for i in indices), GRB.MAXIMIZE)
```
**Constraints**
We need to guarantee that each position has exactly one player assigned. Since our decision variables are each 0 or 1, we can model this by adding a constraint for each position summing across the eligible players (and decision variables) requiring that sum to be equal to 1:
\begin{align}
\sum_{i \space eligible} y_{i} = 1
\end{align}
Looping over the positions will add a constraint for each while the condition in the quicksum function will tell the model to only sum over players if they are eligible for that position:
```python
player_position_map = list(zip(player_results.Player, player_results.Pos))
for j in player_results.Pos: # looping over every player row adds each position constraint multiple times; the duplicates are harmless, and player_results.Pos.unique() would avoid them
m.addConstr(quicksum([y[i] for i, pos in player_position_map if pos==j])==1)
```
Additionally, we need to ensure that the total salary does not exceed a certain threshold:
\begin{align}
\sum s_{i} \cdot y_{i} \leq S
\end{align}
```python
m.addConstr(quicksum(salaries[i]*y[i] for i in indices) <= S, name="salary");
#the budget for the selected team must not exceed 30,000
```
Now we've added all of the required parts to our optimization model and we are ready to solve.
```python
m.optimize() #we optimize our model
```
Gurobi Optimizer version 9.5.1 build v9.5.1rc2 (mac64[rosetta2])
Thread count: 8 physical cores, 8 logical processors, using up to 8 threads
Optimize a model with 97 rows, 96 columns and 2082 nonzeros
Model fingerprint: 0xb9f2d21c
Variable types: 0 continuous, 96 integer (96 binary)
Coefficient statistics:
Matrix range [1e+00, 1e+04]
Objective range [7e+00, 5e+01]
Bounds range [1e+00, 1e+00]
RHS range [1e+00, 3e+04]
Found heuristic solution: objective 126.1046919
Presolve removed 91 rows and 57 columns
Presolve time: 0.00s
Presolved: 6 rows, 39 columns, 78 nonzeros
Found heuristic solution: objective 156.7777076
Variable types: 0 continuous, 39 integer (39 binary)
Root relaxation: objective 1.736178e+02, 7 iterations, 0.00 seconds (0.00 work units)
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
0 0 173.61779 0 2 156.77771 173.61779 10.7% - 0s
H 0 0 171.9189291 173.61779 0.99% - 0s
0 0 172.93546 0 4 171.91893 172.93546 0.59% - 0s
Cutting planes:
Gomory: 1
GUB cover: 1
Explored 1 nodes (10 simplex iterations) in 0.03 seconds (0.00 work units)
Thread count was 8 (of 8 available processors)
Solution count 3: 171.919 156.778 126.105
Optimal solution found (tolerance 1.00e-04)
Best objective 1.719189291363e+02, best bound 1.719189291363e+02, gap 0.0000%
Much of the output above can be ignored for now, but it will provide valuable information as you create larger and more complicated models. Let's focus on the last two lines, which say that an optimal solution was found and report its objective value.
We need to extract the solution from the model to find which players are selected. Using getVars() returns each decision variable, so we loop through them, checking whether each variable's value (accessed with .x) is greater than 0. While in the loop we record info about each selected player in a data frame.
```python
results = pd.DataFrame()
for v in m.getVars():
if v.x > 1e-6:
results = results.append(player_results.iloc[v.index][['Player','Pos','PredictedFantasyPoints','Salary']])
print(v.varName, v.x)
print('Total fantasy score: ', m.objVal)
```
y[Joel Embiid] 1.0
y[Dario Saric] 1.0
y[Trevor Ariza] 1.0
y[Jarrett Jack] 1.0
y[Markieff Morris] 1.0
Total fantasy score: 171.91892913632242
Because the games were played, we also look up the true fantasy points of the players selected for the games on December 25, 2017, and we add that to our results dataframe.
```python
results['True Fantasy Points'] = [53.5,17.25,28.5,15.5,29.25]
results
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Player</th>
<th>Pos</th>
<th>PredictedFantasyPoints</th>
<th>Salary</th>
<th>True Fantasy Points</th>
</tr>
</thead>
<tbody>
<tr>
<th>4</th>
<td>Joel Embiid</td>
<td>C</td>
<td>51.313689</td>
<td>9500.0</td>
<td>53.50</td>
</tr>
<tr>
<th>19</th>
<td>Dario Saric</td>
<td>PF</td>
<td>40.505486</td>
<td>6200.0</td>
<td>17.25</td>
</tr>
<tr>
<th>36</th>
<td>Trevor Ariza</td>
<td>SG</td>
<td>26.354157</td>
<td>5000.0</td>
<td>28.50</td>
</tr>
<tr>
<th>43</th>
<td>Jarrett Jack</td>
<td>PG</td>
<td>27.780012</td>
<td>4600.0</td>
<td>15.50</td>
</tr>
<tr>
<th>49</th>
<td>Markieff Morris</td>
<td>SF</td>
<td>25.965585</td>
<td>4400.0</td>
<td>29.25</td>
</tr>
</tbody>
</table>
</div>
The best total fantasy points based on *this initial set* of predictions is **$171.92$**, obtained by selecting **Jarrett Jack** at point guard, **Trevor Ariza** at shooting guard, **Markieff Morris** at small forward, **Dario Saric** at power forward, and **Joel Embiid** at center, with a total salary of $\$29,700$.
Let's quickly revisit the idea of putting together a lineup in a *greedy* fashion, meaning we start with the *best available* choice and repeat. Notice that players we predicted to have very high scoring games, such as James Harden or LeBron James, didn't make the optimal roster, so going after just the high scorers can be suboptimal.
If you were to re-sort the salary table by the ratio of predicted points to salary, we'd see PF Jordan Bell and PG Josh Hart atop the list. But neither of those players is in the optimal lineup either, so even using that metric can lead to a suboptimal solution.
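That re-sorting is a one-liner using the ratio column computed earlier:
```python
player_results.sort_values(by='Points/Salary Ratio', ascending=False).head(5)
```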
This shows the power of mathematical optimization. Our simple model was able to consider over 2 million possibilities and arrive at an optimal solution that would have been overlooked if we were to apply some intuitive decision rules when creating a lineup.
<h2> Conclusion </h2>
In this example you learned how to optimize your basketball lineup in the context of fantasy sports. Starting with box score data, we constructed models that predict the future fantasy points of NBA athletes and then developed an optimization formulation to calculate the best lineup subject to budget and position eligibility constraints.
Through this example, our aim was to clearly present some of the connections between data science and mathematical optimization by starting with data and ending with a decision. We also saw how easy it is to make suboptimal decisions even when applying sensible rules.
We also have a more advanced fantasy basketball problem that uses the same predictive element but expands the optimization component to reflect real fantasy basketball contests from popular sites.
Source: yurchisin/modeling-examples, `fantasy_basketball_1_2/fantasy_basketball_part1_gcl.ipynb` (Apache-2.0 license)
# Exercise 1 - The Mandelbrot Set
The Mandelbrot set is a beautiful fractal ([Wikipedia](http://en.wikipedia.org/wiki/Mandelbrot_set)).
More precisely, it contains all numbers from the complex plane where the complex quadratic polynomial
\begin{align}
z_{n+1} = z_{n}^2 + c
\end{align}
remains bounded. A complex number $c$ is part of the Mandelbrot set when, starting with $z_0=0$ and applying the iteration repeatedly, the absolute value of $z_n$ remains smaller than or equal to 2 regardless of how large $n$ becomes.
Moreover, one can draw very beautiful images by looking at the *escape time* of a numerical computation:
We will numerically determine whether a complex number $c$ belongs to the set, and if not we will keep track of how many iterations are needed until $|z_n| > 2$. Plotting the escape times of a sampled grid of complex numbers as a 2D image will yield the famous *Apfelmaennchen* you might have seen before.
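To make the escape-time idea concrete before tackling the full task, here is a minimal check for a single complex number (the two values of `c` below are arbitrary examples):
```
def escape_time(c, maxiterations=50):
    """Number of iterations until |z| > 2, or maxiterations if c appears to belong to the set."""
    z = 0.0
    for n in range(maxiterations):
        z = z**2 + c
        if abs(z) > 2:
            return n
    return maxiterations

print(escape_time(0.0 + 0.0j)) # stays bounded, so this returns maxiterations
print(escape_time(1.0 + 1.0j)) # escapes after very few iterations
```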
#Task:#
* Write a function ``mandelbrot(relim, imlim, resteps, imsteps, maxiterations)`` that computes and returns the escape times for the mandelbrot set.
* ``relim`` and ``imlim`` are tuples that define the boundary of the complex plane (e.g. ``relim=(-2,1)`` and ``imlim=(-1,1)``), for convenience ``imlim`` should contain real numbers although it represents the complex axis.
* ``resteps`` and ``imsteps`` are integers and define the sampling resolution along each axis (e.g. 300 and 200 steps).
* ``maxiterations`` is an integer defining the maximum number of iterations (e.g. 50).
* The function should sample complex numbers from the plane as defined by the parameters ``relim, imlim, resteps, imsteps`` and repeatedly apply the quadratic polynomial from above until the absolute value of a complex number is larger than 2. In case a maximum number of iterations is reached (``maxiterations``) the number is believed to be part of the set.
* The function should return a 2D array containing the number of iterations needed for every sampled complex number and two 1D arrays containing the sampled values along each axis. Escape times in the 2D array of exactly ``maxiterations`` indicate complex numbers that are believed to be part of the set.
* Make a 2D color plot of the escape times; you might want to look at matplotlib's [imshow](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.imshow.html) function.
###Hints on how the function ``mandelbrot`` may work:###
* Create a 2D numpy array containing the escape times, in the beginning filled with zeros.
* Secondly, create two 1D arrays sampling each dimension of the complex plane. You might want to take a look at the [np.linspace](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linspace.html) function. You can multiply one of the arrays with ``1j`` to create the imaginary axis. Later on you can calculate each starting complex number simply from summing individual elements of both arrays.
* The easy brute-force solution involves three nested for loops. Be aware that you can use python's ``break`` statement to leave a for loop and spare unnecessary computations.
* Iterate at most ``maxiterations`` times for any value in your complex plane and apply the quadratic polynomial from above. If the absolute value gets larger than 2 you can stop iterating and store the number of iterations you have needed in the 2D escape array. Thus, for complex numbers that are actually part of the Mandelbrot set, the 2D array of escape times should contain the value ``maxiterations``.
###Hints for optimization:###
* You can aim for an optimized version that uses vectorized computation. Here you should only need a single for loop.
* Create a second and third 2D array containing the starting complex numbers and the intermediate values, checkout the [meshgrid](http://docs.scipy.org/doc/numpy/reference/generated/numpy.meshgrid.html) function to create these.
* Use the second 2D array to iteratively update all values in the third array at once (vectorized computation!). Mask all values that already escaped the boundary via boolean indexing to speed up the computation.
* Remember that boolean indexing creates copies, not views! So you need to find a smart way to only apply the polynomial
to values that haven't escaped without always keeping the full 2D array, otherwise internally numpy has to reiterate the full array every timestep. If you also create mesh grids of matrix indices you can reduce the 2D arrays each iteration to 1D arrays containing only the currently not escaped values. This is tricky!
* Now you can use finer resolutions than above!
# Brute-Force Solution
```
import numpy as np
def mandelbrot(relim=(-2,1), imlim=(-1, 1), resteps=300, imsteps=200, maxiterations=50):
"""Computes the *escape times* of the numeric approximation of the Mandelbrot set.
Values of ``maxiterations`` refer to complex numbers that are believed to be part of the set.
:param relim: The limits of the real axis
:param imlim: The limits of the imaginary axis
:param resteps: Sampling resolution along real axis
:param imsteps: Sampling resolution along the imaginary axis
:param maxiterations: Maximum number of iterations
:return:
2D Array of escape times
1D Array of real samples
1D Array of imaginary samples
"""
realaxis = np.linspace(relim[0], relim[1], resteps) # Sample the real axis
imaxis = np.linspace(imlim[0], imlim[1], imsteps) * 1j # Sample the imaginary axis
escape = np.zeros((imsteps, resteps), dtype=int) # 2D array of escape times
for irun in range(resteps): # Iterate over real axis
for jrun in range(imsteps): # Iterate over imaginary axis
c = realaxis[irun] + imaxis[jrun] # Starting value
z = 0.0 # Helper variable
for krun in range(maxiterations): # Compute the escape time
z = z**2 + c
if np.abs(z) >= 2:
break # c is not part of the Mandelbrot set
else:
krun = maxiterations # If the for loop did not break c is part of the set
escape[jrun, irun] = krun
return escape, realaxis, imaxis
```
```
xlim = (-2,1)
ylim = (-1,1)
xsteps = 300
ysteps = 200
maxiterations = 50
```
```
escape, realaxis, imaxis = mandelbrot(xlim, ylim, xsteps, ysteps, maxiterations)
```
```
import matplotlib.pyplot as plt
%matplotlib inline
plt.imshow(escape, aspect='auto', origin='lower')
plt.xticks(np.arange(xsteps)[::50], ['%.2f' % x for x in realaxis[::50]])
plt.yticks(np.arange(ysteps)[::50], ['%.2f' % x.imag for x in imaxis[::50]])
plt.xlabel('real axis')
plt.ylabel('imaginary axis')
```
# Optimized Version
```
def mandelbrot_pro(relim=(-2,1), imlim=(-1, 1), resteps=1500, imsteps=1000, maxiterations=50):
"""As function mandelbrot, but faster if the number of maxiterations is rather low.
It uses a vectorized approach.
"""
realaxis = np.linspace(relim[0], relim[1], resteps)
imaxis = np.linspace(imlim[0], imlim[1], imsteps) * 1j
escape = np.ones((imsteps, resteps), dtype=int) * maxiterations
realarray, imarray = np.meshgrid(realaxis, imaxis)
complexarray = realarray + imarray
indices_real = np.arange(len(realaxis)) # index array
indices_im = np.arange(len(imaxis)) # index array
mesh_indices_real, mesh_indices_im = np.meshgrid(indices_real, indices_im) # Create a mesh version
zarray = complexarray.copy()
for irun in range(maxiterations):
if len(zarray) == 0:
break
zarray *= zarray # Application of the polinomial
zarray += complexarray
mask = np.abs(zarray) >= 2 # Mask all values that already escaped
escape[mesh_indices_im[mask], mesh_indices_real[mask]]=irun # Remembert the escape time
inv_mask = np.invert(mask) # Invert the mask to keep only values that have not escaped, yet
        mesh_indices_real = mesh_indices_real[inv_mask] # Filter the indices
mesh_indices_im = mesh_indices_im[inv_mask]
zarray = zarray[inv_mask] # Filter the z values
complexarray = complexarray[inv_mask] # Filter the starting values
return escape, realaxis, imaxis
```
```
xlim = (-2,1)
ylim = (-1,1)
xsteps = 1500
ysteps = 1000
maxiterations = 50
```
```
escape, realaxis, imaxis = mandelbrot_pro(xlim, ylim, xsteps, ysteps, maxiterations)
```
```
plt.imshow(escape, aspect='auto', origin='lower')
plt.xticks(np.arange(xsteps)[::100], ['%.2f' % x for x in realaxis[::100]])
plt.yticks(np.arange(ysteps)[::100], ['%.2f' % x.imag for x in imaxis[::100]])
plt.xlabel('real axis')
plt.ylabel('imaginary axis')
```
```
"""Short timing test"""
%timeit mandelbrot((-2,1), (-1,1), 45, 30, 10)
%timeit mandelbrot_pro((-2,1), (-1,1), 45, 30, 10)
```
10 loops, best of 3: 41.9 ms per loop
1000 loops, best of 3: 673 µs per loop
# Exercise 2 - Numerical Integration of a Neuron Model
We are going to simulate our first neuron model with Euler integration ([Wikipedia](http://en.wikipedia.org/wiki/Euler_method)).
Basically, our neuron model consists of a single differential equation that describes the development of the membrane voltage over time. For now let's assume this equation is an arbitrary function $f$:
\begin{align}
\frac{dV}{dt} = f(V)
\end{align}
To obtain the voltage as a function of time, i.e. $V(t)$, we have to solve the differential equation. Lucky for you, we are in the computer practical and not the analytical tutorial. We are going to solve it numerically, so there's no need for a complicated Ansatz :-)
As said before, we will use simple Euler integration. Accordingly, if we assume discretized time we can easily compute $V(t+1)$, the membrane voltage of the next time step, as long as we know the previous voltage $V(t)$:
\begin{align}
V(t+1) = V(t) + f(V(t)) \cdot dt
\end{align}
with $dt$ the size of the discretized timesteps.
If we start with a chosen initial value of $V(0)$ we can iteratively solve the differential equation.
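To make the scheme concrete before we turn to the neuron model, here is a minimal Euler sketch for the simple decay equation $\frac{dV}{dt} = -V$ (illustration only; the function name `euler_decay` and its parameters are made up for this example):
```
import numpy as np

def euler_decay(V0=1.0, duration=5.0, dt=0.01):
    """Minimal Euler integration sketch (illustration only, not the exercise solution)."""
    steps = int(duration / dt)
    V = np.zeros(steps + 1)
    V[0] = V0  # initial condition V(0)
    for step in range(1, steps + 1):
        dV = -V[step - 1]  # f(V(t)) = -V(t)
        V[step] = V[step - 1] + dV * dt  # Euler update: V(t+1) = V(t) + f(V(t)) * dt
    return V

V_trace = euler_decay()
print(V_trace[-1], np.exp(-5.0))  # Euler estimate vs. exact solution exp(-t) at t = 5
```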
By this method we can simulate very complex neuron models. Let's simulate a rescaled version of the exponential integrate and fire neuron ([Scholarpedia](http://www.scholarpedia.org/article/Adaptive_exponential_integrate-and-fire_model)):
\begin{align}
\frac{dV}{dt} = -V + \exp(V) + I
\end{align}
$I$ describes a fixed input current. We will simulate several neurons fed with different current values.
For some values of $I$ the membrane potential $V$ will rise to infinity; this corresponds to the upstroke of an action potential (you remember action potentials from a neurobio course, right?). However, our neuron model cannot recover from this upstroke by itself. For a smooth recovery we would need a second differential equation. Instead, we will keep our model simple and add a so-called *reset rule*: whenever $V$ crosses a particular threshold, $V(t)\geq V_t$, we set it back to a reset value at the next timestep, $V(t+1)=V_r$.
# Task
* Write a function ``expIF_neuron(V, I, Vt, Vr, duration, dt)`` that simulates one or more exponential integrate-and-fire neurons.
* The parameter ``V`` can be a scalar or numpy array describing the initial conditions
* The parameter ``I`` can be a scalar or numpy array describing the input currents
* The parameter ``Vt`` is a scalar value defining the spiking threshold
* The parameter ``Vr`` is a scalar defining the reset value after threshold crossing
* The parameter ``duration`` gives the length of the simulation
* The parameter ``dt`` describes the stepsize of the Euler integration
* The function should return
* A 2D array of voltage traces, i.e. the simulated development of the membrane potential
* First dimension is the number of neurons, second dimension the voltage trace over time
* First entries in second dimension should contain the initial values
* A 1D array containing the discretized timesteps
* In case ``V`` and ``I`` are arrays, the function should be vectorized, i.e. there should only be a single loop over all timesteps, but no loop over all neurons!
* Simulate 5 neurons at once (do NOT call the function 5 times!) with 5 different input currents $I\in\{-3.0, -2.0, -1.0, 0.0, 1.0\}$.
* Choose the other parameters as
* $V_r=-1.0$
* $V_t=5.0$
* $duration=10.0$
* $dt = 0.01$
* Set the initial $V$ values to $V_r$
* Plot all 5 voltage traces in a single plot, add a legend and label the axes.
### Hints
* You can use the template provided below. This exercise can be solved in just a handful of lines ;-)
* To incorporate the reset rule you may try boolean indexing.
```
## The template: ##
def expIF_neuron(V=-1.0, I=0.0, Vt=5.0, Vr=-1.0, duration=10.0, dt=0.01):
"""Numerically integrates the expIF neuron membrane equation with the Euler-Method.
The neurons obey a reset rule, when the membrane potential crosses `Vt`
it is set back to `Vr`.
    :param V: array of initial membrane values (or a scalar)
    :param I: array of input currents (or scalar)
    :param Vt: spiking threshold (scalar)
    :param Vr: reset value after threshold crossing (scalar)
    :param duration: duration of experiment (scalar)
    :param dt: stepsize of Euler integration (scalar)
:return:
2D array of voltage time series, first dimension the neurons,
second dimension the voltage trace.
First entries contain the initial values.
1D array of simulation times
"""
steps = int(duration/dt) # Calculate the number of simulation steps
if isinstance(V, np.ndarray): # V can be scalar or an array, we need to check first
nneurons = len(V) # Infer the number of neurons from the length of the initial conditions
else:
nneurons = 1
V_series = np.zeros((nneurons, steps+1)) # Array that will contain the voltage traces
# 1st dim neurons, 2nd dim voltage traces
# i.e. V[2,10] would return the voltage of neuron #2 at the 10th timestep!
# Wee need steps+1 since the 0th entry should contain the initial conditions
V_series[:, 0] = V # Set initial conditions
times = np.zeros(steps+1) # Array of timesteps
for step in range(1, steps+1): # Loop starting from step 1 (0th contains initial conditions)
############# Your code ##############
# Manipulate V_series here to simulate the neuron model
# Iteratively compute f(V(t)) = -V(t) + exp(V(t)) + I and V(t+1) = V(t) + f(V(t)) * dt
# Do not introduce another for loop, try to think vectorized
# Try using boolean indexing to implement the threshold crossing and voltage reset
######### End of your code ###########
times[step] = times[step-1] + dt # You actually don't need the times explicitly, but returning them
# will make plotting easier
return V_series, times
```
```
def expIF_neuron(V=-1.0, I=0.0, Vt=5.0, Vr=-1.0, duration=10.0, dt=0.01):
"""Numerically integrates the expIF neuron membrane equation with the Euler-Method.
The neurons obey a reset rule, when the membrane potential crosses `Vt`
it is set back to `Vr`.
    :param V: array of initial membrane values (or a scalar)
    :param I: array of input currents (or scalar)
    :param Vt: spiking threshold (scalar)
    :param Vr: reset value after threshold crossing (scalar)
    :param duration: duration of experiment (scalar)
    :param dt: stepsize of Euler integration (scalar)
:return:
2D array of voltage time series, first dimension the neurons,
second dimension the voltage trace.
First entry contains the initial values.
1D array of simulation times
"""
steps = int(duration/dt) # Calculate the number of simulation steps
if isinstance(V, np.ndarray): # V can be scalar or array, we need to check first
nneurons = len(V)
else:
nneurons = 1
V_series = np.zeros((nneurons, steps+1)) # Array that will contain the voltage traces
# 1st dim neurons, 2nd dim voltage traces
# i.e. V[2,10] would return the voltage of neuron 3 at the 11th timestep!
# Wee need steps+1 since the 0th entry should contain the initial conditions
V_series[:, 0] = V # Set initial conditions
times = np.zeros(steps+1) # Array of timesteps
for step in range(1, steps+1): # Loop starting from step 1 (0th contains initial conditions)
prev_V = V_series[:, step-1]
dV = -prev_V + np.exp(prev_V) + I
next_V = prev_V + dt * dV # Euler step
next_V[prev_V>=Vt] = Vr # Voltage reset
        prev_V[prev_V>=Vt] = Vt # For nicer plotting, also clip the previous step to the threshold
V_series[:, step] = next_V
times[step] = times[step-1] + dt # You actually don't need the times explicitly, but returning them
# will make plotting easier
return V_series, times
```
```
nneurons = 5
Vt = 5.0
Vr = -1.0
dt = 0.01
duration = 10.0
I = np.linspace(-3.,1.0, nneurons)
V = np.ones(nneurons)*-1.0
```
```
V_series, times = expIF_neuron(V, I, Vt, Vr, duration, dt)
```
```
for neuron in range(nneurons):
plt.plot(times, V_series[neuron,:], linewidth=2, label='I=%.1f' % (I[neuron]))
ax = plt.gca()
ax.ticklabel_format(useOffset=False) # prevents strange units
plt.xlabel('time')
plt.ylabel('V')
plt.legend()
```
```
```
# Tutorial
We will solve the following problem using a computer to assist with the technical aspects:
```{admonition} Problem
Consider the function $f(x)= \frac{24 x \left(a - 4 x\right) + 2 \left(a - 8 x\right) \left(b - 4 x\right)}{\left(b - 4 x\right)^{4}}$
1. Given that $\frac{df}{dx}|_{x=0}=0$, $\frac{d^2f}{dx^2}|_{x=0}=-1$ and that $b>0$ find the values of $a$ and $b$.
2. For the specific values of $a$ and $b$ find:
1. $\lim_{x\to 0}f(x)$;
2. $\lim_{x\to \infty}f(x)$;
3. $\int f(x) dx$;
4. $\int_{5}^{20} f(x) dx$.
```
Sympy is once again the library we will use for this.
We will start by creating a variable `expression` that has the value of the expression of $f(x)$:
```python
import sympy as sym
x = sym.Symbol("x")
a = sym.Symbol("a")
b = sym.Symbol("b")
expression = (24 * x * (a - 4 * x) + 2 * (a - 8 * x) * (b - 4 * x)) / ((b - 4 * x) ** 4)
expression
```
$\displaystyle \frac{24 x \left(a - 4 x\right) + \left(2 a - 16 x\right) \left(b - 4 x\right)}{\left(b - 4 x\right)^{4}}$
Now we will use `sympy.diff` to calculate the derivative. This tool takes two inputs:
- the first is the expression we are differentiating. Essentially this is the numerator of $\frac{df}{dx}$.
- the second is the variable we are differentiating with respect to. Essentially this is the denominator of $\frac{df}{dx}$.
```{attention}
We have imported `import sympy as sym` so we are going to write `sym.diff`:
```
```python
derivative = sym.diff(expression, x)
derivative
```
$\displaystyle \frac{16 a - 16 b - 64 x}{\left(b - 4 x\right)^{4}} + \frac{16 \left(24 x \left(a - 4 x\right) + \left(2 a - 16 x\right) \left(b - 4 x\right)\right)}{\left(b - 4 x\right)^{5}}$
Let us factorise that to make it slightly clearer:
```python
sym.factor(derivative)
```
$\displaystyle \frac{16 \left(- 3 a b - 12 a x + b^{2} + 16 b x + 16 x^{2}\right)}{\left(- b + 4 x\right)^{5}}$
We will now create the first equation, which is obtained by substituting $x=0$
into the value of the derivative and equating that to $0$:
```python
first_equation = sym.Eq(derivative.subs({x: 0}), 0)
first_equation
```
$\displaystyle \frac{32 a}{b^{4}} + \frac{16 a - 16 b}{b^{4}} = 0$
We will factor that equation:
```python
sym.factor(first_equation)
```
$\displaystyle \frac{16 \left(3 a - b\right)}{b^{4}} = 0$
Now we are going to create the second equation, substituting $x=0$ into the
value of the second derivative. We calculate the second derivative by passing a
third (optional) input to `sym.diff`:
```python
second_derivative = sym.diff(expression, x, 2)
second_derivative
```
$\displaystyle \frac{64 \left(-1 - \frac{8 \left(- a + b + 4 x\right)}{b - 4 x} + \frac{10 \left(12 x \left(a - 4 x\right) + \left(a - 8 x\right) \left(b - 4 x\right)\right)}{\left(b - 4 x\right)^{2}}\right)}{\left(b - 4 x\right)^{4}}$
We equate this expression to $-1$:
```python
second_equation = sym.Eq(second_derivative.subs({x: 0}), -1)
second_equation
```
$\displaystyle \frac{64 \left(\frac{10 a}{b} - 1 - \frac{8 \left(- a + b\right)}{b}\right)}{b^{4}} = -1$
Now to solve the first equation to obtain a value for $a$:
```python
sym.solveset(first_equation, a)
```
$\displaystyle \left\{\frac{b}{3}\right\}$
Now to substitute that value for $a$ and solve the second equation for $b$:
```python
second_equation = second_equation.subs({a: b / 3})
second_equation
```
$\displaystyle - \frac{192}{b^{4}} = -1$
```python
sym.solveset(second_equation, b)
```
$\displaystyle \left\{- 2 \sqrt{2} \sqrt[4]{3}, 2 \sqrt{2} \sqrt[4]{3}, - 2 \sqrt{2} \sqrt[4]{3} i, 2 \sqrt{2} \sqrt[4]{3} i\right\}$
Recalling the question we know that $b>0$ thus: $b = 2\sqrt{2}\sqrt[4]{3}$ and
$a=\frac{2\sqrt{2}\sqrt[4]{3}}{3}$.
We will substitute these values back and finish the question:
```python
expression = expression.subs(
{a: 2 * sym.sqrt(2) * sym.root(3, 4) / 3, b: 2 * sym.sqrt(2) * sym.root(3, 4)}
)
expression
```
$\displaystyle \frac{24 x \left(- 4 x + \frac{2 \sqrt{2} \sqrt[4]{3}}{3}\right) + \left(- 16 x + \frac{4 \sqrt{2} \sqrt[4]{3}}{3}\right) \left(- 4 x + 2 \sqrt{2} \sqrt[4]{3}\right)}{\left(- 4 x + 2 \sqrt{2} \sqrt[4]{3}\right)^{4}}$
```{attention}
We are using the `sym.root` command for the generic $n$th root.
```
We can confirm our findings:
```python
sym.diff(expression, x).subs({x: 0})
```
$\displaystyle 0$
```python
sym.diff(expression, x, 2).subs({x: 0})
```
$\displaystyle -1$
Now we will calculate the limits using `sym.limit`, this takes 3 inputs:
- The expression we are taking the limit of.
- The variable that is changing.
- The value that the variable is tending towards.
```python
sym.limit(expression, x, 0)
```
$\displaystyle \frac{\sqrt{3}}{36}$
```python
sym.limit(expression, x, sym.oo)
```
$\displaystyle 0$
Now we are going to calculate the **indefinite** integral using
`sympy.integrate`. This tool takes 2 inputs:
- the first is the expression we're integrating. This is the $f$ in $\int_a^b f
dx$.
- the second is the remaining information needed to calculate the integral: $x$.
```python
sym.factor(sym.integrate(expression, x))
```
$\displaystyle \frac{x \left(6 x - \sqrt{2} \sqrt[4]{3}\right)}{12 \left(4 x^{3} - 6 \sqrt{2} \sqrt[4]{3} x^{2} + 6 \sqrt{3} x - \sqrt{2} \cdot 3^{\frac{3}{4}}\right)}$
If we want to calculate a **definite** integral then instead of passing the
single variable we pass a tuple which contains the variable as well as the
bounds of integration:
```python
sym.factor(sym.integrate(expression, (x, 5, 20)))
```
$\displaystyle - \frac{5 \left(- 5000 \sqrt{2} \sqrt[4]{3} - 1200 \sqrt{3} + 75 \sqrt{2} \cdot 3^{\frac{3}{4}} + 119997\right)}{2 \left(-32000 - 120 \sqrt{3} + \sqrt{2} \cdot 3^{\frac{3}{4}} + 2400 \sqrt{2} \sqrt[4]{3}\right) \left(-500 - 30 \sqrt{3} + \sqrt{2} \cdot 3^{\frac{3}{4}} + 150 \sqrt{2} \sqrt[4]{3}\right)}$
# SymPy Basics
Adapted from: https://github.com/sympy/sympy/wiki/Quick-examples
```python
from sympy import *
from IPython.display import display
init_printing(order="lex",use_latex='mathjax')
```
# Symbolic Expressions and Calculations
```python
x, y, z, t = symbols('x y z t')
k, m, n = symbols('k m n', integer=True)
#f, g, h = map(Function, 'fgh')
```
```python
eqn = Rational(3,2)*pi + exp(I*x) / (x**2 + y)
eqn
```
$$\frac{3 \pi}{2} + \frac{e^{i x}}{x^{2} + y}$$
```python
eqn.subs(x,3)
```
$$\frac{3 \pi}{2} + \frac{e^{3 i}}{y + 9}$$
```python
exp(I*x).subs(x,pi).evalf()
```
$$-1.0$$
```python
expr = x + 2*y
expr.args
```
$$\left ( x, \quad 2 y\right )$$
```python
exp(pi * sqrt(163)).evalf(50)
```
$$262537412640768743.99999999999925007259719818568888$$
```python
N(pi,100)
```
$$3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117068$$
```python
latex(S(eqn,evaluate=False))
```
'\\frac{3 \\pi}{2} + \\frac{e^{i x}}{x^{2} + y}'
$$ \frac{3 \pi}{2} + \frac{e^{i x}}{x^{2} + y}$$
## Algebra
```python
((x+y)**2 * (x+1)).expand()
```
$$x^{3} + 2 x^{2} y + x^{2} + x y^{2} + 2 x y + y^{2}$$
```python
a = 1/x + (x*sin(x) - 1)/x
a
```
$$\frac{1}{x} \left(x \sin{\left (x \right )} - 1\right) + \frac{1}{x}$$
```python
a.simplify()
```
$$\sin{\left (x \right )}$$
```python
eqn = Eq(x**3 + 2*x**2 + 4*x + 8, 0)
eqn
```
$$x^{3} + 2 x^{2} + 4 x + 8 = 0$$
```python
solve(eqn,x)
```
$$\left [ -2, \quad - 2 i, \quad 2 i\right ]$$
```python
eq1 = Eq(x + 5*y, 2)
eq2 = Eq(-3*x + 6*y, 15)
display(eq1)
display(eq2)
sln = solve([eq1, eq2], [x, y])
sln
```
$$x + 5 y = 2$$
$$- 3 x + 6 y = 15$$
$$\left \{ x : -3, \quad y : 1\right \}$$
```python
display(eq1.subs(sln))
display(eq2.subs(sln))
```
$$\mathrm{True}$$
$$\mathrm{True}$$
## Recurrence Relations
$$
\large\begin{align}
y_0 & =1 \\
y_1 & =4 \\
y_n & =2y_{n-1}+5y_{n-2}
\end{align}
$$
```python
y = Function('y')  # y must be an undefined function for rsolve, not the plain symbol defined earlier
f = y(n) - 2*y(n-1) - 5*y(n-2)
f
```
$$y{\left (n \right )} - 5 y{\left (n - 2 \right )} - 2 y{\left (n - 1 \right )}$$
```python
sln = rsolve(f,y(n),[1,4])
sln
```
$$\left(\frac{1}{2} + \frac{\sqrt{6}}{4}\right) \left(1 + \sqrt{6}\right)^{n} + \left(- \sqrt{6} + 1\right)^{n} \left(- \frac{\sqrt{6}}{4} + \frac{1}{2}\right)$$
```python
for i in range(0,10):
print(sln.subs(n,i).simplify())
```
1
4
13
46
157
544
1873
6466
22297
76924
## Sums and Products
```python
a, b = symbols('a b')
s = Sum(6*n**2 + 2**n, (n, a, b))
s
```
$$\sum_{n=a}^{b} \left(2^{n} + 6 n^{2}\right)$$
```python
s.doit()
```
$$- 2^{a} + 2^{b + 1} - 2 a^{3} + 3 a^{2} - a + 2 b^{3} + 3 b^{2} + b$$
```python
s.subs({b:3,a:1}).doit()
```
$$98$$
```python
Sum(b, (b, 1, n)).doit().factor()
```
$$\frac{n}{2} \left(n + 1\right)$$
```python
Sum(n*(n+1)/2,(n, 1, b)).doit()
```
$$\frac{b^{3}}{6} + \frac{b^{2}}{2} + \frac{b}{3}$$
```python
for i in range(1,10):
print(Sum(n*(n+1)/2, (n, 1, b)).doit().subs(b,i))
```
1
4
10
20
35
56
84
120
165
```python
Sum(n, (n, a, b)).subs(a,1).doit()
```
$$\frac{b^{2}}{2} + \frac{b}{2}$$
```python
(x**3/6 + x**2/2 +x/3).factor()
```
$$\frac{x}{6} \left(x + 1\right) \left(x + 2\right)$$
```python
product(n*(n+1), (n, 1, b))
```
$${2}^{\left(b\right)} b!$$
```python
f=Function('f')
ex=Eq(f(1/x)-3*f(x),x)
```
## Calculus
$$\lim_{x\to 0} \frac{\sin x - x}{x^3} = -\frac{1}{6}$$
```python
((sin(x)-x)/x**3).limit(x,0)
```
$$- \frac{1}{6}$$
```python
(x**2+5*x**3).diff(x)
```
$$15 x^{2} + 2 x$$
```python
(-x).limit(x,oo)
```
$$-\infty$$
$$\int x^2 \cos x \ dx$$
```python
(x**2 * cos(x)).integrate(x)
```
$$x^{2} \sin{\left (x \right )} + 2 x \cos{\left (x \right )} - 2 \sin{\left (x \right )}$$
$$\int_0^{\pi/2} x^2 \cos x \ dx$$
```python
integrate(x**2 * cos(x), (x, 0, pi/2))
##(x**2 * cos(x)).integrate(x, 0, pi/2) does not work.
```
$$-2 + \frac{\pi^{2}}{4}$$
$$ \large f''(x) + 9 f(x) = 1 $$
```python
fn = dsolve(Eq(Derivative(f(x),x,x) + 9*f(x), 1), f(x))
fn
```
$$f{\left (x \right )} = C_{1} \sin{\left (3 x \right )} + C_{2} \cos{\left (3 x \right )} + \frac{1}{9}$$
```python
fla = 3*sin(3*x)+3*cos(3*x)+1/9
fla.diff(x).diff(x).subs(x,3)+9*fla.subs(x,3)
```
$$1.0$$
## Linear Algebra
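A minimal sketch of SymPy's matrix support, using the standard `Matrix` class (the specific examples below are assumptions, not taken from the course material):
```python
# Assumed examples of basic Matrix operations
M = Matrix([[1, 2], [3, 4]])
display(M.det())  # determinant: -2
display(M.inv())  # matrix inverse
display(M.eigenvals())  # eigenvalues with their multiplicities
display(M * Matrix([x, y]))  # symbolic matrix-vector product
```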
# sympy!
Logo is from the [SymPy webpage](https://www.sympy.org/en/index.html)
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import sympy
```
```python
#seems optional; can make some of the output look better
sympy.init_printing(use_unicode=True)
```
From the [SymPy webpage](https://www.sympy.org/en/index.html):
> SymPy is a Python library for symbolic mathematics. It aims to become a full-featured computer algebra system (CAS) while keeping the code as simple as possible in order to be comprehensible and easily extensible. SymPy is written entirely in Python.
From the TAMU [Math page](https://www.math.tamu.edu/courses/math151/):
>### MATH 151 - Engineering Mathematics I - Fall 2021
>##### Credits 4. 3 Lecture Hours. 2 Lab Hours.
>
>(MATH 2413) Engineering Mathematics I. Rectangular coordinates, vectors, analytic geometry, functions, limits, derivatives of functions, applications, integration, computer algebra. MATH 171 designed to be a more demanding version of this course. Only one of the following will satisfy the requirements for a degree: MATH 131, MATH 142, MATH 147, MATH 151 and MATH 171.
>**Prerequisite**: MATH 150 or equivalent or acceptable score on TAMU Math Placement Exam; also taught at Galveston and Qatar campuses.
There are some helpful documents on the pages for that course. Take a look at these documents to familiarize yourself with what is going to covered and **how** they do it.
Here is a lab example from the course: https://www.math.tamu.edu/courses/math151/Python/Math151Lab1b.pdf
```python
s0 = 400
v0 = -16
t = 4
"st = (g/2)t^2 + v0t + s0"
g = -32
st = (g/2) * (t**2) + v0*t + s0
```
```python
print(st)
```
80.0
### Make it a function
We're going to create a function that will save us from having to type in the formula each time. <br>
*Note: if our equation/formula changes, we will have to redo the function too*
```python
def question1(s0, v0, t, g=-32):
"""Calculate the height of the ball"""
st = (g/2) * (t**2) + v0*t + s0
return st
```
Now we can test out our function by calling it with different values
```python
question1(400, -16, 4)
```
```python
question1(10, 400, 25)
```
### Importing the entire package
The sympy documentation and the MATH 151 Lab worksheet both demonstrate importing the entire package into the current namespace. If this is one of the only packages you'll be using, it will work just fine. However, if you are using multiple packages and import them all into here like below, you *could* end up clobbering, or overwriting, a different function with the same name.
That sounds scary but nothing is permanent. If that happens, just restart the Python kernel.
```python
from sympy import *
```
The above method of importing is generally frowned upon by the Python community. While it is a valid line, it can overwrite functions/classes if two packages use the same name for a function.
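As a hypothetical illustration of that clobbering (not something we actually need to run here): both numpy and sympy export a name `sqrt`, and whichever wildcard import runs last wins:
```python
# Hypothetical example of name clobbering with wildcard imports
from numpy import *   # numpy's sqrt returns floats (and works on arrays)
print(sqrt(8))        # 2.8284271247461903
from sympy import *   # sympy's sqrt now shadows numpy's
print(sqrt(8))        # 2*sqrt(2), a symbolic result, not a float
```
This is one reason the explicit `import sympy` style used at the top of this notebook is generally the safer habit.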
### Creating symbols
sympy uses symbols (hence the name....)
Before we can use these symbols we have to create them first. We can call our symbols anything we want but it's usually best to stick with simple, obvious names.
```python
x = symbols('x')
```
So let's call ours what the assignment calls them
```python
s0 = symbols('s0')
v0 = symbols('v0')
g = symbols('g')
t = symbols('t')
```
We can also accomplish the above line in one single line:
```python
s0, v0, g, t = symbols('s0 v0 g t')
```
Now let's see what that did
```python
s0 + v0 + 10 # 'fake' equation just to see the symbols
```
Notice how the output above looks a little different than the one below here
```python
print(s0 + v0+10)
```
s0 + v0 + 10
Using the print statement removes any formatting that sympy has applied to the symbol objects we created.
### Creating equations/formulas
Let's use those symbols to create our equation from above. Sympy calls these expressions but I'll call them equations/formulas here.
```python
height = ((g/2) * t**2) + v0*t + s0
```
```python
height
```
This looks a little different from the equation on our sheet right? Is this one wrong?
No, it turns out that sympy took it upon itself to 'simplify' the equation and move *t<sup>2</sup>* to be with *g*
### Using the equation
Now we want to input values for the different variables in that equation and obtain a result.
To do this, we use the subs method
```python
height.subs([(s0, 400), [v0, -16], [t,4], [g, -32]])
```
Important to note that our original equation is unmodified. The substitution that we did was temporary.
```python
height # remains unmodified
```
```python
# now with different values
height.subs([(s0, 10), [v0, 400], [t,25], [g, -32]])
```
We can also use loops to input values so that we don't need to manually type everything in, every single time we want to change the values.
```python
# create a list of lists with values for each variable
values = [
[400, -16, 4, -32],
[10, 400, 25, -32]
]
```
```python
for inputs in values:
answer = height.subs([(s0, inputs[0]), [v0, inputs[1]], [t,inputs[2]], [g, inputs[3]]])
print(answer)
```
80
10
We can use list comprehensions to complete the same process. List comprehensions can sometimes be faster and clearer (and sometimes NOT!)
```python
[height.subs([(s0, inputs[0]), [v0, inputs[1]], [t,inputs[2]], [g, inputs[3]]])
for inputs in values]
```
While the above line works and displays the results, it does **not** save the results and so we cannot do anything with them later. To *save* the results, we need to catch the returned results.
```python
answers = [height.subs([(s0, inputs[0]), [v0, inputs[1]], [t,inputs[2]], [g, inputs[3]]])
for inputs in values]
```
```python
answers
```
```python
# get the first answer
answers[0]
```
What about the next function?
sin(e<sup>x</sup>)
Let's try a different way to 'encode' our equation without typing it out explicitly using symbols.
To do this we'll use the sympify function (get it?)
<code> sympy.sympify('my_equation') </code>
```python
function = sympy.sympify('x+1')
```
```python
function
```
Great! Now let's code in our next one
```python
function = sympy.sympify('sin(e**x)')
```
```python
function
```
Looks right. Let's try it:
```python
a = (0, function.subs(x, 0).evalf())
```
```python
a
```
That worked. Let's try another number for x
```python
# this way may sometimes not work with multiple variables
b = (1, function.subs(x,1).evalf())
print(b)
```
(1, sin(e))
What happened here? Why didn't it work?
It turns out that when we typed `e` into our equation, it was interpreted as being a variable (symbol) called e. The code has no idea that `e` has a special meaning.
How do we fix this? Use the exponential function instead
```python
function = sympy.sympify('sin(exp(x))')
function
```
Though it looks the same, it's 'different' under the hood.
Let's try our substitution from earlier
```python
# this way may sometimes not work with multiple variables
b = (1, function.subs(x,1).evalf())
print(b)
```
(1, 0.410781290502909)
What's that `.evalf()` bit?
Sometimes you will need to tell sympy to run the calculation.
SymPy doesn't 'complete' the calculation of sin(x) on its own because that would produce a floating point (decimal) number that is cut off, and therefore not *exactly* equal to sin(x). To avoid this, sympy keeps the result as sin(x) unless we tell it to evaluate it.
If we just substituted it as we did earlier:
```python
function.subs(x,1)
```
We still have e in our result. To get a number we have to evaluate the function:
```python
# this way may sometimes not work with multiple variables
b = (1, function.subs(x,1).evalf())
print(b)
```
(1, 0.410781290502909)
```python
# sympy documentation says this is the way to structure it
b = (1, function.evalf(subs={'x':1}))
print(b)
```
(1, 0.410781290502909)
What if you have multiple variables? That curly bracket in the `subs=` part is a dictionary. We can continue to add values for different variables.
```python
big_eq = sympy.sympify('sin(a)+b+c')
big_eq
```
```python
big_eq.subs([(a,1),(b,2),(c,3)])
```
What happened here? Turns out we never defined a, b, and c to be symbols. I honestly thought the sympify function did that for us behind the scenes...
We can just use strings instead:
```python
big_eq.subs([('a',1),('b',2),('c',3)])
```
But notice how it maintains the result as sin(1) instead of a decimal number? That's where evalf comes in.
```python
big_eq.evalf(subs={'a':1,'b':2,'c':3})
```
Plus it looks a little cleaner. We could even make it easier on our eyes by doing this equivalent thing:
```python
big_eq.evalf(subs={'a':1,
'b':2,
'c':3 })
```
More examples:
```python
b = (1, function.evalf(subs={x:sympy.log(sympy.pi/2.)}))
print(b)
```
(1, 1.00000000000000)
```python
sympy.log(100,10) #note this function uses the natural log by default
```
### Plotting
We can also plot functions using sympy and it ends up being easier to do and nicer looking without much effort.
```python
sympy.plot(sympy.sympify('x**3-25*x**2+3'), (x, -10, 10), ylim=(-5,10))
```
```python
made_up = x**3-25*x**2+3
```
```python
made_up
```
```python
made_up.subs(x, 0)
```
Let's go through a list of x values to obtain the y values and then we can plot those.
The `range()` function below will create our list of values for us.
```python
output = []
for value in range(-10, 11):
output.append(made_up.subs(x, value))
```
```python
print(output
)
```
[-3497, -2751, -2109, -1565, -1113, -747, -461, -249, -105, -23, 3, -21, -89, -195, -333, -497, -681, -879, -1085, -1293, -1497]
```python
# same thing but in a list comprehension instead
output = [made_up.subs(x, value) for value in range(-10, 11)]
```
```python
print(output) #output should be the same
```
[-3497, -2751, -2109, -1565, -1113, -747, -461, -249, -105, -23, 3, -21, -89, -195, -333, -497, -681, -879, -1085, -1293, -1497]
Now we can plot it
```python
plt.plot(output)
```
Why doesn't it look the same?
1. We didn't provide any x values so it added its own
2. We didn't give it any kind of boundaries so it plotted what it needed to
We can make it look similar.
```python
plt.plot(range(-10, 11), output)
plt.xlim(-10, 10)
plt.ylim(-5,10)
plt.grid()
```
But it's not exactly the same. We could go through the process of trying to recreate it OR we can simply use what sympy has provided.
Another reason to use the sympy version: it evaluates the function at more x values. What do I mean?
Look closely:
```python
# sympy
sympy.plot(sympy.sympify('x**3-25*x**2+3'), (x, -10, 10), ylim=(-5,10))
#ours
plt.figure(figsize=(7,5))
plt.plot(range(-10, 11), output)
plt.xlim(-11, 11)
plt.xticks(np.arange(-10, 11, 2.5))
plt.ylim(-5,10)
plt.grid()
```
Now zoom in near 3
```python
# sympy
sympy.plot(sympy.sympify('x**3-25*x**2+3'), (x, -1, 1), ylim=(1,4))
#ours
plt.figure(figsize=(7,5))
plt.plot(range(-10, 11), output)
plt.xlim(-1, 1)
plt.ylim(1,4)
plt.grid()
```
Notice how our data are 'pointed' and not a nice smooth curve? It's because we only evaluated our function at integer values.
### Expanding and factoring
```python
sympy.factor(made_up)
```
```python
test2 = (x+3)*(x**2-4)
```
```python
test2
```
```python
test2_expanded = sympy.expand(test2)
test2_expanded
```
```python
sympy.factor(test2_expanded)
```
### Points and Lines
Create a couple of instances of points
```python
p = sympy.Point(4,-2)
q = sympy.Point(-1,3)
```
Now let's create a line from those two points
```python
l = sympy.Line(p, q)
```
And now we can get things from this line:
```python
l.slope
```
How about angles between lines?
```python
p = sympy.Point(1,0)
q = sympy.Point(0,1)
origin = sympy.Point(0,0)
```
```python
plt.scatter([1,0,0], [0, 1, 0], marker='o')
plt.quiver([0,0], [0,0], [1,0], [0, 1], scale=2)
plt.annotate('First', (0.1, .01))
```
```python
first = sympy.Line(origin, p)
second = sympy.Line(origin, q)
```
```python
first.slope
```
```python
second.slope
```
Angle between our two lines
```python
first.angle_between(second)
```
Output is in radians, let's convert to degrees
```python
float(first.angle_between(second))
```
```python
np.rad2deg(float(first.angle_between(second)))
```
```python
second.intersection(l)
```
```python
l
```
```python
l.intersect(first)
```
```python
sympy.solve(sympy.sympify(['-1*x+y-3', '-1*x+y+2']).subs('x', (4,-1)), 'y')
```
```python
sympy.plot('-1*x + 2', (x, -5, 5))
```
```python
sympy.sympify(['-1*x+y-3', '-1*x+y+2'])
```
```python
```
# Base rate fallacy example
In this notenook we work an example of the base rate fallacy using Bayes Theorem.
Assume that we have two random variables $HasDisease$ and $FailsTest$. $HasDisease=y$ indicates that a person has the disease while $HasDisease=n$ indicates that the person in disease free. In addition, we have a test which attempts to detect the disease. $FailsTest=y$ indicates that our test says a person hasthe disease while $FailsTest=n$ indicates that our test says a person does not have the disease.
In this notebook you can play around with the probabilities of interest and see now likely it is that, given you fail the test, that you actually have the disease.
Suppose we know the following probabilities:
\begin{align}
Pr(FailsTest=y | HasDisease=y) &= FailAndHasDisease \\
Pr(FailsTest=n | HasDisease=y) &= NotFailAndHasDisease \\
Pr(FailsTest=y | HasDisease=n) &= FailAndNotHasDisease \\
Pr(FailsTest=n | HasDisease=n) &= NotFailAndNotHasDisease \\
\end{align}
And we know the prior probability of the disease in the population
$$
Pr(HasDisease=y).
$$
Note, the point of the base rate fallacy is that you need all <i>5</i> probabilities to compute what you are interested in, namely <i>the probability you have the disease given you fail the test</i>, denoted
$$
Pr(HasDisease=y | FailsTest=y).
$$
Without, $Pr(HasDisease=y)$ you cannot truly understand $Pr(HasDisease=y | FailsTest=y)$.
You can play around with the numbers in the next cell to see how things work out.
```python
FailAndHasDisease = 1.0
NotFailAndHasDisease = 0.0
FailAndNotHasDisease = 0.01
NotFailAndNotHasDisease = 0.99
HasDisease = 1./1000
```
Bayes theorem says that
$$
Pr(HasDisease=y | FailsTest=y) = \frac{Pr(FailsTest=y | HasDisease=y) Pr(HasDisease=y)}{Pr(FailsTest=y)}
$$
Our table gives us the two terms in the numerator; we get the denominator from the law of total probability.
\begin{align}
Pr(FailsTest=y) & = Pr(FailsTest=y | HasDisease=y) Pr(HasDisease=y) + Pr(FailsTest=y | HasDisease=n) Pr(HasDisease=n) \\
& = Pr(FailsTest=y | HasDisease=y) Pr(HasDisease=y) + Pr(FailsTest=y | HasDisease=n) (1- Pr(HasDisease=y))
\end{align}
So, the whole thing is
$$
Pr(HasDisease=y | FailsTest=y) = \frac{Pr(FailsTest=y | HasDisease=y) Pr(HasDisease=y)}{(Pr(FailsTest=y | HasDisease=y) Pr(HasDisease=y) + Pr(FailsTest=y | HasDisease=n) (1- Pr(HasDisease=y)))}
$$
```python
FailAndHasDisease*HasDisease/(FailAndHasDisease*HasDisease + FailAndNotHasDisease*(1-HasDisease))
```
0.09099181073703368
This matches the result we did by hand in class. Play around with the probabilities and see what you discover.
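One convenient way to do that is to wrap the formula in a small helper function; the sketch below is just for experimenting (the function and argument names are not from the class material):
```python
def prob_disease_given_fail(p_fail_given_disease, p_fail_given_healthy, p_disease):
    """Pr(HasDisease=y | FailsTest=y) via Bayes theorem and the law of total probability."""
    numerator = p_fail_given_disease * p_disease
    denominator = numerator + p_fail_given_healthy * (1 - p_disease)
    return numerator / denominator

# The same numbers as above
print(prob_disease_given_fail(1.0, 0.01, 1./1000))
# A rarer disease makes a failed test even less conclusive
print(prob_disease_given_fail(1.0, 0.01, 1./10000))
```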
You'll need a few packages for this to work. You can install everything you need by running these two commands:
conda install astropy numpy h5py matplotlib tqdm
pip install astro-gala pyia
```python
import sys
from os import path
import warnings
# Third-party
from astropy.table import Table
from astropy.io import fits
import astropy.coordinates as coord
import astropy.units as u
import h5py
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
from tqdm import tqdm
import gala.dynamics as gd
import gala.integrate as gi
import gala.potential as gp
from gala.units import galactic
from pyia import GaiaData
```
/Users/adrian/anaconda/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
We'll use a simple model for the Milky Way potential that is implemented in `Gala` that contains components for the Galactic disk, bulge, nucleus, and dark matter halo:
```python
mw = gp.MilkyWayPotential()
H = gp.Hamiltonian(mw)
print(mw.keys())
```
odict_keys(['disk', 'bulge', 'nucleus', 'halo'])
We'll need to transform observed positions and velocities to the Galactocentric rest frame. To do that, we have to make assumptions about the solar velocity and position. We'll assume the Sun is at:
$$
\begin{align}
\boldsymbol{r}_{\odot} &= (-8, 0, 0)~{\rm kpc}\\
\boldsymbol{v}_{\odot} &= (11.1, 232.24, 7.25)~{\rm km\,s^{-1}}
\end{align}
$$
(but feel free to play with the definitions if you prefer other values).
```python
rsun = 8 * u.kpc
vsun = [11.1, 232.24, 7.25] * u.km/u.s
```
```python
gc_frame = coord.Galactocentric(galcen_distance=rsun,
galcen_v_sun=coord.CartesianDifferential(*vsun))
```
We next need to load some Gaia data. For now, I'll load a mock (simulated) dataset with DR2-like uncertainties for all stars within 200 pc of the Sun:
```python
g = GaiaData('/Users/adrian/data/GaiaDR2-mock/Gaia-DR2-mock-200pc.fits')
```
We can use this object to get an Astropy sky coordinate object, which has the sky positions, distance (parallax), proper motions, and radial velocity automatically filled:
```python
c = g.skycoord
c
```
<SkyCoord (ICRS): (ra, dec, distance) in (deg, deg, pc)
[(315.07577633, 35.40392509, 197.99079895),
(314.88167156, 35.3907023 , 123.34000397),
(314.86214545, 35.39305834, 148.10754395), ...,
( 45.02060181, -35.32089102, 187.71482849),
( 44.93473399, -35.33031997, 172.88237 ),
( 44.94769253, -35.28921706, 129.5793457 )]
(pm_ra_cosdec, pm_dec, radial_velocity) in (mas / yr, mas / yr, km / s)
[( -8.97676, -44.3531 , -18.882 ), ( 20.8935 , -68.4191 , -11.6318 ),
( 15.4162 , -1.7871 , -6.852 ), ...,
(-58.3823 , -29.8226 , -5.77041), ( -0.30213, 10.4613 , 21.0896 ),
( 86.0874 , -4.13801, 26.4174 )]>
**Note: not all Gaia DR2 stars will have radial velocities, so you may have to filter out those stars here**
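For the real DR2 catalog, one possible way to do that filtering, assuming the `GaiaData` object exposes a `radial_velocity` attribute and supports boolean-mask indexing (as pyia objects generally do), is:
```python
# Possible radial-velocity filter for real DR2 data
# (the attribute name and mask indexing are assumptions about the loaded catalog)
rv_mask = np.isfinite(g.radial_velocity)
g = g[rv_mask]
c = g.skycoord
```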
Next, we will transform these heliocentric positions/velocities into Galactocentric values, then pass them to a Gala class that will handle computing dynamical quantities from the Galactocentric Cartesian phase-space positions:
```python
w = gd.PhaseSpacePosition(c.transform_to(gc_frame).cartesian)
```
With this object, we can do things like compute the angular momentum:
```python
L = w.angular_momentum()
```
```python
L.shape
```
(3, 2799632)
Or, given a model for the gravitational potential, compute the energy of the stellar orbits:
```python
E = H.energy(w)
```
Let's now plot the energy vs. $z$-component of the angular momentum (index 2):
```python
L_unit = u.km/u.s * u.kpc
E_unit = u.km/u.s * u.kpc/u.Myr
fig, ax = plt.subplots(1, 1, figsize=(6, 5))
ax.plot(L[2].to(L_unit).value,
E.to(E_unit).value,
linestyle='none', marker=',', alpha=0.2)
ax.set_xlim(-3000, -500)
ax.set_ylim(-160, -100)
ax.set_xlabel('$L_z$ [{0:latex_inline}]'.format(L_unit))
ax.set_ylabel('$E$ [{0:latex_inline}]'.format(E_unit))
fig.tight_layout()
```
That's a lot of points! Let's instead make a histogram so we can look at the log-density of points:
```python
E_grid = np.linspace(-160, -100, 128)
Lz_grid = np.linspace(-3000, -500, 128)
H, xedg, yedg = np.histogram2d(L[2].to(L_unit).value, E.to(E_unit).value,
bins=(Lz_grid, E_grid))
```
```python
norm = mpl.colors.LogNorm(vmin=1e-1, vmax=1E5)
fig, ax = plt.subplots(1, 1, figsize=(6, 5))
ax.pcolormesh(xedg, yedg, H.T, norm=norm, cmap='Blues')
ax.set_xlabel('$L_z$ [{0:latex_inline}]'.format(L_unit))
ax.set_ylabel('$E$ [{0:latex_inline}]'.format(E_unit))
fig.tight_layout()
```
That's a very smooth distribution! The real Galaxy probably won't look like that...but we'll see!
We can also compute other quantities for these stars, like the actions. These are other integrals of motion that are useful because they are adiabatically invariant, and because they are conserved in a static potential (unlike the angular momentum components which can vary if the orbit is not planar and the potential is non-spherical). The problem is that, except in very simple potential models, computing the actions has to be done numerically. There are many algorithms out there for estimating actions (see papers by Jason Sanders, James Binney, Jo Bovy). Here, with Gala, we'll use a method that requires numerically integrating orbits in order to compute the actions. This makes it quite slow to run for millions of stars, but we can at least run for a subset of stars as a demo. In practice, you can parallelize this or run on batches (subsets) of stars.
### How do we choose an integration timestep?
One thing we have to choose when numerically estimating the actions is the timestep and length of orbit integration. We'll set the length to 5 Gyr (~20 complete orbits of a sun-like orbit around the Galaxy), and vary the timestep to see if the value of the actions converges:
```python
all_actions = []
dts = [0.1, 0.2, 0.4, 0.8, 1., 2., 4, 8] * u.Myr
with warnings.catch_warnings(record=True):
warnings.simplefilter("ignore")
for dt in tqdm(dts):
orbit = mw.integrate_orbit(w[0], dt=dt, t1=0*u.Gyr, t2=5*u.Gyr,
Integrator=gi.DOPRI853Integrator)
res = gd.actionangle.find_actions(orbit, N_max=8)
all_actions.append(res['actions'])
```
100%|██████████| 8/8 [00:15<00:00, 1.95s/it]
```python
act = u.Quantity(all_actions)
plt.figure(figsize=(6, 5))
for k in range(3):
plt.plot(dts[1:], np.abs((act[1:, k]-act[0, k])/act[0, k]),
label='$J_{0}$'.format(k+1))
plt.xscale('log')
plt.yscale('log')
plt.xlabel('timestep [Myr]')
plt.ylabel('fractional error')
plt.legend(loc='best', fontsize=16)
plt.tight_layout()
```
From this, it looks like we can set the timestep to 2 Myr and only suffer a fractional error of $10^{-5}$. How long does computing the actions for one orbit take?
```python
%%time
orbit = mw.integrate_orbit(w[0], dt=2*u.Myr, t1=0*u.Gyr, t2=5*u.Gyr,
Integrator=gi.DOPRI853Integrator)
res = gd.actionangle.find_actions(orbit, N_max=8)
```
CPU times: user 594 ms, sys: 28.5 ms, total: 623 ms
Wall time: 429 ms
/Users/adrian/projects/gala/build/lib.macosx-10.7-x86_64-3.6/gala/dynamics/actionangle.py:502: UserWarning: More unknowns than equations!
warnings.warn("More unknowns than equations!")
~0.5 seconds! Let's run on a subset (128) of the orbits:
```python
some_w = w[:128]
orbits = mw.integrate_orbit(some_w, dt=1*u.Myr, t1=0*u.Gyr, t2=4*u.Gyr,
Integrator=gi.DOPRI853Integrator)
with warnings.catch_warnings(record=True):
warnings.simplefilter("ignore")
all_actions = []
for orbit in tqdm(orbits.orbit_gen(), total=some_w.shape[0]):
res = gd.actionangle.find_actions(orbit, N_max=8)
all_actions.append(res['actions'])
all_actions = u.Quantity(all_actions)
```
100%|██████████| 128/128 [01:20<00:00, 1.59it/s]
This is a simulated data set, and I don't remember what kind of age-action relations were put in, but we generally expect the vertical action, $J_z$, to increase with stellar age. Let's see if that's the case for these simulated stars:
```python
plt.figure(figsize=(6, 5))
plt.scatter(g.age[:some_w.shape[0]],
all_actions[:, 2].to(L_unit))
plt.xscale('log')
plt.yscale('log')
plt.xlim(1E-2, 20)
plt.ylim(1E-4, 1e2)
plt.xlabel('age [Gyr]')
plt.ylabel('$J_z$ [{0:latex_inline}]'.format(L_unit))
```
Older stars definitely tend to have larger values of vertical action! Another way to look at this is by looking at the maximum height a star reaches above the plane, $|z_{\rm max}|$:
```python
plt.figure(figsize=(6, 5))
plt.scatter(g.age[:some_w.shape[0]],
orbits.zmax(approximate=True).to(u.pc).value)
plt.xscale('log')
plt.yscale('log')
plt.xlim(1E-2, 20)
plt.ylim(10, 2e3)
plt.xlabel('age [Gyr]')
plt.ylabel(r'$\left|z_{\rm max}\right|$ ' + '[{0:latex_inline}]'.format(u.pc))
```
```python
```
# Week 7 - GAN Part 3 and Evaluation Metrics Notebook
In this notebook, we will solve questions on GANs and evaluation metrics of generative models in general.
- This notebook is prepared using PyTorch. However, you can use any Python package you want to implement the necessary functions in questions.
- If the question asks you to implement a specific function, please do not use its readily available version from a package and implement it yourself.
## Question 1
Please answer the questions below:
1. Please explain the concept of batch normalization. What problem does it solve and how?
2. What is spectral normalization and why do we use it?
3. What is the meaning of class-conditional GAN? How do we make GANs class conditional?
4. What are the main differences between StyleGAN-v1 and StyleGAN-v2?
5. Why is it so hard to quantitatively evaluate generative models?
6. What assumptions are we making on the data/model distribution when using Inception Score and Frechet Inception distance?
You can write your answer for each question in the markdown cell below:
**Please write your answer for each question here**
## Question 2
**Part 1**: Implement regular batch normalization for convolutional layers. Definition of the function and the input to test is given.
For reference, you can use the batch normalization paper given [here](https://arxiv.org/abs/1502.03167). Also, you can refer to the lecture slides.
Please note that we will implement a simple version of batch normalization, and in this simple version we will not be using *running mean*, *running variance*, and *momentum* parameters.
```python
import torch
torch.manual_seed(0)
batch_size = 16
num_channels = 12
input_tensor = torch.normal(3, 10, size=(batch_size, num_channels, 24, 24))
```
```python
def batch_normalization_conv(X, gamma, beta, eps=1e-8):
"""Performs batch normalization operation for convolutional layer output
Args:
X: input tensor (N x C x H x W)
gamma: scale variable
beta: offset variable
eps: epsilon value
Returns:
the resulting tensor of the batch norm operation
"""
#######################
# Write code here
#######################
mean = X.mean(dim=(0, 2, 3), keepdim=True)
var = ((X - mean)**2).mean(dim=(0, 2, 3), keepdim=True)
X_hat = (X - mean) / torch.sqrt(var + eps)
return gamma * X_hat + beta
```
```python
print("Expected Output:")
print(batch_normalization_conv(input_tensor, gamma=1, beta=0)[0,0,0,:])
```
Expected Output:
tensor([-1.1380, -1.1645, -0.2613, -0.4449, 0.8398, 0.6829, -0.3268, -2.1290,
0.3125, -1.2757, 0.3403, 0.2984, 0.1098, 1.2294, 1.1083, -0.2580,
-1.3651, -1.7090, 0.5573, 0.7845, 0.5895, -1.5679, -0.3522, 1.8458])
```python
input_tensor[0,0,0,:]
```
tensor([ -8.2584, -8.5236, 0.4942, -1.3388, 11.4871, 9.9201, -0.1601,
-18.1522, 6.2227, -9.6333, 6.4998, 6.0813, 4.1984, 15.3766,
14.1678, 0.5272, -10.5265, -13.9593, 8.6665, 10.9351, 8.9884,
-12.5510, -0.4136, 21.5301])
**Part 2**: Implement class-conditional batch normalization for convolutional layers. You can copy-paste and modify your code from part 1 and use the same input above with the given **Y** vector below. You can refer to the lecture slides for the pseudocode.
This part is a bit tricky since we cannot directly use the class labels as inputs to a feed-forward neural network.
We therefore use the embeddings of the classes instead. We define 10-dimensional embeddings to represent our $y \in \{0, 1\}$ classes as float vectors.
We then randomly generate 0 and 1 values with the amount of **batch_size** and get their embeddings.
In our function, we will imitate a feed-forward neural network to implement class-conditional batch normalization, so we also define the weights and biases of this very simple perceptron as *gamma_w*, *gamma_b*, *beta_w*, and *beta_b*.
```python
import torch.nn as nn
# Assuming binary classification (binary labels)
num_classes = 2
embedding_dim = 10
# 10-dimensional embeddings for two classes: 2 x 10
class_embeddings = nn.Embedding(num_classes, embedding_dim)
# 16 random labels of 0 and 1
input_labels = torch.randint(0, 2, size=(batch_size,))
# Get class embeddings
input_label_embeddings = class_embeddings(input_labels)
gamma_w = torch.randn(embedding_dim, num_channels)
gamma_b = torch.zeros(1, num_channels)
beta_w = torch.randn(embedding_dim, num_channels)
beta_b = torch.zeros(1, num_channels)
```
```python
def cond_batch_normalization_conv(X, Y, gamma_w, gamma_b, beta_w, beta_b, eps=1e-8):
"""Performs conditional batch normalization operation for convolutional layer output
Args:
X: input tensor (N x C x H x W)
Y: input labels (N x emb_dim)
gamma_w: scale weights (emb_dim x C)
gamma_b: scale bias (1 x C)
beta_w: offset weights (emb_dim x C)
beta_b: offset bias (1 x C)
eps: epsilon value
Returns:
the resulting tensor of the batch norm operation
"""
#######################
# Write code here
#######################
mean = X.mean(dim=(0, 2, 3), keepdim=True)
var = ((X - mean)**2).mean(dim=(0, 2, 3), keepdim=True)
X_hat = (X - mean) / torch.sqrt(var + eps)
gamma = torch.matmul(Y, gamma_w) + gamma_b
beta = torch.matmul(Y, beta_w) + beta_b
gamma = gamma.unsqueeze(2).unsqueeze(2)
beta = beta.unsqueeze(2).unsqueeze(2)
return gamma * X_hat + beta
```
```python
print("Expected Output:")
print(cond_batch_normalization_conv(input_tensor, input_label_embeddings, gamma_w, gamma_b, beta_w, beta_b)[0, 0, 0, :].data)
```
Expected Output:
tensor([-4.8654, -4.9883, -0.8110, -1.6601, 4.2812, 3.5554, -1.1141, -9.4485,
1.8426, -5.5023, 1.9710, 1.7771, 0.9049, 6.0829, 5.5230, -0.7957,
-5.9161, -7.5062, 2.9747, 4.0255, 3.1238, -6.8539, -1.2315, 8.9334])
## Question 3
Implement the adaptive instance normalization (AdaIN) from StyleGAN. You can refer to the lecture slides or the StyleGAN paper [here](https://arxiv.org/abs/1812.04948).
Adaptive instance normalization is used in StyleGAN to incorporate the *style* information to the network through combining learned affine transformations and feature maps produced by convolutions.
AdaIN operation is defined mathemtically with the following equation:
\begin{equation}
\text{AdaIN}(\mathbf{x}_i, \mathbf{y}) = \mathbf{y}_{s, i}\frac{\mathbf{x}_i - \mu(\mathbf{x}_i)}{\sigma(\mathbf{x}_i)} + \mathbf{y}_{b,i}
\end{equation}
which takes the feature map $\mathbf{x}_i$ and the style vector $\mathbf{y}$ as parameters. Essentially, the operation normalizes the feature map, scales it with half of the style vector and shifts it with the other half. The components $\mathbf{y}_s$ and $\mathbf{y}_b$ correspond to *scale* and *bias*, and they are simply the two halves of the style vector $\mathbf{y} = (\mathbf{y}_s, \mathbf{y}_b)$.
```python
input_feature_map = torch.randn(batch_size, num_channels, 24, 24)
style_vector = torch.randn(batch_size, 2 * num_channels)
```
```python
def adaptive_instance_normalization(X, y, eps=1e-8):
"""Performs adaptive instance normalization on the given feature map X with the
style input y
Args:
X: Feature map (N x C x W x H)
y: Style vector (N x 2C)
Returns:
The resulting tensor from the operation
"""
    # Instance-norm statistics: computed per sample and per channel, over spatial dims only
    mean = X.mean(dim=(2, 3), keepdim=True)
    var = ((X - mean)**2).mean(dim=(2, 3), keepdim=True)
    X_hat = (X - mean) / torch.sqrt(var + eps)
factor, bias = y.chunk(2, 1)
factor = factor.unsqueeze(2).unsqueeze(2)
bias = bias.unsqueeze(2).unsqueeze(2)
return X_hat * factor + bias
```
```python
print(adaptive_instance_normalization(input_feature_map, style_vector)[0,0,0,:])
```
tensor([9.5492e-01, 3.1419e-01, 3.7104e+00, 1.3578e+00, 1.9867e+00, 2.0013e+00,
3.8199e+00, 3.2714e+00, 3.1662e-04, 2.9241e+00, 1.2787e+00, 3.0599e+00,
3.0669e+00, 4.9342e-01, 5.8750e-01, 2.1606e+00, 2.4507e+00, 2.4924e+00,
2.1146e+00, 2.1017e+00, 2.4752e+00, 2.3877e+00, 3.2252e+00, 4.0635e+00])
## Question 4
Implement a function that calculates the Frechet Inception Distance score from given real examples and fake examples.
You can refer to its original paper [here](https://arxiv.org/abs/1706.08500).
\begin{equation}
\text{FID} = ||\mu_1 - \mu_2||^2 + \text{Tr}(C_1 + C_2 - 2\sqrt{C_1 C_2})
\end{equation}
where $\mu_1$ and $\mu_2$ are the feature-wise means of the real and generated samples, respectively. In addition, $C_1$ and $C_2$ are the covariance matrices of the real and generated samples, sometimes also referred to as sigma ($\Sigma$).
```python
import torch
torch.manual_seed(0)
import torch.nn as nn
from torchvision.models import inception_v3
from torchvision.datasets import MNIST
from torchvision.transforms import Resize
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
```
```python
mnist_dataset = MNIST(root=".", download=True)
# Get 32 MNIST examples
mnist_examples = mnist_dataset.data[:32].type(torch.FloatTensor)
mnist_examples /= 255
resizer = Resize(299)
mnist_examples = resizer(mnist_examples)
plt.imshow(mnist_examples[2].numpy(), interpolation='nearest', cmap='gray')
# Reshape the images to 3D to give them as inputs to the Inception network
mnist_examples = mnist_examples.unsqueeze(dim=3).repeat((1, 1, 1, 3))
```
```python
# Create 32 noisy images with the same size as the (resized) MNIST examples
noisy_examples = torch.randn(32, 299, 299, 1).clip(0, 1).repeat((1, 1, 1, 3))
plt.imshow(noisy_examples[2].numpy(), interpolation='nearest', cmap='gray')
```
```python
# Download the pretrained inception v3 model
inception_model = inception_v3(pretrained=True)
# Replace the classification layer with an identity layer to get the activations
inception_model.fc = nn.Identity()
# Evaluation mode
inception_model.eval()
with torch.no_grad():
mnist_features = inception_model(mnist_examples.permute(0, 3, 1, 2))
noisy_features = inception_model(noisy_examples.permute(0, 3, 1, 2))
```
Downloading: "https://download.pytorch.org/models/inception_v3_google-0cc3c7bd.pth" to /root/.cache/torch/hub/checkpoints/inception_v3_google-0cc3c7bd.pth
0%| | 0.00/104M [00:00<?, ?B/s]
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /pytorch/c10/core/TensorImpl.h:1156.)
return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
```python
mnist_features.shape == noisy_features.shape
```
True
```python
from scipy.linalg import sqrtm
# Solution taken from: https://machinelearningmastery.com/how-to-implement-the-frechet-inception-distance-fid-from-scratch/
def calculate_fid(real_features, fake_features):
"""Calculates the Frechet Inception Distance of the given real and fake features
to measure the similarity of two data distributions
Args:
real_features: Features taken from the real images (N x D)
fake_features: Features taken from the fake images (N x D)
Returns:
A scalar number as the distance between two data distributions
"""
# calculate mean and covariance statistics
mu1, sigma1 = real_features.mean(axis=0), np.cov(real_features, rowvar=False)
mu2, sigma2 = fake_features.mean(axis=0), np.cov(fake_features, rowvar=False)
# calculate sum squared difference between means
ssdiff = np.sum((mu1 - mu2)**2.0)
# calculate sqrt of product between cov
covmean = sqrtm(sigma1.dot(sigma2))
# check and correct imaginary numbers from sqrt
if np.iscomplexobj(covmean):
covmean = covmean.real
# calculate score
fid = ssdiff + np.trace(sigma1 + sigma2 - 2.0 * covmean)
return fid
```
```python
calculate_fid(mnist_features.numpy(), noisy_features.numpy())
```
396.46000186190406
```python
calculate_fid(mnist_features.numpy()[:16], mnist_features.numpy()[16:])
```
95.29507726881653
[Open in Colab](https://colab.research.google.com/github/gucervus/Palestra-FHO/blob/main/main.ipynb)
Programming paradigm: a way of approaching a problem from a logical and
practical point of view when writing code; a pattern of behavior.
This paradigm treats programming from a mathematical point of
view...
But what does that mean?
The imperative point of view
```python
a = int(input('Primeiro numero: '))
b = int(input('Segundo numero: '))
operacao = input('Qual a operação')
if operacao == '+':
print(a+b)
else:
print("Operação invalida no momento")
```
The object-oriented point of view
1. Build an object
2. Define the attributes of that object
3. Define the actions (methods) of that object
In general, the way OOP code is invoked also differs from
functional programming
```python
class Calculadora():
def __init__(self, primeiroNumero, segundoNumero):
self.primeiroNumero = primeiroNumero
self.segundoNumero = segundoNumero
def soma(self):
return self.primeiroNumero + self.segundoNumero
if __name__ == '__main__':
objeto = Calculadora(10,7)
print(objeto.soma())
```
The functional point of view
- Addition of any sequence of numbers
- The elements that make up the operation
- The operation itself
```python
def soma(a,b):
soma = a + b
return soma
soma(10,7)
```
Main programming languages and where they are used
- Lua: World of Warcraft, Angry Birds, Civilization, Street Fighter IV
- Elixir: Globosat, Pinterest, GoPro
- Scala: Tumblr, LinkedIn, Siemens, Twitter
```python
from PIL import Image
Image.open('/content/LP_Funcional.png')
```
But what about this so-called independence?
1. Have at least one parameter: this allows the user to interact with the application
2. It must return an object (a value, a string, a function)
3. It must not contain loops: loops are elements of the imperative paradigm
# Use as few elements of the imperative paradigm as possible
```python
def criaArray():
palavra = 'olá mundo'
lista = []
for i in palavra:
lista+=[i]
print(lista)
criaArray()
```
A higher-order function is a function that receives or returns a function.
It is only possible to define a higher-order function if the language has
first-class functions, since functions need to be "passable".
```python
def criaArray(palavra):
f = lambda i:i
return list(map(str, f(palavra)))
criaArray('olá mundo')
```
Basic characteristics of a (pure) function
- Immutable: create new objects instead of repeatedly operating on the same one
- No state changes: avoid side effects throughout the operation
Creating a list with mutation (imperative style)
```python
def separa():
array = list()
y = 0
for i in range(1,7+1):
valores = int(input(f"Digite o {i}° valor: "))
if valores % 2 == 0:
y+=1
array.insert(0,valores)
else:
array.insert(len(array),valores)
print(sorted(array[:y]) + sorted(array[y:]))
separa()
```
The same operation without mutation:
- Create a list
- Pass the list in as a parameter
- Work with the parameter
- Create a new list
# First-class functions
```python
def ordena(lista):
novaLista = sorted(filter(lambda i: i % 2 == 0, lista)) + sorted(filter(lambda i: i % 2 == 1, lista))
return novaLista
lista = [int(input(f'{c+1}° número: ')) for c in range(7)]
ordena(lista)
```
Gains:
- Brings programs closer to mathematical expressions
- Makes declarations simpler and more direct
- Confines values to immutable regions of memory
- Makes the code friendlier to read and to refactor
# Side effects
```python
from datetime import date
def atribui():
data_atual = date.today()
nova_data = str(data_atual)
data_final = int(nova_data[5:7]) + 1
return data_final
atribui()
```
```python
import requests
from time import sleep
def resposta(request):
sleep(5)
return request.status_code
resposta(requests.get('https://github.com/'))
```
Elements of functional programming:
- lambda
- filter
- map
- reduce
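A minimal sketch (added here for illustration, with arbitrary numbers) combining the four elements in a single pipeline:
```python
from functools import reduce

numbers = [1, 2, 3, 4, 5, 6]

# keep the even numbers, square them, then add the squares together
evens = filter(lambda n: n % 2 == 0, numbers)
squares = map(lambda n: n ** 2, evens)
total = reduce(lambda acc, n: acc + n, squares, 0)
print(total)  # 2**2 + 4**2 + 6**2 = 56
```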
```python
def nome(parametro):
return expressão
```
```python
nome = lambda parametro:expressao
```
```python
def f(x):
return x**2
f(2)
```
```python
f = lambda i: i**2
f(2)
```
```python
def interacao(lista):
g = lambda x: (x**3 - 3*x + 2)**((-x/4) - 1)
    return list(map(g, lista))
interacao([2,3,2,3,2,3])
```
```python
Image.open('/content/cerebro.jpeg')
```
```python
Image.open('/content/filtro.jpg')
```
```python
from sympy import *
```
```python
f = lambda x: ((x**3 - 3*x+2)**((-x/4)-(1)))
f(5)
```
```python
var('x')
f = lambda x: diff(x)
f((x**2)/(x**3))
```
```python
def analiseCurva():
    # First-order derivative
    var('x')
    func = ((1/3)*(x**3)- ((5/2)*(x**2)) + 6*x)
    f = lambda x: diff(x)
    # Evaluate the derivative over the domain
    dominio = [-2,-1,0,1, 2, 3]
    g = list(map(lambda dominio: (dominio**2 - 5*dominio + 6), dominio))
    # Classify the curve
    resultado = []
    for c in g:
        if c == 0:
            resultado.append('zero')
        elif c > 0:
            resultado.append('increasing')
        else:
            resultado.append('decreasing')
    # Results
    print(Symbol(str(f(func))))
    print(dominio)
    print(resultado)
analiseCurva()
```
Figure: plot of the curve analysis
```python
from PIL import Image
Image.open('')
```
```python
var('x')
f = lambda x: integrate(x)
f(x**2)
```
Statistical application:
- Simple linear correlation
- Simple linear regression
Regression studies apply to situations in which there is reason
to assume a cause-and-effect relationship between two quantitative
variables and we want to express that relationship mathematically.
Relationship between X (independent, explanatory variable) and
Y (dependent, response variable).
This relationship is described by a mathematical model, an equation
that relates the dependent variable to the independent one.
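For the simple linear case fitted below, the model and its least-squares estimates (the quantities the code computes as `alfa` and `beta`) can be written as:
$$
\hat{y} = a + b\,x, \qquad
b = \frac{\sum_i x_i y_i - n\,\bar{x}\,\bar{y}}{\sum_i x_i^{2} - n\,\bar{x}^{2}}, \qquad
a = \bar{y} - b\,\bar{x}
$$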
Example: results for the calcium content in the soil (x) and the percentage
of mature tubers (y) in a random sample of the population
```python
from functools import reduce
import numpy as np
from statistics import mean
def RegressaoLinear(calcio_x, tuberculos_y):
    # Sums used by the estimators
    n = len(calcio_x)
    soma_x = reduce(lambda calcio_x, i: i+calcio_x, calcio_x)
    soma_y = reduce(lambda tuberculos_y, y: y+tuberculos_y, tuberculos_y)
    soma_xy = sum(x*y for x,y in zip(calcio_x,tuberculos_y))
    soma_quadrada_x = sum(np.array(calcio_x)**2)
    soma_quadrada_y = sum(np.array(tuberculos_y)**2)
    # Test statistic
    # Pearson correlation coefficient
    R2 = ((n*soma_xy)-(soma_x*soma_y))/sqrt(((n*soma_quadrada_x)-soma_x**2)*((n*soma_quadrada_y)-(soma_y**2)))
    t = R2*sqrt((n-2)/(1 - R2*R2))
    tStudent = [{
        '7': [12.7062,4.3027,3.1824,2.7765,2.5706,2.4469,2.3646]
    }]
    ts = tStudent[0]['7'][6]
    # H0: there is no correlation
    # H1: there is a correlation
    if abs(t) >= ts:
        print(' Reject H0\n',
              f'Since t = {t:.2f} is greater than the tabulated value {ts},\n',
              'we reject H0 and conclude that there is a correlation between the calcium content in the soil\n',
              f'and the percentage of mature tubers, with r = {R2:.2f}\n')
    else:
        print('Fail to reject H0')
    # Building the regression line
    media_x = mean(np.array(calcio_x))
    media_y = mean(np.array(tuberculos_y))
    # Estimated regression line: Y = a + bx
    beta = (soma_xy - n * media_x * media_y ) / (soma_quadrada_x - n * ( media_x * media_x ) )
    alfa = ((media_y)-(beta*media_x))
    # Prediction
    predicao = lambda x: alfa + (beta*x)
    num = float(input(' Which value of x would you like to predict for?: '))
    print(f' The fitted line is y = {alfa:.2f} + {beta:.2f}X')
    print(f' For x = {num}, we get y = {predicao(num)}')
RegressaoLinear([0.2, 0.3, 0.4, 0.5, 0.7, 0.8, 1.0, 1.1, 1.3],[75, 79, 80, 86, 88, 89, 93, 95, 99])
```
```
!pip install -q tensorflow==2.3.1
!pip install -q tensorflow-quantum
from IPython.display import clear_output
clear_output()
```
```
import tensorflow as tf
import tensorflow_quantum as tfq
import cirq
import sympy
import numpy as np
# visualization tools
%matplotlib inline
import matplotlib.pyplot as plt
from cirq.contrib.svg import SVGCircuit
```
```
from google.colab import drive
drive.mount('/content/drive')
```
Mounted at /content/drive
```
%cd '/content/drive/MyDrive/Projects/GSoC 2021'
```
/content/drive/.shortcut-targets-by-id/1PiGZPRwtvmwxh1Xs0mtaS05Kvo_tnftz/GSoC 2021
# Task III: Quantum Convolutional Neural Network (QCNN) Part
Problem statement: Setup and apply a quantum convolutional neural network (QCNN) on particle physics data to perform binary classification on two types of objects (electrons and photons) using TFQ. Feel free to experiment with different ways of encoding the classical data inputs into the qubits. Specifically, show that the model fits the dataset and that your training loss decreases over time (given the small dataset size, we will not be focusing on the accuracy of your model).
## The Approach Used to Tackle the Problem
### Dataset & Preprocessing
The dataset consists of 100 samples each for the training and testing sets. There are two classes: photons are labeled 0 and electrons are labeled 1. Every sample is a 32 x 32 image containing the particle's energy, which means every sample has 32 x 32 = 1024 features if we consider every pixel in the image as a feature. This number is clearly too big if we naively use one qubit for every feature, so some feature reduction technique is needed; only around 20 qubits can be comfortably simulated with Cirq.
Since the image is very sparse (a lot of pixels with zero values), the first idea that comes to mind is to crop the image, cutting away all the zero pixels on the image's border. But after some further inspection, some images have non-zero pixels near the border of the image, so cropping would throw away possibly important features. Another method that comes to mind is to apply max- or average-pooling to the image. But once again, since the image is sparse, there is a very high chance that two very different 32 x 32 images will look the same after pooling. Not only that, the pooled image loses a lot of detail, making it harder for the model to tell the classes apart.
Many previous works have tried to classify image data using a parametrized quantum circuit (PQC), mostly on the MNIST dataset [1]. In almost all of them, the authors suggest a way to reduce the image dimension. In reference [2] (and its [code implementation](https://www.tensorflow.org/quantum/tutorials/mnist)), the author reduces the image dimension using bilinear interpolation. This method has a problem similar to pooling: there is a very high chance that two different images (even with different labels) will look the same after the reduction. This indeed happened and is mentioned in the paper.
The other seemingly popular and reasonable approach is to use Principal Component Analysis (PCA). This has been done before with relatively successful results [3-4]. One weakness of this approach in our case is that a PCA model fitted to the training dataset may not capture the true variance of the data distribution, since the number of samples in the dataset is small. If we then use this PCA model to transform the testing dataset, the PQC may not classify this data correctly. This problem is similar to overfitting, which is also very likely to happen because of the low number of training samples. But because the problem statement also indirectly admits that overfitting is inevitable with this dataset size, we will ignore the testing performance and go on with PCA.
### Encoding
After the dataset is transformed with PCA, the resulting data will then be transformed into quantum data. There are many ways to do this, and it is not trivial to determine which will give the best result. So in this project, two methods will be tried:
1. Angle Encoding <br>
This method is as simple as treating every feature of the sample as the angle argument of a one-qubit rotation gate that acts on a qubit. In this project, the RY gate will be used. Using this encoding, the feature vector of a sample $\boldsymbol{x}=\left[\begin{array}{llll}
x^{1} & x^{2} & \ldots & x^{N}
\end{array}\right]$ will be transformed to
$$
|\boldsymbol{x}\rangle=R Y\left(x^{1}\right) \otimes R Y\left(x^{2}\right) \otimes \ldots \otimes R Y\left(x^{N}\right) \underbrace{|00 \ldots 00\rangle}_{N} .
$$
where $N$ is the number of features in the sample and $x^{i}$ means the $i$-th feature of $\boldsymbol{x}$. **Figure 1** shows the circuit schematic of this encoding.
2. Amplitude Encoding <br>
This method encodes all features in a sample as the amplitudes of a quantum state. The main benefit of this method is that it requires fewer qubits: the whole 32 x 32 = 1024 pixels of an image only need $\operatorname{log_{2}}(1024)=10$ qubits. Using this encoding, the feature vector of a sample $\boldsymbol{x}=\left[\begin{array}{llll}
x^{1} & x^{2} & \ldots & x^{N}
\end{array}\right]$ will be transformed to [5-6]
$$
A_{\mathrm{n}}=\frac{x^{n}}{\sqrt{\sum_{n=1}^{N}\left|x^{n}\right|^{2}}} \\
|\boldsymbol{x}\rangle=\sum_{n=1}^{N} A_{n}|\operatorname{binary}(n-1)\rangle
$$
where $N$ is the number of features in the sample and $\operatorname{binary}$ is an operator that converts integer to its binary form. **Figure 2** shows the circuit schematic of this encoding where
$$
\beta_{j}^{g}=2 \arcsin \left(\frac{\sqrt{\sum_{l=1}^{2^{g-1}}\left|A_{(2 j-1) 2^{g-1}+l}\right|^{2}}}{\sqrt{\sum_{l=1}^{2^{g}}\left|A_{(j-1) 2^{g}+l}\right|^{2}}}\right).
$$
The code implementation of this encoding is provided below, but unfortunately tfq.convert_to_tensor does not support serialization of cirq.ControlledOperation yet (at the time of submission). This encoding needs cirq.ControlledOperation since it uses multi-controlled RY gates. So in this project, this encoding will not be used for training.
**Figure 1**: Angle encoding.
**Figure 2**: Amplitude encoding.
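As a reference, here is a minimal sketch of how the amplitude encoding above could be built in Cirq for real, non-negative features such as pixel intensities. The names `amplitude_encoding_circuit` and `prefix_mass` are introduced here for illustration only; the multi-controlled RY gates this produces are exactly what currently blocks serialization with `tfq.convert_to_tensor`, so it is not used for training.
```
def amplitude_encoding_circuit(features, qubits):
    """Sketch: encode real, non-negative `features` as state amplitudes.

    Assumes len(features) == 2**len(qubits). Builds the cascade of
    (multi-)controlled RY rotations described by the beta formula above.
    """
    amps = np.asarray(features, dtype=float)
    amps = amps / np.linalg.norm(amps)  # the normalized amplitudes A_n
    n = len(qubits)
    assert len(amps) == 2**n

    def prefix_mass(k, b):
        # total probability of all basis states whose first k bits equal b
        block = 2**(n - k)
        return float(np.sum(amps[b * block:(b + 1) * block]**2))

    circuit = cirq.Circuit()
    for k in range(n):                 # rotate qubit k at level k
        for b in range(2**k):          # for every k-bit prefix b
            parent = prefix_mass(k, b)
            if parent == 0.0:          # nothing to distribute in this branch
                continue
            right = prefix_mass(k + 1, 2 * b + 1)
            beta = 2 * np.arcsin(np.sqrt(right / parent))
            gate = cirq.ry(beta)
            if k == 0:
                circuit.append(gate(qubits[0]))
            else:
                control_values = [int(bit) for bit in format(b, f'0{k}b')]
                circuit.append(gate.controlled(num_controls=k,
                                               control_values=control_values)
                               .on(*qubits[:k], qubits[k]))
    return circuit

# Small demo on 8 arbitrary feature values and 3 qubits
demo_qubits = cirq.GridQubit.rect(1, 3)
SVGCircuit(amplitude_encoding_circuit(np.arange(1, 9), demo_qubits))
```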
### QCNN Architecture
The QCNN architecture used in this project is similar to the one proposed in its original paper [7]. In the [TFQ tutorial of QCNN](https://www.tensorflow.org/quantum/tutorials/qcnn), the fully connected layer is not implemented even though it is mentioned in the paper. This project follows the TFQ tutorial but also implements a PQC as the fully connected layer and compares the performance with the model that does not use it. One layer of this fully connected block consists of a one-qubit unitary applied to every qubit used in the layer plus CNOT gates that entangle nearby qubits. At the end of the circuit, a final one-qubit unitary gate is applied to the output qubit (the last qubit). The last qubit is then measured with the Pauli-Z operator, and the result is the prediction output of the model. The complete plot of this fully connected layer is given in the **Code Implementation** section. **Figure 3** shows the schematic of a QCNN (with a fully connected layer) for 8 data qubits.
Another variation tested in this project is the use of a cluster state circuit. The QCNN model with and without the cluster state circuit will be compared to see whether it improves the model's performance.
**Figure 3**: The schematic of QCNN (with a fully connected layer).
## Code Implementation
### Load and Check the Dataset
Let's load and check the dataset. We also need to convert the labels from 1/0 to +1/-1, because the expectation value of a Pauli-Z measurement lies between -1 and 1.
```
# Load the dataset
with np.load('./electron-photon.npz') as data:
x_train = data["x_train"]
y_train = data["y_train"]
x_test = data["x_test"]
y_test = data["y_test"]
y_train = 2*y_train-1
y_test = 2*y_test-1
# Sanity check
print(x_train.shape, y_train.shape)
print(x_test.shape, y_test.shape)
```
(100, 32, 32) (100,)
(100, 32, 32) (100,)
The next thing to check is the label balance. It turns out that the dataset is quite balanced, so there is no need for any class-imbalance preprocessing tricks (e.g., SMOTE).
```
# Check class balances
print("Class 0:", (y_train == -1).sum(), "Class 1:", (y_train == 1).sum())
print("Class 0:", (y_test == -1).sum(), "Class 1:", (y_test == 1).sum())
```
Class 0: 44 Class 1: 56
Class 0: 54 Class 1: 46
Let's plot a sample image from each label to see what the images look like.
```
# Plot the image from both classes
plt.imshow(x_train[y_train==-1][0], cmap='gray')
```
```
plt.imshow(x_train[y_train==1][0], cmap='gray')
```
### Testing Out Some Dimensionality Reduction Techniques
First, let's try to find out where is the maximum coordinate of non-zero pixel in the dataset.
```
# finding the left-most non-zero pixel coordinate in training dataset
for i in range(x_train.shape[2]):
if x_train[:, :, i].max() != 0:
border_left = i
break
# finding the right-most non-zero pixel coordinate in training dataset
for i in range(x_train.shape[2]):
if x_train[:, :, x_train.shape[2]-1-i].max() != 0:
border_right = x_train.shape[2]-1-i
break
# finding the top-most non-zero pixel coordinate in training dataset
for i in range(x_train.shape[1]):
if x_train[:, i, :].max() != 0:
border_top = i
break
# finding the bottom-most non-zero pixel coordinate in training dataset
for i in range(x_train.shape[1]):
if x_train[:, x_train.shape[1]-1-i, :].max() != 0:
border_bottom = x_train.shape[1]-1-i
break
print(border_left, border_right)
print(border_top, border_bottom)
```
0 29
0 26
```
# finding the left-most non-zero pixel coordinate in test dataset
for i in range(x_test.shape[2]):
if x_test[:, :, i].max() != 0:
border_left = i
break
# finding the right-most non-zero pixel coordinate in test dataset
for i in range(x_test.shape[2]):
if x_test[:, :, x_test.shape[2]-1-i].max() != 0:
border_right = x_test.shape[2]-1-i
break
# finding the top-most non-zero pixel coordinate in test dataset
for i in range(x_test.shape[1]):
if x_test[:, i, :].max() != 0:
border_top = i
break
# finding the bottom-most non-zero pixel coordinate in test dataset
for i in range(x_test.shape[1]):
if x_test[:, x_test.shape[1]-1-i, :].max() != 0:
border_bottom = x_test.shape[1]-1-i
break
print(border_left, border_right)
print(border_top, border_bottom)
```
0 29
0 30
Both training and testing datasets have non-zero pixels near the image's very edges, making it impossible to crop the image without information loss.
Second, let's try the max-pooling with filter size 2 x 2 and stride 2 x 2.
```
def max_pool(image):
max_pool_2d = tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='valid')
x = tf.constant(image)
x = tf.reshape(x, [-1, image.shape[1], image.shape[2], 1])
return max_pool_2d(x).numpy().reshape(-1, int(x.shape[1]/2), int(x.shape[2]/2))
```
```
temp = x_train.copy()
for i in range(3):
temp = max_pool(temp)
temp.shape
```
(100, 4, 4)
```
plt.imshow(x_train[56], cmap='gray')
```
```
plt.imshow(temp[56], cmap='gray')
```
We can see that the number of non-zero pixels (and the amount of detail) in an image is reduced greatly after pooling. This is not ideal, as different images become a lot more similar to one another.
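A quick, hedged check of that claim using the `temp` array computed above (a sketch; sample index 56 is just the example plotted earlier):
```
# Sketch: compare non-zero pixel counts before and after three rounds of pooling
print("non-zero pixels (original):", np.count_nonzero(x_train[56]))
print("non-zero pixels (pooled):  ", np.count_nonzero(temp[56]))
```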
### Define Layers
Next, we need to define all the layers that will be used for the QCNN model.
#### Cluster state
The circuit below is the circuit to prepare a cluster state.
```
def cluster_state_circuit(bits):
"""Return a cluster state on the qubits in `bits`."""
circuit = cirq.Circuit()
circuit.append(cirq.H.on_each(bits))
for this_bit, next_bit in zip(bits, bits[1:] + [bits[0]]):
circuit.append(cirq.CZ(this_bit, next_bit))
return circuit
```
Let's plot the circuit for 4 qubits configuration.
```
SVGCircuit(cluster_state_circuit(cirq.GridQubit.rect(1, 4)))
```
#### Unitaries
Below are the standard, simple, one and two-qubit unitary circuits. These unitaries are generally very useful as building blocks.
```
def one_qubit_unitary(bit, symbols):
"""Make a Cirq circuit enacting a rotation of the bloch sphere about the X,
Y and Z axis, that depends on the values in `symbols`.
"""
return cirq.Circuit(
cirq.X(bit)**symbols[0],
cirq.Y(bit)**symbols[1],
cirq.Z(bit)**symbols[2])
def two_qubit_unitary(bits, symbols):
"""Make a Cirq circuit that creates an arbitrary two qubit unitary."""
circuit = cirq.Circuit()
circuit += one_qubit_unitary(bits[0], symbols[0:3])
circuit += one_qubit_unitary(bits[1], symbols[3:6])
circuit += [cirq.ZZ(*bits)**symbols[6]]
circuit += [cirq.YY(*bits)**symbols[7]]
circuit += [cirq.XX(*bits)**symbols[8]]
circuit += one_qubit_unitary(bits[0], symbols[9:12])
circuit += one_qubit_unitary(bits[1], symbols[12:])
return circuit
```
#### QCNN Layers
##### Quantum Convolution
This circuit is the Quantum Convolution block in **Figure 3**.
```
def quantum_conv_circuit(bits, symbols):
"""Quantum Convolution Layer.
Return a Cirq circuit with the cascade of `two_qubit_unitary` applied
to all pairs of qubits in `bits`.
"""
circuit = cirq.Circuit()
for first, second in zip(bits[0::2], bits[1::2]):
circuit += two_qubit_unitary([first, second], symbols)
for first, second in zip(bits[1::2], bits[2::2] + [bits[0]]):
circuit += two_qubit_unitary([first, second], symbols)
return circuit
```
Let's plot the circuit for 4 qubits configuration.
```
SVGCircuit(quantum_conv_circuit(cirq.GridQubit.rect(1, 4), sympy.symbols('x0:15')))
```
##### Quantum Pooling
This circuit is the Quantum Pooling block in **Figure 3**. If the number of qubits fed into this circuit is $Q$, it pools them down to $\frac{Q}{2}$ qubits.
```
def two_qubit_pool(source_qubit, sink_qubit, symbols):
"""Make a Cirq circuit to do a parameterized 'pooling' operation, which
attempts to reduce entanglement down from two qubits to just one."""
pool_circuit = cirq.Circuit()
sink_basis_selector = one_qubit_unitary(sink_qubit, symbols[0:3])
source_basis_selector = one_qubit_unitary(source_qubit, symbols[3:6])
pool_circuit.append(sink_basis_selector)
pool_circuit.append(source_basis_selector)
pool_circuit.append(cirq.CNOT(control=source_qubit, target=sink_qubit))
pool_circuit.append(sink_basis_selector**-1)
return pool_circuit
```
```
def quantum_pool_circuit(source_bits, sink_bits, symbols):
"""A layer that specifies a quantum pooling operation.
A Quantum pool tries to learn to pool the relevant information from two
qubits onto 1.
"""
circuit = cirq.Circuit()
for source, sink in zip(source_bits, sink_bits):
circuit += two_qubit_pool(source, sink, symbols)
return circuit
```
Let's plot the circuit for 4 qubits configuration.
```
test_bits = cirq.GridQubit.rect(1, 4)
SVGCircuit(quantum_pool_circuit(test_bits[:2], test_bits[2:], sympy.symbols('x0:6')))
```
##### Quantum Fully Connected
This circuit is the Quantum Fully Connected block in **Figure 3** without the final one-qubit unitary.
```
def quantum_fc_circuit(bits, symbols):
    """Quantum Fully Connected Layer.

    Applies a parameterized one-qubit unitary to every qubit in `bits`,
    followed by a chain of CNOTs entangling neighboring qubits.
    """
    circuit = cirq.Circuit()
    # one-qubit unitary on every qubit (3 symbols each)
    for i in range(len(bits)):
        circuit += one_qubit_unitary(bits[i], symbols[3*i:3*(i+1)])
    # entangling gates between neighboring qubits
    for j in range(len(bits) - 1):
        circuit += cirq.CNOT(bits[j], bits[j+1])
    return circuit
```
Let's plot the circuit for 4 qubits configuration.
```
SVGCircuit(quantum_fc_circuit(cirq.GridQubit.rect(1, 4), sympy.symbols('x0:12')))
```
### PCA + Angle Encoding
#### Dimensionality Reduction: Principal Component Analysis (PCA)
Before doing PCA, we need to preprocess the data.
First, we have to flatten the images into one-dimensional vectors.
```
x_train_flatten = x_train.reshape(x_train.shape[0], x_train.shape[1]*x_train.shape[2])
x_test_flatten = x_test.reshape(x_test.shape[0], x_test.shape[1]*x_test.shape[2])
print(x_train_flatten.shape, x_test_flatten.shape)
```
(100, 1024) (100, 1024)
Second, it is usually better to standardize the data before PCA. But upon inspection, some features (pixels) in the dataset have a non-zero mean but a very small standard deviation, which would blow up to huge feature values after standardization (dividing by a near-zero standard deviation). One reason this can happen is that there is not enough variation in the dataset for some features, which leads to a very small standard deviation.
So for this project, we will proceed without normalization.
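A quick sanity check of that observation (a sketch; the 1e-3 threshold is an arbitrary choice):
```
# Sketch: count pixel features whose mean is non-zero but whose standard
# deviation is tiny, which would explode after standardization
stds = x_train_flatten.std(axis=0)
means = x_train_flatten.mean(axis=0)
print("features with non-zero mean and std < 1e-3:", np.sum((means != 0) & (stds < 1e-3)))
```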
```
from sklearn.decomposition import PCA
num_component = 16
pca = PCA(n_components=num_component)
pca.fit(x_train_flatten)
print(np.cumsum(pca.explained_variance_ratio_))
```
[0.48118097 0.6593828 0.76321 0.84084564 0.89022726 0.92087126
0.946993 0.9600858 0.96964806 0.976457 0.9819477 0.98669606
0.99004734 0.99288285 0.9952728 0.9963434 ]
We will reduce the dataset from 1024 features to 16 features. We can see that by using only 16 features from PCA, the dataset already captures more than 99.6% of the total variance.
```
x_train_pca = pca.transform(x_train_flatten)
x_test_pca = pca.transform(x_test_flatten)
print(x_train_pca.shape, x_test_pca.shape)
```
(100, 16) (100, 16)
Check the minimum and maximum value of the features to see whether we need to scale them or not. Since an RY gate's argument is an angle, it is best to keep the feature values between $-\pi$ and $\pi$.
```
print(x_train_pca.min(), x_train_pca.max())
print(x_test_pca.min(), x_test_pca.max())
```
-0.3767928 0.7692176
-0.599721 0.72540677
The values are all within that range, so there is no need to scale the features.
#### Generate Quantum Data from the Dataset
After PCA, we transform the data into quantum data using angle encoding.
```
def angle_encoding(X, qubits):
"""Generate quantum data from the dataset (after PCA)."""
quantum_data = []
# iterate through data samples
for sample in X:
circuit = cirq.Circuit()
# iterate through sample's features
for bit in range(len(qubits)):
circuit.append(cirq.ry(sample[bit])(qubits[bit]))
quantum_data.append(circuit)
return tfq.convert_to_tensor(quantum_data)
```
```
qubits = cirq.GridQubit.rect(1, num_component)
train_quantum_data = angle_encoding(x_train_pca, qubits)
test_quantum_data = angle_encoding(x_test_pca, qubits)
```
Let's plot the angle encoding circuit for the first sample in the dataset.
```
SVGCircuit(tfq.from_tensor(train_quantum_data)[0])
```
#### Model Definition
We will try two different models, with and without the Quantum Fully Connected (QFC) layer.
The first one is the model with the QFC layer.
```
def model_with_qfc(qubits):
"""Create sequence of alternating convolution and pooling operators
which gradually shrink over time, followed by 3 QFC layers
"""
model_circuit = cirq.Circuit()
symbols = sympy.symbols('qconv0:63')
# first convolution + pooling, reduce the number of qubit from 16 to 8
# every conv needs 15 params
model_circuit += quantum_conv_circuit(qubits, symbols[0:15])
# every pool needs 6 params
model_circuit += quantum_pool_circuit(qubits[:8], qubits[8:],
symbols[15:21])
# second convolution + pooling, reduce the number of qubit from 8 to 4
model_circuit += quantum_conv_circuit(qubits[8:], symbols[21:36])
model_circuit += quantum_pool_circuit(qubits[8:12], qubits[12:],
symbols[36:42])
# third convolution + pooling, reduce the number of qubit from 4 to 2
model_circuit += quantum_conv_circuit(qubits[12:], symbols[42:57])
model_circuit += quantum_pool_circuit(qubits[12:14], qubits[14:],
symbols[57:63])
# fully connected layer
# 3 parameters for every qubit for every layer
    # repeated 3 times = 3 fully connected layers -> 2 qubits x 3 params x 3 layers = 18 parameters
symbols_fc = sympy.symbols('qfc0:18')
for i in range(3):
model_circuit += quantum_fc_circuit(qubits[14:], symbols_fc[2*3*i : 2*3*(i+1)])
# final unitary
model_circuit += one_qubit_unitary(qubits[15], sympy.symbols('final0:3'))
return model_circuit
```
Let's plot the circuit.
```
qubits = cirq.GridQubit.rect(1, num_component)
SVGCircuit(model_with_qfc(qubits))
```
Next, we construct a model from this circuit.
```
# Create our qubits and readout operators in Cirq.
qubits = cirq.GridQubit.rect(1, num_component)
readout_operators = cirq.Z(qubits[-1])
# Build a sequential model
image_input = tf.keras.Input(shape=(), dtype=tf.dtypes.string)
quantum_model = tfq.layers.PQC(model_with_qfc(qubits),
readout_operators)(image_input)
qcnn_with_qfc = tf.keras.Model(inputs=[image_input], outputs=[quantum_model])
# Show the keras plot of the model
tf.keras.utils.plot_model(qcnn_with_qfc,
show_shapes=True,
show_layer_names=False,
dpi=70)
```
The next one is the one without QFC.
```
def model_wo_qfc(qubits):
"""Create sequence of alternating convolution and pooling operators
which gradually shrink over time."""
model_circuit = cirq.Circuit()
symbols = sympy.symbols('qconv0:84')
# first convolution + pooling, reduce the number of qubit from 16 to 8
# every conv needs 15 params
model_circuit += quantum_conv_circuit(qubits, symbols[0:15])
# every pool needs 6 params
model_circuit += quantum_pool_circuit(qubits[:8], qubits[8:],
symbols[15:21])
# second convolution + pooling, reduce the number of qubit from 8 to 4
model_circuit += quantum_conv_circuit(qubits[8:], symbols[21:36])
model_circuit += quantum_pool_circuit(qubits[8:12], qubits[12:],
symbols[36:42])
# third convolution + pooling, reduce the number of qubit from 4 to 2
model_circuit += quantum_conv_circuit(qubits[12:], symbols[42:57])
model_circuit += quantum_pool_circuit(qubits[12:14], qubits[14:],
symbols[57:63])
# forth convolution + pooling, reduce the number of qubit from 2 to 1
model_circuit += quantum_conv_circuit(qubits[14:], symbols[63:78])
model_circuit += quantum_pool_circuit([qubits[14]], [qubits[15]],
symbols[78:84])
return model_circuit
```
Let's plot the circuit.
```
qubits = cirq.GridQubit.rect(1, num_component)
SVGCircuit(model_wo_qfc(qubits))
```
```
# Create our qubits and readout operators in Cirq.
qubits = cirq.GridQubit.rect(1, num_component)
readout_operators = cirq.Z(qubits[-1])
# Build a sequential model
image_input = tf.keras.Input(shape=(), dtype=tf.dtypes.string)
quantum_model_wo = tfq.layers.PQC(model_wo_qfc(qubits),
readout_operators)(image_input)
qcnn_wo_qfc = tf.keras.Model(inputs=[image_input], outputs=[quantum_model_wo])
# Show the keras plot of the model
tf.keras.utils.plot_model(qcnn_wo_qfc,
show_shapes=True,
show_layer_names=False,
dpi=70)
```
The two models have the same number of trainable parameters. A comparison between two models with different architectures is usually considered fair when both have the same number of trainable parameters.
```
qcnn_with_qfc.summary()
```
Model: "functional_5"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_3 (InputLayer) [(None,)] 0
_________________________________________________________________
pqc_2 (PQC) (None, 1) 84
=================================================================
Total params: 84
Trainable params: 84
Non-trainable params: 0
_________________________________________________________________
```
qcnn_wo_qfc.summary()
```
Model: "functional_7"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_4 (InputLayer) [(None,)] 0
_________________________________________________________________
pqc_3 (PQC) (None, 1) 84
=================================================================
Total params: 84
Trainable params: 84
Non-trainable params: 0
_________________________________________________________________
#### Train the Model
We need a custom accuracy metric since the labels are now +1/-1 instead of 1/0.
```
# Custom accuracy metric
@tf.function
def custom_accuracy(y_true, y_pred):
y_true = tf.squeeze(y_true)
y_pred = tf.map_fn(lambda x: 1.0 if x >= 0 else -1.0, y_pred)
return tf.keras.backend.mean(tf.keras.backend.equal(y_true, y_pred))
```
The optimizer that will be used to train both models is Adam with lr = 0.001. The cost function to be minimized is MSE (same for both models).
```
# Compile the models
qcnn_with_qfc.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
loss=tf.losses.mse,
metrics=[custom_accuracy])
qcnn_wo_qfc.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
loss=tf.losses.mse,
metrics=[custom_accuracy])
```
Let's train the models for 100 epochs.
```
# Fit the model with QFC
history_with = qcnn_with_qfc.fit(x=train_quantum_data,
y=y_train,
batch_size=32,
epochs=100,
verbose=1,
validation_data=(test_quantum_data, y_test))
```
Epoch 1/100
4/4 [==============================] - 18s 5s/step - loss: 1.0063 - custom_accuracy: 0.4531 - val_loss: 0.9974 - val_custom_accuracy: 0.4219
Epoch 2/100
4/4 [==============================] - 18s 4s/step - loss: 1.0038 - custom_accuracy: 0.4531 - val_loss: 0.9982 - val_custom_accuracy: 0.4219
Epoch 3/100
4/4 [==============================] - 18s 4s/step - loss: 1.0020 - custom_accuracy: 0.3516 - val_loss: 0.9990 - val_custom_accuracy: 0.4766
Epoch 4/100
4/4 [==============================] - 19s 5s/step - loss: 1.0008 - custom_accuracy: 0.5000 - val_loss: 0.9997 - val_custom_accuracy: 0.4844
Epoch 5/100
4/4 [==============================] - 18s 4s/step - loss: 0.9993 - custom_accuracy: 0.4688 - val_loss: 1.0003 - val_custom_accuracy: 0.6172
Epoch 6/100
4/4 [==============================] - 18s 5s/step - loss: 0.9982 - custom_accuracy: 0.5625 - val_loss: 1.0013 - val_custom_accuracy: 0.5781
Epoch 7/100
4/4 [==============================] - 18s 5s/step - loss: 0.9968 - custom_accuracy: 0.5469 - val_loss: 1.0026 - val_custom_accuracy: 0.5781
Epoch 8/100
4/4 [==============================] - 18s 4s/step - loss: 0.9950 - custom_accuracy: 0.5469 - val_loss: 1.0044 - val_custom_accuracy: 0.5781
Epoch 9/100
4/4 [==============================] - 18s 4s/step - loss: 0.9927 - custom_accuracy: 0.6016 - val_loss: 1.0066 - val_custom_accuracy: 0.5781
Epoch 10/100
4/4 [==============================] - 18s 4s/step - loss: 0.9907 - custom_accuracy: 0.6562 - val_loss: 1.0107 - val_custom_accuracy: 0.5781
Epoch 11/100
4/4 [==============================] - 18s 4s/step - loss: 0.9881 - custom_accuracy: 0.6016 - val_loss: 1.0166 - val_custom_accuracy: 0.5781
Epoch 12/100
4/4 [==============================] - 18s 4s/step - loss: 0.9868 - custom_accuracy: 0.6016 - val_loss: 1.0242 - val_custom_accuracy: 0.5781
Epoch 13/100
4/4 [==============================] - 18s 4s/step - loss: 0.9850 - custom_accuracy: 0.4375 - val_loss: 1.0300 - val_custom_accuracy: 0.5781
Epoch 14/100
4/4 [==============================] - 18s 4s/step - loss: 0.9847 - custom_accuracy: 0.6016 - val_loss: 1.0289 - val_custom_accuracy: 0.5781
Epoch 15/100
4/4 [==============================] - 18s 4s/step - loss: 0.9848 - custom_accuracy: 0.4922 - val_loss: 1.0293 - val_custom_accuracy: 0.5781
Epoch 16/100
4/4 [==============================] - 18s 4s/step - loss: 0.9844 - custom_accuracy: 0.5469 - val_loss: 1.0274 - val_custom_accuracy: 0.5781
Epoch 17/100
4/4 [==============================] - 18s 4s/step - loss: 0.9844 - custom_accuracy: 0.4922 - val_loss: 1.0257 - val_custom_accuracy: 0.5781
Epoch 18/100
4/4 [==============================] - 18s 4s/step - loss: 0.9850 - custom_accuracy: 0.6016 - val_loss: 1.0231 - val_custom_accuracy: 0.5781
Epoch 19/100
4/4 [==============================] - 18s 4s/step - loss: 0.9848 - custom_accuracy: 0.5469 - val_loss: 1.0228 - val_custom_accuracy: 0.5781
Epoch 20/100
4/4 [==============================] - 18s 4s/step - loss: 0.9850 - custom_accuracy: 0.5469 - val_loss: 1.0230 - val_custom_accuracy: 0.5781
Epoch 21/100
4/4 [==============================] - 18s 4s/step - loss: 0.9846 - custom_accuracy: 0.5469 - val_loss: 1.0226 - val_custom_accuracy: 0.5781
Epoch 22/100
4/4 [==============================] - 18s 4s/step - loss: 0.9846 - custom_accuracy: 0.5469 - val_loss: 1.0224 - val_custom_accuracy: 0.5781
Epoch 23/100
4/4 [==============================] - 18s 4s/step - loss: 0.9844 - custom_accuracy: 0.4922 - val_loss: 1.0220 - val_custom_accuracy: 0.5781
Epoch 24/100
4/4 [==============================] - 18s 4s/step - loss: 0.9847 - custom_accuracy: 0.5469 - val_loss: 1.0202 - val_custom_accuracy: 0.5781
Epoch 25/100
4/4 [==============================] - 18s 4s/step - loss: 0.9848 - custom_accuracy: 0.6016 - val_loss: 1.0197 - val_custom_accuracy: 0.5781
Epoch 26/100
4/4 [==============================] - 18s 4s/step - loss: 0.9846 - custom_accuracy: 0.6016 - val_loss: 1.0210 - val_custom_accuracy: 0.5781
Epoch 27/100
4/4 [==============================] - 19s 5s/step - loss: 0.9844 - custom_accuracy: 0.4922 - val_loss: 1.0230 - val_custom_accuracy: 0.5781
Epoch 28/100
4/4 [==============================] - 18s 4s/step - loss: 0.9840 - custom_accuracy: 0.4922 - val_loss: 1.0222 - val_custom_accuracy: 0.5781
Epoch 29/100
4/4 [==============================] - 18s 4s/step - loss: 0.9840 - custom_accuracy: 0.5469 - val_loss: 1.0209 - val_custom_accuracy: 0.5781
Epoch 30/100
4/4 [==============================] - 18s 4s/step - loss: 0.9842 - custom_accuracy: 0.5469 - val_loss: 1.0198 - val_custom_accuracy: 0.5781
Epoch 31/100
4/4 [==============================] - 18s 4s/step - loss: 0.9842 - custom_accuracy: 0.5469 - val_loss: 1.0195 - val_custom_accuracy: 0.5781
Epoch 32/100
4/4 [==============================] - 18s 4s/step - loss: 0.9842 - custom_accuracy: 0.4922 - val_loss: 1.0189 - val_custom_accuracy: 0.5781
Epoch 33/100
4/4 [==============================] - 18s 4s/step - loss: 0.9844 - custom_accuracy: 0.4375 - val_loss: 1.0168 - val_custom_accuracy: 0.5781
Epoch 34/100
4/4 [==============================] - 18s 4s/step - loss: 0.9840 - custom_accuracy: 0.4922 - val_loss: 1.0141 - val_custom_accuracy: 0.5781
Epoch 35/100
4/4 [==============================] - 18s 4s/step - loss: 0.9853 - custom_accuracy: 0.5469 - val_loss: 1.0118 - val_custom_accuracy: 0.5781
Epoch 36/100
4/4 [==============================] - 18s 4s/step - loss: 0.9867 - custom_accuracy: 0.6016 - val_loss: 1.0107 - val_custom_accuracy: 0.5781
Epoch 37/100
4/4 [==============================] - 18s 4s/step - loss: 0.9867 - custom_accuracy: 0.6016 - val_loss: 1.0115 - val_custom_accuracy: 0.5781
Epoch 38/100
4/4 [==============================] - 18s 4s/step - loss: 0.9859 - custom_accuracy: 0.6016 - val_loss: 1.0136 - val_custom_accuracy: 0.5781
Epoch 39/100
4/4 [==============================] - 18s 4s/step - loss: 0.9847 - custom_accuracy: 0.5469 - val_loss: 1.0165 - val_custom_accuracy: 0.5781
Epoch 40/100
4/4 [==============================] - 18s 4s/step - loss: 0.9837 - custom_accuracy: 0.4922 - val_loss: 1.0185 - val_custom_accuracy: 0.5781
Epoch 41/100
4/4 [==============================] - 18s 4s/step - loss: 0.9834 - custom_accuracy: 0.6016 - val_loss: 1.0193 - val_custom_accuracy: 0.5781
Epoch 42/100
4/4 [==============================] - 18s 4s/step - loss: 0.9833 - custom_accuracy: 0.4922 - val_loss: 1.0216 - val_custom_accuracy: 0.5781
Epoch 43/100
4/4 [==============================] - 18s 4s/step - loss: 0.9825 - custom_accuracy: 0.6016 - val_loss: 1.0219 - val_custom_accuracy: 0.5781
Epoch 44/100
4/4 [==============================] - 18s 4s/step - loss: 0.9827 - custom_accuracy: 0.6016 - val_loss: 1.0250 - val_custom_accuracy: 0.5781
Epoch 45/100
4/4 [==============================] - 18s 4s/step - loss: 0.9821 - custom_accuracy: 0.5469 - val_loss: 1.0289 - val_custom_accuracy: 0.5781
Epoch 46/100
4/4 [==============================] - 18s 4s/step - loss: 0.9812 - custom_accuracy: 0.5469 - val_loss: 1.0302 - val_custom_accuracy: 0.5781
Epoch 47/100
4/4 [==============================] - 18s 4s/step - loss: 0.9812 - custom_accuracy: 0.6016 - val_loss: 1.0316 - val_custom_accuracy: 0.5781
Epoch 48/100
4/4 [==============================] - 18s 4s/step - loss: 0.9817 - custom_accuracy: 0.4375 - val_loss: 1.0340 - val_custom_accuracy: 0.5781
Epoch 49/100
4/4 [==============================] - 18s 4s/step - loss: 0.9806 - custom_accuracy: 0.6016 - val_loss: 1.0303 - val_custom_accuracy: 0.5781
Epoch 50/100
4/4 [==============================] - 18s 4s/step - loss: 0.9813 - custom_accuracy: 0.5469 - val_loss: 1.0288 - val_custom_accuracy: 0.5781
Epoch 51/100
4/4 [==============================] - 19s 5s/step - loss: 0.9809 - custom_accuracy: 0.5469 - val_loss: 1.0284 - val_custom_accuracy: 0.5781
Epoch 52/100
4/4 [==============================] - 18s 4s/step - loss: 0.9808 - custom_accuracy: 0.6016 - val_loss: 1.0284 - val_custom_accuracy: 0.5781
Epoch 53/100
4/4 [==============================] - 18s 4s/step - loss: 0.9811 - custom_accuracy: 0.5469 - val_loss: 1.0303 - val_custom_accuracy: 0.5781
Epoch 54/100
4/4 [==============================] - 18s 4s/step - loss: 0.9806 - custom_accuracy: 0.6562 - val_loss: 1.0320 - val_custom_accuracy: 0.5781
Epoch 55/100
4/4 [==============================] - 18s 4s/step - loss: 0.9802 - custom_accuracy: 0.5469 - val_loss: 1.0381 - val_custom_accuracy: 0.5781
Epoch 56/100
4/4 [==============================] - 18s 4s/step - loss: 0.9802 - custom_accuracy: 0.6016 - val_loss: 1.0419 - val_custom_accuracy: 0.5781
Epoch 57/100
4/4 [==============================] - 18s 4s/step - loss: 0.9805 - custom_accuracy: 0.4922 - val_loss: 1.0447 - val_custom_accuracy: 0.5781
Epoch 58/100
4/4 [==============================] - 18s 4s/step - loss: 0.9806 - custom_accuracy: 0.5469 - val_loss: 1.0418 - val_custom_accuracy: 0.5781
Epoch 59/100
4/4 [==============================] - 18s 4s/step - loss: 0.9803 - custom_accuracy: 0.4922 - val_loss: 1.0372 - val_custom_accuracy: 0.5781
Epoch 60/100
4/4 [==============================] - 18s 4s/step - loss: 0.9808 - custom_accuracy: 0.6016 - val_loss: 1.0311 - val_custom_accuracy: 0.5781
Epoch 61/100
4/4 [==============================] - 18s 4s/step - loss: 0.9798 - custom_accuracy: 0.5469 - val_loss: 1.0300 - val_custom_accuracy: 0.5781
Epoch 62/100
4/4 [==============================] - 18s 4s/step - loss: 0.9800 - custom_accuracy: 0.5469 - val_loss: 1.0279 - val_custom_accuracy: 0.5781
Epoch 63/100
4/4 [==============================] - 18s 4s/step - loss: 0.9802 - custom_accuracy: 0.6562 - val_loss: 1.0275 - val_custom_accuracy: 0.5781
Epoch 64/100
4/4 [==============================] - 18s 4s/step - loss: 0.9796 - custom_accuracy: 0.6016 - val_loss: 1.0327 - val_custom_accuracy: 0.5781
Epoch 65/100
4/4 [==============================] - 18s 4s/step - loss: 0.9800 - custom_accuracy: 0.5469 - val_loss: 1.0385 - val_custom_accuracy: 0.5781
Epoch 66/100
4/4 [==============================] - 18s 4s/step - loss: 0.9808 - custom_accuracy: 0.4375 - val_loss: 1.0398 - val_custom_accuracy: 0.5781
Epoch 67/100
4/4 [==============================] - 18s 4s/step - loss: 0.9812 - custom_accuracy: 0.6016 - val_loss: 1.0329 - val_custom_accuracy: 0.5781
Epoch 68/100
4/4 [==============================] - 18s 4s/step - loss: 0.9801 - custom_accuracy: 0.6016 - val_loss: 1.0310 - val_custom_accuracy: 0.5781
Epoch 69/100
4/4 [==============================] - 18s 4s/step - loss: 0.9791 - custom_accuracy: 0.4922 - val_loss: 1.0311 - val_custom_accuracy: 0.5781
Epoch 70/100
4/4 [==============================] - 18s 4s/step - loss: 0.9794 - custom_accuracy: 0.4922 - val_loss: 1.0275 - val_custom_accuracy: 0.5781
Epoch 71/100
4/4 [==============================] - 18s 4s/step - loss: 0.9785 - custom_accuracy: 0.4375 - val_loss: 1.0226 - val_custom_accuracy: 0.5781
Epoch 72/100
4/4 [==============================] - 18s 5s/step - loss: 0.9800 - custom_accuracy: 0.6016 - val_loss: 1.0170 - val_custom_accuracy: 0.5781
Epoch 73/100
4/4 [==============================] - 18s 4s/step - loss: 0.9807 - custom_accuracy: 0.5469 - val_loss: 1.0157 - val_custom_accuracy: 0.5781
Epoch 74/100
4/4 [==============================] - 18s 5s/step - loss: 0.9812 - custom_accuracy: 0.5469 - val_loss: 1.0150 - val_custom_accuracy: 0.5781
Epoch 75/100
4/4 [==============================] - 18s 5s/step - loss: 0.9815 - custom_accuracy: 0.6016 - val_loss: 1.0152 - val_custom_accuracy: 0.5781
Epoch 76/100
4/4 [==============================] - 19s 5s/step - loss: 0.9807 - custom_accuracy: 0.5469 - val_loss: 1.0170 - val_custom_accuracy: 0.5781
Epoch 77/100
4/4 [==============================] - 18s 5s/step - loss: 0.9800 - custom_accuracy: 0.5469 - val_loss: 1.0186 - val_custom_accuracy: 0.5781
Epoch 78/100
4/4 [==============================] - 18s 5s/step - loss: 0.9799 - custom_accuracy: 0.6016 - val_loss: 1.0211 - val_custom_accuracy: 0.5781
Epoch 79/100
4/4 [==============================] - 19s 5s/step - loss: 0.9793 - custom_accuracy: 0.5469 - val_loss: 1.0245 - val_custom_accuracy: 0.5781
Epoch 80/100
4/4 [==============================] - 18s 5s/step - loss: 0.9785 - custom_accuracy: 0.6016 - val_loss: 1.0274 - val_custom_accuracy: 0.5781
Epoch 81/100
4/4 [==============================] - 18s 5s/step - loss: 0.9782 - custom_accuracy: 0.4922 - val_loss: 1.0303 - val_custom_accuracy: 0.5781
Epoch 82/100
4/4 [==============================] - 18s 4s/step - loss: 0.9779 - custom_accuracy: 0.4922 - val_loss: 1.0294 - val_custom_accuracy: 0.5781
Epoch 83/100
4/4 [==============================] - 18s 5s/step - loss: 0.9790 - custom_accuracy: 0.6016 - val_loss: 1.0260 - val_custom_accuracy: 0.5781
Epoch 84/100
4/4 [==============================] - 18s 5s/step - loss: 0.9779 - custom_accuracy: 0.4922 - val_loss: 1.0254 - val_custom_accuracy: 0.5781
Epoch 85/100
4/4 [==============================] - 18s 5s/step - loss: 0.9780 - custom_accuracy: 0.4922 - val_loss: 1.0226 - val_custom_accuracy: 0.5781
Epoch 86/100
4/4 [==============================] - 18s 5s/step - loss: 0.9777 - custom_accuracy: 0.5469 - val_loss: 1.0197 - val_custom_accuracy: 0.5781
Epoch 87/100
4/4 [==============================] - 18s 4s/step - loss: 0.9784 - custom_accuracy: 0.4922 - val_loss: 1.0174 - val_custom_accuracy: 0.5781
Epoch 88/100
4/4 [==============================] - 18s 4s/step - loss: 0.9795 - custom_accuracy: 0.5469 - val_loss: 1.0145 - val_custom_accuracy: 0.5781
Epoch 89/100
4/4 [==============================] - 18s 4s/step - loss: 0.9801 - custom_accuracy: 0.6016 - val_loss: 1.0137 - val_custom_accuracy: 0.5781
Epoch 90/100
4/4 [==============================] - 18s 4s/step - loss: 0.9800 - custom_accuracy: 0.5469 - val_loss: 1.0156 - val_custom_accuracy: 0.5781
Epoch 91/100
4/4 [==============================] - 18s 4s/step - loss: 0.9787 - custom_accuracy: 0.6016 - val_loss: 1.0174 - val_custom_accuracy: 0.5781
Epoch 92/100
4/4 [==============================] - 18s 4s/step - loss: 0.9775 - custom_accuracy: 0.5469 - val_loss: 1.0205 - val_custom_accuracy: 0.5781
Epoch 93/100
4/4 [==============================] - 18s 4s/step - loss: 0.9772 - custom_accuracy: 0.4922 - val_loss: 1.0228 - val_custom_accuracy: 0.5781
Epoch 94/100
4/4 [==============================] - 18s 4s/step - loss: 0.9770 - custom_accuracy: 0.5469 - val_loss: 1.0224 - val_custom_accuracy: 0.5781
Epoch 95/100
4/4 [==============================] - 18s 4s/step - loss: 0.9766 - custom_accuracy: 0.6016 - val_loss: 1.0233 - val_custom_accuracy: 0.5781
Epoch 96/100
4/4 [==============================] - 18s 4s/step - loss: 0.9764 - custom_accuracy: 0.5469 - val_loss: 1.0263 - val_custom_accuracy: 0.5781
Epoch 97/100
4/4 [==============================] - 18s 4s/step - loss: 0.9756 - custom_accuracy: 0.6562 - val_loss: 1.0296 - val_custom_accuracy: 0.5781
Epoch 98/100
4/4 [==============================] - 18s 4s/step - loss: 0.9760 - custom_accuracy: 0.4922 - val_loss: 1.0360 - val_custom_accuracy: 0.5781
Epoch 99/100
4/4 [==============================] - 18s 4s/step - loss: 0.9754 - custom_accuracy: 0.5469 - val_loss: 1.0353 - val_custom_accuracy: 0.5781
Epoch 100/100
4/4 [==============================] - 18s 4s/step - loss: 0.9752 - custom_accuracy: 0.5469 - val_loss: 1.0348 - val_custom_accuracy: 0.5781
```
# save the training history as a csv file
import csv
with open("./train_history_qcnn_with_qfc.csv", "w") as f:
    w = csv.writer(f)
    for key, val in history_with.history.items():
        w.writerow([key, val])
```
```
plt.plot(history_with.history['loss'][1:], label='Training')
plt.plot(history_with.history['val_loss'][1:], label='Validation')
plt.title('Quantum CNN with Quantum Fully Connected Layer')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()
```
```
# Fit the model without QFC
history_wo = qcnn_wo_qfc.fit(x=train_quantum_data,
y=y_train,
batch_size=32,
epochs=100,
verbose=1,
validation_data=(test_quantum_data, y_test))
```
Epoch 1/100
4/4 [==============================] - 18s 5s/step - loss: 1.0016 - custom_accuracy: 0.5625 - val_loss: 1.0028 - val_custom_accuracy: 0.5781
Epoch 2/100
4/4 [==============================] - 18s 5s/step - loss: 0.9997 - custom_accuracy: 0.5469 - val_loss: 1.0048 - val_custom_accuracy: 0.5781
Epoch 3/100
4/4 [==============================] - 18s 5s/step - loss: 0.9978 - custom_accuracy: 0.4922 - val_loss: 1.0065 - val_custom_accuracy: 0.5781
Epoch 4/100
4/4 [==============================] - 18s 5s/step - loss: 0.9961 - custom_accuracy: 0.5469 - val_loss: 1.0069 - val_custom_accuracy: 0.5781
Epoch 5/100
4/4 [==============================] - 18s 5s/step - loss: 0.9957 - custom_accuracy: 0.5469 - val_loss: 1.0080 - val_custom_accuracy: 0.5781
Epoch 6/100
4/4 [==============================] - 18s 5s/step - loss: 0.9943 - custom_accuracy: 0.5469 - val_loss: 1.0091 - val_custom_accuracy: 0.5781
Epoch 7/100
4/4 [==============================] - 18s 5s/step - loss: 0.9928 - custom_accuracy: 0.4922 - val_loss: 1.0098 - val_custom_accuracy: 0.5781
Epoch 8/100
4/4 [==============================] - 18s 5s/step - loss: 0.9921 - custom_accuracy: 0.6016 - val_loss: 1.0100 - val_custom_accuracy: 0.5781
Epoch 9/100
4/4 [==============================] - 18s 5s/step - loss: 0.9916 - custom_accuracy: 0.5469 - val_loss: 1.0116 - val_custom_accuracy: 0.5781
Epoch 10/100
4/4 [==============================] - 19s 5s/step - loss: 0.9906 - custom_accuracy: 0.5469 - val_loss: 1.0130 - val_custom_accuracy: 0.5781
Epoch 11/100
4/4 [==============================] - 18s 5s/step - loss: 0.9897 - custom_accuracy: 0.5469 - val_loss: 1.0145 - val_custom_accuracy: 0.5781
Epoch 12/100
4/4 [==============================] - 18s 5s/step - loss: 0.9885 - custom_accuracy: 0.5469 - val_loss: 1.0156 - val_custom_accuracy: 0.5781
Epoch 13/100
4/4 [==============================] - 18s 5s/step - loss: 0.9880 - custom_accuracy: 0.5469 - val_loss: 1.0167 - val_custom_accuracy: 0.5781
Epoch 14/100
4/4 [==============================] - 18s 5s/step - loss: 0.9874 - custom_accuracy: 0.4922 - val_loss: 1.0173 - val_custom_accuracy: 0.5781
Epoch 15/100
4/4 [==============================] - 18s 5s/step - loss: 0.9869 - custom_accuracy: 0.5469 - val_loss: 1.0168 - val_custom_accuracy: 0.5781
Epoch 16/100
4/4 [==============================] - 19s 5s/step - loss: 0.9869 - custom_accuracy: 0.6016 - val_loss: 1.0164 - val_custom_accuracy: 0.5781
Epoch 17/100
4/4 [==============================] - 19s 5s/step - loss: 0.9868 - custom_accuracy: 0.4922 - val_loss: 1.0174 - val_custom_accuracy: 0.5781
Epoch 18/100
4/4 [==============================] - 19s 5s/step - loss: 0.9862 - custom_accuracy: 0.4922 - val_loss: 1.0166 - val_custom_accuracy: 0.5781
Epoch 19/100
4/4 [==============================] - 19s 5s/step - loss: 0.9863 - custom_accuracy: 0.4375 - val_loss: 1.0148 - val_custom_accuracy: 0.5781
Epoch 20/100
4/4 [==============================] - 18s 5s/step - loss: 0.9866 - custom_accuracy: 0.6016 - val_loss: 1.0129 - val_custom_accuracy: 0.5781
Epoch 21/100
4/4 [==============================] - 18s 5s/step - loss: 0.9874 - custom_accuracy: 0.4922 - val_loss: 1.0122 - val_custom_accuracy: 0.5781
Epoch 22/100
4/4 [==============================] - 18s 5s/step - loss: 0.9875 - custom_accuracy: 0.5469 - val_loss: 1.0115 - val_custom_accuracy: 0.5781
Epoch 23/100
4/4 [==============================] - 18s 5s/step - loss: 0.9877 - custom_accuracy: 0.5469 - val_loss: 1.0112 - val_custom_accuracy: 0.5781
Epoch 24/100
4/4 [==============================] - 18s 5s/step - loss: 0.9877 - custom_accuracy: 0.4375 - val_loss: 1.0108 - val_custom_accuracy: 0.5781
Epoch 25/100
4/4 [==============================] - 18s 5s/step - loss: 0.9881 - custom_accuracy: 0.6562 - val_loss: 1.0100 - val_custom_accuracy: 0.5781
Epoch 26/100
4/4 [==============================] - 18s 5s/step - loss: 0.9882 - custom_accuracy: 0.5469 - val_loss: 1.0110 - val_custom_accuracy: 0.5781
Epoch 27/100
4/4 [==============================] - 18s 5s/step - loss: 0.9874 - custom_accuracy: 0.5469 - val_loss: 1.0115 - val_custom_accuracy: 0.5781
Epoch 28/100
4/4 [==============================] - 18s 5s/step - loss: 0.9872 - custom_accuracy: 0.6016 - val_loss: 1.0126 - val_custom_accuracy: 0.5781
Epoch 29/100
4/4 [==============================] - 18s 5s/step - loss: 0.9868 - custom_accuracy: 0.5469 - val_loss: 1.0142 - val_custom_accuracy: 0.5781
Epoch 30/100
4/4 [==============================] - 18s 5s/step - loss: 0.9860 - custom_accuracy: 0.4922 - val_loss: 1.0151 - val_custom_accuracy: 0.5781
Epoch 31/100
4/4 [==============================] - 18s 5s/step - loss: 0.9856 - custom_accuracy: 0.5469 - val_loss: 1.0152 - val_custom_accuracy: 0.5781
Epoch 32/100
4/4 [==============================] - 18s 5s/step - loss: 0.9855 - custom_accuracy: 0.6016 - val_loss: 1.0155 - val_custom_accuracy: 0.5781
Epoch 33/100
4/4 [==============================] - 18s 5s/step - loss: 0.9850 - custom_accuracy: 0.6562 - val_loss: 1.0171 - val_custom_accuracy: 0.5781
Epoch 34/100
4/4 [==============================] - 18s 5s/step - loss: 0.9842 - custom_accuracy: 0.6016 - val_loss: 1.0205 - val_custom_accuracy: 0.5781
Epoch 35/100
4/4 [==============================] - 18s 5s/step - loss: 0.9839 - custom_accuracy: 0.6016 - val_loss: 1.0245 - val_custom_accuracy: 0.5781
Epoch 36/100
4/4 [==============================] - 18s 5s/step - loss: 0.9838 - custom_accuracy: 0.5469 - val_loss: 1.0287 - val_custom_accuracy: 0.5781
Epoch 37/100
4/4 [==============================] - 18s 5s/step - loss: 0.9835 - custom_accuracy: 0.4922 - val_loss: 1.0307 - val_custom_accuracy: 0.5781
Epoch 38/100
4/4 [==============================] - 18s 5s/step - loss: 0.9833 - custom_accuracy: 0.4922 - val_loss: 1.0292 - val_custom_accuracy: 0.5781
Epoch 39/100
4/4 [==============================] - 18s 5s/step - loss: 0.9830 - custom_accuracy: 0.6016 - val_loss: 1.0273 - val_custom_accuracy: 0.5781
Epoch 40/100
4/4 [==============================] - 18s 5s/step - loss: 0.9831 - custom_accuracy: 0.5469 - val_loss: 1.0269 - val_custom_accuracy: 0.5781
Epoch 41/100
4/4 [==============================] - 18s 5s/step - loss: 0.9834 - custom_accuracy: 0.4922 - val_loss: 1.0265 - val_custom_accuracy: 0.5781
Epoch 42/100
4/4 [==============================] - 18s 5s/step - loss: 0.9838 - custom_accuracy: 0.6562 - val_loss: 1.0244 - val_custom_accuracy: 0.5781
Epoch 43/100
4/4 [==============================] - 18s 5s/step - loss: 0.9830 - custom_accuracy: 0.6562 - val_loss: 1.0264 - val_custom_accuracy: 0.5781
Epoch 44/100
4/4 [==============================] - 18s 5s/step - loss: 0.9827 - custom_accuracy: 0.5469 - val_loss: 1.0298 - val_custom_accuracy: 0.5781
Epoch 45/100
4/4 [==============================] - 18s 5s/step - loss: 0.9830 - custom_accuracy: 0.6016 - val_loss: 1.0326 - val_custom_accuracy: 0.5781
Epoch 46/100
4/4 [==============================] - 18s 5s/step - loss: 0.9833 - custom_accuracy: 0.4922 - val_loss: 1.0351 - val_custom_accuracy: 0.5781
Epoch 47/100
4/4 [==============================] - 18s 5s/step - loss: 0.9831 - custom_accuracy: 0.6562 - val_loss: 1.0356 - val_custom_accuracy: 0.5781
Epoch 48/100
4/4 [==============================] - 19s 5s/step - loss: 0.9832 - custom_accuracy: 0.5469 - val_loss: 1.0380 - val_custom_accuracy: 0.5781
Epoch 49/100
4/4 [==============================] - 18s 5s/step - loss: 0.9830 - custom_accuracy: 0.6562 - val_loss: 1.0391 - val_custom_accuracy: 0.5781
Epoch 50/100
4/4 [==============================] - 18s 5s/step - loss: 0.9831 - custom_accuracy: 0.6562 - val_loss: 1.0441 - val_custom_accuracy: 0.5781
Epoch 51/100
4/4 [==============================] - 18s 5s/step - loss: 0.9845 - custom_accuracy: 0.4922 - val_loss: 1.0494 - val_custom_accuracy: 0.5781
Epoch 52/100
4/4 [==============================] - 18s 5s/step - loss: 0.9845 - custom_accuracy: 0.4922 - val_loss: 1.0480 - val_custom_accuracy: 0.5781
Epoch 53/100
4/4 [==============================] - 18s 5s/step - loss: 0.9848 - custom_accuracy: 0.6562 - val_loss: 1.0436 - val_custom_accuracy: 0.5781
Epoch 54/100
4/4 [==============================] - 18s 5s/step - loss: 0.9832 - custom_accuracy: 0.6016 - val_loss: 1.0439 - val_custom_accuracy: 0.5781
Epoch 55/100
4/4 [==============================] - 18s 5s/step - loss: 0.9835 - custom_accuracy: 0.5469 - val_loss: 1.0450 - val_custom_accuracy: 0.5781
Epoch 56/100
4/4 [==============================] - 18s 5s/step - loss: 0.9833 - custom_accuracy: 0.6016 - val_loss: 1.0445 - val_custom_accuracy: 0.5781
Epoch 57/100
4/4 [==============================] - 18s 5s/step - loss: 0.9835 - custom_accuracy: 0.4922 - val_loss: 1.0443 - val_custom_accuracy: 0.5781
Epoch 58/100
4/4 [==============================] - 18s 5s/step - loss: 0.9827 - custom_accuracy: 0.4375 - val_loss: 1.0385 - val_custom_accuracy: 0.5781
Epoch 59/100
4/4 [==============================] - 18s 5s/step - loss: 0.9821 - custom_accuracy: 0.5469 - val_loss: 1.0305 - val_custom_accuracy: 0.5781
Epoch 60/100
4/4 [==============================] - 18s 5s/step - loss: 0.9813 - custom_accuracy: 0.4922 - val_loss: 1.0247 - val_custom_accuracy: 0.5781
Epoch 61/100
4/4 [==============================] - 18s 5s/step - loss: 0.9804 - custom_accuracy: 0.5469 - val_loss: 1.0201 - val_custom_accuracy: 0.5781
Epoch 62/100
4/4 [==============================] - 18s 5s/step - loss: 0.9827 - custom_accuracy: 0.6016 - val_loss: 1.0169 - val_custom_accuracy: 0.5781
Epoch 63/100
4/4 [==============================] - 18s 5s/step - loss: 0.9825 - custom_accuracy: 0.6562 - val_loss: 1.0171 - val_custom_accuracy: 0.5781
Epoch 64/100
4/4 [==============================] - 18s 5s/step - loss: 0.9823 - custom_accuracy: 0.5469 - val_loss: 1.0202 - val_custom_accuracy: 0.5781
Epoch 65/100
4/4 [==============================] - 18s 5s/step - loss: 0.9815 - custom_accuracy: 0.5469 - val_loss: 1.0220 - val_custom_accuracy: 0.5781
Epoch 66/100
4/4 [==============================] - 18s 5s/step - loss: 0.9816 - custom_accuracy: 0.6016 - val_loss: 1.0238 - val_custom_accuracy: 0.5781
Epoch 67/100
4/4 [==============================] - 18s 5s/step - loss: 0.9811 - custom_accuracy: 0.5469 - val_loss: 1.0258 - val_custom_accuracy: 0.5781
Epoch 68/100
4/4 [==============================] - 18s 5s/step - loss: 0.9810 - custom_accuracy: 0.5469 - val_loss: 1.0269 - val_custom_accuracy: 0.5781
Epoch 69/100
4/4 [==============================] - 18s 5s/step - loss: 0.9810 - custom_accuracy: 0.5469 - val_loss: 1.0274 - val_custom_accuracy: 0.5781
Epoch 70/100
4/4 [==============================] - 18s 5s/step - loss: 0.9809 - custom_accuracy: 0.5469 - val_loss: 1.0272 - val_custom_accuracy: 0.5781
Epoch 71/100
4/4 [==============================] - 18s 5s/step - loss: 0.9811 - custom_accuracy: 0.6016 - val_loss: 1.0268 - val_custom_accuracy: 0.5781
Epoch 72/100
4/4 [==============================] - 18s 5s/step - loss: 0.9809 - custom_accuracy: 0.5469 - val_loss: 1.0280 - val_custom_accuracy: 0.5781
Epoch 73/100
4/4 [==============================] - 18s 5s/step - loss: 0.9808 - custom_accuracy: 0.6016 - val_loss: 1.0290 - val_custom_accuracy: 0.5781
Epoch 74/100
4/4 [==============================] - 18s 5s/step - loss: 0.9807 - custom_accuracy: 0.6562 - val_loss: 1.0315 - val_custom_accuracy: 0.5781
Epoch 75/100
4/4 [==============================] - 18s 5s/step - loss: 0.9814 - custom_accuracy: 0.5469 - val_loss: 1.0359 - val_custom_accuracy: 0.5781
Epoch 76/100
4/4 [==============================] - 18s 5s/step - loss: 0.9809 - custom_accuracy: 0.6016 - val_loss: 1.0376 - val_custom_accuracy: 0.5781
Epoch 77/100
4/4 [==============================] - 18s 5s/step - loss: 0.9810 - custom_accuracy: 0.4922 - val_loss: 1.0388 - val_custom_accuracy: 0.5781
Epoch 78/100
4/4 [==============================] - 18s 5s/step - loss: 0.9811 - custom_accuracy: 0.5469 - val_loss: 1.0373 - val_custom_accuracy: 0.5781
Epoch 79/100
4/4 [==============================] - 18s 5s/step - loss: 0.9811 - custom_accuracy: 0.4922 - val_loss: 1.0340 - val_custom_accuracy: 0.5781
Epoch 80/100
4/4 [==============================] - 18s 5s/step - loss: 0.9809 - custom_accuracy: 0.5469 - val_loss: 1.0292 - val_custom_accuracy: 0.5781
Epoch 81/100
4/4 [==============================] - 18s 5s/step - loss: 0.9798 - custom_accuracy: 0.6016 - val_loss: 1.0265 - val_custom_accuracy: 0.5781
Epoch 82/100
4/4 [==============================] - 18s 5s/step - loss: 0.9799 - custom_accuracy: 0.4375 - val_loss: 1.0248 - val_custom_accuracy: 0.5781
Epoch 83/100
4/4 [==============================] - 19s 5s/step - loss: 0.9798 - custom_accuracy: 0.5469 - val_loss: 1.0202 - val_custom_accuracy: 0.5781
Epoch 84/100
4/4 [==============================] - 19s 5s/step - loss: 0.9804 - custom_accuracy: 0.5469 - val_loss: 1.0170 - val_custom_accuracy: 0.5781
Epoch 85/100
4/4 [==============================] - 18s 5s/step - loss: 0.9804 - custom_accuracy: 0.5469 - val_loss: 1.0153 - val_custom_accuracy: 0.5781
Epoch 86/100
4/4 [==============================] - 18s 5s/step - loss: 0.9807 - custom_accuracy: 0.5469 - val_loss: 1.0142 - val_custom_accuracy: 0.5781
Epoch 87/100
4/4 [==============================] - 18s 5s/step - loss: 0.9809 - custom_accuracy: 0.6016 - val_loss: 1.0140 - val_custom_accuracy: 0.5781
Epoch 88/100
4/4 [==============================] - 18s 5s/step - loss: 0.9811 - custom_accuracy: 0.4922 - val_loss: 1.0155 - val_custom_accuracy: 0.5781
Epoch 89/100
4/4 [==============================] - 18s 5s/step - loss: 0.9805 - custom_accuracy: 0.5469 - val_loss: 1.0152 - val_custom_accuracy: 0.5781
Epoch 90/100
4/4 [==============================] - 18s 5s/step - loss: 0.9805 - custom_accuracy: 0.4375 - val_loss: 1.0137 - val_custom_accuracy: 0.5781
Epoch 91/100
4/4 [==============================] - 18s 5s/step - loss: 0.9810 - custom_accuracy: 0.6562 - val_loss: 1.0115 - val_custom_accuracy: 0.5781
Epoch 92/100
4/4 [==============================] - 18s 5s/step - loss: 0.9810 - custom_accuracy: 0.6016 - val_loss: 1.0130 - val_custom_accuracy: 0.5781
Epoch 93/100
4/4 [==============================] - 18s 5s/step - loss: 0.9799 - custom_accuracy: 0.5469 - val_loss: 1.0154 - val_custom_accuracy: 0.5781
Epoch 94/100
4/4 [==============================] - 18s 5s/step - loss: 0.9795 - custom_accuracy: 0.5469 - val_loss: 1.0171 - val_custom_accuracy: 0.5781
Epoch 95/100
4/4 [==============================] - 18s 5s/step - loss: 0.9800 - custom_accuracy: 0.5469 - val_loss: 1.0189 - val_custom_accuracy: 0.5781
Epoch 96/100
4/4 [==============================] - 18s 5s/step - loss: 0.9792 - custom_accuracy: 0.6016 - val_loss: 1.0199 - val_custom_accuracy: 0.5781
Epoch 97/100
4/4 [==============================] - 18s 5s/step - loss: 0.9788 - custom_accuracy: 0.4375 - val_loss: 1.0207 - val_custom_accuracy: 0.5781
Epoch 98/100
4/4 [==============================] - 18s 5s/step - loss: 0.9792 - custom_accuracy: 0.4922 - val_loss: 1.0176 - val_custom_accuracy: 0.5781
Epoch 99/100
4/4 [==============================] - 18s 5s/step - loss: 0.9801 - custom_accuracy: 0.6016 - val_loss: 1.0147 - val_custom_accuracy: 0.5781
Epoch 100/100
4/4 [==============================] - 18s 5s/step - loss: 0.9793 - custom_accuracy: 0.5469 - val_loss: 1.0146 - val_custom_accuracy: 0.5781
```
# save the training history as csv file
import csv
w = csv.writer(open("./train_history_qcnn_wo_qfc.csv", "w"))
for key, val in history_wo.history.items():
w.writerow([key, val])
```
```
plt.plot(history_wo.history['loss'][1:], label='Training')
plt.plot(history_wo.history['val_loss'][1:], label='Validation')
plt.title('Quantum CNN without Quantum Fully Connected Layer')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()
```
Now we compare the models by plotting the loss and accuracy during training.
```
plt.plot(history_with.history['loss'][1:], label='Training, with')
plt.plot(history_with.history['val_loss'][1:], label='Validation, with')
plt.plot(history_wo.history['loss'][1:], label='Training, w/o')
plt.plot(history_wo.history['val_loss'][1:], label='Validation, w/o')
plt.title('Quantum CNN with vs w/o Quantum Fully Connected Layer')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()
```
```
plt.plot(history_with.history['custom_accuracy'][1:], label='Training, with')
plt.plot(history_with.history['val_custom_accuracy'][1:], label='Validation, with')
plt.plot(history_wo.history['custom_accuracy'][1:], label='Training, w/o')
plt.plot(history_wo.history['val_custom_accuracy'][1:], label='Validation, w/o')
plt.title('Quantum CNN with vs w/o Quantum Fully Connected Layer')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
```
From these plots, we can clearly see that:
1. Both models fit the training dataset, with the training losses decreasing over time.
2. Both models overfit the training dataset. As expected, 100 training samples are too few for the models to generalize well.
3. Both models get stuck at around 0.55 accuracy (on average) and cannot improve further (the accuracy curves average out to a flat line).
4. The model with QFC performs slightly better (lower loss) after around 90 epochs.
Because the model with QFC seems to be better, I trained it for more epochs (up to 200). This time, I also prepended the quantum data with a cluster state circuit, as in the TFQ tutorial.
```
# Create our qubits and readout operators in Cirq.
qubits = cirq.GridQubit.rect(1, num_component)
readout_operators = cirq.Z(qubits[-1])
# Build a sequential model
image_input = tf.keras.Input(shape=(), dtype=tf.dtypes.string)
cluster_state = tfq.layers.AddCircuit()(
image_input, prepend=cluster_state_circuit(qubits))
quantum_model = tfq.layers.PQC(model_with_qfc(qubits),
readout_operators)(cluster_state)
qcnn_with_qfc_cluster = tf.keras.Model(inputs=[image_input], outputs=[quantum_model])
# Show the keras plot of the model
tf.keras.utils.plot_model(qcnn_with_qfc_cluster,
show_shapes=True,
show_layer_names=False,
dpi=70)
```
```
# Fit the model with QFC and cluster state circuit
history = qcnn_with_qfc_cluster.fit(x=train_quantum_data,
y=y_train,
batch_size=32,
epochs=100,
verbose=1,
validation_data=(test_quantum_data, y_test))
```
These are the results from the best model that I have achieved so far.
```
plt.plot(history.history['loss'][1:], label='Training')
plt.plot(history.history['val_loss'][1:], label='Validation')
plt.title('Quantum CNN with Quantum Fully Connected Layer')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
```
```
plt.plot(history.history['custom_accuracy'][1:], label='Training')
plt.plot(history.history['val_custom_accuracy'][1:], label='Validation')
plt.title('Quantum CNN with Quantum Fully Connected Layer')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
```
From the plots, it can be seen that:
1. The model performs better with the cluster state circuit, as the training loss is lower.
2. The model still overfits the training data, since the validation loss still does not improve.
3. The training accuracy is slightly higher (averaging 0.6 to 0.65) and shows an upward trend.
These results are very interesting, as they show that the cluster state helps the model perform better. It seems that the translational invariance and high entanglement of the cluster state do help.
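For concreteness, the kind of cluster state preparation referred to here (the `cluster_state_circuit` helper is defined earlier in this notebook, following the TFQ tutorial) can be sketched as below. This is an illustrative sketch, not necessarily the exact implementation used.
```
# Illustrative sketch of a cluster state preparation circuit.
# Assumption: the helper used above follows the TFQ QCNN tutorial construction
# (a Hadamard on every qubit, then CZ gates between neighboring qubits with wrap-around).
import cirq

def cluster_state_circuit_sketch(bits):
    circuit = cirq.Circuit()
    circuit.append(cirq.H.on_each(bits))
    for this_bit, next_bit in zip(bits, bits[1:] + [bits[0]]):
        circuit.append(cirq.CZ(this_bit, next_bit))
    return circuit
```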
### Amplitude Encoding
This code below is the implementation of amplitude encoding in Cirq.
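For reference, the `beta` helper below computes the rotation angles of the uniformly controlled rotations. Restated in 1-based notation (presumably the same formula discussed in the earlier approach section; the code uses 0-based NumPy indexing):
$$
\beta_{j}^{s} = 2\arcsin\left(\frac{\sqrt{\sum_{l=1}^{2^{s-1}} \left|a_{(2j-1)2^{s-1}+l}\right|^{2}}}{\sqrt{\sum_{l=1}^{2^{s}} \left|a_{(j-1)2^{s}+l}\right|^{2}}}\right),
$$
where $a$ is the (normalized) feature vector.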
```
# helper function
# function to calculate the angle β
def beta(s, j, X_sample):
# numerator index
index_num = (2*j-1)*(2**(s-1))
# denominator index
index_den = (j-1)*(2**s)
num = np.sqrt(np.sum(abs(X_sample[index_num : index_num+2**(s-1)])**2))
den = np.sqrt(np.sum(abs(X_sample[index_den : index_den+2**(s)])**2))
if den == 0:
#print("Zero denominator!")
beta = 0
else:
beta = 2*np.arcsin(num/(den+1e-10))
return beta
def decimalToBinary(n, length):
binary = bin(n).replace("0b", "")
if len(binary) != length:
for i in range(length - len(binary)):
binary = "0" + binary
return binary
def locate_x(current_j, prev_j, length):
loc = []
prev_binary = decimalToBinary(prev_j, length)
counter = 0
for i in decimalToBinary(current_j, length):
if i != prev_binary[counter]:
loc.append(counter)
counter += 1
return loc
```
```
def mcry_gate(qubits, i, theta):
control = [c for c in range(i)]
target = i
mult_cnot_qubit = len(control + [target])
unitary_matrix = np.zeros((2**mult_cnot_qubit, 2**mult_cnot_qubit))
np.fill_diagonal(unitary_matrix, val=1)
unitary_matrix[-1, :] *= 0
unitary_matrix[-1, -2] += 1
unitary_matrix[-2, :] *= 0
unitary_matrix[-2, -1] += 1
mcry = cirq.Circuit()
mcry += cirq.rx(np.pi/2)(qubits[target])
mcry += cirq.rz(theta/2)(qubits[target])
mcry += cirq.rx(-np.pi/2)(qubits[target])
# MCCNOT
mcry += cirq.ops.MatrixGate(unitary_matrix).on(*qubits)
mcry += cirq.rx(np.pi/2)(qubits[target])
mcry += cirq.rz(-theta/2)(qubits[target])
mcry += cirq.rx(-np.pi/2)(qubits[target])
# MCCNOT
mcry += cirq.ops.MatrixGate(unitary_matrix).on(*qubits)
return mcry
```
```
def amplitude_encoding(X_sample):
n = int(np.log2(len(X_sample)))
qubits = cirq.GridQubit.rect(1, n)
circuit = cirq.Circuit()
# for every qubits
for i in range(n):
# for every gates on the qubit
if i == 0:
circuit += cirq.ry(beta(n, 1, X_sample))(qubits[0])
else:
for j in range(2**i):
if j != 0:
for loc in locate_x((2**i)-j-1, (2**i)-j, length=i):
circuit += cirq.X(qubits[loc])
mcry_qubit = [qubits[c] for c in range(i+1)]
#circuit += mcry_gate(mcry_qubit, i, beta(n-i, (2**i)-j, X_sample))
circuit += cirq.ControlledGate(sub_gate=cirq.ry(beta(n-i, (2**i)-j, X_sample)), num_controls=len(mcry_qubit)-1)(*mcry_qubit)
for k in range(i):
circuit += cirq.X(qubits[k])
return circuit
```
Let's test out the code and see the measurement result.
```
# example of an already normalized feature vector
X_sample = np.array([np.sqrt(0.2), np.sqrt(0.2), np.sqrt(0.5), np.sqrt(0.1)])
qubits = cirq.GridQubit.rect(1, 2)
a = amplitude_encoding(X_sample)
a += cirq.measure(*qubits, key='result')
# Initialize simulator
s=cirq.Simulator()
# Sample the circuit with 10000 shots
samples=s.run(a, repetitions=10000)
samples.histogram(key="result")
```
Counter({0: 2050, 1: 2002, 2: 4987, 3: 961})
We can see that the quantum state's amplitudes match the feature values: the measurement frequencies (counts divided by the total number of shots) are, up to sampling noise, equal to the squared entries of the feature vector (remember that measurement probabilities are the squares of the amplitudes).
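A minimal numerical check of that claim (a sketch that assumes `samples` and `X_sample` from the cells above are still in scope):
```
# Compare empirical measurement frequencies with the squared amplitudes.
import numpy as np

counts = samples.histogram(key="result")
total_shots = sum(counts.values())
empirical = np.array([counts.get(i, 0) / total_shots for i in range(len(X_sample))])
expected = np.abs(X_sample) ** 2
print("empirical frequencies:", np.round(empirical, 3))
print("squared amplitudes:   ", np.round(expected, 3))
```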
#### Generate Quantum Data from the Dataset
Below is an attempt to use the amplitude encoding on the (PCA-reduced) dataset.
```
def amplitude_encoding_generate(X):
"""Generate quantum data from the dataset (after PCA)."""
quantum_data = []
# iterate through data samples
for sample in X:
quantum_data.append(amplitude_encoding(sample))
return tfq.convert_to_tensor(quantum_data)
```
Normalize all samples' feature vectors to conform with the amplitude encoding equation mentioned in the **The Approach Used to Tackle the Problem** section.
```
x_train_norm = (x_train_pca.T / np.sqrt(np.sum(x_train_pca ** 2, -1))).T
x_test_norm = (x_test_pca.T / np.sqrt(np.sum(x_test_pca ** 2, -1))).T
```
```
tfq.convert_to_tensor([amplitude_encoding(x_train_norm[47])])
```
tfq.convert_to_tensor failed to convert the multi-controlled RY gate.
```
qubits_test = cirq.GridQubit.rect(1, 3)
circ = cirq.Circuit()
circ += cirq.ControlledGate(sub_gate=cirq.X, num_controls=2)(*qubits_test)
print(circ)
```
(0, 0): ───@───
│
(0, 1): ───@───
│
(0, 2): ───X───
It also failed to convert this doubly-controlled X (CCX/Toffoli) gate.
```
tfq.convert_to_tensor([circ])
```
As of the time of submission, a workaround for this problem has not been found.
## Conclusion
QCNN models with and without the QFC layer have been trained. Both models were able to fit the training dataset, as the training loss decreased over time. Neither model generalized well, as the validation loss did not decrease. This is to be expected since the training dataset is too small. The QCNN model with the QFC layer shows slightly better performance, with a lower loss. It was also found that the cluster state helps the QCNN model achieve better performance (lower loss, higher accuracy).
For future work, a workaround to make the amplitude encoding method work needs to be found. It is one of the most promising encoding methods for data with a large number of features (e.g., images), as it only needs $\log_{2}(N)$ qubits, where $N$ is the number of features.
## References
1. [Y. LeCun and C. Cortes, “MNIST handwritten digit database,” 2010.](http://yann.lecun.com/exdb/mnist/)
2. [E. Farhi and H. Neven, “Classification with Quantum Neural Networks on Near Term Processors,” pp. 1–21, 2018.](https://arxiv.org/abs/1802.06002)
3. [A. Skolik, J. R. McClean, M. Mohseni, P. van der Smagt, and M. Leib, “Layerwise learning for quantum neural networks,” Quantum Mach. Intell., vol. 3, no. 1, p. 5, Jun. 2021.](https://link.springer.com/article/10.1007/s42484-020-00036-4)
4. [S. Mardirosian, “Quantum-enhanced Supervised Learning with Variational Quantum Circuits,” Leiden University, 2019.](https://theses.liacs.nl/pdf/2018-2019-MardirosianSevak.pdf)
5. [M. Mottonen, J. J. Vartiainen, V. Bergholm, and M. M. Salomaa, “Transformation of quantum states using uniformly controlled rotations,” Quantum Inf. Comput., vol. 5, no. 6, pp. 467–473, Jul. 2004.](https://dl.acm.org/doi/abs/10.5555/2011670.2011675)
6. [M. Schuld and F. Petruccione, Supervised Learning with Quantum Computers. 2018.](https://link.springer.com/book/10.1007/978-3-319-96424-9)
7. [I. Cong, S. Choi, and M. D. Lukin, “Quantum convolutional neural networks,” Nat. Phys., vol. 15, no. 12, pp. 1273–1278, Dec. 2019.](https://www.nature.com/articles/s41567-019-0648-8)
###### Content under Creative Commons Attribution license CC-BY 4.0, code under MIT license (c)2014 L.A. Barba, G.F. Forsyth, C.D. Cooper. Partly based on content by David Ketcheson, also under CC-BY.
# Phugoid model: bonus!
_The phugoid model of glider flight_ has been such a fun problem to showcase the power of numerical solution of differential equations, we thought you'd enjoy a bonus notebook. The previous lessons were:
* [Phugoid motion](http://nbviewer.ipython.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/01_phugoid/01_01_Phugoid_Theory.ipynb) —Lays the groundwork for our fun problem, with some context, a little history and a description of the physics of phugoids: curves representing the trajectory of a glider exchanging potential and kinetic energy, with no drag.
* [Phugoid oscillation](http://nbviewer.ipython.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/01_phugoid/01_02_Phugoid_Oscillation.ipynb) —Develops the simple harmonic motion of an aircraft experiencing a small perturbation from the horizontal trajectory: our opportunity to introduce Euler's method, and study its convergence via an exact solution.
* [Full phugoid motion](http://nbviewer.ipython.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/01_phugoid/01_03_PhugoidFullModel.ipynb) —The full model takes into account the force of drag and results in a system of two nonlinear equations. We obtain the trajectories using Euler's method in vectorized form, introduce grid-convergence analysis and finish with the paper-airplane challenge!
That is a fantastic foundation for numerical methods. It's a good time to complement it with some theory: the first screencast of the course uses Taylor series to show that _Euler's method is a first-order method_, and we also show you graphical interpretations. Many problems require a more accurate method, though: second order or higher. Among the most popular higher-order methods that we can mention are the _Runge-Kutta methods_, developed around 1900: more than 100 years after Euler published his book containing the method now named after him!
### Euler's method is a first-order method
In this screencast, we use a Taylor series expansion to analyze Euler's method and show that it incurs a truncation error of first order. We also use a graphical interpretation to motivate the _modified_ Euler method, which achieves second order.
```python
from IPython.display import YouTubeVideo
YouTubeVideo('6i6qhqDCViA')
```
## Second-order methods
The notebook on phugoid oscillation (lesson 2) included a study of the accuracy obtained with Euler's method, using the exact solution for the simple harmonic motion. We made a _convergence plot_ and saw that as $\Delta t$ gets smaller, the error also gets smaller.
We could have drawn a line with a slope equal to 1 on that log-log plot, and you would have seen that it was parallel to the convergence line. A slope equal to 1 on a log-log convergence plot is an indication that we have a first-order method: the error scales as ${\mathcal O}(\Delta t)$.
In lesson 3, using the full phugoid model (which is nonlinear and does not have an exact solution), we did a _grid-convergence study_ with three different grids, and obtained the _observed_ order of convergence—it was very close to 1, indicating a slope of 1 on a log-log plot.
Another way to look at an ${\mathcal O}(\Delta t)$ method is to say that the error scales _linearly_ with the step size, or that they are proportional:
$$ e \propto \Delta t.$$
where $e$ stands for the error. To get more accuracy, we could use a _second-order_ method, in which the error is ${\mathcal O}(\Delta t^2)$. In general, we say that a method is of order $p$ when the error is proportional to $(\Delta t)^p$.
In the screencast titled "Euler's method is a first-order method," we used a graphical interpretation to get an idea for improving it: by estimating an intermediate point, like the **midpoint**, we can get a better approximation of the area under the curve of $u^\prime$. The scheme has two steps and is written as:
\begin{align}
u_{n+1/2} & = u_n + \frac{\Delta t}{2} f(u_n) \\
u_{n+1} & = u_n + \Delta t \,\, f(u_{n+1/2}).
\end{align}
This method is known as the *explicit midpoint method* or the *modified Euler method*, and it is a second-order method. Notice that we had to apply the right-hand side, $~f(u)$, twice. This idea can be extended: we could imagine estimating additional points between $u_{n}$ and $u_{n+1}$ and evaluating $~f(u)$ at the intermediate points to get higher accuracy—that's the idea behind Runge-Kutta methods.
### Runge-Kutta methods
In the modified Euler method, we improve the accuracy over Euler's method by evaluating the right-hand side of the differential equation at an intermediate point: the midpoint. The same idea can be applied again, and the function $f(u)$ can be evaluated at more intermediate points, improving the accuracy even more. This is the basis of the famous *Runge-Kutta (RK) methods*, going back to Carl Runge and Martin Kutta. The modified Euler method corresponds to _second-order_ Runge-Kutta.
Here's a bit of historical coincidence that will blow your mind: Carl Runge's daughter Iris—an accomplished applied mathematician in her own right—worked assiduously over the summer of 1909 to translate Lanchester's _"Aerodonetics."_ She also reproduced his graphical method to draw the phugoid curves (Tobies, 2012).
### Phugoid model with 2nd-order RK
Let's compute the motion of a glider under the full phugoid model using the second-order Runge-Kutta method. We'll build on the _paper airplane challenge_ of lesson 3 now, and look for the horizontal distance that the plane travels until the moment it touches the ground.
As usual, let's start by importing the libraries and modules that we need, and setting up the model parameters. We also set some default plotting formats using the [`rcParams`](http://matplotlib.org/api/matplotlib_configuration_api.html#matplotlib.rcParams) module.
```python
from math import sin, cos, log
import numpy
from matplotlib import pyplot
%matplotlib inline
from matplotlib import rcParams
rcParams['font.family'] = 'serif'
rcParams['font.size'] = 16
```
In the paper-airplane challenge of lesson 3, we suggested an $L/D=5.0$ as a realistic value for paper airplanes, according to experiments, and a trim velocity of 4.9 m/s. Let's start with those values, but you could experiment changing these a bit. _What do you think will happen if you make $L/D$ higher?_
```python
# model parameters:
g = 9.8 # gravity in m s^{-2}
v_t = 4.9 # trim velocity in m s^{-1}
C_D = 1/5.0 # drag coefficient --- or D/L if C_L=1
C_L = 1.0 # for convenience, use C_L = 1
### set initial conditions ###
v0 = 6.5 # start at the trim velocity (or add a delta)
theta0 = -0.1 # initial angle of trajectory
x0 = 0.0     # horizontal position is arbitrary
y0 = 2.0 # initial altitude
```
Among the initial parameters that we suggest for your first experiment, we start with a velocity a little higher than the trim velocity, launch the paper airplane with a negative initial angle, and take the initial height to be 2 meters—all of which sound like reasonable choices.
Now, we can define a few functions to carry out the computation:
* The right-hand side of the phugoid model from [Lesson 3](http://nbviewer.ipython.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/01_phugoid/01_03_PhugoidFullModel.ipynb),
* One step of the Euler's method that we learned in [Lesson 2](http://nbviewer.ipython.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/01_phugoid/01_02_Phugoid_Oscillation.ipynb), and
* Differences with respect to a fine grid, as in [Lesson 3](http://nbviewer.ipython.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/01_phugoid/01_03_PhugoidFullModel.ipynb).
```python
def f(u):
"""Returns the right-hand side of the phugoid system of equations.
Parameters
----------
u : array of float
array containing the solution at time n.
Returns
-------
dudt : array of float
array containing the RHS given u.
"""
v = u[0]
theta = u[1]
x = u[2]
y = u[3]
return numpy.array([-g*sin(theta) - C_D/C_L*g/v_t**2*v**2,
-g*cos(theta)/v + g/v_t**2*v,
v*cos(theta),
v*sin(theta)])
def euler_step(u, f, dt):
"""Returns the solution at the next time-step using Euler's method.
Parameters
----------
u : array of float
solution at the previous time-step.
f : function
function to compute the right hand-side of the system of equation.
dt : float
time-increment.
Returns
-------
u_n_plus_1 : array of float
approximate solution at the next time step.
"""
return u + dt * f(u)
def get_diffgrid(u_current, u_fine, dt):
"""Returns the difference between one grid and the fine one using L-1 norm.
Parameters
----------
u_current : array of float
solution on the current grid.
u_finest : array of float
solution on the fine grid.
dt : float
time-increment on the current grid.
Returns
-------
diffgrid : float
difference computed in the L-1 norm.
"""
N_current = len(u_current[:,0])
N_fine = len(u_fine[:,0])
    grid_size_ratio = int(numpy.ceil(N_fine/N_current))  # integer ratio needed for the slicing below
diffgrid = dt * numpy.sum( numpy.abs(\
u_current[:,2]- u_fine[::grid_size_ratio,2]))
return diffgrid
```
Next, we also need to define the function `rk2_step()` that computes the next time step using the *modified Euler* method of equations $(1)$ and $(2)$, above, otherwise known as 2nd-order Runge-Kutta or RK2. This function will be called over and over again within the time loop.
```python
def rk2_step(u, f, dt):
"""Returns the solution at the next time-step using 2nd-order Runge-Kutta.
Parameters
----------
u : array of float
solution at the previous time-step.
f : function
function to compute the right hand-side of the system of equation.
dt : float
time-increment.
Returns
-------
u_n_plus_1 : array of float
solution at the next time step.
"""
u_star = u + 0.5*dt*f(u)
return u + dt*f(u_star)
```
Like in [Lesson 3](http://nbviewer.ipython.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/01_phugoid/01_03_PhugoidFullModel.ipynb), we first need to set up the time discretization, then initialize arrays to save the solution and we are set to go! The only difference this time is that we are using _both_ Euler's method and 2nd-order Runge-Kutta to get a solution, to compare the two.
```python
# set time-increment and discretize the time
T = 15.0 # final time
dt = 0.01 # set time-increment
N = int(T/dt) + 1 # number of time-steps
# set initial conditions
u_euler = numpy.empty((N, 4))
u_rk2 = numpy.empty((N, 4))
# initialize the array containing the solution for each time-step
u_euler[0] = numpy.array([v0, theta0, x0, y0])
u_rk2[0] = numpy.array([v0, theta0, x0, y0])
# use a for loop to call the function rk2_step()
for n in range(N-1):
u_euler[n+1] = euler_step(u_euler[n], f, dt)
u_rk2[n+1] = rk2_step(u_rk2[n], f, dt)
```
Now we can get the position of the glider in time, according to both Euler's method and the 2nd-order Runge-Kutta method, by extracting the appropriate portions of the solution arrays:
```python
x_euler = u_euler[:,2]
y_euler = u_euler[:,3]
x_rk2 = u_rk2[:,2]
y_rk2 = u_rk2[:,3]
```
##### How far will it fly before touching the ground?
As the $y$-axis measures the vertical coordinate with respect to the ground, negative values of $y$ don't have any physical meaning: the glider would have hit the ground by then! To find out if there are any negative $y$ values we can use the handy function [`numpy.where`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html). This function returns the **indices** of the elements in an array that match a given condition. For example, `numpy.where(y_euler<0)[0]` gives an array of the indices `i` where `y_euler[i]<0` (the `[0]` is necessary as `numpy.where` returns an array, which in this case contains a single line). If no elements of the array match the condition, the array of indices comes out empty.
From the physical problem, we know that once there is one negative value, the glider has hit the ground and all the remaining time-steps are unphysical. Therefore, we are interested in finding the _first_ index where the condition applies, given by `numpy.where(y_euler<0)[0][0]`—do read the documentation of the function if you need to!
```python
# get the index of element of y where altitude becomes negative
idx_negative_euler = numpy.where(y_euler<0.0)[0]
if len(idx_negative_euler)==0:
idx_ground_euler = N-1
print ('Euler integration has not touched ground yet!')
else:
idx_ground_euler = idx_negative_euler[0]
idx_negative_rk2 = numpy.where(y_rk2<0.0)[0]
if len(idx_negative_rk2)==0:
idx_ground_rk2 = N-1
print ('Runge-Kutta integration has not touched ground yet!')
else:
idx_ground_rk2 = idx_negative_rk2[0]
```
##### Do Euler and RK2 produce the same solution?
An easy way to compare the numerical results obtained with the Euler and 2nd-order Runge-Kutta methods is using [`numpy.allclose`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.allclose.html). This function compares each element of two arrays and returns `True` if each comparison is within some relative tolerance. Here, we use the default tolerance: $10^{-5}$.
```python
# check to see if the paths match
print('Are the x-values close? {}'.format(numpy.allclose(x_euler, x_rk2)))
print('Are the y-values close? {}'.format(numpy.allclose(y_euler, y_rk2)))
```
Are the x-values close? False
Are the y-values close? False
Hmmm, they do differ. Maybe $10^{-5}$ is too tight a tolerance, considering we're using a somewhat coarse grid with first- and second-order methods. Perhaps we can assess this visually, by plotting the glider's path? Study the code below, where we are plotting the path twice, taking a closer look in the second plot by "zooming in" to the beginning of the flight.
```python
# plot the glider path
pyplot.figure(figsize=(10,6))
pyplot.subplot(121)
pyplot.grid(True)
pyplot.xlabel('$x$')
pyplot.ylabel('$y$')
pyplot.plot(x_euler[:idx_ground_euler], y_euler[:idx_ground_euler], 'k-', label='Euler')
pyplot.plot(x_rk2[:idx_ground_rk2], y_rk2[:idx_ground_rk2], 'r--', label='RK2')
pyplot.title('distance traveled: {:.3f}'.format(x_rk2[idx_ground_rk2-1]))
pyplot.legend();
# Let's take a closer look!
pyplot.subplot(122)
pyplot.grid(True)
pyplot.xlabel('$x$')
pyplot.ylabel('$y$')
pyplot.plot(x_euler, y_euler, 'k-', label='Euler')
pyplot.plot(x_rk2, y_rk2, 'r--', label='RK2')
pyplot.xlim(0,5)
pyplot.ylim(1.8,2.5);
```
From far away, the Euler and RK2 methods seem to be producing similar answers. However, if we take a closer look, small differences become evident. Keep in mind that we are solving the same equation and both methods will converge to the same solution as we refine the grid. However, they converge to that solution at different rates: RK2 gets more accurate faster, as you make $\Delta t$ smaller.
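If you want to put a number on those small differences, you can compute the largest gap between the two trajectories and rerun the comparison with a looser tolerance (the tolerance values below are just illustrative choices):
```python
# Quantify how far apart the Euler and RK2 trajectories actually are.
# The rtol/atol values are illustrative, not prescribed.
print('max |x difference|: {:.4f}'.format(numpy.abs(x_euler - x_rk2).max()))
print('max |y difference|: {:.4f}'.format(numpy.abs(y_euler - y_rk2).max()))
print('close at rtol=1e-2, atol=1e-2? {}'.format(
    numpy.allclose(y_euler, y_rk2, rtol=1e-2, atol=1e-2)))
```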
### Grid-convergence
Just like in [Lesson 3](http://nbviewer.ipython.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/01_phugoid/01_03_PhugoidFullModel.ipynb), we want to do a grid-convergence study with RK2, to see if we indeed observe the expected rate of convergence. It is always an important step in a numerical solution to investigate whether the method is behaving the way we expect it to: this needs to be confirmed experimentally for every new problem we solve and for every new method we apply!
In the code below, a `for`-loop computes the solution on different time grids, with the coarsest and finest grid differing by 100x. We can use the difference between solutions to investigate convergence, as before.
```python
# use a for-loop to compute the solution on different grids
dt_values = numpy.array([0.1, 0.05, 0.01, 0.005, 0.001])
u_values = numpy.empty_like(dt_values, dtype=numpy.ndarray)
for i, dt in enumerate(dt_values):
N = int(T/dt)+1 # number of time-steps
### discretize the time t ###
t = numpy.linspace(0.0, T, N)
# initialize the array containing the solution for each time-step
u = numpy.empty((N, 4))
u[0] = numpy.array([v0, theta0, x0, y0])
# time loop
for n in range(N-1):
u[n+1] = rk2_step(u[n], f, dt)
# store the value of u related to one grid
u_values[i] = u
```
Once those runs are done, we compute the difference between each numerical solution and the fine-grid solution.
```python
# compute diffgrid
diffgrid = numpy.empty_like(dt_values)
for i, dt in enumerate(dt_values):
diffgrid[i] = get_diffgrid(u_values[i], u_values[-1], dt)
```
And now we plot!
```python
# plot using the matplotlib function loglog()
pyplot.figure(figsize=(6,6))
pyplot.grid(True)
pyplot.xlabel(r'$\Delta t$', fontsize=18)
pyplot.ylabel(r'$L_1$-norm of the grid differences', fontsize=18)
pyplot.xlim(1e-4,1)
pyplot.ylim(1e-4,1)
pyplot.axis('equal')
pyplot.loglog(dt_values[:-1], diffgrid[:-1], color='k', ls='--', lw=2, marker='o');
```
This is looking good! The difference relative to our fine-grid solution is decreasing with the mesh size at a faster rate than in [Lesson 3](http://nbviewer.ipython.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/01_phugoid/01_03_PhugoidFullModel.ipynb), but *how much faster?* When we computed the observed order of convergence with Euler's method, we got a value close to 1—it's a first-order method. Can you guess what we'll get now with RK2?
To compute the observed order of convergence, we use three grid resolutions that are refined at a constant rate, in this case $r=2$.
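Specifically, with solutions computed on grids of spacing $h$, $rh$ and $r^2h$, the code below estimates the observed order from the ratio of successive grid differences (the differences are computed in the $L_1$ norm by `get_diffgrid`):
$$
\alpha \approx \frac{\log\left(\dfrac{\text{diff}\left(u_{r^2h},\, u_{rh}\right)}{\text{diff}\left(u_{rh},\, u_{h}\right)}\right)}{\log(r)}
$$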
```python
# check convergence rate
r = 2
h = 0.001
dt_values = numpy.array([h, r*h, r**2*h])
u_values = numpy.empty_like(dt_values, dtype=numpy.ndarray)
for i, dt in enumerate(dt_values):
N = int(T/dt)+1 # number of time-steps
### discretize the time t ###
t = numpy.linspace(0.0, T, N)
# initialize the array containing the solution for each time-step
u = numpy.empty((N, 4))
u[0] = numpy.array([v0, theta0, x0, y0])
# time loop
for n in range(N-1):
### call rk2_step() ###
u[n+1] = rk2_step(u[n], f, dt)
# store the value of u related to one grid
u_values[i] = u
# calculate the order of convergence
alpha = (log(get_diffgrid(u_values[2], u_values[1], dt_values[2]))
- log(get_diffgrid(u_values[1], u_values[0], dt_values[1]))) / log(r)
print('The order of convergence is alpha = {:.3f}'.format(alpha))
```
The order of convergence is alpha = 1.983
Probably you're not too surprised to see that the observed order of convergence is close to $2$. Because we used a second-order method! This means that the numerical solution is converging with the grid resolution twice as fast compared with Euler's method in [Lesson 3](http://nbviewer.ipython.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/01_phugoid/01_03_PhugoidFullModel.ipynb), or in other words, the error scales as ${\mathcal O}(\Delta t^2)$. That is a lot faster! However, we are paying a price here: second-order Runge-Kutta requires more computations per iteration.
##### Challenge task
How much longer does it take to get the solution with RK2, compared to Euler's method? Run the same solution (same time grid, same parameters), but find a way to *time* the calculation with Python, and compare the runtimes.
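One possible way to approach this is sketched below, using the standard library's `time` module (in a notebook, the `%timeit` magic is another option); treat it as a starting point rather than the only answer.
```python
import time

# Time the two schemes on the same grid (a rough, single-run measurement).
dt = 0.01
N = int(T/dt) + 1

u = numpy.array([v0, theta0, x0, y0])
start = time.perf_counter()
for n in range(N-1):
    u = euler_step(u, f, dt)
euler_seconds = time.perf_counter() - start

u = numpy.array([v0, theta0, x0, y0])
start = time.perf_counter()
for n in range(N-1):
    u = rk2_step(u, f, dt)
rk2_seconds = time.perf_counter() - start

print('Euler: {:.3f} s, RK2: {:.3f} s, ratio: {:.2f}'.format(
    euler_seconds, rk2_seconds, rk2_seconds/euler_seconds))
```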
## Multi-step methods
The screencast *"Euler's method is a first-order method"* motivated graphically an idea to get increased accuracy: using intermediate points between $u_{n}$ and $u_{n+1}$ and evaluating the right-hand side of the differential equation at those intermediate points. The idea is to somehow get a better approximation using more data from the function $f(u)$.
Another way to bring more information about $f(u)$ into the numerical solution is to look at time data $t\lt t_{n}$. For example, we can involve in the calculation of the solution $u_{n+1}$ the known solution at $u_{n-1}$, in addition to $u_{n}$. Schemes that use this idea are called _multi-step methods_.
A classical multi-step method achieves second order by applying a _centered difference_ approximation of the derivative $u'$:
$$ u'(t) \approx \frac{u_{n+1} - u_{n-1}}{2\Delta t}.$$
Isolate the future value of the solution $u_{n+1}$ and apply the differential equation $u'=f(u)$, to get the following formula for this method:
$$ u_{n+1} = u_{n-1} + 2\Delta t \, f(u_n),$$
This scheme is known as the **leapfrog method**. Notice that it is using the right-hand side of the differential equation, $f(u)$, evaluated at the _midpoint_ between $u_{n-1}$ and $u_{n+1}$, where the time interval between these two solutions is $2\Delta t$. Why is it called "leapfrog"? If you imagine for a moment all of the _even_ indices $n$ of the numerical solution, you notice that these solution values are computed using the slope estimated from _odd_ values $n$, and vice-versa.
Let's define a function that computes the numerical solution using the leapfrog method:
```python
def leapfrog_step(unm1, u, f, dt):
"""Returns the solution time-step n+1) using Euler's method.
Parameters
----------
unm1 : array of float
solution at time-step n-1.
u : array of float
solution at time-step n.
f : function
function to compute the right hand-side of the system of equation.
dt : float
time-increment.
Returns
-------
u_n_plus_1 : array of float
solution at time-step n+1.
"""
return unm1 + 2.0*dt*f(u)
```
But wait ... what will we do at the _initial_ time step, when we don't have information for $u_{n-1}$? This is an issue with all multi-step methods: we say that they are _not self-starting_. In the first time step, we need to use another method to get the first "kick"—either Euler's method or 2nd-order Runge-Kutta could do: let's use RK2, since it's also second order.
For this calculation, we are going to re-enter the model parameters in the code cell below, so that later on we can experiment here using the leapfrog method and different starting values. At the end of this notebook, we'll give you some other model parameters to try that will create a very interesting situation!
```python
# model parameters:
g = 9.8 # gravity in m s^{-2}
v_t = 4.9 # trim velocity in m s^{-1}
C_D = 1/5.0 # drag coefficient --- or D/L if C_L=1
C_L = 1.0 # for convenience, use C_L = 1
### set initial conditions ###
v0 = 6.5 # start at the trim velocity (or add a delta)
theta0 = -0.1 # initial angle of trajectory
x0 = 0.0     # horizontal position is arbitrary
y0 = 2.0 # initial altitude
# set time-increment and discretize the time
T = 15.0 # final time
dt = 0.01 # set time-increment
N = int(T/dt) + 1 # number of time-steps
# set initial conditions
u_leapfrog = numpy.empty((N, 4))
# initialize the array containing the solution for each time-step
u_leapfrog[0] = numpy.array([v0, theta0, x0, y0])
# first step using RK2
u_leapfrog[1] = rk2_step(u_leapfrog[0], f, dt)
```
Now we have all the required information to loop in time using the leapfrog method. The code cell below calls the leapfrog function for each time step.
```python
# use a for loop to call the function leapfrog_step()
for n in range(1,N-1):
u_leapfrog[n+1] = leapfrog_step(u_leapfrog[n-1], u_leapfrog[n], f, dt)
```
Like before, we extract from the solution array the information about the glider's position in time and find where it reaches the ground.
```python
# get the glider position in time
x_leapfrog = u_leapfrog[:,2]
y_leapfrog = u_leapfrog[:,3]
# get the index of element of y where altitude becomes negative
idx_negative_leapfrog = numpy.where(y_leapfrog<0.0)[0]
if len(idx_negative_leapfrog)==0:
idx_ground_leapfrog = N-1
print ('The glider has not reached the ground yet!')
else:
idx_ground_leapfrog = idx_negative_leapfrog[0]
```
Plotting the glider's trajectory with both the leapfrog and RK2 methods, we find that the solutions are very close to each other now: we don't see the differences that were apparent when we compared Euler's method and RK2.
```python
# plot the glider path
pyplot.figure(figsize=(11,8))
pyplot.subplot(121)
pyplot.grid(True)
pyplot.xlabel('$x$')
pyplot.ylabel('$y$')
pyplot.plot(x_leapfrog[:idx_ground_leapfrog], y_leapfrog[:idx_ground_leapfrog], color='k', ls='-', lw=2)
pyplot.title('distance traveled: {:.3f}'.format(x_leapfrog[idx_ground_leapfrog-1]), fontsize=18);
# Let's take a closer look!
pyplot.subplot(122)
pyplot.grid(True)
pyplot.xlabel('$x$')
pyplot.ylabel('$y$')
pyplot.plot(x_leapfrog[:idx_ground_leapfrog], y_leapfrog[:idx_ground_leapfrog], color='k', ls=':', lw=2)
pyplot.plot(x_rk2, y_rk2, 'r--', label='RK2')
pyplot.xlim(0,5)
pyplot.ylim(1.8,2.5);
```
What about the observed order of convergence? We'll repeat the process we have used before, with a grid-refinement ratio $r=2$ ... here we go:
```python
# check convergence rate
r = 2
h = 0.001
dt_values = numpy.array([h, r*h, r**2*h])
u_values = numpy.empty_like(dt_values, dtype=numpy.ndarray)
for i, dt in enumerate(dt_values):
N = int(T/dt) + 1 # number of time-steps
### discretize the time t ###
t = numpy.linspace(0.0, T, N)
# initialize the array containing the solution for each time-step
u = numpy.empty((N, 4))
u[0] = numpy.array([v0, theta0, x0, y0])
# time loop
u[1] = rk2_step(u[0], f, dt)
for n in range(1, N-1):
u[n+1] = leapfrog_step(u[n-1], u[n], f, dt)
# store the value of u related to one grid
u_values[i] = u
# calculate the order of convergence
alpha = (log(get_diffgrid(u_values[2], u_values[1], dt_values[2]))
- log(get_diffgrid(u_values[1], u_values[0], dt_values[1]))) / log(r)
print('The order of convergence is alpha = {:.3f}'.format(alpha))
```
The order of convergence is alpha = 2.186
We now have numerical evidence that our calculation with the leapfrog method indeed exhibits second-order convergence, i.e., the method is ${\mathcal O}(\Delta t^2)$. _The leapfrog method is a second-order method_. Good job!
### But chew on this ...
Go back to the cell that re-enters the model parameters, just above the leapfrog-method time loop, and change the following: the initial height `y0` to 25, and the final time `T` to 36. Now re-run the leapfrog calculation and the two code cells below that, which extract the glider's position and plot it.
_What is going on?_
## Reference
Tobies, R. "Iris Runge: A life at the crossroads of mathematics, science and industry," Springer Basel, 1st ed. (2012). [Read on Google books, page 73](http://books.google.com/books?id=EDm0eQqFUQ4C&lpg=PA73&dq=%22I%20have%20been%20making%20good%20progress%20with%20Lanchester.%20The%20second%20chapter%20is%20already%20on%20your%20desk%22&pg=PA73#v=onepage&q=%22I%20have%20been%20making%20good%20progress%20with%20Lanchester.%20The%20second%20chapter%20is%20already%20on%20your%20desk%22&f=false).
---
###### The cell below loads the style of the notebook.
```python
from IPython.core.display import HTML
css_file = '../../styles/numericalmoocstyle.css'
HTML(open(css_file, "r").read())
```
<link href='http://fonts.googleapis.com/css?family=Alegreya+Sans:100,300,400,500,700,800,900,100italic,300italic,400italic,500italic,700italic,800italic,900italic' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Arvo:400,700,400italic' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=PT+Mono' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Shadows+Into+Light' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Nixie+One' rel='stylesheet' type='text/css'>
<style>
@font-face {
font-family: "Computer Modern";
src: url('http://mirrors.ctan.org/fonts/cm-unicode/fonts/otf/cmunss.otf');
}
#notebook_panel { /* main background */
background: rgb(245,245,245);
}
div.cell { /* set cell width */
width: 750px;
}
div #notebook { /* centre the content */
background: #fff; /* white background for content */
width: 1000px;
margin: auto;
padding-left: 0em;
}
#notebook li { /* More space between bullet points */
margin-top:0.8em;
}
/* draw border around running cells */
div.cell.border-box-sizing.code_cell.running {
border: 1px solid #111;
}
/* Put a solid color box around each cell and its output, visually linking them*/
div.cell.code_cell {
background-color: rgb(256,256,256);
border-radius: 0px;
padding: 0.5em;
margin-left:1em;
margin-top: 1em;
}
div.text_cell_render{
font-family: 'Alegreya Sans' sans-serif;
line-height: 140%;
font-size: 125%;
font-weight: 400;
width:600px;
margin-left:auto;
margin-right:auto;
}
/* Formatting for header cells */
.text_cell_render h1 {
font-family: 'Nixie One', serif;
font-style:regular;
font-weight: 400;
font-size: 45pt;
line-height: 100%;
color: rgb(0,51,102);
margin-bottom: 0.5em;
margin-top: 0.5em;
display: block;
}
.text_cell_render h2 {
font-family: 'Nixie One', serif;
font-weight: 400;
font-size: 30pt;
line-height: 100%;
color: rgb(0,51,102);
margin-bottom: 0.1em;
margin-top: 0.3em;
display: block;
}
.text_cell_render h3 {
font-family: 'Nixie One', serif;
margin-top:16px;
font-size: 22pt;
font-weight: 600;
margin-bottom: 3px;
font-style: regular;
color: rgb(102,102,0);
}
.text_cell_render h4 { /*Use this for captions*/
font-family: 'Nixie One', serif;
font-size: 14pt;
text-align: center;
margin-top: 0em;
margin-bottom: 2em;
font-style: regular;
}
.text_cell_render h5 { /*Use this for small titles*/
font-family: 'Nixie One', sans-serif;
font-weight: 400;
font-size: 16pt;
color: rgb(163,0,0);
font-style: italic;
margin-bottom: .1em;
margin-top: 0.8em;
display: block;
}
.text_cell_render h6 { /*use this for copyright note*/
font-family: 'PT Mono', sans-serif;
font-weight: 300;
font-size: 9pt;
line-height: 100%;
color: grey;
margin-bottom: 1px;
margin-top: 1px;
}
.CodeMirror{
font-family: "PT Mono";
font-size: 90%;
}
</style>
# Exploring Data with Python
A significant part of a data scientist's role is to explore, analyze, and visualize data. There's a wide range of tools and programming languages that they can use to do this, and one of the most popular approaches is to use Jupyter notebooks (like this one) and Python.
Python is a flexible programming language that is used in a wide range of scenarios, from web applications to device programming. It's extremely popular in the data science and machine learning community because of the many packages it supports for data analysis and visualization.
In this notebook, we'll explore some of these packages, and apply basic techniques to analyze data. This is not intended to be a comprehensive Python programming exercise; or even a deep dive into data analysis. Rather, it's intended as a crash course in some of the common ways in which data scientists can use Python to work with data.
> **Note**: If you've never used the Jupyter Notebooks environment before, there are a few things you should be aware of:
>
> - Notebooks are made up of *cells*. Some cells (like this one) contain *markdown* text, while others (like the one beneath this one) contain code.
> - The notebook is connected to a Python *kernel* (you can see which one at the top right of the page - if you're running this notebook in an Azure Machine Learning compute instance it should be connected to the **Python 3.6 - AzureML** kernel). If you stop the kernel or disconnect from the server (for example, by closing and reopening the notebook, or ending and resuming your session), the output from cells that have been run will still be displayed; but any variables or functions defined in those cells will have been lost - you must rerun the cells before running any subsequent cells that depend on them.
> - You can run each code cell by using the **► Run** button. The **◯** symbol next to the kernel name at the top right will briefly turn to **⚫** while the cell runs before turning back to **◯**.
> - The output from each code cell will be displayed immediately below the cell.
> - Even though the code cells can be run individually, some variables used in the code are global to the notebook. That means that you should run all of the code cells <u>**in order**</u>. There may be dependencies between code cells, so if you skip a cell, subsequent cells might not run correctly.
## Exploring data arrays with NumPy
Let's start by looking at some simple data.
Suppose a college takes a sample of student grades for a data science class.
Run the code in the cell below by clicking the **► Run** button to see the data.
```python
data = [50,50,47,97,49,3,53,42,26,74,82,62,37,15,70,27,36,35,48,52,63,64]
print(data)
```
[50, 50, 47, 97, 49, 3, 53, 42, 26, 74, 82, 62, 37, 15, 70, 27, 36, 35, 48, 52, 63, 64]
The data has been loaded into a Python **list** structure, which is a good data type for general data manipulation, but not optimized for numeric analysis. For that, we're going to use the **NumPy** package, which includes specific data types and functions for working with *Num*bers in *Py*thon.
Run the cell below to load the data into a NumPy **array**.
```python
import numpy as np
grades = np.array(data)
print(grades)
```
[50 50 47 97 49 3 53 42 26 74 82 62 37 15 70 27 36 35 48 52 63 64]
Just in case you're wondering about the differences between a **list** and a NumPy **array**, let's compare how these data types behave when we use them in an expression that multiplies them by 2.
```python
print (type(data),'x 2:', data * 2)
print('---')
print (type(grades),'x 2:', grades * 2)
```
<class 'list'> x 2: [50, 50, 47, 97, 49, 3, 53, 42, 26, 74, 82, 62, 37, 15, 70, 27, 36, 35, 48, 52, 63, 64, 50, 50, 47, 97, 49, 3, 53, 42, 26, 74, 82, 62, 37, 15, 70, 27, 36, 35, 48, 52, 63, 64]
---
<class 'numpy.ndarray'> x 2: [100 100 94 194 98 6 106 84 52 148 164 124 74 30 140 54 72 70
96 104 126 128]
Note that multiplying a list by 2 creates a new list of twice the length, with the original sequence of list elements repeated. Multiplying a NumPy array, on the other hand, performs an element-wise calculation in which the array behaves like a *vector*, so we end up with an array of the same size in which each element has been multiplied by 2.
The key takeaway from this is that NumPy arrays are specifically designed to support mathematical operations on numeric data - which makes them more useful for data analysis than a generic list.
You might have spotted that the class type for the numpy array above is a **numpy.ndarray**. The **nd** indicates that this is a structure that can consist of multiple *dimensions* (it can have *n* dimensions). Our specific instance has a single dimension of student grades.
Run the cell below to view the **shape** of the array.
```python
grades.shape
```
(22,)
The shape confirms that this array has only one dimension, which contains 22 elements (there are 22 grades in the original list). You can access the individual elements in the array by their zero-based ordinal position. Let's get the first element (the one in position 0).
```python
grades[0]
```
50
Alright, now you know your way around a NumPy array, it's time to perform some analysis of the grades data.
You can apply aggregations across the elements in the array, so let's find the simple average grade (in other words, the *mean* grade value).
```python
grades.mean()
```
49.18181818181818
So the mean grade is just around 50 - more or less in the middle of the possible range from 0 to 100.
Let's add a second set of data for the same students, this time recording the typical number of hours per week they devoted to studying.
```python
# Define an array of study hours
study_hours = [10.0,11.5,9.0,16.0,9.25,1.0,11.5,9.0,8.5,14.5,15.5,
13.75,9.0,8.0,15.5,8.0,9.0,6.0,10.0,12.0,12.5,12.0]
# Create a 2D array (an array of arrays)
student_data = np.array([study_hours, grades])
# display the array
student_data
```
array([[10. , 11.5 , 9. , 16. , 9.25, 1. , 11.5 , 9. , 8.5 ,
14.5 , 15.5 , 13.75, 9. , 8. , 15.5 , 8. , 9. , 6. ,
10. , 12. , 12.5 , 12. ],
[50. , 50. , 47. , 97. , 49. , 3. , 53. , 42. , 26. ,
74. , 82. , 62. , 37. , 15. , 70. , 27. , 36. , 35. ,
48. , 52. , 63. , 64. ]])
Now the data consists of a 2-dimensional array - an array of arrays. Let's look at its shape.
```python
# Show shape of 2D array
student_data.shape
```
(2, 22)
The **student_data** array contains two elements, each of which is an array containing 22 elements.
To navigate this structure, you need to specify the position of each element in the hierarchy. So to find the first value in the first array (which contains the study hours data), you can use the following code.
```python
# Show the first element of the first element
student_data[0][0]
```
10.0
Now you have a multidimensional array containing both the students' study time and grade information, which you can use to compare data. For example, how does the mean study time compare to the mean grade?
```python
# Get the mean value of each sub-array
avg_study = student_data[0].mean()
avg_grade = student_data[1].mean()
print('Average study hours: {:.2f}\nAverage grade: {:.2f}'.format(avg_study, avg_grade))
```
Average study hours: 10.52
Average grade: 49.18
## Exploring tabular data with Pandas
While NumPy provides a lot of the functionality you need to work with numbers (and specifically arrays of numeric values), when you start to deal with two-dimensional tables of data, the **Pandas** package offers a more convenient structure to work with - the **DataFrame**.
Run the following cell to import the Pandas library and create a DataFrame with three columns. The first column is a list of student names, and the second and third columns are the NumPy arrays containing the study time and grade data.
```python
import pandas as pd
df_students = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic', 'Jimmie',
'Rhonda', 'Giovanni', 'Francesca', 'Rajab', 'Naiyana', 'Kian', 'Jenny',
'Jakeem','Helena','Ismat','Anila','Skye','Daniel','Aisha'],
'StudyHours':student_data[0],
'Grade':student_data[1]})
df_students
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Name</th>
<th>StudyHours</th>
<th>Grade</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Dan</td>
<td>10.00</td>
<td>50.0</td>
</tr>
<tr>
<th>1</th>
<td>Joann</td>
<td>11.50</td>
<td>50.0</td>
</tr>
<tr>
<th>2</th>
<td>Pedro</td>
<td>9.00</td>
<td>47.0</td>
</tr>
<tr>
<th>3</th>
<td>Rosie</td>
<td>16.00</td>
<td>97.0</td>
</tr>
<tr>
<th>4</th>
<td>Ethan</td>
<td>9.25</td>
<td>49.0</td>
</tr>
<tr>
<th>5</th>
<td>Vicky</td>
<td>1.00</td>
<td>3.0</td>
</tr>
<tr>
<th>6</th>
<td>Frederic</td>
<td>11.50</td>
<td>53.0</td>
</tr>
<tr>
<th>7</th>
<td>Jimmie</td>
<td>9.00</td>
<td>42.0</td>
</tr>
<tr>
<th>8</th>
<td>Rhonda</td>
<td>8.50</td>
<td>26.0</td>
</tr>
<tr>
<th>9</th>
<td>Giovanni</td>
<td>14.50</td>
<td>74.0</td>
</tr>
<tr>
<th>10</th>
<td>Francesca</td>
<td>15.50</td>
<td>82.0</td>
</tr>
<tr>
<th>11</th>
<td>Rajab</td>
<td>13.75</td>
<td>62.0</td>
</tr>
<tr>
<th>12</th>
<td>Naiyana</td>
<td>9.00</td>
<td>37.0</td>
</tr>
<tr>
<th>13</th>
<td>Kian</td>
<td>8.00</td>
<td>15.0</td>
</tr>
<tr>
<th>14</th>
<td>Jenny</td>
<td>15.50</td>
<td>70.0</td>
</tr>
<tr>
<th>15</th>
<td>Jakeem</td>
<td>8.00</td>
<td>27.0</td>
</tr>
<tr>
<th>16</th>
<td>Helena</td>
<td>9.00</td>
<td>36.0</td>
</tr>
<tr>
<th>17</th>
<td>Ismat</td>
<td>6.00</td>
<td>35.0</td>
</tr>
<tr>
<th>18</th>
<td>Anila</td>
<td>10.00</td>
<td>48.0</td>
</tr>
<tr>
<th>19</th>
<td>Skye</td>
<td>12.00</td>
<td>52.0</td>
</tr>
<tr>
<th>20</th>
<td>Daniel</td>
<td>12.50</td>
<td>63.0</td>
</tr>
<tr>
<th>21</th>
<td>Aisha</td>
<td>12.00</td>
<td>64.0</td>
</tr>
</tbody>
</table>
</div>
Note that in addition to the columns you specified, the DataFrame includes an *index* to uniquely identify each row. We could have specified the index explicitly and assigned any kind of appropriate value (for example, an email address); but because we didn't specify an index, one has been created with a unique integer value for each row.
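If you did want a more meaningful index, you could promote one of the columns; the sketch below (using a hypothetical `df_named` variable, not part of the original walkthrough) shows how the **set_index** method would do that without modifying the original DataFrame:
```python
# Use the Name column as the index (returns a new DataFrame; df_students is unchanged)
df_named = df_students.set_index('Name')
df_named.head(3)
```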
### Finding and filtering data in a DataFrame
You can use the DataFrame's **loc** method to retrieve data for a specific index value, like this.
```python
# Get the data for index value 5
df_students.loc[5]
```
Name Vicky
StudyHours 1
Grade 3
Name: 5, dtype: object
You can also get the data at a range of index values, like this:
```python
# Get the rows with index values from 0 to 5
df_students.loc[0:5]
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Name</th>
<th>StudyHours</th>
<th>Grade</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Dan</td>
<td>10.00</td>
<td>50.0</td>
</tr>
<tr>
<th>1</th>
<td>Joann</td>
<td>11.50</td>
<td>50.0</td>
</tr>
<tr>
<th>2</th>
<td>Pedro</td>
<td>9.00</td>
<td>47.0</td>
</tr>
<tr>
<th>3</th>
<td>Rosie</td>
<td>16.00</td>
<td>97.0</td>
</tr>
<tr>
<th>4</th>
<td>Ethan</td>
<td>9.25</td>
<td>49.0</td>
</tr>
<tr>
<th>5</th>
<td>Vicky</td>
<td>1.00</td>
<td>3.0</td>
</tr>
</tbody>
</table>
</div>
In addition to being able to use the **loc** method to find rows based on the index, you can use the **iloc** method to find rows based on their ordinal position in the DataFrame (regardless of the index):
```python
# Get data in the first five rows
df_students.iloc[0:5]
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Name</th>
<th>StudyHours</th>
<th>Grade</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Dan</td>
<td>10.00</td>
<td>50.0</td>
</tr>
<tr>
<th>1</th>
<td>Joann</td>
<td>11.50</td>
<td>50.0</td>
</tr>
<tr>
<th>2</th>
<td>Pedro</td>
<td>9.00</td>
<td>47.0</td>
</tr>
<tr>
<th>3</th>
<td>Rosie</td>
<td>16.00</td>
<td>97.0</td>
</tr>
<tr>
<th>4</th>
<td>Ethan</td>
<td>9.25</td>
<td>49.0</td>
</tr>
</tbody>
</table>
</div>
Look carefully at the `iloc[0:5]` results, and compare them to the `loc[0:5]` results you obtained previously. Can you spot the difference?
The **loc** method returned rows with index *labels* in the range of values from *0* to *5* - which includes *0*, *1*, *2*, *3*, *4*, and *5* (six rows). However, the **iloc** method returns the rows in the *positions* included in the range 0 to 5; and since integer ranges don't include the upper-bound value, this includes positions *0*, *1*, *2*, *3*, and *4* (five rows).
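A quick way to confirm this difference is to count the rows each expression returns (a small sketch, assuming the same `df_students` DataFrame):
```python
# Label-based slicing includes the end label; position-based slicing excludes the end position
print(len(df_students.loc[0:5]))   # 6 rows
print(len(df_students.iloc[0:5]))  # 5 rows
```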
**iloc** identifies data values in a DataFrame by *position*, which extends beyond rows to columns. So for example, you can use it to find the values for the columns in positions 1 and 2 in row 0, like this:
```python
df_students.iloc[0,[1,2]]
```
StudyHours 10
Grade 50
Name: 0, dtype: object
Let's return to the **loc** method, and see how it works with columns. Remember that **loc** is used to locate data items based on index values rather than positions. In the absence of an explicit index column, the rows in our dataframe are indexed as integer values, but the columns are identified by name:
```python
df_students.loc[0,'Grade']
```
50.0
Here's another useful trick. You can use the **loc** method to find indexed rows based on a filtering expression that references named columns other than the index, like this:
```python
df_students.loc[df_students['Name']=='Aisha']
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Name</th>
<th>StudyHours</th>
<th>Grade</th>
</tr>
</thead>
<tbody>
<tr>
<th>21</th>
<td>Aisha</td>
<td>12.0</td>
<td>64.0</td>
</tr>
</tbody>
</table>
</div>
Actually, you don't need to explicitly use the **loc** method to do this - you can simply apply a DataFrame filtering expression, like this:
```python
df_students[df_students['Name']=='Aisha']
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Name</th>
<th>StudyHours</th>
<th>Grade</th>
</tr>
</thead>
<tbody>
<tr>
<th>21</th>
<td>Aisha</td>
<td>12.0</td>
<td>64.0</td>
</tr>
</tbody>
</table>
</div>
And for good measure, you can achieve the same results by using the DataFrame's **query** method, like this:
```python
df_students.query('Name=="Aisha"')
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Name</th>
<th>StudyHours</th>
<th>Grade</th>
</tr>
</thead>
<tbody>
<tr>
<th>21</th>
<td>Aisha</td>
<td>12.0</td>
<td>64.0</td>
</tr>
</tbody>
</table>
</div>
The three previous examples underline an occasionally confusing truth about working with Pandas: there are often multiple ways to achieve the same results. Another example of this is the way you refer to a DataFrame column name. You can specify the column name as a named index value (as in the `df_students['Name']` examples we've seen so far), or you can use the column as a property of the DataFrame, like this:
```python
df_students[df_students.Name == 'Aisha']
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Name</th>
<th>StudyHours</th>
<th>Grade</th>
</tr>
</thead>
<tbody>
<tr>
<th>21</th>
<td>Aisha</td>
<td>12.0</td>
<td>64.0</td>
</tr>
</tbody>
</table>
</div>
### Loading a DataFrame from a file
We constructed the DataFrame from some existing arrays. However, in many real-world scenarios, data is loaded from sources such as files. Let's replace the student grades DataFrame with the contents of a text file.
```python
df_students = pd.read_csv('data/grades.csv',delimiter=',',header='infer')
df_students.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Name</th>
<th>StudyHours</th>
<th>Grade</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Dan</td>
<td>10.00</td>
<td>50.0</td>
</tr>
<tr>
<th>1</th>
<td>Joann</td>
<td>11.50</td>
<td>50.0</td>
</tr>
<tr>
<th>2</th>
<td>Pedro</td>
<td>9.00</td>
<td>47.0</td>
</tr>
<tr>
<th>3</th>
<td>Rosie</td>
<td>16.00</td>
<td>97.0</td>
</tr>
<tr>
<th>4</th>
<td>Ethan</td>
<td>9.25</td>
<td>49.0</td>
</tr>
</tbody>
</table>
</div>
The DataFrame's **read_csv** method is used to load data from text files. As you can see in the example code, you can specify options such as the column delimiter and which row (if any) contains column headers (in this case, the delimiter is a comma and the first row contains the column names - these are the default settings, so the parameters could have been omitted).
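Because those are the defaults, the following shorter call should load the same data (a sketch; the `df_check` variable is just for illustration and assumes the same `data/grades.csv` file):
```python
# Comma delimiters and inferred headers are the defaults, so they can be omitted
df_check = pd.read_csv('data/grades.csv')
df_check.head()
```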
### Handling missing values
One of the most common issues data scientists need to deal with is incomplete or missing data. So how would we know that the DataFrame contains missing values? You can use the **isnull** method to identify which individual values are null, like this:
```python
df_students.isnull()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Name</th>
<th>StudyHours</th>
<th>Grade</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>False</td>
<td>False</td>
<td>False</td>
</tr>
<tr>
<th>1</th>
<td>False</td>
<td>False</td>
<td>False</td>
</tr>
<tr>
<th>2</th>
<td>False</td>
<td>False</td>
<td>False</td>
</tr>
<tr>
<th>3</th>
<td>False</td>
<td>False</td>
<td>False</td>
</tr>
<tr>
<th>4</th>
<td>False</td>
<td>False</td>
<td>False</td>
</tr>
<tr>
<th>5</th>
<td>False</td>
<td>False</td>
<td>False</td>
</tr>
<tr>
<th>6</th>
<td>False</td>
<td>False</td>
<td>False</td>
</tr>
<tr>
<th>7</th>
<td>False</td>
<td>False</td>
<td>False</td>
</tr>
<tr>
<th>8</th>
<td>False</td>
<td>False</td>
<td>False</td>
</tr>
<tr>
<th>9</th>
<td>False</td>
<td>False</td>
<td>False</td>
</tr>
<tr>
<th>10</th>
<td>False</td>
<td>False</td>
<td>False</td>
</tr>
<tr>
<th>11</th>
<td>False</td>
<td>False</td>
<td>False</td>
</tr>
<tr>
<th>12</th>
<td>False</td>
<td>False</td>
<td>False</td>
</tr>
<tr>
<th>13</th>
<td>False</td>
<td>False</td>
<td>False</td>
</tr>
<tr>
<th>14</th>
<td>False</td>
<td>False</td>
<td>False</td>
</tr>
<tr>
<th>15</th>
<td>False</td>
<td>False</td>
<td>False</td>
</tr>
<tr>
<th>16</th>
<td>False</td>
<td>False</td>
<td>False</td>
</tr>
<tr>
<th>17</th>
<td>False</td>
<td>False</td>
<td>False</td>
</tr>
<tr>
<th>18</th>
<td>False</td>
<td>False</td>
<td>False</td>
</tr>
<tr>
<th>19</th>
<td>False</td>
<td>False</td>
<td>False</td>
</tr>
<tr>
<th>20</th>
<td>False</td>
<td>False</td>
<td>False</td>
</tr>
<tr>
<th>21</th>
<td>False</td>
<td>False</td>
<td>False</td>
</tr>
<tr>
<th>22</th>
<td>False</td>
<td>False</td>
<td>True</td>
</tr>
<tr>
<th>23</th>
<td>False</td>
<td>True</td>
<td>True</td>
</tr>
</tbody>
</table>
</div>
Of course, with a larger DataFrame, it would be inefficient to review all of the rows and columns individually; so we can get the sum of missing values for each column, like this:
```python
df_students.isnull().sum()
```
Name 0
StudyHours 1
Grade 2
dtype: int64
So now we know that there's one missing **StudyHours** value, and two missing **Grade** values.
To see them in context, we can filter the dataframe to include only rows where any of the columns (axis 1 of the DataFrame) are null.
```python
df_students[df_students.isnull().any(axis=1)]
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Name</th>
<th>StudyHours</th>
<th>Grade</th>
</tr>
</thead>
<tbody>
<tr>
<th>22</th>
<td>Bill</td>
<td>8.0</td>
<td>NaN</td>
</tr>
<tr>
<th>23</th>
<td>Ted</td>
<td>NaN</td>
<td>NaN</td>
</tr>
</tbody>
</table>
</div>
When the DataFrame is retrieved, the missing numeric values show up as **NaN** (*not a number*).
So now that we've found the null values, what can we do about them?
One common approach is to *impute* replacement values. For example, if the number of study hours is missing, we could just assume that the student studied for an average amount of time and replace the missing value with the mean study hours. To do this, we can use the **fillna** method, like this:
```python
df_students.StudyHours = df_students.StudyHours.fillna(df_students.StudyHours.mean())
df_students
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Name</th>
<th>StudyHours</th>
<th>Grade</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Dan</td>
<td>10.000000</td>
<td>50.0</td>
</tr>
<tr>
<th>1</th>
<td>Joann</td>
<td>11.500000</td>
<td>50.0</td>
</tr>
<tr>
<th>2</th>
<td>Pedro</td>
<td>9.000000</td>
<td>47.0</td>
</tr>
<tr>
<th>3</th>
<td>Rosie</td>
<td>16.000000</td>
<td>97.0</td>
</tr>
<tr>
<th>4</th>
<td>Ethan</td>
<td>9.250000</td>
<td>49.0</td>
</tr>
<tr>
<th>5</th>
<td>Vicky</td>
<td>1.000000</td>
<td>3.0</td>
</tr>
<tr>
<th>6</th>
<td>Frederic</td>
<td>11.500000</td>
<td>53.0</td>
</tr>
<tr>
<th>7</th>
<td>Jimmie</td>
<td>9.000000</td>
<td>42.0</td>
</tr>
<tr>
<th>8</th>
<td>Rhonda</td>
<td>8.500000</td>
<td>26.0</td>
</tr>
<tr>
<th>9</th>
<td>Giovanni</td>
<td>14.500000</td>
<td>74.0</td>
</tr>
<tr>
<th>10</th>
<td>Francesca</td>
<td>15.500000</td>
<td>82.0</td>
</tr>
<tr>
<th>11</th>
<td>Rajab</td>
<td>13.750000</td>
<td>62.0</td>
</tr>
<tr>
<th>12</th>
<td>Naiyana</td>
<td>9.000000</td>
<td>37.0</td>
</tr>
<tr>
<th>13</th>
<td>Kian</td>
<td>8.000000</td>
<td>15.0</td>
</tr>
<tr>
<th>14</th>
<td>Jenny</td>
<td>15.500000</td>
<td>70.0</td>
</tr>
<tr>
<th>15</th>
<td>Jakeem</td>
<td>8.000000</td>
<td>27.0</td>
</tr>
<tr>
<th>16</th>
<td>Helena</td>
<td>9.000000</td>
<td>36.0</td>
</tr>
<tr>
<th>17</th>
<td>Ismat</td>
<td>6.000000</td>
<td>35.0</td>
</tr>
<tr>
<th>18</th>
<td>Anila</td>
<td>10.000000</td>
<td>48.0</td>
</tr>
<tr>
<th>19</th>
<td>Skye</td>
<td>12.000000</td>
<td>52.0</td>
</tr>
<tr>
<th>20</th>
<td>Daniel</td>
<td>12.500000</td>
<td>63.0</td>
</tr>
<tr>
<th>21</th>
<td>Aisha</td>
<td>12.000000</td>
<td>64.0</td>
</tr>
<tr>
<th>22</th>
<td>Bill</td>
<td>8.000000</td>
<td>NaN</td>
</tr>
<tr>
<th>23</th>
<td>Ted</td>
<td>10.413043</td>
<td>NaN</td>
</tr>
</tbody>
</table>
</div>
Alternatively, it might be important to ensure that you only use data you know to be absolutely correct; so you can drop rows or columns that contain null values by using the **dropna** method. In this case, we'll remove rows (axis 0 of the DataFrame) where any of the columns contain null values.
```python
df_students = df_students.dropna(axis=0, how='any')
df_students
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Name</th>
<th>StudyHours</th>
<th>Grade</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Dan</td>
<td>10.00</td>
<td>50.0</td>
</tr>
<tr>
<th>1</th>
<td>Joann</td>
<td>11.50</td>
<td>50.0</td>
</tr>
<tr>
<th>2</th>
<td>Pedro</td>
<td>9.00</td>
<td>47.0</td>
</tr>
<tr>
<th>3</th>
<td>Rosie</td>
<td>16.00</td>
<td>97.0</td>
</tr>
<tr>
<th>4</th>
<td>Ethan</td>
<td>9.25</td>
<td>49.0</td>
</tr>
<tr>
<th>5</th>
<td>Vicky</td>
<td>1.00</td>
<td>3.0</td>
</tr>
<tr>
<th>6</th>
<td>Frederic</td>
<td>11.50</td>
<td>53.0</td>
</tr>
<tr>
<th>7</th>
<td>Jimmie</td>
<td>9.00</td>
<td>42.0</td>
</tr>
<tr>
<th>8</th>
<td>Rhonda</td>
<td>8.50</td>
<td>26.0</td>
</tr>
<tr>
<th>9</th>
<td>Giovanni</td>
<td>14.50</td>
<td>74.0</td>
</tr>
<tr>
<th>10</th>
<td>Francesca</td>
<td>15.50</td>
<td>82.0</td>
</tr>
<tr>
<th>11</th>
<td>Rajab</td>
<td>13.75</td>
<td>62.0</td>
</tr>
<tr>
<th>12</th>
<td>Naiyana</td>
<td>9.00</td>
<td>37.0</td>
</tr>
<tr>
<th>13</th>
<td>Kian</td>
<td>8.00</td>
<td>15.0</td>
</tr>
<tr>
<th>14</th>
<td>Jenny</td>
<td>15.50</td>
<td>70.0</td>
</tr>
<tr>
<th>15</th>
<td>Jakeem</td>
<td>8.00</td>
<td>27.0</td>
</tr>
<tr>
<th>16</th>
<td>Helena</td>
<td>9.00</td>
<td>36.0</td>
</tr>
<tr>
<th>17</th>
<td>Ismat</td>
<td>6.00</td>
<td>35.0</td>
</tr>
<tr>
<th>18</th>
<td>Anila</td>
<td>10.00</td>
<td>48.0</td>
</tr>
<tr>
<th>19</th>
<td>Skye</td>
<td>12.00</td>
<td>52.0</td>
</tr>
<tr>
<th>20</th>
<td>Daniel</td>
<td>12.50</td>
<td>63.0</td>
</tr>
<tr>
<th>21</th>
<td>Aisha</td>
<td>12.00</td>
<td>64.0</td>
</tr>
</tbody>
</table>
</div>
### Exploring data in the DataFrame
Now that we've cleaned up the missing values, we're ready to explore the data in the DataFrame. Let's start by comparing the mean study hours and grades.
```python
# Get the mean study hours using the column name as an index
mean_study = df_students['StudyHours'].mean()
# Get the mean grade using the column name as a property (just to make the point!)
mean_grade = df_students.Grade.mean()
# Print the mean study hours and mean grade
print('Average weekly study hours: {:.2f}\nAverage grade: {:.2f}'.format(mean_study, mean_grade))
```
Average weekly study hours: 10.52
Average grade: 49.18
OK, let's filter the DataFrame to find only the students who studied for more than the average amount of time.
```python
# Get students who studied for more than the mean hours
df_students[df_students.StudyHours > mean_study]
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Name</th>
<th>StudyHours</th>
<th>Grade</th>
</tr>
</thead>
<tbody>
<tr>
<th>1</th>
<td>Joann</td>
<td>11.50</td>
<td>50.0</td>
</tr>
<tr>
<th>3</th>
<td>Rosie</td>
<td>16.00</td>
<td>97.0</td>
</tr>
<tr>
<th>6</th>
<td>Frederic</td>
<td>11.50</td>
<td>53.0</td>
</tr>
<tr>
<th>9</th>
<td>Giovanni</td>
<td>14.50</td>
<td>74.0</td>
</tr>
<tr>
<th>10</th>
<td>Francesca</td>
<td>15.50</td>
<td>82.0</td>
</tr>
<tr>
<th>11</th>
<td>Rajab</td>
<td>13.75</td>
<td>62.0</td>
</tr>
<tr>
<th>14</th>
<td>Jenny</td>
<td>15.50</td>
<td>70.0</td>
</tr>
<tr>
<th>19</th>
<td>Skye</td>
<td>12.00</td>
<td>52.0</td>
</tr>
<tr>
<th>20</th>
<td>Daniel</td>
<td>12.50</td>
<td>63.0</td>
</tr>
<tr>
<th>21</th>
<td>Aisha</td>
<td>12.00</td>
<td>64.0</td>
</tr>
</tbody>
</table>
</div>
Note that the filtered result is itself a DataFrame, so you can work with its columns just like any other DataFrame.
For example, let's find the average grade for students who undertook more than the average amount of study time.
```python
# What was their mean grade?
df_students[df_students.StudyHours > mean_study].Grade.mean()
```
66.7
Let's assume that the passing grade for the course is 60.
We can use that information to add a new column to the DataFrame, indicating whether or not each student passed.
First, we'll create a Pandas **Series** containing the pass/fail indicator (True or False), and then we'll concatenate that series as a new column (axis 1) in the DataFrame.
```python
passes = pd.Series(df_students['Grade'] >= 60)
df_students = pd.concat([df_students, passes.rename("Pass")], axis=1)
df_students
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Name</th>
<th>StudyHours</th>
<th>Grade</th>
<th>Pass</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Dan</td>
<td>10.00</td>
<td>50.0</td>
<td>False</td>
</tr>
<tr>
<th>1</th>
<td>Joann</td>
<td>11.50</td>
<td>50.0</td>
<td>False</td>
</tr>
<tr>
<th>2</th>
<td>Pedro</td>
<td>9.00</td>
<td>47.0</td>
<td>False</td>
</tr>
<tr>
<th>3</th>
<td>Rosie</td>
<td>16.00</td>
<td>97.0</td>
<td>True</td>
</tr>
<tr>
<th>4</th>
<td>Ethan</td>
<td>9.25</td>
<td>49.0</td>
<td>False</td>
</tr>
<tr>
<th>5</th>
<td>Vicky</td>
<td>1.00</td>
<td>3.0</td>
<td>False</td>
</tr>
<tr>
<th>6</th>
<td>Frederic</td>
<td>11.50</td>
<td>53.0</td>
<td>False</td>
</tr>
<tr>
<th>7</th>
<td>Jimmie</td>
<td>9.00</td>
<td>42.0</td>
<td>False</td>
</tr>
<tr>
<th>8</th>
<td>Rhonda</td>
<td>8.50</td>
<td>26.0</td>
<td>False</td>
</tr>
<tr>
<th>9</th>
<td>Giovanni</td>
<td>14.50</td>
<td>74.0</td>
<td>True</td>
</tr>
<tr>
<th>10</th>
<td>Francesca</td>
<td>15.50</td>
<td>82.0</td>
<td>True</td>
</tr>
<tr>
<th>11</th>
<td>Rajab</td>
<td>13.75</td>
<td>62.0</td>
<td>True</td>
</tr>
<tr>
<th>12</th>
<td>Naiyana</td>
<td>9.00</td>
<td>37.0</td>
<td>False</td>
</tr>
<tr>
<th>13</th>
<td>Kian</td>
<td>8.00</td>
<td>15.0</td>
<td>False</td>
</tr>
<tr>
<th>14</th>
<td>Jenny</td>
<td>15.50</td>
<td>70.0</td>
<td>True</td>
</tr>
<tr>
<th>15</th>
<td>Jakeem</td>
<td>8.00</td>
<td>27.0</td>
<td>False</td>
</tr>
<tr>
<th>16</th>
<td>Helena</td>
<td>9.00</td>
<td>36.0</td>
<td>False</td>
</tr>
<tr>
<th>17</th>
<td>Ismat</td>
<td>6.00</td>
<td>35.0</td>
<td>False</td>
</tr>
<tr>
<th>18</th>
<td>Anila</td>
<td>10.00</td>
<td>48.0</td>
<td>False</td>
</tr>
<tr>
<th>19</th>
<td>Skye</td>
<td>12.00</td>
<td>52.0</td>
<td>False</td>
</tr>
<tr>
<th>20</th>
<td>Daniel</td>
<td>12.50</td>
<td>63.0</td>
<td>True</td>
</tr>
<tr>
<th>21</th>
<td>Aisha</td>
<td>12.00</td>
<td>64.0</td>
<td>True</td>
</tr>
</tbody>
</table>
</div>
DataFrames are designed for tabular data, and you can use them to perform many of the kinds of data analytics operations you can do in a relational database, such as grouping and aggregating tables of data.
For example, you can use the **groupby** method to group the student data into groups based on the **Pass** column you added previously, and count the number of names in each group - in other words, you can determine how many students passed and failed.
```python
print(df_students.groupby(df_students.Pass).Name.count())
```
Pass
False 15
True 7
Name: Name, dtype: int64
You can aggregate multiple fields in a group using any available aggregation function. For example, you can find the mean study time and grade for the groups of students who passed and failed the course.
```python
print(df_students.groupby(df_students.Pass)[['StudyHours', 'Grade']].mean())
```
StudyHours Grade
Pass
False 8.783333 38.000000
True 14.250000 73.142857
DataFrames are amazingly versatile, and make it easy to manipulate data. Many DataFrame operations return a new copy of the DataFrame; so if you want to modify a DataFrame but keep the existing variable, you need to assign the result of the operation to the existing variable. For example, the following code sorts the student data into descending order of Grade, and assigns the resulting sorted DataFrame to the original **df_students** variable.
```python
# Create a DataFrame with the data sorted by Grade (descending)
df_students = df_students.sort_values('Grade', ascending=False)
# Show the DataFrame
df_students
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Name</th>
<th>StudyHours</th>
<th>Grade</th>
<th>Pass</th>
</tr>
</thead>
<tbody>
<tr>
<th>3</th>
<td>Rosie</td>
<td>16.00</td>
<td>97.0</td>
<td>True</td>
</tr>
<tr>
<th>10</th>
<td>Francesca</td>
<td>15.50</td>
<td>82.0</td>
<td>True</td>
</tr>
<tr>
<th>9</th>
<td>Giovanni</td>
<td>14.50</td>
<td>74.0</td>
<td>True</td>
</tr>
<tr>
<th>14</th>
<td>Jenny</td>
<td>15.50</td>
<td>70.0</td>
<td>True</td>
</tr>
<tr>
<th>21</th>
<td>Aisha</td>
<td>12.00</td>
<td>64.0</td>
<td>True</td>
</tr>
<tr>
<th>20</th>
<td>Daniel</td>
<td>12.50</td>
<td>63.0</td>
<td>True</td>
</tr>
<tr>
<th>11</th>
<td>Rajab</td>
<td>13.75</td>
<td>62.0</td>
<td>True</td>
</tr>
<tr>
<th>6</th>
<td>Frederic</td>
<td>11.50</td>
<td>53.0</td>
<td>False</td>
</tr>
<tr>
<th>19</th>
<td>Skye</td>
<td>12.00</td>
<td>52.0</td>
<td>False</td>
</tr>
<tr>
<th>1</th>
<td>Joann</td>
<td>11.50</td>
<td>50.0</td>
<td>False</td>
</tr>
<tr>
<th>0</th>
<td>Dan</td>
<td>10.00</td>
<td>50.0</td>
<td>False</td>
</tr>
<tr>
<th>4</th>
<td>Ethan</td>
<td>9.25</td>
<td>49.0</td>
<td>False</td>
</tr>
<tr>
<th>18</th>
<td>Anila</td>
<td>10.00</td>
<td>48.0</td>
<td>False</td>
</tr>
<tr>
<th>2</th>
<td>Pedro</td>
<td>9.00</td>
<td>47.0</td>
<td>False</td>
</tr>
<tr>
<th>7</th>
<td>Jimmie</td>
<td>9.00</td>
<td>42.0</td>
<td>False</td>
</tr>
<tr>
<th>12</th>
<td>Naiyana</td>
<td>9.00</td>
<td>37.0</td>
<td>False</td>
</tr>
<tr>
<th>16</th>
<td>Helena</td>
<td>9.00</td>
<td>36.0</td>
<td>False</td>
</tr>
<tr>
<th>17</th>
<td>Ismat</td>
<td>6.00</td>
<td>35.0</td>
<td>False</td>
</tr>
<tr>
<th>15</th>
<td>Jakeem</td>
<td>8.00</td>
<td>27.0</td>
<td>False</td>
</tr>
<tr>
<th>8</th>
<td>Rhonda</td>
<td>8.50</td>
<td>26.0</td>
<td>False</td>
</tr>
<tr>
<th>13</th>
<td>Kian</td>
<td>8.00</td>
<td>15.0</td>
<td>False</td>
</tr>
<tr>
<th>5</th>
<td>Vicky</td>
<td>1.00</td>
<td>3.0</td>
<td>False</td>
</tr>
</tbody>
</table>
</div>
## Visualizing data with Matplotlib
DataFrames provide a great way to explore and analyze tabular data, but sometimes a picture is worth a thousand rows and columns. The **Matplotlib** library provides the foundation for plotting data visualizations that can greatly enhance your ability to analyze the data.
Let's start with a simple bar chart that shows the grade of each student.
```python
# Ensure plots are displayed inline in the notebook
%matplotlib inline
from matplotlib import pyplot as plt
# Create a bar plot of name vs grade
plt.bar(x=df_students.Name, height=df_students.Grade)
# Display the plot
plt.show()
```
Well, that worked; but the chart could use some improvements to make it clearer what we're looking at.
Note that you used the **pyplot** module from Matplotlib to plot the chart. This module provides a whole bunch of ways to improve the visual elements of the plot. For example, the following code:
- Specifies the color of the bar chart.
- Adds a title to the chart (so we know what it represents)
- Adds labels to the X and Y axes (so we know which axis shows which data)
- Adds a grid (to make it easier to determine the values for the bars)
- Rotates the X-axis tick labels (so we can read them)
```python
# Create a bar plot of name vs grade
plt.bar(x=df_students.Name, height=df_students.Grade, color='orange')
# Customize the chart
plt.title('Student Grades')
plt.xlabel('Student')
plt.ylabel('Grade')
plt.grid(color='#95a5a6', linestyle='--', linewidth=2, axis='y', alpha=0.7)
plt.xticks(rotation=90)
# Display the plot
plt.show()
```
A plot is technically contained within a **Figure**. In the previous examples, the figure was created implicitly for you; but you can create it explicitly. For example, the following code creates a figure with a specific size.
```python
# Create a Figure
fig = plt.figure(figsize=(8,3))
# Create a bar plot of name vs grade
plt.bar(x=df_students.Name, height=df_students.Grade, color='orange')
# Customize the chart
plt.title('Student Grades')
plt.xlabel('Student')
plt.ylabel('Grade')
plt.grid(color='#95a5a6', linestyle='--', linewidth=2, axis='y', alpha=0.7)
plt.xticks(rotation=90)
# Show the figure
plt.show()
```
A figure can contain multiple subplots, each on its own *axis*.
For example, the following code creates a figure with two subplots - one is a bar chart showing student grades, and the other is a pie chart comparing the number of passing grades to non-passing grades.
```python
# Create a figure for 2 subplots (1 row, 2 columns)
fig, ax = plt.subplots(1, 2, figsize = (10,4))
# Create a bar plot of name vs grade on the first axis
ax[0].bar(x=df_students.Name, height=df_students.Grade, color='orange')
ax[0].set_title('Grades')
ax[0].set_xticklabels(df_students.Name, rotation=90)
# Create a pie chart of pass counts on the second axis
pass_counts = df_students['Pass'].value_counts()
ax[1].pie(pass_counts, labels=pass_counts)
ax[1].set_title('Passing Grades')
ax[1].legend(pass_counts.keys().tolist())
# Add a title to the Figure
fig.suptitle('Student Data')
# Show the figure
fig.show()
```
Until now, you've used functions of Matplotlib's **pyplot** module to plot charts. However, Matplotlib is so foundational to graphics in Python that many packages, including Pandas, provide methods that abstract the underlying Matplotlib functions and simplify plotting. For example, the DataFrame provides its own methods for plotting data, as shown in the following example, which plots a bar chart of study hours.
```python
df_students.plot.bar(x='Name', y='StudyHours', color='teal', figsize=(6,4))
```
## Getting started with statistical analysis
Now that you know how to use Python to manipulate and visualize data, you can start analyzing it.
A lot of data science is rooted in *statistics*, so we'll explore some basic statistical techniques.
> **Note**: This is <u>not</u> intended to teach you statistics - that's much too big a topic for this notebook. It will however introduce you to some statistical concepts and techniques that data scientists use as they explore data in preparation for machine learning modeling.
### Descriptive statistics and data distribution
When examining a *variable* (for example, a sample of student grades), data scientists are particularly interested in its *distribution* (in other words, how the different grade values are spread across the sample). The starting point for this exploration is often to visualize the data as a histogram, and see how frequently each value for the variable occurs.
```python
# Get the variable to examine
var_data = df_students['Grade']
# Create a Figure
fig = plt.figure(figsize=(10,4))
# Plot a histogram
plt.hist(var_data)
# Add titles and labels
plt.title('Data Distribution')
plt.xlabel('Value')
plt.ylabel('Frequency')
# Show the figure
fig.show()
```
The histogram for grades has a symmetric shape, where the most frequently occurring grades tend to be in the middle of the range (around 50), with fewer grades at the extreme ends of the scale.
#### Measures of central tendency
To understand the distribution better, we can examine so-called *measures of central tendency*; which is a fancy way of describing statistics that represent the "middle" of the data. The goal of this is to try to find a "typical" value. Common ways to define the middle of the data include:
- The *mean*: A simple average based on adding together all of the values in the sample set, and then dividing the total by the number of samples.
- The *median*: The value in the middle of the range of all of the sample values.
- The *mode*: The most commonly occurring value in the sample set<sup>\*</sup>.
Let's calculate these values, along with the minimum and maximum values for comparison, and show them on the histogram.
> <sup>\*</sup>Of course, in some sample sets, there may be a tie for the most common value - in which case the dataset is described as *bimodal* or even *multimodal*.
```python
# Get the variable to examine
var = df_students['Grade']
# Get statistics
min_val = var.min()
max_val = var.max()
mean_val = var.mean()
med_val = var.median()
mod_val = var.mode()[0]
print('Minimum:{:.2f}\nMean:{:.2f}\nMedian:{:.2f}\nMode:{:.2f}\nMaximum:{:.2f}\n'.format(min_val,
mean_val,
med_val,
mod_val,
max_val))
# Create a Figure
fig = plt.figure(figsize=(10,4))
# Plot a histogram
plt.hist(var)
# Add lines for the statistics
plt.axvline(x=min_val, color = 'gray', linestyle='dashed', linewidth = 2)
plt.axvline(x=mean_val, color = 'cyan', linestyle='dashed', linewidth = 2)
plt.axvline(x=med_val, color = 'red', linestyle='dashed', linewidth = 2)
plt.axvline(x=mod_val, color = 'yellow', linestyle='dashed', linewidth = 2)
plt.axvline(x=max_val, color = 'gray', linestyle='dashed', linewidth = 2)
# Add titles and labels
plt.title('Data Distribution')
plt.xlabel('Value')
plt.ylabel('Frequency')
# Show the figure
fig.show()
```
For the grade data, the mean, median, and mode all seem to be more or less in the middle of the minimum and maximum, at around 50.
Another way to visualize the distribution of a variable is to use a *box* plot (sometimes called a *box-and-whiskers* plot). Let's create one for the grade data.
```python
# Get the variable to examine
var = df_students['Grade']
# Create a Figure
fig = plt.figure(figsize=(10,4))
# Plot a box plot
plt.boxplot(var)
# Add titles and labels
plt.title('Data Distribution')
# Show the figure
fig.show()
```
The box plot shows the distribution of the grade values in a different format from the histogram. The *box* part of the plot shows where the inner two *quartiles* of the data reside - so in this case, half of the grades are between approximately 36 and 63. The *whiskers* extending from the box show the outer two quartiles; so the other half of the grades in this case are between 0 and 36 or 63 and 100. The line in the box indicates the *median* value.
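You can verify the quartile boundaries that define the box directly with the **quantile** method (a quick sketch, using the same `df_students` DataFrame):
```python
# The 25th, 50th (median), and 75th percentiles of the Grade column
print(df_students['Grade'].quantile([0.25, 0.5, 0.75]))
```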
It's often useful to combine histograms and box plots, with the box plot's orientation changed to align it with the histogram (in some ways, it can be helpful to think of the histogram as a "front elevation" view of the distribution, and the box plot as a "plan" view of the distribution from above.)
```python
# Create a function that we can re-use
def show_distribution(var_data):
from matplotlib import pyplot as plt
# Get statistics
min_val = var_data.min()
max_val = var_data.max()
mean_val = var_data.mean()
med_val = var_data.median()
mod_val = var_data.mode()[0]
print('Minimum:{:.2f}\nMean:{:.2f}\nMedian:{:.2f}\nMode:{:.2f}\nMaximum:{:.2f}\n'.format(min_val,
mean_val,
med_val,
mod_val,
max_val))
# Create a figure for 2 subplots (2 rows, 1 column)
fig, ax = plt.subplots(2, 1, figsize = (10,4))
# Plot the histogram
ax[0].hist(var_data)
ax[0].set_ylabel('Frequency')
# Add lines for the mean, median, and mode
ax[0].axvline(x=min_val, color = 'gray', linestyle='dashed', linewidth = 2)
ax[0].axvline(x=mean_val, color = 'cyan', linestyle='dashed', linewidth = 2)
ax[0].axvline(x=med_val, color = 'red', linestyle='dashed', linewidth = 2)
ax[0].axvline(x=mod_val, color = 'yellow', linestyle='dashed', linewidth = 2)
ax[0].axvline(x=max_val, color = 'gray', linestyle='dashed', linewidth = 2)
# Plot the boxplot
ax[1].boxplot(var_data, vert=False)
ax[1].set_xlabel('Value')
# Add a title to the Figure
fig.suptitle('Data Distribution')
# Show the figure
fig.show()
# Get the variable to examine
col = df_students['Grade']
# Call the function
show_distribution(col)
```
All of the measures of central tendency are right in the middle of the data distribution, which is symmetric, with values becoming progressively lower in both directions from the middle.
To explore this distribution in more detail, you need to understand that statistics is fundamentally about taking *samples* of data and using probability functions to extrapolate information about the full *population* of data. For example, the student data consists of 22 samples, and for each sample there is a grade value. You can think of each sample grade as a variable that's been randomly selected from the set of all grades awarded for this course. With enough of these random variables, you can calculate something called a *probability density function*, which estimates the distribution of grades for the full population.
The Pandas DataFrame class provides a helpful plot function to show this density.
```python
def show_density(var_data):
from matplotlib import pyplot as plt
fig = plt.figure(figsize=(10,4))
# Plot density
var_data.plot.density()
# Add titles and labels
plt.title('Data Density')
# Show the mean, median, and mode
plt.axvline(x=var_data.mean(), color = 'cyan', linestyle='dashed', linewidth = 2)
plt.axvline(x=var_data.median(), color = 'red', linestyle='dashed', linewidth = 2)
plt.axvline(x=var_data.mode()[0], color = 'yellow', linestyle='dashed', linewidth = 2)
# Show the figure
plt.show()
# Get the density of Grade
col = df_students['Grade']
show_density(col)
```
As expected from the histogram of the sample, the density shows the characteristic 'bell curve' of what statisticians call a *normal* distribution, with the mean and mode at the center and symmetric tails.
Now let's take a look at the distribution of the study hours data.
```python
# Get the variable to examine
col = df_students['StudyHours']
# Call the function
show_distribution(col)
```
The distribution of the study time data is significantly different from that of the grades.
Note that the whiskers of the box plot only extend to around 6.0, indicating that the vast majority of the first quarter of the data is above this value. The minimum is marked with an **o**, indicating that it is statistically an *outlier* - a value that lies significantly outside the range of the rest of the distribution.
Outliers can occur for many reasons. Maybe a student meant to record "10" hours of study time, but entered "1" and missed the "0". Or maybe the student was abnormally lazy when it comes to studying! Either way, it's a statistical anomaly that doesn't represent a typical student. Let's see what the distribution looks like without it.
```python
# Get the variable to examine
col = df_students[df_students.StudyHours>1]['StudyHours']
# Call the function
show_distribution(col)
```
In this example, the dataset is small enough to clearly see that the value **1** is an outlier for the **StudyHours** column, so you can exclude it explicitly. In most real-world cases, it's easier to consider outliers as being values that fall below or above percentiles within which most of the data lie. For example, the following code uses the Pandas **quantile** function to exclude observations below the 0.01 quantile (the 1st percentile - the value above which 99% of the data reside).
```python
q01 = df_students.StudyHours.quantile(0.01)
# Get the variable to examine
col = df_students[df_students.StudyHours>q01]['StudyHours']
# Call the function
show_distribution(col)
```
> **Tip**: You can also eliminate outliers at the upper end of the distribution by defining a threshold at a high percentile value - for example, you could use the **quantile** function to find the 0.99 quantile (the 99th percentile), below which 99% of the data reside.
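Putting the two thresholds together, a two-sided trim might look like the following sketch (the `trimmed` variable and the specific thresholds are illustrative, not part of the original walkthrough):
```python
# Keep only observations between the 0.01 and 0.99 quantiles of StudyHours
q01 = df_students.StudyHours.quantile(0.01)
q99 = df_students.StudyHours.quantile(0.99)
trimmed = df_students[(df_students.StudyHours > q01) & (df_students.StudyHours < q99)]['StudyHours']
print('Keeping study hours between {:.2f} and {:.2f}'.format(q01, q99))
```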
With the outliers removed, the box plot shows all data within the four quartiles. Note that the distribution is not symmetric like it is for the grade data though - there are some students with very high study times of around 16 hours, but the bulk of the data is between 7 and 13 hours; the few extremely high values pull the mean towards the higher end of the scale.
Let's look at the density for this distribution.
```python
# Get the density of StudyHours
show_density(col)
```
This kind of distribution is called *right skewed*. The mass of the data is on the left side of the distribution, creating a long tail to the right because of the values at the extreme high end; which pull the mean to the right.
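One simple way to check this skew numerically is to compare the mean and median; in a right-skewed distribution the mean sits above the median (a sketch, assuming `col` still holds the trimmed **StudyHours** data from above):
```python
# In a right-skewed distribution, the mean is pulled above the median
print('Mean: {:.2f}\nMedian: {:.2f}'.format(col.mean(), col.median()))
```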
#### Measures of variance
So now we have a good idea where the middle of the grade and study hours data distributions are. However, there's another aspect of the distributions we should examine: how much variability is there in the data?
Typical statistics that measure variability in the data include:
- **Range**: The difference between the maximum and minimum. There's no built-in function for this, but it's easy to calculate using the **min** and **max** functions.
- **Variance**: The average of the squared difference from the mean. You can use the built-in **var** function to find this.
- **Standard Deviation**: The square root of the variance. You can use the built-in **std** function to find this.
```python
for col_name in ['Grade','StudyHours']:
col = df_students[col_name]
rng = col.max() - col.min()
var = col.var()
std = col.std()
print('\n{}:\n - Range: {:.2f}\n - Variance: {:.2f}\n - Std.Dev: {:.2f}'.format(col_name, rng, var, std))
```
Grade:
- Range: 94.00
- Variance: 472.54
- Std.Dev: 21.74
StudyHours:
- Range: 15.00
- Variance: 12.16
- Std.Dev: 3.49
Of these statistics, the standard deviation is generally the most useful. It provides a measure of variance in the data on the same scale as the data itself (so grade points for the Grade distribution and hours for the StudyHours distribution). The higher the standard deviation, the more variance there is when comparing values in the distribution to the distribution mean - in other words, the data is more spread out.
When working with a *normal* distribution, the standard deviation works with the particular characteristics of a normal distribution to provide even greater insight. Run the cell below to see the relationship between standard deviations and the data in the normal distribution.
```python
import scipy.stats as stats
# Get the Grade column
col = df_students['Grade']
# get the density
density = stats.gaussian_kde(col)
# Plot the density
col.plot.density()
# Get the mean and standard deviation
s = col.std()
m = col.mean()
# Annotate 1 stdev
x1 = [m-s, m+s]
y1 = density(x1)
plt.plot(x1,y1, color='magenta')
plt.annotate('1 std (68.26%)', (x1[1],y1[1]))
# Annotate 2 stdevs
x2 = [m-(s*2), m+(s*2)]
y2 = density(x2)
plt.plot(x2,y2, color='green')
plt.annotate('2 std (95.45%)', (x2[1],y2[1]))
# Annotate 3 stdevs
x3 = [m-(s*3), m+(s*3)]
y3 = density(x3)
plt.plot(x3,y3, color='orange')
plt.annotate('3 std (99.73%)', (x3[1],y3[1]))
# Show the location of the mean
plt.axvline(col.mean(), color='cyan', linestyle='dashed', linewidth=1)
plt.axis('off')
plt.show()
```
The horizontal lines show the percentage of data within 1, 2, and 3 standard deviations of the mean (plus or minus).
In any normal distribution:
- Approximately 68.26% of values fall within one standard deviation from the mean.
- Approximately 95.45% of values fall within two standard deviations from the mean.
- Approximately 99.73% of values fall within three standard deviations from the mean.
So, since we know that the mean grade is 49.18, the standard deviation is 21.74, and distribution of grades is approximately normal; we can calculate that 68.26% of students should achieve a grade between 27.44 and 70.92.
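You can reproduce that range directly from the mean and standard deviation (a small sketch using the same DataFrame):
```python
# Calculate the one-standard-deviation range around the mean grade
m = df_students.Grade.mean()
s = df_students.Grade.std()
print('68.26% of grades should fall between {:.2f} and {:.2f}'.format(m - s, m + s))
```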
The descriptive statistics we've used to understand the distribution of the student data variables are the basis of statistical analysis; and because they're such an important part of exploring your data, there's a built-in **describe** method of the DataFrame object that returns the main descriptive statistics for all numeric columns.
```python
df_students.describe()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>StudyHours</th>
<th>Grade</th>
</tr>
</thead>
<tbody>
<tr>
<th>count</th>
<td>22.000000</td>
<td>22.000000</td>
</tr>
<tr>
<th>mean</th>
<td>10.522727</td>
<td>49.181818</td>
</tr>
<tr>
<th>std</th>
<td>3.487144</td>
<td>21.737912</td>
</tr>
<tr>
<th>min</th>
<td>1.000000</td>
<td>3.000000</td>
</tr>
<tr>
<th>25%</th>
<td>9.000000</td>
<td>36.250000</td>
</tr>
<tr>
<th>50%</th>
<td>10.000000</td>
<td>49.500000</td>
</tr>
<tr>
<th>75%</th>
<td>12.375000</td>
<td>62.750000</td>
</tr>
<tr>
<th>max</th>
<td>16.000000</td>
<td>97.000000</td>
</tr>
</tbody>
</table>
</div>
## Comparing data
Now that you know something about the statistical distribution of the data in your dataset, you're ready to examine your data to identify any apparent relationships between variables.
First of all, let's get rid of any rows that contain outliers so that we have a sample that is representative of a typical class of students. We identified that the StudyHours column contains some outliers with extremely low values, so we'll remove those rows.
```python
df_sample = df_students[df_students['StudyHours']>1]
df_sample
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Name</th>
<th>StudyHours</th>
<th>Grade</th>
<th>Pass</th>
</tr>
</thead>
<tbody>
<tr>
<th>3</th>
<td>Rosie</td>
<td>16.00</td>
<td>97.0</td>
<td>True</td>
</tr>
<tr>
<th>10</th>
<td>Francesca</td>
<td>15.50</td>
<td>82.0</td>
<td>True</td>
</tr>
<tr>
<th>9</th>
<td>Giovanni</td>
<td>14.50</td>
<td>74.0</td>
<td>True</td>
</tr>
<tr>
<th>14</th>
<td>Jenny</td>
<td>15.50</td>
<td>70.0</td>
<td>True</td>
</tr>
<tr>
<th>21</th>
<td>Aisha</td>
<td>12.00</td>
<td>64.0</td>
<td>True</td>
</tr>
<tr>
<th>20</th>
<td>Daniel</td>
<td>12.50</td>
<td>63.0</td>
<td>True</td>
</tr>
<tr>
<th>11</th>
<td>Rajab</td>
<td>13.75</td>
<td>62.0</td>
<td>True</td>
</tr>
<tr>
<th>6</th>
<td>Frederic</td>
<td>11.50</td>
<td>53.0</td>
<td>False</td>
</tr>
<tr>
<th>19</th>
<td>Skye</td>
<td>12.00</td>
<td>52.0</td>
<td>False</td>
</tr>
<tr>
<th>1</th>
<td>Joann</td>
<td>11.50</td>
<td>50.0</td>
<td>False</td>
</tr>
<tr>
<th>0</th>
<td>Dan</td>
<td>10.00</td>
<td>50.0</td>
<td>False</td>
</tr>
<tr>
<th>4</th>
<td>Ethan</td>
<td>9.25</td>
<td>49.0</td>
<td>False</td>
</tr>
<tr>
<th>18</th>
<td>Anila</td>
<td>10.00</td>
<td>48.0</td>
<td>False</td>
</tr>
<tr>
<th>2</th>
<td>Pedro</td>
<td>9.00</td>
<td>47.0</td>
<td>False</td>
</tr>
<tr>
<th>7</th>
<td>Jimmie</td>
<td>9.00</td>
<td>42.0</td>
<td>False</td>
</tr>
<tr>
<th>12</th>
<td>Naiyana</td>
<td>9.00</td>
<td>37.0</td>
<td>False</td>
</tr>
<tr>
<th>16</th>
<td>Helena</td>
<td>9.00</td>
<td>36.0</td>
<td>False</td>
</tr>
<tr>
<th>17</th>
<td>Ismat</td>
<td>6.00</td>
<td>35.0</td>
<td>False</td>
</tr>
<tr>
<th>15</th>
<td>Jakeem</td>
<td>8.00</td>
<td>27.0</td>
<td>False</td>
</tr>
<tr>
<th>8</th>
<td>Rhonda</td>
<td>8.50</td>
<td>26.0</td>
<td>False</td>
</tr>
<tr>
<th>13</th>
<td>Kian</td>
<td>8.00</td>
<td>15.0</td>
<td>False</td>
</tr>
</tbody>
</table>
</div>
### Comparing numeric and categorical variables
The data includes two *numeric* variables (**StudyHours** and **Grade**) and two *categorical* variables (**Name** and **Pass**). Let's start by comparing the numeric **StudyHours** column to the categorical **Pass** column to see if there's an apparent relationship between the number of hours studied and a passing grade.
To make this comparison, let's create box plots showing the distribution of StudyHours for each possible Pass value (true and false).
```python
df_sample.boxplot(column='StudyHours', by='Pass', figsize=(8,5))
```
Comparing the StudyHours distributions, it's immediately apparent (if not particularly surprising) that students who passed the course tended to study for more hours than students who didn't. So if you wanted to predict whether or not a student is likely to pass the course, the amount of time they spend studying may be a good predictive feature.
### Comparing numeric variables
Now let's compare two numeric variables. We'll start by creating a bar chart that shows both grade and study hours.
```python
# Create a bar plot of name vs grade and study hours
df_sample.plot(x='Name', y=['Grade','StudyHours'], kind='bar', figsize=(8,5))
```
The chart shows bars for both grade and study hours for each student; but it's not easy to compare because the values are on different scales. Grades are measured in grade points and, in this sample, range from 15 to 97; while study time is measured in hours and ranges from 6 to 16.
A common technique when dealing with numeric data in different scales is to *normalize* the data so that the values retain their proportional distribution, but are measured on the same scale. To accomplish this, we'll use a technique called *MinMax* scaling that distributes the values proportionally on a scale of 0 to 1. You could write the code to apply this transformation; but the **Scikit-Learn** library provides a scaler to do it for you.
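For reference, the underlying calculation is simply *(value - min) / (max - min)* for each column; a hand-rolled sketch might look like this (the `df_manual` name is purely illustrative - the cell that follows uses the Scikit-Learn scaler instead):
```python
# Manual MinMax scaling of the numeric columns (illustrative sketch)
df_manual = df_sample[['Name', 'Grade', 'StudyHours']].copy()
for col_name in ['Grade', 'StudyHours']:
    col_min = df_manual[col_name].min()
    col_max = df_manual[col_name].max()
    df_manual[col_name] = (df_manual[col_name] - col_min) / (col_max - col_min)
df_manual.head()
```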
```python
from sklearn.preprocessing import MinMaxScaler
# Get a scaler object
scaler = MinMaxScaler()
# Create a new dataframe for the scaled values
df_normalized = df_sample[['Name', 'Grade', 'StudyHours']].copy()
# Normalize the numeric columns
df_normalized[['Grade','StudyHours']] = scaler.fit_transform(df_normalized[['Grade','StudyHours']])
# Plot the normalized values
df_normalized.plot(x='Name', y=['Grade','StudyHours'], kind='bar', figsize=(8,5))
```
With the data normalized, it's easier to see an apparent relationship between grade and study time. It's not an exact match, but it definitely seems like students with higher grades tend to have studied more.
So there seems to be a correlation between study time and grade; and in fact, there's a statistical *correlation* measurement we can use to quantify the relationship between these columns.
```python
df_normalized.Grade.corr(df_normalized.StudyHours)
```
0.9117666413789675
The correlation statistic is a value between -1 and 1 that indicates the strength of a relationship. Values above 0 indicate a *positive* correlation (high values of one variable tend to coincide with high values of the other), while values below 0 indicate a *negative* correlation (high values of one variable tend to coincide with low values of the other). In this case, the correlation value is close to 1; showing a strongly positive correlation between study time and grade.
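If you want the correlations for every pair of numeric columns at once, the DataFrame's **corr** method returns a correlation matrix (a quick sketch using the normalized data from above):
```python
# Correlation matrix for the numeric columns
df_normalized[['Grade', 'StudyHours']].corr()
```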
> **Note**: Data scientists often quote the maxim "*correlation* is not *causation*". In other words, as tempting as it might be, you shouldn't interpret the statistical correlation as explaining *why* one of the values is high. In the case of the student data, the statistics demonstrate that students with high grades tend to also have high amounts of study time; but this is not the same as proving that they achieved high grades *because* they studied a lot. The statistic could equally be used as evidence to support the nonsensical conclusion that the students studied a lot *because* their grades were going to be high.
Another way to visualize the apparent correlation between two numeric columns is to use a *scatter* plot.
```python
# Create a scatter plot
df_sample.plot.scatter(title='Study Time vs Grade', x='StudyHours', y='Grade')
```
Again, it looks like there's a discernible pattern in which the students who studied the most hours are also the students who got the highest grades.
We can see this more clearly by adding a *regression* line (or a *line of best fit*) to the plot that shows the general trend in the data. To do this, we'll use a statistical technique called *least squares regression*.
> **Warning - Math Ahead!**
>
> Cast your mind back to when you were learning how to solve linear equations in school, and recall that the *slope-intercept* form of a linear equation looks like this:
>
> \begin{equation}y = mx + b\end{equation}
>
> In this equation, *y* and *x* are the coordinate variables, *m* is the slope of the line, and *b* is the y-intercept (where the line goes through the Y-axis).
>
> In the case of our scatter plot for our student data, we already have our values for *x* (*StudyHours*) and *y* (*Grade*), so we just need to calculate the intercept and slope of the straight line that lies closest to those points. Then we can form a linear equation that calculates a new *y* value on that line for each of our *x* (*StudyHours*) values - to avoid confusion, we'll call this new *y* value *f(x)* (because it's the output from a linear equation ***f***unction based on *x*). The difference between the original *y* (*Grade*) value and the *f(x)* value is the *error* between our regression line and the actual *Grade* achieved by the student. Our goal is to calculate the slope and intercept for a line with the lowest overall error.
>
> Specifically, we define the overall error by taking the error for each point, squaring it, and adding all the squared errors together. The line of best fit is the line that gives us the lowest value for the sum of the squared errors - hence the name *least squares regression*.
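For reference, minimizing the sum of squared errors has a well-known closed-form solution: with $\bar{x}$ and $\bar{y}$ denoting the sample means of *x* and *y*, the least squares coefficients are

\begin{equation}m = \frac{\sum_{i}(x_i - \bar{x})(y_i - \bar{y})}{\sum_{i}(x_i - \bar{x})^{2}}, \qquad b = \bar{y} - m\bar{x}\end{equation}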
Fortunately, you don't need to code the regression calculation yourself - the **SciPy** package includes a **stats** module that provides a **linregress** function to do the hard work for you. This returns (among other things) the coefficients you need for the slope-intercept equation - the slope (*m*) and intercept (*b*) - based on a given pair of variable samples you want to compare.
```python
from scipy import stats
# Create a copy of the relevant columns to work with
df_regression = df_sample[['Grade', 'StudyHours']].copy()
# Get the regression slope and intercept
m, b, r, p, se = stats.linregress(df_regression['StudyHours'], df_regression['Grade'])
print('slope: {:.4f}\ny-intercept: {:.4f}'.format(m,b))
print('so...\n f(x) = {:.4f}x + {:.4f}'.format(m,b))
# Use the function (mx + b) to calculate f(x) for each x (StudyHours) value
df_regression['fx'] = (m * df_regression['StudyHours']) + b
# Calculate the error between f(x) and the actual y (Grade) value
df_regression['error'] = df_regression['fx'] - df_regression['Grade']
# Create a scatter plot of Grade vs StudyHours
df_regression.plot.scatter(x='StudyHours', y='Grade')
# Plot the regression line
plt.plot(df_regression['StudyHours'],df_regression['fx'], color='cyan')
# Display the plot
plt.show()
```
Note that this time, the code plotted two distinct things - the scatter plot of the sample study hours and grades, as before, and then a line of best fit based on the least squares regression coefficients.
The slope and intercept coefficients calculated for the regression line are shown above the plot.
The line is based on the ***f(x)*** values calculated for each **StudyHours** value. Run the following cell to see a table that includes the following values:
- The **StudyHours** for each student.
- The **Grade** achieved by each student.
- The ***f(x)*** value calculated using the regression line coefficients.
- The *error* between the calculated ***f(x)*** value and the actual **Grade** value.
Some of the errors, particularly at the extreme ends, are quite large (up to over 17.5 grade points); but in general, the line is pretty close to the actual grades.
```python
# Show the original x,y values, the f(x) value, and the error
df_regression[['StudyHours', 'Grade', 'fx', 'error']]
```
| | StudyHours | Grade | fx | error |
|---|---|---|---|---|
| 3 | 16.00 | 97.0 | 83.098400 | -13.901600 |
| 10 | 15.50 | 82.0 | 79.941687 | -2.058313 |
| 9 | 14.50 | 74.0 | 73.628262 | -0.371738 |
| 14 | 15.50 | 70.0 | 79.941687 | 9.941687 |
| 21 | 12.00 | 64.0 | 57.844698 | -6.155302 |
| 20 | 12.50 | 63.0 | 61.001410 | -1.998590 |
| 11 | 13.75 | 62.0 | 68.893193 | 6.893193 |
| 6 | 11.50 | 53.0 | 54.687985 | 1.687985 |
| 19 | 12.00 | 52.0 | 57.844698 | 5.844698 |
| 1 | 11.50 | 50.0 | 54.687985 | 4.687985 |
| 0 | 10.00 | 50.0 | 45.217846 | -4.782154 |
| 4 | 9.25 | 49.0 | 40.482777 | -8.517223 |
| 18 | 10.00 | 48.0 | 45.217846 | -2.782154 |
| 2 | 9.00 | 47.0 | 38.904421 | -8.095579 |
| 7 | 9.00 | 42.0 | 38.904421 | -3.095579 |
| 12 | 9.00 | 37.0 | 38.904421 | 1.904421 |
| 16 | 9.00 | 36.0 | 38.904421 | 2.904421 |
| 17 | 6.00 | 35.0 | 19.964144 | -15.035856 |
| 15 | 8.00 | 27.0 | 32.590995 | 5.590995 |
| 8 | 8.50 | 26.0 | 35.747708 | 9.747708 |
| 13 | 8.00 | 15.0 | 32.590995 | 17.590995 |
### Using the regression coefficients for prediction
Now that you have the regression coefficients for the study time and grade relationship, you can use them in a function to estimate the expected grade for a given amount of study.
```python
# Define a function based on our regression coefficients
def f(x):
m = 6.3134
b = -17.9164
return m*x + b
study_time = 14
# Get f(x) for study time
prediction = f(study_time)
# Grade can't be less than 0 or more than 100
expected_grade = max(0,min(100,prediction))
#Print the estimated grade
print ('Studying for {} hours per week may result in a grade of {:.0f}'.format(study_time, expected_grade))
```
Studying for 14 hours per week may result in a grade of 70
So by applying statistics to sample data, you've determined a relationship between study time and grade; and encapsulated that relationship in a general function that can be used to predict a grade for a given amount of study time.
This technique is in fact the basic premise of machine learning. You can take a set of sample data that includes one or more *features* (in this case, the number of hours studied) and a known *label* value (in this case, the grade achieved) and use the sample data to derive a function that calculates predicted label values for any given set of features.
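As a hedged illustration of that premise, the same feature/label fit could also be produced with scikit-learn's `LinearRegression`. The sketch below assumes the `df_sample` DataFrame defined earlier is still in scope; it is not part of the original exercise.
```python
# Illustrative sketch (assumes df_sample from above): fit the same line with scikit-learn
from sklearn.linear_model import LinearRegression

X = df_sample[['StudyHours']]   # feature(s)
y = df_sample['Grade']          # label
model = LinearRegression().fit(X, y)
print('slope: {:.4f}, intercept: {:.4f}'.format(model.coef_[0], model.intercept_))
```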
## Further Reading
To learn more about the Python packages you explored in this notebook, see the following documentation:
- [NumPy](https://numpy.org/doc/stable/)
- [Pandas](https://pandas.pydata.org/pandas-docs/stable/)
- [Matplotlib](https://matplotlib.org/contents.html)
## Challenge: Analyze Flight Data
If this notebook has inspired you to try exploring data for yourself, why not take on the challenge of a real-world dataset containing flight records from the US Department of Transportation? You'll find the challenge in the [/challenges/01 - Flights Challenge.ipynb](./challenges/01%20-%20Flights%20Challenge.ipynb) notebook!
> **Note**: The time to complete this optional challenge is not included in the estimated time for this exercise - you can spend as little or as much time on it as you like!
```python
```
# Overlapping Generations Model (OLG)
### The Usefulness of the Model
The key ingredient of the OLG model is exogenous population turnover. Individuals in the population live for two periods, one in which they work and save, and the second in which they are 'old', do not work, but rather live off their savings.
The model can be used to assess various systems of 'social security'; is there a place for the government in providing for the older generation?
In the following, we describe the basic model setup, and go through how to solve the model analytically. A reader who is familiar with the OLG model may choose to simply skip these next steps and go straight to our coded implementation - our model setup is entirely standard, and as presented e.g. in Macro III at KU.
### Model Setup
Time is discrete and infinite, $t = 0,1,2,...$.
Population grows at a constant rate, n:
$$ L_t = L_{t-1}(1+n)$$
This implies, that in every period t, there are $L_t$ 'young' (working) individuals, and $L_{t-1}$ 'old' (non-working) individuals. Agents derive utility from consumption in each of the two periods they are alive:
$$U_t = u(c_{1t})+\frac{1}{1+\rho} \cdot u(c_{2t+1})$$
Here $c_{1t}$ is consumption of the young at time $t$, $c_{2t+1}$ is consumption of the old at time $t+1$, and $\rho$ is the rate of time preference (so $\frac{1}{1+\rho}$ is the discount factor).
Let $r_{t+1}$ denote the interest rate between $t$ and $t+1$. Then individuals face the following budget constraints in each period of their lives:
$$c_{1t} + s_{t} = w_t$$
$$c_{2t+1} = (1+r_{t+1})\cdot s_t$$
Substituting for s, we get the following life-time constraint:
$$c_{1t} + \frac{c_{2t+1}}{1+r_{t+1}} = w_t$$
In every period t, agents born at time t solve the following problem:
$$\max_{c_{1t},c_{2t+1}} u(c_{1t})+\frac{1}{1+\rho} u(c_{2t+1})$$
Subject to the constraint from above:
$$c_{1t} + \frac{c_{2t+1}}{1+r_{t+1}} = w_t$$
Production is assumed to take place with CRS technology, competitive markets and profit-maximizing firms. This yields:
$$r_t = f'(k_t)$$
$$w_t = f(k_t)-f'(k_t)k_t$$
Where $k_t = \frac{K_t}{L_t}$. Recall, that $L_t$ refers to the *working population* at time t.
### Solving the Household Problem
The household problem is a standard optimization problem subject to a constraint. We set up the lagrangian:
$$ L(c_{1t},c_{2t+1},\lambda) = u(c_{1t}) + \frac{1}{1+\rho} u(c_{2t+1}) + \lambda[w_t-c_{1t}-\frac{c_{2t+1}}{1+r_{t+1}}]$$
Differentiate wrt. consumption (in each period):
$$\frac{\partial L}{\partial c_{1t}} = u'(c_{1t}) -\lambda $$
$$\frac{\partial L}{\partial c_{2t+1}} = \frac{1}{1+\rho} u'(c_{2t+1}) -\frac{\lambda}{1+r_{t+1}} $$
Equate to zero (first order conditions) and substitute for the lagrange multiplier $\lambda$ to obtain the Euler Equation:
$$u'(c_{1t}) = \frac{1+r_{t+1}}{1+\rho} u'(c_{2t+1})$$
### Characterizing Optimal Savings
Substitute the budget constrains into the Euler Equation to get:
$$u'(w_t-s_t) = \frac{1+r_{t+1}}{1+\rho} u'((1+r_{t+1})s_t)$$
This implicitly defines optimal savings as a function of wage and interest rate, i.e. $s(w_t,r_{t+1})$
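For example, with logarithmic utility $u(c) = \ln c$ (the functional form used in the code below), the Euler equation can be solved explicitly:
$$\frac{1}{w_t - s_t} = \frac{1+r_{t+1}}{1+\rho}\cdot\frac{1}{(1+r_{t+1})s_t} \quad\Rightarrow\quad s_t = \frac{w_t}{2+\rho}$$
so under log utility savings are a constant fraction of the wage and do not depend on the interest rate.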
### Law of Motion for Capital
Knowing the individual decisions allows us to aggregate economy-wide. The aggregate capital stock at time $t+1$ equals aggregate savings at time $t$, minus aggregate dissaving at time $t$, plus the un-depreciated capital carried over from time $t$.
$$K_{t+1} = S_t -K_t+(1-\delta)K_t$$
$$\Rightarrow K_{t+1} = S_t - \delta K_t$$
$$\Rightarrow k_{t+1}(1+n) = s_t-\delta k_t$$
Substituting for savings, substituting for wage and interest rate, and assuming $\delta = 0$:
$$k_{t+1} (1+n) = s\big(f(k_t)-k_tf'(k_t),\,f'(k_{t+1})\big)$$
Which implicitly defines the law of motion for aggregate capital per worker.
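As a concrete special case, combining the log-utility savings rule $s_t = w_t/(2+\rho)$ with Cobb-Douglas factor prices $w_t = (1-\alpha)k_t^{\alpha}$ gives the explicit law of motion (this is the expression the sympy derivation below should reproduce):
$$k_{t+1} = \frac{1}{1+n}\left(\frac{(1-\alpha)k_t^{\alpha}}{2+\rho} - \delta k_t\right)$$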
```python
import numpy as np
import sympy as sp
import matplotlib.pyplot as plt
from modelproject import *
```
```python
sp.init_printing()
```
We solve the OLG model using sympy, thus we begin by setting up symbols for the variables we use
```python
rho = sp.symbols('rho')
delta = sp.symbols('delta')
n = sp.symbols('n')
c1 = sp.symbols('c_{1t}')
c2 = sp.symbols('c_{2t+1}')
r = sp.symbols('r_{t+1}')
w = sp.symbols('w_t')
s = sp.symbols('s_t')
k = sp.symbols('k_t')
kp1 = sp.symbols('k_{t+1}')
lmbda = sp.symbols('lambda')
```
Next we define the standard log utility, and a placeholder that keeps the code general for the derivations. By using `sp.Function` for the utility we can then substitute in the specific functional form at a later stage.
```python
u_undetermined = sp.Function('u')
def log_u(c):
return sp.log(c)
def U(c1, c2, rho, u):
return u(c1) + 1 / (1 + rho) * u(c2)
U(c1, c2, rho, u = u_undetermined)
```
We define budget constraints and calculate the intertemporal constraint
```python
period_1_budget = sp.Eq(w, c1 + s)
period_2_budget = sp.Eq(c2, (1+r)*s)
period_1_budget, period_2_budget
```
```python
inter_budget = period_1_budget.subs(s, sp.solve(period_2_budget, s)[0])
inter_budget
```
Next we set up the Lagrangian and calculate the relevant first derivatives
```python
lagrangian = U(c1,c2,rho, u = u_undetermined) - lmbda * (inter_budget.rhs - w)
lagrangian
```
```python
dc1 = sp.diff(lagrangian, c1)
dc2 = sp.diff(lagrangian, c2)
dlmbda = sp.diff(lagrangian, lmbda)
dc1, dc2, dlmbda
```
Calculate the Euler equation
Finally we call our `make_euler_equation` function with the derivatives of the Lagrangian as inputs. We then substitute in the within-period budget constraints to get an expression in terms of savings, wages and the interest rate.
```python
euler_eq = make_euler_equation(dc1, dc2, u = u_undetermined, c1 = c1, lmbda = lmbda)
euler_eq
```
```python
euler_eq = euler_eq.subs(c1, w - s).subs(c2, period_2_budget.rhs)
euler_eq
```
Implicitly the Euler equation determines $s$; now we move on to determine the evolution of capital in the economy.
This is also a good spot to set the actual functional form of $u$. We use a $\log$ function here as it is analytically solvable, and explore using the square root further down.
```python
euler_eq = euler_eq.replace(u_undetermined, log_u).doit()
euler_eq
```
Next we set up the firms. We show the aggregate production function but then work with the per capita normalized version to calculate expressions for the equilibrium interest rate and wage.
```python
Y = sp.symbols('Y_t')
K = sp.symbols('K_t')
L = sp.symbols('L_t')
y = sp.symbols('y_t')
alpha = sp.symbols('alpha')
production_function = sp.Eq(
Y,
K**alpha * L**(1-alpha)
)
production_function
```
```python
norm_prod_func = sp.Eq(y, k**alpha)
norm_prod_func
```
Define interest rate and wage
```python
interest_rate = sp.Eq(r, sp.Derivative(norm_prod_func.rhs, k)).doit()
wage = sp.Eq(w, norm_prod_func.rhs - k*sp.Derivative(norm_prod_func.rhs, k)).doit()
interest_rate, wage
```
Finally from a standard capital depreciation identity we derive a first expression for the evolution of capital in the economy.
```python
evolution = sp.Eq(kp1, 1/(1+n)* (s - delta*k) )
evolution
```
Using the Euler equation and the capital evolution path we can derive the transition path of the economy
```python
savings_of_k = sp.solve(euler_eq, s)[0].subs(w, wage.rhs)
# Only substitute the equilibrium interest rate if r actually appears in the savings expression
if r in savings_of_k.atoms():
savings_of_k = savings_of_k.subs(r, interest_rate.rhs)
transition_eq = evolution.subs(s, savings_of_k)
transition_eq
```
In the following section we show visualizations of the capital accumulation path for various parametrizations, including varying $\alpha$ and $n$.
```python
_transition_func = sp.lambdify((k, rho, delta, n, alpha, r), transition_eq.rhs)
def transition_func(k, alpha, rho, delta, n, r = 0):
return _transition_func(k, rho, delta, n , alpha, r)
```
```python
_equilibrium = sp.lambdify(
(alpha, n, rho, delta),
sp.solve(transition_eq.subs(kp1, k), k)[0]
)
def equilibrium(alpha, n, rho, delta):
return _equilibrium(alpha, n, rho, delta)
```
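For instance, the analytical steady state can be evaluated directly for one set of parameter values (the numbers below are purely illustrative):
```python
# Illustrative only: steady-state capital per worker for one parametrization
print(equilibrium(alpha=0.3, n=0.02, rho=0.05, delta=0.02))
```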
Visualizing variation in $\alpha$
```python
xr = np.linspace(0,2,1000)
_r = 0.05
_d = 0.02
_n = 0.02
for _a in np.linspace(0.1, 0.5, 5):
res = [transition_func(k = x, alpha = _a, rho = _r, delta = _d, n = _n) for x in xr]
kstar = equilibrium(alpha = _a, rho = _r, n = _n, delta = _d)
plt.plot(xr, res, color = 'blue', alpha = 1)
plt.scatter([kstar], [kstar], color = 'red')
plt.annotate(f'$\\alpha$={round(_a,2)}', xy= (kstar + 0.01, kstar - 0.02))
plt.plot(xr, xr, color = 'black')
plt.xlim(0,0.5)
plt.ylim(0,0.5)
plt.xlabel('$k_t$')
plt.ylabel('$k_{t+1}$')
plt.title('Transition curves for varying $\\alpha$')
```
Visualizing variation in $n$
```python
xr = np.linspace(0,2,1000)
_r = 0.05
_d = 0.02
_a = 0.2
#_n = 0.02
for _n in np.linspace(0, 1, 5):
res = [transition_func(k = x, alpha = _a, rho = _r, delta = _d, n = _n) for x in xr]
kstar = equilibrium(alpha = _a, rho = _r, n = _n, delta = _d)
plt.plot(xr, res, color = 'blue', alpha = 1)
plt.scatter([kstar], [kstar], color = 'red')
plt.annotate(f'$n$={round(_n,2)}', xy= (kstar + 0.01, kstar - 0.02))
plt.plot(xr, xr, color = 'black')
plt.xlim(0,0.5)
plt.ylim(0,0.5)
plt.xlabel('$k_t$')
plt.ylabel('$k_{t+1}$')
plt.title('Transition curves for varying $n$')
```
Visualizing the convergence to the steady state
```python
k_ = 0.0001
xr = range(10)
xr2 = np.linspace(0,1,1000)
_r = 0.05
_d = 0.05
_a = 0.5
_n = 0
out = list()
for _ in xr:
k_ = transition_func(k = k_, alpha = _a, rho = _r, delta = _d, n = _n)
out.append(k_)
```
```python
res = [transition_func(k = x, alpha = _a, rho = _r, delta = _d, n = _n) for x in xr2]
plt.plot(xr, xr, color = 'black')
plt.plot(xr2, res, color = 'blue', alpha = 1)
plt.step(out[:-1], out[1:], where = 'post', color = 'red', linestyle = '--', alpha = 0.8)
plt.scatter(out[:-1], out[1:], color = 'red', alpha = 0.8)
plt.xlabel('$k_t$')
plt.ylabel('$k_{t+1}$')
plt.xlim(0,0.1)
plt.ylim(0,0.1)
```
## Part 2 - numerical solution with sqrt utility
In the section below we first show that there is no analytical solution to the OLG model when the within-period utility is given by $u(c)=\sqrt{c}$. We then show graphically that even though no analytical solution exists, it can be found numerically, and we iteratively identify the approximate value of $k^*$ with square root utility.
The first steps are identical to the ones covered above.
```python
def sqrt_u(c):
return sp.sqrt(c)
```
```python
euler_eq = make_euler_equation(dc1, dc2, u = u_undetermined, c1 = c1, lmbda = lmbda)
euler_eq = euler_eq.subs(c1, w - s).subs(c2, period_2_budget.rhs)
euler_eq = euler_eq.replace(u_undetermined, sqrt_u).doit()
euler_eq
```
```python
savings_of_k = sp.solve(euler_eq, s)[0].subs(w, wage.rhs)
savings_of_k
```
```python
savings_of_k = savings_of_k.subs(r, interest_rate.rhs)
savings_of_k
```
```python
transition_eq = evolution.subs(s, savings_of_k)
transition_eq
```
```python
_tf2 = sp.lambdify((k, alpha, rho, delta, n), transition_eq.rhs)
def tf2(k, alpha, rho, delta, n):
return _tf2(k, alpha, rho, delta, n)
```
Notice that the analytical solution is now empty, i.e. sympy cannot find a closed-form steady state.
```python
sp.solve(transition_eq.subs(kp1, k), k)
```
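Even without a closed form, the steady state can be located with a standard root finder. The sketch below uses `scipy.optimize.brentq` on $k_{t+1}(k) - k$; the bracket is an assumption based on the transition plots in this notebook.
```python
# Minimal numerical sketch: find k* such that tf2(k*) = k* for one parametrization.
# The bracket [0.01, 1.0] is assumed to contain exactly one crossing of the 45-degree line.
from scipy.optimize import brentq

k_star = brentq(lambda k: tf2(k, alpha=0.5, rho=0.05, delta=0.05, n=0.0) - k, 0.01, 1.0)
print(k_star)
```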
Here we compare the steady state of a model with log utility and one with square root utility. Notice how the change in utility drastically changes the steady state level of capital.
```python
k_ = 0.0001
k_2 = 0.0001
xr = range(10)
xr2 = np.linspace(0,1,1000)
_r = 0.05
_d = 0.05
_a = 0.5
_n = 0
out = list()
out2 = list()
for _ in xr:
k_ = tf2(k = k_, alpha = _a, rho = _r, delta = _d, n = _n)
k_2 = transition_func(k = k_2, alpha = _a, rho = _r, delta = _d, n = _n)
out.append(k_)
out2.append(k_2)
res = [tf2(k = x, alpha = _a, rho = _r, delta = _d, n = _n) for x in xr2]
res2 = [transition_func(k = x, alpha = _a, rho = _r, delta = _d, n = _n) for x in xr2]
plt.plot(xr, xr, color = 'black')
plt.plot(xr2, res, color = 'blue', alpha = 1, label = 'sqrt')
plt.plot(xr2, res2, color = 'navy', alpha = 1, label = 'log')
plt.step(out[:-1], out[1:], where = 'post', color = 'red', linestyle = '--', alpha = 0.8)
plt.scatter(out[:-1], out[1:], color = 'red', alpha = 0.8)
plt.step(out2[:-1], out2[1:], where = 'post', color = 'red', linestyle = '--', alpha = 0.8)
plt.scatter(out2[:-1], out2[1:], color = 'red', alpha = 0.8)
plt.xlabel('$k_t$')
plt.ylabel('$k_{t+1}$')
plt.xlim(0,0.2)
plt.ylim(0,0.2)
plt.legend()
```
Finally using the simple convergence algorithm shown in the figure we can approximate the steady state level of capital in an economy characterized by square root utility.
```python
k_ = 0.0001
for _ in range(10000):
k_ = tf2(k = k_, alpha = _a, rho = _r, delta = _d, n = _n)
print(k_)
```
0.10949660035925014
# Ensemble Learning
* The basic idea of ensemble learning is to have multiple learning algorithms for the same problem and combine their results to make a final prediction
* There are multiple types on ensemble learning. Common approaches include:
* Boosting
* Bagging/Bootstrapping
* Random Forests
* Mixture of Experts
## Boosting and Bagging
* When you have one data set, you usually train an algorithm and learn a single set of parameters. However, when we do this, we have no idea how stable or variable those estimated parameters are.
* Bootstrapping can show us the variation in estimated parameter values given a particular data set. Sometimes, it can also help to improve our predictions.
* Essentially, to perform bootstrapping, you sample from your data set *with replacement* and train your algorithm to estimate the parameters with each sampled subset. You can then look at how much the parameters vary with each sampled subset and you can also combine your estimates from each trained method by averaging over all of the results for regression:
\begin{equation}
y_{com}(\mathbf{x}) = \frac{1}{M} \sum_{m=1}^M y_m(\mathbf{x})
\end{equation}
* You can aggregate results over all your bootstrap samples using majority vote for classification.
```python
import numpy as np
import matplotlib.pyplot as plt
import math
import textwrap
%matplotlib inline
def generateRandData(N, l, u, gVar):
'''generateRandData(N, l, u, gVar): Generate N uniformly random data points in the range [l,u) with zero-mean Gaussian random noise with variance gVar'''
x = np.random.uniform(l,u,N)
e = np.random.normal(0,gVar,N)
t = np.sin(2*math.pi*x) + e
return x,t
def fitdataReg(x,t,M,la):
'''fitdata(x,t,M): Fit a polynomial of order M to the data (x,t)'''
X = np.array([x**m for m in range(M+1)]).T
w = np.linalg.inv(X.T@X+(la*np.identity(M+1)))@X.T@t
return w
def plotPoly(x,t,xrange, y, esty, subplotloc,la=0):
#plot everything
plt.subplot(*subplotloc) #identify the subplot to use
# plt.tight_layout()
plt.ylim([-2,2])
p1 = plt.plot(xrange, y, 'g') #plot true value
p2 = plt.plot(x, t, 'bo') #plot training data
p3 = plt.plot(xrange, esty, 'r') #plot estimated value
#add title, legend and axes labels
plt.ylabel('t') #label x and y axes
plt.xlabel('x')
def bootstrapRegression(M, numData,percentSample,numSamples):
#generate data
x,t = generateRandData(numData,0,1,1)
numDataSamples = round(percentSample*numData)
subplotloc = [2, round(numSamples/2), 1]
fig = plt.figure()
xrange = np.arange(0.05,.95,0.001) #get equally spaced points in the xrange
esty = np.empty([numSamples, xrange.shape[0]])
for iter in range(numSamples):
#select a random subset of the data
rp = np.random.permutation(numData)
x_sub = x[rp[0:numDataSamples-1]]
t_sub = t[rp[0:numDataSamples-1]]
#fit the random subset
w = fitdataReg(x_sub,t_sub,M,0)
#plot results
subplotloc[2] = iter+1
y = np.sin(2*math.pi*xrange) #compute the true function value
X = np.array([xrange**m for m in range(w.shape[0])]).T
esty[iter,:] = X@w #compute the predicted value
plotPoly(x_sub,t_sub,xrange,y,esty[iter,:],subplotloc)
#combine the bootstrapped results
comy = esty.mean(0)
yerr = esty.var(0)
# compare to full data set
fig = plt.figure()
plotPoly(x,t,xrange,y,comy,[1, 1, 1])
plt.errorbar(xrange, comy, yerr=yerr, fmt='r.',ms=10,errorevery=10)
fig = plt.figure()
w = fitdataReg(x,t,M,0)
y = np.sin(2*math.pi*xrange) #compute the true function value
X = np.array([xrange**m for m in range(w.shape[0])]).T
yy = X@w #compute the predicted value
plotPoly(x,t,xrange,y,yy, [1, 1, 1])
#Figure 1.7 from text
bootstrapRegression(5, 50,.75,20)
```
# Boosting: AdaBoost
* Goal: Combine base ("weak") classifiers to form a committee whose performance is better than any of the single base classifiers.
* The base classifiers are trained in sequence (not in parallel like in bootstrapping)
* Each base classifier is trained using a weighted data set (different weights for each base classifier)
* Points that are misclassified by a base classifier are weighted more heavily while training the next base classifier
* Consider a two class classification problem with $\mathbf{X} = \left\{ \mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n\right\}$ with corresponding labels $y_i \in \left\{ -1,1\right\}$.
* The goal is to construct a classifier of the form:
\begin{equation}
f(\mathbf{x}) = sign(F(\mathbf{x}))
\end{equation}
where
\begin{equation}
F(\mathbf{x}) = \sum_{k=1}^K \frac{1}{2}\alpha_k \phi(\mathbf{x}; \theta_k)
\end{equation}
where $\phi(\mathbf{x}; \theta_k)$ is the base classifier.
* We need to determine the parameter values for each base classifier:
\begin{eqnarray}
\arg \min_{\alpha_k, \theta_k} \sum_{i=1}^N \exp\left(-y_i F(\mathbf{x}_i) \right)
\end{eqnarray}
* This cost function penalizes the samples that are incorrectly classified ($y_iF(\mathbf{x}_i) < 0$) heavily
* Direct optimization of all $\alpha$s and $\theta$s is difficult. So, we iteratively optimize (which is sub-optimal). At each stage, we train one base classifier holding fixed all those that have already been trained.
* Let:
\begin{eqnarray}
F_m(\mathbf{x}) &=& \sum_{k=1}^m \frac{1}{2}\alpha_k \phi(\mathbf{x}; \theta_k)\\
&=& F_{m-1}(\mathbf{x}) + \frac{1}{2}\alpha_m \phi(\mathbf{x}; \theta_m)
\end{eqnarray}
* At step $m$, we optimize for $\alpha_m$ and $\theta_m$ where $F_{m-1}(\mathbf{x})$ is fixed:
\begin{eqnarray}
(\alpha_m, \theta_m) &=& \arg \min_{\alpha, \theta} J(\alpha, \theta)\\
&=& \arg \min_{\alpha, \theta} \sum_{i=1}^N \exp\left( -y_i\left( F_{m-1}(\mathbf{x}_i) +\frac{1}{2} \alpha\phi(\mathbf{x}_i; \theta)\right)\right)
\end{eqnarray}
* So, let's optimize this in two steps: first $\theta_m$ and then $\alpha_m$
\begin{eqnarray}
\theta_m &=& \arg \min_{\theta} \sum_{i=1}^N \exp\left( -y_i\left( F_{m-1}(\mathbf{x}_i) + \frac{1}{2}\alpha\phi(\mathbf{x}_i; \theta)\right)\right)\\
&=& \arg \min_{\theta} \sum_{i=1}^N w_i^{(m)} \exp\left( -\frac{1}{2}y_i\alpha\phi(\mathbf{x}_i; \theta)\right)
\end{eqnarray}
where
\begin{equation}
w_i^{(m)} = \exp\left(-y_iF_{m-1}(\mathbf{x}_i)\right)
\end{equation}
* Splitting the sum into the sets $T_m$ and $M_m$ of correctly and incorrectly classified samples respectively, this can be re-written as:
\begin{eqnarray}
\theta_m &=& \arg \min_{\theta} \exp\left(-\alpha_m/2\right)\sum_{n \in T_m}w_n^{(m)} + \exp\left(\alpha_m/2\right)\sum_{n \in M_m}w_n^{(m)} \nonumber \\
&=& \left( \exp\left(\alpha_m/2\right) - \exp\left(-\alpha_m/2\right)\right)\sum_{i=1}^Nw_i^{(m)} I(\phi_m(\mathbf{x}_i;\theta) \ne y_i) + \exp\left(-\alpha_m/2\right)\sum_{i=1}^Nw_i^{(m)}
\end{eqnarray}
* This is equivalent to minimizing
\begin{equation}
\arg \min_{\theta} \sum_{i=1}^N w_i^{(m)} I(\phi_m(\mathbf{x}_i;\theta) \ne y_i)
\end{equation}
* Once we have the optimal classifier at step $m$ (i.e., $\theta_m$), then we determine the $\alpha_m$ values
\begin{eqnarray}
\sum_{y_i\phi(\mathbf{x}_i;\theta_m)<0}w_i^{(m)} = P_m\\
\sum_{y_i\phi(\mathbf{x}_i;\theta_m)>0}w_i^{(m)} = 1 - P_m
\end{eqnarray}
* Plugging this into J, we get:
\begin{eqnarray}
\alpha_m = \arg\min_{\alpha} \left\{ \exp(-\alpha)(1-P_m) + \exp(\alpha)P_m\right\}
\end{eqnarray}
* Taking the derivative with respect to $\alpha$ and setting it to zero, we get:
\begin{equation}
\alpha_m = \frac{1}{2}\ln\frac{1-P_m}{P_m}
\end{equation}
* Once you get $\theta_m$ and $\alpha_m$, you compute the weights for the next step:
\begin{equation}
w_i^{(m+1)} = \frac{\exp(-y_iF_m(\mathbf{x}_i))}{Z_m} = \frac{\exp(-y_i\alpha_m\phi(\mathbf{x}_i;\theta_m))}{Z_m}
\end{equation}
where
\begin{equation}
Z_m = \sum_{i=1}^N w_i^{(m)}\exp\left(-y_i\alpha_m\phi(\mathbf{x}_i;\phi_m)\right)
\end{equation}
* Notice that the weight corresponding to a sample is increased (or decreased) with respect to its value in the previous iteration
* Notice that the amount of increase or decrease depends on $\alpha_m$ which controls the relative importance of the $m^{th}$ term in building up the final classifier
```python
```
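The empty cell above can be used for experimentation. As a minimal illustrative sketch (not part of the lecture code), the update rules derived above can be implemented with scikit-learn decision stumps acting as the base classifiers $\phi(\mathbf{x};\theta_m)$, assuming labels in $\{-1,+1\}$:
```python
# Sketch of AdaBoost with decision stumps; labels y are assumed to be in {-1, +1}.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_train(X, y, K=20):
    N = X.shape[0]
    w = np.ones(N) / N                          # initial sample weights w_i^(1)
    stumps, alphas = [], []
    for m in range(K):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)        # minimizes the weighted misclassification sum
        pred = stump.predict(X)
        P_m = np.clip(np.sum(w * (pred != y)), 1e-10, 1 - 1e-10)   # weighted error
        alpha_m = 0.5 * np.log((1 - P_m) / P_m)
        w = w * np.exp(-alpha_m * y * pred)     # up-weight misclassified samples
        w = w / w.sum()                         # normalization plays the role of Z_m
        stumps.append(stump)
        alphas.append(alpha_m)
    return stumps, alphas

def adaboost_predict(stumps, alphas, X):
    F = sum(a * s.predict(X) for a, s in zip(alphas, stumps))
    return np.sign(F)
```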
## Random Forests
* A forest is made up of many trees...
* For classification/regression, put an input vector down each of the trees in the forest. For classification, classify the data point using majority vote. For regression, average the values
* Each tree is grown using:
* Sample $N$ data points (with replacement, i.e., a bootstrap sample) from the full training data set
* Specify a number $d << D$. $d$ variables are selected at random out of all $D$ features to determine the split on the node. Select the best of the $d$ features to split at that node
* Grow each tree as much as possible (i.e., no pruning or stopping early)
* Error relates to correlation between the trees. Greater correlation leads to greater error. *Does this make sense?*
* Error also relates to the strength of each individual tree. Better individual trees lead to lower error
* https://www.stat.berkeley.edu/~breiman/RandomForests/cc_home.htm
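A short usage sketch with scikit-learn follows; the dataset and parameter values below are invented purely for illustration.
```python
# Illustrative random forest: each split considers only d << D randomly chosen features.
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
forest = RandomForestClassifier(
    n_estimators=100,   # number of trees, each grown on a bootstrap sample
    max_features=4,     # d: features considered at each split (out of D = 20)
    bootstrap=True,     # sample N points with replacement for each tree
    random_state=0)
forest.fit(X, y)
print(forest.score(X, y))   # majority-vote accuracy on the training data
```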
# Dropout
* This is a method to help prevent overfitting and regularize a network.
* The approach attempts to minimize co-dependencies between neurons and enhance robustness of network
* Dropout has one parameter $p$. In each iteration, you randomly exclude each neuron with probability $1-p$ during the training pass (in both forward and backward propagation). Each iteration, you resample which neurons to keep and which to dropout.
* Dropout is related to the concept of ensemble learning with the unique case that the various models in the ensemble share parameters and these models are "combined" into a single model/network at test as opposed to training a fusion model or doing a simple average between outputs.
* During test, you use all neurons all the time.
* Please see and read: https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf
```python
```
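The empty cell above is a natural place to experiment. Below is a minimal sketch of the "inverted dropout" variant, in which activations are rescaled by $1/p$ during training so no rescaling is needed at test time (the original paper instead scales the weights by $p$ at test time):
```python
# Sketch of inverted dropout on a layer's activations; p is the probability of KEEPING a neuron.
import numpy as np

def dropout_forward(a, p=0.8, train=True):
    if not train:
        return a                                   # at test time, use all neurons
    mask = (np.random.rand(*a.shape) < p) / p      # zero out neurons with prob 1-p, rescale by 1/p
    return a * mask

# Example: roughly 20% of activations are zeroed
print(dropout_forward(np.ones((2, 5)), p=0.8))
```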
<a href="https://colab.research.google.com/github/metamath1/noviceml/blob/master/CHAP_06.ipynb" target="_parent"></a>
# Chapter 6 figures and example code
```
# Basic imports
# They are not repeated in the plotting code below.
# When copying any of the plotting code elsewhere,
# copy it together with this import cell.
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
from mpl_toolkits import mplot3d
import matplotlib.font_manager as mfm
import sympy
# numpy 출력 형식 지정
np.set_printoptions(precision=4, linewidth=150)
# matplotlib 스타일 지정
mpl.style.use('bmh')
mpl.style.use('seaborn-whitegrid')
style = plt.style.library['bmh']
# Store the style colors in a list for easy access
style_colors = [ c['color'] for c in style['axes.prop_cycle'] ]
# 그림을 로컬 폴더에 저장하고 싶으면 True로 수정
file_print = False
```
```
# Clone the github repo to access the data files
# !Caution!
# Run this cell only when working in a Google Colab environment.
# Do not run it in a local environment.
!git clone -l -s https://github.com/metamath1/noviceml.git noviceml
```
Cloning into 'noviceml'...
warning: --local is ignored
remote: Enumerating objects: 26, done.[K
remote: Counting objects: 100% (26/26), done.[K
remote: Compressing objects: 100% (23/23), done.[K
remote: Total 74 (delta 10), reused 12 (delta 2), pack-reused 48[K
Unpacking objects: 100% (74/74), done.
```
# Settings for using a Korean font in plots when running in Google Colab
path = 'noviceml/font/NanumBarunGothic.ttf'
fontprop = mfm.FontProperties(fname=path, size=18)
# Settings for using a Korean font in plots when running in a local environment
# https://financedata.github.io/posts/matplotlib-hangul-for-ubuntu-linux.html
# Uncomment the code below and change the path to the font file on your own computer.
# path = '/usr/share/fonts/truetype/nanum/NanumBarunGothic.ttf'
# fontprop = mfm.FontProperties(fname=path, size=18)
```
## Differentiation with SymPy
$$
(x^2 + 2x) \log x
$$
```
x = sympy.Symbol('x')
f = (x**2 + 2*x)*sympy.log(x)
df = sympy.diff(f, x)
df
# >>> (2*x+2)*log(x) + (x**2 + 2*x)/x
```
(2*x + 2)*log(x) + (x**2 + 2*x)/x
```
sympy.simplify(df)
# >>> x + 2*(x + 1)*log(x) + 2
```
x + 2*(x + 1)*log(x) + 2
- With manual differentiation, we declare the functions using the result above and use them directly
```
f = lambda x : (x**2 + 2*x)*np.log(x)
df = lambda x : (2*x+2)*np.log(x) + (x+2)
print(f(1))
print(df(1))
```
0.0
3.0
## Numerical differentiation
```
############################################################
# Numerical differentiation function
############################################################
def numer_deriv(f, x, h=0.001, method="center") :
"""
    Numerically compute {f(x+h) - f(x)} / h.
    f : the function to differentiate, used to evaluate function values at the given point
    x : the point at which to compute the derivative;
        an int or float for a single variable,
        a numpy array (a (d,) vector) for multiple variables
    h : the small interval used to compute the ratio
"""
if type(x) in (float, int) : # ---- [1]
grad = [0.0]
x_ = [x]
var_type = 'scalar'
else :
grad = np.zeros(x.shape) # ---- [2]
x_ = x.copy().astype('float32')
var_type = 'vector'
for i, xi in enumerate(x_) : # ---- [3]
original_value = x_[i]
if method=='forward' : # ---- [4]
x_[i] = original_value + h
else :
x_[i] = original_value + (h/2)
if var_type == 'scalar' : # ---- [5]
gradplus = f(x_[i])
else :
gradplus = f(x_)
if method=='forward' : # ---- [6]
x_[i] = original_value
else:
x_[i] = original_value - (h/2)
if var_type == 'scalar' :
gradminus = f(x_[i])
else :
gradminus = f(x_)
grad[i] = (gradplus - gradminus) / h # ---- [7]
if var_type == 'scalar' : # ---- [8]
return grad[0]
else :
return grad
```
```
print(numer_deriv(f, 1, h=0.5, method="forward"))
print(numer_deriv(f, 1, h=0.5, method="center"))
```
4.257383635135726
2.9997299032915508
## Figure 6-1
```
f = lambda x : (x**2 + 2*x)*np.log(x)
df = lambda x : (2*x+2)*np.log(x) + (x+2)
fig, (ax1, ax2) = plt.subplots(1, 2, sharex=True, sharey=True)
fig.set_size_inches((15,7))
ax1.xaxis.set_tick_params(labelsize=18)
ax1.yaxis.set_tick_params(labelsize=18)
ax1.set_xlabel(r'$x$', fontsize=25)
ax1.set_ylabel(r'$y$', fontsize=25)
ax1.grid(False)
ax2.xaxis.set_tick_params(labelsize=18)
ax2.yaxis.set_tick_params(labelsize=18)
ax2.set_xlabel(r'$x$', fontsize=25)
ax2.set_ylabel(r'$y$', fontsize=25)
ax2.grid(False)
x = np.linspace(0.6, 1.7, 100)
x0 = 1.0
h = 0.5
ax1.plot(x, f(x), color='k', lw=1)
ax1.set_title("전방 차분", fontproperties=fontprop)
ax1.plot(x0, f(x0), 'o', markersize=8, color='k', zorder=3)
ax1.plot([x0-h, x0+h], [f(x0)-df(x0)*h, f(x0)+df(x0)*h], '--', lw=1, color='k')
ax1.plot([x0, x0+h], [f(x0), f(x0+h)], lw=2, color='k')
ax1.set_xlabel(r'$x$', fontsize=20)
ax1.set_ylabel(r'$y$', fontsize=20)
ax2.plot(x, f(x), color='k', lw=1)
ax2.set_title("중앙 차분", fontproperties=fontprop)
ax2.plot(x0, f(x0), 'o', markersize=8, color='k', zorder=3)
ax2.plot([x0-h, x0+h], [f(x0)-df(x0)*h, f(x0)+df(x0)*h], '--', lw=1, color='k')
ax2.plot([x0-h/2, x0+h/2], [f(x0-h/2), f(x0+h/2)], lw=2, color='k')
ax2.set_xlabel(r'$x$', fontsize=20)
ax2.set_ylabel(r'$y$', fontsize=20)
if file_print == True :
fig.savefig("imgs/chap6/fig6-1.png", dpi=300, bbox_inches='tight')
fig.savefig("imgs/chap6/fig6-1.pdf", format='pdf', bbox_inches='tight')
plt.show()
```
## Differentiating equation (6.5)
$$
f(x,y)=(x^2+2x)\ln{y}
$$
```
f_xy = lambda x : (x[0]**2 + 2*x[0])*np.log(x[1])
numer_deriv(f_xy, np.array([1, 2]))
```
array([2.7726, 1.4989])
```
x = sympy.Symbol('x')
y = sympy.Symbol('y')
f_xy_sympy = (x**2 + 2*x)*sympy.log(y)
df_xy_x = sympy.diff(f_xy_sympy, x)
df_xy_y = sympy.diff(f_xy_sympy, y)
print(df_xy_x)
print(df_xy_y)
print("{:.4f}".format(df_xy_x.evalf(subs={x:1.0, y:2.0})))
print("{:.4f}".format(df_xy_y.evalf(subs={x:1.0, y:2.0})))
```
(2*x + 2)*log(y)
(x**2 + 2*x)/y
2.7726
1.5000
## Automatic differentiation
### PyTorch
```
import torch # import PyTorch
```
#### Tensors
```
np.random.seed(0) # so that random array generation always gives the same result
x = np.random.rand(6).reshape(2,3)
x_tensor = torch.tensor(x)
x_from_numpy = torch.from_numpy(x)
x_Tensor = torch.Tensor(x)
x_as_tensor = torch.as_tensor(x)
print(x, x.dtype)
print(x_tensor, x_tensor.dtype, x_tensor.requires_grad)
print(x_from_numpy, x_from_numpy.dtype, x_from_numpy.requires_grad)
print(x_Tensor, x_Tensor.dtype, x_Tensor.requires_grad)
print(x_as_tensor, x_as_tensor.dtype, x_as_tensor.requires_grad)
```
[[0.5488 0.7152 0.6028]
[0.5449 0.4237 0.6459]] float64
tensor([[0.5488, 0.7152, 0.6028],
[0.5449, 0.4237, 0.6459]], dtype=torch.float64) torch.float64 False
tensor([[0.5488, 0.7152, 0.6028],
[0.5449, 0.4237, 0.6459]], dtype=torch.float64) torch.float64 False
tensor([[0.5488, 0.7152, 0.6028],
[0.5449, 0.4237, 0.6459]]) torch.float32 False
tensor([[0.5488, 0.7152, 0.6028],
[0.5449, 0.4237, 0.6459]], dtype=torch.float64) torch.float64 False
```
x[0,0] = 100
print(x, x.dtype)
print(x_tensor, x_tensor.dtype, x_tensor.requires_grad)
print(x_from_numpy, x_from_numpy.dtype, x_from_numpy.requires_grad)
print(x_Tensor, x_Tensor.dtype, x_Tensor.requires_grad)
print(x_as_tensor, x_as_tensor.dtype, x_as_tensor.requires_grad)
```
[[100. 0.7152 0.6028]
[ 0.5449 0.4237 0.6459]] float64
tensor([[0.5488, 0.7152, 0.6028],
[0.5449, 0.4237, 0.6459]], dtype=torch.float64) torch.float64 False
tensor([[100.0000, 0.7152, 0.6028],
[ 0.5449, 0.4237, 0.6459]], dtype=torch.float64) torch.float64 False
tensor([[0.5488, 0.7152, 0.6028],
[0.5449, 0.4237, 0.6459]]) torch.float32 False
tensor([[100.0000, 0.7152, 0.6028],
[ 0.5449, 0.4237, 0.6459]], dtype=torch.float64) torch.float64 False
```
x_tensor_grad = torch.tensor(x, requires_grad=True)
print(x_tensor_grad, x_tensor_grad.dtype, x_tensor_grad.requires_grad)
```
tensor([[100.0000, 0.7152, 0.6028],
[ 0.5449, 0.4237, 0.6459]], dtype=torch.float64,
requires_grad=True) torch.float64 True
```
x = torch.tensor([1.0], requires_grad=True)
f = (x**2 + 2*x) * torch.log(x)
print(x)
print(f)
print(x.grad)
print(x.grad_fn)
print(f.grad_fn)
```
tensor([1.], requires_grad=True)
tensor([0.], grad_fn=<MulBackward0>)
None
None
<MulBackward0 object at 0x7f4e04f1b438>
```
# Is x a leaf node?
# The backward() function backpropagates down to the leaf nodes and computes the derivatives.
print(x.is_leaf)
```
True
#### torch.autograd.backward
```
torch.autograd.backward(f, grad_tensors=torch.tensor([1.]), retain_graph=True)
print(x.grad)
```
tensor([3.])
#### torch.autograd.grad
```
df = torch.autograd.grad(f, x, retain_graph=True)
print(df)
```
(tensor([3.]),)
```
print(x.grad)
```
tensor([3.])
#### Differentiating equation (6.5) with PyTorch
```
x = torch.tensor([1.0], requires_grad=True)
y = torch.tensor([2.0], requires_grad=True)
f_xy = (x**2 + 2*x) * torch.log(y)
torch.autograd.backward(f_xy, retain_graph=True)
print(x.grad)
print(y.grad)
df = torch.autograd.grad(f_xy, (x,y), retain_graph=True)
print(df)
```
tensor([2.7726])
tensor([1.5000])
(tensor([2.7726]), tensor([1.5000]))
### Implementing automatic differentiation
```
def times(x, y):
return x*y, (x,y)
def times_deriv(cache, dout=1):
return cache[1]*dout, cache[0]*dout
TIMES = {'f': times, 'df': times_deriv}
v, cache = TIMES['f'](2,3)
dx, dy = TIMES['df'](cache)
print("dx={}, dy={}".format(dx, dy))
```
dx=3, dy=2
```
def add(x, y):
return x+y, (x,y)
def add_deriv(cache, dout=1):
return dout, dout
ADD = {'f': add, 'df': add_deriv}
def log(x):
return np.log(x), x
def log_deriv(cache, dout=1):
return (1/cache)*dout
LOG = {'f': log, 'df': log_deriv}
```
```
x = 1.; y = 2.
a, cache_a = TIMES['f'](x, x)
b, cache_b = TIMES['f'](2, x)
c, cache_c = ADD['f'](a, b)
d, cache_d = LOG['f'](y)
z, cache_z = TIMES['f'](c, d)
print("forward pass f(x) = {:.6f}".format(z))
dx = dy = 0.
dc, dd = TIMES['df'](cache_z, 1)
dy = LOG['df'](cache_d, dd)
da, db = ADD['df'](cache_c, dc)
_, dx_ = TIMES['df'](cache_b, db); dx+=dx_;
dx_, dx__ = TIMES['df'](cache_a, da); dx+=dx_+dx__;
print("backward pass dx = {:.6f}, dy = {:.6f}".format(dx, dy))
```
forward pass f(x) = 2.079442
backward pass dx = 2.772589, dy = 1.500000
- Check the automatic differentiation result above with numerical differentiation
```
def f_xy(x):
return (x[0]*x[0] + 2*x[0])*np.log(x[1])
numer_deriv(f_xy, np.array([1, 2]), method="center")
```
array([2.7726, 1.4989])
- Differentiating with PyTorch with the upstream derivative set to 2
```
x = torch.tensor([1.], requires_grad=True)
y = torch.tensor([2.], requires_grad=True)
z = (x**2 + 2*x)*torch.log(y)
dz = torch.autograd.grad(z, (x,y), grad_outputs=torch.tensor([2.]), retain_graph=True)
print(dz)
```
(tensor([5.5452]), tensor([3.]))
```
```
<a href="https://colab.research.google.com/github/mella30/Deep-Learning-with-Tensorflow-2/blob/main/Course3-Probabilistic_Deep_Learning_with_Tensorflow2/week_3_Programming_Assignment.ipynb" target="_parent"></a>
# Programming Assignment
## RealNVP for the LSUN bedroom dataset
### Instructions
In this notebook, you will develop the RealNVP normalising flow architecture from scratch, including the affine coupling layers, checkerboard and channel-wise masking, and combining into a multiscale architecture. You will train the normalising flow on a subset of the LSUN bedroom dataset.
Some code cells are provided for you in the notebook. You should avoid editing provided code, and make sure to execute the cells in order to avoid unexpected errors. Some cells begin with the line:
`#### GRADED CELL ####`
Don't move or edit this first line - this is what the automatic grader looks for to recognise graded cells. These cells require you to write your own code to complete them, and are automatically graded when you submit the notebook. Don't edit the function name or signature provided in these cells, otherwise the automatic grader might not function properly.
### How to submit
Complete all the tasks you are asked for in the worksheet. When you have finished and are happy with your code, press the **Submit Assignment** button at the top of this notebook.
### Let's get started!
We'll start running some imports, and loading the dataset. Do not edit the existing imports in the following cell. If you would like to make further Tensorflow imports, you should add them here.
```python
#### PACKAGE IMPORTS ####
# Run this cell first to import all required packages. Do not make any imports elsewhere in the notebook
import tensorflow as tf
import tensorflow_probability as tfp
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import Image
from tensorflow.keras import Model, Input
from tensorflow.keras.layers import Conv2D, BatchNormalization
from tensorflow.keras.optimizers import Adam
tfd = tfp.distributions
tfb = tfp.bijectors
# If you would like to make further imports from tensorflow, add them here
from tensorflow.keras import layers
from tensorflow.keras.regularizers import l2
```
#### The LSUN Bedroom Dataset
In this assignment, you will use a subset of the [LSUN dataset](https://www.yf.io/p/lsun). This is a large-scale image dataset with 10 scene and 20 object categories. A subset of the LSUN bedroom dataset has been provided, and has already been downsampled and preprocessed into smaller, fixed-size images.
* F. Yu, A. Seff, Y. Zhang, S. Song, T. Funkhouser and J. Xia. "LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop". [arXiv:1506.03365](https://arxiv.org/abs/1506.03365), 10 Jun 2015
Your goal is to develop the RealNVP normalising flow architecture using bijector subclassing, and use it to train a generative model of the LSUN bedroom data subset. For full details on the RealNVP model, refer to the original paper:
* L. Dinh, J. Sohl-Dickstein and S. Bengio. "Density estimation using Real NVP". [arXiv:1605.08803](https://arxiv.org/abs/1605.08803), 27 Feb 2017.
#### Import the data
The dataset required for this project can be downloaded from the following link:
https://drive.google.com/file/d/1scbDZrn5pkRjF_CeZp66uHVQC9o1gIsg/view?usp=sharing
You should upload this file to Drive for use in this Colab notebook. It is recommended to unzip it on Drive, which can be done using the `zipfile` package:
>```
import zipfile
with zipfile.ZipFile("/path/to/lsun_bedroom.zip","r") as zip_ref:
zip_ref.extractall('lsun_bedroom_data')
```
```python
# Run this cell to connect to your Drive folder
from google.colab import drive
drive.mount('/content/gdrive')
```
Mounted at /content/gdrive
```python
# unpack data
import zipfile
with zipfile.ZipFile("/content/gdrive/MyDrive/Datasets/lsun_bedroom.zip","r") as zip_ref:
zip_ref.extractall('lsun_bedroom_data')
```
#### Load the dataset
The following functions will be useful for loading and preprocessing the dataset. The subset you will use for this assignment consists of 10,000 training images, 1000 validation images and 1000 test images.
The images have been downsampled to 32 x 32 x 3 in order to simplify the training process.
```python
# Functions for loading and preprocessing the images
def load_image(filepath):
raw_img = tf.io.read_file(filepath)
img_tensor_int = tf.image.decode_jpeg(raw_img, channels=3)
img_tensor_flt = tf.image.convert_image_dtype(img_tensor_int, tf.float32)
img_tensor_flt = tf.image.resize(img_tensor_flt, [32, 32])
img_tensor_flt = tf.image.random_flip_left_right(img_tensor_flt)
return img_tensor_flt, img_tensor_flt
def load_dataset(split):
train_list_ds = tf.data.Dataset.list_files('/content/lsun_bedroom_data/{}/*.jpg'.format(split), shuffle=False)
train_ds = train_list_ds.map(load_image)
return train_ds
```
```python
# Load the training, validation and testing datasets splits
train_ds = load_dataset('train')
val_ds = load_dataset('val')
test_ds = load_dataset('test')
```
```python
# Shuffle the datasets
shuffle_buffer_size = 1000
train_ds = train_ds.shuffle(shuffle_buffer_size)
val_ds = val_ds.shuffle(shuffle_buffer_size)
test_ds = test_ds.shuffle(shuffle_buffer_size)
```
```python
# Display a few examples
n_img = 4
f, axs = plt.subplots(n_img, n_img, figsize=(14, 14))
for k, image in enumerate(train_ds.take(n_img**2)):
i = k // n_img
j = k % n_img
axs[i, j].imshow(image[0])
axs[i, j].axis('off')
f.subplots_adjust(wspace=0.01, hspace=0.03)
```
```python
# Batch the Dataset objects
batch_size = 64
train_ds = train_ds.batch(batch_size)
val_ds = val_ds.batch(batch_size)
test_ds = test_ds.batch(batch_size)
```
### Affine coupling layer
We will begin the development of the RealNVP architecture with the core bijector that is called the _affine coupling layer_. This bijector can be described as follows: suppose that $x$ is a $D$-dimensional input, and let $d<D$. Then the output $y$ of the affine coupling layer is given by the following equations:
$$
\begin{align}
y_{1:d} &= x_{1:d} \tag{1}\\
y_{d+1:D} &= x_{d+1:D}\odot \exp(s(x_{1:d})) + t(x_{1:d}), \tag{2}
\end{align}
$$
where $s$ and $t$ are functions from $\mathbb{R}^d\rightarrow\mathbb{R}^{D-d}$, and define the log-scale and shift operations on the vector $x_{d+1:D}$ respectively.
The log of the Jacobian determinant for this layer is given by $\sum_{j}s(x_{1:d})_j$.
The inverse operation can be easily computed as
$$
\begin{align}
x_{1:d} &= y_{1:d}\tag{3}\\
x_{d+1:D} &= \left(y_{d+1:D} - t(y_{1:d})\right)\odot \exp(-s(y_{1:d})),\tag{4}
\end{align}
$$
In practice, we will implement equations $(1)$ and $(2)$ using a binary mask $b$:
$$
\begin{align}
\text{Forward pass:}\qquad y &= b\odot x + (1-b)\odot\left(x\odot\exp(s(b\odot x)) + t(b\odot x)\right),\tag{5}\\
\text{Inverse pass:}\qquad x &= b\odot y + (1-b)\odot\left(\left(y - t(b\odot y)\right) \odot\exp( -s(b\odot y))\right).\tag{6}
\end{align}
$$
Our inputs $x$ will be a batch of 3-dimensional Tensors with `height`, `width` and `channels` dimensions. As in the original architecture, we will use both spatial 'checkerboard' masks and channel-wise masks:
```python
# Run this cell to download and view a figure to illustrate the checkerboard and binary masks
!wget -q -O binary_masks.png --no-check-certificate "https://docs.google.com/uc?export=download&id=1d_cBjyPGm8i0l5GsRSspPoAlxiPo3HGt"
Image("binary_masks.png", width=800)
```
<center>Figure 1. Spatial checkerboard mask (left) and channel-wise mask (right). From the original paper.</center>
#### Custom model for log-scale and shift
You should now create a custom model for the shift and log-scale parameters that are used in the affine coupling layer bijector. We will use a convolutional residual network, with two residual blocks and a final convolutional layer. Using the functional API, build the model according to the following specifications:
* The function takes the `input_shape` and `filters` as arguments
* The model should use the `input_shape` in the function argument to set the shape in the Input layer (call this layer `h0`).
* The first hidden layer should be a Conv2D layer with number of filters set by the `filters` argument, and a ReLU activation
* The second hidden layer should be a BatchNormalization layer
* The third hidden layer should be a Conv2D layer with the same number of filters as the input `h0` to the model, and a ReLU activation
* The fourth hidden layer should be a BatchNormalization layer
* The fifth hidden layer should be the sum of the fourth hidden layer output and the inputs `h0`. Call this layer `h1`
* The sixth hidden layer should be a Conv2D layer with filters set by the `filters` argument, and a ReLU activation
* The seventh hidden layer should be a BatchNormalization layer
* The eighth hidden layer should be a Conv2D layer with the same number of filters as `h1` (and `h0`), and a ReLU activation
* The ninth hidden layer should be a BatchNormalization layer
* The tenth hidden layer should be the sum of the ninth hidden layer output and `h1`
* The eleventh hidden layer should be a Conv2D layer with the number of filters equal to twice the number of channels of the model input, and a linear activation. Call this layer `h2`
* The twelfth hidden layer should split `h2` into two equal-sized Tensors along the final channel axis. These two Tensors are the shift and log-scale Tensors, and should each have the same shape as the model input
* The final layer should then apply the `tanh` nonlinearity to the log_scale Tensor. The outputs to the model should then be the list of Tensors `[shift, log_scale]`
All Conv2D layers should use a 3x3 kernel size, `"SAME"` padding and an $l2$ kernel regularizer with regularisation coefficient of `5e-5`.
_Hint: use_ `tf.split` _with arguments_ `num_or_size_splits=2, axis=-1` _to create the output Tensors_.
In total, the network should have 14 layers (including the `Input` layer).
```python
#### GRADED CELL ####
# Complete the following function.
# Make sure to not change the function name or arguments.
def get_conv_resnet(input_shape, filters):
"""
This function should build a CNN ResNet model according to the above specification,
using the functional API. The function takes input_shape as an argument, which should be
used to specify the shape in the Input layer, as well as a filters argument, which
should be used to specify the number of filters in (some of) the convolutional layers.
Your function should return the model.
"""
h0 = layers.Input(shape=input_shape)
h = layers.Conv2D(filters=filters, kernel_size=(3,3), padding="same", kernel_regularizer=l2(5e-5), activation="relu")(h0)
h = layers.BatchNormalization()(h)
h = layers.Conv2D(filters=input_shape[-1], kernel_size=(3,3), padding="same", kernel_regularizer=l2(5e-5), activation="relu")(h)
h = layers.BatchNormalization()(h)
h1 = layers.Add()([h0, h])
h = layers.Conv2D(filters=filters, kernel_size=(3,3), padding="same", kernel_regularizer=l2(5e-5), activation="relu")(h1)
h = layers.BatchNormalization()(h)
h = layers.Conv2D(filters=input_shape[-1], kernel_size=(3,3), padding="same", kernel_regularizer=l2(5e-5), activation="relu")(h)
h = layers.BatchNormalization()(h)
h = layers.Add()([h1, h])
h2 = layers.Conv2D(filters=2*input_shape[-1], kernel_size=(3,3), padding="same", kernel_regularizer=l2(5e-5), activation="linear")(h)
shift, log_scale = layers.Lambda(lambda t: tf.split(t, num_or_size_splits=2, axis=-1))(h2)
log_scale = layers.Activation(activation="tanh")(log_scale)
model = Model(inputs=h0, outputs=[shift, log_scale])
return model
```
```python
# Test your function and print the model summary
conv_resnet = get_conv_resnet((32, 32, 3), 32)
conv_resnet.summary()
```
Model: "model"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) [(None, 32, 32, 3)] 0
__________________________________________________________________________________________________
conv2d (Conv2D) (None, 32, 32, 32) 896 input_1[0][0]
__________________________________________________________________________________________________
batch_normalization (BatchNorma (None, 32, 32, 32) 128 conv2d[0][0]
__________________________________________________________________________________________________
conv2d_1 (Conv2D) (None, 32, 32, 3) 867 batch_normalization[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 32, 32, 3) 12 conv2d_1[0][0]
__________________________________________________________________________________________________
add (Add) (None, 32, 32, 3) 0 input_1[0][0]
batch_normalization_1[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D) (None, 32, 32, 32) 896 add[0][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 32, 32, 32) 128 conv2d_2[0][0]
__________________________________________________________________________________________________
conv2d_3 (Conv2D) (None, 32, 32, 3) 867 batch_normalization_2[0][0]
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 32, 32, 3) 12 conv2d_3[0][0]
__________________________________________________________________________________________________
add_1 (Add) (None, 32, 32, 3) 0 add[0][0]
batch_normalization_3[0][0]
__________________________________________________________________________________________________
conv2d_4 (Conv2D) (None, 32, 32, 6) 168 add_1[0][0]
__________________________________________________________________________________________________
lambda (Lambda) [(None, 32, 32, 3), 0 conv2d_4[0][0]
__________________________________________________________________________________________________
activation (Activation) (None, 32, 32, 3) 0 lambda[0][1]
==================================================================================================
Total params: 3,974
Trainable params: 3,834
Non-trainable params: 140
__________________________________________________________________________________________________
You can also inspect your model architecture graphically by running the following cell. It should look something like the following:
```python
# Run this cell to download and view an example model plot
!wget -q -O model_plot.png --no-check-certificate "https://docs.google.com/uc?export=download&id=1I9MhFGquwyHsJlDgO8hItXnMqAvc5cQg"
Image("model_plot.png", width=1400)
```
```python
# Plot the model graph
tf.keras.utils.plot_model(conv_resnet, show_layer_names=False, rankdir='LR')
```
```python
# Check the output shapes are as expected
print(conv_resnet(tf.random.normal((1, 32, 32, 3)))[0].shape)
print(conv_resnet(tf.random.normal((1, 32, 32, 3)))[1].shape)
```
(1, 32, 32, 3)
(1, 32, 32, 3)
#### Binary masks
Now that you have a shift and log-scale model built, we will now implement the affine coupling layer. We will first need functions to create the binary masks $b$ as described above. The following function creates the spatial 'checkerboard' mask.
It takes a rank-2 `shape` as input, which correspond to the `height` and `width` dimensions, as well as an `orientation` argument (an integer equal to `0` or `1`) that determines which way round the zeros and ones are entered into the Tensor.
```python
# Function to create the checkerboard mask
def checkerboard_binary_mask(shape, orientation=0):
height, width = shape[0], shape[1]
height_range = tf.range(height)
width_range = tf.range(width)
height_odd_inx = tf.cast(tf.math.mod(height_range, 2), dtype=tf.bool)
width_odd_inx = tf.cast(tf.math.mod(width_range, 2), dtype=tf.bool)
odd_rows = tf.tile(tf.expand_dims(height_odd_inx, -1), [1, width])
odd_cols = tf.tile(tf.expand_dims(width_odd_inx, 0), [height, 1])
checkerboard_mask = tf.math.logical_xor(odd_rows, odd_cols)
if orientation == 1:
checkerboard_mask = tf.math.logical_not(checkerboard_mask)
return tf.cast(tf.expand_dims(checkerboard_mask, -1), tf.float32)
```
This function creates a rank-3 Tensor to mask the `height`, `width` and `channels` dimensions of the input. We can take a look at this checkerboard mask for some example inputs below. In order to make the Tensors easier to inspect, we will squeeze out the single channel dimension (which is always 1 for this mask).
```python
# Run the checkerboard_binary_mask function to see an example
# NB: we squeeze the shape for easier viewing. The full shape is (4, 4, 1)
tf.squeeze(checkerboard_binary_mask((4, 4), orientation=0))
```
<tf.Tensor: shape=(4, 4), dtype=float32, numpy=
array([[0., 1., 0., 1.],
[1., 0., 1., 0.],
[0., 1., 0., 1.],
[1., 0., 1., 0.]], dtype=float32)>
```python
# The `orientation` should be 0 or 1, and determines which way round the binary entries are
tf.squeeze(checkerboard_binary_mask((4, 4), orientation=1))
```
<tf.Tensor: shape=(4, 4), dtype=float32, numpy=
array([[1., 0., 1., 0.],
[0., 1., 0., 1.],
[1., 0., 1., 0.],
[0., 1., 0., 1.]], dtype=float32)>
You should now complete the following function to create a channel-wise mask. This function takes a single integer `num_channels` as an input, as well as an `orientation` argument, similar to above. You can assume that the `num_channels` integer is even.
The function should return a rank-3 Tensor with singleton entries for `height` and `width`. In the channel axis, the first `num_channels // 2` entries should be zero (for `orientation=0`) and the final `num_channels // 2` entries should be one (for `orientation=0`). The zeros and ones should be reversed for `orientation=1`. The `dtype` of the returned Tensor should be `tf.float32`.
```python
#### GRADED CELL ####
# Complete the following function.
# Make sure to not change the function name or arguments.
def channel_binary_mask(num_channels, orientation=0):
"""
This function takes an integer num_channels and orientation (0 or 1) as
arguments. It should create a channel-wise binary mask with
dtype=tf.float32, according to the above specification.
The function should then return the binary mask.
"""
if orientation == 0:
return tf.concat([tf.zeros((1,1, num_channels//2), dtype=tf.float32),
tf.ones((1,1, num_channels - num_channels//2), dtype=tf.float32)], axis=-1)
return tf.concat([tf.ones((1,1, num_channels//2), dtype=tf.float32),
tf.zeros((1,1, num_channels - num_channels//2), dtype=tf.float32)], axis=-1)
```
```python
# Run your function to see an example channel-wise binary mask
channel_binary_mask(6, orientation=0)
```
<tf.Tensor: shape=(1, 1, 6), dtype=float32, numpy=array([[[0., 0., 0., 1., 1., 1.]]], dtype=float32)>
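As a quick extra check (not part of the assignment cells), flipping the `orientation` argument should reverse the zeros and ones:
```python
# For orientation=1, the ones should come first and the zeros last
channel_binary_mask(6, orientation=1)
```
Given the implementation above, this should return a Tensor with values `[[[1., 1., 1., 0., 0., 0.]]]`.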
```python
#### GRADED CELL ####
# Complete the following functions.
# Make sure to not change the function names or arguments.
def forward(x, b, shift_and_log_scale_fn):
"""
This function takes the input Tensor x, binary mask b and callable
shift_and_log_scale_fn as arguments.
This function should implement the forward transformation in equation (5)
and return the output Tensor y, which will have the same shape as x
"""
shift, log_scale = shift_and_log_scale_fn(b * x)
return b * x + (1 - b) * (x * tf.math.exp(log_scale) + shift)
def inverse(y, b, shift_and_log_scale_fn):
"""
    This function takes the input Tensor y, binary mask b and callable
    shift_and_log_scale_fn as arguments.
    This function should implement the inverse transformation in equation (6)
    and return the output Tensor x, which will have the same shape as y
"""
shift, log_scale = shift_and_log_scale_fn(b * y)
return b * y + (1 - b) * ((y - shift) * tf.math.exp(-log_scale))
```
The new bijector class also requires the `log_det_jacobian` methods to be implemented. Recall that the log of the Jacobian determinant of the forward transformation is given by $\sum_{j}s(x_{1:d})_j$, where $s$ is the log-scale function of the affine coupling layer.
You should now complete the following functions to define the `forward_log_det_jacobian` and `inverse_log_det_jacobian` methods of the affine coupling layer bijector.
* Both functions `forward_log_det_jacobian` and `inverse_log_det_jacobian` take an input Tensor `x` (or `y`), a rank-3 binary mask `b`, and the `shift_and_log_scale_fn` callable
* These arguments are the same as the description for the `forward` and `inverse` functions
* The `forward_log_det_jacobian` function should implement the log of the Jacobian determinant for the transformation $(5)$
* The `inverse_log_det_jacobian` function should implement the log of the Jacobian determinant for the transformation $(6)$
* Both functions should reduce sum over the last three axes of the input Tensor (`height`, `width` and `channels`)
```python
#### GRADED CELL ####
# Complete the following functions.
# Make sure to not change the function names or arguments.
def forward_log_det_jacobian(x, b, shift_and_log_scale_fn):
"""
This function takes the input Tensor x, binary mask b and callable
shift_and_log_scale_fn as arguments.
This function should compute and return the log of the Jacobian determinant
of the forward transformation in equation (5)
"""
_, log_scale = shift_and_log_scale_fn(b * x)
return tf.reduce_sum((1 - b) * log_scale, axis=[-1,-2,-3])
def inverse_log_det_jacobian(y, b, shift_and_log_scale_fn):
"""
This function takes the input Tensor y, binary mask b and callable
shift_and_log_scale_fn as arguments.
This function should compute and return the log of the Jacobian determinant
    of the inverse transformation in equation (6)
    """
    _, log_scale = shift_and_log_scale_fn(b * y)
return -tf.reduce_sum((1 - b) * log_scale, axis=[-1,-2,-3])
```
You are now ready to create the coupling layer bijector, using bijector subclassing. You should complete the class below to define the `AffineCouplingLayer`.
* You should complete the initialiser `__init__`, and the internal class method `_get_mask`
* The `_forward`, `_inverse`, `_forward_log_det_jacobian` and `_inverse_log_det_jacobian` methods are completed for you using the functions you have written above. Do not modify these methods
* The initialiser takes the `shift_and_log_scale_fn` callable, `mask_type` string (either `"checkerboard"` or `"channel"`) and `orientation` (integer, either `0` or `1`) as required arguments, and allows for extra keyword arguments
* The required arguments should be set as class attributes in the initialiser (note that the `shift_and_log_scale_fn` attribute is being used in the `_forward`, `_inverse`, `_forward_log_det_jacobian` and `_inverse_log_det_jacobian` methods)
* The initialiser should call the base class initialiser, and pass in any extra keyword arguments
* The class should have a required number of event dimensions equal to 3
* The internal method `_get_mask` takes a `shape` as an argument, which is the shape of an input Tensor
* This method should use the `checkerboard_binary_mask` and `channel_binary_mask` functions above, as well as the `mask_type` and `orientation` arguments passed to the initialiser to compute and return the required binary mask
* This method is used in each of the `_forward`, `_inverse`, `_forward_log_det_jacobian` and `_inverse_log_det_jacobian` methods
```python
#### GRADED CELL ####
# Complete the following class.
# Make sure to not change the class or method names or arguments.
class AffineCouplingLayer(tfb.Bijector):
"""
Class to implement the affine coupling layer.
Complete the __init__ and _get_mask methods according to the instructions above.
"""
def __init__(self, shift_and_log_scale_fn, mask_type, orientation, **kwargs):
"""
The class initialiser takes the shift_and_log_scale_fn callable, mask_type,
orientation and possibly extra keywords arguments. It should call the
base class initialiser, passing any extra keyword arguments along.
It should also set the required arguments as class attributes.
"""
super(AffineCouplingLayer, self).__init__(forward_min_event_ndims=3, **kwargs)
self.shift_and_log_scale_fn = shift_and_log_scale_fn
self.mask_type = mask_type
self.orientation = orientation
def _get_mask(self, shape):
"""
This internal method should use the binary mask functions above to compute
and return the binary mask, according to the arguments passed in to the
initialiser.
"""
if self.mask_type == "channel":
return channel_binary_mask(shape[-1], self.orientation)
return checkerboard_binary_mask(shape[1:], self.orientation)
def _forward(self, x):
b = self._get_mask(x.shape)
return forward(x, b, self.shift_and_log_scale_fn)
def _inverse(self, y):
b = self._get_mask(y.shape)
return inverse(y, b, self.shift_and_log_scale_fn)
def _forward_log_det_jacobian(self, x):
b = self._get_mask(x.shape)
return forward_log_det_jacobian(x, b, self.shift_and_log_scale_fn)
def _inverse_log_det_jacobian(self, y):
b = self._get_mask(y.shape)
return inverse_log_det_jacobian(y, b, self.shift_and_log_scale_fn)
```
```python
# Test your function by creating an instance of the AffineCouplingLayer class
affine_coupling_layer = AffineCouplingLayer(conv_resnet, 'channel', orientation=1,
name='affine_coupling_layer')
```
```python
# The following should return a Tensor of the same shape as the input
affine_coupling_layer.forward(tf.random.normal((16, 32, 32, 3))).shape
```
TensorShape([16, 32, 32, 3])
```python
# The following should compute a log_det_jacobian for each event in the batch
affine_coupling_layer.forward_log_det_jacobian(tf.random.normal((16, 32, 32, 3)), event_ndims=3).shape
```
TensorShape([16])
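As an additional sanity check (not part of the graded cells), the coupling layer should be exactly invertible, since the masked half of the input that parameterises the shift and log-scale is left unchanged by the transformation:
```python
# inverse(forward(x)) should recover x up to floating point error
x = tf.random.normal((4, 32, 32, 3))
y = affine_coupling_layer.forward(x)
print(tf.reduce_max(tf.abs(x - affine_coupling_layer.inverse(y))))  # expect a value near zero
```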
#### Combining the affine coupling layers
In the affine coupling layer, part of the input remains unchanged in the transformation $(5)$. In order to allow transformation of all of the input, several coupling layers are composed, with the orientation of the mask being reversed in subsequent layers.
```python
# Run this cell to download and view a sketch of the affine coupling layers
!wget -q -O alternating_masks.png --no-check-certificate "https://docs.google.com/uc?export=download&id=1r1vASfLOW3kevxRzFUXhCtHN8dzldHve"
Image("alternating_masks.png", width=800)
```
<center>Figure 2. RealNVP alternates the orientation of masks from one affine coupling layer to the next. From the original paper.</center>
Our model design will be similar to the original architecture; we will compose three affine coupling layers with checkerboard masking, followed by a batch normalization bijector (`tfb.BatchNormalization` is a built-in bijector), followed by a squeezing operation, followed by three more affine coupling layers with channel-wise masking and a final batch normalization bijector.
The squeezing operation divides the spatial dimensions into 2x2 squares, and reshapes a Tensor of shape `(H, W, C)` into a Tensor of shape `(H // 2, W // 2, 4 * C)` as shown in Figure 1.
The squeezing operation is also a bijective operation, and has been provided for you in the class below.
```python
# Bijector class for the squeezing operation
class Squeeze(tfb.Bijector):
def __init__(self, name='Squeeze', **kwargs):
super(Squeeze, self).__init__(forward_min_event_ndims=3, is_constant_jacobian=True,
name=name, **kwargs)
def _forward(self, x):
input_shape = x.shape
height, width, channels = input_shape[-3:]
y = tfb.Reshape((height // 2, 2, width // 2, 2, channels), event_shape_in=(height, width, channels))(x)
y = tfb.Transpose(perm=[0, 2, 1, 3, 4])(y)
y = tfb.Reshape((height // 2, width // 2, 4 * channels),
event_shape_in=(height // 2, width // 2, 2, 2, channels))(y)
return y
def _inverse(self, y):
input_shape = y.shape
height, width, channels = input_shape[-3:]
x = tfb.Reshape((height, width, 2, 2, channels // 4), event_shape_in=(height, width, channels))(y)
x = tfb.Transpose(perm=[0, 2, 1, 3, 4])(x)
x = tfb.Reshape((2 * height, 2 * width, channels // 4),
event_shape_in=(height, 2, width, 2, channels // 4))(x)
return x
def _forward_log_det_jacobian(self, x):
return tf.constant(0., x.dtype)
def _inverse_log_det_jacobian(self, y):
return tf.constant(0., y.dtype)
def _forward_event_shape_tensor(self, input_shape):
height, width, channels = input_shape[-3], input_shape[-2], input_shape[-1]
return height // 2, width // 2, 4 * channels
def _inverse_event_shape_tensor(self, output_shape):
height, width, channels = output_shape[-3], output_shape[-2], output_shape[-1]
return height * 2, width * 2, channels // 4
```
You can see the effect of the squeezing operation on some example inputs in the cells below. In the forward transformation, each spatial dimension is halved, whilst the channel dimension is multiplied by 4. The opposite happens in the inverse transformation.
```python
# Test the Squeeze bijector
squeeze = Squeeze()
squeeze(tf.ones((10, 32, 32, 3))).shape
```
TensorShape([10, 16, 16, 12])
```python
# Test the inverse operation
squeeze.inverse(tf.ones((10, 4, 4, 96))).shape
```
TensorShape([10, 8, 8, 24])
We can now construct a block of coupling layers according to the architecture described above. You should complete the following function to chain together the bijectors that we have constructed, to form a bijector that performs the following operations in the forward transformation:
* Three `AffineCouplingLayer` bijectors with `"checkerboard"` masking with orientations `0, 1, 0` respectively
* A `BatchNormalization` bijector
* A `Squeeze` bijector
* Three more `AffineCouplingLayer` bijectors with `"channel"` masking with orientations `0, 1, 0` respectively
* Another `BatchNormalization` bijector
The function takes the following arguments:
* `shift_and_log_scale_fns`: a list or tuple of six conv_resnet models
* The first three models in this list are used in the three coupling layers with checkerboard masking
* The last three models in this list are used in the three coupling layers with channel masking
* `squeeze`: an instance of the `Squeeze` bijector
_NB: at this point, we would like to point out that we are following the exposition in the original paper, and think of the forward transformation as acting on the input image. Note that this is in contrast to the convention of using the forward transformation for sampling, and the inverse transformation for computing log probs._
```python
#### GRADED CELL ####
# Complete the following function.
# Make sure to not change the function name or arguments.
def realnvp_block(shift_and_log_scale_fns, squeeze):
"""
This function takes a list or tuple of six conv_resnet models, and an
instance of the Squeeze bijector.
The function should construct the chain of bijectors described above,
using the conv_resnet models in the coupling layers.
The function should then return the chained bijector.
"""
block = [AffineCouplingLayer(shift_and_log_scale_fns[0], 'checkerboard', orientation=0),
AffineCouplingLayer(shift_and_log_scale_fns[1], 'checkerboard', orientation=1),
AffineCouplingLayer(shift_and_log_scale_fns[2], 'checkerboard', orientation=0),
tfb.BatchNormalization(),
squeeze,
AffineCouplingLayer(shift_and_log_scale_fns[3], 'channel', orientation=0),
AffineCouplingLayer(shift_and_log_scale_fns[4], 'channel', orientation=1),
AffineCouplingLayer(shift_and_log_scale_fns[5], 'channel', orientation=0),
tfb.BatchNormalization()
]
return tfb.Chain(list(reversed(block)))
```
```python
# Run your function to create an instance of the bijector
checkerboard_fns = []
for _ in range(3):
checkerboard_fns.append(get_conv_resnet((32, 32, 3), 512))
channel_fns = []
for _ in range(3):
channel_fns.append(get_conv_resnet((16, 16, 12), 512))
block = realnvp_block(checkerboard_fns + channel_fns, squeeze)
```
```python
# Test the bijector on a dummy input
block.forward(tf.random.normal((10, 32, 32, 3))).shape
```
TensorShape([10, 16, 16, 12])
#### Multiscale architecture
The final component of the RealNVP is the multiscale architecture. The squeeze operation reduces the spatial dimensions but increases the channel dimensions. After one of the blocks of coupling-squeeze-coupling that you have implemented above, half of the dimensions are factored out as latent variables, while the other half is further processed through subsequent layers. This results in latent variables that represent different scales of features in the model.
```python
# Run this cell to download and view a sketch of the multiscale architecture
!wget -q -O multiscale.png --no-check-certificate "https://docs.google.com/uc?export=download&id=19Sc6PKbc8Bi2DoyupHZxHvB3m6tw-lki"
Image("multiscale.png", width=700)
```
<center>Figure 3. RealNVP creates latent variables at different scales by factoring out half of the dimensions at each scale. From the original paper.</center>
The final scale does not use the squeezing operation, and instead applies four affine coupling layers with alternating checkerboard masks.
The multiscale architecture for two latent variable scales is implemented for you in the following bijector.
```python
# Bijector to implement the multiscale architecture
class RealNVPMultiScale(tfb.Bijector):
def __init__(self, **kwargs):
super(RealNVPMultiScale, self).__init__(forward_min_event_ndims=3, **kwargs)
# First level
shape1 = (32, 32, 3) # Input shape
shape2 = (16, 16, 12) # Shape after the squeeze operation
shape3 = (16, 16, 6) # Shape after factoring out the latent variable
self.conv_resnet1 = get_conv_resnet(shape1, 64)
self.conv_resnet2 = get_conv_resnet(shape1, 64)
self.conv_resnet3 = get_conv_resnet(shape1, 64)
self.conv_resnet4 = get_conv_resnet(shape2, 128)
self.conv_resnet5 = get_conv_resnet(shape2, 128)
self.conv_resnet6 = get_conv_resnet(shape2, 128)
self.squeeze = Squeeze()
self.block1 = realnvp_block([self.conv_resnet1, self.conv_resnet2,
self.conv_resnet3, self.conv_resnet4,
self.conv_resnet5, self.conv_resnet6], self.squeeze)
# Second level
self.conv_resnet7 = get_conv_resnet(shape3, 128)
self.conv_resnet8 = get_conv_resnet(shape3, 128)
self.conv_resnet9 = get_conv_resnet(shape3, 128)
self.conv_resnet10 = get_conv_resnet(shape3, 128)
self.coupling_layer1 = AffineCouplingLayer(self.conv_resnet7, 'checkerboard', 0)
self.coupling_layer2 = AffineCouplingLayer(self.conv_resnet8, 'checkerboard', 1)
self.coupling_layer3 = AffineCouplingLayer(self.conv_resnet9, 'checkerboard', 0)
self.coupling_layer4 = AffineCouplingLayer(self.conv_resnet10, 'checkerboard', 1)
self.block2 = tfb.Chain([self.coupling_layer4, self.coupling_layer3,
self.coupling_layer2, self.coupling_layer1])
def _forward(self, x):
h1 = self.block1.forward(x)
z1, h2 = tf.split(h1, 2, axis=-1)
z2 = self.block2.forward(h2)
return tf.concat([z1, z2], axis=-1)
def _inverse(self, y):
z1, z2 = tf.split(y, 2, axis=-1)
h2 = self.block2.inverse(z2)
h1 = tf.concat([z1, h2], axis=-1)
return self.block1.inverse(h1)
def _forward_log_det_jacobian(self, x):
log_det1 = self.block1.forward_log_det_jacobian(x, event_ndims=3)
h1 = self.block1.forward(x)
_, h2 = tf.split(h1, 2, axis=-1)
log_det2 = self.block2.forward_log_det_jacobian(h2, event_ndims=3)
return log_det1 + log_det2
def _inverse_log_det_jacobian(self, y):
z1, z2 = tf.split(y, 2, axis=-1)
h2 = self.block2.inverse(z2)
log_det2 = self.block2.inverse_log_det_jacobian(z2, event_ndims=3)
h1 = tf.concat([z1, h2], axis=-1)
log_det1 = self.block1.inverse_log_det_jacobian(h1, event_ndims=3)
return log_det1 + log_det2
def _forward_event_shape_tensor(self, input_shape):
height, width, channels = input_shape[-3], input_shape[-2], input_shape[-1]
return height // 4, width // 4, 16 * channels
def _inverse_event_shape_tensor(self, output_shape):
height, width, channels = output_shape[-3], output_shape[-2], output_shape[-1]
return 4 * height, 4 * width, channels // 16
```
```python
# Create an instance of the multiscale architecture
multiscale_bijector = RealNVPMultiScale()
```
#### Data preprocessing bijector
We will also preprocess the image data before sending it through the RealNVP model. To do this, for a Tensor $x$ of pixel values in $[0, 1]^D$, we transform $x$ according to the following:
$$
T(x) = \text{logit}\left(\alpha + (1 - 2\alpha)x\right),\tag{7}
$$
where $\alpha$ is a parameter, and the logit function, which is the inverse of the sigmoid function, is given by
$$
\text{logit}(p) = \log (p) - \log (1 - p).
$$
You should now complete the following function to construct this bijector using in-built bijectors from the bijectors module.
* The function takes the parameter `alpha` as an input, which you can assume to take a small positive value ($\ll0.5$)
* The function should construct and return a bijector that computes $(7)$ in the forward pass
```python
#### GRADED CELL ####
# Complete the following function.
# Make sure to not change the function name or arguments.
def get_preprocess_bijector(alpha):
"""
This function should create a chained bijector that computes the
transformation T in equation (7) above.
This can be computed using in-built bijectors from the bijectors module.
Your function should then return the chained bijector.
"""
return tfb.Chain([tfb.Invert(tfb.Sigmoid()),
tfb.Shift(alpha),
tfb.Scale(1 - 2 * alpha)])
```
```python
# Create an instance of the preprocess bijector
preprocess = get_preprocess_bijector(0.05)
```
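As a quick numerical check (a sketch, not part of the graded cells), the bijector's forward pass should agree with a direct evaluation of equation $(7)$:
```python
# Compare the chained bijector against logit(alpha + (1 - 2 * alpha) * x)
x = tf.constant([[0.1, 0.5, 0.9]])
p = 0.05 + (1 - 2 * 0.05) * x
print(preprocess.forward(x))
print(tf.math.log(p) - tf.math.log(1 - p))  # should match the line above
```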
#### Train the RealNVP model
Finally, we will train our RealNVP model.
We will use the following model class to help with the training process.
```python
# Helper class for training
class RealNVPModel(Model):
def __init__(self, **kwargs):
super(RealNVPModel, self).__init__(**kwargs)
self.preprocess = get_preprocess_bijector(0.05)
self.realnvp_multiscale = RealNVPMultiScale()
self.bijector = tfb.Chain([self.realnvp_multiscale, self.preprocess])
def build(self, input_shape):
output_shape = self.bijector(tf.expand_dims(tf.zeros(input_shape[1:]), axis=0)).shape
self.base = tfd.Independent(tfd.Normal(loc=tf.zeros(output_shape[1:]), scale=1.),
reinterpreted_batch_ndims=3)
self._bijector_variables = (
list(self.bijector.variables))
self.flow = tfd.TransformedDistribution(
distribution=self.base,
bijector=tfb.Invert(self.bijector),
)
super(RealNVPModel, self).build(input_shape)
def call(self, inputs, training=None, **kwargs):
return self.flow
def sample(self, batch_size):
sample = self.base.sample(batch_size)
return self.bijector.inverse(sample)
```
```python
# Create an instance of the RealNVPModel class
realnvp_model = RealNVPModel()
realnvp_model.build((1, 32, 32, 3))
```
```python
# Compute the number of variables in the model
print("Total trainable variables:")
print(sum([np.prod(v.shape) for v in realnvp_model.trainable_variables]))
```
Total trainable variables:
315180
Note that the model's `call` method returns the `TransformedDistribution` object. Also, we have set up our datasets to return the input image twice as a 2-tuple. This is so we can train our model with negative log-likelihood as normal.
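For reference, pairing each image with itself is a one-line `tf.data` transformation. The sketch below is illustrative only; the actual `train_ds`, `val_ds` and `test_ds` are constructed earlier in the notebook, and `images_ds` here is a hypothetical dataset of images:
```python
# Illustrative only: yield (input, target) pairs where the target is the input image itself
paired_ds = images_ds.map(lambda x: (x, x))
```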
```python
# Define the negative log-likelihood loss function
def nll(y_true, y_pred):
return -y_pred.log_prob(y_true)
```
It is recommended to use the GPU accelerator hardware on Colab to train this model, as it can take some time to train. Note that it is not required to train the model in order to pass this assignment. For optimal results, a larger model should be trained for longer.
```python
# Compile and train the model
optimizer = Adam()
realnvp_model.compile(loss=nll, optimizer=Adam())
realnvp_model.fit(train_ds, validation_data=val_ds, epochs=10)
```
Epoch 1/10
938/938 [==============================] - 387s 392ms/step - loss: 213.6944 - val_loss: -3272.2449
Epoch 2/10
938/938 [==============================] - 363s 387ms/step - loss: -4292.1904 - val_loss: -5019.3613
Epoch 3/10
938/938 [==============================] - 364s 388ms/step - loss: -5596.1196 - val_loss: -5903.5815
Epoch 4/10
938/938 [==============================] - 363s 387ms/step - loss: -6373.9087 - val_loss: -6692.9321
Epoch 5/10
938/938 [==============================] - 364s 387ms/step - loss: -6852.5996 - val_loss: -7120.7026
Epoch 6/10
938/938 [==============================] - 363s 387ms/step - loss: -7192.7466 - val_loss: -7292.8345
Epoch 7/10
938/938 [==============================] - 364s 388ms/step - loss: -7274.7817 - val_loss: -7627.1870
Epoch 8/10
938/938 [==============================] - 364s 388ms/step - loss: -7667.7085 - val_loss: -7847.4912
Epoch 9/10
938/938 [==============================] - 363s 387ms/step - loss: -7834.7275 - val_loss: -7753.6006
Epoch 10/10
938/938 [==============================] - 363s 387ms/step - loss: -7931.5469 - val_loss: -8150.2329
<keras.callbacks.History at 0x7f7e35489890>
```python
# Evaluate the model
realnvp_model.evaluate(test_ds)
```
157/157 [==============================] - 25s 151ms/step - loss: -8143.5366
-8143.53662109375
#### Generate some samples
```python
# Sample from the model
samples = realnvp_model.sample(8).numpy()
```
/usr/local/lib/python3.7/dist-packages/keras/engine/base_layer.py:2215: UserWarning: `layer.apply` is deprecated and will be removed in a future version. Please use `layer.__call__` method instead.
warnings.warn('`layer.apply` is deprecated and '
```python
# Display the samples
n_img = 8
f, axs = plt.subplots(2, n_img // 2, figsize=(14, 7))
for k, image in enumerate(samples):
i = k % 2
j = k // 2
axs[i, j].imshow(image[0])
axs[i, j].axis('off')
f.subplots_adjust(wspace=0.01, hspace=0.03)
```
Congratulations on completing this programming assignment! In the next week of the course we will look at the variational autoencoder.
|
e8655c7e54d0c40d3d0dcf869881c098573fdfd3
| 648,784 |
ipynb
|
Jupyter Notebook
|
Course3-Probabilistic_Deep_Learning_with_Tensorflow2/week3_Programming_Assignment.ipynb
|
mella30/Probabilistic-Deep-Learning-with-TensorFlow-2
|
e9748316547d7f433632f4735990306d6e15da72
|
[
"MIT"
] | null | null | null |
Course3-Probabilistic_Deep_Learning_with_Tensorflow2/week3_Programming_Assignment.ipynb
|
mella30/Probabilistic-Deep-Learning-with-TensorFlow-2
|
e9748316547d7f433632f4735990306d6e15da72
|
[
"MIT"
] | null | null | null |
Course3-Probabilistic_Deep_Learning_with_Tensorflow2/week3_Programming_Assignment.ipynb
|
mella30/Probabilistic-Deep-Learning-with-TensorFlow-2
|
e9748316547d7f433632f4735990306d6e15da72
|
[
"MIT"
] | null | null | null | 345.281533 | 292,982 | 0.915413 | true | 11,404 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.66888 | 0.861538 | 0.576266 |
__label__eng_Latn
| 0.943654 | 0.177189 |
# Almgren and Chriss Model For Optimal Execution of Portfolio Transactions
### Introduction
We consider the execution of portfolio transactions with the aim of minimizing a combination of risk and transaction costs arising from permanent and temporary market impact. As an example, assume that you have a certain number of stocks that you want to sell within a given time frame. If you place this sell order directly on the market as it is, transaction costs may rise due to temporary market impact. On the other hand, if you split the order up into pieces over time, costs may rise due to volatility in the stock price.
[Almgren and Chriss](https://cims.nyu.edu/~almgren/papers/optliq.pdf) provided a solution to this problem by assuming the permanent and temporary market impact functions are linear functions of the rate of trading, and that stock prices follow a discrete arithmetic random walk.
In this notebook, we will take a look at the model used by Almgren and Chriss to solve the optimal liquidation problem. We will start by stating the formal definitions of *trading trajectory*, *trading list*, and *trading strategy* for liquidating a single stock.
### Trading Trajectory, Trading List, and Trading Strategy
We define trading trajectory, trading list, and trading strategy just as Almgren and Chriss did in their [paper](https://cims.nyu.edu/~almgren/papers/optliq.pdf). Suppose we hold $X$ shares of a stock that we want to liquidate before time $T$. Divide $T$ into $N$ intervals of length $\tau=\frac{T}{N}$ and define:
- $t_k = k\tau$ to be discrete times, where $k = 0,..,N$.
- A **trading trajectory** to be the list $(x_0,..,x_N)$, where $x_k$ is the number of shares we plan to hold at time $t_k$. We require that our initial position $x_0 = X$, and that at liquidation time $T$, $x_N = 0$.
- A **trading list** to be $(n_1,..,n_N)$, where $n_k = x_{k-1} - x_k$ is the number of shares that we will sell between times $t_{k-1}$ and $t_k$.
- A **trading strategy** as a rule for determining $n_k$ from the information available at time $t_{k-1}$.
Below, we can see a visual example of a trading trajectory, for $N = 12$.
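A complementary code sketch of these objects, using a simple linear liquidation schedule with made-up numbers:
```python
import numpy as np

X, N, T = 1_000_000, 12, 12          # shares to sell, number of intervals, time horizon
tau = T / N
x = X * (1 - np.arange(N + 1) / N)   # trading trajectory: x_0 = X, ..., x_N = 0
n = x[:-1] - x[1:]                   # trading list: n_k = x_{k-1} - x_k
print(x)
print(n)                             # every n_k equals X / N for this linear schedule
```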
## Price Dynamics
We will assume that the stock price evolves according to a discrete arithmetic random walk:
\begin{equation}
S_k = S_{k-1} + \sigma \tau^{1/2} \xi_k
\end{equation}
for $k = 1,..,N$ and where:
\begin{align*}
S_k &= \text{ stock price at time } k\\
\sigma &= \text{ standard deviation of the fluctuations in stock price}\\
\tau &= \text{ length of the discrete time interval}\\
\xi_k &= \text{ draws from independent random variables}
\end{align*}
We will denote the initial stock price as $S_0$. The role of $\xi_k$ is to simulate random price fluctuations using random numbers drawn from a standard normal (Gaussian) distribution with zero mean and unit variance. The code below shows us what this price model looks like, for an initial stock price of $S_0 = \$50$, a standard deviation of price fluctuations of $\sigma = 0.379$, and a discrete time interval of $\tau = 1$.
```python
%matplotlib inline
import matplotlib.pyplot as plt
# Add-on : Hide Matplotlib deprecate warnings
import warnings
warnings.filterwarnings("ignore")
# High resolution plot outputs for retina display
%config InlineBackend.figure_format = 'retina'
import utils
# Set the default figure size
plt.rcParams['figure.figsize'] = [17.0, 7.0]
# Set the number of days to follow the stock price
n_days = 100
# Plot the stock price as a function of time
utils.plot_price_model(seed = 0, num_days = n_days)
```
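The `utils.plot_price_model` helper is not shown in this notebook. A minimal simulation of the same arithmetic random walk might look like the following (a sketch, not the helper's actual implementation):
```python
import numpy as np

# Simulate S_k = S_{k-1} + sigma * tau^(1/2) * xi_k for n_days steps
S0, sigma, tau = 50.0, 0.379, 1.0
xi = np.random.standard_normal(n_days)
S = S0 + np.concatenate(([0.0], np.cumsum(sigma * np.sqrt(tau) * xi)))
plt.plot(S)
plt.xlabel('Time (days)')
plt.ylabel('Stock price')
plt.show()
```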
## Market Impact
As we learned previously, the price of a stock is affected by the market impact that occurs every time we sell. In their model, Almgren and Chriss distinguish between two types of market impact: permanent and temporary. We will now add these two factors into our price model.
### Permanent Impact
Permanent market impact refers to changes in the equilibrium price of a stock as a direct function of our trading. Permanent market impact is called *permanent* because its effect persists for the entire liquidation period, $T$. We will denote the permanent price impact as $g(v)$, and will add it to our price model:
\begin{equation}
S_k = S_{k-1} + \sigma \tau^{1/2} \xi_k - \tau g\left(\frac{n_k}{\tau}\right)
\end{equation}
Here, we assumed the permanent impact function, $g(v)$, is a linear function of the trading rate, $v = n_k / \tau$. We will take $g(v)$ to have the form:
\begin{equation}
g(v) = \gamma \left(\frac{n_k}{\tau}\right)
\end{equation}
where $\gamma$ is a constant and has units of (\$/share${}^2$). Replacing this in the above equation we get:
\begin{equation}
S_k = S_{k-1} + \sigma \tau^{1/2} \xi_k - \gamma n_k
\end{equation}
With this form, we can see that for each $n$ shares that we sell, we will depress the stock price permanently by $n\gamma$, regardless of the time we take to sell the stocks.
### Temporary Impact
Temporary market impact refers to temporary imbalances in supply and demand caused by our trading. This leads to temporary price movements away from equilibrium. Temporary market impact is called *temporary* because its effect
dissipates by the next trading period. We will denote the temporary price impact as $h(v)$. Given this, the actual stock price at time $k$ is given by:
\begin{equation}
\tilde{S_k} = S_{k-1} - h\left(\frac{n_k}{\tau}\right)
\end{equation}
Where, we have again assumed the temporary impact function, $h(v)$, is a linear function of the trading rate, $v = n_k / \tau$. We will take $h(v)$ to have the form:
\begin{equation}
h(v) = \epsilon \mbox{ sign}(n_k) + \eta \left(\frac{n_k}{\tau}\right)
\end{equation}
where $\epsilon$ and $\eta$ are constants with units (\$/share) and (\$ time/share${}^2$), respectively. It is important to note that $h(v)$ does not affect the price $S_k$.
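A short sketch of the two linear impact functions and their effect on the price (the constants $\gamma$, $\epsilon$ and $\eta$ are not fixed anywhere in this notebook, so the values below are purely illustrative):
```python
import numpy as np

gamma, epsilon, eta = 2.5e-7, 0.0625, 2.5e-6   # illustrative impact parameters
tau = 1.0

def g(v):
    """Permanent impact for trading rate v = n_k / tau."""
    return gamma * v

def h(v):
    """Temporary impact for trading rate v = n_k / tau."""
    return epsilon * np.sign(v) + eta * v

n_k, S_prev = 10_000, 50.0
S_k = S_prev - tau * g(n_k / tau)      # permanent effect (noise term omitted)
S_k_tilde = S_prev - h(n_k / tau)      # effective price received for this trade
print(S_k, S_k_tilde)
```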
## Capture
We define the **Capture** to be the total profits resulting from trading along a particular trading trajectory, upon completion of all trades. We can compute the capture via:
\begin{equation}
\sum\limits_{k=1}^{N} n_k \tilde{S_k} = X S_0 + \sum\limits_{k=1}^{N} \left(\sigma \tau^{1/2} \xi_k - \tau g\left(\frac{n_k}{\tau}\right)\right) x_k - \sum\limits_{k=1}^{N} n_k h\left(\frac{n_k}{\tau}\right)
\end{equation}
As we can see, this is the sum over time intervals of the number of shares $n_k$ sold in each interval times the effective price per share $\tilde{S_k}$ received on that sale.
## Implementation Shortfall
We define the **Implementation Shortfall** as the total cost of trading and is given by:
\begin{equation}
I_s = X S_0 - \sum_{k = 1}^N n_k \tilde{S_k}
\end{equation}
This is what we seek to minimize when determining the best trading strategy!
Note that since $\xi_k$ is random, so is the implementation shortfall. Therefore, we have to frame the minimization problem in terms of the expectation value of the shortfall and its corresponding variance. We'll refer to $E(x)$ as the expected shortfall and $V(x)$ as the variance of the shortfall. Simplifying the above equation for $I_s$, it is easy to see that:
\begin{equation}
E(x) = \sum\limits_{k=1}^{N} \tau x_k g\left(\frac{n_k}{\tau}\right) + \sum\limits_{k=1}^{N} n_k h\left(\frac{n_k}{\tau}\right)
\end{equation}
and
\begin{equation}
V(x) = \sigma^2 \sum\limits_{k=1}^{N} \tau {x_k}^2
\end{equation}
The units of $E(x)$ are dollars and the units of $V(x)$ are dollars squared. So now, we can reframe our minimization problem in terms of $E(x)$ and $V(x)$.
For a given level of variance of shortfall, $V(x)$, we seek to minimize the expectation of shortfall, $E(x)$. In the next section we will see how to solve this problem.
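For a given trajectory these two quantities can be evaluated directly (a sketch, again assuming linear impact functions with illustrative constants `gamma`, `epsilon` and `eta` as above):
```python
def expected_shortfall(x, tau, gamma, epsilon, eta):
    """E(x) for a trajectory x = (x_0, ..., x_N) with linear impact functions."""
    n = x[:-1] - x[1:]                                    # trading list
    permanent = np.sum(tau * x[1:] * (gamma * n / tau))
    temporary = np.sum(n * (epsilon * np.sign(n) + eta * n / tau))
    return permanent + temporary

def shortfall_variance(x, tau, sigma):
    """V(x) = sigma^2 * sum_k tau * x_k^2, summing over k = 1, ..., N."""
    return sigma**2 * np.sum(tau * x[1:]**2)
```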
## Utility Function
Our goal now is to find the strategy that has the minimum expected shortfall $E(x)$ for a given maximum level of variance $V(x) \ge 0$. This constrained optimization problem can be solved by introducing a Lagrange multiplier $\lambda$. Therefore, our problem reduces to finding the trading strategy that minimizes the **Utility Function** $U(x)$:
\begin{equation}
U(x) = E(x) + \lambda V(x)
\end{equation}
The parameter $\lambda$ is referred to as **trader’s risk aversion** and controls how much we penalize the variance relative to the expected shortfall.
The intuition behind this utility function is as follows. Consider a stock which exhibits high price volatility and thus a high risk of price movement away from the equilibrium price. A risk-averse trader would prefer to trade a large portion of the volume immediately, causing a known price impact, rather than risk trading in small increments at successively adverse prices. Alternatively, if the price is expected to be stable over the liquidation period, the trader would rather split the trade into smaller sizes to avoid price impact. This trade-off between speed of execution and risk of price movement is ultimately what governs the structure of the resulting trade list.
# Optimal Trading Strategy
Almgren and Chriss solved the above problem and showed that for each value
of risk aversion there is a uniquely determined optimal execution strategy. The details of their derivation are discussed in their [paper](https://cims.nyu.edu/~almgren/papers/optliq.pdf). Here, we will just state the general solution.
The optimal trajectory is given by:
\begin{equation}
x_j = \frac{\sinh \left( \kappa \left( T-t_j\right)\right)}{ \sinh (\kappa T)}X, \hspace{1cm}\text{ for } j=0,...,N
\end{equation}
and the associated trading list:
\begin{equation}
n_j = \frac{2 \sinh \left(\frac{1}{2} \kappa \tau \right)}{ \sinh \left(\kappa T\right) } \cosh \left(\kappa \left(T - t_{j-\frac{1}{2}}\right)\right) X, \hspace{1cm}\text{ for } j=1,...,N
\end{equation}
where $t_{j-1/2} = (j-\frac{1}{2}) \tau$. The expected shortfall and variance of the optimal trading strategy are given by:
In the above equations $\kappa$ is given by:
\begin{align*}
&\kappa = \frac{1}{\tau}\cosh^{-1}\left(\frac{\tau^2}{2}\tilde{\kappa}^2 + 1\right)
\end{align*}
where:
\begin{align*}
&\tilde{\kappa}^2 = \frac{\lambda \sigma^2}{\tilde{\eta}} = \frac{\lambda \sigma^2}{\eta \left(1-\frac{\gamma \tau}{2 \eta}\right)}
\end{align*}
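A sketch that evaluates these closed-form expressions numerically (the parameters $\lambda$, $\eta$ and $\gamma$ are not fixed anywhere in this notebook, so the values below are purely illustrative):
```python
import numpy as np

def optimal_strategy(X, T, N, sigma, eta, gamma, lam):
    """Optimal holdings x_j and trade list n_j from the closed-form solution above."""
    tau = T / N
    eta_tilde = eta * (1 - gamma * tau / (2 * eta))
    kappa = np.arccosh(0.5 * tau**2 * (lam * sigma**2 / eta_tilde) + 1) / tau
    t = tau * np.arange(N + 1)
    x = X * np.sinh(kappa * (T - t)) / np.sinh(kappa * T)
    t_half = tau * (np.arange(1, N + 1) - 0.5)
    n = 2 * np.sinh(0.5 * kappa * tau) * np.cosh(kappa * (T - t_half)) * X / np.sinh(kappa * T)
    return x, n

x, n = optimal_strategy(X=1e6, T=60, N=60, sigma=0.379, eta=2.5e-6, gamma=2.5e-7, lam=1e-6)
print(x[0], x[-1], n.sum())   # x_0 = X, x_N = 0, and the trades sum to X
```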
```python
```
|
3da6a578f9a36a0e1210662a4da96b05d976b134
| 150,985 |
ipynb
|
Jupyter Notebook
|
drl-finance/1-Almgren-and-Chriss Model.ipynb
|
fdasilva59/deep-reinforcement-learning
|
a95e71730e54dae26ca24fcb31d5f4a7bb0eb025
|
[
"MIT"
] | null | null | null |
drl-finance/1-Almgren-and-Chriss Model.ipynb
|
fdasilva59/deep-reinforcement-learning
|
a95e71730e54dae26ca24fcb31d5f4a7bb0eb025
|
[
"MIT"
] | null | null | null |
drl-finance/1-Almgren-and-Chriss Model.ipynb
|
fdasilva59/deep-reinforcement-learning
|
a95e71730e54dae26ca24fcb31d5f4a7bb0eb025
|
[
"MIT"
] | null | null | null | 531.637324 | 137,220 | 0.916144 | true | 2,814 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.843895 | 0.83762 | 0.706863 |
__label__eng_Latn
| 0.994168 | 0.480612 |
```python
from sympy import *
y, x, a, s, u = symbols('ry rx amp sigma2 mu')
simplify(diff((y-(a*exp(-(x-u)**2/(2*s**2))))**2, s, u))
```
2*amp*(mu - rx)*(2*sigma2**2*(amp - ry*exp((mu - rx)**2/(2*sigma2**2))) - (2*amp - ry*exp((mu - rx)**2/(2*sigma2**2)))*(mu - rx)**2)*exp(-(mu - rx)**2/sigma2**2)/sigma2**5
```python
if 1:
y, x, a, s, u = symbols('ly lx amp sigma1 mu')
else:
y, x, a, s, u = symbols('ry rx amp sigma2 mu')
hessian((y-(a*exp(-(x-u)**2/(2*s**2))))**2, (a, s, u))
```
Matrix([
[ 2*exp(-(lx - mu)**2/sigma1**2), 2*amp*(lx - mu)**2*exp(-(lx - mu)**2/sigma1**2)/sigma1**3 - 2*(lx - mu)**2*(-amp*exp(-(lx - mu)**2/(2*sigma1**2)) + ly)*exp(-(lx - mu)**2/(2*sigma1**2))/sigma1**3, -amp*(-2*lx + 2*mu)*exp(-(lx - mu)**2/sigma1**2)/sigma1**2 + (-2*lx + 2*mu)*(-amp*exp(-(lx - mu)**2/(2*sigma1**2)) + ly)*exp(-(lx - mu)**2/(2*sigma1**2))/sigma1**2],
[ 2*amp*(lx - mu)**2*exp(-(lx - mu)**2/sigma1**2)/sigma1**3 - 2*(lx - mu)**2*(-amp*exp(-(lx - mu)**2/(2*sigma1**2)) + ly)*exp(-(lx - mu)**2/(2*sigma1**2))/sigma1**3, 2*amp**2*(lx - mu)**4*exp(-(lx - mu)**2/sigma1**2)/sigma1**6 + 6*amp*(lx - mu)**2*(-amp*exp(-(lx - mu)**2/(2*sigma1**2)) + ly)*exp(-(lx - mu)**2/(2*sigma1**2))/sigma1**4 - 2*amp*(lx - mu)**4*(-amp*exp(-(lx - mu)**2/(2*sigma1**2)) + ly)*exp(-(lx - mu)**2/(2*sigma1**2))/sigma1**6, -amp**2*(-2*lx + 2*mu)*(lx - mu)**2*exp(-(lx - mu)**2/sigma1**2)/sigma1**5 - 2*amp*(-2*lx + 2*mu)*(-amp*exp(-(lx - mu)**2/(2*sigma1**2)) + ly)*exp(-(lx - mu)**2/(2*sigma1**2))/sigma1**3 + amp*(-2*lx + 2*mu)*(lx - mu)**2*(-amp*exp(-(lx - mu)**2/(2*sigma1**2)) + ly)*exp(-(lx - mu)**2/(2*sigma1**2))/sigma1**5],
[-amp*(-2*lx + 2*mu)*exp(-(lx - mu)**2/sigma1**2)/sigma1**2 + (-2*lx + 2*mu)*(-amp*exp(-(lx - mu)**2/(2*sigma1**2)) + ly)*exp(-(lx - mu)**2/(2*sigma1**2))/sigma1**2, -amp**2*(-2*lx + 2*mu)*(lx - mu)**2*exp(-(lx - mu)**2/sigma1**2)/sigma1**5 - 2*amp*(-2*lx + 2*mu)*(-amp*exp(-(lx - mu)**2/(2*sigma1**2)) + ly)*exp(-(lx - mu)**2/(2*sigma1**2))/sigma1**3 + amp*(-2*lx + 2*mu)*(lx - mu)**2*(-amp*exp(-(lx - mu)**2/(2*sigma1**2)) + ly)*exp(-(lx - mu)**2/(2*sigma1**2))/sigma1**5, amp**2*(-2*lx + 2*mu)**2*exp(-(lx - mu)**2/sigma1**2)/(2*sigma1**4) + 2*amp*(-amp*exp(-(lx - mu)**2/(2*sigma1**2)) + ly)*exp(-(lx - mu)**2/(2*sigma1**2))/sigma1**2 - amp*(-2*lx + 2*mu)**2*(-amp*exp(-(lx - mu)**2/(2*sigma1**2)) + ly)*exp(-(lx - mu)**2/(2*sigma1**2))/(2*sigma1**4)]])
```python
if 0:
y, x, a, s, u = symbols('ly lx amp sigma1 mu')
else:
y, x, a, s, u = symbols('ry rx amp sigma2 mu')
hessian((y-(a*exp(-(x-u)**2/(2*s**2))))**2, (a, s, u))
```
Matrix([
[ 2*exp(-(-mu + rx)**2/sigma2**2), 2*amp*(-mu + rx)**2*exp(-(-mu + rx)**2/sigma2**2)/sigma2**3 - 2*(-mu + rx)**2*(-amp*exp(-(-mu + rx)**2/(2*sigma2**2)) + ry)*exp(-(-mu + rx)**2/(2*sigma2**2))/sigma2**3, -amp*(2*mu - 2*rx)*exp(-(-mu + rx)**2/sigma2**2)/sigma2**2 + (2*mu - 2*rx)*(-amp*exp(-(-mu + rx)**2/(2*sigma2**2)) + ry)*exp(-(-mu + rx)**2/(2*sigma2**2))/sigma2**2],
[2*amp*(-mu + rx)**2*exp(-(-mu + rx)**2/sigma2**2)/sigma2**3 - 2*(-mu + rx)**2*(-amp*exp(-(-mu + rx)**2/(2*sigma2**2)) + ry)*exp(-(-mu + rx)**2/(2*sigma2**2))/sigma2**3, 2*amp**2*(-mu + rx)**4*exp(-(-mu + rx)**2/sigma2**2)/sigma2**6 + 6*amp*(-mu + rx)**2*(-amp*exp(-(-mu + rx)**2/(2*sigma2**2)) + ry)*exp(-(-mu + rx)**2/(2*sigma2**2))/sigma2**4 - 2*amp*(-mu + rx)**4*(-amp*exp(-(-mu + rx)**2/(2*sigma2**2)) + ry)*exp(-(-mu + rx)**2/(2*sigma2**2))/sigma2**6, -amp**2*(-mu + rx)**2*(2*mu - 2*rx)*exp(-(-mu + rx)**2/sigma2**2)/sigma2**5 - 2*amp*(2*mu - 2*rx)*(-amp*exp(-(-mu + rx)**2/(2*sigma2**2)) + ry)*exp(-(-mu + rx)**2/(2*sigma2**2))/sigma2**3 + amp*(-mu + rx)**2*(2*mu - 2*rx)*(-amp*exp(-(-mu + rx)**2/(2*sigma2**2)) + ry)*exp(-(-mu + rx)**2/(2*sigma2**2))/sigma2**5],
[ -amp*(2*mu - 2*rx)*exp(-(-mu + rx)**2/sigma2**2)/sigma2**2 + (2*mu - 2*rx)*(-amp*exp(-(-mu + rx)**2/(2*sigma2**2)) + ry)*exp(-(-mu + rx)**2/(2*sigma2**2))/sigma2**2, -amp**2*(-mu + rx)**2*(2*mu - 2*rx)*exp(-(-mu + rx)**2/sigma2**2)/sigma2**5 - 2*amp*(2*mu - 2*rx)*(-amp*exp(-(-mu + rx)**2/(2*sigma2**2)) + ry)*exp(-(-mu + rx)**2/(2*sigma2**2))/sigma2**3 + amp*(-mu + rx)**2*(2*mu - 2*rx)*(-amp*exp(-(-mu + rx)**2/(2*sigma2**2)) + ry)*exp(-(-mu + rx)**2/(2*sigma2**2))/sigma2**5, amp**2*(2*mu - 2*rx)**2*exp(-(-mu + rx)**2/sigma2**2)/(2*sigma2**4) + 2*amp*(-amp*exp(-(-mu + rx)**2/(2*sigma2**2)) + ry)*exp(-(-mu + rx)**2/(2*sigma2**2))/sigma2**2 - amp*(2*mu - 2*rx)**2*(-amp*exp(-(-mu + rx)**2/(2*sigma2**2)) + ry)*exp(-(-mu + rx)**2/(2*sigma2**2))/(2*sigma2**4)]])
|
99026262ad4c9b4b40e51849bb140b2f508f2b5a
| 6,942 |
ipynb
|
Jupyter Notebook
|
notebooks/Hessian.ipynb
|
Chris7/pyquant
|
56410060546bcdafdba83232d8119f23a28cac56
|
[
"MIT"
] | 13 |
2016-04-26T14:19:44.000Z
|
2022-03-29T19:38:15.000Z
|
notebooks/Hessian.ipynb
|
Chris7/pyquant
|
56410060546bcdafdba83232d8119f23a28cac56
|
[
"MIT"
] | 14 |
2016-01-11T17:48:57.000Z
|
2021-12-19T17:50:30.000Z
|
notebooks/Hessian.ipynb
|
Chris7/pyquant
|
56410060546bcdafdba83232d8119f23a28cac56
|
[
"MIT"
] | 6 |
2016-05-12T17:39:26.000Z
|
2021-01-30T18:12:22.000Z
| 64.277778 | 808 | 0.40291 | true | 2,178 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.939913 | 0.817574 | 0.768449 |
__label__ces_Latn
| 0.042156 | 0.623697 |
```python
import cmath, random, numpy
import functools
import matplotlib.pyplot as plt
import sys
import os
import math
from qutip import*
from sympy import*
#from sympsi import*
from scipy import optimize
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
import time
import math
from qutip import *
from qutip.ipynbtools import plot_animation
import numpy as np
import matplotlib.pyplot as plt
import qutip
%matplotlib inline
import matplotlib.pylab as plt
import matplotlib as mpl
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from IPython.display import display, Math, Latex
import cmath
from mpl_toolkits.axes_grid1 import AxesGrid
from scipy.special import factorial
```
```python
"""Define the operators for MZI - we will deal in Heisenberg """
T = Symbol('T')
xvec = np.arange(-40.,40.)*5./40
yvec = np.arange(-50.,50)*5/40
X,Y = np.meshgrid(xvec, xvec) ##Some plotting params
X1,Y1 = np.meshgrid(yvec,yvec)
N_dim = 35  ## Dimension of the Hilbert space
a1 = destroy(N_dim) ##This is for single-photon field
a2 = destroy(N_dim) ##for coherent field
a3 = destroy(N_dim) ##for vacuum field
def n_choose_k(n,k):
return factorial(n)/(factorial(n-k)*factorial(k))
def D(state,alpha):
Rho_new=displace(N_dim,alpha)*state*displace(N_dim,alpha).dag()
return Rho_new
'''Define a rotation in phase space, or phase shifter operation'''
def Phase(theta):
b=-1j*theta*a1.dag()*a1;
return b.expm()
'''Squeezing operation, inputs a density matrix and outputs the squeezed density matrix for squeezing parameter r'''
def Sq(state,r):
Rho_new=squeeze(N_dim,r)*state*squeeze(N_dim,r).dag();
return Rho_new
'''The function below creates a beamsplitter operation that acts on two modes.
The value for k determines what number Fock state could be filtered out of the first state
based on a single photon input for the second BS port, followed by single photon detection.'''
def BS_operator_filtering(a1, a2, k):
theta_k = np.arctan(1/np.sqrt(k))
T = np.sin(theta_k)*np.sin(theta_k)
R = np.cos(theta_k)*np.cos(theta_k)
print('I am filtering', k, 'and:', theta_k*180/math.pi)
print('BS T is : ', T, 'and : ', R)
b = theta_k*(tensor(a1,a2.dag()) - tensor(a1.dag(),a2))
return b.expm()
def SSV_plus(r,alpha):
state = ket2dm((displace(N_dim,alpha)+displace(N_dim,-alpha))*squeeze(N_dim,r)*fock(N_dim,0))
norm_state = state/state.tr()
return norm_state
def SSV_minus(r,alpha):
state = ket2dm((displace(N_dim,alpha)-displace(N_dim,-alpha))*squeeze(N_dim,r)*fock(N_dim,0))
norm_state = state/state.tr()
return norm_state
def cat_plus(alpha):
cat = (1/(np.sqrt(2)*np.sqrt(1+np.e**(-alpha*alpha.conj()))))*(coherent(N_dim,-alpha)+(coherent(N_dim,alpha)))
return cat
def cat_minus(alpha):
cat = (1/(np.sqrt(2)*np.sqrt(1-np.e**(-alpha*alpha.conj()))))*(-coherent(N_dim,-alpha)+(coherent(N_dim,alpha)))
return cat
def pnr_resolution_detector(eta, click, n_truc):
pi_n = 0;
l = np.arange(click,n_truc)
for i in l:
pi_n += n_choose_k(i,click)*math.pow((1-eta),(i-click))*math.pow(eta,click)*fock(N_dim,i)*fock(N_dim,i).dag()
#print("The final Povm element is: ", pi_0)
return Qobj(pi_n)
def Fock_Filter_povm(in_state,in_fock,refl,num_det,eta,n_truc):
Projector = tensor(pnr_resolution_detector(eta, num_det, n_truc),qeye(N_dim));
Initial_state=tensor(in_state,ket2dm(fock(N_dim,in_fock)));
theta_k=np.arccos(np.sqrt(refl));
BS1= ((theta_k)*(tensor(a1,a2.dag()) - tensor(a1.dag(),a2))).expm()
Rho=BS1*Initial_state*BS1.dag();
Rho_filtered = ((Rho*Projector).ptrace(1))/((Rho*Projector).tr())
'''The operation .ptrace(m) takes the partial trace over every mode EXCEPT m, where the numbering
startes at 0. So .ptrace(1) means you keep mode 1, which is actually the 2nd mode'''
    print('BS has reflectivity',refl,' and I am detecting the |',num_det,'> state, where my detector has efficiency', eta)
return Rho_filtered
def Fock_Filter_prob(in_state,in_fock,refl,num_det,eta,n_truc):
Projector = tensor(pnr_resolution_detector(eta, num_det, n_truc),qeye(N_dim));
Initial_state=tensor(in_state,ket2dm(fock(N_dim,in_fock)));
theta_k=np.arccos(np.sqrt(refl));
BS1= ((theta_k)*(tensor(a1,a2.dag()) - tensor(a1.dag(),a2))).expm()
Rho=BS1*Initial_state*BS1.dag();
P=(Rho*Projector).tr()
print('The probability of a sucessful detection is:',P)
Rho_filtered = ((Rho*Projector).ptrace(1))/((Rho*Projector).tr())
#Rho_filtered=Rho*Projector
'''The operation .ptrace(m) takes the partial trace over every mode EXCEPT m, where the numbering
startes at 0. So .ptrace(1) means you keep mode 1, which is actually the 2nd mode'''
    print('BS has reflectivity',refl,' and I am detecting the |',num_det,'> state, where my detector has efficiency', eta)
return Rho_filtered
def fid(state1,state2):
F=np.absolute((state1.sqrtm()*state2*state1.sqrtm()).sqrtm().tr())
return F
```
```python
#Variable definitions:
#delta = initial coherent state amplitude
# refl(1-4)= beamsplitter r^2 values at each step
# n(1-4) = number of photons detected at each detector
# beta = amplitude of final dispalcement to displace the state back
#alpha = amplitude of SSV state to be compared with. Note, here
#this can be related to cat state amplitude by amp_cat=alpha/(Cosh[sq]-Sinh[sq])
# sq = the 'r' value, or squeezing parameter. This is the negative of the mathematica results.
'''Four-step check: r[4.] = 0.42134; r[3.] = 0.69684; r[2.] = 0.55398;
r[1.] = .576813; \[Delta] = 4.6621868; n[4.] = 1; n[3.] = 2;
n[2.] = 4; n[1.] = 6;
sq= -0.51, alpha= 1.59, beta= 2.08625'''
refl1= .576813**2; refl2=0.55398**2; refl3=0.69684**2;refl4=0.42134**2;
delta=4.6621868; beta=2.08625; sq=0.476595; alpha=1.59; n1=6;n2=4;n3=2;n4=1;
```
```python
ssv=SSV_plus(sq,alpha)
W_ssv=wigner(ssv,xvec,xvec);
eta=1;
first=Fock_Filter_prob(ket2dm(coherent(N_dim,delta)),1,refl1,n1,eta,N_dim)
W1=wigner(first,yvec,yvec);
fig = plt.figure(figsize=(16,7))
# `ax` is a 3D-aware axis instance, because of the projection='3d' keyword argument to add_subplot
ax = fig.add_subplot(2, 2, 2)
p = ax.contourf(X, Y, W_ssv, 20, cmap=cm.RdBu)
cb = fig.colorbar(p, shrink = 0.7)
# surface_plot with color grading and color bar
ax = fig.add_subplot(2, 2, 1, projection='3d')
p = ax.plot_surface(X, Y, W_ssv, rstride=1, cstride=1, cmap=cm.RdBu, linewidth=0.5)
cb = fig.colorbar(p,shrink = .7)
plt.title('Wigner function of ssv state')
fig = plt.figure(figsize=(16,7))
# `ax` is a 3D-aware axis instance, because of the projection='3d' keyword argument to add_subplot
ax = fig.add_subplot(2, 2, 2)
p = ax.contourf(X1, Y1, W1, 20, cmap=cm.RdBu)
cb = fig.colorbar(p, shrink = 0.7)
# surface_plot with color grading and color bar
ax = fig.add_subplot(2, 2, 1, projection='3d')
p = ax.plot_surface(X1, Y1, W1, rstride=1, cstride=1, cmap=cm.RdBu, linewidth=0.5)
cb = fig.colorbar(p,shrink = .7)
plt.title('Wigner function of filtered state')
fid = fidelity(ssv,first)
print('fidelity with ssv:',fid*fid)
fig, axes = plt.subplots(1, 3, figsize=(12,4))
bar0 = axes[0].bar(range(N_dim), first.diag())
lbl0 = axes[0].set_title("Filtered State")
lim0 = axes[0].set_xlim([-.5, 20])
bar1 = axes[1].bar(range(N_dim), ssv.diag())
lbl1 = axes[1].set_title("SSV")
lim1 = axes[1].set_xlim([-.5, 20])
bar2 = axes[2].bar(range(N_dim), ket2dm(coherent(N_dim,delta)).diag())
lbl2 = axes[2].set_title("Initial Coherent State")
lim2 = axes[2].set_xlim([-.5, 20])
plt.show()
```
```python
second=Fock_Filter_prob(first,1,refl2,n2,eta,N_dim);
W2=wigner(second,yvec,yvec);
fig = plt.figure(figsize=(16,7))
# `ax` is a 3D-aware axis instance, because of the projection='3d' keyword argument to add_subplot
ax = fig.add_subplot(2, 2, 2)
p = ax.contourf(X1, Y1, W2, 20, cmap=cm.RdBu)
cb = fig.colorbar(p, shrink = 0.7)
# surface_plot with color grading and color bar
ax = fig.add_subplot(2, 2, 1, projection='3d')
p = ax.plot_surface(X1, Y1, W2, rstride=1, cstride=1, cmap=cm.RdBu, linewidth=0.5)
cb = fig.colorbar(p,shrink = .7)
plt.title('Wigner function of filtered state')
fid = fidelity(ssv,second)
print('fidelity with cat:',fid*fid)
fig, axes = plt.subplots(1, 3, figsize=(12,4))
bar0 = axes[0].bar(range(N_dim), second.diag())
lbl0 = axes[0].set_title("Twice Filtered State")
lim0 = axes[0].set_xlim([-.5, 20])
bar1 = axes[1].bar(range(N_dim), first.diag())
lbl1 = axes[1].set_title("First Filtered State")
lim1 = axes[1].set_xlim([-.5, 20])
bar2 = axes[2].bar(range(N_dim), ket2dm(coherent(N_dim,delta)).diag())
lbl2 = axes[2].set_title("Initial Coherent State")
lim2 = axes[2].set_xlim([-.5, 20])
plt.show()
```
```python
third=Fock_Filter_prob(second,1,refl3,n3,eta,N_dim);
W3=wigner(third,yvec,yvec);
fig = plt.figure(figsize=(16,7))
# `ax` is a 3D-aware axis instance, because of the projection='3d' keyword argument to add_subplot
ax = fig.add_subplot(2, 2, 2)
p = ax.contourf(X1, Y1, W3, 20, cmap=cm.RdBu)
cb = fig.colorbar(p, shrink = 0.7)
# surface_plot with color grading and color bar
ax = fig.add_subplot(2, 2, 1, projection='3d')
p = ax.plot_surface(X1, Y1, W3, rstride=1, cstride=1, cmap=cm.RdBu, linewidth=0.5)
cb = fig.colorbar(p,shrink = .7)
plt.title('Wigner function of filtered state')
fid = fidelity(ssv,third)
print('fidelity with cat:',fid*fid)
fig, axes = plt.subplots(1, 4, figsize=(16,4))
bar0 = axes[0].bar(range(N_dim), third.diag())
lbl0 = axes[0].set_title("Third Filtered State")
lim0 = axes[0].set_xlim([-.5, 20])
bar1 = axes[1].bar(range(N_dim), second.diag())
lbl1 = axes[1].set_title("Second Filtered State")
lim1 = axes[1].set_xlim([-.5, 20])
bar2 = axes[2].bar(range(N_dim), first.diag())
lbl2 = axes[2].set_title("First Filtered State")
lim2 = axes[2].set_xlim([-.5, 20])
bar3 = axes[3].bar(range(N_dim), ket2dm(coherent(N_dim,delta)).diag())
lbl3 = axes[3].set_title("Initial Coherent State")
lim3 = axes[3].set_xlim([-.5, 20])
plt.show()
```
```python
fourth=Fock_Filter_prob(third,1,refl4,n4,eta,N_dim);
W4=wigner(fourth,xvec,xvec);
fig = plt.figure(figsize=(16,7))
# `ax` is a 3D-aware axis instance, because of the projection='3d' keyword argument to add_subplot
ax = fig.add_subplot(2, 2, 2)
p = ax.contourf(X, Y, W4, 20, cmap=cm.RdBu)
cb = fig.colorbar(p, shrink = 0.7)
# surface_plot with color grading and color bar
ax = fig.add_subplot(2, 2, 1, projection='3d')
p = ax.plot_surface(X, Y, W4, rstride=1, cstride=1, cmap=cm.RdBu, linewidth=0.5)
cb = fig.colorbar(p,shrink = .7)
plt.title('Wigner function of filtered state')
fid = fidelity(ssv,fourth)
print('fidelity with cat:',fid*fid)
fig, axes = plt.subplots(1, 4, figsize=(16,4))
bar0 = axes[0].bar(range(N_dim), fourth.diag())
lbl0 = axes[0].set_title("four-time Filtered State")
lim0 = axes[0].set_xlim([-.5, 20])
bar1 = axes[1].bar(range(N_dim), third.diag())
lbl1 = axes[1].set_title("three-time Filtered State")
lim1 = axes[1].set_xlim([-.5, 20])
bar2 = axes[2].bar(range(N_dim), second.diag())
lbl2 = axes[2].set_title("two-time Filtered State")
lim2 = axes[2].set_xlim([-.5, 20])
bar3 = axes[3].bar(range(N_dim), ket2dm(coherent(N_dim,delta)).diag())
lbl3 = axes[3].set_title("Initial Coherent State")
lim3 = axes[3].set_xlim([-.5, 20])
plt.show()
```
```python
final_state=D(fourth,beta*(-1))
W_final=wigner(final_state,xvec,xvec)
fig = plt.figure(figsize=(16,7))
# `ax` is a 3D-aware axis instance, because of the projection='3d' keyword argument to add_subplot
ax = fig.add_subplot(2, 2, 2)
p = ax.contourf(X, Y, W_ssv, 20, cmap=cm.RdBu)
cb = fig.colorbar(p, shrink = 0.7)
# surface_plot with color grading and color bar
ax = fig.add_subplot(2, 2, 1, projection='3d')
p = ax.plot_surface(X, Y, W_ssv, rstride=1, cstride=1, cmap=cm.RdBu, linewidth=0.5)
cb = fig.colorbar(p,shrink = .7)
plt.title('Wigner function of squeezed cat state')
fig = plt.figure(figsize=(16,7))
# `ax` is a 3D-aware axis instance, because of the projection='3d' keyword argument to add_subplot
ax = fig.add_subplot(2, 2, 2)
p = ax.contourf(X, Y, W_final, 20, cmap=cm.RdBu)
cb = fig.colorbar(p, shrink = 0.7)
# surface_plot with color grading and color bar
ax = fig.add_subplot(2, 2, 1, projection='3d')
p = ax.plot_surface(X, Y, W_final, rstride=1, cstride=1, cmap=cm.RdBu, linewidth=0.5)
cb = fig.colorbar(p,shrink = .7)
plt.title('Displaced Wigner function of |5>, |3> and |2> filtered coherent state')
fid = fidelity(ssv,final_state)
print('fidelity with ssv:',fid*fid)
plt.show()
fig, axes = plt.subplots(1, 5, figsize=(16,4))
bar0 = axes[0].bar(range(N_dim), second.diag())
lbl0 = axes[0].set_title("second Filtered")
lim0 = axes[0].set_xlim([-.5, 20])
bar1 = axes[1].bar(range(N_dim), third.diag())
lbl1 = axes[1].set_title("third Filtered")
lim1 = axes[1].set_xlim([-.5, 20])
bar2 = axes[2].bar(range(N_dim), fourth.diag())
lbl2 = axes[2].set_title("fourth Filtered State")
lim2 = axes[2].set_xlim([-.5, 20])
bar3 = axes[3].bar(range(N_dim), final_state.diag())
lbl3 = axes[3].set_title("Final")
lim3 = axes[3].set_xlim([-.5, 8])
bar4 = axes[4].bar(range(N_dim), ssv.diag())
lbl4 = axes[4].set_title("SSV")
lim4 = axes[4].set_xlim([-.5, 10])
plt.show()
```
```python
from mpl_toolkits.axes_grid1 import AxesGrid
def shiftedColorMap(cmap, start=1, midpoint=0, stop=0, name='shiftedcmap'):
'''
Function to offset the "center" of a colormap. Useful for
data with a negative min and positive max and you want the
middle of the colormap's dynamic range to be at zero.
Input
-----
cmap : The matplotlib colormap to be altered
start : Offset from lowest point in the colormap's range.
Defaults to 0.0 (no lower offset). Should be between
0.0 and `midpoint`.
midpoint : The new center of the colormap. Defaults to
0.5 (no shift). Should be between 0.0 and 1.0. In
general, this should be 1 - vmax / (vmax + abs(vmin))
For example if your data range from -15.0 to +5.0 and
you want the center of the colormap at 0.0, `midpoint`
should be set to 1 - 5/(5 + 15)) or 0.75
stop : Offset from highest point in the colormap's range.
Defaults to 1.0 (no upper offset). Should be between
`midpoint` and 1.0.
'''
cdict = {
'red': [],
'green': [],
'blue': [],
'alpha': []
}
# regular index to compute the colors
reg_index = np.linspace(start, stop, 257)
# shifted index to match the data
shift_index = np.hstack([
np.linspace(0, midpoint, 128, endpoint=False),
np.linspace(midpoint, 1, 129, endpoint=True)
])
for ri, si in zip(reg_index, shift_index):
r, g, b, a = cmap(ri)
cdict['red'].append((si, r, r))
cdict['green'].append((si, g, g))
cdict['blue'].append((si, b, b))
cdict['alpha'].append((si, a, a))
newcmap = cm.colors.LinearSegmentedColormap(name, cdict)
plt.register_cmap(cmap=newcmap)
return newcmap
#from matplotlib import rc,rcParams
#from pylab import *
```
```python
#activate latex text rendering
#rc('text', usetex=True)
#rc('axes', linewidth=2)
#rc('font', weight='bold')
#rcParams['text.latex.preamble'] = [r'\usepackage{sfmath} \boldmath']
xvec = np.arange(-33.,33.)*5./40
yvec = np.arange(-35.,35.)*5./40
X,Y = np.meshgrid(xvec, xvec) ##Some plotting params
W=wigner(final_state,xvec,xvec);
orig_cmap = cm.seismic
shifted_cmap1 = shiftedColorMap(orig_cmap, midpoint=1-(W.max()/(W.max()-W.min())), name='shifted')
#m = cm.ScalarMappable(cmap=shifted_cmap1)
#m.set_array([-1/math.pi,1/math.pi])
#m.set_clim(vmin=-1/math.pi,vmax=1/math.pi)
#m.set_array([-1,1])
#m.set_clim(vmin=-1,vmax=1)
fig = plt.figure(figsize=(8,5))
# `ax` is a 3D-aware axis instance, because of the projection='3d' keyword argument to add_subplot
ax1 = fig.add_subplot(1,1,1, projection='3d')
p = ax1.plot_surface(X, Y, W, rstride=1,cstride=1, cmap=shifted_cmap1, linewidth=0.5)
cb = fig.colorbar(p, shrink = 0.5)
cb.ax.tick_params(labelsize=12)
ax1.contour(X, Y, W, cmap=shifted_cmap1, linestyles="solid", offset=-0.3)
ax1.set_xlim([-4.5,4.5])
ax1.set_ylim([-4.5,4.5])
ax1.view_init(15, -40)
ax1.contour(X, Y, W,[0], zdir='y', cmap=cm.bwr, offset=4.5)
ax1.contour(X, Y, W,[0], zdir='x', cmap=cm.RdBu, offset=-4.5)
#plt.axis('off')
ax1.grid(b=False)
#plt.xlabel('P',fontsize=16,fontweight='heavy')
plt.xlabel('P',fontsize=16,fontweight='heavy')
plt.ylabel('X',fontsize=16,fontweight='heavy')
ax1.xaxis.set_tick_params(labelsize=12)
ax1.yaxis.set_tick_params(labelsize=12)
ax1.zaxis.set_tick_params(labelsize=12)
fig2 = plt.figure(figsize=(8,4))
ax1=fig2.add_subplot(1,2,1)
ax1.bar(range(N_dim),final_state.diag())
lim1 = ax1.set_xlim([-.5, 10.5])
lim1y = ax1.set_ylim([0,0.6])
plt.ylabel('Probability',fontsize='large')
plt.xlabel('Photon Number',fontsize='large')
plt.title('Approximate SSV')
ax2=fig2.add_subplot(1,2,2)
ax2.bar(range(N_dim),ssv.diag())
lim2 = ax2.set_xlim([-.5, 10.5])
lim2y = ax2.set_ylim([0,0.6])
plt.xlabel('Photon Number',fontsize='large')
plt.title('Ideal SSV')
#plt.subplots_adjust(wspace=0)
plt.show()
```
```python
'''Now plot the density matrix elements of the ideal state with that of
the approximate state resulting from photon-catalysis'''
s=np.abs((ssv.full()))
temp=np.delete(s,np.s_[10:],0)
state=np.delete(temp,np.s_[10:],1)
s2=np.abs((final_state.full()))
temp1=np.delete(s2,np.s_[10:],0)
state2=np.delete(temp1,np.s_[10:],1)
#plt.imshow(state)
fig = plt.figure(figsize=(6,4))
plt.pcolormesh(state, vmin=0.0, vmax=0.7)
cbar=plt.colorbar()
cbar.ax.tick_params(labelsize=14)
plt.tick_params(labelsize=14)
plt.show()
#plt.imshow(state2)
fig = plt.figure(figsize=(6,4))
plt.pcolormesh(state2, vmin=0.0, vmax=0.7)
cbar=plt.colorbar()
cbar.ax.tick_params(labelsize=14)
plt.tick_params(labelsize=14)
plt.show()
fig = plt.figure(figsize=(6,4))
plt.pcolormesh(np.abs(state2-state), vmin=0.0, vmax=0.7)
cbar=plt.colorbar()
cbar.ax.tick_params(labelsize=14)
plt.tick_params(labelsize=14)
plt.show()
```
```python
s=np.abs((ssv.full()))
temp=np.delete(s,np.s_[10:],0)
state=np.delete(temp,np.s_[10:],1)
#plt.imshow(state)
plt.pcolor(state, vmin=0.0, vmax=0.6)
plt.colorbar()
plt.show()
```
```python
s2=np.abs((final_state.full()))
temp1=np.delete(s2,np.s_[10:],0)
state2=np.delete(temp1,np.s_[10:],1)
#plt.imshow(state2)
plt.pcolor(state2, vmin=0.0, vmax=0.6)
plt.colorbar()
plt.show()
```
```python
plt.pcolor((np.abs(state2-state)), vmin=0.0, vmax=0.6)
#plt.pcolor(X, Y, f(data), cmap=cm, vmin=0.0, vmax=0.5)
plt.colorbar()
plt.show()
```
```python
```
|
1170618026cc73784e8a273103932904c2b42b30
| 758,674 |
ipynb
|
Jupyter Notebook
|
Four_step_Catalysis.ipynb
|
me3nq/GKP_photon_catalysis
|
9041bb57f758b5b52ffd5ac476131533c3c3060e
|
[
"Apache-2.0"
] | 4 |
2019-08-26T21:18:56.000Z
|
2022-03-24T04:10:35.000Z
|
Four_step_Catalysis.ipynb
|
me3nq/GKP_photon_catalysis
|
9041bb57f758b5b52ffd5ac476131533c3c3060e
|
[
"Apache-2.0"
] | null | null | null |
Four_step_Catalysis.ipynb
|
me3nq/GKP_photon_catalysis
|
9041bb57f758b5b52ffd5ac476131533c3c3060e
|
[
"Apache-2.0"
] | null | null | null | 807.959531 | 83,652 | 0.948584 | true | 6,125 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.890294 | 0.774583 | 0.689607 |
__label__eng_Latn
| 0.542226 | 0.44052 |
```python
from Arbie.Variables.pool import Pool
from Arbie import Token
```
```python
size = 100
# Setup tokens
dai = Token('dai')
eth = Token('eth')
pool1 = Pool([dai, eth],[400*size, 1*size],[0.51, 0.49], fee=0.003)
print(pool1.spot_price(dai, eth))
pool1
```
384.3137254901961
Amm(
Tokens: [Token(Name: dai, Address: 0x6850A5d7c21A8449886130A946d14d3f37Cc18AE), Token(Name: eth, Address: 0x704cE91D485E70d9a886e0Ebd717F659215a2436)],
Balances: [Balance(Token: Token(Name: dai, Address: 0x6850A5d7c21A8449886130A946d14d3f37Cc18AE), Value: 40000), Balance(Token: Token(Name: eth, Address: 0x704cE91D485E70d9a886e0Ebd717F659215a2436), Value: 100)],
Weights: [Balance(Token: Token(Name: dai, Address: 0x6850A5d7c21A8449886130A946d14d3f37Cc18AE), Value: 0.51), Balance(Token: Token(Name: eth, Address: 0x704cE91D485E70d9a886e0Ebd717F659215a2436), Value: 0.49)],
Fee: 0.003
Address: 0xf530E737266e350882D25DCf228e1e075b8F60d5)
```python
pool2 = Pool([dai, eth],[410*size, 1*size],[0.51, 0.49], fee=0.005)
print(pool2.spot_price(dai, eth))
pool2
```
393.92156862745094
Amm(
Tokens: [Token(Name: dai, Address: 0x6850A5d7c21A8449886130A946d14d3f37Cc18AE), Token(Name: eth, Address: 0x704cE91D485E70d9a886e0Ebd717F659215a2436)],
Balances: [Balance(Token: Token(Name: dai, Address: 0x6850A5d7c21A8449886130A946d14d3f37Cc18AE), Value: 41000), Balance(Token: Token(Name: eth, Address: 0x704cE91D485E70d9a886e0Ebd717F659215a2436), Value: 100)],
Weights: [Balance(Token: Token(Name: dai, Address: 0x6850A5d7c21A8449886130A946d14d3f37Cc18AE), Value: 0.51), Balance(Token: Token(Name: eth, Address: 0x704cE91D485E70d9a886e0Ebd717F659215a2436), Value: 0.49)],
Fee: 0.005
Address: 0x0013B7cc84C81E14821c5f62a1801549B714A1C2)
```python
# We can see that pool2 has a higher price for eth than pool1. If we have dai we can buy eth from pool1 and sell it to pool2.
# The only question is how much should we buy and sell?
```
```python
from sympy import *
from sympy.plotting import plot
init_printing()
```
```python
# We can plot our return function
x = symbols('x')
expr = pool1.out_given_in_expr(dai, eth)
plot(expr, (x, 0, 50000))
expr = pool1.out_given_in_expr(eth, dai)
plot(expr, (x, 0, 200))
```
```python
# How can we find out whether there is an arbitrage opportunity between pool1 and pool2?
from Arbie.Actions.arbitrage import arbitrage_expr, arbitrage_diff_expr, TradeOpertunity
trade = TradeOpertunity([pool1, pool2], dai, eth)
arb_expr = arbitrage_expr(trade)
arb_expr
```
```python
# If we plot it we can clearly see that there seems to be some profit to be made!
plot(arb_expr, (x, 0, 400))
```
```python
darb_expr = arbitrage_diff_expr(trade)
darb_expr
```
```python
from Arbie.Actions.arbitrage import find_arbitrage
find_arbitrage(trade)
```
Balance(Token: Token(Name: dai, Address: 0x6850A5d7c21A8449886130A946d14d3f37Cc18AE), Value: 165.009948076539)
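As a quick sanity check, we can substitute the returned amount back into the profit expression `arb_expr` and its derivative `darb_expr` from the cells above. This is only a sketch: it assumes both are plain SymPy expressions in the symbol `x`, which is what the plots above suggest.

```python
# Sanity-check sketch (assumes arb_expr and darb_expr are SymPy expressions in x only):
# evaluate the profit and its slope at the amount returned by find_arbitrage.
amount = 165.009948076539
print(float(arb_expr.subs(x, amount)))   # profit at the optimum, expected to be positive
print(float(darb_expr.subs(x, amount)))  # slope at the optimum, expected to be close to zero
```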
```python
# If we have two pools that don't have an arbitrage opportunity, what happens then?
bad_trade = TradeOpertunity([pool2, pool1], dai, eth)
bad_expr = arbitrage_expr(bad_trade)
plot(bad_expr, (x, 0, 400))
```
```python
```
|
cf9a67c743da29e45894eea37afbaf791a777503
| 127,020 |
ipynb
|
Jupyter Notebook
|
examples/Arbie.ipynb
|
owodunni/arbie-examples
|
f6719812f5bd1f9fa6e85f24b71e0b2f3dae53eb
|
[
"MIT"
] | null | null | null |
examples/Arbie.ipynb
|
owodunni/arbie-examples
|
f6719812f5bd1f9fa6e85f24b71e0b2f3dae53eb
|
[
"MIT"
] | null | null | null |
examples/Arbie.ipynb
|
owodunni/arbie-examples
|
f6719812f5bd1f9fa6e85f24b71e0b2f3dae53eb
|
[
"MIT"
] | null | null | null | 329.067358 | 34,644 | 0.911809 | true | 1,194 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.812867 | 0.731059 | 0.594254 |
__label__eng_Latn
| 0.318146 | 0.21898 |
# Item III
*Solving a very (un)known problem*
1. Implement a function that finds the two roots of the quadratic equation $a x^2 + b x + c = 0$ given $a$,$b$, and $c$, i.e. implement $x_{\pm} = \frac{-b \pm \sqrt{b^2-4ac}}{2a}$.
2. What are the roots of $2x^2+10^9x+1 = 0$? How many digits of significance can you get for the two roots? Is there any problem?
3. Design a code that finds the correct roots for $x^2+Bx+C=0$, given $B \gg C$ to at least 2 digits of significance.
4. Solve the previous equation using this new code and find the new roots. *I hope it works!*
5. From the well-known solution $x_{\pm}$ of the quadratic equation design an algorithm that approximates the two roots of $x^2+Bx+C = 0$, given $B \gg C$. Hint: *A Taylor expansion may work*.
---
## Part 1
```python
def solve_quadratic(a,b,c):
assert(a!=0)
disc = (b**2-4*a*c+0j)**0.5
x1 = (-b-disc)/(2*a)
x2 = (-b+disc)/(2*a)
return (x1,x2)
```
## Part 2
The analytical solution is:
$$
x_{\pm} = \frac{-10^{9}\pm\sqrt{10^{18}-8}}{4}
$$
```python
# if we use the solver
x1,x2 = solve_quadratic(2,1e9,1)
print("x-:",x1)
print("x+:",x2)
```
x-: (-500000000+0j)
x+: 0j
The approximation given for $x_{-}$ is good: its relative error is tiny, because the approximation $\sqrt{10^{18}-8} \approx 10^{9}$ that the computer makes (through the *absorption* of the much smaller $-8$) barely changes the result, so we get essentially as many digits of significance as doubles allow.
For $x_{+}$, however, the error is as large as the root itself: we obtain $0$ instead of approximately $-10^{-9}$. The subtraction $-b + \sqrt{b^2-4ac}$ suffers *catastrophic cancellation*, because the two quantities are identical in floating point, so no significant digits survive.
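A minimal numeric sketch of this absorption and cancellation:

```python
# Sketch of absorption followed by catastrophic cancellation for 2x^2 + 1e9 x + 1 = 0
a, b, c = 2, 1e9, 1
print(b**2 - 4*a*c == b**2)        # True: the -8 is absorbed by 1e18
disc = (b**2 - 4*a*c)**0.5
print(disc == b)                   # True: the computed square root is exactly 1e9
print((-b + disc) / (2*a))         # 0.0: every significant digit cancels
```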
## Part 3
We make the following change to the equation:
\begin{align}
x_{\pm} = \frac{-b \pm \sqrt{b^2-4ac}}{2a} &= \frac{-b \pm \sqrt{b^2-4ac}}{2a} \cdot \frac{-b \mp \sqrt{b^2-4ac}}{-b \mp \sqrt{b^2-4ac}}
\\ &= \frac{4ac}{2a \left(-b \mp \sqrt{b^2-4ac}\right)}
\\ &= \frac{-2c}{\left(b \pm \sqrt{b^2-4ac}\right)}
\end{align}
we can use it for $x_{+}$ if $b>0$ or $x_{-}$ otherwise.
```python
def solve_quadratic_2(a,b,c):
assert(a!=0)
disc = (b**2-4*a*c+0j)**0.5
if b>0:
x1 = (-b-disc)/(2*a)
x2 = -2*c/(b+disc)
else:
x1 = -2*c/(b-disc)
x2 = (-b+disc)/(2*a)
return (x1,x2)
```
## Part 4
We solve using the new method and see that it works.
```python
x1,x2 = solve_quadratic_2(2,1e9,1)
print("x-:",x1)
print("x+:",x2)
```
x-: (-500000000+0j)
x+: (-1e-09+0j)
## Part 5
If $C$ is small, we can approximate the solution:
$$
x = x_0 + C x_1 + C^2 x_2 + ...
$$
Then
$$
\begin{align}
x^2 + B x + C &= 0
\\ (x_0^2 + C (2x_0x_1) + C^2 (x_1^2 + 2x_0x_2) + \dots) + (Bx_0 + C B x_1 + C^2 B x_2 + \dots) + C &= 0
\end{align}
$$
And we have the following equations:
\begin{align}
O(C^0) : &\qquad x_0^2 + B x_0 = 0
\\ O(C^1) : &\qquad 2x_0x_1 + B x_1 +1 = 0
\\ O(C^2) : &\qquad x_1^2 + 2 x_0x_2 + B x_2 = 0
\end{align}
which have the following solutions:
$$
(x_0,x_1,x_2)_1 = \left(0,\frac{-1}{B},\frac{-1}{B^3}\right)
$$
$$
(x_0,x_1,x_2)_2 = \left(-B,\frac{1}{B},\frac{1}{B^3}\right)
$$
which result in the following approximations for $x$:
$$
\begin{align}
x_{1} &= 0 + C \frac{-1}{B} + C^2 \frac{-1}{B^3} + \cdots
\\ x_{2} &= -B + C \frac{1}{B} + C^2 \frac{1}{B^3} + \cdots
\end{align}
$$
```python
def solve_quadratic_3(a,b,c):
# In case a!=1 we just have to scale the equation:
b /= a
c /= a
# Approximations:
x1 = 0+c*(-1/b)+c**2*(-1/b**3)
x2 = -b+c*(1/b)+c**2*(1/b**3)
return (x1,x2)
```
```python
x1,x2 = solve_quadratic_3(2,1e9,1)
print("x1:",x1)
print("x2:",x2)
```
x1: -1e-09
x2: -500000000.0
```python
```
|
7252c1c30da54c99ea864d11b3892cd24ec394e4
| 6,539 |
ipynb
|
Jupyter Notebook
|
t1_questions/item_03_alpha.ipynb
|
autopawn/cc5-works
|
63775574c82da85ed0e750a4d6978a071096f6e7
|
[
"MIT"
] | null | null | null |
t1_questions/item_03_alpha.ipynb
|
autopawn/cc5-works
|
63775574c82da85ed0e750a4d6978a071096f6e7
|
[
"MIT"
] | null | null | null |
t1_questions/item_03_alpha.ipynb
|
autopawn/cc5-works
|
63775574c82da85ed0e750a4d6978a071096f6e7
|
[
"MIT"
] | null | null | null | 26.051793 | 268 | 0.46995 | true | 1,471 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.924142 | 0.901921 | 0.833503 |
__label__eng_Latn
| 0.881271 | 0.774839 |
# Realization of Recursive Filters
*This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing. Please direct questions and suggestions to [Sascha.Spors@uni-rostock.de](mailto:Sascha.Spors@uni-rostock.de).*
## Introduction
Computing the output $y[k] = \mathcal{H} \{ x[k] \}$ of a [linear time-invariant](https://en.wikipedia.org/wiki/LTI_system_theory) (LTI) system is of central importance in digital signal processing. This is often referred to as [*filtering*](https://en.wikipedia.org/wiki/Digital_filter) of the input signal $x[k]$. We already have discussed the realization of [non-recursive filters](../nonrecursive_filters/introduction.ipynb). This section focuses on the realization of recursive filters.
### Recursive Filters
Linear difference equations with constant coefficients represent linear time-invariant (LTI) systems
\begin{equation}
\sum_{n=0}^{N} a_n \; y[k-n] = \sum_{m=0}^{M} b_m \; x[k-m]
\end{equation}
where $y[k] = \mathcal{H} \{ x[k] \}$ denotes the response of the system to the input signal $x[k]$, $N$ the order, $a_n$ and $b_m$ constant coefficients, respectively. Above equation can be rearranged with respect to the output signal $y[k]$ by extracting the first element ($n=0$) of the left-hand sum
\begin{equation}
y[k] = \frac{1}{a_0} \left( \sum_{m=0}^{M} b_m \; x[k-m] - \sum_{n=1}^{N} a_n \; y[k-n] \right)
\end{equation}
It is evident that the output signal $y[k]$ at time instant $k$ is given as a linear combination of past output samples $y[k-n]$ superimposed by a linear combination of the actual $x[k]$ and past $x[k-m]$ input samples. Hence, the actual output $y[k]$ is composed from the two contributions
1. a [non-recursive part](../nonrecursive_filters/introduction.ipynb#Non-Recursive-Filters), and
2. a recursive part where a linear combination of past output samples is fed back.
The impulse response of the system is given as the response of the system to a Dirac impulse at the input $h[k] = \mathcal{H} \{ \delta[k] \}$. Using above result and the properties of the discrete Dirac impulse we get
\begin{equation}
h[k] = \frac{1}{a_0} \left( b_k - \sum_{n=1}^{N} a_n \; h[k-n] \right)
\end{equation}
Due to the feedback, the impulse response will in general be of infinite length. The impulse response is termed as [infinite impulse response](https://en.wikipedia.org/wiki/Infinite_impulse_response) (IIR) and the system as recursive system/filter.
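The following short sketch evaluates this recursion directly for an example set of coefficients (a second-order Butterworth lowpass, chosen only for illustration) and compares the result with `scipy.signal.lfilter` applied to a unit impulse.

```python
import numpy as np
import scipy.signal as sig

# evaluate h[k] = (b_k - sum_{n=1}^{N} a_n h[k-n]) / a_0 directly
b, a = sig.butter(2, 0.2, 'low')  # example coefficients of a recursive filter
K = 20                            # number of impulse response samples
h = np.zeros(K)
for k in range(K):
    bk = b[k] if k < len(b) else 0.0
    h[k] = (bk - sum(a[n] * h[k-n] for n in range(1, len(a)) if k-n >= 0)) / a[0]

x = np.zeros(K)
x[0] = 1  # discrete Dirac impulse
print(np.allclose(h, sig.lfilter(b, a, x)))  # True
```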
### Transfer Function
Applying a $z$-transform to the left- and right-hand side of the difference equation and rearranging terms yields the transfer function $H(z)$ of the system
\begin{equation}
H(z) = \frac{Y(z)}{X(z)} = \frac{\sum_{m=0}^{M} b_m \; z^{-m}}{\sum_{n=0}^{N} a_n \; z^{-n}}
\end{equation}
The transfer function is given as a [rational function](https://en.wikipedia.org/wiki/Rational_function) in $z$. The polynominals of the numerator and denominator can be expressed alternatively by their roots as
\begin{equation}
H(z) = \frac{b_M}{a_N} \cdot \frac{\prod_{\mu=1}^{P} (z - z_{0\mu})^{m_\mu}}{\prod_{\nu=1}^{Q} (z - z_{\infty\nu})^{n_\nu}}
\end{equation}
where $z_{0\mu}$ and $z_{\infty\nu}$ denote the $\mu$-th zero and $\nu$-th pole of degree $m_\mu$ and $n_\nu$ of $H(z)$, respectively. The total number of zeros and poles is denoted by $P$ and $Q$. Due to the symmetries of the $z$-transform, the transfer function of a real-valued system $h[k] \in \mathbb{R}$ exhibits complex conjugate symmetry
\begin{equation}
H(z) = H^*(z^*)
\end{equation}
Poles and zeros are either real valued or complex conjugate pairs for real-valued systems ($b_m\in\mathbb{R}$, $a_n\in\mathbb{R}$). For the poles of a causal and stable system $H(z)$ the following condition has to hold
\begin{equation}
\max_{\nu} | z_{\infty\nu} | < 1
\end{equation}
Hence, all poles have to be located inside the unit circle $|z| = 1$. Amongst others, this implies that $M \leq N$.
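As a quick numerical illustration, the stability condition can be checked by computing the roots of the denominator polynomial. The sketch below reuses the coefficients of the Butterworth lowpass from the example that follows.

```python
import numpy as np
import scipy.signal as sig

b, a = sig.butter(5, 0.2, 'low')   # coefficients of the example filter below
poles = np.roots(a)                # poles are the roots of the denominator polynomial
print(np.max(np.abs(poles)))       # largest pole magnitude
print(np.max(np.abs(poles)) < 1)   # True, hence the filter is stable
```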
### Example
The following example shows the pole/zero diagram, the magnitude and phase response, and impulse response of a recursive filter with so-called [Butterworth](https://en.wikipedia.org/wiki/Butterworth_filter) lowpass characteristic.
```python
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.markers import MarkerStyle
from matplotlib.patches import Circle
import scipy.signal as sig
N = 5 # order of recursive filter
L = 128 # number of computed samples
def zplane(z, p, title='Poles and Zeros'):
"Plots zero and pole locations in the complex z-plane"
ax = plt.gca()
ax.plot(np.real(z), np.imag(z), 'bo', fillstyle='none', ms = 10)
ax.plot(np.real(p), np.imag(p), 'rx', fillstyle='none', ms = 10)
unit_circle = Circle((0,0), radius=1, fill=False,
color='black', ls='solid', alpha=0.9)
ax.add_patch(unit_circle)
ax.axvline(0, color='0.7')
ax.axhline(0, color='0.7')
plt.title(title)
plt.xlabel(r'Re{$z$}')
plt.ylabel(r'Im{$z$}')
plt.axis('equal')
plt.xlim((-2, 2))
plt.ylim((-2, 2))
plt.grid()
# compute coefficients of recursive filter
b, a = sig.butter(N, 0.2, 'low')
# compute transfer function
Om, H = sig.freqz(b, a)
# compute impulse response
k = np.arange(L)
x = np.where(k==0, 1.0, 0)
h = sig.lfilter(b, a, x)
# plot pole/zero-diagram
plt.figure(figsize=(5, 5))
zplane(np.roots(b), np.roots(a))
# plot magnitude response
plt.figure(figsize=(10, 3))
plt.plot(Om, 20 * np.log10(abs(H)))
plt.xlabel(r'$\Omega$')
plt.ylabel(r'$|H(e^{j \Omega})|$ in dB')
plt.grid()
plt.title('Magnitude response')
# plot phase response
plt.figure(figsize=(10, 3))
plt.plot(Om, np.unwrap(np.angle(H)))
plt.xlabel(r'$\Omega$')
plt.ylabel(r'$\varphi (\Omega)$ in rad')
plt.grid()
plt.title('Phase response')
# plot impulse response (magnitude)
plt.figure(figsize=(10, 3))
plt.stem(20*np.log10(np.abs(np.squeeze(h))))
plt.xlabel(r'$k$')
plt.ylabel(r'$|h[k]|$ in dB')
plt.grid()
plt.title('Impulse response (magnitude)');
```
**Exercise**
* Does the system have an IIR?
* What happens if you increase the order `N` of the filter?
Solution: It can be concluded from the last illustration, showing the magnitude of the impulse response $|h[k]|$ on a logarithmic scale, that the magnitude of the impulse response decays continuously for increasing $k$ but does not become zero at some point. This behavior continues with increasing $k$ as can be observed when increasing the number `L` of computed samples in above example. The magnitude response $|H(e^{j \Omega})|$ of the filter decays faster with increasing order `N` of the filter.
**Copyright**
This notebook is provided as [Open Educational Resource](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Sascha Spors, Digital Signal Processing - Lecture notes featuring computational examples, 2016-2018*.
|
6b4a04c0a3320ccc9e8c3d53d7e6089f121c3ee4
| 76,476 |
ipynb
|
Jupyter Notebook
|
recursive_filters/introduction.ipynb
|
hustcxl/digital-signal-processing-lecture
|
1d6d9af39ed8cc2fc768a9af523cfa97ec4123f8
|
[
"MIT"
] | 1 |
2020-11-04T03:40:49.000Z
|
2020-11-04T03:40:49.000Z
|
recursive_filters/introduction.ipynb
|
cphysics/signal
|
2e47bb4f0cf368418ee9a1108f0cea24a5dc812d
|
[
"MIT"
] | null | null | null |
recursive_filters/introduction.ipynb
|
cphysics/signal
|
2e47bb4f0cf368418ee9a1108f0cea24a5dc812d
|
[
"MIT"
] | null | null | null | 287.503759 | 17,212 | 0.916994 | true | 2,037 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.7773 | 0.874077 | 0.67942 |
__label__eng_Latn
| 0.957343 | 0.416852 |
```python
from simba.core import linear_coupling_operator_from_k_matrix, hamiltonian_from_r_matrix, make_complex_ladder_state
from sympy import Matrix, init_printing
from IPython.display import display
init_printing(use_latex='mathjax')
```
```python
r_matrix = Matrix([[-2, 1], [1, 2]])
state = make_complex_ladder_state(1)
(state.H, r_matrix, state)
```
$\displaystyle \left( \left[\begin{matrix}\overline{a_{1}} & a_{1}\end{matrix}\right], \ \left[\begin{matrix}-2 & 1\\1 & 2\end{matrix}\right], \ \left[\begin{matrix}a_{1}\\\overline{a_{1}}\end{matrix}\right]\right)$
```python
hamiltonian_from_r_matrix(r_matrix)
```
$\displaystyle \left(a_{1} - 2 \overline{a_{1}}\right) a_{1} + \left(2 a_{1} + \overline{a_{1}}\right) \overline{a_{1}}$
```python
r_matrix = Matrix([[-2, 0, 3, 1], [0, 3, 0, 2], [0, 3, 3, 0], [0, 0, 5, 1]])
state = make_complex_ladder_state(2)
(state.H, r_matrix, state)
```
$\displaystyle \left( \left[\begin{matrix}\overline{a_{1}} & a_{1} & \overline{a_{2}} & a_{2}\end{matrix}\right], \ \left[\begin{matrix}-2 & 0 & 3 & 1\\0 & 3 & 0 & 2\\0 & 3 & 3 & 0\\0 & 0 & 5 & 1\end{matrix}\right], \ \left[\begin{matrix}a_{1}\\\overline{a_{1}}\\a_{2}\\\overline{a_{2}}\end{matrix}\right]\right)$
```python
hamiltonian_from_r_matrix(r_matrix)
```
$\displaystyle \left(a_{1} - 2 \overline{a_{1}}\right) a_{1} + \left(2 a_{1} + \overline{a_{1}}\right) \overline{a_{1}}$
```python
from simba.core import transfer_function_to_state_space, SLH, concat
from sympy import symbols
s, a, b = symbols('s a b')
tf = (s - 2) / (s + 2)
system = transfer_function_to_state_space(tf).extended_to_quantum().to_physically_realisable()
g_a, g_b = system.to_slh(a), system.to_slh(b)
g_ab = concat(g_a, g_b)
g_ab
```
$$\displaystyle \left(I_{4\times4}, \left[\begin{matrix}0 & 2 & 0 & 0\\2 & 0 & 0 & 0\\0 & 0 & 0 & 2\\0 & 0 & 2 & 0\end{matrix}\right] \left[\begin{matrix}a_{1}\\\overline{a_{1}}\\b_{1}\\\overline{b_{1}}\end{matrix}\right], \frac{1}{2} \left[\begin{matrix}\overline{a_{1}} & a_{1} & \overline{b_{1}} & b_{1}\end{matrix}\right] \left[\begin{matrix}0 & 0 & 0 & 0\\0 & 0 & 0 & 0\\0 & 0 & 0 & 0\\0 & 0 & 0 & 0\end{matrix}\right] \left[\begin{matrix}a_{1}\\\overline{a_{1}}\\b_{1}\\\overline{b_{1}}\end{matrix}\right]\right)$$
```python
g_a
```
$$\displaystyle \left(I_{2\times2}, \left[\begin{matrix}0 & 2\\2 & 0\end{matrix}\right] \left[\begin{matrix}a_{1}\\\overline{a_{1}}\end{matrix}\right], \frac{1}{2} \left[\begin{matrix}\overline{a_{1}} & a_{1}\end{matrix}\right] \left[\begin{matrix}0 & 0\\0 & 0\end{matrix}\right] \left[\begin{matrix}a_{1}\\\overline{a_{1}}\end{matrix}\right]\right)$$
|
217f6d6e573626adf89a56fb0fe416c31259031b
| 6,566 |
ipynb
|
Jupyter Notebook
|
notebooks/misc.ipynb
|
joebentley/simba
|
dd1b7bc6d22ad96566898dd1851cfa210462cb00
|
[
"MIT"
] | 8 |
2020-03-19T10:59:25.000Z
|
2022-01-22T22:33:07.000Z
|
notebooks/misc.ipynb
|
joebentley/simba
|
dd1b7bc6d22ad96566898dd1851cfa210462cb00
|
[
"MIT"
] | 1 |
2022-01-22T11:24:45.000Z
|
2022-01-22T11:24:45.000Z
|
notebooks/misc.ipynb
|
joebentley/simba
|
dd1b7bc6d22ad96566898dd1851cfa210462cb00
|
[
"MIT"
] | 1 |
2020-03-19T13:27:41.000Z
|
2020-03-19T13:27:41.000Z
| 29.981735 | 584 | 0.442735 | true | 1,068 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.909907 | 0.731059 | 0.665195 |
__label__eng_Latn
| 0.075422 | 0.383803 |
<center>
<h1> ILI285 - Computación Científica I / INF285 - Computación Científica </h1>
<h2> Polynomial Interpolation: Vandermonde, Lagrange, Newton, Chebyshev </h2>
<h2> <a href="#acknowledgements"> [S]cientific [C]omputing [T]eam </a> </h2>
<h2> Version: 1.27</h2>
</center>
## Table of Contents
* [Introduction](#intro)
* [Vandermonde Matrix](#vander)
* [Lagrange Interpolation](#lagrange)
* [Runge Phenomenon](#runge)
* [Newton's Divided Difference](#DDN)
* [Interpolation Error](#Error)
* [Chebyshev Interpolation](#cheby)
* [Python Modules and Functions](#py)
* [Acknowledgements](#acknowledgements)
```python
import numpy as np
import matplotlib.pyplot as plt
import sympy as sp
from functools import reduce
import matplotlib as mpl
mpl.rcParams['font.size'] = 14
mpl.rcParams['axes.labelsize'] = 20
mpl.rcParams['xtick.labelsize'] = 14
mpl.rcParams['ytick.labelsize'] = 14
%matplotlib inline
from ipywidgets import interact, fixed, IntSlider
```
<div id='intro' />
## Introduction
Hello! In this notebook we will learn how to interpolate 1D data with polynomials. A polynomial interpolation consists of finding a polynomial that fits a discrete set of known data points, allowing us to construct new data points within the range of the data. Formally, a polynomial $P(x)$ interpolates the data $(x_1,y_1),...,(x_n,y_n)$ if $P(x_i)=y_i$ for all $i$ in $1,...,n$.
```python
def Y(D, xi):
# Function that evaluates the xi's points in the polynomial
if D['M']=='Vandermonde':
P = lambda i: i**np.arange(len(D['P']))
elif D['M']=='Lagrange':
P = lambda i: [np.prod(i - np.delete(D['x'],j)) for j in range(len(D['x']))]
elif D['M']=='Newton':
P = lambda i: np.append([1],[np.prod(i-D['x'][:j]) for j in range(1,len(D['P']))])
return [np.dot(D['P'], P(i)) for i in xi]
def Interpolation_Plot(D,ylim=None):
# Function that shows the data points and the function that interpolates them.
xi = np.linspace(min(D['x']),max(D['x']),1000)
yi = Y(D,xi)
plt.figure(figsize=(8,8))
plt.plot(D['x'],D['y'],'ro',label='Interpolation points')
plt.plot(xi,yi,'b-',label='$P(x)$')
plt.xlim(min(xi)-0.5, max(xi)+0.5)
if ylim:
plt.ylim(ylim[0], ylim[1])
else:
plt.ylim(min(yi)-0.5, max(yi)+0.5)
plt.grid(True)
plt.legend(loc='best')
plt.xlabel('$x$')
#plt.ylabel('$P(x)$')
plt.show()
```
<div id='vander' />
## Vandermonde Matrix
First, we are going to learn the Vandermonde matrix method. This is an $m \times m$ matrix (with $m$ being the number of known data points) whose rows contain the terms of a geometric progression. It allows us to construct a system of linear equations whose solution gives the coefficients of the polynomial that interpolates our data.
Example:
Given the set of known data points: $(x_1,y_1),(x_2,y_2),(x_3,y_3)$
Our system of linear equations will be:
$$ \begin{bmatrix}
1 & x_1 & x_1^2 \\[0.3em]
1 & x_2 & x_2^2 \\[0.3em]
1 & x_3 & x_3^2 \end{bmatrix}
\begin{bmatrix}
a_1 \\[0.3em]
a_2 \\[0.3em]
a_3 \end{bmatrix} =
\begin{bmatrix}
y_1 \\[0.3em]
y_2 \\[0.3em]
y_3 \end{bmatrix}$$
And solving it we will find the coefficients $a_1,a_2,a_3$ that we need to construct the polynomial $P(x)=a_1+a_2x+a_3x^2$ that interpolates our data.
```python
def Vandermonde(x, y, show=False):
# We construct the matrix and solve the system of linear equations
A = np.array([xi**np.arange(len(x)) for xi in x])
b = y
xsol = np.linalg.solve(A,b)
# The function shows the data if the flag is true
if show:
print('Data Points: '); print([(x[i],y[i]) for i in range(len(x))])
print('A = '); print(np.array_str(A, precision=2, suppress_small=True))
print("cond(A) = "+str(np.linalg.cond(A)))
print('b = '); print(np.array_str(b, precision=2, suppress_small=True))
print('x = '); print(np.array_str(xsol, precision=2, suppress_small=True))
xS = sp.Symbol('x')
F = np.dot(xS**np.arange(len(x)),xsol)
print('Interpolation Function: ')
print('F(x) = ')
print(F)
# Finally, we return a data structure with our interpolating polynomial
D = {'M':'Vandermonde',
'P':xsol,
'x':x,
'y':y}
return D
```
```python
def show_time_V(epsilon=0):
x = np.array([1.0,2.0,3.0+epsilon,5.0,6.5])
y = np.array([2.0,5.0,4.0,6.0,2.0])
D = Vandermonde(x,y,True)
Interpolation_Plot(D,[-4,10])
interact(show_time_V,epsilon=(-1,2,0.1))
```
interactive(children=(FloatSlider(value=0.0, description='epsilon', max=2.0, min=-1.0), Output()), _dom_classe…
<function __main__.show_time_V(epsilon=0)>
<div id='lagrange' />
## Lagrange Interpolation
With this method, we can interpolate data thanks to the Lagrange basis polynomials. Given a set of $n$ data points $(x_1,y_1),...,(x_n,y_n)$, the Lagrange interpolation polynomial is the following:
$$ P(x) = \sum^n_{i=1} y_i\,L_i(x),$$
where $L_i(x)$ are the Lagrange basis polynomials:
$$ L_i(x) = \prod^n_{j=1,j \neq i} \frac{x-x_j}{x_i-x_j} = \frac{x-x_1}{x_i-x_1} \cdot ... \cdot \frac{x-x_{i-1}}{x_i-x_{i-1}} \cdot \frac{x-x_{i+1}}{x_i-x_{i+1}} \cdot ... \cdot \frac{x-x_n}{x_i-x_n}$$
or simply $L_i(x)=\dfrac{l_i(x)}{l_i(x_i)}$, where $l_i(x)=\displaystyle{\prod^n_{j=1,j \neq i} (x-x_j)}$.
The most important property of these basis polynomials is:
$$ L_{j \neq i}(x_i) = 0 $$
$$ L_i(x_i) = 1 $$
So, we ensure that $P(x_i) = y_i$, i.e. the polynomial indeed interpolates the data.
```python
def Lagrange(x, y, show=False):
# We calculate the li's
p = np.array([y[i]/np.prod(x[i] - np.delete(x,i)) for i in range(len(x))])
# The function shows the data if the flag is true
if show:
print('Data Points: '); print([(x[i],y[i]) for i in range(len(x))])
xS = sp.Symbol('x')
L = np.dot(np.array([np.prod(xS - np.delete(x,i))/np.prod(x[i] - np.delete(x,i)) for i in range(len(x))]),y)
print('Interpolation Function: ');
print(L)
# Finally, we return a data structure with our interpolating polynomial
D = {'M':'Lagrange',
'P':p,
'x':x,
'y':y}
return D
```
```python
def show_time_L(epsilon=0):
x = np.array([1.0,2.0,3.0+epsilon,4.0,5.0,7.0,6.0])
y = np.array([2.0,5.0,4.0,6.0,7.0,3.0,8.0])
D = Lagrange(x,y,True)
Interpolation_Plot(D,[0,10])
interact(show_time_L,epsilon=(-1,1,0.1))
```
interactive(children=(FloatSlider(value=0.0, description='epsilon', max=1.0, min=-1.0), Output()), _dom_classe…
<function __main__.show_time_L(epsilon=0)>
```python
def show_time_Li(i=0, N=7):
x = np.arange(N+1)
y = np.zeros(N+1)
y[i]=1
D = Lagrange(x,y,True)
Interpolation_Plot(D,[-1,2])
i_widget = IntSlider(min=0, max=7, step=1, value=0)
N_widget = IntSlider(min=5, max=20, step=1, value=7)
def update_i_range(*args):
i_widget.max = N_widget.value
N_widget.observe(update_i_range, 'value')
interact(show_time_Li,i=i_widget,N=N_widget)
```
interactive(children=(IntSlider(value=0, description='i', max=7), IntSlider(value=7, description='N', max=20, …
<function __main__.show_time_Li(i=0, N=7)>
Here are some questions about Lagrange interpolation:
- Explain what happens to the interpolating polynomial when you add a new point to the set of points to interpolate. **Answer: every basis polynomial changes, so the whole polynomial has to be recalculated.**
- Why is it not a good idea to use Lagrange interpolation for a set of points that is constantly changing? **A: Because we need to recompute the whole interpolation every time the data change.**
- What is the operation count for obtaining the interpolating polynomial using Lagrange? What happens with the error?
<div id='DDN' />
## Newton's Divided Difference
In this interpolation method we will use divided differences to calculate the coefficients of our interpolation polynomial. Given a set of $n$ data points $(x_1,y_1),...,(x_n,y_n)$, the Newton polynomial is:
$$ P(x) = \sum^n_{i=1} (f[x_1 ... x_i] \cdot \prod^{i-1}_{j=1} (x-x_j)) ,$$
where the empty product is $ \prod^{0}_{j=1} (x-x_j) = 1 $, and:
$$ f[x_i] = y_i $$
$$ f[x_j...x_i] = \frac{f[x_{j+1}...x_i]-f[x_j...x_{i-1}]}{x_i-x_j}$$
```python
def Divided_Differences(x, y):
dd = np.array([y])
for i in range(len(x)-1):
ddi = []
for a in range(len(x)-i-1):
ddi.append((dd[i][a+1]-dd[i][a])/(x[a+i+1]-x[a]))
ddi = np.append(ddi,np.full((len(x)-len(ddi),),0.0))
dd = np.append(dd,[ddi],axis=0)
return np.array(dd)
def Newton(x, y, show=False):
# We calculate the divided differences and store them in a data structure
dd = Divided_Differences(x,y)
# The function shows the data if the flag is true
if show:
print('Data Points: '); print([(x[i],y[i]) for i in range(len(x))])
xS = sp.Symbol('x')
N = np.dot(dd[:,0],np.append([1],[np.prod(xS-x[:i]) for i in range(1,len(dd))]))
print('Interpolation Function: ');
print(N)
# Finally, we return a data structure with our interpolating polynomial
D = {'M':'Newton',
'P':dd[:,0],
'x':x,
'y':y}
return D
```
```python
def show_time_N(epsilon=0):
x = np.array([0.0,2.0,3.0+epsilon,4.0,5.0,6.0])
y = np.array([1.0,3.0,0.0,6.0,8.0,4.0])
D = Newton(x,y,True)
Interpolation_Plot(D)
interact(show_time_N,epsilon=(-1,1,0.1))
```
interactive(children=(FloatSlider(value=0.0, description='epsilon', max=1.0, min=-1.0), Output()), _dom_classe…
<function __main__.show_time_N(epsilon=0)>
Questions about Newton's DD:
- What is the main problem using this method (and Lagrange)? How can you fix it? **A: A problem with polynomial interpolation on equispaced data is the Runge phenomenon, which can be handled by using Chebyshev points.**
- What to do when a new point is added? **A: A benefit of Newton's form is that it is not necessary to recalculate the whole polynomial; only one new divided-difference term has to be added.**
<div id='Error' />
## Polynomial Interpolation Error
The interpolation error is given by:
$$ f(x)-P(x) = \frac{(x-x_1) \cdot (x-x_2) \cdot ... \cdot (x-x_n)}{n!} \cdot f^{(n)}(c) ,$$
where $c$ lies somewhere between the smallest and the largest of the values $x, x_1, \ldots, x_n$.
```python
def Error(f, n, xmin, xmax, method=Lagrange, points=np.linspace, plot_flag=True):
# This function plots f(x), the interpolating polynomial, and the associated error
# points can be np.linspace to equidistant points or Chebyshev to get Chebyshev points
x = points(xmin,xmax,n)
y = f(x)
xe = np.linspace(xmin,xmax,100)
ye = f(xe)
D = method(x,y)
yi = Y(D, xe)
if plot_flag:
plt.figure(figsize=(5,10))
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(15,5), sharey = False)
ax1.plot(xe, ye,'r-', label='f(x)')
ax1.plot(x, y,'ro', label='Interpolation points')
ax1.plot(xe, yi,'b-', label='Interpolation')
ax1.set_xlim(xmin-0.5,xmax+0.5)
ax1.set_ylim(min(yi)-0.5,max(yi)+0.5)
ax1.set_title('Interpolation')
ax1.grid(True)
ax1.set_xlabel('$x$')
ax1.legend(loc='best')
ax2.semilogy(xe, abs(ye-yi),'b-', label='Absolute Error')
ax2.set_xlim(xmin-0.5,xmax+0.5)
ax2.set_title('Absolute Error')
ax2.set_xlabel('$x$')
ax2.grid(True)
#ax2.legend(loc='best')
plt.show()
return max(abs(ye-yi))
```
```python
def test_error_Newton(n=5):
#me = Error(lambda x: np.sin(x)**3, n, 1, 7, Newton)
me = Error(lambda x: (1/(1+12*x**2)), n, -1, 1, Newton)
print("Max Error:", me)
```
```python
interact(test_error_Newton,n=(5,25))
```
interactive(children=(IntSlider(value=5, description='n', max=25, min=5), Output()), _dom_classes=('widget-int…
<function __main__.test_error_Newton(n=5)>
<div id='runge' />
## **Runge's Phenomenon**: It is a problem of oscillation of polynomials at the edges of the interval.
We are interpolating data that are 0 almost everywhere and 1 at the middle point. Notice that when $n$ increases the oscillations increase; the red dots seem to be at 0 everywhere, but that is just an artifact: there must be a 1 at the middle. The oscillations you see near the ends of the interval are the Runge phenomenon.
```python
def Runge(n=9):
x = np.linspace(0,1,n)
y = np.zeros(n)
y[int((n-1.0)/2.)]=1
D = Newton(x,y,False)
Interpolation_Plot(D)
interact(Runge,n=(5,25,2))
```
interactive(children=(IntSlider(value=9, description='n', max=25, min=5, step=2), Output()), _dom_classes=('wi…
<function __main__.Runge(n=9)>
<div id='cheby' />
## Chebyshev Interpolation
With the objective of reducing the error of the polynomial interpolation, we need to find the values of $x_1,x_2,...,x_n$ that minimize the maximum absolute value of $(x-x_1) \cdot (x-x_2) \cdot ... \cdot (x-x_n)$ over the interval.
To choose these values of $-1 \leq x_1,x_2,...,x_n \leq 1$ (to use another interval we just need to do a change of variables) that minimize the error, we will use the roots of the Chebyshev polynomials, also called **Chebyshev nodes** (of the first kind), which are defined by:
$$ x_i = \cos\left(\frac{(2i-1)\pi}{2n}\right), i = 1,...,n $$
```python
def Chebyshev(xmin,xmax,n=5):
# This function calculates the n Chebyshev points and plots or returns them depending on ax
ns = np.arange(1,n+1)
x = np.cos((2*ns-1)*np.pi/(2*n))
y = np.sin((2*ns-1)*np.pi/(2*n))
plt.figure(figsize=(10,5))
plt.ylim(-0.1,1.1)
plt.xlim(-1.1,1.1)
plt.plot(np.cos(np.linspace(0,np.pi)),np.sin(np.linspace(0,np.pi)),'k-')
plt.plot([-2,2],[0,0],'k-')
plt.plot([0,0],[-1,2],'k-')
for i in range(len(y)):
plt.plot([x[i],x[i]],[0,y[i]],'r-')
plt.plot([0,x[i]],[0,y[i]],'r-')
plt.plot(x,[0]*len(x),'bo',label='Chebyshev points')
plt.plot(x,y,'ro')
plt.xlabel('$x$')
plt.title('n = '+str(n))
plt.grid(True)
plt.legend(loc='best')
plt.show()
def Chebyshev_points(xmin,xmax,n):
ns = np.arange(1,n+1)
x = np.cos((2*ns-1)*np.pi/(2*n))
#y = np.sin((2*ns-1)*np.pi/(2*n))
return (xmin+xmax)/2 + (xmax-xmin)*x/2
def Chebyshev_points_histogram(n=50,nbins=20):
xCheb=Chebyshev_points(-1,1,n)
plt.figure()
plt.hist(xCheb,bins=nbins,density=True)
plt.grid(True)
plt.show()
```
```python
interact(Chebyshev,xmin=fixed(-1),xmax=fixed(1),n=(2,50))
```
interactive(children=(IntSlider(value=5, description='n', max=50, min=2), Output()), _dom_classes=('widget-int…
<function __main__.Chebyshev(xmin, xmax, n=5)>
```python
interact(Chebyshev_points_histogram,n=(20,10000),nbins=(20,200))
```
interactive(children=(IntSlider(value=50, description='n', max=10000, min=20), IntSlider(value=20, description…
<function __main__.Chebyshev_points_histogram(n=50, nbins=20)>
By using these points, we reduce the numerator of the interpolation error formula:
$$ (x-x_1) \cdot (x-x_2) \cdot ... \cdot (x-x_n) = \dfrac{1}{2^{n-1}} \cdot T_n(x), $$
where $T_n(x) = \cos (n \cdot \arccos (x))$ is the $n$-th Chebyshev polynomial.
$$ T_0(x) = 1 $$
$$ T_1(x) = x $$
$$ T_2(x) = 2x^2 -1 $$
$$...$$
$$ T_{n+1}(x) = 2 \cdot x \cdot T_n(x) - T_{n-1}(x) $$
```python
def T(n,x):
# Recursive function that returns the n-th Chebyshev polynomial evaluated at x
if n == 0:
return x**0
elif n == 1:
return x
else:
return 2*x*T(n-1,x)-T(n-2,x)
def Chebyshev_Polynomials(n=2, Flag_All_Tn=False):
# This function plots the first n Chebyshev polynomials
x = np.linspace(-1,1,1000)
plt.figure(figsize=(10,5))
plt.xlim(-1, 1)
plt.ylim(-1.1, 1.1)
if Flag_All_Tn:
for i in np.arange(n+1):
y = T(i,x)
plt.plot(x,y,label='$T_{'+str(i)+'}(x)$')
else:
y = T(n,x)
plt.plot(x,y,label='$T_{'+str(n)+'}(x)$')
# plt.title('$T_${:}$(x)$'.format(n))
plt.legend(loc='right')
plt.grid(True)
plt.xlabel('$x$')
plt.show()
```
```python
interact(Chebyshev_Polynomials,n=(0,12),Flag_All_Tn=True)
```
interactive(children=(IntSlider(value=2, description='n', max=12), Checkbox(value=True, description='Flag_All_…
<function __main__.Chebyshev_Polynomials(n=2, Flag_All_Tn=False)>
```python
n=9
xmin=1
xmax=9
mee = Error(lambda x: np.sin(x)**3, n, xmin, xmax, method=Lagrange)
mec = Error(lambda x: np.sin(x)**3, n, xmin, xmax, method=Lagrange, points=Chebyshev_points)
print("Max error (equidistants points):", mee)
print("Max error (Chebyshev nodes):", mec)
```
```python
def test_error_chebyshev(n=5):
mee = Error(lambda x: (1/(1+12*x**2)), n, -1, 1, Lagrange)
mec = Error(lambda x: (1/(1+12*x**2)), n, -1, 1, method=Lagrange, points=Chebyshev_points)
print("Max error (equidistants points):", mee)
print("Max error (Chebyshev nodes):", mec)
```
```python
interact(test_error_chebyshev,n=(5,100,2))
```
interactive(children=(IntSlider(value=5, description='n', min=5, step=2), Output()), _dom_classes=('widget-int…
<function __main__.test_error_chebyshev(n=5)>
Questions about Chebyshev:
- How can you calculate the Chebyshev points in the interval $[a,b]$ instead of $[-1,1]$? **A: Using the affine change of variable $x = \frac{a+b}{2} + \frac{b-a}{2}\cos\left(\frac{(2i-1)\pi}{2n}\right)$, exactly as implemented in `Chebyshev_points` above.**
## Convergence analysis
```python
n=50
shift=2
my_functions={0:lambda x: (x)**10,
1:lambda x: np.abs((x)**3),
2:lambda x: np.exp(-((x)**-2)),
3:lambda x: 1/(1+x**2),
4:lambda x: np.sin(x)**3}
labels = {0: "x^{10}",
1: "|x^3|",
2: "\exp(-x^{-2})",
3: "1/(1+x^2)",
4: "\sin^3(x)"}
n_points=np.arange(shift,n)
for k in np.arange(5):
max_error=np.zeros(n-shift)
max_error_es=np.zeros(n-shift)
for i in n_points:
max_error[i-shift] = Error(my_functions[k], i, -1, 1, Newton, Chebyshev_points, plot_flag=False)
max_error_es[i-shift] = Error(my_functions[k], i, -1, 1, Newton, points=np.linspace, plot_flag=False)
axis=plt.figure()
plt.semilogy(n_points,max_error,'kd',label='Chebyshev points')
plt.semilogy(n_points,max_error_es,'k.',label='Equalspaced poins')
plt.ylim(10**-16,10**4)
plt.grid(True)
plt.title('Interpolation Error of $f(x)='+str(labels[k])+"$")
plt.xlabel('Number of points used in the interpolation')
plt.ylabel('Max error on domain')
plt.legend(loc='best')
plt.show()
```
<div id='py' />
## Python Modules and Functions
Interpolation:
http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.polyfit.html
Vandermonde Matrix:
http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.vander.html
Lagrange:
http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.interpolate.lagrange.html
Chebyshev Points:
http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.polynomial.chebyshev.chebroots.html#numpy.polynomial.chebyshev.chebroots
<div id='acknowledgements' />
# Acknowledgements
* _Material created by professor Claudio Torres_ (`ctorres@inf.utfsm.cl`) _and assistants: Laura Bermeo, Alvaro Salinas, Axel Simonsen and Martín Villanueva. DI UTFSM. April 2016._
* _Material modified by Cristopher Arenas. May 2017._
* _Material modified by Claudio Torres. May 2017._
* _Bug fixed by Cristobal Carmona. April 2018._
* _Update June 2020 - v1.25 - C.Torres_ : Fixing formatting issues.
* _Update June 2020 - v1.26 - C.Torres_ : Adding "ylim" argumento to Interpolation_Plot(D,ylim=None) and addint "show_time_Li".
* _Update June 2020 - v1.27 - C.Torres_ : Adding comment that the Chebyshev nodes used are of the first kind and "Chebyshev_points_histogram".
```python
```
|
ad38f31c33435ce2a96393db19169c5ccc945a99
| 273,824 |
ipynb
|
Jupyter Notebook
|
SC1/07_Polynomial_Interpolation_1D.ipynb
|
maxaubel/Scientific-Computing
|
57a04b5d3e3f7be2fe9b06127f7e569659698656
|
[
"BSD-3-Clause"
] | 37 |
2017-06-05T21:01:15.000Z
|
2022-03-17T12:51:55.000Z
|
SC1/07_Polynomial_Interpolation_1D.ipynb
|
maxaubel/Scientific-Computing
|
57a04b5d3e3f7be2fe9b06127f7e569659698656
|
[
"BSD-3-Clause"
] | null | null | null |
SC1/07_Polynomial_Interpolation_1D.ipynb
|
maxaubel/Scientific-Computing
|
57a04b5d3e3f7be2fe9b06127f7e569659698656
|
[
"BSD-3-Clause"
] | 63 |
2017-10-02T21:21:30.000Z
|
2022-03-23T02:23:22.000Z
| 224.445902 | 51,500 | 0.906586 | true | 6,393 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.819893 | 0.763484 | 0.625975 |
__label__eng_Latn
| 0.655621 | 0.292681 |
# Continued Fractions
This Open Educational Resource (OER) or book is about using a computer _to explore mathematical concepts_ and _to generate questions_. However, we're going to start with hand computation and go slow for a while, first. Humour us, for a bit, and think of it as brushing up on arithmetic with fractions.
In the next unit, "Rootfinding, Newton’s Method, and Dynamical Systems", the following mysterious sequence will arise naturally; here, we just throw it down.
\begin{equation*}
1, \frac{3}{2}, \frac{17}{12}, \frac{577}{408}, \frac{665857}{470832}, \cdots
\end{equation*}
We could suggest that you _try_ to guess what the rule is for generating these numbers ("guessing the next term" is a common kind of puzzle, see e.g. https://www.mathsisfun.com/algebra/sequences-finding-rule.html), but unless you've seen it before, this example is actually pretty difficult (out of the blue like this, and unmotivated). Soon, we will show a powerful tool (the [Online Encyclopedia of Integer Sequences](http://oeis.org/) OEIS) which makes this sequence, and many others, accessible. But without either experience or a powerful tool, this sequence is (in our opinion) too difficult to guess just now, because the rule is _nonlinear_.
Since we have mentioned it, go to the OEIS at the link above, and enter the _subsequence_ of numerators $1$, $3$, $17$, $577$. The program at the server will then suggest that the sequence is [A001601](http://oeis.org/A001601), which indeed it is; look at the next number at that link, which is 665857, the next term above. One of the rules given at that link (there are several) is indeed how we generated these numbers. The OEIS is a very powerful kind of index of examples from a wide cross-section of mathematics; it is a professional tool. Notice that this sequence has _several_ explanations: it occurs in more than one context. This is part of the power of the OEIS.
By entering only a _subsequence_ of the data, we are employing _Jon Borwein's Rule_ mentioned in the Preamble: "don't blow all your data on your conjecture". Seeing the fifth entry, which we knew but held back, gives us some confidence that this is right.
This is not a mathematical proof, of course: but it is evidence. We will talk more about this.
Here is our first Python program: all it does is draw some squares. If you want to modify it, and you are reading this as a Jupyter Book, click on the icon up in the top corner to download the Jupyter notebook. We don't think you'll need the Python documentation just yet (although the code might look a bit mysterious, its intentions are straightforward), but you can find the [Python 3 documentation here](https://docs.python.org/3/) for when you do need it. One thing you _will_ need is that to modify this code you need to be working with a Jupyter notebook, not the Jupyter Book; again, if you are reading the Jupyter Book, and you want to switch to the notebook, click the download icon in the top right corner. [Documentation for Jupyter notebooks can be found here](https://jupyter-notebook.readthedocs.io/en/stable/).
```python
from matplotlib import pyplot as plt
sq = [1,3.0/2.0, 17.0/12.0, 577.0/408.0] # The first four terms of our mysterious sequence, as floating-point numbers
fig = plt.figure(figsize=(6, 6))
ax = fig.add_axes([0,0,1,1])
# Draw a square of side length sq[0] = 1 # 'r' means "red"
plt.plot( [0, 0], [0, sq[0]], 'r' ) # line from origin to top left corner
plt.plot( [0, sq[0]], [sq[0],sq[0]], 'r' ) # line from top left to top right
plt.plot( [sq[0],sq[0]], [sq[0],0], 'r' ) # line from top right to bottom right
plt.plot( [sq[0],0], [0,0], 'r' ) # line from bottom right to the origin
# Now do a square of length sq[1] = 3/2 # 'k' means "black" (use shorter code, this time)
# We list all x-values first, then all y-values. We have five points because we want to
# draw all around the square, including back to the origin (0,0) where we started
plt.plot( [0, sq[1], sq[1], 0, 0 ], [0, 0, sq[1], sq[1], 0], 'k') # all at once: all x, all y
# Now do a square of length sq[2] = 17/12 # 'b' means "blue"
plt.plot( [0, sq[2], sq[2], 0, 0 ], [0, 0, sq[2], sq[2], 0], 'b') # all at once: all x, all y
# Now do a square of length sq[3] = 577/408 # 'g' means "green" (uncomment the line below to see)
# plt.plot( [0, sq[3], sq[3], 0, 0 ], [0, 0, sq[3], sq[3], 0], 'g') # all at once: all x, all y
# Scale them all and draw them
ax.set_xlim( [-0.25, 1.75] )
ax.set_ylim( [-0.25, 1.75] )
plt.axis('equal')
plt.show()
```
The code above just draws three squares (if you uncomment the "green" block and run it, it will draw four squares; but the fourth one really looks like the third one so it's hard to see). You should look carefully at the code to see what it is doing (pretty simple really, but programming is very fussy: missing brackets, commas, etc, can all cause headaches). First Python notes: the comment character is `#`, and lists start indexing at 0, so `sq[0]` is the first element. This might seem weird, but you get used to it. The other thing is one has to "import" various packages in order to do common things. We'll see a lot of the matplotlib package; it is very useful.
The squares have side lengths equal to the numbers in the sequence above. What are the _areas_ of the squares? Work out the first few, at least, by hand, and see if you can spot a pattern. We'll do this down below, so if you don't feel like doing arithmetic just now, that's ok. But, some arithmetic is coming, so you might do well with a warm-up.
We now return to the mysterious sequence $1$, $3/2$, $17/12$, $\ldots$ .
In fact, each term $x_n$ is generated from its predecessor[^1] by the rule $x_n = \frac{1}{2}\left(x_{n-1} + \frac{2}{x_{n-1}}\right)$. This kind of thing is sometimes called a _recurrence relation_ or _iteration_ or _discrete dynamical system_.
We give a first exercise at the bottom of this unit that uses this rule to give you practice in the following three things:
1. The use of mathematical subscripts to indicate a sequence
2. How to implement such a sequence "semi-manually" by using Python as a calculator
3. How to make that more efficient by using _ranges_ and _loops_ in Python
Even if you can already program in Python we suggest you at least read the exercise, to make sure we're on the same page.
Coming back to the iteration rule $x_n = \frac{1}{2}\left(x_{n-1} + \frac{2}{x_{n-1}}\right)$, which works for arbitrary real (or complex!) numbers $x_{n-1}$, we specialize this to the case when $x_{n-1}$ is just a _rational_ number, say $p_{n-1}/q_{n-1}$. What means the same thing, if we label the numerators and denominators by $x_{n-1} = \frac{p_{n-1}}{q_{n-1}}$ and $x_n = \frac{p_n}{q_n}$, we find by using rational arithmetic
\begin{align*}
\frac{p_n}{q_n} &= \frac12\left( \frac{p_{n-1}}{q_{n-1}} + \frac{2}{p_{n-1}/q_{n-1}}\right) \\
&= \frac12\left( \frac{p_{n-1}}{q_{n-1}} + \frac{2q_{n-1}}{p_{n-1}}\right) \\
&= \frac{ p_{n-1}^2 + 2q_{n-1}^2}{2p_{n-1}q_{n-1}}\>,
\end{align*}
after putting them over a common denominator. This gives the following two separate equations for the numerators and denominators:
$$
\begin{align}
p_n &= p_{n-1}^2 + 2q_{n-1}^2\\
q_n &= 2p_{n-1}q_{n-1}.
\end{align}
$$
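If you want to see this on a computer, here is a minimal Python sketch in exact integer arithmetic: starting from $p_0 = q_0 = 1$, the two recurrences reproduce the numerators and denominators of the mysterious sequence.

```python
# Iterate p_n = p_{n-1}**2 + 2*q_{n-1}**2 and q_n = 2*p_{n-1}*q_{n-1}
p, q = 1, 1
for n in range(4):
    p, q = p**2 + 2*q**2, 2*p*q
    print(p, '/', q)   # 3/2, 17/12, 577/408, 665857/470832
```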
There are a lot of questions that can be asked about this sequence, and we'll list some below. By the end of this section, we hope that you'll already be more comfortable asking your own. Feel free to copy that sequence on a piece of paper, "close the book" (or look away from the screen, whatever), and take ten minutes or so and write as many questions as you can, and don't worry about the answers.
After having written that sentence, one of us (RMC) did that exercise, and wrote down fourteen questions in a few minutes. Two of those questions are sort of leading questions for this chapter, so we'll reproduce them here.
1. What do the numbers $x_n = \frac{p_n^2}{q_n^2}$ do as $n$ gets large?
2. What do the expansions in [continued fractions](https://en.wikipedia.org/wiki/Continued_fraction) look like?
Another question we'll give away: where did this sequence and rule come from? If you really can't wait, you can start the next chapter, where that question is given an answer. This chapter and the next are independent enough that you can do that without losing the thread of the argument. **Using the "back" button works, in Jupyter Book, to get you back here when you've read that, if you want to**. Or, you can give us a bit more trust, and keep reading here.
```{epigraph}
At first glance, nothing seems simpler or less significant than writing a number, say $\frac{9}{7}$, in the form
\begin{equation*}
\frac{9}{7} = 1 + \frac{2}{7} = 1 + \cfrac{1}{\frac{7}{2}} = 1 + \frac{1}{3 + \frac{1}{2}} = 1 + \cfrac{1}{3 + \cfrac{1}{1 + \frac{1}{1}}}.
\end{equation*}
It turns out, however, that fractions of this form, called _continued fractions_ provide much insight...
-- from p. 3 of C. D. Olds, "Continued Fractions", published in 1963 by The Mathematical Association of America {cite:p}`Olds1963`
```
Carl Douglas Olds won the 1973 Chauvenet Prize, the highest award for mathematical exposition, for his paper "The Simple Continued Fraction for $e$." The book cited above is likewise a model of lucidity, and reads very well today.
What's happening there? You can see that we haven't really _done_ anything, by working backwards: $1+1/1$ is $2$, so $3+1/2 = 7/2$, so $1 + 2/7 = 9/7$ which is what we started with. So this is just a way to rewrite a rational number. What, exactly, did we do to get there? What's the process? And what does it look like for our sequence $1$, $3/2$, $17/12$, and so on?
First, we take out the integer part. For our first two numbers, nothing much happens:
$$
\begin{align}
1 &= 1 \quad \text{already} \\
\dfrac{3}{2} &= 1 + \dfrac{1}{2} = 1 + \cfrac{1}{1 + \frac{1}{1}} \>,
\end{align}
$$
but this last form isn't of much obvious use. From now on, we'll try to avoid ending the continued fraction with + 1/1. In almost all cases, we will be able to do that.
The next number is more interesting:
$$
\begin{align}
\dfrac{17}{12} &= \dfrac{12 + 5}{12} \\
&= 1 + \dfrac{5}{12} \\
&= 1 + \cfrac{1}{\frac{12}{5}} \\
&= 1 + \cfrac{1}{2 + \frac{2}{5}} \\
&= 1 + \cfrac{1}{2 + \cfrac{1}{\frac{5}{2}}} \\
&= 1 + \cfrac{1}{2 + \cfrac{1}{2 + \frac{1}{2}}} \>.
\end{align}
$$
It looks like a pattern is emerging.
The crucial step in this process is writing the fractional part that we get, after taking out the integer part, as a reciprocal of another fraction:
\begin{equation*}
\dfrac{5}{12} = \cfrac{1}{\frac{12}{5}}.
\end{equation*}
Now a longer example:
$$
\begin{align}
\dfrac{577}{408} &= \dfrac{408 + 169}{408} \\
&= 1 + \dfrac{169}{408} \\
&= 1 + \dfrac{1}{\frac{408}{169}} \\
&= 1 + \cfrac{1}{2 + \frac{70}{169}} \\
&= 1 + \cfrac{1}{2 + \cfrac{1}{\frac{169}{70}}} \\
&= 1 + \cfrac{1}{2+\cfrac{1}{2 + \frac{29}{70}}} \\
&= 1 + \cfrac{1}{2 + \cfrac{1}{2 + \cfrac{1}{\frac{70}{29}}}} \\
&= 1 + \cfrac{1}{2 + \cfrac{1}{2 + \cfrac{1}{2 + \frac{12}{29}}}} \\
&= 1 + \cfrac{1}{2 + \cfrac{1}{2 + \cfrac{1}{2 + \cfrac{1}{\frac{29}{12}}}}} \\
&= 1 + \cfrac{1}{2 + \cfrac{1}{2 + \cfrac{1}{2 + \cfrac{1}{2 + \frac{5}{12}}}}} \\
&= 1 + \cfrac{1}{2 + \cfrac{1}{2 + \cfrac{1}{2 + \cfrac{1}{2 + \cfrac{1}{\frac{12}{5}}}}}} \\
&= 1 + \cfrac{1}{2 + \cfrac{1}{2 + \cfrac{1}{2 + \cfrac{1}{2 + \cfrac{1}{2 + \frac{2}{5}}}}}} \\
&= 1 + \cfrac{1}{2 + \cfrac{1}{2 + \cfrac{1}{2 + \cfrac{1}{2 + \cfrac{1}{2 + \cfrac{1}{2 + \frac{1}{2}}}}}}} \\
&= 1 + [2 \>, 2 \>, 2\>, 2\>, 2 \>, 2 \>, 2] \quad \text{for short.}
\end{align}
$$
At this point, you may feel like sticking out your tongue and giving us a raspberry for such obvious cheating. Think of it like "television wrestling" and give the entertainment a chance!
## The game of _pass the parcel_
Who can play:
Initiator and any number[^2] of players.
Initiator "0" chooses a real (decimal) number, takes the integer part (which might be $0$ or even negative, this one time) and keeps it, and separates out the fractional part which should be in $[0, 1)$, and passes the fractional part to the first player, call them $A$.
Example: suppose the number chosen was $5.318309886184$; the initiator takes the integer part, $5$, and computes the fractional part $x_0 = 0.318309886184$ and passes it to the first player, $A$.
$A$ takes the number, _inverts it_, removes and keeps the integer part, and passes the new fractional part on to the next player, call them $B$.
In this example, $\frac{1}{0.318309886184} = 3.14159265359$ and the player $A$ keeps $\boxed{3}$ and passes $0.14159265359$ on to the next player, $B$. Each player follows these rules: 1) invert, 2) take & keep integer part, 3) pass fractional part on. The game ends if the received number is zero or repeats an earlier fractional part exactly. Mostly, it won't end! So, in practice, stop when you feel like it.
Here, player $B$ gets $0.14159265359$, inverts to $7.06251330592$, removes and keeps $\boxed{7}$ and passes $0.06251330592$ to player $C$. $C$ gets $0.06251330592$, inverts to $15.9965944095$, keeps $\boxed{15}$, passes $0.9965944095$ to $D$. $D$ inverts to $1.00341722818$, keeps $\boxed{1}$ and passes $0.00341722818$ to $E$. $E$ inverts to $292.63483365$, keeps $\boxed{292}$ and passes $0.63483365$ to $F$. $F$ inverts to $1.57521580653$, keeps $\boxed{1}$. At this point, looking back, this means that
\begin{equation*}
5 + \dfrac{1}{\pi} = 5 + \cfrac{1}{3 + \cfrac{1}{7 + \cfrac{1}{15 + \cfrac{1}{1 + \cfrac{1}{292 + \cfrac{1}{1 + \substack{\ \\ \ \\ \ddots}}}}}}} .
\end{equation*}
Exercise: compute the difference between your starting number and the final rational number you get. You should see that each _partial quotient_ (which is what the kept integers are called) will give you at least one decimal digit of accuracy.
**Surprises** A rational $x_0$ always stops with remainder $0$ at some point, while an irrational $x_0$ will never stop. Compare with Olds' rational example:
\begin{equation*}
\dfrac{355}{113} = 3 + \cfrac{1}{7 + \cfrac{1}{15 + \cfrac{1}{1}}}
\end{equation*}
which stops as claimed.
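We can check Olds' example with exact rational arithmetic. Here is a minimal sketch using Python's `fractions` module, rebuilding $3 + [7, 15, 1]$ from the innermost term outward.

```python
from fractions import Fraction

# rebuild 3 + 1/(7 + 1/(15 + 1/1)) from the bottom up, exactly
quotients = [3, 7, 15, 1]
value = Fraction(quotients[-1])
for q in reversed(quotients[:-1]):
    value = q + 1/value
print(value)           # 355/113
print(float(value))    # 3.14159292..., the famous rational approximation to pi
```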
Several questions can arise here. It's a good place for the reader to pause, and write some of them down. Here's a few from us. Some of them are interesting, but to be honest we're more interested in what your questions will be.
1. Do rational numbers stop after a fixed number of iterations? Or can the game go arbitrarily long?
2. If we start with an unreduced fraction, like 18/15, does it make a difference?
3. Can we draw a picture of this process?
4. What happens if you make an arithmetic mistake somewhere in the middle?
5. Can we use negative numbers?
6. Can we use noninteger entries in the continued fraction?
7. Can we use symbols (variables)? What does $1 + [x, 2x, x, 2x, x, 2x]$ look like as a function of $x$, for instance?
## Another Python Program
Consider the small program below, which uses a list and a loop (see the exercises at the end of this section for an introduction) to encode this process. We have "hardwired" the loop to compute five "partial quotients" of the continued fractions; you may change that, of course, if you are reading this as a Jupyter notebook and not as a Jupyter Book. (Click on the icon up in the top corner to download the Jupyter notebook, if you are reading this as a Jupyter Book).
```python
r = 1.414213562373095
import math
a = [math.floor(r)]
for k in range(5):
f = r - a[k]
r = 1/f
a.append( math.floor(r) )
print( a )
```
[1, 2, 2, 2, 2, 2]
As an exercise, you should re-type every line of that (maybe it won't hurt to copy-and-paste the decimal approximation to $\sqrt2$—wait, what's root 2 doing here?) and write out a comment for each line explaining what it does. The math.floor function computes the _largest integer less than or equal_ to whatever it gets called with. The variable names (r, a, k, f) are all single-letter, which is ok for a short math program; they are sort of meaningful, even: r for root 2, ok "a" doesn't mean much, f for "fractional part", and then the index variable k because of the old Fortran convention: variables whose names start with the letters `i`, `j`, `k`, `ell`, `m`, `n` (i.e. the letters I–N ) are commonly thought of as INtegers. This is not part of Python—you could call your variables whatever you wanted—but it makes your programs easier to read by people who share that convention.
One thing we are skating past for the moment, whistling: that program uses floating point, and sometimes the behaviour is a bit weird. To see what we mean, replace the first line with `r = 17/12`, and run the program: we expect it to terminate at [1,2,2,2], but in fact it generates [1, 2, 2, 1, 1, 70368744177664]. We will not explain that at this time, but merely wave our hands and say "rounding errors".
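To see that the culprit really is the floating-point arithmetic and not the algorithm, here is a variant of ours (a sketch, not the original program) that runs the same loop in exact rational arithmetic with Python's `fractions` module; it stops cleanly, with no mysterious huge entries:

```python
from fractions import Fraction
import math

r = Fraction(17, 12)
a = [math.floor(r)]
while True:
    f = r - a[-1]        # the exact fractional part
    if f == 0:           # exactly zero, so the continued fraction terminates
        break
    r = 1 / f
    a.append(math.floor(r))
print(a)                 # [1, 2, 2, 2]
```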
We now return to thinking about the sequence $1$, $3/2$, $17/12$, $577/408$, $\ldots$.
When you think about it, it _is_ a bit mysterious that the simple rule
\begin{equation*}
x_n = \dfrac{1}{2}\left(x_{n-1} + \frac{2}{x_{n-1}}\right)
\end{equation*}
can generate the continued fractions
\begin{equation*}
1, 1 + [2], 1 + [2, 2, 2], \text{and } 1 + [2, 2, 2, 2, 2, 2, 2].
\end{equation*}
The next one,
\begin{equation*}
\dfrac{665857}{470832} = 1 + [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]
\end{equation*}
apparently has fifteen 2's in it; don't worry, we'll check that by computer, later. That's one, three, seven, and fifteen twos. What's next? That is an example of a puzzle sequence that is much easier for humans to get unaided, by the way. We'll leave that for now and go back to the first question, about $x_n^2 = \frac{p_n^2}{q_n^2}$.
The squares of our sequence are
$$
\begin{align}
1 &\>, \\
\dfrac{9}{4} &= 2\dfrac{1}{4} \>, \\
\left(\dfrac{17}{12}\right)^2 &= \dfrac{289}{144} = \dfrac{288 + 1}{144} = 2 + \dfrac{1}{144} = 2 + \dfrac{1}{12^2} \>,\\
\left(\dfrac{577}{408}\right)^2 &= \dfrac{332929}{166464} = \dfrac{332928 + 1}{166464} = 2 + \dfrac{1}{166464} = 2 + \dfrac{1}{408^2}
\end{align}
$$
and at this point, we might be prepared to bet that
\begin{equation*}
x_4^2 = \left(\dfrac{665857}{470832}\right)^2 = 2 + \dfrac{1}{470832^2} \approx 2 + 4.5\cdot10^{-12}.
\end{equation*}
Checking using RMC's phone (a Nexus 5), we see that this is, in fact, true. But what does it mean?
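If you do not have RMC's phone handy, exact integer arithmetic settles the bet just as convincingly; here is a one-line check (a sketch) using Python's `fractions` module:

```python
from fractions import Fraction

x4 = Fraction(665857, 470832)
print(x4**2 == 2 + Fraction(1, 470832**2))   # True
```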
One thing it means is that our sequence can be written as
\begin{equation*}
1 = \sqrt{1},\>\> \dfrac32 = \sqrt{2 + \dfrac{1}{2^2}},\>\> \dfrac{17}{12} = \sqrt{2 + \dfrac{1}{12^2}},\>\> \dfrac{577}{408}=\sqrt{2 + \dfrac{1}{408^2}},\>\> \dfrac{665857}{470832}=\sqrt{2 + \dfrac{1}{470832^2}} \approx \sqrt{2 + 4.5\cdot10^{-12}},
\end{equation*}
that is, apart from $x_0$, a sequence of square roots of numbers that rapidly approach $2$. The denominator of $x_5$ is
\begin{equation*}
q_5 = 2p_4q_4 = 2\cdot 470832 \cdot 665857 \approx 2 \cdot 500,000 \cdot 650,000 = 6.5\cdot 10^{11};
\end{equation*}
and the next square is
\begin{equation*}
\left(\dfrac{p_5}{q_5}\right)^2 = 2 + \dfrac{1}{q_5^2} \approx 2 + 2\cdot 10^{-24},
\end{equation*}
which differs from $2$ by about as much as adding one molecule to a mole of material would change it [^3]. For reference, one mole of water weighs about 18.01528 grams; one molecule of water is going to be hard to detect!
Some more questions present themselves. Does this continue? Is $x_5 = 1 + [2 \>, 2 \>, \ldots, 2]$ with thirty-one 2's in the continued fraction? Does $x_6$ have sixty-three 2's in it? Is $x_n^2 = 2 + \frac{1}{q_n^2}$ always? Does this mean that $x_n \approx \sqrt{2}$?
<!-- The techniques of the calculus answer that last. The binomial theorem says (if $a > 0$) that
$$
\begin{align}
\left(a^2 + b\right)^{\frac{1}{2}} &= a\left(1 + \dfrac{b}{a^2}\right)^{\frac{1}{2}} \\
&\approx a\cdot\left(1 + \dfrac{1}{2}\cdot\left(\dfrac{b}{a^2}\right) + \mathcal{O}\left(\dfrac{b}{a^2}\right)^2\right)
\end{align}
$$
where the $\mathcal{O}$ symbol means here "about the same size as." Therefore
$$
\begin{align}
\sqrt{2 + \dfrac{1}{q_n^2}} &= \sqrt{2}\left(1 + \dfrac{1}{2q_n^2}\right)^{\frac{1}{2}} \nonumber \\
&\approx \sqrt{2}\left(1 + \dfrac{1}{4q_n^2} + \mathcal{O}\left(\dfrac{1}{q_n^2}\right)^2\right)\>.
\end{align}
$$
In other words, $\sqrt{\ \ }$ is a continuous function: if you change its argument only a little, then its output is only a little different. Thus, $\frac{17}{12}$, being the square root of $2 + \frac{1}{144}$, ought to be close to $\sqrt{2}\left(1 + \frac{1}{288}\right)$ or different to $\sqrt{2}$ only in the third decimal place.
-->
We could use techniques from calculus to answer that last question, but let's try just using inequalities (it's good practice, anyway). Suppose that $x^2 = 2+s$ for some $s>0$, and $y^2=2$ exactly; so $y=\sqrt{2}$, but we'll try not to use any more knowledge of it than $1 < y < 2$. Then
\begin{equation}
x^2 - y^2 = (2+s) - 2 = s
\end{equation}
and, factoring the difference of squares,
\begin{equation}
x - y = \frac{s}{x+y} < \frac{s}{2y} < \frac{s}{2}
\end{equation}
where we have used the facts that $x>y$ (which is because $x^2$ is greater than $2$, so naturally $x$ must be greater than the square root of $2$) and $y > 1$, and the ordinary rules for manipulating inequalities (which, admittedly, you might not have had a lot of practice with; they are a bit annoying and fussy).
What does this _mean_? We now know $0 < x - \sqrt{2} < s/2$ if $x^2=2+s$ with $s>0$. That is, if the square of your estimate is nearly $2$, then your estimate is nearly the square root of $2$. This in technical terms establishes the _continuity_ of the square root function, at least on one side.
Exercise: go through the steps in the case when $x^2 = 2 - s$ is smaller than $2$ and see if you can reach a similar conclusion.
Or, we can just draw it. The following figure shows the case where $x^2 = 2 + s$ is bigger than $2$. As an exercise, alter the plot so it shows the case where $s$ is negative.
```python
import numpy as np
import matplotlib.pyplot as plt
fig2 = plt.figure(figsize=(6, 6))
ax2 = fig2.add_axes([0,0,1,1])
n = 501
x = np.linspace(0,2,n)
y = np.zeros(n)
for k in range(n):
    y[k] = x[k]**2
a = 17/12
b = a**2
r2 = np.sqrt(2)
two= 2
plt.plot( x, y, 'k') # The black line is y=x^2. On the tiny scale plotted it looks pretty linear.
plt.plot( [r2,r2], [0,two], 'r')
plt.plot( [0,r2], [two,two], 'r')
plt.plot( [a,a], [0,b], 'b')
plt.plot( [0,a], [b,b], 'b')
# Scale them all and draw them
ax2.axis('equal')
ax2.set_xlim( [1.40, 1.42] )
ax2.set_ylim( [1.99, 2.01] )
ax2.annotate( '$x^2 = 2+s$', xy=(0.125,0.87), xycoords='figure fraction' )
ax2.annotate( '$x$', xy=(0.92,0.075), xycoords='figure fraction')
ax2.annotate( r'$\sqrt{2}$', xy=(0.775,0.075), xycoords='figure fraction')
plt.show()
```
Looking back at that plot, we see that the horizontal distance from $x$ to $\sqrt{2}$ is pretty clearly less than half the vertical distance from $2+s$ to $2$. That is the graphical interpretation of the inequality that we derived up above. You can also see the source of our "could have used calculus" remark, because it is _the slope of the curve_ (which looks pretty linear on this scale) at $\sqrt{2}$ that determines the relationship of the horizontal width to the vertical width. Well, actually, that's kind of the start of _real analysis_; we will leave things at that.
Exercise: What happens with $[1,2,2]$, $[1,2,2,2,2]$, $[1,2,2,2,2,2,2,2,2]$ instead? That is, with two 2s, four 2s, eight 2s, etc?
With any even number of twos? With any odd number of twos? If you see a pattern emerging, can you prove it?
<!-- Can we get them to guess that an odd number means x^2 > 2 and an even number means x^2 < 2, we are a long way towards proof of convergence; but even a bracketing theorem is valuable
-->
Indeed for example we have $(17/12)^2 = 2 + 1/144$ so we expect that the difference between $17/12$ and $\sqrt{2}$ should be smaller than $1/288$. By direct computation,
\begin{equation}
\dfrac{17}{12} = 1.416666\ldots
\end{equation}
while
\begin{equation}
\sqrt{2} \approx 1.4142\ldots
\end{equation}
and $17/12-\sqrt{2} = 0.002453\ldots$ while $1/288=0.00347\dot{2}$ in agreement with our theorizing.
Here's another question. What is
\begin{equation}
1 + \cfrac{1}{2 + \cfrac{1}{2 + \cfrac{1}{2 + \cfrac{1}{2 + \substack{\ \ \\ \\ \ddots}}}}}
\end{equation}
where the 2's continue forever? Does this make sense? At this point, many people are surprised at the perfect predictability, and repeating nature, of this continued fraction, because it is indeed true that with quite natural definitions, this infinite continued fraction can only be $\sqrt{2}$.
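Before answering, it may be comforting to watch the truncations creep up on $\sqrt{2}$. The following sketch (the helper name `one_plus_twos` is ours) evaluates $1 + [2, 2, \ldots, 2]$ with $k$ twos exactly and prints the error:

```python
from fractions import Fraction
import math

def one_plus_twos(k):
    """Evaluate 1 + [2, 2, ..., 2] (k twos) in exact rational arithmetic."""
    tail = Fraction(0)
    for _ in range(k):
        tail = 1 / (2 + tail)
    return 1 + tail

for k in [1, 3, 7, 15, 31]:
    v = one_plus_twos(k)
    print(k, v, float(v) - math.sqrt(2))
```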
But "everybody knows" that the decimal expansion for $\sqrt{2}$ does not repeat, because $\sqrt{2}$ is irrational! Why is this different? Is it something special about $\sqrt{2}$? (Of course a continued fraction is not a decimal expansion.)
To answer that, we do some more examples. At this point, it's helpful if everyone in the class takes a different starting point, i.e. a different number. We'll do $\sqrt{3}$ here, but people should try lots of things: $\sqrt{4}$ is boring, but $\sqrt{5}$ is interesting, $\frac{\left(1 + \sqrt{5}\right)}{2}$ even more so. It's a bold move to think about cube roots, or $\ln(2)$. How about $e$, or $\pi = 3.14159\ldots$?
Now $\sqrt{3} \approx 1.732\ldots$ (All we needed was that $1 < \sqrt{3} < 2$ so the integer part of $\sqrt{3}$ is $1$.) Thus
\begin{equation*}
\sqrt{3} = 1 + \left(\sqrt{3} - 1\right) = 1 + \cfrac{1}{\cfrac{1}{\sqrt{3} - 1}}.
\end{equation*}
Now
$$
\begin{align}
\dfrac{1}{\sqrt{3}-1} &= \dfrac{1}{\sqrt{3} - 1} \cdot \left(\dfrac{\sqrt{3} + 1}{\sqrt{3} + 1}\right) \\
&= \dfrac{\sqrt{3} + 1}{\left(\sqrt{3}\right)^2 - 1^2} \\
&= \dfrac{\sqrt{3} + 1}{2} \\
&= \dfrac{2 + \left(\sqrt{3}-1\right)}{2} \\
&= 1 + \dfrac{\left(\sqrt{3} - 1\right)}{2} \\
&= 1 + \cfrac{1}{\cfrac{2}{\sqrt{3} - 1}}
\end{align}
$$
and
\begin{equation*}
\dfrac{2}{\sqrt{3} - 1} = \dfrac{2}{\sqrt{3} - 1}\left(\dfrac{\sqrt{3} + 1}{\sqrt{3} + 1}\right) = \dfrac{2\cdot \left(\sqrt{3} + 1\right)}{\left(\sqrt{3}\right)^2 - 1^2} = \sqrt{3} + 1
\end{equation*}
by the same trick; and, of course,
\begin{equation*}
\sqrt{3} + 1 = 2 + \left(\sqrt{3} - 1\right).
\end{equation*}
Therefore,
$$
\begin{align}
\sqrt{3} - 1 &= \dfrac{1}{1 + \frac{1}{2}\left(\sqrt{3} - 1\right)} \\
&= \dfrac{1}{1 + \cfrac{1}{2 + \left(\sqrt{3} - 1\right)}} \\
&= \cfrac{1}{1 + \cfrac{1}{2 + \cfrac{1}{1 + \cfrac{1}{2 + \left(\sqrt{3} - 1\right)}}}}
\end{align}
$$
by repeating the substitution. This suggests that
\begin{equation*}
\sqrt{3} = 1 + [1, 2, 1, 2, 1, 2, 1, 2, 1, 2, \ldots],
\end{equation*}
which is, indeed, true[^4]. So we can have repeating continued fractions from other things besides $\sqrt{2}$.
Here are some others to try: $e = 2.7182818284\ldots$, $\sqrt{13}$, $3^{\frac{1}{3}}$, $5^{\frac{1}{5}}$, $\gamma = 0.577\ldots$ (the Euler-Mascheroni constant), $\pi$, $\pi^2$, $\sqrt{\pi}$, $e^{\frac{1}{e}}$, $\pi^{\pi}$, $e^{\pi}$.
### Warning: the Python arithmetic changes when we import SymPy ("Symbolic Python")
SymPy has a class `Rational` which allows us to perform exact rational arithmetic, and also exact arithmetic on some exact numbers like $\sqrt{3}$. We also import a pretty fancy piece of code called a "continued fraction iterator". It's a power tool; go ahead and use it if you like. We didn't write it, though—better give you [a link to the docs](https://docs.sympy.org/latest/modules/ntheory.html) (some of us haven't read them, so we could hardly blame you if you don't).
```python
import sympy
from sympy.core import Rational, pi
from sympy import sqrt
from sympy.ntheory.continued_fraction import continued_fraction_iterator
def confrac(expr, n):
    result = []
    for i, v in enumerate(continued_fraction_iterator(expr)):
        if i > (n-1):
            break
        result.append(v)
    return(result)
```
```python
# the first 7 partial quotients of the continued fraction of sqrt(3)
confrac(sqrt(3), 7)
```
[1, 1, 2, 1, 2, 1, 2]
```python
# the first 10 partial quotients of the continued fraction of 1/pi
confrac(Rational(1, sympy.N(pi)), 10)
```
[0, 3, 7, 15, 1, 292, 1, 1, 1, 2]
Question: Is that code correct? Does it actually produce an approximation to $1/\pi$? Let's see.
```python
print( 1/(3+1/(7+1/(15+1/(1+1/(292+1/(1+1/(1+1/(1+1/(2))))))))) )
print( sympy.N(1/pi) )
```
0.3183098861846737
0.318309886183791
A little different, but believably within tolerance.
## Programming as a method of validation
Consider the assertion about that $\sqrt{3} = 1 + \overline{[1 \>, 2]}$ where $\overline{[1 \>, 2]}$ means the infinite repeating continued fraction
\begin{equation*}
\cfrac{1}{1 + \cfrac{1}{2 + \cfrac{1}{1 + \cfrac{1}{2 + \cfrac{1}{\ddots}}}}}.
\end{equation*}
We will show in a moment a Python program to compute the successive truncations of this fraction, namely $[1]$, $[1 \>, 2]$, $[1 \>, 2 \>, 1]$, and so on.
<!-- The program we exhibit below is intended to show some features of Maple programming, but not to be the "best possible'' program for this particular computation. Here, simplest is best.
The basic idea of the program is that we will convert at each step
\begin{equation}
1 + \left[1 \>, 2 \>, 1 \>, 2 \>, \ldots, \frac{a_{n-1}}{(a_n + s)}\right]
\end{equation}
to
\begin{equation}
1 + \left[1 \>, 2 \>, 1 \>, 2 \>, \ldots, a_{n-1} + \frac{1}{(a_n + s)}\right]
\end{equation}
because
\begin{equation}
\cfrac{\ddots}{a_{n-1} + \cfrac{1}{(a_n + s)}}
\end{equation}
can be written in our $\left[ \ \ \right]$ notation in either way. The program exhibits several useful features of Python programming: the most important of which is _assignment_, as in `a = 2;` which means the variable named `a` is assigned the value 2, or as in `a = 1;` which means the variable `a` is assigned the value 1. The `def function():` block denotes a procedure body, intended to perform the statements contained in the body, each time the procedure is invoked. By assigning the procedure body to the name `bottomup3` we allow easy invocation, e.g. -->
```python
# A program to convert a list of partial quotients to a convergent
def list_to_frac(inputlist):
    expr = 0
    for i in reversed(inputlist[1:]):
        expr += i
        expr = 1/expr
    return(expr + inputlist[0])
```
```python
n = 1
cfrac1 = confrac(sqrt(3), n + 1)
print('Partial quotients of sqrt(3) when n = ', n,':', cfrac1)
list_to_frac(cfrac1)
```
Partial quotients of sqrt(3) when n = 1 : [1, 1]
$\displaystyle 2$
which results in the answer 2, which is $1 + \frac{1}{1}$, the depth $n=1$ continued fraction.
```python
n = 2
cfrac2 = confrac(sqrt(3), n + 1)
print('Partial quotients of sqrt(3) when n = ', n,':', cfrac2)
list_to_frac(cfrac2)
```
Partial quotients of sqrt(3) when n = 2 : [1, 1, 2]
$\displaystyle \frac{5}{3}$
yields $\frac{5}{3}$, which is $1 + \frac{1}{(1 + \frac{1}{2})}$, the depth $n=2$ continued fraction. We can now ask for as many _convergents_ (as they are called) as we wish, or have patience for.
```python
for i in range(1, 6):
    print('n = ', i)
    cfrac = confrac(sqrt(3), i+1)
    print(cfrac)
    expr = list_to_frac(cfrac)
    print('Result of continued fraction:', expr, 'or', sympy.N(expr))
```
n = 1
[1, 1]
Result of continued fraction: 2 or 2.00000000000000
n = 2
[1, 1, 2]
Result of continued fraction: 5/3 or 1.66666666666667
n = 3
[1, 1, 2, 1]
Result of continued fraction: 7/4 or 1.75000000000000
n = 4
[1, 1, 2, 1, 2]
Result of continued fraction: 19/11 or 1.72727272727273
n = 5
[1, 1, 2, 1, 2, 1]
Result of continued fraction: 26/15 or 1.73333333333333
This loop produces
$$
\begin{align}
2 &\\
\dfrac{5}{3} &= 1.66\ldots \\
\dfrac{7}{4} &= 1.75 \\
\dfrac{19}{11} &= 1.72727 \ldots \\
\dfrac{26}{15} &= 1.733 \ldots
\end{align}
$$
The noticing sort of person might see that these are alternately larger and smaller than $\sqrt{3}$. These don't seem to be approaching $\sqrt{3}$ all that fast, compared to our memory of the $\sqrt{2}$ example. But when we go back and look again, we see that it took _fifteen_ 2's to get us to $12$ decimal place accuracy, so we try
```python
n = 15
cfrac15 = confrac(sqrt(3), n + 1)
print('Partial quotients of sqrt(3) when n = ', n,':', cfrac15)
list_to_frac(cfrac15)
```
Partial quotients of sqrt(3) when n = 15 : [1, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1]
$\displaystyle \frac{18817}{10864}$
To evaluate this in floating-point, we use `sympy.N()`:
```python
sympy.N(list_to_frac(cfrac15))
```
$\displaystyle 1.73205081001473$
Now, let's express $\sqrt{3}$ in decimals:
```python
sympy.N(sqrt(3))
```
$\displaystyle 1.73205080756888$
These should be comparable; we see that the error is $\mathcal{O}(10^{-9})$, not as good as that for $\sqrt{2}$ but not bad.
### Working backwards
What is $1 + \left[3 \>, 3 \>, 3\>, \ldots \right]$? Write it as
\begin{equation*}
x = 1 + \dfrac{1}{y}
\end{equation*}
where $y = 3 + \frac{1}{y}$, _i.e._ $y^2 - 3y - 1 = 0$, _i.e._ $y = \frac{3 \pm \sqrt{9 + 4}}{2} = \frac{3 + \sqrt{13}}{2}$, taking the plus sign because the minus sign gives a negative answer.
$$
\begin{align}
x &= 1 + \dfrac{2}{3 + \sqrt{13}} \\
&= \dfrac{5 + \sqrt{13}}{3 + \sqrt{13}} \\
&= \dfrac{\left(5 + \sqrt{13}\right)\left(-3 + \sqrt{13}\right)}{-3^2 + 13} \\
&= \dfrac{\left(-15 + 2\sqrt{13} + 13\right)}{4} \\
&= \dfrac{-2 + 2\sqrt{13}}{4} \\
&= \dfrac{\sqrt{13} - 1}{2} \approx 1.30277563773\ldots
\end{align}
$$
One can check on a calculator.
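Or check on a computer: this sketch reuses the `confrac` helper defined earlier (so run that cell first), together with SymPy, to confirm both the decimal value and the partial quotients:

```python
val = (sympy.sqrt(13) - 1) / 2
print(sympy.N(val, 12))    # 1.30277563773
print(confrac(val, 8))     # [1, 3, 3, 3, 3, 3, 3, 3]
```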
### Working _forwards_
If you try to compute a continued fraction from the bottom up, and the continued fraction is infinite, you have to decide where to truncate and then work backwards as we have been doing above. If you decide at the end that you aren't happy with the accuracy you obtained, you have to go back, truncate the continued fraction farther down, and do it again. This is annoying.
There is also a better way. If the simple continued fraction is [$a_0$; $a_1$, $a_2$, $\ldots$ ] then the first two approximants are $a_0/1$ and $(a_0a_1 + 1)/a_1$, so we at least have something to start with. Call the $n$th approximant $x_n$ and write it as the rational number $p_n/q_n$. So $x_0 = a_0/1$ and so $p_0=a_0$ and $q_0=1$. Then $p_1 = a_1a_0 + 1$ and $q_1 = a_1$. "It can be shown that"
\begin{align}
p_{n+1} &= a_{n+1}p_n + p_{n-1} \\
q_{n+1} &= a_{n+1}q_n + q_{n-1}
\end{align}
and this allows us to work _forward_ until we are happy with our approximation. In the typical unit on continued fractions, one proves that the true answer is _trapped between_ successive convergents, and so the error is less than the difference between two successive convergents.
If you want a mathematical proof, you can find one _very_ clearly written out in Olds' book on pages 21-24. The [Wikipedia article](https://en.wikipedia.org/wiki/Continued_fraction) has the theorems but not the proofs. The proof in Olds is by induction, and we do recommend that you try to prove it yourself.
But if you don't want to prove it, you should at least program it. Here's our program. Once we finished debugging, it successfully computed the value of the list of the partial quotients of the continued fraction for $e$.
__Remark on Indexing__ Above we have used $x_0 = p_0/q_0$ to start, indexing from $0$ like Python; Olds uses $x_1 = p_1/q_1$ to start, indexing from $1$ like Maple. There are a variety of conventions in place, and one must be careful.
```python
# Compute and return successive elements of the continued fraction.
# For the base case with just one entry, return the correct a[0]/1 and 0 (could have been "undefined")
# Code translated by Maple's CodeGeneration[Python]
def forward (a):
    n = len(a)
    if n==0:
        return( 0, 0 )
    elif n==1:
        return(a[0], 0)
    else:
        p0 = a[0]
        q0 = 1
        p1 = a[1] * a[0] + 1
        q1 = a[1]
        for k in range(3, n + 1):
            p = a[k - 1] * p1 + p0
            q = a[k - 1] * q1 + q0
            p0 = p1
            p1 = p
            q0 = q1
            q1 = q
        return(p1 / q1, p0 / q0)
```
```python
ex1,ex0 = forward( [2,1,2,1,1,4,1,1,6,1,1,8,1,1,10,1,1])
print( ex1, ex0, ex1-ex0 )
```
2.7182818284585633 2.718281828470584 -1.2020606732221495e-11
### Games
1. Who can find $x_0 \in (0, 1)$ that gives the biggest partial quotients? The longest transient? The longest period? In the first 5, 10? Obviously taking $x_0 = [1, N]$ for $N$ arbitrarily large works--but a quadratic? a cubic? How about a root of an equation with only coefficients $\pm 1$? How about the smallest partial quotients?
2. Who can write the shortest code? The fastest? the most general?
### The Gauss Map
The pass the parcel map from the game can be expressed mathematically as follows, where `frac` means "take the fractional part":
\begin{equation*}
x_0 \to \mathrm{frac}\left(\frac{1}{x_0}\right)
\end{equation*}
Define
\begin{equation*}
G(x) =
\begin{cases}
\mathrm{frac}\left(\frac{1}{x}\right) & x \neq 0 \\
0 & x = 0
\end{cases}
\end{equation*}
then the 1st parcel is $G(x_0)$ _i.e._ $x_1 = G(x_0)$, 2nd parcel is $G(x_1)$ _i.e._ $x_2 = G(x_1)$, etc.
*Draw $G(x)$* (This is harder to do nicely than it seems it ought to be).
```python
import math
import numpy as np
import matplotlib.pyplot as plt
G = lambda x: math.modf(1/x)[0]
vecG = np.vectorize(G)
x = np.linspace(0, 1, 10001, dtype=float)
y = vecG(x[1:])
y = np.append(0, y)
plt.figure(dpi=300)
plt.plot(x, y, 'k,' )
plt.show()
```
However, from the figure above, we can see that the curve becomes increasingly hard to make out as $x$ approaches $0$. Therefore, instead of computing the corresponding $y$-values for linearly spaced $x$-values, we can plot each branch by computing the inverse:
$$
\begin{align}
y &= G(x) = \mathrm{frac}\left(\frac{1}{x}\right) \\
\frac{1}{x} &= n + y \\
x &= \frac{1}{n+y}
\end{align}
$$
for $n = 1, 2, 3, \ldots$ (the loop variable in the code below starts at $0$, which is why it uses $n+1$).
```python
y = np.linspace(0,1,101, dtype=float)
recip = lambda t: 1.0/t
R = np.vectorize( recip )
y1 = y[1:]
N = 100
plt.figure(dpi=300)
for n in range(N):
    x = R(y1+n+1)
    plt.plot( x, y1, 'k', linewidth=0.1)
plt.show()
```
## Floating-point issues
If one tries to do continued fractions with floating-point arithmetic (e.g. on a calculator) then some "interesting" issues arise. Many instructors won't know how to handle them, either---they're not in any book that we know of, just some papers. But if the student is _not_ interested, the facts may seem dull as well as confusing. Almost no-one likes dealing with rounding error. _In this particular case_ though, there is a uniform ["shadowing"](https://en.wikipedia.org/wiki/Shadowing_lemma) theorem: if one is working in arithmetic with unit roundoff $\mu$ (for IEEE double precision, $\mu = 2^{-53} \approx 10^{-16}$) then the computed $x_k$ from the pass-the-parcel game are the _exact_ $x_k$ for some slightly different starting point $x_0^*$ which differs _at most_ from $x_0$ by $4\mu$. There are still subtleties lying around here, because in floating-point, orbits must ultimately be periodic; and by Lagrange's theorem, these can only be from quadratic irrational $x_0$. These are a set of _measure zero_ in the reals; so we have the paradoxical result that floating-point simulation gives us results that almost surely can't arise if one chooses a true real number $x_0$ "at random". This might be an interesting point of departure for a discussion with the students. The link above is to some fairly deep mathematics, but for the Gauss map, everything can be constructed explicitly.
We start with an example.
### Using Decimal Arithmetic to compute these
Start "pass the parcel" with $3.14$. The originator keeps $\boxed{3}$ and passes $0.14$ to $A$. $A$ inverts on a $3$ digit calculator (yes, we have one that can be set that way!) to get $7.143$; $A$ keeps $\boxed{7}$ and passes $0.143$ to $B$. $B$ inverts to get $7.000$, keeps $\boxed{7}$, and the game stops[^5]. This suggests
\begin{equation*}
3.14 = 3 + \cfrac{1}{7 + \cfrac{1}{7}},
\end{equation*}
but is it? Using rational arithmetic, $3 + \left[7 \>, 7\right]$ is $3\frac{7}{50} = 3\frac{14}{100}$ which is $3.14$. So, it worked!
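Here is the same check in exact arithmetic, as a small sketch with Python's `fractions` module:

```python
from fractions import Fraction

value = 3 + 1 / (7 + Fraction(1, 7))
print(value, float(value))   # 157/50 3.14
```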
This kind of thing can get annoying. It's not _much_ of a problem if there are only human players, because they can argue a bit about when to stop, and reach sensible conclusions. The calculator, if _it's_ playing, can't do that, and might return
\begin{equation}
3 + \left[7, 6, 1, 7142857142, 1, 6, 6, 1, 125313283, \ldots\right]
\end{equation}
and keep going from there, with the really large numbers indicating that something unexpected happened; to explain this, we'll need a bit more theory. But first another example. We compute $\frac{10}{3}$ on our calculator (an HP48G+, vintage 1995 or so: there are free simulators available, so you can play along):
$$
\begin{align*}
\frac{10}{3} =\ &3.33333333333 \to \boxed{3} \\
\text{invert} &\phantom{0}.33333333333 \text{ get } 3.00000000003 \to \boxed{3} \\
\text{invert} &\phantom{0}.00000000003 \text{ get } 33333333333.3 \to \boxed{33333333333} \\
\text{invert} &\phantom{0}.3 \text{ get } 3.33333333333 \to \boxed{3} \\
\text{invert} &\phantom{0}.33333333333 \text{ get } 3.00000000003 \to \boxed{3}
\end{align*}
$$
and the game ends because of the _repeat_ rule. So instead of
\begin{equation*}
\dfrac{10}{3} = 3\dfrac{1}{3}
\end{equation*}
the calculator got
\begin{equation*}
\dfrac{10}{3} \stackrel{?}{=} 3 + \cfrac{1}{3 + \cfrac{1}{33333333333 + \cfrac{1}{3 + \cfrac{1}{3 + \cfrac{1}{33\ldots3 + \substack{\ \\ \ \\ \ddots}}}}}}.
\end{equation*}
Because it's repeating, we can actually figure out what that number is:
\begin{equation*}
x = 3 + \cfrac{1}{3 + \cfrac{1}{N + \cfrac{1}{x}}}
\end{equation*}
where $N = 33333333333$ (We've used this trick without comment: it is a bit suspicious, but we assure you it's okay here and can be rigorously justified). Then
$$
\begin{align}
x &= 3 + \cfrac{1}{3 + \cfrac{x}{Nx + 1}} = 3 + \dfrac{Nx + 1}{3Nx + 3 + x} \\
&= \dfrac{9Nx + 9 + 3x + Nx + 1}{3Nx + x + 3}
\end{align}
$$
so
\begin{equation*}
x(3Nx + x + 3) = (9N + N + 3)x + 10
\end{equation*}
or
\begin{equation*}
(3N + 1)x^2 - (10N)x - 10 = 0
\end{equation*}
so
\begin{equation*}
x = \dfrac{10N \pm \sqrt{100N^2 + 40(3N+1)}}{2(3N + 1)} .
\end{equation*}
If we compute this to $30$ digits in Python, like so, we can understand what's happening:
```python
N = 33333333333
x = sympy.Symbol('x')
eqn = (3*N + 1)*x**2 - 10*N*x - 10
sol = sympy.solve(eqn)
[sympy.N(z, 30) for z in sol]
```
[-3.00000000000299999999997600000e-11, 3.33333333333000000000003000000]
We ignore the negative root. We see the problem more clearly: $x$ is _not_ $\frac{10}{3}$ but instead is very close to it. We have computed not the continued fraction for $\frac{10}{3}$, but rather the continued fraction for a number that is very close to $\frac{10}{3}$, because of rounding error.
Computing continued fractions this way _always_ gets the exact continued fraction for a number very close---depending on the precision used---to the one we wanted. In the language of the numerical analyst, this algorithm is _numerically stable_ {cite:p}`Corless1992`.
Notice how different the continued fractions are, though
\begin{equation*}
3 + [3]
\end{equation*}
versus
\begin{equation*}
3 + \left[3, N, 3, 3, N, 3, 3, N, \ldots \right]
\end{equation*}
Nearby numbers will have continued fractions that agree only for a short initial segment (here, only two partial quotients). You should try to convince yourself that, say
\begin{equation*}
x = 1 + \left[2, 2, 2, 2, 2 \right]
\end{equation*}
and
\begin{equation*}
y = 1 + \left[2, 2, 2, 2, 2, M \right]
\end{equation*}
where $M = 1000$, say, are quite close; $x \approx y$. They'll be closer yet if $M = 10^6$, and closer yet again if $M = 10^{10}$. Try it and see.
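Here is one way to "try it and see", reusing the `list_to_frac` helper defined earlier (so run that cell first); the three values of $M$ below are just examples:

```python
x = list_to_frac([1, 2, 2, 2, 2, 2])
for M in [10**3, 10**6, 10**10]:
    y = list_to_frac([1, 2, 2, 2, 2, 2, M])
    print(M, x, y, abs(x - y))   # the gap shrinks as M grows
```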
## Random Facts about Continued Fractions
### Euclid's algorithm for greatest common divisor.
Suppose we compute the continued fraction for a rational number, $p/q$. It turns out the steps of the algorithm above—that is, applying the Gauss map and remembering the integer parts that arise—are exactly the steps of Euclid's algorithm for finding the greatest common divisor of $p$ and $q$. Take, for instance, $16/10$. The integer part is $1$ and we have $6/10$ as the fractional part; invert it to get $10/6$ which has integral part $1$ again and fractional part $4/6$; invert that to get $6/4$ with integral part $1$ again and fractional part $2/4$; invert again to get $4/2$ and now we have an exact division to get $2$ (which is the GCD). We can also (according to our rule, we don't do this, but this time we say that we can) write $2$ as $1+1/1$. This gives $16/10 = [1;1,1,2] = [1;1,1,1,1]$. Working backwards, we have
\begin{equation}
1 + \cfrac{1}{1+ \cfrac{1}{1 + \cfrac{1}{2}}} = 1 + \cfrac{1}{1+ \cfrac{2}{3}} = 1 + \cfrac{3}{5} = \frac{8}{5}
\end{equation}
which is obviously right.
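The connection is easy to see in code. This sketch (the function name `euclid_quotients` is ours) collects the quotients produced by Euclid's algorithm, and they are exactly the partial quotients of $p/q$:

```python
def euclid_quotients(p, q):
    """Quotients from Euclid's algorithm applied to (p, q); these are also
    the partial quotients of the continued fraction of p/q."""
    quotients = []
    while q != 0:
        a, r = divmod(p, q)   # p = a*q + r
        quotients.append(a)
        p, q = q, r
    return quotients

print(euclid_quotients(16, 10))    # [1, 1, 1, 2]
print(euclid_quotients(355, 113))  # [3, 7, 16], i.e. 3 + [7, 15, 1]
```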
### The continued fraction for the golden ratio and its connection to Fibonacci numbers
The continued fraction with all partial quotients equal to $1$ gives the _golden ratio_ $\phi = (1+\sqrt{5})/2 = 1.618\ldots$. This is so because
\begin{equation}
\phi = 1 + \frac{1}{\phi}
\end{equation}
and recursively substituting the equation into its own right hand side produces nothing but
\begin{equation}
\phi = [1;1,1,1,1,1,1\ldots]
\end{equation}
Truncating these gives approximations to $\phi$ as ratios of Fibonacci numbers. This continued fraction is interesting for several reasons, including the notion of "noble" numbers, which are all those numbers which have continued fractions _ending_ in $[\ldots, 1, 1, 1, \ldots]$, that is, ending with an infinite sequence of $1$s. These somehow are the "least sensitive" to perturbations, and they show up in physical situations involving resonance (such as the rings around Saturn and Jupiter).
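A quick sketch of the Fibonacci connection (the variable names are ours): the truncations of $[1; 1, 1, \ldots]$ are ratios of consecutive Fibonacci numbers, and they close in on $\phi$ from alternating sides.

```python
# Ratios of consecutive Fibonacci numbers are the convergents of [1; 1, 1, ...].
fib = [1, 1]
for _ in range(10):
    fib.append(fib[-1] + fib[-2])
phi = (1 + 5**0.5) / 2
for k in range(1, 10):
    ratio = fib[k + 1] / fib[k]
    print(fib[k + 1], '/', fib[k], '=', ratio, ' error', ratio - phi)
```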
### Inequalities
Because the partial quotients $a_k$ of a simple continued fraction are positive integers, something interesting happens with sequential convergents $p_{2k-1}/q_{2k-1}$, $p_{2k}/q_{2k}$, and $p_{2k+1}/q_{2k+1}$. Let's consider it carefully. We are indexing from zero for the first entry: suppose $x = [a_0; a_1, a_2, a_3, \ldots ]$. Then $x_0 = p_0/q_0 = a_0/1$ is an integer. The next iterate $x_1 = a_0 + 1/a_1$ is larger by a positive amount: $x_0 < x_1$. But then $x_2 = a_0 + 1/(a_1 + 1/a_2)$ and (because the denominator is _bigger_) $1/(a_1+1/a_2) < 1/a_1$. But it's still positive. So $x_0 < x_2 < x_1$.
Now it begins to get a little hairy, but $x_3 = a_0 + 1/(a_1 + 1/(a_2 + 1/a_3))$; now $a_2 + 1/a_3 > a_2$, so taking reciprocals flips the inequality: $1/(a_2 + 1/a_3) < 1/a_2$ (this is just what we did with $a_1$ and $a_2$ before); now adding $a_1$ we have $a_1 + 1/(a_2 + 1/a_3) < a_1 + 1/a_2$. Okay. Now reciprocate again, and this flips the sign of the inequality again, and when we add $a_0$ to both sides we have
\begin{equation}
a_0 + \cfrac{1}{a_1 + \cfrac{1}{a_2 + \cfrac{1}{a_3}}} > a_0 + \cfrac{1}{a_1 + \cfrac{1}{a_2}}\>.
\end{equation}
We don't know any "easy" way to just _see_ that; we had to work it through. But once we believe it, then we find $x_3 > x_2$. What we have now is
\begin{equation}
x_0 < x_2 < x_3 < x_1
\end{equation}
and perhaps you will believe that this process can be continued and _all_ the even-numbered convergents will be smaller than _all_ the odd-numbered convergents, and that _all_ the even-numbered convergents increase and _all_ the odd-numbered convergents decrease. We have
\begin{equation}
x_0 < x_2 < x_4 < x_6 < \cdots < x < \cdots < x_7 < x_5 < x_3 < x_1 \>.
\end{equation}
Can you _prove_ this?
### Differences of convergents
Theorem 1.4 of Olds (p. 27) states that
\begin{equation}
p_{n+1}q_n - p_nq_{n+1} = (-1)^{n}
\end{equation}
where we have changed the formula so it indexes from zero. Let us verify this for $p_0/q_0 = a_0/1$ and $p_1/q_1 = (a_0a_1+1)/a_1$: the case $n=0$ gives
\begin{equation}
p_1q_0 - p_0q_1 = (a_0a_1+1)\cdot 1 - a_0a_1 = 1 = (-1)^0
\end{equation}
so we have the indexing correct there, anyway.
Dividing both sides of that equation by $q_nq_{n+1}$ we have
\begin{equation}
\frac{p_{n+1}}{q_{n+1}} - \frac{p_n}{q_n} = \frac{(-1)^n}{q_nq_{n+1}}\>,
\end{equation}
which tells us something quite important: because the denominators _grow exponentially_ with $n$ (they grow at least as fast as Fibonacci numbers do), then the _difference between successive continued fractions_ can be made as small as we please by taking $n$ large enough.
Let's look at an example, with $n=2$:
\begin{equation}
x_3-x_2 = 1 + \cfrac{1}{2 + \cfrac{1}{3 + \cfrac{1}{4}}} - \left( 1 + \cfrac{1}{2 + \cfrac{1}{3}} \right)
\end{equation}
The code below shows that this is positive, and it should be $1/(30\cdot 7) = 1/210$. Computation shows that it is.
__Remark__ Using this theorem it is fairly easy to prove the inequalities in the cell above this one.
__Another Remark__ Going back and looking at the code `forward`, we see that the _code_ indexes the lists from $0$, and the variables are named as if the indexing starts from zero, but then it abandons that connection for space reasons and just re-uses variables instead of keeping all convergents. "Off-by-one" errors are extremely common in computing, and while—after many years of practice—we now can make our programs work relatively quickly, such indexing issues don't make it easier. In the Maple version of this book, all the indexing is from one, and in some ways that makes it harder; but then there are issues with "off by one" errors in termination in Python, too, so it's kind of "six of one, half a dozen of the other". Indeed the newer programming language Julia indexes from $1$, like Maple, for much this reason. We have no advice except to be careful and to check.
```python
ex1,ex0 = forward( [1,2,3,4] )
print( ex1, ex0, ex1-ex0, 1/210 )
```
1.4333333333333333 1.4285714285714286 0.004761904761904745 0.004761904761904762
### Solving Diophantine equations
A [Diophantine equation](https://en.wikipedia.org/wiki/Diophantine_equation) is an ancient type of equation, one where the solutions are desired to be _integers_ (and usually _positive_ integers). They are named for [Diophantus](https://en.wikipedia.org/wiki/Diophantus) who wrote a book about them. Continued fractions (or, equivalently, Euclid's algorithm) can be used to solve _linear_ Diophantine equations in two variables, and can also be used to solve _Pell's equation_
\begin{equation}
x^2 - Ny^2 = 1
\end{equation}
Perhaps the most famous of this kind of equation is [Archimedes' "Cattle of the Sun"](https://en.wikipedia.org/wiki/Archimedes's_cattle_problem) problem from antiquity.
### Generalized continued fractions
Continued fractions don't have to have just the "simple" form used above, but can also have different things in the numerator. For example, [Lord Brouncker](https://en.wikipedia.org/wiki/William_Brouncker,_2nd_Viscount_Brouncker) found the following continued fraction for $4/\pi$, at some time prior to 1656 when it was reported in a book by the mathematician [John Wallis](https://en.wikipedia.org/wiki/John_Wallis):
\begin{equation}
\frac{4}{\pi} = 1+ \cfrac{1^2}{2 + \cfrac{3^2}{2 + \cfrac{5^2}{2 + \cfrac{7^2}{2 + \cfrac{9^2}{\ddots}}}}}.
\end{equation}
### Lambert's proof of the irrationality of $\pi$
[Johann Heinrich Lambert](https://en.wikipedia.org/wiki/Johann_Heinrich_Lambert) used the following continued fraction to prove that $\pi$ [is irrational](https://en.wikipedia.org/wiki/Proof_that_%CF%80_is_irrational#Lambert's_proof).
\begin{equation}
\tan(x) = \cfrac{x}{1-\cfrac{x^{2}}{3-\cfrac{x^{2}}{5-\cfrac{x^{2}}{7-\cfrac{x^{2}}{9-\cfrac{x^{2}}{\ddots}}}}}}
\end{equation}
He first proved that if $x$ was rational then $\tan(x)$ must be irrational using this fraction; since $\tan(\pi/4) = 1$ is rational, it must then be true that $\pi/4$ is irrational, and hence $\pi$ must be irrational.
### Galois' Theorem
[Évariste Galois](https://en.wikipedia.org/wiki/%C3%89variste_Galois) discovered and proved that all _purely periodic_ continued fractions (like $1+\sqrt{2}$) are special quadratic irrationals: they are the positive roots of quadratic equations with integer coefficients, and the _other root_ of the quadratic must lie in the interval $(-1,0)$. This might be fun for people to explore or argue over.
### Lagrange's Theorem
[Joseph Louis Lagrange](https://en.wikipedia.org/wiki/Joseph-Louis_Lagrange) discovered and proved that all _ultimately periodic_ continued fractions are quadratic irrationals; that is, they are the roots of quadratic equations with integer coefficients.
This has startling implications: cube roots will not be periodic (is there any pattern, though?) Transcendental numbers will not be periodic (but the number $e$ and its relatives have _amazing_ patterns in them). There is a lot to play with, here.
Proving that an ultimately periodic continued fraction is a quadratic irrational is not so hard; proving that _all_ quadratic irrationals are ultimately periodic is harder. The proofs are in Olds' book, however, if students are interested.
Here is a graphical representation of one "orbit" of the game of pass the parcel if one starts with $x_0 = \frac{14981}{19661}-\frac{\sqrt{46}}{19661} \approx 0.761620348406331$. The game proceeds as in the table below.
| $n$ | $x_n$ | approximation | $a_n$ |
|----|-----------------------------------------------|-------------------|---|
| 0 | $\frac{14981}{19661}-\frac{\sqrt{46}}{19661}$ | 0.761620348406331 | 1 |
| 1 | $\frac{3566}{11415}+\frac{\sqrt{46}}{11415}$ | 0.312990129652485 | 3 |
| 2 | $\frac{112}{557}-\frac{\sqrt{46}}{1114}$ | 0.194988931792527 | 5 |
| 3 | $-\frac{1}{45}+\frac{\sqrt{46}}{45}$ | 0.128496221847228 | 7 |
| 4 | $\sqrt{46}-6$ | 0.782329983125264 | 1 |
| 5 | $-\frac{2}{5}+\frac{\sqrt{46}}{10}$ | 0.278232998312525 | 3 |
| 6 | $-\frac{5}{3}+\frac{\sqrt{46}}{3}$ | 0.594109994375093 | 1 |
| 7 | $-\frac{2}{7}+\frac{\sqrt{46}}{7}$ | 0.683189997589324 | 1 |
| 8 | $-\frac{2}{3}+\frac{\sqrt{46}}{6}$ | 0.463721663854212 | 2 |
| 9 | $-\frac{6}{5}+\frac{\sqrt{46}}{5}$ | 0.156465996625053 | 6 |
| 10 | $-3+\frac{\sqrt{46}}{2}$ | 0.391164991562646 | 2 |
| 11 | $-\frac{4}{5}+\frac{\sqrt{46}}{5}$ | 0.556465996625050 | 1 |
| 12 | $-\frac{1}{3}+\frac{\sqrt{46}}{6}$ | 0.797054997187546 | 1 |
| 13 | $-\frac{5}{7}+\frac{\sqrt{46}}{7}$ | 0.254618569017895 | 3 |
| 14 | $-\frac{4}{3}+\frac{\sqrt{46}}{3}$ | 0.927443327708428 | 1 |
| 15 | $-\frac{3}{5}+\frac{\sqrt{46}}{10}$ | 0.0782329983125263|12 |
| 16 | $\sqrt{46}-6$ | 0.782329983125292 | 1 |
and because $x_{16} = x_4$ the game ends. That table is a bit hard to read; it's easier in graphical form. We see the initial transient, and then the loop. Once on the loop, the game loops forever. What Lagrange's theorem says is that _every_ quadratic irrational has such a graph: a possibly very long transient followed by a possibly long loop. Moreover, every such graph gives the continued fraction of a quadratic irrational.
```{image} ../Figures/Continued\ Fractions/nicetransient.png
:height: 300px
:alt: Orbit of the Gauss map for a quadratic irrational, showing a transient followed by a repeating loop
:align: center
```
### Mental estimation of square roots
\begin{equation*}
\sqrt{a^2 + b} \doteq a + \dfrac{b}{2a}
\end{equation*}
E.g.
\begin{equation*}
\sqrt{65} = \sqrt{64 + 1} \doteq 8 + \dfrac{1}{16}
\end{equation*}
Check
\begin{equation*}
\left(8 + \dfrac{1}{16}\right)^2 = 8^2 + 2\cdot 8 \cdot \dfrac{1}{16} + \dfrac{1}{16^2} = 65 + \dfrac{1}{16^2}
\end{equation*}
Also, $1/16 = 0.0625$ so $8 + 1/16 = 8.0625$ so when we square that, either by hand multiplication or by calculator, we get $65.00390625$ which is more convincing because decimals are more intelligible.
Now it turns out there is a general formula, with a (slightly) generalized kind of continued fraction, namely
\begin{equation*}
\sqrt{a^2 + b} = a + \cfrac{b}{2a + \cfrac{b}{2a + \cfrac{b}{2a + \substack{\ \\ \ \\ \ddots}}}}
\end{equation*}
(see page 137 of {cite:p}`Olds1963`) so
\begin{equation*}
8 + \cfrac{1}{16 + \cfrac{1}{16 + \cfrac{1}{16 + \substack{\ \\ \ \\ \ddots}}}}
\end{equation*}
ought to give a _better_ estimate of $\sqrt{65}$. Taking just two of the "16"s we get $8.0622568093$, whereas the true square root of $65$ starts out $8.0622577483$, which is about six significant figures of agreement.
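Here is a sketch (the helper name `sqrt_cf` is ours) that evaluates the generalized continued fraction for $\sqrt{a^2+b}$ to a few levels and compares the result with the true square root of $65$:

```python
import math

def sqrt_cf(a, b, levels):
    """Evaluate a + b/(2a + b/(2a + ...)) with the given number of levels."""
    tail = 0.0
    for _ in range(levels):
        tail = b / (2 * a + tail)
    return a + tail

for levels in range(1, 6):
    approx = sqrt_cf(8, 1, levels)
    print(levels, approx, approx - math.sqrt(65))
```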
#### A similar method RMC just learned, which is apparently current in some high schools
Instead of estimating $\sqrt{a^2+b}$ by $a + \frac{b}{2a}$, consider instead the following (rather strange-looking at first) trick. Take $\sqrt{73}$ for example. The squares that we know that surround this number are $64$ and $81$. We have by plain subtraction that $64 + 9 = 73 = 81 - 8$. So the distance between the two square numbers is $81 - 64 = 9 + 8 = 17$. Obviously the square root of $73$ must lie between $8$ and $9$. So, think about the number $8$ plus something in between $0$ and $1$ as an approximation to $\sqrt{73}$; why not $8 + \frac{9}{17}$? This is the average distance, in some weird way. This method has the advantage that it is also $9 - \frac{8}{17}$ so you get the same answer no matter which end of the interval you start from.
## Best Approximation
This section "steals a bit of thunder" from the typical entry-level number theory course; but since most students won't take any number theory, this may not be a problem. But it's such a lovely bit of mathematics (which we just talk about here, without proofs) that perhaps the reader will be induced to take the number theory course, later.
One application of continued fractions is to find "nice" approximate fractions for decimals. This is harder than merely writing, say, 3.1416 as $\frac{31416}{10000} = \frac{3927}{1250}$ after cancelling $2^{3}$ from top and bottom. Playing "pass the parcel" gives
\begin{equation*}
3 + \left[7, 16, 11, 6835269.99316 \right]
\end{equation*}
which suggests $3 + \left[7 \>, 16 \>, 11\right]$ is the continued fraction for $\frac{3927}{1250}$. [It feels weird to throw out large numbers, but remember it's $+\frac{1}{\left(M + \cdots\right)}$, which will be very small, that we are really throwing out.]
What happens if we throw out the $11$, also?
$$
\begin{align}
3 + \left[7 \>, 16 \right] &= 3 + \cfrac{1}{7 + \cfrac{1}{16}} \\ &= 3\dfrac{16}{113} \\ &= \dfrac{355}{113} \\
&\doteq 3.14159292036 \>.
\end{align}
$$
Something interesting has happened: this is a _better_ approximation to $\pi$ than $3.1416$ is! This has six correct decimal places, unlike the four we started with! This kind of thing does _not_ always happen, but it happens enough that Derive, a CAS that now lives on certain TI calculators, used (uses) this algorithm and sometimes gets exact answers even when such can't be guaranteed. The underlying theorem is one of best approximation by rational numbers, and it turns out that the convergents of infinite continued fractions give, in a certain sense, these "best" approximations.
In essence, continued fractions are "economical" best approximations. If $\left|x - \frac{p}{q}\right| < \frac{1}{2q^2}$, then necessarily $\frac{p}{q}$ is one of the convergents of the continued fraction. This means that shorter $p$ and $q$ can be used---that is, the fractions are nicer.
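For instance, $\frac{355}{113}$ passes that test for $x = \pi$, which is consistent with it being one of the convergents of $\pi$; a quick numerical check (a sketch, nothing more):

```python
import math

p, q = 355, 113
print(abs(math.pi - p/q))   # about 2.7e-07 ...
print(1 / (2*q**2))         # ... which is well under 1/(2*113**2), about 3.9e-05
```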
### The astonishing continued fractions for e
We need to talk about [Leonhard Euler](https://en.wikipedia.org/wiki/Leonhard_Euler), pronounced in English more like "Oiler", but the Russians, who have a great claim to him because of his years at St. Petersburg, say "euh-ler" with a vowel more like the French "oeuf". Euler was one of the greatest mathematicians—it's sort of pointless to argue who was greater, Archimedes, Newton, Gauss, Euler, or Einstein; Leonardo da Vinci is a contender, too, and how could we compare them? We claim it is impossible to rank people who were clearly smarter than we are. We couldn't even _see_ the differences between them, much less judge them accurately. And, all of them were rich and privileged in their own ways, and without doubt this list of the "greatest" is entirely Eurocentric; where does [Nasir al-Din al-Tusi](https://en.wikipedia.org/wiki/Nasir_al-Din_al-Tusi) (who was the academic ancestor of almost every modern mathematics PhD holder, according to the [Mathematics Genealogy](https://genealogy.math.ndsu.nodak.edu/id.php?id=217509)) fit on this list, or Brahmagupta or Aryabhata or Liu Hui? Anyway, pointless. For any list of top ten, one could make a case that Euler belonged there. His achievements and impact were astounding; one of the "minor" things is that he revolutionized mathematical notation and writing, and not just in English---he wrote mostly in Latin. Reading mathematics before Euler is painful because everyone tried to put things into words; after Euler we are all comfortable reading equations as if they are part of the language, which they _are_.
One of Euler's "minor" technical achievements (well, in comparison to things like the Euler equations for fluid flow, or Euler's method for solving differential equations, or the Euler--Lagrange equations of the calculus of variations) was unlocking many of the secrets of the number $e$, the base of the natural logarithms. Euler defined this as
\begin{equation}
e := \lim_{n\to\infty} \left( 1 + \frac{1}{n}\right)^n
\end{equation}
but we are not going to pursue that here because that is properly part of modern Calculus. Instead, we will display one of Euler's continued fractions:
\begin{equation}
\nu = \frac{e+1}{e-1} = 2 + \cfrac{1}{6 + \cfrac{1}{10 + \cfrac{1}{14 + \cfrac{1}{18 + \cfrac{1}{\ddots}}}}}
\end{equation}
If we know $\nu$, then because $(e-1)\nu = e+1$ or $e(\nu-1) = \nu+1$ we have
\begin{equation}
e = \frac{\nu + 1}{\nu - 1}\>.
\end{equation}
Because the partial quotients in $\nu$ grow so predictably—they increase by $4$ every time—one gets very accurate approximations for $\nu$ very quickly.
```python
evenfrac = list_to_frac( [2,6,10,14,18])
oddfrac = list_to_frac( [2,6,10,14,18,22])
#evenfrac,evalf(evenfrac),oddfrac,evalf(oddfrac), evalf(oddfrac-evenfrac), evalf( (exp(1)+1)/(exp(1)-1));
print( evenfrac, oddfrac, oddfrac-evenfrac, (np.exp(1)+1)/(np.exp(1)-1) )
```
2.1639534135512517 2.1639534137389793 1.8772761123386772e-10 2.163953413738653
Experiments show that the even-ending lists are always _increasing_ while the odd-ending lists are always _decreasing_. This means that
\begin{equation}
\frac{p_0}{q_0} < \frac{p_2}{q_2} < \frac{p_4}{q_4} < \cdots < \frac{p_5}{q_5} <\frac{p_3}{q_3} <\frac{p_1}{q_1}
\end{equation}
and our previous experiment suggested (which turns out to be true) that _all_ odd-ending continued fractions are larger than _all_ even-ending continued fractions, and moreover that the difference between them goes rapidly to zero. This is the basis for the proof that the number $\nu$ really is represented by the continued fraction. We _won't_ fill in the details, although we are so very close: instead we will just claim that these experiments are pointing at true facts.
The _practical_ fact that comes out of this theorem is that
\begin{equation}
\frac{33630}{15541} < \nu < \frac{741721}{342762}
\end{equation}
which, when translated into much more intelligible decimals, says that $\nu = 2.163953413_{55}^{74}$ where the curious subscripts/superscripts mean that the true answer is trapped somewhere between $2.16395341355$ and $2.16395341374$. Welcome to the somewhat niche world of [interval arithmetic](https://en.wikipedia.org/wiki/Interval_arithmetic).
This translates to a similar "trapping" of the value of $e$: we put the too-big rational in the numerator and the too-small rational in the denominator and get an _overestimate_ of $e$, and put the too-small rational in the numerator and the too-big rational in the denominator and get an _underestimate_ of $e$. When we do this we get the _very curious_ estimate, which surely would have pleased Euler,
\begin{equation}
\frac{16853950302}{6200221819} < e < \frac{16853950303}{6200221818}\>.
\end{equation}
Translating this to decimals gives $e = 2.718281828_{29}^{89}$ which again traps $e$. Using _one_ more partial quotient improves this to
\begin{equation}
{\frac{9681562563498}{3561647825527}} < e < {\frac{9681562563499}{3561647825526}}
\end{equation}
or $e = 2.71828182845_{82}^{94}$, which has two more decimal digits nailed down.
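Those bracketing fractions are easy to reproduce with exact arithmetic; here is a sketch using Python's `fractions` module and the two convergents of $\nu$ quoted above:

```python
from fractions import Fraction

nu_lo = Fraction(33630, 15541)     # an even convergent of nu: too small
nu_hi = Fraction(741721, 342762)   # the next (odd) convergent: too big
e_hi = (nu_hi + 1) / (nu_lo - 1)   # too-big numerator over too-small denominator
e_lo = (nu_lo + 1) / (nu_hi - 1)   # too-small numerator over too-big denominator
print(e_lo, '< e <', e_hi)         # the two fractions quoted above
print(float(e_lo), float(e_hi))    # both start 2.718281828...
```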
### Rational approximation to the exponential function
Lambert apparently discovered in 1766 that, if $x \ne 0$,
\begin{equation}
\nu(x) = \frac{e^x+1}{e^x-1} = \frac{2}{x} + \cfrac{1}{\frac{6}{x} + \cfrac{1}{\frac{10}{x} + \cfrac{1}{\frac{14}{x} + \cfrac{1}{\frac{18}{x} + \cfrac{1}{\ddots}}}}}
\end{equation}
Again, if we know $\nu(x)$, then
\begin{equation}
e^x = \frac{\nu(x) + 1}{\nu(x) - 1}\>.
\end{equation}
These give rather effective approximations of $e^x$ by rational functions.
```python
x = sympy.Symbol('x')
nu4 = list_to_frac( [2/x, 6/x, 10/x, 14/x, 18/x ])
print( nu4 )
```
1/(1/(1/(x/18 + 14/x) + 10/x) + 6/x) + 2/x
```python
e4 = (nu4+1)/(nu4-1)
p1 = sympy.plotting.plot( np.abs( e4-sympy.exp(x)), xlim=(0.1,2.1),
ylim=(1.0e-15,1.0e-6), yscale='log', adaptive=False, nb_of_points=300 )
```
### Randomness in Continued Fractions
One of the "generic good questions" in the Preamble was "What does a random choice look like?" For continued fractions, this opens a _huge_ collection of questions. For instance, if we choose an $x_0$ "at random" in the interval $[0,1)$, what does its continued fraction look like? And what does that even mean, "look like"? A very great Russian mathematician, [A. Y. Khinchin](https://en.wikipedia.org/wiki/Aleksandr_Khinchin) (there are about half-a-dozen different ways to transliterate his name to English), solved important problems in this area, and looking at his solutions gives a very good introduction to the deep mathematics known as _ergodic theory_.
One answer (Khinchin's answer, with help from Gauss and from Kuzmin) to what continued fractions "look like" is to look at a frequency distribution of the partial quotients that arise. This is related to the distribution of the $x_n$ that arise from the dynamical system $x_n = G(x_{n-1})$ starting from a "random" $x_0$; this is an instance of the previously-mentioned ergodic theory. It turns out the $x_n$ are distributed not uniformly but according to a known distribution, the so-called _Gauss measure_. We will see more of this in the "Bohemian matrices" unit (actually in the solution to one of the exercises).
Back to partial quotients $a_n$ and their distribution. We've been working with $\pi$ here, so we know its first few partial quotients: $[3,7,15,1,292,\ldots]$. R. William Gosper computed several _million_ partial quotients for $\pi$ in the 1970s, and nowadays many more are known: see [A001203](http://oeis.org/A001203). At this time of writing, the record holder is Syed Fahad, with 30 _billion_ partial quotients. Okay, then. So we ought to be able to study the statistics of this particular continued fraction, and in particular we can draw a frequency distribution. Below, we used only 5000 partial quotients and drew the resulting frequency distribution. About 40% of the time, the partial quotient is a $1$. Next most common is a $2$. The relative likelihood of a partial quotient appearing seems to diminish with its size. This is indeed what happens if one chooses $x_0$ "at random". But!
_It is not known_ whether the distribution of the partial quotients of $\pi$ is "typical" (everyone thinks so, but there is no proof). What _is_ known, which Khinchin proved, is that the distribution is the same (the [Gauss–Kuzmin distribution](https://en.wikipedia.org/wiki/Gauss–Kuzmin_distribution)) for _almost all_ initial numbers $x_0$ in the interval (in a technical sense, for a set of measure 1); and that the geometric mean of the partial quotients tends to a constant, now called [Khinchin's constant](https://en.wikipedia.org/wiki/Khinchin's_constant).
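If you want to see the Gauss–Kuzmin prediction emerge for yourself, here is a rough sketch of ours: iterate the Gauss map on many pseudorandom starting points (trusting the shadowing result mentioned earlier to keep the floating-point statistics honest) and compare the observed frequencies with the Gauss–Kuzmin probabilities $-\log_2\left(1 - 1/(k+1)^2\right)$.

```python
import math
import random
from collections import Counter

random.seed(2022)          # any seed will do; this one is arbitrary
counts = Counter()
total = 0
for _ in range(2000):      # 2000 "random" starting points
    x = random.random()
    for _ in range(50):    # 50 partial quotients from each
        if x == 0:
            break
        a = math.floor(1 / x)   # the kept integer part
        counts[a] += 1
        total += 1
        x = 1 / x - a           # the passed-on fractional part
for k in range(1, 6):
    predicted = -math.log2(1 - 1 / (k + 1)**2)
    print(k, counts[k] / total, predicted)
```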
```{image} ../Figures/Continued\ Fractions/fivethousand.png
:height: 300px
:alt: Frequency distribution for five thousand partial quotients of pi
:align: center
```
There are other kinds of "random continued fractions" questions that could be asked. We invite you to pose some! We take up this challenge in the exercises for the "Bohemian Matrices" unit, where we again see the Gauss–Kuzmin distribution.
## Notes and further reading
We have scattered several links throughout this unit. Here are some more.
- [Sacred Geometry](https://www.sacred-geometry.es/?q=en/content/continued-fractions)
- [The results on the OEIS you get when you search for "Continued Fraction"](http://oeis.org/search?q=continued+fraction&language=english&go=Search)
- [Carl Douglas Olds' Chauvenet Prize-winning paper](https://www.jstor.org/stable/2318113) (JSTOR has free memberships available; but you can go in through your library, if you have one, by using libkey.io/ instead of https:// for that link).
- [Bill Gosper's original work described in the famous MIT HAKMEM](https://w3.pppl.gov/~hammett/work/2009/AIM-239-ocr.pdf) Bill Gosper used Möbius transformations to perform arithmetic on _infinite_ continued fractions. You should read it for Bill's language alone (he still talks like that, and is simply wonderful to be around). There are also astonishing facts about continued fractions in there: for instance, _every_ continued fraction where the partial quotients are in arithmetic progression has a known closed form involving Bessel functions.
- [A lovely paper on the geometry of continued fractions by Alan Beardon and Ian Short](https://www.jstor.org/stable/10.4169/amer.math.monthly.121.05.391) which makes some beautiful diagrams of _horocircles_ and makes use of Möbius transformations.
- [A shorter and even more lovely paper by Alan Beardon](https://www.jstor.org/stable/10.4169/math.mag.88.4.272) again using Möbius maps.
## Practice Problems and Exercises
1. Write down as many questions as you can, about this section.
2. Open a fresh Jupyter notebook and type in a code cell the following three lines:
```python
x0 = 1
x1 = (x0 + 2/x0)/2
print (x1)
```
and then hold down the Control key and press the Enter key. There, you have just used Python to compute the first Newton iterate for the square root of two; the computer should have printed out `1.5`.
3. Now copy the final two lines of that cell (not the `x0=1`) and put them in a fresh code cell, and change `x0` to `x1` and `x1` to `x2` everywhere. Run it again. The notebook should print `1.4166666666666665`. Do it again 4 more times, changing `x2` to `x3`, and `x3` to `x4`, and `x4` to `x5`, and `x5` to `x6` in their newly copied lines. You should find after running the program that _both_ `x5` and `x6` are `1.414213562373095`; no matter how many more times you do this (`x7`, `x8`, whatever) it won't change any more.
4. Now go back and modify your print statements to be `print(x1, x1**2-2)`, `print(x2, x2**2-2)`, and so on, all the way up to `print(x6, x6**2-2)` and run all the cells again (in order). You should see that the second numbers printed get smaller each time, until the line for `x5`. That line says that `x5` squared differs from 2 by only about -4.4 times ten to the minus 16 (we will see in a moment that this is not a very trustworthy statement). That is, Python says that `x5` is the exact square root of a number only a proton's width away from two (see the appendix on floating point numbers).
5. Now we are going to do the same in _rational arithmetic_ by looking after the numerators and denominators $p_n$ and $q_n$ ourselves. Either by going back and changing all your previous cells, or by writing fresh cells, enter the following (it can all be in one cell)
```python
p0 = 1
q0 = 1
p1 = p0**2 + 2*q0**2
q1 = 2*p0*q0
print( p1, q1, p1/q1, (p1/q1)**2-2, p1**2 - 2*q1**2, q1**2 )
... (these dots mean do the case p2/q2, p3/q3, all the way up to the end)
p6 = p5**2 + 2*q5**2
q6 = 2*p5*q5
print( p6, q6, p6/q6, (p6/q6)**2-2, p6**2 - 2*q6**2, q6**2 )
```
You should be a little tired of cutting-and-pasting and changing 3s to 4s and 4s to 5s etc; it's not _too bad_ in such a short program (and that's what it is, technically called a "straight-line program" because it has no loops), but it's clearly repetitive and error-prone unless you are very finicky (we are very finicky). We'll start using loops in a moment, but right now there are two other puzzles that should appear when you run this program. First, the `pn/qn` ratios should be giving the (apparently) same numbers as the `xn` before, and similarly the difference between squaring the ratio and 2. But the last two entries give (as a ratio) the _exact_ numbers for `(pn/qn)**2 - 2` (if we have done our algebra right). Our program generates the ratios $1/4$, $1/144$, $1/166464$, and so on until
\begin{equation*}
x_6 = p_6/q_6 = \frac{1572584048032918633353217}{1111984844349868137938112}.
\end{equation*}
(If you did not get those numbers, go look for your typos)<br>
Python says that
\begin{equation*}
\left( \frac{p_6}{q_6} \right)^2 - 2 = \frac{1}{1236510294063800469693771621893337765354742124544}.
\end{equation*}
That's about $8.0\times 10^{-49}$, not the $-4.4\times 10^{-16}$ from before. The sign isn't even the same. What happened? The puzzles are resolved by thinking about floating-point arithmetic versus exact integer arithmetic. Write out a paragraph describing your understanding of the differences, and then read the symbolic algebra appendix and the floating-point appendix. <br>
One final point of this exercise: we did not ever compare `p3/q3` to `p2/q2`, or any iterate to its previous one; instead, we tried to decide how good any iterate was (as an approximation to the square root of two) by checking to see how close its square was to two. This is a kind of error analysis called "backward error analysis" and we will see that it is very useful.
6. _Lists_ in Python. Lists are enclosed in square brackets, like this:
```python
x = [1.0] # x is a list with just one element, namely the floating-point number 1.0
print( x[0] ) # the first element has index zero; Python counts from 0
```
Type the above two lines into a fresh cell (don't just copy-and-paste, really type; it's practice for your fingers). You don't have to type the comments (The hashtag and everything after that on each line) but you may.
7. You can use a single list to store all the numbers `x0`, `x1`, `x2`, and so on; type these lines in
```python
x = [1.0]
print( x[0] )
nxt = (x[0]+2/x[0])/2
x.append( nxt ) # This appends an element to the list "x" (if the list was called y, you would say y.append( nxt ))
print( "The list x is ", x )
print( "The first element of x is ", x[0] )
print( "The second element of x is ", x[1] )
```
That doesn't look very different to using two variables `x0` and `x1`, but it is: we can now automatically increment the indices.
8. Type in the following and execute them:
```python
x = [1.0] # We reproduce our iteration using the list and indices into the list so we don't have new variable names
nxt = (x[0]+2/x[0])/2
x.append( nxt )
nxt = (x[1]+2/x[1])/2
x.append( nxt )
nxt = (x[2]+2/x[2])/2
x.append( nxt )
nxt = (x[3]+2/x[3])/2
x.append( nxt )
nxt = (x[4]+2/x[4])/2
x.append( nxt )
nxt = (x[5]+2/x[5])/2
x.append( nxt )
print( "The list x is ", x )
print( "The fifth element of x is ", x[4] )
print( "The sixth element of x is ", x[5] )
print( "The seventh element of x is ", x[6] )
```
9. _Loops at last_ Type in the following and execute it:
```python
x = [1.0]
for k in range(6):
nxt = ( x[k] + 2/x[k] )/2 # We don't really need "nxt" but it's a little more readable this way
x.append( nxt )
print( x )
```
The indentation is important there. More concisely, without the extra variable "nxt",
```python
x = [1.0]
for k in range(6):
x.append( (x[k]+2/x[k])/2 )
print( x )
```
10. Write a loop that uses two lists of integers, say `p` and `q`, and computes the exact integer numerators and denominators for the first six iterates. Our answer: When we print `p` and `q` we get the following:
$$
\begin{gather*}
[1, 3, 17, 577, 665857, 886731088897, 1572584048032918633353217] \\
[1, 2, 12, 408, 470832, 627013566048, 1111984844349868137938112]
\end{gather*}
$$
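One loop that produces those two lists (a sketch; there are many equivalent ways to write it) is
```python
p = [1]
q = [1]
for k in range(6):
    p.append(p[k]**2 + 2*q[k]**2)   # appending does not change p[k], so this is safe
    q.append(2*p[k]*q[k])
print(p)
print(q)
```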
11. Which method gives a better approximation to $\sqrt{73}$, the $a + b/(2a)$ formula or the "blending" formula taught in some high schools and mentioned above?
12. Which method would be easier to teach to high school students, do you think? Why do you think so?
13. Write a Python program that plots the Gauss map on a _torus_. Think of it as wrapping the top and bottom of the unit square around a cylinder, and then bending the cylinder around to make a torus. Compare to the graph on the cover of the March 1992 issue of the American Mathematical Monthly, that is, Volume 99, no. 3.
## Open Problems/Big Projects
```{epigraph}
math wasn’t invented or discovered, math was manifested
-- [Mansi Bezbaruah](https://twitter.com/djmansib/status/1486205992140476417?s=20&t=JJ1YOr3N2adjFCBubzwiew)
```
0. Open the Online Encyclopedia of Integer Sequences, and choose a sequence to work with (say, [The Thue--Morse sequence A010060](http://oeis.org/A010060)). Turn it into a continued fraction any way you like (e.g. make a decimal out of the sequence and compute its continued fraction; but do as you please!). Discover something about that continued fraction. Do not, and we repeat, do not get distracted and explore the OEIS for its own sake. Really, don't do that. Wait! Stop! Come back!
1. Are the elements (partial quotients) in the CF for Stark's number bounded? Some references: [A paper on algorithms to compute continued fractions](https://doi.org/10.1007/3-540-61581-4_39) and [a page of Arcana including Continued Fractions](http://www.numericana.com/answer/fractions.htm).
2. Is there any pattern in the simple continued fraction for $\pi$?
3. What can you say about continued fractions of bounded height?
4. Implement a "rounded rational" arithmetic package (in whatever language you like). Try to give an "IEEE-like" guarantee:
rr$(a\ op\ b) = (a\ op\ b)(1 + \delta)$ where $\left|\delta\right|$ is as small as possible given the restriction. RMC did this years ago in a now-vanished language (which might come back). You can find that source code in Chapter 22 of [The Aldor Manual](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.196.3360&rep=rep1&type=pdf).
5. Does the continued fraction for [Khinchin's constant](https://en.wikipedia.org/wiki/Khinchin's_constant) follow the Gauss–Kuzmin distribution? It's a pretty puzzle to compute the constant quickly, by the way. See [http://oeis.org/A002210](http://oeis.org/A002210).
\begin{equation}
K = \prod_{k\ge 1} \left( 1 + \frac{1}{k(k+2)}\right)^{\log_2 k} \approx 2.685452001\ldots
\end{equation}
[^1]: Here, $x_0 = 1$, $x_1 = \frac{3}{2}$, so on and so forth.
[^2]: In English, as opposed to mathematics, zero is not a number. If we say that we have a number of things to talk about, we don't mean there's nothing to say!
[^3]: Avogadro's number is $6.022\cdot 10^{23}$, about.
[^4]: There is for sure a possibility that you will doubt this, at this moment. No proof will be provided here just yet, because not everyone likes proofs.
[^5]: Except when we subtract 7 from 7.000 we got---$1.40\cdot10^{-8}$, not zero! So the 7 for $B$ should have been 6, the game _didn't_ stop, and $C$ gets something that displays as $1.000$, take $\boxed{1}$, and the fractional part is $1.400\cdot10^{-10}$. _oops_.
| c86a3e9e77545c69b9c501d8ff8dbd0e01769cc0 | 316,691 | ipynb | Jupyter Notebook | book/Contents/continued-fractions.ipynb | jameshughes89/Computational-Discovery-on-Jupyter | 614eaaae126082106e1573675599e6895d09d96d | ["MIT"] | 14 | 2022-02-21T23:50:22.000Z | 2022-03-23T22:21:55.000Z | book/Contents/continued-fractions.ipynb | jameshughes89/Computational-Discovery-on-Jupyter | 614eaaae126082106e1573675599e6895d09d96d | ["MIT"] | null | null | null | book/Contents/continued-fractions.ipynb | jameshughes89/Computational-Discovery-on-Jupyter | 614eaaae126082106e1573675599e6895d09d96d | ["MIT"] | 2 | 2022-02-22T02:43:44.000Z | 2022-02-23T14:27:31.000Z | 171.369589 | 116,616 | 0.850671 | true | 25,859 | Qwen/Qwen-72B | 1. YES 2. YES | 0.810479 | 0.835484 | 0.677142 | __label__eng_Latn | 0.993602 | 0.411558 |
# Taylor Expanding the Non-linearities
```python
import sympy as sp
```
```python
# Define the symbols
gamma_max = sp.Symbol('\gamma_{max}')
nu_max = sp.Symbol(r'\nu_{max}')
Kd = sp.Symbol('K_D')
phi_R = sp.Symbol('\phi_R')
phi_O = sp.Symbol('\phi_O')
lam = sp.Symbol('\lambda')
phi_P = 1 - phi_R
# Define the self-replicator steady state equations
c_AA = (nu_max * phi_P / lam) - 1
lam_eq = gamma_max * phi_R * (c_AA / (c_AA + Kd))
phiR_soln = sp.solve(lam_eq - lam, phi_R)[0]
```
```python
def taylor(function, x0, n, x):
i = 0
p = 0
while i <= n:
p = p + (function.diff(x, i).subs(x, x0))/(sp.factorial(i))*(x - x0)**i
i += 1
return p
phiR_expand = taylor(phiR_soln, x=lam, x0=0, n=2)
```
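As a quick sanity check of the `taylor` helper (not part of the original analysis), it reproduces the familiar quadratic Taylor polynomial of $e^\lambda$ about $\lambda = 0$:
```python
# Sanity check (illustrative only): the quadratic Taylor polynomial of exp(lambda)
# about 0 should come out as 1 + lambda + lambda**2/2.
sp.expand(taylor(sp.exp(lam), x0=0, n=2, x=lam))
```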
```python
phiR_expand
```
$\displaystyle - \frac{\lambda^{2} \left(- 4 K_{D} \gamma_{max} \nu_{max} + \gamma_{max}^{2} + 2 \gamma_{max} \nu_{max} + \nu_{max}^{2} - \frac{\left(\gamma_{max}^{2} \nu_{max} + \gamma_{max} \nu_{max}^{2}\right)^{2}}{\gamma_{max}^{2} \nu_{max}^{2}}\right)}{4 \gamma_{max} \nu_{max} \sqrt{\gamma_{max}^{2} \nu_{max}^{2}}} + \frac{\lambda \left(- \gamma_{max} + \nu_{max} - \frac{- \gamma_{max}^{2} \nu_{max} - \gamma_{max} \nu_{max}^{2}}{\sqrt{\gamma_{max}^{2} \nu_{max}^{2}}}\right)}{2 \gamma_{max} \nu_{max}} + \frac{\gamma_{max} \nu_{max} - \sqrt{\gamma_{max}^{2} \nu_{max}^{2}}}{2 \gamma_{max} \nu_{max}}$
```python
```
| f27c98b72daa25877c03ebe8cb58990754e3968a | 3,669 | ipynb | Jupyter Notebook | code/analysis/simplifying_expressions.ipynb | gchure/modelling_growth | 764d7aee4d0d562cd5e1b6e21b534ab465d1d672 | ["MIT"] | null | null | null | code/analysis/simplifying_expressions.ipynb | gchure/modelling_growth | 764d7aee4d0d562cd5e1b6e21b534ab465d1d672 | ["MIT"] | null | null | null | code/analysis/simplifying_expressions.ipynb | gchure/modelling_growth | 764d7aee4d0d562cd5e1b6e21b534ab465d1d672 | ["MIT"] | null | null | null | 30.322314 | 667 | 0.502589 | true | 548 | Qwen/Qwen-72B | 1. YES 2. YES | 0.924142 | 0.771843 | 0.713293 | __label__yue_Hant | 0.356731 | 0.49555 |
# Simulation of Ball drop and Spring mass damper system
"Simulation of dynamic systems for dummies".
This is a very simple description of how to do time simulations of a dynamic system using the SciPy ODE (Ordinary Differential Equation) solver.
```python
from scipy.integrate import odeint
import numpy as np
import matplotlib.pyplot as plt
```
## Simulation of a static system to introduce ODEint
Define a method that takes a system state and describes how this state will change in time. The method does this by returning time derivatives for each state. The ODE solver will use these time derivatives to calculate new states for the next time step.
Here is a method that simulates a train traveling at constant speed:
(The system has only one state, the position of the train)
```python
V_start = 150*10**3/3600 # [m/s] Train velocity at start
def train(states,t):
# states:
# [x]
x = states[0] # Position of train
dxdt = V_start # The position state will change by the speed of the train
# Time derivative of the states:
d_states_dt = np.array([dxdt])
return d_states_dt
```
```python
x_start = 0 # [m] Train position at start
# The states at start of the simulation, the train is traveling with constant speed V at position x = 0.
states_0 = np.array([x_start])
# Create a time vector for the simulation:
t = np.linspace(0,10,100)
# Simulate with the "train" method and start states for the times in t:
states = odeint(func = train,y0 = states_0,t = t)
# The result is the time series of the states:
x = states[:,0]
```
```python
fig,ax = plt.subplots()
ax.plot(t,x,label = 'Train position')
ax.set_title('Train traveling at constant speed')
ax.set_xlabel('time [s]')
ax.set_ylabel('x [m]')
a = ax.legend()
```
The speed can however be a state too:
```python
def train_2_states(states,t):
# states:
# [x,V]
x = states[0] # Position of train
V = states[1] # Speed of train
dxdt = V # The position state will change by the speed of the train
dVdt = 0 # The velocity will not change (No acceleration)
# Time derivative of the states:
d_states_dt = np.array([dxdt,dVdt])
return d_states_dt
```
```python
# The states at start of the simulation, the train is traveling with constant speed V at position x = 0.
states_0 = np.array([x_start,V_start])
# Create a time vector for the simulation:
t = np.linspace(0,10,100)
# Simulate with the "train" method and start states for the times in t:
states = odeint(func = train_2_states,y0 = states_0,t = t)
# The result is the time series of the states:
x = states[:,0]
dxdt = states[:,1]
```
```python
fig,axes = plt.subplots(ncols = 2)
fig.set_size_inches(11,5)
ax = axes[0]
ax.plot(t,x,label = 'Train position')
ax.set_title('Train traveling at constant speed')
ax.set_xlabel('time [s]')
ax.set_ylabel('x [m]')
a = ax.legend()
ax = axes[1]
ax.plot(t,dxdt,label = 'Train speed')
ax.set_title('Train traveling at constant speed')
ax.set_xlabel('time [s]')
ax.set_ylabel('dx/dt [m/s]')
a = ax.legend()
```
## Ball drop
Here is a system where the speed is not constant.
A simulation of a ball drop under the influence of gravity force.
```python
g = 9.81
m = 1
def ball_drop(states,t):
# states:
# [x,v]
# F = g*m = m*dv/dt
# --> dv/dt = (g*m) / m
x = states[0]
dxdt = states[1]
dvdt = (g*m) / m
d_states_dt = np.array([dxdt,dvdt])
return d_states_dt
```
```python
states_0 = np.array([0,0])
t = np.linspace(0,10,100)
states = odeint(func = ball_drop,y0 = states_0,t = t)
x = states[:,0]
dxdt = states[:,1]
fig,axes = plt.subplots(ncols = 2)
fig.set_size_inches(11,5)
ax = axes[0]
ax.plot(t,x,label = 'Ball position')
ax.set_title('Ball drop')
ax.set_xlabel('time [s]')
ax.set_ylabel('x [m]')
a = ax.legend()
ax = axes[1]
ax.plot(t,dxdt,label = 'Ball speed')
ax.set_title('Ball drop')
ax.set_xlabel('time [s]')
ax.set_ylabel('dx/dt [m/s]')
a = ax.legend()
```
Simulating in air, where the ball experiences resistance due to aerodynamic drag.
```python
cd = 0.01
def ball_drop_air(states,t):
# states:
# [x,u]
# F = g*m - cd*u = m*du/dt
# --> du/dt = (g*m - cd*u**2) / m
x = states[0]
u = states[1]
dxdt = u
dudt = (g*m - cd*u**2) / m
d_states_dt = np.array([dxdt,dudt])
return d_states_dt
```
```python
states = odeint(func = ball_drop_air,y0 = states_0,t = t)
x_air = states[:,0]
dxdt_air = states[:,1]
fig,axes = plt.subplots(ncols = 2)
fig.set_size_inches(11,5)
ax = axes[0]
ax.plot(t,x,label = 'Vacuum')
ax.plot(t,x_air,label = 'Air')
ax.set_title('Ball drop in vacuum and air')
ax.set_xlabel('time [s]')
ax.set_ylabel('x [m]')
a = ax.legend()
ax = axes[1]
ax.plot(t,dxdt,label = 'Vacuum')
ax.plot(t,dxdt_air,label = 'Air')
ax.set_title('Ball drop in vacuum and air')
ax.set_xlabel('time [s]')
ax.set_ylabel('dx/dt [m/s]')
a = ax.legend()
```
The very classical dynamic system with a spring, a mass and a damper.
```python
k = 3 # The stiffness of the spring (relates to position)
c = 0.1 # Damping term (relates to velocity)
m = 0.1 # The mass (relates to acceleration)
def spring_mass_damp(states,t):
# states:
# [x,v]
# F = -k*x -c*v = m*dv/dt
# --> dv/dt = (-kx -c*v) / m
x = states[0]
dxdt = states[1]
dvdt = (-k*x -c*dxdt) / m
d_states_dt = np.array([dxdt,dvdt])
return d_states_dt
```
```python
y0 = np.array([1,0])
t = np.linspace(0,10,100)
states = odeint(func = spring_mass_damp,y0 = y0,t = t)
x = states[:,0]
dxdt = states[:,1]
```
```python
fig,ax = plt.subplots()
ax.plot(t,x)
ax.set_title('Spring mass damper simulation')
ax.set_xlabel('time [s]')
a = ax.set_ylabel('x [m]')
```
Also add a gravity force
```python
g = 9.81
def spring_mass_damp_g(states,t):
# states:
# [x,v]
# F = g*m -k*x -c*v = m*dv/dt
# --> dv/dt = (g*m -kx -c*v) / m
x = states[0]
dxdt = states[1]
dvdt = (g*m -k*x -c*dxdt) / m
d_states_dt = np.array([dxdt,dvdt])
return d_states_dt
```
```python
states_g = odeint(func = spring_mass_damp_g,y0 = y0,t = t)
x_g = states_g[:,0]
dxdt_g = states_g[:,1]
```
```python
fig,ax = plt.subplots()
ax.plot(t,x,label = 'No gravity force')
ax.plot(t,x_g,label = 'Gravity force')
ax.set_title('Spring mass damper simulation with and without gravity')
ax.set_xlabel('time [s]')
ax.set_ylabel('x [m]')
a = ax.legend()
```
## SymPy solution
```python
import sympy as sym
import sympy.physics.mechanics as me
```
```python
from sympy.physics.vector import init_vprinting
init_vprinting(use_latex='mathjax')
```
```python
x, v = me.dynamicsymbols('x v')
```
```python
m, c, k, g, t = sym.symbols('m c k g t')
```
```python
ceiling = me.ReferenceFrame('C')
```
```python
O = me.Point('O')
P = me.Point('P')
```
```python
O.set_vel(ceiling, 0)
```
```python
P.set_pos(O, x * ceiling.x)
P.set_vel(ceiling, v * ceiling.x)
P.vel(ceiling)
```
$$v\mathbf{\hat{c}_x}$$
```python
damping = -c * P.vel(ceiling)
stiffness = -k * P.pos_from(O)
gravity = m * g * ceiling.x
forces = damping + stiffness + gravity
forces
```
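A possible next step (a sketch, not part of the original notebook): projecting the force onto `ceiling.x` and dividing by the mass recovers the same acceleration that was integrated numerically above.
```python
# Sketch: Newton's second law along ceiling.x gives dv/dt = (g*m - k*x - c*v)/m,
# i.e. the right-hand side used in spring_mass_damp_g() above.
dvdt_symbolic = forces.dot(ceiling.x) / m
dvdt_symbolic
```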
```python
```
```python
```
| 19536918fc8d7365498ac0e6afd51f2dd72b08e5 | 175,512 | ipynb | Jupyter Notebook | notebooks/01_dynamics_experiments/spring_mass_damper.ipynb | martinlarsalbert/ForceMan | b07541f1c04d0ed103d7a5f66256812ebca2824c | ["MIT"] | null | null | null | notebooks/01_dynamics_experiments/spring_mass_damper.ipynb | martinlarsalbert/ForceMan | b07541f1c04d0ed103d7a5f66256812ebca2824c | ["MIT"] | 1 | 2019-12-15T17:23:21.000Z | 2019-12-15T17:23:21.000Z | notebooks/01_dynamics_experiments/spring_mass_damper.ipynb | martinlarsalbert/ForceMan | b07541f1c04d0ed103d7a5f66256812ebca2824c | ["MIT"] | null | null | null | 263.531532 | 34,604 | 0.915185 | true | 2,232 | Qwen/Qwen-72B | 1. YES 2. YES | 0.92944 | 0.865224 | 0.804174 | __label__eng_Latn | 0.845178 | 0.706699 |
## 3TM Nickel
In this demonstration we are considering 3 coupled heat diffusion equations, i.e. the 3 temperature model formulated as
\begin{align}\label{eq:coupledHeatequation}
\begin{cases}
C_i^E(\varphi^E)\cdot\rho_i\cdot\partial_t\varphi^E &= \partial_x\left(k^E_i(\varphi^E_i)\cdot \partial_x\varphi^E_i\right) + G_i^{EL}\cdot(\varphi^L_i-\varphi^E_i)+G_i^{SE}\cdot(\varphi^S_i-\varphi^E_i) + S(x,t) \\ \nonumber
C_i^L(\varphi^L)\cdot\rho_i\cdot\partial_t\varphi^L &= \partial_x\left(k^L_i(\varphi^L_i)\cdot \partial_x\varphi^L_i\right) + G_i^{EL}\cdot(\varphi^E_i-\varphi^L_i)+G_i^{LS}\cdot(\varphi^S_i-\varphi^L_i) \\ \nonumber
C_i^S(\varphi^S)\cdot\rho_i\cdot\partial_t\varphi^S &= \partial_x\left(k^S_i(\varphi^S_i)\cdot \partial_x\varphi^S_i\right) + G_i^{SE}\cdot(\varphi^E_i-\varphi^S_i)+G_i^{LS}\cdot(\varphi^L_i-\varphi^S_i)
\end{cases}
\end{align}
Here the superscript indicates the individual subsystem ("E" = electron, "L" = lattice, "S" = spin), and the subscript "i" indicates the layer: with this solver we can find solutions for multiple piecewise-homogeneous layers.
### Aim
* Calculate the energy deposit in a Nickel film via the transfer matrix method.
* Do a 3 temperature simulation and calculate the temperature dynamics within a Nickel layer in space and time
* Depict results
* Compare them to [Ultrafast Spin Dynamics in Ferromagnetic Nickel](https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.76.4250)
* [Numerical Units](https://pypi.org/project/numericalunits/) is not required but used here to make the physical dimension of the variables used more clear.
### Setup
* Initially the electron and phonon temperature of Nickel is at 300 K
* The heating occurs through a laser source, Gaussian in time and exponentially decaying in space. The fluence of the 400 nm laser light is $5 \mathrm{mJ/cm^{2}}$, the polarization is "p" and the incident angle is 45°.
* A 20 nm Nickel layer is considered and the physical parameters are taken from [Ultrafast Spin Dynamics in Ferromagnetic Nickel](https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.76.4250).
```python
from NTMpy import NTMpy as ntm
from matplotlib import pyplot as plt
import numpy as np
import numericalunits as u
u.reset_units('SI')
```
```python
#Define the source, responsible for heating
s = ntm.source()
s.spaceprofile = "TMM"
s.timeprofile = "Gaussian"
s.FWHM = 0.1*u.ps
s.fluence = 5*u.mJ/u.cm**2
s.t0 = 0.5*u.ps
s.polarization = "p"
s.theta_in = np.pi/4 #rad (0 is grazing)
s.lambda_vac = 400 #nm
```
In order to obtain the correct $C_{e,l,s}(T)$, i.e. the heat capacity of each system under consideration, we take $C_e(T) = \gamma T$, where $\gamma = \frac{6000\,\mathrm{J/(m^3 K^2)}}{\rho}$ and $C_l = \frac{2\cdot 10^6\,\mathrm{J/(m^3 K)}}{\rho}$, with the values taken from the paper by _Bigot et al._ mentioned above.
For $C_s$ we consider the total heat capacity $C_{tot}$ from [here](https://webbook.nist.gov/cgi/cbook.cgi?ID=C7440020&Mask=2&Type=JANAFS&Plot=on#JANAFS) and extract the slope $\gamma_l$ before and the slope $\gamma_r$ after the Curie temperature.
This gives us
\begin{equation}
C_s(T) = \gamma_s T ~\text{ where }~ \gamma_s = \gamma_l - \gamma_r
\end{equation}
Note that this way of obtaining the coefficients for the respective heat capacities, and the linearization of $C_i$, is an approximation, mainly valid in the low-temperature regime.
The coupling constants, responsible for the heat exchange for all the systems are given in the paper by _Bigot et al._
and the heat conductivity, responsible for diffusion is assumed to be 1 $\mathrm{\frac{W}{mK}}$, since for a 20 nm thin film we can confidently assume uniform heating without diffusion playing a major role.
```python
length = 20*u.nm
density = 8.908*u.g/(u.cm**3)
n_index = 1.7163+2.5925j
C_l = 2.2e6*u.J/(u.m**3*u.K)/density
gamma = 6e3*u.J/(u.m**3*u.K**2)/density
#The units of C_tot are J/(kg K) --> don't divide by density any more!
C_tot = lambda T: np.piecewise(T, [T<600, (T>=600) & (T<700),T>= 700 ], \
[lambda T:1/0.058* (13.69160 + 82.49509*(T/1000) - 174.955*(T/1000)**2 + 161.6011*(T/1000)**3),
lambda T:1/0.058* (1248.045 - 1257.510*(T/1000) - 165.1266/(T/1000)**2),
lambda T:1/0.058* (16.49839 + 18.74913*(T/1000) - 6.639841*(T/1000)**2 + 1.717278*(T/1000)**3)])
C_e = lambda T: gamma *T
#Extract the slope of the total heat capacity before and after curie temperature
temp = np.linspace(300,2000,5000)
indexl = temp <= 500
indexh = temp > 750
z1 = np.polyfit(temp[indexl],C_tot(temp[indexl]),1)
Ce1 = np.poly1d(z1)
coef1 = Ce1.coef
print("Linear approx before Curie temp:")
print(Ce1)
z2 = np.polyfit(temp[indexh],C_tot(temp[indexh]),1)
Ce2 = np.poly1d(z2)
coef2 = Ce2.coef
print("Linear approx after Curie temp:")
print(Ce2)
gammaS = coef1[0]-coef2[0]
print(f"Difference of slopes gives gammaS: {gammaS:.3f}")
C_s = lambda Ts: gammaS * Ts
#Conductivity is not so important since we assume uniform heating; all systems get the same conductivity
k = 1*u.W/(u.m*u.K)
#Coupling constants taken from paper
G_el = 8e17 *u.W/(u.m**3*u.K)
G_se = 6e17 *u.W/(u.m**3*u.K)
G_ls = 0.3e17 *u.W/(u.m**3*u.K)
```
Linear approx before Curie temp:
0.3633 x + 356.5
Linear approx after Curie temp:
0.1833 x + 338.4
Difference of slopes gives gammaS: 0.180
```python
#Depicting the different heat capacities
C_la = C_l*np.ones_like(temp)
plt.figure()
plt.grid()
plt.title("Different heat capacities in Nickel")
plt.xlabel("Temperature in K"); plt.ylabel("$C_i$ in J/kgK")
plt.plot(temp,C_tot(temp),'orange',label = "$C_{tot}(T)$")
plt.plot(temp,C_la,'k',label="$C_l$")
plt.plot(temp,C_e(temp),'r',label = "$C_e(T) =\gamma T$")
plt.plot(temp,C_s(temp),'b',label = "$C_s(T) = \gamma_s T$")
plt.legend(loc='upper left')
plt.show()
```
Note that $C_e(T)$ exceeds $C_{tot}$ above the Curie temperature. However, the fluence of the laser is too small to cause heating above this temperature. Also, we are trying to compare our findings to the paper mentioned above, which is why we take their reported parameters into consideration.
Now that all the parameters are defined, we can create the simulation object, provide it with the physical properties, which we just evaluated and run the simulation.
* `sim = simulation(3,s)` creates the simulation object. The input arguments are the number of systems under consideration and the source object created above.
* `sim.addLayer(length,refractive_index,[heat_conductivity],[heat_Capacity], density,[coupling])` creates layer stacks.
Note that `[heat_conductivity]` is an array, where each entry corresponds to the conductivity of a system. The same holds for `heat_Capacity`. `[coupling]` contains the linear coupling constants, as indicated in the equation above. Here the first entry of the array corresponds to the coupling between systems 1-2; the second entry: 2-3; the third entry: 3-1.
* The output of `sim.run()` is the full temperature map of each system. I.e. `Temp_map[0]` corresponds to the temperature dynamics of the electron system in space (along dim-0) and time (along dim-1). `x` and `t` are vectors containing the space and time grid respectively.
Finally we create the visual object with `v = visual(sim)`, where the simulation object gets passed on as the input argument.
```python
sim = ntm.simulation(3,s)
sim.addLayer(length,n_index,[k,k,k],[C_e,C_l,C_s],density,[G_el,G_ls,G_se])
sim.final_time = 8*u.ps
#To get the raw output in form of arrays
[x, t, Temp_map] = sim.run()
#Create a visual object
v = ntm.visual(sim)
```
-----------------------------------------------------------
No specific time constant has been indicated.
The stability region has been calculated and an appropriate timestep has been chosen.
Timestep = 5.49e-15 s
-----------------------------------------------------------
Line 728 Spinsystem
-----------------------------------------------------------
Transfer matrix absorption profile and a Gaussian time profile is taken into account for the source.
Length of every layer has to be given in units of meter.
-----------------------------------------------------------
100%|████████████████████████████████████████████████████████████████████████████| 1549/1549 [00:00<00:00, 2968.66it/s]
-----------------------------------------------------------
Heat diffusion in a coupled electron-latticelspin system has been simulated
Eleapsed time in E.E.- loop: 0.5634047985076904
-----------------------------------------------------------
------------------------------------------------------------
The simulation object of the 3 temerature system has been passed on to the visual class.
------------------------------------------------------------
-----------------------------------------------------------
The maunually chosen time step of 5.49e-15 is very small and will eventually cause a long simulation time.
We suggest a timestep of 2.63e-13 s
-----------------------------------------------------------
Line 728 Spinsystem
-----------------------------------------------------------
Transfer matrix absorption profile and a Gaussian time profile is taken into account for the source.
Length of every layer has to be given in units of meter.
-----------------------------------------------------------
100%|████████████████████████████████████████████████████████████████████████████| 1549/1549 [00:00<00:00, 1932.18it/s]
-----------------------------------------------------------
Heat diffusion in a coupled electron-latticelspin system has been simulated
Eleapsed time in E.E.- loop: 0.8016860485076904
-----------------------------------------------------------
```python
[timegrid,Temp_vec] = v.average()
print("Shape of temp_vec = "+str(np.shape(Temp_vec)))
```
The function `[timegrid,Temp_vec] = v.average()` has two outputs: the timegrid in vector form and the corresponding averaged temperatures. `Temp_vec` is in array form. That is, different rows correspond to different systems and the data for each timestep are stored along the column direction.
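For instance, the averaged temperatures can be plotted directly (a hypothetical snippet, not part of the original notebook; it assumes the rows of `Temp_vec` are ordered electron, lattice, spin as described above):
```python
# Hypothetical plotting sketch using the outputs of v.average()
plt.figure()
for label, row in zip(["Electron", "Lattice", "Spin"], Temp_vec):
    plt.plot(timegrid, row, label=label)
plt.xlabel("Time (s)")
plt.ylabel("Average temperature (K)")
plt.legend()
plt.show()
```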
```python
v.contour("1")
```
```python
v.contour("2")
```
```python
v.contour("3")
```
Comparing this to Fig. 3, from [Ultrafast Spin Dynamics in Ferromagnetic Nickel](https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.76.4250), we see, that our findings are in agreement with what is reported in the paper.
### New Spin heat capacity
Now let us consider the following $C_s$:
\begin{equation} C_s(T)=
\begin{cases}
\gamma_s T &\text{ for } T<T_C \\ \nonumber
0 &\text{ else }
\end{cases}
\end{equation}
where the Curie temperature $T_C = 650$K
We redefine $C_s$ and reset the simulation.
```python
C_s = lambda T: np.piecewise(T,[T <= 650, T >= 650], [lambda T: gammaS*T,1e-2])
sim = ntm.simulation(3,s)
sim.addLayer(length,n_index,[k,k,k],[C_e,C_l,C_s],density,[G_el,G_ls,G_se])
sim.final_time = 8*u.ps
[x, t, Temp_map]= sim.run()
v = ntm.visual(sim)
```
-----------------------------------------------------------
No specific time constant has been indicated.
The stability region has been calculated and an appropriate timestep has been chosen.
Timestep = 5.49e-15 s
-----------------------------------------------------------
Line 728 Spinsystem
-----------------------------------------------------------
Transfer matrix absorption profile and a Gaussian time profile is taken into account for the source.
Length of every layer has to be given in units of meter.
-----------------------------------------------------------
100%|████████████████████████████████████████████████████████████████████████████| 1549/1549 [00:00<00:00, 2609.20it/s]
-----------------------------------------------------------
Heat diffusion in a coupled electron-latticelspin system has been simulated
Eleapsed time in E.E.- loop: 0.5936684608459473
-----------------------------------------------------------
------------------------------------------------------------
The simulation object of the 3 temerature system has been passed on to the visual class.
------------------------------------------------------------
-----------------------------------------------------------
The manually chosen time step of 5.49e-15 is eventually too big and could cause instabilities in the simulation.
We suggest a timestep of 5.49e-17 s
-----------------------------------------------------------
Line 728 Spinsystem
-----------------------------------------------------------
Transfer matrix absorption profile and a Gaussian time profile is taken into account for the source.
Length of every layer has to be given in units of meter.
-----------------------------------------------------------
100%|████████████████████████████████████████████████████████████████████████████| 1458/1458 [00:00<00:00, 2483.14it/s]
-----------------------------------------------------------
Heat diffusion in a coupled electron-latticelspin system has been simulated
Eleapsed time in E.E.- loop: 0.5871601104736328
-----------------------------------------------------------
```python
plt.figure()
plt.plot(temp,C_s(temp))
plt.title("Spin heat capacity $C_s$ with discontinuity at $T_C$")
plt.xlabel("Temperature (K)");plt.ylabel("$C_i$ in J/kgK")
[timegrid,Temp_vec] = v.average()
```
### 1 TM- Simulation
If there is only one system, then the heating should be way stronger on this system, since the heat does not get distributed among different systems. In order to qualitatively check this, we reset the problem and run a simulation again.
#### Decrease the timestep automatically
Note: In this specific case, the automatically calculated time step for the stability criterion would be larger than the FWHM of the laser source. This would lead to an incorrect capture of the laser pulse, since too few time steps would be applied to correctly resolve the shape of the source.
Therefore a routine has been implemented which makes the timestep around the peak of the Gaussian smaller, in order to capture its shape and to correctly calculate the energy deposited in time.
```python
#Source
s = ntm.source()
#Those are the default options for space and time profile
#s.spaceprofile = "TMM"
#s.timeprofile = "Gaussian"
s.FWHM = 0.1*u.ps
s.fluence = 5*u.mJ/u.cm**2
s.t0 = 0.5*u.ps
s.polarization = "p"
s.theta_in = np.pi/4
s.lambda_vac = 400
#1 TM simulation
sim = ntm.simulation(1,s)
sim.addLayer(length,n_index,[k],[C_e],density)
sim.final_time = 8*u.ps
v = ntm.visual(sim)
```
------------------------------------------------------------
The simulation object of the 1 temerature system has been passed on to the visual class.
------------------------------------------------------------
-----------------------------------------------------------
No specific time constant has been indicated.
The stability region has been calculated and an appropriate timestep has been chosen.
Timestep = 1.25e-12 s
-----------------------------------------------------------
-----------------------------------------------------------
Transfer matrix absorption profile and a Gaussian time profile is taken into account for the source.
Length of every layer has to be given in units of meter.
-----------------------------------------------------------
100%|██████████████████████████████████████████████████████████████████████████████| 206/206 [00:00<00:00, 4392.66it/s]
-----------------------------------------------------------
Electron temperature heat diffusion has been simulated.
Eleapsed time in E.E.- loop: 0.06252121925354004
-----------------------------------------------------------
```python
[timegrid,temp_vec] = v.average()
print("Shape of temp_vec = "+str(np.shape(temp_vec)))
v.timegrid()
```
| 530ed6d64eec1feb6f9c9a99de4c1e7d98edc8b2 | 208,058 | ipynb | Jupyter Notebook | Examples/3TmNickel.ipynb | VaSca92/NTMpy | be78cde21c045eb20f46cdb027a6933155ec962a | ["MIT"] | 6 | 2020-02-17T08:18:27.000Z | 2021-12-16T15:42:14.000Z | Examples/3TmNickel.ipynb | VaSca92/NTMpy | be78cde21c045eb20f46cdb027a6933155ec962a | ["MIT"] | 5 | 2019-05-13T01:34:22.000Z | 2019-08-27T10:10:57.000Z | Examples/3TmNickel.ipynb | VaSca92/NTMpy | be78cde21c045eb20f46cdb027a6933155ec962a | ["MIT"] | 3 | 2020-01-25T11:48:48.000Z | 2021-09-10T18:51:27.000Z | 320.089231 | 27,568 | 0.916807 | true | 4,184 | Qwen/Qwen-72B | 1. YES 2. YES | 0.903294 | 0.749087 | 0.676646 | __label__eng_Latn | 0.938072 | 0.410407 |
# NRPy+'s Reference Metric Interface
## Author: Zach Etienne
### Formatting improvements courtesy Brandon Clark
### NRPy+ Source Code for this module: [reference_metric.py](../edit/reference_metric.py)
## Introduction:
### Why use a reference metric? Benefits of choosing the best coordinate system for the problem
When solving a partial differential equation on the computer, it is useful to first pick a coordinate system well-suited to the geometry of the problem. For example, if we are modeling a spherically-symmetric star, it would be hugely wasteful to model the star in 3-dimensional Cartesian coordinates ($x$,$y$,$z$). This is because in Cartesian coordinates, we would need to choose high sampling in all three Cartesian directions. If instead we chose to model the star in spherical coordinates ($r$,$\theta$,$\phi$), so long as the star is centered at $r=0$, we would not need to model the star with more than one point in the $\theta$ and $\phi$ directions!
A similar argument holds for stars that are *nearly* spherically symmetric. Such stars may exhibit density distributions that vary slowly in $\theta$ and $\phi$ directions (e.g., isolated neutron stars or black holes). In these cases the number of points needed to sample the angular directions will still be much smaller than in the radial direction.
Thus the choice of an appropriate reference metric may directly mitigate the [Curse of Dimensionality](https://en.wikipedia.org/wiki/Curse_of_dimensionality).
<a id='toc'></a>
# Table of Contents
$$\label{toc}$$
This notebook is organized as follows:
1. [Step 1](#define_ref_metric): Defining a reference metric, [`reference_metric.py`](../edit/reference_metric.py)
1. [Step 2](#define_geometric): Defining geometric quantities, **`ref_metric__hatted_quantities()`**
1. [Step 3](#prescribed_ref_metric): Prescribed reference metrics in [`reference_metric.py`](../edit/reference_metric.py)
1. [Step 3.a](#sphericallike): Spherical-like coordinate systems
1. [Step 3.a.i](#spherical): **`reference_metric::CoordSystem = "Spherical"`**
1. [Step 3.a.ii](#sinhspherical): **`reference_metric::CoordSystem = "SinhSpherical"`**
1. [Step 3.a.iii](#sinhsphericalv2): **`reference_metric::CoordSystem = "SinhSphericalv2"`**
1. [Step 3.b](#cylindricallike): Cylindrical-like coordinate systems
1. [Step 3.b.i](#cylindrical): **`reference_metric::CoordSystem = "Cylindrical"`**
1. [Step 3.b.ii](#sinhcylindrical): **`reference_metric::CoordSystem = "SinhCylindrical"`**
1. [Step 3.b.iii](#sinhcylindricalv2): **`reference_metric::CoordSystem = "SinhCylindricalv2"`**
1. [Step 3.c](#cartesianlike): Cartesian-like coordinate systems
1. [Step 3.c.i](#cartesian): **`reference_metric::CoordSystem = "Cartesian"`**
1. [Step 3.d](#prolatespheroidal): Prolate spheroidal coordinates
1. [Step 3.d.i](#symtp): **`reference_metric::CoordSystem = "SymTP"`**
1. [Step 3.d.ii](#sinhsymtp): **`reference_metric::CoordSystem = "SinhSymTP"`**
1. [Step 4](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file
<a id='define_ref_metric'></a>
# Step 1: Defining a reference metric, [`reference_metric.py`](../edit/reference_metric.py) \[Back to [top](#toc)\]
$$\label{define_ref_metric}$$
***Note that currently only orthogonal reference metrics of dimension 3 or fewer are supported. This can be extended if desired.***
NRPy+ assumes all curvilinear coordinate systems map directly from a uniform, Cartesian numerical grid with coordinates $(x,y,z)$=(`xx[0]`,`xx[1]`,`xx[2]`). Thus when defining reference metrics, all defined coordinate quantities must be in terms of the `xx[]` array. As we will see, this adds a great deal of flexibility.
For example, [**reference_metric.py**](../edit/reference_metric.py) requires that the *orthogonal coordinate scale factors* be defined. As described [here](https://en.wikipedia.org/wiki/Curvilinear_coordinates), the $i$th scale factor is the positive root of the metric element $g_{ii}$. In ordinary spherical coordinates $(r,\theta,\phi)$, with line element $ds^2 = g_{ij} dx^i dx^j = dr^2+ r^2 d \theta^2 + r^2 \sin^2\theta \ d\phi^2$, we would first define
* $r = xx_0$
* $\theta = xx_1$
* $\phi = xx_2$,
so that the scale factors are defined as
* `scalefactor_orthog[0]` = $1$
* `scalefactor_orthog[1]` = $r$
* `scalefactor_orthog[2]` = $r \sin \theta$
Here is the corresponding code:
```python
import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends
import NRPy_param_funcs as par # NRPy+: parameter interface
import reference_metric as rfm # NRPy+: Reference metric support
r = rfm.xx[0]
th = rfm.xx[1]
ph = rfm.xx[2]
rfm.scalefactor_orthog[0] = 1
rfm.scalefactor_orthog[1] = r
rfm.scalefactor_orthog[2] = r*sp.sin(th)
# Notice that the scale factor will be given
# in terms of the fundamental Cartesian
# grid variables, and not {r,th,ph}:
print("r*sin(th) = "+str(rfm.scalefactor_orthog[2]))
```
r*sin(th) = xx0*sin(xx1)
Next suppose we wish to modify our radial coordinate $r(xx_0)$ to be an exponentially increasing function, so that our numerical grid $(xx_0,xx_1,xx_2)$ will map to a spherical grid with radial grid spacing ($\Delta r$) that *increases* with $r$. Generally we will find it useful to define $r(xx_0)$ to be an odd function, so let's choose
$$r(xx_0) = a \sinh(xx_0/s),$$
where $a$ is an overall radial scaling factor, and $s$ denotes the scale (in units of $xx_0$) over which exponential growth will take place. In our implementation below, note that we use the relation
$$\sinh(x) = \frac{e^x - e^{-x}}{2},$$
as SymPy finds it easier to evaluate exponentials than hyperbolic trigonometric functions.
```python
a,s = sp.symbols('a s',positive=True)
xx0_rescaled = rfm.xx[0] / s
r = a*(sp.exp(xx0_rescaled) - sp.exp(-xx0_rescaled))/2
# Must redefine the scalefactors since 'r' has been updated!
rfm.scalefactor_orthog[0] = 1
rfm.scalefactor_orthog[1] = r
rfm.scalefactor_orthog[2] = r*sp.sin(th)
print(rfm.scalefactor_orthog[2])
```
a*(exp(xx0/s) - exp(-xx0/s))*sin(xx1)/2
Often we will find it useful to also define the appropriate mappings from (`xx[0]`,`xx[1]`,`xx[2]`) to Cartesian coordinates (for plotting purposes) and ordinary spherical coordinates (e.g., in case initial data when solving a PDE are naturally written in spherical coordinates). For this purpose, reference_metric.py also declares lists **`xxCart[]`** and **`xxSph[]`**, which in this case are defined as
```python
rfm.xxSph[0] = r
rfm.xxSph[1] = th
rfm.xxSph[2] = ph
rfm.xxCart[0] = r*sp.sin(th)*sp.cos(ph)
rfm.xxCart[1] = r*sp.sin(th)*sp.sin(ph)
rfm.xxCart[2] = r*sp.cos(th)
# Here we show off SymPy's pretty_print()
# and simplify() functions. Nice, no?
sp.pretty_print(sp.simplify(rfm.xxCart[0]))
```
⎛xx₀⎞
a⋅sin(xx₁)⋅cos(xx₂)⋅sinh⎜───⎟
⎝ s ⎠
<a id='define_geometric'></a>
# Step 2: Define geometric quantities, `ref_metric__hatted_quantities()` \[Back to [top](#toc)\]
$$\label{define_geometric}$$
Once `scalefactor_orthog[]` has been defined, the function **`ref_metric__hatted_quantities()`** within [reference_metric.py](../edit/reference_metric.py) can be called to define a number of geometric quantities useful for solving PDEs in curvilinear coordinate systems.
Adopting the notation of [Baumgarte, Montero, Cordero-Carrión, and Müller, PRD 87, 044026 (2012)](https://arxiv.org/abs/1211.6632), geometric quantities related to the reference metric are named "hatted" quantities. For example, the reference metric is defined as $\hat{g}_{ij}$=`ghatDD[i][j]`:
```python
rfm.ref_metric__hatted_quantities()
sp.pretty_print(sp.Matrix(rfm.ghatDD))
```
⎡1 0 0 ⎤
⎢ ⎥
⎢ 2 ⎥
⎢ ⎛ xx₀ -xx₀ ⎞ ⎥
⎢ ⎜ ─── ─────⎟ ⎥
⎢ 2 ⎜ s s ⎟ ⎥
⎢ a ⋅⎝ℯ - ℯ ⎠ ⎥
⎢0 ─────────────────── 0 ⎥
⎢ 4 ⎥
⎢ ⎥
⎢ 2 ⎥
⎢ ⎛ xx₀ -xx₀ ⎞ ⎥
⎢ ⎜ ─── ─────⎟ ⎥
⎢ 2 ⎜ s s ⎟ 2 ⎥
⎢ a ⋅⎝ℯ - ℯ ⎠ ⋅sin (xx₁)⎥
⎢0 0 ─────────────────────────────⎥
⎣ 4 ⎦
In addition to $\hat{g}_{ij}$, **`ref_metric__hatted_quantities()`** also provides:
* The rescaling "matrix" `ReDD[i][j]`, used for separating singular (due to chosen coordinate system) pieces of smooth rank-2 tensor components from the smooth parts, so that the smooth parts can be used within temporal and spatial differential operators.
* Inverse reference metric: $\hat{g}^{ij}$=`ghatUU[i][j]`.
* Reference metric determinant: $\det\left(\hat{g}_{ij}\right)$=`detgammahat`.
* First and second derivatives of the reference metric: $\hat{g}_{ij,k}$=`ghatDD_dD[i][j][k]`; $\hat{g}_{ij,kl}$=`ghatDD_dDD[i][j][k][l]`
* Christoffel symbols associated with the reference metric, $\hat{\Gamma}^i_{jk}$ = `GammahatUDD[i][j][k]` and their first derivatives $\hat{\Gamma}^i_{jk,l}$ = `GammahatUDD_dD[i][j][k][l]`
For example, the Christoffel symbol $\hat{\Gamma}^{xx_1}_{xx_2 xx_2}=\hat{\Gamma}^1_{22}$ is given by `GammahatUDD[1][2][2]`:
```python
sp.pretty_print(sp.simplify(rfm.GammahatUDD[1][2][2]))
```
-sin(2⋅xx₁)
────────────
2
Given the trigonometric identity $2\sin(x)\cos(x) = \sin(2x)$, notice that the above expression is equivalent to Eq. 18 of [Baumgarte, Montero, Cordero-Carrión, and Müller, PRD 87, 044026 (2012)](https://arxiv.org/abs/1211.6632). This is expected since the sinh-radial spherical coordinate system is equivalent to ordinary spherical coordinates in the angular components.
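As another quick check (illustrative only, not part of the NRPy+ test suite), for an orthogonal reference metric the determinant `detgammahat` should equal the product of the squared scale factors:
```python
# Illustrative check: det(ghat_{ij}) = (h_0 h_1 h_2)^2 for an orthogonal metric.
product_of_squares = (rfm.scalefactor_orthog[0]*rfm.scalefactor_orthog[1]*rfm.scalefactor_orthog[2])**2
print(sp.simplify(rfm.detgammahat - product_of_squares))  # should print 0
```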
<a id='prescribed_ref_metric'></a>
# Step 3: Prescribed reference metrics in [`reference_metric.py`](../edit/reference_metric.py) \[Back to [top](#toc)\]
$$\label{prescribed_ref_metric}$$
One need not manually define scale factors or other quantities for reference metrics, as a number of prescribed reference metrics are already defined in [reference_metric.py](../edit/reference_metric.py). These can be accessed by first setting the parameter **reference_metric::CoordSystem** to one of the following, and then calling the function **`rfm.reference_metric()`**.
```python
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import grid as gri # NRPy+: Functions having to do with numerical grids
# Step 0a: Initialize parameters
thismodule = __name__
par.initialize_param(par.glb_param("char", thismodule, "CoordSystem", "Spherical"))
# Step 0b: Declare global variables
xx = gri.xx
xxCart = ixp.zerorank1(DIM=4) # Must be set in terms of xx[]s
Cart_to_xx = ixp.zerorank1(DIM=4) # Must be set in terms of xx[]s
Cartx,Carty,Cartz = sp.symbols("Cartx Carty Cartz", real=True)
Cart = [Cartx,Carty,Cartz]
xxSph = ixp.zerorank1(DIM=4) # Must be set in terms of xx[]s
scalefactor_orthog = ixp.zerorank1(DIM=4) # Must be set in terms of xx[]s
have_already_called_reference_metric_function = False
CoordSystem = par.parval_from_str("reference_metric::CoordSystem")
M_PI,M_SQRT1_2 = par.Cparameters("#define",thismodule,["M_PI","M_SQRT1_2"],"")
global xxmin
global xxmax
global UnitVectors
UnitVectors = ixp.zerorank2(DIM=3)
```
We will find the following plotting function useful for analyzing coordinate systems in which the radial coordinate is rescaled.
```python
def create_r_of_xx0_plots(CoordSystem, r_of_xx0,rprime_of_xx0):
import matplotlib.pyplot as plt # matplotlib: Python module specializing in plotting capabilities
plt.clf()
Nr = 20
dxx0 = 1.0 / float(Nr)
xx0s = []
rs = []
deltars = []
rprimes = []
for i in range(Nr):
xx0 = (float(i) + 0.5)*dxx0
xx0s.append(xx0)
rs.append( sp.sympify(str(r_of_xx0 ).replace("xx0",str(xx0))))
rprimes.append(sp.sympify(str(rprime_of_xx0).replace("xx0",str(xx0))))
if i>0:
deltars.append(sp.log(rs[i]-rs[i-1],10))
else:
deltars.append(sp.log(2*rs[0],10))
# fig, ax = plt.subplots()
    fig = plt.figure(figsize=(12,12)) # 12 in x 12 in
ax = fig.add_subplot(221)
ax.set_title('$r(xx_0)$ for '+CoordSystem,fontsize='x-large')
ax.set_xlabel('$xx_0$',fontsize='x-large')
ax.set_ylabel('$r(xx_0)$',fontsize='x-large')
ax.plot(xx0s, rs, 'k.', label='Spacing between\nadjacent gridpoints')
# legend = ax.legend(loc='lower right', shadow=True, fontsize='x-large')
# legend.get_frame().set_facecolor('C1')
ax = fig.add_subplot(222)
ax.set_title('Grid spacing for '+CoordSystem,fontsize='x-large')
ax.set_xlabel('$xx_0$',fontsize='x-large')
ax.set_ylabel('$\log_{10}(\Delta r)$',fontsize='x-large')
ax.plot(xx0s, deltars, 'k.', label='Spacing between\nadjacent gridpoints\nin $r(xx_0)$ plot')
legend = ax.legend(loc='lower right', shadow=True, fontsize='x-large')
legend.get_frame().set_facecolor('C1')
ax = fig.add_subplot(223)
ax.set_title('$r\'(xx_0)$ for '+CoordSystem,fontsize='x-large')
ax.set_xlabel('$xx_0$',fontsize='x-large')
ax.set_ylabel('$r\'(xx_0)$',fontsize='x-large')
ax.plot(xx0s, rprimes, 'k.', label='Nr=96')
# legend = ax.legend(loc='upper left', shadow=True, fontsize='x-large')
# legend.get_frame().set_facecolor('C1')
plt.tight_layout(pad=2)
plt.show()
```
<a id='sphericallike'></a>
## Step 3.a: Spherical-like coordinate systems \[Back to [top](#toc)\]
$$\label{sphericallike}$$
<a id='spherical'></a>
### Step 3.a.i: **`reference_metric::CoordSystem = "Spherical"`** \[Back to [top](#toc)\]
$$\label{spherical}$$
Standard spherical coordinates, with $(r,\theta,\phi)=(xx_0,xx_1,xx_2)$
```python
if CoordSystem == "Spherical":
# Adding assumption real=True can help simplify expressions involving xx[0] & xx[1] below.
xx[0] = sp.symbols("xx0", real=True)
xx[1] = sp.symbols("xx1", real=True)
RMAX = par.Cparameters("REAL", thismodule, ["RMAX"],10.0)
xxmin = [sp.sympify(0), sp.sympify(0), -M_PI]
xxmax = [ RMAX, M_PI, M_PI]
r = xx[0]
th = xx[1]
ph = xx[2]
Cart_to_xx[0] = sp.sqrt(Cartx ** 2 + Carty ** 2 + Cartz ** 2)
Cart_to_xx[1] = sp.acos(Cartz / Cart_to_xx[0])
Cart_to_xx[2] = sp.atan2(Carty, Cartx)
xxSph[0] = r
xxSph[1] = th
xxSph[2] = ph
# Now define xCart, yCart, and zCart in terms of x0,xx[1],xx[2].
# Note that the relation between r and x0 is not necessarily trivial in SinhSpherical coordinates. See above.
xxCart[0] = xxSph[0]*sp.sin(xxSph[1])*sp.cos(xxSph[2])
xxCart[1] = xxSph[0]*sp.sin(xxSph[1])*sp.sin(xxSph[2])
xxCart[2] = xxSph[0]*sp.cos(xxSph[1])
scalefactor_orthog[0] = sp.diff(xxSph[0],xx[0])
scalefactor_orthog[1] = xxSph[0]
scalefactor_orthog[2] = xxSph[0]*sp.sin(xxSph[1])
# Set the unit vectors
UnitVectors = [[ sp.sin(xxSph[1])*sp.cos(xxSph[2]), sp.sin(xxSph[1])*sp.sin(xxSph[2]), sp.cos(xxSph[1])],
[ sp.cos(xxSph[1])*sp.cos(xxSph[2]), sp.cos(xxSph[1])*sp.sin(xxSph[2]), -sp.sin(xxSph[1])],
[ -sp.sin(xxSph[2]), sp.cos(xxSph[2]), sp.sympify(0) ]]
```
Now let's analyze $r(xx_0)$ for **"Spherical"** coordinates.
```python
%matplotlib inline
CoordSystem = "Spherical"
par.set_parval_from_str("reference_metric::CoordSystem",CoordSystem)
rfm.reference_metric()
RMAX = 10.0
r_of_xx0 = sp.sympify(str(rfm.xxSph[0] ).replace("RMAX",str(RMAX)))
rprime_of_xx0 = sp.sympify(str(sp.diff(rfm.xxSph[0],rfm.xx[0])).replace("RMAX",str(RMAX)))
create_r_of_xx0_plots(CoordSystem, r_of_xx0,rprime_of_xx0)
```
<a id='sinhspherical'></a>
### Step 3.a.ii: **`reference_metric::CoordSystem = "SinhSpherical"`** \[Back to [top](#toc)\]
$$\label{sinhspherical}$$
Spherical coordinates, but with $$r(xx_0) = \text{AMPL} \frac{\sinh\left(\frac{xx_0}{\text{SINHW}}\right)}{\sinh\left(\frac{1}{\text{SINHW}}\right)}.$$
SinhSpherical uses two parameters: `AMPL` and `SINHW`. `AMPL` sets the outer boundary distance; and `SINHW` sets the focusing of the coordinate points near $r=0$, where a small `SINHW` ($\sim 0.125$) will greatly focus the points near $r=0$ and a large `SINHW` will look more like an ordinary spherical polar coordinate system.
```python
if CoordSystem == "SinhSpherical":
xxmin = [sp.sympify(0), sp.sympify(0), -M_PI]
xxmax = [sp.sympify(1), M_PI, M_PI]
AMPL, SINHW = par.Cparameters("REAL",thismodule,["AMPL","SINHW"],[10.0,0.2])
# Set SinhSpherical radial coordinate by default; overwrite later if CoordSystem == "SinhSphericalv2".
r = AMPL * (sp.exp(xx[0] / SINHW) - sp.exp(-xx[0] / SINHW)) / \
(sp.exp(1 / SINHW) - sp.exp(-1 / SINHW))
th = xx[1]
ph = xx[2]
Cart_to_xx[0] = SINHW*sp.asinh(sp.sqrt(Cartx ** 2 + Carty ** 2 + Cartz ** 2)*sp.sinh(1/SINHW)/AMPL)
Cart_to_xx[1] = sp.acos(Cartz / sp.sqrt(Cartx ** 2 + Carty ** 2 + Cartz ** 2))
Cart_to_xx[2] = sp.atan2(Carty, Cartx)
xxSph[0] = r
xxSph[1] = th
xxSph[2] = ph
# Now define xCart, yCart, and zCart in terms of x0,xx[1],xx[2].
# Note that the relation between r and x0 is not necessarily trivial in SinhSpherical coordinates. See above.
xxCart[0] = xxSph[0]*sp.sin(xxSph[1])*sp.cos(xxSph[2])
xxCart[1] = xxSph[0]*sp.sin(xxSph[1])*sp.sin(xxSph[2])
xxCart[2] = xxSph[0]*sp.cos(xxSph[1])
scalefactor_orthog[0] = sp.diff(xxSph[0],xx[0])
scalefactor_orthog[1] = xxSph[0]
scalefactor_orthog[2] = xxSph[0]*sp.sin(xxSph[1])
# Set the unit vectors
UnitVectors = [[ sp.sin(xxSph[1])*sp.cos(xxSph[2]), sp.sin(xxSph[1])*sp.sin(xxSph[2]), sp.cos(xxSph[1])],
[ sp.cos(xxSph[1])*sp.cos(xxSph[2]), sp.cos(xxSph[1])*sp.sin(xxSph[2]), -sp.sin(xxSph[1])],
[ -sp.sin(xxSph[2]), sp.cos(xxSph[2]), sp.sympify(0) ]]
```
Now we explore $r(xx_0)$ for `SinhSpherical` assuming `AMPL=10.0` and `SINHW=0.2`:
```python
%matplotlib inline
CoordSystem = "SinhSpherical"
par.set_parval_from_str("reference_metric::CoordSystem",CoordSystem)
rfm.reference_metric()
AMPL = 10.0
SINHW = 0.2
r_of_xx0 = sp.sympify(str(rfm.xxSph[0] ).replace("AMPL",str(AMPL)).replace("SINHW",str(SINHW)))
rprime_of_xx0 = sp.sympify(str(sp.diff(rfm.xxSph[0],rfm.xx[0])).replace("AMPL",str(AMPL)).replace("SINHW",str(SINHW)))
create_r_of_xx0_plots(CoordSystem, r_of_xx0,rprime_of_xx0)
```
<a id='sinhsphericalv2'></a>
### Step 3.a.iii: **`reference_metric::CoordSystem = "SinhSphericalv2"`** \[Back to [top](#toc)\]
$$\label{sinhsphericalv2}$$
The same as SinhSpherical coordinates, but with an additional `AMPL*const_dr*xx_0` term:
$$r(xx_0) = \text{AMPL} \left[\text{const_dr}\ xx_0 + \frac{\sinh\left(\frac{xx_0}{\text{SINHW}}\right)}{\sinh\left(\frac{1}{\text{SINHW}}\right)}\right].$$
```python
if CoordSystem == "SinhSphericalv2":
# SinhSphericalv2 adds the parameter "const_dr", which allows for a region near xx[0]=0 to have
# constant radial resolution of const_dr, provided the sinh() term does not dominate near xx[0]=0.
xxmin = [sp.sympify(0), sp.sympify(0), -M_PI]
xxmax = [sp.sympify(1), M_PI, M_PI]
AMPL, SINHW = par.Cparameters("REAL",thismodule,["AMPL","SINHW"],[10.0,0.2])
const_dr = par.Cparameters("REAL",thismodule,["const_dr"],0.0625)
r = AMPL*( const_dr*xx[0] + (sp.exp(xx[0] / SINHW) - sp.exp(-xx[0] / SINHW)) /
(sp.exp(1 / SINHW) - sp.exp(-1 / SINHW)) )
th = xx[1]
ph = xx[2]
# NO CLOSED-FORM EXPRESSION FOR RADIAL INVERSION.
# Cart_to_xx[0] = "NewtonRaphson"
# Cart_to_xx[1] = sp.acos(Cartz / sp.sqrt(Cartx ** 2 + Carty ** 2 + Cartz ** 2))
# Cart_to_xx[2] = sp.atan2(Carty, Cartx)
xxSph[0] = r
xxSph[1] = th
xxSph[2] = ph
# Now define xCart, yCart, and zCart in terms of x0,xx[1],xx[2].
# Note that the relation between r and x0 is not necessarily trivial in SinhSpherical coordinates. See above.
xxCart[0] = xxSph[0]*sp.sin(xxSph[1])*sp.cos(xxSph[2])
xxCart[1] = xxSph[0]*sp.sin(xxSph[1])*sp.sin(xxSph[2])
xxCart[2] = xxSph[0]*sp.cos(xxSph[1])
scalefactor_orthog[0] = sp.diff(xxSph[0],xx[0])
scalefactor_orthog[1] = xxSph[0]
scalefactor_orthog[2] = xxSph[0]*sp.sin(xxSph[1])
# Set the unit vectors
UnitVectors = [[ sp.sin(xxSph[1])*sp.cos(xxSph[2]), sp.sin(xxSph[1])*sp.sin(xxSph[2]), sp.cos(xxSph[1])],
[ sp.cos(xxSph[1])*sp.cos(xxSph[2]), sp.cos(xxSph[1])*sp.sin(xxSph[2]), -sp.sin(xxSph[1])],
[ -sp.sin(xxSph[2]), sp.cos(xxSph[2]), sp.sympify(0) ]]
```
Now we explore $r(xx_0)$ for `SinhSphericalv2` assuming `AMPL=10.0`, `SINHW=0.2`, and `const_dr=0.05`. Notice that the `const_dr` term significantly increases the grid spacing near $xx_0=0$ relative to `SinhSpherical` coordinates.
```python
%matplotlib inline
CoordSystem = "SinhSphericalv2"
par.set_parval_from_str("reference_metric::CoordSystem",CoordSystem)
rfm.reference_metric()
AMPL = 10.0
SINHW = 0.2
const_dr = 0.05
r_of_xx0 = sp.sympify(str(rfm.xxSph[0] ).replace("AMPL",str(AMPL)).replace("SINHW",str(SINHW)).replace("const_dr",str(const_dr)))
rprime_of_xx0 = sp.sympify(str(sp.diff(rfm.xxSph[0],rfm.xx[0])).replace("AMPL",str(AMPL)).replace("SINHW",str(SINHW)).replace("const_dr",str(const_dr)))
create_r_of_xx0_plots(CoordSystem, r_of_xx0,rprime_of_xx0)
```
<a id='cylindricallike'></a>
## Step 3.b: Cylindrical-like coordinate systems \[Back to [top](#toc)\]
$$\label{cylindricallike}$$
<a id='cylindrical'></a>
### Step 3.b.i: **`reference_metric::CoordSystem = "Cylindrical"`** \[Back to [top](#toc)\]
$$\label{cylindrical}$$
Standard cylindrical coordinates, with $(\rho,\phi,z)=(xx_0,xx_1,xx_2)$
```python
if CoordSystem == "Cylindrical":
# Assuming the cylindrical radial coordinate
# is positive makes nice simplifications of
# unit vectors possible.
xx[0] = sp.symbols("xx0", real=True)
RHOMAX,ZMIN,ZMAX = par.Cparameters("REAL",thismodule,["RHOMAX","ZMIN","ZMAX"],[10.0,-10.0,10.0])
xxmin = [sp.sympify(0), -M_PI, ZMIN]
xxmax = [ RHOMAX, M_PI, ZMAX]
RHOCYL = xx[0]
PHICYL = xx[1]
ZCYL = xx[2]
Cart_to_xx[0] = sp.sqrt(Cartx ** 2 + Carty ** 2)
Cart_to_xx[1] = sp.atan2(Carty, Cartx)
Cart_to_xx[2] = Cartz
xxCart[0] = RHOCYL*sp.cos(PHICYL)
xxCart[1] = RHOCYL*sp.sin(PHICYL)
xxCart[2] = ZCYL
xxSph[0] = sp.sqrt(RHOCYL**2 + ZCYL**2)
xxSph[1] = sp.acos(ZCYL / xxSph[0])
xxSph[2] = PHICYL
scalefactor_orthog[0] = sp.diff(RHOCYL,xx[0])
scalefactor_orthog[1] = RHOCYL
scalefactor_orthog[2] = sp.diff(ZCYL,xx[2])
# Set the unit vectors
UnitVectors = [[ sp.cos(PHICYL), sp.sin(PHICYL), sp.sympify(0)],
[-sp.sin(PHICYL), sp.cos(PHICYL), sp.sympify(0)],
[ sp.sympify(0), sp.sympify(0), sp.sympify(1)]]
```
Next let's plot **"Cylindrical"** coordinates.
```python
%matplotlib inline
import numpy as np # NumPy: A numerical methods module for Python
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
R = np.linspace(0, 2, 24)
h = 2
u = np.linspace(0, 2*np.pi, 24)
x = np.outer(R, np.cos(u))
y = np.outer(R, np.sin(u))
z = h * np.outer(np.ones(np.size(u)), np.ones(np.size(u)))
r = np.arange(0,2,0.25)
theta = 2*np.pi*r*0
fig = plt.figure(figsize=(12,12)) # 12 in x 12 in
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1 = plt.axes(projection='polar')
ax1.set_rmax(2)
ax1.set_rgrids(r,labels=[])
thetas = np.linspace(0,360,24, endpoint=True)
ax1.set_thetagrids(thetas,labels=[])
# ax.grid(True)
ax1.grid(True,linewidth='1.0')
ax1.set_title("Top Down View")
plt.show()
ax2 = plt.axes(projection='3d', xticklabels=[], yticklabels=[], zticklabels=[])
#ax2.plot_surface(x,y,z, alpha=.75, cmap = 'viridis') # z in case of disk which is parallel to XY plane is constant and you can directly use h
x=np.linspace(-2, 2, 100)
z=np.linspace(-2, 2, 100)
Xc, Zc=np.meshgrid(x, z)
Yc = np.sqrt(4-Xc**2)
rstride = 10
cstride = 10
ax2.plot_surface(Xc, Yc, Zc, alpha=1.0, rstride=rstride, cstride=cstride, cmap = 'viridis')
ax2.plot_surface(Xc, -Yc, Zc, alpha=1.0, rstride=rstride, cstride=cstride, cmap = 'viridis')
ax2.set_title("Standard Cylindrical Grid in 3D")
ax2.grid(False)
plt.axis('off')
plt.show()
```
<a id='sinhcylindrical'></a>
### Step 3.b.ii: **`reference_metric::CoordSystem = "SinhCylindrical"`** \[Back to [top](#toc)\]
$$\label{sinhcylindrical}$$
Cylindrical coordinates, but with
$$\rho(xx_0) = \text{AMPLRHO} \frac{\sinh\left(\frac{xx_0}{\text{SINHWRHO}}\right)}{\sinh\left(\frac{1}{\text{SINHWRHO}}\right)}$$
and
$$z(xx_2) = \text{AMPLZ} \frac{\sinh\left(\frac{xx_2}{\text{SINHWZ}}\right)}{\sinh\left(\frac{1}{\text{SINHWZ}}\right)}$$
```python
if CoordSystem == "SinhCylindrical":
# Assuming the cylindrical radial coordinate
# is positive makes nice simplifications of
# unit vectors possible.
xx[0] = sp.symbols("xx0", real=True)
xxmin = [sp.sympify(0), -M_PI, sp.sympify(-1)]
xxmax = [sp.sympify(1), M_PI, sp.sympify(+1)]
AMPLRHO, SINHWRHO, AMPLZ, SINHWZ = par.Cparameters("REAL",thismodule,
["AMPLRHO","SINHWRHO","AMPLZ","SINHWZ"],
[ 10.0, 0.2, 10.0, 0.2])
# Set SinhCylindrical radial & z coordinates by default; overwrite later if CoordSystem == "SinhCylindricalv2".
RHOCYL = AMPLRHO * (sp.exp(xx[0] / SINHWRHO) - sp.exp(-xx[0] / SINHWRHO)) / (sp.exp(1 / SINHWRHO) - sp.exp(-1 / SINHWRHO))
# phi coordinate remains unchanged.
PHICYL = xx[1]
ZCYL = AMPLZ * (sp.exp(xx[2] / SINHWZ) - sp.exp(-xx[2] / SINHWZ)) / (sp.exp(1 / SINHWZ) - sp.exp(-1 / SINHWZ))
Cart_to_xx[0] = SINHWRHO*sp.asinh(sp.sqrt(Cartx ** 2 + Carty ** 2)*sp.sinh(1/SINHWRHO)/AMPLRHO)
Cart_to_xx[1] = sp.atan2(Carty, Cartx)
Cart_to_xx[2] = SINHWZ*sp.asinh(Cartz*sp.sinh(1/SINHWZ)/AMPLZ)
xxCart[0] = RHOCYL*sp.cos(PHICYL)
xxCart[1] = RHOCYL*sp.sin(PHICYL)
xxCart[2] = ZCYL
xxSph[0] = sp.sqrt(RHOCYL**2 + ZCYL**2)
xxSph[1] = sp.acos(ZCYL / xxSph[0])
xxSph[2] = PHICYL
scalefactor_orthog[0] = sp.diff(RHOCYL,xx[0])
scalefactor_orthog[1] = RHOCYL
scalefactor_orthog[2] = sp.diff(ZCYL,xx[2])
# Set the unit vectors
UnitVectors = [[ sp.cos(PHICYL), sp.sin(PHICYL), sp.sympify(0)],
[-sp.sin(PHICYL), sp.cos(PHICYL), sp.sympify(0)],
[ sp.sympify(0), sp.sympify(0), sp.sympify(1)]]
```
Next let's plot **"SinhCylindrical"** coordinates.
```python
fig=plt.figure()
plt.clf()
fig = plt.figure()
ax = plt.subplot(1,1,1, projection='polar')
ax.set_rmax(2)
Nr = 20
xx0s = np.linspace(0,2,Nr, endpoint=True) + 1.0/(2.0*Nr)
rs = []
AMPLRHO = 1.0
SINHW = 0.4
for i in range(Nr):
rs.append(AMPLRHO * (np.exp(xx0s[i] / SINHW) - np.exp(-xx0s[i] / SINHW)) / \
(np.exp(1.0 / SINHW) - np.exp(-1.0 / SINHW)))
ax.set_rgrids(rs,labels=[])
thetas = np.linspace(0,360,25, endpoint=True)
ax.set_thetagrids(thetas,labels=[])
# ax.grid(True)
ax.grid(True,linewidth='1.0')
plt.show()
```
<a id='sinhcylindricalv2'></a>
### Step 3.b.iii: **`reference_metric::CoordSystem = "SinhCylindricalv2"`** \[Back to [top](#toc)\]
$$\label{sinhcylindricalv2}$$
Cylindrical coordinates, but with
$$\rho(xx_0) = \text{AMPLRHO} \left[\text{const_drho}\ xx_0 + \frac{\sinh\left(\frac{xx_0}{\text{SINHWRHO}}\right)}{\sinh\left(\frac{1}{\text{SINHWRHO}}\right)}\right]$$
and
$$z(xx_2) = \text{AMPLZ} \left[\text{const_dz}\ xx_2 + \frac{\sinh\left(\frac{xx_2}{\text{SINHWZ}}\right)}{\sinh\left(\frac{1}{\text{SINHWZ}}\right)}\right]$$
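Here is a small standalone SymPy sketch (separate from the module code below) showing why the `const_drho` term guarantees a minimum radial resolution near $xx_0 = 0$: the local grid spacing there is $\frac{d\rho}{dxx_0}\,\Delta xx_0$, and the derivative never drops below $\text{AMPLRHO}\cdot\text{const_drho}$.
```python
# Standalone check of the radial derivative at xx0 = 0 for SinhCylindricalv2.
import sympy as sp

xx0, AMPLRHO, SINHWRHO, const_drho = sp.symbols("xx0 AMPLRHO SINHWRHO const_drho", positive=True)
rho = AMPLRHO * (const_drho * xx0 + sp.sinh(xx0 / SINHWRHO) / sp.sinh(1 / SINHWRHO))
drho_dxx0 = sp.diff(rho, xx0)
print(sp.simplify(drho_dxx0.subs(xx0, 0)))
# Expected (possibly in expanded form): AMPLRHO*(const_drho + 1/(SINHWRHO*sinh(1/SINHWRHO))).
# With the defaults (AMPLRHO=10, SINHWRHO=0.2, const_drho=0.0625) this is about 1.30,
# and the const_drho piece dominates because sinh(1/SINHWRHO) is large.
```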
```python
if CoordSystem == "SinhCylindricalv2":
# Assuming the cylindrical radial coordinate
# is positive makes nice simplifications of
# unit vectors possible.
xx[0] = sp.symbols("xx0", real=True)
# SinhCylindricalv2 adds the parameters "const_drho", "const_dz", which allows for regions near xx[0]=0
# and xx[2]=0 to have constant rho and z resolution of const_drho and const_dz, provided the sinh() terms
# do not dominate near xx[0]=0 and xx[2]=0.
xxmin = [sp.sympify(0), -M_PI, sp.sympify(-1)]
xxmax = [sp.sympify(1), M_PI, sp.sympify(+1)]
AMPLRHO, SINHWRHO, AMPLZ, SINHWZ = par.Cparameters("REAL",thismodule,
["AMPLRHO","SINHWRHO","AMPLZ","SINHWZ"],
[ 10.0, 0.2, 10.0, 0.2])
const_drho, const_dz = par.Cparameters("REAL",thismodule,["const_drho","const_dz"],[0.0625,0.0625])
RHOCYL = AMPLRHO * ( const_drho*xx[0] + (sp.exp(xx[0] / SINHWRHO) - sp.exp(-xx[0] / SINHWRHO)) / (sp.exp(1 / SINHWRHO) - sp.exp(-1 / SINHWRHO)) )
PHICYL = xx[1]
ZCYL = AMPLZ * ( const_dz *xx[2] + (sp.exp(xx[2] / SINHWZ ) - sp.exp(-xx[2] / SINHWZ )) / (sp.exp(1 / SINHWZ ) - sp.exp(-1 / SINHWZ )) )
# NO CLOSED-FORM EXPRESSION FOR RADIAL OR Z INVERSION.
# Cart_to_xx[0] = "NewtonRaphson"
# Cart_to_xx[1] = sp.atan2(Carty, Cartx)
# Cart_to_xx[2] = "NewtonRaphson"
xxCart[0] = RHOCYL*sp.cos(PHICYL)
xxCart[1] = RHOCYL*sp.sin(PHICYL)
xxCart[2] = ZCYL
xxSph[0] = sp.sqrt(RHOCYL**2 + ZCYL**2)
xxSph[1] = sp.acos(ZCYL / xxSph[0])
xxSph[2] = PHICYL
scalefactor_orthog[0] = sp.diff(RHOCYL,xx[0])
scalefactor_orthog[1] = RHOCYL
scalefactor_orthog[2] = sp.diff(ZCYL,xx[2])
# Set the unit vectors
UnitVectors = [[ sp.cos(PHICYL), sp.sin(PHICYL), sp.sympify(0)],
[-sp.sin(PHICYL), sp.cos(PHICYL), sp.sympify(0)],
[ sp.sympify(0), sp.sympify(0), sp.sympify(1)]]
```
For example, let's set up **`SinhCylindricalv2`** coordinates and output the Christoffel symbol $\hat{\Gamma}^{xx_2}_{xx_2 xx_2}$, or more simply $\hat{\Gamma}^2_{22}$:
```python
par.set_parval_from_str("reference_metric::CoordSystem","SinhCylindricalv2")
rfm.reference_metric()
sp.pretty_print(sp.simplify(rfm.GammahatUDD[2][2][2]))
```
which evaluates to
$$\hat{\Gamma}^2_{22} = \frac{-\left(e^{\frac{2\,xx_2}{\text{SINHWZ}}} - 1\right) e^{\frac{1}{\text{SINHWZ}}}}{\text{SINHWZ}\left[-\text{SINHWZ}\,\text{const_dz}\left(e^{\frac{2}{\text{SINHWZ}}} - 1\right)e^{\frac{xx_2}{\text{SINHWZ}}} - \left(e^{\frac{2\,xx_2}{\text{SINHWZ}}} + 1\right)e^{\frac{1}{\text{SINHWZ}}}\right]}$$
As we will soon see, defining these "hatted" quantities will be quite useful when expressing hyperbolic ([wave-equation](https://en.wikipedia.org/wiki/Wave_equation)-like) PDEs in non-Cartesian coordinate systems.
<a id='cartesianlike'></a>
## Step 3.c: Cartesian-like coordinate systems \[Back to [top](#toc)\]
$$\label{cartesianlike}$$
<a id='cartesian'></a>
### Step 3.c.i: **`reference_metric::CoordSystem = "Cartesian"`** \[Back to [top](#toc)\]
$$\label{cartesian}$$
Standard Cartesian coordinates, with $(x,y,z)=$ `(xx0,xx1,xx2)`
```python
if CoordSystem == "Cartesian":
xmin, xmax, ymin, ymax, zmin, zmax = par.Cparameters("REAL",thismodule,
["xmin","xmax","ymin","ymax","zmin","zmax"],
[ -10.0, 10.0, -10.0, 10.0, -10.0, 10.0])
xxmin = ["xmin", "ymin", "zmin"]
xxmax = ["xmax", "ymax", "zmax"]
xxCart[0] = xx[0]
xxCart[1] = xx[1]
xxCart[2] = xx[2]
xxSph[0] = sp.sqrt(xx[0] ** 2 + xx[1] ** 2 + xx[2] ** 2)
xxSph[1] = sp.acos(xx[2] / xxSph[0])
xxSph[2] = sp.atan2(xx[1], xx[0])
Cart_to_xx[0] = Cartx
Cart_to_xx[1] = Carty
Cart_to_xx[2] = Cartz
scalefactor_orthog[0] = sp.sympify(1)
scalefactor_orthog[1] = sp.sympify(1)
scalefactor_orthog[2] = sp.sympify(1)
# Set the transpose of the matrix of unit vectors
UnitVectors = [[sp.sympify(1), sp.sympify(0), sp.sympify(0)],
[sp.sympify(0), sp.sympify(1), sp.sympify(0)],
[sp.sympify(0), sp.sympify(0), sp.sympify(1)]]
```
```python
%matplotlib inline
import numpy as np # NumPy: A numerical methods module for Python
import matplotlib.pyplot as plt # matplotlib: Python module specializing in plotting capabilities
plt.clf()
fig = plt.figure()
ax = fig.gca()
Nx = 16
ax.set_xticks(np.arange(0, 1., 1./Nx))
ax.set_yticks(np.arange(0, 1., 1./Nx))
for tick in ax.get_xticklabels():
tick.set_rotation(60)
# plt.scatter(x, y)
ax.set_aspect('equal')
plt.grid()
# plt.savefig("Cartgrid.png",dpi=300)
plt.show()
# plt.close(fig)
```
<a id='sinhcartesian'></a>
### Step 3.c.ii: **`reference_metric::CoordSystem = "SinhCartesian"`** \[Back to [top](#toc)\]
$$\label{sinhcartesian}$$
In this coordinate system, all three coordinates behave like the $z$-coordinate in SinhCylindrical coordinates, i.e.
$$
\begin{align}
x(xx_0) &= \text{AMPLX} \left[\frac{\sinh\left(\frac{xx_0}{\text{SINHWX}}\right)}{\sinh\left(\frac{1}{\text{SINHWX}}\right)}\right]\ ,\\
y(xx_1) &= \text{AMPLY} \left[\frac{\sinh\left(\frac{xx_1}{\text{SINHWY}}\right)}{\sinh\left(\frac{1}{\text{SINHWY}}\right)}\right]\ ,\\
z(xx_2) &= \text{AMPLZ} \left[\frac{\sinh\left(\frac{xx_2}{\text{SINHWZ}}\right)}{\sinh\left(\frac{1}{\text{SINHWZ}}\right)}\right]\ .
\end{align}
$$
```python
if CoordSystem == "SinhCartesian":
# SinhCartesian coordinates allows us to push the outer boundary of the
# computational domain a lot further away, while keeping reasonably high
# resolution towards the center of the computational grid.
# Set default values for min and max (x,y,z)
xxmin = [sp.sympify(-1), sp.sympify(-1), sp.sympify(-1)]
xxmax = [sp.sympify(+1), sp.sympify(+1), sp.sympify(+1)]
# Declare basic parameters of the coordinate system and their default values
AMPLX,SINHWX,AMPLY,SINHWY,AMPLZ,SINHWZ = par.Cparameters("REAL",thismodule,
["AMPLX","SINHWX","AMPLY","SINHWY","AMPLZ","SINHWZ"],
[ 10.0, 0.2, 10.0, 0.2, 10.0, 0.2])
# Compute (xxCart0,xxCart1,xxCart2) from (xx0,xx1,xx2)
xxCart[0] = AMPLX*(sp.exp(xx[0]/SINHWX) - sp.exp(-xx[0]/SINHWX))/(sp.exp(1/SINHWX) - sp.exp(-1/SINHWX))
xxCart[1] = AMPLY*(sp.exp(xx[1]/SINHWY) - sp.exp(-xx[1]/SINHWY))/(sp.exp(1/SINHWY) - sp.exp(-1/SINHWY))
xxCart[2] = AMPLZ*(sp.exp(xx[2]/SINHWZ) - sp.exp(-xx[2]/SINHWZ))/(sp.exp(1/SINHWZ) - sp.exp(-1/SINHWZ))
# Compute (r,th,ph) from (xxCart0,xxCart1,xxCart2)
xxSph[0] = sp.sqrt(xxCart[0] ** 2 + xxCart[1] ** 2 + xxCart[2] ** 2)
xxSph[1] = sp.acos(xxCart[2] / xxSph[0])
xxSph[2] = sp.atan2(xxCart[1], xxCart[0])
# Compute (xx0,xx1,xx2) from (Cartx,Carty,Cartz)
Cart_to_xx[0] = SINHWX*sp.asinh(Cartx*(sp.exp(1/SINHWX) - sp.exp(-1/SINHWX))/(2*AMPLX))
Cart_to_xx[1] = SINHWY*sp.asinh(Carty*(sp.exp(1/SINHWY) - sp.exp(-1/SINHWY))/(2*AMPLY))
Cart_to_xx[2] = SINHWZ*sp.asinh(Cartz*(sp.exp(1/SINHWZ) - sp.exp(-1/SINHWZ))/(2*AMPLZ))
# Compute scale factors
scalefactor_orthog[0] = sp.diff(xxCart[0],xx[0])
scalefactor_orthog[1] = sp.diff(xxCart[1],xx[1])
scalefactor_orthog[2] = sp.diff(xxCart[2],xx[2])
# Set the transpose of the matrix of unit vectors
UnitVectors = [[sp.sympify(1), sp.sympify(0), sp.sympify(0)],
[sp.sympify(0), sp.sympify(1), sp.sympify(0)],
[sp.sympify(0), sp.sympify(0), sp.sympify(1)]]
```
```python
%matplotlib inline
import numpy as np # NumPy: A numerical methods module for Python
import matplotlib.pyplot as plt # matplotlib: Python module specializing in plotting capabilities
plt.clf()
fig = plt.figure()
ax = fig.gca()
# Set plot title
ax.set_title(r"$z=0$ slice of the 3D grid")
# Set SINH parameters. Here we assume:
#
# AMPLX = AMPLY = SINHA
# SINHWX = SINHWY = SINHW
SINHA = 10.0
SINHW = 0.3
# Set number of points. We assume the same point
# distribution along the (x,y)-directions
Nxxs = 20
xxis = np.linspace(-1,1,Nxxs, endpoint=True)
# Compute axis ticks by evaluating x and y using SinhCartesian coordinates
axis_ticks = []
for i in range(Nxxs):
axis_ticks.append(SINHA * (np.exp(xxis[i] / SINHW) - np.exp(-xxis[i] / SINHW)) / \
(np.exp(1.0 / SINHW) - np.exp(-1.0 / SINHW)))
# Set the axis ticks
ax.set_xticks(axis_ticks)
ax.set_yticks(axis_ticks)
# Set x and y labels. Initialize array with empty strings
labelsx = ["" for i in range(Nxxs)]
labelsy = ["" for i in range(Nxxs)]
# Set x_min and x_max tick label
labelsx[0] = r"-AMPLX"
labelsx[-1] = r"AMPLX"
# Set y_min and y_max tick label
labelsy[0] = r"-AMPLY"
labelsy[-1] = r"AMPLY"
# Set tick labels
ax.set_xticklabels(labelsx)
ax.set_yticklabels(labelsy)
# Rotate x labels by 60 degrees
for tick in ax.get_xticklabels():
tick.set_rotation(60)
# Draw the x=0 and y=0 ticklabel
ax.text(0,-11,"0",ha="center",va="center")
ax.text(-11,0,"0",ha="center",va="center")
# plt.scatter(x, y)
ax.set_aspect('equal')
plt.grid()
# plt.savefig("Cartgrid.png",dpi=300)
plt.show()
# plt.close(fig)
```
<a id='prolatespheroidal'></a>
## Step 3.d: [Prolate spheroidal](https://en.wikipedia.org/wiki/Prolate_spheroidal_coordinates)-like coordinate systems \[Back to [top](#toc)\]
$$\label{prolatespheroidal}$$
<a id='symtp'></a>
### Step 3.d.i: **`reference_metric::CoordSystem = "SymTP"`** \[Back to [top](#toc)\]
$$\label{symtp}$$
Symmetric TwoPuncture coordinates, with $(\rho,\phi,z)=(xx_0\sin(xx_1), xx_2, \sqrt{xx_0^2 + \text{bScale}^2}\cos(xx_1))$
```python
if CoordSystem == "SymTP":
var1, var2= sp.symbols('var1 var2',real=True)
bScale, AW, AMAX, RHOMAX, ZMIN, ZMAX = par.Cparameters("REAL",thismodule,
["bScale","AW","AMAX","RHOMAX","ZMIN","ZMAX"],
[0.5, 0.2, 10.0, 10.0, -10.0, 10.0])
# Assuming xx0, xx1, and bScale
# are positive makes nice simplifications of
# unit vectors possible.
xx[0],xx[1] = sp.symbols("xx0 xx1", real=True)
xxmin = [sp.sympify(0), sp.sympify(0),-M_PI]
xxmax = [ AMAX, M_PI, M_PI]
AA = xx[0]
if CoordSystem == "SinhSymTP":
AA = (sp.exp(xx[0]/AW)-sp.exp(-xx[0]/AW))/2
var1 = sp.sqrt(AA**2 + (bScale * sp.sin(xx[1]))**2)
var2 = sp.sqrt(AA**2 + bScale**2)
RHOSYMTP = AA*sp.sin(xx[1])
PHSYMTP = xx[2]
ZSYMTP = var2*sp.cos(xx[1])
xxCart[0] = AA *sp.sin(xx[1])*sp.cos(xx[2])
xxCart[1] = AA *sp.sin(xx[1])*sp.sin(xx[2])
xxCart[2] = ZSYMTP
xxSph[0] = sp.sqrt(RHOSYMTP**2 + ZSYMTP**2)
xxSph[1] = sp.acos(ZSYMTP / xxSph[0])
xxSph[2] = PHSYMTP
rSph = sp.sqrt(Cartx ** 2 + Carty ** 2 + Cartz ** 2)
thSph = sp.acos(Cartz / rSph)
phSph = sp.atan2(Carty, Cartx)
# Mathematica script to compute Cart_to_xx[]
# AA = x1;
# var2 = Sqrt[AA^2 + bScale^2];
# RHOSYMTP = AA*Sin[x2];
# ZSYMTP = var2*Cos[x2];
# Solve[{rSph == Sqrt[RHOSYMTP^2 + ZSYMTP^2],
# thSph == ArcCos[ZSYMTP/Sqrt[RHOSYMTP^2 + ZSYMTP^2]],
# phSph == x3},
# {x1, x2, x3}]
Cart_to_xx[0] = sp.sqrt(-bScale**2 + rSph**2 +
sp.sqrt(bScale**4 + 2*bScale**2*rSph**2 + rSph**4 -
4*bScale**2*rSph**2*sp.cos(thSph)**2))*M_SQRT1_2 # M_SQRT1_2 = 1/sqrt(2); define this way for UnitTesting
# The sign() function in the following expression ensures the correct root is taken.
Cart_to_xx[1] = sp.acos(sp.sign(Cartz)*(
sp.sqrt(1 + rSph**2/bScale**2 -
sp.sqrt(bScale**4 + 2*bScale**2*rSph**2 + rSph**4 -
4*bScale**2*rSph**2*sp.cos(thSph)**2)/bScale**2)*M_SQRT1_2)) # M_SQRT1_2 = 1/sqrt(2); define this way for UnitTesting
Cart_to_xx[2] = phSph
```
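As a quick sanity check of the Mathematica-derived inversion above, here is a small standalone NumPy sketch (not part of the module) that pushes an arbitrary $(xx_0, xx_1, xx_2)$ point through the forward SymTP map and then back through the `Cart_to_xx` expressions; the test values are made up purely for illustration.
```python
# Round-trip test of the SymTP coordinate inversion, using arbitrary test values.
import numpy as np

bScale = 0.5
xx0, xx1, xx2 = 1.2, 0.7, 0.3

# Forward map (same formulas as in the code block above)
AA   = xx0
rho  = AA * np.sin(xx1)
z    = np.sqrt(AA**2 + bScale**2) * np.cos(xx1)
x, y = rho * np.cos(xx2), rho * np.sin(xx2)

# Inverse map, following Cart_to_xx[0..2]
rSph  = np.sqrt(x**2 + y**2 + z**2)
thSph = np.arccos(z / rSph)
inner = np.sqrt(bScale**4 + 2*bScale**2*rSph**2 + rSph**4
                - 4*bScale**2*rSph**2*np.cos(thSph)**2)
xx0_back = np.sqrt(-bScale**2 + rSph**2 + inner) / np.sqrt(2.0)
xx1_back = np.arccos(np.sign(z) * np.sqrt(1 + rSph**2/bScale**2 - inner/bScale**2) / np.sqrt(2.0))
xx2_back = np.arctan2(y, x)

print(xx0_back, xx1_back, xx2_back)   # should recover ~1.2, ~0.7, ~0.3
```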
<a id='sinhsymtp'></a>
### Step 3.d.ii: **`reference_metric::CoordSystem = "SinhSymTP"`** \[Back to [top](#toc)\]
$$\label{sinhsymtp}$$
Symmetric TwoPuncture coordinates, but with $$xx_0 \to \sinh(xx_0/\text{AW})$$
```python
if CoordSystem == "SinhSymTP":
var1, var2= sp.symbols('var1 var2',real=True)
bScale, AW, AMAX, RHOMAX, ZMIN, ZMAX = par.Cparameters("REAL",thismodule,
["bScale","AW","AMAX","RHOMAX","ZMIN","ZMAX"],
[0.5, 0.2, 10.0, 10.0, -10.0, 10.0])
# Assuming xx0, xx1, and bScale
# are positive makes nice simplifications of
# unit vectors possible.
xx[0],xx[1] = sp.symbols("xx0 xx1", real=True)
xxmin = [sp.sympify(0), sp.sympify(0),-M_PI]
xxmax = [ AMAX, M_PI, M_PI]
AA = xx[0]
if CoordSystem == "SinhSymTP":
# With xxmax[0] == AMAX, sinh(xx0/AMAX) will evaluate to a number between 0 and 1.
# Similarly, sinh(xx0/(AMAX*SINHWAA)) / sinh(1/SINHWAA) will also evaluate to a number between 0 and 1.
# Then AA = AMAX*sinh(xx0/(AMAX*SINHWAA)) / sinh(1/SINHWAA) will evaluate to a number between 0 and AMAX.
AA = AMAX * (sp.exp(xx[0] / (AMAX*SINHWAA)) - sp.exp(-xx[0] / (AMAX*SINHWAA))) / (sp.exp(1 / SINHWAA) - sp.exp(-1 / AMAX))
var1 = sp.sqrt(AA**2 + (bScale * sp.sin(xx[1]))**2)
var2 = sp.sqrt(AA**2 + bScale**2)
RHOSYMTP = AA*sp.sin(xx[1])
PHSYMTP = xx[2]
ZSYMTP = var2*sp.cos(xx[1])
xxCart[0] = AA *sp.sin(xx[1])*sp.cos(xx[2])
xxCart[1] = AA *sp.sin(xx[1])*sp.sin(xx[2])
xxCart[2] = ZSYMTP
xxSph[0] = sp.sqrt(RHOSYMTP**2 + ZSYMTP**2)
xxSph[1] = sp.acos(ZSYMTP / xxSph[0])
xxSph[2] = PHSYMTP
scalefactor_orthog[0] = sp.diff(AA,xx[0]) * var1 / var2
scalefactor_orthog[1] = var1
scalefactor_orthog[2] = AA * sp.sin(xx[1])
# Set the transpose of the matrix of unit vectors
UnitVectors = [[sp.sin(xx[1]) * sp.cos(xx[2]) * var2 / var1,
sp.sin(xx[1]) * sp.sin(xx[2]) * var2 / var1,
AA * sp.cos(xx[1]) / var1],
[AA * sp.cos(xx[1]) * sp.cos(xx[2]) / var1,
AA * sp.cos(xx[1]) * sp.sin(xx[2]) / var1,
-sp.sin(xx[1]) * var2 / var1],
[-sp.sin(xx[2]), sp.cos(xx[2]), sp.sympify(0)]]
```
<a id='latex_pdf_output'></a>
# Step 4: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
$$\label{latex_pdf_output}$$
The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
[Tutorial-Reference_Metric.pdf](Tutorial-Reference_Metric.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
```python
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-Reference_Metric")
```
Created Tutorial-Reference_Metric.tex, and compiled LaTeX file to PDF file
Tutorial-Reference_Metric.pdf
| 0afb65fc7831ae25a9a01c1837b4796eab524c0d | 346,548 | ipynb | Jupyter Notebook | Tutorial-Reference_Metric.ipynb | terrencepierrej/nrpytutorial | 3ea18beed99cf6b7d67c89c140ca68630452001e | ["BSD-2-Clause"] | null | null | null | Tutorial-Reference_Metric.ipynb | terrencepierrej/nrpytutorial | 3ea18beed99cf6b7d67c89c140ca68630452001e | ["BSD-2-Clause"] | null | null | null | Tutorial-Reference_Metric.ipynb | terrencepierrej/nrpytutorial | 3ea18beed99cf6b7d67c89c140ca68630452001e | ["BSD-2-Clause"] | null | null | null | 192.84808 | 53,608 | 0.878331 | true | 15,079 | Qwen/Qwen-72B | 1. YES 2. YES | 0.855851 | 0.647798 | 0.554419 | __label__eng_Latn | 0.42735 | 0.12643 |
# Unit Testing `GiRaFFE_NRPy`: $A_k$ to $B^i$
### Author: Patrick Nelson
This notebook validates our A-to-B solver for use in `GiRaFFE_NRPy`. Because the original `GiRaFFE` used staggered grids and we do not, we can not trivially do a direct comparison to the old code. Instead, we will compare the numerical results with the expected analytic results.
**Module Status:** <font color=red><b> In-Progress </b></font>
**Validation Notes:** This module will validate the routines in [Tutorial-GiRaFFE_HO_C_code_library-A2B](../Tutorial-GiRaFFE_HO_C_code_library-A2B.ipynb).
It is, in general, good coding practice to unit test functions individually to verify that they produce the expected and intended output. Here, we expect our functions to produce the correct cross product in an arbitrary spacetime. To that end, we will choose functions that are easy to differentiate, but lack the symmetries that would trivialize the finite-difference algorithm. Higher-order polynomials are one such type of function.
We will start with the simplest case - testing the second-order solver. In second-order finite-differencing, we use a three-point stencil that can exactly differentiate polynomials up to quadratic. So, we will use cubic functions of three variables. For instance,
\begin{align}
A_x &= ax^3 + by^3 + cz^3 + dy^2 + ez^2 + f \\
A_y &= gx^3 + hy^3 + lz^3 + mx^2 + nz^2 + o \\
A_z &= px^3 + qy^3 + rz^3 + sx^2 + ty^2 + u. \\
\end{align}
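As a quick standalone illustration of why cubics are the right test functions here (this sketch is separate from the NRPy+-generated kernels used below), a centered three-point stencil differentiates any quadratic exactly, while for a cubic the leftover error scales as $(\Delta x)^2$:
```python
# Standalone demonstration: centered differences are exact for quadratics,
# and have O(dx^2) truncation error for cubics.
def centered_diff(f, x, dx):
    return (f(x + dx) - f(x - dx)) / (2.0 * dx)

x0 = 0.37
quadratic = lambda x: 3.0*x**2 + 2.0*x + 1.0   # exact derivative: 6x + 2
cubic     = lambda x: x**3                     # exact derivative: 3x^2

for dx in (1e-1, 1e-2, 1e-3):
    err_quad  = centered_diff(quadratic, x0, dx) - (6.0*x0 + 2.0)
    err_cubic = centered_diff(cubic,     x0, dx) - 3.0*x0**2
    print(f"dx={dx:.0e}:  quadratic error = {err_quad: .2e},  cubic error = {err_cubic: .2e}")
# The quadratic error is roundoff-level; the cubic error shrinks by ~100x for each 10x in dx.
```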
It will be much simpler to let NRPy+ handle most of this work. So, we will import the core functionality of NRPy+, build the expressions, and then output them using `outputC()`.
```python
import shutil, os, sys # Standard Python modules for multiplatform OS-level functions
# First, we'll add the parent directory to the list of directories Python will check for modules.
nrpy_dir_path = os.path.join("..")
if nrpy_dir_path not in sys.path:
sys.path.append(nrpy_dir_path)
from outputC import * # NRPy+: Core C code output module
import finite_difference as fin # NRPy+: Finite difference C code generation module
import NRPy_param_funcs as par # NRPy+: Parameter interface
import grid as gri # NRPy+: Functions having to do with numerical grids
import loop as lp # NRPy+: Generate C code loops
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import reference_metric as rfm # NRPy+: Reference metric support
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
out_dir = "Validation/"
cmd.mkdir(out_dir)
thismodule = "Unit_Test_GiRaFFE_NRPy_Ccode_library_A2B"
a,b,c,d,e,f,g,h,l,m,n,o,p,q,r,s,t,u = par.Cparameters("REAL",thismodule,["a","b","c","d","e","f","g","h","l","m","n","o","p","q","r","s","t","u"],10.0)
gammadet = gri.register_gridfunctions("AUXEVOL","gammadet")
DIM = 3
par.set_parval_from_str("grid::DIM",DIM)
par.set_parval_from_str("reference_metric::CoordSystem","Cartesian")
rfm.reference_metric()
x = rfm.xxCart[0]
y = rfm.xxCart[1]
z = rfm.xxCart[2]
AD = ixp.register_gridfunctions_for_single_rank1("EVOL","AD")
AD[0] = a*x**3 + b*y**3 + c*z**3 + d*y**2 + e*z**2 + f
AD[1] = g*x**3 + h*y**3 + l*z**3 + m*x**2 + n*z**2 + o
AD[2] = p*x**3 + q*y**3 + r*z**3 + s*x**2 + t*y**2 + u
```
Next, we'll let NRPy+ compute derivatives analytically according to $$B^i = \frac{[ijk]}{\sqrt{\gamma}} \partial_j A_k.$$ Then we can carry out two separate tests to verify the numerical derivatives. First, we will verify that when we let the cubic terms be zero, the two calculations of $B^i$ agree to roundoff error. Second, we will verify that when we turn on the cubic terms, our error is dominated by truncation error that converges to zero at the expected rate.
```python
import WeylScal4NRPy.WeylScalars_Cartesian as weyl
LeviCivitaDDD = weyl.define_LeviCivitaSymbol_rank3()
LeviCivitaUUU = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
LeviCivitaUUU[i][j][k] = LeviCivitaDDD[i][j][k] / sp.sqrt(gammadet)
B_analyticU = ixp.register_gridfunctions_for_single_rank1("AUXEVOL","B_analyticU")
for i in range(DIM):
B_analyticU[i] = 0
for j in range(DIM):
for k in range(DIM):
B_analyticU[i] += LeviCivitaUUU[i][j][k] * sp.diff(AD[k],rfm.xxCart[j])
```
Now that we have our vector potential and analytic magnetic field to compare against, we will start writing our unit test. We'll also import common C functionality, define `REAL`, the number of ghost zones, and the faces, and set the standard macros for NRPy+ style memory access.
```python
out_string = """
// These are common packages that we are likely to need.
#include "stdio.h"
#include "stdlib.h"
#include "math.h"
#include "string.h" // Needed for strncmp, etc.
#include "stdint.h" // Needed for Windows GCC 6.x compatibility
#include <time.h> // Needed to set a random seed.
# define REAL double
const int MAXFACE = -1;
const int NUL = +0;
const int MINFACE = +1;
const int NGHOSTS = 1;
// Standard NRPy+ memory access:
#define IDX4(g,i,j,k) \
( (i) + Nxx_plus_2NGHOSTS[0] * ( (j) + Nxx_plus_2NGHOSTS[1] * ( (k) + Nxx_plus_2NGHOSTS[2] * (g) ) ) )
#define IDX3(i,j,k) ( (i) + Nxx_plus_2NGHOSTS[0] * ( (j) + Nxx_plus_2NGHOSTS[1] * (k) ) )
// Assuming idx = IDX3(i,j,k). Much faster if idx can be reused over and over:
#define IDX4pt(g,idx) ( (idx) + (Nxx_plus_2NGHOSTS[0]*Nxx_plus_2NGHOSTS[1]*Nxx_plus_2NGHOSTS[2]) * (g) )
"""
```
We'll now define the gridfunction names.
```python
out_string += """
// Let's also #define the NRPy+ gridfunctions
#define AD0GF 0
#define AD1GF 1
#define AD2GF 2
#define NUM_EVOL_GFS 3
#define GAMMADETGF 0
#define B_ANALYTICU0GF 1
#define B_ANALYTICU1GF 2
#define B_ANALYTICU2GF 3
#define BU0GF 4
#define BU1GF 5
#define BU2GF 6
#define NUM_AUXEVOL_GFS 7
"""
```
Now, we'll handle the different A2B codes. There are several things to do here. First, we'll add `#include`s to the C code so that we have access to the functions we want to test. We must also create a directory and copy the files to that directory. We will choose to do this in the subfolder `A2B` relative to this tutorial.
```python
out_string += """
#include "../A2B/driver_AtoB.c" // This file contains both functions we need.
"""
cmd.mkdir(os.path.join("A2B/"))
shutil.copy(os.path.join("../GiRaFFE_HO/GiRaFFE_Ccode_library/A2B/driver_AtoB.c"),os.path.join("A2B/"))
```
We also should write a function that will use the analytic formulae for $B^i$. Then, we'll need to call the function from the module `GiRaFFE_HO_A2B` to generate the different header files. Also, we will declare the parameters for the vector potential functions.
```python
out_string += """
REAL a,b,c,d,e,f,g,h,l,m,n,o,p,q,r,s,t,u;
void calculate_exact_BU(const int Nxx_plus_2NGHOSTS[3],REAL *xx[3],double *auxevol_gfs) {
for(int i2=0;i2<Nxx_plus_2NGHOSTS[2];i2++) for(int i1=0;i1<Nxx_plus_2NGHOSTS[1];i1++) for(int i0=0;i0<Nxx_plus_2NGHOSTS[0];i0++) {
REAL xx0 = xx[0][i0];
REAL xx1 = xx[1][i1];
REAL xx2 = xx[2][i2];
"""
B_analyticU_to_print = [\
lhrh(lhs=gri.gfaccess("out_gfs","B_analyticU0"),rhs=B_analyticU[0]),\
lhrh(lhs=gri.gfaccess("out_gfs","B_analyticU1"),rhs=B_analyticU[1]),\
lhrh(lhs=gri.gfaccess("out_gfs","B_analyticU2"),rhs=B_analyticU[2]),\
]
B_analyticU_kernel = fin.FD_outputC("returnstring",B_analyticU_to_print,params="outCverbose=False")
out_string += B_analyticU_kernel
out_string += """
}
}
"""
gri.glb_gridfcs_list = []
import GiRaFFE_HO.GiRaFFE_HO_A2B as A2B
# We'll generate these into the A2B subdirectory since that's where the functions
# we're testing expect them to be.
A2B.GiRaFFE_HO_A2B("A2B/")
```
Wrote to file "A2B/B_from_A_order10.h"
Wrote to file "A2B/B_from_A_order8.h"
Wrote to file "A2B/B_from_A_order6.h"
Wrote to file "A2B/B_from_A_order4.h"
Wrote to file "A2B/B_from_A_order2.h"
Wrote to file "A2B/B_from_A_order2_dirx0_dnwind.h"
Wrote to file "A2B/B_from_A_order2_dirx0_upwind.h"
Wrote to file "A2B/B_from_A_order2_dirx1_dnwind.h"
Wrote to file "A2B/B_from_A_order2_dirx1_upwind.h"
Wrote to file "A2B/B_from_A_order2_dirx2_dnwind.h"
Wrote to file "A2B/B_from_A_order2_dirx2_upwind.h"
We'll now write a function to set the vector potential $A_k$. This simply uses NRPy+ to generate most of the code from the expressions we wrote at the beginning.
```python
out_string += """
void calculate_AD(const int Nxx_plus_2NGHOSTS[3],REAL *xx[3],double *out_gfs) {
for(int i2=0;i2<Nxx_plus_2NGHOSTS[2];i2++) for(int i1=0;i1<Nxx_plus_2NGHOSTS[1];i1++) for(int i0=0;i0<Nxx_plus_2NGHOSTS[0];i0++) {
REAL xx0 = xx[0][i0];
REAL xx1 = xx[1][i1];
REAL xx2 = xx[2][i2];
"""
AD_to_print = [\
lhrh(lhs=gri.gfaccess("out_gfs","AD0"),rhs=AD[0]),\
lhrh(lhs=gri.gfaccess("out_gfs","AD1"),rhs=AD[1]),\
lhrh(lhs=gri.gfaccess("out_gfs","AD2"),rhs=AD[2]),\
]
AD_kernel = fin.FD_outputC("returnstring",AD_to_print,params="outCverbose=False")
out_string += AD_kernel
out_string += """
}
}
"""
```
We will define the extent of our grid here.
```python
out_string += """
const REAL xmin = -0.01,xmax=0.01;
const REAL ymin = -0.01,ymax=0.01;
const REAL zmin = -0.01,zmax=0.01;
"""
```
Now, we'll write the main method. First, we'll set up the grid. In this test, we cannot use only one point. As we are testing a three-point stencil, we can get away with a minimal $3 \times 3 \times 3$ grid. Then, we'll write the A fields. After that, we'll calculate the magnetic field two ways.
```python
out_string += """
int main(int argc, const char *argv[]) {
// Let the first argument be the test we're doing. 1 = coarser grid, 0 = finer grid.
int do_quadratic_test = atoi(argv[4]);
// We'll use this grid. It has one point and one ghost zone.
const int Nxx[3] = {atoi(argv[1]),atoi(argv[2]),atoi(argv[3])};
int Nxx_plus_2NGHOSTS[3];
for (int i=0;i<3;i++) Nxx_plus_2NGHOSTS[i] = Nxx[i]+2*NGHOSTS;
const REAL xxmin[3] = {xmin,ymin,zmin};
const REAL xxmax[3] = {xmax,ymax,zmax};
REAL dxx[3];
for(int i=0;i<3;i++) dxx[i] = (xxmax[i] - xxmin[i]) / ((REAL)Nxx[i]+1.0);
// We'll define our grid slightly different from how we normally would. We let our outermost
// ghostzones coincide with xxmin and xxmax instead of the interior of the grid. This means
// that the ghostzone points will have identical positions so we can do convergence tests of them.
// Step 0d.ii: Set up uniform coordinate grids
REAL *xx[3];
for(int i=0;i<3;i++) {
xx[i] = (REAL *)malloc(sizeof(REAL)*Nxx_plus_2NGHOSTS[i]);
for(int j=0;j<Nxx_plus_2NGHOSTS[i];j++) {
xx[i][j] = xxmin[i] + ((REAL)(j))*dxx[i]; // Face-centered grid.
}
}
//for(int i=0;i<Nxx_plus_2NGHOSTS[0];i++) printf("xx[0][%d] = %.15e\\n",i,xx[0][i]);
// This is the array to which we'll write the NRPy+ variables.
REAL *auxevol_gfs = (REAL *)malloc(sizeof(REAL) * NUM_AUXEVOL_GFS * Nxx_plus_2NGHOSTS[2] * Nxx_plus_2NGHOSTS[1] * Nxx_plus_2NGHOSTS[0]);
REAL *evol_gfs = (REAL *)malloc(sizeof(REAL) * NUM_EVOL_GFS * Nxx_plus_2NGHOSTS[2] * Nxx_plus_2NGHOSTS[1] * Nxx_plus_2NGHOSTS[0]);
for(int i2=0;i2<Nxx_plus_2NGHOSTS[2];i2++) for(int i1=0;i1<Nxx_plus_2NGHOSTS[1];i1++) for(int i0=0;i0<Nxx_plus_2NGHOSTS[0];i0++) {
//auxevol_gfs[IDX4(GAMMADETGF,i0,i1,i2)] = 1.0; // Flat Space
auxevol_gfs[IDX4(GAMMADETGF,i0,i1,i2)] = 1.0 - 1.0/(2.0+xx[0][i0]*xx[0][i0]+xx[1][i1]*xx[1][i1]+xx[2][i2]*xx[2][i2]);
}
// We now want to set up the vector potential. First, we must set the coefficients.
// We will use random integers between -10 and 10. For the first test, we let the
// Cubic coefficients remain zero. Those are a,b,c,g,h,l,p,q, and r.
d = (double)(rand()%20-10);
e = (double)(rand()%20-10);
f = (double)(rand()%20-10);
m = (double)(rand()%20-10);
n = (double)(rand()%20-10);
o = (double)(rand()%20-10);
s = (double)(rand()%20-10);
t = (double)(rand()%20-10);
u = (double)(rand()%20-10);
if(do_quadratic_test) {
calculate_AD(Nxx_plus_2NGHOSTS,xx,evol_gfs);
// We'll also calculate the exact solution for B^i
calculate_exact_BU(Nxx_plus_2NGHOSTS,xx,auxevol_gfs);
// And now for the numerical derivatives:
driver_A_to_B(Nxx,Nxx_plus_2NGHOSTS,dxx,evol_gfs,auxevol_gfs);
printf("This test uses quadratic vector potentials, so the magnetic fields should agree to roundoff error.\\n");
printf("Below, each row represents one point. Each column represents a component of the magnetic field.\\n");
printf("Shown is the number of Significant Digits of Agreement, at least 13 is good, higher is better:\\n\\n");
//Two variables for inside the loop:
int ghost_zone_overlap;int indices[3];
for(int i2=0;i2<Nxx_plus_2NGHOSTS[2];i2++) for(int i1=0;i1<Nxx_plus_2NGHOSTS[1];i1++) for(int i0=0;i0<Nxx_plus_2NGHOSTS[0];i0++) {
// Are we on an edge/vertex? This algorithm can probably be improved.
ghost_zone_overlap = 0;
indices[0] = i0;
indices[1] = i1;
indices[2] = i2;
for(int dim=0;dim<3;dim++) {
if(indices[dim]%(Nxx[dim]+NGHOSTS)<NGHOSTS) {
ghost_zone_overlap++;
}
}
if (ghost_zone_overlap < 2) {
// Don't print if we're on an edge or vertex
printf("SDA: %.3f, %.3f, %.3f\\n",
1.0-log10(2.0*fabs(auxevol_gfs[IDX4(B_ANALYTICU0GF,i0,i1,i2)]-auxevol_gfs[IDX4(BU0GF,i0,i1,i2)])/(fabs(auxevol_gfs[IDX4(B_ANALYTICU0GF,i0,i1,i2)])+fabs(auxevol_gfs[IDX4(BU0GF,i0,i1,i2)])+1.e-15)),
1.0-log10(2.0*fabs(auxevol_gfs[IDX4(B_ANALYTICU1GF,i0,i1,i2)]-auxevol_gfs[IDX4(BU1GF,i0,i1,i2)])/(fabs(auxevol_gfs[IDX4(B_ANALYTICU1GF,i0,i1,i2)])+fabs(auxevol_gfs[IDX4(BU1GF,i0,i1,i2)])+1.e-15)),
1.0-log10(2.0*fabs(auxevol_gfs[IDX4(B_ANALYTICU2GF,i0,i1,i2)]-auxevol_gfs[IDX4(BU2GF,i0,i1,i2)])/(fabs(auxevol_gfs[IDX4(B_ANALYTICU2GF,i0,i1,i2)])+fabs(auxevol_gfs[IDX4(BU2GF,i0,i1,i2)])+1.e-15))
);
}
}
}
// Now, we'll set the cubic coefficients:
a = (double)(rand()%20-10);
b = (double)(rand()%20-10);
c = (double)(rand()%20-10);
g = (double)(rand()%20-10);
h = (double)(rand()%20-10);
l = (double)(rand()%20-10);
p = (double)(rand()%20-10);
q = (double)(rand()%20-10);
r = (double)(rand()%20-10);
// And recalculate on our initial grid:
calculate_AD(Nxx_plus_2NGHOSTS,xx,evol_gfs);
// We'll also calculate the exact solution for B^i
calculate_exact_BU(Nxx_plus_2NGHOSTS,xx,auxevol_gfs);
// And now for the numerical derivatives:
driver_A_to_B(Nxx,Nxx_plus_2NGHOSTS,dxx,evol_gfs,auxevol_gfs);
// Some variables needed for the loop:
int ghost_zone_overlap; int indices[3];
char filename[100];
sprintf(filename,"out%d-numer.txt",Nxx[0]);
FILE *out2D = fopen(filename, "w");
if(do_quadratic_test) {
for(int i2=0;i2<Nxx_plus_2NGHOSTS[2];i2++) for(int i1=0;i1<Nxx_plus_2NGHOSTS[1];i1++) for(int i0=0;i0<Nxx_plus_2NGHOSTS[0];i0++) {
ghost_zone_overlap = 0;
indices[0] = i0;
indices[1] = i1;
indices[2] = i2;
for(int dim=0;dim<3;dim++) {
if(indices[dim]%(Nxx[dim]+NGHOSTS)<NGHOSTS) {
ghost_zone_overlap++;
}
}
if (ghost_zone_overlap < 2) {
// We print the difference between approximate and exact numbers.
fprintf(out2D,"%.16e\t%.16e\t%.16e\\n",
auxevol_gfs[IDX4(B_ANALYTICU0GF,i0,i1,i2)]-auxevol_gfs[IDX4(BU0GF,i0,i1,i2)],
auxevol_gfs[IDX4(B_ANALYTICU1GF,i0,i1,i2)]-auxevol_gfs[IDX4(BU1GF,i0,i1,i2)],
auxevol_gfs[IDX4(B_ANALYTICU2GF,i0,i1,i2)]-auxevol_gfs[IDX4(BU2GF,i0,i1,i2)]
);
}
}
}
else {
for(int i2=0;i2<Nxx_plus_2NGHOSTS[2];i2++) for(int i1=0;i1<Nxx_plus_2NGHOSTS[1];i1++) for(int i0=0;i0<Nxx_plus_2NGHOSTS[0];i0++) {
ghost_zone_overlap = 0;
indices[0] = i0;
indices[1] = i1;
indices[2] = i2;
for(int dim=0;dim<3;dim++) {
if(indices[dim]%(Nxx[dim]+NGHOSTS)<NGHOSTS) {
ghost_zone_overlap++;
}
}
// Don't print on the edges or corners
if (ghost_zone_overlap < 2) {
// Only print points shared between the grids
if (i0%2==0 && i1%2==0 && i2%2==0) {
// We print the difference between approximate and exact numbers.
fprintf(out2D,"%.16e\t%.16e\t%.16e\\n",
auxevol_gfs[IDX4(B_ANALYTICU0GF,i0,i1,i2)]-auxevol_gfs[IDX4(BU0GF,i0,i1,i2)],
auxevol_gfs[IDX4(B_ANALYTICU1GF,i0,i1,i2)]-auxevol_gfs[IDX4(BU1GF,i0,i1,i2)],
auxevol_gfs[IDX4(B_ANALYTICU2GF,i0,i1,i2)]-auxevol_gfs[IDX4(BU2GF,i0,i1,i2)]
);
}
}
}
}
fclose(out2D);
}
"""
```
Now, we must write out the code to a `.C` file.
```python
with open(os.path.join(out_dir,"A2B_unit_test.C"),"w") as file:
file.write(out_string)
```
Now that we have our file, we can compile it and run the executable.
```python
import time
print("Now compiling, should take ~2 seconds...\n")
start = time.time()
cmd.C_compile(os.path.join(out_dir,"A2B_unit_test.C"), os.path.join(out_dir,"A2B_unit_test"))
end = time.time()
print("Finished in "+str(end-start)+" seconds.\n\n")
# os.chdir(out_dir)
print("Now running...\n")
start = time.time()
# cmd.Execute(os.path.join("Stilde_flux_unit_test"))
!./Validation/A2B_unit_test 1 1 1 1
# To do a convergence test, we'll also need a second grid with twice the resolution.
!./Validation/A2B_unit_test 3 3 3 0
end = time.time()
print("Finished in "+str(end-start)+" seconds.\n\n")
# os.chdir(os.path.join("../"))
```
Now compiling, should take ~2 seconds...
Compiling executable...
Executing `gcc -Ofast -fopenmp -march=native -funroll-loops Validation/A2B_unit_test.C -o Validation/A2B_unit_test -lm`...
Finished executing in 1.22558999062 seconds.
Finished compilation.
Finished in 1.23869585991 seconds.
Now running...
This test uses quadratic vector potentials, so the magnetic fields should agree to roundoff error.
Below, each row represents one point. Each column represents a component of the magnetic field.
Shown is the number of Significant Digits of Agreement, at least 13 is good, higher is better:
SDA: 13.201, 13.958, inf
SDA: 13.958, inf, 13.684
SDA: inf, 13.958, 13.256
SDA: inf, inf, inf
SDA: inf, 13.958, 13.256
SDA: 13.958, inf, 13.684
SDA: 13.201, 13.958, inf
Finished in 0.0554201602936 seconds.
Now that we have shown that when we use a quadratic vector potential, we get roundoff-level agreement (which is to be expected, since the finite-differencing used approximates the underlying function with a quadratic), we will do a convergence test to show that when we can't exactly model the function, the truncation error dominates and converges to zero at the expected rate. For this, we use cubic functions for the vector potential. In the code above, we output the difference between the numeric and exact magnetic fields at the overlapping, non-edge, non-vertex points of two separate grids. Here, we import that data and calculate the convergence in the usual way,
$$
k = \log_2 \left( \frac{F - F_1}{F - F_2} \right),
$$
where $k$ is the convergence order, $F$ is the exact solution, $F_1$ is the approximate solution on the coarser grid with resolution $\Delta x$, and $F_2$ is the approximate solution on the finer grid with resolution $\Delta x/2$.
```python
import numpy as np
import matplotlib.pyplot as plt
Data1 = np.loadtxt("out1-numer.txt")
Data2 = np.loadtxt("out3-numer.txt")
convergence = np.log(np.divide(np.abs(Data1),np.abs(Data2)))/np.log(2)
print("Convergence test: All should be approximately 2\n")
print(convergence)
```
Convergence test: All should be approximately 2
[[2. 2. 2.]
[2. 2. 2.]
[2. 2. 2.]
[2. 2. 2.]
[2. 2. 2.]
[2. 2. 2.]
[2. 2. 2.]]
| a16cca730e02b74176d7b016bac3d786d6141072 | 27,594 | ipynb | Jupyter Notebook | GiRaFFE_standalone_Ccodes/Tutorial-Unit_Test-GiRaFFE_NRPy_Ccode_library-A2B.ipynb | leowerneck/NRPyIGM | f483d6123424fb3e6860dfac4325dd232b223005 | ["BSD-2-Clause"] | null | null | null | GiRaFFE_standalone_Ccodes/Tutorial-Unit_Test-GiRaFFE_NRPy_Ccode_library-A2B.ipynb | leowerneck/NRPyIGM | f483d6123424fb3e6860dfac4325dd232b223005 | ["BSD-2-Clause"] | null | null | null | GiRaFFE_standalone_Ccodes/Tutorial-Unit_Test-GiRaFFE_NRPy_Ccode_library-A2B.ipynb | leowerneck/NRPyIGM | f483d6123424fb3e6860dfac4325dd232b223005 | ["BSD-2-Clause"] | null | null | null | 43.048362 | 687 | 0.550265 | true | 6,603 | Qwen/Qwen-72B | 1. YES 2. YES | 0.743168 | 0.843895 | 0.627156 | __label__eng_Latn | 0.769626 | 0.295424 |
# Generating Fractals
Here we will be generating fractals and learning about a few things as well
1. Complex numbers (very briefly)
2. Root finding (this is a good primer for gradient descent for those of you with ML on the mind)
3. Thinking iteratively (very important for numerical mathematics)
## Complex Numbers
Before we start talking about fractals, we should probably introduce the idea of a complex number beforehand, as the fractals we will be generating rely heavily on them. I'll note that this may get kind of deep into some mathematics, and if you want to skip this section you totally can. There are some quirks to complex numbers, but luckily Python will handle them for you, and realistically they will behave like any other number in Python (for our purposes).
First and foremost, let us introduce the complex number $i$
$$
\begin{equation}
i = \sqrt{-1} \implies i^2 = -1
\end{equation}
$$
where $i$ is known as the imaginary number, or some impossible number that when multiplied by itself, returns a negative number. At first this might be concerning, but we must remember that when it comes to math, we made it all up anyways, so why not make up another number for fun? However, that line of thinking might be more concerning, so let's move on.
More often than not, we will write a complex number as $z$, which is written as the sum of its real and imaginary parts,
$$
z = a + ib
$$
where $a$ is the real part of the complex number $z$, and $b$ is the complex or imaginary part.
If you're curious, you can get more details about complex numbers using the drop down below. For our purposes however, all that you really need to know is the following:
1. A complex number $a + ib$ can be thought of as the classical $x,y$ ordered pair we're used to. In this case our $x$ axis is the real part of the number, and our $y$ axis is the imaginary part of the number
2. These $(a, b)$ ordered pairs can be used to define the "complex plane", which we will use to plot our fractals
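As a tiny illustration of points 1 and 2 above, Python's built-in complex type stores exactly that $(a, b)$ pair, and `abs()` gives the distance from the origin in the complex plane:
```python
# Quick illustration of Python's built-in complex numbers
z = complex(3, 4)        # the number 3 + 4i
print(z.real, z.imag)    # 3.0 4.0  -> the (x, y) ordered pair
print(abs(z))            # 5.0      -> distance from the origin in the complex plane
print(z * z)             # (-7+24j) -> arithmetic works like any other number
```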
<details closed>
<summary> <h1> More details on complex numbers (not required)</h1></summary>
<br>
There are a lot of very useful properties that come from imaginary numbers, and they're used all the time in fields such as pure mathematics, signal processing, physics, chemistry, fluid dynamics... in principle we could go on forever. The largest reason for this is due to something known as [Euler's formula](https://en.wikipedia.org/wiki/Euler%27s_formula), where we instead think of complex numbers on the "complex plane", with the real part of the function $a$ representing our $x$ axis, and the imaginary part of our function $b$ on the $y$ axis. Euler's formula states that we can write the exponential of a complex number as follows
$$
e^{ix} = r (\cos \theta + i\sin \theta)
$$
where $r$ is the radius of this circle on the complex plane (more on this later).
For those of you mathematically inclined and want to prove this, the easiest way is to write out the Taylor series for each $e^{ix}, i\sin \theta$ and $\cos \theta$, and you may be surprised what you see.
Long story short, using Euler's formula, we can essentially write any complex number in terms of sine and cosine, this is useful for a whole lot of reasons, but for fractals specificially, this implies periodic boundary conditions -- we expect to see repeating patterns. Indeed, as cosine and sine have $N$ roots repeating every $\pi$, we expect that our complex functions may also have (up to) that many roots!
### Finding Roots Of Complex Polynomials
We all remember polynomial equations of real numbers, for example the quadratic function
$$
x^2 - C = 0
$$
where we can all read quite readily that the solution to this equation is $x = \pm \sqrt{C}$. But what about if we have some complex polynomial like
$$
z^2 + C = 0,
$$
To be honest, I pulled a fast one on you. It's just as easy! In this case, we have two roots which are $\pm i \sqrt{C} $, where we just bring our friend the imaginary unit along for the ride. In principle what we do is once again factor this into the real and imaginary parts of the solution, but often it's easier to think of this (in a way that will make mathematicians cry) as simply getting rid of the part we don't like and calling it $i$. For quadratic complex polynomial equations, we can simply use the quadratic formula and sprinkle in the imaginary unit wherever we need it. This is a bit of an oversimplification, but for our purposes it should be fine.
Where this can get a little spicier is when the roots aren't obvious enough to be read off. For example, the equation
$$
z^3 = 1,
$$
is cubic, which means we have three distinct roots which satisfy this equation. In this case, it is easier to look at our friend Euler's formula to find these roots. So let's rewrite our complex equation above using euler's formula. First, the left hand side
$$
z = r e^{ix} \implies z^3 = r^3 e^{3ix}
$$
and the right hand side:
$$
1 = re^{ix} = r (\cos \theta + i\sin \theta)
$$
Where, as one has no complex component, we know that this must be one. Therefore, $r$ is equal to one in this equation, angles are those where cosine is one and sine is zero, or $\theta = 2\pi k$ where $k$ is an integer. Therefore, we have
$$
e^{3ix} = e^{2\pi k}
$$
or by taking the natural logarithm of each side,
$$
ix = \frac{2 \pi k}{3}
$$
And going back to our original equation:
$$
z = e^{ix} = e^{\frac{2 \pi k}{3}}
$$
Where we can take our first three roots as $k = -1, 0, 1$ and obtain
$$
z = 1, e^{2\pi i/3}, e^{-2\pi i/3}
$$
We also note we have periodic roots at integer values of $k$, but we won't worry about those. Knowing these roots are useful in terms of understanding the behaviour of our fractals. We may expect that different roots may cause different basins of convergence, or result in rotations of our fractal. This is also important with respect to establishing the domains in which our fractals may exist.
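If you'd like to check those three roots numerically, here is a quick standalone sketch using NumPy's polynomial root finder:
```python
# Numerical check of the cube roots of unity, z^3 - 1 = 0
import numpy as np

roots = np.roots([1, 0, 0, -1])       # coefficients of z^3 + 0*z^2 + 0*z - 1
print(roots)                          # roughly [-0.5+0.866j, -0.5-0.866j, 1.0] (order may vary)
print(np.exp(2j * np.pi / 3))         # roughly -0.5+0.866j, i.e. e^{2*pi*i/3}
print(np.allclose(roots**3, 1.0))     # True: each root cubed really gives 1
```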
</details>
# Root Finding With Complex Numbers
If you were crazy enough to read the drop down menu, you may have noticed that finding roots of complex equations was more work than the quadratic formula. And if you take anything away from these notebooks it should be that we Data Scientists (and regular scientists) are _super_ lazy. Wouldn't it be nice if we could use our computer to solve these for us? Luckily, the answer is a resounding yes! Even better, our Newton-Raphson formula from before generalizes to the complex domain without us having to do anything. Convenient!
One thing to be aware of however in Python is we will need to define complex numbers. It's quite simple. For a complex number $ a + ib$, we can define that in python like
```python3
a = b = 1
complex_number = complex(a, b)
```
and we can then throw that number at our root finding routines as we did before and find ourselves solutions.
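Here is a minimal sketch showing that the plain Newton-Raphson update works unchanged on complex numbers: solving $z^2 + 1 = 0$ from a complex starting guess lands on the root $i$. (This is just an illustration, not the root finder you wrote earlier.)
```python
# Newton-Raphson on a complex number, solving z^2 + 1 = 0
z = complex(1, 1)
for _ in range(10):
    z = z - (z**2 + 1) / (2 * z)     # standard Newton step for f(z) = z^2 + 1
print(z)                             # ~ (0+1j), i.e. the root z = i
```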
## My First Fractal: Mandelbrot
Before we go into root finding for fractals, let's start with a relatively simple one to generate: the Mandelbrot set. Rather than have me drone on for a year, here's a YouTube video that does a better job than I could of explaining that set.
```python
from IPython.display import YouTubeVideo
YouTubeVideo('NGMRB4O922I')
```
# Your Task
Write two functions, the first function will be to calculate the Mandelbrot set as follows
### Mandelbrot Function
1. Initialize $z$ and the number of iterations $n$ as zero,
- Also initialize the maximum iterations as 80
2. while `abs(z) <= 2 and n < max_iter` do the following
* $z = z^2 + c$
* n += 1
3. On exit, return the number of iterations $n$
```python
# note the **kwargs is not strictly necessary, and won't do anything here, but will be useful
# if you want to use some of the provided functions later.
def mandelbrot(c, max_iter = 80, **kwargs):
'''
here c is any complex number, and max_iter is the maximum numberof iterations through your loop
you want to go
'''
z = # YOUR CODE HERE
n = # YOUR CODE HERE
while CONDITION: # YOUR CODE HERE
z = SOMETHING #YOUR CODE HERE
n += 1
return n
```
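If you get stuck (or want to check your work), here is one possible completion of the exercise above - spoilers ahead! The name `mandelbrot_solution` is just to keep it separate from your own `mandelbrot`.
```python
# One possible completion of the exercise above -- peek only if you're stuck!
def mandelbrot_solution(c, max_iter=80, **kwargs):
    z = 0
    n = 0
    while abs(z) <= 2 and n < max_iter:
        z = z*z + c      # the Mandelbrot iteration
        n += 1
    return n

print(mandelbrot_solution(complex(0, 0)))   # 80: stays bounded, so we hit max_iter
print(mandelbrot_solution(complex(1, 1)))   # 2:  escapes almost immediately
```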
### Iteration Function
You will also need a function which iterates across the complex plane to see what pixel you should generate there. This function itself will require three functions
1. A scale function to convert pixel location into complex coordinates
2. A scale function to convert the number of itereations in your `mandelbrot` function into an RGB color scale
3. A function which iterates over pixels with a given height and width for your image.
In principle we will have the following pseudocode for point 3 which will encompass the other functions
---
```python
def CreateImage(mandelbrot, height, width, domain):
    real_min, real_max, complex_min, complex_max = domain
    ImageMap = np.zeros([width, height])
    for x in range(0, width):
        for y in range(0, height):
            c = scale_function_coordinate(x, y, width, height, real_max, real_min, complex_max, complex_min)
            m = mandelbrot(c)
            color = color_function(m)
            ImageMap[x, y] = color
    return ImageMap
```
---
Where let's outline those functions explicitly
##### Pixel Scale Function
Given a pixel coordinate $x$ and $y$, we need to convert this location into a complex number within our domain. The formula for this is as follows
$$
R = R_{min} + \frac{x}{\text{Image Width}} \times (R_{max} - R_{min})
$$
Where $R$ is your value in the real coordinate, $R_{min}$ is the smallest value in the real domain, and $R_{max}$ is the largest value in the real domain, $x$ is the current pixel, and Image Width is the width of the image in pixels you want to create. A similar formula for the complex domain is as follows
$$
C = C_{min} + \frac{y}{\text{Image Height}} \times (C_{max} - C_{min})
$$
Where $C$ is your value in the complex coordinate, $C_{min}$ is the smallest value in the complex domain, $C_{max}$ is the largest value in the complex domain, $y$ is the current pixel row, and Image Height is the height of the image in pixels you want to create.
In this case, if we have a known image size in advance, and we know the domain in which our fractal will exist, we can calculate the complex value $c$ at this place in our image. Please fill in the function below
```python
import numpy as np
def scale_function_coordinate(x, y, width, height, r_max, r_min, c_max, c_min):
'''
x --> x coordnate of pixel
y --> y coordinate of pixel
width --> width of image
height --> height of image
r_max, r_min --> maximum and minimum numbers on the real axis
c_max, c_min --> maximum and minimum numbers on the complex axis
'''
R = None # YOUR CODE HERE
C = None # YOUR CODE HERE
return complex(R, C) # complex is a built in function for complex numbers
```
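Again, one possible completion is below (spoilers!), following the formulas given in the text; the `_solution` name is just to keep it separate from your own function.
```python
# One possible completion of the scale function -- peek only if you're stuck!
def scale_function_coordinate_solution(x, y, width, height, r_max, r_min, c_max, c_min):
    R = r_min + (x / width)  * (r_max - r_min)   # real part from the pixel column
    C = c_min + (y / height) * (c_max - c_min)   # imaginary part from the pixel row
    return complex(R, C)

# e.g. the centre pixel of a 100x100 image on the Mandelbrot domain [-2,1] x [-1,1]:
print(scale_function_coordinate_solution(50, 50, 100, 100, 1, -2, 1, -1))   # (-0.5+0j)
```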
##### Color Function
Now we need to be able to convert the number of iterations to an RGB coordinate. RGB colors can take values between 0 and 255, so we need to find a way to scale our number of iterations to become some pretty colors so we can observe the changes. Rather than boring you with this, I'll just provide the function
```python
def color(number_of_iterations, max_iter = 80):
    return 255 - int(number_of_iterations * 255/max_iter)
```
## Putting it All Together
If that worked out well for you, you should be able to fill in the following to create your image functions!
```python
# Boundaries for the mandelbrot function
bounds = [-2, 1, -1, 1]
def CreateImage(function, width, height, bounds):
    r_min, r_max, c_min, c_max = bounds
if width > 1000:
print(f'width of {width} is too large. Your computer only has so many pixels.')
print("try zooming in with a smaller boundary to observe more detail")
return
if height > 1000:
print(f'height of {height} is too large. Your computer only has so many pixels.')
print("try zooming in with a smaller boundary to observe more detail")
return
    X = np.zeros((width, height))
for x in range(0, width):
for y in range(0, height):
c = scale_function_coordinate(x, y, width, height, r_max, r_min, c_max, c_min)
# Note here for changes later for root finding
m = function(c)
            col = color(m)   # avoid shadowing the color() function
            X[x, y] = col
return X
# When you're ready uncomment this line to see if it worked
# plt.imshow(X)
```
If all that worked out, running the above cell should produce what you see below!
```python
import sys
sys.path.append('scripts/')
import fractalfuncs as FF
import matplotlib.pyplot as plt
bounds = [-2, 1, -1, 1]
X = FF.CreateImageMap(function = FF.mandelbrot, function_args = {}, bounds = bounds)
plt.imshow(X, extent=bounds)
plt.xlabel("Real Axis", size = 12)
plt.ylabel("Imaginary Axis", size = 12)
plt.show()
```
# Using Rootfinding
Now, rather than using the mandelbrot set, let's try and use our root finding techniques to find roots instead! If we're in a stable region, it should be pretty easy! If not, it will get spicy and diverge. We will use that divergence to create our fractals instead. Here the fractal properties not only come from the mathematical formulation of our complex set, but also the convergence properties of our root finder: different root finding techniques will result in different fractals.
## Your Task
Copy and paste your NewtonRaphson root finder from the Root Finding portion of this, and use it in this assignment. **NOTE** instead of returning the root, you will have to modify your NewtonRaphson function to return the number of iterations it took.
To use root finding, we will do exactly what we did for the mandelbrot set above, however, we will now modify your image generation function to use $c$ as an initial guess at your solution, and see if your NewtonRaphson root finder can find a solution or not. You will need to modify the cell below for use with your own function. Remember that you will also need to pass the derivative and the function you are evaluating (Hint: `**kwargs` can be handy here)
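If you'd like a reference point, here is a minimal sketch of what such a modified solver might look like. Your own version from the root-finding notebook may differ; the parameter names `f`, `fprime`, `max_iter` and `prec` simply mirror the `newton_args` used later in this notebook.
```python
# A minimal sketch of a Newton-Raphson routine that returns the *number of
# iterations* instead of the root, so it can be fed to the colouring function.
def newton_iterations(z, f, fprime, max_iter=200, prec=1e-6, **kwargs):
    for n in range(max_iter):
        dz = f(z) / fprime(z)        # may raise ZeroDivisionError: wrap the call in try/except
        z = z - dz
        if abs(dz) < prec:           # converged: stop counting
            return n
    return max_iter                  # never converged within max_iter steps

# Example: count iterations for f(z) = z^3 - 1 starting from 0.5 + 0.5i
print(newton_iterations(complex(0.5, 0.5), lambda z: z**3 - 1, lambda z: 3*z**2))
```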
```python
def CreateImageRootFinding(YOUR ARGUMENTS HERE, width, height, bounds):
r_max, r_min, c_max, c_min = bounds
if width > 1000:
print(f'width of {width} is too large. Your computer only has so many pixels.')
print("try zooming in with a smaller boundary to observe more detail")
return
if height > 1000:
print(f'height of {height} is too large. Your computer only has so many pixels.')
print("try zooming in with a smaller boundary to observe more detail")
return
    X = np.zeros((width, height))
for x in range(0, width):
for y in range(0, height):
INITIAL_GUESS = scale_function_coordinate(x, y, width, height, r_max, r_min, c_max, c_min)
# Note you may need to wrap this in try/except to prevent accidental zero division/other nastiness
m = MY_ROOT_FINDER(INITIAL_GUESS)
            col = color(m)   # avoid shadowing the color() function
            X[x, y] = col
    return X
```
## First Fractal With Root Finding
The mandelbrot set works well for what it is, but alas, if you try to use that function in root finding, you will find that your fractal is dreadfully boring. A more interesting function is
$$
f(z) = z^3 - 1
$$
Whose derivative is
$$
f^\prime(z) = 3z^2
$$
### Sanity Check
See if you can reproduce the image below with your own function
```python
def function(z):
return z**3 - 1
def derivative(z):
return 3 * z ** 2
bounds = [-1,1,-1,1]
newton_args = dict(fprime = derivative, f = function, max_iter = 200, prec = 1e-6)
X = FF.CreateImageMap(FF.NewtonRaphsonFact, newton_args, bounds, height=250, width=250)
plt.imshow(X, extent=bounds)
plt.xlabel("Real Axis", size = 12)
plt.ylabel("Imaginary Axis", size = 12)
plt.show()
```
Using the function above, try playing around with the following:
1. Different powers of $z$
2. Change the constant (-1) term. Larger/Smaller positive/negative. What if this term is complex?
What do you observe about the fractal at higher powers and different values of the constant?
# Other Functions To Try
Once you've got that working, you should try these functions as well and see what fractals you observe!
$$
\begin{aligned}
f(z) &= \sin(z), x \in \left[-\frac{\pi}{2} - \frac{1}{2}, -\frac{\pi}{2} + \frac{1}{2}\right], y\in \left[-0.3, 0.3\right] \\
f(z) &= \cosh(z) - 1, x \in \left[-0.2, 0.2\right], y \in \left[-\pi, -\pi -\frac{\pi}{8}\right]\\
f(z) &= z^3 - 3^z, x\in [-10, 10], y\in[-10, 10]
\end{aligned}
$$
Note that if you don't know how to calculate a derivative, that's okay; you can use WolframAlpha, or alternatively, you can take them numerically with a function I've provided. It can be used as follows
```python
# Only if you haven't imported it already
import sys
sys.path.append('scripts/')
import fractalfuncs as FF
def myfunction(z):
return z**2 # for example
def myderivative(z):
return FF.nderiv(myfunction, z)
```
I note that numerical derivatives are always worse than analytic ones, but that's okay for now. If anyone is interested I can talk about how that works later as well.
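To see what "worse" means in practice, here is a quick standalone sketch using a plain central difference (this is just an illustration, not `FF.nderiv` itself): the error shrinks with the step size $h$, but never reaches zero, and for very small $h$ roundoff takes over.
```python
# Central-difference derivative of sin(x), compared with the analytic answer cos(x)
import numpy as np

def central_diff(f, z, h=1e-5):
    return (f(z + h) - f(z - h)) / (2 * h)

z0 = 0.7
exact = np.cos(z0)                       # analytic derivative of sin at z0
for h in (1e-2, 1e-4, 1e-6, 1e-10):
    approx = central_diff(np.sin, z0, h)
    print(f"h = {h:.0e}   error = {abs(approx - exact):.2e}")
# The error first decreases with h, then grows again once roundoff dominates.
```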
## Bored of that?
If you're bored, you can also try other root finding techniques instead of your newton solver! Here are some suggestions
1. [Secant Method](https://en.wikipedia.org/wiki/Secant_method#:~:text=In%20numerical%20analysis%2C%20the%20secant,difference%20approximation%20of%20Newton's%20method.)
2. [Halley's Method](https://en.wikipedia.org/wiki/Halley%27s_method#:~:text=In%20numerical%20analysis%2C%20Halley's%20method,Householder's%20methods%2C%20after%20Newton's%20method.)
3. [Schroder's Method](https://mathworld.wolfram.com/SchroedersMethod.html)
Note that these fractals are created based on the convergence properties of the above solvers. For example, the cells below use the same function we used originally, just with different methods of solving!
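For reference, here is a rough sketch of the secant update on complex numbers (the cells below use the provided `FF.secantfact`; this function is only meant to show the idea). The secant method approximates the derivative from the last two iterates, so no analytic derivative is needed.
```python
# A generic secant iteration on complex numbers, returning the iteration count
def secant_iterations(z0, z1, f, max_iter=50, prec=1e-6):
    for n in range(max_iter):
        f0, f1 = f(z0), f(z1)
        if f1 == f0:                            # avoid dividing by zero
            return n
        z2 = z1 - f1 * (z1 - z0) / (f1 - f0)    # secant update
        if abs(z2 - z1) < prec:
            return n
        z0, z1 = z1, z2
    return max_iter

print(secant_iterations(complex(0.4, 0.4), complex(0.5, 0.5), lambda z: z**3 - 1))
```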
```python
def function(z):
return z**3 - 1
def derivative(z):
return 3 * z ** 2
def secondder(z):
return 6 * z
bounds = [-1,1,-1,1]
secant_args = dict(function = function, mult = 0.5)
X = FF.CreateImageMap(FF.secantfact, secant_args, bounds, height=100, width=100)
plt.imshow(X, extent=bounds)
plt.xlabel("Real Axis", size = 12)
plt.ylabel("Imaginary Axis", size = 12)
plt.show()
```
```python
bounds = [-1,1,-1,1]
schroder_args = dict(derivative = derivative, function = function,
secondder=secondder, prec = 1e-6, max_iter = 50)
X = FF.CreateImageMap(FF.schroderfact, schroder_args, bounds, height=100, width=100)
plt.imshow(X, extent=bounds)
plt.xlabel("Real Axis", size = 12)
plt.ylabel("Imaginary Axis", size = 12)
plt.show()
```
```python
bounds = [-1,1,-1,1]
halley_args = dict(derivative = derivative, function = function,
seconder=secondder, prec = 1e-6, max_iter = 50)
X = FF.CreateImageMap(FF.halleyfact, halley_args, bounds, height=100, width=100)
plt.imshow(X, extent=bounds)
plt.xlabel("Real Axis", size = 12)
plt.ylabel("Imaginary Axis", size = 12)
plt.show()
```
Where you'll notice each root finding technique has different convergence criteria, so the fractals generated are all slightly different. If you try the other fractals listed, you'll notice that their fractal patterns will show even more variation.
| 2b0687856c8ad2573ee390594c4954583b37ded6 | 283,759 | ipynb | Jupyter Notebook | notebooks/fractals/Fractals.ipynb | lgfunderburk/mathscovery | da9fcfd7f660835c663985c94645aec6dfd9f7bb | ["BSD-3-Clause"] | null | null | null | notebooks/fractals/Fractals.ipynb | lgfunderburk/mathscovery | da9fcfd7f660835c663985c94645aec6dfd9f7bb | ["BSD-3-Clause"] | null | null | null | notebooks/fractals/Fractals.ipynb | lgfunderburk/mathscovery | da9fcfd7f660835c663985c94645aec6dfd9f7bb | ["BSD-3-Clause"] | null | null | null | 395.207521 | 52,324 | 0.932809 | true | 5,019 | Qwen/Qwen-72B | 1. YES 2. YES | 0.957278 | 0.919643 | 0.880353 | __label__eng_Latn | 0.998602 | 0.883689 |
# YO WHADDUP BRO LET'S DO SOME CALCULUS
Remember last time, we talked about how cows are cool and often loosely correlated to math?
```
/; ;\
__ \____//
/{_\_/ `'\____
\___ ---(=)--(=)--}
_____________________________/ :--'
,-,'`@@@@@@@@ @@@@@@ \_ `__\
;:( @@@@@@@@@ @@@ \___(o'o)
:: ) @@@@ @@@@@@ ,'@@( `===='
:: : @@@@@: @@@@ `@@@:
:: \ @@@@@: @@@@@@@) ( '@@@'
;; /\ /`, @@@@@@@@@\\ :@@@@@)
::/ ) {_----------------: :~`, ;
;;'`; : ) : / `; ;
;;;; : : ; : ; ; :
`'`' / : : : : : :
)_ \__; ";" :_ ; \_\ `,','
:__\ \ * `,'* \ \ : \ * 8`;'* *
`^' \ :/ `^' `-^-' \v/ : \/ -Bill Ames-
```
```python
import matplotlib.pyplot as plt
import numpy as np
import sympy
```
Well, this time we're going to go deeper. So let's begin ***with a few questions***
# What do we mean by the phrase "rate of change"?
```python
""" The change of change """
```
# What do we call the "rate of change" of position?
```python
""" velocity / speed """
```
# What do we call the "rate of change" of velocity/speed?
```python
""" acceleration """
```
' acceleration '
Finally,
# What 1-word definition can we use for the phrase "rate of change"?
Whatever you chose, TOO BAD ***HAHAHAHA***, we are going to use a term clearly invented by mathematicians:
## The DERIVATIVE == "The rate of change"
And, even more specifically
***Suppose we have a function we call "f", taking an input "x" because we have no imagination***:
## $f(x)$
***The DERIVATIVE of f with respect to x == "The rate of change of f as x changes"***:
## $\frac{df(x)}{dx} = f'(x)$
#### There are also higher derivatives, like the 2nd and 3rd and Nth derivative:
## $\frac{d^2f(x)}{dx^2} = f''(x)$
## $\frac{d^3f(x)}{dx^3} = f'''(x)$
## $\frac{d^nf(x)}{dx^n} = f^{(n)}(x)$ for any $n$
#### When taking these derivatives from position, like we were saying earlier:
## $x(t) = position$
## $\frac{dx(t)}{dt} = x'(t) = velocity$
## $\frac{d^2x(t)}{dt^2} = x''(t) = acceleration$
## $\frac{d^3x(t)}{dt^3} = x'''(t) = jerk$
## $\frac{d^4x(t)}{dt^4} = x''''(t) = jounce$
## $\frac{d^5x(t)}{dt^5} = x'''''(t) = snap$
## $\frac{d^6x(t)}{dt^6} = x''''''(t) = crackle$
## $\frac{d^7x(t)}{dt^7} = x'''''''(t) = pop$
## $\frac{d^8x(t)}{dt^8} = x''''''''(t) = lock$
## $\frac{d^9x(t)}{dt^9} = x'''''''''(t) = drop$
# Quick example!
- We have a function telling us the position of an object over time
- Let's see what it's speed graph looks like
- And also it's acceleration graph!
```python
# create a "symbol" called "t", for time (think of this like writing "t" on a piece of paper)
t = sympy.Symbol('t')
# sympy.exp IS the e to the power of x function. "exp" = e**x
g = 0.1
r = 3
position = -(g * t**r - 1)**2 + 1
tvals = np.linspace(1, 2.8,1000)
velocity = sympy.lambdify(t, sympy.diff(position, t), modules=['numpy'])
acceleration = sympy.lambdify(t, sympy.diff(sympy.diff(position, t), t), modules=['numpy'])
pos_call = sympy.lambdify(t, position, modules=['numpy'])
plt.plot(tvals, pos_call(tvals))
plt.xlabel('x axis')
plt.ylabel('position!')
```
```python
plt.plot(tvals, velocity(tvals))
plt.xlabel('x axis')
plt.ylabel('velocity!')
```
```python
plt.plot(tvals, acceleration(tvals))
plt.xlabel('x axis')
plt.ylabel('acceleration!')
```
#### Now, I want you to try and graph the "jerk", and the "jounce" as well
```python
# PUT YOUR CODE HERE!
jerk = sympy.lambdify(t, sympy.diff(
sympy.diff(
sympy.diff(position, t), t) , t), modules=['numpy'])
jounce = sympy.lambdify(t, sympy.diff(
sympy.diff(
sympy.diff(
sympy.diff(position, t), t) , t), t), modules=['numpy'])
plt.plot(tvals, jerk(tvals))
plt.xlabel('x axis')
plt.ylabel('jerk!')
```
```python
plt.plot(tvals, jounce(tvals))
plt.xlabel('x axis')
plt.ylabel('jounce!')
```
----
----
# Now, let's meet a new number:
- Like $\pi$, this number is "transcendental"... it is not the root of any polynomial with integer coefficients
- Its decimal expansion goes on forever and never repeats
It is:
## $ e \approx 2.71$
***And I have a confession... ***
```python
import numpy as np
np.e * np.e
```
7.3890560989306495
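(If you're curious where that number comes from: one way to compute $e$ is to sum the series $\sum_{k\geq 0} \frac{1}{k!}$. A quick sketch:)
```python
import math

# e as a sum of 1/k!  -- the more terms, the closer to the real thing
approx = sum(1 / math.factorial(k) for k in range(20))
print(approx, math.e, abs(approx - math.e))
```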
# Now, let's meet a new function, using $e$:
# $f(x) = e^x$
#### On a graph, it looks like this...
```python
x = sympy.Symbol('x') # create a "symbol" called "x" (think of this like writing "x" on a piece of paper)
function = sympy.exp(x) # sympy.exp IS the e to the power of x function. "exp" = e**x
```
```python
xvals = np.linspace(-1,1,100)
use_function = sympy.lambdify(x, function, modules=['numpy'])
plt.plot(xvals, use_function(xvals))
plt.xlabel('x axis')
plt.ylabel('y = f(x) axis')
```
# Now, let's use our new knowledge of derivatives to see "how fast" $e^x$ changes!
## In writing:
## $\frac{df(x)}{dx} = \frac{d}{dx} (e^x)$
```python
# In code:
derivative = sympy.diff(function, x) # let sympy do the work of taking the derivative of e to the x
```
```python
derivative
```
# Wait.... What? Do you see what happened?
#### Let's try this again...
```python
# In code:
derivative = sympy.diff(function, x) # let sympy do the work of taking the derivative of e to the x
derivative
```
exp(x)
#### What if... We take the derivative of the derivative?
- This is called the "second derivative"
```python
# In code:
derivative = sympy.diff(sympy.diff(function, x), x) # let sympy do the work of taking the derivative of e to the x
derivative
```
exp(x)
#### And again?
```python
# In code:
derivative = sympy.diff(sympy.diff(sympy.diff(derivative, x),x),x) # let sympy do the work of taking the derivative of e to the x
derivative
```
exp(x)
# FYI, this is the part where your mind explodes
# ICYMI, the derivative of this...
```python
xvals = np.linspace(-1,1,100)
use_function = sympy.lambdify(x, function, modules=['numpy'])
plt.plot(xvals, use_function(xvals))
plt.xlabel('x axis')
plt.ylabel('y = f(x) axis')
```
# Is this...
```python
xvals = np.linspace(-1,1,100)
use_derivative = sympy.lambdify(x, derivative, modules=['numpy'])
plt.plot(xvals, use_derivative(xvals))
plt.xlabel('x axis')
plt.ylabel('y = f(x) axis')
```
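If you don't trust the symbols, here is a quick numerical sanity check: we measure the slope of $e^x$ with a small finite difference (the step size `h` is just a small number picked for illustration) and compare it to $e^x$ itself.
```python
# slope of e**x measured numerically vs. e**x itself
h = 1e-6
xs = np.linspace(-1, 1, 5)
numerical_slope = (np.exp(xs + h) - np.exp(xs)) / h
print(numerical_slope)
print(np.exp(xs))   # essentially the same numbers
```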
```python
```
```python
```
```python
```
```python
```
```python
```
```python
import numpy as np
import matplotlib.pyplot as plt
import sympy as sym
import stanpy as stp
EI = 32000 # kN/m2
P = 5 # kN
q = 4 # kN/m
l = 4 # m
roller_support = {"w": 0, "M": 0, "H": 0}
fixed_support = {"w": 0, "phi": 0}
hinge = {"M": 0}
x_sym = sym.Symbol("x")
E = 3e7 # kN/m2
b = 0.2 # m
ha = hb = 0.3 # m
hc = 0.4 # m
l1 = 4 # m
l2 = 3 # m
hx = ha + (hc - hb) / l2 * x_sym
cs_1 = stp.cs(b=b, h=ha)
cs_2 = stp.cs(b=b, h=hx)
s1 = {"E": E, "cs": cs_1, "l": l, "bc_i": fixed_support, "bc_k": {"w": 0}}
s2 = {"E": E, "cs": cs_1, "l": l, "bc_k": hinge, "q": q}
s3 = {"E": E, "cs": cs_2, "l": l, "bc_k": roller_support, "P": (P, l / 2)}
s = [s1, s2, s3]
fig, ax = plt.subplots(figsize=(12,8))
stp.plot_system(ax, *s, render=True, facecolor="gray", alpha=0.2, render_scale=0.5)
stp.plot_load(ax, *s, offset=0.4)
ax.set_ylim(-1.5, 2)
ax.set_aspect("equal")
plt.show()
```
```python
dx = 1e-9
x_annotate = np.array([dx, l-dx, l, 2 * l, 5 * l / 2 - dx, 5 * l / 2])
x = np.sort(np.append(np.linspace(0, 3 * l, 1000), x_annotate))
Zi, Zk = stp.tr_solver(*s)
Fxx = stp.tr(*s, x=x)
Z_x = Fxx.dot(Zi).round(10)
w_x = Z_x[:, 0]
phi_x = Z_x[:, 1]
M_x = Z_x[:, 2]
V_x = Z_x[:, 3]
```
```python
scale = 0.5
fig, ax = plt.subplots(figsize=(12, 5))
stp.plot_system(ax, *s)
stp.plot_solution(
ax,
x=x,
y=M_x,
annotate_x=[0, l, 2 * l, 5 * l/2],
fill_p="red",
fill_n="blue",
scale=scale,
alpha=0.2,
flip_y=True
)
ax.grid(linestyle=":")
ax.set_axisbelow(True)
ax.set_ylim(-1.0, 1)
ax.set_ylabel("M/Mmax*{}".format(scale))
ax.set_title("[M] = kNm")
plt.show()
```
```python
scale = 0.5
fig, ax = plt.subplots(figsize=(12, 5))
stp.plot_system(ax, *s)
stp.plot_solution(
ax,
x=x,
y=V_x,
annotate_x=[dx, l-dx, l, 2*l, 5*l/2-dx, 5*l/2, 3*l],
fill_p="red",
fill_n="blue",
scale=scale,
alpha=0.2,
)
ax.grid(linestyle=":")
ax.set_axisbelow(True)
ax.set_ylim(-1.0, 1)
ax.set_ylabel("V/Vmax*{}".format(scale))
ax.set_title("[V] = kN")
plt.show()
```
```python
scale = 0.2
fig, ax = plt.subplots(figsize=(12, 5))
stp.plot_system(ax, *s, lw=1, linestyle=":", c="#111111")
stp.plot_w(ax, x=x, wx=w_x, scale=scale, linestyle="-")
ax.grid(linestyle=":")
ax.set_axisbelow(True)
ax.set_ylim(-1.5, 1.5)
ax.set_ylabel("w/wmax*{}".format(scale))
ax.set_title("[w] = m")
plt.show()
```
```python
```
# Relaxation Runge--Kutta
```python
# If you do not have numpy, matplotlib, scipy or nodepy, run this cell
!pip install numpy
# This is the basic package in python with all the numerical functions
!pip install scipy
# This package has some functions to deal with polynomials
!pip install matplotlib
# This package allows to plot
!pip install nodepy
# This package has some interesting features for RK methods
```
```python
# We need a couple of packages in this chapter
import numpy as np
# This is the basic package in python with all the numerical functions
import matplotlib.pyplot as plt
# This package allows to plot
from nodepy import rk
#This package already implemented some functions for Runge Kutta and multistep methods
```
The relaxation technique is a modification of high order accurate time integration methods which allows them to conserve or dissipate quantities of interest, such as entropy or energy, that are physically conserved or dissipated.
Consider again
$$
y'=F(y)
$$
and suppose that we know that the kinetic energy is conserved (dissipated), i.e.
$$
\frac{d}{dt} \frac{1}{2} \langle y, y \rangle = \langle y, y' \rangle = \langle y, F(y) \rangle \stackrel{(\leq)}{=} 0.
$$
This is the case of many examples we have seen before, e.g. Dahlquist's equation with negative coefficient, nonlinear oscillator, damped nonlinear oscillator, semidiscretized conservation laws (linear transport equation, Burgers' equation).
More generally, we might have a nonlinear quantity that is conserved (entropy, momentum, total energy = kinetic + potential)
$$
\frac{d}{dt}\eta(y)= \langle \partial_y \eta(y), y' \rangle = \langle \partial_y \eta(y), F(y) \rangle \stackrel{(\leq)}{=} 0.
$$
This is the case of nonlinear pendulum, Lotka-Volterra, other conservation laws (Euler's equation).
Several relaxation methods have been proposed in the last 3 years: the relaxation Runge Kutta [Ketcheson 2019](https://arxiv.org/abs/1905.09847) (energy), [Ketcheson2020](https://arxiv.org/abs/1909.13215) (entropy) and multistep methods [Ranocha 2020](https://arxiv.org/abs/2003.03012), all originally inspired by [Del Buono 2002](https://www.sciencedirect.com/science/article/pii/S0377042701003983?via%3Dihub).
Here, we present the version for kinetic energy and explicit RK methods
$$
\begin{array}
{c|c}
c&A\\
\hline
& b^T
\end{array}
$$
where $A\in \mathbb R^{S\times S},\, b,c\in\mathbb R^S$, in the following formulation
$$
\begin{cases}
y^{(k)}=y^n + \Delta t \sum_{j=1}^S a_{kj} F(t^n+c_j\Delta t,y^{(j)}), \quad k=1,\dots, S,\\
y^{n+1} = y^n+ \Delta t \sum_{j=1}^S b_{j} F(t^n+c_j\Delta t,y^{(j)})
\end{cases}
$$
The technique consists of scaling the timestep by a constant $\gamma\approx 1$, which moves the solution forward or backward along the line drawn from $y^n$ to $y^{n+1}$, in order to satisfy a known scalar conservation or dissipation constraint, e.g. on energy or entropy.
We discuss only energy in this notebook.
Let us consider the energy defined by the scalar product $\langle y,y \rangle$, which we know is physically conserved or dissipated. Moreover, we know that analytically $\langle F(y), y \rangle \leq 0$ or $=0$.
Then, we define
$$
\begin{cases}
y^{(k)}=y^n + \Delta t \sum_{j=1}^S a_{kj} F(t^n+c_j\Delta t,y^{(j)}), \quad k=1,\dots, S,\\
y^{n+1}_\gamma = y^n+ \gamma\Delta t \sum_{j=1}^S b_{j} F(t^n+c_j\Delta t,y^{(j)})
\end{cases}
$$
and, defining $\Delta y:= y^{n+1} -y^n$, we expand the scalar product
$$
\begin{align}
&\langle y^{n+1}_\gamma,y^{n+1}_\gamma \rangle \\= & \langle y^{n},y^{n} \rangle + 2 \gamma \langle y^{n}, \Delta t \sum_{j=1}^S b_j f({y}^{(j)}) \rangle + \gamma^2 \langle \Delta y , \Delta y \rangle \\
=& \langle y^{n},y^{n} \rangle +2 \gamma\Delta t \sum_{j=1}^S b_j \underbrace{\langle y^{(j)}, f({y}^{(j)}) \rangle}_{\leq 0}-2 \gamma \Delta t \sum_{j=1}^S b_j\langle {y}^{(j)}, f({y}^{(j)}) \rangle+ 2 \gamma \langle y^{n}, \Delta t \sum_{j=1}^S b_j f( y^{(j)}) \rangle + \gamma^2 \langle \Delta y , \Delta y \rangle ,
\end{align}
$$
where we know from the conservation or dissipation property that the second term is equal to or smaller than 0, respectively, provided $b_j \geq 0$ for all $j$. Hence, we impose that the rest of the equation equals 0, by setting
$$
\begin{align}
&-2 \gamma \Delta t \sum_{j=1}^S b_j\langle y^{(j)}, f(y^{(j)}) \rangle+ 2 \gamma \langle y^{n}, \Delta t \sum_{j=1}^S b_j f({y}^{(j)}) \rangle + \gamma^2 \langle \Delta y , \Delta y \rangle =0\\
\Leftrightarrow \quad & \gamma = \frac{2 \Delta t \sum_{j=1}^S b_j \langle {y}^{(j)} - y^n, f({y}^{(j)}) \rangle}{\langle \Delta y , \Delta y \rangle}
\end{align}
$$
**Remark** The final timestep we perform is not of size $\Delta t$ but of size $\gamma \Delta t$, hence we also need to update the time as $t^{n+1}=t^n+\gamma \Delta t$.
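Before the full implementation below, here is a minimal sketch of a single relaxed step, assuming Heun's method as the underlying RK scheme and the energy-conserving nonlinear oscillator as the ODE (both choices are only for illustration):
```python
import numpy as np

def F(y):  # nonlinear oscillator: <y, F(y)> = 0, so <y, y> is conserved
    n = np.sqrt(np.dot(y, y))
    return np.array([-y[1], y[0]]) / n

y_n = np.array([1.0, 0.0])
dt = 0.5
b = np.array([0.5, 0.5])                  # Heun's method weights

u1 = y_n                                  # first stage
u2 = y_n + dt * F(u1)                     # second stage
stages, Fu = [u1, u2], [F(u1), F(u2)]

y_next = y_n + dt * sum(bj * f for bj, f in zip(b, Fu))   # plain Heun update
dy = y_next - y_n
num = sum(bj * np.dot(u - y_n, f) for bj, u, f in zip(b, stages, Fu))
gamma = 2 * dt * num / np.dot(dy, dy)

y_relaxed = y_n + gamma * dy
print(gamma, np.dot(y_next, y_next), np.dot(y_relaxed, y_relaxed))
# the relaxed update restores <y, y> = 1 up to roundoff, the plain one does not
```
This is the same $\gamma$ that `explicitRelaxRK` below computes at every step.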
```python
import numpy as np
## Linear scalar Dahlquist's equation
def linear_scalar_flux(u,t=0,k_coef=10):
ff=np.zeros(np.shape(u))
ff[0]= -k_coef*u[0]
return ff
def linear_scalar_exact_solution(u0,t,k_coef=10):
    return np.array([u0[0]*np.exp(-k_coef*t)])
def linear_scalar_jacobian(u,t=0,k_coef=10):
Jf=np.zeros((len(u),len(u)))
Jf[0,0]=-k_coef
return Jf
#nonlinear problem y'=-ky|y| +1
def nonlinear_scalar_flux(u,t=0,k_coef=10):
ff=np.zeros(np.shape(u))
ff[0]=-k_coef*abs(u[0])*u[0] +1
return ff
def nonlinear_scalar_exact_solution(u0,t,k_coef = 10):
sqrtk = np.sqrt(k_coef)
ustar = 1 / sqrtk
if u0[0] >= ustar:
uex=np.array([1./np.tanh(sqrtk * t + np.arctanh(1/sqrtk /u0[0])) / sqrtk])
    elif u0[0] < 0 and t < - np.arctan(sqrtk * u0[0]) / sqrtk:
uex=np.array([np.tan(sqrtk * t + np.arctan(sqrtk * u0[0])) / sqrtk])
else:
uex=np.array([np.tanh(sqrtk * t + np.arctanh(sqrtk * u0[0])) / sqrtk])
return uex
def nonlinear_scalar_jacobian(u,t=0,k_coef=10):
Jf=np.zeros((len(u),len(u)))
Jf[0,0]=-k_coef*abs(u[0])
return Jf
# SYSTEMS
# linear systems
def linear_system2_flux(u,t=0):
d=np.zeros(len(u))
d[0]= -5*u[0] + u[1]
d[1]= 5*u[0] -u[1]
return d
def linear_system2_exact_solution(u0,t):
A=np.array([[-5,1],[5,-1]])
u_e=u0+(1-np.exp(-6*t))/6*np.dot(A,u0)
return u_e
def linear_system2_jacobian(u,t=0):
Jf=np.array([[-5,1],[5,-1]])
return Jf
linear_system2_matrix = np.array([[-5,1],[5,-1]])
def linear_system2_production_destruction(u,t=0):
p=np.zeros((len(u),len(u)))
d=np.zeros((len(u),len(u)))
p[0,1]=u[1]
d[1,0]=u[1]
p[1,0]=5*u[0]
d[0,1]=5*u[0]
return p,d
#lin system 3 x3
def linear_system3_flux(u,t=0):
d=np.zeros(len(u))
d[0]= -u[0] + 3*u[1]
d[1]= -3*u[1] + 5*u[2]
d[2]= -5*u[2]
return d
def linear_system3_exact_solution(u0,t=0):
u_e = np.zeros(len(u0))
u_e[0] = 15.0/8.0*u0[2]*(np.exp(-5*t) - 2*np.exp(-3*t)+np.exp(-t))
u_e[1] = 5.0/2.0*u0[2]*(-np.exp(-5*t) + np.exp(-3*t))
u_e[2] = u0[2]*np.exp(-5*t)
return u_e
def linear_system3_jacobian(u,t=0):
Jf=np.zeros((len(u),len(u)))
Jf[0,0]=-1.
Jf[0,1]=3
Jf[1,1] = -3
Jf[1,2] = 5
Jf[2,2] = -5
return Jf
## Nonlinear 3x3 system production destruction
def nonlinear_system3_flux(u,t=0):
ff=np.zeros(len(u))
ff[0]= -u[0]*u[1]/(u[0]+1)
ff[1]= u[0]*u[1]/(u[0]+1) -0.3*u[1]
ff[2]= 0.3*u[1]
return ff
def nonlinear_system3_production_destruction(u,t=0):
p=np.zeros((len(u),len(u)))
d=np.zeros((len(u),len(u)))
p[1,0]=u[0]*u[1]/(u[0]+1)
d[0,1]=p[1,0]
p[2,1]=0.3*u[1]
d[1,2]=p[2,1]
return p,d
# SIR Model
def SIR_flux(u,t=0,beta=3,gamma=1):
ff=np.zeros(len(u))
N=np.sum(u)
ff[0]=-beta*u[0]*u[1]/N
ff[1]=+beta*u[0]*u[1]/N - gamma*u[1]
ff[2]= gamma*u[1]
return ff
def SIR_jacobian(u,t=0,beta=3,gamma=1):
Jf=np.zeros((len(u),len(u)))
N=np.sum(u)
Jf[0,0]=-beta*u[1]/N
Jf[0,1]=-beta*u[0]/N
Jf[1,0]= beta*u[1]/N
Jf[1,1]= beta*u[0]/N - gamma
Jf[2,1] = gamma
return Jf
def SIR_production_destruction(u,t=0,beta=3,gamma=1):
p=np.zeros((len(u),len(u)))
d=np.zeros((len(u),len(u)))
N=np.sum(u)
p[1,0]=beta*u[0]*u[1]/N
d[0,1]=p[1,0]
p[2,1]=gamma*u[1]
d[1,2]=p[2,1]
return p,d
# Nonlinear_oscillator
def nonLinearOscillator_flux(u,t=0,alpha=0.):
ff=np.zeros(np.shape(u))
n=np.sqrt(np.dot(u,u))
ff[0]=-u[1]/n-alpha*u[0]/n
ff[1]=u[0]/n - alpha*u[1]/n
return ff
def nonLinearOscillator_exact_solution(u0,t):
u_ex=np.zeros(np.shape(u0))
n=np.sqrt(np.dot(u0,u0))
u_ex[0]=np.cos(t/n)*u0[0]-np.sin(t/n)*u0[1]
u_ex[1]=np.sin(t/n)*u0[0]+np.cos(t/n)*u0[1]
return u_ex
def nonLinearOscillator_entropy(u,t=0,alpha=0.):
return np.dot(u,u)/2.
def nonLinearOscillator_entropy_variable(u,t=0,alpha=0.):
return u
# Non linear oscillator damped
def nonLinearOscillatorDamped_flux(u,t,alpha=0.01):
ff=np.zeros(np.shape(u))
n=np.sqrt(np.dot(u,u))
ff[0]=-u[1]/n-alpha*u[0]/n
ff[1]=u[0]/n - alpha*u[1]/n
return ff
def nonLinearOscillatorDamped_exact_solution(u0,t,alpha=0.01):
u_ex=np.zeros(np.shape(u0))
n0=np.sqrt(np.dot(u0,u0))
n=n0*np.exp(-alpha*t)
u_ex[0]=n/n0*(np.cos(t/n)*u0[0]-np.sin(t/n)*u0[1])
u_ex[1]=n/n0*(np.sin(t/n)*u0[0]+np.cos(t/n)*u0[1])
return u_ex
# pendulum
def pendulum_flux(u,t=0):
ff=np.zeros(np.shape(u))
ff[0]=u[1]
ff[1]=-np.sin(u[0])
return ff
def pendulum_jacobian(u,t=0):
Jf=np.zeros((2,2))
Jf[0,1]=1.
Jf[1,0]=np.cos(u[0])
return Jf
def pendulum_entropy(u,t=0):
    return np.array(0.5*u[1]**2.-np.cos(u[0]), dtype=float)
def pendulum_entropy_variables(u,t=0):
v=np.zeros(np.shape(u))
v[0]=np.sin(u[0])
v[1]=u[1]
return v
# Robertson
def Robertson_flux(u,t=0,alpha=10**4,beta=0.04, gamma=3*10**7):
ff=np.zeros(np.shape(u))
ff[0] = alpha*u[1]*u[2]-beta*u[0]
ff[1] = beta*u[0]-alpha*u[1]*u[2] - gamma*u[1]**2
ff[2] = gamma*u[1]**2
return ff
def Robertson_jacobian(u,t=0,alpha=10**4,beta=0.04, gamma=3*10**7):
Jf=np.zeros((3,3))
Jf[0,0]= -beta
Jf[0,1]= alpha*u[2]
Jf[0,2]= alpha*u[1]
Jf[1,0]= beta
Jf[1,1]= -alpha*u[2]-2*gamma*u[1]
Jf[1,2]= -alpha*u[1]
Jf[2,1] = 2*gamma*u[1]
return Jf
def Robertson_production_destruction(u,t=0,alpha=10**4,beta=0.04, gamma=3*10**7):
p=np.zeros((len(u),len(u)))
d=np.zeros((len(u),len(u)))
p[0,1]=alpha*u[1]*u[2]
d[1,0]=p[0,1]
p[1,0]=beta*u[0]
d[0,1]=p[1,0]
p[2,1]=gamma*u[1]**2
d[1,2]=p[2,1]
return p,d
def Robertson_rhs(u,t=0):
return np.zeros(3)
# Lotka:
def lotka_flux(u,t=0,alpha=1,beta=0.2,delta=0.5,gamma=0.2):
ff=np.zeros(np.shape(u))
ff[0]=alpha*u[0]-beta*u[0]*u[1]
ff[1]=delta*u[0]*u[1]-gamma*u[1]
return ff
def lotka_jacobian(u,t=0,alpha=1,beta=0.2,delta=0.5,gamma=0.2):
Jf=np.zeros((2,2))
Jf[0,0] = alpha -beta*u[1]
Jf[0,1] = -beta*u[0]
Jf[1,0] = delta*u[1]
Jf[1,1] = delta*u[0] -gamma
return Jf
def lotka_entropy(u,t=0,alpha=1,beta=0.2,delta=0.5,gamma=0.2):
    return np.array(delta*u[0]-gamma*np.log(u[0])+beta*u[1]-alpha*np.log(u[1]), dtype=float)
def lotka_entropy_variables(u,t=0,alpha=1,beta=0.2,delta=0.5,gamma=0.2):
v=np.zeros(np.shape(u))
v[0]=delta-gamma/u[0]
v[1]=beta-alpha/u[1]
return v
#3 bodies problem in 2D: U=(x_1,x_2,v_1,v_2,y_1,y_2,w_1,w_2,z_1,z_2,s_1,s_2)
# where x is the 2D position of body1 and v is speed body1 sun
# y, w are position and velocity body2 earth
# z, s are position and velocity body3 mars
def threeBodies_flux(u,t=0):
m1=1.98892*10**30
m2=5.9722*10**24
m3=6.4185*10**23
G=6.67*10**(-11)
f=np.zeros(np.shape(u))
x=u[0:2]
v=u[2:4]
y=u[4:6]
w=u[6:8]
z=u[8:10]
s=u[10:12]
dxy3=np.linalg.norm(x-y)**3
dxz3=np.linalg.norm(x-z)**3
dyz3=np.linalg.norm(y-z)**3
f[0:2]=v
f[2:4]=-m2*G/dxy3*(x-y)-m3*G/dxz3*(x-z)
f[4:6]=w
f[6:8]=-m1*G/dxy3*(y-x)-m3*G/dyz3*(y-z)
f[8:10]=s
f[10:12]=-m1*G/dxz3*(z-x)-m2*G/dyz3*(z-y)
return f
class ODEproblem:
def __init__(self,name):
self.name=name
if self.name=="linear_scalar":
self.u0 = np.array([1.])
self.T_fin= 2.
self.k_coef=10
self.matrix=np.array([-self.k_coef])
elif self.name=="nonlinear_scalar":
self.k_coef=10
self.u0 = np.array([1.1/np.sqrt(self.k_coef)])
self.T_fin= 1.
elif self.name=="linear_system2":
self.u0 = np.array([0.9,0.1])
self.T_fin= 1.
self.matrix = np.array([[-5,1],[5,-1]])
elif self.name=="linear_system3":
self.u0 = np.array([0,0.,10.])
self.T_fin= 10.
elif self.name=="nonlinear_system3":
self.u0 = np.array([9.98,0.01,0.01])
self.T_fin= 30.
elif self.name=="SIR":
self.u0 = np.array([1000.,1,10**-20])
self.T_fin= 10.
elif self.name=="nonLinearOscillator":
self.u0 = np.array([1.,0.])
self.T_fin= 50
elif self.name=="nonLinearOscillatorDamped":
self.u0 = np.array([1.,0.])
self.T_fin= 50
elif self.name=="pendulum":
self.u0 = np.array([2.,0.])
self.T_fin= 50
elif self.name=="Robertson":
self.u0 = np.array([1.,10**-20,10**-20])
self.T_fin= 10.**10.
elif self.name=="lotka":
self.u0 = np.array([1.,2.])
self.T_fin= 100.
elif self.name=="threeBodies":
self.u0 = np.array([0,0,0,0,149*10**9,0,0,30*10**3,-226*10**9,0,0,-24.0*10**3])
self.T_fin= 10.**8.
else:
raise ValueError("Problem not defined")
def flux(self,u,t=0):
if self.name=="linear_scalar":
return linear_scalar_flux(u,t,self.k_coef)
elif self.name=="nonlinear_scalar":
return nonlinear_scalar_flux(u,t,self.k_coef)
elif self.name=="linear_system2":
return linear_system2_flux(u,t)
elif self.name=="linear_system3":
return linear_system3_flux(u,t)
elif self.name=="nonlinear_system3":
return nonlinear_system3_flux(u,t)
elif self.name=="SIR":
return SIR_flux(u,t)
elif self.name=="nonLinearOscillator":
return nonLinearOscillator_flux(u,t)
elif self.name=="nonLinearOscillatorDamped":
return nonLinearOscillatorDamped_flux(u,t)
elif self.name=="pendulum":
return pendulum_flux(u,t)
elif self.name=="Robertson":
return Robertson_flux(u,t)
elif self.name=="lotka":
return lotka_flux(u,t)
elif self.name=="threeBodies":
return threeBodies_flux(u,t)
else:
raise ValueError("Flux not defined for this problem")
def jacobian(self,u,t=0):
if self.name=="linear_scalar":
return linear_scalar_jacobian(u,t,self.k_coef)
elif self.name=="nonlinear_scalar":
return nonlinear_scalar_jacobian(u,t,self.k_coef)
elif self.name=="linear_system2":
return linear_system2_jacobian(u,t)
elif self.name=="linear_system3":
return linear_system3_jacobian(u,t)
elif self.name=="pendulum":
return pendulum_jacobian(u,t)
elif self.name=="SIR":
return SIR_jacobian(u,t)
elif self.name=="Robertson":
return Robertson_jacobian(u,t)
elif self.name=="lotka":
return lotka_jacobian(u,t)
else:
raise ValueError("Jacobian not defined for this problem")
def exact(self,u,t):
if self.name=="linear_scalar":
return linear_scalar_exact_solution(u,t,self.k_coef)
elif self.name=="nonlinear_scalar":
return nonlinear_scalar_exact_solution(u,t,self.k_coef)
elif self.name=="linear_system2":
return linear_system2_exact_solution(u,t)
elif self.name=="linear_system3":
return linear_system3_exact_solution(u,t)
elif self.name=="nonLinearOscillator":
return nonLinearOscillator_exact_solution(u,t)
elif self.name=="nonLinearOscillatorDamped":
return nonLinearOscillatorDamped_exact_solution(u,t)
else:
raise ValueError("Exact solution not defined for this problem")
def exact_solution_times(self,u0,tt):
exact_solution=np.zeros((len(u0),len(tt)))
for it, t in enumerate(tt):
exact_solution[:,it]=self.exact(u0,t)
return exact_solution
def prod_dest(self,u,t=0):
if self.name=="linear_system2":
return linear_system2_production_destruction(u,t)
if self.name=="nonlinear_system3":
return nonlinear_system3_production_destruction(u,t)
elif self.name=="Robertson":
return Robertson_production_destruction(u,t)
elif self.name=="SIR":
return SIR_production_destruction(u,t)
else:
raise ValueError("Prod Dest not defined for this problem")
```
```python
## explicit RK method
def explicitRelaxRK(flux, y_0, dt0, T_fin, KtMax, A, b, c):
# Solving u'=F(u,t)
# input: flux=F, tspan is a vector of times determining the RK steps
# input: y_0 the initial condition
# dt0 is the basic time interval, that will be modified along the steps
# T_fin is the final time
# KtMax is maximum number of timesteps
# input: A,b,c are matrix and vectors of RK methods
dim=len(y_0) # S
y=np.zeros((dim,KtMax)) # initializing the variable of solutions
tspan=np.zeros(KtMax) # times will be stored here
gammas = np.zeros(KtMax) # Gamma relaxation coefficients
time= 0.
gammas[0] = 1
n=0 # Time step index
tspan[0] = time
y[:,0]=y_0 # first timestep
S=np.shape(A)[0]
u=np.zeros((dim,S)) # Internal stages
Fu=np.zeros((dim,S)) # Flux at internal stages
while(time<T_fin and n<KtMax): # n=0,..., N-1
delta_t=min(dt0,T_fin-time)
#Classic RK step
for k in range(S):
u[:,k]=y[:,n]
for j in range(k):
u[:,k] = u[:,k]+ delta_t*A[k,j]*Fu[:,j]
Fu[:,k] = flux(u[:,k],tspan[n]+delta_t*c[k])
yn1=y[:,n]
for j in range(S):
yn1=yn1+delta_t*b[j]*Fu[:,j]
# Compute the relaxation gamma
deltay = yn1-y[:,n]
sumBScal=0.
for j in range(S):
sumBScal=sumBScal + b[j]* np.dot(u[:,j]-y[:,n],Fu[:,j])
gamma = 2* delta_t* sumBScal/np.dot(deltay,deltay)
# Update the n+1 values
y[:,n+1]=y[:,n] +gamma*deltay
if (time+delta_t<T_fin -10**-16):
time = time + gamma*delta_t
else:
time=T_fin
tspan[n+1]=time
gammas[n+1]=gamma
n=n+1
return tspan[:n+1], y[:,:n+1] , gammas[:n+1]
```
```python
## explicit RK method
def explicitRK(flux, tspan, y_0, A, b, c):
# Solving u'=F(u,t)
# input: flux=F, tspan is a vector of times determining the RK steps
# input: y_0 the initial condition
# input: A,b,c are matrix and vectors of RK methods
N_time=len(tspan) # N+1
dim=len(y_0) # S
y=np.zeros((dim,N_time)) # initializing the variable of solutions
y[:,0]=y_0 # first timestep
S=np.shape(A)[0]
u=np.zeros((dim,S)) # Internal stages
Fu=np.zeros((dim,S)) # Flux at internal stages
for n in range(N_time-1): # n=0,..., N-1
delta_t=tspan[n+1]-tspan[n]
for k in range(S):
u[:,k]=y[:,n]
for j in range(k):
u[:,k] =u[:,k]+ delta_t*A[k,j]*Fu[:,j]
Fu[:,k] = flux(u[:,k],tspan[n]+delta_t*c[k])
y[:,n+1]=y[:,n]
for j in range(S):
y[:,n+1]=y[:,n+1]+delta_t*b[j]*Fu[:,j]
return tspan, y
```
```python
pr=ODEproblem("nonLinearOscillator") #"nonLinearOscillatorDamped"
dt0=1 #0.5
t_span=np.arange(0,pr.T_fin+10**-16,dt0)
rk44=rk.loadRKM('RK44')
tt,uu=explicitRK(pr.flux,t_span,pr.u0,rk44.A,rk44.b,rk44.c)
ttR,uuR, gammas = explicitRelaxRK(pr.flux, pr.u0, dt0, pr.T_fin, int(pr.T_fin//dt0*3), rk44.A, rk44.b, rk44.c)
tEx=np.linspace(0,pr.T_fin, 200)
uEx=np.zeros((uu.shape[0], len(tEx)))
for k in range(len(tEx)):
uEx[:,k] = pr.exact(pr.u0,tEx[k])
plt.figure(figsize=(12,5))
plt.subplot(121)
plt.plot(tt,uu[0,:],label="RK")
plt.plot(ttR,uuR[0,:],"--",label="RRK")
plt.plot(tEx,uEx[0,:],":",label="ex")
plt.legend()
plt.subplot(122)
plt.plot(tt,uu[1,:],label="RK")
plt.plot(ttR,uuR[1,:],"--",label="RRK")
plt.plot(tEx,uEx[1,:],":",label="ex")
plt.legend()
plt.figure()
plt.plot(ttR,gammas)
plt.title("Gamma relaxation")
plt.figure(figsize=(12,5))
plt.subplot(121)
errorEnRRK=[np.dot(uuR[:,k],uuR[:,k])-np.dot(pr.exact(pr.u0,ttR[k]),pr.exact(pr.u0,ttR[k])) for k in range(len(ttR))]
errorEnRK=[np.dot(uu[:,k],uu[:,k])-np.dot(pr.exact(pr.u0,tt[k]),pr.exact(pr.u0,tt[k])) for k in range(len(tt))]
plt.plot(ttR,errorEnRRK,label="RRK")
plt.plot(tt,errorEnRK,label="RK")
plt.title("Energy error")
plt.legend()
plt.yscale("symlog",linthresh=1e-15)
plt.subplot(122)
plt.title("Phase space")
plt.plot(uu[0,:],uu[1,:], "x", label="RK")
plt.plot(uuR[0,:],uuR[1,:], "o", label="RRK")
plt.plot(uEx[0,:],uEx[1,:], ":", label="ex")
plt.legend()
```
```python
from nodepy import rk
## Convergence
pr=ODEproblem("nonLinearOscillator")
pr.T_fin=5
# Define some explicit RK
A=np.array([[0,0],[1/2,0]])
b=np.array([0,1])
rk2 = rk.ExplicitRungeKuttaMethod(A,b)
A=np.array([[0]])
b=np.array([1])
rk1 = rk.ExplicitRungeKuttaMethod(A,b)
A=np.array([[0,0,0],[2/3,0,0],[1/3,1/3,0]])
b=np.array([1/4,0,3/4])
rk3 = rk.ExplicitRungeKuttaMethod(A,b)
rk44=rk.loadRKM('RK44')
def error(tt,yy):
errors=np.zeros(len(tt))
for it, t in enumerate(tt):
errors[it]=np.linalg.norm(yy[:,it]-pr.exact(yy[:,0],t))
return np.mean(errors)
Ns=[2**k for k in range(5,10)]
solvers=[ rk2,rk3,rk44] #rk1
errorEx =np.zeros((len(solvers),len(Ns)))
errorRRK =np.zeros((len(solvers),len(Ns)))
dts= np.zeros(len(Ns))
for iN, N in enumerate(Ns):
tspan=np.linspace(0,pr.T_fin,N)
dts[iN]=tspan[1]-tspan[0]
for iS, rkM in enumerate(solvers):
tt,yy=explicitRK(pr.flux,tspan,pr.u0,rkM.A,rkM.b,rkM.c)
errorEx[iS,iN]=error(tt,yy)
        tt,yy, gammas =explicitRelaxRK(pr.flux,pr.u0,dts[iN], pr.T_fin, int(pr.T_fin//dts[iN]*3),\
rkM.A,rkM.b,rkM.c)
errorRRK[iS,iN]=error(tt,yy)
plt.figure()
for iS, rkM in enumerate(solvers):
orderRK=rkM.order()
plt.loglog(dts,errorEx[iS,:],label=rkM.name+str(orderRK))
plt.loglog(dts,errorRRK[iS,:],"*-",label="R"+rkM.name+str(orderRK))
plt.loglog(dts,dts**(orderRK)/10,":", label="order %d"%(orderRK))
plt.legend()
```
You should see a superconvergence phenomenon for RRK with odd order.
```python
from nodepy import rk
## Convergence
pr=ODEproblem("linear_system2")
# Define some explicit RK
A=np.array([[0,0],[1/2,0]])
b=np.array([0,1])
rk2 = rk.ExplicitRungeKuttaMethod(A,b)
A=np.array([[0]])
b=np.array([1])
rk1 = rk.ExplicitRungeKuttaMethod(A,b)
A=np.array([[0,0,0],[2/3,0,0],[1/3,1/3,0]])
b=np.array([1/4,0,3/4])
rk3 = rk.ExplicitRungeKuttaMethod(A,b)
rk44=rk.loadRKM('RK44')
def error(tt,yy):
errors=np.zeros(len(tt))
for it, t in enumerate(tt):
errors[it]=np.linalg.norm(yy[:,it]-pr.exact(yy[:,0],t))
return np.mean(errors)
Ns=[2**k for k in range(3,10)]
solvers=[ rk2,rk3,rk44] #rk1
errorEx =np.zeros((len(solvers),len(Ns)))
errorRRK =np.zeros((len(solvers),len(Ns)))
dts= np.zeros(len(Ns))
for iN, N in enumerate(Ns):
tspan=np.linspace(0,pr.T_fin,N)
dts[iN]=tspan[1]-tspan[0]
for iS, rkM in enumerate(solvers):
tt,yy=explicitRK(pr.flux,tspan,pr.u0,rkM.A,rkM.b,rkM.c)
errorEx[iS,iN]=error(tt,yy)
        tt,yy, gammas =explicitRelaxRK(pr.flux,pr.u0,dts[iN], pr.T_fin, int(pr.T_fin//dts[iN]*5),\
rkM.A,rkM.b,rkM.c)
errorRRK[iS,iN]=error(tt,yy)
plt.figure()
for iS, rkM in enumerate(solvers):
orderRK=rkM.order()
plt.loglog(dts,errorEx[iS,:],label=rkM.name+str(orderRK))
plt.loglog(dts,errorRRK[iS,:],"*-",label="R"+rkM.name+str(orderRK))
plt.loglog(dts,dts**(orderRK),":", label="order %d"%(orderRK))
plt.legend()
```
In this case there is no conserved energy at play, so the relaxation method essentially recovers the original error behavior.
## Pro exercise: code the relaxation RK for a general entropy
* $\frac{d}{dt} \eta(y(t)) \stackrel{(\leq)}{=}0 $, i.e. $\langle \partial_y \eta (y), F(y) \rangle \stackrel{(\leq)}{=}0 $
* The final equation can be a nonlinear equation in $\gamma$, hence a nonlinear solver must be used, try with scipy.optimize.newton or .broyden1
* The extra input we need are the entropy function $\eta(y)$ and the entropy variables function $\partial_y \eta(y)$
* Final relation that should hold is
$$
\eta(y^{n+1}_\gamma)-\eta(y^n)- \gamma \Delta t \sum_{j=1}^S b_j \langle \partial_y \eta(y^{(j)}), F(y^{(j)}) \rangle=0
$$
```python
from scipy import optimize
## explicit RK method
def explicitRelaxRKEntropy(flux, entropy, e_v, y_0, dt0, T_fin, KtMax, A, b, c):
# Solving u'=F(u,t)
# input: flux=F, tspan is a vector of times determining the RK steps
# entropy: scalar function of y
# entropy variable e_v: vector function of y
# input: y_0 the initial condition
# dt0 is the basic time interval, that will be modified along the steps
# T_fin is the final time
# KtMax is maximum number of timesteps
# input: A,b,c are matrix and vectors of RK methods
dim=len(y_0) # S
y=np.zeros((dim,KtMax)) # initializing the variable of solutions
tspan=np.zeros(KtMax) # times will be stored here
gammas = np.zeros(KtMax) # Gamma relaxation coefficients
time= 0.
gammas[0] = 1
n=0 # Time step index
tspan[0] = time
y[:,0]=y_0 # first timestep
S=np.shape(A)[0]
u=np.zeros((dim,S)) # Internal stages
Fu=np.zeros((dim,S)) # Flux at internal stages
while(time<T_fin and n<KtMax): # n=0,..., N-1
ent0=entropy(y[:,n])
e_v0 = e_v(y[:,n])
delta_t=min(dt0,T_fin-time)
#Classic RK step
for k in range(S):
u[:,k]=y[:,n]
for j in range(k):
u[:,k] = u[:,k]+ delta_t*A[k,j]*Fu[:,j]
Fu[:,k] = flux(u[:,k],tspan[n]+delta_t*c[k])
yn1=y[:,n]
for j in range(S):
yn1=yn1+delta_t*b[j]*Fu[:,j]
# Compute the relaxation gamma
deltay = yn1-y[:,n]
sumBScal=0.
for j in range(S):
sumBScal=sumBScal + b[j]* np.dot(e_v(u[:,j]),Fu[:,j])
        residual = lambda gamma: np.array(entropy(np.array(y[:,n]+gamma*deltay,dtype=float))-ent0-gamma*delta_t*sumBScal, dtype=float)
        deriv_res = lambda gamma: np.array(np.dot(e_v(np.array(y[:,n]+gamma*deltay,dtype=float)),deltay)-delta_t*sumBScal, dtype=float)
gamma = optimize.newton(residual,np.array([1.]),fprime=deriv_res,tol=10**-14) #broyden1(residual,1.,f_tol=10**-13)
# Update the n+1 values
y[:,n+1]=y[:,n] +gamma*deltay
if (time+delta_t<T_fin -10**-16):
time = time + gamma*delta_t
else:
time=T_fin
tspan[n+1]=time
gammas[n+1]=gamma
n=n+1
return tspan[:n+1], y[:,:n+1] , gammas[:n+1]
```
```python
pr=ODEproblem("nonLinearOscillator")
dt0=1 #0.5
entropy = lambda y: (y[0]**2.+y[1]**2.)/2. ##y[1]**2/2.-np.cos(y[0])
e_v= lambda y: y #np.array([ np.sin(y[0]), y[1] ])
t_span=np.arange(0,pr.T_fin+10**-16,dt0)
rk44=rk.loadRKM('RK44')
tt,uu=explicitRK(pr.flux,t_span,pr.u0,rk44.A,rk44.b,rk44.c)
ttR,uuR, gammas = explicitRelaxRKEntropy(pr.flux, entropy, e_v, pr.u0, dt0, pr.T_fin, int(pr.T_fin//dt0*5), rk44.A, rk44.b, rk44.c)
tEx=np.linspace(0,pr.T_fin, 200)
uEx=np.zeros((uu.shape[0], len(tEx)))
for k in range(len(tEx)):
uEx[:,k] = pr.exact(pr.u0,tEx[k])
plt.figure(figsize=(12,5))
plt.subplot(121)
plt.plot(tt,uu[0,:],label="RK")
plt.plot(ttR,uuR[0,:],"--",label="RRK")
plt.plot(tEx,uEx[0,:],":",label="ex")
plt.legend()
plt.subplot(122)
plt.plot(tt,uu[1,:],label="RK")
plt.plot(ttR,uuR[1,:],"--",label="RRK")
plt.plot(tEx,uEx[1,:],":",label="ex")
plt.legend()
plt.figure()
plt.plot(ttR,gammas)
plt.title("Gamma relaxation")
plt.figure(figsize=(12,5))
plt.subplot(121)
errorEnRRK=[np.dot(uuR[:,k],uuR[:,k])-np.dot(pr.exact(pr.u0,ttR[k]),pr.exact(pr.u0,ttR[k])) for k in range(len(ttR))]
errorEnRK=[np.dot(uu[:,k],uu[:,k])-np.dot(pr.exact(pr.u0,tt[k]),pr.exact(pr.u0,tt[k])) for k in range(len(tt))]
plt.plot(tt,errorEnRK,label="RK")
plt.plot(ttR,errorEnRRK,label="RRK")
plt.title("Energy error")
plt.legend()
plt.yscale("symlog",linthresh=1e-15)
plt.subplot(122)
plt.title("Phase space")
plt.plot(uu[0,:],uu[1,:], "x", label="RK")
plt.plot(uuR[0,:],uuR[1,:], "o", label="RRK")
plt.plot(uEx[0,:],uEx[1,:], ":", label="ex")
plt.legend()
```
```python
pr=ODEproblem("pendulum")
dt0=1 #0.5
entropy = pendulum_entropy
e_v= pendulum_entropy_variables
pr.u0=np.array([2.0,0.])
t_span=np.arange(0,pr.T_fin+10**-16,dt0)
rk44=rk.loadRKM('RK44')
tt,uu=explicitRK(pr.flux,t_span,pr.u0,rk44.A,rk44.b,rk44.c)
ttR,uuR, gammas = explicitRelaxRKEntropy(pr.flux, entropy, e_v, pr.u0, dt0, pr.T_fin, int(pr.T_fin//dt0*5), rk44.A, rk44.b, rk44.c)
plt.figure(figsize=(12,5))
plt.subplot(121)
plt.plot(tt,uu[0,:],label="RK")
plt.plot(ttR,uuR[0,:],"--",label="RRK")
plt.legend()
plt.subplot(122)
plt.plot(tt,uu[1,:],label="RK")
plt.plot(ttR,uuR[1,:],"--",label="RRK")
plt.legend()
plt.figure()
plt.plot(ttR,gammas)
plt.title("Gamma relaxation")
plt.figure(figsize=(12,5))
plt.subplot(121)
errorEnRRK=[entropy(uuR[:,k])-entropy(uuR[:,0]) for k in range(len(ttR))]
errorEnRK=[entropy(uu[:,k])-entropy(uu[:,0]) for k in range(len(tt))]
plt.plot(ttR,errorEnRRK,label="RRK")
plt.plot(tt,errorEnRK,label="RK")
plt.title("Entropy error")
plt.legend()
plt.yscale("symlog",linthresh=1e-15)
plt.subplot(122)
uLin=np.linspace(-2,2,100)
plt.title("Phase space")
plt.plot(uu[0,:],uu[1,:], "x", label="RK")
plt.plot(uuR[0,:],uuR[1,:], "o", label="RRK")
plt.plot(uLin,np.sqrt(2.*np.cos(uLin)+2.*entropy(uu[:,0])),"g:",label="exact")
plt.plot(uLin,-np.sqrt(2.0*np.cos(uLin)+2.*entropy(uu[:,0])),"g:")
plt.plot()
plt.legend()
```
```python
pr=ODEproblem("lotka")
dt0=1.2 #0.5
entropy = lotka_entropy
e_v= lotka_entropy_variables
t_span=np.arange(0,pr.T_fin+10**-16,dt0)
rk44=rk.loadRKM('RK44')
tt,uu=explicitRK(pr.flux,t_span,pr.u0,rk44.A,rk44.b,rk44.c)
ttR,uuR, gammas = explicitRelaxRKEntropy(pr.flux, entropy, e_v, pr.u0, dt0, pr.T_fin, int(pr.T_fin//dt0*5), rk44.A, rk44.b, rk44.c)
plt.figure(figsize=(12,5))
plt.subplot(121)
plt.plot(tt,uu[0,:],label="RK")
plt.plot(ttR,uuR[0,:],"--",label="RRK")
plt.legend()
plt.subplot(122)
plt.plot(tt,uu[1,:],label="RK")
plt.plot(ttR,uuR[1,:],"--",label="RRK")
plt.legend()
plt.figure()
plt.plot(ttR,gammas)
plt.title("Gamma relaxation")
plt.figure(figsize=(12,5))
plt.subplot(121)
errorEnRRK=[entropy(uuR[:,k])-entropy(uuR[:,0]) for k in range(len(ttR))]
errorEnRK=[entropy(uu[:,k])-entropy(uu[:,0]) for k in range(len(tt))]
plt.plot(ttR,errorEnRRK,label="RRK")
plt.plot(tt,errorEnRK,label="RK")
plt.title("Entropy error")
plt.legend()
plt.yscale("symlog",linthresh=1e-15)
plt.subplot(122)
uLin=np.linspace(-2,2,100)
plt.title("Phase space")
plt.plot(uu[0,:],uu[1,:], "x", label="RK")
plt.plot(uuR[0,:],uuR[1,:], "o", label="RRK")
plt.plot()
plt.legend()
```
```python
```
```python
from pycalphad import equilibrium, Database, Model, variables as v
import sympy
import numpy as np
TDB = """
ELEMENT A GRAPHITE 12.011 1054.0 5.7423 !
ELEMENT B BCC_A2 55.847 4489.0 27.2797 !
ELEMENT C BCC_A2 55.847 4489.0 27.2797 !
TYPE_DEFINITION % SEQ * !
PHASE TEST % 1 1 !
CONSTITUENT TEST : A,B,C: !
"""
my_phases = ['TEST']
comps = ['A', 'B','C']
comps = sorted(comps)
conds = dict({v.T: 1000, v.P: 101325, v.N: 1})
dbf = Database(TDB)
mod = Model(dbf, ['A', 'B', 'C'], 'TEST')
NP = sympy.Symbol('NP', real=True)
total_moles = sum([NP*mod.moles(c) for c in comps])
total_moles = NP
variables = [v.N, v.P, v.T] + mod.site_fractions + [NP]
mass_cons = [v.N, v.P, v.T]
mass_cons.extend(mod.get_internal_constraints())
mass_cons.extend(NP*mod.moles(c) for c in comps)
mass_jac = []
for cons in mass_cons:
mass_jac.append([cons.diff(x) for x in variables])
energy_grad = [(total_moles*mod.GM).diff(x) for x in variables]
```
```python
mass_cons
```
[N,
P,
T,
TEST0A + TEST0B + TEST0C - 1,
1.0*TEST0A*NP/(1.0*TEST0A + 1.0*TEST0B + 1.0*TEST0C),
1.0*TEST0B*NP/(1.0*TEST0A + 1.0*TEST0B + 1.0*TEST0C),
1.0*TEST0C*NP/(1.0*TEST0A + 1.0*TEST0B + 1.0*TEST0C)]
```python
mass_jac
```
[[1, 0, 0, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0, 0],
[0, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 0],
[0,
0,
0,
-1.0*TEST0A*NP/(1.0*TEST0A + 1.0*TEST0B + 1.0*TEST0C)**2 + 1.0*NP/(1.0*TEST0A + 1.0*TEST0B + 1.0*TEST0C),
-1.0*TEST0A*NP/(1.0*TEST0A + 1.0*TEST0B + 1.0*TEST0C)**2,
-1.0*TEST0A*NP/(1.0*TEST0A + 1.0*TEST0B + 1.0*TEST0C)**2,
1.0*TEST0A/(1.0*TEST0A + 1.0*TEST0B + 1.0*TEST0C)],
[0,
0,
0,
-1.0*TEST0B*NP/(1.0*TEST0A + 1.0*TEST0B + 1.0*TEST0C)**2,
-1.0*TEST0B*NP/(1.0*TEST0A + 1.0*TEST0B + 1.0*TEST0C)**2 + 1.0*NP/(1.0*TEST0A + 1.0*TEST0B + 1.0*TEST0C),
-1.0*TEST0B*NP/(1.0*TEST0A + 1.0*TEST0B + 1.0*TEST0C)**2,
1.0*TEST0B/(1.0*TEST0A + 1.0*TEST0B + 1.0*TEST0C)],
[0,
0,
0,
-1.0*TEST0C*NP/(1.0*TEST0A + 1.0*TEST0B + 1.0*TEST0C)**2,
-1.0*TEST0C*NP/(1.0*TEST0A + 1.0*TEST0B + 1.0*TEST0C)**2,
-1.0*TEST0C*NP/(1.0*TEST0A + 1.0*TEST0B + 1.0*TEST0C)**2 + 1.0*NP/(1.0*TEST0A + 1.0*TEST0B + 1.0*TEST0C),
1.0*TEST0C/(1.0*TEST0A + 1.0*TEST0B + 1.0*TEST0C)]]
```python
A = sympy.Matrix(mass_jac).T.pinv()
x = A * sympy.Matrix(energy_grad)
```
```python
from pycalphad.codegen.sympydiff_utils import build_functions
mu_a = build_functions(x[4], variables, include_grad=True, include_hess=False)
mu_b = build_functions(x[5], variables, include_grad=True, include_hess=False)
mu_c = build_functions(x[6], variables, include_grad=True, include_hess=False)
energy = build_functions(mod.GM, variables, include_grad=True)
```
```python
print(x[4].free_symbols)
```
{TEST0A, TEST0B, NP, T, TEST0C}
```python
mu_a.func([1, 1e5, 1000, 0.4, 0.6, 1e-12, 1e-6])
```
array(-7618.14919886)
```python
np.array(mu_a.grad([1, 1e5, 1000, 0.4, 0.6, 1e-12, 1])) - np.array(mu_b.grad([1, 1e5, 1000, 0.4, 0.6, 1e-12, 1]))
```
array([ 0.00000000e+00, 0.00000000e+00, -3.37123964e+00, 2.07862500e+04,
-1.38575000e+04, 9.23094935e-01, -9.44368765e-10])
```python
from pycalphad.core.solver import InteriorPointSolver
class ProblemSaver(InteriorPointSolver):
saved_problem = [None]
def solve(self, prob):
self.saved_problem[0] = prob
self.verbose = True
return super(ProblemSaver, self).solve(prob)
eq = equilibrium(dbf, ['A', 'B', 'C'], ['TEST'],
{v.MU('B'): -1000, v.X('A'): 0.1, v.T: 800, v.P: 101325}, solver=ProblemSaver())
```
Chemical Potentials [-15315.87500456 -1000. -21480.14360388]
[0. 0. 0. 0. 0. 0. 0.]
[1.00000000e+00 1.01325000e+05 8.00000000e+02 1.00000000e-01
8.60415585e-01 3.95844148e-02 1.00000000e+00]
Status: 0 b'Algorithm terminated successfully at a locally optimal point, satisfying the convergence tolerances (can be specified by options).'
```python
ProblemSaver.saved_problem[0].jacobian([1, 1e5, 800, 0.1, 8.60415585e-01, 3.95844148e-2, 1.0])[-1]
```
array([ 0.00000000e+00, 0.00000000e+00, -1.25000000e+00, 2.59387889e-09,
7.73068284e+03, 1.93540473e-09, -8.77662387e-10])
```python
mu_b.grad([1, 1e5, 800, 0.1, 8.60415585e-01, 3.95844148e-2, 1.0])
```
[array(0.),
array(0.),
array(-1.25),
array(-3.52429197e-12),
array(7730.68284206),
array(2.79669621e-11),
array(2.33626452e-11)]
```python
```
|
5cf77c30ee29c596e20106c58990818e7c1c955c
| 8,312 |
ipynb
|
Jupyter Notebook
|
SymbolicSolve.ipynb
|
richardotis/pycalphad-sandbox
|
43d8786eee8f279266497e9c5f4630d19c893092
|
[
"MIT"
] | 1 |
2017-03-08T18:21:30.000Z
|
2017-03-08T18:21:30.000Z
|
SymbolicSolve.ipynb
|
richardotis/pycalphad-sandbox
|
43d8786eee8f279266497e9c5f4630d19c893092
|
[
"MIT"
] | null | null | null |
SymbolicSolve.ipynb
|
richardotis/pycalphad-sandbox
|
43d8786eee8f279266497e9c5f4630d19c893092
|
[
"MIT"
] | 1 |
2018-11-03T01:31:57.000Z
|
2018-11-03T01:31:57.000Z
| 26.899676 | 153 | 0.488691 | true | 2,095 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.899121 | 0.79053 | 0.710783 |
__label__yue_Hant
| 0.34867 | 0.489718 |
```python
# plotting libs
import matplotlib as mpl
from matplotlib.backends.backend_pdf import PdfPages
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format
mpl.rcParams['figure.figsize'] = (9, 6)
# standard
from collections import defaultdict
from datetime import datetime
from functools import partial
from multiprocessing import Pool, cpu_count
import os
import pickle
# third party
import covidcast
import numpy as np
import pandas as pd
from statsmodels.tsa.tsatools import lagmat
# first party
os.chdir("../code")
from delay import get_international_delays, get_delay_distribution
from conv1d import *
from weekday import Weekday, dow_adjust_cases
os.chdir("../notebooks")
```
## Delay distributions
Delay distribution found by fitting a gamma distribution to linelist data.
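`get_delay_distribution` is imported from `delay.py`; the sketch below is only an illustration of the idea described above (assuming a gamma fit to the observed delays, discretized to daily probabilities), not the exact implementation:
```python
import numpy as np
from scipy import stats

def gamma_delay_distribution(delays, max_delay=60):
    # fit a gamma distribution to the onset-to-report delays (location fixed at 0)
    shape, loc, scale = stats.gamma.fit(delays, floc=0)
    # discretize to daily probabilities via CDF differences and renormalize
    days = np.arange(0, max_delay + 1)
    probs = np.diff(stats.gamma.cdf(days, shape, loc=loc, scale=scale))
    return probs / probs.sum()
```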
```python
## Data path globals
data_dir = "../data/"
fl_line_data = data_dir + "FL_line_list.csv"
int_line_data = data_dir + "international_line_list.tar"
us_zip_data_path = "~/Documents/covid-19/geographical_scope/02_20_uszips.csv"
```
```python
international_delays = get_international_delays(data_path=int_line_data)
delay_dist = get_delay_distribution(international_delays)
intl_delay_dist = get_delay_distribution(international_delays)
# https://www.arcgis.com/home/item.html?id=37abda537d17458bae6677b8ab75fcb9
florida_df = pd.read_csv(fl_line_data, parse_dates=["Case_", "EventDate", "ChartDate"])
florida_delays = (florida_df.ChartDate - florida_df.EventDate).dt.days
florida_delays = florida_delays[florida_delays.gt(0) & florida_delays.lt(60)]
fl_delay_dist = get_delay_distribution(florida_delays)
plt.plot(intl_delay_dist, label="international")
plt.plot(fl_delay_dist, label="florida")
plt.title("Symptom-onset to report delay")
plt.legend()
plt.show()
```
Pull case data.
```python
start_date = datetime(2020, 3, 15)
end_date = datetime(2020, 9, 15)
cases_df = covidcast.signal(
'indicator-combination',
'confirmed_7dav_incidence_num',
start_date, end_date,
geo_type='county')
cumulative_cases_df = covidcast.signal(
'indicator-combination',
'confirmed_7dav_cumulative_num',
end_date, end_date,
geo_type='county')
thresh_geos = cumulative_cases_df[cumulative_cases_df.value > 500].geo_value
```
```python
# get all florida fips codes
geo_map = pd.read_csv(
us_zip_data_path,
usecols=["fips", "state_id", "population"],
dtype={"state_id": str},
converters={"fips": lambda x: str(x).zfill(5)},
)
florida_geo = geo_map[geo_map.state_id.eq("FL")]
florida_population = florida_geo.groupby("fips").population.sum().reset_index()
florida_fips = florida_geo.fips.unique()
```
```python
cases_df = cases_df.set_index(["geo_value"])
geos = cases_df.index.unique()
geos = geos[geos.isin(florida_fips)] # only keep florida geos
geos = geos[geos.isin(thresh_geos)] # counties with >500 cumulative cases
```
### Models
```python
def regress(X, y):
"""Simple OLS, adds intercept. Returns coefs and fitted vals."""
X = np.hstack((np.ones((y.size, 1)), X.reshape(-1, 1)))
beta = np.linalg.inv(X.T @ X + 1e-4 * np.eye(X.shape[1])) @ X.T @ y
return beta, (X @ beta)
def strawman(x_tilde, delay):
"""
The prediction x_hat at time t is x_tilde (infections)
at time t-1:
x_hat{t} = x_tilde{t-1}
Returns a tuple of the corresponding time indices to the
convolution y_hat = delay(*)x_hat.
"""
n = x_tilde.shape[0]
x_hat = lagmat(x_tilde, 1).reshape(-1,)
x_pred = x_tilde[-1] # prediction for next time point
x_hat = np.append(x_hat, x_pred)
y_hat = Conv1D.freq_conv(x_hat, delay)[:n]
return (np.arange(1, n+1), y_hat)
def ar(x_tilde, delay, p, lam=1):
"""
Fit an AR(p) model on x_tilde.
Returns a tuple of the corresponding time indices to the
convolution y_hat = delay(*)x_hat.
"""
n = x_tilde.shape[0]
# lag matrix
Z = np.hstack((np.ones((n, 1)), lagmat(x_tilde, maxlag=p)))[p:]
Z1 = np.concatenate([[1], np.flip(x_tilde[-p:])])
beta = np.linalg.solve(Z.T @ Z + lam*np.eye(p+1), Z.T @ x_tilde[p:])
x_hat = Z @ beta
x_pred = beta.T @ Z1 # prediction for next time point
x_hat = np.append(x_hat, x_pred)
y_hat = Conv1D.freq_conv(x_hat, delay)[:(Z.shape[0]+1)]
return (np.arange(p, n+1), y_hat)
def sf(x_tilde, train_signals, test_signals, delay, alpha=1):
"""
Sensor fusion.
- [TODO] Fit on more than one location. Currently, H=1.
- [TODO] This does not properly fit sensors sequentially.
Args:
x_tilde: infections vector
train_signals: raw signal matrix to become sensors
test_signals: raw signal vector at nowcasting time
delay: reporting delay vector
alpha: l2 regularization parameter
"""
n, p = train_signals.shape
p += 1 # add intercept
H = np.ones((p, 1))
# fit sensors
Z = np.full((n, p), np.nan)
z1 = np.full((p,), np.nan)
for j in range(p):
if j < (p-1):
beta, fitted_vals = regress(train_signals[:, j], x_tilde)
Z[:,j] = fitted_vals
z1[j] = beta.T @ np.array([1, test_signals[j]])
else:
Z[:, j] = np.ones(n)
z1[j] = 1
X = x_tilde.reshape(-1, 1)
G = Z - X @ H.T
#G = G - G.mean(axis=0)
cov_G = G.T @ G / n
Ri = np.linalg.inv(alpha * cov_G + (1 - alpha) * np.eye(p))
beta = np.linalg.inv(H.T @ Ri @ H) @ H.T @ Ri
x_hat = Z @ beta.T
x_pred = beta @ z1 # prediction for next time point
x_hat = np.append(x_hat, x_pred)
y_hat = Conv1D.freq_conv(x_hat, delay)[:(n+1)]
return (None, y_hat)
```
## Nowcasting Miami-Dade (12086)
```python
geo = "12086"
cases = cases_df.loc[geo].sort_values(by='time_value')
n = cases.value.shape[0]
fb_df = covidcast.signal(
'fb-survey',
'smoothed_hh_cmnty_cli',
start_date, end_date,
geo_type='county',
geo_values=geo)
fb = fb_df[['time_value', 'value']]
plt.plot(fb.time_value, fb.value)
plt.xlabel("Time")
plt.ylabel("Facebook CLI")
plt.title(f"{geo} Facebook CLI")
plt.show()
```
### Deconvolution Approaches
(1) Direct deconvolution in the frequency domain.
(2) Solve
\begin{equation}
\hat{x}=\underset{x}{\operatorname{argmin}} \frac{1}{2}\|y-W x\|_{2}^{2}+\lambda\left\|D^{(k+1)} x\right\|_{1}
\end{equation}
where $D^{(k+1)}$ is the discrete difference operator, and $W$ the convolution matrix.
(3) Solve
\begin{equation}
\hat{x}=\underset{x}{\operatorname{argmin}} \frac{1}{2}\|W^{-1}y-x\|_{2}^{2}+\lambda\left\|D^{(k+1)} x\right\|_{1}.
\end{equation}
where $W^{-1}$ is the deconvolution matrix.
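To make the notation concrete, here is a sketch of the two operators appearing in these objectives: $W$ as the lower-triangular Toeplitz matrix built from the delay distribution, and $D^{(k+1)}$ as the discrete difference operator. This is only for illustration; the actual solvers used below (`admm_deconvolution`, `admm_deconvolution_v2`) come from `conv1d.py`.
```python
import numpy as np

def conv_matrix(delay, n):
    # W[i, j] = delay[i - j]: multiplying by W convolves x with the delay distribution
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(max(0, i - len(delay) + 1), i + 1):
            W[i, j] = delay[i - j]
    return W

def diff_matrix(n, k):
    # apply the first-difference operator (k + 1) times to the identity
    D = np.eye(n)
    for _ in range(k + 1):
        D = np.diff(D, axis=0)
    return D

delay_toy = np.array([0.1, 0.4, 0.3, 0.2])
W = conv_matrix(delay_toy, 10)   # objective (2): 0.5*||y - W x||^2 + lam*||D x||_1
D = diff_matrix(10, k=2)
```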
```python
dates = cases.time_value
direct = Conv1D.freq_deconv(cases.value, delay_dist)[:n]
admm_v1 = admm_deconvolution(cases.value, delay_dist,
lam=20, rho=20, n_iters=500, k=2)
admm_v2 = admm_deconvolution_v2(cases.value, delay_dist,
lam=200, rho=200, n_iters=500, k=2)
plt.plot(dates, cases.value, label="raw", color="black", linewidth=4, alpha=0.2)
plt.plot(dates, direct, label="[direct] symptom-onset")
plt.plot(dates, admm_v1, label="[admm] symptom-onset")
plt.plot(dates, admm_v1, label="[admm_v2] symptom-onset", linestyle="--")
plt.title("Miami-Dade: symptomatic infection curve from raw cases")
plt.ylim(-1000, 1.5 * max(cases.value))
plt.legend()
plt.show()
plt.plot(dates, cases.value, label="raw", color="black", linewidth=4, alpha=0.2)
plt.plot(dates, Conv1D.freq_conv(direct, delay_dist)[:n], label="[direct] cases")
plt.plot(dates, Conv1D.freq_conv(admm_v1, delay_dist)[:n], label="[admm] cases")
plt.plot(dates, Conv1D.freq_conv(admm_v1, delay_dist)[:n],
label="[admm_v2] cases", linestyle="--")
plt.title("Miami-Dade: reconvolved case curve from symptom-onset curve")
plt.legend()
plt.show()
adj_cases = dow_adjust_cases(cases, lam=10)
direct = Conv1D.freq_deconv(adj_cases, delay_dist)[:n]
admm_v1 = admm_deconvolution(adj_cases, delay_dist, lam=20, rho=20, n_iters=500, k=2)
admm_v2 = admm_deconvolution_v2(adj_cases, delay_dist, lam=200, rho=200, n_iters=500, k=2)
plt.plot(dates, adj_cases, label="dow-adj", color="black", linewidth=4, alpha=0.2)
plt.plot(dates, direct, label="[direct] symptom-onset")
plt.plot(dates, admm_v1, label="[admm] symptom-onset")
plt.plot(dates, admm_v1, label="[admm_v2] symptom-onset", linestyle="--")
plt.title("Miami-Dade: symptomatic infection curve from dow-adj cases")
plt.legend()
plt.show()
plt.plot(dates, cases.value, label="raw", color="black", linewidth=4, alpha=0.2)
plt.plot(dates, Conv1D.freq_conv(direct, delay_dist)[:n], label="[direct] cases")
plt.plot(dates, Conv1D.freq_conv(admm_v1, delay_dist)[:n], label="[admm] cases")
plt.plot(dates, Conv1D.freq_conv(admm_v1, delay_dist)[:n],
label="[admm_v2] cases", linestyle="--")
plt.title("Miami-Dade: reconvolved case curve from dow-adjusted symptom-onset curve")
plt.legend()
plt.show()
```
### Miami-Dade linelist
Comparison with International, Florida, and Miami-Dade linelists.
```python
miami_dade = florida_df[florida_df.County.eq("Dade")]
miami_dade = miami_dade[miami_dade.ChartDate >= datetime(2020, 6, 1)]
miami_dade_delays = (miami_dade.ChartDate - miami_dade.EventDate).dt.days
miami_dade_delays = miami_dade_delays[miami_dade_delays.gt(0) & miami_dade_delays.lt(60)]
miami_dade_delay_dist = get_delay_distribution(miami_dade_delays)
plt.plot(intl_delay_dist, label="international")
plt.plot(fl_delay_dist, label="florida")
plt.plot(miami_dade_delay_dist, label="miami-dade")
plt.legend()
plt.show()
```
## Nowcasting
We will compare (semi)retrospective performance of the strawman, AR(7), and SF method.
All three methods will receive identical symptom onset curves. The symptom onset curve is estimated using method (3) of deconvolution (via ADMM), with the three different delay distributions.
```python
intl_preds = defaultdict(lambda: {'time': [],'pred': []})
fl_preds = defaultdict(lambda: {'time': [],'pred': []})
miami_dade_preds = defaultdict(lambda: {'time': [],'pred': []})
t1 = n
t0 = t1-100
for t in range(t0, t1):
train_time = cases.time_value[:t]
test_time = cases.time_value.values[t]
train_cases = cases[cases.time_value.isin(train_time)]
n_train_obs = train_cases.shape[0]
train_cases = dow_adjust_cases(train_cases, lam=10)
for delay_dist, preds in zip([intl_delay_dist, fl_delay_dist, miami_dade_delay_dist],
[intl_preds, fl_preds, miami_dade_preds]):
# get infection curve
## commented out - this does CV and is necessary for good performance but slow
## after observing the output, we rerun with a fixed lambda for speed
# sub_infections = infection_curve::get_infection_curve(train_cases, delay_dist)
sub_infections = np.clip(admm_deconvolution_v2(
train_cases, delay_dist, 3000, 3000, n_iters=500, k=2), 0, np.inf)
# strawman
strawman0 = strawman(sub_infections, delay_dist)
preds["strawman"]["pred"].append(strawman0[1][-1])
preds["strawman"]["time"].append(test_time)
# ar7 model
n_lags = 7
ar0 = ar(sub_infections, delay_dist, n_lags)
preds["ar7"]["pred"].append(ar0[1][-1])
preds["ar7"]["time"].append(test_time)
# sf model (fb_survey, ar7, strawman)
train_fb = fb[fb.time_value.isin(train_time)]
test_fb = fb[fb.time_value.eq(test_time)]
train_ar = pd.DataFrame({'time_value': train_time[(n_lags-1):],
'ar': ar0[1]})
test_ar = ar0[1][-1]
train_straw = pd.DataFrame({'time_value': train_time,
'strawman': strawman0[1]})
test_straw = strawman0[1][-1]
if test_fb.size > 0:
sf_df = (pd.DataFrame({'infections': sub_infections,
'time_value': train_time})
.merge(train_fb, how='left', on='time_value')
.merge(train_ar, how='left', on='time_value')
.merge(train_straw, how='left', on='time_value'))
valid_ind = sf_df.notna().all(axis=1)
sf_df.drop(columns=['time_value', 'infections'], inplace=True)
if valid_ind.sum() > 100:
train_infections = sub_infections[valid_ind]
train_signals = sf_df.to_numpy()[valid_ind, :]
test_signals = np.array((
test_fb.value.values[0],
test_ar,
test_straw)).flatten()
sf0 = sf(train_infections, train_signals,
test_signals, delay_dist, 0.5)
preds["sf"]["pred"].append(sf0[1][-1])
preds["sf"]["time"].append(test_time)
if t % 20 == 0: print(f"Finished {t}/{t1}")
plt.title("Miami-Dade")
plt.plot(cases.time_value, cases.value, label="cases", color="black", linewidth=4, alpha=0.8)
for k in preds.keys():
plt.plot(preds[k]['time'], preds[k]["pred"], label=k, marker=".", alpha=0.5)
plt.axvline(x=cases.time_value.values[t0], color="gray", linestyle=":")
plt.xlim(datetime(2020, 6, 1), end_date)
plt.legend()
plt.xticks(rotation=90)
plt.show()
```
```python
for method in ["strawman", "ar7", "sf"]:
plt.plot(cases.time_value, cases.value, label="cases", color="black", linewidth=3, alpha=0.8)
for name, preds in zip(["international", "florida", "miami-dade"],
[intl_preds, fl_preds, miami_dade_preds]):
plt.plot(preds[method]['time'],
preds[method]["pred"],
label=name, marker=".", alpha=0.8)
plt.axvline(x=cases.time_value.values[t0], color="gray", linestyle=":")
plt.legend()
plt.title(f"Miami-Dade: {method}")
plt.xticks(rotation=90)
plt.show()
cases_adj = dow_adjust_cases(cases, lam=10)
cases["adj_value"] = cases_adj
for name, preds in zip(["international", "florida", "miami-dade"],
[intl_preds, fl_preds, miami_dade_preds]):
preds_df = pd.DataFrame({"time_value": preds[method]['time'], 'preds': preds[method]["pred"]})
tmp = preds_df.merge(cases, how="left", on="time_value")
plt.hist(np.abs(tmp.adj_value - tmp.preds),
label=name, alpha=0.6, density=True, bins=20)
plt.legend()
plt.title(f"Miami-Dade")
plt.xlabel("nowcast one-ahead absolute error")
plt.ylabel("density")
plt.axhline(0)
plt.show()
```
```python
## parallelized version
# def batch_nowcast(geo, y, delay, ts, t0, t1, model):
# """
# Make nowcasts over time range [t0, t1).
# Args:
# y: ordered array of cases
# delay: ordered array of delay probabilities
# ts: ordered array of complete timepoints corresponding to y
# t0: index of starting timepoint for prediction
# t1: index of ending timepoint for prediction
# model: fitting function
# """
# #infection_curve = InfectionCurve(delay).get_infection_curve
# res = {"pred": [], "time": []}
# for t in range(t0, t1):
# train_time = ts[:t]
# test_time = ts[t]
# train_y = y[:t]
# # get infection curve (x_tilde)
# x_tilde = np.clip(admm_deconvolution(train_y, delay, 50, 50, n_iters=300, k=2), 0, np.inf)
# # plt.plot(train_y)
# # plt.plot(x_tilde)
# # plt.show()
# #x_tilde = infection_curve(train_y)
# # fit model
# _, out_y = model(x_tilde, delay)
# res["pred"].append(out_y[-1])
# res["time"].append(test_time)
# print(f"{geo}, {res['pred'][-1]:.3f}")
# return {geo: res}
# ar7 = partial(ar, p=7)
# for method, method_str in zip([strawman, ar7], ["strawman", "ar7"]):
# n_cpu = min(5, cpu_count())
# pool = Pool(n_cpu)
# pool_results = []
# for geo in sample_geos:
# cases = cases_df.loc[geo].sort_values(by='time_value')
# dates = cases.time_value.values
# cases = dow_adjust(cases)
# n = cases.shape[0]
# pool_results.append(
# pool.apply_async(batch_nowcast,
# args=(geo,
# cases,
# delay_dist,
# dates,
# n-50, n,
# method,)
# )
# )
# pool_results = [proc.get() for proc in pool_results]
# pickle.dump(pool_results, open(f"{method_str}_fl.p", "wb"))
# print(f"finished {method_str}")
# # load data
# def unpack(fn):
# fp = pickle.load(open(fn, "rb"))
# out = {}
# for arr in fp:
# out[list(arr.keys())[0]] = arr[list(arr.keys())[0]]
# return out
# strawman_res = unpack("strawman_fl.p")
# ar7_res = unpack("ar7_fl.p")
# out_pdf = f"comp_fl.pdf"
# pdf_pages = PdfPages(out_pdf)
# n_plot = len(pool_results)
# n_row = 4
# n_col = 4
# n_plots_per_page = n_row*n_col
# fig, axs = None, None
# j = 0
# for i, geo in enumerate(sample_geos):
# cases = cases_df.loc[geo].sort_values(by='time_value')
# if i % n_plots_per_page == 0:
# fig, axs = plt.subplots(n_row, n_col, figsize=(12, 12), sharex=True)
# axs = axs.ravel()
# j = 0
# axs[j].plot(cases.time_value.values, cases.value.values,
# color="black", label="cases", linewidth=3, alpha=0.8)
# axs[j].plot(strawman_res[geo]["time"], strawman_res[geo]["pred"], label="strawman", marker=".")
# axs[j].plot(ar7_res[geo]["time"], ar7_res[geo]["pred"], label="ar7", marker=".")
# axs[j].legend(fontsize=8)
# axs[j].set_title(geo)
# axs[j].tick_params(axis='both', which='major', labelsize=5, labelrotation=90)
# # close the page if needed
# if (i + 1) % n_plots_per_page == 0 or (i + 1) == n_plot:
# plt.tight_layout()
# pdf_pages.savefig(fig)
# plt.close()
# j += 1
# pdf_pages.close()
# print(f"Saved to {out_pdf}")
# plt.plot(cases.time_value, cases.value, label="cases", color="black", linewidth=4, alpha=0.8)
# for k in preds.keys():
# plt.plot(preds[k]['time'], preds[k]["pred"], label=k, marker=".", alpha=0.5)
# plt.axvline(x=cases.time_value.values[t0], color="gray", linestyle=":")
# plt.legend()
# plt.xticks(rotation=90)
# plt.show()
```
```python
```
|
0bc143fd74a214577b8f0848c1c7cae2c8e58ad8
| 585,386 |
ipynb
|
Jupyter Notebook
|
case_deconv/notebooks/deconv.ipynb
|
dfarrow0/covidcast-nowcast
|
8d9dfc56c643c4f47b72a58dc3e8811ddeb1a6c8
|
[
"MIT"
] | null | null | null |
case_deconv/notebooks/deconv.ipynb
|
dfarrow0/covidcast-nowcast
|
8d9dfc56c643c4f47b72a58dc3e8811ddeb1a6c8
|
[
"MIT"
] | null | null | null |
case_deconv/notebooks/deconv.ipynb
|
dfarrow0/covidcast-nowcast
|
8d9dfc56c643c4f47b72a58dc3e8811ddeb1a6c8
|
[
"MIT"
] | null | null | null | 634.221018 | 84,696 | 0.941239 | true | 5,229 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.7773 | 0.675765 | 0.525272 |
__label__eng_Latn
| 0.378471 | 0.058712 |
```python
from sympy import *
```
```python
import matplotlib.pyplot
matplotlib.pyplot.rcParams['figure.figsize'] = 5, 5
```
```python
h, l = symbols("h l")
```
```python
# Average elementary hue angles
R = 0
Y = 67
G = 140
C = 180
B = 239
M = 303
k_r = (0.299*0.2126)**0.5
k_g = (0.587*0.7152)**0.5
k_b = (0.114*0.0722)**0.5
# Delta hue
d_RY = Y - R
d_YG = G - Y
d_GC = C - G
d_CB = B - C
d_BM = M - B
d_MR = 360 - M
# Luminosity
R_L = k_r
Y_L = k_r + k_g - 0.2
G_L = k_g - 0.1
C_L = k_g + k_b - 0.1
B_L = k_b
M_L = k_b + k_r
# Slope
s_RY = (Y_L - R_L)/d_RY
s_YG = (G_L - Y_L)/d_YG
s_GC = (C_L - G_L)/d_GC
s_CB = (B_L - C_L)/d_CB
s_BM = (M_L - B_L)/d_BM
s_MR = (R_L - M_L)/d_MR
plot(
(-(-R*s_RY + R_L + h*s_RY - 0.5), (h, R, Y)),
(-(-Y*s_YG + Y_L + h*s_YG - 0.5), (h, Y, G)),
(-(-G*s_GC + G_L + h*s_GC - 0.5), (h, G, C)),
(-(-C*s_CB + C_L + h*s_CB - 0.5), (h, C, B)),
(-(-B*s_BM + B_L + h*s_BM - 0.5), (h, B, M)),
(-(-M*s_MR + M_L + h*s_MR - 0.5), (h, M, 360)),
(0.5, (h, 0, 360)),
axis_center=(0, 0),
);
```
```python
[R_L, Y_L, G_L, C_L, B_L, M_L]
```
[0.25212576226954675,
0.7000627962473862,
0.5479370339778395,
0.6386607905138207,
0.09072375653598125,
0.342849518805528]
```python
R, Y, G, C, B, M, d_RY, d_YG, d_GC, d_CB, d_BM, d_MR, R_L, Y_L, G_L, C_L, B_L, M_L, s_RY, s_YG, s_GC, s_CB, s_BM, s_MR = \
symbols("R Y G C B M d_RY d_YG d_GC d_CB d_BM d_MR R_L Y_L G_L C_L B_L M_L s_RY s_YG s_GC s_CB s_BM s_MR")
```
```python
[
-(-R*s_RY + R_L + h*s_RY - 0.5),
-(-Y*s_YG + Y_L + h*s_YG - 0.5),
-(-G*s_GC + G_L + h*s_GC - 0.5),
-(-C*s_CB + C_L + h*s_CB - 0.5),
-(-B*s_BM + B_L + h*s_BM - 0.5),
-(-M*s_MR + M_L + h*s_MR - 0.5),
]
```
[R*s_RY - R_L - h*s_RY + 0.5,
Y*s_YG - Y_L - h*s_YG + 0.5,
G*s_GC - G_L - h*s_GC + 0.5,
C*s_CB - C_L - h*s_CB + 0.5,
B*s_BM - B_L - h*s_BM + 0.5,
M*s_MR - M_L - h*s_MR + 0.5]
```python
plot((x, (x, 0, 0.5)),
(1 - x, (x, 0.5, 1)),
xlim=(-2, 2), ylim=(-2, 2),
     axis_center=(0, 0));
```
```python
x, y = symbols("x y")
```
```python
plot(-(2*x - 1)**2/2 + 1/2, xlim=(-2, 2), ylim=(-2, 2), axis_center=(0, 0));
```
```python
(-(2*x - 1)**2 + 1)/2
```
-(2*x - 1)**2/2 + 1/2
```python
plot(2*x**2,
2*(x - 1)**2,
     xlim=(-2, 2), ylim=(-2, 2), axis_center=(0, 0));
```
```python
```
|
a362cb68b0aa7c9296d265f30b482ae93c1667b6
| 63,332 |
ipynb
|
Jupyter Notebook
|
formulas.ipynb
|
AlanCristhian/colorwheel
|
a7ca7c558f5eacafbb1687057eb238c990f9cbe1
|
[
"MIT"
] | 1 |
2022-02-02T16:08:19.000Z
|
2022-02-02T16:08:19.000Z
|
formulas.ipynb
|
AlanCristhian/colorwheel
|
a7ca7c558f5eacafbb1687057eb238c990f9cbe1
|
[
"MIT"
] | null | null | null |
formulas.ipynb
|
AlanCristhian/colorwheel
|
a7ca7c558f5eacafbb1687057eb238c990f9cbe1
|
[
"MIT"
] | null | null | null | 220.66899 | 18,912 | 0.918351 | true | 1,186 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.928409 | 0.805632 | 0.747956 |
__label__yue_Hant
| 0.31213 | 0.576084 |
# Jacobi Transformation of a Symmetric Matrix
### Christina Lee
### Category: Numerics
#### based on Numerical Recipes in C++, Sec 11.1
So you want to diagonalize a matrix, do you?
Well, if you have a tiny symmetric matrix, you REALLY want to write up the algorithm by hand, and don't want to spend much time trying to understand the algorithm, then you have come to the right place.
Otherwise, use LAPACK/BLAS to call a highly optimized routine that can work extremely quickly on large matrices. Julia has those libraries built in already. Even if you do call those routines, you can make them work better by understanding what's going on underneath the hood, which is why we are going through this now.
Start with a base Rotation Matrix of the Form
\begin{equation}
P_{pq} =
\begin{pmatrix}
1& & & && && 0\\
& \ddots &&&& & \\
&& c & \cdots & s && \\
&&\vdots& 1 & \vdots &&\\
&& -s & \cdots & c && \\
&&&&& \ddots & \\
0&&& & && 1
\end{pmatrix}
\end{equation}
From our starting arbitrary symmetric A,
\begin{equation}
A^{T} = A
\end{equation}
we will run a series of transformations,
\begin{equation}
A^{\prime}= P^{T}_{pq} \cdot A \cdot P_{pq}
\end{equation}
where each iteration brings A closer to diagonal form. Thus, in implementing our algorithm, we need to determine two things:
<ul>
<li> The values of c and s
<li> The pattern of sweeping p and q
</ul>
And in the end, we will need to determine whether this actually converges, and if it has any sort of efficiency.
So let's expand one transformation and see if we can solve for $c$ and $s$.
\begin{align}
a^{\prime}_{rp} & = c a_{rp} - s a_{rq} \\
a^{\prime}_{rq} & = c a_{rq} + s a_{rp} \\
a^{\prime}_{pp} & = c^2 a_{pp} + s^2 a_{qq} -2 sc a_{pq} \\
a^{\prime}_{qq} & = s^2 a_{pp} + c^2 a_{qq} + 2 sc a_{pq} \\
a^{\prime}_{pq} & = \left( c^2-s^2 \right) a_{pq} + sc \left(a_{pp} - a_{qq} \right)
\end{align}
## Determining $s$ and $c$
Given we specifically want $a^{\prime}_{pq}$ to be zero, we re-arrange the last equation,
\begin{equation}
\frac{c^2-s^2}{2 sc} = \frac{a_{qq}-a_{pp}}{2 a_{pq}} = \theta
\end{equation}
At first glance, this equation might not look any easier to solve for $s$ or $c$; nor at second glance. We define a new parameter $t = s/c$, which turns the equation into
\begin{equation}
\frac{1-t^2}{2 t} = \theta \;\;\;\; \implies \;\;\; t^2 + 2 \theta t - 1 = 0,
\end{equation}
now quite easily solvable by our friendly quadratic formula. The book does, however, recommend using the form that pulls out the smaller root,
\begin{equation}
t=\frac{\text{sgn}(\theta)}{|\theta| + \sqrt{\theta^2 + 1} }.
\end{equation}
Then reverse solve back to
\begin{align}
c&=\frac{1}{\sqrt{t^2+1}}\\
s&=tc
\end{align}
Though we could use the expressions above, simplifying them analytically with our new expressions for $c$ and $s$ reduces computational load and round-off error. The new expressions are
\begin{align}
a^{\prime}_{pq} & = 0\\
a^{\prime}_{qq} & = a_{qq} + t a_{qp} \\
a^{\prime}_{pp} &= a_{pp} - t a_{pq} \\
a^{\prime}_{rp} &= a_{rp} - s \left( a_{rq} +\tau a_{rp} \right) \\
a^{\prime}_{rq} &= a_{rq} + s \left( a_{rp} -\tau a_{rq} \right)
\end{align}
with the new variable
\begin{equation}
\tau = \frac{s}{1+c}
\end{equation}
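As a quick sanity check before writing the full routine, the following throwaway snippet (with arbitrarily chosen values) confirms that these formulas really do zero out $a^{\prime}_{pq}$:
```julia
# Arbitrary 2x2 symmetric block, just to check the formulas above
app, aqq, apq = 2.0, 1.0, 0.5
θ = (aqq - app)/(2*apq)
t = sign(θ)/(abs(θ) + sqrt(θ^2 + 1))
c = 1/sqrt(t^2 + 1)
s = t*c
τ = s/(1 + c)
# The rotated off-diagonal element should vanish (up to round-off)
apq_new = (c^2 - s^2)*apq + s*c*(app - aqq)
println(apq_new)   # ≈ 0.0
```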
## Convergence
The sum of the squares of the off-diagonal elements (chosen from either the upper or lower triangle, arbitrarily) measures how close the matrix is to diagonal form, so we can monitor it to decide when to stop sweeping:
\begin{equation}
S=\sum\limits_{r < s} |a_{rs}|^2
\end{equation}
## Eigenvectors
By forming a product of every rotation matrix, we also come to approximate the matrix $V$ where
\begin{equation}
D = V^{T} \cdot A \cdot V
\end{equation}
and $D$ is the diagonal form of $A$. $V$ is computed through iterative updates
\begin{align}
V^{\prime} & = V \cdot P_i \\
v^{\prime}_{rs} &= v_{rs} \\
v^{\prime}_{rp} &= c v_{rp} - s v_{rq} \\
v^{\prime}_{rq} &= s v_{rp} + c v_{rq}
\end{align}
### Enough with the talking! LET'S COMPUTE STUFF
```julia
using LinearAlgebra
```
```julia
# First, Lets make our nice, helpful functions
## A function to look at the convergence
function convergence(A::Array)
num=0.0
l=size(A)[1]
for ii in 1:(l-1)
for jj in (ii+1):l ## just looking at the lower triangle
num+=A[ii,jj]^2
#println(ii,' ',jj,' ',num,' ',A[ii,jj])
end
end
return num
end
```
convergence (generic function with 1 method)
```julia
# This makes a matrix easier to look at when its filled
# with 1.043848974e-12 everywhere
function roundmatrix(A::Array,rtol::Real)
Ap=copy(A)
for ii in 1:length(A)
if abs(Ap[ii])<rtol
Ap[ii]=0
end
end
return Ap;
end
```
roundmatrix (generic function with 1 method)
```julia
## Here we create a random symmetric matrix
function makeA(n)
A=randn(n,n);
for ii in 1:n
A[ii,1:ii]=transpose(A[1:ii,ii])
end
V=Matrix{Float64}(I,n,n) #initializing the orthogonal transformation
return A,copy(A),V
end
## One A returned will be stored to compare initial and final
```
makeA (generic function with 1 method)
```julia
#Now on to the Rotations!
# We don't always want to compute the eigenvectors, so those are in the
# optional entries slot.
# Both tell the function to compute the vectors with computeV=true
# and input the V=V after the semicolon.
function Rotate(A::Array,p::Int,q::Int; computeV=false, V::Array=Matrix{Float64}(I,1,1) )
θ=(A[q,q]-A[p,p])/(2*A[p,q]);
t=sign(θ)/(abs(θ)+sqrt(θ^2+1));
c=1/sqrt(t^2+1)
s=t*c
τ=s/(1+c)
l=size(A)[1]
Ap=copy(A[:,p])
Aq=copy(A[:,q])
for r in 1:l
A[r,p]=Ap[r]-s*(Aq[r]+τ*Ap[r])
A[r,q]=Aq[r]+s*(Ap[r]-τ*Aq[r])
A[p,r]=A[r,p]
A[q,r]=A[r,q]
end
A[p,q]=0
A[q,p]=0
A[p,p]=Ap[p]-t*Aq[p]
A[q,q]=Aq[q]+t*Aq[p]
if computeV==true
Vp=copy(V[:,p])
Vq=copy(V[:,q])
for r in 1:l
V[r,p]=c*Vp[r]-s*Vq[r]
V[r,q]=s*Vp[r]+c*Vq[r]
end
return A,V
else
return A;
end
end
```
Rotate (generic function with 1 method)
```julia
# This function performs one sweep
function Sweep(A;compV=false,V=Matrix{Float64}(I,1,1))
n=size(A)[1]
for ii in 2:n
for jj in 1:(ii-1) ## Just over one triangle
if compV==false
A=Rotate(A,ii,jj)
else
A,V=Rotate(A,ii,jj;computeV=true,V=V);
end
end
end
if compV==false
return A
else
return A,V
end
end
```
Sweep (generic function with 1 method)
```julia
A,A0,V=makeA(5);
```
```julia
## keep evaluating for a couple iterations
## watch how it changes
A,V=Sweep(A;compV=true,V=V);
display(roundmatrix(A,1e-10))
display(A)
display(V)
display(convergence(A))
```
5×5 Array{Float64,2}:
2.75547 0.0 0.0 0.0 0.0
0.0 -1.0683 0.0 0.0 0.0
0.0 0.0 1.20611 0.0 0.0
0.0 0.0 0.0 0.734884 0.0
0.0 0.0 0.0 0.0 -1.48581
5×5 Array{Float64,2}:
2.75547 1.03898e-31 6.65473e-46 -1.20864e-52 -4.52913e-60
1.03898e-31 -1.0683 4.25018e-46 -2.9053e-69 1.37672e-80
6.65473e-46 4.25018e-46 1.20611 1.91399e-75 5.09273e-116
-1.20864e-52 -2.9053e-69 1.91399e-75 0.734884 0.0
-4.52913e-60 1.37672e-80 5.09273e-116 0.0 -1.48581
5×5 Array{Float64,2}:
0.702867 0.340672 -0.613045 0.0266825 -0.115695
-0.0750998 0.373435 0.239379 0.773343 -0.446705
0.597584 -0.208098 0.484961 0.308218 0.519041
-0.0642891 -0.754388 -0.459432 0.458861 -0.0716542
-0.37296 0.363431 -0.347289 0.309317 0.715913
1.0794844600380612e-62
```julia
## Compare the optimized LAPACK routine to your results
eigen(A0)
```
Eigen{Float64,Float64,Array{Float64,2},Array{Float64,1}}
eigenvalues:
5-element Array{Float64,1}:
-1.4858101513857376
-1.068298321190819
0.7348837698415409
1.206106933351782
2.755472216052164
eigenvectors:
5×5 Array{Float64,2}:
-0.115695 -0.340672 -0.0266825 0.613045 0.702867
-0.446705 -0.373435 -0.773343 -0.239379 -0.0750998
0.519041 0.208098 -0.308218 -0.484961 0.597584
-0.0716542 0.754388 -0.458861 0.459432 -0.0642891
0.715913 -0.363431 -0.309317 0.347289 -0.37296
```julia
## A good check to make sure V is an orthonormal transformation
roundmatrix(V*A*transpose(V)-A0,1e-12)
```
5×5 Array{Float64,2}:
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
```julia
# How long does it take to make a Sweep?
# How much memory will the computation take?
# This is dependent on how large the matrix is
A,A0,V=makeA(10);
@time Sweep(A);
A,A0,V=makeA(20);
@time Sweep(A);
A,A0,V=makeA(100);
@time Sweep(A);
```
0.000016 seconds (230 allocations: 32.594 KiB)
0.000069 seconds (955 allocations: 196.188 KiB)
0.008129 seconds (24.75 k allocations: 17.372 MiB, 27.48% gc time)
In addition to time per sweep, we need to know how many sweeps we need to run. So again we run it on a 10x10, 20x20, and 100x100. The efficiency of the algorithm would get a lot worse if we have to sweep the 100x100 a bunch of times.
```julia
A10,Ap10,V=makeA(10);
A20,Ap20,V=makeA(20);
A100,Ap100,V=makeA(100);
nsweep=collect(1:7);
conv10=zeros(7)
conv20=zeros(7)
conv100=zeros(7)
for i in nsweep
A10=Sweep(A10)
A20=Sweep(A20)
A100=Sweep(A100)
conv10[i]=convergence(A10)
conv20[i]=convergence(A20)
conv100[i]=convergence(A100)
end
[nsweep conv10/10 conv20/20 conv100/100]
```
7×4 Array{Float64,2}:
1.0 1.74923 2.64638 14.7488
2.0 0.0945499 0.422473 2.80609
3.0 0.000314227 0.0162891 0.399226
4.0 6.31792e-10 1.09268e-5 0.0356924
5.0 3.62048e-22 1.18607e-11 0.000598666
6.0 9.38425e-48 7.94096e-26 3.28477e-7
7.0 1.14895e-112 1.23362e-55 6.11775e-13
Well, we've now seen one form of exact diagonalization that works, but doesn't scale very well even to 100x100 matrices. So stay tuned for the Householder method, hopefully coming up soon.
Until then, happy computing :)
```julia
```
|
841f0a0ba155bc37cae8b5477c2e5a92f4897258
| 17,160 |
ipynb
|
Jupyter Notebook
|
Numerics_Prog/Jacobi-Transformation.ipynb
|
IanHawke/M4
|
2d841d4eb38f3d09891ed3c84e49858d30f2d4d4
|
[
"MIT"
] | null | null | null |
Numerics_Prog/Jacobi-Transformation.ipynb
|
IanHawke/M4
|
2d841d4eb38f3d09891ed3c84e49858d30f2d4d4
|
[
"MIT"
] | null | null | null |
Numerics_Prog/Jacobi-Transformation.ipynb
|
IanHawke/M4
|
2d841d4eb38f3d09891ed3c84e49858d30f2d4d4
|
[
"MIT"
] | null | null | null | 29.586207 | 328 | 0.487587 | true | 3,915 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.91118 | 0.849971 | 0.774476 |
__label__eng_Latn
| 0.846798 | 0.637701 |
# __Fundamentos de programación__
<strong>Hecho por:</strong> Juan David Argüello Plata
## __1. Variables__
Una variable es el <u>nombre</u> con el que se identifica información de interés.
```
nom_variable = contenido
```
El contenido de una variable puede cambiar de naturaleza; por eso se dice que Python es un lenguaje dinámico.
### __1.1. Naturaleza de las variables__
| Naturaleza | Ejemplo |
|----------|---|
| Numérico | `x = 5` |
| Textual | `text = 'Esta es una frase'` |
| Lista | `lista = [0,1,2,"texto"]` |
| Tupla | `tupla = (0,1,2,"texto")` |
| Diccionario | `dic = {"num":5, "text": "hola"}` |
### __1.2. Variable numérica__
La forma en como se define una variable numérica y el tipo de operaciones básicas que se pueden emplear con ellas se muestra a continuación.
```python
#Declarar una variable numérica es igual que en el álgebra...
x = 1
print(x)
```
```python
x = 5
w = 10
z = 20
print("x = ", x, ", w = ", w, ", z = ", z) #Podemos ser más específicos a la hora de imprimir información
```
También se pueden hacer operaciones matemáticas, pero _cuidado_: es importante escribir bien las ecuaciones.
Si se quisiera resolver:
$$
\begin{equation}
y = \frac{x}{w \, z}
\end{equation}
$$
Se debe escribir el algoritmo así:
```python
y = x/(w*z)
print(y)
```
Porque si se escribe y ejecuta así:
```python
y = x/w*z
print(y)
```
Se estaría realmente resolviendo:
$$
\begin{equation}
y = \frac{x}{w} z
\end{equation}
$$
<h1><strong>Ejercicio:</strong></h1>
Resuelve la siguiente ecuación:
$$
\begin{equation}
y = \frac{m \, n}{m ^{2}} \frac{n +1}{ \left(n^{-2} m \right) ^{3}}
\end{equation}
$$
Dónde:
* $n = 2$
* $m = 10$
```python
```
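Una posible solución de referencia (solo como guía; intenta primero resolverlo por tu cuenta):
```python
n = 2
m = 10
y = (m*n)/(m**2) * (n + 1)/((n**(-2) * m)**3)
print(y)  # 0.0384
```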
### __1.2. Variable de texto__
A continuación, se puede observar la naturaleza de las variables textuales.
```python
t = "Esta es una oración" #De igual manera que la variable numérica.
print(t)
```
```python
#Es posible adicionar texto
t2 = ", ¿o no?"
frase_completa = t+t2
print(frase_completa)
```
```python
#Podemos también acceder a las letras en un texto
print(frase_completa[0])
```
```python
#Y a fragmentos de una oración
print(frase_completa[2:])
```
### __1.3. Listas__
Variables _dinámicas_ con contenido de cualquier naturaleza.
```python
#Ejemplo de lista
l = ['a','b','c', [0,1]]
print(l)
```
```python
#¿Cómo accedemos a la información?
print(l[0]) #Recuerda: el contenido de la lista empieza desde 0, 1, 2, ...
```
```python
#Podemos redefinir el contenido de la siguiente manera:
l[0] = 'z'
print(l) #De esta manera, la lista cambia su valor
```
```python
print(l[3][0]) #También podemos leer la información de una lista dentro de otra lista
```
### __1.4. Tuplas__
Variables _estáticas_ con contenido de cualquier naturaleza.
```python
t = ('a',0,20,'2', ('Hola', 'Adiós')) #Similar a la lista
print(t)
```
```python
#También podemos acceder a su contenido... y jugar con él
print('¿' + t[4][0] + '?, ' + t[4][1])
```
```python
#Pero si lo intentamos cambiar...
t[0] = 1
```
### __1.5. Diccionarios__
Tipo de variable usada en programación web. Facilita la lectura de código al darle _"nombres"_ a su contenido.
```python
#Si vamos al súper mercado
lista_mercado = {
'manzana':2,
'peras':3,
'uvas': 4
}
print(lista_mercado)
```
```python
#Podemos ser aún más específicos...
lista_mercado = {
'Frutas': {
'Manzanas': {'Unidades': 'Un', 'Cant': 2},
'Peras': {'Unidades': 'Un', 'Cant': 1},
'Uvas': {'Unidades': 'Lb', 'Cant': 4}
}
}
print(lista_mercado)
```
```python
#Se accede a la información de la siguiente manera:
print(lista_mercado['Frutas']['Manzanas'])
```
|
dc329134b24703d9dfeea065e5895506c183c7d6
| 9,697 |
ipynb
|
Jupyter Notebook
|
Python/Colab/VariablesPython.ipynb
|
judrodriguezgo/DesarrolloWeb
|
a020b1eb734e243114982cde9edfc3c25d60047a
|
[
"MIT"
] | 1 |
2021-10-30T16:54:25.000Z
|
2021-10-30T16:54:25.000Z
|
Python/Colab/VariablesPython.ipynb
|
judrodriguezgo/DesarrolloWeb
|
a020b1eb734e243114982cde9edfc3c25d60047a
|
[
"MIT"
] | null | null | null |
Python/Colab/VariablesPython.ipynb
|
judrodriguezgo/DesarrolloWeb
|
a020b1eb734e243114982cde9edfc3c25d60047a
|
[
"MIT"
] | 3 |
2021-11-23T22:24:15.000Z
|
2021-12-31T23:51:47.000Z
| 24.062035 | 150 | 0.427452 | true | 1,223 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.944995 | 0.868827 | 0.821037 |
__label__spa_Latn
| 0.971906 | 0.745876 |
# Bayesian Linear Regression
```python
import sys
# Add the symgp folder path to the sys.path list
module_path = r'/Users/jaduol/Documents/Uni (original)/Part II/IIB/MEng Project/'
if module_path not in sys.path:
sys.path.append(module_path)
from symgp import SuperMatSymbol, utils, MVG, Variable
from sympy import symbols, ZeroMatrix, Identity
from IPython.display import display, Math, Latex
```
```python
# Define some symbols
D, N, Ns = symbols('D N Ns')
sig_y = symbols('\u03c3_y')
```
## 1. Prior
```python
# Prior
w = Variable('w',D,1)
p_w = MVG([w],mean=ZeroMatrix(D,1),cov=Identity(D))
print("p_w:")
display(Latex(utils.matLatex(p_w)))
```
p_w:
\begin{align*}
p\left(\mathbf{w}\right)&= \mathcal{N}\left(\mathbf{w};\mathbf{m}_{\mathbf{w}},\mathbf{\Sigma}_{\mathbf{w}}\right)\\
\mathbf{m}_{\mathbf{w}} &= \mathbf{0}\\
\mathbf{\Sigma}_{\mathbf{w}} &= \mathbf{I}\\
\end{align*}
## 2. Likelihood
```python
# Likelihood of w given X
X, y = utils.variables('X y',[(D,N), N])
p_y = MVG([y], mean=X.T*w,
cov=sig_y**2*Identity(N),
cond_vars=[w,X])
print("p_y:")
display(Latex(utils.matLatex(p_y)))
```
p_y:
\begin{align*}
p\left(\mathbf{y}|\mathbf{w},\mathbf{X}\right)&= \mathcal{N}\left(\mathbf{y};\mathbf{m}_{\mathbf{y}|\mathbf{w},\mathbf{X}},\mathbf{\Sigma}_{\mathbf{y}|\mathbf{w},\mathbf{X}}\right)\\
\mathbf{m}_{\mathbf{y}|\mathbf{w},\mathbf{X}} &= \mathbf{X}^T \mathbf{w}\\
\mathbf{\Sigma}_{\mathbf{y}|\mathbf{w},\mathbf{X}} &= \sigma_y^{2} \mathbf{I}\\
\end{align*}
## 3. Posterior
```python
# Joint of w and y
p_w_y = p_w*p_y
print("p_w_y:")
display(Latex(utils.matLatex(p_w_y)))
```
cond_vars: {w}
conditional_cond_vars: {w, X}
new_conditioned_vars: [X]
p_w_y:
\begin{align*}
p\left(\mathbf{y},\mathbf{w}|\mathbf{X}\right)&= \mathcal{N}\left(\left[\begin{smallmatrix}\mathbf{y}\\\mathbf{w}\end{smallmatrix}\right];\mathbf{m}_{\mathbf{y},\mathbf{w}|\mathbf{X}},\mathbf{\Sigma}_{\mathbf{y},\mathbf{w}|\mathbf{X}}\right)\\
\mathbf{m}_{\mathbf{y},\mathbf{w}|\mathbf{X}} &= \left[\begin{smallmatrix}\mathbf{0}\\\mathbf{0}\end{smallmatrix}\right]\\
\mathbf{\Sigma}_{\mathbf{y},\mathbf{w}|\mathbf{X}} &= \left[\begin{smallmatrix}\sigma_y^{2} \mathbf{I} + \mathbf{X}^T \mathbf{X}&\mathbf{X}^T\\\mathbf{X}&\mathbf{I}\end{smallmatrix}\right]\\
\end{align*}
```python
# Inference: posterior over w
p_w_post = p_w_y.condition([y])
print("p_w_post:")
display(Latex(utils.matLatex(p_w_post)))
```
p_w_post:
\begin{align*}
p\left(\mathbf{w}|\mathbf{X},\mathbf{y}\right)&= \mathcal{N}\left(\mathbf{w};\mathbf{m}_{\mathbf{w}|\mathbf{X},\mathbf{y}},\mathbf{\Sigma}_{\mathbf{w}|\mathbf{X},\mathbf{y}}\right)\\
\mathbf{m}_{\mathbf{w}|\mathbf{X},\mathbf{y}} &= \mathbf{X} \left(\sigma_y^{2} \mathbf{I} + \mathbf{X}^T \mathbf{X}\right)^{-1} \mathbf{y}\\
\mathbf{\Sigma}_{\mathbf{w}|\mathbf{X},\mathbf{y}} &= \mathbf{I} - \mathbf{X} \left(\sigma_y^{2} \mathbf{I} + \mathbf{X}^T \mathbf{X}\right)^{-1} \mathbf{X}^T\\
\end{align*}
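As a purely numerical sanity check (independent of symgp), note that the posterior mean above is the familiar ridge-regression solution. The following standalone NumPy sketch, with made-up dimensions and random data, verifies that the two equivalent forms agree:
```python
import numpy as np
rng = np.random.default_rng(0)
D_, N_, s_y = 3, 20, 0.5      # underscores avoid clobbering the sympy symbols defined above
X_ = rng.normal(size=(D_, N_))
y_ = rng.normal(size=(N_, 1))
# Posterior mean as derived above: X (sigma_y^2 I + X^T X)^{-1} y
m1 = X_ @ np.linalg.solve(s_y**2*np.eye(N_) + X_.T @ X_, y_)
# Equivalent "ridge" form via the push-through identity: (sigma_y^2 I + X X^T)^{-1} X y
m2 = np.linalg.solve(s_y**2*np.eye(D_) + X_ @ X_.T, X_ @ y_)
print(np.allclose(m1, m2))    # True
```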
## 4. Prediction
```python
#Prediction
# Likelihood of w given Xs
Xs, ys = utils.variables('X_{*} y_{*}',[(D,Ns), Ns])
p_ys = MVG([ys], mean=Xs.T*w,
cov=sig_y**2*Identity(Ns),
cond_vars=[w,Xs])
print("p_ys:")
display(Latex(utils.matLatex(p_ys)))
```
p_ys:
\begin{align*}
p\left(\mathbf{y_{*}}|\mathbf{w},\mathbf{X_{*}}\right)&= \mathcal{N}\left(\mathbf{y_{*}};\mathbf{m}_{\mathbf{y_{*}}|\mathbf{w},\mathbf{X_{*}}},\mathbf{\Sigma}_{\mathbf{y_{*}}|\mathbf{w},\mathbf{X_{*}}}\right)\\
\mathbf{m}_{\mathbf{y_{*}}|\mathbf{w},\mathbf{X_{*}}} &= \mathbf{X_{*}}^T \mathbf{w}\\
\mathbf{\Sigma}_{\mathbf{y_{*}}|\mathbf{w},\mathbf{X_{*}}} &= \sigma_y^{2} \mathbf{I}\\
\end{align*}
```python
# Joint of w and ys
p_w_ys = p_w_post*p_ys
print("p_w_ys:")
display(Latex(utils.matLatex(p_w_ys)))
```
cond_vars: {w}
conditional_cond_vars: {X_{*}, w}
new_conditioned_vars: [X_{*}, X, y]
p_w_ys:
\begin{align*}
p\left(\mathbf{y_{*}},\mathbf{w}|\mathbf{X_{*}},\mathbf{X},\mathbf{y}\right)&= \mathcal{N}\left(\left[\begin{smallmatrix}\mathbf{y_{*}}\\\mathbf{w}\end{smallmatrix}\right];\mathbf{m}_{\mathbf{y_{*}},\mathbf{w}|\mathbf{X_{*}},\mathbf{X},\mathbf{y}},\mathbf{\Sigma}_{\mathbf{y_{*}},\mathbf{w}|\mathbf{X_{*}},\mathbf{X},\mathbf{y}}\right)\\
\mathbf{m}_{\mathbf{y_{*}},\mathbf{w}|\mathbf{X_{*}},\mathbf{X},\mathbf{y}} &= \left[\begin{smallmatrix}\mathbf{X_{*}}^T \mathbf{X} \left(\sigma_y^{2} \mathbf{I} + \mathbf{X}^T \mathbf{X}\right)^{-1} \mathbf{y}\\\mathbf{X} \left(\sigma_y^{2} \mathbf{I} + \mathbf{X}^T \mathbf{X}\right)^{-1} \mathbf{y}\end{smallmatrix}\right]\\
\mathbf{\Sigma}_{\mathbf{y_{*}},\mathbf{w}|\mathbf{X_{*}},\mathbf{X},\mathbf{y}} &= \left[\begin{smallmatrix}\sigma_y^{2} \mathbf{I} + \mathbf{X_{*}}^T \left(\mathbf{I} - \mathbf{X} \left(\sigma_y^{2} \mathbf{I} + \mathbf{X}^T \mathbf{X}\right)^{-1} \mathbf{X}^T\right) \mathbf{X_{*}}&\mathbf{X_{*}}^T \left(\mathbf{I} - \mathbf{X} \left(\sigma_y^{2} \mathbf{I} + \mathbf{X}^T \mathbf{X}\right)^{-1} \mathbf{X}^T\right)\\\left(\mathbf{I} - \mathbf{X} \left(\sigma_y^{2} \mathbf{I} + \mathbf{X}^T \mathbf{X}\right)^{-1} \mathbf{X}^T\right) \mathbf{X_{*}}&\mathbf{I} - \mathbf{X} \left(\sigma_y^{2} \mathbf{I} + \mathbf{X}^T \mathbf{X}\right)^{-1} \mathbf{X}^T\end{smallmatrix}\right]\\
\end{align*}
```python
# Predictive distribution of ys
p_ys_post = p_w_ys.marginalise([w])
print("p_ys_post:")
display(Latex(utils.matLatex(p_ys_post)))
```
self.name: S_{y_{*},y_{*}|X_{*},X,y}
name:
p_ys_post:
\begin{align*}
p\left(\mathbf{y_{*}}|\mathbf{X_{*}},\mathbf{X},\mathbf{y}\right)&= \mathcal{N}\left(\mathbf{y_{*}};\mathbf{m}_{\mathbf{y_{*}}|\mathbf{X_{*}},\mathbf{X},\mathbf{y}},\mathbf{\Sigma}_{\mathbf{y_{*}}|\mathbf{X_{*}},\mathbf{X},\mathbf{y}}\right)\\
\mathbf{m}_{\mathbf{y_{*}}|\mathbf{X_{*}},\mathbf{X},\mathbf{y}} &= \mathbf{X_{*}}^T \mathbf{X} \left(\sigma_y^{2} \mathbf{I} + \mathbf{X}^T \mathbf{X}\right)^{-1} \mathbf{y}\\
\mathbf{\Sigma}_{\mathbf{y_{*}}|\mathbf{X_{*}},\mathbf{X},\mathbf{y}} &= \sigma_y^{2} \mathbf{I} + \mathbf{X_{*}}^T \left(\mathbf{I} - \mathbf{X} \left(\sigma_y^{2} \mathbf{I} + \mathbf{X}^T \mathbf{X}\right)^{-1} \mathbf{X}^T\right) \mathbf{X_{*}}\\
\end{align*}
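To actually use this predictive distribution one simply substitutes concrete arrays into the symbolic result; a minimal NumPy sketch (again with arbitrary test data, not tied to symgp) might look like:
```python
import numpy as np
rng = np.random.default_rng(1)
D_, N_, Ns_, s_y = 3, 20, 5, 0.5
X_ = rng.normal(size=(D_, N_)); y_ = rng.normal(size=N_)
Xs_ = rng.normal(size=(D_, Ns_))
A_ = s_y**2*np.eye(N_) + X_.T @ X_
mean_ys = Xs_.T @ X_ @ np.linalg.solve(A_, y_)
cov_ys = s_y**2*np.eye(Ns_) + Xs_.T @ (np.eye(D_) - X_ @ np.linalg.solve(A_, X_.T)) @ Xs_
print(mean_ys.shape, cov_ys.shape)  # (5,) (5, 5)
```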
```python
```
|
751637a5d61a4ab5283a3a212e3cdbcce2246e6c
| 11,972 |
ipynb
|
Jupyter Notebook
|
symgp/notebooks/BayesianLinearRegression.ipynb
|
jna29/SymGP
|
dd909feb51cb38e6eb70dee7fc3bd430dddf1b78
|
[
"MIT"
] | 2 |
2017-06-07T14:54:07.000Z
|
2021-08-30T20:01:43.000Z
|
symgp/notebooks/BayesianLinearRegression.ipynb
|
jna29/SymGP
|
dd909feb51cb38e6eb70dee7fc3bd430dddf1b78
|
[
"MIT"
] | 1 |
2017-06-07T16:19:54.000Z
|
2017-06-07T20:39:30.000Z
|
symgp/notebooks/BayesianLinearRegression.ipynb
|
jna29/SymGP
|
dd909feb51cb38e6eb70dee7fc3bd430dddf1b78
|
[
"MIT"
] | null | null | null | 29.93 | 759 | 0.487638 | true | 2,730 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.867036 | 0.73412 | 0.636508 |
__label__yue_Hant
| 0.635203 | 0.317152 |
```python
import numpy as np
import pandas as pd
import emcee
import corner
import time
import scipy.optimize as op
from allsn_info import get_at2019dge
from helper.arnett import model_arnett_Ltph
from multiprocessing import Pool
from helper import phys
from helper.mcmcfit import mylinear_fit
from helper.models import model_piro15_bol_recast
import matplotlib
import matplotlib.pyplot as plt
fs = 14
matplotlib.rcParams['font.size']=fs
```
## Method 1: Model Fitting
```python
filename = "./helper/piromodel/2.0/sampler.h5"
reader = emcee.backends.HDFBackend(filename)
samples = reader.get_chain(discard=1000, flat=True)
lgR_sigmas = np.percentile(samples[:,0], (0.13, 2.27, 15.87, 50, 84.13, 97.73, 99.87))
lgM_sigmas = np.percentile(samples[:,1], (0.13, 2.27, 15.87, 50, 84.13, 97.73, 99.87))
t0_sigmas = np.percentile(samples[:,2], (0.13, 2.27, 15.87, 50, 84.13, 97.73, 99.87))
E51_sigmas = np.percentile(samples[:,3], (0.13, 2.27, 15.87, 50, 84.13, 97.73, 99.87))
Eenvs_sigmas = np.percentile(samples[:,4], (0.13, 2.27, 15.87, 50, 84.13, 97.73, 99.87)) * 1e+49
Eenv = Eenvs_sigmas[3]
Renv = 10**lgR_sigmas[3]
Menv = 10**lgM_sigmas[3]
t0 =t0_sigmas[3]
E51 = E51_sigmas[3]
```
```python
data = pd.read_csv('../data/otherSN/Yao2020/bbdata.csv')
t_data = data['phase'].values - t0
L_data = data['Lbb'].values
L_unc_data = data['Lbb_unc'].values
lgL_data = data['lgLbb'].values
lgL_unc_data = data['lgLbb_unc'].values
```
```python
tgrid = np.linspace(0, 30, 100)
Lp15 = model_piro15_bol_recast(tgrid, Renv, Menv, E51, Eenv / 1e+49)
lgLp15 = np.log10(Lp15)
```
```python
result = get_at2019dge()
tb0 = result['tb']
```
```python
tb0 = tb0[tb0['filter'].values=='r']
tb0 = tb0[tb0.instrument!="P60+SEDM"]
tb0 = tb0[(tb0.tmax_of-t0) > max(t_data)]
t_quasi = tb0["tmax_of"].values -t0
Lquasi = tb0["Llambda"].values * tb0['wave'].values
Lquasi_unc = tb0["Llambda_unc"].values * tb0['wave'].values
lgLquasi = np.log10(Lquasi)
lgLquasi_unc = Lquasi_unc / Lquasi / np.log(10)
```
```python
Lp15_data = model_piro15_bol_recast(t_data, Renv, Menv, E51, Eenv / 1e+49)
```
```python
L_data_resi = L_data - Lp15_data
lgL_data_resi = np.log10(L_data_resi)
```
/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:2: RuntimeWarning: invalid value encountered in log10
```python
plt.figure()
plt.plot(t_data, Lp15_data)
plt.plot(t_data, L_data, 'ko')
```
<IPython.core.display.Javascript object>
[<matplotlib.lines.Line2D at 0x14a98b390>]
```python
%matplotlib notebook
```
```python
plt.figure(figsize=(6, 5))
ax = plt.subplot(111)
ax.errorbar(t_quasi, lgLquasi, lgLquasi_unc, fmt='--o', color = "grey", markerfacecolor='none', zorder = 3, markersize=7)
ax.errorbar(t_data, lgL_data_resi, lgL_unc_data, fmt='ok', zorder = 3, markersize=5)
```
<IPython.core.display.Javascript object>
<ErrorbarContainer object of 3 artists>
```python
x1 = t_data
y1 = lgL_data_resi
ey1 = lgL_unc_data
ix1 = (~np.isnan(y1))&(x1 > 2)
x1 = x1[ix1]
y1 = y1[ix1]
ey1 = ey1[ix1]
ix = x1 > 6
x1 = x1[ix]
y1 = y1[ix]
ey1 = ey1[ix]
```
```python
x3 = t_quasi
y3 = lgLquasi
ey3 = lgLquasi_unc
```
```python
x = np.hstack([x1, x3]) + t0
y = np.hstack([y1, y3])
ey = np.hstack([ey1, ey3])
ix = np.argsort(x)
x = x[ix]
y = y[ix]
ey = ey[ix]
```
```python
xyey = np.vstack([x, y, ey])
```
```python
np.savetxt("./helper/Lbb_p15subtracted.txt", xyey)
```
### Photospheric phase Arnett model -- modified
```python
from helper.arnett import main_arnettrun
```
```python
# main_arnettrun()
# This takes some time to run
```
```python
filename = "./helper/arnettmodel/sampler.h5"
reader = emcee.backends.HDFBackend(filename)
```
```python
samples = reader.get_chain(discard=200, flat=True)
lgprobs = reader.get_log_prob(discard=200, flat=True)
print (samples.shape)
print (lgprobs.shape)
```
(600000, 4)
(600000,)
```python
taum_sigmas = np.percentile(samples[:,0], (0.13, 2.27, 15.87, 50, 84.13, 97.73, 99.87))
lgMni_sigmas = np.percentile(samples[:,1], (0.13, 2.27, 15.87, 50, 84.13, 97.73, 99.87))
t0_sigmas = np.percentile(samples[:,2], (0.13, 2.27, 15.87, 50, 84.13, 97.73, 99.87))
tfl_sigmas = np.percentile(samples[:,3], (0.13, 2.27, 15.87, 50, 84.13, 97.73, 99.87))
texp = tfl_sigmas[3]
```
```python
paramsNames= [r"$\tau_{\rm m}$",
'lg' +r'$M_{\rm Ni}$',
r"$t_0$",
r"$t_{\rm fl}$"]
quantiles=[0.1587, 0.5, 0.8413]
```
```python
samples_final = samples[:, :3]
```
```python
corner.corner(samples_final, labels = paramsNames[:3], quantiles = quantiles,
range = [0.99, 0.99, 0.99],
show_titles=True, plot_datapoints=False,
title_kwargs = {"fontsize": fs})
plt.savefig("../paper/figures/corner_arnett_modified_all.pdf")
# plt.close()
```
<IPython.core.display.Javascript object>
```python
taum_ = taum_sigmas[3]
taum_unc_right = taum_sigmas[4]-taum_sigmas[3]
taum_unc_left = taum_sigmas[3]-taum_sigmas[2]
print ("tau_m = %.2f + %.2f - %.2f day"%(taum_, taum_unc_right, taum_unc_left))
```
tau_m = 6.35 + 0.18 - 0.18 day
```python
Mni = 10**lgMni_sigmas[3]
Mni_unc_left = 10**lgMni_sigmas[3] - 10**lgMni_sigmas[2]
Mni_unc_right = 10**lgMni_sigmas[4] - 10**lgMni_sigmas[3]
print ("%.2f (+%.2f) (-%.2f) 1e-2 Msun"%(Mni*100, Mni_unc_right*100, Mni_unc_left*100))
```
1.61 (+0.04) (-0.03) 1e-2 Msun
```python
t0_ = t0_sigmas[3]
t0_unc_right = t0_sigmas[4]-t0_sigmas[3]
t0_unc_left = t0_sigmas[3]-t0_sigmas[2]
print ("t0 = %.2f + %.2f - %.2f day"%(t0_, t0_unc_right, t0_unc_left))
```
t0 = 24.04 + 0.76 - 0.73 day
```python
from helper.arnett import model_arnett_modified
tgrid = np.linspace(0.1, 70, 200)
Lnidecay = model_arnett_modified(tgrid, taum_ = taum_, Mni_ = Mni, t0_ = t0_, texp = 0)
lgLnidecay = np.log10(Lnidecay)
```
```python
plt.figure()
plt.errorbar(x-texp, y, ey, fmt= ".k")
plt.plot(tgrid, lgLnidecay)
```
<IPython.core.display.Javascript object>
[<matplotlib.lines.Line2D at 0x154f5f048>]
Estimate ejecta mass
```python
kappa_opt = 0.07 # relevant for stripped envelope supernova
v_ej = 6000 * 1e+5
Mej_ = (taum_ * 24 * 3600)**2 * 13.8 * phys.c / 2 / kappa_opt * v_ej / phys.sm
Mej_unc_right = ((taum_+taum_unc_right) * 24 * 3600)**2 * 13.8 * phys.c / 2 / kappa_opt * v_ej / phys.sm - Mej_
Mej_unc_left = Mej_ - ((taum_-taum_unc_left) * 24 * 3600)**2 * 13.8 * phys.c / 2 / kappa_opt * v_ej / phys.sm
print ("Mej = %.2f (+%.2f) (-%.2f) Msun"%(Mej_, Mej_unc_right, Mej_unc_left))
```
Mej = 0.27 (+0.02) (-0.01) Msun
```python
Ekin_ = 0.3 * Mej_ * phys.sm * v_ej**2
Ekin_unc_left = 0.3 * Mej_unc_left * phys.sm * v_ej**2
Ekin_unc_right = 0.3 * Mej_unc_right * phys.sm * v_ej**2
print ("Ekin = %.2f (+%.2f) (-%.2f) e+49 erg"%(Ekin_ / 1e+49, Ekin_unc_right / 1e+49, Ekin_unc_left / 1e+49))
```
Ekin = 5.76 (+0.34) (-0.32) e+49 erg
```python
ind_max = np.argsort(Lnidecay)[-1]
```
```python
tpeak = (tgrid[ind_max]) * 86400
Lpeak = Lnidecay[ind_max]
```
```python
print ("tpeak = %.1f day"%(tpeak / 86400))
print ("Lpeak = %.1f e+41 erg/s"%(Lpeak / 1e+41))
```
tpeak = 8.5 day
Lpeak = 5.9 e+41 erg/s
## Method 2: KK19 Equations
An improvement on the Arnett relations.
```python
ts = 8.8*86400
beta = 4/3
L0 = Lpeak * beta**2 * tpeak**2 / 2 / ts**2 / (1 - (1 + beta*tpeak/ts)*np.exp(-beta * tpeak / ts))
epsilon_Ni = 3.9e+10 # erg / g / s
M_Ni = L0 / (epsilon_Ni) / phys.sm
```
```python
print ("M_Ni = %.2f e-2 Msun"%(M_Ni/1e-2))
```
M_Ni = 1.70 e-2 Msun
\begin{align}
x &= \frac{t_{\rm peak}}{t_{\rm d}}\\
x &= 0.11 {\rm ln} (1 + 9 \frac{8.8}{9}x) + 0.36
\end{align}
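The cells below solve this fixed-point equation with a simple grid search; as an optional cross-check, a bracketing root finder gives the same answer:
```python
# Optional cross-check of the grid-search solution below
import numpy as np
import scipy.optimize as op
g = lambda x: 0.11*np.log(1 + 8.8*x) + 0.36 - x
x_root = op.brentq(g, 0.1, 1.0)
print(x_root)  # should agree with x_solved below to within the grid spacing
```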
```python
x = np.linspace(0.1, 1, 100)
y = 0.11 * np.log(1 + 8.8*x) + 0.36
```
```python
plt.figure(figsize=(4,4))
plt.plot(x, x)
plt.plot(x, y)
```
<IPython.core.display.Javascript object>
[<matplotlib.lines.Line2D at 0x15619ecf8>]
```python
ix = np.argsort(abs(y-x))
x_solved = x[ix[0]]
```
```python
td = tpeak / x_solved
```
```python
td / 86400
```
15.382239064173326
```python
kappa = 0.07 # DD19 default
vej = 6000 * 1e+5
Mej_kk = td**2 * vej * phys.c / kappa / phys.sm
print (Mej_kk)
```
0.2282549926628625
```python
0.3 * Mej_kk * phys.sm * vej**2
```
4.901897966453538e+49
```python
```
|
cd5c3ecf97ac7953fd37eabc941f6a16ee5d4f7c
| 1,040,323 |
ipynb
|
Jupyter Notebook
|
playground/.ipynb_checkpoints/radioactivity-checkpoint.ipynb
|
yaoyuhan/AT2019dge
|
759116ede9d7480eb34bfdcc4e3ec1224f7cad5a
|
[
"MIT"
] | 1 |
2021-03-11T18:37:42.000Z
|
2021-03-11T18:37:42.000Z
|
playground/.ipynb_checkpoints/radioactivity-checkpoint.ipynb
|
yaoyuhan/AT2019dge
|
759116ede9d7480eb34bfdcc4e3ec1224f7cad5a
|
[
"MIT"
] | null | null | null |
playground/.ipynb_checkpoints/radioactivity-checkpoint.ipynb
|
yaoyuhan/AT2019dge
|
759116ede9d7480eb34bfdcc4e3ec1224f7cad5a
|
[
"MIT"
] | 1 |
2021-03-11T18:37:48.000Z
|
2021-03-11T18:37:48.000Z
| 221.769985 | 427,756 | 0.878539 | true | 3,278 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.822189 | 0.695958 | 0.572209 |
__label__eng_Latn
| 0.100863 | 0.167764 |
# SymPy를 사용한 함수 미분
## 데이터 분석에서 미분의 필요성
그다지 관련이 없어 보이지만 사실 데이터 분석에도 미분(differentiation)이 필요하다. 데이터 분석의 목표 중 하나는 확률 모형의 모수(parameter)나 상태 변수(state)를 추정(estimation)하는 작업이다. 이러한 작업은 근본적으로 함수의 최소점 혹은 최대점을 찾는 최적화(optimization) 작업이며 미분 혹은 편미분을 사용한 도함수를 필요로 한다. 따라서 함수 미분의 지식은 데이터 분석 및 머신 러닝의 각종 내부 구조를 이해하는데 필수적인다.
다행스러운 점은 데이터 분석자 입장에서 필요한 미분의 수준은 그다지 높지 않다는 점이다. 보통은 선형 다항식이나 지수함수의 편미분 정도의 개념만 알고 있으면 되고 대부분의 경우 최적화 라이브러리를 사용하거나 theano, tensorflow 등의 라이브러리에서 도함수나 미분값을 계산해 주기 때문에 실제로 도함수를 구할 일도 그다지 많지는 않다.
## 함수와 변수
프로그래밍을 익힌 사람에게는 변수(variable)와 함수(function)의 개념이 낯설지 않다. 변수란 실제 값을 대표하는 기호이며 함수는 이러한 변수를 기반으로 만들어진 수식으로 변수값이 어떤 수치로 결정되면 함수 값도 수식에 의해 결정된다.
변수는 보통 $x$, $y$, $z$ 등 알파벳 소문자로 표시하며 함수는 $f(x)$, $g(x,y)$ 와 같이 사용할 입력 변수를 괄호안에 넣어 표시한다. 함수의 결과를 다른 변수에 넣어 다시 사용하는 경우도 있다.
$$ y = f(x) $$
$$ z = g(y) = g(f(x)) $$
파이썬의 함수는 이러한 함수의 개념을 그대로 구현한 것이다.
```python
def f(x):
return 2*x
x = 10
y = f(x)
print(x, y)
```
10 20
역함수(inverse function)는 함수의 입력과 출력을 반대로 한 것이며 다음과 같은 기호로 표시한다.
$$ y = f(x), \;\;\; \rightarrow \;\;\; x = f^{-1}(y) $$
## 예측 문제와 함수
예측(prediction) 문제는 독립 변수, 혹은 feature $x$를 입력으로 하여 원하는 종속 변수 혹은 targer $y$와 가능한한 비슷한 값을 만드는 함수 $f$를 찾는 문제라고 할 수 있다.
$$ y \approx \hat{y} = f(x) $$
## 데이터 분석에서 많이 사용되는 함수들
데이터 분석에서 많이 사용되는 함수의 형태는 다항식(polynomial) 함수, 지수(exponential) 함수, 로그(log) 함수 등이다.
### 다항식 함수
다항식 함수는 상수항 $c_0$, 일차항 $c_1x$, 이차항 $c_2x^2$, $\cdots$ 등의 거듭제곱 항의 선형 조합으로 이루어진 함수이다. 다음은 단변수(uni-variate) 다항식 함수의 전형적인 형태이다.
$$ f(x) = c_0 + c_1 x + c_2 x^2 + \cdots + c_n x^n $$
### 지수 함수와 로그 함수
밑(base)를 오일러 수 $e$로 하는 지수함수는 다음과 같이 표시한다. 이는 $e$라는 숫자를 $x$번 거듭제곱한 것이라 생각하면 된다.
$$ y = e^x $$
또는
$$ y = \exp x $$
지수 함수의 역함수는 자연로그 함수이다.
$$ y = \log x $$
만약 밑이 $e$가 아닌 경우에는 다음과 같이 변형하여 사용한다.
$$ y = a^x = e^{\log a \cdot x} $$
## 함수의 그래프와 기울기
함수의 형상을 직관적으로 파악하기 위해 그래프(graph)를 사용하기도 한다. 파이썬에서는 matplotlib의 라인 플롯을 사용하여 그래프를 만들 수 있다.
다만 matplotlib에서는 구체적인 위치가 있어야지만 플롯을 만들 수 있기 때문에 그래프를 작성할 $x$ 영역을 작은 구간으로 나눈 벡터를 생성하고 이 벡터 값에 대한 함수값을 계산하여 그래프를 작성한다. 구간의 간격이 너무 크면 그래프가 부정확해지고 구간의 간격이 너무 작으면 쓸데없이 세부적인 그림을 그리게 되므로 계산 시간이 증가하고 메모리 등의 리소스가 낭비된다.
```python
x = np.linspace(-0.9, 2.9, 100)
y = x**3 - 3*x**2 + x
plt.plot(x, y);
```
함수의 그래프는 앞에서 그린 것처럼 부드러운 곡선(curve)의 형태로 나타나는 경우가 많다. 이 곡선에 대해 한 점만 공통으로 가지는 접선(tangent)를 그릴 수 있는데 이 접선이 수평선과 이루는 각도를 기울기(slope)라고 한다.
```python
x = np.linspace(-0.9, 2.9, 100)
y = x**3-3*x**2+x
plt.plot(x, y)
plt.plot(0, 0, 'ro'); plt.plot(x, x, 'r:');
plt.plot(1, -1, 'go'); plt.plot(x, (3*1**2-6*1+1)*(x-1)-1, 'g:');
```
## 미분
미분(differenciation)이란 이러한 함수로부터 새로운 함수를 도출하는 변환의 일종이다. 미분을 통해 만들어진 새로운 함수는 원래 함수의 기울기(slope)를 나타낸다. 미분으로 만들어진 함수를 원래 함수의 도함수(derivative)라고 한다. 실제로는 극한과 수렴이라는 복잡한 개념을 사용하여 미분을 정의하지만 최적화(optimization)를 위해서는 단순히 기울기를 뜻한다고만 알아도 충분하다.
도함수는 함수 기호에 뒤에 prime 윗첨자를 붙이거나 함수 기호의 앞에 $\dfrac{d}{dx}$, $\dfrac{\partial}{\partial x}$ 등을 붙여서 표시한다. 분수처럼 표기하기도 하는데 분모의 위치에는 미분하고자 하는 변수가 오고 분자의 위치에는 미분하는 함수 자체의 기호나 혹은 함수 계산의 결과로 얻어지는 변수를 넣는다.
예를 들어 $y = f(x)$라는 함수를 미분하면 다음과 같다.
$$ f'(x) = \dfrac{d}{dx}(f) = \dfrac{df}{dx} = \dfrac{d}{dx}(y) = \dfrac{dy}{dx} $$
## 미분 공식
현실적으로 미분은 다음에 설명할 몇가지 공식(formula)를 조합하여 원래 함수에서 도함수를 도출하는 과정이다. 함수가 복잡해지면 몇 페이지에 달아는 공식집이 필요할 정도이지만 여기에서는 가장 핵심적인 몇가지 공식만을 소개한다. 다양한 미분 공식에 대해 알고 싶다면 다음 웹사이트들을 참조한다.
* https://en.wikipedia.org/wiki/Derivative#Rules_of_computation
* https://en.wikipedia.org/wiki/Differentiation_rules
### 기본 미분 공식
* 상수
$$ \dfrac{d}{dx}(c) = 0 $$
$$ \dfrac{d}{dx}(cf) = c \cdot \dfrac{df}{dx} $$
* 거듭제곱
$$ \dfrac{d}{dx}(x^n) = n x^{n-1} $$
* 로그
$$ \dfrac{d}{dx}(\log x) = \dfrac{1}{x} $$
* 지수
$$ \dfrac{d}{dx}(e^x) = e^x $$
* 선형 조합
$$ \dfrac{d}{dx}\left(c_1 f_1 + c_2 f_2 \right) = c_1 \dfrac{df_1}{dx} + c_2 \dfrac{df_2}{dx}$$
이러한 기본 공식을 사용하여 다음 함수를 미분하면,
$$ y = 1 + 2x + 3x^2 + 4\exp(x) + 5\log(x) $$
답은 다음과 같다.
$$ \dfrac{dy}{dx} = 2 + 6x + 4\exp(x) + \dfrac{5}{x} $$
### 곱셈 법칙
어떤 함수의 형태가 두 개의 함수를 곱한 것과 같을 때는 다음과 같이 각 개별 함수의 도함수를 사용하여 원래의 함수의 도함수를 구한다.
$$ \dfrac{d}{dx}\left( f \cdot g \right) = \dfrac{df}{dx} \cdot g + f \cdot \dfrac{dg}{dx} $$
곱셈 법칙을 사용하면 다음과 같은 함수를 미분하여,
$$ f = x \cdot \exp(x) $$
다음과 같은 도함수를 구한다.
$$ \dfrac{df}{dx} = \exp(x) + x \exp(x) $$
## 연쇄 법칙
연쇄 법칙(chain rule)은 미분하고자 하는 함수가 어떤 두 함수의 nested form 인 경우 적용할 수 있다.
$$ f(x) = h(g(x)) $$
인 경우 도함수는 다음과 같이 구한다.
$$ \dfrac{df}{dx} = \dfrac{df}{dg} \cdot \dfrac{dg}{dx} $$
예를 들어 정규 분포의 확률 밀도 함수는 기본적으로 다음과 같은 형태라고 볼 수 있다.
$$ f = \exp \dfrac{(x-\mu)^2}{\sigma^2} $$
이 함수의 도함수는 다음과 같이 구할 수 있다.
$$ f = exp(z) \;,\;\;\;\; z = \dfrac{y^2}{\sigma^2} \;,\;\;\;\; y = x-\mu $$
$$ \dfrac{df}{dx} = \dfrac{df}{dz} \cdot \dfrac{dz}{dy} \cdot \dfrac{dy}{dx} $$
$$ \dfrac{df}{dz} = \exp(z) = \exp \dfrac{(x-\mu)^2}{\sigma^2} $$
$$ \dfrac{dz}{dy} = \dfrac{2y}{\sigma^2} = \dfrac{2(x-\mu)}{\sigma^2} $$
$$ \dfrac{dy}{dx} = 1 $$
$$ \dfrac{df}{dx} = \dfrac{2(x-\mu)}{\sigma^2} \exp \dfrac{(x-\mu)^2}{\sigma^2}$$
## 로그함수의 미분
로그 함수에 연쇄 법칙을 적용하면 다음과 같은 규칙을 얻을 수 있다.
$$ \dfrac{d}{dx} \log f(x) = \dfrac{f'(x)}{f(x)} $$
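참고로, 이 규칙은 아래와 같이 SymPy로 직접 확인해 볼 수 있다 (참고용 예시):
```python
import sympy
x = sympy.symbols('x')
f = x**2 + 1                       # 임의의 예시 함수
lhs = sympy.diff(sympy.log(f), x)  # d/dx log f(x)
rhs = sympy.diff(f, x) / f         # f'(x) / f(x)
sympy.simplify(lhs - rhs)          # 0 이면 규칙이 성립한다
```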
## 편미분
만약 함수가 두 개 이상의 독립변수를 가지는 다변수 함수인 경우에도 미분 즉, 기울기는 하나의 변수에 대해서만 구할 수 있다. 이를 편미분(partial differentiation)이라고 한다. 따라서 편미분의 결과로 하나의 함수에 대해 여러개의 도함수가 나올 수 있다.
다음은 편미분의 간단한 예이다.
$$ f(x,y) = x^2 + xy + y^2 $$
$$ f_x(x,y) = \dfrac{\partial f}{\partial x} = 2x + y $$
$$ f_y(x,y) = \dfrac{\partial f}{\partial y} = x + 2y $$
## SymPy
SymPy는 심볼릭 연산(symbolic operation)을 지원하기 위한 파이썬 패키지이다. 심볼릭 연산이란 사람이 연필로 계산하는 미분/적분과 동일한 형태의 연산을 말한다. 즉, $x^2$의 미분 연산을 수행하면 그 결과가 $2x$란 형태로 출력된다.
딥 러닝(deep learning) 등에 많이 사용되는 파이썬의 theano 패키지나 tensorflow 패키지도 뉴럴 네트워크 트레이닝시에 필요한 기울기 함수 계산을 위해 이러한 심볼릭 연산 기능을 갖추고 있다.
이를 위해서는 SymPy의 `symbols` 명령을 사용하여 $x$라는 기호가 단순한 숫자나 벡터 변수가 아닌 기호에 해당하는 것임을 알려주어야 한다.
```python
import sympy
sympy.init_printing(use_latex='mathjax') # Juypter 노트북에서 수학식의 LaTeX 표현을 위해 필요함
```
```python
x = sympy.symbols('x')
x
```
$$x$$
```python
type(x)
```
sympy.core.symbol.Symbol
일단 심볼 변수를 정의하면 이를 사용하여 다음과 같이 함수를 정의한다. 이 때 수학 함수는 SymPy 전용 함수를 사용해야 한다.
```python
f = x * sympy.exp(x)
f
```
$$x e^{x}$$
함수가 정의되면 `diff` 명령으로 미분을 할 수 있다. 또한 `simplify` 명령으로 소인수분해 등을 통한 수식 정리가 가능하다.
```python
sympy.diff(f)
```
$$x e^{x} + e^{x}$$
```python
sympy.simplify(sympy.diff(f))
```
$$\left(x + 1\right) e^{x}$$
편미분을 하는 경우에는 어떤 변수로 미분하는지를 명시해야 한다.
```python
x, y = sympy.symbols('x y')
f = x**2 + x*y + y**2
f
```
$$x^{2} + x y + y^{2}$$
```python
sympy.diff(f, x)
```
$$2 x + y$$
```python
sympy.diff(f, y)
```
$$x + 2 y$$
복수의 기호를 사용하는 경우에도 편미분을 해야 한다.
```python
x, mu, sigma = sympy.symbols('x mu sigma')
f = sympy.exp((x-mu)**2)/sigma**2
f
```
$$\frac{1}{\sigma^{2}} e^{\left(- \mu + x\right)^{2}}$$
```python
sympy.diff(f, x)
```
$$\frac{1}{\sigma^{2}} \left(- 2 \mu + 2 x\right) e^{\left(- \mu + x\right)^{2}}$$
```python
sympy.simplify(sympy.diff(f, x))
```
$$\frac{2}{\sigma^{2}} \left(- \mu + x\right) e^{\left(\mu - x\right)^{2}}$$
|
2b3e921b3ed66c36a630b188df84881f64dff84b
| 98,522 |
ipynb
|
Jupyter Notebook
|
08. 미적분과 최적화/01. SymPy를 사용한 함수 미분.ipynb
|
zzsza/Datascience_School
|
da27ac760ca8ad1a563a0803a08b332d560cbdc0
|
[
"MIT"
] | 39 |
2017-04-30T06:17:21.000Z
|
2022-01-07T07:50:11.000Z
|
08. 미적분과 최적화/01. SymPy를 사용한 함수 미분.ipynb
|
yeajunseok/Datascience_School
|
da27ac760ca8ad1a563a0803a08b332d560cbdc0
|
[
"MIT"
] | null | null | null |
08. 미적분과 최적화/01. SymPy를 사용한 함수 미분.ipynb
|
yeajunseok/Datascience_School
|
da27ac760ca8ad1a563a0803a08b332d560cbdc0
|
[
"MIT"
] | 32 |
2017-04-09T16:51:49.000Z
|
2022-01-23T20:30:48.000Z
| 35.722263 | 277 | 0.465287 | true | 4,334 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.841826 | 0.685949 | 0.57745 |
__label__kor_Hang
| 1.00001 | 0.179939 |
# Matrix Factorization implementation
Iván Vallés Pérez - 2018
In a [recommender system](https://en.wikipedia.org/wiki/Recommender_system) we have two kinds of entities: users and items. We want to predict, for an arbitrary user, which items the user prefers. Sometimes users give explicit ratings, for example 4- or 5-star reviews. Let's start with this case and give an accurate mathematical formulation of the problem.
We will call $r_{ij}$ the rating that user $i$ gives to item $j$ and we want to build a model to predict these ratings. Let's call the predictions of the model $\hat{r}_{ij}$. It's important to realize that, since each user will probably rate just a handful of items and there are often thousands or even millions of items, we don't have ratings for the vast majority of possible user-item interactions. Let's call $I$ the set of known interactions and let's suppose that we want to minimize the squared loss, that is, we want to minimize:
\begin{equation}
L = \sum_{(i,j) \in I} (r_{ij} - \hat{r}_{ij})^2
\end{equation}
A matrix-factorization based recommender will solve the above problem by supposing that:
- We represent user $i$ with an unknown user bias $u_i^b$ and an unknown vector of length $K$ that we will call $u_i^e$, which is usually called the user embedding.
- We represent item $j$ with an unknown item bias $v_j^b$ and an unknown vector of length $K$ that we will call $v_j^e$, which is usually called the item embedding.
- The predicted rating of item $j$ by user $i$ is the biases plus the dot product of the two embeddings:
\begin{equation}
\hat{r}_{ij} = u_i^b + v_j^b + u_i^e \cdot v_j^e = u_i^b + v_j^b + \sum_{k=1}^K u_{ik}^ev_{jk}^e
\end{equation}
The above vectors are the parameters of our problem and $K$ is a hyperparameter. If we have $N$ users and $M$ items this means that we have $(K + 1)(N + M)$ parameters. Substituting inside the loss function we have:
\begin{equation}
L = \sum_{(i,j) \in I} (r_{ij} - u_i^b - v_j^b - \sum_{k=1}^K u_{ik}^ev_{jk}^e)^2
\end{equation}
To improve the generalization capabilities of the model regularization is added and finally we have:
\begin{equation}
L = \sum_{(i,j) \in I} (r_{ij} - u_i^b - v_j^b - \sum_{k=1}^Ku_{ik}^ev_{jk}^e)^2 + \lambda_u\sum_{i,k} (u_{ik}^e)^2 + \lambda_v\sum_{j, k} (v_{jk}^e)^2
\end{equation}
$\lambda_u$ and $\lambda_v$ are two additional hyperparameters.
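The `MatrixFactorization` class imported below encapsulates the optimization. Purely to make the idea concrete, a bare-bones single SGD update on one observed rating could look roughly like the sketch below; the function name, learning-rate handling and absorbed factor of 2 are illustrative choices, not the actual `src.matrix_factorization` code:
```python
import numpy as np

def sgd_step(u_b, v_b, U, V, i, j, r_ij, lr=0.002, lam=0.1):
    """Illustrative SGD step for one (user i, item j, rating r_ij) interaction."""
    pred = u_b[i] + v_b[j] + U[i] @ V[j]   # biases + embedding dot product
    err = r_ij - pred                      # residual of the squared loss
    u_b[i] += lr * err
    v_b[j] += lr * err
    # update both embeddings from the *old* values (tuple RHS is evaluated first)
    U[i], V[j] = (U[i] + lr * (err * V[j] - lam * U[i]),
                  V[j] + lr * (err * U[i] - lam * V[j]))
    return err**2
```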
```python
%cd ..
```
```python
from src.matrix_factorization import MatrixFactorization
from src.deep_factorization import DeepFactorization
```
C:\Users\Ivan Valles Perez\AppData\Local\Continuum\Anaconda3\lib\site-packages\h5py\__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
```python
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import clear_output
# Load the data and calculate the users set and items set cardinalities
df = pd.read_csv("./data/interactions.csv", names=["U", "I", "Q"])
U_cardinality = df.U.nunique()
I_cardinality = df.I.nunique()
```
```python
# Generates and shuffles the data set to remove possible tendencies
np.random.seed(655321)
mat = df.values
np.random.shuffle(mat)
```
```python
# Divide the data set into train-dev-test
train_mat = mat[:85000]
dev_mat = mat[85000:90000]
test_mat = mat[90000:]
```
```python
# Separate features from target
x_train=(train_mat[:,:2]-1)
y_train=(train_mat[:,2:])
x_dev=(dev_mat[:,:2]-1)
y_dev=(dev_mat[:,2:])
x_test=(test_mat[:,:2]-1)
y_test=(test_mat[:,2:])
```
## Matrix Factorization
```python
# Initialize the model and the performance accumulators for reporting purposes
# The metaparameters have been tuned to improve the algorithm performance
MF = MatrixFactorization(n_users=U_cardinality, n_items=I_cardinality, emb_size=5, lr=0.002, _lambda=0.1)
losses_train = [MF.evaluate(x_train, y_train, batch_size=1000)] # Add the initial loss (w. random weights)
losses_dev = [MF.evaluate(x_dev, y_dev, batch_size=1000)] # Add the initial loss (w. random weights)
```
```python
for i in range(50): # Run for 50 epochs
MF.fit(x_train, y_train, batch_size=128) # Compute an epoch using SGD
losses_train.append(MF.evaluate(x_train, y_train, batch_size=128)) # Compute the train performance
losses_dev.append(MF.evaluate(x_dev, y_dev, batch_size=128)) # Compute the dev. performance
# Plot train and dev errors over time
clear_output(True)
plt.figure(figsize=[10, 3])
plt.plot(losses_train)
plt.plot(losses_dev)
axes = plt.gca()
plt.legend(["Train MSE", "Dev MSE"])
plt.title("Training errors over time (Mean Squared Error)")
plt.ylabel("Log-MSE")
plt.xlabel("Epochs")
plt.ylim([0.1, axes.get_ylim()[1]])
plt.yscale('log')
plt.grid(True, which="both",ls="-")
plt.show()
print("[EPOCH {0}] Train error = {1:.4f} | Dev error = {2:.4f}".format(i+1, losses_train[-1], losses_dev[-1]))
print("Test MSE achieved = {0:.4f}".format(MF.evaluate(x_test, y_test, batch_size=128)))
```
## Extra: solution using deep learning
```python
DF = DeepFactorization(n_users=U_cardinality, n_items=I_cardinality, emb_size=5, lr=0.002, _lambda=0.1)
losses_train = [DF.evaluate(x_train, y_train, batch_size=1000)] # Add the initial loss (w. random weights)
losses_dev = [DF.evaluate(x_dev, y_dev, batch_size=1000)] # Add the initial loss (w. random weights)
```
```python
for i in range(50): # Run for 50 epochs
DF.fit(x_train, y_train, batch_size=128) # Compute an epoch using SGD
losses_train.append(DF.evaluate(x_train, y_train, batch_size=128)) # Compute the train performance
losses_dev.append(DF.evaluate(x_dev, y_dev, batch_size=128)) # Compute the dev. performance
# Plot train and dev errors over time
clear_output(True)
plt.figure(figsize=[10, 3])
plt.plot(losses_train)
plt.plot(losses_dev)
axes = plt.gca()
plt.legend(["Train MSE", "Dev MSE"])
plt.title("Training errors over time (Mean Squared Error)")
plt.ylabel("Log-MSE")
plt.xlabel("Epochs")
plt.ylim([0.1, axes.get_ylim()[1]])
plt.yscale('log')
plt.grid(True, which="both",ls="-")
plt.show()
print("[EPOCH {0}] Train error = {1:.4f} | Dev error = {2:.4f}".format(i+1, losses_train[-1], losses_dev[-1]))
print("Test MSE achieved = {0:.4f}".format(DF.evaluate(x_test, y_test, batch_size=128)))
```
|
1f21fa0f9717cb774963c66dec8de57c1e16fd20
| 49,123 |
ipynb
|
Jupyter Notebook
|
notebooks/example.ipynb
|
ivallesp/deep_matrix_factorization
|
8aab17d7abac81fff4fd574c0a41beb46af04499
|
[
"MIT"
] | 3 |
2020-05-19T17:19:41.000Z
|
2020-12-25T08:35:32.000Z
|
notebooks/example.ipynb
|
ivallesp/deep_matrix_factorization
|
8aab17d7abac81fff4fd574c0a41beb46af04499
|
[
"MIT"
] | null | null | null |
notebooks/example.ipynb
|
ivallesp/deep_matrix_factorization
|
8aab17d7abac81fff4fd574c0a41beb46af04499
|
[
"MIT"
] | null | null | null | 157.951768 | 19,752 | 0.871689 | true | 1,888 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.955981 | 0.861538 | 0.823614 |
__label__eng_Latn
| 0.906642 | 0.751865 |
```python
# This cell is for the Google Colaboratory
# https://stackoverflow.com/a/63519730
if 'google.colab' in str(get_ipython()):
# https://colab.research.google.com/notebooks/io.ipynb
import google.colab.drive as gcdrive
# may need to visit a link for the Google Colab authorization code
gcdrive.mount("/content/drive/")
import sys
sys.path.insert(0,"/content/drive/My Drive/Colab Notebooks/nmisp/60_linear_algebra_2")
```
```python
import numpy as np
import numpy.random as nr
import sympy as sy
import IPython.display as disp
sy.init_printing()
```
# 선형 연립 방정식<br>Systems of Linear Equations
미지수가 3개인 선형 연립 방정식을 생각해 보자.<br>Let's think about a system of linear equations with three unknowns.
```python
n = 3
x = np.array(sy.symbols(f'x:{n}'))
x
```
세 미지수를 모두 결정하려면, 보통 세개의 서로 선형 독립인 방정식이 필요하다.<br>To decide all three unknowns, usually we need three linearly independent equations.
```python
a = np.array(sy.symbols(
f'a:{n}(:{n})'
)).reshape((n, n)).tolist()
b = sy.symbols(f'b:{n}')
```
```python
eqs = []
for coefs, const in zip(a, b):
lhs = sum([aij * xj for aij, xj in zip(coefs, x)])
eq = sy.Eq(lhs, const)
eqs.append(eq)
disp.display(eq)
```
행렬 형태로 정리해 보자<br>Let's rewrite in the matrix form
```python
matA = sy.Matrix(a)
vecB = sy.Matrix(b)
vecX = sy.Matrix(x)
eq_mat = sy.Eq(matA * vecX, vecB)
eq_mat
```
여기서 계수 행렬과 상수 벡터만 생각해 보자.<br>Here, let's just think about the coefficient matrix and constant vector.
```python
matAb = matA.col_insert(n, vecB)
matAb
```
## 가우스 소거법<br>Gauss Elimination
다음 비디오는 가우스 소거법을 소개한다.<br>
The following video introduces the Gauss Elimination. (00:17 ~ 18:57)
[](https://www.youtube.com/watch?v=QVKj3LADCnA&list=PLE7DDD91010BC51F8&index=3&start=18&end=1137&version=3)
비디오에서 제시한 연립 방정식을 생각해 보자.<br>
Let's think about the system of equations of the video.
```python
A = np.array([
[1, 2, 1],
[3, 8, 1],
[0, 4, 1],
])
A
```
```python
b = np.array([
[2, 12, 2]
]).T
b
```
행렬 A와 벡터 b를 붙인다.<br>
Let's augment matrix A with b.
```python
Ab = np.hstack((A, b))
Ab
```
우선 첫 행의 첫 열 원소에 pivot 이라는 이름을 준다.<br>First, let's designate the first element of the first row as pivot.
```python
p = 0
pivot = Ab[p, p]
pivot
```
두번째 행 첫 열 원소를 pivot 으로 나눈 비를 계산한다.<br>
Divide the element at the first column of the second row with pivot
```python
i = p + 1
multiplier = Ab[i, p] / pivot
multiplier
```
첫 행에 이 비를 곱한 후 둘째 행에서 뺀다.<br>Multiply the first row with this multiplier and subtract from the second row.
```python
Ab[i, :] = Ab[i, :] + (- multiplier) * Ab[p, :]
Ab
```
두번째 행 첫번째 열이 0이 되었음을 알 수 있다.<br>
We can see that the second row first column is now zero.
세번째 행 첫번째 열은 이미 0이다.<br>
The third row first column is already zero.
이제 p 에 1을 더하고 반복하자.<br>Now let's add 1 to `p` and repeat.
```python
p += 1
pivot = Ab[p, p]
pivot
```
`p+1` 행 `p` 열 원소를 `pivot` 으로 나눈 비를 계산한다.<br>
Divide the element at the `p`th column of the `p+1`th row with pivot
```python
i = p + 1
multiplier = Ab[i, p] / pivot
multiplier
```
`p` 행에 이 비를 곱한 후 `p+1` 행에서 뺀다.<br>
Multiply the `p`th row with this multiplier and subtract from the `p+1`th row.
```python
Ab[i, :] = Ab[i, :] + (- multiplier) * Ab[p, :]
Ab
```
이런 식으로 왼쪽 위로부터 오른쪽 아래로의 주대각선 아래 원소를 모두 0으로 만든다.<br>This way, make all elements below main diagonal, from the left upper corner to the right lower direction, zero.
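위 과정을 반복문으로 정리한 참고용 예시는 다음과 같다.<br>For reference, here is a compact sketch of the same procedure written as a loop (no pivot exchange):
```python
Ab2 = np.hstack((A, b)).astype(float)   # start again from the original augmented matrix
for p in range(len(Ab2) - 1):           # pivot row
    for i in range(p + 1, len(Ab2)):    # rows below the pivot
        Ab2[i, :] -= (Ab2[i, p] / Ab2[p, p]) * Ab2[p, :]
Ab2
```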
## 후진대입법<br>Backward substitution
주대각선 아래가 모두 0이라면 아래와 같이 생각해 볼 수 있다.<br>If all elements below the main diagonal are zeros, we may think as follows.
```python
alpha = np.array(sy.symbols(
f'alpha:{n}(:{n})'
)).reshape((n, n)).tolist()
beta = sy.symbols(f'beta:{n}')
```
```python
eqs2 = []
for p in range(n):
lhs_list = []
for i in range(p, n):
lhs_list.append(alpha[p][i]*x[i])
eq = sy.Eq(sum(lhs_list), beta[p])
eqs2.append(eq)
for eq in eqs2:
disp.display(eq)
```
맨 마지막 행에서 마지막 미지수를 알 수 있다.<br>
From the last row, we can find the last unknown.
```python
sol = sy.Matrix([None] * n)
```
```python
sol_n_1 = sy.solve(eqs2[-1], x[-1])
sol[-1] = sol_n_1[0]
disp.display(sol)
```
그 하나 앞 미지수는 마지막에서 두번째 방정식에서 구할 수 있다.<br>We can find the second last unknown from the second last equation.
```python
eqs2[-2].subs(x[-1], sol[-1])
```
```python
sol_n_2 = sy.solve(eqs2[-2].subs(x[-1], sol[-1]), x[-2])
sol[-2] = sol_n_2[0]
disp.display(sol)
```
반복하면 모든 해를 구할 수 있다.<br>We can find all solutions this way.
## `numpy.linalg`
`numpy.linalg` 의 `solve()` 함수를 이용할 수도 있다.<br>We can use `solve()` of `numpy.linalg`.
```python
import numpy.linalg as nl
x_sol = nl.solve(A, b)
x_sol
```
```python
import numpy.testing as nt
nt.assert_array_almost_equal(A@x_sol, b)
```
## 소거행렬<br>Elimination matrix
위에서 소개했던 행열 연산을 행하는 행렬을 생각할 수 있다.<br>
We can think about a matrix carrying out row-column operations above. (20:41 ~ 36:26)
[](https://www.youtube.com/watch?v=QVKj3LADCnA&list=PLE7DDD91010BC51F8&index=3&start=1242&end=2186)
2행1열 소거:<br>
Eliminate row 2 column 1:
```python
E21 = np.array([
[1, 0, 0],
[-3, 1, 0],
[0, 0, 1]
])
```
```python
E21 @ A
```
3행2열 소거:<br>
Eliminate row 3 column 2:
```python
E32 = np.array([
[1, 0, 0],
[0, 1, 0],
[0, -2, 1]
])
```
```python
E32 @ (E21 @ A)
```
교환법칙은 성립하는가?<br>
Commutative?
```python
E21 @ E32 @ A
```
결합법칙은 성립하는가?<br>
Associative?
```python
(E32 @ E21) @ A
```
위 두 행렬의 곱:<br>
Product of the two matrices above:
```python
E = E32 @ E21
```
```python
E
```
E 행렬과 A 행렬의 곱은 상삼각 행렬이다.<br>
The product of matricies of E and A is the upper triangular matrix.
```python
E @ A
```
## 표준 기능으로 구현한 가우스 소거법<br>Gauss Elimination in Standard library
다음 셀은 가우스 소거법을 표준기능 만으로 구현한다.<br>
Following cell implements the Gauss elimination with standard library only.
```python
import typing
Scalar = typing.Union[int, float]
Row = typing.List[Scalar]
Matrix = typing.List[Row]
def gauss_elimination(Ab:Matrix) -> None:
# pivot loop
for p in range(0, len(Ab)-1):
pivot = Ab[p][p]
one_over_minus_pivot = -1.0 / pivot
# row loop
for i in range(p+1, len(Ab)):
multiplier = Ab[i][p] * one_over_minus_pivot
# column loop
for j in range(p, len(Ab[p])):
Ab[i][j] += multiplier * Ab[p][j]
```
위 행렬의 예로 확인해 보자.<br>
Let's check with the matrix above.
```python
Ab_list = [
[1, 2, 1, 2],
[3, 8, 1, 12],
[0, 4, 1, 2],
]
```
```python
gauss_elimination(Ab_list)
```
```python
import pprint
pprint.pprint(Ab_list, width=40)
```
다음 셀은 후진대입법을 표준기능 만으로 구현한다.<br>
Following cell implements the back substitution with standard library only.
```python
def back_substitution(Uc:Matrix) -> Row:
# number of unknowns
n = len(Uc)
result = [None] * n
# last unknown
result[n-1] = Uc[n-1][n] / Uc[n-1][n-1]
# row loop from second last to the first unknowns
for i in range(n-2, 0-1, -1):
s = Uc[i][n]
# column loop
for j in range(i+1, n-1+1):
s += (-1) * result[j] * Uc[i][j]
result[i] = s / Uc[i][i]
return result
```
```python
back_substitution(Ab_list)
```
## 연습 문제<br>Exercise
위 방법을 적용 가능한 공학 문제 사례를 설명하고 `numpy.linalg.solve()`로 해를 구해 보시오. 이렇게 구한 해가 맞는지 어떻게 확인할 수 있는가?<br>
Describe an engineering problem that we can apply the method above and find the solution using `numpy.linalg.solve()`. How can we verify if the solution is correct?
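한 가지 참고용 예시 (아래 수치는 임의로 정한 값이다):<br>One possible example, for reference only (all numbers below are made up): two masses connected by three springs between fixed walls. The static equilibrium conditions give a linear system $Kx = f$, and the solution can be verified by checking that the residual $Kx - f$ is numerically zero.
```python
# Illustrative only: stiffnesses and forces are arbitrary values
k1, k2, k3 = 100.0, 50.0, 100.0            # spring stiffnesses (N/m)
K = np.array([[k1 + k2, -k2],
              [-k2, k2 + k3]])             # stiffness matrix
f = np.array([1.0, 2.0])                   # external forces (N)
x_disp = nl.solve(K, f)                    # displacements (m)
print(x_disp)
print(np.allclose(K @ x_disp, f))          # verification: True if K x ≈ f
```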
## 참고문헌<br>References
* Gilbert Strang. 18.06 Linear Algebra. Spring 2010. Massachusetts Institute of Technology: MIT OpenCourseWare, https://ocw.mit.edu. License: Creative Commons BY-NC-SA.
## Final Bell<br>마지막 종
```python
# stackoverfow.com/a/24634221
import os
os.system("printf '\a'");
```
```python
```
|
0a2bd69bfb61b9542fae8fb20fab19ad2ac6f5f9
| 18,729 |
ipynb
|
Jupyter Notebook
|
60_linear_algebra_2/100_Systems_of_Linear_Equations.ipynb
|
kangwonlee/2109eca-nmisp-template
|
2e078870757fa06222df62d0ff8f4f4f288af51a
|
[
"BSD-3-Clause"
] | null | null | null |
60_linear_algebra_2/100_Systems_of_Linear_Equations.ipynb
|
kangwonlee/2109eca-nmisp-template
|
2e078870757fa06222df62d0ff8f4f4f288af51a
|
[
"BSD-3-Clause"
] | null | null | null |
60_linear_algebra_2/100_Systems_of_Linear_Equations.ipynb
|
kangwonlee/2109eca-nmisp-template
|
2e078870757fa06222df62d0ff8f4f4f288af51a
|
[
"BSD-3-Clause"
] | null | null | null | 20.073955 | 210 | 0.478563 | true | 3,005 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.865224 | 0.812867 | 0.703312 |
__label__kor_Hang
| 0.885624 | 0.472362 |
```python
# Default setup
import pandas as pd
import numpy as np
pd.plotting.register_matplotlib_converters()
import matplotlib.pyplot as plt
import seaborn as sns
plt.rcParams['figure.dpi'] = 300
plt.rcParams['axes.grid'] = True
plt.style.use('ggplot')
```
```python
data = pd.read_csv('dataset.csv')
Q = np.array([x/2 for x in data['2Q']])
I = np.array([b for b in data['I']])
```
```python
csfont = {'fontname':'Ubuntu'}
plt.title("Q VS 1 / I",**csfont)
plt.plot(1/I,Q,'-X',alpha = 0.5)
plt.xlabel("1 / I ( 1 / Ampere)")
plt.ylabel("Q (mm)")
m, c = np.polyfit(1/I,Q,1)
plt.text(7,7,'Slope = {}'.format(m.__round__(6)),bbox=dict(facecolor='pink', alpha=0.5))
plt.plot(1/I, m*(1/I) + c,'--',color = 'red')
plt.legend(["Original","Linear Fit"])
plt.tight_layout()
plt.savefig('Figure')
```
```python
# Datas
h = 6.6261 * 10**-27 # h in CGS
v = 15 * 10**6 # Converting to Hz
P = 43 * 10**-1 # converting to cm
a = 7.7 # Already in cm
M = 9.27 * 10**-21 # M in CGS
n = 500 # no unit
slope = m*10**-1 # Converting slope from Ampere-mm to Biot-cm
```
$$
\Large
\begin{equation}
g = \frac{10 \sqrt{125}\quad h \nu_\circ a P}{64\sqrt{2}\quad \pi n \mu_\circ I Q}
\end{equation}
$$
```python
num = h * v * a * P * 10 * np.sqrt(125)
den = np.pi * n * M * slope * 64 * np.sqrt(2)
print("g =",num/den)
```
g = 1.833753011558984
```python
P = [43]*len(I)
H = [(32 * np.pi * n * val)/(10 * np.sqrt(125) * a) for val in I]
HPP = [2 * np.sqrt(2) * val for val in H]
H0 = [HPP[i] * (Q[i]/P[i]) for i in range(len(HPP))]
g1 = [(h*v)/(M*val) for val in H0]
g2 = [(h * v * a * P[i] * 10 * np.sqrt(125)/(np.pi * n * M * Q[i]*I[i] * 64 * np.sqrt(2))) for i in range(len(Q))]
```
```python
frame = {}
frame["I (A)"] = I
frame["P (mm)"] = P
frame["Q (mm)"] = Q
frame["H (G)"] = H
frame["HPP (G)"] = HPP
frame["H0 (G)"] = H0
frame["g"] = g1
```
```python
df = pd.DataFrame(frame)
df.index+=1
df.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>I (A)</th>
<th>P (mm)</th>
<th>Q (mm)</th>
<th>H (G)</th>
<th>HPP (G)</th>
<th>H0 (G)</th>
<th>g</th>
</tr>
</thead>
<tbody>
<tr>
<th>1</th>
<td>0.100</td>
<td>43</td>
<td>14.0</td>
<td>5.838807</td>
<td>16.514640</td>
<td>5.376860</td>
<td>1.994072</td>
</tr>
<tr>
<th>2</th>
<td>0.126</td>
<td>43</td>
<td>11.5</td>
<td>7.356897</td>
<td>20.808447</td>
<td>5.565050</td>
<td>1.926640</td>
</tr>
<tr>
<th>3</th>
<td>0.154</td>
<td>43</td>
<td>9.0</td>
<td>8.991763</td>
<td>25.432546</td>
<td>5.323091</td>
<td>2.014214</td>
</tr>
<tr>
<th>4</th>
<td>0.181</td>
<td>43</td>
<td>7.5</td>
<td>10.568241</td>
<td>29.891499</td>
<td>5.213634</td>
<td>2.056501</td>
</tr>
<tr>
<th>5</th>
<td>0.207</td>
<td>43</td>
<td>6.5</td>
<td>12.086331</td>
<td>34.185305</td>
<td>5.167546</td>
<td>2.074843</td>
</tr>
</tbody>
</table>
</div>
```python
meang = sum(g1)/len(g1)
meang
```
2.107747334751516
|
2896717e7c186b1a3c777c3355367e4b7751da1b
| 151,964 |
ipynb
|
Jupyter Notebook
|
S2/Electron Spin Resonance/test.ipynb
|
jithu7432/genlab-experiments
|
a09418aa481212e335c881c4d60a66b6350a3d5e
|
[
"MIT"
] | null | null | null |
S2/Electron Spin Resonance/test.ipynb
|
jithu7432/genlab-experiments
|
a09418aa481212e335c881c4d60a66b6350a3d5e
|
[
"MIT"
] | null | null | null |
S2/Electron Spin Resonance/test.ipynb
|
jithu7432/genlab-experiments
|
a09418aa481212e335c881c4d60a66b6350a3d5e
|
[
"MIT"
] | null | null | null | 520.424658 | 144,416 | 0.934971 | true | 1,395 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.7773 | 0.812867 | 0.631842 |
__label__kor_Hang
| 0.220092 | 0.30631 |
# `DSML Workshop 09` - Advanced Non-Linear Regression
In this workshop we continue with hands-on supervised learning (regression).
We will cover the following:
1. Regularization: L1 (LASSO) and L2 (ridge) regression
1. General non-linear features: Radial Basis Functions
1. Other regression algorithms: Overview of some selected algorithms
```python
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
import datetime
%matplotlib inline
```
## Example: predicting peak electrical power
We continue with our electric power example from last week, which we retrieved from PJM ([link](https://dataminer2.pjm.com/feed/hrl_load_metered/definition)). The files we are loading are the raw files we downloaded from this source. The final input data for our code is `Pittsburgh_load_data.csv`.
```python
df = pd.read_csv("Pittsburgh_load_data.csv")
df["Date"] = pd.to_datetime(df["Date"], format="%d.%m.%Y")
df["Month"] = df["Date"].dt.month
df.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Date</th>
<th>AVG</th>
<th>MAX</th>
<th>MIN</th>
<th>Total</th>
<th>High_temp</th>
<th>Avg_temp</th>
<th>Month</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>2013-01-01</td>
<td>1.598524</td>
<td>1.859947</td>
<td>0.001599</td>
<td>38.368031</td>
<td>0.0</td>
<td>-1.68</td>
<td>1</td>
</tr>
<tr>
<th>1</th>
<td>2013-01-02</td>
<td>1.809347</td>
<td>2.054215</td>
<td>0.001809</td>
<td>43.428194</td>
<td>-3.9</td>
<td>-6.58</td>
<td>1</td>
</tr>
<tr>
<th>2</th>
<td>2013-01-03</td>
<td>1.832822</td>
<td>2.049550</td>
<td>0.001833</td>
<td>43.991607</td>
<td>0.6</td>
<td>-6.12</td>
<td>1</td>
</tr>
<tr>
<th>3</th>
<td>2013-01-04</td>
<td>1.812699</td>
<td>2.008168</td>
<td>0.001813</td>
<td>43.508609</td>
<td>0.0</td>
<td>-1.95</td>
<td>1</td>
</tr>
<tr>
<th>4</th>
<td>2013-01-05</td>
<td>1.662036</td>
<td>1.838251</td>
<td>0.001662</td>
<td>39.892360</td>
<td>1.7</td>
<td>-1.47</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
## `Recap`
**Recall from previous workshop**: We fitted a polynomial model to the full range of electricity data. To do so we used `scikit learn` and obtained the following result:
```python
xp = df["High_temp"].values
yp = df["MAX"].values
```
```python
plt.figure(figsize = (8,6))
plt.scatter(xp, yp, marker="x")
plt.xlabel("High Temperature (°C)")
plt.ylabel("Peak Demand (GW)")
plt.show()
```
```python
# x is the input variable
# y is the output variable
# d is the degree of the polynomial regression
# Exercise: write a function to:
# - scale the vector X to [-1, 1]
# - use sklearn.preprocessing.PolynomialFeatures to generate the all polynoms from 1 to d
# - use the sklearn.linear_model.LinearRegression to fit a regression
# - plot the resulting Line and the input data in a figure
def plot_regression_poly(x, y, d):
min_x, max_x = x.min(), x.max()
# Rescale x
# Create polynomial features
# Fit the linear regression model
# Artificial test data
xt0 = np.arange(-15, 35, 0.01)
# Transform the test data to the polynomial degree
Xt = []
# Predict the output of the artificial test data in yt
yt = []
# Get the coefficients and print the first 4 of them
print()
# Plot results
plt.figure(figsize = (8,6))
plt.scatter(x, y, marker="x")
ylim = plt.ylim()
plt.plot(xt0, yt, 'C1')
plt.xlabel("Temperature (°C)")
plt.ylabel("Demand (GW)")
plt.xlim([min_x-2, max_x+2])
plt.ylim(ylim)
```
```python
plot_regression_poly(xp, yp, d=100)
```
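One possible way to fill in the skeleton above, shown only as a sketch (the helper name `plot_regression_poly_ref` is ours, chosen to avoid clashing with your own solution; it assumes `xp`, `yp`, `np` and `plt` from the cells above):
```python
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

def plot_regression_poly_ref(x, y, d):
    min_x, max_x = x.min(), x.max()
    # Rescale x to [-1, 1] to keep the high-order polynomial features manageable
    xs = 2*(x - min_x)/(max_x - min_x) - 1
    # Create all polynomial features from degree 0 to d
    poly = PolynomialFeatures(degree=d)
    X = poly.fit_transform(xs.reshape(-1, 1))
    # Fit the linear regression model
    model = LinearRegression()
    model.fit(X, y)
    # Artificial test data, rescaled with the same transformation
    xt0 = np.arange(-15, 35, 0.01)
    Xt = poly.transform((2*(xt0 - min_x)/(max_x - min_x) - 1).reshape(-1, 1))
    # Predict the output for the artificial test data
    yt = model.predict(Xt)
    # Print the first 4 coefficients
    print(model.coef_[:4])
    # Plot results
    plt.figure(figsize=(8, 6))
    plt.scatter(x, y, marker="x")
    ylim = plt.ylim()
    plt.plot(xt0, yt, 'C1')
    plt.xlabel("Temperature (°C)")
    plt.ylabel("Demand (GW)")
    plt.xlim([min_x-2, max_x+2])
    plt.ylim(ylim)

plot_regression_poly_ref(xp, yp, d=100)
```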
## `Regularization` (by hand)
We have so far seen that the degree of the polynomial we use for our nonlinear features acts as a nice characterization of the model complexity. But there is another notion of model complexity that is also important to understand: the _magnitude_ of the model parameters. To see why this is important, let's look again at our degree 100 polynomial fit to the data.
Let's also look at the actual weights involved with this fit.
```python
plot_regression_poly(xp, yp, d=100)
```
The way that we get the polynomial to exactly pass through the points requires extremely high values for the coefficients: we need to set the coefficients just so that the different polynomial terms largely “cancel” out precisely at the data points and fit the function exactly to the data.
This also suggests another method for controlling the complexity of the model class:
restricting the magnitude of the coefficients. This is the basis of the technique known as regularization.
Formally, regularization is often written as an additional term in the canonical machine learning problem. Instead of simply minimizing the average loss, we minimize the average loss plus a term that penalizes the magnitude of the coefficients (usually some function of a norm of the weights, often just the sum of squared weights, also called $\ell_2$ regularization, but other functions are possible as well). For example, let's consider the following optimization problem:
\begin{equation}
\min_{\theta} \; \frac{1}{m}\sum_{i=1}^m \ell \left(h_\theta(x^{(i)}),y^{(i)} \right) + \lambda \sum_{i=1}^n \theta_i^2
\end{equation}
where $\lambda \in \mathbb{R}_+$ is what is called a _regularization parameter_. $\lambda$ effectively trades off between minimizing the training loss (which naturally "wants" to use large weights), and keeping the weights small. If $\lambda = 0$, we ignore the regularization term entirely, and just minimize training loss; but as $\lambda \rightarrow \infty$, the _only_ relevant term in the optimization problem becomes the sum of the squared weights, which is clearly minimized if we just pick $\theta = 0$. Thus, by varying $\lambda$ between zero and some very large constant, we can "sweep out" different ranges of model complexity.
### Visualizing regularization
Let's see what this looks like on our 100 degree polynomial. The figure above shows the situation with no regularization, i.e, $\lambda = 0$. If we instead choose $\lambda = 1$, we get the following figure.
```python
# x is the input variable
# y is the output variable
# d is the degree of the polynomial regression
# lam is lambda, the strength of the regularization
def plot_regularized_polyregression (x, y, lam, d):
# Hard task:
# Implement the linear regression by hand (without using scikit learn)
# also include the regularization parameter lambda
pass
```
```python
plot_regularized_polyregression (xp, yp, lam=0.1, d=100)
```
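A possible by-hand sketch, not the official solution (the name `plot_regularized_polyregression_ref` is ours, and `xp`, `yp`, `np` and `plt` are assumed from above). It solves the regularized normal equations $(X^\top X + \lambda I)\theta = X^\top y$ directly:
```python
def plot_regularized_polyregression_ref(x, y, lam, d):
    min_x, max_x = x.min(), x.max()
    # Rescale x to [-1, 1]
    xs = 2*(x - min_x)/(max_x - min_x) - 1
    # Build the polynomial design matrix by hand (degree d down to the constant)
    X = np.array([xs**i for i in range(d, -1, -1)]).T
    # Regularized least squares: solve (X^T X + lam*I) theta = X^T y
    theta = np.linalg.solve(X.T @ X + lam*np.eye(X.shape[1]), X.T @ y)
    # Evaluate the fitted polynomial on a fine grid
    xt0 = np.arange(-15, 35, 0.01)
    xts = 2*(xt0 - min_x)/(max_x - min_x) - 1
    yt = np.array([xts**i for i in range(d, -1, -1)]).T @ theta
    # Plot results
    plt.figure(figsize=(8, 6))
    plt.scatter(x, y, marker="x")
    ylim = plt.ylim()
    plt.plot(xt0, yt, 'C1')
    plt.xlabel("Temperature (°C)")
    plt.ylabel("Demand (GW)")
    plt.xlim([min_x-2, max_x+2])
    plt.ylim(ylim)

plot_regularized_polyregression_ref(xp, yp, lam=0.1, d=100)
```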
What happens if we regularize further?
```python
plot_regularized_polyregression (xp, yp, lam=100, d=100)
```
We can also understand what is happening here by reference to the previous section, where we discussed polynomial fitting as a function of the degree of the polynomial. Remember that after degree 10 or so, there wasn't a huge benefit to including additional polynomial terms in the regression. Yet, if we include these terms within the context of a traditional least squares fit, we have seen that the fit uses the extra polynomial degrees to minimize the error (essentially by passing "exactly" through some of the points), and this is accomplished by putting very high weights on the high-order coefficients.
So what happens when we apply a regularization penalty? Because we know that we can achieve low error with a lower-degree polynomial of relatively small weights, if we add sufficient regularization to the optimization objective, this will have the effect of avoiding putting much weight on the high-order terms in the polynomial, and just putting the weight on the lower order terms. We can view this by looking at the magnitude of the actual coefficients of $\theta$ before and after regularization (remember, $\theta$ is ordered in higher-to-lower degree polynomial terms, so we will flip the order to correspond to increasing degree of the polynomial terms).
```python
def ls_poly_reg(x, y, lam, degree):
xs = 2*(x - min(x))/(max(x) - min(x)) - 1 # rescale to range [-1,1]
X = np.array([xs**i for i in range(degree,-1,-1)]).T
return np.linalg.solve(X.T @ X + lam*np.eye(X.shape[1]), X.T @ y)
```
```python
# Define inputs
x=xp
y=yp
lam=1
degree= 50
# Plot function
theta = ls_poly_reg(x, y, lam, degree)
plt.figure(figsize = (8,6))
plt.semilogy(range(degree+1), np.abs(theta[::-1]))
plt.xlabel("Degree of coefficient")
plt.ylabel("Coefficient weight")
plt.show()
```
**What do you observe as you change the degree of polynomial and the regularization parameter?**
## Regularization using `scikit learn`
### Ridge regression ($L_2$ Regularization)
Perhaps the most common form of regularization is known as ridge regression or $L_2$ regularization. This is the process we have implemented manually above. It proceeds by penalizing the sum of squares (L2-norm) of the model coefficients; in this case, the penalty on the model fit would be $P = \alpha\sum_{n=1}^N \theta_n^2$ where $\alpha$ is a free parameter that controls the strength of the penalty (note that this is equivalent to our $\lambda$ from above). This type of penalized model is built into Scikit-Learn with the Ridge estimator:
First we need to create polynomial features using the `PolynomialFeatures` module in `scikit learn`
```python
from sklearn.preprocessing import PolynomialFeatures
# initialize model
Poly = PolynomialFeatures(degree = 50)
# fit and transform xp
X_poly = Poly.fit_transform(xp.reshape(-1,1))
```
```python
#len(X_poly)
```
We then import the `Ridge` regression model
```python
from sklearn.linear_model import Ridge
model_L2 = Ridge(alpha = 0.01, normalize = True, solver = 'lsqr') # select least squares regression as solver
model_L2.fit(X_poly, yp)
prediction = model_L2.predict(X_poly)
print("Coefficients ", model_L2.coef_, "\nIntercept ", model_L2.intercept_ )
```
Coefficients [ 0.00000000e+00 -2.32640141e-02 5.84347924e-05 6.13898052e-06
3.33662650e-07 1.07952528e-08 3.27215854e-10 8.81663013e-12
2.23665684e-13 5.17885431e-15 1.07560117e-16 1.79689238e-18
1.39128809e-20 -6.59554607e-22 -4.74124289e-23 -2.09838984e-24
-7.84618536e-26 -2.67816111e-27 -8.60338073e-29 -2.63977107e-30
-7.79437033e-32 -2.22231243e-33 -6.12133462e-35 -1.62513656e-36
-4.13374005e-38 -9.95388249e-40 -2.21399631e-41 -4.29129615e-43
-5.93554881e-45 2.09363482e-47 6.17553472e-48 3.35292651e-49
1.39778610e-50 5.17393937e-52 1.78342400e-53 5.84705960e-55
1.84300545e-56 5.61572341e-58 1.65785897e-59 4.74022713e-61
1.30860068e-62 3.46361529e-64 8.66817836e-66 1.99233509e-67
3.90866418e-69 4.88106818e-71 -7.53423952e-73 -9.32437350e-74
-5.01593599e-75 -2.19332730e-76 -8.69503304e-78]
Intercept 1.8846505592013083
```python
# function for plotting
def plot_scikit_output (x, y, fitted_model):
min_x, max_x = x.min(), x.max()
xt0 = np.linspace(min_x-1, max_x+1, 400)
xt0_poly = Poly.fit_transform(xt0.reshape(-1,1))
# plotting routine
plt.figure(figsize = (8,6))
plt.scatter(x, y, marker="x")
ylim = plt.ylim()
plt.plot(xt0, fitted_model.predict(xt0_poly), 'C1')
plt.xlabel("Temperature (°C)")
plt.ylabel("Demand (GW)")
plt.xlim([min_x-2, max_x+2])
plt.ylim(ylim)
```
```python
plot_scikit_output (xp, yp, model_L2)
```
### LASSO regression ($L_1$ regularization)
Another very common type of regularization is known as lasso, and involves penalizing the sum of absolute values (1-norms) of regression coefficients:
$$ P = \alpha\sum_{n=1}^N |\theta_n| $$
Though this is conceptually very similar to ridge regression, the results can differ surprisingly: for example, due to geometric reasons lasso regression tends to favor sparse models where possible; that is, it preferentially sets model coefficients to exactly zero. As a result, Lasso can be readily used as an embedded method for feature selection.
```python
from sklearn.linear_model import Lasso
model_L1 = Lasso(alpha = 1)
model_L1.fit(X_poly, yp)
predict = model_L1.predict(X_poly)
print("Coefficients ", model_L2.coef_, "\nIntercept ", model_L2.intercept_ )
```
```python
plot_scikit_output (xp, yp, model_L1)
```
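To see the sparsity-inducing effect described above, we can count how many coefficients each model sets exactly to zero (a quick check reusing `model_L1` and `model_L2` from the cells above):
```python
# LASSO typically sets most coefficients exactly to zero,
# while ridge only shrinks them towards zero.
print("Non-zero coefficients (LASSO):", np.sum(model_L1.coef_ != 0))
print("Non-zero coefficients (Ridge):", np.sum(model_L2.coef_ != 0))
```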
### Regularization and cross-validation performance
We can also illustrate the effects of regularization as they relate to training and validation performance. Just as we did with the degree of the polynomial, we can consider the training and validation errors for different amounts of regularization.
```python
# Large Exercise
# Write one function for Ridge and one for Lasso that:
# - take x, y, and the degree
# - split the test data in train and validation set
# - loop over different values for the regularization parameter
# - for each iteration calculate the MSE on the train and test data
# - plot the train and validation error
```
```python
def plot_L2_regression_performance (x, y, deg):
err_train = []
err_cv = []
# train test split
# np.logspace generates exponentially spaced values
for alpha in np.logspace(-15,10,100):
# create Polynomial Features
# fit model
# compute errors
err_train.append()
err_cv.append()
plt.figure(figsize = (8,6))
plt.loglog(np.logspace(-15,10,100), err_train, np.logspace(-15,10,100), err_cv)
plt.legend(["Training", "Validation"])
plt.xlabel("$\lambda$ (or alpha in scikit learn terms)")
plt.ylabel("Mean squared error")
plt.show()
```
```python
plot_L2_regression_performance(xp,yp,75)
```
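For reference, one possible completion of the skeleton above (a sketch only; the `_ref` suffix marks it as our own variant, and it assumes `xp`, `yp`, `np`, `plt`, `train_test_split` and `mean_squared_error` from the first cell):
```python
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Ridge

def plot_L2_regression_performance_ref(x, y, deg):
    err_train = []
    err_cv = []
    # Rescale x to [-1, 1] and build the polynomial features once
    xs = 2*(x - x.min())/(x.max() - x.min()) - 1
    X = PolynomialFeatures(degree=deg).fit_transform(xs.reshape(-1, 1))
    # Train/validation split
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=10)
    # np.logspace generates exponentially spaced regularization values
    alphas = np.logspace(-15, 10, 100)
    for alpha in alphas:
        model = Ridge(alpha=alpha, solver='lsqr')
        model.fit(X_tr, y_tr)
        # Compute train and validation errors
        err_train.append(mean_squared_error(y_tr, model.predict(X_tr)))
        err_cv.append(mean_squared_error(y_val, model.predict(X_val)))
    plt.figure(figsize=(8, 6))
    plt.loglog(alphas, err_train, alphas, err_cv)
    plt.legend(["Training", "Validation"])
    plt.xlabel(r"$\lambda$ (or alpha in scikit learn terms)")
    plt.ylabel("Mean squared error")
    plt.show()

plot_L2_regression_performance_ref(xp, yp, 75)
```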
A few points are worth emphasizing here. First, the nature of the regularization term: lower $\lambda$ means _less_ regularization, whereas larger $\lambda$ means more regularization (eventually corresponding to essentially all-zero weights), which produces the shape above. Thus, larger $\lambda$ means _lower_ model complexity, so the x-axis of the figure works in the opposite direction as in the polynomial degree example. Second, also note that we are using a _logarithmic_ scale on the x-axis (and the y-axis, as before, but the x-axis is the important part here). This means that regularization typically works on a scale of _orders of magnitude_. If you search over possible regularization terms, you'll want to do this search over a logarithmic space, because you need very large changes to the magnitude of $\lambda$ to really illustrate the full differences. Third and last, just as was the case for the polynomial degree, we emphasize that the cross validation error is not a nice unimodal function of $\lambda$; there are multiple local optima owing to the peculiarities of the particular polynomial, and it is not easy to globally optimize $\lambda$ by looking at cross validation error in some local region alone. For this reason, techniques like grid searches are often more common in practice for finding model hyperparameters (including the $\lambda$ term) than techniques like gradient-based optimization.
**Exercise**: Write a small function to visualize how the $MSE$ improves as you increase $\alpha$ for a fixed choice of polynomial degree in LASSO regression.
```python
# YOUR CODE HERE
def plot_L1_regression_performance (x, y, deg):
pass
```
```python
plot_L1_regression_performance (xp, yp, 50)
```
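Analogously, a possible LASSO version under the same assumptions (again only a sketch; the alpha range is narrower here because very small alphas make the coordinate-descent solver slow to converge):
```python
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Lasso

def plot_L1_regression_performance_ref(x, y, deg):
    err_train = []
    err_cv = []
    xs = 2*(x - x.min())/(x.max() - x.min()) - 1
    X = PolynomialFeatures(degree=deg).fit_transform(xs.reshape(-1, 1))
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=10)
    alphas = np.logspace(-6, 2, 50)
    for alpha in alphas:
        model = Lasso(alpha=alpha, max_iter=10000)
        model.fit(X_tr, y_tr)
        err_train.append(mean_squared_error(y_tr, model.predict(X_tr)))
        err_cv.append(mean_squared_error(y_val, model.predict(X_val)))
    plt.figure(figsize=(8, 6))
    plt.loglog(alphas, err_train, alphas, err_cv)
    plt.legend(["Training", "Validation"])
    plt.xlabel("alpha")
    plt.ylabel("Mean squared error")
    plt.show()

plot_L1_regression_performance_ref(xp, yp, 50)
```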
## General non-linear features
Using polynomials served as a good illustration of the basic principles of nonlinear features, generalization, and regularization, but they are far from the only such type of feature used in practice (and indeed, polynomials are probably a bit less common in most cases than other feature classes). We also only covered polynomials for one dimensional "raw" inputs, where it was easy to enumerate all possible polynomials. In this section we'll cover another type of common nonlinear feature, radial basis functions, and illustrate how to create both polynomials and radial basis functions over multi-dimensional raw inputs.
For the purposes of this section, we're going to adopt a slightly more explicit notation, though in general we're going to use it _only_ for this section. Specifically, whereas before we used $x^{(i)}$ to generally refer to the input features to the algorithm, here we're going to use $x^{(i)} \in \mathbb{R}^n$ (or often just $x \in \mathbb{R}^n$, if we don't need to index over a data set), to refer to just the "raw" input features: i.e., in the case of our peak demand prediction problem $x$ would just refer to the high temperature
\begin{equation}
x^{(i)} \in \mathbb{R}^1 = \left [ \; \mathrm{HighTemperature}^{(i)} \; \right ]
\end{equation}
The raw inputs need not always be one dimensional, of course, for instance we previously used the example of including both the temperature and a day of the week flag as features
\begin{equation}
x^{(i)} \in \mathbb{R}^2 = \left [ \begin{array}{c} \mathrm{HighTemperature}^{(i)} \\ \mathrm{IsWeekday}^{(i)} \end{array} \right ]
\end{equation}
But note that here we don't include any of the polynomial features directly in $x$; instead, $x$ only captures the true underlying inputs to the algorithm, the elements that we are providing that are not derived from the other quantities (and note that it also doesn't include the constant feature, for instance). Instead, we'll define a _feature mapping_
\begin{equation}
\phi : \mathbb{R}^n \rightarrow \mathbb{R}^k
\end{equation}
to be a function that maps $n$-dimensional inputs to $k$ dimensional _features_. Everything else remains the same, except that we now consider the hypothesis function that is linear in these feature vectors, i.e.,
\begin{equation}
h_{\theta}(x) = \theta^T \phi(x)
\end{equation}
parameterized by $\theta \in \mathbb{R}^k$.
For example, for a degree-3 polynomial (in one input variable), we can define $\phi : \mathbb{R} \rightarrow \mathbb{R}^4$ as
\begin{equation}
\phi(x) = \left [ \begin{array}{c} x^3 \\ x^2 \\ x \\ 1 \end{array} \right ]
\end{equation}
and similarly for larger degree
polynomials. Hopefully it is clear that this is just a notational definition, but it is useful for being a bit more precise about these nonlinear features.
### `Radial basis function (RBF)`
If I were to make a completely anecdotal estimate, I would guess that the most frequently used type of nonlinear feature is not the polynomial, but something called the _radial basis function_ (this is actually the case both for explicit features and for the kernel methods we'll talk about shortly), often abbreviated as RBF. Radial basis functions are similar to polynomials in that they are non-linear functions of the input data, but they are notably different in that they are generally _local_ features: the value of any particular feature is close to zero for most of the input space, but non-zero in a small region around a "center" parameter. Let's start with the definition, and we can then provide some illustrations that hopefully make this more concrete. To keep this simple to start, we're only going to consider radial basis functions of one-dimensional raw inputs, though we'll shortly expand this to cover the general $n$-dimensional case. A radial basis function feature vector is defined as the following:
\begin{equation}
\phi : \mathbb{R} \rightarrow \mathbb{R}^k = \left [ \begin{array}{c}
\exp \left(\frac{-(x - \mu^{(1)})^2}{2\sigma^2} \right) \\
\exp \left(\frac{-(x - \mu^{(2)})^2}{2\sigma^2} \right) \\
\vdots \\
\exp \left(\frac{-(x - \mu^{(k-1)})^2}{2\sigma^2} \right) \\
1 \end{array} \right ]
\end{equation}
where $\mu^{(1)},\ldots,\mu^{(k-1)} \in \mathbb{R}$ (called the means) and $\sigma \in \mathbb{R}$ (called the bandwidth) are the hyperparameters of this feature vector.
Let's look at a single one of these terms $\phi_j(x)$ (this is the $j$th element of the feature vector, because remember $\phi(x)$ outputs a $k$-dimensional vector).
\begin{equation}
\phi_j(x) = \exp \left(\frac{-(x - \mu^{(j)})^2}{2\sigma^2} \right)
\end{equation}
If you're familiar with the Gaussian distribution, you may recognize this as looking similar to the density function of the Gaussian (though without the normalizing constant). A set of these features (for varying inputs $x$, here with nine means evenly spaced between 0 and 4, $\sigma = 1$, plus the constant feature) looks like the following:
```python
x = np.linspace(-0.5,4.5,100)
mu = np.linspace(0,4,9)
sigma = 1
for mu_ in mu:
plt.plot(x, np.exp(-(x-mu_)**2 / (2*sigma**2)))
plt.plot([-0.5,4.5], [1,1])
plt.xlim([-0.5,4.5])
plt.legend([r"$\phi_{" + str(j+1) + "}(x)$" for j in range(10)], bbox_to_anchor=(1.02,0.95))
plt.show()
```
The goal of nonlinear fitting with RBFs is to approximate the underlying function with a linear combination of these features. By combining them in the proper manner, it is possible to approximate very general functions.
To see this, let's go back again to the nonlinear version of the peak demand prediction problem. We can construct a set of 10 RBF features spanning the minimum and maximum values of $x$. For simplicity, we choose $\sigma$ to be equal to the distance between the means (this was done above, and seems reasonable, though we'll consider other ways of choosing $\sigma$ below). Note also that there is no need to normalize the data, because the RBF features will always be scaled to be between zero and one (we could further normalize the generated features themselves, but this is typically not needed, as the features by definition will already be scaled to the range $[0,1]$).
```python
# create RBF features
def rbf_feat(x, mu, sig):
return np.hstack([np.exp(-(x[:,None] - mu)**2/(2*sig**2)), np.ones((len(x),1))])
def plot_regression_rbf(theta, mu, sig):
xt = np.linspace(-20,35, 400)
yt = rbf_feat(xt, mu, sig) @ theta
plt.figure(figsize = (8,6))
plt.scatter(df["High_temp"], df["MAX"], marker="x")
ylim = plt.ylim()
plt.plot(xt, yt, 'C1')
plt.xlabel("Temperature (C)")
plt.ylabel("Peak Demand (GW)")
plt.xlim([-18,35])
plt.ylim(ylim)
def train_rbf(x, y, n_rbf):
min_x, max_x = x.min(), x.max()
sig = (max_x - min_x)/(n_rbf-1)
mu = np.linspace(min_x, max_x, n_rbf-1)
Phi = rbf_feat(x, mu, sig)
theta = np.linalg.solve(Phi.T @ Phi, Phi.T @ y)
return theta, mu, sig
x = df["High_temp"].values
plot_regression_rbf(*train_rbf(xp, yp, 10))
```
### Hyperparameters in RBFs
Unlike polynomials, where the only real hyperparameter relevant to the features themselves (not the regularization) is the degree of the polynomial, for RBF features there are a number of hyperparameter choices: the choice of centers themselves (and the number of centers as highlighted above), the bandwidth parameter $\sigma$, and the regularization parameter $\lambda$. It can be hard to understand intuitively how to trade off between all these different choices, but the good news is that there are some rules of thumb for choosing reasonable values for many of the hyperparameters without resorting to a grid search. However, for the time being, we do want to briefly highlight the effect that the different hyperparameters have on the resulting performance.
**Effect of centers** We have already seen how the number of centers affects the fit of the data, so we will just briefly mention here that while the "obvious" choice for RBF centers on 1D data is to simply use an even grid over the input space, this doesn't work well for higher dimensions.
**Effect of regularization** Just like with polynomial features, we can add regularization to additionally smooth the function. Unlike regularization for polynomial features, however, with a lot of narrow-peaked RBF functions, it is not trivial to fit the data with small weights. This is precisely due to the local nature of the RBF features. Because each feature is only non-zero for a small part of the input space, we often cannot find a good fit to the data that has very low weights: there is no equivalent to the "just choosing low degree terms" as we did for the polynomial.
**Effect of bandwidth parameter** The effect of the bandwidth parameter, $\sigma$, can be somewhat less obvious. At a high-level, though, the intuition that is important here is that larger $\sigma$ leads to _smoother_ feature RBF functions, which in turns leads to smoother final functions.
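To make the bandwidth effect concrete, the following sketch (reusing `rbf_feat`, `xp` and `yp` from the cells above; the helper `train_rbf_sigma` is ours) fits the same 10-feature RBF model with a narrow and a wide bandwidth:
```python
# Fit the same 10-feature RBF model with two different bandwidths
def train_rbf_sigma(x, y, n_rbf, sig):
    mu = np.linspace(x.min(), x.max(), n_rbf-1)
    Phi = rbf_feat(x, mu, sig)
    theta = np.linalg.solve(Phi.T @ Phi, Phi.T @ y)
    return theta, mu, sig

plt.figure(figsize=(8, 6))
plt.scatter(xp, yp, marker="x")
ylim = plt.ylim()
xt = np.linspace(-20, 35, 400)
for sig in [1.0, 10.0]:
    theta, mu, s = train_rbf_sigma(xp, yp, 10, sig)
    plt.plot(xt, rbf_feat(xt, mu, s) @ theta, label=r"$\sigma$ = %.0f" % s)
plt.xlabel("Temperature (C)")
plt.ylabel("Peak Demand (GW)")
plt.ylim(ylim)
plt.legend()
plt.show()
```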
## Some other regression techniques
So far we have only talked about parameterized regression algorithms, i.e. algorithms that make relatively strong assumptions about the functional relationship between features and dependent variable (i.e. the target).
### `KNN regression`
Regression based on k-nearest neighbors. The target is predicted by local interpolation of the targets associated with the nearest neighbors in the training set.
Out of all the machine learning algorithms, KNN is easily the simplest to pick up. Despite its simplicity, it has proven to be effective at certain tasks. KNN can be used for both classification and regression problems. Although it is far more popular for classification problems, it can perform well in regression tasks as well. One of the benefits of KNN is that it is a **non-parametric** algorithm, i.e. it does not make strong assumptions about the form of the mapping function. By not making assumptions, KNN is free to learn any functional form from the training data.
```python
x_train, x_test, y_train, y_test = train_test_split(xp, yp, test_size=0.3,random_state=10)
```
```python
from sklearn.neighbors import KNeighborsRegressor
#Fit model
KNN_reg = KNeighborsRegressor(n_neighbors=25)
KNN_model = KNN_reg.fit(x_train.reshape((-1,1)), y_train)
# Predict
y_hat_KNN = KNN_model.predict(x_test.reshape((-1,1)))
```
```python
print("Test set performance:")
print("MAE:",mean_absolute_error(y_hat_KNN, y_test), "GW")
print("RMSE:",(mean_squared_error(y_hat_KNN, y_test))**(0.5), "GW")
#print("R2:",r2_score(y_hat_KNN, y_test))
```
Test set performance:
MAE: 0.1276345969343066 GW
RMSE: 0.1598484899732429 GW
Let us visualize the results...
```python
plt.figure(figsize = (8,6))
plt.scatter(x_train, y_train, marker="x")
plt.plot(np.arange(-18,40,1), KNN_reg.predict(np.arange(-18,40,1).reshape((-1,1))), marker="x", color='C1')
plt.xlabel("High Temperature (°C)")
plt.ylabel("Peak Demand (GW)")
plt.show()
```
**Exercise**: What is a good choice of the number of neighbors `n_neighbors`, the key hyperparameter in KNN regression? Back up your answer with numbers. To do so write a small loop to test different values for `n_neighbors` (i.e., perform a grid search over different choices for `n_neighbors`).
```python
# YOUR CODE HERE:
def find_knn(x, y, max_k):
pass
```
```python
find_knn (xp,yp,max_k=50)
```
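One possible implementation (a sketch; the `_ref` name is ours, and `train_test_split` and `mean_squared_error` come from the first cell):
```python
from sklearn.neighbors import KNeighborsRegressor

def find_knn_ref(x, y, max_k):
    x_tr, x_te, y_tr, y_te = train_test_split(x, y, test_size=0.3, random_state=10)
    ks = range(1, max_k+1)
    err_test = []
    for k in ks:
        model = KNeighborsRegressor(n_neighbors=k)
        model.fit(x_tr.reshape(-1, 1), y_tr)
        err_test.append(mean_squared_error(y_te, model.predict(x_te.reshape(-1, 1))))
    best_k = ks[int(np.argmin(err_test))]
    print("Lowest test MSE at n_neighbors =", best_k)
    plt.figure(figsize=(8, 6))
    plt.plot(list(ks), err_test)
    plt.xlabel("n_neighbors")
    plt.ylabel("Test MSE")
    plt.show()

find_knn_ref(xp, yp, max_k=50)
```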
### `Tree-based regression`
```python
# Import the necessary modules and libraries
from sklearn.tree import DecisionTreeRegressor, plot_tree
# Fit regression model
Tree_reg = DecisionTreeRegressor(max_depth=5)
tree_model = Tree_reg.fit(x_train.reshape((-1,1)), y_train)
# Predict
y_hat_tree = tree_model.predict(x_test.reshape((-1,1)))
```
```python
print("Test set performance:")
print("MAE:",mean_absolute_error(y_hat_tree, y_test), "GW")
print("RMSE:",(mean_squared_error(y_hat_tree, y_test))**(0.5), "GW")
#print("R2:",r2_score(y_hat_tree, y_test))
```
Test set performance:
MAE: 0.12751480011005473 GW
RMSE: 0.1595320383040373 GW
```python
plt.figure(figsize = (8,6))
plt.scatter(x_train, y_train, marker="x")
plt.plot(np.arange(-18,40,1), Tree_reg.predict(np.arange(-18,40,1).reshape((-1,1))), marker="x", color='C1')
plt.xlabel("High Temperature (°C)")
plt.ylabel("Peak Demand (GW)")
plt.show()
```
```python
plot_tree(tree_model)
```
Play around with the `max_depth` parameter. What do you observe? What do you think the underlying tree looks like?
**Exercise**: What is a good choice of the tree depth `max_depth`, the key hyperparameter in tree regression? Back up your answer with numbers. To do so, write a small loop to test different values for `max_depth` and compute the respective error metrics. Visualize your results.
```python
# YOUR CODE HERE:
def find_tree_depth(x, y):
pass
```
```python
find_tree_depth (xp,yp)
```
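And a possible `find_tree_depth` sketch under the same assumptions:
```python
from sklearn.tree import DecisionTreeRegressor

def find_tree_depth_ref(x, y, max_depth=20):
    x_tr, x_te, y_tr, y_te = train_test_split(x, y, test_size=0.3, random_state=10)
    depths = range(1, max_depth+1)
    err_train, err_test = [], []
    for d in depths:
        model = DecisionTreeRegressor(max_depth=d)
        model.fit(x_tr.reshape(-1, 1), y_tr)
        err_train.append(mean_squared_error(y_tr, model.predict(x_tr.reshape(-1, 1))))
        err_test.append(mean_squared_error(y_te, model.predict(x_te.reshape(-1, 1))))
    plt.figure(figsize=(8, 6))
    plt.plot(list(depths), err_train, list(depths), err_test)
    plt.legend(["Training", "Test"])
    plt.xlabel("max_depth")
    plt.ylabel("MSE")
    plt.show()

find_tree_depth_ref(xp, yp)
```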
The above exercises are what is commonly known as **grid searching an algorithm**. Grid searching is the practice of testing a large range of model hyperparameters via brute force. It is a key component of model selection and evaluation and should be carried out very thoroughly!
---
|
fcf5d641aae8bf9d433a9629a35ad0c039b54b8e
| 590,553 |
ipynb
|
Jupyter Notebook
|
03_Workshops/DSML_WS_09_AdvancedRegression/DSML_WS_09_AdvancedRegression.ipynb
|
IS3UniCologne/DSML_2022
|
ed81e79a34f846d90d869c3f0a76e6729185cc2f
|
[
"MIT"
] | null | null | null |
03_Workshops/DSML_WS_09_AdvancedRegression/DSML_WS_09_AdvancedRegression.ipynb
|
IS3UniCologne/DSML_2022
|
ed81e79a34f846d90d869c3f0a76e6729185cc2f
|
[
"MIT"
] | null | null | null |
03_Workshops/DSML_WS_09_AdvancedRegression/DSML_WS_09_AdvancedRegression.ipynb
|
IS3UniCologne/DSML_2022
|
ed81e79a34f846d90d869c3f0a76e6729185cc2f
|
[
"MIT"
] | null | null | null | 498.356962 | 79,882 | 0.937899 | true | 8,403 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.877477 | 0.822189 | 0.721452 |
__label__eng_Latn
| 0.989197 | 0.514506 |
# Software profesional en Acústica 2020-21 (M2i)
*This notebook was adapted from Chapter 1 of [The FEniCS Tutorial Volume I](https://fenicsproject.org/pub/tutorial/sphinx1/) by Hans Petter Langtangen and Anders Logg, released under CC Attribution 4.0 license. It has been created by Xiangmin Jiao (University of Stony Brook University) and it is available in the repository [Unifem/FEniCS-note](https://github.com/unifem/fenics-notes).*
# The equations of vibrations in fluid-structure interaction (displacement formulation)
Vibrations in linear elasticity study how solid objects respond to a mechanical excitation and become
internally stressed due to prescribed time-harmonic loading conditions. It is an important problem
in modern engineering. Its corresponding PDE is a generalization of the
Helmholtz equation, and it is among the most popular PDEs in
engineering. We now study the variational formulation of a fluid-structure interaction problem and how to solve
it using FEniCS in 2D.
## PDE problem
The time-harmonic equation governing vibrations of a fluid-structure problem involving an elastic solid structure $\Omega_S$ and a fluid domain $\Omega_F$ can be written as
\begin{align}
\label{ftut-elast-varform-equilibrium}\tag{1}
&-\omega^2\rho_S\boldsymbol{u}_S -\boldsymbol{\nabla}\cdot\boldsymbol{\sigma} = \boldsymbol{0}\hbox{ in }\Omega_S,\\
&-\omega^2\rho_F\boldsymbol{u}_F -\rho_Fc^2\nabla\mathrm{div}\boldsymbol{u}_F = \boldsymbol{0}\hbox{ in }\Omega_F,
\label{ftut-fluid-varform-equilibrium}\tag{2}
\end{align}
where $\boldsymbol{\sigma}$ is the *stress tensor*, $c$ is the sound speed in the fluid domain, $\boldsymbol{u}_S$ and $\boldsymbol{u}_F$ are the displacement fields in the solid and fluid domain,
$\rho_S$ and $\rho_F$ are the *mass density* of the solid and fluid domain, and $\omega$ the angular frequency. For isotropic materials, the stress tensor is further related to the deformation by
the following two equations:
\begin{align}
\boldsymbol{\sigma} &= \lambda\,\hbox{tr}\,(\boldsymbol{\varepsilon}) \boldsymbol{I} + 2\mu\boldsymbol{\varepsilon},
\label{ftut-elast-varform-stresstrain}\tag{3}\\
\boldsymbol{\varepsilon} &= \frac{1}{2}\left(\boldsymbol{\nabla} \boldsymbol{u}_S + (\boldsymbol{\nabla} \boldsymbol{u}_S)^{\top}\right),
\label{ftut-elast-varform-strainu}\tag{4}
\end{align}
where $\boldsymbol{\varepsilon}$ is the *symmetric strain-rate tensor* (symmetric gradient),
and $\boldsymbol{u}$ is the *displacement vector field*, $\boldsymbol{I}$ denotes the *identity tensor*,
$\mathrm{tr}$ denotes the *trace operator* on a tensor, and $\lambda$ and $\mu$
are material properties known as *Lamé's elasticity parameters*.
We can combine (\ref{ftut-elast-varform-stresstrain}) and
(\ref{ftut-elast-varform-strainu}) to obtain
\begin{equation}
\label{ftut-elast-varform-stressu}\tag{5}
\boldsymbol{\boldsymbol{\sigma}} = \lambda(\boldsymbol{\nabla}\cdot \boldsymbol{u}_S)\boldsymbol{I} + \mu(\boldsymbol{\nabla} \boldsymbol{u}_S + (\boldsymbol{\nabla} \boldsymbol{u}_S)^{\top})
\end{equation}
Note that
(\ref{ftut-elast-varform-equilibrium})-(\ref{ftut-elast-varform-strainu})
can easily be transformed to a single vector PDE for $\boldsymbol{u}_S$, which is the
governing PDE for the unknown $\boldsymbol{u}_S$ (Navier's equation). In the
derivation of the variational formulation, however, it is convenient
to keep the equations split as above.
## Variational formulation
The variational formulation of
([1](#mjx-eqn-ftut-elast-varform-equilibrium))-([4](#mjx-eqn-ftut-elast-varform-strainu))
consists of forming the inner product of
([1](#mjx-eqn-ftut-elast-varform-equilibrium))-([2](#mjx-eqn-ftut-fluid-varform-equilibrium)) and two *vector* test functions
$(\boldsymbol{v}_F,\boldsymbol{v}_S)\in \hat{V}$, where $\hat{V}$ is a vector-valued test function space in $\Omega_F$ and $\Omega_S$ such that
\begin{align}
\boldsymbol{v}_F\cdot \mathbf{n} = \boldsymbol{v}_S\cdot \mathbf{n},\\
\boldsymbol{\sigma}(\boldsymbol{u}_S)\mathbf{n} = -\rho_F c^2\mathrm{div}\boldsymbol{u}_{F}\mathbf{n},
\end{align}
on the coupling boundary $\Gamma_I=\partial\Omega_F\cap\partial\Omega_S$. So, integrating over the domain $\Omega_F\cup\Omega_S$ and taking into account the symmetry of the elasticity tensor, it holds
\begin{align}
-\omega^2\int_{\Omega_S} \rho_S\boldsymbol{u}_S\cdot \boldsymbol{v}_S\ \mathrm{d}\boldsymbol{x}
+ \int_{\Omega_S} \boldsymbol{\sigma}(\boldsymbol{u}_S) : \boldsymbol{\epsilon}(\boldsymbol{v}_S)\ \mathrm{d}\boldsymbol{x}
-\omega^2\int_{\Omega_F} \rho_F\boldsymbol{u}_F\cdot \boldsymbol{v}_F\ \mathrm{d}\boldsymbol{x}
+ \int_{\Omega_F} \rho_F c^2\,\mathrm{div}\,\boldsymbol{u}_F\,\mathrm{div}\,\boldsymbol{v}_F\ \mathrm{d}\boldsymbol{x}
= \int_{\Gamma_T} \boldsymbol{T}\cdot \boldsymbol{v}_S\ \mathrm{d}\boldsymbol{s}
\label{ftut-elast-varform-sigma_inner_gradv}\tag{8}
\end{align}
for all $(\boldsymbol{v}_F,\boldsymbol{v}_S)\in \hat{V}$, where $\boldsymbol{u}_S=\mathbf{0}$ on the clamped boundary $\Gamma_C=\{\mathbf{x}\in\partial\Omega_{S}: x_0=0\}$ and the surface traction $\boldsymbol{T}$ acts on the traction boundary $\Gamma_T=\{\mathbf{x}\in\partial\Omega_{S}: x_0=L\}$. In addition,
$\boldsymbol{\epsilon}(\boldsymbol{v})$ is the symmetric part of $\boldsymbol{\nabla} \boldsymbol{v}$.
### Enforcing boundary conditions
Now let us consider how to enforce boundary conditions.
For Dirichlet boundaries, we will enforce boundary-conditions strongly.
For these points, no test functions are associated with the Dirichlet nodes.
For traction boundary conditions, we will enforce the boundary condition
weakly using the variational form ([8](#mjx-eqn-ftut-elast-varform-sigma_inner_gradv)).
Similar to the Helmholtz equation, we require their corresponding test
functions $\boldsymbol{v}_S$ vanish along $\Gamma_C$.
Then, the boundary integral above has no effects for points on
$\partial\Omega_S\setminus\Gamma_T$.
### Summary of variational form
In summary, the variational problem is to find $\boldsymbol{u}$ in a vector function space $\hat{V}$ such that
\begin{equation}
a((\boldsymbol{u}_F,\boldsymbol{u}_S),(\boldsymbol{v}_F,\boldsymbol{v}_S)) = L((\boldsymbol{v}_F,\boldsymbol{v}_S))\quad\forall (\boldsymbol{v}_F,\boldsymbol{v}_S)\in\hat{V},
\end{equation}
where
\begin{align}
a((\boldsymbol{u}_F,\boldsymbol{u}_S),(\boldsymbol{v}_F,\boldsymbol{v}_S)) &= -\omega^2\int_{\Omega_S} \rho_S\boldsymbol{u}_S\cdot \boldsymbol{v}_S\ \mathrm{d}\boldsymbol{x}
+ \int_{\Omega_S} \boldsymbol{\sigma}(\boldsymbol{u}_S) : \boldsymbol{\epsilon}(\boldsymbol{v}_S)\ \mathrm{d}\boldsymbol{x}
-\omega^2\int_{\Omega_F} \rho_F\boldsymbol{u}_F\cdot \boldsymbol{v}_F\ \mathrm{d}\boldsymbol{x}
+ \int_{\Omega_F} \rho_F c^2\,\mathrm{div}\,\boldsymbol{u}_F\,\mathrm{div}\,\boldsymbol{v}_F\ \mathrm{d}\boldsymbol{x}
\end{align}
and
\begin{equation}
\boldsymbol{\sigma}(\boldsymbol{u}_S) = \lambda(\boldsymbol{\nabla}\cdot \boldsymbol{u}_S)\boldsymbol{I} + \mu(\boldsymbol{\nabla} \boldsymbol{u}_S + (\boldsymbol{\nabla} \boldsymbol{u}_S)^{\top}).\\
\end{equation}
## FEniCS implementation
To demonstrate the implementation, we will model a clamped structure deformed under a time-harmonic surface force on the opposite free side. This can be modeled by setting $\boldsymbol{T}=(0,1)$ on that boundary $\Gamma_T$. The solid structure is a square of side length $L$, whereas the fluid domain is an interior square of side length $W<L$. We
set $\boldsymbol{u}=(0,0)$ at the clamped end, $x=0$. The rest of the boundary is
traction free; that is, we set $\boldsymbol{T} = 0$. Therefore,
$$L((\boldsymbol{v}_F,\boldsymbol{v}_S)) = \int_{\Gamma_T} \boldsymbol{T}\cdot \boldsymbol{v}_S \mathrm{d}\boldsymbol{s}$$
for this problem.
### Import packages
We start by importing FEniCS and enabling inline matplotlib plotting. In addition, we import `mshr` for mesh generation.
```python
import numpy as np
from dolfin import *
from mshr import *
import matplotlib.pyplot as plt
%matplotlib inline
```
### Generate the mesh and function spaces
We start by generating the mesh and defining the function spaces.
```python
# Create mesh and define function space
length = 1; width = 1.
length_fluid = 0.6; width_fluid = 0.6
# Mesh
N = 8 # use positive integer values
mesh = RectangleMesh(Point(0, 0), Point(length, width), N*10, N*10, "right/left")
V = VectorFunctionSpace(mesh, 'P', 2)
```
To define the partition of the computational domain attending to the fluid and solid subdomains, each triangle of the finite element mesh is marked with a number flag or *marker*:
```python
# Initialize subdomain and boundary markers
tol = 1e-5
fluid_domain = CompiledSubDomain('(fabs(x[0]-0.5) < L0/2. + tol) and (fabs(x[1]-0.5) < L1/2. + tol)', L0=length_fluid, L1=width_fluid, tol=tol)
# Initialize mesh function for boundary
domain_markers = MeshFunction('size_t', mesh, mesh.topology().dim())
domain_markers.set_all(1)
fluid_domain.mark(domain_markers, 2)
# Define the volume measure dx with the subdomain markers
dx = Measure('dx', domain=mesh, subdomain_data=domain_markers)
plot(domain_markers)
plt.show()
```
### Define the variational problem
The primary unknown is now a vector field $\boldsymbol{u}$ and not a scalar field,
so we need to work with a vector function space. We will use
piecewise-quadratic (degree 2) Lagrange basis functions for all the components.
```python
u = TrialFunction(V)
v = TestFunction(V)
```
With `u = TrialFunction(V)` we get `u` as a vector-valued finite element
function with two components for this 2D problem.
Next, we define the stress tensor and $a$. The gradient and divergence operators
now have a prefix `nabla_`.
This is not strictly necessary in the present problem, but is
recommended in general for vector PDEs arising from continuum mechanics,
if you interpret $\boldsymbol{\nabla}$ as a vector in the PDE notation. See
the notes on `grad(u)` vs. `nabla_grad(u)` below.
```python
from ufl import nabla_div
# Define strain and stress
def epsilon(u):
return 0.5*(nabla_grad(u) + nabla_grad(u).T)
#return sym(nabla_grad(u))
# Frequency
omega = 2*np.pi*1.0
# Physical constants for the fluid
rho_fluid = 0.2
c = 1.
# Physical constants for the solid
rho = 1.
beta = 1.25
lambda_ = beta
mu = 1
d = u.geometric_dimension() # space dimension
def sigma(u):
return lambda_*nabla_div(u)*Identity(d) + 2*mu*epsilon(u)
# Define a
a = -omega**2*rho*inner(u, v)*dx(1) + inner(sigma(u), epsilon(v))*dx(1) -omega**2*rho_fluid*inner(u, v)*dx(2) + rho_fluid*c**2*div(u)*div(v)*dx(2)
```
To define the partition of the boundary in the computational domain, each face of the finite element mesh is marked with a number flag or *marker*:
```python
# Initialize subdomain and boundary markers
tol = 1e-5
clamped_boundary = CompiledSubDomain('on_boundary and near(x[0],L,tol)', L=0., tol=tol)
traction_boundary = CompiledSubDomain('on_boundary and near(x[0],L,tol)', L=length, tol=tol)
# Initialize mesh function for boundary
boundary_markers = MeshFunction('size_t', mesh, mesh.topology().dim() - 1)
boundary_markers.set_all(0)
clamped_boundary.mark(boundary_markers, 1)
traction_boundary.mark(boundary_markers, 2)
# Define the boundary measure with the boundary markers
ds = Measure('ds', domain=mesh, subdomain_data=boundary_markers)
```
To define $L$, $\boldsymbol{f}=(0, 1)$ is a constant vector, instead of a scalar.
Such a vector constant is specified as `Constant((0, 1))` in FEniCS.
```python
# Define L
f = Constant((0, 1))
L = dot(f, v)*ds(2)
```
### Define boundary conditions
We only specify the Dirichlet boundary condition. For the boundary condition
$u=(0, 0)$, we must set a vector value to zero, not just a scalar.
We specify the vector constant as `Constant((0, 0))`.
```python
# Define boundary condition
bc = DirichletBC(V, Constant((0, 0)), boundary_markers, 1)
```
### Solve the variational problem
Finally, we can solve the problem.
```python
# Compute solution
u = Function(V)
solve(a == L, u, bc)
```
### Plot the solution
Any component of the solution can be plotted, as can the modulus of the vector-valued solution:
```python
# Plot the Finite Element approximation
def plot_solution(u):
'''plot solution of FEM-based simulation'''
fig = plt.figure(figsize=(10,10))
fig = plot(u)
plt.xlabel(r'$x$ / m')
plt.ylabel(r'$y$ / m')
plt.colorbar(fig, fraction=0.038, pad=0.04);
# Plot vector field
plot_solution(u)
# Compute magnitude of displacement
u_magnitude = sqrt(dot(u, u))
Q = FunctionSpace(mesh, 'P', 2)
u_magnitude = project(u_magnitude, Q)
plot_solution(u_magnitude)
```
```python
# Plot second component of the solution
plot_solution(u[1])
```
```python
# Plot first component of the solution
plot_solution(u[0])
```
|
7dfc6d655a08be3d20c5b0bcaf4eb3cd62a61bc9
| 623,710 |
ipynb
|
Jupyter Notebook
|
notebooks/FEniCS_fluid_structure_displacements.ipynb
|
maprieto/SoftwareProfesionalAcustica
|
d6b9bd0d0e182c0c71a878a891c7420ced1b263d
|
[
"MIT"
] | null | null | null |
notebooks/FEniCS_fluid_structure_displacements.ipynb
|
maprieto/SoftwareProfesionalAcustica
|
d6b9bd0d0e182c0c71a878a891c7420ced1b263d
|
[
"MIT"
] | null | null | null |
notebooks/FEniCS_fluid_structure_displacements.ipynb
|
maprieto/SoftwareProfesionalAcustica
|
d6b9bd0d0e182c0c71a878a891c7420ced1b263d
|
[
"MIT"
] | null | null | null | 1,152.883549 | 448,256 | 0.956655 | true | 3,814 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.930458 | 0.76908 | 0.715597 |
__label__eng_Latn
| 0.902517 | 0.500903 |
Euler Problem 41
================
We shall say that an n-digit number is pandigital if it makes use of all the digits 1 to n exactly once. For example, 2143 is a 4-digit pandigital and is also prime.
What is the largest n-digit pandigital prime that exists?
```python
from itertools import permutations
from sympy import isprime
for v in permutations(range(7, 0, -1)):
p = int(''.join(map(str, v)))
if isprime(p):
print(p)
break
```
7652413
**Explanation:** Every pandigital number $N$ with 8 or 9 digits is divisible by 9, since the sum of the digits of $N$ is $1 + 2 + 3 + \cdots + 8 = 36$ or $1 + 2 + 3 + \cdots + 9 = 45$, respectively. Therefore, pandigital primes have at most 7 digits.
We use `itertools.permutations` to iterate through all permutations of the digits 1-7 in reverse order until we find a permutation that forms a prime number.
|
13dcc6937807e3d3def082cc840d00e451f35e35
| 1,819 |
ipynb
|
Jupyter Notebook
|
Euler 041 - Pandigital prime.ipynb
|
Radcliffe/project-euler
|
5eb0c56e2bd523f3dc5329adb2fbbaf657e7fa38
|
[
"MIT"
] | 6 |
2016-05-11T18:55:35.000Z
|
2019-12-27T21:38:43.000Z
|
Euler 041 - Pandigital prime.ipynb
|
Radcliffe/project-euler
|
5eb0c56e2bd523f3dc5329adb2fbbaf657e7fa38
|
[
"MIT"
] | null | null | null |
Euler 041 - Pandigital prime.ipynb
|
Radcliffe/project-euler
|
5eb0c56e2bd523f3dc5329adb2fbbaf657e7fa38
|
[
"MIT"
] | null | null | null | 25.619718 | 261 | 0.550852 | true | 247 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.957912 | 0.859664 | 0.823482 |
__label__eng_Latn
| 0.996253 | 0.751559 |
# Solving ODEs with SciPy
Now that we've learnt the basics of ODE solving we can look at using libraries. These libraries allow us to easily use methods with adaptive step size, explicit or implicit methods, and have been checked to work by many developers and tens of thousands of users.
We will look at the Python library SciPy, and in particular the `solve_ivp` function for solving initial value problems. You can find the documentation for this function at https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_ivp.html.
```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp
# The below commands make the font and image size bigger
plt.rcParams.update({'font.size': 22})
plt.rcParams["figure.figsize"] = (15,10)
```
## A simple first-order example
Consider the ODE $y'(x) = -\frac{1}{2} y(x)$ with $y(0) = 10$. First write a function for the derivative:
```python
def exponential_decay(t, y):
return -0.5 * y
```
`solve_ivp` only needs three arguments: (1) the function, (2) the range of the independent variable, and (3) an array with the initial values:
```python
sol = solve_ivp(exponential_decay, [0, 10], [10])
```
Explicitly print out the solutions:
```python
print(sol.t)
print(sol.y)
```
[ 0. 0.11487213 1.26359346 3.06049939 4.85740531 6.65431124
8.45121717 10. ]
[[10. 9.44182253 5.31648754 2.16609348 0.88253023 0.35956879
0.14649891 0.06754689]]
Notice that the steps are not evenly spaced, as the default method for `solve_ivp` is an adaptive explicit Runge-Kutta method of order 5(4) (`RK45`). Printing the solution provides further information, such as `nfev`, the number of function evaluations:
```python
print(sol)
print(sol.nfev)
```
message: 'The solver successfully reached the end of the integration interval.'
nfev: 44
njev: 0
nlu: 0
sol: None
status: 0
success: True
t: array([ 0. , 0.11487213, 1.26359346, 3.06049939, 4.85740531,
6.65431124, 8.45121717, 10. ])
t_events: None
y: array([[10. , 9.44182253, 5.31648754, 2.16609348, 0.88253023,
0.35956879, 0.14649891, 0.06754689]])
44
Plot the steps and the analytic solution.
```python
t = np.linspace(0,10,100)
y = 10*np.exp(-0.5*t)
plt.grid(True)
plt.scatter(sol.t, sol.y[0], color='red', linewidth=5);
plt.plot(t, y);
```
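`solve_ivp` can also build a continuous interpolant of the solution when called with `dense_output=True`; the returned `sol.sol` can then be evaluated at arbitrary times. A minimal sketch with the same decay problem:
```python
# Request a continuous interpolant in addition to the accepted steps
sol_dense = solve_ivp(exponential_decay, [0, 10], [10], dense_output=True)

# Evaluate the interpolant on a fine grid and compare with the analytic solution
t_fine = np.linspace(0, 10, 200)
y_fine = sol_dense.sol(t_fine)[0]

plt.grid(True)
plt.plot(t_fine, y_fine, label='dense_output interpolant')
plt.plot(t_fine, 10*np.exp(-0.5*t_fine), '--', label='analytic')
plt.legend();
```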
## Second-order ODE example
Let's look at the second-order ODE: $y''(x) = -y(x)$ with $y(0) = 1, y'(0) = 0$. First we have to write this in first-order form:
$$\begin{align}
y_0'(x) &= y_1\\
y_1'(x) &= -y_0
\end{align}$$
Now we define a function for this:
```python
def oscilation(t, y):
return [y[1], -y[0]]
```
Now let's solve the ODE. Notice we have to pass the two initial conditions. The code will internally use the adaptive `RK45` method, but let's output the results on a fixed grid by passing the `t_eval` option as a list of values.
```python
solOsc = solve_ivp(oscilation, [0, 10], [1,0], t_eval = np.linspace(0,10,50))
```
Plot the steps against the analytic solution
```python
tOsc = np.linspace(0,10,100)
yOsc = np.cos(tOsc)
plt.grid(True)
plt.scatter(solOsc.t, solOsc.y[0], color='red', linewidth=5);
plt.plot(tOsc, yOsc);
```
## Stiff ODE example
```python
lam = 300
def dydxStiff(x,y):
global lam
return lam*(-y + np.sin(x))
def yStiff(x):
global lam
C = lam/(1+lam**2)
return C*np.exp(-lam*x) + (lam**2*np.sin(x) -lam*np.cos(x))/(1+lam**2)
```
The implicit methods often want/need the Jacobian matrix. This is an $n\times n$ matrix with elements $a_{ij} = \partial f_i/\partial y_j$.
```python
def jacobian(x, y):
global lam
return [[-lam]]
```
```python
solStiffRK4 = solve_ivp(dydxStiff, [0, 2], [0], method='RK45')
solStiffImplicit = solve_ivp(dydxStiff, [0, 2], [0], method='BDF', jac=jacobian)
```
```python
plt.grid(True)
plt.scatter(solStiffRK4.t, solStiffRK4.y[0]);
plt.scatter(solStiffImplicit.t, solStiffImplicit.y[0]);
plt.legend(['Adaptive RK45 (explicit) method', 'Adaptive BDF (implicit) method']);
```
We see that the adaptive error control forces the explicit RK45 method to take many tiny steps, whereas the implicit `BDF` method can take much larger steps. Try playing with $\lambda$ above. The larger you make it, the stiffer the ODE becomes and the more work the adaptive RK45 method has to do in order to maintain accuracy. The implicit method, though, needs roughly the same number of function evaluations regardless of the value of $\lambda$.
```python
print("Number of steps RK4 took: %d" % solStiffRK4.nfev)
print("Number of steps BDF took: %d" % solStiffImplicit.nfev)
```
Number of steps RK4 took: 1382
Number of steps BDF took: 86
```python
```
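Since the analytic solution `yStiff` is defined above, we can also compare the accuracy of the two solvers at their accepted points (a quick check reusing the solution objects from the cells above):
```python
# Maximum absolute error of each solver, evaluated at its own accepted points
err_rk45 = np.max(np.abs(solStiffRK4.y[0] - yStiff(solStiffRK4.t)))
err_bdf = np.max(np.abs(solStiffImplicit.y[0] - yStiff(solStiffImplicit.t)))
print("Max abs error, RK45: %.2e" % err_rk45)
print("Max abs error, BDF:  %.2e" % err_bdf)
```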
|
15b2c94b731e617f6e964a214b28fca0df760be5
| 143,981 |
ipynb
|
Jupyter Notebook
|
OrdinaryDifferentialEquations/ODEsWithSciPy.ipynb
|
CianCoyle/ACM20030-Examples
|
fb81abf24d066717900657c1de4f2c6f87806413
|
[
"MIT"
] | null | null | null |
OrdinaryDifferentialEquations/ODEsWithSciPy.ipynb
|
CianCoyle/ACM20030-Examples
|
fb81abf24d066717900657c1de4f2c6f87806413
|
[
"MIT"
] | null | null | null |
OrdinaryDifferentialEquations/ODEsWithSciPy.ipynb
|
CianCoyle/ACM20030-Examples
|
fb81abf24d066717900657c1de4f2c6f87806413
|
[
"MIT"
] | null | null | null | 383.949333 | 57,840 | 0.940062 | true | 1,503 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.887205 | 0.899121 | 0.797705 |
__label__eng_Latn
| 0.929518 | 0.691668 |
# The Discrete-Time Fourier Transform
*This Jupyter notebook is part of a [collection of notebooks](../index.ipynb) in the bachelors module Signals and Systems, Comunications Engineering, Universität Rostock. Please direct questions and suggestions to [Sascha.Spors@uni-rostock.de](mailto:Sascha.Spors@uni-rostock.de).*
## Properties
The discrete-time Fourier transform (DTFT) has a number of specific properties that are reviewed in the following.
### Invertibility
For many types of signals it is possible to recover the discrete signal $x[k]$ from its DTFT $X(e^{j \Omega}) = \mathcal{F}_* \{ x[k] \}$
\begin{equation}
x[k] = \mathcal{F}_*^{-1} \left\{ \mathcal{F}_* \{ x[k] \} \right\}
\end{equation}
A sufficient condition for the theorem to hold is that both the signal $x[k]$ and its DTFT are absolutely summable/integrable. For this class of signals, the above relation can be proven by applying the definition of the DTFT and its inverse and rearranging terms.
**Example**
The invertibility of the DTFT is illustrated at the example of the [complex exponential signal](../discrete_signals/standard_signals.ipynb#Complex-Exponential-Signal) $x[k] = e^{j \Omega_0 k}$ [whose DTFT is given as](definition.ipynb#Transformation-of-the-Exponential-Signal) $X(e^{j \Omega}) = {\bot \!\! \bot \!\! \bot} ( \frac{\Omega - \Omega_0}{2 \pi} )$. Note that neither the signal nor its spectrum is absolutely summable. However, the invertibility still holds, as is shown by evaluating the [integral of the inverse DTFT](definition.ipynb#Definition). Since the integration is only performed in the range $\Omega = -\pi$ to $\pi$, it is sufficient to consider a single Dirac impulse $2 \pi \cdot \delta(\Omega - \Omega_0)$ instead of the Dirac comb for the computation.
```python
import sympy as sym
%matplotlib inline
sym.init_printing()
k = sym.symbols('k', integer=True)
W, W0 = sym.symbols('Omega Omega0', real=True)
X = 2*sym.pi*sym.DiracDelta(W - W0)
x = 1/(2*sym.pi) * sym.integrate(X * sym.exp(sym.I*W*k), (W, -sym.pi, sym.pi))
x
```
This result includes the restriction of the normalized angular frequency to $-\pi < \Omega_0 < \pi$ due to the usage of a single Dirac impulse instead of the Dirac comb. The result is specialized to $\Omega_0 = \frac{1}{2}$ in order to show that the above result indeed constitutes a complex exponential signal.
```python
x.subs(W0, sym.S.Half)
```
### Linearity
The DTFT is a linear operation. For two signals $x_1[k]$ and $x_2[k]$ with transforms $X_1(e^{j \Omega}) = \mathcal{F}_* \{ x_1[k] \}$ and $X_2(e^{j \Omega}) = \mathcal{F}_* \{ x_2[k] \}$ the following holds
\begin{equation}
\mathcal{F}_* \{ A \cdot x_1[k] + B \cdot x_2[k] \} = A \cdot X_1(e^{j \Omega}) + B \cdot X_2(e^{j \Omega})
\end{equation}
with $A, B \in \mathbb{C}$. The DTFT of a weighted superposition of discrete signals is equal to the weighted superposition of the individual DTFTs. This property is useful to derive the DTFT of signals that can be expressed as a superposition of other signals for which the DTFT is known or can be calculated more easily. Linearity also holds for the inverse DTFT.
#### Transformation of the cosine and sine signal
The DTFT of $\cos(\Omega_0 k)$ and $\sin(\Omega_0 k)$ is derived by expressing both as harmonic exponential signals using [Euler's formula](https://en.wikipedia.org/wiki/Euler's_formula)
\begin{align}
\cos(\Omega_0 k) &= \frac{1}{2} \left( e^{-j \Omega_0 k} + e^{j \Omega_0 k} \right) \\
\sin(\Omega_0 k) &= \frac{j}{2} \left( e^{-j \Omega_0 k} - e^{j \Omega_0 k} \right)
\end{align}
together with the DTFT $\mathcal{F}_* \{ e^{j \Omega_0 k} \} = {\bot \!\! \bot \!\! \bot} ( \frac{\Omega - \Omega_0}{2 \pi} )$ of the complex exponential signal yields
\begin{align}
\mathcal{F}_* \{ \cos(\Omega_0 k) \} &= \frac{1}{2} \left[ {\bot \!\! \bot \!\! \bot} \left( \frac{\Omega + \Omega_0}{2 \pi} \right) + {\bot \!\! \bot \!\! \bot} \left( \frac{\Omega - \Omega_0}{2 \pi} \right) \right] \\
\mathcal{F}_* \{ \sin(\Omega_0 k) \} &= \frac{j}{2} \left[ {\bot \!\! \bot \!\! \bot} \left( \frac{\Omega + \Omega_0}{2 \pi} \right) - {\bot \!\! \bot \!\! \bot} \left( \frac{\Omega - \Omega_0}{2 \pi} \right) \right]
\end{align}
### Symmetries
In order to investigate the symmetries of the DTFT $X(e^{j \Omega}) = \mathcal{F}_* \{ x[k] \}$ of a signal $x[k]$, first the case of a real valued signal $x[k] \in \mathbb{R}$ is considered. The results are then generalized to complex signals $x[k] \in \mathbb{C}$.
#### Real valued signals
Decomposing a real valued signal $x[k] \in \mathbb{R}$ into its even and odd part $x[k] = x_\text{e}[k] + x_\text{o}[k]$ and introducing these into the definition of the DTFT yields
\begin{align}
X(e^{j \Omega}) &= \sum_{k = -\infty}^{\infty} \left( x_\text{e}[k] + x_\text{o}[k] \right) e^{-j \Omega k} \\
&= \sum_{k = -\infty}^{\infty} \left( x_\text{e}[k] + x_\text{o}[k] \right) \cdot \left( \cos(\Omega k) - j \sin(\Omega k) \right) \\
&= \underbrace{\sum_{k = -\infty}^{\infty} x_\text{e}[k] \cos(\Omega k)}_{X_\text{e}(e^{j \Omega})} +
j \underbrace{\sum_{k = -\infty}^{\infty} - x_\text{o}[k] \sin(\Omega k)}_{X_\text{o}(e^{j \Omega})}
\end{align}
For the last equality, the fact was exploited that the infinite sum of an odd function over symmetric limits is zero. In order to conclude on the symmetry of $X(e^{j \Omega})$, its behavior under a reversal of the sign of $\Omega$ has to be investigated. Due to the symmetry properties of $\cos(\Omega k)$ and $\sin(\Omega k)$, it follows that the DTFT of the
* even part $x_\text{e}[k]$ is real valued with even symmetry $X_\text{e}(e^{j \Omega}) = X_\text{e}(e^{-j \Omega})$
* odd part $x_\text{o}[k]$ is imaginary with odd symmetry $X_\text{o}(e^{j \Omega}) = - X_\text{o}(e^{-j \Omega})$
Combining this, it can be concluded that the DTFT $X(e^{j \Omega})$ of a real-valued signal $x[k] \in \mathbb{R}$ shows complex conjugate symmetry
\begin{equation}
X(e^{j \Omega}) = X^*(e^{- j \Omega})
\end{equation}
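This complex conjugate symmetry can also be checked numerically for an arbitrary real-valued signal of finite length by evaluating the DTFT sum on a frequency grid (a small sketch, not part of the original notebook)
```python
import numpy as np

x = np.random.randn(16)                  # arbitrary real-valued signal of finite length
k = np.arange(len(x))
Om = np.linspace(-np.pi, np.pi, 101)     # grid of normalized angular frequencies

# evaluate X(e^{j Omega}) = sum_k x[k] e^{-j Omega k} for +Omega and -Omega
X_pos = np.array([np.sum(x * np.exp(-1j * W * k)) for W in Om])
X_neg = np.array([np.sum(x * np.exp(+1j * W * k)) for W in Om])

print(np.allclose(X_pos, np.conj(X_neg)))   # True -> X(e^{j Omega}) = X*(e^{-j Omega})
```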
#### Complex Signals
By following the same procedure as above for an imaginary signal, the symmetries of the DTFT of the even and odd part of an imaginary signal can be derived. The results can be combined, by decomposing a complex signal $x[k] \in \mathbb{C}$ and its DTFT into even and odd parts of both the real and imaginary part. This results in the following symmetry relations
\begin{align}
\Re \{ x_\text{e}[k] \} \quad &\circ \!\! - \!\! \bullet \quad \Re \{ X_\text{e}(e^{j \Omega}) \} \\
\Re \{ x_\text{o}[k] \} \quad &\circ \!\! - \!\! \bullet \quad j \cdot \Im \{ X_\text{o}(e^{j \Omega}) \} \\
j \cdot \Im \{ x_\text{e}[k] \} \quad &\circ \!\! - \!\! \bullet \quad j \cdot \Im \{ X_\text{e}(e^{j \Omega}) \} \\
j \cdot \Im \{ x_\text{o}[k] \} \quad &\circ \!\! - \!\! \bullet \quad \Re \{ X_\text{o}(e^{j \Omega}) \}
\end{align}
The transformation symbols $\circ \!\! - \!\! \bullet$ illustrate which part of the signal $x[k]$ is related to which part of its spectrum $X(e^{j \Omega})$. For instance, the odd part of the real part $\Re \{ x_\text{o} [k] \}$ results in an imaginary spectrum with odd symmetry $\Im \{ X_\text{o} (e^{j \Omega}) \}$.
**Copyright**
This notebook is provided as [Open Educational Resource](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Sascha Spors, Continuous- and Discrete-Time Signals and Systems - Theory and Computational Examples*.
### Scale factor of the Universe
```python
import numpy as np
from pylab import *
from scipy.integrate import odeint
```
In this notebook, we solve for the scale factor of the Universe based on the Standard Model of Cosmology, often called $\Lambda$CDM model. We take numerical values from the following:
[1] Lyth, D. H., & Liddle, A. R. (2009). The primordial density perturbation: Cosmology, inflation and the origin of structure. Cambridge University Press.
The Friedmann equation is given by
\begin{equation}
H(a)^2 = H_0^2 \left( \Omega_{r0} a^{-4} + \Omega_{m0} a^{-3} + \Omega_{\Lambda 0} \right)
\end{equation}
where $H$ is the Hubble parameter, and $\Omega_{r0}$, $\Omega_{m0}$, and $\Omega_{\Lambda 0}$ are the radiation, matter, and the vacuum (cosmological constant) energy densities, respectively, today. We refer to the following values tabulated in appendix B of Ref. [1]:
\begin{eqnarray}
\Omega_{r0} &=& 8.47 \times 10^{-5} \\
\Omega_{m0} &=& 0.276 \\
\Omega_{\Lambda 0} &=& 1 - \Omega_{r0} - \Omega_{m0} \\
H_0 &=& 70 \ \text{km} / \text{s} / \text{Mpc} .
\end{eqnarray}
Noting that the Hubble parameter $H$ is related to the scale factor $a$ as
\begin{equation}
H = \frac{\dot{a}}{a}
\end{equation}
where an overdot denotes derivative with respect to the comoving time $t$, then the Friedmann equation can be written down as
\begin{equation}
\dot{a} = a H_0 \sqrt{\left( \Omega_{r0} a^{-4} + \Omega_{m0} a^{-3} + \Omega_{\Lambda 0} \right)} .
\end{equation}
This is the expression that we input into $odeint$. In the following code, we input this differential equation.
```python
# here we setup the constants and the ode
omega_r = 8.47e-5
omega_m = 0.276
omega_vac = 1. - omega_r - omega_m
H_0 = 1. # rescaled to unity for efficient numerics
a_0 = 1. # initial condition on the scale factor today
def f(y, t):
return y*H_0*np.sqrt( omega_r*(y**(-4.)) + omega_m*(y**(-3.)) + omega_vac )
time_points = np.linspace(1., 0.01, 100)
```
Note that by setting $H_0$ to unity, we work in units where time is measured in $H_0^{-1} \sim 14$ billion years. Also, we are integrating backwards in time, starting from the present.
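As a quick sanity check on this unit of time (an added sketch; the megaparsec-to-kilometer value is an assumption of this snippet, not taken from Ref. [1]), $H_0^{-1}$ can be converted into years:
```python
H0_km_s_Mpc = 70.                       # Hubble constant, km/s/Mpc
Mpc_in_km = 3.086e19                    # one megaparsec in kilometers (assumed value)
seconds_per_year = 365.25*24*3600.

H0_per_s = H0_km_s_Mpc/Mpc_in_km        # H0 in 1/s
hubble_time_years = 1./(H0_per_s*seconds_per_year)
print('1/H0 = %.2e years' % hubble_time_years)   # roughly 1.4e10 years, i.e. ~14 billion years
```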
With this said, we obtain the scale factor $a(t)$ of the Universe as follows.
```python
rc('xtick', labelsize = 20) # for the tick marks
rc('ytick', labelsize = 20)
a_lcdm = odeint(f, a_0, time_points) # odeint does its job
plot(time_points, a_lcdm, 'r-', linewidth = 3.0)
ylim(0.01, 1) # aesthetics
xlim(0.01, 1)
xlabel(r'time (14 byr)', fontsize = 20)
ylabel('scale factor', fontsize = 20)
show()
```
So, yeah. This is the scale factor $a(t)$ of the Universe. From this, one could think of the size of the universe as $V(t) \sim a(t)^3$.
The expansion history can be divided into three eras, (1) radiation, (2) matter, and (3) dark energy, depending on the Universe's energy content. The first era, which comes right after the Big Bang and primordial inflation, is radiation domination, where $a(t) \sim t^{1/2}$. Then comes the matter era, as radiation cools down much faster than matter, during which $a(t) \sim t^{2/3}$. Finally, and today, after both radiation and matter domination, comes the dark energy era, where the Universe is dominated by an invisible, negative-pressure fluid that sources the observed cosmic acceleration.
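As a rough cross-check of these eras (a small sketch added here, reusing the density parameters above), the scale factors at which the dominant energy components cross over follow directly from the Friedmann equation: radiation-matter equality at $a = \Omega_{r0}/\Omega_{m0}$ and matter-dark-energy equality at $a = (\Omega_{m0}/\Omega_{\Lambda 0})^{1/3}$.
```python
omega_r = 8.47e-5
omega_m = 0.276
omega_vac = 1. - omega_r - omega_m

a_rm = omega_r/omega_m                 # radiation-matter equality: omega_r a^-4 = omega_m a^-3
a_mv = (omega_m/omega_vac)**(1./3.)    # matter-dark energy equality: omega_m a^-3 = omega_vac

print('radiation-matter equality at a = %.1e' % a_rm)
print('matter-dark energy equality at a = %.2f' % a_mv)
```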
# Characterization of Systems in the Time Domain
*This Jupyter notebook is part of a [collection of notebooks](../index.ipynb) in the bachelors module Signals and Systems, Communications Engineering, Universität Rostock. Please direct questions and suggestions to [Sascha.Spors@uni-rostock.de](mailto:Sascha.Spors@uni-rostock.de).*
## Eigenfunctions
An [eigenfunction](https://en.wikipedia.org/wiki/Eigenfunction) of a system is defined as the input signal $x(t)$ which produces the output signal $y(t) = \mathcal{H}\{ x(t) \} = \lambda \cdot x(t)$ with $\lambda \in \mathbb{C}$. The weight $\lambda$ associated with $x(t)$ is known as scalar eigenvalue of the system. Hence besides a weighting factor, an eigenfunction is not modified by passing through the system.
[Complex exponential signals](../continuous_signals/standard_signals.ipynb#Complex-Exponential-Signal) $e^{s t}$ with $s \in \mathbb{C}$ are eigenfunctions of linear time-invariant (LTI) systems. This can be proven by applying the properties of LTI systems. Lets assume a generic LTI system with input signal $x(t) = e^{s t}$ and output signal $y(t) = \mathcal{H}\{ x(t) \}$. The response of the LTI system to the shifted input signal $x(t-\tau) = e^{s (t-\tau)}$ reads
\begin{equation}
y(t - \tau) = \mathcal{H}\{ x(t-\tau) \} = \mathcal{H}\{ e^{-s \tau} \cdot e^{s t} \}
\end{equation}
due to the assumed linearity of the system this can be reformulated as
\begin{equation}
y(t - \tau) = e^{-s \tau} \cdot \mathcal{H}\{ e^{s t} \} = e^{-s \tau} \cdot y(t)
\end{equation}
It is straightforward to show that $y(t) = \lambda \cdot e^{s t}$ fulfills the above equation, which confirms that the complex exponential signal is an eigenfunction of the LTI system.
**Example**
An LTI system whose input/output relation is given by the following inhomogeneous linear ordinary differential equation (ODE) with constant coefficients is considered
\begin{equation}
a_0 y(t) + a_1 \frac{d y(t)}{dt} + a_2 \frac{d^2 y(t)}{dt^2} = x(t)
\end{equation}
with $a_i \in \mathbb{R} \quad \forall i$. In the remainder, the output signal $y(t)$ of the system is computed by explicit solution of the ODE for $x(t) = e^{s t}$ as input signal. Integration constants are discarded for ease of illustration.
```python
%matplotlib inline
import sympy as sym
sym.init_printing()
t, s, a0, a1, a2 = sym.symbols('t s a:3')
x = sym.exp(s * t)
y = sym.Function('y')(t)
ode = sym.Eq(a0*y + a1*y.diff(t) + a2*y.diff(t,2), x)
solution = sym.dsolve(ode)
solution.subs({'C1': 0, 'C2' : 0})
```
**Exercises**
* Is the complex exponential signal an eigenfunction of the system?
* Introduce $x(t) = e^{s t}$ and $y(t) = \lambda \cdot e^{s t}$ into the ODE and solve manually for the eigenvalue $\lambda$. How is the result related to above result derived by solving the ODE?
* Can you generalize your findings to an ODE of arbitrary order?
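A possible symbolic starting point for the second exercise is sketched below (a sketch, not the full worked solution): substitute $x(t) = e^{s t}$ and $y(t) = \lambda \cdot e^{s t}$ into the ODE and solve for $\lambda$.
```python
import sympy as sym

t, s, lam, a0, a1, a2 = sym.symbols('t s lambda a:3')
x = sym.exp(s*t)              # complex exponential input signal
y_eig = lam*sym.exp(s*t)      # ansatz for the output signal

# substitute the ansatz into the ODE and solve for the eigenvalue lambda
sym.solve(sym.Eq(a0*y_eig + a1*y_eig.diff(t) + a2*y_eig.diff(t, 2), x), lam)
```
The resulting eigenvalue $\lambda = \frac{1}{a_0 + a_1 s + a_2 s^2}$ reproduces the factor multiplying $e^{s t}$ in the solution of the ODE above.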
**Example**
The following inhomogeneous linear ODE with time-dependent coefficient is considered as an example for a time-variant linear system
\begin{equation}
t \cdot \frac{d y(t)}{dt} = x(t)
\end{equation}
The output signal $y(t)$ of the system for a complex exponential signal at the input $x(t) = e^{st}$ is computed by explicit solution of the ODE. Again integration constants are discarded.
```python
ode = sym.Eq(t*y.diff(t), x)
solution = sym.dsolve(ode)
solution.subs('C1', 0)
```
Note, $\text{Ei}(\cdot)$ denotes the [exponential integral](http://docs.sympy.org/latest/modules/functions/special.html#sympy.functions.special.error_functions.Ei). The response $y(t)$ of the time-variant system is not equal to a weighted complex exponential signal $\lambda \cdot e^{s t}$. It can be concluded that complex exponentials are no eigenfunctions of this time-variant system.
**Example**
A final example considers the following non-linear inhomogeneous ODE with constant coefficients
\begin{equation}
\left( \frac{d y(t)}{dt} \right)^2 = x(t)
\end{equation}
as example for a non-linear time-invariant system. Again, the output signal $y(t)$ of the system for a complex exponential signal at the input $x(t) = e^{st}$ is computed by explicit solution of the ODE. As before, integration constants are discarded.
```python
ode = sym.Eq(y.diff(t)**2, x)
solution = sym.dsolve(ode)
solution.subs('C1', 0)
```
Obviously for this non-linear system complex exponential signals are no eigenfunctions.
## Transfer Function
The complex eigenvalue $\lambda$ characterizes the properties of the transfer of a complex exponential signal $e^{st}$ with frequency $s$ through an LTI system. It is commonly termed as [*transfer function*](https://en.wikipedia.org/wiki/Transfer_function) and denoted by $H(s)=\lambda(s)$. Using this definition, the output signal $y(t)$ of an LTI system with complex exponential signal at the input reads
\begin{equation}
y(t) = \mathcal{H} \{ e^{st} \} = H(s) \cdot e^{st}
\end{equation}
Note that the concept of the transfer function is directly linked to the linearity and time-invariance of a system. Only in this case, complex exponential signals are eigenfunctions of the system and $H(s)$ describes the properties of an LTI system with respect to these.
Above equation can be rewritten in terms of the magnitude $| H(s) |$ and phase $\varphi(s) = \arg \{ H(s) \}$ of the complex transfer function $H(s)$
\begin{equation}
y(t) = | H(s) | \cdot e^{s t + j \varphi(s)}
\end{equation}
The magnitude $| H(s) |$ provides the frequency dependent attenuation of the eigenfunction $e^{st}$ by the system, while $\varphi(s)$ provides the introduced phase-shift.
## Link between Transfer Function and Impulse Response
In order to establish the link between the transfer function $H(s)$ and the impulse response $h(t)$ the output signal $y(t) = \mathcal{H} \{ x(t) \}$ of an LTI system with input signal $x(t)$ is considered. It is given by convolving the input signal with the impulse response
\begin{equation}
y(t) = x(t) * h(t) = \int_{-\infty}^{\infty} x(t-\tau) \cdot h(\tau) \; d\tau
\end{equation}
For a complex exponential signal as input $x(t) = e^{st}$ the output of an LTI system is given as $y(t) = \mathcal{H} \{ e^{st} \} = H(s) \cdot e^{st}$. Introducing both signals into above convolution yields
\begin{equation}
H(s) \cdot e^{st} = \int_{-\infty}^{\infty} e^{st} e^{-s \tau} \cdot h(\tau) \; d\tau
\end{equation}
which after rearranging terms results in
\begin{equation}
H(s) = \int_{-\infty}^{\infty} h(\tau) \cdot e^{-s \tau} \; d\tau
\end{equation}
under the assumption that the integrals converge.
The transfer function $H(s)$ can be computed from the impulse response $h(t)$ by integrating over the impulse response multiplied with the complex exponential function $e^{- s t}$. This constitutes an integral transformation, which is later introduced in more detail as [Laplace transform](https://en.wikipedia.org/wiki/Laplace_transform).
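As a small illustration of this relation (an added example, not part of the original notebook), the transfer function belonging to the causal impulse response $h(t) = e^{-t}$ for $t \geq 0$ can be computed with SymPy's Laplace transform, which evaluates exactly the integral above for causal impulse responses
```python
import sympy as sym

t, s = sym.symbols('t s')
h = sym.exp(-t)                                   # impulse response for t >= 0 (causal system)
H = sym.laplace_transform(h, t, s, noconds=True)
H
```
which yields $H(s) = \frac{1}{s+1}$.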
**Copyright**
The notebooks are provided as [Open Educational Resource](https://de.wikipedia.org/wiki/Open_Educational_Resources). Feel free to use the notebooks for your own educational purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Lecture Notes on Signals and Systems* by Sascha Spors.
- - - -
# Mechpy Tutorials
a mechanical engineering toolbox
source code - https://github.com/nagordon/mechpy
documentation - https://nagordon.github.io/mechpy/web/
- - - -
Neal Gordon
2017-02-20
- - - -
## Mechanical Design Notes and code
## Python Initialization with module imports
```python
# setup
import numpy as np
import sympy as sp
import scipy
from pprint import pprint
sp.init_printing(use_latex='mathjax')
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (12, 8) # (width, height)
plt.rcParams['font.size'] = 14
plt.rcParams['legend.fontsize'] = 16
from matplotlib import patches
#get_ipython().magic('matplotlib') # seperate window
get_ipython().magic('matplotlib inline') # inline plotting
```
```python
pwd
```
'/home/neal/Desktop/mechpy'
```python
import mechpy
```
```python
import os ; os.chdir('..') # change to root from the examples folder
```
```python
from mechpy.design import fastened_joint
```
# Materials
[index](#Mechpy)
## Stress and Strain
Stress is a tensor that can be broken into
$$
\overline{\sigma}=\begin{bmatrix}
\sigma_{xx} & \sigma_{xy} & \sigma_{xz}\\
\sigma_{yx} & \sigma_{yy} & \sigma_{yz}\\
\sigma_{zx} & \sigma_{zy} & \sigma_{zz}
\end{bmatrix}
$$
## Factors of safety
In aerospace, typically 1.2 for civilian aircraft and 1.15 for military
$$FS=\frac{\sigma_{yield}}{\sigma}-1$$
## Fastener Notes and Formulas
Finding the centroid of a bolt with coordinates, $\overline{x},\overline{y}$
$$ \overline{x}=\frac{\sum_{i=1}^{n_b}{A_i x_i} }{\sum_{i=1}^{n_b}{A_i} } \ \ \overline{y}=\frac{\sum_{i=1}^{n_b}{A_i y_i} }{\sum_{i=1}^{n_b}{A_i}}$$
Joint/Polar Moment of Inertia, $r=$ distance from centroid to fastener
$$J= \int{r^2dA}= \sum_{i=1}^{n_b}{A_i r_i^2}$$
Bearing Stress on a bolt
$$\sigma^i_{bearing}=\frac{V_{max}}{Dt}$$
Shear Stress on each bolt i due to shear force
$$\tau_f^i = \frac{P}{\sum_{i=1}^{n_b}{A_i} }$$
where $A_i$ is the area of the $i$th bolt, $n_b$ is the number of bolts, and $P$ is the shear force
Shear Stress on each bolt i due to moment
$$\tau_t^i = \frac{T r_i}{J} $$
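The formulas above can also be evaluated directly with NumPy, independently of the `mechpy` helper used in the example further below. The sketch below assumes fasteners of equal diameter `d` and uses made-up values for the diameter and the torque, purely for illustration:
```python
import numpy as np

def bolt_group_shear(x, y, d, P, T):
    """Centroid, polar moment and shear stresses of a bolt group with equal
    fastener diameter d, direct shear force P=(Px,Py) and torque T about the centroid."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    A = np.pi/4*d**2*np.ones_like(x)            # fastener areas
    xbar, ybar = np.sum(A*x)/np.sum(A), np.sum(A*y)/np.sum(A)
    r = np.sqrt((x - xbar)**2 + (y - ybar)**2)
    J = np.sum(A*r**2)                          # joint/polar moment of inertia
    tau_force = np.hypot(*P)/np.sum(A)          # shear stress due to the shear force (same for all bolts)
    tau_torque = T*r/J                          # shear stress due to the moment (grows with r)
    return xbar, ybar, J, tau_force, tau_torque

bolt_group_shear([0,1,2,3,0,1,2,3], [0,0,0,0,1,1,1,1], d=0.25, P=(-300,-500), T=100)
```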
### Modes of failure of fastened Joints
1. Tensile Plate Failure across the net section between rivets/bolts
2. Failure of rivets through shear
3. Compression failure between rivet and plate
4. Edge shear-out at rivet hole
5. Edge tearing at rivet hole
#### 1.
$$\sigma_t =\frac{F_s}{(b-nd)t}$$
#### 2.
#### 3.
#### 4.
#### 5.
## Adhesive Joints
With members, or adherends, joined with adhesives, either the member will fail due to tensile loads or the adhesive will fail in shear.
Taking the average stress over the bonded surface,
$$\tau_{avg}=\frac{P}{bL},$$
is a simple estimate, but it is not an accurate way to model the maximum stress. A good rule of thumb based on the calculations below is
$$\tau_{max}=2.08\tau_{avg}$$
The maximum shearing stress of an adhesive layer, $\tau_{max}$, can be computed as
$$\tau_{max}=K_s\tau_{avg}=K_s\left(\frac{P}{bL_L}\right)$$
with $P$ as the applied load, $b$ as the width of the adhesive layer, and $L_L$ as the length of the adhesive layer. The stress distribution factor, $K_s$, can be defined as $K_s=\frac{cL}{\tanh(cL/2)}$ where $c=\sqrt{\frac{2G_a}{Et_mt_a}}$, with the shear modulus $G_a=\frac{\tau}{\gamma}$ and $E$ the modulus of elasticity.
The max shearing stress, $\tau_{max}$ in a scarf joint can be found with
$$\tau_{max}=K_s\tau_{avg}=K_s\left[ \frac{Pcos\theta}{\left(\frac{bt}{sin\theta} \right) } \right] = K_s\left( \frac{P}{bt} sin\theta cos\theta \right)$$
where $t$ is the thickness of the adherend members and $\theta=tan^{-1}\frac{t}{L_s}$ is the scarf angle
*Mechanical Design of Machine Elements and Machines by Collins, Jack A., Busby, Henry R., Staab, George H. (2009)*
```python
## Bolted Joint Example
# fastener Location
fx = [0,1,2,3,0,1,2,3]
fy = [0,0,0,0,1,1,1,1]
# Force magnitude(x,y)
P = [-300,-500]
# Force location
l = [2,1]
df = fastened_joint(fx, fy, P, l)
df.plot(kind='scatter', x='x', y='y');
#df.plot(style='o', x='x', y='y')
plt.plot(df.xbar[0],df.ybar[0],'*')
df
#ax = plt.gca()
#ax.arrow(l[0], l[1], Pnorm[0],Pnorm[1], head_width=0.05, head_length=0.1, fc='k', ec='k')
#x.arrow(xbar, ybar, Pnorm[0],0, head_width=0.05, head_length=0.1, fc='k', ec='k')
#ax.arrow(xbar, ybar, 0,Pnorm[1], head_width=0.05, head_length=0.1, fc='k', ec='k')
```
# Design
## Factors of Safety
DLL, Design Limit Load = max force or moment expected during a mission with a given statistical probability
Al, Allowable = allowed minimum applied load or strength of a structure at a given statistical probability
FS, factor of safety [1, $\infty$] = a factor applied to a DLL to decrease the chance of failure, typically around 1-3
KD, knockdown (0,1] = a percentage reduction of Allowable load to reduce the chance of failure
A KD=0.8 would be applied to the allowable to reduce it by 20%, $Al_{new}=Al_{old}*KD$
MS, margin of safety = a measure of reserve strength, i.e. how much the applied load can increase before the safety of the vehicle is compromised. $MS\geq0$ for a good design, $MS=\frac{Allowable}{DLL*FS}-1$
For example with a $FS=1.15$, $DLL=80$, $Al=100$, we have a margin of $MS=\frac{100}{80*1.15}-1=\frac{100}{92}-1=0.087$ which is passing our design checks based on the expected max load of 80
Let's assume a knockdown of 27%, so $K=1-0.27=0.73$
$$
FS = \frac{1}{K}
$$
We can also say we have a $FS = \frac{1}{0.73}=1.3699$
$$
\sigma_{design}=\frac{\sigma_{ult}}{FS} = \sigma_{ult}*K
$$
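The margin-of-safety and knockdown examples above can be reproduced with a few lines (the ultimate stress value is an arbitrary placeholder, used only to show the relation):
```python
DLL, Al, FS = 80., 100., 1.15
MS = Al/(DLL*FS) - 1
print('MS = %.3f' % MS)            # ~0.087, MS >= 0 so the design check passes

K = 1 - 0.27                       # 27% knockdown
FS_equiv = 1/K                     # equivalent factor of safety ~1.37
sigma_ult = 100.                   # placeholder ultimate stress
sigma_design = sigma_ult*K         # same as sigma_ult/FS_equiv
print('FS = %.4f, sigma_design = %.1f' % (FS_equiv, sigma_design))
```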
[<-Back to the algorithms_in_ipython_notebooks repository](https://github.com/rasbt/algorithms_in_ipython_notebooks).
```
%load_ext watermark
```
```
watermark -d -v -a "Sebastian Raschka"
```
Sebastian Raschka 09/09/2014
CPython 3.4.1
IPython 2.2.0
<font size="1.5em">[More information](https://github.com/rasbt/watermark) about the `watermark` magic command extension.</font>
<hr>
I would be happy to hear your comments and suggestions.
Please feel free to drop me a note via
[twitter](https://twitter.com/rasbt), [email](mailto:bluewoodtree@gmail.com), or [google+](https://plus.google.com/+SebastianRaschka).
<hr>
# Rejection Sampling
While I was listening to the recent episode of the [Programming Throwdown](http://www.programmingthrowdown.com/2014/09/episode-36-swift.html) podcast on my way home, I heard about this interesting concept of rejection sampling. This is one of those simple ideas combined with the powerful principles of statistics that I find quite fascinating.
At its core, rejection sampling is similar to the popular Monte Carlo sampling with the difference of an additional bound.
The goal of rejection sampling is to simplify the task of drawing random samples from a complex probability distribution by using a uniform distribution instead; random samples drawn from the uniform distribution that lie outside certain boundary criteria are rejected, and all samples within the boundary are accepted.
A mathematical proof can be found [here](http://www.programmingthrowdown.com/2014/09/episode-36-swift.html).
Let's use a simple example to illustrate this concept: Our task is to draw random samples from a geometrically-bounded distribution in the form of a circle in a Cartesian coordinate system with a radius of 2, centered at the coordinates x=4 and y=4.
```
%matplotlib inline
import matplotlib.pyplot as plt
```
```
center = (4,4)
radius = 2
def plot_circle(center, radius):
""" Function to plot a circle. """
fig = plt.figure(figsize=(6,6))
circle = plt.Circle(center, radius, fill=False, color='b')
plt.ylim([0,8])
plt.xlim([0,8])
fgca = fig.gca()
fgca.add_artist(circle)
return fgca
plot_circle(center, radius)
plt.show()
```
<br>
Now, we can draw a simple square around the circle that will represent our uniform distribution.
<br>
```
from matplotlib.patches import Rectangle
def plot_square(center, radius):
""" Function to plot a square. """
fgca = plot_circle(center, radius)
fgca.add_patch(Rectangle((center[0] - radius, center[1] - radius),
2*radius, 2*radius, alpha=0.1))
return fgca
plot_square(center, radius)
plt.show()
```
<br>
Next, we will define a function that generates pseudo-random X and Y coordinates that fall inside the square.
<br>
```
import random
random.seed(567)
```
```
def gen_points(n, center, radius):
    """ 
    Function that generates
    n x,y coordinates uniformly distributed in a square of
    side length radius*2 centered at `center`.
    """
    x_coords = []
    y_coords = []
    for i in range(n):
        # uniform in [center - radius, center + radius)
        x_coords.append(random.random()*2*radius + center[0] - radius)
        y_coords.append(random.random()*2*radius + center[1] - radius)
    return x_coords, y_coords
```
```
x, y = gen_points(1, center, radius)
fgca = plot_square(center, radius)
fgca.plot(x, y, linestyle="", marker="x", color="red")
plt.show()
```
<br>
Let us generate 1000 random points and check if our function works correctly.
<br>
```
x, y = gen_points(1000, center, radius)
fgca = plot_square(center, radius)
fgca.plot(x, y, linestyle="", marker="x", color="red")
plt.show()
```
<br>
The plot above looks fine. In the last step, we only need to reject those points that lie outside the circle, which is pretty straightforward using the Euclidean distance from the circle's center $(x_c, y_c)$; a point is kept if
\begin{equation} \sqrt{(x-x_c)^2 + (y-y_c)^2} \le \text{radius}. \end{equation}
<br>
```
def reject(radius, center, x_coords, y_coords):
""" Returns those coordinates that fall within the circle. """
x_clean = []
y_clean = []
for x,y in zip(x_coords, y_coords):
if ((x - center[0])**2 + (y-center[1])**2)**0.5 <= radius:
x_clean.append(x)
y_clean.append(y)
return x_clean, y_clean
```
<br>
Again, let us do a quick visual check that we correctly removed all points that didn't satisfy the condition $\sqrt{(x-x_c)^2 + (y-y_c)^2} \le \text{radius}$.
<br>
```
x_clean, y_clean = reject(radius, center, x, y)
fgca = plot_square(center, radius)
fgca.plot(x_clean, y_clean, linestyle="", marker="x", color="red")
plt.show()
```
<br>
Okay, it seems that we successfully wrote some code that can generate pseudo-random samples that fall inside a geometric circle. If this isn't exciting enough, let us use this concept to estimate the area of this circle pretending that we don't know π (pi).
\begin{equation} A_{\text{rectangle}} = (2 \times R)^2 \end{equation}
<br>
\begin{equation} \hat{A}_{\text{est_circle}} = A_{\text{rectangle}} \times \frac{\text{# points inside circle}}{\text{# all points}} \end{equation}
```
def estimate_circle_area(n, center, radius):
""" Returns the estimated circle area via rejection sampling. """
rect_area = (2*radius)**2
x, y = gen_points(n, center, radius)
x_clean, y_clean = reject(radius, center, x, y)
est_circle_area = rect_area * (len(x_clean)/len(x))
return est_circle_area
```
```
print('Estimated circle area: %s' %estimate_circle_area(100000, center, radius))
```
Estimated circle area: 12.56224
<br>
Now, let's double-check how close we got using the more accurate equation:
\begin{equation} A_{\text{circle}} = \pi \times R^2 \end{equation}
```
from math import pi
print('Circle area using pi: %s' %(pi*radius**2))
```
Circle area using pi: 12.566370614359172
<br>
Who would have guessed what comes next: Let us use our rejection sampling method to estimate pi itself:
\begin{equation} \hat{\pi} = \frac{\hat{A}_{\text{est_circle}}}{R^2} \end{equation}
<br>
```
def approximate_pi(n, center, radius):
""" Returns an approximation of pi via rejection sampling. """
circ_area = estimate_circle_area(n, center, radius)
return circ_area/radius**2
for i in (10, 10**2, 10**4, 10**7):
pi_est = approximate_pi(i, center, radius)
print('Pi estimate: %s (n=%s)' %(pi_est, i))
```
Pi estimate: 1.6 (n=10)
Pi estimate: 3.28 (n=100)
Pi estimate: 3.1496 (n=10000)
Pi estimate: 3.1412652 (n=10000000)
Again, this estimate is pretty accurate up to the 4th digit after the decimal point.
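As an aside (a compact variant, not part of the original walkthrough), the whole procedure can also be written in a vectorized form with NumPy, which is convenient for large sample sizes:
```
import numpy as np

def approximate_pi_vectorized(n, center=(4, 4), radius=2, seed=567):
    rng = np.random.RandomState(seed)
    # uniform points inside the bounding square
    pts = rng.uniform(low=np.array(center) - radius,
                      high=np.array(center) + radius, size=(n, 2))
    # keep the fraction of points that fall inside the circle
    inside = np.sum((pts - np.array(center))**2, axis=1) <= radius**2
    est_circle_area = (2*radius)**2 * inside.mean()
    return est_circle_area / radius**2

print(approximate_pi_vectorized(10**6))
```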
```
try: #if the package is installed or placed in same directory
import scamp as sc #import master-project package
except ImportError: #check upper directory for scamp package.
try:
import sys
sys.path.append("..")
import scamp as sc
except: #could not find scamp package
sys.exit("scamp is not installed or placed in the correct directory")
import numpy as np
```
# Basic Introduction
## Create a world!
#### Define dimensions for grid and time.
```
dim = sc.dim() #Define dimensions for grid and time.
```
#### Define world with desired dimensions
```
world = sc.world(dim) #Creates a grid including Ocean, Ice and Atmosphere
```
#### Print the world
```
world.print_all()
```
T
[[[-1. -1. 4. 1.]]]
S
[[[ 32. 32. 32. 35.]]]
V
[[ 1.]]
## Make a simulation
```
world = sc.ode_solver.forward_euler(world) #forward euler with timesteps from dim.
```
## Plot the results
```
world.plot_all()
```
# Introduction to Dim()
### Create a world with a 4x3 grid
```
dim = sc.dim(n_x = 4, n_y = 3)
```
```
world = sc.world(dim)
```
```
world.print_layer('ML')
world.print_layer('ice')
```
T
[[-1. -1. -1.]
[-1. -1. -1.]
[-1. -1. -1.]
[-1. -1. -1.]]
S
[[ 32. 32. 32.]
[ 32. 32. 32.]
[ 32. 32. 32.]
[ 32. 32. 32.]]
V
[[ 1. 1. 1.]
[ 1. 1. 1.]
[ 1. 1. 1.]
[ 1. 1. 1.]]
### Change the time scale to 10-years with a stepsize of 1/100 year
```
dim = sc.dim(years=10, dt = 1/100.)
```
*Both the timescale `years` and the stepsize `dt` are defined in years.*
# Introduction to Ocean(), ice() and atmos()
## Define Ocean-, Ice- and Atmosphere-module separately with custom parameters
```
dim = sc.dim()
ocean = sc.ocean(dim, layers = 2, T_start = [1, 20], S_start = [33, 34]) #
atmos = sc.atmos(dim, Q_lw_start = 310)
ice = sc.ice(dim, I_export = 0)
world = sc.world(dim, ocean_start = ocean, atmos_start = atmos, ice_start = ice)
```
## Access variables and parameters
#### Ocean
```
world.ocean.T #temperature
```
array([[[ 1., 20.]]])
#### Ice-layer
```
world.ice.x_ice #fraction of gridpoint covered in ice
```
array([[ 0.5]])
#### Atmosphere
```
world.atmos.Q_lw #long-wave raditation
```
array([[ 310.]])
## Add variables and parameters
```
dim = sc.dim()
world = sc.world(dim)
```
#### Add parameter independent of grid
```
world.ocean.fish_heat = 4 #heat capacity of a fish
```
#### Add grid dependent variable
```
world.ocean.fish = world.ocean.new_value(start = 0) # new grid-dependent variable, initialized to 0
```
#### Define differential equation for a variable
\begin{equation}
\frac{d \text{fish}}{dt} = C_{fish} X, \quad X \in U(-1,1)
\end{equation}
```
def fish_ode(world, ode):
    ode[world.ocean, 'fish'] = world.ocean.fish_heat * np.random.uniform(-1, 1)  # X ~ U(-1, 1) as in the equation above
return ode
world.ocean.ode_func_extra = fish_ode
```
#### Run ode-solver
```
world = sc.ode_solver.forward_euler(world)
```
# Matrix Formalism of the Newton-Euler equations
Renato Naville Watanabe
This notebook shows examples of how to use a matrix formalism to perform inverse dynamics analysis. It does not constitute a comprehensive treatise on the subject; it is rather an introduction based on examples. Nevertheless, after working through it the reader will have sufficient knowledge to read recent texts on biomechanics and other multibody dynamic analyses.
## Inverse dynamics
For the inverse dynamics analysis, we will obtain the joint torques and forces, from the joint kinematics and external forces.
*Figure adapted from Erdemir et al. (2007).*
As an example, we will consider the problem of estimating the forces and torques in the ankle and knee joints during the gait, considering a 3D movement. At this point, we consider that the accelerations, angular velocities, angular accelerations, masses, moments of inertia and rotation matrices necessary to compute the forces and moments are known.
The free-body diagram of the gait, considering a 3D movement is very similar [to the 2D case](GaitAnalysis2D.ipynb). The equations of forces and moments are described by the Newton-Euler equations (for a revision on Tridimensional Newton-Euler equations click [here](Tridimensional%20rigid%20body%20Kinetics.ipynb)):
\begin{align}
\overrightarrow{F_A} + \overrightarrow{GRF} + m_F\overrightarrow{g} &= m_F\overrightarrow{a_{cm_F}}\\
\overrightarrow{M_A} + \overrightarrow{M_{GRF}}+ \overrightarrow{M_{FA}}&=I_F\overrightarrow{\dot{\omega_F}} + \overrightarrow{\omega_F} \times (I_F\overrightarrow{\omega_F})\\
\overrightarrow{F_K} -\overrightarrow{F_A} + m_S\overrightarrow{g} &= m_S\overrightarrow{a_{cm_S}}\\
\overrightarrow{M_K} - \overrightarrow{M_A} + \overrightarrow{M_{FA}} + \overrightarrow{M_{FK}} &= I_S\overrightarrow{\dot{\omega_S}} + \overrightarrow{\omega_S} \times (I_S\overrightarrow{\omega_S})
\end{align}
where
- $\overrightarrow{g} = -9.81\hat{j}$;
- $m_F$ and $m_S$ are the masses of the foot and the shank, respectively;
- $\overrightarrow{GRF}$ is the ground reaction force being applied to the foot;
- $\overrightarrow{a_{cm_F}}$ and $\overrightarrow{a_{cm_S}}$ are the accelerations of the center of mass of the foot and the shank, respectively;
- $\overrightarrow{\omega_F}$ and $\overrightarrow{\omega_S}$ are the angular velocities of the foot and shank, respectively, described in a basis attached to the corresponding segment, and $\overrightarrow{\dot{\omega_F}}$ and $\overrightarrow{\dot{\omega_S}}$ are their time-derivatives (the angular accelerations);
- $I_S$ and $I_F$ are the matrices of inertia of the shank and the foot, respectively;
- $\overrightarrow{F_K}$, $\overrightarrow{F_A}$, $\overrightarrow{M_A}$ and $\overrightarrow{M_A}$ are the forces and moments at the ankle and knee joints, respectively
Note that each of these equations have components at each of the three directions. Additionally, note that the equations of the forces are described in the global basis, and the equations of the moments must be described in the basis attached to the segment relative to that equation. So, it is a good idea to make this clear with a more precise notation. We will denote as a superscript in the vectors the segment where the basis that we are describing the vector is fixed. So for example, $\overrightarrow{M_A^F}$ is the vector of the moment due to the muscle forces of the ankle, described in the basis fixed at the foot. So, the equations can be rewritten as:
\begin{align}
\overrightarrow{F_A^G} + \overrightarrow{GRF^G} + m_F\overrightarrow{g^G} &= m_F\overrightarrow{a_{cm_F}^G}\\
\overrightarrow{M_A^F} + \overrightarrow{M_{GRF}^F}+ \overrightarrow{M_{FA}^F}&=I_F\overrightarrow{\dot{\omega_F^F}} + \overrightarrow{\omega_F^F} \times (I_F\overrightarrow{\omega_F^F})\\
\overrightarrow{F_K^G} -\overrightarrow{F_A^G} + m_S\overrightarrow{g^G} &= m_S\overrightarrow{a_{cm_S}^G}\\
\overrightarrow{M_K^S} - \overrightarrow{M_A^S} + \overrightarrow{M_{FA}^S} + \overrightarrow{M_{FK}^S} &= I_S\overrightarrow{\dot{\omega_S^S}} + \overrightarrow{\omega_S^S} \times (I_S\overrightarrow{\omega_S^S})
\end{align}
where the superscript $G$ denotes the global frame of reference, the superscript $S$ denotes the frame of reference in the shank and the superscript $F$ denotes the frame of reference at the foot.
The moments due to the ground reaction force, the force at the ankle and the force at the knee are computed by cross-multiplying them by their moment-arms. As the forces and the moment-arms are described in the global basis, we must multiply them by the rotation matrix of the basis corresponding to the segment. So, the equations can be rewritten as:
\begin{align}
\overrightarrow{F_A^G} + \overrightarrow{GRF^G} + m_F\overrightarrow{g^G} &= m_F\overrightarrow{a_{cm_F}^G}\\
\overrightarrow{M_A^F} + R_F(\overrightarrow{r_{cop/cm_F}^G}\times \overrightarrow{GRF^G})+ R_F(\overrightarrow{r_{A/cm_F}^G}\times \overrightarrow{F_A}^G)&=I_F\overrightarrow{\dot{\omega_F^F}} + \overrightarrow{\omega_F^F} \times (I_F\overrightarrow{\omega_F^F})\\
\overrightarrow{F_K^G} -\overrightarrow{F_A^G} + m_S\overrightarrow{g^G} &= m_S\overrightarrow{a_{cm_S}^G}\\
\overrightarrow{M_K^S} - \overrightarrow{M_A^S} - R_S(\overrightarrow{r_{A/cm_S}^G}\times \overrightarrow{F_A^G}) + R_S(\overrightarrow{r_{K/cm_S}^G}\times \overrightarrow{F_K^G}) &= I_S\overrightarrow{\dot{\omega_S^S}} + \overrightarrow{\omega_S^S} \times (I_S\overrightarrow{\omega_S^S})
\end{align}
where $R_S$ is the rotation matrix of the basis attached to the shank and $R_F$ is the rotation matrix of the basis attached to the foot.
Now, we can note that the vectors $\overrightarrow{M_K^S}$ and $\overrightarrow{M_K^F}$ are the same vectors described in different basis. So we could use only one of the descriptions and use rotation matrices to convert from one to another. To pass the vector from the foot coordinates to the shank coordinate, we must first multiply it by the inverted rotation matrix of the foot and then multiply it by the rotation matrix of the shank. So, $\overrightarrow{M_A^S} = R_SR_F^{-1}\overrightarrow{M_A^F}$ and the equations above can be rewritten as:
\begin{align}
\overrightarrow{F_A^G} + \overrightarrow{GRF^G} + m_F\overrightarrow{g^G} &= m_F\overrightarrow{a_{cm_F}^G}\\
\overrightarrow{M_A^F} + R_F(\overrightarrow{r_{cop/cm_F}^G}\times \overrightarrow{GRF^G})+ R_F(\overrightarrow{r_{A/cm_F}^G}\times \overrightarrow{F_A}^G)&=I_F\overrightarrow{\dot{\omega_F^F}} + \overrightarrow{\omega_F^F} \times (I_F\overrightarrow{\omega_F^F})\\
\overrightarrow{F_K^G} -\overrightarrow{F_A^G} + m_S\overrightarrow{g^G} &= m_S\overrightarrow{a_{cm_S}^G}\\
\overrightarrow{M_K^S} - R_SR_F^{-1}\overrightarrow{M_A^F} - R_S(\overrightarrow{r_{A/cm_S}^G}\times \overrightarrow{F_A^G}) + R_S(\overrightarrow{r_{K/cm_S}^G}\times \overrightarrow{F_K^G}) &= I_S\overrightarrow{\dot{\omega_S^S}} + \overrightarrow{\omega_S^S} \times (I_S\overrightarrow{\omega_S^S})
\end{align}
Now, we divide the equations above in the matrices defined previously:
\begin{equation}
\underbrace{\left[\begin{array}{cccc} m_FI_3& [0]& [0]& [0]\\ [0]& I_F & [0] & [0] \\ [0] &[0] & m_SI_3& [0] \\ [0] & [0] & [0] & I_S\end{array}\right]}_{M}\cdot\left[\begin{array}{c}\overrightarrow{a_{cm_F}^G}\\\overrightarrow{\dot{\omega_F^F}}\\\overrightarrow{a_{cm_S}^G}\\\overrightarrow{\dot{\omega_S^S}} \\ \end{array}\right] = \underbrace{\left[\begin{array}{c}[0]\\ - \overrightarrow{\omega_F^F} \times (I_F\overrightarrow{\omega_F^F}) \\ [0] \\ - \overrightarrow{\omega_S^S} \times (I_S\overrightarrow{\omega_S^S}) \end{array}\right]}_{C} + \underbrace{\left[\begin{array}{c} m_F\overrightarrow{g^G}\\ [0]\\ m_S\overrightarrow{g^G} \\ [0] \end{array}\right]}_{G} + \underbrace{\left[\begin{array}{c} \overrightarrow{F_A^G}\\ \overrightarrow{M_A^F}+R_F(\overrightarrow{r_{A/cm_F}^G}\times \overrightarrow{F_A}^G)\\ \overrightarrow{F_K^G} - \overrightarrow{F_A^G} \\ \overrightarrow{M_K^S} - R_SR_F^{-1}\overrightarrow{M_A^F} - R_S(\overrightarrow{r_{A/cm_S}^G}\times \overrightarrow{F_A^G}) + R_S(\overrightarrow{r_{K/cm_S}^G}\times \overrightarrow{F_K^G}) \end{array}\right]}_{Q} + \underbrace{\left[\begin{array}{c} \overrightarrow{GRF^G}\\ R_F(\overrightarrow{r_{cop/cm_F}^G}\times \overrightarrow{GRF^G})\\ [0] \\ [0] \end{array}\right]}_{E}
\end{equation}
where $I_3$ is the identity matrix 3x3.
To perform the inverse dynamics, we still cannot isolate the vector of forces and moments. As the vector $F$ has cross-products we must define the a new operator that performs the cross-product through a matrix multiplication.
We can note that the cross-product between the vectors $\vec{v}$ and $\vec{w}$ has the following result:
\begin{equation}
\vec{v} \times \vec{w} = \left[\begin{array}{c}v_x\\v_y\\v_z \end{array}\right] \times \left[\begin{array}{c}w_x\\w_y\\w_z \end{array}\right] = \left[\begin{array}{c}v_yw_z - v_zw_y\\v_zw_x - v_xw_z\\v_xw_y - v_yw_x \end{array}\right] = \left[\begin{array}{ccc}0&-v_z&v_y\\v_z&0&-v_x\\-v_y&v_x&0 \end{array}\right]\cdot\left[\begin{array}{c}w_x\\w_y\\w_z \end{array}\right]
\end{equation}
So we can define a new operator known as skew-symmetric matrix:
\begin{equation}
S(\vec{v}) \triangleq \left[\begin{array}{ccc}0&-v_z&v_y\\v_z&0&-v_x\\-v_y&v_x&0 \end{array}\right]
\end{equation}
Therefore:
\begin{equation}
\vec{v} \times \vec{w} = S(\vec{v})\cdot\vec{w}
\end{equation}
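A minimal NumPy sketch of this operator (added here for illustration), together with a quick check against `np.cross`:
```python
import numpy as np

def S(v):
    """Skew-symmetric matrix of v, such that S(v) @ w equals np.cross(v, w)."""
    return np.array([[    0, -v[2],  v[1]],
                     [ v[2],     0, -v[0]],
                     [-v[1],  v[0],     0]])

v = np.array([1., 2., 3.])
w = np.array([4., 5., 6.])
print(S(v) @ w)        # [-3.  6. -3.]
print(np.cross(v, w))  # same result
```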
Now, we will use this operator in the equation we found previously:
\begin{equation}
\left[\begin{array}{cccc} m_FI_3& [0]& [0]& [0]\\ [0]& I_F & [0] & [0] \\ [0] &[0] & m_SI_3& [0] \\ [0] & [0] & [0] & I_S\end{array}\right]\cdot\left[\begin{array}{c}\overrightarrow{a_{cm_F}^G}\\\overrightarrow{\dot{\omega_F^F}}\\\overrightarrow{a_{cm_S}^G}\\\overrightarrow{\dot{\omega_S^S}} \\ \end{array}\right] = \left[\begin{array}{c}[0]\\ - \overrightarrow{\omega_F^F} \times (I_F\overrightarrow{\omega_F^F}) \\ [0] \\ - \overrightarrow{\omega_S^S} \times (I_S\overrightarrow{\omega_S^S}) \end{array}\right] + \left[\begin{array}{c} m_F\overrightarrow{g^G}\\ [0]\\ m_S\overrightarrow{g^G} \\ [0] \end{array}\right] + \left[\begin{array}{c} \overrightarrow{F_A^G}\\ \overrightarrow{M_A^F}+R_F(S(\overrightarrow{r_{A/cm_F}^G})\cdot\overrightarrow{F_A}^G)\\ \overrightarrow{F_K^G} - \overrightarrow{F_A^G} \\ \overrightarrow{M_K^S} - R_SR_F^{-1}\overrightarrow{M_A^F} - R_S(S(\overrightarrow{r_{A/cm_S}^G})\cdot\overrightarrow{F_A^G}) + R_S(S(\overrightarrow{r_{K/cm_S}^G})\cdot\overrightarrow{F_K^G}) \end{array}\right] + \left[\begin{array}{c} \overrightarrow{GRF^G}\\ R_F(\overrightarrow{r_{cop/cm_F}^G}\times \overrightarrow{GRF^G})\\ [0] \\ [0] \end{array}\right]
\end{equation}
Now it is possible to write the vector $F$ as multiplication of a matrix by a vector:
\begin{equation}
\left[\begin{array}{cccc} m_FI_3& [0]& [0]& [0]\\ [0]& I_F & [0] & [0] \\ [0] &[0] & m_SI_3& [0] \\ [0] & [0] & [0] & I_S\end{array}\right]\cdot\left[\begin{array}{c}\overrightarrow{a_{cm_F}^G}\\\overrightarrow{\dot{\omega_F^F}}\\\overrightarrow{a_{cm_S}^G}\\\overrightarrow{\dot{\omega_S^S}} \\ \end{array}\right] = \left[\begin{array}{c}[0]\\ - \overrightarrow{\omega_F^F} \times (I_F\overrightarrow{\omega_F^F}) \\ [0] \\ - \overrightarrow{\omega_S^S} \times (I_S\overrightarrow{\omega_S^S}) \end{array}\right] + \left[\begin{array}{c} m_F\overrightarrow{g^G}\\ [0]\\ m_S\overrightarrow{g^G} \\ [0] \end{array}\right] + \left[\begin{array}{ccc} I_3& [0]& [0]& [0]\\ R_FS\left(\overrightarrow{r_{A/cm_F}^G}\right)&I_3& [0]& [0]\\ -I_3& [0]& I_3 & [0] \\ -R_SS\left(\overrightarrow{r_{A/cm_S}^G}\right)& - R_SR_F^{-1} & R_SS\left(\overrightarrow{r_{K/cm_S}^G}\right) & I_3 \end{array}\right]\cdot\left[\begin{array}{c} \overrightarrow{F_A^G}\\ \overrightarrow{M_A^F}\\ \overrightarrow{F_K^G}\\ \overrightarrow{M_K^S}\end{array}\right] + \left[\begin{array}{c} \overrightarrow{GRF^G}\\ R_F(\overrightarrow{r_{cop/cm_F}^G}\times \overrightarrow{GRF^G})\\ [0] \\ [0] \end{array}\right]
\end{equation}
So, the final equation to compute the forces and torques is obtained by multiplying everything by the inverse of the matrix multipliying the vector of forces:
\begin{equation}
\left[\begin{array}{c} \overrightarrow{F_A^G}\\ \overrightarrow{M_A^F}\\ \overrightarrow{F_K^G}\\ \overrightarrow{M_K^S}\end{array}\right] = \left[\begin{array}{ccc} I_3& [0]& [0]& [0]\\ R_FS\left(\overrightarrow{r_{A/cm_F}^G}\right)&I_3& [0]& [0]\\ -I_3& [0]& I_3 & [0] \\ -R_SS\left(\overrightarrow{r_{A/cm_S}^G}\right)& - R_SR_F^{-1} & R_SS\left(\overrightarrow{r_{K/cm_S}^G}\right) & I_3 \end{array}\right]^{-1}\cdot\left(\left[\begin{array}{cccc} m_FI_3& [0]& [0]& [0]\\ [0]& I_F & [0] & [0] \\ [0] &[0] & m_SI_3& [0] \\ [0] & [0] & [0] & I_S\end{array}\right]\cdot\left[\begin{array}{c}\overrightarrow{a_{cm_F}^G}\\\overrightarrow{\dot{\omega_F^F}}\\\overrightarrow{a_{cm_S}^G}\\\overrightarrow{\dot{\omega_S^S}} \\ \end{array}\right] - \left[\begin{array}{c}[0]\\ - \overrightarrow{\omega_F^F} \times (I_F\overrightarrow{\omega_F^F}) \\ [0] \\ - \overrightarrow{\omega_S^S} \times (I_S\overrightarrow{\omega_S^S}) \end{array}\right] -\left[\begin{array}{c} \overrightarrow{GRF^G}\\ R_F(\overrightarrow{r_{cop/cm_F}^G}\times \overrightarrow{GRF^G})\\ [0] \\ [0] \end{array}\right] - \left[\begin{array}{c} m_F\overrightarrow{g^G}\\ [0]\\ m_S\overrightarrow{g^G} \\ [0] \end{array}\right]\right)
\end{equation}
With the last equation, we can obtain all the forces and moments using only one line of code. Computationally, it is less prone to errors and more efficient.
So, generically, the steps to perform the analysis of inverse dynamics is:
- write the equations of Newton-Euler for each segment. Write explicitly the basis at which each vector is described.
- use the rotation matrices of the basis to pass the description of a vector to another basis. Use it in a way that the same vector is described at just a single frame of reference.
- write the cross-products as a product between the skew-symmetric matrix $S$ of the first vector and the second vector.
- write the equations in the matrix format, repeated here:
\begin{equation}
M(q)\ddot{q} = C(q,\dot{q}) + G(q) + Q + E
\end{equation}
- write explicitly the vector containing the unknown forces and moments $Q$, as a multiplication of a matrix and vector containing only the unknown forces.
- isolate the vector containing only the unknown forces by multiplying the whole equation by the inverse of the matrix multiplying the vector with the forces.
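The steps above can be translated almost literally into code. The sketch below is only schematic: all masses, inertia matrices, rotation matrices, position vectors and kinematic quantities are placeholder values standing in for measured data, and the point is solely to show how the block matrices are assembled and how the unknown forces and moments are isolated with a single linear solve.
```python
import numpy as np

def S(v):
    """Skew-symmetric matrix such that S(v) @ w = v x w."""
    return np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])

# ---- placeholder data (would come from kinematics and anthropometry) ----
m_F, m_S = 1.0, 3.5                              # masses of foot and shank [kg]
I_F, I_S = np.eye(3)*0.01, np.eye(3)*0.05        # inertia matrices in the segment bases
R_F, R_S = np.eye(3), np.eye(3)                  # rotation matrices (global -> segment)
g = np.array([0, -9.81, 0])
a_F, a_S = np.zeros(3), np.zeros(3)              # accelerations of the centers of mass
w_F, dw_F = np.zeros(3), np.zeros(3)             # angular velocity/acceleration of the foot
w_S, dw_S = np.zeros(3), np.zeros(3)             # angular velocity/acceleration of the shank
r_A_F = np.array([0.05, 0.10, 0])                # ankle relative to the foot center of mass
r_A_S = np.array([0.00, -0.20, 0])               # ankle relative to the shank center of mass
r_K_S = np.array([0.00, 0.20, 0])                # knee relative to the shank center of mass
GRF = np.array([0, 700., 0])                     # ground reaction force
r_cop_F = np.array([0.10, -0.05, 0])             # center of pressure relative to the foot cm

I3, Z3 = np.eye(3), np.zeros((3, 3))

# ---- M*qddot, C, G and E exactly as in the matrix equation above ----
M = np.block([[m_F*I3, Z3, Z3, Z3], [Z3, I_F, Z3, Z3],
              [Z3, Z3, m_S*I3, Z3], [Z3, Z3, Z3, I_S]])
qddot = np.concatenate([a_F, dw_F, a_S, dw_S])
C = np.concatenate([np.zeros(3), -np.cross(w_F, I_F @ w_F),
                    np.zeros(3), -np.cross(w_S, I_S @ w_S)])
G = np.concatenate([m_F*g, np.zeros(3), m_S*g, np.zeros(3)])
E = np.concatenate([GRF, R_F @ np.cross(r_cop_F, GRF), np.zeros(3), np.zeros(3)])

# ---- coefficient matrix multiplying the unknown vector [F_A, M_A, F_K, M_K] ----
A = np.block([[I3,               Z3,                        Z3,              Z3],
              [R_F @ S(r_A_F),   I3,                        Z3,              Z3],
              [-I3,              Z3,                        I3,              Z3],
              [-R_S @ S(r_A_S), -R_S @ np.linalg.inv(R_F),  R_S @ S(r_K_S),  I3]])

F_A, M_A, F_K, M_K = np.split(np.linalg.solve(A, M @ qddot - C - G - E), 4)
```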
## Problems
1) Solve problems 18.3.20 and 18.3.24 of Ruina and Rudra's book by using the Lagrangian formalism (it is much easier than using the Newton-Euler formalism) and then use the matrix formalism to obtain the expressions of the angular accelerations.
2) Write the matrices to find the forces and torques in a tridimensional double pendulum, consisted of two cylindrical bars. Consider that you know all the masses, moments of inertia, rotation matrices, accelerations, angular velocities and angular accelerations necessary to solve the problem.
## References
- YAMAGUCHI, G. T. Dynamic modeling of musculoskeletal motion: a vectorized approach for biomechanical analysis in three dimensions., 2001
- CRAIG, J. Introduction to robotics. , 1989
- JAIN, A. Robot and multibody dynamics. , 2011
- SPONG, M. W.; HUTCHINSON, S.; VIDYASAGAR, M. Robot modeling and control., 2006
- ERDEMIR, A. et al. Model-based estimation of muscle forces exerted during movements. Clinical Biomechanics, v. 22, n. 2, p. 131–154, 2007.
- STANEV, D.; MOUSTAKAS, K. Simulation of constrained musculoskeletal systems in task space. IEEE Transactions on Biomedical Engineering, v. 65, n. 2, p. 307–318, 2018.
- ZAJAC FE, GORDON ME , [Determining muscle's force and action in multi-articular movement](https://drive.google.com/open?id=0BxbW72zV7WmUcC1zSGpEOUxhWXM&authuser=0). Exercise and Sport Sciences Reviews, 17, 187-230. , 1989
- RUINA A, RUDRA P. [Introduction to Statics and Dynamics](http://ruina.tam.cornell.edu/Book/index.html). Oxford University Press. , 2015
```python
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import odeint
```
## Working in relative coordinates for particles 1 and 2:
## (This makes PN corrections easier)
### $\mathbf{a} = \mathbf{a}_1 - \mathbf{a}_2$
### $\mathbf{r} = \mathbf{r}_1 - \mathbf{r}_2=(r_x,r_y)$
### $\mathbf{n} = \mathbf{r}/r = \mathbf{n}_{12} = -\mathbf{n}_{21}$
## EOM:
### (Chapter 9 of ``Gravity: Newtonian, Post-Newtonian, Relativistic'', Poisson & Will)
\begin{equation}
\begin{split}
\mathbf{a} = \mathbf{r}''(t) = &-\frac{ G m }{r^2 (t)} \mathbf{n}(t) \\
&- \frac{Gm}{c^2 r^2 } \Bigg( \Big( v^2 (1+3\eta) - \frac{3}{2} (\mathbf{n} \cdot \mathbf{v})^2 \eta - 2(2+\eta) \frac{Gm}{r} \Big) \mathbf{n} \\
& \hspace{2cm} - 2 (2-\eta) (\mathbf{n} \cdot \mathbf{v}) \mathbf{v} \Bigg) \\
& \ + \mathcal{O} \Big( \frac{1}{c^{4}} \Big)
\end{split}
\end{equation}
with: $m = M_1 + M_2$ , and $\ \ \eta = (M_1 M_2) / (M_1 + M_2)^2$
### $r_x''(t) = a_{0} + a_{1} + a_{2} + a_{3} + a_{4}$
### $a_0(t) = -\frac{ G m }{(r_x^2 + r_y^2)^{3/2}} r_x $
### $a_1(t) = -\frac{ G m }{c^2 (r_x^2 + r_y^2)^{3/2}}r_x (v_x^2 + v_y^2) ( 1 + 3 \eta) $
### $a_2(t) = +\frac{3}{2}\frac{ G m }{c^2 (r_x^2 + r_y^2)^{5/2}}r_x (r_x v_x + r_y v_y)^2 \eta $
### $a_3(t) = +2\frac{ (G m)^2 }{c^2 (r_x^2 + r_y^2)^{4/2}}r_x ( 2 + \eta) $
### $a_4(t) = +2\frac{ G m }{c^2 (r_x^2 + r_y^2)^{3/2}}( 2 -\eta)(r_x v_x + r_y v_y) v_x$
## For Python : $y = r_x, r_y, r_x', r_y' =r_x, r_y, v_x, v_y$
```python
def a0_component(r_x, r_y, Gm):
return -Gm*r_x* ((r_x*r_x +r_y*r_y)**(-1.5))
def a1_component(r_x, r_y, v_x, v_y, Gm, c_squared, eta):
v_squared = v_x*v_x +v_y*v_y
eta_factor = 1 + 3*eta
return -Gm*r_x* ((r_x*r_x +r_y*r_y)**(-1.5)) * v_squared * eta_factor / c_squared
def a2_component(r_x, r_y, v_x, v_y, Gm, c_squared, eta):
r_dot_v = r_x*v_x +r_y*v_y
return + 1.5 *Gm*r_x* ((r_x*r_x +r_y*r_y)**(-2.5)) * eta * r_dot_v * r_dot_v/ c_squared
def a3_component(r_x, r_y, v_x, v_y, Gm, c_squared, eta):
eta_factor = 2 + eta
return + 2 *Gm*Gm*r_x* ((r_x*r_x +r_y*r_y)**(-2.)) * eta_factor / c_squared
def a4_component(r_x, r_y, v_x, v_y, Gm, c_squared, eta):
r_dot_v = r_x*v_x +r_y*v_y
eta_factor = 2 - eta
return + 2 *Gm*v_x* ((r_x*r_x +r_y*r_y)**(-1.5)) * eta_factor *r_dot_v/ c_squared
def total_relative_a(r_x, r_y, v_x, v_y, Gm, c_squared, eta):
total = a0_component(r_x, r_y, Gm) + \
a1_component(r_x, r_y, v_x, v_y, Gm, c_squared, eta) + \
a2_component(r_x, r_y, v_x, v_y, Gm, c_squared, eta) + \
a3_component(r_x, r_y, v_x, v_y, Gm, c_squared, eta) + \
a4_component(r_x, r_y, v_x, v_y, Gm, c_squared, eta)
return total
def relative_dynamics(y,t,Gm, c_squared, eta):
r_x, r_y, v_x, v_y = y
dydt = [v_x,
v_y,
total_relative_a(r_x, r_y, v_x, v_y, Gm, c_squared, eta),
total_relative_a(r_y, r_x, v_y, v_x, Gm, c_squared, eta)]
a.append(total_relative_a(r_x, r_y, v_x, v_y, Gm, c_squared, eta))
return dydt
```
```python
a = []
c_squared_val = (3e8)**2.
M_1_GW150914 = 35 * 1.989e+30
M_2_GW150914 = 30 * 1.989e+30
eta_val = (M_1_GW150914 * M_2_GW150914) / ((M_1_GW150914 + M_2_GW150914)**2.)
print(eta_val)
Gm_val = 6.674e-11 * (M_1_GW150914 + M_2_GW150914)
t = np.linspace(0, 5, int(1e4))
```
0.2485207100591716
```python
r_isco_tot_approx = 6 * Gm_val / c_squared_val
```
```python
y0 = [r_isco_tot_approx*20., 0., 0., r_isco_tot_approx*37]
```
```python
a = []   # record accelerations of the Newtonian run
sol_non_rel = odeint(relative_dynamics, y0, t, args=(Gm_val, c_squared_val*1e10, eta_val*0. +1.,))
a_newton, a = list(a), []   # keep the Newtonian record, reset the recorder for the 1PN run
sol = odeint(relative_dynamics, y0, t, args=(Gm_val, c_squared_val, eta_val,))
```
```python
plt.plot(a)          # x-component of the relative acceleration recorded during the 1PN run
plt.plot(a_newton)   # same quantity for the Newtonian run
```
```python
_ = plt.figure(figsize=(12,5)), plt.subplot(1,2,1), plt.title('Relative postition (GR)')
_ = plt.plot(t, sol[:, 0], label='r_x')
_ = plt.plot(t, sol[:, 1], label='r_y')
_ = plt.plot(t, np.sqrt(sol[:, 0]**2.+sol[:, 1]**2.), label='|r|')
_ = plt.legend(loc='best'), plt.xlabel('t/seconds'), plt.grid()
_ = plt.subplot(1,2,2)
_ = plt.plot(t, np.sqrt(sol[:, 0]**2.+sol[:, 1]**2.), label='|r| GR', c='C2')
_ = plt.plot(t, np.sqrt(sol_non_rel[:, 0]**2.+sol_non_rel[:, 1]**2.), c='C2', ls=':',label='|r| Newtonian')
_ = plt.legend(loc=(0.9,0.05)), plt.xlabel('t/seconds'), plt.grid()
_ = plt.suptitle(r'Relative postition black-hole binary', fontsize=14)
_ = plt.savefig('bh_binary_pn.png' , dpi=200)
```
```python
colors = plt.cm.inferno(np.linspace(0,1,len(t)))
_ = plt.plot(sol_non_rel[:, 0], sol_non_rel[:, 1], c='C0', label='Newtonian')
# for i in range(len(t)):
# plt.scatter(sol[i, 0], sol[i, 1], color=colors[i], marker='.', alpha=0.3, label='GR')
# if i==0:
# plt.legend()
_ = plt.plot(sol[:, 0], sol[:, 1], c='C1', label='GR'), plt.legend()
_ = plt.ylabel(r'relative $r_y$', fontsize=16), plt.xlabel(r'relative $r_x$', fontsize=16), plt.grid()
_ = plt.tight_layout(True), plt.savefig('bh_binary_pn2.png' , dpi=200)
```
# Mechanical systems
The objective of this practical session is to analyze mechanical systems using the mathematical tools of control engineering.
Let us start by modeling the following mechanical system:
If we draw the free-body diagram and write the sum of forces along $x$, we obtain:
$$
\sum F_x = F - F_R - F_A = ma
$$
where $F$ is the force applied to the right, $F_R$ is the reaction force of the spring and $F_A$ is the reaction force of the damper.
If we now take into account that:
$$
\begin{align}
F_R &= k x \\
F_A &= c v = c\dot{x} \\
ma &= m \ddot{x}
\end{align}
$$
we can write this sum of forces as:
$$
F - kx - c\dot{x} = m \ddot{x}
$$
and taking the Laplace transform and factoring common terms:
$$
F(s) = X(s)\left[ ms^2 + cs + k \right]
$$
Therefore, when we consider $F(s)$ as the input of our system and $X(s)$ as its output, we can obtain the transfer function:
$$
\frac{X(s)}{F(s)} = \frac{1}{ms^2 + cs + k}
$$
and simulate its behavior:
```python
from control import tf, step_response, root_locus
from numpy import linspace
from matplotlib.pyplot import plot
```
```python
m = 1200/4
c = 1500
k = 15000
G = tf([1], [m, c, k])
```
```python
ts = linspace(0, 10, 500)
t, y = step_response(G, ts)
```
```python
plot(t, y);
```
However, we have to check one thing. The data entered into this system were obtained from commercial values for the suspension of a sedan-type car; however, the ```step_response``` function simulates the behavior of the system for a unit input (in this case $1N$), so for this simulation to be relevant we have to amplify this input.
We propose an input of $1100N$, which corresponds to the weight of a heavy man; what we then expect is a motion like the one that occurs when a heavy man gets into a sedan:
```python
ts = linspace(0, 10, 500)
t, y = step_response(1100*G, ts)
```
```python
plot(t, y);
```
And now we obtain a simulation with the behavior we would expect from a car suspension when a heavy man gets in; the suspension stops moving after about $3s$ and settles at a value of approximately $0.07m$, that is, $7cm$, after having compressed almost $10cm$ and bouncing back some $3$ or $4$ times.
---
## Exercise
* Define a system ```G1``` with a damper with constant $c=0\frac{Ns}{m}$, a spring with constant $k=100\frac{N}{m}$ and a mass of $10kg$.
```python
# WRITE YOUR CODE HERE
raise NotImplementedError
```
```python
from numpy.testing import assert_allclose
assert_allclose(G1.dcgain(), [0.01], 2)
assert_allclose(G1.pole(), [0.+3.16j, 0.-3.16j], 2)
assert_allclose(G1.zero(), [], 2)
```
```python
G1.zero()
```
* Simulate the behavior of this system for an applied force of $5N$ from time $0s$ to time $15s$.
```python
# WRITE YOUR CODE HERE
raise NotImplementedError
plot(t, y);
```
```python
from nose.tools import assert_almost_equal, assert_equal
assert_equal(ts[0], 0)
assert_equal(ts[-1], 15)
assert_almost_equal(max(y), 0.02, 4)
assert_almost_equal(min(y), 0.0, 4)
```
* Define a system ```G2``` with a damper with constant $c=10\frac{Ns}{m}$, a spring with constant $k=0\frac{N}{m}$ and a mass of $10kg$.
```python
# WRITE YOUR CODE HERE
raise NotImplementedError
```
```python
from numpy.testing import assert_allclose
from numpy import inf
assert_allclose(G2.dcgain(), [inf])
assert_allclose(G2.pole(), [-1, 0])
assert_allclose(G2.zero(), [], 2)
```
* Simulate the behavior of this system for an applied force of $5N$ from time $0s$ to time $20s$.
```python
# WRITE YOUR CODE HERE
raise NotImplementedError
plot(t, y);
```
```python
from nose.tools import assert_almost_equal, assert_equal
assert_equal(ts[0], 0)
assert_equal(ts[-1], 20)
assert_almost_equal(max(y), 1.9, 4)
assert_almost_equal(min(y), 0.0, 4)
```
---
Once we have verified how to simulate these mechanical systems, we can move on to the next step: predicting their behavior from the transfer function alone.
The transfer function has several characteristics we have not yet talked about, for example the poles of the system:
```python
G1 = tf([1], [1, 1, 1])
G1.pole()
```
These poles are obtained by solving the equation formed by setting the denominator of the transfer function equal to $0$.
> This denominator is called the **characteristic polynomial of the system**, since it is what determines the system's behavior.
If we plot these poles we obtain:
```python
rs, ks = root_locus(G1)
```
This plot is known as the **root locus**; in it, the crosses represent the poles we obtained, and the lines starting from these crosses represent the movement of those poles under feedback, which we will see in the next lab session.
In this plot we can note that the roots are complex and that their real part is negative; this last characteristic is what indicates that the behavior of this system will be stable; to corroborate this we can simulate and plot its behavior:
```python
ts = linspace(0, 15, 500)
t, y = step_response(G1, ts)
plot(t, y);
```
On the other hand, if we create a transfer function that has poles with zero real part, we will get **critically stable** behavior.
```python
G2 = tf([1], [1, 0, 1])
G2.pole()
```
```python
rs, ks = root_locus(G2)
```
```python
ts = linspace(0, 15, 500)
t, y = step_response(G2, ts)
plot(t, y);
```
Or a transfer function with poles with positive real part, whose behavior will be unstable:
```python
G3 = tf([1], [1, -1, 1])
G3.pole()
```
```python
rs, ks = root_locus(G3)
```
```python
ts = linspace(0, 15, 500)
t, y = step_response(G3, ts)
plot(t, y);
```
After this, we may ask ourselves whether, instead of hoping that the behavior of our system turns out to be adequate, we can build a transfer function with the desired behavior, and the answer is yes; we can, for example, use two purely real roots to obtain an overdamped response:
$$
G_4 = \frac{1}{s+1} \cdot \frac{1}{s+3} = \frac{1}{s^2 + 4s + 3}
$$
```python
G4 = tf([1], [1, 1])*tf([1], [1, 3])
G4.pole()
```
```python
rs, ks = root_locus(G4)
```
```python
ts = linspace(0, 15, 500)
t, y = step_response(G4, ts)
plot(t, y);
```
Or even make sure that the behavior is unstable:
```python
G5 = tf([1], [1, -1])*tf([1], [1, 3])
G5.pole()
```
```python
rs, ks = root_locus(G5)
```
```python
ts = linspace(0, 15, 500)
t, y = step_response(G5, ts)
plot(t, y);
```
---
## Exercise
* Define a transfer function ```G3``` with unstable behavior.
```python
# WRITE YOUR CODE HERE
raise NotImplementedError
```
```python
```
```python
assert not all([polo.real<0 for polo in G3.pole()])
```
* Define a transfer function ```G4``` with stable behavior.
```python
# WRITE YOUR CODE HERE
raise NotImplementedError
```
```python
assert all([polo.real<0 for polo in G4.pole()])
```
* Define a transfer function ```G5``` with critically stable behavior.
```python
# WRITE YOUR CODE HERE
raise NotImplementedError
```
```python
assert any([polo.real==0 for polo in G5.pole()])
```
* Define a transfer function ```G6``` with overdamped behavior.
```python
# WRITE YOUR CODE HERE
raise NotImplementedError
```
```python
from numpy import pi, angle
assert all([pi - pi/4 < angle(polo) < pi + pi/4 for polo in G6.pole()])
```
* Define a transfer function ```G7``` with underdamped behavior.
> Hint: use the transfer function of the mass - spring - damper system as a starting point.
```python
# WRITE YOUR CODE HERE
raise NotImplementedError
```
```python
assert not all([pi - pi/4 < angle(polo) < pi + pi/4 for polo in G7.pole()])
```
---
# Linear Algebra, Handling of Arrays and more Python Features
## Introduction
The aim of this set of lectures is to review some central linear algebra algorithms that we will need in our
data analysis part and in the construction of Machine Learning algorithms (ML).
This will allow us to introduce some central programming features of high-level languages like Python and
compiled languages like C++ and/or Fortran.
As discussed in the introductory notes, these series of lectures focuses both on using
central Python packages like **tensorflow** and **scikit-learn** as well
as writing your own codes for some central ML algorithms. The
latter can be written in a language of your choice, be it Python, Julia, R,
Rust, C++, Fortran etc. In order to avoid confusion however, in these lectures we will limit our
attention to Python, C++ and Fortran.
## Important Matrix and vector handling packages
There are several central software packages for linear algebra and eigenvalue problems. Several of the more
popular ones have been wrapped into other software packages, like those from the widely used text **Numerical Recipes**. The original source codes in many of the available packages are often taken from the widely used
software package LAPACK, which follows two other popular packages
developed in the 1970s, namely EISPACK and LINPACK. We describe them briefly here.
* LINPACK: package for linear equations and least square problems.
* LAPACK: package for solving symmetric, unsymmetric and generalized eigenvalue problems. From LAPACK's website <http://www.netlib.org> it is possible to download for free all source codes from this library. Both C/C++ and Fortran versions are available.
* BLAS (I, II and III): (Basic Linear Algebra Subprograms) are routines that provide standard building blocks for performing basic vector and matrix operations. Blas I is vector operations, II vector-matrix operations and III matrix-matrix operations. Highly parallelized and efficient codes, all available for download from <http://www.netlib.org>.
When dealing with matrices and vectors a central issue is memory
handling and allocation. If our code is written in Python the way we
declare these objects and the way they are handled, interpreted and
used by say a linear algebra library, requires codes that interface
our Python program with such libraries. For Python programmers,
**Numpy** is by now the standard Python package for numerical arrays in
Python as well as the source of functions which act on these
arrays. These functions span from eigenvalue solvers to functions that
compute the mean value, variance or the covariance matrix. If you are
not familiar with how arrays are handled in say Python or compiled
languages like C++ and Fortran, the sections in this chapter may be
useful. For C++ programmer, **Armadillo** is widely used library for
linear algebra and eigenvalue problems. In addition it offers a
convenient way to handle and organize arrays. We discuss this library
as well. Before we proceed we believe it may be convenient to repeat some basic features of
matrices and vectors.
## Basic Matrix Features
Matrix properties reminder
$$
\mathbf{A} =
\begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\
a_{21} & a_{22} & a_{23} & a_{24} \\
a_{31} & a_{32} & a_{33} & a_{34} \\
a_{41} & a_{42} & a_{43} & a_{44}
\end{bmatrix}\qquad
\mathbf{I} =
\begin{bmatrix} 1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
$$
The inverse of a matrix is defined by
$$
\mathbf{A}^{-1} \cdot \mathbf{A} = I
$$
<table border="1">
<thead>
<tr><th align="center"> Relations </th> <th align="center"> Name </th> <th align="center"> matrix elements </th> </tr>
</thead>
<tbody>
<tr><td align="center"> $A = A^{T}$ </td> <td align="center"> symmetric </td> <td align="center"> $a_{ij} = a_{ji}$ </td> </tr>
<tr><td align="center"> $A = \left (A^{T} \right )^{-1}$ </td> <td align="center"> real orthogonal </td> <td align="center"> $\sum_k a_{ik} a_{jk} = \sum_k a_{ki} a_{kj} = \delta_{ij}$ </td> </tr>
<tr><td align="center"> $A = A^{ * }$ </td> <td align="center"> real matrix </td> <td align="center"> $a_{ij} = a_{ij}^{ * }$ </td> </tr>
<tr><td align="center"> $A = A^{\dagger}$ </td> <td align="center"> hermitian </td> <td align="center"> $a_{ij} = a_{ji}^{ * }$ </td> </tr>
<tr><td align="center"> $A = \left (A^{\dagger} \right )^{-1}$ </td> <td align="center"> unitary </td> <td align="center"> $\sum_k a_{ik} a_{jk}^{ * } = \sum_k a_{ki}^{ * } a_{kj} = \delta_{ij}$ </td> </tr>
</tbody>
</table>
### Some famous Matrices
* Diagonal if $a_{ij}=0$ for $i\ne j$
* Upper triangular if $a_{ij}=0$ for $i > j$
* Lower triangular if $a_{ij}=0$ for $i < j$
* Upper Hessenberg if $a_{ij}=0$ for $i > j+1$
* Lower Hessenberg if $a_{ij}=0$ for $i < j+1$
* Tridiagonal if $a_{ij}=0$ for $|i -j| > 1$
* Lower banded with bandwidth $p$: $a_{ij}=0$ for $i > j+p$
* Upper banded with bandwidth $p$: $a_{ij}=0$ for $i < j-p$
* Banded, block upper triangular, block lower triangular....
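As an added illustration (not part of the original notes), the short sketch below builds some of these special matrices with Numpy's standard helpers **np.triu**, **np.tril** and **np.diag**.

```
import numpy as np
A = np.arange(1.0, 17.0).reshape(4, 4)
U = np.triu(A)   # upper triangular: elements with i > j set to zero
L = np.tril(A)   # lower triangular: elements with i < j set to zero
# tridiagonal: keep the main diagonal plus the first sub- and super-diagonal
T = np.diag(np.diag(A)) + np.diag(np.diag(A, 1), 1) + np.diag(np.diag(A, -1), -1)
print(U)
print(L)
print(T)
```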
Some Equivalent Statements. For an $N\times N$ matrix $\mathbf{A}$ the following properties are all equivalent
* If the inverse of $\mathbf{A}$ exists, $\mathbf{A}$ is nonsingular.
* The equation $\mathbf{Ax}=0$ implies $\mathbf{x}=0$.
* The rows of $\mathbf{A}$ form a basis of $R^N$.
* The columns of $\mathbf{A}$ form a basis of $R^N$.
* $\mathbf{A}$ is a product of elementary matrices.
* $0$ is not an eigenvalue of $\mathbf{A}$.
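As a small added numerical check of several of these equivalent statements, we can use Numpy's linear algebra routines on a simple nonsingular matrix (illustrative only):

```
import numpy as np
A = np.array([[2.0, 1.0], [1.0, 3.0]])
print(np.linalg.det(A))          # non-zero determinant, so A is nonsingular
print(np.linalg.matrix_rank(A))  # full rank: rows and columns form a basis of R^2
print(np.linalg.eigvals(A))      # no zero eigenvalue
print(np.linalg.inv(A) @ A)      # the inverse exists and recovers the identity
```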
## Numpy and arrays
[Numpy](http://www.numpy.org/) provides an easy way to handle arrays in Python. The standard way to import this library is as
```
import numpy as np
n = 10
x = np.random.normal(size=n)
print(x)
```
Here we have defined a vector $x$ with $n=10$ elements with its values given by the Normal distribution $N(0,1)$.
Another alternative is to declare a vector as follows
```
import numpy as np
x = np.array([1, 2, 3])
print(x)
```
Here we have defined a vector with three elements, with $x_0=1$, $x_1=2$ and $x_2=3$. Note that both Python and C++
start numbering array elements from $0$. This means that a vector with $n$ elements has a sequence of entries $x_0, x_1, x_2, \dots, x_{n-1}$. We could also let Numpy (recommended) compute the logarithms of a specific array as
```
import numpy as np
x = np.log(np.array([4, 7, 8]))
print(x)
```
Here we have used Numpy's unary function $np.log$. This function is
highly tuned to compute array elements since the code is vectorized
and does not require looping. We normally recommend that you use the
Numpy intrinsic functions instead of the corresponding **log** function
from Python's **math** module. The looping is done implicitly, inside the
**np.log** function. The alternative, and slower, way to compute the
logarithms of a vector would be to write
```
import numpy as np
from math import log
x = np.array([4, 7, 8])
for i in range(0, len(x)):
x[i] = log(x[i])
print(x)
```
We note that our code is much longer already and we need to import the **log** function from the **math** module.
The attentive reader will also notice that the output is $[1, 1, 2]$. Python automatically interprets our numbers as integers (much like the **auto** keyword in C++), and assigning the logarithms back into an integer array truncates them. To change this we could define our array elements to be double precision numbers as
```
import numpy as np
x = np.log(np.array([4, 7, 8], dtype = np.float64))
print(x)
```
or simply write them as double precision numbers (Python uses 64 bits as default for floating point type variables), that is
```
import numpy as np
x = np.log(np.array([4.0, 7.0, 8.0]))
print(x)
```
To check the number of bytes per array element (remember that one byte contains eight bits, so a double precision variable occupies eight bytes), you can simply use the **itemsize** attribute (the array $x$ is actually an object which inherits the functionalities defined in Numpy), as
```
import numpy as np
x = np.log(np.array([4.0, 7.0, 8.0]))
print(x.itemsize)
```
Having defined vectors, we are now ready to try out matrices. We can define a $3 \times 3 $ real matrix $\hat{A}$
as follows (recall that we use lowercase letters for vectors and uppercase letters for matrices)
```
import numpy as np
A = np.log(np.array([ [4.0, 7.0, 8.0], [3.0, 10.0, 11.0], [4.0, 5.0, 7.0] ]))
print(A)
```
If we use the **shape** attribute we get $(3, 3)$ as output, verifying that our matrix is a $3\times 3$ matrix. We can slice the matrix and print, for example, the first column (Python organizes matrix elements in row-major order, see below) as
```
import numpy as np
A = np.log(np.array([ [4.0, 7.0, 8.0], [3.0, 10.0, 11.0], [4.0, 5.0, 7.0] ]))
# print the first column, row-major order and elements start with 0
print(A[:,0])
```
We can continue this way by printing out other columns or rows. The example here prints out the second row
```
import numpy as np
A = np.log(np.array([ [4.0, 7.0, 8.0], [3.0, 10.0, 11.0], [4.0, 5.0, 7.0] ]))
# print the second row; row-major order and elements start with 0
print(A[1,:])
```
Numpy contains many other functionalities that allow us to slice and subdivide arrays, and much more. We strongly recommend that you look up the [Numpy website for more details](http://www.numpy.org/). A useful function when defining a matrix is **np.zeros**, which declares a matrix of a given dimension and sets all elements to zero
```
import numpy as np
n = 10
# define a matrix of dimension 10 x 10 and set all elements to zero
A = np.zeros( (n, n) )
print(A)
```
or initializing all elements to one
```
import numpy as np
n = 10
# define a matrix of dimension 10 x 10 and set all elements to one
A = np.ones( (n, n) )
print(A)
```
or as uniformly distributed random numbers (see the material on random number generators in the statistics part)
```
import numpy as np
n = 10
# define a matrix of dimension 10 x 10 and set all elements to random numbers with x \in [0, 1]
A = np.random.rand(n, n)
print(A)
```
As we will see throughout these lectures, there are several extremely useful functionalities in Numpy.
As an example, consider the discussion of the covariance matrix. Suppose we have defined three vectors
$\hat{x}, \hat{y}, \hat{z}$ with $n$ elements each. The covariance matrix is defined as
$$
\hat{\Sigma} = \begin{bmatrix} \sigma_{xx} & \sigma_{xy} & \sigma_{xz} \\
\sigma_{yx} & \sigma_{yy} & \sigma_{yz} \\
\sigma_{zx} & \sigma_{zy} & \sigma_{zz}
\end{bmatrix},
$$
where for example
$$
\sigma_{xy} =\frac{1}{n} \sum_{i=0}^{n-1}(x_i- \overline{x})(y_i- \overline{y}).
$$
The Numpy function **np.cov** calculates the covariance elements using the factor $1/(n-1)$ instead of $1/n$ since it assumes we do not have the exact mean values. For a more in-depth discussion of the covariance and covariance matrix and its meaning, we refer you to the lectures on statistics.
The following simple function uses the **np.vstack** function which takes each vector of dimension $1\times n$ and produces a $ 3\times n$ matrix $\hat{W}$
$$
\hat{W} = \begin{bmatrix} x_0 & x_1 & x_2 & \dots & x_{n-2} & x_{n-1} \\
y_0 & y_1 & y_2 & \dots & y_{n-2} & y_{n-1} \\
z_0 & z_1 & z_2 & \dots & z_{n-2} & z_{n-1}
\end{bmatrix},
$$
which in turn is converted into the $3\times 3$ covariance matrix
$\hat{\Sigma}$ via the Numpy function **np.cov()**. In our review of
statistical functions and quantities we will discuss more about the
meaning of the covariance matrix. Here we note that we can calculate
the mean value of each set of samples $\hat{x}$ etc using the Numpy
function **np.mean(x)**. We can also extract the eigenvalues of the
covariance matrix through the **np.linalg.eig()** function.
```
# Importing various packages
import numpy as np
n = 100
x = np.random.normal(size=n)
print(np.mean(x))
y = 4+3*x+np.random.normal(size=n)
print(np.mean(y))
z = x**3+np.random.normal(size=n)
print(np.mean(z))
W = np.vstack((x, y, z))
Sigma = np.cov(W)
print(Sigma)
Eigvals, Eigvecs = np.linalg.eig(Sigma)
print(Eigvals)
```
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import sparse
eye = np.eye(4)
print(eye)
sparse_mtx = sparse.csr_matrix(eye)
print(sparse_mtx)
x = np.linspace(-10,10,100)
y = np.sin(x)
plt.plot(x,y,marker='x')
plt.show()
```
## Gaussian Elimination
We start with the linear set of equations
$$
\mathbf{A}\mathbf{x} = \mathbf{w}.
$$
We assume also that the matrix $\mathbf{A}$ is non-singular and that the
matrix elements along the diagonal satisfy $a_{ii} \ne 0$. Simple $4\times 4 $ example
$$
\begin{bmatrix}
a_{11}& a_{12} &a_{13}& a_{14}\\
a_{21}& a_{22} &a_{23}& a_{24}\\
a_{31}& a_{32} &a_{33}& a_{34}\\
a_{41}& a_{42} &a_{43}& a_{44}\\
\end{bmatrix} \begin{bmatrix}
x_1\\
x_2\\
x_3 \\
x_4 \\
\end{bmatrix}
=\begin{bmatrix}
w_1\\
w_2\\
w_3 \\
w_4\\
\end{bmatrix}.
$$
or
$$
a_{11}x_1 +a_{12}x_2 +a_{13}x_3 + a_{14}x_4=w_1 \nonumber
$$
$$
a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + a_{24}x_4=w_2 \nonumber
$$
$$
a_{31}x_1 + a_{32}x_2 + a_{33}x_3 + a_{34}x_4=w_3 \nonumber
$$
$$
a_{41}x_1 + a_{42}x_2 + a_{43}x_3 + a_{44}x_4=w_4. \nonumber
$$
The basic idea of Gaussian elimination is to use the first equation to eliminate the first unknown $x_1$
from the remaining $n-1$ equations. Then we use the new second equation to eliminate the second unknown
$x_2$ from the remaining $n-2$ equations. With $n-1$ such eliminations
we obtain a so-called upper triangular set of equations of the form
$$
b_{11}x_1 +b_{12}x_2 +b_{13}x_3 + b_{14}x_4=y_1 \nonumber
$$
$$
b_{22}x_2 + b_{23}x_3 + b_{24}x_4=y_2 \nonumber
$$
$$
b_{33}x_3 + b_{34}x_4=y_3 \nonumber
$$
<!-- Equation labels as ordinary links -->
<div id="eq:gaussbacksub"></div>
$$
b_{44}x_4=y_4. \nonumber
\label{eq:gaussbacksub} \tag{1}
$$
We can solve this system of equations recursively starting from $x_n$ (in our case $x_4$) and proceed with
what is called a backward substitution.
This process can be expressed mathematically as
<!-- Equation labels as ordinary links -->
<div id="_auto1"></div>
$$
\begin{equation}
x_m = \frac{1}{b_{mm}}\left(y_m-\sum_{k=m+1}^nb_{mk}x_k\right)\quad m=n-1,n-2,\dots,1.
\label{_auto1} \tag{2}
\end{equation}
$$
To arrive at such an upper triangular system of equations, we start by eliminating
the unknown $x_1$ for $j=2,n$. We achieve this by multiplying the first equation by $a_{j1}/a_{11}$ and then subtract
the result from the $j$th equation. We assume obviously that $a_{11}\ne 0$ and that
$\mathbf{A}$ is not singular.
Our actual $4\times 4$ example reads after the first operation
$$
\begin{bmatrix}
a_{11}& a_{12} &a_{13}& a_{14}\\
0& (a_{22}-\frac{a_{21}a_{12}}{a_{11}}) &(a_{23}-\frac{a_{21}a_{13}}{a_{11}}) & (a_{24}-\frac{a_{21}a_{14}}{a_{11}})\\
0& (a_{32}-\frac{a_{31}a_{12}}{a_{11}})& (a_{33}-\frac{a_{31}a_{13}}{a_{11}})& (a_{34}-\frac{a_{31}a_{14}}{a_{11}})\\
0&(a_{42}-\frac{a_{41}a_{12}}{a_{11}}) &(a_{43}-\frac{a_{41}a_{13}}{a_{11}}) & (a_{44}-\frac{a_{41}a_{14}}{a_{11}}) \\
\end{bmatrix} \begin{bmatrix}
x_1\\
x_2\\
x_3 \\
x_4 \\
\end{bmatrix}
=\begin{bmatrix}
y_1\\
w_2^{(2)}\\
w_3^{(2)} \\
w_4^{(2)}\\
\end{bmatrix},
$$
or
$$
b_{11}x_1 +b_{12}x_2 +b_{13}x_3 + b_{14}x_4=y_1 \nonumber
$$
$$
a^{(2)}_{22}x_2 + a^{(2)}_{23}x_3 + a^{(2)}_{24}x_4=w^{(2)}_2 \nonumber
$$
$$
a^{(2)}_{32}x_2 + a^{(2)}_{33}x_3 + a^{(2)}_{34}x_4=w^{(2)}_3 \nonumber
$$
$$
a^{(2)}_{42}x_2 + a^{(2)}_{43}x_3 + a^{(2)}_{44}x_4=w^{(2)}_4, \nonumber
$$
<!-- Equation labels as ordinary links -->
<div id="_auto2"></div>
$$
\begin{equation}
\label{_auto2} \tag{3}
\end{equation}
$$
The new coefficients are
<!-- Equation labels as ordinary links -->
<div id="_auto3"></div>
$$
\begin{equation}
b_{1k} = a_{1k}^{(1)} \quad k=1,\dots,n,
\label{_auto3} \tag{4}
\end{equation}
$$
where each $a_{1k}^{(1)}$ is equal to the original $a_{1k}$ element. The other coefficients are
<!-- Equation labels as ordinary links -->
<div id="_auto4"></div>
$$
\begin{equation}
a_{jk}^{(2)} = a_{jk}^{(1)}-\frac{a_{j1}^{(1)}a_{1k}^{(1)}}{a_{11}^{(1)}} \quad j,k=2,\dots,n,
\label{_auto4} \tag{5}
\end{equation}
$$
with a new right-hand side given by
<!-- Equation labels as ordinary links -->
<div id="_auto5"></div>
$$
\begin{equation}
y_{1}=w_1^{(1)}, \quad w_j^{(2)} =w_j^{(1)}-\frac{a_{j1}^{(1)}w_1^{(1)}}{a_{11}^{(1)}} \quad j=2,\dots,n.
\label{_auto5} \tag{6}
\end{equation}
$$
We have also set $w_1^{(1)}=w_1$, the original vector element.
We see that the system of unknowns $x_1,\dots,x_n$ is transformed into an $(n-1)\times (n-1)$ problem.
This step is called forward substitution.
Proceeding with these substitutions, we obtain the
general expressions for the new coefficients
<!-- Equation labels as ordinary links -->
<div id="_auto6"></div>
$$
\begin{equation}
a_{jk}^{(m+1)} = a_{jk}^{(m)}-\frac{a_{jm}^{(m)}a_{mk}^{(m)}}{a_{mm}^{(m)}} \quad j,k=m+1,\dots,n,
\label{_auto6} \tag{7}
\end{equation}
$$
with $m=1,\dots,n-1$ and a
right-hand side given by
<!-- Equation labels as ordinary links -->
<div id="_auto7"></div>
$$
\begin{equation}
w_j^{(m+1)} =w_j^{(m)}-\frac{a_{jm}^{(m)}w_m^{(m)}}{a_{mm}^{(m)}}\quad j=m+1,\dots,n.
\label{_auto7} \tag{8}
\end{equation}
$$
This set of $n-1$ eliminations leads us to an upper triangular system of equations which is solved by back substitution.
If the arithmetic is exact and the matrix $\mathbf{A}$ is not singular, then the computed answer will be exact.
Even though the matrix elements along the diagonal are not zero,
numerically small numbers may appear and subsequent divisions may lead to large numbers, which, if added
to a small number, may yield losses of precision. Suppose for example that the new diagonal element $(a_{22}-a_{21}a_{12}/a_{11})$
turns out to be $-10^{-7}$ while the other elements are of order one. Dividing by this small number in the next elimination step produces
numbers of order $10^7$, and we may end up adding $10^7+1$. With single precision this results in $10^7$, i.e. the $1$ is lost.
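As an added illustration of the formulas above, here is a minimal, unpivoted implementation of the elimination step followed by back substitution. It assumes that the diagonal (pivot) elements never vanish and is meant as a sketch only, not as a replacement for library routines.

```
import numpy as np

def gaussian_elimination(A, w):
    """Solve Ax = w by forward elimination and back substitution (no pivoting)."""
    A = A.astype(np.float64).copy()
    w = w.astype(np.float64).copy()
    n = len(w)
    # elimination: remove x_m from equations m+1, ..., n-1
    for m in range(n - 1):
        for j in range(m + 1, n):
            factor = A[j, m] / A[m, m]
            A[j, m:] -= factor * A[m, m:]
            w[j] -= factor * w[m]
    # back substitution, starting from the last unknown
    x = np.zeros(n)
    for m in range(n - 1, -1, -1):
        x[m] = (w[m] - A[m, m + 1:] @ x[m + 1:]) / A[m, m]
    return x

A = np.array([[4.0, 3.0, 1.0], [2.0, 5.0, 2.0], [1.0, 2.0, 6.0]])
w = np.array([1.0, 2.0, 3.0])
print(gaussian_elimination(A, w))
print(np.linalg.solve(A, w))   # reference solution for comparison
```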
* Gaussian elimination, $O(2/3n^3)$ flops, general matrix
* LU decomposition into an upper triangular and a lower triangular matrix, $O(2/3n^3)$ flops, general matrix. Easily gives the inverse and the determinant, and can solve linear equations with back-substitution only, $O(n^2)$ flops
* Cholesky decomposition. Real symmetric or hermitian positive definite matrix, $O(1/3n^3)$ flops.
* Tridiagonal linear systems, important for differential equations. Normally positive definite and non-singular. $O(8n)$ flops for symmetric. Special case of banded matrices.
* Singular value decomposition
* the QR method will be discussed in chapter 7 in connection with eigenvalue systems. $O(4/3n^3)$ flops.
The LU decomposition method means that we can rewrite
the matrix $\mathbf{A}$ as the product of two matrices $\mathbf{L}$ and $\mathbf{U}$
where
$$
\begin{bmatrix}
a_{11} & a_{12} & a_{13} & a_{14} \\
a_{21} & a_{22} & a_{23} & a_{24} \\
a_{31} & a_{32} & a_{33} & a_{34} \\
a_{41} & a_{42} & a_{43} & a_{44}
\end{bmatrix}
= \begin{bmatrix}
1 & 0 & 0 & 0 \\
l_{21} & 1 & 0 & 0 \\
l_{31} & l_{32} & 1 & 0 \\
l_{41} & l_{42} & l_{43} & 1
\end{bmatrix}
\begin{bmatrix}
u_{11} & u_{12} & u_{13} & u_{14} \\
0 & u_{22} & u_{23} & u_{24} \\
0 & 0 & u_{33} & u_{34} \\
0 & 0 & 0 & u_{44}
\end{bmatrix}.
$$
LU decomposition forms the backbone of other algorithms in linear algebra, such as the
solution of linear equations given by
$$
a_{11}x_1 +a_{12}x_2 +a_{13}x_3 + a_{14}x_4=w_1 \nonumber
$$
$$
a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + a_{24}x_4=w_2 \nonumber
$$
$$
a_{31}x_1 + a_{32}x_2 + a_{33}x_3 + a_{34}x_4=w_3 \nonumber
$$
$$
a_{41}x_1 + a_{42}x_2 + a_{43}x_3 + a_{44}x_4=w_4. \nonumber
$$
The above set of equations is conveniently solved by using LU decomposition as an intermediate step.
The matrix $\mathbf{A}\in \mathbb{R}^{n\times n}$ has an LU factorization if the determinant
is different from zero. If the LU factorization exists and $\mathbf{A}$ is non-singular, then the LU factorization
is unique and the determinant is given by
$$
det\{\mathbf{A}\}=det\{\mathbf{LU}\}= det\{\mathbf{L}\}det\{\mathbf{U}\}=u_{11}u_{22}\dots u_{nn}.
$$
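We can verify this numerically with SciPy's **lu** factorization (an added illustration; note that partial pivoting introduces a permutation matrix whose determinant is $\pm 1$, so the sign has to be taken into account):

```
import numpy as np
from scipy.linalg import lu

A = np.array([[4.0, 3.0, 1.0], [2.0, 5.0, 2.0], [1.0, 2.0, 6.0]])
P, L, U = lu(A)                                 # A = P L U, with partial pivoting
print(np.linalg.det(P) * np.prod(np.diag(U)))   # product of the diagonal of U, times the sign of P
print(np.linalg.det(A))                         # reference value
```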
There are at least three main advantages with LU decomposition compared with standard Gaussian elimination:
* It is straightforward to compute the determinant of a matrix
* If we have to solve several sets of linear equations with the same matrix $\mathbf{A}$ but with different right-hand side vectors $\mathbf{w}$, the $O(n^3)$ decomposition is done only once and each additional solve requires only $O(n^2)$ flops.
* Computing the inverse is exactly such an operation, with the columns of the identity matrix as right-hand sides (see below).
With the LU decomposition it is rather
simple to solve a system of linear equations
$$
a_{11}x_1 +a_{12}x_2 +a_{13}x_3 + a_{14}x_4=w_1 \nonumber
$$
$$
a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + a_{24}x_4=w_2 \nonumber
$$
$$
a_{31}x_1 + a_{32}x_2 + a_{33}x_3 + a_{34}x_4=w_3 \nonumber
$$
$$
a_{41}x_1 + a_{42}x_2 + a_{43}x_3 + a_{44}x_4=w_4. \nonumber
$$
This can be written in matrix form as
$$
\mathbf{Ax}=\mathbf{w}.
$$
where $\mathbf{A}$ and $\mathbf{w}$ are known and we have to solve for
$\mathbf{x}$. Using the LU decomposition we write
$$
\mathbf{A} \mathbf{x} \equiv \mathbf{L} \mathbf{U} \mathbf{x} =\mathbf{w}.
$$
The previous equation can be calculated in two steps
$$
\mathbf{L} \mathbf{y} = \mathbf{w};\qquad \mathbf{Ux}=\mathbf{y}.
$$
To show that this is correct we use the LU decomposition
to rewrite our system of linear equations as
$$
\mathbf{LUx}=\mathbf{w},
$$
and since the determinant of $\mathbf{L}$ is equal to 1 (by construction
since the diagonals of $\mathbf{L}$ equal 1) we can use the inverse of
$\mathbf{L}$ to obtain
$$
\mathbf{Ux}=\mathbf{L^{-1}w}=\mathbf{y},
$$
which yields the intermediate step
$$
\mathbf{L^{-1}w}=\mathbf{y}
$$
and as soon as we have $\mathbf{y}$ we can obtain $\mathbf{x}$
through $\mathbf{Ux}=\mathbf{y}$.
For our four-dimensional example this takes the form
$$
y_1=w_1 \nonumber
$$
$$
l_{21}y_1 + y_2=w_2\nonumber
$$
$$
l_{31}y_1 + l_{32}y_2 + y_3 =w_3\nonumber
$$
$$
l_{41}y_1 + l_{42}y_2 + l_{43}y_3 + y_4=w_4. \nonumber
$$
and
$$
u_{11}x_1 +u_{12}x_2 +u_{13}x_3 + u_{14}x_4=y_1 \nonumber
$$
$$
u_{22}x_2 + u_{23}x_3 + u_{24}x_4=y_2\nonumber
$$
$$
u_{33}x_3 + u_{34}x_4=y_3\nonumber
$$
$$
u_{44}x_4=y_4 \nonumber
$$
This example shows the basis for the algorithm
needed to solve the set of $n$ linear equations.
The algorithm goes as follows
* Set up the matrix $\bf A$ and the vector $\bf w$ with their correct dimensions. This determines the dimensionality of the unknown vector $\bf x$.
* Then LU decompose the matrix $\bf A$ through a call to the function `ludcmp(double a, int n, int indx, double &d)`. This function returns the LU decomposed matrix $\bf A$, its determinant and the vector indx which keeps track of the number of interchanges of rows. If the determinant is zero, the matrix is singular and the problem is ill-conditioned.
* Thereafter you call the function `lubksb(double a, int n, int indx, double w)` which uses the LU decomposed matrix $\bf A$ and the vector $\bf w$ and returns $\bf x$ in the same place as $\bf w$. Upon exit the original content in $\bf w$ is destroyed. If you wish to keep this information, you should make a backup of it in your calling function.
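The `ludcmp` and `lubksb` functions referred to above are the C/Fortran routines from Numerical Recipes. As a hedged Python analogue (added here for illustration), SciPy's `lu_factor` and `lu_solve` play the same two roles:

```
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, 3.0, 1.0], [2.0, 5.0, 2.0], [1.0, 2.0, 6.0]])
w = np.array([1.0, 2.0, 3.0])

lu, piv = lu_factor(A)       # analogue of ludcmp: LU factors plus pivot bookkeeping
x = lu_solve((lu, piv), w)   # analogue of lubksb: forward and back substitution
print(x)
print(A @ x)                 # should reproduce w
```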
### LU Decomposition, the inverse of a matrix
If the inverse exists then
$$
\mathbf{A}^{-1}\mathbf{A}=\mathbf{I},
$$
the identity matrix. With an LU decomposed matrix we can rewrite the last equation as
$$
\mathbf{LU}\mathbf{A}^{-1}=\mathbf{I}.
$$
If we assume that the first column (that is column 1) of the inverse matrix
can be written as a vector with unknown entries
$$
\mathbf{A}_1^{-1}= \begin{bmatrix}
a_{11}^{-1} \\
a_{21}^{-1} \\
\dots \\
a_{n1}^{-1} \\
\end{bmatrix},
$$
then we have a linear set of equations
$$
\mathbf{LU}\begin{bmatrix}
a_{11}^{-1} \\
a_{21}^{-1} \\
\dots \\
a_{n1}^{-1} \\
\end{bmatrix} =\begin{bmatrix}
1 \\
0 \\
\dots \\
0 \\
\end{bmatrix}.
$$
In a similar way we can compute the unknown entries of the second column,
$$
\mathbf{LU}\begin{bmatrix}
a_{12}^{-1} \\
a_{22}^{-1} \\
\dots \\
a_{n2}^{-1} \\
\end{bmatrix}=\begin{bmatrix}
0 \\
1 \\
\dots \\
0 \\
\end{bmatrix},
$$
and continue till we have solved all $n$ sets of linear equations.
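A short added sketch of this column-by-column construction of the inverse, reusing a single LU factorization:

```
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, 3.0, 1.0], [2.0, 5.0, 2.0], [1.0, 2.0, 6.0]])
lu, piv = lu_factor(A)   # decompose once

n = A.shape[0]
Ainv = np.zeros((n, n))
for k in range(n):
    e = np.zeros(n)
    e[k] = 1.0                            # k-th column of the identity matrix
    Ainv[:, k] = lu_solve((lu, piv), e)   # k-th column of the inverse
print(Ainv @ A)   # should be (numerically) the identity matrix
```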
## Testing look-elsewhere effect from combining two searches for the same particle using Gaussian Processes
by Kyle Cranmer, Dec 17, 2015
The correction for 1d look-elsewhere effect (LEE) presented in
*Trial factors or the look elsewhere effect in high energy physics* by Ofer Vitells and Eilam Gross http://arxiv.org/abs/arXiv:1005.1891
It is often claimed that when one has two statistically independent searches for the same particle, then a peak in one tells you where to look for the other, thus eliminating the look-elsewhere effect. These searches might be from two different experiments (eg. ATLAS and CMS), two different decay modes, or two different time perieods (eg. run1 and run2). There are various flaws in this logic, as stressed by Bob Cousins in
[these slides](https://indico.cern.ch/event/233551/contribution/1/attachments/389867/542286/cousins_look_elsewhere_14feb2013.pdf).
This issue quickly becomes subtle as the intuitive procedure of using the location of the excess in one search to inform where to look in the other is not fully specified. A few things can be said about the intuitive procedure
* there is a symmetry of switching search1 and search2, so any approach that breaks this symmetry is going to have some weird properties. Our inference should not depend on the order!
* you can decide to break that symmetry by picking one of the searches (without seeing the results) in order to have the correct Type-I error rate (coverage), but that will lead to sub-optimal inference. (And Cousins has pointed out that there may be something insightful to say by connecting to "Buehler's betting game")
* if you use search 1 to *approximately* specify the location of the bump for search 2, then there is still a residual LEE for search 2 (though it will be considerably smaller)
* The combined result doesn't depend on the order, but clearly has a look-elsewhere effect that needs to be corrected
In what follows I will explore the behavior of the LEE correction for the combination of two searches (which can be trivially extended to more than two searches).
### Formalism
The starting point is to consider a search for a new particle with signal strength $\mu$ and unknown mass $\nu$ on top of a background distribution described by some nuisance parameters $\theta$. We perform the search by scanning over the mass $\nu$ and calculating the test statistic
\begin{equation}
q(\nu) = -2 \log \frac{ \max_{\theta} L(\mu=0, \nu, \theta)}{ \max_{\mu, \theta} L(\mu, \nu, \theta)}
\end{equation}
Assuming the background-only is true, $q(\nu)$ is a chi-square random field (with 1 degree of freedom). That means that, for any point $\nu$, the quantity $q(\nu)$ would have a chi-square distribution if you repeated the experiment many times.
The maximum local significance is based on $Z_{local} = \sqrt{q_{max}}$, where $q_{max} = \max_\nu q(\nu)$.
The correction from local to global significance is given by:
\begin{equation}
p_{global} = p_{local} + N \exp(-(q_{max}-u_0)/2)
\end{equation}
where $N$ is the average number of upcrossings above level $u_0$ (i.e. times that $q(\nu) > u_0$).
This $N$ characterizes the search -- searches with good mass resolution over a large mass range will have large values of $N$, while searches with poor mass resolution will have small values of $N$.
### Shortcut: Gaussian Processes
Creating many likelihood scans from pseudo-experiments (toy Monte Carlo) is somewhat time consuming, so here we make realizations of a chi-square random field by using a Gaussian Process.
The main trick we will use is that a chi-square distribution for one degree of freedom is the same as the distribution of $x^2$ if $x$ is normally distributed. As you might have guessed, a Gaussian Process (GP) is like a chi-square random field, but it is Gaussian-distributed at each point.
Note, the distributions are not independent at each point, there is some covariance. So if the $q(\nu)$ is high at one point, you can expect it to be high near by. We can control this behavior via the GP's kernel. In particular,
$K(\nu, \nu') = Cov[q(\nu)^2, q(\nu')^2]$. We can essentially specify what the mass resolution of our virtual search is via the length scale used in ther kernel.
For more on the theory of Gaussian Processes, the best resource is available for free online: [Rasmussen & Williams (2006)](http://www.gaussianprocess.org/gpml/). We will [`george`](http://dan.iel.fm/george/current/) -- a nice python package for Gaussian Processes (GP).
### Connection to the asymptotic approximation
The next major conceptual pilar for this work is the asymptotic approximations for the likelihood ratio. Wilks's theorem states that assuming background-only ($\mu=0$) the distribution of the best fit signal strength $\hat{\mu}$ follows a Gaussian distribution $G(\hat{\mu} | \mu=0, \sigma)$, where $\sigma^2 = \textrm{Var}[\mu]$. With that assumption $q(\mu) = (\hat{\mu}/\sigma)^2$, hence $q(\mu)$ is chi-square distributed. In this way of thinking, the GP is generating results for $(\hat{\mu}/\sigma)$ as a function of the mass parameter $\nu$.
This allows us to quickly do combinations on these toy results. Fundamentally, we wish to perform a likelihood combination at every mass point. This is additive in the log-likelihood ratio:
\begin{equation}
q_{12}(\nu) = q_1(\nu) + q_2(\nu) + const
\end{equation}
This is also a chi-square random field. In the asymptotic limit, the likelihood combination is equivalent to:
\begin{equation}
\hat{\mu}_{12}(\nu) = \frac{\hat{\mu}_1(\nu) \sigma_2^2(\nu) +\hat{\mu}_2(\nu)\sigma_1^2(\nu)}{\sigma_1^2(\nu)+\sigma_2^2(\nu)}
\end{equation}
together with variance
\begin{equation}
Var[\hat{\mu}_{12}(\nu)] \equiv\sigma_{12}^2(\nu) = \frac{\sigma_1^2(\nu)\,\sigma_2^2(\nu)}{\sigma_1^2(\nu)+\sigma_2^2(\nu)}
\end{equation}
The important part here is that we can also work out the kernel for the Gaussian process that describes $(\hat{\mu}_{12}/\sigma_{12})^2(\nu)$. In particular, the covariance between $\nu_1$ and $\nu_2$ of the GP for the combination can be derived from the covariance of the GPs for searches 1 and 2.
**Consider the simple case** where the two searches have the same constant sensitivity: $\sigma_1(\nu) = \sigma_2(\nu) = \sigma$. Then $\hat{\mu}_{12}(\nu) = [\hat{\mu}_1(\nu)+\hat{\mu}_2(\nu)]/2$ and $\sigma_{12}^2 = \sigma^2/2$. So the kernel for the GP that describes the combination is given by:
\begin{equation}
K_{12}(\nu, \nu') = Cov[(\hat{\mu}_{12}(\nu)/\sigma_{12})^2, (\hat{\mu}_{12}(\nu')/\sigma_{12})^2]
= \frac{1}{2} K_{1}(\nu, \nu') + \frac{1}{2} K_{2}(\nu, \nu')
\end{equation}
**Corollary** If two searches have the same mass resolution and statistical power, the effective $N$ needed to calculate the LEE for the combination is the same as for the individual searches. (This can be demonstrated with the code below by setting `ratio_of_length_scales=1.`.)
**Note** In what follows, I'll demonstrate this for the simple case, but this can be extended to have separate $\sigma(\nu)$ curves for searches 1 and 2.
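As a quick symbolic check of this equal-sensitivity case (added here as an illustration), `sympy` confirms that $(\hat{\mu}_{12}/\sigma_{12})^2 = (z_1+z_2)^2/2$ with $z_i = \hat{\mu}_i/\sigma$, which is exactly the combination `((z1+z2)**2)/2` used in the code further below.

```python
from sympy import symbols, simplify

mu1, mu2, sigma = symbols("mu1 mu2 sigma", positive=True)

# inverse-variance combination for sigma_1 = sigma_2 = sigma
mu12 = (mu1*sigma**2 + mu2*sigma**2)/(sigma**2 + sigma**2)   # = (mu1 + mu2)/2
sigma12_sq = sigma**2*sigma**2/(sigma**2 + sigma**2)         # = sigma**2/2

q12 = mu12**2/sigma12_sq
z1, z2 = mu1/sigma, mu2/sigma
print(simplify(q12 - (z1 + z2)**2/2))   # prints 0
```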
### Basic outline of what will be shown below
We will create three different Gaussian Processes:
* `gp1` will generate results from experiment 1
* `gp2` will generate results from experiment 2
* we will explicitly combine the results from experiments 1 and 2
* `gp12` will be shown to have the same behavior as the combination of `gp1` and `gp2`
```python
%pylab inline --no-import-all
#plt.rc('text', usetex=True)
plt.rcParams['figure.figsize'] = (6.0, 6.0)
#plt.rcParams['savefig.dpi'] = 60
```
Populating the interactive namespace from numpy and matplotlib
```python
import george
from george.kernels import ExpSquaredKernel
from scipy.stats import chi2, norm
```
```python
length_scale_of_correaltion=1.
ratio_of_length_scales=4.
kernel1 = ExpSquaredKernel(length_scale_of_correaltion, ndim=1)
kernel2 = ExpSquaredKernel(ratio_of_length_scales**2*length_scale_of_correaltion, ndim=1)
kernel12 = 0.5*kernel1+0.5*kernel2
```
```python
# Create the Gaussian process
# gp = george.GP(kernel)
gp1 = george.GP(kernel1, solver=george.HODLRSolver) #faster
gp2 = george.GP(kernel2, solver=george.HODLRSolver) #faster
gp12 = george.GP(kernel12, solver=george.HODLRSolver) #faster
```
```python
n_scan_points=250
x = np.linspace(0,100,n_scan_points)
```
```python
# slow part: pre-compute internal stuff for the GP
gp1.compute(x)
gp2.compute(x)
gp12.compute(x)
```
Show an example of a realization of the Gaussian process for $z(\nu) = (\hat{\mu}/\sigma)$ and $q(\nu) = z^2(\nu)$
Now lets histogram the values of the random field.
Don't get confused here... if you pick a single point and histogram the value of over many instances, you expect a Gaussian. However, for a single instance, you don't expect the histogram for the value of the field to be Gaussian (because of the correlations). Thought experiments: if you make `length_scale_of_correaltion` very small, then each point is essentially independent and you do expect to see a Gaussian; however, if `length_scale_of_correaltion` is very large then you expect the field to be nearly constant and the histogram below would be a delta function.
```python
# evaluate one realization of the GP
z = gp1.sample(x)
# plot the chi-square random field
plt.subplot(121)
plt.plot(x,z)
plt.ylabel(r'$z(\nu)$')
plt.xlabel(r'$\nu$')
plt.subplot(122)
plt.plot(x,z**2)
plt.ylabel(r'$q(\nu)$')
plt.xlabel(r'$\nu$')
```
## Define some quick helper functions
```python
def q_to_pvalue(q):
return (1.-chi2.cdf(q, 1))/2 #divide by 2 for 1-sided test
def pvalue_to_significance(p):
return -norm.ppf(p)
def significance_to_pvalue(Z):
return 1.-norm.cdf(Z)
```
```python
def num_upcrossings(z):
"""count number of times adjacent bins change between 0,1"""
return np.sum((z-np.roll(z,1))**2)/2
```
```python
def global_pvalue(u,u0, n):
#return (1.-chi2.cdf(u, 1))/2. + np.exp(-(u-u0)/2)*n #1-sided p-value
return (1.-chi2.cdf(u, 1)) + np.exp(-(u-u0)/2)*n # 2-sided p-value
```
### Define the threshold for counting upcrossings
```python
u1 = 0.5
```
Check that the code to count upcrossings and the LEE correction is working
```python
n_samples = 1000
n_plots = 3
plt.figure(figsize=(9,n_plots*3))
z_array = gp1.sample(x,n_samples)
n_up = np.zeros(n_samples)
for scan_no, z in enumerate(z_array):
scan = z**2
exc1 = (scan>u1) + 0. #add 0. to convert from bool to double
n_up[scan_no] = num_upcrossings(exc1)
if scan_no < n_plots:
plt.subplot(n_plots,2,2*scan_no+1)
plt.plot(x,scan)
plt.plot([0,100],[u1,u1], c='r')
plt.subplot(n_plots,2,2*scan_no+2)
plt.plot(x,exc1)
plt.ylim(-.1,1.1)
print('experiment %d has %d upcrossings' %(scan_no, n_up[scan_no]))
n_av = np.mean(n_up)
print("average number of upcrossings in %d experiments is %f" %(n_samples, n_av))
```
### Make prediction for global p-value for q_max distribution
```python
u = np.linspace(5,25,100)
global_p = global_pvalue(u,u1,n_av)
```
### Generate many toy experiments (via the Gaussian Process), find maximum local significance for each, and check the prediction for the LEE-corrected global p-value
```python
n_samples = 10000
z_array = gp1.sample(x,n_samples)
q_max = np.zeros(n_samples)
for scan_no, z in enumerate(z_array):
scan = z**2
q_max[scan_no] = np.max(scan)
```
```python
bins, edges, patches = plt.hist(q_max, bins=30)
icdf = 1.-np.cumsum(bins/n_samples)
icdf = np.hstack((1.,icdf))
icdf_error = np.sqrt(np.cumsum(bins))/n_samples
icdf_error = np.hstack((0.,icdf_error))
plt.xlabel('$q_{max}$')
plt.ylabel('counts / bin')
```
```python
# plot the p-value
plt.plot(edges,icdf, c='r', label='toys')
plt.errorbar(edges,icdf,yerr=icdf_error)
plt.plot(u, global_p, label='prediction')
plt.xlabel('$u$')
plt.ylabel('$P(q_{max} >u)$')
plt.legend(('prediction','toys'))
#plt.ylabel('P(q>u)')
plt.ylim(1E-3,10)
plt.xlim(0,25)
plt.semilogy()
```
Wow! that was awesome! Go math!
# Part 2
## Now let's do some experiments combining two searches
```python
n_samples = 10000
z_array1 = gp1.sample(x,n_samples)
z_array2 = gp2.sample(x,n_samples)
n_av1, n_av2, n_av12 = 0., 0., 0.
q_max = np.zeros((n_samples,3))
q_10 = np.zeros((n_samples,3))
n_plots = 3
plt.figure(figsize=(9,n_plots*3))
scan_no=0
for z1, z2 in zip(z_array1,z_array2):
scan1 = z1**2
scan2 = z2**2
scan12 = ((z1+z2)**2)/2 # This is where the combination happens
exc1 = (scan1>u1) + 0. #add 0. to convert from bool to double
exc2 = (scan2>u1) + 0. #add 0. to convert from bool to double
exc12 = (scan12>u1) + 0. #add 0. to convert from bool to double
if scan_no < n_plots:
aspect = 1.
#plt.subplot(n_plots,3,3*scan_no+1)
plt.subplot(n_plots,1,1*scan_no+1)
plt.plot(x,scan1, c='r', label='search 1')
#plt.subplot(n_plots,3,3*scan_no+2)
plt.subplot(n_plots,1,1*scan_no+1)
plt.plot(x,scan2, c='g', label='search 2')
#plt.subplot(n_plots,3,3*scan_no+3)
plt.subplot(n_plots,1,1*scan_no+1)
plt.plot(x,scan12, c='b', label='combined')
plt.legend(('search 1', 'search 2', 'combined'))
q_max[scan_no,:] = [np.max(scan1), np.max(scan2), np.max(scan12)]
q_10[scan_no,:] = [scan1[10],scan2[10], scan12[10]]
#print num_upcrossings(exc1)
n_av1 += 1.*num_upcrossings(exc1)/n_samples
n_av2 += 1.*num_upcrossings(exc2)/n_samples
n_av12 += 1.*num_upcrossings(exc12)/n_samples
scan_no +=1
print "n_av search 1, search 2, combined = ", n_av1, n_av2, n_av12
```
```python
#Simple scaling:
print "check simple scailing rule: prediction=%f, observed=%f" %(np.sqrt((n_av1**2+n_av2**2)/2), n_av12)
```
check simple scaling rule: prediction=17.972468, observed=17.961900
## Now let's test the prediction that `gp12` has the same behavior as the explicit combination of search 1 and search 2.
```python
z_array12 = gp12.sample(x,n_samples)
q12_max = np.zeros((n_samples))
n_up = np.zeros(n_samples)
for scan_no, z12 in enumerate(z_array12):
scan12 = (z12)**2
q12_max[scan_no] = np.max(scan12)
n_up[scan_no] = num_upcrossings((scan12 > u1)+0.)
print("average number of upcrossings for combined GP = %f" %(np.mean(n_up)))
```
average number of upcrossings for combined GP = 18.013400
Compare $q_{max}$ distribution from direct combination with the prediction from gp12
```python
bins, edges, patches = plt.hist(q_max[:,2], bins=50, alpha=0.1, color='r', label='explicit combination')
bins, edges, patches = plt.hist(q12_max, bins=edges, alpha=0.1, color='b', label='predicted')
plt.ylabel('counts/bin')
plt.xlabel('$q_{max}$')
plt.legend(('explicit combination', 'predicted'))
```
```python
u = np.linspace(5,25,100)
global_p = global_pvalue(u,u1,np.mean(n_up))
icdf = 1.-np.cumsum(bins/n_samples)
icdf = np.hstack((1.,icdf))
icdf_error = np.sqrt(np.cumsum(bins))/n_samples
icdf_error = np.hstack((0.,icdf_error))
plt.plot(edges,icdf, c='r', label='toys')
plt.errorbar(edges,icdf,yerr=icdf_error)
plt.plot(u, global_p, label='prediction')
plt.xlabel('$u$')
plt.ylabel('$P(q_{max} >u)$')
plt.legend(('prediction','toys'))
#plt.ylabel('P(q>u)')
plt.ylim(1E-3,10)
plt.xlim(0,25)
plt.semilogy()
```
Bingo!
```python
```
```python
%matplotlib inline
from IPython.display import display,Math
from sympy import *
init_session()
```
```python
from IPython.display import HTML
from ipywidgets import interact
from ipywidgets import interact,Dropdown,IntSlider
@interact
def _(p="202112",n="3456"):
p = p[:6]
pstr = "{:06d}".format(int(p))
p = int(pstr)
n = n[:4]
nstr = "{:04d}".format(int(n))
n = int(nstr)
m = pstr + nstr
clist = ["A","B","C","H","K","M","R","U","X","Y","Z"]
sum = 0
for i in range(len(m)):
sum += int(m[i])*(i+1)
c = sum%11
m = m + clist[c]
print("会場:{}\n番号:{}\nのチェックディジットは{}なので\n受験番号:{}\nとなります".format(p,n,c,m))
```
The following was created using the code found [here](https://ja.wikipedia.org/wiki/Luhn%E3%82%A2%E3%83%AB%E3%82%B4%E3%83%AA%E3%82%BA%E3%83%A0)
(the Japanese Wikipedia article on the Luhn algorithm).
```python
def check_number(digits):
_sum = 0
alt = False
for d in reversed(str(digits)):
d = int(d)
assert 0 <= d <= 9
if alt:
d *= 2
if d > 9:
d -= 9
_sum += d
alt = not alt
return (_sum % 10) == 0
from IPython.display import HTML
from ipywidgets import interact
from ipywidgets import interact,Dropdown,IntSlider
@interact
def _(n="49927398716"):
check = check_number(n)
if check:
print("{}は正しい".format(n))
else:
print("{}は正しくない".format(n))
```
```python
```
# Maths Overview
Trivially, notebooks provide us with a simple editing environment for combining markdown text, simple inline LaTeX and LaTeX blocks, and code cells prefixed with the `%%latex` block cell magic.
This also allows us to use notebooks as a medium for creating content that blends narrative text with mathematical notation.
```{note}
Within a notebook user interface, native support for LaTeX inline in markdown cells is limited to that subset of LaTeX that can be parsed by the MathJax parser.
LaTeX parsing magics and code output transclusion can be used to provide access to a full featured LaTeX parser.
```
In addition, code cells allow us to perform mathematical computations and generate graphical outputs.
In a complete one piece generative document flow publishing system, where we guarantee the correctness of calculations and formal arguments, as well as the correctness of output graphics in relation to the body of the content, we ideally need to find a way to relate the (symbolic) mathematical content to the code that is executed.
Using a symbolic maths package such as `sympy`, we can create symbolic computational expressions that can be used to calculate (compute) expressions at a symbolic level as well as rendering those expressions in mathematical form using LaTeX (Mathjax). For rendering integrated one piece content in Jupyter book, the Python `myst_nb.glue()` provides a means for inline code outputs, but this requires a Python kernel. For bookdown workflows, outputs from all supported languages can be inlined [ *TO DO — CHECK* ].
If tight integration with the text is not required, or if markdown output can be generated from code, computation using a wide range of other languages is enabled by installing the appropriate Jupyter kernel ([curated list of Jupyter kernels](https://github.com/jupyter/jupyter/wiki/Jupyter-kernels)). Several kernels are available that are particularly suited to a range of mathematics related activities such as statistical computing, symbolic maths and numerical computation. For example:
- [`R`](https://irkernel.github.io/) statistical computing and graphics;
- [`Stata`](https://github.com/TiesdeKok/ipystata) statistical computing;
- [`SageMath`](https://doc.sagemath.org/html/en/installation/launching.html#setting-up-sagemath-as-a-jupyter-kernel-in-an-existing-jupyter-notebook-or-jupyterlab-installation) computer algebra system;
- [`Maxima`](https://github.com/robert-dodier/maxima-jupyter) computer algebra system;
- [`Octave`](https://github.com/Calysto/octave_kernel) numerical computation;
- [`SciLab`](https://github.com/calysto/scilab_kernel) numerical computation;
- [*Matlab*](https://github.com/calysto/matlab_kernel) mathematical computing;
- [*Wolfram Language*](https://github.com/WolframResearch/WolframLanguageForJupyter) mathematical computing;
- [`Gnuplot`](https://github.com/has2k1/gnuplot_kernel) charts.
```{note}
We can also write markdown in a code cell by converting the code cell into a *de facto* markdown cell using the `%%markdown` block magic.
```
## Rendering equations Using MathJax
Equations can be rendered as a block using MathJax in a markdown cell.
$$
\begin{align}
\sqrt{3x-1}+(1+x^2)
\end{align}
$$
See this third party [Typesetting Equations](https://nbviewer.jupyter.org/github/ipython/ipython/blob/4.0.x/examples/Notebook/Typesetting%20Equations.ipynb) demonstration notebook for further examples.
MathJax content can also be rendered inline. For example, we can include the expression $\sqrt{3x-1}+(1+x^2)$ embedded *within* a line of text.
## Rendering equations from `sympy`
Guaranteeing the truth of a derived mathematical expression is often difficult if multiple steps of working are required, and is further complicated, particularly if the expression is a complicated one.
Using a symbolic maths package such as `sympy` allows derived expressions to be generated automatically and then embedded into book output.
*The following example is taken from ["Technical writing: using math", Nicolás Guarín-Zapata](https://nicoguaro.github.io/posts/tech_writing_math/).*
Given the expression:
```python
from sympy import symbols, exp, sin
x = symbols("x")
f = exp(-x**2)*sin(3*x)
f
```
we can find its second derivative as:
```python
from sympy import diff
fxx = diff(f, x, 2)
fxx
```
Since the second derivative is calculated, and the equation is then rendered to a *LaTeX* form automatically, we know that the expression is correct (although it may not be in the form we require).
### Rendering Matrices
If we have a `numpy` array, we can render it as a LaTeX styled matrix using a Python package such as `numpyarray_to_latex`.
For example, here's a random 4 x 5 array with round brackets (the default, although other bracket styles are customisable):
```python
#https://github.com/benmaier/numpyarray_to_latex
#%pip install --upgrade numpyarray_to_latex
import numpy as np
from numpyarray_to_latex.jupyter import to_jup
array = np.random.randn(4,5)
to_jup(array)
```
$\displaystyle \left(
\begin{array}{}
0.98 & -0.76 & -0.82 & 1.12 & 0.12\\
-1.26 & -1.71 & -1.67 & 0.49 & 1.22\\
-1.04 & 0.17 & 0.18 & -0.59 & 1.21\\
0.05 & 0.27 & 1.85 & -1.15 & 0.17
\end{array}
\right)$
We can access the raw LaTeX if required:
```python
from numpyarray_to_latex import to_ltx
latex_txt = to_ltx(array)
print(latex_txt)
```
\left(
\begin{array}{}
0.9752 & -0.7629 & -0.8192 & 1.1152 & 0.1160\\
-1.2629 & -1.7068 & -1.6683 & 0.4853 & 1.2185\\
-1.0446 & 0.1705 & 0.1847 & -0.5852 & 1.2059\\
0.0474 & 0.2737 & 1.8477 & -1.1480 & 0.1711
\end{array}
\right)
Matrices can also be rendered via `sympy` (using square brackets), as can the results of matrix calculations. For example, let's cast the above array as a `sympy.Matrix` and add it to itself:
```python
from sympy import Matrix
Matrix(array) + Matrix(array)
```
$\displaystyle \left[\begin{matrix}1.95031844930636 & -1.52573498570827 & -1.63848094559338 & 2.23036030020092 & 0.232069228555355\\-2.52584160621578 & -3.41356387618327 & -3.33667226592755 & 0.970506979299136 & 2.43708864310567\\-2.08929947857012 & 0.340943496173498 & 0.369321669583589 & -1.17043961672008 & 2.41184728145124\\0.0947048905289055 & 0.547414340249613 & 3.69542839395078 & -2.29598669946454 & 0.342193659945731\end{matrix}\right]$
## Embedding LaTex / TikZ Graphical Outputs
We can use the [`ipython_magic_tikz`](https://github.com/innovationOUtside/ipython_magic_tikz) magic to provide access to a TikZ/LaTeX parser, allowing us to generate diagrams from [TikZ](https://www.overleaf.com/learn/latex/TikZ_package) scripts.
```python
#%pip install git+https://github.com/innovationOUtside/ipython_magic_tikz.git
%load_ext tikz_magic
```
```python
%%tikz
\usetikzlibrary{shapes.geometric, calc}
\def\numsides{7} % regular polygon sides
\node (a)
[draw, blue!0!black,rotate=90,minimum size=3cm,regular polygon, regular polygon sides=\numsides] at (0, 0) {};
\foreach \x in {1,2,...,\numsides}
\fill (a.corner \x) circle[radius=.5pt];
\foreach \x in {1,2,...,\numsides}{
\draw [red,dashed, shorten <=-0.5cm,shorten >=-0.5cm](a.center) -- (a.side \x);
\draw [red,dashed, shorten <=-0.5cm,shorten >=-0.5cm](a.center) -- (a.corner \x);}
```
# Deep Learning
- 01 What is "Deep Learning?"
- 02 Why Deep Learning?
- 03 The Perceptron (Neural units)
- 04 Shallow Neural Network
- 05 Activation functions
- 06 Loss functions
- 07 Cross entropy
## What is "Deep Learning?"
[Deep learning](https://en.wikipedia.org/wiki/Deep_learning) is the stacking of artificial neural networks (ANNs) to create stacked neural networks, [deep belief networks](https://en.wikipedia.org/wiki/Deep_belief_network), [recurrent neural networks](https://en.wikipedia.org/wiki/Recurrent_neural_network) and deep generative models. A deep neural network (DNN) is an ANN with multiple hidden layers between the input and output layers.
An ANN is based on a collection of connected units called artificial neurons (analogous to neurons in a biological brain). Each connection (analogous to a synapse) between neurons can transmit a signal to another neuron. The receiving (postsynaptic) neuron can process the signal(s) and then signal downstream neurons connected to it.
_Deep learning is basically the deep stacking of artificial neurons to learn complex models of data._
## Why Deep Learning?
- It works
Deep learning and neural networks are increasingly important concepts, as demonstrated through their performance on difficult problems in computer vision, medical diagnosis, natural language processing and many other domains.
- Learns feature selection
Deep learning algorithms are unique in that they try to learn latent features from data, as opposed to traditional machine learning where feature selection is typically handcrafted.
```python
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scipy import stats
import seaborn as sns
import warnings
import random
from datetime import datetime
random.seed(datetime.now())
warnings.filterwarnings('ignore')
# Make plots larger
plt.rcParams['figure.figsize'] = (10, 6)
```
```python
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD
```
Using TensorFlow backend.
## MNIST data
The [MNIST database](http://yann.lecun.com/exdb/mnist/) of handwritten digits, available from this page, has a training set of 60,000 examples, and a test set of 10,000 examples. It is a subset of a larger set available from NIST. The digits have been size-normalized and centered in a fixed-size image.
It is a good database for people who want to try learning techniques and pattern recognition methods on real-world data while spending minimal efforts on preprocessing and formatting.
```python
# load the MNIST dataset
(X_train, y_train), (X_test, y_test) = mnist.load_data()
```
```python
# plot first 4 images as gray scale
plt.subplot(221)
plt.imshow(X_train[0], cmap=plt.get_cmap('gray'))
plt.subplot(222)
plt.imshow(X_train[1], cmap=plt.get_cmap('gray'))
plt.subplot(223)
plt.imshow(X_train[2], cmap=plt.get_cmap('gray'))
plt.subplot(224)
plt.imshow(X_train[3], cmap=plt.get_cmap('gray'))
# show the plot
plt.show()
```
The training dataset is structured as a 3-dimensional array: the number of images, the image width and the image height (28×28 pixels per image).
```python
## 60K 28×28 sized training images
print (X_train.shape)
```
(60000, 28, 28)
```python
# plot 4 more images as gray scale
plt.subplot(221)
plt.imshow(X_train[55], cmap=plt.get_cmap('gray'))
plt.subplot(222)
plt.imshow(X_train[555], cmap=plt.get_cmap('gray'))
plt.subplot(223)
plt.imshow(X_train[5555], cmap=plt.get_cmap('gray'))
plt.subplot(224)
plt.imshow(X_train[55555], cmap=plt.get_cmap('gray'))
# show the plot
plt.show()
```
[But what *is* a Neural Network?](https://youtu.be/aircAruvnKk)
## The Perceptron (Neural units)
The [perceptron](https://en.wikipedia.org/wiki/Perceptron) is an algorithm for supervised learning of binary classifiers (functions that can decide whether an input, represented by a vector of numbers, belongs to some specific class or not). It is a type of linear classifier, i.e. a classification algorithm that makes its predictions based on a linear predictor function combining a set of weights with the feature vector.
The perceptron algorithm dates back to the late 1950s, and is the basis of [artificial neural networks](https://en.wikipedia.org/wiki/Artificial_neural_network).
Definition
----------
In the modern sense, the perceptron is an algorithm for learning a
binary classifier: a function that maps its input (a real-valued
[vector]) to an output value $f(x)$ (a single [binary] value):
$$f(x) = \begin{cases}1 & \text{if }\ w \cdot x + b > 0\\0 & \text{otherwise}\end{cases}$$
where $w$ is a vector of real-valued weights, $w \cdot x$ is the dot product $\sum_{i=1}^m w_i x_i$, $m$ is the number of inputs to the perceptron, and $b$ is the *bias*. The bias shifts the decision boundary away from the origin and does not depend on any input value.
The value of $f(x)$ (0 or 1) is used to classify $x$ as either a positive or a negative instance, in the case of a binary classification problem. If $b$ is negative, then the weighted combination of inputs must produce a positive value greater than $|b|$ in order to push the classifier neuron over the 0 threshold. Spatially, the bias alters the position (though not the orientation) of the decision boundary. The perceptron learning algorithm does not terminate if the learning set is not linearly separable. If the vectors are not linearly separable, learning will never reach a point where all vectors are classified properly. The most famous example of the perceptron's inability to solve problems with linearly nonseparable vectors is the Boolean exclusive-or problem. The solution spaces of decision boundaries for all binary functions and learning behaviors have been studied in the literature.
In the context of neural networks, a perceptron is an _artificial
neuron_ using the [Heaviside step function](https://en.wikipedia.org/wiki/Heaviside_step_function) as the activation function.
The perceptron algorithm is also termed the **single-layer perceptron**,
to distinguish it from a multilayer perceptron, which is a misnomer
for a more complicated neural network. As a linear classifier, the
single-layer perceptron is the simplest [feedforward neural network](https://en.wikipedia.org/wiki/Feedforward_neural_network).
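As a quick illustration, here is a minimal NumPy sketch of the decision rule above. The weights and bias (implementing a logical AND gate) are illustrative values, not anything defined elsewhere in this notebook.

```python
import numpy as np

def perceptron_predict(x, w, b):
    """Heaviside-step perceptron: 1 if w.x + b > 0, else 0."""
    return 1 if np.dot(w, x) + b > 0 else 0

# Illustrative values: a 2-input perceptron implementing logical AND
w = np.array([1.0, 1.0])
b = -1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, perceptron_predict(np.array(x), w, b))
```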
Learning algorithm
------------------
Below is an example of a learning algorithm for a (single-layer)
perceptron. For [multilayer perceptrons], where a hidden layer exists,
more sophisticated algorithms such as [backpropagation] must be used.
Alternatively, methods such as the [delta rule] can be used if the
function is non-linear and differentiable, although the one below will
work as well.
When multiple perceptrons are combined in an artificial neural network,
each output neuron operates independently of all the others; thus,
learning each output can be considered in isolation.
### Definitions
We first define some variables:
- $y = f(\mathbf{z})$ denotes the *output* from the perceptron for an
input vector $\mathbf{z}$.
- $D = \{(\mathbf{x}_1,d_1),\dots,(\mathbf{x}_s,d_s)\}$ is the
*training set* of $s$ samples, where:
- $\mathbf{x}_j$ is the $n$-dimensional input vector.
- $d_j$ is the desired output value of the perceptron for that
input.
We show the values of the features as follows:
- $x_{j,i}$ is the value of the $i$th feature of the $j$th training
*input vector*.
- $x_{j,0} = 1$.
To represent the weights:
- $w_i$ is the $i$th value in the *weight vector*, to be multiplied by
the value of the $i$th input feature.
- Because $x_{j,0} = 1$, the $w_0$ is effectively a bias that we use
instead of the bias constant $b$.
To show the time-dependence of $\mathbf{w}$, we use:
- $w_i(t)$ is the weight $i$ at time $t$.
Unlike other linear classification algorithms such as [logistic
regression], there is no need for a *learning rate* in the perceptron
algorithm. This is because multiplying the update by any constant simply
rescales the weights but never changes the sign of the prediction.
### Steps
1. Initialize the weights and the threshold. Weights may
be initialized to 0 or to a small random value. In the example below, we
use 0.
2. For each example $(\mathbf{x}_j, d_j)$ in our training set $D$, perform the following steps over the input $\mathbf{x}_j$ and desired output $d_j$:

   a. Calculate the actual output:

   $$\begin{align}
   y_j(t) &= f[\mathbf{w}(t)\cdot\mathbf{x}_j] \\
   &= f[w_0(t)x_{j,0} + w_1(t)x_{j,1} + w_2(t)x_{j,2} + \dotsb + w_n(t)x_{j,n}]
   \end{align}$$

   b. Update the weights:

   $$w_i(t+1) = w_i(t) + (d_j - y_j(t))\, x_{j,i}, \quad \text{for all features } 0 \leq i \leq n.$$
**offline learning**
For offline learning, the step 2 may be repeated until the iteration error $\frac{1}{s} \sum_{j=1}^s |d_j - y_j(t)|$ is less than a user-specified error threshold $\gamma$, or a predetermined number of iterations have been completed. The algorithm updates the weights after steps 2a and 2b. These weights are immediately applied to a pair in the training set, and subsequently updated, rather than waiting until all pairs in the training set have undergone these steps.
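Here is a minimal NumPy sketch of steps 1 and 2 above, with the bias folded into the weight vector via $x_{j,0} = 1$ as described. The toy OR dataset and the number of epochs are illustrative choices, not part of the original text.

```python
import numpy as np

def train_perceptron(X, d, epochs=10):
    """Steps 1-2 above: weights start at 0; the bias is folded in as w_0 via x_0 = 1."""
    X = np.hstack([np.ones((X.shape[0], 1)), X])    # prepend x_{j,0} = 1
    w = np.zeros(X.shape[1])                        # step 1: initialise weights to 0
    for _ in range(epochs):
        for x_j, d_j in zip(X, d):                  # step 2: loop over the training set
            y_j = 1 if np.dot(w, x_j) > 0 else 0    # 2a: calculate the actual output
            w += (d_j - y_j) * x_j                  # 2b: update the weights
    return w

# Toy linearly separable data: logical OR
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
d = np.array([0, 1, 1, 1])
print(train_perceptron(X, d))   # learned [bias, w_1, w_2]
```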
Multiclass perceptron
---------------------
Like most other techniques for training linear classifiers, the
perceptron generalizes naturally to multiclass classification. Here,
the input $x$ and the output $y$ are drawn from arbitrary sets. A
feature representation function $f(x,y)$ maps each possible input/output
pair to a finite-dimensional real-valued feature vector. As before, the
feature vector is multiplied by a weight vector $w$, but now the
resulting score is used to choose among many possible outputs:
$$\hat y = \operatorname{argmax}_y f(x,y) \cdot w.$$

Learning again
iterates over the examples, predicting an output for each, leaving the
weights unchanged when the predicted output matches the target, and
changing them when it does not. The update becomes:
$$w_{t+1} = w_t + f(x, y) - f(x,\hat y).$$
This multiclass feedback formulation reduces to the original perceptron
when $x$ is a real-valued vector, $y$ is chosen from $\{0,1\}$, and
$f(x,y) = y x$.
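A minimal sketch of this multiclass update, assuming the common block feature map in which $f(x,y)$ simply places $x$ in the slot for class $y$ (equivalently, one weight vector per class). The blob data below is purely illustrative.

```python
import numpy as np

def multiclass_perceptron(X, y, n_classes, epochs=10):
    """Structured-perceptron update w += f(x, y) - f(x, y_hat),
    with one weight vector per class (block feature map)."""
    W = np.zeros((n_classes, X.shape[1]))
    for _ in range(epochs):
        for x_i, y_i in zip(X, y):
            y_hat = np.argmax(W @ x_i)     # predict: argmax_y f(x, y) . w
            if y_hat != y_i:               # weights unchanged on a correct prediction
                W[y_i] += x_i              # strengthen the true class
                W[y_hat] -= x_i            # weaken the wrongly predicted class
    return W

# Illustrative data: three Gaussian blobs in 2-D, with a constant feature of 1
# prepended so that each class gets its own bias weight
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(20, 2)) for c in [(0, 0), (3, 0), (0, 3)]])
X = np.hstack([np.ones((X.shape[0], 1)), X])
y = np.repeat([0, 1, 2], 20)
W = multiclass_perceptron(X, y, n_classes=3)
print((np.argmax(X @ W.T, axis=1) == y).mean())   # training accuracy
```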
For certain problems, input/output representations and features can be
chosen so that $\mathrm{argmax}_y f(x,y) \cdot w$ can be found
efficiently even though $y$ is chosen from a very large or even infinite
set.
In recent years, perceptron training has become popular in the field of
[natural language processing](https://en.wikipedia.org/wiki/Natural_language_processing) for such tasks as [part-of-speech tagging](https://en.wikipedia.org/wiki/Part-of-speech_tagging)
and [syntactic parsing](https://en.wikipedia.org/wiki/Parsing).
## Artificial neural network
[Artificial neural networks](https://en.wikipedia.org/wiki/Artificial_neural_network) (**ANNs**) or **[connectionist] systems**
are computing systems inspired by the biological neural networks that
constitute animal brains. Such systems learn (progressively improve
performance) to do tasks by considering examples, generally without
task-specific programming. For example, in image recognition, they might
learn to identify images that contain cats by analyzing example images
that have been manually labeled as “cat” or “no cat” and using the
analytic results to identify cats in other images. They have found most
use in applications difficult to express in a traditional computer
algorithm using rule-based programming.
An ANN is based on a collection of connected units called artificial neurons (analogous to neurons in a biological brain). Each connection (synapse) between neurons can transmit a signal to another neuron. The receiving (postsynaptic) neuron can process the signal(s)
and then signal downstream neurons connected to it. Neurons may have
state, generally represented by [real numbers], typically between 0 and 1.
Neurons and synapses may also have a weight that varies as learning
proceeds, which can increase or decrease the strength of the signal that
it sends downstream. Further, they may have a threshold such that only
if the aggregate signal is below (or above) that level is the downstream
signal sent.
Typically, neurons are organized in layers. Different layers may perform
different kinds of transformations on their inputs. Signals travel from
the first (input), to the last (output) layer, possibly after traversing
the layers multiple times.
The original goal of the neural network approach was to solve problems
in the same way that a human brain would. Over time, attention focused
on matching specific mental abilities, leading to deviations from
biology such as backpropagation, or passing information in the reverse
direction and adjusting the network to reflect that information.
Neural networks have been used on a variety of tasks, including
computer vision, speech recognition, machine translation, social
network filtering, playing board and video games, medical diagnosis and
in many other domains.
## Backpropagation Algorithm
**Backpropagation** is a method used in artificial neural networks to
calculate a gradient that is needed in the calculation of the
weights to be used in the network. It is commonly used to train
deep neural networks, a term referring to neural networks with
more than one hidden layer.
Backpropagation is a special case of an older and more general technique
called automatic differentiation. In the context of learning,
backpropagation is commonly used by the gradient descent optimization
algorithm to adjust the weight of neurons by calculating the gradient
of the loss function. This technique is also sometimes called
**backward propagation of errors**, because the error is calculated at
the output and distributed back through the network layers.
[What is backpropagation really doing?](https://youtu.be/Ilg3gGewQ5U)
[Gradient descent, how neural networks learn?](https://youtu.be/IHZwWFHWa-w)
[Backpropagation calculus](https://youtu.be/tIeHLnjs5U8)
**Loss function**
Sometimes referred to as the **cost function** or **error function**
(not to be confused with the Gauss error function), the loss function
is a function that maps values of one or more variables onto a real
number intuitively representing some "cost" associated with those
values. For backpropagation, the loss function calculates the difference
between the network output and its expected output, after a case
propagates through the network.
### Assumptions
Two assumptions must be made about the form of the error function.
The first is that it can be written as an average
$E=\frac{1}{n}\sum_xE_x$ over error functions $E_x$, for $n$ individual
training examples, $x$. The reason for this assumption is that the
backpropagation algorithm calculates the gradient of the error function
for a single training example, which needs to be generalized to the
overall error function. The second assumption is that it can be written
as a function of the outputs from the neural network.
### Example loss function
Let $y,y'$ be vectors in $\mathbb{R}^n$.
Select an error function $E(y,y')$ measuring the difference between two
outputs. The standard choice is the square of the Euclidean distance
between the vectors $y$ and $y'$:
$E(y,y') = \tfrac{1}{2} \lVert y-y'\rVert^2$
Note that the factor of $\tfrac{1}{2}$ conveniently cancels the exponent
when the error function is subsequently differentiated.
The error function over $n$ training examples can simply be written as an average of losses over individual examples:

$$E=\frac{1}{2n}\sum_x\lVert y(x)-y'(x) \rVert^2$$

and therefore the partial derivative with respect to the outputs is:

$$\frac{\partial E}{\partial y'} = y'-y$$
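A quick numerical check of this loss and its derivative, using small illustrative vectors:

```python
import numpy as np

y_true = np.array([0.0, 1.0, 0.0])   # target vector y
y_pred = np.array([0.2, 0.7, 0.1])   # network output y'

E = 0.5 * np.sum((y_true - y_pred) ** 2)   # E(y, y') = 1/2 ||y - y'||^2
dE_dy_pred = y_pred - y_true               # dE/dy' = y' - y

print(E)            # 0.5 * (0.04 + 0.09 + 0.01) = 0.07
print(dE_dy_pred)   # [ 0.2 -0.3  0.1]
```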
**Algorithm**
Let $N$ be a neural network with $e$ connections, $m$ inputs, and $n$
outputs.
Below, $x_1,x_2,\dots$ will denote vectors in $\mathbb{R}^m$,
$y_1,y_2,\dots$ vectors in $\mathbb{R}^n$, and $w_0, w_1, w_2, \ldots$
vectors in $\mathbb{R}^e$. These are called *inputs*, *outputs* and
*weights* respectively.
The neural network corresponds to a function $y = f_N(w, x)$ which,
given a weight $w$, maps an input $x$ to an output $y$.
The optimization takes as input a sequence of *training examples*
$(x_1,y_1), \dots, (x_p, y_p)$ and produces a sequence of weights
$w_0, w_1, \dots, w_p$ starting from some initial weight $w_0$, usually
chosen at random.
These weights are computed in turn: first compute $w_i$ using only
$(x_i, y_i, w_{i-1})$ for $i = 1, \dots, p$. The output of the algorithm
is then $w_p$, giving us a new function $x \mapsto f_N(w_p, x)$. The
computation is the same in each step, hence only the case $i = 1$ is
described.
Calculating $w_1$ from $(x_1, y_1, w_0)$ is done by considering a
variable weight $w$ and applying gradient descent to the function
$w\mapsto E(f_N(w, x_1), y_1)$ to find a local minimum, starting at
$w = w_0$.
This makes $w_1$ the minimizing weight found by gradient descent.
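A minimal sketch of that procedure, where the "network" $f_N(w, x)$ is just a single linear unit and one gradient step per example stands in for running gradient descent to a local minimum. The data and learning rate are illustrative.

```python
import numpy as np

def f_N(w, x):
    # placeholder "network": a single linear unit, y = w . x
    return np.dot(w, x)

def grad_E(w, x, y):
    # gradient of w -> E(f_N(w, x), y) with E(y, y') = 1/2 (y' - y)^2
    return (f_N(w, x) - y) * x

def train(examples, w0, lr=0.1):
    """Compute w_1, ..., w_p in turn: each w_i is obtained from w_{i-1}
    by a gradient step on w -> E(f_N(w, x_i), y_i)."""
    w = w0
    for x_i, y_i in examples:
        w = w - lr * grad_E(w, x_i, y_i)
    return w

# Illustrative data: targets generated by y = 2*x_1 - x_2
rng = np.random.default_rng(0)
examples = [(x, 2 * x[0] - x[1]) for x in rng.normal(size=(200, 2))]
print(train(examples, w0=np.zeros(2)))   # should approach [2, -1]
```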
**Algorithm in code**
To implement the algorithm above, explicit formulas are required for the gradient of the function $w \mapsto E(f_N(w, x), y)$, where, as above, the loss is $E(y,y') = \tfrac{1}{2}\lVert y-y'\rVert^2$.
The learning algorithm can be divided into two phases: propagation and
weight update.
### Phase 1: propagation
Each propagation involves the following steps:
1. Propagation forward through the network to generate the output
value(s)
2. Calculation of the cost (error term)
3. Propagation of the output activations back through the network using
the training pattern target in order to generate the deltas (the
difference between the targeted and actual output values) of all
output and hidden neurons.
### Phase 2: weight update
For each weight, the following steps must be followed:
1. The weight's output delta and input activation are multiplied to find the gradient of the weight.
2. A ratio (percentage) of the weight's gradient is subtracted from the weight.
This ratio (percentage) influences the speed and quality of learning; it
is called the *learning rate*. The greater the ratio, the faster the
neuron trains, but the lower the ratio, the more accurate the training
is. The sign of the gradient of a weight indicates whether the error
varies directly with, or inversely to, the weight. Therefore, the weight
must be updated in the opposite direction, "descending" the gradient.
Learning is repeated (on new batches) until the network performs
adequately.
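A minimal NumPy sketch of the two phases above, for a three-layer network (one hidden layer) with sigmoid activations and the squared-error loss used earlier. Bias terms are omitted for brevity, and the shapes, data and learning rate are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(x, y, W1, W2, lr=0.1):
    """One propagation plus weight update for a net with one hidden layer,
    sigmoid activations and loss E = 1/2 ||y' - y||^2 (no bias terms)."""
    # Phase 1: forward propagation ...
    h = sigmoid(W1 @ x)          # hidden activations
    y_hat = sigmoid(W2 @ h)      # output activations
    # ... then the error term and the deltas, propagated backwards
    delta_out = (y_hat - y) * y_hat * (1 - y_hat)   # output-layer deltas
    delta_hid = (W2.T @ delta_out) * h * (1 - h)    # hidden-layer deltas
    # Phase 2: gradient = output delta x input activation; subtract lr * gradient
    W2 -= lr * np.outer(delta_out, h)
    W1 -= lr * np.outer(delta_hid, x)
    return W1, W2

# Illustrative shapes: 3 inputs, 4 hidden units, 2 outputs
rng = np.random.default_rng(1)
W1 = rng.normal(scale=0.5, size=(4, 3))
W2 = rng.normal(scale=0.5, size=(2, 4))
x, y = np.array([0.5, -0.2, 0.1]), np.array([1.0, 0.0])
W1, W2 = backprop_step(x, y, W1, W2)
```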
### Pseudocode
The following is pseudocode for a stochastic gradient descent
algorithm for training a three-layer network (only one hidden layer):
```
initialize network weights (often small random values)
do
    forEach training example named ex
        prediction = neural-net-output(network, ex)              // forward pass
        actual = teacher-output(ex)
        compute error (prediction - actual) at the output units
        compute Δw_h for all weights from hidden layer to output layer   // backward pass
        compute Δw_i for all weights from input layer to hidden layer    // backward pass continued
        update network weights                                   // input layer not modified by error estimate
until all examples classified correctly or another stopping criterion satisfied
return the network
```
The lines labeled "backward pass" can be implemented using the backpropagation algorithm, which calculates the gradient of the error of the network with respect to the network's modifiable weights.
## How many input neurons?
What is the input layer for the MNIST data? Each image is $28 \times 28 = 784$ pixels.
```python
## 10K 28×28 sized test images
X_test.shape
```
(10000, 28, 28)
```python
28*28
```
784
For a multi-layer perceptron model we must flatten each image into a single input vector of pixels. In this case each 28×28 image becomes 784 pixel input values (28×28 = 784).
```python
X_train = X_train.reshape(60000, 784).astype('float32')
X_test = X_test.reshape(10000, 784).astype('float32')
```
The pixel values are gray scale between 0 and 255. Scaling input values when using neural network models is a good idea: neural networks propagate values, and training can be affected by their scale.
```python
# Scale the data between 0 and 1
X_train /= 255.0
X_test /= 255.0
```
## Shallow Neural Network
A shallow neural network has few layers (just one hidden dense layer in this case). Dense means every neuron in a layer is connected to every neuron in the next layer.
```python
X_test[0]
```
array([ 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0.32941177, 0.72549021, 0.62352943,
0.59215689, 0.23529412, 0.14117648, 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0.87058824, 0.99607843, 0.99607843, 0.99607843, 0.99607843,
0.94509804, 0.7764706 , 0.7764706 , 0.7764706 , 0.7764706 ,
0.7764706 , 0.7764706 , 0.7764706 , 0.7764706 , 0.66666669,
0.20392157, 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0.26274511, 0.44705883,
0.28235295, 0.44705883, 0.63921571, 0.89019608, 0.99607843,
0.88235295, 0.99607843, 0.99607843, 0.99607843, 0.98039216,
0.89803922, 0.99607843, 0.99607843, 0.54901963, 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0.06666667, 0.25882354, 0.05490196, 0.26274511,
0.26274511, 0.26274511, 0.23137255, 0.08235294, 0.9254902 ,
0.99607843, 0.41568628, 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0.32549021, 0.99215686, 0.81960785, 0.07058824,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0.08627451, 0.9137255 ,
1. , 0.32549021, 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0.50588238, 0.99607843, 0.93333334, 0.17254902,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0.23137255, 0.97647059,
0.99607843, 0.24313726, 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0.52156866, 0.99607843, 0.73333335, 0.01960784,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0.03529412, 0.80392158,
0.97254902, 0.22745098, 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0.49411765, 0.99607843, 0.71372551, 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0.29411766, 0.98431373,
0.94117647, 0.22352941, 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0.07450981, 0.86666667, 0.99607843, 0.65098041, 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0.01176471, 0.79607844, 0.99607843,
0.85882354, 0.13725491, 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0.14901961, 0.99607843, 0.99607843, 0.3019608 , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0.12156863, 0.87843138, 0.99607843,
0.4509804 , 0.00392157, 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0.52156866, 0.99607843, 0.99607843, 0.20392157, 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0.23921569, 0.94901961, 0.99607843,
0.99607843, 0.20392157, 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0.47450981, 0.99607843, 0.99607843, 0.85882354, 0.15686275,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0.47450981, 0.99607843,
0.81176472, 0.07058824, 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. ], dtype=float32)
Finally, the output variable is an integer from 0 to 9. This is a multi-class classification problem. As such, it is good practice to use a one hot encoding of the class values, transforming the vector of class integers into a binary matrix.
We can easily do this using the built-in keras.utils.to_categorical() helper function in Keras.
```python
n_classes = 10
y_train = keras.utils.to_categorical(y_train, n_classes)
y_test = keras.utils.to_categorical(y_test, n_classes)
```
```python
y_test[0:5]
```
array([[ 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.],
[ 0., 0., 1., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
[ 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 1., 0., 0., 0., 0., 0.]])
```python
y_train[0:5]
```
array([[ 0., 0., 0., 0., 0., 1., 0., 0., 0., 0.],
[ 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 1., 0., 0., 0., 0., 0.],
[ 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.]])
```python
def shallow_net_A(n=55, i=784, o=10):
    # create a simple net with a single hidden dense layer
    # defaults: 55 hidden neurons, 784 inputs, 10 outputs
    net = Sequential()
    net.add(Dense(n, activation='sigmoid', input_shape=(i,)))
    net.add(Dense(o, activation='softmax'))
    # Compile net with mean squared error loss and plain SGD
    net.compile(loss='mean_squared_error', optimizer=SGD(lr=0.01), metrics=['accuracy'])
    return net
```
```python
nn=shallow_net_A()
```
```python
nn.summary()
```
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_1 (Dense) (None, 55) 43175
_________________________________________________________________
dense_2 (Dense) (None, 10) 560
=================================================================
Total params: 43,735
Trainable params: 43,735
Non-trainable params: 0
_________________________________________________________________
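The parameter counts in the summary can be checked by hand: a Dense layer has (inputs + 1) × units parameters, the +1 accounting for each unit's bias.

```python
hidden = (784 + 1) * 55   # dense_1: 43,175 parameters
output = (55 + 1) * 10    # dense_2: 560 parameters
print(hidden, output, hidden + output)   # 43175 560 43735
```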
```python
nn.fit(X_train, y_train, batch_size=128, epochs=99, verbose=1, validation_data=(X_test, y_test))
```
Train on 60000 samples, validate on 10000 samples
Epoch 1/99
60000/60000 [==============================] - 1s - loss: 0.0943 - acc: 0.0993 - val_loss: 0.0932 - val_acc: 0.1032
Epoch 2/99
60000/60000 [==============================] - 0s - loss: 0.0927 - acc: 0.0984 - val_loss: 0.0922 - val_acc: 0.0985
Epoch 3/99
60000/60000 [==============================] - 1s - loss: 0.0920 - acc: 0.0984 - val_loss: 0.0916 - val_acc: 0.1011
Epoch 4/99
60000/60000 [==============================] - 1s - loss: 0.0914 - acc: 0.1155 - val_loss: 0.0911 - val_acc: 0.1264
Epoch 5/99
60000/60000 [==============================] - 0s - loss: 0.0909 - acc: 0.1447 - val_loss: 0.0907 - val_acc: 0.1545
Epoch 6/99
60000/60000 [==============================] - 1s - loss: 0.0905 - acc: 0.1690 - val_loss: 0.0903 - val_acc: 0.1787
Epoch 7/99
60000/60000 [==============================] - 1s - loss: 0.0901 - acc: 0.1906 - val_loss: 0.0899 - val_acc: 0.2001
Epoch 8/99
60000/60000 [==============================] - 1s - loss: 0.0898 - acc: 0.2121 - val_loss: 0.0896 - val_acc: 0.2239
Epoch 9/99
60000/60000 [==============================] - 1s - loss: 0.0894 - acc: 0.2330 - val_loss: 0.0892 - val_acc: 0.2448
Epoch 10/99
60000/60000 [==============================] - 1s - loss: 0.0890 - acc: 0.2534 - val_loss: 0.0888 - val_acc: 0.2611
Epoch 11/99
60000/60000 [==============================] - 1s - loss: 0.0887 - acc: 0.2707 - val_loss: 0.0885 - val_acc: 0.2768
Epoch 12/99
60000/60000 [==============================] - 1s - loss: 0.0884 - acc: 0.2850 - val_loss: 0.0881 - val_acc: 0.2915
Epoch 13/99
60000/60000 [==============================] - 1s - loss: 0.0880 - acc: 0.2965 - val_loss: 0.0878 - val_acc: 0.3022
Epoch 14/99
60000/60000 [==============================] - 1s - loss: 0.0877 - acc: 0.3064 - val_loss: 0.0875 - val_acc: 0.3119
Epoch 15/99
60000/60000 [==============================] - 1s - loss: 0.0873 - acc: 0.3137 - val_loss: 0.0871 - val_acc: 0.3204
Epoch 16/99
60000/60000 [==============================] - 1s - loss: 0.0870 - acc: 0.3232 - val_loss: 0.0868 - val_acc: 0.3300
Epoch 17/99
60000/60000 [==============================] - 1s - loss: 0.0867 - acc: 0.3309 - val_loss: 0.0864 - val_acc: 0.3370
Epoch 18/99
60000/60000 [==============================] - 1s - loss: 0.0863 - acc: 0.3379 - val_loss: 0.0861 - val_acc: 0.3431
Epoch 19/99
60000/60000 [==============================] - 1s - loss: 0.0860 - acc: 0.3438 - val_loss: 0.0857 - val_acc: 0.3479
Epoch 20/99
60000/60000 [==============================] - 1s - loss: 0.0856 - acc: 0.3500 - val_loss: 0.0854 - val_acc: 0.3552
Epoch 21/99
60000/60000 [==============================] - 1s - loss: 0.0853 - acc: 0.3551 - val_loss: 0.0850 - val_acc: 0.3597
Epoch 22/99
60000/60000 [==============================] - 1s - loss: 0.0849 - acc: 0.3593 - val_loss: 0.0847 - val_acc: 0.3642
Epoch 23/99
60000/60000 [==============================] - 1s - loss: 0.0846 - acc: 0.3639 - val_loss: 0.0843 - val_acc: 0.3699
Epoch 24/99
60000/60000 [==============================] - 1s - loss: 0.0842 - acc: 0.3689 - val_loss: 0.0839 - val_acc: 0.3746
Epoch 25/99
60000/60000 [==============================] - 1s - loss: 0.0839 - acc: 0.3733 - val_loss: 0.0836 - val_acc: 0.3790
Epoch 26/99
60000/60000 [==============================] - 1s - loss: 0.0835 - acc: 0.3801 - val_loss: 0.0832 - val_acc: 0.3843
Epoch 27/99
60000/60000 [==============================] - 1s - loss: 0.0831 - acc: 0.3853 - val_loss: 0.0828 - val_acc: 0.3897
Epoch 28/99
60000/60000 [==============================] - 1s - loss: 0.0827 - acc: 0.3910 - val_loss: 0.0824 - val_acc: 0.3968
Epoch 29/99
60000/60000 [==============================] - 1s - loss: 0.0823 - acc: 0.3987 - val_loss: 0.0820 - val_acc: 0.4045
Epoch 30/99
60000/60000 [==============================] - 1s - loss: 0.0819 - acc: 0.4060 - val_loss: 0.0816 - val_acc: 0.4113
Epoch 31/99
60000/60000 [==============================] - 1s - loss: 0.0815 - acc: 0.4127 - val_loss: 0.0811 - val_acc: 0.4189
Epoch 32/99
60000/60000 [==============================] - 1s - loss: 0.0811 - acc: 0.4193 - val_loss: 0.0807 - val_acc: 0.4249
Epoch 33/99
60000/60000 [==============================] - 1s - loss: 0.0807 - acc: 0.4261 - val_loss: 0.0803 - val_acc: 0.4330
Epoch 34/99
60000/60000 [==============================] - 1s - loss: 0.0803 - acc: 0.4326 - val_loss: 0.0799 - val_acc: 0.4392
Epoch 35/99
60000/60000 [==============================] - 1s - loss: 0.0798 - acc: 0.4388 - val_loss: 0.0794 - val_acc: 0.4450
Epoch 36/99
60000/60000 [==============================] - 1s - loss: 0.0794 - acc: 0.4454 - val_loss: 0.0790 - val_acc: 0.4509
Epoch 37/99
60000/60000 [==============================] - 1s - loss: 0.0789 - acc: 0.4510 - val_loss: 0.0785 - val_acc: 0.4580
Epoch 38/99
60000/60000 [==============================] - 1s - loss: 0.0785 - acc: 0.4570 - val_loss: 0.0781 - val_acc: 0.4633
Epoch 39/99
60000/60000 [==============================] - 1s - loss: 0.0780 - acc: 0.4619 - val_loss: 0.0776 - val_acc: 0.4680
Epoch 40/99
60000/60000 [==============================] - 1s - loss: 0.0776 - acc: 0.4671 - val_loss: 0.0771 - val_acc: 0.4735
Epoch 41/99
60000/60000 [==============================] - 1s - loss: 0.0771 - acc: 0.4727 - val_loss: 0.0767 - val_acc: 0.4795
Epoch 42/99
60000/60000 [==============================] - 1s - loss: 0.0767 - acc: 0.4785 - val_loss: 0.0762 - val_acc: 0.4847
Epoch 43/99
60000/60000 [==============================] - 1s - loss: 0.0762 - acc: 0.4833 - val_loss: 0.0757 - val_acc: 0.4888
Epoch 44/99
60000/60000 [==============================] - 1s - loss: 0.0758 - acc: 0.4880 - val_loss: 0.0752 - val_acc: 0.4934
Epoch 45/99
60000/60000 [==============================] - 1s - loss: 0.0753 - acc: 0.4932 - val_loss: 0.0748 - val_acc: 0.4980
Epoch 46/99
60000/60000 [==============================] - 1s - loss: 0.0748 - acc: 0.4972 - val_loss: 0.0743 - val_acc: 0.5021
Epoch 47/99
60000/60000 [==============================] - 1s - loss: 0.0744 - acc: 0.5029 - val_loss: 0.0738 - val_acc: 0.5059
Epoch 48/99
60000/60000 [==============================] - 1s - loss: 0.0739 - acc: 0.5077 - val_loss: 0.0733 - val_acc: 0.5106
Epoch 49/99
60000/60000 [==============================] - 1s - loss: 0.0734 - acc: 0.5124 - val_loss: 0.0729 - val_acc: 0.5144
Epoch 50/99
60000/60000 [==============================] - 1s - loss: 0.0730 - acc: 0.5175 - val_loss: 0.0724 - val_acc: 0.5197
Epoch 51/99
60000/60000 [==============================] - 1s - loss: 0.0725 - acc: 0.5213 - val_loss: 0.0719 - val_acc: 0.5245
Epoch 52/99
60000/60000 [==============================] - 1s - loss: 0.0720 - acc: 0.5255 - val_loss: 0.0714 - val_acc: 0.5291
Epoch 53/99
60000/60000 [==============================] - 1s - loss: 0.0716 - acc: 0.5306 - val_loss: 0.0710 - val_acc: 0.5340
Epoch 54/99
60000/60000 [==============================] - 1s - loss: 0.0711 - acc: 0.5351 - val_loss: 0.0705 - val_acc: 0.5387
Epoch 55/99
60000/60000 [==============================] - 1s - loss: 0.0706 - acc: 0.5397 - val_loss: 0.0700 - val_acc: 0.5434
Epoch 56/99
60000/60000 [==============================] - 0s - loss: 0.0702 - acc: 0.5443 - val_loss: 0.0696 - val_acc: 0.5477
Epoch 57/99
60000/60000 [==============================] - 1s - loss: 0.0697 - acc: 0.5483 - val_loss: 0.0691 - val_acc: 0.5536
Epoch 58/99
60000/60000 [==============================] - 1s - loss: 0.0692 - acc: 0.5538 - val_loss: 0.0686 - val_acc: 0.5575
Epoch 59/99
60000/60000 [==============================] - 1s - loss: 0.0688 - acc: 0.5587 - val_loss: 0.0681 - val_acc: 0.5627
Epoch 60/99
60000/60000 [==============================] - 1s - loss: 0.0683 - acc: 0.5640 - val_loss: 0.0677 - val_acc: 0.5675
Epoch 61/99
60000/60000 [==============================] - 1s - loss: 0.0679 - acc: 0.5689 - val_loss: 0.0672 - val_acc: 0.5714
Epoch 62/99
60000/60000 [==============================] - 1s - loss: 0.0674 - acc: 0.5742 - val_loss: 0.0668 - val_acc: 0.5772
Epoch 63/99
60000/60000 [==============================] - 1s - loss: 0.0670 - acc: 0.5795 - val_loss: 0.0663 - val_acc: 0.5818
Epoch 64/99
60000/60000 [==============================] - 0s - loss: 0.0665 - acc: 0.5849 - val_loss: 0.0658 - val_acc: 0.5864
Epoch 65/99
60000/60000 [==============================] - 1s - loss: 0.0661 - acc: 0.5898 - val_loss: 0.0654 - val_acc: 0.5922
Epoch 66/99
60000/60000 [==============================] - 1s - loss: 0.0656 - acc: 0.5940 - val_loss: 0.0649 - val_acc: 0.5957
Epoch 67/99
60000/60000 [==============================] - 1s - loss: 0.0652 - acc: 0.5991 - val_loss: 0.0645 - val_acc: 0.6008
Epoch 68/99
60000/60000 [==============================] - 1s - loss: 0.0647 - acc: 0.6039 - val_loss: 0.0640 - val_acc: 0.6062
Epoch 69/99
60000/60000 [==============================] - 1s - loss: 0.0643 - acc: 0.6085 - val_loss: 0.0636 - val_acc: 0.6107
Epoch 70/99
60000/60000 [==============================] - 1s - loss: 0.0638 - acc: 0.6137 - val_loss: 0.0631 - val_acc: 0.6152
Epoch 71/99
60000/60000 [==============================] - 0s - loss: 0.0634 - acc: 0.6177 - val_loss: 0.0627 - val_acc: 0.6200
Epoch 72/99
60000/60000 [==============================] - 1s - loss: 0.0629 - acc: 0.6223 - val_loss: 0.0622 - val_acc: 0.6244
Epoch 73/99
60000/60000 [==============================] - 1s - loss: 0.0625 - acc: 0.6260 - val_loss: 0.0618 - val_acc: 0.6276
Epoch 74/99
60000/60000 [==============================] - 1s - loss: 0.0621 - acc: 0.6306 - val_loss: 0.0613 - val_acc: 0.6324
Epoch 75/99
60000/60000 [==============================] - 1s - loss: 0.0616 - acc: 0.6351 - val_loss: 0.0609 - val_acc: 0.6367
Epoch 76/99
60000/60000 [==============================] - 1s - loss: 0.0612 - acc: 0.6390 - val_loss: 0.0605 - val_acc: 0.6422
Epoch 77/99
60000/60000 [==============================] - 1s - loss: 0.0608 - acc: 0.6435 - val_loss: 0.0600 - val_acc: 0.6458
Epoch 78/99
60000/60000 [==============================] - 1s - loss: 0.0603 - acc: 0.6468 - val_loss: 0.0596 - val_acc: 0.6505
Epoch 79/99
60000/60000 [==============================] - 1s - loss: 0.0599 - acc: 0.6506 - val_loss: 0.0592 - val_acc: 0.6534
Epoch 80/99
60000/60000 [==============================] - 1s - loss: 0.0595 - acc: 0.6537 - val_loss: 0.0587 - val_acc: 0.6576
Epoch 81/99
60000/60000 [==============================] - 1s - loss: 0.0591 - acc: 0.6569 - val_loss: 0.0583 - val_acc: 0.6624
Epoch 82/99
60000/60000 [==============================] - 1s - loss: 0.0587 - acc: 0.6606 - val_loss: 0.0579 - val_acc: 0.6652
Epoch 83/99
60000/60000 [==============================] - 1s - loss: 0.0582 - acc: 0.6639 - val_loss: 0.0575 - val_acc: 0.6691
Epoch 84/99
60000/60000 [==============================] - 1s - loss: 0.0578 - acc: 0.6669 - val_loss: 0.0571 - val_acc: 0.6724
Epoch 85/99
60000/60000 [==============================] - 1s - loss: 0.0574 - acc: 0.6707 - val_loss: 0.0566 - val_acc: 0.6768
Epoch 86/99
60000/60000 [==============================] - 1s - loss: 0.0570 - acc: 0.6740 - val_loss: 0.0562 - val_acc: 0.6804
Epoch 87/99
60000/60000 [==============================] - 1s - loss: 0.0566 - acc: 0.6764 - val_loss: 0.0558 - val_acc: 0.6840
Epoch 88/99
60000/60000 [==============================] - 1s - loss: 0.0562 - acc: 0.6801 - val_loss: 0.0554 - val_acc: 0.6877
Epoch 89/99
60000/60000 [==============================] - 1s - loss: 0.0558 - acc: 0.6832 - val_loss: 0.0550 - val_acc: 0.6912
Epoch 90/99
60000/60000 [==============================] - 1s - loss: 0.0554 - acc: 0.6853 - val_loss: 0.0546 - val_acc: 0.6941
Epoch 91/99
60000/60000 [==============================] - 1s - loss: 0.0550 - acc: 0.6883 - val_loss: 0.0542 - val_acc: 0.6977
Epoch 92/99
60000/60000 [==============================] - 1s - loss: 0.0546 - acc: 0.6904 - val_loss: 0.0538 - val_acc: 0.7018
Epoch 93/99
60000/60000 [==============================] - 1s - loss: 0.0542 - acc: 0.6930 - val_loss: 0.0534 - val_acc: 0.7040
Epoch 94/99
60000/60000 [==============================] - 1s - loss: 0.0539 - acc: 0.6957 - val_loss: 0.0531 - val_acc: 0.7073
Epoch 95/99
60000/60000 [==============================] - 1s - loss: 0.0535 - acc: 0.6986 - val_loss: 0.0527 - val_acc: 0.7105
Epoch 96/99
60000/60000 [==============================] - 1s - loss: 0.0531 - acc: 0.7017 - val_loss: 0.0523 - val_acc: 0.7124
Epoch 97/99
60000/60000 [==============================] - 1s - loss: 0.0527 - acc: 0.7038 - val_loss: 0.0519 - val_acc: 0.7155
Epoch 98/99
60000/60000 [==============================] - 1s - loss: 0.0524 - acc: 0.7056 - val_loss: 0.0516 - val_acc: 0.7179
Epoch 99/99
60000/60000 [==============================] - 0s - loss: 0.0520 - acc: 0.7085 - val_loss: 0.0512 - val_acc: 0.7199
<keras.callbacks.History at 0x1226252b0>
```python
# ~72% accuracy on the test set after 99 epochs
nn.evaluate(X_test, y_test)
```
8608/10000 [========================>.....] - ETA: 0s
[0.051187293976545332, 0.71989999999999998]
```python
nn.fit(X_train, y_train, batch_size=128, epochs=99, verbose=1, validation_data=(X_test, y_test))
```
Train on 60000 samples, validate on 10000 samples
Epoch 1/99
60000/60000 [==============================] - 0s - loss: 0.0516 - acc: 0.7109 - val_loss: 0.0508 - val_acc: 0.7233
Epoch 2/99
60000/60000 [==============================] - 0s - loss: 0.0513 - acc: 0.7132 - val_loss: 0.0505 - val_acc: 0.7261
Epoch 3/99
60000/60000 [==============================] - 0s - loss: 0.0509 - acc: 0.7160 - val_loss: 0.0501 - val_acc: 0.7283
Epoch 4/99
60000/60000 [==============================] - 1s - loss: 0.0506 - acc: 0.7184 - val_loss: 0.0498 - val_acc: 0.7298
Epoch 5/99
60000/60000 [==============================] - 1s - loss: 0.0502 - acc: 0.7204 - val_loss: 0.0494 - val_acc: 0.7319
Epoch 6/99
60000/60000 [==============================] - 1s - loss: 0.0499 - acc: 0.7231 - val_loss: 0.0491 - val_acc: 0.7344
Epoch 7/99
60000/60000 [==============================] - 1s - loss: 0.0495 - acc: 0.7254 - val_loss: 0.0487 - val_acc: 0.7378
Epoch 8/99
60000/60000 [==============================] - 1s - loss: 0.0492 - acc: 0.7279 - val_loss: 0.0484 - val_acc: 0.7405
Epoch 9/99
60000/60000 [==============================] - 1s - loss: 0.0489 - acc: 0.7300 - val_loss: 0.0480 - val_acc: 0.7425
Epoch 10/99
60000/60000 [==============================] - 1s - loss: 0.0485 - acc: 0.7323 - val_loss: 0.0477 - val_acc: 0.7451
Epoch 11/99
60000/60000 [==============================] - 1s - loss: 0.0482 - acc: 0.7345 - val_loss: 0.0474 - val_acc: 0.7463
Epoch 12/99
60000/60000 [==============================] - 1s - loss: 0.0479 - acc: 0.7366 - val_loss: 0.0471 - val_acc: 0.7486
Epoch 13/99
60000/60000 [==============================] - 1s - loss: 0.0476 - acc: 0.7384 - val_loss: 0.0467 - val_acc: 0.7508
Epoch 14/99
60000/60000 [==============================] - 1s - loss: 0.0472 - acc: 0.7409 - val_loss: 0.0464 - val_acc: 0.7532
Epoch 15/99
60000/60000 [==============================] - 1s - loss: 0.0469 - acc: 0.7432 - val_loss: 0.0461 - val_acc: 0.7562
Epoch 16/99
60000/60000 [==============================] - 1s - loss: 0.0466 - acc: 0.7453 - val_loss: 0.0458 - val_acc: 0.7585
Epoch 17/99
60000/60000 [==============================] - 1s - loss: 0.0463 - acc: 0.7474 - val_loss: 0.0455 - val_acc: 0.7606
Epoch 18/99
60000/60000 [==============================] - 1s - loss: 0.0460 - acc: 0.7494 - val_loss: 0.0452 - val_acc: 0.7631
Epoch 19/99
60000/60000 [==============================] - 1s - loss: 0.0457 - acc: 0.7514 - val_loss: 0.0449 - val_acc: 0.7658
Epoch 20/99
60000/60000 [==============================] - 1s - loss: 0.0454 - acc: 0.7537 - val_loss: 0.0446 - val_acc: 0.7679
Epoch 21/99
60000/60000 [==============================] - 1s - loss: 0.0451 - acc: 0.7563 - val_loss: 0.0443 - val_acc: 0.7707
Epoch 22/99
60000/60000 [==============================] - 1s - loss: 0.0448 - acc: 0.7580 - val_loss: 0.0440 - val_acc: 0.7725
Epoch 23/99
60000/60000 [==============================] - 1s - loss: 0.0446 - acc: 0.7601 - val_loss: 0.0437 - val_acc: 0.7754
Epoch 24/99
60000/60000 [==============================] - 1s - loss: 0.0443 - acc: 0.7623 - val_loss: 0.0434 - val_acc: 0.7774
Epoch 25/99
60000/60000 [==============================] - 1s - loss: 0.0440 - acc: 0.7643 - val_loss: 0.0431 - val_acc: 0.7803
Epoch 26/99
60000/60000 [==============================] - 1s - loss: 0.0437 - acc: 0.7671 - val_loss: 0.0429 - val_acc: 0.7818
Epoch 27/99
60000/60000 [==============================] - 1s - loss: 0.0434 - acc: 0.7688 - val_loss: 0.0426 - val_acc: 0.7835
Epoch 28/99
60000/60000 [==============================] - 1s - loss: 0.0432 - acc: 0.7716 - val_loss: 0.0423 - val_acc: 0.7852
Epoch 29/99
60000/60000 [==============================] - 1s - loss: 0.0429 - acc: 0.7739 - val_loss: 0.0420 - val_acc: 0.7872
Epoch 30/99
60000/60000 [==============================] - 1s - loss: 0.0426 - acc: 0.7758 - val_loss: 0.0418 - val_acc: 0.7900
Epoch 31/99
60000/60000 [==============================] - 1s - loss: 0.0424 - acc: 0.7776 - val_loss: 0.0415 - val_acc: 0.7920
Epoch 32/99
60000/60000 [==============================] - 1s - loss: 0.0421 - acc: 0.7802 - val_loss: 0.0412 - val_acc: 0.7942
Epoch 33/99
60000/60000 [==============================] - 1s - loss: 0.0419 - acc: 0.7823 - val_loss: 0.0410 - val_acc: 0.7957
Epoch 34/99
60000/60000 [==============================] - 1s - loss: 0.0416 - acc: 0.7845 - val_loss: 0.0407 - val_acc: 0.7980
Epoch 35/99
60000/60000 [==============================] - 1s - loss: 0.0413 - acc: 0.7866 - val_loss: 0.0405 - val_acc: 0.7995
Epoch 36/99
60000/60000 [==============================] - 1s - loss: 0.0411 - acc: 0.7880 - val_loss: 0.0402 - val_acc: 0.8014
Epoch 37/99
60000/60000 [==============================] - 1s - loss: 0.0409 - acc: 0.7900 - val_loss: 0.0400 - val_acc: 0.8042
Epoch 38/99
60000/60000 [==============================] - 1s - loss: 0.0406 - acc: 0.7918 - val_loss: 0.0397 - val_acc: 0.8059
Epoch 39/99
60000/60000 [==============================] - 1s - loss: 0.0404 - acc: 0.7934 - val_loss: 0.0395 - val_acc: 0.8083
Epoch 40/99
60000/60000 [==============================] - 1s - loss: 0.0401 - acc: 0.7952 - val_loss: 0.0392 - val_acc: 0.8097
Epoch 41/99
60000/60000 [==============================] - 1s - loss: 0.0399 - acc: 0.7970 - val_loss: 0.0390 - val_acc: 0.8113
Epoch 42/99
60000/60000 [==============================] - 1s - loss: 0.0397 - acc: 0.7983 - val_loss: 0.0388 - val_acc: 0.8129
Epoch 43/99
60000/60000 [==============================] - 1s - loss: 0.0394 - acc: 0.8000 - val_loss: 0.0385 - val_acc: 0.8147
Epoch 44/99
60000/60000 [==============================] - 1s - loss: 0.0392 - acc: 0.8018 - val_loss: 0.0383 - val_acc: 0.8162
Epoch 45/99
60000/60000 [==============================] - 1s - loss: 0.0390 - acc: 0.8034 - val_loss: 0.0381 - val_acc: 0.8176
Epoch 46/99
60000/60000 [==============================] - 1s - loss: 0.0387 - acc: 0.8049 - val_loss: 0.0379 - val_acc: 0.8192
Epoch 47/99
60000/60000 [==============================] - 1s - loss: 0.0385 - acc: 0.8064 - val_loss: 0.0376 - val_acc: 0.8209
Epoch 48/99
60000/60000 [==============================] - 1s - loss: 0.0383 - acc: 0.8080 - val_loss: 0.0374 - val_acc: 0.8226
Epoch 49/99
60000/60000 [==============================] - 1s - loss: 0.0381 - acc: 0.8096 - val_loss: 0.0372 - val_acc: 0.8241
Epoch 50/99
60000/60000 [==============================] - 1s - loss: 0.0379 - acc: 0.8112 - val_loss: 0.0370 - val_acc: 0.8249
Epoch 51/99
60000/60000 [==============================] - 1s - loss: 0.0377 - acc: 0.8126 - val_loss: 0.0368 - val_acc: 0.8266
Epoch 52/99
60000/60000 [==============================] - 1s - loss: 0.0375 - acc: 0.8137 - val_loss: 0.0365 - val_acc: 0.8274
Epoch 53/99
60000/60000 [==============================] - 1s - loss: 0.0372 - acc: 0.8151 - val_loss: 0.0363 - val_acc: 0.8282
Epoch 54/99
60000/60000 [==============================] - 1s - loss: 0.0370 - acc: 0.8162 - val_loss: 0.0361 - val_acc: 0.8293
Epoch 55/99
60000/60000 [==============================] - 1s - loss: 0.0368 - acc: 0.8176 - val_loss: 0.0359 - val_acc: 0.8300
Epoch 56/99
60000/60000 [==============================] - 1s - loss: 0.0366 - acc: 0.8185 - val_loss: 0.0357 - val_acc: 0.8309
Epoch 57/99
60000/60000 [==============================] - 1s - loss: 0.0364 - acc: 0.8199 - val_loss: 0.0355 - val_acc: 0.8321
Epoch 58/99
60000/60000 [==============================] - 1s - loss: 0.0362 - acc: 0.8209 - val_loss: 0.0353 - val_acc: 0.8331
Epoch 59/99
60000/60000 [==============================] - 1s - loss: 0.0361 - acc: 0.8224 - val_loss: 0.0351 - val_acc: 0.8340
Epoch 60/99
60000/60000 [==============================] - 1s - loss: 0.0359 - acc: 0.8234 - val_loss: 0.0349 - val_acc: 0.8351
Epoch 61/99
60000/60000 [==============================] - 1s - loss: 0.0357 - acc: 0.8246 - val_loss: 0.0347 - val_acc: 0.8360
Epoch 62/99
60000/60000 [==============================] - 1s - loss: 0.0355 - acc: 0.8258 - val_loss: 0.0346 - val_acc: 0.8374
Epoch 63/99
60000/60000 [==============================] - 1s - loss: 0.0353 - acc: 0.8269 - val_loss: 0.0344 - val_acc: 0.8379
Epoch 64/99
60000/60000 [==============================] - 0s - loss: 0.0351 - acc: 0.8283 - val_loss: 0.0342 - val_acc: 0.8391
Epoch 65/99
60000/60000 [==============================] - 0s - loss: 0.0349 - acc: 0.8294 - val_loss: 0.0340 - val_acc: 0.8396
Epoch 66/99
60000/60000 [==============================] - 0s - loss: 0.0348 - acc: 0.8302 - val_loss: 0.0338 - val_acc: 0.8402
Epoch 67/99
60000/60000 [==============================] - 1s - loss: 0.0346 - acc: 0.8309 - val_loss: 0.0336 - val_acc: 0.8414
Epoch 68/99
60000/60000 [==============================] - 0s - loss: 0.0344 - acc: 0.8320 - val_loss: 0.0335 - val_acc: 0.8428
Epoch 69/99
60000/60000 [==============================] - 0s - loss: 0.0342 - acc: 0.8328 - val_loss: 0.0333 - val_acc: 0.8435
Epoch 70/99
60000/60000 [==============================] - 0s - loss: 0.0341 - acc: 0.8337 - val_loss: 0.0331 - val_acc: 0.8442
Epoch 71/99
60000/60000 [==============================] - 0s - loss: 0.0339 - acc: 0.8346 - val_loss: 0.0329 - val_acc: 0.8450
Epoch 72/99
60000/60000 [==============================] - 0s - loss: 0.0337 - acc: 0.8357 - val_loss: 0.0328 - val_acc: 0.8457
Epoch 73/99
60000/60000 [==============================] - 0s - loss: 0.0335 - acc: 0.8362 - val_loss: 0.0326 - val_acc: 0.8477
Epoch 74/99
60000/60000 [==============================] - 0s - loss: 0.0334 - acc: 0.8370 - val_loss: 0.0324 - val_acc: 0.8486
Epoch 75/99
60000/60000 [==============================] - 0s - loss: 0.0332 - acc: 0.8378 - val_loss: 0.0323 - val_acc: 0.8490
Epoch 76/99
60000/60000 [==============================] - 0s - loss: 0.0331 - acc: 0.8385 - val_loss: 0.0321 - val_acc: 0.8498
Epoch 77/99
60000/60000 [==============================] - 0s - loss: 0.0329 - acc: 0.8388 - val_loss: 0.0320 - val_acc: 0.8503
Epoch 78/99
60000/60000 [==============================] - 0s - loss: 0.0327 - acc: 0.8398 - val_loss: 0.0318 - val_acc: 0.8505
Epoch 79/99
60000/60000 [==============================] - 0s - loss: 0.0326 - acc: 0.8404 - val_loss: 0.0316 - val_acc: 0.8506
Epoch 80/99
60000/60000 [==============================] - 0s - loss: 0.0324 - acc: 0.8413 - val_loss: 0.0315 - val_acc: 0.8509
Epoch 81/99
60000/60000 [==============================] - 0s - loss: 0.0323 - acc: 0.8416 - val_loss: 0.0313 - val_acc: 0.8516
Epoch 82/99
60000/60000 [==============================] - 0s - loss: 0.0321 - acc: 0.8425 - val_loss: 0.0312 - val_acc: 0.8522
Epoch 83/99
60000/60000 [==============================] - 0s - loss: 0.0320 - acc: 0.8433 - val_loss: 0.0310 - val_acc: 0.8524
Epoch 84/99
60000/60000 [==============================] - 0s - loss: 0.0318 - acc: 0.8439 - val_loss: 0.0309 - val_acc: 0.8528
Epoch 85/99
60000/60000 [==============================] - 0s - loss: 0.0317 - acc: 0.8444 - val_loss: 0.0307 - val_acc: 0.8534
Epoch 86/99
60000/60000 [==============================] - 0s - loss: 0.0315 - acc: 0.8453 - val_loss: 0.0306 - val_acc: 0.8540
Epoch 87/99
60000/60000 [==============================] - 0s - loss: 0.0314 - acc: 0.8457 - val_loss: 0.0305 - val_acc: 0.8547
Epoch 88/99
60000/60000 [==============================] - 0s - loss: 0.0313 - acc: 0.8465 - val_loss: 0.0303 - val_acc: 0.8556
Epoch 89/99
60000/60000 [==============================] - 0s - loss: 0.0311 - acc: 0.8471 - val_loss: 0.0302 - val_acc: 0.8562
Epoch 90/99
60000/60000 [==============================] - 1s - loss: 0.0310 - acc: 0.8475 - val_loss: 0.0300 - val_acc: 0.8568
Epoch 91/99
60000/60000 [==============================] - 0s - loss: 0.0309 - acc: 0.8481 - val_loss: 0.0299 - val_acc: 0.8572
Epoch 92/99
60000/60000 [==============================] - 1s - loss: 0.0307 - acc: 0.8484 - val_loss: 0.0298 - val_acc: 0.8579
Epoch 93/99
60000/60000 [==============================] - 1s - loss: 0.0306 - acc: 0.8487 - val_loss: 0.0296 - val_acc: 0.8581
Epoch 94/99
60000/60000 [==============================] - 0s - loss: 0.0305 - acc: 0.8492 - val_loss: 0.0295 - val_acc: 0.8585
Epoch 95/99
60000/60000 [==============================] - 0s - loss: 0.0303 - acc: 0.8498 - val_loss: 0.0294 - val_acc: 0.8594
Epoch 96/99
60000/60000 [==============================] - 0s - loss: 0.0302 - acc: 0.8503 - val_loss: 0.0292 - val_acc: 0.8599
Epoch 97/99
60000/60000 [==============================] - 1s - loss: 0.0301 - acc: 0.8511 - val_loss: 0.0291 - val_acc: 0.8607
Epoch 98/99
60000/60000 [==============================] - 1s - loss: 0.0299 - acc: 0.8512 - val_loss: 0.0290 - val_acc: 0.8612
Epoch 99/99
60000/60000 [==============================] - 1s - loss: 0.0298 - acc: 0.8519 - val_loss: 0.0289 - val_acc: 0.8616
<keras.callbacks.History at 0x12dda54a8>
```python
# 86% accuracy after another 99 epochs
nn.evaluate(X_test, y_test)
```
9024/10000 [==========================>...] - ETA: 0s
[0.028857796752452852, 0.86160000000000003]
```python
nn.fit(X_train, y_train, batch_size=128, epochs=99, verbose=1, validation_data=(X_test, y_test))
```
Train on 60000 samples, validate on 10000 samples
Epoch 1/99
60000/60000 [==============================] - 1s - loss: 0.0297 - acc: 0.8519 - val_loss: 0.0287 - val_acc: 0.8619
Epoch 2/99
60000/60000 [==============================] - 1s - loss: 0.0296 - acc: 0.8528 - val_loss: 0.0286 - val_acc: 0.8625
Epoch 3/99
60000/60000 [==============================] - 1s - loss: 0.0295 - acc: 0.8531 - val_loss: 0.0285 - val_acc: 0.8628
Epoch 4/99
60000/60000 [==============================] - 1s - loss: 0.0293 - acc: 0.8538 - val_loss: 0.0284 - val_acc: 0.8636
Epoch 5/99
60000/60000 [==============================] - 1s - loss: 0.0292 - acc: 0.8543 - val_loss: 0.0283 - val_acc: 0.8640
Epoch 6/99
60000/60000 [==============================] - 1s - loss: 0.0291 - acc: 0.8548 - val_loss: 0.0281 - val_acc: 0.8646
Epoch 7/99
60000/60000 [==============================] - 1s - loss: 0.0290 - acc: 0.8550 - val_loss: 0.0280 - val_acc: 0.8651
Epoch 8/99
60000/60000 [==============================] - 0s - loss: 0.0289 - acc: 0.8557 - val_loss: 0.0279 - val_acc: 0.8655
Epoch 9/99
60000/60000 [==============================] - 0s - loss: 0.0288 - acc: 0.8562 - val_loss: 0.0278 - val_acc: 0.8656
Epoch 10/99
60000/60000 [==============================] - 0s - loss: 0.0287 - acc: 0.8565 - val_loss: 0.0277 - val_acc: 0.8660
Epoch 11/99
60000/60000 [==============================] - 1s - loss: 0.0285 - acc: 0.8568 - val_loss: 0.0276 - val_acc: 0.8663
Epoch 12/99
60000/60000 [==============================] - 1s - loss: 0.0284 - acc: 0.8572 - val_loss: 0.0275 - val_acc: 0.8665
Epoch 13/99
60000/60000 [==============================] - 0s - loss: 0.0283 - acc: 0.8577 - val_loss: 0.0274 - val_acc: 0.8672
Epoch 14/99
60000/60000 [==============================] - 1s - loss: 0.0282 - acc: 0.8580 - val_loss: 0.0273 - val_acc: 0.8673
Epoch 15/99
60000/60000 [==============================] - 1s - loss: 0.0281 - acc: 0.8583 - val_loss: 0.0271 - val_acc: 0.8676
Epoch 16/99
60000/60000 [==============================] - 1s - loss: 0.0280 - acc: 0.8588 - val_loss: 0.0270 - val_acc: 0.8679
Epoch 17/99
60000/60000 [==============================] - 1s - loss: 0.0279 - acc: 0.8593 - val_loss: 0.0269 - val_acc: 0.8681
Epoch 18/99
60000/60000 [==============================] - 1s - loss: 0.0278 - acc: 0.8596 - val_loss: 0.0268 - val_acc: 0.8687
Epoch 19/99
60000/60000 [==============================] - 1s - loss: 0.0277 - acc: 0.8599 - val_loss: 0.0267 - val_acc: 0.8691
Epoch 20/99
60000/60000 [==============================] - 1s - loss: 0.0276 - acc: 0.8602 - val_loss: 0.0266 - val_acc: 0.8695
Epoch 21/99
60000/60000 [==============================] - 1s - loss: 0.0275 - acc: 0.8606 - val_loss: 0.0265 - val_acc: 0.8697
Epoch 22/99
60000/60000 [==============================] - 1s - loss: 0.0274 - acc: 0.8610 - val_loss: 0.0264 - val_acc: 0.8701
Epoch 23/99
60000/60000 [==============================] - 1s - loss: 0.0273 - acc: 0.8612 - val_loss: 0.0263 - val_acc: 0.8702
Epoch 24/99
60000/60000 [==============================] - 1s - loss: 0.0272 - acc: 0.8615 - val_loss: 0.0263 - val_acc: 0.8709
Epoch 25/99
60000/60000 [==============================] - 1s - loss: 0.0271 - acc: 0.8618 - val_loss: 0.0262 - val_acc: 0.8709
Epoch 26/99
60000/60000 [==============================] - 1s - loss: 0.0270 - acc: 0.8621 - val_loss: 0.0261 - val_acc: 0.8713
Epoch 27/99
60000/60000 [==============================] - 1s - loss: 0.0270 - acc: 0.8623 - val_loss: 0.0260 - val_acc: 0.8716
Epoch 28/99
60000/60000 [==============================] - 1s - loss: 0.0269 - acc: 0.8625 - val_loss: 0.0259 - val_acc: 0.8721
Epoch 29/99
60000/60000 [==============================] - 1s - loss: 0.0268 - acc: 0.8628 - val_loss: 0.0258 - val_acc: 0.8723
Epoch 30/99
60000/60000 [==============================] - 1s - loss: 0.0267 - acc: 0.8630 - val_loss: 0.0257 - val_acc: 0.8727
Epoch 31/99
60000/60000 [==============================] - 1s - loss: 0.0266 - acc: 0.8633 - val_loss: 0.0256 - val_acc: 0.8726
Epoch 32/99
60000/60000 [==============================] - 1s - loss: 0.0265 - acc: 0.8636 - val_loss: 0.0255 - val_acc: 0.8727
Epoch 33/99
60000/60000 [==============================] - 1s - loss: 0.0264 - acc: 0.8638 - val_loss: 0.0254 - val_acc: 0.8728
Epoch 34/99
60000/60000 [==============================] - 1s - loss: 0.0263 - acc: 0.8640 - val_loss: 0.0254 - val_acc: 0.8733
Epoch 35/99
60000/60000 [==============================] - 1s - loss: 0.0262 - acc: 0.8643 - val_loss: 0.0253 - val_acc: 0.8734
Epoch 36/99
60000/60000 [==============================] - 1s - loss: 0.0262 - acc: 0.8645 - val_loss: 0.0252 - val_acc: 0.8736
Epoch 37/99
60000/60000 [==============================] - 1s - loss: 0.0261 - acc: 0.8649 - val_loss: 0.0251 - val_acc: 0.8739
Epoch 38/99
60000/60000 [==============================] - 1s - loss: 0.0260 - acc: 0.8653 - val_loss: 0.0250 - val_acc: 0.8745
Epoch 39/99
60000/60000 [==============================] - 1s - loss: 0.0259 - acc: 0.8655 - val_loss: 0.0249 - val_acc: 0.8749
Epoch 40/99
60000/60000 [==============================] - 1s - loss: 0.0258 - acc: 0.8660 - val_loss: 0.0249 - val_acc: 0.8750
Epoch 41/99
60000/60000 [==============================] - 1s - loss: 0.0258 - acc: 0.8661 - val_loss: 0.0248 - val_acc: 0.8751
Epoch 42/99
60000/60000 [==============================] - 1s - loss: 0.0257 - acc: 0.8664 - val_loss: 0.0247 - val_acc: 0.8754
Epoch 43/99
60000/60000 [==============================] - 1s - loss: 0.0256 - acc: 0.8667 - val_loss: 0.0246 - val_acc: 0.8758
Epoch 44/99
60000/60000 [==============================] - 1s - loss: 0.0255 - acc: 0.8671 - val_loss: 0.0246 - val_acc: 0.8760
Epoch 45/99
60000/60000 [==============================] - 1s - loss: 0.0255 - acc: 0.8674 - val_loss: 0.0245 - val_acc: 0.8759
Epoch 46/99
60000/60000 [==============================] - 1s - loss: 0.0254 - acc: 0.8676 - val_loss: 0.0244 - val_acc: 0.8764
Epoch 47/99
60000/60000 [==============================] - 1s - loss: 0.0253 - acc: 0.8678 - val_loss: 0.0243 - val_acc: 0.8765
Epoch 48/99
60000/60000 [==============================] - 1s - loss: 0.0252 - acc: 0.8680 - val_loss: 0.0243 - val_acc: 0.8766
Epoch 49/99
60000/60000 [==============================] - 1s - loss: 0.0252 - acc: 0.8684 - val_loss: 0.0242 - val_acc: 0.8770
Epoch 50/99
60000/60000 [==============================] - 1s - loss: 0.0251 - acc: 0.8687 - val_loss: 0.0241 - val_acc: 0.8773
Epoch 51/99
60000/60000 [==============================] - 1s - loss: 0.0250 - acc: 0.8689 - val_loss: 0.0240 - val_acc: 0.8772
Epoch 52/99
60000/60000 [==============================] - 1s - loss: 0.0249 - acc: 0.8693 - val_loss: 0.0240 - val_acc: 0.8776
Epoch 53/99
60000/60000 [==============================] - 1s - loss: 0.0249 - acc: 0.8696 - val_loss: 0.0239 - val_acc: 0.8778
Epoch 54/99
60000/60000 [==============================] - 1s - loss: 0.0248 - acc: 0.8696 - val_loss: 0.0238 - val_acc: 0.8780
Epoch 55/99
60000/60000 [==============================] - 1s - loss: 0.0247 - acc: 0.8701 - val_loss: 0.0238 - val_acc: 0.8782
Epoch 56/99
60000/60000 [==============================] - 1s - loss: 0.0247 - acc: 0.8702 - val_loss: 0.0237 - val_acc: 0.8783
Epoch 57/99
60000/60000 [==============================] - 1s - loss: 0.0246 - acc: 0.8705 - val_loss: 0.0236 - val_acc: 0.8784
Epoch 58/99
60000/60000 [==============================] - 1s - loss: 0.0245 - acc: 0.8707 - val_loss: 0.0236 - val_acc: 0.8786
Epoch 59/99
60000/60000 [==============================] - 1s - loss: 0.0245 - acc: 0.8710 - val_loss: 0.0235 - val_acc: 0.8788
Epoch 60/99
60000/60000 [==============================] - 1s - loss: 0.0244 - acc: 0.8713 - val_loss: 0.0234 - val_acc: 0.8787
Epoch 61/99
60000/60000 [==============================] - 1s - loss: 0.0243 - acc: 0.8716 - val_loss: 0.0234 - val_acc: 0.8788
Epoch 62/99
60000/60000 [==============================] - 1s - loss: 0.0243 - acc: 0.8716 - val_loss: 0.0233 - val_acc: 0.8790
Epoch 63/99
60000/60000 [==============================] - 1s - loss: 0.0242 - acc: 0.8720 - val_loss: 0.0232 - val_acc: 0.8794
Epoch 64/99
60000/60000 [==============================] - 1s - loss: 0.0241 - acc: 0.8722 - val_loss: 0.0232 - val_acc: 0.8800
Epoch 65/99
60000/60000 [==============================] - 0s - loss: 0.0241 - acc: 0.8724 - val_loss: 0.0231 - val_acc: 0.8802
Epoch 66/99
60000/60000 [==============================] - 0s - loss: 0.0240 - acc: 0.8726 - val_loss: 0.0230 - val_acc: 0.8803
Epoch 67/99
60000/60000 [==============================] - 0s - loss: 0.0240 - acc: 0.8728 - val_loss: 0.0230 - val_acc: 0.8803
Epoch 68/99
60000/60000 [==============================] - 0s - loss: 0.0239 - acc: 0.8730 - val_loss: 0.0229 - val_acc: 0.8802
Epoch 69/99
60000/60000 [==============================] - 0s - loss: 0.0238 - acc: 0.8733 - val_loss: 0.0229 - val_acc: 0.8803
Epoch 70/99
60000/60000 [==============================] - 0s - loss: 0.0238 - acc: 0.8735 - val_loss: 0.0228 - val_acc: 0.8807
Epoch 71/99
60000/60000 [==============================] - 0s - loss: 0.0237 - acc: 0.8738 - val_loss: 0.0227 - val_acc: 0.8808
Epoch 72/99
60000/60000 [==============================] - 0s - loss: 0.0237 - acc: 0.8739 - val_loss: 0.0227 - val_acc: 0.8809
Epoch 73/99
60000/60000 [==============================] - 0s - loss: 0.0236 - acc: 0.8742 - val_loss: 0.0226 - val_acc: 0.8810
Epoch 74/99
60000/60000 [==============================] - 0s - loss: 0.0235 - acc: 0.8743 - val_loss: 0.0226 - val_acc: 0.8815
Epoch 75/99
60000/60000 [==============================] - 1s - loss: 0.0235 - acc: 0.8745 - val_loss: 0.0225 - val_acc: 0.8815
Epoch 76/99
60000/60000 [==============================] - 0s - loss: 0.0234 - acc: 0.8747 - val_loss: 0.0225 - val_acc: 0.8818
Epoch 77/99
60000/60000 [==============================] - 0s - loss: 0.0234 - acc: 0.8751 - val_loss: 0.0224 - val_acc: 0.8819
Epoch 78/99
60000/60000 [==============================] - 0s - loss: 0.0233 - acc: 0.8752 - val_loss: 0.0223 - val_acc: 0.8822
Epoch 79/99
60000/60000 [==============================] - 0s - loss: 0.0233 - acc: 0.8754 - val_loss: 0.0223 - val_acc: 0.8826
Epoch 80/99
60000/60000 [==============================] - 0s - loss: 0.0232 - acc: 0.8755 - val_loss: 0.0222 - val_acc: 0.8828
Epoch 81/99
60000/60000 [==============================] - 0s - loss: 0.0231 - acc: 0.8758 - val_loss: 0.0222 - val_acc: 0.8830
Epoch 82/99
60000/60000 [==============================] - 1s - loss: 0.0231 - acc: 0.8759 - val_loss: 0.0221 - val_acc: 0.8834
Epoch 83/99
60000/60000 [==============================] - 1s - loss: 0.0230 - acc: 0.8762 - val_loss: 0.0221 - val_acc: 0.8833
Epoch 84/99
60000/60000 [==============================] - 1s - loss: 0.0230 - acc: 0.8764 - val_loss: 0.0220 - val_acc: 0.8837
Epoch 85/99
60000/60000 [==============================] - 1s - loss: 0.0229 - acc: 0.8765 - val_loss: 0.0220 - val_acc: 0.8837
Epoch 86/99
60000/60000 [==============================] - 0s - loss: 0.0229 - acc: 0.8769 - val_loss: 0.0219 - val_acc: 0.8837
Epoch 87/99
60000/60000 [==============================] - 0s - loss: 0.0228 - acc: 0.8771 - val_loss: 0.0219 - val_acc: 0.8838
Epoch 88/99
60000/60000 [==============================] - 1s - loss: 0.0228 - acc: 0.8772 - val_loss: 0.0218 - val_acc: 0.8844
Epoch 89/99
60000/60000 [==============================] - 0s - loss: 0.0227 - acc: 0.8773 - val_loss: 0.0218 - val_acc: 0.8844
Epoch 90/99
60000/60000 [==============================] - 0s - loss: 0.0227 - acc: 0.8775 - val_loss: 0.0217 - val_acc: 0.8848
Epoch 91/99
60000/60000 [==============================] - 0s - loss: 0.0226 - acc: 0.8776 - val_loss: 0.0217 - val_acc: 0.8854
Epoch 92/99
60000/60000 [==============================] - 0s - loss: 0.0226 - acc: 0.8778 - val_loss: 0.0216 - val_acc: 0.8855
Epoch 93/99
60000/60000 [==============================] - 0s - loss: 0.0225 - acc: 0.8780 - val_loss: 0.0216 - val_acc: 0.8858
Epoch 94/99
60000/60000 [==============================] - 1s - loss: 0.0225 - acc: 0.8780 - val_loss: 0.0215 - val_acc: 0.8860
Epoch 95/99
60000/60000 [==============================] - 1s - loss: 0.0224 - acc: 0.8780 - val_loss: 0.0215 - val_acc: 0.8862
Epoch 96/99
60000/60000 [==============================] - 1s - loss: 0.0224 - acc: 0.8783 - val_loss: 0.0214 - val_acc: 0.8865
Epoch 97/99
60000/60000 [==============================] - 1s - loss: 0.0223 - acc: 0.8784 - val_loss: 0.0214 - val_acc: 0.8869
Epoch 98/99
60000/60000 [==============================] - 1s - loss: 0.0223 - acc: 0.8785 - val_loss: 0.0213 - val_acc: 0.8869
Epoch 99/99
60000/60000 [==============================] - 1s - loss: 0.0223 - acc: 0.8786 - val_loss: 0.0213 - val_acc: 0.8868
<keras.callbacks.History at 0x12ddb3ef0>
```python
# 88% accuracy after another 99 epochs
nn.evaluate(X_test, y_test)
```
9696/10000 [============================>.] - ETA: 0s
[0.021296201345324516, 0.88680000000000003]
## Activation functions
In computational networks, the [activation function](https://en.wikipedia.org/wiki/Activation_function) of a node
defines the output of that node given an input or set of inputs. A
standard computer chip circuit can be seen as a digital network of
activation functions that can be “ON” (1) or “OFF” (0), depending on
input. This is similar to the behavior of the linear perceptron in
neural networks. However, only *nonlinear* activation functions allow
such networks to compute nontrivial problems using only a small number
of nodes. In artificial neural networks this function is also called
the **transfer function**.
### Functions
In biologically inspired neural networks, the activation function is
usually an abstraction representing the rate of action potential
firing in the cell. In its simplest form, this function is binary—that
is, either the neuron is firing or not. The function looks like
$\phi(v_i)=U(v_i)$, where $U$ is the Heaviside step function. In this
case, many neurons must be used if the computation is to go beyond
linear separation of categories.
A line of positive slope may be used to reflect the increase in firing
rate that occurs as input current increases. Such a function would be of
the form $\phi(v_i)=\mu v_i$, where $\mu$ is the slope. This activation
function is linear, and therefore has the same problems as the binary
function. In addition, networks constructed using this model have
unstable convergence because neuron inputs along favored paths tend to
increase without bound, as this function is not normalizable.
All problems mentioned above can be handled by using a normalizable
sigmoid activation function. One realistic model stays at zero until
input current is received, at which point the firing frequency increases
quickly at first, but gradually approaches an asymptote at 100% firing
rate. Mathematically, this looks like $\phi(v_i)=U(v_i)\tanh(v_i)$,
where the hyperbolic tangent function can be replaced by any sigmoid
function. This behavior is realistically reflected in the neuron, as
neurons cannot physically fire faster than a certain rate. This model
runs into problems, however, in computational networks as it is not
differentiable, a requirement to calculate backpropagation.
The final model, then, that is used in multilayer perceptrons is a
sigmoidal activation function in the form of a hyperbolic tangent. Two
forms of this function are commonly used: $\phi(v_i)=\tanh(v_i)$, whose
range is normalized from -1 to 1, and $\phi(v_i) = (1+\exp(-v_i))^{-1}$,
which is vertically translated and scaled so that its range is
normalized from 0 to 1. The latter model is often considered more
biologically realistic, but it runs into theoretical and experimental
difficulties with certain types of computational problems.
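To make the shapes of these firing-rate abstractions concrete, here is a small NumPy sketch of the three forms discussed above; the function names are just descriptive, not part of any library.

```python
import numpy as np

def binary_step(v):
    # phi(v) = U(v): Heaviside step -- the neuron either fires (1) or not (0)
    return np.where(v >= 0, 1.0, 0.0)

def linear_rate(v, mu=1.0):
    # phi(v) = mu * v: firing rate grows without bound as the input grows
    return mu * v

def saturating_rate(v):
    # phi(v) = U(v) * tanh(v): zero below threshold, then rises quickly
    # and saturates at the maximum firing rate (normalized here to 1)
    return np.where(v >= 0, np.tanh(v), 0.0)

v = np.linspace(-3, 3, 7)
print(binary_step(v))
print(linear_rate(v, mu=0.5))
print(saturating_rate(v))
```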
## Comparison of activation functions
Some desirable properties in an activation function include:
- Nonlinear – When the activation function is non-linear, then a
  two-layer neural network can be proven to be a universal function
  approximator. The identity activation function does not satisfy
  this property. When multiple layers use the identity activation
  function, the entire network is equivalent to a single-layer model
  (a small numerical check of this collapse is sketched after this list).
- Continuously differentiable – This property is necessary for
enabling gradient-based optimization methods. The binary step
activation function is not differentiable at 0, and it
differentiates to 0 for all other values, so gradient-based methods
can make no progress with it.
- Range – When the range of the activation function is finite,
gradient-based training methods tend to be more stable, because
pattern presentations significantly affect only limited weights.
When the range is infinite, training is generally more efficient
because pattern presentations significantly affect most of the
weights. In the latter case, smaller learning rates are typically
necessary.
- Monotonic – When the activation function is monotonic, the error
surface associated with a single-layer model is guaranteed to be
convex.
- Smooth functions with a monotonic derivative – These have been shown
  to generalize better in some cases. The argument for these
  properties suggests that such activation functions are more
  consistent with Occam's razor.
- Approximates identity near the origin – When activation functions
have this property, the neural network will learn efficiently when
its weights are initialized with small random values. When the
activation function does not approximate identity near the origin,
special care must be used when initializing the weights.
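As promised above, here is a minimal numerical check of the first property, written in plain NumPy with purely illustrative variable names: composing two layers that both use the identity activation gives exactly the same mapping as a single linear layer.

```python
import numpy as np

np.random.seed(0)

# Two "layers" with the identity activation: y = W2 @ (W1 @ x + b1) + b2
W1, b1 = np.random.randn(5, 3), np.random.randn(5)
W2, b2 = np.random.randn(2, 5), np.random.randn(2)

x = np.random.randn(3)
two_layer = W2 @ (W1 @ x + b1) + b2

# The same mapping expressed as a single linear layer
W, b = W2 @ W1, W2 @ b1 + b2
one_layer = W @ x + b

print(np.allclose(two_layer, one_layer))  # True: the extra layer added nothing
```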
## Rectified linear unit (ReLU) transfer function
| Name | Equation | Derivative | Range | Order of continuity |
| --- | --- | --- | --- | --- |
| Identity | $f(x)=x$ | $f'(x)=1$ | $(-\infty,\infty)$ | $C^\infty$ |
| Logistic (a.k.a. Soft step) | $f(x)=\frac{1}{1+e^{-x}}$ | $f'(x)=f(x)(1-f(x))$ | $(0,1)$ | $C^\infty$ |
| TanH | $f(x)=\tanh(x)=\frac{2}{1+e^{-2x}}-1$ | $f'(x)=1-f(x)^2$ | $(-1,1)$ | $C^\infty$ |
| Rectified linear unit (ReLU) | $f(x) = \begin{cases} 0 & \text{for } x < 0\\ x & \text{for } x \ge 0\end{cases}$ | $f'(x) = \begin{cases} 0 & \text{for } x < 0\\ 1 & \text{for } x \ge 0\end{cases}$ | $[0,\infty)$ | $C^0$ |
The rectified linear unit (ReLU) seems to work well empirically.
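For reference, the last row of the table can be written out directly in NumPy; the helper names below are illustrative only.

```python
import numpy as np

def relu(x):
    # f(x) = 0 for x < 0, x for x >= 0
    return np.maximum(0.0, x)

def relu_prime(x):
    # f'(x) = 0 for x < 0, 1 for x >= 0
    # (the kink at 0 is why ReLU is only C^0)
    return np.where(x >= 0, 1.0, 0.0)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(x))        # [0.  0.  0.  0.5 2. ]
print(relu_prime(x))  # [0. 0. 1. 1. 1.]
```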
```python
def shallow_net_B(n=55, i=784, o=10):
    # Create a simple net with one dense hidden layer
    # Defaults: 55 hidden neurons, 784 inputs, 10 outputs
    # Uses ReLU activation in the hidden layer
    net = Sequential()
    net.add(Dense(n, activation='relu', input_shape=(i,)))
    net.add(Dense(o, activation='softmax'))
    # Compile net with mean squared error loss and SGD
    net.compile(loss='mean_squared_error', optimizer=SGD(lr=0.01), metrics=['accuracy'])
    return net
```
```python
nn2=shallow_net_B()
nn2.summary()
```
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_3 (Dense) (None, 55) 43175
_________________________________________________________________
dense_4 (Dense) (None, 10) 560
=================================================================
Total params: 43,735
Trainable params: 43,735
Non-trainable params: 0
_________________________________________________________________
```python
nn2.fit(X_train, y_train, batch_size=128, epochs=99, verbose=1, validation_data=(X_test, y_test))
```
Train on 60000 samples, validate on 10000 samples
Epoch 1/99
60000/60000 [==============================] - 1s - loss: 0.0912 - acc: 0.0756 - val_loss: 0.0902 - val_acc: 0.1054
Epoch 2/99
60000/60000 [==============================] - 1s - loss: 0.0892 - acc: 0.1552 - val_loss: 0.0883 - val_acc: 0.2029
Epoch 3/99
60000/60000 [==============================] - 1s - loss: 0.0873 - acc: 0.2435 - val_loss: 0.0863 - val_acc: 0.2915
Epoch 4/99
60000/60000 [==============================] - 1s - loss: 0.0852 - acc: 0.3212 - val_loss: 0.0841 - val_acc: 0.3640
Epoch 5/99
60000/60000 [==============================] - 1s - loss: 0.0829 - acc: 0.3821 - val_loss: 0.0816 - val_acc: 0.4106
Epoch 6/99
60000/60000 [==============================] - 1s - loss: 0.0803 - acc: 0.4162 - val_loss: 0.0788 - val_acc: 0.4353
Epoch 7/99
60000/60000 [==============================] - 1s - loss: 0.0775 - acc: 0.4381 - val_loss: 0.0758 - val_acc: 0.4511
Epoch 8/99
60000/60000 [==============================] - 1s - loss: 0.0744 - acc: 0.4594 - val_loss: 0.0726 - val_acc: 0.4724
Epoch 9/99
60000/60000 [==============================] - 1s - loss: 0.0712 - acc: 0.4845 - val_loss: 0.0693 - val_acc: 0.5041
Epoch 10/99
60000/60000 [==============================] - 1s - loss: 0.0680 - acc: 0.5242 - val_loss: 0.0661 - val_acc: 0.5474
Epoch 11/99
60000/60000 [==============================] - 0s - loss: 0.0649 - acc: 0.5734 - val_loss: 0.0630 - val_acc: 0.6030
Epoch 12/99
60000/60000 [==============================] - 0s - loss: 0.0618 - acc: 0.6247 - val_loss: 0.0599 - val_acc: 0.6471
Epoch 13/99
60000/60000 [==============================] - 0s - loss: 0.0587 - acc: 0.6640 - val_loss: 0.0568 - val_acc: 0.6821
Epoch 14/99
60000/60000 [==============================] - 0s - loss: 0.0557 - acc: 0.6906 - val_loss: 0.0538 - val_acc: 0.7076
Epoch 15/99
60000/60000 [==============================] - 0s - loss: 0.0528 - acc: 0.7085 - val_loss: 0.0509 - val_acc: 0.7248
Epoch 16/99
60000/60000 [==============================] - 1s - loss: 0.0501 - acc: 0.7229 - val_loss: 0.0483 - val_acc: 0.7400
Epoch 17/99
60000/60000 [==============================] - 1s - loss: 0.0477 - acc: 0.7377 - val_loss: 0.0458 - val_acc: 0.7556
Epoch 18/99
60000/60000 [==============================] - 1s - loss: 0.0454 - acc: 0.7532 - val_loss: 0.0436 - val_acc: 0.7726
Epoch 19/99
60000/60000 [==============================] - 1s - loss: 0.0433 - acc: 0.7701 - val_loss: 0.0414 - val_acc: 0.7882
Epoch 20/99
60000/60000 [==============================] - 1s - loss: 0.0413 - acc: 0.7853 - val_loss: 0.0395 - val_acc: 0.8010
Epoch 21/99
60000/60000 [==============================] - 1s - loss: 0.0395 - acc: 0.7971 - val_loss: 0.0377 - val_acc: 0.8104
Epoch 22/99
60000/60000 [==============================] - 1s - loss: 0.0378 - acc: 0.8064 - val_loss: 0.0361 - val_acc: 0.8188
Epoch 23/99
60000/60000 [==============================] - 1s - loss: 0.0363 - acc: 0.8137 - val_loss: 0.0346 - val_acc: 0.8254
Epoch 24/99
60000/60000 [==============================] - 1s - loss: 0.0350 - acc: 0.8196 - val_loss: 0.0333 - val_acc: 0.8324
Epoch 25/99
60000/60000 [==============================] - 1s - loss: 0.0337 - acc: 0.8244 - val_loss: 0.0321 - val_acc: 0.8362
Epoch 26/99
60000/60000 [==============================] - 1s - loss: 0.0326 - acc: 0.8294 - val_loss: 0.0311 - val_acc: 0.8407
Epoch 27/99
60000/60000 [==============================] - 1s - loss: 0.0316 - acc: 0.8339 - val_loss: 0.0301 - val_acc: 0.8450
Epoch 28/99
60000/60000 [==============================] - 1s - loss: 0.0307 - acc: 0.8371 - val_loss: 0.0292 - val_acc: 0.8479
Epoch 29/99
60000/60000 [==============================] - 1s - loss: 0.0299 - acc: 0.8400 - val_loss: 0.0284 - val_acc: 0.8509
Epoch 30/99
60000/60000 [==============================] - 1s - loss: 0.0291 - acc: 0.8433 - val_loss: 0.0276 - val_acc: 0.8539
Epoch 31/99
60000/60000 [==============================] - 1s - loss: 0.0284 - acc: 0.8466 - val_loss: 0.0269 - val_acc: 0.8558
Epoch 32/99
60000/60000 [==============================] - 1s - loss: 0.0277 - acc: 0.8489 - val_loss: 0.0263 - val_acc: 0.8587
Epoch 33/99
60000/60000 [==============================] - 1s - loss: 0.0271 - acc: 0.8512 - val_loss: 0.0257 - val_acc: 0.8613
Epoch 34/99
60000/60000 [==============================] - 1s - loss: 0.0266 - acc: 0.8538 - val_loss: 0.0252 - val_acc: 0.8628
Epoch 35/99
60000/60000 [==============================] - 1s - loss: 0.0261 - acc: 0.8562 - val_loss: 0.0247 - val_acc: 0.8640
Epoch 36/99
60000/60000 [==============================] - 1s - loss: 0.0256 - acc: 0.8582 - val_loss: 0.0242 - val_acc: 0.8662
Epoch 37/99
60000/60000 [==============================] - 1s - loss: 0.0252 - acc: 0.8601 - val_loss: 0.0238 - val_acc: 0.8674
Epoch 38/99
60000/60000 [==============================] - 1s - loss: 0.0247 - acc: 0.8618 - val_loss: 0.0234 - val_acc: 0.8689
Epoch 39/99
60000/60000 [==============================] - 1s - loss: 0.0244 - acc: 0.8637 - val_loss: 0.0230 - val_acc: 0.8704
Epoch 40/99
60000/60000 [==============================] - 1s - loss: 0.0240 - acc: 0.8654 - val_loss: 0.0227 - val_acc: 0.8712
Epoch 41/99
60000/60000 [==============================] - 1s - loss: 0.0236 - acc: 0.8668 - val_loss: 0.0223 - val_acc: 0.8730
Epoch 42/99
60000/60000 [==============================] - 1s - loss: 0.0233 - acc: 0.8681 - val_loss: 0.0220 - val_acc: 0.8744
Epoch 43/99
60000/60000 [==============================] - 1s - loss: 0.0230 - acc: 0.8693 - val_loss: 0.0217 - val_acc: 0.8749
Epoch 44/99
60000/60000 [==============================] - 1s - loss: 0.0227 - acc: 0.8702 - val_loss: 0.0215 - val_acc: 0.8762
Epoch 45/99
60000/60000 [==============================] - 1s - loss: 0.0224 - acc: 0.8714 - val_loss: 0.0212 - val_acc: 0.8770
Epoch 46/99
60000/60000 [==============================] - 1s - loss: 0.0222 - acc: 0.8726 - val_loss: 0.0209 - val_acc: 0.8788
Epoch 47/99
60000/60000 [==============================] - 1s - loss: 0.0219 - acc: 0.8739 - val_loss: 0.0207 - val_acc: 0.8795
Epoch 48/99
60000/60000 [==============================] - 1s - loss: 0.0217 - acc: 0.8751 - val_loss: 0.0205 - val_acc: 0.8811
Epoch 49/99
60000/60000 [==============================] - 1s - loss: 0.0215 - acc: 0.8760 - val_loss: 0.0203 - val_acc: 0.8820
Epoch 50/99
60000/60000 [==============================] - 1s - loss: 0.0213 - acc: 0.8771 - val_loss: 0.0200 - val_acc: 0.8829
Epoch 51/99
60000/60000 [==============================] - 1s - loss: 0.0211 - acc: 0.8778 - val_loss: 0.0198 - val_acc: 0.8844
Epoch 52/99
60000/60000 [==============================] - 1s - loss: 0.0209 - acc: 0.8789 - val_loss: 0.0197 - val_acc: 0.8853
Epoch 53/99
60000/60000 [==============================] - 1s - loss: 0.0207 - acc: 0.8797 - val_loss: 0.0195 - val_acc: 0.8862
Epoch 54/99
60000/60000 [==============================] - 1s - loss: 0.0205 - acc: 0.8806 - val_loss: 0.0193 - val_acc: 0.8870
Epoch 55/99
60000/60000 [==============================] - 1s - loss: 0.0203 - acc: 0.8816 - val_loss: 0.0191 - val_acc: 0.8884
Epoch 56/99
60000/60000 [==============================] - 1s - loss: 0.0201 - acc: 0.8824 - val_loss: 0.0190 - val_acc: 0.8889
Epoch 57/99
60000/60000 [==============================] - 1s - loss: 0.0200 - acc: 0.8832 - val_loss: 0.0188 - val_acc: 0.8892
Epoch 58/99
60000/60000 [==============================] - 1s - loss: 0.0198 - acc: 0.8837 - val_loss: 0.0187 - val_acc: 0.8896
Epoch 59/99
60000/60000 [==============================] - 1s - loss: 0.0197 - acc: 0.8847 - val_loss: 0.0185 - val_acc: 0.8906
Epoch 60/99
60000/60000 [==============================] - 1s - loss: 0.0195 - acc: 0.8855 - val_loss: 0.0184 - val_acc: 0.8915
Epoch 61/99
60000/60000 [==============================] - 1s - loss: 0.0194 - acc: 0.8859 - val_loss: 0.0183 - val_acc: 0.8923
Epoch 62/99
60000/60000 [==============================] - 1s - loss: 0.0193 - acc: 0.8867 - val_loss: 0.0181 - val_acc: 0.8932
Epoch 63/99
60000/60000 [==============================] - 1s - loss: 0.0191 - acc: 0.8873 - val_loss: 0.0180 - val_acc: 0.8936
Epoch 64/99
60000/60000 [==============================] - 1s - loss: 0.0190 - acc: 0.8878 - val_loss: 0.0179 - val_acc: 0.8945
Epoch 65/99
60000/60000 [==============================] - 1s - loss: 0.0189 - acc: 0.8885 - val_loss: 0.0178 - val_acc: 0.8950
Epoch 66/99
60000/60000 [==============================] - 1s - loss: 0.0188 - acc: 0.8888 - val_loss: 0.0177 - val_acc: 0.8956
Epoch 67/99
60000/60000 [==============================] - 0s - loss: 0.0186 - acc: 0.8894 - val_loss: 0.0176 - val_acc: 0.8961
Epoch 68/99
60000/60000 [==============================] - 0s - loss: 0.0185 - acc: 0.8898 - val_loss: 0.0174 - val_acc: 0.8963
Epoch 69/99
60000/60000 [==============================] - 0s - loss: 0.0184 - acc: 0.8905 - val_loss: 0.0173 - val_acc: 0.8971
Epoch 70/99
60000/60000 [==============================] - 0s - loss: 0.0183 - acc: 0.8908 - val_loss: 0.0172 - val_acc: 0.8978
Epoch 71/99
60000/60000 [==============================] - 0s - loss: 0.0182 - acc: 0.8913 - val_loss: 0.0171 - val_acc: 0.8982
Epoch 72/99
60000/60000 [==============================] - 0s - loss: 0.0181 - acc: 0.8917 - val_loss: 0.0171 - val_acc: 0.8988
Epoch 73/99
60000/60000 [==============================] - 0s - loss: 0.0180 - acc: 0.8923 - val_loss: 0.0170 - val_acc: 0.8990
Epoch 74/99
60000/60000 [==============================] - 0s - loss: 0.0179 - acc: 0.8925 - val_loss: 0.0169 - val_acc: 0.8996
Epoch 75/99
60000/60000 [==============================] - 0s - loss: 0.0178 - acc: 0.8929 - val_loss: 0.0168 - val_acc: 0.9003
Epoch 76/99
60000/60000 [==============================] - 0s - loss: 0.0177 - acc: 0.8935 - val_loss: 0.0167 - val_acc: 0.9006
Epoch 77/99
60000/60000 [==============================] - 1s - loss: 0.0176 - acc: 0.8937 - val_loss: 0.0166 - val_acc: 0.9010
Epoch 78/99
60000/60000 [==============================] - 1s - loss: 0.0176 - acc: 0.8942 - val_loss: 0.0165 - val_acc: 0.9015
Epoch 79/99
60000/60000 [==============================] - 0s - loss: 0.0175 - acc: 0.8948 - val_loss: 0.0165 - val_acc: 0.9020
Epoch 80/99
60000/60000 [==============================] - 1s - loss: 0.0174 - acc: 0.8950 - val_loss: 0.0164 - val_acc: 0.9023
Epoch 81/99
60000/60000 [==============================] - 0s - loss: 0.0173 - acc: 0.8955 - val_loss: 0.0163 - val_acc: 0.9024
Epoch 82/99
60000/60000 [==============================] - 0s - loss: 0.0172 - acc: 0.8959 - val_loss: 0.0162 - val_acc: 0.9028
Epoch 83/99
60000/60000 [==============================] - 0s - loss: 0.0172 - acc: 0.8962 - val_loss: 0.0162 - val_acc: 0.9028
Epoch 84/99
60000/60000 [==============================] - 0s - loss: 0.0171 - acc: 0.8967 - val_loss: 0.0161 - val_acc: 0.9027
Epoch 85/99
60000/60000 [==============================] - 0s - loss: 0.0170 - acc: 0.8970 - val_loss: 0.0160 - val_acc: 0.9030
Epoch 86/99
60000/60000 [==============================] - 1s - loss: 0.0169 - acc: 0.8975 - val_loss: 0.0160 - val_acc: 0.9031
Epoch 87/99
60000/60000 [==============================] - 1s - loss: 0.0169 - acc: 0.8981 - val_loss: 0.0159 - val_acc: 0.9036
Epoch 88/99
60000/60000 [==============================] - 1s - loss: 0.0168 - acc: 0.8980 - val_loss: 0.0158 - val_acc: 0.9040
Epoch 89/99
60000/60000 [==============================] - 1s - loss: 0.0167 - acc: 0.8985 - val_loss: 0.0158 - val_acc: 0.9040
Epoch 90/99
60000/60000 [==============================] - 1s - loss: 0.0167 - acc: 0.8987 - val_loss: 0.0157 - val_acc: 0.9039
Epoch 91/99
60000/60000 [==============================] - 1s - loss: 0.0166 - acc: 0.8990 - val_loss: 0.0156 - val_acc: 0.9042
Epoch 92/99
60000/60000 [==============================] - 1s - loss: 0.0165 - acc: 0.8993 - val_loss: 0.0156 - val_acc: 0.9045
Epoch 93/99
60000/60000 [==============================] - 1s - loss: 0.0165 - acc: 0.8997 - val_loss: 0.0155 - val_acc: 0.9051
Epoch 94/99
60000/60000 [==============================] - 1s - loss: 0.0164 - acc: 0.9000 - val_loss: 0.0155 - val_acc: 0.9048
Epoch 95/99
60000/60000 [==============================] - 1s - loss: 0.0164 - acc: 0.9004 - val_loss: 0.0154 - val_acc: 0.9052
Epoch 96/99
60000/60000 [==============================] - 1s - loss: 0.0163 - acc: 0.9008 - val_loss: 0.0154 - val_acc: 0.9054
Epoch 97/99
60000/60000 [==============================] - 0s - loss: 0.0162 - acc: 0.9011 - val_loss: 0.0153 - val_acc: 0.9053
Epoch 98/99
60000/60000 [==============================] - 0s - loss: 0.0162 - acc: 0.9012 - val_loss: 0.0152 - val_acc: 0.9056
Epoch 99/99
60000/60000 [==============================] - 0s - loss: 0.0161 - acc: 0.9016 - val_loss: 0.0152 - val_acc: 0.9056
<keras.callbacks.History at 0x12e124b00>
```python
# 90% accuracy after first 99 epochs with Relu
nn2.evaluate(X_test, y_test)
```
9984/10000 [============================>.] - ETA: 0s
[0.015197866915352642, 0.90559999999999996]
```python
nn2.fit(X_train, y_train, batch_size=128, epochs=99, verbose=1, validation_data=(X_test, y_test))
```
Train on 60000 samples, validate on 10000 samples
Epoch 1/99
60000/60000 [==============================] - 0s - loss: 0.0161 - acc: 0.9020 - val_loss: 0.0151 - val_acc: 0.9056
Epoch 2/99
60000/60000 [==============================] - 1s - loss: 0.0160 - acc: 0.9022 - val_loss: 0.0151 - val_acc: 0.9058
Epoch 3/99
60000/60000 [==============================] - 0s - loss: 0.0160 - acc: 0.9024 - val_loss: 0.0150 - val_acc: 0.9064
Epoch 4/99
60000/60000 [==============================] - 0s - loss: 0.0159 - acc: 0.9026 - val_loss: 0.0150 - val_acc: 0.9062
Epoch 5/99
60000/60000 [==============================] - 0s - loss: 0.0159 - acc: 0.9029 - val_loss: 0.0150 - val_acc: 0.9065
Epoch 6/99
60000/60000 [==============================] - 0s - loss: 0.0158 - acc: 0.9032 - val_loss: 0.0149 - val_acc: 0.9070
Epoch 7/99
60000/60000 [==============================] - 0s - loss: 0.0158 - acc: 0.9034 - val_loss: 0.0149 - val_acc: 0.9075
Epoch 8/99
60000/60000 [==============================] - 0s - loss: 0.0157 - acc: 0.9037 - val_loss: 0.0148 - val_acc: 0.9077
Epoch 9/99
60000/60000 [==============================] - 0s - loss: 0.0157 - acc: 0.9040 - val_loss: 0.0148 - val_acc: 0.9079
Epoch 10/99
60000/60000 [==============================] - 0s - loss: 0.0156 - acc: 0.9041 - val_loss: 0.0147 - val_acc: 0.9083
Epoch 11/99
60000/60000 [==============================] - 0s - loss: 0.0156 - acc: 0.9043 - val_loss: 0.0147 - val_acc: 0.9089
Epoch 12/99
60000/60000 [==============================] - 0s - loss: 0.0155 - acc: 0.9048 - val_loss: 0.0146 - val_acc: 0.9089
Epoch 13/99
60000/60000 [==============================] - 0s - loss: 0.0155 - acc: 0.9048 - val_loss: 0.0146 - val_acc: 0.9093
Epoch 14/99
60000/60000 [==============================] - 0s - loss: 0.0154 - acc: 0.9052 - val_loss: 0.0146 - val_acc: 0.9097
Epoch 15/99
60000/60000 [==============================] - 0s - loss: 0.0154 - acc: 0.9055 - val_loss: 0.0145 - val_acc: 0.9099
Epoch 16/99
60000/60000 [==============================] - 0s - loss: 0.0153 - acc: 0.9056 - val_loss: 0.0145 - val_acc: 0.9106
Epoch 17/99
60000/60000 [==============================] - 0s - loss: 0.0153 - acc: 0.9059 - val_loss: 0.0144 - val_acc: 0.9106
Epoch 18/99
60000/60000 [==============================] - 0s - loss: 0.0152 - acc: 0.9062 - val_loss: 0.0144 - val_acc: 0.9112
Epoch 19/99
60000/60000 [==============================] - 0s - loss: 0.0152 - acc: 0.9062 - val_loss: 0.0144 - val_acc: 0.9111
Epoch 20/99
60000/60000 [==============================] - 0s - loss: 0.0152 - acc: 0.9064 - val_loss: 0.0143 - val_acc: 0.9111
Epoch 21/99
60000/60000 [==============================] - 0s - loss: 0.0151 - acc: 0.9067 - val_loss: 0.0143 - val_acc: 0.9118
Epoch 22/99
60000/60000 [==============================] - 0s - loss: 0.0151 - acc: 0.9070 - val_loss: 0.0142 - val_acc: 0.9121
Epoch 23/99
60000/60000 [==============================] - 0s - loss: 0.0150 - acc: 0.9071 - val_loss: 0.0142 - val_acc: 0.9124
Epoch 24/99
60000/60000 [==============================] - 0s - loss: 0.0150 - acc: 0.9071 - val_loss: 0.0142 - val_acc: 0.9125
Epoch 25/99
60000/60000 [==============================] - 0s - loss: 0.0150 - acc: 0.9075 - val_loss: 0.0141 - val_acc: 0.9126
Epoch 26/99
60000/60000 [==============================] - 0s - loss: 0.0149 - acc: 0.9078 - val_loss: 0.0141 - val_acc: 0.9127
Epoch 27/99
60000/60000 [==============================] - 0s - loss: 0.0149 - acc: 0.9080 - val_loss: 0.0141 - val_acc: 0.9126
Epoch 28/99
60000/60000 [==============================] - 0s - loss: 0.0148 - acc: 0.9081 - val_loss: 0.0140 - val_acc: 0.9131
Epoch 29/99
60000/60000 [==============================] - 0s - loss: 0.0148 - acc: 0.9083 - val_loss: 0.0140 - val_acc: 0.9128
Epoch 30/99
60000/60000 [==============================] - 0s - loss: 0.0148 - acc: 0.9085 - val_loss: 0.0140 - val_acc: 0.9130
Epoch 31/99
60000/60000 [==============================] - 0s - loss: 0.0147 - acc: 0.9088 - val_loss: 0.0139 - val_acc: 0.9132
Epoch 32/99
60000/60000 [==============================] - 0s - loss: 0.0147 - acc: 0.9089 - val_loss: 0.0139 - val_acc: 0.9136
Epoch 33/99
60000/60000 [==============================] - 0s - loss: 0.0147 - acc: 0.9091 - val_loss: 0.0139 - val_acc: 0.9139
Epoch 34/99
60000/60000 [==============================] - 0s - loss: 0.0146 - acc: 0.9094 - val_loss: 0.0138 - val_acc: 0.9143
Epoch 35/99
60000/60000 [==============================] - 0s - loss: 0.0146 - acc: 0.9098 - val_loss: 0.0138 - val_acc: 0.9141
Epoch 36/99
60000/60000 [==============================] - 0s - loss: 0.0146 - acc: 0.9097 - val_loss: 0.0138 - val_acc: 0.9145
Epoch 37/99
60000/60000 [==============================] - 0s - loss: 0.0145 - acc: 0.9101 - val_loss: 0.0137 - val_acc: 0.9142
Epoch 38/99
60000/60000 [==============================] - 0s - loss: 0.0145 - acc: 0.9102 - val_loss: 0.0137 - val_acc: 0.9147
Epoch 39/99
60000/60000 [==============================] - 0s - loss: 0.0145 - acc: 0.9102 - val_loss: 0.0137 - val_acc: 0.9146
Epoch 40/99
60000/60000 [==============================] - 0s - loss: 0.0144 - acc: 0.9104 - val_loss: 0.0137 - val_acc: 0.9147
Epoch 41/99
60000/60000 [==============================] - 0s - loss: 0.0144 - acc: 0.9106 - val_loss: 0.0136 - val_acc: 0.9148
Epoch 42/99
60000/60000 [==============================] - 0s - loss: 0.0144 - acc: 0.9110 - val_loss: 0.0136 - val_acc: 0.9149
Epoch 43/99
60000/60000 [==============================] - 0s - loss: 0.0143 - acc: 0.9111 - val_loss: 0.0136 - val_acc: 0.9149
Epoch 44/99
60000/60000 [==============================] - 0s - loss: 0.0143 - acc: 0.9113 - val_loss: 0.0135 - val_acc: 0.9148
Epoch 45/99
60000/60000 [==============================] - 0s - loss: 0.0143 - acc: 0.9115 - val_loss: 0.0135 - val_acc: 0.9151
Epoch 46/99
60000/60000 [==============================] - 0s - loss: 0.0142 - acc: 0.9115 - val_loss: 0.0135 - val_acc: 0.9155
Epoch 47/99
60000/60000 [==============================] - 0s - loss: 0.0142 - acc: 0.9117 - val_loss: 0.0135 - val_acc: 0.9156
Epoch 48/99
60000/60000 [==============================] - 0s - loss: 0.0142 - acc: 0.9120 - val_loss: 0.0134 - val_acc: 0.9162
Epoch 49/99
60000/60000 [==============================] - 0s - loss: 0.0141 - acc: 0.9122 - val_loss: 0.0134 - val_acc: 0.9160
Epoch 50/99
60000/60000 [==============================] - 0s - loss: 0.0141 - acc: 0.9123 - val_loss: 0.0134 - val_acc: 0.9163
Epoch 51/99
60000/60000 [==============================] - 0s - loss: 0.0141 - acc: 0.9122 - val_loss: 0.0134 - val_acc: 0.9165
Epoch 52/99
60000/60000 [==============================] - 0s - loss: 0.0141 - acc: 0.9126 - val_loss: 0.0133 - val_acc: 0.9166
Epoch 53/99
60000/60000 [==============================] - 0s - loss: 0.0140 - acc: 0.9126 - val_loss: 0.0133 - val_acc: 0.9166
Epoch 54/99
60000/60000 [==============================] - 0s - loss: 0.0140 - acc: 0.9126 - val_loss: 0.0133 - val_acc: 0.9171
Epoch 55/99
60000/60000 [==============================] - 0s - loss: 0.0140 - acc: 0.9130 - val_loss: 0.0133 - val_acc: 0.9170
Epoch 56/99
60000/60000 [==============================] - 0s - loss: 0.0139 - acc: 0.9132 - val_loss: 0.0132 - val_acc: 0.9177
Epoch 57/99
60000/60000 [==============================] - 0s - loss: 0.0139 - acc: 0.9134 - val_loss: 0.0132 - val_acc: 0.9176
Epoch 58/99
60000/60000 [==============================] - 0s - loss: 0.0139 - acc: 0.9133 - val_loss: 0.0132 - val_acc: 0.9179
Epoch 59/99
60000/60000 [==============================] - 0s - loss: 0.0139 - acc: 0.9137 - val_loss: 0.0132 - val_acc: 0.9177
Epoch 60/99
60000/60000 [==============================] - 0s - loss: 0.0138 - acc: 0.9138 - val_loss: 0.0131 - val_acc: 0.9178
Epoch 61/99
60000/60000 [==============================] - 0s - loss: 0.0138 - acc: 0.9138 - val_loss: 0.0131 - val_acc: 0.9181
Epoch 62/99
60000/60000 [==============================] - 0s - loss: 0.0138 - acc: 0.9140 - val_loss: 0.0131 - val_acc: 0.9183
Epoch 63/99
60000/60000 [==============================] - 0s - loss: 0.0137 - acc: 0.9142 - val_loss: 0.0131 - val_acc: 0.9186
Epoch 64/99
60000/60000 [==============================] - 0s - loss: 0.0137 - acc: 0.9142 - val_loss: 0.0130 - val_acc: 0.9185
Epoch 65/99
60000/60000 [==============================] - 0s - loss: 0.0137 - acc: 0.9142 - val_loss: 0.0130 - val_acc: 0.9189
Epoch 66/99
60000/60000 [==============================] - 0s - loss: 0.0137 - acc: 0.9146 - val_loss: 0.0130 - val_acc: 0.9190
Epoch 67/99
60000/60000 [==============================] - 0s - loss: 0.0136 - acc: 0.9147 - val_loss: 0.0130 - val_acc: 0.9193
Epoch 68/99
60000/60000 [==============================] - 0s - loss: 0.0136 - acc: 0.9150 - val_loss: 0.0129 - val_acc: 0.9192
Epoch 69/99
60000/60000 [==============================] - 0s - loss: 0.0136 - acc: 0.9151 - val_loss: 0.0129 - val_acc: 0.9191
Epoch 70/99
60000/60000 [==============================] - 0s - loss: 0.0136 - acc: 0.9151 - val_loss: 0.0129 - val_acc: 0.9192
Epoch 71/99
60000/60000 [==============================] - 0s - loss: 0.0135 - acc: 0.9155 - val_loss: 0.0129 - val_acc: 0.9192
Epoch 72/99
60000/60000 [==============================] - 0s - loss: 0.0135 - acc: 0.9156 - val_loss: 0.0129 - val_acc: 0.9191
Epoch 73/99
60000/60000 [==============================] - 0s - loss: 0.0135 - acc: 0.9156 - val_loss: 0.0128 - val_acc: 0.9193
Epoch 74/99
60000/60000 [==============================] - 0s - loss: 0.0135 - acc: 0.9159 - val_loss: 0.0128 - val_acc: 0.9192
Epoch 75/99
60000/60000 [==============================] - 0s - loss: 0.0134 - acc: 0.9159 - val_loss: 0.0128 - val_acc: 0.9191
Epoch 76/99
60000/60000 [==============================] - 0s - loss: 0.0134 - acc: 0.9162 - val_loss: 0.0128 - val_acc: 0.9196
Epoch 77/99
60000/60000 [==============================] - 0s - loss: 0.0134 - acc: 0.9161 - val_loss: 0.0128 - val_acc: 0.9199
Epoch 78/99
60000/60000 [==============================] - 0s - loss: 0.0134 - acc: 0.9163 - val_loss: 0.0127 - val_acc: 0.9199
Epoch 79/99
60000/60000 [==============================] - 0s - loss: 0.0133 - acc: 0.9165 - val_loss: 0.0127 - val_acc: 0.9201
Epoch 80/99
60000/60000 [==============================] - 0s - loss: 0.0133 - acc: 0.9166 - val_loss: 0.0127 - val_acc: 0.9202
Epoch 81/99
60000/60000 [==============================] - 0s - loss: 0.0133 - acc: 0.9167 - val_loss: 0.0127 - val_acc: 0.9201
Epoch 82/99
60000/60000 [==============================] - 0s - loss: 0.0133 - acc: 0.9169 - val_loss: 0.0127 - val_acc: 0.9205
Epoch 83/99
60000/60000 [==============================] - 0s - loss: 0.0133 - acc: 0.9169 - val_loss: 0.0126 - val_acc: 0.9205
Epoch 84/99
60000/60000 [==============================] - 0s - loss: 0.0132 - acc: 0.9171 - val_loss: 0.0126 - val_acc: 0.9205
Epoch 85/99
60000/60000 [==============================] - 0s - loss: 0.0132 - acc: 0.9172 - val_loss: 0.0126 - val_acc: 0.9206
Epoch 86/99
60000/60000 [==============================] - 0s - loss: 0.0132 - acc: 0.9174 - val_loss: 0.0126 - val_acc: 0.9208
Epoch 87/99
60000/60000 [==============================] - 0s - loss: 0.0132 - acc: 0.9176 - val_loss: 0.0126 - val_acc: 0.9209
Epoch 88/99
60000/60000 [==============================] - 0s - loss: 0.0131 - acc: 0.9177 - val_loss: 0.0125 - val_acc: 0.9208
Epoch 89/99
60000/60000 [==============================] - 0s - loss: 0.0131 - acc: 0.9179 - val_loss: 0.0125 - val_acc: 0.9210
Epoch 90/99
60000/60000 [==============================] - 0s - loss: 0.0131 - acc: 0.9181 - val_loss: 0.0125 - val_acc: 0.9211
Epoch 91/99
60000/60000 [==============================] - 0s - loss: 0.0131 - acc: 0.9182 - val_loss: 0.0125 - val_acc: 0.9211
Epoch 92/99
60000/60000 [==============================] - 0s - loss: 0.0131 - acc: 0.9184 - val_loss: 0.0125 - val_acc: 0.9211
Epoch 93/99
60000/60000 [==============================] - 0s - loss: 0.0130 - acc: 0.9182 - val_loss: 0.0124 - val_acc: 0.9214
Epoch 94/99
60000/60000 [==============================] - 0s - loss: 0.0130 - acc: 0.9185 - val_loss: 0.0124 - val_acc: 0.9218
Epoch 95/99
60000/60000 [==============================] - 0s - loss: 0.0130 - acc: 0.9186 - val_loss: 0.0124 - val_acc: 0.9217
Epoch 96/99
60000/60000 [==============================] - 0s - loss: 0.0130 - acc: 0.9187 - val_loss: 0.0124 - val_acc: 0.9219
Epoch 97/99
60000/60000 [==============================] - 0s - loss: 0.0130 - acc: 0.9189 - val_loss: 0.0124 - val_acc: 0.9221
Epoch 98/99
60000/60000 [==============================] - 0s - loss: 0.0129 - acc: 0.9191 - val_loss: 0.0124 - val_acc: 0.9218
Epoch 99/99
60000/60000 [==============================] - 0s - loss: 0.0129 - acc: 0.9192 - val_loss: 0.0123 - val_acc: 0.9220
<keras.callbacks.History at 0x122654668>
```python
# 92% accuracy after another 99 epochs with Relu
# Seems to be a plateau
nn2.evaluate(X_test, y_test)
```
7552/10000 [=====================>........] - ETA: 0s
[0.012338483134144916, 0.92200000000000004]
## Loss or cost functions
### Loss function
Sometimes referred to as the **cost function** or **error function**
(not to be confused with the Gauss error function), the loss function
is a function that maps values of one or more variables onto a real
number intuitively representing some “cost” associated with those
values. For backpropagation, the loss function calculates the difference
between the network output and its expected output, after a case
propagates through the network.
### Assumptions
Two assumptions must be made about the form of the error function.
The first is that it can be written as an average
$E=\frac{1}{n}\sum_xE_x$ over error functions $E_x$, for individual
training examples, $x$. The reason for this assumption is that the
backpropagation algorithm calculates the gradient of the error function
for a single training example, which needs to be generalized to the
overall error function. The second assumption is that it can be written
as a function of the outputs from the neural network.
### Example loss function
Let $y,y'$ be vectors in $\mathbb{R}^n$.
Select an error function $E(y,y')$ measuring the difference between two
outputs.
The standard choice is $E(y,y') = \tfrac{1}{2} \lVert y-y'\rVert^2$,
the square of the Euclidean distance between the vectors $y$ and $y'$.
The factor of $\tfrac{1}{2}$ conveniently cancels the exponent when the
error function is subsequently differentiated.
The error function over $n$ training examples can be written as an
average:
$$E=\frac{1}{2n}\sum_x\lVert y(x)-y'(x) \rVert^2$$
and its partial derivative with respect to the outputs is
$$\frac{\partial E}{\partial y'} = y'-y$$
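A small sketch of this example loss, with the analytic gradient checked against a finite-difference estimate; all names below are illustrative, not Keras API.

```python
import numpy as np

def squared_error(y, y_pred):
    # E(y, y') = 1/2 * ||y - y'||^2
    return 0.5 * np.sum((y - y_pred) ** 2)

def squared_error_grad(y, y_pred):
    # dE/dy' = y' - y, as derived above
    return y_pred - y

y = np.array([0.0, 1.0, 0.0])
y_pred = np.array([0.2, 0.7, 0.1])

# Compare the analytic gradient with a central finite-difference estimate
eps = 1e-6
numeric = np.array([
    (squared_error(y, y_pred + eps * np.eye(3)[k]) -
     squared_error(y, y_pred - eps * np.eye(3)[k])) / (2 * eps)
    for k in range(3)
])
print(squared_error_grad(y, y_pred))  # [ 0.2 -0.3  0.1]
print(numeric)                        # matches to numerical precision
```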
## Cross entropy
In information theory, the [cross entropy](https://en.wikipedia.org/wiki/Cross_entropy) between two probability
distributions $p$ and $q$ over the same underlying set of events
measures the average number of bits needed to identify an event drawn
from the set, if a coding scheme is used that is optimized for an
“unnatural” probability distribution $q$, rather than the “true”
distribution $p$.
The cross entropy for the distributions $p$ and $q$ over a given set is
defined as follows:
$$H(p, q) = \operatorname{E}_p[-\log q] = H(p) + D_{\mathrm{KL}}(p \| q),\!$$
where $H(p)$ is the entropy of $p$, and $D_{\mathrm{KL}}(p \| q)$ is
the [Kullback–Leibler divergence](https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence) of $q$ from $p$ (also known as the
*relative entropy* of *p* with respect to *q* — note the reversal of
emphasis).
For discrete $p$ and $q$ this means
$$H(p, q) = -\sum_x p(x)\, \log q(x). \!$$
The situation for continuous distributions is analogous. We have to
assume that $p$ and $q$ are absolutely continuous with respect to some
reference measure $r$ (usually $r$ is a Lebesgue measure on a
Borel σ-algebra). Let $P$ and $Q$ be probability density functions
of $p$ and $q$ with respect to $r$. Then
$$-\int_X P(x)\, \log Q(x)\, dr(x) = \operatorname{E}_p[-\log Q]. \!$$
NB: The notation $H(p,q)$ is also used for a different concept, the
joint entropy of $p$ and $q$.
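For discrete distributions, the identity $H(p, q) = H(p) + D_{\mathrm{KL}}(p \| q)$ can be checked numerically with a few lines of NumPy (illustrative helper names only). The `categorical_crossentropy` loss used in the next cell is this same quantity, with $p$ a one-hot target and $q$ the network's softmax output.

```python
import numpy as np

def cross_entropy(p, q):
    # H(p, q) = -sum_x p(x) * log q(x)
    return -np.sum(p * np.log(q))

def entropy(p):
    # H(p) = -sum_x p(x) * log p(x)
    return -np.sum(p * np.log(p))

def kl_divergence(p, q):
    # D_KL(p || q) = sum_x p(x) * log(p(x) / q(x))
    return np.sum(p * np.log(p / q))

p = np.array([0.7, 0.2, 0.1])   # "true" distribution
q = np.array([0.5, 0.3, 0.2])   # coding/model distribution

print(cross_entropy(p, q))
print(entropy(p) + kl_divergence(p, q))  # identical: H(p,q) = H(p) + D_KL(p||q)
```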
```python
def shallow_net_C(n=55, i=784, o=10):
    # Create a simple net with one dense hidden layer
    # Defaults: 55 hidden neurons, 784 inputs, 10 outputs
    # Uses ReLU activation and categorical cross-entropy loss
    net = Sequential()
    net.add(Dense(n, activation='relu', input_shape=(i,)))
    net.add(Dense(o, activation='softmax'))
    # Compile net with cross-entropy loss and SGD
    net.compile(loss='categorical_crossentropy', optimizer=SGD(lr=0.01), metrics=['accuracy'])
    return net
```
```python
nn3=shallow_net_C()
nn3.summary()
```
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_5 (Dense) (None, 55) 43175
_________________________________________________________________
dense_6 (Dense) (None, 10) 560
=================================================================
Total params: 43,735
Trainable params: 43,735
Non-trainable params: 0
_________________________________________________________________
```python
nn3.fit(X_train, y_train, batch_size=128, epochs=99, verbose=1, validation_data=(X_test, y_test))
```
Train on 60000 samples, validate on 10000 samples
Epoch 1/99
60000/60000 [==============================] - 1s - loss: 1.2643 - acc: 0.6854 - val_loss: 0.6699 - val_acc: 0.8520
Epoch 2/99
60000/60000 [==============================] - 0s - loss: 0.5697 - acc: 0.8599 - val_loss: 0.4638 - val_acc: 0.8839
Epoch 3/99
60000/60000 [==============================] - 0s - loss: 0.4490 - acc: 0.8814 - val_loss: 0.3969 - val_acc: 0.8955
Epoch 4/99
60000/60000 [==============================] - 0s - loss: 0.3977 - acc: 0.8917 - val_loss: 0.3615 - val_acc: 0.9024
Epoch 5/99
60000/60000 [==============================] - 0s - loss: 0.3674 - acc: 0.8981 - val_loss: 0.3388 - val_acc: 0.9072
Epoch 6/99
60000/60000 [==============================] - 0s - loss: 0.3467 - acc: 0.9027 - val_loss: 0.3231 - val_acc: 0.9101
Epoch 7/99
60000/60000 [==============================] - 0s - loss: 0.3310 - acc: 0.9069 - val_loss: 0.3107 - val_acc: 0.9134
Epoch 8/99
60000/60000 [==============================] - 0s - loss: 0.3183 - acc: 0.9105 - val_loss: 0.3005 - val_acc: 0.9167
Epoch 9/99
60000/60000 [==============================] - 0s - loss: 0.3076 - acc: 0.9130 - val_loss: 0.2924 - val_acc: 0.9186
Epoch 10/99
60000/60000 [==============================] - 0s - loss: 0.2982 - acc: 0.9162 - val_loss: 0.2848 - val_acc: 0.9202
Epoch 11/99
60000/60000 [==============================] - 0s - loss: 0.2901 - acc: 0.9188 - val_loss: 0.2792 - val_acc: 0.9206
Epoch 12/99
60000/60000 [==============================] - 0s - loss: 0.2827 - acc: 0.9203 - val_loss: 0.2719 - val_acc: 0.9229
Epoch 13/99
60000/60000 [==============================] - 0s - loss: 0.2759 - acc: 0.9220 - val_loss: 0.2664 - val_acc: 0.9238
Epoch 14/99
60000/60000 [==============================] - 0s - loss: 0.2698 - acc: 0.9241 - val_loss: 0.2612 - val_acc: 0.9253
Epoch 15/99
60000/60000 [==============================] - 0s - loss: 0.2640 - acc: 0.9260 - val_loss: 0.2567 - val_acc: 0.9264
Epoch 16/99
60000/60000 [==============================] - 0s - loss: 0.2586 - acc: 0.9267 - val_loss: 0.2519 - val_acc: 0.9285
Epoch 17/99
60000/60000 [==============================] - 0s - loss: 0.2536 - acc: 0.9284 - val_loss: 0.2478 - val_acc: 0.9280
Epoch 18/99
60000/60000 [==============================] - 0s - loss: 0.2487 - acc: 0.9300 - val_loss: 0.2427 - val_acc: 0.9311
Epoch 19/99
60000/60000 [==============================] - 0s - loss: 0.2441 - acc: 0.9315 - val_loss: 0.2392 - val_acc: 0.9334
Epoch 20/99
60000/60000 [==============================] - 0s - loss: 0.2395 - acc: 0.9329 - val_loss: 0.2352 - val_acc: 0.9348
Epoch 21/99
60000/60000 [==============================] - 0s - loss: 0.2355 - acc: 0.9338 - val_loss: 0.2314 - val_acc: 0.9351
Epoch 22/99
60000/60000 [==============================] - 0s - loss: 0.2313 - acc: 0.9348 - val_loss: 0.2278 - val_acc: 0.9350
Epoch 23/99
60000/60000 [==============================] - 0s - loss: 0.2271 - acc: 0.9366 - val_loss: 0.2257 - val_acc: 0.9364
Epoch 24/99
60000/60000 [==============================] - 0s - loss: 0.2233 - acc: 0.9375 - val_loss: 0.2213 - val_acc: 0.9365
Epoch 25/99
60000/60000 [==============================] - 0s - loss: 0.2196 - acc: 0.9386 - val_loss: 0.2176 - val_acc: 0.9382
Epoch 26/99
60000/60000 [==============================] - 0s - loss: 0.2161 - acc: 0.9393 - val_loss: 0.2141 - val_acc: 0.9388
Epoch 27/99
60000/60000 [==============================] - 0s - loss: 0.2124 - acc: 0.9403 - val_loss: 0.2108 - val_acc: 0.9396
Epoch 28/99
60000/60000 [==============================] - 0s - loss: 0.2092 - acc: 0.9412 - val_loss: 0.2085 - val_acc: 0.9403
Epoch 29/99
60000/60000 [==============================] - 0s - loss: 0.2059 - acc: 0.9423 - val_loss: 0.2053 - val_acc: 0.9405
Epoch 30/99
60000/60000 [==============================] - 0s - loss: 0.2029 - acc: 0.9432 - val_loss: 0.2025 - val_acc: 0.9413
Epoch 31/99
60000/60000 [==============================] - 0s - loss: 0.1997 - acc: 0.9438 - val_loss: 0.1996 - val_acc: 0.9425
Epoch 32/99
60000/60000 [==============================] - 0s - loss: 0.1967 - acc: 0.9449 - val_loss: 0.1968 - val_acc: 0.9435
Epoch 33/99
60000/60000 [==============================] - 0s - loss: 0.1938 - acc: 0.9456 - val_loss: 0.1949 - val_acc: 0.9446
Epoch 34/99
60000/60000 [==============================] - 0s - loss: 0.1910 - acc: 0.9468 - val_loss: 0.1924 - val_acc: 0.9453
Epoch 35/99
60000/60000 [==============================] - 0s - loss: 0.1883 - acc: 0.9474 - val_loss: 0.1897 - val_acc: 0.9456
Epoch 36/99
60000/60000 [==============================] - 0s - loss: 0.1856 - acc: 0.9481 - val_loss: 0.1874 - val_acc: 0.9465
Epoch 37/99
60000/60000 [==============================] - 0s - loss: 0.1831 - acc: 0.9490 - val_loss: 0.1845 - val_acc: 0.9474
Epoch 38/99
60000/60000 [==============================] - 0s - loss: 0.1806 - acc: 0.9500 - val_loss: 0.1830 - val_acc: 0.9477
Epoch 39/99
60000/60000 [==============================] - 0s - loss: 0.1781 - acc: 0.9500 - val_loss: 0.1809 - val_acc: 0.9487
Epoch 40/99
60000/60000 [==============================] - 0s - loss: 0.1757 - acc: 0.9508 - val_loss: 0.1781 - val_acc: 0.9488
Epoch 41/99
60000/60000 [==============================] - 0s - loss: 0.1734 - acc: 0.9513 - val_loss: 0.1762 - val_acc: 0.9494
Epoch 42/99
60000/60000 [==============================] - 0s - loss: 0.1711 - acc: 0.9518 - val_loss: 0.1748 - val_acc: 0.9498
Epoch 43/99
60000/60000 [==============================] - 0s - loss: 0.1689 - acc: 0.9525 - val_loss: 0.1730 - val_acc: 0.9501
Epoch 44/99
60000/60000 [==============================] - 0s - loss: 0.1667 - acc: 0.9538 - val_loss: 0.1715 - val_acc: 0.9512
Epoch 45/99
60000/60000 [==============================] - 0s - loss: 0.1646 - acc: 0.9537 - val_loss: 0.1686 - val_acc: 0.9509
Epoch 46/99
60000/60000 [==============================] - 0s - loss: 0.1625 - acc: 0.9539 - val_loss: 0.1669 - val_acc: 0.9518
Epoch 47/99
60000/60000 [==============================] - 0s - loss: 0.1606 - acc: 0.9549 - val_loss: 0.1658 - val_acc: 0.9521
Epoch 48/99
60000/60000 [==============================] - 0s - loss: 0.1586 - acc: 0.9550 - val_loss: 0.1630 - val_acc: 0.9532
Epoch 49/99
60000/60000 [==============================] - 0s - loss: 0.1567 - acc: 0.9557 - val_loss: 0.1618 - val_acc: 0.9533
Epoch 50/99
60000/60000 [==============================] - 1s - loss: 0.1548 - acc: 0.9564 - val_loss: 0.1606 - val_acc: 0.9536
Epoch 51/99
60000/60000 [==============================] - 1s - loss: 0.1530 - acc: 0.9566 - val_loss: 0.1584 - val_acc: 0.9548
Epoch 52/99
60000/60000 [==============================] - 1s - loss: 0.1512 - acc: 0.9570 - val_loss: 0.1575 - val_acc: 0.9550
Epoch 53/99
60000/60000 [==============================] - 1s - loss: 0.1494 - acc: 0.9576 - val_loss: 0.1559 - val_acc: 0.9555
Epoch 54/99
60000/60000 [==============================] - 1s - loss: 0.1477 - acc: 0.9582 - val_loss: 0.1538 - val_acc: 0.9563
Epoch 55/99
60000/60000 [==============================] - 1s - loss: 0.1460 - acc: 0.9586 - val_loss: 0.1536 - val_acc: 0.9556
Epoch 56/99
60000/60000 [==============================] - 1s - loss: 0.1444 - acc: 0.9589 - val_loss: 0.1513 - val_acc: 0.9570
Epoch 57/99
60000/60000 [==============================] - 1s - loss: 0.1428 - acc: 0.9592 - val_loss: 0.1496 - val_acc: 0.9573
Epoch 58/99
60000/60000 [==============================] - 1s - loss: 0.1412 - acc: 0.9598 - val_loss: 0.1487 - val_acc: 0.9579
Epoch 59/99
60000/60000 [==============================] - 1s - loss: 0.1397 - acc: 0.9605 - val_loss: 0.1471 - val_acc: 0.9577
Epoch 60/99
60000/60000 [==============================] - 1s - loss: 0.1381 - acc: 0.9609 - val_loss: 0.1462 - val_acc: 0.9595
Epoch 61/99
60000/60000 [==============================] - 1s - loss: 0.1367 - acc: 0.9613 - val_loss: 0.1446 - val_acc: 0.9587
Epoch 62/99
60000/60000 [==============================] - 1s - loss: 0.1352 - acc: 0.9618 - val_loss: 0.1438 - val_acc: 0.9601
Epoch 63/99
60000/60000 [==============================] - 1s - loss: 0.1337 - acc: 0.9621 - val_loss: 0.1430 - val_acc: 0.9606
Epoch 64/99
60000/60000 [==============================] - 1s - loss: 0.1325 - acc: 0.9627 - val_loss: 0.1418 - val_acc: 0.9609
Epoch 65/99
60000/60000 [==============================] - 1s - loss: 0.1312 - acc: 0.9628 - val_loss: 0.1400 - val_acc: 0.9601
Epoch 66/99
60000/60000 [==============================] - 1s - loss: 0.1298 - acc: 0.9633 - val_loss: 0.1395 - val_acc: 0.9614
Epoch 67/99
60000/60000 [==============================] - 1s - loss: 0.1285 - acc: 0.9637 - val_loss: 0.1380 - val_acc: 0.9610
Epoch 68/99
60000/60000 [==============================] - 1s - loss: 0.1273 - acc: 0.9644 - val_loss: 0.1369 - val_acc: 0.9624
Epoch 69/99
60000/60000 [==============================] - 1s - loss: 0.1261 - acc: 0.9642 - val_loss: 0.1360 - val_acc: 0.9621
Epoch 70/99
60000/60000 [==============================] - 1s - loss: 0.1249 - acc: 0.9647 - val_loss: 0.1346 - val_acc: 0.9624
Epoch 71/99
60000/60000 [==============================] - 1s - loss: 0.1238 - acc: 0.9650 - val_loss: 0.1340 - val_acc: 0.9634
Epoch 72/99
60000/60000 [==============================] - 1s - loss: 0.1227 - acc: 0.9655 - val_loss: 0.1328 - val_acc: 0.9633
Epoch 73/99
60000/60000 [==============================] - 1s - loss: 0.1215 - acc: 0.9657 - val_loss: 0.1319 - val_acc: 0.9631
Epoch 74/99
60000/60000 [==============================] - 1s - loss: 0.1204 - acc: 0.9657 - val_loss: 0.1313 - val_acc: 0.9643
Epoch 75/99
60000/60000 [==============================] - 1s - loss: 0.1194 - acc: 0.9664 - val_loss: 0.1303 - val_acc: 0.9646
Epoch 76/99
60000/60000 [==============================] - 1s - loss: 0.1183 - acc: 0.9668 - val_loss: 0.1297 - val_acc: 0.9643
Epoch 77/99
60000/60000 [==============================] - 1s - loss: 0.1173 - acc: 0.9673 - val_loss: 0.1286 - val_acc: 0.9648
Epoch 78/99
60000/60000 [==============================] - 1s - loss: 0.1162 - acc: 0.9673 - val_loss: 0.1283 - val_acc: 0.9645
Epoch 79/99
60000/60000 [==============================] - 1s - loss: 0.1154 - acc: 0.9674 - val_loss: 0.1270 - val_acc: 0.9649
Epoch 80/99
60000/60000 [==============================] - 1s - loss: 0.1143 - acc: 0.9679 - val_loss: 0.1263 - val_acc: 0.9653
Epoch 81/99
60000/60000 [==============================] - 1s - loss: 0.1134 - acc: 0.9681 - val_loss: 0.1255 - val_acc: 0.9654
Epoch 82/99
60000/60000 [==============================] - 1s - loss: 0.1124 - acc: 0.9686 - val_loss: 0.1247 - val_acc: 0.9651
Epoch 83/99
60000/60000 [==============================] - 1s - loss: 0.1114 - acc: 0.9686 - val_loss: 0.1249 - val_acc: 0.9653
Epoch 84/99
60000/60000 [==============================] - 1s - loss: 0.1107 - acc: 0.9689 - val_loss: 0.1236 - val_acc: 0.9658
Epoch 85/99
60000/60000 [==============================] - 1s - loss: 0.1097 - acc: 0.9695 - val_loss: 0.1231 - val_acc: 0.9658
Epoch 86/99
60000/60000 [==============================] - 1s - loss: 0.1089 - acc: 0.9694 - val_loss: 0.1219 - val_acc: 0.9669
Epoch 87/99
60000/60000 [==============================] - 1s - loss: 0.1080 - acc: 0.9702 - val_loss: 0.1220 - val_acc: 0.9661
Epoch 88/99
60000/60000 [==============================] - 1s - loss: 0.1072 - acc: 0.9702 - val_loss: 0.1208 - val_acc: 0.9672
Epoch 89/99
60000/60000 [==============================] - 1s - loss: 0.1063 - acc: 0.9704 - val_loss: 0.1203 - val_acc: 0.9671
Epoch 90/99
60000/60000 [==============================] - 1s - loss: 0.1056 - acc: 0.9704 - val_loss: 0.1193 - val_acc: 0.9678
Epoch 91/99
60000/60000 [==============================] - 1s - loss: 0.1047 - acc: 0.9708 - val_loss: 0.1192 - val_acc: 0.9673
Epoch 92/99
60000/60000 [==============================] - 1s - loss: 0.1039 - acc: 0.9712 - val_loss: 0.1182 - val_acc: 0.9669
Epoch 93/99
60000/60000 [==============================] - 1s - loss: 0.1032 - acc: 0.9711 - val_loss: 0.1179 - val_acc: 0.9673
Epoch 94/99
60000/60000 [==============================] - 1s - loss: 0.1024 - acc: 0.9715 - val_loss: 0.1172 - val_acc: 0.9674
Epoch 95/99
60000/60000 [==============================] - 1s - loss: 0.1017 - acc: 0.9714 - val_loss: 0.1169 - val_acc: 0.9678
Epoch 96/99
60000/60000 [==============================] - 1s - loss: 0.1010 - acc: 0.9719 - val_loss: 0.1160 - val_acc: 0.9676
Epoch 97/99
60000/60000 [==============================] - 1s - loss: 0.1003 - acc: 0.9720 - val_loss: 0.1155 - val_acc: 0.9676
Epoch 98/99
60000/60000 [==============================] - 1s - loss: 0.0995 - acc: 0.9724 - val_loss: 0.1160 - val_acc: 0.9675
Epoch 99/99
60000/60000 [==============================] - 1s - loss: 0.0989 - acc: 0.9725 - val_loss: 0.1150 - val_acc: 0.9679
<keras.callbacks.History at 0x12e5bdeb8>
```python
# 96% accuracy after first 99 epochs with Relu and Cross-entropy
nn3.evaluate(X_test, y_test)
```
8800/10000 [=========================>....] - ETA: 0s
[0.11497668759040534, 0.96789999999999998]
```python
nn3.fit(X_train, y_train, batch_size=128, epochs=99, verbose=1, validation_data=(X_test, y_test))
```
Train on 60000 samples, validate on 10000 samples
Epoch 1/99
60000/60000 [==============================] - 1s - loss: 0.0981 - acc: 0.9723 - val_loss: 0.1141 - val_acc: 0.9674
Epoch 2/99
60000/60000 [==============================] - 1s - loss: 0.0976 - acc: 0.9727 - val_loss: 0.1138 - val_acc: 0.9679
Epoch 3/99
60000/60000 [==============================] - 1s - loss: 0.0968 - acc: 0.9731 - val_loss: 0.1129 - val_acc: 0.9679
Epoch 4/99
60000/60000 [==============================] - 1s - loss: 0.0962 - acc: 0.9731 - val_loss: 0.1129 - val_acc: 0.9676
Epoch 5/99
60000/60000 [==============================] - 1s - loss: 0.0955 - acc: 0.9735 - val_loss: 0.1120 - val_acc: 0.9677
Epoch 6/99
60000/60000 [==============================] - 1s - loss: 0.0949 - acc: 0.9735 - val_loss: 0.1119 - val_acc: 0.9679
Epoch 7/99
60000/60000 [==============================] - 1s - loss: 0.0943 - acc: 0.9738 - val_loss: 0.1108 - val_acc: 0.9681
Epoch 8/99
60000/60000 [==============================] - 1s - loss: 0.0936 - acc: 0.9741 - val_loss: 0.1108 - val_acc: 0.9673
Epoch 9/99
60000/60000 [==============================] - 1s - loss: 0.0931 - acc: 0.9744 - val_loss: 0.1102 - val_acc: 0.9679
Epoch 10/99
60000/60000 [==============================] - 1s - loss: 0.0924 - acc: 0.9742 - val_loss: 0.1102 - val_acc: 0.9678
Epoch 11/99
60000/60000 [==============================] - 1s - loss: 0.0919 - acc: 0.9744 - val_loss: 0.1097 - val_acc: 0.9680
Epoch 12/99
60000/60000 [==============================] - 1s - loss: 0.0913 - acc: 0.9745 - val_loss: 0.1090 - val_acc: 0.9679
Epoch 13/99
60000/60000 [==============================] - 1s - loss: 0.0907 - acc: 0.9748 - val_loss: 0.1089 - val_acc: 0.9683
Epoch 14/99
60000/60000 [==============================] - 1s - loss: 0.0902 - acc: 0.9750 - val_loss: 0.1085 - val_acc: 0.9685
Epoch 15/99
60000/60000 [==============================] - 1s - loss: 0.0895 - acc: 0.9754 - val_loss: 0.1082 - val_acc: 0.9684
Epoch 16/99
60000/60000 [==============================] - 1s - loss: 0.0890 - acc: 0.9755 - val_loss: 0.1072 - val_acc: 0.9688
Epoch 17/99
60000/60000 [==============================] - 1s - loss: 0.0885 - acc: 0.9757 - val_loss: 0.1070 - val_acc: 0.9690
Epoch 18/99
60000/60000 [==============================] - 1s - loss: 0.0879 - acc: 0.9756 - val_loss: 0.1065 - val_acc: 0.9692
Epoch 19/99
60000/60000 [==============================] - 1s - loss: 0.0874 - acc: 0.9758 - val_loss: 0.1064 - val_acc: 0.9693
Epoch 20/99
60000/60000 [==============================] - 1s - loss: 0.0869 - acc: 0.9758 - val_loss: 0.1058 - val_acc: 0.9689
Epoch 21/99
60000/60000 [==============================] - 1s - loss: 0.0863 - acc: 0.9761 - val_loss: 0.1054 - val_acc: 0.9692
Epoch 22/99
60000/60000 [==============================] - 1s - loss: 0.0858 - acc: 0.9764 - val_loss: 0.1052 - val_acc: 0.9690
Epoch 23/99
60000/60000 [==============================] - 1s - loss: 0.0854 - acc: 0.9763 - val_loss: 0.1045 - val_acc: 0.9693
Epoch 24/99
60000/60000 [==============================] - 1s - loss: 0.0848 - acc: 0.9765 - val_loss: 0.1044 - val_acc: 0.9695
Epoch 25/99
60000/60000 [==============================] - 1s - loss: 0.0844 - acc: 0.9767 - val_loss: 0.1042 - val_acc: 0.9691
Epoch 26/99
60000/60000 [==============================] - 1s - loss: 0.0839 - acc: 0.9767 - val_loss: 0.1045 - val_acc: 0.9687
Epoch 27/99
60000/60000 [==============================] - 1s - loss: 0.0834 - acc: 0.9767 - val_loss: 0.1035 - val_acc: 0.9696
Epoch 28/99
60000/60000 [==============================] - 1s - loss: 0.0829 - acc: 0.9772 - val_loss: 0.1035 - val_acc: 0.9700
Epoch 29/99
60000/60000 [==============================] - 1s - loss: 0.0824 - acc: 0.9771 - val_loss: 0.1026 - val_acc: 0.9698
Epoch 30/99
60000/60000 [==============================] - 1s - loss: 0.0819 - acc: 0.9773 - val_loss: 0.1025 - val_acc: 0.9698
Epoch 31/99
60000/60000 [==============================] - 1s - loss: 0.0815 - acc: 0.9774 - val_loss: 0.1021 - val_acc: 0.9697
Epoch 32/99
60000/60000 [==============================] - 1s - loss: 0.0810 - acc: 0.9777 - val_loss: 0.1017 - val_acc: 0.9699
Epoch 33/99
60000/60000 [==============================] - 1s - loss: 0.0806 - acc: 0.9776 - val_loss: 0.1016 - val_acc: 0.9703
Epoch 34/99
60000/60000 [==============================] - 1s - loss: 0.0801 - acc: 0.9781 - val_loss: 0.1011 - val_acc: 0.9703
Epoch 35/99
60000/60000 [==============================] - 1s - loss: 0.0797 - acc: 0.9779 - val_loss: 0.1011 - val_acc: 0.9704
Epoch 36/99
60000/60000 [==============================] - 1s - loss: 0.0793 - acc: 0.9783 - val_loss: 0.1006 - val_acc: 0.9701
Epoch 37/99
60000/60000 [==============================] - 1s - loss: 0.0789 - acc: 0.9782 - val_loss: 0.1001 - val_acc: 0.9706
Epoch 38/99
60000/60000 [==============================] - 1s - loss: 0.0785 - acc: 0.9783 - val_loss: 0.1003 - val_acc: 0.9707
Epoch 39/99
60000/60000 [==============================] - 1s - loss: 0.0779 - acc: 0.9785 - val_loss: 0.1005 - val_acc: 0.9699
Epoch 40/99
60000/60000 [==============================] - 1s - loss: 0.0776 - acc: 0.9784 - val_loss: 0.1003 - val_acc: 0.9702
Epoch 41/99
60000/60000 [==============================] - 1s - loss: 0.0772 - acc: 0.9785 - val_loss: 0.0990 - val_acc: 0.9709
Epoch 42/99
60000/60000 [==============================] - 1s - loss: 0.0768 - acc: 0.9790 - val_loss: 0.0989 - val_acc: 0.9707
Epoch 43/99
60000/60000 [==============================] - 1s - loss: 0.0764 - acc: 0.9789 - val_loss: 0.0986 - val_acc: 0.9706
Epoch 44/99
60000/60000 [==============================] - 1s - loss: 0.0759 - acc: 0.9790 - val_loss: 0.0985 - val_acc: 0.9710
Epoch 45/99
60000/60000 [==============================] - 1s - loss: 0.0755 - acc: 0.9793 - val_loss: 0.0982 - val_acc: 0.9709
Epoch 46/99
60000/60000 [==============================] - 1s - loss: 0.0752 - acc: 0.9793 - val_loss: 0.0978 - val_acc: 0.9707
Epoch 47/99
60000/60000 [==============================] - 1s - loss: 0.0748 - acc: 0.9794 - val_loss: 0.0979 - val_acc: 0.9709
Epoch 48/99
60000/60000 [==============================] - 1s - loss: 0.0744 - acc: 0.9794 - val_loss: 0.0976 - val_acc: 0.9711
Epoch 49/99
60000/60000 [==============================] - 1s - loss: 0.0740 - acc: 0.9797 - val_loss: 0.0977 - val_acc: 0.9704
Epoch 50/99
60000/60000 [==============================] - 1s - loss: 0.0736 - acc: 0.9798 - val_loss: 0.0966 - val_acc: 0.9719
Epoch 51/99
60000/60000 [==============================] - 1s - loss: 0.0733 - acc: 0.9798 - val_loss: 0.0966 - val_acc: 0.9710
Epoch 52/99
60000/60000 [==============================] - 1s - loss: 0.0729 - acc: 0.9800 - val_loss: 0.0966 - val_acc: 0.9709
Epoch 53/99
60000/60000 [==============================] - 1s - loss: 0.0726 - acc: 0.9801 - val_loss: 0.0964 - val_acc: 0.9712
Epoch 54/99
60000/60000 [==============================] - 1s - loss: 0.0722 - acc: 0.9803 - val_loss: 0.0962 - val_acc: 0.9712
Epoch 55/99
60000/60000 [==============================] - 1s - loss: 0.0718 - acc: 0.9803 - val_loss: 0.0956 - val_acc: 0.9717
Epoch 56/99
60000/60000 [==============================] - 1s - loss: 0.0715 - acc: 0.9806 - val_loss: 0.0958 - val_acc: 0.9714
Epoch 57/99
60000/60000 [==============================] - 1s - loss: 0.0711 - acc: 0.9804 - val_loss: 0.0954 - val_acc: 0.9719
Epoch 58/99
60000/60000 [==============================] - 1s - loss: 0.0707 - acc: 0.9807 - val_loss: 0.0955 - val_acc: 0.9719
Epoch 59/99
60000/60000 [==============================] - 1s - loss: 0.0704 - acc: 0.9808 - val_loss: 0.0948 - val_acc: 0.9718
Epoch 60/99
60000/60000 [==============================] - 1s - loss: 0.0700 - acc: 0.9807 - val_loss: 0.0946 - val_acc: 0.9715
Epoch 61/99
60000/60000 [==============================] - 1s - loss: 0.0698 - acc: 0.9811 - val_loss: 0.0943 - val_acc: 0.9715
Epoch 62/99
60000/60000 [==============================] - 1s - loss: 0.0694 - acc: 0.9811 - val_loss: 0.0941 - val_acc: 0.9721
Epoch 63/99
60000/60000 [==============================] - 1s - loss: 0.0690 - acc: 0.9814 - val_loss: 0.0942 - val_acc: 0.9721
Epoch 64/99
60000/60000 [==============================] - 1s - loss: 0.0687 - acc: 0.9815 - val_loss: 0.0936 - val_acc: 0.9725
Epoch 65/99
60000/60000 [==============================] - 1s - loss: 0.0684 - acc: 0.9814 - val_loss: 0.0942 - val_acc: 0.9724
Epoch 66/99
60000/60000 [==============================] - 1s - loss: 0.0681 - acc: 0.9817 - val_loss: 0.0935 - val_acc: 0.9719
Epoch 67/99
60000/60000 [==============================] - 1s - loss: 0.0678 - acc: 0.9818 - val_loss: 0.0932 - val_acc: 0.9724
Epoch 68/99
60000/60000 [==============================] - 1s - loss: 0.0674 - acc: 0.9817 - val_loss: 0.0931 - val_acc: 0.9725
Epoch 69/99
60000/60000 [==============================] - 1s - loss: 0.0671 - acc: 0.9817 - val_loss: 0.0933 - val_acc: 0.9718
Epoch 70/99
60000/60000 [==============================] - 1s - loss: 0.0668 - acc: 0.9822 - val_loss: 0.0931 - val_acc: 0.9727
Epoch 71/99
60000/60000 [==============================] - 1s - loss: 0.0665 - acc: 0.9821 - val_loss: 0.0924 - val_acc: 0.9724
Epoch 72/99
60000/60000 [==============================] - 1s - loss: 0.0662 - acc: 0.9823 - val_loss: 0.0924 - val_acc: 0.9726
Epoch 73/99
60000/60000 [==============================] - 1s - loss: 0.0659 - acc: 0.9821 - val_loss: 0.0922 - val_acc: 0.9726
Epoch 74/99
60000/60000 [==============================] - 1s - loss: 0.0656 - acc: 0.9823 - val_loss: 0.0919 - val_acc: 0.9725
Epoch 75/99
60000/60000 [==============================] - 1s - loss: 0.0654 - acc: 0.9824 - val_loss: 0.0920 - val_acc: 0.9727
Epoch 76/99
60000/60000 [==============================] - 1s - loss: 0.0650 - acc: 0.9825 - val_loss: 0.0917 - val_acc: 0.9727
Epoch 77/99
60000/60000 [==============================] - 1s - loss: 0.0647 - acc: 0.9825 - val_loss: 0.0915 - val_acc: 0.9727
Epoch 78/99
60000/60000 [==============================] - 1s - loss: 0.0644 - acc: 0.9828 - val_loss: 0.0917 - val_acc: 0.9725
Epoch 79/99
60000/60000 [==============================] - 1s - loss: 0.0641 - acc: 0.9827 - val_loss: 0.0911 - val_acc: 0.9728
Epoch 80/99
5888/60000 [=>............................] - ETA: 0s - loss: 0.0604 - acc: 0.9845
```python
# 97% accuracy after another 99 epochs with Relu and Cross-entropy
nn3.evaluate(X_test, y_test)
```
## Summary
With a fairly simple shallow network we have done fairly well on the [MNIST](http://yann.lecun.com/exdb/mnist/) handwritten digit classification problem, reaching roughly 97% test accuracy after a further 99 epochs with ReLU activations and cross-entropy loss.
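For context, here is a minimal, hypothetical sketch of the kind of shallow ReLU/cross-entropy Keras model evaluated above; the hidden-layer width, optimizer and input shape are assumptions for illustration only, since `nn3` itself is defined earlier in the notebook.
```python
# Hypothetical sketch only: the real nn3 is defined earlier in the notebook.
from keras.models import Sequential
from keras.layers import Dense

nn3_sketch = Sequential([
    Dense(64, activation='relu', input_shape=(784,)),  # assumed hidden width and input shape
    Dense(10, activation='softmax'),                    # 10 digit classes
])
nn3_sketch.compile(optimizer='sgd',                     # optimizer choice is an assumption
                   loss='categorical_crossentropy',
                   metrics=['accuracy'])
```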
Last update September 22, 2017
Source notebook: `Week_5/NBB_Deep_Learning_Shallow_Neural_Networks.ipynb`, repository Chau-Xochitl/INFO_7390, MIT license.
# Mackey-Glass Time Series Prediction Using Look Up Table
## Introduction
The purpose of this work is to create a fuzzy system for time series prediction of the Mackey-Glass equation, which is a delay differential equation:
\begin{equation}
\frac{dx(t)}{dt} = \beta \frac{x(t-\tau)}{1+x^{n}(t-\tau)}-\gamma x(t)
\end{equation}
The goal is, at time $t_1$, to predict $x(t_1+k)$. We choose $\tau = 30$ (for chaotic behaviour $\tau$ should be greater than 17) and set $n \approx 10$, because that is the value where chaos appears. The remaining two parameters are set as follows: $\beta = 0.2$ and $\gamma = 0.1$.
```python
# imports
from math import pow
import numpy as np
import matplotlib.pyplot as plt
```
```python
# Mackey-Glass differential eq. parameters
Beta = 0.2
Gamma = 0.1
Tau = 30
n = 10
# initializing an array of random values for numerical simulation
y = np.random.rand(Tau+1)
# length of our simulation
samples = 600
```
## Implementation of Mackey-Glass
We can approximate function (1) like this:
\begin{equation}
x(t+k)\approx x(t)+k\frac{dx(t)}{dt}
\end{equation}
We combine equations (1) and (2) and set $k=1$, since we advance sample by sample; hence the following equation is our approximation:
\begin{equation}
x(t+1)= x(t)+\beta \frac{x(t-\tau)}{1+x^{n}(t-\tau)}-\gamma x(t)
\end{equation}
Substituting our constants into equation (3):
\begin{equation}
x(t+1)=x(t)+ \frac{0.2x(t-30)}{1+x^{10}(t-30)}-0.1 x(t)
\end{equation}
Which is the final form of our equation.
```python
# calculating Mackey-Glass eq. for samples
for _ in range(samples):
value=y[-1]+Beta*y[-1-Tau]/(1+pow(y[-1-Tau],n))-Gamma*y[-1]
y = np.r_[y,value]
# we don't need those initial random values
y=y[-samples:]
# lets plot it
plt.figure()
plt.plot(y,label='Mackey-Glass')
plt.legend()
plt.xlabel('samples [-]')
plt.ylabel('x')
plt.grid(True)
plt.show()
```
## Design of a fuzzy system
The goal is to create a fuzzy system purely from input-output data. We use only the first 300 values of $x$. Here $n$ represents the number of statements in a fuzzy IF-THEN rule. We therefore build input-output pairs from the input set alone, which will generate the fuzzy rules for the look-up-table system.
\begin{align*}
[x(0),x(1),...,x(n-1);x(n)] \\
[x(1),x(2),...,x(n);x(n+1)] \\
. \\
. \\
. \\
[x(299-n),x(299-n+1),...,x(298);x(299)] \\
\end{align*}
We choose that our system is Look up table with product inference engine, triangular fuzzyfier and center average defuzzyfier:
\begin{equation}
f(x)=\frac{\sum_{l=1}^M \overline{y}^l(\prod_{i=1}^n \mu_{A_i^l}(x))}{\sum_{l=1}^M (\prod_{i=1}^n \mu_{A_i^l}(x))}
\end{equation}
Where M is number of Fuzzy IF-THEN rules and n is number of fuzzy statements in one rule.
With fuzzy IF-THEN rules looking like this:
\begin{equation}
IF \bigwedge_{i=0}^{n-1} ( x(k-n) \ is \ A_i^l )\ THEN \ x(k+1) \ is \ B^l
\end{equation}
And we use triangular membership function:
\begin{equation}
\mu_{A_i^l}(x) = \left\{ \begin{array}{r@{\quad}c}
1-\frac{|x-c_{A_i^l}|}{D}, & x \in (c_{A_i^l}-D,c_{A_i^l}+D) \\
0, & else \\ \end{array} \right.
\end{equation}
```python
# Look Up Table parameters
TrainingData = y[0:300] #Training data for fuzzy system, we use only first 300 samples
TestingData = y #We use all of our data for testing
NumStatements = 5 #Number of fuzzy-statements in fuzzy IF-THEN rule
NumFuzzySets = 7 #Number of Fuzzy Sets in input and output area
# define our membership function
def triangle(x,center,D):
if x<=(center-D):
return 0
if x>=(center+D):
return 0
else:
return -(1/D)*abs(x-center)+1
```
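As a quick illustration (this check is not part of the original notebook), the membership value is 1 at the centre, falls off linearly, and is 0 outside the support:
```python
# demo values only; the actual D is computed from the training data further below
demo_center, demo_D = 0.5, 0.2
print(triangle(0.5, demo_center, demo_D)) # 1.0 at the centre
print(triangle(0.6, demo_center, demo_D)) # 0.5 halfway to the edge
print(triangle(0.8, demo_center, demo_D)) # 0.0 outside the support
```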
```python
#Creating an array of fuzzy-sets centers
Minimum = min(TrainingData)
Maximum = max(TrainingData)
D = (Maximum-Minimum)/(NumFuzzySets-1)
FuzzySetsCenters=[]
for i in range(NumFuzzySets-1):
FuzzySetsCenters.append(Minimum+i*D)
FuzzySetsCenters.append(Maximum)
```
```python
# creating INPUT-OUTPUT pairs
RulesData = []
for i in range(len(TrainingData)-NumStatements):
rule = []
for j in range(NumStatements):
rule.append(TrainingData[i+j])
RulesData.append(rule)
# fuzzy IF-THEN rules
FuzzyRules = []
for i in range(len(RulesData)):
rulefp = []
for j in range(NumStatements):
fp = []
for k in range(NumFuzzySets):
_ = triangle(RulesData[i][j],FuzzySetsCenters[k],D)
fp.append(_)
rulefp.append(FuzzySetsCenters[fp.index(max(fp))])
FuzzyRules.append(rulefp)
```
Since the number of input-output pairs is usually large, and each pair generates a single rule, there is a high chance that some rules conflict, meaning rules with the same IF part but different THEN parts. Therefore, we assign a degree to each rule:
\begin{equation}
D(Ru^{(l)})=\mu_{B^{l*}}(y_0^p)\prod_{i=1}^{n}\mu_{A_i^{l*}}(x_{0i}^p)
\end{equation}
Hence we keep the non-conflicting fuzzy rules, and from each group of conflicting rules we keep the one with the highest degree. The chosen fuzzy rules form our fuzzy rule base.
```python
# function, which assign degree to a fuzzy IF-THEN rule
def DegreeRule(RuleData,FuzzyRule):
value = 1
for i in range(len(RuleData)):
value=value*triangle(RuleData[i],FuzzyRule[i],D)
return value
#Assign degrees to fuzzy IF-THEN rules and deleting conflict ones with lower degree
FRules = FuzzyRules.copy()
for i in range(len(FuzzyRules)):
for j in range(i,len(FuzzyRules)):
if FuzzyRules[i][:-1] == FuzzyRules[j][:-1] and not i==j:
Degree1 = DegreeRule(RulesData[i],FuzzyRules[i])
Degree2 = DegreeRule(RulesData[j],FuzzyRules[j])
if Degree1 > Degree2:
FRules[j] = None
else:
FRules[i] = None
else:
pass
#New, non-conflict fuzzy IF-THEN rules, our fuzzy system base
FuzzyRules=[]
for i in range(len(FRules)):
if not FRules[i] == None:
FuzzyRules.append(FRules[i])
```
## Look Up Table prediction results
We used the first 300 samples to create our Look Up Table; now we use the resulting system to compute the whole interval, including our testing data. Once again, here is the equation of our fuzzy system:
\begin{equation}
f(x)=\frac{\sum_{l=1}^M \overline{y}^l(\prod_{i=1}^n \mu_{A_i^l}(x))}{\sum_{l=1}^M (\prod_{i=1}^n \mu_{A_i^l}(x))}
\end{equation}
Which we will now write as a function in python:
```python
# function of our fuzzy system
def MamdaniValue(X,FuzzyRules,D):
pom1 = 0
pom2 = 0
for FuzzyRule in FuzzyRules:
pom3 = 1
for i in range(len(X)):
pom3 = pom3*triangle(X[i],FuzzyRule[i],D)
pom1=pom1+FuzzyRule[-1]*pom3
pom2=pom2+pom3
try:
return (pom1/pom2)
except ZeroDivisionError:
return 0
```
```python
# Calculating predicted value, real Mackey-Glass and absolute deviation
TestingInputs = []
for i in range(len(TestingData)-NumStatements):
vector = []
for j in range(NumStatements-1):
vector.append(TestingData[i+j])
TestingInputs.append(vector)
Predicted=[]
for i in range(NumStatements):
    Predicted.append(TestingData[i]) # Insert values to keep the arrays the same length
for j in range(len(TestingData)-NumStatements):
Predicted.append(MamdaniValue(TestingData[j:j+NumStatements-1],FuzzyRules,D))
Deviation=[]
for i in range(len(TestingData)):
Deviation.append(abs(TestingData[i]-Predicted[i]))
```
```python
# ploting predicted value, real Mackey-Glass and absolute deviation
plt.figure(dpi=80, figsize=(12,8))
plt.plot(TestingData,'b',label='Mackey-Glass')
plt.plot(Predicted,'k',label='Mackey-Glass - LUT')
plt.plot(Deviation,'r',label='Absolute Deviation')
plt.legend()
plt.xlabel('samples [-]')
plt.ylabel('x')
plt.grid(True)
plt.show()
```
```python
```
Source notebook: `courses/E375004/ai_chapter1/fuzzymackeyglass.ipynb`, repository indigo40123/Python-CTU, MIT license.
# 17 Importance Sampling
The Monte Carlo integration procedure with uniform sampling
$$
I = \int_{a_1}^{b_1} \cdots \int_{a_M}^{b_M} f(x_1, \dots, x_M) dx_1\cdots dx_M \approx V \langle f \rangle_\text{mc}
$$
works well with monotonic and smooth integrands $f$. However,
* sharply peaked
* oscillating
integrands are problematic (like for any integration method).
Oscillations increase fluctuations in random sampling and require more sampling.
Sharp peaks are especially annoying for *uniform sampling* because most of the samples will come from regions outside the peak and not contribute to the integral.
## Importance sampling method
**Importance sampling** is a method to turn a non-smooth $f(x)$ into a smoother $g(x)$ by separating the integrand
$$
\int_a^b \! f(x) dx = \int_a^b \! \frac{f(x)}{P(x)} P(x) dx = \int_a^b \! g(x) P(x) dx, \quad g(x) := \frac{f(x)}{P(x)}
$$
The new function $g(x)$ is supposed to be smoother than $f(x)$.
$P(x)$ is a known probability distribution and we are again calculating a *weighted average*, this time over our new function $g(x)$. The trick will be to generate samples according to $P(x)$ and then calculate the average $\langle g \rangle_\text{mc}$:
$$
\int_a^b \! f(x) dx = \langle g \rangle_\text{mc} = \frac{1}{N} \sum_{i=1}^N g(x_i) = \frac{1}{N} \sum_{i=1}^N \frac{f(x_i)}{P(x_i)}
$$
* The average $\langle g \rangle_\text{mc}$ *must* be calculated for samples $x \sim P$.
* The probability distribution $P(x)$ should be chosen so that the modified integrand $g(x) = f(x)/P(x)$ becomes as smooth as possible (ideally, close to uniform).
* The integration volume does not explicitly appear in the importance sampling equation. It is taken into account implicitly by the sampling process.
* Importance sampling with the uniform distribution $P(x) = (b-a)^{-1}$ reduces to the *weighted average method*.
* Importance sampling generalizes to $M$ dimensions just as standard MC sampling.
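To make the recipe concrete, here is a minimal, generic sketch of an importance-sampling estimator; the helper names and the standard-error estimate are illustrative additions, not part of the original notebook.
```python
import numpy as np

rng = np.random.default_rng(42)

def importance_sampling(f, sample_P, pdf_P, N=10_000):
    """Estimate the integral of f from N samples x ~ P."""
    x = sample_P(N)
    g = f(x) / pdf_P(x)                          # g(x) = f(x) / P(x)
    return g.mean(), g.std(ddof=1) / np.sqrt(N)  # MC estimate and its standard error

# example: integral of cos(x) exp(-x) over [0, inf) with P(x) = exp(-x)
estimate, err = importance_sampling(
    f=lambda x: np.cos(x) * np.exp(-x),
    sample_P=lambda N: rng.exponential(scale=1.0, size=N),
    pdf_P=lambda x: np.exp(-x),
)
print(estimate, err)
```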
## Example
The integral
$$
\int_a^b \! f(x) dx = \int_0^\infty \! \cos x \, e^{-x} dx = \frac{1}{2}
$$
oscillates and is strongly peaked at the origin.
```python
import numpy as np
import matplotlib.pyplot as plt
def f(x):
return np.cos(x) * np.exp(-x)
X = np.linspace(0, 100, 400)
plt.plot(X, f(X))
plt.xlabel("x"); plt.ylabel("f(x)");
```
Oscillations are not so problematic but if we want to integrate "to infinity" then we really should take care of the peak at the origin.
Choose the exponential distribution
$$
P(x) = e^{-x}
$$
which is already normalized on the interval $[0, +\infty[$.
We then get
$$
g(x) = \frac{f(x)}{P(x)} = \cos x.
$$
```python
def g(x):
return np.cos(x)
```
```python
plt.plot(X, g(X), X, np.exp(-X))
plt.legend((r"$g(x)$", r"$P(x)$"))
plt.xlabel("x");
```
Note that the exponential distribution stretches to infinity. However, it is very unlikely to draw samples for large $x$ values.
But how do we sample from the exponential distribution?
1. Look through the docs for the [distributions](https://numpy.org/doc/stable/reference/random/generator.html#distributions) that [numpy's Random Generator](https://numpy.org/doc/stable/reference/random/generator.html) can provide, namely the [exponential distribution](https://numpy.org/doc/stable/reference/random/generated/numpy.random.Generator.exponential.html#numpy.random.Generator.exponential).
2. Transform samples from the *uniform* distribution to the exponential distribution.
### Exponential distribution with `numpy.random.Generator.exponential`
If we have well-tested, documented, and high performance code then we should know how to use it:
```python
import numpy as np
rng = np.random.default_rng()
```
```python
N = 1000
x = rng.exponential(scale=1.0, size=N)
fMC = np.mean(g(x))
fanalytical = 1/2
error = 1 - fMC/fanalytical
print(f"fMC = {fMC} ({N} samples)")
print(f"f = {fanalytical} (error = {error})")
```
fMC = 0.48969567139965026 (1000 samples)
f = 0.5 (error = 0.020608657200699487)
The error decreases with increasing sample size (as can be seen by modifying `N`).
Note that the samples are overwhelmingly appearing in the region where the function $f(x)$ was peaked, i.e., near the origin, and so $g(x)$ is only evaluated near the origin. Hence only those *important* data points contribute to the average. To obtain samples farther out ($x>10$) requires many samples to be drawn. The integration boundaries are implicitly taken into account via the sampling from the exponential distribution: in principle, a sample for very large $x$ *could* be drawn even though it is overwhelmingly unlikely.
```python
plt.plot(X, g(X), X, np.exp(-X))
plt.plot(x, g(x), 'k.')
plt.legend((r"$g(x)$", r"$P(x)$", "MC"))
plt.xlim(0, 20)
plt.xlabel("x");
```
### Exponential distribution with inverse transform
(follows *Computational Methods* Ch 10.B)
We often have to draw samples from a nonuniform distribution. The general approach is to draw samples from the [uniform distribution](https://en.wikipedia.org/wiki/Continuous_uniform_distribution) $\mathcal{U}_\text{[a, b]}$ over the interval $[a, b]$:
\begin{align}
x &\sim \mathcal{U}_\text{[a, b]}\\
p_x(x) &= \begin{cases}
\frac{1}{b -a }, \quad a \le x \le b\\
0, \text{otherwise}
\end{cases}
\end{align}
and then transform the random samples $x$ to the desired samples $y \sim p_y$ that are non-uniformly distributed according to $p_y(y)$.
In the best case, we can obtain the nonuniform distribution *analytically* by using the *inverse transform method*. Let's assume that the transformation
$$
y = G(x)
$$
exists, which transforms a sample $x \sim p_x$ into $y \sim p_y$.
The derivation starts from the *conservation of probability*
$$
|p_y(y) dy| = |p_x(x) dx|
$$
The probability in $[y, y+dy]$ must equal the probability in the corresponding interval $[x, x+dx]$ because $G: x \mapsto y=G(x)$ maps $x$ to $y$ in a one-to-one fashion.
Let's sample from the uniform distribution $\mathcal{U}_\text{[0, 1]}$ so $p_x(x) = 1$.
Integrate the conservation of probability equation (and pulling the absolute magnitude out of the integral because $\sum_i |a_i| = \left|\sum_i a_i\right|$ if $a_i \ge 0\ \forall i$):
\begin{gather}
\left|\int p_y(y) dy\right| = \int 1 dx = x\\
F(y) := \left|\int p_y(y) dy\right| = x\\
F(y) = x
\end{gather}
If we can solve the integral $\int p_y(y) dy$ analytically and if the inverse of $F(x)$ exists,
$$
y = F^{-1}(x) = G(x)
$$
then we have the **inverse transform**.
Apply to the *exponential distribution*:
\begin{gather}
p_y(y) = e^{-y}\\
F(y) = \left|\int p_y(y) dy\right| = \left|-e^{-y}\right| = e^{-y}\\
F(y) = x\\
e^{-y} = x\\
\end{gather}
to yield the inverse transform
$$
y = -\ln(x), \quad 0 \le x \le 1, \ 0 \le y \le + \infty
$$
The negative logarithm is thus the inverse transform for the exponential distribution.
Taking the negative logarithm of a uniform sample in $[0, 1]$ yields a sample that is exponentially distributed. It "squishes" the samples near $x=1$ towards $y=0$ and stretches the samples near $x=0$ towards infinity.
Our MC importance sampling now just has one extra step: transform the uniform samples:
```python
N = 1000
x = rng.uniform(low=0, high=1, size=N)
y = -np.log(x)
fMC = np.mean(g(y))
fanalytical = 1/2
error = 1 - fMC/fanalytical
print(f"fMC = {fMC} ({N} samples)")
print(f"f = {fanalytical} (error = {error})")
```
fMC = 0.49704661874486056 (1000 samples)
f = 0.5 (error = 0.0059067625102788845)
```python
plt.plot(X, g(X), X, np.exp(-X))
plt.plot(y, g(y), 'k.')
plt.plot(x, np.zeros_like(x), 'y|')
plt.legend((r"$g(x)$", r"$P(x)$", "MC", "uniform"))
plt.xlim(0, 20)
plt.xlabel("x");
```
```python
```
Source notebook: `17_MonteCarlo/importance_sampling.ipynb`, repository Py4Phy/PHY432-resources, CC-BY-4.0 license.
# 02 - Reverse Time Migration
This notebook is the second in a series of tutorial highlighting various aspects of seismic inversion based on Devito operators. In this second example we aim to highlight the core ideas behind seismic inversion, where we create an image of the subsurface from field recorded data. This tutorial follows on the modelling tutorial and will reuse the modelling operator and velocity model.
## Imaging requirement
Seismic imaging relies on two known parameters:
- **Field data** - or also called **recorded data**. This is a shot record corresponding to the true velocity model. In practice this data is acquired as described in the first tutorial. In order to simplify this tutorial we will generate synthetic field data by modelling it with the **true velocity model**.
- **Background velocity model**. This is a velocity model that has been obtained by processing and inverting the field data. We will look at this methods in the following tutorial as it relies on the method we are describing here. This velocity model is usually a **smooth version** of the true velocity model.
## Imaging computational setup
In this tutorial, we will introduce the back-propagation operator. This operator simulates the adjoint wave-equation, that is a wave-equation solved in a reversed time order. This time reversal led to the naming of the method we present here, called Reverse Time Migration. The notion of adjoint in exploration geophysics is fundamental as most of the wave-equation based imaging and inversion methods rely on adjoint based optimization methods.
## Notes on the operators
As we have already described the creation of a forward modelling operator, we will use a thin wrapper function instead. This wrapper is provided by a utility class called `AcousticWaveSolver`, which provides all the necessary operators for seismic modeling, imaging and inversion. The `AcousticWaveSolver` provides a more concise API for common wave propagation operators and caches the Devito `Operator` objects to avoid unnecessary recompilation. Operators introduced for the first time in this tutorial will be properly described.
As before we initialize printing and import some utilities. We also raise the Devito log level to avoid excessive logging for repeated operator invocations.
```python
import numpy as np
%matplotlib inline
from devito import configuration
configuration['log-level'] = 'WARNING'
```
## Computational considerations
Seismic inversion algorithms are generally very computationally demanding and require a large amount of memory to store the forward wavefield. In order to keep this tutorial as lightweight as possible we are using a very simple
velocity model that requires low temporal and spatial resolution. For a more realistic model, a second set of preset parameters for a reduced version of the 2D Marmousi data set [1] is provided below in comments. This can be run to create some more realistic subsurface images. However, this second preset is more computationally demanding and requires a slightly more powerful workstation.
```python
# Configure model presets
from examples.seismic import demo_model
# Enable model presets here:
preset = 'twolayer-isotropic' # A simple but cheap model (recommended)
# preset = 'marmousi2d-isotropic' # A larger more realistic model
# Standard preset with a simple two-layer model
if preset == 'twolayer-isotropic':
def create_model(grid=None):
return demo_model('twolayer-isotropic', origin=(0., 0.), shape=(101, 101),
spacing=(10., 10.), nbl=20, grid=grid, ratio=2)
filter_sigma = (1, 1)
nshots = 21
nreceivers = 101
t0 = 0.
    tn = 1000. # Simulation lasts 1 second (1000 ms)
f0 = 0.010 # Source peak frequency is 10Hz (0.010 kHz)
# A more computationally demanding preset based on the 2D Marmousi model
if preset == 'marmousi2d-isotropic':
def create_model(grid=None):
return demo_model('marmousi2d-isotropic', data_path='../../../../data/',
grid=grid, nbl=20)
filter_sigma = (6, 6)
    nshots = 301  # Need good coverage in shots, one every two grid points
    nreceivers = 601  # One receiver at every grid point
    t0 = 0.
    tn = 3500.  # Simulation lasts 3.5 seconds (3500 ms)
f0 = 0.025 # Source peak frequency is 25Hz (0.025 kHz)
```
# True and smooth velocity models
First, we create the model data for the "true" model from a given demonstration preset. This model represents the subsurface topology for the purposes of this example and we will later use it to generate our synthetic data readings. We also generate a second model and apply a smoothing filter to it, which represents our initial model for the imaging algorithm. The perturbation between these two models can be thought of as the image we are trying to recover.
```python
#NBVAL_IGNORE_OUTPUT
from examples.seismic import plot_velocity, plot_perturbation
from scipy import ndimage
# Create true model from a preset
model = create_model()
# Create initial model and smooth the boundaries
model0 = create_model(grid=model.grid)
model0.vp = ndimage.gaussian_filter(model0.vp.data, sigma=filter_sigma, order=0)
# Plot the true and initial model and the perturbation between them
plot_velocity(model)
plot_velocity(model0)
plot_perturbation(model0, model)
```
## Acquisition geometry
Next we define the positioning and the wave signal of our source, as well as the location of our receivers. To generate the wavelet for our source we require the discretized values of time that we are going to use to model a single "shot",
which again depends on the grid spacing used in our model. For consistency this initial setup will look exactly as in the previous modelling tutorial, although we will vary the position of our source later on during the actual imaging algorithm.
```python
#NBVAL_IGNORE_OUTPUT
# Define acquisition geometry: source
from examples.seismic import AcquisitionGeometry
# First, position source centrally in all dimensions, then set depth
src_coordinates = np.empty((1, 2))
src_coordinates[0, :] = np.array(model.domain_size) * .5
src_coordinates[0, -1] = 20. # Depth is 20m
# Define acquisition geometry: receivers
# Initialize receivers for synthetic and imaging data
rec_coordinates = np.empty((nreceivers, 2))
rec_coordinates[:, 0] = np.linspace(0, model.domain_size[0], num=nreceivers)
rec_coordinates[:, 1] = 30.
# Geometry
geometry = AcquisitionGeometry(model, rec_coordinates, src_coordinates, t0, tn, f0=.010, src_type='Ricker')
# We can plot the time signature to see the wavelet
geometry.src.show()
```
# True and smooth data
We can now generate the shot record (receiver readings) corresponding to our true and initial models. The difference between these two records will be the basis of the imaging procedure.
For this purpose we will use the same forward modelling operator that was introduced in the previous tutorial, provided by the `AcousticWaveSolver` utility class. This object instantiates a set of pre-defined operators according to an initial definition of the acquisition geometry, consisting of source and receiver symbols. The solver object caches the individual operators and provides a slightly more high-level API that allows us to invoke the modelling operators from the initial tutorial in a single line. In the following cells we use this to generate shot data by only specifying the respective model symbol `m` to use, and the solver will create and return a new `Receiver` object that represents the readings at the previously defined receiver coordinates.
```python
# Compute synthetic data with forward operator
from examples.seismic.acoustic import AcousticWaveSolver
solver = AcousticWaveSolver(model, geometry, space_order=4)
true_d , _, _ = solver.forward(vp=model.vp)
```
```python
# Compute initial data with forward operator
smooth_d, _, _ = solver.forward(vp=model0.vp)
```
```python
#NBVAL_IGNORE_OUTPUT
# Plot shot record for true and smooth velocity model and the difference
from examples.seismic import plot_shotrecord
plot_shotrecord(true_d.data, model, t0, tn)
plot_shotrecord(smooth_d.data, model, t0, tn)
plot_shotrecord(smooth_d.data - true_d.data, model, t0, tn)
```
# Imaging with back-propagation
As explained in the introduction of this tutorial, this method is based on back-propagation.
## Adjoint wave equation
If we go back to the modelling part, we can rewrite the simulation as a linear system solve:
\begin{equation}
\mathbf{A}(\mathbf{m}) \mathbf{u} = \mathbf{q}
\end{equation}
where $\mathbf{m}$ is the discretized square slowness, $\mathbf{q}$ is the discretized source and $\mathbf{A}(\mathbf{m})$ is the discretized wave-equation. The matrix representation of the discretized wave-equation is a lower triangular matrix that can be solved with forward substitution. Writing out the forward substitution pointwise leads to the time-stepping stencil.
On a small problem one could form the matrix explicitly and transpose it to obtain the adjoint discrete wave-equation:
\begin{equation}
\mathbf{A}(\mathbf{m})^T \mathbf{v} = \delta \mathbf{d}
\end{equation}
where $\mathbf{v}$ is the discrete **adjoint wavefield** and $\delta \mathbf{d}$ is the data residual defined as the difference between the field/observed data and the synthetic data $\mathbf{d}_s = \mathbf{P}_r \mathbf{u}$. In our case we derive the discrete adjoint wave-equation from the discrete forward wave-equation to get its stencil.
## Imaging
Wave-equation based imaging relies on one simple concept:
- If the background velocity model is kinematically correct, the forward wavefield $\mathbf{u}$ and the adjoint wavefield $\mathbf{v}$ meet at the reflector positions at zero time offset.
The sum over time of the zero time-offset correlation of these two fields then creates an image of the subsurface. Mathematically this leads to the simple imaging condition:
\begin{equation}
\text{Image} = \sum_{t=1}^{n_t} \mathbf{u}[t] \mathbf{v}[t]
\end{equation}
In the following tutorials we will describe a more advanced imaging condition that produces sharper and more accurate results.
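To make the imaging condition concrete before turning to Devito, here is a minimal NumPy sketch of the zero-lag correlation; the array shapes and random wavefields are placeholders, not the actual simulation output.
```python
import numpy as np

# hypothetical wavefields: nt time steps on an nx-by-nz grid
nt, nx, nz = 100, 50, 50
u = np.random.rand(nt, nx, nz)  # forward wavefield, saved at every time step
v = np.random.rand(nt, nx, nz)  # adjoint wavefield

# zero time-offset correlation summed over time: Image = sum_t u[t] * v[t]
image = np.sum(u * v, axis=0)
```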
## Operator
We will now define the imaging operator that computes the adjoint wavefield $\mathbf{v}$ and correlates it with the forward wavefield $\mathbf{u}$. This operator essentially consists of three components:
* Stencil update of the adjoint wavefield `v`
* Injection of the data residual at the adjoint source (forward receiver) location
* Correlation of `u` and `v` to compute the image contribution at each timestep
```python
# Define gradient operator for imaging
from devito import TimeFunction, Operator, Eq, solve
from examples.seismic import PointSource
def ImagingOperator(model, image):
# Define the wavefield with the size of the model and the time dimension
v = TimeFunction(name='v', grid=model.grid, time_order=2, space_order=4)
u = TimeFunction(name='u', grid=model.grid, time_order=2, space_order=4,
save=geometry.nt)
# Define the wave equation, but with a negated damping term
eqn = model.m * v.dt2 - v.laplace - model.damp * v.dt
# Use `solve` to rearrange the equation into a stencil expression
stencil = Eq(v.backward, solve(eqn, v.backward))
# Define residual injection at the location of the forward receivers
dt = model.critical_dt
residual = PointSource(name='residual', grid=model.grid,
time_range=geometry.time_axis,
coordinates=geometry.rec_positions)
res_term = residual.inject(field=v, expr=residual * dt**2 / model.m)
# Correlate u and v for the current time step and add it to the image
image_update = Eq(image, image - u * v)
return Operator([stencil] + res_term + [image_update],
subs=model.spacing_map)
```
## Implementation of the imaging loop
As just explained, the forward wave-equation is solved forward in time while the adjoint wave-equation is solved in a reversed time order. Therefore, correlating these two fields over time requires storing one of them. The computational procedure for imaging is as follows:
- Simulate the forward wave-equation with the background velocity model to get the synthetic data and save the full wavefield $\mathbf{u}$
- Compute the data residual
- Back-propagate the data residual and compute on the fly the image contribution at each time step.
This procedure is applied to multiple source positions (shots) and summed to obtain the full image of the subsurface. We can first visualize the varying locations of the sources that we will use.
```python
#NBVAL_IGNORE_OUTPUT
# Prepare the varying source locations
source_locations = np.empty((nshots, 2), dtype=np.float32)
source_locations[:, 0] = np.linspace(0., 1000, num=nshots)
source_locations[:, 1] = 30.
plot_velocity(model, source=source_locations)
```
```python
# Run imaging loop over shots
from devito import Function
# Create image symbol and instantiate the previously defined imaging operator
image = Function(name='image', grid=model.grid)
op_imaging = ImagingOperator(model, image)
for i in range(nshots):
print('Imaging source %d out of %d' % (i+1, nshots))
# Update source location
geometry.src_positions[0, :] = source_locations[i, :]
# Generate synthetic data from true model
true_d, _, _ = solver.forward(vp=model.vp)
# Compute smooth data and full forward wavefield u0
smooth_d, u0, _ = solver.forward(vp=model0.vp, save=True)
# Compute gradient from the data residual
v = TimeFunction(name='v', grid=model.grid, time_order=2, space_order=4)
residual = smooth_d.data - true_d.data
op_imaging(u=u0, v=v, vp=model0.vp, dt=model0.critical_dt,
residual=residual)
```
Imaging source 1 out of 21
Imaging source 2 out of 21
Imaging source 3 out of 21
Imaging source 4 out of 21
Imaging source 5 out of 21
Imaging source 6 out of 21
Imaging source 7 out of 21
Imaging source 8 out of 21
Imaging source 9 out of 21
Imaging source 10 out of 21
Imaging source 11 out of 21
Imaging source 12 out of 21
Imaging source 13 out of 21
Imaging source 14 out of 21
Imaging source 15 out of 21
Imaging source 16 out of 21
Imaging source 17 out of 21
Imaging source 18 out of 21
Imaging source 19 out of 21
Imaging source 20 out of 21
Imaging source 21 out of 21
```python
#NBVAL_IGNORE_OUTPUT
from examples.seismic import plot_image
# Plot the inverted image
plot_image(np.diff(image.data, axis=1))
```
```python
assert np.isclose(np.linalg.norm(image.data), 1e6, rtol=1e1)
```
And we have an image of the subsurface with a strong reflector at the original location.
## References
[1] _Versteeg, R.J. & Grau, G. (eds.) (1991): The Marmousi experience. Proc. EAGE workshop on Practical Aspects of Seismic Data Inversion (Copenhagen, 1990), Eur. Assoc. Explor. Geophysicists, Zeist._
Source notebook: `examples/seismic/tutorials/02_rtm.ipynb`, repository CavalcanteLucas/devito, MIT license.
```python
from sympy import symbols, init_printing, Function, Sum, Eq, Matrix,cos, sin, pi, I, exp
#Paper T.Lubin 2010b
#Machine with no load rotor, rotor slots, current sheet on stator side
init_printing()
R_1, R_2, R_3, beta, mu_0 = symbols('R_1, R_2, R_3, beta, mu_0', real = 'true', positive = 'true', nonzero ='true')
theta_i = symbols('theta_i')
#Declaration of the motor geometry
Z_r = symbols('Z_r', integer = 'true', positive = 'true', nonzero ='true') #Number of rotor, stator slots
#Declaration of the space variables
r, t = symbols('r t', real = 'true', positive = 'true')
theta = symbols('theta', real ='true')
#Declaration of the discretizing integers for stator and rotor slots
i = symbols('i', integer='true', positive = 'true', nonzero='true')
#Declaration of th magnetic potentials in the 5 areas
P = Function("P")
E = Function("E")
```
```python
##AREA I : AIR GAP
#Dummy variable(s) of summation
n, N, k, K = symbols('n N k K', integer = 'true', positive = 'true', nonzero ='true')
#Integration constants
A_I0, A_In, B_In, C_In, D_In = symbols('A_I0, A_In, B_In, C_In, D_In', commutative=False)
#Expression of the potential
AzI_cst = A_I0
AzI_exp = A_In*R_2/n*P(n, r, R_3)/E(n, R_2, R_3) - B_In*R_3/n*P(n, r, R_2)/E(n, R_2, R_3)
expn = exp(I*(n*theta + k*t))
AzI = AzI_cst + Sum(Sum(AzI_exp*expn,(n,1,N)), (k,1,K))
#Expression of the field
#BrI_cst, BrI_cos, BrI_sin = compute_Br(AzI_cst, AzI_cos, AzI_sin, n, r, theta)
#BrI = BrI_cst + Sum(BrI_cos*cosn+BrI_sin*sinn,(n,1,N))
#BthetaI_cst, BthetaI_cos, BthetaI_sin = compute_Btheta(AzI_cst, AzI_cos, AzI_sin, r)
#BthetaI = BthetaI_cst + Sum(BthetaI_cos*cosn+BthetaI_sin*sinn,(n,1,N))
fAzI = Function('Az_I')(r,theta,t)
fBrI = Function('Br_I')(r,theta,t)
fBthetaI = Function('Btheta_I')(r,theta)
Eq(fAzI, AzI) #, Eq(fBrI, BrI), Eq(fBthetaI, BthetaI)
```
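The helper functions `compute_Br` and `compute_Btheta` used in the next cell are not defined in this notebook. A plausible sketch, assuming the standard relations $B_r = \frac{1}{r}\frac{\partial A_z}{\partial \theta}$ and $B_\theta = -\frac{\partial A_z}{\partial r}$ applied coefficient by coefficient to the constant, cosine and sine parts of the potential, is shown below.
```python
# Assumed implementation, not part of the original notebook.
# The potential is written as Az = cst + a_cos*cos(n*theta') + a_sin*sin(n*theta').
from sympy import diff

def compute_Br(Az_cst, Az_cos, Az_sin, n_harm, r, theta):
    # B_r = (1/r) dAz/dtheta: d/dtheta turns cos into -n*sin and sin into +n*cos
    # theta is accepted only to match the call signature used below
    return 0, n_harm*Az_sin/r, -n_harm*Az_cos/r

def compute_Btheta(Az_cst, Az_cos, Az_sin, r):
    # B_theta = -dAz/dr, taken coefficient by coefficient
    return -diff(Az_cst, r), -diff(Az_cos, r), -diff(Az_sin, r)
```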
```python
##AREA i : ROTOR SLOT
#Dummy variable(s) of summation
k, K = symbols('k, K', integer = 'true', nonzero = 'true')
#Integration constants
A_i0, A_ik = symbols('A_i0, A_ik', commutative=False)
#Expression of the potential
Azi_cst = A_i0
Azi_cos = A_ik*P(k*pi/beta, R_1, r)/P(k*pi/beta, R_1, R_2)
Azi_sin = 0
coski = cos(k*pi/beta*(theta-theta_i+beta/2))
sinki = sin(k*pi/beta*(theta-theta_i+beta/2))
Azi = Azi_cst + Sum(Azi_cos*coski,(k,1,K))
#Expression of the field
Bri_cst, Bri_cos, Bri_sin = compute_Br(Azi_cst, Azi_cos, Azi_sin, k*pi/beta, r, theta)
Bri = Bri_cst + Sum(Bri_cos*coski+Bri_sin*sinki,(k,1,K))
Bthetai_cst, Bthetai_cos, Bthetai_sin = compute_Btheta(Azi_cst, Azi_cos, Azi_sin, r)
Bthetai = Bthetai_cst + Sum(Bthetai_cos*coski+Bthetai_sin*sinki,(k,1,K))
fAzi = Function('Az_i')(r,theta)
fBri = Function('Br_i')(r,theta)
fBthetai = Function('Btheta_i')(r,theta)
```
```python
Potentials = Matrix([Eq(fAzI, AzI), Eq(fAzi, Azi)])
Fields = Matrix([Eq(fBrI, BrI), Eq(fBthetaI, BthetaI), Eq(fBri, Bri), Eq(fBthetai, Bthetai)])
#Current sheet
p, m, M = symbols('p, m, M', integer = 'true', nonzero = 'true')
fK = Function('K')(theta)
K_m, alpha = symbols('K_m, alpha')
K_cos = K_m
cosm = cos(m*p*(theta-alpha))
K = Sum(K_cos*cosm, (m,1,M))
## RESULTING EQUATIONS
Csts = Matrix([A_In, B_In, C_In, D_In, A_ik])
var = [n, n, n, n, (k, i)]
##General integrals to compute
fI_cosni, fI_sinni = symbols('I_cosni, I_sinni', commutative = False)
fI_cosksinni, fI_coskcosni = symbols('I_cosksinni, I_coskcosni', commutative = False)
##CONDITION A.11 = A.9
A_11 = Eq(BthetaI_cos.subs(r, R_2), 1/pi*(Bthetai_cst.subs(r, R_2)*fI_cosni +Bthetai_cos.subs(r, R_2)*fI_coskcosni))
##CONDITION A.7
A_7 = Eq(B_In, mu_0*K_m*cos(m*p*alpha))
##CONDITION A.12 = A.10
A_12 = Eq(BthetaI_sin.subs(r, R_2), 1/pi*(Bthetai_cst.subs(r, R_2)*fI_sinni +Bthetai_cos.subs(r, R_2)*fI_cosksinni))
##CONDITION A.8
A_8 = Eq(D_In, mu_0*K_m*sin(m*p*alpha))
##CONDITION A.13
A_13 = Eq(A_ik, 2/beta*((A_In*R_2/n*P(n, R_2, R_3)/E(n, R_2, R_3) + B_In*R_3/n*2/E(n, R_3, R_2))*fI_coskcosni + (C_In*R_2/n*P(n, R_2, R_3)/E(n, R_2, R_3) + D_In*R_3/n*2/E(n, R_3, R_2))*fI_cosksinni))
A_13bis = Eq(Azi_cos.subs(r, R_2), 2/beta*(AzI_cos.subs(r, R_2)*fI_coskcosni + AzI_sin.subs(r, R_2)*fI_cosksinni))
SetEqs = Matrix([A_11, A_7, A_12, A_8, A_13])
Mat, Vect, Index = get_System(var, var, Csts, SetEqs)
#I_coskcosni = computeInt_coscos(k*pi/beta, -theta_i + beta/2, n, 0, theta_i - beta/2, theta_i + beta/2)
#I_cosksinni = computeInt_cossin(k*pi/beta, -theta_i + beta/2, n, 0, theta_i - beta/2, theta_i + beta/2)
#I_coskcosni = computeInt_coscos(k*pi/beta, -theta_i, n, 0, theta_i, theta_i + beta)
#I_cosksinni = computeInt_cossin(k*pi/beta, -theta_i, n, 0, theta_i, theta_i + beta)
#def P(n,x,y) :
#
# return (x/y)**n + (y/x)**n
#
#def E(n,x,y) :
#
# return (x/y)**n - (y/x)**n
#
#P_n_R2_R3 = P(n, R_2, R_3)
#E_n_R2_R3 = E(n, R_2, R_3)
#E_n_R3_R2 = E(n, R_3, R_2)
#E_k_R1_R2 = E(k*pi/beta, R_1, R_2)
#P_k_R1_R2 = P(k*pi/beta, R_1, R_2)
#Current sheet Fourier series expansion
#I1 = computeInt_coscos(m*p, -alpha, n, 0, 0,2*pi)
#I2 = computeInt_coscos(m*p, -alpha, m*p, 0, 0,2*pi)
```
Source notebook: `Tutorials/tuto_subdomain_model.ipynb`, repository EmileDvs/pyleecan, Apache-2.0 license.
# First Post
> Gotta start somewhere
A big part of the workflow for students in my Python-based mathematics classes is creating clear, beautiful documents with Jupyter. For that reason, I'll use Jupyter to generate all the content in PythonMathClassroom.
The decision to host the blog with GitHub fastpages came down to the ease with which Jupyter content can go up on that platform without any intermediate fuss.
The first post ought to have some Python mathematics, so here we go:
Let's use the sympy library to compute and plot some functions related to $x^2e^{-x}$
```python
from sympy import *
x, y = symbols("x y")
ii=integrate(x**2*exp(-x),x)
ii
```
$\displaystyle \left(- x^{2} - 2 x - 2\right) e^{- x}$
```python
diff(ii,x)
```
$\displaystyle \left(- 2 x - 2\right) e^{- x} - \left(- x^{2} - 2 x - 2\right) e^{- x}$
```python
expand(_)
```
$\displaystyle x^{2} e^{- x}$
```python
solve(x**2*exp(-x)- 3/10 ,x)
```
[-0.439637356954377, 0.829068989148422, 3.95284287457532]
```python
plot(x**2*exp(-x),3/10,(x,-.6,10))
```
```python
```
Source notebook: `_notebooks/2022-02-18-FirstPost.ipynb`, repository ejbarth/PythonMathClassroom, Apache-2.0 license.
+ This notebook is part of lecture 6 *Columnspace and nullspace* in the OCW MIT course 18.06 by Prof Gilbert Strang [1]
+ Created by me, Dr Juan H Klopper
+ Head of Acute Care Surgery
+ Groote Schuur Hospital
+ University Cape Town
+ <a href="mailto:juan.klopper@uct.ac.za">Email me with your thoughts, comments, suggestions and corrections</a>
*Linear Algebra OCW MIT18.06* IPython notebook [2] study notes by Dr Juan H Klopper are licensed under a [Creative Commons Attribution-NonCommercial 4.0 International License](http://creativecommons.org/licenses/by-nc/4.0/).
+ [1] <a href="http://ocw.mit.edu/courses/mathematics/18-06sc-linear-algebra-fall-2011/index.htm">OCW MIT 18.06</a>
+ [2] Fernando Pérez, Brian E. Granger, IPython: A System for Interactive Scientific Computing, Computing in Science and Engineering, vol. 9, no. 3, pp. 21-29, May/June 2007, doi:10.1109/MCSE.2007.53. URL: http://ipython.org
```python
from IPython.core.display import HTML, Image
css_file = 'style.css'
HTML(open(css_file, 'r').read())
```
```python
#import numpy as np
from sympy import init_printing, Matrix, symbols
#import matplotlib.pyplot as plt
#import seaborn as sns
#from IPython.display import Image
from warnings import filterwarnings
init_printing(use_latex = 'mathjax')
%matplotlib inline
filterwarnings('ignore')
```
# Columnspace and nullspace of a matrix
## Columnspaces of matrices
* We saw in the previous lecture that columns of a matrix can form vectors
* Consider now the LU-decomposition of *A*
$$ PA = PLU $$
* The union P∪L (all vectors in P or L or both) is NOT a subspace
* The intersection P∩L (or vectors in P and L) is a subspace (because their intersection is only the zero vector)
* The intersection of any two subspaces is a subspace
* Consider the following example matrix
```python
A = Matrix([[1, 1, 2], [2, 1, 3], [3, 1, 4], [4, 1, 5]])
A
```
* Each of the columns of *A* is a vector in ℝ<sup>4</sup>
* The linear combinations of all the column vectors form a subspace
* Is it the whole *V* = ℝ<sup>4</sup>, though?
* The reason why we ask is because we want to bring it back to a system of linear equations and ask the question: Is there (always) a solution to the following:
$$ {A} \overline {x}= \overline {b} $$
* Thus, which right-hand sides *b* are allowed?
* In our example above we are in ℝ<sup>4</sup> and we ask if linear combination of all of them fill ℝ<sup>4</sup>
* From our example above some right-hand sides will be allowed (they form a subspace)
* Let's look at an example for **b**
```python
x1, x2, x3 = symbols('x1, x2, x3')
vec_x = Matrix([x1, x2, x3])
b = Matrix([1, 2, 3, 4])
A, vec_x, b
```
```python
A * vec_x
```
* You can do the row multiplication, but it's easy to see from above we are asking about linear combinations of the columns, i.e. how many (*x*<sub>1</sub>) of column 1 plus how many (*x*<sub>2</sub>) of column 2 plus how many (*x*<sub>3</sub>) of column 3 equals **b**?
* Well, since **b** is the same as the first column, **x** would be
$$ \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} $$
* So we can solve for all values of **b** if **b** is in the column space
### Linear independence
* We really need to know if the columns above are linearly independent
* We note that column three above is a linear combination of the first two, so adds nothing new
* Actually, we could also throw away the first one because it is column 3 plus -1 times column 2
* Same for column 2
* We thus have two columns left and we say that the column space is of dimension 2 (a 2-dimensional subspace of ℝ<sup>4</sup>)
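* As a quick check (not part of the original notes), sympy confirms the rank and a basis for the column space
```python
# Quick check: rank of A and a basis for its column space
A.rank(), A.columnspace()
```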
## The nullspace
* It contains all solutions **x** for A**x**=0
* This solution(s) is in ℝ<sup>3</sup>
```python
zero_b = Matrix([0, 0, 0, 0])
A, vec_x, zero_b
```
* Some solutions would be
$$ \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} $$
$$ \begin{bmatrix} 1 \\ 1 \\ -1 \end{bmatrix} $$
$$ \begin{bmatrix} 2 \\ 2 \\ -2 \end{bmatrix} $$
* In fact, we have:
$$ {c} \begin{bmatrix} 1 \\ 1 \\ -1 \end{bmatrix} $$
* It is thus a line
* The nullspace is a line in ℝ<sup>3</sup>
* **PLEASE** remember, for any space the rules of addition and scalar multiplication must hold for vectors to remain in that space
```python
```
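* As a final check (again, not part of the original notes), sympy's `nullspace` method returns a basis vector for this line
```python
# Quick check: a basis for the nullspace of A
A.nullspace()
```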
Source notebook: `_math/MIT_OCW_18_06_Linear_algebra/I_07_Column_and_null_spaces.ipynb`, repository aixpact/data-science, MIT license.
```python
from sympy import *
init_printing()
```
```python
# Example 1
```
```python
hbar, m, L, g = symbols("hbar, m, L, g", positive=True)
```
```python
n, ell = symbols('n, ell', integer=True)
```
```python
x = symbols('x', real=True)
```
```python
def phi0(x,n):
return sqrt(2/L)*sin(n*pi*x/L)
```
```python
phi0(x,n)
```
```python
def E0(n):
return hbar**2*n**2*pi**2/(2*m*L**2)
```
```python
E0(n)
```
```python
#first order energy corrections
def E1(n):
return integrate(phi0(x,n)*(m*g*x)*phi0(x,n),(x,0,L))
```
```python
E1(n) # the top line is the one that counts: n neq 0
```
```python
# first order state corrections
```
```python
# first, the integrals are
integrate(phi0(x,ell)*m*g*x*phi0(x,n),(x,0,L))
# The last line is the only good one: ell neq n and both are > 0
```
```python
_15.subs(n,1)
```
```python
summation(_16,(ell,2,oo))
```
```python
str(_15)
```
'Piecewise((0, (Eq(ell, 0) & Eq(n, 0)) | (Eq(ell, 0) & Eq(ell, n) & Eq(n, 0)) | (Eq(ell, 0) & Eq(n, 0) & Eq(ell, -n)) | (Eq(ell, 0) & Eq(ell, n) & Eq(n, 0) & Eq(ell, -n))), (-L*g*m/2, Eq(ell, -n) | (Eq(ell, 0) & Eq(ell, -n)) | (Eq(ell, n) & Eq(ell, -n)) | (Eq(n, 0) & Eq(ell, -n)) | (Eq(ell, 0) & Eq(ell, n) & Eq(ell, -n)) | (Eq(ell, n) & Eq(n, 0) & Eq(ell, -n))), (L*g*m/2, Eq(ell, n) | (Eq(ell, 0) & Eq(ell, n)) | (Eq(ell, n) & Eq(n, 0))), (4*(-1)**ell*(-1)**n*L*ell*g*m*n/(pi**2*ell**4 - 2*pi**2*ell**2*n**2 + pi**2*n**4) - 4*L*ell*g*m*n/(pi**2*ell**4 - 2*pi**2*ell**2*n**2 + pi**2*n**4), True))'
```python
Hprimeelln = 4*(-1)**ell*(-1)**n*L*ell*g*m*n/(pi**2*ell**4 - 2*pi**2*ell**2*n**2 + pi**2*n**4) - 4*L*ell*g*m*n/(pi**2*ell**4 - 2*pi**2*ell**2*n**2 + pi**2*n**4)
```
```python
sumterm = Hprimeelln/(E0(n)-E0(ell))*phi0(ell,x)
```
```python
sumterm
```
```python
with assuming((Q.nonzero(n-ell)),Q.positive(n),Q.positive(ell)):
summation(sumterm,(ell,1,n-1))+summation(sumterm,(ell,n+1,oo))
```
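The sum assembled above is the standard non-degenerate first-order correction to the states (stated here for reference):

$$|\psi_n^{(1)}\rangle = \sum_{\ell \neq n} \frac{\langle \phi_\ell^{(0)} | \, m g x \, | \phi_n^{(0)} \rangle}{E_n^{(0)} - E_\ell^{(0)}} \, |\phi_\ell^{(0)}\rangle,$$

which is what `sumterm` encodes, with the two `summation` calls covering $\ell < n$ and $\ell > n$.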
```python
theta = symbols('theta', real=True)
```
```python
Hmatrix = Matrix([[cos(theta),sin(theta)/sqrt(2),0],[sin(theta)/sqrt(2),0,sin(theta)/sqrt(2)],[0,sin(theta)/sqrt(2),-cos(theta)]])
```
```python
Hmatrix
```
```python
Hmatrix.eigenvects(simplify=True)
```
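A quick sanity check on the result (an added note, assuming the intent is a spin-1 Hamiltonian with the quantization axis tilted by $\theta$): the characteristic polynomial works out to

$$\det(\lambda I - H) = \lambda^3 - \lambda = \lambda(\lambda - 1)(\lambda + 1),$$

so the eigenvalues should be $-1, 0, +1$ for every $\theta$; only the eigenvectors depend on the angle.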
```python
```
```python
```
|
fab078d80176c469e54833fd27923d4c7c4110c9
| 69,241 |
ipynb
|
Jupyter Notebook
|
Lecture4-ex.ipynb
|
corcoted/Phys475
|
8fc0ee6bccda19e5afff14f0e8fda6aef7ee6d66
|
[
"MIT"
] | 2 |
2021-03-10T04:30:46.000Z
|
2021-07-12T09:20:43.000Z
|
Lecture4-ex.ipynb
|
corcoted/Phys475
|
8fc0ee6bccda19e5afff14f0e8fda6aef7ee6d66
|
[
"MIT"
] | null | null | null |
Lecture4-ex.ipynb
|
corcoted/Phys475
|
8fc0ee6bccda19e5afff14f0e8fda6aef7ee6d66
|
[
"MIT"
] | null | null | null | 96.034674 | 12,172 | 0.727965 | true | 946 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.935347 | 0.874077 | 0.817565 |
__label__eng_Latn
| 0.147109 | 0.73781 |
## Exercise set 5: causal forest
In this exercise set we will be working with the `econml` package to estimate a causal forest.
Another more general implementation is found in [generalized random forest](https://github.com/grf-labs/grf) by Athey et al. The package is written for the R programming language.
```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.datasets import make_classification
sns.set(style='darkgrid')
%matplotlib inline
```
To highlight the usefulness of causal forest we will be working with synthetic data in this exercise. In particular we will synthetically add a treatment effect to a dataset in which there otherwise is none. Furthermore we will make this effect heterogeneous by adding noise, and by making it depend on a single continuous variable as well as a categorical variable.
>**Ex. 5.1.0:** Use the code below to simulate data according to
<br>
<br>
\begin{align}
T(X) &= \frac{1}{1+e^{-(X\delta+U)}} > 0.5 \\
\tau(X) &= \frac{10}{1+e^{-\gamma X_0}} + \nu \\
Y(T=0) &= X\beta + \epsilon \\
Y(T=1) &= Y(0) + \tau(X) \\
\end{align}
<br>
where $\epsilon, \nu$ are noise terms distributed according to $\mathcal{N}(0,1)$, $\beta,\delta$ are `N_FEATURES`-dimensional vectors of random parameters, and $\gamma$ is a scalar parameter.
```python
N_SAMPLES = 10000
N_FEATURES = 5
GAMMA = 1.2
BETA = np.random.RandomState(0).uniform(0,1, size = N_FEATURES)
DELTA = np.random.RandomState(1).uniform(0,1, size = N_FEATURES)
X = np.random.RandomState(2).normal(size = (N_SAMPLES, N_FEATURES))
U = np.random.RandomState(3).normal(size = (N_SAMPLES))
T = 1/(1+np.exp(-(U+X.dot(DELTA))))>.5
Y0 = X @ BETA + np.random.RandomState(5).normal(size = N_SAMPLES)
tau = 10/(1 + np.exp(-GAMMA*X[:,0])) + np.random.normal(size = N_SAMPLES)
Y1 = Y0 + tau
y = Y0 + T*(Y1 - Y0)
```
> **Ex. 5.1.1:** Create a two-subplot figure, and plot $Y(0)$ and $Y(1)$ in one subplot against $X_0$. Plot $\tau(x)$ against $X_0$ in the other subplot. What do you see? Why do we observe $\tau=0$ in many cases?
```python
# Your answer here
```
```python
fig, ax = plt.subplots(1,2, figsize = (10,4))
ax[0].scatter(X[:,0], Y0, label = '$Y(0)$',alpha=.2)
ax[0].scatter(X[:,0], Y1, label = '$Y(1)$',alpha=.2)
ax[0].set_xlabel('$X_0$', fontsize = 16)
ax[0].set_ylabel('$Y(T)$', fontsize = 16)
ax[0].legend()
ax[0].grid(True)
ax[0].set_title('Potential outcomes')
ax[1].scatter(X[:,0], tau, label = '$\\tau(x)$', alpha=.2)
ax[1].set_xlabel('$X_0$', fontsize = 16)
ax[1].set_ylabel('$\\tau(x)$', fontsize = 16)
ax[1].grid(True)
ax[1].set_title('Treatment effect')
fig.tight_layout()
```
> **Ex. 5.1.2:** Is there a selection problem? Plot for each dimension of $X$ the relationship with treatment assignment.
```python
# Your answer here
```
```python
fig, ax = plt.subplots(1,5, figsize = (15,2.5))
for i in range(N_FEATURES):
sns.barplot(y=T, x=X[:,i], ax=ax[i], orient='h')
ax[i].set_xlim(-1,1)
ax[i].set_xlabel(f'Variable: $X_{i}$')
ax[0].set_ylabel('Treated')
fig.tight_layout()
```
>**Ex.5.1.3:** Estimate a causal forest model using the `econml` package, and store the model in a new variable `cf`. To unconfound the treatment assignment, use the gradient boosted forest. Then use the following line to create a dataframe of predicted treatment effects on the same data that you trained the model on.
>> Hint: use the following setting
>>```python
discrete_treatment=True
```
```python
# Your answer here
```
```python
t0,t1 = 0,1
from econml.dml import CausalForestDML
from sklearn.ensemble import GradientBoostingClassifier,GradientBoostingRegressor
cf_dmlxg = CausalForestDML(model_y=GradientBoostingRegressor(),
model_t=GradientBoostingClassifier(),
discrete_treatment=True)
cf_dmlxg.fit(y, T, X=X)
```
<econml.dml.causal_forest.CausalForestDML at 0x7fbcd02f4e50>
>**Ex.5.1.4:** Plot a scatterplot of the estimated individual treatment effects against the simulated "true" ITE's `tau` that you produced in the beginning of this exercise set.
```python
# Your answer here
```
```python
tau_hat = cf_dmlxg.effect(X)
tau_lb, tau_ub = cf_dmlxg.effect_interval(X, alpha=0.05)
from matplotlib import pyplot as plt
%matplotlib inline
f,ax = plt.subplots(figsize=(8,6))
ax.scatter(tau, tau_hat, color='red',alpha=0.1)
# plt.plot(X_range[:,0], tau_hat)
#ax.fill_between(tau, tau_lb, tau_ub)
```
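To make the agreement easier to judge, one can overlay a $y=x$ reference line and report a simple summary statistic. A small optional addition (not part of the original solution), run in the same session so that `ax`, `tau` and `tau_hat` are still defined:

```python
# Overlay the ideal y = x line and report the correlation between true and estimated ITEs
lims = [min(tau.min(), tau_hat.min()), max(tau.max(), tau_hat.max())]
ax.plot(lims, lims, 'k--', label='$y = x$')
ax.set_xlabel('true ITE $\\tau$')
ax.set_ylabel('estimated ITE $\\hat{\\tau}$')
ax.legend()
print('correlation:', np.corrcoef(np.ravel(tau), np.ravel(tau_hat))[0, 1])
```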
|
80d56ccc9c145b993c4b39801b9cf019e13f7d3a
| 242,992 |
ipynb
|
Jupyter Notebook
|
session_5/ex_5_solution.ipynb
|
carolineespegren/mle_phd_oslo
|
0b74203553cd4dd841a0186c999d3dfc59722000
|
[
"CC-BY-4.0"
] | 5 |
2021-05-26T19:42:00.000Z
|
2021-07-17T07:10:56.000Z
|
session_5/ex_5_solution.ipynb
|
carolineespegren/mle_phd_oslo
|
0b74203553cd4dd841a0186c999d3dfc59722000
|
[
"CC-BY-4.0"
] | null | null | null |
session_5/ex_5_solution.ipynb
|
carolineespegren/mle_phd_oslo
|
0b74203553cd4dd841a0186c999d3dfc59722000
|
[
"CC-BY-4.0"
] | 10 |
2021-05-04T12:31:35.000Z
|
2021-07-15T06:26:24.000Z
| 801.953795 | 127,752 | 0.952225 | true | 1,338 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.828939 | 0.831143 | 0.688967 |
__label__eng_Latn
| 0.883764 | 0.439032 |
# COOT on MNIST/USPS example
This notebook gives a simple example of the COOT problem between two well-known datasets: MNIST and USPS.
We recall that for two datasets $\mathbf{X} \in \mathbb{R}^{n \times d}, \mathbf{X'} \in \mathbb{R}^{n' \times d'}$ the COOT problem aims at finding two optimal transport maps $\boldsymbol{\pi^{s}}$ and $\boldsymbol{\pi^{v}}$ between the samples and the features that solve:
\begin{equation}
\underset{\begin{smallmatrix}\boldsymbol{\pi^{s}} \in \Pi(\mathbf{w},\mathbf{w'}) \\ \boldsymbol{\pi^{v}} \in \Pi(\mathbf{v},\mathbf{v'}) \end{smallmatrix}} {\min} \sum_{ijkl} \left|X_{ik}-X_{jl}\right|^{p} \pi^{s}_{ij} \pi^{v}_{kl}
\end{equation}
where $\mathbf{w},\mathbf{w'}$ are histograms on the samples and $\mathbf{v},\mathbf{v'}$ are histogram on the features.
In our example the samples are images and the features are the pixels of the images. We will have $n = n' = 3000$ (300 images per class for each of the 10 digits) and $d=784$, $d'=256$.
```python
import numpy as np
from scipy import ndimage
import scipy as sp
import matplotlib.pylab as pl
import ot
import scipy.io
import sys
sys.path.append('../code')
import cot
```
### Load the MNIST/USPS dataset
```python
data=sp.io.loadmat('../data/mnist.mat')
Xtot1=data['xapp'].astype(np.float32)
Ytot1=data['yapp']
d1=Xtot1.shape[1]
Ytot1[Ytot1==10]=0
data=sp.io.loadmat('../data/usps.mat')
Xtot2=(data['xapp'].astype(np.float32)+1)/2
Ytot2=data['yapp']
Ytot2-=1
d2=Xtot2.shape[1]
np.random.seed(1976)
```
```python
def get_data(x,y,nbperclass):
xr=np.zeros((0,x.shape[1]))
yr=np.zeros((0))
for i in range(np.max(y).astype(int)+1):
xi=x[y.ravel()==i,:]
idx=np.random.permutation(xi.shape[0])
xr=np.concatenate((xr,xi[idx[:nbperclass],:]),0)
yr=np.concatenate((yr,i*np.ones(nbperclass)))
return xr,yr
#%% We take 300 samples per class
nbperclass=300
xs,ys=get_data(Xtot1,Ytot1,nbperclass)
xs=xs/255
selmnist=xs.sum(0)>0
ntot=nbperclass*10
xs2=np.zeros((xs.shape[0],d1))
#xs2[:,sel1]=xs
xt,ys=get_data(Xtot2,Ytot2,nbperclass)
vs=xs.sum(axis=0) # set the weights on the features
vs/=vs.sum()
vt=xt.sum(axis=0)
vt/=vt.sum()
```
```python
ot.tic()
Ts,Tv,_,log=cot.cot_numpy(xs,xt,v1=vs,v2=vt,niter=100,log=True) # solve COOT
ot.toc()
pl.figure(1,figsize=(6,4))
pl.plot(log['cost'])
pl.title('evolution of cost (no Mass correction)')
pl.show()
ot.tic()
Tsr,Tvr,_,logr=cot.cot_numpy(xs,xt,v1=vs,v2=vt,niter=100,log=True,algo2='sinkhorn',reg2=.5e-2) # solve COOT with sinkhorn
ot.toc()
```
```python
pl.figure(2,figsize=(6,6))
pl.imshow(Ts)
pl.colorbar()
pl.show()
```
```python
Tv.shape,Ts.shape
```
((784, 256), (3000, 3000))
### Confusion matrix on the samples
We evaluate COOT's ability to find good assignments of the images (samples), i.e. whether it aligns the sample classes well between the two datasets, based on the knowledge of $\boldsymbol{\pi^{s}}$
```python
#%% confusion matrix
nbc=10
Cmat=np.zeros((nbc,nbc))
for i in range(ntot):
#print(i)
for j in range(ntot):
if Ts[i,j]:
Cmat[int(ys[i]),int(ys[j])]+=Ts[i,j]
print('Find the good class in {:.2f}% '.format(100*np.sum(np.diag(Cmat))))
#%%
pl.imshow(Cmat*10), pl.colorbar()
pl.title('Confusion matrix for COOT between samples')
pl.ylabel('Labels MNIST')
pl.xlabel('Labels USPS')
```
### Visualize the transport on the features
We propose to visualize the optimal coupling on the features $\boldsymbol{\pi^{v}}$. In order to do that we color-code each pixel of a USPS-sized image and transfer the colors to an MNIST-sized image through $\boldsymbol{\pi^{v}}$.
```python
#%%pix
dim_source=16
dim_target=28
image = np.zeros((dim_source,dim_source,3))
for i in range(dim_source):
for j in range(dim_source):
image[i,j,0]=i
image[i,j,1]=j
image[i,j,2]=dim_source/2
image=image.astype(np.float32)/dim_source
diag=1./Tv.sum(axis=1)
diag[diag==np.inf]=0
image_target = np.dot(np.diag(diag),np.dot(image.reshape((dim_source*dim_source,3)).T,Tv.T).T)
image_target[~selmnist,:]=np.nan #we remove non informative features
image_target=image_target.reshape((dim_target,dim_target,3))
diagr=1./Tvr.sum(axis=1)
diagr[diagr==np.inf]=0
image_targetr = np.dot(np.diag(diagr),np.dot(image.reshape((dim_source*dim_source,3)).T,Tvr.T).T)
image_targetr[~selmnist,:]=np.nan
image_targetr=image_targetr.reshape((dim_target,dim_target,3))
pl.figure(3,figsize=(16,32))
pl.subplot(1,2,1)
pl.imshow(image)
pl.title('source image')
pl.axis('off')
pl.subplot(1,2,2)
pl.imshow(image_target)
pl.title('Transfered image')
pl.axis('off')
pl.show()
#%%
import scipy.sparse
sTs= scipy.sparse.coo_matrix(Ts)
row=sTs.row
col=sTs.col
pl.figure(10,figsize=(14,3.5))
pl.clf()
pl.subplot(1,4,1)
pl.plot(col,row,'.',markersize=3,alpha=0.5)
#pl.spy(Tv,markersize=3,marker='.',alpha=0.5)
pl.title('$\pi^s$ matrix between samples')
pl.xlabel('USPS samples')
pl.ylabel('MNIST samples')
pl.xticks([300*i for i in range(11)],[' ']*11)
pl.yticks([300*i for i in range(11)],[]*11)
pl.axis('scaled')
pl.xlim((0,ntot))
pl.ylim((ntot,0))
pl.grid()
pl.subplot(1,4,2)
pl.imshow(Cmat*10,cmap='Blues'),
#pl.colorbar()
pl.title('Confusion matrix')
pl.ylabel('Labels MNIST')
pl.xlabel('Labels USPS')
#pl.xticks(*pl.yticks())
pl.yticks([i for i in range(10)],[i for i in range(10)])
pl.ylim((nbc-.5,-.5))
pl.xticks([i for i in range(10)],[i for i in range(10)])
pl.xlim((-.5,nbc-.5,))
pl.subplot(1,4,3)
pl.imshow(image)
pl.title('USPS colored pixels')
pl.axis('off')
pl.xlim([-6,22])
pl.ylim([-6,22])
pl.subplot(1,4,4)
pl.imshow(image_target)
pl.title("MNIST pixels through $\pi^v$")
pl.axis('off')
pl.show()
pl.savefig('./mnist_usps.png')
pl.savefig('./mnist_usps.pdf',bbox_inches='tight')
```
We observe that the spatial structure is preserved (without supervision): the pixels are transported coherently onto the center of the image
### Visualize the images after transformation through the optimal couplings
We can also visualize the images after transformation via the optimal couplings
```python
#%%
nbl,nbc=5,2
#idx_sel=np.random.randint(0,ntot,n_fig)
idx_sel=np.arange(0,ntot,nbperclass)+3
xts=xt[idx_sel,:]
xss=xs[idx_sel,:]
I=np.zeros((28*nbl,28*(nbc*2+1)))+1
for i in range(nbl):
for j in range(nbc):
I[i*28:(i+1)*28,j*28:(j+1)*28]=xss[i+j*nbl].reshape((28,28))
I[i*28+6:(i)*28+22,j*28+28*(nbc+1)+6:j*28+28*(nbc+1)+22]=xts[i+j*nbl].reshape((16,16))
pl.figure(15)
pl.clf()
pl.imshow(I,cmap='Blues')
pl.axis('off')
pl.ylim([ I.shape[0],-10])
pl.text(20,-7,'MNIST',fontsize=15)
pl.text(20+28*(nbc+1),-7,'USPS',fontsize=15)
```
```python
#%%
import scipy.sparse
sTs = scipy.sparse.coo_matrix(Ts)
row=sTs.row
col=sTs.col
pl.figure(11,figsize=(16,3.5))
pl.clf()
pl.subplot(1,5,1)
pl.imshow(I,cmap='gray')
pl.axis('off')
pl.ylim([ I.shape[0],-10])
pl.text(15,-9,'MNIST',fontsize=12)
pl.text(15+28*(nbc+1),-9,'USPS',fontsize=12)
pl.subplot(1,5,2)
pl.plot(col,row,'.',markersize=3,alpha=0.5)
#pl.spy(Tv,markersize=3,marker='.',alpha=0.5)
pl.title('$\pi^s$ matrix between samples')
pl.xlabel('USPS samples')
pl.ylabel('MNIST samples')
pl.xticks([300*i for i in range(11)],[' ']*11)
pl.yticks([300*i for i in range(11)],[]*11)
pl.axis('scaled')
pl.xlim((0,ntot))
pl.ylim((ntot,0))
pl.grid()
pl.subplot(1,5,3)
pl.imshow(image)
pl.title('USPS colored coded pixels')
pl.axis('off')
pl.xlim([-6,22])
pl.ylim([22,-6])
pl.subplot(1,5,4)
pl.imshow(image_target)
pl.title("MNIST pixels through $\pi^v$")
pl.axis('off')
#pl.show()
pl.subplot(1,5,5)
pl.imshow(image_targetr)
pl.title("MNIST pixels through entropic $\pi^v$")
pl.axis('off')
#pl.show()
pl.savefig('./mnist_usps.png')
pl.savefig('./mnist_usps.pdf',bbox_inches='tight')
#%%
import random
import PIL as pil
# build a rectangle in axes coords
left, width = .25, .5
bottom, height = .25, .5
right = left + width
top = bottom + height
def predict_barycenter(data,T):
diag=1./T.sum(axis=1)
diag[diag==np.inf]=0
return np.dot(T,data.T).T.dot(np.diag(diag))
def predict_barycenter_reverse(data,T):
diag=1./T.sum(axis=1)
diag[diag==np.inf]=0
return np.dot(T,data).T.dot(np.diag(diag)).T
random.seed(1985)
np.random.seed(1976)
n_fig=16
idx_sel=np.random.randint(0,ntot,n_fig)
xsel=xs[idx_sel,:]
xpred=np.zeros((n_fig,d2))
xpredr=np.zeros((n_fig,d2))
for i in range(n_fig):
xpred[i,:]=predict_barycenter(xsel[i,:],Tv.T)
xpredr[i,:]=predict_barycenter(xsel[i,:],Tvr.T)
cmap_g='gray'
pl.figure(figsize=(n_fig,4))
for i in range(n_fig):
ax= pl.subplot(4,n_fig,i+1)
pl.imshow(xsel[i,:].reshape((28,28)),cmap=cmap_g)
pl.axis('off')
pl.xticks(())
pl.yticks(())
#pl.xlim([-6,22])
#pl.ylim([22,-6])
if i==0:
ax.text(left-.3, 0.5*(bottom+top), 'MNIST',
horizontalalignment='right',
verticalalignment='center',
rotation='vertical',
transform=ax.transAxes)
#pl.ylabel('MNIST')
pl.subplot(4,n_fig,i+1+n_fig)
img = pil.Image.fromarray((xsel[i,:]* 255 / np.max(xsel[i,:])).reshape((28,28)) .astype('float32'))
img = img.resize((16,16))
pl.imshow(img,cmap=cmap_g)
pl.axis('off')
pl.xticks(())
pl.yticks(())
pl.xlim([-6,22])
pl.ylim([22,-6])
if i==0:
ax.text(left-.3, 0.5*(bottom+top) - 1.1, 'Resize',
horizontalalignment='right',
verticalalignment='center',
rotation='vertical',
transform=ax.transAxes)
ax = pl.subplot(4,n_fig,i+1+2*n_fig)
ax.imshow(xpred[i,:].reshape((16,16)),cmap=cmap_g)
pl.axis('off')
pl.xticks(())
pl.yticks(())
pl.xlim([-6,22])
pl.ylim([22,-6])
if i==0:
ax.text(left-.3, 0.5*(bottom+top), 'Map $\pi^v$',
horizontalalignment='right',
verticalalignment='center',
rotation='vertical',
transform=ax.transAxes)
ax= pl.subplot(4,n_fig,i+1+3*n_fig)
pl.imshow(xpredr[i,:].reshape((16,16)),cmap=cmap_g)
pl.axis('off')
pl.xticks(())
pl.yticks(())
pl.xlim([-6,22])
pl.ylim([22,-6])
if i==0:
ax.text(left-.3, 0.5*(bottom+top), 'Map reg $\pi^v$',
horizontalalignment='right',
verticalalignment='center',
rotation='vertical',
transform=ax.transAxes)
#pl.ylabel('Map reg $\pi^v$')
pl.savefig('./transp_piv_mnist.png')
pl.savefig('./transp_piv_mnist.pdf',bbox_inches='tight')
#%%
import random
import PIL as pil
def predict_barycenter(data,T):
diag=1./T.sum(axis=1)
diag[diag==np.inf]=0
return np.dot(T,data.T).T.dot(np.diag(diag))
def predict_barycenter_reverse(data,T):
diag=1./T.sum(axis=1)
diag[diag==np.inf]=0
return np.dot(T,data).T.dot(np.diag(diag)).T
random.seed(1985)
np.random.seed(1986)
n_fig=15
idx_sel=np.random.randint(0,ntot,n_fig)
xsel=xt[idx_sel,:]
xpred=np.zeros((n_fig,d1))
xpredr=np.zeros((n_fig,d1))
for i in range(n_fig):
xpred[i,:]=predict_barycenter(xsel[i,:],Tv)
xpredr[i,:]=predict_barycenter(xsel[i,:],Tvr)
pl.figure(figsize=(n_fig,4))
cmap_g='gray'
for i in range(n_fig):
ax=pl.subplot(4,n_fig,i+1)
pl.imshow(xsel[i,:].reshape((16,16)),cmap=cmap_g)
pl.axis('off')
pl.xticks(())
pl.yticks(())
pl.xlim([-6,22])
pl.ylim([22,-6])
if i==0:
ax.text(left-.3, 0.5*(bottom+top), 'USPS',
horizontalalignment='right',
verticalalignment='center',
rotation='vertical',
transform=ax.transAxes)
ax=pl.subplot(4,n_fig,i+1+n_fig)
img = pil.Image.fromarray((xsel[i,:]* 255 / np.max(xsel[i,:])).reshape((16,16)).astype('float32'))
img = img.resize((28,28))
pl.imshow(np.array(img),cmap=cmap_g)
pl.axis('off')
pl.xticks(())
pl.yticks(())
if i==0:
ax.text(left-.3, 0.5*(bottom+top) , 'Resize',
horizontalalignment='right', verticalalignment='center',rotation='vertical',
transform=ax.transAxes)
ax=pl.subplot(4,n_fig,i+1+2*n_fig)
pl.imshow(xpred[i,:].reshape((28,28)),cmap=cmap_g)
pl.axis('off')
pl.xticks(())
pl.yticks(())
if i==0:
ax.text(left-.2, 0.5*(bottom+top) , 'Map $\pi^v$',
horizontalalignment='right',
verticalalignment='center',
rotation='vertical',
transform=ax.transAxes)
ax=pl.subplot(4,n_fig,i+1+3*n_fig)
pl.imshow(xpredr[i,:].reshape((28,28)),cmap=cmap_g)
pl.axis('off')
pl.xticks(())
pl.yticks(())
if i==0:
ax.text(left-.2, 0.5*(bottom+top) , 'Map reg $\pi^v$',
horizontalalignment='right',
verticalalignment='center',
rotation='vertical',
transform=ax.transAxes)
pl.ylabel('Map reg $\pi^v$')
pl.savefig('./transp_piv_usps.png')
pl.savefig('./transp_piv_usps.pdf',bbox_inches='tight')
```
|
dea7134cf4d2759649ad2d33843825da6819559e
| 319,616 |
ipynb
|
Jupyter Notebook
|
example/coot_mnist.ipynb
|
ievred/COOT
|
61fe30dbbf798125d3c17ba6ebe1353ac745a384
|
[
"MIT"
] | null | null | null |
example/coot_mnist.ipynb
|
ievred/COOT
|
61fe30dbbf798125d3c17ba6ebe1353ac745a384
|
[
"MIT"
] | null | null | null |
example/coot_mnist.ipynb
|
ievred/COOT
|
61fe30dbbf798125d3c17ba6ebe1353ac745a384
|
[
"MIT"
] | 1 |
2022-02-21T09:36:03.000Z
|
2022-02-21T09:36:03.000Z
| 346.655098 | 65,204 | 0.928586 | true | 4,136 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.835484 | 0.826712 | 0.690704 |
__label__eng_Latn
| 0.258841 | 0.443068 |
# Solving Non Linear Systems using Newton's Method
We'll be finding the roots of the following system of equations:
$f(\bar{v},\theta)$ = 0.5$\bar{v}^{2}$ + $\sin(\theta)$ = 0
$g(\bar{v},\theta)$ = 0.5$\bar{v}^{2}$ - $\cos(\theta)$ = 0
```python
from numpy.linalg import inv
import numpy as np
import math
```
```python
def f(v_bar, theta):
return 0.5*(v_bar**2) + math.sin(theta)
def g(v_bar, theta):
return 0.5*(v_bar**2) - math.cos(theta)
```
The Jacobian Matrix is calculated as follows:
\begin{equation}
J =
\begin{bmatrix}
\frac{df(\bar{v},\theta)}{d\bar{v}} & \frac{df(\bar{v},\theta)}{d\theta} \\
\frac{dg(\bar{v},\theta)}{d\bar{v}} & \frac{dg(\bar{v},\theta)}{d\theta} \\
\end{bmatrix}
\end{equation}
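For this particular system the derivatives are immediate, so the Jacobian evaluates to

\begin{equation}
J(\bar{v},\theta) =
\begin{bmatrix}
\bar{v} & \cos(\theta) \\
\bar{v} & \sin(\theta) \\
\end{bmatrix}
\end{equation}

which is exactly what the function below returns.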
```python
def jacobian(v_bar,theta):
array = np.array([[v_bar, math.cos(theta)],[v_bar, math.sin(theta)]])
return array
```
Assuming the initial point as
\begin{equation}
x_{0} =
\begin{bmatrix}
1 \\
0 \\
\end{bmatrix}
\end{equation}
(we do not start at $\bar{v}=0$, because the Jacobian is singular there)
And writing the iterate at step $n$ as
\begin{equation}
x_{n} =
\begin{bmatrix}
\bar{v}_{n} \\
\theta_{n} \\
\end{bmatrix}
\end{equation}
The next point is \begin{equation}
x_{n+1} =
x_{n} - J(\bar{v}_{n},\theta_{n})^{-1}F(\bar{v}_{n},\theta_{n})
\end{equation}
where \begin{equation}
F(\bar{v}_{n},\theta_{n}) =
\begin{bmatrix}
f(\bar{v}_{n},\theta_{n}) \\
g(\bar{v}_{n},\theta_{n}) \\
\end{bmatrix}
\end{equation}
```python
def next_x(old_x):
inverse_jacobian = inv(jacobian(old_x[0][0],old_x[1][0]))
F = np.array([[f(old_x[0][0],old_x[1][0])],[g(old_x[0][0],old_x[1][0])]])
next_x = old_x - np.matmul(inverse_jacobian,F)
return next_x
```
Running 10 iterations of the algorithm
```python
all_x = []
x = np.array([[1],[0]])
all_x.append(x)
for _ in range(10):
x = next_x(x)
all_x.append(x)
```
```python
final_x = np.array(all_x).reshape(11,2)
```
```python
print('Final Solution:',all_x[10][0][0],',',all_x[10][1][0])
```
Final Solution: 1.189207115002721 , -0.7853981633974483
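As a quick analytic check of this result (a verification added for reference): subtracting the two equations gives

\begin{equation}
f - g = \sin(\theta) + \cos(\theta) = 0 \;\Rightarrow\; \theta = -\frac{\pi}{4}, \qquad \frac{1}{2}\bar{v}^{2} = \cos\!\left(-\frac{\pi}{4}\right) = \frac{\sqrt{2}}{2} \;\Rightarrow\; \bar{v} = 2^{1/4} \approx 1.18921,
\end{equation}

which matches the values the iteration converges to.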
# Visualizing the convergence of the solution to the true solution
```python
import matplotlib.pyplot as plt
plt.figure(figsize=(12,6))
plt.plot(range(11),final_x[:,0])
plt.title('Convergence of the Value of v bar')
plt.axhline(1.18920712, color='red')
plt.show()
```
```python
import matplotlib.pyplot as plt
plt.figure(figsize=(12,6))
plt.plot(range(11),final_x[:,1])
plt.title('Convergence of the Value of theta')
plt.axhline(-0.78539816, color='red')
plt.show()
```
|
1f102dd252c3b9629f3553da866ada9ca6eb978e
| 39,067 |
ipynb
|
Jupyter Notebook
|
Newton-Method-of-solving-non-linear-systems.ipynb
|
sohitmiglani/Practical-Simulations-and-Social-Networks
|
5b6741794004d8347ecfe90f21ea5828b174a71c
|
[
"MIT"
] | 2 |
2019-01-16T13:36:05.000Z
|
2020-09-23T19:25:37.000Z
|
Newton-Method-of-solving-non-linear-systems.ipynb
|
sohitmiglani/Practical-Simulations-and-Social-Networks
|
5b6741794004d8347ecfe90f21ea5828b174a71c
|
[
"MIT"
] | null | null | null |
Newton-Method-of-solving-non-linear-systems.ipynb
|
sohitmiglani/Practical-Simulations-and-Social-Networks
|
5b6741794004d8347ecfe90f21ea5828b174a71c
|
[
"MIT"
] | null | null | null | 146.868421 | 17,688 | 0.896869 | true | 892 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.969785 | 0.917303 | 0.889587 |
__label__eng_Latn
| 0.355496 | 0.905142 |