```python
import numpy as np
import cv2
import matplotlib.pyplot as plt
import os
import argparse
import glob
import torch
import torch.nn as nn
from torch.autograd import Variable

from DnCNN.models import DnCNN
from DnCNN.utils import *
```

# Padding

Instead of padding an input image of dimensions $(w, h)$ so that its padded output has dimensions $(2w-1, 2h-1)$, it is preferable to find the next power of 2 that is larger than $(2w-1, 2h-1)$. This is because the discrete Fourier transform (DFT) performs better on inputs whose dimensions factor into small primes (e.g. 2).

"The convolution of two functions with non-zero values at indices from 0 to N−1, i.e. N values, will have non-zero values at indices 0 to 2(N−1), i.e. 2N−1 values. Thus, it suffices to use a DFT of length 2N−1 to avoid cyclical overlap. However, if N is chosen so as to factor into small primes, this property is lost if a transform with length 2N−1 is performed; in this case, it is far more efficient to pad to 2N instead."

```python
"""
Returns the nearest power of 2 that is larger than 2N. This optimization relies on the fact that
the Discrete Fourier Transform leverages small prime factors for speed up.
" n is the input dimension that we'd like to pad
"""
def compute_padding(n):
    return int(np.power(2, np.ceil(np.log2(2 * n - 1))))

test_n = 3
assert compute_padding(test_n) == 8, 'The padding calculations are incorrect.'

"""
Returns the input image padded so that its dimensions are the next power of 2 larger than 2w and 2h.
" input is the image to pad
"""
def pad(input):
    h, w = compute_padding(input.shape[0]), compute_padding(input.shape[1])
    padded = cv2.copyMakeBorder(input, h // 2, h // 2, w // 2, w // 2, cv2.BORDER_CONSTANT)
    return padded

test_img = cv2.imread('./img/psf.tif', cv2.IMREAD_GRAYSCALE)
assert pad(test_img).shape == (5296, 5696), 'The padded dimensions are incorrect.'

"""
Returns the cropped version of a padded image
" input is the padded image
" original_shape is the original dimensions of an image padded with pad()
"""
def crop(input, original_shape):
    diff = np.subtract(input.shape, original_shape)
    assert(np.all(diff >= 0))
    h, w = original_shape
    h_pad, w_pad = diff // 2
    return input[h_pad:(input.shape[0] - h_pad), w_pad:(input.shape[1] - w_pad)]

assert crop(pad(test_img), test_img.shape).shape == test_img.shape, 'Cropped dimensions are incorrect.'

"""
Resizes the input image
" img is the image to resize
" factor is the resizing factor
"""
def resize(img, factor):
    num = int(-np.log2(factor))
    for i in range(num):
        img = 0.25 * (img[::2,::2,...] + img[1::2,::2,...] + img[::2,1::2,...] + img[1::2,1::2,...])
    return img
```

# Denoiser

Plug-and-play alternating direction method of multipliers (PnP ADMM) uses a denoising algorithm as the proximal operator in the update of the splitting variable. Here, the denoising convolutional neural network (DnCNN) is used as the denoiser. It is preferred over the block-matching 3D (BM3D) algorithm for both speed and reconstruction quality, although, unlike BM3D, it lacks explicit control over the denoising strength. Additionally, the network can make use of GPU resources thanks to PyTorch.
```python
class Denoiser():
    def __init__(self):
        n_layers = 17
        net = DnCNN(channels=1, num_of_layers=n_layers)
        device_ids = [0]
        self.model = nn.DataParallel(net, device_ids=device_ids).cuda()
        model_path = './DnCNN/logs/DnCNN-S-15/net.pth'
        self.model.load_state_dict(torch.load(model_path))
        self.model.eval()

    def denoise(self, img):
        img = np.float32(img[:, :, 0])
        img = np.expand_dims(img, 0)
        img = np.expand_dims(img, 1)
        noisy = Variable(torch.Tensor(img).cuda())
        with torch.no_grad():
            result = torch.clamp(noisy - self.model(noisy), 0., 1.)
            result = torch.squeeze(result.data.cpu(), 0).permute(1, 2, 0)
        return result.numpy()

# Run this to visualize the denoiser's performance.
# dncnn = Denoiser()
# noisy = cv2.imread('noisy.jpg')
# dncnn.denoise(noisy)
```

# PnP ADMM

Image reconstruction from DiffuserCam images and a measured PSF can be stated as the following optimization problem:

\begin{equation}
\underset{x,z}{\operatorname{argmin}}\; \mathcal{L}_{p}(x,z,\lambda) = \underset{x,z}{\operatorname{argmin}}\; \frac{1}{2}\|Ax-b\|^{2}_{2} - \log p(z) + \frac{\rho}{2}\left\|x-z+\frac{\lambda}{\rho}\right\|^{2}_{2},
\end{equation}

where $\mathcal{L}_{p}(x,z,\lambda)$ is the scaled form of the augmented Lagrangian, $A$ is the convolution operator defined by the point spread function (PSF), $x$ is the reconstruction we'd like to recover, $b$ is the DiffuserCam image, $\lambda$ is the Lagrange multiplier, and $\rho$ is an arbitrarily chosen constant.

## Proximal operator updates

The iterative updates to $x, z, u$ are computed below; the $x$ update follows from setting $\nabla_{x} \mathcal{L}_{p}(x,z,\lambda) = 0$, the $z$ update applies the denoiser $\mathcal{D}$, and $u$ is the scaled dual variable:

\begin{equation}
x^{k+1} = (A^{T}A + \rho I)^{-1}(A^{T}b + \rho{}z^{k} - \lambda^{k})
\end{equation}

\begin{equation}
z^{k+1} = \mathcal{D}(x^{k+1} + u^{k}), \qquad u = \frac{\lambda}{\rho}
\end{equation}

\begin{equation}
u^{k+1} = u^{k} + x^{k+1} - z^{k+1}
\end{equation}

## Optimization

Explicitly computing $A^{T}A$ and $A^{T}b$ in the proximal update to $x$ can be expensive, especially when working with large images (and hence large matrices). Because the forward operator $A$ can be expressed as a linear convolution, we can efficiently compute the $x$ update as follows:

\begin{equation}
x^{k+1} = \mathcal{F}^{-1}\left(\frac{\overline{\mathcal{F}(A)}\,\mathcal{F}(b) + \rho \mathcal{F}(z - u)}{|\mathcal{F}(A)|^{2} + \rho}\right)
\end{equation}

Note that the image $b$ needs to be padded (and FFT shifted) before convolution. The final output must also be cropped to remove the excess padding.

```python
"""
Solves argmin_x (1/2)||Ax - b||^2_2 via PnP ADMM.
" A is the 2D PSF image
" b is the measurement
"""
def pnp_admm(b, A, rho=1, num_iter=5, debug=False):
    orig_dims = A.shape
    dncnn = Denoiser()

    # Sets the initial state of optimization variables x, z, u.
    x = pad(np.ones(b.shape) * 0.5)
    z = x.copy()
    u = np.zeros(x.shape)

    # Precomputes matrix math.
    AtA = np.abs(np.fft.fft2(pad(A))) ** 2
    Atb = cv2.filter2D(pad(b), -1, A, borderType=cv2.BORDER_WRAP)

    for i in np.arange(num_iter):
        # Proximal update for x := x^{t+1} = (AtA + rho)^{-1} (Atb + rho * z - lambda)
        num = np.fft.fft2(Atb + rho * (z - u))
        denom = (AtA + rho)
        x = np.real((np.fft.ifft2(num / denom)))

        # Proximal update for z := D(x+u)
        z = dncnn.denoise(np.expand_dims(x + u, axis=2))
        z = np.squeeze(z)

        # Proximal update for u.
        u += (x - z)

        if debug:
            plt.figure()
            plt.title(f'PnP ADMM iteration {i}')
            plt.imshow(crop(np.clip(x, 0, 1) * 255, b.shape), cmap='gray')
            plt.show()

    # Clamps the RGB image between 0 and 1.
    x = np.clip(x, 0, 1)
    return crop(x, b.shape)
```

# Reconstruction

The code below loads the PSF and DiffuserCam image and outputs the PnP ADMM reconstructed image.
```python """ Utility function to threshold the background in the PSF image. This is helpful because the PSF measurements in our lab setup are noisy. " psf is the PSF image to filter """ def filter_psf(psf, threshold=0.3): psf[psf < (threshold * 255)] = 0 return psf """ Reconstructs the scene. " psf is the filename of the PSF iamge " imgs is an array of DiffuserCam images " downsample_factor is the amount to downsample the PSF and DiffuserCam images " filter determines whether to filter the input PSF image " show_plots determines whether image plots will be displayed """ def reconstruct(psf, imgs, downsample_factor=(1.0/8.0), rho=1, num_iter=3, filter=True, show_plots=False, rgb=True): # Read in the PSF image. # psf = cv2.imread(psf, cv2.IMREAD_GRAYSCALE).astype(float) if filter: psf = filter_psf(psf) psf = resize(psf, downsample_factor) / 255 outputs = [] # Iterate through the images and reconstruct them one at a time. for img in imgs: # img = cv2.imread(img).astype(float) if rgb else cv2.imread(img, cv2.IMREAD_GRAYSCALE).astype(float) img = resize(img, downsample_factor) / 255 if rgb: r = pnp_admm(img[:,:,0], psf, rho, num_iter) g = pnp_admm(img[:,:,1], psf, rho, num_iter) b = pnp_admm(img[:,:,2], psf, rho, num_iter) reconstruction = np.dstack((r, g, b)) else: reconstruction = pnp_admm(img, psf, rho, num_iter) # reconstruction *= 255 outputs.append(reconstruction) if show_plots: plt.figure(figsize=(30, 12)) plt.subplot(1, 3, 1) plt.title('PSF') plt.imshow(psf, cmap='gray') plt.subplot(1, 3, 2) plt.title('DiffuserCam image') plt.imshow(img, cmap='gray') if not rgb else plt.imshow(img) plt.subplot(1, 3, 3) plt.title(f'Reconstruction (rho={rho}, n_iter={num_iter})') plt.imshow(reconstruction, cmap='gray') if not rgb else plt.imshow(reconstruction) # plt.show() return outputs # silence = reconstruct('img/1210/psf_1.tiff', ['img/1210/marker.tiff', 'img/1210/textbook.tiff', 'img/1210/textbook2.tiff', 'img/1210/box.tiff', 'img/1210/thor.tiff', 'img/1210/hand.tiff'], filter=True, show_plots=True, rgb=True) ## TAPE PSF: ## Marker images (OK, but not great) # c = reconstruct('img/1210/psf_1.tiff', ['img/1210/double_marker.tiff', 'img/1210/marker.tiff', 'img/1210/marker_close.tiff'], filter=True, show_plots=True, rgb=True) ## Textbook images (Not great) # c = reconstruct('img/1210/psf_1.tiff', ['img/1210/textbook2.tiff', 'img/1210/textbook.tiff', 'img/1210/textbook_last.tiff'], filter=True, show_plots=True, rgb=True) ## Hand images (terrible) # c = reconstruct('img/1210/psf_1.tiff', ['img/1210/hand.tiff', 'img/1210/hand_best.tiff', 'img/1210/hand2.tiff', 'img/1210/hand_last.tiff', 'img/1210/hand_bright.tiff'], filter=True, show_plots=True, rgb=True) ## Thor images (barely recognizable) # c = reconstruct('img/1210/psf_1.tiff', ['img/1210/thor.tiff', 'img/1210/thor2.tiff', 'img/1210/thor_close.tiff', 'img/1210/thor_last.tiff', 'img/1210/thor_last2.tiff'], filter=True, show_plots=True, rgb=True) ## Chart images (Terrible) # c = reconstruct('img/1210/psf_1.tiff', ['img/1210/chart.tiff', 'img/1210/chart_far.tiff'], filter=True, show_plots=True, rgb=True) ## Misc # c = reconstruct('img/1210/psf_1.tiff', ['img/1210/small_text.tiff', 'img/1210/box.tiff', 'img/1210/bright.tiff', 'img/1210/dim.tiff'], filter=True, show_plots=True, rgb=True) # b = reconstruct('img/1210/psf.tiff', ['img/1210/marker.tiff'], filter=True, show_plots=True, rgb=True) # a = reconstruct('img/1210/psf_10.tiff', ['img/1210/marker.tiff'], filter=True, show_plots=True, rgb=True) # a = reconstruct('img/1210/psf_1_close.tiff', 
['img/1210/marker.tiff', 'img/1210/marker_close.tiff'], filter=True, show_plots=True, rgb=True) # reconstruct('img/batch2/psf.tiff', ['img/batch2/marker.tiff', 'img/batch2/grapes.tiff'], show_plots=True) # reconstruct('img/psf.tif', ['img/measurement.tif'], filter=False, show_plots=True, rgb=False) # reconstruct('img/batch2/psf.tiff', ['img/batch2/marker.tiff'], filter=False, show_plots=True, rgb=True) # reconstruct('img/batch2/psf.tiff', ['img/batch2/marker.tiff'], filter=True, show_plots=True, rgb=True, downsample_factor=(1.0/4)) # reconstruct('img/batch2/psf.tiff', ['img/batch2/marker_edit.tiff'], filter=True, show_plots=True, rgb=True) # reconstruct('img/batch1/psf.tiff', ['img/batch1/thor_box.tiff'], filter=True, show_plots=True) # reconstruct('img/batch2/psf.tiff', ['img/batch2/thor.tiff'], filter=True, show_plots=True) ``` ```python def loadData(show_im=True): psf = cv2.imread('img/psf.tif', cv2.IMREAD_GRAYSCALE).astype(float) data = cv2.imread('img/measurement.tif', cv2.IMREAD_GRAYSCALE).astype(float) print(np.max(psf), np.max(data)) """In the picamera, there is a non-trivial background (even in the dark) that must be subtracted""" bg = np.mean(psf[5:15,5:15]) psf -= bg data -= bg print(np.max(psf), np.max(data)) """Resize to a more manageable size to do reconstruction on. Because resizing is downsampling, it is subject to aliasing (artifacts produced by the periodic nature of sampling). Demosaicing is an attempt to account for/reduce the aliasing caused. In this application, we do the simplest possible demosaicing algorithm: smoothing/blurring the image with a box filter""" def resize(img, factor): num = int(-np.log2(factor)) for i in range(num): img = 0.25*(img[::2,::2,...]+img[1::2,::2,...]+img[::2,1::2,...]+img[1::2,1::2,...]) return img f = 0.25 psf = resize(psf, f) data = resize(data, f) """Now we normalize the images so they have the same total power. Technically not a necessary step, but the optimal hyperparameters are a function of the total power in the PSF (among other things), so it makes sense to standardize it""" psf /= np.linalg.norm(psf.ravel()) data /= np.linalg.norm(data.ravel()) print(np.max(psf), np.max(data)) if show_im: fig1 = plt.figure() plt.imshow(psf, cmap='gray') plt.title('PSF') # display.display(fig1) fig2 = plt.figure() plt.imshow(data, cmap='gray') plt.title('Raw data') # display.display(fig2) return psf, data # Simulation. downsample = 1.0/8.0 # Loads image and PSF psf, x = loadData(show_im=False) y = pnp_admm(x, psf, 1, 3) y = cv2.resize(y, (x.shape[1], x.shape[0])) print(psf.shape, x.shape, y.shape) plt.figure() plt.title("Clear image") plt.imshow(y) ``` ```python # Simulation. downsample = 1.0/8.0 # Loads image and PSF psf = cv2.imread('img/psf.tif', cv2.IMREAD_GRAYSCALE).astype(float) x = cv2.imread('img/measurement.tif', cv2.IMREAD_GRAYSCALE).astype(float) y = reconstruct(psf, [x], downsample_factor=downsample, filter=False, show_plots=False, rgb=False) y[0] = cv2.resize(y[0], (x.shape[1], x.shape[0])) print(psf.shape, x.shape, y[0].shape) plt.figure() plt.title("Clear image") plt.imshow(y[0]) # plt.figure() # plt.title("PSF") # plt.imshow(psf) # Applies PSF kernel to blur the image---this generates synthetic DiffuserCam image. 
# psf = resize(psf, downsample) x_tilde = cv2.filter2D(y[0], -1, psf, borderType=cv2.BORDER_WRAP) # x_tilde = cv2.resize(x_tilde, (x.shape[1], x.shape[0])) plt.figure() plt.title("Artificially blurred image") plt.imshow(x_tilde) plt.figure() plt.title("Original blurred image") plt.imshow(x) # TODO: Figure out the difference between these two. Might need to clamp? # Applies added white Gaussian noise to the image. # Runs PnP-ADMM to perform blind deconvolution. This tests the performance of the algorithm. print(psf.shape, x_tilde.shape) # y_tilde = reconstruct(psf, [x_tilde], filter=False, show_plots=False, rgb=False) # plt.figure() # plt.title("Artificial reconstruction") # plt.imshow(y_tilde[0]) ``` ```python """ Artificially blurs a clear image using the PSF, then does blind deconvolution to retrieve the clear image back """ def simulation(psf, img): blur = cv2.filter2D(img, -1, psf, borderType=cv2.BORDER_WRAP) print(blur.shape) plt.figure() plt.title("Synthetically blurred image") plt.imshow(blur) y = reconstruct(psf, [blur], filter=False, show_plots=False, rgb=False) plt.figure() plt.title("Reconstructed image") plt.imshow(y[0]) plt.figure() plt.title("Original image") plt.imshow(img) print(img.shape, y[0].shape) psf = cv2.imread('img/psf.tif', cv2.IMREAD_GRAYSCALE).astype(float) y = cv2.imread('img/test/USAF.jpg', cv2.IMREAD_GRAYSCALE).astype(float) y = cv2.resize(y, (psf.shape[1], psf.shape[0])) simulation(psf, y) ```
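The simulation cells above mention applying added white Gaussian noise only in a comment. A minimal sketch of that step is given below, assuming the synthetic measurement is scaled to $[0, 1]$; the helper name `add_gaussian_noise` and the noise level `sigma_n` are illustrative choices rather than part of the pipeline above.

```python
# Illustrative helper (assumed, not defined elsewhere in this notebook): adds white
# Gaussian noise to an image in [0, 1] and clips the result back into that range.
def add_gaussian_noise(img, sigma_n=0.01):
    noisy = img + np.random.normal(0, sigma_n, img.shape)
    return np.clip(noisy, 0, 1)

# Example usage with the synthetic measurement from the cells above:
# x_tilde_noisy = add_gaussian_noise(x_tilde)
# y_tilde = reconstruct(psf, [x_tilde_noisy], filter=False, show_plots=False, rgb=False)
```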
# Basis for grayscale images ## Introduction Consider the set of real-valued matrices of size $M\times N$; we can turn this into a vector space by defining addition and scalar multiplication in the usual way: \begin{align} \mathbf{A} + \mathbf{B} &= \left[ \begin{array}{ccc} a_{0,0} & \dots & a_{0,N-1} \\ \vdots & & \vdots \\ a_{M-1,0} & \dots & b_{M-1,N-1} \end{array} \right] + \left[ \begin{array}{ccc} b_{0,0} & \dots & b_{0,N-1} \\ \vdots & & \vdots \\ b_{M-1,0} & \dots & b_{M-1,N-1} \end{array} \right] \\ &= \left[ \begin{array}{ccc} a_{0,0}+b_{0,0} & \dots & a_{0,N-1}+b_{0,N-1} \\ \vdots & & \vdots \\ a_{M-1,0}+b_{M-1,0} & \dots & a_{M-1,N-1}+b_{M-1,N-1} \end{array} \right] \\ \\ \\ \beta\mathbf{A} &= \left[ \begin{array}{ccc} \beta a_{0,0} & \dots & \beta a_{0,N-1} \\ \vdots & & \vdots \\ \beta a_{M-1,0} & \dots & \beta a_{M-1,N-1} \end{array} \right] \end{align} As a matter of fact, the space of real-valued $M\times N$ matrices is completely equivalent to $\mathbb{R}^{MN}$ and we can always "unroll" a matrix into a vector. Assume we proceed column by column; then the matrix becomes $$ \mathbf{a} = \mathbf{A}[:] = [ \begin{array}{ccccccc} a_{0,0} & \dots & a_{M-1,0} & a_{0,1} & \dots & a_{M-1,1} & \ldots & a_{0, N-1} & \dots & a_{M-1,N-1} \end{array}]^T $$ Although the matrix and vector forms represent exactly the same data, the matrix form allows us to display the data in the form of an image. Assume each value in the matrix is a grayscale intensity, where zero is black and 255 is white; for example we can create a checkerboard pattern of any size with the following function: ```python # usual python bookkeeping... %pylab inline import matplotlib import matplotlib.pyplot as plt import numpy as np import IPython from IPython.display import Image import math from __future__ import print_function # ensure all images will be grayscale gray(); ``` Populating the interactive namespace from numpy and matplotlib <Figure size 432x288 with 0 Axes> #### (?1) `gray() # ensure all images will be grayscale`? What? #### (R1) ```python gray.__module__ ``` 'matplotlib.pyplot' ```python help(gray) ``` Help on function gray in module matplotlib.pyplot: gray() Set the colormap to "gray". This changes the default colormap as well as the colormap of the current image if there is one. See ``help(colormaps)`` for more information. #### (!1) Worth digging deep: `colormaps` ```python help(colormaps) ``` Help on function colormaps in module matplotlib.pyplot: colormaps() Matplotlib provides a number of colormaps, and others can be added using :func:`~matplotlib.cm.register_cmap`. This function documents the built-in colormaps, and will also return a list of all registered colormaps if called. You can set the colormap for an image, pcolor, scatter, etc, using a keyword argument:: imshow(X, cmap=cm.hot) or using the :func:`set_cmap` function:: imshow(X) pyplot.set_cmap('hot') pyplot.set_cmap('jet') In interactive mode, :func:`set_cmap` will update the colormap post-hoc, allowing you to see which one works best for your data. All built-in colormaps can be reversed by appending ``_r``: For instance, ``gray_r`` is the reverse of ``gray``. 
There are several common color schemes used in visualization: Sequential schemes for unipolar data that progresses from low to high Diverging schemes for bipolar data that emphasizes positive or negative deviations from a central value Cyclic schemes for plotting values that wrap around at the endpoints, such as phase angle, wind direction, or time of day Qualitative schemes for nominal data that has no inherent ordering, where color is used only to distinguish categories Matplotlib ships with 4 perceptually uniform color maps which are the recommended color maps for sequential data: ========= =================================================== Colormap Description ========= =================================================== inferno perceptually uniform shades of black-red-yellow magma perceptually uniform shades of black-red-white plasma perceptually uniform shades of blue-red-yellow viridis perceptually uniform shades of blue-green-yellow ========= =================================================== The following colormaps are based on the `ColorBrewer <https://colorbrewer2.org>`_ color specifications and designs developed by Cynthia Brewer: ColorBrewer Diverging (luminance is highest at the midpoint, and decreases towards differently-colored endpoints): ======== =================================== Colormap Description ======== =================================== BrBG brown, white, blue-green PiYG pink, white, yellow-green PRGn purple, white, green PuOr orange, white, purple RdBu red, white, blue RdGy red, white, gray RdYlBu red, yellow, blue RdYlGn red, yellow, green Spectral red, orange, yellow, green, blue ======== =================================== ColorBrewer Sequential (luminance decreases monotonically): ======== ==================================== Colormap Description ======== ==================================== Blues white to dark blue BuGn white, light blue, dark green BuPu white, light blue, dark purple GnBu white, light green, dark blue Greens white to dark green Greys white to black (not linear) Oranges white, orange, dark brown OrRd white, orange, dark red PuBu white, light purple, dark blue PuBuGn white, light purple, dark green PuRd white, light purple, dark red Purples white to dark purple RdPu white, pink, dark purple Reds white to dark red YlGn light yellow, dark green YlGnBu light yellow, light green, dark blue YlOrBr light yellow, orange, dark brown YlOrRd light yellow, orange, dark red ======== ==================================== ColorBrewer Qualitative: (For plotting nominal data, `.ListedColormap` is used, not `.LinearSegmentedColormap`. Different sets of colors are recommended for different numbers of categories.) 
* Accent * Dark2 * Paired * Pastel1 * Pastel2 * Set1 * Set2 * Set3 A set of colormaps derived from those of the same name provided with Matlab are also included: ========= ======================================================= Colormap Description ========= ======================================================= autumn sequential linearly-increasing shades of red-orange-yellow bone sequential increasing black-white color map with a tinge of blue, to emulate X-ray film cool linearly-decreasing shades of cyan-magenta copper sequential increasing shades of black-copper flag repetitive red-white-blue-black pattern (not cyclic at endpoints) gray sequential linearly-increasing black-to-white grayscale hot sequential black-red-yellow-white, to emulate blackbody radiation from an object at increasing temperatures jet a spectral map with dark endpoints, blue-cyan-yellow-red; based on a fluid-jet simulation by NCSA [#]_ pink sequential increasing pastel black-pink-white, meant for sepia tone colorization of photographs prism repetitive red-yellow-green-blue-purple-...-green pattern (not cyclic at endpoints) spring linearly-increasing shades of magenta-yellow summer sequential linearly-increasing shades of green-yellow winter linearly-increasing shades of blue-green ========= ======================================================= A set of palettes from the `Yorick scientific visualisation package <https://dhmunro.github.io/yorick-doc/>`_, an evolution of the GIST package, both by David H. Munro are included: ============ ======================================================= Colormap Description ============ ======================================================= gist_earth mapmaker's colors from dark blue deep ocean to green lowlands to brown highlands to white mountains gist_heat sequential increasing black-red-orange-white, to emulate blackbody radiation from an iron bar as it grows hotter gist_ncar pseudo-spectral black-blue-green-yellow-red-purple-white colormap from National Center for Atmospheric Research [#]_ gist_rainbow runs through the colors in spectral order from red to violet at full saturation (like *hsv* but not cyclic) gist_stern "Stern special" color table from Interactive Data Language software ============ ======================================================= A set of cyclic color maps: ================ ================================================= Colormap Description ================ ================================================= hsv red-yellow-green-cyan-blue-magenta-red, formed by changing the hue component in the HSV color space twilight perceptually uniform shades of white-blue-black-red-white twilight_shifted perceptually uniform shades of black-blue-white-red-black ================ ================================================= Other miscellaneous schemes: ============= ======================================================= Colormap Description ============= ======================================================= afmhot sequential black-orange-yellow-white blackbody spectrum, commonly used in atomic force microscopy brg blue-red-green bwr diverging blue-white-red coolwarm diverging blue-gray-red, meant to avoid issues with 3D shading, color blindness, and ordering of colors [#]_ CMRmap "Default colormaps on color images often reproduce to confusing grayscale images. The proposed colormap maintains an aesthetically pleasing color image that automatically reproduces to a monotonic grayscale with discrete, quantifiable saturation levels." 
[#]_ cubehelix Unlike most other color schemes cubehelix was designed by D.A. Green to be monotonically increasing in terms of perceived brightness. Also, when printed on a black and white postscript printer, the scheme results in a greyscale with monotonically increasing brightness. This color scheme is named cubehelix because the (r, g, b) values produced can be visualised as a squashed helix around the diagonal in the (r, g, b) color cube. gnuplot gnuplot's traditional pm3d scheme (black-blue-red-yellow) gnuplot2 sequential color printable as gray (black-blue-violet-yellow-white) ocean green-blue-white rainbow spectral purple-blue-green-yellow-orange-red colormap with diverging luminance seismic diverging blue-white-red nipy_spectral black-purple-blue-green-yellow-red-white spectrum, originally from the Neuroimaging in Python project terrain mapmaker's colors, blue-green-yellow-brown-white, originally from IGOR Pro turbo Spectral map (purple-blue-green-yellow-orange-red) with a bright center and darker endpoints. A smoother alternative to jet. ============= ======================================================= The following colormaps are redundant and may be removed in future versions. It's recommended to use the names in the descriptions instead, which produce identical output: ========= ======================================================= Colormap Description ========= ======================================================= gist_gray identical to *gray* gist_yarg identical to *gray_r* binary identical to *gray_r* ========= ======================================================= .. rubric:: Footnotes .. [#] Rainbow colormaps, ``jet`` in particular, are considered a poor choice for scientific visualization by many researchers: `Rainbow Color Map (Still) Considered Harmful <https://ieeexplore.ieee.org/document/4118486/?arnumber=4118486>`_ .. [#] Resembles "BkBlAqGrYeOrReViWh200" from NCAR Command Language. See `Color Table Gallery <https://www.ncl.ucar.edu/Document/Graphics/color_table_gallery.shtml>`_ .. [#] See `Diverging Color Maps for Scientific Visualization <http://www.kennethmoreland.com/color-maps/>`_ by Kenneth Moreland. .. [#] See `A Color Map for Effective Black-and-White Rendering of Color-Scale Images <https://www.mathworks.com/matlabcentral/fileexchange/2662-cmrmap-m>`_ by Carey Rappaport ```python # let's create a checkerboard pattern SIZE = 4 img = np.zeros((SIZE, SIZE)) for n in range(0, SIZE): for m in range(0, SIZE): if (n & 0x1) ^ (m & 0x1): ## recall that ^ is the exclusive or #img[n, m] = 255 img[m, n] = 255 # now display the matrix as an image plt.matshow(img); ``` ```python SIZE = 4 img = np.zeros((SIZE, SIZE)) img.dtype # float64, better uint8 for images ``` dtype('float64') ```python np.uint8 ``` numpy.uint8 `m & 0x1` and `n & 0x1` tests the parity: if `m` is odd, then `m & 0x1` outputs `1` (or `True`); if `m` even, output `0`. Using XOR after that guarantees that only when exactly one of `m` and `n` is odd will the case be colored white. 
This implies that every time one crosses from one case to any of its neighboring cases, the color changes Note how the axes of `plt.matshow()` are arranged diff from that of `plt.imshow()` help(plt.matshow) ```python plt.imshow(img); ``` Given the equivalence between the space of $M\times N$ matrices and $\mathbb{R}^{MN}$ we can easily define the inner product between two matrices in the usual way: $$ \langle \mathbf{A}, \mathbf{B} \rangle = \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} a_{m,n} b_{m, n} $$ (where we have neglected the conjugation since we'll only deal with real-valued matrices); in other words, we can take the inner product between two matrices as the standard inner product of their unrolled versions. The inner product allows us to define orthogonality between images and this is rather useful since we're going to explore a couple of bases for this space. ## Actual images Conveniently, using IPython, we can read images from disk in any given format and convert them to numpy arrays; let's load and display for instance a JPEG image: img = np.array(plt.imread('cameraman.jpg'), dtype=int) plt.matshow(img);--------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-9-37d88f4789f6> in <module>() ----> 1 img = np.array(plt.imread('cameraman.jpg'), dtype=int) 2 plt.matshow(img); /home/phunc20/.virtualenvs/dsp-py2.7/lib/python2.7/site-packages/matplotlib/pyplot.pyc in imread(*args, **kwargs) 2371 @docstring.copy_dedent(_imread) 2372 def imread(*args, **kwargs): -> 2373 return _imread(*args, **kwargs) 2374 2375 /home/phunc20/.virtualenvs/dsp-py2.7/lib/python2.7/site-packages/matplotlib/image.pyc in imread(fname, format) 1351 raise ValueError('Only know how to handle extensions: %s; ' 1352 'with Pillow installed matplotlib can handle ' -> 1353 'more images' % list(handlers)) 1354 return im 1355 ValueError: Only know how to handle extensions: [u'png']; with Pillow installed matplotlib can handle more images!pip install pillow ```python img = np.array(plt.imread('cameraman.jpg'), dtype=int) plt.matshow(img); ``` The image is a $64\times 64$ low-resolution version of the famous "cameraman" test picture. Out of curiosity, we can look at the first column of this image, which is is a $64×1$ vector: ```python img[:,0] ``` array([156, 157, 157, 152, 154, 155, 151, 157, 152, 155, 158, 159, 159, 160, 160, 161, 155, 160, 161, 161, 164, 162, 160, 162, 158, 160, 158, 157, 160, 160, 159, 158, 163, 162, 162, 157, 160, 114, 114, 103, 88, 62, 109, 82, 108, 128, 138, 140, 136, 128, 122, 137, 147, 114, 114, 144, 112, 115, 117, 131, 112, 141, 99, 97]) The values are integers between zero and 255, meaning that each pixel is encoded over 8 bits (or 256 gray levels). ## The canonical basis The canonical basis for any matrix space $\mathbb{R}^{M\times N}$ is the set of "delta" matrices where only one element equals to one while all the others are 0. Let's call them $\mathbf{E}_n$ with $0 \leq n < MN$. 
Here is a function to create the canonical basis vector given its index: ```python def canonical(n, M=5, N=10): e = np.zeros((M, N)) #e[(n % M), int(n / M)] = 1 e[n % M, n // M] = 1 return e ``` ```python !python --version ``` Python 3.8.2 ```python 10 / 3, 10 // 3, int(10/3) ``` (3.3333333333333335, 3, 3) Here are some basis vectors: look for the position of white pixel, which differentiates them and note that we enumerate pixels column-wise: ```python plt.matshow(canonical(0)); plt.matshow(canonical(1)); plt.matshow(canonical(49)); ``` Note how diff `matshow()` is from `imshow()`: > In a jupyter cell, successive `matshow()`'s can draw any number of images, while `imshow()` will only do it for the last one ```python plt.imshow(canonical(0)); plt.imshow(canonical(1)); plt.imshow(canonical(49)); ``` ##### Stopped here (2020/11/19 12h25) ## Transmitting images Suppose we want to transmit the "cameraman" image over a communication channel. The intuitive way to do so is to send the pixel values one by one, which corresponds to sending the coefficients of the decomposition of the image over the canonical basis. So far, nothing complicated: to send the cameraman image, for instance, we will send $64\times 64 = 4096$ coefficients in a row. Now suppose that a communication failure takes place after the first half of the pixels have been sent. The received data will allow us to display an approximation of the original image only. If we replace the missing data with zeros, here is what we would see, which is not very pretty: ```python # unrolling of the image for transmission (we go column by column, hence "F") tx_img = np.ravel(img, "F") # oops, we lose half the data tx_img[int(len(tx_img)/2):] = 0 # rebuild matrix rx_img = np.reshape(tx_img, (64, 64), "F") plt.matshow(rx_img); ``` help(np.ravel) Can we come up with a trasmission scheme that is more robust in the face of channel loss? Interestingly, the answer is yes, and it involves a different, more versatile basis for the space of images. What we will do is the following: * describe the Haar basis, a new basis for the image space * project the image in the new basis * transmit the projection coefficients * rebuild the image using the basis vectors We know a few things: if we choose an orthonormal basis, the analysis and synthesis formulas will be super easy (a simple inner product and a scalar multiplication respectively). The trick is to find a basis that will be robust to the loss of some coefficients. One such basis is the **Haar basis**. We cannot go into too many details in this notebook but, for the curious, a good starting point is [here](https://chengtsolin.wordpress.com/2015/04/15/real-time-2d-discrete-wavelet-transform-using-opengl-compute-shader/). (An even better starting point is [these two papers](http://grail.cs.washington.edu/projects/wavelets/article/).) Mathematical formulas aside, the Haar basis works by encoding the information in a *hierarchical* way: the first basis vectors encode the broad information and the higher coefficients encode the detail. Let's have a look. First of all, to keep things simple, we will remain in the space of square matrices whose size is a power of two. 
The code to generate the Haar basis matrices is the following: first we generate a 1D Haar vector and then we obtain the basis matrices by taking the outer product of all possible 1D vectors (don't worry if it's not clear, the results are what's important): ```python def haar1D(n, SIZE): # check power of two if math.floor(math.log(SIZE) / math.log(2)) != math.log(SIZE) / math.log(2): print("Haar defined only for lengths that are a power of two") return None if n >= SIZE or n < 0: print("invalid Haar index") return None # zero basis vector if n == 0: return np.ones(SIZE) # express n >= 1 as 2^p + q with p as large as possible; # then k = SIZE/2^p is the length of the support # and s = qk is the shift p = math.floor(math.log(n) / math.log(2)) pp = int(pow(2, p)) k = SIZE / pp s = (n - pp) * k h = np.zeros(SIZE) h[int(s):int(s+k/2)] = 1 h[int(s+k/2):int(s+k)] = -1 # these are not normalized return h def haar2D(n, SIZE=8): # get horizontal and vertical indices hr = haar1D(n % SIZE, SIZE) hv = haar1D(int(n / SIZE), SIZE) # 2D Haar basis matrix is separable, so we can # just take the column-row product H = np.outer(hr, hv) # np.outer() is just column vector times row vector, # the 1st arg being the col vec, the 2nd the row vec. H = H / math.sqrt(np.sum(H * H)) # the previous line just divides H by its Frobenius norm # so that the returned value of haar2D() has norm 1. return H ``` help(np.outer) First of all, let's look at a few basis matrices; note that the matrices have **both positive and negative values**, so that the value of **zero** will be represented as **gray**: ```python plt.matshow? ``` ```python plt.matshow(haar2D(0)); plt.matshow(haar2D(1)); plt.matshow(haar2D(10)); plt.matshow(haar2D(63)); ``` ```python np.unique(haar2D(0)) ``` array([0.125]) ```python np.unique(haar2D(1)) ``` array([-0.125, 0.125]) ```python np.unique(haar2D(63)) ``` array([-0.5, 0. 
, 0.5]) ```python white = np.unique(haar2D(0))[0] black = -white plt.matshow(haar2D(0), vmax=white, vmin=black); ``` We can notice two key properties * each basis matrix has positive and negative values in some symmetric pattern: this means that the basis matrix will implicitly compute the difference between image areas * low-index basis matrices take differences between large areas, while high-index ones take differences in smaller **localized** areas of the image We can immediately verify that the Haar matrices are orthogonal: ```python # let's use an 8x8 space; there will be 64 basis vectors # compute all possible inner product and only print the nonzero results for m in range(0,64): for n in range(0,64): r = np.sum(haar2D(m, 8) * haar2D(n, 8)) if r != 0: print("[%dx%d -> %f] " % (m, n, r), end="") ``` [0x0 -> 1.000000] [1x1 -> 1.000000] [2x2 -> 1.000000] [3x3 -> 1.000000] [4x4 -> 1.000000] [5x5 -> 1.000000] [6x6 -> 1.000000] [7x7 -> 1.000000] [8x8 -> 1.000000] [9x9 -> 1.000000] [10x10 -> 1.000000] [11x11 -> 1.000000] [12x12 -> 1.000000] [13x13 -> 1.000000] [14x14 -> 1.000000] [15x15 -> 1.000000] [16x16 -> 1.000000] [16x17 -> -0.000000] [17x16 -> -0.000000] [17x17 -> 1.000000] [18x18 -> 1.000000] [19x19 -> 1.000000] [20x20 -> 1.000000] [21x21 -> 1.000000] [22x22 -> 1.000000] [23x23 -> 1.000000] [24x24 -> 1.000000] [24x25 -> -0.000000] [25x24 -> -0.000000] [25x25 -> 1.000000] [26x26 -> 1.000000] [27x27 -> 1.000000] [28x28 -> 1.000000] [29x29 -> 1.000000] [30x30 -> 1.000000] [31x31 -> 1.000000] [32x32 -> 1.000000] [33x33 -> 1.000000] [34x34 -> 1.000000] [35x35 -> 1.000000] [36x36 -> 1.000000] [37x37 -> 1.000000] [38x38 -> 1.000000] [39x39 -> 1.000000] [40x40 -> 1.000000] [41x41 -> 1.000000] [42x42 -> 1.000000] [43x43 -> 1.000000] [44x44 -> 1.000000] [45x45 -> 1.000000] [46x46 -> 1.000000] [47x47 -> 1.000000] [48x48 -> 1.000000] [49x49 -> 1.000000] [50x50 -> 1.000000] [51x51 -> 1.000000] [52x52 -> 1.000000] [53x53 -> 1.000000] [54x54 -> 1.000000] [55x55 -> 1.000000] [56x56 -> 1.000000] [57x57 -> 1.000000] [58x58 -> 1.000000] [59x59 -> 1.000000] [60x60 -> 1.000000] [61x61 -> 1.000000] [62x62 -> 1.000000] [63x63 -> 1.000000] OK! Everything's fine. Now let's transmit the "cameraman" image: first, let's verify that it works ```python # project the image onto the Haar basis, obtaining a vector of 4096 coefficients # this is simply the analysis formula for the vector space with an orthogonal basis tx_img = np.zeros(64*64) for k in range(0, (64*64)): tx_img[k] = np.sum(img * haar2D(k, 64)) # now rebuild the image with the synthesis formula; since the basis is orthonormal # we just need to scale the basis matrices by the projection coefficients rx_img = np.zeros((64, 64)) for k in range(0, (64*64)): rx_img += tx_img[k] * haar2D(k, 64) plt.matshow(rx_img); ``` ```python np.linalg.norm(img - rx_img, inf) ``` 2.112088282046898e-12 help(np.linalg.norm) Cool, it works! Now let's see what happens if we lose the second half of the coefficients: ```python # oops, we lose half the data lossy_img = np.copy(tx_img); lossy_img[int(len(tx_img)/2):] = 0 # rebuild matrix rx_img = np.zeros((64, 64)) for k in range(0, (64*64)): rx_img += lossy_img[k] * haar2D(k, 64) plt.matshow(rx_img); ``` That's quite remarkable, no? We've lost the same amount of information as before but the image is still acceptable. This is because we lost the coefficients associated to the fine details of the image but we retained the "broad strokes" encoded by the first half. 
Note that if we lose the first half of the coefficients, the result would look remarkably different: ```python lossy_img = np.copy(tx_img); lossy_img[0:int(len(tx_img)/2)] = 0 rx_img = np.zeros((64, 64)) for k in range(0, (64*64)): rx_img += lossy_img[k] * haar2D(k, 64) plt.matshow(rx_img); ``` In fact, schemes like this one are used in *progressive encoding*: send the most important information first and add details if the channel permits it. You may have experienced this while browsing the internet over a slow connection. All in all, a great application of a change of basis! ## A few of my own questions **(?)** About `# check power of two` of `haar1D()` ```python help(math.log) ``` Help on built-in function log in module math: log(...) log(x[, base]) Return the logarithm of x to the given base. If the base not specified, returns the natural logarithm (base e) of x. ```python math.log(2) ``` 0.6931471805599453 ```python math.log(2, 2) ``` 1.0 ```python math.log(2, 10) ``` 0.30102999566398114 The author is just checking whether $ \log_{2} \texttt{SIZE} = \frac{\ln \texttt{SIZE}}{\ln 2} $ is an integer. **N.B.** Unlike `math.log`, numpy has - `np.log`: Natural logarithm - `np.log2`: base 2 - `np.log10`: base 10 - `np.log1p`: log(1+p) **(?)** What is `n` in `haar2D()`? Must `n` be bounded? **(R)** From reading the code, it seems that if `SIZE = k`, then `n = 0, 1, 2, ..., k**2 -1`, like the above examples - when `SIZE=8`, `n = 0, 1, ..., 63` - when `SIZE=64`, `n = 0, 1, ..., 64**2 - 1` The `n` in `haar2D()` has to do with the `n` in `haar1D()`. To better understand how the function is written that way, readers would better have read [one of the papers](http://grail.cs.washington.edu/projects/wavelets/article/wavelet1.pdf) mentioned above. Briefly speaking, `Haar1D(n , SIZE)` will return a basis for the vector space $V^j$, where with `SIZE`$ = 2^j$. For example, with the box basis and the Haar wavelets described in the paper, $$ \forall\, f \in V^3,\; \text{we have}\\ f = c_{0}^{0} \phi_{0}^{0} + d_{0}^{0} \psi_{0}^{0} + \left(d_{0}^{1} \psi_{0}^{1} + d_{1}^{1} \psi_{1}^{1}\right) + \left(d_{0}^{2} \psi_{0}^{2} + d_{1}^{2} \psi_{1}^{2} + d_{2}^{2} \psi_{2}^{2} + d_{3}^{2} \psi_{3}^{2}\right). $$ <br> What `haar1D(n, SIZE)` does is that it returns those $\phi_{s}^{t}$ and $\psi_{s}^{t}$.<br> As you can see from the $V^3$ example, there should be $2^3$ such basis vectors/functions; in general, there will be `SIZE`$= 2^j$ basis vectors and that's why the subscript (here in the `haar1D` function `n`) runs from `0` to `SIZE-1`. **(?)** Is it correct for `haar2D` to normalize by Frobenius norm? **(R)** ## Ref. - [http://grail.cs.washington.edu/projects/wavelets/](http://grail.cs.washington.edu/projects/wavelets/) - [http://grail.cs.washington.edu/Research/](http://grail.cs.washington.edu/Research/) - **exercise3.4 p.57** of the textbook written by the same authors of the course also talks about Haar basis. ```python type(pow(2,3)), type(pow(2,3.0)) ``` (int, float) ```python pow(2,3), pow(2,3.0) ``` (8, 8.0) ```python 2**3, 2**3.0 ``` (8, 8.0) ```python ```
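Regarding the last open question, a quick numerical check: the inner product defined at the start of this notebook is exactly the Frobenius inner product, so dividing each `haar2D` matrix by its Frobenius norm makes it unit-norm with respect to that inner product. A minimal sketch of the check:

```python
# Verify that every 8x8 Haar basis matrix has unit norm under the inner product
# <A, B> = sum_{m,n} a_{m,n} b_{m,n} used earlier (i.e. the Frobenius norm).
norms = [np.sqrt(np.sum(haar2D(n, 8) * haar2D(n, 8))) for n in range(64)]
print(np.allclose(norms, 1.0))  # expected output: True
```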
```python
%matplotlib inline
```

# Lugiato-Lefever equation -- Soliton molecules

This example shows how to perform simulations for the Lugiato-Lefever equation (LLE) [1], using functionality implemented by `py-fmas`. In particular, this example implements the first-order propagation equation

\begin{align}\partial_t u = P - (1+i\theta)u - i d_2 \partial_x^2 u + i |u|^2 u,\end{align}

where $u\equiv u(x,t)$ is a complex field. The temporal evolution is governed by the frequency detuning $\theta=2$, the constant driving amplitude $P=1.37225$, and the second-order dispersion parameter $d_2=-0.002$. Equations of this type describe the propagation of optical pulses in ring cavities.

The example provided below shows how an initial condition of the form

\begin{align}u_0(x) = 0.5 + \exp\{ -(x/0.85)^2\}\end{align}

evolves into a soliton molecule consisting of 5 cavity solitons. This propagation scenario reproduces the soliton molecule shown in Fig. 9(e) of Ref. [2].

References:

[1] L.A. Lugiato, R. Lefever, Spatial Dissipative Structures in Passive Optical Systems, Phys. Rev. Lett. 58 (1987) 2209, https://doi.org/10.1103/PhysRevLett.58.2209.

[2] C. Godey, I.V. Balakireva, A. Coillet, Y. K. Chembo, Stability analysis of the spatiotemporal Lugiato-Lefever model for Kerr optical frequency combs in the anomalous and normal dispersion regimes, Phys. Rev. A 89 (2014) 063814, http://dx.doi.org/10.1103/PhysRevA.89.063814.

.. codeauthor:: Oliver Melchert <melchert@iqo.uni-hannover.de>

```python
import fmas
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.colors as col
from fmas.config import FTSHIFT, FTFREQ, FT, IFT
from fmas.solver import SiSSM


def plot_evolution_LLE(z, t, u, t_lim, w_lim):

    def _setColorbar(im, refPos):
        """colorbar helper"""
        x0, y0, w, h = refPos.x0, refPos.y0, refPos.width, refPos.height
        cax = f.add_axes([x0, y0+1.02*h, w, 0.03*h])
        cbar = f.colorbar(im, cax=cax, orientation='horizontal')
        cbar.ax.tick_params(color='k', labelcolor='k', bottom=False, direction='out',
                            labelbottom=False, labeltop=True, top=True, size=4, pad=0)
        cbar.ax.tick_params(which="minor", bottom=False, top=False)
        return cbar

    w = FTSHIFT(FTFREQ(t.size, d=t[1]-t[0])*2*np.pi)

    f, (ax1, ax2) = plt.subplots(1, 2, sharey=True, figsize=(8,4))
    plt.subplots_adjust(left=0.13, right=0.96, bottom=0.12, top=0.8, wspace=0.05)
    cmap = mpl.cm.get_cmap('jet')

    # -- LEFT SUB-FIGURE: TIME-DOMAIN PROPAGATION CHARACTERISTICS
    It = np.abs(u)**2
    It /= np.max(It)
    my_norm = col.Normalize(vmin=0, vmax=1)
    im1 = ax1.pcolorfast(t, z, It[:-1,:-1], norm=my_norm, cmap=cmap)
    cbar1 = _setColorbar(im1, ax1.get_position())
    cbar1.ax.set_title(r"$|u|^2/{\rm{max}}\left(|u|^2\right)$", color='k', y=3.5)
    ax1.set_xlim(t_lim)
    ax1.set_ylim([0., z.max()])
    ax1.set_xlabel(r"$x$")
    ax1.set_ylabel(r"$t$")
    ax1.ticklabel_format(useOffset=False, style='plain')

    # -- RIGHT SUB-FIGURE: ANGULAR FREQUENCY-DOMAIN PROPAGATION CHARACTERISTICS
    Iw = np.abs(FTSHIFT(FT(u, axis=-1), axes=-1))**2
    Iw /= np.max(Iw)
    im2 = ax2.pcolorfast(w, z, Iw[:-1,:-1],
                         norm=col.LogNorm(vmin=1e-6*Iw.max(), vmax=Iw.max()),
                         cmap=cmap)
    cbar2 = _setColorbar(im2, ax2.get_position())
    cbar2.ax.set_title(r"$|u_k|^2/{\rm{max}}\left(|u_k|^2\right)$", color='k', y=3.5)
    ax2.set_xlim(w_lim)
    ax2.set_ylim([0., z.max()])
    ax2.set_xlabel(r"$k$")
    ax2.tick_params(labelleft=False)
    ax2.ticklabel_format(useOffset=False, style='plain')

    plt.show()


def main():

    # -- DEFINE SIMULATION PARAMETERS
    x_max, Nx = np.pi, 512
    t_max, Nt = 30.0, 60000
    n_skip = 60
    P, theta, d2 = 1.37225, 2., -0.002

    # -- INITIALIZATION STAGE
    # ... COMPUTATIONAL DOMAIN
    x = np.linspace(-x_max, x_max, Nx, endpoint=False)
    k = FTFREQ(x.size, d=x[1]-x[0])*2*np.pi
    # ... LUGIATO-LEFEVER MODEL
    Lk = lambda k: -(1+1j*theta) + 1j*d2*k*k
    Nk = lambda uk: ( lambda ut: (FT(1j*np.abs(ut)**2*ut + P )))( IFT(uk))
    # ... SOLVER BASED ON SIMPLE SPLIT-STEP FOURIER METHOD
    solver = SiSSM(Lk(k), Nk)
    # ... INITIAL CONDITION
    u_0k = FT(0.5 + np.exp(-(x/0.85)**2) + 0j)
    solver.set_initial_condition(k, u_0k)

    # -- RUN SIMULATION
    solver.propagate(z_range = t_max, n_steps = Nt, n_skip = n_skip)
    t_, uxt = solver.z, solver.utz

    x_lim = (-np.pi, np.pi)
    k_lim = (-150, 150)
    plot_evolution_LLE(t_, x, uxt, x_lim, k_lim)


if __name__=='__main__':
    main()
```
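For readers without `py-fmas` installed, the split-step idea behind the `SiSSM` solver used above can be sketched with plain NumPy. The snippet below is a minimal first-order split-step loop for the same model, under simplifying assumptions (an explicit Euler nonlinear substep and the same step count as above); it is an illustration, not the py-fmas implementation.

```python
import numpy as np

# Minimal first-order split-step Fourier sketch for the LLE above (assumed
# discretization; not the py-fmas SiSSM solver).
theta, P, d2 = 2.0, 1.37225, -0.002
Nx, x_max = 512, np.pi
x = np.linspace(-x_max, x_max, Nx, endpoint=False)
k = np.fft.fftfreq(Nx, d=x[1] - x[0]) * 2 * np.pi

u = 0.5 + np.exp(-(x / 0.85)**2) + 0j        # initial condition from the text
Lk = -(1 + 1j*theta) + 1j*d2*k*k             # linear operator in k-space
dt, n_steps = 30.0/60000, 60000
exp_L = np.exp(Lk*dt)                        # exact propagator for the linear part

for _ in range(n_steps):
    u = np.fft.ifft(exp_L*np.fft.fft(u))     # linear substep (in Fourier space)
    u = u + dt*(1j*np.abs(u)**2*u + P)       # nonlinear substep (explicit Euler)

print(np.max(np.abs(u)**2))                  # peak intensity of the final field
```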
```python
import numpy as np
import pandas as pd
import linearsolve as ls
import matplotlib.pyplot as plt
plt.style.use('classic')
%matplotlib inline
```

# Class 13: Introduction to Real Business Cycle Modeling

Real business cycle (RBC) models are extensions of the stochastic Solow model. RBC models replace the ad hoc assumption of a constant saving rate in the Solow model with the solution to an intertemporal utility maximization problem that gives rise to a variable saving rate. RBC models also often feature some sort of household labor-leisure tradeoff that produces endogenous labor variation.

In this notebook, we'll consider a baseline RBC model that does not have labor. We'll use the model to compute impulse responses to a one percent shock to TFP.

## The Baseline RBC Model without Labor

The equilibrium conditions for the RBC model without labor are:

\begin{align}
\frac{1}{C_t} & = \beta E_t \left[\frac{\alpha A_{t+1}K_{t+1}^{\alpha-1} +1-\delta }{C_{t+1}}\right]\\
K_{t+1} & = I_t + (1-\delta) K_t\\
Y_t & = A_t K_t^{\alpha}\\
Y_t & = C_t + I_t\\
\log A_{t+1} & = \rho \log A_t + \epsilon_{t+1}
\end{align}

where $\epsilon_{t+1} \sim \mathcal{N}(0,\sigma^2)$.

The objective is to use `linearsolve` to simulate impulse responses to a TFP shock using the following parameter values for the simulation:

| $\rho$ | $\sigma$ | $\beta$ | $\alpha$ | $\delta $ | $T$ |
|--------|----------|---------|----------|-----------|-----|
| 0.75   | 0.006    | 0.99    | 0.35     | 0.025     | 26  |

## Model Preparation

Before proceeding, let's recast the model in the form required for `linearsolve`. Write the model with all variables moved to the lefthand side of the equations, dropping the expectations operator $E_t$ and the exogenous shock $\epsilon_{t+1}$:

\begin{align}
0 & = \beta\left[\frac{\alpha A_{t+1}K_{t+1}^{\alpha-1} +1-\delta }{C_{t+1}}\right] - \frac{1}{C_t}\\
0 & = A_t K_t^{\alpha} - Y_t\\
0 & = I_t + (1-\delta) K_t - K_{t+1}\\
0 & = C_t + I_t - Y_t\\
0 & = \rho \log A_t - \log A_{t+1}
\end{align}

Remember, capital and TFP are called *state variables* because their $t+1$ values are predetermined. Output, consumption, and investment are called *costate* or *control* variables. Note that the model has 5 equations in 5 endogenous variables.

## Initialization, Approximation, and Solution

The next several cells initialize the model in `linearsolve` and then approximate and solve it. (A sketch of one possible way to fill in the parameter and equilibrium-condition cells appears at the end of this notebook.)

```python
# Create a variable called 'parameters' that stores the model parameter values in a Pandas Series

# Print the model's parameters

```

```python
# Create a variable called 'sigma' that stores the value of sigma

```

```python
# Create variable called 'var_names' that stores the variable names in a list with state variables ordered first

# Create variable called 'shock_names' that stores an exogenous shock name for each state variable.

```

```python
# Define a function that evaluates the equilibrium conditions of the model solved for zero. PROVIDED
def equilibrium_equations(variables_forward,variables_current,parameters):

    # Parameters. PROVIDED
    p = parameters

    # Current variables. PROVIDED
    cur = variables_current

    # Forward variables. PROVIDED
    fwd = variables_forward

    # Euler equation

    # Production function

    # Capital evolution

    # Market clearing

    # Exogenous tfp

    # Stack equilibrium conditions into a numpy array

```

Next, initialize the model using `ls.model` which takes the following required arguments:

* `equations`
* `n_states`
* `var_names`
* `shock_names`
* `parameters`

```python
# Initialize the model into a variable named 'rbc_model'

```

```python
# Compute the steady state numerically using .compute_ss() method of rbc_model

# Print the computed steady state

```

```python
# Find the log-linear approximation around the non-stochastic steady state and solve using .approximate_and_solve() method of rbc_model

```

### Impulse Responses

Compute 26-period impulse responses of the model's variables to a 0.01 unit shock to TFP in period 5.

```python
# Compute impulse responses

# Print the first 10 rows of the computed impulse responses to the TFP shock

```

Construct a $2\times2$ grid of plots of simulated TFP, output, consumption, and investment. Be sure to multiply simulated values by 100 so that vertical axis units are in "percent deviation from steady state."

```python
# Create figure. PROVIDED
fig = plt.figure(figsize=(12,8))

# Create upper-left axis. PROVIDED
ax = fig.add_subplot(2,2,1)

# Create upper-right axis. PROVIDED
ax = fig.add_subplot(2,2,2)

# Create lower-left axis. PROVIDED
ax = fig.add_subplot(2,2,3)

# Create lower-right axis. PROVIDED
ax = fig.add_subplot(2,2,4)
```
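The cells above are intentionally left blank as a class exercise. For reference, here is a sketch of how the parameter and equilibrium-condition cells could be filled in, using only the parameter table and the equations written out in the text. The variable and shock names are one possible choice, and the `linearsolve`-specific steps (initialization, steady state, approximation, impulse responses) are left to the exercise.

```python
import numpy as np
import pandas as pd

# One possible completion of the blank cells above (variable/shock names are a choice).
parameters = pd.Series({'rho': 0.75, 'beta': 0.99, 'alpha': 0.35, 'delta': 0.025})
sigma = 0.006

# State variables (TFP and capital) ordered first, then the costate/control variables.
var_names = ['a', 'k', 'y', 'c', 'i']
shock_names = ['e_a', 'e_k']

def equilibrium_equations(variables_forward, variables_current, parameters):
    p = parameters
    cur = variables_current
    fwd = variables_forward

    # Euler equation
    euler = p.beta*(p.alpha*fwd.a*fwd.k**(p.alpha-1) + 1 - p.delta)/fwd.c - 1/cur.c
    # Production function
    production = cur.a*cur.k**p.alpha - cur.y
    # Capital evolution
    capital = cur.i + (1 - p.delta)*cur.k - fwd.k
    # Market clearing
    market = cur.c + cur.i - cur.y
    # Exogenous TFP
    tfp = p.rho*np.log(cur.a) - np.log(fwd.a)

    # Stack equilibrium conditions into a numpy array
    return np.array([euler, production, capital, market, tfp])
```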
# Ordinary Differential Equation Solvers: Runge-Kutta Methods ### Christina Lee ### Category: Numerics So what's an <i>Ordinary Differential Equation</i>? Differential Equation means we have some equation (or equations) that have derivatives in them. The <i>ordinary</i> part differentiates them from <i>partial</i> differential equations (the ones with curly $\partial$ derivatives). Here, we only have one <b>independent</b> variable (let's call it $t$), and one or more <b>dependent</b> variables (let's call them $x_1, x_2, ...$). In partial differential equations, we can have more than one independent variable. This ODE can either be written as a system of the form $$ \frac{d x_1}{dt}=f_1(t,x_1,x_2,...,x_n) $$ $$ \frac{d x_2}{dt}=f_2(t,x_1,x_2,...,x_n) $$ ... $$ \frac{d x_n}{dt}=f_n(t,x_1,x_n,...,x_n) $$ or a single n'th order ODE of the form $$ f_n(t,x) \frac{d^n x}{dt^n}+...+f_1(t,x) \frac{dx}{dt}+f_0(t,x)=0 $$ that can be rewritten in terms of a system of first order equations by performing variable substitutions such as $$ \frac{d^i x}{dt^i}=x_i $$ Though STEM students such as I have probably spent thousands of hours pouring of ways to analytically solve both ordinary and partial differential equations, unfortunately, the real world is rarely so kind as to provide us with an exactly solvable differential equation. At least for interesting problems. We can sometimes approximate the real world as an exactly solvable situation, but for the situation we are interested in, we have to turn to numerics. I'm not saying those thousand different analytic methods are for nothing. We need an idea ahead of time of what the differential equation should be doing, to tell if it's misbehaving or not. We can't just blindly plug and chug. Today will be about introducing four different methods based on Taylor expansion to a specific order, also known as Runge-Kutta Methods. We can improve these methods with adaptive stepsize control, but that is a topic for another time, just like the other modern types of solvers such as Richardson extrapolation and predictor-corrector. Nonetheless, to work with ANY computational differential equation solver, you need to understand the fundamentals of routines like Euler and Runge-Kutta, their error propagation, and where they can go wrong. Otherwise, you might misinterpret the results of more advanced methods. <b>WARNING:</b> If you need to solve a troublesome differential equation for a research problem, use a package, like [DifferentialEquations](https://github.com/JuliaDiffEq/DifferentialEquations.jl). These packages have much better error handling and optimization. Let's first add our plotting package and colors. ```julia using Plots gr() ``` Plots.GRBackend() We will be benchmarking our solvers on one of the simplest and most common ODE's, \begin{equation} \frac{d}{d t}x=x \;\;\;\;\;\;\; x(t)=C e^t \end{equation} Though this only has one dependent variable, we want to structure our code so that we can accommodate a series of dependent variables, $x_1,x_2,...,x_n$, and their associated derivative functions. Therefore, we create a function for each dependent variable, and then `push` it into an array declared as type `Function`. ```julia function f1(t::Float64,x::Array{Float64,1}) return x[1] end f=Function[] push!(f,f1) ``` 1-element Array{Function,1}: f1 ### Euler's Method First published in Euler's <i>Instutionum calculi integralis</i> in 1768, this method gets a lot of milage, and if you are to understand anything, this method is it. 
We march along with step size $h$, and at each new point, calculate the slope. The slope gives us our new direction to travel for the next $h$. We can determine the error from the Taylor expansion of the function $$ x_{n+1}=x_n+h f(t_n,x_n) + \mathcal{O}(h^2). $$ In case you haven't seen it before, the notation $\mathcal{O}(x)$ stands for "errors of the order $x$". Summing over the entire interval, we accumulate error according to $$N\mathcal{O}(h^2)= \frac{x_f-x_0}{h}\mathcal{O}(h^2)=\mathcal{O}(h), $$ making this a <b>first order</b> method. Generally, if a technique is $n$th order in the Taylor expansion for one step, it is $(n-1)$th order over the interval. ```julia function Euler(f::Array{Function,1},t0::Float64,x::Array{Float64,1},h::Float64) d=length(f) xp=copy(x) for ii in 1:d xp[ii]+=h*f[ii](t0,x) end return t0+h,xp end ``` Euler (generic function with 1 method) ```julia ``` ## Implicit Method or Backward Euler If $f(t,x)$ has a form that is invertible, we can form a specific expression for the next step. For example, if we use our exponential, \begin{equation} x_{n+1}=x_n+ h f(t_{n+1},x_{n+1}) \end{equation} \begin{equation} x_{n+1}-h x_{n+1}=x_n \end{equation} \begin{equation} x_{n+1}=\frac{x_n}{1-h} \end{equation} This expression varies for each differential equation and only exists if the function is invertible. ```julia function Implicit(f::Array{Function,1},t0::Float64,x::Array{Float64,1},h::Float64) return t0+h,[ x[1]/(1-h) ] end ``` Implicit (generic function with 1 method) ## 2nd Order Runge-Kutta So in the Euler Method, we could just make more, tinier steps to achieve more precise results. Here, we make <i>better</i> steps. Each step itself takes more work than a step in the first order methods, but we win by having to perform fewer steps. This time, we are going to work with the Taylor expansion up to second order, \begin{equation} x_{n+1}=x_n+h f(t_n,x_n) + \frac{h^2}{2} f^{\prime}(t_n,x_n)+ \mathcal{O} (h^3). \end{equation} Define \begin{equation} k_1=f(t_n,x_n), \end{equation} so that we can write down the derivative of our $f$ function, and the second derivative (curvature), of our solution, \begin{equation} f^{\prime}(t_n,x_n)=\frac{f(t_n+h/2,x_n+h k_1/2)-k_1}{h/2}+\mathcal{O}(h^2). \end{equation} Plugging this expression back into our Taylor expansion, we get a new expression for $x_{n+1}$ \begin{equation} x_{n+1}=x_n+hf(t_n+h/2,x_n+h k_1/2)+\mathcal{O}(h^3) \end{equation} We can also interpret this technique as using the slope at the center of the interval, instead of the start. ```julia function RK2(f::Array{Function,1},t0::Float64,x::Array{Float64,1},h::Float64) d=length(f) xp=copy(x) xk1=copy(x) for ii in 1:d xk1[ii]+=f[ii](t0,x)*h/2 end for ii in 1:d xp[ii]+=f[ii](t0+h/2,xk1)*h end return t0+h,xp end ``` RK2 (generic function with 1 method) ## 4th Order Runge-Kutta Wait! Where's 3rd order? There exists a 3rd order method, but I only just heard about it while fact-checking for this post. RK4 is your dependable, multi-purpose workhorse, so we are going to skip right to it. $$ k_1= f(t_n,x_n) $$ $$ k_2= f(t_n+h/2,x_n+h k_1/2) $$ $$ k_3 = f(t_n+h/2, x_n+h k_2/2) $$ $$ k_4 = f(t_n+h,x_n+h k_3) $$ $$ x_{n+1}=x_n+\frac{h}{6}\left(k_1+2 k_2+ 2k_3 +k_4 \right) $$ I'm not going to prove here that the method is fourth order, but we will see numerically that it is. <i>Note:</i> I premultiply the $h$ in my code to reduce the number of times I have to multiply by $h$. 
```julia function RK4(f::Array{Function,1},t0::Float64,x::Array{Float64,1},h::Float64) d=length(f) hk1=zeros(Float64,length(x)) hk2=zeros(Float64,length(x)) hk3=zeros(Float64,length(x)) hk4=zeros(Float64,length(x)) for ii in 1:d hk1[ii]=h*f[ii](t0,x) end for ii in 1:d hk2[ii]=h*f[ii](t0+h/2,x+hk1/2) end for ii in 1:d hk3[ii]=h*f[ii](t0+h/2,x+hk2/2) end for ii in 1:d hk4[ii]=h*f[ii](t0+h,x+hk3) end return t0+h,x+(hk1+2*hk2+2*hk3+hk4)/6 end ``` RK4 (generic function with 1 method) This next function merely iterates over a certain number of steps for a given method. ```julia function Solver(f::Array{Function,1},Method::Function,t0::Float64, x0::Array{Float64,1},h::Float64,N::Int64) d=length(f) ts=zeros(Float64,N+1) xs=zeros(Float64,d,N+1) ts[1]=t0 xs[:,1]=x0 for i in 2:(N+1) ts[i],xs[:,i]=Method(f,ts[i-1],xs[:,i-1],h) end return ts,xs end ``` Solver (generic function with 1 method) ```julia N=1000 xf=10 t0=0. x0=[1.] dt=(xf-t0)/N tEU,xEU=Solver(f,Euler,t0,x0,dt,N); tIm,xIm=Solver(f,Implicit,t0,x0,dt,N); tRK2,xRK2=Solver(f,RK2,t0,x0,dt,N); tRK4,xRK4=Solver(f,RK4,t0,x0,dt,N); xi=tEU yi=exp.(xi); errEU=reshape(xEU[1,:],N+1)-yi errIm=reshape(xIm[1,:],N+1)-yi errRK2=reshape(xRK2[1,:],N+1)-yi; errRK4=reshape(xRK4[1,:],N+1)-yi; ``` ```julia plot(tEU,xEU[1,:],label="Euler") plot!(tIm,xIm[1,:],label="Implicit") plot!(tRK2,xRK2[1,:],label="RK2") plot!(tRK4,xRK4[1,:],label="RK4") plot!(xi,yi,label="Exact") plot!(xlabel="Independent Variable",ylabel="Dependent variable",title="Comparing Methods") ``` ```julia plot(xi,errEU,label="Euler") plot!(xi,errIm,label="Implicit") plot!(xi,errRK2,label="RK2") plot!(xi,errRK4,label="RK4") plot!(xlabel="Independent Variable",ylabel="Error",title="Comparison of error scaling") ``` ## Scaling of the Error I talked above about the error scaling either as $h,h^2$, or $h^4$. I won't just talk but here will numerically demonstrate the relationship as well. For a variety of different step sizes, the below code calculates the final error for each method. Then we will plot the error and see how it scales. ```julia t0=0. tf=1. dx=tf-t0 x0=[1.] 
dt=collect(.001:.0001:.01) correctans=exp(tf) errfEU=zeros(Float64,length(dt)) errfIm=zeros(Float64,length(dt)) errfRK2=zeros(Float64,length(dt)) errfRK4=zeros(Float64,length(dt)) for ii in 1:length(dt) N=round(Int,dx/dt[ii]) dt[ii]=dx/N tEU,xEU=Solver(f,Euler,t0,x0,dt[ii],N); tIm,xIm=Solver(f,Implicit,t0,x0,dt[ii],N); tRK2,xRK2=Solver(f,RK2,t0,x0,dt[ii],N); tRK4,xRK4=Solver(f,RK4,t0,x0,dt[ii],N); errfEU[ii]=xEU[1,end]-correctans errfIm[ii]=xIm[1,end]-correctans errfRK2[ii]=xRK2[1,end]-correctans errfRK4[ii]=xRK4[1,end]-correctans end ``` ```julia plot(x->errfEU[end]*x/.01,dt[1],dt[end],label="Fitted line", linewidth=10) plot!(dt,errfEU,label="Error",linewidth=3) plot!(xlabel="step size",ylabel="final error", title="Linear Error of Euler Method") ``` ```julia plot(x->errfIm[end]*x/.01,dt[1],dt[end],label="Fitted line", linewidth=10) plot!(dt,errfIm,label="Error",linewidth=3) plot!(xlabel="step size",ylabel="final error", title="Linear Error of Implicit Method") ``` ```julia plot(x->errfRK2[end]*(x/.01)^2,dt[1],dt[end],label="Quadratic", linewidth=10) plot!(dt,errfRK2,label="Error",linewidth=3) plot!(xlabel="step size",ylabel="final error", title="Quadratic Error of Runge-Kutta 2 Method") ``` ```julia plot(x->errfRK4[end]*(x/.01)^4,dt[1],dt[end],label="Quartic", linewidth=10) plot!(dt,errfRK4,label="Error",linewidth=3) plot!(xlabel="step size",ylabel="final error", title="Quartic Error of Runge-Kuuta 4 Method") ``` ## Arbitrary Order While I have presented 4 concrete examples, many more exist. For any choice of variables $a_i, \beta_{i,j},a_i$ that fulfill $$ x_{n+1}=x_n+h\left(\sum_{i=1}^s a_i k_i \right)+ \mathcal{O}(h^p) $$ with $$ k_i=f\left(t_n+\alpha_i h,x_n+h\left(\sum_{j=1}^s \beta_{ij} k_j \right) \right) $$ we have a Runge-Kutta method of order $p$, where $p\geq s$. The Butcher tableau provides a set of consistent coefficients. Stay tuned for when we tuned these routines to the stiff van der Pol equations! ```julia ```
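To make the Butcher-tableau formulation above concrete, here is a small illustrative sketch that is not part of the original post (and is written in Python/NumPy rather than Julia, purely as a language-agnostic illustration): one explicit Runge-Kutta step driven entirely by the tableau coefficients $a_i$, $\alpha_i$, $\beta_{ij}$, filled with the classic RK4 values and checked against $\dot{x}=x$. All names here are my own.

```python
import numpy as np

# Classic RK4 Butcher tableau: nodes alpha_i, weights a_i, coupling beta_ij.
alpha = np.array([0.0, 0.5, 0.5, 1.0])
a     = np.array([1/6, 1/3, 1/3, 1/6])
beta  = np.array([[0.0, 0.0, 0.0, 0.0],
                  [0.5, 0.0, 0.0, 0.0],
                  [0.0, 0.5, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 0.0]])

def rk_step(f, t, x, h):
    """One explicit Runge-Kutta step defined entirely by the tableau above."""
    s = len(a)
    k = np.zeros((s,) + np.shape(x))
    for i in range(s):
        # Explicit method: stage i only uses the stages computed before it.
        k[i] = f(t + alpha[i] * h, x + h * np.tensordot(beta[i, :i], k[:i], axes=1))
    return t + h, x + h * np.tensordot(a, k, axes=1)

# Sanity check on x' = x, x(0) = 1: one step of size 0.1 should be close to e^0.1.
t1, x1 = rk_step(lambda t, x: x, 0.0, np.array([1.0]), 0.1)
print(x1, np.exp(0.1))
```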
e01a5c070237185708fa77bb8a558f365b259ffb
260,751
ipynb
Jupyter Notebook
Numerics_Prog/Runge-Kutta-Methods.ipynb
albi3ro/M4
ccd27d4b8b24861e22fe806ebaecef70915081a8
[ "MIT" ]
22
2015-11-15T08:47:04.000Z
2022-02-25T10:47:12.000Z
Numerics_Prog/Runge-Kutta-Methods.ipynb
albi3ro/M4
ccd27d4b8b24861e22fe806ebaecef70915081a8
[ "MIT" ]
11
2016-02-23T12:18:26.000Z
2019-09-14T07:14:26.000Z
Numerics_Prog/Runge-Kutta-Methods.ipynb
albi3ro/M4
ccd27d4b8b24861e22fe806ebaecef70915081a8
[ "MIT" ]
6
2016-02-24T03:08:22.000Z
2022-03-10T18:57:19.000Z
98.582609
392
0.643081
true
3,747
Qwen/Qwen-72B
1. YES 2. YES
0.890294
0.847968
0.754941
__label__eng_Latn
0.957409
0.592312
## Rigid body 3 DOF Devlop a system for a rigid body in 3 DOF and do a simualtion ```python import warnings #warnings.filterwarnings('ignore') %matplotlib inline %load_ext autoreload %autoreload 2 ``` ```python import sympy as sp import sympy.physics.mechanics as me import pandas as pd import numpy as np import matplotlib.pyplot as plt from substitute_dynamic_symbols import substitute_dynamic_symbols, find_name, find_derivative_name, lambdify, find_derivatives from pydy.codegen.ode_function_generators import generate_ode_function from scipy.integrate import odeint from sympy import cos,sin ``` ```python x0,y0,z0 = me.dynamicsymbols('x0 y0 z0') x01d,y01d,z01d = me.dynamicsymbols('x01d y01d z01d') u,v,w = me.dynamicsymbols('u v w') u1d,v1d,w1d = me.dynamicsymbols('u v w',1) phi,theta,psi = me.dynamicsymbols('phi theta psi') phi1d,theta1d,psi1d = me.dynamicsymbols('phi1d theta1d psi1d') t = sp.symbols('t') ``` ```python N = me.ReferenceFrame('N') ``` ```python S = N.orientnew('S', 'Axis', [psi,N.z]) ``` ```python S.ang_vel_in(N) ``` $\displaystyle \dot{\psi}\mathbf{\hat{n}_z}$ ```python S.ang_acc_in(N) ``` $\displaystyle \ddot{\psi}\mathbf{\hat{n}_z}$ ```python M = me.Point('M') O = M.locatenew('P',0) M.set_vel(N,0) O.set_vel(S,u*S.x + v*S.y) O.v1pt_theory(M,N,S) ``` $\displaystyle u\mathbf{\hat{s}_x} + v\mathbf{\hat{s}_y}$ ```python velocity_matrix = O.vel(N).to_matrix(N) velocity_matrix ``` $\displaystyle \left[\begin{matrix}u{\left(t \right)} \cos{\left(\psi{\left(t \right)} \right)} - v{\left(t \right)} \sin{\left(\psi{\left(t \right)} \right)}\\u{\left(t \right)} \sin{\left(\psi{\left(t \right)} \right)} + v{\left(t \right)} \cos{\left(\psi{\left(t \right)} \right)}\\0\end{matrix}\right]$ ## Mass ```python mass = sp.symbols('m') ``` ## Inertia ```python I_xx, I_yy, I_zz = sp.symbols('I_xx, I_yy, I_zz') body_inertia_dyadic = me.inertia(S, ixx=I_xx, iyy=I_yy, izz=I_zz) body_inertia_dyadic ``` $\displaystyle I_{xx}\mathbf{\hat{s}_x}\otimes \mathbf{\hat{s}_x} + I_{yy}\mathbf{\hat{s}_y}\otimes \mathbf{\hat{s}_y} + I_{zz}\mathbf{\hat{s}_z}\otimes \mathbf{\hat{s}_z}$ ```python body_inertia_dyadic.to_matrix(S) ``` $\displaystyle \left[\begin{matrix}I_{xx} & 0 & 0\\0 & I_{yy} & 0\\0 & 0 & I_{zz}\end{matrix}\right]$ ```python body_central_inertia = (body_inertia_dyadic, O) ``` ```python body = me.RigidBody('Rigid body', masscenter=O, frame = S, mass=mass, inertia=body_central_inertia) ``` ```python ``` ## Forces ```python fx, fy, fz, mx, my, mz = sp.symbols('f_x f_y f_z m_x m_y m_z') ``` ```python force_vector = fx*S.x + fy*S.y torque_vector = mz*S.z ``` ```python force = (O, force_vector) torque = (S, torque_vector) ``` ## Equations of Motion ```python coordinates = [x0, y0, psi] speeds = [u, v, psi1d] ``` ```python kinematical_differential_equations = [x0.diff() - velocity_matrix[0], y0.diff() - velocity_matrix[1], psi.diff() - psi1d, ] ``` ```python kinematical_differential_equations ``` [-u(t)*cos(psi(t)) + v(t)*sin(psi(t)) + Derivative(x0(t), t), -u(t)*sin(psi(t)) - v(t)*cos(psi(t)) + Derivative(y0(t), t), -psi1d(t) + Derivative(psi(t), t)] ```python #?me.KanesMethod ``` ```python kane = me.KanesMethod(N, coordinates, speeds, kinematical_differential_equations) ``` ```python loads = [force, torque] ``` ```python bodies = [body] fr, frstar = kane.kanes_equations(bodies, loads) ``` ```python constants = [I_xx, I_yy, I_zz,mass] specified = [fx, fy, mz] # External force/torque right_hand_side = generate_ode_function(kane.forcing_full, coordinates, speeds, constants, 
mass_matrix=kane.mass_matrix_full,specifieds=specified) ``` ```python coordinates_ = [0, 0, 0,] speeds_ = [0, 0, 0,] start = np.array(coordinates_+speeds_) t_ = 0. force_torque = [1,0,0] numerical_specified = np.array(force_torque) I_xx_ = 1 I_yy_ = 1 I_zz_ = 1 mass_ = 1 numerical_constants = np.array([I_xx_, I_yy_, I_zz_,mass_]) right_hand_side(start, t_, numerical_specified, numerical_constants) ``` array([0., 0., 0., 1., 0., 0.]) ```python def simulate(t,force_torque, I_xx,I_yy,I_zz,mass, initial_coordinates = [0, 0, 0,], initial_speeds = [0, 0, 0,]): start = np.array(initial_coordinates+initial_speeds) numerical_specified = force_torque numerical_constants = np.array([I_xx, I_yy, I_zz, mass]) df = pd.DataFrame(index=t) y = odeint(right_hand_side, start, t, args=(numerical_specified, numerical_constants)) for i,symbol in enumerate(coordinates+speeds): name = symbol.name df[name] = y[:,i] return df ``` ```python t = np.linspace(0,10,100) df = simulate(t=t, force_torque=[1,0,0],I_xx=1, I_yy=1, I_zz=1, mass=1) fig,ax = plt.subplots() df.plot(y='x0', ax=ax); ax.set_xlabel('time [s]') fig,ax = plt.subplots() df.plot(y='u', ax=ax); ax.set_xlabel('time [s]'); ``` ```python def track_plot(df,ax, l=1, time_step='1S'): df.plot(x='y0', y='x0',ax = ax) df_ = df.copy() df_.index = pd.TimedeltaIndex(df_.index,unit='s') df_ = df_.resample(time_step).first() def plot_body(row): x = row['y0'] y = row['x0'] psi = row['psi'] xs = [x-l/2*np.sin(psi),x+l/2*np.sin(psi)] ys = [y-l/2*np.cos(psi),y+l/2*np.cos(psi)] ax.plot(xs,ys,'k-') for index,row in df_.iterrows(): plot_body(row) ax.set_xlabel('y0') ax.set_ylabel('x0') ax.axis('equal') ``` ```python t = np.linspace(0,10,100) df = simulate(t=t, force_torque=[0,0,0],I_xx=1, I_yy=1, I_zz=1, mass=1, initial_speeds=[1,0,0,],initial_coordinates=[0, 0, 0,]) fig,ax = plt.subplots() track_plot(df,ax) fig,ax = plt.subplots() df.plot(y='psi', ax=ax); ax.set_xlabel('time') ax.set_ylabel('psi') ``` ```python radius = 10 # Radius of rotation [m] w = 0.1 # Angle velocity [rad/s] V = radius*w # Speed of point [m/s] t = np.linspace(0,2*np.pi/w,100) mass=1 expected_acceleration = -radius*w**2 expected_force = mass*expected_acceleration df = simulate(t=t, force_torque=[0,-expected_force,0],I_xx=1, I_yy=1, I_zz=1, mass=mass, initial_speeds=[V,0,w,],initial_coordinates=[0, 0, 0,]) fig,ax = plt.subplots() track_plot(df,ax, time_step = '5S') fig,ax = plt.subplots() df.plot(y='psi', ax=ax); ax.set_xlabel('time') ax.set_ylabel('psi') ``` ```python ```
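As an extra sanity check on the constant-turn-rate case above (this cell is an addition, not part of the original notebook), the simulated trajectory should stay on a circle of radius $R = V/w$; with $\psi(0)=0$ and the body initially moving along $x_0$, that circle is centred at $(x_0, y_0) = (0, R)$. The body-fixed speed should also stay constant, since the applied force is purely centripetal in the body frame.

```python
# Added consistency check; assumes df, V and w from the previous cell are still in scope.
R = V / w  # expected turning radius [m]

# Distance from the expected circle centre (0, R); should stay close to R = 10.
distance_from_centre = np.sqrt(df['x0']**2 + (df['y0'] - R)**2)
print(distance_from_centre.min(), distance_from_centre.max())

# Body-fixed speed; should stay close to V throughout the simulation.
speed = np.sqrt(df['u']**2 + df['v']**2)
print(speed.min(), speed.max())
```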
ec8f612289f62823d8df698c3a394bde0c435523
84,185
ipynb
Jupyter Notebook
rigid_body_3DOF.ipynb
axelande/rigidbodysimulator
a87c3eb3b7978ef01efca15e66a6de6518870cd8
[ "MIT" ]
null
null
null
rigid_body_3DOF.ipynb
axelande/rigidbodysimulator
a87c3eb3b7978ef01efca15e66a6de6518870cd8
[ "MIT" ]
1
2020-10-26T19:47:02.000Z
2020-10-26T19:47:02.000Z
rigid_body_3DOF.ipynb
axelande/rigidbodysimulator
a87c3eb3b7978ef01efca15e66a6de6518870cd8
[ "MIT" ]
1
2020-10-26T09:17:00.000Z
2020-10-26T09:17:00.000Z
116.117241
17,920
0.869763
true
2,216
Qwen/Qwen-72B
1. YES 2. YES
0.849971
0.771844
0.656045
__label__eng_Latn
0.195212
0.362543
# Kernel Design It's easy to make new kernels in GPflow. To demonstrate, we'll have a look at the Brownian motion kernel, whose function is \begin{equation} k(x, x') = \sigma^2 \text{min}(x, x') \end{equation} where $\sigma^2$ is a variance parameter. ```python import gpflow import numpy as np import matplotlib.pyplot as plt import tensorflow as tf from gpflow.utilities import print_summary, positive plt.style.use("ggplot") %matplotlib inline ``` To make this new kernel class, we inherit from the base class `gpflow.kernels.Kernel` and implement the three functions below. **NOTE:** Depending on the kernel to be implemented, other classes can be more adequate. For example, if the kernel to be implemented is isotropic stationary, you can immediately subclass `gpflow.kernels.IsotropicStationary` (at which point you only have to override `K_r` or `K_r2`; see the `IsotropicStationary` class docstring). Stationary but anisotropic kernels should subclass `gpflow.kernels.AnisotropicStationary` and override `K_d`. #### `__init__` In this simple example, the constructor takes no argument (though it could, if that was convenient, for example to pass in an initial value for `variance`). It *must* call the constructor of the superclass with appropriate arguments. Brownian motion is only defined in one dimension, and we'll assume that the `active_dims` are `[0]`, for simplicity. We've added a parameter to the kernel using the `Parameter` class. Using this class lets the parameter be used in computing the kernel function, and it will automatically be recognised for optimization (or MCMC). Here, the variance parameter is initialized at 1, and constrained to be positive. #### `K` This is where you implement the kernel function itself. This takes two arguments, `X` and `X2`. By convention, we make the second argument optional (it defaults to `None`). Inside `K`, all the computation must be done with TensorFlow - here we've used `tf.minimum`. When GPflow executes the `K` function, `X` and `X2` will be TensorFlow tensors, and parameters such as `self.variance` behave like TensorFlow tensors as well. #### `K_diag` This convenience function allows GPflow to save memory at predict time. It's simply the diagonal of the `K` function, in the case where `X2` is `None`. It must return a one-dimensional vector, so we use TensorFlow's reshape command. 
```python class Brownian(gpflow.kernels.Kernel): def __init__(self): super().__init__(active_dims=[0]) self.variance = gpflow.Parameter(1.0, transform=positive()) def K(self, X, X2=None): if X2 is None: X2 = X return self.variance * tf.minimum(X, tf.transpose(X2)) # this returns a 2D tensor def K_diag(self, X): return self.variance * tf.reshape(X, (-1,)) # this returns a 1D tensor k_brownian = Brownian() print_summary(k_brownian, fmt="notebook") ``` <table> <thead> <tr><th>name </th><th>class </th><th>transform </th><th>prior </th><th>trainable </th><th>shape </th><th>dtype </th><th style="text-align: right;"> value</th></tr> </thead> <tbody> <tr><td>Brownian.variance</td><td>Parameter</td><td>Softplus </td><td> </td><td>True </td><td>() </td><td>float64</td><td style="text-align: right;"> 1</td></tr> </tbody> </table> We can now evaluate our new kernel function and draw samples from a Gaussian process with this covariance: ```python np.random.seed(23) # for reproducibility def plotkernelsample(k, ax, xmin=0, xmax=3): xx = np.linspace(xmin, xmax, 300)[:, None] K = k(xx) ax.plot(xx, np.random.multivariate_normal(np.zeros(300), K, 5).T) ax.set_title("Samples " + k.__class__.__name__) def plotkernelfunction(k, ax, xmin=0, xmax=3, other=0): xx = np.linspace(xmin, xmax, 100)[:, None] ax.plot(xx, k(xx, np.zeros((1, 1)) + other)) ax.set_title(k.__class__.__name__ + " k(x, %.1f)" % other) f, axes = plt.subplots(1, 2, figsize=(12, 4), sharex=True) plotkernelfunction(k_brownian, axes[0], other=2.0) plotkernelsample(k_brownian, axes[1]) ``` ## Using the kernel in a model Because we've inherited from the `Kernel` base class, this new kernel has all the properties needed to be used in GPflow. It also has some convenience features such as allowing the user to call `k(X, X2)` which computes the kernel matrix. To show that this kernel works, let's use it inside GP regression. We'll see that Brownian motion has quite interesting properties. To add a little flexibility, we'll add a `Constant` kernel to our `Brownian` kernel, and the `GPR` class will handle the noise. ```python np.random.seed(42) X = np.random.rand(5, 1) Y = np.sin(X * 6) + np.random.randn(*X.shape) * 0.001 k1 = Brownian() k2 = gpflow.kernels.Constant() k = k1 + k2 m = gpflow.models.GPR((X, Y), kernel=k) # m.likelihood.variance.assign(1e-6) opt = gpflow.optimizers.Scipy() opt.minimize(m.training_loss, variables=m.trainable_variables) print_summary(m, fmt="notebook") xx = np.linspace(0, 1.1, 100).reshape(100, 1) mean, var = m.predict_y(xx) plt.plot(X, Y, "kx", mew=2) (line,) = plt.plot(xx, mean, lw=2) _ = plt.fill_between( xx[:, 0], mean[:, 0] - 2 * np.sqrt(var[:, 0]), mean[:, 0] + 2 * np.sqrt(var[:, 0]), color=line.get_color(), alpha=0.2, ) ``` ## See also For more details on how to manipulate existing kernels (or the one you just created!), we refer to the [Manipulating kernels](../advanced/kernels.ipynb) notebook.
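As a follow-up sketch (not from the original notebook): the note at the top mentions that isotropic stationary kernels can instead subclass `gpflow.kernels.IsotropicStationary` and only override `K_r2`. Assuming GPflow 2.x, where that base class already owns the `variance` and `lengthscales` parameters and passes the lengthscale-scaled squared distance into `K_r2`, an inverse-multiquadric-style kernel could look roughly like this:

```python
class InverseMultiquadric(gpflow.kernels.IsotropicStationary):
    # k(r) = variance / sqrt(1 + r^2), with r already scaled by the lengthscale.
    def K_r2(self, r2):
        return self.variance / tf.sqrt(1.0 + r2)

k_imq = InverseMultiquadric()
print_summary(k_imq, fmt="notebook")
```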
603a5190b16c964e18dc727f427e2cc638288e03
92,247
ipynb
Jupyter Notebook
doc/source/notebooks/tailor/kernel_design.ipynb
paulinavaso/docs
afd2fa1742a743b3faf237b76811a93b5caf9936
[ "Apache-2.0" ]
null
null
null
doc/source/notebooks/tailor/kernel_design.ipynb
paulinavaso/docs
afd2fa1742a743b3faf237b76811a93b5caf9936
[ "Apache-2.0" ]
null
null
null
doc/source/notebooks/tailor/kernel_design.ipynb
paulinavaso/docs
afd2fa1742a743b3faf237b76811a93b5caf9936
[ "Apache-2.0" ]
null
null
null
308.518395
59,624
0.917959
true
1,513
Qwen/Qwen-72B
1. YES 2. YES
0.904651
0.822189
0.743794
__label__eng_Latn
0.969398
0.566414
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W1D3-ModelFitting/W1D3_Tutorial4.ipynb" target="_parent"></a> # Neuromatch Academy: Week 1, Day 3, Tutorial 4 # Model Fitting: Multiple linear regression # Tutorial Objectives This is Tutorial 4 of a series on fitting models to data. We start with simple linear regression, using least squares optimization (Tutorial 1) and Maximum Likelihood Estimation (Tutorial 2). We will use bootstrapping to build confidence intervals around the inferred linear model parameters (Tutorial 3). We'll finish our exploration of linear models by generalizing to multiple linear regression (Tutorial 4). We then move on to polynomial regression (Tutorial 5). We end by learning how to choose between these various models. We discuss the bias-variance trade-off (Tutorial 6) and two common methods for model selection, AIC and Cross Validation (Tutorial 7). In this tutorial, we will generalize our linear model to incorporate multiple linear features. - Learn how to structure our inputs for multiple linear regression using the 'Design Matrix' - Generalize the MSE for multiple features using the ordinary least squares estimator - Visualize our data and model fit in multiple dimensions # Setup ```python # @title Imports import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D ``` ```python #@title Figure Settings %matplotlib inline fig_w, fig_h = (8, 6) plt.rcParams.update({'figure.figsize': (fig_w, fig_h)}) %config InlineBackend.figure_format = 'retina' ``` # Multiple Linear Regression ```python #@title Video: Multiple Linear Regression from IPython.display import YouTubeVideo video = YouTubeVideo(id="uQjKnlhGEVY", width=854, height=480, fs=1) print("Video available at https://youtube.com/watch?v=" + video.id) video ``` Video available at https://youtube.com/watch?v=uQjKnlhGEVY Now that we have considered the univariate case and how to produce confidence intervals for our estimator, we turn now to the general linear regression case, where we can have more than one regressor, or feature, in our input. Recall that our original univariate linear model was given as \begin{align} y = \theta_0 + \theta_1 x \end{align} where $\theta_0$ is the intercept and $\theta_1$ is the slope. We can easily extend this to the multivariate scenario by adding another parameter for each additional feature \begin{align} y = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + ... +\theta_d x_d \end{align} where $d$ is the dimensionality (number of features) in our input. We can condense this succinctly using vector notation for a single data point \begin{align} y_i = \boldsymbol{\theta}^{\top}\mathbf{x}_i \end{align} and fully in matrix form \begin{align} \mathbf{y} = \mathbf{X}\boldsymbol{\theta} \end{align} where $\mathbf{y}$ is a vector of measurements, $\mathbf{X}$ is a matrix containing the feature values (columns) for each input sample (rows), and $\boldsymbol{\theta}$ is our parameter vector. This matrix $\mathbf{X}$ is often referred to as the "[design matrix](https://en.wikipedia.org/wiki/Design_matrix)". For this tutorial we will focus on the two-dimensional case ($d=2$), which allows us to fully explore the multivariate case while still easily visualizing our results. As an example, think of a situation where a scientist records the spiking response of a retinal ganglion cell to patterns of light signals that vary in contrast and in orientation. 
In this case our model can be written as \begin{align} y = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \epsilon \end{align} or in matrix form where \begin{align} \mathbf{X} = \begin{bmatrix} 1 & x_{1,1} & x_{1,2} \\ 1 & x_{2,1} & x_{2,2} \\ \vdots & \vdots & \vdots \\ 1 & x_{n,1} & x_{n,2} \end{bmatrix}, \boldsymbol{\theta} = \begin{bmatrix} \theta_0 \\ \theta_1 \\ \theta_2 \\ \end{bmatrix} \end{align} For our actual exploration dataset we shall set $\boldsymbol{\theta}=[0, -2, -3]$ and draw $N=40$ noisy samples from $x \in [-2,2)$. Note that setting the value of $\theta_0 = 0$ effectively ignores the offset term. ```python np.random.seed(121) theta = [0, -2, -3] n_samples = 40 n_regressors = len(theta) x = np.random.uniform(-2, 2, (n_samples, n_regressors)) noise = np.random.randn(n_samples) y = x @ theta + noise ``` Now that we have our dataset, we want to find an optimal vector of parameters $\boldsymbol{\hat\theta}$. Recall our analytic solution to minimizing MSE for a single regressor: \begin{align} \hat\theta = \frac{\sum_i x_i y_i}{\sum_i x_i^2}. \end{align} The same idea holds for the multiple regressor case, only now expressed in matrix form \begin{align} \boldsymbol{\hat\theta} = (\mathbf{X}^\top\mathbf{X})^{-1}\mathbf{X}^\top\mathbf{y}. \end{align} This is called the [ordinary least squares](https://en.wikipedia.org/wiki/Ordinary_least_squares) (OLS) estimator. ### Exercise: Ordinary Least Squares Estimator In this exercise you will implement the OLS approach to estimating $\boldsymbol{\hat\theta}$ from the design matrix $\mathbf{X}$ and measurement vector $\mathbf{y}$. You can use the `@` symbol for matrix multiplication, `.T` for transpose, and `np.linalg.inv` for matrix inversion. ```python def ordinary_least_squares(x, y): """Ordinary least squares estimator for linear regression. Args: x (ndarray): design matrix of shape (n_samples, n_regressors) y (ndarray): vector of measurements of shape (n_samples) Returns: ndarray: estimated parameter values of shape (n_regressors) """ ###################################################################### ## TODO for students: solve for the optimal parameter vector using OLS ###################################################################### theta_hat = np.linalg.inv(x.T @ x) @ x.T @ y # comment this out when you've filled #raise NotImplementedError("Student exercise: solve for theta_hat vector using OLS") return theta_hat ``` ```python # to_remove solution def ordinary_least_squares(x, y): """Ordinary least squares estimator for linear regression. Args: x (ndarray): design matrix of shape (n_samples, n_regressors) y (ndarray): vector of measurements of shape (n_samples) Returns: ndarray: estimated parameter values of shape (n_regressors) """ theta_hat = np.linalg.inv(x.T @ x) @ x.T @ y return theta_hat ``` ```python theta_hat = ordinary_least_squares(x, y) theta_hat ``` array([ 0.27846561, -2.01651235, -3.14249005]) Now that we have our $\boldsymbol{\hat\theta}$, we can obtain $\mathbf{\hat y}$ and thus our mean squared error. ```python y_hat = x @ theta_hat print(f"MSE = {np.mean((y - y_hat)**2):.2f}") ``` MSE = 0.57 Finally, the following code will plot a geometric visualization of the data points (blue) and the fitted plane, together with the residuals (green bars) between each data point and the plane. 
```python xx, yy = np.mgrid[-2:2:50j, -2:2:50j] y_hat_grid = np.array([xx.flatten(), yy.flatten()]).T @ theta_hat[1:] y_hat_grid = y_hat_grid.reshape((50, 50)) ax = plt.subplot(projection='3d') ax.plot(x[:,1], x[:,2], y, '.') ax.plot_surface(xx, yy, y_hat_grid, linewidth=0, alpha=0.5, color='C1', cmap=plt.get_cmap('coolwarm')) for i in range(len(x)): ax.plot((x[i, 1], x[i, 1]), (x[i, 2], x[i, 2]), (y[i], y_hat[i]), 'g-', alpha=.5) ax.set( xlabel='$x_1$', ylabel='$x_2$', zlabel='y' ) plt.tight_layout() ``` # Summary - linear regression generalizes naturally to multiple dimensions - linear algebra affords us the mathematical tools to reason about and solve such problems beyond the two dimensional case **NOTE** in practice, multidimensional least squares problems can be solved very efficiently (thanks to numerical routines such as LAPACK). ```python ```
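One small aside (not part of the original tutorial): explicitly inverting $\mathbf{X}^\top\mathbf{X}$ can be numerically fragile when the design matrix is ill-conditioned; a dedicated least-squares solver returns the same estimate more robustly.

```python
# Added aside: the same OLS estimate via a numerically stabler route.
theta_hat_lstsq, *_ = np.linalg.lstsq(x, y, rcond=None)
print(theta_hat_lstsq)                                        # should match theta_hat above
print(np.allclose(theta_hat_lstsq, ordinary_least_squares(x, y)))
```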
9391b20a84a1cdd86bd2166b9341aef2c296a0d3
384,800
ipynb
Jupyter Notebook
tutorials/W1D3_ModelFitting/hyo_W1D3_Tutorial4.ipynb
hyosubkim/course-content
30370131c42fd3bf4f84c50e9c4eaf19f3193165
[ "CC-BY-4.0" ]
null
null
null
tutorials/W1D3_ModelFitting/hyo_W1D3_Tutorial4.ipynb
hyosubkim/course-content
30370131c42fd3bf4f84c50e9c4eaf19f3193165
[ "CC-BY-4.0" ]
null
null
null
tutorials/W1D3_ModelFitting/hyo_W1D3_Tutorial4.ipynb
hyosubkim/course-content
30370131c42fd3bf4f84c50e9c4eaf19f3193165
[ "CC-BY-4.0" ]
null
null
null
741.425819
338,020
0.95171
true
2,112
Qwen/Qwen-72B
1. YES 2. YES
0.855851
0.721743
0.617705
__label__eng_Latn
0.951848
0.273466
# SVM ```python import numpy as np import sympy as sym import pandas as pd from sklearn.datasets import load_iris from sklearn.model_selection import train_test_split import matplotlib.pyplot as plt %matplotlib inline np.random.seed(1) ``` ## Simple Example Application 对于简单的数据样本例子(也就是说可以进行线性划分,且不包含噪声点) **算法:** 输入:线性可分训练集$T={(x_1,y_1),(x_2,y_2),...,(x_N,y_N)}$,其中$x_i \in \textit{X}=\textit{R},y_i \in \textit{Y}={+1,-1},i=1,2...,N$ 输出:分离超平面和分类决策函数 (1) 构造并求解约束条件最优化问题 $\underset{\alpha}{min}$ $\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\alpha_i \alpha_j y_i y_j <x_i \cdot x_j>-\sum_{i=1}^{N}\alpha_i$ s.t $\sum_{i=1}^{N}\alpha_i y_i=0$ $\alpha_i \geq 0,i=1,2,...,N$ 求得最优$\alpha^{*}=(\alpha_1^{*},\alpha_2^{*},...,\alpha_n^{*})$ 其中正分量$\alpha_j^{*}>0$就为支持向量 (2) 计算 $w^{*} = \sum_{i=1}^{N}\alpha_i^{*}y_ix_i$ 选择$\alpha^{*}$的一个正分量$\alpha_j^{*}>0$,计算 $b^{*}=y_j-\sum_{i=1}^{N}\alpha_i^{*}y_i<x_i \cdot x_j>$ (3) 求得分离超平面 $w^{*}\cdot x + b^{*}=0$ 分类决策函数: $f(x)=sign(w^{*}\cdot x + b^{*})$ 这里的sign表示:值大于0的为1,值小于0的为-1. ```python def loadSimpleDataSet(): """ 从文本加载数据集 返回: 数据集和标签集 """ train_x = np.array([[3,3],[4,3],[1,1]]).T train_y = np.array([[1,1,-1]]).T return train_x,train_y ``` ```python train_x,train_y = loadSimpleDataSet() print("train_x shape is : ",train_x.shape) print("train_y shape is : ",train_y.shape) ``` train_x shape is : (2, 3) train_y shape is : (3, 1) ```python plt.scatter(train_x[0,:],train_x[1,:],c=np.squeeze(train_y)) ``` 为了方便计算$\sum_{i=1}^{N}\sum_{j=1}^{N}\alpha_i \alpha_j y_i y_j <x_i \cdot x_j>$ 我们需要先求出train_x、train_y、alphas的内积然后逐个元素相乘然后累加. 计算train_x的内积 ```python Inner_train_x = np.dot(train_x.T,train_x) print("Train_x is:\n",train_x) print("Inner train x is:\n",Inner_train_x) ``` Train_x is: [[3 4 1] [3 3 1]] Inner train x is: [[18 21 6] [21 25 7] [ 6 7 2]] 计算train_y的内积 ```python Inner_train_y = np.dot(train_y,train_y.T) print("Train y is:\n",train_y) print("Inner train y is:\n",Inner_train_y) ``` Train y is: [[ 1] [ 1] [-1]] Inner train y is: [[ 1 1 -1] [ 1 1 -1] [-1 -1 1]] 计算alphas(拉格朗日乘子)的内积,但是要注意,我们在这里固定拉格朗日乘子中的某两个alpha之外的其他alpha,因为根据理论知识,我们需要固定两个alpha之外的其他alphas,然后不断的再一堆alphas中去迭代更新这两个alpha.由于这个例子过于简单,且只有3个样本点(事实上$\alpha_1,\alpha_3$就是支持向量). 将约束条件带入其中: $\sum_{i=1}^3\alpha_i y_i=\alpha_1y_1+\alpha_2y_2+\alpha_3y_3 =0 \Rightarrow $ -- $\alpha_3 = -(\alpha_1y_1+\alpha_2y_2)/y_3 $ -- ```python alphas_sym = sym.symbols('alpha1:4') alphas = np.array([alphas_sym]).T alphas[-1]= -np.sum(alphas[:-1,:]*train_y[:-1,:]) / train_y[-1,:] Inner_alphas = np.dot(alphas,alphas.T) print("alphas is: \n",alphas) print("Inner alphas is:\n",Inner_alphas) ``` alphas is: [[alpha1] [alpha2] [1.0*alpha1 + 1.0*alpha2]] Inner alphas is: [[alpha1**2 alpha1*alpha2 alpha1*(1.0*alpha1 + 1.0*alpha2)] [alpha1*alpha2 alpha2**2 alpha2*(1.0*alpha1 + 1.0*alpha2)] [alpha1*(1.0*alpha1 + 1.0*alpha2) alpha2*(1.0*alpha1 + 1.0*alpha2) (1.0*alpha1 + 1.0*alpha2)**2]] 现在求最优的$\alpha^{*}=(\alpha_1^{*},\alpha_2^{*},...,\alpha_n^{*})$ $\underset{\alpha}{min}$ $\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\alpha_i \alpha_j y_i y_j <x_i \cdot x_j>-\sum_{i=1}^{N}\alpha_i$ **注意:** 这里需要使用sympy库,详情请见[柚子皮-Sympy符号计算库](https://blog.csdn.net/pipisorry/article/details/39123247) 或者[Sympy](https://www.sympy.org/en/index.html) ```python def compute_dual_function(alphas,Inner_alphas,Inner_train_x,Inner_train_y): """ Parameters: alphas: initialization lagrange multiplier,shape is (n,1). n:number of example. Inner_alphas: Inner product of alphas. Inner_train_x: Inner product of train x set. Inner_train_y: Inner product of train y set. 
simplify : simplify compute result of dual function. return: s_alpha: result of dual function """ s_alpha = sym.simplify(1/2*np.sum(Inner_alphas * Inner_train_x*Inner_train_y) - (np.sum(alphas))) return s_alpha ``` ```python s_alpha = compute_dual_function(alphas,Inner_alphas,Inner_train_x,Inner_train_y) print('s_alpha is:\n ',s_alpha) ``` s_alpha is: 4.0*alpha1**2 + 10.0*alpha1*alpha2 - 2.0*alpha1 + 6.5*alpha2**2 - 2.0*alpha2 现在对每一个alpha求偏导令其等于0. ```python def Derivative_alphas(alphas,s_alpha): """ Parameters: alphas: lagrange multiplier. s_alpha: dual function return: bool value. True: Meet all constraints,means,all lagrange multiplier >0 False:Does not satisfy all constraints,means some lagrange multiplier <0. """ cache_derivative_alpha = [] for alpha in alphas.squeeze()[:-1]: # remove the last element. derivative = s_alpha.diff(alpha) # diff: derivative cache_derivative_alpha.append(derivative) derivative_alpha = sym.solve(cache_derivative_alpha,set=True) # calculate alphas. print('derivative_alpha is: ',derivative_alpha) # check alpha > 0 check_alpha_np = np.array(list(derivative_alpha[1])) > 0 return check_alpha_np.all() ``` ```python check_alpha = Derivative_alphas(alphas,s_alpha) print("Constraint lagrange multiplier is: ",check_alpha) ``` derivative_alpha is: ([alpha1, alpha2], {(1.50000000000000, -1.00000000000000)}) Constraint lagrange multiplier is: False 可以看出如果是对于$\alpha_2<0$,不满足$\alpha_2 \geqslant 0 $所以我们不能使用极值 ------------- 由于在求偏导的情况下不满足拉格朗日乘子约束条件,所以我们将固定某一个$\alpha_i$,将其他的$\alpha$令成0,使偏导等于0求出当前$\alpha_i$,然后在带入到对偶函数中求出最后的结果.比较所有的结果挑选出结果最小的值所对应的$\alpha_i$,在从中选出$\alpha_i>0$的去求我们最开始固定的其他alphas. **算法:** 输入: 拉格朗日乘子数组,数组中不包括最开始固定的其他alphas 输出: 最优的拉格朗日乘子,也就是支持向量 (1) 将输入的拉格朗日数组扩增一行或者一列并初始化为0 - alphas_zeros = np.zeros((alphas.shape[0],1))[:-1] - alphas_add_zeros = np.c_[alphas[:-1],alphas_zeros] (2) 将扩增后的数组进行"mask"掩模处理,目的是为了将一个$\alpha$保留,其他的$\alpha$全部为0. - mask_alpha = np.ma.array(alphas_add_zeros, mask=False) # create mask array. - mask_alpha.mask[i] = True # masked alpha - 在sysmpy中使用掩模处理会报出一个警告:将掩模值处理为None,其实问题不大,应该不会改变对偶方程中的alpha对象 (3) 使用掩模后的数组放入对偶函数中求偏导$\alpha_i$,并令其等于0求出$\alpha_i$ (4) 将求出的$\alpha_i$和其他都等于0的alphas带入到对偶函数中求出值 (5) 比较所有的对偶函数中的值,选取最小值所对应的alpha组.计算最开始固定值的alphas. ```python def choose_best_alphas(alphas,s_alpha): """ Parameters: alphas: Lagrange multiplier. s_alpha: dual function return: best_vector: best support vector machine. """ # add col in alphas,and initialize value equal 0. about 2 lines. alphas_zeros = np.zeros((alphas.shape[0],1))[:-1] alphas_add_zeros = np.c_[alphas[:-1],alphas_zeros] # cache some parameters. cache_alphas_add = np.zeros((alphas.shape[0],1))[:-1] # cache derivative alphas. cache_alphas_compute_result = np.zeros((alphas.shape[0],1))[:-1] # cache value in dual function result cache_alphas_to_compute = alphas_add_zeros.copy() # get minmux dual function value,cache this values. for i in range(alphas_add_zeros.shape[0]): mask_alpha = np.ma.array(alphas_add_zeros, mask=False) # create mask array. mask_alpha.mask[i] = True # masked alpha value = sym.solve(s_alpha.subs(mask_alpha).diff())[0] # calculate alpha_i cache_alphas_add[i] = value cache_alphas_to_compute[i][1] = value cache_alphas_compute_result[i][0] = s_alpha.subs(cache_alphas_to_compute) # calculate finally dual function result. cache_alphas_to_compute[i][1] = 0 # make sure other alphas equal 0. 
min_alpha_value_index = cache_alphas_compute_result.argmin() best_vector =np.array([cache_alphas_add[min_alpha_value_index]] + [- cache_alphas_add[min_alpha_value_index] / train_y[-1]]) return [min_alpha_value_index]+[2],best_vector ``` ```python min_alpha_value_index,best_vector = choose_best_alphas(alphas,s_alpha) print(min_alpha_value_index) print('support vector machine is:',alphas[min_alpha_value_index]) ``` [0, 2] support vector machine is: [[alpha1] [1.0*alpha1 + 1.0*alpha2]] /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/sympy/core/sympify.py:318: UserWarning: Warning: converting a masked element to nan. return sympify(coerce(a)) $w^{*} = \sum_{i=1}^{N}\alpha_i^{*}y_ix_i$ ```python w = np.sum(np.multiply(best_vector , train_y[min_alpha_value_index].T) * train_x[:,min_alpha_value_index],axis=1) print("W is: ",w) ``` W is: [0.5 0.5] 选择$\alpha^{*}$的一个正分量$\alpha_j^{*}>0$,计算 $b^{*}=y_j-\sum_{i=1}^{N}\alpha_i^{*}y_i<x_i \cdot x_j>$ 这里我选alpha1 ```python b = train_y[0]-np.sum(best_vector.T * np.dot(train_x[:,min_alpha_value_index].T,train_x[:,min_alpha_value_index])[0] * train_y[min_alpha_value_index].T) print("b is: ",b) ``` b is: [-2.] 所以超平面为: $f(x)=sign[wx+b]$ # SMO 这里实现简单版本的smo算法,这里所谓的简单版本指的是速度没有SVC快,参数自动选择没有SCV好等.但是通过调节参数一样可以达到和SVC差不多的结果 ### 算法: #### 1.SMO选择第一个变量的过程为选择一个违反KKT条件最严重的样本点为$\alpha_1$,即违反以下KKT条件: $\alpha_i=0\Leftrightarrow y_ig(x_i)\geqslant1$ $0<\alpha_i<C\Leftrightarrow y_ig(x_i)=1$ $\alpha_i=C \Leftrightarrow y_ig(x_i)\leqslant1$ 其中: $g(x_i)=\sum_{j=1}^{N}\alpha_iy_iK(x_i,x_j)+b$ **注意:** - 初始状态下$\alpha_i$定义为0,且和样本数量一致. - 该检验是在$\varepsilon$范围内的 - 在检验过程中我们先遍历所有满足$0<\alpha_i<C$的样本点,即在间隔边界上的支持向量点,找寻违反KKT最严重的样本点 - 如果没有满足$0<\alpha_i<C$则遍历所有的样本点,找违反KKT最严重的样本点 - 这里的*违反KKT最严重的样本点*可以选择为$y_ig(x_i)$最小的点作为$\alpha_1$ #### 2.SMO选择第二个变量的过程为希望$\alpha_2$有足够的变化 因为$\alpha_2^{new}$是依赖于$|E_1-E_2|$的,并且使得|E_1-E_2|最大,为了加快计算,一种简单的做法是: 如果$E_1$是正的,那么选择最小的$E_i$作为$E_2$,如果$E_1$是负的,那么选择最大的$E_i$作为$E_2$,为了节省计算时间,将$E_i$保存在一个列表中 **注意:** - 如果通过以上方法找到的$\alpha_2$不能使得目标函数有足够的下降,那么采用以下启发式方法继续选择$\alpha_2$,遍历在间隔边上的支持向量的点依次将其对应的变量作为$\alpha_2$试用,直到目标函数有足够的下降,若还是找不到使得目标函数有足够下降,则抛弃第一个$\alpha_1$,在重新选择另一个$\alpha_1$ - 这个简单版本的SMO算法并没有处理这种特殊情况 #### 3.计算$\alpha_1^{new},\alpha_2^{new}$ 计算$\alpha_1^{new},\alpha_2^{new}$,是为了计算$b_i,E_i$做准备. 3.1 计算$\alpha_2$的边界: - if $y_1 \neq y_2$:$L=max(0,\alpha_2^{old}-\alpha_1^{old})$,$H=min(C,C+\alpha_2^{old}-\alpha_1^{old})$ - if $y_1 = y_2$:$L=max(0,\alpha_2^{old}+\alpha_1^{old}-C)$,$H=min(C,C+\alpha_2^{old}+\alpha_1^{old})$ 3.2 计算$\alpha_2^{new,unc} = \alpha_2^{old}+\frac{y_2(E_1-E_2)}{\eta}$ 其中: $\eta = K_{11}+K_{22}-2K_{12}$,这里的$K_n$值得是核函数,可以是高斯核,多项式核等. 3.3 修剪$\alpha_2$ $\alpha_2^{new}=\left\{\begin{matrix} H, &\alpha_2^{new,unc}>H \\ \alpha_2^{new,unc},& L\leqslant \alpha_2^{new,unc}\leqslant H \\ L,& \alpha_2^{new,unc}<L \end{matrix}\right.$ 3.3 计算$\alpha_1^{new}$ $\alpha_1^{new}=\alpha_1^{old}+y_1y_2(\alpha_2^{old}-\alpha_2^{new})$ #### 4.计算阈值b和差值$E_i$ $b_1^{new}=-E_1-y_1K_{11}(\alpha_1^{new}-\alpha_1^{old})-y_2K_{21}(\alpha_2^{new}-\alpha_2^{old})+b^{old}$ $b_2^{new}=-E_2-y_1K_{12}(\alpha_1^{new}-\alpha_1^{old})-y_2K_{22}(\alpha_2^{new}-\alpha_2^{old})+b^{old}$ 如果$\alpha_1^{new},\alpha_2^{new}$,同时满足条件$0<\alpha_i^{new}<C,i=1,2$, 那么$b_1^{new}=b_2^{new}=b^{new}$. 如果$\alpha_1^{new},\alpha_2^{new}$是0或者C,那么$b_1^{new},b_2^{new}$之间的数 都符合KKT条件阈值,此时取中点为$b^{new}$ $E_i^{new}=(\sum_sy_j\alpha_jK(x_i,x_j))+b^{new}-y_i$ 其中s是所有支持向量$x_j$的集合. #### 5. 
更新参数 更新$\alpha_i,E_i,b_i$ #### 注意: 在训练完毕后,绝大部分的$\alpha_i$的分量都为0,只有极少数的分量不为0,那么那些不为0的分量就是支持向量 ### SMO简单例子 加载数据,来自于scikit中的的鸢尾花数据,其每次请求是变化的 ```python # data def create_data(): iris = load_iris() df = pd.DataFrame(iris.data, columns=iris.feature_names) df['label'] = iris.target df.columns = ['sepal length', 'sepal width', 'petal length', 'petal width', 'label'] data = np.array(df.iloc[:100, [0, 1, -1]]) for i in range(len(data)): if data[i,-1] == 0: data[i,-1] = -1 return data[:,:2], data[:,-1] ``` ```python X, y = create_data() # 划分训练样本和测试样本 X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25) ``` ```python plt.scatter(X[:,0],X[:,1],c=y) ``` ### 开始搭建SMO算法代码 ```python class SVM: def __init__(self,max_iter = 100,kernel = 'linear',C=1.,is_print=False,sigma=1): """ Parameters: max_iter:最大迭代数 kernel:核函数,这里只定义了"线性"和"高斯" sigma:高斯核函数的参数 C:惩罚项,松弛变量 is_print:是否打印 """ self.max_iter = max_iter self.kernel = kernel self.C = C # 松弛变量C self.is_print = is_print self.sigma = sigma def init_args(self,features,labels): """ self.m:样本数量 self.n:特征数 """ self.m,self.n = features.shape self.X = features self.Y = labels self.b = 0. # 将E_i 保存在一个列表中 self.alpha = np.zeros(self.m) + 0.0001 self.E = [self._E(i) for i in range(self.m)] def _g(self,i): """ 预测值g(x_i) """ g_x = np.sum(self.alpha*self.Y*self._kernel(self.X[i],self.X)) + self.b return g_x def _E(self,i): """ E(x) 为g(x) 对输入x的预测值和y的差值 """ g_x = self._g(i) - self.Y[i] return g_x def _kernel(self,x1,x2): """ 计算kernel """ if self.kernel == "linear": return np.sum(np.multiply(x1,x2),axis=1) if self.kernel == "Gaussion": return np.sum(np.exp(-((x1-x2)**2)/(2*self.sigma)),axis=1) def _KKT(self,i): """ 判断KKT """ y_g = np.round(np.float64(np.multiply(self._g(i),self.Y[i]))) # 存在精度问题也就是说在epsilon范围内,所以这里使用round if self.alpha[i] == 0: return y_g >= 1,y_g elif 0<self.alpha[i]<self.C: return y_g == 1,y_g elif self.alpha[i] == self.C: return y_g <=1,y_g else: return ValueError def _init_alpha(self): """ 外层循环首先遍历所有满足0<a<C的样本点,检验是否满足KKT 0<a<C的样本点为间隔边界上支持向量点 """ index_array = np.where(np.logical_and(self.alpha>0,self.alpha<self.C))[0] # 因为这里where的特殊性,所以alpha必须是(m,) if len(index_array) !=0: cache_list = [] for i in index_array: bool_,y_g = self._KKT(i) if not bool_: cache_list.append((y_g,i)) # 如果没有则遍历整个样本 else: cache_list = [] for i in range(self.m): bool_,y_g = self._KKT(i) if not bool_: cache_list.append((y_g,i)) #获取违反KKT最严重的样本点,也就是g(x_i)*y_i 最小的 min_i = sorted(cache_list,key=lambda x:x[0])[0][1] # 选择第二个alpha2 E1 = self.E[min_i] if E1 > 0: j = np.argmin(self.E) else: j = np.argmax(self.E) return min_i,j def _prune(self,alpha,L,H): """ 修剪alpha """ if alpha > H: return H elif L<=alpha<=H: return alpha elif alpha < L: return L else: return ValueError def fit(self,features, labels): self.init_args(features, labels) for t in range(self.max_iter): # 开始寻找alpha1,和alpha2 i1,i2 = self._init_alpha() # 计算边界 if self.Y[i1] == self.Y[i2]: # 同号 L = max(0,self.alpha[i2]+self.alpha[i1]-self.C) H = min(self.C,self.alpha[i2]+self.alpha[i1]) else: L = max(0,self.alpha[i2]-self.alpha[i1]) H = min(self.C,self.C+self.alpha[i2]-self.alpha[i1]) # 计算阈值b_i 和差值E_i E1 = self.E[i1] E2 = self.E[i2] eta = self._kernel(self.X[np.newaxis,i1],self.X[np.newaxis,i1]) + \ self._kernel(self.X[np.newaxis,i2],self.X[np.newaxis,i2]) - \ 2 * self._kernel(self.X[np.newaxis,i1],self.X[np.newaxis,i2]) if eta <=0: continue alpha2_new_nuc = self.alpha[i2] + (self.Y[i2] * (E1-E2) /eta) # 修剪 alpha2_new_nuc alpha2_new = self._prune(alpha2_new_nuc,L,H) alpha1_new = self.alpha[i1] + self.Y[i1] * self.Y[i2] * 
(self.alpha[i2]-alpha2_new) # 计算b_i b1_new = -E1-self.Y[i1]*self._kernel(self.X[np.newaxis,i1],self.X[np.newaxis,i1])*(alpha1_new - self.alpha[i1])\ - self.Y[i2] * self._kernel(self.X[np.newaxis,i2],self.X[np.newaxis,i1])*(alpha2_new - self.alpha[i2]) + self.b b2_new = -E2-self.Y[i1]*self._kernel(self.X[np.newaxis,i1],self.X[np.newaxis,i2])*(alpha1_new - self.alpha[i1])\ - self.Y[i2] * self._kernel(self.X[np.newaxis,i2],self.X[np.newaxis,i2])*(alpha2_new - self.alpha[i2]) + self.b if 0 < alpha1_new < self.C: b_new = b1_new elif 0 < alpha2_new < self.C: b_new = b2_new else: # 选择中点 b_new = (b1_new + b2_new) / 2 # 更新参数 self.alpha[i1] = alpha1_new self.alpha[i2] = alpha2_new self.b = b_new self.E[i1] = self._E(i1) self.E[i2] = self._E(i2) if self.is_print: print("Train Done!") def predict(self,data): predict_y = np.sum(self.alpha*self.Y*self._kernel(data,self.X)) + self.b return np.sign(predict_y)[0] def score(self,test_X,test_Y): m,n = test_X.shape count = 0 for i in range(m): predict_i = self.predict(test_X[i]) if predict_i == np.float(test_Y[i]): count +=1 return count / m ``` 由于鸢尾花数据每次请求都会变化,我们在这里取正确率的均值与SVC进行对比 ```python count = 0 failed2 = [] for i in range(20): X, y = create_data() X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25) svm = SVM(max_iter=200,C=2,kernel='linear') svm.fit(X_train,y_train) test_accourate = svm.score(X_test,y_test) train_accourate = svm.score(X_train,y_train) if test_accourate < 0.8: failed2.append((X_train, X_test, y_train, y_test)) # 储存正确率过低的样本集 print("Test accourate:",test_accourate) print("Train accourate:",train_accourate) print('--------------------------') count += test_accourate print("Test average accourate is: ",count/20) ``` Test accourate: 0.88 Train accourate: 0.9466666666666667 -------------------------- Test accourate: 0.92 Train accourate: 0.9733333333333334 -------------------------- Test accourate: 0.92 Train accourate: 0.84 -------------------------- Test accourate: 0.84 Train accourate: 0.8266666666666667 -------------------------- Test accourate: 0.8 Train accourate: 0.7866666666666666 -------------------------- Test accourate: 0.96 Train accourate: 1.0 -------------------------- Test accourate: 1.0 Train accourate: 1.0 -------------------------- Test accourate: 1.0 Train accourate: 0.96 -------------------------- Test accourate: 0.84 Train accourate: 0.9066666666666666 -------------------------- Test accourate: 1.0 Train accourate: 0.9333333333333333 -------------------------- Test accourate: 0.92 Train accourate: 0.8 -------------------------- Test accourate: 0.64 Train accourate: 0.88 -------------------------- Test accourate: 0.96 Train accourate: 1.0 -------------------------- Test accourate: 0.48 Train accourate: 0.6933333333333334 -------------------------- Test accourate: 0.76 Train accourate: 0.8 -------------------------- Test accourate: 1.0 Train accourate: 0.9733333333333334 -------------------------- Test accourate: 0.48 Train accourate: 0.5066666666666667 -------------------------- Test accourate: 0.96 Train accourate: 0.9333333333333333 -------------------------- Test accourate: 0.92 Train accourate: 0.96 -------------------------- Test accourate: 1.0 Train accourate: 1.0 -------------------------- Test average accourate is: 0.8640000000000001 可以发现,有些数据的正确率较高,有些正确率非常的底,我们将低正确率的样本保存,取出进行试验 ```python failed2X_train, failed2X_test, failed2y_train, failed2y_test= failed2[2] ``` 我们可以看出,在更改C后,正确率依然是客观的,这说明简单版本的SMO算法是可行的.只是我们在测算 平均正确率的时候,C的值没有改变,那么可能有些样本的C值不合适. 
```python svm = SVM(max_iter=200,C=5,kernel='linear') svm.fit(failed2X_train,failed2y_train) accourate = svm.score(failed2X_test,failed2y_test) accourate ``` 0.88 使用Scikit-SVC测试 ### Scikit-SVC 基于scikit-learn的[SVM](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html#sklearn.svm.SVC.decision_function) 例子1: ```python from sklearn.svm import SVC count = 0 for i in range(10): X, y = create_data() X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25) clf = SVC(kernel="linear",C=2) clf.fit(X_train, y_train) accourate = clf.score(X_test, y_test) print("accourate",accourate) count += accourate print("average accourate is: ",count/10) ``` accourate 0.96 accourate 0.96 accourate 1.0 accourate 1.0 accourate 1.0 accourate 0.96 accourate 0.96 accourate 1.0 accourate 0.96 accourate 0.96 average accourate is: 0.9760000000000002 当然由于是简单版本的SMO算法,所以平均正确率肯定没有SVC高,但是我们可以调节C和kernel来使得正确率提高 ## Multilabel classification 多标签:一个实例可以有多个标签比如一个电影可以是动作,也可以是爱情. 多类分类(multi-class classification):有多个类别需要分类,但一个样本只属于一个类别 多标签分类(multi-label classificaton):每个样本有多个标签 对于多类分类,最后一层使用softmax函数进行预测,训练阶段使用categorical_crossentropy作为损失函数 对于多标签分类,最后一层使用sigmoid函数进行预测,训练阶段使用binary_crossentropy作为损失函数 This example simulates a multi-label document classification problem. The dataset is generated randomly based on the following process: - pick the number of labels: n ~ Poisson(n_labels) - n times, choose a class c: c ~ Multinomial(theta) - pick the document length: k ~ Poisson(length) - k times, choose a word: w ~ Multinomial(theta_c) In the above process, rejection sampling is used to make sure that n is more than 2, and that the document length is never zero. Likewise, we reject classes which have already been chosen. The documents that are assigned to both classes are plotted surrounded by two colored circles. The classification is performed by projecting to the first two principal components found by [PCA](http://www.cnblogs.com/jerrylead/archive/2011/04/18/2020209.html) and [CCA](https://files-cdn.cnblogs.com/files/jerrylead/%E5%85%B8%E5%9E%8B%E5%85%B3%E8%81%94%E5%88%86%E6%9E%90.pdf) for visualisation purposes, followed by using the [sklearn.multiclass.OneVsRestClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.multiclass.OneVsRestClassifier.html#sklearn.multiclass.OneVsRestClassifier) metaclassifier using two SVCs with linear kernels to learn a discriminative model for each class. Note that PCA is used to perform an unsupervised dimensionality reduction, while CCA is used to perform a supervised one. Note: in the plot, “unlabeled samples” does not mean that we don’t know the labels (as in semi-supervised learning) but that the samples simply do not have a label. 
```python from sklearn.datasets import make_multilabel_classification from sklearn.multiclass import OneVsRestClassifier from sklearn.svm import SVC from sklearn.decomposition import PCA from sklearn.cross_decomposition import CCA ``` ```python def plot_hyperplance(clf,min_x,max_x,linestyle,label): # get the separating heyperplance # 0 = w0*x0 + w1*x1 +b w = clf.coef_[0] a = -w[0] /w[1] xx = np.linspace(min_x -5,max_x + 5) yy = a * xx -(clf.intercept_[0]) / w[1] # clf.intercept_[0] get parameter b, plt.plot(xx,yy,linestyle,label=label) ``` ```python def plot_subfigure(X,Y,subplot,title,transform): if transform == "pca": # pca执行无监督分析(不注重label) X = PCA(n_components=2).fit_transform(X) print("PCA",X.shape) elif transform == "cca": # pca 执行监督分析(注重label),也即是说会分析label之间的关系 X = CCA(n_components=2).fit(X, Y).transform(X) print("CCA",X.shape) else: raise ValueError min_x = np.min(X[:, 0]) max_x = np.max(X[:, 0]) min_y = np.min(X[:, 1]) max_y = np.max(X[:, 1]) classif = OneVsRestClassifier(SVC(kernel='linear')) # 使用 one -reset 进行SVM训练 classif.fit(X, Y) plt.subplot(2, 2, subplot) plt.title(title) zero_class = np.where(Y[:, 0]) # 找到第一类的label 索引 one_class = np.where(Y[:, 1]) # 找到第二类的 plt.scatter(X[:, 0], X[:, 1], s=40, c='gray', edgecolors=(0, 0, 0)) plt.scatter(X[zero_class, 0], X[zero_class, 1], s=160, edgecolors='b', facecolors='none', linewidths=2, label='Class 1') plt.scatter(X[one_class, 0], X[one_class, 1], s=80, edgecolors='orange', facecolors='none', linewidths=2, label='Class 2') # classif.estimators_[0],获取第一个估算器,得到第一个决策边界 plot_hyperplance(classif.estimators_[0], min_x, max_x, 'k--', 'Boundary\nfor class 1') # classif.estimators_[1],获取第二个估算器,得到第一个决策边界 plot_hyperplance(classif.estimators_[1], min_x, max_x, 'k-.', 'Boundary\nfor class 2') plt.xticks(()) plt.yticks(()) plt.xlim(min_x - .5 * max_x, max_x + .5 * max_x) plt.ylim(min_y - .5 * max_y, max_y + .5 * max_y) if subplot == 2: plt.xlabel('First principal component') plt.ylabel('Second principal component') plt.legend(loc="upper left") ``` **make_multilabel_classification:** make_multilabel_classification(n_samples=100, n_features=20, n_classes=5, n_labels=2, length=50, allow_unlabeled=True, sparse=False, return_indicator='dense', return_distributions=False, random_state=None) ```python plt.figure(figsize=(8, 6)) # If ``True``, some instances might not belong to any class.也就是说某些实例可以并不属于任何标签([[0,0]]),使用hot形式 X, Y = make_multilabel_classification(n_classes=2, n_labels=1, allow_unlabeled=True, random_state=1) print("Original:",X.shape) plot_subfigure(X, Y, 1, "With unlabeled samples + CCA", "cca") plot_subfigure(X, Y, 2, "With unlabeled samples + PCA", "pca") X, Y = make_multilabel_classification(n_classes=2, n_labels=1, allow_unlabeled=False, random_state=1) print("Original:",X.shape) plot_subfigure(X, Y, 3, "Without unlabeled samples + CCA", "cca") plot_subfigure(X, Y, 4, "Without unlabeled samples + PCA", "pca") plt.subplots_adjust(.04, .02, .97, .94, .09, .2) plt.show() ``` 由于是使用多标签(也就是说一个实例可以有多个标签),无论是标签1还是标签2还是未知标签(“没有标签的样本”).图中直观来看应该是CCA会由于PCA(无论是有没有采用"没有标签的样本"),因为CCA考虑了label之间的关联. 因为我们有2个标签在实例中,所以我们能够绘制2条决策边界(使用classif.estimators_[index])获取,并使用$x_1 = \frac{w_0}{w_1}x_1-\frac{b}{w_1}$绘制决策边界
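As a small addition (not in the original notebook), scikit-learn's fitted `SVC` exposes exactly the dual quantities discussed above: the indices of the support vectors (the samples with $\alpha_i > 0$), the products $\alpha_i y_i$, and, for a linear kernel, the resulting $w$ and $b$ of the separating hyperplane.

```python
# Added example: reading the support vectors and dual coefficients off a fitted SVC.
X, y = create_data()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
clf = SVC(kernel="linear", C=2)
clf.fit(X_train, y_train)
print(clf.support_)               # indices of the support vectors (alpha_i > 0)
print(clf.dual_coef_)             # alpha_i * y_i for each support vector
print(clf.coef_, clf.intercept_)  # w and b of the separating hyperplane
```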
4adc355774bba553146fd29ec28d2bdccc339d2c
229,148
ipynb
Jupyter Notebook
5-3 Support vector machines(Application01).ipynb
woaij100/Classic_machine_learning
3bb29f5b7449f11270014184d999171a1c7f5e71
[ "Apache-2.0" ]
77
2018-12-14T02:09:06.000Z
2020-03-07T03:47:22.000Z
5-3 Support vector machines(Application01).ipynb
woaij100/Classic_machine_learning
3bb29f5b7449f11270014184d999171a1c7f5e71
[ "Apache-2.0" ]
null
null
null
5-3 Support vector machines(Application01).ipynb
woaij100/Classic_machine_learning
3bb29f5b7449f11270014184d999171a1c7f5e71
[ "Apache-2.0" ]
10
2019-03-05T09:50:55.000Z
2019-08-07T01:37:45.000Z
161.371831
158,104
0.877909
true
10,082
Qwen/Qwen-72B
1. YES 2. YES
0.803174
0.699254
0.561623
__label__eng_Latn
0.26224
0.143168
```python %matplotlib inline ``` Bad key "text.kerning_factor" on line 4 in C:\Users\sensio\miniconda3\lib\site-packages\matplotlib\mpl-data\stylelib\_classic_test_patch.mplstyle. You probably need to get an updated matplotlibrc file from https://github.com/matplotlib/matplotlib/blob/v3.1.3/matplotlibrc.template or from the matplotlib source distribution # How nangs works In this notebook we show how nangs works. We will solve the following PDE: \begin{equation} \frac{\partial \phi}{\partial t} + u \frac{\partial \phi}{\partial x} = 0 \end{equation} The independent variables (i.e, $x$ and $t$) are used as input values for the NN, and the solution (i.e. $\phi$) is the output. In order to find the solution, at each step the NN outputs are derived w.r.t the inputs. Then, a loss function that matches the PDE is built and the weights are updated accordingly. If the loss function goes to zero, we can assume that our NN is indeed the solution to our PDE. ```python # imports import numpy as np import matplotlib.pyplot as plt import torch device = "cuda" if torch.cuda.is_available() else "cpu" torch.__version__ ``` '1.5.0' ## Define your data To solve the PDE we first need a set of points to evaluate it, we will use this points as the dataset for training de NN. ```python # define the mesh x = np.linspace(0,1,20) t = np.linspace(0,1,30) # combine all points m = np.meshgrid(x, t) m = np.stack(m, -1).reshape(-1, 2) m.shape ``` (600, 2) ```python class Dataset(torch.utils.data.Dataset): def __init__(self, X): self.X = torch.from_numpy(X).to(device).float() def __len__(self): return len(self.X) def __getitem__(self, ix): return self.X[ix] dataset = Dataset(m) len(dataset) ``` 600 ## Define your solution topology We use a Multilayer Perceptron to approximate the solution to the PDE ```python # solution topology class Sine(torch.nn.Module): def __init__(self): super().__init__() def forward(self, x): return torch.sin(x) def block(i, o): fc = torch.nn.Linear(i, o) return torch.nn.Sequential( Sine(), torch.nn.Linear(i, o) ) class MLP(torch.nn.Module): def __init__(self, inputs, outputs, layers, neurons): super().__init__() fc_in = torch.nn.Linear(inputs, neurons) fc_hidden = [ block(neurons, neurons) for layer in range(layers-1) ] fc_out = block(neurons, outputs) self.mlp = torch.nn.Sequential( fc_in, *fc_hidden, fc_out ) def forward(self, x): return self.mlp(x) mlp = MLP(2, 1, 3, 100) mlp ``` MLP( (mlp): Sequential( (0): Linear(in_features=2, out_features=100, bias=True) (1): Sequential( (0): Sine() (1): Linear(in_features=100, out_features=100, bias=True) ) (2): Sequential( (0): Sine() (1): Linear(in_features=100, out_features=100, bias=True) ) (3): Sequential( (0): Sine() (1): Linear(in_features=100, out_features=1, bias=True) ) ) ) ```python # check output shape mlp(torch.randn(100,2)).shape ``` torch.Size([100, 1]) ## Boundary conditions We can attempt to solve our PDE at this points, but we would obtain a trivial solution. Instead, we need to specify a set of boundary conditions. 
```python # initial condition (t = 0) t0 = np.array([0.]) m0 = np.meshgrid(x, t0) m0 = np.stack(m0, -1).reshape(-1, 2) p0 = np.sin(2*np.pi*x) plt.plot(x, p0) plt.grid(True) plt.xlabel('x') plt.ylabel('$p_0$') plt.show() ``` ```python class DirichletDataset(torch.utils.data.Dataset): def __init__(self, X, Y): assert len(X) == len(Y) self.X = torch.from_numpy(X).to(device).float() self.Y = torch.from_numpy(Y).to(device).float() def __len__(self): return len(self.X) def __getitem__(self, ix): return self.X[ix], self.Y[ix] # we use the names to indicate the order of the variables in the data initial_condition_dataset = DirichletDataset(m0, p0.reshape(-1, 1)) len(initial_condition_dataset) ``` 20 ```python # boundary conditions (peridic conditions at x = 0 and x = 1) xb0 = np.array([0]) mb0 = np.meshgrid(xb0, t) mb0 = np.stack(mb0, -1).reshape(-1, 2) mb0.shape ``` (30, 2) ```python xb1 = np.array([1]) mb1 = np.meshgrid(xb1, t) mb1 = np.stack(mb1, -1).reshape(-1, 2) mb1.shape ``` (30, 2) ```python boco_dataset = DirichletDataset(mb0, mb1) len(boco_dataset) ``` 30 ## Solve the PDE We can now proceed with solving the PDE ```python BATCH_SIZE = 32 EPOCHS = 50 U = 1 dataloaders = { 'inner': torch.utils.data.DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=True), 'initial': torch.utils.data.DataLoader(initial_condition_dataset, batch_size=BATCH_SIZE, shuffle=True), 'periodic': torch.utils.data.DataLoader(boco_dataset, batch_size=BATCH_SIZE, shuffle=True) } mlp = MLP(2, 1, 3, 128) mlp.to(device) optimizer = torch.optim.Adam(mlp.parameters()) criterion = torch.nn.MSELoss() hist = [] for epoch in range(1, EPOCHS+1): # iterate over the internal points in batches for batch in dataloaders['inner']: X = batch optimizer.zero_grad() # optimize for boundary points losses = 0 for batch in dataloaders['initial']: x, y = batch p = mlp(x) loss = criterion(p, y) loss.backward() losses += loss.item() for batch in dataloaders['periodic']: x1, x2 = batch p1 = mlp(x1) p2 = mlp(x2) loss = criterion(p1, p2) loss.backward() losses += loss.item() # optimize for internal points X.requires_grad = True p = mlp(X) grads, = torch.autograd.grad(p, X, grad_outputs=p.data.new(p.shape).fill_(1), create_graph=True, only_inputs=True) dpdx, dpdt = grads[:,0], grads[:,1] pde = dpdt + U*dpdx loss = pde.pow(2).mean() loss.backward() optimizer.step() losses += loss.item() hist.append(losses) print(f"Epoch {epoch}/{EPOCHS} loss {losses:.5f}") ``` Epoch 1/50 loss 0.44584 Epoch 2/50 loss 0.43346 Epoch 3/50 loss 0.42094 Epoch 4/50 loss 0.40049 Epoch 5/50 loss 0.35735 Epoch 6/50 loss 0.17384 Epoch 7/50 loss 0.11475 Epoch 8/50 loss 0.09106 Epoch 9/50 loss 0.07281 Epoch 10/50 loss 0.06439 Epoch 11/50 loss 0.05049 Epoch 12/50 loss 0.02657 Epoch 13/50 loss 0.00799 Epoch 14/50 loss 0.00553 Epoch 15/50 loss 0.00346 Epoch 16/50 loss 0.00295 Epoch 17/50 loss 0.00208 Epoch 18/50 loss 0.00168 Epoch 19/50 loss 0.00208 Epoch 20/50 loss 0.00168 Epoch 21/50 loss 0.00113 Epoch 22/50 loss 0.00112 Epoch 23/50 loss 0.00136 Epoch 24/50 loss 0.00091 Epoch 25/50 loss 0.00076 Epoch 26/50 loss 0.00076 Epoch 27/50 loss 0.00062 Epoch 28/50 loss 0.00096 Epoch 29/50 loss 0.00104 Epoch 30/50 loss 0.00077 Epoch 31/50 loss 0.00039 Epoch 32/50 loss 0.00048 Epoch 33/50 loss 0.00067 Epoch 34/50 loss 0.00038 Epoch 35/50 loss 0.00036 Epoch 36/50 loss 0.00030 Epoch 37/50 loss 0.00043 Epoch 38/50 loss 0.00046 Epoch 39/50 loss 0.00067 Epoch 40/50 loss 0.00049 Epoch 41/50 loss 0.00140 Epoch 42/50 loss 0.00033 Epoch 43/50 loss 0.00036 Epoch 44/50 loss 0.00031 Epoch 45/50 loss 
0.00135 Epoch 46/50 loss 0.00040 Epoch 47/50 loss 0.00054 Epoch 48/50 loss 0.00025 Epoch 49/50 loss 0.00031 Epoch 50/50 loss 0.00045 ```python plt.plot(hist) plt.xlabel("update step") plt.ylabel("loss") plt.yscale("log") plt.grid(True) plt.show() ``` ## Evaluate your solution ```python def build_mesh(N, t): x = np.linspace(0,1,N) _t = np.array([t]) m = np.meshgrid(x, _t) m = np.stack(m, -1).reshape(-1, 2) return x, t, m x, t, m = build_mesh(20, 0) ``` ```python def eval_model(m): mlp.eval() with torch.no_grad(): p = mlp(torch.tensor(m).float().to(device)).cpu().numpy() return p p = eval_model(m) ``` ```python def plot_model(x, p, t): pe = np.sin(2.*np.pi*(x-U*t)).reshape(-1,1) plt.plot(x, pe, label="exact") plt.plot(x, p, '.k', label="solution") plt.legend() plt.grid(True) l2 = np.sqrt(np.sum((p-pe)**2)) plt.title(f"t = {t:.3f} (L2 = {l2:.5f})") plt.show() plot_model(x, p, 0) ``` ```python def eval_solution(N, t): x, t, m = build_mesh(N, t) p = eval_model(m) plot_model(x, p, t) ``` ```python from matplotlib import animation, rc rc('animation', html='html5') t = np.linspace(0,1,10) x, t, m = build_mesh(30, t) p = eval_model(m).reshape(len(t), -1) fig = plt.figure() ax = plt.subplot(111) def update_plot(i): ax.clear() pe = np.sin(2.*np.pi*(x-U*t[i])) ax.plot(x, pe, label=f"exact (u = {U})") ax.plot(x, p[i], '.k', label="solution") ax.set_xlabel("x", fontsize=14) ax.set_ylabel("p", fontsize=14, rotation=np.pi/2) ax.legend(loc="upper right") ax.grid(True) ax.set_xlim([0, 1]) ax.set_ylim([-1.2, 1.2]) l2 = np.sqrt(np.sum((p[i]-pe)**2)) ax.set_title(f"t = {t[i]:.3f} (L2 = {l2:.5f})") return ax anim = animation.FuncAnimation(fig, update_plot, frames=len(t), interval=300) plt.close() ``` ```python anim ``` ## What nangs provides you Nangs will provide you with classes and functions to abstract some of the concepts explained here in order to enable fast experimentation while allowing you to customize it to your needs.
16850ce6447851035682c50aa86c2f6c2c76c469
134,493
ipynb
Jupyter Notebook
tutorials/00_how_it_works.ipynb
adantra/nangs
7d027998cbb225ba2a5972344090e354c5e96480
[ "Apache-2.0" ]
1
2021-02-22T11:17:22.000Z
2021-02-22T11:17:22.000Z
tutorials/00_how_it_works.ipynb
adantra/nangs
7d027998cbb225ba2a5972344090e354c5e96480
[ "Apache-2.0" ]
null
null
null
tutorials/00_how_it_works.ipynb
adantra/nangs
7d027998cbb225ba2a5972344090e354c5e96480
[ "Apache-2.0" ]
2
2020-07-23T09:10:23.000Z
2021-02-22T11:14:24.000Z
91.305499
20,824
0.840356
true
3,055
Qwen/Qwen-72B
1. YES 2. YES
0.695958
0.760651
0.529381
__label__eng_Latn
0.426161
0.068259
```python from sympy import * import matplotlib.pyplot as plt import numpy as np ``` ```python alpha, gamma, a, b, c, d = symbols( 'alpha gamma a b c d', float=True ) t = Symbol('t') p = Function('p', is_real = true)(t) D = Function('D', is_real = true)(p) S = Function('S', is_real = true)(p) D = -a*p + b S = c*p + d z = Function('z', is_real = true)(p) z = D - S class BasicOperationsForGivenODE: """ The constructor stores the auxiliary arguments of the autonomous equation p'(t) = alpha * F(z(p(t))), where z(p) = D(p) - S(p) = (b-d)-(a+c)p, p = p(t), t >= 0 a, b, c, d > 0 are the parameters of the linear demand and supply functions gamma > 0 is such that p(0) = gamma F is such that F(0) = 0, F(x) = y, sign(x) = sign(y) """ def __init__(self, F): self.F = Function('F', is_real = true)(z) self.F = F self.diffeq = Eq(p.diff(t), alpha * self.F) self.sol_non = dsolve(self.diffeq) self.sol_chy = dsolve(self.diffeq, ics={p.subs(t, 0): gamma}) # In what follows: # s - a set of numeric values for each parameter. # (May be omitted if a general result is required) # chy - whether or not to take the initial condition of the autonomous equation into account def get_solution(self, chy: bool = False, s: dict = {}): """ Solves the given ODE with (or without) the Cauchy initial condition """ sol = self.sol_chy if chy else self.sol_non if isinstance(sol, Equality): return sol.subs(s) for i, sl in enumerate(sol): sol[i] = sl.subs(s) return sol def get_equation(self, s: dict = {}): """ Returns the general form of the differential equation with the input data substituted """ return factor(self.diffeq).subs(s) def get_stable_points(self, s: dict = {}): """ Solves the algebraic equation for the price function and returns the equilibrium point (the solution) """ return solveset(z, p).subs(s) @staticmethod def rhs_solution_lambdify(diffeq_sol, input_array, alph, params_dict, chy: bool = True): """ Converts the ODE solution into a function that can be applied to numpy arrays """ #sol = self.sol_chy if chy else self.sol_non sol = diffeq_sol sol_rhs = sol.rhs.subs(params_dict).subs( {alpha: alph} ) return lambdify(t, sol_rhs, 'numpy')(input_array) ``` ```python def fast_plot(x, array_of_alphas, case_string, ode_cls, sol = None): """ Plots the price function p(t) for each of the possible adaptation coefficients in the set array_of_alphas """ plt.figure(figsize=(16, 10)) plt.grid(1) plt.xlabel("Time, t", fontdict={'fontsize': 14}) plt.ylabel("Price, p(t)", fontdict={'fontsize': 14}) diffeq_sol = ode_cls.get_solution(chy = True, s = {}) if sol is None else sol for alph in array_of_alphas: plt.plot(x, ode_cls.rhs_solution_lambdify(diffeq_sol, x, alph, params_cases[case_string]), label='α = %.2f' % alph) plt.legend(loc='upper right', prop={'size': 16}) plt.title( "Price behaviour depending on adaptation coefficient change", fontdict={'fontsize': 16} ) plt.show() ``` ```python t_space = np.linspace(0, 1.5, 100) gamma_global = 10 alphas = [0.25, 1, 1.75] params_cases = { 'case1': {a: 10, b: 15, c: 5, d: 10, gamma: gamma_global}, 'case2': {a: 8, b: 12, c: 8, d: 10, gamma: gamma_global}, 'case3': {a: 6, b: 5, c: 7, d: 5, gamma: gamma_global} } F1 = Function('F1', is_real = true)(z) F1 = z F2 = Function('F2', is_real = true)(z) F2 = z*z*z ``` ```python sd = BasicOperationsForGivenODE(F1) ``` ```python F1 ``` $\displaystyle - a p{\left(t \right)} + b - c p{\left(t \right)} - d$ ```python sd.get_solution({}) ``` $\displaystyle p{\left(t \right)} = \frac{b - d + e^{\left(C_{1} - \alpha t\right) \left(a + c\right)}}{a + c}$ ```python fast_plot(t_space,
alphas, 'case1', sd) ``` ```python hd = BasicOperationsForGivenODE(F2) ``` ```python F2 ``` $\displaystyle \left(- a p{\left(t \right)} + b - c p{\left(t \right)} - d\right)^{3}$ ```python sol1, sol2 = hd.get_solution(chy=True, s={}) ``` ```python sol1 ``` $\displaystyle p{\left(t \right)} = \frac{- a^{3} \alpha b t + a^{3} \alpha d t - \frac{a^{3} b}{2 a^{3} \gamma^{2} - 4 a^{2} b \gamma + 6 a^{2} c \gamma^{2} + 4 a^{2} d \gamma + 2 a b^{2} - 8 a b c \gamma - 4 a b d + 6 a c^{2} \gamma^{2} + 8 a c d \gamma + 2 a d^{2} + 2 b^{2} c - 4 b c^{2} \gamma - 4 b c d + 2 c^{3} \gamma^{2} + 4 c^{2} d \gamma + 2 c d^{2}} + \frac{a^{3} d}{2 a^{3} \gamma^{2} - 4 a^{2} b \gamma + 6 a^{2} c \gamma^{2} + 4 a^{2} d \gamma + 2 a b^{2} - 8 a b c \gamma - 4 a b d + 6 a c^{2} \gamma^{2} + 8 a c d \gamma + 2 a d^{2} + 2 b^{2} c - 4 b c^{2} \gamma - 4 b c d + 2 c^{3} \gamma^{2} + 4 c^{2} d \gamma + 2 c d^{2}} - 3 a^{2} \alpha b c t + 3 a^{2} \alpha c d t - \frac{3 a^{2} b c}{2 a^{3} \gamma^{2} - 4 a^{2} b \gamma + 6 a^{2} c \gamma^{2} + 4 a^{2} d \gamma + 2 a b^{2} - 8 a b c \gamma - 4 a b d + 6 a c^{2} \gamma^{2} + 8 a c d \gamma + 2 a d^{2} + 2 b^{2} c - 4 b c^{2} \gamma - 4 b c d + 2 c^{3} \gamma^{2} + 4 c^{2} d \gamma + 2 c d^{2}} + \frac{3 a^{2} c d}{2 a^{3} \gamma^{2} - 4 a^{2} b \gamma + 6 a^{2} c \gamma^{2} + 4 a^{2} d \gamma + 2 a b^{2} - 8 a b c \gamma - 4 a b d + 6 a c^{2} \gamma^{2} + 8 a c d \gamma + 2 a d^{2} + 2 b^{2} c - 4 b c^{2} \gamma - 4 b c d + 2 c^{3} \gamma^{2} + 4 c^{2} d \gamma + 2 c d^{2}} - 3 a \alpha b c^{2} t + 3 a \alpha c^{2} d t - \frac{3 a b c^{2}}{2 a^{3} \gamma^{2} - 4 a^{2} b \gamma + 6 a^{2} c \gamma^{2} + 4 a^{2} d \gamma + 2 a b^{2} - 8 a b c \gamma - 4 a b d + 6 a c^{2} \gamma^{2} + 8 a c d \gamma + 2 a d^{2} + 2 b^{2} c - 4 b c^{2} \gamma - 4 b c d + 2 c^{3} \gamma^{2} + 4 c^{2} d \gamma + 2 c d^{2}} + \frac{3 a c^{2} d}{2 a^{3} \gamma^{2} - 4 a^{2} b \gamma + 6 a^{2} c \gamma^{2} + 4 a^{2} d \gamma + 2 a b^{2} - 8 a b c \gamma - 4 a b d + 6 a c^{2} \gamma^{2} + 8 a c d \gamma + 2 a d^{2} + 2 b^{2} c - 4 b c^{2} \gamma - 4 b c d + 2 c^{3} \gamma^{2} + 4 c^{2} d \gamma + 2 c d^{2}} - \frac{\sqrt{2} a \sqrt{a^{3} \alpha t + \frac{a^{3}}{2 a^{3} \gamma^{2} - 4 a^{2} b \gamma + 6 a^{2} c \gamma^{2} + 4 a^{2} d \gamma + 2 a b^{2} - 8 a b c \gamma - 4 a b d + 6 a c^{2} \gamma^{2} + 8 a c d \gamma + 2 a d^{2} + 2 b^{2} c - 4 b c^{2} \gamma - 4 b c d + 2 c^{3} \gamma^{2} + 4 c^{2} d \gamma + 2 c d^{2}} + 3 a^{2} \alpha c t + \frac{3 a^{2} c}{2 a^{3} \gamma^{2} - 4 a^{2} b \gamma + 6 a^{2} c \gamma^{2} + 4 a^{2} d \gamma + 2 a b^{2} - 8 a b c \gamma - 4 a b d + 6 a c^{2} \gamma^{2} + 8 a c d \gamma + 2 a d^{2} + 2 b^{2} c - 4 b c^{2} \gamma - 4 b c d + 2 c^{3} \gamma^{2} + 4 c^{2} d \gamma + 2 c d^{2}} + 3 a \alpha c^{2} t + \frac{3 a c^{2}}{2 a^{3} \gamma^{2} - 4 a^{2} b \gamma + 6 a^{2} c \gamma^{2} + 4 a^{2} d \gamma + 2 a b^{2} - 8 a b c \gamma - 4 a b d + 6 a c^{2} \gamma^{2} + 8 a c d \gamma + 2 a d^{2} + 2 b^{2} c - 4 b c^{2} \gamma - 4 b c d + 2 c^{3} \gamma^{2} + 4 c^{2} d \gamma + 2 c d^{2}} + \alpha c^{3} t + \frac{c^{3}}{2 a^{3} \gamma^{2} - 4 a^{2} b \gamma + 6 a^{2} c \gamma^{2} + 4 a^{2} d \gamma + 2 a b^{2} - 8 a b c \gamma - 4 a b d + 6 a c^{2} \gamma^{2} + 8 a c d \gamma + 2 a d^{2} + 2 b^{2} c - 4 b c^{2} \gamma - 4 b c d + 2 c^{3} \gamma^{2} + 4 c^{2} d \gamma + 2 c d^{2}}}}{2} - \alpha b c^{3} t + \alpha c^{3} d t - \frac{b c^{3}}{2 a^{3} \gamma^{2} - 4 a^{2} b \gamma + 6 a^{2} c \gamma^{2} + 4 a^{2} d \gamma + 2 a b^{2} - 8 a b c \gamma - 4 a b d + 6 a c^{2} 
\gamma^{2} + 8 a c d \gamma + 2 a d^{2} + 2 b^{2} c - 4 b c^{2} \gamma - 4 b c d + 2 c^{3} \gamma^{2} + 4 c^{2} d \gamma + 2 c d^{2}} + \frac{c^{3} d}{2 a^{3} \gamma^{2} - 4 a^{2} b \gamma + 6 a^{2} c \gamma^{2} + 4 a^{2} d \gamma + 2 a b^{2} - 8 a b c \gamma - 4 a b d + 6 a c^{2} \gamma^{2} + 8 a c d \gamma + 2 a d^{2} + 2 b^{2} c - 4 b c^{2} \gamma - 4 b c d + 2 c^{3} \gamma^{2} + 4 c^{2} d \gamma + 2 c d^{2}} - \frac{\sqrt{2} c \sqrt{a^{3} \alpha t + \frac{a^{3}}{2 a^{3} \gamma^{2} - 4 a^{2} b \gamma + 6 a^{2} c \gamma^{2} + 4 a^{2} d \gamma + 2 a b^{2} - 8 a b c \gamma - 4 a b d + 6 a c^{2} \gamma^{2} + 8 a c d \gamma + 2 a d^{2} + 2 b^{2} c - 4 b c^{2} \gamma - 4 b c d + 2 c^{3} \gamma^{2} + 4 c^{2} d \gamma + 2 c d^{2}} + 3 a^{2} \alpha c t + \frac{3 a^{2} c}{2 a^{3} \gamma^{2} - 4 a^{2} b \gamma + 6 a^{2} c \gamma^{2} + 4 a^{2} d \gamma + 2 a b^{2} - 8 a b c \gamma - 4 a b d + 6 a c^{2} \gamma^{2} + 8 a c d \gamma + 2 a d^{2} + 2 b^{2} c - 4 b c^{2} \gamma - 4 b c d + 2 c^{3} \gamma^{2} + 4 c^{2} d \gamma + 2 c d^{2}} + 3 a \alpha c^{2} t + \frac{3 a c^{2}}{2 a^{3} \gamma^{2} - 4 a^{2} b \gamma + 6 a^{2} c \gamma^{2} + 4 a^{2} d \gamma + 2 a b^{2} - 8 a b c \gamma - 4 a b d + 6 a c^{2} \gamma^{2} + 8 a c d \gamma + 2 a d^{2} + 2 b^{2} c - 4 b c^{2} \gamma - 4 b c d + 2 c^{3} \gamma^{2} + 4 c^{2} d \gamma + 2 c d^{2}} + \alpha c^{3} t + \frac{c^{3}}{2 a^{3} \gamma^{2} - 4 a^{2} b \gamma + 6 a^{2} c \gamma^{2} + 4 a^{2} d \gamma + 2 a b^{2} - 8 a b c \gamma - 4 a b d + 6 a c^{2} \gamma^{2} + 8 a c d \gamma + 2 a d^{2} + 2 b^{2} c - 4 b c^{2} \gamma - 4 b c d + 2 c^{3} \gamma^{2} + 4 c^{2} d \gamma + 2 c d^{2}}}}{2}}{- a^{4} \alpha t - \frac{a^{4}}{2 a^{3} \gamma^{2} - 4 a^{2} b \gamma + 6 a^{2} c \gamma^{2} + 4 a^{2} d \gamma + 2 a b^{2} - 8 a b c \gamma - 4 a b d + 6 a c^{2} \gamma^{2} + 8 a c d \gamma + 2 a d^{2} + 2 b^{2} c - 4 b c^{2} \gamma - 4 b c d + 2 c^{3} \gamma^{2} + 4 c^{2} d \gamma + 2 c d^{2}} - 4 a^{3} \alpha c t - \frac{4 a^{3} c}{2 a^{3} \gamma^{2} - 4 a^{2} b \gamma + 6 a^{2} c \gamma^{2} + 4 a^{2} d \gamma + 2 a b^{2} - 8 a b c \gamma - 4 a b d + 6 a c^{2} \gamma^{2} + 8 a c d \gamma + 2 a d^{2} + 2 b^{2} c - 4 b c^{2} \gamma - 4 b c d + 2 c^{3} \gamma^{2} + 4 c^{2} d \gamma + 2 c d^{2}} - 6 a^{2} \alpha c^{2} t - \frac{6 a^{2} c^{2}}{2 a^{3} \gamma^{2} - 4 a^{2} b \gamma + 6 a^{2} c \gamma^{2} + 4 a^{2} d \gamma + 2 a b^{2} - 8 a b c \gamma - 4 a b d + 6 a c^{2} \gamma^{2} + 8 a c d \gamma + 2 a d^{2} + 2 b^{2} c - 4 b c^{2} \gamma - 4 b c d + 2 c^{3} \gamma^{2} + 4 c^{2} d \gamma + 2 c d^{2}} - 4 a \alpha c^{3} t - \frac{4 a c^{3}}{2 a^{3} \gamma^{2} - 4 a^{2} b \gamma + 6 a^{2} c \gamma^{2} + 4 a^{2} d \gamma + 2 a b^{2} - 8 a b c \gamma - 4 a b d + 6 a c^{2} \gamma^{2} + 8 a c d \gamma + 2 a d^{2} + 2 b^{2} c - 4 b c^{2} \gamma - 4 b c d + 2 c^{3} \gamma^{2} + 4 c^{2} d \gamma + 2 c d^{2}} - \alpha c^{4} t - \frac{c^{4}}{2 a^{3} \gamma^{2} - 4 a^{2} b \gamma + 6 a^{2} c \gamma^{2} + 4 a^{2} d \gamma + 2 a b^{2} - 8 a b c \gamma - 4 a b d + 6 a c^{2} \gamma^{2} + 8 a c d \gamma + 2 a d^{2} + 2 b^{2} c - 4 b c^{2} \gamma - 4 b c d + 2 c^{3} \gamma^{2} + 4 c^{2} d \gamma + 2 c d^{2}}}$ ```python Eq(p.diff(t), alpha*(b-a*p)**3) ``` $\displaystyle \frac{d}{d t} p{\left(t \right)} = \alpha \left(- a p{\left(t \right)} + b\right)^{3}$ ```python ss = dsolve(Eq(p.diff(t), alpha*(b-a*p)**3), p) ``` ```python ss[0] ``` $\displaystyle p{\left(t \right)} = - \frac{\sqrt{2}}{2 \sqrt{a^{3} \left(C_{1} + \alpha t\right)}} + \frac{b}{a}$ ```python ss[1] ``` $\displaystyle 
p{\left(t \right)} = \frac{\sqrt{2}}{2 \sqrt{a^{3} \left(C_{1} + \alpha t\right)}} + \frac{b}{a}$ ```python ssc = dsolve(Eq(p.diff(t), (b-a*p)**3), p, ics = {p.subs(t, 0): 10}) ``` ```python ssc[0] ``` $\displaystyle p{\left(t \right)} = - \frac{\sqrt{2}}{2 \sqrt{a^{3} \left(t + \frac{1}{2 a \left(10 a - b\right)^{2}}\right)}} - \frac{b t}{a \left(- t - \frac{1}{2 a \left(10 a - b\right)^{2}}\right)} - \frac{b}{2 a^{2} \left(10 a - b\right)^{2} \left(- t - \frac{1}{2 a \left(10 a - b\right)^{2}}\right)}$ ```python (1/(2*(a+c)**3))*((gamma-((b-d)/(a+c)))**(-2)) ``` $\displaystyle \frac{1}{2 \left(a + c\right)^{3} \left(\gamma - \frac{b - d}{a + c}\right)^{2}}$ ```python diffeq_sol_z3 = Eq(p, ((b-d)/(a+c)) + 1/sqrt(2*((a+c)**3)*(alpha*t*(1/(2*(a+c)**3))*((gamma-((b-d)/(a+c)))**(-2))))) ``` ```python diffeq_sol_z3 ``` $\displaystyle p{\left(t \right)} = \frac{b - d}{a + c} + \frac{1}{\sqrt{\frac{\alpha t}{\left(\gamma - \frac{b - d}{a + c}\right)^{2}}}}$ ```python t_space[0] = t_space[0] - 0.0001 ``` ```python fast_plot(t_space, alphas, 'case1', sd, sol=diffeq_sol_z3) ``` ```python ```
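As a short usage sketch (not executed in the original notebook), the `get_stable_points` helper defined above returns the equilibrium price that the curves in the plots approach; the numeric value indicated below is what one would expect for `case1`, assuming `solveset` returns the linear root in closed form.

```python
# Equilibrium price p* solves D(p) = S(p), i.e. z(p) = 0  =>  p* = (b - d)/(a + c)
print(sd.get_stable_points())                         # expected: {(b - d)/(a + c)}
print(sd.get_stable_points(s=params_cases['case1']))  # expected: {1/3} for a=10, b=15, c=5, d=10
```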
0524afb82a41270938db12f0ed925ecde792b518
133,944
ipynb
Jupyter Notebook
diffeq/diffeq.ipynb
MilkyCousin/SymPy-and-Mathematics
2426c42329a8ae938791656001da02c15ec4c6dd
[ "MIT" ]
null
null
null
diffeq/diffeq.ipynb
MilkyCousin/SymPy-and-Mathematics
2426c42329a8ae938791656001da02c15ec4c6dd
[ "MIT" ]
null
null
null
diffeq/diffeq.ipynb
MilkyCousin/SymPy-and-Mathematics
2426c42329a8ae938791656001da02c15ec4c6dd
[ "MIT" ]
null
null
null
252.724528
54,384
0.866474
true
5,841
Qwen/Qwen-72B
1. YES 2. YES
0.896251
0.766294
0.686792
__label__kor_Hang
0.089077
0.433979
# Coloring ### Cartesian Line Plot ``` from sympy import sin, cos, sqrt, pi from sympy.abc import x, y from sympy.plotting import plot, plot_parametric, plot3d, plot3d_parametric_line, plot3d_parametric_surface ``` ``` p = plot(sin(x)) ``` If the `line_color` aesthetic is a function of arity 1 then the coloring is a function of the x value of a point. ``` p[0].line_color = lambda a : a p.show() ``` If the arity is 2 then the coloring is a function of both coordinates. ``` p[0].line_color = lambda a, b : b p.show() ``` ### Parametric Lines ``` p = plot_parametric(x*sin(x), x*cos(x), (x, 0, 10)) ``` If the arity is 1 the coloring depends on the parameter. ``` p[0].line_color = lambda a : a p.show() ``` For arity 2 the coloring depends on the coordinates. ``` p[0].line_color = lambda a, b : a p.show() ``` ``` p[0].line_color = lambda a, b : b p.show() ``` ### 3D Parametric line Arity 1 - the first parameter. Arity 2 or 3 - the first two coordinates or all coordinates. ``` p = plot3d_parametric_line(sin(x)+0.1*sin(x)*cos(7*x), cos(x)+0.1*cos(x)*cos(7*x), 0.1*sin(7*x), (x, 0, 2*pi)) ``` ``` p[0].line_color = lambda a : sin(4*a) p.show() ``` ``` p[0].line_color = lambda a, b : b p.show() ``` ``` p[0].line_color = lambda a, b, c : c p.show() ``` ### Cartesian Surface Plot ``` p = plot3d(sin(x)*y, (x, 0, 6*pi), (y, -5, 5)) ``` Arity 1, 2 or 3 colors by the first, the first two, or all coordinates. ``` p[0].surface_color = lambda a : a p.show() ``` ``` p[0].surface_color = lambda a, b : b p.show() ``` ``` p[0].surface_color = lambda a, b, c : c p.show() ``` ``` p[0].surface_color = lambda a, b, c : sqrt((a-3*pi)**2+b**2) p.show() ``` ### Parametric surface plots Arity 1 or 2 - the first or both parameters. ``` p = plot3d_parametric_surface(x*cos(4*y), x*sin(4*y), y, (x, -1, 1), (y, -1, 1)) ``` ``` p[0].surface_color = lambda a : a p.show() ``` ``` p[0].surface_color = lambda a, b : a*b p.show() ``` Arity 3 will color by coordinates. ``` p[0].surface_color = lambda a, b, c : sqrt(a**2+b**2+c**2) p.show() ```
4f661a0ada771f7d1942353c75686ef3e1a87873
7,379
ipynb
Jupyter Notebook
examples/beginner/plot_colors.ipynb
Michal-Gagala/sympy
3cc756c2af73b5506102abaeefd1b654e286e2c8
[ "MIT" ]
null
null
null
examples/beginner/plot_colors.ipynb
Michal-Gagala/sympy
3cc756c2af73b5506102abaeefd1b654e286e2c8
[ "MIT" ]
null
null
null
examples/beginner/plot_colors.ipynb
Michal-Gagala/sympy
3cc756c2af73b5506102abaeefd1b654e286e2c8
[ "MIT" ]
null
null
null
20.497222
122
0.404933
true
715
Qwen/Qwen-72B
1. YES 2. YES
0.92079
0.882428
0.81253
__label__eng_Latn
0.813288
0.726113
Jupyter Notebook desenvolvido por [Gustavo S.S.](https://github.com/GSimas) > "Na ciência, o crédito vai para o homem que convence o mundo, não para o que primeiro teve a ideia" - Francis Darwin # Capacitores e Indutores **Contrastando com um resistor, que gasta ou dissipa energia de forma irreversível, um indutor ou um capacitor armazena ou libera energia (isto é, eles têm capacidade de memória).** ## Capacitor Capacitor é um elemento passivo projetado para armazenar energia em seu campo elétrico. Um capacitor é formado por duas placas condutoras separadas por um isolante (ou dielétrico). Quando uma fonte de tensão v é conectada ao capacitor, como na Figura 6.2, a fonte deposita uma carga positiva q sobre uma placa e uma carga negativa –q na outra placa. Diz-se que o capacitor armazena a carga elétrica. A quantidade de carga armazenada, representada por q, é diretamente proporcional à tensão aplicada v de modo que: \begin{align} {\Large q = Cv} \end{align} **Capacitância é a razão entre a carga depositada em uma placa de um capacitor e a diferença de potencial entre as duas placas, medidas em farads (F).** Embora a capacitância C de um capacitor seja a razão entre a carga q por placa e a tensão aplicada v, ela não depende de q ou v, mas, sim, das dimensões físicas do capacitor \begin{align} {\Large C = \epsilon \frac{A}{d}} \end{align} Onde **A** é a área de cada placa, **d** é a distância entre as placas e **ε** é a permissividade elétrica do material dielétrico entre as placas Para obter a relação corrente-tensão do capacitor, utilizamos: \begin{align} {\Large i = C \frac{dv}{dt}} \end{align} Diz-se que os capacitores que realizam a Equação acima são lineares. Para um capacitor não linear, o gráfico da relação corrente-tensão não é uma linha reta. E embora alguns capacitores sejam não lineares, a maioria é linear. **Relação Tensão-Corrente:** \begin{align} {\Large v(t) = \frac{1}{C} \int_{t_0}^{t} i(\tau)d\tau + v(t_0)} \end{align} **A Potência Instantânea liberada para o capacitor é:** \begin{align} {\Large p = vi = Cv \frac{dv}{dt}} \end{align} **A energia armazenada no capacitor é:** \begin{align} {\Large w = \int_{-\infty}^{t} p(\tau)d\tau} \\= \\{\Large C \int_{-\infty}^{t} v \frac{dv}{d\tau}d\tau} \\= \\{\Large C \int_{v(-\infty)}^{v(t)} vdv} \\= \\{\Large \frac{1}{2} Cv^2} \end{align} Percebemos que v(-∞) = 0, pois o capacitor foi descarregado em t = -∞. Logo: \begin{align} {\Large w = \frac{1}{2} Cv^2} \\ \\{\Large w = \frac{q^2}{2C}} \end{align} As quais representam a energia armazenada no campo elétrico existente entre as placas do capacitor. Essa energia pode ser recuperada, já que um capacitor ideal não pode dissipar energia. De fato, a palavra capacitor deriva da capacidade de esse elemento armazenar energia em um campo elétrico. 1. **Um capacitor é um circuito aberto em CC.** 2. A tensão em um capacitor não pode mudar abruptamente. 3. **O capacitor ideal não dissipa energia, mas absorve potência do circuito ao armazenar energia em seu campo e retorna energia armazenada previamente ao liberar potência para o circuito.** 4. Um capacitor real, não ideal, possui uma resistência de fuga em paralelo conforme pode ser observado no modelo visto na Figura 6.8. A resistência de fuga pode chegar a valores bem elevados como 100 MΩ e pode ser desprezada para a maioria das aplicações práticas. **Exemplo 6.1** a. Calcule a carga armazenada em um capacitor de 3 pF com 20 V entre seus terminais. b. Determine a energia armazenada no capacitor. 
```python print("Exemplo 6.1") C = 3*(10**(-12)) V = 20 q = C*V print("Carga armazenada:",q,"C") w = q**2/(2*C) print("Energia armazenada:",w,"J") ``` Exemplo 6.1 Carga armazenada: 6e-11 C Energia armazenada: 6e-10 J **Problema Prático 6.1** Qual é a tensão entre os terminais de um capacitor de 4,5 uF se a carga em uma placa for 0,12 mC? Quanta energia é armazenada? ```python print("Problema Prático 6.1") C = 4.5*10**-6 q = 0.12*10**-3 V = q/C print("Tensão no capacitor:",V,"V") w = q**2/(2*C) print("Energia armazenada:",w,"J") ``` Problema Prático 6.1 Tensão no capacitor: 26.666666666666668 V Energia armazenada: 0.0015999999999999999 J **Exemplo 6.2** A tensão entre os terminais de um capacitor de 5 uF é: v(t) 10 cos 6.000t V Calcule a corrente que passa por ele. ```python print("Exemplo 6.2") import numpy as np from sympy import * C = 5*10**-6 t = symbols('t') v = 10*cos(6000*t) i = C*diff(v,t) print("Corrente que passa no capacitor:",i,"A") ``` Exemplo 6.2 Corrente que passa no capacitor: -0.3*sin(6000*t) A **Problema Prático 6.2** Se um capacitor de 10 uF for conectado a uma fonte de tensão com: v(t) 75 sen 2.000t V determine a corrente através do capacitor. ```python print("Problema Prático 6.2") C = 10*10**-6 v = 75*sin(2000*t) i = C * diff(v,t) print("Corrente:",i,"A") ``` Problema Prático 6.2 Corrente: 1.5*cos(2000*t) A **Exemplo 6.3** Determine a tensão através de um capacitor de 2 uF se a corrente através dele for i(t) 6e^-3.000t mA Suponha que a tensão inicial no capacitor seja igual a zero. ```python print("Exemplo 6.3") C = 2*10**-6 i = 6*exp(-3000*t)*10**-3 v = integrate(i,(t,0,t)) v = v/C print("Tensão no capacitor:",v,"V") ``` Exemplo 6.3 Tensão no capacitor: 1.0 - 1.0*exp(-3000*t) V **Problema Prático 6.3** A corrente contínua através de um capacitor de 100 uF é: i(t) = 50 sen(120pi*t) mA. Calcule a tensão nele nos instantes t = 1 ms e t = 5 ms. Considere v(0) = 0. ```python print("Problema Prático 6.3") C = 100*10**-6 i = 50*sin(120*np.pi*t)*10**-3 v = integrate(i,(t,0,0.001)) v = v/C print("Tensão no capacitor para t = 1ms:",v,"V") v = integrate(i,(t,0,0.005)) v = v/C print("Tensão no capacitor para t = 5ms:",v,"V") ``` Problema Prático 6.3 Tensão no capacitor para t = 1ms: 0.0931368282680687 V Tensão no capacitor para t = 5ms: 1.73613771038391 V **Exemplo 6.4** Determine a corrente através de um capacitor de 200 mF cuja tensão é mostrada na Figura 6.9. ) ```python print("Exemplo 6.4") #v(t) = 50t, 0<t<1 #v(t) = 100 - 50t, 1<t<3 #v(t) = -200 + 50t, 3<t<4 #v(t) = 0, caso contrario C = 200*10**-6 v1 = 50*t v2 = 100 - 50*t v3 = -200 + 50*t i1 = C*diff(v1,t) i2 = C*diff(v2,t) i3 = C*diff(v3,t) print("Corrente para 0<t<1:",i1,"A") print("Corrente para 1<t<3:",i2,"A") print("Corrente para 3<t<4:",i3,"A") ``` Exemplo 6.4 Corrente para 0<t<1: 0.0100000000000000 A Corrente para 1<t<3: -0.0100000000000000 A Corrente para 3<t<4: 0.0100000000000000 A **Problema Prático 6.4** Um capacitor inicialmente descarregado de 1 mF possui a corrente mostrada na Figura 6.11 entre seus terminais. Calcule a tensão entre seus terminais nos instantes t = 2 ms e t = 5 ms. ```python print("Problema Prático 6.4") C = 1*10**-3 i = 50*t*10**-3 v = integrate(i,(t,0,0.002)) v = v/C print("Tensão para t=2ms:",v,"V") i = 100*10**-3 v = integrate(i,(t,0,0.005)) v = v/C print("Tensão para t=5ms:",v,"V") ``` Problema Prático 6.4 Tensão para t=2ms: 0.000100000000000000 V Tensão para t=5ms: 0.500000000000000 V **Exemplo 6.5** Obtenha a energia armazenada em cada capacitor na Figura 6.12a em condições de CC. 
```python print("Exemplo 6.5") C1 = 2*10**-3 C2 = 4*10**-3 I1 = (6*10**-3)*(3000)/(3000 + 2000 + 4000) #corrente que passa no resistor de 2k Vc1 = I1*2000 # tensao sobre o cap1 = tensao sobre o resistor 2k wc1 = (C1*Vc1**2)/2 print("Energia do Capacitor 1:",wc1,"J") Vc2 = I1*4000 wc2 = (C2*Vc2**2)/2 print("Energia do Capacitor 2:",wc2,"J") ``` Exemplo 6.5 Energia do Capacitor 1: 0.016 J Energia do Capacitor 2: 0.128 J **Problema Prático 6.5** Em condições CC, determine a energia armazenada nos capacitores da Figura 6.13. ```python print("Problema Prático 6.5") C1 = 20*10**-6 C2 = 30*10**-6 Vf = 50 #tensao da fonte Req = 1000 + 3000 + 6000 Vc1 = Vf*(3000+6000)/Req Vc2 = Vf*3000/Req wc1 = (C1*Vc1**2)/2 wc2 = (C2*Vc2**2)/2 print("Energia no Capacitor 1:",wc1,"J") print("Energia no Capacitor 2:",wc2,"J") ``` Problema Prático 6.5 Energia no Capacitor 1: 0.020249999999999997 J Energia no Capacitor 2: 0.0033749999999999995 J ## Capacitores em Série e Paralelo ### Paralelo **A capacitância equivalente de N capacitores ligados em paralelo é a soma de suas capacitâncias individuais.** \begin{align} {\Large C_{eq} = C_1 + C_2 + ... + C_N = \sum_{i=1}^{N} C_i} \end{align} ### Série **A capacitância equivalente dos capacitores associados em série é o inverso da soma dos inversos das capacitâncias individuais.** \begin{align} {\Large \frac{1}{C_{eq}} = \frac{1}{C_1} + \frac{1}{C_2} + ... + \frac{1}{C_N}} \end{align} \begin{align} {\Large C_{eq} = \frac{1}{\sum_{i=1}^{N} \frac{1}{C_i}}} \end{align} \begin{align} \\{\Large C_{eq} = (\sum_{i=1}^{N} (C_i)^{-1})^{-1}} \end{align} Para 2 Capacitores: \begin{align} {\Large C_{eq} = \frac{C_1 C_2}{C_1 + C_2}} \end{align} **Exemplo 6.6** Determine a capacitância equivalente vista entre os terminais a-b do circuito da Figura 6.16. ```python print("Exemplo 6.6") u = 10**-6 #definicao de micro Ceq1 = (20*u*5*u)/((20 + 5)*u) Ceq2 = Ceq1 + 6*u + 20*u Ceq3 = (Ceq2*60*u)/(Ceq2 + 60*u) print("Capacitância Equivalente:",Ceq3,"F") ``` Exemplo 6.6 Capacitância Equivalente: 1.9999999999999998e-05 F **Problema Prático 6.6** Determine a capacitância equivalente nos terminais do circuito da Figura 6.17. ```python print("Problema Prático 6.6") Ceq1 = (60*u*120*u)/((60 + 120)*u) Ceq2 = 20*u + Ceq1 Ceq3 = 50*u + 70*u Ceq4 = (Ceq2 * Ceq3)/(Ceq2 + Ceq3) print("Capacitância Equivalente:",Ceq4,"F") ``` Problema Prático 6.6 Capacitância Equivalente: 3.9999999999999996e-05 F **Exemplo 6.7** Para o circuito da Figura 6.18, determine a tensão em cada capacitor. ```python print("Exemplo 6.7") m = 10**-3 Vf = 30 Ceq1 = 40*m + 20*m Ceq2 = 1/(1/(20*m) + 1/(30*m) + 1/(Ceq1)) print("Capacitância Equivalente:",Ceq2,"F") q = Ceq2*Vf v1 = q/(20*m) v2 = q/(30*m) v3 = Vf - v1 - v2 print("Tensão v1:",v1,"V") print("Tensão v2:",v2,"V") print("Tensão v3:",v3,"V") ``` Exemplo 6.7 Capacitância Equivalente: 0.009999999999999998 F Tensão v1: 14.999999999999996 V Tensão v2: 9.999999999999998 V Tensão v3: 5.000000000000005 V **Problema Prático 6.7** Determine a tensão em cada capacitor na Figura 6.20. 
```python print("Problema Prático 6.7") Vf = 90 Ceq1 = (30*u * 60*u)/(30*u + 60*u) Ceq2 = Ceq1 + 20*u Ceq3 = (40*u * Ceq2)/(40*u + Ceq2) print("Capacitância Equivalente:",Ceq3,"F") q1 = Ceq3*Vf v1 = q1/(40*u) v2 = Vf - v1 q3 = Ceq1*v2 v3 = q3/(60*u) v4 = q3/(30*u) print("Tensão v1:",v1,"V") print("Tensão v2:",v2,"V") print("Tensão v3:",v3,"V") print("Tensão v4:",v4,"V") ``` Problema Prático 6.7 Capacitância Equivalente: 1.9999999999999998e-05 F Tensão v1: 45.0 V Tensão v2: 45.0 V Tensão v3: 15.000000000000002 V Tensão v4: 30.000000000000004 V
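As a small self-contained add-on (not part of the original notebook), the series/parallel equivalent-capacitance formulas used repeatedly above can be wrapped in helper functions; the names `ceq_paralelo` and `ceq_serie` are hypothetical, and the last line is expected to reproduce the 20 uF result of Exemplo 6.6.

```python
# Sketch: helpers for the formulas of the section "Capacitores em Série e Paralelo"
def ceq_paralelo(*caps):
    """Parallel association: Ceq = C1 + C2 + ... + CN."""
    return sum(caps)

def ceq_serie(*caps):
    """Series association: Ceq = 1 / (1/C1 + 1/C2 + ... + 1/CN)."""
    return 1.0 / sum(1.0 / c for c in caps)

u = 10**-6
# Exemplo 6.6 revisited: expected result approximately 2e-05 F (20 uF)
print(ceq_serie(ceq_paralelo(ceq_serie(20*u, 5*u), 6*u, 20*u), 60*u))
```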
da2afa39abf83da4b6c06dc9bcdeb547dd9e6d9d
19,853
ipynb
Jupyter Notebook
Aula 9.1 - Capacitores.ipynb
ofgod2/Circuitos-electricos-Boylestad-12ed-Portugues
60e815f6904858f3cda8b5c7ead8ea77aa09c7fd
[ "MIT" ]
7
2019-08-13T13:33:15.000Z
2021-11-16T16:46:06.000Z
Aula 9.1 - Capacitores.ipynb
ofgod2/Circuitos-electricos-Boylestad-12ed-Portugues
60e815f6904858f3cda8b5c7ead8ea77aa09c7fd
[ "MIT" ]
1
2017-08-24T17:36:15.000Z
2017-08-24T17:36:15.000Z
Aula 9.1 - Capacitores.ipynb
ofgod2/Circuitos-electricos-Boylestad-12ed-Portugues
60e815f6904858f3cda8b5c7ead8ea77aa09c7fd
[ "MIT" ]
8
2019-03-29T14:31:49.000Z
2021-12-30T17:59:23.000Z
25.616774
302
0.501033
true
4,285
Qwen/Qwen-72B
1. YES 2. YES
0.763484
0.779993
0.595512
__label__por_Latn
0.98881
0.221904
```python import numpy as np from numba import jit import sympy ``` # Item XV Considering the following inner product: $$ \langle p(x),q(x) \rangle =\int_{-1}^{1} \overline{p(x)}q(x) dx $$ * Let $A= [1|x|x^2|...|x^{n-1}]$ be the "matrix" whose "columns" are the monomials $x^j$, for $j=0,...,n-1$. Each column is a function in $L^2[-1,1]$. compute the $QR$ decomposition of $A$. * Let $A=[1|\sin(2\pi x)|\sin(4\pi x)|...|x^{n-1}]$ be the "matrix" whose "columns" are the functions $1$ and $\sin(2\pi x)$, for $j=1,...,n-1$. Each column is a function in $L^2[-1,1]$. Compute the $QR$ decomposition of $A$. * Do part (a) numerically. Make sure you understand what you are doing since this is a important concept that links symbolic computing with numerical computing. --- ```python # This is a generic version of Gram-Schmidt, by default it works on matrices. # For other uses, replace default argument functions. def generic_gs( elems, scalar = lambda a,x : a*x, prod = lambda x,y : np.sum(x*y), neg = lambda x,y : x-y, ): """ elems = [T] scalar :: T -> Float -> T prod :: T -> T -> Float neg :: T -> T -> T NOTE: if is used for a regular matrix, elems must be row-wise. """ n = len(elems) r = np.zeros((n,n)) for i in range(n): for j in range(i): projection = prod(elems[j],elems[i])/prod(elems[j],elems[j]) r[j,i] = projection elems[i] = neg(elems[i],scalar(projection,elems[j])) norm2 = prod(elems[i],elems[i]) if norm2<0: print("Warning: negative norm2=%f at i=%d!"%(norm2,i)) return None norm = norm2**0.5 r[i,i] = norm elems[i] = scalar(norm**-1,elems[i]) return r ``` ```python def symbolic_inner_product(f,q): x = sympy.Symbol('x') v = sympy.integrate(f*q,(x,-1,1)) # We evaluate the expresion as a number because we can't afford the whole symbolic # expression... return float(v) ``` ```python # We define the list of functions for part a def part_a_funcs(n): x = sympy.Symbol('x') part = [x**i for i in range(n)] return part ``` ```python def part_b_funcs(n): x = sympy.Symbol('x') part = [x**0] + [sympy.sin(2*i*sympy.pi*x) for i in range(1,n)] return part ``` ```python # We can print an array of functions: print(part_a_funcs(10)) print(part_b_funcs(10)) ``` [1, x, x**2, x**3, x**4, x**5, x**6, x**7, x**8, x**9] [1, sin(2*pi*x), sin(4*pi*x), sin(6*pi*x), sin(8*pi*x), sin(10*pi*x), sin(12*pi*x), sin(14*pi*x), sin(16*pi*x), sin(18*pi*x)] ```python # We compute the decompositions FUNCS = [part_a_funcs(5),part_b_funcs(5), part_a_funcs(20),part_b_funcs(20), part_a_funcs(100),part_b_funcs(100)] for funcs in FUNCS: # Print Original matrix print("-"*20+"Functions:") print(funcs) # Perform QR decomposition using generic G-S r = generic_gs(funcs,prod=symbolic_inner_product) if r is None: print("Couldn't compute R!!!:") else: # Print Q print("Q:") print(funcs) # Print R print("R:") print(r) ``` --------------------Functions: [1, x, x**2, x**3, x**4] Q: [0.707106781186547, 1.22474487139159*x, 2.37170824512628*x**2 - 0.790569415042095, 4.67707173346743*x**3 - 2.80624304008046*x, 9.28077650307342*x**4 - 7.95495128834865*x**2 + 0.795495128834865] R: [[1.41421356 0. 0.47140452 0. 0.28284271] [0. 0.81649658 0. 0.48989795 0. ] [0. 0. 0.42163702 0. 0.36140316] [0. 0. 0. 0.21380899 0. ] [0. 0. 0. 0. 0.1077496 ]] --------------------Functions: [1, sin(2*pi*x), sin(4*pi*x), sin(6*pi*x), sin(8*pi*x)] Q: [0.707106781186547, 1.0*sin(2*pi*x), 1.0*sin(4*pi*x), 1.0*sin(6*pi*x), 1.0*sin(8*pi*x)] R: [[1.41421356 0. 0. 0. 0. ] [0. 1. 0. 0. 0. ] [0. 0. 1. 0. 0. ] [0. 0. 0. 1. 0. ] [0. 0. 0. 0. 1. 
]] --------------------Functions: [1, x, x**2, x**3, x**4, x**5, x**6, x**7, x**8, x**9, x**10, x**11, x**12, x**13, x**14, x**15, x**16, x**17, x**18, x**19] Q: [0.707106781186547, 1.22474487139159*x, 2.37170824512628*x**2 - 0.790569415042095, 4.67707173346743*x**3 - 2.80624304008046*x, 9.28077650307342*x**4 - 7.95495128834865*x**2 + 0.795495128834865, 18.4685120543046*x**5 - 20.5205689492274*x**3 + 4.39726477483447*x, 36.8085471137496*x**6 - 50.1934733369312*x**4 + 16.731157778977*x**2 - 0.796721798998903, 73.4290553655101*x**7 - 118.616166359671*x**5 + 53.9164392543961*x**3 - 5.9907154727108*x, 146.570997825597*x**8 - 273.599195941143*x**6 + 157.845689966067*x**4 - 28.6992163574728*x**2 + 0.797200454374528, 292.689266429782*x**9 - 619.812564204472*x**7 + 433.868794943332*x**5 - 111.248408959896*x**3 + 7.58511879272651*x, 584.646351835467*x**10 - 1384.68872802706*x**8 + 1140.33189366472*x**6 - 380.110631219486*x**4 + 43.8589189865218*x**2 - 0.79743489065461, 1168.08413179724*x**11 - 3059.26796431391*x**9 + 2898.25386102658*x**7 - 1193.39864870916*x**5 + 198.899774796064*x**3 - 9.17998960667973*x, 2334.1394542848*x**12 - 6697.96538974081*x**10 + 7176.39148793533*x**8 - 3525.24494077784*x**6 + 777.627560274535*x**4 - 62.2102048010054*x**2 + 0.797566727821615, 4664.8247961001*x**13 - 14554.2533510225*x**11 + 17401.824641446*x**9 - 9943.89978380936*x**7 + 2747.65651566431*x**5 - 323.25370725723*x**3 + 10.7751235584056*x, 9323.69774897047*x**14 - 31424.3145392121*x**12 + 41480.0950357262*x**10 - 27052.2357651977*x**8 + 9017.41186696691*x**6 - 1423.8018622434*x**4 + 83.7530497845978*x**2 - 0.797648080132224, 18637.0288570101*x**15 - 67478.8968030333*x**13 + 97469.5161890552*x**11 - 71477.643886015*x**9 + 27969.5121573176*x**7 - 5593.90225536597*x**5 + 490.69315947496*x**3 - 12.3704150610011*x, 37255.8196048632*x**16 - 144216.132304494*x**14 + 226270.239829557*x**12 - 184368.438761031*x**10 + 82965.8474262938*x**8 - 20200.3946203341*x**6 + 2404.8109348184*x**4 - 108.487824155802*x**2 + 0.797705620723317, 74479.6642568353*x**17 - 306946.405948985*x**15 + 519828.403493602*x**13 - 466052.839993619*x**11 + 237341.58597819*x**9 - 68354.3246558744*x**7 + 10401.73440399*x**5 - 707.59997666341*x**3 + 13.9657605808844*x, 148890.480388317*x**18 - 650867.820226831*x**16 + 1183404.11036741*x**14 - 1157964.1066223*x**12 + 658848.305301026*x**10 - 219618.867623267*x**8 + 40996.1779316701*x**6 - 3819.59915037231*x**4 + 136.418051302983*x**2 - 0.797797143973976, 297830.260121396*x**19 - 1376466.304985*x**17 + 2674292.07935306*x**15 - 2836385.47335974*x**13 + 1784186.56529593*x**11 - 676762.982328085*x**9 + 150392.04700999*x**7 - 18047.0137902493*x**5 + 980.807898993961*x**3 - 15.5680677297265*x] R: [[1.41421356e+00 0.00000000e+00 4.71404521e-01 0.00000000e+00 2.82842712e-01 0.00000000e+00 2.02030509e-01 0.00000000e+00 1.57134840e-01 0.00000000e+00 1.28564869e-01 0.00000000e+00 1.08785659e-01 0.00000000e+00 9.42809042e-02 0.00000000e+00 8.31890331e-02 0.00000000e+00 7.44322928e-02 0.00000000e+00] [0.00000000e+00 8.16496581e-01 0.00000000e+00 4.89897949e-01 0.00000000e+00 3.49927106e-01 0.00000000e+00 2.72165527e-01 0.00000000e+00 2.22680886e-01 0.00000000e+00 1.88422288e-01 0.00000000e+00 1.63299316e-01 0.00000000e+00 1.44087632e-01 0.00000000e+00 1.28920513e-01 0.00000000e+00 1.16642369e-01] [0.00000000e+00 0.00000000e+00 4.21637021e-01 0.00000000e+00 3.61403161e-01 0.00000000e+00 3.01169301e-01 0.00000000e+00 2.55537589e-01 0.00000000e+00 2.21138298e-01 0.00000000e+00 1.94601702e-01 0.00000000e+00 
1.73615244e-01 0.00000000e+00 1.56645333e-01 0.00000000e+00 1.42659143e-01 0.00000000e+00] [0.00000000e+00 0.00000000e+00 0.00000000e+00 2.13808994e-01 0.00000000e+00 2.37565548e-01 0.00000000e+00 2.26767114e-01 0.00000000e+00 2.09323490e-01 0.00000000e+00 1.91879866e-01 0.00000000e+00 1.76077995e-01 0.00000000e+00 1.62177100e-01 0.00000000e+00 1.50041399e-01 0.00000000e+00 1.39440648e-01] [0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 1.07749605e-01 0.00000000e+00 1.46931279e-01 0.00000000e+00 1.58233685e-01 0.00000000e+00 1.58233685e-01 0.00000000e+00 1.53579753e-01 0.00000000e+00 1.47113237e-01 0.00000000e+00 1.40107845e-01 0.00000000e+00 1.33145965e-01 0.00000000e+00] [0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 5.41462137e-02 0.00000000e+00 8.74669606e-02 0.00000000e+00 1.04960353e-01 0.00000000e+00 1.13192537e-01 0.00000000e+00 1.16171288e-01 0.00000000e+00 1.16171288e-01 0.00000000e+00 1.14487646e-01 0.00000000e+00 1.11870786e-01] [0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 2.71676031e-02 0.00000000e+00 5.07128592e-02 0.00000000e+00 6.71199607e-02 0.00000000e+00 7.77178492e-02 0.00000000e+00 8.41943367e-02 0.00000000e+00 8.78549600e-02 0.00000000e+00 8.96120592e-02 0.00000000e+00] [0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 1.36185873e-02 0.00000000e+00 2.88393613e-02 0.00000000e+00 4.17411809e-02 0.00000000e+00 5.16795572e-02 0.00000000e+00 5.89821034e-02 0.00000000e+00 6.41725285e-02 0.00000000e+00 6.77376689e-02] [0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 6.82263214e-03 0.00000000e+00 1.61588656e-02 0.00000000e+00 2.53925031e-02 0.00000000e+00 3.34886635e-02 0.00000000e+00 4.01863962e-02 0.00000000e+00 4.55445823e-02 0.00000000e+00] [0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 3.41659266e-03 0.00000000e+00 8.94821888e-03 0.00000000e+00 1.51730668e-02 0.00000000e+00 2.12422935e-02 0.00000000e+00 2.67495548e-02 0.00000000e+00 3.15460267e-02] [0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 1.71043571e-03 0.00000000e+00 4.90820683e-03 0.00000000e+00 8.93293643e-03 0.00000000e+00 1.32339799e-02 0.00000000e+00 1.74551631e-02 0.00000000e+00] [0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 8.56102718e-04 0.00000000e+00 2.67104048e-03 0.00000000e+00 5.19368982e-03 0.00000000e+00 8.11887144e-03 0.00000000e+00 1.11961856e-02] [0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 4.28423417e-04 0.00000000e+00 1.44394559e-03 0.00000000e+00 2.98747362e-03 0.00000000e+00 4.91487596e-03 0.00000000e+00] [0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 2.14370323e-04 0.00000000e+00 7.76168403e-04 0.00000000e+00 1.70256298e-03 0.00000000e+00 2.94079059e-03] [0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 1.07253584e-04 0.00000000e+00 4.15175326e-04 0.00000000e+00 9.62451912e-04 0.00000000e+00] [0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 5.36566213e-05 0.00000000e+00 2.21130254e-04 0.00000000e+00 5.40189792e-04] [0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 2.68414441e-05 0.00000000e+00 1.17336126e-04 0.00000000e+00] [0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 1.34264837e-05 0.00000000e+00 6.20524672e-05] [0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 6.71634612e-06 0.00000000e+00] [0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 3.35761719e-06]] --------------------Functions: [1, sin(2*pi*x), sin(4*pi*x), sin(6*pi*x), sin(8*pi*x), sin(10*pi*x), sin(12*pi*x), sin(14*pi*x), sin(16*pi*x), sin(18*pi*x), sin(20*pi*x), sin(22*pi*x), sin(24*pi*x), sin(26*pi*x), sin(28*pi*x), sin(30*pi*x), sin(32*pi*x), sin(34*pi*x), sin(36*pi*x), sin(38*pi*x)] Q: [0.707106781186547, 1.0*sin(2*pi*x), 1.0*sin(4*pi*x), 1.0*sin(6*pi*x), 1.0*sin(8*pi*x), 1.0*sin(10*pi*x), 1.0*sin(12*pi*x), 1.0*sin(14*pi*x), 1.0*sin(16*pi*x), 1.0*sin(18*pi*x), 1.0*sin(20*pi*x), 1.0*sin(22*pi*x), 1.0*sin(24*pi*x), 1.0*sin(26*pi*x), 1.0*sin(28*pi*x), 1.0*sin(30*pi*x), 1.0*sin(32*pi*x), 1.0*sin(34*pi*x), 1.0*sin(36*pi*x), 1.0*sin(38*pi*x)] R: [[1.41421356 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. ] [0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. ] [0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. ] [0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. ] [0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. ] [0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. ] [0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. ] [0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. ] [0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. ] [0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. ] [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. ] [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. ] [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. ] [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. ] [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. ] [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. ] [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. ] [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. ] [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. ] [0. 0. 0. 
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. ]] --------------------Functions: [1, x, x**2, x**3, x**4, x**5, x**6, x**7, x**8, x**9, x**10, x**11, x**12, x**13, x**14, x**15, x**16, x**17, x**18, x**19, x**20, x**21, x**22, x**23, x**24, x**25, x**26, x**27, x**28, x**29, x**30, x**31, x**32, x**33, x**34, x**35, x**36, x**37, x**38, x**39, x**40, x**41, x**42, x**43, x**44, x**45, x**46, x**47, x**48, x**49, x**50, x**51, x**52, x**53, x**54, x**55, x**56, x**57, x**58, x**59, x**60, x**61, x**62, x**63, x**64, x**65, x**66, x**67, x**68, x**69, x**70, x**71, x**72, x**73, x**74, x**75, x**76, x**77, x**78, x**79, x**80, x**81, x**82, x**83, x**84, x**85, x**86, x**87, x**88, x**89, x**90, x**91, x**92, x**93, x**94, x**95, x**96, x**97, x**98, x**99] Warning: negative norm2=-0.000000 at i=24! Couldn't compute R!!!: --------------------Functions: [1, sin(2*pi*x), sin(4*pi*x), sin(6*pi*x), sin(8*pi*x), sin(10*pi*x), sin(12*pi*x), sin(14*pi*x), sin(16*pi*x), sin(18*pi*x), sin(20*pi*x), sin(22*pi*x), sin(24*pi*x), sin(26*pi*x), sin(28*pi*x), sin(30*pi*x), sin(32*pi*x), sin(34*pi*x), sin(36*pi*x), sin(38*pi*x), sin(40*pi*x), sin(42*pi*x), sin(44*pi*x), sin(46*pi*x), sin(48*pi*x), sin(50*pi*x), sin(52*pi*x), sin(54*pi*x), sin(56*pi*x), sin(58*pi*x), sin(60*pi*x), sin(62*pi*x), sin(64*pi*x), sin(66*pi*x), sin(68*pi*x), sin(70*pi*x), sin(72*pi*x), sin(74*pi*x), sin(76*pi*x), sin(78*pi*x), sin(80*pi*x), sin(82*pi*x), sin(84*pi*x), sin(86*pi*x), sin(88*pi*x), sin(90*pi*x), sin(92*pi*x), sin(94*pi*x), sin(96*pi*x), sin(98*pi*x), sin(100*pi*x), sin(102*pi*x), sin(104*pi*x), sin(106*pi*x), sin(108*pi*x), sin(110*pi*x), sin(112*pi*x), sin(114*pi*x), sin(116*pi*x), sin(118*pi*x), sin(120*pi*x), sin(122*pi*x), sin(124*pi*x), sin(126*pi*x), sin(128*pi*x), sin(130*pi*x), sin(132*pi*x), sin(134*pi*x), sin(136*pi*x), sin(138*pi*x), sin(140*pi*x), sin(142*pi*x), sin(144*pi*x), sin(146*pi*x), sin(148*pi*x), sin(150*pi*x), sin(152*pi*x), sin(154*pi*x), sin(156*pi*x), sin(158*pi*x), sin(160*pi*x), sin(162*pi*x), sin(164*pi*x), sin(166*pi*x), sin(168*pi*x), sin(170*pi*x), sin(172*pi*x), sin(174*pi*x), sin(176*pi*x), sin(178*pi*x), sin(180*pi*x), sin(182*pi*x), sin(184*pi*x), sin(186*pi*x), sin(188*pi*x), sin(190*pi*x), sin(192*pi*x), sin(194*pi*x), sin(196*pi*x), sin(198*pi*x)] Q: [0.707106781186547, 1.0*sin(2*pi*x), 1.0*sin(4*pi*x), 1.0*sin(6*pi*x), 1.0*sin(8*pi*x), 1.0*sin(10*pi*x), 1.0*sin(12*pi*x), 1.0*sin(14*pi*x), 1.0*sin(16*pi*x), 1.0*sin(18*pi*x), 1.0*sin(20*pi*x), 1.0*sin(22*pi*x), 1.0*sin(24*pi*x), 1.0*sin(26*pi*x), 1.0*sin(28*pi*x), 1.0*sin(30*pi*x), 1.0*sin(32*pi*x), 1.0*sin(34*pi*x), 1.0*sin(36*pi*x), 1.0*sin(38*pi*x), 1.0*sin(40*pi*x), 1.0*sin(42*pi*x), 1.0*sin(44*pi*x), 1.0*sin(46*pi*x), 1.0*sin(48*pi*x), 1.0*sin(50*pi*x), 1.0*sin(52*pi*x), 1.0*sin(54*pi*x), 1.0*sin(56*pi*x), 1.0*sin(58*pi*x), 1.0*sin(60*pi*x), 1.0*sin(62*pi*x), 1.0*sin(64*pi*x), 1.0*sin(66*pi*x), 1.0*sin(68*pi*x), 1.0*sin(70*pi*x), 1.0*sin(72*pi*x), 1.0*sin(74*pi*x), 1.0*sin(76*pi*x), 1.0*sin(78*pi*x), 1.0*sin(80*pi*x), 1.0*sin(82*pi*x), 1.0*sin(84*pi*x), 1.0*sin(86*pi*x), 1.0*sin(88*pi*x), 1.0*sin(90*pi*x), 1.0*sin(92*pi*x), 1.0*sin(94*pi*x), 1.0*sin(96*pi*x), 1.0*sin(98*pi*x), 1.0*sin(100*pi*x), 1.0*sin(102*pi*x), 1.0*sin(104*pi*x), 1.0*sin(106*pi*x), 1.0*sin(108*pi*x), 1.0*sin(110*pi*x), 1.0*sin(112*pi*x), 1.0*sin(114*pi*x), 1.0*sin(116*pi*x), 1.0*sin(118*pi*x), 1.0*sin(120*pi*x), 1.0*sin(122*pi*x), 1.0*sin(124*pi*x), 1.0*sin(126*pi*x), 1.0*sin(128*pi*x), 1.0*sin(130*pi*x), 1.0*sin(132*pi*x), 1.0*sin(134*pi*x), 
1.0*sin(136*pi*x), 1.0*sin(138*pi*x), 1.0*sin(140*pi*x), 1.0*sin(142*pi*x), 1.0*sin(144*pi*x), 1.0*sin(146*pi*x), 1.0*sin(148*pi*x), 1.0*sin(150*pi*x), 1.0*sin(152*pi*x), 1.0*sin(154*pi*x), 1.0*sin(156*pi*x), 1.0*sin(158*pi*x), 1.0*sin(160*pi*x), 1.0*sin(162*pi*x), 1.0*sin(164*pi*x), 1.0*sin(166*pi*x), 1.0*sin(168*pi*x), 1.0*sin(170*pi*x), 1.0*sin(172*pi*x), 1.0*sin(174*pi*x), 1.0*sin(176*pi*x), 1.0*sin(178*pi*x), 1.0*sin(180*pi*x), 1.0*sin(182*pi*x), 1.0*sin(184*pi*x), 1.0*sin(186*pi*x), 1.0*sin(188*pi*x), 1.0*sin(190*pi*x), 1.0*sin(192*pi*x), 1.0*sin(194*pi*x), 1.0*sin(196*pi*x), 1.0*sin(198*pi*x)] R: [[1.41421356 0. 0. ... 0. 0. 0. ] [0. 1. 0. ... 0. 0. 0. ] [0. 0. 1. ... 0. 0. 0. ] ... [0. 0. 0. ... 1. 0. 0. ] [0. 0. 0. ... 0. 1. 0. ] [0. 0. 0. ... 0. 0. 1. ]] We can see that the functions of the second item are ortogonal, so the $QR$ decomposition gives the indentity (besides the first function $y(x)=1$ that has to be normalized). Around $i=25$ the function coeficients become too small to handle. The norm (inner product with itself) of the functions after substracting the projections becomes small, and negative. ```python FUNCS = [part_a_funcs(100),part_b_funcs(100)] for funcs in FUNCS: # Print Original matrix print("-"*20+"Functions:") print(funcs) # Perform QR decomposition using generic G-S r = generic_gs(funcs,prod=symbolic_inner_product) if r is None: print("Couldn't compute R!!!:") else: # Print Q print("Q:") print(funcs) # Print R print("R:") print(r) ``` --------------------Functions: [1, x, x**2, x**3, x**4, x**5, x**6, x**7, x**8, x**9, x**10, x**11, x**12, x**13, x**14, x**15, x**16, x**17, x**18, x**19, x**20, x**21, x**22, x**23, x**24, x**25, x**26, x**27, x**28, x**29, x**30, x**31, x**32, x**33, x**34, x**35, x**36, x**37, x**38, x**39, x**40, x**41, x**42, x**43, x**44, x**45, x**46, x**47, x**48, x**49, x**50, x**51, x**52, x**53, x**54, x**55, x**56, x**57, x**58, x**59, x**60, x**61, x**62, x**63, x**64, x**65, x**66, x**67, x**68, x**69, x**70, x**71, x**72, x**73, x**74, x**75, x**76, x**77, x**78, x**79, x**80, x**81, x**82, x**83, x**84, x**85, x**86, x**87, x**88, x**89, x**90, x**91, x**92, x**93, x**94, x**95, x**96, x**97, x**98, x**99] Warning: negative norm2=-0.000000 at i=24! 
Couldn't compute R!!!: --------------------Functions: [1, sin(2*pi*x), sin(4*pi*x), sin(6*pi*x), sin(8*pi*x), sin(10*pi*x), sin(12*pi*x), sin(14*pi*x), sin(16*pi*x), sin(18*pi*x), sin(20*pi*x), sin(22*pi*x), sin(24*pi*x), sin(26*pi*x), sin(28*pi*x), sin(30*pi*x), sin(32*pi*x), sin(34*pi*x), sin(36*pi*x), sin(38*pi*x), sin(40*pi*x), sin(42*pi*x), sin(44*pi*x), sin(46*pi*x), sin(48*pi*x), sin(50*pi*x), sin(52*pi*x), sin(54*pi*x), sin(56*pi*x), sin(58*pi*x), sin(60*pi*x), sin(62*pi*x), sin(64*pi*x), sin(66*pi*x), sin(68*pi*x), sin(70*pi*x), sin(72*pi*x), sin(74*pi*x), sin(76*pi*x), sin(78*pi*x), sin(80*pi*x), sin(82*pi*x), sin(84*pi*x), sin(86*pi*x), sin(88*pi*x), sin(90*pi*x), sin(92*pi*x), sin(94*pi*x), sin(96*pi*x), sin(98*pi*x), sin(100*pi*x), sin(102*pi*x), sin(104*pi*x), sin(106*pi*x), sin(108*pi*x), sin(110*pi*x), sin(112*pi*x), sin(114*pi*x), sin(116*pi*x), sin(118*pi*x), sin(120*pi*x), sin(122*pi*x), sin(124*pi*x), sin(126*pi*x), sin(128*pi*x), sin(130*pi*x), sin(132*pi*x), sin(134*pi*x), sin(136*pi*x), sin(138*pi*x), sin(140*pi*x), sin(142*pi*x), sin(144*pi*x), sin(146*pi*x), sin(148*pi*x), sin(150*pi*x), sin(152*pi*x), sin(154*pi*x), sin(156*pi*x), sin(158*pi*x), sin(160*pi*x), sin(162*pi*x), sin(164*pi*x), sin(166*pi*x), sin(168*pi*x), sin(170*pi*x), sin(172*pi*x), sin(174*pi*x), sin(176*pi*x), sin(178*pi*x), sin(180*pi*x), sin(182*pi*x), sin(184*pi*x), sin(186*pi*x), sin(188*pi*x), sin(190*pi*x), sin(192*pi*x), sin(194*pi*x), sin(196*pi*x), sin(198*pi*x)] Q: [0.707106781186547, 1.0*sin(2*pi*x), 1.0*sin(4*pi*x), 1.0*sin(6*pi*x), 1.0*sin(8*pi*x), 1.0*sin(10*pi*x), 1.0*sin(12*pi*x), 1.0*sin(14*pi*x), 1.0*sin(16*pi*x), 1.0*sin(18*pi*x), 1.0*sin(20*pi*x), 1.0*sin(22*pi*x), 1.0*sin(24*pi*x), 1.0*sin(26*pi*x), 1.0*sin(28*pi*x), 1.0*sin(30*pi*x), 1.0*sin(32*pi*x), 1.0*sin(34*pi*x), 1.0*sin(36*pi*x), 1.0*sin(38*pi*x), 1.0*sin(40*pi*x), 1.0*sin(42*pi*x), 1.0*sin(44*pi*x), 1.0*sin(46*pi*x), 1.0*sin(48*pi*x), 1.0*sin(50*pi*x), 1.0*sin(52*pi*x), 1.0*sin(54*pi*x), 1.0*sin(56*pi*x), 1.0*sin(58*pi*x), 1.0*sin(60*pi*x), 1.0*sin(62*pi*x), 1.0*sin(64*pi*x), 1.0*sin(66*pi*x), 1.0*sin(68*pi*x), 1.0*sin(70*pi*x), 1.0*sin(72*pi*x), 1.0*sin(74*pi*x), 1.0*sin(76*pi*x), 1.0*sin(78*pi*x), 1.0*sin(80*pi*x), 1.0*sin(82*pi*x), 1.0*sin(84*pi*x), 1.0*sin(86*pi*x), 1.0*sin(88*pi*x), 1.0*sin(90*pi*x), 1.0*sin(92*pi*x), 1.0*sin(94*pi*x), 1.0*sin(96*pi*x), 1.0*sin(98*pi*x), 1.0*sin(100*pi*x), 1.0*sin(102*pi*x), 1.0*sin(104*pi*x), 1.0*sin(106*pi*x), 1.0*sin(108*pi*x), 1.0*sin(110*pi*x), 1.0*sin(112*pi*x), 1.0*sin(114*pi*x), 1.0*sin(116*pi*x), 1.0*sin(118*pi*x), 1.0*sin(120*pi*x), 1.0*sin(122*pi*x), 1.0*sin(124*pi*x), 1.0*sin(126*pi*x), 1.0*sin(128*pi*x), 1.0*sin(130*pi*x), 1.0*sin(132*pi*x), 1.0*sin(134*pi*x), 1.0*sin(136*pi*x), 1.0*sin(138*pi*x), 1.0*sin(140*pi*x), 1.0*sin(142*pi*x), 1.0*sin(144*pi*x), 1.0*sin(146*pi*x), 1.0*sin(148*pi*x), 1.0*sin(150*pi*x), 1.0*sin(152*pi*x), 1.0*sin(154*pi*x), 1.0*sin(156*pi*x), 1.0*sin(158*pi*x), 1.0*sin(160*pi*x), 1.0*sin(162*pi*x), 1.0*sin(164*pi*x), 1.0*sin(166*pi*x), 1.0*sin(168*pi*x), 1.0*sin(170*pi*x), 1.0*sin(172*pi*x), 1.0*sin(174*pi*x), 1.0*sin(176*pi*x), 1.0*sin(178*pi*x), 1.0*sin(180*pi*x), 1.0*sin(182*pi*x), 1.0*sin(184*pi*x), 1.0*sin(186*pi*x), 1.0*sin(188*pi*x), 1.0*sin(190*pi*x), 1.0*sin(192*pi*x), 1.0*sin(194*pi*x), 1.0*sin(196*pi*x), 1.0*sin(198*pi*x)] R: [[1.41421356 0. 0. ... 0. 0. 0. ] [0. 1. 0. ... 0. 0. 0. ] [0. 0. 1. ... 0. 0. 0. ] ... [0. 0. 0. ... 1. 0. 0. ] [0. 0. 0. ... 0. 1. 0. ] [0. 0. 0. ... 0. 0. 1. 
]] --- To do it numerically, let's define a polynomial $$ p(x) = \sum_{i=0}^{n-1} p_i x^i $$ as the array of the $p_i$'s. Then the multiplication becomes: $$ p(x)q(x) = \sum_{i=0}^{2n-2} \left( \sum_{k=0}^{i} p_k q_{i-k} \right) x^{i} \,, $$ then the inner product becomes: \begin{align*} \int_{-1}^{1} p(x)q(x) \, dx &= \sum_{i=0}^{2n-2} \left( \sum_{k=0}^{i} p_k q_{i-k} \right) \frac{x^{i+1}}{i+1} |_{x=-1}^{1} \\ &= \sum_{i=0}^{2n-2} [i \, \text{mod} \, 2= 0] \left( \sum_{k=0}^{i} p_k q_{i-k} \right) \frac{2}{i+1} \end{align*} ```python @jit(nopython=True) def poly_mult(a,b): assert(len(a)==len(b)) n = len(a) total = 0 for i in range(0,2*n-1,2): term = 0 for k in range(0,i+1): if k>=0 and i-k>=0 and k<n and i-k<n: term += a[k]*b[i-k] term *= 2.0/(i+1.0) total += term return total ``` ```python def poly_print(poly): stri = [] if len(poly)>0 and poly[0] != 0: stri.append("%.3f"%poly[0]) if len(poly)>1 and poly[1] != 0: stri.append("%.3fx"%poly[1]) for i in range(2,len(poly)): if poly[i]!=0: stri.append("%+.3fx%d"%(poly[i],i)) if len(stri)==0: return "0" return " ".join(stri) def poly_matrix_print(polys,limit=10): print("[") if len(polys)>2*limit: for poly in polys[:limit]: print(" "+poly_print(poly)+" |") print(" ...") for poly in polys[-limit:]: print(" "+poly_print(poly)+" |") else: for poly in polys: print(" "+poly_print(poly)+" |") print("]") ``` ```python poly_mult([1,2,3],[2,5,1]) ``` 16.53333333333333 ```python for N in (5,10,100): # Print Original matrix polys = np.eye(N) print("-"*20+" Original (N=%d):"%N) poly_matrix_print(polys) # Perform QR decomposition using generic G-S r = generic_gs(polys,prod=poly_mult) # Print Q print("Q:") poly_matrix_print(polys) # Assert that Q is orthonormal for i in range(N): for j in range(N): if i==j: assert(np.abs(poly_mult(polys[i],polys[j])-1)<1e-5) else: assert(np.abs(poly_mult(polys[i],polys[j]))<1e-5) # Print R print("R:") print(r) ``` We can see that the numerical method fails for $N=100$. After a closer inspection, this was because, after substracting the projection with the previous functions, the remaining polynomial had very small coefficients, around $i=25$ too. ```python ```
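As a hedged cross-check that is not in the original notebook: with this inner product, Gram-Schmidt applied to the monomials should reproduce the normalized Legendre polynomials $\sqrt{(2k+1)/2}\,P_k(x)$, which already matches the printed Q above for $k = 0, 1, 2, 3$. A short symbolic verification sketch, assuming the helpers defined earlier are in scope:

```python
# Compare the symbolic Q against normalized Legendre polynomials; the residual
# coefficients should only be at floating-point round-off level.
from sympy import Symbol, legendre, sqrt, Rational, expand, Poly

x = Symbol('x')
funcs = part_a_funcs(5)
generic_gs(funcs, prod=symbolic_inner_product)   # mutates funcs into the Q "columns"
for k, qk in enumerate(funcs):
    ref = sqrt(Rational(2 * k + 1, 2)) * legendre(k, x)
    residual = Poly(expand(qk - ref), x).all_coeffs()
    print(k, max(abs(float(c)) for c in residual))
```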
fa71e5ccd097a78ee7d3fd81671e651a77ae028a
43,673
ipynb
Jupyter Notebook
t1_questions/item_15.ipynb
autopawn/cc5-works
63775574c82da85ed0e750a4d6978a071096f6e7
[ "MIT" ]
null
null
null
t1_questions/item_15.ipynb
autopawn/cc5-works
63775574c82da85ed0e750a4d6978a071096f6e7
[ "MIT" ]
null
null
null
t1_questions/item_15.ipynb
autopawn/cc5-works
63775574c82da85ed0e750a4d6978a071096f6e7
[ "MIT" ]
null
null
null
58.308411
2,572
0.470794
true
14,003
Qwen/Qwen-72B
1. YES 2. YES
0.843895
0.822189
0.693841
__label__yue_Hant
0.127795
0.450357
# Finding Roots of Equations ## Calculus review ```python %matplotlib inline import matplotlib.pyplot as plt import numpy as np import scipy as scipy from scipy.interpolate import interp1d ``` Let's review the theory of optimization for multivariate functions. Recall that in the single-variable case, extreme values (local extrema) occur at points where the first derivative is zero, however, the vanishing of the first derivative is not a sufficient condition for a local max or min. Generally, we apply the second derivative test to determine whether a candidate point is a max or min (sometimes it fails - if the second derivative either does not exist or is zero). In the multivariate case, the first and second derivatives are *matrices*. In the case of a scalar-valued function on $\mathbb{R}^n$, the first derivative is an $n\times 1$ vector called the *gradient* (denoted $\nabla f$). The second derivative is an $n\times n$ matrix called the *Hessian* (denoted $H$) Just to remind you, the gradient and Hessian are given by: $$\nabla f(x) = \left(\begin{matrix}\frac{\partial f}{\partial x_1}\\ \vdots \\\frac{\partial f}{\partial x_n}\end{matrix}\right)$$ $$H = \left(\begin{matrix} \dfrac{\partial^2 f}{\partial x_1^2} & \dfrac{\partial^2 f}{\partial x_1\,\partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_1\,\partial x_n} \\[2.2ex] \dfrac{\partial^2 f}{\partial x_2\,\partial x_1} & \dfrac{\partial^2 f}{\partial x_2^2} & \cdots & \dfrac{\partial^2 f}{\partial x_2\,\partial x_n} \\[2.2ex] \vdots & \vdots & \ddots & \vdots \\[2.2ex] \dfrac{\partial^2 f}{\partial x_n\,\partial x_1} & \dfrac{\partial^2 f}{\partial x_n\,\partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_n^2} \end{matrix}\right)$$ One of the first things to note about the Hessian - it's symmetric. This structure leads to some useful properties in terms of interpreting critical points. The multivariate analog of the test for a local max or min turns out to be a statement about the gradient and the Hessian matrix. Specifically, a function $f:\mathbb{R}^n\rightarrow \mathbb{R}$ has a critical point at $x$ if $\nabla f(x) = 0$ (where zero is the zero vector!). Furthermore, the second derivative test at a critical point is as follows: * If $H(x)$ is positive-definite ($\iff$ it has all positive eigenvalues), $f$ has a local minimum at $x$ * If $H(x)$ is negative-definite ($\iff$ it has all negative eigenvalues), $f$ has a local maximum at $x$ * If $H(x)$ has both positive and negative eigenvalues, $f$ has a saddle point at $x$. If you have $m$ equations with $n$ variables, then the $m \times n$ matrix of first partial derivatives is known as the Jacobian $J(x)$. For example, for two equations $f(x, y)$ and $g(x, y)$, we have $$ J(x) = \begin{bmatrix} \frac{\delta f}{\delta x} & \frac{\delta f}{\delta y} \\ \frac{\delta g}{\delta x} & \frac{\delta g}{\delta y} \end{bmatrix} $$ We can now express the multivariate form of Taylor polynomials in a familiar format. $$ f(x + \delta x) = f(x) + \delta x \cdot J(x) + \frac{1}{2} \delta x^T H(x) \delta x + \mathcal{O}(\delta x^3) $$ ## Main Issues in Root Finding in One Dimension * Separating close roots * Numerical Stability * Rate of Convergence * Continuity and Differentiability ## Bisection Method The bisection method is one of the simplest methods for finding zeros of a non-linear function. It is guaranteed to find a root - but it can be slow. 
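As a minimal sketch (not part of the original notebook), the whole iteration can be written as a small reusable function; the tolerance, the iteration cap, and the use of the interval half-width as the stopping test are arbitrary choices here. The geometric idea behind it is developed next.

```python
def bisect(f, a, b, tol=1e-8, max_iter=200):
    """Return an approximate root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = 0.5 * (a + b)
        fc = f(c)
        if fc == 0 or 0.5 * (b - a) < tol:
            return c
        if fa * fc < 0:       # the sign change (and hence a root) lies in [a, c]
            b, fb = c, fc
        else:                 # otherwise it lies in [c, b]
            a, fa = c, fc
    return 0.5 * (a + b)

# Same cubic and bracket as in the plots below; converges to the root at x = -1
bisect(lambda x: x**3 + 4 * x**2 - 3, -3.0, -0.5)
```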
The main idea comes from the intermediate value theorem: If $f(a)$ and $f(b)$ have different signs and $f$ is continuous, then $f$ must have a zero between $a$ and $b$. We evaluate the function at the midpoint, $c = \frac12(a+b)$. $f(c)$ is either zero, has the same sign as $f(a)$ or the same sign as $f(b)$. Suppose $f(c)$ has the same sign as $f(a)$ (as pictured below). We then repeat the process on the interval $[c,b]$. ```python def f(x): return x**3 + 4*x**2 -3 x = np.linspace(-3.1, 0, 100) plt.plot(x, x**3 + 4*x**2 -3) a = -3.0 b = -0.5 c = 0.5*(a+b) plt.text(a,-1,"a") plt.text(b,-1,"b") plt.text(c,-1,"c") plt.scatter([a,b,c], [f(a), f(b),f(c)], s=50, facecolors='none') plt.scatter([a,b,c], [0,0,0], s=50, c='red') xaxis = plt.axhline(0) pass ``` ```python x = np.linspace(-3.1, 0, 100) plt.plot(x, x**3 + 4*x**2 -3) d = 0.5*(b+c) plt.text(d,-1,"d") plt.text(b,-1,"b") plt.text(c,-1,"c") plt.scatter([d,b,c], [f(d), f(b),f(c)], s=50, facecolors='none') plt.scatter([d,b,c], [0,0,0], s=50, c='red') xaxis = plt.axhline(0) pass ``` We can terminate the process whenever the function evaluated at the new midpoint is 'close enough' to zero. This method is an example of what are known as 'bracketed methods'. This means the root is 'bracketed' by the end-points (it is somewhere in between). Another class of methods are 'open methods' - the root need not be somewhere in between the end-points (but it usually needs to be close!) ## Secant Method The secant method also begins with two initial points, but without the constraint that the function values are of opposite signs. We use the secant line to extrapolate the next candidate point. ```python def f(x): return (x**3-2*x+7)/(x**4+2) x = np.arange(-3,5, 0.1); y = f(x) p1=plt.plot(x, y) plt.xlim(-3, 4) plt.ylim(-.5, 4) plt.xlabel('x') plt.axhline(0) t = np.arange(-10, 5., 0.1) x0=-1.2 x1=-0.5 xvals = [] xvals.append(x0) xvals.append(x1) notconverge = 1 count = 0 cols=['r--','b--','g--','y--'] while (notconverge==1 and count < 3): slope=(f(xvals[count+1])-f(xvals[count]))/(xvals[count+1]-xvals[count]) intercept=-slope*xvals[count+1]+f(xvals[count+1]) plt.plot(t, slope*t + intercept, cols[count]) nextval = -intercept/slope if abs(f(nextval)) < 0.001: notconverge=0 else: xvals.append(nextval) count = count+1 plt.show() ``` The secant method has the advantage of fast convergence. While the bisection method has a linear convergence rate (i.e. error goes to zero at the rate that $h(x) = x$ goes to zero, the secant method has a convergence rate that is faster than linear, but not quite quadratic (i.e. $\sim x^\alpha$, where $\alpha = \frac{1+\sqrt{5}}2 \approx 1.6$) however, the trade-off is that the secant method is not guaranteed to find a root in the brackets. A variant of the secant method is known as the **method of false positions**. Conceptually it is identical to the secant method, except that instead of always using the last two values of $x$ for linear interpolation, it chooses the two most recent values that maintain the bracket property (i.e $f(a) f(b) < 0$). It is slower than the secant, but like the bisection, is safe. ## Newton-Raphson Method We want to find the value $\theta$ so that some (differentiable) function $g(\theta)=0$. Idea: start with a guess, $\theta_0$. Let $\tilde{\theta}$ denote the value of $\theta$ for which $g(\theta) = 0$ and define $h = \tilde{\theta} - \theta_0$. 
Then: $$ \begin{eqnarray*} g(\tilde{\theta}) &=& 0 \\\\ &=&g(\theta_0 + h) \\\\ &\approx& g(\theta_0) + hg'(\theta_0) \end{eqnarray*} $$ This implies that $$ h\approx \frac{g(\theta_0)}{g'(\theta_0)}$$ So that $$\tilde{\theta}\approx \theta_0 - \frac{g(\theta_0)}{g'(\theta_0)}$$ Thus, we set our next approximation: $$\theta_1 = \theta_0 - \frac{g(\theta_0)}{g'(\theta_0)}$$ and we have developed an iterative procedure with: $$\theta_n = \theta_{n-1} - \frac{g(\theta_{n-1})}{g'(\theta_{n-1})}$$ #### Example Let $$g(x) = \frac{x^3-2x+7}{x^4+2}$$ ```python x = np.arange(-5,5, 0.1); y = (x**3-2*x+7)/(x**4+2) p1=plt.plot(x, y) plt.xlim(-4, 4) plt.ylim(-.5, 4) plt.xlabel('x') plt.axhline(0) plt.title('Example Function') plt.show() ``` ```python x = np.arange(-5,5, 0.1); y = (x**3-2*x+7)/(x**4+2) p1=plt.plot(x, y) plt.xlim(-4, 4) plt.ylim(-.5, 4) plt.xlabel('x') plt.axhline(0) plt.title('Good Guess') t = np.arange(-5, 5., 0.1) x0=-1.5 xvals = [] xvals.append(x0) notconverge = 1 count = 0 cols=['r--','b--','g--','y--','c--','m--','k--','w--'] while (notconverge==1 and count < 6): funval=(xvals[count]**3-2*xvals[count]+7)/(xvals[count]**4+2) slope=-((4*xvals[count]**3 *(7 - 2 *xvals[count] + xvals[count]**3))/(2 + xvals[count]**4)**2) + (-2 + 3 *xvals[count]**2)/(2 + xvals[count]**4) intercept=-slope*xvals[count]+(xvals[count]**3-2*xvals[count]+7)/(xvals[count]**4+2) plt.plot(t, slope*t + intercept, cols[count]) nextval = -intercept/slope if abs(funval) < 0.01: notconverge=0 else: xvals.append(nextval) count = count+1 ``` From the graph, we see the zero is near -2. We make an initial guess of $$x=-1.5$$ We have made an excellent choice for our first guess, and we can see rapid convergence! ```python funval ``` In fact, the Newton-Raphson method converges quadratically. However, NR (and the secant method) have a fatal flaw: ```python x = np.arange(-5,5, 0.1); y = (x**3-2*x+7)/(x**4+2) p1=plt.plot(x, y) plt.xlim(-4, 4) plt.ylim(-.5, 4) plt.xlabel('x') plt.axhline(0) plt.title('Bad Guess') t = np.arange(-5, 5., 0.1) x0=-0.5 xvals = [] xvals.append(x0) notconverge = 1 count = 0 cols=['r--','b--','g--','y--','c--','m--','k--','w--'] while (notconverge==1 and count < 6): funval=(xvals[count]**3-2*xvals[count]+7)/(xvals[count]**4+2) slope=-((4*xvals[count]**3 *(7 - 2 *xvals[count] + xvals[count]**3))/(2 + xvals[count]**4)**2) + (-2 + 3 *xvals[count]**2)/(2 + xvals[count]**4) intercept=-slope*xvals[count]+(xvals[count]**3-2*xvals[count]+7)/(xvals[count]**4+2) plt.plot(t, slope*t + intercept, cols[count]) nextval = -intercept/slope if abs(funval) < 0.01: notconverge = 0 else: xvals.append(nextval) count = count+1 ``` We have stumbled on the horizontal asymptote. The algorithm fails to converge. ### Convergence Rate The following is a derivation of the convergence rate of the NR method: Suppose $x_k \; \rightarrow \; x^*$ and $g'(x^*) \neq 0$. Then we may write: $$x_k = x^* + \epsilon_k$$. 
Now expand $g$ at $x^*$: $$g(x_k) = g(x^*) + g'(x^*)\epsilon_k + \frac12 g''(x^*)\epsilon_k^2 + ...$$ $$g'(x_k)=g'(x^*) + g''(x^*)\epsilon_k$$ We have that \begin{eqnarray} \epsilon_{k+1} &=& \epsilon_k + \left(x_{k-1}-x_k\right)\\ &=& \epsilon_k -\frac{g(x_k)}{g'(x_k)}\\ &\approx & \frac{g'(x^*)\epsilon_k + \frac12g''(x^*)\epsilon_k^2}{g'(x^*)+g''(x^*)\epsilon_k}\\ &\approx & \frac{g''(x^*)}{2g'(x^*)}\epsilon_k^2 \end{eqnarray} ## Gauss-Newton For 1D, the Newton method is $$ x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} $$ We can generalize to $k$ dimensions by $$ x_{n+1} = x_n - J^{-1} f(x_n) $$ where $x$ and $f(x)$ are now vectors, and $J^{-1}$ is the inverse Jacobian matrix. In general, the Jacobian is not a square matrix, and we use the generalized inverse $(J^TJ)^{-1}J^T$ instead, giving $$ x_{n+1} = x_n - (J^TJ)^{-1}J^T f(x_n) $$ In multivariate nonlinear estimation problems, we can find the vector of parameters $\beta$ by minimizing the residuals $r(\beta)$, $$ \beta_{n+1} = \beta_n - (J^TJ)^{-1}J^T r(\beta_n) $$ where the entries of the Jacobian matrix $J$ are $$ J_{ij} = \frac{\partial r_i(\beta)}{\partial \beta_j} $$ ## Inverse Quadratic Interpolation Inverse quadratic interpolation is a type of polynomial interpolation. Polynomial interpolation simply means we find the polynomial of least degree that fits a set of points. In quadratic interpolation, we use three points, and find the quadratic polynomial that passes through those three points. ```python def f(x): return (x - 2) * x * (x + 2)**2 x = np.arange(-5,5, 0.1); plt.plot(x, f(x)) plt.xlim(-3.5, 0.5) plt.ylim(-5, 16) plt.xlabel('x') plt.axhline(0) plt.title("Quadratic Interpolation") #First Interpolation x0=np.array([-3,-2.5,-1.0]) y0=f(x0) f2 = interp1d(x0, y0,kind='quadratic') #Plot parabola xs = np.linspace(-3, -1, num=10000, endpoint=True) plt.plot(xs, f2(xs)) #Plot first triplet plt.plot(x0, f(x0),'ro'); plt.scatter(x0, f(x0), s=50, c='yellow'); #New x value xnew=xs[np.where(abs(f2(xs))==min(abs(f2(xs))))] plt.scatter(np.append(xnew,xnew), np.append(0,f(xnew)), c='black'); #New triplet x1=np.append([-3,-2.5],xnew) y1=f(x1) f2 = interp1d(x1, y1,kind='quadratic') #New Parabola xs = np.linspace(min(x1), max(x1), num=100, endpoint=True) plt.plot(xs, f2(xs)) xnew=xs[np.where(abs(f2(xs))==min(abs(f2(xs))))] plt.scatter(np.append(xnew,xnew), np.append(0,f(xnew)), c='green'); ``` So that's the idea behind quadratic interpolation. Use a quadratic approximation, find the zero of interest, use that as a new point for the next quadratic approximation. Inverse quadratic interpolation means we do quadratic interpolation on the *inverse function*. So, if we are looking for a root of $f$, we approximate $f^{-1}(x)$ using quadratic interpolation. This just means fitting $x$ as a function of $y$, so that the quadratic is turned on its side and we are guaranteed that it cuts the x-axis somewhere. Note that the secant method can be viewed as a *linear* interpolation on the inverse of $f$. 
We can write: $$f^{-1}(y) = \frac{(y-f(x_n))(y-f(x_{n-1}))}{(f(x_{n-2})-f(x_{n-1}))(f(x_{n-2})-f(x_{n}))}x_{n-2} + \frac{(y-f(x_n))(y-f(x_{n-2}))}{(f(x_{n-1})-f(x_{n-2}))(f(x_{n-1})-f(x_{n}))}x_{n-1} + \frac{(y-f(x_{n-2}))(y-f(x_{n-1}))}{(f(x_{n})-f(x_{n-2}))(f(x_{n})-f(x_{n-1}))}x_{n-1}$$ We use the above formula to find the next guess $x_{n+1}$ for a zero of $f$ (so $y=0$): $$x_{n+1} = \frac{f(x_n)f(x_{n-1})}{(f(x_{n-2})-f(x_{n-1}))(f(x_{n-2})-f(x_{n}))}x_{n-2} + \frac{f(x_n)f(x_{n-2})}{(f(x_{n-1})-f(x_{n-2}))(f(x_{n-1})-f(x_{n}))}x_{n-1} + \frac{f(x_{n-2})f(x_{n-1})}{(f(x_{n})-f(x_{n-2}))(f(x_{n})-f(x_{n-1}))}x_{n}$$ We aren't so much interested in deriving this as we are understanding the procedure: ```python x = np.arange(-5,5, 0.1); plt.plot(x, f(x)) plt.xlim(-3.5, 0.5) plt.ylim(-5, 16) plt.xlabel('x') plt.axhline(0) plt.title("Inverse Quadratic Interpolation") #First Interpolation x0=np.array([-3,-2.5,1]) y0=f(x0) f2 = interp1d(y0, x0,kind='quadratic') #Plot parabola xs = np.linspace(min(f(x0)), max(f(x0)), num=10000, endpoint=True) plt.plot(f2(xs), xs) #Plot first triplet plt.plot(x0, f(x0),'ro'); plt.scatter(x0, f(x0), s=50, c='yellow'); ``` Convergence rate is approximately $1.8$. The advantage of the inverse method is that we will *always* have a real root (the parabola will always cross the x-axis). A serious disadvantage is that the initial points must be very close to the root or the method may not converge. That is why it is usually used in conjunction with other methods. ## Brentq Method Brent's method is a combination of bisection, secant and inverse quadratic interpolation. Like bisection, it is a 'bracketed' method (starts with points $(a,b)$ such that $f(a)f(b)<0$. Roughly speaking, the method begins by using the secant method to obtain a third point $c$, then uses inverse quadratic interpolation to generate the next possible root. Without going into too much detail, the algorithm attempts to assess when interpolation will go awry, and if so, performs a bisection step. Also, it has certain criteria to reject an iterate. If that happens, the next step will be linear interpolation (secant method). To find zeros, use ```python x = np.arange(-5,5, 0.1); p1=plt.plot(x, f(x)) plt.xlim(-4, 4) plt.ylim(-10, 20) plt.xlabel('x') plt.axhline(0) pass ``` ```python from scipy import optimize ``` ```python scipy.optimize.brentq(f,-1,.5) ``` ```python scipy.optimize.brentq(f,.5,3) ``` ## Roots of polynomials One method for finding roots of polynomials converts the problem into an eigenvalue one by using the **companion matrix** of a polynomial. For a polynomial $$ p(x) = a_0 + a_1x + a_2 x^2 + \ldots + a_m x^m $$ the companion matrix is $$ A = \begin{bmatrix} -a_{m-1}/a_m & -a_{m-2}/a_m & \ldots & -a_0/a_m \\ 1 & 0 & \ldots & 0 \\ 0 & 1 & \ldots & 0 \\ \vdots & \vdots & \ldots & \vdots \\ 0 & 0 & \ldots & 0 \end{bmatrix} $$ The characteristic polynomial of the companion matrix is $\lvert \lambda I - A \rvert$ which expands to $$ a_0 + a_1 \lambda + a_2 \lambda^2 + \ldots + a_m \lambda^m $$ In other words, the roots we are seeking are the eigenvalues of the companion matrix. For example, to find the cube roots of unity, we solve $x^3 - 1 = 0$. The `roots` function uses the companion matrix method to find roots of polynomials. 
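As a small supplement before the worked example below, here is one possible sketch of building the companion matrix for an arbitrary polynomial and reading its roots off the eigenvalues. The helper name `companion_matrix` is ours for illustration only (SciPy also ships a `scipy.linalg.companion` helper), and it follows the first-row convention shown above.

```python
import numpy as np

def companion_matrix(coeffs):
    """Companion matrix for coefficients ordered [a_m, ..., a_1, a_0]."""
    a = np.asarray(coeffs, dtype=float)
    m = len(a) - 1                     # degree of the polynomial
    C = np.zeros((m, m))
    C[0, :] = -a[1:] / a[0]            # first row: -a_{m-1}/a_m, ..., -a_0/a_m
    C[1:, :-1] = np.eye(m - 1)         # ones on the subdiagonal
    return C

# cube roots of unity: x^3 - 1 = 0
np.linalg.eigvals(companion_matrix([1, 0, 0, -1]))
```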
```python # Coefficients of $x^3, x^2, x^1, x^0$ poly = np.array([1, 0, 0, -1]) ``` Manual construction ```python A = np.array([ [0,0,1], [1,0,0], [0,1,0] ]) ``` ```python scipy.linalg.eigvals(A) ``` Using built-in function ```python x = np.roots(poly) x ``` ```python plt.scatter([z.real for z in x], [z.imag for z in x]) theta = np.linspace(0, 2*np.pi, 100) u = np.cos(theta) v = np.sin(theta) plt.plot(u, v, ':') plt.axis('square') pass ``` ## Using `scipy.optimize` ### Finding roots of univariate equations ```python def f(x): return x**3-3*x+1 ``` ```python x = np.linspace(-3,3,100) plt.axhline(0, c='red') plt.plot(x, f(x)) pass ``` ```python from scipy.optimize import brentq, newton ``` #### `brentq` is the recommended method ```python brentq(f, -3, 0), brentq(f, 0, 1), brentq(f, 1,3) ``` #### Secant method ```python newton(f, -3), newton(f, 0), newton(f, 3) ``` #### Newton-Raphson method ```python fprime = lambda x: 3*x**2 - 3 newton(f, -3, fprime), newton(f, 0, fprime), newton(f, 3, fprime) ``` ### Finding fixed points Finding the fixed points of a function $g$, i.e. solving $g(x) = x$, is the same as finding the roots of $g(x) - x$. However, specialized algorithms also exist - e.g. `scipy.optimize.fixed_point`. ```python from scipy.optimize import fixed_point ``` ```python x = np.linspace(-3,3,100) plt.plot(x, f(x), color='red') plt.plot(x, x) pass ``` ```python fixed_point(f, 0), fixed_point(f, -3), fixed_point(f, 3) ``` ### Multivariate roots and fixed points Use `root` or `fsolve` to solve systems of non-linear equations. ```python from scipy.optimize import root, fsolve ``` Suppose we want to solve a system of $m$ equations with $n$ unknowns \begin{align} f(x_0, x_1) &= x_1 - 3x_0(x_0+1)(x_0-1) \\ g(x_0, x_1) &= 0.25 x_0^2 + x_1^2 - 1 \end{align} Note that the equations are non-linear and there can be multiple solutions. These can be interpreted as fixed points of a system of differential equations. ```python def f(x): return [x[1] - 3*x[0]*(x[0]+1)*(x[0]-1), .25*x[0]**2 + x[1]**2 - 1] ``` ```python sol = root(f, (0.5, 0.5)) sol.x ``` ```python fsolve(f, (0.5, 0.5)) ``` ```python r0 = root(f,[1,1]) r1 = root(f,[0,1]) r2 = root(f,[-1,1.1]) r3 = root(f,[-1,-1]) r4 = root(f,[2,-0.5]) roots = np.c_[r0.x, r1.x, r2.x, r3.x, r4.x] ``` ```python Y, X = np.mgrid[-3:3:100j, -3:3:100j] U = Y - 3*X*(X + 1)*(X-1) V = .25*X**2 + Y**2 - 1 plt.streamplot(X, Y, U, V, color=U, linewidth=2, cmap=plt.cm.autumn) plt.scatter(roots[0], roots[1], s=50, c='none', edgecolors='k', linewidth=2) pass ``` #### We can also give the Jacobian ```python def jac(x): return [[3 - 9*x[0]**2, 1], [0.5*x[0], 2*x[1]]] ``` ```python sol = root(f, (0.5, 0.5), jac=jac) sol.x, sol.fun ``` #### Check that values found are really roots ```python np.allclose(f(sol.x), 0) ``` #### Starting from other initial conditions, different roots may be found ```python sol = root(f, (12,12)) sol.x ``` ```python np.allclose(f(sol.x), 0) ```
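The bisection section above only illustrates the iterations graphically, so here is a minimal from-scratch sketch of the loop (assuming `f` is continuous and `f(a)`, `f(b)` have opposite signs); its result can be checked against `brentq` on the same cubic.

```python
def bisect(f, a, b, tol=1e-10, max_iter=200):
    """Repeatedly halve the bracket [a, b] until it is smaller than tol."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = 0.5 * (a + b)
        fc = f(c)
        if fc == 0 or (b - a) / 2 < tol:
            return c
        if fa * fc < 0:        # the root lies in [a, c]
            b, fb = c, fc
        else:                  # the root lies in [c, b]
            a, fa = c, fc
    return 0.5 * (a + b)

f = lambda x: x**3 - 3*x + 1
bisect(f, 0, 1)                # compare with brentq(f, 0, 1) above
```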
509c810d8ef9496a287bfb5908550168541e93ba
31,540
ipynb
Jupyter Notebook
notebooks/T07B_Root_Finding.ipynb
Yijia17/sta-663-2021
e6484e3116c041b8c8eaae487eff5f351ff499c9
[ "MIT" ]
18
2021-01-19T16:35:54.000Z
2022-01-01T02:12:30.000Z
notebooks/T07B_Root_Finding.ipynb
Yijia17/sta-663-2021
e6484e3116c041b8c8eaae487eff5f351ff499c9
[ "MIT" ]
null
null
null
notebooks/T07B_Root_Finding.ipynb
Yijia17/sta-663-2021
e6484e3116c041b8c8eaae487eff5f351ff499c9
[ "MIT" ]
24
2021-01-19T16:26:13.000Z
2022-03-15T05:10:14.000Z
28.93578
798
0.522036
true
6,676
Qwen/Qwen-72B
1. YES 2. YES
0.927363
0.936285
0.868276
__label__eng_Latn
0.956898
0.85563
``` import scipy.stats as stats figsize( 12.5, 4) ``` #Chapter 4 ______ ##The greatest theorem never told > This relatively short chapter focuses on an idea that is always bouncing around our heads, but is rarely made explicit outside books devoted to statistics or Monte Carlo. In fact, we've been used this idea in every example so far. ###The Law of Large Numbers Let $Z_i$ be samples from some probability distribution. According to *the Law of Large numbers*, so long as $E[Z]$ is finite, the following holds, $$\frac{1}{N} \sum_{i=0}^N Z_i \rightarrow E[ Z ], \;\;\; N \rightarrow \infty$$ In words: > The average of a sequence of random variables from the same distribution converges to the expected value of that distribution. This may seem like a boring result, but it will be the most useful tool you use. ### Intuition If the above Law is somewhat surprising, it can be made more clear be examining a simple example. Consider a random variable $Z$ that can take only two values, $c_1$ and $c_2$. Suppose we have a large number of samples of $Z$, denoting a specific sample $Z_i$. The Law says that we can approximate the expected value of $Z$ by averaging over all samples. Consider the average: $$ \frac{1}{N} \sum_{i=0}^N \;Z_i $$ By construction, $Z_i$ can only take on $c_1$ or $c_2$, hence we can partition the sum over these two values: \begin{align} \frac{1}{N} \sum_{i=0}^N \;Z_i & =\frac{1}{N} \big( \sum_{ Z_i = c_1}c_1 + \sum_{Z_i=c_2}c_2 \big) \\\\[5pt] & = c_1 \sum_{ Z_i = c_1}\frac{1}{N} + c_2 \sum_{ Z_i = c_2}\frac{1}{N} \\\\[5pt] & = c_1 \times \text{ (approximate frequency of $c_1$) } \\\\ & \;\;\;\;\;\;\;\;\; + c_2 \times \text{ (approximate frequency of $c_2$) } \\\\[5pt] & \approx c_1 \times P(Z = c_1) + c_2 \times P(Z = c_2 ) \\\\[5pt] & = E[Z] \end{align} Equality holds in the limit, but we can get closer and closer by using more and more samples in the average. This Law holds for *any distribution*, minus some pathological examples that only mathematicians have fun with. ##### Example ____ Below is a diagram of the Law of Large numbers in action for three different sequences of Poisson random variables. We sample `sample_size= 100000` poisson random variables with parameter $\lambda = 4.5$. (Recall the expected value of a Poisson random variable is equal to it's parameter.) We calculate the average for the first $n$ samples, for $n=1$ to `sample_size`. ``` figsize( 12.5, 5 ) sample_size = 100000 expected_value = lambda_ = 4.5 poi = stats.poisson N_samples = range(1,sample_size,100) for k in range(3): samples = poi.rvs( lambda_, size = sample_size ) partial_average = [ samples[:i].mean() for i in N_samples ] plt.plot( N_samples, partial_average, lw=1.5,label="average \ of $n$ samples; seq. %d"%k) plt.plot( N_samples, expected_value*np.ones_like( partial_average), \ ls = "--", label = "true expected value", c = "k" ) plt.ylim( 4.35, 4.65) plt.title( "Convergence of the average of \n random variables to its \ expected value" ) plt.ylabel( "average of $n$ samples" ) plt.xlabel( "# of samples, $n$") plt.legend() ``` Looking at the above plot, it is clear that when the sample size is small, there is greater variation in the average (compare how *jagged and jumpy* the average is initially, then *smooths* out). All three paths *approach* the value 4.5, but just flirt with it as $N$ gets large. Mathematicians and statistician have another name for *flirting*: convergence. 
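The claim that the Law holds for any distribution with a finite expected value can be checked with a quick, hedged aside (reusing the pylab-style environment of the cells above, so `np`, `plt`, `stats` and `figsize` are assumed to be available). Here the samples are exponential with scale 0.5, so the running average should settle near 0.5.

```
figsize( 12.5, 4 )

expo = stats.expon( scale = 0.5 )    # true expected value = 0.5
sample_size = 100000
N_samples = range( 1, sample_size, 100 )

samples = expo.rvs( sample_size )
partial_average = [ samples[:i].mean() for i in N_samples ]

plt.plot( N_samples, partial_average, lw = 1.5, label = "average of $n$ samples" )
plt.plot( N_samples, 0.5*np.ones_like( partial_average ), ls = "--", c = "k",
          label = "true expected value" )
plt.xlabel( "# of samples, $n$" )
plt.legend()
```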
Another very relevant question we can ask is *how quickly am I converging to the expected value?* Let's plot something new. For a specific $N$, let's do the above trials thousands of times and compute how far away we are from the true expected value, on average. But wait-- *compute on average*? This simply the law of large numbers again! For example, we are interested in, for a specific $N$, the quantity: $$D(N) = \sqrt{ \;E\left[\;\; \left( \frac{1}{N}\sum_{i=0}^NZ_i - 4.5 \;\right)^2 \;\;\right] \;\;}$$ (We take the square root so the dimensions of the above quantity and our random variables are the same). As the above is an expected value, it can be approximated using the law of large numbers: instead of averaging $Z_i$, we calculate the following multiple times and average them: $$ Y_k = \left( \;\frac{1}{N}\sum_{i=0}^NZ_i - 4.5 \; \right)^2 $$ i.e., we consider the average $$ \frac{1}{N_Y} \sum_{k=1}^{N_Y} Y_k \approx D(N) $$ where $N_Y$ is some suitably large number. ``` figsize( 12.5, 4) N_Y = 250 D_N_results = [] N_array = np.arange( 0, 50000, 2500 ) lambda_ = 4.5 expected_value = 4.5 def D_N( n ): Z = poi.rvs( lambda_, size = (n, N_Y) ) average_Z = Z.mean(axis=0) return np.sqrt( ( (average_Z - expected_value)**2 ).mean() ) for n in N_array: D_N_results.append( D_N(n) ) plt.xlabel( "$N$" ) plt.ylabel( "expected squared-distance from true value" ) plt.plot(N_array, D_N_results, lw = 3, label="expected distance between\n\ expected value and \naverage of $N$ random variables.") plt.plot( N_array, np.sqrt(expected_value)/np.sqrt(N_array), ls = "--", label = r"$\frac{\sqrt{\lambda}}{\sqrt{N}}$" ) plt.legend() plt.title( "How 'fast' is the sample average converging? " ) ``` As expected, the expected distance between our sample average and the actual expected value shrinks as $N$ grows large. But also notice that the *rate* of convergence decreases, that is, we need only 10 000 additional samples to move from 0.020 to 0.015, a difference of 0.005, but *20 000* more samples to again decrease from 0.015 to 0.010, again only a 0.005 decrease. It turns out we can measure this rate of convergence. Above I have plotted a second line, the function $\sqrt{\lambda}/\sqrt{N}$. This was not choosen arbitrarily. In most cases, given a sequence of random variable distributed like $Z$, the rate of converge to $E[Z]$ of the Law of Large Numbers is $$ \frac{ \sqrt{ \; Var(Z) \; } }{\sqrt{N} }$$ This is useful to know: for a given large $N$, we know (on average) how far away we are from the estimate. On the other hand, in a Bayesian setting, this can seem like a useless result: Bayesian analysis is OK with uncertainity so what's the *statistical* point of adding extra precise digits? Though drawing samples can be so computationally cheap that having a *larger* $N$ is fine too. ### How do we compute $Var(Z)$ though? The variance is simply another expected value that can be approximated! Consider the following, once we have the expected value (by using the Law of Large Numbers to estimate it, denote it $\mu$), we can estimate the variance: $$ \frac{1}{N}\sum_{i=0}^N \;(Z_i - \mu)^2 \rightarrow E[ \;( Z - \mu)^2 \;] = Var( Z )$$ ### Expected values and probablities There is an even less explicit relationship between expected value and estimating probabilities. 
Define the *indicator function* $$\mathbb{1}_A(x) = \begin{cases} 1 & x \in A \\\\ 0 & else \end{cases} $$ Then, by the law of large numbers: $$ \frac{1}{N} \sum_{i=1}^N \mathbb{1}_A(X_i)$$ Again, this is fairly obvious after a moments thought: the indicator function is only 1 if the event occurs, so we are summing only the times the event occurs and dividing by the total number of trials (consider how we usually approximate probablities using frequencies). ### What does this all have to do with Bayesian statistics? *Point estimates*, to be introduced in the next chapter, in Bayesian inference are computed using expected values. In more analytical Bayesian inference, we would of been required to evaluate complicated expected values represented as multi-dimensional integrals. No longer. If we can sample from the posterior distibution directly, we simply need to evaluate averages. Much easier. If accuracy is a priority, plots like the ones above show how fast you are converging. And if further accuracy is desired, just take more samples from the posterior. When is enough enough? When can you stop drawing samples from the posterior? That is the practioners decision, and also dependent on the variance of the samples (recall from above a high variance means the average will converge slower). We also should understand when the Law of Large Numbers fails. As the name implies, and comparing the graphs above for small $N$, the Law is only true for large sample sizes. Without this, the asymptotic result is not reliable. Knowing in what situations the Law fails can give use *confidence in how unconfident we should be*. The next section deals with this issue. ## Confidence should be proportional to sample size The Law of Large Numbers is only valid as $N$ gets *infinitely* large: the law is treasure at the end of an infinite rainbow. While the law is a powerful tool, it is foolhardy to apply it liberally. Our next example illustrates this. ##### Example: Aggregated geographic data -------- Often data comes in aggregated form. For instance, data may be grouped by state, county, or city level. Of course, the population numbers vary per geographic area. If included in the data is an average of some characteristic of each the geographic area, we must be concious of the Law of Large Numbers and how it can *fail* for areas with small populations. Suppose there are five thousand counties in our dataset. Furthermore, population number in each state are uniformly distributed between 100 and 4000. The way the population numbers are generated is irrelevant to the discussion, so we do not justify this. Furthermore, we are interested in measuring the average height of individuals per county. Unbeknowst to the us, height does not vary across county, and each individual, regardless of the county he or she is currenly living in, has the same distribution of what their height may be: $$ \text{height} \sim \text{Normal}(150, 15 ) $$ We aggregate the individuals at the county level, so we only have data for the *average in the county*. What might our dataset look like? ``` std_height = 15 mean_height = 150 n_counties = 5000 pop_generator = stats.randint( 100, 4000 ) norm = stats.norm( mean_height, scale = std_height ) #generate some artificial population numbers population = pop_generator.rvs( n_counties ) average_across_county = np.zeros( n_counties ) for i in range( n_counties ): #generate some individuals and take the mean average_across_county[i] = norm.rvs( population[i] ).mean() #where are the extreme populations? 
i_min = np.argmin( average_across_county ) i_max = np.argmax( average_across_county ) #plot population vs. average plt.scatter( population, average_across_county, alpha = 0.5, c="#7A68A6") plt.scatter( [ population[i_min], population[i_max] ], [average_across_county[i_min], average_across_county[i_max] ], s = 60, marker = "o", facecolors = "none", edgecolors = "#A60628", linewidths = 1.5, label="extreme heights") plt.xlim( 100, 4000 ) plt.title( "Average height vs. County Population") plt.xlabel("County Population") plt.ylabel("Average height in county") plt.plot( [100, 4000], [150, 150], color = "k", label = "true expected \ height", ls="--" ) plt.legend(scatterpoints = 1) ``` What do we observe? *Without accounting for population* we run the risk of making an enourmous inference error: if we ignored population size, we would say that the county with the shortest and tallest individuals have been correctly circled. But this inference is wrong for the following reason. These two counties do *not* necessarily have the most extreme heights. The error is that the calculated average of the small population is not a good reflection of the true expected value of the popuation (which should be $\mu =150$). The sample size/population size/$N$, whatever you want to call it, is simply too small to invoke the Law of Large Numbers effectively. We provide more damning evidence against this inference. Recall the population numbers were uniformly distributed over 100 to 4000. Our intuition should tell us that the counties with the most extreme population heights should also be uniformly spread over 100 to 4000, and certainly independent of the county's population. Not so. Below are the population sizes of the counties with the most extreme heights. ``` print "Population sizes of 10 'shortest' counties: " print population[ np.argsort( average_across_county )[:10] ] print print "Population sizes of 10 'tallest' counties: " print population[ np.argsort( -average_across_county )[:10] ] ``` Population sizes of 10 'shortest' counties: [181 168 229 110 156 123 222 154 498 375] Population sizes of 10 'tallest' counties: [105 114 111 236 373 244 183 278 234 268] Not at all uniform over 100 to 4000. This is an absolute failure of the Law of Large Numbers. Below is data from the 2010 US census, which partitions populations beyond counties to the level of block groups (which are aggregates of city blocks or equivilants). The dataset is from a Kaggle machine learning competition some collegues and I participated in. The objective was to predict the census letter mail-back rate of a group block, measured between 0 and 100, using census variables (median income, number of females in the block-group, number of trailer parks, average number of children etc.). Below we plot the mail-back rate versus block group population: ``` figsize( 12.5, 5 ) data = np.genfromtxt( "./data/census_data.csv", skip_header=1, delimiter= ",") plt.scatter( data[:,1], data[:,0], alpha = 0.5, c="#7A68A6") plt.title("Census mail-back rate vs Population") plt.ylabel("Mail-back rate") plt.xlabel("population of block-group") plt.xlim(-100, 15e3 ) plt.ylim( -5, 105) i_min = np.argmin( data[:,0] ) i_max = np.argmax( data[:,0] ) plt.scatter( [ data[i_min,1], data[i_max, 1] ], [ data[i_min,0], data[i_max,0] ], s = 60, marker = "o", facecolors = "none", edgecolors = "#A60628", linewidths = 1.5, label="most extreme points") plt.legend(scatterpoints = 1) ``` The above is a classic phenonmenon in statistics. 
I say *classic* referring to the "shape" of the scatter plot above. It follows a classic triangular form, that tightens as we increase the sample size (as the Law of Large Numbers becomes more exact). I am perhaps overstressing the point and maybe I should have titled the book *"You don't have big data problems!"*, but here again is an example of the trouble with *small datasets*, not big ones. Simpley, small datasets cannot be processed using the Law of Large Numbers. Compare, again, with applying the Law without hassle to big datasets (ex. big data). I mentioned earlier that paradoxically big data prediction problems are solved by relatively simple algorithms. The paradox is partially resolved by understanding that the Law of Large Numbers creates solutions that are *stable*, i.e. adding or substracting a few data points will not affect the solution much. On the other hand, adding or removing data points to a small dataset can create very different results. Returning to our Census return rates, we can ask what are the population sizes of block-groups with the most extreme return rates: ``` print "Population sizes of 10 'least-responsible' block-groups: " print data[ np.argsort( data[:,0] )[:10], 1 ].astype(int) print print "Population sizes of 10 'most-responsible' block-groups: " print data[ np.argsort( data[:,0] )[-10:], 1 ].astype(int) ``` Population sizes of 10 'least-responsible' block-groups: [ 241 447 11 1 13 954 2 257 1058 1187] Population sizes of 10 'most-responsible' block-groups: [ 452 3764 10 1 47 6 1 21 2380 550] Again, we see the numbers are biased towards the left. For further reading on the hidden dangers of the Law of Large Numbers, I would recommend the excellent manuscript [The Most Dangerous Equation](http://nsm.uh.edu/~dgraur/niv/TheMostDangerousEquation.pdf). #### Conclusion While the Law of Large Numbers is cool, it is only true so much as its name implies: with large sample sizes only. We have seen how our inference can be affected by not considering *how the data is shaped*. By becoming Bayesian experts, we can avoid two traps of the Law of Large Numbers. 1. By (cheaply) drawing many samples from the posterior distributions, we can ensure that the Law of Large Number applies as we approximate expected values (which we will do in the next chapter). 2. Bayesian inference understands that with small sample sizes, we can observe wild randomness. Our posterior distribution will reflect this by being more spread rather than tightly concentrated. Thus, our inference should be correctable. ##### Exercises 1\. How would you estimate the quantity $E\left[ \cos{X} \right]$, where $X \sim \text{Exp}(4)$? What about $E\left[ \cos{X} | X \lt 1\right]$, i.e. the expected value *given* we know $X$ is less than 1? Would you need more samples than the original samples size to be equally as accurate? ``` ## Enter code here exp = stats.expon( 4 ) N = 1e5 X = exp.rvs( N ) ## ... ``` 2. The following table was located in the paper "Going for Three: Predicting the Likelihood of Field Goal Success with Logistic Regression" [2]. The table ranks football field-goal kickers by there percent of non-misses. What mistake have the researchers made? 
----- ### Table 3: Kicker Careers Ranked by Make Percentage <table><tbody><tr><th>Rank </th><th>Kicker </th><th>Make % </th><th>Number of Kicks</th></tr><tr><td>1 </td><td>Garrett Hartley </td><td>87.7 </td><td>57</td></tr><tr><td>2</td><td> Matt Stover </td><td>86.8 </td><td>335</td></tr><tr><td>3 </td><td>Robbie Gould </td><td>86.2 </td><td>224</td></tr><tr><td>4 </td><td>Rob Bironas </td><td>86.1 </td><td>223</td></tr><tr><td>5</td><td> Shayne Graham </td><td>85.4 </td><td>254</td></tr><tr><td>… </td><td>… </td><td>…</td><td> </td></tr><tr><td>51</td><td> Dave Rayner </td><td>72.2 </td><td>90</td></tr><tr><td>52</td><td> Nick Novak </td><td>71.9 </td><td>64</td></tr><tr><td>53 </td><td>Tim Seder </td><td>71.0 </td><td>62</td></tr><tr><td>54 </td><td>Jose Cortez </td><td>70.7</td><td> 75</td></tr><tr><td>55 </td><td>Wade Richey </td><td>66.1</td><td> 56</td></tr></tbody></table> ### References 1. Wainer, Howard. *The Most Dangerous Equation*. American Scientist, Volume 95. 2. Clarck, Torin K., Aaron W. Johnson, and Alexander J. Stimpson. "Going for Three: Predicting the Likelihood of Field Goal Success with Logistic Regression." (2013): n. page. Web. 20 Feb. 2013. ``` from IPython.core.display import HTML def css_styling(): styles = open("../styles/custom.css", "r").read() return HTML(styles) css_styling() ``` <style> @font-face { font-family: "Computer Modern"; src: url('http://mirrors.ctan.org/fonts/cm-unicode/fonts/otf/cmunss.otf'); } div.cell{ width:800px; margin-left:auto; margin-right:auto; } h1 { font-family: "Charis SIL", Palatino, serif; } div.text_cell_render{ font-family: Computer Modern, "Helvetica Neue", Arial, Helvetica, Geneva, sans-serif; line-height: 145%; font-size: 120%; width:800px; margin-left:auto; margin-right:auto; } .CodeMirror{ font-family: "Source Code Pro", source-code-pro,Consolas, monospace; } .prompt{ display: None; } .text_cell_render h5 { font-weight: 300; font-size: 16pt; color: #4057A1; font-style: italic; margin-bottom: .5em; margin-top: 0.5em; display: block; } .warning{ color: rgb( 240, 20, 20 ) } </style> ``` ```
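A hedged sketch of Exercise 1 above, for readers who want to check their approach (skip it to try the exercise unaided). It assumes $\text{Exp}(4)$ means rate $\lambda = 4$, i.e. `scale = 1/4` in SciPy's parameterization; the unconditional estimate can be compared with the closed form $\lambda^2/(\lambda^2+1) = 16/17 \approx 0.941$, and the conditional expectation just reuses the samples with $X < 1$.

```
lambda_ = 4.0
N = 100000
X = stats.expon( scale = 1./lambda_ ).rvs( N )   # Exp(4) interpreted as rate 4

print "E[cos X] estimate: ", np.cos( X ).mean()
print "closed form 16/17: ", 16./17

X_small = X[ X < 1 ]                             # condition on X < 1 by filtering samples
print "E[cos X | X < 1] estimate:", np.cos( X_small ).mean()
print "samples left after conditioning:", X_small.shape[0]
```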
4e9b2bb23a8913864a16dba23f84fe25c66dfbaa
409,956
ipynb
Jupyter Notebook
Chapter4_TheGreatestTheoremNeverTold/LawOfLargeNumbers.ipynb
bzillins/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers
c08a6344b8d0e39fcdb9702913b46e1b4e33fb9a
[ "MIT" ]
1
2019-05-20T10:54:19.000Z
2019-05-20T10:54:19.000Z
Chapter4_TheGreatestTheoremNeverTold/LawOfLargeNumbers.ipynb
bzillins/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers
c08a6344b8d0e39fcdb9702913b46e1b4e33fb9a
[ "MIT" ]
null
null
null
Chapter4_TheGreatestTheoremNeverTold/LawOfLargeNumbers.ipynb
bzillins/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers
c08a6344b8d0e39fcdb9702913b46e1b4e33fb9a
[ "MIT" ]
null
null
null
673.162562
129,484
0.926502
true
5,343
Qwen/Qwen-72B
1. YES 2. YES
0.757794
0.909907
0.689522
__label__eng_Latn
0.994978
0.440323
# Fitting a Morse Diatomic Absorption spectrum with a non-Condon Moment In these spectroscopy calculations, we are given $\omega_e$, $\chi_e \omega_e$, the reduced mass $\mu$ and the equilibrium position $r_e$. For each atom, we want to create a system of units out of these. \begin{align} h &= A \cdot e_u\cdot T_u = A \cdot m_u \frac{l_u^2}{T_u} \end{align} lower case means we are setting it still, capital letters mean they are determined. If we assume right now we want to set $E_u$ to be some spectoscopic value in wavenumbers and set $\hbar$ then we know we have to let time float, which is fine since this code is not a time-dependent one. \begin{align} A \cdot T_u &= \frac{h}{e_u} \\ e_u &= m_u \frac{l_u^2}{T_u^2} \\ T_u &= \sqrt{ \frac{ m_u l_u^2}{e_u} }\\ A &= \frac{h}{e_u}\sqrt{ \frac{e_u}{ m_u l_u^2} } = \sqrt{ \frac{h^2}{e_u m_u l_u^2} } \end{align} so we can clearly only select either the mass or the length to fix in a system of units which is self-consistent. ```python import math import numpy as np from scipy.special import gamma, genlaguerre import scipy.integrate import scipy.misc import sympy.mpmath import sympy.functions.combinatorial.factorials as fact import matplotlib.pyplot as plt %matplotlib inline ``` ```python joules_per_wavenumber = .01 #kg * m^3 / s^2 h_joules_seconds = 6.62607E-34 #Joule*seconds ``` ```python TEST1_SYSTEM_DICTIONARY = {"reduced_mass" : 1.0, "alpha" : 1.0, "center" : 0.0, "D" : 2.0} TEST2_SYSTEM_DICTIONARY = {"reduced_mass" : 1.0, "alpha" : 1.5, "center" : 0.5, "D" : 1.0} DEFAULT_UNIVERSE_DICTIONARY = {"hbar" : 1.0 / (2.0 * np.pi), "ZERO_TOLERANCE" : 1.0E-5} Nitrogen_energy_scale_wavenumbers = 2358.57 Nitrogen_mass_scale_amu = 7.00 #??? Nitrogen_Chi_1_Sigma_g_Plus = {"omega_e_wavenumbers" = Nitrogen_scaling, "omega_e" = 2358.57 / Nitrogen_scaling, "omega_e_chi_e" = 14.324/Nitrogen_scaling, "mu" : 1.0} ``` ```python class UnboundStateIndexError(Exception): def __init__(self): pass class Morse(object): def __init__(self, system_dictionary = DEFAULT_SYSTEM_DICTIONARY, universe_dictionary = DEFAULT_UNIVERSE_DICTIONARY): #define the Universe self.hbar = universe_dictionary["hbar"] self.ZERO_TOLERANCE = universe_dictionary["ZERO_TOLERANCE"] #define the system #terminology taken from Matsumoto and Iwamoto, 1993 self.mu = system_dictionary["reduced_mass"] self.center = system_dictionary["center"] self.r = self.center if "omega_e" not in system_dictionary: self.alpha = system_dictionary["alpha"] self.D = system_dictionary["D"] #Derive Other useful quantities self.omega_e = 2.0 * self.alpha * np.sqrt(self.D / (2.0 * self.mu)) self.chi_e_omega_e = self.alpha**2 * self.hbar / (2.0 * self.mu) else: self.omega_e = system_dictionary["omega_e"] self.chi_e_omega_e = system_dictionary["chi_e_omega_e"] self.alpha = np.sqrt(2.0 * self.mu * self.chi_e_omega_e / self.hbar) self.D = 2.0 * self.mu *(self.omega_e / (2.0 * self.alpha))**2 self.a = np.sqrt(2.0 * self.mu * self.D) / (self.alpha * self.hbar) self.maximum_index = int(np.floor(self.a - .5)) #Harmonic Oscillator Approximation: k = self.potential_energy_gradientSquared(self.r) self.omega_HO = np.sqrt(k / self.mu) self.x0 = np.sqrt( self.hbar / (2.0 * self.omega_HO * self.mu)) #determine the needed spatial parameters: self.index_to_xParams_dictionary = {} for energy_index in range(self.maximum_index + 1): #use the analytically calculated spread of the corresponding HO wavefunction to start guessing the needed spatial parameters HO_spatial_spread = self.x0 * np.sqrt(2 * energy_index + 1) x_min = self.r - 5.0 * 
HO_spatial_spread while np.abs(self.energy_eigenfunction_amplitude(energy_index, x_min)) > self.ZERO_TOLERANCE: x_min += -HO_spatial_spread x_max = self.r + 5.0 * HO_spatial_spread while np.abs(self.energy_eigenfunction_amplitude(energy_index, x_max)) > self.ZERO_TOLERANCE: x_max += HO_spatial_spread keep_integrating = True number_x_points = 10 while keep_integrating: x_vals = np.linspace(x_min, x_max, number_x_points) psi_vals = self.energy_eigenfunction_amplitude(energy_index, x_vals) integral = scipy.integrate.simps(np.conj(psi_vals) * psi_vals, x = x_vals) if np.abs(integral - 1.0) < self.ZERO_TOLERANCE: keep_integrating = False else: number_x_points = number_x_points + 10 self.index_to_xParams_dictionary[energy_index] = (x_min, x_max, number_x_points) #POTENTIAL ENERGY STUFF: def potential_energy(self, x): return -2 * self.D * np.exp(- self.alpha * (x - self.r)) + self.D * np.exp(-2.0 * self.alpha * (x - self.r)) def potential_energy_gradient(self, x): return 2.0 * self.alpha * self.D *(np.exp(- self.alpha * (x - self.r)) - np.exp(-2.0 * self.alpha * (x - self.r))) def potential_energy_gradientSquared(self, x): return 2.0 * self.alpha**2 * self.D *(-np.exp(- self.alpha * (x - self.r)) + 2.0 * np.exp(-2.0 * self.alpha * (x - self.r))) #ENERGY EIGENFUNCTION STUFF: def energy_eigenvalue(self, index): return -self.D + self.hbar * ( self.omega_e *(index + .5) - self.chi_e_omega_e *(index + .5)**2 ) def energy_eigenfunction_amplitude(self, n, x): if n > self.maximum_index: raise UnboundStateIndexError() b_n = self.a - .5 - n N_n = np.sqrt(2.0 * self.alpha * b_n * scipy.misc.factorial(n) / gamma(2 * b_n + n + 1)) z = 2.0 * self.a * np.exp(-self.alpha *(x - self.r)) z_poly = np.power(z, b_n) z_exp = np.exp(-.5 * z) lag_part = genlaguerre(n, 2 * b_n)(z) return N_n * z_poly * z_exp * lag_part class OffsetMorse(object): def __init__(self, ground_morse, excited_morse, universe_dictionary = DEFAULT_UNIVERSE_DICTIONARY): #define the Universe self.hbar = universe_dictionary["hbar"] self.ZERO_TOLERANCE = universe_dictionary["ZERO_TOLERANCE"] #assign variables self.ground_morse = ground_morse self.excited_morse = excited_morse self.franck_condon_factors = np.zeros((self.ground_morse.maximum_index + 1, self.excited_morse.maximum_index + 1)) for ground_index in range(self.ground_morse.maximum_index + 1): ground_xMin, ground_xMax, ground_numPoints = self.ground_morse.index_to_xParams_dictionary[ground_index] for excited_index in range(self.excited_morse.maximum_index + 1): excited_xMin, excited_xMax, excited_numPoints = self.excited_morse.index_to_xParams_dictionary[excited_index] x_min = min([ground_xMin, excited_xMin]) x_max = max([excited_xMax, ground_xMax]) keep_integrating = True n_points = ground_numPoints * excited_numPoints #integrate once x_vals = np.linspace(x_min, x_max, n_points) g_func_vals = self.ground_morse.energy_eigenfunction_amplitude(ground_index, x_vals) e_func_vals = self.excited_morse.energy_eigenfunction_amplitude(excited_index, x_vals) gToE_FCF = scipy.integrate.simps(e_func_vals * np.conj(g_func_vals), x= x_vals) #check to make sure integral is converged while keep_integrating: n_points = n_points * 1.1 x_vals = np.linspace(x_min, x_max, n_points) g_func_vals = self.ground_morse.energy_eigenfunction_amplitude(ground_index, x_vals) e_func_vals = self.excited_morse.energy_eigenfunction_amplitude(excited_index, x_vals) new_integral = scipy.integrate.simps(e_func_vals * np.conj(g_func_vals), x= x_vals) if np.abs((new_integral - gToE_FCF) / new_integral ) < self.ZERO_TOLERANCE: 
keep_integrating = False else: print("NEED MOAR POINTz") self.franck_condon_factors[ground_index, excited_index] = gToE_FCF def stick_absorption_spectrum(self, starting_ground_index): relevant_FCFs = self.franck_condon_factors[starting_ground_index,:] frequency_values = [] ground_energy = self.ground_morse.energy_eigenvalue(starting_ground_index) for excited_index in range(self.excited_morse.maximum_index + 1): energy_gap = self.excited_morse.energy_eigenvalue(excited_index) - ground_energy frequency_values.append(energy_gap / self.hbar) return frequency_values, relevant_FCFs**2 ``` ```python ground = Morse() excited = Morse(system_dictionary=ALTERNATE_SYSTEM_DICTIONARY) test_offsetMorse = OffsetMorse(ground_morse = ground, excited_morse = excited) for i in range(ground.maximum_index + 1): w, I = test_offsetMorse.stick_absorption_spectrum(i) plt.plot(w, np.log(I)) ``` ```python x_vals = np.linspace(-1, 30, 200, dtype=np.complex) for n in range(int(test.max_ground_index) + 1): print("n="+str(n)) f = test.ground_eigenfunction(n, x_vals) plt.plot(x_vals, np.real(f), label=n) print("integral="+str( scipy.integrate.simps(f * np.conj(f) , x= x_vals))) print("\n") # plt.legend(loc=0) plt.figure() for n in range(int(test.max_excited_index) + 1): print("n="+str(n)) f = test.ground_eigenfunction(n, x_vals) plt.plot(x_vals, np.real(f), label=n) print("integral="+str( scipy.integrate.simps(f * np.conj(f) , x= x_vals))) print("\n") # plt.legend(loc=0) ``` ```python ``` ```python ``` ```python ```
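To make the parameter conversions used above concrete, here is a small standalone sketch (mirroring the relations inside `Morse.__init__`) that maps the spectroscopic constants $\omega_e$, $\chi_e\omega_e$ and a reduced mass $\mu$ to the Morse $\alpha$, $D$ and the number of bound levels. The numbers passed in below are placeholders in the same dimensionless units ($\hbar = 1/2\pi$) as the test dictionaries, not real N$_2$ constants.

```python
import numpy as np

hbar = 1.0 / (2.0 * np.pi)   # same convention as DEFAULT_UNIVERSE_DICTIONARY

def morse_parameters(omega_e, chi_e_omega_e, mu, hbar=hbar):
    """Convert spectroscopic constants to Morse alpha, D and bound-state count."""
    alpha = np.sqrt(2.0 * mu * chi_e_omega_e / hbar)   # anharmonicity -> alpha
    D = 2.0 * mu * (omega_e / (2.0 * alpha))**2        # harmonic frequency -> well depth
    a = np.sqrt(2.0 * mu * D) / (alpha * hbar)
    maximum_index = int(np.floor(a - 0.5))             # highest bound vibrational level
    return alpha, D, maximum_index

# placeholder values in the notebook's dimensionless units
morse_parameters(omega_e=2.0, chi_e_omega_e=0.05, mu=1.0)
```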
0f2be000c759e8751f1964475cf86dc53943247a
94,428
ipynb
Jupyter Notebook
DetectingNonCondonPaper/code/.ipynb_checkpoints/Morse_fitting_procedure-checkpoint.ipynb
jgoodknight/dissertation
012ad400e1246d2a7e63cc640be4f7b4bf56db00
[ "MIT" ]
1
2020-04-21T06:20:42.000Z
2020-04-21T06:20:42.000Z
DetectingNonCondonPaper/code/.ipynb_checkpoints/Morse_fitting_procedure-checkpoint.ipynb
jgoodknight/dissertation
012ad400e1246d2a7e63cc640be4f7b4bf56db00
[ "MIT" ]
null
null
null
DetectingNonCondonPaper/code/.ipynb_checkpoints/Morse_fitting_procedure-checkpoint.ipynb
jgoodknight/dissertation
012ad400e1246d2a7e63cc640be4f7b4bf56db00
[ "MIT" ]
null
null
null
203.070968
31,662
0.86737
true
2,769
Qwen/Qwen-72B
1. YES 2. YES
0.803174
0.731059
0.587167
__label__eng_Latn
0.488734
0.202516
```python from resources.workspace import * ``` ### The Gaussian (i.e. Normal) distribution Consider the random variable with a Gaussian distribution with mean $\mu$ (`mu`) and variance $P$. We write its probability density function (**pdf**) as $$ p(x) = N(x|\mu,P) = (2 \pi P)^{-1/2} e^{-(x-\mu)^2/2P} \, . \qquad \qquad (1) $$ --- **Exc 2.2:** Code it up (complete the code below)! Hints: * Note that `**` is the power operator in Python. * As in Matlab, $e^x$ is available as `exp(x)` ```python # Univariate (scalar), Gaussian pdf def pdf_G_1(x,mu,P): # pdf_values = ### INSERT ANSWER HERE ### return pdf_values ``` ```python #show_answer('pdf_G_1') ``` Let's plot the pdf. ```python mu = 0 # mean of distribution P = 25 # variance of distribution P12 = sqrt(P) # std. dev of distribution # Plotting N = 201 # num of grid points xx = linspace(-20,20,N) # grid dx = xx[1]-xx[0] # grid spacing pp = pdf_G_1(xx,mu,P) # pdf values plt.subplot(211) # allocate plot panel plt.plot(xx,pp); # plot ``` This could for example be the pdf of a stochastic noise variable. It could also describe our uncertainty about a parameter (or state), which we model as randomness in the Bayesian paradigm. **Exc 2.4:** Change `P` in the above code, and re-run the cell. Look at the figure. * How does the pdf curve change when you increase P? * Re-set `P=25` and re-run (this is a convienient value for examples) **Exc 2.6:** Recall $p(x)$ from eqn (1). The following are helpful points to remember how it looks. Use pen, paper, and calculus. Hint: it's typically easier to analyse $\log p(x)$ rather than $p(x)$ itself. * Where is the location of the mode (maximum) of the distribution? I.e. where $\frac{d p(x)}{d x} = 0$. * Where is the inflection point? I.e. where $\frac{d^2 p(x)}{d x^2} = 0$. * What is the value of $\frac{d^2 \log p(x)}{d x^2}$ at the mode? #### The multivariate (i.e. vector) case Here's the pdf of the *multivariate* Gaussian: \begin{align} N(x|\mu,P) &= |2 \pi P|^{-\frac{1}{2}} e^{-\frac{1}{2}\|x-\mu\|^2_P} \, , \\\ \end{align} where $|.|$ represents the determinant, and $\|.\|_W$ represents the norm with weighting: $\|x\|^2_W = x^T W^{-1} x$. The following implements this pdf. Take a moment to digest the code; in particular, it should be noted that `pdf_G_m()` can accept multiple `x` vectors at once (assembled into a matrix), whence the `xx` naming convention. ```python from numpy.linalg import det, inv def weighted_norm22(xx,W): # Computes the norm of each row vector of xx, as weighted by W. return np.sum((xx @ inv(W)) * xx, axis=1) def pdf_G_m(xx,mu,P): return 1/sqrt(det(2*pi*P))*exp(-0.5*weighted_norm22(xx-mu,P)) ``` The following code plots the pdf as contour (equi-density) curves. The plot appears in the above figure. ```python def list_2_array(grid): return array([xi.ravel() for xi in grid]).T def square_reshape(X): return X.reshape(int(sqrt(len(X))),-1) grid = np.meshgrid(xx,xx) grid = list_2_array(grid) pp = pdf_G_m(grid, 0, P*array([[1,0.7],[0.7,1]])) pp = square_reshape(pp) plt.subplot(212) plt.contour(xx,xx,pp); plt.axis('equal'); ``` **Exc 2.8:** * Set the correlation to 0. How do the contours look? * Set the correlations to 0.99. How do the contours look? **Exc 2.9:** Go play the [correlation game](http://guessthecorrelation.com/) ### Bayes' rule Bayes' rule is how we do inference. 
For continuous random variables, $x$ and $y$, it reads: $$ p(x|y) = \frac{p(x) \, p(y|x)}{p(y)} \, , \qquad \qquad (2)$$ or, in words: $$ \text{"posterior" (pdf of $x$ given $y$)} \; = \; \frac{\text{"prior" (pdf of $x$)} \; \times \; \text{"likelihood" (pdf of $y$ given $x$)}} {\text{"normalization" (pdf of $y$)}} $$. **Exc 2.10:** Derive Bayes' rule from the definition of [conditional pdf's](https://en.wikipedia.org/wiki/Conditional_probability#Kolmogorov_definition). ```python #show_answer('BR deriv') ``` Computers generally work with discrete, numerical representations of mathematical entities. Numerically, pdfs may be represented by their `values` on a grid, such as `xx` from above. Bayes' rule (2) then consists of *(grid-)point-wise* multiplication, as shown below. ```python def Bayes_rule(prior_values,lklhd_values,dx): pp = prior_values * lklhd_values # pointwise multiplication posterior_values = pp/(sum(pp)*dx) # normalization return posterior_values ``` **Exc 2.12:** Why does `Bayes_rule()` not need the values of the denominator, $p(y)$, as input? ```python #show_answer('BR grid normalization') ``` In fact, since normalization is so simple, we often don't bother to do it until it's strictly necessary. Therefore we often simplify Bayes' rule (2) as $$ p(x|y) \propto p(x) \, p(y|x) \, . \qquad \qquad (3) $$ --- The code below show's Bayes' rule in action. Again, remember that the only thing it's doing is multiplying the prior and likelihood at each gridpoint. Move the sliders with the arrow keys to animate it. ```python %matplotlib inline b = 0 # prior mean B = 1 # prior variance @interact(y=(-10,10,1),log_R=(-2,5,0.5)) def animate_Gaussian_Bayes(y=4.0,log_R=1): R = exp(log_R) prior = lambda x: pdf_G_1(x,b,B) lklhd = lambda x: pdf_G_1(y,x,R) post_vals = Bayes_rule(prior(xx),lklhd(xx),xx[1]-xx[0]) plt.figure(figsize=(10,4)) plt.plot(xx,prior(xx) ,label='prior N(x|0,1)') plt.plot(xx,lklhd(xx) ,label='likelihood N(y|x,R)') plt.plot(xx,post_vals ,label='posterior - pointwise') ### Uncomment this block AFTER doing the exercise ### ### that defines Bayes_rule_Gaussian() ### #mu, P = Bayes_rule_Gaussian(b,B,y,R) #postr = lambda x: pdf_G_1(x,mu,P) #plt.plot(xx,postr(xx),'--',label='posterior - parametric') plt.ylim(ymax=0.6) plt.legend() plt.show() ``` **Exc 2.14:** Answer the following by moving the sliders and seeing what happens. * What happens to the posterior when $R \rightarrow \infty$ ? * What happens to the posterior when $R \rightarrow 0$ ? * Where is the posterior when $R = B$ ? (try moving around $y$) * Does the posterior scale (width) depend on $y$? * Forgetting about its location and scale, what is the shape of the posterior? Does this depend on $R$ or $y$? Can you see why? * Can you see a shortcut to computing this posterior rather than having to do the pointwise multiplication? **Exc 2.15*:** Implement a "uniform" (or "flat" or "box") distribution pdf and call it `pdf_U_1(x,mu,P)`. These <a href="https://en.wikipedia.org/wiki/Uniform_distribution_(continuous)#Moments">formulae</a> for its mean/variance will be useful. In the above animations, replace `pdf_G_1` with your new `pdf_U_1` (both for the prior and likelihood). Assure that everything is working correctly. - Why (in the figure) are the walls of the pdf (ever so slightly) inclined? - What happens when you move the prior and likelihood very far apart? Is the fault of the implementation, or the fundamental assumptions (uniform distribution)? - Re-do Exc 2.14, now with `pdf_U_1`. 
- Now test a Gaussian prior with a uniform likelihood. - Restore `pdf_G_1` (both the prior and likelihood) in the animation (for later use). ```python #show_answer('pdf_U_1') ``` ### Gaussian-Gaussian Bayes The above animation shows Bayes' rule in 1 dimension. Previously, we saw how a Gaussian looks in 2 dimensions. Can you imagine how Bayes' rule looks in 2 dimensions? In higher dimensions, these things get difficult to imagine, let alone visualize. Similarly, the size of the calculations required for Bayes' rule poses a difficulty. Indeed, the following exercise shows that (pointwise) multiplication for all grid points becomes a preposterious notion in high dimensions. **Exc 2.16:** * (a) How many point-multiplications are needed on a grid with $N$ points in $m$ dimensions? (Imagine an $m$-dimensional cube where each side has a grid with $N$ points on it) * (b) Suppose we model 15 physical quanitites, on each grid point, on a discretized model of Earth. Assume the resolution is $1^\circ$ for latitude (110km), $1^\circ$ for longitude. How many variables are there in total? This is the dimensionality ($m$) of the problem. * (c) Suppose each variable is has a pdf represented with a grid using only $N=10$ points. How many multiplications are necessary to calculate Bayes rule (jointly) for all variables on our Earth model? ```python #show_answer('Dimensionality a') #show_answer('Dimensionality b') #show_answer('Dimensionality c') ``` In response to this computational difficulty, we try to be smart and do something more analytical ("pen-and-paper"): we only compute the parameters (mean and (co)variance) of the posterior pdf. This is doable and quite simple in the Gaussian-Gaussian case. With a prior $p(x) = N(x|b,B)$ and a likelihood $p(y|x) = N(y|x,R)$, the posterior will be given by \begin{align} p(x|y) &= N(x|\mu,P) \qquad \qquad (4) \, , \end{align} where, in the univarite (1-dimensional) case: \begin{align} P &= 1/(1/B + 1/R) \, , \qquad \qquad (5) \\\ \mu &= P(b/B + y/R) \, . \qquad \qquad (6) \end{align} #### Exc 2.18 'Gaussian Bayes': Derive the above expressions for $P$ and $\mu$. *Hint: you need eqns (1) and (3).* ```python #show_answer('BR Gauss') ``` **Exc 2.20:** Do some light algebra to show that eqns (5) and (6) can be rewritten as \begin{align} P &= (1-K)B \, , \qquad \qquad (8) \\\ \mu &= b + K (y-b) \qquad \quad (9) \, , \end{align} where $K = B/(B+R)$, which is called the "Kalman gain". **Exc 2.22*:** Consider the formula for $K$ and its role in the previous couple of equations... Why do you think $K$ is called a "gain"? ```python #show_answer('KG 2') ``` **Exc 2.24:** Implement a Gaussian-Gaussian Bayes' rule by completing the code below. ```python def Bayes_rule_Gaussian(b,B,y,R): ### INSERT ANSWER HERE ### return mu,P ``` ```python #show_answer('BR Gauss code') ``` **Exc 2.26:** Then, go back to the animation above, and uncomment the block that makes use of `Bayes_rule_Gaussian()`. Make sure its curve coincides with that which uses pointwise multiplication (i.e. `Bayes_rule()`). **Exc 2.28:** More questions related to the above animation: * Does the width (i.e. scale) for the posterior depend on the location $y$ of the likelihood? * Is the width (i.e. scale) for the posterior always smaller that that of prior and likelihood? What does this mean information-wise? * Do you think this is always the case, also for non-Gaussian distributions? * What if you're pretty sure about something, and you get a wildly different indication (observation). What is your posterior certainty? 
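Before the last exercise, here is a hedged, self-contained numerical check of eqns (5)-(6) and their gain form (8)-(9) against the grid-based pointwise multiplication. Note that it effectively sketches one possible `Bayes_rule_Gaussian`, so skip it if you want to attempt Exc 2.24 unaided.

```python
import numpy as np

def pdf_G_1(x, mu, P):
    return (2*np.pi*P)**-0.5 * np.exp(-(x - mu)**2 / (2*P))

b, B = 0.0, 1.0        # prior mean and variance
y, R = 4.0, 2.0        # observation and its error variance

# closed form, eqns (5)-(6)
P_post  = 1.0 / (1.0/B + 1.0/R)
mu_post = P_post * (b/B + y/R)

# equivalent Kalman-gain form, eqns (8)-(9)
K = B / (B + R)
assert np.isclose(P_post, (1 - K)*B) and np.isclose(mu_post, b + K*(y - b))

# grid-based check, same pointwise multiplication as Bayes_rule()
xx = np.linspace(-20, 20, 2001)
dx = xx[1] - xx[0]
pp = pdf_G_1(xx, b, B) * pdf_G_1(y, xx, R)
pp = pp / (pp.sum()*dx)
print(mu_post, (xx*pp).sum()*dx)                    # posterior means agree
print(P_post, ((xx - mu_post)**2 * pp).sum()*dx)    # posterior variances agree
```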
**Exc 2.30*:** Why are we so fond of the Gaussian assumption? ```python #show_answer('why Gaussian') ``` ### Next: [Univariate (scalar) Kalman filtering](T3 - Univariate Kalman filtering.ipynb)
683d16b911b89bf2f38a76409eb785a893853ca4
17,449
ipynb
Jupyter Notebook
tutorials/T2 - Bayesian inference.ipynb
geirev/DAPPER
c3f448a1912f3869eccdbd86fb24019655efcb4f
[ "MIT" ]
1
2021-02-02T05:56:31.000Z
2021-02-02T05:56:31.000Z
tutorials/T2 - Bayesian inference.ipynb
JIMMY-KSU/DAPPER
c3f448a1912f3869eccdbd86fb24019655efcb4f
[ "MIT" ]
null
null
null
tutorials/T2 - Bayesian inference.ipynb
JIMMY-KSU/DAPPER
c3f448a1912f3869eccdbd86fb24019655efcb4f
[ "MIT" ]
1
2021-02-02T05:56:35.000Z
2021-02-02T05:56:35.000Z
31.214669
411
0.560032
true
3,183
Qwen/Qwen-72B
1. YES 2. YES
0.7773
0.927363
0.720839
__label__eng_Latn
0.980611
0.513083
<a href="https://colab.research.google.com/github/cstorm125/abtestoo/blob/master/notebooks/frequentist_colab.ipynb" target="_parent"></a> # A/B Testing from Scratch: Frequentist Approach Frequentist A/B testing is one of the most used and abused statistical methods in the world. This article starts with a simple problem of comparing two online ads campaigns (or teatments, user interfaces or slot machines). It outlines several useful statistical concepts and how we exploit them to solve our problem. At the end, it acknowledges some common pitfalls we face when doing a frequentist A/B test and proposes some possible solutions to a more robust A/B testing. Readers are encouraged to tinker with the widgets provided in order to explore the impacts of each parameter. Thanks to [korakot](https://github.com/korakot) for notebook conversion to Colab. ```python #depedencies for colab %%capture !pip install plotnine ``` ```python import numpy as np import pandas as pd from typing import Collection, Tuple #widgets เอาออก เปลี่ยนไปใช้ colab form แทน #from ipywidgets import interact, interactive, fixed, interact_manual #import ipywidgets as widgets # from IPython.display import display #plots import matplotlib.pyplot as plt from plotnine import * #stats import scipy as sp #suppress annoying warning prints import warnings warnings.filterwarnings('ignore') ``` ## Start with A Problem A typical situation marketers (research physicians, UX researchers, or gamblers) find themselves in is that they have two variations of ads (treatments, user interfaces, or slot machines) and want to find out which one has the better performance in the long run. Practitioners know this as A/B testing and statisticians as **hypothesis testing**. Consider the following problem. We are running an online ads campaign `A` for a period of time, but now we think a new ads variation might work better so we run an experiemnt by dividing our audience in half: one sees the existing campaign `A` whereas the other sees a new campaign `B`. Our performance metric is conversion (sales) per click (ignore [ads attribution problem](https://support.google.com/analytics/answer/1662518) for now). After the experiment ran for two months, we obtain daily clicks and conversions of each campaign and determine which campaign has the better performance. We simulate the aforementioned problem with both campaigns getting randomly about a thousand clicks per day. The secrete we will pretend to not know is that hypothetical campaign `B` has slightly better conversion rate than `A` in the long run. With this synthetic data, we will explore some useful statistical concepts and exploit them for our frequentist A/B testing. 
```python def gen_bernoulli_campaign(p1: float, p2: float, lmh: Collection = [500, 1000, 1500], timesteps: int = 60, scaler: float = 300, seed: int = 1412) -> pd.DataFrame: ''' :meth: generate fake impression-conversion campaign based on specified parameters :param float p1: true conversion rate of group 1 :param float p2: true conversion rate of group 2 :param Collection lmh: low-, mid-, and high-points for the triangular distribution of clicks :param int nb_days: number of timesteps the campaigns run for :param float scaler: scaler for Gaussian noise :param int seed: seed for Gaussian noise :return: dataframe containing campaign results ''' np.random.seed(seed) ns = np.random.triangular(*lmh, size=timesteps * 2).astype(int) np.random.seed(seed) es = np.random.randn(timesteps * 2) / scaler n1 = ns[:timesteps] c1 = ((p1 + es[:timesteps]) * n1).astype(int) n2 = ns[timesteps:] c2 = ((p2 + es[timesteps:]) * n2).astype(int) result = pd.DataFrame({'timesteps': range(timesteps), 'impression_a': n1, 'conv_a': c1, 'impression_b': n2, 'conv_b': c2}) result = result[['timesteps', 'impression_a', 'impression_b', 'conv_a', 'conv_b']] result['cumu_impression_a'] = result.impression_a.cumsum() result['cumu_impression_b'] = result.impression_b.cumsum() result['cumu_conv_a'] = result.conv_a.cumsum() result['cumu_conv_b'] = result.conv_b.cumsum() result['cumu_rate_a'] = result.cumu_conv_a / result.cumu_impression_a result['cumu_rate_b'] = result.cumu_conv_b / result.cumu_impression_b return result conv_days = gen_bernoulli_campaign(p1 = 0.10, p2 = 0.105, timesteps = 60, scaler=300, seed = 1412) #god-mode conv_days.columns = [i.replace('impression','click') for i in conv_days.columns] #function uses impressions but we use clicks conv_days.head() ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>timesteps</th> <th>click_a</th> <th>click_b</th> <th>conv_a</th> <th>conv_b</th> <th>cumu_click_a</th> <th>cumu_click_b</th> <th>cumu_conv_a</th> <th>cumu_conv_b</th> <th>cumu_rate_a</th> <th>cumu_rate_b</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>0</td> <td>1254</td> <td>1007</td> <td>126</td> <td>105</td> <td>1254</td> <td>1007</td> <td>126</td> <td>105</td> <td>0.100478</td> <td>0.104270</td> </tr> <tr> <th>1</th> <td>1</td> <td>1147</td> <td>549</td> <td>116</td> <td>60</td> <td>2401</td> <td>1556</td> <td>242</td> <td>165</td> <td>0.100791</td> <td>0.106041</td> </tr> <tr> <th>2</th> <td>2</td> <td>678</td> <td>955</td> <td>67</td> <td>98</td> <td>3079</td> <td>2511</td> <td>309</td> <td>263</td> <td>0.100357</td> <td>0.104739</td> </tr> <tr> <th>3</th> <td>3</td> <td>968</td> <td>764</td> <td>94</td> <td>82</td> <td>4047</td> <td>3275</td> <td>403</td> <td>345</td> <td>0.099580</td> <td>0.105344</td> </tr> <tr> <th>4</th> <td>4</td> <td>899</td> <td>969</td> <td>93</td> <td>99</td> <td>4946</td> <td>4244</td> <td>496</td> <td>444</td> <td>0.100283</td> <td>0.104618</td> </tr> </tbody> </table> </div> ```python rates_df = conv_days[['timesteps','cumu_rate_a','cumu_rate_b']].melt(id_vars='timesteps') g = (ggplot(rates_df, aes(x='timesteps', y='value', color='variable')) + geom_line() + theme_minimal() + xlab('Days of Experiment Run') + ylab('Cumulative Conversions / Cumulative Clicks')) g ``` ```python #sum after 2 months conv_df = pd.DataFrame({'campaign_id':['A','B'], 
'clicks':[conv_days.click_a.sum(),conv_days.click_b.sum()], 'conv_cnt':[conv_days.conv_a.sum(),conv_days.conv_b.sum()]}) conv_df['conv_per'] = conv_df['conv_cnt'] / conv_df['clicks'] conv_df ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>campaign_id</th> <th>clicks</th> <th>conv_cnt</th> <th>conv_per</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>A</td> <td>59504</td> <td>5890</td> <td>0.098985</td> </tr> <tr> <th>1</th> <td>B</td> <td>58944</td> <td>6111</td> <td>0.103675</td> </tr> </tbody> </table> </div> ## Random Variables and Probability Distributions Take a step back and think about the numbers we consider in our daily routines, whether it is conversion rate of an ads campaign, the relative risk of a patient group, or sales and revenues of a shop during a given period of time. From our perspective, they have one thing in common: **we do not know exactly how they come to be**. In fact, we would not need an A/B test if we do. For instance, if we know for certain that conversion rate of an ads campaign will be `0.05 + 0.001 * number of letters in the ads`, we can tell exactly which ads to run: the one with the highest number of letters in it. With our lack of knowledge, we do the next best thing and assume that our numbers are generated by some mathematical formula, calling them **random variables**. For instance, we might think of the probability of a click converting the same way as a coin-flip event, with the probability of converting as $p$ (say 0.1) and not converting as $1-p$ (thus 0.9). With this, we can simulate the event aka click conversion for as many times as we want: ```python def bernoulli(n,p): flips = np.random.choice([0,1], size=n, p=[1-p,p]) flips_df = pd.DataFrame(flips) flips_df.columns = ['conv_flag'] g = (ggplot(flips_df,aes(x='factor(conv_flag)')) + geom_bar(aes(y = '(..count..)/sum(..count..)')) + theme_minimal() + xlab('Conversion Flag') + ylab('Percentage of Occurence') + geom_hline(yintercept=p, colour='red') + ggtitle(f'Distribution after {n} Trials')) g.draw() print(f'Expectation: {p}\nVariance: {p*(1-p)}') print(f'Sample Mean: {np.mean(flips)}\nSample Variance: {np.var(flips)}') # ใช้ colab form แทน interact #interact(bernoulli, n=widgets.IntSlider(min=1,max=500,step=1,value=20), # p=widgets.FloatSlider(min=0.1,max=0.9)) ``` ```python #@title {run: "auto"} n = 20 #@param {type:"slider", min:1, max:500, step:1} p = 0.1 #@param {type:"slider", min:0.1, max:0.9, step:0.1} bernoulli(n, p) ``` **Probability distribution** is represented with the values of a random variable we are interested in the X-axis, and the chance of them appearing after a number of trials in the Y-axis. The distribution above is called [Bernoulli Distribution](http://mathworld.wolfram.com/BernoulliDistribution.html), usually used to model hypothetical coin flips and online advertisements. [Other distributions](https://en.wikipedia.org/wiki/List_of_probability_distributions) are used in the same manner for other types of random variables. [Cloudera](https://www.cloudera.com/) provided a [quick review](https://blog.cloudera.com/blog/2015/12/common-probability-distributions-the-data-scientists-crib-sheet/) on a few of them you might find useful. 
## Law of Large Numbers

There are two sets of indicators of a distribution that are especially relevant to our problem: one derived theoretically and another derived from the data we observe. The **Law of Large Numbers (LLN)** describes the relationship between them.

Theoretically, we can derive these values for any distribution:

* **Expectation** of a random variable $X_i$ is its long-run average derived from repetitively sampling $X_i$ from the same distribution. Each distribution requires its own way to obtain the expectation. For our example, it is the weighted average of outcomes $X_i$ ($X_i=1$ converted; $X_i=0$ not converted) and their respective probabilities ($p$ converted; $1-p$ not converted):

\begin{align}
E[X_i] &= \mu = \sum_{i=1}^{k} p_i * X_i \\
&= (1-p)*0 + p*1 \\
&= p
\end{align}

where $k$ is the number of patterns of outcomes

* **Variance** of a random variable $X_i$ represents the expectation of how much $X_i$ deviates from its expectation, for our example formulated as:

\begin{align}
Var(X_i) &= \sigma^2 = E[(X_i-E(X_i))^2] \\
&= E[X_i^2] - E[X_i]^2 \\
&= \{(1-p)*0^2 + p*1^2\} - p^2 \\
&= p(1-p)
\end{align}

Empirically, we can also calculate their counterparts with any amount of data we have on hand:

* **Sample Mean** is simply an average of all $X_i$ we currently have in our sample of size $n$:

\begin{align}
\bar{X} &= \frac{1}{n} \sum_{i=1}^{n} X_i
\end{align}

* **Sample Variance** is the variance based on deviation from the sample mean; the $n-1$ is due to [Bessel's correction](https://en.wikipedia.org/wiki/Bessel%27s_correction#Source_of_bias) (See Appendix):

\begin{align}
s^2 &= \frac{1}{n-1} \sum_{i=1}^{n} (X_i - \bar{X})^2
\end{align}

LLN posits that when we have a large enough number of samples $n$, the sample mean will converge to the expectation. This can be shown with a simple simulation:

```python
def lln(n_max,p):
    mean_flips = []
    var_flips = []
    ns = []
    for n in range(1,n_max):
        flips = np.random.choice([0,1], size=n, p=[1-p,p])
        ns.append(n)
        mean_flips.append(flips.mean())
        var_flips.append(flips.var())
    flips_df = pd.DataFrame({'n':ns,'mean_flips':mean_flips,'var_flips':var_flips}).melt(id_vars='n')
    g = (ggplot(flips_df,aes(x='n',y='value',colour='variable')) + geom_line() +
        facet_wrap('~variable', ncol=1, scales='free') +
        theme_minimal() + ggtitle(f'Expectation={p:2f}; Variance={p*(1-p):2f}') +
        xlab('Number of Samples') + ylab('Value'))
    g.draw()

# interact(lln, n_max=widgets.IntSlider(min=2,max=10000,step=1,value=1000),
#          p=widgets.FloatSlider(min=0.1,max=0.9))
```

```python
#@title {run: "auto"}
n = 1000 #@param {type:"slider", min:2, max:10000, step:1}
p = 0.1 #@param {type:"slider", min:0.1, max:0.9, step:0.1}
lln(n, p)
```

Notice that even though LLN does not say that the sample variance will also converge to the variance as $n$ grows large, that turns out to be the case as well.
Mathematically, it can be derived as follows:

\begin{align}
s^2 &= \frac{1}{n}\sum_{i=1}^{n}(X_i - \bar{X})^2 \\
&= \frac{1}{n}\sum_{i=1}^{n}(X_i - \mu)^2 \text{; as }n\rightarrow\infty\text{ }\bar{X}\rightarrow\mu\\
&=\frac{1}{n}(\sum_{i=1}^{n}{X_i}^2 - 2\mu\sum_{i=1}^{n}X_i + n\mu^2) \\
&=\frac{\sum_{i=1}^{n}{X_i}^2}{n} - \frac{2\mu\sum_{i=1}^{n}X_i}{n} + \mu^2 \\
&= \frac{\sum_{i=1}^{n}{X_i}^2}{n} - 2\mu\bar{X} + \mu^2\text{; as }\frac{\sum_{i=1}^{n}X_i}{n} = \bar{X}\\
&= \frac{\sum_{i=1}^{n}{X_i}^2}{n} - 2\mu^2 + \mu^2 = \frac{\sum_{i=1}^{n}{X_i}^2}{n} - \mu^2 \text{; as }n\rightarrow\infty\text{ }\bar{X}\rightarrow\mu\\
&= E[{X_i}^2] - E[X_i]^2 = Var(X_i) = \sigma^2
\end{align}

## Central Limit Theorem

Assuming some probability distribution for our random variable also lets us exploit another extremely powerful statistical concept: the **Central Limit Theorem (CLT)**. To see CLT in action, let us simplify our problem a bit and say we are only trying to find out whether a hypothetical ads campaign `C` has a conversion rate of more than 10% or not, assuming the data collected so far say that `C` has 1,000 clicks and 107 conversions.

```python
c_df = pd.DataFrame({'campaign_id':'C','clicks':1000,'conv_cnt':107,'conv_per':0.107},index=[0])
c_df
```

<div>
<style scoped>
    .dataframe tbody tr th:only-of-type {
        vertical-align: middle;
    }

    .dataframe tbody tr th {
        vertical-align: top;
    }

    .dataframe thead th {
        text-align: right;
    }
</style>
<table border="1" class="dataframe">
  <thead>
    <tr style="text-align: right;">
      <th></th>
      <th>campaign_id</th>
      <th>clicks</th>
      <th>conv_cnt</th>
      <th>conv_per</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <th>0</th>
      <td>C</td>
      <td>1000</td>
      <td>107</td>
      <td>0.107</td>
    </tr>
  </tbody>
</table>
</div>

CLT goes as follows:

> If $X_i$ is an independent and identically distributed (i.i.d.) random variable with expectation $\mu$ and variance $\sigma^2$ and $\bar{X_j}$ is the sample mean of $n$ samples of $X_i$ we drew as part of sample group $j$, then when $n$ is large enough, $\bar{X_j}$ will follow a [normal distribution](http://mathworld.wolfram.com/NormalDistribution.html) with expectation $\mu$ and variance $\frac{\sigma^2}{n}$.

It is a mouthful to say and full of weird symbols, so let us break it down line by line.

**If $X_i$ is an independent and identically distributed (i.i.d.) random variable with expectation $\mu$ and variance $\sigma^2$** <br/>In our case, $X_i$ indicates whether click $i$ is converted ($X_i=1$) or not converted ($X_i=0$), with $\mu$ as some probability that represents how likely a click will convert on average. *Independent* means that the probability of each click converting depends only on itself and not on other clicks. *Identically distributed* means that the true probability of each click converting is more or less the same. We need to rely on domain knowledge to verify these assumptions; for example, in online advertisement, we would expect, at least when working with a reputable ads network such as Criteo, that each click comes from independent users, as opposed to, say, a click farm where we would see a lot of clicks behaving the same way by design. Identical distribution is a little more difficult to assume, since the different demographics the ads are shown to will likely react differently and thus might not share the same expectation.
```python ind_df = pd.DataFrame({'iid':[False]*100+[True]*100, 'order': list(range(100)) + list(range(100)), 'conv_flag':[1]*50+ [0]*50+ list(np.random.choice([0,1], size=100))}) g = (ggplot(ind_df,aes(x='order',y='conv_flag',color='iid')) + geom_point() + facet_wrap('~iid') + theme_minimal() + xlab('i-th Click') + ylab('Conversion') + ggtitle('Both plots has conversion rate of 50% but only one is i.i.d.')) g ``` **and $\bar{X_j}$ is the sample mean of $n$ samples of $X_i$ we drew as part of sample group $j$, then**<br/> For campaign `C`, we can think of all the clicks we observed as one sample group, which exists in parallel with an infinite number of sample groups that we have not seen yet but can be drawn from the distribution by additional data collection. This way, we calculate the sample mean as total conversions divided by total number of clicks observed during the campaign. **when $n$ is large enough, $\bar{X_j}$ will follow a [normal distribution](http://mathworld.wolfram.com/NormalDistribution.html) with with expectation $\mu$ and variance $\frac{\sigma^2}{n}$**</br> Here's the kicker: regardless of what distribution each $X_i$ of sample group $j$ is drawn from, as long as you have enough number of sample $n$, the sample mean of that sample group $\bar{X_j}$ will converge to a normal distribution. Try increase $n$ in the plot below and see what happens. ```python def clt(n, dist): n_total = n * 10000 if dist == 'discrete uniform': r = np.random.uniform(size=n_total) elif dist =='bernoulli': r = np.random.choice([0,1],size=n_total,p=[0.9,0.1]) elif dist =='poisson': r = np.random.poisson(size=n_total) else: raise ValueError('Choose distributions that are available') #generate base distribution plot r_df = pd.DataFrame({'r':r}) g1 = (ggplot(r_df, aes(x='r')) + geom_histogram(bins=30) + theme_minimal() + xlab('Values') + ylab('Number of Samples') + ggtitle(f'{dist} distribution where sample groups are drawn from')) g1.draw() #generate sample mean distribution plot normal_distribution = np.random.normal(loc=np.mean(r), scale=np.std(r) / np.sqrt(n), size=10000) sm_df = pd.DataFrame({'sample_means':r.reshape(-1,n).mean(1), 'normal_distribution': normal_distribution}).melt() g2 = (ggplot(sm_df, aes(x='value',fill='variable')) + geom_histogram(bins=30,position='nudge',alpha=0.5) + theme_minimal() + xlab('Sample Means') + ylab('Number of Sample Means') + ggtitle(f'Distribution of 10,000 sample means with size {n}')) g2.draw() dists = ['bernoulli','discrete uniform','poisson'] # interact(clt, n=widgets.IntSlider(min=1,max=100,value=1), # dist = widgets.Dropdown( # options=dists, # value='bernoulli') # ) ``` ```python #@title {run: "auto"} n = 30 #@param {type:"slider", min:1, max:100, step:1} dist = 'bernoulli' #@param ["discrete uniform", "bernoulli", "poisson"] {type:"string"} clt(n, dist) ``` The expectation and variance of the sample mean distribution can be derived as follows: \begin{align} E[\bar{X_j}] &= E[\frac{\sum_{i=1}^{n} X_i}{n}] \\ &= \frac{1}{n} \sum_{i=1}^{n} E[X_i] = \frac{1}{n} \sum_{i=1}^{n} \mu\\ &= \frac{n\mu}{n} = \mu \\ Var(\bar{X_j}) &= Var(\frac{\sum_{i=1}^{n} X_i}{n}) \\ &= \frac{1}{n^2} \sum_{i=1}^{n} Var(X_i) = \frac{1}{n^2} \sum_{i=1}^{n} \sigma^2\\ &= \frac{n\sigma^2}{n^2} = \frac{\sigma^2}{n} \\ \end{align} The fact that we know this specific normal distribution of sample means has expectation $\mu$ and variance $\frac{\sigma^2}{n}$ is especially useful. 
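As a quick numerical sanity check (added here; not part of the original notebook), we can verify that the spread of sample means of $n$ Bernoulli draws matches the $\frac{\sigma^2}{n}$ formula above.

```python
# Compare the theoretical variance of the sample mean, p*(1-p)/n,
# with the empirical variance across many simulated sample groups.
p, n, n_groups = 0.1, 1000, 5000
sample_means = np.random.binomial(n, p, size=n_groups) / n
print(f'theoretical: {p * (1 - p) / n:.6f}')
print(f'empirical  : {sample_means.var():.6f}')
```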
Remember we want to find out whether campaign `C` **in general, not just in any sample group,** has a better conversion rate than 10%. Below is that exact normal distribution based on information from our sample group (1,000 clicks) and the assumption that the conversion rate is 10%:

\begin{align}
E[\bar{X_j}] &= \mu = p\\
&= 0.1 \text{; by our assumption}\\
Var(\bar{X_j}) &= \frac{\sigma^2}{n} = \frac{p*(1-p)}{n}\\
&= \frac{0.1 * (1-0.1)}{1000}\\
&= 0.00009\\
\end{align}

```python
n = c_df.clicks[0]
x_bar = c_df.conv_per[0]
p = 0.1
mu = p; variance = p*(1-p)/n; sigma = (variance)**(0.5)
# mu = 0; variance = 1; sigma = (variance)**(0.5)

x = np.arange(0.05, 0.15, 1e-3)
y = np.array([sp.stats.norm.pdf(i, loc=mu, scale=sigma) for i in x])
sm_df = pd.DataFrame({'x': x, 'y': y, 'crit':[False if i>x_bar else True for i in x]})

g = (ggplot(sm_df, aes(x='x', y='y')) + geom_area() + theme_minimal() +
    xlab('Sample Means') + ylab('Probability Density Function') +
    ggtitle('Sample mean distribution under our assumption'))
g
```

As long as we know the expectation (which we usually do as part of the assumption) and variance (which is more tricky) of the base distribution, we can use this normal distribution to model random variables from *any* distribution. That is, we can model *any* data as long as we can assume their expectation and variance.

## Think Like A ~~Detective~~ Frequentist

In a frequentist perspective, we treat a problem like a criminal prosecution. First, we assume the innocence of the defendant, an assumption often called the **null hypothesis** (in our case, that the conversion rate is *less than or equal to* 10%). Then, we collect the evidence (all clicks and conversions from campaign `C`). After that, we review how *unlikely* it is that we have this evidence assuming the defendant is innocent (by looking at where our sample mean lands on the sample mean distribution). Most frequentist tests are simply saying:

>If we assume that [conversion rate]() of [ads campaign C]() has the long-run [conversion rate]() of less than or equal to [10%](), our results with sample mean [0.107]() or more extreme ones are so unlikely that they happen only [23%]() of the time, calculated by the area of the distribution with higher value than our sample mean.

Note that you can substitute the highlighted parts with any other numbers and statistics you are comparing; for instance, medical trials instead of ads campaigns and relative risks instead of conversion rates.

```python
g = (ggplot(sm_df, aes(x='x', y='y', group='crit')) + geom_area(aes(fill='crit')) + theme_minimal() +
    xlab('Sample Means') + ylab('Probability Density Function') +
    ggtitle('Sample mean distribution under our assumption') +
    guides(fill=guide_legend(title="Conversion Rate < 0.1")))
g
```

Whether 23% is unlikely *beyond reasonable doubt* depends on how much we are willing to tolerate the false positive rate (the percentage of innocent people you are willing to execute). By convention, a lot of practitioners set this to 1-5% depending on their problems; for instance, an experiment in physics may use 1% or less because physical phenomena are highly reproducible, whereas social science may use 5% because human behavior is more variable. This is not to be confused with the **false discovery rate**, which is the probability of our positive predictions turning out to be wrong. The excellent book [Statistics Done Wrong](https://www.statisticsdonewrong.com/p-value.html) covers this topic extensively and is definitely worth checking out (Reinhart, 2015).
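As a quick check of the 23% figure quoted above (a sketch under the same assumptions: a null hypothesis of a 10% conversion rate, 1,000 clicks, and a sample mean of 0.107):

```python
# One-tailed tail probability of observing a sample mean of 0.107 or more
# when the true conversion rate is 0.1 and n = 1000.
p0, n, x_bar = 0.1, 1000, 0.107
se = np.sqrt(p0 * (1 - p0) / n)
print(1 - sp.stats.norm.cdf((x_bar - p0) / se))  # roughly 0.23
```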
This degree of acceptable unlikeliness is called **alpha** and the probability we observe is called **p-value**. We must set alpha as part of the assumption before looking at the data (the law must first state how bad an action is for a person to be executed). ## Transforming A Distribution In the previous example of `C`, we are only interested when the conversion rate is *more than* 10% so we look only beyond the right-hand side of our sample mean (thus called **one-tailed tests**). If we were testing whether the conversion rate is *equal to* 10% or not we would be interested in both sides (thus called **two-tailed tests**). However, it is not straightforward since we have to know the equivalent position of our sample mean on the left-hand side of the distribution. One way to remedy this is to convert the sample mean distribution to a distribution that is symmetrical around zero and has a fixed variance so the value on one side is equivalent to minus that value of the other side. **Standard normal distribution** is the normal distribution with expectation $\mu=0$ and variance $\sigma^2=1$. We convert any normal distribution to a standard normal distribution by: 1. Shift its expectation to zero. This can be done by substracting all values of a distribution by its expectation: \begin{align} E[\bar{X_j}-\mu] &= E[\bar{X_j}]-\mu \\ &= \mu-\mu \\ &= 0 \\ \end{align} 2. Scale its variance to 1. This can be done by dividing all values by square root of its variance called **standard deviation**: \begin{align} Var(\frac{\bar{X_j}}{\sqrt{\sigma^2/n}}) &= \frac{1}{\sigma^2/n}Var(\bar{X_j})\\ &= \frac{\sigma^2/n}{\sigma^2/n}\\ &=1 \end{align} Try shifting and scaling the distribution below with different $m$ and $v$. ```python def shift_normal(m,v): n = c_df.clicks[0] x_bar = c_df.conv_per[0] p = 0.1 mu = p; variance = p*(1-p)/n; sigma = (variance)**(0.5) x = np.arange(0.05, 0.15, 1e-3) y = np.array([sp.stats.norm.pdf(i, loc=mu, scale=sigma) for i in x]) sm_df = pd.DataFrame({'x': x, 'y': y}) #normalize process sm_df['x'] = (sm_df.x - m) / np.sqrt(v) sm_df['y'] = np.array([sp.stats.norm.pdf(i, loc=mu-m, scale=sigma/np.sqrt(v)) for i in sm_df.x]) print(f'Expectation of sample mean: {mu-m}; Variance of sample mean: {variance/v}') g = (ggplot(sm_df, aes(x='x', y='y')) + geom_area() + theme_minimal() + xlab('Sample Means') + ylab('Probability Density Function') + ggtitle('Shifted Normal Distribution of Sample Mean')) g.draw() # interact(shift_normal, # m=widgets.FloatSlider(min=-1e-1,max=1e-1,value=1e-1,step=1e-2), # v=widgets.FloatSlider(min=9e-5,max=9e-3,value=9e-5,step=1e-4, readout_format='.5f')) ``` ```python #@title {run: "auto"} m = 0.1 #@param {type:"slider", min:-1e-1, max:1e-1, step:1e-2} v = 9e-5 #@param {type:"slider", min:9e-5, max:9e-3, step:1e-4} shift_normal(m,v) ``` By shifting and scaling, we can find out where `C`'s sample mean of 0.107 lands on the X-axis of a standard normal distribution: \begin{align} \bar{Z_j} &= \frac{\bar{X_j} - \mu}{\sigma / \sqrt{n}} \\ &= \frac{0.107 - 0.1}{0.3 / \sqrt{1000}} \approx 0.7378648\\ \end{align} With $\bar{Z_j}$ and $-\bar{Z_j}$, we can calculate the probability of falsely rejecting the null hypotheysis, or p-value, as the area in red, summing up to approximately 46%. This is most likely too high a false positive rate anyone is comfortable with (no one believes a pregnancy test that turns out positive for 46% of the people who are not pregnant), so we fail to reject the null hypothesis that conversion rate of `C` is equal to 10%. 
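The ~46% figure can be reproduced with a short check (added here, using the same numbers as the derivation above):

```python
# Two-tailed p-value: probability of a standardized sample mean at least as far
# from zero as our observed Z-value, in either direction.
z = (0.107 - 0.1) / (0.3 / np.sqrt(1000))
print(z, 2 * (1 - sp.stats.norm.cdf(abs(z))))  # roughly 0.74 and 0.46
```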
If someone asks a frequentist for an opinion, they would probably say that they cannot disprove `C` has conversion rate of 10% in the long run. If they were asked to choose an action, they would probably go with the course of action that assumes `C` has a conversion rate of 10%. ```python n = c_df.clicks[0] x_bar = c_df.conv_per[0] p = 0.1; mu = p; variance = p*(1-p)/n; sigma = (variance)**(0.5) x_bar_norm = (x_bar - mu) / sigma def standard_normal(x_bar_norm, legend_title): x_bar_norm = abs(x_bar_norm) x = np.arange(-3, 3, 1e-2) y = np.array([sp.stats.norm.pdf(i, loc=0, scale=1) for i in x]) sm_df = pd.DataFrame({'x': x, 'y': y}) #normalize process sm_df['crit'] = sm_df.x.map(lambda x: False if ((x<-x_bar_norm)|(x>x_bar_norm)) else True) g = (ggplot(sm_df, aes(x='x', y='y',group='crit')) + geom_area(aes(fill='crit')) + theme_minimal() + xlab('Sample Means') + ylab('Probability Density Function') + ggtitle('Standard Normal Distribution of Sample Mean') + guides(fill=guide_legend(title=legend_title))) g.draw() standard_normal(x_bar_norm, "Conversion Rate = 0.1") ``` ## Z-test and More With CLT and standard normal distribution (sometimes called **Z-distribution**), we now have all the tools for one of the most popular and useful statistical hypothesis test, the **Z-test**. In fact we have already done it with the hypothetical campaign `C`. But let us go back to our original problem of comparing the long-run conversion rates of `A` and `B`. Let our null hypothesis be that they are equal to each other and alpha be 0.05 (we are comfortable with false positive rate of 5%). ```python conv_df ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>campaign_id</th> <th>clicks</th> <th>conv_cnt</th> <th>conv_per</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>A</td> <td>59504</td> <td>5890</td> <td>0.098985</td> </tr> <tr> <th>1</th> <td>B</td> <td>58944</td> <td>6111</td> <td>0.103675</td> </tr> </tbody> </table> </div> We already know how to compare a random variable to a fixed value, but now we have two random variables from two ads campaign. We get around this by comparing **the difference of their sample mean** $\bar{X_\Delta} = \bar{X_{A}} - \bar{X_{B}}$ to 0. This way, our null hypothesis states that there is no difference between the long-run conversion rates of these campaigns. Through another useful statistical concept, we also know that the variance of $\bar{X_\Delta}$ is the sum of sample mean variances of $\bar{X_\text{A}}$ and $\bar{X_\text{B}}$ (Normal Sum Theorem; [Lemon, 2002](https://www.goodreads.com/book/show/3415974-an-introduction-to-stochastic-processes-in-physics)). Thus, we can calculate the **test statistic** or, specifically for Z-test, **Z-value** as follows: \begin{align} \bar{Z_\Delta} &= \frac{\bar{X_\Delta}-\mu}{\sqrt{\frac{\sigma^2_\text{A}}{n_\text{A}} + \frac{\sigma^2_\text{B}}{n_\text{B}}}} \\ &= \frac{\bar{X_\Delta}-\mu}{\sqrt{\sigma^2_\text{pooled} * (\frac{1}{n_\text{A}} + \frac{1}{n_\text{B}})}} \end{align} Since we are assuming that `A` and `B` has the same conversion rate, their variance is also assumed to be the same: $$\sigma^2_{A} = \sigma^2_{B} = \sigma_\text{pooled} = p * (1-p)$$ where $p$ is the total conversions of both campaigns divided by their clicks (**pooled probability**). 
In light of the Z-value calculated from our data, we found that p-value of rejecting the null hypothesis that conversion rates of `A` and `B` are equal to each other is less than 3%, lower than our acceptable false positive rate of 5%, so we reject the null hypothesis that they perform equally well. The result of the test is **statistically significant**; that is, it is unlikely enough for us given the null hypothesis. ```python def proportion_test(c1: int, c2: int, n1: int, n2: int, mode: str = 'one_sided') -> Tuple[float, float]: ''' :meth: Z-test for difference in proportion :param int c1: conversions for group 1 :param int c2: conversions for group 2 :param int n1: impressions for group 1 :param int n2: impressions for group 2 :param str mode: mode of test; `one_sided` or `two_sided` :return: Z-score, p-value ''' p = (c1 + c2) / (n1 + n2) p1 = c1 / n1 p2 = c2 / n2 z = (p1 - p2) / np.sqrt(p * (1 - p) * (1 / n1 + 1 / n2)) if mode == 'two_sided': p = 2 * (1 - sp.stats.norm.cdf(abs(z))) elif mode == 'one_sided': p = 1 - sp.stats.norm.cdf(abs(z)) else: raise ValueError('Available modes are `one_sided` and `two_sided`') return z, p z_value, p_value = proportion_test(c1=conv_df.conv_cnt[0], c2=conv_df.conv_cnt[1], n1=conv_df.clicks[0], n2=conv_df.clicks[1], mode='two_sided') print(f'Z-value: {z_value}; p-value: {p_value}') standard_normal(z_value, "No Difference in Conversion Rates") ``` This rationale extends beyond comparing proportions such as conversion rates. For instance, we can also compare revenues of two different stores, assuming they are i.i.d. However in this case, we do not know the variance of the base distribution $\sigma^2$, as it cannot be derived from our assumption (variance of Bernoulli distribution is $p*(1-p)$ but store revenues are not modelled after a coin flip). The test statistic then is created with sample variance $s^2$ based on our sample group and follows a slightly modified version of standard normal distribution (see [Student's t-test](https://en.wikipedia.org/wiki/Student%27s_t-test)). Your test statistics and sample mean distributions may change, but bottom line of frequentist A/B test is exploiting CLT and frequentist reasoning. ## Confidence Intervals Notice that we can calculate p-value from Z-value and vice versa. This gives us another canny way to look at the problem; that is, we can calculate the intervals where there is an arbitrary probability, say 95%, that sample mean of `A` or `B` will fall into. We call it **confidence interval**. You can see that despite us rejecting the null hypothesis that their difference is zero, the confidence intervals of both campaigns can still overlap. Try changing the number of conversion rate and clicks of each group as well as the alpha to see what changes in terms of p-value of Z-test and confidence intervals. You will see that the sample mean distribution gets "wider" as we have fewer samples in a group. Intuitively, this makes sense because the fewer clicks you have collected, the less information you have about true performance of an ads campaign and less confident you are about where it should be. So when designing an A/B test, you should plan to have similar number of sample between both sample groups in order to have similarly distributed sample means. 
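Before the interactive plot below, here is a minimal sketch of the same 95% confidence intervals computed directly from `conv_df`, using $\hat{p} \pm z_{0.975}\sqrt{\hat{p}(1-\hat{p})/n}$:

```python
# Normal-approximation confidence intervals for each campaign's conversion rate.
z = sp.stats.norm.ppf(0.975)
for _, row in conv_df.iterrows():
    p_hat = row.conv_per
    se = np.sqrt(p_hat * (1 - p_hat) / row.clicks)
    print(row.campaign_id, (p_hat - z * se, p_hat + z * se))
```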
```python def proportion_plot(c1: int, c2: int, n1: int, n2: int, alpha: float = 0.05, mode: str = 'one_sided') -> None: ''' :meth: plot Z-test for difference in proportion and confidence intervals for each campaign :param int c1: conversions for group 1 :param int c2: conversions for group 2 :param int n1: impressions for group 1 :param int n2: impressions for group 2 :param float alpha: alpha :param str mode: mode of test; `one_sided` or `two_sided` :return: None ''' p = (c1 + c2) / (n1 + n2) p1 = c1 / n1 p2 = c2 / n2 se1 = np.sqrt(p1 * (1 - p1) / n1) se2 = np.sqrt(p2 * (1 - p2) / n2) z = sp.stats.norm.ppf(1 - alpha / 2) x1 = np.arange(p1 - 3 * se1, p1 + 3 * se1, 1e-4) x2 = np.arange(p2 - 3 * se2, p2 + 3 * se2, 1e-4) y1 = np.array([sp.stats.norm.pdf(i, loc=p1, scale=np.sqrt(p1 * (1 - p1) / n1)) for i in x1]) y2 = np.array([sp.stats.norm.pdf(i, loc=p2, scale=np.sqrt(p2 * (1 - p2) / n2)) for i in x2]) sm_df = pd.DataFrame({'campaign_id': ['Campaign A'] * len(x1) + ['Campaign B'] * len(x2), 'x': np.concatenate([x1, x2]), 'y': np.concatenate([y1, y2])}) z_value, p_value = proportion_test(c1, c2, n1, n2, mode) print(f'Z-value: {z_value}; p-value: {p_value}') g = (ggplot(sm_df, aes(x='x', y='y', fill='campaign_id')) + geom_area(alpha=0.5) + theme_minimal() + xlab('Sample Mean Distribution of Each Campaign') + ylab('Probability Density Function') + geom_vline(xintercept=[p1 + se1 * z, p1 - se1 * z], colour='red') + geom_vline(xintercept=[p2+se2*z, p2-se2*z], colour='blue') + ggtitle(f'Confident Intervals at alpha={alpha}')) g.draw() # interact(ci_plot, # p1 = widgets.FloatSlider(min=0,max=1,value=conv_df.conv_cnt[0] / conv_df.clicks[0], # step=1e-3,readout_format='.5f'), # p2 = widgets.FloatSlider(min=0,max=1,value=conv_df.conv_cnt[1] / conv_df.clicks[1], # step=1e-3,readout_format='.5f'), # n1 = widgets.IntSlider(min=10,max=70000,value=conv_df.clicks[0]), # n2 = widgets.IntSlider(min=10,max=70000,value=conv_df.clicks[1]), # alpha = widgets.FloatSlider(min=0,max=1,value=0.05)) ``` ```python conv_df.clicks[0], conv_df.clicks[1] ``` (59504, 58944) ```python #@title {run: "auto"} c1 = 5950 #@param {type:"slider", min:0, max:70000} c2 = 6189 #@param {type:"slider", min:0, max:70000} n1 = 59504 #@param {type:"slider", min:10, max:70000, step:10} n2 = 58944 #@param {type:"slider", min:10, max:70000, step:10} alpha = 0.05 #@param {type:"slider", min:0, max:1, step:1e-3} proportion_plot(c1,c2,n1,n2,alpha) ``` ## Any Hypothesis Test Is Statistically Significant with Enough Samples Because we generated the data, we know that conversion rate of campaign `A` (10%) is about 95% that of campaign `B` (10.5%). If we go with our gut feeling, most of us would say that they are practically the same; yet, our Z-test told us that they are different. The reason for this becomes apparent graphically when we decrease the number of clicks for both campaigns in the plot above. The Z-test stops becoming significant when both campaigns have about 50,000 clicks each, even though they still have exactly the same conversion rate. The culprit is our Z-value calculated as: \begin{align} \bar{Z_\Delta} &= \frac{\bar{X_\Delta}-\mu}{\sqrt{\sigma^2_\text{pooled} * (\frac{1}{n_\text{A}} + \frac{1}{n_\text{B}})}} \end{align} Notice number of clicks $n_\text{A}$ and $n_\text{B}$ hiding in the denominator. Our test statistics $\bar{Z_\Delta}$ will go infinitely higher as long as we collect more clicks. 
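To see this numerically, here is a small illustration (added for emphasis) that keeps the observed conversion rates fixed at 0.100 vs 0.105 and only grows the sample size; the p-value shrinks toward zero purely because $n$ grows.

```python
# Same observed conversion rates, increasing sample sizes.
for n in [1_000, 10_000, 100_000, 1_000_000]:
    z, p = proportion_test(int(0.100 * n), int(0.105 * n), n, n, mode='two_sided')
    print(f'n={n:>9,}  z={z:7.2f}  p-value={p:.4f}')
```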
If both campaigns `A` and `B` have one million clicks each, the difference of as small as 0.1% will be detected as statistically significant. Try adjusting the probabilities $p1$ and $p2$ in the plot below and see if the area of statistical significance expands or contracts as the difference between the two numbers changes. ```python def significance_plot(p1,p2): n1s = pd.DataFrame({'n1':[10**i for i in range(1,7)],'k':0}) n2s = pd.DataFrame({'n2':[10**i for i in range(1,7)],'k':0}) ns = pd.merge(n1s,n2s,how='outer').drop('k',1) ns['p_value'] = ns.apply(lambda row: proportion_test(p1*row['n1'], p2*row['n2'],row['n1'],row['n2'])[1], 1) g = (ggplot(ns,aes(x='factor(n1)',y='factor(n2)',fill='p_value')) + geom_tile(aes(width=.95, height=.95)) + geom_text(aes(label='round(p_value,3)'), size=10)+ theme_minimal() + xlab('Number of Samples in A') + ylab('Number of Samples in B') + guides(fill=guide_legend(title="p-value"))) g.draw() # interact(significance_plot, # p1 = widgets.FloatSlider(min=0,max=1,value=conv_df.conv_cnt[0] / conv_df.clicks[0], # step=1e-3,readout_format='.5f'), # p2 = widgets.FloatSlider(min=0,max=1,value=conv_df.conv_cnt[1] / conv_df.clicks[1], # step=1e-3,readout_format='.5f')) ``` ```python #@title {run: "auto"} p1 = 0.09898494218876042 #@param {type:"slider", min:0, max:1, step:1e-3} p2 = 0.10367467426710097 #@param {type:"slider", min:0, max:1, step:1e-3} significance_plot(p1,p2) ``` More practically, look at cumulative conversion rates and z-values of `A` and `B` on a daily basis. Every day that we check the results based on cumulative clicks and conversions, we will come up with a different test statistic and p-value. Difference in conversion rates seem to stabilize after 20 days; however, notice that if you stop the test at day 25 or so, you would say it is NOT statistically significant, whereas if you wait a little longer, you will get the opposite result. The only thing that changes as time goes on is that we have more samples. ```python g = (ggplot(rates_df, aes(x='timesteps', y='value', color='variable')) + geom_line() + theme_minimal() + xlab('Days of Experiment Run') + ylab('Cumulative Conversions / Cumulative Clicks')) g ``` ```python #test conv_days['cumu_z_value'] = conv_days.apply(lambda row: proportion_test(row['cumu_conv_a'], row['cumu_conv_b'],row['cumu_click_a'], row['cumu_click_b'], mode='two_sided')[0],1) conv_days['cumu_p_value'] = conv_days.apply(lambda row: proportion_test(row['cumu_conv_a'], row['cumu_conv_b'],row['cumu_click_a'], row['cumu_click_b'], mode='two_sided')[1],1) #plot g = (ggplot(conv_days, aes(x='timesteps',y='cumu_z_value',color='cumu_p_value')) + geom_line() + theme_minimal() + xlab('Days of Campaign') + ylab('Z-value Calculated By Cumulative Data') + geom_hline(yintercept=[sp.stats.norm.ppf(0.95),sp.stats.norm.ppf(0.05)], color=['red','green']) + annotate("text", label = "Above this line A is better than B", x = 20, y = 2, color = 'red') + annotate("text", label = "Below this line B is better than A", x = 20, y = -2, color = 'green')) g ``` ## Minimum Detectable Effect and Required Sample Size We argue that this too-big-to-fail phenomena among sample groups is especially dangerous in the context of today's "big data" society. Gone are the days where statistical tests are done among two control groups of 100 people each using paper survey forms. 
Now companies are performing A/B testing between ad variations that could have tens of thousands or more samples (impressions or clicks), and potentially all of them will be "statistically significant". One way to remedy this is to do what frequentists do best: make more assumptions, more specifically **two** more.

First, if we want to find out whether `B` has *better* conversion than `A`, we do not only state the null hypothesis (that `B` performs worse than or equally well as `A`) but also specify **minimally by how much**. We can set the **minimum detectable effect** as the smallest possible difference that would be worth investing the time and money in one campaign over the other; let us say that from experience we think it is 1%. We then ask:

> What is the minimum number of samples in a sample group (clicks in a campaign) that we should have in order to reject the null hypothesis when the difference in sample means is [1%]()?

The required number of samples in each group, $n$ and $mn$ (where $m$ is a multiplier), in order for the test to reject a minimum detectable effect $\text{MDE}$ at a certain alpha is:

\begin{align}
Z_{\alpha} &= \frac{\text{MDE}-\mu}{\sqrt{\sigma^2 * (\frac{1}{n} + \frac{1}{mn})}} \\
\frac{(m+1)\sigma^2}{mn} &= (\frac{\text{MDE}}{Z_{\alpha}})^2 \\
n &= \frac{m+1}{m}(\frac{Z_{\alpha} \sigma}{\text{MDE}})^2 \\
n &= 2(\frac{Z_{\alpha} \sigma}{\text{MDE}})^2; m=1
\end{align}

Second, we make yet another crucial assumption about **the variance $\sigma^2$ we expect**. Remember we used to estimate the variance by using the pooled probability of our sample groups, but here we have not even started the experiment. In a conventional A/B testing scenario, we are testing whether an experimental variation is better than the existing one, so one choice is **using the sample variance of a campaign you are currently running**; for instance, if `A` is our current ads and we want to know if we should change to `B`, then we will use the conversion rate of `A` from a past time period to calculate the variance, say 10%.

Let us go back in time before we even started our 2-month-long test between campaigns `A` and `B`. Now we assume not only an acceptable false positive rate alpha of 0.05 but also a minimum detectable effect of 1% and an expected variance of $\sigma^2 = 0.1 * (1-0.1) = 0.09$, then we calculate the minimum number of samples we should collect for each campaign. You can see that had we done that, we would not have been able to reject the null hypothesis, and we would have stuck with campaign `A` going forward. The upside is that now we only have to run the test for about 5 days instead of 60 days, assuming every day is the same for the campaigns (no peak traffic on weekends, for instance). The downside is that our null hypothesis gets much more specific, with not only one but three assumptions:

* Long-run conversion rate of `B` is no better than `A`'s
* The difference that will matter to us is at least 1%
* The expected variance of conversion rates is $\sigma^2 = 0.1 * (1-0.1) = 0.09$

This fits many A/B testing scenarios, since we might not want to change to a new variation even though it is better, but not so much better that we are willing to invest our time and money to change our current setup. Try adjusting $\text{MDE}$ and $\sigma$ in the plot below and see how the number of required samples changes.
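First, as a quick hand check of the formula (a sketch under the assumptions just stated: one-sided $\alpha=0.05$, $\sigma^2 = 0.1 \times 0.9$, $\text{MDE}=0.01$, $m=1$):

```python
# n = 2 * (Z_alpha * sigma / MDE)^2
z_alpha = sp.stats.norm.ppf(1 - 0.05)
sigma = np.sqrt(0.1 * (1 - 0.1))
mde = 0.01
print(2 * (z_alpha * sigma / mde)**2)  # roughly 4,900 clicks per campaign, i.e. ~5 days at ~1,000 clicks/day
```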
```python def proportion_samples(mde: float, p: float, m: float = 1, alpha: float = 0.05, mode: str = 'one_sided') -> float: ''' :meth: get number of required sample based on minimum detectable difference (in absolute terms) :param float mde: minimum detectable difference :param float p: pooled probability of both groups :param float m: multiplier of number of samples; groups are n and nm :param float alpha: alpha :param str mode: mode of test; `one_sided` or `two_sided` :return: estimated number of samples to get significance ''' variance = p * (1 - p) if mode == 'two_sided': z = sp.stats.norm.ppf(1 - alpha / 2) elif mode == 'one_sided': z = sp.stats.norm.ppf(1 - alpha) else: raise ValueError('Available modes are `one_sided` and `two_sided`') return (m + 1 / m) * variance * (z / mde)**2 def plot_proportion_samples(mde, p, m=1, alpha=0.05, mode='one_sided'): minimum_samples = proportion_samples(mde, p,m, alpha, mode) g = (ggplot(conv_days, aes(x='cumu_click_a',y='cumu_z_value',color='cumu_p_value')) + geom_line() + theme_minimal() + xlab('Number of Samples per Campaign') + ylab('Z-value Calculated By Cumulative Data') + geom_hline(yintercept=[sp.stats.norm.ppf(0.95),sp.stats.norm.ppf(0.05)], color=['red','green']) + annotate("text", label = "Above this line A is better than B", x = 30000, y = 2, color = 'red') + annotate("text", label = "Below this line B is better than A", x = 30000, y = -2, color = 'green') + annotate("text", label = f'Minimum required samples at MDE {mde}={int(minimum_samples)}', x = 30000, y = 0,) + geom_vline(xintercept=minimum_samples)) g.draw() ``` ```python #@title {run: "auto"} mde = 0.01 #@param {type:"slider", min:0.001, max:0.01, step:1e-3} p = 0.1 #@param {type:"slider", min:0, max:1, step:1e-3} m = 1 #@param {type:"slider", min:0, max:1, step:1e-1} p_value = 0.05 #@param {type:"slider", min:0.01, max:0.1, step:1e-3} mode = 'one_sided' #@param ['one_sided','two_sided'] {type:"string"} plot_proportion_samples(mde, p, m, alpha, mode) ``` ## You Will Get A Statistically Significant Result If You Try Enough Times The concept p-value represents is false positive rate of our test, that is, how unlikely it is to observe our sample groups given that they do not have different conversion rates in the long run. Let us re-simulate our campaigns `A` and `B` to have equal expectation of 10%. If we apply our current method, we can be comfortably sure we will not get statistical significance (unless we have an extremely large number of samples). 
```python conv_days = gen_bernoulli_campaign(p1 = 0.10, p2 = 0.10, timesteps = 60, scaler=100, seed = 1412) #god-mode conv_days.columns = [i.replace('impression','click') for i in conv_days.columns] #function uses impressions but we use clicks conv_days['cumu_z_value'] = conv_days.apply(lambda row: proportion_test(row['cumu_conv_a'], row['cumu_conv_b'],row['cumu_click_a'], row['cumu_click_b'], mode='two_sided')[0],1) conv_days['cumu_p_value'] = conv_days.apply(lambda row: proportion_test(row['cumu_conv_a'], row['cumu_conv_b'],row['cumu_click_a'], row['cumu_click_b'], mode='two_sided')[1],1) conv_days['z_value'] = conv_days.apply(lambda row: proportion_test(row['conv_a'], row['conv_b'],row['click_a'], row['click_b'], mode='two_sided')[0],1) conv_days['p_value'] = conv_days.apply(lambda row: proportion_test(row['conv_a'], row['conv_b'],row['click_a'], row['click_b'], mode='two_sided')[1],1) g = (ggplot(conv_days, aes(x='timesteps',y='cumu_z_value',color='cumu_p_value')) + geom_line() + theme_minimal() + xlab('Days in Campaign') + ylab('Z-value Calculated By Cumulative Data') + geom_hline(yintercept=[sp.stats.norm.ppf(0.975),sp.stats.norm.ppf(0.025)], color=['red','red'])) g ``` Another approach is instead of doing the test only once, we **do it every day using clicks and conversions of that day alone**. We will have 60 tests where 3 of them give statistically significant results that `A` and `B` have different conversion rates in the long run. The fact that we have exactly 5% of the tests turning positive despite knowing that none of them should is not a coincidence. The Z-value is calculated based on alpha of 5%, which means even if there is no difference at 5% of the time we perform this test with this specific set of assumptions we will still have a positive result ([Obligatory relevant xkcd strip](https://xkcd.com/882/); Munroe, n.d.). ```python g = (ggplot(conv_days, aes(x='timesteps',y='z_value',color='p_value')) + geom_line() + theme_minimal() + xlab('Each Day in Campaign') + ylab('Z-value Calculated By Daily Data') + geom_hline(yintercept=[sp.stats.norm.ppf(0.975),sp.stats.norm.ppf(0.025)], color=['red','red']) + ggtitle(f'We Have {(conv_days.p_value<0.05).sum()} False Positives Out of {conv_days.shape[0]} Days ({100*(conv_days.p_value<0.05).sum()/conv_days.shape[0]}%)')) g ``` Not many people will test online ads campaigns based on daily data, but many researchers perform repeated experiments and by necessity repeated A/B tests as shown above. If you have a reason to believe that sample groups from different experiments have the same distribution, you might consider grouping them together and perform one large test as usual. Otherwise, you can tinker the assumption of how much false positive you can tolerate. One such approach, among [others](https://en.wikipedia.org/wiki/Multiple_comparisons_problem), is the [Bonferroni correction](http://mathworld.wolfram.com/BonferroniCorrection.html). It scales your alpha down by the number of tests you perform to make sure that your false positive rate stays at most your original alpha. In our case, if we cale our alpha as$\alpha_{\text{new}}=\frac{0.05}{60} \approx 0.0008$, we will have the following statistically non-significant results. 
```python g = (ggplot(conv_days, aes(x='timesteps',y='z_value',color='p_value')) + geom_line() + theme_minimal() + xlab('Each Day in Campaign') + ylab('Z-value Calculated By Daily Data') + geom_hline(yintercept=[sp.stats.norm.ppf(1-0.0008/2),sp.stats.norm.ppf(0.0008/2)], color=['red','red']) + ggtitle(f'We Have {(conv_days.p_value<0.05).sum()} False Positives Out of {conv_days.shape[0]} Days ({100*(conv_days.p_value<0.05).sum()/conv_days.shape[0]}%)')) g ``` ## Best Practices To the best of our knowledge, the most reasonable and practical way to perform a frequentist A/B test is to know your assumptions, including but not limited to: * What distribution should your data be assumed to be drawn from? In many cases, we use Bernoulli distribution for proportions, Poisson distribution for counts and normal distribution for real numbers. * Are you comparing your sample group to a fixed value or another sample group? * Do you want to know if the expectation of the sample group is equal to, more than or less than its counterpart? * What is the minimum detectable effect and how many samples should you collect? What is a reasonable variance to assume in order to calculated required sample size? * What is the highest false positive rate $\alpha$ that you can accept? With these assumptions cleared, you can most likely create a test statistics, then with frequentist reasoning, you can determine if the sample group you collected are unlikely enough that you would reject your null hypothesis because of it. ## References * Lemons, D. S. (2002). An introduction to stochastic processes in physics. Baltimore: Johns Hopkins University Press. Normal Sum Theorem; p34 * Munroe, Randall (n.d.). HOW TO Absurd Scientific Answers toCommon Real-world Problems. Retrieved from https://xkcd.com/882/ * Reinhart, A. (2015, March 1). The p value and the base rate fallacy. Retrieved from https://www.statisticsdonewrong.com/p-value.html * [whuber](https://stats.stackexchange.com/users/919/whuber) (2017). Can a probability distribution value exceeding 1 be OK?. Retrieved from https://stats.stackexchange.com/q/4223 ## Appendix ### Bessel's Correction for Sample Variance Random variables can be thought of as estimation of the real values such as sample variance is an estimation of variance from the "true" distribution. An estimator is said to be **biased** when its expectation is not equal to the true value (not to be confused with LLN where the estimator itself approaches the true value as number of samples grows). We can repeat the experiment we did for LLN with sample mean and true mean, but this time we compare how biased version ($\frac{1}{n} \sum_{i=1}^{n} (X_i - \bar{X})^2$) and unbiased version ($\frac{1}{n-1} \sum_{i=1}^{n} (X_i - \bar{X})^2$) of sample variance approach true variance as number of sample groups grow. Clearly, we can see that biased sample variance normally underestimates the true variance. 
```python def var(x, dof=0): n = x.shape[0] mu = np.sum(x)/n return np.sum((x - mu)**2) / (n-dof) n_total = 10000 #total number of stuff n_sample = 100 #number of samples per sample group sg_range = range(1,100) #number of sample groups to take average of sample variances from r = np.random.normal(loc=0,scale=1,size=n_total) #generate random variables based on Z distribution pop_var = var(r) #true variance of the population mean_s_bs = [] mean_s_us = [] for n_sg in sg_range: s_bs = [] s_us =[] for i in range(n_sg): sg = np.random.choice(r,size=n_sample,replace=False) s_bs.append(var(sg)) #biased sample variance s_us.append(var(sg,1)) #unbiased sample variance mean_s_bs.append(np.mean(s_bs)) mean_s_us.append(np.mean(s_us)) s_df = pd.DataFrame({'nb_var':sg_range,'biased_var':mean_s_bs, 'unbiased_var':mean_s_us}).melt(id_vars='nb_var') g = (ggplot(s_df,aes(x='nb_var',y='value',color='variable',group='variable')) + geom_line() + geom_hline(yintercept=pop_var) + theme_minimal() + xlab('Number of Sample Groups') + ylab('Sample Mean of Sample Variance in Each Group')) g ``` We derive exactly how much the bias is as follows: $$B[s_{biased}^2] = E[s_{biased}^2] - \sigma^2 = E[s_{biased}^2 - \sigma^2]$$ where $B[s^2]$ is the bias of estimator (biased sample variance) $s_{biased}^2$ of variance $\sigma^2$. Then we can calculate the bias as: \begin{align} E[s_{biased}^2 - \sigma^2] &= E[\frac{1}{n} \sum_{i=1}^n(X_i - \bar{X})^2 - \frac{1}{n} \sum_{i=1}^n(X_i - \mu)^2] \\ &= \frac{1}{n}E[(\sum_{i=1}^n X_i^2 -2\bar{X}\sum_{i=1}^n X_i + n\bar{X^2}) - (\sum_{i=1}^n X_i^2 -2\mu\sum_{i=1}^n X_i + n\mu^2)] \\ &= E[\bar{X^2} - \mu^2 - 2\bar{X^2} + 2\mu\bar{X}] \\ &= -E[\bar{X^2} -2\mu\bar{X} +\mu^2] \\ &= -E[(\bar{X} - \mu)^2] \\ &= -\frac{\sigma^2}{n} \text{; variance of sample mean}\\ E[s_{biased}^2] &= \sigma^2 - \frac{\sigma^2}{n} \\ &= (1-\frac{1}{n})\sigma^2 \end{align} Therefore if we divide biased estimator $s_{biased}^2$ by $1-\frac{1}{n}$, we will get an unbiased estimator of variance $s_{unbiased}^2$, \begin{align} s_{unbiased}^2 &= \frac{s_{biased}^2}{1-\frac{1}{n}} \\ &= \frac{\frac{1}{n} \sum_{i=1}^n(X_i - \bar{X})^2}{1-\frac{1}{n}}\\ &= \frac{1}{n-1} \sum_{i=1}^n(X_i - \bar{X})^2 \end{align} This is why the sample variance we usually use $s^2$ has $n-1$ instead of $n$. Also, this is not to be confused with the variance of sample means which is $\frac{\sigma^2}{n}$ when variance of the base distribution is known or assumed and $\frac{s^2}{n}$ when it is not. ### Mass vs Density You might wonder why the sample mean distribution has Y-axis that exceeds 1 even though it seemingly should represents probability of each value of sample mean. The short answer is that it does not represents probability but rather **probability density function**. The long answer is that there are two ways of representing probability distributions depending on whether they describe **discrete** or **continuous** data. See also this excellent [answer on Stack Exchange](https://stats.stackexchange.com/questions/4220/can-a-probability-distribution-value-exceeding-1-be-ok) (whuber, 2017). **Discrete probability distributions** contain values that are finite (for instance, $1, 2, 3, ...$) or countably infinite (for instance, $\frac{1}{2^i}$ where $i=1, 2, 3, ...$). They include but not limited to distributions we have used to demonstrate CLT namely uniform, Bernoulli and Poisson distribution. 
In all these distributions, the Y-axis, now called **probability mass function**, represents the exact probability each value in the X-axis will take, such as the Bernouilli distribution we have shown before: ```python flips = np.random.choice([0,1], size=n, p=[1-p,p]) flips_df = pd.DataFrame(flips) flips_df.columns = ['conv_flag'] g = (ggplot(flips_df,aes(x='factor(conv_flag)')) + geom_bar(aes(y = '(..count..)/sum(..count..)')) + theme_minimal() + xlab('Value') + ylab('Probability Mass Function') + ggtitle(f'Bernoulli Distribution')) g ``` **Continuous probability distribution** contains values that can take infinitely many, uncountable values (for instance, all real numbers between 0 and 1). Since there are infinitely many values, the probability of each individual value is essentially zero (what are the chance of winning the lottery that has infinite number of digits). Therefore, instead of the exact probability of each value (probability mass function), the Y-axis only represents the **probability density function**. This can be thought of as the total probability within an immeasurably small interval around the value. Take an example of a normal distribution with expectation $\mu=0$ and variance $\sigma^2=0.01$. The probability density function of the value 0 is described as: \begin{align} f(x) &= \frac{1}{\sqrt{2\pi\sigma^2}} e^{\frac{-(x-\mu)^2}{2\sigma^2}}\\ &= \frac{1}{\sqrt{2\pi(0.01)}} e^{\frac{-(x-0)^2}{2(0.01)}} \text{; }\mu=0;\sigma^2=0.01 \\ &\approx 3.989 \text{; when } x=0 \end{align} This of course does not mean that there is 398.9% chance that we will draw the value 0 but the density of the probability around the value. The actual probability of that interval around 0 is 3.989 times an immeasurably small number which will be between 0 and 1. Intuitively, we can think of these intervals as start from relatively large numbers such as 0.1 and gradually decreases to smaller numbers such as 0.005. As you can see from the plot below, the plot becomes more fine-grained and looks more "normal" as the intervals get smaller. ```python def prob_density(step,mu=0,sigma=0.1): x = np.arange(-0.5, 0.5, step) y = np.array([sp.stats.norm.pdf(i, loc=mu, scale=sigma) for i in x]) sm_df = pd.DataFrame({'x': x, 'y': y}) g = (ggplot(sm_df, aes(x='x', y='y')) + geom_bar(stat='identity') + theme_minimal() + xlab('Value') + ylab('Probability Density Function') + ggtitle(f'Normal Distribution with Expectation={mu} and Variance={sigma**2:2f}')) g.draw() # interact(prob_density, step=widgets.FloatSlider(min=5e-3,max=1e-1,value=1e-1,step=1e-3,readout_format='.3f')) ``` ```python #@title {run: "auto"} step = 0.1 #@param {type:"slider", min:5e-3, max:0.1, step:1e-3} prob_density(step) ``` ```python ```
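As a small check of the 3.989 figure above (added here; same parameters, $\mu=0$, $\sigma=0.1$):

```python
# Density of N(0, 0.01) evaluated at 0; a density, not a probability, so it may exceed 1.
print(sp.stats.norm.pdf(0, loc=0, scale=0.1))  # approximately 3.989
```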
9554054d426a2f030aa9dc956d697da4c957f474
967,004
ipynb
Jupyter Notebook
notebooks/frequentist_colab.ipynb
TeamTamoad/abtestoo
90e903ddbe945034b8226aad05a74fb46efb5326
[ "Apache-2.0" ]
1
2021-08-06T14:43:20.000Z
2021-08-06T14:43:20.000Z
notebooks/frequentist_colab.ipynb
TeamTamoad/abtestoo
90e903ddbe945034b8226aad05a74fb46efb5326
[ "Apache-2.0" ]
null
null
null
notebooks/frequentist_colab.ipynb
TeamTamoad/abtestoo
90e903ddbe945034b8226aad05a74fb46efb5326
[ "Apache-2.0" ]
null
null
null
390.866613
69,406
0.914276
true
17,682
Qwen/Qwen-72B
1. YES 2. YES
0.868827
0.805632
0.699955
__label__eng_Latn
0.981288
0.464561
# Chapter 5 # Numerical Integration and Differentiation In many computational economic applications, one must compute the definite integral of a real-valued function f with respect to a "weighting" function w over an interval $I$ of $R^n$: $$\int_I f(x)w(x) dx$$ The weighting function may be the identity, $w = 1$, in which case the integral represents the area under the function f. In other applications, w may be the probability density of a random variable $\tilde X$ , in which case the integral represents the expectation of $f( \tilde X)$ when $I$ repesents the whole support of $\tilde X$. ```python ``` In this chapter, we discuss three classes of numerical integration or numerical quadrature methods<sup>1</sup>. All methods approximate the integral with a weighted sum of function values: $$\int_I f(x) w(x)dx \approx \sum_{i=0}^{n} w_i f(x_i)\thinspace .$$ <sup>1</sup>Quadrature is a historical mathematical term that means calculating area. The methods differ only in how the *quadrature weights* $wi$ and the *quadrature nodes* $xi$ are chosen. **Newton-Cotes** methods approximate the integrand f between nodes using low order polynomials, and sum the integrals of the polynomials to estimate the integral of f. Newton-Cotes methods are easy to implement, but are not particularly eÆcient for computing the integral of a smooth function. **Gaussian quadrature** methods choose the nodes and weights to satisfy moment matching conditions, and are more powerful than Newton-Cotes methods if the integrand is smooth. **Monte Carlo and quasi-Monte Carlo integration** methods use "random" or "equidistributed" nodes, and are simple to implement and are especially useful if the integration domain is of high dimension or irregularly shaped. In this chapter, we also present an overview of how to compute *finite difference* approximations for the derivatives of a real-valued function. As we have seen in previous chapters, it is often desirable to compute derivatives numerically because analytic derivative expressions are difficult or impossible to derive, or expensive to evaluate. Finite difference methods can also be used to solve differential equations, which arise frequently in dynamic economic models, especially models formulated in continuous time. In this chapter, we introduce numerical methods for differential equations and illustrate their application to *initial value problems*. ```python # https://github.com/QuantEcon/QuantEcon.py/blob/488b7b3b9117cfd9bfc71c187efc87c39fc5b459/quantecon/quad.py """ Filename: quad.py Authors: Chase Coleman, Spencer Lyon Date: 2014-07-01 Defining various quadrature routines. Based on the quadrature routines found in the CompEcon toolbox by Miranda and Fackler. References ---------- Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002. """ from __future__ import division import math import numpy as np import scipy.linalg as la from scipy.special import gammaln import sympy as sym #from .ce_util import ckron, gridmake from functools import reduce ``` ```python def ckron(*arrays): """ Repeatedly applies the np.kron function to an arbitrary number of input arrays Parameters ---------- *arrays : tuple/list of np.ndarray Returns ------- out : np.ndarray The result of repeated kronecker products Notes ----- Based of original function `ckron` in CompEcon toolbox by Miranda and Fackler References ---------- Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002. 
""" return reduce(np.kron, arrays) def gridmake(*arrays): """ TODO: finish this docstring Notes ----- Based of original function ``gridmake`` in CompEcon toolbox by Miranda and Fackler References ---------- Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002. """ if all([i.ndim == 1 for i in arrays]): d = len(arrays) if d == 2: out = _gridmake2(*arrays) else: out = _gridmake2(arrays[0], arrays[1]) for arr in arrays[2:]: out = _gridmake2(out, arr) return out else: raise NotImplementedError("Come back here") def _gridmake2(x1, x2): """ TODO: finish this docstring Notes ----- Based of original function ``gridmake2`` in CompEcon toolbox by Miranda and Fackler References ---------- Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002. """ if x1.ndim == 1 and x2.ndim == 1: return np.column_stack([np.tile(x1, x2.shape[0]), np.repeat(x2, x1.shape[0])]) elif x1.ndim > 1 and x2.ndim == 1: first = np.tile(x1, (x2.shape[0], 1)) second = np.repeat(x2, x1.shape[0]) return np.column_stack([first, second]) else: raise NotImplementedError("Come back here") def _qnwtrap1(n, a, b): """ Compute univariate trapezoid rule quadrature nodes and weights Parameters ---------- n : int The number of nodes a : int The lower endpoint b : int The upper endpoint Returns ------- nodes : np.ndarray(dtype=float) An n element array of nodes nodes : np.ndarray(dtype=float) An n element array of weights Notes ----- Based of original function ``qnwtrap1`` in CompEcon toolbox by Miranda and Fackler References ---------- Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002. """ if n < 1: raise ValueError("n must be at least one") nodes = np.linspace(a, b, n) dx = nodes[1] - nodes[0] weights = dx * np.ones(n) weights[0] *= 0.5 weights[-1] *= 0.5 return nodes, weights def _qnwsimp1(n, a, b): """ Compute univariate Simpson quadrature nodes and weights Parameters ---------- n : int The number of nodes a : int The lower endpoint b : int The upper endpoint Returns ------- nodes : np.ndarray(dtype=float) An n element array of nodes nodes : np.ndarray(dtype=float) An n element array of weights Notes ----- Based of original function ``qnwsimp1`` in CompEcon toolbox by Miranda and Fackler References ---------- Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002. """ if n % 2 == 0: print("WARNING qnwsimp: n must be an odd integer. Increasing by 1") n += 1 nodes = np.linspace(a, b, n) dx = nodes[1] - nodes[0] weights = np.tile([2.0, 4.0], (n + 1) // 2) weights = weights[:n] weights[0] = weights[-1] = 1 weights = (dx / 3.0) * weights return nodes, weights def _qnwlege1(n, a, b): """ Compute univariate Guass-Legendre quadrature nodes and weights Parameters ---------- n : int The number of nodes a : int The lower endpoint b : int The upper endpoint Returns ------- nodes : np.ndarray(dtype=float) An n element array of nodes nodes : np.ndarray(dtype=float) An n element array of weights Notes ----- Based of original function ``qnwlege1`` in CompEcon toolbox by Miranda and Fackler References ---------- Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002. 
""" # import ipdb; ipdb.set_trace() maxit = 100 m = np.fix((n + 1) / 2.0).astype(int) xm = 0.5 * (b + a) xl = 0.5 * (b - a) nodes = np.zeros(n) weights = nodes.copy() i = np.arange(m, dtype='int') z = np.cos(np.pi * ((i + 1.0) - 0.25) / (n + 0.5)) for its in range(maxit): p1 = 1.0 p2 = 0.0 for j in range(1, n+1): p3 = p2 p2 = p1 p1 = ((2 * j - 1) * z * p2 - (j - 1) * p3) / j pp = n * (z * p1 - p2)/(z * z - 1.0) z1 = z.copy() z = z1 - p1/pp if all(np.abs(z - z1) < 1e-14): break if its == maxit - 1: raise ValueError("Maximum iterations in _qnwlege1") nodes[i] = xm - xl * z nodes[- i - 1] = xm + xl * z weights[i] = 2 * xl / ((1 - z * z) * pp * pp) weights[- i - 1] = weights[i] return nodes, weights def _make_multidim_func(one_d_func, n, *args): """ A helper function to cut down on code repetition. Almost all of the code in qnwcheb, qnwlege, qnwsimp, qnwtrap is just dealing various forms of input arguments and then shelling out to the corresponding 1d version of the function. This routine does all the argument checking and passes things through the appropriate 1d function before using a tensor product to combine weights and nodes. Parameters ---------- one_d_func : function The 1d function to be called along each dimension n : int or array_like(float) A length-d iterable of the number of nodes in each dimension args : These are the arguments to various qnw____ functions. For the majority of the functions this is just a and b, but some differ. Returns ------- func : function The multi-dimensional version of the parameter ``one_d_func`` """ args = list(args) n = np.asarray(n) args = list(map(np.asarray, args)) if all([x.size == 1 for x in [n] + args]): return one_d_func(n, *args) d = n.size for i in range(len(args)): if args[i].size == 1: args[i] = np.repeat(args[i], d) nodes = [] weights = [] for i in range(d): ai = [x[i] for x in args] _1d = one_d_func(n[i], *ai) nodes.append(_1d[0]) weights.append(_1d[1]) weights = ckron(*weights[::-1]) # reverse ordered tensor product nodes = gridmake(*nodes) return nodes, weights ``` ## 5.1 Newton-Cotes Methods Newton-Cotes quadrature methods are designed to approximate the integral of a realvalued function $f$ defined on a bounded interval $[a; b]$ of the real line. Newton-Cotes methods approximate the integrand $f$ between nodes using *low order polynomials*, and sum the integrals of the polynomials to form an estimate the integral of f. Two Newton-Cotes rules are widely used in practice: the **trapezoid rule and Simpson's rule**. Both rules are very easy to implement and are typically adequate for computing the area under a continuous function. ```python ``` The trapezoid rule partitions the interval [a; b] into subintervals of equal length, approximates f over each subinterval using linear interpolants, and then sums the areas under the linear segments. The trapezoid rule draws its name from the fact that the area under f is approximated by a series of trapezoids. ```python ``` where $x_i = a + (i-1)h$, with $h$ (called the step size) equal to $ h=(b − a) / (n-1)$. The $w_i$ are called weights. $$\int _{a}^{b}f(x)\,dx\approx \sum _{{i=1}}^{{n-1}}w_{i}\,f(x_{i}).$$ where $w_1 = w_n = h/2$ and $w_i = h$, otherwise. 
```latex %%latex \begin{align} \int_a^b f(x)\,dx &= \int_{x_0}^{x_1} f(x) dx + \int_{x_1}^{x_2} f(x) dx + \ldots + \int_{x_{n-1}}^{x_n} f(x) dx, \nonumber \\ &\approx h \frac{f(x_0) + f(x_1)}{2} + h \frac{f(x_1) + f(x_2)}{2} + \ldots + \nonumber \\ &\quad h \frac{f(x_{n-1}) + f(x_n)}{2} \end{align} ``` \begin{align} \int_a^b f(x)\,dx &= \int_{x_0}^{x_1} f(x) dx + \int_{x_1}^{x_2} f(x) dx + \ldots + \int_{x_{n-1}}^{x_n} f(x) dx, \nonumber \\ &\approx h \frac{f(x_0) + f(x_1)}{2} + h \frac{f(x_1) + f(x_2)}{2} + \ldots + \nonumber \\ &\quad h \frac{f(x_{n-1}) + f(x_n)}{2} \end{align} $$\int_a^b f(x)\,dx \approx \frac{h}{2}\left[f(x_0) + 2 f(x_1) + 2 f(x_2) + \ldots + 2 f(x_{n-1}) + f(x_n)\right] $$ $$ \int_a^b f(x)\,dx \approx h \left[\frac{1}{2}f(x_0) + \sum_{i=1}^{n-1}f(x_i) + \frac{1}{2}f(x_n) \right] \thinspace . $$ For example, when $n = 2$ $${\frac {b-a}{2}}(f_{0}+f_{1})$$ ```python def trapezoidal(f, a, b, n): h = float(b-a)/n result = 0.5*f(a) + 0.5*f(b) for i in range(1, n): result += f(a + i*h) result *= h return result ``` ```python ``` The trapezoid rule is simple and robust. It is said to be first order exact because, if not for rounding error, it will exactly compute the integral of any first order polynomial, that is, a line. In general, if the integrand f is smooth, the trapezoid rule will yield an approximation error that is $O(h^2)$, that is, the error shrinks quadratically with the width of the subintervals. ```python def qnwtrap(n, a, b): """ Computes multivariate trapezoid rule quadrature nodes and weights. Parameters ---------- n : int or array_like(float) A length-d iterable of the number of nodes in each dimension a : scalar or array_like(float) A length-d iterable of lower endpoints. If a scalar is given, that constant is repeated d times, where d is the number of dimensions b : scalar or array_like(float) A length-d iterable of upper endpoints. If a scalar is given, that constant is repeated d times, where d is the number of dimensions Returns ------- nodes : np.ndarray(dtype=float) Quadrature nodes weights : np.ndarray(dtype=float) Weights for quadrature nodes Notes ----- Based of original function ``qnwtrap`` in CompEcon toolbox by Miranda and Fackler References ---------- Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002. """ return _make_multidim_func(_qnwtrap1, n, a, b) ``` ```python ``` ```python ``` Simpson's rule is based on piece-wise quadratic, rather than piece-wise linear, approximations to the integrand $f$. $$\int_a^b f(x)dx \approx \sum_{i=0}^{n-1} w_if(x_i)\thinspace .$$ More formally, let $x_i = a + (i - 1)h$ for $i = 1; 2; ... ; n$, where $ h=(b − a) / (n-1)$ and $n$ is odd. The nodes $x_i$ divide the interval $[a; b]$ into an even number $n - 1$ of subintervals of equal length $h$. ```python ``` ```python def qnwsimp(n, a, b): """ Computes multivariate Simpson quadrature nodes and weights. Parameters ---------- n : int or array_like(float) A length-d iterable of the number of nodes in each dimension a : scalar or array_like(float) A length-d iterable of lower endpoints. If a scalar is given, that constant is repeated d times, where d is the number of dimensions b : scalar or array_like(float) A length-d iterable of upper endpoints. 
If a scalar is given, that constant is repeated d times, where d is the number of dimensions Returns ------- nodes : np.ndarray(dtype=float) Quadrature nodes weights : np.ndarray(dtype=float) Weights for quadrature nodes Notes ----- Based of original function ``qnwsimp`` in CompEcon toolbox by Miranda and Fackler References ---------- Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002. """ return _make_multidim_func(_qnwsimp1, n, a, b) ``` ```python ``` ```python ``` ```python ``` ```python ``` ```python ``` ```python ``` ```python ``` ```python #https://github.com/birocoles/Disciplina-metodos-computacionais/blob/master/Content/newton-cotes.ipynb # http://nbviewer.jupyter.org/github/sbustamante/ComputationalMethods/blob/master/material/numerical-calculus.ipynb#Numerical-Integration ``` ```python ``` ```python ``` ```python ``` ## 5.2 Gaussian Quadrature Gaussian quadrature rules are constructed with respect to specific weighting functions. Specifically, for a weighting function $w$ defined on an interval $I \in R$ of the real line, and for a given order of approximation n, the quadrature nodes $x_1; x_2; ... ; x_n$ and quadrature weights $w_1; w_2; ...; w_n$ are chosen so as to satisfy the $2n$ "momentmatching" conditions: ```python ``` ```python ``` ```python ``` ## 5.3 Monte Carlo Integration Monte Carlo integration methods are motivated by the Strong Law of Large Numbers. One version of the Law states that if $x_1; x_2; ... ; x_n$ are independent realizations of a random variable $\tilde X$ and $f$ is a continuous function, then ```python ``` ```python ``` ## 5.4 Quasi-Monte Carlo Integration Although Monte-Carlo integration methods originated using insights from probability theory, recent extensions have severed that connection and, in the process, demonstrated ways in which the methods can be improved. Monte-Carlo methods rely on sequences $x_i$ with the property that ```python ``` ```python ``` ## 5.5 An Integration Toolbox The Matlab toolbox accompanying the textbook includes four functions for computing numerical integrals for general functions. Each takes three inputs, n, a, and b and generates appropriate nodes and weights. The functions `qnwtrap` and `qnwsimp` implement the Newton-Cotes trapezoid and Simpson's rule methods, `qnwlege` implements Gauss-Legendre quadrature and `qnwequi` generates nodes and weights associated with either equidistributed or pseudo-random sequences. The calling syntax is the same for each and is illustrated with below with `qnwtrap`. ```python def qnwlege(n, a, b): """ Computes multivariate Guass-Legendre quadrature nodes and weights. Parameters ---------- n : int or array_like(float) A length-d iterable of the number of nodes in each dimension a : scalar or array_like(float) A length-d iterable of lower endpoints. If a scalar is given, that constant is repeated d times, where d is the number of dimensions b : scalar or array_like(float) A length-d iterable of upper endpoints. If a scalar is given, that constant is repeated d times, where d is the number of dimensions Returns ------- nodes : np.ndarray(dtype=float) Quadrature nodes weights : np.ndarray(dtype=float) Weights for quadrature nodes Notes ----- Based of original function ``qnwlege`` in CompEcon toolbox by Miranda and Fackler References ---------- Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002. 
""" return _make_multidim_func(_qnwlege1, n, a, b) def qnwequi(n, a, b, kind="N", equidist_pp=None): """ Generates equidistributed sequences with property that averages value of integrable function evaluated over the sequence converges to the integral as n goes to infinity. Parameters ---------- n : int Number of sequence points a : scalar or array_like(float) A length-d iterable of lower endpoints. If a scalar is given, that constant is repeated d times, where d is the number of dimensions b : scalar or array_like(float) A length-d iterable of upper endpoints. If a scalar is given, that constant is repeated d times, where d is the number of dimensions kind : string, optional(default="N") One of the following: - N - Neiderreiter (default) - W - Weyl - H - Haber - R - pseudo Random equidist_pp : array_like, optional(default=None) TODO: I don't know what this does Returns ------- nodes : np.ndarray(dtype=float) Quadrature nodes weights : np.ndarray(dtype=float) Weights for quadrature nodes Notes ----- Based of original function ``qnwequi`` in CompEcon toolbox by Miranda and Fackler References ---------- Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002. """ if equidist_pp is None: equidist_pp = np.sqrt(np.array(list(sym.primerange(0, 7920)))) n, a, b = list(map(np.atleast_1d, list(map(np.asarray, [n, a, b])))) d = max(list(map(len, [n, a, b]))) n = np.prod(n) if a.size == 1: a = np.repeat(a, d) if b.size == 1: b = np.repeat(b, d) i = np.arange(1, n + 1) if kind.upper() == "N": # Neiderreiter j = 2.0 ** (np.arange(1, d+1) / (d+1)) nodes = np.outer(i, j) nodes = (nodes - np.fix(nodes)).squeeze() elif kind.upper() == "W": # Weyl j = equidist_pp[:d] nodes = np.outer(i, j) nodes = (nodes - np.fix(nodes)).squeeze() elif kind.upper() == "H": # Haber j = equidist_pp[:d] nodes = np.outer(i * (i+1) / 2, j) nodes = (nodes - np.fix(nodes)).squeeze() elif kind.upper() == "R": # pseudo-random nodes = np.random.rand(n, d).squeeze() else: raise ValueError("Unknown sequence requested") # compute nodes and weights r = b - a nodes = a + nodes * r weights = (np.prod(r) / n) * np.ones(n) return nodes, weights ``` ```python ``` ```python ``` ```python ``` ```python ``` All of the quadrature functions will use tensor products to generate nodes and weights for integration over an arbitrary bounded interval [a; b] in higher dimensional spaces. ```python ``` ```python ``` ## 5.6 Numerical Differentiation The most natural way to approximate a derivative is to replace it with a finite difference. The definition of a derivative, ```python ``` ```python ``` ```python ``` ## 5.7 Initial Value Problems Differential equations pose the problem of inferring a function given information about its derivatives and additional "boundary" conditions. Differential equations may characterized as either ordinary differential equations (ODEs), whose solutions are functions of a single argument, and partial differential equations (PDEs), whose solutions are functions of multiple arguments. Both ODEs and PDEs may be solved numerically using finite difference methods. From a numerical point of view the distinction between ODEs and PDEs is less important than the distinction between initial value problems (IVPs), which can be solved in a recursive or evolutionary fashion, and boundary value problems (BVPs), which require the entire solution to be computed simultaneously because the solution at one point (in time and/or space) depends on the solution everywhere else. 
For ODEs, the solution of an IVP is known at some point and the solution near this point can then be (approximately) determined. This, in turn, allows the solution at still other points to be approximated and so forth. BVPs, on the other hand, require simultaneous solution of the differential equation and the boundary conditions. We take up the solution of IVPs in this section, but defer discussion of BVPs until the next chapter (page 164). ```python ``` ```python ``` There are numerous other approaches and refinements to solving initial value problems. Briefly, these include so-called multi-step algorithms, which utilize information from previous steps to determine the current step direction (Runge-Kutta methods are single-step). Also, any method can adapt the step size to the current behavior of the system by monitoring the truncation error, reducing (increasing) the step size if this error is unacceptably large (small). Adaptive schemes are important if one requires a given level of accuracy. ```python ``` ### Example: Commercial Fishery ```python ``` ```python ``` ```python ``` ```python ```
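The cells for the fishery example are left empty above. As an illustrative sketch only, the code below applies the forward Euler scheme (the simplest single-step method) to a harvested logistic growth law; the growth function, harvest rate, initial stock, and step size are all assumptions made for illustration, not the textbook's fishery model.

```python
# Illustrative sketch: forward Euler applied to a harvested logistic growth ODE
#   s'(t) = alpha * s * (1 - s / K) - h * s
# All parameter values below are assumptions for illustration.
import numpy as np
import matplotlib.pyplot as plt

alpha, K, harvest = 0.5, 1.0, 0.1          # assumed growth, capacity, harvest rate
ds = lambda s: alpha * s * (1 - s / K) - harvest * s

T, n = 40.0, 400
dt = T / n
t = np.linspace(0.0, T, n + 1)
s = np.empty(n + 1)
s[0] = 0.2                                  # assumed initial stock
for i in range(n):
    s[i + 1] = s[i] + dt * ds(s[i])         # Euler step

plt.plot(t, s)
plt.xlabel("time")
plt.ylabel("stock")
plt.show()
```

Halving the step size roughly halves the global error of Euler's method, which is why higher-order Runge-Kutta schemes are usually preferred in practice.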
b3c74322e8d98bf2e89a951e6e2b6a87d1c06c2d
38,778
ipynb
Jupyter Notebook
Chapter05.ipynb
lnsongxf/Applied_Computational_Economics_and_Finance
f14661bfbfa711d49539bda290d4be5a25087185
[ "MIT" ]
19
2018-05-09T08:17:44.000Z
2021-12-26T07:02:17.000Z
Chapter05.ipynb
lnsongxf/Applied_Computational_Economics_and_Finance
f14661bfbfa711d49539bda290d4be5a25087185
[ "MIT" ]
null
null
null
Chapter05.ipynb
lnsongxf/Applied_Computational_Economics_and_Finance
f14661bfbfa711d49539bda290d4be5a25087185
[ "MIT" ]
11
2017-12-15T13:39:35.000Z
2021-05-15T15:06:02.000Z
29.399545
173
0.522642
true
6,457
Qwen/Qwen-72B
1. YES 2. YES
0.879147
0.843895
0.741908
__label__eng_Latn
0.983375
0.562032
# Classification using NAG Second-order Conic Programming via CVXPY ## Correct Rendering of this notebook This notebook makes use of the `latex_envs` Jupyter extension for equations and references. If the LaTeX is not rendering properly in your local installation of Jupyter , it may be because you have not installed this extension. Details at https://jupyter-contrib-nbextensions.readthedocs.io/en/latest/nbextensions/latex_envs/README.html ## Installing the NAG library and running this notebook This notebook depends on the NAG library for Python to run. Please read the instructions in the [Readme.md](https://github.com/numericalalgorithmsgroup/NAGPythonExamples/blob/master/local_optimization/Readme.md#install) file to download, install and obtain a licence for the library. Instruction on how to run the notebook can be found [here](https://github.com/numericalalgorithmsgroup/NAGPythonExamples/blob/master/local_optimization/Readme.md#jupyter). ## Introduction In this notebook, we demonstrate how NAG second-order conic programming (SOCP) solver can be used to find a quartic polynomial \vspace{0.1cm} \begin{equation}\label{poly} p(x,y) = \sum_{i,j} a_{ij} x^i y^j ~\mbox{with}~ i+j\leq4 \end{equation} \vspace{0.1cm} that separates two sets of points in a plane robustly. The optimization model for this problem can be cast as \vspace{0.1cm} \begin{equation}\label{prob} \begin{array}{ll} \underset{a\in\Re^{15}}{\mbox{minimize}} & \|a\|\\[0.6ex] \mbox{subject to} & p(x,y)\leq -1, \mbox{for x, y in set 1},\\[0.6ex] & p(x,y)\geq 1, \mbox{for x, y in set 2}. \end{array} \end{equation} \vspace{0.1cm} ```python # Import necessary packages import numpy as np import cvxpy as cvx import matplotlib.pyplot as plt ``` ## Data preparation In this section, we generate two sets of random points used as training data for our model. ```python # Fix random seed np.random.seed(3) # Generate two primary sets of points # Number of points in each set n = 80 set_1_base = np.random.uniform(-1.0, 1.0, (n,2)) set_2_base = np.random.uniform(-1.0, 1.0, (n,2)) # Scale the primary points to make set 1 surrounded by set 2 for i in range(n): set_1_base[i,0:] = 0.9 * set_1_base[i,0:] * np.random.rand() / \ np.linalg.norm(set_1_base[i,0:]) set_2_base[i,0:] = set_2_base[i,0:] * (1.1 + np.random.rand() / \ np.linalg.norm(set_2_base[i,0:])) # Further process the data to make abnormal shape of sets of points maxnorm_set_1 = max(np.linalg.norm(set_1_base,axis=1)) set_2_pick = set_2_base[np.linalg.norm(set_2_base,axis=1)>maxnorm_set_1,0:] set_1_pick = np.concatenate((set_1_base, set_2_base[np.linalg.norm(set_2_base,axis=1)<= maxnorm_set_1,0:])) # The shape of set 1 is primarily round, I punch set 1 from the left to make it abnormal # Feel free to modify punch power to see the resulting graph punch_power = 1.0 set_1 = set_1_pick[np.linalg.norm(set_1_pick-[-1.0,-0.5],axis=1)>punch_power,0:] set_2 = np.concatenate((set_2_pick, set_1_pick[np.linalg.norm(set_1_pick-[-1.0,-0.5],axis=1)<= punch_power,0:])) ``` Now the training data is ready to use. Visualize it. 
```python data = (set_1, set_2) colors = ("red", "blue") groups = ("set1", "set2") fig = plt.figure() ax = fig.add_subplot(1, 1, 1, alpha=1.0) for data, color, group in zip(data, colors, groups): x = data[0:,0] y = data[0:,1] ax.scatter(x, y, alpha=0.8, c=color, edgecolors='none', s=10, label=group) plt.xlim(-2.0, 2.0) plt.ylim(-2.0, 2.0) plt.title('Training Data') plt.legend(loc=2) plt.show() ``` The number of coefficient for a quartic polynomial is $15$, which is therefore our number of variables. ```python # Number of variables nvar = 15 ``` Both constraints in (\ref{prob}) are linear with respect to variable $a$. Use set_1 and set_2 to set linear coefficients for those constraints. ```python A_1 = np.zeros((set_1.shape[0], nvar)) A_2 = np.zeros((set_2.shape[0], nvar)) for i in range(set_1.shape[0]): counter = 0 for j in range(5): for k in range(5-j): A_1[i,counter] = set_1[i,0]**j * set_1[i,1]**k counter = counter + 1 for i in range(set_2.shape[0]): counter = 0 for j in range(5): for k in range(5-j): A_2[i,counter] = set_2[i,0]**j * set_2[i,1]**k counter = counter + 1 ``` ## Model our problem via CVXPY and solve ```python # Define decision variables x = cvx.Variable(nvar) # Define objective function objective = cvx.Minimize(cvx.norm(x)) # Define constraints constraint = [A_1@x <= -1.0, A_2@x >= 1.0] # Define the entire problem problem = cvx.Problem(objective, constraint) # Solve, Bing! problem.solve(solver='NAG', verbose=True) # Save result coef = x.value ``` ------------------------------------------------ E04PT, Interior point method for SOCP problems ------------------------------------------------ Begin of Options Print File = -1 * U Print Level = 2 * d Print Options = Yes * d Print Solution = No * d Monitoring File = 6 * U Monitoring Level = 2 * U Socp Monitor Frequency = 0 * d Infinite Bound Size = 1.00000E+20 * d Task = Minimize * d Stats Time = No * d Time Limit = 1.00000E+06 * d Socp Iteration Limit = 100 * d Socp Max Iterative Refinement = 9 * d Socp Presolve = Yes * d Socp Scaling = None * d Socp Stop Tolerance = 1.05367E-08 * d Socp Stop Tolerance 2 = 1.05367E-08 * d Socp System Formulation = Auto * d End of Options Problem Statistics No of variables 32 bounds not defined No of lin. constraints 176 nonzeroes 2432 No of quad.constraints 0 No of cones 1 biggest cone size 16 Objective function Linear Presolved Problem Measures No of variables 191 No of lin. 
constraints 175 nonzeroes 2590 No of cones 1 ------------------------------------------------------------------------ it| pobj | dobj | p.inf | d.inf | d.gap | tau | I ------------------------------------------------------------------------ 0 1.00000E+00 0.00000E+00 2.70E-02 5.35E-03 2.00E+00 1.0E+00 1 5.13493E+01 8.32818E+01 1.29E-02 2.55E-03 9.55E-01 5.0E-02 2 2.80351E+01 5.34418E+01 9.73E-03 1.93E-03 7.22E-01 3.6E-01 3 1.11360E+02 2.17980E+02 4.46E-03 8.86E-04 3.31E-01 1.2E-01 4 2.07571E+02 4.06965E+02 2.08E-03 4.13E-04 1.55E-01 5.9E-02 5 3.02366E+02 5.91847E+02 1.38E-03 2.74E-04 1.03E-01 3.5E-02 6 2.42309E+02 4.51372E+02 2.93E-04 5.82E-05 2.18E-02 2.0E-02 7 1.29884E+02 1.56324E+02 8.30E-05 1.65E-05 6.16E-03 2.2E-02 8 7.71822E+01 8.68099E+01 3.34E-05 6.63E-06 2.48E-03 3.4E-02 9 6.67725E+01 7.13985E+01 1.75E-05 3.48E-06 1.30E-03 3.6E-02 10 6.49722E+01 6.63180E+01 7.68E-06 1.52E-06 5.70E-04 3.2E-02 11 6.27050E+01 6.28895E+01 2.58E-06 5.11E-07 1.91E-04 3.2E-02 12 6.17692E+01 6.18968E+01 1.62E-06 3.21E-07 1.20E-04 2.9E-02 13 6.14587E+01 6.14705E+01 4.56E-07 9.04E-08 3.38E-05 2.9E-02 14 6.11965E+01 6.11981E+01 5.12E-08 1.02E-08 3.80E-06 2.8E-02 15 6.11753E+01 6.11754E+01 2.23E-09 4.43E-10 1.66E-07 2.8E-02 16 6.11747E+01 6.11747E+01 8.75E-11 1.72E-11 6.42E-09 2.8E-02 ------------------------------------------------------------------------------ Status: converged, an optimal solution found ------------------------------------------------------------------------------ Final primal objective value 6.117468E+01 Final dual objective value 6.117469E+01 Absolute primal infeasibility 6.493819E-09 Relative primal infeasibility 8.753236E-11 Absolute dual infeasibility 3.212062E-09 Relative dual infeasibility 1.718111E-11 Absolute complementarity gap 6.424124E-09 Relative complementarity gap 6.424124E-09 Iterations 16 ## Visualize the classifier ```python # Generate a mesh x = np.arange(-5.0,5.0,0.008) y = np.arange(-5.0,5.0,0.008) xx, yy = np.meshgrid(x,y) # Ploynomial value on the mesh polyval = np.zeros(xx.shape) counter = 0 for i in range(5): for j in range(5-i): polyval = polyval + coef[counter]*np.power(xx,i)*np.power(yy,j) counter = counter + 1 # Plot the trained polynomial fig = plt.figure() ax = fig.add_subplot(1, 1, 1, alpha=1.0) ax.scatter(xx[polyval<=-1], yy[polyval<=-1], alpha=0.002) data = (set_1, set_2) colors = ("red", "blue") groups = ("set1", "set2") for data, color, group in zip(data, colors, groups): x = data[0:,0] y = data[0:,1] ax.scatter(x, y, alpha=0.8, c=color, edgecolors='none', s=10, label=group) plt.xlim(-2.0, 2.0) plt.ylim(-2.0, 2.0) plt.title('Polynomial') plt.legend(loc=2) plt.show() ```
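As a quick sanity check on the solution (not part of the original example), we can evaluate the trained polynomial on the training points using the constraint matrices built earlier; up to solver tolerance, the values should be at most -1 on set 1 and at least 1 on set 2, and the norm of the coefficient vector should match the reported primal objective.

```python
# Verify the separation constraints on the training data
val_set_1 = A_1 @ coef
val_set_2 = A_2 @ coef
print("max p(x, y) on set 1:", val_set_1.max())        # expected <= -1 up to tolerance
print("min p(x, y) on set 2:", val_set_2.min())        # expected >=  1 up to tolerance
print("norm of coefficient vector:", np.linalg.norm(coef))  # should match the primal objective
```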
0ae1273a1c42309281bd95f3e445416e2a3c6766
63,311
ipynb
Jupyter Notebook
local_optimization/SOCP/cvxpy_classification.ipynb
Brunochris13/NAGPythonExamples
e57fc05ab9b27db66d06a52f9b9412205e984544
[ "BSD-3-Clause" ]
40
2018-12-06T20:20:01.000Z
2022-03-05T23:09:31.000Z
local_optimization/SOCP/cvxpy_classification.ipynb
kelly1208/NAGPythonExamples
bd20f719c176bbbbc878fea7d0962e5fa10d9d3e
[ "BSD-3-Clause" ]
11
2019-03-25T11:52:51.000Z
2021-04-12T14:08:31.000Z
local_optimization/SOCP/cvxpy_classification.ipynb
kelly1208/NAGPythonExamples
bd20f719c176bbbbc878fea7d0962e5fa10d9d3e
[ "BSD-3-Clause" ]
21
2019-01-22T13:30:57.000Z
2021-12-15T13:05:14.000Z
154.794621
29,616
0.854828
true
3,284
Qwen/Qwen-72B
1. YES 2. YES
0.868827
0.819893
0.712345
__label__eng_Latn
0.649977
0.493348
# Exponentials, Radicals, and Logs Up to this point, all of our equations have included standard arithmetic operations, such as division, multiplication, addition, and subtraction. Many real-world calculations involve exponential values in which numbers are raised by a specific power. ## Exponentials A simple case of of using an exponential is squaring a number; in other words, multipying a number by itself. For example, 2 squared is 2 times 2, which is 4. This is written like this: \begin{equation}2^{2} = 2 \cdot 2 = 4\end{equation} Similarly, 2 cubed is 2 times 2 times 2 (which is of course 8): \begin{equation}2^{3} = 2 \cdot 2 \cdot 2 = 8\end{equation} In Python, you use the **&ast;&ast;** operator, like this example in which **x** is assigned the value of 5 raised to the power of 3 (in other words, 5 x 5 x 5, or 5-cubed): ```python x = 5**3 print(x) ``` 125 Multiplying a number by itself twice or three times to calculate the square or cube of a number is a common operation, but you can raise a number by any exponential power. For example, the following notation shows 4 to the power of 7 (or 4 x 4 x 4 x 4 x 4 x 4 x 4), which has the value: \begin{equation}4^{7} = 16384 \end{equation} In mathematical terminology, **4** is the *base*, and **7** is the *power* or *exponent* in this expression. ## Radicals (Roots) While it's common to need to calculate the solution for a given base and exponential, sometimes you'll need to calculate one or other of the elements themselves. For example, consider the following expression: \begin{equation}?^{2} = 9 \end{equation} This expression is asking, given a number (9) and an exponent (2), what's the base? In other words, which number multipled by itself results in 9? This type of operation is referred to as calculating the *root*, and in this particular case it's the *square root* (the base for a specified number given the exponential **2**). In this case, the answer is 3, because 3 x 3 = 9. We show this with a **&radic;** symbol, like this: \begin{equation}\sqrt{9} = 3 \end{equation} Other common roots include the *cube root* (the base for a specified number given the exponential **3**). For example, the cube root of 64 is 4 (because 4 x 4 x 4 = 64). To show that this is the cube root, we include the exponent **3** in the **&radic;** symbol, like this: \begin{equation}\sqrt[3]{64} = 4 \end{equation} We can calculate any root of any non-negative number, indicating the exponent in the **&radic;** symbol. The **math** package in Python includes a **sqrt** function that calculates the square root of a number. To calculate other roots, you need to reverse the exponential calculation by raising the given number to the power of 1 divided by the given exponent: ```python import math # Calculate square root of 25 x = math.sqrt(25) print (x) # Calculate cube root of 64 cr = round(64 ** (1. / 3)) print(cr) ``` 5.0 4 The code used in Python to calculate roots other than the square root reveals something about the relationship between roots and exponentials. The exponential root of a number is the same as that number raised to the power of 1 divided by the exponential. For example, consider the following statement: \begin{equation} 8^{\frac{1}{3}} = \sqrt[3]{8} = 2 \end{equation} Note that a number to the power of 1/3 is the same as the cube root of that number. 
Based on the same arithmetic, a number to the power of 1/2 is the same as the square root of the number: \begin{equation} 9^{\frac{1}{2}} = \sqrt{9} = 3 \end{equation} You can see this for yourself with the following Python code: ```python import math print (9**0.5) print (math.sqrt(9)) ``` 3.0 3.0 ## Logarithms Another consideration for exponential values is the requirement occassionally to determine the exponent for a given number and base. In other words, how many times do I need to multiply a base number by itself to get the given result. This kind of calculation is known as the *logarithm*. For example, consider the following expression: \begin{equation}4^{?} = 16 \end{equation} In other words, to what power must you raise 4 to produce the result 16? The answer to this is 2, because 4 x 4 (or 4 to the power of 2) = 16. The notation looks like this: \begin{equation}log_{4}(16) = 2 \end{equation} In Python, you can calculate the logarithm of a number using the **log** function in the **math** package, indicating the number and the base: ```python import math x = math.log(16, 4) print(x) ``` 2.0 The final thing you need to know about exponentials and logarithms is that there are some special logarithms: The *common* logarithm of a number is its exponential for the base **10**. You'll occassionally see this written using the usual *log* notation with the base omitted: \begin{equation}log(1000) = 3 \end{equation} Another special logarithm is something called the *natural log*, which is a exponential of a number for base ***e***, where ***e*** is a constant with the approximate value 2.718. This number occurs naturally in a lot of scenarios, and you'll see it often as you work with data in many analytical contexts. For the time being, just be aware that the natural log is sometimes written as ***ln***: \begin{equation}log_{e}(64) = ln(64) = 4.1589 \end{equation} The **math.log** function in Python returns the natural log (base ***e***) when no base is specified. Note that this can be confusing, as the mathematical notation *log* with no base usually refers to the common log (base **10**). To return the common log in Python, use the **math.log10** function: ```python import math # Natural log of 29 print (math.log(29)) # Common log of 100 print(math.log10(100)) ``` 3.367295829986474 2.0 ## Solving Equations with Exponentials OK, so now that you have a basic understanding of exponentials, roots, and logarithms; let's take a look at some equations that involve exponential calculations. Let's start with what might at first glance look like a complicated example, but don't worry - we'll solve it step-by-step and learn a few tricks along the way: \begin{equation}2y = 2x^{4} ( \frac{x^{2} + 2x^{2}}{x^{3}} ) \end{equation} First, let's deal with the fraction on the right side. The numerator of this fraction is x<sup>2</sup> + 2x<sup>2</sup> - so we're adding two exponential terms. When the terms you're adding (or subtracting) have the same exponential, you can simply add (or subtract) the coefficients. In this case, x<sup>2</sup> is the same as 1x<sup>2</sup>, which when added to 2x<sup>2</sup> gives us the result 3x<sup>2</sup>, so our equation now looks like this: \begin{equation}2y = 2x^{4} ( \frac{3x^{2}}{x^{3}} ) \end{equation} Now that we've condolidated the numerator, let's simplify the entire fraction by dividing the numerator by the denominator. 
When you divide exponential terms with the same variable, you simply divide the coefficients as you usually would and subtract the exponential of the denominator from the exponential of the numerator. In this case, we're dividing 3x<sup>2</sup> by 1x<sup>3</sup>: The coefficient 3 divided by 1 is 3, and the exponential 2 minus 3 is -1, so the result is 3x<sup>-1</sup>, making our equation: \begin{equation}2y = 2x^{4} ( 3x^{-1} ) \end{equation} So now we've got rid of the fraction on the right side, let's deal with the remaining multiplication. We need to multiply 3x<sup>-1</sup> by 2x<sup>4</sup>. Multiplication, is the opposite of division, so this time we'll multipy the coefficients and add the exponentials: 3 multiplied by 2 is 6, and -1 + 4 is 3, so the result is 6x<sup>3</sup>: \begin{equation}2y = 6x^{3} \end{equation} We're in the home stretch now, we just need to isolate y on the left side, and we can do that by dividing both sides by 2. Note that we're not dividing by an exponential, we simply need to divide the whole 6x<sup>3</sup> term by two; and half of 6 times x<sup>3</sup> is just 3 times x<sup>3</sup>: \begin{equation}y = 3x^{3} \end{equation} Now we have a solution that defines y in terms of x. We can use Python to plot the line created by this equation for a set of arbitrary *x* and *y* values: ```python import pandas as pd # Create a dataframe with an x column containing values from -10 to 10 df = pd.DataFrame ({'x': range(-10, 11)}) # Add a y column by applying the slope-intercept equation to x df['y'] = 3*df['x']**3 #Display the dataframe print(df) # Plot the line %matplotlib inline from matplotlib import pyplot as plt plt.plot(df.x, df.y, color="magenta") plt.xlabel('x') plt.ylabel('y') plt.grid() plt.axhline() plt.axvline() plt.show() ``` Note that the line is curved. This is symptomatic of an exponential equation: as values on one axis increase or decrease, the values on the other axis scale *exponentially* rather than *linearly*. Let's look at an example in which x is the exponential, not the base: \begin{equation}y = 2^{x} \end{equation} We can still plot this as a line: ```python import pandas as pd # Create a dataframe with an x column containing values from -10 to 10 df = pd.DataFrame ({'x': range(-10, 11)}) # Add a y column by applying the slope-intercept equation to x df['y'] = 2.0**df['x'] #Display the dataframe print(df) # Plot the line %matplotlib inline from matplotlib import pyplot as plt plt.plot(df.x, df.y, color="magenta") plt.xlabel('x') plt.ylabel('y') plt.grid() plt.axhline() plt.axvline() plt.show() ``` Note that when the exponential is a negative number, Python reports the result as 0. Actually, it's a very small fractional number, but because the base is positive the exponential number will always positive. Also, note the rate at which y increases as x increases - exponential growth can be be pretty dramatic. So what's the practical application of this? Well, let's suppose you deposit $100 in a bank account that earns 5&#37; interest per year. What would the balance of the account be in twenty years, assuming you don't deposit or withdraw any additional funds? 
To work this out, you could calculate the balance for each year: After the first year, the balance will be the initial deposit ($100) plus 5&#37; of that amount: \begin{equation}y1 = 100 + (100 \cdot 0.05) \end{equation} Another way of saying this is: \begin{equation}y1 = 100 \cdot 1.05 \end{equation} At the end of year two, the balance will be the year one balance plus 5&#37;: \begin{equation}y2 = 100 \cdot 1.05 \cdot 1.05 \end{equation} Note that the interest for year two, is the interest for year one multiplied by itself - in other words, squared. So another way of saying this is: \begin{equation}y2 = 100 \cdot 1.05^{2} \end{equation} It turns out, if we just use the year as the exponent, we can easily calculate the growth after twenty years like this: \begin{equation}y20 = 100 \cdot 1.05^{20} \end{equation} Let's apply this logic in Python to see how the account balance would grow over twenty years: ```python import pandas as pd # Create a dataframe with 20 years df = pd.DataFrame ({'Year': range(1, 21)}) # Calculate the balance for each year based on the exponential growth from interest df['Balance'] = 100 * (1.05**df['Year']) #Display the dataframe print(df) # Plot the line %matplotlib inline from matplotlib import pyplot as plt plt.plot(df.Year, df.Balance, color="green") plt.xlabel('Year') plt.ylabel('Balance') plt.show() ``` ```python ``` ```python ```
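As a small follow-on example tying the compound interest calculation back to logarithms, we can ask how long it takes the balance to double. Since the balance after n years is 100 times 1.05 raised to the power n, doubling requires 1.05 to the power n to equal 2, so n = log(2)/log(1.05):

```python
import math

# Years needed for the balance to double at 5% annual interest
years_to_double = math.log(2) / math.log(1.05)
print(years_to_double)                 # roughly 14.2 years

# Check: the balance formula gives about double the initial $100 deposit
print(100 * 1.05**years_to_double)
```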
07a2862422e06b9505a6e51f4e3e9b9e94883028
55,729
ipynb
Jupyter Notebook
Basics Of Algebra by Hiren/01-04-Exponentials Radicals and Logarithms.ipynb
serkin/Basic-Mathematics-for-Machine-Learning
ac0ae9fad82a9f0429c93e3da744af6e6d63e5ab
[ "Apache-2.0" ]
null
null
null
Basics Of Algebra by Hiren/01-04-Exponentials Radicals and Logarithms.ipynb
serkin/Basic-Mathematics-for-Machine-Learning
ac0ae9fad82a9f0429c93e3da744af6e6d63e5ab
[ "Apache-2.0" ]
null
null
null
Basics Of Algebra by Hiren/01-04-Exponentials Radicals and Logarithms.ipynb
serkin/Basic-Mathematics-for-Machine-Learning
ac0ae9fad82a9f0429c93e3da744af6e6d63e5ab
[ "Apache-2.0" ]
null
null
null
103.778399
14,720
0.834826
true
3,193
Qwen/Qwen-72B
1. YES 2. YES
0.96378
0.959762
0.924999
__label__eng_Latn
0.999242
0.987418
# Simulating readout noise on the Rigetti Quantum Virtual Machine © Copyright 2018, Rigetti Computing. $$ \newcommand{ket}[1]{\left|{#1}\right\rangle} \newcommand{bra}[1]{\left\langle {#1}\right|} \newcommand{tr}[1]{\mathrm{Tr}\,\left[ {#1}\right]} \newcommand{expect}[1]{\left\langle {#1} \right \rangle} $$ ## Theoretical Overview Qubit-Readout can be corrupted in a variety of ways. The two most relevant error mechanisms on the Rigetti QPU right now are: 1. Transmission line noise that makes a 0-state look like a 1-state or vice versa. We call this **classical readout bit-flip error**. This type of readout noise can be reduced by tailoring optimal readout pulses and using superconducting, quantum limited amplifiers to amplify the readout signal before it is corrupted by classical noise at the higher temperature stages of our cryostats. 2. T1 qubit decay during readout (our readout operations can take more than a µsecond unless they have been specially optimized), which leads to readout signals that initially behave like 1-states but then collapse to something resembling a 0-state. We will call this **T1-readout error**. This type of readout error can be reduced by achieving shorter readout pulses relative to the T1 time, i.e., one can try to reduce the readout pulse length, or increase the T1 time or both. ## Qubit measurements This section provides the necessary theoretical foundation for accurately modeling noisy quantum measurements on superconducting quantum processors. It relies on some of the abstractions (density matrices, Kraus maps) introduced in our notebook on [gate noise models](GateNoiseModels.ipynb). The most general type of measurement performed on a single qubit at a single time can be characterized by some set $\mathcal{O}$ of measurement outcomes, e.g., in the simplest case $\mathcal{O} = \{0, 1\}$, and some unnormalized quantum channels (see notebook on gate noise models) that encapsulate 1. the probability of that outcome 2. how the qubit state is affected conditional on the measurement outcome. Here the _outcome_ is understood as classical information that has been extracted from the quantum system. ### Projective, ideal measurement The simplest case that is usually taught in introductory quantum mechanics and quantum information courses are Born's rule and the projection postulate which state that there exist a complete set of orthogonal projection operators $$ P_{\mathcal{O}} := \{\Pi_x \text{ Projector }\mid x \in \mathcal{O}\}, $$ i.e., one for each measurement outcome. Any projection operator must satisfy $\Pi_x^\dagger = \Pi_x = \Pi_x^2$ and for an _orthogonal_ set of projectors any two members satisfy $$ \Pi_x\Pi_y = \delta_{xy} \Pi_x = \begin{cases} 0 & \text{ if } x \ne y \\ \Pi_x & \text{ if } x = y \end{cases} $$ and for a _complete_ set we additionally demand that $\sum_{x\in\mathcal{O}} \Pi_x = 1$. Following our introduction to gate noise, we write quantum states as density matrices as this is more general and in closer correspondence with classical probability theory. With these the probability of outcome $x$ is given by $p(x) = \tr{\Pi_x \rho \Pi_x} = \tr{\Pi_x^2 \rho} = \tr{\Pi_x \rho}$ and the post measurement state is $$ \rho_x = \frac{1}{p(x)} \Pi_x \rho \Pi_x, $$ which is the projection postulate applied to mixed states. 
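As a small numerical aside, the Born rule and projection postulate stated above are easy to check for a single qubit; the state $\ket{+} = (\ket{0} + \ket{1})/\sqrt{2}$ used below is an arbitrary choice made for illustration.

```python
import numpy as np

# Density matrix of the |+> state (chosen for illustration)
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

# Complete set of orthogonal projectors for the computational basis
projectors = {0: np.diag([1.0, 0.0]), 1: np.diag([0.0, 1.0])}

for x, Pi_x in projectors.items():
    p_x = np.trace(Pi_x @ rho).real        # Born rule: p(x) = Tr[Pi_x rho]
    rho_x = Pi_x @ rho @ Pi_x / p_x        # projection postulate
    print("p({}) = {:.2f}".format(x, p_x))
    print(rho_x)
```

Each outcome occurs with probability 1/2 and the post-measurement state collapses to the corresponding basis state, as the formulas predict.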
If we were a sloppy quantum programmer and accidentally erased the measurement outcome then our best guess for the post measurement state would be given by something that looks an awful lot like a Kraus map: $$ \rho_{\text{post measurement}} = \sum_{x\in\mathcal{O}} p(x) \rho_x = \sum_{x\in\mathcal{O}} \Pi_x \rho \Pi_x. $$ The completeness of the projector set ensures that the trace of the post measurement is still 1 and the Kraus map form of this expression ensures that $\rho_{\text{post measurement}}$ is a positive (semi-)definite operator. ### Classical readout bit-flip error Consider now the ideal measurement as above, but where the outcome $x$ is transmitted across a noisy classical channel that produces a final outcome $x'\in \mathcal{O}' = \{0', 1'\}$ according to some conditional probabilities $p(x'|x)$ that can be recorded in the _assignment probability matrix_ $$ P_{x'|x} = \begin{pmatrix} p(0 | 0) & p(0 | 1) \\ p(1 | 0) & p(1 | 1) \end{pmatrix} $$ Note that this matrix has only two independent parameters as each column must be a valid probability distribution, i.e. all elements are non-negative and each column sums to 1. This matrix allows us to obtain the probabilities $\mathbf{p}' := (p(x'=0), p(x'=1))^T$ from the original outcome probabilities $\mathbf{p} := (p(x=0), p(x=1))^T$ via $\mathbf{p}' = P_{x'|x}\mathbf{p}$. The difference relative to the ideal case above is that now an outcome $x' = 0$ does not necessarily imply that the post measurement state is truly $\Pi_{0} \rho \Pi_{0} / p(x=0)$. Instead, the post measurement state given a noisy outcome $x'$ must be \begin{align} \rho_{x'} & = \sum_{x\in \mathcal{O}} p(x|x') \rho_x \\ & = \sum_{x\in \mathcal{O}} p(x'|x)\frac{p(x)}{p(x')} \rho_x \\ & = \frac{1}{p(x')}\sum_{x\in \mathcal{O}} p(x'|x) \Pi_x \rho \Pi_x \end{align} where \begin{align} p(x') & = \sum_{x\in\mathcal{O}} p(x'|x) p(x) \\ & = \tr{\sum_{x\in \mathcal{O}} p(x'|x) \Pi_x \rho \Pi_x} \\ & = \tr{\rho \sum_{x\in \mathcal{O}} p(x'|x)\Pi_x} \\ & = \tr{\rho E_x'}. \end{align} where we have exploited the cyclical property of the trace $\tr{ABC}=\tr{BCA}$ and the projection property $\Pi_x^2 = \Pi_x$. This has allowed us to derive the noisy outcome probabilities from a set of positive operators $$ E_{x'} := \sum_{x\in \mathcal{O}} p(x'|x)\Pi_x \ge 0 $$ that must sum to 1: $$ \sum_{x'\in\mathcal{O}'} E_x' = \sum_{x\in\mathcal{O}}\underbrace{\left[\sum_{x'\in\mathcal{O}'} p(x'|x)\right]}_{1}\Pi_x = \sum_{x\in\mathcal{O}}\Pi_x = 1. $$ The above result is a type of generalized **Bayes' theorem** that is extremely useful for this type of (slightly) generalized measurement and the family of operators $\{E_{x'}| x' \in \mathcal{O}'\}$ whose expectations give the probabilities is called a **positive operator valued measure** (POVM). These operators are not generally orthogonal nor valid projection operators but they naturally arise in this scenario. This is not yet the most general type of measurement, but it will get us pretty far. ### How to model $T_1$ error T1 type errors fall outside our framework so far as they involve a scenario in which the _quantum state itself_ is corrupted during the measurement process in a way that potentially erases the pre-measurement information as opposed to a loss of purely classical information. 
The most appropriate framework for describing this is given by that of measurement instruments, but for the practical purpose of arriving at a relatively simple description, we propose describing this by a T1 damping Kraus map followed by the noisy readout process as described above. ### Further reading Chapter 3 of John Preskill's lecture notes http://www.theory.caltech.edu/people/preskill/ph229/notes/chap3.pdf ## How do I get started? 1. Come up with a good guess for your readout noise parameters $p(0|0)$ and $p(1|1)$, the off-diagonals then follow from the normalization of $P_{x'|x}$. If your assignment fidelity $F$ is given, and you assume that the classical bit flip noise is roughly symmetric, then a good approximation is to set $p(0|0)=p(1|1)=F$. 2. For your QUIL program `p`, and a qubit index `q` call: ``` p.define_noisy_readout(q, p00, p11) ``` where you should replace `p00` and `p11` with the assumed probabilities. ### Estimate $P_{x'|x}$ yourself! You can also run some simple experiments to estimate the assignment probability matrix directly from a QPU. **Scroll down for some examples!** ```python import numpy as np import matplotlib.pyplot as plt %matplotlib inline from pyquil.quil import Program, MEASURE, Pragma from pyquil.api import QVMConnection from pyquil.gates import I, X, RX, H, CNOT from pyquil.noise import (estimate_bitstring_probs, correct_bitstring_probs, bitstring_probs_to_z_moments, estimate_assignment_probs) DARK_TEAL = '#48737F' FUSCHIA = '#D6619E' BEIGE = '#EAE8C6' cxn = QVMConnection() ``` ## Example 1: Rabi sequence with noisy readout ```python %%time # number of angles num_theta = 101 # number of program executions trials = 200 thetas = np.linspace(0, 2*np.pi, num_theta) p00s = [1., 0.95, 0.9, 0.8] results_rabi = np.zeros((num_theta, len(p00s))) for jj, theta in enumerate(thetas): for kk, p00 in enumerate(p00s): cxn.random_seed = hash((jj, kk)) p = Program(RX(theta, 0)) ro = p.declare("ro") # assume symmetric noise p11 = p00 p.define_noisy_readout(0, p00=p00, p11=p00) p.measure(0, ro[0]) res = cxn.run(p, [0], trials=trials) results_rabi[jj, kk] = np.sum(res) ``` ```python plt.figure(figsize=(14, 6)) for jj, (p00, c) in enumerate(zip(p00s, [DARK_TEAL, FUSCHIA, "k", "gray"])): plt.plot(thetas, results_rabi[:, jj]/trials, c=c, label=r"$p(0|0)=p(1|1)={:g}$".format(p00)) plt.legend(loc="best") plt.xlim(*thetas[[0,-1]]) plt.ylim(-.1, 1.1) plt.grid(alpha=.5) plt.xlabel(r"RX angle $\theta$ [radian]", size=16) plt.ylabel(r"Excited state fraction $n_1/n_{\rm trials}$", size=16) plt.title("Effect of classical readout noise on Rabi contrast.", size=18) ``` ## Example 2: Estimate the assignment probabilities ### Estimate assignment probabilities for a perfect quantum computer ```python estimate_assignment_probs(0, 1000, cxn, Program()) ``` ### Re-Estimate assignment probabilities for an imperfect quantum computer ```python cxn.seed = None header0 = Program().define_noisy_readout(0, .85, .95) header1 = Program().define_noisy_readout(1, .8, .9) header2 = Program().define_noisy_readout(2, .9, .85) ap0 = estimate_assignment_probs(0, 100000, cxn, header0) ap1 = estimate_assignment_probs(1, 100000, cxn, header1) ap2 = estimate_assignment_probs(2, 100000, cxn, header2) ``` ```python print(ap0, ap1, ap2, sep="\n") ``` ## Example 3: Use `pyquil.noise.correct_bitstring_probs` to correct for noisy readout ### 3a) Correcting the Rabi signal from above ```python ap_last = np.array([[p00s[-1], 1 - p00s[-1]], [1 - p00s[-1], p00s[-1]]]) corrected_last_result = 
[correct_bitstring_probs([1-p, p], [ap_last])[1] for p in results_rabi[:, -1] / trials] ``` ```python plt.figure(figsize=(14, 6)) for jj, (p00, c) in enumerate(zip(p00s, [DARK_TEAL, FUSCHIA, "k", "gray"])): if jj not in [0, 3]: continue plt.plot(thetas, results_rabi[:, jj]/trials, c=c, label=r"$p(0|0)=p(1|1)={:g}$".format(p00), alpha=.3) plt.plot(thetas, corrected_last_result, c="red", label=r"Corrected $p(0|0)=p(1|1)={:g}$".format(p00s[-1])) plt.legend(loc="best") plt.xlim(*thetas[[0,-1]]) plt.ylim(-.1, 1.1) plt.grid(alpha=.5) plt.xlabel(r"RX angle $\theta$ [radian]", size=16) plt.ylabel(r"Excited state fraction $n_1/n_{\rm trials}$", size=16) plt.title("Corrected contrast", size=18) ``` **We find that the corrected signal is fairly noisy (and sometimes exceeds the allowed interval $[0,1]$) due to the overall very small number of samples $n=200$.** ### 3b) In this example we will create a GHZ state $\frac{1}{\sqrt{2}}\left[\left|000\right\rangle + \left|111\right\rangle \right]$ and measure its outcome probabilities with and without the above noise model. We will then see how the Pauli-Z moments that indicate the qubit correlations are corrupted (and corrected) using our API. ```python ghz_prog = Program(H(0), CNOT(0, 1), CNOT(1, 2), MEASURE(0, 0), MEASURE(1, 1), MEASURE(2, 2)) print(ghz_prog) results = cxn.run(ghz_prog, [0, 1, 2], trials=10000) ``` ```python header = header0 + header1 + header2 noisy_ghz = header + ghz_prog print(noisy_ghz) noisy_results = cxn.run(noisy_ghz, [0, 1, 2], trials=10000) ``` ### Uncorrupted probability for $\left|000\right\rangle$ and $\left|111\right\rangle$ ```python probs = estimate_bitstring_probs(results) probs[0, 0, 0], probs[1, 1, 1] ``` As expected the outcomes `000` and `111` each have roughly probability $1/2$. ### Corrupted probability for $\left|000\right\rangle$ and $\left|111\right\rangle$ ```python noisy_probs = estimate_bitstring_probs(noisy_results) noisy_probs[0, 0, 0], noisy_probs[1, 1, 1] ``` The noise-corrupted outcome probabilities deviate significantly from their ideal values! ### Corrected probability for $\left|000\right\rangle$ and $\left|111\right\rangle$ ```python corrected_probs = correct_bitstring_probs(noisy_probs, [ap0, ap1, ap2]) corrected_probs[0, 0, 0], corrected_probs[1, 1, 1] ``` The corrected outcome probabilities are much closer to the ideal value. ### Estimate $\langle Z_0^{j} Z_1^{k} Z_2^{\ell}\rangle$ for $jkl=100, 010, 001$ from non-noisy data *We expect these to all be very small* ```python zmoments = bitstring_probs_to_z_moments(probs) zmoments[1, 0, 0], zmoments[0, 1, 0], zmoments[0, 0, 1] ``` ### Estimate $\langle Z_0^{j} Z_1^{k} Z_2^{\ell}\rangle$ for $jkl=110, 011, 101$ from non-noisy data *We expect these to all be close to 1.* ```python zmoments[1, 1, 0], zmoments[0, 1, 1], zmoments[1, 0, 1] ``` ### Estimate $\langle Z_0^{j} Z_1^{k} Z_2^{\ell}\rangle$ for $jkl=100, 010, 001$ from noise-corrected data ```python zmoments_corr = bitstring_probs_to_z_moments(corrected_probs) zmoments_corr[1, 0, 0], zmoments_corr[0, 1, 0], zmoments_corr[0, 0, 1] ``` ### Estimate $\langle Z_0^{j} Z_1^{k} Z_2^{\ell}\rangle$ for $jkl=110, 011, 101$ from noise-corrected data ```python zmoments_corr[1, 1, 0], zmoments_corr[0, 1, 1], zmoments_corr[1, 0, 1] ``` ##### Overall the correction can restore the contrast in our multi-qubit observables, though we also see that the correction can lead to slightly non-physical expectations. This effect is reduced the more samples we take.
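To make the correction step concrete: for a single qubit, the classical bit-flip model described earlier simply multiplies the ideal outcome probabilities by the assignment matrix, so the correction amounts to inverting that matrix. The sketch below is a conceptual illustration with assumed fidelities, not the `pyquil` implementation of `correct_bitstring_probs`.

```python
import numpy as np

p00, p11 = 0.85, 0.95                        # assumed assignment fidelities
A = np.array([[p00, 1 - p11],
              [1 - p00, p11]])               # assignment matrix; columns sum to 1

p_ideal = np.array([0.5, 0.5])               # e.g. a qubit in an equal superposition
p_noisy = A @ p_ideal                        # corrupted outcome probabilities
p_corrected = np.linalg.solve(A, p_noisy)    # invert the readout model

print(p_noisy)       # biased away from [0.5, 0.5]
print(p_corrected)   # recovers [0.5, 0.5] up to numerical error
```

In practice the noisy probabilities are only estimated from a finite number of shots, which is why the corrected values can fluctuate or even leave the interval [0, 1], as seen above.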
31c8b2fb968b03a65e249bdbd9eb6d0b8b289698
19,758
ipynb
Jupyter Notebook
examples/ReadoutNoise.ipynb
oliverdutton/pyquil
027a3f6aecbd8206baf39189a0183ad0f85c262b
[ "Apache-2.0" ]
1
2021-01-30T18:47:34.000Z
2021-01-30T18:47:34.000Z
examples/ReadoutNoise.ipynb
abhayshivamtiwari/pyquil
854bf41349393beeeedad7a4481797ad78ae36a5
[ "Apache-2.0" ]
null
null
null
examples/ReadoutNoise.ipynb
abhayshivamtiwari/pyquil
854bf41349393beeeedad7a4481797ad78ae36a5
[ "Apache-2.0" ]
null
null
null
38.439689
565
0.595708
true
4,182
Qwen/Qwen-72B
1. YES 2. YES
0.766294
0.718594
0.550654
__label__eng_Latn
0.976972
0.117684
# Chapter 3 - Developing Templates Generating SoftMax distributions from normals could get quite tedious – for any sufficiently complicated shape, the number of normals to be used could be excessive. Let's add a layer of abstraction onto all our work. ##Polygon Construction We can put everything together from all we've talked about (shifting the distribution and generating weights from normals) to a more tangible process: generating a softmax distribution from a [polytope](http://en.wikipedia.org/wiki/Polytope). Let's motivate this with an example first. Imagine you worked at the Pentagon as an HRI researcher. One day, while pondering the nature of language, you happened to look out your window and spot an intruder. If you called a human security officer, you might say something like, "I see an intruder in front of the Heliport facade." We can use our SoftMax classifier to translate this same sentence for a security bot to understand. First, we'd need to divide the space in a similar way we did for the Pac-Man problem: As opposed to our Pac-Man problem, we can't assign weights by inspection. Instead, we'll use our weights-from-normals tactic to generate our weights for each class, and our shifted bias tactic to place those weights appropriately. ### Step 1: Define Polytope We can use a geometry library like [Shapely](http://toblerity.org/shapely/shapely.html) to define custom polytopes (in this case, a pentagon). For a quick way to get ideal pentagon vertex coordinates, you can either calculate them by hand or use [some online tools](http://www.mathsisfun.com/geometry/pentagon.html). Let's try a pentagon with the following coordinates (starting at the corner between the South Parking Entrance and the Heliport Facade): $$ \begin{align} P_1 &= (P_{1x}, P_{1y}) = (-1.90,-0.93) \\ P_2 &= (-1.40,1.45) \\ P_3 &= (1.03,1.71) \\ P_4 &= (2.02,-0.51) \\ P_5 &= (0.21,-2.15) \\ \end{align} $$ ### Step 2: Get Normals and Offsets We want to get six classes, so we'd like to specify $\frac{6(6-1)}{2} = 15$ normal vectors in order to use our transformation matrix $A$. But, we only have six unknowns, so we can reduce the size of our $A$ matrix. That is, we can use: $$ \mathbf{N} = \begin{bmatrix} \mathbf{n}_{0,1}^T \\ \mathbf{n}_{0,2}^T \\ \mathbf{n}_{0,3}^T \\ \mathbf{n}_{0,4}^T \\ \mathbf{n}_{0,5}^T \\ \mathbf{n}_{1,2}^T \\ \end{bmatrix} = \begin{bmatrix} -1 & 1 & 0 & 0 & 0 & 0 \\ -1 & 0 & 1 & 0 & 0 & 0 \\ -1 & 0 & 0 & 1 & 0 & 0 \\ -1 & 0 & 0 & 0 & 1 & 0 \\ -1 & 0 & 0 & 0 & 0 & 1 \\ 0 & -1 & 1 & 0 & 0 & 0 \\ \end{bmatrix} \begin{bmatrix} \mathbf{w}_{0}^T \\ \mathbf{w}_{1}^T \\ \mathbf{w}_{2}^T \\ \mathbf{w}_{3}^T \\ \mathbf{w}_{4}^T \\ \mathbf{w}_{5}^T \\ \end{bmatrix} = \mathbf{A}\mathbf{W} $$ Where $\mathbf{n}_{0,1}$ is the boundary between the interior and the South Parking Entrance, and so on. Except, we can be smarter about this. We only care about the *relative* weights, so why not define one class and solve for the weights of all other classes? 
Since we have one interior class with weights $w_0$, simply define $w_0 = \begin{bmatrix}0 & 0 \end{bmatrix}^T$ and $b_0 = 0$, leaving us with the following five equations and five unkowns: $$ \mathbf{N} = \begin{bmatrix} \mathbf{n}_{0,1}^T \\ \mathbf{n}_{0,2}^T \\ \mathbf{n}_{0,3}^T \\ \mathbf{n}_{0,4}^T \\ \mathbf{n}_{0,5}^T \\ \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ \end{bmatrix} \begin{bmatrix} \mathbf{w}_{1}^T \\ \mathbf{w}_{2}^T \\ \mathbf{w}_{3}^T \\ \mathbf{w}_{4}^T \\ \mathbf{w}_{5}^T \\ \end{bmatrix} = \mathbf{A}\mathbf{W} $$ Does it make sense that the weights we'd use correspond directly to the class boundaries of each class with some zero-weighted interior class? Yes: think of a class boundary as defined by its normal vector. Those normal vectors point exactly in the direction of greatest probability of a given class. Thus, we have: $$ \mathbf{n}_{0,i} = \mathbf{w}_{i} \; \forall i \in N $$ We have the normals, but solving for the class biases will require digging deeper. We need the equation for a normal fixed to the surface of the polytope (not simply its magnitude and direction!). In $\mathbb{R}^2$, we know that a line is uniquely defined by two points passing through it – a face's bounding vertices, for instance. This can help us find the normal vectors and offsets, giving us the weights and biases. Recall the specification of our hyperplanes in $\mathbb{R}^2$: \begin{align} 0 &= (\mathbf{w}_i - \mathbf{w}_j)^T\mathbf{x} + (b_i - b_j) \\ &= (w_{i,x} - w_{j,x})x + (w_{i,y} - w_{j,y})y + (b_i - b_j) \\ &= w_{i,x}x + w_{i,y}y + b_i \end{align} Where the last line assumes $j$ is the interior class with weights and a bias of 0. Since we have two points on this line segment (and any third point from a linear combination of the first two), we can use their $x$ and $y$ values to calculate our weights: \begin{equation}\label{eq:nullspace} \begin{bmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \\ \end{bmatrix} \begin{bmatrix} w_{i,x}\\ w_{i,y}\\ b_i \end{bmatrix} =\begin{bmatrix} 0\\ 0\\ 0 \end{bmatrix} \end{equation} The non-trivial solution to $\ref{eq:nullspace}$ can be found through various decomposition techniques. We use [Singular Value Decomposition](http://en.wikipedia.org/wiki/Singular_value_decomposition). In short, given any polygon, we can use its vertices to find the equations of the normals representing the class boundaries between the interior class and an exterior class for each face. Let's try this out and see if it works well. Note that this part, as well as several future ones, require long swaths of code to be fully explained. Rather than include the code in this document, you can always find it on [our Github](https://github.com/COHRINT/cops_and_robots/blob/master/src/cops_and_robots/robo_tools/fusion/softmax.py). ```python import numpy as np %matplotlib inline from cops_and_robots.robo_tools.fusion.softmax import SoftMax, make_regular_2D_poly poly = make_regular_2D_poly(5, max_r=2, theta=np.pi/3.1) labels = ['Interior', 'Mall Terrace Entrance', 'Heliport Facade', 'South Parking Entrance', 'Concourse Entrance', 'River Terrace Entrance', ] sm = SoftMax(poly=poly, class_labels=labels, resolution=0.1) sm.plot(plot_poly=True, plot_normals=False) ``` **NOTE: 3D Plotting currently borked** Well, that looks like the class boundaries are lining up just fine, but what about the probability distributions themselves? They seem a bit diffuse. 
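Before looking at steepness, here is a minimal sketch of the null-space computation described above for a single face: given two bounding vertices (the Pentagon's $P_1$ and $P_2$ from earlier) plus their midpoint as a third collinear point, the right singular vector associated with the smallest singular value yields that face's weights and bias. This only illustrates the idea for one face; the full implementation referenced on GitHub handles all faces and the coordinate shift.

```python
import numpy as np

# Two vertices bounding one face of the polygon, plus their midpoint as a
# third collinear point (values taken from the Pentagon example for illustration)
v1 = np.array([-1.90, -0.93])
v2 = np.array([-1.40, 1.45])
v3 = 0.5 * (v1 + v2)

M = np.array([[v1[0], v1[1], 1.0],
              [v2[0], v2[1], 1.0],
              [v3[0], v3[1], 1.0]])

# The right singular vector for the smallest singular value spans the null space
_, s, Vt = np.linalg.svd(M)
w_x, w_y, b = Vt[-1]
print("weights:", (w_x, w_y), "bias:", b)
print("residual:", M @ Vt[-1])   # should be ~0: both vertices lie on the boundary
```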
If you remember from Chapter 1, we can simply multiply the weights and biases by the same value to raise the steepness of each class. Let's try that: ```python steepness = 5 sm = SoftMax(poly=poly, class_labels=labels, resolution=0.1, steepness=5) sm.plot(plot_poly=True, plot_normals=False) ``` As expected, our boundaries stayed the same but our probabilities are less spread out. Looking good! However, we need to address a few assumptions. Most importantly, our interior class will not always be centered at the origin. Let's look at a shifted coordinate frame again, with the center of our polygon at $(-2,3)$: \begin{align} \mathbf{x}' &= \begin{bmatrix}x & y\end{bmatrix}^T + \begin{bmatrix}2 & -3\end{bmatrix}^T = \begin{bmatrix}x + 2 & y -3\end{bmatrix}^T \\ 0 &= (\mathbf{w}_i - \mathbf{w}_j)^T \mathbf{x}' + (b_i - b_j) \\ &= (\mathbf{w}_i - \mathbf{w}_j)^T \mathbf{x} + (\mathbf{w}_i - \mathbf{w}_j)^T \mathbf{b} + (b_i - b_j)\\ &= \mathbf{w}_i^T \mathbf{x} + \mathbf{w}_i^T \mathbf{b} + b_i\\ &= w_{i,x}x + w_{i,y}y + \begin{bmatrix}w_{i,x} & w_{i,y}\end{bmatrix}\begin{bmatrix}-2 \\ 3\end{bmatrix} + b_i\\ &= w_{i,x}x + w_{i,y}y + b_i^\prime\\ \end{align} But, remember that we need to perform the coordinate shift on the points we use as well: $$ \begin{bmatrix} x_1 + 2 & y_1 - 3 & 1 \\ x_2 + 2 & y_2 - 3 & 1 \\ x_3 + 2 & y_3 - 3 & 1 \\ \end{bmatrix} \begin{bmatrix} w_{i,x}\\ w_{i,y}\\ b_i^\prime \end{bmatrix} =\begin{bmatrix} 0\\ 0\\ 0 \end{bmatrix} $$ ```python poly = make_regular_2D_poly(5, max_r=2, theta=-np.pi/4, origin=(-2,3)) sm = SoftMax(poly=poly, class_labels=labels, resolution=0.1, steepness=5) sm.plot(plot_poly=True, plot_normals=False) ``` Great! We've successfully decomposed the space around the Pentagon, so we can tell the automatic security bots where the suspect is without having to pull out a map of the Pentagon and show them directly where on the map our intruder may be. That is, we've replaced communication of specific coordinates with the communication of 'zones' formed by spatial relationships to landmarks. However, the methodology build up to this point doesn't work for all cases. For instance: what happens if we want to use a non-convex shape to develop a SoftMax model? [Chapter 4](04_mms.ipynb) will dive into some of these pitfalls and how to get around them. 
For the time being, we'll simply show that, while we can't handle non-convex shapes yet, we can handle convex irregular shapes: ```python poly = Polygon([(-1.0, 0.0), (-1.0, 1.0), (-3.5, 3.3), (-3.0, -2.0), (-2.0, -2.0), ]) sm = SoftMax(poly=poly, steepness=6) sm.plot(plot_poly=True) ``` ```python from IPython.core.display import HTML # Borrowed style from Probabilistic Programming and Bayesian Methods for Hackers def css_styling(): styles = open("../styles/custom.css", "r").read() return HTML(styles) css_styling() ``` <link href='http://fonts.googleapis.com/css?family=Roboto:100,100italic,500,300,300italic,400' rel='stylesheet' type='text/css'> <style> div.cell{ width:800px; margin-left:16% !important; margin-right:auto; } h1, h2, h3, h4 { font-family: "Roboto", "wingdings", sans-serif; } h1{ font-weight: 500; } h2{ font-weight: 400; } h3{ font-weight: 300 !important; /* font-style: italic; */ } h4{ font-weight: 300 !important; font-style: italic; margin-top:12px; margin-bottom: 3px; } div.text_cell_render{ font-family: "HelveticaNeue-light", "Helvetica Neue", Arial, Helvetica, Geneva, sans-serif; line-height: 145%; font-size: 120%; width:800px; margin-left:auto; margin-right:auto; } .CodeMirror{ font-family: "Source Code Pro", source-code-pro,Consolas, monospace; } .prompt{ display: None; } .text_cell_render h5 { font-weight: 300; font-size: 22pt; color: #4057A1; font-style: italic; margin-bottom: .5em; margin-top: 0.5em; display: block; } .warning{ color: rgb( 240, 20, 20 ) } .rounded-box{ border-radius: 25px; border: 2px solid #8AC007; padding: 10px; padding-top: 0px; } </style>
27188c0b99a49b02081ae5750b6b1b11536715b8
847,579
ipynb
Jupyter Notebook
resources/notebooks/softmax/03_from_templates.ipynb
COHRINT/cops_and_robots
1df99caa1e38bde1b5ce2d04389bc232a68938d6
[ "Apache-2.0" ]
3
2016-01-19T17:54:51.000Z
2019-10-21T12:09:03.000Z
resources/notebooks/softmax/03_from_templates.ipynb
COHRINT/cops_and_robots
1df99caa1e38bde1b5ce2d04389bc232a68938d6
[ "Apache-2.0" ]
null
null
null
resources/notebooks/softmax/03_from_templates.ipynb
COHRINT/cops_and_robots
1df99caa1e38bde1b5ce2d04389bc232a68938d6
[ "Apache-2.0" ]
5
2015-02-19T02:53:24.000Z
2019-03-05T20:29:12.000Z
1,862.810989
214,532
0.946475
true
3,404
Qwen/Qwen-72B
1. YES 2. YES
0.882428
0.857768
0.756918
__label__eng_Latn
0.967133
0.596907
# 01 Molecular Geometry Analysis The purpose of this project is to introduce you to fundamental Python programming techniques in the context of a scientific problem, viz. the calculation of the internal coordinates (bond lengths, bond angles, dihedral angles), moments of inertia, and rotational constants of a polyatomic molecule. A concise set of instructions for this project may be found [here](https://github.com/CrawfordGroup/ProgrammingProjects/blob/master/Project%2301/project1-instructions.pdf). Original authors (Crawford, et al.) thank Dr. Yukio Yamaguchi of the University of Georgia for the original version of this project. Solution of this project could be {download}`solution_01.py`. ```python # Following os.chdir code is only for thebe (live code), since only in thebe default directory is /home/jovyan import os if os.getcwd().split("/")[-1] != "Project_01": os.chdir("source/Project_01") from solution_01 import Molecule as SolMol ``` ## Step 1: Read the Coordinate Data from Input The input to the program is the set of Cartesian coordinates of the atoms (**in bohr**) and their associated atomic numbers. A sample molecule (acetaldehyde) to use as input to the program is: 7 6 0.000000000000 0.000000000000 0.000000000000 6 0.000000000000 0.000000000000 2.845112131228 8 1.899115961744 0.000000000000 4.139062527233 1 -1.894048308506 0.000000000000 3.747688672216 1 1.942500819960 0.000000000000 -0.701145981971 1 -1.007295466862 -1.669971842687 -0.705916966833 1 -1.007295466862 1.669971842687 -0.705916966833 The first line above is the number of atoms (an integer), while the remaining lines contain the z-values (atom charges) and x-, y-, and z-coordinates of each atom (one integer followed by three double-precision floating-point numbers). This input file({download}`input/acetaldehyde.dat`) along with a few other test cases can be found in this repository in the input directory. After downloading the file to your computer (to a file called `geom.dat`, for example), you must open the file, read the data from each line into appropriate variables, and finally close the file. ````{admonition} Hint 1: Opening and closing the file :class: dropdown In Python, using `open` and `close` can easily handle file I/Os: ```python >>> f = open("input/acetaldehyde.dat", "r") >>> # handle file `f` >>> f.close() ``` However, there's more elegent way to do this by `with`: ```python >>> with open("input/acetaldehyde.dat", "r") as f: >>> pass # handle file `f` ``` The latter code is prefered. Using `with` would automatically close file, avoiding possibilities that programmer forget inserting a line to close file. ```` ````{admonition} Hint 2: Reading the numbers :class: dropdown Handling multiple words (decimals, floats as strings) is relatively easy in Python. For example, ```python >>> "6 0.000000000000 0.000000000000 0.000000000000".split() ['6', '0.000000000000', '0.000000000000', '0.000000000000'] ``` However, these are string types. One need to transform to proper types to enable calculation afterwards. ```` ````{admonition} Hint 3: Storing the z-values (atom charges) and the coordinates by NumPy :class: dropdown There are two ways in python. An intutive way to do this is generating a list to store z-values (as array), and list of list to store coordinates (as matrix). Notice do not use string type, for the sake of convenience of calculation tasks afterwards. It is also the most used way to store array/matrix in C++ and fortran90+. 
In FORTRAN77, one must use array to store matrix, along with it's leading dimension. Another way is to use numpy. For example, ```python >>> arr = np.array([ ... ["0.0", "0.1"], ... ["2.5", "4.6"], ... ], dtype=float) >>> arr array([[0. , 0.1], [2.5, 4.6]]) ``` Numpy can be extremely powerful, not only it wraps many array/matrix properties (raw data, dimension, type, etc.) into one class (`numpy.ndarray`), but also offers powerful operations in very concise way (matrix multiplication, SVD, tensor contraction, etc.). Learning some essentials of NumPy can prove to be useful. In this document, NumPy will be the default array/matrix/tensor storage structure. Other robust array/matrix/tensor engines could be PyTorch/TensorFlow, etc. ```` ````{admonition} Hint 4: Wrapping molecule information into class :class: dropdown What defines a molecule is somehow complicated. Not only it's atom charges, but also its coordinates. We can derive bond lengths, angles, etc. only from molecule information. We may term atom charges and coordinates as *Attribiutes*; they are variables and information describing molecule, like adjectives in gramma. We may term how we generate bond lengths, angles, etc. as *Methods*; they are functions, mapping from molecule information to some properties we desire, like verbs in gramma. An example to construct molecule class may be ```python class Molecule: def __init__(self): self.atom_charges = NotImplemented # type: np.ndarray self.atom_coords = NotImplemented # type: np.ndarray self.natm = NotImplemented # type: int def construct_from_dat_file(self, file_path: str): # Input: Read file from `file_path` # Attribute modification: Obtain atom charges to `self.atom_charges` # Attribute modification: Obtain coordinates to `self.atom_coords` # Attribute modification: Obtain atom numbers in molecule to `self.natm` raise NotImplementedError("Reader need to fill this method function!") ``` For convenience, redundant variable `self.natm` (number of atoms in molecule) is also included as *Attribute*. It can be easily obtained by counting length of atom charges array `self.atom_charges`. ```` ### Implementation - If `mole` is an instance of `Molecule` class, and it's acetaldehyde molecule, then - `mole.atom_charges` should be a NumPy array with shape (7, ) - `mole.atom_coords` should be a NumPy array with shape (7, 3) (we use *array* instead of matrix, just convention; like any array/matrix in TensorFlow/PyTorch is termed *tensor*) - `mole.natm` should be an integer Reader should fill all `NotImplementedError` in the following code (notice that `NotImplemented` is coded intentionally, not the same to `NotImplementedError`): ```python import numpy as np class Molecule: def __init__(self): self.atom_charges = NotImplemented # type: np.ndarray self.atom_coords = NotImplemented # type: np.ndarray self.natm = NotImplemented # type: int def construct_from_dat_file(self, file_path: str): # Input: Read file from `file_path` # Attribute modification: Obtain atom charges to `self.atom_charges` # Attribute modification: Obtain coordinates to `self.atom_coords` # Attribute modification: Obtain atom numbers in molecule to `self.natm` raise NotImplementedError("About 5~15 lines of code") ``` ### Solution Following is reference output. Reader may repeat the output results with his/hers own implementation. ```python sol_mole = SolMol() sol_mole.construct_from_dat_file("input/acetaldehyde.dat") print(sol_mole.atom_charges) print(sol_mole.atom_coords) print(sol_mole.natm) ``` [6 6 8 1 1 1 1] [[ 0. 0. 0. 
] [ 0. 0. 2.84511213] [ 1.89911596 0. 4.13906253] [-1.89404831 0. 3.74768867] [ 1.94250082 0. -0.70114598] [-1.00729547 -1.66997184 -0.70591697] [-1.00729547 1.66997184 -0.70591697]] 7 ## Step 2: Bond Lengths Calculate the interatomic distances using the expression: $$ R_{ij} = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2 + (z_i - z_j)^2} $$ where $x$, $y$, and $z$ are Cartesian coordinates and $i$ and $j$ denote atomic indices. Print all bond lengths $R_{ij}$ in Bohr. ````{admonition} Hint 1: Bond length method function signiture :class: dropdown Before writing program, we need to ask ourselfs, that what information do we need to definitely define bond length. Actually, we need the two atoms ($i, j$ as subscripts of $R_{ij}$), and molecule coordinates ($x_i, y_i, z_i$). So we define the program as ```python def bond_length(mole: Molecule, i: int, j: int) -> float: # Input: `i`, `j` index of molecule's atom # Output: Bond length from atom `i` to atom `j` raise NotImplementedError("Reader need to fill this method function!") ``` ```` ````{admonition} Hint 2: $L_2$ norm :class: dropdown Directly programming by the expression of $R_{ij}$ is very intutive. However, this expression can also be written as $$ R_{ij} = |\boldsymbol{r}_i - \boldsymbol{r}_j | := \Vert \boldsymbol{r}_i - \boldsymbol{r}_j \Vert_2 $$ where $\boldsymbol{r}_i = (x_i, y_i, z_i)$. $R_{ij}$ could be also termed as $L_2$ (Frobenius) norm of vector $\boldsymbol{r}_i - \boldsymbol{r}_j$, and $L_2$ norm of vector is implemented by NumPy (`numpy.linalg.norm`). See [NumPy API](https://numpy.org/doc/stable/reference/generated/numpy.linalg.norm.html) for its usage. Using `numpy.linalg.norm` is preferred, but not compulsory at this stage. If reader is not ready to use NumPy, python's standard library `math` is prefered to make sqrt calculation. For example, norm of vector $(0, 3, 4)$ could be calculated as ```python >>> import math >>> math.sqrt(0**2 + 3**2 + 4**2) 5.0 ``` ```` ````{admonition} Hint 3: Pretty printing :class: dropdown We may use python's default printing utilities for debugging: ```python >>> print(0, 1, 2.845112131228) 0 1 2.845112131228 ``` However, if we want to format printing string, we may use `str.format` method function. Reader may search `python string format` in search engines for more details. For example, if we want to print atom indexes in 3-char-long decimals and bond length in 10-char-long-5-after-decimal-point, we may write program as ```python >>> print("{:3d} - {:3d}: {:10.5f} Bohr".format(0, 1, 2.845112131228)) 0 - 1: 2.84511 Bohr ``` You may wish to implement another function `print_bond_length` to realize this feature. ```` ````{admonition} Hint 4: Loop structure :class: dropdown We need to print all bond length information in function `print_bond_length`, as Step 2 Hint 3 stated. Note that bond between atom 1 - atom 2 is the same to atom 2 - atom 1. So, reader should notice the range for each loop. In the following code, atom index `j` is always larger than `i`: ```python for i in range(mole.natm): for j in range(i + 1, mole.natm): raise NotImplementedError # Reader should complete printing code ``` ```` ````{admonition} Hint 5: Extending molecule class :class: dropdown We have discussed how to construct `Molecule` class in Step 1 Hint 4. However, we could regard `bond_length` function defined in Step 2 Hint 1 as method function of `Molecule` class. Though this is somehow not "pythonic", one can add method function (or attribute) after class construction, in fashion like C++. 
But we note that python is more flexable than C++ when handling methods in class, since in C++, although method implementation is not required when constructing class, one still need to declare signiture of function. In python, there's no need declaring function, so we just do implementations. In this case, if we have defined function `bond_length` in Step 2 Hint 1, we just make the following statement to add this function to `Molecule` class: ```python Molecule.bond_length = bond_length ``` We note that when calling `bond_length`, we need use ```python bond_length(mole, i, j) ``` but when calling `Molecule.bond_length`, we need use ```python mole.bond_length(i, j) ``` The first parameter `mole: Molecule` in `Molecule.bond_length`, like other (non-static) method functions, is equivalent to `self`. ```` ### Implementation Reader should fill all `NotImplementedError` in the following code: ```python def bond_length(mole: Molecule, i: int, j: int) -> float: # Input: `i`, `j` index of molecule's atom # Output: Bond length from atom `i` to atom `j` raise NotImplementedError("About 1~3 lines of code") Molecule.bond_length = bond_length ``` ```python def print_bond_length(mole: Molecule): # Usage: Print all bond length print("=== Bond Length ===") for i in range(mole.natm): for j in range(i + 1, mole.natm): raise NotImplementedError("Exactly 1 line of code") Molecule.print_bond_length = print_bond_length ``` ### Solution ```python sol_mole.print_bond_length() ``` === Bond Length === 0 - 1: 2.84511 Bohr 0 - 2: 4.55395 Bohr 0 - 3: 4.19912 Bohr 0 - 4: 2.06517 Bohr 0 - 5: 2.07407 Bohr 0 - 6: 2.07407 Bohr 1 - 2: 2.29803 Bohr 1 - 3: 2.09811 Bohr 1 - 4: 4.04342 Bohr 1 - 5: 4.05133 Bohr 1 - 6: 4.05133 Bohr 2 - 3: 3.81330 Bohr 2 - 4: 4.84040 Bohr 2 - 5: 5.89151 Bohr 2 - 6: 5.89151 Bohr 3 - 4: 5.87463 Bohr 3 - 5: 4.83836 Bohr 3 - 6: 4.83836 Bohr 4 - 5: 3.38971 Bohr 4 - 6: 3.38971 Bohr 5 - 6: 3.33994 Bohr ## Step 3: Bond Angles Calculate bond angles. For example, the angle, $\phi_{ijk}$, between atoms $i$-$j$-$k$, where $j$ is the central atom is given by: $$ \cos \phi_{ijk} = \boldsymbol{e}_{ji} \cdot \boldsymbol{e}_{jk} $$ where the $\boldsymbol{e}_{ij}$ are unit vectors between the atoms, e.g., $$ e_{ij}^x = - \frac{x_i - x_j}{R_{ij}}, \quad e_{ij}^y = - \frac{y_i - y_j}{R_{ij}}, \quad e_{ij}^z = - \frac{z_i - z_j}{R_{ij}} $$ Print all valid bond angles $\phi_{ijk}$ (with bond length $R_{ij} < 3 \, \mathsf{Bohr}$ and $R_{jk} < 3 \, \mathsf{Bohr}$, and $\phi_{ijk} > 90^\circ$) in degree. Original C++ project suggests storing unit vectors in memory. However, this python project will generate unit vectors on-the-fly. Also, the final result of this step is a little different to the original project. ````{admonition} Hint 1: Unit vector generation :class: dropdown From the definition above, we may state that $$ \boldsymbol{e}_{ij} = \frac{\boldsymbol{r}_j - \boldsymbol{r}_i}{| \boldsymbol{r}_j - \boldsymbol{r}_i |} $$ It is a combined opeation of vector substraction, $L_2$ norm, and vector divided by numerical value. In NumPy, they are quite easy to achieve. ```` ````{admonition} Hint 2: Vector inner product :class: dropdown We used inner product between $\boldsymbol{e}_{ji}$ and $\boldsymbol{e}_{jk}$ in the fomular above: $$ \cos \phi_{ijk} = \boldsymbol{e}_{ji} \cdot \boldsymbol{e}_{jk} = e_{ji}^x e_{jk}^x + e_{ji}^y e_{jk}^y + e_{ji}^z e_{jk}^z $$ In NumPy, there are multiple ways to achieve vector inner product. 
An intuitive way is using `numpy.inner` (see [NumPy API](https://numpy.org/doc/stable/reference/generated/numpy.inner.html)). For example, ```python >>> np.inner([-1, 1, 2], [3, 9, 5]) 16 ``` Other ways could be `numpy.dot`, `numpy.tensordot`, `numpy.einsum`, `np.multiply` with `np.sum`. ```` ````{admonition} Hint 3: Inverse trigonometric functions :class: dropdown We need to calculate the arccos value of the vector inner product. In NumPy, `numpy.arccos` can do this job. Without NumPy, there's still `math.acos`. Note that we need to print angles in degrees, so we need to multiply the result by $180 / \pi$. ```` ````{admonition} Hint 4: Loop structure :class: dropdown From the description, we should know that bond angle 1-2-3 is the same as 3-2-1. However, we regard 1-2-3 as a different bond angle from 2-1-3. If the bond length 1-2 is larger than 3 Bohr, then we skip all bond angles 1-2-\*; but if the bond length 1-3 is larger than 3 Bohr while 1-2 and 2-3 are both shorter than 3 Bohr, we still print bond angle 1-2-3. Utilizing that the sum of interior angles of a triangle is 180°, we know that for any three atoms, there is at most one angle larger than 90°. ```` ### Implementation Reader should fill all `NotImplementedError` in the following code: ```python def bond_unit_vector(mole: Molecule, i: int, j: int) -> np.ndarray: # Input: `i`, `j` index of molecule's atom # Output: Unit vector of bond from atom `i` to atom `j` raise NotImplementedError("About 1~5 lines of code") Molecule.bond_unit_vector = bond_unit_vector ``` ```python def bond_angle(mole: Molecule, i: int, j: int, k: int) -> float: # Input: `i`, `j`, `k` index of molecule's atom; where `j` is the central atom # Output: Bond angle for atoms `i`-`j`-`k` raise NotImplementedError("About 3~5 lines of code") Molecule.bond_angle = bond_angle ``` ```python def is_valid_angle(mole: Molecule, i: int, j: int, k: int) -> bool: # Input: `i`, `j`, `k` index of molecule's atom; where `j` is the central atom # Output: Test if `i`-`j`-`k` is a valid bond angle # if i != j != k # and if i-j and j-k bond length smaller than 3 Bohr, # and if angle i-j-k > 90 degree raise NotImplementedError("About 1~5 lines of code") Molecule.is_valid_angle = is_valid_angle ``` ```python def print_bond_angle(mole: Molecule): # Usage: Print all bond angle i-j-k which is considered as valid angle raise NotImplementedError("About 5~20 lines of code") Molecule.print_bond_angle = print_bond_angle ``` ### Solution For this solution, there could be different outputs, such as angle `0-1-2` being printed as `2-1-0`. Such minor differences can be tolerated. However, angle values, number of valid angles, etc. should be exactly the same. ```python sol_mole.print_bond_angle() ``` === Bond Angle === 0 - 1 - 2: 124.26831 Degree 0 - 1 - 3: 115.47934 Degree 1 - 0 - 4: 109.84706 Degree 1 - 0 - 5: 109.89841 Degree 1 - 0 - 6: 109.89841 Degree 4 - 0 - 5: 109.95368 Degree 4 - 0 - 6: 109.95368 Degree 5 - 0 - 6: 107.25265 Degree 2 - 1 - 3: 120.25235 Degree ## Step 4: Out-of-Plane Angles Calculate out-of-plane angles. For example, the angle $\theta_{ijkl}$ for atom $i$ out of the plane containing atoms $j$-$k$-$l$ (with $k$ as the central atom, connected to $i$) is given by: $$ \sin \theta_{ijkl} = \frac{\boldsymbol{e}_{kj} \times \boldsymbol{e}_{kl}}{\sin \phi_{jkl}} \cdot \boldsymbol{e}_{ki} $$ Print all valid out-of-plane angles (bond angle $\phi_{jkl}$ should be a valid bond angle, and bond length $R_{ki} < 3 \, \mathsf{Bohr}$) in degree.
Note that the situation $\sin \phi_{jkl} = 0$ can occasionally occur (especially for molecules such as HCN or H2C=C=CH2), so excluding this kind of situation is encouraged. ````{admonition} Hint 1: Cross products :class: dropdown The cross product of two arrays of length 3, $\boldsymbol{c} = \boldsymbol{a} \times \boldsymbol{b}$, is defined as $$ \begin{align} c_x &= a_y b_z - a_z b_y \\ c_y &= a_z b_x - a_x b_z \\ c_z &= a_x b_y - a_y b_x \end{align} $$ In NumPy, `numpy.cross` ([NumPy API](https://numpy.org/doc/stable/reference/generated/numpy.cross.html)) implements this operation. ```python >>> np.cross([5, 2, 3], [7, 8, 9]) array([ -6, -24, 26]) ``` ```` ````{admonition} Hint 2: Numerical precision :class: dropdown When implementing the arcsin operation, one may encounter a `RuntimeWarning` from NumPy: ```python >>> np.arcsin(-1.000000000000001) <stdin>:1: RuntimeWarning: invalid value encountered in arcsin nan ``` To avoid this kind of error, one needs to make sure the value passed into `numpy.arcsin` is strictly bounded between -1 and 1. One may also assert that the passed value is not too large or too small, in case of internal programming mistakes. ```python >>> theta = -1.000000000000001 >>> assert(np.abs(theta) < 1 + 1e-7) # make sure that theta is not too wrong, in case of programming mistakes >>> theta = np.sign(theta) if np.abs(theta) > 1 else theta # make theta bounded between -1 and 1 >>> np.arcsin(theta) -1.5707963267948966 ``` ```` ### Implementation Reader should fill all `NotImplementedError` in the following code: ```python def out_of_plane_angle(mole: Molecule, i: int, j: int, k: int, l: int) -> float: # Input: `i`, `j`, `k`, `l` index of molecule's atom; where `k` is the central atom, and angle is i - j-k-l # Output: Out-of-plane bond angle for atoms `i`-`j`-`k`-`l` raise NotImplementedError("About 3~8 lines of code") Molecule.out_of_plane_angle = out_of_plane_angle ``` ```python def is_valid_out_of_plane_angle(mole: Molecule, i: int, j: int, k: int, l: int) -> bool: # Input: `i`, `j`, `k`, `l` index of molecule's atom; where `k` is the central atom, and angle is i - j-k-l # Output: Test if `i`-`j`-`k`-`l` is a valid out-of-plane bond angle # if i != j != k != l # and if angle j-k-l is valid bond angle # and if i-k bond length smaller than 3 Bohr # and bond angle of j-k-l is not linear raise NotImplementedError("About 1~5 lines of code") Molecule.is_valid_out_of_plane_angle = is_valid_out_of_plane_angle ``` ```python def print_out_of_plane_angle(mole: Molecule): # Usage: Print all out-of-plane bond angle i-j-k-l which is considered as valid raise NotImplementedError("About 6~15 lines of code") Molecule.print_out_of_plane_angle = print_out_of_plane_angle ``` ### Solution Differences in output ordering may also be tolerated in this solution. ```python sol_mole.print_out_of_plane_angle() ``` === Out-of-Plane Angle === 3 - 0 - 1 - 2: 0.00000 Degree 2 - 0 - 1 - 3: 0.00000 Degree 5 - 1 - 0 - 4: -53.62632 Degree 6 - 1 - 0 - 4: 53.62632 Degree 4 - 1 - 0 - 5: 53.65153 Degree 6 - 1 - 0 - 5: -56.27711 Degree 4 - 1 - 0 - 6: -53.65153 Degree 5 - 1 - 0 - 6: 56.27711 Degree 1 - 4 - 0 - 5: -53.67878 Degree 6 - 4 - 0 - 5: 56.19462 Degree 1 - 4 - 0 - 6: 53.67878 Degree 5 - 4 - 0 - 6: -56.19462 Degree 1 - 5 - 0 - 6: -54.97706 Degree 4 - 5 - 0 - 6: 54.86999 Degree 0 - 2 - 1 - 3: 0.00000 Degree ## Step 5: Torsion/Dihedral Angles Calculate torsional angles.
For example, the torsional angle $\tau_{ijkl}$ for the atom connectivity $i$-$j$-$k$-$l$ is given by: $$ \cos \tau_{ijkl} = \frac{(\boldsymbol{e}_{ji} \times \boldsymbol{e}_{jk}) \cdot (\boldsymbol{e}_{kj} \times \boldsymbol{e}_{kl})}{\sin \phi_{ijk} \cdot \sin \phi_{jkl}} $$ Print all valid dihedral angles ($i, j, k$ and $j, k, l$ constructs a valid angle, where $j, k$ should be bonded or $R_{jk} < 3 \, \mathsf{Bohr}$) in degree. ### Implementation Reader should fill all `NotImplementedError` in the following code: ```python def dihedral_angle(mole: Molecule, i: int, j: int, k: int, l: int) -> float: # Input: `i`, `j`, `k`, `l` index of molecule's atom; where `k` is the central atom, and angle is i - j-k-l # Output: Dihedral angle for atoms `i`-`j`-`k`-`l` raise NotImplementedError("About 3~8 lines of code") Molecule.dihedral_angle = dihedral_angle ``` ```python def is_valid_dihedral_angle(mole: Molecule, i: int, j: int, k: int, l: int) -> bool: # Input: `i`, `j`, `k`, `l` index of molecule's atom; where `k` is the central atom, and angle is i - j-k-l # Output: Test if `i`-`j`-`k` is a valid dihedral bond angle # if i != j != k != l # and if i, j, k construct a valid bond angle (with j-k bonded) # and if j, k, l construct a valid bond angle (with j-k bonded) raise NotImplementedError("About 1~5 lines of code") Molecule.is_valid_dihedral_angle = is_valid_dihedral_angle ``` ```python def print_dihedral_angle(mole: Molecule): # Usage: Print all dihedral bond angle i-j-k-l which is considered as valid raise NotImplementedError("About 6~15 lines of code") Molecule.print_dihedral_angle = print_dihedral_angle ``` ### Solution Difference of output may also be tolerated in this solution. ```python sol_mole.print_dihedral_angle() ``` === Dihedral Angle === 2 - 0 - 1 - 3: 180.00000 Degree 2 - 0 - 1 - 4: 0.00000 Degree 2 - 0 - 1 - 5: 121.09759 Degree 2 - 0 - 1 - 6: 121.09759 Degree 3 - 0 - 1 - 4: 180.00000 Degree 3 - 0 - 1 - 5: 58.90241 Degree 3 - 0 - 1 - 6: 58.90241 Degree 4 - 0 - 1 - 5: 121.09759 Degree 4 - 0 - 1 - 6: 121.09759 Degree 5 - 0 - 1 - 6: 117.80483 Degree 1 - 0 - 4 - 5: 121.06434 Degree 1 - 0 - 4 - 6: 121.06434 Degree 5 - 0 - 4 - 6: 117.87131 Degree 1 - 0 - 5 - 4: 121.03351 Degree 1 - 0 - 5 - 6: 119.43447 Degree 4 - 0 - 5 - 6: 119.53201 Degree 1 - 0 - 6 - 4: 121.03351 Degree 1 - 0 - 6 - 5: 119.43447 Degree 4 - 0 - 6 - 5: 119.53201 Degree 0 - 1 - 2 - 3: 180.00000 Degree 0 - 1 - 3 - 2: 180.00000 Degree ## Step 6: Center-of-Mass Translation Find the center of mass of the molecule: $$ \boldsymbol{r}_\mathrm{c.m.} = \left. \sum_{i} m_i \boldsymbol{r}_i \right/ \sum_{i} m_i $$ or in its components $$ x_\mathrm{c.m.} = \left. \sum_i m_i x_i \right/ \sum_i m_i , \quad y_\mathrm{c.m.} = \left. \sum_i m_i y_i \right/ \sum_i m_i , \quad z_\mathrm{c.m.} = \left. \sum_i m_i z_i \right/ \sum_i m_i $$ where $m_i$ is the mass of atom $i$ and the summation runs over all atoms in the molecule. Translate the input coordinates of the molecule to the center-of-mass. We use isotopic-averaged atom mass, so the final results may be different to the original Crawford's project at $10^{-3} \, \mathsf{a.u.}$ level. ````{admonition} Hint 1: Atomic masses :class: dropdown An excellent source for atomic masses and other physical constants is the [National Institute of Standard and Technology (NIST) website](https://physics.nist.gov/cgi-bin/Compositions/stand_alone.pl). For python users, one may find package [molmass](https://github.com/cgohlke/molmass) useful. 
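If you prefer not to add a dependency, hard-coding the handful of isotopic-averaged masses needed for these test molecules also works; the rounded values below are standard atomic weights and are meant as an illustration only, not as the reference data used in the solution.

```python
import numpy as np

# Illustrative isotopic-averaged atomic masses in amu, keyed by atomic number
ATOMIC_MASSES = {1: 1.008, 6: 12.011, 8: 15.999}

def masses_of(mole) -> np.ndarray:
    # Map the molecule's atom charges (atomic numbers) to their masses
    return np.array([ATOMIC_MASSES[int(z)] for z in mole.atom_charges])
```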
```` ### Implementation ```python def center_of_mass(mole: Molecule) -> np.ndarray or list: # Output: Center of mass for this molecule raise NotImplementedError("About 5~10 lines of code") Molecule.center_of_mass = center_of_mass ``` ### Solution ```python sol_mole.center_of_mass() ``` array([0.64475065, 0. , 2.31636762]) ## Step 7: Principal Moments of Inertia Calculate elements of the [moment of inertia tensor](http://en.wikipedia.org/wiki/Moment_of_inertia_tensor). Diagonal: $$ I_{xx} = \sum_i m_i (\tilde y_i^2 + \tilde z_i^2), \quad I_{yy} = \sum_i m_i (\tilde z_i^2 + \tilde x_i^2) \, \quad I_{zz} = \sum_i m_i (\tilde x_i^2 + \tilde y_i^2) $$ Off-diagonal (add a negative sign): $$ I_{xy} = I_{yx} = - \sum_i m_i \tilde x_i \tilde y_i, \quad I_{yz} = I_{zy} = - \sum_i m_i \tilde y_i \tilde z_i, \quad I_{zx} = I_{xz} = - \sum_i m_i \tilde z_i \tilde x_i $$ Calculate eigenvalue of inertia tensor to obtain the principal moments of inertia: $$ I_a \leqslant I_b \leqslant I_c $$ Report the moments of inertia in $\mathsf{amu \cdot Bohr^2}$, $\mathsf{amu \cdot {\buildrel_{\circ} \over{\mathsf{A}}}{}^2}$, $\mathsf{g \cdot cm^2}$. Note that $\tilde x_i, \tilde y_i, \tilde z_i$ in the equations above are translated by center of mass: $$ \tilde x_i = x_i - x_\mathrm{c.m.}, \quad \tilde y_i = y_i - y_\mathrm{c.m.}, \quad \tilde z_i = z_i - z_\mathrm{c.m.} $$ Based on the relative values of the principal moments, determine the [molecular rotor type](http://en.wikipedia.org/wiki/Rotational_spectroscopy). Criterion of equality and far larger/smaller could be set to $10^{-4} \mathsf{amu \cdot Bohr^2}$. - Spherical: $I_a = I_b = I_c$ - Linear: $I_a \ll I_b = I_c$ - Prolate: $I_a < I_b = I_c$ - Oblate: $I_a = I_b < I_c$ - Asymmetric: $I_a \neq I_b \neq I_c$ ````{admonition} Hint 1: Eigenvalue of symmetric matrix :class: dropdown Eigenvalue $\lambda$ of moments of inertia matrix can be obtained as $$ \left\vert \begin{matrix} I_{xx} - \lambda & I_{xy} & I_{xz} \\ I_{yx} & I_{yy} - \lambda & I_{yz} \\ I_{zx} & I_{zy} & I_{zz} - \lambda \end{matrix} \right\vert = 0 $$ This leads to a cubic equation in $\lambda$, which one can solve directly. However, solving symmetric matrix by program is usually called *diagonalization*. This can be done by `numpy.linalg.eigh` which returns eigenvalues and eigenvectors ([NumPy API](https://numpy.org/doc/stable/reference/generated/numpy.linalg.eigh.html)) or `numpy.linalg.eigvalsh` which only returns eigenvalues ([NumPy API](https://numpy.org/doc/stable/reference/generated/numpy.linalg.eigvalsh.html)). ```` ````{admonition} Hint 2: Physical constants :class: dropdown Lots of useful and precise physical constants are available at the [National Institute of Standards and Technology website](https://physics.nist.gov/cuu/Constants/index.html?/codata86.html). For python, `scipy.constants` ([SciPy API](https://docs.scipy.org/doc/scipy/reference/constants.html)) offers lots of constants and unit conversion utilities. 
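For example, a few of the values needed in Steps 7 and 8 can be pulled directly (a small illustration; double-check the exact keys available in the SciPy version you have installed):

```python
import scipy.constants as const

const.h                                            # Planck constant [J s]
const.c                                            # speed of light [m / s]
const.physical_constants["atomic mass constant"]   # (value, unit, uncertainty)
const.value("Bohr radius")                         # Bohr radius [m]
```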
```` ### Implementation Reader should fill all `NotImplementedError` in the following code: ```python def moment_of_inertia(mole: Molecule) -> np.ndarray or list: # Output: Principal moments of inertia raise NotImplementedError("About 4~25 lines of code") Molecule.moment_of_inertia = moment_of_inertia ``` ```python def print_moment_of_interia(mole: Molecule): # Output: Print moments of inertia in amu bohr2, amu Å2, and g cm2 raise NotImplementedError("About 3~15 lines of code") Molecule.print_moment_of_interia = print_moment_of_interia ``` ```python def type_of_moment_of_interia(mole: Molecule) -> str: # Output: Determine which type of rotor the moments of inertia correspond to raise NotImplementedError("About 7~11 lines of code") Molecule.type_of_moment_of_interia = type_of_moment_of_interia ``` ### Solution Values of the principal moments of inertia: ```python sol_mole.print_moment_of_interia() ``` In amu bohr^2: 3.19751875e+01 1.78736281e+02 1.99467661e+02 In amu angstrom^2: 8.95396445e+00 5.00512562e+01 5.58566339e+01 In g cm^2: 1.48684078e-39 8.31120663e-39 9.27521227e-39 Type of inertia: ```python sol_mole.type_of_moment_of_interia() ``` 'Asymmetric' ## Step 8: Rotational Constants Compute the rotational constants in $\mathsf{cm}^{-1}$ and $\mathsf{MHz}$: $$ A = \frac{h}{8 \pi^2 c I_a} , \quad B = \frac{h}{8 \pi^2 c I_b} , \quad C = \frac{h}{8 \pi^2 c I_c} $$ where $A \geqslant B \geqslant C$. ### Implementation Reader should fill all `NotImplementedError` in the following code: ```python def rotational_constants(mole: Molecule) -> np.ndarray or list: # Output: Rotational constants in cm^-1 raise NotImplementedError("About 1~10 lines of code") Molecule.rotational_constants = rotational_constants ``` ### Solution Compute the rotational constants in $\mathsf{cm}^{-1}$ ```python sol_mole.rotational_constants() ``` array([1.88270004, 0.33680731, 0.30180174]) ## Test Cases **Acetaldehyde** - Input file: {download}`input/acetaldehyde.dat` ```python sol_mole = SolMol() sol_mole.construct_from_dat_file("input/acetaldehyde.dat") sol_mole.print_solution_01() ``` === Atom Charges === [6 6 8 1 1 1 1] === Coordinates === [[ 0. 0. 0. ] [ 0. 0. 2.84511213] [ 1.89911596 0. 4.13906253] [-1.89404831 0. 3.74768867] [ 1.94250082 0.
-0.70114598] [-1.00729547 -1.66997184 -0.70591697] [-1.00729547 1.66997184 -0.70591697]] === Bond Length === 0 - 1: 2.84511 Bohr 0 - 2: 4.55395 Bohr 0 - 3: 4.19912 Bohr 0 - 4: 2.06517 Bohr 0 - 5: 2.07407 Bohr 0 - 6: 2.07407 Bohr 1 - 2: 2.29803 Bohr 1 - 3: 2.09811 Bohr 1 - 4: 4.04342 Bohr 1 - 5: 4.05133 Bohr 1 - 6: 4.05133 Bohr 2 - 3: 3.81330 Bohr 2 - 4: 4.84040 Bohr 2 - 5: 5.89151 Bohr 2 - 6: 5.89151 Bohr 3 - 4: 5.87463 Bohr 3 - 5: 4.83836 Bohr 3 - 6: 4.83836 Bohr 4 - 5: 3.38971 Bohr 4 - 6: 3.38971 Bohr 5 - 6: 3.33994 Bohr === Bond Angle === 0 - 1 - 2: 124.26831 Degree 0 - 1 - 3: 115.47934 Degree 1 - 0 - 4: 109.84706 Degree 1 - 0 - 5: 109.89841 Degree 1 - 0 - 6: 109.89841 Degree 4 - 0 - 5: 109.95368 Degree 4 - 0 - 6: 109.95368 Degree 5 - 0 - 6: 107.25265 Degree 2 - 1 - 3: 120.25235 Degree === Out-of-Plane Angle === 3 - 0 - 1 - 2: 0.00000 Degree 2 - 0 - 1 - 3: 0.00000 Degree 5 - 1 - 0 - 4: -53.62632 Degree 6 - 1 - 0 - 4: 53.62632 Degree 4 - 1 - 0 - 5: 53.65153 Degree 6 - 1 - 0 - 5: -56.27711 Degree 4 - 1 - 0 - 6: -53.65153 Degree 5 - 1 - 0 - 6: 56.27711 Degree 1 - 4 - 0 - 5: -53.67878 Degree 6 - 4 - 0 - 5: 56.19462 Degree 1 - 4 - 0 - 6: 53.67878 Degree 5 - 4 - 0 - 6: -56.19462 Degree 1 - 5 - 0 - 6: -54.97706 Degree 4 - 5 - 0 - 6: 54.86999 Degree 0 - 2 - 1 - 3: 0.00000 Degree === Dihedral Angle === 2 - 0 - 1 - 3: 180.00000 Degree 2 - 0 - 1 - 4: 0.00000 Degree 2 - 0 - 1 - 5: 121.09759 Degree 2 - 0 - 1 - 6: 121.09759 Degree 3 - 0 - 1 - 4: 180.00000 Degree 3 - 0 - 1 - 5: 58.90241 Degree 3 - 0 - 1 - 6: 58.90241 Degree 4 - 0 - 1 - 5: 121.09759 Degree 4 - 0 - 1 - 6: 121.09759 Degree 5 - 0 - 1 - 6: 117.80483 Degree 1 - 0 - 4 - 5: 121.06434 Degree 1 - 0 - 4 - 6: 121.06434 Degree 5 - 0 - 4 - 6: 117.87131 Degree 1 - 0 - 5 - 4: 121.03351 Degree 1 - 0 - 5 - 6: 119.43447 Degree 4 - 0 - 5 - 6: 119.53201 Degree 1 - 0 - 6 - 4: 121.03351 Degree 1 - 0 - 6 - 5: 119.43447 Degree 4 - 0 - 6 - 5: 119.53201 Degree 0 - 1 - 2 - 3: 180.00000 Degree 0 - 1 - 3 - 2: 180.00000 Degree === Center of Mass === 0.64475 0.00000 2.31637 === Moments of Inertia === In amu bohr^2: 3.19751875e+01 1.78736281e+02 1.99467661e+02 In amu angstrom^2: 8.95396445e+00 5.00512562e+01 5.58566339e+01 In g cm^2: 1.48684078e-39 8.31120663e-39 9.27521227e-39 Type: Asymmetric === Rotational Constants === In cm^-1: 1.88270 0.33681 0.30180 In MHz: 56441.92712 10097.22927 9047.78849 **Allene** - Input file: {download}`input/allene.dat` ```python sol_mole = SolMol() sol_mole.construct_from_dat_file("input/allene.dat") sol_mole.print_solution_01() ``` === Atom Charges === [6 6 6 1 1 1 1] === Coordinates === [[ 0. 0. 1.88972599] [ 2.55113008 0. 1.88972599] [-2.55113008 0. 
1.88972599] [ 3.49599308 1.15721611 0.73250988] [ 3.49599308 -1.15721611 3.0469421 ] [-3.49599308 -1.15721611 0.73250988] [-3.49599308 1.15721611 3.0469421 ]] === Bond Length === 0 - 1: 2.55113 Bohr 0 - 2: 2.55113 Bohr 0 - 3: 3.86009 Bohr 0 - 4: 3.86009 Bohr 0 - 5: 3.86009 Bohr 0 - 6: 3.86009 Bohr 1 - 2: 5.10226 Bohr 1 - 3: 1.88973 Bohr 1 - 4: 1.88973 Bohr 1 - 5: 6.26466 Bohr 1 - 6: 6.26466 Bohr 2 - 3: 6.26466 Bohr 2 - 4: 6.26466 Bohr 2 - 5: 1.88973 Bohr 2 - 6: 1.88973 Bohr 3 - 4: 3.27310 Bohr 3 - 5: 7.36508 Bohr 3 - 6: 7.36508 Bohr 4 - 5: 7.36508 Bohr 4 - 6: 7.36508 Bohr 5 - 6: 3.27310 Bohr === Bond Angle === 1 - 0 - 2: 180.00000 Degree 0 - 1 - 3: 120.00000 Degree 0 - 1 - 4: 120.00000 Degree 0 - 2 - 5: 120.00000 Degree 0 - 2 - 6: 120.00000 Degree 3 - 1 - 4: 120.00000 Degree 5 - 2 - 6: 120.00000 Degree === Out-of-Plane Angle === 4 - 0 - 1 - 3: 0.00000 Degree 3 - 0 - 1 - 4: 0.00000 Degree 6 - 0 - 2 - 5: 0.00000 Degree 5 - 0 - 2 - 6: 0.00000 Degree 0 - 3 - 1 - 4: 0.00000 Degree 0 - 5 - 2 - 6: 0.00000 Degree === Dihedral Angle === 2 - 0 - 1 - 3: 90.00000 Degree 2 - 0 - 1 - 4: 90.00000 Degree 3 - 0 - 1 - 4: 180.00000 Degree 1 - 0 - 2 - 5: 90.00000 Degree 1 - 0 - 2 - 6: 90.00000 Degree 5 - 0 - 2 - 6: 180.00000 Degree 0 - 1 - 3 - 4: 180.00000 Degree 0 - 1 - 4 - 3: 180.00000 Degree 0 - 2 - 5 - 6: 180.00000 Degree 0 - 2 - 6 - 5: 180.00000 Degree === Center of Mass === 0.00000 0.00000 1.88973 === Moments of Inertia === In amu bohr^2: 1.07982664e+01 2.11013373e+02 2.11013373e+02 In amu angstrom^2: 3.02382256e+00 5.90897626e+01 5.90897626e+01 In g cm^2: 5.02117550e-40 9.81208592e-39 9.81208592e-39 Type: Prolate === Rotational Constants === In cm^-1: 5.57494 0.28529 0.28529 In MHz: 167132.49483 8552.73379 8552.73379 **Benzene** - Input file: {download}`input/benzene.dat` ```python sol_mole = SolMol() sol_mole.construct_from_dat_file("input/benzene.dat") sol_mole.print_solution_01() ``` === Atom Charges === [6 6 6 6 6 6 1 1 1 1 1 1] === Coordinates === [[ 0.00000000e+00 0.00000000e+00 0.00000000e+00] [ 0.00000000e+00 0.00000000e+00 2.61644846e+00] [ 2.26591048e+00 0.00000000e+00 3.92467249e+00] [ 4.53182108e+00 0.00000000e+00 2.61644839e+00] [ 4.53182108e+00 0.00000000e+00 -1.32281000e-07] [ 2.26591069e+00 0.00000000e+00 -1.30822415e+00] [ 2.26591069e+00 -0.00000000e+00 -3.33421820e+00] [ 6.28638327e+00 0.00000000e+00 -1.01299708e+00] [ 6.28638324e+00 0.00000000e+00 3.62944532e+00] [ 2.26591048e+00 0.00000000e+00 5.95066654e+00] [-1.75456219e+00 0.00000000e+00 3.62944541e+00] [-1.75456216e+00 -0.00000000e+00 -1.01299693e+00]] === Bond Length === 0 - 1: 2.61645 Bohr 0 - 2: 4.53182 Bohr 0 - 3: 5.23290 Bohr 0 - 4: 4.53182 Bohr 0 - 5: 2.61645 Bohr 0 - 6: 4.03130 Bohr 0 - 7: 6.36748 Bohr 0 - 8: 7.25889 Bohr 0 - 9: 6.36748 Bohr 0 - 10: 4.03130 Bohr 0 - 11: 2.02599 Bohr 1 - 2: 2.61645 Bohr 1 - 3: 4.53182 Bohr 1 - 4: 5.23290 Bohr 1 - 5: 4.53182 Bohr 1 - 6: 6.36748 Bohr 1 - 7: 7.25889 Bohr 1 - 8: 6.36748 Bohr 1 - 9: 4.03130 Bohr 1 - 10: 2.02599 Bohr 1 - 11: 4.03130 Bohr 2 - 3: 2.61645 Bohr 2 - 4: 4.53182 Bohr 2 - 5: 5.23290 Bohr 2 - 6: 7.25889 Bohr 2 - 7: 6.36748 Bohr 2 - 8: 4.03130 Bohr 2 - 9: 2.02599 Bohr 2 - 10: 4.03130 Bohr 2 - 11: 6.36748 Bohr 3 - 4: 2.61645 Bohr 3 - 5: 4.53182 Bohr 3 - 6: 6.36748 Bohr 3 - 7: 4.03130 Bohr 3 - 8: 2.02599 Bohr 3 - 9: 4.03130 Bohr 3 - 10: 6.36748 Bohr 3 - 11: 7.25889 Bohr 4 - 5: 2.61645 Bohr 4 - 6: 4.03130 Bohr 4 - 7: 2.02599 Bohr 4 - 8: 4.03130 Bohr 4 - 9: 6.36748 Bohr 4 - 10: 7.25889 Bohr 4 - 11: 6.36748 Bohr 5 - 6: 2.02599 Bohr 5 - 7: 4.03130 Bohr 5 - 8: 6.36748 Bohr 5 - 
9: 7.25889 Bohr 5 - 10: 6.36748 Bohr 5 - 11: 4.03130 Bohr 6 - 7: 4.64244 Bohr 6 - 8: 8.04095 Bohr 6 - 9: 9.28488 Bohr 6 - 10: 8.04095 Bohr 6 - 11: 4.64244 Bohr 7 - 8: 4.64244 Bohr 7 - 9: 8.04095 Bohr 7 - 10: 9.28488 Bohr 7 - 11: 8.04095 Bohr 8 - 9: 4.64244 Bohr 8 - 10: 8.04095 Bohr 8 - 11: 9.28488 Bohr 9 - 10: 4.64244 Bohr 9 - 11: 8.04095 Bohr 10 - 11: 4.64244 Bohr === Bond Angle === 0 - 1 - 2: 120.00000 Degree 1 - 0 - 5: 120.00000 Degree 0 - 1 - 10: 120.00000 Degree 1 - 0 - 11: 120.00000 Degree 0 - 5 - 4: 120.00000 Degree 0 - 5 - 6: 120.00000 Degree 5 - 0 - 11: 120.00000 Degree 1 - 2 - 3: 120.00000 Degree 1 - 2 - 9: 120.00000 Degree 2 - 1 - 10: 120.00000 Degree 2 - 3 - 4: 120.00000 Degree 2 - 3 - 8: 120.00000 Degree 3 - 2 - 9: 120.00000 Degree 3 - 4 - 5: 120.00000 Degree 3 - 4 - 7: 120.00000 Degree 4 - 3 - 8: 120.00000 Degree 4 - 5 - 6: 120.00000 Degree 5 - 4 - 7: 120.00000 Degree === Out-of-Plane Angle === 10 - 0 - 1 - 2: 0.00000 Degree 11 - 1 - 0 - 5: 0.00000 Degree 2 - 0 - 1 - 10: 0.00000 Degree 5 - 1 - 0 - 11: 0.00000 Degree 6 - 0 - 5 - 4: 0.00000 Degree 4 - 0 - 5 - 6: 0.00000 Degree 1 - 5 - 0 - 11: 0.00000 Degree 9 - 1 - 2 - 3: 0.00000 Degree 3 - 1 - 2 - 9: 0.00000 Degree 0 - 2 - 1 - 10: 0.00000 Degree 8 - 2 - 3 - 4: 0.00000 Degree 4 - 2 - 3 - 8: 0.00000 Degree 1 - 3 - 2 - 9: 0.00000 Degree 7 - 3 - 4 - 5: 0.00000 Degree 5 - 3 - 4 - 7: 0.00000 Degree 2 - 4 - 3 - 8: 0.00000 Degree 0 - 4 - 5 - 6: 0.00000 Degree 3 - 5 - 4 - 7: 0.00000 Degree === Dihedral Angle === 2 - 0 - 1 - 5: 0.00000 Degree 2 - 0 - 1 - 10: 180.00000 Degree 2 - 0 - 1 - 11: 180.00000 Degree 5 - 0 - 1 - 10: 180.00000 Degree 5 - 0 - 1 - 11: 180.00000 Degree 10 - 0 - 1 - 11: 0.00000 Degree 1 - 0 - 5 - 4: 0.00000 Degree 1 - 0 - 5 - 6: 180.00000 Degree 1 - 0 - 5 - 11: 180.00000 Degree 4 - 0 - 5 - 6: 180.00000 Degree 4 - 0 - 5 - 11: 180.00000 Degree 6 - 0 - 5 - 11: 0.00000 Degree 1 - 0 - 11 - 5: 180.00000 Degree 0 - 1 - 2 - 3: 0.00000 Degree 0 - 1 - 2 - 9: 180.00000 Degree 0 - 1 - 2 - 10: 180.00000 Degree 3 - 1 - 2 - 9: 180.00000 Degree 3 - 1 - 2 - 10: 180.00000 Degree 9 - 1 - 2 - 10: 0.00000 Degree 0 - 1 - 10 - 2: 180.00000 Degree 1 - 2 - 3 - 4: 0.00000 Degree 1 - 2 - 3 - 8: 180.00000 Degree 1 - 2 - 3 - 9: 180.00000 Degree 4 - 2 - 3 - 8: 180.00000 Degree 4 - 2 - 3 - 9: 180.00000 Degree 8 - 2 - 3 - 9: 0.00000 Degree 1 - 2 - 9 - 3: 180.00000 Degree 2 - 3 - 4 - 5: 0.00000 Degree 2 - 3 - 4 - 7: 180.00000 Degree 2 - 3 - 4 - 8: 180.00000 Degree 5 - 3 - 4 - 7: 180.00000 Degree 5 - 3 - 4 - 8: 180.00000 Degree 7 - 3 - 4 - 8: 0.00000 Degree 2 - 3 - 8 - 4: 180.00000 Degree 0 - 4 - 5 - 3: 0.00000 Degree 0 - 4 - 5 - 6: 180.00000 Degree 0 - 4 - 5 - 7: 180.00000 Degree 3 - 4 - 5 - 6: 180.00000 Degree 3 - 4 - 5 - 7: 180.00000 Degree 6 - 4 - 5 - 7: 0.00000 Degree 3 - 4 - 7 - 5: 180.00000 Degree 0 - 5 - 6 - 4: 180.00000 Degree === Center of Mass === 2.26591 0.00000 1.30822 === Moments of Inertia === In amu bohr^2: 3.11839639e+02 3.11839704e+02 6.23679344e+02 In amu angstrom^2: 8.73239929e+01 8.73240110e+01 1.74648004e+02 In g cm^2: 1.45004902e-38 1.45004932e-38 2.90009833e-38 Type: Oblate === Rotational Constants === In cm^-1: 0.19305 0.19305 0.09652 In MHz: 5787.40152 5787.40032 2893.70046 ## References - Wilson, E. B.; Decius, J. C.; Cross, P. C. *Molecular Vibrations* Dover Publication Inc., 1980. ISBN-13: 978-0486639413
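As a closing aside (not part of the original project statement), here is one possible minimal sketch of two of the method functions left as exercises above, written against the `Molecule` attributes from Step 1; it is only meant as a quick cross-check, not as the reference `solution_01.py`.

```python
import numpy as np

def bond_length_sketch(mole, i: int, j: int) -> float:
    # Step 2: L2 norm of the difference between the two position vectors (in Bohr)
    return float(np.linalg.norm(mole.atom_coords[i] - mole.atom_coords[j]))

def bond_angle_sketch(mole, i: int, j: int, k: int) -> float:
    # Step 3: angle i-j-k with j as the central atom, returned in degrees
    e_ji = mole.atom_coords[i] - mole.atom_coords[j]
    e_ji = e_ji / np.linalg.norm(e_ji)
    e_jk = mole.atom_coords[k] - mole.atom_coords[j]
    e_jk = e_jk / np.linalg.norm(e_jk)
    cos_phi = np.clip(np.dot(e_ji, e_jk), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_phi)))
```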
a5f3c03ef8f3371331441de02bd00327ffc2fc98
66,041
ipynb
Jupyter Notebook
source/Project_01/Project_01.ipynb
ajz34/PyCrawfordProgProj
d2ba51223a4e6e56deefc5c0d68aa4e663fbcd80
[ "Apache-2.0" ]
13
2020-08-13T06:59:08.000Z
2022-03-21T15:48:09.000Z
source/Project_01/Project_01.ipynb
ajz34/PyCrawfordProgProj
d2ba51223a4e6e56deefc5c0d68aa4e663fbcd80
[ "Apache-2.0" ]
null
null
null
source/Project_01/Project_01.ipynb
ajz34/PyCrawfordProgProj
d2ba51223a4e6e56deefc5c0d68aa4e663fbcd80
[ "Apache-2.0" ]
3
2021-04-26T03:28:48.000Z
2021-09-06T21:04:07.000Z
32.388916
617
0.506473
true
16,664
Qwen/Qwen-72B
1. YES 2. YES
0.63341
0.822189
0.520783
__label__eng_Latn
0.783726
0.048283
```python from sympy.physics.mechanics import * import sympy as sp mechanics_printing(pretty_print=True) ``` ```python m, M, l = sp.symbols(r'm M l') t, g = sp.symbols('t g') r, v = dynamicsymbols(r'r \theta') dr, dv = dynamicsymbols(r'r \theta', 1) ``` ```python x = r*sp.sin(v) y = -r*sp.cos(v) X = sp.Rational(0,1) # l = Y+r Y = l-r dx = x.diff(t) dy = y.diff(t) dX = X.diff(t) dY = Y.diff(t) ``` ```python V = m*g*y + M*g*Y T = sp.Rational(1, 2)*m*(dx**2+dy**2)+sp.Rational(1, 2)*M*(dX**2+dY**2) L = T - V ``` ```python LM = LagrangesMethod(L, [r, v]) ``` ```python soln = LM.form_lagranges_equations() ``` ```python soln ``` ```python sp.solve((soln[0],soln[1]),(r.diff(t,t),v.diff(t,t))) ```
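A possible next step, not part of the original notebook, is to turn the solved accelerations into plain callables and integrate them numerically. The sketch below reuses the names defined above (`soln`, `r`, `v`, `dr`, `dv`, and the parameter symbols), substitutes made-up parameter values, and assumes that `lambdify` dummifies the dynamicsymbols passed as arguments (which it does for non-identifier arguments).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Solve the Euler-Lagrange equations for the two accelerations (as in the last cell)
accels = sp.solve((soln[0], soln[1]), (r.diff(t, t), v.diff(t, t)))

# Illustrative numerical parameters: swinging mass m, hanging mass M, rope length l
numvals = {m: 1.0, M: 2.0, l: 1.0, g: 9.81}
r_dd = sp.lambdify((r, v, dr, dv), accels[r.diff(t, t)].subs(numvals))
v_dd = sp.lambdify((r, v, dr, dv), accels[v.diff(t, t)].subs(numvals))

def rhs(time, y):
    # State vector: [r, theta, r', theta']
    ri, vi, dri, dvi = y
    return [dri, dvi, r_dd(ri, vi, dri, dvi), v_dd(ri, vi, dri, dvi)]

sol = solve_ivp(rhs, (0.0, 10.0), [0.5, 0.3, 0.0, 0.0], max_step=0.01)
```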
094f6e0a5729b9b1c9965dbd38372ba3164f2cfe
31,154
ipynb
Jupyter Notebook
Pendula/Misc/PendulumHangingMass/PendulumHangingMass.ipynb
ethank5149/Classical-Mechanics
4684cc91abcf65a684237c6ec21246d5cebd232a
[ "MIT" ]
null
null
null
Pendula/Misc/PendulumHangingMass/PendulumHangingMass.ipynb
ethank5149/Classical-Mechanics
4684cc91abcf65a684237c6ec21246d5cebd232a
[ "MIT" ]
null
null
null
Pendula/Misc/PendulumHangingMass/PendulumHangingMass.ipynb
ethank5149/Classical-Mechanics
4684cc91abcf65a684237c6ec21246d5cebd232a
[ "MIT" ]
null
null
null
230.77037
14,516
0.692784
true
286
Qwen/Qwen-72B
1. YES 2. YES
0.946597
0.76908
0.728009
__label__yue_Hant
0.264964
0.52974
# Importing and reading data ```python import pandas as pd import numpy as np import matplotlib.pyplot as plt %matplotlib inline from scipy import integrate import seaborn as sns; sns.set() ``` ```python # be sure to git pull upstream master before reading the data so it is up to date. DATA_URL = 'https://raw.githubusercontent.com/blas-ko/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/' df_confirmed = pd.read_csv(DATA_URL+'time_series_19-covid-Confirmed.csv') df_deaths = pd.read_csv(DATA_URL+'time_series_19-covid-Deaths.csv') df_recovered = pd.read_csv(DATA_URL+'time_series_19-covid-Recovered.csv') ``` ```python def df_to_timeseries(df): return df.drop(['Province/State','Lat','Long'], axis=1).groupby('Country/Region').sum().T def df_to_timeseries_province(df): return df.drop(['Lat','Long'], axis=1).set_index('Country/Region') def country_to_timeseries(df, country): return df.drop(['Lat','Long'],axis=1).set_index(['Country/Region','Province/State']).loc[country].T def country_to_timeseries(df, country): df_country = df[ df['Country/Region'] == country ].drop(['Province/State','Lat','Long'], axis=1) return df_country.groupby('Country/Region').sum().T def province_to_timeseries(df, province): df_province = df[ df['Province/State'] == province ] return df_province.set_index('Province/State').drop(['Country/Region','Lat','Long'], axis=1).T#[province] ``` ```python # testing... country_to_timeseries(df_confirmed, 'China').plot() country_to_timeseries(df_confirmed, 'Italy').plot() ``` # Basic model description We use a SEIRD model to describe the spread of the COVID-19 virus in a population. The model distinguishes between the population that is Susceptible ($S$) to the virus, those who have been Exposed ($E$) to it but don't present any symptoms nor are infectious, those who are Infected ($I$), those who have Recovered ($R$), and those who are Deceased ($D$). This model is a simplification of the [compartmental model developed by Alison Hill and collaborators](https://alhill.shinyapps.io/COVID19seir/?fbclid=IwAR0G-qmOdznACkXRHZMMNyk4NRW-MHlk_n4I4W7Q3_MGqmm7wplUp0zpkJk). The Susceptible population gets exposed to the virus by getting in contact with the Infected population at a rate $\lambda$. Then, the Exposed population gets into an Infected stage at a rate $\sigma$. The Infected population can either Recover, which they do at a rate $\gamma$, or Die, which they do at a rate $\mu$. These dynamics are described in the following equations \begin{align} \begin{aligned} \dot{S} &= -\lambda \frac{I}{N} S \\ \dot{E} &= \lambda \frac{I}{N} S - \sigma E \\ \dot{I} &= \sigma E - \gamma I - \mu I \\ \dot{R} &= \gamma I \\ \dot{D} &= \mu I, \end{aligned} \end{align} where $\dot{x} := \frac{dx(t)}{dt} $. Note that the equations implicitly encode the conservation of the total population, i.e. \begin{equation} S + E + I + R + D = N. \end{equation} This model assumes that, once an individual recovers, she will not become susceptible again. Further, we assume that the timescale of the epidemic is fast compared to the natural birth-death rates of the population. Thus, $N$ does not change in time (and, consequently, it can be absorbed into $\lambda$). ## Simulations ```python # Basic SEIRD model ODE def SEIRD_model(x, t, *params): λ, σ, γ, μ = params S,E,I,R,D = x return [-λ*I*S, λ*I*S - σ*E, σ*E - (γ+μ)*I, γ*I, μ*I] ``` ## How to choose the coefficients and initial conditions of the model?
The **symptoms** of the coronavirus, according to the [Canadian public health service](https://www.canada.ca/en/public-health/services/diseases/2019-novel-coronavirus-infection/symptoms.html#s) and the [Worldometers](https://www.worldometers.info/coronavirus/coronavirus-symptoms/), suggest that - There are no vaccines yet - Symptoms may take up to 14 days to appear (there's a range of 2-14 days to develop symptoms according to the [CDC](https://www.cdc.gov/coronavirus/2019-ncov/symptoms-testing/symptoms.html?CDC_AA_refVal=https%3A%2F%2Fwww.cdc.gov%2Fcoronavirus%2F2019-ncov%2Fabout%2Fsymptoms.html)). We will use an exposure period of 8 days. - The infection lasts around 14 days for mild cases and 3+ weeks for more severe cases - 81 % of the cases are mild. The **mortality** of the coronavirus, according to the [Worldometers](https://www.worldometers.info/coronavirus/coronavirus-death-rate/), includes: - 3.4 % death rate as of March 03 - However, if one takes the ratio of deaths/confirmed cases as of 20-March-2020, one gets a 4 % death rate - The recovery rate is around 35 % so far. The **Basic reproductive ratio** (see [Wikipedia](https://en.wikipedia.org/wiki/Basic_reproduction_number)), according to the [CMMID](https://cmmid.github.io/topics/covid19/current-patterns-transmission/global-time-varying-transmission.html), is - $R_0 \approx 2.2 \pm 0.2$ at 01/feb/20 in China - $R_0 \approx 2.2 \pm 0.3$ at 01/feb/20 in Italy - $R_0$ unknown at 01/feb/20 in the UK Using the [World Bank transportation data](https://data.worldbank.org/indicator/IS.AIR.PSGR?locations=CN-IT-GB), we can estimate the flow between countries using as a proxy the average number of flights of any given country in 2018; these include - China: $611$ million flights in 2018 - Italy: $28$ million flights in 2018 - UK: $165$ million flights in 2018 Specifically, for travel between Italy and the UK, [Wikipedia](https://en.wikipedia.org/wiki/Italy%E2%80%93United_Kingdom_relations) says that - Between 4 and 5 million British tourists visit Italy every year. - 1 million Italian tourists visit the UK. - [the UK government](https://www.gov.uk/foreign-travel-advice/italy) says that approximately 3 million British nationals visit Italy every year. The population of each country can be found at the [Worldometers](https://www.worldometers.info/world-population/population-by-country/), where - China: $1.44 \times 10^9$ individuals - Italy: $60.5 \times 10^6$ individuals - UK: $67.9 \times 10^6$ individuals The number of cases for each country is obtained from the [Johns Hopkins University repository](https://github.com/CSSEGISandData/COVID-19).
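As a quick sanity check on how these numbers enter the model: in the SEIRD equations above an infectious individual transmits at rate $\lambda$ and leaves the $I$ compartment at rate $\gamma + \mu$, so the basic reproductive ratio of this compartmental structure is

\begin{equation}
R_0 = \frac{\lambda}{\gamma + \mu} \quad \Longrightarrow \quad \lambda = R_0 (\gamma + \mu),
\end{equation}

which is exactly how $\lambda$ is computed in the code cell below. With $R_0 = 2.3$, an infectious period of $14$ days and a mortality ratio of $0.034$, this gives $\gamma \approx 0.071$, $\mu \approx 0.0024$ and $\lambda \approx 0.17$ per day.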
```python # Reported China coronavirus numbers on 01st February 2020 initial_date = '2/1/20' N = 1.44e9 # China's population cases = df_to_timeseries(df_confirmed).loc[initial_date,'China'] # total reported cases including resolved deaths = df_to_timeseries(df_deaths).loc[initial_date,'China'] recovered = df_to_timeseries(df_recovered).loc[initial_date,'China'] R_0 = 2.3 # Basic Reproductive Rate [people] M = 0.034 # Mortality ratio [fraction] P_exposure = 8 # Average exposure period [days] (should really be split up by case severity) P_infectious = 14 # Average infectious period [days] (should really be split up by case severity) # Compute model coefficients γ = 1 / P_infectious μ = γ * M λ = R_0 * (γ + μ) σ = 1 / P_exposure # concatenating problem parameters params = (λ, σ, γ, μ) # setting initial conditions r = 2 R0 = recovered / N D0 = deaths / N I0 = cases / N - R0 - D0 # confirmed cases E0 = r*I0 # cases without symptoms, so they are not yet detected S0 = (1 - E0 - I0 - R0 - D0) # initial condition at t0 = Feb 01 x0 = [S0, E0, I0, R0, D0] ``` ```python ## Integrating the problem for the next year t0,tf = (0, 365) sol = integrate.solve_ivp(lambda t,x: SEIRD_model(x, t, *params), (t0,tf), x0) ``` ```python def plot_model(sol, country='China', log=False): # Basic plot of the dynamics plt.figure( figsize=(8,6) ) labels = ['Susceptible', 'Exposed', 'Infected', 'Recovered', 'Deceased'] for (i,y) in enumerate(sol.y): if i == 0: 0 # continue plt.plot(sol.t, y*N, label=labels[i], lw=3) plt.title("Covid-19 spread in {}".format(country)) plt.ylabel("Number of people") plt.xlabel("time [days since {}].".format(initial_date)) plt.legend() if log: plt.yscale('log') print(f"For a population of {int(N/1e6)} million people, after {sol.t[-1]:.0f} days there were:") print(f"{sol.y[4][-1]*100:.1f}% total deaths, or {sol.y[4][-1]*N/1e3:.0f} thousand people.") print(f"{sol.y[3][-1]*100:.1f}% total recovered, or {sol.y[3][-1]*N/1e3:.0f} thousand people.") print(f"At the epidemic's peak {sol.y[2].max()*100:.1f}% of people were simultaneously infected, or {sol.y[2].max()*N/1e3:.0f} thousand people.") print(f"After {sol.t[-1]:.0f} days the virus was present in less than {sol.y[2][-1]*N/1e3:.0f} thousand individuals.") ``` ```python print("If no measures are taken, according to our model:\n") plot_model(sol) ``` ToDo: Include measures of social distancing, quarantine, and the effect of the healthcare system. See [this notebook](https://github.com/TomNicholas/coronavirus/blob/master/covid-19_model.ipynb) for inspiration.
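One simple way such measures could enter the model (a sketch only: the start date and reduction factor below are made-up placeholders, not calibrated to any data) is to make the contact rate time dependent and let it drop once interventions begin:

```python
def intervention_lambda(t, λ0, t_start=45.0, reduction=0.6):
    # Contact rate drops by `reduction` once distancing measures start (illustrative values)
    return λ0 * (1.0 - reduction) if t >= t_start else λ0

def SEIRD_model_distancing(x, t, λ0, σ, γ, μ):
    # Same compartments as SEIRD_model, but with a time-dependent contact rate
    S, E, I, R, D = x
    λt = intervention_lambda(t, λ0)
    return [-λt*I*S, λt*I*S - σ*E, σ*E - (γ+μ)*I, γ*I, μ*I]
```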
# SEIRD coupled model ```python ####----------------------------------#### #### Illustration of the coupled model #### ####----------------------------------#### import networkx as nx G = nx.DiGraph() G.add_weighted_edges_from( [(1,2,4.0),(2,1, 2.0), (1,3, 1.0),(3,1, 1.0), (2,3, 1.5),(3,2, 2.0) ] ) pos = nx.spring_layout(G) # positions for all nodes # nodes nx.draw_networkx_nodes(G, pos, node_size=1400, alpha=0.9) # labels nx.draw_networkx_labels(G, pos, labels={1:'UK',2:'Italy',3:'China'}, font_size=15, font_family='sans-serif') # edges edge_weights = [] for i in G.nodes: for j in G.nodes: if j != i: edge_weights.append( G[i][j]['weight'] ) nx.draw_networkx_edges(G, pos, width=edge_weights, ) # edge label edge_labels = {(1,2): '$w_{12}$', (1,3): '$w_{13}$', (2,3): '$w_{23}$', } nx.draw_networkx_edge_labels(G, pos, edge_labels=edge_labels, label_pos=0.5, font_size=15) plt.axis('off') plt.title("FIG 1: SEIRD Network model schema"); ``` Covid-19 is a pandemic, and, although the model $(1)$ takes a global population $N$ as its input, it does not encode any structure in the interactions between people. It is just a zeroth-order approximation of the rate of contact, and it assumes every susceptible individual has the same likelihood of getting infected no matter where she is. Here we want to extend model $(1)$ and embed it into a network, where each of its nodes represents a different population and the edges represent the coupling between them. For instance, nodes may be countries while the edges represent the rate at which people travel from one country to another. To simplify notation, we define the vector $\mathbf{x}^i = (S^i, E^i, I^i, R^i, D^i)$ to represent the population of country $i$. $w_{ij}$ is the flow rate from country $i$ to country $j$. Only Susceptible and Exposed people should be able to travel (of course Deceased and Infected people will not; but Recovered people were recently infectious, so they shouldn't travel anyway). While the worldwide population is conserved ($N = \sum_i N^i$), the population of each individual country can now vary over time. $w_{ij}$ should be proportional to the individuals in country $i$, and the probability of going to country $j$ should decrease if $j$ has a lot of infected individuals. Thus, we propose \begin{equation} w_{ij} = \alpha_{ij} ( S^i + E^i ) e^{ - \beta I^j }, \end{equation} where $\alpha_{ij}$ sets the scale for the amount of people that go from $i$ to $j$. If there's no epidemic (i.e. $N^i = S^i$), $w_{ij} = \alpha_{ij} S^i$ is the average number of flights per time unit from country $i$ to $j$. $\beta$ represents the reluctance to fly to country $j$ because of its infectiousness. Note that this rationale can be extended to any network structure, where, instead of taking the weight of a given country $i$ to every other country in the world, it only considers the set of $i$'s *possible* destinations, which we denote as $\mathcal{N}_i$.
We incorporate the flows between countries into the basic SEIRD model described previously, obtaining the following system \begin{align} \begin{aligned} \dot{S^i} &= -\lambda \frac{I^i}{N^i} S^i &- S^i \sum_{j \in \mathcal{N}_i} \alpha_{ij}e^{-\beta I^j} + \sum_{j \in \mathcal{N}_i} S^j \alpha_{ji}e^{-\beta I^i} \\ \dot{E^i} &= \lambda \frac{I^i}{N^i} S^i - \sigma E^i & \underbrace{ - E^i \sum_{j \in \mathcal{N}_i} \alpha_{ij}e^{-\beta I^j} }_{\text{out-flow}} + \underbrace{ \sum_{j \in \mathcal{N}_i} E^j \alpha_{ji}e^{-\beta I^i} }_{\text{in-flow}} \\ \dot{I^i} &= \sigma E^i - \gamma I^i - \mu I^i \\ \dot{R^i} &= \gamma I^i \\ \dot{D^i} &= \mu I^i, \end{aligned} \end{align} where we assume that the epidemic coefficients ($\lambda, \sigma, \gamma, \mu$) stay constant. Further, we will assume that the initial Exposed individuals are proportional to the infected (and officially registered) individuals. That is, \begin{equation} E^i_0 = r (I^i_0 + R^i_0 + D^i_0), \end{equation} for some parameter $r \geq 1$. *Note that while the virus **may** affect the global population equally on average, different countries have developed different measures to stop the virus, effectively changing the epidemic coefficients. We will deal with those kinds of measures later, when we parametrize the coefficients in terms of the action plan of each country*. ToDo: Do the coupled SEIRD for the general case of $k$ countries joined with an adjacency matrix (or perhaps a fully connected network). $\alpha$ is a candidate for a weighted, directed adjacency matrix whenever it is not constant. If $\alpha$ is constant, does everything couple exactly as we saw in the case of 2 countries? ```python # In the SEIRD model, each country comes packed in a vector of 5 components : S,E,I,R,D def coupled_SEIRD(x, t, *params, coupling=True): α, β, λ, σ, γ, μ = params # could put A, adjmatrix n_countries = int( len(x)/5 ) x_dot = [] for i in range(n_countries): Si, Ei, Ii, Ri, Di = x[5*i:5*(i+1)] # decoupled SEIRD model Si_dot, Ei_dot, Ii_dot, Ri_dot, Di_dot = [-λ*Ii*Si, λ*Ii*Si - σ*Ei, σ*Ei - (γ+μ)*Ii, γ*Ii, μ*Ii] # couplings with other countries coupling_Si = 0 coupling_Ei = 0 if coupling: for j in range(n_countries): if i != j: Sj, Ej, Ij, Rj, Dj = x[5*j:5*(j+1)] coupling_Si += - Si * α[i,j] * np.exp( -β*Ij ) + Sj * α[j,i] * np.exp( -β*Ii ) coupling_Ei += - Ei * α[i,j] * np.exp( -β*Ij ) + Ej * α[j,i] * np.exp( -β*Ii ) Si_dot += coupling_Si Ei_dot += coupling_Ei x_dot += [Si_dot, Ei_dot, Ii_dot, Ri_dot, Di_dot] return x_dot ``` ```python ####-------------#### #### Model setup #### ####-------------#### initial_date = '3/1/20' N_uk = 67.8e6 # UK's population cases_uk = df_to_timeseries(df_confirmed).loc[initial_date,'United Kingdom'] # total reported cases including resolved deaths_uk = df_to_timeseries(df_deaths).loc[initial_date,'United Kingdom'] recovered_uk = df_to_timeseries(df_recovered).loc[initial_date,'United Kingdom'] print("There were {} cases in the UK at {}".format(cases_uk, initial_date)) N_italy = 60.5e6 # Italy's population cases_italy = df_to_timeseries(df_confirmed).loc[initial_date,'Italy'] # total reported cases including resolved deaths_italy = df_to_timeseries(df_deaths).loc[initial_date,'Italy'] recovered_italy = df_to_timeseries(df_recovered).loc[initial_date,'Italy'] print("There were {} cases in Italy at {}".format(cases_italy, initial_date)) N_both = N_uk + N_italy ####----------------------------#### #### Estimation of coefficients #### ####----------------------------#### R_0 = 2.5 # Basic Reproductive Rate [people]
M = 0.034 # Mortality ratio [fraction] P_exposure = 8 # Average exposure period [days] P_infectious = 16 # Average infectious period [days] (should really be split up by case severity) ## Estimation of model coefficients # how to compute R0 taking P_exposure into account? γ = 1 / P_infectious σ = 1 / P_exposure μ = γ * M λ = R_0 * (γ + μ + σ) print("λ", λ) ## Estimation of coupling coefficients (source : https://data.worldbank.org/indicator/IS.AIR.PSGR?end=2018&locations=CN-IT&start=2006) α_uk_italy = 4e6 / N_uk α_italy_uk = 1e6 / N_italy α = np.array([ [0.0, α_uk_italy], [α_italy_uk, 0] ]) # transition matrix β = np.log(2) / 2e-4 # repulsion coefficient to fo to "infected" country. (made up) # the logic behind β: if 10k people out of 50M are infected, reduce your chances of going by 1/2. # i.e. exp( -β (10k/50M) ) ~ 1/2 print("β",β) ## MODEL PARAMETERS params = (α, β, λ, σ, γ, μ) ####--------------------------#### #### Initial conditions setup #### ####--------------------------#### r = 7 # Ratio of unregistered vs registered cases (made up quantity) # UK R0_uk = recovered_uk / N_both D0_uk = deaths_uk / N_both I0_uk = cases_uk/ N_both - R0_uk - D0_uk E0_uk = r*I0_uk S0_uk = (N_uk - E0_uk - I0_uk - R0_uk - D0_uk) / N_both x0_uk = [S0_uk, E0_uk, I0_uk, R0_uk, D0_uk] # italy R0_italy = recovered_italy / N_both D0_italy = deaths_italy / N_both I0_italy = cases_italy / N_both - R0_italy - D0_italy E0_italy = r*I0_italy S0_italy = (N_italy - E0_italy - I0_italy - R0_italy - D0_italy) / N_both x0_italy = [S0_italy, E0_italy, I0_italy, R0_italy, D0_italy] ### INITIAL CONDITIONS x0 = x0_uk + x0_italy ``` There were 36 cases in the UK at 3/1/20 There were 1694 cases in Italy at 3/1/20 λ 0.47406249999999994 β 3465.7359027997263 ```python ####-----------------------------------#### #### Numerical simulation of the model #### ####-----------------------------------#### days = 19 t0,tf = (0, days) # days from 01/feb to date : 29+20 # model with coupling %time sol = integrate.solve_ivp(lambda t,x: coupled_SEIRD(x, t, *params, coupling=True), (t0,tf), x0, t_eval=np.linspace(0,days, days+1)) # model without coupling %time sol_nocoupling = integrate.solve_ivp(lambda t,x: coupled_SEIRD(x, t, *params, coupling=False), (t0,tf), x0, t_eval=np.linspace(0,days, days+1)) It_uk, Rt_uk, Dt_uk = (sol.y[2], sol.y[3], sol.y[4]) It_it, Rt_it, Dt_it = (sol.y[2+5], sol.y[3+5], sol.y[4+5]) It_uk_noc, Rt_uk_noc, Dt_uk_noc = (sol_nocoupling.y[2], sol_nocoupling.y[3], sol_nocoupling.y[4]) It_it_noc, Rt_it_noc, Dt_it_noc = (sol_nocoupling.y[2+5], sol_nocoupling.y[3+5], sol_nocoupling.y[4+5]) print("\nCoupled scenario:") print("Confirmed cases after {} days in UK: {}".format(days, (It_uk[-1] + Rt_uk[-1] + Dt_uk[-1])*N_both ) ) print("Confirmed cases after {} days in Italy: {}".format(days, (It_it[-1] + Rt_it[-1] + Dt_it[-1])*N_both ) ) print("\nDecoupled scenario:") print("Confirmed cases after {} days in UK: {}".format(days, (It_uk_noc[-1] + Rt_uk_noc[-1] + Dt_uk_noc[-1])*N_both ) ) print("Confirmed cases after {} days in Italy: {}".format(days, (It_it_noc[-1] + Rt_it_noc[-1] + Dt_it_noc[-1])*N_both ) ) ``` CPU times: user 5.34 ms, sys: 0 ns, total: 5.34 ms Wall time: 3.99 ms CPU times: user 0 ns, sys: 1.94 ms, total: 1.94 ms Wall time: 1.6 ms Coupled scenario: Confirmed cases after 19 days in UK: 3357.1374698537497 Confirmed cases after 19 days in Italy: 38475.79502058507 Decoupled scenario: Confirmed cases after 19 days in UK: 675.2092466621101 Confirmed cases after 19 days in Italy: 33601.8700472445 ### Results of model: 
Coupled Scenario ```python plt.figure( figsize=(10,6) ) # initial date is 3/1/20. The dataset starts at 1/22/20 days_till_initialdate = (31 - 22) + 29 + 1 # days from 1/22/20 to 3/1/20 plt.plot( sol.t, (It_it + Rt_it + Dt_it)*N_both, lw=3, label='simulation') plt.plot(country_to_timeseries(df_confirmed, 'Italy').iloc[days_till_initialdate:], lw=3, label='data') plt.legend() plt.xticks(rotation=45) plt.ylabel("Number of cases") plt.title('Italy: Coupled with the UK'); ``` ```python plt.figure( figsize=(10,6) ) plt.plot( sol.t, (It_uk + Rt_uk + Dt_uk)*N_both, lw=3, label='simulation') plt.plot(country_to_timeseries(df_confirmed, 'United Kingdom').iloc[days_till_initialdate:], lw=3, label='data') plt.legend() plt.xticks(rotation=45) plt.ylabel("Number of cases") plt.title('UK: Coupled with Italy'); ``` Note that the model is already predicting the cases in Italy and the UK very well. A sensitive parameter in the model is $r$, the ratio of exposed individuals (people with the virus but without symptoms) to the number of registered cases. We will see that in the decoupled scenario, Italy grows similarly to the coupled case. However, the UK will not have enough exposure to the virus when it is decoupled, so the model will largely underestimate its number of cases. Note that no preventive action is considered in the model. This means that we may be highly underestimating the ratio $r$ of unregistered individuals **already exposed** to the virus. ### No coupling scenario ```python plt.figure( figsize=(10,6) ) plt.plot( sol.t, (It_it_noc + Rt_it_noc + Dt_it_noc)*N_both, lw=3, label='simulation') plt.plot(country_to_timeseries(df_confirmed, 'Italy').iloc[days_till_initialdate:], lw=3, label='data') plt.legend() plt.xticks(rotation=45) plt.ylabel("Number of cases") plt.title('Italy: No coupling with the UK'); ``` ```python plt.figure( figsize=(10,6) ) plt.plot( sol.t, (It_uk_noc + Rt_uk_noc + Dt_uk_noc)*N_both, lw=3, label='simulation') plt.plot(country_to_timeseries(df_confirmed, 'United Kingdom').iloc[days_till_initialdate:], lw=3, label='data') plt.legend() plt.xticks(rotation=45) plt.ylabel("Number of cases") plt.title('UK: No coupling with Italy'); ``` ## Long term dynamics (with no preventive action) Some long-term predictions of the model. These are not intended to have predictive power; I show them to illustrate the effects of the coupling in the model.
```python ####-----------------------------------#### #### Numerical simulation of the model #### ####-----------------------------------#### days = 365 t0,tf = (0, days) # days from 01/feb to date : 29+20 %time sol = integrate.solve_ivp(lambda t,x: coupled_SEIRD(x, t, *params, coupling=True), (t0,tf), x0, t_eval=np.linspace(0,days, days+1)) %time sol_nocoupling = integrate.solve_ivp(lambda t,x: coupled_SEIRD(x, t, *params, coupling=False), (t0,tf), x0, t_eval=np.linspace(0,days, days+1)) It_uk, Rt_uk, Dt_uk = (sol.y[2], sol.y[3], sol.y[4]) It_it, Rt_it, Dt_it = (sol.y[2+5], sol.y[3+5], sol.y[4+5]) It_uk_noc, Rt_uk_noc, Dt_uk_noc = (sol_nocoupling.y[2], sol_nocoupling.y[3], sol_nocoupling.y[4]) It_it_noc, Rt_it_noc, Dt_it_noc = (sol_nocoupling.y[2+5], sol_nocoupling.y[3+5], sol_nocoupling.y[4+5]) ``` CPU times: user 17.6 ms, sys: 0 ns, total: 17.6 ms Wall time: 17 ms CPU times: user 9.23 ms, sys: 0 ns, total: 9.23 ms Wall time: 8.97 ms ```python # Basic plot of the dynamics fig, ax = plt.subplots(1, 2, figsize=(12,8)) labels = ['Susceptible', 'Exposed', 'Infected', 'Recovered', 'Deceased'] labels = labels*3 for (i,y) in enumerate(sol.y): if i%5 == 0: 0 # continue if i < 5: ax[0].plot(sol.t, y*N_both, label=labels[i], lw=3) else: ax[1].plot(sol.t, y*N_both, label=labels[i], lw=3) print("Coupled scenario") ax[0].legend() ax[1].legend() # plt.yscale('log') ``` ```python # Basic plot of the dynamics fig, ax = plt.subplots(1, 2, figsize=(12,8)) labels = ['Susceptible', 'Exposed', 'Infected', 'Recovered', 'Deceased'] labels = labels*3 for (i,y) in enumerate(sol_nocoupling.y): if i%5 == 0: 0 # continue if i < 5: ax[0].plot(sol_nocoupling.t, y*N_both, label=labels[i], lw=3) else: ax[1].plot(sol_nocoupling.t, y*N_both, label=labels[i], lw=3) print("Decoupled scenario") ax[0].legend() ax[1].legend() # plt.yscale('log') ``` Of course, these projections are not realistic. They assume that most of the population will be affected after one year (this could indeed be possible if there is no preventive action at all). Further, the coupled case should be corrected because a substantial part of each country's population flies to the neighbouring countries. We should build a return-flight model (maybe with time delays over $S(t)$ and $E(t)$).
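As a starting point for the $k$-country ToDo above, here is an untested sketch of the same right-hand side written with array operations, so that the coupling is expressed directly through the (weighted, directed) matrix $\alpha$. It assumes the state is packed as $k$ consecutive $(S,E,I,R,D)$ blocks, exactly as in `coupled_SEIRD`, and that $\alpha$ has a zero diagonal.

```python
# Sketch: vectorized k-country SEIRD right-hand side (mirrors coupled_SEIRD above)
def coupled_SEIRD_vec(x, t, α, β, λ, σ, γ, μ):
    X = np.asarray(x).reshape(-1, 5)                   # one row per country: S, E, I, R, D
    S, E, I, R, D = X.T
    repulsion = np.exp(-β * I)                         # e^{-β I_j}, one entry per country
    out_rate = (α * repulsion[None, :]).sum(axis=1)    # Σ_j α_ij e^{-β I_j}
    out_S, in_S = S * out_rate, repulsion * (S @ α)    # (S @ α)_i = Σ_j S_j α_ji
    out_E, in_E = E * out_rate, repulsion * (E @ α)
    dS = -λ*I*S - out_S + in_S
    dE = λ*I*S - σ*E - out_E + in_E
    dI = σ*E - (γ + μ)*I
    dR = γ*I
    dD = μ*I
    return np.column_stack([dS, dE, dI, dR, dD]).ravel()
```

With the two-country $\alpha$ defined above, this should reproduce `coupled_SEIRD` up to floating-point error, and it can be passed to `solve_ivp` in exactly the same way.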
8fb64696e6fdaf84581ada553b53b63ab96e84b2
417,623
ipynb
Jupyter Notebook
covid_SEIRD_coupled_model.ipynb
aguirreFabian/COVID-19_Coupled-Epidemics
3e64e1c66399bcbffa70605c219bb633819ff90c
[ "MIT" ]
null
null
null
covid_SEIRD_coupled_model.ipynb
aguirreFabian/COVID-19_Coupled-Epidemics
3e64e1c66399bcbffa70605c219bb633819ff90c
[ "MIT" ]
null
null
null
covid_SEIRD_coupled_model.ipynb
aguirreFabian/COVID-19_Coupled-Epidemics
3e64e1c66399bcbffa70605c219bb633819ff90c
[ "MIT" ]
null
null
null
429.653292
67,012
0.930715
true
7,443
Qwen/Qwen-72B
1. YES 2. YES
0.880797
0.833325
0.73399
__label__eng_Latn
0.905681
0.543636
# Probability II # 1. Markov Chains A Markov chain is a random process with the Markov property. A random (or stochastic) process is a mathematical object defined as a collection of random variables. A Markov chain has either a discrete state space (which would represent possible values of the random variables) or a discrete index set (usually representing time). The term "Markov chain" is usually used to describe a process with a discrete set of times, that is, a discrete-time Markov chain (DTMC). ## Discrete-time Markov chain. A discrete-time chain involves a system that changes randomly between steps. These steps are usually viewed as moments in time (not necessarily the physical quantity). A discrete Markov chain is a sequence of random variables such that the probability of moving to the next state depends only on the present state and not on the previous states. \begin{equation} Pr(X_{n+1}=x|X_1=x_1,X_2=x_2,...,X_n=x_n)=Pr(X_{n+1}=x|X_n =x_n) \end{equation} ```python import numpy as np import random as rm states=["Dormir","Comer","Estudiar"] nombreTransicion=[["DD","DC","DE"],["CD","CC","CE"],["ED","EC","EE"]] MT=[[0.2,0.6,0.2],[0.1,0.6,0.3],[0.2,0.7,0.1]] if sum(MT[0])+sum(MT[1])+sum(MT[2])!=3: print("No está bien") else: print("Todo está bien, adelante!! ;)") ``` Todo está bien, adelante!! ;) ```python def prediccion_actividades(days): actividadHoy="Dormir" print("Estado Inicial: "+actividadHoy) listaActividades=[actividadHoy] i=0 prob=1 while i!=days: if actividadHoy!="Dormir": change=np.random.choice(nombreTransicion[0],replace=True,p=MT[0]) if change=="DD": prob=prob*0.2 listaActividades.append("Dormir") pass elif change=="DE": prob=prob*0.6 actividadHoy="Estudiar" listaActividades.append("Estudiar") else: prob=prob*0.2 actividadHoy="Comer" listaActividades.append("Comer") elif actividadHoy!="Estudiar": change=np.random.choice(nombreTransicion[2],replace=True,p=MT[2]) if change=="EE": prob=prob*0.5 listaActividades.append("Estudiar") pass elif change=="ED": prob=prob*0.2 actividadHoy="Dormir" listaActividades.append("Dormir") else: prob=prob*0.3 actividadHoy="Comer" listaActividades.append("Comer") elif actividadHoy!="Comer": change=np.random.choice(nombreTransicion[1],replace=True,p=MT[1]) if change=="CC": prob=prob*0.1 listaActividades.append("Comer") pass elif change=="CD": prob=prob*0.2 actividadHoy="Dormir" listaActividades.append("Dormir") else: prob=prob*0.7 actividadHoy="Estudiar" listaActividades.append("Estudiar") i+=1 print("Posibles estados: "+ str(listaActividades)) print("Estado final después de "+str(days)+"dias: "+ actividadHoy) print("Probabilidad de secuencia de estados posible: "+str(prob)) prediccion_actividades(2) ``` Estado Inicial: Dormir Posibles estados: ['Dormir', 'Estudiar', 'Comer'] Estado final después de 2dias: Comer Probabilidad de secuencia de estados posible: 0.15 ```python def prediccion_actividades(days): # escoge estado inicial actividadHoy = "Dormir" listaActividades = [actividadHoy] i = 0 prob = 1 while i != days: if actividadHoy == "Dormir": change = np.random.choice(nombreTransicion[0],replace=True,p=MT[0]) if change == "DD": prob = prob * 0.2 listaActividades.append("Dormir") pass elif change == "DE": prob = prob * 0.6 actividadHoy = "Estudiar" listaActividades.append("Estudiar") else: prob = prob * 0.2 actividadHoy = "Comer" listaActividades.append("Comer") elif actividadHoy == "Estudiar": change =
np.random.choice(nombreTransicion[1],replace=True,p=MT[1]) if change == "EE": prob = prob * 0.5 listaActividades.append("Estudiar") pass elif change == "ED": prob = prob * 0.2 actividadHoy = "Dormir" listaActividades.append("Dormir") else: prob = prob * 0.3 actividadHoy = "Comer" listaActividades.append("Comer") elif actividadHoy == "Comer": change = np.random.choice(nombreTransicion[2],replace=True,p=MT[2]) if change == "CC": prob = prob * 0.1 listaActividades.append("Comer") pass elif change == "CD": prob = prob * 0.2 actividadHoy = "Dormir" listaActividades.append("Dormir") else: prob = prob * 0.7 actividadHoy = "Estudiar" listaActividades.append("Estudiar") i += 1 return listaActividades # para guardar todo listaActividades lista_actividades = [] count = 0 for iterations in range(1,10000): lista_actividades.append(prediccion_actividades(2)) for listita in lista_actividades: if(listita[2] == "Estudiar"): count += 1 # Calcula la probabilidad de iniciar en:'Dormir' and ending at state:'Estudiar' porcentaje = (count/10000) * 100 print("La probabilidad del estado inicial :'Dormir' y el estado final :'Estudiar'= " + str(porcentaje) + "%") ``` La probabilidad del estado inicial :'Dormir' y el estado final :'Estudiar'= 64.99000000000001% ## Metropolis-Hastings In Bayesian statistics we want to estimate a posterior distribution, but this is usually intractable because of the high-dimensional integral in the denominator (the marginal likelihood). It is possible to approximate the posterior distribution if we can somehow sample from it. With MCMC methods, it is possible to draw samples from a proposal distribution such that each one depends only on the previous state, just as in a Markov chain. Under certain conditions, the Markov chain will have a unique stationary distribution. In addition, not all samples are used; instead, an acceptance criterion based on successively comparing states with respect to a target distribution is used to ensure that the stationary distribution is the posterior distribution of interest. The interesting thing about this target distribution is that it only needs to be proportional to the posterior distribution, which means we do not need to evaluate the potentially intractable marginal likelihood, which is just a normalization constant. There are different kinds of MCMC, but the simplest one to understand is Metropolis-Hastings, which is based on a random-walk algorithm. For this, we need to be able to sample from the following distributions: - The standard uniform - The proposal distribution (e.g. N(0,$\sigma$)) - The target distribution, which is proportional to the posterior probability. Given an initial guess $\theta$ with positive probability of being drawn, Metropolis proceeds as follows: - Choose a new proposed value such that $\theta_p =\theta+\Delta\theta$ where $\Delta\theta \sim N(0,\sigma)$ - Compute the ratio \begin{equation} \rho=\frac{g(\theta_p | X)}{g(\theta|X)}, \end{equation} where $g$ is the posterior probability.
- If the proposal distribution is not symmetric, we need to weight the acceptance probability in order to maintain detailed balance (reversibility) with respect to the stationary distribution, and instead compute \begin{equation} \rho=\frac{g(\theta_p|X)\, p(\theta|\theta_p)}{g(\theta|X)\, p(\theta_p|\theta)} \end{equation} - If $\rho \geq 1$ then $\theta=\theta_p$ - If $\rho < 1$ then $\theta=\theta_p$ with probability $\rho$, otherwise $\theta=\theta$ (comparing $\rho$ against a draw from the standard uniform distribution) - Repeat. # Exercise: 1. Define a function for a model a*x**2+b*x+c that receives (x,lista) where lista=[a,b,c] 2. Define an appropriate likelihood function. 3. Define an appropriate prior function for the data 4. Develop MCMC to fit, via Bayesian inference, the parameters that best fit the data with the model. ```python import numpy as np import matplotlib.pyplot as plt np.random.seed(123) x=np.random.rand(25)*7 x_=np.linspace(-3,5,25) ## PARA CREAR DATOS DE JUGUETE ## a=-2 b=5 c=2 y=a*x_**2+b*x_+c y+=x plt.scatter(x_,y) plt.show() ``` ```python x_obs=x y_obs=y def model(x,a,b,c): return a*x**2+b*x+c def loglikelihood(x_obs,y_obs,_a,_b,_c): d=y_obs-model(x_obs,_a,_b,_c) d=d/1.0 d=-0.5*np.sum(d**2) return d def logprior(a,b,c): p=-np.inf if a<10 and a>-10 and b>-10 and b<10 and c>-10 and c<10: p=0.0 return p N=10000 lista_a=[np.random.random()] lista_b=[np.random.random()] lista_c=[np.random.random()] logposterior = [loglikelihood(x_obs, y_obs, lista_a[0], lista_b[0], lista_c[0]) + logprior(lista_a[0], lista_b[0], lista_c[0])] sigma_delta_a=0.5 sigma_delta_b=0.5 sigma_delta_c=0.5 for i in range(1,N): propuesta_a=lista_a[i-1]+np.random.normal(loc=0.0,scale=sigma_delta_a) propuesta_b=lista_b[i-1]+np.random.normal(loc=0.0,scale=sigma_delta_b) propuesta_c=lista_c[i-1]+np.random.normal(loc=0.0,scale=sigma_delta_c) logposterior_viejo=loglikelihood(x_obs,y_obs,lista_a[i-1],lista_b[i-1],lista_c[i-1])+logprior(lista_a[i-1],lista_b[i-1],lista_c[i-1]) logposterior_nuevo=loglikelihood(x_obs,y_obs,propuesta_a,propuesta_b,propuesta_c)+logprior(propuesta_a,propuesta_b,propuesta_c) r=min(1,np.exp(logposterior_nuevo-logposterior_viejo)) alpha=np.random.random() if(alpha<r): lista_a.append(propuesta_a) lista_b.append(propuesta_b) lista_c.append(propuesta_c) logposterior.append(logposterior_nuevo) else: lista_a.append(lista_a[i-1]) lista_b.append(lista_b[i-1]) lista_c.append(lista_c[i-1]) logposterior.append(logposterior_viejo) lista_a=np.array(lista_a) lista_b=np.array(lista_b) lista_c=np.array(lista_c) logposterior=np.array(logposterior) plt.hist(lista_a,color="blue") plt.hist(lista_b,color="red") plt.hist(lista_c,color="green") a_=np.mean(lista_a) b_=np.mean(lista_b) c_=np.mean(lista_c) print(a_) print(b_) print(c_) ``` # 2. Decision Trees. Decision Trees (DT) are a non-parametric supervised learning method used for classification and regression. The goal is to create a model that predicts the value of a target variable by learning simple decision rules inferred from the data features. For instance, in the example below, decision trees learn from data to approximate a sine curve with a set of if-then-else decision rules. The deeper the tree, the more complex the decision rules and the closer the fit of the model. Some advantages of decision trees are: - Simple to understand and to interpret. Trees can be visualized. - Requires little data preparation.
Other techniques often require data normalization, dummy variables need to be created and blank values removed. Note, however, that this module does not support missing values. - The cost of using the tree (i.e., predicting data) is logarithmic in the number of data points used to train the tree. - Able to handle both numerical and categorical data. Other techniques are usually specialized in analyzing datasets that have only one type of variable. See the algorithms for more information. - Able to handle multi-output problems. - Uses a white-box model. If a given situation is observable in a model, the explanation for the condition is easily explained by Boolean logic. By contrast, in a black-box model (for example, an artificial neural network), results may be more difficult to interpret. - Possible to validate a model using statistical tests. That makes it possible to account for the reliability of the model. - Performs well even if its assumptions are somewhat violated by the true model from which the data were generated. The disadvantages of decision trees include: - Decision-tree learners can create over-complex trees that do not generalize the data well. This is called overfitting. Mechanisms such as pruning (currently not supported), setting the minimum number of samples required at a leaf node, or setting the maximum depth of the tree are necessary to avoid this problem. - Decision trees can be unstable because small variations in the data might result in a completely different tree being generated. This problem is mitigated by using decision trees within an ensemble. - The problem of learning an optimal decision tree is known to be NP-complete under several aspects of optimality and even for simple concepts. Consequently, practical decision-tree learning algorithms are based on heuristic algorithms such as the greedy algorithm, where locally optimal decisions are made at each node. Such algorithms cannot guarantee to return the globally optimal decision tree. This can be mitigated by training multiple trees in an ensemble learner, where the features and samples are randomly sampled with replacement. - There are concepts that are hard to learn because decision trees do not express them easily, such as XOR, parity or multiplexer problems. - Decision-tree learners create biased trees if some classes dominate. It is therefore recommended to balance the dataset prior to fitting it with the decision tree. ```python from sklearn.datasets import load_iris from sklearn import tree iris = load_iris() clf = tree.DecisionTreeClassifier() clf = clf.fit(iris.data, iris.target) plt.figure(figsize=(15,15)) tree.plot_tree(clf.fit(iris.data, iris.target)) plt.show() ``` Decision trees will be covered in a bit more depth after introducing ML. ### References: - The Nature of Statistical Learning Theory. Vladimir N. Vapnik. Springer (2000) - https://people.duke.edu/~ccc14/sta-663/MCMC.html - https://scikit-learn.org/stable/modules/tree.html - https://www.datacamp.com/community/tutorials/markov-chains-python-tutorial ```python ```
482de24e930c526b1d791ee0057c7b922271c7ae
36,697
ipynb
Jupyter Notebook
7Estadistica/4_ProbabilidadII.ipynb
sergiogaitan/Study_Guides
083acd23f5faa6c6bc404d4d53df562096478e7c
[ "MIT" ]
5
2020-09-12T17:16:12.000Z
2021-02-03T01:37:02.000Z
7Estadistica/4_ProbabilidadII.ipynb
sergiogaitan/Study_Guides
083acd23f5faa6c6bc404d4d53df562096478e7c
[ "MIT" ]
null
null
null
7Estadistica/4_ProbabilidadII.ipynb
sergiogaitan/Study_Guides
083acd23f5faa6c6bc404d4d53df562096478e7c
[ "MIT" ]
4
2020-05-22T12:57:49.000Z
2021-02-03T01:37:07.000Z
69.239623
6,916
0.727961
true
4,080
Qwen/Qwen-72B
1. YES 2. YES
0.908618
0.699254
0.635355
__label__spa_Latn
0.967806
0.314473
# Lecture 16 ## Systems of Differential Equations III: ### Phase Planes and Stability ```python import numpy as np import sympy as sp import scipy.integrate sp.init_printing() ################################################## ##### Matplotlib boilerplate for consistency ##### ################################################## from ipywidgets import interact from ipywidgets import FloatSlider from matplotlib import pyplot as plt import cmath %matplotlib inline from IPython.display import set_matplotlib_formats set_matplotlib_formats('svg') global_fig_width = 8 global_fig_height = global_fig_width / 1.61803399 font_size = 12 plt.rcParams['axes.axisbelow'] = True plt.rcParams['axes.edgecolor'] = '0.8' plt.rcParams['axes.grid'] = True plt.rcParams['axes.labelpad'] = 8 plt.rcParams['axes.linewidth'] = 2 plt.rcParams['axes.titlepad'] = 16.0 plt.rcParams['axes.titlesize'] = font_size * 1.4 plt.rcParams['figure.figsize'] = (global_fig_width, global_fig_height) plt.rcParams['font.sans-serif'] = ['Computer Modern Sans Serif', 'DejaVu Sans', 'sans-serif'] plt.rcParams['font.size'] = font_size plt.rcParams['grid.color'] = '0.8' plt.rcParams['grid.linestyle'] = 'dashed' plt.rcParams['grid.linewidth'] = 2 plt.rcParams['lines.dash_capstyle'] = 'round' plt.rcParams['lines.dashed_pattern'] = [1, 4] plt.rcParams['xtick.labelsize'] = font_size plt.rcParams['xtick.major.pad'] = 4 plt.rcParams['xtick.major.size'] = 0 plt.rcParams['ytick.labelsize'] = font_size plt.rcParams['ytick.major.pad'] = 4 plt.rcParams['ytick.major.size'] = 0 ################################################## ``` ## Recap from previous parts - 2-D linear systems can be solved analytically - Eigenvalues are important - Some larger systems can be simplified - Phase planes, nullclines and fixed points ## Example from previous lecture \begin{eqnarray*} \dot{x} &=& x(1-x) -xy,\\ \dot{y} &=& y\left(2-y\right) - 3xy. \end{eqnarray*} ```python def dX_dt(X, t): return np.array([ X[0]*(1. - X[0]) - X[0]*X[1], 2.*X[1]*(1.-X[1]/2.) -3*X[0]*X[1]]) def plot_phase_plane(): plt.figure(figsize=(10,10)) init_x = [1.05, 0.9, 0.7, 0.5, 0.5, 0.32, 0.1] init_y = [1.0, 1.3, 1.6, 1.8, 0.2, 0.2, 0.2] plt.plot(init_x, init_y, 'g*', markersize=20) for v in zip(init_x,init_y): X0 = v # starting point X = scipy.integrate.odeint( dX_dt, X0, np.linspace(0,10,100)) # we don't need infodict here plt.plot( X[:,0], X[:,1], lw=3, color='green') # plot nullclines x = np.linspace(-0.1,1.1,24) y = np.linspace(-0.1,2.1,24) plt.hlines(0,-1,15, color='#F39200', lw=4, label='y-nullcline 1') plt.plot(x,1 - x, color='#0072bd', lw=4, label='x-nullcline 2') plt.vlines(0,-1,15, color='#0072bd', lw=4, label='x-nullcline 1') plt.plot(x,2 - 3*x, color='#F39200', lw=4, label='y-nullcline 2') # quiverplot - define a grid and compute direction at each point X , Y = np.meshgrid(x, y) # create a grid DX = X*(1-X) - X*Y # evaluate dx/dt DY = 2*Y*(1 - Y/2.0) - 3*X*Y # evaluate dy/dt M = (np.hypot(DX, DY)) # norm growth rate M[ M == 0] = 1. # avoid zero division errors plt.quiver(X, Y, DX/M, DY/M, M) plt.xlim(-0.05,1.1) plt.ylim(-0.05,2.1) plt.xlabel('x') plt.ylabel('y') ``` ```python plot_phase_plane() ``` ## Linear ODEs for understanding nonlinear The decoupled ODE system \begin{eqnarray*} \dot{x} = \frac{\rm{d}x}{\rm{d}t} &=& \lambda_1 x,\\ \dot{y} = \frac{\rm{d}y}{\rm{d}t} &=& \lambda_2 y, \end{eqnarray*} has a **fixed point** or **steady state** where $\;\dot{x}=\dot{y}=0\;$, at the origin. 
Solutions look like $\;x=Ae^{\lambda_1 t}\;$ and $\;y=Be^{\lambda_2 t}\;$ and thus grow exponentially or shrink exponentially depending on the values of $\;\lambda_1\;$ and $\;\lambda_2.$ If $\;\lambda_1 < 0\;$ and $\;\lambda_2 < 0\;$ then all the flow is towards the fixed point. If $\;\lambda_1\;$ or $\;\lambda_2\;$ is positive then some flow will be driven away (towards infinity). Adding in a constant (inhomogeneous) component shifts the fixed point away from the origin. Where is the fixed point of \begin{eqnarray*} \dot{x} &=& \lambda_1 x + 10,\\ \dot{y} &=& \lambda_2 y + 10? \end{eqnarray*} Coupling the system has the effect of altering the principle directions over which the exponential terms apply (**changing the basis**). This means that the **homogeneous** linear system \begin{eqnarray*} \dot{x} &=& a x + b y,\\ \dot{y} &=& c x + d y, \end{eqnarray*} has a fixed point at the origin. The long-term growth or shrinkage of solutions over time is determined by the eigenvalues of the matrix $$ A = \left( \begin{array}{cc} a & b \\ c& d \end{array} \right). $$ More generally, the **inhomogeneous** linear system \begin{eqnarray*} \dot{x} &=& a x + b y + p,\\ \dot{y} &=& c x + d y + q, \end{eqnarray*} can be written $$ \left( \begin{array}{c} \dot{x} \\ \dot{y} \end{array} \right) = \left( \begin{array}{cc} a & b \\ c& d \end{array} \right) \left( \begin{array}{c} x \\ y \end{array} \right) + \left( \begin{array}{c} p \\ q \end{array} \right) . $$ It has a fixed point at $$ \left( \begin{array}{c} x \\ y \end{array} \right) = -\left( \begin{array}{cc} a & b \\ c& d \end{array} \right)^{-1} \left( \begin{array}{c} p \\ q \end{array} \right) . $$ The long-term growth or shrinkage of solutions over time is again determined by the eigenvalues of the matrix. ## General nonlinear system steady states A more general two-dimensional nonlinear system is \begin{eqnarray*} \dot{x} &=& f(x,y),\\ \dot{y} &=& g(x,y), \end{eqnarray*} where $\;f\;$ and $\;g\;$ can be any functions whatever. We can write a polynomial (Taylor) expansion for the system when it is close to a fixed point $\;(x^*, y^*)\;$ for which $\;f(x^*,y^*)=g(x^*,y^*)=0.$ \begin{eqnarray*} \dot{x} &=& f(x^*,y^*) + \frac{\partial f}{\partial x}(x-x^*) + \frac{\partial f}{\partial y}(y-y^*) + \ldots,\\ \dot{y} &=& g(x^*,y^*) + \frac{\partial g}{\partial x}(x-x^*) + \frac{\partial g}{\partial y}(y-y^*) + \ldots, \end{eqnarray*} So, close to the fixed point: $$ \left( \begin{array}{c} \dot{x} \\ \dot{y} \end{array} \right) \approx \left( \begin{array}{cc} \frac{\partial f}{\partial x} & \frac{\partial f}{\partial y} \\\frac{\partial g}{\partial x} & \frac{\partial g}{\partial y} \end{array} \right) \left( \begin{array}{c} x-x^* \\ y-y^* \end{array} \right) . $$ This means that (really close to the fixed point) we can approximate with a linear system. The eigenvalues $\;\lambda_1,\;\lambda_2\;$ of the matrix $$ J = \left( \begin{array}{cc} \frac{\partial f}{\partial x} & \frac{\partial f}{\partial y} \\\frac{\partial g}{\partial x} & \frac{\partial g}{\partial y} \end{array} \right) $$ will determine if a small perturbation away from $\;(x^*,\;y^*)\;$ will decay or grow. 
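Before working through the classification by hand, it can be reassuring to let `sympy` (imported above as `sp`) do the bookkeeping. The short sketch below computes the Jacobian of the example system from earlier and its eigenvalues at its four fixed points, so the hand calculations in the next section can be checked against it.

```python
# Cross-check (sketch): Jacobian and eigenvalues of the example system with sympy
x, y = sp.symbols('x y')
f = x*(1 - x) - x*y
g = y*(2 - y) - 3*x*y
J = sp.Matrix([f, g]).jacobian([x, y])
fixed_points = [(0, 0), (sp.Rational(1, 2), sp.Rational(1, 2)), (1, 0), (0, 2)]
for fp in fixed_points:
    evals = J.subs({x: fp[0], y: fp[1]}).eigenvals()  # {eigenvalue: multiplicity}
    print(fp, evals)
```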
## Steady state classification $$ J = \left( \begin{array}{cc} \frac{\partial f}{\partial x} & \frac{\partial f}{\partial y} \\\frac{\partial g}{\partial x} & \frac{\partial g}{\partial y} \end{array} \right) $$ - $\lambda_1<\lambda_2<0$ Stable node - $\lambda_1=\lambda_2<0$ Stable star - $\lambda_1>\lambda_2>0$ Unstable node - $\lambda_1=\lambda_2>0$ Unstable star - $\lambda_1<0<\lambda_2$ Saddle (or hyperbolic) point: unstable - Complex $\lambda$: Spiral (with real part determining stability) - Imaginary $\lambda$: Neutral (solution cycles round fixed point) The presence of negative eigenvalues determines whether a steady state is physically viable. ### Eigenvalues of $J$ $$ |J-\lambda I| = \left| \begin{array}{cc} \frac{\partial f}{\partial x}-\lambda & \frac{\partial f}{\partial y} \\\frac{\partial g}{\partial x} & \frac{\partial g}{\partial y}-\lambda \end{array}\right| = \left(\frac{\partial f}{\partial x}-\lambda\right)\left(\frac{\partial g}{\partial y}-\lambda\right)-\frac{\partial f}{\partial y}\frac{\partial g}{\partial x} $$ So eigenvalues $\;\lambda\;$ satisfy $$ \lambda^2 - \lambda\left(\frac{\partial f}{\partial x} + \frac{\partial g}{\partial y}\right) + \frac{\partial f}{\partial x}\frac{\partial g}{\partial y} - \frac{\partial f}{\partial y}\frac{\partial g}{\partial x} = 0, $$ or $$ \lambda^2 - \lambda\tau + \Delta = 0 \quad \rm{where} \quad \tau=\rm{Trace}(J) \quad and \quad \Delta = \rm{Det}(J) $$ ## Example system \begin{align*} \dot{x} &= x(1-x) -xy &=f(x,y)\\ \dot{y} &= y\left(2-y\right) - 3xy &=g(x,y). \end{align*} - What are the nullclines? - What are the fixed points? - What is the stability of the fixed points? ### Calculate the Jacobian $$ J = \left( \begin{array}{cc} \frac{\partial f}{\partial x} & \frac{\partial f}{\partial y} \\\frac{\partial g}{\partial x} & \frac{\partial g}{\partial y} \end{array} \right) = \left(\begin{array}{cc} -2x-y+1 & -x \\-3y & -3x-2y+2 \end{array}\right) $$ At $(0,0)$: $$ J_{(0,0)} = \left(\begin{array}{cc} 1 & 0 \\0 & 2 \end{array}\right) $$ $$ \left|\begin{array}{cc} 1-\lambda & 0 \\0 & 2-\lambda\end{array}\right|\quad\implies\quad\lambda_1 = 1, \; \lambda_2 = 2 \qquad\rm{(unstable)} $$ ### Calculate the Jacobian $$ J = \left( \begin{array}{cc} \frac{\partial f}{\partial x} & \frac{\partial f}{\partial y} \\\frac{\partial g}{\partial x} & \frac{\partial g}{\partial y} \end{array} \right) = \left(\begin{array}{cc} -2x-y+1 & -x \\-3y & -3x-2y+2 \end{array}\right) $$ At $(1/2,1/2)$: $$ J_{(1/2,1/2)} = \left(\begin{array}{cc} -\frac{1}{2} & -\frac{1}{2} \\-\frac{3}{2} & -\frac{1}{2} \end{array}\right) $$ $$ \left|\begin{array}{cc} -\frac{1}{2}-\lambda & -\frac{1}{2} \\-\frac{3}{2} & -\frac{1}{2}-\lambda\end{array}\right|\quad\implies\quad\lambda_\pm = -\frac{1}{2} \pm \frac{\sqrt{3}}{2} \qquad\rm{(saddle)} $$ ### Calculate the Jacobian $$ J = \left( \begin{array}{cc} \frac{\partial f}{\partial x} & \frac{\partial f}{\partial y} \\\frac{\partial g}{\partial x} & \frac{\partial g}{\partial y} \end{array} \right) = \left(\begin{array}{cc} -2x-y+1 & -x \\-3y & -3x-2y+2 \end{array}\right) $$ At $(1,0)$: $$ J_{(1,0)} = \left(\begin{array}{cc} -1 & -1 \\ 0 & -1 \end{array}\right) $$ $$ \left|\begin{array}{cc} -1-\lambda & -1 \\0 & -1-\lambda\end{array}\right|\quad\implies\quad\lambda_1 = \lambda_2 = -1 \qquad\rm{(stable)} $$ ### Calculate the Jacobian $$ J = \left( \begin{array}{cc} \frac{\partial f}{\partial x} & \frac{\partial f}{\partial y} \\\frac{\partial g}{\partial x} & \frac{\partial g}{\partial y} \end{array} \right) = \left(\begin{array}{cc}
-2x-y+1 & -x \\-3y & -3x-2y+2 \end{array}\right) $$ At $(0,2)$: $$ J_{(0,2)} = \left(\begin{array}{cc} -1 & 0 \\ -6 & -2 \end{array}\right) $$ $$ \left|\begin{array}{cc} -1-\lambda & 0 \\-6 & -2-\lambda\end{array}\right|\quad\implies\quad\lambda_1 = -1, \; \lambda_2 = -2 \qquad\rm{(stable)} $$ ```python plot_phase_plane() ``` ## Summary - Eigenvalues for behaviour of linear systems - Eigenvalues for the stability of nonlinear systems
115e0f9f1748ae1b3928b90d1a13318d2a551dac
18,315
ipynb
Jupyter Notebook
lectures/lecture-16-systems3.ipynb
SABS-R3/2020-essential-maths
5a53d60f1e8fdc04b7bb097ec15800a89f67a047
[ "Apache-2.0" ]
1
2021-11-27T12:07:13.000Z
2021-11-27T12:07:13.000Z
lectures/lecture-16-systems3.ipynb
SABS-R3/2021-essential-maths
8a81449928e602b51a4a4172afbcd70a02e468b8
[ "Apache-2.0" ]
null
null
null
lectures/lecture-16-systems3.ipynb
SABS-R3/2021-essential-maths
8a81449928e602b51a4a4172afbcd70a02e468b8
[ "Apache-2.0" ]
1
2020-10-30T17:34:52.000Z
2020-10-30T17:34:52.000Z
29.780488
247
0.47573
true
3,897
Qwen/Qwen-72B
1. YES 2. YES
0.721743
0.853913
0.616306
__label__eng_Latn
0.773
0.270215
```python import sympy as sym x, L, C, D, c_0, c_1, = sym.symbols('x L C D c_0 c_1') class TwoPtBoundaryValueProblem(object): """ Solve -(a*u')' = f(x) with boundary conditions specified in subclasses (method get_bc). a and f must be sympy expressions of x. """ def __init__(self, f, a=1, L=L, C=C, D=D): """Default values for L, C, D are symbols.""" self.f = f self.a = a self.L = L self.C = C self.D = D # Integrate twice u_x = - sym.integrate(f, (x, 0, x)) + c_0 u = sym.integrate(u_x/a, (x, 0, x)) + c_1 # Set up 2 equations from the 2 boundary conditions and solve # with respect to the integration constants c_0, c_1 eq = self.get_bc(u) eq = [sym.simplify(eq_) for eq_ in eq] print(('BC eq:', eq)) self.u = self.apply_bc(eq, u) def apply_bc(self, eq, u): # Solve BC eqs respect to the integration constants r = sym.solve(eq, [c_0, c_1]) # Substitute the integration constants in the solution u = u.subs(c_0, r[c_0]).subs(c_1, r[c_1]) u = sym.simplify(sym.expand(u)) return u def get_solution(self, latex=False): return sym.latex(self.u, mode='plain') if latex else self.u def get_residuals(self): """Return the residuals in the equation and BCs.""" R_eq = sym.diff(sym.diff(self.u, x)*self.a, x) + self.f R_0, R_L = self.get_bc(self.u) residuals = [sym.simplify(R) for R in (R_eq, R_0, R_L)] return residuals def get_bc(self, u): raise NotImplementedError( 'class %s has not implemented get_bc' % self.__class__.__name__) class Model1(TwoPtBoundaryValueProblem): """u(0)=0, u(L)=D.""" def get_bc(self, u): return [u.subs(x, 0)-0, # x=0 condition u.subs(x, self.L) - self.D] # x=L condition class Model2(TwoPtBoundaryValueProblem): """u'(0)=C, u(L)=D.""" def get_bc(self, u): return [sym.diff(u,x).subs(x, 0) - self.C, # x=0 cond. u.subs(x, self.L) - self.D] # x=L cond. class Model3(TwoPtBoundaryValueProblem): """u(0)=C, u(L)=D.""" def get_bc(self, u): return [u.subs(x, 0) - self.C, u.subs(x, self.L) - self.D] class Model4(TwoPtBoundaryValueProblem): """u(0)=0, -u'(L)=C*(u-D).""" def get_bc(self, u): return [u.subs(x, 0) - 0, -sym.diff(u, x).subs(x, self.L) - self.C*(u.subs(x, self.L) - self.D)] def test_TwoPtBoundaryValueProblem(): f = 2 model = Model1(f) print(('Model 1, u:', model.get_solution())) for R in model.get_residuals(): assert R == 0 f = x model = Model2(f) print(('Model 2, u:', model.get_solution())) for R in model.get_residuals(): assert R == 0 f = 0 a = 1 + x**2 model = Model3(f, a=a) print(('Model 3, u:', model.get_solution())) for R in model.get_residuals(): assert R == 0 def demo_Model4(): f = 0 model = Model4(f, a=sym.sqrt(1+x)) print(('Model 4, u:', model.get_solution())) if __name__ == '__main__': test_TwoPtBoundaryValueProblem() demo_Model4() ``` ('BC eq:', [c_1, -D - L**2 + L*c_0 + c_1]) ('Model 1, u:', x*(D + L*(L - x))/L) ('BC eq:', [-C + c_0, -D - L**3/6 + L*c_0 + c_1]) ('Model 2, u:', -C*L + C*x + D + L**3/6 - x**3/6) ('BC eq:', [-C + c_1, -D + c_0*atan(L) + c_1]) ('Model 3, u:', (C*atan(L) - C*atan(x) + D*atan(x))/atan(L)) ('BC eq:', [c_1, C*D - 2*C*c_0*sqrt(L + 1) + 2*C*c_0 - C*c_1 - c_0/sqrt(L + 1)]) ('Model 4, u:', 2*C*D*sqrt(L + 1)*(sqrt(x + 1) - 1)/(2*C*L - 2*C*sqrt(L + 1) + 2*C + 1)) ```python ```
657967460713beec73ca454049309c136289d52a
5,528
ipynb
Jupyter Notebook
Data Science and Machine Learning/Machine-Learning-In-Python-THOROUGH/EXAMPLES/FINITE_ELEMENTS/INTRO/EXERCICES/27_U_XX_F_SYMPY_CLASS.ipynb
okara83/Becoming-a-Data-Scientist
f09a15f7f239b96b77a2f080c403b2f3e95c9650
[ "MIT" ]
null
null
null
Data Science and Machine Learning/Machine-Learning-In-Python-THOROUGH/EXAMPLES/FINITE_ELEMENTS/INTRO/EXERCICES/27_U_XX_F_SYMPY_CLASS.ipynb
okara83/Becoming-a-Data-Scientist
f09a15f7f239b96b77a2f080c403b2f3e95c9650
[ "MIT" ]
null
null
null
Data Science and Machine Learning/Machine-Learning-In-Python-THOROUGH/EXAMPLES/FINITE_ELEMENTS/INTRO/EXERCICES/27_U_XX_F_SYMPY_CLASS.ipynb
okara83/Becoming-a-Data-Scientist
f09a15f7f239b96b77a2f080c403b2f3e95c9650
[ "MIT" ]
2
2022-02-09T15:41:33.000Z
2022-02-11T07:47:40.000Z
34.55
98
0.446454
true
1,268
Qwen/Qwen-72B
1. YES 2. YES
0.887205
0.810479
0.719061
__label__eng_Latn
0.437995
0.50895
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W1D5_DimensionalityReduction/student/W1D5_Tutorial1.ipynb" target="_parent"></a> # Neuromatch Academy: Week 1, Day 5, Tutorial 1 # Dimensionality Reduction: Geometric view of data --- Tutorial objectives In this notebook we'll explore how multivariate data can be represented in different orthonormal bases. This will help us build intuition that will be helpful in understanding PCA in the following tutorial. Steps: 1. Generate correlated multivariate data. 2. Define an arbitrary orthonormal basis. 3. Project data onto new basis. --- ```python #@title Video: Geometric view of data from IPython.display import YouTubeVideo video = YouTubeVideo(id="emLW0F-VUag", width=854, height=480, fs=1) print("Video available at https://youtube.com/watch?v=" + video.id) video ``` Video available at https://youtube.com/watch?v=emLW0F-VUag # Setup Run these cells to get the tutorial started. ```python #library imports import time # import time import numpy as np # import numpy import scipy as sp # import scipy import math # import basic math functions import random # import basic random number generator functions import matplotlib.pyplot as plt # import matplotlib from IPython import display ``` ```python #@title Figure Settings %matplotlib inline fig_w, fig_h = (8, 8) plt.rcParams.update({'figure.figsize': (fig_w, fig_h)}) plt.style.use('ggplot') %config InlineBackend.figure_format = 'retina' ``` ```python #@title Helper functions def get_data(cov_matrix): """ Returns a matrix of 1000 samples from a bivariate, zero-mean Gaussian Note that samples are sorted in ascending order for the first random variable. Args: cov_matrix (numpy array of floats): desired covariance matrix Returns: (numpy array of floats) : samples from the bivariate Gaussian, with each column corresponding to a different random variable """ mean = np.array([0,0]) X = np.random.multivariate_normal(mean,cov_matrix,size = 1000) indices_for_sorting = np.argsort(X[:,0]) X = X[indices_for_sorting,:] return X def plot_data(X): """ Plots bivariate data. Includes a plot of each random variable, and a scatter plot of their joint activity. The title indicates the sample correlation calculated from the data. Args: X (numpy array of floats): Data matrix each column corresponds to a different random variable Returns: Nothing. """ fig = plt.figure(figsize=[8,4]) gs = fig.add_gridspec(2,2) ax1 = fig.add_subplot(gs[0,0]) ax1.plot(X[:,0],color='k') plt.ylabel('Neuron 1') plt.title('Sample var 1: {:.1f}'.format(np.var(X[:,0]))) ax1.set_xticklabels([]) ax2 = fig.add_subplot(gs[1,0]) ax2.plot(X[:,1],color='k') plt.xlabel('Sample Number') plt.ylabel('Neuron 2') plt.title('Sample var 2: {:.1f}'.format(np.var(X[:,1]))) ax3 = fig.add_subplot(gs[:, 1]) ax3.plot(X[:,0],X[:,1],'.',markerfacecolor=[.5,.5,.5], markeredgewidth=0) ax3.axis('equal') plt.xlabel('Neuron 1 activity') plt.ylabel('Neuron 2 activity') plt.title('Sample corr: {:.1f}'.format(np.corrcoef(X[:,0],X[:,1])[0,1])) def plot_basis_vectors(X,W): """ Plots bivariate data as well as new basis vectors. Args: X (numpy array of floats): Data matrix each column corresponds to a different random variable W (numpy array of floats): Square matrix representing new orthonormal basis each column represents a basis vector Returns: Nothing. 
""" plt.figure(figsize=[4,4]) plt.plot(X[:,0],X[:,1],'.',color=[.5,.5,.5],label='Data') plt.axis('equal') plt.xlabel('Neuron 1 activity') plt.ylabel('Neuron 2 activity') plt.plot([0,W[0,0]],[0,W[1,0]],color='r',linewidth=3,label = 'Basis vector 1') plt.plot([0,W[0,1]],[0,W[1,1]],color='b',linewidth=3,label = 'Basis vector 2') plt.legend() def plot_data_new_basis(Y): """ Plots bivariate data after transformation to new bases. Similar to plot_data but with colors corresponding to projections onto basis 1 (red) and basis 2 (blue). The title indicates the sample correlation calculated from the data. Note that samples are re-sorted in ascending order for the first random variable. Args: Y (numpy array of floats): Data matrix in new basis each column corresponds to a different random variable Returns: Nothing. """ fig = plt.figure(figsize=[8,4]) gs = fig.add_gridspec(2,2) ax1 = fig.add_subplot(gs[0,0]) ax1.plot(Y[:,0],'r') plt.xlabel plt.ylabel('Projection \n basis vector 1') plt.title('Sample var 1: {:.1f}'.format(np.var(Y[:,0]))) ax1.set_xticklabels([]) ax2 = fig.add_subplot(gs[1,0]) ax2.plot(Y[:,1],'b') plt.xlabel('Sample number') plt.ylabel('Projection \n basis vector 2') plt.title('Sample var 2: {:.1f}'.format(np.var(Y[:,1]))) ax3 = fig.add_subplot(gs[:, 1]) ax3.plot(Y[:,0],Y[:,1],'.',color=[.5,.5,.5]) ax3.axis('equal') plt.xlabel('Projection basis vector 1') plt.ylabel('Projection basis vector 2') plt.title('Sample corr: {:.1f}'.format(np.corrcoef(Y[:,0],Y[:,1])[0,1])) ``` # Generate correlated multivariate data ```python #@title Video: Multivariate data from IPython.display import YouTubeVideo video = YouTubeVideo(id="YOan2BQVzTQ", width=854, height=480, fs=1) print("Video available at https://youtube.com/watch?v=" + video.id) video ``` Video available at https://youtube.com/watch?v=YOan2BQVzTQ To study multivariate data, first we generate it. In this exercise we generate data from a *bivariate normal distribution*. This is an extension of the one-dimensional normal distribution to two dimensions, in which each $x_i$ is marginally normal with mean $\mu_i$ and variance $\sigma_i^2$: \begin{align} x_i \sim \mathcal{N}(\mu_i,\sigma_i^2) \end{align} Additionally, the joint distribution for $x_1$ and $x_2$ has a specified correlation coefficient $\rho$. Recall that the correlation coefficient is a normalized version of the covariance, and ranges between -1 and +1. \begin{align} \rho = \frac{\text{cov}(x_1,x_2)}{\sqrt{\sigma_1^2 \sigma_2^2}} \end{align} For simplicity, we will assume that the mean of each variable has already been subtracted, so that $\mu_i=0$. The remaining parameters can be summarized in the covariance matrix: \begin{equation*} {\bf \Sigma} = \begin{pmatrix} \text{var}(x_1) & \text{cov}(x_1,x_2) \\ \text{cov}(x_1,x_2) &\text{var}(x_2) \end{pmatrix} \end{equation*} Note that this is a symmetric matrix with the variances $\text{var}(x_i) = \sigma_i^2$ on the diagonal, and the covariance on the off-diagonal. ### Exercise We have provided code to draw random samples from a zero-mean bivariate normal distribution. These samples could be used to simulate changes in firing rates for two neurons. Fill in the function below to calculate the covariance matrix given the desired variances and correlation coefficient. The covariance can be found by rearranging the equation above: \begin{align} \text{cov}(x_1,x_2) = \rho \sqrt{\sigma_1^2 \sigma_2^2} \end{align} Use these functions to generate and plot data while varying the parameters. 
You should get a feel for how changing the correlation coefficient affects the geometry of the simulated data. **Suggestions** * Fill in the function `calculate_cov_matrix` to calculate the covariance. * Generate and plot the data for $\sigma_1^2 =1$, $\sigma_1^2 =1$, and $\rho = .8$. Try plotting the data for different values of the correlation coefficent: $\rho = -1, -.5, 0, .5, 1$. ```python help(plot_data) help(get_data) ``` Help on function plot_data in module __main__: plot_data(X) Plots bivariate data. Includes a plot of each random variable, and a scatter plot of their joint activity. The title indicates the sample correlation calculated from the data. Args: X (numpy array of floats): Data matrix each column corresponds to a different random variable Returns: Nothing. Help on function get_data in module __main__: get_data(cov_matrix) Returns a matrix of 1000 samples from a bivariate, zero-mean Gaussian Note that samples are sorted in ascending order for the first random variable. Args: cov_matrix (numpy array of floats): desired covariance matrix Returns: (numpy array of floats) : samples from the bivariate Gaussian, with each column corresponding to a different random variable ```python def calculate_cov_matrix(var_1,var_2,corr_coef): """ Calculates the covariance matrix based on the variances and correlation coefficient. Args: var_1 (scalar): variance of the first random variable var_2 (scalar): variance of the second random variable corr_coef (scalar): correlation coefficient Returns: (numpy array of floats) : covariance matrix """ ################################################################### ## Insert your code here to: ## calculate the covariance from the variances and correlation # cov = ... cov_matrix = np.array([[var_1,cov],[cov,var_2]]) #uncomment once you've filled in the function raise NotImplementedError("Student excercise: calculate the covariance matrix!") ################################################################### return cov ################################################################### ## Insert your code here to: ## generate and plot bivariate Gaussian data with variances of 1 ## and a correlation coefficients of: 0.8 ## repeat while varying the correlation coefficient from -1 to 1 ################################################################### variance_1 = 1 variance_2 = 1 corr_coef = 0.8 #uncomment to test your code and plot #cov_matrix = calculate_cov_matrix(variance_1,variance_2,corr_coef) #X = get_data(cov_matrix) #plot_data(X) ``` [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D5_DimensionalityReduction/solutions/W1D5_Tutorial1_Solution_62df7ae6.py) *Example output:* # Define a new orthonormal basis ```python #@title Video: Orthonormal bases from IPython.display import YouTubeVideo video = YouTubeVideo(id="dK526Nbn2Xo", width=854, height=480, fs=1) print("Video available at https://youtube.com/watch?v=" + video.id) video ``` Video available at https://youtube.com/watch?v=dK526Nbn2Xo Next, we will define a new orthonormal basis of vectors ${\bf u} = [u_1,u_2]$ and ${\bf w} = [w_1,w_2]$. As we learned in the video, two vectors are orthonormal if: 1. They are orthogonal (i.e., their dot product is zero): \begin{equation} {\bf u\cdot w} = u_1 w_1 + u_2 w_2 = 0 \end{equation} 2. They have unit length: \begin{equation} ||{\bf u} || = ||{\bf w} || = 1 \end{equation} In two dimensions, it is easy to make an arbitrary orthonormal basis. 
All we need is a random vector ${\bf u}$, which we have normalized. If we now define the second basis vector to be ${\bf w} = [-u_2,u_1]$, we can check that both conditions are satisfied: \begin{equation} {\bf u\cdot w} = - u_1 u_2 + u_2 u_1 = 0 \end{equation} and \begin{equation} {|| {\bf w} ||} = \sqrt{(-u_2)^2 + u_1^2} = \sqrt{u_1^2 + u_2^2} = 1, \end{equation} where we used the fact that ${\bf u}$ is normalized. So, with an arbitrary input vector, we can define an orthonormal basis, which we will write in matrix by stacking the basis vectors horizontally: \begin{equation} {{\bf W} } = \begin{pmatrix} u_1 & w_1 \\ u_2 & w_2 \end{pmatrix}. \end{equation} ### Exercise In this exercise you will fill in the function below to define an orthonormal basis, given a single arbitrary 2-dimensional vector as an input. **Suggestions** * Modify the function `define_orthonormal_basis` to first normalize the first basis vector $\bf u$. * Then complete the function by finding a basis vector $\bf w$ that is orthogonal to $\bf u$. * Test the function using initial basis vector ${\bf u} = [3,1]$. Plot the resulting basis vectors on top of the data scatter plot using the function `plot_basis_vectors`. (For the data, use $\sigma_1^2 =1$, $\sigma_1^2 =1$, and $\rho = .8$). ```python help(plot_basis_vectors) ``` Help on function plot_basis_vectors in module __main__: plot_basis_vectors(X, W) Plots bivariate data as well as new basis vectors. Args: X (numpy array of floats): Data matrix each column corresponds to a different random variable W (numpy array of floats): Square matrix representing new orthonormal basis each column represents a basis vector Returns: Nothing. ```python def define_orthonormal_basis(u): """ Calculates an orthonormal basis given an arbitrary vector u. Args: u (numpy array of floats): arbitrary 2-dimensional vector used for new basis Returns: (numpy array of floats) : new orthonormal basis columns correspond to basis vectors """ ################################################################### ## Insert your code here to: ## normalize vector u ## calculate vector w that is orthogonal to w #u = .... #w = ... #W = np.column_stack((u,w)) #comment this once you've filled the function raise NotImplementedError("Student excercise: implement the orthonormal basis function") ################################################################### return W variance_1 = 1 variance_2 = 1 corr_coef = 0.8 cov_matrix = calculate_cov_matrix(variance_1,variance_2,corr_coef) X = get_data(cov_matrix) u = np.array([3,1]) #uncomment and run below to plot the basis vectors ##define_orthonormal_basis(u) #plot_basis_vectors(X,W) ``` [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D5_DimensionalityReduction/solutions/W1D5_Tutorial1_Solution_c9ca4afa.py) *Example output:* # Project data onto new basis ```python #@title Video: Change of basis from IPython.display import YouTubeVideo video = YouTubeVideo(id="5MWSUtpbSt0", width=854, height=480, fs=1) print("Video available at https://youtube.com/watch?v=" + video.id) video ``` Video available at https://youtube.com/watch?v=5MWSUtpbSt0 Finally, we will express our data in the new basis that we have just found. Since $\bf W$ is orthonormal, we can project the data into our new basis using simple matrix multiplication : \begin{equation} {\bf Y = X W}. \end{equation} We will explore the geometry of the transformed data $\bf Y$ as we vary the choice of basis. 
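As a standalone illustration (separate from the exercises below, and using an arbitrary angle rather than the tutorial's variables), the short sketch here builds an orthonormal basis from an angle, checks that ${\bf W}^T {\bf W} = I$, and projects some random samples with a single matrix multiplication. The `_demo` names are made up so that they do not clash with the variables used in the exercises.

```python
# Standalone demo: an orthonormal basis from an angle, and projection by matrix multiplication
theta_demo = np.radians(30)                                  # arbitrary angle
u_demo = np.array([np.cos(theta_demo), np.sin(theta_demo)])  # unit-length first basis vector
w_demo = np.array([-u_demo[1], u_demo[0]])                   # orthogonal second basis vector
W_demo = np.column_stack((u_demo, w_demo))

print(np.allclose(W_demo.T @ W_demo, np.eye(2)))             # orthonormality check: W^T W = I

X_demo = np.random.multivariate_normal([0, 0], [[1, .8], [.8, 1]], size=1000)
Y_demo = X_demo @ W_demo                                     # projection onto the new basis
print(np.cov(Y_demo.T).round(2))                             # sample covariance in the new basis
```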
#### Exercise In this exercise you will fill in the function below to define an orthonormal basis, given a single arbitrary vector as an input. **Suggestions** * Complete the function `change_of_basis` to project the data onto the new basis. * Plot the projected data using the function `plot_data_new_basis`. * What happens to the correlation coefficient in the new basis? Does it increase or decrease? * What happens to variance? ```python def change_of_basis(X,W): """ Projects data onto new basis W. Args: X (numpy array of floats) : Data matrix each column corresponding to a different random variable W (numpy array of floats): new orthonormal basis columns correspond to basis vectors Returns: (numpy array of floats) : Data matrix expressed in new basis """ ################################################################### ## Insert your code here to: ## project data onto new basis described by W #Y = ... #comment this once you've filled the function raise NotImplementedError("Student excercise: implement change of basis") ################################################################### return Y ## Unomment below to transform the data by projecting it into the new basis ## Plot the projected data # Y = change_of_basis(X,W) # plot_data_new_basis(Y) # disp(...) ``` [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D5_DimensionalityReduction/solutions/W1D5_Tutorial1_Solution_b434bc0d.py) *Example output:* #### Exercise To see what happens to the correlation as we change the basis vectors, run the cell below. The parameter $\theta$ controls the angle of $\bf u$ in degrees. Use the slider to rotate the basis vectors. **Questions** * What happens to the projected data as you rotate the basis? * How does the correlation coefficient change? How does the variance of the projection onto each basis vector change? * Are you able to find a basis in which the projected data is uncorrelated? ```python ###### MAKE SURE TO RUN THIS CELL VIA THE PLAY BUTTON TO ENABLE SLIDERS ######## import ipywidgets as widgets def refresh(theta = 0): u = [1,np.tan(theta * np.pi/180.)] W = define_orthonormal_basis(u) Y = change_of_basis(X,W) plot_basis_vectors(X,W) plot_data_new_basis(Y) _ = widgets.interact(refresh, theta = (0, 90, 5)) ```
c5bcbaef8f255135721a0464f140922323571515
353,246
ipynb
Jupyter Notebook
tutorials/W1D5_DimensionalityReduction/student/W1D5_Tutorial1.ipynb
liuxiaomiao123/NeuroMathAcademy
16a7969604a300bf9fbb86f8a5b26050ebd14c65
[ "CC-BY-4.0" ]
2
2020-07-03T04:39:09.000Z
2020-07-12T02:08:31.000Z
tutorials/W1D5_DimensionalityReduction/student/W1D5_Tutorial1.ipynb
NinaHKivanani/course-content
3c91dd1a669cebce892486ba4f8086b1ef2e1e49
[ "CC-BY-4.0" ]
1
2020-06-22T22:57:03.000Z
2020-06-22T22:57:03.000Z
tutorials/W1D5_DimensionalityReduction/student/W1D5_Tutorial1.ipynb
NinaHKivanani/course-content
3c91dd1a669cebce892486ba4f8086b1ef2e1e49
[ "CC-BY-4.0" ]
1
2021-08-06T08:05:01.000Z
2021-08-06T08:05:01.000Z
285.10573
128,820
0.919037
true
4,467
Qwen/Qwen-72B
1. YES 2. YES
0.760651
0.815232
0.620107
__label__eng_Latn
0.934982
0.279047
# Optimizer tweaks ``` %load_ext autoreload %autoreload 2 %matplotlib inline ``` ``` #export from exp.nb_08 import * ``` ## Imagenette data We grab the data from the previous notebook. ``` path = datasets.untar_data(datasets.URLs.IMAGENETTE_160) ``` ``` tfms = [make_rgb, ResizeFixed(128), to_byte_tensor, to_float_tensor] bs=128 il = ImageList.from_files(path, tfms=tfms) sd = SplitData.split_by_func(il, partial(grandparent_splitter, valid_name='val')) ll = label_by_func(sd, parent_labeler) data = ll.to_databunch(bs, c_in=3, c_out=10, num_workers=4) ``` Then a model ``` nfs = [32,64,128,256] ``` ``` cbfs = [partial(AvgStatsCallback,accuracy), CudaCallback, partial(BatchTransformXCallback, norm_imagenette)] ``` This is the baseline of training with vanilla SGD. ``` learn,run = get_learn_run(nfs, data, 0.4, conv_layer, cbs=cbfs) ``` ``` run.fit(1, learn) ``` train: [1.7456596212385607, tensor(0.3921, device='cuda:0')] valid: [1.64411865234375, tensor(0.4220, device='cuda:0')] ## Refining the optimizer In PyTorch, the base optimizer in `torch.optim` is just a dictionary that stores the hyper-parameters and references to the parameters of the model we want to train in parameter groups (different groups can have different learning rates/momentum/weight decay... which is what lets us do discriminative learning rates). It contains a method `step` that will update our parameters with the gradients and a method `zero_grad` to detach and zero the gradients of all our parameters. We build the equivalent from scratch. In our implementation, the step function loops over all the parameters to execute the step using stepper functions that we have to provide when initializing the optimizer. ``` class Optimizer(): def __init__(self, params, steppers, **defaults): # might be a generator self.param_groups = list(params) # ensure params is a list of lists if not isinstance(self.param_groups[0], list): self.param_groups = [self.param_groups] self.hypers = [{**defaults} for p in self.param_groups] self.steppers = listify(steppers) def grad_params(self): return [(p,hyper) for pg,hyper in zip(self.param_groups,self.hypers) for p in pg if p.grad is not None] def zero_grad(self): for p,hyper in self.grad_params(): p.grad.detach_() p.grad.zero_() def step(self): for p,hyper in self.grad_params(): compose(p, self.steppers, **hyper) ``` Now that we have changed the optimizer, we will need to adjust the callbacks that were using properties from the PyTorch optimizer: in particular the hyper-parameters are in the list of dictionaries `opt.hypers` (PyTorch has everything in the the list of param groups). ``` #export class Recorder(Callback): def begin_fit(self): self.lrs,self.losses = [],[] def after_batch(self): if not self.in_train: return self.lrs.append(self.opt.hypers[-1]['lr']) self.losses.append(self.loss.detach().cpu()) def plot_lr (self): plt.plot(self.lrs) def plot_loss(self): plt.plot(self.losses) class ParamScheduler(Callback): _order=1 def __init__(self, pname, sched_func): self.pname,self.sched_func = pname,sched_func def set_param(self): for h in self.opt.hypers: h[self.pname] = self.sched_func(self.n_epochs/self.epochs) def begin_batch(self): if self.in_train: self.set_param() ``` To do basic SGD, this what a step looks like: ``` #export def sgd_step(p, lr, **kwargs): p.data.add_(-lr, p.grad.data) return p ``` So let's check we didn't break anything and that recorder and param scheduler work properly. 
``` sched = combine_scheds([0.3, 0.7], [sched_cos(0.3, 0.6), sched_cos(0.6, 0.2)]) ``` ``` cbfs = [partial(AvgStatsCallback,accuracy), CudaCallback, Recorder, partial(ParamScheduler, 'lr', sched)] ``` ``` opt_func = partial(Optimizer, steppers=[sgd_step]) ``` ``` learn,run = get_learn_run(nfs, data, 0.4, conv_layer, cbs=cbfs, opt_func=opt_func) ``` ``` %time run.fit(1, learn) ``` train: [1.8056946605494804, tensor(0.3653, device='cuda:0')] valid: [1.511763916015625, tensor(0.4840, device='cuda:0')] CPU times: user 4.93 s, sys: 2.32 s, total: 7.25 s Wall time: 11.1 s ``` run.recorder.plot_loss() ``` ``` run.recorder.plot_lr() ``` ## Weight decay By letting our model learn arbitrarily large parameters, it might fit all the data points in the training set with an over-complex function that has very sharp changes, which will lead to overfitting. Weight decay comes from the idea of L2 regularization, which consists in adding to your loss function the sum of all the weights squared. Why do that? Because when we compute the gradients, it will add a contribution to them that will encourage the weights to be as small as possible. Limiting our weights from growing too much is going to hinder the training of the model, but it will yield a state where it generalizes better. Going back to the theory a little bit, weight decay (or just `wd`) is a parameter that controls that sum of squares we add to our loss: ``` python loss_with_wd = loss + (wd/2) * (weights**2).sum() ``` In practice though, it would be very inefficient (and maybe numerically unstable) to compute that big sum and add it to the loss. If you remember a little bit of high school math, you should know that the derivative of `p**2` with respect to `p` is simply `2*p`, so adding that big sum to our loss is exactly the same as doing ``` python weight.grad += wd * weight ``` for every weight in our model, which is equivalent to (in the case of vanilla SGD) updating the parameters with ``` python weight = weight - lr*(weight.grad + wd*weight) ``` This last formula explains why the name of this technique is weight decay, as each weight is decayed by a factor `lr * wd`. This only works for standard SGD, as we have seen that with momentum, RMSProp or in Adam, the update has some additional formulas around the gradient. In those cases, the formula that comes from L2 regularization: ``` python weight.grad += wd * weight ``` is different from weight decay ``` python new_weight = weight - lr * weight.grad - lr * wd * weight ``` Most libraries use the first one, but as it was pointed out in [Decoupled Weight Decay Regularization](https://arxiv.org/pdf/1711.05101.pdf) by Ilya Loshchilov and Frank Hutter, it is better to use the second one with the Adam optimizer, which is why fastai made it its default. Let's allow steppers to add to our `defaults` (which are the default values of all the hyper-parameters). This helper function adds to `dest` the key/value pairs it finds while going through `os` and applying `f`, whenever there is no key of the same name already present. ``` #export def maybe_update(os, dest, f): for o in os: for k,v in f(o).items(): if k not in dest: dest[k] = v def get_defaults(d): return getattr(d,'_defaults',{}) ``` This is the same as before, we just take the default values of the steppers when none are provided in the kwargs.
``` #export class Optimizer(): def __init__(self, params, steppers, **defaults): self.steppers = listify(steppers) maybe_update(self.steppers, defaults, get_defaults) # might be a generator self.param_groups = list(params) # ensure params is a list of lists if not isinstance(self.param_groups[0], list): self.param_groups = [self.param_groups] self.hypers = [{**defaults} for p in self.param_groups] def grad_params(self): return [(p,hyper) for pg,hyper in zip(self.param_groups,self.hypers) for p in pg if p.grad is not None] def zero_grad(self): for p,hyper in self.grad_params(): p.grad.detach_() p.grad.zero_() def step(self): for p,hyper in self.grad_params(): compose(p, self.steppers, **hyper) ``` Weight decay is subtracting `lr*wd*weight` from the weights. We need this function to have an attribute `_defaults` so that we are sure there is an hyper-parameter of the same name in our `Optimizer`. ``` #export def weight_decay(p, lr, wd, **kwargs): p.data.mul_(1 - lr*wd) return p weight_decay._defaults = dict(wd=0.) ``` L2 regularization is adding `wd*weight` to the gradients. ``` #export def l2_reg(p, lr, wd, **kwargs): p.grad.data.add_(wd, p.data) return p l2_reg._defaults = dict(wd=0.) ``` ``` #export sgd_opt = partial(Optimizer, steppers=[weight_decay, sgd_step]) ``` ``` learn,run = get_learn_run(nfs, data, 0.4, conv_layer, cbs=cbfs, opt_func=sgd_opt) ``` Before trying to train, let's check the behavior works as intended: when we don't provide a value for `wd`, we pull the corresponding default from `weight_decay`. ``` model = learn.model ``` ``` opt = sgd_opt(model.parameters(), lr=0.1) test_eq(opt.hypers[0]['wd'], 0.) test_eq(opt.hypers[0]['lr'], 0.1) ``` But if we provide a value, it overrides the default. ``` opt = sgd_opt(model.parameters(), lr=0.1, wd=1e-4) test_eq(opt.hypers[0]['wd'], 1e-4) test_eq(opt.hypers[0]['lr'], 0.1) ``` Now let's fit. ``` cbfs = [partial(AvgStatsCallback,accuracy), CudaCallback] ``` ``` learn,run = get_learn_run(nfs, data, 0.3, conv_layer, cbs=cbfs, opt_func=partial(sgd_opt, wd=0.01)) ``` ``` run.fit(1, learn) ``` train: [1.8429603582577168, tensor(0.3637, device='cuda:0')] valid: [1.543950439453125, tensor(0.4700, device='cuda:0')] This is already better than the baseline! ## With momentum Momentum requires to add some state. We need to save the moving average of the gradients to be able to do the step and store this inside the optimizer state. To do this, we introduce statistics. Statistics are object with two methods: - `init_state`, that returns the initial state (a tensor of 0. for the moving average of gradients) - `update`, that updates the state with the new gradient value We also read the `_defaults` values of those objects, to allow them to provide default values to hyper-parameters. ``` #export class StatefulOptimizer(Optimizer): def __init__(self, params, steppers, stats=None, **defaults): self.stats = listify(stats) maybe_update(self.stats, defaults, get_defaults) super().__init__(params, steppers, **defaults) self.state = {} def step(self): for p,hyper in self.grad_params(): if p not in self.state: #Create a state for p and call all the statistics to initialize it. 
self.state[p] = {} maybe_update(self.stats, self.state[p], lambda o: o.init_state(p)) state = self.state[p] for stat in self.stats: state = stat.update(p, state, **hyper) compose(p, self.steppers, **state, **hyper) self.state[p] = state ``` ``` #export class Stat(): _defaults = {} def init_state(self, p): raise NotImplementedError def update(self, p, state, **kwargs): raise NotImplementedError ``` Here is an example of `Stat`: ``` class AverageGrad(Stat): _defaults = dict(mom=0.9) def init_state(self, p): return {'grad_avg': torch.zeros_like(p.grad.data)} def update(self, p, state, mom, **kwargs): state['grad_avg'].mul_(mom).add_(p.grad.data) return state ``` Then we add the momentum step (instead of using the gradients to perform the step, we use the average). ``` #export def momentum_step(p, lr, grad_avg, **kwargs): p.data.add_(-lr, grad_avg) return p ``` ``` sgd_mom_opt = partial(StatefulOptimizer, steppers=[momentum_step,weight_decay], stats=AverageGrad(), wd=0.01) ``` ``` learn,run = get_learn_run(nfs, data, 0.3, conv_layer, cbs=cbfs, opt_func=sgd_mom_opt) ``` ``` run.fit(1, learn) ``` train: [1.7483108967833876, tensor(0.3988, device='cuda:0')] valid: [1.58801171875, tensor(0.4600, device='cuda:0')] ### Momentum experiments What does momentum do to the gradients exactly? Let's do some plots to find out! ``` x = torch.linspace(-4, 4, 200) y = torch.randn(200) + 0.3 betas = [0.5,0.7,0.9,0.99] ``` ``` def plot_mom(f): _,axs = plt.subplots(2,2, figsize=(12,8)) for beta,ax in zip(betas, axs.flatten()): ax.plot(y, linestyle='None', marker='.') avg,res = None,[] for i,yi in enumerate(y): avg,p = f(avg, beta, yi, i) res.append(p) ax.plot(res, color='red') ax.set_title(f'beta={beta}') ``` This is the regular momentum. ``` def mom1(avg, beta, yi, i): if avg is None: avg=yi res = beta*avg + yi return res,res plot_mom(mom1) ``` As we can see, with a too high value, it may go way too high with no way to change its course. Another way to smooth noisy data is to do an exponentially weighted moving average. In this case, there is a dampening of (1-beta) in front of the new value, which is less trusted than the current average. ``` def ewma(v1, v2, beta): return beta*v1 + (1-beta)*v2 ``` ``` def mom2(avg, beta, yi, i): if avg is None: avg=yi avg = ewma(avg, yi, beta) return avg, avg plot_mom(mom2) ``` We can see it gets to a zero-constant when the data is purely random. If the data has a certain shape, it will get that shape (with some delay for high beta). ``` y = 1 - (x/3) ** 2 + torch.randn(200) * 0.1 ``` ``` y[0]=0.5 ``` ``` plot_mom(mom2) ``` Debiasing is here to correct the wrong information we may have in the very first batch. The debias term corresponds to the sum of the coefficient in our moving average. 
At the time step i, our average is: $\begin{align*} avg_{i} &= \beta\ avg_{i-1} + (1-\beta)\ v_{i} = \beta\ (\beta\ avg_{i-2} + (1-\beta)\ v_{i-1}) + (1-\beta)\ v_{i} \\ &= \beta^{2}\ avg_{i-2} + (1-\beta)\ \beta\ v_{i-1} + (1-\beta)\ v_{i} \\ &= \beta^{3}\ avg_{i-3} + (1-\beta)\ \beta^{2}\ v_{i-2} + (1-\beta)\ \beta\ v_{i-1} + (1-\beta)\ v_{i} \\ &\vdots \\ &= (1-\beta)\ \beta^{i}\ v_{0} + (1-\beta)\ \beta^{i-1}\ v_{1} + \cdots + (1-\beta)\ \beta^{2}\ v_{i-2} + (1-\beta)\ \beta\ v_{i-1} + (1-\beta)\ v_{i} \end{align*}$ and so the sum of the coefficients is $\begin{align*} S &=(1-\beta)\ \beta^{i} + (1-\beta)\ \beta^{i-1} + \cdots + (1-\beta)\ \beta^{2} + (1-\beta)\ \beta + (1-\beta) \\ &= (\beta^{i} - \beta^{i+1}) + (\beta^{i-1} - \beta^{i}) + \cdots + (\beta^{2} - \beta^{3}) + (\beta - \beta^{2}) + (1-\beta) \\ &= 1 - \beta^{i+1} \end{align*}$ since all the other terms cancel out each other. By dividing by this term, we make our moving average a true average (in the sense that all the coefficients we used for the average sum up to 1). ``` def mom3(avg, beta, yi, i): if avg is None: avg=0 avg = ewma(avg, yi, beta) return avg, avg/(1-beta**(i+1)) plot_mom(mom3) ``` ## Adam and friends In Adam, we use the gradient averages but with dampening (not like in SGD with momentum), so let's add this to the `AverageGrad` class. ``` #export class AverageGrad(Stat): _defaults = dict(mom=0.9) def __init__(self, dampening:bool=False): self.dampening=dampening def init_state(self, p): return {'grad_avg': torch.zeros_like(p.grad.data)} def update(self, p, state, mom, **kwargs): state['mom_damp'] = 1-mom if self.dampening else 1. state['grad_avg'].mul_(mom).add_(state['mom_damp'], p.grad.data) return state ``` We also need to track the moving average of the gradients squared. ``` #export class AverageSqrGrad(Stat): _defaults = dict(sqr_mom=0.99) def __init__(self, dampening:bool=True): self.dampening=dampening def init_state(self, p): return {'sqr_avg': torch.zeros_like(p.grad.data)} def update(self, p, state, sqr_mom, **kwargs): state['sqr_damp'] = 1 - sqr_mom if self.dampening else 1. state['sqr_avg'].mul_(sqr_mom).addcmul_(state['sqr_damp'], p.grad.data, p.grad.data) return state ``` We will also need the number of steps done during training for the debiasing. ``` #export class StepCount(Stat): def init_state(self, p): return {'step': 0} def update(self, p, state, **kwargs): state['step'] += 1 return state ``` This helper function computes the debias term. If we dampening, `damp = 1 - mom` and we get the same result as before. If we don't use dampening, (`damp = 1`) we will need to divide by `1 - mom` because that term is missing everywhere. 
``` #export def debias(mom, damp, step): return damp * (1 - mom**step) / (1-mom) ``` Then the Adam step is just the following: ``` #export def adam_step(p, lr, mom, mom_damp, step, sqr_mom, sqr_damp, grad_avg, sqr_avg, eps, **kwargs): debias1 = debias(mom, mom_damp, step) debias2 = debias(sqr_mom, sqr_damp, step) p.data.addcdiv_(-lr / debias1, grad_avg, (sqr_avg/debias2).sqrt() + eps) return p adam_step._defaults = dict(eps=1e-5) ``` ``` #Export adam_opt = partial(StatefulOptimizer, steppers=adam_step, stats=[AverageGrad(dampening=True), AverageSqrGrad(), StepCount()]) ``` ``` learn,run = get_learn_run(nfs, data, 0.001, conv_layer, cbs=cbfs, opt_func=adam_opt) ``` ``` run.fit(3, learn) ``` train: [1.7502999214751047, tensor(0.3965, device='cuda:0')] valid: [1.395741943359375, tensor(0.5280, device='cuda:0')] train: [1.2368107259190322, tensor(0.5956, device='cuda:0')] valid: [1.130247314453125, tensor(0.6280, device='cuda:0')] train: [0.9639695091951683, tensor(0.6891, device='cuda:0')] valid: [1.077808837890625, tensor(0.6520, device='cuda:0')] ## LAMB It's then super easy to implement a new optimizer. This is LAMB from a [very recent paper](https://arxiv.org/pdf/1904.00962.pdf): $\begin{align} g_{t}^{l} &= \nabla L(w_{t-1}^{l}, x_{t}) \\ m_{t}^{l} &= \beta_{1} m_{t-1}^{l} + (1-\beta_{1}) g_{t}^{l} \\ v_{t}^{l} &= \beta_{2} v_{t-1}^{l} + (1-\beta_{2}) g_{t}^{l} \odot g_{t}^{l} \\ m_{t}^{l} &= m_{t}^{l} / (1 - \beta_{1}^{t}) \\ v_{t}^{l} &= v_{t}^{l} / (1 - \beta_{2}^{t}) \\ r_{1} &= \|w_{t-1}^{l}\|_{2} \\ s_{t}^{l} &= \frac{m_{t}^{l}}{\sqrt{v_{t}^{l} + \epsilon}} + \lambda w_{t-1}^{l} \\ r_{2} &= \| s_{t}^{l} \|_{2} \\ \eta^{l} &= \eta * r_{1}/r_{2} \\ w_{t}^{l} &= w_{t}^{l-1} - \eta_{l} * s_{t}^{l} \\ \end{align}$ ``` def lamb_step(p, lr, mom, mom_damp, step, sqr_mom, sqr_damp, grad_avg, sqr_avg, eps, wd, **kwargs): debias1 = debias(mom, mom_damp, step) debias2 = debias(sqr_mom, sqr_damp, step) r1 = p.data.pow(2).mean().sqrt() step = (grad_avg/debias1) / ((sqr_avg/debias2).sqrt()+eps) + wd*p.data r2 = step.pow(2).mean().sqrt() p.data.add_(-lr * min(r1/r2,10), step) return p lamb_step._defaults = dict(eps=1e-6, wd=0.) ``` ``` lamb = partial(StatefulOptimizer, steppers=lamb_step, stats=[AverageGrad(dampening=True), AverageSqrGrad(), StepCount()]) ``` ``` learn,run = get_learn_run(nfs, data, 0.003, conv_layer, cbs=cbfs, opt_func=lamb) ``` ``` run.fit(3, learn) ``` train: [1.8331064230456027, tensor(0.3623, device='cuda:0')] valid: [1.45334814453125, tensor(0.5120, device='cuda:0')] train: [1.344959228856445, tensor(0.5551, device='cuda:0')] valid: [1.387230712890625, tensor(0.5440, device='cuda:0')] train: [1.0950260295486274, tensor(0.6420, device='cuda:0')] valid: [1.0847763671875, tensor(0.6380, device='cuda:0')] Other recent variants of optimizers: - [Large Batch Training of Convolutional Networks](https://arxiv.org/abs/1708.03888) (LARS also uses weight statistics, not just gradient statistics. Can you add that to this class?) - [Adafactor: Adaptive Learning Rates with Sublinear Memory Cost](https://arxiv.org/abs/1804.04235) (Adafactor combines stats over multiple sets of axes) - [Adaptive Gradient Methods with Dynamic Bound of Learning Rate](https://arxiv.org/abs/1902.09843) ## Export ``` #export sgd_mom_opt = partial(StatefulOptimizer, steppers=[momentum_step,weight_decay], stats=AverageGrad(), wd=0.01) ``` ``` !python notebook2script.py 09_optimizers.ipynb ``` Converted 09_optimizers.ipynb to exp/nb_09.py ``` ```
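As a quick numerical aside, a standalone sketch in plain PyTorch (it does not use the notebook's `Optimizer` classes, and the tensors and hyper-parameters below are made up for illustration) can check the claim from the weight-decay section: L2 regularization and decoupled weight decay give identical updates for vanilla SGD, and only diverge once the gradient is rescaled, as in Adam.

```python
import torch

# For plain SGD, adding wd*w to the gradient (L2 reg) and decaying the weight
# directly (decoupled weight decay) produce the same parameter update.
torch.manual_seed(0)
w0 = torch.randn(5)
grad = torch.randn(5)
lr, wd = 0.1, 0.01

w_l2 = w0 - lr * (grad + wd * w0)      # SGD on loss + (wd/2)*||w||^2
w_wd = w0 * (1 - lr * wd) - lr * grad  # decoupled weight decay step
print(torch.allclose(w_l2, w_wd))      # True for vanilla SGD

# With an adaptive denominator (Adam-like rescaling of the gradient), the
# wd*w term inside the gradient gets rescaled too, so the two schemes differ.
denom = torch.rand(5) + 0.5
w_l2_adam = w0 - lr * (grad + wd * w0) / denom
w_wd_adam = w0 * (1 - lr * wd) - lr * grad / denom
print(torch.allclose(w_l2_adam, w_wd_adam))  # False in general
```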
b4052e532d6250a1a265d51f1a61eaa7a9e682ea
410,968
ipynb
Jupyter Notebook
dev_course/dl2/09_optimizers.ipynb
rohitgr7/fastai_docs
531139ac17dd2e0cf08a99b6f894dbca5028e436
[ "Apache-2.0" ]
null
null
null
dev_course/dl2/09_optimizers.ipynb
rohitgr7/fastai_docs
531139ac17dd2e0cf08a99b6f894dbca5028e436
[ "Apache-2.0" ]
null
null
null
dev_course/dl2/09_optimizers.ipynb
rohitgr7/fastai_docs
531139ac17dd2e0cf08a99b6f894dbca5028e436
[ "Apache-2.0" ]
null
null
null
324.363062
114,928
0.93098
true
6,065
Qwen/Qwen-72B
1. YES 2. YES
0.743168
0.721743
0.536376
__label__eng_Latn
0.910782
0.084512
# La méthode des multiplicateurs de Lagrange **TODO**: * https://www.google.fr/webhp?ie=utf-8&oe=utf-8&client=firefox-b&gfe_rd=cr&ei=kutIWYeiKoXS8Afc25yQBQ#safe=active&q=m%C3%A9thode+des+multiplicateurs+de+lagrange ## À quoi ça sert ? À trouver les extremums (minimums, maximums) d'une fonction $f$ d'une ou plusieurs variables $x_1, \dots, x_n$, sous réserve que l'ensemble solution respecte un contrainte d'égalité: $g(x_1, \dots, x_n) = 0$. Autrement dit la méthode des multiplicateurs de Lagrange va permettre de résoudre certains problèmes d'*optimisation sous contraintes*. Exemple: maximiser $f(x_1,x_2)$ soumise aux contraintes $g(x_1,x_2)=0$. ## Cas d'une fonction à une variable Dans ce cas, on écrit simplement $x := x_1$. On pose la fonction: $$ \mathcal{L}(x, \lambda) = f(x) + \lambda g(x) $$ $\lambda$ est ce qu'on appelle un *multiplicateur de Lagrange* ; sa valeur n'est pas connue à priori. Pour maximiser $\mathcal{L}$, on annule ses dérivées partielles (condition nécessaire du premier ordre). Le problème initial revient à résoudre le système d'équations à deux inconnues suivant: $$ \left\{ \begin{array}{rcl} {\large \frac{\partial\mathcal{L}(x,\lambda)}{\partial x}} & = & 0 \\ {\large \frac{\partial\mathcal{L}(x,\lambda)}{\partial \lambda}} & = & 0 \end{array} \right. $$ ### Exemple On cherche à résoudre le problème d'optimisation suivant: $$ \begin{align} \max & \quad f(x) = a x^2 + b x + c \\ \text{s.t.} & \quad g(x) = a' x^2 + b' x + c' = 0 \end{align} $$ avec par exemple $a = -\frac12$, $b = 0$, $c = 20$, $a' = 1$, $b' = 2$ et $c' = -10$ soit $$ \begin{align} \max & \quad f(x) = -\frac12 x^2 + 20 \\ \text{s.t.} & \quad g(x) = x^2 + 2 x - 10 = 0 \end{align} $$ avec: * $f(x) = -\frac12 x^2 + 20$ la fonction à maximiser * $g(x) = x^2 + 2 x - 10 = 0$ la contrainte à respecter La fonction de Lagrange correspondant à ce problème est: $$ \begin{align} \mathcal{L}(x,\lambda) & = f(x) + \lambda g(x) \\ & = -\frac12 x^2 + 20 + \lambda (x^2 + 2 x - 10) \\ & = -\frac12 x^2 + 20 + \lambda x^2 + 2 \lambda x - 10 \lambda \\ & = (-\frac12 + \lambda) x^2 + 2 \lambda x + 20 - 10 \lambda \end{align} $$ Les conditions du premier ordre (annulation des dérivées premières) sont données par: $$ \begin{align} \left\{ \begin{array}{rcl} {\large \frac{\partial\mathcal{L}(x,\lambda)}{\partial x}} & = & 0 \\ {\large \frac{\partial\mathcal{L}(x,\lambda)}{\partial \lambda}} & = & 0 \end{array} \right. & \Leftrightarrow \left\{ \begin{array}{rcl} 2 (-\frac12 + \lambda) x + 2 \lambda & = & 0 \\ x^2 + 2 x - 10 & = & 0 \end{array} \right. \\ & \\ & \Leftrightarrow \left\{ \begin{array}{rcl} -x + 2 \lambda x + 2 \lambda & = & 0 \\ x & = & -\sqrt{11} - 1 \quad \text{ou} \quad x = \sqrt{11} - 1 \end{array} \right. \\ & \\ & \Leftrightarrow \left\{ \begin{array}{rcl} 2 \lambda x + 2 \lambda & = & x \\ x & = & -\sqrt{11} - 1 \quad \text{ou} \quad x = \sqrt{11} - 1 \end{array} \right. \\ & \\ & \Leftrightarrow \left\{ \begin{array}{rcl} \lambda & = & \frac{x}{2x + 2} \\ x & = & -\sqrt{11} - 1 \quad \text{ou} \quad x = \sqrt{11} - 1 \end{array} \right. \\ & \\ & \Leftrightarrow \left\{ \begin{array}{rcl} \lambda & = & \frac{-\sqrt{11} - 1}{2(-\sqrt{11} - 1) + 2} \\ x & = & -\sqrt{11} - 1 \end{array} \right. \quad \text{ou} \quad \left\{ \begin{array}{rcl} \lambda & = & \frac{\sqrt{11} - 1}{2(\sqrt{11} - 1) + 2} \\ x & = & \sqrt{11} - 1 \end{array} \right. \\ & \\ & \Leftrightarrow \left\{ \begin{array}{rcl} \lambda & = & \frac{\sqrt{11} + 1}{2 \sqrt{11}} \\ x & = & -\sqrt{11} - 1 \end{array} \right. 
\quad \text{ou} \quad \left\{ \begin{array}{rcl} \lambda & = & \frac{\sqrt{11} - 1}{2 \sqrt{11}} \\ x & = & \sqrt{11} - 1 \end{array} \right. \end{align} $$ ```python # Check results #def deriv_partielle_x(x, _lambda): # return 2. * (-0.5 + _lambda) * x + 2. * _lambda #def deriv_partielle_lambda(x, _lambda): # return x**2. + 2. * x - 10. #print((math.sqrt(11.) + 1.)/(2. * math.sqrt(11))) #print((math.sqrt(11.) - 1.)/(2. * math.sqrt(11))) #print(-math.sqrt(11.) -1.) #print(math.sqrt(11.) - 1.) #print(g_roots, lambda_roots) #print(deriv_partielle_x(g_roots[0], lambda_roots[0])) # ERR TODO #print(deriv_partielle_lambda(g_roots[0], lambda_roots[0])) # OK #print(deriv_partielle_x(g_roots[1], lambda_roots[1])) # ERR TODO #print(deriv_partielle_lambda(g_roots[1], lambda_roots[1])) # OK # TODO: REVOIR LES CALCULS POUR DERIV_LAMBDA ET LAMBDA_ROOTS, LES RESULTATS NE VERIFIENT PAS LE SYSTEME... ``` TODO Bon, ok, l'exemple 1D est un peu particulier... On se retrouve à devoir choisir entre 2 solutions, les 2 racines de $g$ et on pouvait très bien faire ça dés le début sans avoir recours aux multiplicateurs de Lagrange... Cet exemple illustratif - dont la motivation initiale était de pouvoir représenter la fonction $\mathcal{L}$ et ses points stationnaires - n'est peut-être pas très pertinent du coup... Sauf à montrer en quoi les multiplicateurs de Lagrange deviennent réellement utiles en 2D et plus... TODO: ajouter un exemple 2D : $\max f(x) = -x^2 + 10$ s.t. $g(x) = x_1^2 + 2 x_2^2 - 1 = 0$ : une ellipse centrée sur 0 (intersection entre f et un plan (incliné)). ```python #%matplotlib notebook %matplotlib inline import numpy as np import matplotlib.pyplot as plt def f(x): a = -0.5 b = 0. c = 20. return a * x**2 + b * x + c class ConstraintFunction: def __init__(self): self.a = 1. self.b = 2. self.c = -10. def __call__(self, x): return self.a * x**2 + self.b * x + self.c def delta(self): return self.b ** 2. - 4 * self.a * self.c def roots(self): return np.array([ (-g.b + math.sqrt(g.delta()))/(2. * g.a), (-g.b - math.sqrt(g.delta()))/(2. * g.a) ]) g = ConstraintFunction() lambda_roots = np.array([ (math.sqrt(11.) + 1.)/(2. * math.sqrt(11.)), (math.sqrt(11.) - 1.)/(2. * math.sqrt(11.)) ]) g_roots = g.roots() x_min = -10. x_max = 10. lambda_min = -3. # TODO lambda_max = 3. # TODO ``` ```python # Build datas ############### x = np.linspace(x_min, x_max, 200) fx = f(x) gx = g(x) lambda_ = 1. 
lx = fx + lambda_ * gx # Plot data ################# fig = plt.figure(figsize=(8.0, 8.0)) ax = fig.add_subplot(111) ax.plot(x, fx, "-b", label=r"$f(x)$") ax.plot(x, gx, "-r", label=r"$g(x)$") ax.plot(x, lx, "--k", label=r"$\mathcal{L}(x)$") ax.plot(np.array([g_roots, g_roots, np.zeros(len(g_roots))]), np.array([np.zeros(len(g_roots)), f(g_roots), f(g_roots)]), ".:r", lw=1) #label=r"$g(x) = 0$") ax.hlines(y=0, xmin=x[0], xmax=x[-1], colors='k') ax.vlines(x=0, ymin=f(x[0]), ymax=g(x[-1]), colors='k') # Set title and labels ###### ax.set_title(r"Graphical representation of $f$, $g$ and $\mathcal{L}$", fontsize=18) ax.set_xlabel(r"$x$", fontsize=18) #ax.set_ylabel(r"$f(x)$", fontsize=18) # Set legend ################ ax.legend(loc='upper right', fontsize=18) # Plot ###################### plt.show() ``` ```python from mpl_toolkits.mplot3d import axes3d from matplotlib import cm #from matplotlib.colors import LightSource # Build datas ############### N = 100 x = np.linspace(x_min, x_max, N) lambda_ = np.linspace(lambda_min, lambda_max, N) xx,ll = np.meshgrid(x, lambda_) fx = f(x) gx = g(x) z = fx + ll * gx # Plot data ################# fig = plt.figure(figsize=(10.0, 10.0)) ax = axes3d.Axes3D(fig) #surf = ax.plot_surface(xx, ll, z, cmap=cm.jet, rstride=1, cstride=1, color='b', shade=False, alpha=1.) #fig.colorbar(surf, shrink=0.5, aspect=5) #ax = fig.gca(projection='3d') #ax.plot_surface(xx, ll, z, rstride=5, cstride=5, alpha=0.3) #cset = ax.contourf(xx, ll, z, zdir='z', offset=0, cmap=cm.coolwarm) #ax = axes3d.Axes3D(fig) #ax.plot_wireframe(xx, ll, z, color='k', alpha=0.3, cstride=0, rstride=1) ax.plot_wireframe(xx, ll, z, color='k', alpha=0.3, cstride=0, rstride=2) #ax.plot_wireframe(xx, ll, z, color='k', alpha=0.3, cstride=1, rstride=0) lambda_0 = [0] xx,ll0 = np.meshgrid(x, lambda_0) z0 = fx + ll0 * gx ax.plot_wireframe(xx, ll0, z0, color='r', alpha=1., cstride=0, rstride=1) #x0 = np.array([root1]) #fx0 = f(x0) #gx0 = g(x0) #xx0,ll = np.meshgrid(x0, lambda_) #z0 = fx0 + ll * gx0 #ax.plot_wireframe(xx0, ll, z0, color='g', alpha=1., cstride=1, rstride=0) x0 = np.array(g_roots) fx0 = f(x0) gx0 = g(x0) xx0,ll = np.meshgrid(x0, lambda_) z0 = fx0 + ll * gx0 ax.plot_wireframe(xx0, ll, z0, color='g', alpha=1., cstride=1, rstride=0) ax.scatter(g_roots, # x lambda_roots, # y f(g_roots) + lambda_roots * g(g_roots), #z label="points stationnaires") ax.legend(loc='upper right', fontsize=14) #fig, ax = plt.subplots(figsize=(10.0, 10.0), subplot_kw=dict(projection='3d')) #ls = LightSource(270, 45) ## To use a custom hillshading mode, override the built-in shading and pass ## in the rgb colors of the shaded surface calculated from "shade". #rgb = ls.shade(z, cmap=cm.gist_earth, vert_exag=0.1, blend_mode='soft') #surf = ax.plot_surface(xx, ll, z, rstride=1, cstride=1, facecolors=rgb, # linewidth=0, antialiased=False, shade=True) ax.set_xlabel(r'$x$') ax.set_ylabel(r'$\lambda$') ax.set_zlabel(r'$\mathcal{L}(x)$'); ``` Les points stationnaires de $\mathcal{L}$ sont quelquepart sur les droites vertes... ## Cas d'une fonction à deux variables **TODO**: reécrire cette partie | | | | ------------------------------------------------ | ------------------------------------------------ | | </img> | </img> | On pose la fonction: $$ \mathcal{L}(x_1,x_2,\lambda) = f(x_1,x_2) + \lambda g(x_1,x_2) $$ $\lambda$ est ce qu'on appelle un *multiplicateur de Lagrange* ; sa valeur n'est pas connue à priori. Pour maximiser $\mathcal{L}$, on annule ses dérivées partielles (condition nécessaire du premier ordre). 
Le problème initial revient à résoudre le système d'équations à trois inconnues suivant: $$ \left\{ \begin{array}{rcl} {\large \frac{\partial\mathcal{L}(x_1,x_2,\lambda)}{\partial x_1}} & = & 0 \\ {\large \frac{\partial\mathcal{L}(x_1,x_2,\lambda)}{\partial x_2}} & = & 0 \\ {\large \frac{\partial\mathcal{L}(x_1,x_2,\lambda)}{\partial \lambda}} & = & 0 \end{array} \right. $$ ### Exemple On cherche le rectangle d'aire maximum et de périmètre constant $P$. Plus formellement, on cherche à résoudre le problème d'optimisation suivant: $$ \begin{align} \max & \quad f(x_1,x_2) = x_1 x_2 \\ \text{s.t.} & \quad g(x_1,x_2) = 2x_1 + 2x_2 - P = 0 \end{align} $$ avec: * $x_1$ et $x_2$ les dimensions du rectangle (respectivement la largeur et la hauteur) * $f(x_1,x_2) = x_1 x_2$ l'aire du rectangle (la fonction à maximiser) * $g(x_1,x_2) = 2x_1 + 2x_2 - P = 0$ la contrainte sur le périmètre du rectangle ($2x_1 + 2x_2 = P$) La fonction de Lagrange correspondant à ce problème est: $$ \mathcal{L}(x_1,x_2,\lambda) = x_1 x_2 + \lambda (2x_1 + 2x_2 - P) $$ Les conditions du premier ordre (annulation des dérivées premières) sont données par: $$ \begin{align} \left\{ \begin{array}{rcl} {\large \frac{\partial\mathcal{L}(x_1,x_2,\lambda)}{\partial x_1}} & = & 0 \\ {\large \frac{\partial\mathcal{L}(x_1,x_2,\lambda)}{\partial x_2}} & = & 0 \\ {\large \frac{\partial\mathcal{L}(x_1,x_2,\lambda)}{\partial \lambda}} & = & 0 \end{array} \right. & \Leftrightarrow \left\{ \begin{array}{rcl} x_2 + 2 \lambda & = & 0 \\ x_1 + 2 \lambda & = & 0 \\ 2x_1 + 2x_2 - P & = & 0 \end{array} \right. \\ & \\ & \Leftrightarrow \left\{ \begin{array}{rcl} \color{red} \lambda & \color{red} = & {\color{red} {\large \frac{-x_2}{2}}} \\ \color{red} \lambda & \color{red} = & {\color{red} {\large \frac{-x_1}{2}}} \\ 2x_1 + 2x_2 - P & = & 0 \end{array} \right. \\ & \\ & \Leftrightarrow \left\{ \begin{array}{rcl} x_1 & = & x_2 \\ 2x_1 + 2x_2 - P & = & 0 \end{array} \right. \\ & \\ & \Leftrightarrow \left\{ \begin{array}{rcl} x_1 & = & x_2 \\ P & = & 2x_1 + 2x_1 = 4x_1 = 4x_2 \end{array} \right. \end{align} $$ On a donc ${\large \frac{-x_2}{2}} = {\large \frac{-x_1}{2}}$ c'est à dire $x_1 = x_2$. Ainsi, le carré ($x_1 = x_2$) est le rectangle d'aire maximum pour un périmètre donné $P$. Remarque: il est préférable d'éliminer $\lambda$ dés le début des calculs car c'est une variable auxiliaire dont la valeur n'est pas utile. 
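As a hedged cross-check of the rectangle example above (an addition to the original notebook, not part of it), SymPy can solve the first-order conditions of the Lagrangian symbolically and recovers $x_1 = x_2 = P/4$, i.e. the square:

```python
from sympy import symbols, solve

# Maximize x1*x2 subject to 2*x1 + 2*x2 - P = 0 via the Lagrangian's
# first-order conditions.
x1, x2, P = symbols('x1 x2 P', positive=True)
lam = symbols('lambda_')
L = x1 * x2 + lam * (2 * x1 + 2 * x2 - P)

sol = solve([L.diff(x1), L.diff(x2), L.diff(lam)], [x1, x2, lam], dict=True)
print(sol)   # expected: [{x1: P/4, x2: P/4, lambda_: -P/8}]
```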
```python #%matplotlib notebook %matplotlib inline import numpy as np import matplotlib.pyplot as plt # Build datas ############### P = 10 x = np.linspace(0., 5.0, 50) y = np.linspace(0., 5.0, 50) X, Y = np.meshgrid(x, y) Z1 = X * Y Z2 = 2 * X + 2 * Y # Plot data ################# fig, ax = plt.subplots(figsize=(8,8)) # SURFACE ################### ax.imshow(Z1, origin='lower', extent=(0,5,0,5), alpha=0.5, cmap='Blues_r') max_value = np.max(Z1) levels = np.array([0.1*max_value, 0.3*max_value, 0.6*max_value]) cs = plt.contour(x, y, Z1, levels, linewidths=(2, 2, 3), linestyles=('dotted', 'dashed', 'solid'), alpha=0.5, colors='blue') ax.clabel(cs, inline=False, fontsize=12) # Set legend ################ lines = [ cs.collections[0]] labels = ['surface'] # PERIMETRE ################# levels = np.array([P]) cs = plt.contour(x, y, Z2, levels, linewidths=(2, 2, 3), linestyles=('dotted', 'dashed', 'solid'), alpha=0.5, colors='red') ax.clabel(cs, inline=False, fontsize=12) # Set title and labels ###### ax.axis('equal') # <- SAME SCALE ON X AND Y ax.set_title("Example for P = " + str(P), fontsize=20) ax.set_xlabel(r"$X_1$", fontsize=20) ax.set_ylabel(r"$X_2$", fontsize=20) # Set legend ################ lines.append(cs.collections[0]) labels.append('périmètre P = ' + str(P)) ax.legend(lines, labels, prop={'size': 14}, loc='best', fancybox=True, framealpha=0.5) # The optimal point ######### ax.plot([0, 5], [0, 5], ":k", alpha=0.5) ax.plot([2.5], [2.5], "xk") ax.text(2.4, 2.2, r"$x^*$", fontsize=14) # Plot ###################### plt.grid() plt.show() ``` ## Cas générale **TODO** ## Bibliographie Quelques livres sur le sujet: - *Optimisation et contrôle des systèmes linéaires* (chapitre 3) de Maïtine Bergounioux aux editions Dunod - *Toutes les mathématiques et les bases de l'informatique* (p. 566-567) de Horst Stöcker aux editions Dunod Quelques liens intéressants: - https://fr.wikipedia.org/wiki/Multiplicateur_de_Lagrange - https://en.wikipedia.org/wiki/Lagrange_multiplier - http://www.unicaen.fr/ufr/eco/espaceprof/script1/script2/identification/valognes_fabrice/MicroL3/ch02.pdf (bien pour une première approche) - https://economix.fr/docs/1041/Rappels%20Lagrange.pdf (facile d'accès, vu sous l'angle des science économiques) - https://quantique.u-strasbg.fr/lib/exe/fetch.php?media=fr:pageperso:vr:fichiers:multiplicateur-lagrange.pdf - https://ufr-segmi.parisnanterre.fr/servlet/com.univ.collaboratif.utils.LectureFichiergw?ID_FICHIER=1348818743690 - http://nlp.cs.berkeley.edu/tutorials/lagrange-multipliers.pdf - http://res-nlp.univ-lemans.fr/NLP_C_M03_G03/co/Contenu_601.html (vu sous l'angle de la pyhsique)
258987577b72b7043ccba2f0bf0b725447132fa3
23,984
ipynb
Jupyter Notebook
nb_sci_maths/maths_analysis_method_of_lagrange_multipliers_fr.ipynb
jdhp-docs/python-notebooks
91a97ea5cf374337efa7409e4992ea3f26b99179
[ "MIT" ]
3
2017-05-03T12:23:36.000Z
2020-10-26T17:30:56.000Z
nb_sci_maths/maths_analysis_method_of_lagrange_multipliers_fr.ipynb
jdhp-docs/python-notebooks
91a97ea5cf374337efa7409e4992ea3f26b99179
[ "MIT" ]
null
null
null
nb_sci_maths/maths_analysis_method_of_lagrange_multipliers_fr.ipynb
jdhp-docs/python-notebooks
91a97ea5cf374337efa7409e4992ea3f26b99179
[ "MIT" ]
1
2020-10-26T17:30:57.000Z
2020-10-26T17:30:57.000Z
31.188557
278
0.472565
true
5,498
Qwen/Qwen-72B
1. YES 2. YES
0.841826
0.824462
0.694053
__label__fra_Latn
0.38063
0.450849
最初に必要なライブラリを読み込みます。 ```python from sympy import * from sympy.physics.quantum import * from sympy.physics.quantum.qubit import Qubit, QubitBra, measure_all, measure_all_oneshot,measure_partial from sympy.physics.quantum.gate import H,X,Y,Z,S,T,CPHASE,CNOT,SWAP,UGate,CGateS,gate_simp from sympy.physics.quantum.gate import IdentityGate as _I from sympy.physics.quantum.qft import * from sympy.printing.dot import dotprint init_printing() %matplotlib inline import matplotlib.pyplot as plt from sympy.physics.quantum.circuitplot import CircuitPlot,labeller, Mz,CreateOneQubitGate ``` ### 【復習問題1】いつもの説明資料の量子回路をプログラミング手順にそって計算しましょう。 ```python ### 1. 計算に必要な量子ビット(量子レジスタ)を準備して、その値を初期化する ## 2量子ビットを 0 で初期化してください。 Qubit('00') ``` ```python ### 2. 量子計算をユニタリ行列(ゲート演算子)で記述する ## Hadamard のテンソル積 の行列表現を表示してください。 h2=H(0)*H(1) #←[]を書き換えて、2量子ビットにそれぞれHadamardを作用するゲート操作を h2 に代入してください。 represent(h2, nqubits=2) #←[]を書き換えて、h2 の行列表現を表示してください。 ``` ```python ## CNOT を Hadamard で挟んだゲート操作 の行列表現を表示してください。 hCXh=H(0)*CNOT(1,0)*H(0) #←[]を書き換えて、H - CNOT - H の量子回路を hCXh に代入してください。 represent(hCXh,nqubits=2) #←[]を書き換えて、hCXh の行列表現を表示してください。 ``` ```python ### 3. ユニタリ行列を量子ビットに作用する ## Hadamard のテンソル積 を `Qubit('00')` に作用してください。 q1 = qapply(h2*Qubit('00')) #←[]を書き換えて、`Qubit('00')` に `h2` を作用させて計算してください。 q1 # q1を Jupyter で表示します。 ``` ```python ## 次に、CNOT を Hadamard で挟んだゲート操作 を 前の状態に作用してください。 q2 = qapply(hCXh*q1) #←[]を書き換えて、前の状態 `q1` に `hCXh` を作用させて計算してください。 q2 # q2を Jupyter で表示します。 ``` ```python ### 4. 測定する ## measure_all() を使って、それぞれの状態が測定される確率を表示してください。 measure_all(q2) #←[]を書き換えて、`q2`のそれぞれの状態が測定される確率を表示してください。 ``` ### 【復習問題2】グローバーのアルゴリズム <strong> 次の「課題の初期状態」 quest_state を入力として、この量子状態に $\lvert 111 \rangle $ が含まれるか グローバーのアルゴリズムを使って調べてください。   </strong> ```python # 課題の初期状態 quest_state = CNOT(1,0)*CNOT(2,1)*H(2)*H(0)*Qubit('000') CircuitPlot(quest_state,nqubits=3) ``` ```python # 計算した初期状態を init_state とします。 init_state = qapply(quest_state) init_state ``` ```python # 以降で役立ちそうな関数を定義します。 def CCX(c1,c2,t): return CGateS((c1,c2),X(t)) def hadamard(s,n): h = H(s) for i in range(s+1,n+s): h = H(i)*h return h def CCZ(c1,c2,t): return (H(t)*CCX(c1,c2,t)*H(t)) # CCZ演算子を定義します。 def DOp(n): return (Qubit('0'*n)*QubitBra('0'*n)*2-_I(0)) # ゲート操作で計算するには、上記コメントのような演算になります。 h_3 = hadamard(0,3) d_3 = h_3 * DOp(3) * h_3 # 平均値周りの反転操作 # represent(d_3,nqubits=3) ``` ```python # | 111 > の検索する量子回路を作成する。 grover_7= d_3*CCZ(1,2,0) #←[]を書き換えて、 | 111 > の検索する量子回路を `grover_7` に代入してください。 CircuitPlot(grover_7, nqubits=3) #←[]を書き換えて、grover_7 の量子回路図を表示してください。 ``` ```python # 上で作った量子回路を初期状態と作用させて measure_all() で、 | 111 > を検出する確率が高いことを確認する。 measure_all(qapply(grover_7*init_state)) #←[]を書き換えて、全ての状態が測定される確率を表示してください。 ``` ### 【復習問題3】量子テレポーテーション <strong> 次の量子テレポーテーションのプログラムから、Bob の手元にある量子状態を考察し、 Alice の測定結果(古典通信で送られる情報)をもとに、Bob の量子状態が Alice が持っていた量子状態(ini_alice)と一致することを確認してください。   </strong> ```python # Alice と Bob が対象としている量子ビットを測定する操作を準備します。 def alice(qbit): return measure_partial(qbit,(0,1)) def bob(qbit): return measure_partial(qbit,(2,)) def U(x): return T(x)*X(x)*H(x) ini_alice = U(0) * Qubit('000') print(measure_partial(qapply(ini_alice),(0,))) # Alice の初期の量子状態を表示します。 # Alice と Bob は量子もつれの状態を共有します。 pairs = CNOT(1,2)*H(1) # Bell測定を行います。 bell_meas = H(0)*CNOT(0,1)*pairs CircuitPlot(bell_meas,nqubits=3, labels = ['alice','alice','bob']) # 量子テレポーテーションの量子回路図を表示します。 teleportated = qapply(bell_meas*ini_alice) alice(teleportated) # 量子テレポーテーションを行ったあとの、Alice の量子状態を表示します。 ``` ### 【課題1−1】スワップテスト <strong> スワップテストのための 制御SWAP(Fredkin) 回路を作成してください。 </strong> ```python # CCXゲート, 
Toffoliゲートは、次のように表せます。 def CCX(c1,c2,t): return CGateS((c1,c2),X(t)) def Toffoli(c1,c2,t): return CGateS((c1,c2),X(t)) CircuitPlot(CCX(1,2,0),3,labels=labeller(3)[::-1]) ``` ```python #### 制御SWAP(Fredkin)ゲートを作成してください。 CSWAP=CNOT(0,1)*Toffoli(1,2,0)*CNOT(0,1) #←[]を書き換えて、制御SWAP回路を `CSWAP` に代入してください。 represent(CSWAP,nqubits=3) #←[]を書き換えて、制御SWAPの行列表現を表示してください。 ``` ### 【課題1−2】スワップテスト <strong> 課題の初期状態、 $|q_0>=U(0) |0> $と$|q_1>= |1> $の内積をスワップテストを使って求めてください。 </strong> ```python # 課題の初期状態、 q0 (=U(0) |0>) と q1(= |1> )の内積を求めるために、量子状態を準備します。 # スワップテストで測定する量子ビット q2 もあわせて、3量子ビットを準備します。 def U(x): return T(x)*X(x)*H(x) state_1 = U(0) * Qubit('010') qapply(state_1) ``` ```python # スワップテストを行う想定で、測定確率をmeare_partial()で算出します。 measure_partial(qapply(H(2)*CSWAP*H(2)*state_1),(2,)) ``` この結果から、スワップテストで測定した結果、 q2 = |0> となるケースの確率は、$\frac{3}{4}$ です。 $ \frac{1}{2}(1+ \langle q_0 | q_1\rangle^{2})$ の値がこの結果になることから、内積 $ \langle q_0 | q_1\rangle $ が求まります。 ## 【課題2】量子フーリエ変換 <strong> 問1) 1. 3量子ビットを対象にした、量子フーリエ変換を行います。  |000>, |001>, ..., |110>, |111> の全ての状態のそれぞれの QFT の結果を出してください。      ヒント)sympy.physics.quantum.qft の QFT 関数を使います。 2. QFT(0,3) の量子回路図を CircuitPlot() で作図してください。 </strong> ```python ## QFT(0,3) の行列表現を表示してください。 from sympy.physics.quantum.qft import * # 必要なパッケージをimportします。 represent(QFT(0,3), nqubits=3) ``` ```python # |000> を量子フーリエ変換してください。 qapply(QFT(0,3)*Qubit('000')) ``` ```python # |001> を量子フーリエ変換してください。 qapply(QFT(0,3)*Qubit('001')) ``` ```python # |010> を量子フーリエ変換してください。 qapply(QFT(0,3)*Qubit('010')) ``` ```python # |011> を量子フーリエ変換してください。 qapply(QFT(0,3)*Qubit('011')) ``` ```python # |100> を量子フーリエ変換してください。 qapply(QFT(0,3)*Qubit('100')) ``` ```python # |101> を量子フーリエ変換してください。 qapply(QFT(0,3)*Qubit('101')) ``` ```python # |110> を量子フーリエ変換してください。 qapply(QFT(0,3)*Qubit('110')) ``` ```python # |111> を量子フーリエ変換してください。 qapply(QFT(0,3)*Qubit('111')) ``` ```python ### QFT(0,3) は、SymPy ではひと塊りのまとまったオペレータとして定義されています。 ### 基本ゲートを知るためには、decompose() を使います。 QFT(0,3).decompose() ``` ```python # QFT(0,3) の量子回路図を CircuitPlot() で作図してください。 CircuitPlot(QFT(0,3).decompose(), nqubits=3) ``` ```python # decompose() した上記の回路を改めて、定義しなおします。 qft3_decomp = SWAP(0,2)*H(0)*CGateS((0,),S(1))*H(1)*CGateS((0,),T(2))*CGateS((1,),S(2))*H(2) qft3_decomp ``` ```python # 上記で定義しなおした QFT の量子回路図を CircuitPlot() で作図します。 # QFT(0,3).decompose() の量子回路図と比較してください。 CircuitPlot(qft3_decomp,nqubits=3) ``` <strong> 問2) 1. 
3量子ビットを対象にした、量子フーリエ変換を基本的な量子ゲートだけで表してください。   $\sqrt{T}$ゲートである Rk(n,4) は利用してもよい。   ・演算をテンソル積で表してください。 ・(この場合の量子回路図は、うまく描けません。)     </strong> (ヒント)$c_{g}$ をグローバル位相として、Z軸回転 $ R_{z\theta} = c_{g} X \cdot R_{z\theta/2}^{\dagger} \cdot X \cdot R_{z\theta/2} $ と表せることを使います。 ```python # S = c・X・T†・X・T であることを示します。 pprint(represent(S(0),nqubits=1)) represent(exp(I*pi/4)*X(0)*T(0)**(-1)*X(0)*T(0),nqubits=1) ``` ```python # T = c・X・sqrt(T)†・X・sqrt(T) であることを示します。 pprint(represent(T(0),nqubits=1)) represent(exp(I*pi/8)*X(0)*Rk(0,4)**(-1)*X(0)*Rk(0,4),nqubits=1) ``` ```python # qft3_decomp = SWAP(0,2)*H(0)*CGateS((0,),S(1))*H(1)*CGateS((0,),T(2))*CGateS((1,),S(2))*H(2) # qft3_decomp を見ながら、制御Sゲートを置き換えて、qft3_decomp2 へ代入します。 qft3_decomp2 = SWAP(0,2)*H(0)*CNOT(0,1)*T(1)**(-1)*CNOT(0,1)*T(1)*H(1)*CGateS((0,),T(2))*CNOT(1,2)*T(2)**(-1)*CNOT(1,2)*T(2)*H(2) qft3_decomp2 ``` ```python # qft3_decomp2 = SWAP(0,2)*H(0)*CNOT(0,1)*T(1)**(-1)*CNOT(0,1)*T(1)*H(1)*CGateS((0,),T(2))*CNOT(1,2)*T(2)**(-1)*CNOT(1,2)*T(2)*H(2) # qft3_decomp を見ながら、制御Tゲートを置き換えて、qft3_decomp3 へ代入します。 qft3_decomp3 = SWAP(0,2)*H(0)*CNOT(0,1)*T(1)**(-1)*CNOT(0,1)*T(1)*H(1)*CNOT(0,2)*Rk(2,4)**(-1)*CNOT(0,2)*Rk(2,4)*CNOT(1,2)*T(2)**(-1)*CNOT(1,2)*T(2)*H(2) qft3_decomp3 ``` ```python # |000> の量子フーリエ変換の結果をみます。 ### ゲート操作が少し複雑になるため、SymPyがうまく判断できません。 ### represent()で計算します。解答例では、結果が縦ベクトルで行数が長くなるのを嫌い、transpose()します。 # (解答例)transpose(represent(qft3_decomp2*Qubit('000'), nqubits=3)) transpose(represent(qft3_decomp2*Qubit('000'), nqubits=3)) ``` ```python # |001> の量子フーリエ変換の結果をみます。 ### グローバル位相 exp(I*pi/4) をかけると同じになります。 exp(I*pi/4)*transpose(represent(qft3_decomp2*Qubit('001'), nqubits=3)) ``` ```python # |010> の量子フーリエ変換の結果をみます。 ### グローバル位相 exp(I*pi/4) をかけると同じになります。 exp(I*pi/4)*transpose(represent(qft3_decomp2*Qubit('010'), nqubits=3)) ``` ```python # |011> の量子フーリエ変換の結果をみます。 ### グローバル位相 exp(I*pi/2) をかけると同じになります。 exp(I*pi/2)*transpose(represent(qft3_decomp2*Qubit('011'), nqubits=3)) ``` ```python # |100> の量子フーリエ変換の結果をみます。 transpose(represent(qft3_decomp2*Qubit('100'), nqubits=3)) ``` ```python # |101> の量子フーリエ変換の結果をみます。 ### グローバル位相 exp(I*pi/4) をかけると同じになります。 exp(I*pi/4)*transpose(represent(qft3_decomp2*Qubit('101'), nqubits=3)) ``` ```python # |110> の量子フーリエ変換の結果をみます。 ### グローバル位相 exp(I*pi/4) をかけると同じになります。 exp(I*pi/4)*transpose(represent(qft3_decomp2*Qubit('110'), nqubits=3)) ``` ```python # |111> の量子フーリエ変換の結果をみます。 ### グローバル位相 exp(I*pi/2) をかけると同じになります。 exp(I*pi/2)*transpose(represent(qft3_decomp2*Qubit('111'), nqubits=3)) ``` ```python ```
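As an optional cross-check (not part of the original assignment), one can compare the hand-decomposed circuit `qft3_decomp2` with `QFT(0,3)` column by column: each column should agree up to the per-column global phases noted in the cells above. This sketch builds both matrices by applying the operators to the computational basis states, exactly as done earlier in the notebook, and uses NumPy only for the numerical comparison.

```python
import numpy as np

# Column k of each matrix = operator applied to the basis state |k> (3 qubits).
basis = [Qubit(format(k, '03b')) for k in range(8)]
M_ref = np.hstack([np.array(represent(qapply(QFT(0, 3) * b), nqubits=3), dtype=complex)
                   for b in basis])
M_dec = np.hstack([np.array(represent(qft3_decomp2 * b, nqubits=3), dtype=complex)
                   for b in basis])

phases = M_ref[0, :] / M_dec[0, :]          # per-column global phase factors
print(np.round(phases, 6))
print(np.allclose(M_ref, M_dec * phases))   # True -> same unitary up to global phase
```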
244347152194429d97d83a61b6be49ae2213c7f8
292,758
ipynb
Jupyter Notebook
docs/20190614/sympy_programming_4a_handout.ipynb
kyamaz/openql-notes
03ad81b595e4ad24b3130bfc0d999fe8ee0d6c70
[ "Apache-2.0" ]
4
2018-02-19T10:01:43.000Z
2022-01-12T12:32:34.000Z
docs/20190614/sympy_programming_4a_handout.ipynb
kyamaz/openql-notes
03ad81b595e4ad24b3130bfc0d999fe8ee0d6c70
[ "Apache-2.0" ]
null
null
null
docs/20190614/sympy_programming_4a_handout.ipynb
kyamaz/openql-notes
03ad81b595e4ad24b3130bfc0d999fe8ee0d6c70
[ "Apache-2.0" ]
4
2018-02-19T10:06:37.000Z
2022-01-12T12:42:38.000Z
186.58891
25,608
0.855488
true
4,995
Qwen/Qwen-72B
1. YES 2. YES
0.919643
0.824462
0.75821
__label__yue_Hant
0.50701
0.599909
# Introduction to Graph Matching ```python import numpy as np import matplotlib.pyplot as plt %matplotlib inline ``` The graph matching problem (GMP) is meant to find an alignment of nodes between two graphs that minimizes the number of edge disagreements between those two graphs. Therefore, the GMP can be formally written as an optimization problem: \begin{equation} \begin{aligned} \min & {\;-trace(APB^T P^T)}\\ \text{s.t. } & {\;P \in \mathcal{P}} \\ \end{aligned} \end{equation} Where $\mathcal{P}$ is the set of possible permutation matrices. The quadratic assignment problem (QAP) is a combinatorial optimization problem, modeled on the following real-life problem: "Consider the problem of allocating a set of facilities to a set of locations, with the cost being a function of the distance and flow between the facilities, plus costs associated with a facility being placed at a certain location. The objective is to assign each facility to a location such that the total cost is minimized." [1] When written as an optimization problem, the QAP is represented as: \begin{equation} \begin{aligned} \min & {\; trace(APB^T P^T)}\\ \text{s.t. } & {\;P \in \mathcal{P}} \\ \end{aligned} \end{equation} Since the GMP objective function is the negation of the QAP objective function, any algorithm that solves one can solve the other. This class is an implementation of the Fast Approximate Quadratic Assignment Problem algorithm (FAQ), an algorithm designed to efficiently and accurately solve the QAP, as well as the GMP. [1] Burkard, Rainer & Dragoti-Cela, Eranda & Pardalos, Panos & Pitsoulis, Leonidas. (1998). The Quadratic Assignment Problem. Handbook of Combinatorial Optimization. 10.1007/978-1-4613-0303-9_27. ```python from graspy.match import GraphMatch as GMP from graspy.simulations import er_np ``` For the sake of this tutorial, we will use FAQ to solve the GMP for two graphs where we know a solution exists. Below, we sample a binary graph (undirected and no self-loops) $G_1 \sim ER_{NP}(50, 0.3)$. Then, we randomly shuffle the nodes of $G_1$ to initiate $G_2$. The number of edge disagreements as a result of the node shuffle is printed below. ```python n = 50 p = 0.3 np.random.seed(1) G1 = er_np(n=n, p=p) node_shuffle_input = np.random.permutation(n) G2 = G1[np.ix_(node_shuffle_input, node_shuffle_input)] print("Number of edge disagreements: ", np.sum(abs(G1-G2))) ``` ## Visualize the graphs using heat mapping ```python from graspy.plot import heatmap heatmap(G1, cbar=False, title = 'G1 [ER-NP(50, 0.3) Simulation]') heatmap(G2, cbar=False, title = 'G2 [G1 Randomly Shuffled]') ``` Below, we create a model to solve the GMP. The model is then fitted for the two graphs $G_1$ and $G_2$. One of the options for the algorithm is the starting position of $P$. In this case, the class default of barycenter initialization is used, i.e. the flat doubly stochastic matrix. The number of edge disagreements is printed below. With zero edge disagreements, we see that FAQ is successful in unshuffling the graph. ```python gmp = GMP() gmp = gmp.fit(G1,G2) G2 = G2[np.ix_(gmp.perm_inds_, gmp.perm_inds_)] print("Number of edge disagreements: ", np.sum(abs(G1-G2))) ``` ```python heatmap(G1, cbar=False, title = 'G1[ER-NP(50, 0.3) Simulation]') heatmap(G2, cbar=False, title = 'G2[ER-NP(50, 0.3) Randomly Shuffled] unshuffled') ```
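To make the connection between the trace objective and the edge-disagreement count explicit, here is a small self-contained NumPy sketch (independent of graspy; the identity below assumes binary, symmetric adjacency matrices):

```python
import numpy as np

# For binary symmetric A, B and any permutation matrix P,
#   sum|A - P B P^T| = sum(A) + sum(B) - 2 * trace(A P B^T P^T),
# so maximizing the trace term is the same as minimizing edge disagreements.
rng = np.random.default_rng(1)
n = 50
G1 = np.triu((rng.random((n, n)) < 0.3).astype(float), 1)
G1 = G1 + G1.T                                   # undirected, no self-loops

shuffle = rng.permutation(n)
G2 = G1[np.ix_(shuffle, shuffle)]                # shuffled copy of G1
P_true = np.eye(n)[:, shuffle]                   # permutation matrix undoing the shuffle

for name, P in [("identity", np.eye(n)), ("true permutation", P_true)]:
    disagreements = np.abs(G1 - P @ G2 @ P.T).sum()
    via_trace = G1.sum() + G2.sum() - 2 * np.trace(G1 @ P @ G2.T @ P.T)
    print(f"{name}: disagreements={disagreements:.0f}, via trace={via_trace:.0f}")
```

With the true permutation the disagreement count drops to zero, which is exactly the alignment FAQ searches for.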
fa0aed45832dadc3771c7338cfefd896513e665c
5,456
ipynb
Jupyter Notebook
docs/tutorials/matching/faq.ipynb
spencer-loggia/graspologic
cf7ae59289faa8f5538e335e2859cc2a843f2839
[ "MIT" ]
null
null
null
docs/tutorials/matching/faq.ipynb
spencer-loggia/graspologic
cf7ae59289faa8f5538e335e2859cc2a843f2839
[ "MIT" ]
null
null
null
docs/tutorials/matching/faq.ipynb
spencer-loggia/graspologic
cf7ae59289faa8f5538e335e2859cc2a843f2839
[ "MIT" ]
null
null
null
32.094118
418
0.599707
true
953
Qwen/Qwen-72B
1. YES 2. YES
0.936285
0.882428
0.826204
__label__eng_Latn
0.98885
0.757882
<!-- dom:TITLE: Solving Differential Equations with Deep Learning --> # Solving Differential Equations with Deep Learning <!-- dom:AUTHOR: Morten Hjorth-Jensen at Department of Physics, University of Oslo & Department of Physics and Astronomy and Facility for Rare ion Beams, Michigan State University --> <!-- Author: --> **Morten Hjorth-Jensen**, Department of Physics, University of Oslo and Department of Physics and Astronomy and Facility for Rare ion Beams, Michigan State University Date: **Apr 23, 2021** Copyright 1999-2021, Morten Hjorth-Jensen. Released under CC Attribution-NonCommercial 4.0 license ## Ordinary Differential Equations An ordinary differential equation (ODE) is an equation involving functions having one variable. In general, an ordinary differential equation looks like <!-- Equation labels as ordinary links --> <div id="ode"></div> $$ \begin{equation} \label{ode} \tag{1} f\left(x, \, g(x), \, g'(x), \, g''(x), \, \dots \, , \, g^{(n)}(x)\right) = 0 \end{equation} $$ where $g(x)$ is the function to find, and $g^{(n)}(x)$ is the $n$-th derivative of $g(x)$. The $f\left(x, g(x), g'(x), g''(x), \, \dots \, , g^{(n)}(x)\right)$ is just a way to write that there is an expression involving $x$ and $g(x), \ g'(x), \ g''(x), \, \dots \, , \text{ and } g^{(n)}(x)$ on the left side of the equality sign in ([1](#ode)). The highest order of derivative, that is the value of $n$, determines to the order of the equation. The equation is referred to as a $n$-th order ODE. Along with ([1](#ode)), some additional conditions of the function $g(x)$ are typically given for the solution to be unique. ## The trial solution Let the trial solution $g_t(x)$ be <!-- Equation labels as ordinary links --> <div id="_auto1"></div> $$ \begin{equation} g_t(x) = h_1(x) + h_2(x,N(x,P)) \label{_auto1} \tag{2} \end{equation} $$ where $h_1(x)$ is a function that makes $g_t(x)$ satisfy a given set of conditions, $N(x,P)$ a neural network with weights and biases described by $P$ and $h_2(x, N(x,P))$ some expression involving the neural network. The role of the function $h_2(x, N(x,P))$, is to ensure that the output from $N(x,P)$ is zero when $g_t(x)$ is evaluated at the values of $x$ where the given conditions must be satisfied. The function $h_1(x)$ should alone make $g_t(x)$ satisfy the conditions. But what about the network $N(x,P)$? As described previously, an optimization method could be used to minimize the parameters of a neural network, that being its weights and biases, through backward propagation. ## Minimization process For the minimization to be defined, we need to have a cost function at hand to minimize. It is given that $f\left(x, \, g(x), \, g'(x), \, g''(x), \, \dots \, , \, g^{(n)}(x)\right)$ should be equal to zero in ([1](#ode)). We can choose to consider the mean squared error as the cost function for an input $x$. Since we are looking at one input, the cost function is just $f$ squared. 
The cost function $C\left(x, P\right)$ can therefore be expressed as $$ C\left(x, P\right) = \big(f\left(x, \, g(x), \, g'(x), \, g''(x), \, \dots \, , \, g^{(n)}(x)\right)\big)^2 $$ If $N$ inputs are given as a vector $\boldsymbol{x}$ with elements $x_i$ for $i = 1,\dots,N$, the cost function becomes <!-- Equation labels as ordinary links --> <div id="cost"></div> $$ \begin{equation} \label{cost} \tag{3} C\left(\boldsymbol{x}, P\right) = \frac{1}{N} \sum_{i=1}^N \big(f\left(x_i, \, g(x_i), \, g'(x_i), \, g''(x_i), \, \dots \, , \, g^{(n)}(x_i)\right)\big)^2 \end{equation} $$ The neural net should then find the parameters $P$ that minimize the cost function in ([3](#cost)) for a set of $N$ training samples $x_i$. ## Minimizing the cost function using gradient descent and automatic differentiation To perform the minimization using gradient descent, the gradient of $C\left(\boldsymbol{x}, P\right)$ is needed. It may happen that finding an analytical expression for the gradient of $C(\boldsymbol{x}, P)$ from ([3](#cost)) gets too messy, depending on which cost function one desires to use. Luckily, there exist libraries that do the job for us through automatic differentiation. Automatic differentiation is a method of finding the derivatives numerically with very high precision. ## Example: Exponential decay An exponential decay of a quantity $g(x)$ is described by the equation <!-- Equation labels as ordinary links --> <div id="solve_expdec"></div> $$ \begin{equation} \label{solve_expdec} \tag{4} g'(x) = -\gamma g(x) \end{equation} $$ with $g(0) = g_0$ for some chosen initial value $g_0$. The analytical solution of ([4](#solve_expdec)) is <!-- Equation labels as ordinary links --> <div id="_auto2"></div> $$ \begin{equation} g(x) = g_0 \exp\left(-\gamma x\right) \label{_auto2} \tag{5} \end{equation} $$ Having an analytical solution at hand, it is possible to use it to compare how well a neural network finds a solution of ([4](#solve_expdec)).
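Before moving on, here is a minimal illustration of the automatic differentiation mentioned above; the `autograd` package is used purely as one example of such a library, not as a requirement of the method.

```python
import autograd.numpy as np
from autograd import grad

# Define the function in code and ask the library for its derivative;
# no manual differentiation is needed.
def f(x):
    return np.sin(x) * np.exp(-x)

df = grad(f)                 # callable returning f'(x)
x = 1.5
print(df(x))                             # automatic derivative
print(np.exp(-x) * (np.cos(x) - np.sin(x)))  # analytical derivative, for comparison
```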
If there are $N_{\text{output} }$ neurons in the output layer, then $P_{\text{output}} $ is a $N_{\text{output} } \times (1 + N_{\text{hidden} })$ matrix. Its first column represents the bias of each neuron and the remaining columns represent the weights to each neuron. It is given that $g(0) = g_0$. The trial solution must fulfill this condition to be a proper solution of ([6](#solveode)). A possible way to ensure that $g_t(0, P) = g_0$ is to let $h_2(x, N(x,P)) = x \cdot N(x,P)$ and $h_1(x) = g_0$. This gives the following trial solution: <!-- Equation labels as ordinary links --> <div id="trial"></div> $$ \begin{equation} \label{trial} \tag{7} g_t(x, P) = g_0 + x \cdot N(x, P) \end{equation} $$ ## Reformulating the problem We wish that our neural network manages to minimize a given cost function. A reformulation of our equation, ([6](#solveode)), must therefore be done, such that it describes the problem a neural network can solve for. The neural network must find the set of weights and biases $P$ such that the trial solution in ([7](#trial)) satisfies ([6](#solveode)). The trial solution $$ g_t(x, P) = g_0 + x \cdot N(x, P) $$ has been chosen such that it already solves the condition $g(0) = g_0$. What remains is to find $P$ such that <!-- Equation labels as ordinary links --> <div id="nnmin"></div> $$ \begin{equation} \label{nnmin} \tag{8} g_t'(x, P) = - \gamma g_t(x, P) \end{equation} $$ is fulfilled as *best as possible*. ## More technicalities The left hand side and right hand side of ([8](#nnmin)) must be computed separately, and then the neural network must choose weights and biases, contained in $P$, such that the sides are equal as best as possible. This means that the absolute or squared difference between the sides must be as close to zero, ideally equal to zero. In this case, the squared difference proves to be an appropriate measure of how erroneous the trial solution is with respect to $P$ of the neural network. This gives the following cost function our neural network must solve for: $$ \min_{P}\Big\{ \big(g_t'(x, P) - \big( -\gamma g_t(x, P) \big)\big)^2 \Big\} $$ (the notation $\min_{P}\{ f(x, P) \}$ means that we desire to find $P$ that yields the minimum of $f(x, P)$) or, in terms of weights and biases for the hidden and output layer in our network: $$ \min_{P_{\text{hidden} }, \ P_{\text{output} }}\Big\{ \big(g_t'(x, \{ P_{\text{hidden} }, P_{\text{output} }\}) - \big( -\gamma g_t(x, \{ P_{\text{hidden} }, P_{\text{output} }\}) \big)\big)^2 \Big\} $$ for an input value $x$.
## More details

If the neural network evaluates $g_t(x, P)$ at more values for $x$, say $N$ values $x_i$ for $i = 1, \dots, N$, then the *total* error to minimize becomes

<!-- Equation labels as ordinary links -->
<div id="min"></div>

$$
\begin{equation} \label{min} \tag{9}
\min_{P}\Big\{\frac{1}{N} \sum_{i=1}^N \big(g_t'(x_i, P) - ( -\gamma g_t(x_i, P)) \big)^2 \Big\}
\end{equation}
$$

Letting $\boldsymbol{x}$ be a vector with elements $x_i$ and $C(\boldsymbol{x}, P) = \frac{1}{N} \sum_i \big(g_t'(x_i, P) - ( -\gamma g_t(x_i, P)) \big)^2$ denote the cost function, the minimization problem that our network must solve becomes

$$
\min_{P} C(\boldsymbol{x}, P)
$$

In terms of $P_{\text{hidden} }$ and $P_{\text{output} }$, this could also be expressed as

$$
\min_{P_{\text{hidden} }, \ P_{\text{output} }} C(\boldsymbol{x}, \{P_{\text{hidden} }, P_{\text{output} }\})
$$

## A possible implementation of a neural network

For simplicity, it is assumed that the input is an array $\boldsymbol{x} = (x_1, \dots, x_N)$ with $N$ elements. It is at these points the neural network should find $P$ such that it fulfills ([9](#min)).

First, the neural network must feed forward the inputs. This means that the values of $\boldsymbol{x}$ must be passed through an input layer, a hidden layer and an output layer. The input layer in this case does not need to process the data any further. It consists of $N_{\text{input}}$ neurons, each passing its element on to every neuron in the hidden layer. The number of neurons in the hidden layer is $N_{\text{hidden}}$.

## Technicalities

For the $i$-th neuron in the hidden layer with weight $w_i^{\text{hidden}}$ and bias $b_i^{\text{hidden}}$, the weighting of the $j$-th input from the input layer is:

$$
\begin{aligned}
z_{i,j}^{\text{hidden}} &= b_i^{\text{hidden}} + w_i^{\text{hidden}}x_j \\
&= \begin{pmatrix} b_i^{\text{hidden}} & w_i^{\text{hidden}} \end{pmatrix} \begin{pmatrix} 1 \\ x_j \end{pmatrix}
\end{aligned}
$$

## Final technicalities I

The result after weighting the inputs at the $i$-th hidden neuron can be written as a vector:

$$
\begin{aligned}
\boldsymbol{z}_{i}^{\text{hidden}} &= \Big( b_i^{\text{hidden}} + w_i^{\text{hidden}}x_1 , \ b_i^{\text{hidden}} + w_i^{\text{hidden}} x_2, \ \dots \, , \ b_i^{\text{hidden}} + w_i^{\text{hidden}} x_N\Big) \\
&= \begin{pmatrix} b_i^{\text{hidden}} & w_i^{\text{hidden}} \end{pmatrix} \begin{pmatrix} 1 & 1 & \dots & 1 \\ x_1 & x_2 & \dots & x_N \end{pmatrix} \\
&= \boldsymbol{p}_{i, \text{hidden}}^T X
\end{aligned}
$$

## Final technicalities II

The vector $\boldsymbol{p}_{i, \text{hidden}}^T$ constitutes the $i$-th row in $P_{\text{hidden}}$, which contains the weights and bias that the neural network must adjust in order to minimize ([9](#min)).

After having found $\boldsymbol{z}_{i}^{\text{hidden}}$ for every neuron $i$ in the hidden layer, the vector is sent through an activation function $a_i(\boldsymbol{z})$. In this example, the sigmoid function has been chosen as the activation function for each hidden neuron:

$$
f(z) = \frac{1}{1 + \exp{(-z)}}
$$

It is possible to use other activation functions for the hidden layer as well.

The output $\boldsymbol{x}_i^{\text{hidden}}$ from each $i$-th hidden neuron is

$$
\boldsymbol{x}_i^{\text{hidden} } = f\big( \boldsymbol{z}_{i}^{\text{hidden}} \big)
$$

The outputs $\boldsymbol{x}_i^{\text{hidden}}$ are then sent to the output layer, which in this case consists of one neuron that combines the output from each of the neurons in the hidden layer.
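The matrix form above maps directly onto a few lines of NumPy. The snippet below is only an illustrative sketch (the input values and the number of hidden neurons are arbitrary, and the weights are random); it mirrors the hidden-layer step that the full programs further down perform inside `neural_network` and `deep_neural_network`.

```python
import autograd.numpy as np
import autograd.numpy.random as npr

def sigmoid(z):
    return 1/(1 + np.exp(-z))

# Arbitrary example: N = 4 input points and 3 hidden neurons
x = np.array([0.0, 0.25, 0.5, 1.0])
num_values = np.size(x)
N_hidden = 3

# P_hidden: one row per hidden neuron; first column is the bias, second the weight
P_hidden = npr.randn(N_hidden, 2)

# Build X by stacking a row of ones (for the bias) on top of the inputs
X = np.concatenate((np.ones((1, num_values)), x.reshape(1, num_values)), axis=0)

# z_i^hidden for all hidden neurons at once: each row i is p_i^T X
z_hidden = np.matmul(P_hidden, X)   # shape (N_hidden, N)

# Apply the activation to get the hidden-layer outputs x_i^hidden
x_hidden = sigmoid(z_hidden)        # shape (N_hidden, N)
print(x_hidden.shape)               # (3, 4)
```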
The output layer combines the results from the hidden layer using some weights $w_i^{\text{output}}$ and biases $b_i^{\text{output}}$. In this case, it is assumed that the number of neurons in the output layer is one.

## Final technicalities III

The procedure of weighting the output of neuron $j$ in the hidden layer into the $i$-th neuron in the output layer is similar to that for the hidden layer described previously.

$$
\begin{aligned}
z_{1,j}^{\text{output}} & =
\begin{pmatrix} b_1^{\text{output}} & \boldsymbol{w}_1^{\text{output}} \end{pmatrix}
\begin{pmatrix} 1 \\ \boldsymbol{x}_j^{\text{hidden}} \end{pmatrix}
\end{aligned}
$$

## Final technicalities IV

Expressing $z_{1,j}^{\text{output}}$ as a vector gives the following way of weighting the inputs from the hidden layer:

$$
\boldsymbol{z}_{1}^{\text{output}} =
\begin{pmatrix} b_1^{\text{output}} & \boldsymbol{w}_1^{\text{output}} \end{pmatrix}
\begin{pmatrix} 1 & 1 & \dots & 1 \\ \boldsymbol{x}_1^{\text{hidden}} & \boldsymbol{x}_2^{\text{hidden}} & \dots & \boldsymbol{x}_N^{\text{hidden}} \end{pmatrix}
$$

In this case we seek a continuous range of values since we are approximating a function. This means that after computing $\boldsymbol{z}_{1}^{\text{output}}$ the neural network has finished its feed forward step, and $\boldsymbol{z}_{1}^{\text{output}}$ is the final output of the network.

## Back propagation

The next step is to decide how the parameters should be changed such that they minimize the cost function. The chosen cost function for this problem is

$$
C(\boldsymbol{x}, P) = \frac{1}{N} \sum_i \big(g_t'(x_i, P) - ( -\gamma g_t(x_i, P)) \big)^2
$$

In order to minimize the cost function, an optimization method must be chosen. Here, gradient descent with a constant step size has been chosen.

## Gradient descent

The idea of the gradient descent algorithm is to update the parameters in a direction where the cost function decreases towards a minimum. In general, the update of some parameters $\boldsymbol{\omega}$, given a cost function $C(\boldsymbol{x}, \boldsymbol{\omega})$ defined by those parameters, goes as follows:

$$
\boldsymbol{\omega}_{\text{new} } = \boldsymbol{\omega} - \lambda \nabla_{\boldsymbol{\omega}} C(\boldsymbol{x}, \boldsymbol{\omega})
$$

for a number of iterations or until $ \big|\big| \boldsymbol{\omega}_{\text{new} } - \boldsymbol{\omega} \big|\big|$ becomes smaller than some given tolerance. The value of $\lambda$ decides how large a step the algorithm takes in the direction of the negative gradient $-\nabla_{\boldsymbol{\omega}} C(\boldsymbol{x}, \boldsymbol{\omega})$, and the notation $\nabla_{\boldsymbol{\omega}}$ denotes the gradient with respect to the elements in $\boldsymbol{\omega}$.

In our case, we have to minimize the cost function $C(\boldsymbol{x}, P)$ with respect to the two sets of weights and biases, that is for the hidden layer $P_{\text{hidden} }$ and for the output layer $P_{\text{output} }$.
This means that $P_{\text{hidden} }$ and $P_{\text{output} }$ is updated by $$ \begin{aligned} P_{\text{hidden},\text{new}} &= P_{\text{hidden}} - \lambda \nabla_{P_{\text{hidden}}} C(\boldsymbol{x}, P) \\ P_{\text{output},\text{new}} &= P_{\text{output}} - \lambda \nabla_{P_{\text{output}}} C(\boldsymbol{x}, P) \end{aligned} $$ ## The code for solving the ODE ```python %matplotlib inline import autograd.numpy as np from autograd import grad, elementwise_grad import autograd.numpy.random as npr from matplotlib import pyplot as plt def sigmoid(z): return 1/(1 + np.exp(-z)) # Assuming one input, hidden, and output layer def neural_network(params, x): # Find the weights (including and biases) for the hidden and output layer. # Assume that params is a list of parameters for each layer. # The biases are the first element for each array in params, # and the weights are the remaning elements in each array in params. w_hidden = params[0] w_output = params[1] # Assumes input x being an one-dimensional array num_values = np.size(x) x = x.reshape(-1, num_values) # Assume that the input layer does nothing to the input x x_input = x ## Hidden layer: # Add a row of ones to include bias x_input = np.concatenate((np.ones((1,num_values)), x_input ), axis = 0) z_hidden = np.matmul(w_hidden, x_input) x_hidden = sigmoid(z_hidden) ## Output layer: # Include bias: x_hidden = np.concatenate((np.ones((1,num_values)), x_hidden ), axis = 0) z_output = np.matmul(w_output, x_hidden) x_output = z_output return x_output # The trial solution using the deep neural network: def g_trial(x,params, g0 = 10): return g0 + x*neural_network(params,x) # The right side of the ODE: def g(x, g_trial, gamma = 2): return -gamma*g_trial # The cost function: def cost_function(P, x): # Evaluate the trial function with the current parameters P g_t = g_trial(x,P) # Find the derivative w.r.t x of the neural network d_net_out = elementwise_grad(neural_network,1)(P,x) # Find the derivative w.r.t x of the trial function d_g_t = elementwise_grad(g_trial,0)(x,P) # The right side of the ODE func = g(x, g_t) err_sqr = (d_g_t - func)**2 cost_sum = np.sum(err_sqr) return cost_sum / np.size(err_sqr) # Solve the exponential decay ODE using neural network with one input, hidden, and output layer def solve_ode_neural_network(x, num_neurons_hidden, num_iter, lmb): ## Set up initial weights and biases # For the hidden layer p0 = npr.randn(num_neurons_hidden, 2 ) # For the output layer p1 = npr.randn(1, num_neurons_hidden + 1 ) # +1 since bias is included P = [p0, p1] print('Initial cost: %g'%cost_function(P, x)) ## Start finding the optimal weights using gradient descent # Find the Python function that represents the gradient of the cost function # w.r.t the 0-th input argument -- that is the weights and biases in the hidden and output layer cost_function_grad = grad(cost_function,0) # Let the update be done num_iter times for i in range(num_iter): # Evaluate the gradient at the current weights and biases in P. # The cost_grad consist now of two arrays; # one for the gradient w.r.t P_hidden and # one for the gradient w.r.t P_output cost_grad = cost_function_grad(P, x) P[0] = P[0] - lmb * cost_grad[0] P[1] = P[1] - lmb * cost_grad[1] print('Final cost: %g'%cost_function(P, x)) return P def g_analytic(x, gamma = 2, g0 = 10): return g0*np.exp(-gamma*x) # Solve the given problem if __name__ == '__main__': # Set seed such that the weight are initialized # with same weights and biases for every run. 
npr.seed(15) ## Decide the vales of arguments to the function to solve N = 10 x = np.linspace(0, 1, N) ## Set up the initial parameters num_hidden_neurons = 10 num_iter = 10000 lmb = 0.001 # Use the network P = solve_ode_neural_network(x, num_hidden_neurons, num_iter, lmb) # Print the deviation from the trial solution and true solution res = g_trial(x,P) res_analytical = g_analytic(x) print('Max absolute difference: %g'%np.max(np.abs(res - res_analytical))) # Plot the results plt.figure(figsize=(10,10)) plt.title('Performance of neural network solving an ODE compared to the analytical solution') plt.plot(x, res_analytical) plt.plot(x, res[0,:]) plt.legend(['analytical','nn']) plt.xlabel('x') plt.ylabel('g(x)') plt.show() ``` ## The network with one input layer, specified number of hidden layers, and one output layer It is also possible to extend the construction of our network into a more general one, allowing the network to contain more than one hidden layers. The number of neurons within each hidden layer are given as a list of integers in the program below. ```python import autograd.numpy as np from autograd import grad, elementwise_grad import autograd.numpy.random as npr from matplotlib import pyplot as plt def sigmoid(z): return 1/(1 + np.exp(-z)) # The neural network with one input layer and one output layer, # but with number of hidden layers specified by the user. def deep_neural_network(deep_params, x): # N_hidden is the number of hidden layers N_hidden = np.size(deep_params) - 1 # -1 since params consists of # parameters to all the hidden # layers AND the output layer. # Assumes input x being an one-dimensional array num_values = np.size(x) x = x.reshape(-1, num_values) # Assume that the input layer does nothing to the input x x_input = x # Due to multiple hidden layers, define a variable referencing to the # output of the previous layer: x_prev = x_input ## Hidden layers: for l in range(N_hidden): # From the list of parameters P; find the correct weigths and bias for this layer w_hidden = deep_params[l] # Add a row of ones to include bias x_prev = np.concatenate((np.ones((1,num_values)), x_prev ), axis = 0) z_hidden = np.matmul(w_hidden, x_prev) x_hidden = sigmoid(z_hidden) # Update x_prev such that next layer can use the output from this layer x_prev = x_hidden ## Output layer: # Get the weights and bias for this layer w_output = deep_params[-1] # Include bias: x_prev = np.concatenate((np.ones((1,num_values)), x_prev), axis = 0) z_output = np.matmul(w_output, x_prev) x_output = z_output return x_output # The trial solution using the deep neural network: def g_trial_deep(x,params, g0 = 10): return g0 + x*deep_neural_network(params, x) # The right side of the ODE: def g(x, g_trial, gamma = 2): return -gamma*g_trial # The same cost function as before, but calls deep_neural_network instead. def cost_function_deep(P, x): # Evaluate the trial function with the current parameters P g_t = g_trial_deep(x,P) # Find the derivative w.r.t x of the neural network d_net_out = elementwise_grad(deep_neural_network,1)(P,x) # Find the derivative w.r.t x of the trial function d_g_t = elementwise_grad(g_trial_deep,0)(x,P) # The right side of the ODE func = g(x, g_t) err_sqr = (d_g_t - func)**2 cost_sum = np.sum(err_sqr) return cost_sum / np.size(err_sqr) # Solve the exponential decay ODE using neural network with one input and one output layer, # but with specified number of hidden layers from the user. 
def solve_ode_deep_neural_network(x, num_neurons, num_iter, lmb): # num_hidden_neurons is now a list of number of neurons within each hidden layer # The number of elements in the list num_hidden_neurons thus represents # the number of hidden layers. # Find the number of hidden layers: N_hidden = np.size(num_neurons) ## Set up initial weights and biases # Initialize the list of parameters: P = [None]*(N_hidden + 1) # + 1 to include the output layer P[0] = npr.randn(num_neurons[0], 2 ) for l in range(1,N_hidden): P[l] = npr.randn(num_neurons[l], num_neurons[l-1] + 1) # +1 to include bias # For the output layer P[-1] = npr.randn(1, num_neurons[-1] + 1 ) # +1 since bias is included print('Initial cost: %g'%cost_function_deep(P, x)) ## Start finding the optimal weights using gradient descent # Find the Python function that represents the gradient of the cost function # w.r.t the 0-th input argument -- that is the weights and biases in the hidden and output layer cost_function_deep_grad = grad(cost_function_deep,0) # Let the update be done num_iter times for i in range(num_iter): # Evaluate the gradient at the current weights and biases in P. # The cost_grad consist now of N_hidden + 1 arrays; the gradient w.r.t the weights and biases # in the hidden layers and output layers evaluated at x. cost_deep_grad = cost_function_deep_grad(P, x) for l in range(N_hidden+1): P[l] = P[l] - lmb * cost_deep_grad[l] print('Final cost: %g'%cost_function_deep(P, x)) return P def g_analytic(x, gamma = 2, g0 = 10): return g0*np.exp(-gamma*x) # Solve the given problem if __name__ == '__main__': npr.seed(15) ## Decide the vales of arguments to the function to solve N = 10 x = np.linspace(0, 1, N) ## Set up the initial parameters num_hidden_neurons = np.array([10,10]) num_iter = 10000 lmb = 0.001 P = solve_ode_deep_neural_network(x, num_hidden_neurons, num_iter, lmb) res = g_trial_deep(x,P) res_analytical = g_analytic(x) plt.figure(figsize=(10,10)) plt.title('Performance of a deep neural network solving an ODE compared to the analytical solution') plt.plot(x, res_analytical) plt.plot(x, res[0,:]) plt.legend(['analytical','dnn']) plt.ylabel('g(x)') plt.show() ``` ## Example: Population growth, comparing Autograd, and Euler's scheme A logistic model of population growth assumes that a population converges toward an equilibrium. The population growth can be modeled by <!-- Equation labels as ordinary links --> <div id="log"></div> $$ \begin{equation} \label{log} \tag{10} g'(t) = \alpha g(t)(A - g(t)) \end{equation} $$ where $g(t)$ is the population density at time $t$, $\alpha > 0$ the growth rate and $A > 0$ is the maximum population number in the environment. Also, at $t = 0$ the population has the size $g(0) = g_0$, where $g_0$ is some chosen constant. In this example, similar network as for the exponential decay using Autograd has been used to solve the equation. However, as the implementation might suffer from e.g numerical instability and high execution time (this might be more apparent in the examples solving PDEs), a network has been constructed using TensorFlow also. For comparison, the forward Euler method has been implemented in order to see how the networks performs compared to a numerical scheme. ## Setting up the problem Here, we will model a population $g(t)$ in an environment having carrying capacity $A$. 
The population follows the model <!-- Equation labels as ordinary links --> <div id="solveode_population"></div> $$ \begin{equation} \label{solveode_population} \tag{11} g'(t) = \alpha g(t)(A - g(t)) \end{equation} $$ where $g(0) = g_0$. In this example, we let $\alpha = 2$, $A = 1$, and $g_0 = 1.2$. ## The trial solution We will get a slightly different trial solution, as the boundary conditions are different compared to the case for exponential decay. A possible trial solution satisfying the condition $g(0) = g_0$ could be $$ h_1(t) = g_0 + t \cdot N(t,P) $$ with $N(t,P)$ being the output from the neural network with weights and biases for each layer collected in the set $P$. The analytical solution is $$ g(t) = \frac{Ag_0}{g_0 + (A - g_0)\exp(-\alpha A t)} $$ ## The program using Autograd The network will be the similar as for the exponential decay example, but with some small modifications for our problem. ```python import autograd.numpy as np from autograd import grad, elementwise_grad import autograd.numpy.random as npr from matplotlib import pyplot as plt def sigmoid(z): return 1/(1 + np.exp(-z)) # Function to get the parameters. # Done such that one can easily change the paramaters after one's liking. def get_parameters(): alpha = 2 A = 1 g0 = 1.2 return alpha, A, g0 def deep_neural_network(P, x): # N_hidden is the number of hidden layers N_hidden = np.size(P) - 1 # -1 since params consist of parameters to all the hidden layers AND the output layer # Assumes input x being an one-dimensional array num_values = np.size(x) x = x.reshape(-1, num_values) # Assume that the input layer does nothing to the input x x_input = x # Due to multiple hidden layers, define a variable referencing to the # output of the previous layer: x_prev = x_input ## Hidden layers: for l in range(N_hidden): # From the list of parameters P; find the correct weigths and bias for this layer w_hidden = P[l] # Add a row of ones to include bias x_prev = np.concatenate((np.ones((1,num_values)), x_prev ), axis = 0) z_hidden = np.matmul(w_hidden, x_prev) x_hidden = sigmoid(z_hidden) # Update x_prev such that next layer can use the output from this layer x_prev = x_hidden ## Output layer: # Get the weights and bias for this layer w_output = P[-1] # Include bias: x_prev = np.concatenate((np.ones((1,num_values)), x_prev), axis = 0) z_output = np.matmul(w_output, x_prev) x_output = z_output return x_output def cost_function_deep(P, x): # Evaluate the trial function with the current parameters P g_t = g_trial_deep(x,P) # Find the derivative w.r.t x of the trial function d_g_t = elementwise_grad(g_trial_deep,0)(x,P) # The right side of the ODE func = f(x, g_t) err_sqr = (d_g_t - func)**2 cost_sum = np.sum(err_sqr) return cost_sum / np.size(err_sqr) # The right side of the ODE: def f(x, g_trial): alpha,A, g0 = get_parameters() return alpha*g_trial*(A - g_trial) # The trial solution using the deep neural network: def g_trial_deep(x, params): alpha,A, g0 = get_parameters() return g0 + x*deep_neural_network(params,x) # The analytical solution: def g_analytic(t): alpha,A, g0 = get_parameters() return A*g0/(g0 + (A - g0)*np.exp(-alpha*A*t)) def solve_ode_deep_neural_network(x, num_neurons, num_iter, lmb): # num_hidden_neurons is now a list of number of neurons within each hidden layer # Find the number of hidden layers: N_hidden = np.size(num_neurons) ## Set up initial weigths and biases # Initialize the list of parameters: P = [None]*(N_hidden + 1) # + 1 to include the output layer P[0] = npr.randn(num_neurons[0], 2 ) for l in 
range(1,N_hidden): P[l] = npr.randn(num_neurons[l], num_neurons[l-1] + 1) # +1 to include bias # For the output layer P[-1] = npr.randn(1, num_neurons[-1] + 1 ) # +1 since bias is included print('Initial cost: %g'%cost_function_deep(P, x)) ## Start finding the optimal weigths using gradient descent # Find the Python function that represents the gradient of the cost function # w.r.t the 0-th input argument -- that is the weights and biases in the hidden and output layer cost_function_deep_grad = grad(cost_function_deep,0) # Let the update be done num_iter times for i in range(num_iter): # Evaluate the gradient at the current weights and biases in P. # The cost_grad consist now of N_hidden + 1 arrays; the gradient w.r.t the weights and biases # in the hidden layers and output layers evaluated at x. cost_deep_grad = cost_function_deep_grad(P, x) for l in range(N_hidden+1): P[l] = P[l] - lmb * cost_deep_grad[l] print('Final cost: %g'%cost_function_deep(P, x)) return P if __name__ == '__main__': npr.seed(4155) ## Decide the vales of arguments to the function to solve Nt = 10 T = 1 t = np.linspace(0,T, Nt) ## Set up the initial parameters num_hidden_neurons = [100, 50, 25] num_iter = 1000 lmb = 1e-3 P = solve_ode_deep_neural_network(t, num_hidden_neurons, num_iter, lmb) g_dnn_ag = g_trial_deep(t,P) g_analytical = g_analytic(t) # Find the maximum absolute difference between the solutons: diff_ag = np.max(np.abs(g_dnn_ag - g_analytical)) print("The max absolute difference between the solutions is: %g"%diff_ag) plt.figure(figsize=(10,10)) plt.title('Performance of neural network solving an ODE compared to the analytical solution') plt.plot(t, g_analytical) plt.plot(t, g_dnn_ag[0,:]) plt.legend(['analytical','nn']) plt.xlabel('t') plt.ylabel('g(t)') plt.show() ``` ## Using forward Euler to solve the ODE A straight-forward way of solving an ODE numerically, is to use Euler's method. Euler's method uses Taylor series to approximate the value at a function $f$ at a step $\Delta x$ from $x$: $$ f(x + \Delta x) \approx f(x) + \Delta x f'(x) $$ In our case, using Euler's method to approximate the value of $g$ at a step $\Delta t$ from $t$ yields $$ \begin{aligned} g(t + \Delta t) &\approx g(t) + \Delta t g'(t) \\ &= g(t) + \Delta t \big(\alpha g(t)(A - g(t))\big) \end{aligned} $$ along with the condition that $g(0) = g_0$. Let $t_i = i \cdot \Delta t$ where $\Delta t = \frac{T}{N_t-1}$ where $T$ is the final time our solver must solve for and $N_t$ the number of values for $t \in [0, T]$ for $i = 0, \dots, N_t-1$. For $i \geq 1$, we have that $$ \begin{aligned} t_i &= i\Delta t \\ &= (i - 1)\Delta t + \Delta t \\ &= t_{i-1} + \Delta t \end{aligned} $$ Now, if $g_i = g(t_i)$ then <!-- Equation labels as ordinary links --> <div id="odenum"></div> $$ \begin{equation} \begin{aligned} g_i &= g(t_i) \\ &= g(t_{i-1} + \Delta t) \\ &\approx g(t_{i-1}) + \Delta t \big(\alpha g(t_{i-1})(A - g(t_{i-1}))\big) \\ &= g_{i-1} + \Delta t \big(\alpha g_{i-1}(A - g_{i-1})\big) \end{aligned} \end{equation} \label{odenum} \tag{12} $$ for $i \geq 1$ and $g_0 = g(t_0) = g(0) = g_0$. Equation ([12](#odenum)) could be implemented in the following way, extending the program that uses the network using Autograd: ```python # Assume that all function definitions from the example program using Autograd # are located here. 
if __name__ == '__main__':
    npr.seed(4155)

    ## Decide the values of arguments to the function to solve
    Nt = 10
    T = 1
    t = np.linspace(0,T, Nt)

    ## Set up the initial parameters
    num_hidden_neurons = [100,50,25]
    num_iter = 1000
    lmb = 1e-3

    P = solve_ode_deep_neural_network(t, num_hidden_neurons, num_iter, lmb)

    g_dnn_ag = g_trial_deep(t,P)
    g_analytical = g_analytic(t)

    # Find the maximum absolute difference between the solutions:
    diff_ag = np.max(np.abs(g_dnn_ag - g_analytical))
    print("The max absolute difference between the solutions is: %g"%diff_ag)

    plt.figure(figsize=(10,10))
    plt.title('Performance of neural network solving an ODE compared to the analytical solution')
    plt.plot(t, g_analytical)
    plt.plot(t, g_dnn_ag[0,:])
    plt.legend(['analytical','nn'])
    plt.xlabel('t')
    plt.ylabel('g(t)')

    ## Find an approximation to the function using forward Euler
    alpha, A, g0 = get_parameters()
    dt = T/(Nt - 1)

    # Perform forward Euler to solve the ODE
    g_euler = np.zeros(Nt)
    g_euler[0] = g0

    for i in range(1,Nt):
        g_euler[i] = g_euler[i-1] + dt*(alpha*g_euler[i-1]*(A - g_euler[i-1]))

    # Print the errors done by each method
    diff1 = np.max(np.abs(g_euler - g_analytical))
    diff2 = np.max(np.abs(g_dnn_ag[0,:] - g_analytical))

    print('Max absolute difference between Euler method and analytical: %g'%diff1)
    print('Max absolute difference between deep neural network and analytical: %g'%diff2)

    # Plot results
    plt.figure(figsize=(10,10))
    plt.plot(t,g_euler)
    plt.plot(t,g_analytical)
    plt.plot(t,g_dnn_ag[0,:])
    plt.legend(['euler','analytical','dnn'])
    plt.xlabel('Time t')
    plt.ylabel('g(t)')
    plt.show()
```

## Using TensorFlow

TensorFlow is an open source machine learning library, widely used in the machine learning community, developed by the Google Brain team originally for internal use. It was released under the Apache 2.0 open source license on November 9, 2015.

TensorFlow is a computational framework that allows you to construct machine learning models at different levels of abstraction, from high-level, object-oriented APIs like Keras, down to the C++ kernels that TensorFlow is built upon. The higher levels of abstraction are simpler to use, but less flexible, and our choice of implementation should reflect the problems we are trying to solve.

To install TensorFlow on Unix/Linux systems, use pip as

```python
pip3 install tensorflow
```

and/or, if you use **anaconda**, just write (or install from the graphical user interface) the current release of CPU-only TensorFlow

```python
conda create -n tf tensorflow
conda activate tf
```

To install the current release of GPU TensorFlow

```python
conda create -n tf-gpu tensorflow-gpu
conda activate tf-gpu
```

## Using Keras

Keras is a high-level [neural network API](https://en.wikipedia.org/wiki/Application_programming_interface) that supports TensorFlow, CNTK and Theano as backends. If you have Anaconda installed you may run the following command

```python
conda install keras
```

You can look up the [instructions here](https://keras.io/) for more information.

The program below uses Keras to solve a second-order ODE, the undamped harmonic oscillator $g''(t) = -\frac{k}{m}\, g(t)$ with $g(0) = 1$ (and $k = m = 1$), and compares the network's solution against the analytical cosine solution.
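The loss function in the Keras program below relies on `tf.gradients` to differentiate the trial solution with respect to the input tensor, which is what turns the squared ODE residual into something the optimizer can minimize. The snippet here is only a minimal, self-contained sketch of that mechanism (it assumes TensorFlow 2.x with the `tensorflow.compat.v1` graph mode enabled; under TensorFlow 1.x the `compat.v1` prefix and the call to `disable_eager_execution()` can be dropped):

```python
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()  # tf.gradients needs graph mode when running under TF 2.x

# A placeholder plays the role of the input tensor fed to the network
x = tf.placeholder(tf.float64, shape=(None,))

# Stand-in for a trial solution; in the real program this would involve the network output
y = tf.sin(x)

# First and second derivatives of y with respect to x, built symbolically
dy_dx = tf.gradients(y, x)[0]
d2y_dx2 = tf.gradients(dy_dx, x)[0]

with tf.Session() as sess:
    xs = np.linspace(0.0, np.pi, 5)
    first, second = sess.run([dy_dx, d2y_dx2], feed_dict={x: xs})
    # For y = sin(x): dy/dx = cos(x) and d2y/dx2 = -sin(x)
    print(np.allclose(first, np.cos(xs)), np.allclose(second, -np.sin(xs)))
```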
```python import numpy as np import matplotlib.pyplot as plt import tensorflow as tf from math import * import time from keras.optimizers import Adam from keras.models import Model from keras.layers import Dense, Input, LeakyReLU from keras import optimizers %matplotlib inline # Analytical Solution to position def ana_cos(r0,t,k=1,m=1): w0 = sqrt(k/m) return r0*np.cos(w0*t) # Trial Function for Neural Net def trial_func(x,y,y0=1): return x*y + y0 # Loss Function for Position and Velocity def combined_right(trialv,trialy,k=1,m=1): return -k/m*trialy, trialv # Loss Wrapper in order to pass the Loss Function to Neural Net def con_loss_wrapper(input_tensor): def con_loss_function(y,y_pred): trialy = trial_func(input_tensor,y_pred) trialv = trial_func(input_tensor,y_pred) righty, rightv = combined_right(trialv,trialy) leftv = tf.gradients(trialv,input_tensor)[0] lefty = tf.gradients(leftv,input_tensor)[0] loss = tf.reduce_mean((tf.math.squared_difference(lefty,righty))) return loss return con_loss_function # Creates the input data for the Neural Net def create_input_data(x0=0,xmax=1,num_batch=5,len_batch=15): input_data = np.linspace(x0,xmax,num_batch*len_batch) input_data = input_data.reshape(num_batch,len_batch) return input_data # Creates the Neural Net def create_net(data,len_batch,lr,epochs,right_side,loss,n_hidden_layer=50): input_tensor = Input(shape=(len_batch,)) hidden1 = Dense(n_hidden_layer,activation='tanh', kernel_initializer='random_uniform',bias_initializer='random_uniform')(input_tensor) hidden2 = Dense(n_hidden_layer,activation='tanh', kernel_initializer='random_uniform',bias_initializer='random_uniform')(hidden1) hidden3 = Dense(n_hidden_layer,activation='tanh', kernel_initializer='random_uniform',bias_initializer='random_uniform')(hidden2) hidden4 = Dense(n_hidden_layer,activation='tanh', kernel_initializer='random_uniform',bias_initializer='random_uniform')(hidden3) output = Dense(len_batch)(hidden4) model = Model(input_tensor,output) gd = optimizers.adam(lr=lr) # May need to change first 'lr' to 'learning_rate' depending on TF/Keras version model.compile(loss=loss(input_tensor),optimizer=gd) model.fit(data,np.zeros((data.shape[0])),epochs=epochs) res = model.predict(data) del model return res # Define Function for Mean Squared Error def mean_squared_error(analytical, results): mse = 0 for i in range(len(analytical)): mse += (analytical[i] - results[i])**2 mse = mse/len(analytical) return mse # Define Constants num_batch = 1000 len_batch = 1 # Create input data data = create_input_data(x0=0,xmax=10,num_batch=num_batch,len_batch=len_batch) # Create and Run the Neural Net nn_start_time = time.time() velocity = create_net(data,len_batch=len_batch,lr=0.001,n_hidden_layer=50, epochs=1000,right_side=combined_right,loss=con_loss_wrapper) nn_end_time = time.time() # Reshape Neural Net Output for easy graphing and analysis n = num_batch*len_batch velocity = velocity.reshape(1,n) t = data.reshape(1,n)[0] results_v = trial_func(t,velocity)[0] # Create Comparison data from Analytical Solution analyt = ana_cos(1,t,m=1,k=1) # Plot plt.plot(t,analyt,label='analytical') plt.plot(t,results_v,label='net') plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left') # Find Mean Squared Error print("The Mean Squared Error of the Neural Net Solution is", mean_squared_error(analyt, results_v), "with a runtime of", nn_end_time-nn_start_time,"seconds.") ```
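For completeness, the same kind of forward-Euler comparison that was done for the population model could be repeated for this oscillator. Since the equation is second order, it is first rewritten as the system $g' = v$, $v' = -(k/m)\,g$; the sketch below assumes $k = m = 1$ and the initial conditions $g(0) = 1$, $g'(0) = 0$ that match the cosine solution used above.

```python
import numpy as np

# Forward Euler for g'' = -(k/m) g, rewritten as the system g' = v, v' = -(k/m) g
k, m = 1.0, 1.0
T, Nt = 10.0, 1000
dt = T / (Nt - 1)

t = np.linspace(0, T, Nt)
g = np.zeros(Nt)
v = np.zeros(Nt)
g[0], v[0] = 1.0, 0.0   # g(0) = 1, g'(0) = 0

for i in range(1, Nt):
    g[i] = g[i-1] + dt * v[i-1]
    v[i] = v[i-1] - dt * (k/m) * g[i-1]

analytical = np.cos(np.sqrt(k/m) * t)
print('Max absolute difference between Euler and analytical: %g' % np.max(np.abs(g - analytical)))
```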
# Financial Network **Author**: [Erika Fille Legara](http://www.erikalegara.net/) You are free to use (or change) this notebook for any purpose you'd like. However, please respect the MIT License that governs its use, and for copying permission. Copyright © 2016 Erika Fille Legara --- ## Description I have been receiving requests to release the Python code I wrote to produce the financial network discussed in my blog post at [erikafille.ph](http://erikafille.ph) titled [PSE Correlation-Based Network](https://erikafille.ph/2016/08/16/pse-correlation-based-network/). Here it is. In this notebook, we build a correlation-based network (minimum spanning tree (MST)) of companies listed in the [Philippine Stock Exchange](https://www.pse.com.ph/stockMarket/home.html). The entire process can be summarized into three steps: 1. Set up the **correlation matrix** for the stock prices in the Philippine Stock Exchange. 2. Convert the resulting correlation matrix into a **distance matrix**. 3. Build a **minimum spanning tree** from the distance matrix. ### Data I got the link to the data from this [stock forum post](http://www.stockmarketpilipinas.com/thread-500.html). In this post it says [Drop Box: 2006-present worth of CSV files uploaded by Mr. Coelacanth](https://www.dropbox.com/sh/1dluf0lawy9a7rm/fHREemAjWS). The last I checked, the 2016 historical dataset is only until the 15th of July. --- ## Let's Dig In Import the necessary packages. ```python from __future__ import division try: import networkx as nx import pandas as pd import os import matplotlib.pyplot as plt import seaborn as sns import math import numpy as np from datetime import datetime %matplotlib inline except: import traceback traceback.print_exc() raise ImportError('Something failed, see above.') ``` ### Load Data We first load the list of companies in the PSE and store it in a Pandas data frame we call `pse_companies`. ```python pse_companies = pd.read_csv("PSE-listed-companies.csv") pse_companies = pse_companies[["Company Name", "Stock Symbol", "Sector", "Subsector", "Listing Date"]] pse_companies.head() ``` <div style="max-height:1000px;max-width:1500px;overflow:auto;"> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>Company Name</th> <th>Stock Symbol</th> <th>Sector</th> <th>Subsector</th> <th>Listing Date</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>Unioil Resources &amp; Holdings Company, Inc.</td> <td>UNI</td> <td>Holding Firms</td> <td>Holding Firms</td> <td>27 July 1987</td> </tr> <tr> <th>1</th> <td>Union Bank of the Philippines, Inc.</td> <td>UBP</td> <td>Financials</td> <td>Banks</td> <td>29 June 1992</td> </tr> <tr> <th>2</th> <td>United Paragon Mining Corporation</td> <td>UPM</td> <td>Mining and Oil</td> <td>Mining</td> <td>2 April 1973</td> </tr> <tr> <th>3</th> <td>Universal Robina Corporation</td> <td>URC</td> <td>Industrial</td> <td>Food, Beverage &amp; Tobacco</td> <td>25 March 1994</td> </tr> <tr> <th>4</th> <td>Uniwide Holdings, Inc.</td> <td>UW</td> <td>Property</td> <td>Property</td> <td>19 August 1996</td> </tr> </tbody> </table> </div> Then, load all files under the folder "2016" to load all 2016 data. Each file inside a folder contains the prices of all stock quotes in the PSE for the day (as indicated in the filename). As mentioned, the dataset I have for the year 2016 only runs until the 15th of July. The cell block below loads all files in the directory `"./2016/"`. 
```python files2016 = os.listdir("./2016/") ``` Let's now explore the content of a file (stock prices for 1 day) inside the directory (folder). *I am not exactly sure what the last two columns are, so I'm assigning them the variables X1 and X2, respectively.* ```python print "Day: ", files2016[0] df0 = pd.read_csv("./2016/" + files2016[0], header=None) df0.head() ``` Day: stockQuotes_01042016.csv <div style="max-height:1000px;max-width:1500px;overflow:auto;"> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>0</th> <th>1</th> <th>2</th> <th>3</th> <th>4</th> <th>5</th> <th>6</th> <th>7</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>^FINANCIAL</td> <td>01/04/2016</td> <td>1554.07</td> <td>1555.61</td> <td>1531.13</td> <td>1531.13</td> <td>298140</td> <td>59289</td> </tr> <tr> <th>1</th> <td>AUB</td> <td>01/04/2016</td> <td>46.30</td> <td>46.30</td> <td>45.00</td> <td>46.15</td> <td>12900</td> <td>411515</td> </tr> <tr> <th>2</th> <td>BDO</td> <td>01/04/2016</td> <td>106.00</td> <td>106.00</td> <td>101.90</td> <td>101.90</td> <td>551000</td> <td>10878135</td> </tr> <tr> <th>3</th> <td>BPI</td> <td>01/04/2016</td> <td>83.70</td> <td>83.75</td> <td>83.00</td> <td>83.20</td> <td>384540</td> <td>9318360</td> </tr> <tr> <th>4</th> <td>CHIB</td> <td>01/04/2016</td> <td>37.50</td> <td>37.50</td> <td>37.05</td> <td>37.05</td> <td>33200</td> <td>193819</td> </tr> </tbody> </table> </div> ```python df0.columns = ["Company_Index", "Date", "Open", "High", "Low", "Close", "X1", "X2"] df0.head() ``` <div style="max-height:1000px;max-width:1500px;overflow:auto;"> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>Company_Index</th> <th>Date</th> <th>Open</th> <th>High</th> <th>Low</th> <th>Close</th> <th>X1</th> <th>X2</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>^FINANCIAL</td> <td>01/04/2016</td> <td>1554.07</td> <td>1555.61</td> <td>1531.13</td> <td>1531.13</td> <td>298140</td> <td>59289</td> </tr> <tr> <th>1</th> <td>AUB</td> <td>01/04/2016</td> <td>46.30</td> <td>46.30</td> <td>45.00</td> <td>46.15</td> <td>12900</td> <td>411515</td> </tr> <tr> <th>2</th> <td>BDO</td> <td>01/04/2016</td> <td>106.00</td> <td>106.00</td> <td>101.90</td> <td>101.90</td> <td>551000</td> <td>10878135</td> </tr> <tr> <th>3</th> <td>BPI</td> <td>01/04/2016</td> <td>83.70</td> <td>83.75</td> <td>83.00</td> <td>83.20</td> <td>384540</td> <td>9318360</td> </tr> <tr> <th>4</th> <td>CHIB</td> <td>01/04/2016</td> <td>37.50</td> <td>37.50</td> <td>37.05</td> <td>37.05</td> <td>33200</td> <td>193819</td> </tr> </tbody> </table> </div> After my initial data exploration, I made a list of companies I am discarding in this analysis. I am also excluding the indices. In any case, you may add or delete stocks to your liking. 
```python pse_comp = list(pse_companies["Stock Symbol"]) discard = ['UW', 'VMC', 'VVT', 'PRIM', 'MJIC', 'MACAY', 'PMT', 'REG', 'ROX', \ 'RCI', 'SPC', 'SPM', 'STR', 'STN', 'SRDC', 'SGP', 'MAH', 'MGH', \ 'NXGEN', 'PCP', 'PMPC', 'PAX', 'PHC', 'H2O', 'PNC', 'PRC', 'PTT', \ 'PTC', 'PORT', 'GPH', 'GREEN', 'KPH', 'LMG', 'LSC', 'CHP', 'CAT', \ 'CIP', 'CSB', 'DWC', 'ECP', 'EVER', 'EIBA', 'FEU', 'FFI', 'FYN', \ 'FAF', 'ABC', 'AAA', 'ATI', 'AB', 'BH', 'CHI', 'CPV', "BCOR"] companies = [company for company in pse_comp if company not in discard ] print companies ``` ['UNI', 'UBP', 'UPM', 'URC', 'V', 'VLL', 'VITA', 'VUL', 'WPI', 'WIN', 'X', 'YEHEY', 'ZHI', 'IPO', 'PHN', 'PNX', 'PSPC', 'PHA', 'PLC', 'POPI', 'PRMX', 'PPC', 'PGOLD', 'RFM', 'RCB', 'RLC', 'RRHI', 'ROCK', 'SBS', 'SM', 'SMPH', 'SOC', 'SSI', 'STI', 'SMC', 'PF', 'SECB', 'SCC', 'SHNG', 'SGI', 'SPH', 'SLI', 'SLF', 'SUN', 'SFI', 'T', 'PSE', 'OV', 'TFHI', 'TA', 'TAPET', 'TBGI', 'RWM', 'MJC', 'MA', 'MWC', 'MFC', 'MARC', 'MAXS', 'MWIDE', 'MEG', 'MCP', 'MPI', 'MRSGI', 'MBT', 'MG', 'NRCP', 'NI', 'NIKL', 'NOW', 'OM', 'ORE', 'OPM', 'PAL', 'TEL', 'TFC', 'LOTO', 'PA', 'PIP', 'PERC', 'PCOR', 'WEB', 'PX', 'PXP', 'PBC', 'PBB', 'PHES', 'PNB', 'RLT', 'PSB', 'SEVN', 'GTCAP', 'GSMI', 'FNI', 'GERI', 'GLO', 'HVN', 'TUGS', 'HLCM', 'HI', 'I', 'EG', 'IPM', 'IRC', 'ISM', 'IMP', 'IMI', 'ICT', 'ION', 'IS', 'IDC', 'JGS', 'JAS', 'JFC', 'JOH', 'KEP', 'LBC', 'LTG', 'LR', 'LC', 'LFM', 'LIB', 'LIHC', 'LPZ', 'MED', 'MRC', 'MHC', 'MVC', 'MAC', 'MFIN', 'MBC', 'MB', 'MER', 'CEU', 'CNPF', 'CPM', 'CPG', 'CHIB', 'TECH', 'LAND', 'CDC', 'COAL', 'CIC', 'CA', 'COSCO', 'CROWN', 'CEI', 'CYBR', 'DNL', 'DFNN', 'DMC', 'DAVIN', 'DMPL', 'DIZ', 'DD', 'EEI', 'EW', 'EMP', 'ELI', 'EDC', 'EURO', 'FJP', 'FDC', 'FLI', 'FGEN', 'FMETF', 'FPH', 'FPI', 'GEO', 'GMAP', 'GMA7', '2GO', 'HOUSE', 'BRN', 'ANS', 'ABS', 'ABSP', 'AGF', 'APC', 'ATN', 'ABA', 'AEV', 'AP', 'AR', 'ACE', 'ANI', 'AGI', 'FOOD', 'ACR', 'ALT', 'ALHI', 'APO', 'APX', 'ARA', 'ALCO', 'AUB', 'ABG', 'AT', 'AC', 'ALI', 'BLFI', 'BDO', 'BPI', 'BSC', 'BEL', 'BC', 'BLOOM', 'BMM', 'BHI', 'BKR', 'COL', 'CAL', 'CEB'] We then filter the dataframe to companies listed in `companies`. 
We also drop the other columns and only retain the following `["Company_Index", "Date", "Close"]` columns ```python allprices = pd.DataFrame() for f in files2016: df = pd.read_csv("./2016/" + f, header=None) df.columns = ["Company_Index", "Date", "Open", "High", "Low", "Close", "Volume1", "Volume2"] df = df[df.Company_Index.isin(companies)] df.Date = pd.to_datetime(df.Date) df = df[["Company_Index", "Date", "Close"]] allprices = pd.concat([allprices,df], ignore_index = True) ``` ```python allprices.head() ``` <div style="max-height:1000px;max-width:1500px;overflow:auto;"> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>Company_Index</th> <th>Date</th> <th>Close</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>AUB</td> <td>2016-01-04</td> <td>46.15</td> </tr> <tr> <th>1</th> <td>BDO</td> <td>2016-01-04</td> <td>101.90</td> </tr> <tr> <th>2</th> <td>BPI</td> <td>2016-01-04</td> <td>83.20</td> </tr> <tr> <th>3</th> <td>CHIB</td> <td>2016-01-04</td> <td>37.05</td> </tr> <tr> <th>4</th> <td>EW</td> <td>2016-01-04</td> <td>18.22</td> </tr> </tbody> </table> </div> ```python ## List of all comapnies we are considering in this notebook print set(list(allprices.Company_Index)) ``` set(['PXP', 'CPM', 'AGI', 'BC', 'JFC', 'AGF', 'POPI', 'FMETF', 'CPG', 'RWM', 'IDC', 'SCC', 'BMM', 'PAL', 'FNI', 'MED', 'PERC', 'RRHI', 'LIB', 'GMA7', 'GERI', 'CROWN', 'SPH', 'ION', 'LBC', 'MFIN', 'YEHEY', 'PIP', 'LR', 'WEB', 'LPZ', 'URC', 'VITA', 'OM', 'GMAP', 'GSMI', '2GO', 'T', 'PGOLD', 'X', 'MRC', 'APX', 'UBP', 'PHN', 'SMPH', 'HOUSE', 'PHA', 'CDC', 'IPO', 'IPM', 'MAC', 'APO', 'SFI', 'APC', 'TBGI', 'FDC', 'DMC', 'BEL', 'AEV', 'MWC', 'HI', 'EEI', 'EMP', 'GTCAP', 'MFC', 'VLL', 'PX', 'CEI', 'PRMX', 'CEU', 'PA', 'PF', 'GEO', 'EG', 'UPM', 'BLFI', 'FOOD', 'BDO', 'CNPF', 'STI', 'PHES', 'TUGS', 'NRCP', 'PBC', 'PBB', 'ABSP', 'EW', 'JGS', 'RFM', 'MG', 'MA', 'OV', 'PSB', 'MWIDE', 'MPI', 'AR', 'HLCM', 'ANS', 'OPM', 'NOW', 'AT', 'TAPET', 'ACE', 'GLO', 'ATN', 'SUN', 'VUL', 'ACR', 'PCOR', 'ISM', 'CHIB', 'FJP', 'SMC', 'COL', 'LAND', 'NI', 'ALHI', 'SECB', 'TEL', 'DMPL', 'LFM', 'TECH', 'WPI', 'BKR', 'ICT', 'BSC', 'AUB', 'IMI', 'DNL', 'COSCO', 'JOH', 'MVC', 'LTG', 'SBS', 'V', 'UNI', 'FLI', 'EURO', 'EDC', 'MEG', 'FGEN', 'ALI', 'WIN', 'MB', 'PSPC', 'RLC', 'BRN', 'PLC', 'SM', 'MER', 'RLT', 'ALT', 'ORE', 'TFHI', 'CIC', 'ALCO', 'KEP', 'DAVIN', 'BPI', 'DD', 'ANI', 'ELI', 'SHNG', 'TFC', 'DIZ', 'CAL', 'ZHI', 'PSE', 'RCB', 'PPC', 'SOC', 'LC', 'LIHC', 'MJC', 'SEVN', 'MBC', 'SGI', 'COAL', 'MHC', 'HVN', 'LOTO', 'ROCK', 'MBT', 'SSI', 'DFNN', 'BLOOM', 'CA', 'TA', 'CYBR', 'ABA', 'AC', 'ABG', 'I', 'IS', 'FPI', 'FPH', 'JAS', 'AP', 'ABS', 'SLI', 'SLF', 'IRC', 'CEB', 'BHI', 'PNX', 'MARC', 'MRSGI', 'IMP', 'MAXS', 'NIKL', 'ARA', 'PNB', 'MCP']) In the variable `subset` below, we further filter the dataset by date range. Here, I only want to look at the prices from January 01, 2016 to the end of July. I then reshape the data frame `subset` and store it in the variable `final_df` where the columns are the assets and the rows are the prices of the assets on a particular day. 
```python subset = allprices[(allprices.Date > datetime(2016,1,1,0,0,0)) & (allprices.Date < datetime(2016,7,31,23,59,59))] final_df = subset.pivot(index="Date", columns="Company_Index", values="Close") ``` ```python final_df.head() ``` <div style="max-height:1000px;max-width:1500px;overflow:auto;"> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th>Company_Index</th> <th>2GO</th> <th>ABA</th> <th>ABG</th> <th>ABS</th> <th>ABSP</th> <th>AC</th> <th>ACE</th> <th>ACR</th> <th>AEV</th> <th>AGF</th> <th>...</th> <th>V</th> <th>VITA</th> <th>VLL</th> <th>VUL</th> <th>WEB</th> <th>WIN</th> <th>WPI</th> <th>X</th> <th>YEHEY</th> <th>ZHI</th> </tr> <tr> <th>Date</th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> </tr> </thead> <tbody> <tr> <th>2016-01-04</th> <td>7.00</td> <td>0.370</td> <td>10.30</td> <td>62.40</td> <td>64.0</td> <td>756</td> <td>NaN</td> <td>1.38</td> <td>57.20</td> <td>2.80</td> <td>...</td> <td>3.20</td> <td>0.59</td> <td>5.00</td> <td>1.05</td> <td>23.0</td> <td>NaN</td> <td>NaN</td> <td>15.62</td> <td>4.10</td> <td>NaN</td> </tr> <tr> <th>2016-01-05</th> <td>6.91</td> <td>0.370</td> <td>10.28</td> <td>62.30</td> <td>63.0</td> <td>742</td> <td>NaN</td> <td>1.37</td> <td>57.75</td> <td>2.71</td> <td>...</td> <td>1.83</td> <td>0.59</td> <td>5.00</td> <td>1.08</td> <td>22.7</td> <td>0.21</td> <td>0.33</td> <td>15.52</td> <td>4.12</td> <td>0.250</td> </tr> <tr> <th>2016-01-06</th> <td>6.80</td> <td>0.365</td> <td>10.20</td> <td>62.00</td> <td>62.6</td> <td>732</td> <td>1.14</td> <td>1.39</td> <td>57.90</td> <td>2.58</td> <td>...</td> <td>1.80</td> <td>0.62</td> <td>4.97</td> <td>1.06</td> <td>21.5</td> <td>NaN</td> <td>0.34</td> <td>15.20</td> <td>3.88</td> <td>NaN</td> </tr> <tr> <th>2016-01-07</th> <td>6.63</td> <td>0.370</td> <td>10.22</td> <td>61.75</td> <td>62.1</td> <td>709</td> <td>1.15</td> <td>1.36</td> <td>56.00</td> <td>2.65</td> <td>...</td> <td>1.76</td> <td>0.58</td> <td>4.82</td> <td>1.03</td> <td>21.0</td> <td>0.20</td> <td>0.33</td> <td>14.48</td> <td>3.62</td> <td>0.231</td> </tr> <tr> <th>2016-01-08</th> <td>6.55</td> <td>0.360</td> <td>10.18</td> <td>61.10</td> <td>62.0</td> <td>700</td> <td>1.12</td> <td>1.35</td> <td>56.30</td> <td>2.50</td> <td>...</td> <td>1.70</td> <td>0.56</td> <td>4.93</td> <td>1.06</td> <td>21.0</td> <td>0.19</td> <td>0.32</td> <td>14.64</td> <td>3.38</td> <td>0.231</td> </tr> </tbody> </table> <p>5 rows × 213 columns</p> </div> --- ### Step 1: Build Correlation Matrix From the data frame above, we now build the **correlation matrix**. <blockquote>In finance/stock trading, **correlation** is just a measure of the extent at which two equities behave with respect to each other.</blockquote> Below, we build the correlation matrix from `final_df` and store it in the variable `price_corr`. The matrix provides us with the corresponding **correlation coefficients (-1.0 to +1.0)** for all stock pairs in the list of companies. [Straightforward](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.corr.html), isn't it? ```python price_corr = final_df.corr() ``` When two assets are *positively correlated*, it means that the general trends of the two stocks are similar; when one goes up, the other one goes up as well. When, on the other hand, two assets are *negatively correlated*, their trends go in opposite directions. 
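As a tiny synthetic illustration of those two cases (made-up numbers, not PSE data): two series that tend to move together have a correlation coefficient near +1, while two that move in opposite directions have a coefficient near -1.

```python
toy = pd.DataFrame({
    "up_a": [1.0, 2.1, 2.9, 4.2, 5.0],   # trends upward
    "up_b": [2.0, 2.4, 3.1, 3.9, 4.6],   # also trends upward
    "down": [5.1, 4.0, 3.2, 2.1, 1.0],   # trends downward
})

print(toy.corr())
# up_a vs up_b is close to +1 (positively correlated),
# up_a vs down is close to -1 (negatively correlated)
```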
I made a quick sketch below to illustrate these relationships. For a more concrete example, let us have a look at the relationships between (`$ABS` and `$ACR`), (`$FMETF` and `$ALI`) and (`$BLOOM` and `$BDO`); the `$` sign is the hashtag used for stock quotes. I chose these securities based on the correlation values of the pairs in `price_corr`. Can you tell which of the pairs are positively correlated? Negatively correlation? ```python ## ABS and ACR plt.figure(figsize=(8,5)) plt.plot(final_df['ABS'], final_df["ACR"], '.', markersize=10) plt.xlabel("Price of $ABS (PhP)") plt.ylabel("Price of $ACR (PhP)") _ = plt.show() ``` ```python ## FMETF and ALI plt.figure(figsize=(8,5)) plt.plot(final_df['FMETF'], final_df["ALI"], '.', markersize=10) plt.xlabel("Price of $FMETF (PhP)") plt.ylabel("Price of $ALI (PhP)") _ = plt.show() ``` ```python ## BLOOM and BDO plt.figure(figsize=(8,5)) plt.plot(final_df['BLOOM'], final_df["BDO"], '.', markersize=10) plt.xlabel("Price of $BLOOM (PhP)") plt.ylabel("Price of $BDO (PhP)") _ = plt.show() ``` Below, we draw the heatmap of the resulting correlation matrix. ```python ## Source: https://stanford.edu/~mwaskom/software/seaborn/examples/many_pairwise_correlations.html ## Generate a mask for the upper triangle mask = np.zeros_like(price_corr, dtype=np.bool) mask[np.triu_indices_from(mask)] = True ## Set up the matplotlib figure f, ax = plt.subplots(figsize=(15, 15)) ## Generate a custom diverging colormap cmap = sns.diverging_palette(220, 10, as_cmap=True) ## Draw the heatmap with the mask and correct aspect ratio sns.heatmap(price_corr, mask=mask, cmap=cmap, vmax=.3, square=True, xticklabels=2, yticklabels=2, linewidths=.5, cbar_kws={"shrink": .5}, ax=ax) _ = plt.show() ``` --- ### Step 2: Build Distance Matrix As mentioned in the blog post, we will use two distance metrics in building the distance matrices. The first metric is from Bonanno et al. \begin{equation}d_{ij} = \sqrt{2 \times (1 - c_{ij})}\end{equation} where $c_{ij}$ is the correlation cofficient of stocks $i$ and $j$. In the equation, when $c_{ij}=1$, $d_{ij}=0$; and, when $c_{ij}=-1$, $d_{ij}=2$. That is, when there is a perfectly positive correlation (+1), the distance is 0; and, when the correlation is perfectly negative, the distance is the farthest at 2. The next distance measure is from [SliceMatrix](http://SliceMatrix.com) (mktstk) and it is given by \begin{equation}d_{ij} = 1 - |c_{ij}|.\end{equation} This equation does not distinguish between a positively or a negatively correlated pair of stocks; as long as two stocks are highly correlated, the distance is minimized. Here, we define the distance matrices as `dist_bonanno` and `dist_mktstk`. ```python dist_bonanno = np.sqrt(2*(1-(price_corr))) dist_mktstk = 1-abs(price_corr) ## I am just defining the labels labs_bonanno = list(dist_bonanno.index) labs_mktstk = list(dist_mktstk.index) ``` --- ### Step 3: Build Minimum Spanning Tree (MST) Now, we are ready to build the minimum spanning tree. The idea is to connect the ones that have the closest distance to each other, i.e. connect those that are highly correlated. Let's first build the "weighted" networks `G_bonanno` and `G_mktstk` from the distance matrices `dist_bonanno` and `dist_mktstk`, respectively. Using the Python package NetworkX, that's pretty straightforward to do. ```python G_bonanno = nx.from_numpy_matrix(dist_bonanno.as_matrix()) G_mktstk = nx.from_numpy_matrix(dist_mktstk.as_matrix()) ``` Once we have the distance networks, we can already build minimum spanning trees (MST). 
Here, we use Kruskal's algorithm. Below is the pseudo-code copied from the Wikipedia entry on the algorithm. <pre> KRUSKAL(G): 1 A = ∅ 2 foreach v ∈ G.V: 3 MAKE-SET(v) 4 foreach (u, v) in G.E ordered by weight(u, v), increasing: 5 if FIND-SET(u) ≠ FIND-SET(v): 6 A = A ∪ {(u, v)} 7 UNION(u, v) 8 return A </pre> Again, we can use NetworkX to build the MST with the graphs as inputs. ```python MST_b = nx.minimum_spanning_tree(G_bonanno) MST_m = nx.minimum_spanning_tree(G_mktstk) ``` Finally, let's add more attributes to the "nodes" or the stocks. The attributes that I want to include here are: - `label` (the stock symbol) - `sector` (which sector the stock belongs) - `change` (the $\%$ change of the stock for the period under study) This way, when we draw the MSTs, we can choose to color the nodes by either `sector` or `change`. ```python change = (final_df.iloc[-1] - final_df.iloc[0]) * 100 / final_df.iloc[0] ``` ```python for node in MST_b.nodes(): sector = pse_companies[pse_companies["Stock Symbol"] == labs_bonanno[node]].Sector.iloc[0] MST_b.node[node]["sector"] = sector MST_b.node[node]["label"] = labs_bonanno[node] if math.isnan(change[labs_bonanno[node]]): MST_b.node[node]["color"] = "black" elif change[labs_bonanno[node]] < -10: MST_b.node[node]["color"] = "red" elif change[labs_bonanno[node]] > 10: MST_b.node[node]["color"] = "green" else: MST_b.node[node]["color"] = "blue" ``` ```python for node in MST_m.nodes(): sector = pse_companies[pse_companies["Stock Symbol"] == labs_mktstk[node]].Sector.iloc[0] MST_m.node[node]["sector"] = sector MST_m.node[node]["label"] = labs_mktstk[node] if math.isnan(change[labs_mktstk[node]]): #print change[labs_mktstk[node]], labs_mktstk[node] #Gm.node[node]["change"] = 101 MST_m.node[node]["color"] = "black" elif change[labs_mktstk[node]] < -10: MST_m.node[node]["color"] = "red" elif change[labs_mktstk[node]] > 10: MST_m.node[node]["color"] = "green" else: MST_m.node[node]["color"] = "blue" ``` ### Drawing the MSTs ```python plt.figure(figsize=(10,10)) nx.draw_networkx(MST_b) ``` ```python plt.figure(figsize=(10,10)) nx.draw_networkx(MST_m) ``` ### Write out MSTs Below, we write the MSTs as `gexf` files so we can use them in [Gephi](https://gephi.org/) (open-source and free) to generate much prettier networks/trees. <blockquote>Gephi is the leading visualization and exploration software for all kinds of graphs and networks. Gephi is open-source and free.</blockquote> ```python nx.write_gexf(MST_b, "corrmat_bonanno.gexf") nx.write_gexf(MST_m, "corrmat_mktstk.gexf") ``` Below is the resulting network (`MST_b`) drawn using Gephi. ```python ```
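The Gephi rendering itself is not embedded in this notebook, but the tree can also be inspected programmatically. The cell below is a small optional sketch (using the `MST_b` and `labs_bonanno` objects defined above; `dict(G.degree())` works across NetworkX 1.x and 2.x) that lists the stocks with the most connections in the MST, i.e. the hubs that many other stocks attach to.

```python
# Count how many edges each stock has in the minimum spanning tree
degrees = dict(MST_b.degree())

# Sort by degree, highest first, and show the ten biggest hubs with their symbols
top_hubs = sorted(degrees.items(), key=lambda kv: kv[1], reverse=True)[:10]
for node, deg in top_hubs:
    print("%s %d" % (labs_bonanno[node], deg))
```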
# Vertical Line Test

```
import matplotlib.pyplot as plt
import numpy as np
```

## 1.1 Create two graphs, one that passes the vertical line test and one that does not.

```
plt.axhline(y=2)
plt.title("passes the vertical line test")
plt.show()
```

```
plt.axvline(x=2)
plt.title("fails the vertical line test")
plt.show()
```

## 1.2 Why are graphs that don't pass the vertical line test not considered "functions?"

A function cannot have the same input (x value/domain value) mapped to multiple outputs (y value/co-domain value).

# Functions as Relations

## 2.1 Which of the following relations are functions? Why?

\begin{align}
\text{Relation 1: } \{(1, 2), (3, 2), (1, 3)\} \\
\text{Relation 2: } \{(1, 3), (2, 3), (6, 7)\} \\
\text{Relation 3: } \{(9, 4), (2, 1), (9, 6)\} \\
\text{Relation 4: } \{(6, 2), (8, 3), (6, 4)\} \\
\text{Relation 5: } \{(2, 6), (2, 7), (2, 4)\}
\end{align}

Relation 2 is the only one here that is a function because it has no repeated x values. In other words, no single x value is mapped to two different y values.

# Functions as a mapping between dimensions

## 3.1 For the following functions, what is the dimensionality of the domain (input) and codomain (range/output)?

\begin{align}
m(x_1,x_2,x_3)=(x_1+x_2, x_1+x_3, x_2+x_3) \\
n(x_1,x_2,x_3,x_4)=(x_2^2 + x_3, x_2x_4)
\end{align}

function m: 3D -> 3D

function n: 4D -> 2D

## 3.2 Do you think it's possible to create a function that maps from a lower dimensional space to a higher dimensional space? If so, provide an example.

Yes, this is possible. For more information on functions from lower-dimensional spaces that map to higher-dimensional spaces, google the terms injective and surjective in relation to Linear Transformations: <https://youtu.be/xKNX8BUWR0g>

Example: $f(x) = (x, x+1)$

# Vector Transformations

## 4.1 Plug the corresponding unit vectors into each function. Use the output vectors to create a transformation matrix.
\begin{align}
p(\begin{bmatrix}x_1 \\ x_2 \end{bmatrix}) = \begin{bmatrix} x_1 + 3x_2 \\ 2x_2 - x_1 \end{bmatrix} \\
\\
q(\begin{bmatrix}x_1 \\ x_2 \\ x_3\end{bmatrix}) = \begin{bmatrix} 4x_1 + x_2 + 2x_3 \\ 2x_2 - x_1 + 3x_3 \\ 5x_1 - 2x_3 + x_2 \end{bmatrix}
\end{align}

---

\begin{align}
p(\begin{bmatrix} 1 \\ 0 \end{bmatrix}) = \begin{bmatrix} x_1 + 3x_2 \\ -x_1+ 2x_2 \end{bmatrix} = \begin{bmatrix} 1 \\ -1 \end{bmatrix}
\end{align}

\begin{align}
p(\begin{bmatrix} 0 \\ 1 \end{bmatrix}) = \begin{bmatrix} x_1 + 3x_2 \\ -x_1+ 2x_2 \end{bmatrix} = \begin{bmatrix} 3 \\ 2 \end{bmatrix}
\end{align}

\begin{align}
T = \begin{bmatrix} 1 & 3 \\ -1 & 2 \end{bmatrix}
\end{align}

---

\begin{align}
q(\begin{bmatrix}1 \\ 0 \\ 0 \end{bmatrix}) = \begin{bmatrix} 4x_1 + x_2 + 2x_3 \\ -x_1 + 2x_2 + 3x_3 \\ 5x_1 + x_2 - 2x_3 \end{bmatrix} = \begin{bmatrix} 4 \\ -1 \\ 5\end{bmatrix}
\end{align}

\begin{align}
q(\begin{bmatrix}0 \\ 1 \\ 0 \end{bmatrix}) = \begin{bmatrix} 4x_1 + x_2 + 2x_3 \\ -x_1 + 2x_2 + 3x_3 \\ 5x_1 + x_2 - 2x_3 \end{bmatrix}= \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}
\end{align}

\begin{align}
q(\begin{bmatrix}0 \\ 0 \\ 1 \end{bmatrix}) = \begin{bmatrix} 4x_1 + x_2 + 2x_3 \\ -x_1 + 2x_2 + 3x_3 \\ 5x_1 + x_2 - 2x_3 \end{bmatrix}= \begin{bmatrix} 2 \\ 3 \\ -2\end{bmatrix}
\end{align}

\begin{align}
T = \begin{bmatrix} 4 & 1 & 2 \\ -1 & 2 & 3 \\ 5 & 1 & -2 \end{bmatrix}
\end{align}

---

## 4.2 Verify that your transformation matrices are correct by choosing an input matrix and calculating the result both via the traditional functions above and also via vector-matrix multiplication.

```
# Transformation Matrix
T1 = np.array([[1,3],[-1,2]])

# Input Vector
v1 = np.array([2,3])

# Product found by hand-calculating
product_by_hand_1 = [11, 4]

# Product found with NumPy
np.matmul(T1, v1)
```

    array([11, 4])

```
# Transformation Matrix
T2 = np.array([[4,1,2],[-1,2,3],[5,1,-2]])

# Input Vector
v2 = np.array([1,2,3])

# Product found by hand-calculating
product_by_hand_2 = [12, 12, 1]

# Product found with NumPy
np.matmul(T2, v2)
```

    array([12, 12, 1])

# Eigenvalues and Eigenvectors

## 5.1 In your own words, give an explanation for the intuition behind eigenvalues and eigenvectors.

An eigenvector of a linear transformation is a vector that stays on its own span during the transformation: it does not get rotated off the line it points along, it only gets scaled (or flipped). An eigenvector always comes paired with a corresponding eigenvalue, and the eigenvalue describes how much the eigenvector gets scaled (stretched, squished, or flipped if negative) during the transformation. Over the complex numbers every square matrix has at least one eigenvalue-eigenvector pair, although a real transformation such as a 2D rotation has no real eigenvectors. (A short numerical check with NumPy follows section 6.2.)

# The Curse of Dimensionality

## 6.1 What are some of the challenges of working with high dimensional spaces?

- Increased computational inefficiency (searching)
- Increased data redundancy with increasing number of columns.
- More difficult to visualize/explore the data.
- Measures of Euclidean distance break down in high-dimensional spaces
- Increased sparsity of data.
- As the number of parameters in a model grows while the number of available observations remains fixed, overfitting a predictive model becomes more certain.

## 6.2 What is the rule of thumb for how many observations you should have compared to parameters in your model?

You should have 5x as many observations (rows) as you do parameters (columns) in a machine learning model.
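PCA, which comes next, is built on exactly this idea: it eigendecomposes the covariance matrix of the (standardized) data and uses the eigenvectors as the principal component directions. As a quick numerical check of the eigenvector/eigenvalue intuition from section 5.1, the sketch below (the matrix is an arbitrary symmetric example, not one from the assignment) verifies that $Av = \lambda v$ for each eigenpair returned by NumPy:

```python
# Arbitrary symmetric 2x2 matrix used only to illustrate np.linalg.eig
A = np.array([[2., 1.],
              [1., 2.]])

eigenvalues, eigenvectors = np.linalg.eig(A)

for i in range(len(eigenvalues)):
    v = eigenvectors[:, i]          # eigenvectors are the columns
    lam = eigenvalues[i]
    # A times v should equal lambda times v (same direction, just scaled)
    print(lam, np.allclose(np.matmul(A, v), lam * v))
```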
# Principal Component Analysis ## 7.1 Code for loading and cleaning the 2013 national dataset from the [Housing Affordability Data System (HADS)](https://www.huduser.gov/portal/datasets/hads/hads.html) --housing data, can be found below. ## Perform PCA on the processed dataset `national_processed` (Make sure you standardize your data!) and then make a scatterplot of PC1 against PC2. Some of our discussion and work around PCA with this dataset will continue during tomorrow's lecture and assignment. The code below will read in the dataset and perform categorical encoding of the categorical variables to ensure that we're only working with numeric columns from our dataset. Start adding your PCA code at the bottom of the provided code. ``` from urllib.request import urlopen from zipfile import ZipFile from io import BytesIO import os.path import pandas as pd import numpy as np import matplotlib.pyplot as plt # Read Natinal Data national_url = 'https://www.huduser.gov/portal/datasets/hads/hads2013n_ASCII.zip' national_file = 'thads2013n.txt' if os.path.exists(national_file): national = pd.read_csv(national_file) else: z_national = urlopen(national_url) zip_national = ZipFile(BytesIO(z_national.read())).extract(national_file) national = pd.read_csv(zip_national) print(national.shape) national.head() ``` (64535, 99) <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>CONTROL</th> <th>AGE1</th> <th>METRO3</th> <th>REGION</th> <th>LMED</th> <th>FMR</th> <th>L30</th> <th>L50</th> <th>L80</th> <th>IPOV</th> <th>BEDRMS</th> <th>BUILT</th> <th>STATUS</th> <th>TYPE</th> <th>VALUE</th> <th>VACANCY</th> <th>TENURE</th> <th>NUNITS</th> <th>ROOMS</th> <th>WEIGHT</th> <th>PER</th> <th>ZINC2</th> <th>ZADEQ</th> <th>ZSMHC</th> <th>STRUCTURETYPE</th> <th>OWNRENT</th> <th>UTILITY</th> <th>OTHERCOST</th> <th>COST06</th> <th>COST12</th> <th>COST08</th> <th>COSTMED</th> <th>TOTSAL</th> <th>ASSISTED</th> <th>GLMED</th> <th>GL30</th> <th>GL50</th> <th>GL80</th> <th>APLMED</th> <th>ABL30</th> <th>...</th> <th>COST08RELPOVCAT</th> <th>COST08RELFMRPCT</th> <th>COST08RELFMRCAT</th> <th>COST12RELAMIPCT</th> <th>COST12RELAMICAT</th> <th>COST12RELPOVPCT</th> <th>COST12RELPOVCAT</th> <th>COST12RELFMRPCT</th> <th>COST12RELFMRCAT</th> <th>COSTMedRELAMIPCT</th> <th>COSTMedRELAMICAT</th> <th>COSTMedRELPOVPCT</th> <th>COSTMedRELPOVCAT</th> <th>COSTMedRELFMRPCT</th> <th>COSTMedRELFMRCAT</th> <th>FMTZADEQ</th> <th>FMTMETRO3</th> <th>FMTBUILT</th> <th>FMTSTRUCTURETYPE</th> <th>FMTBEDRMS</th> <th>FMTOWNRENT</th> <th>FMTCOST06RELPOVCAT</th> <th>FMTCOST08RELPOVCAT</th> <th>FMTCOST12RELPOVCAT</th> <th>FMTCOSTMEDRELPOVCAT</th> <th>FMTINCRELPOVCAT</th> <th>FMTCOST06RELFMRCAT</th> <th>FMTCOST08RELFMRCAT</th> <th>FMTCOST12RELFMRCAT</th> <th>FMTCOSTMEDRELFMRCAT</th> <th>FMTINCRELFMRCAT</th> <th>FMTCOST06RELAMICAT</th> <th>FMTCOST08RELAMICAT</th> <th>FMTCOST12RELAMICAT</th> <th>FMTCOSTMEDRELAMICAT</th> <th>FMTINCRELAMICAT</th> <th>FMTASSISTED</th> <th>FMTBURDEN</th> <th>FMTREGION</th> <th>FMTSTATUS</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>'100003130103'</td> <td>82</td> <td>'3'</td> <td>'1'</td> <td>73738</td> <td>956</td> <td>15738</td> <td>26213</td> <td>40322</td> <td>11067</td> <td>2</td> <td>2006</td> <td>'1'</td> <td>1</td> <td>40000</td> <td>-6</td> <td>'1'</td> <td>1</td> <td>6</td> <td>3117.394239</td> 
<td>1</td> <td>18021</td> <td>'1'</td> <td>533</td> <td>1</td> <td>'1'</td> <td>169.000000</td> <td>213.750000</td> <td>648.588189</td> <td>803.050535</td> <td>696.905247</td> <td>615.156712</td> <td>0</td> <td>-9</td> <td>73738</td> <td>15738</td> <td>26213</td> <td>40322</td> <td>51616.6</td> <td>20234.571429</td> <td>...</td> <td>4</td> <td>72.898038</td> <td>2</td> <td>48.402635</td> <td>2</td> <td>290.250487</td> <td>4</td> <td>84.001102</td> <td>2</td> <td>37.077624</td> <td>2</td> <td>222.339102</td> <td>4</td> <td>64.346936</td> <td>2</td> <td>'1 Adequate'</td> <td>'-5'</td> <td>'2000-2009'</td> <td>'1 Single Family'</td> <td>'2 2BR'</td> <td>'1 Owner'</td> <td>'4 200%+ Poverty'</td> <td>'4 200%+ Poverty'</td> <td>'4 200%+ Poverty'</td> <td>'4 200%+ Poverty'</td> <td>'3 150-200% Poverty'</td> <td>'2 50.1 - 100% FMR'</td> <td>'2 50.1 - 100% FMR'</td> <td>'2 50.1 - 100% FMR'</td> <td>'2 50.1 - 100% FMR'</td> <td>'1 LTE 50% FMR'</td> <td>'2 30 - 50% AMI'</td> <td>'2 30 - 50% AMI'</td> <td>'2 30 - 50% AMI'</td> <td>'2 30 - 50% AMI'</td> <td>'2 30 - 50% AMI'</td> <td>'.'</td> <td>'2 30% to 50%'</td> <td>'-5'</td> <td>'-5'</td> </tr> <tr> <th>1</th> <td>'100006110249'</td> <td>50</td> <td>'5'</td> <td>'3'</td> <td>55846</td> <td>1100</td> <td>17165</td> <td>28604</td> <td>45744</td> <td>24218</td> <td>4</td> <td>1980</td> <td>'1'</td> <td>1</td> <td>130000</td> <td>-6</td> <td>'1'</td> <td>1</td> <td>6</td> <td>2150.725544</td> <td>4</td> <td>122961</td> <td>'1'</td> <td>487</td> <td>1</td> <td>'1'</td> <td>245.333333</td> <td>58.333333</td> <td>1167.640781</td> <td>1669.643405</td> <td>1324.671218</td> <td>1058.988479</td> <td>123000</td> <td>-9</td> <td>55846</td> <td>17165</td> <td>28604</td> <td>45744</td> <td>55846.0</td> <td>19911.400000</td> <td>...</td> <td>4</td> <td>120.424656</td> <td>3</td> <td>103.094063</td> <td>6</td> <td>275.768999</td> <td>4</td> <td>151.785764</td> <td>3</td> <td>65.388468</td> <td>4</td> <td>174.909320</td> <td>3</td> <td>96.271680</td> <td>2</td> <td>'1 Adequate'</td> <td>'-5'</td> <td>'1980-1989'</td> <td>'1 Single Family'</td> <td>'4 4BR+'</td> <td>'1 Owner'</td> <td>'3 150-200% Poverty'</td> <td>'4 200%+ Poverty'</td> <td>'4 200%+ Poverty'</td> <td>'3 150-200% Poverty'</td> <td>'4 200%+ Poverty'</td> <td>'3 GT FMR'</td> <td>'3 GT FMR'</td> <td>'3 GT FMR'</td> <td>'2 50.1 - 100% FMR'</td> <td>'3 GT FMR'</td> <td>'4 60 - 80% AMI'</td> <td>'4 60 - 80% AMI'</td> <td>'6 100 - 120% AMI'</td> <td>'4 60 - 80% AMI'</td> <td>'7 120% AMI +'</td> <td>'.'</td> <td>'1 Less than 30%'</td> <td>'-5'</td> <td>'-5'</td> </tr> <tr> <th>2</th> <td>'100006370140'</td> <td>53</td> <td>'5'</td> <td>'3'</td> <td>55846</td> <td>1100</td> <td>13750</td> <td>22897</td> <td>36614</td> <td>15470</td> <td>4</td> <td>1985</td> <td>'1'</td> <td>1</td> <td>150000</td> <td>-6</td> <td>'1'</td> <td>1</td> <td>7</td> <td>2213.789404</td> <td>2</td> <td>27974</td> <td>'1'</td> <td>1405</td> <td>1</td> <td>'1'</td> <td>159.000000</td> <td>37.500000</td> <td>1193.393209</td> <td>1772.627006</td> <td>1374.582175</td> <td>1068.025168</td> <td>28000</td> <td>-9</td> <td>55846</td> <td>13750</td> <td>22897</td> <td>36614</td> <td>44676.8</td> <td>19937.500000</td> <td>...</td> <td>4</td> <td>124.962016</td> <td>3</td> <td>109.452905</td> <td>6</td> <td>458.339239</td> <td>4</td> <td>161.147910</td> <td>3</td> <td>65.946449</td> <td>4</td> <td>276.153890</td> <td>4</td> <td>97.093197</td> <td>2</td> <td>'1 Adequate'</td> <td>'-5'</td> <td>'1980-1989'</td> <td>'1 Single Family'</td> <td>'4 
4BR+'</td> <td>'1 Owner'</td> <td>'4 200%+ Poverty'</td> <td>'4 200%+ Poverty'</td> <td>'4 200%+ Poverty'</td> <td>'4 200%+ Poverty'</td> <td>'3 150-200% Poverty'</td> <td>'3 GT FMR'</td> <td>'3 GT FMR'</td> <td>'3 GT FMR'</td> <td>'2 50.1 - 100% FMR'</td> <td>'2 50.1 - 100% FMR'</td> <td>'4 60 - 80% AMI'</td> <td>'5 80 - 100% AMI'</td> <td>'6 100 - 120% AMI'</td> <td>'4 60 - 80% AMI'</td> <td>'4 60 - 80% AMI'</td> <td>'.'</td> <td>'3 50% or More'</td> <td>'-5'</td> <td>'-5'</td> </tr> <tr> <th>3</th> <td>'100006520140'</td> <td>67</td> <td>'5'</td> <td>'3'</td> <td>55846</td> <td>949</td> <td>13750</td> <td>22897</td> <td>36614</td> <td>13964</td> <td>3</td> <td>1985</td> <td>'1'</td> <td>1</td> <td>200000</td> <td>-6</td> <td>'1'</td> <td>1</td> <td>6</td> <td>2364.585097</td> <td>2</td> <td>32220</td> <td>'1'</td> <td>279</td> <td>1</td> <td>'1'</td> <td>179.000000</td> <td>70.666667</td> <td>1578.857612</td> <td>2351.169341</td> <td>1820.442900</td> <td>1411.700224</td> <td>0</td> <td>-9</td> <td>55846</td> <td>13750</td> <td>22897</td> <td>36614</td> <td>44676.8</td> <td>17875.000000</td> <td>...</td> <td>4</td> <td>191.827492</td> <td>3</td> <td>161.926709</td> <td>7</td> <td>673.494512</td> <td>4</td> <td>247.752301</td> <td>3</td> <td>97.224801</td> <td>5</td> <td>404.382763</td> <td>4</td> <td>148.756610</td> <td>3</td> <td>'1 Adequate'</td> <td>'-5'</td> <td>'1980-1989'</td> <td>'1 Single Family'</td> <td>'3 3BR'</td> <td>'1 Owner'</td> <td>'4 200%+ Poverty'</td> <td>'4 200%+ Poverty'</td> <td>'4 200%+ Poverty'</td> <td>'4 200%+ Poverty'</td> <td>'4 200%+ Poverty'</td> <td>'3 GT FMR'</td> <td>'3 GT FMR'</td> <td>'3 GT FMR'</td> <td>'3 GT FMR'</td> <td>'2 50.1 - 100% FMR'</td> <td>'6 100 - 120% AMI'</td> <td>'7 120% AMI +'</td> <td>'7 120% AMI +'</td> <td>'5 80 - 100% AMI'</td> <td>'4 60 - 80% AMI'</td> <td>'.'</td> <td>'1 Less than 30%'</td> <td>'-5'</td> <td>'-5'</td> </tr> <tr> <th>4</th> <td>'100007130148'</td> <td>26</td> <td>'1'</td> <td>'3'</td> <td>60991</td> <td>737</td> <td>14801</td> <td>24628</td> <td>39421</td> <td>15492</td> <td>2</td> <td>1980</td> <td>'1'</td> <td>1</td> <td>-6</td> <td>-6</td> <td>'2'</td> <td>100</td> <td>4</td> <td>2314.524902</td> <td>2</td> <td>96874</td> <td>'1'</td> <td>759</td> <td>5</td> <td>'2'</td> <td>146.000000</td> <td>12.500000</td> <td>759.000000</td> <td>759.000000</td> <td>759.000000</td> <td>759.000000</td> <td>96900</td> <td>0</td> <td>60991</td> <td>14801</td> <td>24628</td> <td>39421</td> <td>48792.8</td> <td>16651.125000</td> <td>...</td> <td>3</td> <td>102.985075</td> <td>3</td> <td>55.308707</td> <td>3</td> <td>195.972115</td> <td>3</td> <td>102.985075</td> <td>3</td> <td>55.308707</td> <td>3</td> <td>195.972115</td> <td>3</td> <td>102.985075</td> <td>3</td> <td>'1 Adequate'</td> <td>'Central City'</td> <td>'1980-1989'</td> <td>'5 50+ units'</td> <td>'2 2BR'</td> <td>'2 Renter'</td> <td>'3 150-200% Poverty'</td> <td>'3 150-200% Poverty'</td> <td>'3 150-200% Poverty'</td> <td>'3 150-200% Poverty'</td> <td>'4 200%+ Poverty'</td> <td>'3 GT FMR'</td> <td>'3 GT FMR'</td> <td>'3 GT FMR'</td> <td>'3 GT FMR'</td> <td>'3 GT FMR'</td> <td>'3 50 - 60% AMI'</td> <td>'3 50 - 60% AMI'</td> <td>'3 50 - 60% AMI'</td> <td>'3 50 - 60% AMI'</td> <td>'7 120% AMI +'</td> <td>'0 Not Assisted'</td> <td>'1 Less than 30%'</td> <td>'-5'</td> <td>'-5'</td> </tr> </tbody> </table> <p>5 rows × 99 columns</p> </div> ``` # Look at datatypes # a lot of object datatypes even though they seem to be strings of numbers. 
national.dtypes ``` CONTROL object AGE1 int64 METRO3 object REGION object LMED int64 FMR int64 L30 int64 L50 int64 L80 int64 IPOV int64 BEDRMS int64 BUILT int64 STATUS object TYPE int64 VALUE int64 VACANCY int64 TENURE object NUNITS int64 ROOMS int64 WEIGHT float64 PER int64 ZINC2 int64 ZADEQ object ZSMHC int64 STRUCTURETYPE int64 OWNRENT object UTILITY float64 OTHERCOST float64 COST06 float64 COST12 float64 ... COSTMedRELAMICAT int64 COSTMedRELPOVPCT float64 COSTMedRELPOVCAT int64 COSTMedRELFMRPCT float64 COSTMedRELFMRCAT int64 FMTZADEQ object FMTMETRO3 object FMTBUILT object FMTSTRUCTURETYPE object FMTBEDRMS object FMTOWNRENT object FMTCOST06RELPOVCAT object FMTCOST08RELPOVCAT object FMTCOST12RELPOVCAT object FMTCOSTMEDRELPOVCAT object FMTINCRELPOVCAT object FMTCOST06RELFMRCAT object FMTCOST08RELFMRCAT object FMTCOST12RELFMRCAT object FMTCOSTMEDRELFMRCAT object FMTINCRELFMRCAT object FMTCOST06RELAMICAT object FMTCOST08RELAMICAT object FMTCOST12RELAMICAT object FMTCOSTMEDRELAMICAT object FMTINCRELAMICAT object FMTASSISTED object FMTBURDEN object FMTREGION object FMTSTATUS object Length: 99, dtype: object ``` # check for null values national.isnull().sum().any() ``` False ``` # check for number of categorical vs numeric columns cat_cols = national.columns[national.dtypes=='object'] num_cols = national.columns[national.dtypes!='object'] print(f'{len(cat_cols)} categorical columns') print(f'{len(num_cols)} numerical columns') ``` 32 categorical columns 67 numerical columns ``` # We're making a copy of our data in case we mess something up. national_processed = national.copy() # Categorically Encode our Variables: # They need to all be numeric before we do PCA. # https://pbpython.com/categorical-encoding.html # Cast categorical columns to "category" data type national_processed[cat_cols] = national_processed[cat_cols].astype('category') national_processed.dtypes ``` CONTROL category AGE1 int64 METRO3 category REGION category LMED int64 FMR int64 L30 int64 L50 int64 L80 int64 IPOV int64 BEDRMS int64 BUILT int64 STATUS category TYPE int64 VALUE int64 VACANCY int64 TENURE category NUNITS int64 ROOMS int64 WEIGHT float64 PER int64 ZINC2 int64 ZADEQ category ZSMHC int64 STRUCTURETYPE int64 OWNRENT category UTILITY float64 OTHERCOST float64 COST06 float64 COST12 float64 ... 
COSTMedRELAMICAT int64 COSTMedRELPOVPCT float64 COSTMedRELPOVCAT int64 COSTMedRELFMRPCT float64 COSTMedRELFMRCAT int64 FMTZADEQ category FMTMETRO3 category FMTBUILT category FMTSTRUCTURETYPE category FMTBEDRMS category FMTOWNRENT category FMTCOST06RELPOVCAT category FMTCOST08RELPOVCAT category FMTCOST12RELPOVCAT category FMTCOSTMEDRELPOVCAT category FMTINCRELPOVCAT category FMTCOST06RELFMRCAT category FMTCOST08RELFMRCAT category FMTCOST12RELFMRCAT category FMTCOSTMEDRELFMRCAT category FMTINCRELFMRCAT category FMTCOST06RELAMICAT category FMTCOST08RELAMICAT category FMTCOST12RELAMICAT category FMTCOSTMEDRELAMICAT category FMTINCRELAMICAT category FMTASSISTED category FMTBURDEN category FMTREGION category FMTSTATUS category Length: 99, dtype: object ``` # Replace all category cell values with their numeric category codes for col in cat_cols: national_processed[col] = national_processed[col].cat.codes print(national_processed.shape) national_processed.head() ``` (64535, 99) <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>CONTROL</th> <th>AGE1</th> <th>METRO3</th> <th>REGION</th> <th>LMED</th> <th>FMR</th> <th>L30</th> <th>L50</th> <th>L80</th> <th>IPOV</th> <th>BEDRMS</th> <th>BUILT</th> <th>STATUS</th> <th>TYPE</th> <th>VALUE</th> <th>VACANCY</th> <th>TENURE</th> <th>NUNITS</th> <th>ROOMS</th> <th>WEIGHT</th> <th>PER</th> <th>ZINC2</th> <th>ZADEQ</th> <th>ZSMHC</th> <th>STRUCTURETYPE</th> <th>OWNRENT</th> <th>UTILITY</th> <th>OTHERCOST</th> <th>COST06</th> <th>COST12</th> <th>COST08</th> <th>COSTMED</th> <th>TOTSAL</th> <th>ASSISTED</th> <th>GLMED</th> <th>GL30</th> <th>GL50</th> <th>GL80</th> <th>APLMED</th> <th>ABL30</th> <th>...</th> <th>COST08RELPOVCAT</th> <th>COST08RELFMRPCT</th> <th>COST08RELFMRCAT</th> <th>COST12RELAMIPCT</th> <th>COST12RELAMICAT</th> <th>COST12RELPOVPCT</th> <th>COST12RELPOVCAT</th> <th>COST12RELFMRPCT</th> <th>COST12RELFMRCAT</th> <th>COSTMedRELAMIPCT</th> <th>COSTMedRELAMICAT</th> <th>COSTMedRELPOVPCT</th> <th>COSTMedRELPOVCAT</th> <th>COSTMedRELFMRPCT</th> <th>COSTMedRELFMRCAT</th> <th>FMTZADEQ</th> <th>FMTMETRO3</th> <th>FMTBUILT</th> <th>FMTSTRUCTURETYPE</th> <th>FMTBEDRMS</th> <th>FMTOWNRENT</th> <th>FMTCOST06RELPOVCAT</th> <th>FMTCOST08RELPOVCAT</th> <th>FMTCOST12RELPOVCAT</th> <th>FMTCOSTMEDRELPOVCAT</th> <th>FMTINCRELPOVCAT</th> <th>FMTCOST06RELFMRCAT</th> <th>FMTCOST08RELFMRCAT</th> <th>FMTCOST12RELFMRCAT</th> <th>FMTCOSTMEDRELFMRCAT</th> <th>FMTINCRELFMRCAT</th> <th>FMTCOST06RELAMICAT</th> <th>FMTCOST08RELAMICAT</th> <th>FMTCOST12RELAMICAT</th> <th>FMTCOSTMEDRELAMICAT</th> <th>FMTINCRELAMICAT</th> <th>FMTASSISTED</th> <th>FMTBURDEN</th> <th>FMTREGION</th> <th>FMTSTATUS</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>0</td> <td>82</td> <td>2</td> <td>0</td> <td>73738</td> <td>956</td> <td>15738</td> <td>26213</td> <td>40322</td> <td>11067</td> <td>2</td> <td>2006</td> <td>0</td> <td>1</td> <td>40000</td> <td>-6</td> <td>1</td> <td>1</td> <td>6</td> <td>3117.394239</td> <td>1</td> <td>18021</td> <td>1</td> <td>533</td> <td>1</td> <td>0</td> <td>169.000000</td> <td>213.750000</td> <td>648.588189</td> <td>803.050535</td> <td>696.905247</td> <td>615.156712</td> <td>0</td> <td>-9</td> <td>73738</td> <td>15738</td> <td>26213</td> <td>40322</td> <td>51616.6</td> <td>20234.571429</td> <td>...</td> <td>4</td> <td>72.898038</td> <td>2</td> 
<td>48.402635</td> <td>2</td> <td>290.250487</td> <td>4</td> <td>84.001102</td> <td>2</td> <td>37.077624</td> <td>2</td> <td>222.339102</td> <td>4</td> <td>64.346936</td> <td>2</td> <td>1</td> <td>0</td> <td>5</td> <td>1</td> <td>2</td> <td>0</td> <td>4</td> <td>4</td> <td>4</td> <td>4</td> <td>3</td> <td>1</td> <td>1</td> <td>1</td> <td>1</td> <td>1</td> <td>1</td> <td>1</td> <td>1</td> <td>1</td> <td>2</td> <td>0</td> <td>2</td> <td>0</td> <td>0</td> </tr> <tr> <th>1</th> <td>1</td> <td>50</td> <td>4</td> <td>2</td> <td>55846</td> <td>1100</td> <td>17165</td> <td>28604</td> <td>45744</td> <td>24218</td> <td>4</td> <td>1980</td> <td>0</td> <td>1</td> <td>130000</td> <td>-6</td> <td>1</td> <td>1</td> <td>6</td> <td>2150.725544</td> <td>4</td> <td>122961</td> <td>1</td> <td>487</td> <td>1</td> <td>0</td> <td>245.333333</td> <td>58.333333</td> <td>1167.640781</td> <td>1669.643405</td> <td>1324.671218</td> <td>1058.988479</td> <td>123000</td> <td>-9</td> <td>55846</td> <td>17165</td> <td>28604</td> <td>45744</td> <td>55846.0</td> <td>19911.400000</td> <td>...</td> <td>4</td> <td>120.424656</td> <td>3</td> <td>103.094063</td> <td>6</td> <td>275.768999</td> <td>4</td> <td>151.785764</td> <td>3</td> <td>65.388468</td> <td>4</td> <td>174.909320</td> <td>3</td> <td>96.271680</td> <td>2</td> <td>1</td> <td>0</td> <td>3</td> <td>1</td> <td>4</td> <td>0</td> <td>3</td> <td>4</td> <td>4</td> <td>3</td> <td>4</td> <td>2</td> <td>2</td> <td>2</td> <td>1</td> <td>3</td> <td>3</td> <td>3</td> <td>5</td> <td>3</td> <td>7</td> <td>0</td> <td>1</td> <td>0</td> <td>0</td> </tr> <tr> <th>2</th> <td>2</td> <td>53</td> <td>4</td> <td>2</td> <td>55846</td> <td>1100</td> <td>13750</td> <td>22897</td> <td>36614</td> <td>15470</td> <td>4</td> <td>1985</td> <td>0</td> <td>1</td> <td>150000</td> <td>-6</td> <td>1</td> <td>1</td> <td>7</td> <td>2213.789404</td> <td>2</td> <td>27974</td> <td>1</td> <td>1405</td> <td>1</td> <td>0</td> <td>159.000000</td> <td>37.500000</td> <td>1193.393209</td> <td>1772.627006</td> <td>1374.582175</td> <td>1068.025168</td> <td>28000</td> <td>-9</td> <td>55846</td> <td>13750</td> <td>22897</td> <td>36614</td> <td>44676.8</td> <td>19937.500000</td> <td>...</td> <td>4</td> <td>124.962016</td> <td>3</td> <td>109.452905</td> <td>6</td> <td>458.339239</td> <td>4</td> <td>161.147910</td> <td>3</td> <td>65.946449</td> <td>4</td> <td>276.153890</td> <td>4</td> <td>97.093197</td> <td>2</td> <td>1</td> <td>0</td> <td>3</td> <td>1</td> <td>4</td> <td>0</td> <td>4</td> <td>4</td> <td>4</td> <td>4</td> <td>3</td> <td>2</td> <td>2</td> <td>2</td> <td>1</td> <td>2</td> <td>3</td> <td>4</td> <td>5</td> <td>3</td> <td>4</td> <td>0</td> <td>3</td> <td>0</td> <td>0</td> </tr> <tr> <th>3</th> <td>3</td> <td>67</td> <td>4</td> <td>2</td> <td>55846</td> <td>949</td> <td>13750</td> <td>22897</td> <td>36614</td> <td>13964</td> <td>3</td> <td>1985</td> <td>0</td> <td>1</td> <td>200000</td> <td>-6</td> <td>1</td> <td>1</td> <td>6</td> <td>2364.585097</td> <td>2</td> <td>32220</td> <td>1</td> <td>279</td> <td>1</td> <td>0</td> <td>179.000000</td> <td>70.666667</td> <td>1578.857612</td> <td>2351.169341</td> <td>1820.442900</td> <td>1411.700224</td> <td>0</td> <td>-9</td> <td>55846</td> <td>13750</td> <td>22897</td> <td>36614</td> <td>44676.8</td> <td>17875.000000</td> <td>...</td> <td>4</td> <td>191.827492</td> <td>3</td> <td>161.926709</td> <td>7</td> <td>673.494512</td> <td>4</td> <td>247.752301</td> <td>3</td> <td>97.224801</td> <td>5</td> <td>404.382763</td> <td>4</td> <td>148.756610</td> <td>3</td> <td>1</td> 
<td>0</td> <td>3</td> <td>1</td> <td>3</td> <td>0</td> <td>4</td> <td>4</td> <td>4</td> <td>4</td> <td>4</td> <td>2</td> <td>2</td> <td>2</td> <td>2</td> <td>2</td> <td>5</td> <td>6</td> <td>6</td> <td>4</td> <td>4</td> <td>0</td> <td>1</td> <td>0</td> <td>0</td> </tr> <tr> <th>4</th> <td>4</td> <td>26</td> <td>0</td> <td>2</td> <td>60991</td> <td>737</td> <td>14801</td> <td>24628</td> <td>39421</td> <td>15492</td> <td>2</td> <td>1980</td> <td>0</td> <td>1</td> <td>-6</td> <td>-6</td> <td>2</td> <td>100</td> <td>4</td> <td>2314.524902</td> <td>2</td> <td>96874</td> <td>1</td> <td>759</td> <td>5</td> <td>1</td> <td>146.000000</td> <td>12.500000</td> <td>759.000000</td> <td>759.000000</td> <td>759.000000</td> <td>759.000000</td> <td>96900</td> <td>0</td> <td>60991</td> <td>14801</td> <td>24628</td> <td>39421</td> <td>48792.8</td> <td>16651.125000</td> <td>...</td> <td>3</td> <td>102.985075</td> <td>3</td> <td>55.308707</td> <td>3</td> <td>195.972115</td> <td>3</td> <td>102.985075</td> <td>3</td> <td>55.308707</td> <td>3</td> <td>195.972115</td> <td>3</td> <td>102.985075</td> <td>3</td> <td>1</td> <td>1</td> <td>3</td> <td>5</td> <td>2</td> <td>1</td> <td>3</td> <td>3</td> <td>3</td> <td>3</td> <td>4</td> <td>2</td> <td>2</td> <td>2</td> <td>2</td> <td>3</td> <td>2</td> <td>2</td> <td>2</td> <td>2</td> <td>7</td> <td>1</td> <td>1</td> <td>0</td> <td>0</td> </tr> </tbody> </table> <p>5 rows × 99 columns</p> </div> ``` # Now we only ahve numeric columns (ints and floats) national_processed.dtypes ``` CONTROL int32 AGE1 int64 METRO3 int8 REGION int8 LMED int64 FMR int64 L30 int64 L50 int64 L80 int64 IPOV int64 BEDRMS int64 BUILT int64 STATUS int8 TYPE int64 VALUE int64 VACANCY int64 TENURE int8 NUNITS int64 ROOMS int64 WEIGHT float64 PER int64 ZINC2 int64 ZADEQ int8 ZSMHC int64 STRUCTURETYPE int64 OWNRENT int8 UTILITY float64 OTHERCOST float64 COST06 float64 COST12 float64 ... COSTMedRELAMICAT int64 COSTMedRELPOVPCT float64 COSTMedRELPOVCAT int64 COSTMedRELFMRPCT float64 COSTMedRELFMRCAT int64 FMTZADEQ int8 FMTMETRO3 int8 FMTBUILT int8 FMTSTRUCTURETYPE int8 FMTBEDRMS int8 FMTOWNRENT int8 FMTCOST06RELPOVCAT int8 FMTCOST08RELPOVCAT int8 FMTCOST12RELPOVCAT int8 FMTCOSTMEDRELPOVCAT int8 FMTINCRELPOVCAT int8 FMTCOST06RELFMRCAT int8 FMTCOST08RELFMRCAT int8 FMTCOST12RELFMRCAT int8 FMTCOSTMEDRELFMRCAT int8 FMTINCRELFMRCAT int8 FMTCOST06RELAMICAT int8 FMTCOST08RELAMICAT int8 FMTCOST12RELAMICAT int8 FMTCOSTMEDRELAMICAT int8 FMTINCRELAMICAT int8 FMTASSISTED int8 FMTBURDEN int8 FMTREGION int8 FMTSTATUS int8 Length: 99, dtype: object --- ``` from sklearn.preprocessing import StandardScaler from sklearn.decomposition import PCA ``` ``` # There isn't a super clear Y varaible in this dataset, so we'll just # pretend that our whole dataset is our X matrix for now. 
# Make a copy to save our work at this checkpoint
df = national_processed.copy()

# Turn our dataframe into a numpy array
X = df.values

# instantiate our standard scaler object
scaler = StandardScaler()

# Standardize our data
Z = scaler.fit_transform(X)
```

```
# Instantiate our PCA object
pca = PCA(2)

transformed_data = pca.fit_transform(Z)
```

```
transformed_data
```

    array([[-2.57657018, -1.37612728],
           [ 2.04064284, -0.98806146],
           [ 1.21527025,  0.37016814],
           ...,
           [ 1.83162816, -2.95570216],
           [-5.02514474, -2.52857809],
           [-9.89816404, -0.80284263]])

```
plt.scatter(transformed_data[:,0], transformed_data[:,1])
plt.xlabel("PC1")
plt.ylabel("PC2")
plt.title("Principal Component 1 vs Principal Component 2")
plt.show()
```

# Stretch Goals

## 1) Perform further data exploration on the HADS national dataset (the version before we categorically encoded it)

Make scatterplots and see if you can see any resemblance between the original scatterplots and the plot of the principal components that you made in 7.1. (You may or may not see very much resemblance depending on the variables you choose, and that's ok!)

## 2) Study "Scree Plots" and then try and make one for your PCA dataset.

How many principal components do you need to retain in order for your PCs to contain 90% of the explained variance? We will present this topic formally at the beginning of tomorrow's lecture, so if you figure this stretch goal out, you're ahead of the game. (A possible starting sketch is included at the end of this notebook.)

## 3) Explore further the intuition behind eigenvalues and eigenvectors by creating your very own eigenfaces:

Prioritize self-study over this stretch goal if you are not semi-comfortable with the topics of PCA, Eigenvalues, and Eigenvectors. You don't necessarily have to use this resource, but this will get you started: [Eigenface Tutorial](https://sandipanweb.wordpress.com/2018/01/06/eigenfaces-and-a-simple-face-detector-with-pca-svd-in-python/)
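For stretch goal 2, one possible starting point (a sketch, not a definitive answer): refit PCA with all components on the standardized matrix `Z` from above and plot the cumulative explained variance (the classic scree plot shows per-component variance instead, but the 90% question is easiest to read off the cumulative curve).

```
# Sketch for stretch goal 2: cumulative explained variance across all components.
pca_full = PCA()
pca_full.fit(Z)

cumulative = np.cumsum(pca_full.explained_variance_ratio_)

plt.plot(range(1, len(cumulative) + 1), cumulative, marker='.')
plt.axhline(y=0.90, color='r', linestyle='--', label="90% of variance")
plt.xlabel("Number of principal components retained")
plt.ylabel("Cumulative explained variance")
plt.title("Scree plot (cumulative form)")
plt.legend()
plt.show()

# Smallest number of components whose cumulative explained variance reaches 90%
print(np.argmax(cumulative >= 0.90) + 1)
```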
6fc2da77ce806062bfb92010b10ada73a2eec82d
118,156
ipynb
Jupyter Notebook
curriculum/unit-1-statistics-fundamentals/sprint-3-linear-algebra/module4-clustering/module-3.ipynb
BrianThomasRoss/lambda-school
6140db5cb5a43d0a367e9a08dc216e8bec9fb323
[ "MIT" ]
null
null
null
curriculum/unit-1-statistics-fundamentals/sprint-3-linear-algebra/module4-clustering/module-3.ipynb
BrianThomasRoss/lambda-school
6140db5cb5a43d0a367e9a08dc216e8bec9fb323
[ "MIT" ]
null
null
null
curriculum/unit-1-statistics-fundamentals/sprint-3-linear-algebra/module4-clustering/module-3.ipynb
BrianThomasRoss/lambda-school
6140db5cb5a43d0a367e9a08dc216e8bec9fb323
[ "MIT" ]
null
null
null
54.24977
19,750
0.531932
true
14,599
Qwen/Qwen-72B
1. YES 2. YES
0.803174
0.90599
0.727667
__label__kor_Hang
0.334323
0.528947
```python %matplotlib inline import matplotlib.pyplot as p ``` ```python from sympy import * import scipy as sc init_printing() ``` ```python x=var('x') ``` ```python a, b, c = var("a, b, c") ``` ```python x = var('x', real=True) ``` ## bio cal ```python r_m, N, t = var("r_m N t", real=True) ``` ```python N=Function("N") ``` ```python dN_dt = Derivative(N(t), t)-r_m*N(t) dN_dt ## cal N at any given time ``` ```python dsolve(dN_dt) ```
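For reference, `dsolve` here should return the exponential-growth solution $N(t) = C_1 e^{r_m t}$. A small sketch (not in the original notebook) of fixing the integration constant with a hypothetical initial population $N_0$, assuming a SymPy version that supports the `ics` argument of `dsolve`:

```python
# Sketch: pin down C1 with an initial condition N(0) = N_0 (N_0 is a hypothetical symbol).
N_0 = var('N_0', positive=True)
dsolve(dN_dt, N(t), ics={N(0): N_0})
# Expected: Eq(N(t), N_0*exp(r_m*t))
```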
ffb373f420139b9ee2bf90ab9c25a5fc998c7ce3
8,859
ipynb
Jupyter Notebook
Week7/Code/simply_trial.ipynb
ph-u/CMEECourseWork_pmH
8d52d4dcc3a643da7d55874e350c18f3bf377138
[ "Apache-2.0" ]
null
null
null
Week7/Code/simply_trial.ipynb
ph-u/CMEECourseWork_pmH
8d52d4dcc3a643da7d55874e350c18f3bf377138
[ "Apache-2.0" ]
null
null
null
Week7/Code/simply_trial.ipynb
ph-u/CMEECourseWork_pmH
8d52d4dcc3a643da7d55874e350c18f3bf377138
[ "Apache-2.0" ]
null
null
null
36.307377
2,072
0.707755
true
159
Qwen/Qwen-72B
1. YES 2. YES
0.899121
0.76908
0.691496
__label__eng_Latn
0.332017
0.444909
# Investigating the Re-use of subsamples from previous iterations Context: a multi-fidelity optimization procedure where an Error Grid is created after every evaluation to determine the next best fidelity for evaluation. Each Error Grid is made up of 'pixels' $(n_h, n_l)$ where $n_h < N_H$ and $n_l < N_L$, where $N_H, N_L$ are the total number of high- and low-fidelity evaluations available and $n_h, n_l$ are the size of the subsample from that total. The idea is as follows: In each iteration, a single new evaluation is added, i.e., $N_H += 1$ or $N_L += 1$. For each previously calculated Error Grid 'pixel' $(n_h, n_l)$, all subsamples used in that previous iteration are still valid subsamples: they simply happen to exclude the newly added point. Reusing these is therefore no problem in principle, but we want to avoid a bias from ignoring the new point (otherwise, what's the point in evaluating that in the first place if we don't bother learning from it). So the question is: if we were to uniformly at random draw a new set of subsamples based on the new $N_H$ or $N_L$, what percentage do we expect to **not** include the newest point? Assuming the previous set of subsamples is also drawn uniformly at random, a random subselection from those should suffice to be reused as that percentage for the new iteration. The remainder can then be drawn anew, with the explicit requirement that they should contain the new sample. This procedure allows us to reuse some of the work from the previous iteration for all previously calculated pixels $(n_h, n_l)$, without introducing a bias towards including/excluding the newest sample. (It must however be noted that this procedure is more 'exact' rather than an actual random distribution. While I don't expect this to cause a problem, this will have to be kept in mind in case it becomes relevant elsewhere in the process) ## Deriving formulas: how to calculate how much re-use is possible? ### Definitions/assumptions: All high-fidelity samples are also evaluated in low-fidelity. A subsample of size $(n_h, n_l)$ is created in two steps: 1. First a subsample of $n_h$ high-fidelity samples is drawn from the total number $N_H$. This means that the number of 'high-fidelity'-only subsamples is $N_H \choose n_h$ 2. All high-fidelity samples are automatically included in the low-fidelity sample, meaning that only $n_l - n_h$ low-fidelity samples are left to be sampled from the $N_L - n_H$ remaining samples. This means the number of possible 'low-fidelity'-only subsamples after having drawn a 'high-fidelity'-only subsample is ${N_L - n_h} \choose {n_l - n_h}$ Since the drawing of the high- and low-fidelity subsamples are independent of eachother, the total number of subsamples for an Error Grid 'pixel' $n_h, n_l$ is ${N_H \choose n_h} \cdot {{N_L - n_h} \choose {n_l - n_h}}$ ### Derivation of formulas ```python import sympy sympy.init_printing() NH, NL, nh, nl = sympy.symbols('N_H N_L n_h n_l') ``` #### For $N_L += 1$ ```python num_subsamples = sympy.binomial(NH, nh) * sympy.binomial(NL-nh, nl-nh) more_subsamples = sympy.binomial(NH, nh) * sympy.binomial((NL+1)-nh, nl-nh) ratio = num_subsamples / more_subsamples ratio # binom(NH, nh) cancels out ``` ```python ratio.rewrite(sympy.factorial) ``` ```python sympy.simplify(ratio.rewrite(sympy.factorial)) # simplify: (x+1)! / x! = x+1 and x! / (x+1)! 
= 1/(x+1) ``` #### For $N_H += 1$ ```python num_subsamples = sympy.binomial(NH, nh) * sympy.binomial(NL-nh, nl-nh) more_subsamples = sympy.binomial((NH+1), nh) * sympy.binomial(NL-nh, nl-nh) ratio = num_subsamples / more_subsamples ratio # binom(NL-nh, nl-nh) cancels out ``` ```python ratio.rewrite(sympy.factorial) ``` ```python sympy.simplify(ratio.rewrite(sympy.factorial)) # simplify: (x+1)! / x! = x+1 and x! / (x+1)! = 1/(x+1) ``` #### Summary: As can be seen in the derivations above, when adding a new low-fidelity sample, the fraction that can be reused is fairly simply defined by $\dfrac{N_L - n_l + 1}{N_L - n_h + 1}$, depending only on $N_L, n_h$ and $n_l$. For the case of adding a high-fidelity sample, this is defined only as a relation between $N_H$ and $n_h$ by $\dfrac{N_H - n_h + 1}{N_H + 1}$ ## Visual inspection of actual percentage reuse ```python import matplotlib.pyplot as plt import numpy as np ``` ```python def reuse_after_low_add(nh, nl, NL): """calculate the fraction of subsamples that can be reused if the number of low-fidelity samples is increased by 1""" return (NL-nl+1)/(NL-nh+1) def reuse_after_high_add(nh, NH): """calculate the fraction of subsamples that can be reused if the number of high-fidelity samples is increased by 1""" return (NH - nh + 1) / (NH + 1) ``` ```python nh, NH = np.meshgrid(np.arange(2, 100), np.arange(3, 101)) ratio = reuse_after_high_add(nh, NH) ratio[nh >= NH] = np.nan fig = plt.figure() ax = fig.gca() img = ax.imshow(ratio, origin='lower') ax.set_title('Ratio of reusable subsamples for $N_H$ += 1') ax.set_xlabel('$n_h$') ax.set_ylabel('$N_H$') fig.colorbar(img) fig.show() ``` ```python from ipywidgets import interact, IntSlider %matplotlib inline def update(nh): nl, NL = np.meshgrid(np.arange(2, 250, dtype=float), np.arange(3, 251, dtype=float)) nl[(nl <= nh) | (nl >= NL)] = np.nan NL[(nl <= nh) | (nl >= NL)] = np.nan ratio = reuse_after_low_add(nh, nl, NL) fig = plt.figure() ax = fig.gca() img = ax.imshow(ratio, origin='lower') ax.set_title(f'Ratio of reusable subsamples for $N_L$ += 1 ($n_h$ = {nh})') ax.set_xlabel('$n_l$') ax.set_ylabel('$N_L$') fig.colorbar(img) fig.show() interact(update, nh=IntSlider(value=10, min=1, max=200, step=1)) ``` interactive(children=(IntSlider(value=10, description='nh', max=200, min=1), Output()), _dom_classes=('widget-… <function __main__.update(nh)> ```python ```
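As an extra sanity check (a sketch, not in the original notebook), the closed-form reuse fractions can be compared against the raw binomial subsample counts for a few arbitrary example sizes:

```python
from math import comb  # Python 3.8+

def count_subsamples(N_h, N_l, n_h, n_l):
    """Total number of (n_h, n_l) subsamples for totals (N_H, N_L), as derived above."""
    return comb(N_h, n_h) * comb(N_l - n_h, n_l - n_h)

# Compare the derived fractions with direct ratios of subsample counts.
for N_h, N_l, n_h, n_l in [(5, 10, 3, 6), (8, 20, 4, 12), (10, 30, 7, 15)]:
    ratio_low = count_subsamples(N_h, N_l, n_h, n_l) / count_subsamples(N_h, N_l + 1, n_h, n_l)
    ratio_high = count_subsamples(N_h, N_l, n_h, n_l) / count_subsamples(N_h + 1, N_l, n_h, n_l)
    assert np.isclose(ratio_low, reuse_after_low_add(n_h, n_l, N_l))
    assert np.isclose(ratio_high, reuse_after_high_add(n_h, N_h))
print("closed-form fractions match the binomial counts")
```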
3bcd7d9fdb1c4e01bee03251e7ec1dbd4295108f
53,255
ipynb
Jupyter Notebook
notebooks/subsample_reuse.ipynb
sjvrijn/multi-level-co-surrogates
04a071eb4360bed6f1a517531690beec7857e3e5
[ "MIT" ]
null
null
null
notebooks/subsample_reuse.ipynb
sjvrijn/multi-level-co-surrogates
04a071eb4360bed6f1a517531690beec7857e3e5
[ "MIT" ]
2
2021-02-25T14:07:50.000Z
2021-02-25T14:12:35.000Z
notebooks/subsample_reuse.ipynb
sjvrijn/multi-level-co-surrogates
04a071eb4360bed6f1a517531690beec7857e3e5
[ "MIT" ]
null
null
null
123.561485
32,040
0.862849
true
1,683
Qwen/Qwen-72B
1. YES 2. YES
0.861538
0.822189
0.708347
__label__eng_Latn
0.993434
0.48406
# Astroinformatics "Machine Learning Basics" ## Class 3: In this tutorial, we'll see basics concepts of machine learning. (We will not see classification yet, but these concepts applies to those problems too). All this concepts are very well explained in the [Deep Learning Book, Chapter 5](http://www.deeplearningbook.org/contents/ml.html) First a brief discussion about the frequentist and bayesians approach of machine learning. Then a basic problem of linear regression in order to give some insight about the approach of frequentist and bayesians of this problem and its connection. Then we'll see the concept of capacity, overfitting and underfitting and how to solve it using regularization (explained from a frequentist and bayesian point of view). Finally, we use cross validation to select the hyperparameters of our optimization problem. # Frequentiest and Bayesians Please read the discussion in [this great book](http://nbviewer.jupyter.org/github/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/blob/master/Chapter1_Introduction/Ch1_Introduction_PyMC3.ipynb), It'll give you a great insight about the difference between the two approaches. I also recommend [this reading](https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading20.pdf) from an MIT class ```python import numpy as np import matplotlib.pyplot as plt from numpy.linalg import inv from mpl_toolkits.mplot3d import axes3d from matplotlib import cm ``` # Linear regression least square Given some data $(x_{i}, y_{i})_{i=1}^{N}$, we want to find the best affine transformation of $x$ that better predicts $y$, the affine model is $\hat{y} = w^{T}x$ where we add a 1 to $x$ in order to have an offset of the linear model (affine transformation). Let's define $(X,Y)$ as the dataset where each row of $X$ is $x_{i}^{T}$ and $Y$ has $y_{i}$ as components. Now, we define the mean squared error ($MSE$) as our measure of performance of the model: $$ MSE = \frac{1}{N}\| \hat{Y}-Y \|^{2}_{2} $$ where $\hat{Y}$ is the estimated values of the linear model. In order to find the best model in the least square sense, we can minimize MSE by gradient: \begin{equation} \nabla_{w} MSE = 0 \\ \nabla_{w} \frac{1}{N} \| \hat{Y} - Y \|^{2}_{2} = 0 \\ \frac{1}{N} \nabla_{w} \| Xw - Y \|^{2}_{2} = 0 \\ \nabla_{w} (Xw-Y)^{T}(Xw-Y) = 0 \\ \nabla_{w} (w^{T}X^{T}Xw-2w^{T}X^{T}Y-Y^{T}Y) = 0 \\ 2X^{T}Xw-2X^{T}Y=0 \\ w = (X^{T}X)^{-1}X^{T}Y \end{equation} We know that this is the solution for the linear regression because the $MSE$ is a convex function of $w$m we can check this by taking the second derivative, $\nabla_{w}^{2}MSE = 2X^{T}X$ which is a positive semi definite matrix (the eigenvalues are bigger or equal to zero). In this case, we need all the eigenvalues bigger than zero, if there is at least one eigenvalue equal to zero, that means that the value of the MSE is "flat" in the direction of that eigenvector, when the eigenvalue is "close" to zero, it can produce numerical instability on the optimization (this can be fixed with regularization). 
Now let's make an example: ```python def linear_function(w, x): return np.dot(x, w) w = np.array([0.7, 0.3])[...,np.newaxis] print(w.shape) noise = 0.10 n_points = 20 np.random.seed(500) # we add an extra dimension to make it a column vector x_samples = np.linspace(3, 5, n_points)[..., np.newaxis] # then we add a column of ones in order to have the constant term a*x + b*1 = y augmented_x = np.concatenate([x_samples, np.ones(shape=(n_points,1))], axis=1) print("samples shape: "+str(augmented_x.shape)) # adding gaussian noise to the data y_samples = linear_function(w, augmented_x) + np.random.normal(loc=0.0, scale=noise, size=(n_points,1)) print("target shape: "+str(y_samples.shape)) fig, ax = plt.subplots(figsize=(12,7)) ax.plot(x_samples, linear_function(w, augmented_x), label="Real solution") ax.scatter(x_samples, y_samples, label="Samples", s=70) ax.legend(fontsize=14) ax.set_xlabel("x", fontsize=14) ax.set_ylabel("y", fontsize=14) plt.show() ``` ```python # Least square solution estimated_w = inv(augmented_x.T @ augmented_x) @ augmented_x.T @ y_samples # MSE error = np.linalg.norm(y_samples - linear_function(estimated_w, augmented_x))**2/len(y_samples) # eigenvectors and eigenvalues of the covariance matrix eg_values, eg_vectors = np.linalg.eig(augmented_x.T @ augmented_x) print("estimated w:" +str(estimated_w)) print("mean squared error: "+str(error)) print("eigenvalues: "+str(eg_values)) print("eigenvectos: "+str(eg_vectors)) ``` estimated w:[[ 0.68565289] [ 0.33056476]] mean squared error: 0.0101285040086 eigenvalues: [ 346.94365923 0.42476182] eigenvectos: [[ 0.97134386 -0.23767859] [ 0.23767859 0.97134386]] ```python # making error maningfold X_array = np.arange(-1, 2.5, 0.05) Y_array = np.arange(-1, 2.5, 0.05) X, Y = np.meshgrid(X_array, Y_array) Z = np.zeros(shape=(len(X_array), len(Y_array))) for i, x in enumerate(X_array): for j, y in enumerate(Y_array): w_loop = np.array([x, y])[..., np.newaxis] Z[i, j] = np.linalg.norm(y_samples - linear_function(w_loop, augmented_x))**2/len(y_samples) ``` ```python fig, (ax, ax2) = plt.subplots(1, 2, figsize=(15,7)) ax.plot(x_samples, linear_function(w, augmented_x), label="Real solution") ax.scatter(x_samples, y_samples, label="Samples", s=70) ax.plot(x_samples, linear_function(estimated_w, augmented_x), label="Estimated solution") ax.legend(fontsize=14) ax.set_xlabel("x", fontsize=14) ax.set_ylabel("y", fontsize=14) levels = np.linspace(0, np.amax(Z), 100) cont = ax2.contourf(X, Y, Z, levels = levels)#,cmap="inferno") soa = np.concatenate([np.roll(np.repeat(estimated_w.T, 2, axis=0), shift=1), eg_vectors], axis=1)*1.0 X2, Y2, U, V = zip(*soa) ax2.quiver(X2, Y2, U, V, angles='xy', scale_units='xy', scale=1, color="y", label="eigen vectors of covariance matrix") ax2.legend(fontsize=14) ax2.set_xlabel("MSE for each w", fontsize=14) plt.show() ``` ```python fig = plt.figure(figsize=(12, 7)) ax2 = fig.add_subplot(1, 1, 1, projection='3d') surf = ax2.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=cm.coolwarm, linewidth=0, antialiased=False) ax2.set_ylabel("w[1]", fontsize=14) ax2.set_xlabel("x[0]", fontsize=14) plt.show() ``` As we can see here, there is one direction where the MSE is relatively flat (small eigenvalue), that is why we see little variation in that direction by changing w, but it is still convex enough to have a unique solution. 
This is the frequentist way to solve a linear regression: it is just a data-driven solution whose only assumptions are the shape of the model and that the MSE is a good way to measure performance. Let's now see the Bayesian way, which involves some distributional assumptions expressed using probabilities.

# Linear Regression Maximum Likelihood Estimation

Now, we define our model in a probabilistic way, that is, for example:

$$
\hat{y} = w^{T}x + \epsilon \hspace{0.5cm} \text{with} \hspace{0.5cm} \epsilon \sim \mathcal{N}(0, \sigma)
$$

This means that we are assuming that the data has Gaussian noise of a given variance $\sigma^{2}$. Now, we can write the probability of occurrence of the data $Y$, for a given input $X$ and a model:

$$
P(Y \mid X, w, \sigma)
$$

We call this the "likelihood". Now, it is very intuitive that if a particular value of $w$ produces a high probability of seeing the data (a high likelihood value), then $w$ is a good choice for our model, so we define the maximum likelihood estimation of $w$ as:

$$
\hat{w} = \text{argmax}_{w} P(Y \mid X, w, \sigma) = \text{argmax}_{w} \prod_{i}^{N}P(y_{i} \mid x_{i},w,\sigma) = \text{argmax}_{w} \prod_{i}^{N}\mathcal{N}(y_{i}; wx_{i}, \sigma)
$$

where the last equality comes from the assumption of i.i.d. samples. Now, this product over many probabilities can produce numerical underflow, so we can obtain a more convenient optimization problem. Instead of maximizing the likelihood, we minimize the "negative log likelihood", which is $NLL = -\log(P(Y \mid X, w, \sigma))$, so the optimization problem is:

$$
\hat{w} = \text{argmin}_{w} -\log(P(Y \mid X, w, \sigma))
$$

This expression to minimize also arises naturally when we instead start from minimizing the Kullback-Leibler divergence between the empirical distribution of the data and the distribution produced by the model (we'll skip this for now). Something interesting happens when we work a little bit on the expression of the NLL:

\begin{equation}
\begin{split}
NLL = & -\log(\prod_{i}^{N}\mathcal{N}(y_{i}; wx_{i}, \sigma)) \\
= & -\sum_{i=1}^{N}\log(\mathcal{N}(y_{i}; wx_{i}, \sigma)) \\
= & -\sum_{i=1}^{N}\left [ \log \frac{1}{\sqrt{2\pi}\sigma} - \frac{(y_{i}-wx_{i})^{2}}{2 \sigma^{2}} \right ] \propto \| \hat{Y} - Y \|^{2}_{2}
\end{split}
\end{equation}

Least mean squared error is equivalent to maximum likelihood estimation when Gaussian noise in the data is assumed!!

# Capacity, overfitting and Underfitting

Generally, in machine learning, the purpose of fitting a model to a dataset is to evaluate the model on new data, not just the data that we used to train the model. The ability to perform well on new data is called generalization. In order to measure the generalization capacity of a trained model, we set aside a subset of the original dataset which is not used to train the model but to test it. Those sets are called the train set (used to find the parameters) and the test set (used to measure the generalization performance); we can then estimate the generalization ability of our model by measuring the error (or another metric of performance) on the test set. Here we are assuming that both the train and test sets are generated by the same data-generating process, which means i.i.d. assumptions over every sample (as we did before in the maximum likelihood case).

The objective of the optimization process is to make the training error small enough and to make the gap between the training and test error small.
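Before moving on, here is a quick numerical sanity check (a sketch, not part of the original notebook) of the maximum-likelihood/least-squares equivalence derived above. It reuses `augmented_x` and `y_samples` from the first example and assumes SciPy is available for the optimizer (SciPy is not imported elsewhere in this notebook).

```python
# Sketch: minimizing the negative log likelihood numerically should recover
# the closed-form least-squares solution (up to optimizer tolerance).
from scipy.optimize import minimize

sigma_assumed = 0.10  # the noise level used to generate the data above

def nll(w_flat):
    residuals = y_samples[:, 0] - augmented_x @ w_flat
    # Constant terms of the Gaussian log density are dropped; they do not depend on w.
    return np.sum(residuals**2) / (2 * sigma_assumed**2)

w_mle = minimize(nll, x0=np.zeros(2)).x
w_lsq = (inv(augmented_x.T @ augmented_x) @ augmented_x.T @ y_samples)[:, 0]
print(w_mle, w_lsq)  # the two estimates should agree closely
```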
The following concepts are useful to understand this generalization process:

- Capacity: the ability of the model to fit a wide variety of functions (i.e., relations between inputs and outputs)
- Overfitting: occurs when the gap between the training error and the test error is too large, i.e., when the capacity of the model is too high compared with the real complexity of the data-generating process
- Underfitting: happens when the model is not able to obtain a sufficiently low error value on the training set; it may occur because the capacity of the model is not enough to encode the complexity of the data-generating process

Let's understand this with an example using polynomial fitting over data. In the following example, the true complexity of the data is a quadratic polynomial. We fit models with low and high capacities compared with the real complexity.

```python
w = np.array([-2, 0.6, 0.7])[...,np.newaxis]
noise = 1.2
n_points = 20
train_size = 10
test_size = n_points - train_size
np.random.seed(0)

x_samples = np.linspace(-2, 2, n_points)[..., np.newaxis]
# Making quadratic polynomial
augmented_x = np.concatenate([x_samples**2, x_samples, x_samples**0], axis=1)
y_samples = linear_function(w, augmented_x) + np.random.normal(loc=0.0, scale=noise, size=(n_points,1))

x_plot = np.linspace(-2,2,100)[..., np.newaxis]
aug_x_plot = np.concatenate([x_plot**2, x_plot, x_plot**0], axis=1)

# Dividing into train and test sets
indexes = np.arange(start=0, stop=n_points,step=1)
np.random.shuffle(indexes)
train_index = indexes[:train_size]
test_index = indexes[train_size:]

x_train = x_samples[train_index, ...]
aug_x_train = augmented_x[train_index, ...]
y_train = y_samples[train_index, ...]

x_test = x_samples[test_index, ...]
aug_x_test = augmented_x[test_index, ...]
y_test = y_samples[test_index, ...]

fig, ax = plt.subplots(figsize=(12,7))
ax.plot(x_plot, linear_function(w, aug_x_plot), label="Real solution")
ax.scatter(x_train, y_train, label="train samples", s=70)
ax.scatter(x_test, y_test, label="test samples", s=70)
ax.legend(fontsize=14)
ax.set_xlabel("x", fontsize=14)
ax.set_ylabel("y", fontsize=14)
plt.show()
```

```python
# Linear, Quadratic, 5 and 10 degree polynomial fit.
linear_coef = np.polyfit(x_train[:, 0], y_train[:, 0], deg=1, full=True) qd_coef = np.polyfit(x_train[:, 0], y_train[:, 0], deg=2,full=True) deg5_coef = np.polyfit(x_train[:, 0], y_train[:, 0], deg=5, full=True) deg10_coef = np.polyfit(x_train[:, 0], y_train[:, 0], deg=10, full=True) p1 = np.poly1d(linear_coef[0]) p2 = np.poly1d(qd_coef[0]) p3 = np.poly1d(deg5_coef[0]) p4 = np.poly1d(deg10_coef[0]) error1 = np.linalg.norm(y_test[:, 0] - p1(x_test[:, 0]))**2/len(y_test) error2 = np.linalg.norm(y_test[:, 0] - p2(x_test[:, 0]))**2/len(y_test) error3 = np.linalg.norm(y_test[:, 0] - p3(x_test[:, 0]))**2/len(y_test) error4 = np.linalg.norm(y_test[:, 0] - p4(x_test[:, 0]))**2/len(y_test) ``` ```python print("Generalization errors") print("linear: "+str(error1)) print("quadratic: "+str(error2)) print("deg5: "+str(error3)) print("deg10: "+str(error4)) fig, ax = plt.subplots(figsize=(12,7)) ax.plot(x_plot, linear_function(w, aug_x_plot), label="Real solution", lw=3) ax.scatter(x_train, y_train, label="train samples", s=70) ax.scatter(x_test, y_test, label="test samples", s=70) ax.plot(x_plot, p1(x_plot),'--' ,label="linear (Underfitting)") ax.plot(x_plot, p2(x_plot),'--' , label="quadratic (Appropiate Capacity)") ax.plot(x_plot, p3(x_plot),'--' , label="5 deg (Not too overfitted)" ) ax.plot(x_plot, p4(x_plot),'--' , label="10 deg (Overfitting)") ax.legend(fontsize=14) ax.set_xlabel("x", fontsize=14) ax.set_ylabel("y", fontsize=14) ax.set_ylim([-15, 5]) plt.show() ``` In the last plot, we can see that the linear model does not have enough capacity to express the relation between x and y. Quadratic and deg 5 polynomial are good enough to find the relation. Some times, a high capacity model with a large family of functions that can represent (this is representational capacity), does not find the best solution during the optimization process, these additional limitations reduce the capacity of the actual solution, this is called the effective capacity. In the case of deg 10 polynomial, the capacity of the model is too high, so fits perfectly the train data but has a poor generalization ability. The capacity of the model can be modified by modifying the model or changing the effective capacity by adding restrictions to the loss function. This is called Regularization. # Regularization ## Regularized least squares In this example, we'll use the weight decay regularization, which tends to choose parameters on the solution space that are close to the origin (small euclidean norm). We just need to modify the loss function by adding one term: \begin{equation} \begin{split} J(w) = & MSE + R \\ = & \frac{1}{N} \| \hat{Y} - Y \|^{2}_{2} + \lambda \| w \|^{2}_{2} \end{split} \end{equation} Let's see how the solution looks by taking the gradient of J with respect to w \begin{equation} \nabla_{w} J(w) = 0 \\ 2X^{T}Xw-2X^{T}Y + 2\lambda w=0 \\ w = (X^{T}X + \lambda I)^{-1}X^{T}Y \end{equation} As we can see in the last expression, now we take the inverse of the correlation matrix plus the identity ponderated by $\lambda$. This means that we are adding convexity to the problem because the eigenvalues of this new matrix are going to be bigger if we increase $\lambda$ (we are making the matrix less singular), the convexity of this new manifold as a combination of a parabola because of the regularization term and the convexity of the original problem without the regularization. 
The optimization will tend to choose small norm w if we increase $\lambda$ Now, let's see an example by fitting a high capacity model but with this regularization term in the loss function ```python # Same model as before w = np.array([-2, 0.6, 0.7])[...,np.newaxis] noise = 1.2 n_points = 20 train_size = 10 test_size = n_points - train_size np.random.seed(0) x_samples = np.linspace(-2, 2, n_points)[..., np.newaxis] augmented_x = np.concatenate([x_samples**2, x_samples, x_samples**0], axis=1) y_samples = linear_function(w, augmented_x) + np.random.normal(loc=0.0, scale=noise, size=(n_points,1)) x_plot = np.linspace(-2,2,100)[..., np.newaxis] aug_x_plot = np.concatenate([x_plot**2, x_plot, x_plot**0], axis=1) indexes = np.arange(start=0, stop=n_points,step=1) np.random.shuffle(indexes) train_index = indexes[:train_size] test_index = indexes[train_size:] x_train = x_samples[train_index, ...] aug_x_train = augmented_x[train_index, ...] y_train = y_samples[train_index, ...] x_test = x_samples[test_index, ...] aug_x_test = augmented_x[test_index, ...] y_test = y_samples[test_index, ...] # Now we do it for high capacity model deg = 10 x_deg = [] for i in range(deg+1): x_deg.append(x_samples**(deg-i)) x_deg = np.concatenate(x_deg, axis=1) x_deg_train = x_deg[train_index, ...] x_deg_test = x_deg[test_index, ...] ``` ```python # Least square solution reg_values = [10**7, 0.5, 0] labels = ["Too large lambda (Underfitting)", "appropiate lambda", "no regularization (Overfitting)"] reg_w = [] solution = [] for i, lam in enumerate(reg_values): # we save the regularized solution for each lambda reg_w.append(inv(x_deg_train.T @ x_deg_train + lam*np.identity(deg+1)) @ x_deg_train.T @ y_train) solution.append(np.poly1d(reg_w[-1][:,0])) ``` ```python fig, ax_array = plt.subplots(1,3,figsize=(15,5)) for i, lam in enumerate(reg_values): ax_array[i].plot(x_plot, linear_function(w, aug_x_plot), label="Real solution", lw=3) ax_array[i].scatter(x_train, y_train, label="train samples", s=70) ax_array[i].scatter(x_test, y_test, label="test samples", s=70) p = solution[i] ax_array[i].plot(x_plot, p(x_plot), label="Estimated solution") ax_array[i].set_ylim([-10, 5]) ax_array[i].set_title(labels[i], fontsize=14) ax_array[i].legend() plt.show() ``` The last plot shows how the regularization term modifies the effective capacity of the model. For very high $\lambda$ (left plot), the solutions are reduced to a very small region of the original space, so the capacity of the model is reduced too much and produce underfitting on the data. For very small $\lambda$ (right plot), there is no penalization for the size of the weights and the model is able to look for solutions using its original capacity, so the model overfits. For a medium $\lambda$ (middle plot), the effective capacity is probably close to the necessary one to find the correct function of the data. ## Probabilistic Perspective of Regularization, Maximum a Posteriori In this case, we will add information about the distribution of the parameters p(w) as a prior knowledge, by using Bayes' theorem to modify the likelihood in the following way: \begin{equation} \begin{split} P(\theta \mid D) = & \frac{P(D \mid \theta) P(\theta)}{P(D)} \\ \propto & P(D \mid w) P(w) \end{split} \end{equation} Where $D$ is the data and $\theta$ the parameters of the model. 
$P(\theta)$ is called the "prior", since it is prior knowledge added to the model about how the parameters distribute; in some applications, the designer of the model might have some idea of where to look for the parameters for a particular problem.

$P(D \mid \theta)$ is the likelihood, as we already know.

$P(\theta \mid D)$ is called the "posterior" probability; it is the distribution of the parameters given the data, basically an update of our prior after seeing evidence of the real process (samples from the data-generating process).

Let's consider our previous model and assume a Gaussian distribution for the prior of the parameters

$$
\hat{y} = w^{T}x + \epsilon \hspace{0.5cm} \text{with} \hspace{0.5cm} \epsilon \sim \mathcal{N}(0, \sigma) \hspace{0.5cm} \text{and} \hspace{0.5cm} p(w) \sim \mathcal{N}(0, \tau)
$$

Then, the posterior probability of the parameters is proportional to:

\begin{equation}
\begin{split}
P(w \mid Y,X,\sigma , \tau) \propto P(Y \mid X, w, \sigma) P(w \mid \tau)
\end{split}
\end{equation}

If we find the $w$ where the posterior probability is maximized, it means that, given the dataset and the prior knowledge, there is a high probability that the model with that value of $w$ is the one that produced the data. So the solution to the maximum a posteriori (MAP) problem is:

\begin{equation}
\begin{split}
\hat{w} = & \text{argmax}_{w} P(w \mid Y, X, \sigma, \tau) \\
= & \text{argmin}_{w} -\log(P(w \mid Y, X, \sigma, \tau))
\end{split}
\end{equation}

Something interesting happens (again) when we work a little bit on this expression

\begin{equation}
\begin{split}
-\log(P(w \mid Y, X, \sigma, \tau)) = & -\sum_{i=1}^{N}\log \mathcal{N}(y_{i}; wx_{i}, \sigma)-\log \mathcal{N}(w; 0, \tau) \\
= & \sum_{i=1}^{N} \frac{(y_{i}-wx_{i})^{2}}{2\sigma^{2}} + \frac{\| w \|^{2}_{2}}{2 \tau^{2}} + \text{const} \\
\propto & \| \hat{Y} - Y \|^{2}_{2} + \frac{\sigma^{2}}{\tau^{2}} \| w \|^{2}_{2}
\end{split}
\end{equation}

Regularized least mean squared error is the same as maximum a posteriori with Gaussian noise and a Gaussian prior! The amount of regularization, in this case, is controlled by the width of the Gaussian prior $\tau$.

# Hyperparameters and Cross-validation

Many models and regularization factors are subject to hyperparameters that must be chosen by the designer. By hyperparameters I mean, for example, the lambda for regularization, the degree of the polynomial, the number of layers and neurons in a neural network, the gamma coefficient for support vector machines, etc. We should choose the hyperparameters that produce the model that generalizes best over new data (the test set). A good way to do this is to use cross-validation, which is a procedure that gives a better estimation of the generalization performance.
Some times we do not have too many examples, so the random choice for the test set could be very sensitive to the realization, and of course, the generalization performance estimation too, cross-validation try to fix this problem by doing the following: #### K-fold cross-validation For a given dataset $D$, performance metric F and number of subsets k, we do: - Split D into k mutually exclusive subsets $D_{i}$ with $\bigcup_{i=1}^{K} D_{i} = D$ - For i from 1 to k: - train the model with $D\backslash D_{i}$ - Compute performance F over $D_{i}$ - end for - Return performance ```python def cross_validation(lam, x_subsets, y_subsets): train_error = [] test_error = [] for i, x_test in enumerate(x_subsets): x_train = np.concatenate([x for j, x in enumerate(x_subsets) if j!=i], axis=0) y_train = np.concatenate([y for j, y in enumerate(y_subsets) if j!=i], axis=0) y_test = y_subsets[i] w = inv(x_train.T @ x_train + lam*np.identity(x_train.shape[1])) @ x_train.T @ y_train p = np.poly1d(w[:,0]) test_error.append(np.linalg.norm(y_test[:, 0] - p(x_test[:, -2]))**2/len(y_test)) train_error.append(np.linalg.norm(y_train[:, 0] - p(x_train[:, -2]))**2/len(y_train)) return np.array(train_error), np.array(test_error) def kfold_cv(x_data, y_data, lam_array, kfold=4): x_subsets = np.split(x_data, kfold) y_subsets = np.split(y_data, kfold) train_error_mean = [] test_error_mean = [] train_error_std = [] test_error_std = [] for j, lam in enumerate(lam_array): print('\r{}'.format(float(j/len(lam_array))*100), end='') train_error, test_error = cross_validation(lam, x_subsets, y_subsets) train_error_mean.append(np.mean(train_error)) train_error_std.append(np.std(train_error)) test_error_mean.append(np.mean(test_error)) test_error_std.append(np.std(test_error)) return [np.array(train_error_mean), np.array(train_error_std), np.array(test_error_mean), np.array(test_error_std)] ``` ```python lam_array = np.linspace(0.01, 10**8, 10000) one_over_lambda = 1.0/lam_array train_error_mean, train_error_std, test_error_mean, test_error_std = kfold_cv(x_deg, y_samples, lam_array) optimal_lambda = lam_array[np.where(test_error_mean==np.amin(test_error_mean))[0]] ``` 99.9900000000000143 ```python fig, ax = plt.subplots(figsize=(12,7)) ax.plot(lam_array, test_error_mean, label="test error") ax.plot(lam_array, train_error_mean, label="train error") ax.set_xscale("log") ax.set_yscale("log") ax.set_xlabel("lambda (Less capacity <--- ---> More capacity)", fontsize=14) ax.set_ylabel("errors log scale", fontsize=14) ax.set_title("Cross validation results", fontsize=14) ax.set_xlim([np.amax(lam_array), np.amin(lam_array)]) ax.axvline(x=optimal_lambda, color='r', linestyle='--', lw=4, label="Optimal lambda = "+str(optimal_lambda)) ax.legend(fontsize=14) plt.show() ``` It is easy to see how the parameter $\lambda$ change the effective capacity and produce a smooth transition between underfitting (left side of the optimal $\lambda$, too large $\lambda$) and overfitting (right side of the optimal $\lambda$, too small $\lambda$) regime. 
```python
deg_w = (inv(x_deg_train.T @ x_deg_train + optimal_lambda*np.identity(deg+1)) @ x_deg_train.T @ y_train)
# build the polynomial from the regularized weights
deg_p = np.poly1d(deg_w[:,0])
```

```python
fig, ax = plt.subplots(figsize=(12,7))
ax.plot(x_plot, linear_function(w, aug_x_plot), label="Real solution", lw=4)
ax.scatter(x_train, y_train, label="train samples", s=70)
ax.scatter(x_test, y_test, label="test samples", s=70)
ax.plot(x_plot, deg_p(x_plot),'-o' ,label="regularized high capacity model", lw=1,ms=4)
ax.legend(fontsize=14)
ax.set_xlabel("x", fontsize=14)
ax.set_ylabel("y", fontsize=14)
ax.set_ylim([-15, 5])
plt.show()
```
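To make the MAP interpretation above concrete, here is a minimal numerical check (a sketch on synthetic data with hypothetical variable names, not part of the original notebook): the closed-form ridge solution with $\lambda = \sigma^{2}/\tau^{2}$ should coincide with the minimizer of the negative log posterior.

```python
# Sketch: verify numerically that ridge regression equals the MAP estimate
# with gaussian noise (sigma) and a gaussian prior (tau). Reuses np and inv
# from the cells above; synthetic data only.
from scipy.optimize import minimize

rng = np.random.RandomState(0)
N, sigma, tau = 50, 0.5, 2.0
X_syn = np.column_stack([rng.uniform(-1, 1, N), np.ones(N)])  # slope + intercept column
w_true = np.array([1.5, -0.3])
y_syn = X_syn @ w_true + sigma * rng.randn(N)

# closed-form ridge solution with lambda = sigma^2 / tau^2
lam = sigma**2 / tau**2
w_ridge = inv(X_syn.T @ X_syn + lam * np.identity(2)) @ X_syn.T @ y_syn

# direct minimization of the negative log posterior
def neg_log_posterior(w):
    return np.sum((y_syn - X_syn @ w)**2) / (2 * sigma**2) + np.sum(w**2) / (2 * tau**2)

w_map = minimize(neg_log_posterior, np.zeros(2)).x
print(w_ridge, w_map)  # the two estimates should agree up to optimizer tolerance
```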
# Session 3: Unsupervised and Supervised Learning <p class="lead"> Parag K. Mital<br /> <a href="https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/info">Creative Applications of Deep Learning w/ Tensorflow</a><br /> <a href="https://www.kadenze.com/partners/kadenze-academy">Kadenze Academy</a><br /> <a href="https://twitter.com/hashtag/CADL">#CADL</a> </p> <a name="learning-goals"></a> # Learning Goals * Build an autoencoder w/ linear and convolutional layers * Understand how one hot encodings work * Build a classification network w/ linear and convolutional layers <!-- MarkdownTOC autolink=true autoanchor=true bracket=round --> - [Introduction](#introduction) - [Unsupervised vs. Supervised Learning](#unsupervised-vs-supervised-learning) - [Autoencoders](#autoencoders) - [MNIST](#mnist) - [Fully Connected Model](#fully-connected-model) - [Convolutional Autoencoder](#convolutional-autoencoder) - [Denoising Autoencoder](#denoising-autoencoder) - [Variational Autoencoders](#variational-autoencoders) - [Predicting Image Labels](#predicting-image-labels) - [One-Hot Encoding](#one-hot-encoding) - [Using Regression for Classification](#using-regression-for-classification) - [Fully Connected Network](#fully-connected-network) - [Convolutional Networks](#convolutional-networks) - [Saving/Loading Models](#savingloading-models) - [Checkpoint](#checkpoint) - [Protobuf](#protobuf) - [Wrap Up](#wrap-up) - [Reading](#reading) <!-- /MarkdownTOC --> <a name="introduction"></a> # Introduction In the last session we created our first neural network. We saw that in order to create a neural network, we needed to define a cost function which would allow gradient descent to optimize all the parameters in our network <TODO: Insert animation of gradient descent from previous session>. We also saw how neural networks become much more expressive by introducing series of linearities followed by non-linearities, or activation functions. <TODO: Insert graphic of activation functions from previous session>. We then explored a fun application of neural networks using regression to learn to paint color values given x, y positions. This allowed us to build up a sort of painterly like version of an image. In this session, we'll see how to use some simple deep nets with about 3 or 4 layers capable of performing unsupervised and supervised learning, and I'll explain those terms in a bit. The components we learn here will let us explore data in some very interesting ways. <a name="unsupervised-vs-supervised-learning"></a> # Unsupervised vs. Supervised Learning Machine learning research in deep networks performs one of two types of learning. You either have a lot of data and you want the computer to reason about it, maybe to encode the data using less data, and just explore what patterns there might be. That's useful for clustering data, reducing the dimensionality of the data, or even for generating new data. That's generally known as unsupervised learning. In the supervised case, you actually know what you want out of your data. You have something like a label or a class that is paired with every single piece of data. In this first half of this session, we'll see how unsupervised learning works using something called an autoencoder and how it can be extended using convolution.. Then we'll get into supervised learning and show how we can build networks for performing regression and classification. By the end of this session, hopefully all of that will make a little more sense. 
Don't worry if it doesn't yet! Really the best way to learn is to put this stuff into practice in the homeworks.

<a name="autoencoders"></a>
# Autoencoders

<TODO: Graphic of autoencoder network diagram>

An autoencoder is a type of neural network that learns to encode its inputs, often using much less data. It does so in a way that it can still output the original input with just the encoded values. For it to learn, it does not require "labels" as its output. Instead, it tries to output whatever it was given as input. So in goes an image, and out should also go the same image. But it has to be able to retain all the details of the image, even after possibly reducing the information down to just a few numbers.

We'll also explore how this method can be extended and used to cluster or organize a dataset, or to explore latent dimensions of a dataset that explain some interesting ideas. For instance, with handwritten numbers, we will be able to see how each number can be encoded in the autoencoder without ever telling it which number is which.

<TODO: place teaser of MNIST video learning>

But before we get there, we're going to need to develop an understanding of a few more concepts. First, imagine a network that takes as input an image. The network can be composed of either matrix multiplications or convolutions to any number of filters or dimensions. At the end of any processing, the network has to be able to recompose the original image it was input.

In the last session, we saw how to build a network capable of taking 2 inputs representing the row and column of an image, and predicting 3 outputs, the red, green, and blue colors. Instead of having 2 inputs, we'll now have an entire image as an input, the brightness of every pixel in our image. And as output, we're going to have the same thing, the entire image being output.

<a name="mnist"></a>
## MNIST

Let's first get some standard imports:

```python
# imports
%matplotlib inline
# %pylab osx
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as colors
import matplotlib.cm as cmx
# Some additional libraries which we'll use just
# to produce some visualizations of our training
from libs.utils import montage
from libs import gif
import IPython.display as ipyd
plt.style.use('ggplot')

# Bit of formatting because I don't like the default inline code style:
from IPython.core.display import HTML
HTML("""<style> .rendered_html code { padding: 2px 4px; color: #c7254e; background-color: #f9f2f4; border-radius: 4px; } </style>""")
```

<style> .rendered_html code { padding: 2px 4px; color: #c7254e; background-color: #f9f2f4; border-radius: 4px; } </style>

Then we're going to try this with the MNIST dataset, which I've included a simple interface for in the `libs` module.

```python
from libs.datasets import MNIST
ds = MNIST()
```

    Extracting MNIST_data/train-images-idx3-ubyte.gz
    Extracting MNIST_data/train-labels-idx1-ubyte.gz
    Extracting MNIST_data/t10k-images-idx3-ubyte.gz
    Extracting MNIST_data/t10k-labels-idx1-ubyte.gz

Let's take a look at what this returns:

```python
# ds.<tab>
```

So we can see that there are a few interesting accessors. ... we're not going to worry about the labels until a bit later when we talk about a different type of model which can go from the input image to predicting which label the image is. But for now, we're going to focus on trying to encode the image and be able to reconstruct the image from our encoding.
Let's take a look at the images, which are stored in the variable `X`. Remember, in this course, we'll always use the variable `X` to denote the input to a network, and we'll use the variable `Y` to denote its output.

```python
print(ds.X.shape)
```

    (70000, 784)

So each image has 784 features, and there are 70k of them. If we want to draw the image, we're going to have to reshape it to a square. 28 x 28 is 784. So we're just going to reshape it to a square so that we can see all the pixels arranged in rows and columns instead of one giant vector.

```python
plt.imshow(ds.X[0].reshape((28, 28)))
```

```python
# Let's get the first 1000 images of the dataset and reshape them
imgs = ds.X[:1000].reshape((-1, 28, 28))

# Then create a montage and draw the montage
plt.imshow(montage(imgs), cmap='gray')
```

Let's take a look at the mean of the dataset:

```python
# Take the mean across all images
mean_img = np.mean(ds.X, axis=0)

# Then plot the mean image.
plt.figure()
plt.imshow(mean_img.reshape((28, 28)), cmap='gray')
```

And the standard deviation

```python
# Take the std across all images
std_img = np.std(ds.X, axis=0)

# Then plot the std image.
plt.figure()
plt.imshow(std_img.reshape((28, 28)))
```

So recall from session 1 that these two images are really saying what's more or less constant across every image, and what's changing. We're going to use an autoencoder to try to encode everything that could possibly change in the image.

<a name="fully-connected-model"></a>
## Fully Connected Model

To try and encode our dataset, we are going to build a series of fully connected layers that get progressively smaller. So in neural net speak, every pixel is going to become its own input neuron. And from the original 784 neurons, we're going to slowly reduce that information down to smaller and smaller numbers. It's often standard practice to use powers of 2 or 10. I'll create a list of the number of dimensions we'll use for each new layer.

```python
dimensions = [512, 256, 128, 64]
```

So we're going to reduce our 784 dimensions down to 512 by multiplying them by a 784 x 512 dimensional matrix. Then we'll do the same thing again using a 512 x 256 dimensional matrix, to reduce our dimensions down to 256 dimensions, and then again to 128 dimensions, then finally to 64.

To get back to the size of the image, we're just going to do the reverse. But we're going to use the exact same matrices. We do that by taking the transpose of the matrix, which reshapes the matrix so that the rows become columns, and vice-versa. So our last matrix which was 128 rows x 64 columns, when transposed, becomes 64 rows x 128 columns.

So by sharing the weights in the network, we're only really learning half of the network, and those 4 matrices are going to make up the bulk of our model. We just have to find out what they are using gradient descent.

We're first going to create `placeholders` for our tensorflow graph. We're going to set the first dimension to `None`. This is something special for placeholders which tells tensorflow "let this dimension be any possible value". 1, 5, 100, 1000, it doesn't matter. We're going to pass our entire dataset in minibatches. So we'll send 100 images at a time. But we'd also like to be able to send in only 1 image and see what the prediction of the network is. That's why we let this dimension be flexible in the graph.
```python
# reset graph
tf.reset_default_graph()

# So the number of features is the second dimension of our inputs matrix, 784
n_features = ds.X.shape[1]

# And we'll create a placeholder in the tensorflow graph that will be able to get any number of n_feature inputs.
X = tf.placeholder(tf.float32, [None, n_features])
```

Now we're going to create a network which will perform a series of multiplications on `X`, each wrapped in a non-linearity:

```python
# let's first copy our X placeholder to the name current_input
current_input = X
n_input = n_features

# We're going to keep every matrix we create so let's create a list to hold them all
Ws = []

# We'll create a for loop to create each layer:
for layer_i, n_output in enumerate(dimensions):

    # just like in the last session,
    # we'll use a variable scope to help encapsulate our variables
    # This will simply prefix all the variables made in this scope
    # with the name we give it.
    with tf.variable_scope("encoder/layer/{}".format(layer_i)):

        # Create a weight matrix which will increasingly reduce
        # down the amount of information in the input by performing
        # a matrix multiplication
        W = tf.get_variable(
            name='W',
            shape=[n_input, n_output],
            initializer=tf.random_normal_initializer(mean=0.0, stddev=0.02))

        # Now we'll multiply our input by our newly created W matrix
        h = tf.matmul(current_input, W)

        # And then use a relu activation function on its output
        current_input = tf.nn.relu(h)

        # Finally we'll store the weight matrix so we can build the decoder.
        Ws.append(W)

        # We'll also replace n_input with the current n_output, so that on the
        # next iteration, our new number of inputs will be correct.
        n_input = n_output
```

So now we've created a series of multiplications in our graph which take us from our input of batch size x number of features, which started as `None` x `784`, through a series of matrices which reduce the size down to `None` x `64`.

```python
print(current_input.get_shape())
```

    (?, 64)

In order to get back to the original dimensions of the image, we're going to reverse everything we just did. Let's see how we do that:

```python
# We'll first reverse the order of our weight matrices
Ws = Ws[::-1]

# then reverse the order of our dimensions
# appending the last layers number of inputs.
dimensions = dimensions[::-1][1:] + [ds.X.shape[1]]
print(dimensions)
```

    [128, 256, 512, 784]

```python
for layer_i, n_output in enumerate(dimensions):
    # we'll use a variable scope again to help encapsulate our variables
    # This will simply prefix all the variables made in this scope
    # with the name we give it.
    with tf.variable_scope("decoder/layer/{}".format(layer_i)):

        # Now we'll grab the weight matrix we created before and transpose it
        # So a 3072 x 784 matrix would become 784 x 3072
        # or a 256 x 64 matrix, would become 64 x 256
        W = tf.transpose(Ws[layer_i])

        # Now we'll multiply our input by our transposed W matrix
        h = tf.matmul(current_input, W)

        # And then use a relu activation function on its output
        current_input = tf.nn.relu(h)

        # We'll also replace n_input with the current n_output, so that on the
        # next iteration, our new number of inputs will be correct.
        n_input = n_output
```

After this, our `current_input` will become the output of the network:

```python
Y = current_input
```

Now that we have the output of the network, we just need to define a training signal to train the network with.
To do that, we create a cost function which will measure how well the network is doing: ```python # We'll first measure the average difference across every pixel cost = tf.reduce_mean(tf.squared_difference(X, Y), 1) print(cost.get_shape()) ``` (?,) And then take the mean again across batches: ```python cost = tf.reduce_mean(cost) ``` We can now train our network just like we did in the last session. We'll need to create an optimizer which takes a parameter `learning_rate`. And we tell it that we want to minimize our cost, which is measuring the difference between the output of the network and the input. ```python learning_rate = 0.001 optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost) ``` Now we'll create a session to manage the training in minibatches: ```python # %% # We create a session to use the graph sess = tf.Session() sess.run(tf.global_variables_initializer()) ``` Now we'll train: ```python # Some parameters for training batch_size = 100 n_epochs = 5 # We'll try to reconstruct the same first 100 images and show how # The network does over the course of training. examples = ds.X[:100] # We'll store the reconstructions in a list imgs = [] fig, ax = plt.subplots(1, 1) for epoch_i in range(n_epochs): for batch_X, _ in ds.train.next_batch(): sess.run(optimizer, feed_dict={X: batch_X - mean_img}) recon = sess.run(Y, feed_dict={X: examples - mean_img}) recon = np.clip((recon + mean_img).reshape((-1, 28, 28)), 0, 255) img_i = montage(recon).astype(np.uint8) imgs.append(img_i) ax.imshow(img_i, cmap='gray') fig.canvas.draw() print(epoch_i, sess.run(cost, feed_dict={X: batch_X - mean_img})) gif.build_gif(imgs, saveto='ae.gif', cmap='gray') ``` ```python ipyd.Image(url='ae.gif?{}'.format(np.random.rand()), height=500, width=500) ``` <a name="convolutional-autoencoder"></a> ## Convolutional Autoencoder To get even better encodings, we can also try building a convolutional network. Why would a convolutional network perform any different to a fully connected one? Let's see what we were doing in the fully connected network. For every pixel in our input, we have a set of weights corresponding to every output neuron. Those weights are unique to each pixel. Each pixel gets its own row in the weight matrix. That really doesn't make a lot of sense, since we would guess that nearby pixels are probably not going to be so different. And we're not really encoding what's happening around that pixel, just what that one pixel is doing. In a convolutional model, we're explicitly modeling what happens around a pixel. And we're using the exact same convolutions no matter where in the image we are. But we're going to use a lot of different convolutions. Recall in session 1 we created a Gaussian and Gabor kernel and used this to convolve an image to either blur it or to accentuate edges. Armed with what you know now, you could try to train a network to learn the parameters that map an untouched image to a blurred or edge filtered version of it. What you should find is the kernel will look sort of what we built by hand. I'll leave that as an excercise for you. But in fact, that's too easy really. That's just 1 filter you would have to learn. We're going to see how we can use many convolutional filters, way more than 1, and how it will help us to encode the MNIST dataset. To begin we'll need to reset the current graph and start over. 
```python from tensorflow.python.framework.ops import reset_default_graph reset_default_graph() ``` ```python # And we'll create a placeholder in the tensorflow graph that will be able to get any number of n_feature inputs. X = tf.placeholder(tf.float32, [None, n_features]) ``` Since `X` is currently `[batch, height*width]`, we need to reshape it to a 4-D tensor to use it in a convolutional graph. Remember back to the first session that in order to perform convolution, we have to use 4-dimensional tensors describing the: `N x H x W x C` We'll reshape our input placeholder by telling the `shape` parameter to be these new dimensions. However, since our batch dimension is `None`, we cannot reshape without using the special value `-1`, which says that the size of that dimension should be computed so that the total size remains constant. Since we haven't defined the batch dimension's shape yet, we use `-1` to denote this dimension should not change size. ```python X_tensor = tf.reshape(X, [-1, 28, 28, 1]) ``` We'll now setup the first convolutional layer. Remember from Session 2 that the weight matrix for convolution should be `[height x width x input_channels x output_channels]` Think a moment about how this is different to the fully connected network. In the fully connected network, every pixel was being multiplied by its own weight to every other neuron. With a convolutional network, we use the extra dimensions to allow the same set of filters to be applied everywhere across an image. This is also known in the literature as weight sharing, since we're sharing the weights no matter where in the input we are. That's unlike the fully connected approach, which has unique weights for every pixel. What's more is after we've performed the convolution, we've retained the spatial organization of the input. We still have dimensions of height and width. That's again unlike the fully connected network which effectively shuffles or takes int account information from everywhere, not at all caring about where anything is. That can be useful or not depending on what we're trying to achieve. Often, it is something we might want to do after a series of convolutions to encode translation invariance. Don't worry about that for now. With MNIST especially we won't need to do that since all of the numbers are in the same position. Now with our tensor ready, we're going to do what we've just done with the fully connected autoencoder. Except, instead of performing matrix multiplications, we're going to create convolution operations. To do that, we'll need to decide on a few parameters including the filter size, how many convolution filters we want, and how many layers we want. I'll start with a fairly small network, and let you scale this up in your own time. ```python n_filters = [16, 16, 16] filter_sizes = [4, 4, 4] ``` Now we'll create a loop to create every layer's convolution, storing the convolution operations we create so that we can do the reverse. ```python current_input = X_tensor # notice instead of having 784 as our input features, we're going to have # just 1, corresponding to the number of channels in the image. # We're going to use convolution to find 16 filters, or 16 channels of information in each spatial location we perform convolution at. 
n_input = 1 # We're going to keep every matrix we create so let's create a list to hold them all Ws = [] shapes = [] # We'll create a for loop to create each layer: for layer_i, n_output in enumerate(n_filters): # just like in the last session, # we'll use a variable scope to help encapsulate our variables # This will simply prefix all the variables made in this scope # with the name we give it. with tf.variable_scope("encoder/layer/{}".format(layer_i)): # we'll keep track of the shapes of each layer # As we'll need these for the decoder shapes.append(current_input.get_shape().as_list()) # Create a weight matrix which will increasingly reduce # down the amount of information in the input by performing # a matrix multiplication W = tf.get_variable( name='W', shape=[ filter_sizes[layer_i], filter_sizes[layer_i], n_input, n_output], initializer=tf.random_normal_initializer(mean=0.0, stddev=0.02)) # Now we'll convolve our input by our newly created W matrix h = tf.nn.conv2d(current_input, W, strides=[1, 2, 2, 1], padding='SAME') # And then use a relu activation function on its output current_input = tf.nn.relu(h) # Finally we'll store the weight matrix so we can build the decoder. Ws.append(W) # We'll also replace n_input with the current n_output, so that on the # next iteration, our new number inputs will be correct. n_input = n_output ``` Now with our convolutional encoder built and the encoding weights stored, we'll reverse the whole process to decode everything back out to the original image. ```python # We'll first reverse the order of our weight matrices Ws.reverse() # and the shapes of each layer shapes.reverse() # and the number of filters (which is the same but could have been different) n_filters.reverse() # and append the last filter size which is our input image's number of channels n_filters = n_filters[1:] + [1] print(n_filters, filter_sizes, shapes) ``` [16, 16, 1] [4, 4, 4] [[None, 7, 7, 16], [None, 14, 14, 16], [None, 28, 28, 1]] ```python # and then loop through our convolution filters and get back our input image # we'll enumerate the shapes list to get us there for layer_i, shape in enumerate(shapes): # we'll use a variable scope to help encapsulate our variables # This will simply prefix all the variables made in this scope # with the name we give it. with tf.variable_scope("decoder/layer/{}".format(layer_i)): # Create a weight matrix which will increasingly reduce # down the amount of information in the input by performing # a matrix multiplication W = Ws[layer_i] # Now we'll convolve by the transpose of our previous convolution tensor h = tf.nn.conv2d_transpose(current_input, W, tf.pack([tf.shape(X)[0], shape[1], shape[2], shape[3]]), strides=[1, 2, 2, 1], padding='SAME') # And then use a relu activation function on its output current_input = tf.nn.relu(h) ``` Now we have the reconstruction through the network: ```python Y = current_input Y = tf.reshape(Y, [-1, n_features]) ``` We can measure the cost and train exactly like before with the fully connected network: ```python cost = tf.reduce_mean(tf.reduce_mean(tf.squared_difference(X, Y), 1)) learning_rate = 0.001 # pass learning rate and cost to optimize optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost) # Session to manage vars/train sess = tf.Session() sess.run(tf.global_variables_initializer()) # Some parameters for training batch_size = 100 n_epochs = 5 # We'll try to reconstruct the same first 100 images and show how # The network does over the course of training. 
examples = ds.X[:100]

# We'll store the reconstructions in a list
imgs = []
fig, ax = plt.subplots(1, 1)
for epoch_i in range(n_epochs):
    for batch_X, _ in ds.train.next_batch():
        sess.run(optimizer, feed_dict={X: batch_X - mean_img})
    recon = sess.run(Y, feed_dict={X: examples - mean_img})
    recon = np.clip((recon + mean_img).reshape((-1, 28, 28)), 0, 255)
    img_i = montage(recon).astype(np.uint8)
    imgs.append(img_i)
    ax.imshow(img_i, cmap='gray')
    fig.canvas.draw()
    print(epoch_i, sess.run(cost, feed_dict={X: batch_X - mean_img}))
gif.build_gif(imgs, saveto='conv-ae.gif', cmap='gray')
```

```python
ipyd.Image(url='conv-ae.gif?{}'.format(np.random.rand()),
           height=500, width=500)
```

<a name="denoising-autoencoder"></a>
## Denoising Autoencoder

The denoising autoencoder is a very simple extension to an autoencoder. Instead of seeing the original input, the network is fed a corrupted version of it, for instance corrupted by masked noise, but the reconstruction loss is still measured against the original uncorrupted image. What this does is let the model try to interpret occluded or missing parts of the thing it is reasoning about. For many models, it makes sense that not every datapoint in an input is necessary to understand what is going on. Denoising autoencoders try to enforce that, and as a result, the encodings at the middle-most layer are often far more representative of the actual classes of different objects.

In the resources section, you'll see that I've included a general framework autoencoder allowing you to use either a fully connected or convolutional autoencoder, and whether or not to include denoising. If you're interested in the mechanics of how this works, I encourage you to have a look at the code.

<a name="variational-autoencoders"></a>
## Variational Autoencoders

A variational autoencoder extends the traditional autoencoder by using an additional layer called the variational layer. It is actually two networks that are cleverly connected using a simple reparameterization trick, to help the gradient flow through both networks during backpropagation, allowing both to be optimized. We don't have enough time to get into the details, but I'll try to quickly explain: it tries to optimize the likelihood that a particular distribution would create an image, rather than simply optimizing the L2 loss at the end of the network. Or put another way, it hopes that there is some distribution that the image encodings could be described by. This is a bit tricky to grasp, so don't worry if you don't understand the details.

The major difference to hone in on is that instead of optimizing distance in the input space of pixel to pixel distance, which is actually quite arbitrary if you think about it... why would we care about the exact pixels being the same? Human vision would not care for most cases: if there was a slight translation of our image, then the distance could be very high, but we would never be able to tell the difference. So intuitively, measuring error based on raw pixel to pixel distance is not such a great approach.

Instead of relying on raw pixel differences, the variational autoencoder tries to optimize two networks. One which says that given my pixels, I am pretty sure I can encode them to the parameters of some well known distribution, like a set of Gaussians, instead of some arbitrary density of values.
And then I can optimize the latent space, by saying that particular distribution should be able to represent my entire dataset, and I try to optimize the likelihood that it will create the images I feed through a network. So distance is somehow encoded in this latent space. Of course I appreciate that this is a difficult concept, so forgive me for not being able to expand on it in more detail. But to make up for the lack of time and explanation, I've included this model under the resources section for you to play with! Just like the "vanilla" autoencoder, this one supports fully connected, convolutional, and denoising models.

This model performs so much better than the vanilla autoencoder. In fact, it performs so well that I can even manage to encode the majority of MNIST into 2 values. The following visualization demonstrates the learning of a variational autoencoder over time.

<mnist visualization>

There are of course a lot more interesting applications of such a model. You could for instance, try encoding a more interesting dataset, such as CIFAR which you'll find a wrapper for in the libs/datasets module.

<TODO: produce GIF visualization madness>

Or the celeb faces dataset:

<celeb dataset>

Or you could try encoding an entire movie. We tried it with the copyleft movie, "Sita Sings The Blues". Every 2 seconds, we stored an image of this movie, and then fed all of these images to a deep variational autoencoder. This is the result.

<show sita sings the blues training images>

And I'm sure we can get closer with deeper nets and more training time. But notice how in both celeb faces and sita sings the blues, the decoding is really blurred. That is because of the assumption of the underlying representational space. We're saying the latent space must be modeled as a gaussian, and those factors must be distributed as a gaussian. This enforces a sort of discretization of my representation, enforced by the noise parameter of the gaussian. In the last session, we'll see how we can avoid this sort of blurred representation and get even better decodings using a generative adversarial network.

For now, consider the applications that this method opens up. Once you have an encoding of a movie, or image dataset, you are able to do some very interesting things. You have effectively stored all the representations of that movie, although it's not perfect of course. But, you could for instance, see how another movie would be interpreted by the same network. That's similar to what Terrance Broad did for his project on reconstructing blade runner and a scanner darkly, though he made use of both the variational autoencoder and the generative adversarial network. We're going to look at that network in more detail in the last session.

We'll also look at how to properly handle very large datasets like celeb faces or the one used here to create the sita sings the blues autoencoder. Taking every 60th frame of Sita Sings The Blues gives you about 300k images. And that's a lot of data to try and load in all at once. We had to size it down considerably, and make use of what's called a tensorflow input pipeline. I've included all the code for training this network, which took about 1 day on a fairly powerful machine, but I will not get into the details of the image pipeline bits until session 5 when we look at generative adversarial networks. I'm delaying this because we'll need to learn a few things along the way before we can build such a network.
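Since we won't walk through the variational model's code here, below is a minimal sketch of just the reparameterization trick mentioned above. All names and sizes are hypothetical, and it is shown for illustration rather than as a cell to run inside this notebook's graph; the full implementation is the one included in the resources/libs module, not this.

```python
# Sketch only: the reparameterization trick at the middle of a variational autoencoder.
# Hypothetical names; assumes a flattened MNIST-style input like before.
X_vae = tf.placeholder(tf.float32, [None, 784])
n_latent = 2

# The encoder predicts the parameters of a gaussian for every input:
W_mu = tf.get_variable('W_mu', shape=[784, n_latent],
                       initializer=tf.random_normal_initializer(mean=0.0, stddev=0.02))
W_log_var = tf.get_variable('W_log_var', shape=[784, n_latent],
                            initializer=tf.random_normal_initializer(mean=0.0, stddev=0.02))
z_mu = tf.matmul(X_vae, W_mu)
z_log_var = tf.matmul(X_vae, W_log_var)

# Sample z = mu + sigma * epsilon with epsilon ~ N(0, 1). Because the randomness
# enters only through epsilon, gradients can flow back through mu and sigma
# into the encoder during backpropagation.
epsilon = tf.random_normal(tf.shape(z_mu))
z = z_mu + tf.exp(0.5 * z_log_var) * epsilon

# The KL term that pulls the encoded distribution towards a standard gaussian;
# it gets added to the decoder's reconstruction loss (decoder not shown here).
kl = -0.5 * tf.reduce_sum(1.0 + z_log_var - tf.square(z_mu) - tf.exp(z_log_var), 1)
```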
<a name="predicting-image-labels"></a> # Predicting Image Labels We've just seen a variety of types of autoencoders and how they are capable of compressing information down to its inner most layer while still being able to retain most of the interesting details. Considering that the CelebNet dataset was nearly 200 thousand images of 64 x 64 x 3 pixels, and we're able to express those with just an inner layer of 50 values, that's just magic basically. Magic. Okay, let's move on now to a different type of learning often called supervised learning. Unlike what we just did, which is work with a set of data and not have any idea what that data should be *labeled* as, we're going to explicitly tell the network what we want it to be labeled by saying what the network should output for a given input. In the previous cause, we just had a set of `Xs`, our images. Now, we're going to have `Xs` and `Ys` given to us, and use the `Xs` to try and output the `Ys`. With MNIST, the outputs of each image are simply what numbers are drawn in the input image. The wrapper for grabbing this dataset from the libs module takes an additional parameter which I didn't talk about called `one_hot`. ```python from libs import datasets # ds = datasets.MNIST(one_hot=True) ``` To see what this is doing, let's compare setting it to false versus true: ```python ds = datasets.MNIST(one_hot=False) # let's look at the first label print(ds.Y[0]) # okay and what does the input look like plt.imshow(np.reshape(ds.X[0], (28, 28)), cmap='gray') # great it is just the label of the image ``` ```python plt.figure() # Let's look at the next one just to be sure print(ds.Y[1]) # Yea the same idea plt.imshow(np.reshape(ds.X[1], (28, 28)), cmap='gray') ``` And now let's look at what the one hot version looks like: ```python ds = datasets.MNIST(one_hot=True) plt.figure() plt.imshow(np.reshape(ds.X[0], (28, 28)), cmap='gray') print(ds.Y[0]) # array([ 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.]) # Woah a bunch more numbers. 10 to be exact, which is also the number # of different labels in the dataset. plt.imshow(np.reshape(ds.X[1], (28, 28)), cmap='gray') print(ds.Y[1]) # array([ 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.]) ``` So instead of have a number from 0-9, we have 10 numbers corresponding to the digits, 0-9, and each value is either 0 or 1. Whichever digit the image represents is the one that is 1. To summarize, we have all of the images of the dataset stored as: `n_observations` x `n_features` tensor (n-dim array) ```python print(ds.X.shape) ``` (70000, 784) And labels stored as `n_observations` x `n_labels` where each observation is a one-hot vector, where only one element is 1 indicating which class or label it is. ```python print(ds.Y.shape) print(ds.Y[0]) ``` (70000, 10) [ 0. 0. 0. 0. 0. 0. 0. 1. 0. 0.] <a name="one-hot-encoding"></a> ## One-Hot Encoding Remember in the last session, we saw how to build a network capable of taking 2 inputs representing the row and column of an image, and predicting 3 outputs, the red, green, and blue colors. Just like in our unsupervised model, instead of having 2 inputs, we'll now have 784 inputs, the brightness of every pixel in our image. And instead of 3 outputs, like in our painting network from last session, or the 784 outputs we had in our unsupervised MNIST network, we'll now have 10 outputs representing the one-hot encoding of its label. So why don't we just have 1 output? A number from 0-9? Wouldn't having 10 different outputs instead of just 1 be harder to learn? 
Consider how we normally train the network. We have to give it a cost which it will try to minimize. What could our cost be if our output was just a single number, 0-9? We would still have the true label, and the predicted label. Could we just take the subtraction of the two values? e.g. the network predicted 0, but the image was really the number 8. Okay so then our distance could be:

```python
# cost = tf.reduce_sum(tf.abs(y_pred - y_true))
```

But in this example, the cost would be 8. If the image was a 4, and the network predicted a 0 again, the cost would be 4... but isn't the network still just as wrong, not half as much as when the image was an 8? In a one-hot encoding, the cost would be 1 for both, meaning they are both just as wrong. So we're able to better measure the cost, by separating each class's label into its own dimension.

<a name="using-regression-for-classification"></a>
## Using Regression for Classification

The network we build will be trained to output values between 0 and 1. They won't output exactly a 0 or 1. But rather, they are able to produce any value. 0, 0.1, 0.2, ... and that means the networks we've been using are actually performing regression. In regression, the output is "continuous", rather than "discrete". The difference is this: a *discrete* output means the network can only output one of a few things. Like, 0, 1, 2, or 3, and that's it. But a *continuous* output means it can output any real number.

In order to perform what's called classification, we're simply going to look at whichever value is the highest in our one hot encoding. In order to do that a little better, we're actually going to interpret our one hot encodings as probabilities by scaling the total output by their sum. What this does is allow us to understand that as we grow more confident in one prediction, we should grow less confident in all other predictions. We only have so much certainty to go around, enough to add up to 1. If we think the image might also be the number 1, then we lose some certainty of it being the number 0.

It turns out there is a better cost function than simply measuring the distance between two vectors when they are probabilities. It's called cross entropy:

\begin{align}
\Large{H(x) = -\sum{y_{\text{t}}(x) * \log(y_{\text{p}}(x))}}
\end{align}

What this equation does is measure the similarity of our prediction with our true distribution, by exponentially increasing the error whenever our prediction gets closer to 1 when it should be 0, and similarly by exponentially increasing the error whenever our prediction gets closer to 0, when it should be 1. I won't go into more detail here, but just know that we'll be using this measure instead of a normal distance measure.

<a name="fully-connected-network"></a>
## Fully Connected Network

### Defining the Network

Let's see how our one hot encoding and our new cost function will come into play. We'll create our network for predicting image classes in pretty much the same way we've created previous networks:

We will have as input to the network 28 x 28 values.

```python
import tensorflow as tf
from libs import datasets
ds = datasets.MNIST(split=[0.8, 0.1, 0.1])
n_input = 28 * 28
```

    Extracting MNIST_data/train-images-idx3-ubyte.gz
    Extracting MNIST_data/train-labels-idx1-ubyte.gz
    Extracting MNIST_data/t10k-images-idx3-ubyte.gz
    Extracting MNIST_data/t10k-labels-idx1-ubyte.gz

As output, we have our 10 one-hot-encoding values

```python
n_output = 10
```

We're going to create placeholders for our tensorflow graph.
We're going to set the first dimension to `None`. Remember from our unsupervised model, this is just something special for placeholders which tells tensorflow "let this dimension be any possible value". 1, 5, 100, 1000, it doesn't matter. Since we're going to pass our entire dataset in batches we'll need this to be say 100 images at a time. But we'd also like to be able to send in only 1 image and see what the prediction of the network is. That's why we let this dimension be flexible. ```python X = tf.placeholder(tf.float32, [None, n_input]) ``` For the output, we'll have `None` again, since for every input, we'll have the same number of images that have outputs. ```python Y = tf.placeholder(tf.float32, [None, n_output]) ``` Now we'll connect our input to the output with a linear layer. Instead of `relu`, we're going to use `softmax`. This will perform our exponential scaling of the outputs and make sure the output sums to 1, making it a probability. ```python # We'll use the linear layer we created in the last session, which I've stored in the libs file: # NOTE: The lecture used an older version of this function which had a slightly different definition. from libs import utils Y_pred, W = utils.linear( x=X, n_output=n_output, activation=tf.nn.softmax, name='layer1') ``` And then we write our loss function as the cross entropy. And then we'll give our optimizer the `cross_entropy` measure just like we would with GradientDescent. The formula for cross entropy is: \begin{align} \Large{H(x) = -\sum{\text{Y}_{\text{true}} * log(\text{Y}_{pred})}} \end{align} ```python # We add 1e-12 because the log is undefined at 0. cross_entropy = -tf.reduce_sum(Y * tf.log(Y_pred + 1e-12)) optimizer = tf.train.AdamOptimizer(0.001).minimize(cross_entropy) ``` To determine the correct class from our regression output, we have to take the maximum index. ```python predicted_y = tf.argmax(Y_pred, 1) actual_y = tf.argmax(Y, 1) ``` We can then measure the accuracy by seeing whenever these are equal. Note, this is just for us to see, and is not at all used to "train" the network! ```python correct_prediction = tf.equal(predicted_y, actual_y) accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float")) ``` ### Training the Network The rest of the code will be exactly the same as before. We chunk the training dataset into `batch_size` chunks, and let these images help train the network over a number of iterations. ```python sess = tf.Session() sess.run(tf.global_variables_initializer()) # Now actually do some training: batch_size = 50 n_epochs = 5 for epoch_i in range(n_epochs): for batch_xs, batch_ys in ds.train.next_batch(): sess.run(optimizer, feed_dict={ X: batch_xs, Y: batch_ys }) valid = ds.valid print(sess.run(accuracy, feed_dict={ X: valid.images, Y: valid.labels })) # Print final test accuracy: test = ds.test print(sess.run(accuracy, feed_dict={ X: test.images, Y: test.labels })) ``` 0.893429 0.907 0.913286 0.916286 0.919143 0.917143 What we should see is the accuracy being printed after each "epoch", or after every run over the entire dataset. Since we're using batches, we use the notion of an "epoch" to denote whenever we've gone through the entire dataset. <a name="inspecting-the-network"></a> ### Inspecting the Trained Network Let's try and now inspect *how* the network is accomplishing this task. We know that our network is a single matrix multiplication of our 784 pixel values. The weight matrix, `W`, should therefore have 784 rows. As outputs, it has 10 values. 
So the matrix is composed in the `linear` function as `n_input` x `n_output` values. So the matrix is 784 rows x 10 columns. <TODO: graphic w/ wacom showing network and matrix multiplication and pulling out single neuron/column> In order to get this matrix, we could have had our `linear` function return the `tf.Tensor`. But since everything is part of the tensorflow graph, and we've started using nice names for all of our operations, we can actually find this tensor using tensorflow: ```python # We first get the graph that we used to compute the network g = tf.get_default_graph() # And can inspect everything inside of it [op.name for op in g.get_operations()] ``` ['Placeholder', 'Reshape/shape', 'Reshape', 'encoder/layer/0/W', 'encoder/layer/0/W/Initializer/random_normal/shape', 'encoder/layer/0/W/Initializer/random_normal/mean', 'encoder/layer/0/W/Initializer/random_normal/stddev', 'encoder/layer/0/W/Initializer/random_normal/RandomStandardNormal', 'encoder/layer/0/W/Initializer/random_normal/mul', 'encoder/layer/0/W/Initializer/random_normal', 'encoder/layer/0/W/Assign', 'encoder/layer/0/W/read', 'encoder/layer/0/Conv2D', 'encoder/layer/0/Relu', 'encoder/layer/1/W', 'encoder/layer/1/W/Initializer/random_normal/shape', 'encoder/layer/1/W/Initializer/random_normal/mean', 'encoder/layer/1/W/Initializer/random_normal/stddev', 'encoder/layer/1/W/Initializer/random_normal/RandomStandardNormal', 'encoder/layer/1/W/Initializer/random_normal/mul', 'encoder/layer/1/W/Initializer/random_normal', 'encoder/layer/1/W/Assign', 'encoder/layer/1/W/read', 'encoder/layer/1/Conv2D', 'encoder/layer/1/Relu', 'encoder/layer/2/W', 'encoder/layer/2/W/Initializer/random_normal/shape', 'encoder/layer/2/W/Initializer/random_normal/mean', 'encoder/layer/2/W/Initializer/random_normal/stddev', 'encoder/layer/2/W/Initializer/random_normal/RandomStandardNormal', 'encoder/layer/2/W/Initializer/random_normal/mul', 'encoder/layer/2/W/Initializer/random_normal', 'encoder/layer/2/W/Assign', 'encoder/layer/2/W/read', 'encoder/layer/2/Conv2D', 'encoder/layer/2/Relu', 'decoder/layer/0/Shape', 'decoder/layer/0/strided_slice/stack', 'decoder/layer/0/strided_slice/stack_1', 'decoder/layer/0/strided_slice/stack_2', 'decoder/layer/0/strided_slice', 'decoder/layer/0/pack/1', 'decoder/layer/0/pack/2', 'decoder/layer/0/pack/3', 'decoder/layer/0/pack', 'decoder/layer/0/conv2d_transpose', 'decoder/layer/0/Relu', 'decoder/layer/1/Shape', 'decoder/layer/1/strided_slice/stack', 'decoder/layer/1/strided_slice/stack_1', 'decoder/layer/1/strided_slice/stack_2', 'decoder/layer/1/strided_slice', 'decoder/layer/1/pack/1', 'decoder/layer/1/pack/2', 'decoder/layer/1/pack/3', 'decoder/layer/1/pack', 'decoder/layer/1/conv2d_transpose', 'decoder/layer/1/Relu', 'decoder/layer/2/Shape', 'decoder/layer/2/strided_slice/stack', 'decoder/layer/2/strided_slice/stack_1', 'decoder/layer/2/strided_slice/stack_2', 'decoder/layer/2/strided_slice', 'decoder/layer/2/pack/1', 'decoder/layer/2/pack/2', 'decoder/layer/2/pack/3', 'decoder/layer/2/pack', 'decoder/layer/2/conv2d_transpose', 'decoder/layer/2/Relu', 'Reshape_1/shape', 'Reshape_1', 'SquaredDifference', 'Mean/reduction_indices', 'Mean', 'Const', 'Mean_1', 'gradients/Shape', 'gradients/Const', 'gradients/Fill', 'gradients/Mean_1_grad/Reshape/shape', 'gradients/Mean_1_grad/Reshape', 'gradients/Mean_1_grad/Shape', 'gradients/Mean_1_grad/Tile', 'gradients/Mean_1_grad/Shape_1', 'gradients/Mean_1_grad/Shape_2', 'gradients/Mean_1_grad/Const', 'gradients/Mean_1_grad/Prod', 'gradients/Mean_1_grad/Const_1', 
'gradients/Mean_1_grad/Prod_1', 'gradients/Mean_1_grad/Maximum/y', 'gradients/Mean_1_grad/Maximum', 'gradients/Mean_1_grad/floordiv', 'gradients/Mean_1_grad/Cast', 'gradients/Mean_1_grad/truediv', 'gradients/Mean_grad/Shape', 'gradients/Mean_grad/Size', 'gradients/Mean_grad/add', 'gradients/Mean_grad/mod', 'gradients/Mean_grad/Shape_1', 'gradients/Mean_grad/range/start', 'gradients/Mean_grad/range/delta', 'gradients/Mean_grad/range', 'gradients/Mean_grad/Fill/value', 'gradients/Mean_grad/Fill', 'gradients/Mean_grad/DynamicStitch', 'gradients/Mean_grad/Maximum/y', 'gradients/Mean_grad/Maximum', 'gradients/Mean_grad/floordiv', 'gradients/Mean_grad/Reshape', 'gradients/Mean_grad/Tile', 'gradients/Mean_grad/Shape_2', 'gradients/Mean_grad/Shape_3', 'gradients/Mean_grad/Const', 'gradients/Mean_grad/Prod', 'gradients/Mean_grad/Const_1', 'gradients/Mean_grad/Prod_1', 'gradients/Mean_grad/Maximum_1/y', 'gradients/Mean_grad/Maximum_1', 'gradients/Mean_grad/floordiv_1', 'gradients/Mean_grad/Cast', 'gradients/Mean_grad/truediv', 'gradients/SquaredDifference_grad/Shape', 'gradients/SquaredDifference_grad/Shape_1', 'gradients/SquaredDifference_grad/BroadcastGradientArgs', 'gradients/SquaredDifference_grad/scalar', 'gradients/SquaredDifference_grad/mul', 'gradients/SquaredDifference_grad/sub', 'gradients/SquaredDifference_grad/mul_1', 'gradients/SquaredDifference_grad/Sum', 'gradients/SquaredDifference_grad/Reshape', 'gradients/SquaredDifference_grad/Sum_1', 'gradients/SquaredDifference_grad/Reshape_1', 'gradients/SquaredDifference_grad/Neg', 'gradients/SquaredDifference_grad/tuple/group_deps', 'gradients/SquaredDifference_grad/tuple/control_dependency', 'gradients/SquaredDifference_grad/tuple/control_dependency_1', 'gradients/Reshape_1_grad/Shape', 'gradients/Reshape_1_grad/Reshape', 'gradients/decoder/layer/2/Relu_grad/ReluGrad', 'gradients/decoder/layer/2/conv2d_transpose_grad/Shape', 'gradients/decoder/layer/2/conv2d_transpose_grad/Conv2DBackpropFilter', 'gradients/decoder/layer/2/conv2d_transpose_grad/Conv2D', 'gradients/decoder/layer/2/conv2d_transpose_grad/tuple/group_deps', 'gradients/decoder/layer/2/conv2d_transpose_grad/tuple/control_dependency', 'gradients/decoder/layer/2/conv2d_transpose_grad/tuple/control_dependency_1', 'gradients/decoder/layer/1/Relu_grad/ReluGrad', 'gradients/decoder/layer/1/conv2d_transpose_grad/Shape', 'gradients/decoder/layer/1/conv2d_transpose_grad/Conv2DBackpropFilter', 'gradients/decoder/layer/1/conv2d_transpose_grad/Conv2D', 'gradients/decoder/layer/1/conv2d_transpose_grad/tuple/group_deps', 'gradients/decoder/layer/1/conv2d_transpose_grad/tuple/control_dependency', 'gradients/decoder/layer/1/conv2d_transpose_grad/tuple/control_dependency_1', 'gradients/decoder/layer/0/Relu_grad/ReluGrad', 'gradients/decoder/layer/0/conv2d_transpose_grad/Shape', 'gradients/decoder/layer/0/conv2d_transpose_grad/Conv2DBackpropFilter', 'gradients/decoder/layer/0/conv2d_transpose_grad/Conv2D', 'gradients/decoder/layer/0/conv2d_transpose_grad/tuple/group_deps', 'gradients/decoder/layer/0/conv2d_transpose_grad/tuple/control_dependency', 'gradients/decoder/layer/0/conv2d_transpose_grad/tuple/control_dependency_1', 'gradients/encoder/layer/2/Relu_grad/ReluGrad', 'gradients/encoder/layer/2/Conv2D_grad/Shape', 'gradients/encoder/layer/2/Conv2D_grad/Conv2DBackpropInput', 'gradients/encoder/layer/2/Conv2D_grad/Shape_1', 'gradients/encoder/layer/2/Conv2D_grad/Conv2DBackpropFilter', 'gradients/encoder/layer/2/Conv2D_grad/tuple/group_deps', 
'gradients/encoder/layer/2/Conv2D_grad/tuple/control_dependency', 'gradients/encoder/layer/2/Conv2D_grad/tuple/control_dependency_1', 'gradients/encoder/layer/1/Relu_grad/ReluGrad', 'gradients/AddN', 'gradients/encoder/layer/1/Conv2D_grad/Shape', 'gradients/encoder/layer/1/Conv2D_grad/Conv2DBackpropInput', 'gradients/encoder/layer/1/Conv2D_grad/Shape_1', 'gradients/encoder/layer/1/Conv2D_grad/Conv2DBackpropFilter', 'gradients/encoder/layer/1/Conv2D_grad/tuple/group_deps', 'gradients/encoder/layer/1/Conv2D_grad/tuple/control_dependency', 'gradients/encoder/layer/1/Conv2D_grad/tuple/control_dependency_1', 'gradients/encoder/layer/0/Relu_grad/ReluGrad', 'gradients/AddN_1', 'gradients/encoder/layer/0/Conv2D_grad/Shape', 'gradients/encoder/layer/0/Conv2D_grad/Conv2DBackpropInput', 'gradients/encoder/layer/0/Conv2D_grad/Shape_1', 'gradients/encoder/layer/0/Conv2D_grad/Conv2DBackpropFilter', 'gradients/encoder/layer/0/Conv2D_grad/tuple/group_deps', 'gradients/encoder/layer/0/Conv2D_grad/tuple/control_dependency', 'gradients/encoder/layer/0/Conv2D_grad/tuple/control_dependency_1', 'gradients/AddN_2', 'beta1_power/initial_value', 'beta1_power', 'beta1_power/Assign', 'beta1_power/read', 'beta2_power/initial_value', 'beta2_power', 'beta2_power/Assign', 'beta2_power/read', 'zeros', 'encoder/layer/0/W/Adam', 'encoder/layer/0/W/Adam/Assign', 'encoder/layer/0/W/Adam/read', 'zeros_1', 'encoder/layer/0/W/Adam_1', 'encoder/layer/0/W/Adam_1/Assign', 'encoder/layer/0/W/Adam_1/read', 'zeros_2', 'encoder/layer/1/W/Adam', 'encoder/layer/1/W/Adam/Assign', 'encoder/layer/1/W/Adam/read', 'zeros_3', 'encoder/layer/1/W/Adam_1', 'encoder/layer/1/W/Adam_1/Assign', 'encoder/layer/1/W/Adam_1/read', 'zeros_4', 'encoder/layer/2/W/Adam', 'encoder/layer/2/W/Adam/Assign', 'encoder/layer/2/W/Adam/read', 'zeros_5', 'encoder/layer/2/W/Adam_1', 'encoder/layer/2/W/Adam_1/Assign', 'encoder/layer/2/W/Adam_1/read', 'Adam/learning_rate', 'Adam/beta1', 'Adam/beta2', 'Adam/epsilon', 'Adam/update_encoder/layer/0/W/ApplyAdam', 'Adam/update_encoder/layer/1/W/ApplyAdam', 'Adam/update_encoder/layer/2/W/ApplyAdam', 'Adam/mul', 'Adam/Assign', 'Adam/mul_1', 'Adam/Assign_1', 'Adam', 'init', 'Placeholder_1', 'Placeholder_2', 'layer1/W', 'layer1/W/Initializer/random_uniform/shape', 'layer1/W/Initializer/random_uniform/min', 'layer1/W/Initializer/random_uniform/max', 'layer1/W/Initializer/random_uniform/RandomUniform', 'layer1/W/Initializer/random_uniform/sub', 'layer1/W/Initializer/random_uniform/mul', 'layer1/W/Initializer/random_uniform', 'layer1/W/Assign', 'layer1/W/read', 'layer1/b', 'layer1/b/Initializer/Const', 'layer1/b/Assign', 'layer1/b/read', 'layer1/MatMul', 'layer1/h', 'layer1/Softmax', 'add/y', 'add', 'Log', 'mul', 'Const_1', 'Sum', 'Neg', 'gradients_1/Shape', 'gradients_1/Const', 'gradients_1/Fill', 'gradients_1/Neg_grad/Neg', 'gradients_1/Sum_grad/Reshape/shape', 'gradients_1/Sum_grad/Reshape', 'gradients_1/Sum_grad/Shape', 'gradients_1/Sum_grad/Tile', 'gradients_1/mul_grad/Shape', 'gradients_1/mul_grad/Shape_1', 'gradients_1/mul_grad/BroadcastGradientArgs', 'gradients_1/mul_grad/mul', 'gradients_1/mul_grad/Sum', 'gradients_1/mul_grad/Reshape', 'gradients_1/mul_grad/mul_1', 'gradients_1/mul_grad/Sum_1', 'gradients_1/mul_grad/Reshape_1', 'gradients_1/mul_grad/tuple/group_deps', 'gradients_1/mul_grad/tuple/control_dependency', 'gradients_1/mul_grad/tuple/control_dependency_1', 'gradients_1/Log_grad/Reciprocal', 'gradients_1/Log_grad/mul', 'gradients_1/add_grad/Shape', 'gradients_1/add_grad/Shape_1', 
'gradients_1/add_grad/BroadcastGradientArgs', 'gradients_1/add_grad/Sum', 'gradients_1/add_grad/Reshape', 'gradients_1/add_grad/Sum_1', 'gradients_1/add_grad/Reshape_1', 'gradients_1/add_grad/tuple/group_deps', 'gradients_1/add_grad/tuple/control_dependency', 'gradients_1/add_grad/tuple/control_dependency_1', 'gradients_1/layer1/Softmax_grad/mul', 'gradients_1/layer1/Softmax_grad/Sum/reduction_indices', 'gradients_1/layer1/Softmax_grad/Sum', 'gradients_1/layer1/Softmax_grad/Reshape/shape', 'gradients_1/layer1/Softmax_grad/Reshape', 'gradients_1/layer1/Softmax_grad/sub', 'gradients_1/layer1/Softmax_grad/mul_1', 'gradients_1/layer1/h_grad/BiasAddGrad', 'gradients_1/layer1/h_grad/tuple/group_deps', 'gradients_1/layer1/h_grad/tuple/control_dependency', 'gradients_1/layer1/h_grad/tuple/control_dependency_1', 'gradients_1/layer1/MatMul_grad/MatMul', 'gradients_1/layer1/MatMul_grad/MatMul_1', 'gradients_1/layer1/MatMul_grad/tuple/group_deps', 'gradients_1/layer1/MatMul_grad/tuple/control_dependency', 'gradients_1/layer1/MatMul_grad/tuple/control_dependency_1', 'beta1_power_1/initial_value', 'beta1_power_1', 'beta1_power_1/Assign', 'beta1_power_1/read', 'beta2_power_1/initial_value', 'beta2_power_1', 'beta2_power_1/Assign', 'beta2_power_1/read', 'zeros_6', 'layer1/W/Adam', 'layer1/W/Adam/Assign', 'layer1/W/Adam/read', 'zeros_7', 'layer1/W/Adam_1', 'layer1/W/Adam_1/Assign', 'layer1/W/Adam_1/read', 'zeros_8', 'layer1/b/Adam', 'layer1/b/Adam/Assign', 'layer1/b/Adam/read', 'zeros_9', 'layer1/b/Adam_1', 'layer1/b/Adam_1/Assign', 'layer1/b/Adam_1/read', 'Adam_1/learning_rate', 'Adam_1/beta1', 'Adam_1/beta2', 'Adam_1/epsilon', 'Adam_1/update_layer1/W/ApplyAdam', 'Adam_1/update_layer1/b/ApplyAdam', 'Adam_1/mul', 'Adam_1/Assign', 'Adam_1/mul_1', 'Adam_1/Assign_1', 'Adam_1', 'ArgMax/dimension', 'ArgMax', 'ArgMax_1/dimension', 'ArgMax_1', 'Equal', 'Cast', 'Const_2', 'Mean_2', 'init_1'] Looking at the names of the operations, we see there is one `linear/W`. But this is the `tf.Operation`. Not the `tf.Tensor`. The tensor is the result of the operation. To get the result of the operation, we simply add ":0" to the name of the operation: ```python W = g.get_tensor_by_name('layer1/W:0') ``` We can use the existing session to compute the current value of this tensor: ```python W_arr = np.array(W.eval(session=sess)) print(W_arr.shape) ``` (784, 10) And now we have our tensor! Let's try visualizing every neuron, or every column of this matrix: ```python fig, ax = plt.subplots(1, 10, figsize=(20, 3)) for col_i in range(10): ax[col_i].imshow(W_arr[:, col_i].reshape((28, 28)), cmap='coolwarm') ``` We're going to use the `coolwarm` color map, which will use "cool" values, or blue-ish colors for low values. And "warm" colors, red, basically, for high values. So what we begin to see is that there is a weighting of all the input values, where pixels that are likely to describe that number are being weighted high, and pixels that are not likely to describe that number are being weighted low. By summing all of these multiplications together, the network is able to begin to predict what number is in the image. This is not a very good network though, and the representations it learns could still do a much better job. We were only right about 93% of the time according to our accuracy. State of the art models will get about 99.9% accuracy. <a name="convolutional-networks"></a> ## Convolutional Networks To get better performance, we can build a convolutional network. 
We've already seen how to create a convolutional network with our unsupervised model. We're going to make the same modifications here to help us predict the digit labels in MNIST. ### Defining the Network I'll first reset the current graph, so we can build a new one. We'll use tensorflow's nice helper function for doing this. ```python from tensorflow.python.framework.ops import reset_default_graph reset_default_graph() ``` And just to confirm, let's see what's in our graph: ```python # We first get the graph that we used to compute the network g = tf.get_default_graph() # And can inspect everything inside of it [op.name for op in g.get_operations()] ``` [] Great. Empty. Now let's get our dataset, and create some placeholders like before: ```python # We'll have placeholders just like before which we'll fill in later. ds = datasets.MNIST(one_hot=True, split=[0.8, 0.1, 0.1]) X = tf.placeholder(tf.float32, [None, 784]) Y = tf.placeholder(tf.float32, [None, 10]) ``` Extracting MNIST_data/train-images-idx3-ubyte.gz Extracting MNIST_data/train-labels-idx1-ubyte.gz Extracting MNIST_data/t10k-images-idx3-ubyte.gz Extracting MNIST_data/t10k-labels-idx1-ubyte.gz Since `X` is currently `[batch, height*width]`, we need to reshape to a 4-D tensor to use it in a convolutional graph. Remember, in order to perform convolution, we have to use 4-dimensional tensors describing the: `N x H x W x C` We'll reshape our input placeholder by telling the `shape` parameter to be these new dimensions and we'll use `-1` to denote this dimension should not change size. ```python X_tensor = tf.reshape(X, [-1, 28, 28, 1]) ``` We'll now setup the first convolutional layer. Remember that the weight matrix for convolution should be `[height x width x input_channels x output_channels]` Let's create 32 filters. That means every location in the image, depending on the stride I set when we perform the convolution, will be filtered by this many different kernels. In session 1, we convolved our image with just 2 different types of kernels. Now, we're going to let the computer try to find out what 32 filters helps it map the input to our desired output via our training signal. ```python filter_size = 5 n_filters_in = 1 n_filters_out = 32 W_1 = tf.get_variable( name='W', shape=[filter_size, filter_size, n_filters_in, n_filters_out], initializer=tf.random_normal_initializer()) ``` Bias is always `[output_channels]` in size. ```python b_1 = tf.get_variable( name='b', shape=[n_filters_out], initializer=tf.constant_initializer()) ``` Now we can build a graph which does the first layer of convolution: We define our stride as `batch` x `height` x `width` x `channels`. This has the effect of resampling the image down to half of the size. ```python h_1 = tf.nn.relu( tf.nn.bias_add( tf.nn.conv2d(input=X_tensor, filter=W_1, strides=[1, 2, 2, 1], padding='SAME'), b_1)) ``` And just like the first layer, add additional layers to create a deep net. 
```python n_filters_in = 32 n_filters_out = 64 W_2 = tf.get_variable( name='W2', shape=[filter_size, filter_size, n_filters_in, n_filters_out], initializer=tf.random_normal_initializer()) b_2 = tf.get_variable( name='b2', shape=[n_filters_out], initializer=tf.constant_initializer()) h_2 = tf.nn.relu( tf.nn.bias_add( tf.nn.conv2d(input=h_1, filter=W_2, strides=[1, 2, 2, 1], padding='SAME'), b_2)) ``` 4d -> 2d ```python # We'll now reshape so we can connect to a fully-connected/linear layer: h_2_flat = tf.reshape(h_2, [-1, 7 * 7 * n_filters_out]) ``` Create a fully-connected layer: ```python # NOTE: This uses a slightly different version of the linear function than the lecture! h_3, W = utils.linear(h_2_flat, 128, activation=tf.nn.relu, name='fc_1') ``` And one last fully-connected layer which will give us the correct number of outputs, and use a softmax to expoentially scale the outputs and convert them to a probability: ```python # NOTE: This uses a slightly different version of the linear function than the lecture! Y_pred, W = utils.linear(h_3, n_output, activation=tf.nn.softmax, name='fc_2') ``` <TODO: Draw as graphical representation> ### Training the Network The rest of the training process is the same as the previous network. We'll define loss/eval/training functions: ```python cross_entropy = -tf.reduce_sum(Y * tf.log(Y_pred + 1e-12)) optimizer = tf.train.AdamOptimizer().minimize(cross_entropy) ``` Monitor accuracy: ```python correct_prediction = tf.equal(tf.argmax(Y_pred, 1), tf.argmax(Y, 1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, 'float')) ``` And create a new session to actually perform the initialization of all the variables: ```python sess = tf.Session() sess.run(tf.global_variables_initializer()) ``` Then we'll train in minibatches and report accuracy: ```python batch_size = 50 n_epochs = 10 for epoch_i in range(n_epochs): for batch_xs, batch_ys in ds.train.next_batch(): sess.run(optimizer, feed_dict={ X: batch_xs, Y: batch_ys }) valid = ds.valid print(sess.run(accuracy, feed_dict={ X: valid.images, Y: valid.labels })) # Print final test accuracy: test = ds.test print(sess.run(accuracy, feed_dict={ X: test.images, Y: test.labels })) ``` 0.352714 0.466714 0.662571 0.812286 0.964286 0.968571 0.972143 0.971714 0.974857 0.974714 0.974857 <TODO: Fun timelapse of waiting> ### Inspecting the Trained Network Let's take a look at the kernels we've learned using the following montage function, similar to the one we've been using for creating image montages, except this one is suited for the dimensions of convolution kernels instead of 4-d images. So it has the height and width first, unlike images which have batch then height then width. We'll use this function to visualize every convolution kernel in the first and second layers of our network. ```python from libs.utils import montage_filters W1 = sess.run(W_1) plt.figure(figsize=(10, 10)) plt.imshow(montage_filters(W1), cmap='coolwarm', interpolation='nearest') ``` What we're looking at are all of the convolution kernels that have been learned. Compared to the previous network we've learned, it is much harder to understand what's happening here. But let's try and explain these a little more. The kernels that have been automatically learned here are responding to edges of different scales, orientations, and rotations. It's likely these are really describing parts of letters, or the strokes that make up letters. Put another way, they are trying to get at the "information" in the image by seeing what changes. 
That's a pretty fundamental idea. That information would be things that change. Of course, there are filters for things that aren't changing as well. Some filters may even seem to respond to things that are mostly constant. However, if our network has learned a lot of filters that look like that, it's likely that the network hasn't really learned anything at all. The flip side of this is if the filters all look more or less random. That's also a bad sign. Let's try looking at the second layer's kernels: ```python W2 = sess.run(W_2) plt.imshow(montage_filters(W2 / np.max(W2)), cmap='coolwarm') ``` It's really difficult to know what's happening here. There are many more kernels in this layer. They've already passed through a set of filters and an additional non-linearity. How can we really know what the network is doing to learn its objective function? The important thing for now is to see that most of these filters are different, and that they are not all constant or uniformly activated. That means it's really doing something, but we aren't really sure yet how to see how that effects the way we think of and perceive the image. In the next session, we'll learn more about how we can start to interrogate these deeper representations and try to understand what they are encoding. Along the way, we'll learn some pretty amazing tricks for producing entirely new aesthetics that eventually led to the "deep dream" viral craze. <a name="savingloading-models"></a> # Saving/Loading Models Tensorflow provides a few ways of saving/loading models. The easiest way is to use a checkpoint. Though, this really useful while you are training your network. When you are ready to deploy or hand out your network to others, you don't want to pass checkpoints around as they contain a lot of unnecessary information, and it also requires you to still write code to create your network. Instead, you can create a protobuf which contains the definition of your graph and the model's weights. Let's see how to do both: <a name="checkpoint"></a> ## Checkpoint Creating a checkpoint requires you to have already created a set of operations in your tensorflow graph. Once you've done this, you'll create a session like normal and initialize all of the variables. After this, you create a `tf.train.Saver` which can restore a previously saved checkpoint, overwriting all of the variables with your saved parameters. ```python import os sess = tf.Session() init_op = tf.global_variables_initializer() saver = tf.train.Saver() sess.run(init_op) if os.path.exists("model.ckpt"): saver.restore(sess, "model.ckpt") print("Model restored.") ``` Creating the checkpoint is easy. After a few iterations of training, depending on your application say between 1/10 of the time to train the full model, you'll want to write the saved model. You can do this like so: ```python save_path = saver.save(sess, "./model.ckpt") print("Model saved in file: %s" % save_path) ``` Model saved in file: ./model.ckpt <a name="protobuf"></a> ## Protobuf The second way of saving a model is really useful for when you don't want to pass around the code for producing the tensors or computational graph itself. It is also useful for moving the code to deployment or for use in the C++ version of Tensorflow. To do this, you'll want to run an operation to convert all of your trained parameters into constants. Then, you'll create a second graph which copies the necessary tensors, extracts the subgraph, and writes this to a model. 
The summarized code below shows you how you could use a checkpoint to restore your model's parameters, and then export the saved model as a protobuf.

```python
path = './'
ckpt_name = './model.ckpt'
fname = 'model.tfmodel'
dst_nodes = ['Y']
g_1 = tf.Graph()
with tf.Session(graph=g_1) as sess:
    x = tf.placeholder(tf.float32, shape=(1, 224, 224, 3))
    # Replace this with some code which will create your tensorflow graph:
    net = create_network()
    sess.run(tf.global_variables_initializer())
    # The saver has to be created after this graph's variables exist:
    saver = tf.train.Saver()
    saver.restore(sess, ckpt_name)
    # Convert every trained variable into a constant baked into the graph def:
    graph_def = tf.python.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, dst_nodes)
g_2 = tf.Graph()
with tf.Session(graph=g_2) as sess:
    # Keep only the subgraph needed to compute the destination nodes and write it to disk:
    tf.train.write_graph(
        tf.python.graph_util.extract_sub_graph(
            graph_def, dst_nodes), path, fname, as_text=False)
```

When you want to import this model later, you won't need to refer to the checkpoint or re-create the network by specifying its placeholders or operations. Instead, you use the `import_graph_def` operation like so:

```python
with open("model.tfmodel", mode='rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
tf.import_graph_def(graph_def, name='model')
```

<a name="wrap-up"></a>
# Wrap Up

In the next session, we'll learn some very powerful techniques for exploring the representations learned by these kernels, and how we can better understand what they are learning. We'll look at state of the art deep networks for image recognition and interrogate what they've learned using techniques that led the public to Deep Dream.

<a name="reading"></a>
# Reading

Bourlard, H.; Kamp, Y. (1988). "Auto-association by multilayer perceptrons and singular value decomposition". Biological Cybernetics 59 (4–5): 291–294.

G. E. Hinton, R. R. Salakhutdinov. Reducing the Dimensionality of Data with Neural Networks. Science, 28 Jul 2006. Vol. 313, Issue 5786, pp. 504-507. DOI: 10.1126/science.1127647. http://science.sciencemag.org/content/313/5786/504.abstract

Bengio, Y. (2009). "Learning Deep Architectures for AI". Foundations and Trends in Machine Learning 2. doi:10.1561/2200000006

Vincent, Pascal; Larochelle, Hugo; Lajoie, Isabelle; Bengio, Yoshua; Manzagol, Pierre-Antoine (2010). "Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion". The Journal of Machine Learning Research 11: 3371–3408.

Auto-Encoding Variational Bayes, Kingma, D.P. and Welling, M., ArXiv e-prints, 2013. http://arxiv.org/abs/1312.6114
a0763f2873a01d53cb2f15ba88b201d857dcc132
469,282
ipynb
Jupyter Notebook
session-3/lecture-3.ipynb
axsauze/deep-learning-creative-tensorflow
b427398702fd7ade03a2f873493fbeb202e75726
[ "Apache-2.0" ]
2
2018-04-20T02:08:17.000Z
2018-04-20T11:43:00.000Z
session-3/lecture-3.ipynb
axsauze/deep-learning-creative-tensorflow
b427398702fd7ade03a2f873493fbeb202e75726
[ "Apache-2.0" ]
null
null
null
session-3/lecture-3.ipynb
axsauze/deep-learning-creative-tensorflow
b427398702fd7ade03a2f873493fbeb202e75726
[ "Apache-2.0" ]
2
2017-06-23T22:51:21.000Z
2018-08-05T15:12:09.000Z
166.176346
156,314
0.875265
true
18,518
Qwen/Qwen-72B
1. YES 2. YES
0.757794
0.705785
0.53484
__label__eng_Latn
0.992934
0.080942
```python ### conflits with Deepnote ### # matplotlib inline plotting %matplotlib inline # make inline plotting higher resolution %config InlineBackend.figure_format = 'svg' ### conflits with Deepnote ### ``` ```python # imports import pandas as pd import numpy as np import statsmodels.api as sm import matplotlib.pyplot as plt import seaborn as sns from scipy import stats from datetime import datetime, timedelta import calendar import re plt.style.use('dark_background') ``` ```python # data for Problem 1) df = pd.read_excel('Data.xlsx', sheet_name='cape', engine='openpyxl') df = df.rename(columns={df.columns[0]: 'date'}) df = df[df.columns[:4]] # drop any cols above first 4 df['date'] = pd.to_datetime(df['date'], format=('%Y%m')) df.head() ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>date</th> <th>ret</th> <th>rf</th> <th>cape</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>1926-07-01</td> <td>0.0318</td> <td>0.0022</td> <td>11.869694</td> </tr> <tr> <th>1</th> <td>1926-08-01</td> <td>0.0289</td> <td>0.0025</td> <td>12.488808</td> </tr> <tr> <th>2</th> <td>1926-09-01</td> <td>0.0059</td> <td>0.0023</td> <td>12.692615</td> </tr> <tr> <th>3</th> <td>1926-10-01</td> <td>-0.0292</td> <td>0.0032</td> <td>12.426518</td> </tr> <tr> <th>4</th> <td>1926-11-01</td> <td>0.0284</td> <td>0.0031</td> <td>12.615251</td> </tr> </tbody> </table> </div> ```python # data for Problem 2) df2 = pd.read_excel('Data.xlsx', sheet_name='cay', engine='openpyxl') def parse_quater(x): x = str(x) year = x[:4] q = x[4] if q == '1': return f'{year}-03-31' elif q == '2': return f'{year}-06-30' elif q == '3': return f'{year}-09-30' else: return f'{year}-12-31' df2 = df2.rename(columns={df2.columns[0]: 'date'}) df2 = df2[df2.columns[:5]] # drop any cols above first 4 df2 = df2.dropna(how='all') df2['date'] = df2['date'].apply(parse_quater) df2['date'] = pd.to_datetime(df2['date'], format=('%Y-%m-%d')) df2.head() ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>date</th> <th>log excess return</th> <th>cay</th> <th>ret</th> <th>rf</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>1952-12-31</td> <td>0.092468</td> <td>0.602544</td> <td>0.102295</td> <td>0.004939</td> </tr> <tr> <th>1</th> <td>1953-03-31</td> <td>-0.041697</td> <td>0.607042</td> <td>-0.035683</td> <td>0.005376</td> </tr> <tr> <th>2</th> <td>1953-06-30</td> <td>-0.038692</td> <td>0.602768</td> <td>-0.032102</td> <td>0.006082</td> </tr> <tr> <th>3</th> <td>1953-09-30</td> <td>-0.029554</td> <td>0.603446</td> <td>-0.022973</td> <td>0.006333</td> </tr> <tr> <th>4</th> <td>1953-12-31</td> <td>0.070750</td> <td>0.604790</td> <td>0.077865</td> <td>0.004241</td> </tr> </tbody> </table> </div> ## Problem 1, a) The purpose of this problem is to analyze the predictive ability of the Cyclically Adjusted Price-Earnings (CAPE) ratio for future stock returns. The CAPE ratio is also known as the price-smoothed-earnings ratio or as the Shiller P/E ratio and the variable is available for free download at Robert Shillerís website. 
The CAPE ratio is defined as the real stock price divided by average real earnings over a ten-year period. It has been used in a series of articles by John Campbell and Robert Shiller to examine long-horizon stock market predictability. Estimate long-horizon predictive regressions: $$r_{t \rightarrow t+k}=\alpha_{k}+\beta_{k} x_{t}+\varepsilon_{t \rightarrow t+k}$$ where $r_{t \rightarrow t+k}$ is the log excess return on the US stock market from time $t$ to $t+k$ and $x_{t}$ is the log CAPE ratio at time $t .^{1}$ Consider horizons in the range from one month up to ten years: $k=1,6,12,24,36,48,60,72,84,96,108,$ and $120 .$ Report and compare the $\beta_{k}$ coefficients and $R^{2}$ statistics across the forecast horizons. All necessary data to estimate (1) are available in the excel file "Data.xlsx". The sample period is from 1926:m7 to 2020:m7. ```python k = [1, 6, 12, 24, 36, 48, 60, 72, 84, 96, 108, 120] # reverse sorting the data df = df.set_index('date') df = df.sort_index(ascending=False) df ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>ret</th> <th>rf</th> <th>cape</th> </tr> <tr> <th>date</th> <th></th> <th></th> <th></th> </tr> </thead> <tbody> <tr> <th>2020-07-01</th> <td>0.0578</td> <td>0.0001</td> <td>29.610927</td> </tr> <tr> <th>2020-06-01</th> <td>0.0246</td> <td>0.0001</td> <td>28.843644</td> </tr> <tr> <th>2020-05-01</th> <td>0.0559</td> <td>0.0001</td> <td>27.329646</td> </tr> <tr> <th>2020-04-01</th> <td>0.1365</td> <td>0.0000</td> <td>25.927359</td> </tr> <tr> <th>2020-03-01</th> <td>-0.1327</td> <td>0.0012</td> <td>24.817169</td> </tr> <tr> <th>...</th> <td>...</td> <td>...</td> <td>...</td> </tr> <tr> <th>1926-10-01</th> <td>-0.0292</td> <td>0.0032</td> <td>12.426518</td> </tr> <tr> <th>1926-09-01</th> <td>0.0059</td> <td>0.0023</td> <td>12.692615</td> </tr> <tr> <th>1926-08-01</th> <td>0.0289</td> <td>0.0025</td> <td>12.488808</td> </tr> <tr> <th>1926-07-01</th> <td>0.0318</td> <td>0.0022</td> <td>11.869694</td> </tr> <tr> <th>NaT</th> <td>NaN</td> <td>NaN</td> <td>NaN</td> </tr> </tbody> </table> <p>1130 rows × 3 columns</p> </div> #### Log k-period returns The $k$ -period log return is calculated as $$ \begin{aligned} r_{t \rightarrow t+k} &=\log \left(1+R_{t \rightarrow t+k}\right) \\ &=\log \left(1+R_{t+1}\right)+\log \left(1+R_{t+2}\right)+\ldots+\log \left(1+R_{t+k}\right) \\ &=r_{t+1}+r_{t+2}+\ldots+r_{t+k} \end{aligned} $$ ```python # log-transforming CAPE df['cape'] = np.log(df['cape']) # generating k-period excess log-returns for df['ret'] = np.log(1 + df['ret']) df['rf'] = np.log(1 + df['rf']) for period in k: df[f'k={period}'] = df['ret'].rolling(period).sum() - df['rf'].rolling(period).sum() # resording again df = df.sort_index(ascending=True) ``` #### Lagging We always lag the predictive variable in predictive regressions. Here we get $r_{t+1}=\alpha+\beta x_{t}+\varepsilon_{t+1}$ _(abstracting from the k-period returns notation)_. Thus we move the returns columns one period backwards. 
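To make the alignment explicit, here is a toy example (hypothetical numbers, not the assignment data) of what the backward shift does: after `shift(-1)`, each row pairs the predictor observed at time $t$ with the return realised from $t$ to $t+1$, which is exactly the pairing the predictive regression needs.

```python
# Toy illustration of the lag structure used below (hypothetical data).
toy = pd.DataFrame({'ret': [0.01, -0.02, 0.03, 0.00],
                    'x':   [1.00, 1.10, 1.20, 1.30]},
                   index=pd.period_range('2000-01', periods=4, freq='M'))

# Shift the return column one period backwards: row t now holds (x_t, r_{t+1}).
toy['ret_next'] = toy['ret'].shift(-1)
toy
```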
```python # Lag all returns one period back-wards non_lagged_returns = df[df.columns[3:]] # in case I need dropped values later df[df.columns[3:]] = df[df.columns[3:]].shift(-1) df = df.dropna(how='all') df ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>ret</th> <th>rf</th> <th>cape</th> <th>k=1</th> <th>k=6</th> <th>k=12</th> <th>k=24</th> <th>k=36</th> <th>k=48</th> <th>k=60</th> <th>k=72</th> <th>k=84</th> <th>k=96</th> <th>k=108</th> <th>k=120</th> </tr> <tr> <th>date</th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> </tr> </thead> <tbody> <tr> <th>1926-07-01</th> <td>0.031305</td> <td>0.002198</td> <td>2.473988</td> <td>0.025993</td> <td>0.046851</td> <td>0.192655</td> <td>0.372093</td> <td>0.709978</td> <td>0.359452</td> <td>-0.065708</td> <td>-0.793039</td> <td>-0.243071</td> <td>-0.305186</td> <td>-0.027681</td> <td>0.339625</td> </tr> <tr> <th>1926-08-01</th> <td>0.028490</td> <td>0.002497</td> <td>2.524833</td> <td>0.003585</td> <td>0.061703</td> <td>0.186116</td> <td>0.410563</td> <td>0.762309</td> <td>0.336451</td> <td>-0.087611</td> <td>-0.503864</td> <td>-0.155322</td> <td>-0.276886</td> <td>-0.027522</td> <td>0.323481</td> </tr> <tr> <th>1926-09-01</th> <td>0.005883</td> <td>0.002297</td> <td>2.541020</td> <td>-0.032830</td> <td>0.059413</td> <td>0.228937</td> <td>0.435296</td> <td>0.702673</td> <td>0.196794</td> <td>-0.435396</td> <td>-0.537282</td> <td>-0.271492</td> <td>-0.282774</td> <td>-0.005150</td> <td>0.329647</td> </tr> <tr> <th>1926-10-01</th> <td>-0.029635</td> <td>0.003195</td> <td>2.519833</td> <td>0.024909</td> <td>0.096821</td> <td>0.217823</td> <td>0.481284</td> <td>0.512011</td> <td>0.137814</td> <td>-0.325310</td> <td>-0.645640</td> <td>-0.325956</td> <td>-0.266682</td> <td>0.095612</td> <td>0.431243</td> </tr> <tr> <th>1926-11-01</th> <td>0.028004</td> <td>0.003095</td> <td>2.534906</td> <td>0.025791</td> <td>0.124729</td> <td>0.256510</td> <td>0.567606</td> <td>0.351362</td> <td>0.082074</td> <td>-0.445239</td> <td>-0.731136</td> <td>-0.255846</td> <td>-0.211587</td> <td>0.118341</td> <td>0.438508</td> </tr> <tr> <th>...</th> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> </tr> <tr> <th>2020-03-01</th> <td>-0.142370</td> <td>0.001199</td> <td>3.211536</td> <td>0.127953</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <th>2020-04-01</th> <td>0.127953</td> <td>0.000000</td> <td>3.255299</td> <td>0.054293</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <th>2020-05-01</th> <td>0.054393</td> <td>0.000100</td> <td>3.307972</td> <td>0.024202</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <th>2020-06-01</th> <td>0.024302</td> <td>0.000100</td> <td>3.361890</td> <td>0.056091</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> 
<td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <th>2020-07-01</th> <td>0.056191</td> <td>0.000100</td> <td>3.388143</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> </tr> </tbody> </table> <p>1129 rows × 15 columns</p> </div> ```python res = [] # placeholder for results for period in k: data = df[[f'k={period}', 'cape']].dropna(how='any') Y = data[f'k={period}'] X = sm.add_constant(data['cape']) fit = sm.OLS(endog=Y, exog=X).fit() res.append({ 'horizon': period, 'alpha': fit.params['const'], 'beta': fit.params['cape'], 'R2': fit.rsquared_adj * 100 }) res = pd.DataFrame(res) res ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>horizon</th> <th>alpha</th> <th>beta</th> <th>R2</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>1</td> <td>0.032864</td> <td>-0.009782</td> <td>0.454831</td> </tr> <tr> <th>1</th> <td>6</td> <td>0.205182</td> <td>-0.061759</td> <td>3.193525</td> </tr> <tr> <th>2</th> <td>12</td> <td>0.446518</td> <td>-0.136438</td> <td>7.186357</td> </tr> <tr> <th>3</th> <td>24</td> <td>0.863311</td> <td>-0.263625</td> <td>13.235476</td> </tr> <tr> <th>4</th> <td>36</td> <td>1.171717</td> <td>-0.353641</td> <td>16.864028</td> </tr> <tr> <th>5</th> <td>48</td> <td>1.410224</td> <td>-0.419058</td> <td>18.923008</td> </tr> <tr> <th>6</th> <td>60</td> <td>1.585868</td> <td>-0.460648</td> <td>19.757823</td> </tr> <tr> <th>7</th> <td>72</td> <td>1.695399</td> <td>-0.475547</td> <td>20.920020</td> </tr> <tr> <th>8</th> <td>84</td> <td>1.878679</td> <td>-0.516682</td> <td>23.922550</td> </tr> <tr> <th>9</th> <td>96</td> <td>2.127895</td> <td>-0.583363</td> <td>27.275441</td> </tr> <tr> <th>10</th> <td>108</td> <td>2.429521</td> <td>-0.668976</td> <td>31.716735</td> </tr> <tr> <th>11</th> <td>120</td> <td>2.710301</td> <td>-0.748767</td> <td>34.777340</td> </tr> </tbody> </table> </div> #### Conclusion As we see from the table above. The predictive power of CAPE increases when looking at longer return-horizons. ```python ``` ## Problem 1, b) The use of overlapping data in (1) leads to autocorrelation in the error term. To address this issue, we can use e.g. the Newey-West estimator to compute $t$ -statistics across the different forecast horizons. To examine the effect of how standard errors are computed in long-horizon regressions, try with two different lag-length specifications in the Newey-West estimator. First, try to set the lag length in the Newey-West estimator equal to the forecast horizon and then afterwards try with no lags in the Newey-West estimator. 
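Before estimating, it helps to spell out why the overlap matters (a brief sketch of the standard argument). For a horizon of $k$ months the regression error is a sum of $k$ monthly shocks,
$$ \varepsilon_{t \rightarrow t+k}=\sum_{j=1}^{k} u_{t+j}, \qquad \varepsilon_{t+1 \rightarrow t+1+k}=\sum_{j=2}^{k+1} u_{t+j}, $$
so two consecutive errors share $k-1$ of their shocks and the error term is serially correlated up to lag $k-1$ even under the null of no predictability. This is why the lag length in the Newey-West estimator is set in relation to the forecast horizon below, while the zero-lag version ignores this dependence.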
```python res = [] # placeholder for results for period in k: data = df[[f'k={period}', 'cape']].dropna(how='any') Y = data[f'k={period}'] X = sm.add_constant(data['cape']) fit_nw_0 = sm.OLS(endog=Y, exog=X).fit(cov_type='HAC', cov_kwds={'maxlags': 0}) fit_nw_k = sm.OLS(endog=Y, exog=X).fit(cov_type='HAC', cov_kwds={'maxlags': period}) res.append({ 'horizon': period, 'T-stat: NW (0 lags)': fit_nw_0.tvalues['cape'], 'T-stat: NW (k lags)': fit_nw_k.tvalues['cape'] }) res = pd.DataFrame(res) res ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>horizon</th> <th>T-stat: NW (0 lags)</th> <th>T-stat: NW (k lags)</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>1</td> <td>-2.028352</td> <td>-1.831084</td> </tr> <tr> <th>1</th> <td>6</td> <td>-5.952230</td> <td>-3.040038</td> </tr> <tr> <th>2</th> <td>12</td> <td>-9.257427</td> <td>-3.423782</td> </tr> <tr> <th>3</th> <td>24</td> <td>-12.989475</td> <td>-3.283021</td> </tr> <tr> <th>4</th> <td>36</td> <td>-13.784869</td> <td>-3.066353</td> </tr> <tr> <th>5</th> <td>48</td> <td>-16.111079</td> <td>-3.209365</td> </tr> <tr> <th>6</th> <td>60</td> <td>-18.894856</td> <td>-3.537802</td> </tr> <tr> <th>7</th> <td>72</td> <td>-20.524614</td> <td>-3.580867</td> </tr> <tr> <th>8</th> <td>84</td> <td>-22.790482</td> <td>-3.486762</td> </tr> <tr> <th>9</th> <td>96</td> <td>-23.624109</td> <td>-3.462548</td> </tr> <tr> <th>10</th> <td>108</td> <td>-23.480802</td> <td>-3.493523</td> </tr> <tr> <th>11</th> <td>120</td> <td>-25.049065</td> <td>-3.467663</td> </tr> </tbody> </table> </div> #### Conclusion - When we don't lag our Newey-West std. errors we don't take into account serial correlation. - Thus we get large (absolute) t-statistics when using 0-lags, however this has low power due to serial correlation. - $H_0:\; \beta_k=0$ can however be rejected as $k$ increases for the model with $k$ lags. - The reason why $k$ lags is a relevant choice is that the overlap in returns implies that $\varepsilon_{t+k} \sim M A(k-1)$ by construction. ```python ``` ## Problem 1, c) Similar to the price-dividend ratio, the CAPE ratio is highly persistent and slow to mean- revert, implying that forecasts build up over time. Make two scatter plots where you plot the time $t$ log CAPE ratio against the one-month ahead log excess return $\left(r_{t \rightarrow t+1}\right)$ and the ten-year ahead log excess return $\left(r_{t \rightarrow t+120}\right),$ respectively. ```python sns.scatterplot(x='cape', y='k=1', data=df) plt.ylabel('One month ahead excess return') plt.xlabel('CAPE') plt.title('CAPE vs. one month ahead excess returns') plt.show() ``` ```python sns.scatterplot(x='cape', y='k=120', data=df) plt.ylabel('Ten year ahead excess return') plt.xlabel('CAPE') plt.title('CAPE vs. ten year ahead excess returns') plt.show() ``` #### Conclusion The predictive power of CAPE becomes more visible at very long horizons as CAPE is slow to mean revert - this is the same result as we saw ealier. ```python ``` ## Problem 1, d) In-sample evidence of time-varying expected excess returns does not imply that it is possible to predict returns out-of-sample. 
Use an out-of-sample period from 1990:m1 to 2020:m7 to check the out-of-sample predictive power of the log CAPE ratio by computing the out-of-sample $R^{2}$ and Clark and West test statistic for the $k=1,6,$ and 12 horizons. In addition, plot the Goyal and Welch (2008) cumulative-squared-error-difference figure for $k=1$. ```python # information from problem start_oos = datetime(1990, 1, 1) horizons = [1, 6, 12] nlag = 1 # Lag x relative to y as specified in the forecast regression. oos_cols = [f'k={period}' for period in horizons] oos_cols.append('cape') # select relevant data oos_data = df[oos_cols] oos_data.head() ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>k=1</th> <th>k=6</th> <th>k=12</th> <th>cape</th> </tr> <tr> <th>date</th> <th></th> <th></th> <th></th> <th></th> </tr> </thead> <tbody> <tr> <th>1926-07-01</th> <td>0.025993</td> <td>0.046851</td> <td>0.192655</td> <td>2.473988</td> </tr> <tr> <th>1926-08-01</th> <td>0.003585</td> <td>0.061703</td> <td>0.186116</td> <td>2.524833</td> </tr> <tr> <th>1926-09-01</th> <td>-0.032830</td> <td>0.059413</td> <td>0.228937</td> <td>2.541020</td> </tr> <tr> <th>1926-10-01</th> <td>0.024909</td> <td>0.096821</td> <td>0.217823</td> <td>2.519833</td> </tr> <tr> <th>1926-11-01</th> <td>0.025791</td> <td>0.124729</td> <td>0.256510</td> <td>2.534906</td> </tr> </tbody> </table> </div> ```python # prevailing mean benchmark forecast - using expanding window PM = [] PM_data = non_lagged_returns[[f'k={i}' for i in horizons]] for window in PM_data.expanding(1): # this works if window.index[-1] >= start_oos: res = {'date': window.index[-1]} for col in window.columns: k = re.findall(string=col, pattern='[^\d](\d+)$')[0] k = int(k) window_data = window[f'k={k}'].iloc[0:-k] res.update({f'PM_k={k}': np.mean(window_data)}) # save resulting dictionary PM.append(res) PM_result = pd.DataFrame(PM).set_index('date', drop=True) PM_result ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>PM_k=1</th> <th>PM_k=6</th> <th>PM_k=12</th> </tr> <tr> <th>date</th> <th></th> <th></th> <th></th> </tr> </thead> <tbody> <tr> <th>1990-01-01</th> <td>0.004984</td> <td>0.029831</td> <td>0.058755</td> </tr> <tr> <th>1990-02-01</th> <td>0.004871</td> <td>0.029673</td> <td>0.058721</td> </tr> <tr> <th>1990-03-01</th> <td>0.004879</td> <td>0.029511</td> <td>0.058732</td> </tr> <tr> <th>1990-04-01</th> <td>0.004896</td> <td>0.029383</td> <td>0.058746</td> </tr> <tr> <th>1990-05-01</th> <td>0.004846</td> <td>0.029259</td> <td>0.058659</td> </tr> <tr> <th>...</th> <td>...</td> <td>...</td> <td>...</td> </tr> <tr> <th>2020-03-01</th> <td>0.005148</td> <td>0.031051</td> <td>0.061706</td> </tr> <tr> <th>2020-04-01</th> <td>0.005015</td> <td>0.030894</td> <td>0.061552</td> </tr> <tr> <th>2020-05-01</th> <td>0.005125</td> <td>0.030835</td> <td>0.061478</td> </tr> <tr> <th>2020-06-01</th> <td>0.005168</td> <td>0.030789</td> <td>0.061517</td> </tr> <tr> <th>2020-07-01</th> <td>0.005185</td> <td>0.030742</td> <td>0.061518</td> </tr> </tbody> </table> <p>367 rows × 3 columns</p> </div> ```python # Predictive regression forecast - also 
using expanding window PR = [] for window in oos_data.expanding(1): # this works if window.index[-1] >= start_oos: res = {'date': window.index[-1]} for col in window.columns: if col != 'cape': k = re.findall(string=col, pattern='[^\d](\d+)$')[0] k = int(k) Y = window[f'k={k}'].iloc[0:-(k + nlag)] X = sm.add_constant(window['cape']).iloc[0:-(k + nlag)] # fit OOS predictive regression fit = sm.OLS(endog=Y, exog=X).fit(cov_type='HAC', cov_kwds={'maxlags': k}) pr_val = np.dot(np.matrix([1, window['cape'].iloc[-(1 + nlag)]]), fit.params.values) res.update({f'PR_k={k}': pr_val[0, 0]}) # save resulting dictionary PR.append(res) PR_result = pd.DataFrame(PR).set_index('date', drop=True) PR_result ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>PR_k=1</th> <th>PR_k=6</th> <th>PR_k=12</th> </tr> <tr> <th>date</th> <th></th> <th></th> <th></th> </tr> </thead> <tbody> <tr> <th>1990-01-01</th> <td>0.000064</td> <td>-0.000205</td> <td>-0.007794</td> </tr> <tr> <th>1990-02-01</th> <td>0.000630</td> <td>0.004022</td> <td>0.001917</td> </tr> <tr> <th>1990-03-01</th> <td>0.001328</td> <td>0.007952</td> <td>0.011008</td> </tr> <tr> <th>1990-04-01</th> <td>0.000945</td> <td>0.005334</td> <td>0.005592</td> </tr> <tr> <th>1990-05-01</th> <td>0.000908</td> <td>0.005363</td> <td>0.005870</td> </tr> <tr> <th>...</th> <td>...</td> <td>...</td> <td>...</td> </tr> <tr> <th>2020-03-01</th> <td>-0.000835</td> <td>-0.005909</td> <td>-0.021442</td> </tr> <tr> <th>2020-04-01</th> <td>0.000979</td> <td>0.006888</td> <td>0.007664</td> </tr> <tr> <th>2020-05-01</th> <td>0.000752</td> <td>0.004115</td> <td>0.001645</td> </tr> <tr> <th>2020-06-01</th> <td>0.000327</td> <td>0.000823</td> <td>-0.005306</td> </tr> <tr> <th>2020-07-01</th> <td>-0.000158</td> <td>-0.002557</td> <td>-0.012498</td> </tr> </tbody> </table> <p>367 rows × 3 columns</p> </div> ```python oos_data = non_lagged_returns.join(PR_result) oos_data = oos_data.join(PM_result) oos_data ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>k=1</th> <th>k=6</th> <th>k=12</th> <th>k=24</th> <th>k=36</th> <th>k=48</th> <th>k=60</th> <th>k=72</th> <th>k=84</th> <th>k=96</th> <th>k=108</th> <th>k=120</th> <th>PR_k=1</th> <th>PR_k=6</th> <th>PR_k=12</th> <th>PM_k=1</th> <th>PM_k=6</th> <th>PM_k=12</th> </tr> <tr> <th>date</th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> </tr> </thead> <tbody> <tr> <th>1926-07-01</th> <td>0.029107</td> <td>0.076557</td> <td>0.151879</td> <td>0.395039</td> <td>0.695591</td> <td>0.348264</td> <td>0.031850</td> <td>-1.055330</td> <td>-0.112727</td> <td>-0.160007</td> <td>-0.070980</td> <td>0.304169</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <th>1926-08-01</th> <td>0.025993</td> <td>0.046851</td> <td>0.192655</td> <td>0.372093</td> <td>0.709978</td> <td>0.359452</td> <td>-0.065708</td> <td>-0.793039</td> <td>-0.243071</td> <td>-0.305186</td> <td>-0.027681</td> <td>0.339625</td> <td>NaN</td> <td>NaN</td> 
<td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <th>1926-09-01</th> <td>0.003585</td> <td>0.061703</td> <td>0.186116</td> <td>0.410563</td> <td>0.762309</td> <td>0.336451</td> <td>-0.087611</td> <td>-0.503864</td> <td>-0.155322</td> <td>-0.276886</td> <td>-0.027522</td> <td>0.323481</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <th>1926-10-01</th> <td>-0.032830</td> <td>0.059413</td> <td>0.228937</td> <td>0.435296</td> <td>0.702673</td> <td>0.196794</td> <td>-0.435396</td> <td>-0.537282</td> <td>-0.271492</td> <td>-0.282774</td> <td>-0.005150</td> <td>0.329647</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <th>1926-11-01</th> <td>0.024909</td> <td>0.096821</td> <td>0.217823</td> <td>0.481284</td> <td>0.512011</td> <td>0.137814</td> <td>-0.325310</td> <td>-0.645640</td> <td>-0.325956</td> <td>-0.266682</td> <td>0.095612</td> <td>0.431243</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <th>...</th> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> </tr> <tr> <th>2020-04-01</th> <td>0.127953</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>0.000979</td> <td>0.006888</td> <td>0.007664</td> <td>0.005015</td> <td>0.030894</td> <td>0.061552</td> </tr> <tr> <th>2020-05-01</th> <td>0.054293</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>0.000752</td> <td>0.004115</td> <td>0.001645</td> <td>0.005125</td> <td>0.030835</td> <td>0.061478</td> </tr> <tr> <th>2020-06-01</th> <td>0.024202</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>0.000327</td> <td>0.000823</td> <td>-0.005306</td> <td>0.005168</td> <td>0.030789</td> <td>0.061517</td> </tr> <tr> <th>2020-07-01</th> <td>0.056091</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>-0.000158</td> <td>-0.002557</td> <td>-0.012498</td> <td>0.005185</td> <td>0.030742</td> <td>0.061518</td> </tr> <tr> <th>NaT</th> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> </tr> </tbody> </table> <p>1130 rows × 18 columns</p> </div> ```python ``` #### Calculations Calculating $R^2$ as defined as, $$R_{O O S}^{2}=1-\frac{\sum_{i=t+h+1}^{T}\left(r_{i}-\widehat{r}_{i}\right)^{2}}{\sum_{i=t+h+1}^{T}\left(r_{i}-\bar{r}_{i}\right)^{2}}$$ where $\hat{r}$ is the predictive regression forecast, and $\bar{r}$ is the prevailing mean forecast. Afterwards we wish to test $H_{0}: R_{O O S}^{2} \leq 0$ _(no predictability)_ using the Clark and West test. First we compute $$f_{i}=\left(r_{i}-\bar{r}_{i}\right)^{2}-\left(r_{i}-\widehat{r}_{i}\right)^{2}+\left(\bar{r}_{i}-\widehat{r}_{i}\right)^{2}$$ and run the regression $$f_{i}=\theta+u_{i}, \quad i=1, \ldots, T-t-h$$ If we use autocorrelation robust std. 
errors (Newey West) we can perform a standard t-test on $\hat{\theta}$ for inference. ```python MSE = [] for period in horizons: # Predictive regression error oos_data[f'e_PR_k={period}'] = oos_data[f'k={period}'] - oos_data[f'PR_k={period}'] # Predictive mean error oos_data[f'e_PM_k={period}'] = oos_data[f'k={period}'] - oos_data[f'PM_k={period}'] MSE_PR = np.mean(np.power(oos_data[f'e_PR_k={period}'], 2)) MSE_PM = np.mean(np.power(oos_data[f'e_PM_k={period}'], 2)) R2OOS = 100 * (1 - (MSE_PR/MSE_PM)) # calculating Clark-West test f = np.power(oos_data[f'e_PM_k={period}'], 2) - np.power(oos_data[f'e_PR_k={period}'], 2) + np.power((oos_data[f'PM_k={period}'] - oos_data[f'PR_k={period}']), 2) Y = f.dropna() X = np.ones(shape=(Y.shape[0], 1)) fit = sm.OLS(endog=Y, exog=X).fit(cov_type='HAC', cov_kwds={'maxlags': period}) # data for plotting the Goyal and Welch (2008) cumulative-squared-error-difference figure for k = 1. if period == 1: oos_data['GW'] = np.power(oos_data[f'e_PM_k={period}'], 2).cumsum() - np.power(oos_data[f'e_PR_k={period}'], 2).cumsum() MSE.append({ 'Horizon': period, 'R2': R2OOS, 'P-value': fit.pvalues[0] / 2 # remember - we are testing one-sided (Normal CDF is symmetric) }) pd.DataFrame(MSE) ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>Horizon</th> <th>R2</th> <th>P-value</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>1</td> <td>-2.501215</td> <td>0.472038</td> </tr> <tr> <th>1</th> <td>6</td> <td>-14.842058</td> <td>0.432010</td> </tr> <tr> <th>2</th> <td>12</td> <td>-37.708539</td> <td>0.444608</td> </tr> </tbody> </table> </div> ```python oos_data['GW'].plot(figsize=(10, 7)) plt.axhline(0, color='red', linewidth=1) plt.xlabel('Time') plt.ylabel('Cumulative SSE difference') plt.title('Difference in cumulative squared forecast error') plt.show() ``` #### Conclusion As expected we see that out-of-sample evidence shows that CAPE is not able to predict returns _(at least not out-of-sample)_. The out-of-sample $R^2$ is negative and the Clark and West test is not able to reject the $H_0$ of no predictability. As we also see from the cumulative-squared-error-difference figure, the predictive ablity of CAPE has been unstable over time. ```python ``` ## Problem 2, a) Lettau and Ludvigson (2001) find that there is a cointegration relationship between consumption $\left(c_{t}\right),$ financial asset wealth $\left(a_{t}\right),$ and income $\left(y_{t}\right)$. They show that the estimated cointegration residual $\widehat{cay}$ has the ability to capture time-varying expected returns on the US stock market. The excel file "TimeVaryingRiskPremia.xlsx" contains the log excess return on the S\&P500 index as well as the original cay data used by Lettau and Ludvigson (2001) with a sample period from 1952:q4 to 1998:q3. Estimate the predictive regression model: $$ r_{t \rightarrow t+k}=\alpha_{k}+\beta_{k} \widehat{c a y}_{t}+\varepsilon_{t \rightarrow t+k} $$ where $r_{t \rightarrow t+k}$ is the $k$ -period ahead log excess return. Is $\beta_{k}$ statistically significant across horizons? Compare your results with Table VI (row 2) in Lettau and Ludvigson (2001). 
$^{2}$ ```python k = [1, 2, 3, 4, 8, 12, 16, 24] # forecast horizons ``` ```python df = df2 df = df.rename(columns={'ret': 'S&P','log excess return': 'ret'}) df = df.set_index('date', drop=True) df.head() ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>ret</th> <th>cay</th> <th>S&amp;P</th> <th>rf</th> </tr> <tr> <th>date</th> <th></th> <th></th> <th></th> <th></th> </tr> </thead> <tbody> <tr> <th>1952-12-31</th> <td>0.092468</td> <td>0.602544</td> <td>0.102295</td> <td>0.004939</td> </tr> <tr> <th>1953-03-31</th> <td>-0.041697</td> <td>0.607042</td> <td>-0.035683</td> <td>0.005376</td> </tr> <tr> <th>1953-06-30</th> <td>-0.038692</td> <td>0.602768</td> <td>-0.032102</td> <td>0.006082</td> </tr> <tr> <th>1953-09-30</th> <td>-0.029554</td> <td>0.603446</td> <td>-0.022973</td> <td>0.006333</td> </tr> <tr> <th>1953-12-31</th> <td>0.070750</td> <td>0.604790</td> <td>0.077865</td> <td>0.004241</td> </tr> </tbody> </table> </div> ```python for period in k: df[f'k={period}'] = df['ret'].rolling(period).sum() df[f'k={period}'] = df[f'k={period}'].dropna().iloc[1:] # drop first observation - implicit lag df.head(n=10) ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>ret</th> <th>cay</th> <th>S&amp;P</th> <th>rf</th> <th>k=1</th> <th>k=2</th> <th>k=3</th> <th>k=4</th> <th>k=8</th> <th>k=12</th> <th>k=16</th> <th>k=24</th> </tr> <tr> <th>date</th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> </tr> </thead> <tbody> <tr> <th>1952-12-31</th> <td>0.092468</td> <td>0.602544</td> <td>0.102295</td> <td>0.004939</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <th>1953-03-31</th> <td>-0.041697</td> <td>0.607042</td> <td>-0.035683</td> <td>0.005376</td> <td>-0.041697</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <th>1953-06-30</th> <td>-0.038692</td> <td>0.602768</td> <td>-0.032102</td> <td>0.006082</td> <td>-0.038692</td> <td>-0.080389</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <th>1953-09-30</th> <td>-0.029554</td> <td>0.603446</td> <td>-0.022973</td> <td>0.006333</td> <td>-0.029554</td> <td>-0.068247</td> <td>-0.109943</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <th>1953-12-31</th> <td>0.070750</td> <td>0.604790</td> <td>0.077865</td> <td>0.004241</td> <td>0.070750</td> <td>0.041196</td> <td>0.002504</td> <td>-0.039193</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <th>1954-03-31</th> <td>0.091315</td> <td>0.614112</td> <td>0.099671</td> <td>0.003703</td> <td>0.091315</td> <td>0.162065</td> <td>0.132511</td> <td>0.093819</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <th>1954-06-30</th> <td>0.088655</td> <td>0.617901</td> <td>0.095901</td> <td>0.002926</td> <td>0.088655</td> <td>0.179970</td> <td>0.250720</td> <td>0.221166</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <th>1954-09-30</th> 
<td>0.108374</td> <td>0.622924</td> <td>0.116368</td> <td>0.001708</td> <td>0.108374</td> <td>0.197029</td> <td>0.288344</td> <td>0.359094</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <th>1954-12-31</th> <td>0.123406</td> <td>0.611707</td> <td>0.134620</td> <td>0.002896</td> <td>0.123406</td> <td>0.231780</td> <td>0.320435</td> <td>0.411750</td> <td>0.372557</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <th>1955-03-31</th> <td>0.026647</td> <td>0.610457</td> <td>0.029533</td> <td>0.002461</td> <td>0.026647</td> <td>0.150054</td> <td>0.258428</td> <td>0.347083</td> <td>0.440901</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> </tr> </tbody> </table> </div> ```python res = [] for period in k: Y = df[f'k={period}'].dropna().values X = df['cay'].iloc[:-period].values X = sm.add_constant(X) # fit OOS predictive regression fit = sm.OLS(endog=Y, exog=X).fit(cov_type='HAC', cov_kwds={'maxlags': period}) res.append({ 'Horizon': period, 'Beta': fit.params[1], 't-stat': fit.tvalues[1], 'R2': fit.rsquared_adj }) results = pd.DataFrame(res) results ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>Horizon</th> <th>Beta</th> <th>t-stat</th> <th>R2</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>1</td> <td>2.154744</td> <td>3.971369</td> <td>0.088550</td> </tr> <tr> <th>1</th> <td>2</td> <td>3.796311</td> <td>3.739416</td> <td>0.117052</td> </tr> <tr> <th>2</th> <td>3</td> <td>5.452758</td> <td>3.648954</td> <td>0.158731</td> </tr> <tr> <th>3</th> <td>4</td> <td>6.751756</td> <td>3.890468</td> <td>0.178875</td> </tr> <tr> <th>4</th> <td>8</td> <td>8.366383</td> <td>3.662538</td> <td>0.155721</td> </tr> <tr> <th>5</th> <td>12</td> <td>8.610317</td> <td>3.150374</td> <td>0.138208</td> </tr> <tr> <th>6</th> <td>16</td> <td>7.872463</td> <td>2.967739</td> <td>0.103365</td> </tr> <tr> <th>7</th> <td>24</td> <td>12.268554</td> <td>2.800466</td> <td>0.146477</td> </tr> </tbody> </table> </div> #### Conclusion Below is the table in question from the article, _Table VI (row 2) in Lettau and Ludvigson (2001):_ |k|1|2|3|4|8|12|16|24| |---|---|---|---|---|---|---|---|---| |Beta|2.16|3.8|5.43|6.72|8.35|8.57|7.86|12.44| |t-stat|3.44|3.34|3.37|3.7|3.73|3.24|2.99|3.41| |R2|0.09|0.12|0.16|0.18|0.16|0.15|0.11|0.16| We get very similar results as did study did. Thus we're able to _(accounting for these particular results)_ to show that the estimated cointegration residual $\widehat{cay}$ is able to capture time-varying expected returns on the US stock market. ```python ``` ## Problem 2, b) It is important to take into account small sample bias in order to be able to conduct valid inference from predictive regressions. Small sample bias in predictive regressions is particularly severe for financial predictive variables such as the CAPE ratio, the price-dividend ratio and other predictive variables scaled by price, but often found to be less severe for macroeconomic predictive variables such as the cay ratio. 
To judge the degree of small sample bias in the predictive regression $r_{t \rightarrow t+k}=\alpha_{k}+\beta_{k} \widehat{cay}_{t}+\varepsilon_{t \rightarrow t+k}$, conduct a bootstrap analysis where you bootstrap under the null hypothesis of no predictability and assume an $\mathrm{AR}(1)$ data-generating process for $\widehat{cay}$:
$$
\begin{aligned}
r_{t+1} &=\alpha+\varepsilon_{t+1} \\
\widehat{cay}_{t+1} &=\mu+\phi \widehat{cay}_{t}+\eta_{t+1}
\end{aligned}
$$
Compute $N=10,000$ artificial estimates of the slope coefficients under the null of no predictability and then compute the degree of bias in $\beta_{k}$ as well as one-sided empirical $p$-values across the different forecast horizons. In addition, make a histogram of the bootstrapped slope coefficients for $k=1$. Based on the output from your bootstrap analysis, does the predictive regression in (2) suffer from small sample bias?

```python

```

#### Solution

We use residual-based bootstrapping under the null of no predictability. We are told to assume an $\mathrm{AR}(1)$ process for the predictive variable ($\widehat{cay}$), thus we first estimate the following two equations,
$$
\begin{align}
(1):&\qquad r_{t+1} =\alpha+\varepsilon_{t+1} \\
(2):&\qquad \widehat{cay}_{t+1} =\mu+\phi \widehat{cay}_{t}+\eta_{t+1}
\end{align}
$$
- Then save the residuals and coefficients _(we save the residuals in pairs to preserve the cross-correlation of the residuals)_.
- We construct $N$ bootstrap samples of length $T$ by setting the initial values of $r_{t}$ and $\widehat{cay}_{t}$ equal to their sample averages and by randomly selecting residual pairs (with replacement) from (1) and (2).
- We then estimate $\beta$ from each bootstrap sample:
$$
r_{t+1}=\alpha+\beta \widehat{cay}_{t}+\varepsilon_{t+1}
$$
which will provide us with $N$ artificial estimates of the slope coefficient: $\widetilde{\beta}^{(1)}, \widetilde{\beta}^{(2)}, \ldots, \widetilde{\beta}^{(N)}$
- The size of the bias is given by:
$$
\operatorname{bias}(\widehat{\beta})=\frac{1}{N} \sum_{i=1}^{N} \widetilde{\beta}^{(i)}-\beta_{0}
$$
where $\beta_{0}=0$ in our case.
- We can compute the empirical one-sided $p$ -value under the null hypothesis as $$ P(\widetilde{\beta}<\widehat{\beta})=\frac{1}{N} \sum_{i=1}^{N} I\left[\widetilde{\beta}^{(i)}<\widehat{\beta}\right] $$ ```python # from assignment m = 10000 ``` ```python # return regression under H0 - equation (1) alpha = np.mean(df['ret'].iloc[1:]) e1 = df['ret'].iloc[1:] - alpha ``` ```python # AR(1) model - equation (2): Y = df['cay'].shift(-1).dropna() X = df['cay'].iloc[:-1] X = sm.add_constant(X) out_ar1 = sm.OLS(endog=Y, exog=X).fit() theta = out_ar1.params e2 = out_ar1.resid ``` ```python # paramters to use in bootstrap a = np.matrix([alpha, theta[0]]).T phi = np.matrix([[0,0], [0, theta[1]]]) e = np.matrix([e1, e2]).T # important to save residuals in pairs X = df[['ret', 'cay']] X = sm.add_constant(X) X ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>const</th> <th>ret</th> <th>cay</th> </tr> <tr> <th>date</th> <th></th> <th></th> <th></th> </tr> </thead> <tbody> <tr> <th>1952-12-31</th> <td>1.0</td> <td>0.092468</td> <td>0.602544</td> </tr> <tr> <th>1953-03-31</th> <td>1.0</td> <td>-0.041697</td> <td>0.607042</td> </tr> <tr> <th>1953-06-30</th> <td>1.0</td> <td>-0.038692</td> <td>0.602768</td> </tr> <tr> <th>1953-09-30</th> <td>1.0</td> <td>-0.029554</td> <td>0.603446</td> </tr> <tr> <th>1953-12-31</th> <td>1.0</td> <td>0.070750</td> <td>0.604790</td> </tr> <tr> <th>...</th> <td>...</td> <td>...</td> <td>...</td> </tr> <tr> <th>1997-09-30</th> <td>1.0</td> <td>0.059567</td> <td>0.585296</td> </tr> <tr> <th>1997-12-31</th> <td>1.0</td> <td>0.015896</td> <td>0.575760</td> </tr> <tr> <th>1998-03-31</th> <td>1.0</td> <td>0.118251</td> <td>0.574597</td> </tr> <tr> <th>1998-06-30</th> <td>1.0</td> <td>0.020338</td> <td>0.567769</td> </tr> <tr> <th>1998-09-30</th> <td>1.0</td> <td>-0.117313</td> <td>0.571596</td> </tr> </tbody> </table> <p>184 rows × 3 columns</p> </div> ```python T = X.shape[0] beta_sim = {period: [] for period in k} t_sim = {period: [] for period in k} r2_sim = {period: [] for period in k} for simulation in range(m): Xsim = np.zeros((T, 2)) Xsim[0] = X.mean()[1:].values # initial values # simulate cay and returns for i in range(1, T): # random draw with replacement Xsim[i] = a.T + np.dot(Xsim[i-1], phi.T) + e[np.random.randint(low=0, high=T-1)] # wrap in dataframe for easier handling sim_data = pd.DataFrame(Xsim).rename(columns={ 0: 'ret_sim', 1: 'cay_sim' }) for period in k: sim_data[f'k={period}'] = sim_data['ret_sim'].rolling(period).sum() sim_data[f'k={period}'] = sim_data[f'k={period}'].dropna().iloc[1:] # drop first observation - implicit lag for period in k: Y = sim_data[f'k={period}'].dropna().values X_sim = sim_data['cay_sim'].iloc[:-period].values X_sim = sm.add_constant(X_sim) # fit OOS predictive regression fit = sm.OLS(endog=Y, exog=X_sim).fit(cov_type='HAC', cov_kwds={'maxlags': period}) beta_sim[period].append(fit.params[1]) t_sim[period].append(fit.tvalues[1]) r2_sim[period].append(fit.rsquared_adj) ``` ```python beta_sim = pd.DataFrame(beta_sim) t_sim = pd.DataFrame(t_sim) r2_sim = pd.DataFrame(r2_sim) ``` ```python res = [] for period in k: b_hat = results.set_index('Horizon')['Beta'].loc[period] sum_count = beta_sim[period].loc[beta_sim[period] > b_hat].count() p_val = (1/m) * sum_count res.append({ 'Horizon': period, 'Beta-Bias': 
beta_sim[period].mean() - 0, # H0 is no predictability thus beta_0=0 'P-value': p_val }) pd.DataFrame(res) ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>Horizon</th> <th>Beta-Bias</th> <th>P-value</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>1</td> <td>-0.011046</td> <td>0.0004</td> </tr> <tr> <th>1</th> <td>2</td> <td>-0.022261</td> <td>0.0002</td> </tr> <tr> <th>2</th> <td>3</td> <td>-0.037642</td> <td>0.0000</td> </tr> <tr> <th>3</th> <td>4</td> <td>-0.047831</td> <td>0.0000</td> </tr> <tr> <th>4</th> <td>8</td> <td>-0.089239</td> <td>0.0046</td> </tr> <tr> <th>5</th> <td>12</td> <td>-0.121847</td> <td>0.0257</td> </tr> <tr> <th>6</th> <td>16</td> <td>-0.149551</td> <td>0.0712</td> </tr> <tr> <th>7</th> <td>24</td> <td>-0.192051</td> <td>0.0413</td> </tr> </tbody> </table> </div> ```python sns.histplot(beta_sim[1]) plt.axvline(results['Beta'].iloc[0], color='red') plt.xlabel(None) plt.ylabel(None) plt.title('Distribution of slope coef. assuming no predictability') plt.show() ``` #### Conclusion We see that the size of our bias is slightly negative thorugh-out the k horizons. In the plot for $k=1$ we see that only very few of the bootstrapped betas are greater than the original estimate in problem a) of 2.15. This implies that the predictive power of $\widehat{cay}$ is not driven by small-sample bias. The main reason why the small sample bias is less severe for $\widehat{cay}$ compared to price-scaled variables such as the price-dividend ratio or the CAPE ratio is that the return innovations and innovations in $\widehat{cay}$ have a low degree of correlation - this would not be the case for CAPE and the return innovations.
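As a quick check of this last claim, we can compute the sample correlation between the two residual series saved above (`e1`, the demeaned returns, and `e2`, the $\mathrm{AR}(1)$ innovations in $\widehat{cay}$); a value close to zero is what keeps the bias small here. This is a sketch added for illustration and the number is not reported in the assignment.

```python
# Correlation between return innovations and cay innovations (aligned by position).
rho = np.corrcoef(e1.values, e2.values)[0, 1]
print(f'Correlation between return and cay innovations: {rho:.3f}')
```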
9a72854d9a1c1d0f2b7dd58ecea47b434150409f
615,352
ipynb
Jupyter Notebook
Problem Set 3 - Time-Varying Risk Primea/My Solution/Problem Set 3 - Time-Varying Risk Premia.ipynb
ismand95/QFE
e80a902bc2d0147a604ee86414a7f7e9df92b5c9
[ "MIT" ]
null
null
null
Problem Set 3 - Time-Varying Risk Primea/My Solution/Problem Set 3 - Time-Varying Risk Premia.ipynb
ismand95/QFE
e80a902bc2d0147a604ee86414a7f7e9df92b5c9
[ "MIT" ]
null
null
null
Problem Set 3 - Time-Varying Risk Primea/My Solution/Problem Set 3 - Time-Varying Risk Premia.ipynb
ismand95/QFE
e80a902bc2d0147a604ee86414a7f7e9df92b5c9
[ "MIT" ]
1
2022-02-05T13:29:40.000Z
2022-02-05T13:29:40.000Z
61.937796
574
0.524019
true
20,458
Qwen/Qwen-72B
1. YES 2. YES
0.651355
0.826712
0.538483
__label__eng_Latn
0.312877
0.089405
<a href="https://colab.research.google.com/github/john-s-butler-dit/Numerical-Analysis-Python/blob/master/Chapter%2006%20-%20Boundary%20Value%20Problems/.ipynb_checkpoints/601_Linear%20Shooting%20Method-checkpoint.ipynb" target="_parent"></a> # Linear Shooting Method #### John S Butler john.s.butler@tudublin.ie [Course Notes](https://johnsbutler.netlify.com/files/Teaching/Numerical_Analysis_for_Differential_Equations.pdf) [Github](https://github.com/john-s-butler-dit/Numerical-Analysis-Python) ## Overview This notebook illustates the implentation of a the linear shooting method to a linear boundary value problem. ## Introduction To numerically approximate the Boundary Value Problem $$ y^{''}=p(x)y^{'}+q(x)y+r(x) \ \ \ a < x < b $$ $$y(a)=\alpha$$ $$y(b) =\beta$$ The Boundary Value Problem is divided into two Initial Value Problems: 1. The first 2nd order Initial Value Problem is the same as the original Boundary Value Problem with an extra initial condtion $y_1^{'}(a)=0$. \begin{equation} y^{''}_1=p(x)y^{'}_1+q(x)y_1+r(x), \ \ y_1(a)=\alpha, \ \ \color{green}{y^{'}_1(a)=0},\\ \end{equation} 2. The second 2nd order Initial Value Problem is the homogenous form of the original Boundary Value Problem, by removing $r(x)$, with the initial condtions $y_2(a)=0$ and $y_2^{'}(a)=1$. \begin{equation} y^{''}_2=p(x)y^{'}_2+q(x)y_2, \ \ \color{green}{y_2(a)=0, \ \ y^{'}_2(a)=1}. \end{equation} combining these intial values problems together to get the unique solution \begin{equation} y(x)=y_1(x)+\frac{\beta-y_1(b)}{y_2(b)}y_2(x), \end{equation} provided that $y_2(b)\not=0$. The truncation error for the shooting method is $$ |y_i - y(x_i)| \leq K h^n\left|1+\frac{y_{2 i}}{y_{2 i}}\right| $$ $O(h^n)$ is the order of the numerical method used to approximate the solution of the Initial Value Problems. ```python import numpy as np import math import matplotlib.pyplot as plt import pandas as pd import warnings warnings.filterwarnings("ignore") from IPython.display import HTML ``` <form action="javascript:code_toggle()"><input type="submit" value="Click here to toggle on/off the raw code."></form> ## Example Boundary Value Problem To illustrate the shooting method we shall apply it to the Boundary Value Problem: $$ y^{''}=2y^{'}+3y-6, $$ with boundary conditions $$y(0) = 3, $$ $$y(1) = e^3+2, $$ with the exact solution is $$y=e^{3x}+2. $$ The __boundary value problem__ is broken into two second order __Initial Value Problems:__ 1. The first 2nd order Intial Value Problem is the same as the original Boundary Value Problem with an extra initial condtion $u^{'}(0)=0$. \begin{equation} u^{''} =2u'+3u-6, \ \ \ \ u(0)=3, \ \ \ \color{green}{u^{'}(0)=0} \end{equation} 2. The second 2nd order Intial Value Problem is the homogenous form of the original Boundary Value Problem with the initial condtions $w^{'}(0)=0$ and $w^{'}(0)=1$. \begin{equation} w^{''} =2w^{'}+3w, \ \ \ \ \color{green}{w(0)=0}, \ \ \ \color{green}{w^{'}(0)=1} \end{equation} combining these results of these two intial value problems as a linear sum \begin{equation} y(x)=u(x)+\frac{e^{3x}+2-u(1)}{w(1)}w(x) \end{equation} gives the solution of the Boundary Value Problem. 
## Discrete Axis
The stepsize is defined as
$$h=\frac{b-a}{N},$$
here it is
$$h=\frac{1-0}{10},$$
giving
$$x_i=0+0.1 i$$
for $i=0,1,...10.$

```python
## BVP
N=10
h=1/N
x=np.linspace(0,1,N+1)
fig = plt.figure(figsize=(10,4))
plt.plot(x,0*x,'o:',color='red')
plt.xlim((0,1))
plt.title('Illustration of discrete time points for h=%s'%(h))
plt.show()
```

## Initial conditions
The initial conditions for the discrete equations are:
$$ u_1[0]=3$$
$$ \color{green}{u_2[0]=0}$$
$$ \color{green}{w_1[0]=0}$$
$$ \color{green}{w_2[0]=1}$$

```python
U1=np.zeros(N+1)
U2=np.zeros(N+1)
W1=np.zeros(N+1)
W2=np.zeros(N+1)
U1[0]=3
U2[0]=0
W1[0]=0
W2[0]=1
```

## Numerical method
The Euler method is applied to numerically approximate the solution of the system of two second order initial value problems; they are converted into two pairs of first order initial value problems.

### 1. Inhomogeneous Approximation
The plot below shows the numerical approximation of the two first order Initial Value Problems
\begin{equation}
u_1^{'} =u_2, \ \ \ \ u_1(0)=3,
\end{equation}
\begin{equation}
u_2^{'} =2u_2+3u_1-6, \ \ \ \color{green}{u_2(0)=0}.
\end{equation}
The Euler approximation of this inhomogeneous pair of Initial Value Problems is
$$u_{1}[i+1]=u_{1}[i] + h u_{2}[i]$$
$$u_{2}[i+1]=u_{2}[i] + h (2u_{2}[i]+3u_{1}[i] -6)$$
with $u_1[0]=3$ and $\color{green}{u_2[0]=0}$.

```python
for i in range (0,N):
    U1[i+1]=U1[i]+h*(U2[i])
    U2[i+1]=U2[i]+h*(2*U2[i]+3*U1[i]-6)
```

### Plots
The plot below shows the Euler approximation of the two initial value problems, $u_1$ on the left and $u_2$ on the right.

```python
fig = plt.figure(figsize=(12,4))
ax = fig.add_subplot(1,2,1)
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
plt.plot(x,U1,'^')
plt.title(r"$u_1'=u_2, \ \ u_1(0)=3$",fontsize=16)
plt.grid(True)
ax = fig.add_subplot(1,2,2)
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
plt.plot(x,U2,'v')
plt.title(r"$u_2'=2u_2+3u_1-6, \ \ u_2(0)=0$",
          fontsize=16)
plt.grid(True)
plt.show()
```

### 2. Homogeneous Approximation
The homogeneous Boundary Value Problem is divided into two first order Initial Value Problems
\begin{equation}
w_1^{'} =w_2, \ \ \ \ \color{green}{w_1(0)=0},
\end{equation}
\begin{equation}
w_2^{'} =2w_2+3w_1, \ \ \ \color{green}{w_2(0)=1}.
\end{equation}
The Euler approximation of this homogeneous pair of Initial Value Problems is
$$w_{1}[i+1]=w_{1}[i] + h w_{2}[i]$$
$$w_{2}[i+1]=w_{2}[i] + h (2w_{2}[i]+3w_{1}[i])$$
with $\color{green}{w_1[0]=0}$ and $\color{green}{w_2[0]=1}$.

```python
for i in range (0,N):
    W1[i+1]=W1[i]+h*(W2[i])
    W2[i+1]=W2[i]+h*(2*W2[i]+3*W1[i])
```

### Plots
The plot below shows the Euler approximation of the two initial value problems, $w_1$ on the left and $w_2$ on the right.

```python
fig = plt.figure(figsize=(12,4))
ax = fig.add_subplot(1,2,1)
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
plt.plot(x,W1,'^')
plt.grid(True)
plt.title(r"$w_1'=w_2, \ \ w_1(0)=0$",fontsize=16)
ax = fig.add_subplot(1,2,2)
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
plt.plot(x,W2,'v')
plt.grid(True)
plt.title(r"$w_2'=2w_2+3w_1, \ \ w_2(0)=1$",fontsize=16)
plt.tight_layout()
plt.subplots_adjust(top=0.85)
plt.show()

beta=math.exp(3)+2
y=U1+(beta-U1[N])/W1[N]*W1
```

## Approximate Solution
Combining the numerical approximations of $u_1$ and $w_1$ as the weighted sum
$$y(x[i])\approx u_{1}[i] + \frac{e^3+2-u_{1}[N]}{w_1[N]}w_{1}[i]$$
gives the approximate solution of the Boundary Value Problem.
The truncation error for the shooting method using the Euler method is
$$ |y_i - y(x[i])| \leq K h\left|1+\frac{w_{1}[i]}{u_{1}[i]}\right|, $$
where $O(h)$ is the order of the Euler method.

The plot below shows the approximate solution of the Boundary Value Problem (left), the exact solution (middle) and the error (right).

```python
Exact=np.exp(3*x)+2
fig = plt.figure(figsize=(12,4))
ax = fig.add_subplot(2,3,1)
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
plt.plot(x,y,'o')
plt.grid(True)
plt.title(r"Numerical: $u_1+\frac{e^3+2-u_1(N)}{w_1(N)}w_1$", fontsize=16)
ax = fig.add_subplot(2,3,2)
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
plt.plot(x,Exact,'ks-')
plt.grid(True)
plt.title(r"Exact: $y=e^{3x}+2$", fontsize=16)
ax = fig.add_subplot(2,3,3)
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
plt.plot(x,abs(y-Exact),'ro')
plt.grid(True)
plt.title(r"Error ",fontsize=16)
plt.tight_layout()
plt.subplots_adjust(top=0.85)
plt.show()
```

### Data
The table below shows the output for $x$, the Euler numerical approximations $U1$, $U2$, $W1$ and $W2$ of the system of four Initial Value Problems, the shooting method's approximate solution $y_i=u_{1 i} + \frac{e^3+2-u_{1}(x_N)}{w_1(x_N)}w_{1 i}$ and the exact solution of the Boundary Value Problem.

```python
# Collect the results in a pandas DataFrame for display.
d = {'x': np.round(x, 3),
     'U1': np.round(U1, 3), 'U2': np.round(U2, 3),
     'W1': np.round(W1, 5), 'W2': np.round(W2, 3),
     'Approx': np.round(y, 5), 'Exact': np.round(Exact, 5)}
df = pd.DataFrame(data=d)
df
```

<table><tr><td>x</td><td>U1</td><td>U2</td><td>W1</td><td>W2</td><td>Approx</td><td>Exact</td></tr>
<tr><td>0.0</td><td>3.0</td><td>0.0</td><td>0.0</td><td>1.0</td><td>3.0</td><td>3.0</td></tr>
<tr><td>0.1</td><td>3.0</td><td>0.3</td><td>0.1</td><td>1.2</td><td>3.48753</td><td>3.34986</td></tr>
<tr><td>0.2</td><td>3.03</td><td>0.66</td><td>0.22</td><td>1.47</td><td>4.10257</td><td>3.82212</td></tr>
<tr><td>0.3</td><td>3.096</td><td>1.101</td><td>0.367</td><td>1.83</td><td>4.88524</td><td>4.4596</td></tr>
<tr><td>0.4</td><td>3.206</td><td>1.65</td><td>0.55</td><td>2.306</td><td>5.88752</td><td>5.32012</td></tr>
<tr><td>0.5</td><td>3.371</td><td>2.342</td><td>0.78061</td><td>2.932</td><td>7.17681</td><td>6.48169</td></tr>
<tr><td>0.6</td><td>3.605</td><td>3.222</td><td>1.07384</td><td>3.753</td><td>8.84059</td><td>8.04965</td></tr>
<tr><td>0.7</td><td>3.927</td><td>4.347</td><td>1.44914</td><td>4.826</td><td>10.99242</td><td>10.16617</td></tr>
<tr><td>0.8</td><td>4.362</td><td>5.795</td><td>1.93171</td><td>6.226</td><td>13.77985</td><td>13.02318</td></tr>
<tr><td>0.9</td><td>4.942</td><td>7.663</td><td>2.55427</td><td>8.05</td><td>17.39453</td><td>16.87973</td></tr>
<tr><td>1.0</td><td>5.708</td><td>10.078</td><td>3.35929</td><td>10.427</td><td>22.08554</td><td>22.08554</td></tr></table>

```python

```
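As a closing aside (my own refactor, not part of the original notebook), the two Euler loops above apply the same update to different right-hand sides, so they can be wrapped in a single helper; the function name `euler_system` is mine.

```python
# Minimal helper applying the explicit Euler update s[i+1] = s[i] + h*f(x[i], s[i])
# to any first order system; reproduces the U1, U2, W1, W2 arrays above.
import numpy as np

def euler_system(f, s0, x):
    s = np.zeros((len(x), len(s0)))
    s[0] = s0
    for i in range(len(x) - 1):
        h = x[i + 1] - x[i]
        s[i + 1] = s[i] + h * np.asarray(f(x[i], s[i]))
    return s

x = np.linspace(0, 1, 11)
U = euler_system(lambda x, s: [s[1], 2*s[1] + 3*s[0] - 6], [3.0, 0.0], x)  # columns u1, u2
W = euler_system(lambda x, s: [s[1], 2*s[1] + 3*s[0]],     [0.0, 1.0], x)  # columns w1, w2

beta = np.exp(3) + 2
y_approx = U[:, 0] + (beta - U[-1, 0]) / W[-1, 0] * W[:, 0]  # same weighted sum as above
```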
fe8aebfd3867c1d715eb4e29632c8d4f393cba69
71,221
ipynb
Jupyter Notebook
Chapter 06 - Boundary Value Problems/.ipynb_checkpoints/601_Linear Shooting Method-checkpoint.ipynb
jjcrofts77/Numerical-Analysis-Python
97e4b9274397f969810581ff95f4026f361a56a2
[ "MIT" ]
69
2019-09-05T21:39:12.000Z
2022-03-26T14:00:25.000Z
Chapter 06 - Boundary Value Problems/.ipynb_checkpoints/601_Linear Shooting Method-checkpoint.ipynb
jjcrofts77/Numerical-Analysis-Python
97e4b9274397f969810581ff95f4026f361a56a2
[ "MIT" ]
null
null
null
Chapter 06 - Boundary Value Problems/.ipynb_checkpoints/601_Linear Shooting Method-checkpoint.ipynb
jjcrofts77/Numerical-Analysis-Python
97e4b9274397f969810581ff95f4026f361a56a2
[ "MIT" ]
13
2021-06-17T15:34:04.000Z
2022-01-14T14:53:43.000Z
121.745299
15,598
0.820306
true
3,777
Qwen/Qwen-72B
1. YES 2. YES
0.774583
0.79053
0.612332
__label__eng_Latn
0.643457
0.260982
## Surfinpy #### Tutorial 1 - Bulk Phase diagrams In this tutorial we learn how to generate a basic bulk phase diagram from DFT energies. This enables the comparison of the thermodynamic stability of various different bulk phases under different chemical potentials giving valuable insight in to the syntheis of solid phases. This example will consider a series of bulk phases which can be defined through a reaction scheme across all phases, thus for this example including an example bulk phase, H<sub>2</sub>O and CO<sub>2</sub> as reactions and A as a generic product. \begin{align} x\text{Bulk} + y\text{H}_2\text{O} + z\text{CO}_2 \rightarrow \text{A} \end{align} ##### Methodology The system is in equilibrium when the chemical potentials of the reactants and product are equal; <i>i.e.</i> the change in Gibbs free energy is $\delta G_{T,p} = 0$. \begin{align} \delta G_{T,p} = \mu_A - x\mu_{\text{Bulk}} - y\mu_{\text{H}_2\text{O}} - z\mu_{\text{CO}_2} = 0 \end{align} Assuming that H<sub>2</sub>O and CO<sub>2</sub> are gaseous species, $\mu_{CO_2}$ and $\mu_{H_2O}$ can be written as \begin{align} \mu_{\text{H}_2\text{O}} = \mu^0_{\text{H}_2\text{O}} + \Delta\mu_{\text{H}_2\text{O}} \end{align} and \begin{align} \mu_{\text{CO}_2} = \mu^0_{\text{CO}_2} + \Delta\mu_{\text{CO}_2} \end{align} The chemical potential $\mu^0_x$ is the partial molar free energy of any reactants or products (x) in their standard states, in this example we assume all solid components can be expressed as \begin{align} \mu_{\text{component}} = \mu^0_{\text{component}} \end{align} Hence, we can now rearrange the equations to produce; \begin{align} \mu^0_A - x\mu^0_{\text{Bulk}} - y\mu^0_{\text{H}_2\text{O}} - z\mu^0_{\text{CO}_2} = y\Delta\mu_{\text{H}_2\text{O}} + z\Delta\mu_{\text{CO}_2} \end{align} As $\mu^0_A$ corresponds to the partial molar free energy of product A, we can replace the left side with the Gibbs free energy ($\Delta G_{\text{f}}^0$). \begin{align} \delta G_{T,p} = \Delta G_{\text{f}}^0 - y\Delta\mu_{\text{H}_2\text{O}} - z\Delta\mu_{\text{CO}_2} \end{align} At equilibrium $\delta G_{T,p} = 0$, and hence \begin{align} \Delta G_{\text{f}}^0 = y\Delta\mu_{\text{H}_2\text{O}} + z\Delta\mu_{\text{CO}_2} \end{align} Thus, we can find the values of $\Delta\mu_{\text{H}_2\text{O}}$ and $\Delta\mu_{\text{CO}_2}$ (or $(p_{\text{H}_2\text{O}})^y$ and $(p_{\text{CO}_2})^z$) when solid phases are in thermodynamic equilibrium; <i>i.e.</i> they are more or less stable than bulk. This procedure can then be applied to all phases to identify which is the most stable, provided that the free energy $\Delta G_f^0$ is known for each phase. The free energy can be calculated using \begin{align} \Delta G^{0}_{f} = \sum\Delta G_{f}^{0,\text{products}} - \sum\Delta G_{f}^{0,\text{reactants}} \end{align} Where for this tutorial the free energy (G) is equal to the calculated DFT energy (U<sub>0</sub>). ```python import matplotlib.pyplot as plt from surfinpy import bulk_mu_vs_mu as bmvm from surfinpy import utils as ut from surfinpy import data ``` The first thing to do is input the data that we have generated from our DFT simulations. The input data needs to be contained within a `surfinpy.data` object. First we have created a `surfinpy.data.ReferenceDataSet` object for the bulk data (reference data), where `cation` is the number of cations, `anion` is the number of anions, `energy` is the DFT energy and `funits` is the number of formula units. 
```python bulk = data.ReferenceDataSet(cation = 1, anion = 1, energy = -92.0, funits = 10) ``` Next we create the bulk `surfinpy.data.DataSet` objects - one for each surface or "phase". `cation` is the number of cations, `x` is in this case the number of oxygen species (corresponding to the X axis of the phase diagram), `y` is the number of in this case water molecules (corresponding to the Y axis of our phase diagram), `energy` is the DFT energy, `label` is the label for the phase (appears on the phase diagram) and finally `nSpecies` is the number of adsorbin species. ```python Bulk = data.DataSet(cation = 10, x = 0, y = 0, energy = -92.0, label = "Bulk") A = data.DataSet(cation = 10, x = 5, y = 20, energy = -468.0, label = "A") B = data.DataSet(cation = 10, x = 0, y = 10, energy = -228.0, label = "B") C = data.DataSet(cation = 10, x = 10, y = 30, energy = -706.0, label = "C") D = data.DataSet(cation = 10, x = 10, y = 0, energy = -310.0, label = "D") E = data.DataSet(cation = 10, x = 10, y = 50, energy = -972.0, label = "E") F = data.DataSet(cation = 10, x = 8, y = 10, energy = -398.0, label = "F") ``` Next we need to create a list of our data. Don't worry about the order, surfinpy will sort that out for you. ```python data = [Bulk, A, B, C, D, E, F] ``` We now need to generate our X and Y axis, or more appropriately, our chemical potential values. These exist in a dictionary. 'Range' corresponds to the range of chemcial potential values to be considered and 'Label' is the axis label. Additionally, the x and y energy need to be specified. ``` deltaX = {'Range': Range of Chemical Potential, 'Label': Species Label} ``` ```python deltaX = {'Range': [ -3, 2], 'Label': 'CO_2'} deltaY = {'Range': [ -3, 2], 'Label': 'H_2O'} x_energy=-20.53412969 y_energy=-12.83725889 ``` And finally we can generate our plot using these 6 variables of data. ```python system = bmvm.calculate(data, bulk, deltaX, deltaY, x_energy, y_energy) ax = system.plot_phase(figsize=(6, 4.5)) plt.show() ``` ```python ```
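The reaction-scheme bookkeeping from the methodology section can also be checked by hand. The sketch below is my own stand-alone illustration and not a surfinpy routine: it simply evaluates $\Delta G^{0}_{f} = \sum\Delta G_{f}^{0,\text{products}} - \sum\Delta G_{f}^{0,\text{reactants}}$ with $G \approx U_0$, treating `x` as the CO<sub>2</sub> count and `y` as the H<sub>2</sub>O count (an assumption based on the axis labels above), and applying no per-formula-unit normalisation.

```python
# Stand-alone sketch (not surfinpy internals): formation energy of phase A from
# Bulk + y H2O + x CO2, using the example DFT energies defined above.
def formation_energy(product, bulk, n_co2, n_h2o, e_co2, e_h2o):
    return product - bulk - n_co2 * e_co2 - n_h2o * e_h2o

dG_A = formation_energy(product=-468.0, bulk=-92.0,
                        n_co2=5, n_h2o=20,
                        e_co2=-20.53412969, e_h2o=-12.83725889)
print(dG_A)  # about -16.58 for these example numbers
```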
473efcc3b88b756d1bb73106697b1bc077d831d8
352,147
ipynb
Jupyter Notebook
examples/Notebooks/Bulk/Tutorial_1.ipynb
jstse/SurfinPy
ff3a79f9415c170885e109ab881368271f3dcc19
[ "MIT" ]
null
null
null
examples/Notebooks/Bulk/Tutorial_1.ipynb
jstse/SurfinPy
ff3a79f9415c170885e109ab881368271f3dcc19
[ "MIT" ]
null
null
null
examples/Notebooks/Bulk/Tutorial_1.ipynb
jstse/SurfinPy
ff3a79f9415c170885e109ab881368271f3dcc19
[ "MIT" ]
null
null
null
1,443.22541
333,913
0.729812
true
1,809
Qwen/Qwen-72B
1. YES 2. YES
0.72487
0.715424
0.51859
__label__eng_Latn
0.978202
0.043187
# Probability

A trial, experiment or observation is an event with an unknown outcome. All possible outcomes of the trial are called the sample space, and the particular outcomes being looked for are known as events. For example, if the trial is flipping a coin the sample space is heads or tails. If the trial is rolling a six sided die, looking for an odd number, the sample space is `{1, 2, 3, 4, 5, 6}` and the events are `{1, 3, 5}`.

Trials are considered independent if the outcome of one doesn't affect the others, and events are considered mutually exclusive if they cannot occur at the same time.

The probability of an event is always between 0 and 1. Zero indicates that there is no chance of the event occurring, whilst one means that the event is certain to occur. Probability statements are usually written as $P(E) = 0.25$, meaning that the probability of event $E$ occurring is 0.25, or 25%. The probability that event $E$ will occur given the occurrence of event $F$ is usually written as $P(E|F)$. If the two events were independent $P(E|F) = P(E)$ would be true.

## Intersection

A compound event that occurs when *all* of its constituent events occur.

$$
\begin{align}
E & = \{1, 3\} \\
F & = \{1, 2\} \\
E \cap F & = \{1\}
\end{align}
$$

If the events are independent the probability of the intersection is the product of the probabilities for all of the events.

$$P(E \cap F) = P(E)P(F)$$

If the events are not independent we multiply by the conditional probability.

$$P(E \cap F) = P(E)P(F|E)$$

## Union

A compound event that occurs if *at least one* of its constituent events has occurred.

$$
\begin{align}
E & = \{1, 3\} \\
F & = \{1, 2\} \\
E \cup F & = \{1, 2, 3\}
\end{align}
$$

If the events are mutually exclusive the probability of the union is the sum of the probabilities for all of the events.

$$P(E \cup F) = P(E) + P(F)$$

If the events aren't mutually exclusive we need to account for any intersection.

$$P(E \cup F) = P(E) + P(F) - P(E \cap F)$$

## Complement

All outcomes in the sample space that are not part of the event.

$$
\begin{align}
\text{Sample Space} & = \{1, 2, 3, 4, 5, 6\} \\
E & = \{1, 3\} \\
E' & = \{2, 4, 5, 6\}
\end{align}
$$

The complement can also be denoted with $\bar{E}$ or $E^c$. An event and its complement define a Bernoulli trial.

## Permutations and Combinations

The number of permutations of subsets of size $k$, drawn from a set of size $n$:

$$nPk = \frac{n!}{(n - k)!}$$

The number of combinations:

$$nCk = \frac{n!}{k!(n-k)!} = \frac{nPk}{k!}$$

There are always fewer combinations than permutations because a different order of the same elements is a different permutation, but not a different combination.

```python

```
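To make the counting formulas and the union rule concrete, here is a small self-contained illustration (my addition, not part of the original notes) using only the Python standard library and the die example above.

```python
# Permutations and combinations of k = 3 faces drawn from a six sided die.
from math import factorial

def nPk(n, k):
    return factorial(n) // factorial(n - k)

def nCk(n, k):
    return factorial(n) // (factorial(k) * factorial(n - k))

print(nPk(6, 3), nCk(6, 3))  # 120 ordered draws, 20 unordered draws

# Union of the non mutually exclusive events E = {1, 3} and F = {1, 2}.
sample = {1, 2, 3, 4, 5, 6}
E, F = {1, 3}, {1, 2}

def P(event):
    return len(event) / len(sample)

print(P(E | F), P(E) + P(F) - P(E & F))  # both equal 0.5
```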
7c2b39a8e251e54b0453b7be89b4ba416f9459c5
4,413
ipynb
Jupyter Notebook
statistics/probability.ipynb
mostlyoxygen/braindump
6ef57bbb0444b2bd78ff408af4fdc58a9ade46fc
[ "CC0-1.0" ]
null
null
null
statistics/probability.ipynb
mostlyoxygen/braindump
6ef57bbb0444b2bd78ff408af4fdc58a9ade46fc
[ "CC0-1.0" ]
null
null
null
statistics/probability.ipynb
mostlyoxygen/braindump
6ef57bbb0444b2bd78ff408af4fdc58a9ade46fc
[ "CC0-1.0" ]
null
null
null
32.688889
89
0.549739
true
818
Qwen/Qwen-72B
1. YES 2. YES
0.952574
0.833325
0.793803
__label__eng_Latn
0.999672
0.682604
# Solve equation systems with SymPy

Once in a while you need to solve simple equation systems. I have found that using SymPy for this is a much better option than pen and paper, where I usually make mistakes. Here are some short examples...

```python
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load

import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import sympy as sp

# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory

import os
for dirname, _, filenames in os.walk('/kaggle/input'):
    for filename in filenames:
        print(os.path.join(dirname, filename))

# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
```

## Linear system

```python
x,y,z = sp.symbols('x y z')
```

```python
eq_1 = sp.Eq(lhs=y, rhs=2*x)
eq_1
```

```python
eq_2 = sp.Eq(lhs=z, rhs=3*y)
eq_2
```

```python
eq_2.subs(y,sp.solve(eq_1,y)[0])
```

We try to use *solve* to get the expression for **z**:

```python
eqs = (
    eq_1,
    eq_2,
)

solution = sp.solve(eqs, [z])
solution
```

...so this is giving us **eq_2**, which we kind of know already. So we must pass the list [y,z] to solve both equations.

```python
eqs = (
    eq_1,
    eq_2,
)

solution = sp.solve(eqs, [y,z])
solution
```

```python
solution[z]
```

## Quadratic

```python
eq_3 = sp.Eq(lhs=z**2, rhs=2*y)
eq_3
```

```python
eq_3.subs(y,sp.solve(eq_1,y)[0])
```

```python
eqs = (
    eq_1,
    eq_3,
)

solution = sp.solve(eqs, [y,z])
solution
```

Since there are now two solutions, they are given in a list where each item contains the solution for **y** and **z**.

```python
solution[0][-1]
```

```python
solution[1][-1]
```

But in order to make this less confusing the *dict* flag can be set to True:

```python
eqs = (
    eq_1,
    eq_3,
)

solution = sp.solve(eqs, [y,z], dict=True)
solution
```

## Changing the order of symbols:

```python
eqs = (
    eq_1,
    eq_3,
)

solution = sp.solve(eqs, [z,y])
solution
```

... this swaps the order of the solutions.

## Changing the order of equations:

```python
eqs = (
    eq_3,
    eq_1,
)

solution = sp.solve(eqs, [y,z])
solution
```

...this has no effect.
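As a final aside (my addition, not from the original notebook), SymPy also provides `linsolve` for purely linear systems, which returns the solution set as a `FiniteSet`:

```python
# linsolve handles the linear system from the first section in one call.
import sympy as sp

x, y, z = sp.symbols('x y z')
sol = sp.linsolve([sp.Eq(y, 2*x), sp.Eq(z, 3*y)], [y, z])
print(sol)  # {(2*x, 6*x)}, i.e. y = 2*x and z = 6*x
```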
d41450a8092d350f579b02e2d7ee253c28a6a459
5,184
ipynb
Jupyter Notebook
kernels/sympy-solve/sympy-solve.ipynb
martinlarsalbert/kaggle
5f75b0b7bf6adf1f5c9c20c2c3d4e1f6670716ac
[ "MIT" ]
null
null
null
kernels/sympy-solve/sympy-solve.ipynb
martinlarsalbert/kaggle
5f75b0b7bf6adf1f5c9c20c2c3d4e1f6670716ac
[ "MIT" ]
null
null
null
kernels/sympy-solve/sympy-solve.ipynb
martinlarsalbert/kaggle
5f75b0b7bf6adf1f5c9c20c2c3d4e1f6670716ac
[ "MIT" ]
null
null
null
5,184
5,184
0.685764
true
774
Qwen/Qwen-72B
1. YES 2. YES
0.941654
0.937211
0.882528
__label__eng_Latn
0.994206
0.888743
# "Social network Graph Link Prediction - Facebook Challenge"
> "Given records of people's unique Ids, our task is to find out whether they are friends or not and suggest to each user their probable top 5 friend recommendations."

- toc: false
- branch: master
- badges: true
- comments: true
- author: Sai Kumar Reddy Pochireddygari
- categories: [Machine Learning, Statistics, EDA, Data Science, Graph]

### Problem statement:
Given a directed social graph, we have to predict missing links in order to recommend users (link prediction in a graph).

### Data Overview
Data taken from Facebook's recruiting challenge on Kaggle https://www.kaggle.com/c/FacebookRecruiting. The data contains two columns, source and destination, for each edge in the graph.
- Data columns (total 2 columns):
    - source_node         int64
    - destination_node    int64

### Mapping the problem into a supervised learning problem:
- Generated training samples of good and bad links from the given directed graph, and for each link computed features such as the number of followers, whether the edge is followed back, page rank, Katz score, Adar index, some SVD features of the adjacency matrix, some weight features etc., then trained an ML model on these features to predict the link.
- Some reference papers and videos :
    - https://www.cs.cornell.edu/home/kleinber/link-pred.pdf
    - https://www3.nd.edu/~dial/publications/lichtenwalter2010new.pdf
    - https://kaggle2.blob.core.windows.net/forum-message-attachments/2594/supervised_link_prediction.pdf
    - https://www.youtube.com/watch?v=2M77Hgy17cg

### Business objectives and constraints:
- No low-latency requirement.
- Probability of prediction is useful, to recommend the highest probability links.

### Performance metric for supervised learning:
- Both precision and recall are important, so F1 score is a good choice.
- Confusion matrix

```python
#Importing Libraries
# please do go through this python notebook:
import warnings
warnings.filterwarnings("ignore")

import csv
import pandas as pd        # pandas to create small dataframes
import datetime            # Convert to unix time
import time                # Convert to unix time
# if numpy is not installed already : pip3 install numpy
import numpy as np         # Do arithmetic operations on arrays
# matplotlib: used to plot graphs
import matplotlib
import matplotlib.pylab as plt
import seaborn as sns      # Plots
from matplotlib import rcParams                        # Size of plots
from sklearn.cluster import MiniBatchKMeans, KMeans    # Clustering
import math
import pickle
import os
# to install xgboost: pip3 install xgboost
import xgboost as xgb

import warnings
import networkx as nx
import pdb
import pickle
from sklearn.model_selection import GridSearchCV
```

```python
traincsv=pd.read_csv("/content/drive/My Drive/train.csv")
```

<div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>source_node</th> <th>destination_node</th> </tr> </thead> <tbody> </tbody> </table> </div>

```python
#reading graph
if not os.path.isfile('/content/drive/My Drive/train_woheader.csv'):
    traincsv=pd.read_csv('/content/drive/My Drive/train.csv')
    print(traincsv[traincsv.isna().any(1)])
    print(traincsv.info())
    print("Number of Duplicate Entries are :", sum(traincsv.duplicated()))
    traincsv.to_csv("/content/drive/My Drive/train_woheader.csv" , header = False , index = False)
    print("saved data into the file")
else:
    g = nx.read_edgelist("/content/drive/My Drive/train_woheader.csv", delimiter= ',',create_using= nx.DiGraph()
,nodetype=int) print(nx.info(g)) ``` Empty DataFrame Columns: [source_node, destination_node] Index: [] <class 'pandas.core.frame.DataFrame'> RangeIndex: 9437519 entries, 0 to 9437518 Data columns (total 2 columns): # Column Dtype --- ------ ----- 0 source_node int64 1 destination_node int64 dtypes: int64(2) memory usage: 144.0 MB None Number of Duplicate Entries are : 0 saved data into the file > Displaying a sub graph ```python if not os.path.isfile('/content/drive/My Drive/train_woheader_sample.csv'): pd.read_csv('/content/drive/My Drive/train_woheader.csv',nrows=50).to_csv("/content/drive/My Drive/train_woheader_sample.csv",index=False,header=False) subgraph=nx.read_edgelist("/content/drive/My Drive/train_woheader_sample.csv",delimiter=',',create_using=nx.DiGraph(),nodetype=int) pos=nx.spring_layout(subgraph) nx.draw(subgraph,pos,node_color='#A0CBE2',edge_color='#00bb5e',width=1,edge_cmap=plt.cm.Blues,with_labels=True) plt.savefig("graph_sample.pdf") print(nx.info(subgraph)) ``` # 1. Exploratory Data Analysis ```python # graph creation g = nx.read_edgelist("/content/drive/My Drive/train_woheader.csv", delimiter= ',',create_using= nx.DiGraph() ,nodetype=int) ``` ```python # No of Unique persons print("The number of unique persons are:{count}".format(count = len(g.nodes()))) ``` The number of unique persons are:1862220 ## 1.1 No of followers for each person ```python indegree_dist = list(dict(g.in_degree()).values()) indegree_dist.sort() plt.figure(figsize=(10,6)) plt.plot(indegree_dist) plt.xlabel('Index No') plt.grid('box') plt.ylabel('No Of Followers') plt.show() ``` *Most of the users have followers less than 50 but at last there are few users who have more than 50 followers* ```python indegree_dist = list(dict(g.in_degree()).values()) indegree_dist.sort() plt.figure(figsize=(10,6)) plt.plot(indegree_dist[0:1500000]) plt.xlabel('Index No') plt.grid('box') plt.ylabel('No Of Followers') plt.show() ``` *we can observe that until 1.5 M users have only atmost 7 followers and starting until 2 lakh followers have 0 followers* ```python # boxplot plt.boxplot(indegree_dist) plt.ylabel('No Of Followers') plt.grid('box') plt.show() ``` *By seeing the above box plot we cant notice 25,50,75 percentiles just by seeing and most of them have 0-10 followers just by looking in the graph,Rest of the portion up are outliers who has crazy numbe of followers* Lets check out some of the percentiles to get the sense of the number of followers:- ```python for i in range(90,101): print(i," Percentile of users have followers less than or equal to:" , np.percentile(indegree_dist , i)) ``` 90 Percentile of users have followers less than or equal to: 12.0 91 Percentile of users have followers less than or equal to: 13.0 92 Percentile of users have followers less than or equal to: 14.0 93 Percentile of users have followers less than or equal to: 15.0 94 Percentile of users have followers less than or equal to: 17.0 95 Percentile of users have followers less than or equal to: 19.0 96 Percentile of users have followers less than or equal to: 21.0 97 Percentile of users have followers less than or equal to: 24.0 98 Percentile of users have followers less than or equal to: 29.0 99 Percentile of users have followers less than or equal to: 40.0 100 Percentile of users have followers less than or equal to: 552.0 *By seeing the above data we can notice that 90 percentile just have followers lessthan equal to 12 and there is one user with 552 followers lets look at 99-100 percentiles data* ```python for i in range(1,10): 
print(99+(i/10)," Percentile of users have followers less than or equal to:" , np.percentile(indegree_dist , 99+(i/10))) print(100," Percentile of users have followers less than or equal to:" , np.percentile(indegree_dist ,100)) ``` 99.1 Percentile of users have followers less than or equal to: 42.0 99.2 Percentile of users have followers less than or equal to: 44.0 99.3 Percentile of users have followers less than or equal to: 47.0 99.4 Percentile of users have followers less than or equal to: 50.0 99.5 Percentile of users have followers less than or equal to: 55.0 99.6 Percentile of users have followers less than or equal to: 61.0 99.7 Percentile of users have followers less than or equal to: 70.0 99.8 Percentile of users have followers less than or equal to: 84.0 99.9 Percentile of users have followers less than or equal to: 112.0 100 Percentile of users have followers less than or equal to: 552.0 *By seeing the above data we can notice that 99.1 - 99.9 percentile have followers lessthan equal to 112 and there is one of user with 552 followers* ```python plt.figure(figsize=(13,7)) sns.distplot(indegree_dist, color='#16A085') plt.title("Pdf of followers in degree") plt.xlabel("no of followers") plt.ylabel("count of users ") plt.grid('box') plt.show() ``` * By observing the pdf we can see that only 0.001 percent of the users has 500 plus followers ## 1.2 No of people each person is following ```python outdegree_dist = list(dict(g.out_degree()).values()) outdegree_dist.sort() plt.figure(figsize=(10,6)) plt.plot(outdegree_dist) plt.xlabel('Index No') plt.grid('box') plt.ylabel('No Of Followers') plt.show() ``` *Most of the 1.5m users are following less than 30 but at last there are few users who are following more than 50 followers* ```python outdegree_dist = list(dict(g.out_degree()).values()) outdegree_dist.sort() plt.figure(figsize=(10,6)) plt.plot(outdegree_dist[0:1500000]) plt.xlabel('Index No') plt.grid('box') plt.ylabel('No Of Followers') plt.show() ``` *we can observe that until 1.5 M users have only atmost 7 followers and starting 2 lakh users are following 0 users* ```python # boxplot plt.boxplot(outdegree_dist) plt.ylabel('No Of Followers') plt.grid('box') plt.show() ``` *By seeing the above box plot we can't notice 25,50,75 percentiles just by seeing and most of them are 0-1 followers just by looking in the graph,Rest of the portion up are outliers who are some following* Lets check out some of the percentiles to get the sense of the number following:- ```python for i in range(90,101): print(i," Percentile of users following less than or equal to:" , np.percentile(outdegree_dist , i)) ``` 90 Percentile of users following less than or equal to: 12.0 91 Percentile of users following less than or equal to: 13.0 92 Percentile of users following less than or equal to: 14.0 93 Percentile of users following less than or equal to: 15.0 94 Percentile of users following less than or equal to: 17.0 95 Percentile of users following less than or equal to: 19.0 96 Percentile of users following less than or equal to: 21.0 97 Percentile of users following less than or equal to: 24.0 98 Percentile of users following less than or equal to: 29.0 99 Percentile of users following less than or equal to: 40.0 100 Percentile of users following less than or equal to: 1566.0 *By seeing the above data we can notice that 90 percentile users are following lessthan equal to 12 and there is one user with 1500+ followers lets look at 99-100 percentiles data* ```python for i in range(1,10): print(99+(i/10)," 
Percentile of users have followers less than or equal to:" , np.percentile(outdegree_dist , 99+(i/10))) print(100," Percentile of users have followers less than or equal to:" , np.percentile(outdegree_dist ,100)) ``` 99.1 Percentile of users have followers less than or equal to: 42.0 99.2 Percentile of users have followers less than or equal to: 45.0 99.3 Percentile of users have followers less than or equal to: 48.0 99.4 Percentile of users have followers less than or equal to: 52.0 99.5 Percentile of users have followers less than or equal to: 56.0 99.6 Percentile of users have followers less than or equal to: 63.0 99.7 Percentile of users have followers less than or equal to: 73.0 99.8 Percentile of users have followers less than or equal to: 90.0 99.9 Percentile of users have followers less than or equal to: 123.0 100 Percentile of users have followers less than or equal to: 1566.0 *By seeing the above data we can notice that 99.1 - 99.9 percentile are following lessthan equal to 123 and there is one user following 1566 users* ```python plt.figure(figsize=(13,7)) sns.distplot(outdegree_dist, color='#16A085') plt.title("Pdf of followers in degree") plt.xlabel("no of followers") plt.ylabel("count of users ") plt.grid('box') plt.show() ``` * By observing the pdf we can see that only 0.001 percent of the user is following 1500 plus users. ```python print('No of persons those are not following anyone are' ,sum(np.array(outdegree_dist)==0),'and % is', sum(np.array(outdegree_dist)==0)*100/len(outdegree_dist) ) ``` No of persons those are not following anyone are 274512 and % is 14.741115442858524 ```python print('No of persons having zero followers are' ,sum(np.array(indegree_dist)==0),'and % is', sum(np.array(indegree_dist)==0)*100/len(indegree_dist) ) ``` No of persons having zero followers are 188043 and % is 10.097786512871734 ## 1.3 both followers + following ```python count=0 for i in g.nodes(): if len(list(g.predecessors(i)))==0 : if len(list(g.successors(i)))==0: count+=1 print('No of persons those are not not following anyone and also not having any followers are',count) ``` No of persons those are not not following anyone and also not having any followers are 0 ```python from collections import Counter dict_in = dict(g.in_degree()) dict_out = dict(g.out_degree()) d = Counter(dict_in) + Counter(dict_out) in_out_degree = np.array(list(d.values())) ``` ```python in_out_degree_sort = sorted(in_out_degree) plt.figure(figsize=(10,6)) sns.distplot(in_out_degree_sort, color='#16A085') plt.xlabel('Index No') plt.ylabel('No Of people each person is following + followers') plt.show() ``` *By graph we can see that very few people are following n has less follwers less users and only one user is following and has followers more than 1500* ```python in_out_degree_sort = sorted(in_out_degree) plt.figure(figsize=(10,6)) plt.plot(in_out_degree_sort[0:1500000]) plt.xlabel('Index No') plt.grid('box') plt.ylabel('No Of people each person is following + followers') plt.show() ``` *By graph we can see that less than 1.4m people has 14 or less number of follwers and following* ```python ### 90-100 percentile for i in range(0,11): print(90+i,'percentile value is',np.percentile(in_out_degree_sort,90+i)) ``` 90 percentile value is 24.0 91 percentile value is 26.0 92 percentile value is 28.0 93 percentile value is 31.0 94 percentile value is 33.0 95 percentile value is 37.0 96 percentile value is 41.0 97 percentile value is 48.0 98 percentile value is 58.0 99 percentile value is 79.0 100 percentile value is 
1579.0 *We can see that 99 percentile of the users are having and following less than or equal to 79 of the users* ```python ### 99-100 percentile for i in range(10,110,10): print(99+(i/100),'percentile value is',np.percentile(in_out_degree_sort,99+(i/100))) ``` 99.1 percentile value is 83.0 99.2 percentile value is 87.0 99.3 percentile value is 93.0 99.4 percentile value is 99.0 99.5 percentile value is 108.0 99.6 percentile value is 120.0 99.7 percentile value is 138.0 99.8 percentile value is 168.0 99.9 percentile value is 221.0 100.0 percentile value is 1579.0 *We can see that only one user is having and following 1500 plus users* ```python print('Min of no of followers + following is',in_out_degree.min()) print(np.sum(in_out_degree==in_out_degree.min()),' persons having minimum no of followers + following') ``` Min of no of followers + following is 1 334291 persons having minimum no of followers + following ```python print('Max of no of followers + following is',in_out_degree.max()) print(np.sum(in_out_degree==in_out_degree.max()),' persons having maximum no of followers + following') ``` Max of no of followers + following is 1579 1 persons having maximum no of followers + following ```python print('No of persons having followers + following less than 10 are',np.sum(in_out_degree<10)) ``` No of persons having followers + following less than 10 are 1320326 ```python print('No of weakly connected components',len(list(nx.weakly_connected_components(g)))) count=0 for i in list(nx.weakly_connected_components(g)): if len(i)==2: count+=1 print('weakly connected components with 2 nodes',count) ``` No of weakly connected components 45558 weakly connected components with 2 nodes 32195 ## 2.1 Generating some edges which are not present in graph for supervised learning Generated Bad links from graph which are not in graph and whose shortest path is greater than 2. 
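Before the full-scale generation in the next cell, here is a toy-scale sketch of the same negative-sampling rule (my own example on a random graph, not the notebook's data): a candidate non-edge is kept only if its endpoints are more than 2 hops apart, or unreachable, in the directed graph.

```python
# Toy illustration of the bad-edge sampling rule used below.
import random
import networkx as nx

toy = nx.gnp_random_graph(50, 0.05, seed=7, directed=True)
bad_edges = set()
while len(bad_edges) < 20:
    a, b = random.sample(range(50), 2)          # two distinct nodes
    if toy.has_edge(a, b):
        continue                                # skip real edges
    try:
        if nx.shortest_path_length(toy, source=a, target=b) > 2:
            bad_edges.add((a, b))
    except nx.NetworkXNoPath:
        bad_edges.add((a, b))                   # unreachable pairs also count
print(len(bad_edges))
```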
```python %%time ###generating bad edges from given graph import random if not os.path.isfile('/content/drive/My Drive/missing_edges_final.p'): #getting all set of edges r = csv.reader(open('/content/drive/My Drive/train_woheader.csv','r')) edges = dict() for edge in r: edges[(edge[0], edge[1])] = 1 missing_edges = set([]) while (len(missing_edges)<9437519): a=random.randint(1, 1862220) b=random.randint(1, 1862220) tmp = edges.get((a,b),-1) if tmp == -1 and a!=b: try: if nx.shortest_path_length(g,source=a,target=b) > 2: missing_edges.add((a,b)) else: continue except: missing_edges.add((a,b)) else: continue pickle.dump(missing_edges,open('/content/drive/My Drive/missing_edges_final.p','wb')) else: missing_edges = pickle.load(open('/content/drive/My Drive/missing_edges_final.p','rb')) ``` CPU times: user 5.09 s, sys: 812 ms, total: 5.91 s Wall time: 6.07 s ```python len(missing_edges) ``` 9437519 ## 2.2 Training and Test data split: Removed edges from Graph and used as test data and after removing used that graph for creating features for Train and test data ```python from sklearn.model_selection import train_test_split if (not os.path.isfile('/content/drive/My Drive/train_pos_after_eda.csv')) and (not os.path.isfile('/content/drive/My Drive/test_pos_after_eda.csv')): #reading total data df df_pos = pd.read_csv('/content/drive/My Drive/train.csv') df_neg = pd.DataFrame(list(missing_edges), columns=['source_node', 'destination_node']) print("Number of nodes in the graph with edges", df_pos.shape[0]) print("Number of nodes in the graph without edges", df_neg.shape[0]) #Trian test split #Spiltted data into 80-20 #positive links and negative links seperatly because we need positive training data only for creating graph #and for feature generation X_train_pos, X_test_pos, y_train_pos, y_test_pos = train_test_split(df_pos,np.ones(len(df_pos)),test_size=0.2, random_state=9) X_train_neg, X_test_neg, y_train_neg, y_test_neg = train_test_split(df_neg,np.zeros(len(df_neg)),test_size=0.2, random_state=9) print('='*60) print("Number of nodes in the train data graph with edges", X_train_pos.shape[0],"=",y_train_pos.shape[0]) print("Number of nodes in the train data graph without edges", X_train_neg.shape[0],"=", y_train_neg.shape[0]) print('='*60) print("Number of nodes in the test data graph with edges", X_test_pos.shape[0],"=",y_test_pos.shape[0]) print("Number of nodes in the test data graph without edges", X_test_neg.shape[0],"=",y_test_neg.shape[0]) #removing header and saving X_train_pos.to_csv('/content/drive/My Drive/train_pos_after_eda.csv',header=False, index=False) X_test_pos.to_csv('/content/drive/My Drive/test_pos_after_eda.csv',header=False, index=False) X_train_neg.to_csv('/content/drive/My Drive/train_neg_after_eda.csv',header=False, index=False) X_test_neg.to_csv('/content/drive/My Drive/test_neg_after_eda.csv',header=False, index=False) else: #Graph from Traing data only del missing_edges ``` Number of nodes in the graph with edges 9437519 Number of nodes in the graph without edges 9437519 ============================================================ Number of nodes in the train data graph with edges 7550015 = 7550015 Number of nodes in the train data graph without edges 7550015 = 7550015 ============================================================ Number of nodes in the test data graph with edges 1887504 = 1887504 Number of nodes in the test data graph without edges 1887504 = 1887504 ```python if (os.path.isfile('/content/drive/My Drive/train_pos_after_eda.csv')) and 
(os.path.isfile('/content/drive/My Drive/test_pos_after_eda.csv')): train_graph=nx.read_edgelist('/content/drive/My Drive/train_pos_after_eda.csv',delimiter=',',create_using=nx.DiGraph(),nodetype=int) test_graph=nx.read_edgelist('/content/drive/My Drive/test_pos_after_eda.csv',delimiter=',',create_using=nx.DiGraph(),nodetype=int) print(nx.info(train_graph)) print(nx.info(test_graph)) # finding the unique nodes in the both train and test graphs train_nodes_pos = set(train_graph.nodes()) test_nodes_pos = set(test_graph.nodes()) trY_teY = len(train_nodes_pos.intersection(test_nodes_pos)) trY_teN = len(train_nodes_pos - test_nodes_pos) teY_trN = len(test_nodes_pos - train_nodes_pos) print('no of people common in train and test -- ',trY_teY) print('no of people present in train but not present in test -- ',trY_teN) print('no of people present in test but not present in train -- ',teY_trN) print(' % of people not there in Train but exist in Test in total Test data are {} %'.format(teY_trN/len(test_nodes_pos)*100)) ``` Name: Type: DiGraph Number of nodes: 1780722 Number of edges: 7550015 Average in degree: 4.2399 Average out degree: 4.2399 Name: Type: DiGraph Number of nodes: 1144623 Number of edges: 1887504 Average in degree: 1.6490 Average out degree: 1.6490 no of people common in train and test -- 1063125 no of people present in train but not present in test -- 717597 no of people present in test but not present in train -- 81498 % of people not there in Train but exist in Test in total Test data are 7.1200735962845405 % ```python ``` > we have a cold start problem here ```python %%timeit #final train and test data sets if (not os.path.isfile('/content/drive/My Drive/train_after_eda.csv')) and \ (not os.path.isfile('/content/drive/My Drive/test_after_eda.csv')) and \ (not os.path.isfile('/content/drive/My Drive/train_y.csv')) and \ (not os.path.isfile('/content/drive/My Drive/test_y.csv')) and \ (os.path.isfile('/content/drive/My Drive/train_pos_after_eda.csv')) and \ (os.path.isfile('/content/drive/My Drive/test_pos_after_eda.csv')) and \ (os.path.isfile('/content/drive/My Drive/train_neg_after_eda.csv')) and \ (os.path.isfile('/content/drive/My Drive/test_neg_after_eda.csv')): X_train_pos = pd.read_csv('/content/drive/My Drive/train_pos_after_eda.csv', names=['source_node', 'destination_node']) X_test_pos = pd.read_csv('/content/drive/My Drive/test_pos_after_eda.csv', names=['source_node', 'destination_node']) X_train_neg = pd.read_csv('/content/drive/My Drive/train_neg_after_eda.csv', names=['source_node', 'destination_node']) X_test_neg = pd.read_csv('/content/drive/My Drive/test_neg_after_eda.csv', names=['source_node', 'destination_node']) print('='*60) print("Number of nodes in the train data graph with edges", X_train_pos.shape[0]) print("Number of nodes in the train data graph without edges", X_train_neg.shape[0]) print('='*60) print("Number of nodes in the test data graph with edges", X_test_pos.shape[0]) print("Number of nodes in the test data graph without edges", X_test_neg.shape[0]) X_train = X_train_pos.append(X_train_neg,ignore_index=True) y_train = np.concatenate((y_train_pos,y_train_neg)) X_test = X_test_pos.append(X_test_neg,ignore_index=True) y_test = np.concatenate((y_test_pos,y_test_neg)) X_train.to_csv('/content/drive/My Drive/train_after_eda.csv',header=False,index=False) X_test.to_csv('/content/drive/My Drive/test_after_eda.csv',header=False,index=False) pd.DataFrame(y_train.astype(int)).to_csv('/content/drive/My Drive/train_y.csv',header=False,index=False) 
pd.DataFrame(y_test.astype(int)).to_csv('/content/drive/My Drive/test_y.csv',header=False,index=False) ``` The slowest run took 29.44 times longer than the fastest. This could mean that an intermediate result is being cached. 1000 loops, best of 3: 242 µs per loop ```python X_train = pd.read_csv("/content/drive/My Drive/after_eda/train_after_eda.csv") X_test = pd.read_csv("/content/drive/My Drive/after_eda/test_after_eda.csv") y_train = pd.read_csv("/content/drive/My Drive/train_y.csv") y_test = pd.read_csv("/content/drive/My Drive/test_y.csv") ``` ```python print("Data points in train data",X_train.shape) print("Data points in test data",X_test.shape) print("Shape of traget variable in train",y_train.shape) print("Shape of traget variable in test", y_test.shape) ``` Data points in train data (15100029, 2) Data points in test data (3775007, 2) Shape of traget variable in train (15100029, 1) Shape of traget variable in test (3775007, 1) # Feature Engineering ```python #creating train graph train_graph=nx.read_edgelist('/content/drive/My Drive/train_pos_after_eda.csv',delimiter=',',create_using=nx.DiGraph(),nodetype=int) print(nx.info(train_graph)) ``` Name: Type: DiGraph Number of nodes: 1780722 Number of edges: 7550015 Average in degree: 4.2399 Average out degree: 4.2399 # 2. Similarity measures ## 2.1 Jaccard Distance: http://www.statisticshowto.com/jaccard-index/ \begin{equation} j = \frac{|X\cap Y|}{|X \cup Y|} \end{equation} ```python def jaccard_for_followees(a,b): try: if len(set(train_graph.successors(a))) == 0 | len(set(train_graph.successors(b))) == 0: return (0) else: result = len(set(train_graph.successors(a).intersection(set(train_graph.successors(b))))) / \ len(set(train_graph.successors(a).union(set(train_graph.successors(b))))) return (result) except: return (0) ``` ```python print(jaccard_for_followees(273084,1505602)) ``` 0 ```python #node 1635354 not in graph print(jaccard_for_followees(273084,1505602)) ``` 0 ```python def jaccard_for_followers(a,b): try: if len(set(train_graph.predecessors(a))) == 0 | len(set(train_graph.predecessors(b))) == 0: return (0) else: result = len(set(train_graph.predecessors(a).intersection(set(train_graph.predecessors(b))))) / \ len(set(train_graph.predecessors(a).union(set(train_graph.predecessors(b))))) except: return (0) return (result) ``` ```python print(jaccard_for_followers(273084,470294)) ``` 0 ```python #node 1635354 not in graph print(jaccard_for_followees(669354,1635354)) ``` 0 ## 2.2 Cosine distance \begin{equation} CosineDistance = \frac{|X\cap Y|}{|X|\cdot|Y|} \end{equation} ```python #for followees def cosine_for_followees(a,b): try: if len(set(train_graph.successors(a))) == 0 | len(set(train_graph.successors(b))) == 0: return (0) result = (len(set(train_graph.successors(a)).intersection(set(train_graph.successors(b)))))/\ (math.sqrt(len(set(train_graph.successors(a)))*len((set(train_graph.successors(b)))))) return (result) except: return (0) ``` ```python print(cosine_for_followees(273084,1635354)) ``` 0 ```python print(cosine_for_followees(273084,1505602)) ``` 0.0 ```python def cosine_for_followers(a,b): try: if len(set(train_graph.predecessors(a))) == 0 | len(set(train_graph.predecessors(b))) == 0: return (0) result = (len(set(train_graph.predecessors(a)).intersection(set(train_graph.predecessors(b)))))/\ (math.sqrt(len(set(train_graph.predecessors(a))))*(len(set(train_graph.predecessors(b))))) return (result) except: return (0) ``` ```python print(cosine_for_followers(2,470294)) ``` 0.02886751345948129 ```python 
print(cosine_for_followers(669354,1635354)) ``` 0 ## 3. Ranking Measures https://networkx.github.io/documentation/networkx-1.10/reference/generated/networkx.algorithms.link_analysis.pagerank_alg.pagerank.html PageRank computes a ranking of the nodes in the graph G based on the structure of the incoming links. Mathematical PageRanks for a simple network, expressed as percentages. (Google uses a logarithmic scale.) Page C has a higher PageRank than Page E, even though there are fewer links to C; the one link to C comes from an important page and hence is of high value. If web surfers who start on a random page have an 85% likelihood of choosing a random link from the page they are currently visiting, and a 15% likelihood of jumping to a page chosen at random from the entire web, they will reach Page E 8.1% of the time. <b>(The 15% likelihood of jumping to an arbitrary page corresponds to a damping factor of 85%.) Without damping, all web surfers would eventually end up on Pages A, B, or C, and all other pages would have PageRank zero. In the presence of damping, Page A effectively links to all pages in the web, even though it has no outgoing links of its own.</b> ## 3.1 Page Ranking https://en.wikipedia.org/wiki/PageRank ```python if not os.path.isfile('/content/drive/My Drive/page_rank.p'): pr = nx.pagerank(train_graph, alpha=0.85) pickle.dump(pr,open('/content/drive/My Drive/page_rank.p','wb')) else: pr = pickle.load(open('/content/drive/My Drive/page_rank.p','rb')) ``` ```python print('min',pr[min(pr, key=pr.get)]) print('max',pr[max(pr, key=pr.get)]) print('mean',float(sum(pr.values())) / len(pr)) ``` min 1.6556497245737814e-07 max 2.7098251341935827e-05 mean 5.615699699389075e-07 ```python #for imputing to nodes which are not there in Train data mean_pr = float(sum(pr.values())) / len(pr) print(mean_pr) ``` 5.615699699389075e-07 ```python def shrtpath(a,b): if train_graph.has_edge(a,b): train_graph.remove_edge(a,b) p=nx.shortest_path_length(train_graph,source=a,target=b) train_graph.add_edge(a,b) else: p=nx.shortest_path_length(train_graph,source=a,target=b) return (p) ``` # 4. Other Graph Features ## 4.1 Shortest path: Getting Shortest path between twoo nodes, if nodes have direct path i.e directly connected then we are removing that edge and calculating path. 
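Removing the direct edge before measuring the distance is what prevents label leakage: for a positive pair the edge itself would otherwise always give a shortest path of 1. A toy demonstration (my own graph, not the notebook's data) of the behaviour implemented in the next cell:

```python
# Why the helper below removes the direct edge before calling shortest_path_length.
import networkx as nx

toy = nx.DiGraph([(1, 2), (2, 3), (3, 4), (1, 4)])
print(nx.shortest_path_length(toy, 1, 4))  # 1 -- just reads back the edge we want to predict
toy.remove_edge(1, 4)
print(nx.shortest_path_length(toy, 1, 4))  # 3 -- the remaining structure 1->2->3->4
toy.add_edge(1, 4)                         # restore the edge afterwards
```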
```python #if has direct edge then deleting that edge and calculating shortest path def compute_shortest_path_length(a,b): p=-1 try: if train_graph.has_edge(a,b): train_graph.remove_edge(a,b) p= nx.shortest_path_length(train_graph,source=a,target=b) train_graph.add_edge(a,b) else: p= nx.shortest_path_length(train_graph,source=a,target=b) return p except: return -1 ``` ```python #testing compute_shortest_path_length(77697, 826021) ``` 10 ```python #testing compute_shortest_path_length(669354,1635354) ``` -1 ## 4.2 Checking for same community ```python #getting weekly connected edges from graph wcc=list(nx.weakly_connected_components(train_graph)) def belongs_to_same_wcc(a,b): index = [] if train_graph.has_edge(b,a): return 1 if train_graph.has_edge(a,b): for i in wcc: if a in i: index= i break if (b in index): train_graph.remove_edge(a,b) if compute_shortest_path_length(a,b)==-1: train_graph.add_edge(a,b) return 0 else: train_graph.add_edge(a,b) return 1 else: return 0 else: for i in wcc: if a in i: index= i break if(b in index): return 1 else: return 0 ``` ```python belongs_to_same_wcc(861, 1659750) ``` 0 ```python belongs_to_same_wcc(669354,1635354) ``` 0 ## 4.3 Adamic/Adar Index: Adamic/Adar measures is defined as inverted sum of degrees of common neighbours for given two vertices. $$A(x,y)=\sum_{u \in N(x) \cap N(y)}\frac{1}{log(|N(u)|)}$$ ```python #adar index def calc_adar_in(a,b): sum=0 try: n=list(set(train_graph.successors(a)).intersection(set(train_graph.successors(b)))) if len(n)!=0: for i in n: sum=sum+(1/np.log10(len(list(train_graph.predecessors(i))))) return sum else: return 0 except: return 0 ``` ```python calc_adar_in(1,189226) ``` 0 ```python calc_adar_in(669354,1635354) ``` 0 ## 4.4 Is persion was following back: ```python def follows_back(a,b): if train_graph.has_edge(b,a): return 1 else: return 0 ``` ```python follows_back(1,189226) ``` 1 ```python follows_back(669354,1635354) ``` 0 ## 4.5 Katz Centrality: https://en.wikipedia.org/wiki/Katz_centrality https://www.geeksforgeeks.org/katz-centrality-centrality-measure/ Katz centrality computes the centrality for a node based on the centrality of its neighbors. It is a generalization of the eigenvector centrality. The Katz centrality for node `i` is $$x_i = \alpha \sum_{j} A_{ij} x_j + \beta,$$ where `A` is the adjacency matrix of the graph G with eigenvalues $$\lambda$$. The parameter $$\beta$$ controls the initial centrality and $$\alpha < \frac{1}{\lambda_{max}}.$$ ```python if not os.path.isfile('/content/drive/My Drive/katz.p'): katz = nx.katz.katz_centrality(train_graph,alpha=0.005,beta=1) pickle.dump(katz,open('/content/drive/My Drive/katz.p','wb')) else: katz = pickle.load(open('/content/drive/My Drive/katz.p','rb')) ``` ```python print('min',katz[min(katz, key=katz.get)]) print('max',katz[max(katz, key=katz.get)]) print('mean',float(sum(katz.values())) / len(katz)) ``` min 0.0007313532484065916 max 0.003394554981699122 mean 0.0007483800935562018 ```python mean_katz = float(sum(katz.values())) / len(katz) print(mean_katz) ``` 0.0007483800935562018 ## 4.6 Hits Score The HITS algorithm computes two numbers for a node. Authorities estimates the node value based on the incoming links. Hubs estimates the node value based on outgoing links. 
https://en.wikipedia.org/wiki/HITS_algorithm ```python if not os.path.isfile('/content/drive/My Drive/hits.p'): hits = nx.hits(train_graph, max_iter=100, tol=1e-08, nstart=None, normalized=True) pickle.dump(hits,open('/content/drive/My Drive/hits.p','wb')) else: hits = pickle.load(open('/content/drive/My Drive/hits.p','rb')) ``` ```python print('min',hits[0][min(hits[0], key=hits[0].get)]) print('max',hits[0][max(hits[0], key=hits[0].get)]) print('mean',float(sum(hits[0].values())) / len(hits[0])) ``` min 0.0 max 0.004868653378780953 mean 5.615699699344123e-07 # 5. Featurization ## 5. 1 Reading a sample of Data from both train and test ```python import random if os.path.isfile('/content/drive/My Drive/after_eda/train_after_eda.csv'): filename = "/content/drive/My Drive/after_eda/train_after_eda.csv" # you uncomment this line, if you dont know the lentgh of the file name # here we have hardcoded the number of lines as 15100030 # n_train = sum(1 for line in open(filename)) #number of records in file (excludes header) n_train = 15100028 s = 100000 #desired sample size skip_train = sorted(random.sample(range(1,n_train+1),n_train-s)) #https://stackoverflow.com/a/22259008/4084039 ``` ```python if os.path.isfile('/content/drive/My Drive/after_eda/test_after_eda.csv'): filename = "/content/drive/My Drive/after_eda/test_after_eda.csv" # you uncomment this line, if you dont know the lentgh of the file name # here we have hardcoded the number of lines as 3775008 # n_test = sum(1 for line in open(filename)) #number of records in file (excludes header) n_test = 3775006 s = 50000 #desired sample size skip_test = sorted(random.sample(range(1,n_test+1),n_test-s)) #https://stackoverflow.com/a/22259008/4084039 ``` ```python print("Number of rows in the train data file:", n_train) print("Number of rows we are going to elimiate in train data are",len(skip_train)) print("Number of rows in the test data file:", n_test) print("Number of rows we are going to elimiate in test data are",len(skip_test)) ``` Number of rows in the train data file: 15100028 Number of rows we are going to elimiate in train data are 15000028 Number of rows in the test data file: 3775006 Number of rows we are going to elimiate in test data are 3725006 ```python df_final_train = pd.read_csv('/content/drive/My Drive/after_eda/train_after_eda.csv', skiprows=skip_train, names=['source_node', 'destination_node']) df_final_train['indicator_link'] = pd.read_csv('/content/drive/My Drive/train_y.csv', skiprows=skip_train, names=['indicator_link']) print("Our train matrix size ",df_final_train.shape) df_final_train.head(2) ``` Our train matrix size (100002, 3) <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>source_node</th> <th>destination_node</th> <th>indicator_link</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>273084</td> <td>1505602</td> <td>1</td> </tr> <tr> <th>1</th> <td>1613640</td> <td>1313162</td> <td>1</td> </tr> </tbody> </table> </div> ```python df_final_test = pd.read_csv('/content/drive/My Drive/after_eda/test_after_eda.csv', skiprows=skip_test, names=['source_node', 'destination_node']) df_final_test['indicator_link'] = pd.read_csv('/content/drive/My Drive/test_y.csv', skiprows=skip_test, names=['indicator_link']) print("Our test matrix size ",df_final_test.shape) df_final_test.head(2) ``` Our test matrix 
size (50002, 3) <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>source_node</th> <th>destination_node</th> <th>indicator_link</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>848424</td> <td>784690</td> <td>1</td> </tr> <tr> <th>1</th> <td>1790470</td> <td>571834</td> <td>1</td> </tr> </tbody> </table> </div> ## 5.2 Adding a set of features __we will create these each of these features for both train and test data points__ <ol> <li>jaccard_followers</li> <li>jaccard_followees</li> <li>cosine_followers</li> <li>cosine_followees</li> <li>num_followers_s</li> <li>num_followees_s</li> <li>num_followers_d</li> <li>num_followees_d</li> <li>inter_followers</li> <li>inter_followees</li> </ol> ```python if not os.path.isfile('/content/drive/My Drive/storage_sample_stage1.h5'): #mapping jaccrd followers to train and test data df_final_train['jaccard_followers'] = df_final_train.apply(lambda row: jaccard_for_followers(row['source_node'],row['destination_node']),axis=1) df_final_test['jaccard_followers'] = df_final_test.apply(lambda row: jaccard_for_followers(row['source_node'],row['destination_node']),axis=1) #mapping jaccrd followees to train and test data df_final_train['jaccard_followees'] = df_final_train.apply(lambda row: jaccard_for_followees(row['source_node'],row['destination_node']),axis=1) df_final_test['jaccard_followees'] = df_final_test.apply(lambda row: jaccard_for_followees(row['source_node'],row['destination_node']),axis=1) #mapping jaccrd followers to train and test data df_final_train['cosine_followers'] = df_final_train.apply(lambda row: cosine_for_followers(row['source_node'],row['destination_node']),axis=1) df_final_test['cosine_followers'] = df_final_test.apply(lambda row: cosine_for_followers(row['source_node'],row['destination_node']),axis=1) #mapping jaccrd followees to train and test data df_final_train['cosine_followees'] = df_final_train.apply(lambda row: cosine_for_followees(row['source_node'],row['destination_node']),axis=1) df_final_test['cosine_followees'] = df_final_test.apply(lambda row: cosine_for_followees(row['source_node'],row['destination_node']),axis=1) ``` ```python def compute_features_stage1(df_final): #calculating no of followers followees for source and destination #calculating intersection of followers and followees for source and destination num_followers_s=[] num_followees_s=[] num_followers_d=[] num_followees_d=[] inter_followers=[] inter_followees=[] for i,row in df_final.iterrows(): try: s1=set(train_graph.predecessors(row['source_node'])) s2=set(train_graph.successors(row['source_node'])) except: s1 = set() s2 = set() try: d1=set(train_graph.predecessors(row['destination_node'])) d2=set(train_graph.successors(row['destination_node'])) except: d1 = set() d2 = set() num_followers_s.append(len(s1)) num_followees_s.append(len(s2)) num_followers_d.append(len(d1)) num_followees_d.append(len(d2)) inter_followers.append(len(s1.intersection(d1))) inter_followees.append(len(s2.intersection(d2))) return num_followers_s, num_followers_d, num_followees_s, num_followees_d, inter_followers, inter_followees ``` ```python if not os.path.isfile('/content/drive/My Drive/storage_sample_stage1.h5'): df_final_train['num_followers_s'], df_final_train['num_followers_d'], \ df_final_train['num_followees_s'], df_final_train['num_followees_d'], \ 
    df_final_train['inter_followers'], df_final_train['inter_followees']= compute_features_stage1(df_final_train)

    df_final_test['num_followers_s'], df_final_test['num_followers_d'], \
    df_final_test['num_followees_s'], df_final_test['num_followees_d'], \
    df_final_test['inter_followers'], df_final_test['inter_followees']= compute_features_stage1(df_final_test)

    hdf = pd.HDFStore('/content/drive/My Drive/storage_sample_stage1.h5')
    hdf.put('train_df',df_final_train, format='table', data_columns=True)
    hdf.put('test_df',df_final_test, format='table', data_columns=True)
    hdf.close()
else:
    df_final_train = read_hdf('/content/drive/My Drive/storage_sample_stage1.h5', 'train_df',mode='r')
    df_final_test = read_hdf('/content/drive/My Drive/storage_sample_stage1.h5', 'test_df',mode='r')
```

## 5.3 Adding a new set of features

__we will create each of these features for both train and test data points__
<ol>
<li>adar index</li>
<li>is following back</li>
<li>belongs to same weakly connected component</li>
<li>shortest path between source and destination</li>
</ol>

```python
if not os.path.isfile('/content/drive/My Drive/storage_sample_stage2.h5'):
    #mapping adar index on train
    df_final_train['adar_index'] = df_final_train.apply(lambda row: calc_adar_in(row['source_node'],row['destination_node']),axis=1)
    #mapping adar index on test
    df_final_test['adar_index'] = df_final_test.apply(lambda row: calc_adar_in(row['source_node'],row['destination_node']),axis=1)
    #--------------------------------------------------------------------------------------------------------
    #mapping follows back or not on train
    df_final_train['follows_back'] = df_final_train.apply(lambda row: follows_back(row['source_node'],row['destination_node']),axis=1)
    #mapping follows back or not on test
    df_final_test['follows_back'] = df_final_test.apply(lambda row: follows_back(row['source_node'],row['destination_node']),axis=1)
    #--------------------------------------------------------------------------------------------------------
    #mapping same component of wcc or not on train
    df_final_train['same_comp'] = df_final_train.apply(lambda row: belongs_to_same_wcc(row['source_node'],row['destination_node']),axis=1)
    #mapping same component of wcc or not on test
    df_final_test['same_comp'] = df_final_test.apply(lambda row: belongs_to_same_wcc(row['source_node'],row['destination_node']),axis=1)
    #--------------------------------------------------------------------------------------------------------
    #mapping shortest path on train
    df_final_train['shortest_path'] = df_final_train.apply(lambda row: compute_shortest_path_length(row['source_node'],row['destination_node']),axis=1)
    #mapping shortest path on test
    df_final_test['shortest_path'] = df_final_test.apply(lambda row: compute_shortest_path_length(row['source_node'],row['destination_node']),axis=1)

    hdf = pd.HDFStore('/content/drive/My Drive/storage_sample_stage2.h5')
    hdf.put('train_df',df_final_train, format='table', data_columns=True)
    hdf.put('test_df',df_final_test, format='table', data_columns=True)
    hdf.close()
else:
    df_final_train = read_hdf('/content/drive/My Drive/storage_sample_stage2.h5', 'train_df',mode='r')
    df_final_test = read_hdf('/content/drive/My Drive/storage_sample_stage2.h5', 'test_df',mode='r')
```

## 5.4 Adding a new set of features

__we will create each of these features for both train and test data points__
<ol>
<li>Weight Features
    <ul>
    <li>weight of incoming edges</li>
    <li>weight of outgoing edges</li>
    <li>weight of incoming edges + weight of outgoing edges</li>
    <li>weight of incoming edges * weight of outgoing edges</li>
    <li>2*weight of incoming edges + weight of outgoing edges</li>
    <li>weight of incoming edges + 2*weight of outgoing edges</li>
    </ul>
</li>
<li>Page Rank of source</li>
<li>Page Rank of destination</li>
<li>katz of source</li>
<li>katz of destination</li>
<li>hubs of source</li>
<li>hubs of destination</li>
<li>authorities of source</li>
<li>authorities of destination</li>
</ol>

#### Weight Features

In order to determine the similarity of nodes, an edge weight value is calculated between nodes. The edge weight decreases as the neighbor count goes up. Intuitively, if one million people follow a celebrity on a social network, the chances are that most of them have never met each other or the celebrity. On the other hand, if a user has 30 contacts in his/her social network, the chances are higher that many of them know each other.

`credit` - Graph-based Features for Supervised Link Prediction, William Cukierski, Benjamin Hamner, Bo Yang

\begin{equation}
W = \frac{1}{\sqrt{1+|X|}}
\end{equation}

Since this is a directed graph, the weighted in-degree and the weighted out-degree are calculated separately.

```python
#weight for source and destination of each link
Weight_in = {}
Weight_out = {}
for i in (train_graph.nodes()):
    s1=set(train_graph.predecessors(i))
    w_in = 1.0/(np.sqrt(1+len(s1)))
    Weight_in[i]=w_in
    s2=set(train_graph.successors(i))
    w_out = 1.0/(np.sqrt(1+len(s2)))
    Weight_out[i]=w_out

#for imputing with mean
mean_weight_in = np.mean(list(Weight_in.values()))
mean_weight_out = np.mean(list(Weight_out.values()))
```

```python
if not os.path.isfile('/content/drive/My Drive/storage_sample_stage3.h5'):
    #mapping to pandas train
    df_final_train['weight_in'] = df_final_train.destination_node.apply(lambda x: Weight_in.get(x,mean_weight_in))
    df_final_train['weight_out'] = df_final_train.source_node.apply(lambda x: Weight_out.get(x,mean_weight_out))

    #mapping to pandas test
    df_final_test['weight_in'] = df_final_test.destination_node.apply(lambda x: Weight_in.get(x,mean_weight_in))
    df_final_test['weight_out'] = df_final_test.source_node.apply(lambda x: Weight_out.get(x,mean_weight_out))

    #some feature engineering on the in and out weights
    df_final_train['weight_f1'] = df_final_train.weight_in + df_final_train.weight_out
    df_final_train['weight_f2'] = df_final_train.weight_in * df_final_train.weight_out
    df_final_train['weight_f3'] = (2*df_final_train.weight_in + 1*df_final_train.weight_out)
    df_final_train['weight_f4'] = (1*df_final_train.weight_in + 2*df_final_train.weight_out)

    #some feature engineering on the in and out weights
    df_final_test['weight_f1'] = df_final_test.weight_in + df_final_test.weight_out
    df_final_test['weight_f2'] = df_final_test.weight_in * df_final_test.weight_out
    df_final_test['weight_f3'] = (2*df_final_test.weight_in + 1*df_final_test.weight_out)
    df_final_test['weight_f4'] = (1*df_final_test.weight_in + 2*df_final_test.weight_out)
```

```python
if not os.path.isfile('/content/drive/My Drive/storage_sample_stage3.h5'):
    #page rank for source and destination in Train and Test
    #if a node is not there in the train graph then adding the mean page rank
    df_final_train['page_rank_s'] = df_final_train.source_node.apply(lambda x:pr.get(x,mean_pr))
    df_final_train['page_rank_d'] = df_final_train.destination_node.apply(lambda x:pr.get(x,mean_pr))

    df_final_test['page_rank_s'] = df_final_test.source_node.apply(lambda x:pr.get(x,mean_pr))
    df_final_test['page_rank_d'] = df_final_test.destination_node.apply(lambda x:pr.get(x,mean_pr))
    #================================================================================
    #Katz centrality score for source and destination in Train and test
    #if a node is not there in the train graph then adding the mean katz score
    df_final_train['katz_s'] = df_final_train.source_node.apply(lambda x: katz.get(x,mean_katz))
    df_final_train['katz_d'] = df_final_train.destination_node.apply(lambda x: katz.get(x,mean_katz))

    df_final_test['katz_s'] = df_final_test.source_node.apply(lambda x: katz.get(x,mean_katz))
    df_final_test['katz_d'] = df_final_test.destination_node.apply(lambda x: katz.get(x,mean_katz))
    #================================================================================
    #HITS hub score for source and destination in Train and test
    #if a node is not there in the train graph then adding 0
    df_final_train['hubs_s'] = df_final_train.source_node.apply(lambda x: hits[0].get(x,0))
    df_final_train['hubs_d'] = df_final_train.destination_node.apply(lambda x: hits[0].get(x,0))

    df_final_test['hubs_s'] = df_final_test.source_node.apply(lambda x: hits[0].get(x,0))
    df_final_test['hubs_d'] = df_final_test.destination_node.apply(lambda x: hits[0].get(x,0))
    #================================================================================
    #HITS authority score for source and destination in Train and Test
    #if a node is not there in the train graph then adding 0
    df_final_train['authorities_s'] = df_final_train.source_node.apply(lambda x: hits[1].get(x,0))
    df_final_train['authorities_d'] = df_final_train.destination_node.apply(lambda x: hits[1].get(x,0))

    df_final_test['authorities_s'] = df_final_test.source_node.apply(lambda x: hits[1].get(x,0))
    df_final_test['authorities_d'] = df_final_test.destination_node.apply(lambda x: hits[1].get(x,0))
    #================================================================================
    hdf = pd.HDFStore('/content/drive/My Drive/storage_sample_stage3.h5')
    hdf.put('train_df',df_final_train, format='table', data_columns=True)
    hdf.put('test_df',df_final_test, format='table', data_columns=True)
    hdf.close()
else:
    df_final_train = read_hdf('/content/drive/My Drive/storage_sample_stage3.h5', 'train_df',mode='r')
    df_final_test = read_hdf('/content/drive/My Drive/storage_sample_stage3.h5', 'test_df',mode='r')
```

## 5.5 Adding a new set of features

__we will create each of these features for both train and test data points__
<ol>
<li>SVD features for both source and destination</li>
</ol>

```python
#lookup helper: returns the 6-dimensional SVD feature vector of node x from matrix S
def svd(x, S):
    try:
        z = sadj_dict[x]
        return S[z]
    except:
        return [0,0,0,0,0,0]
```

```python
#for the svd features, create a dict mapping each node value to its index in the svd vectors
sadj_col = sorted(train_graph.nodes())
sadj_dict = { val:idx for idx,val in enumerate(sadj_col)}
```

```python
Adj = nx.adjacency_matrix(train_graph,nodelist=sorted(train_graph.nodes())).asfptype()
```

```python
#truncated SVD of the adjacency matrix; use svds from scipy.sparse.linalg
#(a separate name, so it does not clash with the svd() lookup helper defined above)
from scipy.sparse.linalg import svds
U, s, V = svds(Adj, k = 6)
print('Adjacency matrix Shape',Adj.shape)
print('U Shape',U.shape)
print('V Shape',V.shape)
print('s Shape',s.shape)
```

    Adjacency matrix Shape (1780722, 1780722)
    U Shape (1780722, 6)
    V Shape (6, 1780722)
    s Shape (6,)

```python
if not os.path.isfile('/content/drive/My Drive/storage_sample_stage4.h5'):
    #===================================================================================================
    df_final_train[['svd_u_s_1', 'svd_u_s_2','svd_u_s_3', 'svd_u_s_4', 'svd_u_s_5', 'svd_u_s_6']] = \
    df_final_train.source_node.apply(lambda x: svd(x, U)).apply(pd.Series)

    df_final_train[['svd_u_d_1', 'svd_u_d_2', 'svd_u_d_3', 'svd_u_d_4', 'svd_u_d_5','svd_u_d_6']] = \
    df_final_train.destination_node.apply(lambda x: svd(x, U)).apply(pd.Series)
    #===================================================================================================
    df_final_train[['svd_v_s_1','svd_v_s_2', 'svd_v_s_3', 'svd_v_s_4', 'svd_v_s_5', 'svd_v_s_6',]] = \
    df_final_train.source_node.apply(lambda x: svd(x, V.T)).apply(pd.Series)

    df_final_train[['svd_v_d_1', 'svd_v_d_2', 'svd_v_d_3', 'svd_v_d_4', 'svd_v_d_5','svd_v_d_6']] = \
    df_final_train.destination_node.apply(lambda x: svd(x, V.T)).apply(pd.Series)
    #===================================================================================================
    df_final_test[['svd_u_s_1', 'svd_u_s_2','svd_u_s_3', 'svd_u_s_4', 'svd_u_s_5', 'svd_u_s_6']] = \
    df_final_test.source_node.apply(lambda x: svd(x, U)).apply(pd.Series)

    df_final_test[['svd_u_d_1', 'svd_u_d_2', 'svd_u_d_3', 'svd_u_d_4', 'svd_u_d_5','svd_u_d_6']] = \
    df_final_test.destination_node.apply(lambda x: svd(x, U)).apply(pd.Series)
    #===================================================================================================
    df_final_test[['svd_v_s_1','svd_v_s_2', 'svd_v_s_3', 'svd_v_s_4', 'svd_v_s_5', 'svd_v_s_6',]] = \
    df_final_test.source_node.apply(lambda x: svd(x, V.T)).apply(pd.Series)

    df_final_test[['svd_v_d_1', 'svd_v_d_2', 'svd_v_d_3', 'svd_v_d_4', 'svd_v_d_5','svd_v_d_6']] = \
    df_final_test.destination_node.apply(lambda x: svd(x, V.T)).apply(pd.Series)
    #===================================================================================================
    hdf = pd.HDFStore('/content/drive/My Drive/storage_sample_stage4.h5')
    hdf.put('train_df',df_final_train, format='table', data_columns=True)
    hdf.put('test_df',df_final_test, format='table', data_columns=True)
    hdf.close()
```

```python
#reading (same path the stage-4 store was written to above)
from pandas import read_hdf
df_final_train = read_hdf('/content/drive/My Drive/storage_sample_stage4.h5', 'train_df',mode='r')
df_final_test = read_hdf('/content/drive/My Drive/storage_sample_stage4.h5', 'test_df',mode='r')
```

```python
df_final_train.shape
```

    (100002, 54)

```python
df_final_test.shape
```

    (50002, 54)

#### 5.7 Adding a new feature: Preferential Attachment

One well-known concept in social networks is that users with many friends tend to create more connections in the future. This is because in some social networks, as in finance, the rich get richer. We estimate how "rich" our two vertices are by multiplying the number of friends (|Γ(x)|), or followers, each vertex has. Note that this similarity index does not require any information about a node's neighborhood beyond its degree; therefore, it has the lowest computational complexity.

\begin{equation}
PA(U_i, U_j) = |\Gamma(U_i)| \cdot |\Gamma(U_j)|
\end{equation}

A short sketch of the idea on a toy graph follows below; the feature itself is then computed for the train and test frames.
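The following is a minimal, illustrative sketch of the preferential-attachment score on a toy directed graph. The toy graph, its node labels, and the helper names are made up purely for illustration; the notebook's own feature computation on the `train`/`test` graphs follows right after.

```python
import networkx as nx

# toy directed graph, purely for illustration
toy = nx.DiGraph()
toy.add_edges_from([(1, 2), (1, 3), (2, 3), (4, 1), (4, 3)])

def pa_out(g, u, v):
    # product of out-degrees: number of followees of u times number of followees of v
    return len(set(g.successors(u))) * len(set(g.successors(v)))

def pa_in(g, u, v):
    # product of in-degrees: number of followers of u times number of followers of v
    return len(set(g.predecessors(u))) * len(set(g.predecessors(v)))

print(pa_out(toy, 1, 4))  # 2 * 2 = 4
print(pa_in(toy, 1, 3))   # 1 * 3 = 3
```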
```python
#function for getting the successors (followees) of a user
def get_successors_train(data):
    """
    Returns the number of followees (successors) of a node in the `train` graph
    (the DiGraph read from the train edge list in section 6.1).
    """
    out_followers = len(set(train.successors(data)))
    return (out_followers)
```

```python
#Applying the function to the data set columns
df_final_train['successors_train_source_node'] = df_final_train['source_node'].apply(get_successors_train)
df_final_train['successors_train_dest_node'] = df_final_train['destination_node'].apply(get_successors_train)
df_final_train['preferential_score(Ui,Uj)'] = (df_final_train['successors_train_source_node'])*(df_final_train['successors_train_dest_node'])
```

```python
def get_predecessors_train(data):
    """
    Returns the number of followers (predecessors) of a node in the `train` graph.
    """
    in_followers = len(set(train.predecessors(data)))
    return (in_followers)
```

```python
#Applying the function to the data set columns
df_final_train['pred_train_source_node'] = df_final_train['source_node'].apply(get_predecessors_train)
df_final_train['pred_train_dest_node'] = df_final_train['destination_node'].apply(get_predecessors_train)
df_final_train['preferential_score(Ui,Uj)pred'] = (df_final_train['pred_train_source_node'])*(df_final_train['pred_train_dest_node'])
```

```python
#function for getting the successors (followees) of a user
def get_successors_test(data):
    """
    Returns the number of followees (successors) of a node in the `test` graph.
    """
    out_followers = len(set(test.successors(data)))
    return (out_followers)
```

```python
#Applying the function to the data set columns
df_final_test['successors_test_source_node'] = df_final_test['source_node'].apply(get_successors_test)
df_final_test['successors_test_dest_node'] = df_final_test['destination_node'].apply(get_successors_test)
df_final_test['preferential_score(Ui,Uj)'] = (df_final_test['successors_test_source_node'])*(df_final_test['successors_test_dest_node'])
```

```python
def get_predecessors_test(data):
    """
    Returns the number of followers (predecessors) of a node in the `test` graph.
    """
    in_followers = len(set(test.predecessors(data)))
    return (in_followers)
```

```python
#Applying the function to the data set columns
df_final_test['pred_test_source_node'] = df_final_test['source_node'].apply(get_predecessors_test)
df_final_test['pred_test_dest_node'] = df_final_test['destination_node'].apply(get_predecessors_test)
df_final_test['preferential_score(Ui,Uj)pred'] = (df_final_test['pred_test_source_node'])*(df_final_test['pred_test_dest_node'])
```

```python
df_final_test=df_final_test.drop(columns=['pred_test_source_node', 'pred_test_dest_node'])
df_final_train=df_final_train.drop(columns=['pred_train_source_node', 'pred_train_dest_node'])
```

```python
df_final_test.columns
```

    Index(['source_node', 'destination_node', 'indicator_link',
           'jaccard_followers', 'jaccard_followees', 'cosine_followers',
           'cosine_followees', 'num_followers_s', 'num_followees_s',
           'num_followees_d', 'inter_followers', 'inter_followees', 'adar_index',
           'follows_back', 'same_comp', 'shortest_path', 'weight_in',
           'weight_out', 'weight_f1', 'weight_f2', 'weight_f3', 'weight_f4',
           'page_rank_s', 'page_rank_d', 'katz_s', 'katz_d', 'hubs_s', 'hubs_d',
           'authorities_s', 'authorities_d', 'svd_u_s_1', 'svd_u_s_2',
           'svd_u_s_3', 'svd_u_s_4', 'svd_u_s_5', 'svd_u_s_6', 'svd_u_d_1',
           'svd_u_d_2', 'svd_u_d_3', 'svd_u_d_4', 'svd_u_d_5', 'svd_u_d_6',
           'svd_v_s_1', 'svd_v_s_2', 'svd_v_s_3', 'svd_v_s_4', 'svd_v_s_5',
           'svd_v_s_6', 'svd_v_d_1', 'svd_v_d_2', 'svd_v_d_3', 'svd_v_d_4',
           'svd_v_d_5', 'svd_v_d_6', 'preferential_score(Ui,Uj)',
           'preferential_score(Ui,Uj)pred', 'svd_dot_u', 'svd_dot_v'],
          dtype='object')

### 5.8 Adding a new feature: svd_dot

svd_dot is calculated as the dot product between the source node's SVD features and the destination node's SVD features. You can read about this in the pdf below:

https://storage.googleapis.com/kaggle-forum-message-attachments/2594/supervised_link_prediction.pdf

```python
#Performing the svd dot product for the train data set
df_final_train["svd_dot_u"] =((df_final_train['svd_u_s_1']*df_final_train['svd_u_d_1']) +
                              (df_final_train['svd_u_s_2']*df_final_train['svd_u_d_2']) +
                              (df_final_train['svd_u_s_3']*df_final_train['svd_u_d_3']) +
                              (df_final_train['svd_u_s_4']*df_final_train['svd_u_d_4']) +
                              (df_final_train['svd_u_s_5']*df_final_train['svd_u_d_5']) +
                              (df_final_train['svd_u_s_6']*df_final_train['svd_u_d_6']) )
```

```python
df_final_train["svd_dot_v"] =((df_final_train['svd_v_s_1']*df_final_train['svd_v_d_1']) +
                              (df_final_train['svd_v_s_2']*df_final_train['svd_v_d_2']) +
                              (df_final_train['svd_v_s_3']*df_final_train['svd_v_d_3']) +
                              (df_final_train['svd_v_s_4']*df_final_train['svd_v_d_4']) +
                              (df_final_train['svd_v_s_5']*df_final_train['svd_v_d_5']) +
                              (df_final_train['svd_v_s_6']*df_final_train['svd_v_d_6']))
```

```python
#Performing the svd dot product for the test data set
df_final_test["svd_dot_u"] =((df_final_test['svd_u_s_1']*df_final_test['svd_u_d_1']) +
                             (df_final_test['svd_u_s_2']*df_final_test['svd_u_d_2']) +
                             (df_final_test['svd_u_s_3']*df_final_test['svd_u_d_3']) +
                             (df_final_test['svd_u_s_4']*df_final_test['svd_u_d_4']) +
                             (df_final_test['svd_u_s_5']*df_final_test['svd_u_d_5']) +
                             (df_final_test['svd_u_s_6']*df_final_test['svd_u_d_6']))
```

```python
df_final_test["svd_dot_v"] =((df_final_test['svd_v_s_1']*df_final_test['svd_v_d_1']) +
                             (df_final_test['svd_v_s_2']*df_final_test['svd_v_d_2']) +
                             (df_final_test['svd_v_s_3']*df_final_test['svd_v_d_3']) +
                             (df_final_test['svd_v_s_4']*df_final_test['svd_v_d_4']) +
                             (df_final_test['svd_v_s_5']*df_final_test['svd_v_d_5']) +
                             (df_final_test['svd_v_s_6']*df_final_test['svd_v_d_6']) )
```

```python
df_final_train.columns
```

    Index(['source_node', 'destination_node', 'indicator_link',
           'jaccard_followers', 'jaccard_followees', 'cosine_followers',
           'cosine_followees', 'num_followers_s', 'num_followees_s',
           'num_followees_d', 'inter_followers', 'inter_followees', 'adar_index',
           'follows_back', 'same_comp', 'shortest_path', 'weight_in',
           'weight_out', 'weight_f1', 'weight_f2', 'weight_f3', 'weight_f4',
           'page_rank_s', 'page_rank_d', 'katz_s', 'katz_d', 'hubs_s', 'hubs_d',
           'authorities_s', 'authorities_d', 'svd_u_s_1', 'svd_u_s_2',
           'svd_u_s_3', 'svd_u_s_4', 'svd_u_s_5', 'svd_u_s_6', 'svd_u_d_1',
           'svd_u_d_2', 'svd_u_d_3', 'svd_u_d_4', 'svd_u_d_5', 'svd_u_d_6',
           'svd_v_s_1', 'svd_v_s_2', 'svd_v_s_3', 'svd_v_s_4', 'svd_v_s_5',
           'svd_v_s_6', 'svd_v_d_1', 'svd_v_d_2', 'svd_v_d_3', 'svd_v_d_4',
           'svd_v_d_5', 'svd_v_d_6', 'preferential_score(Ui,Uj)',
           'preferential_score(Ui,Uj)pred', 'svd_dot_u', 'svd_dot_v'],
          dtype='object')

```python
#Saving the final dataframes
if not os.path.isfile("/content/drive/My Drive/df_final_test.csv"):
    df_final_test.to_csv("/content/drive/My Drive/df_final_test.csv")
else:
    df_final_test = pd.read_csv("/content/drive/My Drive/df_final_test.csv")
```

```python
#Saving the final dataframes
if not os.path.isfile("/content/drive/My Drive/df_final_train.csv"):
    df_final_train.to_csv("/content/drive/My Drive/df_final_train.csv")
else:
    df_final_train = pd.read_csv("/content/drive/My Drive/df_final_train.csv")
```

# 6. Modelling

## 6.1 Random forest model
Drive/df_final_test.csv") df_final_test=df_final_test.drop(columns='Unnamed: 0',axis=1) ``` ```python df_final_train=pd.read_csv("/content/drive/My Drive/df_final_train.csv") df_final_train=df_final_train.drop(columns='Unnamed: 0',axis=1) ``` ```python df_train.to_csv("/content/drive/My Drive/df_train.csv",index=False,header=False) ``` ```python df_test.to_csv("/content/drive/My Drive/df_test.csv",index=False,header=False) ``` ```python train=nx.read_edgelist("/content/drive/My Drive/df_train.csv",delimiter=',',create_using=nx.DiGraph(),nodetype=int) test = nx.read_edgelist("/content/drive/My Drive/df_test.csv",delimiter=',',create_using=nx.DiGraph(),nodetype=int) ``` ```python ``` ```python df_train = df_final_train.drop(columns=['indicator_link', 'jaccard_followers', 'jaccard_followees', 'cosine_followers', 'cosine_followees', 'num_followers_s', 'num_followees_s', 'num_followees_d', 'inter_followers', 'inter_followees', 'adar_index', 'follows_back', 'same_comp', 'shortest_path', 'weight_in', 'weight_out', 'weight_f1', 'weight_f2', 'weight_f3', 'weight_f4', 'page_rank_s', 'page_rank_d', 'katz_s', 'katz_d', 'hubs_s', 'hubs_d', 'authorities_s', 'authorities_d', 'svd_u_s_1', 'svd_u_s_2', 'svd_u_s_3', 'svd_u_s_4', 'svd_u_s_5', 'svd_u_s_6', 'svd_u_d_1', 'svd_u_d_2', 'svd_u_d_3', 'svd_u_d_4', 'svd_u_d_5', 'svd_u_d_6', 'svd_v_s_1', 'svd_v_s_2', 'svd_v_s_3', 'svd_v_s_4', 'svd_v_s_5', 'svd_v_s_6', 'svd_v_d_1', 'svd_v_d_2', 'svd_v_d_3', 'svd_v_d_4', 'svd_v_d_5', 'svd_v_d_6', 'preferential_score(Ui,Uj)', 'svd_dot']) ``` ```python df_test= df_final_test.drop(columns=['indicator_link', 'jaccard_followers', 'jaccard_followees', 'cosine_followers', 'cosine_followees', 'num_followers_s', 'num_followees_s', 'num_followees_d', 'inter_followers', 'inter_followees', 'adar_index', 'follows_back', 'same_comp', 'shortest_path', 'weight_in', 'weight_out', 'weight_f1', 'weight_f2', 'weight_f3', 'weight_f4', 'page_rank_s', 'page_rank_d', 'katz_s', 'katz_d', 'hubs_s', 'hubs_d', 'authorities_s', 'authorities_d', 'svd_u_s_1', 'svd_u_s_2', 'svd_u_s_3', 'svd_u_s_4', 'svd_u_s_5', 'svd_u_s_6', 'svd_u_d_1', 'svd_u_d_2', 'svd_u_d_3', 'svd_u_d_4', 'svd_u_d_5', 'svd_u_d_6', 'svd_v_s_1', 'svd_v_s_2', 'svd_v_s_3', 'svd_v_s_4', 'svd_v_s_5', 'svd_v_s_6', 'svd_v_d_1', 'svd_v_d_2', 'svd_v_d_3', 'svd_v_d_4', 'svd_v_d_5', 'svd_v_d_6', 'preferential_score(Ui,Uj)', 'svd_dot']) ``` ```python y_train = df_final_train.indicator_link y_test = df_final_test.indicator_link ``` ```python df_final_train.drop(['source_node', 'destination_node','indicator_link'],axis=1,inplace=True) df_final_test.drop(['source_node', 'destination_node','indicator_link'],axis=1,inplace=True) ``` ```python estimators = [10,50,100,250,450] train_scores = [] test_scores = [] for i in estimators: clf = RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini', max_depth=5, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=52, min_samples_split=120, min_weight_fraction_leaf=0.0, n_estimators=i, n_jobs=-1,random_state=25,verbose=0,warm_start=False) clf.fit(df_final_train,y_train) train_sc = f1_score(y_train,clf.predict(df_final_train)) test_sc = f1_score(y_test,clf.predict(df_final_test)) test_scores.append(test_sc) train_scores.append(train_sc) print('Estimators = ',i,'Train Score',train_sc,'test Score',test_sc) plt.plot(estimators,train_scores,label='Train Score') plt.plot(estimators,test_scores,label='Test Score') plt.xlabel('Estimators') plt.ylabel('Score') plt.title('Estimators vs score 
at depth of 5') ``` ```python depths = [3,9,11,15,20,35,50,70,130] train_scores = [] test_scores = [] for i in depths: clf = RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini', max_depth=i, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=52, min_samples_split=120, min_weight_fraction_leaf=0.0, n_estimators=115, n_jobs=-1,random_state=25,verbose=0,warm_start=False) clf.fit(df_final_train,y_train) train_sc = f1_score(y_train,clf.predict(df_final_train)) test_sc = f1_score(y_test,clf.predict(df_final_test)) test_scores.append(test_sc) train_scores.append(train_sc) print('depth = ',i,'Train Score',train_sc,'test Score',test_sc) plt.plot(depths,train_scores,label='Train Score') plt.plot(depths,test_scores,label='Test Score') plt.xlabel('Depth') plt.ylabel('Score') plt.title('Depth vs score at depth of 5 at estimators = 115') plt.show() ``` ```python from sklearn.metrics import f1_score from sklearn.ensemble import RandomForestClassifier from sklearn.metrics import f1_score from sklearn.model_selection import RandomizedSearchCV from scipy.stats import randint as sp_randint from scipy.stats import uniform param_dist = {"n_estimators":sp_randint(105,125), "max_depth": sp_randint(10,15), "min_samples_split": sp_randint(110,190), "min_samples_leaf": sp_randint(25,65)} clf = RandomForestClassifier(random_state=25,n_jobs=-1) rf_random = RandomizedSearchCV(clf, param_distributions=param_dist, n_iter=5,cv=10,scoring='f1',random_state=25) rf_random.fit(df_final_train,y_train) print('mean test scores',rf_random.cv_results_['mean_test_score']) print('mean train scores',rf_random.cv_results_['mean_train_score']) ``` mean test scores [0.96225043 0.96215493 0.96057081 0.96194015 0.96330005] mean train scores [0.96294922 0.96266735 0.96115674 0.96263457 0.96430539] ```python print(rf_random.best_estimator_) ``` RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini', max_depth=14, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=28, min_samples_split=111, min_weight_fraction_leaf=0.0, n_estimators=121, n_jobs=-1, oob_score=False, random_state=25, verbose=0, warm_start=False) ```python clf = RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini', max_depth=14, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=28, min_samples_split=111, min_weight_fraction_leaf=0.0, n_estimators=121, n_jobs=-1, oob_score=False, random_state=25, verbose=0, warm_start=False) ``` ```python clf.fit(df_final_train,y_train) y_train_pred = clf.predict(df_final_train) y_test_pred = clf.predict(df_final_test) ``` ```python from sklearn.metrics import f1_score print('Train f1 score',f1_score(y_train,y_train_pred)) print('Test f1 score',f1_score(y_test,y_test_pred)) ``` Train f1 score 0.9652533106548414 Test f1 score 0.9241678239279553 ```python from sklearn.metrics import confusion_matrix def plot_confusion_matrix(test_y, predict_y): C = confusion_matrix(test_y, predict_y) A =(((C.T)/(C.sum(axis=1))).T) B =(C/C.sum(axis=0)) plt.figure(figsize=(20,4)) labels = [0,1] # representing A in heatmap format cmap=sns.light_palette("blue") plt.subplot(1, 3, 1) sns.heatmap(C, annot=True, cmap=cmap, fmt=".3f", xticklabels=labels, yticklabels=labels) plt.xlabel('Predicted Class') plt.ylabel('Original Class') plt.title("Confusion matrix") plt.subplot(1, 3, 2) sns.heatmap(B, annot=True, cmap=cmap, fmt=".3f", 
xticklabels=labels, yticklabels=labels) plt.xlabel('Predicted Class') plt.ylabel('Original Class') plt.title("Precision matrix") plt.subplot(1, 3, 3) # representing B in heatmap format sns.heatmap(A, annot=True, cmap=cmap, fmt=".3f", xticklabels=labels, yticklabels=labels) plt.xlabel('Predicted Class') plt.ylabel('Original Class') plt.title("Recall matrix") plt.show() ``` ### Confusion Matrix ```python print('Train confusion_matrix') plot_confusion_matrix(y_train,y_train_pred) print('Test confusion_matrix') plot_confusion_matrix(y_test,y_test_pred) ``` ### Plotting ROC Curve ```python from sklearn.metrics import roc_curve, auc fpr,tpr,ths = roc_curve(y_test,y_test_pred) auc_sc = auc(fpr, tpr) plt.plot(fpr, tpr, color='navy',label='ROC curve (area = %0.2f)' % auc_sc) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Receiver operating characteristic with test data') plt.legend() plt.show() ``` ### Feature Importance ```python features = df_final_train.columns importances = clf.feature_importances_ indices = (np.argsort(importances))[-25:] plt.figure(figsize=(10,12)) plt.title('Feature Importances') plt.barh(range(len(indices)), importances[indices], color='r', align='center') plt.yticks(range(len(indices)), [features[i] for i in indices]) plt.xlabel('Relative Importance') plt.show() ``` *The most important feature of all of them is **follow back** feature* ### XGBoost Model ```python model = xgb.XGBClassifier() parameters = {'max_depth':[1,5,10],'n_estimators':[50,100,150,200], 'learning_rate':[0,0.1,0.5,1] } clf = GridSearchCV(model, parameters,cv=5,scoring='f1',return_train_score=True) clf.fit(df_final_train, y_train) ``` GridSearchCV(cv=5, error_score=nan, estimator=XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1, colsample_bynode=1, colsample_bytree=1, gamma=0, learning_rate=0.1, max_delta_step=0, max_depth=3, min_child_weight=1, missing=None, n_estimators=100, n_jobs=1, nthread=None, objective='binary:logistic', random_state=0, reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None, silent=None, subsample=1, verbosity=1), iid='deprecated', n_jobs=None, param_grid={'learning_rate': [0, 0.1, 0.5, 1], 'max_depth': [1, 5, 10], 'n_estimators': [50, 100, 150, 200]}, pre_dispatch='2*n_jobs', refit=True, return_train_score=True, scoring='f1', verbose=0) ```python results = pd.DataFrame.from_dict(clf.cv_results_) results ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>mean_fit_time</th> <th>std_fit_time</th> <th>mean_score_time</th> <th>std_score_time</th> <th>param_learning_rate</th> <th>param_max_depth</th> <th>param_n_estimators</th> <th>params</th> <th>split0_test_score</th> <th>split1_test_score</th> <th>split2_test_score</th> <th>split3_test_score</th> <th>split4_test_score</th> <th>mean_test_score</th> <th>std_test_score</th> <th>rank_test_score</th> <th>split0_train_score</th> <th>split1_train_score</th> <th>split2_train_score</th> <th>split3_train_score</th> <th>split4_train_score</th> <th>mean_train_score</th> <th>std_train_score</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>5.293298</td> <td>0.372677</td> <td>0.029432</td> <td>0.000437</td> <td>0</td> <td>1</td> <td>50</td> <td>{'learning_rate': 0, 'max_depth': 1, 'n_estima...</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> 
<td>0.0</td> <td>0.0</td> <td>37</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> </tr> <tr> <th>1</th> <td>9.767770</td> <td>0.130188</td> <td>0.034827</td> <td>0.000130</td> <td>0</td> <td>1</td> <td>100</td> <td>{'learning_rate': 0, 'max_depth': 1, 'n_estima...</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>37</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> </tr> <tr> <th>2</th> <td>14.395171</td> <td>0.201271</td> <td>0.041009</td> <td>0.000411</td> <td>0</td> <td>1</td> <td>150</td> <td>{'learning_rate': 0, 'max_depth': 1, 'n_estima...</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>37</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> </tr> <tr> <th>3</th> <td>19.045856</td> <td>0.249639</td> <td>0.046329</td> <td>0.000992</td> <td>0</td> <td>1</td> <td>200</td> <td>{'learning_rate': 0, 'max_depth': 1, 'n_estima...</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>37</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> </tr> <tr> <th>4</th> <td>9.247545</td> <td>0.134063</td> <td>0.031119</td> <td>0.002537</td> <td>0</td> <td>5</td> <td>50</td> <td>{'learning_rate': 0, 'max_depth': 5, 'n_estima...</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>37</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> </tr> <tr> <th>5</th> <td>17.976655</td> <td>0.207548</td> <td>0.035097</td> <td>0.000597</td> <td>0</td> <td>5</td> <td>100</td> <td>{'learning_rate': 0, 'max_depth': 5, 'n_estima...</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>37</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> </tr> <tr> <th>6</th> <td>26.801553</td> <td>0.351903</td> <td>0.041343</td> <td>0.000693</td> <td>0</td> <td>5</td> <td>150</td> <td>{'learning_rate': 0, 'max_depth': 5, 'n_estima...</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>37</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> </tr> <tr> <th>7</th> <td>35.512280</td> <td>0.320803</td> <td>0.049025</td> <td>0.003150</td> <td>0</td> <td>5</td> <td>200</td> <td>{'learning_rate': 0, 'max_depth': 5, 'n_estima...</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>37</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> </tr> <tr> <th>8</th> <td>9.297703</td> <td>0.075156</td> <td>0.029134</td> <td>0.000248</td> <td>0</td> <td>10</td> <td>50</td> <td>{'learning_rate': 0, 'max_depth': 10, 'n_estim...</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>37</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> </tr> <tr> <th>9</th> <td>17.960515</td> <td>0.204389</td> <td>0.035154</td> <td>0.000386</td> <td>0</td> <td>10</td> <td>100</td> <td>{'learning_rate': 0, 'max_depth': 10, 'n_estim...</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>37</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> 
<td>0.0</td> <td>0.0</td> </tr> <tr> <th>10</th> <td>26.862908</td> <td>0.307729</td> <td>0.040693</td> <td>0.000534</td> <td>0</td> <td>10</td> <td>150</td> <td>{'learning_rate': 0, 'max_depth': 10, 'n_estim...</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>37</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> </tr> <tr> <th>11</th> <td>35.935152</td> <td>0.601251</td> <td>0.050276</td> <td>0.003593</td> <td>0</td> <td>10</td> <td>200</td> <td>{'learning_rate': 0, 'max_depth': 10, 'n_estim...</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>37</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> </tr> <tr> <th>12</th> <td>5.224301</td> <td>0.032876</td> <td>0.031630</td> <td>0.002356</td> <td>0.1</td> <td>1</td> <td>50</td> <td>{'learning_rate': 0.1, 'max_depth': 1, 'n_esti...</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> <td>1</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> </tr> <tr> <th>13</th> <td>9.647452</td> <td>0.109673</td> <td>0.035888</td> <td>0.000311</td> <td>0.1</td> <td>1</td> <td>100</td> <td>{'learning_rate': 0.1, 'max_depth': 1, 'n_esti...</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> <td>1</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> </tr> <tr> <th>14</th> <td>12.912822</td> <td>0.081637</td> <td>0.040854</td> <td>0.002026</td> <td>0.1</td> <td>1</td> <td>150</td> <td>{'learning_rate': 0.1, 'max_depth': 1, 'n_esti...</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> <td>1</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> </tr> <tr> <th>15</th> <td>16.060659</td> <td>0.098565</td> <td>0.041892</td> <td>0.000237</td> <td>0.1</td> <td>1</td> <td>200</td> <td>{'learning_rate': 0.1, 'max_depth': 1, 'n_esti...</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> <td>1</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> </tr> <tr> <th>16</th> <td>9.414715</td> <td>0.039430</td> <td>0.031090</td> <td>0.002583</td> <td>0.1</td> <td>5</td> <td>50</td> <td>{'learning_rate': 0.1, 'max_depth': 5, 'n_esti...</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> <td>1</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> </tr> <tr> <th>17</th> <td>17.775616</td> <td>0.058846</td> <td>0.035776</td> <td>0.000465</td> <td>0.1</td> <td>5</td> <td>100</td> <td>{'learning_rate': 0.1, 'max_depth': 5, 'n_esti...</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> <td>1</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> </tr> <tr> <th>18</th> <td>21.497155</td> <td>0.068947</td> <td>0.040027</td> <td>0.000244</td> <td>0.1</td> <td>5</td> <td>150</td> <td>{'learning_rate': 0.1, 'max_depth': 5, 'n_esti...</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> <td>1</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> </tr> <tr> <th>19</th> <td>24.893124</td> <td>0.096082</td> 
<td>0.043248</td> <td>0.000829</td> <td>0.1</td> <td>5</td> <td>200</td> <td>{'learning_rate': 0.1, 'max_depth': 5, 'n_esti...</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> <td>1</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> </tr> <tr> <th>20</th> <td>9.559423</td> <td>0.033718</td> <td>0.031076</td> <td>0.001258</td> <td>0.1</td> <td>10</td> <td>50</td> <td>{'learning_rate': 0.1, 'max_depth': 10, 'n_est...</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> <td>1</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> </tr> <tr> <th>21</th> <td>17.779846</td> <td>0.021406</td> <td>0.037105</td> <td>0.001220</td> <td>0.1</td> <td>10</td> <td>100</td> <td>{'learning_rate': 0.1, 'max_depth': 10, 'n_est...</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> <td>1</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> </tr> <tr> <th>22</th> <td>21.543649</td> <td>0.070453</td> <td>0.040314</td> <td>0.000669</td> <td>0.1</td> <td>10</td> <td>150</td> <td>{'learning_rate': 0.1, 'max_depth': 10, 'n_est...</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> <td>1</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> </tr> <tr> <th>23</th> <td>24.810440</td> <td>0.063423</td> <td>0.042254</td> <td>0.000202</td> <td>0.1</td> <td>10</td> <td>200</td> <td>{'learning_rate': 0.1, 'max_depth': 10, 'n_est...</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> <td>1</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> </tr> <tr> <th>24</th> <td>4.331822</td> <td>0.025802</td> <td>0.028689</td> <td>0.000253</td> <td>0.5</td> <td>1</td> <td>50</td> <td>{'learning_rate': 0.5, 'max_depth': 1, 'n_esti...</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> <td>1</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> </tr> <tr> <th>25</th> <td>7.590193</td> <td>0.031570</td> <td>0.033358</td> <td>0.003554</td> <td>0.5</td> <td>1</td> <td>100</td> <td>{'learning_rate': 0.5, 'max_depth': 1, 'n_esti...</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> <td>1</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> </tr> <tr> <th>26</th> <td>10.856622</td> <td>0.017206</td> <td>0.037413</td> <td>0.003155</td> <td>0.5</td> <td>1</td> <td>150</td> <td>{'learning_rate': 0.5, 'max_depth': 1, 'n_esti...</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> <td>1</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> </tr> <tr> <th>27</th> <td>14.169348</td> <td>0.047842</td> <td>0.038107</td> <td>0.000460</td> <td>0.5</td> <td>1</td> <td>200</td> <td>{'learning_rate': 0.5, 'max_depth': 1, 'n_esti...</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> <td>1</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> </tr> <tr> <th>28</th> <td>6.064367</td> <td>0.029661</td> <td>0.029169</td> <td>0.000334</td> <td>0.5</td> <td>5</td> <td>50</td> 
<td>{'learning_rate': 0.5, 'max_depth': 5, 'n_esti...</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> <td>1</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> </tr> <tr> <th>29</th> <td>9.378761</td> <td>0.042094</td> <td>0.031505</td> <td>0.000418</td> <td>0.5</td> <td>5</td> <td>100</td> <td>{'learning_rate': 0.5, 'max_depth': 5, 'n_esti...</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> <td>1</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> </tr> <tr> <th>30</th> <td>12.634556</td> <td>0.026014</td> <td>0.035485</td> <td>0.000481</td> <td>0.5</td> <td>5</td> <td>150</td> <td>{'learning_rate': 0.5, 'max_depth': 5, 'n_esti...</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> <td>1</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> </tr> <tr> <th>31</th> <td>15.722478</td> <td>0.043080</td> <td>0.037554</td> <td>0.000372</td> <td>0.5</td> <td>5</td> <td>200</td> <td>{'learning_rate': 0.5, 'max_depth': 5, 'n_esti...</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> <td>1</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> </tr> <tr> <th>32</th> <td>6.026124</td> <td>0.038738</td> <td>0.028696</td> <td>0.000472</td> <td>0.5</td> <td>10</td> <td>50</td> <td>{'learning_rate': 0.5, 'max_depth': 10, 'n_est...</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> <td>1</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> </tr> <tr> <th>33</th> <td>9.237901</td> <td>0.023303</td> <td>0.031758</td> <td>0.000553</td> <td>0.5</td> <td>10</td> <td>100</td> <td>{'learning_rate': 0.5, 'max_depth': 10, 'n_est...</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> <td>1</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> </tr> <tr> <th>34</th> <td>12.484442</td> <td>0.069820</td> <td>0.034907</td> <td>0.000434</td> <td>0.5</td> <td>10</td> <td>150</td> <td>{'learning_rate': 0.5, 'max_depth': 10, 'n_est...</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> <td>1</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> </tr> <tr> <th>35</th> <td>15.725432</td> <td>0.035642</td> <td>0.037854</td> <td>0.000733</td> <td>0.5</td> <td>10</td> <td>200</td> <td>{'learning_rate': 0.5, 'max_depth': 10, 'n_est...</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> <td>1</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> </tr> <tr> <th>36</th> <td>3.997875</td> <td>0.019430</td> <td>0.029214</td> <td>0.002305</td> <td>1</td> <td>1</td> <td>50</td> <td>{'learning_rate': 1, 'max_depth': 1, 'n_estima...</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> <td>1</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> </tr> <tr> <th>37</th> <td>7.176247</td> <td>0.023981</td> <td>0.030736</td> <td>0.000453</td> <td>1</td> <td>1</td> <td>100</td> <td>{'learning_rate': 1, 'max_depth': 1, 'n_estima...</td> <td>1.0</td> <td>1.0</td> 
<td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> <td>1</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> </tr> <tr> <th>38</th> <td>10.400242</td> <td>0.036906</td> <td>0.034983</td> <td>0.001284</td> <td>1</td> <td>1</td> <td>150</td> <td>{'learning_rate': 1, 'max_depth': 1, 'n_estima...</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> <td>1</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> </tr> <tr> <th>39</th> <td>13.563983</td> <td>0.016601</td> <td>0.037238</td> <td>0.001582</td> <td>1</td> <td>1</td> <td>200</td> <td>{'learning_rate': 1, 'max_depth': 1, 'n_estima...</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> <td>1</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> </tr> <tr> <th>40</th> <td>4.854039</td> <td>0.023113</td> <td>0.028086</td> <td>0.000138</td> <td>1</td> <td>5</td> <td>50</td> <td>{'learning_rate': 1, 'max_depth': 5, 'n_estima...</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> <td>1</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> </tr> <tr> <th>41</th> <td>8.051349</td> <td>0.016581</td> <td>0.030970</td> <td>0.000739</td> <td>1</td> <td>5</td> <td>100</td> <td>{'learning_rate': 1, 'max_depth': 5, 'n_estima...</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> <td>1</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> </tr> <tr> <th>42</th> <td>11.249002</td> <td>0.033681</td> <td>0.034361</td> <td>0.001230</td> <td>1</td> <td>5</td> <td>150</td> <td>{'learning_rate': 1, 'max_depth': 5, 'n_estima...</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> <td>1</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> </tr> <tr> <th>43</th> <td>14.494715</td> <td>0.045002</td> <td>0.038054</td> <td>0.002760</td> <td>1</td> <td>5</td> <td>200</td> <td>{'learning_rate': 1, 'max_depth': 5, 'n_estima...</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> <td>1</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> </tr> <tr> <th>44</th> <td>4.855808</td> <td>0.029675</td> <td>0.028406</td> <td>0.000850</td> <td>1</td> <td>10</td> <td>50</td> <td>{'learning_rate': 1, 'max_depth': 10, 'n_estim...</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> <td>1</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> </tr> <tr> <th>45</th> <td>8.062193</td> <td>0.051644</td> <td>0.030944</td> <td>0.000498</td> <td>1</td> <td>10</td> <td>100</td> <td>{'learning_rate': 1, 'max_depth': 10, 'n_estim...</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> <td>1</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> </tr> <tr> <th>46</th> <td>11.262729</td> <td>0.048420</td> <td>0.033710</td> <td>0.000492</td> <td>1</td> <td>10</td> <td>150</td> <td>{'learning_rate': 1, 'max_depth': 10, 'n_estim...</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> <td>1</td> <td>1.0</td> <td>1.0</td> 
<td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> </tr> <tr> <th>47</th> <td>14.534151</td> <td>0.046507</td> <td>0.037512</td> <td>0.000794</td> <td>1</td> <td>10</td> <td>200</td> <td>{'learning_rate': 1, 'max_depth': 10, 'n_estim...</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> <td>1</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> <td>0.0</td> </tr> </tbody> </table> </div> * By seeing the above results i can take max depth of 1 and learning rate of 0.1 and n_estimators count as 50. As 50,100,150 estimators are leading to same result. ```python clf.best_estimator_ ``` XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1, colsample_bynode=1, colsample_bytree=1, gamma=0, learning_rate=0.1, max_delta_step=0, max_depth=1, min_child_weight=1, missing=None, n_estimators=50, n_jobs=1, nthread=None, objective='binary:logistic', random_state=0, reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None, silent=None, subsample=1, verbosity=1) ```python #Training the model xgbmodel = xgb.XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1, colsample_bynode=1, colsample_bytree=1, gamma=0, learning_rate=0.1, max_delta_step=0, max_depth=1, min_child_weight=1, missing=None, n_estimators=50, n_jobs=-1, nthread=None, objective='binary:logistic', random_state=0, reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None, silent=None, subsample=1, verbosity=1) xgbmodel.fit(df_final_train, y_train) ``` XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1, colsample_bynode=1, colsample_bytree=1, gamma=0, learning_rate=0.1, max_delta_step=0, max_depth=1, min_child_weight=1, missing=None, n_estimators=50, n_jobs=-1, nthread=None, objective='binary:logistic', random_state=0, reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None, silent=None, subsample=1, verbosity=1) ```python #Predicting using model y_train_pred = xgbmodel.predict(df_final_train) y_test_pred = xgbmodel.predict(df_final_test) ``` ```python print('Train f1 score',f1_score(y_train,y_train_pred)) print('Test f1 score',f1_score(y_test,y_test_pred)) ``` Train f1 score 1.0 Test f1 score 1.0 ### Confusion Matrix ```python print('Train confusion_matrix') plot_confusion_matrix(y_train,y_train_pred) print('Test confusion_matrix') plot_confusion_matrix(y_test,y_test_pred) ``` ### ROC curve ```python from sklearn.metrics import roc_curve, auc fpr,tpr,ths = roc_curve(y_test,y_test_pred) auc_sc = auc(fpr, tpr) plt.plot(fpr, tpr, color='navy',label='ROC curve (area = %0.2f)' % auc_sc) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Receiver operating characteristic with test data') plt.grid() plt.legend() plt.show() ```
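Note that the ROC curves above are computed from hard 0/1 class predictions, which collapses the curve to a single operating point. Below is a minimal, illustrative sketch of the same plot built from predicted probabilities instead; it assumes the fitted random forest `clf` and the test frames from the cells above are still in scope, and it is a variation rather than part of the original analysis.

```python
from sklearn.metrics import roc_curve, auc

# use the probability of the positive class rather than the hard label
y_test_scores = clf.predict_proba(df_final_test)[:, 1]

fpr, tpr, ths = roc_curve(y_test, y_test_scores)
auc_sc = auc(fpr, tpr)
plt.plot(fpr, tpr, color='navy', label='ROC curve (area = %0.2f)' % auc_sc)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic with predicted probabilities (test data)')
plt.legend()
plt.show()
```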
```python # First check the Python version import sys if sys.version_info < (3,4): print('You are running an older version of Python!\n\n' \ 'You should consider updating to Python 3.4.0 or ' \ 'higher as the libraries built for this course ' \ 'have only been tested in Python 3.4 and higher.\n') print('Try installing the Python 3.5 version of anaconda ' 'and then restart `jupyter notebook`:\n' \ 'https://www.continuum.io/downloads\n\n') # Now get necessary libraries try: import os import numpy as np import matplotlib.pyplot as plt from skimage.transform import resize except ImportError: print('You are missing some packages! ' \ 'We will try installing them before continuing!') !pip install "numpy>=1.11.0" "matplotlib>=1.5.1" "scikit-image>=0.11.3" "scikit-learn>=0.17" import os import numpy as np import matplotlib.pyplot as plt from skimage.transform import resize print('Done!') # Import Tensorflow try: import tensorflow as tf except ImportError: print("You do not have tensorflow installed!") print("Follow the instructions on the following link") print("to install tensorflow before continuing:") print("") print("https://github.com/pkmital/CADL#installation-preliminaries") # This cell includes the provided libraries from the zip file try: from libs import utils except ImportError: print("Make sure you have started notebook in the same directory" + " as the provided zip file which includes the 'libs' folder" + " and the file 'utils.py' inside of it. You will NOT be able" " to complete this assignment unless you restart jupyter" " notebook inside the directory created by extracting" " the zip file or cloning the github repo.") # We'll tell matplotlib to inline any drawn figures like so: %matplotlib inline plt.style.use('ggplot') ``` <a name="variables"></a> ## Variables We're first going to import the tensorflow library: Let's take a look at how we might create a range of numbers. Using numpy, we could for instance use the linear space function: ```python x = np.linspace(-3.0, 3.0, 100) # Immediately, the result is given to us. An array of 100 numbers equally spaced from -3.0 to 3.0. print(x) # We know from numpy arrays that they have a `shape`, in this case a 1-dimensional array of 100 values print(x.shape) # and a `dtype`, in this case float64, or 64 bit floating point values. print(x.dtype) ``` [-3. -2.93939394 -2.87878788 -2.81818182 -2.75757576 -2.6969697 -2.63636364 -2.57575758 -2.51515152 -2.45454545 -2.39393939 -2.33333333 -2.27272727 -2.21212121 -2.15151515 -2.09090909 -2.03030303 -1.96969697 -1.90909091 -1.84848485 -1.78787879 -1.72727273 -1.66666667 -1.60606061 -1.54545455 -1.48484848 -1.42424242 -1.36363636 -1.3030303 -1.24242424 -1.18181818 -1.12121212 -1.06060606 -1. -0.93939394 -0.87878788 -0.81818182 -0.75757576 -0.6969697 -0.63636364 -0.57575758 -0.51515152 -0.45454545 -0.39393939 -0.33333333 -0.27272727 -0.21212121 -0.15151515 -0.09090909 -0.03030303 0.03030303 0.09090909 0.15151515 0.21212121 0.27272727 0.33333333 0.39393939 0.45454545 0.51515152 0.57575758 0.63636364 0.6969697 0.75757576 0.81818182 0.87878788 0.93939394 1. 1.06060606 1.12121212 1.18181818 1.24242424 1.3030303 1.36363636 1.42424242 1.48484848 1.54545455 1.60606061 1.66666667 1.72727273 1.78787879 1.84848485 1.90909091 1.96969697 2.03030303 2.09090909 2.15151515 2.21212121 2.27272727 2.33333333 2.39393939 2.45454545 2.51515152 2.57575758 2.63636364 2.6969697 2.75757576 2.81818182 2.87878788 2.93939394 3. 
] (100,) float64 In tensorflow, we could try to do the same thing using their linear space function: ```python x = tf.linspace(-3.0, 3.0, 100) print(x) ``` Tensor("LinSpace_1:0", shape=(100,), dtype=float32) Instead of a numpy.array, we are returned a tf.Tensor. The name of it is "LinSpace:0". Wherever we see this colon 0, that just means the output of. So the name of this Tensor is saying, the output of LinSpace. Think of tf.Tensors the same way as you would the numpy.array. It is described by its shape, in this case, only 1 dimension of 100 values. And it has a dtype, in this case, float32. But unlike the numpy.array, there are no values printed here! That's because it actually hasn't computed its values yet. Instead, it just refers to the output of a tf.Operation which has been already been added to Tensorflow's default computational graph. The result of that operation is the tensor that we are returned. <a name="graphs"></a> ## Graphs Let's try and inspect the underlying graph. We can request the "default" graph where all of our operations have been added: ```python g = tf.get_default_graph() ``` <a name="operations"></a> ## Operations And from this graph, we can get a list of all the operations that have been added, and print out their names: ```python [op.name for op in g.get_operations()] ``` ['LinSpace/start', 'LinSpace/stop', 'LinSpace/num', 'LinSpace', 'sub/y', 'sub', 'Pow/y', 'Pow', 'Pow_1/x', 'Pow_1/y', 'Pow_1', 'mul/x', 'mul', 'truediv', 'Neg', 'Exp', 'Sqrt/x', 'Sqrt', 'mul_1/x', 'mul_1', 'truediv_1/x', 'truediv_1', 'mul_2', 'LinSpace_1/start', 'LinSpace_1/stop', 'LinSpace_1/num', 'LinSpace_1'] So Tensorflow has named each of our operations to generally reflect what they are doing. There are a few parameters that are all prefixed by LinSpace, and then the last one which is the operation which takes all of the parameters and creates an output for the linspace. <a name="tensor"></a> ## Tensor We can request the output of any operation, which is a tensor, by asking the graph for the tensor's name: ```python g.get_tensor_by_name('LinSpace' + ':0') ``` <tf.Tensor 'LinSpace:0' shape=(100,) dtype=float32> What I've done is asked for the `tf.Tensor` that comes from the operation "LinSpace". So remember, the result of a `tf.Operation` is a `tf.Tensor`. Remember that was the same name as the tensor `x` we created before. <a name="sessions"></a> ## Sessions In order to actually compute anything in tensorflow, we need to create a `tf.Session`. The session is responsible for evaluating the `tf.Graph`. Let's see how this works: ```python import tensorflow as tf # We're first going to create a session: sess = tf.Session() # Now we tell our session to compute anything we've created in the tensorflow graph. computed_x = sess.run(x) print(computed_x) # Alternatively, we could tell the previous Tensor to evaluate itself using this session: computed_x = x.eval(session=sess) print(computed_x) # We can close the session after we're done like so: sess.close() ``` [-3. -2.939394 -2.87878799 -2.81818175 -2.75757575 -2.69696975 -2.63636351 -2.5757575 -2.5151515 -2.4545455 -2.3939395 -2.33333325 -2.27272725 -2.21212125 -2.15151501 -2.090909 -2.030303 -1.969697 -1.90909088 -1.84848475 -1.78787875 -1.72727275 -1.66666663 -1.6060605 -1.5454545 -1.4848485 -1.42424238 -1.36363626 -1.30303025 -1.24242425 -1.18181813 -1.12121201 -1.060606 -1. 
-0.939394 -0.87878776 -0.81818175 -0.75757575 -0.69696951 -0.63636351 -0.5757575 -0.5151515 -0.4545455 -0.39393926 -0.33333325 -0.27272725 -0.21212101 -0.15151501 -0.090909 -0.030303 0.030303 0.09090924 0.15151525 0.21212125 0.27272749 0.33333349 0.3939395 0.4545455 0.5151515 0.57575774 0.63636374 0.69696975 0.75757599 0.81818199 0.87878799 0.939394 1. 1.060606 1.12121201 1.18181849 1.24242449 1.30303049 1.36363649 1.4242425 1.4848485 1.5454545 1.60606098 1.66666698 1.72727299 1.78787899 1.84848499 1.909091 1.969697 2.030303 2.090909 2.15151548 2.21212149 2.27272749 2.33333349 2.3939395 2.4545455 2.5151515 2.57575798 2.63636398 2.69696999 2.75757599 2.81818199 2.87878799 2.939394 3. ] [-3. -2.939394 -2.87878799 -2.81818175 -2.75757575 -2.69696975 -2.63636351 -2.5757575 -2.5151515 -2.4545455 -2.3939395 -2.33333325 -2.27272725 -2.21212125 -2.15151501 -2.090909 -2.030303 -1.969697 -1.90909088 -1.84848475 -1.78787875 -1.72727275 -1.66666663 -1.6060605 -1.5454545 -1.4848485 -1.42424238 -1.36363626 -1.30303025 -1.24242425 -1.18181813 -1.12121201 -1.060606 -1. -0.939394 -0.87878776 -0.81818175 -0.75757575 -0.69696951 -0.63636351 -0.5757575 -0.5151515 -0.4545455 -0.39393926 -0.33333325 -0.27272725 -0.21212101 -0.15151501 -0.090909 -0.030303 0.030303 0.09090924 0.15151525 0.21212125 0.27272749 0.33333349 0.3939395 0.4545455 0.5151515 0.57575774 0.63636374 0.69696975 0.75757599 0.81818199 0.87878799 0.939394 1. 1.060606 1.12121201 1.18181849 1.24242449 1.30303049 1.36363649 1.4242425 1.4848485 1.5454545 1.60606098 1.66666698 1.72727299 1.78787899 1.84848499 1.909091 1.969697 2.030303 2.090909 2.15151548 2.21212149 2.27272749 2.33333349 2.3939395 2.4545455 2.5151515 2.57575798 2.63636398 2.69696999 2.75757599 2.81818199 2.87878799 2.939394 3. ] You can also specify which graph you want to use in a given session as an argument to tf.Session() To simplify things, since we'll be working in iPython's interactive console, we can create an `tf.InteractiveSession`: ```python sess = tf.InteractiveSession() x.eval() ``` array([-3. , -2.939394 , -2.87878799, -2.81818175, -2.75757575, -2.69696975, -2.63636351, -2.5757575 , -2.5151515 , -2.4545455 , -2.3939395 , -2.33333325, -2.27272725, -2.21212125, -2.15151501, -2.090909 , -2.030303 , -1.969697 , -1.90909088, -1.84848475, -1.78787875, -1.72727275, -1.66666663, -1.6060605 , -1.5454545 , -1.4848485 , -1.42424238, -1.36363626, -1.30303025, -1.24242425, -1.18181813, -1.12121201, -1.060606 , -1. , -0.939394 , -0.87878776, -0.81818175, -0.75757575, -0.69696951, -0.63636351, -0.5757575 , -0.5151515 , -0.4545455 , -0.39393926, -0.33333325, -0.27272725, -0.21212101, -0.15151501, -0.090909 , -0.030303 , 0.030303 , 0.09090924, 0.15151525, 0.21212125, 0.27272749, 0.33333349, 0.3939395 , 0.4545455 , 0.5151515 , 0.57575774, 0.63636374, 0.69696975, 0.75757599, 0.81818199, 0.87878799, 0.939394 , 1. , 1.060606 , 1.12121201, 1.18181849, 1.24242449, 1.30303049, 1.36363649, 1.4242425 , 1.4848485 , 1.5454545 , 1.60606098, 1.66666698, 1.72727299, 1.78787899, 1.84848499, 1.909091 , 1.969697 , 2.030303 , 2.090909 , 2.15151548, 2.21212149, 2.27272749, 2.33333349, 2.3939395 , 2.4545455 , 2.5151515 , 2.57575798, 2.63636398, 2.69696999, 2.75757599, 2.81818199, 2.87878799, 2.939394 , 3. ], dtype=float32) Now we didn't have to explicitly tell the `eval` function about our session. We'll leave this session open for the rest of the lecture. 
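As a small aside on the note above that a session can be told which graph to use: with the TensorFlow 1.x API used in this notebook, the graph is simply passed to `tf.Session`. The snippet below is an editorial illustration and is not part of the original lecture.

```python
import tensorflow as tf

# build a tiny graph explicitly and evaluate one of its tensors
# in a session bound to that graph
g2 = tf.Graph()
with g2.as_default():
    y = tf.linspace(-1.0, 1.0, 5)

with tf.Session(graph=g2) as sess:
    print(sess.run(y))
```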
<a name="tensor-shapes"></a> ## Tensor Shapes ```python # We can find out the shape of a tensor like so: print(x.get_shape()) # %% Or in a more friendly format print(x.get_shape().as_list()) ``` (100,) [100] <a name="many-operations"></a> ## Many Operations Lets try a set of operations now. We'll try to create a Gaussian curve. This should resemble a normalized histogram where most of the data is centered around the mean of 0. It's also sometimes refered to by the bell curve or normal curve. ```python # The 1 dimensional gaussian takes two parameters, the mean value, and the standard deviation, which is commonly denoted by the name sigma. mean = 0.0 sigma = 1.0 # Don't worry about trying to learn or remember this formula. I always have to refer to textbooks or check online for the exact formula. z = (tf.exp(tf.neg(tf.pow(x - mean, 2.0) / (2.0 * tf.pow(sigma, 2.0)))) * (1.0 / (sigma * tf.sqrt(2.0 * 3.1415)))) ``` Just like before, amazingly, we haven't actually computed anything. We *have just added a bunch of operations to Tensorflow's graph. Whenever we want the value or output of this operation, we'll have to explicitly ask for the part of the graph we're interested in before we can see its result. Since we've created an interactive session, we should just be able to say the name of the Tensor that we're interested in, and call the `eval` function: ```python res = z.eval() plt.plot(res) # if nothing is drawn, and you are using ipython notebook, uncomment the next two lines: %matplotlib inline plt.plot(res) ``` <a name="deep_nets"></a> # Deep Nets In TensorFlow <a name="perceptron"></a> # The Perceptron The perceptron is a simple model developed by scientist Frank Rosenblatt in the 60s that serves as the basis for modern neural nets. <a name="gradient-descent"></a> # Gradient Descent Whenever we create a neural network, we have to define a set of operations. These operations try to take us from some input to some output. For instance, the input might be an image, or frame of a video, or text file, or sound file. The operations of the network are meant to transform this input data into something meaningful that we want the network to learn about. Initially, all of the parameters of the network are random. So whatever is being output will also be random. But let's say we need it to output something specific about the image. To teach it to do that, we're going to use something called "Gradient Descent". Simply, Gradient descent is a way of optimizing a set of parameters. Let's say we have a few images, and know that given a certain image, when I feed it through a network, its parameters should help the final output of the network be able to spit out the word "orange", or "apple", or some appropriate *label* given the image of that object. The parameters should somehow accentuate the "orangeness" of my image. It probably will be able to transform an image in away that it ends up having high intensities for images that have the color orange in them, and probably prefer images that have that color in a fairly round arrangement. Rather than hand crafting all of the possible ways an orange might be manifested, we're going to learn the best way to optimize its objective: separating oranges and apples. How can we teach a network to learn something like this? <a name="defining-cost"></a> ## Defining Cost Well we need to define what "best" means. In order to do so, we need a measure of the "error". Let's continue with the two options we've been using: orange, or apple. 
I can represent these as 0 and 1 instead. I'm going to get a few images of oranges and apples, and one by one, feed them into a network that I've randomly initialized. I'll then filter the image by just multiplying every value by some random set of values. And then I'll just add up all the numbers and squash the result in a way that means I'll only ever get 0 or 1. So I put in an image, and I get out a 0 or 1. Except, the parameters of my network are totally random, and so my network will only ever spit out random 0s or 1s. How can I get this random network to know when to spit out a 0 for images of oranges, and a 1 for images of apples? We do that by saying, if the network predicts a 0 for an orange, then the error is 0. If the network predicts a 1 for an orange, then the error is 1. And vice-versa for apples. If it spits out a 1 for an apple, then the error is 0. If it spits out a 0 for an apple, then the error is 1. What we've just done is create a function which describes error in terms of our parameters. Let's write this another way:

\begin{align}
\text{error} = \text{network}(\text{image}) - \text{true\_label}
\end{align}

where

\begin{align}
\text{network}(\text{image}) = \text{predicted\_label}
\end{align}

More commonly, we'll see these components represented by the following letters:

\begin{align}
E = f(X) - y
\end{align}

Don't worry about trying to remember this equation. Just see how it is similar to what we've done with the oranges and apples. `X` is generally the input to the network, which is fed to some network, or a function $f$, which we know should output some label `y`. Whatever difference there is between what it should output, $y$, and what it actually outputs, $f(X)$, is the error, $E$.

<a name="minimizing-error"></a>
## Minimizing Error

Instead of feeding one image at a time, we're going to feed in many. Let's say 100. This way, we can see what our network is doing on average. If our error at the current network parameters is e.g. 50/100, we're correctly guessing about 50 of the 100 images. Now for the crucial part. If we move our network's parameters a tiny bit and see what happens to our error, we can actually use that knowledge to find smaller errors. Let's say the error went up after we moved our network parameters. Well then we know we should go back the way we came, and try going the other direction entirely. If our error went down, then we should just keep changing our parameters in the same direction. The error provides a "training signal" or a measure of the "loss" of our network. You'll often hear any number of these terms used to describe the same thing: "Error", "Cost", "Loss", or "Training Signal". That's pretty much gradient descent in a nutshell. Of course, we've made a lot of assumptions in assuming our function is continuous and differentiable. But we're not going to worry about that, and if you don't know what that means, don't worry about it.
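To make that loop concrete, here is a minimal NumPy sketch of gradient descent on a single parameter. This snippet is an editorial illustration rather than part of the original lecture: the data, the learning rate, and the number of steps are all made-up choices.

```python
import numpy as np

# made-up data: y is roughly 3 * x plus a little noise
x = np.linspace(0.0, 1.0, 100)
y = 3.0 * x + np.random.normal(0.0, 0.1, 100)

w = 0.0              # start from a (bad) initial parameter
learning_rate = 0.5  # how far to move at each step

for step in range(100):
    error = np.mean((w * x - y) ** 2)          # measure of the error (MSE)
    gradient = np.mean(2.0 * (w * x - y) * x)  # how the error changes with w
    w = w - learning_rate * gradient           # step against the gradient

print(w, error)  # w should end up close to 3
```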
<a name="backpropagation"></a>
## Backpropagation

To summarize, gradient descent is a simple but very powerful method for finding smaller measures of error by following the negative direction of its gradient. The gradient is just saying: how does the error change at the current set of parameters? One thing I didn't mention was how we figure out what the gradient is. In order to do that, we use something called backpropagation. When we pass an input to a network, it's doing what's called forward propagation: we're sending an input and multiplying it by every weight to get an expected output. Whatever differences that output has with the output we wanted it to have get *backpropagated* to every single parameter in our network. Basically, backprop is a very effective way to find the gradient by simply multiplying many partial derivatives together. It uses something called the chain rule to find the gradient of the error with respect to every single parameter in a network, and follows this error from the output of the network all the way back to the input. While the details won't be necessary for this course, we will come back to it in later sessions as we learn more about how we can use both backprop and forward prop to help us understand the inner workings of deep neural networks. If you are interested in knowing more details about backprop, I highly recommend both Michael Nielsen's online deep learning book: http://neuralnetworksanddeeplearning.com/ and the online Deep Learning book by Goodfellow, Bengio, and Courville: http://www.deeplearningbook.org/

<a name="local-minimaoptima"></a>
## Local Minima/Optima

```python
xs = np.linspace(-6, 6, 100)
plt.plot(xs, np.maximum(xs, 0), label='relu')
plt.plot(xs, 1 / (1 + np.exp(-xs)), label='sigmoid')
plt.plot(xs, np.tanh(xs), label='tanh')
plt.xlabel('Input')
plt.xlim([-6, 6])
plt.ylabel('Output')
plt.ylim([-1.5, 1.5])
plt.title('Common Activation Functions/Nonlinearities')
plt.legend(loc='lower right')
```

```python

```
a099fd4eddc2410221d64f51f1769327931d8f07
65,450
ipynb
Jupyter Notebook
session-1/.ipynb_checkpoints/Inquidia Day Prez-checkpoint.ipynb
arkansasred/CADL
5fe4141124c19c5f331cf5b49970313612a47c4e
[ "Apache-2.0" ]
1
2018-06-10T06:06:27.000Z
2018-06-10T06:06:27.000Z
session-1/.ipynb_checkpoints/Inquidia Day Prez-checkpoint.ipynb
joshoberman/CADL
5fe4141124c19c5f331cf5b49970313612a47c4e
[ "Apache-2.0" ]
null
null
null
session-1/.ipynb_checkpoints/Inquidia Day Prez-checkpoint.ipynb
joshoberman/CADL
5fe4141124c19c5f331cf5b49970313612a47c4e
[ "Apache-2.0" ]
null
null
null
98.717949
21,472
0.803453
true
6,390
Qwen/Qwen-72B
1. YES 2. YES
0.699254
0.851953
0.595732
__label__eng_Latn
0.986836
0.222415
<center> ## [mlcourse.ai](mlcourse.ai) – Open Machine Learning Course ### <center> Author: Ilya Larchenko, ODS Slack ilya_l ## <center> Individual data analysis project ## 1. Data description __I will analyse California Housing Data (1990). It can be downloaded from Kaggle [https://www.kaggle.com/harrywang/housing]__ We will predict the median price of household in block. To start you need to download file housing.csv.zip . Let's load the data and look at it. ```python import pandas as pd import numpy as np import os %matplotlib inline import warnings # `do not disturbe` mode warnings.filterwarnings('ignore') ``` ```python # change this if needed PATH_TO_DATA = 'data' ``` ```python full_df = pd.read_csv(os.path.join(PATH_TO_DATA, 'housing.csv.zip'), compression ='zip') print(full_df.shape) full_df.head() ``` Data consists of 20640 rows and 10 features: 1. longitude: A measure of how far west a house is; a higher value is farther west 2. latitude: A measure of how far north a house is; a higher value is farther north 3. housingMedianAge: Median age of a house within a block; a lower number is a newer building 4. totalRooms: Total number of rooms within a block 5. totalBedrooms: Total number of bedrooms within a block 6. population: Total number of people residing within a block 7. households: Total number of households, a group of people residing within a home unit, for a block 8. medianIncome: Median income for households within a block of houses (measured in tens of thousands of US Dollars) 9. medianHouseValue: Median house value for households within a block (measured in US Dollars) 10. oceanProximity: Location of the house w.r.t ocean/sea *median_house_value* is our target feature, we will use other features to predict it. The task is to predict how much the houses in particular block cost (the median) based on information of blocks location and basic sociodemographic data Let's divide dataset into train (75%) and test (25%). ```python %%time from sklearn.model_selection import train_test_split train_df, test_df = train_test_split(full_df,shuffle = True, test_size = 0.25, random_state=17) train_df=train_df.copy() test_df=test_df.copy() print(train_df.shape) print(test_df.shape) ``` All futher analysis we will do with the test set. But feature generation and processing will be simmultaneously done on both sets. ## 2-3. Primary data analysis / Primary visual data analysis ```python train_df.describe() ``` ```python train_df.info() ``` We can see that most columns has no nan values (except total_bedrooms), most features has float format, only 1 feature is categorical - ocean_proximity. ```python train_df[pd.isnull(train_df).any(axis=1)].head(10) ``` There is no obvious reasons for some total_bedrooms to be NaN. The number of NaNs is about 1% of total dataset. Maybe we could just drop this rows or fill it with mean/median values, but let's wait for a while, and deal with blanks after initial data analysis in a smarter manner. Let's create the list of numeric features names (it will be useful later). ```python numerical_features=list(train_df.columns) numerical_features.remove('ocean_proximity') numerical_features.remove('median_house_value') print(numerical_features) ``` Let's look at target feature distribition ```python train_df['median_house_value'].hist() ``` We can visually see that distribution is skewed and not normal. Also it seems that the values are clipped somewhere near 500 000. We can check it numerically. 
```python max_target=train_df['median_house_value'].max() print("The largest median value:",max_target) print("The # of values, equal to the largest:", sum(train_df['median_house_value']==max_target)) print("The % of values, equal to the largest:", sum(train_df['median_house_value']==max_target)/train_df.shape[0]) ``` Almost 5% of all values = exactly 500 001. It proves our clipping theory. Let's check the clipping of small values: ```python min_target=train_df['median_house_value'].min() print("The smallest median value:",min_target) print("The # of values, equal to the smallest:", sum(train_df['median_house_value']==min_target)) print("The % of values, equal to the smallest:", sum(train_df['median_house_value']==min_target)/train_df.shape[0]) ``` This time it looks much better, a little bit artificial value 14 999 - is common for prices. And there are only 4 such values. So probably the small values are not clipped. Let's conduct some normality tests: ```python from statsmodels.graphics.gofplots import qqplot from matplotlib import pyplot qqplot(train_df['median_house_value'], line='s') pyplot.show() ``` ```python from scipy.stats import normaltest stat, p = normaltest(train_df['median_house_value']) print('Statistics=%.3f, p=%.3f' % (stat, p)) alpha = 0.05 if p < alpha: # null hypothesis: x comes from a normal distribution print("The null hypothesis can be rejected") else: print("The null hypothesis cannot be rejected") ``` QQ-plot and D’Agostino and Pearson’s normality test show that the distribution is far from normal. We can try to use log(1+n) to make it more normal: ```python target_log=np.log1p(train_df['median_house_value']) qqplot(target_log, line='s') pyplot.show() ``` ```python stat, p = normaltest(target_log) print('Statistics=%.3f, p=%.3f' % (stat, p)) alpha = 0.05 if p < alpha: # null hypothesis: x comes from a normal distribution print("The null hypothesis can be rejected") else: print("The null hypothesis cannot be rejected") ``` This graph looks much better, the only non-normal parts are clipped high prices and very low prices. Unfortunately we can not reconstruct clipped data and statistically the distribution it is still not normal - p-value = 0, the null hypothesis of distribution normality can be rejected. Anyway, predicting of target_log instead of target can be a good choice for us, but we still should check it during model validation phase. ```python train_df['median_house_value_log']=np.log1p(train_df['median_house_value']) test_df['median_house_value_log']=np.log1p(test_df['median_house_value']) ``` Now let's analyze numerical features. First of all we need to look at their distributions. ```python train_df[numerical_features].hist(bins=50, figsize=(10, 10)) ``` Some features are signifacantly skewed, and our "log trick" should be heplfull ```python skewed_features=['households','median_income','population', 'total_bedrooms', 'total_rooms'] log_numerical_features=[] for f in skewed_features: train_df[f + '_log']=np.log1p(train_df[f]) test_df[f + '_log']=np.log1p(test_df[f]) log_numerical_features.append(f + '_log') ``` ```python train_df[log_numerical_features].hist(bins=50, figsize=(10, 10)) ``` Our new features looks much better (during the modeling phase we can use either original, new ones or both of them) housing_median_age looks clipped as well. Let's look at it's highest value precisely. 
```python
max_house_age = train_df['housing_median_age'].max()

print("The largest value:", max_house_age)
print("The # of values, equal to the largest:", sum(train_df['housing_median_age'] == max_house_age))
print("The % of values, equal to the largest:", sum(train_df['housing_median_age'] == max_house_age) / train_df.shape[0])
```

It is very likely the data is clipped (there is also a small chance that in 1938 there was a great reconstruction project in California, but that seems less likely). We can't recreate the original values, but it can be useful to create a new binary feature indicating the clipping of the house age.

```python
train_df['age_clipped'] = train_df['housing_median_age'] == max_house_age
test_df['age_clipped'] = test_df['housing_median_age'] == max_house_age
```

Now we will analyse the correlation between the features and the target variable.

```python
import matplotlib.pyplot as plt
import seaborn as sns

corr_y = pd.DataFrame(train_df).corr()
plt.rcParams['figure.figsize'] = (20, 16)  # figure size
sns.heatmap(corr_y, xticklabels=corr_y.columns.values, yticklabels=corr_y.columns.values, annot=True)
```

We can see some (maybe obvious) patterns here:

- House values are significantly correlated with median income
- Number of households is not 100% correlated with population, so we can try to add average_size_of_household as a feature
- Longitude and latitude should be analyzed separately (just a correlation with the target variable is not very useful)
- There is a set of highly correlated features: number of rooms, bedrooms, population and households. It can be useful to reduce the dimensionality of this subset, especially if we use linear models
- total_bedrooms is one of these highly correlated features, which means we can fill its NaN values with high precision using the simplest linear regression

Let's try to fill the NaNs with a simple linear regression:

```python
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

lin = LinearRegression()

# we will train our model based on all numerical non-target features with non-NaN total_bedrooms
appropriate_columns = train_df.drop(['median_house_value', 'median_house_value_log',
                                     'ocean_proximity', 'total_bedrooms_log'], axis=1)
train_data = appropriate_columns[~pd.isnull(train_df).any(axis=1)]

# the model will be validated on 25% of the train dataset
# theoretically we could even use our test_df dataset (as we don't use the target) for this task, but we will not
temp_train, temp_valid = train_test_split(train_data, shuffle=True, test_size=0.25, random_state=17)

lin.fit(temp_train.drop(['total_bedrooms'], axis=1), temp_train['total_bedrooms'])
np.sqrt(mean_squared_error(lin.predict(temp_valid.drop(['total_bedrooms'], axis=1)), temp_valid['total_bedrooms']))
```

RMSE on the validation set is 64.5. Let's compare this with the best constant prediction, i.e. what if we fill the NaNs with the mean value:

```python
np.sqrt(mean_squared_error(np.ones(len(temp_valid['total_bedrooms'])) * temp_train['total_bedrooms'].mean(),
                           temp_valid['total_bedrooms']))
```

Obviously our linear regression approach is much better. Let's train the model on the whole train dataset and apply it to the rows with missing values. But first we will "remember" the rows with NaNs, because there is a chance that this can contain useful information.
```python lin.fit(train_data.drop(['total_bedrooms'],axis=1), train_data['total_bedrooms']) train_df['total_bedrooms_is_nan']=pd.isnull(train_df).any(axis=1).astype(int) test_df['total_bedrooms_is_nan']=pd.isnull(test_df).any(axis=1).astype(int) train_df['total_bedrooms'].loc[pd.isnull(train_df).any(axis=1)]=\ lin.predict(train_df.drop(['median_house_value','median_house_value_log','total_bedrooms','total_bedrooms_log', 'ocean_proximity','total_bedrooms_is_nan'],axis=1)[pd.isnull(train_df).any(axis=1)]) test_df['total_bedrooms'].loc[pd.isnull(test_df).any(axis=1)]=\ lin.predict(test_df.drop(['median_house_value','median_house_value_log','total_bedrooms','total_bedrooms_log', 'ocean_proximity','total_bedrooms_is_nan'],axis=1)[pd.isnull(test_df).any(axis=1)]) #linear regression can lead to negative predictions, let's change it test_df['total_bedrooms']=test_df['total_bedrooms'].apply(lambda x: max(x,0)) train_df['total_bedrooms']=train_df['total_bedrooms'].apply(lambda x: max(x,0)) ``` Let's update 'total_bedrooms_log' and check if there are no NaNs left ```python train_df['total_bedrooms_log']=np.log1p(train_df['total_bedrooms']) test_df['total_bedrooms_log']=np.log1p(test_df['total_bedrooms']) ``` ```python print(train_df.info()) print(test_df.info()) ``` After filling of blanks let's have a closer look on dependences between some numerical features ```python sns.set() sns.pairplot(train_df[log_numerical_features+['median_house_value_log']]) ``` It seems there are no new insights about numerical features (only confirmation of the old ones). Let's try to do the same thing but for the local (geographically) subset of our data. ```python sns.set() local_coord=[-122, 41] # the point near which we want to look at our variables euc_dist_th = 2 # distance treshhold euclid_distance=train_df[['latitude','longitude']].apply(lambda x: np.sqrt((x['longitude']-local_coord[0])**2+ (x['latitude']-local_coord[1])**2), axis=1) # indicate wethere the point is within treshhold or not indicator=pd.Series(euclid_distance<=euc_dist_th, name='indicator') print("Data points within treshhold:", sum(indicator)) # a small map to visualize th eregion for analysis sns.lmplot('longitude', 'latitude', data=pd.concat([train_df,indicator], axis=1), hue='indicator', markers ='.', fit_reg=False, height=5) # pairplot sns.pairplot(train_df[log_numerical_features+['median_house_value_log']][indicator]) ``` We can see that on any local territory (you can play with local_coord and euc_dist_th) the linear dependences between variables became stronger, especially median_income_log / median_house_value_log. So the coordinates is very important factor for our task (we will analyze it later) Now let's move on to the categorical feature "ocean_proximity". It is not 100% clear what does it values means. So let's first of all plot in on the map. ```python sns.lmplot('longitude', 'latitude', data=train_df,markers ='.', hue='ocean_proximity', fit_reg=False, height=5) plt.show() ``` Now we better undersand the meaning of different classes. Let's look at the data. 
```python value_count=train_df['ocean_proximity'].value_counts() value_count ``` ```python plt.figure(figsize=(12,5)) sns.barplot(value_count.index, value_count.values) plt.title('Ocean Proximity') plt.ylabel('Number of Occurrences') plt.xlabel('Ocean Proximity') plt.figure(figsize=(12,5)) plt.title('House Value depending on Ocean Proximity') sns.boxplot(x="ocean_proximity", y="median_house_value_log", data=train_df) ``` We can see that INLAND houses has significantly lower prices. Distribution in other differ but not so much. There is no clear trend in house price / poximity, so we will not try to invent complex encoding approach. Let's just do OHE for this feature. ```python ocean_proximity_dummies = pd.get_dummies(pd.concat([train_df['ocean_proximity'],test_df['ocean_proximity']]), drop_first=True) ``` ```python dummies_names=list(ocean_proximity_dummies.columns) ``` ```python train_df=pd.concat([train_df,ocean_proximity_dummies[:train_df.shape[0]]], axis=1 ) test_df=pd.concat([test_df,ocean_proximity_dummies[train_df.shape[0]:]], axis=1 ) train_df=train_df.drop(['ocean_proximity'], axis=1) test_df=test_df.drop(['ocean_proximity'], axis=1) ``` ```python train_df.head() ``` And finally we will explore coordinates features. ```python train_df[['longitude','latitude']].describe() ``` Let's plot the house_values (target) on map: ```python from matplotlib.colors import LinearSegmentedColormap plt.figure(figsize=(10,10)) cmap = LinearSegmentedColormap.from_list(name='name', colors=['green','yellow','red']) f, ax = plt.subplots() points = ax.scatter(train_df['longitude'], train_df['latitude'], c=train_df['median_house_value_log'], s=10, cmap=cmap) f.colorbar(points) ``` It seems that the average value of geographically nearest houses can be very good feature. We can also see, that the most expensive houses are located near San Francisco (37.7749° N, 122.4194° W) and Los Angeles (34.0522° N, 118.2437°). Based on this we can use the distance to this cities as additional features. We also see that the most expensive houses are on approximately on the straight line, and become cheaper when we moving to North-East. This means that the linear combination of coordinates themselves can be useful feature as well. ```python sf_coord=[-122.4194, 37.7749] la_coord=[-118.2437, 34.0522] train_df['distance_to_SF']=np.sqrt((train_df['longitude']-sf_coord[0])**2+(train_df['latitude']-sf_coord[1])**2) test_df['distance_to_SF']=np.sqrt((test_df['longitude']-sf_coord[0])**2+(test_df['latitude']-sf_coord[1])**2) train_df['distance_to_LA']=np.sqrt((train_df['longitude']-la_coord[0])**2+(train_df['latitude']-la_coord[1])**2) test_df['distance_to_LA']=np.sqrt((test_df['longitude']-la_coord[0])**2+(test_df['latitude']-la_coord[1])**2) ``` ## 4. Insights and found dependencies Let's quickly sum up what useful we have found so far: - We have analyzed the features and found some ~lognorm distributed among them. We have created corresponding log features - We have analyzed the distribution of the target feature, and concluded that it may be useful to predict log of it (to be checked) - We have dealt with clipped and missing data - We have created features corresponding to simple Eucledian distances to LA ans SF - We also has found several highly correlated variables and maybe will work with them later - We have already generated several new variables and will create more of them later after the initial modeling phase All explanation about this steps were already given above. ## 5. 
Metrics selection

This is a regression problem. Our target metric will be RMSE: it is one of the most popular regression metrics, and it has the same unit of measurement as the target value, which makes it easy to explain to other people.

\begin{align}
RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\big(d_i - f_i\big)^2}
\end{align}

where $d_i$ are the observed values and $f_i$ the predictions. Since there is a monotonic dependence between RMSE and MSE, we can optimize MSE in our model and compute RMSE only at the end. MSE is easy to optimize: it is the default loss function for most regression models. The main drawback of MSE and RMSE is the high penalty for large prediction errors: they can overfit to outliers. But in our case the outlying target values have already been clipped, so it is not a big problem.

## 6. Model selection

We will try to solve our problem with 3 different regression models:

- Linear regression
- Random forest
- Gradient boosting

Linear regression is fast and simple and can provide quite a good baseline result for our task. Tree-based models can provide better results in the case of complex nonlinear dependences between variables and a small number of variables; they are also more robust to multicollinearity (and we have highly correlated variables). Moreover, in our problem the target values are clipped and the targets can't be outside the clipping interval, which is good for tree-based models. The results of these models will be compared in parts 11-12 of the project.

Tree-based models are expected to work better in this particular problem, but we will start with the simpler model. We will start with standard linear regression, go through all of the modeling steps, and then do some simplified computations for the 2 other models (without an in-depth explanation of every step). The final model selection will be done based on the results.

## 7. Data preprocessing

We have already done most of the preprocessing steps:

- OHE for the categorical features
- Filled NaNs
- Computed logs of skewed data
- Divided data into train and hold-out sets

Now let's scale all numerical features (this is useful for the linear models) and prepare the cross-validation splits, and we are ready to proceed to modeling.

```python
from sklearn.preprocessing import StandardScaler

features_to_scale = numerical_features + log_numerical_features + ['distance_to_SF', 'distance_to_LA']

scaler = StandardScaler()
X_train_scaled = pd.DataFrame(scaler.fit_transform(train_df[features_to_scale]),
                              columns=features_to_scale, index=train_df.index)
X_test_scaled = pd.DataFrame(scaler.transform(test_df[features_to_scale]),
                             columns=features_to_scale, index=test_df.index)
```

## 8. Cross-validation and adjustment of model hyperparameters

Let's prepare the cross-validation samples. Since there is not a lot of data, we can easily divide it into 10 folds taken from the shuffled train data. Within every split we will train our model on 90% of the train data and compute the CV metric on the other 10%. We fix the random state for reproducibility.
```python from sklearn.model_selection import KFold, cross_val_score kf = KFold(n_splits=10, random_state=17, shuffle=True) ``` ### Linear regression For the first initial baseline we will take Rigge model with only initial numerical and OHE features ```python from sklearn.linear_model import Ridge model=Ridge(alpha=1) X=train_df[numerical_features+dummies_names] y=train_df['median_house_value'] cv_scores = cross_val_score(model, X, y, cv=kf, scoring='neg_mean_squared_error', n_jobs=-1) print(np.sqrt(-cv_scores.mean())) ``` We are doing cross validation with 10 folds, computing 'neg_mean_squared_error' (neg - because sklearn needs scoring functions to be minimized). Our final metrics: RMSE=np.sqrt(-neg_MSE) So our baseline is RMSE = $68 702 we will try to improve this results using everything we have discovered during the data analysis phase. We will do the following steps: - Use scaled features - Add log features - Add NaN and age clip indicating features - Add city-distance features - Generate several new features - Try to predict log(target) instead of target - Tune some hyperparameters of the model One again the most part of the hyperparameters adjustment will be done later after we add some new features. Actually the cross-validation and parameters tuning process is done through the parts 8-11. ```python # using scaled data X=pd.concat([train_df[dummies_names], X_train_scaled[numerical_features]], axis=1, ignore_index = True) cv_scores = cross_val_score(model, X, y, cv=kf, scoring='neg_mean_squared_error', n_jobs=-1) print(np.sqrt(-cv_scores.mean())) ``` ```python # adding NaN indicating feature X=pd.concat([train_df[dummies_names+['total_bedrooms_is_nan']], X_train_scaled[numerical_features]], axis=1, ignore_index = True) cv_scores = cross_val_score(model, X, y, cv=kf, scoring='neg_mean_squared_error', n_jobs=-1) print(np.sqrt(-cv_scores.mean())) ``` ```python # adding house age cliiping indicating feature X=pd.concat([train_df[dummies_names+['age_clipped']], X_train_scaled[numerical_features]], axis=1, ignore_index = True) cv_scores = cross_val_score(model, X, y, cv=kf, scoring='neg_mean_squared_error', n_jobs=-1) print(np.sqrt(-cv_scores.mean())) ``` ```python # adding log features X=pd.concat([train_df[dummies_names+['age_clipped']], X_train_scaled[numerical_features+log_numerical_features]], axis=1, ignore_index = True) cv_scores = cross_val_score(model, X, y, cv=kf, scoring='neg_mean_squared_error', n_jobs=-1) print(np.sqrt(-cv_scores.mean())) ``` ```python # adding city distance features X=pd.concat([train_df[dummies_names+['age_clipped']], X_train_scaled], axis=1, ignore_index = True) cv_scores = cross_val_score(model, X, y, cv=kf, scoring='neg_mean_squared_error', n_jobs=-1) print(np.sqrt(-cv_scores.mean())) ``` Up to this moment we have got best result using numerical features + their logs + age_clipped+ dummy variables + distances to the largest cities. Let's try to generate new features ## 9. Creation of new features and description of this process Previously we have already created and explained the rational of new features creation. Now Let's generate additional ones City distances features work, but maybe there are also some non-linear dependencies between them and the target variables. 
```python sns.set() sns.pairplot(train_df[['distance_to_SF','distance_to_LA','median_house_value_log']]) ``` Visually is not obvious so let's try to create a couple of new variables and check: ```python new_features_train_df=pd.DataFrame(index=train_df.index) new_features_test_df=pd.DataFrame(index=test_df.index) new_features_train_df['1/distance_to_SF']=1/(train_df['distance_to_SF']+0.001) new_features_train_df['1/distance_to_LA']=1/(train_df['distance_to_LA']+0.001) new_features_train_df['log_distance_to_SF']=np.log1p(train_df['distance_to_SF']) new_features_train_df['log_distance_to_LA']=np.log1p(train_df['distance_to_LA']) new_features_test_df['1/distance_to_SF']=1/(test_df['distance_to_SF']+0.001) new_features_test_df['1/distance_to_LA']=1/(test_df['distance_to_LA']+0.001) new_features_test_df['log_distance_to_SF']=np.log1p(test_df['distance_to_SF']) new_features_test_df['log_distance_to_LA']=np.log1p(test_df['distance_to_LA']) ``` We can also generate some features correlated to the prosperity: - rooms/person - how many rooms are there per person. The higher - the richer people are living there - the more expensive houses they buy - rooms/household - how many rooms are there per family. The similar one but corresponds to the number of rooms per family (assuming household~family), not per person. - two similar features but counting only bedrooms ```python new_features_train_df['rooms/person']=train_df['total_rooms']/train_df['population'] new_features_train_df['rooms/household']=train_df['total_rooms']/train_df['households'] new_features_test_df['rooms/person']=test_df['total_rooms']/test_df['population'] new_features_test_df['rooms/household']=test_df['total_rooms']/test_df['households'] new_features_train_df['bedrooms/person']=train_df['total_bedrooms']/train_df['population'] new_features_train_df['bedrooms/household']=train_df['total_bedrooms']/train_df['households'] new_features_test_df['bedrooms/person']=test_df['total_bedrooms']/test_df['population'] new_features_test_df['bedrooms/household']=test_df['total_bedrooms']/test_df['households'] ``` - the luxurity of house can be characterized buy number of bedrooms per rooms ```python new_features_train_df['bedroom/rooms']=train_df['total_bedrooms']/train_df['total_rooms'] new_features_test_df['bedroom/rooms']=test_df['total_bedrooms']/test_df['total_rooms'] ``` - the average number of persons in one household can be the signal of prosperity or the same time the signal of richness but in any case it can be a useful feature ```python new_features_train_df['average_size_of_household']=train_df['population']/train_df['households'] new_features_test_df['average_size_of_household']=test_df['population']/test_df['households'] ``` And finally let's scale all this features ```python new_features_train_df=pd.DataFrame(scaler.fit_transform(new_features_train_df), columns=new_features_train_df.columns, index=new_features_train_df.index) new_features_test_df=pd.DataFrame(scaler.transform(new_features_test_df), columns=new_features_test_df.columns, index=new_features_test_df.index) ``` ```python new_features_train_df.head() ``` ```python new_features_test_df.head() ``` We will add new features one by one and keeps only those that improve our best score ```python # computing current best score X=pd.concat([train_df[dummies_names+['age_clipped']], X_train_scaled], axis=1, ignore_index = True) cv_scores = cross_val_score(model, X, y, cv=kf, scoring='neg_mean_squared_error', n_jobs=-1) best_score = np.sqrt(-cv_scores.mean()) print("Best score: ", 
best_score) # list of the new good features new_features_list=[] for feature in new_features_train_df.columns: new_features_list.append(feature) X=pd.concat([train_df[dummies_names+['age_clipped']], X_train_scaled, new_features_train_df[new_features_list] ], axis=1, ignore_index = True) cv_scores = cross_val_score(model, X, y, cv=kf, scoring='neg_mean_squared_error', n_jobs=-1) score = np.sqrt(-cv_scores.mean()) if score >= best_score: new_features_list.remove(feature) print(feature, ' is not a good feature') else: print(feature, ' is a good feature') print('New best score: ', score) best_score=score ``` We have got 5 new good features. Let's update our X variable ```python X=pd.concat([train_df[dummies_names+['age_clipped']], X_train_scaled, new_features_train_df[new_features_list] ], axis=1).reset_index(drop=True) y=train_df['median_house_value'].reset_index(drop=True) ``` To deal with log of target we need to create our own cross validation or our own predicting model. We will try the first option ```python from sklearn.metrics import mean_squared_error def cross_val_score_with_log(model=model, X=X,y=y,kf=kf, use_log=False): X_temp=np.array(X) # if use_log parameter is true we will predict log(y+1) if use_log: y_temp=np.log1p(y) else: y_temp=np.array(y) cv_scores=[] for train_index, test_index in kf.split(X_temp,y_temp): prediction = model.fit(X_temp[train_index], y_temp[train_index]).predict(X_temp[test_index]) # if use_log parameter is true we should come back to the initial targer if use_log: prediction=np.expm1(prediction) cv_scores.append(-mean_squared_error(y[test_index],prediction)) return np.sqrt(-np.mean(cv_scores)) ``` ```python cross_val_score_with_log(X=X,y=y,kf=kf, use_log=False) ``` We have got exactly the same result as with cross_val_score function. That means everything work ok. Now let's try to set use_log to true ```python cross_val_score_with_log(X=X,y=y,kf=kf, use_log=True) ``` Unfortunately, it has not helped. So we will stick to the previous version. And now we will tune the only meaningful hyperparameter of the Ridge regression - alpha. ## 10. Plotting training and validation curves Let's plot Validation Curve ```python from sklearn.model_selection import validation_curve Cs=np.logspace(-5, 4, 10) train_scores, valid_scores = validation_curve(model, X, y, "alpha", Cs, cv=kf, scoring='neg_mean_squared_error') plt.plot(Cs, np.sqrt(-train_scores.mean(axis=1)), 'ro-') plt.fill_between(x=Cs, y1=np.sqrt(-train_scores.max(axis=1)), y2=np.sqrt(-train_scores.min(axis=1)), alpha=0.1, color = "red") plt.plot(Cs, np.sqrt(-valid_scores.mean(axis=1)), 'bo-') plt.fill_between(x=Cs, y1=np.sqrt(-valid_scores.max(axis=1)), y2=np.sqrt(-valid_scores.min(axis=1)), alpha=0.1, color = "blue") plt.xscale('log') plt.xlabel('alpha') plt.ylabel('RMSE') plt.title('Regularization Parameter Tuning') plt.show() ``` ```python Cs[np.sqrt(-valid_scores.mean(axis=1)).argmin()] ``` We can see that curves for train and CV are very close to each other, it is a sign of underfiting. The difference between the curves does not change along with change in alpha this mean that we should try more complex models comparing to linear regression or add more new features (f.e. polynomial ones) Using this curve we can find the optimal value of alpha. It is alpha=1. But actually our prediction does not change when alpha goes below 1. 
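As a quick aside on the "polynomial features" idea mentioned above, one cheap way to test it would be something like the sketch below. It is an editorial illustration and is not used elsewhere in this project; the degree and the decision to expand all current columns are arbitrary choices.

```python
from sklearn.preprocessing import PolynomialFeatures

# expand the current feature matrix into degree-2 polynomial terms and re-run the same CV
poly = PolynomialFeatures(degree=2, include_bias=False)
X_poly = pd.DataFrame(poly.fit_transform(X), index=X.index)

cv_scores_poly = cross_val_score(Ridge(alpha=1), X_poly, y, cv=kf,
                                 scoring='neg_mean_squared_error', n_jobs=-1)
print(np.sqrt(-cv_scores_poly.mean()))
```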
Let's use alpha=1 and plot the learning curve ```python from sklearn.model_selection import learning_curve model=Ridge(alpha=1.0) train_sizes, train_scores, valid_scores = learning_curve(model, X, y, train_sizes=list(range(50,10001,100)), scoring='neg_mean_squared_error', cv=5) plt.plot(train_sizes, np.sqrt(-train_scores.mean(axis=1)), 'ro-') plt.fill_between(x=train_sizes, y1=np.sqrt(-train_scores.max(axis=1)), y2=np.sqrt(-train_scores.min(axis=1)), alpha=0.1, color = "red") plt.plot(train_sizes, np.sqrt(-valid_scores.mean(axis=1)), 'bo-') plt.fill_between(x=train_sizes, y1=np.sqrt(-valid_scores.max(axis=1)), y2=np.sqrt(-valid_scores.min(axis=1)), alpha=0.1, color = "blue") plt.xlabel('Train size') plt.ylabel('RMSE') plt.title('Regularization Parameter Tuning') plt.show() ``` Learning curves indicate high bias of the model - this means we will not improve our model by adding more data, but we can try to use more complex models or add more features to improve the results. This result is inline with the validation curve results. So let's move on to the more complex models. ### Random forest Actually we can just put all our features into the model but we can easily improve computational performance of the tree-based models, by deleting all monotonous derivatives of features because they does not help at all. For example, adding log(feature) don't help tree-based model, it will just make it more computationally intensive. So let's train random forest classifier based on shorten set of the features ```python X.columns ``` ```python features_for_trees=['INLAND', 'ISLAND', 'NEAR BAY', 'NEAR OCEAN', 'age_clipped', 'longitude', 'latitude', 'housing_median_age', 'total_rooms', 'total_bedrooms', 'population', 'households', 'median_income', 'distance_to_SF', 'distance_to_LA','bedroom/rooms'] ``` ```python %%time from sklearn.ensemble import RandomForestRegressor X_trees=X[features_for_trees] model_rf=RandomForestRegressor(n_estimators=100, random_state=17) cv_scores = cross_val_score(model_rf, X_trees, y, cv=kf, scoring='neg_mean_squared_error', n_jobs=-1) print(np.sqrt(-cv_scores.mean())) ``` We can see significant improvement, comparing to the linear model and higher n_estimator probably will help. But first, let's try to tune other hyperparametres: ```python from sklearn.model_selection import GridSearchCV param_grid={'n_estimators': [100], 'max_depth': [22, 23, 24, 25], 'max_features': [5,6,7,8]} gs=GridSearchCV(model_rf, param_grid, scoring='neg_mean_squared_error', fit_params=None, n_jobs=-1, cv=kf, verbose=1) gs.fit(X_trees,y) ``` ```python print(np.sqrt(-gs.best_score_)) ``` ```python gs.best_params_ ``` ```python best_depth=gs.best_params_['max_depth'] best_features=gs.best_params_['max_features'] ``` ```python %%time model_rf=RandomForestRegressor(n_estimators=100, max_depth=best_depth, max_features=best_features, random_state=17) cv_scores = cross_val_score(model_rf, X_trees, y, cv=kf, scoring='neg_mean_squared_error', n_jobs=-1) print(np.sqrt(-cv_scores.mean())) ``` With the relatively small effort we have got a significant improvement of results. Random Forest results can be further improved by higher n_estimators, let's find the n_estimators at witch the results stabilize. 
```python model_rf=RandomForestRegressor(n_estimators=200, max_depth=best_depth, max_features=best_features, random_state=17) Cs=list(range(20,201,20)) train_scores, valid_scores = validation_curve(model_rf, X_trees, y, "n_estimators", Cs, cv=kf, scoring='neg_mean_squared_error') plt.plot(Cs, np.sqrt(-train_scores.mean(axis=1)), 'ro-') plt.fill_between(x=Cs, y1=np.sqrt(-train_scores.max(axis=1)), y2=np.sqrt(-train_scores.min(axis=1)), alpha=0.1, color = "red") plt.plot(Cs, np.sqrt(-valid_scores.mean(axis=1)), 'bo-') plt.fill_between(x=Cs, y1=np.sqrt(-valid_scores.max(axis=1)), y2=np.sqrt(-valid_scores.min(axis=1)), alpha=0.1, color = "blue") plt.xlabel('n_estimators') plt.ylabel('RMSE') plt.title('Regularization Parameter Tuning') plt.show() ``` This time we can see that the results of train is much better than CV, but it is totally ok for the Random Forest. Higher value of n_estimators (>100) does not help much. Let's stick to the n_estimators=200 - it is high enough but not very computationally intensive. ### Gradient boosting And finally we will try to use LightGBM to solve our problem. We will try the model out of the box, and then tune some of its parameters using random search ```python # uncomment to install if you have not yet #!pip install lightgbm ``` ```python %%time from lightgbm.sklearn import LGBMRegressor model_gb=LGBMRegressor() cv_scores = cross_val_score(model_gb, X_trees, y, cv=kf, scoring='neg_mean_squared_error', n_jobs=1) print(np.sqrt(-cv_scores.mean())) ``` LGBMRegressor has much more hyperparameters than previous models. As far as this is educational problem we will not spend a lot of time to tuning all of them. In this case RandomizedSearchCV can give us very good result quite fast, much faster than GridSearch. We will do optimization in 2 steps: model complexity optimization and convergence optimization. Let's do it. ```python gs ``` ```python # model complexity optimization from sklearn.model_selection import RandomizedSearchCV from scipy.stats import randint, uniform param_grid={'max_depth': randint(6,11), 'num_leaves': randint(7,127), 'reg_lambda': np.logspace(-3,0,100), 'random_state': [17]} gs=RandomizedSearchCV(model_gb, param_grid, n_iter = 50, scoring='neg_mean_squared_error', fit_params=None, n_jobs=-1, cv=kf, verbose=1, random_state=17) gs.fit(X_trees,y) ``` ```python np.sqrt(-gs.best_score_) ``` ```python gs.best_params_ ``` Let's fix n_estimators=500, it is big enough but is not to computationally intensive yet, and find the best value of the learning_rate ```python # model convergency optimization param_grid={'n_estimators': [500], 'learning_rate': np.logspace(-4, 0, 100), 'max_depth': [10], 'num_leaves': [72], 'reg_lambda': [0.0010722672220103231], 'random_state': [17]} gs=RandomizedSearchCV(model_gb, param_grid, n_iter = 20, scoring='neg_mean_squared_error', fit_params=None, n_jobs=-1, cv=kf, verbose=1, random_state=17) gs.fit(X_trees,y) ``` ```python np.sqrt(-gs.best_score_) ``` ```python gs.best_params_ ``` We have got the best params for the gradient boosting and will use them for the final prediction. ## 11. Prediction for test or hold-out samples Lets sum up the results of our project. We will compute RMSE on cross validation and holdout set and compare them. 
```python
results_df = pd.DataFrame(columns=['model', 'CV_results', 'holdout_results'])
```

```python
# hold-out features and target
X_ho = pd.concat([test_df[dummies_names + ['age_clipped']],
                  X_test_scaled,
                  new_features_test_df[new_features_list]], axis=1).reset_index(drop=True)
y_ho = test_df['median_house_value'].reset_index(drop=True)

X_trees_ho = X_ho[features_for_trees]
```

```python
%%time
# linear model
model = Ridge(alpha=1.0)

cv_scores = cross_val_score(model, X, y, cv=kf, scoring='neg_mean_squared_error', n_jobs=-1)
score_cv = np.sqrt(-np.mean(cv_scores.mean()))

prediction_ho = model.fit(X, y).predict(X_ho)
score_ho = np.sqrt(mean_squared_error(y_ho, prediction_ho))

results_df.loc[results_df.shape[0]] = ['Linear Regression', score_cv, score_ho]
```

```python
%%time
# Random Forest
model_rf = RandomForestRegressor(n_estimators=200, max_depth=23, max_features=5, random_state=17)

cv_scores = cross_val_score(model_rf, X_trees, y, cv=kf, scoring='neg_mean_squared_error', n_jobs=-1)
score_cv = np.sqrt(-np.mean(cv_scores.mean()))

prediction_ho = model_rf.fit(X_trees, y).predict(X_trees_ho)
score_ho = np.sqrt(mean_squared_error(y_ho, prediction_ho))

results_df.loc[results_df.shape[0]] = ['Random Forest', score_cv, score_ho]
```

```python
%%time
# Gradient boosting
model_gb = LGBMRegressor(reg_lambda=0.0010722672220103231, max_depth=10, n_estimators=500,
                         num_leaves=72, random_state=17, learning_rate=0.06734150657750829)

cv_scores = cross_val_score(model_gb, X_trees, y, cv=kf, scoring='neg_mean_squared_error', n_jobs=-1)
score_cv = np.sqrt(-np.mean(cv_scores.mean()))

prediction_ho = model_gb.fit(X_trees, y).predict(X_trees_ho)
score_ho = np.sqrt(mean_squared_error(y_ho, prediction_ho))

results_df.loc[results_df.shape[0]] = ['Gradient boosting', score_cv, score_ho]
```

```python
results_df
```

It seems we have done quite a good job. The cross-validation results are in line with the holdout ones. Our best CV model, gradient boosting, turned out to be the best on the hold-out dataset as well (and it is also faster than random forest).

## 12. Conclusions

To sum up, we have got a solution that can predict the median house value in a block with an RMSE of \$46k using our best model, LGB. It is not an extremely precise prediction: \$46k is about 20% of the average median house value, but it seems to be close to what is achievable with these classes of models on this data (it is a popular dataset, but I have not found any solution with significantly better results).

We have used old Californian data from 1990, so the predictions themselves are not useful right now. But the same approach can be used to predict modern house prices (if applied to recent market data).

We have done a lot, but the results can surely be improved. At least one could try:

- feature engineering: polynomial features, better distances to cities (not Euclidean ones, ellipse representation of cities), average values of the target for the geographically closest neighbours (requires a custom estimator function for correct cross-validation)
- PCA for dimensionality reduction (I have mentioned it but didn't use it; a small sketch is given after this list)
- other models (at least KNN and SVM could be tried on this data)
- more time and effort could be spent on RF and LGB parameter tuning
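As an illustration of the PCA idea from the list above, a minimal sketch is shown below. It is an editorial addition rather than part of the original solution; the choice of columns and the number of components are arbitrary.

```python
from sklearn.decomposition import PCA

# compress the highly correlated count features into two principal components
count_features = ['total_rooms', 'total_bedrooms', 'population', 'households']

pca = PCA(n_components=2)
train_pca = pca.fit_transform(X_train_scaled[count_features])
test_pca = pca.transform(X_test_scaled[count_features])

print(pca.explained_variance_ratio_)
```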
e8a004e169c8d6f20ec28d7701afdab140613644
63,185
ipynb
Jupyter Notebook
jupyter_english/projects_indiv/California_housing_value_prediction_Ilya_Larchenko.ipynb
salman394/AI-ml--course
2ed3a1382614dd00184e5179026623714ccc9e8c
[ "Unlicense" ]
null
null
null
jupyter_english/projects_indiv/California_housing_value_prediction_Ilya_Larchenko.ipynb
salman394/AI-ml--course
2ed3a1382614dd00184e5179026623714ccc9e8c
[ "Unlicense" ]
null
null
null
jupyter_english/projects_indiv/California_housing_value_prediction_Ilya_Larchenko.ipynb
salman394/AI-ml--course
2ed3a1382614dd00184e5179026623714ccc9e8c
[ "Unlicense" ]
null
null
null
32.319693
426
0.607866
true
9,922
Qwen/Qwen-72B
1. YES 2. YES
0.749087
0.760651
0.569794
__label__eng_Latn
0.955475
0.162151
# Second Law Efficiency A power plant receives two heat inputs, 25 kW at 825°C and 50 kW at 240°C, rejects heat to the environment at 20°C, and produces power of 12 kW. Calculate the second-law efficiency of the power plant. ```python from pint import UnitRegistry ureg = UnitRegistry() Q_ = ureg.Quantity ``` The second law efficiency is $$ \eta_2 = \frac{\text{exergy of desired output}}{\text{exergy supplied}} \\ \eta_2 = \frac{\dot{W}}{\dot{X}_{Q_{HT}} + \dot{X}_{Q_{MT}}} \;, $$ where the exergy input due to heat transfer is \begin{equation} \dot{X}_{Q_i} = \dot{Q}_i \left( 1 - \frac{T_0}{T_i} \right) \end{equation} ```python heat_in_hot = Q_(25, 'kW') temp_hot = Q_(825, 'degC').to('K') heat_in_med = Q_(50, 'kW') temp_med = Q_(240, 'degC').to('K') temp_out = Q_(20, 'degC').to('K') work_out = Q_(12, 'kW') ``` ```python exergy_heat_hot = heat_in_hot * (1.0 - (temp_out / temp_hot)) exergy_heat_med = heat_in_med * (1.0 - (temp_out / temp_med)) eta_2 = work_out / (exergy_heat_hot + exergy_heat_med) print(f'second law efficiency: {100*eta_2.magnitude: .1f}%') ``` second law efficiency: 30.2% Let's compare to the first-law efficiency: $$ \eta = \frac{\dot{W}}{\dot{Q}_{HT} + \dot{Q}_{MT}} $$ ```python eta = work_out / (heat_in_hot + heat_in_med) print(f'first law efficiency: {100*eta.magnitude: .1f}%') ``` first law efficiency: 16.0%
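As a quick cross-check (an editorial addition using only the quantities defined above), the highest first-law efficiency a fully reversible plant could achieve with these heat inputs is the total exergy input divided by the total heat input, and the second-law efficiency is just the actual first-law efficiency divided by that maximum:

```python
# maximum (reversible) first-law efficiency for these heat inputs
eta_max = (exergy_heat_hot + exergy_heat_med) / (heat_in_hot + heat_in_med)
print(f'maximum first-law efficiency: {100*eta_max.magnitude: .1f}%')

# consistency check: this ratio should reproduce the second-law efficiency above
print(f'eta / eta_max: {100*(eta/eta_max).magnitude: .1f}%')
```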
b1ee773257d5c44a7a7fb73ffebf043b5e55b521
3,002
ipynb
Jupyter Notebook
book/content/exergy/second-law-efficiency.ipynb
kyleniemeyer/computational-thermo
3f0d1d4a6d4247ac3bf3b74867411f2090c70cbd
[ "CC-BY-4.0", "BSD-3-Clause" ]
13
2020-04-01T05:52:06.000Z
2022-03-27T20:25:59.000Z
book/content/exergy/second-law-efficiency.ipynb
kyleniemeyer/computational-thermo
3f0d1d4a6d4247ac3bf3b74867411f2090c70cbd
[ "CC-BY-4.0", "BSD-3-Clause" ]
1
2020-04-28T04:02:05.000Z
2020-04-29T17:49:52.000Z
book/content/exergy/second-law-efficiency.ipynb
kyleniemeyer/computational-thermo
3f0d1d4a6d4247ac3bf3b74867411f2090c70cbd
[ "CC-BY-4.0", "BSD-3-Clause" ]
6
2020-04-03T14:52:24.000Z
2022-03-29T02:29:43.000Z
22.916031
206
0.509327
true
474
Qwen/Qwen-72B
1. YES 2. YES
0.937211
0.817574
0.76624
__label__eng_Latn
0.8543
0.618564
# Case study 1: Diffusion of fluid pressure and seismicity below Mt. Hood We will apply our new transient model to study the relation between fluid pressure and seismicity in the crust below an active volcano, Mt. Hood in Oregon, USA. We will follow a publication by Saar and Manga (2003). The central claim of this paper is that there appears to be a correlation between seasonal recharge and watertable changes and seismicity in the upper crust. Saar and Manga suggest that this may be due to the effect of fluid pressure on seismicity. Groundwater recharge increases the watertable, which in turn increases pore fluid pressure in the crust. High fluid pressure means a lower effective normal stress on fault planes, which makes faults more likely to fail and generate seismic activity. We will use our 1D model to model the seasonal change in fluid pressure in the crust, and will try to quantify the magnitude of changes at a depth of 4500 m, where seismic activity is thought to originate. ## Workflow 1. Experiment with the timestep size to get a model that is stable 2. Model the effect of an instantaneous increase in recharge and record how long it takes before it reaches the depth where seismic activity occurs (4500 m below the surface) 3. Adjust hydraulic conductivity and storativity to find the optimal values that correspond to the phase shift between the recharge events and the spike in seimsic activity (151 days) 4. Implement a periodic boundary condition to simulate seasonal recharge and calculate the pore-pressure change at depth due to seismic activity ## 1. Keeping the model stable The solution that we are using here is a so-called explicit finite difference solution. This is, next to the steady-state solution, the easiest and most intuitive approach to solving partial differential equations and simulating physical processes. However, explicit solutions have one drawback, the solution can become numerically unstable at large timestep sizes ($\Delta t$). We can test this experimentally. Increase the value of the timestep `dt` and run your model again. Keep on increasing the timestep until you get weird results. Record the timestep at which this occurs. Bonus points for the most artistic figure of unstable model results. The timestep value at which these solutions become numerically unstable is predictable and has been quantified by three mathematicians from Göttingen, Courant, Friedrichs and Lewy in 1928. The stability condition is therefore also called the CFL or Courant number, and follows: \begin{equation} CFL = \dfrac{q \Delta t}{\Delta x} \end{equation} The numerical solution becomes unstable for values of CFL that exceed 1. **Assignment 1: Run the model several times and adjust the timestep size until the model becomes unstable. Choose a timestep size that is still stable, but large enough for the model to finish relatively fast. What timestep did you choose? Make a figure of an unstable model result.** ## 2. Model the effects of an instantaneous recharge event We will start by setting up our numerical model and simulating the effect of an instantaneous increase in hydraulic head at the surface. The seasonal recharge at Mt. Hood is approximately equal to a layer of water of 1.5 m. The volcanic rocks at the surface have a specific yield of approximately 0.15. This means that a recharge of 1.5 m will increase the watertable by 1.5/0.15 = 10 m. 
We will try to implement an instantaneous change in hydraulic head at the surface by adjusting the top boundary condition *after* running the steady-state solution first. Add a line where you change the value of the hydraulic head at the surface to 10 m. Note that the variable controlling hydraulic head is an array instead of a single number like in exercise 2. This is because we want to make a boundary condition that varies over time later on. **Assignment 2 Model the effects of an instantaneous increase in hydraulic head. How long does it take for the change to reach depths of 4500 m, where seismic activity occurs?** ## 3. Adjusting storativity and hydraulic conductivity Next we will try to make our model more realistic. Instead of a single number for storativity ``S`` calculate the storativity using the specific storage equation given in your lecture handout (lecture 7). Note that we do not directly model the change in watertable. All the nodes in our model are assumed to be located below the watertable, and therefore we can ignore specific yield and assume that storativity is equal to the specific storage. **Assignment 3: Implement the full equation for storativity in the model code and replace the line that currently assigns a value to ``S``. Use a reasonable value for fluid density, fluid compressibility and the compressibility of the rock matrix. Describe which parameter values you use and which value of storativity this results in. (see equation 16 and the parameter values mentioned below this equation. Use 1000 kg/m3 as density).** The observed peak in seismicity underneath Mt. Hood shows a delay of approximately 151 days when compared to streamflow peaks at this location. Streamflow can be regarded as an indicator of seasonal groundwater recharge by snowmelt. The offset of peak seismicity may be an indicator of the time it takes for the fluid pressure increase at the surface to affect fluid pressures at 4500 m depth. **Assignment 4: Calibrate your model by adjusting ``K`` until the observed delay in increase in hydraulic head at 4500 m compared to the surface equals the observed delay in peak seismicity of ~151 days. Make figures of the model results. Which parameter value for K did you use for the final calibrated model?** Note that changing K and/or S may make the model unstable again in some cases. This can be fixed by decreasing the timestep size. ## 4. Adding periodic boundary conditions So far we have modeled the effect of an instantaneous groundwater recharge event. A more realistic way to model the effects of seasonal recharge is to include a periodic change in hydraulic head at the top boundary. Saar and Manga (2003) approximate this by using a cosine function (see equation 8). Uncomment the line where a periodic boundary condition is assigned in this notebook (in the transient parameters box below) and complete the line. Add a new box below, copy-paste the line to calculate the boundary condition and inspect the variable by typing ``print(h0)`` in the next line. Or ``print(h0[:100])`` if you want to inspect the first 100 values for instance. Make sure that you are modelling an increase from 0 to 10 m by checking the min and max values of ``h0`` by typing ``h0.min()`` and ``h0.max()``. **Assignment 5 Run the model with the new boundary condition and make a figure of the result. What is the seasonal change in hydraulic head at a depth of 4500 m? What is the corresponding change in pore pressure (Pa)? 
Is this difference high or low compared to the difference between hydrostatic and lithostatic pressure at these depths? (note, see lecture handouts for lecture on compaction, or ask the Dutch guy in front of the classroom)** # The actual model code: ## import python modules ```python import matplotlib %matplotlib inline import numpy as np import matplotlib.pyplot as pl ``` ## Function to calculate steady-state hydraulic head ```python def run_steady_state_model(x, dx, K, u0, W, n_iter=20000): C = (W * dx**2) / K u_new = np.ones_like(x) * u0 u_old = np.ones_like(x) * u0 # iterative solution steady-state h for t in range(n_iter): # make sure you indent anything below a for loop # set bnd conditions # left bnd: u_new[0] = u0 # right bnd: u_new[-1] = 0.5 * (C[-1] + u_old[-1] + u_old[-2]) # middle nodes: u_new[1:-1] = 0.5 * (C[1:-1] + u_old[2:] + u_old[:-2]) u_old = u_new.copy() return u_new ``` ## Model parameters ```python day = 24.0 * 60.0 * 60.0 year = 365 * 24 * 60 * 60.0 L = 10000.0 dx = 100.0 # top boundary condition for the initial steady-state model h0 = 0 # hydraulic conductivity K = 1e-6 ``` ## Set up arrays ```python # calculate depth of each node z = np.arange(0, L + dx, dx) # source term, we keep this at 0 for this exercise: W = np.zeros_like(z) ``` ## Parameters for transient model runs ```python # specify storage coefficient # for unconfined groundwater flow this is the specific yield # for confined this is the specific storage porosity = 0.1 density = 1000.0 g = 9.81 S = 1e-4 # formulate storativity using the compressibilities of water and the rock matrix # uncomment and complete the following lines # compressibility_water = .... # compressibility_rocks = .... # fluid_density = 1000.0 # g = 9.81 # S = ... a function of porosity, compressibility_water, compressibility_rocks, g, density # timestep size dt = 5.0 * day #total duration duration = 2.0 * year # calculate total number of timesteps n_timesteps = int(duration / dt) ``` ## run steady-state model We will first run the steady state model. The steady state value of h will be used as an initial value for the transient model runs ```python h_steady = run_steady_state_model(z, dx, K, h0, W) ``` ## Set up parameters for the transient model ```python # set up array that records timesteps: time = np.arange(n_timesteps) * dt # define array to store flux and the variable over time: n_nodes = len(z) h = np.zeros((n_timesteps, n_nodes)) # set the steady-state value of u as value for first timestep h[0] = h_steady # increase the hydraulic head in the upper node: h0 = np.zeros(n_timesteps) # uncomment the following line to change the hydraulic head at the top # boundary for the transient model runs: h0[:] = 10 # uncomment the next lines to add a periodic boundary condition #period = 1 * year #amplitude = 10.0 #h0 = ... add a variation of equation 8 in the Saar and Manga (2003) paper here, in python language. # use the variables time, period, amplitude # use np.pi for the number pi, np.cos(x) for the cosine of a variable x (replace x with your own desired function or number), # and use np.sin() for sine.
``` ## run the transient model: ```python for j in range(1, n_timesteps): # calculate the flux between nodes q = -K * (h[j-1, 1:] - h[j-1, :-1]) / dx # set specified variable value at the left-hand node: h[j, 0] = h0[j] # implement no-flow boundary condition at right-hand side: q_right = 0.0 h[j, -1] = h[j-1, -1] + (dt/S) * (-(q_right - q[-1])/dx) + (dt / S) * W[-1] # update nodes in the middle: h[j, 1:-1] = h[j-1, 1:-1] + (dt/S)*(-(q[1:] - q[:-1])/dx) + (dt/S) * W[1:-1] # print results to screen each 100 timesteps if j / 1000 == j / 1000.0: print('time = ', ((j * dt) / year), ', min, max value of h = ', h[j].min(), h[j].max()) ``` time = 0.0136986301369863 , min, max value of h = 0.0 10.0 time = 0.0273972602739726 , min, max value of h = 0.0 10.0 time = 0.0410958904109589 , min, max value of h = 0.0 10.0 time = 0.0547945205479452 , min, max value of h = 0.0 10.0 time = 0.0684931506849315 , min, max value of h = 0.0 10.0 time = 0.0821917808219178 , min, max value of h = 0.0 10.0 time = 0.0958904109589041 , min, max value of h = 0.0 10.0 time = 0.1095890410958904 , min, max value of h = 0.0 10.0 time = 0.1232876712328767 , min, max value of h = 0.0 10.0 time = 0.136986301369863 , min, max value of h = 0.0 10.0 time = 0.1506849315068493 , min, max value of h = 0.0 10.0 time = 0.1643835616438356 , min, max value of h = 0.0 10.0 time = 0.1780821917808219 , min, max value of h = 0.0 10.0 time = 0.1917808219178082 , min, max value of h = 0.0 10.0 time = 0.2054794520547945 , min, max value of h = 0.0 10.0 time = 0.2191780821917808 , min, max value of h = 0.0 10.0 time = 0.2328767123287671 , min, max value of h = 0.0 10.0 time = 0.2465753424657534 , min, max value of h = 0.0 10.0 time = 0.2602739726027397 , min, max value of h = 0.0 10.0 time = 0.273972602739726 , min, max value of h = 0.0 10.0 time = 0.2876712328767123 , min, max value of h = 0.0 10.0 time = 0.3013698630136986 , min, max value of h = 0.0 10.0 time = 0.3150684931506849 , min, max value of h = 0.0 10.0 time = 0.3287671232876712 , min, max value of h = 0.0 10.0 time = 0.3424657534246575 , min, max value of h = 0.0 10.0 time = 0.3561643835616438 , min, max value of h = 0.0 10.0 time = 0.3698630136986301 , min, max value of h = 0.0 10.0 time = 0.3835616438356164 , min, max value of h = 0.0 10.0 time = 0.3972602739726027 , min, max value of h = 0.0 10.0 time = 0.410958904109589 , min, max value of h = 0.0 10.0 time = 0.4246575342465753 , min, max value of h = 0.0 10.0 time = 0.4383561643835616 , min, max value of h = 0.0 10.0 time = 0.4520547945205479 , min, max value of h = 0.0 10.0 time = 0.4657534246575342 , min, max value of h = 0.0 10.0 time = 0.4794520547945205 , min, max value of h = 0.0 10.0 time = 0.4931506849315068 , min, max value of h = 0.0 10.0 time = 0.5068493150684932 , min, max value of h = 0.0 10.0 time = 0.5205479452054794 , min, max value of h = 0.0 10.0 time = 0.5342465753424658 , min, max value of h = 0.0 10.0 time = 0.547945205479452 , min, max value of h = 0.0 10.0 time = 0.5616438356164384 , min, max value of h = 0.0 10.0 time = 0.5753424657534246 , min, max value of h = 0.0 10.0 time = 0.589041095890411 , min, max value of h = 0.0 10.0 time = 0.6027397260273972 , min, max value of h = 0.0 10.0 time = 0.6164383561643836 , min, max value of h = 0.0 10.0 time = 0.6301369863013698 , min, max value of h = 0.0 10.0 time = 0.6438356164383562 , min, max value of h = 0.0 10.0 time = 0.6575342465753424 , min, max value of h = 0.0 10.0 time = 0.6712328767123288 , min, max value of h = 0.0 10.0 time = 0.684931506849315 , 
min, max value of h = 0.0 10.0 time = 0.6986301369863014 , min, max value of h = 0.0 10.0 time = 0.7123287671232876 , min, max value of h = 0.0 10.0 time = 0.726027397260274 , min, max value of h = 0.0 10.0 time = 0.7397260273972602 , min, max value of h = 0.0 10.0 time = 0.7534246575342466 , min, max value of h = 0.0 10.0 time = 0.7671232876712328 , min, max value of h = 0.0 10.0 time = 0.7808219178082192 , min, max value of h = 0.0 10.0 time = 0.7945205479452054 , min, max value of h = 0.0 10.0 time = 0.8082191780821918 , min, max value of h = 0.0 10.0 time = 0.821917808219178 , min, max value of h = 0.0 10.0 time = 0.8356164383561644 , min, max value of h = 0.0 10.0 time = 0.8493150684931506 , min, max value of h = 0.0 10.0 time = 0.863013698630137 , min, max value of h = 0.0 10.0 time = 0.8767123287671232 , min, max value of h = 0.0 10.0 time = 0.8904109589041096 , min, max value of h = 0.0 10.0 time = 0.9041095890410958 , min, max value of h = 0.0 10.0 time = 0.9178082191780822 , min, max value of h = 0.0 10.0 time = 0.9315068493150684 , min, max value of h = 0.0 10.0 time = 0.9452054794520548 , min, max value of h = 0.0 10.0 time = 0.958904109589041 , min, max value of h = 0.0 10.0 time = 0.9726027397260274 , min, max value of h = 0.0 10.0 time = 0.9863013698630136 , min, max value of h = 0.0 10.0 time = 1.0 , min, max value of h = 0.0 10.0 time = 1.0136986301369864 , min, max value of h = 0.0 10.0 time = 1.0273972602739727 , min, max value of h = 0.0 10.0 time = 1.0410958904109588 , min, max value of h = 0.0 10.0 time = 1.0547945205479452 , min, max value of h = 0.0 10.0 time = 1.0684931506849316 , min, max value of h = 0.0 10.0 time = 1.082191780821918 , min, max value of h = 0.0 10.0 time = 1.095890410958904 , min, max value of h = 0.0 10.0 time = 1.1095890410958904 , min, max value of h = 0.0 10.0 time = 1.1232876712328768 , min, max value of h = 0.0 10.0 time = 1.1369863013698631 , min, max value of h = 0.0 10.0 time = 1.1506849315068493 , min, max value of h = 0.0 10.0 time = 1.1643835616438356 , min, max value of h = 0.0 10.0 time = 1.178082191780822 , min, max value of h = 0.0 10.0 time = 1.1917808219178083 , min, max value of h = 0.0 10.0 time = 1.2054794520547945 , min, max value of h = 0.0 10.0 time = 1.2191780821917808 , min, max value of h = 0.0 10.0 time = 1.2328767123287672 , min, max value of h = 0.0 10.0 time = 1.2465753424657535 , min, max value of h = 0.0 10.0 time = 1.2602739726027397 , min, max value of h = 0.0 10.0 time = 1.273972602739726 , min, max value of h = 0.0 10.0 time = 1.2876712328767124 , min, max value of h = 0.0 10.0 time = 1.3013698630136987 , min, max value of h = 0.0 10.0 time = 1.3150684931506849 , min, max value of h = 0.0 10.0 time = 1.3287671232876712 , min, max value of h = 0.0 10.0 time = 1.3424657534246576 , min, max value of h = 0.0 10.0 time = 1.356164383561644 , min, max value of h = 0.0 10.0 time = 1.36986301369863 , min, max value of h = 0.0 10.0 time = 1.3835616438356164 , min, max value of h = 3.5348800511017475e-36 10.0 time = 1.3972602739726028 , min, max value of h = 5.313631692816152e-35 10.0 time = 1.4109589041095891 , min, max value of h = 4.702562351399871e-34 10.0 time = 1.4246575342465753 , min, max value of h = 3.0863830579969544e-33 10.0 time = 1.4383561643835616 , min, max value of h = 1.6559108715242797e-32 10.0 time = 1.452054794520548 , min, max value of h = 7.640322532973994e-32 10.0 time = 1.4657534246575343 , min, max value of h = 3.1291528655245264e-31 10.0 time = 1.4794520547945205 , min, max value of h = 
1.162072933417117e-30 10.0 time = 1.4931506849315068 , min, max value of h = 3.973726770181997e-30 10.0 time = 1.5068493150684932 , min, max value of h = 1.2656787237550293e-29 10.0 time = 1.5205479452054795 , min, max value of h = 3.788787813944579e-29 10.0 time = 1.5342465753424657 , min, max value of h = 1.0735736265308414e-28 10.0 time = 1.547945205479452 , min, max value of h = 2.896332787766082e-28 10.0 time = 1.5616438356164384 , min, max value of h = 7.475654216806541e-28 10.0 time = 1.5753424657534247 , min, max value of h = 1.8535377639074024e-27 10.0 time = 1.5890410958904109 , min, max value of h = 4.4300831529537575e-27 10.0 time = 1.6027397260273972 , min, max value of h = 1.0237148300631569e-26 10.0 time = 1.6164383561643836 , min, max value of h = 2.2931490763115765e-26 10.0 time = 1.63013698630137 , min, max value of h = 4.990710683381966e-26 10.0 time = 1.643835616438356 , min, max value of h = 1.057412582053079e-25 10.0 time = 1.6575342465753424 , min, max value of h = 2.185026456811923e-25 10.0 time = 1.6712328767123288 , min, max value of h = 4.41057306624575e-25 10.0 time = 1.6849315068493151 , min, max value of h = 8.709337287331064e-25 10.0 time = 1.6986301369863013 , min, max value of h = 1.684581880519881e-24 10.0 time = 1.7123287671232876 , min, max value of h = 3.1954260163651656e-24 10.0 time = 1.726027397260274 , min, max value of h = 5.9506167157784476e-24 10.0 time = 1.7397260273972603 , min, max value of h = 1.0889766431303269e-23 10.0 time = 1.7534246575342465 , min, max value of h = 1.9601528822971494e-23 10.0 time = 1.7671232876712328 , min, max value of h = 3.4732511286285386e-23 10.0 time = 1.7808219178082192 , min, max value of h = 6.063026304830206e-23 10.0 time = 1.7945205479452055 , min, max value of h = 1.0434147870082653e-22 10.0 time = 1.8082191780821917 , min, max value of h = 1.7714286877602342e-22 10.0 time = 1.821917808219178 , min, max value of h = 2.9686177050970946e-22 10.0 time = 1.8356164383561644 , min, max value of h = 4.913549945320053e-22 10.0 time = 1.8493150684931507 , min, max value of h = 8.036687946572415e-22 10.0 time = 1.8630136986301369 , min, max value of h = 1.299614763804055e-21 10.0 time = 1.8767123287671232 , min, max value of h = 2.0787847067469636e-21 10.0 time = 1.8904109589041096 , min, max value of h = 3.2904162194893586e-21 10.0 time = 1.904109589041096 , min, max value of h = 5.156040450766675e-21 10.0 time = 1.917808219178082 , min, max value of h = 8.00153297845471e-21 10.0 time = 1.9315068493150684 , min, max value of h = 1.2302096292176574e-20 10.0 time = 1.9452054794520548 , min, max value of h = 1.8744888194840886e-20 10.0 time = 1.9589041095890412 , min, max value of h = 2.8315548806160036e-20 10.0 time = 1.9726027397260273 , min, max value of h = 4.2416860891387943e-20 10.0 time = 1.9863013698630136 , min, max value of h = 6.303031500820405e-20 10.0 ## Some figures: ### A figure of h vs depth: ```python fig, panel = pl.subplots(1, 1) # show change in hydraulic head over time for j in range(0, n_timesteps, int(n_timesteps/20)): if j == 0: label = 't=0' else: label = None panel.plot(h[j], z, label=label) label = 't=%0.1f yr' % (duration/year) panel.plot(h[-1], z, color='black', lw=1.0, label=label) panel.legend(loc='upper left', fontsize='small') panel.set_ylabel('Depth (m)') panel.set_xlabel('Hydraulic head (m)') panel.set_ylim(L, 0) fig.savefig('simulated_h_vs_depth.png') ``` ### A figure of h over time for a specific depth ```python # add the depths that you would like to show in the figures here (m) 
target_depths = np.array([4500]) # add colors for the different depths target_colors = ['blue', 'orange'] # look up the node for the different target depths target_nodes = (target_depths / dx).astype(int) fig, panel = pl.subplots(1, 1) for target_depth, target_node, color in zip(target_depths, target_nodes, target_colors): label = '-%0.0f m' % target_depth panel.plot(time/year, h[:, target_node], color=color, label=label) panel.legend() panel.set_xlabel('Time (years)') panel.set_ylabel('Hydraulic head (m)') fig.savefig('simulated_h_over_time.png') ``` # References Courant, R, K Friedrichs, and H Lewy. 1928. “Über Die Partiellen Differenzengleichungen Der Mathematischen Physik.” Mathematische Annalen 100 (1): 32–74. Saar, M. O. & Manga, M. Seismicity induced by seasonal groundwater recharge at Mt. Hood, Oregon. Earth Planet. Sci. Lett. 214, 605–618 (2003). Note, you can find these publications using google scholar: https://scholar.google.com ```python ```
ba6579817e6bde161a293f35d135a67b1f146fab
70,472
ipynb
Jupyter Notebook
exercises/exercise_3_transient_flow/.ipynb_checkpoints/exercise_3a_pore_pressure_diffusion_and_seismicity-checkpoint.ipynb
ElcoLuijendijk/fluids_in_the_crust
c2cadb0a91e9f9ed62094ac5e796168fef0d1a3e
[ "CC-BY-4.0" ]
2
2021-01-12T19:08:16.000Z
2021-01-13T14:27:42.000Z
exercises/exercise_3_transient_flow/.ipynb_checkpoints/exercise_3a_pore_pressure_diffusion_and_seismicity-checkpoint.ipynb
ElcoLuijendijk/fluids_in_the_crust
c2cadb0a91e9f9ed62094ac5e796168fef0d1a3e
[ "CC-BY-4.0" ]
null
null
null
exercises/exercise_3_transient_flow/.ipynb_checkpoints/exercise_3a_pore_pressure_diffusion_and_seismicity-checkpoint.ipynb
ElcoLuijendijk/fluids_in_the_crust
c2cadb0a91e9f9ed62094ac5e796168fef0d1a3e
[ "CC-BY-4.0" ]
null
null
null
111.860317
26,784
0.818297
true
7,940
Qwen/Qwen-72B
1. YES 2. YES
0.934395
0.812867
0.759539
__label__eng_Latn
0.976158
0.602996
# Chi-Squared Distribution *** ## Definition >The Chi-Squared distribution is a continuous probability distribution focused on sample standard deviations and can (e.g.) "let you know whether two groups have significantly different opinions, which makes it a very useful statistic for survey research" $ ^{[1]}$. ## Formula The probability density function of a Chi-Squared distributed random variable is defined as: $$ \begin{equation} f(x|k) = \begin{cases} \frac{x^{\frac{k}{2}-1}e^{-\frac{x}{2}}}{2^{\frac{k}{2}}\Gamma(\frac{k}{2})}, & \text{if}\ x>0 \\ 0, & \text{otherwise} \end{cases} \end{equation} $$<br> where $$ \Gamma(n) = (n-1)! $$<br> and $k$ denotes the degrees of freedom. ```python # IMPORTS import numpy as np import scipy.stats as stats import matplotlib.pyplot as plt import matplotlib.style as style from IPython.core.display import HTML # PLOTTING CONFIG %matplotlib inline style.use('fivethirtyeight') plt.rcParams["figure.figsize"] = (14, 7) HTML(""" <style> .output_png { display: table-cell; text-align: center; vertical-align: center; } </style> """) plt.figure(dpi=100) # PDF plt.plot(np.linspace(0, 20, 100), stats.chi2.pdf(np.linspace(0, 20, 100), df=4) / np.max(stats.chi2.pdf(np.linspace(0, 20, 100), df=4)), ) plt.fill_between(np.linspace(0, 20, 100), stats.chi2.pdf(np.linspace(0, 20, 100), df=4) / np.max(stats.chi2.pdf(np.linspace(0, 20, 100), df=4)), alpha=.15, ) # CDF plt.plot(np.linspace(0, 20, 100), stats.chi2.cdf(np.linspace(0, 20, 100), df=4), ) # LEGEND plt.xticks(np.arange(0, 21, 2)) plt.text(x=11, y=.25, s="pdf (normed)", alpha=.75, weight="bold", color="#008fd5") plt.text(x=11, y=.85, s="cdf", alpha=.75, weight="bold", color="#fc4f30") # TICKS plt.xticks(np.arange(0, 21, 2)) plt.tick_params(axis = 'both', which = 'major', labelsize = 18) plt.axhline(y = 0, color = 'black', linewidth = 1.3, alpha = .7) # TITLE, SUBTITLE & FOOTER plt.text(x = -2, y = 1.25, s = r"Chi-Squared $(\chi^{2})$ Distribution - Overview", fontsize = 26, weight = 'bold', alpha = .75) plt.text(x = -2, y = 1.1, s = 'Depicted below are the normed probability density function (pdf) and the cumulative density\nfunction (cdf) of a Chi-Squared distributed random variable $ y \sim \chi^{2}(k) $, given $k$=4.', fontsize = 19, alpha = .85) plt.text(x = -2,y = -0.2, s = 'Chi-Square', fontsize = 14, color = '#f0f0f0', backgroundcolor = 'grey'); ``` *** ## Parameters ```python # IMPORTS import numpy as np import scipy.stats as stats import matplotlib.pyplot as plt import matplotlib.style as style from IPython.core.display import HTML # PLOTTING CONFIG %matplotlib inline style.use('fivethirtyeight') plt.rcParams["figure.figsize"] = (14, 7) HTML(""" <style> .output_png { display: table-cell; text-align: center; vertical-align: center; } </style> """) plt.figure(dpi=100) # PDF k = 1 plt.plot(np.linspace(0, 15, 500), stats.chi2.pdf(np.linspace(0, 15, 500), df=1), ) plt.fill_between(np.linspace(0, 15, 500), stats.chi2.pdf(np.linspace(0, 15, 500), df=1), alpha=.15, ) # PDF k = 3 plt.plot(np.linspace(0, 15, 100), stats.chi2.pdf(np.linspace(0, 15, 100), df=3), ) plt.fill_between(np.linspace(0, 15, 100), stats.chi2.pdf(np.linspace(0, 15, 100), df=3), alpha=.15, ) # PDF k = 6 plt.plot(np.linspace(0, 15, 100), stats.chi2.pdf(np.linspace(0, 15, 100), df=6), ) plt.fill_between(np.linspace(0, 15, 100), stats.chi2.pdf(np.linspace(0, 15, 100), df=6), alpha=.15, ) # LEGEND plt.text(x=.5, y=.7, s="$ k = 1$", rotation=-65, alpha=.75, weight="bold", color="#008fd5") plt.text(x=1.5, y=.35, s="$ k = 3$", alpha=.75, weight="bold",
color="#fc4f30") plt.text(x=5, y=.2, s="$ k = 6$", alpha=.75, weight="bold", color="#e5ae38") # TICKS plt.tick_params(axis = 'both', which = 'major', labelsize = 18) plt.axhline(y = 0, color = 'black', linewidth = 1.3, alpha = .7) # TITLE, SUBTITLE & FOOTER plt.text(x = -1.5, y = 2.8, s = "Chi-Squared Distribution - $ k $", fontsize = 26, weight = 'bold', alpha = .75) plt.text(x = -1.5, y = 2.5, s = 'Depicted below are three Chi-Squared distributed random variables with varying $ k $. As one can\nsee the parameter $k$ smoothens the distribution and softens the skewness.', fontsize = 19, alpha = .85) plt.text(x = -1.5,y = -0.4, s = 'Chi-Square', fontsize = 14, color = '#f0f0f0', backgroundcolor = 'grey'); ``` *** ## Implementation in Python Multiple Python packages implement the Chi-Squared distribution. One of those is the stats.poisson module from the scipy package. The following methods are only an excerpt. For a full list of features the [official documentation](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.poisson.html) should be read. ### Random Variates In order to generate a random sample from, the function `rvs` should be used. ```python import numpy as np from scipy.stats import chi2 # draw a single sample np.random.seed(42) print(chi2.rvs(df=4), end="\n\n") # draw 10 samples print(chi2.rvs(df=4, size=10), end="\n\n") ``` 4.78735877974 [ 2.98892946 2.76456717 2.76460459 9.29942882 5.73341246 2.262156 4.93962895 3.99792053 0.43182989 1.34248457] ### Probability Density Function The probability mass function can be accessed via the `pdf` function. Like the rvs method, the pdf allows for adjusting the mean of the random variable: ```python from scipy.stats import chi2 # additional imports for plotting purpose import numpy as np import matplotlib.pyplot as plt %matplotlib inline plt.rcParams["figure.figsize"] = (14,7) # likelihood of x and y x = 1 y = 7 print("pdf(1) = {}\npdf(7) = {}".format(chi2.pdf(x=x, df=4), chi2.pdf(x=y, df=4))) # continuous pdf for the plot x_s = np.arange(15) y_s = chi2.pdf(x=x_s, df=4) plt.scatter(x_s, y_s, s=100); ``` ### Cumulative Probability Density Function¶ The cumulative probability density function is useful when a probability range has to be calculated. It can be accessed via the `cdf` function: ```python from scipy.stats import chi2 # probability of x less or equal 0.3 print("P(X <=3) = {}".format(chi2.cdf(x=3, df=4))) # probability of x in [-0.2, +0.2] print("P(2 < X <= 8) = {}".format(chi2.cdf(x=8, df=4) - chi2.cdf(x=2, df=4))) ``` P(X <=3) = 0.4421745996289252 P(2 < X <= 8) = 0.6441806878992138 *** ## Infering $k$ Given a sample of datapoints it is often required to estimate the "true" parameters of the distribution. In the case of the Chi-Squared distribution this estimation is quite simple. $k$ can be derived by calculating the mean of the sample. 
```python # IMPORTS import numpy as np import scipy.stats as stats import matplotlib.pyplot as plt import matplotlib.style as style from IPython.core.display import HTML # PLOTTING CONFIG %matplotlib inline style.use('fivethirtyeight') plt.rcParams["figure.figsize"] = (14, 7) HTML(""" <style> .output_png { display: table-cell; text-align: center; vertical-align: center; } </style> """) plt.figure(dpi=100) ##### COMPUTATION ##### # DECLARING THE "TRUE" PARAMETERS UNDERLYING THE SAMPLE k_real = 2 # DRAW A SAMPLE OF N=1000 np.random.seed(42) sample = stats.chi2.rvs(df=k_real, size=1000) # ESTIMATE K k_est = np.mean(sample) print("Estimated k: {}".format(k_est)) ##### PLOTTING ##### # SAMPLE DISTRIBUTION plt.hist(sample, bins=50, alpha=.25) # TRUE CURVE plt.plot(np.linspace(0, 18, 1000), stats.chi2.pdf(np.linspace(0, 18, 1000),df=k_real)) # ESTIMATED CURVE plt.plot(np.linspace(0, 18, 1000), stats.chi2.pdf(np.linspace(0, 18, 1000),df=k_est)) # LEGEND plt.text(x=.75, y=.1, s="sample", alpha=.75, weight="bold", color="#008fd5") plt.text(x=3, y=.15, s="true distrubtion", alpha=.75, weight="bold", color="#fc4f30") plt.text(x=1, y=.4, s="estimated distribution", alpha=.75, weight="bold", color="#e5ae38") # TICKS plt.xticks(range(0, 19)[::2]) plt.tick_params(axis = 'both', which = 'major', labelsize = 18) plt.axhline(y = 0.003, color = 'black', linewidth = 1.3, alpha = .7) # TITLE, SUBTITLE & FOOTER plt.text(x = -2, y = 0.675, s = "Chi-Squared Distribution - Parameter Estimation", fontsize = 26, weight = 'bold', alpha = .75) plt.text(x = -2, y = 0.6, s = 'Depicted below is the distribution of a sample (blue) drawn from a Chi-Squared distribution with \n$k=2$ (red). Also the estimated distrubution with $k \sim {:.3f} $ is shown (yellow).'.format(np.mean(sample)), fontsize = 19, alpha = .85) plt.text(x = -2,y = -0.075, s = 'Chi-Square', fontsize = 14, color = '#f0f0f0', backgroundcolor = 'grey'); ``` ## Infering $k$ - MCMC In addition to a "direct" inference, $k$ can also be estimated using Markov chain Monte Carlo simulation - implemented in Python's [PyMC3](https://github.com/pymc-devs/pymc3). 
```python # IMPORTS import numpy as np import pymc3 as pm import scipy.stats as stats import matplotlib.pyplot as plt import matplotlib.style as style from IPython.core.display import HTML # PLOTTING CONFIG %matplotlib inline style.use('fivethirtyeight') plt.rcParams["figure.figsize"] = (14, 7) HTML(""" <style> .output_png { display: table-cell; text-align: center; vertical-align: center; } </style> """) plt.figure(dpi=100) ##### COMPUTATION ##### # DECLARING THE "TRUE" PARAMETERS UNDERLYING THE SAMPLE k_real = 2 # DRAW A SAMPLE OF N=1000 np.random.seed(42) sample = stats.chi2.rvs(df=k_real, size=1000) ##### SIMULATION ##### # MODEL BUILDING with pm.Model() as model: k = pm.DiscreteUniform("k", lower=0, upper=np.mean(sample)*7) # mean + 3stds chi_2 = pm.ChiSquared("chi2", nu=k, observed=sample) # MODEL RUN with model: trace = pm.sample(50000) burned_trace = trace[45000:] # MU - 95% CONF INTERVAL ks = burned_trace["k"] k_est_95 = np.mean(ks) - 2*np.std(ks), np.mean(ks) + 2*np.std(ks) print("95% of sampled mus are between {} and {}".format(*k_est_95)) ##### PLOTTING ##### # SAMPLE DISTRIBUTION plt.hist(sample, bins=50,normed=True, alpha=.25) # TRUE CURVE plt.plot(np.linspace(0, 18, 1000), stats.chi2.pdf(np.linspace(0, 18, 1000),df=k_real), linestyle="--") # ESTIMATED CURVE plt.plot(np.linspace(0, 18, 1000), stats.chi2.pdf(np.linspace(0, 18, 1000),df=np.mean(ks)), linestyle=":") # LEGEND plt.text(x=.75, y=.1, s="sample", alpha=.75, weight="bold", color="#008fd5") plt.text(x=3, y=.15, s="true distrubtion", alpha=.75, weight="bold", color="#fc4f30") plt.text(x=1, y=.4, s="estimated distribution", alpha=.75, weight="bold", color="#e5ae38") # TICKS plt.xticks(range(0, 19)[::2]) plt.tick_params(axis = 'both', which = 'major', labelsize = 18) plt.axhline(y = 0.003, color = 'black', linewidth = 1.3, alpha = .7) # TITLE, SUBTITLE & FOOTER plt.text(x = -2, y = 0.675, s = "Chi-Squared Distribution - Parameter Estimation (MCMC)", fontsize = 26, weight = 'bold', alpha = .75) plt.text(x = -2, y = 0.6, s = 'Depicted below is the distribution of a sample (blue) drawn from a Chi-Squared distribution with \n$k=2$ (red). Also the estimated distrubution with $k \sim {} $ is shown (yellow).'.format(np.mean(ks)), fontsize = 19, alpha = .85) plt.text(x = -2,y = -0.075, s = 'Chi-Square', fontsize = 14, color = '#f0f0f0', backgroundcolor = 'grey'); ``` *** [1] - [Practical Surveys. Understanding Chi Squared](http://practicalsurveys.com/reporting/chisquare.php)
25206169a08d878b6056258e4285d6ab1c506bd3
346,947
ipynb
Jupyter Notebook
Mathematics/Statistics/Statistics and Probability Python Notebooks/Important-Statistics-Distributions-py-notebooks/Chi-Squared Distribution.ipynb
okara83/Becoming-a-Data-Scientist
f09a15f7f239b96b77a2f080c403b2f3e95c9650
[ "MIT" ]
null
null
null
Mathematics/Statistics/Statistics and Probability Python Notebooks/Important-Statistics-Distributions-py-notebooks/Chi-Squared Distribution.ipynb
okara83/Becoming-a-Data-Scientist
f09a15f7f239b96b77a2f080c403b2f3e95c9650
[ "MIT" ]
null
null
null
Mathematics/Statistics/Statistics and Probability Python Notebooks/Important-Statistics-Distributions-py-notebooks/Chi-Squared Distribution.ipynb
okara83/Becoming-a-Data-Scientist
f09a15f7f239b96b77a2f080c403b2f3e95c9650
[ "MIT" ]
2
2022-02-09T15:41:33.000Z
2022-02-11T07:47:40.000Z
631.961749
132,248
0.94076
true
3,675
Qwen/Qwen-72B
1. YES 2. YES
0.879147
0.763484
0.671214
__label__eng_Latn
0.559435
0.397787
```python import numpy as np import numpy.linalg as la import sympy as sp ``` ```python def gradient(formula, symbols, values=None): ''' Given a SymPy formula and variables Find its analytic gradient without substitution as a list of SymPy formulae or numerical gradient if values specified ''' size = len(symbols) gradient = [] for i in range(size): gradient.append(formula.diff(symbols[i])) if values is None: return gradient # Make sure you don't mess up the analytical gradient g_copy = gradient.copy() gradient_at = [] for i in range(len(g_copy)): for j in range(len(symbols)): g_copy[i] = g_copy[i].subs(symbols[j], values[j]) gradient_at.append(float(g_copy[i].evalf())) return gradient_at ``` ```python def subs_all(formula, variables, values): ''' You know what, it's getting to the point where this function is necessary ''' result = formula for i in range(len(values)): result = result.subs(variables[i], values[i]) return float(result.evalf()) def hessian(gradient, symbols, values=None): ''' Given an analytic gradient and variables Calculate its analytic Hessian or numerical Hessian if values are specified ''' size = len(symbols) hessian = [] for i in range(size): hessian.append([0]*size) for i in range(size): for j in range(size): hessian[i][j] = gradient[i].diff(symbols[j]) if values is None: return hessian hessian_at = hessian.copy() size = len(hessian) for i in range(size): for j in range(size): hessian_at[i][j] = subs_all(hessian_at[i][j], symbols, [float(v) for v in values]) # for k in range(len(symbols)): # hessian_at[i][j] = hessian_at[i][j].subs(symbols[k], values[k]) # hessian_at[i][j] = float(hessian_at[i][j].evalf()) return hessian_at ``` ```python def newton_nd_optimization_crude_step(formula, symbols, x_prev): x_prev = list(x_prev) grad = gradient(formula, symbols) neg_gradient_at = -1 * np.array(gradient(formula, symbols, values=x_prev)) hes_at = np.array(hessian(grad, symbols, values=x_prev)) return np.array(x_prev) + la.solve(hes_at, neg_gradient_at) ``` ```python def newton_nd_optimization_crude(f_str, s_str, start, tolerance, actual_solution): ''' A crude version of Newton ND Optimization algorithm Maybe you can help out implementing an analytic solution Returning the numerical solution as well as the number of iterations it took ''' formula = sp.sympify(f_str) symbols = sp.symbols(s_str) curr = np.copy(start) iterations = 0 while (la.norm(curr - actual_solution, 2) > tolerance): curr = newton_nd_optimization_crude_step(formula, symbols, curr) iterations += 1 return curr, iterations ``` ```python # Test case set-up form = sp.sympify('12*x**2+10*x*y+12*y**2+8*E**(9*x*y)+8*(sin(y)**2)+9*cos(x*y)') # remember: capital E is SymPy's natural number (Euler's e) x, y = sp.symbols('x y') tolerance = 10**-7 # Test Case: Gradient grad_at = gradient(form, (x, y), values=[0, 1]) expected = np.array([ float(form.diff(x).subs(x, 0).subs(y, 1).evalf()), float(form.diff(y).subs(x, 0).subs(y, 1).evalf()) ]) actual = np.array(grad_at) assert la.norm(expected - actual, 2) < tolerance ``` ```python # Test Case: Hessian hes = hessian(gradient(form, (x, y)), (x, y), values=[0, 1]) expected = np.array([ [ float(form.diff(x).diff(x).subs(x, 0).subs(y, 1).evalf()), float(form.diff(x).diff(y).subs(x, 0).subs(y, 1).evalf()) ], [ float(form.diff(y).diff(x).subs(x, 0).subs(y, 1).evalf()), float(form.diff(y).diff(y).subs(x, 0).subs(y, 1).evalf()) ] ]) actual = np.array(hes) assert la.norm(expected - actual, 2) < tolerance ``` ```python # Test Case: Newton ND step H = np.array([ [663, 82], [82, -16*np.sin(1)**2 + 16*np.cos(1)**2 + 24] ]) v
= np.array([82, 16*np.sin(1)*np.cos(1) + 24]) expected = la.solve(H, v) * -1 + np.array([0, 1]) actual = newton_nd_optimization_crude_step(form, (x, y), np.array([0, 1])) assert la.norm(expected - actual, 2) < tolerance ``` ```python form = sp.sympify('11*x**2+13*x*y+11*y**2+6*E**(2*x*y)+5*(sin(y)**2)+7*cos(x*y)') symbols = sp.symbols('x y') print(newton_nd_optimization_crude_step(form, symbols, [0, 1])) print(gradient(form, symbols, values=[0, 1])) print(hessian(gradient(form, symbols), symbols, values=[0, 1])) # Works, great! ``` [ 3.07907313 -4.80335408] [25.0, 26.54648713412841] [[39.0, 25.0], [25.0, 17.83853163452858]] ```python # Workspace form = sp.sympify('4*x**2+6*y**4') symbols = sp.symbols('x y') values = [2, 1] hes = hessian(gradient(form, symbols), symbols, values) x_1 = newton_nd_optimization_crude_step(form, symbols, values) step = np.array(x_1) - np.array(values) print(hes) print(x_1) print(step) ``` [[8.0, 0.0], [0.0, 72.0]] [0. 0.66666667] [-2. -0.33333333] ```python ```
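For completeness, here is a usage sketch of the full iteration defined above (it assumes the cells above have been run; the objective $4x^2 + 6y^4$ and the starting point are taken from the workspace cell, and its exact minimizer $(0, 0)$ is used as `actual_solution`):

```python
# usage sketch: minimize f(x, y) = 4*x**2 + 6*y**4, whose exact minimizer is (0, 0)
solution, n_iter = newton_nd_optimization_crude('4*x**2+6*y**4', 'x y',
                                                start=np.array([2.0, 1.0]),
                                                tolerance=1e-6,
                                                actual_solution=np.array([0.0, 0.0]))
print(solution, n_iter)
# the quadratic x-term converges in a single Newton step, while the quartic y-term
# shrinks by a factor of 2/3 per step, so roughly 35 iterations are expected
```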
b80fc7e2eb5099b521be47ad0edb3509b97baa4e
8,030
ipynb
Jupyter Notebook
newton_nd_optimization_crude.ipynb
Racso-3141/uiuc-cs357-fa21-scripts
e44f0a1ea4eb657cb77253f1db464d52961bbe5e
[ "MIT" ]
10
2021-11-02T05:56:10.000Z
2022-03-03T19:25:19.000Z
newton_nd_optimization_crude.ipynb
Racso-3141/uiuc-cs357-fa21-scripts
e44f0a1ea4eb657cb77253f1db464d52961bbe5e
[ "MIT" ]
null
null
null
newton_nd_optimization_crude.ipynb
Racso-3141/uiuc-cs357-fa21-scripts
e44f0a1ea4eb657cb77253f1db464d52961bbe5e
[ "MIT" ]
3
2021-10-30T15:18:01.000Z
2021-12-10T11:26:43.000Z
30.074906
114
0.509838
true
1,531
Qwen/Qwen-72B
1. YES 2. YES
0.897695
0.843895
0.757561
__label__eng_Latn
0.522653
0.598399
## Cosmological constraints on quantum fluctuations in modified teleparallel gravity The Friedmann equations modified by quantum fluctuations can be written as \begin{equation} 3 H^2=\cdots , \end{equation} and \begin{equation} 2 \dot{H}+3 H^2=\cdots , \end{equation} whereas the modified Klein-Gordon equation can be written in the form \begin{equation} \dot{\rho} + 3 H \left( \rho + P \right) = \cdots \end{equation} where $H$ is the Hubble function, and $(\rho, P)$ are the fluid energy density and pressure. Dots over a variable denote differentiation with respect to the cosmic time $t$. The ellipses on the right hand sides represent the quantum corrections. See [arXiv:2108.04853](https://arxiv.org/abs/2108.04853) and [arXiv:2111.11761](https://arxiv.org/abs/2111.11761) for full details. This Jupyter notebook is devoted to constraining the quantum corrections using late-time compiled data sets from cosmic chronometers (CC), supernovae (SNe), and baryon acoustic oscillations (BAO). In other words, we shall numerically integrate the dynamical system and perform a Bayesian analysis to determine the best fit theory parameters. We divide the discussion into three sections: (1) observation, (2) theory, and (3) data analysis. *References to the data and python packages can be found at the end of the notebook.* ```python import numpy as np from scipy.integrate import solve_ivp, simps import matplotlib.pyplot as plt from mpl_toolkits.axes_grid1.inset_locator import inset_axes # for the insets from scipy.constants import c from cobaya.run import run from getdist.mcsamples import loadMCSamples from gdplotter_rcb import plot_triangle, plot_1d import os # requires *full path* # for imposing likelihood time limit; otherwise, mcmc gets stuck from multiprocessing import Process ``` ### 1. Observation We import the cosmological data to be used for constraining the theory. We start with the CC + BAO data, which provide measurements of the Hubble function at various redshifts. ```python cc_data = np.loadtxt('Hdz_2020.txt') z_cc = cc_data[:, 0] Hz_cc = cc_data[:, 1] sigHz_cc = cc_data[:, 2] fig, ax = plt.subplots() ax.errorbar(z_cc, Hz_cc, yerr = sigHz_cc, fmt = 'rx', ecolor = 'k', markersize = 7, capsize = 3) ax.set_xlabel('$z$') ax.set_ylabel('$H(z)$') plt.show() ``` We also consider the 1048 supernovae observations in the form of the Pantheon compilation.
```python # load pantheon compressed m(z) data loc_lcparam = 'https://raw.githubusercontent.com/dscolnic/Pantheon/master/Binned_data/lcparam_DS17f.txt' loc_lcparam_sys = 'https://raw.githubusercontent.com/dscolnic/Pantheon/master/Binned_data/sys_DS17f.txt' #loc_lcparam = 'https://raw.githubusercontent.com/dscolnic/Pantheon/master/lcparam_full_long_zhel.txt' #loc_lcparam_sys = 'https://raw.githubusercontent.com/dscolnic/Pantheon/master/sys_full_long.txt' lcparam = np.loadtxt(loc_lcparam, usecols = (1, 4, 5)) lcparam_sys = np.loadtxt(loc_lcparam_sys, skiprows = 1) # setup pantheon samples z_ps = lcparam[:, 0] logz_ps = np.log(z_ps) mz_ps = lcparam[:, 1] sigmz_ps = lcparam[:, 2] # pantheon samples systematics covmz_ps_sys = lcparam_sys.reshape(40, 40) #covmz_ps_sys = lcparam_sys.reshape(1048, 1048) covmz_ps_tot = covmz_ps_sys + np.diag(sigmz_ps**2) # plot data set plt.errorbar(logz_ps, mz_ps, yerr = np.sqrt(np.diag(covmz_ps_tot)), fmt = 'bx', markersize = 7, ecolor = 'k', capsize = 3) plt.xlabel('$\ln(z)$') plt.ylabel('$m(z)$') plt.show() ``` The compiled CC, SNe, and BAO data sets above will be used to constrain the quantum corrections arising as teleparallel gravity terms in the Friedmann equations. ### 2. Theory We setup the Hubble function $H(z)$ by numerically integrating the field equations. This is in preparation for analysis later on where this observable as well as the supernovae apparent magnitudes are compared with the data. We start by coding the differential equation (in the form $y'(z) = f[y(z),z]$) and the density parameters and other relevant quantities in the next line. ```python def F(z, y, om0, eps): '''returns the differential equation y' = f(y, z) for input to odeint input: y = H(z)/H0 z = redshift om0 = matter fraction at z = 0 eps = LambdaCDM deviation''' lmd = 1 - om0 + eps q = -(-1 + lmd + om0)/(6*(-6 + 4*lmd - om0)) num = 3*(-lmd + (1 - 24*q*lmd)*(y**2) + 36*q*(y**4)) \ *(1 + 18*q*(y**2)*(-1 + 4*q*(y**2))) den = 2*(1 + z)*y*(1 - 18*q*lmd \ + 6*q*(y**2)*(7 + 126*q*lmd \ + 24*q*(y**2)*(-13 - 12*q*lmd + 45*q*(y**2)))) return num/den def ol0(om0, eps): '''returns the density parameter of lambda''' return 1 - om0 + eps def q_param(om0, eps): '''returns the dimensionless quantum correction parameter''' lmd = 1 - om0 + eps q = -(-1 + lmd + om0)/(6*(-6 + 4*lmd - om0)) return q def oq0(om0, eps): '''returns the density parameter of the quantum corrections''' lmd = 1 - om0 + eps q = -(-1 + lmd + om0)/(6*(-6 + 4*lmd - om0)) return q*(24*lmd - 6*(6 + lmd)) ``` Now that the equations are all set, we can proceed with the numerical integration. We test this out in the next line. ```python # late-time redshifts z_min = 0 z_max = 2.5 n_div = 12500 z_late = np.linspace(z_min, z_max, n_div) def nsol(om0, eps): '''numerically integrates the master ode returns: y(z) = H(z)/H0: rescaled Hubble function''' nsol = solve_ivp(F, t_span = (z_min, z_max), y0 = [1], t_eval = z_late, args = (om0, eps)) return nsol # pilot/test run, shown with the CC data test_run = nsol(om0 = 0.3, eps = 0.01) fig, ax = plt.subplots() ax.errorbar(z_cc, Hz_cc, yerr = sigHz_cc, fmt = 'kx', ecolor = 'k', markersize = 7, capsize = 3) ax.plot(test_run.t, 70*test_run.y[0], 'r-') ax.set_xlim(z_min, z_max) ax.set_xlabel('$z$') ax.set_ylabel('$H(z)$') plt.show() ``` We also setup the integral to obtain the SNe apparent magnitude. 
We assume a spatially-flat scenario in which the luminosity distance given by \begin{equation} d_L \left( z \right) = \dfrac{c}{H_0} \left( 1 + z \right) \int_0^z \dfrac{dz'}{H\left(z'\right) /H_0} . \end{equation} *$H_0$ will be written as $h \times 100$ (km/s/Mpc) $= h \times 10^{-1}$ (m/s/pc). The factor $c/H_0$ will then be written as $c / \left(h \times 10^{-1}\right)$ parsecs where $c$ is the speed of light in vacuum in m/s (a.k.a. scipy value). ```python def dl(om0, eps, z_rec): '''returns the luminosity distance input: z_rec = redshifts at prediction''' E_sol = nsol(om0, eps).y[0] E_inv = 1/E_sol dL = [] for z_i in z_rec: diff_list = list(abs(z_i - z_late)) idx = diff_list.index(min(diff_list)) dL.append((1 + z_i)*simps(E_inv[:idx + 1], z_late[:idx + 1])) return np.array(dL) def dm(H0, om0, eps, z_rec): '''returns the distance modulus m - M input: z_rec = redshifts at prediction''' h = H0/100 return 5*np.log10((c/h)*dl(om0, eps, z_rec)) def m0(H0, om0, eps, M, z_rec): '''returns the apparent magnitude m input: z_rec = redshifts at prediction''' return dm(H0, om0, eps, z_rec) + M ``` We can test out a prediction with the Pantheon data set. Here is an illustration for the same parameters used in the CC prediction earlier. ```python test_run = m0(H0 = 70, om0 = 0.3, eps = 0.01, M = -19.3, z_rec = z_late[1:]) fig, ax = plt.subplots() ax.plot(np.log(z_late[1:]), test_run, 'k-') ax.errorbar(logz_ps, mz_ps, yerr = np.sqrt(np.diag(covmz_ps_tot)), fmt = 'bo', markersize = 2, ecolor = 'k', capsize = 3) ax.set_xlim(min(logz_ps) - 1, max(logz_ps)) ax.set_xlabel('$\ln(z)$') ax.set_ylabel('$m(z)$') plt.show() ``` With predictions of $H(z)$ and $m(z)$, we're now ready to study the data with the model. ### 3. Data analysis We setup the individual and joint log-likelihoods for the CC, SNe, and BAO data sets. ```python def loglike_cc_bao(H0, om0, eps): '''returns the log-likelihood for the CC data''' if (om0 < 0) or (np.abs(oq0(om0, eps)) > 0.1): return -np.inf else: H_sol = H0*nsol(om0, eps).y[0] H_sol_cc = [] for z_i in z_cc: diff_list = list(abs(z_i - z_late)) idx = diff_list.index(min(diff_list)) H_sol_cc.append(H_sol[idx]) H_sol_cc = np.array(H_sol_cc) Delta_H = H_sol_cc - Hz_cc ll_cc = -0.5*np.sum((Delta_H/sigHz_cc)**2) if np.isnan(ll_cc) == True: return -np.inf else: return ll_cc C_inv = np.linalg.inv(covmz_ps_tot) def loglike_sn(H0, om0, eps, M): '''returns the log-likelihood for the SN data''' if (om0 < 0) or (np.abs(oq0(om0, eps)) > 0.1): return -np.inf else: m_sol_ps = m0(H0, om0, eps, M, z_ps) Delta_m = m_sol_ps - mz_ps ll_sn = -0.5*(Delta_m.T @ C_inv @ Delta_m) if np.isnan(ll_sn) == True: return -np.inf else: return ll_sn def loglike_cc_bao_sn(H0, om0, eps, M): '''returns the total CC + BAO + SNe likelihood for a theory prediction''' return loglike_cc_bao(H0, om0, eps) + loglike_sn(H0, om0, eps, M) ``` Now, we must impose a time limit to the evaluation of the likelihood. Otherwise, the MCMC would not converge particularly when using MPI as some of the chains get stuck in certain, isolated regions of the parameter space. 
```python # impose timeout, to avoid evaluations/chains getting stuck somewhere def Loglike_cc_bao(H0, om0, eps): '''same loglike but with timelimit of 10 secs per eval''' p = Process(target = loglike_cc_bao, args = (H0, om0, eps,)) p.start() p.join(10) if p.is_alive(): p.terminate() p.join() return -np.inf else: return loglike_cc_bao(H0, om0, eps) def Loglike_cc_bao_sn(H0, om0, eps, M): '''same loglike but with timelimit of 10 secs per eval''' p = Process(target = loglike_cc_bao_sn, args = (H0, om0, eps, M,)) p.start() p.join(10) if p.is_alive(): p.terminate() p.join() return -np.inf else: return loglike_cc_bao_sn(H0, om0, eps, M) ``` The input to ``cobaya`` is preferrably prepared as a ``.yaml`` file to run in a cluster. See the ones in the directory. This comprises of the likelihood and the priors to be used for the sampling. Nonetheless, if one insists, the input can also be prepared as a python dictionary. We show an example below. ```python # SNe Mag prior, SH0ES taken from lit., cepheids calibrated M_priors = {'SH0ES': {'ave': -19.22, 'std': 0.04}} M_prior = M_priors['SH0ES'] # likelihood #info = {"likelihood": {"loglike": Loglike_cc_bao}} info = {"likelihood": {"loglike": Loglike_cc_bao_sn}} # parameters to perform mcmc info["params"] = {"H0": {"prior": {"min": 50, "max": 80}, "ref": {"min": 68, "max": 72}, "proposal": 0.05, "latex": r"H_0"}, "om0": {"prior": {"min": 0, "max": 1}, "ref": {"min": 0.25, "max": 0.35}, "proposal": 1e-3, "latex": r"\Omega_{m0}"}, "eps": {"prior": {"min": -1e-1, "max": 1e-1}, "ref": {"min": -1e-2, "max": 1e-2}, "proposal": 1e-3, "latex": r"\epsilon"}} # uncomment info["params"]["M"] if SNe data is considered info["params"]["M"] = {"prior": {"dist": "norm", "loc": M_prior['ave'], "scale": M_prior['std']}, "ref": M_prior['ave'], "proposal": M_prior['std']/4, "latex": r"M"} info["params"]["q"] = {"derived": q_param, "latex": r"q"} info["params"]["ol0"] = {"derived": ol0, "latex": r"\Omega_{\Lambda}"} info["params"]["oq0"] = {"derived": oq0, "latex": r"\Omega_{q0}"} # mcmc, Rminus1_stop dictates covergence info["sampler"] = {"mcmc":{"Rminus1_stop": 0.01, "max_tries": 1000}} # output, uncomment to save output in the folder chains #info["output"] = "chains_nonminmat_Hdz_Pantheon/tg_quantum_M_SH0ES_cc_bao" info["output"] = "chains_nonminmat_Hdz_Pantheon/tg_quantum_M_SH0ES_cc_bao_sn" # uncomment to overwrite existing files, be careful #info["force"] = True ``` The sampling can now be performed. Suggestion is to run this in a cluster using the command ``cobaya-run``, e.g., with $N$ processes: ``mpirun -n N cobaya-run -f __.yaml``. See also the sample yaml file in the same directory as this jupyter notebook. In a python interpreter, the MCMC can be performed using the function ``run``. Example below. # uncomment next two lines if input is yaml file #from cobaya.yaml import yaml_load_file #info = yaml_load_file("tg_quantum_mcmc_Hdz_Pantheon_cc_bao_sn.yaml") updated_info, sampler = run(info) The results of the sampling can be viewed any time once the results are saved. We prepare the plots by defining the following generic plotting functions using ``getdist`` in ``gdplotter_rcb.py``. The posteriors for the density parameters provided the (1) CC + SNe and (2) CC + SNe + BAO data sets are shown below. 
```python # specify file location(s) folder_filename_0 = "chains_nonminmat_Hdz_Pantheon/tg_quantum_cc_bao" folder_filename_1 = "chains_nonminmat_Hdz_Pantheon/tg_quantum_M_SH0ES_cc_bao_sn" # loading results from folder_filename gdsamples_0 = loadMCSamples(os.path.abspath(folder_filename_0)) gdsamples_1 = loadMCSamples(os.path.abspath(folder_filename_1)) plot_triangle([gdsamples_0, gdsamples_1], ["H0", "om0", "oq0"], ['red', 'blue'], ['-', '--'], [r"CC + BAO", r"CC + BAO + SNe"], thickness = 3, font_size = 15, title_fs = 15, parlims = {'oq0': (-0.1, 0.1)}, lgd_font_size = 15) ``` This shows a slight preference for quantum corrections ($\Omega_{q0} < 0$). We shall look at the statistical significance of this later. Here is the corresponding plot for the other parameters. ```python plot_triangle([gdsamples_0, gdsamples_1], ["H0", "ol0", "eps"], ['red', 'blue'], ['-', '--'], [r"CC + BAO", r"CC + BAO + SNe"], thickness = 3, font_size = 15, title_fs = 15, parlims = {'eps': (-0.07, 0.07)}, lgd_font_size = 15) plot_1d([gdsamples_1], ["M"], clrs = ['blue'], thickness = 3, lsty = ['--'], font_size = 15, width_inch = 3.5, figs_per_row = 1) ``` It is also useful to look at the posteriors with the corresponding $\Lambda$CDM model ($\varepsilon = 0$). ```python # specify file location(s) folder_filename_2 = "chains_lcdm_Hdz_Pantheon/lcdm_cc_bao" folder_filename_3 = "chains_lcdm_Hdz_Pantheon/lcdm_M_SH0ES_cc_bao_sn" # loading results from folder_filename gdsamples_2 = loadMCSamples(os.path.abspath(folder_filename_2)) gdsamples_3 = loadMCSamples(os.path.abspath(folder_filename_3)) plot_triangle([gdsamples_0, gdsamples_2, gdsamples_1, gdsamples_3], ["H0", "om0"], ['red', 'green', 'blue', 'black'], ['-', '-.', '--', ':'], [r"TG/quant: CC + BAO", r"$\Lambda$CDM: CC + BAO", r"TG/quant: CC + BAO + SNe", r"$\Lambda$CDM: CC + BAO + SNe"], thickness = 3, font_size = 15, title_fs = 15, width_inch = 7, lgd_font_size = 12) plot_1d([gdsamples_1, gdsamples_3], ["M"], lbls = [r"TG/quant: CC + BAO + SNe", r"$\Lambda$CDM: CC + BAO + SNe"], clrs = ['blue', 'black'], lsty = ['--', ':'], thickness = 3, font_size = 15, lgd_font_size = 12, width_inch = 3.5, figs_per_row = 1) plot_1d([gdsamples_0, gdsamples_1], ["oq0", "q"], lbls = [r"TG/quant: CC + BAO", r"TG/quant: CC + BAO + SNe"], clrs = ['red', 'blue'], lsty = ['-', '--'], thickness = 3, font_size = 15, lgd_font_size = 12, width_inch = 7, figs_per_row = 2) ``` We can obtain the best estimates (marginalized statistics) of the constrained parameters $H_0$, $\Omega_{m0}$, $\Omega_\Lambda$, $\Omega_{q0}$, $\varepsilon$, and $M$ (SN absolute magnitude). 
```python # uncomment next 3 lines to get more info on gdsamples_X #print(gdsamples_x.getGelmanRubin()) #print(gdsamples_x.getConvergeTests()) #print(gdsamples_x.getLikeStats()) def get_bes(gdx, params_list): '''get summary statistics for params_list and gdx, params_list = list of parameter strings, e.g., ["H0", "om0"] gdx = cobaya/getdist samples, e.g., gdsamples_1''' stats = gdx.getMargeStats() for p in params_list: p_ave = stats.parWithName(p).mean p_std = stats.parWithName(p).err print() print(p, '=', p_ave, '+/-', p_std) def get_loglike_cc_bao(gdx): '''returns the loglikelihood at the mean of the best fit''' stats = gdx.getMargeStats() return Loglike_cc_bao(stats.parWithName("H0").mean, stats.parWithName("om0").mean, stats.parWithName("eps").mean) def get_loglike_cc_bao_sn(gdx): '''returns the loglikelihood at the mean of the best fit''' stats = gdx.getMargeStats() return Loglike_cc_bao_sn(stats.parWithName("H0").mean, stats.parWithName("om0").mean, stats.parWithName("eps").mean, stats.parWithName("M").mean) print('CC + BAO : loglike = ', get_loglike_cc_bao(gdsamples_0)) get_bes(gdsamples_0, ["H0", "om0", "ol0", "oq0", "eps", "q"]) print() print('CC + SNe + BAO : loglike = ', get_loglike_cc_bao_sn(gdsamples_1)) get_bes(gdsamples_1, ["H0", "om0", "ol0", "oq0", "eps", "q", "M"]) ``` CC + BAO : loglike = -14.390450724779976 H0 = 67.8004534543283 +/- 1.4770558775736187 om0 = 0.34080578948060447 +/- 0.03776932283936424 ol0 = 0.6852167726443292 +/- 0.031097649142164077 oq0 = -0.028386252082125873 +/- 0.01623081432927593 eps = 0.026022562079968937 +/- 0.014889691748570664 q = 0.0011964420976620595 +/- 0.0006889664003658708 CC + SNe + BAO : loglike = -36.76284425721016 H0 = 70.05454742388778 +/- 0.8527931592535787 om0 = 0.2991032108061891 +/- 0.025414395633843102 ol0 = 0.7235945176419945 +/- 0.016742957793318802 oq0 = -0.025412627377857364 +/- 0.020899602813229563 eps = 0.02269772841245066 +/- 0.01862145927771875 q = 0.001106075763625953 +/- 0.0009143510073116634 M = -19.354843877575284 +/- 0.020811981993327823 We end the notebook by comparing the best fit results compared with $\Lambda$CDM. We also print out the $\chi^2$ statistics for the SNe + CC + BAO results. 
```python # generic plotting function def plot_best_fit_Hdz(gdxs, lbls, lsts, gdxs_lcdm, lbls_lcdm, lsts_lcdm, save = False, fname = None, folder = None): '''plots the best fit CC results with compared with LambdaCDM''' # cosmic chronometers fig, ax = plt.subplots() ix = inset_axes(ax, width = '45%', height = '30%', loc = 'upper left') ax.errorbar(z_cc, Hz_cc, yerr = sigHz_cc, fmt = 'rx', ecolor = 'k', markersize = 7, capsize = 3, zorder = 0) ix.errorbar(z_cc, Hz_cc, yerr = sigHz_cc, fmt = 'rx', ecolor = 'k', markersize = 7, capsize = 3, zorder = 0) for i in np.arange(0, len(gdxs)): stats = gdxs[i].getMargeStats() H0 = stats.parWithName("H0").mean om0 = stats.parWithName("om0").mean eps = stats.parWithName("eps").mean Hz = H0*nsol(om0 = om0, eps = eps).y[0] ax.plot(z_late, Hz, lsts[i], label = lbls[i]) ix.plot(z_late, Hz, lsts[i]) for i in np.arange(0, len(gdxs_lcdm)): stats = gdxs_lcdm[i].getMargeStats() H0 = stats.parWithName("H0").mean om0 = stats.parWithName("om0").mean Hz = H0*nsol(om0 = om0, eps = 0).y[0] ax.plot(z_late, Hz, lsts_lcdm[i], label = lbls_lcdm[i]) ix.plot(z_late, Hz, lsts_lcdm[i]) ax.set_xlim(z_min, z_max) ax.set_xlabel('$z$') ax.set_ylabel('$H(z)$') ax.legend(loc = 'lower right', prop = {'size': 9.5}) ix.set_xlim(0, 0.2) ix.set_ylim(66, 74) ix.set_xticks([0.05, 0.1]) ix.yaxis.tick_right() ix.set_yticks([68, 70, 72]) ix.xaxis.set_tick_params(labelsize = 10) ix.yaxis.set_tick_params(labelsize = 10) if save == True: fig.savefig(folder + '/' + fname + '.' + fig_format) def plot_best_fit_sne(gdxs, lbls, lsts, \ gdxs_lcdm, lbls_lcdm, lsts_lcdm, save = False, fname = None, folder = None): '''plots the best fit CC results with compared with LambdaCDM''' # setup full pantheon samples lcparam_full = np.loadtxt('../../datasets/pantheon/lcparam_full_long_zhel.txt', usecols = (1, 4, 5)) lcparam_sys_full = np.loadtxt('../../datasets/pantheon/sys_full_long.txt', skiprows = 1) z_ps = lcparam_full[:, 0] mz_ps = lcparam_full[:, 1] sigmz_ps = lcparam_full[:, 2] covmz_ps_sys = lcparam_sys_full.reshape(1048, 1048) covmz_ps_tot = covmz_ps_sys + np.diag(sigmz_ps**2) # supernovae z_sne = np.logspace(-3, np.log10(2.5), 100) fig, ax = plt.subplots() ax.errorbar(z_ps, mz_ps, yerr = np.sqrt(np.diag(covmz_ps_tot)), fmt = 'rx', markersize = 3, ecolor = 'k', capsize = 3, zorder = 0) for i in np.arange(0, len(gdxs)): stats = gdxs[i].getMargeStats() H0 = stats.parWithName("H0").mean om0 = stats.parWithName("om0").mean eps = stats.parWithName("eps").mean M = stats.parWithName("M").mean mz = m0(H0 = H0, om0 = om0, eps = eps, M = M, z_rec = z_sne) ax.plot(z_sne, mz, lsts[i], label = lbls[i]) for i in np.arange(0, len(gdxs_lcdm)): stats = gdxs_lcdm[i].getMargeStats() H0 = stats.parWithName("H0").mean om0 = stats.parWithName("om0").mean M = stats.parWithName("M").mean mz = m0(H0 = H0, om0 = om0, eps = 0, M = M, z_rec = z_sne) ax.plot(z_sne, mz, lsts_lcdm[i], label = lbls_lcdm[i]) ax.set_xlim(0, 2.5) ax.set_ylim(11.5, 27.5) ax.set_xlabel('$\ln(z)$') ax.set_ylabel('$m(z)$') ax.legend(loc = 'lower right', prop = {'size': 9.5}) if save == True: fig.savefig(folder + '/' + fname + '.' 
+ fig_format) plot_best_fit_Hdz([gdsamples_0, gdsamples_1], ['TG/quant: CC + BAO', 'TG/quant: CC + BAO + SNe'], ['r-', 'b--'], [gdsamples_2, gdsamples_3], [r'$\Lambda$CDM: CC + BAO', r'$\Lambda$CDM: CC + BAO + SNe'], ['g-.', 'k:']) plot_best_fit_sne([gdsamples_1], ['TG/quant: CC + BAO + SNe'], ['b--'], [gdsamples_3], [r'$\Lambda$CDM: CC + BAO + SNe'], ['k:']) ``` To objectively assess whether the results are significant, we calculate three statistical measures: the $\chi^2$, the Akaike information criterion (AIC), and the Bayesian information criterion (BIC). We can easily compute the chi-squared from the loglikelihood as $\chi^2 = -2 \log \mathcal{L}$. Doing so leads to $\Delta \chi^2 = \chi^2_{\Lambda \text{CDM}} - \chi^2_{\text{TG}}$: ```python def get_bfloglike_cc_bao(gdx): '''returns the best fit loglikelihood using like stats''' stats = gdx.getLikeStats() return Loglike_cc_bao(stats.parWithName("H0").bestfit_sample, stats.parWithName("om0").bestfit_sample, stats.parWithName("eps").bestfit_sample) def get_bfloglike_cc_bao_sn(gdx): '''returns the best fit loglikelihood using like stats''' stats = gdx.getLikeStats() return Loglike_cc_bao_sn(stats.parWithName("H0").bestfit_sample, stats.parWithName("om0").bestfit_sample, stats.parWithName("eps").bestfit_sample, stats.parWithName("M").bestfit_sample) # LambdaCDM CC + BAO like-stats stats_lcdm_cc_bao = gdsamples_2.getLikeStats() H0_lcdm_cc_bao = stats_lcdm_cc_bao.parWithName("H0").bestfit_sample om0_lcdm_cc_bao = stats_lcdm_cc_bao.parWithName("om0").bestfit_sample loglike_lcdm_cc_bao = Loglike_cc_bao(H0_lcdm_cc_bao, om0_lcdm_cc_bao, eps = 0) # LambdaCDM CC + BAO + SNe like-stats stats_lcdm_cc_bao_sn = gdsamples_3.getLikeStats() H0_lcdm_cc_bao_sn = stats_lcdm_cc_bao_sn.parWithName("H0").bestfit_sample om0_lcdm_cc_bao_sn = stats_lcdm_cc_bao_sn.parWithName("om0").bestfit_sample M_lcdm_cc_bao_sn = stats_lcdm_cc_bao_sn.parWithName("M").bestfit_sample loglike_lcdm_cc_bao_sn = Loglike_cc_bao_sn(H0_lcdm_cc_bao_sn, om0_lcdm_cc_bao_sn, \ eps = 0, M = M_lcdm_cc_bao_sn) print('CC + BAO results') print('LambdaCDM : chi-squared = ', -2*loglike_lcdm_cc_bao) print('TG/quant : chi-squared = ', -2*get_bfloglike_cc_bao(gdsamples_0)) print('Delta chi-squared = ', \ -2*(loglike_lcdm_cc_bao - get_bfloglike_cc_bao(gdsamples_0))) print() print('CC + BAO + SNe results') print('LambdaCDM : chi-squared = ', -2*loglike_lcdm_cc_bao_sn) print('TG/quant : chi-squared = ', -2*get_bfloglike_cc_bao_sn(gdsamples_1)) print('Delta chi-squared = ', \ -2*(loglike_lcdm_cc_bao_sn - get_bfloglike_cc_bao_sn(gdsamples_1))) ``` CC + BAO results LambdaCDM : chi-squared = 32.075673520122784 TG/quant : chi-squared = 28.618737106195127 Delta chi-squared = 3.4569364139276573 CC + BAO + SNe results LambdaCDM : chi-squared = 75.24897103548676 TG/quant : chi-squared = 68.92102582823969 Delta chi-squared = 6.327945207247069 This shows that in both cases $\chi^2 > 0$ which corresponds a (very) slight preference for the inclusion of the quantum corrections. Moving on, the AIC can be computed using \begin{equation} \text{AIC} = 2 k - 2 \log(\mathcal{L}) \end{equation} where $\log(\mathcal{L})$ is the log-likelihood and $k$ is the number of parameters estimated by the model. The results for the AIC are printed in the next line with $\Delta \text{AIC} = \text{AIC}_{\Lambda\text{CDM}} - \text{AIC}_{\text{TG}}$. 
```python
print('CC + BAO results')
aic_lcdm_cc_bao = 2*2 - 2*loglike_lcdm_cc_bao # estimated H0, om0
aic_tg_quantum_cc_bao = 2*3 - 2*get_bfloglike_cc_bao(gdsamples_0) # estimated H0, om0, eps
print('LambdaCDM : AIC = ', aic_lcdm_cc_bao)
print('TG/quant : AIC = ', aic_tg_quantum_cc_bao)
print('Delta AIC = ', \
      aic_lcdm_cc_bao - aic_tg_quantum_cc_bao)
print()

aic_lcdm_cc_bao_sn = 2*3 - 2*loglike_lcdm_cc_bao_sn # estimated ... + M
aic_tg_quantum_cc_bao_sn = 2*4 - 2*get_bfloglike_cc_bao_sn(gdsamples_1)
print('CC + BAO + SNe results')
print('LambdaCDM : AIC = ', aic_lcdm_cc_bao_sn)
print('TGquantum : AIC = ', aic_tg_quantum_cc_bao_sn)
print('Delta AIC = ', \
      aic_lcdm_cc_bao_sn - aic_tg_quantum_cc_bao_sn)
```

    CC + BAO results
    LambdaCDM : AIC =  36.075673520122784
    TG/quant : AIC =  34.61873710619513
    Delta AIC =  1.4569364139276573

    CC + BAO + SNe results
    LambdaCDM : AIC =  81.24897103548676
    TGquantum : AIC =  76.92102582823969
    Delta AIC =  4.327945207247069

In both cases $\Delta \text{AIC} > 0$, i.e. the AIC prefers the inclusion of the TG/quantum corrections: the preference is marginal for CC + BAO ($\Delta \text{AIC} \approx 1.5$) and somewhat stronger for CC + BAO + SNe ($\Delta \text{AIC} \approx 4.3$).

The BIC can be computed using
\begin{equation}
\text{BIC} = k \log(n) - 2 \log(\mathcal{L})
\end{equation}
where $\log(\mathcal{L})$ is the log-likelihood, $n$ is the number of data points, and $k$ is the number of parameters estimated by the model. We can again easily compute this together with $\Delta \text{BIC} = \text{BIC}_{\Lambda\text{CDM}} - \text{BIC}_{\text{TG}}$. The results are printed below.

```python
print('CC + BAO results')
n_cc_bao = len(z_cc)
bic_lcdm_cc_bao = 2*np.log(n_cc_bao) - 2*loglike_lcdm_cc_bao # estimated H0, om0
bic_tg_quantum_cc_bao = 3*np.log(n_cc_bao) - 2*get_bfloglike_cc_bao(gdsamples_0) # estimated H0, om0, eps
print('LambdaCDM : BIC = ', bic_lcdm_cc_bao)
print('TG/quant : BIC = ', bic_tg_quantum_cc_bao)
print('Delta BIC = ', \
      bic_lcdm_cc_bao - bic_tg_quantum_cc_bao)
print()

n_cc_bao_sn = len(z_cc) + len(z_ps)
bic_lcdm_cc_bao_sn = 3*np.log(n_cc_bao_sn) - 2*loglike_lcdm_cc_bao_sn # estimated ... + M
bic_tg_quantum_cc_bao_sn = 4*np.log(n_cc_bao_sn) - 2*get_bfloglike_cc_bao_sn(gdsamples_1)
print('CC + BAO + SNe results')
print('LambdaCDM : BIC = ', bic_lcdm_cc_bao_sn)
print('TG/quant : BIC = ', bic_tg_quantum_cc_bao_sn)
print('Delta BIC = ', \
      bic_lcdm_cc_bao_sn - bic_tg_quantum_cc_bao_sn)
```

    CC + BAO results
    LambdaCDM : BIC =  40.16177605579188
    TG/quant : BIC =  40.74789090969878
    Delta BIC =  -0.5861148539068992

    CC + BAO + SNe results
    LambdaCDM : BIC =  88.9731039709969
    TG/quant : BIC =  87.21986974225322
    Delta BIC =  1.753234228743679

We find here that CC + BAO prefers the $\Lambda$CDM model $\left( \Delta \text{BIC} < 0 \right)$ over the inclusion of quantum corrections, while CC + BAO + SNe only weakly prefers the quantum-corrected model $\left( \Delta \text{BIC} \approx 1.8 \right)$; the BIC penalizes the additional parameter more heavily than the AIC, so neither case provides decisive evidence.

### Appendix: A quantum corrected DE EoS

It is additionally insightful to look at the dark energy equation of state. This is computed below by considering the contributions that source an accelerated expansion phase through the modified Friedmann equations.
```python def rhoLambda(H0, om0, eps): lmd = 1 - om0 + eps Lmd = lmd*(3*(H0**2)) q = -(-1 + lmd + om0)/(6*(-6 + 4*lmd - om0)) alpha = q/(H0**2) Hz = H0*nsol(om0 = om0, eps = eps).y[0] return Lmd + 24*alpha*Lmd*Hz**2 def preLambda(H0, om0, eps): lmd = 1 - om0 + eps Lmd = lmd*(3*(H0**2)) q = -(-1 + lmd + om0)/(6*(-6 + 4*lmd - om0)) alpha = q/(H0**2) Hz = H0*nsol(om0 = om0, eps = eps).y[0] z = z_late Hpz = H0*F(z, Hz/H0, om0, eps) return -Lmd*(1 + 24*alpha*(Hz**2) \ - 16*(1 + z)*alpha*Hz*Hpz) def wLambda(H0, om0, eps): return preLambda(H0, om0, eps)/rhoLambda(H0, om0, eps) def rhoHO(H0, om0, eps): lmd = 1 - om0 + eps Lmd = lmd*(3*(H0**2)) q = -(-1 + lmd + om0)/(6*(-6 + 4*lmd - om0)) alpha = q/(H0**2) Hz = H0*nsol(om0 = om0, eps = eps).y[0] return -108*alpha*(Hz**4) def preHO(H0, om0, eps): lmd = 1 - om0 + eps Lmd = lmd*(3*(H0**2)) q = -(-1 + lmd + om0)/(6*(-6 + 4*lmd - om0)) alpha = q/(H0**2) Hz = H0*nsol(om0 = om0, eps = eps).y[0] z = z_late Hpz = H0*F(z, Hz/H0, om0, eps) return 36*alpha*(Hz**3)*(3*Hz - 4*(1 + z)*Hpz) def wHO(H0, om0, eps): return preHO(H0, om0, eps)/rhoHO(H0, om0, eps) def wLambdaPlusHO(H0, om0, eps): preTot = preLambda(H0, om0, eps) + preHO(H0, om0, eps) rhoTot = rhoLambda(H0, om0, eps) + rhoHO(H0, om0, eps) return preTot/rhoTot def plot_best_fit_wz(gdxs, lbls, lsts, save = False, fname = None, folder = None): '''plots the best fit DE EoS including quantum corrections''' fig, ax = plt.subplots() for i in np.arange(0, len(gdxs)): stats = gdxs[i].getMargeStats() H0 = stats.parWithName("H0").mean om0 = stats.parWithName("om0").mean eps = stats.parWithName("eps").mean wz = wLambdaPlusHO(H0 = H0, om0 = om0, eps = eps) ax.plot(z_late, 1 + wz, lsts[i], label = lbls[i]) ax.plot(z_late, np.array([0]*len(z_late)), "k:", label = r"$\Lambda$CDM") ax.set_xlim(0, 1) ax.set_ylim(-1.1, 1.1) ax.set_xlabel('$z$') ax.set_ylabel('$1 + w(z)$') ax.legend(loc = 'upper right', prop = {'size': 9.5}) if save == True: fig.savefig(folder + '/' + fname + '.' + fig_format, bbox_inches = 'tight') ``` Here we go with the plot. ```python plot_best_fit_wz([gdsamples_0, gdsamples_1], ['TG/quant: CC + BAO', 'TG/quant: CC + BAO + SNe'], ['r-', 'b--']) ``` This shows that within this model, the quantum-corrections source a phantom-like dark energy permeating in the late Universe. ### Data references **pantheon** D. M. Scolnic et al., The Complete Light-curve Sample of Spectroscopically Confirmed SNe Ia from Pan-STARRS1 and Cosmological Constraints from the Combined Pantheon Sample, Astrophys. J. 859 (2018) 101 [[1710.00845](https://arxiv.org/abs/1710.00845)]. **cosmic chronometers** M. Moresco, L. Pozzetti, A. Cimatti, R. Jimenez, C. Maraston, L. Verde et al., *A 6% measurement of the Hubble parameter at z ∼ 0.45: direct evidence of the epoch of cosmic re-acceleration*, JCAP 05 (2016) 014 [1601.01701](https://arxiv.org/abs/1601.01701). M. Moresco, *Raising the bar: new constraints on the Hubble parameter with cosmic chronometers at z ∼ 2*, Mon. Not. Roy. Astron. Soc. 450 (2015) L16 [1503.01116](https://arxiv.org/abs/1503.01116). C. Zhang, H. Zhang, S. Yuan, S. Liu, T.-J. Zhang and Y.-C. Sun, *Four new observational H(z) data from luminous red galaxies in the Sloan Digital Sky Survey data release seven*, Research in Astronomy and Astrophysics 14 (2014) 1221 [1207.4541](https://arxiv.org/abs/1207.4541). D. Stern, R. Jimenez, L. Verde, M. Kamionkowski and S. A. Stanford, *Cosmic chronometers: constraining the equation of state of dark energy. 
I: H(z) measurements*, JCAP 2010 (2010) 008 [0907.3149](https://arxiv.org/abs/0907.3149). M. Moresco et al., *Improved constraints on the expansion rate of the Universe up to z$\sim$1.1 from the spectroscopic evolution of cosmic chronometers*, JCAP 2012 (2012) 006 [1201.3609](https://arxiv.org/abs/1201.3609). A. L. Ratsimbazafy, S. I. Loubser, S. M. Crawford, C. M. Cress, B. A. Bassett, R. C. Nichol et al., *Age-dating Luminous Red Galaxies observed with the Southern African Large Telescope*, Mon. Not. Roy. Astron. Soc. 467 (2017) 3239 [1702.00418](https://arxiv.org/abs/1702.00418). **baryon acoustic oscillations** C. Blake, S. Brough, M. Colless, C. Contreras, W. Couch, S. Croom et al., *The WiggleZ Dark Energy Survey: joint measurements of the expansion and growth history at z $<$ 1*, Mon. Not. Roy. Astron. Soc. 425 (2012) 405 [1204.3674](https://arxiv.org/abs/1204.3674). C.-H. Chuang et al., *The clustering of galaxies in the SDSS-III Baryon Oscillation Spectroscopic Survey: single-probe measurements and the strong power of normalized growth rate on constraining dark energy*, Mon. Not. Roy. Astron. Soc. 433 (2013) 3559 [1303.4486](https://arxiv.org/abs/1303.4486). BOSS collaboration, *Quasar-Lyman α Forest Cross-Correlation from BOSS DR11 : Baryon Acoustic Oscillations*, JCAP 05 (2014) 027 [1311.1767](https://arxiv.org/abs/1311.1767). BOSS collaboration, *Baryon acoustic oscillations in the Lyα forest of BOSS DR11 quasars*, Astron. Astrophys. 574 (2015) A59 [1404.1801](https://arxiv.org/abs/1404.1801). J. E. Bautista et al., *Measurement of baryon acoustic oscillation correlations at z = 2.3 with SDSS DR12 Lyα-Forests*, Astron. Astrophys. 603 (2017) A12 [1702.00176](https://arxiv.org/abs/1702.00176). **python packages** ``cobaya``: J. Torrado and A. Lewis, Cobaya: Code for Bayesian Analysis of hierarchical physical models (2020) [[2005.05290](https://arxiv.org/abs/2005.05290)]. ``getdist``: A. Lewis, GetDist: a Python package for analysing Monte Carlo samples (2019) [[1910.13970](https://arxiv.org/abs/1910.13970)]. ``numpy``: C. R. Harris et al., Array programming with NumPy, [Nature 585 (2020) 357–362](https://www.nature.com/articles/s41586-020-2649-2?fbclid=IwAR3qKNC7soKsJlgbF2YCeYQl90umdrcbM6hw8vnpaVvqQiaMdTeL2GZxUR0). ``scipy``: P. Virtanen et al., SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python, [Nature Methods 17 (2020) 261](https://www.nature.com/articles/s41592-019-0686-2). ``matplotlib``: J. D. Hunter, Matplotlib: A 2d graphics environment, [Computing in Science Engineering 9 (2007) 90](https://ieeexplore.ieee.org/document/4160265).
7a821dd6cff518e98e2de91090314c7f7a5958ed
433,108
ipynb
Jupyter Notebook
supp_ntbks_arxiv.2111.11761/tg_quant_sample.ipynb
reggiebernardo/notebooks
b54efe619e600679a5c84de689461e26cf1f82af
[ "MIT" ]
null
null
null
supp_ntbks_arxiv.2111.11761/tg_quant_sample.ipynb
reggiebernardo/notebooks
b54efe619e600679a5c84de689461e26cf1f82af
[ "MIT" ]
null
null
null
supp_ntbks_arxiv.2111.11761/tg_quant_sample.ipynb
reggiebernardo/notebooks
b54efe619e600679a5c84de689461e26cf1f82af
[ "MIT" ]
null
null
null
315.216885
84,084
0.915391
true
11,713
Qwen/Qwen-72B
1. YES 2. YES
0.7773
0.699254
0.54353
__label__eng_Latn
0.491746
0.101133
# Trying it with Julia's Symbolics

```julia
using Symbolics
```

```julia
include("./kinematics.jl")
using .Kinematics
```

    WARNING: replacing module Kinematics.
    WARNING: using Kinematics.locals in module Main conflicts with an existing identifier.

```julia
@variables l1_1, l1_2, l1_3, l2_1, l2_2, l2_3
@variables ξ1, ξ2
N = 2
Q = [l1_1, l1_2, l1_3, l2_1, l2_2, l2_3]
Ξ = [ξ1, ξ2]
```

\begin{equation}
\left[
\begin{array}{c}
{\xi}1 \\
{\xi}2 \\
\end{array}
\right]
\end{equation}

```julia
Ps = locals(N, Q, Ξ)
```

```julia
Z = [l1_1 l1_2; l2_1 l2_2]
```

\begin{equation}
\left[
\begin{array}{cc}
l1_{1} & l1_{2} \\
l2_{1} & l2_{2} \\
\end{array}
\right]
\end{equation}
95d396a70b6cb24f2946773fcce21497a12a49e5
6,379
ipynb
Jupyter Notebook
o/soft_robot/derivation_of_kinematics/jacobian_jl.ipynb
YoshimitsuMatsutaIe/ctrlab2021_soudan
7841c981e6804cc92d34715a00e7c3efce41d1d0
[ "MIT" ]
null
null
null
o/soft_robot/derivation_of_kinematics/jacobian_jl.ipynb
YoshimitsuMatsutaIe/ctrlab2021_soudan
7841c981e6804cc92d34715a00e7c3efce41d1d0
[ "MIT" ]
null
null
null
o/soft_robot/derivation_of_kinematics/jacobian_jl.ipynb
YoshimitsuMatsutaIe/ctrlab2021_soudan
7841c981e6804cc92d34715a00e7c3efce41d1d0
[ "MIT" ]
null
null
null
35.243094
613
0.591472
true
299
Qwen/Qwen-72B
1. YES 2. YES
0.884039
0.76908
0.679897
__label__eng_Latn
0.206058
0.41796
# Lecture 20: Classification of Astronomical Images with Deep Learning

#### This notebook was developed by [Zeljko Ivezic](http://faculty.washington.edu/ivezic/) for the 2021 data science class at the University of Sao Paulo and it is available from [github](https://github.com/ivezic/SaoPaulo2021/blob/main/notebooks/Lecture20.ipynb).

Note: this notebook contains code developed by Z. Ivezic, M. Juric, A. Connolly, B. Sippocz, Jake VanderPlas, G. Richards and many others.

<a id='toc'></a>

## This notebook includes:

[Introduction to Deep Learning](#intro)

[Example: Image classification with ResNet](#resnet)

## Introduction to Deep Learning <a id='intro'></a>

#### Originally developed by Andy Connolly for [astroML workshop at the 235th Meeting of the American Astronomical Society (January 6, 2020)](http://www.astroml.org/workshops/AAS235.html)

ResNet50 example based on https://github.com/priya-dwivedi/Deep-Learning/blob/master/resnet_keras/Residual_Networks_yourself.ipynb

We use a sample of simulated stamps with three balanced classes: a **star** (i.e. point spread function), a **moving source** (a.k.a. trailed source) and the so-called **"dipole"** source generated by subtracting two identical stars with somewhat mismatched positions (a fraction of a pixel, due to e.g., bad astrometry or proper motion). The code that generated these images is in the [makeSampleResNet.ipynb notebook](https://github.com/ivezic/SaoPaulo2021/blob/main/notebooks/makeSampleResNet.ipynb).

**Deep learning** is an extension of the neural networks that were popularized in the 1990s. The concepts are inspired by the structure and function of the brain. A neuron in the brain is a core computational unit that takes a series of inputs from branched extensions of the neuron called dendrites, operates on these inputs, and generates an output that is transmitted along an axon to one or more other neurons.

In the context of a neural network a neuron, $j$, takes a set of inputs, $x_i$, applies a (typically non-linear) function to these inputs and generates an output value. Networks are then created by connecting multiple neurons or layers of neurons to one another. If we consider a simplified network, inputs are passed to the neurons in the network. Each input is weighted by a value, $w_{ij}$, and the sum of these weighted inputs is operated on by a response or activation function $f(\theta)$, which transforms the input signal so that it varies between 0 and 1 through the application of a non-linear response.

The output from any neuron is then given by,

$$
a_j = f \left( \sum_i w_{ij} x_i + b_j \right)
$$

where $b_j$ is a bias term which determines the input level at which the neuron becomes activated.

We refer to the neurons between the inputs and the output layers as the hidden layers. If the neurons from one layer connect to all neurons in a subsequent layer we call this a fully connected layer. When the outputs from the neurons only connect to subsequent layers (i.e. the graph is acyclic) we refer to this as a feed-forward network -- this is the most common structure for a neural network used in classification. The final layer in the network is the output layer. As with the hidden layer, an activation function, $g(\theta)$, in the output layer acts on the weighted sum of its inputs. In this simple example we have a single output node but there can be multiple outputs.
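As a concrete illustration of the neuron output $a_j$ defined above, the short NumPy sketch below evaluates a single neuron on a small input vector; it is not part of the original lecture, and the weights, bias, and sigmoid activation are arbitrary choices for illustration only.

```python
import numpy as np

def sigmoid(theta):
    """A common activation function that squashes its argument into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-theta))

# three inputs x_i, with illustrative (made-up) weights w_ij and bias b_j
x = np.array([0.2, -1.0, 0.5])
w_j = np.array([0.4, 0.1, -0.7])   # weights feeding neuron j
b_j = 0.05                         # bias of neuron j

# a_j = f( sum_i w_ij x_i + b_j )
a_j = sigmoid(np.dot(w_j, x) + b_j)
print(a_j)                         # a single number between 0 and 1
```

Stacking many such neurons into layers, and feeding the outputs of one layer into the next, gives the full network output described next.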
For our example network the output from the final neuron, $y_k$, would be given by $$ y_k = g \left( \sum_j w_{jk} a_j + b_k \right) = g\left( \sum_j w_{jk} f \left( \sum_i w_{ij} x_i + b_j\right) + b_k\right) $$ **Training of the network is simply the learning of the weights and bias values** ## Neural Network Frameworks The development and release of open source deep learning libraries has made the use of deep neural networks accessible to a wide range of fields. Currently there are two common packages PyTorch (https://pytorch.org) and Tensorflow (https://www.tensorflow.org). Either code base can be utilized for the figures and problems in this book (and generally they have the same functionality). ### TensorFlow: Tensorflow is the more established code base with a large community and a large number of tutorials (https://www.tensorflow.org/tutorials) and online courses. Its functionality is more developed than PyTorch with tools to visualize and inspect a network (e.g., see TensorBoard). On the other hand, the learning curve for PyTorch is generally considered to be easier than that for Tensorflow with PyTorch having a more natural object oriented interface for people used to writing Python code. ### PyTorch: The primary difference between TensorFlow and PyTorch is that the networks (or graphs) that TensorFlow generates are static while the networks for PyTorch are dynamic (see TensorFlow Fold for dynamic graphs). This means that with PyTorch one can modify and adjust the network on-the-fly (e.g., making it easier to adjust for changes in the input dimensionality or number of input nodes within a network). This feature and the object-oriented design of PyTorch often results in fewer lines of code to achieve the same solution when compared to Tensorflow. ### Keras: Keras is a high-level API written on top of TensorFlow (and its precursor Theano). It is written in Python and provides a simple and intuitive interface when building neural networks. It is currently released as part of TensorFlow. **What should you choose?** Both frameworks are continuously evolving. The choice of deep learning library will likely come down to which one you find better fits your style of programming and learning. We will use Keras here as it has an intuitive implementation of the graphical or network models. ### Building a network: Let's start by defining what we need for the network. We will start with Keras and - create a sequential model (this means we add layers one-by-one as we see in our introductory figure) - add a dense (fully connected) layer with 30 neurons - **input_shape** describes the dimensionality of the _input data_ to this first hidden layer - **activation** describes the activation fuction for the neurons (in this case we will be using 'relu'; rectified linear unit) - add a second dense (fully connected) layer with 30 neurons - flatten the output of the second layer into a single vector so we can use ```categorical_crossentropy``` as we are assuming that our classes are "one-hot encoding" (i.e. [1,0] or [0,1] - add an output layer using "softmax" (this means the activation values for each class sum to 1 so they can be treated like probabilities) with 2 nodes (_for our example we could have used a single output_) ### Training the network Training a neural network is conceptually simple. Given a labelled set of data and a loss function, we need to optimize the weights and biases within the network by minimizing the loss. 
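The bullet list in the "Building a network" subsection above corresponds to a Keras model along the following lines. This is a minimal sketch rather than the model used later in this notebook; in particular, the `input_shape` (a 32x32 single-band stamp) and the `adam` optimizer are illustrative assumptions.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten

# sequential model: layers are added one by one
model = Sequential()
# first hidden layer: 30 fully connected neurons with relu activation
model.add(Dense(30, activation='relu', input_shape=(32, 32, 1)))
# second hidden layer: another 30 fully connected neurons
model.add(Dense(30, activation='relu'))
# flatten the layer output into a single vector before classification
model.add(Flatten())
# output layer: softmax over 2 classes (one-hot encoded labels)
model.add(Dense(2, activation='softmax'))

# categorical_crossentropy expects one-hot encoded class labels
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
```

Training then amounts to calling `model.fit(...)` on the labelled data, which is the optimization step discussed next.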
A solution for training large networks uses backpropagation to efficiently estimate the gradient of the loss function with respect to the weights and biases. **Mini-batch:** Optimization of the weights uses a standard gradient descent technique. If the loss function can be expressed in terms of a sum over subsets of the training data (e.g., as is the case for the L2 norm) the training can be undertaken either for the dataset as a whole, for subsets of the data (batch learning), or for individual entries (on-line or stochastic learning). _Batch gradient descent_ looks at all points in the data and calculates the average gradients before updating the weights in the model. _Stochastic gradient descent_ takes a single point and calculates the gradients and then updates the model (and then repeats). _Mini-batch gradient descent_ takes a subset of the training data and calculates the average gradients and updates the model (and then repeats over all mini-batches). ### Batch normalization Batch normalization scales the activations from a layer (the input data are assumed normalized) to have zero mean and unit variance. In reality, the two parameters gamma (for the standard deviation) and beta (for the mean) are learned by the network and the activations multiplied/added by these parameters. Batch normalization provides a degree of regularization and allows for faster learning rates as the outputs are constrained to 0-1 (i.e. you dont get large excursions in the weights of subsequent layers in a network that need to be reoptimized/trained). The normalization is applied to mini-batches of training data (as opposed to using the full training sample). ### Convolutional Networks Convolutional Neural Networks or CNNs are networks designed to work with images or with any regularly sampled dataset. CNNs reduce the complexity of the network by requiring that neurons only respond to inputs from a subset of an image (the receptive field). This mimics the operation of the visual cortex where neurons only respond to a small part of the field-of-view. There are four principal components to a CNN: - a convolutional layer, - a _non-linear activation function_ , - a pooling or downsampling operation, and - a _fully connected layer for classification_ Dependent on the complexity of the network or structure of the data, these components can occur singularly or chained together in multiple sequences. **Convolution** in a CNN refers to the convolution of the input data $I(x,y)$ with a kernel $K(x,y)$ which will produce a feature map $F(x,y)$ \begin{equation} F(x,y) = K(x,y) * I(x,y) = \sum_{x_0} \sum_{y_0} I(x-x_0, y-y_0) K(x_0, y_0). \end{equation} The kernel only responds to pixels within its receptive field (i.e., the size of the kernel), reducing the computational complexity of the resulting network. The kernels in the convolution are described by a depth (the number of kernels, $K$, applied to the image), and a stride (how many pixels a kernel shifts at each step in the convolution; typically one). Given an $N\times M$ image, the result of the convolution step is to transform a single image into a data cube of feature maps with a dimension $N \times M \times K$. Once **learned** the kernels within the convolutional layer can appear as physically intuitive operations on the images such as edge detection filters. As with traditional neural networks, a non-linear activation function is applied to the individual pixels in the resulting feature maps. 
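To make the feature-map equation above concrete, here is a small, self-contained NumPy sketch (an illustration only, not taken from any deep-learning framework) that convolves a toy image with a single 3x3 kernel and applies a relu activation to the result:

```python
import numpy as np

def convolve2d(image, kernel):
    """Direct implementation of F(x, y) = sum_{x0, y0} I(x - x0, y - y0) K(x0, y0)
    over the 'valid' region, i.e. without padding the input image."""
    kx, ky = kernel.shape
    nx = image.shape[0] - kx + 1
    ny = image.shape[1] - ky + 1
    feature_map = np.zeros((nx, ny))
    flipped = kernel[::-1, ::-1]          # convolution flips the kernel
    for i in range(nx):
        for j in range(ny):
            feature_map[i, j] = np.sum(image[i:i+kx, j:j+ky] * flipped)
    return feature_map

rng = np.random.default_rng(0)
image = rng.normal(size=(8, 8))           # a toy 8x8 "image"
kernel = np.array([[1, 0, -1],            # an edge-detection-like kernel
                   [2, 0, -2],
                   [1, 0, -1]], dtype=float)

feature_map = convolve2d(image, kernel)   # shape (6, 6)
activated = np.maximum(feature_map, 0.0)  # relu applied pixel by pixel
print(feature_map.shape, activated.min())
```

In a real CNN the kernel values are the parameters being learned, and frameworks typically zero-pad the input so that each feature map retains the $N \times M$ dimensions of the image.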
The **pooling** in the CNN downsamples or subsamples the feature maps. Pooling summarizes values within a region of interest (e.g., a 2x2 pixel window). The summary can be the average pixel value, but more commonly the maximum pixel value is preserved (Max Pooling) in the downsampling. This pooling of the feature maps reduces the size of the resulting network and makes the network less sensitive to small translations or distortions between images.

The final layer of a CNN is the classification layer, which maps the output of the CNN to a set of labels. This is typically a fully connected layer where each output of the final pooling layer connects to all neurons in the classification layer.

### The use of dropout layers

As we increase the complexity of the network we run into the issue of overfitting the data. The **dropout layer** at each training epoch randomly sets a neuron to 0 with a probability of 0.5. There is debate over whether the dropout layer should come before or after an activation layer, but a recommended rule of thumb is that it should come after the activation layer for activation functions other than relu.

### Interpreting networks: how many layers and how many neurons?

The number of layers, the number of neurons in a layer, and the connectivity of these layers are typically described as the network architecture. Defining a network architecture is more a matter of trial and error than of applying an underlying set of principles. For a starting point, however, there are relatively few problems that benefit significantly from more than two layers, and we recommend starting with a single layer when training an initial network and using cross-validation to determine when additional layers result in the data being overfit.

As with the number of layers, the number of neurons within a layer drives the computational cost (and requires progressively larger training sets to avoid overfitting of the data). There are many proposals for rules of thumb for defining a network architecture:

- the number of neurons should lie between the number of input and output nodes
- the number of neurons should be equal to the number of outputs plus 2/3 of the number of input nodes
- the number of neurons in the hidden layer should be less than twice the size of the input layers

## ResNet 50 Convolutional Neural Network <a id='resnet'></a>

[Go to top](#toc)

The Residual Network (ResNet) algorithm was proposed in the [He et al. (2015) paper](https://arxiv.org/abs/1512.03385).

**The ResNet idea** is simple but results in substantial improvements: instead of fitting some function F(x), where x is the vector of input parameters, fit instead G(x) = F(x) + x. In other words, the network fits the residual with respect to the identity function I(x) = x.

The ResNet50 example below is based on https://github.com/priya-dwivedi/Deep-Learning/blob/master/resnet_keras/Residual_Networks_yourself.ipynb

The 50 in ResNet50 refers to the implementation being about 50 layers deep.
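Before the full ResNet50 implementation below, the following minimal sketch illustrates the residual idea G(x) = F(x) + x as a Keras layer graph. It is an illustration only, with arbitrary filter counts and kernel sizes, and is simpler than the `identity_block` defined later in this notebook.

```python
from tensorflow.keras.layers import Input, Conv2D, BatchNormalization, Activation, Add
from tensorflow.keras.models import Model

# a toy residual (identity shortcut) block: G(x) = F(x) + x
x_in = Input(shape=(32, 32, 16))                 # illustrative input tensor

# F(x): two small convolutions with batch normalization
y = Conv2D(16, (3, 3), padding='same')(x_in)
y = BatchNormalization()(y)
y = Activation('relu')(y)
y = Conv2D(16, (3, 3), padding='same')(y)
y = BatchNormalization()(y)

# the shortcut: add the (unchanged) input back before the final activation
y = Add()([y, x_in])
y = Activation('relu')(y)

block = Model(inputs=x_in, outputs=y, name='toy_residual_block')
block.summary()
```

The `identity_block` and `convolutional_block` defined in the cells below follow the same pattern, with three convolutions per block and an extra convolution on the shortcut path whenever the tensor shape changes.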
### Run ResNet50 to classify stamps ### (stamps were produced with the code in makeSampleResNet.ipynb) ```python %matplotlib inline import os import numpy as np import math import matplotlib import matplotlib.pyplot as plt from scipy.stats.distributions import rv_continuous from astropy.io import fits plt.rc("lines", linewidth=1) plt.rc("figure", dpi=170) np.set_printoptions(precision=3) ``` ```python # data file with images (stamps), produced with the code in makeSampleResNet.ipynb npyFile = 'data/stamps4ResNet.npy' # SNRmin = 50 with open(npyFile, 'rb') as f: data = np.load(f,allow_pickle=True) Nstamps = data.shape[0] print('read', Nstamps, 'stamps from', npyFile) ``` read 3000 stamps from data/stamps4ResNet.npy ### Setup ResNet50 ```python import tensorflow as tf from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Conv2D, Flatten, Activation from tensorflow.keras.utils import to_categorical ``` ```python # Based on https://github.com/priya-dwivedi/Deep-Learning/blob/master/resnet_keras/Residual_Networks_yourself.ipynb import numpy as np import tensorflow as tf from tensorflow.keras.layers import Input, Add, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D, AveragePooling2D, MaxPooling2D, GlobalMaxPooling2D from tensorflow.keras.initializers import glorot_uniform from tensorflow.keras.models import Model, load_model from tensorflow.keras.utils import to_categorical import tensorflow.keras.backend as K K.set_image_data_format('channels_last') def identity_block(X, f, filters, stage, block): """ Implementation of the identity block Arguments: X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev) f -- integer, specifying the shape of the middle CONV's window for the main path filters -- python list of integers, defining the number of filters in the CONV layers of the main path stage -- integer, used to name the layers, depending on their position in the network block -- string/character, used to name the layers, depending on their position in the network Returns: X -- output of the identity block, tensor of shape (n_H, n_W, n_C) """ # defining name basis conv_name_base = 'res' + str(stage) + block + '_branch' bn_name_base = 'bn' + str(stage) + block + '_branch' # Retrieve Filters F1, F2, F3 = filters # Save the input value. You'll need this later to add back to the main path. 
X_shortcut = X # First component of main path X = Conv2D(filters = F1, kernel_size = (1, 1), strides = (1,1), padding = 'valid', name = conv_name_base + '2a', kernel_initializer = glorot_uniform(seed=0))(X) X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X) X = Activation('relu')(X) # Second component of main path (≈3 lines) X = Conv2D(filters = F2, kernel_size = (f, f), strides = (1,1), padding = 'same', name = conv_name_base + '2b', kernel_initializer = glorot_uniform(seed=0))(X) X = BatchNormalization(axis = 3, name = bn_name_base + '2b')(X) X = Activation('relu')(X) # Third component of main path (≈2 lines) X = Conv2D(filters = F3, kernel_size = (1, 1), strides = (1,1), padding = 'valid', name = conv_name_base + '2c', kernel_initializer = glorot_uniform(seed=0))(X) X = BatchNormalization(axis = 3, name = bn_name_base + '2c')(X) # Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines) X = Add()([X, X_shortcut]) X = Activation('relu')(X) return X def convolutional_block(X, f, filters, stage, block, s = 2): """ Implementation of the convolutional block Arguments: X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev) f -- integer, specifying the shape of the middle CONV's window for the main path filters -- python list of integers, defining the number of filters in the CONV layers of the main path stage -- integer, used to name the layers, depending on their position in the network block -- string/character, used to name the layers, depending on their position in the network s -- Integer, specifying the stride to be used Returns: X -- output of the convolutional block, tensor of shape (n_H, n_W, n_C) """ # defining name basis conv_name_base = 'res' + str(stage) + block + '_branch' bn_name_base = 'bn' + str(stage) + block + '_branch' # Retrieve Filters F1, F2, F3 = filters # Save the input value X_shortcut = X ##### MAIN PATH ##### # First component of main path X = Conv2D(F1, (1, 1), strides = (s,s), name = conv_name_base + '2a', kernel_initializer = glorot_uniform(seed=0))(X) X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X) X = Activation('relu')(X) # Second component of main path (≈3 lines) X = Conv2D(filters = F2, kernel_size = (f, f), strides = (1,1), padding = 'same', name = conv_name_base + '2b', kernel_initializer = glorot_uniform(seed=0))(X) X = BatchNormalization(axis = 3, name = bn_name_base + '2b')(X) X = Activation('relu')(X) # Third component of main path (≈2 lines) X = Conv2D(filters = F3, kernel_size = (1, 1), strides = (1,1), padding = 'valid', name = conv_name_base + '2c', kernel_initializer = glorot_uniform(seed=0))(X) X = BatchNormalization(axis = 3, name = bn_name_base + '2c')(X) ##### SHORTCUT PATH #### (≈2 lines) X_shortcut = Conv2D(filters = F3, kernel_size = (1, 1), strides = (s,s), padding = 'valid', name = conv_name_base + '1', kernel_initializer = glorot_uniform(seed=0))(X_shortcut) X_shortcut = BatchNormalization(axis = 3, name = bn_name_base + '1')(X_shortcut) # Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines) X = Add()([X, X_shortcut]) X = Activation('relu')(X) return X def ResNet50(input_shape=(21, 21, 1), classes=2): """ Implementation of the popular ResNet50 the following architecture: CONV2D -> BATCHNORM -> RELU -> MAXPOOL -> CONVBLOCK -> IDBLOCK*2 -> CONVBLOCK -> IDBLOCK*3 -> CONVBLOCK -> IDBLOCK*5 -> CONVBLOCK -> IDBLOCK*2 -> AVGPOOL -> TOPLAYER Arguments: input_shape -- shape of the images of the dataset classes -- integer, number of 
classes Returns: model -- a Model() instance in Keras """ # Define the input as a tensor with shape input_shape X_input = Input(input_shape) # Zero-Padding X = ZeroPadding2D((3, 3))(X_input) # Stage 1 X = Conv2D(64, (7, 7), strides=(2, 2), name='conv1', kernel_initializer=glorot_uniform(seed=0))(X) X = BatchNormalization(axis=3, name='bn_conv1')(X) X = Activation('relu')(X) X = MaxPooling2D((3, 3), strides=(2, 2))(X) # Stage 2 X = convolutional_block(X, f=3, filters=[64, 64, 256], stage=2, block='a', s=1) X = identity_block(X, 3, [64, 64, 256], stage=2, block='b') X = identity_block(X, 3, [64, 64, 256], stage=2, block='c') # Stage 3 X = convolutional_block(X, f = 3, filters = [128, 128, 512], stage = 3, block='a', s = 2) X = identity_block(X, 3, [128, 128, 512], stage=3, block='b') X = identity_block(X, 3, [128, 128, 512], stage=3, block='c') X = identity_block(X, 3, [128, 128, 512], stage=3, block='d') # Stage 4 X = convolutional_block(X, f = 3, filters = [256, 256, 1024], stage = 4, block='a', s = 2) X = identity_block(X, 3, [256, 256, 1024], stage=4, block='b') X = identity_block(X, 3, [256, 256, 1024], stage=4, block='c') X = identity_block(X, 3, [256, 256, 1024], stage=4, block='d') X = identity_block(X, 3, [256, 256, 1024], stage=4, block='e') X = identity_block(X, 3, [256, 256, 1024], stage=4, block='f') # Stage 5 X = convolutional_block(X, f = 3, filters = [512, 512, 2048], stage = 5, block='a', s = 2) X = identity_block(X, 3, [512, 512, 2048], stage=5, block='b') X = identity_block(X, 3, [512, 512, 2048], stage=5, block='c') # output layer X = Flatten()(X) X = Dense(classes, activation='softmax', name='fc' + str(classes), kernel_initializer = glorot_uniform(seed=0))(X) # Create model model = Model(inputs = X_input, outputs = X, name='ResNet50') return model ``` ### A few helper routines: These functions normalize_image, plot_image_array, plot_confusion_matrix, plot_model_history will be used to visualize the data and the outputs of the neural networks as a function of the type and complexity of the network ```python def normalize_image(image): '''Rescale the constrast in an image based on the noise (used for displays and the CNN)''' sigmaG_coeff = 0.7413 # turns (q75-q25) difference into standard deviation in case of Gaussian per25,per50,per75 = np.percentile(image,[25,50,75]) sigmaG = sigmaG_coeff * (per75 - per25) # sigma clip image, remove background, and normalize to unity image[image<(per50-2*sigmaG)] = per50-2*sigmaG image -= np.min(image) image /= np.sum(image) return image def reshape_arrays(data, labels): '''reshape arrays for Keras''' data = data.reshape(-1,32, 32, 1) labels = to_categorical(labels) return data,labels ``` ```python from sklearn.metrics import confusion_matrix from sklearn.utils.multiclass import unique_labels def plot_confusion_matrix(y_true, y_pred, normalize=False, title=None, cmap=plt.cm.Blues): """ From scikit-learn: plots a confusion matrix. Normalization can be applied by setting `normalize=True`. """ if not title: if normalize: title = 'Normalized confusion matrix' else: title = 'Confusion matrix, without normalization' # Compute confusion matrix cm = confusion_matrix(y_true, y_pred) if normalize: cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] fig, ax = plt.subplots() im = ax.imshow(cm, interpolation='nearest', cmap=cmap) ax.figure.colorbar(im, ax=ax) ax.set(xticks=np.arange(cm.shape[1]), yticks=np.arange(cm.shape[0]), # ... 
and label them with the respective list entries title=title, ylabel='True label', xlabel='Predicted label') # Rotate the tick labels and set their alignment. plt.setp(ax.get_xticklabels(), rotation=45, ha="right", rotation_mode="anchor") #fixes "squishing of plot" plt.ylim([2.5, -0.5]) # Loop over data dimensions and create text annotations. fmt = '.2f' if normalize else 'd' thresh = cm.max() / 2. for i in range(cm.shape[0]): for j in range(cm.shape[1]): ax.text(j, i, format(cm[i, j], fmt), ha="center", va="center", color="white" if cm[i, j] > thresh else "black") fig.tight_layout() return def plot_model_history(history): '''Plot the training and validation history for a TensorFlow network''' # Extract loss and accuracy loss = history.history['loss'] val_loss = history.history['val_loss'] acc = history.history['accuracy'] val_acc = history.history['val_accuracy'] fig, ax = plt.subplots(nrows=1,ncols=2, figsize=(10,5)) ax[0].plot(np.arange(n_epochs), loss, label='Training Loss') ax[0].plot(np.arange(n_epochs), val_loss, label='Validation Loss') ax[0].set_title('Loss Curves') ax[0].legend() ax[0].set_xlabel('Epoch') ax[0].set_ylabel('Loss') ax[1].plot(np.arange(n_epochs), acc, label='Training Accuracy') ax[1].plot(np.arange(n_epochs), val_acc, label='Validation Accuracy') ax[1].set_title('Accuracy Curves') ax[1].legend() ax[1].set_xlabel('Epoch') ax[1].set_ylabel('Accuracy') def plot_image_array(images, nrows=2, ncols=5, figsize=[8,4], nx=32, ny=32, title='', subtitle=False, class_true=None, classes=None): '''Plot an array of images''' Nimages = images.shape[0] fig, ax = plt.subplots(nrows=nrows,ncols=ncols,figsize=figsize, squeeze=False) fig.subplots_adjust(hspace=0, left=0.07, right=0.95, wspace=0.05, bottom=0.15) Ndisplay = nrows*ncols if Nimages < Ndisplay: Ndisplay = Nimages for indx in np.arange(Ndisplay): i = int(indx/ncols) j = indx%ncols if (i == 0): ax[i][j].xaxis.set_major_formatter(plt.NullFormatter()) if (j != 0): ax[i][j].yaxis.set_major_formatter(plt.NullFormatter()) ax[i][j].imshow(images[indx].reshape(nx,ny), cmap='gray') if (subtitle == True): title = 'True Class: %i, Predicted Class: %i\n p0: %e\n p1: %e\n p2 %e' pT = np.argmax(class_true[indx]) pP = np.argmax(classes[indx]) ax[i][j].set_title(title % (pT, pP, classes[indx,0], classes[indx,1], classes[indx,2])) ax[0][0].set_ylabel('$y$') ax[nrows-1][int(ncols/2)].set_xlabel('$x$') ``` ## Classify stamps ### first, we need to renormalize stamps to the same background noise ```python ## renormalize stamps stamps0all = [] stamps1all = [] stamps2all = [] for i in range(0,1000): stamps0all.append(normalize_image(data[i,:,:,0])) stamps1all.append(normalize_image(data[1000+i,:,:,0])) stamps2all.append(normalize_image(data[2000+i,:,:,0])) ``` ```python # assumes that class 1 comes after class 0, and class 2 after class 1 (ZI: ugly hack, VOLATILE!) 
input_stamps = np.vstack([stamps0all, stamps1all, stamps2all]) stamp_class = np.zeros(len(stamps0all) + len(stamps1all) + len(stamps2all)) stamp_class[len(stamps0all):] = 1 # stamp_class[len(stamps0all)+len(stamps1all):] = 2 # ``` ### split the sample into training, validation and test data sets ```python from astroML.utils import split_samples # split the samples into training, validation and test data sets # for definitions, see https://machinelearningmastery.com/difference-test-validation-datasets/ (data_train, data_val, data_test), (class_train, class_val, class_test) = split_samples(input_stamps, stamp_class, [0.7,0.1,0.2]) data_train, class_train = reshape_arrays(data_train, class_train) data_val, class_val = reshape_arrays(data_val, class_val) data_test, class_test = reshape_arrays(data_test, class_test) print ('Number of samples in the training ({}); test ({}); and validation ({}) data sets'.format(data_train.shape[0], data_test.shape[0], data_val.shape[0])) ``` Number of samples in the training (2100); test (600); and validation (300) data sets ```python # plot a few stamps to see what we are trying to classify plot_image_array(data_test, figsize=[16,10], subtitle=True, classes=class_test, class_true=class_test) ``` ### and now run CNN (ResNet50): very slow, about 3 hours for 100 epochs! (runtime scales with n_epochs; sometimes only 50 is sufficient) ```python resnet50_model = ResNet50(input_shape=(32, 32, 1), classes=3) resnet50_model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy']) n_epochs=100 %time resnet_model_history = resnet50_model.fit(data_train, class_train, epochs=n_epochs, batch_size=256, verbose=1, validation_data=(data_val, class_val), shuffle=True) ``` Epoch 1/100 9/9 [==============================] - 170s 17s/step - loss: 2.0521 - accuracy: 0.6005 - val_loss: 1.1044 - val_accuracy: 0.3300 Epoch 2/100 9/9 [==============================] - 136s 14s/step - loss: 2.1468 - accuracy: 0.6681 - val_loss: 1.1223 - val_accuracy: 0.3300 Epoch 3/100 9/9 [==============================] - 135s 15s/step - loss: 1.0639 - accuracy: 0.7900 - val_loss: 1.1115 - val_accuracy: 0.3300 Epoch 4/100 9/9 [==============================] - 156s 17s/step - loss: 0.5806 - accuracy: 0.9043 - val_loss: 1.1367 - val_accuracy: 0.3300 Epoch 5/100 9/9 [==============================] - 152s 17s/step - loss: 0.7645 - accuracy: 0.8838 - val_loss: 1.1515 - val_accuracy: 0.3300 Epoch 6/100 9/9 [==============================] - 153s 17s/step - loss: 0.1984 - accuracy: 0.9276 - val_loss: 1.1779 - val_accuracy: 0.3300 Epoch 7/100 9/9 [==============================] - 119s 13s/step - loss: 0.2828 - accuracy: 0.9433 - val_loss: 1.2154 - val_accuracy: 0.3300 Epoch 8/100 9/9 [==============================] - 112s 12s/step - loss: 0.3430 - accuracy: 0.9443 - val_loss: 1.2295 - val_accuracy: 0.3300 Epoch 9/100 9/9 [==============================] - 110s 12s/step - loss: 0.1141 - accuracy: 0.9557 - val_loss: 1.2861 - val_accuracy: 0.3300 Epoch 10/100 9/9 [==============================] - 103s 11s/step - loss: 0.0824 - accuracy: 0.9705 - val_loss: 1.2860 - val_accuracy: 0.3300 Epoch 11/100 9/9 [==============================] - 99s 11s/step - loss: 0.2226 - accuracy: 0.9514 - val_loss: 1.2220 - val_accuracy: 0.3300 Epoch 12/100 9/9 [==============================] - 101s 11s/step - loss: 0.4407 - accuracy: 0.9000 - val_loss: 1.4702 - val_accuracy: 0.3300 Epoch 13/100 9/9 [==============================] - 101s 11s/step - loss: 0.2069 - accuracy: 0.9581 - val_loss: 1.5200 - 
val_accuracy: 0.3300 Epoch 14/100 9/9 [==============================] - 99s 11s/step - loss: 0.1463 - accuracy: 0.9733 - val_loss: 1.6042 - val_accuracy: 0.3300 Epoch 15/100 9/9 [==============================] - 98s 11s/step - loss: 0.0631 - accuracy: 0.9738 - val_loss: 1.6862 - val_accuracy: 0.3300 Epoch 16/100 9/9 [==============================] - 95s 10s/step - loss: 0.0745 - accuracy: 0.9771 - val_loss: 1.7985 - val_accuracy: 0.3300 Epoch 17/100 9/9 [==============================] - 97s 10s/step - loss: 0.1278 - accuracy: 0.9705 - val_loss: 1.8924 - val_accuracy: 0.3300 Epoch 18/100 9/9 [==============================] - 106s 12s/step - loss: 0.0932 - accuracy: 0.9695 - val_loss: 2.0520 - val_accuracy: 0.3300 Epoch 19/100 9/9 [==============================] - 95s 10s/step - loss: 0.1548 - accuracy: 0.9676 - val_loss: 2.0014 - val_accuracy: 0.3300 Epoch 20/100 9/9 [==============================] - 110s 12s/step - loss: 0.1319 - accuracy: 0.9767 - val_loss: 2.1734 - val_accuracy: 0.3300 Epoch 21/100 9/9 [==============================] - 98s 11s/step - loss: 0.0617 - accuracy: 0.9810 - val_loss: 2.3139 - val_accuracy: 0.3300 Epoch 22/100 9/9 [==============================] - 95s 10s/step - loss: 0.0785 - accuracy: 0.9857 - val_loss: 2.4558 - val_accuracy: 0.3300 Epoch 23/100 9/9 [==============================] - 95s 10s/step - loss: 0.1043 - accuracy: 0.9790 - val_loss: 2.6288 - val_accuracy: 0.3300 Epoch 24/100 9/9 [==============================] - 99s 11s/step - loss: 0.0797 - accuracy: 0.9771 - val_loss: 2.7636 - val_accuracy: 0.3300 Epoch 25/100 9/9 [==============================] - 96s 11s/step - loss: 0.2535 - accuracy: 0.9724 - val_loss: 2.9559 - val_accuracy: 0.3300 Epoch 26/100 9/9 [==============================] - 99s 11s/step - loss: 0.1877 - accuracy: 0.9629 - val_loss: 3.1571 - val_accuracy: 0.3300 Epoch 27/100 9/9 [==============================] - 100s 11s/step - loss: 0.1824 - accuracy: 0.9714 - val_loss: 3.5310 - val_accuracy: 0.3300 Epoch 28/100 9/9 [==============================] - 102s 11s/step - loss: 0.2138 - accuracy: 0.9581 - val_loss: 3.5913 - val_accuracy: 0.3300 Epoch 29/100 9/9 [==============================] - 94s 10s/step - loss: 0.0559 - accuracy: 0.9795 - val_loss: 3.8520 - val_accuracy: 0.3300 Epoch 30/100 9/9 [==============================] - 94s 10s/step - loss: 0.1634 - accuracy: 0.9681 - val_loss: 4.0405 - val_accuracy: 0.3300 Epoch 31/100 9/9 [==============================] - 98s 11s/step - loss: 0.0445 - accuracy: 0.9833 - val_loss: 4.2531 - val_accuracy: 0.3300 Epoch 32/100 9/9 [==============================] - 95s 10s/step - loss: 0.0481 - accuracy: 0.9771 - val_loss: 4.2291 - val_accuracy: 0.3300 Epoch 33/100 9/9 [==============================] - 94s 10s/step - loss: 0.0662 - accuracy: 0.9800 - val_loss: 4.6897 - val_accuracy: 0.3300 Epoch 34/100 9/9 [==============================] - 98s 11s/step - loss: 0.0363 - accuracy: 0.9876 - val_loss: 4.6900 - val_accuracy: 0.3300 Epoch 35/100 9/9 [==============================] - 96s 11s/step - loss: 0.0388 - accuracy: 0.9857 - val_loss: 4.8374 - val_accuracy: 0.3300 Epoch 36/100 9/9 [==============================] - 96s 11s/step - loss: 0.0136 - accuracy: 0.9962 - val_loss: 5.0035 - val_accuracy: 0.3300 Epoch 37/100 9/9 [==============================] - 96s 11s/step - loss: 0.0138 - accuracy: 0.9957 - val_loss: 5.1499 - val_accuracy: 0.3300 Epoch 38/100 9/9 [==============================] - 94s 10s/step - loss: 0.0141 - accuracy: 0.9967 - val_loss: 5.3833 - val_accuracy: 0.3300 
Epoch 39/100 9/9 [==============================] - 96s 11s/step - loss: 0.0328 - accuracy: 0.9886 - val_loss: 5.5023 - val_accuracy: 0.3300 Epoch 40/100 9/9 [==============================] - 95s 11s/step - loss: 0.0710 - accuracy: 0.9876 - val_loss: 5.5498 - val_accuracy: 0.3300 Epoch 41/100 9/9 [==============================] - 101s 11s/step - loss: 0.0156 - accuracy: 0.9962 - val_loss: 5.7122 - val_accuracy: 0.3300 Epoch 42/100 9/9 [==============================] - 98s 11s/step - loss: 0.0162 - accuracy: 0.9943 - val_loss: 5.8017 - val_accuracy: 0.3300 Epoch 43/100 9/9 [==============================] - 96s 11s/step - loss: 0.1213 - accuracy: 0.9914 - val_loss: 5.8983 - val_accuracy: 0.3300 Epoch 44/100 9/9 [==============================] - 94s 10s/step - loss: 0.0232 - accuracy: 0.9938 - val_loss: 5.9411 - val_accuracy: 0.3300 Epoch 45/100 9/9 [==============================] - 96s 11s/step - loss: 0.0059 - accuracy: 0.9986 - val_loss: 5.9792 - val_accuracy: 0.3300 Epoch 46/100 9/9 [==============================] - 101s 11s/step - loss: 0.0172 - accuracy: 0.9933 - val_loss: 6.0615 - val_accuracy: 0.3300 Epoch 47/100 9/9 [==============================] - 97s 11s/step - loss: 0.0398 - accuracy: 0.9862 - val_loss: 6.0743 - val_accuracy: 0.3333 Epoch 48/100 9/9 [==============================] - 97s 11s/step - loss: 0.0374 - accuracy: 0.9905 - val_loss: 5.9783 - val_accuracy: 0.3367 Epoch 49/100 9/9 [==============================] - 96s 11s/step - loss: 0.0072 - accuracy: 0.9990 - val_loss: 5.9701 - val_accuracy: 0.3400 Epoch 50/100 9/9 [==============================] - 102s 11s/step - loss: 0.0537 - accuracy: 0.9967 - val_loss: 6.0070 - val_accuracy: 0.3467 Epoch 51/100 9/9 [==============================] - 98s 11s/step - loss: 0.0294 - accuracy: 0.9962 - val_loss: 5.9993 - val_accuracy: 0.3467 Epoch 52/100 9/9 [==============================] - 94s 10s/step - loss: 0.0099 - accuracy: 0.9962 - val_loss: 5.9744 - val_accuracy: 0.3500 Epoch 53/100 9/9 [==============================] - 94s 10s/step - loss: 0.0377 - accuracy: 0.9933 - val_loss: 6.0159 - val_accuracy: 0.3567 Epoch 54/100 9/9 [==============================] - 96s 11s/step - loss: 0.0360 - accuracy: 0.9962 - val_loss: 5.9142 - val_accuracy: 0.3667 Epoch 55/100 9/9 [==============================] - 97s 11s/step - loss: 0.1258 - accuracy: 0.9686 - val_loss: 5.9231 - val_accuracy: 0.3667 Epoch 56/100 9/9 [==============================] - 95s 10s/step - loss: 0.0525 - accuracy: 0.9810 - val_loss: 5.9170 - val_accuracy: 0.3633 Epoch 57/100 9/9 [==============================] - 97s 10s/step - loss: 0.0385 - accuracy: 0.9895 - val_loss: 5.5553 - val_accuracy: 0.3900 Epoch 58/100 9/9 [==============================] - 101s 11s/step - loss: 0.0106 - accuracy: 0.9962 - val_loss: 5.4114 - val_accuracy: 0.3833 Epoch 59/100 9/9 [==============================] - 98s 11s/step - loss: 0.0246 - accuracy: 0.9948 - val_loss: 5.0994 - val_accuracy: 0.4033 Epoch 60/100 9/9 [==============================] - 97s 11s/step - loss: 0.2294 - accuracy: 0.9905 - val_loss: 4.9631 - val_accuracy: 0.4400 Epoch 61/100 9/9 [==============================] - 98s 11s/step - loss: 0.0112 - accuracy: 0.9971 - val_loss: 4.7554 - val_accuracy: 0.4333 Epoch 62/100 9/9 [==============================] - 117s 13s/step - loss: 0.0052 - accuracy: 0.9990 - val_loss: 4.4414 - val_accuracy: 0.4567 Epoch 63/100 9/9 [==============================] - 128s 14s/step - loss: 0.0043 - accuracy: 0.9990 - val_loss: 4.4075 - val_accuracy: 0.4367 Epoch 64/100 9/9 
[==============================] - 129s 14s/step - loss: 0.0063 - accuracy: 0.9986 - val_loss: 4.0454 - val_accuracy: 0.4900 Epoch 65/100 9/9 [==============================] - 134s 15s/step - loss: 0.0036 - accuracy: 0.9990 - val_loss: 4.2817 - val_accuracy: 0.4533 Epoch 66/100 9/9 [==============================] - 141s 16s/step - loss: 0.0441 - accuracy: 0.9862 - val_loss: 3.6922 - val_accuracy: 0.4900 Epoch 67/100 9/9 [==============================] - 148s 17s/step - loss: 0.0113 - accuracy: 0.9962 - val_loss: 3.3507 - val_accuracy: 0.5133 Epoch 68/100 9/9 [==============================] - 153s 16s/step - loss: 0.0050 - accuracy: 0.9981 - val_loss: 2.9237 - val_accuracy: 0.5367 Epoch 69/100 9/9 [==============================] - 161s 17s/step - loss: 0.0023 - accuracy: 1.0000 - val_loss: 2.6702 - val_accuracy: 0.5600 Epoch 70/100 9/9 [==============================] - 139s 15s/step - loss: 0.0051 - accuracy: 0.9990 - val_loss: 2.3772 - val_accuracy: 0.5867 Epoch 71/100 9/9 [==============================] - 162s 18s/step - loss: 0.0016 - accuracy: 0.9995 - val_loss: 2.0295 - val_accuracy: 0.6500 Epoch 72/100 9/9 [==============================] - 172s 19s/step - loss: 0.0082 - accuracy: 0.9967 - val_loss: 2.1949 - val_accuracy: 0.6333 Epoch 73/100 9/9 [==============================] - 164s 18s/step - loss: 0.0696 - accuracy: 0.9781 - val_loss: 1.1657 - val_accuracy: 0.7567 Epoch 74/100 9/9 [==============================] - 146s 16s/step - loss: 0.0096 - accuracy: 0.9957 - val_loss: 1.2180 - val_accuracy: 0.7733 Epoch 75/100 9/9 [==============================] - 161s 18s/step - loss: 0.0112 - accuracy: 0.9957 - val_loss: 0.8892 - val_accuracy: 0.8100 Epoch 76/100 9/9 [==============================] - 159s 17s/step - loss: 0.0034 - accuracy: 0.9981 - val_loss: 1.0254 - val_accuracy: 0.7833 Epoch 77/100 9/9 [==============================] - 165s 18s/step - loss: 0.0035 - accuracy: 0.9990 - val_loss: 0.6477 - val_accuracy: 0.8667 Epoch 78/100 9/9 [==============================] - 186s 21s/step - loss: 0.0019 - accuracy: 0.9995 - val_loss: 0.6268 - val_accuracy: 0.8633 Epoch 79/100 9/9 [==============================] - 218s 24s/step - loss: 0.0013 - accuracy: 1.0000 - val_loss: 0.5048 - val_accuracy: 0.8867 Epoch 80/100 9/9 [==============================] - 186s 20s/step - loss: 7.6155e-04 - accuracy: 1.0000 - val_loss: 0.4435 - val_accuracy: 0.8967 Epoch 81/100 9/9 [==============================] - 190s 21s/step - loss: 4.4687e-04 - accuracy: 1.0000 - val_loss: 0.3868 - val_accuracy: 0.9067 Epoch 82/100 9/9 [==============================] - 189s 21s/step - loss: 5.6939e-04 - accuracy: 1.0000 - val_loss: 0.3277 - val_accuracy: 0.9233 Epoch 83/100 9/9 [==============================] - 237s 26s/step - loss: 0.0928 - accuracy: 0.9905 - val_loss: 0.4965 - val_accuracy: 0.8833 Epoch 84/100 9/9 [==============================] - 220s 24s/step - loss: 0.0017 - accuracy: 1.0000 - val_loss: 0.3732 - val_accuracy: 0.9067 Epoch 85/100 9/9 [==============================] - 181s 20s/step - loss: 0.0743 - accuracy: 0.9933 - val_loss: 0.3638 - val_accuracy: 0.9200 Epoch 86/100 9/9 [==============================] - 217s 24s/step - loss: 0.0835 - accuracy: 0.9976 - val_loss: 0.3664 - val_accuracy: 0.9100 Epoch 87/100 9/9 [==============================] - 177s 19s/step - loss: 0.0014 - accuracy: 1.0000 - val_loss: 0.3600 - val_accuracy: 0.9100 Epoch 88/100 9/9 [==============================] - 132s 14s/step - loss: 0.0069 - accuracy: 0.9981 - val_loss: 0.2826 - val_accuracy: 0.9400 Epoch 
89/100 9/9 [==============================] - 119s 13s/step - loss: 0.0255 - accuracy: 0.9976 - val_loss: 2.9415 - val_accuracy: 0.4533 Epoch 90/100 9/9 [==============================] - 109s 12s/step - loss: 0.2233 - accuracy: 0.9724 - val_loss: 2.7223 - val_accuracy: 0.4067 Epoch 91/100 9/9 [==============================] - 106s 12s/step - loss: 0.1282 - accuracy: 0.9862 - val_loss: 2.3811 - val_accuracy: 0.4767 Epoch 92/100 9/9 [==============================] - 106s 12s/step - loss: 0.0155 - accuracy: 0.9933 - val_loss: 2.3025 - val_accuracy: 0.4667 Epoch 93/100 9/9 [==============================] - 103s 11s/step - loss: 0.0031 - accuracy: 1.0000 - val_loss: 1.2310 - val_accuracy: 0.6567 Epoch 94/100 9/9 [==============================] - 101s 11s/step - loss: 0.0192 - accuracy: 0.9981 - val_loss: 0.8055 - val_accuracy: 0.7333 Epoch 95/100 9/9 [==============================] - 101s 11s/step - loss: 0.0023 - accuracy: 0.9995 - val_loss: 0.7652 - val_accuracy: 0.7433 Epoch 96/100 9/9 [==============================] - 100s 11s/step - loss: 0.0028 - accuracy: 0.9990 - val_loss: 0.1776 - val_accuracy: 0.9433 Epoch 97/100 9/9 [==============================] - 101s 11s/step - loss: 0.0455 - accuracy: 0.9895 - val_loss: 2.0419 - val_accuracy: 0.6967 Epoch 98/100 9/9 [==============================] - 99s 11s/step - loss: 0.0714 - accuracy: 0.9914 - val_loss: 1.9221 - val_accuracy: 0.8667 Epoch 99/100 9/9 [==============================] - 101s 11s/step - loss: 0.1337 - accuracy: 0.9900 - val_loss: 0.6657 - val_accuracy: 0.7333 Epoch 100/100 9/9 [==============================] - 101s 11s/step - loss: 0.0896 - accuracy: 0.9824 - val_loss: 0.4118 - val_accuracy: 0.9233 CPU times: user 5h 52min 45s, sys: 38min 28s, total: 6h 31min 13s Wall time: 3h 21min 20s ## analyze classification results ```python # plot the training history plot_model_history(resnet_model_history) ``` ### Note: the training accuracy is essentially 100% after only 30-40 epochs; however, the validation accuracy varies wildly ## now we will use the test subsample which was never used in training to obtain another estimate of performance ```python # predicted class (this will be (600, 3) array - we actually get probabilities for each class) Rclasses = resnet50_model.predict(data_test) ``` ```python # plot example classifications plot_image_array(data_test, figsize=[16,10], subtitle=True, classes=Rclasses, class_true=class_test) ``` ```python # plot the confusion matrix plot_confusion_matrix(np.argmax(class_test,axis=1), np.argmax(Rclasses,axis=1), normalize=True, title='Normalized confusion matrix') ``` ## that is fairly decent performance! 
(though note that it was SNR>20 dataset) ### let's have a closer look at misclassified stamps: ```python # note that here we adopt the class with the highest probability as the highest class tC = np.argmax(class_test,axis=1) aC = np.argmax(Rclasses,axis=1) Nok = 0 Ndata = np.shape(class_test)[0] good = np.ones(Ndata) for i in range(0, Ndata): if (tC[i] == aC[i]): Nok += 1 else: good[i] = 0 print('Overall classification accuracy for test sample:', Nok/Ndata) ``` Overall classification accuracy for test sample: 0.9066666666666666 ```python # let's see what is true class and the predicted probabilities for all 3 classes for i in range(0, Ndata): if (good[i]==0): print(tC[i], aC[i], Rclasses[i,:]) ``` 1 0 [1.000e+00 4.432e-13 1.340e-08] 1 2 [0.023 0.313 0.664] 0 1 [0.353 0.522 0.125] 1 0 [1.000e+00 0.000e+00 5.893e-33] 1 0 [0.636 0.343 0.021] 1 2 [0.041 0.455 0.504] 1 2 [0.018 0.487 0.495] 1 0 [0.942 0.055 0.003] 1 0 [9.998e-01 1.909e-04 1.147e-05] 1 0 [0.965 0.033 0.003] 1 0 [1.000e+00 2.054e-13 4.105e-08] 1 2 [0.014 0.435 0.55 ] 1 0 [0.878 0.118 0.004] 1 0 [0.548 0.374 0.078] 0 2 [0.174 0.195 0.631] 1 0 [1.000e+00 2.670e-05 9.323e-09] 1 0 [9.992e-01 6.522e-04 1.319e-04] 1 2 [0.027 0.336 0.638] 1 0 [9.942e-01 5.578e-03 2.587e-04] 1 2 [0.014 0.407 0.579] 1 2 [0.051 0.401 0.548] 1 0 [9.999e-01 5.714e-05 9.277e-06] 1 2 [0.008 0.064 0.928] 1 0 [1.000e+00 4.677e-29 2.025e-18] 1 0 [0.862 0.135 0.003] 1 2 [0.044 0.175 0.781] 1 0 [0.903 0.095 0.002] 1 2 [0.022 0.482 0.496] 0 2 [0.132 0.281 0.586] 1 0 [9.994e-01 5.127e-04 8.105e-05] 1 0 [0.954 0.045 0.002] 1 2 [0.026 0.468 0.507] 1 2 [0.024 0.353 0.623] 1 2 [0.013 0.224 0.763] 1 0 [9.613e-01 3.802e-02 6.658e-04] 1 2 [0.056 0.12 0.824] 1 2 [0.075 0.253 0.672] 1 0 [0.855 0.136 0.009] 1 2 [0.194 0.311 0.495] 1 0 [0.372 0.334 0.294] 1 0 [9.943e-01 5.548e-03 1.981e-04] 1 0 [0.804 0.123 0.073] 1 2 [0.019 0.44 0.541] 1 0 [9.966e-01 3.210e-03 1.446e-04] 1 2 [0.009 0.156 0.835] 1 2 [0.012 0.248 0.74 ] 1 2 [0.034 0.226 0.74 ] 1 2 [0.015 0.35 0.635] 1 0 [1.000e+00 6.224e-06 1.949e-06] 1 0 [0.899 0.097 0.004] 1 0 [0.753 0.233 0.014] 1 2 [0.018 0.378 0.604] 1 0 [0.688 0.303 0.009] 0 1 [0.083 0.719 0.198] 1 0 [0.967 0.032 0.001] 1 0 [0.979 0.02 0.001] ### Some trailed sources are miclassified as stars - they were probably not very elongated, let's see... ```python # plot misclassified stamps misfits = data_test[good==0] trueClass = class_test[good==0] predClass = Rclasses[good==0] plot_image_array(misfits, figsize=[16,10], subtitle=True, classes=predClass, class_true=trueClass) #plot_image_array(misfits, nrows=2, ncols=5, figsize=[16,5], subtitle=True, classes=predClass, class_true=trueClass) ``` ## How many model parameters did we fit? 
We can find that information as follows ```python # print model summary resnet50_model.summary() ``` Model: "ResNet50" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_1 (InputLayer) [(None, 32, 32, 1)] 0 __________________________________________________________________________________________________ zero_padding2d (ZeroPadding2D) (None, 38, 38, 1) 0 input_1[0][0] __________________________________________________________________________________________________ conv1 (Conv2D) (None, 16, 16, 64) 3200 zero_padding2d[0][0] __________________________________________________________________________________________________ bn_conv1 (BatchNormalization) (None, 16, 16, 64) 256 conv1[0][0] __________________________________________________________________________________________________ activation (Activation) (None, 16, 16, 64) 0 bn_conv1[0][0] __________________________________________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 7, 7, 64) 0 activation[0][0] __________________________________________________________________________________________________ res2a_branch2a (Conv2D) (None, 7, 7, 64) 4160 max_pooling2d[0][0] __________________________________________________________________________________________________ bn2a_branch2a (BatchNormalizati (None, 7, 7, 64) 256 res2a_branch2a[0][0] __________________________________________________________________________________________________ activation_1 (Activation) (None, 7, 7, 64) 0 bn2a_branch2a[0][0] __________________________________________________________________________________________________ res2a_branch2b (Conv2D) (None, 7, 7, 64) 36928 activation_1[0][0] __________________________________________________________________________________________________ bn2a_branch2b (BatchNormalizati (None, 7, 7, 64) 256 res2a_branch2b[0][0] __________________________________________________________________________________________________ activation_2 (Activation) (None, 7, 7, 64) 0 bn2a_branch2b[0][0] __________________________________________________________________________________________________ res2a_branch2c (Conv2D) (None, 7, 7, 256) 16640 activation_2[0][0] __________________________________________________________________________________________________ res2a_branch1 (Conv2D) (None, 7, 7, 256) 16640 max_pooling2d[0][0] __________________________________________________________________________________________________ bn2a_branch2c (BatchNormalizati (None, 7, 7, 256) 1024 res2a_branch2c[0][0] __________________________________________________________________________________________________ bn2a_branch1 (BatchNormalizatio (None, 7, 7, 256) 1024 res2a_branch1[0][0] __________________________________________________________________________________________________ add (Add) (None, 7, 7, 256) 0 bn2a_branch2c[0][0] bn2a_branch1[0][0] __________________________________________________________________________________________________ activation_3 (Activation) (None, 7, 7, 256) 0 add[0][0] __________________________________________________________________________________________________ res2b_branch2a (Conv2D) (None, 7, 7, 64) 16448 activation_3[0][0] __________________________________________________________________________________________________ bn2b_branch2a (BatchNormalizati (None, 7, 7, 64) 256 
res2b_branch2a[0][0] __________________________________________________________________________________________________ activation_4 (Activation) (None, 7, 7, 64) 0 bn2b_branch2a[0][0] __________________________________________________________________________________________________ res2b_branch2b (Conv2D) (None, 7, 7, 64) 36928 activation_4[0][0] __________________________________________________________________________________________________ bn2b_branch2b (BatchNormalizati (None, 7, 7, 64) 256 res2b_branch2b[0][0] __________________________________________________________________________________________________ activation_5 (Activation) (None, 7, 7, 64) 0 bn2b_branch2b[0][0] __________________________________________________________________________________________________ res2b_branch2c (Conv2D) (None, 7, 7, 256) 16640 activation_5[0][0] __________________________________________________________________________________________________ bn2b_branch2c (BatchNormalizati (None, 7, 7, 256) 1024 res2b_branch2c[0][0] __________________________________________________________________________________________________ add_1 (Add) (None, 7, 7, 256) 0 bn2b_branch2c[0][0] activation_3[0][0] __________________________________________________________________________________________________ activation_6 (Activation) (None, 7, 7, 256) 0 add_1[0][0] __________________________________________________________________________________________________ res2c_branch2a (Conv2D) (None, 7, 7, 64) 16448 activation_6[0][0] __________________________________________________________________________________________________ bn2c_branch2a (BatchNormalizati (None, 7, 7, 64) 256 res2c_branch2a[0][0] __________________________________________________________________________________________________ activation_7 (Activation) (None, 7, 7, 64) 0 bn2c_branch2a[0][0] __________________________________________________________________________________________________ res2c_branch2b (Conv2D) (None, 7, 7, 64) 36928 activation_7[0][0] __________________________________________________________________________________________________ bn2c_branch2b (BatchNormalizati (None, 7, 7, 64) 256 res2c_branch2b[0][0] __________________________________________________________________________________________________ activation_8 (Activation) (None, 7, 7, 64) 0 bn2c_branch2b[0][0] __________________________________________________________________________________________________ res2c_branch2c (Conv2D) (None, 7, 7, 256) 16640 activation_8[0][0] __________________________________________________________________________________________________ bn2c_branch2c (BatchNormalizati (None, 7, 7, 256) 1024 res2c_branch2c[0][0] __________________________________________________________________________________________________ add_2 (Add) (None, 7, 7, 256) 0 bn2c_branch2c[0][0] activation_6[0][0] __________________________________________________________________________________________________ activation_9 (Activation) (None, 7, 7, 256) 0 add_2[0][0] __________________________________________________________________________________________________ res3a_branch2a (Conv2D) (None, 4, 4, 128) 32896 activation_9[0][0] __________________________________________________________________________________________________ bn3a_branch2a (BatchNormalizati (None, 4, 4, 128) 512 res3a_branch2a[0][0] __________________________________________________________________________________________________ activation_10 (Activation) (None, 4, 4, 128) 0 bn3a_branch2a[0][0] 
__________________________________________________________________________________________________ res3a_branch2b (Conv2D) (None, 4, 4, 128) 147584 activation_10[0][0] __________________________________________________________________________________________________ bn3a_branch2b (BatchNormalizati (None, 4, 4, 128) 512 res3a_branch2b[0][0] __________________________________________________________________________________________________ activation_11 (Activation) (None, 4, 4, 128) 0 bn3a_branch2b[0][0] __________________________________________________________________________________________________ res3a_branch2c (Conv2D) (None, 4, 4, 512) 66048 activation_11[0][0] __________________________________________________________________________________________________ res3a_branch1 (Conv2D) (None, 4, 4, 512) 131584 activation_9[0][0] __________________________________________________________________________________________________ bn3a_branch2c (BatchNormalizati (None, 4, 4, 512) 2048 res3a_branch2c[0][0] __________________________________________________________________________________________________ bn3a_branch1 (BatchNormalizatio (None, 4, 4, 512) 2048 res3a_branch1[0][0] __________________________________________________________________________________________________ add_3 (Add) (None, 4, 4, 512) 0 bn3a_branch2c[0][0] bn3a_branch1[0][0] __________________________________________________________________________________________________ activation_12 (Activation) (None, 4, 4, 512) 0 add_3[0][0] __________________________________________________________________________________________________ res3b_branch2a (Conv2D) (None, 4, 4, 128) 65664 activation_12[0][0] __________________________________________________________________________________________________ bn3b_branch2a (BatchNormalizati (None, 4, 4, 128) 512 res3b_branch2a[0][0] __________________________________________________________________________________________________ activation_13 (Activation) (None, 4, 4, 128) 0 bn3b_branch2a[0][0] __________________________________________________________________________________________________ res3b_branch2b (Conv2D) (None, 4, 4, 128) 147584 activation_13[0][0] __________________________________________________________________________________________________ bn3b_branch2b (BatchNormalizati (None, 4, 4, 128) 512 res3b_branch2b[0][0] __________________________________________________________________________________________________ activation_14 (Activation) (None, 4, 4, 128) 0 bn3b_branch2b[0][0] __________________________________________________________________________________________________ res3b_branch2c (Conv2D) (None, 4, 4, 512) 66048 activation_14[0][0] __________________________________________________________________________________________________ bn3b_branch2c (BatchNormalizati (None, 4, 4, 512) 2048 res3b_branch2c[0][0] __________________________________________________________________________________________________ add_4 (Add) (None, 4, 4, 512) 0 bn3b_branch2c[0][0] activation_12[0][0] __________________________________________________________________________________________________ activation_15 (Activation) (None, 4, 4, 512) 0 add_4[0][0] __________________________________________________________________________________________________ res3c_branch2a (Conv2D) (None, 4, 4, 128) 65664 activation_15[0][0] __________________________________________________________________________________________________ bn3c_branch2a (BatchNormalizati (None, 4, 4, 128) 512 res3c_branch2a[0][0] 
__________________________________________________________________________________________________ activation_16 (Activation) (None, 4, 4, 128) 0 bn3c_branch2a[0][0] __________________________________________________________________________________________________ res3c_branch2b (Conv2D) (None, 4, 4, 128) 147584 activation_16[0][0] __________________________________________________________________________________________________ bn3c_branch2b (BatchNormalizati (None, 4, 4, 128) 512 res3c_branch2b[0][0] __________________________________________________________________________________________________ activation_17 (Activation) (None, 4, 4, 128) 0 bn3c_branch2b[0][0] __________________________________________________________________________________________________ res3c_branch2c (Conv2D) (None, 4, 4, 512) 66048 activation_17[0][0] __________________________________________________________________________________________________ bn3c_branch2c (BatchNormalizati (None, 4, 4, 512) 2048 res3c_branch2c[0][0] __________________________________________________________________________________________________ add_5 (Add) (None, 4, 4, 512) 0 bn3c_branch2c[0][0] activation_15[0][0] __________________________________________________________________________________________________ activation_18 (Activation) (None, 4, 4, 512) 0 add_5[0][0] __________________________________________________________________________________________________ res3d_branch2a (Conv2D) (None, 4, 4, 128) 65664 activation_18[0][0] __________________________________________________________________________________________________ bn3d_branch2a (BatchNormalizati (None, 4, 4, 128) 512 res3d_branch2a[0][0] __________________________________________________________________________________________________ activation_19 (Activation) (None, 4, 4, 128) 0 bn3d_branch2a[0][0] __________________________________________________________________________________________________ res3d_branch2b (Conv2D) (None, 4, 4, 128) 147584 activation_19[0][0] __________________________________________________________________________________________________ bn3d_branch2b (BatchNormalizati (None, 4, 4, 128) 512 res3d_branch2b[0][0] __________________________________________________________________________________________________ activation_20 (Activation) (None, 4, 4, 128) 0 bn3d_branch2b[0][0] __________________________________________________________________________________________________ res3d_branch2c (Conv2D) (None, 4, 4, 512) 66048 activation_20[0][0] __________________________________________________________________________________________________ bn3d_branch2c (BatchNormalizati (None, 4, 4, 512) 2048 res3d_branch2c[0][0] __________________________________________________________________________________________________ add_6 (Add) (None, 4, 4, 512) 0 bn3d_branch2c[0][0] activation_18[0][0] __________________________________________________________________________________________________ activation_21 (Activation) (None, 4, 4, 512) 0 add_6[0][0] __________________________________________________________________________________________________ res4a_branch2a (Conv2D) (None, 2, 2, 256) 131328 activation_21[0][0] __________________________________________________________________________________________________ bn4a_branch2a (BatchNormalizati (None, 2, 2, 256) 1024 res4a_branch2a[0][0] __________________________________________________________________________________________________ activation_22 (Activation) (None, 2, 2, 256) 0 bn4a_branch2a[0][0] 
__________________________________________________________________________________________________ res4a_branch2b (Conv2D) (None, 2, 2, 256) 590080 activation_22[0][0] __________________________________________________________________________________________________ bn4a_branch2b (BatchNormalizati (None, 2, 2, 256) 1024 res4a_branch2b[0][0] __________________________________________________________________________________________________ activation_23 (Activation) (None, 2, 2, 256) 0 bn4a_branch2b[0][0] __________________________________________________________________________________________________ res4a_branch2c (Conv2D) (None, 2, 2, 1024) 263168 activation_23[0][0] __________________________________________________________________________________________________ res4a_branch1 (Conv2D) (None, 2, 2, 1024) 525312 activation_21[0][0] __________________________________________________________________________________________________ bn4a_branch2c (BatchNormalizati (None, 2, 2, 1024) 4096 res4a_branch2c[0][0] __________________________________________________________________________________________________ bn4a_branch1 (BatchNormalizatio (None, 2, 2, 1024) 4096 res4a_branch1[0][0] __________________________________________________________________________________________________ add_7 (Add) (None, 2, 2, 1024) 0 bn4a_branch2c[0][0] bn4a_branch1[0][0] __________________________________________________________________________________________________ activation_24 (Activation) (None, 2, 2, 1024) 0 add_7[0][0] __________________________________________________________________________________________________ res4b_branch2a (Conv2D) (None, 2, 2, 256) 262400 activation_24[0][0] __________________________________________________________________________________________________ bn4b_branch2a (BatchNormalizati (None, 2, 2, 256) 1024 res4b_branch2a[0][0] __________________________________________________________________________________________________ activation_25 (Activation) (None, 2, 2, 256) 0 bn4b_branch2a[0][0] __________________________________________________________________________________________________ res4b_branch2b (Conv2D) (None, 2, 2, 256) 590080 activation_25[0][0] __________________________________________________________________________________________________ bn4b_branch2b (BatchNormalizati (None, 2, 2, 256) 1024 res4b_branch2b[0][0] __________________________________________________________________________________________________ activation_26 (Activation) (None, 2, 2, 256) 0 bn4b_branch2b[0][0] __________________________________________________________________________________________________ res4b_branch2c (Conv2D) (None, 2, 2, 1024) 263168 activation_26[0][0] __________________________________________________________________________________________________ bn4b_branch2c (BatchNormalizati (None, 2, 2, 1024) 4096 res4b_branch2c[0][0] __________________________________________________________________________________________________ add_8 (Add) (None, 2, 2, 1024) 0 bn4b_branch2c[0][0] activation_24[0][0] __________________________________________________________________________________________________ activation_27 (Activation) (None, 2, 2, 1024) 0 add_8[0][0] __________________________________________________________________________________________________ res4c_branch2a (Conv2D) (None, 2, 2, 256) 262400 activation_27[0][0] __________________________________________________________________________________________________ bn4c_branch2a (BatchNormalizati (None, 2, 2, 256) 1024 
res4c_branch2a[0][0] __________________________________________________________________________________________________ activation_28 (Activation) (None, 2, 2, 256) 0 bn4c_branch2a[0][0] __________________________________________________________________________________________________ res4c_branch2b (Conv2D) (None, 2, 2, 256) 590080 activation_28[0][0] __________________________________________________________________________________________________ bn4c_branch2b (BatchNormalizati (None, 2, 2, 256) 1024 res4c_branch2b[0][0] __________________________________________________________________________________________________ activation_29 (Activation) (None, 2, 2, 256) 0 bn4c_branch2b[0][0] __________________________________________________________________________________________________ res4c_branch2c (Conv2D) (None, 2, 2, 1024) 263168 activation_29[0][0] __________________________________________________________________________________________________ bn4c_branch2c (BatchNormalizati (None, 2, 2, 1024) 4096 res4c_branch2c[0][0] __________________________________________________________________________________________________ add_9 (Add) (None, 2, 2, 1024) 0 bn4c_branch2c[0][0] activation_27[0][0] __________________________________________________________________________________________________ activation_30 (Activation) (None, 2, 2, 1024) 0 add_9[0][0] __________________________________________________________________________________________________ res4d_branch2a (Conv2D) (None, 2, 2, 256) 262400 activation_30[0][0] __________________________________________________________________________________________________ bn4d_branch2a (BatchNormalizati (None, 2, 2, 256) 1024 res4d_branch2a[0][0] __________________________________________________________________________________________________ activation_31 (Activation) (None, 2, 2, 256) 0 bn4d_branch2a[0][0] __________________________________________________________________________________________________ res4d_branch2b (Conv2D) (None, 2, 2, 256) 590080 activation_31[0][0] __________________________________________________________________________________________________ bn4d_branch2b (BatchNormalizati (None, 2, 2, 256) 1024 res4d_branch2b[0][0] __________________________________________________________________________________________________ activation_32 (Activation) (None, 2, 2, 256) 0 bn4d_branch2b[0][0] __________________________________________________________________________________________________ res4d_branch2c (Conv2D) (None, 2, 2, 1024) 263168 activation_32[0][0] __________________________________________________________________________________________________ bn4d_branch2c (BatchNormalizati (None, 2, 2, 1024) 4096 res4d_branch2c[0][0] __________________________________________________________________________________________________ add_10 (Add) (None, 2, 2, 1024) 0 bn4d_branch2c[0][0] activation_30[0][0] __________________________________________________________________________________________________ activation_33 (Activation) (None, 2, 2, 1024) 0 add_10[0][0] __________________________________________________________________________________________________ res4e_branch2a (Conv2D) (None, 2, 2, 256) 262400 activation_33[0][0] __________________________________________________________________________________________________ bn4e_branch2a (BatchNormalizati (None, 2, 2, 256) 1024 res4e_branch2a[0][0] __________________________________________________________________________________________________ activation_34 (Activation) (None, 2, 2, 256) 
0 bn4e_branch2a[0][0] __________________________________________________________________________________________________ res4e_branch2b (Conv2D) (None, 2, 2, 256) 590080 activation_34[0][0] __________________________________________________________________________________________________ bn4e_branch2b (BatchNormalizati (None, 2, 2, 256) 1024 res4e_branch2b[0][0] __________________________________________________________________________________________________ activation_35 (Activation) (None, 2, 2, 256) 0 bn4e_branch2b[0][0] __________________________________________________________________________________________________ res4e_branch2c (Conv2D) (None, 2, 2, 1024) 263168 activation_35[0][0] __________________________________________________________________________________________________ bn4e_branch2c (BatchNormalizati (None, 2, 2, 1024) 4096 res4e_branch2c[0][0] __________________________________________________________________________________________________ add_11 (Add) (None, 2, 2, 1024) 0 bn4e_branch2c[0][0] activation_33[0][0] __________________________________________________________________________________________________ activation_36 (Activation) (None, 2, 2, 1024) 0 add_11[0][0] __________________________________________________________________________________________________ res4f_branch2a (Conv2D) (None, 2, 2, 256) 262400 activation_36[0][0] __________________________________________________________________________________________________ bn4f_branch2a (BatchNormalizati (None, 2, 2, 256) 1024 res4f_branch2a[0][0] __________________________________________________________________________________________________ activation_37 (Activation) (None, 2, 2, 256) 0 bn4f_branch2a[0][0] __________________________________________________________________________________________________ res4f_branch2b (Conv2D) (None, 2, 2, 256) 590080 activation_37[0][0] __________________________________________________________________________________________________ bn4f_branch2b (BatchNormalizati (None, 2, 2, 256) 1024 res4f_branch2b[0][0] __________________________________________________________________________________________________ activation_38 (Activation) (None, 2, 2, 256) 0 bn4f_branch2b[0][0] __________________________________________________________________________________________________ res4f_branch2c (Conv2D) (None, 2, 2, 1024) 263168 activation_38[0][0] __________________________________________________________________________________________________ bn4f_branch2c (BatchNormalizati (None, 2, 2, 1024) 4096 res4f_branch2c[0][0] __________________________________________________________________________________________________ add_12 (Add) (None, 2, 2, 1024) 0 bn4f_branch2c[0][0] activation_36[0][0] __________________________________________________________________________________________________ activation_39 (Activation) (None, 2, 2, 1024) 0 add_12[0][0] __________________________________________________________________________________________________ res5a_branch2a (Conv2D) (None, 1, 1, 512) 524800 activation_39[0][0] __________________________________________________________________________________________________ bn5a_branch2a (BatchNormalizati (None, 1, 1, 512) 2048 res5a_branch2a[0][0] __________________________________________________________________________________________________ activation_40 (Activation) (None, 1, 1, 512) 0 bn5a_branch2a[0][0] __________________________________________________________________________________________________ res5a_branch2b (Conv2D) (None, 1, 1, 512) 
2359808 activation_40[0][0] __________________________________________________________________________________________________ bn5a_branch2b (BatchNormalizati (None, 1, 1, 512) 2048 res5a_branch2b[0][0] __________________________________________________________________________________________________ activation_41 (Activation) (None, 1, 1, 512) 0 bn5a_branch2b[0][0] __________________________________________________________________________________________________ res5a_branch2c (Conv2D) (None, 1, 1, 2048) 1050624 activation_41[0][0] __________________________________________________________________________________________________ res5a_branch1 (Conv2D) (None, 1, 1, 2048) 2099200 activation_39[0][0] __________________________________________________________________________________________________ bn5a_branch2c (BatchNormalizati (None, 1, 1, 2048) 8192 res5a_branch2c[0][0] __________________________________________________________________________________________________ bn5a_branch1 (BatchNormalizatio (None, 1, 1, 2048) 8192 res5a_branch1[0][0] __________________________________________________________________________________________________ add_13 (Add) (None, 1, 1, 2048) 0 bn5a_branch2c[0][0] bn5a_branch1[0][0] __________________________________________________________________________________________________ activation_42 (Activation) (None, 1, 1, 2048) 0 add_13[0][0] __________________________________________________________________________________________________ res5b_branch2a (Conv2D) (None, 1, 1, 512) 1049088 activation_42[0][0] __________________________________________________________________________________________________ bn5b_branch2a (BatchNormalizati (None, 1, 1, 512) 2048 res5b_branch2a[0][0] __________________________________________________________________________________________________ activation_43 (Activation) (None, 1, 1, 512) 0 bn5b_branch2a[0][0] __________________________________________________________________________________________________ res5b_branch2b (Conv2D) (None, 1, 1, 512) 2359808 activation_43[0][0] __________________________________________________________________________________________________ bn5b_branch2b (BatchNormalizati (None, 1, 1, 512) 2048 res5b_branch2b[0][0] __________________________________________________________________________________________________ activation_44 (Activation) (None, 1, 1, 512) 0 bn5b_branch2b[0][0] __________________________________________________________________________________________________ res5b_branch2c (Conv2D) (None, 1, 1, 2048) 1050624 activation_44[0][0] __________________________________________________________________________________________________ bn5b_branch2c (BatchNormalizati (None, 1, 1, 2048) 8192 res5b_branch2c[0][0] __________________________________________________________________________________________________ add_14 (Add) (None, 1, 1, 2048) 0 bn5b_branch2c[0][0] activation_42[0][0] __________________________________________________________________________________________________ activation_45 (Activation) (None, 1, 1, 2048) 0 add_14[0][0] __________________________________________________________________________________________________ res5c_branch2a (Conv2D) (None, 1, 1, 512) 1049088 activation_45[0][0] __________________________________________________________________________________________________ bn5c_branch2a (BatchNormalizati (None, 1, 1, 512) 2048 res5c_branch2a[0][0] __________________________________________________________________________________________________ activation_46 
(Activation) (None, 1, 1, 512) 0 bn5c_branch2a[0][0] __________________________________________________________________________________________________ res5c_branch2b (Conv2D) (None, 1, 1, 512) 2359808 activation_46[0][0] __________________________________________________________________________________________________ bn5c_branch2b (BatchNormalizati (None, 1, 1, 512) 2048 res5c_branch2b[0][0] __________________________________________________________________________________________________ activation_47 (Activation) (None, 1, 1, 512) 0 bn5c_branch2b[0][0] __________________________________________________________________________________________________ res5c_branch2c (Conv2D) (None, 1, 1, 2048) 1050624 activation_47[0][0] __________________________________________________________________________________________________ bn5c_branch2c (BatchNormalizati (None, 1, 1, 2048) 8192 res5c_branch2c[0][0] __________________________________________________________________________________________________ add_15 (Add) (None, 1, 1, 2048) 0 bn5c_branch2c[0][0] activation_45[0][0] __________________________________________________________________________________________________ activation_48 (Activation) (None, 1, 1, 2048) 0 add_15[0][0] __________________________________________________________________________________________________ flatten (Flatten) (None, 2048) 0 activation_48[0][0] __________________________________________________________________________________________________ fc3 (Dense) (None, 3) 6147 flatten[0][0] ================================================================================================== Total params: 23,587,587 Trainable params: 23,534,467 Non-trainable params: 53,120 __________________________________________________________________________________________________ ### Wow, 23 million parameters! How many pixels did we fit? Each image is 32x32 pixels and we trained on 2100 images, so there are more model parameters than input data! How can that be? ### Answer: regularization. ## Interpreting networks: where is a network looking Occulsion maps, saliency maps, class activation maps are all techniques for expressing which pixels contribute to classification. These are attempts to reduce the "black box" nature of the networks. The simplest of these is the occlussion map where we part of an image and calculate the probability of it belonging to a class. If the probability decreases the occluded part of the image is assumed to be important. If there is no change in probability the occluded pixels are not assumed to be important. A simple implementation of this is shown here. ```python model = resnet50_model image_number = 11 stampSize = 32 kernel_size=5 input_stamp = data_test[image_number].reshape(stampSize,stampSize) i = 0 j=0 heatmap = [] keras_stamps = [] for j in range(stampSize+1-kernel_size): for i in range(stampSize+1-kernel_size): img = np.copy(input_stamp) img[i:i+kernel_size,j:j+kernel_size] = 0 img = normalize_image(img) keras_stamps.append(img) keras_stamps = np.array(keras_stamps).reshape([-1,stampSize,stampSize,1]) probs = 1. 
- model.predict(keras_stamps) heatmap = probs[:,1].reshape(stampSize+1-kernel_size,stampSize+1-kernel_size) def transparent_cmap(cmap, N=255): "Copy colormap and set alpha values" mycmap = cmap mycmap._init() mycmap._lut[:,-1] = np.linspace(0, 0.8, N+4) return mycmap # pad heatmap to same size as original image heatmap = np.pad(heatmap, pad_width=np.int(kernel_size/2), mode='minimum') # use the base cmap to create transparent overlay mycmap = transparent_cmap(plt.cm.Reds) fig, ax = plt.subplots(nrows=1,ncols=1) ax.imshow(data_test[image_number].reshape(stampSize,stampSize), cmap='gray') ax.imshow(np.array(heatmap), alpha=0.5, cmap=mycmap) ``` ```python ```
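The occlusion map above is one option; the saliency maps mentioned earlier are another. As a hedged sketch (not part of the original notebook), a simple gradient-based saliency map could be computed with TensorFlow 2's `GradientTape`, assuming `resnet50_model`, `data_test`, `image_number`, `stampSize` and `normalize_image` are defined as in the cells above:

```python
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

# Sketch only: gradient-based saliency map for the same stamp used above.
stamp = normalize_image(data_test[image_number].reshape(stampSize, stampSize))
x = tf.convert_to_tensor(stamp.reshape(1, stampSize, stampSize, 1).astype('float32'))

with tf.GradientTape() as tape:
    tape.watch(x)                      # track gradients with respect to the input pixels
    probs = resnet50_model(x)          # forward pass, shape (1, 3)
    top = int(tf.argmax(probs[0]))     # index of the predicted class
    score = probs[0, top]              # probability of that class

grads = tape.gradient(score, x)        # d(score)/d(pixel)
saliency = tf.abs(grads)[0, :, :, 0].numpy()

plt.imshow(stamp, cmap='gray')
plt.imshow(saliency, alpha=0.5, cmap='Reds')
```

Pixels with large gradients are those whose perturbation would change the predicted probability the most, so this gives a complementary (and much cheaper) view than occluding patches one at a time.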
8892e6c1a81e81ec5391c4a04dfd98a6082065c0
846,586
ipynb
Jupyter Notebook
lectures/notes/Lecture13-deep-learning-cnn.ipynb
uw-astro/astr-598a-win22
65e0f366e164c276f1dfc06873741c6f6c94b300
[ "BSD-3-Clause" ]
7
2021-06-16T00:46:26.000Z
2021-08-05T18:55:39.000Z
notebooks/Lecture20.ipynb
ivezic/SaoPaulo2021
6e88724fd07eab711fef1c1fc4c94decb20fc315
[ "BSD-2-Clause" ]
null
null
null
notebooks/Lecture20.ipynb
ivezic/SaoPaulo2021
6e88724fd07eab711fef1c1fc4c94decb20fc315
[ "BSD-2-Clause" ]
2
2021-07-19T16:28:16.000Z
2021-08-23T01:39:45.000Z
459.851168
194,348
0.914826
true
23,563
Qwen/Qwen-72B
1. YES 2. YES
0.746139
0.712232
0.531424
__label__yue_Hant
0.984556
0.073006
# Chapter 2 > Linear Algebra and Machine Learning ## Lecture 9 ___ ### Review of Linear Algebra Reference Books: Matrix Cookbook by Kaare Brandt Petersen & Michael Syskind Pedersen, 2012 $A \in \mathbb{R}^{n \times m}, n\text{ rows and } m\text{ columns}$ range($A$):=span{\underline{a}$_1$,...,\underline{a}$_m$} null($A$):={\underline{x} $\in \mathbb{R}^{m}$|$A$\underline{x}=0} Column rank = Row rank = number of linearly independent vectors. Full rank: rank(A) = min{m,n}. A \textbf{nonsingular} or \textbf{ invertible} matrix is a square matrix of full rank Angle between two vectors: $\displaystyle \alpha = cos^{-1}(\frac{\mathbf{x}^T\mathbf{y}}{||\mathbf{x}||\cdot ||\mathbf{y}|| })$ $Q$ matrix is \textbf{ unitary} or \textbf{ orthogonal} if $Q^T=Q^{-1}$: $||Q\mathbf{x}||=||\mathbf{x}||$ rotation \textbf{ Norms}: $\displaystyle ||\mathbf{x}||_p:=(\sum_{j=1}^n|\mathbf{x}_j|^p)^{1/p}$ $\displaystyle ||\mathbf{x}||_1:=\sum_{j=1}^n|\mathbf{x}_j|$ $\displaystyle ||\mathbf{x}||_2:=(\mathbf{x}^T\mathbf{x})^{1/2}$ $\displaystyle ||\mathbf{x}||_\infty:=\max_{1\le j\le n}|\mathbf{x}_j|$ $\displaystyle ||A||_{(m,n)}:=\sup_{x\in \mathbf{R}^m, \mathbf{x} \ne 0}\frac{||A\mathbf{x}||_n}{||\mathbf{x}||_m}=\sup_{||\mathbf{x}||_m=1}||A\mathbf{x}||_n$ $\displaystyle ||A||_1 = \text{ max column in } A$ <br> $\displaystyle ||A||_\infty = \text{ max row in } A$ Frobenins (Hilbert-Schmidt norm): $\displaystyle ||A||_F = (\sum_{i=1}^n\sum_{j=1}^m |a_{i,j}|^2)^{1/2}=\sqrt{\text{ Tr}(A^TA)}=\sqrt{\text{Tr}(AA^T)}\text{ , where Tr}(B) = \sum_j b_{jj}, \text{ sum of diagonal entries.}$ $||QA||_2=||A||_2 \,\&\, ||QA||_F=||A||_F $ \textbf{Singular Value Decomposition} (SVD): rotation & stretching of a basis $U$ - left singular matrix, $V$- right singular matrix, $\Sigma$ is the diagonal entry matrix. \begin{align} \text{To find the } V, \text{we note that } A^TA &= V\Sigma U^{-1}U\Sigma V^T \\ & = V\Sigma^2V^T \\ &= V\left[ \begin{array}{ccc} \sigma_1^2 & & \\ & \sigma_2^2 & \\ & & ... \\ & & \sigma_n^2 \\ \end{array} \right]V^T \end{align} The eigenvectors of this matrix $A^TA$ will give us the vectors $\mathbf{v}_i$, and the eigenvalues will give the numbers $\sigma_i$. Similarly, the matrix $AA^T$ gives us info for $U$. Examples can be seen here [SVD examples]( https://ocw.mit.edu/courses/mathematics/18-06sc-linear-algebra-fall-2011/positive-definite-matrices-and-applications/singular-value-decomposition/MIT18_06SCF11_Ses3.5sum.pdf#:~:text=The%20singular%20value%20decomposition%20of%20a%20matrix%20is,its%20eigenvectors%20are%20orthogonal%20and%20we%20can%20write) <div> </div> ## Lecture 10 ___ ### PCA (principal component analysis) $\overline{x} $ center data by subtraction the mean $\frac{1}{N-1}XX^T$ is simply the empirical approximation to Cov(X) which henceforth we denote as $C_x$. $C_x$ is non-negative definite & Symmetric (NDS). Hence it has an eigen decomposition $C_x = Q\Lambda Q^T$. The eigen-vectors of $C_x$ are called the principal components or the Rarhumen-Loere modes or PCA modes of $C_x$. At the same time write $X=U\Sigma V^T$ then, \begin{align} C_x=\frac{1}{N-1}XX^T&=\frac{1}{N-1}U\Sigma V^TV\Sigma U^T\\ &=\frac{1}{N-1}U\Sigma^2U^T \end{align} Thus, columns of U,\textbf{ the left singular vectors of X are precisely the Principal Components of} $C_x$ But why do we care about the PCA? 
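Before answering, here is a quick numerical check (not in the original notes) that the left singular vectors and singular values of the centered data matrix agree with the eigenvectors and eigenvalues of the empirical covariance; the toy data below are made up for illustration:

```python
import numpy as np

# Sketch: eigen-decomposition of C_x vs. SVD of the centered data matrix X.
rng = np.random.default_rng(0)
X = rng.multivariate_normal([0, 0], [[0.6, 0.2], [0.2, 0.2]], size=500).T  # shape (d, N)
Xc = X - X.mean(axis=1, keepdims=True)              # center each feature (row)

C = Xc @ Xc.T / (X.shape[1] - 1)                    # empirical covariance C_x
evals, evecs = np.linalg.eigh(C)                    # C_x = Q Lambda Q^T

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)   # Xc = U Sigma V^T

# eigenvalues of C_x equal sigma^2 / (N-1); columns of U match Q up to sign
print(np.allclose(np.sort(evals), np.sort(s**2 / (X.shape[1] - 1))))
```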
A random varibale is called Gaussian if its PDF has the form $$\Pi(x)=\frac{1}{\sqrt{(2\pi)^ddet(C)}}exp(-\frac{1}{2}(x-m)^TC^{-1}(x-m))$$, where m is mean \begin{lemma} Suppose $x \sim \mathcal{N}(m,C),m \in \mathbb{R}^d, C\in \mathbb{R}^{d\times{d}}. \text{ Let } b\in \mathbb{R}^n \& A\in \mathbb{R}^{n\times d} \text{ then } z=Ax+b \text{ is also Gaussian & } z\in \mathcal{N}(b+Am, ACA^T) $ \end{lemma} PCA explains the covariance and directions of maximum singular values in our dataset. It is good for low-dimension compression. ## Lecture 11 ___ ### From PCA & SVD to Proper Orthogonal Decomposition & Dynamic Mode Decoposition POD(proper orthogonal decomposition) $\displaystyle f(x,t):=\sum_{k=0}^{\infty}c_k(t)\psi_k(x)$ In the context, $\psi_k(x)$ are called the POD modes & $c_k(t)$ are called the POD coefficient. Luckily, there is an easy way to compute POD using SVD. Suppose dynamics of f(x,t) are observed over u discrete set, ie (x_j,t_k) ,j=0...N-1, k=0,...,T-1. Now put it into a matrix D: $$D:=\begin{bmatrix} \\ d_0=f(x_j,t_0) \;\; | \;\; d_1=f(x_j,t_1) \;\; | \;\; ....\;\; |\;\; d_{T-1}=f(x_j,t_{T-1})\\ \\ \end{bmatrix}$$ Then, compute the SVD of D. $D=U\Sigma V^T$, $U$ doesn't change even if the matrix is shuffled around, the direction of time is not important. Then the columns of $U$ are precisely the POD modes & the rows of $\Sigma V^T$ are precisely the POD coefficients. \textbf{ The power of this method is also important that it could be used to predict the future behavior based on the DMD}. Book:Data-Driven Science and Engineering by Steven ```python #import numpy as np #import matplotlib.pyplot as plt # # first load frames of cylinder flow simulation #from google.colab import drive # drive.mount('/content/drive') # data = np.load( "/content/drive/MyDrive/Courses/AMATH482582-WIN2022/Notebooks/CylData/cyldata.npy") Nx = 200 Ny = 50 ``` ```python # ax0 is the flattened frames and ax1 are the time frames # compute svd of data import numpy.matlib as matlib mean_data = np.mean(data, 1) centered_data = data - np.transpose(matlib.repmat(mean_data, 200, 1)) dU, ds, dVt = np.linalg.svd(centered_data, full_matrices=False) ``` ```python # plot some of the singular values and some of the principal modes fig, ax = plt.subplots(1, 1, figsize=(8, 8)) ax.plot(np.log(ds[0:100])) ax.set_xlabel('index $j$') ax.set_ylabel('$\log(\sigma_j)$') #ax.set_xlim(0, 150) fig, ax = plt.subplots(8, 2, figsize=(20, 20)) for j in range(8): ax[j][0].imshow(np.reshape(dU[:, j], (Ny, Nx)), cmap='bwr') ax[j][0].get_xaxis().set_visible(False) ax[j][0].get_yaxis().set_visible(False) ax[j][0].set_aspect('equal') ax[j][1].plot(np.abs(dVt[j, :])) ax[j][1].set_xlabel('Time step') if j == 0: ax[j][0].set_title('POD Modes', fontsize=30) ax[j][1].set_title('|POD Coeff.|', fontsize=30) plt.show() ``` ```python # approximate the dynamics using the POD modes only keeping the first 40 and compare to original data. 
dss = np.copy(ds) dss[10:None] = 0 # low rank approx approx_centered_data = np.dot(dU, np.dot(np.diag(dss), dVt)) print(approx_centered_data.shape) # add the mean back approx_data = approx_centered_data + \ np.transpose(matlib.repmat(mean_data, 200, 1)) ``` ```python # side by side comparison of original and approximate dynamics fig, ax = plt.subplots(4, 2, figsize=(20, 15)) frm_indx = [10, 100, 150, 199] for j in range(4): ax[j][0].imshow(np.reshape(data[:, frm_indx[j]], (Ny, Nx)), cmap='bwr') ax[j][0].get_xaxis().set_visible(False) ax[j][0].get_yaxis().set_visible(False) ax[j][0].set_aspect('equal') ax[j][1].imshow(np.reshape( approx_data[:, frm_indx[j]], (Ny, Nx)), cmap='bwr') ax[j][1].get_xaxis().set_visible(False) ax[j][1].get_yaxis().set_visible(False) ax[j][1].set_aspect('equal') if j == 0: ax[j][0].set_title('Original', fontsize=30) ax[j][1].set_title('POD Approximation', fontsize=30) ``` ### DMD (Dynamic Modes Decomposition) - developed 2010s, some work done at UW: Look back to the POD, the POD modes are smooth & structured, the POD coefficients are rough & chaotic. This is because POD only sees spatial structure & ignores temporal dynamics. DMD attempts to address this issue by modeling the dynamics of the data: $$d_{k+1}=Ad_k$$ $$f(x,t_{k+1}){\leftarrow}A\leftarrow f(x,t_k)$$ Lets reconstruct our data as follows, one keeping the T-1 data from 0 to T-2, the other keeping the T-1 data from 1 to T-1. $$D_1:=\begin{bmatrix} \\ d_0 \;\; | \;\; d_1\;\; | \;\; ....\;\; |\;\; d_{T-2}\\ \\ \end{bmatrix}$$ $$D_2:=\begin{bmatrix} \\ d_1 \;\; | \;\; d_2\;\; | \;\; ....\;\; |\;\; d_{T-1}\\ \\ \end{bmatrix}$$ Then DMD seeks to find a matrix $A$ such that $$D_2 \approx AD_1 $$ This approx is then done in a best-fit $$A=\min_{B\in \mathbb{R}^{N\times N},\text{ rank(B)}\le r}||BD_1-D_2||_F^2$$ So what does DMD actually do? \begin{align} d_k&=Ad_{k-1}=A(Ad_{k-2})=...\\ d_k&=A^{k-1}d_0=Q\Lambda^{k-1}Q^{-1}d_0 \\ &=Q\Lambda^{k-1}b=\sum_{j=1}^rq_{j}\lambda_{j}^{k-1}b_j \end{align} ```python from pydmd import DMD ``` ```python # first we create a DMD object dmd = DMD(svd_rank=10) # a rank 10 approximation to the dynamic matrix A # simply pass centered data set of snapshots to the dmd class dmd.fit(centered_data) dmd.plot_eigs(show_axes=True) ``` ```python # now plot the DMD modes and the temporal dynamics as we did for POD fig, ax = plt.subplots(8, 4, figsize=(30, 18)) for j in range(8): ax[j][0].imshow(np.reshape(np.real(dmd.modes[:, j]), (Ny, Nx)), cmap='bwr') ax[j][0].get_xaxis().set_visible(False) ax[j][0].get_yaxis().set_visible(False) ax[j][0].set_aspect('equal') ax[j][1].plot(np.real(dmd.dynamics[j, :])) ax[j][1].set_xlabel('Time step') ax[j][2].imshow(np.reshape(np.imag(dmd.modes[:, j]), (Ny, Nx)), cmap='bwr') ax[j][2].get_xaxis().set_visible(False) ax[j][2].get_yaxis().set_visible(False) ax[j][2].set_aspect('equal') ax[j][3].plot(np.imag(dmd.dynamics[j, :])) ax[j][3].set_xlabel('Time step') if j == 0: ax[j][0].set_title('Real DMD Modes', fontsize=30) ax[j][1].set_title('Real DMD Dynamics', fontsize=30) ax[j][2].set_title('Imag DMD Modes', fontsize=30) ax[j][3].set_title('Imag DMD Dynamics', fontsize=30) plt.show() ``` ## Lecture 12 ___ ### Introduction to Machine Learning Book: The Elements of Statistical Learning by Hastie \textbf{Supervised Learning}: With labels. Predict/classify data given a training dataset. Eg: REgression, classification, function approximation, etc. \textbf{Un-supervised Learning}: No labels. Find meaningful structure in dataset. 
Eg: Clustering, dimensionality reduction, Generation modeling. ### Supervised Learning The function model assumes there exists a function $\displaystyle f^+:X\rightarrow y$ so that $y_j=f^+(x_j)+\epsilon_j$, where $\epsilon_j$ are some noise that may be in the output or our observation of the $f^+(x_j)$ By far the most common assumption is Gaussian noise $$\epsilon_j \approx \mathcal{N}(0,\sigma^2)$$ This implies $\displaystyle y_i|x_j \approx \mathcal{N}(f^+(x_j),\sigma^2)$ $$\Pi(y_j|x_j)\propto exp(-\frac{1}{2\sigma^2}|f^+(x_j)-y_j|^2)$$, $\Pi$ is the PDF of $y$ for fixed $x_j$ For Euclidern norm, this is called a maximum likelihood estimate (MLE): $$f_{MLE}=arg\min_f \frac{1}{2\sigma^2}||f(X)-Y||^2$$ At this moment, it is useless without a model, since there are many solutions. One of the most simple model is \textbf{ linear regression}. $$f_{MLE}\equiv\beta_{MLE}=argmin \frac{1}{2\sigma^2}||A\beta-Y||^2$$, where A=$ \left[ \begin{array}{ccc} 1& & x_0^T \\ .& & . \\ .& & . \\ 1& & x_{N-1}^T \end{array} \right] $. Therefore, MLE is nothing but a least square solution to the problem. Typically, the system is over-determined. Solution is given by solving the normal equations, $$\frac{\partial}{\partial\beta}(\frac{1}{2\sigma^2}(A\beta-y)^T(A\beta-y))=\frac{1}{\sigma^2}A^T(A\beta-y)=0$$ \begin{align}\implies A^T(A\beta-y)&=0\\ \beta&=(A^TA)^{-1}A^Ty \end{align} ## Lecture 13 ___ ### Evaluating SL models \textbf{ Regularization/penalization/shrinkage}: we consider $\displaystyle \hat{\beta}=argmin \frac{1}{2\sigma^2}||A\beta-y||^2+\frac{\lambda}{2}||\beta||_p^p$ $\lambda\ge 0$ is called the regularization/penalty parameter & $p \ge 1$ denotes the choice. $p=2 $ for Ridge regression. $$\beta=(\frac{1}{\sigma^2}A^TA+\lambda I)^{-1}A^Ty$$ So doing SVD of A, $$\frac{1}{\sigma^2}A^TA+\lambda I=V(\frac{1}{\sigma^2}\Sigma^2+ \lambda I)V^T$$, the diagonals are non-negative, eliminating zeros so A matrix can be invertible. Again, the choice of $\lambda$ is important for stability and accuracy. ### Training & Testing Errors \textbf{Training mean squared error (MSE)} ${X,Y}$ - training set, used to find $\hat{f}(\equiv \hat{\beta})$ ${X',Y'}$ - testing set, used for validation. Analyzing the MSE doesn't mean which model is always better. It is still important for choice of $\lambda$ ## Lecture 14 ___ ### Model tuning with Cross Validation $\lambda$ too small, the model is basically memorizing all the train data. The test error is large. (over-fitting, high-variance) $\lambda $ too large, model is too simply biased.() We want the best test error which is the smallest. However, in real life, we don't know the test error. \textbf{Cross Validation}: Split the train data to k parts, or k-fold. Randomly permute the data pairs-- $\mathbf{x}=\{x_{10},x_{-1},...,x{_13}\}$ and responding $\mathbf{y}$. Then split the data $\mathbf{x} \,\&\, \mathbf{y} $ into K-subsets. Iterate over $k=0,...k=K-1$ and fit the model to the training data with the k-th fold removed. Finally, calculate the CV prediction error (CV cost) with changing $\lambda$ $$CV(\hat{f},\lambda) := \frac{1}{N}\sum_{k=0}^{K-1}||\hat{f}(\mathbf{x}_k,\lambda)-\mathbf{y}_k||^2$$ ## Lecture 15 ___ ### Introduction to Kernel Methods \textbf{Kernel NDS}: We say K is non-negative definite & symmetric (NDS) if \begin{itemize} \item $K(\mathbf{x},\mathbf{x'})=K(\mathbf{x'},\mathbf{x})$ \item For any set of points $(x_0,...x_n)$, the matrix $(K)_{ij}=K(x_i,x_j) $ is NDS. 
\item $\displaystyle K(\mathbf{x},\mathbf{x'})=\sum_{j=0}^\infty\lambda_j\psi_j(\mathbf{x})\psi_j(\mathbf{x'})$, where the numbers $\lambda_j\ge 0$ are the eigenvalues and the $\psi_j$ are the eigenfunctions.
\end{itemize}

Mercer's theorem is the function-space analogue of the spectral theorem for matrices: if a matrix $A$ is NDS then $A=Q\Lambda Q^{-1}$, and hence $$A=\sum_j\lambda_jq_jq_j^T$$

Define the functions $$F_j(x)=\sqrt{\lambda_j}\psi_j(x)$$ along with the feature map $$F(\mathbf{x})=(F_0(\mathbf{x}),F_1(\mathbf{x}),\dots)$$

Functions in the RKHS (Reproducing Kernel Hilbert Space) $H_k$ have the nice properties $\displaystyle K(\mathbf{x},\mathbf{x'})=\sum_j F_j(\mathbf{x})F_j(\mathbf{x'})$ and, for a fixed $\mathbf{x}$, $\displaystyle f(\mathbf{x})=\sum_j c_jF_j(\mathbf{x})=\langle\{c_j\}_{j=0}^\infty,\{F_j(\mathbf{x})\}_{j=0}^\infty\rangle_{l^2}=\langle f,K(\mathbf{x},\cdot)\rangle_{H_k}$ (the reproducing property).

### Kernel Interpolation

Suppose we have an interpolation problem: given data points $(x_i,y_i)$, $i=0,\dots,N-1$, we wish to find $\displaystyle f=\sum_{j=0}^\infty c_jF_j$ satisfying $\displaystyle \sum_{j=0}^\infty c_jF_j(x_i)=y_i$ for $i=0,\dots,N-1$. Among all such interpolants we want the one with minimal $H_k$ norm, i.e.
$$\text{minimize } \sum_{j=0}^\infty c_j^2 \quad \text{s.t. } \sum_{j=0}^\infty c_jF_j(x_i)=y_i$$
What it reduces to is a solution of the form $$f(\mathbf{x})=\sum_{j=0}^{N-1}a_jK(\mathbf{x}_j,\mathbf{x})$$ The interpolation constraints then tell us that $\displaystyle \sum_{j=0}^{N-1}a_jK(\mathbf{x}_j,\mathbf{x}_i)=y_i$, i.e.
\begin{align}
\Theta \mathbf{a}&=\mathbf{y}, \quad \mathbf{a}=(a_0,\dots,a_{N-1}),\quad \mathbf{y}=(y_0,\dots,y_{N-1}),\\
\Theta_{ji}&=K(\mathbf{x}_j,\mathbf{x}_i).
\end{align}
Thus $\mathbf{a}=\Theta^{-1}\mathbf{y}$; the matrix $\Theta$ is invertible provided the kernel matrix is positive definite, which holds (e.g. for the Gaussian kernel) when the $x_j$ are distinct. 
```python import numpy as np import matplotlib.pyplot as plt ``` ```python # define function to be interpolated def f(x): val = np.exp( - x**2/0.05 )*np.cos(np.pi*20*x) + 0.5*np.tanh(5*(x - 0.5)) return val ``` ```python N = 100 N_p = 5 x = np.linspace(0,1, N) # grid used for plotting x_p = np.linspace(0,1, N_p) # set of points for interpolation # plot the function fig,ax = plt.subplots(1,1, figsize=(10,8)) ax.plot(x, f(x)) ax.scatter(x_p, f(x_p), color='r') ax.set_xlabel('x', fontsize=20) ax.set_ylabel('f(x)', fontsize=20) ``` ```python # define the kernel we wish to use # Gaussian Kernel def K(x1, x2, l): val = np.exp( - (np.abs(x1 - x2)**2)/(l**2) ) return val # plot K(x, 0.5) for illustration l = [0.05, 0.1, 0.25] fig,ax = plt.subplots(1,1, figsize=(10,8)) for i in range(3): ax.plot(x, K(x, 0.5, l[i]), label='l = '+str(l[i])) ax.set_xlabel('x', fontsize=20) ax.set_ylabel('K(x, 0.5)', fontsize=20) plt.legend(fontsize=20) ``` ```python # compute kernel interpolands and plot them for each choice of l # construct y vector, just for uniform notation with notes y = f(x_p) Theta = np.zeros( (N_p, N_p) ) fig,ax = plt.subplots(1,1, figsize=(10,8)) ax.plot(x, f(x), label='f') for i in range(3): # construct kernel matrix (maybe not the most efficient way but ok) for j in range(N_p): Theta[j,:] = K( x_p[j], x_p, l[i] ) # compute kernel interpolant a = np.linalg.solve(Theta, y) # plot the interpolant along with original function and interpolation data KV = np.zeros( (N_p, N) ) for j in range(N_p): KV[j, :] = K( x_p[j], x, l[i] ) f_interp = np.dot(a, KV) ax.plot(x, f_interp, label='l = ' + str(l[i])) ax.scatter(x_p, y, color='r' ) ax.set_xlabel('x', fontsize=20) plt.legend(fontsize=20) ``` ## Lecture 16 ___ ### Kernel Ridge Regression Mercers Theorem: $$K(\mathbf{x},\mathbf{x'})=\sum_{j=0}^\infty F_j(\mathbf{x})F_j(\mathbf{x'})$$ ,where $F_j$ are the features of $K$, $F_j\rightarrow 0 \text{ as } j\rightarrow \infty$. This regression is capable of solving cases with infinite number of features but at the cost of constructing a finite $n\times n$ matrix. \textbf{Ridge Regression}: $$\min_{\beta}||A\beta-y||^2+\lambda||\beta||^2$$ where A is our feature matrix often of the form $$A=\left[\begin{array}{ccc} F_0(x_0) &F_1(x_1)&...\\ F_0(x_1) & F_1(x_1)&...\\ .. & .. \\ F_0(x_{N-1}) &F_1(x_{N-1})&.. \end{array}\right]$$ while we see $||f||^2_{H_k}=\sum_{j=0}^{J-1}\beta_j^2=||\beta||^2$ \textbf{Kernel Ridge Regression}: $$\min_{f}||f(X)-Y||^2+\lambda||f||_{H_k}^2$$ \textbf{Representer Theorem}: $$\hat{f}(\mathbf{x})=\sum_{n=0}^{N-1}\hat{a}_nK(\mathbf{x}_n,\mathbf{x})$$ ,with $\hat{a}=(\hat{a}_0,...,\hat{a}_{N-1})$ being minimizer of $$minimize \;||\Theta\mathbf{a}-Y||^2+\lambda\mathbf{a}^T\Theta\mathbf{a}$$ The solution is found by differentiating it, and it is exactly $\displaystyle \mathbf{a}=(\Theta+\lambda I)^{-1}y$ ### Example: House Pricing in Taiwan Steps to do: \begin{itemize} \item Create corresponding train and test data. \item Normalize the train and test data: (data-mean)/std \item Choose and fit Kernel functions (example: "rbf"-Gaussian Kernel) \item Create discrete $\lambda \text{ and } \gamma $ parameters. \item Double for-loop to calculate cross-validation scores (scoring="neg_mean_squared_error") and store the scores. \item Find the best score for choosing parameters \item Predict \end{itemize} ## Lecture 17 ___ ### Kernel Perspective vs. 
Feature Perspective

FR: $minimize \;||A\beta-Y||^2+\lambda||\beta||^2$

KR: $minimize \; ||\Theta a-Y||^2 +\lambda a^T\Theta a$

where $A_{ij}=F_j(x_i)$ is the matrix of feature values and $\Theta_{ij}=K(x_i,x_j)$ is the kernel matrix.

The computational cost is different: given a large data set with far more data points $N$ than features, we might prefer FR, because KR requires forming (and solving with) an $N\times N$ kernel matrix, which is too expensive. In exchange, KR effectively allows infinitely many features and much more freedom in the choice of kernel.

### Connecting Kernels to Fourier Series & Wavelets

From the discrete Fourier series we note that $\displaystyle f(t)=\sum_{k=-N/2}^{N/2-1}\hat{f}_k\exp(ikt)$. We then identify a kernel called the \textbf{Dirichlet Kernel}: $$K(t,s)=\sum_{k=-N/2}^{N/2-1}\exp(ik(t-s))$$ This gives the identity $$\int_0^{2\pi}K(t,s)f(s)\,ds=\sum_{k=-N/2}^{N/2-1}\hat{f}_k\exp(ikt)$$ Finally, note that linear projection is not the only possible transformation; non-linear feature maps are possible as well.
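To make Lecture 16 concrete, here is a minimal numpy sketch (not from the original notes) of kernel ridge regression with the Gaussian kernel used earlier, applying $\hat{a}=(\Theta+\lambda I)^{-1}y$; the data and the value of $\lambda$ are synthetic and purely illustrative:

```python
import numpy as np

# Gaussian kernel, matching the K(x1, x2, l) used in the interpolation example
def gauss_kernel(x1, x2, l=0.2):
    return np.exp(-np.subtract.outer(x1, x2) ** 2 / l ** 2)

rng = np.random.default_rng(1)
x_train = np.sort(rng.uniform(0, 1, 30))
y_train = np.sin(2 * np.pi * x_train) + 0.1 * rng.standard_normal(30)

lam = 1e-2
Theta = gauss_kernel(x_train, x_train)                        # kernel matrix
a_hat = np.linalg.solve(Theta + lam * np.eye(len(x_train)), y_train)

x_test = np.linspace(0, 1, 200)
f_hat = gauss_kernel(x_test, x_train) @ a_hat                 # f(x) = sum_n a_n K(x_n, x)
```

Setting `lam = 0` recovers the kernel interpolant from Lecture 15; increasing it trades interpolation accuracy for smoothness, exactly the bias/variance trade-off discussed in Lecture 14.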
8266bbc8300edcc12b57efbf52ed7806fc99b3b3
188,902
ipynb
Jupyter Notebook
course_notes/.ipynb_checkpoints/Chapter2-checkpoint.ipynb
raph651/Amath-582-Data-Analysis
c1d72d897b7611652c7fe1f71c5439062b8bdf9e
[ "MIT" ]
null
null
null
course_notes/.ipynb_checkpoints/Chapter2-checkpoint.ipynb
raph651/Amath-582-Data-Analysis
c1d72d897b7611652c7fe1f71c5439062b8bdf9e
[ "MIT" ]
null
null
null
course_notes/.ipynb_checkpoints/Chapter2-checkpoint.ipynb
raph651/Amath-582-Data-Analysis
c1d72d897b7611652c7fe1f71c5439062b8bdf9e
[ "MIT" ]
null
null
null
190.617558
71,180
0.890954
true
6,878
Qwen/Qwen-72B
1. YES 2. YES
0.843895
0.849971
0.717287
__label__eng_Latn
0.812803
0.504829
```python import numpy as np import pandas as pd import sympy as sym from sympy import init_printing from lgbayes.models import LinearGaussianBN init_printing(use_latex=True) %matplotlib inline %load_ext autoreload %autoreload 2 ``` The autoreload extension is already loaded. To reload it, use: %reload_ext autoreload ```python def sample(): u = np.random.normal(0,s_u,size=n) z = w_zu*u + np.random.normal(0,s_z,size=n) x = w_xz*z+np.random.normal(0,s_x,size=n) y = w_yx*x + w_yz*z + w_yu*u+ np.random.normal(0,s_y,size=n) return {"U":u,"Z":z,"X":x,"Y":y} ``` ```python s_u = 1 s_z = 0.3 s_x = 1 s_y = 0.5 w_zu = 2 w_xz = 0.5 w_yx = 0.5 w_yz = -1.0 w_yu = 3.0 beta_x = 0.5 beta_z = -1 sigma = 0.5 n = 1000 ``` ```python data_dict = sample() df1 = pd.DataFrame(data_dict,columns=["U","Z","X","Y"]) df1.cov() ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>U</th> <th>Z</th> <th>X</th> <th>Y</th> </tr> </thead> <tbody> <tr> <th>U</th> <td>0.976613</td> <td>1.945132</td> <td>1.001346</td> <td>1.480123</td> </tr> <tr> <th>Z</th> <td>1.945132</td> <td>3.964203</td> <td>2.033707</td> <td>2.878700</td> </tr> <tr> <th>X</th> <td>1.001346</td> <td>2.033707</td> <td>2.067833</td> <td>2.034622</td> </tr> <tr> <th>Y</th> <td>1.480123</td> <td>2.878700</td> <td>2.034622</td> <td>2.836955</td> </tr> </tbody> </table> </div> ```python model = LinearGaussianBN() model.add_var("X",[]) model.add_var("Y",["X"]) print(model) model.cov a = model.observe(["X"],["x"]) a.cov a.mu[0].simplify() ``` ```python model.mu ``` ```python model.cov ``` ```python model.information_matrix ``` ```python model = LinearGaussianBN() model.add_var("U",None,[0],s_u**2) model.add_var("Z",["U"],[0,w_zu],s_z**2) model.add_var("X",["Z"],[0,w_xz],s_x**2) model.add_var("Y",["U","Z","X"],[0,w_yu,w_yz,w_yx],s_y**2) print (model) data_dict = dict(zip(model.variables,model.sample(n).T)) df = pd.DataFrame(data_dict,columns=model.variables) df.cov() ``` U ~ N(0 ; 1) Z ~ N(2*U ; 0.09) X ~ N(0.5*Z ; 1) Y ~ N(3*U + -1*Z + 0.5*X ; 0.25) <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>U</th> <th>Z</th> <th>X</th> <th>Y</th> </tr> </thead> <tbody> <tr> <th>U</th> <td>1.004677</td> <td>2.000284</td> <td>0.997798</td> <td>1.499634</td> </tr> <tr> <th>Z</th> <td>2.000284</td> <td>4.073505</td> <td>2.018449</td> <td>2.893577</td> </tr> <tr> <th>X</th> <td>0.997798</td> <td>2.018449</td> <td>2.016385</td> <td>1.946916</td> </tr> <tr> <th>Y</th> <td>1.499634</td> <td>2.893577</td> <td>1.946916</td> <td>2.814356</td> </tr> </tbody> </table> </div> ```python ```
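As a cross-check on the sample covariances above, the covariance implied by the structural equations can also be computed directly. This is a numpy sketch under the assumption that the model is exactly the one built with `add_var` above (it does not use the `lgbayes` API):

```python
import numpy as np

# For x = B x + e with e ~ N(0, D) and B strictly lower triangular in
# topological order (U, Z, X, Y), the implied covariance is (I-B)^{-1} D (I-B)^{-T}.
B = np.array([[0.0,  0.0,  0.0,  0.0],   # U
              [w_zu, 0.0,  0.0,  0.0],   # Z = w_zu*U + noise
              [0.0,  w_xz, 0.0,  0.0],   # X = w_xz*Z + noise
              [w_yu, w_yz, w_yx, 0.0]])  # Y = w_yu*U + w_yz*Z + w_yx*X + noise
D = np.diag([s_u**2, s_z**2, s_x**2, s_y**2])

M = np.linalg.inv(np.eye(4) - B)
implied_cov = M @ D @ M.T
print(np.round(implied_cov, 3))          # compare with df.cov() above
```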
641627ba15d0eef78c7c84186b46e442d57099ab
19,855
ipynb
Jupyter Notebook
Multivariate Gaussians.ipynb
finnhacks42/linear-gaussian-bn
63e3355bbdcb0c7218e41b1c33858b7d9917177e
[ "MIT" ]
null
null
null
Multivariate Gaussians.ipynb
finnhacks42/linear-gaussian-bn
63e3355bbdcb0c7218e41b1c33858b7d9917177e
[ "MIT" ]
null
null
null
Multivariate Gaussians.ipynb
finnhacks42/linear-gaussian-bn
63e3355bbdcb0c7218e41b1c33858b7d9917177e
[ "MIT" ]
null
null
null
48.664216
3,348
0.675246
true
1,373
Qwen/Qwen-72B
1. YES 2. YES
0.896251
0.785309
0.703834
__label__kor_Hang
0.187666
0.473573
# Understanding the SVD ```python import numpy as np ``` ### Useful reference - [A Singularly Valuable Decomposition](https://datajobs.com/data-science-repo/SVD-[Dan-Kalman].pdf) ## Sketch of lecture ### Singular value decomposition Our goal is to understand the following forms of the SVD. $$ A = U \Sigma V^T $$ $$ A = \begin{bmatrix} U_1 & U_2 \end{bmatrix}\begin{bmatrix} \Sigma_1 & 0 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} V_1^T \\ V_2^T \end{bmatrix} $$ $$ A = \sum_{i=1}^r \sigma u_i v_i^T $$ ### (1) The matrix A #### What does a matrix do? A linear function is one that satisfies the property that $$ f(a_1x_1 + a_2x_2 + \cdots + a_nx_n) = a_1 f(x_1) + a_2 f(x_2) + \ldots + a_n f(x_n) $$ Let $f(x) = Ax$, where $A$ is a matrix and $x$ is a vector. You can check that the matrix $A$ fulfills the property of being a linear function. If $A$ is $m \times n$, then it is a linear map from $\mathbb{R}^n \mapsto \mathbb{R}^m$. Let's consider: what does a matrix *do* to a vector? Matrix multiplication has a *geometric* interpretation. When we multiply a vector, we either rotate, reflect, dilate or some combination of those three. So multiplying by a matrix *transforms* one vector into another vector. This is known as a *linear transformation*. Important Facts: * Any matrix defines a linear transformation * The matrix form of a linear transformation is NOT unique * We need only define a transformation by saying what it does to a *basis* Suppose we have a matrix $A$ that defines some transformation. We can take any invertible matrix $B$ and $$BAB^{-1}$$ defines the same transformation. This operation is called a *change of basis*, because we are simply expressing the transformation with respect to a different basis. **Example** Let $f(x)$ be the linear transformation that takes $e_1=(1,0)$ to $f(e_1)=(2,3)$ and $e_2=(0,1)$ to $f(e_2) = (1,1)$. A matrix representation of $f$ would be given by: $$A = \left(\begin{matrix}2 & 1\\3&1\end{matrix}\right)$$ This is the matrix we use if we consider the vectors of $\mathbb{R}^2$ to be linear combinations of the form $$c_1 e_1 + c_2 e_2$$ Now, consider a second pair of (linearly independent) vectors in $\mathbb{R}^2$, say $v_1=(1,3)$ and $v_2=(4,1)$. We first find the transformation that takes $e_1$ to $v_1$ and $e_2$ to $v_2$. 
A matrix representation for this is: $$B = \left(\begin{matrix}1 & 4\\3&1\end{matrix}\right)$$ Our original transformation $f$ can be expressed with respect to the basis $v_1, v_2$ via $$B^{-1}AB$$ #### Fundamental subspaces of $A$ - Span and basis - Inner and outer products of vectors - Rank of outer product is 1 - $C(A)$, $N(A)$, $(C(A^T))$ and $N(A^T)$ mean - Dimensions of each space and its rank - How to find a basis for each subspace given a $m \times n$ matrix $A$ - Sketch the diagram relating the four fundamental subspaces ### (2) Orthogonal matrices $U$ and $V^T$ - Orthogonal (perpendicular) vectors - Orthonormal vectors - Orthogonal matrix - $Q^TQ = QQ^T = I$ - Orthogonal matrices are rotations (and reflections) - Orthogonal matrices preserve norms (lengths) - 2D orthogonal matrix is a rotation matrix $$ V = \begin{bmatrix} \cos\theta & -\sin \theta \\ \sin \theta & \cos \theta \end{bmatrix} $$ - $V^T$ rotates the perpendicular frame spanned by $V$ into the standard frame spanned by $e_i$ - $V$ rotates the standard frame into the frame spanned by $V$ - $$\text{proj}_v x = \frac{\langle x, v \rangle}{\langle v, v \rangle} v $$ - Matrix form $$ P = \frac{vv^T}{v^Tv} $$ - Gram-Schmidt for converting $A$ into an orthogonal matrix $Q$ - QR decomposition ### (3) Diagonal matrix $S$ - Recall that a matrix $A$ is a transform with respect to some basis - It is desirable to find the simplest similar matrix $B$ in some other basis - $A$ and $B$ represent the exact same linear transform, just in different coordinate systems - $Av = \lambda v$ defines the eigenvectors and eigenvalues of $A$ - When a square matrix $A$ is real, symmetric and has all non-negative eigenvalues, it has an eigen-space decomposition (ESD) $$ A = V \Lambda V^T $$ where $V$ is orthogonal and $\Lambda$ is diagonal - The columns of $V$ are formed from the eigenvectors of $A$ - The diagonals of $\Lambda$ are the eigenvalues of $A$ (arrange from large to small in absolute value) ## (4) SVD $U\Sigma V^T$ - The SVD is a generalization of ESD for general $m \times n$ matrices $A$ - If $A$ is $(m \times n)$, we cannot perform an ESD - $A^TA$ is diagonalizable (note this is the dot product of all pairs of column vectors in $A$) - $$ A^TA = V \Lambda V^T $$ - Let $\Lambda = \Sigma^2$ - Let $U = AV\Sigma^{-1}$ - The $A = U\Sigma V^T$ - Show $U$ is orthogonal - Show $U$ is formed from eigenvectors of $AA^T$ - Geometric interpretation of SVD - rotate orthogonal frame $V$ onto standard frame - scale by $\Sigma$ - rotate standard frame into orthogonal frame $U$ ### Covariance, PCA and SVD Remember the formula for covariance $$ \text{Cov}(X, Y) = \frac{\sum_{i=1}^n(X_i - \bar{X})(Y_i - \bar{Y})}{n-1} $$ where $\text{Cov}(X, X)$ is the sample variance of $X$. 
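Before turning to the covariance code below, here is a small numerical sketch (not in the original notes) of the construction in section (4): building the SVD of a rectangular matrix from the eigendecomposition of $A^TA$. The test matrix is random and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 3))             # a generic m x n matrix, m > n

evals, V = np.linalg.eigh(A.T @ A)          # A^T A = V Lambda V^T
order = np.argsort(evals)[::-1]             # sort eigenvalues from large to small
evals, V = evals[order], V[:, order]

sigma = np.sqrt(evals)                      # Lambda = Sigma^2
U = A @ V / sigma                           # U = A V Sigma^{-1}, column by column

print(np.allclose(U @ np.diag(sigma) @ V.T, A))   # True: A = U Sigma V^T
```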
```python %matplotlib inline import matplotlib.pyplot as plt import numpy as np import scipy.linalg as la ``` ```python np.set_printoptions(precision=3) ``` ```python def cov(x, y): """Returns covariance of vectors x and y).""" xbar = x.mean() ybar = y.mean() return np.sum((x - xbar)*(y - ybar))/(len(x) - 1) ``` ```python X = np.random.random(10) Y = np.random.random(10) ``` ```python np.array([[cov(X, X), cov(X, Y)], [cov(Y, X), cov(Y,Y)]]) ``` array([[0.077, 0.027], [0.027, 0.097]]) Using `numpy` function ```python np.cov(X, Y) ``` array([[0.077, 0.027], [0.027, 0.097]]) ```python Z = np.random.random(10) np.cov([X, Y, Z]) ``` array([[0.077, 0.027, 0.01 ], [0.027, 0.097, 0.014], [0.01 , 0.014, 0.06 ]]) #### Eigendecomposition of the covariance matrix ```python mu = [0,0] sigma = [[0.6,0.2],[0.2,0.2]] n = 1000 x = np.random.multivariate_normal(mu, sigma, n).T ``` ```python A = np.cov(x) ``` ```python m = np.array([[1,2,3],[6,5,4]]) ms = m - m.mean(1).reshape(2,1) np.dot(ms, ms.T)/2 ``` array([[ 1., -1.], [-1., 1.]]) ```python e, v = la.eigh(A) ``` ```python plt.scatter(x[0,:], x[1,:], alpha=0.2) for e_, v_ in zip(e, v.T): plt.plot([0, 3*e_*v_[0]], [0, 3*e_*v_[1]], 'r-', lw=2) plt.axis([-3,3,-3,3]) plt.title('Eigenvectors of covariance matrix scaled by eigenvalue.'); ``` ### PCA Principal Components Analysis (PCA) basically means to find and rank all the eigenvalues and eigenvectors of a covariance matrix. This is useful because high-dimensional data (with $p$ features) may have nearly all their variation in a small number of dimensions $k$, i.e. in the subspace spanned by the eigenvectors of the covariance matrix that have the $k$ largest eigenvalues. If we project the original data into this subspace, we can have a dimension reduction (from $p$ to $k$) with hopefully little loss of information. Numerically, PCA is typically done using SVD on the data matrix rather than eigendecomposition on the covariance matrix. The next section explains why this works. Numerically, the condition number for working with the covariance matrix directly is the square of the condition number using SVD, so SVD minimizes errors. For zero-centered vectors, \begin{align} \text{Cov}(X, Y) &= \frac{\sum_{i=1}^n(X_i - \bar{X})(Y_i - \bar{Y})}{n-1} \\ &= \frac{\sum_{i=1}^nX_iY_i}{n-1} \\ &= \frac{XY^T}{n-1} \end{align} and so the covariance matrix for a data set X that has zero mean in each feature vector is just $XX^T/(n-1)$. In other words, we can also get the eigendecomposition of the covariance matrix from the positive semi-definite matrix $XX^T$. 
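As a quick numerical sanity check of this identity (a minimal sketch reusing the simulated $2 \times n$ data matrix `x` from above; the names `xc` and `manual_cov` are mine), the sample covariance returned by `np.cov` matches $XX^T/(n-1)$ once each row (feature) has been centered:

```python
# Verify that, after centering each row, np.cov(x) equals xc @ xc.T / (n - 1).
xc = x - x.mean(axis=1, keepdims=True)       # zero-center each feature (row)
manual_cov = xc @ xc.T / (x.shape[1] - 1)
print(np.allclose(manual_cov, np.cov(x)))    # True
```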
Note: Here $x$ is a matrix of **row** vectors

```python
X = np.random.random((5,4))
X
```

    array([[0.027, 0.212, 0.602, 0.276],
           [0.118, 0.095, 0.058, 0.736],
           [0.595, 0.283, 0.537, 0.475],
           [0.39 , 0.952, 0.45 , 0.16 ],
           [0.493, 0.958, 0.811, 0.184]])

```python
Y = X - X.mean(1)[:, None]
```

```python
np.around(Y.mean(1), 5)
```

    array([-0.,  0., -0., -0., -0.])

```python
Y
```

    array([[-0.252, -0.067,  0.323, -0.003],
           [-0.134, -0.157, -0.194,  0.484],
           [ 0.123, -0.19 ,  0.064,  0.003],
           [-0.098,  0.464, -0.038, -0.328],
           [-0.119,  0.346,  0.2  , -0.427]])

```python
np.cov(X)
```

    array([[ 0.057, -0.007,  0.001, -0.006,  0.024],
           [-0.007,  0.105,  0.001, -0.07 , -0.095],
           [ 0.001,  0.001,  0.018, -0.034, -0.023],
           [-0.006, -0.07 , -0.034,  0.111,  0.102],
           [ 0.024, -0.095, -0.023,  0.102,  0.119]])

```python
np.cov(Y)
```

    array([[ 0.057, -0.007,  0.001, -0.006,  0.024],
           [-0.007,  0.105,  0.001, -0.07 , -0.095],
           [ 0.001,  0.001,  0.018, -0.034, -0.023],
           [-0.006, -0.07 , -0.034,  0.111,  0.102],
           [ 0.024, -0.095, -0.023,  0.102,  0.119]])

```python
e1, v1 = np.linalg.eig(np.dot(x, x.T)/(n-1))
```

#### Principal components

Principal components are simply the eigenvectors of the covariance matrix used as basis vectors. Each of the original data points is expressed as a linear combination of the principal components, giving rise to a new set of coordinates.

```python
plt.scatter(x[0,:], x[1,:], alpha=0.2)
for e_, v_ in zip(e1, v1.T):
    plt.plot([0, 3*e_*v_[0]], [0, 3*e_*v_[1]], 'r-', lw=2)
plt.axis([-3,3,-3,3]);
```

#### Change of basis

Suppose we have a vector $u$ in the standard basis $B$, and a matrix $A$ that maps $u$ to $v$, also in $B$. We can use the eigenvectors of $A$ to form a new basis $B'$. As explained above, to bring a vector $u$ from $B$-space to a vector $u'$ in $B'$-space, we multiply it by $Q^{-1}$, the inverse of the matrix having the eigenvectors as column vectors. Now, in the eigenvector basis, the equivalent operation to $A$ is the diagonal matrix $\Lambda$ - this takes $u'$ to $v'$. Finally, we convert $v'$ back to a vector $v$ in the standard basis by multiplying with $Q$.

#### We get the principal components by a change of basis

```python
ys = np.dot(np.linalg.inv(v1), x)  # express the data in the eigenvector basis (v1 is orthogonal, so its inverse is its transpose)
plt.scatter(ys[0,:], ys[1,:], alpha=0.2)
for e_, v_ in zip(e1, np.eye(2)):
    plt.plot([0, 3*e_*v_[0]], [0, 3*e_*v_[1]], 'r-', lw=2)
plt.axis([-3,3,-3,3]);
```

For example, if we only use the first row of `ys`, we will have the projection of the data onto the first principal component, capturing the majority of the variance in the data with a single feature that is a linear combination of the original features.

#### Transform back to original coordinates

We may need to transform the (reduced) data set to the original feature coordinates for interpretation. This is simply another linear transform (matrix multiplication).

```python
zs = np.dot(v1, ys)
```

```python
plt.scatter(zs[0,:], zs[1,:], alpha=0.2)
for e_, v_ in zip(e1, v1.T):
    plt.plot([0, 3*e_*v_[0]], [0, 3*e_*v_[1]], 'r-', lw=2)
plt.axis([-3,3,-3,3]);
```

```python
u, s, v = np.linalg.svd(x)
u.dot(u.T)
```

    array([[1.00e+00, 1.11e-16],
           [1.11e-16, 1.00e+00]])

#### Dimension reduction via PCA

We have the spectral decomposition of the covariance matrix

$$
A = Q \Lambda Q^{-1}
$$

Suppose $\Lambda$ is a rank $p$ matrix. To reduce the dimensionality to $k \le p$, we simply set all but the first $k$ values of the diagonal of $\Lambda$ to zero. This is equivalent to ignoring all except the first $k$ principal components.

What does this achieve?
Recall that $A$ is a covariance matrix, and the trace of the matrix is the overall variability, since it is the sum of the variances.

```python
A
```

    array([[0.589, 0.191],
           [0.191, 0.191]])

```python
A.trace()
```

    0.7798589520198774

```python
e, v = np.linalg.eigh(A)
D = np.diag(e)
D
```

    array([[0.114, 0.   ],
           [0.   , 0.665]])

```python
D.trace()
```

    0.7798589520198773

```python
D[0,0]/D.trace()
```

    0.14677219641978512

Since the trace is invariant under change of basis, the total variability is also unchanged by PCA. By keeping only the first $k$ principal components, we can still "explain" $\sum_{i=1}^k e[i] / \sum_i e[i]$ of the total variability. Sometimes, the degree of dimension reduction is specified as keeping enough principal components so that (say) $90\%$ of the total variability is explained.

### Using SVD for PCA

SVD is a decomposition of the data matrix $X = U S V^T$ where $U$ and $V$ are orthogonal matrices and $S$ is a diagonal matrix.

Recall that the transpose of an orthogonal matrix is also its inverse, so if we multiply on the right by $X^T$, we get the following simplification

\begin{align}
X &= U S V^T \\
X X^T &= U S V^T (U S V^T)^T \\
&= U S V^T V S U^T \\
&= U S^2 U^T
\end{align}

Comparing with the eigendecomposition of a matrix $A = W \Lambda W^{-1}$, we see that the SVD gives us the eigendecomposition of the matrix $XX^T$, which, as we have just seen, is basically a scaled version of the covariance for a data matrix with zero mean, with the eigenvectors given by $U$ and the eigenvalues given by $S^2$ (scaled by $n-1$).

```python
u, s, v = np.linalg.svd(x)
```

```python
e2 = s**2/(n-1)
v2 = u
plt.scatter(x[0,:], x[1,:], alpha=0.2)
for e_, v_ in zip(e2, v2):
    plt.plot([0, 3*e_*v_[0]], [0, 3*e_*v_[1]], 'r-', lw=2)
plt.axis([-3,3,-3,3]);
```

```python
v1 # from eigenvectors of covariance matrix
```

    array([[ 0.928, -0.373],
           [ 0.373,  0.928]])

```python
v2 # from SVD
```

    array([[-0.928, -0.373],
           [-0.373,  0.928]])

```python
e1 # from eigenvalues of covariance matrix
```

    array([0.665, 0.115])

```python
e2 # from SVD
```

    array([0.665, 0.115])
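To tie the two computations together, here is a short sketch (the rank-1 example and the underscored variable names are mine) of the dimension reduction described above done directly with the SVD: keep only the leading singular triplet of the simulated data and measure how much variability it retains.

```python
# Keep only the leading singular triplet of the (approximately centered) data x.
u_, s_, vt_ = np.linalg.svd(x, full_matrices=False)
x_rank1 = s_[0] * np.outer(u_[:, 0], vt_[0, :])   # projection onto the first principal axis

# Fraction of total variability captured by the first principal component.
explained = s_[0]**2 / np.sum(s_**2)
print(explained)                            # roughly 0.85 for this simulated covariance
print(np.linalg.norm(x - x_rank1))          # Frobenius reconstruction error equals s_[1]
```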
7489bbf026167bdc8a9195deaddbbc6e141dd6cf
217,923
ipynb
Jupyter Notebook
notebook/S08E_SVD.ipynb
ashnair1/sta-663-2019
17eb85b644c52978c2ef3a53a80b7fb031360e3d
[ "BSD-3-Clause" ]
68
2019-01-09T21:53:55.000Z
2022-02-16T17:14:22.000Z
notebook/S08E_SVD.ipynb
ashnair1/sta-663-2019
17eb85b644c52978c2ef3a53a80b7fb031360e3d
[ "BSD-3-Clause" ]
null
null
null
notebook/S08E_SVD.ipynb
ashnair1/sta-663-2019
17eb85b644c52978c2ef3a53a80b7fb031360e3d
[ "BSD-3-Clause" ]
62
2019-01-09T21:43:48.000Z
2021-11-15T04:26:25.000Z
203.286381
42,060
0.900988
true
4,611
Qwen/Qwen-72B
1. YES 2. YES
0.815232
0.893309
0.728255
__label__eng_Latn
0.978073
0.530312
# Stereo Geometry This notebook visualizes the geometry between two views called epipolar geometry. **Subjects are covered:** 1. **Definitions of epipolar geometry, the Fundamental Matrix, and the Essential Matrix.** 2. **Visualizing epipolar geometry.** 3. **8 point algorithm for computing the Fundamental matrix.** 4. **Deriving relative camera poses from the essential matrix.** 5. **Conclusion** 6. **Sources** Section 1 is meant as a theoretical underpinning of the methods used. It can be skipped by readers only interested in the algorithm's implementation. Many of the descriptions in this section I have taken, sometimes verbatim, from [Multiple View Geometry in Computer Vision](https://www.robots.ox.ac.uk/~vgg/hzbook/) by Richard Hartley and Andrew Zisserman, [epipolar geometry Wikipedia articles](https://en.wikipedia.org/wiki/Essential_matrix), and [Computer Vision Algorithms and Applications](https://szeliski.org/Book/) by Richard Szeliski. **Why are we interested in these subjects?** Structure and depth are inherently ambiguous from single views. Understanding stereo geometry will allow us to overcome this ambiguity and estimate many useful values, such as: * **The 3D structure of a scene.** Given calibrated cameras, the 3D reconstruction can be known up to a similarity transform. Additionally, if a landmark with known dimensions and pose is observed, a Euclidean reconstruction can be computed. This allows for applications such as: * measuring objects in the scene (photogrammetry) * collision avoidance for autonomous robots (depth perception) * inserting objects into a scene (virtual reality) * superimposing textures onto the scene (virtual reality) * creating 3D models of real-world objects (metric reconstruction). * **The relative pose of a camera.** Knowing the relative pose will allow for a more targeted search of point correspondences. Stereo geometry provides closed-form solutions for calculating relative camera poses from point correspondences. Also, when performing camera pose estimation with iterative non-linear methods (e.g. Levenberg Marquardt in notebook 2), it is crucial to have an initial pose estimate. **Further reading** For a comprehensive treatment of epipolar geometry, see chapters nine to eleven of the textbook [Multiple view geometry in computer vision](https://www.robots.ox.ac.uk/~vgg/hzbook/) by Richard Hartley and Andrew Zisserman. For this subject, I would not recommend video lectures. The notations in many lectures was inconsistent, and I eventually found it was best understood through reading and drawing. # Epipolar Geometry Epipolar geometry is the intrinsic geometry between two views. It is independent of scene structure and only depends on the cameras' internal parameters and relative pose. This geometry is essentially the geometry of the intersection of the two camera image planes with a pencil of planes having the baseline as axis. Here, the baseline is the line joining the camera centers. (images taken from [Multiple View Geometry in Computer Vision](https://www.robots.ox.ac.uk/~vgg/hzbook/)). <center> </center> $ \mathbf {C,C}'$ - The camera projection centers $ \mathbf {X}$ - A 3D point $ \mathbf {x,x'}$ - The 2D images of $X$ $ \mathbf {\pi}$ - The epipolar plane $ \mathbf {e, e'}$ - The epipoles $ \mathbf {l, l'}$ - The epipolar lines $ \mathbf {P, P'}$ - The camera matrices * The epipole is the point of intersection of the line joining the camera centers (the baseline) with the image plane. 
* Equivalently, the epipole is the image in one view of the other view's camera center.
* $ \mathbf {X,x,x',C, C'}$ are coplanar. This plane is denoted as $ \mathbf{\pi}$. The rays back-projected from $ \mathbf {x}$ and $ \mathbf {x'}$ intersect at $ \mathbf {X}$, and also lie in $ \mathbf {\pi}$.
* The epipolar plane must contain the baseline. Therefore, there is a one-parameter family (a pencil) of epipolar planes.
* The epipolar line is the intersection of an epipolar plane with the image plane.
* All epipolar lines intersect at the epipole.
* An epipolar plane defines the correspondence between the two epipolar lines.
* A segment of the epipolar line can be found by projecting the other camera's back-projected ray of the image of $ \mathbf {X}$.
* For each point $ \mathbf {x}$ in one image, there exists a corresponding epipolar line $ \mathbf {l'}$ in the other image. Any point $ \mathbf {x'}$ in the second image matching point $ \mathbf {x}$ must lie on the epipolar line $ \mathbf {l'}$.
* **Recap.** In homogeneous coordinates, the cross product between two points is the line passing through those points. The cross product of two lines is the point where they intersect. The dot product of a line and a point on that line is always $0$. (A small numerical illustration of this recap appears at the start of the visualization section below.)

# The Fundamental Matrix

The fundamental matrix is the algebraic representation of epipolar geometry. That is, the above variables and their interrelations are captured in this matrix.

Most simply: an epipolar line is a projection in the second image of the ray from the point $ \mathbf {x}$ through the camera center $ \mathbf {C}$ of the first camera. Thus, there is a map

$$ \mathbf {x \rightarrow l'}$$

This mapping is a projective mapping from points to lines, which can be represented by the Fundamental matrix $\mathbf{F}$.

$$ \mathbf{l'} = \mathbf{Fx} $$

Because each corresponding point $\mathbf{x'}$ lies on $\mathbf{l'}$ and therefore has a dot product of 0 with it, $\mathbf{F}$ defines the constraint

$$ \mathbf{x'} ^{\mathbf{T}} \mathbf{F} \mathbf{x} = 0$$

### Algebraic Derivation of the Fundamental Matrix

This derivation is taken from the book Multiple View Geometry in Computer Vision.

The ray back-projected from $\mathbf{x}$ is obtained by solving $\mathbf{PX = x}$. The one-parameter family of solutions is of the form

$$ \mathbf{X}(\lambda) = \mathbf{P^{+} x} + \lambda \mathbf{C}$$

Keep in mind these are homogeneous coordinates, so points are only defined up to scale (the last coordinate can be rescaled to 1). Essentially, we are choosing a point on the ray joining the camera center $\mathbf{C}$ and the 3D point $\mathbf{P^{+} x}$. In particular, two 3D points on the ray are $\mathbf{P^{+} x}$ at $\lambda = 0$, and $\mathbf{C}$ at $\lambda = \infty$. Here $\mathbf{P^{+}}$ is the pseudo-inverse of $\mathbf{P}$, and $\mathbf{C}$ is its null-vector, namely the camera center, defined by $\mathbf{PC}=\mathbf{0}$.

These two points are imaged by the second camera $\mathbf{P'}$ at $\mathbf{P'}\mathbf{P^{+} x}$ and $\mathbf{P'C}$ respectively. The epipolar line is the line joining these two projected points, namely $\mathbf{l'} = (\mathbf{P'C})\times (\mathbf{P'}\mathbf{P^{+} x})$. The point $\mathbf{P'C}$ is the epipole in the second image, namely the projection of the first camera center, and may be denoted by $\mathbf{e'}$.
Thus, $\mathbf{l'} = [\mathbf{e'}]_{\times} (\mathbf{P'}\mathbf{P^{+}}) \mathbf{x} = \mathbf{Fx} $, where $\mathbf{F}$ is the matrix

$$ \mathbf{F} = [\mathbf{e'}]_{\times} (\mathbf{P'}\mathbf{P^{+}}) $$

This formalizes the map $ \mathbf {x \rightarrow l'}$ as the equation

$$ \mathbf{l'} = \mathbf{Fx}$$

Given that any point $w$ on $\mathbf{l'}$ will have the dot product $w \cdot \mathbf{l'} = 0$, we can infer the following:

The fundamental matrix satisfies the condition that for any pair of corresponding points $\mathbf{x} \leftrightarrow \mathbf{x'}$ in the two images

$$ \mathbf{x'} ^{\mathbf{T}} \mathbf{F} \mathbf{x} = 0$$

The importance of the relation is that it gives a way of characterizing the fundamental matrix without reference to the camera matrices, i.e. only in terms of corresponding image points. This enables $\mathbf{F}$ to be computed from image correspondences alone. To compute the matrix $\mathbf{F}$, at least $7$ correspondences are required.

### Properties

* $\mathbf{F}$ is a $3 \times 3$ matrix
* $\mathbf{F}$ is rank 2
* If $\mathbf{F}$ is the fundamental matrix of the pair of cameras $(\mathbf{P}, \mathbf{P'})$, then $\mathbf{F^{T}}$ is the fundamental matrix of the pair in the opposite order $(\mathbf{P'}, \mathbf{P})$.
* $ \mathbf{l'} = \mathbf{Fx}$
* $ \mathbf{l} = \mathbf{F^Tx'}$
* For any point $ \mathbf{x}$ (other than $ \mathbf{e}$) the epipolar line $ \mathbf{l'} = \mathbf{F} \mathbf{x}$ contains the epipole $\mathbf{e'}$
* $\mathbf{e'^T} (\mathbf{Fx}) = (\mathbf{e'^TF})\mathbf{x} = 0$ for all $\mathbf{x}$.
* $\mathbf{e'}$ is the left null-vector of $\mathbf{F}$.
* $\mathbf{e}$ is the right null-vector of $\mathbf{F}$.
* $\mathbf{e'} = \mathbf{P'C}$
* $\mathbf{e} = \mathbf{PC'}$
* $\mathbf{Fe} = 0$
* $\mathbf{F^{T}e'} = 0$
* $\mathbf{l'} = \mathbf{F}[\mathbf{e}]_{\times}\mathbf{l}$
* $\mathbf{l} = \mathbf{F^T}[\mathbf{e'}]_{\times}\mathbf{l'}$

# The Essential Matrix

The essential matrix is the specialization of the fundamental matrix to the case of normalized image coordinates. Historically, the essential matrix was introduced (by Longuet-Higgins) before the fundamental matrix, and the fundamental matrix may be thought of as the generalization of the essential matrix in which the (inessential) assumption of calibrated cameras is removed.

The essential matrix is defined as

$$ \mathbf{E} = \mathbf{K^{'T}FK}$$

The relation between the essential matrix and corresponding points is essentially ( ;] ) the same as for the fundamental matrix.

$$ \mathbf{x'}^{\mathbf{T}} \mathbf{F} \mathbf{x} = 0$$

$$ \mathbf{x'}^{\mathbf{T}} \mathbf{K^{'-T}K^{'T}FKK^{-1}} \mathbf{x} = 0$$

$$ \mathbf{y'}^{\mathbf{T}} \mathbf{K^{'T}FK} \mathbf{y} = 0 $$

$$ \mathbf{y'}^{\mathbf{T}} \mathbf{E} \mathbf{y} = 0 $$

where $\mathbf{y}$ and $\mathbf{y'}$ are the **normalized coordinates**. These can be thought of as the image plane coordinates for a camera with the identity matrix as camera intrinsics.

# Visualizing epipolar geometry

We will now visualize epipolar geometry by plotting the epipolar lines for stereo point correspondences. For this demonstration, we use a stereo setup from notebook 1.
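Before setting up the scene, here is the small numerical illustration of the homogeneous-coordinate recap mentioned above. This is only a sketch with made-up points (the specific coordinates are mine): the cross product of two points gives the line through them, the cross product of two lines gives their intersection, and a point on a line has zero dot product with it.

```python
import numpy as np

# Two points in homogeneous coordinates (arbitrary example values).
p = np.array([1.0, 2.0, 1.0])
q = np.array([4.0, 3.0, 1.0])

line_pq = np.cross(p, q)                         # line through p and q
print(np.dot(line_pq, p), np.dot(line_pq, q))    # both 0: p and q lie on the line

# Intersection of that line with the x-axis (the line y = 0 is [0, 1, 0]).
x_axis = np.array([0.0, 1.0, 0.0])
meet = np.cross(line_pq, x_axis)
meet = meet / meet[2]                            # rescale so the last coordinate is 1
print(meet)                                      # the intersection point, here (-5, 0)
```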
<center> </center> ```python from notebook_functions import (init_3d_plot, plot_chessboard, plot_camera_wireframe, plot_picture, project_points_to_picture, object_points, images, get_stereo_setup_with_correspondences, triangulate) import ipyvolume as ipv import numpy as np import cv2 import matplotlib.pyplot as plt ``` ```python images, extrinsics, cam_centers, intrinsics, match_coords, object_points = get_stereo_setup_with_correspondences() init_3d_plot() plot_chessboard(object_points) vis_scale = 5 for idx, (cam_center, extrinsic, image) in enumerate(zip(cam_centers, extrinsics, images)): inv_extrinsic = np.linalg.inv(extrinsic) inv_intrinsics = np.linalg.inv(intrinsics) plot_camera_wireframe(cam_center, vis_scale, inv_extrinsic) plot_picture(image, inv_extrinsic, vis_scale) ipv.show() ``` ## Calculate the fundamental matrix from camera matrices We will calculate the fundamental matrix using $ \mathbf{F} = [\mathbf{e'}]_{\times} (\mathbf{P'}\mathbf{P^{+})} $ and $\mathbf{e'} = \mathbf{P'C}$. Where $ [T_{\times}] = \begin{pmatrix} 0 & -t_z & t_y\\ t_z & 0 & -t_x\\ -t_y & t_x & 0 \end{pmatrix} $ ```python extrinsic_1, extrinsic_2 = extrinsics extrinsic_1_orig = extrinsic_1 extrinsic_2_orig = extrinsic_2 cam_center_1, cam_center_2 = cam_centers cam_center_1_orig = cam_center_1 cam_center_2_orig = cam_center_2 proj_1 = intrinsics[:3, :3] @ extrinsic_1[:3, :4] proj_2 = intrinsics[:3, :3] @ extrinsic_2[:3, :4] pinv_proj_1 = np.linalg.pinv(proj_1) pinv_proj_2 = np.linalg.pinv(proj_2) e_1 = proj_1 @ cam_center_2[:4] e_2 = proj_2 @ cam_center_1[:4] e_1 = e_1 / e_1[2] e_2 = e_2 / e_2[2] x, y, z = e_2 cross_e_2 = np.array([[ 0, -z, y], [ z, 0, -x], [-y, x, 0]]) F = cross_e_2 @ (proj_2 @ pinv_proj_1) print(f'Rank of F is very near 2 (ignoring numerical rounding errors): {np.isclose(np.linalg.svd(F)[1][2], 0)}') print(f'Fe≈0 and F trans e\'≈ 0: {np.all(F @ e_1 < 0.0001), np.all(F.T @ e_2 < 0.0001)}') ``` ## Draw the epipolar lines The epipolar lines will now be drawn using the equation $ \mathbf{l'} = \mathbf{Fx}$ and $ \mathbf{l} = \mathbf{F^Tx'}$. ```python def draw_homogenous_line(image, line): """ Draws a line represented by homogenous coordinates onto an image. Args: image (np.ndarray): An image array. line (np.ndarray): A 2D line represented by homgenous coordinates. Returns: image (np.ndarray): The image with the line drawn on it. """ height, width, _ = image.shape a, b, c = line y_at_x0 = int(-c/b) y_at_xwidth = int((-a * width - c)/b) image = cv2.line(image, (0, y_at_x0), (width, y_at_xwidth), [255, 255, 255], 5) return image def plot_epipolar_lines(image_1, image_2, points_1, points_2, F): """ Plots point correspondences along with epipolar lines. Args: image_{1|2} (np.ndarray): The stereo images. points_{1|2} (np.ndarray): The stereo point correspondences. F (no.ndarray): The fundamental matrix. 
""" for point_1, point_2 in zip(points_1, points_2): line_1 = F.T @ point_2[:3] line_2 = F @ point_1[:3] image_1 = cv2.circle(image_1, point_1[:2].astype(int), 15, [255,255,255], -1) image_2 = cv2.circle(image_2, point_2[:2].astype(int), 15, [255,255,255], -1) image_1 = draw_homogenous_line(image_1, line_1) image_2 = draw_homogenous_line(image_2, line_2) image_concat = cv2.hconcat([image_1, image_2]) image_concat = cv2.cvtColor(image_concat, cv2.COLOR_BGR2RGB) plt.figure(figsize=(20,20)) plt.imshow(image_concat) plt.show() ``` ```python im_1_points = match_coords[0] im_2_points = match_coords[1] image_1 = images[0] image_2 = images[1] plot_epipolar_lines(image_1.copy(), image_2.copy(), im_1_points, im_2_points, F) ``` # 8 point algorithm - Fundamental matrix from point correspondences The 8 point algorithm is a straightforward approach for calculating the Fundamental matrix $\mathbf{F}$. 1. **Points**. Find $8$ or more stereo point correspondences. Here we will use SIFT features. 2. **Linear solution**. Use an SVD to find the least-squares solution to $\mathbf{F}$ given the $8$ or more correspondence constraints $\mathbf{x'} ^{\mathbf{T}} \mathbf{F} \mathbf{x} = 0$. That is, for SVD$= \mathbf {U\Sigma V^{T}}$, take the last column of $\mathbf{V}$ as $\mathbf{F}$.* 3. **Enforce Rank 2**. For noisy data, the linear solution $\mathbf{F}$ will not be of rank $2$ in general. To enforce this, use the SVD to perform a PCA and keep only the first $2$ principal axes when reconstructing $\mathbf{F}$. *-See the appendix for an intuitive explanation and proof of why: * The Singular Value Decomposition can be used to find a least-squares solution. * The rank of the Fundamental matrix must be $2$. ### Constraint matrix $ \begin{align} \mathbf{x'} ^{\mathbf{T}} \mathbf{F} \mathbf{x} &= 0 \\\\ \begin{bmatrix} x' & y' & w' \end{bmatrix} \begin{bmatrix} f_1 & f_2 & f_3 \\ f_4 & f_5 & f_6 \\ f_7 & f_8 & f_9 \end{bmatrix} \begin{bmatrix} x \\ y \\ w \end{bmatrix} &= 0 \\\\ x' x f_1 + x' y f_2 + x' w f_3 \\ + y' x f_4 + y' y f_5 + y' w f_6 \\ + w' x f_7 + w' y f_8 + w' w f_9 &= 0 \\\\ \begin{bmatrix} x'x & x'y & x'w & y'x & y'y & y'w & w'x & w'y & w'w \end{bmatrix} \begin{bmatrix} f_1 \\ f_2 \\ f_3 \\ f_4 \\ f_5 \\ f_6 \\ f_7 \\ f_8 \\ f_9 \end{bmatrix} &= \begin{bmatrix} 0 \end{bmatrix} \end{align} $ The last equation is a linear homogeneous equation of form $\mathbf{A}b = 0$. Adding $7$ more constraints to $\mathbf{A}$ allows us to compute $b$. In this case, $b$ is the rolled-out fundamental matrix $\mathbf{F}$. ```python # Linear solution constraint_matrix = list() for point_1, point_2 in zip(im_1_points, im_2_points): x1, y1, w1, _ = point_1 x2, y2, w2, _ = point_2 constraint = [x2*x1, x2*y1, x2*w1, y2*x1, y2*y1, y2*w1, w2*x1, w2*y1, w2*w1] constraint_matrix.append(constraint) constraint_matrix = np.array(constraint_matrix) u, s, v_t = np.linalg.svd(constraint_matrix) least_squares = v_t[-1] # last column of V F = least_squares.reshape((3, 3)) # Enforce rank 2 u, s, v_t = np.linalg.svd(F) s = np.diag(s) s[2, 2] = 0 F = u @ s @ v_t plot_epipolar_lines(image_1.copy(), image_2.copy(), im_1_points, im_2_points, F) ``` # Deriving relative camera pose from the essential matrix. For a proof of this result refer to 9.6.2 in Multiple View Geometry in Computer Vision and [this Wikipedia article](https://en.wikipedia.org/wiki/Essential_matrix#Extracting_rotation_and_translation). The proof is out-of-scope and would require too much space. 
For a given essential matrix $\mathbf{E}$ with SVD $\mathbf{E} = \mathbf{U} \text{diag}(1,1,0)\mathbf{V^T}$ and first camera matrix $\mathbf{P} = [\mathbf{I} | \mathbf{0}]$, there are four possible choices for the second camera $\mathbf{P'}$, namely

$$ \mathbf{P'} = [\mathbf{UWV^T} | +\mathbf{u_3}] \hspace{0.5em} \text{ or } \hspace{0.5em} [\mathbf{UWV^T} | -\mathbf{u_3}] \hspace{0.5em} \text{ or } \hspace{0.5em} [\mathbf{UW^TV^T} | +\mathbf{u_3}] \hspace{0.5em} \text{ or } \hspace{0.5em} [\mathbf{UW^TV^T} | -\mathbf{u_3}]$$

where

$\mathbf{W} = \begin{bmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} $

This four-fold ambiguity can be reduced to a single solution: the one that places all points in front of both cameras after triangulation. We compute all 4 solutions and check which solution has points with only positive z coordinates relative to both cameras.

```python
def decompose_essential(essential):
    """ Decomposes an essential matrix into a relative camera pose.

    Args:
        essential (np.ndarray): The essential matrix.

    Returns:
        pose (np.ndarray): The extrinsic matrix of the second camera.
    """
    w = np.array([[0, -1, 0],
                  [1, 0, 0],
                  [0, 0, 1]])
    u, s, v_t = np.linalg.svd(essential)

    # Translation hypothesis
    u_3 = u[:, 2, np.newaxis]

    # Rotation hypotheses
    rot_a = u @ w @ v_t
    rot_b = u @ w.T @ v_t
    rot_a = -rot_a if np.linalg.det(rot_a) < 0 else rot_a
    rot_b = -rot_b if np.linalg.det(rot_b) < 0 else rot_b

    # 4 motion hypotheses as extrinsic matrices
    extr_2_a = np.hstack([rot_a, u_3])
    extr_2_b = np.hstack([rot_a, -u_3])
    extr_2_c = np.hstack([rot_b, u_3])
    extr_2_d = np.hstack([rot_b, -u_3])

    # Assume cam 1 = [I | 0]
    extr_1 = np.array([[1, 0, 0, 0],
                       [0, 1, 0, 0],
                       [0, 0, 1, 0]])
    cam_center_1 = np.zeros(3)
    inv_intrinsics = np.linalg.inv(intrinsics[:3, :3])
    inv_extrinsic_1 = np.linalg.pinv(extr_1)
    vec_in_cam_ref_1 = inv_intrinsics @ im_1_points.T[:3]
    vec_in_world_1 = inv_extrinsic_1 @ vec_in_cam_ref_1

    pose = None

    # Triangulate points for each of the 4 solutions
    for extr_2 in [extr_2_a, extr_2_b, extr_2_c, extr_2_d]:
        cam_center_2 = -extr_2[:3, :3].T @ extr_2[:, 3]  # camera center C = -R^T t (same formula as in the next cell)
        inv_extrinsic_2 = np.linalg.pinv(extr_2)
        vec_in_cam_ref_2 = inv_intrinsics @ im_2_points.T[:3]
        vec_in_world_2 = inv_extrinsic_2 @ vec_in_cam_ref_2

        rel_zs = np.array([])
        for vec_1, vec_2 in zip(vec_in_world_1.T, vec_in_world_2.T):
            triangulated, _, _ = triangulate(cam_center_1, vec_1, cam_center_2, vec_2)
            triangulated = np.append(triangulated, 1)
            x1, y1, z1 = extr_1 @ triangulated  # Triangulated point in camera 1 ref
            x2, y2, z2 = extr_2 @ triangulated  # Triangulated point in camera 2 ref
            rel_zs = np.append(rel_zs, [z1, z2])
        if (rel_zs > 0).all():
            pose = extr_2
    return pose
```

# Visualize relative pose inferred from the essential matrix

The ground truth pose of the second camera is shown in red. The inferred pose of the second camera is shown in blue. As can be seen, the pose can be very decently inferred from point correspondences.

The careful reader will notice that the relative translation is artificially scaled by a "magic" number. This is because the essential matrix has no information on the scale of the relative camera translation. For details on why this is, see [this post](https://stackoverflow.com/questions/69742520/extracting-the-scale-of-translation-vector-that-i-got-from-the-essential-matrix/69980810#69980810). We do, however, know the scale of the scene due to the chessboard calibration object, which is how I manually found the translation scale 8.6.
```python
init_3d_plot()
plot_chessboard(object_points)
plot_camera_wireframe(cam_center_2_orig, vis_scale, np.linalg.inv(extrinsic_2_orig), color='red')

essential = intrinsics[:3, :3].T @ F @ intrinsics[:3, :3]
rel_extrinsic_2 = decompose_essential(essential)

# Convert relative pose from essential matrix decomposition to absolute poses.
rel_extrinsic_2[:, 3] *= 8.6
extrinsic_2 = rel_extrinsic_2 @ extrinsic_1
cam_center_2 = -extrinsic_2[:3, :3].T @ extrinsic_2[:3, 3]
extrinsic_2 = np.vstack([extrinsic_2, [0, 0, 0, 1]])
extrinsics[1] = extrinsic_2
cam_centers[1] = cam_center_2

for idx, (cam_center, extrinsic, image) in enumerate(zip(cam_centers, extrinsics, images)):
    inv_extrinsic = np.linalg.pinv(extrinsic)
    inv_intrinsics = np.linalg.inv(intrinsics)
    plot_camera_wireframe(cam_center, vis_scale, inv_extrinsic)
    plot_picture(image, inv_extrinsic, vis_scale)
ipv.show()
```

## Conclusion

In this notebook we

* visualized the epipolar lines for stereo point correspondences
* computed the fundamental matrix using the 8 point algorithm
* decomposed the essential matrix into a relative pose of a camera

Having an intuition of how epipolar geometry works will serve us well. It is both a tool for problem-solving and a theoretical basis for many computer vision algorithms.

## Sources

1. [Szeliski, Richard. "Computer vision: algorithms and applications." Springer Science & Business Media, 2010](https://szeliski.org/Book/)
2. [Hartley, Richard, and Andrew Zisserman. "Multiple View Geometry in Computer Vision." Cambridge University Press, 2004](https://www.robots.ox.ac.uk/~vgg/hzbook/)

## Appendix

### Why does the SVD contain the least-squares solution?

This section gives an intuitive proof of why, for a homogeneous equation (or data) matrix $\mathbf{A}$ in $\mathbf{A}b = 0$, the last column of the SVD's $\mathbf{V}$ matrix is the least-squares solution.

For a homogeneous least-squares problem, we are interested in finding a parameter vector $b$ such that

$$ \mathbf{A}b = 0 $$

subject to the constraint $ \lVert b \rVert ^{2}=1$ to avoid the trivial solution $b = \mathbf{0}$. Because there may be no exact solution, this is phrased as a minimization problem.

$$ \arg \underset{b}{\min} \lVert \mathbf{A}b \rVert^2_2 $$

The SVD decomposes a matrix into two orthonormal matrices $ \mathbf {U, V^{T}} $ and one diagonal matrix with non-negative real numbers $\mathbf{\Sigma}$. Interpreting the SVD as a PCA, $\mathbf{V}$ encodes the principal axes of the data and $\mathbf{\Sigma}$ the amount of variance of the data when projected onto these principal axes, in descending order, i.e. $\sigma_1 \geq \sigma_2 \geq \dots \geq \sigma_n$ (for details on the SVD $\leftrightarrow$ PCA connection, see [this Princeton tutorial](https://www.cs.princeton.edu/courses/archive/spring12/cos598C/svdchapter.pdf)).

$$ \mathbf{A} \rightarrow \mathbf {U\Sigma V^{T}} $$

Matrix multiplication with an orthonormal matrix does not affect the norm of a vector, thus we can remove $\mathbf{U}$ from the equation.

$$ \arg \underset{b}{\min} \lVert \mathbf {U \Sigma V^{T}}b \rVert = \arg \underset{b}{\min} \lVert \mathbf {\Sigma V^{T}}b \rVert$$

We further substitute with $y = \mathbf{V^{T}} b$, with constraint $ \lVert y \rVert ^{2}=1$ due to the orthonormality of $\mathbf{V^{T}}$.

$$ \arg \underset{b}{\min} \lVert \mathbf {\Sigma V^{T}}b \rVert = \arg \underset{y}{\min} \lVert \mathbf {\Sigma} y \rVert $$

Because the singular values $\sigma$ in $\mathbf{\Sigma}$ encode the variance in descending order, the optimal $y$ is trivially $(0,\dots,0,1)^T$.
$$ y = (0,\dots,0,1)^T$$

$$ \mathbf{V^{T}} b = (0,\dots,0,1)^T$$

$$ \mathbf{V V^{T}}b = \mathbf{V} (0,\dots,0,1)^T $$

$$ b = \mathbf{V} (0,\dots,0,1)^T $$

$$ b = \text{last column of } \mathbf{V} $$

making the optimal $b$ the last column of $\mathbf{V}$.

### Why is the fundamental matrix rank 2?

This proof was taken from [this Quora post by Samarth Brahmbhatt](https://www.quora.com/Why-is-the-fundamental-matrix-in-computer-vision-rank-2). Additional references to other proofs have been added where needed.

#### Intuition

A fundamental matrix is given by the equation $ \mathbf{x'}^T \mathbf{F} \mathbf{x} = 0 $. Now consider an epipolar line $ l' = \mathbf{F} \mathbf{x} $. The right epipole $ \mathbf{e'} $ lies on this line, so $ \mathbf{e'}^T l' = 0 $ or $ \mathbf{e'} ^T \mathbf{F} \mathbf{x} = 0 $ for all $ \mathbf{x} $. This implies that $ \mathbf{e'} ^T \mathbf{F} = 0 $. Similarly, one can prove that $ \mathbf{F} \mathbf{e} = 0 $. Hence $ \mathbf{F} $ has a null space which is not just the zero vector. So $ \mathbf{F} $ is rank deficient.

#### Proof

The proof that $ \mathbf{F} $ has rank exactly 2 comes from the fact that $ \mathbf{F} $ is constructed from the essential matrix $ \mathbf{E}$:

$$ \mathbf{F} = (\mathbf{K'^{-1}})^T \mathbf{E} \mathbf{K^{-1}} $$

where the $ \mathbf{K} $'s are the intrinsic matrices of the two cameras. Now, $ \mathbf{E} = [T_{\times}]R $ where $ R $ is the rotation matrix relating the two camera co-ordinate systems and $ [T_{\times}] = \begin{pmatrix} 0 & -t_z & t_y\\ t_z & 0 & -t_x\\ -t_y & t_x & 0 \end{pmatrix} $. For a proof of this identity, see the [wikipedia page](https://en.wikipedia.org/wiki/Essential_matrix#Derivation_and_definition).

A little bit of manipulation will show that one column of $ [T_{\times}] $ is a linear combination of the other two columns. So $ [T_{\times}] $ has rank 2. For a proof of this see [this row reduced solution](https://www.wolframalpha.com/input/?i2d=true&i=row+echelon+form+of+%7B%7B0%2C-z%2Cy%7D%2C%7Bz%2C0%2C-x%7D%2C%7B-y%2Cx%2C0%7D%7D). Hence any matrix that you construct by multiplying other matrices with $ [T_{\times}] $ (such as $ \mathbf{E} $ and $ \mathbf{F} $) will also have at most rank 2.
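As a small numerical companion to this argument (a sketch with an arbitrary translation vector of my choosing), the skew-symmetric matrix $[T_{\times}]$ indeed has rank 2, so anything built by multiplying it with full-rank matrices has rank at most 2:

```python
import numpy as np

tx, ty, tz = 0.3, -1.2, 2.0                  # arbitrary translation components
T_cross = np.array([[0, -tz,  ty],
                    [tz,  0, -tx],
                    [-ty, tx,  0]])
print(np.linalg.matrix_rank(T_cross))        # 2

# Multiplying by a full-rank matrix cannot raise the rank.
R = np.eye(3)                                # stand-in rotation for the sketch
E = T_cross @ R
print(np.linalg.matrix_rank(E))              # still 2
print(np.linalg.svd(E, compute_uv=False))    # the smallest singular value is 0
```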
d637eab16ba361b5ecbd119185e0710dfccedeff
35,507
ipynb
Jupyter Notebook
3_stereo_geometry.ipynb
maxcrous/multiview_notebooks
bea2f87b8c78c5819337a496a0d330c255b492d1
[ "MIT" ]
47
2021-12-05T16:12:01.000Z
2022-03-28T12:18:23.000Z
3_stereo_geometry.ipynb
maxcrous/multiview_notebooks
bea2f87b8c78c5819337a496a0d330c255b492d1
[ "MIT" ]
null
null
null
3_stereo_geometry.ipynb
maxcrous/multiview_notebooks
bea2f87b8c78c5819337a496a0d330c255b492d1
[ "MIT" ]
7
2021-12-05T18:48:06.000Z
2022-03-26T02:19:43.000Z
51.015805
1,081
0.592447
true
8,127
Qwen/Qwen-72B
1. YES 2. YES
0.835484
0.843895
0.70506
__label__eng_Latn
0.960091
0.476423
```python
%matplotlib inline
```

Sequence Models and Long Short-Term Memory Networks (LSTM)
===================================================

So far we have seen various feed-forward networks. That is, there is no state maintained by the network at all. This might not be the behavior we want. Sequence models are central to NLP: they are models where there is some sort of dependence through time between the inputs. The classical example of a sequence model is the Hidden Markov Model for part-of-speech tagging. Another example is the conditional random field.

A recurrent neural network is a network that maintains some kind of state. For example, its output could be used as part of the next input, so that information can propagate along the sequence as the network passes over it. In the case of an LSTM, for each element in the sequence there is a corresponding hidden state ($h_t$), which in principle can contain information from arbitrarily earlier points in the sequence. We can use the hidden state to predict words in a language model, part-of-speech tags, and a myriad of other things.

LSTMs in Pytorch
~~~~~~~~~~~~~~~~~

Before getting to the example, note a few things. Pytorch's LSTM expects all of its inputs to be 3D tensors. The semantics of each axis of these tensors matter. The first axis is the sequence itself, the second indexes instances in the mini-batch, and the third indexes elements of the input. We haven't discussed mini-batching, so let's ignore that and assume we will always have just 1 dimension on the second axis. If we want to run the sequence model over the sentence "The cow jumped", our input should look like

\begin{align}\begin{bmatrix} \overbrace{q_\text{The}}^\text{row vector} \\ q_\text{cow} \\ q_\text{jumped} \end{bmatrix}\end{align}

Except remember there is an additional second dimension with size 1. In addition, you could go through the sequence one element at a time, in which case the first axis will have size 1 too. Let's see a quick example.

```python
# Author: Robert Guthrie

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

torch.manual_seed(1)
```

```python
lstm = nn.LSTM(3, 3)  # Input dim is 3, output dim is 3
inputs = [torch.randn(1, 3) for _ in range(5)]  # make a sequence of length 5

# initialize the hidden state.
hidden = (torch.randn(1, 1, 3),
          torch.randn(1, 1, 3))
for i in inputs:
    # Step through the sequence one element at a time.
    # after each step, hidden contains the hidden state.
    out, hidden = lstm(i.view(1, 1, -1), hidden)

# alternatively, we can do the entire sequence all at once.
# the first value returned by LSTM is all of the hidden states throughout
# the sequence. the second is just the most recent hidden state
# (compare the last slice of "out" with "hidden" below, they are the same)
# The reason for this is that:
# "out" will give you access to all hidden states in the sequence
# "hidden" will allow you to continue the sequence and backpropagate,
# by passing it as an argument to the lstm at a later time
# Add the extra 2nd dimension
inputs = torch.cat(inputs).view(len(inputs), 1, -1)
hidden = (torch.randn(1, 1, 3), torch.randn(1, 1, 3))  # clean out hidden state
out, hidden = lstm(inputs, hidden)
print(out)
print(hidden)
```

Example: An LSTM for Part-of-Speech Tagging
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In this section, we will use an LSTM to get part-of-speech tags. We will not use Viterbi or Forward-Backward or anything like that, but as a (slightly challenging) exercise for the reader, think about how Viterbi could be used once you have seen how all of this works.

The model is as follows: let our input sentence be $w_1, \dots, w_M$, where $w_i \in V$, our vocabulary. Also, let $T$ be our tag set, and $y_i$ the tag of word $w_i$. Denote our prediction of the tag of word $w_i$ by $\hat{y}_i$.

This is a structure prediction model, where our output is a sequence $\hat{y}_1, \dots, \hat{y}_M$, with $\hat{y}_i \in T$.

To do the prediction, pass an LSTM over the sentence. Denote the hidden state at timestep $i$ by $h_i$. Also, assign each tag a unique index (like how we had word\_to\_ix in the word embeddings section). Then our prediction rule for $\hat{y}_i$ is

\begin{align}\hat{y}_i = \text{argmax}_j \ (\log \text{Softmax}(Ah_i + b))_j\end{align}

That is, take the log softmax of the affine map of the hidden state, and the predicted tag is the tag with the maximum value in this vector. Note that this immediately implies that the dimensionality of the target space of $A$ is $|T|$.

Prepare the data:

```python
def prepare_sequence(seq, to_ix):
    idxs = [to_ix[w] for w in seq]
    return torch.tensor(idxs, dtype=torch.long)


training_data = [
    ("The dog ate the apple".split(), ["DET", "NN", "V", "DET", "NN"]),
    ("Everybody read that book".split(), ["NN", "V", "DET", "NN"])
]
word_to_ix = {}
for sent, tags in training_data:
    for word in sent:
        if word not in word_to_ix:
            word_to_ix[word] = len(word_to_ix)
print(word_to_ix)
tag_to_ix = {"DET": 0, "NN": 1, "V": 2}

# These will usually be more like 32 or 64 dimensional.
# We will keep them small, so we can see how the weights change as we train.
EMBEDDING_DIM = 6
HIDDEN_DIM = 6
```

Create the model:

```python
class LSTMTagger(nn.Module):

    def __init__(self, embedding_dim, hidden_dim, vocab_size, tagset_size):
        super(LSTMTagger, self).__init__()
        self.hidden_dim = hidden_dim

        self.word_embeddings = nn.Embedding(vocab_size, embedding_dim)

        # The LSTM takes word embeddings as inputs, and outputs hidden states
        # with dimensionality hidden_dim.
        self.lstm = nn.LSTM(embedding_dim, hidden_dim)

        # The linear layer that maps from hidden state space to tag space
        self.hidden2tag = nn.Linear(hidden_dim, tagset_size)
        self.hidden = self.init_hidden()

    def init_hidden(self):
        # Before we've done anything, we dont have any hidden state.
        # Refer to the Pytorch documentation to see exactly
        # why they have this dimensionality.
        # The axes semantics are (num_layers, minibatch_size, hidden_dim)
        return (torch.zeros(1, 1, self.hidden_dim),
                torch.zeros(1, 1, self.hidden_dim))

    def forward(self, sentence):
        embeds = self.word_embeddings(sentence)
        lstm_out, self.hidden = self.lstm(
            embeds.view(len(sentence), 1, -1), self.hidden)
        tag_space = self.hidden2tag(lstm_out.view(len(sentence), -1))
        tag_scores = F.log_softmax(tag_space, dim=1)
        return tag_scores
```

Train the model:

```python
model = LSTMTagger(EMBEDDING_DIM, HIDDEN_DIM, len(word_to_ix), len(tag_to_ix))
loss_function = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

# See what the scores are before training
# Note that element i,j of the output is the score for tag j for word i.
# Here we don't need to train, so the code is wrapped in torch.no_grad()
with torch.no_grad():
    inputs = prepare_sequence(training_data[0][0], word_to_ix)
    tag_scores = model(inputs)
    print(tag_scores)

for epoch in range(300):  # again, normally you would NOT do 300 epochs, it is toy data
    for sentence, tags in training_data:
        # Step 1. Remember that Pytorch accumulates gradients.
        # We need to clear them out before each instance
        model.zero_grad()

        # Also, we need to clear out the hidden state of the LSTM,
        # detaching it from its history on the last instance.
        model.hidden = model.init_hidden()

        # Step 2. Get our inputs ready for the network, that is, turn them into
        # Tensors of word indices.
        sentence_in = prepare_sequence(sentence, word_to_ix)
        targets = prepare_sequence(tags, tag_to_ix)

        # Step 3. Run our forward pass.
        tag_scores = model(sentence_in)

        # Step 4. Compute the loss, gradients, and update the parameters by
        # calling optimizer.step()
        loss = loss_function(tag_scores, targets)
        loss.backward()
        optimizer.step()

# See what the scores are after training
with torch.no_grad():
    inputs = prepare_sequence(training_data[0][0], word_to_ix)
    tag_scores = model(inputs)

    # The sentence is "the dog ate the apple".  i,j corresponds to score for tag j
    # for word i. The predicted tag is the maximum scoring tag.
    # Here, we can see the predicted sequence below is 0 1 2 0 1
    # since 0 is index of the maximum value of row 1,
    # 1 is the index of maximum value of row 2, etc.
    # Which is DET NOUN VERB DET NOUN, the correct sequence!
    print(tag_scores)
```

Exercise: Augmenting the LSTM part-of-speech tagger with character-level features
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In the example above, each word had an embedding, which served as the input to the sequence model. Let's augment the word embeddings with a representation derived from the characters of the word. We expect this to help significantly, since character-level information such as affixes has a large bearing on part-of-speech. For example, words with the affix *-ly* are almost always tagged as adverbs in English.

To do this, let $c_w$ be the character-level representation of word $w$. Let $x_w$ be the word embedding, as before. Then the input to our sequence model is the concatenation of $x_w$ and $c_w$. So if $x_w$ has dimension 5 and $c_w$ has dimension 3, then our LSTM should accept an input of dimension 8.

To get the character-level representation, run an LSTM over the characters of a word, and let $c_w$ be the final hidden state of this LSTM. Hints:

* Your new model will have two LSTMs. The original one outputs the POS tag scores, and the new one outputs a character-level representation of each word.
* To do a sequence model over characters, you will have to embed characters. The character embeddings will be the input to the character-level LSTM.

(One possible skeleton for such a model is sketched below.)
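The exercise is left to the reader in the tutorial; purely as a hedged illustration of the structure the hints describe (the class and parameter names below are my own invention, not part of the tutorial), one possible skeleton looks like this:

```python
class CharWordTagger(nn.Module):
    """Sketch: a word-level LSTM tagger whose inputs are word embeddings
    concatenated with a character-level representation of each word."""

    def __init__(self, word_emb_dim, char_emb_dim, char_hidden_dim,
                 hidden_dim, vocab_size, charset_size, tagset_size):
        super(CharWordTagger, self).__init__()
        self.word_embeddings = nn.Embedding(vocab_size, word_emb_dim)
        self.char_embeddings = nn.Embedding(charset_size, char_emb_dim)
        # LSTM over the characters of a single word.
        self.char_lstm = nn.LSTM(char_emb_dim, char_hidden_dim)
        # LSTM over the sentence; input is word embedding + char-level state.
        self.lstm = nn.LSTM(word_emb_dim + char_hidden_dim, hidden_dim)
        self.hidden2tag = nn.Linear(hidden_dim, tagset_size)

    def forward(self, sentence_idxs, char_idxs_per_word):
        word_reps = []
        for word_idx, char_idxs in zip(sentence_idxs, char_idxs_per_word):
            # Run the char LSTM over this word; c_w is its final hidden state.
            char_embeds = self.char_embeddings(char_idxs).view(len(char_idxs), 1, -1)
            _, (h_n, _) = self.char_lstm(char_embeds)
            # Concatenate the word embedding x_w with c_w.
            w_embed = self.word_embeddings(word_idx).view(1, -1)
            word_reps.append(torch.cat([w_embed, h_n.view(1, -1)], dim=1))
        inputs = torch.cat(word_reps).view(len(sentence_idxs), 1, -1)
        lstm_out, _ = self.lstm(inputs)
        tag_space = self.hidden2tag(lstm_out.view(len(sentence_idxs), -1))
        return F.log_softmax(tag_space, dim=1)
```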
a649d85626beb8aedd9aec5bcdbf425341c7fbd2
15,820
ipynb
Jupyter Notebook
build/_downloads/56409bf15ae7b72b139b998779f82a23/sequence_models_tutorial.ipynb
ScorpioDoctor/antares02
631b817d2e98f351d1173b620d15c4a5efed11da
[ "BSD-3-Clause" ]
null
null
null
build/_downloads/56409bf15ae7b72b139b998779f82a23/sequence_models_tutorial.ipynb
ScorpioDoctor/antares02
631b817d2e98f351d1173b620d15c4a5efed11da
[ "BSD-3-Clause" ]
null
null
null
build/_downloads/56409bf15ae7b72b139b998779f82a23/sequence_models_tutorial.ipynb
ScorpioDoctor/antares02
631b817d2e98f351d1173b620d15c4a5efed11da
[ "BSD-3-Clause" ]
null
null
null
125.555556
3,549
0.693552
true
3,031
Qwen/Qwen-72B
1. YES 2. YES
0.76908
0.79053
0.607981
__label__eng_Latn
0.739999
0.250874
``` %load_ext autoreload %autoreload 2 ``` ``` import numpy as np import matplotlib.pyplot as plt import common import sympy as sp %matplotlib inline %config InlineBackend.figure_format='retina' fault_depth = 0.5 def fault_fnc(q): return 0 * q, q - 1 - fault_depth, -np.ones_like(q), 0 * q, np.ones_like(q) surf_L = 10 def flat_fnc(q): return surf_L * q, 0 * q, 0 * q, np.ones_like(q), np.full_like(q, surf_L) def slip_fnc(xhat): # This must be zero at the endpoints! return np.where( xhat < -0.9, (1.0 + xhat) * 10, np.where(xhat < 0.9, 1.0, (1.0 - xhat) * 10) ) plt.plot(slip_fnc(np.linspace(-1, 1, 100))) ``` ``` qr_fault = common.gauss_rule(50) fault = fault_fnc(qr_fault[0]) qr_flat = common.gauss_rule(200) flat = flat_fnc(qr_flat[0]) A, A_info = common.interaction_matrix( common.double_layer_matrix, flat, qr_flat, flat, qr_flat ) B, B_info = common.interaction_matrix( common.double_layer_matrix, flat, qr_flat, fault, qr_fault ) A = A[:, 0, :] B = B[:, 0, :] ``` ``` slip = slip_fnc(qr_fault[0]) v = B.dot(slip) ``` ``` surf_disp = np.linalg.solve(A - 0.5 * np.eye(A.shape[0]), v) plt.plot(surf_disp) plt.show() ``` ``` def hypersingular_matrix(surface, quad_rule, obsx, obsy): srcx, srcy, srcnx, srcny, curve_jacobian = surface dx = obsx[:, None] - srcx[None, :] dy = obsy[:, None] - srcy[None, :] r2 = dx ** 2 + dy ** 2 obsnx = np.full_like(obsx, 1.0) obsny = 0.0 * obsx srcn_dot_obsn = srcnx[None, :] * obsnx[:, None] + srcny[None, :] * obsny[:, None] d_dot_srcn = dx * srcnx[None, :] + dy * srcny[None, :] d_dot_obsn = dx * obsnx[:, None] + dy * obsny[:, None] # The definition of the hypersingular kernel. integrand = (srcn_dot_obsn - (2 * d_dot_srcn * d_dot_obsn / r2)) / (2 * np.pi * r2) return integrand * curve_jacobian * quad_rule[1][None, :] ``` ``` def interior_eval( kernel, src_surface, src_quad_rule, src_slip, obsx, obsy, offset_mult, kappa, qbx_p, visualize_centers=False, ): n_qbx = src_surface[0].shape[0] * kappa quad_rule_qbx = common.gauss_rule(n_qbx) surface_qbx = common.interp_surface(src_surface, src_quad_rule[0], quad_rule_qbx[0]) slip_qbx = common.interp_fnc(src_slip, src_quad_rule[0], quad_rule_qbx[0]) qbx_center_x1, qbx_center_y1, qbx_r1 = common.qbx_choose_centers( src_surface, src_quad_rule, mult=offset_mult, direction=1.0 ) qbx_center_x2, qbx_center_y2, qbx_r2 = common.qbx_choose_centers( src_surface, src_quad_rule, mult=offset_mult, direction=-1.0 ) qbx_center_x = np.concatenate([qbx_center_x1, qbx_center_x2]) qbx_center_y = np.concatenate([qbx_center_y1, qbx_center_y2]) qbx_r = np.concatenate([qbx_r1, qbx_r2]) if visualize_centers: plt.plot(surface_qbx[0], surface_qbx[1], "k-") plt.plot(qbx_center_x, qbx_center_y, "r.") plt.show() Qexpand = common.qbx_expand_matrix( kernel, surface_qbx, quad_rule_qbx, qbx_center_x, qbx_center_y, qbx_r, qbx_p=qbx_p, ) qbx_coeffs = Qexpand.dot(slip_qbx) disp_qbx = common.qbx_interior_eval( kernel, src_surface, src_quad_rule, src_slip, obsx, obsy, qbx_center_x, qbx_center_y, qbx_r, qbx_coeffs, ) return disp_qbx ``` ``` nobs = 100 zoomx = [-2.5, 2.5] zoomy = [-5.1, -0.1] # zoomx = [-25, 25] # zoomy = [-45, 5] xs = np.linspace(*zoomx, nobs) ys = np.linspace(*zoomy, nobs) obsx, obsy = np.meshgrid(xs, ys) disp_flat = interior_eval( common.double_layer_matrix, flat, qr_flat, surf_disp, obsx.flatten(), obsy.flatten(), offset_mult=5, kappa=2, qbx_p=10, visualize_centers=True, ).reshape(obsx.shape) disp_fault = interior_eval( common.double_layer_matrix, fault, qr_fault, slip, obsx.flatten(), obsy.flatten(), offset_mult=5, kappa=2, qbx_p=10, 
visualize_centers=True, ).reshape(obsx.shape) disp_full = disp_flat + disp_fault levels = np.linspace(-0.5, 0.5, 21) cntf = plt.contourf(obsx, obsy, disp_full, levels=levels, extend="both") plt.contour( obsx, obsy, disp_full, colors="k", linestyles="-", linewidths=0.5, levels=levels, extend="both", ) plt.plot(flat[0], flat[1], "k-", linewidth=1.5) plt.plot(fault[0], fault[1], "k-", linewidth=1.5) plt.colorbar(cntf) plt.xlim(zoomx) plt.ylim(zoomy) plt.show() ``` ``` nobs = 100 zoomx = [-2.5, 2.5] zoomy = [-5.1, -0.1] xs = np.linspace(*zoomx, nobs) ys = np.linspace(*zoomy, nobs) obsx, obsy = np.meshgrid(xs, ys) stress_flat = interior_eval( common.hypersingular_matrix, flat, qr_flat, surf_disp, obsx.flatten(), obsy.flatten(), offset_mult=5, kappa=2, qbx_p=10, ).reshape((*obsx.shape, 2)) stress_fault = interior_eval( common.hypersingular_matrix, fault, qr_fault, slip, obsx.flatten(), obsy.flatten(), offset_mult=5, kappa=2, qbx_p=10, ).reshape((*obsx.shape, 2)) stress_full = stress_flat + stress_fault levels = np.linspace(-0.5, 0.5, 21) plt.figure(figsize=(8, 4)) for d in range(2): plt.subplot(1, 2, 1 + d) cntf = plt.contourf(obsx, obsy, stress_full[:, :, d], levels=levels, extend="both") plt.contour( obsx, obsy, stress_full[:, :, d], colors="k", linestyles="-", linewidths=0.5, levels=levels, extend="both", ) plt.plot(flat[0], flat[1], "k-", linewidth=1.5) plt.plot(fault[0], fault[1], "k-", linewidth=1.5) plt.colorbar(cntf) plt.xlim(zoomx) plt.ylim(zoomy) plt.tight_layout() plt.show() ```
3bc33e2a61d2d3f5d6e4da08d994494a530726b3
9,254
ipynb
Jupyter Notebook
tutorials/volumetric/gravity.ipynb
tbenthompson/BIE_tutorials
02cd56ab7e63e36afc4a10db17072076541aab77
[ "MIT" ]
1
2021-06-18T18:02:55.000Z
2021-06-18T18:02:55.000Z
tutorials/volumetric/gravity.ipynb
tbenthompson/BIE_tutorials
02cd56ab7e63e36afc4a10db17072076541aab77
[ "MIT" ]
null
null
null
tutorials/volumetric/gravity.ipynb
tbenthompson/BIE_tutorials
02cd56ab7e63e36afc4a10db17072076541aab77
[ "MIT" ]
1
2021-07-14T19:47:00.000Z
2021-07-14T19:47:00.000Z
27.78979
98
0.484007
true
1,884
Qwen/Qwen-72B
1. YES 2. YES
0.885631
0.721743
0.639198
__label__eng_Latn
0.182349
0.323403
```python
# HIDDEN
from datascience import *
from prob140 import *
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
%matplotlib inline
import math
from scipy import stats
from sympy import *
init_printing()
```

## Independence ##

Jointly distributed random variables $X$ and $Y$ are *independent* if

$$
P(X \in A, Y \in B) = P(X \in A)P(Y \in B)
$$

for all intervals $A$ and $B$.

Let $X$ have density $f_X$, let $Y$ have density $f_Y$, and suppose $X$ and $Y$ are independent. Then if $f$ is the joint density of $X$ and $Y$,

$$
\begin{align*}
f(x, y)dxdy &\sim P(X \in dx, Y \in dy) \\
&= P(X \in dx)P(Y \in dy) ~~~~~ \text{(independence)} \\
&= f_X(x)dx f_Y(y)dy \\
&= f_X(x)f_Y(y)dxdy
\end{align*}
$$

Thus if $X$ and $Y$ are independent then their joint density is given by

$$
f(x, y) = f_X(x)f_Y(y)
$$

This is the *product rule for densities*: the joint density of two independent random variables is the product of their densities.

### Independent Standard Normal Random Variables ###

Suppose $X$ and $Y$ are i.i.d. standard normal random variables. Then their joint density is given by

$$
f(x, y) = \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}x^2} \cdot \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}y^2}, ~~~~ -\infty < x, y < \infty
$$

Equivalently,

$$
f(x, y) = \frac{1}{2\pi} e^{-\frac{1}{2}(x^2 + y^2)}, ~~~~ -\infty < x, y < \infty
$$

Here is a graph of the joint density surface.

```python
def indep_standard_normals(x,y):
    return 1/(2*math.pi) * np.exp(-0.5*(x**2 + y**2))

Plot_3d((-4, 4), (-4, 4), indep_standard_normals, rstride=4, cstride=4)
```

Notice the circular symmetry of the surface. This is because the formula for the joint density involves the pair $(x, y)$ only through the expression $x^2 + y^2$, which depends only on the distance of the point $(x, y)$ from the origin.

Notice also that $P(X = Y) = 0$, as the probability is the volume over a line. This is true of all pairs of independent random variables with a joint density: $P(X = Y) = 0$. So for example $P(X > Y) = P(X \ge Y)$. You don't have to worry about whether or not the inequality should be strict.

### The Larger of Two Independent Exponential Random Variables ###

Let $X$ and $Y$ be independent random variables. Suppose $X$ has the exponential $(\lambda)$ distribution and $Y$ has the exponential $(\mu)$ distribution. The goal of this example is to find $P(Y > X)$.

By the product rule, the joint density of $X$ and $Y$ is given by

$$
f(x, y) ~ = ~ \lambda e^{-\lambda x} \mu e^{-\mu y}, ~~~~ x > 0, ~ y > 0
$$

The graph below shows the joint density surface in the case $\lambda = 0.5$ and $\mu = 0.25$, so that $E(X) = 2$ and $E(Y) = 4$.

```python
def independent_exp(x, y):
    return 0.5 * 0.25 * np.e**(-0.5*x - 0.25*y)

Plot_3d((0, 10), (0, 10), independent_exp)
```

To find $P(Y > X)$ we must integrate the joint density over the upper triangle of the first quadrant, a portion of which is shown below.

```python
# NO CODE
plt.axes().set_aspect('equal')
xx = np.arange(0, 10.1, 0.1)
yy = 10*np.ones(len(xx))
plt.fill_between(xx, xx, yy, alpha=0.3)
plt.xlabel('$x$')
plt.ylabel('$y$', rotation=0)
plt.title('$Y > X$ (portion of infinite region)');
```

The probability is therefore

$$
P(Y > X) ~ = ~ \int_0^\infty \int_x^\infty \lambda e^{-\lambda x} \mu e^{-\mu y} dy dx
$$

We can do this double integral without much calculus, just by using probability facts.
$$ \begin{align*} P(Y > X) &= \int_0^\infty \int_x^\infty \lambda e^{-\lambda x} \mu e^{-\mu y} dy dx \\ \\ &= \int_0^\infty \lambda e^{-\lambda x} \big{(} \int_x^\infty \mu e^{-\mu y} dy\big{)} dx \\ \\ &= \int_0^\infty \lambda e^{-\lambda x} e^{-\mu x} dx ~~~~~~ \text{(survival function of } Y\text{, evaluated at } x \text{)} \\ \\ &= \frac{\lambda}{\lambda + \mu} \int_0^\infty (\lambda + \mu) e^{-(\lambda + \mu)x} dx \\ \\ &= \frac{\lambda}{\lambda + \mu} ~~~~~~~ \text{(total integral of exponential } (\lambda + \mu) \text{ density is 1)} \end{align*} $$ Thus $$ P(Y > X) ~ = ~ \frac{\lambda}{\lambda + \mu} $$ Analogously, $$ P(X > Y) ~ = ~ \frac{\mu}{\lambda + \mu} $$ Notice that the two chances are proportional to the parameters. This is consistent with intuition if you think of $X$ and $Y$ as two lifetimes. If $\lambda$ is large, the corresponding lifetime $X$ is likely to be short, and therefore $Y$ is likely to be larger than $X$ as the formula implies. If $\lambda = \mu$ then $P(Y > X) = 1/2$ which you can see by symmetry since $P(X = Y) = 0$. If we had attempted the double integral in the other order – first $x$, then $y$ – we would have had to do more work. The integral is $$ \int_0^\infty \int_0^y \lambda e^{-\lambda x} \mu e^{-\mu y} dx dy $$ Let's take the easy way out by using `SymPy` to confirm that we will get the same answer. ```python # Create the symbols; they are all positive x = Symbol('x', positive=True) y = Symbol('y', positive=True) lamda = Symbol('lamda', positive=True) mu = Symbol('mu', positive=True) ``` ```python # Construct the expression for the joint density f_X = lamda * exp(-lamda * x) f_Y = mu * exp(-mu * y) joint_density = f_X * f_Y joint_density ``` ```python # Display the integral – first x, then y Integral(joint_density, (x, 0, y), (y, 0, oo)) ``` ```python # Evaluate the integral answer = Integral(joint_density, (x, 0, y), (y, 0, oo)).doit() answer ``` ```python # Confirm that it is the same # as what we got by integrating in the other order simplify(answer) ``` ```python ```
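As an additional numerical check of the result $P(Y > X) = \lambda/(\lambda + \mu)$ (a minimal simulation sketch; the sample size and seed are my own choices), simulating independent exponentials with the same rates as in the plot above agrees with the formula:

```python
np.random.seed(0)
lam_, mu_ = 0.5, 0.25                             # the rates used in the surface plot
n_sim = 100000
X_sim = np.random.exponential(1/lam_, n_sim)      # numpy parametrizes by the mean 1/rate
Y_sim = np.random.exponential(1/mu_, n_sim)
print(np.mean(Y_sim > X_sim))                     # close to 2/3
print(lam_/(lam_ + mu_))                          # 0.666...
```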
ab84e57141bdd1a77b7eb754234da6ff3885be1e
232,131
ipynb
Jupyter Notebook
content/Chapter_17/02_Independence.ipynb
dcroce/jupyter-book
9ac4b502af8e8c5c3b96f5ec138602a0d3d8a624
[ "MIT" ]
null
null
null
content/Chapter_17/02_Independence.ipynb
dcroce/jupyter-book
9ac4b502af8e8c5c3b96f5ec138602a0d3d8a624
[ "MIT" ]
null
null
null
content/Chapter_17/02_Independence.ipynb
dcroce/jupyter-book
9ac4b502af8e8c5c3b96f5ec138602a0d3d8a624
[ "MIT" ]
null
null
null
578.880299
104,836
0.938113
true
1,804
Qwen/Qwen-72B
1. YES 2. YES
0.843895
0.90053
0.759953
__label__eng_Latn
0.978797
0.603957
<a href="https://colab.research.google.com/github/HenriqueCCdA/ElementosFinitosCurso/blob/main/notebooks/Elemento_finitos_Exercicios_ex1.ipynb" target="_parent"></a> ```python import numpy as np from scipy.linalg import lu_factor, lu_solve import matplotlib.pyplot as plt import matplotlib as mpl ``` # Paramentros de entrada ```python f = -2 dudx_0 = -2 u3 = 0.0 ``` ## No 1 \begin{equation} k_{11} = \int_0^1 \frac{dN_1}{dx} \frac{dN_1}{dx} dx = \int_0^{1/2} \frac{dN_1}{dx} \frac{dN_1}{dx} dx = \int_0^{1/2} (-2) (-2) dx = (-2) (=2) \left(\frac{1}{2} - 0 \right) = 2 \end{equation} \begin{equation} k_{12} = \int_0^1 \frac{dN_1}{dx} \frac{dN_2}{dx} dx = \int_0^{1/2} \frac{dN_1}{dx} \frac{dN_2}{dx} dx = \int_0^{1/2} (-2) (2) dx = (-2) (2) \left(\frac{1}{2} - 0 \right) = -2 \end{equation} \begin{equation} k_{13} = \int_0^1 \frac{dN_1}{dx} \frac{dN_3}{dx} dx = 0 \end{equation} \begin{equation} f_{1} = \int_0^1 f N_1dx + \frac{du}{dx}(1) N_1(1) - \frac{du}{dx}(0) N_1(0) = \frac{f}{4} - \frac{du}{dx}(0) \end{equation} ```python k11 = 2.0 k12 = -2.0 k13 = 0.0 f1 = f/4 - dudx_0 ``` ## No 2 \begin{equation} k_{21} = \int_0^1 \frac{dN_2}{dx} \frac{dN_1}{dx} dx = 2 \end{equation} \begin{equation} k_{22} = \int_0^1 \frac{dN_2}{dx} \frac{dN_2}{dx} dx = \int_0^{1/2} \frac{dN_2}{dx} \frac{dN_2}{dx} dx + \int_{1/2}^{1} \frac{dN_2}{dx} \frac{dN_2}{dx} dx = \int_0^{1/2} (2) (2) dx + \int_{1/2}^{1} (-2) (-2) dx = 2 + 2 = 4 \end{equation} \begin{equation} k_{23} = \int_0^1 \frac{dN_2}{dx} \frac{dN_3}{dx} dx = \int_{1/2}^1 \frac{dN_2}{dx} \frac{dN_3}{dx} dx = \int_{1/2}^1 (2) (-2) dx = -2 \end{equation} \begin{equation} f_{2} = \int_0^1 f N_2dx + \frac{du}{dx}(1) N_2(1) - \frac{du}{dx}(0) N_2(0) = \frac{f}{2} \end{equation} ```python k21 = k12 k22 = 4.0 k23 = -2.0 f2 = f/2 ``` # Sistema de equações \begin{equation} \begin{bmatrix} k_{11} & k_{12}\\ k_{21} & k_{22} \end{bmatrix} * \begin{bmatrix} u_1\\ u_2 \end{bmatrix} = \begin{bmatrix} f_1 - k_{13} * u_3\\ f_2 - k_{23} * u_3 \end{bmatrix} \end{equation} ## Matriz de Coeficiente Real ```python K = np.array([ [k11, k12], [k21, k22], ]) K ``` array([[ 2., -2.], [-2., 4.]]) ## Vetor de forças ```python F = np.array([ f1 - k13 * u3, f2 - k23 * u3 ]) F ``` array([ 1.5, -1. 
]) ```python lu, piv = lu_factor(K) u1, u2 = lu_solve((lu, piv), F) ``` ```python u_numerico_coef = [ u1, u2, u3] u_numerico_coef ``` [1.0, 0.25, 0.0] ```python x_malha = [0, 0.5, 1.0] x_malha ``` [0, 0.5, 1.0] # Solução Exata Solução $$ u(x) = x ^ 2 - 2 x + 1 $$ Derivada da solução $$ \frac{du}{dx} = 2 x - 2 $$ ```python def u_analitico(x): return x**2 - 2.0 * x + 1.0 def dudx_analitico(x): return 2.0 * x - 2.0 ``` # Solução númerica Aproximação $$u(x) = N_1(x) u_1 + N_2(x) u_2 + N_3(x) u_3$$ **Funções de interpolação:** * $N_1$: $$ N_1(x) = \begin{cases} &1 * -2 x &\text{ se } & 0 < x < 1/2 \\ &0 &\text{ se } & 1/2 < x < 1 \end{cases} $$ * $N_2$: $$ N_2(x) = \begin{cases} &2 x &\text{ se } & 0 < x < 1/2 \\ &2 - 2 x &\text{ se } & 1/2 < x < 1 \end{cases} $$ * $N_3$: $$ N_3(x) = \begin{cases} &0 &\text{ se } & 0 < x < 1/2 \\ &2 x - 1 &\text{ se } & 1/2 < x < 1 \end{cases} $$ ```python def u_numerico(x, u_numerico_coef, x_malha): u1, u2, u3 = u_numerico_coef x1, x2, x3 = x_malha # 0 < x < 1/2 if x1 <= x < x2: N1 = 1.0 - 2.0 * x N2 = 2.0 * x N3 = 0.0 # 1/2 < x < 1 elif x2 <= x <= x3: N1 = 0.0 N2 = 2.0 * ( 1.0 - x) N3 = 2*x - 1.0 return N1*u1 + N2 * u2 + N3 *u3 ``` **Solução númerica:** $$ \frac{du}{dx}(x) = \frac{dN_1}{dx}(x) u_1 + \frac{dN_2}{dx}(x) u_2 + \frac{dN_3}{dx}(x) u_3 $$ **Funções de interpolação:** * $\frac{dN_1}{dx}$: $$ \frac{dN_1}{dx} = \begin{cases} &-2 &\text{ se } & 0 < x < 1/2 \\ &0 &\text{ se } & 1/2 < x < 1 \end{cases} $$ * $N_2$: $$ \frac{dN_2}{dx} = \begin{cases} & 2 &\text{ se } & 0 < x < 1/2 \\ &-2 &\text{ se } & 1/2 < x < 1 \end{cases} $$ * $N_3$: $$ \frac{dN_3}{dx} = \begin{cases} &0 &\text{ se } & 0 < x < 1/2 \\ &2 &\text{ se } & 1/2 < x < 1 \end{cases} $$ ```python def dudx_numerico(x, u_numerico_coef, x_malha): u1, u2, u3 = u_numerico_coef x1, x2, x3 = x_malha # 0 < x < 1/2 if x1 <= x < x2: dN1dx = -2.0 dN2dx = 2.0 dN3dx = 0.0 # 1/2 < x < 1 elif x2 <= x <= x3: dN1dx = 0.0 dN2dx = -2.0 dN3dx = 2.0 return dN1dx * u1 + dN2dx * u2 + dN3dx *u3 ``` # Plotando os resultados ```python x = np.linspace(0, 1, 50) ``` ```python u_exato = [ u_analitico(xi) for xi in x ] dudx_exato = [ dudx_analitico(xi) for xi in x ] ``` ```python u_num = [ u_numerico(xi, u_numerico_coef, x_malha) for xi in x ] dudx_num = [ dudx_numerico(xi, u_numerico_coef, x_malha) for xi in x ] ``` ```python mpl.rcParams['figure.figsize'] = (20, 10) # fig, (ax1, ax2) = plt.subplots(ncols = 2) # ax1.set_title('Solução', fontsize = 18) ax1.plot(x, u_exato, label = 'Analito') ax1.plot(x, u_num , label = 'Numerico') ax1.set_ylabel('u(x)', fontsize = 14) ax1.set_xlabel('x', fontsize = 14) # ax2.set_title('Derivada', fontsize = 18) ax2.plot(x, dudx_exato) ax2.plot(x, dudx_num) ax2.set_ylabel(r'$\frac{du}{dx}(x)$', fontsize = 14) ax2.set_xlabel('x', fontsize = 14) # ax1.grid(ls = '--') ax2.grid(ls = '--') # ax1.legend(fontsize=14) plt.show() ``` ```python ```
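As a small quantitative complement to the plots (a sketch; the choice of error measures is mine), one can also compare the numerical and analytical solutions on the sampling grid:

```python
u_exact_arr = np.array(u_exato)
u_num_arr = np.array(u_num)

# Maximum pointwise error and a discrete L2-type error over the plotting grid.
err_max = np.max(np.abs(u_num_arr - u_exact_arr))
err_l2 = np.sqrt(np.trapz((u_num_arr - u_exact_arr)**2, x))
print(err_max, err_l2)
```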
3a155dbd5a8df0a949dbe0a26bdb02523af3ba96
118,637
ipynb
Jupyter Notebook
notebooks/Elemento_finitos_Exercicios_ex1.ipynb
HenriqueCCdA/ElementosFinitosCurso
5cd37d3d3d77a5b6234fad5fca871d907558dff4
[ "MIT" ]
2
2021-09-28T00:31:07.000Z
2021-09-28T00:31:25.000Z
notebooks/Elemento_finitos_Exercicios_ex1.ipynb
HenriqueCCdA/ElementosFinitosCurso
5cd37d3d3d77a5b6234fad5fca871d907558dff4
[ "MIT" ]
null
null
null
notebooks/Elemento_finitos_Exercicios_ex1.ipynb
HenriqueCCdA/ElementosFinitosCurso
5cd37d3d3d77a5b6234fad5fca871d907558dff4
[ "MIT" ]
null
null
null
188.312698
91,014
0.88308
true
2,714
Qwen/Qwen-72B
1. YES 2. YES
0.901921
0.785309
0.708286
__label__yue_Hant
0.321136
0.483917
```python # Header starts here. from sympy.physics.units import * from sympy import * # Rounding: import decimal from decimal import Decimal as DX from copy import deepcopy def iso_round(obj, pv, rounding=decimal.ROUND_HALF_EVEN): import sympy """ Rounding acc. to DIN EN ISO 80000-1:2013-08 place value = Rundestellenwert """ assert pv in set([ # place value # round to: 1, # 1 0.1, # 1st digit after decimal 0.01, # 2nd 0.001, # 3rd 0.0001, # 4th 0.00001, # 5th 0.000001, # 6th 0.0000001, # 7th 0.00000001, # 8th 0.000000001, # 9th 0.0000000001, # 10th ]) objc = deepcopy(obj) try: tmp = DX(str(float(objc))) objc = tmp.quantize(DX(str(pv)), rounding=rounding) except: for i in range(len(objc)): tmp = DX(str(float(objc[i]))) objc[i] = tmp.quantize(DX(str(pv)), rounding=rounding) return objc # LateX: kwargs = {} kwargs["mat_str"] = "bmatrix" kwargs["mat_delim"] = "" # kwargs["symbol_names"] = {FB: "F^{\mathsf B}", } # Units: (k, M, G ) = ( 10**3, 10**6, 10**9 ) (mm, cm) = ( m/1000, m/100 ) Newton = kg*m/s**2 Pa = Newton/m**2 MPa = M*Pa GPa = G*Pa kN = k*Newton deg = pi/180 half = S(1)/2 # Header ends here. # # https://colab.research.google.com/github/kassbohm/tm-snippets/blob/master/ipynb/TM_2/4_BB/2_BL/2.4.2.G-FEM_cc.ipynb pprint("\nSolution 1: 2 Elements and using Symmetry:") a, q, EI = var("l, q, EI") # length of element: l = a/4 l2 = l*l l3 = l*l*l K = EI/l3 K *= Matrix( [ [ 4*l2 , -6*l , 2*l2 , 6*l , 0 , 0 ], [ -6*l , 12 , -6*l , -12 , 0 , 0 ], [ 2*l2 , -6*l , 8*l2 , 0 , 2*l2 , 6*l ], [ 6*l , -12 , 0 , 24 , -6*l , -12 ], [ 0 , 0 , 2*l2 , -6*l , 4*l2 , 6*l ], [ 0 , 0 , 6*l , -12 , 6*l , 12 ], ] ) w1,p2,w2,p3 = var("w1,p2,w2,p3") M1,F3 = var("M1,F3") u = Matrix([0,w1,p2,w2,p3,0]) f = Matrix([M1,0,0,0,0,F3]) + q * Matrix([-l2/12, l/2, 0, l,l2/12, l/2 ]) unknowns = [w1,p2,w2,p3,M1,F3] eq = Eq(K*u , f) sol = solve(eq, unknowns) w1,p2,w2,p3 = sol[w1],sol[p2],sol[w2],sol[p3] l, B = var("l, B") sub_list = [ (a, 4*l), (EI, B), ] pprint("\n(w1, p2, w2, p3) / (l³ q / EI):") for x in [w1,p2,w2,p3]: tmp = x.subs(sub_list) tmp /= l**3*q/B print(tmp) pprint("\n(w1, ψ2, w2, ψ3) / ( q a³ / (EI) ):") for x in [w1,p2,w2,p3]: tmp = x / (q*a**3 / EI) pprint(tmp) pprint("\nSolution 2: 2 Elements + Symmetry, Stiffness matrix as in Klein:") l = var("l") l2 = l*l l3 = l*l*l # Klein: sub_list_Klein=[ (a, 2 *m), (q, 1 *Newton/m), (EI, 1 *Newton*m**2), ] # Only partial matrix to find deformations only K = Matrix( [ [ 12 , -6*l , -12 , 0 ], [ -6*l , 8*l2 , 0 , 2*l2 ], [ -12 , 0 , 24 , -6*l ], [ 0 , 2*l2 , -6*l , 4*l2 ], ]) K *= EI/l3 w1, p2, w2, p3 = var("w1, p2, w2, p3") u = Matrix([w1, p2, w2, p3]) f = q * Matrix([l/2, 0, l, l2/12]) eq = Eq(K*u , f) sol = solve(eq, [w1,p2,w2,p3]) w1, p2, w2, p3 = sol[w1], sol[p2], sol[w2], sol[p3] pprint("\n(w1, ψ2, w2, ψ3) / ( q a³ / (EI) ):") fac = a**3*q/EI tmp = w1.subs(l, a/4) pprint(tmp/fac) w1 = tmp.subs(sub_list_Klein) tmp = p2.subs(l, a/4) pprint(tmp/fac) p2 = tmp.subs(sub_list_Klein) tmp = w2.subs(l, a/4) pprint(tmp/fac) w2 = tmp.subs(sub_list_Klein) tmp = p3.subs(l, a/4) pprint(tmp/fac) p3 = tmp.subs(sub_list_Klein) pprint("\n(w1 / m, ψ2 / rad, w2 / m, ψ3 / rad):") w1 /= m w1 = iso_round(w1, 0.001) pprint(w1) p2 = iso_round(p2, 0.001) pprint(p2) w2 /= m w2 = iso_round(w2, 0.001) pprint(w2) p3 = iso_round(p3, 0.001) pprint(p3) pprint("\nSection loads:") x, xi, l, B, q = var("x, xi, l, B, q") N1 = -xi**3 + 2*xi**2 - xi N2 = 2*xi**3 - 3*xi**2 + 1 N3 = -xi**3 + xi**2 N4 = -2*xi**3 + 3*xi**2 N5 = xi**4/24 - xi**3/12 + xi**2/24 # pprint("\ntmp:") # tmp = N1.subs(xi,half) # 
pprint(tmp) # tmp = N3.subs(xi,half) # pprint(tmp) # exit() N = Matrix([l*N1, N2, l*N3, N4, l**4/B * N5]) dNx = diff(N, xi) / l d2Nx = diff(dNx, xi) / l A = - B * d2Nx fac = l**3*q/B w1 = fac * 10*l/3 p2 = fac * S(11)/6 w2 = fac * 19*l/8 p3 = fac * S(8)/3 pprint("\n- B w'':") u1 = Matrix([0, w1, p2, w2, q]) u2 = Matrix([p2, w2, p3, 0, q]) tmp = A.T*u1 tmp = tmp[0] tmp = tmp.simplify() pprint(tmp) a = var("a") tmp = tmp.subs(l, a/4) pprint(tmp) pprint("\nSolution 3: 1 element only, disregarding symmetry:") # Using 1 element only: p1, p2 = var("ψ₁, ψ₂") fac = - q*l**3/B eq1 = Eq(4*p1 + 2*p2, fac*S(1)/12) eq2 = Eq(2*p1 + 4*p2, fac*S(-1)/12) sol = solve([eq1, eq2], [p1, p2]) pprint(sol) pprint("\nInterpolated displacement at x = 1/2 l:") p1, p2 = sol[p1], sol[p2] u = Matrix([p1, 0, p2, 0, q],) tmp = N.dot(u) tmp = tmp.subs(xi, S(1)/2) pprint(tmp) # Solution 1: 2 Elements + Symmetry: # # (w1, p2, w2, p3) / (l³ q / EI): # 10*l/3 # 11/6 # 19*l/8 # 8/3 # # (w1, ψ2, w2, ψ3) / ( q a³ / (EI) ): # 5⋅l # ─── # 384 # 11 # ─── # 384 # 19⋅l # ──── # 2048 # 1/24 # # Solution 2: 2 Elements + Symmetry, Stiffness matrix as in Klein: # # (w1, ψ2, w2, ψ3) / ( q a³ / (EI) ): # 5⋅l # ─── # 384 # 11 # ─── # 384 # 19⋅l # ──── # 2048 # 1/24 # # (w1 / m, ψ2 / rad, w2 / m, ψ3 / rad): # 0.208 # 0.229 # 0.148 # 0.333 # # Section loads: # # - B w'': # 2 ⎛ 2 ⎞ # l ⋅q⋅⎝- ξ + 4⎠ # ─────────────── # 2 # 2 ⎛ 2 ⎞ # a ⋅q⋅⎝- ξ + 4⎠ # ─────────────── # 32 # # Solution 3: 1 element only, disregarding symmetry: # ⎧ 3 3 ⎫ # ⎪ -l ⋅q l ⋅q⎪ # ⎨ψ₁: ──────, ψ₂: ────⎬ # ⎪ 24⋅B 24⋅B⎪ # ⎩ ⎭ # # Interpolated displacement at x = 1/2 l: # 4 # 5⋅l ⋅q # ────── # 384⋅B ```
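The midspan value printed above, $5 q l^4 / (384 B)$, coincides with the classical deflection of a simply supported beam under a uniform load, which is the load case the numbers here suggest. A minimal sympy sketch of that closed-form check (my own addition, assuming the textbook deflection curve $w(x) = \frac{q x (l^3 - 2 l x^2 + x^3)}{24\,EI}$):

```python
from sympy import symbols, simplify

x, l, q, EI = symbols("x l q EI", positive=True)

# Classical deflection of a simply supported beam under uniform load q
w = q * x * (l**3 - 2 * l * x**2 + x**3) / (24 * EI)

# Midspan deflection reduces to 5*q*l**4/(384*EI), matching the FEM interpolation above
print(simplify(w.subs(x, l / 2)))

# Sanity check: the deflection vanishes at both supports
print(w.subs(x, 0), simplify(w.subs(x, l)))
```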
c867936a8abf4e57a836aa55552ad967f0f61c32
11,005
ipynb
Jupyter Notebook
ipynb/TM_2/4_BB/2_BL/2.4.2.G-FEM_cc.ipynb
kassbohm/tm-snippets
5e0621ba2470116e54643b740d1b68b9f28bff12
[ "MIT" ]
null
null
null
ipynb/TM_2/4_BB/2_BL/2.4.2.G-FEM_cc.ipynb
kassbohm/tm-snippets
5e0621ba2470116e54643b740d1b68b9f28bff12
[ "MIT" ]
null
null
null
ipynb/TM_2/4_BB/2_BL/2.4.2.G-FEM_cc.ipynb
kassbohm/tm-snippets
5e0621ba2470116e54643b740d1b68b9f28bff12
[ "MIT" ]
null
null
null
33.551829
130
0.367742
true
2,788
Qwen/Qwen-72B
1. YES 2. YES
0.785309
0.73412
0.57651
__label__eng_Latn
0.143314
0.177757
<div style = "font-family:Georgia; font-size:2.5vw; color:lightblue; font-style:bold; text-align:center; background:url('./Animations/Title Background.gif') no-repeat center; background-size:cover)"> <br><br> Histograms of Oriented Gradients (HOG) <br><br><br> </div> <h1 style = "text-align:left">Introduction</h1> As we saw with the ORB algorithm, we can use keypoints in images to do keypoint-based matching to detect objects in images. These type of algorithms work great when you want to detect objects that have a lot of consistent internal features that are not affected by the background. For example, these algorithms work well for facial detection because faces have a lot of consistent internal features that don’t get affected by the image background, such as the eyes, nose, and mouth. However, these type of algorithms don’t work so well when attempting to do more general object recognition, say for example, pedestrian detection in images. The reason is that people don’t have consistent internal features, like faces do, because the body shape and style of every person is different (see Fig. 1). This means that every person is going to have a different set of internal features, and so we need something that can more generally describe a person. <br> <figure> <figcaption style = "text-align:left; font-style:italic">Fig. 1. - Pedestrians.</figcaption> </figure> <br> One option is to try to detect pedestrians by their contours instead. Detecting objects in images by their contours (boundaries) is very challenging because we have to deal with the difficulties brought about by the contrast between the background and the foreground. For example, suppose you wanted to detect a pedestrian in an image that is walking in front of a white building and she is wearing a white coat and black pants (see Fig. 2). We can see in Fig. 2, that since the background of the image is mostly white, the black pants are going to have a very high contrast, but the coat, since it is white as well, is going to have very low contrast. In this case, detecting the edges of pants is going to be easy but detecting the edges of the coat is going to be very difficult. This is where **HOG** comes in. HOG stands for **Histograms of Oriented Gradients** and it was first introduced by Navneet Dalal and Bill Triggs in 2005. <br> <figure> <figcaption style = "text-align:left; font-style:italic">Fig. 2. - High and Low Contrast.</figcaption> </figure> <br> The HOG algorithm works by creating histograms of the distribution of gradient orientations in an image and then normalizing them in a very special way. This special normalization is what makes HOG so effective at detecting the edges of objects even in cases where the contrast is very low. These normalized histograms are put together into a feature vector, known as the HOG descriptor, that can be used to train a machine learning algorithm, such as a Support Vector Machine (SVM), to detect objects in images based on their boundaries (edges). Due to its great success and reliability, HOG has become one of the most widely used algorithms in computer vison for object detection. In this notebook, you will learn: * How the HOG algorithm works * How to use OpenCV to create a HOG descriptor * How to visualize the HOG descriptor. # The HOG Algorithm As its name suggests, the HOG algorithm, is based on creating histograms from the orientation of image gradients. The HOG algorithm is implemented in a series of steps: 1. 
Given the image of particular object, set a detection window (region of interest) that covers the entire object in the image (see Fig. 3). 2. Calculate the magnitude and direction of the gradient for each individual pixel in the detection window. 3. Divide the detection window into connected *cells* of pixels, with all cells being of the same size (see Fig. 3). The size of the cells is a free parameter and it is usually chosen so as to match the scale of the features that want to be detected. For example, in a 64 x 128 pixel detection window, square cells 6 to 8 pixels wide are suitable for detecting human limbs. 4. Create a Histogram for each cell, by first grouping the gradient directions of all pixels in each cell into a particular number of orientation (angular) bins; and then adding up the gradient magnitudes of the gradients in each angular bin (see Fig. 3). The number of bins in the histogram is a free parameter and it is usually set to 9 angular bins. 5. Group adjacent cells into *blocks* (see Fig. 3). The number of cells in each block is a free parameter and all blocks must be of the same size. The distance between each block (known as the stride) is a free parameter but it is usually set to half the block size, in which case you will get overlapping blocks (*see video below*). The HOG algorithm has been shown empirically to work better with overlapping blocks. 6. Use the cells contained within each block to normalize the cell histograms in that block (see Fig. 3). If you have overlapping blocks this means that most cells will be normalized with respect to different blocks (*see video below*). Therefore, the same cell may have several different normalizations. 7. Collect all the normalized histograms from all the blocks into a single feature vector called the HOG descriptor. 8. Use the resulting HOG descriptors from many images of the same type of object to train a machine learning algorithm, such as an SVM, to detect those type of objects in images. For example, you could use the HOG descriptors from many images of pedestrians to train an SVM to detect pedestrians in images. The training is done with both positive a negative examples of the object you want detect in the image. 9. Once the SVM has been trained, a sliding window approach is used to try to detect and locate objects in images. Detecting an object in the image entails finding the part of the image that looks similar to the HOG pattern learned by the SVM. <br> <figure> <figcaption style = "text-align:left; font-style:italic">Fig. 3. - HOG Diagram.</figcaption> </figure> <br> <figure> <figcaption style = "text-align:left; font-style:italic">Vid. 1. - HOG Animation.</figcaption> </figure> # Why The HOG Algorithm Works As we learned above, HOG creates histograms by adding the magnitude of the gradients in particular orientations in localized portions of the image called *cells*. By doing this we guarantee that stronger gradients will contribute more to the magnitude of their respective angular bin, while the effects of weak and randomly oriented gradients resulting from noise are minimized. In this manner the histograms tell us the dominant gradient orientation of each cell. ### Dealing with contrast Now, the magnitude of the dominant orientation can vary widely due to variations in local illumination and the contrast between the background and the foreground. To account for the background-foreground contrast differences, the HOG algorithm tries to detect edges locally. 
In order to do this, it defines groups of cells, called **blocks**, and normalizes the histograms using this local group of cells. By normalizing locally, the HOG algorithm can detect the edges in each block very reliably; this is called **block normalization**. In addition to using block normalization, the HOG algorithm also uses overlapping blocks to increase its performance. By using overlapping blocks, each cell contributes several independent components to the final HOG descriptor, where each component corresponds to a cell being normalized with respect to a different block. This may seem redundant but, it has been shown empirically that by normalizing each cell several times with respect to different local blocks, the performance of the HOG algorithm increases dramatically. ### Loading Images and Importing Resources The first step in building our HOG descriptor is to load the required packages into Python and to load our image. We start by using OpenCV to load an image of a triangle tile. Since, the `cv2.imread()` function loads images as BGR we will convert our image to RGB so we can display it with the correct colors. As usual we will convert our BGR image to Gray Scale for analysis. ```python import cv2 import numpy as np import matplotlib.pyplot as plt # Set the default figure size plt.rcParams['figure.figsize'] = [17.0, 7.0] # Load the image image = cv2.imread('./images/triangle_tile.jpeg') # Convert the original image to RGB original_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) # Convert the original image to gray scale gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) # Print the shape of the original and gray scale images print('The original image has shape: ', original_image.shape) print('The gray scale image has shape: ', gray_image.shape) # Display the images plt.subplot(121) plt.imshow(original_image) plt.title('Original Image') plt.subplot(122) plt.imshow(gray_image, cmap='gray') plt.title('Gray Scale Image') plt.show() ``` The original image has shape: (250, 250, 3) The gray scale image has shape: (250, 250) <matplotlib.figure.Figure at 0x7f0a81ff4e48> # Creating The HOG Descriptor We will be using OpenCV’s `HOGDescriptor` class to create the HOG descriptor. The parameters of the HOG descriptor are setup using the `HOGDescriptor()` function. The parameters of the `HOGDescriptor()` function and their default values are given below: `cv2.HOGDescriptor(win_size = (64, 128), block_size = (16, 16), block_stride = (8, 8), cell_size = (8, 8), nbins = 9, win_sigma = DEFAULT_WIN_SIGMA, threshold_L2hys = 0.2, gamma_correction = true, nlevels = DEFAULT_NLEVELS)` Parameters: * **win_size** – *Size* Size of detection window in pixels (*width, height*). Defines the region of interest. Must be an integer multiple of cell size. * **block_size** – *Size* Block size in pixels (*width, height*). Defines how many cells are in each block. Must be an integer multiple of cell size and it must be smaller than the detection window. The smaller the block the finer detail you will get. * **block_stride** – *Size* Block stride in pixels (*horizontal, vertical*). It must be an integer multiple of cell size. The `block_stride` defines the distance between adjecent blocks, for example, 8 pixels horizontally and 8 pixels vertically. Longer `block_strides` makes the algorithm run faster (because less blocks are evaluated) but the algorithm may not perform as well. * **cell_size** – *Size* Cell size in pixels (*width, height*). Determines the size fo your cell. 
The smaller the cell the finer detail you will get. * **nbins** – *int* Number of bins for the histograms. Determines the number of angular bins used to make the histograms. With more bins you capture more gradient directions. HOG uses unsigned gradients, so the angular bins will have values between 0 and 180 degrees. * **win_sigma** – *double* Gaussian smoothing window parameter. The performance of the HOG algorithm can be improved by smoothing the pixels near the edges of the blocks by applying a Gaussian spatial window to each pixel before computing the histograms. * **threshold_L2hys** – *double* L2-Hys (Lowe-style clipped L2 norm) normalization method shrinkage. The L2-Hys method is used to normalize the blocks and it consists of an L2-norm followed by clipping and a renormalization. The clipping limits the maximum value of the descriptor vector for each block to have the value of the given threshold (0.2 by default). After the clipping the descriptor vector is renormalized as described in *IJCV*, 60(2):91-110, 2004. * **gamma_correction** – *bool* Flag to specify whether the gamma correction preprocessing is required or not. Performing gamma correction slightly increases the performance of the HOG algorithm. * **nlevels** – *int* Maximum number of detection window increases. As we can see, the `cv2.HOGDescriptor()`function supports a wide range of parameters. The first few arguments (`block_size, block_stride, cell_size`, and `nbins`) are probably the ones you are most likely to change. The other parameters can be safely left at their default values and you will get good results. In the code below, we will use the `cv2.HOGDescriptor()`function to set the cell size, block size, block stride, and the number of bins for the histograms of the HOG descriptor. We will then use `.compute(image)`method to compute the HOG descriptor (feature vector) for the given `image`. ```python # Specify the parameters for our HOG descriptor # Cell Size in pixels (width, height). Must be smaller than the size of the detection window # and must be chosen so that the resulting Block Size is smaller than the detection window. cell_size = (6, 6) # Number of cells per block in each direction (x, y). Must be chosen so that the resulting # Block Size is smaller than the detection window num_cells_per_block = (2, 2) # Block Size in pixels (width, height). Must be an integer multiple of Cell Size. # The Block Size must be smaller than the detection window block_size = (num_cells_per_block[0] * cell_size[0], num_cells_per_block[1] * cell_size[1]) # Calculate the number of cells that fit in our image in the x and y directions x_cells = gray_image.shape[1] // cell_size[0] y_cells = gray_image.shape[0] // cell_size[1] # Horizontal distance between blocks in units of Cell Size. Must be an integer and it must # be set such that (x_cells - num_cells_per_block[0]) / h_stride = integer. h_stride = 1 # Vertical distance between blocks in units of Cell Size. Must be an integer and it must # be set such that (y_cells - num_cells_per_block[1]) / v_stride = integer. v_stride = 1 # Block Stride in pixels (horizantal, vertical). Must be an integer multiple of Cell Size block_stride = (cell_size[0] * h_stride, cell_size[1] * v_stride) # Number of gradient orientation bins num_bins = 9 # Specify the size of the detection window (Region of Interest) in pixels (width, height). # It must be an integer multiple of Cell Size and it must cover the entire image. 
Because # the detection window must be an integer multiple of cell size, depending on the size of # your cells, the resulting detection window might be slightly smaller than the image. # This is perfectly ok. win_size = (x_cells * cell_size[0] , y_cells * cell_size[1]) # Print the shape of the gray scale image for reference print('\nThe gray scale image has shape: ', gray_image.shape) print() # Print the parameters of our HOG descriptor print('HOG Descriptor Parameters:\n') print('Window Size:', win_size) print('Cell Size:', cell_size) print('Block Size:', block_size) print('Block Stride:', block_stride) print('Number of Bins:', num_bins) print() # Set the parameters of the HOG descriptor using the variables defined above hog = cv2.HOGDescriptor(win_size, block_size, block_stride, cell_size, num_bins) # Compute the HOG Descriptor for the gray scale image hog_descriptor = hog.compute(gray_image) ``` The gray scale image has shape: (250, 250) HOG Descriptor Parameters: Window Size: (246, 246) Cell Size: (6, 6) Block Size: (12, 12) Block Stride: (6, 6) Number of Bins: 9 # Number of Elements In The HOG Descriptor The resulting HOG Descriptor (feature vector), contains the normalized histograms from all cells from all blocks in the detection window concatenated in one long vector. Therefore, the size of the HOG feature vector will be given by the total number of blocks in the detection window, multiplied by the number of cells per block, times the number of orientation bins: <span class="mathquill"> \begin{equation} \mbox{total_elements} = (\mbox{total_number_of_blocks})\mbox{ } \times \mbox{ } (\mbox{number_cells_per_block})\mbox{ } \times \mbox{ } (\mbox{number_of_bins}) \end{equation} </span> If we don’t have overlapping blocks (*i.e.* the `block_stride`equals the `block_size`), the total number of blocks can be easily calculated by dividing the size of the detection window by the block size. However, in the general case we have to take into account the fact that we have overlapping blocks. To find the total number of blocks in the general case (*i.e.* for any `block_stride` and `block_size`), we can use the formula given below: <span class="mathquill"> \begin{equation} \mbox{Total}_i = \left( \frac{\mbox{block_size}_i}{\mbox{block_stride}_i} \right)\left( \frac{\mbox{window_size}_i}{\mbox{block_size}_i} \right) - \left [\left( \frac{\mbox{block_size}_i}{\mbox{block_stride}_i} \right) - 1 \right]; \mbox{ for } i = x,y \end{equation} </span> Where <span class="mathquill">Total$_x$</span>, is the total number of blocks along the width of the detection window, and <span class="mathquill">Total$_y$</span>, is the total number of blocks along the height of the detection window. This formula for <span class="mathquill">Total$_x$</span> and <span class="mathquill">Total$_y$</span>, takes into account the extra blocks that result from overlapping. After calculating <span class="mathquill">Total$_x$</span> and <span class="mathquill">Total$_y$</span>, we can get the total number of blocks in the detection window by multiplying <span class="mathquill">Total$_x$ $\times$ Total$_y$</span>. The above formula can be simplified considerably because the `block_size`, `block_stride`, and `window_size`are all defined in terms of the `cell_size`. 
By making all the appropriate substitutions and cancelations the above formula reduces to: <span class="mathquill"> \begin{equation} \mbox{Total}_i = \left(\frac{\mbox{cells}_i - \mbox{num_cells_per_block}_i}{N_i}\right) + 1\mbox{ }; \mbox{ for } i = x,y \end{equation} </span> Where <span class="mathquill">cells$_x$</span> is the total number of cells along the width of the detection window, and <span class="mathquill">cells$_y$</span>, is the total number of cells along the height of the detection window. And <span class="mathquill">$N_x$</span> is the horizontal block stride in units of `cell_size` and <span class="mathquill">$N_y$</span> is the vertical block stride in units of `cell_size`. Let's calculate what the number of elements for the HOG feature vector should be and check that it matches the shape of the HOG Descriptor calculated above. ```python # Calculate the total number of blocks along the width of the detection window tot_bx = np.uint32(((x_cells - num_cells_per_block[0]) / h_stride) + 1) # Calculate the total number of blocks along the height of the detection window tot_by = np.uint32(((y_cells - num_cells_per_block[1]) / v_stride) + 1) # Calculate the total number of elements in the feature vector tot_els = (tot_bx) * (tot_by) * num_cells_per_block[0] * num_cells_per_block[1] * num_bins # Print the total number of elements the HOG feature vector should have print('\nThe total number of elements in the HOG Feature Vector should be: ', tot_bx, 'x', tot_by, 'x', num_cells_per_block[0], 'x', num_cells_per_block[1], 'x', num_bins, '=', tot_els) # Print the shape of the HOG Descriptor to see that it matches the above print('\nThe HOG Descriptor has shape:', hog_descriptor.shape) print() ``` The total number of elements in the HOG Feature Vector should be: 40 x 40 x 2 x 2 x 9 = 57600 The HOG Descriptor has shape: (57600, 1) # Visualizing The HOG Descriptor We can visualize the HOG Descriptor by plotting the histogram associated with each cell as a collection of vectors. To do this, we will plot each bin in the histogram as a single vector whose magnitude is given by the height of the bin and its orientation is given by the angular bin that its associated with. Since any given cell might have multiple histograms associated with it, due to the overlapping blocks, we will choose to average all the histograms for each cell to produce a single histogram for each cell. OpenCV has no easy way to visualize the HOG Descriptor, so we have to do some manipulation first in order to visualize it. We will start by reshaping the HOG Descriptor in order to make our calculations easier. We will then compute the average histogram of each cell and finally we will convert the histogram bins into vectors. Once we have the vectors, we plot the corresponding vectors for each cell in an image. The code below produces an interactive plot so that you can interact with the figure. The figure contains: * the grayscale image, * the HOG Descriptor (feature vector), * a zoomed-in portion of the HOG Descriptor, and * the histogram of the selected cell. **You can click anywhere on the gray scale image or the HOG Descriptor image to select a particular cell**. Once you click on either image a *magenta* rectangle will appear showing the cell you selected. The Zoom Window will show you a zoomed in version of the HOG descriptor around the selected cell; and the histogram plot will show you the corresponding histogram for the selected cell. 
The interactive window also has buttons at the bottom that allow for other functionality, such as panning, and giving you the option to save the figure if desired. The home button returns the figure to its default value. **NOTE**: If you are running this notebook in the Udacity workspace, there is around a 2 second lag in the interactive plot. This means that if you click in the image to zoom in, it will take about 2 seconds for the plot to refresh. ```python %matplotlib notebook import copy import matplotlib.patches as patches # Set the default figure size plt.rcParams['figure.figsize'] = [9.8, 9] # Reshape the feature vector to [blocks_y, blocks_x, num_cells_per_block_x, num_cells_per_block_y, num_bins]. # The blocks_x and blocks_y will be transposed so that the first index (blocks_y) referes to the row number # and the second index to the column number. This will be useful later when we plot the feature vector, so # that the feature vector indexing matches the image indexing. hog_descriptor_reshaped = hog_descriptor.reshape(tot_bx, tot_by, num_cells_per_block[0], num_cells_per_block[1], num_bins).transpose((1, 0, 2, 3, 4)) # Print the shape of the feature vector for reference print('The feature vector has shape:', hog_descriptor.shape) # Print the reshaped feature vector print('The reshaped feature vector has shape:', hog_descriptor_reshaped.shape) # Create an array that will hold the average gradients for each cell ave_grad = np.zeros((y_cells, x_cells, num_bins)) # Print the shape of the ave_grad array for reference print('The average gradient array has shape: ', ave_grad.shape) # Create an array that will count the number of histograms per cell hist_counter = np.zeros((y_cells, x_cells, 1)) # Add up all the histograms for each cell and count the number of histograms per cell for i in range (num_cells_per_block[0]): for j in range(num_cells_per_block[1]): ave_grad[i:tot_by + i, j:tot_bx + j] += hog_descriptor_reshaped[:, :, i, j, :] hist_counter[i:tot_by + i, j:tot_bx + j] += 1 # Calculate the average gradient for each cell ave_grad /= hist_counter # Calculate the total number of vectors we have in all the cells. len_vecs = ave_grad.shape[0] * ave_grad.shape[1] * ave_grad.shape[2] # Create an array that has num_bins equally spaced between 0 and 180 degress in radians. deg = np.linspace(0, np.pi, num_bins, endpoint = False) # Each cell will have a histogram with num_bins. For each cell, plot each bin as a vector (with its magnitude # equal to the height of the bin in the histogram, and its angle corresponding to the bin in the histogram). # To do this, create rank 1 arrays that will hold the (x,y)-coordinate of all the vectors in all the cells in the # image. Also, create the rank 1 arrays that will hold all the (U,V)-components of all the vectors in all the # cells in the image. Create the arrays that will hold all the vector positons and components. U = np.zeros((len_vecs)) V = np.zeros((len_vecs)) X = np.zeros((len_vecs)) Y = np.zeros((len_vecs)) # Set the counter to zero counter = 0 # Use the cosine and sine functions to calculate the vector components (U,V) from their maginitudes. Remember the # cosine and sine functions take angles in radians. 
Calculate the vector positions and magnitudes from the # average gradient array for i in range(ave_grad.shape[0]): for j in range(ave_grad.shape[1]): for k in range(ave_grad.shape[2]): U[counter] = ave_grad[i,j,k] * np.cos(deg[k]) V[counter] = ave_grad[i,j,k] * np.sin(deg[k]) X[counter] = (cell_size[0] / 2) + (cell_size[0] * i) Y[counter] = (cell_size[1] / 2) + (cell_size[1] * j) counter = counter + 1 # Create the bins in degress to plot our histogram. angle_axis = np.linspace(0, 180, num_bins, endpoint = False) angle_axis += ((angle_axis[1] - angle_axis[0]) / 2) # Create a figure with 4 subplots arranged in 2 x 2 fig, ((a,b),(c,d)) = plt.subplots(2,2) # Set the title of each subplot a.set(title = 'Gray Scale Image\n(Click to Zoom)') b.set(title = 'HOG Descriptor\n(Click to Zoom)') c.set(title = 'Zoom Window', xlim = (0, 18), ylim = (0, 18), autoscale_on = False) d.set(title = 'Histogram of Gradients') # Plot the gray scale image a.imshow(gray_image, cmap = 'gray') a.set_aspect(aspect = 1) # Plot the feature vector (HOG Descriptor) b.quiver(Y, X, U, V, color = 'white', headwidth = 0, headlength = 0, scale_units = 'inches', scale = 5) b.invert_yaxis() b.set_aspect(aspect = 1) b.set_facecolor('black') # Define function for interactive zoom def onpress(event): #Unless the left mouse button is pressed do nothing if event.button != 1: return # Only accept clicks for subplots a and b if event.inaxes in [a, b]: # Get mouse click coordinates x, y = event.xdata, event.ydata # Select the cell closest to the mouse click coordinates cell_num_x = np.uint32(x / cell_size[0]) cell_num_y = np.uint32(y / cell_size[1]) # Set the edge coordinates of the rectangle patch edgex = x - (x % cell_size[0]) edgey = y - (y % cell_size[1]) # Create a rectangle patch that matches the the cell selected above rect = patches.Rectangle((edgex, edgey), cell_size[0], cell_size[1], linewidth = 1, edgecolor = 'magenta', facecolor='none') # A single patch can only be used in a single plot. 
Create copies # of the patch to use in the other subplots rect2 = copy.copy(rect) rect3 = copy.copy(rect) # Update all subplots a.clear() a.set(title = 'Gray Scale Image\n(Click to Zoom)') a.imshow(gray_image, cmap = 'gray') a.set_aspect(aspect = 1) a.add_patch(rect) b.clear() b.set(title = 'HOG Descriptor\n(Click to Zoom)') b.quiver(Y, X, U, V, color = 'white', headwidth = 0, headlength = 0, scale_units = 'inches', scale = 5) b.invert_yaxis() b.set_aspect(aspect = 1) b.set_facecolor('black') b.add_patch(rect2) c.clear() c.set(title = 'Zoom Window') c.quiver(Y, X, U, V, color = 'white', headwidth = 0, headlength = 0, scale_units = 'inches', scale = 1) c.set_xlim(edgex - cell_size[0], edgex + (2 * cell_size[0])) c.set_ylim(edgey - cell_size[1], edgey + (2 * cell_size[1])) c.invert_yaxis() c.set_aspect(aspect = 1) c.set_facecolor('black') c.add_patch(rect3) d.clear() d.set(title = 'Histogram of Gradients') d.grid() d.set_xlim(0, 180) d.set_xticks(angle_axis) d.set_xlabel('Angle') d.bar(angle_axis, ave_grad[cell_num_y, cell_num_x, :], 180 // num_bins, align = 'center', alpha = 0.5, linewidth = 1.2, edgecolor = 'k') fig.canvas.draw() # Create a connection between the figure and the mouse click fig.canvas.mpl_connect('button_press_event', onpress) plt.show() ``` The feature vector has shape: (57600, 1) The reshaped feature vector has shape: (40, 40, 2, 2, 9) The average gradient array has shape: (41, 41, 9) <IPython.core.display.Javascript object> # Understanding The Histograms Let's take a look at a couple of snapshots of the above figure to see if the histograms for the selected cell make sense. Let's start looking at a cell that is inside a triangle and not near an edge: <br> <figure> <figcaption style = "text-align:center; font-style:italic">Fig. 4. - Histograms Inside a Triangle.</figcaption> </figure> <br> In this case, since the triangle is nearly all of the same color there shouldn't be any dominant gradient in the selected cell. As we can clearly see in the Zoom Window and the histogram, this is indeed the case. We have many gradients but none of them clearly dominates over the other. Now let’s take a look at a cell that is near a horizontal edge: <br> <figure> <figcaption style = "text-align:center; font-style:italic">Fig. 5. - Histograms Near a Horizontal Edge.</figcaption> </figure> <br> Remember that edges are areas of an image where the intensity changes abruptly. In these cases, we will have a high intensity gradient in some particular direction. This is exactly what we see in the corresponding histogram and Zoom Window for the selected cell. In the Zoom Window, we can see that the dominant gradient is pointing up, almost at 90 degrees, since that’s the direction in which there is a sharp change in intensity. Therefore, we should expect to see the 90-degree bin in the histogram to dominate strongly over the others. This is in fact what we see. Now let’s take a look at a cell that is near a vertical edge: <br> <figure> <figcaption style = "text-align:center; font-style:italic">Fig. 6. - Histograms Near a Vertical Edge.</figcaption> </figure> <br> In this case we expect the dominant gradient in the cell to be horizontal, close to 180 degrees, since that’s the direction in which there is a sharp change in intensity. Therefore, we should expect to see the 170-degree bin in the histogram to dominate strongly over the others. This is what we see in the histogram but we also see that there is another dominant gradient in the cell, namely the one in the 10-degree bin. 
The reason for this is that the HOG algorithm is using unsigned gradients, which means 0 degrees and 180 degrees are considered the same. Therefore, when the histograms are being created, angles between 160 and 180 degrees contribute proportionally to both the 10-degree bin and the 170-degree bin. This results in there being two dominant gradients in the cell near the vertical edge instead of just one. To conclude, let’s take a look at a cell that is near a diagonal edge. <br> <figure> <figcaption style = "text-align:center; font-style:italic">Fig. 7. - Histograms Near a Diagonal Edge.</figcaption> </figure> <br> To understand what we are seeing, let’s first remember that gradients have an *x*-component, and a *y*-component, just like vectors. Therefore, the resulting orientation of a gradient is going to be given by the vector sum of its components. For this reason, on vertical edges the gradients are horizontal, because they only have an x-component, as we saw in Figure 6. While on horizontal edges the gradients are vertical, because they only have a y-component, as we saw in Figure 5. Consequently, on diagonal edges, the gradients are also going to be diagonal because both the *x* and *y* components are non-zero. Since the diagonal edges in the image are close to 45 degrees, we should expect to see a dominant gradient orientation in the 50-degree bin. This is in fact what we see in the histogram but, just like in Figure 6, we see there are two dominant gradients instead of just one. The reason for this is that when the histograms are being created, angles that are near the boundaries of bins contribute proportionally to the adjacent bins. For example, a gradient with an angle of 40 degrees is right in the middle of the 30-degree and 50-degree bins. Therefore, the magnitude of the gradient is split evenly into the 30-degree and 50-degree bins. This results in there being two dominant gradients in the cell near the diagonal edge instead of just one. Now that you know how HOG is implemented, in the workspace you will find a notebook named *Examples*. In there, you will be able to set your own parameters for the HOG descriptor for various images. Have fun! ```python ```
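Step 8 of the algorithm description above feeds HOG descriptors to an SVM. The snippet below is only a minimal, hypothetical sketch of that stage using scikit-learn; the `descriptors` and `labels` arrays are random placeholders standing in for descriptors you would compute (e.g. with `hog.compute(...).ravel()`) from your own positive and negative training images.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Placeholder training set: one flattened HOG descriptor per row, label 1 for
# "object present" and 0 for "object absent". Replace with real descriptors.
rng = np.random.default_rng(0)
descriptors = rng.random((20, 57600)).astype(np.float32)  # 57600 elements, as computed above
labels = np.array([1] * 10 + [0] * 10)

# Train a linear SVM on the descriptors
clf = LinearSVC(C=1.0, max_iter=10000)
clf.fit(descriptors, labels)

# Score a new detection window by classifying its descriptor
new_descriptor = rng.random((1, 57600)).astype(np.float32)
print(clf.predict(new_descriptor))
```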
e6609f2a02c8c6ba3cadf7ba8e99786b9a78516c
457,124
ipynb
Jupyter Notebook
Feature vectors/1. HOG.ipynb
IllgamhoDuck/CVND
06f9530b79c977d33c6220a9bba38cbcf8d164b9
[ "MIT" ]
null
null
null
Feature vectors/1. HOG.ipynb
IllgamhoDuck/CVND
06f9530b79c977d33c6220a9bba38cbcf8d164b9
[ "MIT" ]
null
null
null
Feature vectors/1. HOG.ipynb
IllgamhoDuck/CVND
06f9530b79c977d33c6220a9bba38cbcf8d164b9
[ "MIT" ]
1
2020-03-29T00:40:55.000Z
2020-03-29T00:40:55.000Z
307.000672
380,981
0.900609
true
7,909
Qwen/Qwen-72B
1. YES 2. YES
0.672332
0.760651
0.51141
__label__eng_Latn
0.99732
0.026505
# Project 3: Percolation - FYS4460 Author: Øyvind Sigmundson Schøyen In this project we'll explore _percolation_ from the project shown here: https://www.uio.no/studier/emner/matnat/fys/FYS4460/v19/notes/project2017-ob3.pdf ```python import numpy as np import matplotlib.pyplot as plt import scipy.ndimage as spi import skimage import tqdm import sklearn.linear_model import seaborn as sns sns.set(color_codes=True) ``` ## Computing the density of spanning clusters We call $P(p, L)$ the _density of spanning clusters._ It is defined as the probability for a site to belong to a _spanning cluster._ A spanning cluster is defined as a cluster having an extent over the entire system size along one axis. We compute the density by extracting all spanning clusters from a system and then counting all the set sites in the spanning clusters. Dividing the number of set sites contained in all the spanning clusters by the total number of sites we find $P(p, L)$. ```python def compute_density_of_spanning_clusters(system): num_rows, num_cols = system.shape total_mass = num_rows * num_cols # Label and count the number of connected clusters labels, num_features = spi.measurements.label(system) # Collect regions props = skimage.measure.regionprops(labels) num_percolating = 0 mass = 0 # Iterate through regions and check if they span the entire system for prop in props: min_row, min_col, max_row, max_col = prop.bbox if max_row - min_row == num_rows or max_col - min_col == num_cols: num_percolating += 1 mass += prop.area return mass / total_mass, num_percolating ``` The function `compute_density_of_spanning_clusters` takes in a system and computes the density of spanning clusters, $P(p, L)$. It is general in the sense that it does not distinguish between quadratic or rectangular systems. Below we plot $P(p, L)$ for different $L$ and $p$. ```python num_systems = 101 p_arr = np.linspace(0, 1, num_systems) L_list = [50, 100, 200, 500] plt.figure(figsize=(14, 10)) for L in L_list: density_arr = np.zeros_like(p_arr) pi_arr = np.zeros_like(p_arr) num_rows = L num_cols = L num_percolating_systems = 0 for i in tqdm.tqdm_notebook(range(num_systems)): p = p_arr[i] system = np.random.choice([0, 1], size=(num_rows, num_cols), p=[1 - p, p]) density_arr[i], num_percolating = compute_density_of_spanning_clusters(system) num_percolating_systems += num_percolating > 0 plt.plot(p_arr, density_arr, label=fr"$L = {L}$") plt.legend(loc="best") plt.xlabel(r"$p$") plt.ylabel(r"$P(p, L)$") plt.title(r"Density of spanning clusters for varying system sizes $L$") plt.show() ``` Here we can see the expected behaviour of the density of spanning clusters as a function of the probability $p$. At $p = p_c \approx 0.59$ the density increases almost step-wise before taking on a linear shape. ## Compute density of spanning clusters and spanning probability for a given system size $L$ We now want to create a function which can compute the density of spanning clusters $P(p, L)$ and the spanning probability $\Pi(p, L)$ for a given system size $L$. A way we can do this is by generating a system of size $L$. We can then vary the probability $p$ over a set range and mask the different sites. This lets us compute the density of spanning clusters and the spanning probability efficiently on a given system. 
```python def compute_density_and_probability(system, p_arr): num_rows, num_cols = system.shape total_mass = num_rows * num_cols percolating = np.zeros(*p_arr.shape) mass = np.zeros(*p_arr.shape) for i, p in enumerate(p_arr): p_system = system < p # Label and count the number of connected clusters labels, num_features = spi.measurements.label(p_system) # Collect regions props = skimage.measure.regionprops(labels) # Iterate through regions and check if they span the entire system for prop in props: min_row, min_col, max_row, max_col = prop.bbox if max_row - min_row == num_rows or max_col - min_col == num_cols: percolating[i] += 1 mass[i] += prop.area return mass / total_mass, percolating > 0 ``` We now run $20$ experiments, that is, we regenerate each system $20$ times, and find the percolation probability and density of the spanning clusters for systems with $L = 2, 4, 8, 16, 32, 64, 128, 256$. ```python p_arr = np.linspace(0, 1, 51) num_experiments = 20 L_list = [2 ** (i + 1) for i in range(8)] P_list = [] Pi_list = [] ``` ```python for L in L_list: P_mat = np.zeros((num_experiments, *p_arr.shape)) Pi_mat = np.zeros_like(P_mat) for n in tqdm.tqdm_notebook(range(num_experiments)): system = np.random.random((L, L)) P_mat[n], Pi_mat[n] = compute_density_and_probability(system, p_arr) P_list.append(np.average(P_mat, axis=0)) Pi_list.append(np.average(Pi_mat, axis=0)) ``` ```python plt.figure(figsize=(14, 10)) for L, P in zip(L_list, P_list): plt.plot(p_arr, P, label=fr"$L = {L}$") plt.xlabel(r"$p$") plt.ylabel(r"$P(p, L)$") plt.legend(loc="best") plt.title(r"Density of spanning clusters for varying system sizes $L$") plt.show() ``` Here we can see the density of spanning clusters for $L = 2, 4, 8, 16, 32, 64, 128, 256$. The trend shows that we start getting percolation around $p = p_c \approx 0.57$. We also see the characteristic logarithmic behaviour before the density becomes more or less linear up to $p = 1$. We clearly see how an increase in system size makes the density exhibit more of the expected behaviour. ```python plt.figure(figsize=(14, 10)) for L, Pi in zip(L_list, Pi_list): plt.plot(p_arr, Pi, label=fr"$L = {L}$") plt.legend(loc="best") plt.xlabel(r"$p$") plt.ylabel(r"$\Pi(p, L)$") plt.title(r"Percolation probability for varying system sizes $L$") plt.show() ``` This plot shows the percolation probability $\Pi(p, L)$ for $L = 2, 4, 8, 16, 32, 64, 128, 256$. We see much of the same behaviour as for the density of spanning clusters, that is, that $p_c \approx 0.57$, as this is where the largest system starts to percolate. Here we expect to see a step-function at $p = p_c$, and we can see that as the system size increases, this behaviour is approximated well. ## Form of the density of spanning clusters when $p > p_c$ We know that when $p > p_c$ we get \begin{align} P(p, L) \propto (p - p_c)^{\beta}. \end{align} To find a value for $\beta$, we take the logarithm on both sides yielding \begin{align} \log\Bigl[P(p, L)\Bigr] \propto \beta \log(p - p_c). \end{align} We can thus compute $\beta$ as the slope of the curve. For the approximate value of the critical percolation probability, we set $p_c = 0.59275$.
```python p_c = 0.59275 # Critical percolation probability p_arr = np.linspace(p_c, 1, 101 + 1)[1:] # p > p_c log_ppc = np.log(p_arr - p_c) # Dependent variable ``` ```python P_list = [] Pi_list = [] for L in L_list: P_mat = np.zeros((num_experiments, *p_arr.shape)) Pi_mat = np.zeros_like(P_mat) for n in tqdm.tqdm_notebook(range(num_experiments)): system = np.random.random((L, L)) P_mat[n], Pi_mat[n] = compute_density_and_probability(system, p_arr) P_list.append(np.average(P_mat, axis=0)) Pi_list.append(np.average(Pi_mat, axis=0)) ``` ```python plt.figure(figsize=(14, 10)) for L, P in zip(L_list[-3:], P_list[-3:]): log_P = np.log(P) plt.plot( log_ppc, log_P, label=fr"L = {L}", ) clf = sklearn.linear_model.LinearRegression( fit_intercept=True ).fit(log_ppc[:, None], log_P[:, None]) beta = clf.coef_[0, 0] print(f"For L = {L}: beta = {beta}") plt.plot( log_ppc, clf.predict(log_ppc[:, None]).ravel(), "--", label=fr"Predicted line: L = {L}", ) plt.legend(loc="best") plt.xlabel(r"$\log(p - p_c)$") plt.ylabel(r"$\log[P(p, L)]$") plt.title(r"Plot of line fit to compute $\beta$ when $p > p_c$ from density") plt.show() ``` Here we can see log plots of the functional relation between the density of spanning clusters and the difference between the percolation probability and the critical probability when $p > p_c$. From the largest system size we find $\beta \approx 0.245$.
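A rough, self-contained sketch (my own addition) of how one could also read a numerical estimate of $p_c$ directly from the percolation probability, by locating where $\Pi(p, L)$ crosses $1/2$ for a single large system size. It reuses the same labeling approach as `compute_density_and_probability` above, and assumes that after averaging over enough experiments $\Pi$ is essentially monotone over the chosen window (which `np.interp` requires).

```python
import numpy as np
import scipy.ndimage as spi


def spans(p_system):
    """Return True if any cluster spans the system along either axis."""
    num_rows, num_cols = p_system.shape
    labels, _ = spi.measurements.label(p_system)
    for sl_row, sl_col in spi.measurements.find_objects(labels):
        if (sl_row.stop - sl_row.start == num_rows
                or sl_col.stop - sl_col.start == num_cols):
            return True
    return False


L = 256
num_experiments = 20
p_window = np.linspace(0.5, 0.7, 41)

pi = np.zeros_like(p_window)
for _ in range(num_experiments):
    system = np.random.random((L, L))
    pi += [spans(system < p) for p in p_window]
pi /= num_experiments

# Estimate p_c as the crossing point Pi(p, L) = 1/2
print(np.interp(0.5, pi, p_window))  # expected to land near 0.59 for large L
```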
726aef6fedee3fcb6e34f9206eb6c6d2f797452a
318,338
ipynb
Jupyter Notebook
project-3/generating-percolation-clusters.ipynb
Schoyen/FYS4460
0c6ba1deefbfd5e9d1657910243afc2297c695a3
[ "MIT" ]
1
2019-08-29T16:29:18.000Z
2019-08-29T16:29:18.000Z
project-3/generating-percolation-clusters.ipynb
Schoyen/FYS4460
0c6ba1deefbfd5e9d1657910243afc2297c695a3
[ "MIT" ]
null
null
null
project-3/generating-percolation-clusters.ipynb
Schoyen/FYS4460
0c6ba1deefbfd5e9d1657910243afc2297c695a3
[ "MIT" ]
1
2020-05-27T14:01:36.000Z
2020-05-27T14:01:36.000Z
425.017356
92,334
0.927241
true
2,311
Qwen/Qwen-72B
1. YES 2. YES
0.831143
0.822189
0.683357
__label__eng_Latn
0.961272
0.425998
<a href="https://colab.research.google.com/github/hBar2013/DS-Unit-1-Sprint-4-Statistical-Tests-and-Experiments/blob/master/module2-intermediate-linear-algebra/Kim_Lowry_Intermediate_Linear_Algebra_Assignment.ipynb" target="_parent"></a> # Statistics ``` import numpy as np ``` ## 1.1 Sales for the past week was the following amounts: [3505, 2400, 3027, 2798, 3700, 3250, 2689]. Without using library functions, what is the mean, variance, and standard deviation of of sales from last week? (for extra bonus points, write your own function that can calculate these two values for any sized list) ``` sales = np.array([3505, 2400, 3027, 2798, 3700, 3250, 2689]) length = len(sales) ``` ``` def mean_var_stdev(data): sales_mean = sum(data)/length for num in data: vnom = sum((data - sales_mean)**2) sales_var = vnom / length sales_stdev = sales_var ** 0.5 return sales_mean, sales_var, sales_stdev ``` ``` mean_var_stdev(sales) ``` (3052.714285714286, 183761.06122448976, 428.67360686714756) ## 1.2 Find the covariance between last week's sales numbers and the number of customers that entered the store last week: [127, 80, 105, 92, 120, 115, 93] (you may use librray functions for calculating the covariance since we didn't specifically talk about its formula) ``` customers = np.array([127, 80, 105, 92, 120, 115, 93]) ``` ``` cov_sc = np.cov(sales, customers) cov_sc ``` array([[214387.9047619 , 7604.35714286], [ 7604.35714286, 290.95238095]]) ``` ``` (104.57142857142857, 249.3877551020408, 15.792015549069118) ``` ``` ## 1.3 Find the standard deviation of customers who entered the store last week. Then, use the standard deviations of both sales and customers to standardize the covariance to find the correlation coefficient that summarizes the relationship between sales and customers. (You may use library functions to check your work.) ``` length = len(customers) ``` ``` mean_var_stdev(customers) ``` (104.57142857142857, 249.3877551020408, 15.792015549069118) ``` corr_sc = np.corrcoef(sales, customers) corr_sc ``` array([[1. , 0.96283398], [0.96283398, 1. ]]) ## 1.4 Use pandas to import a cleaned version of the titanic dataset from the following link: [Titanic Dataset](https://raw.githubusercontent.com/Geoyi/Cleaning-Titanic-Data/master/titanic_clean.csv) ## Calculate the variance-covariance matrix and correlation matrix for the titanic dataset's numeric columns. (you can encode some of the categorical variables and include them as a stretch goal if you finish early) ``` import pandas as pd ``` ``` file_url = 'https://raw.githubusercontent.com/Geoyi/Cleaning-Titanic-Data/master/titanic_clean.csv' titanic = pd.read_csv(file_url) ``` ``` titanic.head() ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>Unnamed: 0</th> <th>pclass</th> <th>survived</th> <th>name</th> <th>sex</th> <th>age</th> <th>sibsp</th> <th>parch</th> <th>ticket</th> <th>fare</th> <th>cabin</th> <th>embarked</th> <th>boat</th> <th>body</th> <th>home.dest</th> <th>has_cabin_number</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>1</td> <td>1.0</td> <td>1.0</td> <td>Allen, Miss. 
Elisabeth Walton</td> <td>female</td> <td>29.0000</td> <td>0.0</td> <td>0.0</td> <td>24160</td> <td>211.3375</td> <td>B5</td> <td>S</td> <td>2</td> <td>NaN</td> <td>St Louis, MO</td> <td>1</td> </tr> <tr> <th>1</th> <td>2</td> <td>1.0</td> <td>1.0</td> <td>Allison, Master. Hudson Trevor</td> <td>male</td> <td>0.9167</td> <td>1.0</td> <td>2.0</td> <td>113781</td> <td>151.5500</td> <td>C22 C26</td> <td>S</td> <td>11</td> <td>NaN</td> <td>Montreal, PQ / Chesterville, ON</td> <td>1</td> </tr> <tr> <th>2</th> <td>3</td> <td>1.0</td> <td>0.0</td> <td>Allison, Miss. Helen Loraine</td> <td>female</td> <td>2.0000</td> <td>1.0</td> <td>2.0</td> <td>113781</td> <td>151.5500</td> <td>C22 C26</td> <td>S</td> <td>NaN</td> <td>NaN</td> <td>Montreal, PQ / Chesterville, ON</td> <td>1</td> </tr> <tr> <th>3</th> <td>4</td> <td>1.0</td> <td>0.0</td> <td>Allison, Mr. Hudson Joshua Creighton</td> <td>male</td> <td>30.0000</td> <td>1.0</td> <td>2.0</td> <td>113781</td> <td>151.5500</td> <td>C22 C26</td> <td>S</td> <td>NaN</td> <td>135.0</td> <td>Montreal, PQ / Chesterville, ON</td> <td>1</td> </tr> <tr> <th>4</th> <td>5</td> <td>1.0</td> <td>0.0</td> <td>Allison, Mrs. Hudson J C (Bessie Waldo Daniels)</td> <td>female</td> <td>25.0000</td> <td>1.0</td> <td>2.0</td> <td>113781</td> <td>151.5500</td> <td>C22 C26</td> <td>S</td> <td>NaN</td> <td>NaN</td> <td>Montreal, PQ / Chesterville, ON</td> <td>1</td> </tr> </tbody> </table> </div> ``` titanic.cov() ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>Unnamed: 0</th> <th>pclass</th> <th>survived</th> <th>age</th> <th>sibsp</th> <th>parch</th> <th>fare</th> <th>body</th> <th>has_cabin_number</th> </tr> </thead> <tbody> <tr> <th>Unnamed: 0</th> <td>143117.500000</td> <td>284.357034</td> <td>-53.967125</td> <td>-1442.939812</td> <td>25.828746</td> <td>1.172783</td> <td>-9410.735123</td> <td>591.579132</td> <td>-95.438885</td> </tr> <tr> <th>pclass</th> <td>284.357034</td> <td>0.701969</td> <td>-0.127248</td> <td>-3.954605</td> <td>0.053090</td> <td>0.013287</td> <td>-24.227788</td> <td>-2.876653</td> <td>-0.249992</td> </tr> <tr> <th>survived</th> <td>-53.967125</td> <td>-0.127248</td> <td>0.236250</td> <td>-0.314343</td> <td>-0.014088</td> <td>0.034776</td> <td>6.146023</td> <td>0.000000</td> <td>0.061406</td> </tr> <tr> <th>age</th> <td>-1442.939812</td> <td>-3.954605</td> <td>-0.314343</td> <td>165.850021</td> <td>-2.559806</td> <td>-1.459378</td> <td>114.416613</td> <td>81.622922</td> <td>1.463138</td> </tr> <tr> <th>sibsp</th> <td>25.828746</td> <td>0.053090</td> <td>-0.014088</td> <td>-2.559806</td> <td>1.085052</td> <td>0.336833</td> <td>8.641768</td> <td>-8.708471</td> <td>-0.003946</td> </tr> <tr> <th>parch</th> <td>1.172783</td> <td>0.013287</td> <td>0.034776</td> <td>-1.459378</td> <td>0.336833</td> <td>0.749195</td> <td>9.928031</td> <td>4.237190</td> <td>0.013316</td> </tr> <tr> <th>fare</th> <td>-9410.735123</td> <td>-24.227788</td> <td>6.146023</td> <td>114.416613</td> <td>8.641768</td> <td>9.928031</td> <td>2678.959738</td> <td>-179.164684</td> <td>10.976961</td> </tr> <tr> <th>body</th> <td>591.579132</td> <td>-2.876653</td> <td>0.000000</td> <td>81.622922</td> <td>-8.708471</td> <td>4.237190</td> <td>-179.164684</td> <td>9544.688567</td> <td>3.625689</td> </tr> <tr> <th>has_cabin_number</th> <td>-95.438885</td> 
<td>-0.249992</td> <td>0.061406</td> <td>1.463138</td> <td>-0.003946</td> <td>0.013316</td> <td>10.976961</td> <td>3.625689</td> <td>0.174613</td> </tr> </tbody> </table> </div> ``` titanic.corr() ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>Unnamed: 0</th> <th>pclass</th> <th>survived</th> <th>age</th> <th>sibsp</th> <th>parch</th> <th>fare</th> <th>body</th> <th>has_cabin_number</th> </tr> </thead> <tbody> <tr> <th>Unnamed: 0</th> <td>1.000000</td> <td>0.897822</td> <td>-0.293717</td> <td>-0.296172</td> <td>0.065594</td> <td>0.003584</td> <td>-0.481215</td> <td>0.015558</td> <td>-0.603727</td> </tr> <tr> <th>pclass</th> <td>0.897822</td> <td>1.000000</td> <td>-0.312469</td> <td>-0.366370</td> <td>0.060832</td> <td>0.018322</td> <td>-0.558629</td> <td>-0.034642</td> <td>-0.713857</td> </tr> <tr> <th>survived</th> <td>-0.293717</td> <td>-0.312469</td> <td>1.000000</td> <td>-0.050199</td> <td>-0.027825</td> <td>0.082660</td> <td>0.244265</td> <td>NaN</td> <td>0.302250</td> </tr> <tr> <th>age</th> <td>-0.296172</td> <td>-0.366370</td> <td>-0.050199</td> <td>1.000000</td> <td>-0.190747</td> <td>-0.130872</td> <td>0.171892</td> <td>0.059059</td> <td>0.271887</td> </tr> <tr> <th>sibsp</th> <td>0.065594</td> <td>0.060832</td> <td>-0.027825</td> <td>-0.190747</td> <td>1.000000</td> <td>0.373587</td> <td>0.160238</td> <td>-0.099961</td> <td>-0.009064</td> </tr> <tr> <th>parch</th> <td>0.003584</td> <td>0.018322</td> <td>0.082660</td> <td>-0.130872</td> <td>0.373587</td> <td>1.000000</td> <td>0.221539</td> <td>0.051099</td> <td>0.036806</td> </tr> <tr> <th>fare</th> <td>-0.481215</td> <td>-0.558629</td> <td>0.244265</td> <td>0.171892</td> <td>0.160238</td> <td>0.221539</td> <td>1.000000</td> <td>-0.043110</td> <td>0.507253</td> </tr> <tr> <th>body</th> <td>0.015558</td> <td>-0.034642</td> <td>NaN</td> <td>0.059059</td> <td>-0.099961</td> <td>0.051099</td> <td>-0.043110</td> <td>1.000000</td> <td>0.083796</td> </tr> <tr> <th>has_cabin_number</th> <td>-0.603727</td> <td>-0.713857</td> <td>0.302250</td> <td>0.271887</td> <td>-0.009064</td> <td>0.036806</td> <td>0.507253</td> <td>0.083796</td> <td>1.000000</td> </tr> </tbody> </table> </div> # Orthogonality ## 2.1 Plot two vectors that are orthogonal to each other. What is a synonym for orthogonal? ``` import matplotlib.pyplot as plt ``` ``` dp = np.dot(vector_1, vector_2) dp ``` 0 ``` vector_1 = [2, 4] vector_2 = [-2, 1] # Plot the Scaled Vectors plt.arrow(0,0, vector_1[0], vector_1[1],head_width=.05, head_length=0.05, color ='red') plt.arrow(0,0, vector_2[0], vector_2[1],head_width=.05, head_length=0.05, color ='green') plt.xlim(-4,5) plt.ylim(-4,5) plt.title("Orthogonal Vectors") plt.show() ``` ## 2.2 Are the following vectors orthogonal? Why or why not? \begin{align} a = \begin{bmatrix} -5 \\ 3 \\ 7 \end{bmatrix} \qquad b = \begin{bmatrix} 6 \\ -8 \\ 2 \end{bmatrix} \end{align} ``` vector_a = [-5,3,7] vector_b = [6,-8,2] ab_dp = np.dot(vector_a, vector_b) ab_dp ``` -40 Not orthagonal as the dot product of the vectors does not == zero ## 2.3 Compute the following values: What do these quantities have in common? ## What is $||c||^2$? ## What is $c \cdot c$? ## What is $c^{T}c$? 
\begin{align} c = \begin{bmatrix} 2 & -15 & 6 & 20 \end{bmatrix} \end{align} ``` from numpy import linalg as LA ``` ``` 𝑐 = np.array([2,-15, 6, 20]) ``` ``` dp_c = np.dot(c,c) dp_c ``` 665 ``` norm_c = LA.norm(c) norm_c ``` 25.787593916455254 ``` norm_c_sq = norm_c**2 norm_c_sq ``` 665.0 ``` cTxC = np.matmul(c.T,c) cTxC ``` 665 # Unit Vectors ## 3.1 Using Latex, write the following vectors as a linear combination of scalars and unit vectors: \begin{align} d = \begin{bmatrix} 7 \\ 12 \end{bmatrix} \qquad e = \begin{bmatrix} 2 \\ 11 \\ -8 \end{bmatrix} \end{align} ||d|| = 13.89 ||e|| = 13.74 \begin{align} d-hat = \begin{bmatrix} 0.49\\ 0.84 \end{bmatrix} \end{align} \begin{align} d-hat = 0.49\begin{bmatrix} 1\\ 0 \end{bmatrix}, 0.84\begin{bmatrix} 0\\ 1 \end{bmatrix} \end{align} \begin{align}e -hat = \begin{bmatrix} 0.14 \\ 0.79 \\ -0.58 \end{bmatrix} \end{align} \begin{align} e-hat = 0.14\begin{bmatrix} 1\\ 0\\0 \end{bmatrix}, 0.79\begin{bmatrix} 0\\ 1\\0 \end{bmatrix}, -0.58\begin{bmatrix} 0\\ 0\\1 \end{bmatrix} \end{align} ## 3.2 Turn vector $f$ into a unit vector: \begin{align} f = \begin{bmatrix} 4 & 12 & 11 & 9 & 2 \end{bmatrix} \end{align} ``` f = np.array([4, 12, 11, 9, 2]) norm_f = LA.norm(f) inv_norm_f = 1/norm_f unit_f = np.multiply(inv_norm_f,f) unit_f ``` array([0.20908335, 0.62725005, 0.57497921, 0.47043754, 0.10454167]) # Linear Independence / Dependence ## 4.1 Plot two vectors that are linearly dependent and two vectors that are linearly independent (bonus points if done in $\mathbb{R}^3$). ``` vector_1 = [2, 4] vector_2 = [-2, 1] plt.arrow(0,0, vector_1[0], vector_1[1],head_width=.05, head_length=0.05, color ='red') plt.arrow(0,0, vector_2[0], vector_2[1],head_width=.05, head_length=0.05, color ='green') plt.xlim(-4,5) plt.ylim(-4,5) plt.title("Linearly Independent") plt.show() ``` ``` vector_g = [1, 2] vector_h = [4, 8] plt.arrow(0,0, vector_g[0], vector_1[1],head_width=.05, head_length=0.05, color ='blue') plt.arrow(0,0, vector_h[0], vector_2[1],head_width=.05, head_length=0.05, color ='orange') plt.xlim(-1,10) plt.ylim(-1,10) plt.title("Linearly Dependent") plt.show() ``` I have no idea what's going on with my colors. Happened yesterday also # Span ## 5.1 What is the span of the following vectors? \begin{align} g = \begin{bmatrix} 1 & 2 \end{bmatrix} \qquad h = \begin{bmatrix} 4 & 8 \end{bmatrix} \end{align} you can see that the span is 1, because h is just g scaled by 4, also see above for the plot ## 5.2 What is the span of $\{l, m, n\}$? \begin{align} l = \begin{bmatrix} 1 & 2 & 3 \end{bmatrix} \qquad m = \begin{bmatrix} -1 & 0 & 7 \end{bmatrix} \qquad n = \begin{bmatrix} 4 & 8 & 2\end{bmatrix} \end{align} The rank is 3 so therefore the span is also 3 and so all 3 equations are required to describe the solution space. (ie there are no linearly dependent rows) ``` M = np.array([[1,-1,4], [2,0,8], [3,7,2]]) ``` ``` M_rank = LA.matrix_rank(M) M_rank ``` 3 # Basis ## 6.1 Graph two vectors that form a basis for $\mathbb{R}^2$ ``` vector_1 = [2, 4] vector_2 = [-2, 1] plt.arrow(0,0, vector_1[0], vector_1[1],head_width=.05, head_length=0.05, color ='red') plt.arrow(0,0, vector_2[0], vector_2[1],head_width=.05, head_length=0.05, color ='green') plt.xlim(-4,5) plt.ylim(-4,5) plt.title("Linearly Independent") plt.show() ``` Two vectors form a basis for 2D when they are linearly independent. They can be scaled and used as a basis set of vectors to represent the entire plane they lie in. In the case above these vectors form an orthagonal basis. 
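A quick numerical check of the claim above — that two linearly independent vectors can represent any point in the plane — is to solve for the coordinates of an arbitrary point in that basis. This is an illustrative addition: the point `p = [3, 7]` is not part of the original assignment, just a convenient example.

```
import numpy as np

# Basis vectors from 6.1 placed as the columns of B
B = np.column_stack(([2, 4], [-2, 1]))
p = np.array([3, 7])            # an arbitrary point in R^2

coords = np.linalg.solve(B, p)  # coefficients a, b with a*vector_1 + b*vector_2 = p
print(coords)
print(B @ coords)               # reconstructs p, confirming the two vectors span the plane
```

The solve succeeds because the determinant of B is 2(1) - (-2)(4) = 10, which is non-zero — another way of saying the two vectors are linearly independent.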
## 6.2 What does it mean to form a basis?

See 6.1 above: a set of vectors forms a basis for a space when the vectors are linearly independent and their linear combinations (scaled sums) can reach every point in that space.

# Rank

## 7.1 What is the Rank of P?

\begin{align}
P = \begin{bmatrix}
1 & 2 & 3 \\
-1 & 0 & 7 \\
4 & 8 & 2
\end{bmatrix}
\end{align}

```
P = np.array([[1,2,3],
              [-1,0,7],
              [4,8,2]])
```

```
P_rank = LA.matrix_rank(P)
P_rank
```

3

## 7.2 What does the rank of a matrix tell us?

The rank is the number of linearly independent rows (or columns). Here all 3 rows are linearly independent, so the matrix cannot be reduced and all 3 rows are required to describe the solution space.

# Linear Projections

## 8.1 Line $L$ is formed by all of the vectors that can be created by scaling vector $v$

\begin{align}
v = \begin{bmatrix} 1 & 3 \end{bmatrix}
\end{align}

\begin{align}
w = \begin{bmatrix} -1 & 2 \end{bmatrix}
\end{align}

## find $proj_{L}(w)$

## graph your projected vector to check your work (make sure your axes are square/even)

```
```

# Stretch Goal

## For vectors that begin at the origin, the coordinates of where the vector ends can be interpreted as regular data points. (See 3Blue1Brown videos about Spans, Basis, etc.)

## Write a function that can calculate the linear projection of each point (x,y) (vector) onto the line y=x. Run the function and plot the original points in blue and the new projected points on the line y=x in red.

## For extra points plot the orthogonal vectors as a dashed line from the original blue points to the projected red points.

```
import pandas as pd
import matplotlib.pyplot as plt

# Creating a dataframe for you to work with - feel free to not use the dataframe if you don't want to.
x_values = [1, 4, 7, 3, 9, 4, 5]
y_values = [4, 2, 5, 0, 8, 2, 8]

data = {"x": x_values, "y": y_values}
df = pd.DataFrame(data)
df.head()
plt.scatter(df.x, df.y)
plt.show()
```
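Section 8.1 and the stretch goal are left blank above. The sketch below is one possible way to fill them in, using the standard formula $proj_{L}(w) = \frac{w \cdot v}{v \cdot v} v$; the helper name `project_onto` is my own choice, not part of the assignment.

```
import numpy as np
import matplotlib.pyplot as plt

def project_onto(w, v):
    """Project vector w onto the line spanned by v: (w.v / v.v) * v."""
    w, v = np.asarray(w, dtype=float), np.asarray(v, dtype=float)
    return (np.dot(w, v) / np.dot(v, v)) * v

# 8.1: proj_L(w) for v = [1, 3], w = [-1, 2]  ->  [0.5, 1.5]
print(project_onto([-1, 2], [1, 3]))

# Stretch goal: project each (x, y) point onto the line y = x (direction [1, 1])
points = np.column_stack((x_values, y_values))
projected = np.array([project_onto(p, [1, 1]) for p in points])

plt.scatter(points[:, 0], points[:, 1], color='blue', label='original')
plt.scatter(projected[:, 0], projected[:, 1], color='red', label='projected onto y = x')
for p, q in zip(points, projected):
    plt.plot([p[0], q[0]], [p[1], q[1]], 'k--', linewidth=0.5)  # orthogonal connectors
plt.axis('equal')
plt.legend()
plt.show()
```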
4a2bebf0afe61ee4febaef23c37e55d78d9341aa
104,445
ipynb
Jupyter Notebook
module2-intermediate-linear-algebra/Kim_Lowry_Intermediate_Linear_Algebra_Assignment.ipynb
hBar2013/DS-Unit-1-Sprint-4-Statistical-Tests-and-Experiments
21e773e2e657fca9f3d8509ae4caaa170d536406
[ "MIT" ]
null
null
null
module2-intermediate-linear-algebra/Kim_Lowry_Intermediate_Linear_Algebra_Assignment.ipynb
hBar2013/DS-Unit-1-Sprint-4-Statistical-Tests-and-Experiments
21e773e2e657fca9f3d8509ae4caaa170d536406
[ "MIT" ]
null
null
null
module2-intermediate-linear-algebra/Kim_Lowry_Intermediate_Linear_Algebra_Assignment.ipynb
hBar2013/DS-Unit-1-Sprint-4-Statistical-Tests-and-Experiments
21e773e2e657fca9f3d8509ae4caaa170d536406
[ "MIT" ]
null
null
null
55.320445
9,914
0.633922
true
6,741
Qwen/Qwen-72B
1. YES 2. YES
0.812867
0.83762
0.680874
__label__eng_Latn
0.495062
0.420229
```python from sympy import * import numpy as np import matplotlib.pyplot as plt from PlottingSpectrum import generate_SED def weighted_fitting(x_s, y_s, errs): list_Y = [] list_A = [] list_C = [] for i in range(len(x_s)): list_Y.append([y_s[i]]) list_A.append([1, x_s[i]]) C_row = [] counter = 0 while counter < len(x_s): if counter == i: C_row.append(errs[i]) else: C_row.append(0) counter += 1 list_C.append(C_row) A = Matrix(list_A) Y = Matrix(list_Y) C = Matrix(list_C) X = ((A.T*C.inv()*A).inv())*(A.T*C.inv()*Y) return X[0, 0], X[1, 0] ``` ```python def calc_chi(x_s, y_s, errs, b, m): chi = 0 for i in range(len(x_s)): chi += ((y_s[i] - (b + m*x_s[i]))**2)/(errs[i]**2) return chi ``` ```python def read_file(name): lst_1 = [] lst_2 = [] lst_3 = [] with open(name) as reader: for line in reader: list_str = line.split() if len(list_str) == 3: lst_1.append(float(list_str[0])) lst_2.append(float(list_str[1])) lst_3.append(float(list_str[2])) return lst_1, lst_2, lst_3 ``` ```python x_s, y_s, errs = read_file('fitting_simple_data.txt') b, m = weighted_fitting(x_s, y_s, errs) #b = -10 #m = 6 print(b, m) x_line = np.linspace(0, 12) y_line = b + m*x_line plt.figure(figsize=(15, 9)) plt.scatter(x_s, y_s) plt.plot(x_line, y_line) plt.show() print(calc_chi(x_s, y_s, errs, b, m)) ``` ```python def linear_interpolation(x_s, y_s, x_val): index = np.abs(x_s - x_val).argmin() #Index of x in x_s closest to x_val if x_s[index] > x_val: l_index = index - 1 r_index = index else: l_index = index r_index = index + 1 m = (y_s[r_index] - y_s[l_index])/(x_s[r_index] - x_s[l_index]) b = y_s[r_index] - m*x_s[r_index] return b + m*x_val def spectrum_likelihood(wave_obs, spec_obs, wave_theo, spec_theo): epsilon = 0 for i in range(len(wave_obs)): interpol_flux = linear_interpolation(wave_theo, spec_theo, wave_obs[i]) epsilon += ((spec_obs[i] - interpol_flux)**2)/(errs[i]**2) return epsilon ``` ```python wave_obs, spec_obs, errs = read_file('fitting_spectrum_1.txt') plt.figure(figsize=(15, 9)) plt.loglog(wave_obs, spec_obs) x_label = r"$\lambda$ ($\AA$)" plt.xlabel(x_label, fontsize=14) plt.xlim(10**3, 199600.0) y_label = r"f$_v$ $(\mu Jy)$" plt.ylabel(y_label, fontsize=14) plt.ylim(10**-14, 10**-8) plt.show() ``` ```python generate_SED(10, 1) ``` ```python ```
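As a cross-check of `weighted_fitting()` above, here is a sketch of the same normal-equations solve done with NumPy arrays instead of SymPy matrices, which avoids the symbolic matrix inverse and is much faster for larger datasets. It keeps the original convention of placing the supplied `errs` directly on the diagonal of `C`; if those values are 1-sigma uncertainties rather than variances, `errs**2` should be used there instead. The function name `weighted_fitting_np` is just an illustrative choice.

```python
import numpy as np

def weighted_fitting_np(x_s, y_s, errs):
    """NumPy version of the weighted linear fit y = b + m*x."""
    x = np.asarray(x_s, dtype=float)
    y = np.asarray(y_s, dtype=float)
    C_inv = np.diag(1.0 / np.asarray(errs, dtype=float))  # same convention as list_C above

    A = np.column_stack((np.ones_like(x), x))
    lhs = A.T @ C_inv @ A
    rhs = A.T @ C_inv @ y

    X = np.linalg.solve(lhs, rhs)      # [b, m]
    param_cov = np.linalg.inv(lhs)     # parameter covariance estimate
    return X[0], X[1], np.sqrt(np.diag(param_cov))

# Usage with the data loaded earlier:
# b_np, m_np, (sig_b, sig_m) = weighted_fitting_np(x_s, y_s, errs)
```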
e9d6d8c4e51dd8b112a40bbb3e81388bf023a291
6,591
ipynb
Jupyter Notebook
Final Project/.ipynb_checkpoints/PhysicalProperties-checkpoint.ipynb
CalebLammers/CTA200
2b8e442f10479b8f82a9b8c4558a45aa9e791118
[ "MIT" ]
null
null
null
Final Project/.ipynb_checkpoints/PhysicalProperties-checkpoint.ipynb
CalebLammers/CTA200
2b8e442f10479b8f82a9b8c4558a45aa9e791118
[ "MIT" ]
null
null
null
Final Project/.ipynb_checkpoints/PhysicalProperties-checkpoint.ipynb
CalebLammers/CTA200
2b8e442f10479b8f82a9b8c4558a45aa9e791118
[ "MIT" ]
null
null
null
31.6875
1,106
0.499166
true
854
Qwen/Qwen-72B
1. YES 2. YES
0.935347
0.779993
0.729564
__label__eng_Latn
0.215772
0.533353
# Lecture 02 Elimination with Matrices Today's lecture contains: 1. Elimination <br/> 2. Explaination of elimination <br/> 3. Permutation <br/> 4. Inverse Matrix <br/> ## 1. Elimination Suppose we have equations with 3 unknown: \begin{align} \begin{cases}x&+2y&+z&=2\\3x&+8y&+z&=12\\&4y&+z&=2\end{cases} \end{align} Such equation can be expressed in the format of \begin{align} Ax=b \end{align} which is: \begin{align} \begin{bmatrix}1&2&1\\3&8&1\\0&4&1\end{bmatrix}\begin{bmatrix}x\\y\\z\end{bmatrix}=\begin{bmatrix}2\\12\\2\end{bmatrix} \end{align} ### 1.1 Process of elimination The objective of elimination is to acquire an upper triangular matrix from $A$. What does it look like? It looks something like this: \begin{align} U(Upper Triangular Matrix): \begin{bmatrix}1&2&1\\0&8&1\\0&0&1\end{bmatrix} \end{align} Lower triangular matrix looks like this: \begin{align} L(Lower Triangular Matrix): \begin{bmatrix}1&0&0\\3&8&0\\2&6&1\end{bmatrix} \end{align} * (1) We wish we can eliminate all the $x$ from second and third equations. Taking the matrix $A$ as an example: \begin{align} A=\begin{bmatrix}\underline{1}&2&1\\3&8&1\\0&4&1\end{bmatrix} \end{align} The number with underscore is called **pivot** which is the coefficient of $x$ in the first equation. And we need to eliminate all the $x$ below which means **all numbers below pivot should be eliminated to zero**. Apparently, we can take $row_2$-**3**$row_1$ Therefore, the first step is: \begin{align} \begin{bmatrix}\underline{1}&2&1\\3&8&1\\0&4&1\end{bmatrix}\xrightarrow{row_2-3row_1}\begin{bmatrix}\underline{1}&2&1\\0&2&-2\\0&4&1\end{bmatrix} \end{align} * (2) In light of above logic, next step is to eliminate the second pivot which stands for $y$. That would be $row_3$-**2**$row_2$ \begin{align} \begin{bmatrix}\underline{1}&2&1\\0&\underline{2}&-2\\0&4&1\end{bmatrix}\xrightarrow{row_3-2row_2}\begin{bmatrix}\underline{1}&2&1\\0&\underline{2}&-2\\0&0&\underline{5}\end{bmatrix} \end{align} * (3) Because we want to make step(1) and step(2) more intuitive, so we don't take $b$ into account for a moment. In the end , we can add the right hand side $b$ back to the above logic. This step is called **back substitution** and the matrix in the following format called **augmented matrix**. \begin{align} \left[\begin{array}{c|c}A&b\end{array}\right]=\left[\begin{array}{ccc|c}1&2&1&2\\3&8&1&12\\0&4&1&2\end{array}\right]\to\left[\begin{array}{ccc|c}1&2&1&2\\0&2&-2&6\\0&4&1&2\end{array}\right]\to\left[\begin{array}{ccc|c}1&2&1&2\\0&2&-2&6\\0&0&5&-10\end{array}\right] \end{align} * (4) In the end, we can convert matrix to equation. \begin{align} \begin{cases}x&+2y&+z&=2\\&2y&-2z&=6\\&&5z&=-10\end{cases} \end{align} $x$,$y$,$z$ can be easily solved: \begin{align} x=2, y=1, z=-2 \end{align} ### 1.2 Shape of multiplication $matrix×column=column$ \begin{align} \begin{bmatrix}...&...&...\\...&...&...\\...&...&...\end{bmatrix}\begin{bmatrix}x\\y\\z\end{bmatrix}=\begin{bmatrix}...\\...\\...\end{bmatrix} \end{align} $vector×matrix=vector$ \begin{align} \begin{bmatrix}x&y&z\end{bmatrix}\begin{bmatrix}...&...&...\\...&...&...\\...&...&...\end{bmatrix}=\begin{bmatrix}...&...&...\end{bmatrix} \end{align} Sometime vector can be looked as 1×3 matrix. ## 2. Detail explaination of elimination Before jumping into the explaination, we first need to figure out **how matrix multiply**? Taking the following mutiplication as an example, where $I$ stands for **identity matrix**. One matrix multiple Identity matrix is itself. 
\begin{align}
IA=A
\end{align}

\begin{align}
\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}\begin{bmatrix}1&2&1\\3&8&1\\0&4&1\end{bmatrix}=\begin{bmatrix}1&2&1\\3&8&1\\0&4&1\end{bmatrix}
\end{align}

So what is going on in the above matrix multiplication?

* (1) Let's look from the perspective of the first matrix from the left.

$\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}$ can be decomposed into its rows $\begin{bmatrix}1&0&0\end{bmatrix}$, $\begin{bmatrix}0&1&0\end{bmatrix}$, $\begin{bmatrix}0&0&1\end{bmatrix}$

* (2) Each row of the left matrix holds the coefficients applied to the rows of the right matrix.

$\begin{bmatrix}1&0&0\end{bmatrix}$ means take $1$×$\begin{bmatrix}1&2&1\end{bmatrix}$, $0$×$\begin{bmatrix}3&8&1\end{bmatrix}$, $0$×$\begin{bmatrix}0&4&1\end{bmatrix}$

$\begin{bmatrix}0&1&0\end{bmatrix}$ means take $0$×$\begin{bmatrix}1&2&1\end{bmatrix}$, $1$×$\begin{bmatrix}3&8&1\end{bmatrix}$, $0$×$\begin{bmatrix}0&4&1\end{bmatrix}$

$\begin{bmatrix}0&0&1\end{bmatrix}$ means take $0$×$\begin{bmatrix}1&2&1\end{bmatrix}$, $0$×$\begin{bmatrix}3&8&1\end{bmatrix}$, $1$×$\begin{bmatrix}0&4&1\end{bmatrix}$

And sum them vertically!

### 2.1 Step(1)

So we look back to the elimination in part 1.

\begin{align}
\Bigg[\quad ?\quad \Bigg]\begin{bmatrix}1&2&1\\3&8&1\\0&4&1\end{bmatrix}=\begin{bmatrix}1&2&1\\0&2&-2\\0&4&1\end{bmatrix}
\end{align}

We can easily see that the above multiplication should be the following:

\begin{align}
\begin{bmatrix}1&0&0\\-3&1&0\\0&0&1\end{bmatrix}\begin{bmatrix}1&2&1\\3&8&1\\0&4&1\end{bmatrix}=\begin{bmatrix}1&2&1\\0&2&-2\\0&4&1\end{bmatrix}
\end{align}

In the meantime, we denote $\begin{bmatrix}1&0&0\\-3&1&0\\0&0&1\end{bmatrix}$ as $E_{21}$ since it eliminates the entry in the second row, first column.

### 2.2 Step(2)

The second elimination is:

\begin{align}
\Bigg[\quad ?\quad \Bigg]\begin{bmatrix}1&2&1\\0&2&-2\\0&4&1\end{bmatrix}=\begin{bmatrix}1&2&1\\0&2&-2\\0&0&5\end{bmatrix}
\end{align}

And we can easily see that:

\begin{align}
\begin{bmatrix}1&0&0\\0&1&0\\0&-2&1\end{bmatrix}\begin{bmatrix}1&2&1\\0&2&-2\\0&4&1\end{bmatrix}=\begin{bmatrix}1&2&1\\0&2&-2\\0&0&5\end{bmatrix}
\end{align}

We denote $\begin{bmatrix}1&0&0\\0&1&0\\0&-2&1\end{bmatrix}$ as $E_{32}$ since it eliminates the entry in the third row, second column.

### 2.3 Summary

Finally, the whole process can be written as follows:

\begin{align}
E_{32}(E_{21}A)=U
\end{align}

where $U$ stands for the right-hand side from step two, $\begin{bmatrix}1&2&1\\0&2&-2\\0&0&5\end{bmatrix}$, the **upper triangular matrix**.

In the meantime, you may wonder **why the order of the above multiplication goes from right to left**. Why is it $E_{32}(E_{21}A)$ rather than $A E_{21}E_{32}$? Intuitively, I **think of this order as function composition**: like $g(f(x))$, we first apply $f(x)$ and then $g()$. This is very similar to the syntax of the Wolfram language.

## 3. Permutation

### 3.1 Exchange row1 and row2

Given the above knowledge, how can we do the following:

\begin{align}
\Bigg[\quad ?\quad \Bigg]\begin{bmatrix}a&b\\c&d\end{bmatrix}=\begin{bmatrix}c&d\\a&b\end{bmatrix}
\end{align}

Very simple, that is:

\begin{align}
\begin{bmatrix}0&1\\1&0\end{bmatrix}\begin{bmatrix}a&b\\c&d\end{bmatrix}=\begin{bmatrix}c&d\\a&b\end{bmatrix}
\end{align}

### 3.2 Exchange column1 and column2

\begin{align}
\Bigg[\quad ?\quad \Bigg]\begin{bmatrix}a&b\\c&d\end{bmatrix}=\begin{bmatrix}b&a\\d&c\end{bmatrix}
\end{align}

That is impossible with a matrix on the left! However, if we switch the positions and multiply on the right, we can **see it in a column perspective**.
\begin{align}
\begin{bmatrix}a&b\\c&d\end{bmatrix}\begin{bmatrix}0&1\\1&0\end{bmatrix}=\begin{bmatrix}b&a\\d&c\end{bmatrix}
\end{align}

In $\begin{bmatrix}\underline{0}&1\\\underline{1}&0\end{bmatrix}$, the underlined first column means the new first column is $0$×(column 1) $+$ $1$×(column 2) of the left matrix, i.e. its second column; likewise, the second column $\begin{bmatrix}1\\0\end{bmatrix}$ picks out the original first column.

## 4. Inverse Matrix

We take $E_{21}$ as an example. What should the following be:

\begin{align}
\Bigg[\quad ?\quad \Bigg]\begin{bmatrix}1&0&0\\-3&1&0\\0&0&1\end{bmatrix}=\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}
\end{align}

We can easily solve it:

\begin{align}
\begin{bmatrix}1&0&0\\3&1&0\\0&0&1\end{bmatrix}\begin{bmatrix}1&0&0\\-3&1&0\\0&0&1\end{bmatrix}=\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}
\end{align}

We denote the inverse matrix of $E$ as $E^{-1}$. That is, $E^{-1}E=I$.
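The original notes are pure markdown with no code cells; the short NumPy sketch below is an addition that verifies the elimination matrices and the inverse used throughout this lecture.

```python
import numpy as np

A = np.array([[1, 2, 1],
              [3, 8, 1],
              [0, 4, 1]])

E21 = np.array([[ 1, 0, 0],
                [-3, 1, 0],
                [ 0, 0, 1]])   # row2 <- row2 - 3*row1

E32 = np.array([[1,  0, 0],
                [0,  1, 0],
                [0, -2, 1]])   # row3 <- row3 - 2*row2

U = E32 @ E21 @ A
print(U)                       # [[1 2 1], [0 2 -2], [0 0 5]]

# Back substitution with b = [2, 12, 2] recovers x = 2, y = 1, z = -2
b = np.array([2, 12, 2])
x = np.linalg.solve(U, E32 @ E21 @ b)
print(x)

# The inverse of E21 simply adds the subtracted row back
print(np.linalg.inv(E21))      # [[1 0 0], [3 1 0], [0 0 1]]
```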
82a83ff6488e5dcfbcb658797a11c37a1d9c0c20
10,853
ipynb
Jupyter Notebook
Lecture 02 Elimination with Matrices.ipynb
XingxinHE/Linear_Algebra
7d6b78699f8653ece60e07765fd485dd36b26194
[ "MIT" ]
3
2021-04-24T17:23:50.000Z
2021-11-27T11:00:04.000Z
Lecture 02 Elimination with Matrices.ipynb
XingxinHE/Linear_Algebra
7d6b78699f8653ece60e07765fd485dd36b26194
[ "MIT" ]
null
null
null
Lecture 02 Elimination with Matrices.ipynb
XingxinHE/Linear_Algebra
7d6b78699f8653ece60e07765fd485dd36b26194
[ "MIT" ]
null
null
null
41.903475
305
0.547959
true
3,091
Qwen/Qwen-72B
1. YES 2. YES
0.891811
0.914901
0.815919
__label__eng_Latn
0.802924
0.733986
# Mass-spring-damper In this tutorial, we will describe the mechanics and control of the one degree of freedom translational mass-spring-damper system subject to a control input force. We will first derive the dynamic equations by hand. Then, we will derive them using the `sympy.mechanics` python package. The system on which we will work is depicted below: Note that in what follows, we use the notation $u(t) = F$. ## 1. Mechanics ### Deriving the dynamical equations by hand #### 1.1 By using Newton equations Using Newton's law, we have: \begin{align} m \ddot{x}(t) &= \sum F_{ext} \\ &= - b \dot{x}(t) - k x(t) + u(t) \end{align} #### 1.2 By using the Lagrange Method Let's first derive the kinematic and potential energies. \begin{equation} T = \frac{1}{2} m \dot{x} \\ V = - \int \vec{F} . \vec{dl} = - \int (-kx \vec{1_x}) . dx \vec{1_x} = \frac{k x^2}{2} \end{equation} The Lagrangian is then given by: \begin{equation} \mathcal{L} = T - V = \frac{1}{2} m \dot{x} - \frac{k x^2}{2} \end{equation} Using the Lagrange's equations we can derive the dynamics of the system: \begin{equation} \frac{d}{dt} \frac{\partial \mathcal{L}}{\partial \dot{q}} - \frac{\partial \mathcal{L}}{\partial q} = Q \end{equation} where $q$ are the generalized coordinates (in this case $x$), and $Q$ represents the non-conservative forces (input force, dragging or friction forces, etc). * $\frac{d}{dt} \frac{\partial \mathcal{L}}{\partial \dot{x}} = \frac{d}{dt} m \dot{x}(t) = m \ddot{x}(t) $ * $\frac{\partial \mathcal{L}}{\partial x} = - k x(t) $ * $Q = - b \dot{x}(t) + u(t) $ which when putting everything back together gives us: \begin{equation} m \ddot{x}(t) + b \dot{x}(t) + k x(t) = u(t) \end{equation} ### Deriving the dynamical equations using sympy ```python import sympy import sympy.physics.mechanics as mechanics from sympy import init_printing init_printing(use_latex='mathjax') from sympy import pprint ``` ```python # define variables q = mechanics.dynamicsymbols('q') dq = mechanics.dynamicsymbols('q', 1) u = mechanics.dynamicsymbols('u') # define constants m, k, b = sympy.symbols('m k b') # define the inertial frame N = mechanics.ReferenceFrame('N') # define a particle for the mass P = mechanics.Point('P') P.set_vel(N, dq * N.x) # go in the x direction Pa = mechanics.Particle('Pa', P, m) # define the potential energy for the particle (the kinematic one is derived automatically) Pa.potential_energy = k * q**2 / 2.0 # define the Lagrangian and the non-conservative force applied on the point P L = mechanics.Lagrangian(N, Pa) force = [(P, -b * dq * N.x + u * N.x)] # Lagrange equations lagrange = mechanics.LagrangesMethod(L, [q], forcelist = force, frame = N) pprint(lagrange.form_lagranges_equations()) ``` ⎡ 2 ⎤ ⎢ d d ⎥ ⎢b⋅──(q(t)) + 1.0⋅k⋅q(t) + m⋅───(q(t)) - u(t)⎥ ⎢ dt 2 ⎥ ⎣ dt ⎦ ## 2. Laplace transform and transfer function Applying the Laplace transform on the dynamic equation: \begin{equation} m \ddot{x}(t) + b \dot{x}(t) + k x(t) = u(t) \stackrel{L}{\rightarrow} m s^2 X(s) + b s X(s) + k X(s) = U(s) \end{equation} The transfer equation is given by: \begin{equation} H(s) = \frac{X(s)}{U(s)} = \frac{1}{m s^2 + b s + k} \end{equation} By calculating the pole: \begin{equation} m s^2 + b s + k = 0 \Leftrightarrow s = \frac{-b}{2m} \pm \sqrt{\left(\frac{b}{2m}\right)^2 - \frac{k}{m}} \end{equation} Note that $b, k, m > 0$ because they represent real physical quantities. ### LTI system We can rewrite the above equation as a first-order system of equations. 
Let's first define the state vector $\pmb{x} = \left[ \begin{array}{c} x(t) \\ \dot{x}(t) \end{array} \right]$ and the control vector $\pmb{u} = \left[ \begin{array}{c} u(t) \end{array} \right]$, then we can rewrite the above equation in the form $\pmb{\dot{x}} = \pmb{Ax} + \pmb{Bu}$, as below: \begin{equation} \left[ \begin{array}{c} \dot{x}(t) \\ \ddot{x}(t) \end{array} \right] = \left[ \begin{array}{cc} 0 & 1 \\ -\frac{k}{m} & -\frac{b}{m} \end{array} \right] \left[ \begin{array}{c} x(t) \\ \dot{x}(t) \end{array} \right] + \left[ \begin{array}{c} 0 \\ \frac{1}{m} \end{array} \right] \left[ \begin{array}{c} u(t) \end{array} \right] \end{equation} If there is no $u(t)$, i.e. $u(t) = 0 \; \forall t$, then we have $\pmb{\dot{x}} = \pmb{Ax}$. The solution to this system of equation is $\pmb{x}(t) = e^{\pmb{A}t} \pmb{x}_0$. ```python ```
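As a numerical illustration of the closing statement $\pmb{x}(t) = e^{\pmb{A}t} \pmb{x}_0$, the sketch below evaluates the matrix exponential with `scipy.linalg.expm` for the unforced system. SciPy is an extra dependency not imported earlier in this tutorial, and the values of $m$, $b$, $k$ and $\pmb{x}_0$ are arbitrary choices for illustration only.

```python
import numpy as np
from scipy.linalg import expm
import matplotlib.pyplot as plt

# Illustrative parameter values (not from the tutorial): m = 1 kg, b = 0.5 N s/m, k = 2 N/m
m, b, k = 1.0, 0.5, 2.0
A = np.array([[0.0, 1.0],
              [-k / m, -b / m]])
x0 = np.array([1.0, 0.0])   # initial position 1 m, initial velocity 0

ts = np.linspace(0.0, 20.0, 400)
xs = np.array([expm(A * t) @ x0 for t in ts])   # x(t) = e^{At} x0, since u(t) = 0

plt.plot(ts, xs[:, 0], label='x(t)')
plt.plot(ts, xs[:, 1], label='dx/dt(t)')
plt.xlabel('t (s)')
plt.legend()
plt.show()
```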
3004917c36d3d9173a492457c866edeaecbf9a5d
7,253
ipynb
Jupyter Notebook
tutorials/robotics/mass-spring-damper.ipynb
Pandinosaurus/pyrobolearn
9cd7c060723fda7d2779fa255ac998c2c82b8436
[ "Apache-2.0" ]
2
2021-01-21T21:08:30.000Z
2022-03-29T16:45:49.000Z
tutorials/robotics/mass-spring-damper.ipynb
Pandinosaurus/pyrobolearn
9cd7c060723fda7d2779fa255ac998c2c82b8436
[ "Apache-2.0" ]
null
null
null
tutorials/robotics/mass-spring-damper.ipynb
Pandinosaurus/pyrobolearn
9cd7c060723fda7d2779fa255ac998c2c82b8436
[ "Apache-2.0" ]
1
2020-09-29T21:25:39.000Z
2020-09-29T21:25:39.000Z
33.578704
393
0.511099
true
1,569
Qwen/Qwen-72B
1. YES 2. YES
0.91611
0.880797
0.806907
__label__eng_Latn
0.887428
0.713047
# Supply Network Design 2 ## Objective and Prerequisites Take your supply chain network design skills to the next level in this example. We’ll show you how – given a set of factories, depots, and customers – you can use mathematical optimization to determine which depots to open or close in order to minimize overall costs. This model is example 20 from the fifth edition of Model Building in Mathematical Programming, by H. Paul Williams on pages 275-276 and 332-333. This example is of beginning difficulty; we assume that you know Python and have some knowledge of the Gurobi Python API and building mathematical optimization models. **Download the Repository** <br /> You can download the repository containing this and other examples by clicking [here](https://github.com/Gurobi/modeling-examples/archive/master.zip). --- ## Problem Description In this problem, we have six end customers, each with a known demand for a product. Customer demand can be satisfied from a set of six depots, or directly from a set of two factories. Each depot can support a maximum volume of product moving through it, and each factory can produce a maximum amount of product. There are known costs associated with transporting the product, from a factory to a depot, from a depot to a customer, or from a factory directly to a customer. This extension provides the opportunity to choose which four of the six possible depots to open. It also provides an option of expanding capacity at one specific depot. Our supply network has two factories, in Liverpool and Brighton, that produce a product. Each has a maximum production capacity: | Factory | Supply (tons) | | --- | --- | | Liverpool | 150,000 | | Brighton | 200,000 | The product can be shipped from a factory to a set of six depots. Each depot has a maximum throughput. Depots don't produce or consume the product; they simply pass the product through to customers. | Depot | Throughput (tons) | | --- | --- | | Newcastle | 70,000 | | Birmingham | 50,000 | | London | 100,000 | | Exeter | 40,000 | | Bristol | 30,000 | | Northampton | 25,000 | We can actually only choose four of the six depots to open. Opening a depot has a cost: | Depot | Cost to open | | --- | --- | | Newcastle | 10,000 | | Exeter | 5,000 | | Bristol | 12,000 | | Northampton | 4,000 | (Note that the description in the book talks about the cost of opening Bristol or Northampton, and the savings from closing Newcastle or Exeter, but these are simply different ways of phrasing the same choice). We also have the option of expanding the capacity at Birmingham by 20,000 tons, for a cost of \$3000. Our network has six customers, each with a given demand. | Customer | Demand (tons) | | --- | --- | | C1 | 50,000 | | C2 | 10,000 | | C3 | 40,000 | | C4 | 35,000 | | C5 | 60,000 | | C6 | 20,000 | Shipping costs are given in the following table (in dollars per ton). Columns are source cities and rows are destination cities. Thus, for example, it costs $1 per ton to ship the product from Liverpool to London. A '-' in the table indicates that that combination is not possible, so for example it is not possible to ship from the factory in Brighton to the depot in Newcastle. 
| To | Liverpool | Brighton | Newcastle | Birmingham | London | Exeter | Briston | Northhampton | --- | --- | --- | --- | --- | --- | --- | --- | --- | | Depots | | Newcastle | 0.5 | - | | Birmingham | 0.5 | 0.3 | | London | 1.0 | 0.5 | | Exeter | 0.2 | 0.2 | | Bristol | 0.6 | 0.4 | | Northampton | 0.4 | 0.3 | | Customers | | C1 | 1.0 | 2.0 | - | 1.0 | - | - | 1.2 | - | | C2 | - | - | 1.5 | 0.5 | 1.5 | - | 0.6 | 0.4 | | C3 | 1.5 | - | 0.5 | 0.5 | 2.0 | 0.2 | 0.5 | - | | C4 | 2.0 | - | 1.5 | 1.0 | - | 1.5 | - | 0.5 | | C5 | - | - | - | 0.5 | 0.5 | 0.5 | 0.3 | 0.6 | | C6 | 1.0 | - | 1.0 | - | 1.5 | 1.5 | 0.8 | 0.9 | The questions to be answered: (i) Which four depots should be opened? (ii) Should Birmingham be expanded? (iii) Which depots should be used to satisfy customer demand? --- ## Model Formulation ### Sets and Indices $f \in \text{Factories}=\{\text{Liverpool}, \text{Brighton}\}$ $d \in \text{Depots}=\{\text{Newcastle}, \text{Birmingham}, \text{London}, \text{Exeter}, \text{Bristol}, \text{Northampton}\}$ $c \in \text{Customers}=\{\text{C1}, \text{C2}, \text{C3}, \text{C4}, \text{C5}, \text{C6}\}$ $\text{Cities} = \text{Factories} \cup \text{Depots} \cup \text{Customers}$ ### Parameters $\text{cost}_{s,t} \in \mathbb{R}^+$: Cost of shipping one ton from source $s$ to destination $t$. $\text{supply}_f \in \mathbb{R}^+$: Maximum possible supply from factory $f$ (in tons). $\text{through}_d \in \mathbb{R}^+$: Maximum possible flow through depot $d$ (in tons). $\text{demand}_c \in \mathbb{R}^+$: Demand for goods at customer $c$ (in tons). $\text{opencost}_d \in \mathbb{R}^+$: Cost of opening depot $d$ (in dollars). ### Decision Variables $\text{flow}_{s,t} \in \mathbb{N}^+$: Quantity of goods (in tons) that is shipped from source $s$ to destionation $t$. $\text{open}_{d} \in [0,1]$: Is depot $d$ open? $\text{expand} \in [0,1]$: Should Birmingham be expanded? ### Objective Function - **Cost**: Minimize total shipping costs plus costs of opening depots. \begin{equation} \text{Minimize} \quad Z = \sum_{(s,t) \in \text{Cities} \times \text{Cities}}{\text{cost}_{s,t}*\text{flow}_{s,t}} + \sum_{{d} \in \text{Depots}}{\text{opencost}_d*\text{open}_d} + 3000 * \text{expand} \end{equation} ### Constraints - **Factory output**: Flow of goods from a factory must respect maximum capacity. \begin{equation} \sum_{t \in \text{Cities}}{\text{flow}_{f,t}} \leq \text{supply}_{f} \quad \forall f \in \text{Factories} \end{equation} - **Customer demand**: Flow of goods must meet customer demand. \begin{equation} \sum_{s \in \text{Cities}}{\text{flow}_{s,c}} = \text{demand}_{c} \quad \forall c \in \text{Customers} \end{equation} - **Depot flow**: Flow into a depot equals flow out of the depot. \begin{equation} \sum_{s \in \text{Cities}}{\text{flow}_{s,d}} = \sum_{t \in \text{Cities}}{\text{flow}_{d,t}} \quad \forall d \in \text{Depots} \end{equation} - **Depot capacity (all but Birmingham)**: Flow into a depot must respect depot capacity, and is only allowed if the depot is open. \begin{equation} \sum_{s \in \text{Cities}}{\text{flow}_{s,d}} \leq \text{through}_{d} * \text{open}_{d} \quad \forall d \in \text{Depots} - \text{Birmingham} \end{equation} - **Depot capacity (Birmingham)**: Flow into Birmingham must respect depot capacity, which may have been expanded. \begin{equation} \sum_{s \in \text{Cities}} \text{flow}_{s,\text{Birmingham}} \leq \text{through}_{\text{Birmingham}} + 20000 * \text{expand} \end{equation} - **Open depots**: At most 4 open depots (no choice for Birmingham or London). 
\begin{equation} \sum_{d \in \text{Depots}}{\text{open}_{d}} \leq 4 \end{equation} \begin{equation} \text{open}_{\text{Birmingham}} = \text{open}_{\text{London}} = 1 \end{equation} --- ## Python Implementation We import the Gurobi Python Module and other Python libraries. ```python %pip install gurobipy ``` ```python import numpy as np import pandas as pd import gurobipy as gp from gurobipy import GRB # tested with Python 3.7.0 & Gurobi 9.0 ``` ## Input Data We define all the input data for the model. ```python # Create dictionaries to capture factory supply limits, depot throughput limits, cost of opening depots, and customer demand. supply = dict({'Liverpool': 150000, 'Brighton': 200000}) through = dict({'Newcastle': 70000, 'Birmingham': 50000, 'London': 100000, 'Exeter': 40000, 'Bristol': 30000, 'Northampton': 25000}) opencost = dict({'Newcastle': 10000, 'Birmingham': 0, 'London': 0, 'Exeter': 5000, 'Bristol': 12000, 'Northampton': 4000}) demand = dict({'C1': 50000, 'C2': 10000, 'C3': 40000, 'C4': 35000, 'C5': 60000, 'C6': 20000}) # Create a dictionary to capture shipping costs. arcs, cost = gp.multidict({ ('Liverpool', 'Newcastle'): 0.5, ('Liverpool', 'Birmingham'): 0.5, ('Liverpool', 'London'): 1.0, ('Liverpool', 'Exeter'): 0.2, ('Liverpool', 'Bristol'): 0.6, ('Liverpool', 'Northampton'): 0.4, ('Liverpool', 'C1'): 1.0, ('Liverpool', 'C3'): 1.5, ('Liverpool', 'C4'): 2.0, ('Liverpool', 'C6'): 1.0, ('Brighton', 'Birmingham'): 0.3, ('Brighton', 'London'): 0.5, ('Brighton', 'Exeter'): 0.2, ('Brighton', 'Bristol'): 0.4, ('Brighton', 'Northampton'): 0.3, ('Brighton', 'C1'): 2.0, ('Newcastle', 'C2'): 1.5, ('Newcastle', 'C3'): 0.5, ('Newcastle', 'C5'): 1.5, ('Newcastle', 'C6'): 1.0, ('Birmingham', 'C1'): 1.0, ('Birmingham', 'C2'): 0.5, ('Birmingham', 'C3'): 0.5, ('Birmingham', 'C4'): 1.0, ('Birmingham', 'C5'): 0.5, ('London', 'C2'): 1.5, ('London', 'C3'): 2.0, ('London', 'C5'): 0.5, ('London', 'C6'): 1.5, ('Exeter', 'C3'): 0.2, ('Exeter', 'C4'): 1.5, ('Exeter', 'C5'): 0.5, ('Exeter', 'C6'): 1.5, ('Bristol', 'C1'): 1.2, ('Bristol', 'C2'): 0.6, ('Bristol', 'C3'): 0.5, ('Bristol', 'C5'): 0.3, ('Bristol', 'C6'): 0.8, ('Northampton', 'C2'): 0.4, ('Northampton', 'C4'): 0.5, ('Northampton', 'C5'): 0.6, ('Northampton', 'C6'): 0.9 }) ``` ## Model Deployment We create a model and the variables. The 'flow' variables simply capture the amount of product that flows along each allowed path between a source and destination. The 'open' variable capture decisions about which depots to open. The 'expand' variable captures the choice of whether to expand Birmingham. Objective coefficients are provided here, so we don't need to provide an optimization objective later. ```python model = gp.Model('SupplyNetworkDesign2') depots = through.keys() flow = model.addVars(arcs, obj=cost, name="flow") open = model.addVars(depots, obj=opencost, vtype=GRB.BINARY, name="open") expand = model.addVar(obj=3000, vtype=GRB.BINARY, name="expand") open['Birmingham'].lb = 1 open['London'].lb = 1 model.objcon = -(opencost['Newcastle'] + opencost['Exeter']) # Phrased as 'savings from closing' ``` Using license file c:\gurobi\gurobi.lic Our first constraints require the total flow along arcs leaving a factory to be at most as large as the supply capacity of that factory. 
```python # Production capacity limits factories = supply.keys() factory_flow = model.addConstrs((gp.quicksum(flow.select(factory, '*')) <= supply[factory] for factory in factories), name="factory") ``` Our next constraints require the total flow along arcs entering a customer to be equal to the demand from that customer. ```python # Customer demand customers = demand.keys() customer_flow = model.addConstrs((gp.quicksum(flow.select('*', customer)) == demand[customer] for customer in customers), name="customer") ``` Our final constraints relate to depots. The first constraints require that the total amount of product entering the depot must equal the total amount leaving. ```python # Depot flow conservation depot_flow = model.addConstrs((gp.quicksum(flow.select(depot, '*')) == gp.quicksum(flow.select('*', depot)) for depot in depots), name="depot") ``` The second set limits the product passing through the depot to be at most equal the throughput of that deport, or 0 if the depot isn't open. ```python # Depot throughput all_but_birmingham = list(set(depots) - set(['Birmingham'])) depot_capacity = model.addConstrs((gp.quicksum(flow.select(depot, '*')) <= through[depot]*open[depot] for depot in all_but_birmingham), name="depot_capacity") ``` The capacity constraint for Birmingham is different. The depot is always open, but we have the option of expanding its capacity. ```python birmingham_capacity = model.addConstr(gp.quicksum(flow.select('*', 'Birmingham')) <= through['Birmingham'] + 20000*expand, name="birmingham_capacity") ``` Finally, there's a limit of at most 4 open depots ```python # Depot count depot_count = model.addConstr(open.sum() <= 4) ``` We now optimize the model ```python model.optimize() ``` Gurobi Optimizer version 9.1.0 build v9.1.0rc0 (win64) Thread count: 4 physical cores, 8 logical processors, using up to 8 threads Optimize a model with 21 rows, 49 columns and 119 nonzeros Model fingerprint: 0x140cc3a9 Variable types: 42 continuous, 7 integer (7 binary) Coefficient statistics: Matrix range [1e+00, 1e+05] Objective range [2e-01, 1e+04] Bounds range [1e+00, 1e+00] RHS range [4e+00, 2e+05] Presolve removed 0 rows and 2 columns Presolve time: 0.00s Presolved: 21 rows, 47 columns, 113 nonzeros Variable types: 42 continuous, 5 integer (5 binary) Root relaxation: objective 1.740000e+05, 17 iterations, 0.00 seconds Nodes | Current Node | Objective Bounds | Work Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time * 0 0 0 174000.00000 174000.000 0.00% - 0s Explored 0 nodes (17 simplex iterations) in 0.02 seconds Thread count was 8 (of 8 available processors) Solution count 1: 174000 Optimal solution found (tolerance 1.00e-04) Best objective 1.740000000000e+05, best bound 1.740000000000e+05, gap 0.0000% --- ## Analysis The product demand from all of our customers can be satisfied for a total cost of $\$174,000$ by opening a depot in Northampton, closing the depot in Newcastle, and expanding the depot in Birmingham: ```python print('List of open depots:', [d for d in depots if open[d].x > 0.5]) if expand.x > 0.5: print('Expand Birmingham') ``` List of open depots: ['Birmingham', 'London', 'Exeter', 'Northampton'] Expand Birmingham ```python product_flow = pd.DataFrame(columns=["From", "To", "Flow"]) for arc in arcs: if flow[arc].x > 1e-6: product_flow = product_flow.append({"From": arc[0], "To": arc[1], "Flow": flow[arc].x}, ignore_index=True) product_flow.index=[''] * len(product_flow) product_flow ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { 
vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>From</th> <th>To</th> <th>Flow</th> </tr> </thead> <tbody> <tr> <th></th> <td>Liverpool</td> <td>Exeter</td> <td>40000.0</td> </tr> <tr> <th></th> <td>Liverpool</td> <td>C1</td> <td>50000.0</td> </tr> <tr> <th></th> <td>Liverpool</td> <td>C6</td> <td>20000.0</td> </tr> <tr> <th></th> <td>Brighton</td> <td>Birmingham</td> <td>70000.0</td> </tr> <tr> <th></th> <td>Brighton</td> <td>London</td> <td>10000.0</td> </tr> <tr> <th></th> <td>Brighton</td> <td>Northampton</td> <td>25000.0</td> </tr> <tr> <th></th> <td>Birmingham</td> <td>C2</td> <td>10000.0</td> </tr> <tr> <th></th> <td>Birmingham</td> <td>C4</td> <td>10000.0</td> </tr> <tr> <th></th> <td>Birmingham</td> <td>C5</td> <td>50000.0</td> </tr> <tr> <th></th> <td>London</td> <td>C5</td> <td>10000.0</td> </tr> <tr> <th></th> <td>Exeter</td> <td>C3</td> <td>40000.0</td> </tr> <tr> <th></th> <td>Northampton</td> <td>C4</td> <td>25000.0</td> </tr> </tbody> </table> </div> --- ## References H. Paul Williams, Model Building in Mathematical Programming, fifth edition. Copyright © 2020 Gurobi Optimization, LLC ```python ```
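A possible post-solve sanity check, using only the objects defined earlier in this notebook (`flow`, `cost`, `open`, `expand`, `opencost`, `through`, `depots`, `arcs`, `model`): it splits the objective into shipping, depot-opening and expansion pieces and reports throughput utilisation for the open depots. This is an illustrative addition, not part of the original example.

```python
# Split the objective value into its components
shipping = sum(cost[arc] * flow[arc].x for arc in arcs)
opening = sum(opencost[d] * open[d].x for d in depots) + model.objcon  # objcon holds the 'savings from closing' offset
expansion = 3000 * expand.x

print('Shipping cost :', shipping)
print('Depot opening :', opening)
print('Expansion     :', expansion)
print('Total         :', shipping + opening + expansion)   # should equal model.ObjVal

# Depot throughput utilisation for the open depots
for d in depots:
    if open[d].x > 0.5:
        inflow = sum(v.x for v in flow.select('*', d))
        capacity = through[d] + (20000 * expand.x if d == 'Birmingham' else 0)
        print('{}: {:.0f} of {:.0f} tons used'.format(d, inflow, capacity))
```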
072e7ede6084f374dd436f7b291c84ec7bb868a3
24,632
ipynb
Jupyter Notebook
supply_network_design_1_2/supply_network_design_2_gcl.ipynb
gglockner/modeling-examples
51575a453d28e1e9435abd865432955b182ba577
[ "Apache-2.0" ]
1
2021-12-22T06:17:22.000Z
2021-12-22T06:17:22.000Z
supply_network_design_1_2/supply_network_design_2_gcl.ipynb
Maninaa/modeling-examples
51575a453d28e1e9435abd865432955b182ba577
[ "Apache-2.0" ]
null
null
null
supply_network_design_1_2/supply_network_design_2_gcl.ipynb
Maninaa/modeling-examples
51575a453d28e1e9435abd865432955b182ba577
[ "Apache-2.0" ]
1
2021-11-29T07:41:53.000Z
2021-11-29T07:41:53.000Z
34.84017
654
0.497808
true
5,057
Qwen/Qwen-72B
1. YES 2. YES
0.727975
0.855851
0.623039
__label__eng_Latn
0.916432
0.285858
# PharmSci 175/275 (UCI) ## What is this?? The material below is a supplement to the quantum mechanics (QM) lecture from Drug Discovery Computing Techniques, PharmSci 175/275 at UC Irvine. Extensive materials for this course, as well as extensive background and related materials, are available on the course GitHub repository: [github.com/mobleylab/drug-computing](https://github.com/mobleylab/drug-computing) # Using QM in Python This material adapted (under CC-BY) from a [workshop example](https://github.com/QCMM/workshop2017/blob/master/Theory_electronic_structure_day3/i_inter_es.ipynb) of Stefano Vogt-Giesse (University of Concepcion) from the December, 2017 QCMM workshop in Chile, available under the [workshop repository](https://github.com/QCMM/workshop2017) ### Instructor: David L. Mobley ### Contributors to these materials: - Stefano Vogt-Giesse - David L. Mobley ## Choose whether to run under Google Colab or locally You need to do different preparation to run this notebook locally vs Google Colab; skip to the appropriate section following depending on which you choose. ## Preparation for using Google Colab (SKIP IF RUNNING LOCALLY)) [](https://colab.research.google.com/github/MobleyLab/drug-computing/blob/master/uci-pharmsci/lectures/QM/psi4_example.ipynb) If you are running this on Google Colab, you need to take a couple additional steps of preparation. **Note that these steps may take 5-10 minutes to complete.** Psi4 installs via `conda`, not pip, so you will need to get conda set up on Colab: ```python ! wget https://repo.anaconda.com/miniconda/Miniconda3-py37_4.10.3-Linux-x86_64.sh ! chmod +x Miniconda3-py37_4.10.3-Linux-x86_64.sh ! bash ./Miniconda3-py37_4.10.3-Linux-x86_64.sh -b -f -p /usr/local import sys sys.path.append('/usr/local/lib/python3.7/site-packages/') ``` Then `conda`-install psi4: ```python !conda install -c psi4 psi4 --yes ``` ## Preparation for running locally (SKIP IF RUNNING USING COLAB) For today's activity we will use the package of program psi4, so we will need to install it first. Assuming you already have anaconda/miniconda installed, you can install as follows (in a new conda environment, `psi4`: **conda create -n psi4 psi4 psi4-rt jupyter matplotlib -c psi4/label/dev -c psi4** (Note psi4 seems to be somewhat incompatible with the other software we are using in this course, so a separate environment is needed.) This will install all the psi4 binaries and a python module which can be imported from the notebook. Then activate the environment via `conda activate psi4`. You also need to ensure it works in your jupyter notebook, which you can do via (in the terminal, with your `psi4` environment active): ``` conda install ipykernel --name psi4 python -m ipykernel install --user ``` To finish the installation you need to provide a scratch directory in your `~/.bash_profile`, for example (assuming you want your scratch directory in this space): **export PSI_SCRATCH=/home/user_name/scratch/psi4** Then type `source ~/.bash_profile`. Now you may open jupyter-notebook and install psi4. Every time you wish to use psi4 you will need to `conda activate psi4`. # Exploring molecular interactions using electronic structure methods. ```python import psi4 import numpy as np ``` # 1. 
Compute the energy of a diatomic molecule As a first example we will compute the scf energy of the diatómic molecule hydrogen flouride (HF): ```python # ==> Basic Psi4 options <== # Memory psi4.set_memory(int(5e8)) numpy_memory = 500 # Output psi4.core.set_output_file('output.dat', False) # Geometry input hf_mol = psi4.geometry(""" 0 1 H F 1 0.917 """) energy_hf_mol , wfn_hf_mol = psi4.energy('mp2/cc-pvtz', return_wfn=True) print(energy_hf_mol) #Energy in Hartrees ``` Memory set to 476.837 MiB by Python driver. -100.34290633086249 This corresponds to the MP2/cc-pVTZ energy for this system (HF). Input coordinates are given in [Z matrix format](https://en.wikipedia.org/wiki/Z-matrix_(chemistry)), using internal coordinates (in this case just a bond distance). ### Exercise: Compute the energy of other diatomic molecules using similar methods You might consider computing the energy of of F$_2$ and N$_2$ using the cc-pVDZ and cc-pVTZ methods. # 2. Compute the dipole and quadrupole moment of diatomic molecules Since we are intrested in studying long range molecular interactions using classical elctrodynamics, it is necessary to compute the dipole and quadrupole moments. Quantum mechanically the dipole can be computed using the one electron dipole operator: \begin{equation} \hat{\mu} = \sum_i q_i r_i \end{equation} where $q_i$ is the charge of the particle and $r_i$ is the position vector of the particle. The dipole moment can be computed using the wavefunction through the expectation value of the operator $\mu$. \begin{equation} \mu = <\psi|\hat{\mu}|\psi> \end{equation} In psi4 we can obtain the dipole moment from the wafefunction object that was defined above ```python psi4.oeprop(wfn_hf_mol, 'DIPOLE', 'QUADRUPOLE', title='HF SCF') mux = psi4.core.get_variable('HF SCF DIPOLE X') # in debye muy = psi4.core.get_variable('HF SCF DIPOLE Y') muz = psi4.core.get_variable('HF SCF DIPOLE Z') quad_zz = psi4.core.get_variable('HF SCF QUADRUPOLE ZZ') ``` /var/folders/7z/88tvgj2941bclbbw1xhn0t8r0000gn/T/ipykernel_58639/3045530770.py:3: FutureWarning: Using `psi4.core.get_variable` instead of `psi4.core.variable` (or `psi4.core.scalar_variable` for scalar variables only) is deprecated, and in 1.4 it will stop working mux = psi4.core.get_variable('HF SCF DIPOLE X') # in debye /var/folders/7z/88tvgj2941bclbbw1xhn0t8r0000gn/T/ipykernel_58639/3045530770.py:4: FutureWarning: Using `psi4.core.get_variable` instead of `psi4.core.variable` (or `psi4.core.scalar_variable` for scalar variables only) is deprecated, and in 1.4 it will stop working muy = psi4.core.get_variable('HF SCF DIPOLE Y') /var/folders/7z/88tvgj2941bclbbw1xhn0t8r0000gn/T/ipykernel_58639/3045530770.py:5: FutureWarning: Using `psi4.core.get_variable` instead of `psi4.core.variable` (or `psi4.core.scalar_variable` for scalar variables only) is deprecated, and in 1.4 it will stop working muz = psi4.core.get_variable('HF SCF DIPOLE Z') /var/folders/7z/88tvgj2941bclbbw1xhn0t8r0000gn/T/ipykernel_58639/3045530770.py:6: FutureWarning: Using `psi4.core.get_variable` instead of `psi4.core.variable` (or `psi4.core.scalar_variable` for scalar variables only) is deprecated, and in 1.4 it will stop working quad_zz = psi4.core.get_variable('HF SCF QUADRUPOLE ZZ') ```python print(muz) print(quad_zz) mu = (np.sqrt(mux**2 + muy**2 + muz**2)) print(mu) ``` -1.9411550055413282 -3.2710162959870983 1.9411550055413282 # 3. Compute a potential energy surface of HF dimer. 
In order to study the physical interactions between two molecules it is convinient to draw a potential energy surface along the interaction coordinate. In this section we will obtain a potential energy profile for the most favorable dipole-dipole interaction, which is the horizontal orientation with oposing dipole vectors, HF---FH. First we need to define a list containing the distances between both dimers for which the energy will be obtained. ```python hf_dimer = psi4.geometry(""" 0 1 H F 1 0.917 H 2 R 1 180.0 F 3 0.917 2 180.0 1 0.0 """) ``` Next, we write a loop and in each step of the loop we compute the energy at the mp4 level of theory. ```python energy = [] dist = [] Rval = np.arange(1.5,10.0,0.1) for d in Rval: hf_dimer.R = d psi4.set_options({'freeze_core': 'True'}) en = psi4.energy('scf/cc-pvtz') print(en) print(d) energy.append(en) dist.append(d) ``` -200.11594007613158 1.5 -200.11908196505507 1.6 -200.12084511099732 1.7000000000000002 -200.12174182364433 1.8000000000000003 -200.12210646361427 1.9000000000000004 -200.12214740636392 2.0000000000000004 -200.12199128717546 2.1000000000000005 -200.1217185553115 2.2000000000000006 -200.12138348389254 2.3000000000000007 -200.12102248369894 2.400000000000001 -200.1206584432208 2.500000000000001 -200.12030471982058 2.600000000000001 -200.11996870241614 2.700000000000001 -200.11965427235268 2.800000000000001 -200.1193632136904 2.9000000000000012 -200.11909600301578 3.0000000000000013 -200.11885228615392 3.1000000000000014 -200.11863115920457 3.2000000000000015 -200.11843131725777 3.3000000000000016 -200.118251147143 3.4000000000000017 -200.11808882795262 3.5000000000000018 -200.1179424573063 3.600000000000002 -200.1178101821683 3.700000000000002 -200.11769030211374 3.800000000000002 -200.117581325851 3.900000000000002 -200.11748198151327 4.000000000000002 -200.11739119494283 4.100000000000002 -200.11730805348418 4.200000000000003 -200.11723176950403 4.3000000000000025 -200.11716165120245 4.400000000000002 -200.11709708268364 4.500000000000003 -200.11703751178317 4.600000000000003 -200.1169824430437 4.700000000000003 -200.1169314331076 4.8000000000000025 -200.11688408717754 4.900000000000003 -200.1168400551661 5.0000000000000036 -200.11679902773142 5.100000000000003 -200.11676073196662 5.200000000000003 -200.11672492703738 5.300000000000003 -200.1166914000603 5.400000000000004 -200.1166599622753 5.5000000000000036 -200.11663044566174 5.600000000000003 -200.11660269999402 5.700000000000004 -200.11657659037843 5.800000000000004 -200.11655199514317 5.900000000000004 -200.11652880412805 6.0000000000000036 -200.11650691722386 6.100000000000004 -200.11648624315433 6.200000000000005 -200.11646669847164 6.300000000000004 -200.1164482067375 6.400000000000004 -200.11643069773467 6.500000000000004 -200.11641410687145 6.600000000000005 -200.1163983746509 6.700000000000005 -200.1163834461709 6.800000000000004 -200.1163692707005 6.900000000000005 -200.11635580132997 7.000000000000005 -200.11634299461193 7.100000000000005 -200.1163308102652 7.200000000000005 -200.116319210949 7.300000000000005 -200.11630816193497 7.400000000000006 -200.1162976309706 7.500000000000005 -200.11628758805156 7.600000000000005 -200.11627800521813 7.7000000000000055 -200.1162688564144 7.800000000000006 -200.1162601173462 7.900000000000006 -200.11625176532 8.000000000000005 -200.11624377915305 8.100000000000005 -200.11623613901043 8.200000000000006 -200.11622882636473 8.300000000000006 -200.11622182384693 8.400000000000006 -200.11621511520096 8.500000000000007 
-200.11620868514754 8.600000000000007 -200.1162025193921 8.700000000000006 -200.11619660448488 8.800000000000006 -200.11619092780117 8.900000000000006 -200.11618547746463 9.000000000000007 -200.11618024229836 9.100000000000007 -200.1161752117998 9.200000000000006 -200.11617037606567 9.300000000000008 -200.11616572573791 9.400000000000007 -200.11616125203147 9.500000000000007 -200.11615694665616 9.600000000000007 -200.11615280173663 9.700000000000006 Now we are ready to plot the potential energy profile. We will use the matplotlib python library for this purpose. The function `ref_zero_kcal` transforms the energy which is in Hartee to kcal/mol and takes the energy of the dimer with the farthest separation as the reference energy. ```python import matplotlib.pyplot as plt %matplotlib inline ``` ```python def ref_zero_kcal(en_list): energy_kcal = [] for x in range(len(en_list)): energy_kcal.append((en_list[x] - en_list[-1])*627.51) return energy_kcal energy_kcal = ref_zero_kcal(energy) plt.plot(dist,energy_kcal) plt.xlabel('Distance') plt.ylabel('Energy (kcal/mol)') ``` There are many more examples/tutorials in the Psi4 GitHub repositories, especially see `Tutorials` under the (psi4numpy repository](https://github.com/psi4/psi4numpy) for many Jupyter notebooks.
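A small follow-up on the scan above: locating the minimum of the curve gives a rough equilibrium separation and well depth. It reuses the `dist` and `energy_kcal` lists from the previous cells and needs no further Psi4 calls; the separations are taken to be in angstroms, which I assume is the length unit used for the Z-matrix input here.

```python
import numpy as np

distances = np.asarray(dist)
energies = np.asarray(energy_kcal)

i_min = int(np.argmin(energies))
print('Grid minimum: R = {:.2f} angstrom, E = {:.3f} kcal/mol'.format(
    distances[i_min], energies[i_min]))

# Optional parabolic refinement through the three points around the grid minimum
# (only meaningful if the minimum is not at the edge of the scan)
if 0 < i_min < len(distances) - 1:
    a2, a1, a0 = np.polyfit(distances[i_min - 1:i_min + 2],
                            energies[i_min - 1:i_min + 2], 2)
    print('Refined minimum near R = {:.2f} angstrom'.format(-a1 / (2 * a2)))
```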
3d552e36a5f9245b43bb739b353434d8f4241763
18,416
ipynb
Jupyter Notebook
uci-pharmsci/lectures/QM/psi4_example.ipynb
aakankschit/drug-computing
3ea4bd12f3b56cbffa8ea43396f3a32c009985a9
[ "CC-BY-4.0", "MIT" ]
null
null
null
uci-pharmsci/lectures/QM/psi4_example.ipynb
aakankschit/drug-computing
3ea4bd12f3b56cbffa8ea43396f3a32c009985a9
[ "CC-BY-4.0", "MIT" ]
null
null
null
uci-pharmsci/lectures/QM/psi4_example.ipynb
aakankschit/drug-computing
3ea4bd12f3b56cbffa8ea43396f3a32c009985a9
[ "CC-BY-4.0", "MIT" ]
null
null
null
31.861592
348
0.600348
true
4,098
Qwen/Qwen-72B
1. YES 2. YES
0.847968
0.752013
0.637682
__label__eng_Latn
0.702688
0.319881
# Restricted Boltzmann Machine The restricted Boltzman Machine model is the Joint Probability Distribution which is specified by the Energy Function : \begin{equation} P(v,h) = \frac{1}{Z} e^{-E(v,h)} \end{equation} The energy function for the RBM is stated as follows: \begin{equation} E(v,h) = -b^{T} v - c^{T} h - v^{T} W h \end{equation} We also have the Partition Function Z which is the normalizing constant. \begin{equation} Z = \Sigma_{v} \Sigma_{h} e^{-E(v,h)} \end{equation} In Boltzmann Machines, the partition function Z is intractable and hence implies that the normalized Joint Probability Distribution _P(v)_ is also intractable to evaluate. Even though this is the case, the bipartitie graph structure of the RBM has a special property that the visible and hidden units are conditionally independent, given one another. \begin{equation} P(h|v) = \frac{P(h,v)}{P(v)} = \frac{1}{P(v)} \frac{1}{Z} exp\{b^{T} v + c^{T} h + v^{T} W h\} \end{equation} \begin{equation} = \frac{1}{Z'} exp\{\Sigma_j c_j h_j + \Sigma_j v^T W_j h_j \} \end{equation} \begin{equation} = \frac{1}{Z'} \Pi_j exp \{ c_j h_j + v^T W_j h_j\} \end{equation} \begin{equation} P(h_j = 1,v) = \frac{\hat{P}(h_j = 1,v)}{\hat{P}(h_j = 0,v) + \hat{P}(h_j = 1,v)} = \frac{exp\{c_j + v^T W_j \}}{exp\{0\} + exp\{c_j + v^T W_j \}} \end{equation} Therefore: \begin{equation} P(h_j = 1|v) = \sigma(c_j + v^T W_j ) \end{equation} Similary from Eq 8 we can say that: \begin{equation} P(v|h) = \frac{1}{Z'} \Pi_k exp \{b_k + h^T W_k \} \end{equation} Therefore: \begin{equation} P(v_k = 1|h) = \sigma(b_k + h^T W_k) \end{equation} ```python import numpy as np import matplotlib.pyplot as plt import matplotlib.gridspec as gridspec import os from tensorflow.examples.tutorials.mnist import input_data if not os.path.exists('outRBM/'): os.makedirs('outRBM/') mnist = input_data.read_data_sets('../MNIST_data', one_hot=True) X_dim = mnist.train.images.shape[1] y_dim = mnist.train.labels.shape[1] mb_size = 16 h_dim = 20 W = np.random.randn(X_dim, h_dim) * 0.001 a = np.random.randn(h_dim) * 0.001 b = np.random.randn(X_dim) * 0.001 def sigm(x): return 1/(1 + np.exp(-x)) def infer(X): # mb_size x x_dim -> mb_size x h_dim return sigm(X @ W) def generate(H): # mb_size x h_dim -> mb_size x x_dim return sigm(H @ W.T) # Here we find the Contrastive Divergence # ---------------------- # Approximate the log partition gradient Gibbs sampling alpha = 0.1 K = 15 # Num. 
of Gibbs sampling step for t in range(1, 1001): X_mb = (mnist.train.next_batch(mb_size)[0] > 0.5).astype(np.float) g = 0 g_a = 0 g_b = 0 for v in X_mb: # E[h|v,W] h = infer(v) # Gibbs sampling steps # -------------------- v_prime = np.copy(v) for k in range(K): # h ~ p(h|v,W) h_prime = np.random.binomial(n=1, p=infer(v_prime)) # v ~ p(v|h,W) v_prime = np.random.binomial(n=1, p=generate(h_prime)) # E[h|v',W] h_prime = infer(v_prime) # Compute data gradient grad_w = np.outer(v, h) - np.outer(v_prime, h_prime) grad_a = h - h_prime grad_b = v - v_prime # Accumulate minibatch gradient g += grad_w g_a += grad_a g_b += grad_b # Monte carlo gradient g *= 1 / mb_size g_a *= 1 / mb_size g_b *= 1 / mb_size # Update to maximize W += alpha * g a += alpha * g_a b += alpha * g_b # Visualization # ------------- def plot(samples, size, name): size = int(size) fig = plt.figure(figsize=(4, 4)) gs = gridspec.GridSpec(4, 4) gs.update(wspace=0.05, hspace=0.05) for i, sample in enumerate(samples): ax = plt.subplot(gs[i]) plt.axis('off') ax.set_xticklabels([]) ax.set_yticklabels([]) ax.set_aspect('equal') plt.imshow(sample.reshape(size, size), cmap='Greys_r') plt.savefig('outRBM/{}.png'.format(name), bbox_inches='tight') plt.close(fig) X = (mnist.test.next_batch(mb_size)[0] > 0.5).astype(np.float) H = np.random.binomial(n=1, p=infer(X)) plot(H, np.sqrt(h_dim), 'H') X_recon = (generate(H) > 0.5).astype(np.float) plot(X_recon, np.sqrt(X_dim), 'V') ``` # Variational Autoencoder In Variational AutoEncoder we convert the input into a latent space and calculate the mean and standard deviation. By combining this, we get a new distribution of latent space which provides a better output than a regular autoencoder. We shall consider the following definitions when deriving the VAE. 1. X - The data to be modeled 2. z - Latent Variable 3. P(X) - Probability distribution of Data 4. P(z) - Probability Distribution of Latent Variable P(X|z) - Probability Distribution of the generated Data given the Latent Variable. In VAE, we try to find the latent space _z_ using the data _X_. Hence we try to infer _P(z)_ using _P(z|X)_. But we do not know _P(z|X)_, hence we use Variational Inference and approach it as an optimization problem. We do this by actual distribution _P(z|X)_ using a simpler distribution such as Gaussian and then find the difference between the two distributions using KL Divergence. For inferring _P(z|X)_ using _Q(z|X)_, we have the KL Divergence as : \begin{equation} D_{KL}[Q(z|X)∥P(z|X)] = \Sigma_z Q(z|X) log \frac{Q(z|X)}{P(z|X)} = E[log Q(z|X) - log P(z|X)] \end{equation} Using Bayes' Rule, we can expand _P(z|X)_ as the following: \begin{equation} D_{KL}[Q(z|X)∥P(z|X)] = E[log Q(z|X) - log \frac{P(X|z) P(z)}{P(X)}] = E[log Q(z|X) − log P(X|z) − log P(z) + log P(X)] \end{equation} The expection we are finding is over _z_, hence independent of _x_. Therefore, we we move _x_ out of the expectation. \begin{equation} D_{KL}[Q(z|X)∥P(z|X)] = E[log Q(z|X) − log P(X|z) − log P(z) ] + log P(X) \end{equation} \begin{equation} D_{KL}[Q(z|X)∥P(z|X)] - log P(X) = E[log Q(z|X) − log P(X|z) − log P(z)] \end{equation} This equation can then be written as another KL Divergence. That is shown below. 
\begin{equation} log P(X) - D_{KL}[Q(z|X)∥P(z|X)] = E[log P(X|z) - (log Q(z|X) − log P(z))] = E[log P(X|z)] - E[log Q(z|X) - log P(z)] \end{equation} This is the final objective functions that we arrive from the derivation: \begin{equation} log P(X) - D_{KL}[Q(z|X)∥P(z|X)] = E[log P(X|z)] - D_{KL}[Q(z|X)∥P(z)] \end{equation} ## Vanilla VAE ```python import tensorflow as tf import numpy as np import matplotlib.pyplot as plt import matplotlib.gridspec as gridspec import os from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets('../MNIST_data', one_hot=True) mb_size = 64 z_dim = X_dim = mnist.train.images.shape[1] y_dim = mnist.train.labels.shape[1] h_dim = 256 c = 0 lr = 1e-3 def plot(samples): fig = plt.figure(figsize=(4, 4)) gs = gridspec.GridSpec(4, 4) gs.update(wspace=0.05, hspace=0.05) for i, sample in enumerate(samples): ax = plt.subplot(gs[i]) plt.axis('off') ax.set_xticklabels([]) ax.set_yticklabels([]) ax.set_aspect('equal') plt.imshow(sample.reshape(28, 28), cmap='Greys_r') return fig def xavier_init(size): in_dim = size[0] xavier_stddev = 1. / tf.sqrt(in_dim / 2.) return tf.random_normal(shape=size, stddev=xavier_stddev) # =============================== Q(z|X) ====================================== X = tf.placeholder(tf.float32, shape=[None, X_dim]) z = tf.placeholder(tf.float32, shape=[None, z_dim]) Q_W1 = tf.Variable(xavier_init([X_dim, h_dim])) Q_b1 = tf.Variable(tf.zeros(shape=[h_dim])) Q_W2_mu = tf.Variable(xavier_init([h_dim, z_dim])) Q_b2_mu = tf.Variable(tf.zeros(shape=[z_dim])) Q_W2_sigma = tf.Variable(xavier_init([h_dim, z_dim])) Q_b2_sigma = tf.Variable(tf.zeros(shape=[z_dim])) def Q(X): h = tf.nn.relu(tf.matmul(X, Q_W1) + Q_b1) z_mu = tf.matmul(h, Q_W2_mu) + Q_b2_mu z_logvar = tf.matmul(h, Q_W2_sigma) + Q_b2_sigma return z_mu, z_logvar def sample_z(mu, log_var): eps = tf.random_normal(shape=tf.shape(mu)) return mu + tf.exp(log_var / 2) * eps # =============================== P(X|z) ====================================== P_W1 = tf.Variable(xavier_init([z_dim, h_dim])) P_b1 = tf.Variable(tf.zeros(shape=[h_dim])) P_W2 = tf.Variable(xavier_init([h_dim, X_dim])) P_b2 = tf.Variable(tf.zeros(shape=[X_dim])) def P(z): h = tf.nn.relu(tf.matmul(z, P_W1) + P_b1) logits = tf.matmul(h, P_W2) + P_b2 prob = tf.nn.sigmoid(logits) return prob, logits # =============================== TRAINING ==================================== z_mu, z_logvar = Q(X) z_sample = sample_z(z_mu, z_logvar) _, logits = P(z_sample) # Sampling from random z X_samples, _ = P(z) # E[log P(X|z)] recon_loss = tf.reduce_sum(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=X), 1) # D_KL(Q(z|X) || P(z)); calculate in closed form as both dist. are Gaussian kl_loss = 0.5 * tf.reduce_sum(tf.exp(z_logvar) + z_mu**2 - 1. - z_logvar, 1) # VAE loss vae_loss = tf.reduce_mean(recon_loss + kl_loss) solver = tf.train.AdamOptimizer().minimize(vae_loss) sess = tf.Session() sess.run(tf.global_variables_initializer()) ``` WARNING:tensorflow:From <ipython-input-2-7e44334aa31a>:9: read_data_sets (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version. Instructions for updating: Please use alternatives such as official/mnist/dataset.py from tensorflow/models. WARNING:tensorflow:From /Users/akash/anaconda3/lib/python3.7/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:260: maybe_download (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version. 
Instructions for updating:
Please write your own downloading logic.
WARNING:tensorflow:From /Users/akash/anaconda3/lib/python3.7/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:262: extract_images (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.data to implement this functionality.
Extracting ../MNIST_data/train-images-idx3-ubyte.gz
WARNING:tensorflow:From /Users/akash/anaconda3/lib/python3.7/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:267: extract_labels (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.data to implement this functionality.
Extracting ../MNIST_data/train-labels-idx1-ubyte.gz
WARNING:tensorflow:From /Users/akash/anaconda3/lib/python3.7/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:110: dense_to_one_hot (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.one_hot on tensors.
Extracting ../MNIST_data/t10k-images-idx3-ubyte.gz
Extracting ../MNIST_data/t10k-labels-idx1-ubyte.gz
WARNING:tensorflow:From /Users/akash/anaconda3/lib/python3.7/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:290: DataSet.__init__ (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use alternatives such as official/mnist/dataset.py from tensorflow/models.
WARNING:tensorflow:From /Users/akash/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From /Users/akash/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.

```python
if not os.path.exists('out/'):
    os.makedirs('out/')

i = 0

for it in range(1000000):
    X_mb, _ = mnist.train.next_batch(mb_size)

    _, loss = sess.run([solver, vae_loss], feed_dict={X: X_mb})

    if it % 1000 == 0:
        print('Iter: {}'.format(it))
        print('Loss: {:.4}'.format(loss))
        print()

        samples = sess.run(X_samples, feed_dict={z: np.random.randn(16, z_dim)})
        print(type(samples))

        # fig = plot(samples)
        # plt.savefig('out/{}.png'.format(str(i).zfill(3)), bbox_inches='tight')
        i += 1
        # plt.close(fig)
```

# References

1. https://wiseodd.github.io/techblog/2016/12/10/variational-autoencoder/
2. https://github.com/wiseodd/generative-models/blob/master/RBM/rbm_binary_cd.py
3. https://towardsdatascience.com/intuitively-understanding-variational-autoencoders-1bfe67eb5daf
4. http://anotherdatum.com/vae2.html

```python

```
```python
# File Contains: Python code containing closed-form solutions for the valuation of European Options,
# American Options, Asian Options, Spread Options, Heat Rate Options, and Implied Volatility
#
# This document demonstrates a Python implementation of some option models described in books written by Davis
# Edwards: "Energy Trading and Investing", "Risk Management in Trading", "Energy Investing Demystified".
#
# for backward compatibility with Python 2.7
from __future__ import division

# import necessary libraries
import unittest
import math
import numpy as np
from scipy.stats import norm
from scipy.stats import mvn

# Developers can toggle _DEBUG to True for more messages
# normally this is set to False
_DEBUG = False
```

MIT License

Copyright (c) 2017 Davis William Edwards

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

# Closed Form Option Pricing Formulas

## Generalized Black Scholes (GBS) and similar models

**ChangeLog:**

* 1/1/2017 Davis Edwards, Created GBS and Asian option formulas
* 3/2/2017 Davis Edwards, added TeX formulas to describe calculations
* 4/9/2017 Davis Edwards, added spread option (Kirk's approximation)
* 5/10/2017 Davis Edwards, added graphics for sensitivity analysis
* 5/18/2017 Davis Edwards, added Bjerksund-Stensland (2002) approximation for American Options
* 5/19/2017 Davis Edwards, added implied volatility calculations
* 6/7/2017 Davis Edwards, expanded sensitivity tests for American option approximation.
* 6/21/2017 Davis Edwards, added documentation for Bjerksund-Stensland models
* 7/21/2017 Davis Edwards, refactored all of the functions to match the parameter order to Haug's "The Complete Guide to Option Pricing Formulas".

**TO DO List**

1. Since the Asian Option valuation uses an approximation, need to determine the range of acceptable stresses that can be applied to the volatility input
2. Sub-class the custom assertions in this module to work with "unittest"
3. Update the greek calculations for American Options - currently the Greeks are approximated by the greeks from the GBS model.
4. Add a bibliography referencing the academic papers used as sources
5. Finish writing documentation formulas for the closed-form approximation for American Options
6. Refactor the order of parameters for the function calls to replicate the order of parameters in academic literature

-------------------------

## Purpose: (Why these models exist)

The software in this model is intended to price particular types of financial products called "options".
These are a common type of financial product and fall into the category of "financial derivative". This documentation assumes that the reader is already familiar with options terminology. The models are largely variations of the Black Scholes Merton option framework (collectively called "Black Scholes Genre" or "Generalized Black Scholes") that are used to price European options (options that can only be exercised at one point in time). This library also includes approximations to value American options (options that can be exercised prior to the expiration date) and implied volatility calculators. Pricing Formulas 1. BlackScholes() Stock Options (no dividend yield) 2. Merton() Assets with continuous dividend yield (Index Options) 3. Black76() Commodity Options 4. GK() FX Options (Garman-Kohlhagen) 5. Asian76() Asian Options on Commodities 6. Kirks76() Spread Options (Kirk's Approximation) 7. American() American options 8. American76() American Commodity Options Implied Volatility Formulas 9. EuroImpliedVol Implied volatility calculator for European options 10. EuroImpliedVol76 Implied volatiltity calculator for European commodity options 11. AmerImpliedVol Implied volatiltity calculator for American options 11. AmerImpliedVol76 Implied volatility calculator for American commodity options Note: In honor of the Black76 model, the 76() on the end of functions indicates a commodity option. ------------------------- ## Scope (Where this model is to be used): This model is built to price financial option contracts on a wide variety of financial commodities. These options are widely used and represent the benchmark to which other (more complicated) models are compared. While those more complicated models may outperform these models in specific areas, outperformance is relatively uncommon. By an large, these models have taken on all challengers and remain the de-facto industry standard. ## Theory: ### Generalized Black Scholes Black Scholes genre option models widely used to value European options. The original “Black Scholes” model was published in 1973 for non-dividend paying stocks. This created a revolution in quantitative finance and opened up option trading to the general population. Since that time, a wide variety of extensions to the original Black Scholes model have been created. Collectively, these are referred to as "Black Scholes genre” option models. Modifications of the formula are used to price other financial instruments like dividend paying stocks, commodity futures, and FX forwards. Mathematically, these formulas are nearly identical. The primary difference between these models is whether the asset has a carrying cost (if the asset has a cost or benefit associated with holding it) and how the asset gets present valued. To illustrate this relationship, a “generalized” form of the Black Scholes equation is shown below. The Black Scholes model is based on number of assumptions about how financial markets operate. Black Scholes style models assume: 1. **Arbitrage Free Markets**. Black Scholes formulas assume that traders try to maximize their personal profits and don’t allow arbitrage opportunities (riskless opportunities to make a profit) to persist. 2. **Frictionless, Continuous Markets**. This assumption of frictionless markets assumes that it is possible to buy and sell any amount of the underlying at any time without transaction costs. 3. **Risk Free Rates**. It is possible to borrow and lend money at a risk-free interest rate 4. **Log-normally Distributed Price Movements**. 
Prices are log-normally distributed and described by Geometric Brownian Motion 5. **Constant Volatility**. The Black Scholes genre options formulas assume that volatility is constant across the life of the option contract. In practice, these assumptions are not particularly limiting. The primary limitation imposed by these models is that it is possible to (reasonably) describe the dispersion of prices at some point in the future in a mathematical equation. In the traditional Black Scholes model intended to price stock options, the underlying assumption is that the stock is traded at its present value and that prices will follow a random walk diffusion style process over time. Prices are assumed to start at the spot price and, on the average, to drift upwards over time at the risk free rate. The “Merton” formula modifies the basic Black Scholes equation by introducing an additional term to incorporate dividends or holding costs. The Black 76 formula modifies the assumption so that the underlying starts at some forward price rather than a spot price. A fourth variation, the Garman Kohlhagen model, is used to value foreign exchange (FX) options. In the GK model, each currency in the currency pair is discounted based on its own interest rate. 1. **Black Scholes (Stocks)**. In the traditional Black Scholes model, the option is based on common stock - an instrument that is traded at its present value. The stock price does not get present valued – it starts at its present value (a ‘spot price’) and drifts upwards over time at the risk free rate. 2. **Merton (Stocks with continuous dividend yield)**. The Merton model is a variation of the Black Scholes model for assets that pay dividends to shareholders. Dividends reduce the value of the option because the option owner does not own the right to dividends until the option is exercised. 3. **Black 76 (Commodity Futures)**. The Black 76 model is for an option where the underlying commodity is traded based on a future price rather than a spot price. Instead of dealing with a spot price that drifts upwards at the risk free rate, this model deals with a forward price that needs to be present valued. 4. **Garman-Kohlhagen (FX Futures)**. The Garman Kohlhagen model is used to value foreign exchange (FX) options. In the GK model, each currency in the currency pair is discounted based on its own interest rate. An important concept of Black Scholes models is that the actual way that the underlying asset drifts over time isn't important to the valuation. Since European options can only be exercised when the contract expires, it is only the distribution of possible prices on that date that matters - the path that the underlying took to that point doesn't affect the value of the option. This is why the primary limitation of the model is being able to describe the dispersion of prices at some point in the future, not that the dispersion process is simplistic. The generalized Black-Scholes formula can found below (see *Figure 1 – Generalized Black Scholes Formula*). While these formulas may look complicated at first glance, most of the terms can be found as part of an options contract or are prices readily available in the market. The only term that is difficult to calculate is the implied volatility (σ). Implied volatility is typically calculated using prices of other options that have recently been traded. 
>*Call Price* >\begin{equation} C = Fe^{(b-r)T} N(D_1) - Xe^{-rT} N(D_2) \end{equation} >*Put Price* >\begin{equation} P = Xe^{-rT} N(-D_2) - Fe^{(b-r)T} N(-D_1) \end{equation} >*with the following intermediate calculations* >\begin{equation} D_1 = \frac{ln\frac{F}{X} + (b+\frac{V^2}{2})T}{V*\sqrt{T}} \end{equation} >\begin{equation} D_2 = D_1 - V\sqrt{T} \end{equation} >*and the following inputs* >| Symbol | Meaning | >|--------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| >| F or S | **Underlying Price**. The price of the underlying asset on the valuation date. S is used commonly used to represent a spot price, F a forward price | >| X | **Strike Price**. The strike, or exercise, price of the option. | >| T | **Time to expiration**. The time to expiration in years. This can be calculated by comparing the time between the expiration date and the valuation date. T = (t_1 - t_0)/365 | >| t_0 | **Valuation Date**. The date on which the option is being valued. For example, it might be today’s date if the option we being valued today. | >| t_1 | **Expiration Date**. The date on which the option must be exercised. | >| V | **Volatility**. The volatility of the underlying security. This factor usually cannot be directly observed in the market. It is most often calculated by looking at the prices for recent option transactions and back-solving a Black Scholes style equation to find the volatility that would result in the observed price. This is commonly abbreviated with the greek letter sigma,σ, although V is used here for consistency with the code below. | >| q | **Continuous Yield**. Used in the Merton model, this is the continuous yield of the underlying security. Option holders are typically not paid dividends or other payments until they exercise the option. As a result, this factor decreases the value of an option. | >| r | **Risk Free Rate**. This is expected return on a risk-free investment. This is commonly a approximated by the yield on a low-risk government bond or the rate that large banks borrow between themselves (LIBOR). The rate depends on tenor of the cash flow. For example, a 10-year risk-free bond is likely to have a different rate than a 20-year risk-free bond.[DE1] | >| rf | **Foreign Risk Free Rate**. Used in the Garman Kohlhagen model, this is the risk free rate of the foreign currency. Each currency will have a risk free rate. | >*Figure 1 - Generalized Black Scholes Formula* The correction term, b, varies by formula – it differentiates the various Black Scholes formula from one another (see *Figure 2 - GBS Cost of Carry Adjustments*). The cost of carry refers to the cost of “carrying” or holding a position. For example, holding a bond may result in earnings from interest, holding a stock may result in stock dividends, or the like. Those payments are made to the owner of the underlying asset and not the owner of the option. As a result, they reduce the value of the option. >| | Model | Cost of Carry (b) | >|----|------------------|-------------------| >| 1. | BlackScholes | b = r | >| 2. | Merton | b = r - q | >| 3. | Black 1976 | b = 0 | >| 4. | Garman Kohlhagen | b = r - rf | >| 5. 
| Asian | b = 0, modified V | >*Figure 2 - GBS Cost of Carry Adjustment* ### Asian Volatility Adjustment An Asian option is an option whose payoff is calculated using the average price of the underlying over some period of time rather than the price on the expiration date. As a result, Asian options are also called average price options. The reason that traders use Asian options is that averaging a settlement price over a period of time reduces the affect of manipulation or unusual price movements on the expiration date on the value of the option. As a result, Asian options are often found on strategically important commodities, like crude oil or in markets with intermittent trading. The average of a set of random numbers (prices in this case) will have a lower dispersion (a lower volatility) than the dispersion of prices observed on any single day. As a result, the implied volatility used to price Asian options will usually be slightly lower than the implied volatility on a comparable European option. From a mathematical perspective, valuing an Asian option is slightly complicated since the average of a set of lognormal distributions is not itself lognormally distributed. However, a reasonably good approximation of the correct answer is not too difficult to obtain. In the case of Asian options on futures, it is possible to use a modified Black-76 formula that replaces the implied volatility term with an adjusted implied volatility of the average price. As long as the first day of the averaging period is in the future, the following formula can be used to value Asian options (see *Figure 3 – Asian Option Formula*). >*Asian Adjusted Volatility* \begin{equation} V_a = \sqrt{\frac{ln(M)}{T}} \end{equation} >*with the intermediate calculation* \begin{equation} M = \frac{2e^{V^2T} - 2e^{V^2T}[1+V^2(T-t)]}{V^4(T-t)^2} \end{equation} >| Symbol | Meaning | |--------|-----------------------------------------------------------------------------------------------------------------| | Va | **Asian Adjusted Volatility**, This will replace the volatility (V) term in the GBS equations shown previously. | | T | **Time to expiration**. The time to expiration of the option (measured in years). | | t | **Time to start of averaging period**. The time to the start of the averaging period (measured in years). | >*Figure 3 - Asian Option Formula* ### Spread Option (Kirk's Approximation) Calculation Spread options are based on the spread between two commodity prices. They are commonly used to model physical investments as "real options" or to mark-to-market contracts that hedge physical assets. For example, a natural gas fueled electrical generation unit can be used to convert fuel (natural gas) into electricity. Whenever this conversion is profitable, it would be rational to operate the unit. This type of conversion is readily modeled by a spread option. When the spread of (electricity prices - fuel costs) is greater than the conversion cost, then the unit would operate. In this example, the conversion cost, which might be called the *Variable Operations and Maintenance* or VOM for a generation unit, would represent the strike price. Analytic formulas similar to the Black Scholes equation are commonly used to value commodity spread options. One such formula is called *Kirk’s approximation*. While an exact closed form solution does not exist to value spread options, approximate solutions can give reasonably accurate results. 
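
Before getting into the approximation itself, the payoff logic described above is easy to sketch directly. The snippet below is only a toy illustration of the intrinsic (expiration) payoff of a spread call, not part of the pricing library; the function name, prices, and conversion cost are arbitrary examples.

```python
import numpy as np

def spread_call_payoff(f1, f2, x):
    """Intrinsic payoff of a spread call at expiration: max(F1 - F2 - X, 0)."""
    return np.maximum(f1 - f2 - x, 0.0)

# Arbitrary example: power price, converted fuel cost, and conversion cost (all in $/MWH)
power = np.array([30.0, 35.0, 40.0, 45.0])
fuel = np.array([34.0, 34.0, 34.0, 34.0])
vom = 3.0

print(spread_call_payoff(power, fuel, vom))   # [0. 0. 3. 8.]
```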
Kirk’s approximation uses a Black Scholes style framework to analyze the joint distribution that results from the ratio of two log-normal distributions. In a Black Scholes equation, the distribution of price returns is assumed to be normally distributed on the expiration date. Kirk’s approximation builds on the Black Scholes framework by taking advantage of the fact that the ratio of two log-normal distributions is approximately normally distributed. By modeling a ratio of two prices rather than the spread between the prices, Kirk’s approximation can use the same formulas designed for options based on a single underlying. In other words, Kirk’s approximation uses an algebraic transformation to fit the spread option into the Black Scholes framework. The payoff of a spread option is show in *Figure 4 - Spread Option Payoff*. >\begin{equation} C = max[F_1 - F_2 - X, 0] \end{equation} >\begin{equation} P = max[X - (F_1 - F_2), 0] \end{equation} >where >| Symbol | Meaning | |--------|----------------------------------------------------| | F_1 | **Price of Asset 1**, The prices of the first asset. | | F_2 | **Price of Asset 2**. The price of the second asset. | >*Figure 4 - Spread Option Payoff* This can be algebraically manipulated as shown in *Figure 5 - Spread Option Payoff, Manipulated*. >\begin{equation} C = max \biggl[\frac{F_1}{F_2+X}-1,0 \biggr](F_2 + X) \end{equation} >\begin{equation} P = max \biggl[1-\frac{F_1}{F_2+X},0 \biggr](F_2 + X) \end{equation} >*Figure 5 - Spread Option Payoff, Manipulated* This allows Kirk’s approximation to model the distribution of the spread as the ratio of the price of asset 1 over the price of asset 2 plus the strike price. This ratio can then be converted into a formula very similar to the Generalized Black Scholes formulas. In fact, this is the Black Scholes formula shown above with the addition of a (F_2 + X) term (See *Figure 6 – Kirk’s Approximation Ratio*). >*Ratio of prices* >\begin{equation} F = \frac{F_1}{F_2 + X} \end{equation} >The ratio implies that the option is profitable to exercise (*in the money*) whenever the ratio of prices (F in the formula above) is greater than 1. This occurs the cost of the finished product (F_1) exceeds total cost of the raw materials (F_2) and the conversion cost (X). This requires a modification to the Call/Put Price formulas and to the D_1 formula. Because the option is in the money when F>1, the "strike" price used in inner square brackets of the Call/Put Price formulas and the D1 formula is set to 1. >*Spread Option Call Price* >\begin{equation} C = (F_2 + X)\biggl[Fe^{(b-r)T} N(D_1) - e^{-rT} N(D_2)\biggr] \end{equation} >*Spread Option Put Price* >\begin{equation} P = (F_2 + X)\biggl[e^{-rT} N(-D_2) - Fe^{(b-r)T} N(-D_1)\biggr] \end{equation} >\begin{equation} D_1 = \frac{ln(F) + (b+\frac{V^2}{2})T}{V*\sqrt{T}} \end{equation} >\begin{equation} D_2 = D_1 - V\sqrt{T} \end{equation} >*Figure 6- Kirk's Approximation Ratio* The key complexity is determining the appropriate volatility that needs to be used in the equation. The “approximation” which defines Kirk’s approximation is the assumption that the ratio of two log-normal distributions is normally distributed. That assumption makes it possible to estimate the volatility needed for the modified Black Scholes style equation. (See *Figure 7 - Kirk's Approximation (Volatility)*). 
>\begin{equation} V = \sqrt{ V_1^{2}+ \biggl[V_2\frac{F_2}{F_2+X}\biggr]^2 - 2ρ V_1 V_2 \frac{F_2}{F_2+X} } \end{equation} >| Symbol | Meaning | >|--------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| >| V | **Volatility**. The Kirk's approximation volatility that will be placed into the formula shown in Figure 6 | >| V1 | **Volatility of Asset 1**. The strike, or exercise, price of the option. | >| V2 | **Volatility of Asset 2**. The volatility of the second asset | >| ρ | **Correlation**. The correlation between price of asset 1 and the price of asset 2. | >*Figure 7- Kirk's Approximation (Volatility)* A second complexity is that the prices of two assets (F1 and F2) have to be in the same units. For example, in a heat rate option, the option represents the ability to convert fuel (natural gas) into electricity. The price of the first asset, electricity, might be quoted in US dollars per megawatt-hour or USD/MWH. However, the price of the second asset might be quoted in USD/MMBTU. To use the approximation, it is necessary to convert the price of the second asset into the units of the first asset (See *Example 1 - a Heat Rate Option*). This conversion rate will typically be specified as part of the contract. >Example: A 10 MMBTU/MWH heat rate call option >* F1 = price of electricity = USD 35/MWH >* F2* = price of natural gas = USD 3.40/MMBTU; *This is not the price to plug into the model!* >* V1 = volatility of electricity forward prices = 35% >* V2 = volatility of natural gas forward price = 35% >* Rho = correlation between electricity and natural gas forward prices = 90% >* VOM = variable operation and maintenance cost (the conversion cost) = USD 3/MWH >Before being placed into a spread option model, the price of natural gas would need to >be converted into the correct units. >* F2 = Heat Rate * Fuel Cost = (10 MMBTU/MWH)(USD 3.40/MMBTU) = USD 34/MWH >The strike price would be set equal to the conversion cost >* X = VOM costs = USD 3/MWH > *Example 1 - a Heat Rate Call Option* Another important consideration (not discussed in this write-up) is that volatility and correlation need to be matched to the tenor of the underlying assets. This means that it is necessary to measure the volatility of forward prices rather than spot prices. It may also be necessary to match the volatility and correlation to the correct month. For example, power prices in August may behave very differently than power prices in October or May in certain regions. Like any model, spread options are subject to the "garbage in = garbage out" problem. However, the relative complexity of modeling commodity prices (the typical underlying for spread options) makes calibrating inputs a key part of the model. ### American Options American options differ from European options because they can be exercised at any time. If there is a possibility that it will be more profitable to exercise the option than sell it, an American option will have more value than a corresponding European option. Early exercise typically occurs only when an option is *in the money*. 
If an option is out of the money, there is usually no reason to exercise early - it would be better to sell the option (in the case of a put option, to sell the option and the underlying asset). The decision of whether to exercise early is primarily a question of interest rates and carrying costs. Carrying costs, or *cost of carry*, is a term that means an intermediate cash flows that is the result of holding an asset. For example, dividends on stocks are a postive cost of carry (owning the asset gives the owner a cash flow). A commodity might have a negative cost of carry. For example, a commodity that requires its owner to pay for storage would cause the owner of the physical commodity to pay cash to hold the asset. (**Note:** Commodity options are typically written on forwards or futures which have zero cost of carry instead of the actual underlying commodity). Cost of carry is cash flow that affects the owner of the underlying commodity and not the owner of the option. For example, when a stock pays a dividend, the owner of a call option does not receive the dividend - just the owner of the stock. For the perspective for the owner of a call option on a stock, the cost of carry will be the interest received from holding cash (r) less any dividends paid to owners of the stock (q). Since an option has some value (the *extrinsic value*) that would be given up by exercising the option, exercising an option prior to maturity is a trade off between the option's extrinsic value (the remaining optionality) and the relative benefit of holding cash (time value of money) versus the benefit of holding the asset (carrying costs). The early exercise feature of **American equity put options** may have value when: * The cost of carry on the asset is low - preferably zero or negative. * Interest rates are high * The option is in the money The early exercise feature of **American equity call options** may have value when: * The cost of carry on the asset is positive * Interest rates are low or negative * The option is in the money With commodities, things are a slightly different. There is typically no cost of carry since the underlying is a forward or a futures contract. It does not cost any money to enter an at-the-money commodity forward, swap, or futures contract. Also, these contracts don't have any intermediate cash flows. As a result, the primary benefit of early exercise is to get cash immediately (exercising an in-the-money option) rather than cash in the future. In high interest rate environements, the money recieved immediately from immediate execution may exceed the extrinsic value of the contract. This is due to strike price not being present valued in immediate execution (it is specified in the contract and a fixed number) but the payoff of a European option is discounted (forward price - strike price). The overall result is that early exercise is fairly uncommon for most commodity options. Typically, it only occurs when interest rates are high. Generally, interest rates have to be higher than 15%-20% for American commodity options to differ substantially in value from European options with the same terms. The early exercise feature of **American commodity options** has value when: * Interest rates are high * Volatility is low (this makes selling the option less of a good option) * The option is in the money There is no exact closed-form solution for American options. 
However, there are many approximations that are reasonably close to prices produced by open-form solutions (like binomial tree models). Two models are shown below, both created by Bjerksund and Stensland. The first was produced in 1993 and the second in 2002. The second model is a refinement of the first model, adding more complexity, in exchange for better accuracy. #### Put-Call Parity Because of Put/Call parity, it is possible to use a call valuation formula to calculate the value of a put option. >\begin{equation} P(S,X,T,r,b,V) = C(X,S,T,r-b,-b,V) \end{equation} or using the order of parameters used in this library: >\begin{equation} P(X,S,T,b,r,V) = C(S,X,T,-b,r-b,V) \end{equation} #### BjerksundStensland 1993 (BS1993) There is no closed form solution for American options, and there are multiple people who have developed closed-form approximations to value American options. This is one such approximation. However, this approximation is no longer in active use by the public interface. It is primarily included as a secondary test on the BS2002 calculation. This function uses a numerical approximation to estimate the value of an American option. It does this by estimating a early exercise boundary and analytically estimating the probability of hitting that boundary. This uses the same inputs as a Generalized Black Scholes model: FS = Forward or spot price (abbreviated FS in code, F in formulas below) X = Strike Price T = time to expiration r = risk free rate b = cost of carry V = volatility _Intermediate Calculations_. To be consistent with the Bjerksund Stensland paper, this write-up uses similar notation. Please note that both a capital B (B_0 and B_Infinity), a lower case b, and the greek symbol Beta are all being used. B_0 and B_infinity represent that optimal exercise boundaries in edge cases (for call options where T=0 and T=infinity respectively), lower case b is the cost of carry (passed in as an input), and Beta is an intermediate calculations. >\begin{array}{lcl} \beta & = & (0.5 - \frac{b}{V^2}) + \sqrt{(\frac{b}{V^2} - 0.5)^2 + 2 \frac{r}{V^2}} \\ B_\infty & = & \frac{\beta}{\beta-1} X \\ B_0 & = & max\biggl[X, (\frac{r}{r-b}) X\biggr] \\ h_1 & = & - b T + 2 V \sqrt{T} \frac{B_0}{B_\infty-B_0} \\ \end{array} _Calculate the Early Exercise Boundary (i)_. The lower case i represents the early exercise boundary. Alpha is an intermediate calculation. >\begin{array}{lcl} i & = & B_0 + (B_\infty-B_0)(1 - e^{h_1} ) \\ \alpha & = & (i-X) i^{-\beta} \end{array} Check for immediate exercise_. >\begin{equation} if F >= i, then Value = F - X \end{equation} If no immediate exercise, approximate the early exercise price. >\begin{eqnarray} Value & = & \alpha * F^\beta \\ & - & \alpha * \phi(F,T,\beta,i,i,r,b,V) \\ & + & \phi(F,T,1,i,i,r,b,V) \\ & - & \phi(F,T,1,X,i,r,b,V) \\ & - & X * \phi(F,T,0,i,i,r,b,V) \\ & + & X * \phi(F,T,0,X,i,r,b,V) \end{eqnarray} _Compare to European Value_. Due to the approximation, it is sometime possible to get a value slightly smaller than the European value. If so, set the value equal to the European value estimated using Generalized Black Scholes. 
>\begin{equation} Value_{BS1993} = Max \biggl[ Value, Value_{GBS} \biggr] \end{equation} #### Bjerksund Stensland 2002 (BS2002) source: https://www.researchgate.net/publication/228801918 FS = Forward or spot price (abbreviated FS in code, F in formulas below) X = Strike Price T = time to expiration r = risk free rate b = cost of carry V = volatility #### Psi Psi is an intermediate calculation used by the Bjerksund Stensland 2002 approximation. \begin{equation} \psi(F, t_2, \gamma, H, I_2, I_1, t_1, r, b, V) \end{equation} _Intermediate calculations_. The Psi function has a large number of intermediate calculations. For clarity, these are loosely organized into groups with each group used to simplify the next set of intermediate calculations. >\begin{array}{lcl} A_1 & = & V \ln(t_1) \\ A_2 & = & V \ln(t_2) \\ B_1 & = & \biggl[ b+(\gamma-0.5) V^2 \biggr] t_1 \\ B_2 & = & \biggl[ b+(\gamma-0.5) V^2 \biggr] t_2 \end{array} More Intermediate calculations >\begin{array}{lcl} d_1 & = & \frac{ln(\frac{F}{I_1}) + B_1}{A_1} \\ d_2 & = & \frac{ln(\frac{I_2^2}{F I_1}) + B_1}{A_1} \\ d_3 & = & \frac{ln(\frac{F}{I_1}) - B_1}{A_1} \\ d_4 & = & \frac{ln(\frac{I_2^2}{F I_1}) - B_1}{A_1} \\ e_1 & = & \frac{ln(\frac{F}{H}) + B_2}{A_2} \\ e_2 & = & \frac{ln(\frac{I_2^2}{F H}) + B_2}{A_2} \\ e_3 & = & \frac{ln(\frac{I_1^2}{F H}) + B_2}{A_2} \\ e_4 & = & \frac{ln(\frac{F I_1^2}{H I_2^2}) + B_2}{A_2} \end{array} Even More Intermediate calculations >\begin{array}{lcl} \tau & = & \sqrt{\frac{t_1}{t_2}} \\ \lambda & = & -r+\gamma b+\frac{\gamma}{2} (\gamma-1) V^2 \\ \kappa & = & \frac{2b}{V^2} +(2 \gamma - 1) \end{array} _The calculation of Psi_. This is the actual calculation of the Psi function. In the function below, M() represents the cumulative bivariate normal distribution (described a couple of paragraphs below this section). The abbreviation M() is used instead of CBND() in this section to make the equation a bit more readable and to match the naming convention used in Haug's book "The Complete Guide to Option Pricing Formulas". >\begin{eqnarray} \psi & = & e^{\lambda t_2} F^\gamma M(-d_1, -e_1, \tau) \\ & - & \frac{I_2}{F}^\kappa M(-d_2, -e_2, \tau) \\ & - & \frac{I_1}{F}^\kappa M(-d_3, -e_3, -\tau) \\ & + & \frac{I_1}{I_2}^\kappa M(-d_4, -e_4, -\tau)) \end{eqnarray} #### Phi Phi is an intermediate calculation used by both the Bjerksun Stensland 1993 and 2002 approximations. Many of the parameters are the same as the GBS model. \begin{equation} \phi(FS, T, \gamma, h, I, r, b, V) \end{equation} FS = Forward or spot price (abbreviated FS in code, F in formulas below). T = time to expiration. I = trigger price (as calculated in either BS1993 or BS2002 formulas gamma = modifier to T, calculated in BS1993 or BS2002 formula r = risk free rate. b = cost of carry. V = volatility. Internally, the Phi() function is implemented as follows: >\begin{equation} d_1 = -\frac{ln(\frac{F}{h}) + \biggl[b+(\gamma-0.5) V^2 \biggr] T}{V \sqrt{T}} \end{equation} >\begin{equation} d_2 = d_1 - 2 \frac{ln(I/F)}{V \sqrt(T)} \end{equation} >\begin{equation} \lambda = -r+\gamma b+0.5 \gamma (\gamma-1) V^2 \end{equation} >\begin{equation} \kappa = \frac{2b}{V^2}+(2\gamma-1) \end{equation} >\begin{equation} \phi = e^{\lambda T} F^{\gamma} \biggl[ N(d_1)-\frac{I}{F}^{\kappa} N(d_2) \biggr] \end{equation} ##### Normal Cumulative Density Function (N) This is the normal cumulative density function. It can be found described in a variety of statistical textbooks and/or wikipedia. 
It is part of the standard scipy.stats distribution and imported using the "from scipy.stats import norm" command. Example: \begin{equation} N(d_1) \end{equation} #### Cumulative bivariate normal distribution (CBND) The bivariate normal density function (BNDF) is given below (See *Figure 8 - Bivariate Normal Density Function (BNDF)*): >\begin{equation} BNDF(x, y) = \frac{1}{2 \pi \sqrt{1-p^2}} exp \biggl[-\frac{x^2-2pxy+y^2}{2(1-p^2)}\biggr] \end{equation} >*Figure 8. Bivariate Normal Density Function (BNDF)* This can be integrated over x and y to calculate the joint probability that x < a and y < b. This is called the cumulative bivariate normal distribution (CBND) (See *Figure 9 - Cumulative Bivariate Normal Distribution (CBND))*: >\begin{equation} CBND(a, b, p) = \frac{1}{2 \pi \sqrt{1-p^2}} \int_{-\infty}^{a} \int_{-\infty}^{b} exp \biggl[-\frac{x^2-2pxy+y^2}{2(1-p^2)}\biggr] d_x d_y \end{equation} >*Figure 9. Cumulative Bivariate Normal Distribution (CBND)* Where * x = the first variable * y = the second variable * a = upper bound for first variable * b = upper bound for second variable * p = correlation between first and second variables There is no closed-form solution for this equation. However, several approximations have been developed and are included in the numpy library distributed with Anaconda. The Genz 2004 model was chosen for implementation. Alternative models include those developed by Drezner and Wesolowsky (1990) and Drezner (1978). The Genz model improves these other model by going to an accuracy of 14 decimal points (from approximately 8 decimal points and 6 decimal points respectively). ------------------------- ## Limitations: These functions have been tested for accuracy within an allowable range of inputs (see "Model Input" section below). However, general modeling advice applies to the use of the model. These models depend on a number of assumptions. In plain English, these models assume that the distribution of future prices can be described by variables like implied volatility. To get good results from the model, the model should only be used with reliable inputs. The following limitations are also in effect: 1. The Asian Option approximation shouldn't be used for Asian options that are into the Asian option calculation period. 2. The American and American76 approximations break down when r < -20%. The limits are set wider in this example for testing purposes, but production code should probably limit interest rates to values between -20% and 100%. In practice, negative interest rates should be extremely rare. 3. No greeks are produced for spread options 4. These models assume a constant volatility term structure. This has no effect on European options. However, options that are likely to be exercise early (certain American options) and Asian options may be more affected. ------------------------- ## Model Inputs This section describes the function calls an inputs needed to call this model: These functions encapsulate the most commonly encountered option pricing formulas. These function primarily figure out the cost-of-carry term (b) and then call the generic version of the function. All of these functions return an array containg the premium and the greeks. #### Public Functions in the Library Pricing Formulas: 1. BlackScholes (OptionType, X, FS, T, r, V) 2. Merton (OptionType, X, FS, T, r, q, V) 3. Black76 (OptionType, X, FS, T, r, V) 4. GK (OptionType, X, FS, T, b, r, rf, V) 5. Asian76 (OptionType, X, FS, T, TA, r, V) 6. Kirks76 7. 
American (OptionType, X, FS, T, r, q, V) 8. American76 (OptionType, X, FS, T, r, V) Implied Volatility Formulas 9. GBS_ImpliedVol(OptionType, X, FS, T, r, q, CP) 10. GBS_ImpliedVol76(OptionType, X, FS, T, r, CP) 11. American_ImpliedVol(OptionType, X, FS, T, r, q, CP) 11. American_ImpliedVol76(OptionType, X, FS, T, r, CP) #### Inputs used by all models | **Parameter** | **Description** | |---------------|------------------------------------------------------------------------------------------------------------------------------------------------------| | option_type | **Put/Call Indicator** Single character, "c" indicates a call; "p" a put| | fs | **Price of Underlying** FS is generically used, but for specific models, the following abbreviations may be used: F = Forward Price, S = Spot Price) | | x | **Strike Price** | | t | **Time to Maturity** This is in years (1.0 = 1 year, 0.5 = six months, etc)| | r | **Risk Free Interest Rate** Interest rates (0.10 = 10% interest rate | | v | **Implied Volatility** Annualized implied volatility (1=100% annual volatility, 0.34 = 34% annual volatility| #### Inputs used by some models | **Parameter** | **Description** | |---------------|------------------------------------------------------------------------------------------------------------------------------------------------------| | b | **Cost of Carry** This is only found in internal implementations, but is identical to the cost of carry (b) term commonly found in academic option pricing literature| | q | **Continuous Dividend** Used in Merton and American models; Internally, this is converted into cost of carry, b, with formula b = r-q | | rf | **Foreign Interest Rate** Only used GK model; this functions similarly to q | | t_a | **Asian Start** Used for Asian options; This is the time that starts the averaging period (TA=0 means that averaging starts immediately). As TA approaches T, the Asian value should become very close to the Black76 Value | | cp | **Option Price** Used in the implied vol calculations; This is the price of the call or put observed in the market | #### Outputs All of the option pricing functions return an array. The first element of the array is the value of the option, the other elements are the greeks which measure the sensitivity of the option to changes in inputs. The greeks are used primarily for risk-management purposes. | **Output** | **Description** | |------------|-------------------------------------------------------------------------------------------------------------------| | [0] | **Value** | | [1] | **Delta** Sensitivity of Value to changes in price | | [2] | **Gamma** Sensitivity of Delta to changes in price | | [3] | **Theta** Sensitivity of Value to changes in time to expiration (annualized). To get a daily Theta, divide by 365 | | [4] | **Vega** Sensitivity of Value to changes in Volatility | | [5] | **Rho** Sensitivity of Value to changes in risk-free rates. | The implied volatility functions return a single value (the implied volatility). #### Acceptable Range for inputs All of the inputs are bounded. While many of these functions will work with inputs outside of these bounds, they haven't been tested and are generally believed to be uncommon. The pricer will return an exception to the caller if an out-of-bounds input is used. If that was a valid input, the code below will need to be modified to allow wider inputs and the testing section updated to test that the models work under the widened inputs. 
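
From the caller's side, that behavior can be handled with a simple try/except around the pricing call. The sketch below is hedged: it assumes the Black76 argument order listed in the "Public Functions" section above and uses arbitrary example values; the limits themselves are defined in the `_GBS_Limits` class that follows.

```python
# Hedged usage sketch (not part of the library): trap an out-of-bounds input.
# Argument order assumed from the "Public Functions" list: Black76(OptionType, X, FS, T, r, V)
try:
    # T = 150 years is above the library's MAX_T of 100 years, so this call should raise
    value, delta, gamma, theta, vega, rho = Black76("c", 100.0, 102.0, 150.0, 0.05, 0.25)
except GBS_InputError as err:
    print("Input rejected:", err)
```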
```python # This class contains the limits on inputs for GBS models # It is not intended to be part of this module's public interface class _GBS_Limits: # An GBS model will return an error if an out-of-bound input is input MAX32 = 2147483248.0 MIN_T = 1.0 / 1000.0 # requires some time left before expiration MIN_X = 0.01 MIN_FS = 0.01 # Volatility smaller than 0.5% causes American Options calculations # to fail (Number to large errors). # GBS() should be OK with any positive number. Since vols less # than 0.5% are expected to be extremely rare, and most likely bad inputs, # _gbs() is assigned this limit too MIN_V = 0.005 MAX_T = 100 MAX_X = MAX32 MAX_FS = MAX32 # Asian Option limits # maximum TA is time to expiration for the option MIN_TA = 0 # This model will work with higher values for b, r, and V. However, such values are extremely uncommon. # To catch some common errors, interest rates and volatility is capped to 100% # This reason for 1 (100%) is mostly to cause the library to throw an exceptions # if a value like 15% is entered as 15 rather than 0.15) MIN_b = -1 MIN_r = -1 MAX_b = 1 MAX_r = 1 MAX_V = 1 ``` ------------------------ ## Model Implementation These functions encapsulate a generic version of the pricing formulas. They are primarily intended to be called by the other functions within this libary. The following functions will have a fixed interface so that they can be called directly for academic applicaitons that use the cost-of-carry (b) notation: _GBS() A generalized European option model _American() A generalized American option model _GBS_ImpliedVol() A generalized European option implied vol calculator _American_ImpliedVol() A generalized American option implied vol calculator The other functions in this libary are called by the four main functions and are not expected to be interface safe (the implementation and interface may change over time). ### Implementation: European Options These functions implement the generalized Black Scholes (GBS) formula for European options. The main function is _gbs(). ```python # ------------------------------ # This function verifies that the Call/Put indicator is correctly entered def _test_option_type(option_type): if (option_type != "c") and (option_type != "p"): raise GBS_InputError("Invalid Input option_type ({0}). Acceptable value are: c, p".format(option_type)) ``` ```python # ------------------------------ # This function makes sure inputs are OK # It throws an exception if there is a failure def _gbs_test_inputs(option_type, fs, x, t, r, b, v): # ----------- # Test inputs are reasonable _test_option_type(option_type) if (x < _GBS_Limits.MIN_X) or (x > _GBS_Limits.MAX_X): raise GBS_InputError( "Invalid Input Strike Price (X). Acceptable range for inputs is {1} to {2}".format(x, _GBS_Limits.MIN_X, _GBS_Limits.MAX_X)) if (fs < _GBS_Limits.MIN_FS) or (fs > _GBS_Limits.MAX_FS): raise GBS_InputError( "Invalid Input Forward/Spot Price (FS). Acceptable range for inputs is {1} to {2}".format(fs, _GBS_Limits.MIN_FS, _GBS_Limits.MAX_FS)) if (t < _GBS_Limits.MIN_T) or (t > _GBS_Limits.MAX_T): raise GBS_InputError( "Invalid Input Time (T = {0}). Acceptable range for inputs is {1} to {2}".format(t, _GBS_Limits.MIN_T, _GBS_Limits.MAX_T)) if (b < _GBS_Limits.MIN_b) or (b > _GBS_Limits.MAX_b): raise GBS_InputError( "Invalid Input Cost of Carry (b = {0}). 
Acceptable range for inputs is {1} to {2}".format(b, _GBS_Limits.MIN_b, _GBS_Limits.MAX_b)) if (r < _GBS_Limits.MIN_r) or (r > _GBS_Limits.MAX_r): raise GBS_InputError( "Invalid Input Risk Free Rate (r = {0}). Acceptable range for inputs is {1} to {2}".format(r, _GBS_Limits.MIN_r, _GBS_Limits.MAX_r)) if (v < _GBS_Limits.MIN_V) or (v > _GBS_Limits.MAX_V): raise GBS_InputError( "Invalid Input Implied Volatility (V = {0}). Acceptable range for inputs is {1} to {2}".format(v, _GBS_Limits.MIN_V, _GBS_Limits.MAX_V)) ``` ```python # The primary class for calculating Generalized Black Scholes option prices and deltas # It is not intended to be part of this module's public interface # Inputs: option_type = "p" or "c", fs = price of underlying, x = strike, t = time to expiration, r = risk free rate # b = cost of carry, v = implied volatility # Outputs: value, delta, gamma, theta, vega, rho def _gbs(option_type, fs, x, t, r, b, v): _debug("Debugging Information: _gbs()") # ----------- # Test Inputs (throwing an exception on failure) _gbs_test_inputs(option_type, fs, x, t, r, b, v) # ----------- # Create preliminary calculations t__sqrt = math.sqrt(t) d1 = (math.log(fs / x) + (b + (v * v) / 2) * t) / (v * t__sqrt) d2 = d1 - v * t__sqrt if option_type == "c": # it's a call _debug(" Call Option") value = fs * math.exp((b - r) * t) * norm.cdf(d1) - x * math.exp(-r * t) * norm.cdf(d2) delta = math.exp((b - r) * t) * norm.cdf(d1) gamma = math.exp((b - r) * t) * norm.pdf(d1) / (fs * v * t__sqrt) theta = -(fs * v * math.exp((b - r) * t) * norm.pdf(d1)) / (2 * t__sqrt) - (b - r) * fs * math.exp( (b - r) * t) * norm.cdf(d1) - r * x * math.exp(-r * t) * norm.cdf(d2) vega = math.exp((b - r) * t) * fs * t__sqrt * norm.pdf(d1) rho = x * t * math.exp(-r * t) * norm.cdf(d2) else: # it's a put _debug(" Put Option") value = x * math.exp(-r * t) * norm.cdf(-d2) - (fs * math.exp((b - r) * t) * norm.cdf(-d1)) delta = -math.exp((b - r) * t) * norm.cdf(-d1) gamma = math.exp((b - r) * t) * norm.pdf(d1) / (fs * v * t__sqrt) theta = -(fs * v * math.exp((b - r) * t) * norm.pdf(d1)) / (2 * t__sqrt) + (b - r) * fs * math.exp( (b - r) * t) * norm.cdf(-d1) + r * x * math.exp(-r * t) * norm.cdf(-d2) vega = math.exp((b - r) * t) * fs * t__sqrt * norm.pdf(d1) rho = -x * t * math.exp(-r * t) * norm.cdf(-d2) _debug(" d1= {0}\n d2 = {1}".format(d1, d2)) _debug(" delta = {0}\n gamma = {1}\n theta = {2}\n vega = {3}\n rho={4}".format(delta, gamma, theta, vega, rho)) return value, delta, gamma, theta, vega, rho ``` ### Implementation: American Options This section contains the code necessary to price American options. The main function is _American(). The other functions are called from the main function. 
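
The put branch of the wrapper below prices an American put by transforming it into a call, exactly as described in the Put-Call Parity section above. As a quick, hedged sanity check, the same transformation can be verified against the European `_gbs()` pricer already defined; the numbers used are arbitrary example values.

```python
# Hedged spot-check of the put-call transformation P(FS, X, T, r, b, V) = C(X, FS, T, r-b, -b, V)
# using the European _gbs() pricer defined above; inputs are arbitrary examples.
fs, x, t, r, b, v = 100.0, 95.0, 0.75, 0.06, 0.03, 0.25

put_direct = _gbs("p", fs, x, t, r, b, v)[0]
put_via_call = _gbs("c", x, fs, t, r - b, -b, v)[0]

print(put_direct, put_via_call)   # the two premiums should agree to rounding error
```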
```python # ----------- # Generalized American Option Pricer # This is a wrapper to check inputs and route to the current "best" American option model def _american_option(option_type, fs, x, t, r, b, v): # ----------- # Test Inputs (throwing an exception on failure) _debug("Debugging Information: _american_option()") _gbs_test_inputs(option_type, fs, x, t, r, b, v) # ----------- if option_type == "c": # Call Option _debug(" Call Option") return _bjerksund_stensland_2002(fs, x, t, r, b, v) else: # Put Option _debug(" Put Option") # Using the put-call transformation: P(X, FS, T, r, b, V) = C(FS, X, T, -b, r-b, V) # WARNING - When reconciling this code back to the B&S paper, the order of variables is different put__x = fs put_fs = x put_b = -b put_r = r - b # pass updated values into the Call Valuation formula return _bjerksund_stensland_2002(put_fs, put__x, t, put_r, put_b, v) ``` ```python # ----------- # American Call Option (Bjerksund Stensland 1993 approximation) # This is primarily here for testing purposes; 2002 model has superseded this one def _bjerksund_stensland_1993(fs, x, t, r, b, v): # ----------- # initialize output # using GBS greeks (TO DO: update greek calculations) my_output = _gbs("c", fs, x, t, r, b, v) e_value = my_output[0] delta = my_output[1] gamma = my_output[2] theta = my_output[3] vega = my_output[4] rho = my_output[5] # debugging for calculations _debug("-----") _debug("Debug Information: _Bjerksund_Stensland_1993())") # if b >= r, it is never optimal to exercise before maturity # so we can return the GBS value if b >= r: _debug(" b >= r, early exercise never optimal, returning GBS value") return e_value, delta, gamma, theta, vega, rho # Intermediate Calculations v2 = v ** 2 sqrt_t = math.sqrt(t) beta = (0.5 - b / v2) + math.sqrt(((b / v2 - 0.5) ** 2) + 2 * r / v2) b_infinity = (beta / (beta - 1)) * x b_zero = max(x, (r / (r - b)) * x) h1 = -(b * t + 2 * v * sqrt_t) * (b_zero / (b_infinity - b_zero)) i = b_zero + (b_infinity - b_zero) * (1 - math.exp(h1)) alpha = (i - x) * (i ** (-beta)) # debugging for calculations _debug(" b = {0}".format(b)) _debug(" v2 = {0}".format(v2)) _debug(" beta = {0}".format(beta)) _debug(" b_infinity = {0}".format(b_infinity)) _debug(" b_zero = {0}".format(b_zero)) _debug(" h1 = {0}".format(h1)) _debug(" i = {0}".format(i)) _debug(" alpha = {0}".format(alpha)) # Check for immediate exercise if fs >= i: _debug(" Immediate Exercise") value = fs - x else: _debug(" American Exercise") value = (alpha * (fs ** beta) - alpha * _phi(fs, t, beta, i, i, r, b, v) + _phi(fs, t, 1, i, i, r, b, v) - _phi(fs, t, 1, x, i, r, b, v) - x * _phi(fs, t, 0, i, i, r, b, v) + x * _phi(fs, t, 0, x, i, r, b, v)) # The approximation can break down in boundary conditions # make sure the value is at least equal to the European value value = max(value, e_value) return value, delta, gamma, theta, vega, rho ``` ```python # ----------- # American Call Option (Bjerksund Stensland 2002 approximation) def _bjerksund_stensland_2002(fs, x, t, r, b, v): # ----------- # initialize output # using GBS greeks (TO DO: update greek calculations) my_output = _gbs("c", fs, x, t, r, b, v) e_value = my_output[0] delta = my_output[1] gamma = my_output[2] theta = my_output[3] vega = my_output[4] rho = my_output[5] # debugging for calculations _debug("-----") _debug("Debug Information: _Bjerksund_Stensland_2002())") # If b >= r, it is never optimal to exercise before maturity # so we can return the GBS value if b >= r: _debug(" Returning GBS value") return e_value, delta, gamma, theta, 
vega, rho # ----------- # Create preliminary calculations v2 = v ** 2 t1 = 0.5 * (math.sqrt(5) - 1) * t t2 = t beta_inside = ((b / v2 - 0.5) ** 2) + 2 * r / v2 # forcing the inside of the sqrt to be a positive number beta_inside = abs(beta_inside) beta = (0.5 - b / v2) + math.sqrt(beta_inside) b_infinity = (beta / (beta - 1)) * x b_zero = max(x, (r / (r - b)) * x) h1 = -(b * t1 + 2 * v * math.sqrt(t1)) * ((x ** 2) / ((b_infinity - b_zero) * b_zero)) h2 = -(b * t2 + 2 * v * math.sqrt(t2)) * ((x ** 2) / ((b_infinity - b_zero) * b_zero)) i1 = b_zero + (b_infinity - b_zero) * (1 - math.exp(h1)) i2 = b_zero + (b_infinity - b_zero) * (1 - math.exp(h2)) alpha1 = (i1 - x) * (i1 ** (-beta)) alpha2 = (i2 - x) * (i2 ** (-beta)) # debugging for calculations _debug(" t1 = {0}".format(t1)) _debug(" beta = {0}".format(beta)) _debug(" b_infinity = {0}".format(b_infinity)) _debug(" b_zero = {0}".format(b_zero)) _debug(" h1 = {0}".format(h1)) _debug(" h2 = {0}".format(h2)) _debug(" i1 = {0}".format(i1)) _debug(" i2 = {0}".format(i2)) _debug(" alpha1 = {0}".format(alpha1)) _debug(" alpha2 = {0}".format(alpha2)) # check for immediate exercise if fs >= i2: value = fs - x else: # Perform the main calculation value = (alpha2 * (fs ** beta) - alpha2 * _phi(fs, t1, beta, i2, i2, r, b, v) + _phi(fs, t1, 1, i2, i2, r, b, v) - _phi(fs, t1, 1, i1, i2, r, b, v) - x * _phi(fs, t1, 0, i2, i2, r, b, v) + x * _phi(fs, t1, 0, i1, i2, r, b, v) + alpha1 * _phi(fs, t1, beta, i1, i2, r, b, v) - alpha1 * _psi(fs, t2, beta, i1, i2, i1, t1, r, b, v) + _psi(fs, t2, 1, i1, i2, i1, t1, r, b, v) - _psi(fs, t2, 1, x, i2, i1, t1, r, b, v) - x * _psi(fs, t2, 0, i1, i2, i1, t1, r, b, v) + x * _psi(fs, t2, 0, x, i2, i1, t1, r, b, v)) # in boundary conditions, this approximation can break down # Make sure option value is greater than or equal to European value value = max(value, e_value) # ----------- # Return Data return value, delta, gamma, theta, vega, rho ``` ```python # --------------------------- # American Option Intermediate Calculations # ----------- # The Psi() function used by _Bjerksund_Stensland_2002 model def _psi(fs, t2, gamma, h, i2, i1, t1, r, b, v): vsqrt_t1 = v * math.sqrt(t1) vsqrt_t2 = v * math.sqrt(t2) bgamma_t1 = (b + (gamma - 0.5) * (v ** 2)) * t1 bgamma_t2 = (b + (gamma - 0.5) * (v ** 2)) * t2 d1 = (math.log(fs / i1) + bgamma_t1) / vsqrt_t1 d3 = (math.log(fs / i1) - bgamma_t1) / vsqrt_t1 d2 = (math.log((i2 ** 2) / (fs * i1)) + bgamma_t1) / vsqrt_t1 d4 = (math.log((i2 ** 2) / (fs * i1)) - bgamma_t1) / vsqrt_t1 e1 = (math.log(fs / h) + bgamma_t2) / vsqrt_t2 e2 = (math.log((i2 ** 2) / (fs * h)) + bgamma_t2) / vsqrt_t2 e3 = (math.log((i1 ** 2) / (fs * h)) + bgamma_t2) / vsqrt_t2 e4 = (math.log((fs * (i1 ** 2)) / (h * (i2 ** 2))) + bgamma_t2) / vsqrt_t2 tau = math.sqrt(t1 / t2) lambda1 = (-r + gamma * b + 0.5 * gamma * (gamma - 1) * (v ** 2)) kappa = (2 * b) / (v ** 2) + (2 * gamma - 1) psi = math.exp(lambda1 * t2) * (fs ** gamma) * (_cbnd(-d1, -e1, tau) - ((i2 / fs) ** kappa) * _cbnd(-d2, -e2, tau) - ((i1 / fs) ** kappa) * _cbnd(-d3, -e3, -tau) + ((i1 / i2) ** kappa) * _cbnd(-d4, -e4, -tau)) return psi ``` ```python # ----------- # The Phi() function used by _Bjerksund_Stensland_2002 model and the _Bjerksund_Stensland_1993 model def _phi(fs, t, gamma, h, i, r, b, v): d1 = -(math.log(fs / h) + (b + (gamma - 0.5) * (v ** 2)) * t) / (v * math.sqrt(t)) d2 = d1 - 2 * math.log(i / fs) / (v * math.sqrt(t)) lambda1 = (-r + gamma * b + 0.5 * gamma * (gamma - 1) * (v ** 2)) kappa = (2 * b) / (v ** 2) + (2 * gamma - 1) phi = 
math.exp(lambda1 * t) * (fs ** gamma) * (norm.cdf(d1) - ((i / fs) ** kappa) * norm.cdf(d2)) _debug("-----") _debug("Debug info for: _phi()") _debug(" d1={0}".format(d1)) _debug(" d2={0}".format(d2)) _debug(" lambda={0}".format(lambda1)) _debug(" kappa={0}".format(kappa)) _debug(" phi={0}".format(phi)) return phi ``` ```python # ----------- # Cumulative Bivariate Normal Distribution # Primarily called by Psi() function, part of the _Bjerksund_Stensland_2002 model def _cbnd(a, b, rho): # This distribution uses the Genz multi-variate normal distribution # code found as part of the standard SciPy distribution lower = np.array([0, 0]) upper = np.array([a, b]) infin = np.array([0, 0]) correl = rho error, value, inform = mvn.mvndst(lower, upper, infin, correl) return value ``` ### Implementation: Implied Vol This section implements implied volatility calculations. It contains 3 main models: 1. **At-the-Money approximation.** This is a very fast approximation for implied volatility. It is used to estimate a starting point for the search functions. 2. **Newton-Raphson Search.** This is a fast implied volatility search that can be used when there is a reliable estimate of Vega (i.e., European options) 3. **Bisection Search.** An implied volatility search (not quite as fast as a Newton search) that can be used where there is no reliable Vega estimate (i.e., American options). ```python # ---------- # Inputs (not all functions use all inputs) # fs = forward/spot price # x = Strike # t = Time (in years) # r = risk free rate # b = cost of carry # cp = Call or Put price # precision = (optional) precision at stopping point # max_steps = (optional) maximum number of steps # ---------- # Approximate Implied Volatility # # This function is used to choose a starting point for the # search functions (Newton and bisection searches). 
# Brenner & Subrahmanyam (1988), Feinstein (1988) def _approx_implied_vol(option_type, fs, x, t, r, b, cp): _test_option_type(option_type) ebrt = math.exp((b - r) * t) ert = math.exp(-r * t) a = math.sqrt(2 * math.pi) / (fs * ebrt + x * ert) if option_type == "c": payoff = fs * ebrt - x * ert else: payoff = x * ert - fs * ebrt b = cp - payoff / 2 c = (payoff ** 2) / math.pi v = (a * (b + math.sqrt(b ** 2 + c))) / math.sqrt(t) return v ``` ```python # ---------- # Find the Implied Volatility of an European (GBS) Option given a price # using Newton-Raphson method for greater speed since Vega is available def _gbs_implied_vol(option_type, fs, x, t, r, b, cp, precision=.00001, max_steps=100): return _newton_implied_vol(_gbs, option_type, x, fs, t, b, r, cp, precision, max_steps) ``` ```python # ---------- # Find the Implied Volatility of an American Option given a price # Using bisection method since Vega is difficult to estimate for Americans def _american_implied_vol(option_type, fs, x, t, r, b, cp, precision=.00001, max_steps=100): return _bisection_implied_vol(_american_option, option_type, fs, x, t, r, b, cp, precision, max_steps) ``` ```python # ---------- # Calculate Implied Volatility with a Newton Raphson search def _newton_implied_vol(val_fn, option_type, x, fs, t, b, r, cp, precision=.00001, max_steps=100): # make sure a valid option type was entered _test_option_type(option_type) # Estimate starting Vol, making sure it is allowable range v = _approx_implied_vol(option_type, fs, x, t, r, b, cp) v = max(_GBS_Limits.MIN_V, v) v = min(_GBS_Limits.MAX_V, v) # Calculate the value at the estimated vol value, delta, gamma, theta, vega, rho = val_fn(option_type, fs, x, t, r, b, v) min_diff = abs(cp - value) _debug("-----") _debug("Debug info for: _Newton_ImpliedVol()") _debug(" Vinitial={0}".format(v)) # Newton-Raphson Search countr = 0 while precision <= abs(cp - value) <= min_diff and countr < max_steps: v = v - (value - cp) / vega if (v > _GBS_Limits.MAX_V) or (v < _GBS_Limits.MIN_V): _debug(" Volatility out of bounds") break value, delta, gamma, theta, vega, rho = val_fn(option_type, fs, x, t, r, b, v) min_diff = min(abs(cp - value), min_diff) # keep track of how many loops countr += 1 _debug(" IVOL STEP {0}. 
v={1}".format(countr, v)) # check if function converged and return a value if abs(cp - value) < precision: # the search function converged return v else: # if the search function didn't converge, try a bisection search return _bisection_implied_vol(val_fn, option_type, fs, x, t, r, b, cp, precision, max_steps) ``` ```python # ---------- # Find the Implied Volatility using a Bisection search def _bisection_implied_vol(val_fn, option_type, fs, x, t, r, b, cp, precision=.00001, max_steps=100): _debug("-----") _debug("Debug info for: _bisection_implied_vol()") # Estimate Upper and Lower bounds on volatility # Assume American Implied vol is within +/- 50% of the GBS Implied Vol v_mid = _approx_implied_vol(option_type, fs, x, t, r, b, cp) if (v_mid <= _GBS_Limits.MIN_V) or (v_mid >= _GBS_Limits.MAX_V): # if the volatility estimate is out of bounds, search entire allowed vol space v_low = _GBS_Limits.MIN_V v_high = _GBS_Limits.MAX_V v_mid = (v_low + v_high) / 2 else: # reduce the size of the vol space v_low = max(_GBS_Limits.MIN_V, v_mid * .5) v_high = min(_GBS_Limits.MAX_V, v_mid * 1.5) # Estimate the high/low bounds on price cp_mid = val_fn(option_type, fs, x, t, r, b, v_mid)[0] # initialize bisection loop current_step = 0 diff = abs(cp - cp_mid) _debug(" American IVOL starting conditions: CP={0} cp_mid={1}".format(cp, cp_mid)) _debug(" IVOL {0}. V[{1},{2},{3}]".format(current_step, v_low, v_mid, v_high)) # Keep bisection volatility until correct price is found while (diff > precision) and (current_step < max_steps): current_step += 1 # Cut the search area in half if cp_mid < cp: v_low = v_mid else: v_high = v_mid cp_low = val_fn(option_type, fs, x, t, r, b, v_low)[0] cp_high = val_fn(option_type, fs, x, t, r, b, v_high)[0] v_mid = v_low + (cp - cp_low) * (v_high - v_low) / (cp_high - cp_low) v_mid = max(_GBS_Limits.MIN_V, v_mid) # enforce high/low bounds v_mid = min(_GBS_Limits.MAX_V, v_mid) # enforce high/low bounds cp_mid = val_fn(option_type, fs, x, t, r, b, v_mid)[0] diff = abs(cp - cp_mid) _debug(" IVOL {0}. V[{1},{2},{3}]".format(current_step, v_low, v_mid, v_high)) # return output if abs(cp - cp_mid) < precision: return v_mid else: raise GBS_CalculationError( "Implied Vol did not converge. Best Guess={0}, Price diff={1}, Required Precision={2}".format(v_mid, diff, precision)) ``` -------------------- ### Public Interface for valuation functions This section encapsulates the functions that user will call to value certain options. These function primarily figure out the cost-of-carry term (b) and then call the generic version of the function (like _GBS() or _American). All of these functions return an array containg the premium and the greeks. ```python # This is the public interface for European Options # Each call does a little bit of processing and then calls the calculations located in the _gbs module # Inputs: # option_type = "p" or "c" # fs = price of underlying # x = strike # t = time to expiration # v = implied volatility # r = risk free rate # q = dividend payment # b = cost of carry # Outputs: # value = price of the option # delta = first derivative of value with respect to price of underlying # gamma = second derivative of value w.r.t price of underlying # theta = first derivative of value w.r.t. time to expiration # vega = first derivative of value w.r.t. implied volatility # rho = first derivative of value w.r.t. 
risk free rates ``` ```python # --------------------------- # Black Scholes: stock Options (no dividend yield) def black_scholes(option_type, fs, x, t, r, v): b = r return _gbs(option_type, fs, x, t, r, b, v) ``` ```python # --------------------------- # Merton Model: Stocks Index, stocks with a continuous dividend yields def merton(option_type, fs, x, t, r, q, v): b = r - q return _gbs(option_type, fs, x, t, r, b, v) ``` ```python # --------------------------- # Commodities def black_76(option_type, fs, x, t, r, v): b = 0 return _gbs(option_type, fs, x, t, r, b, v) ``` ```python # --------------------------- # FX Options def garman_kohlhagen(option_type, fs, x, t, r, rf, v): b = r - rf return _gbs(option_type, fs, x, t, r, b, v) ``` ```python # --------------------------- # Average Price option on commodities def asian_76(option_type, fs, x, t, t_a, r, v): # Check that TA is reasonable if (t_a < _GBS_Limits.MIN_TA) or (t_a > t): raise GBS_InputError( "Invalid Input Averaging Time (TA = {0}). Acceptable range for inputs is {1} to <T".format(t_a, _GBS_Limits.MIN_TA)) # Approximation to value Asian options on commodities b = 0 if t_a == t: # if there is no averaging period, this is just Black Scholes v_a = v else: # Approximate the volatility m = (2 * math.exp((v ** 2) * t) - 2 * math.exp((v ** 2) * t_a) * (1 + (v ** 2) * (t - t_a))) / ( (v ** 4) * ((t - t_a) ** 2)) v_a = math.sqrt(math.log(m) / t) # Finally, have the GBS function do the calculation return _gbs(option_type, fs, x, t, r, b, v_a) ``` ```python # --------------------------- # Spread Option formula def kirks_76(option_type, f1, f2, x, t, r, v1, v2, corr): # create the modifications to the GBS formula to handle spread options b = 0 fs = f1 / (f2 + x) f_temp = f2 / (f2 + x) v = math.sqrt((v1 ** 2) + ((v2 * f_temp) ** 2) - (2 * corr * v1 * v2 * f_temp)) my_values = _gbs(option_type, fs, 1.0, t, r, b, v) # Have the GBS function return a value return my_values[0] * (f2 + x), 0, 0, 0, 0, 0 ``` ```python # --------------------------- # American Options (stock style, set q=0 for non-dividend paying options) def american(option_type, fs, x, t, r, q, v): b = r - q return _american_option(option_type, fs, x, t, r, b, v) ``` ```python # --------------------------- # Commodities def american_76(option_type, fs, x, t, r, v): b = 0 return _american_option(option_type, fs, x, t, r, b, v) ``` ### Public Interface for implied Volatility Functions ```python # Inputs: # option_type = "p" or "c" # fs = price of underlying # x = strike # t = time to expiration # v = implied volatility # r = risk free rate # q = dividend payment # b = cost of carry # Outputs: # value = price of the option # delta = first derivative of value with respect to price of underlying # gamma = second derivative of value w.r.t price of underlying # theta = first derivative of value w.r.t. time to expiration # vega = first derivative of value w.r.t. implied volatility # rho = first derivative of value w.r.t. 
risk free rates ``` ```python def euro_implied_vol(option_type, fs, x, t, r, q, cp): b = r - q return _gbs_implied_vol(option_type, fs, x, t, r, b, cp) ``` ```python def euro_implied_vol_76(option_type, fs, x, t, r, cp): b = 0 return _gbs_implied_vol(option_type, fs, x, t, r, b, cp) ``` ```python def amer_implied_vol(option_type, fs, x, t, r, q, cp): b = r - q return _american_implied_vol(option_type, fs, x, t, r, b, cp) ``` ```python def amer_implied_vol_76(option_type, fs, x, t, r, cp): b = 0 return _american_implied_vol(option_type, fs, x, t, r, b, cp) ``` ### Implementation: Helper Functions These functions aren't part of the main code but serve as utility functions, mostly used for debugging ```python # --------------------------- # Helper Function for Debugging # Prints a message if running code from this module and _DEBUG is set to true # otherwise, do nothing def _debug(debug_input): if (__name__ == "__main__") and (_DEBUG is True): print(debug_input) ``` ```python # This class defines the Exception that gets thrown when invalid input is placed into the GBS function class GBS_InputError(Exception): def __init__(self, mismatch): Exception.__init__(self, mismatch) ``` ```python # This class defines the Exception that gets thrown when there is a calculation error class GBS_CalculationError(Exception): def __init__(self, mismatch): Exception.__init__(self, mismatch) ``` ```python # This function tests that two floating point numbers are the same # Numbers less than 1 million are considered the same if they are within .000001 of each other # Numbers larger than 1 million are considered the same if they are within .0001% of each other # User can override the default precision if necessary def assert_close(value_a, value_b, precision=.000001): my_precision = precision if (value_a < 1000000.0) and (value_b < 1000000.0): my_diff = abs(value_a - value_b) my_diff_type = "Difference" else: my_diff = abs((value_a - value_b) / value_a) my_diff_type = "Percent Difference" _debug("Comparing {0} and {1}. Difference is {2}, Difference Type is {3}".format(value_a, value_b, my_diff, my_diff_type)) if my_diff < my_precision: my_result = True else: my_result = False if (__name__ == "__main__") and (my_result is False): print(" FAILED TEST. Comparing {0} and {1}. Difference is {2}, Difference Type is {3}".format(value_a, value_b, my_diff, my_diff_type)) else: print(".") return my_result ``` ## Unit Testing This will print out a "." 
if the test is successful or an error message if the test fails ```python if __name__ == "__main__": print ("=====================================") print ("American Options Intermediate Functions") print ("=====================================") # --------------------------- # unit tests for _psi() # _psi(FS, t2, gamma, H, I2, I1, t1, r, b, V): print("Testing _psi (American Option Intermediate Calculation)") assert_close(_psi(fs=120, t2=3, gamma=1, h=375, i2=375, i1=300, t1=1, r=.05, b=0.03, v=0.1), 112.87159814023171) assert_close(_psi(fs=125, t2=2, gamma=1, h=100, i2=100, i1=75, t1=1, r=.05, b=0.03, v=0.1), 1.7805459905819128) # --------------------------- # unit tests for _phi() print("Testing _phi (American Option Intermediate Calculation)") # _phi(FS, T, gamma, h, I, r, b, V): assert_close(_phi(fs=120, t=3, gamma=4.51339343051624, h=151.696096685711, i=151.696096685711, r=.02, b=-0.03, v=0.14), 1102886677.05955) assert_close(_phi(fs=125, t=3, gamma=1, h=374.061664206768, i=374.061664206768, r=.05, b=0.03, v=0.14), 117.714544103477) # --------------------------- # unit tests for _CBND print("Testing _CBND (Cumulative Binomial Normal Distribution)") assert_close(_cbnd(0, 0, 0), 0.25) assert_close(_cbnd(0, 0, -0.5), 0.16666666666666669) assert_close(_cbnd(-0.5, 0, 0), 0.15426876936299347) assert_close(_cbnd(0, -0.5, 0), 0.15426876936299347) assert_close(_cbnd(0, -0.99999999, -0.99999999), 0.0) assert_close(_cbnd(0.000001, -0.99999999, -0.99999999), 0.0) assert_close(_cbnd(0, 0, 0.5), 0.3333333333333333) assert_close(_cbnd(0.5, 0, 0), 0.3457312306370065) assert_close(_cbnd(0, 0.5, 0), 0.3457312306370065) assert_close(_cbnd(0, 0.99999999, 0.99999999), 0.5) assert_close(_cbnd(0.000001, 0.99999999, 0.99999999), 0.5000003989422803) ``` ===================================== American Options Intermediate Functions ===================================== Testing _psi (American Option Intermediate Calculation) . . Testing _phi (American Option Intermediate Calculation) . . Testing _CBND (Cumulative Binomial Normal Distribution) . . . . . . . . . . . ```python # --------------------------- # Testing American Options if __name__ == "__main__": print("=====================================") print("American Options Testing") print("=====================================") print("testing _Bjerksund_Stensland_2002()") # _american_option(option_type, X, FS, T, b, r, V) assert_close(_bjerksund_stensland_2002(fs=90, x=100, t=0.5, r=0.1, b=0, v=0.15)[0], 0.8099, precision=.001) assert_close(_bjerksund_stensland_2002(fs=100, x=100, t=0.5, r=0.1, b=0, v=0.25)[0], 6.7661, precision=.001) assert_close(_bjerksund_stensland_2002(fs=110, x=100, t=0.5, r=0.1, b=0, v=0.35)[0], 15.5137, precision=.001) assert_close(_bjerksund_stensland_2002(fs=100, x=90, t=0.5, r=.1, b=0, v=0.15)[0], 10.5400, precision=.001) assert_close(_bjerksund_stensland_2002(fs=100, x=100, t=0.5, r=.1, b=0, v=0.25)[0], 6.7661, precision=.001) assert_close(_bjerksund_stensland_2002(fs=100, x=110, t=0.5, r=.1, b=0, v=0.35)[0], 5.8374, precision=.001) print("testing _Bjerksund_Stensland_1993()") # Prices for 1993 model slightly different than those presented in Haug's Complete Guide to Option Pricing Formulas # Possibly due to those results being based on older CBND calculation? 
assert_close(_bjerksund_stensland_1993(fs=90, x=100, t=0.5, r=0.1, b=0, v=0.15)[0], 0.8089, precision=.001) assert_close(_bjerksund_stensland_1993(fs=100, x=100, t=0.5, r=0.1, b=0, v=0.25)[0], 6.757, precision=.001) assert_close(_bjerksund_stensland_1993(fs=110, x=100, t=0.5, r=0.1, b=0, v=0.35)[0], 15.4998, precision=.001) print("testing _american_option()") assert_close(_american_option("p", fs=90, x=100, t=0.5, r=0.1, b=0, v=0.15)[0], 10.5400, precision=.001) assert_close(_american_option("p", fs=100, x=100, t=0.5, r=0.1, b=0, v=0.25)[0], 6.7661, precision=.001) assert_close(_american_option("p", fs=110, x=100, t=0.5, r=0.1, b=0, v=0.35)[0], 5.8374, precision=.001) assert_close(_american_option('c', fs=100, x=95, t=0.00273972602739726, r=0.000751040922831883, b=0, v=0.2)[0], 5.0, precision=.01) assert_close(_american_option('c', fs=42, x=40, t=0.75, r=0.04, b=-0.04, v=0.35)[0], 5.28, precision=.01) assert_close(_american_option('c', fs=90, x=100, t=0.1, r=0.10, b=0, v=0.15)[0], 0.02, precision=.01) print("Testing that American valuation works for integer inputs") assert_close(_american_option('c', fs=100, x=100, t=1, r=0, b=0, v=0.35)[0], 13.892, precision=.001) assert_close(_american_option('p', fs=100, x=100, t=1, r=0, b=0, v=0.35)[0], 13.892, precision=.001) print("Testing valuation works at minimum/maximum values for T") assert_close(_american_option('c', 100, 100, 0.00396825396825397, 0.000771332656950173, 0, 0.15)[0], 0.3769, precision=.001) assert_close(_american_option('p', 100, 100, 0.00396825396825397, 0.000771332656950173, 0, 0.15)[0], 0.3769, precision=.001) assert_close(_american_option('c', 100, 100, 100, 0.042033868311581, 0, 0.15)[0], 18.61206, precision=.001) assert_close(_american_option('p', 100, 100, 100, 0.042033868311581, 0, 0.15)[0], 18.61206, precision=.001) print("Testing valuation works at minimum/maximum values for X") assert_close(_american_option('c', 100, 0.01, 1, 0.00330252458693489, 0, 0.15)[0], 99.99, precision=.001) assert_close(_american_option('p', 100, 0.01, 1, 0.00330252458693489, 0, 0.15)[0], 0, precision=.001) assert_close(_american_option('c', 100, 2147483248, 1, 0.00330252458693489, 0, 0.15)[0], 0, precision=.001) assert_close(_american_option('p', 100, 2147483248, 1, 0.00330252458693489, 0, 0.15)[0], 2147483148, precision=.001) print("Testing valuation works at minimum/maximum values for F/S") assert_close(_american_option('c', 0.01, 100, 1, 0.00330252458693489, 0, 0.15)[0], 0, precision=.001) assert_close(_american_option('p', 0.01, 100, 1, 0.00330252458693489, 0, 0.15)[0], 99.99, precision=.001) assert_close(_american_option('c', 2147483248, 100, 1, 0.00330252458693489, 0, 0.15)[0], 2147483148, precision=.001) assert_close(_american_option('p', 2147483248, 100, 1, 0.00330252458693489, 0, 0.15)[0], 0, precision=.001) print("Testing valuation works at minimum/maximum values for b") assert_close(_american_option('c', 100, 100, 1, 0, -1, 0.15)[0], 0.0, precision=.001) assert_close(_american_option('p', 100, 100, 1, 0, -1, 0.15)[0], 63.2121, precision=.001) assert_close(_american_option('c', 100, 100, 1, 0, 1, 0.15)[0], 171.8282, precision=.001) assert_close(_american_option('p', 100, 100, 1, 0, 1, 0.15)[0], 0.0, precision=.001) print("Testing valuation works at minimum/maximum values for r") assert_close(_american_option('c', 100, 100, 1, -1, 0, 0.15)[0], 16.25133, precision=.001) assert_close(_american_option('p', 100, 100, 1, -1, 0, 0.15)[0], 16.25133, precision=.001) assert_close(_american_option('c', 100, 100, 1, 1, 0, 0.15)[0], 3.6014, 
precision=.001) assert_close(_american_option('p', 100, 100, 1, 1, 0, 0.15)[0], 3.6014, precision=.001) print("Testing valuation works at minimum/maximum values for V") assert_close(_american_option('c', 100, 100, 1, 0.05, 0, 0.005)[0], 0.1916, precision=.001) assert_close(_american_option('p', 100, 100, 1, 0.05, 0, 0.005)[0], 0.1916, precision=.001) assert_close(_american_option('c', 100, 100, 1, 0.05, 0, 1)[0], 36.4860, precision=.001) assert_close(_american_option('p', 100, 100, 1, 0.05, 0, 1)[0], 36.4860, precision=.001) ``` ===================================== American Options Testing ===================================== testing _Bjerksund_Stensland_2002() . . . . . . testing _Bjerksund_Stensland_1993() . . . testing _american_option() . . . . . . Testing that American valuation works for integer inputs . . Testing valuation works at minimum/maximum values for T . . . . Testing valuation works at minimum/maximum values for X . . . . Testing valuation works at minimum/maximum values for F/S . . . . Testing valuation works at minimum/maximum values for b . . . . Testing valuation works at minimum/maximum values for r . . . . Testing valuation works at minimum/maximum values for V . . . . ```python # --------------------------- # Testing European Options if __name__ == "__main__": print("=====================================") print("Generalized Black Scholes (GBS) Testing") print("=====================================") print("testing GBS Premium") assert_close(_gbs('c', 100, 95, 0.00273972602739726, 0.000751040922831883, 0, 0.2)[0], 4.99998980469552) assert_close(_gbs('c', 92.45, 107.5, 0.0876712328767123, 0.00192960198828152, 0, 0.3)[0], 0.162619795863781) assert_close(_gbs('c', 93.0766666666667, 107.75, 0.164383561643836, 0.00266390125346286, 0, 0.2878)[0], 0.584588840095316) assert_close(_gbs('c', 93.5333333333333, 107.75, 0.249315068493151, 0.00319934651984034, 0, 0.2907)[0], 1.27026849732877) assert_close(_gbs('c', 93.8733333333333, 107.75, 0.331506849315069, 0.00350934592318849, 0, 0.2929)[0], 1.97015685523537) assert_close(_gbs('c', 94.1166666666667, 107.75, 0.416438356164384, 0.00367360967852615, 0, 0.2919)[0], 2.61731599547608) assert_close(_gbs('p', 94.2666666666667, 107.75, 0.498630136986301, 0.00372609838856132, 0, 0.2888)[0], 16.6074587545269) assert_close(_gbs('p', 94.3666666666667, 107.75, 0.583561643835616, 0.00370681407974257, 0, 0.2923)[0], 17.1686196701434) assert_close(_gbs('p', 94.44, 107.75, 0.668493150684932, 0.00364163303865433, 0, 0.2908)[0], 17.6038273793172) assert_close(_gbs('p', 94.4933333333333, 107.75, 0.750684931506849, 0.00355604221290591, 0, 0.2919)[0], 18.0870982577296) assert_close(_gbs('p', 94.49, 107.75, 0.835616438356164, 0.00346100468320478, 0, 0.2901)[0], 18.5149895730975) assert_close(_gbs('p', 94.39, 107.75, 0.917808219178082, 0.00337464630758452, 0, 0.2876)[0], 18.9397688539483) print("Testing that valuation works for integer inputs") assert_close(_gbs('c', fs=100, x=95, t=1, r=1, b=0, v=1)[0], 14.6711476484) assert_close(_gbs('p', fs=100, x=95, t=1, r=1, b=0, v=1)[0], 12.8317504425) print("Testing valuation works at minimum/maximum values for T") assert_close(_gbs('c', 100, 100, 0.00396825396825397, 0.000771332656950173, 0, 0.15)[0], 0.376962465712609) assert_close(_gbs('p', 100, 100, 0.00396825396825397, 0.000771332656950173, 0, 0.15)[0], 0.376962465712609) assert_close(_gbs('c', 100, 100, 100, 0.042033868311581, 0, 0.15)[0], 0.817104022604705) assert_close(_gbs('p', 100, 100, 100, 0.042033868311581, 0, 0.15)[0], 0.817104022604705) 
print("Testing valuation works at minimum/maximum values for X") assert_close(_gbs('c', 100, 0.01, 1, 0.00330252458693489, 0, 0.15)[0], 99.660325245681) assert_close(_gbs('p', 100, 0.01, 1, 0.00330252458693489, 0, 0.15)[0], 0) assert_close(_gbs('c', 100, 2147483248, 1, 0.00330252458693489, 0, 0.15)[0], 0) assert_close(_gbs('p', 100, 2147483248, 1, 0.00330252458693489, 0, 0.15)[0], 2140402730.16601) print("Testing valuation works at minimum/maximum values for F/S") assert_close(_gbs('c', 0.01, 100, 1, 0.00330252458693489, 0, 0.15)[0], 0) assert_close(_gbs('p', 0.01, 100, 1, 0.00330252458693489, 0, 0.15)[0], 99.660325245681) assert_close(_gbs('c', 2147483248, 100, 1, 0.00330252458693489, 0, 0.15)[0], 2140402730.16601) assert_close(_gbs('p', 2147483248, 100, 1, 0.00330252458693489, 0, 0.15)[0], 0) print("Testing valuation works at minimum/maximum values for b") assert_close(_gbs('c', 100, 100, 1, 0.05, -1, 0.15)[0], 1.62505648981223E-11) assert_close(_gbs('p', 100, 100, 1, 0.05, -1, 0.15)[0], 60.1291675389721) assert_close(_gbs('c', 100, 100, 1, 0.05, 1, 0.15)[0], 163.448023481557) assert_close(_gbs('p', 100, 100, 1, 0.05, 1, 0.15)[0], 4.4173615264761E-11) print("Testing valuation works at minimum/maximum values for r") assert_close(_gbs('c', 100, 100, 1, -1, 0, 0.15)[0], 16.2513262267156) assert_close(_gbs('p', 100, 100, 1, -1, 0, 0.15)[0], 16.2513262267156) assert_close(_gbs('c', 100, 100, 1, 1, 0, 0.15)[0], 2.19937783786316) assert_close(_gbs('p', 100, 100, 1, 1, 0, 0.15)[0], 2.19937783786316) print("Testing valuation works at minimum/maximum values for V") assert_close(_gbs('c', 100, 100, 1, 0.05, 0, 0.005)[0], 0.189742620249) assert_close(_gbs('p', 100, 100, 1, 0.05, 0, 0.005)[0], 0.189742620249) assert_close(_gbs('c', 100, 100, 1, 0.05, 0, 1)[0], 36.424945370234) assert_close(_gbs('p', 100, 100, 1, 0.05, 0, 1)[0], 36.424945370234) print("Checking that Greeks work for calls") assert_close(_gbs('c', 100, 100, 1, 0.05, 0, 0.15)[0], 5.68695251984796) assert_close(_gbs('c', 100, 100, 1, 0.05, 0, 0.15)[1], 0.50404947485) assert_close(_gbs('c', 100, 100, 1, 0.05, 0, 0.15)[2], 0.025227988795588) assert_close(_gbs('c', 100, 100, 1, 0.05, 0, 0.15)[3], -2.55380111351125) assert_close(_gbs('c', 100, 100, 2, 0.05, 0.05, 0.25)[4], 50.7636345571413) assert_close(_gbs('c', 100, 100, 1, 0.05, 0, 0.15)[5], 44.7179949651117) print("Checking that Greeks work for puts") assert_close(_gbs('p', 100, 100, 1, 0.05, 0, 0.15)[0], 5.68695251984796) assert_close(_gbs('p', 100, 100, 1, 0.05, 0, 0.15)[1], -0.447179949651) assert_close(_gbs('p', 100, 100, 1, 0.05, 0, 0.15)[2], 0.025227988795588) assert_close(_gbs('p', 100, 100, 1, 0.05, 0, 0.15)[3], -2.55380111351125) assert_close(_gbs('p', 100, 100, 2, 0.05, 0.05, 0.25)[4], 50.7636345571413) assert_close(_gbs('p', 100, 100, 1, 0.05, 0, 0.15)[5], -50.4049474849597) ``` ===================================== Generalized Black Scholes (GBS) Testing ===================================== testing GBS Premium . . . . . . . . . . . . Testing that valuation works for integer inputs . . Testing valuation works at minimum/maximum values for T . . . . Testing valuation works at minimum/maximum values for X . . . . Testing valuation works at minimum/maximum values for F/S . . . . Testing valuation works at minimum/maximum values for b . . . . Testing valuation works at minimum/maximum values for r . . . . Testing valuation works at minimum/maximum values for V . . . . Checking that Greeks work for calls . . . . . . Checking that Greeks work for puts . . . . . . 
```python # --------------------------- # Testing Implied Volatility if __name__ == "__main__": print("=====================================") print("Implied Volatility Testing") print("=====================================") print("For options far away from ATM or those very near to expiry, volatility") print("doesn't have a major effect on the price. When large changes in vol result in") print("price changes less than the minimum precision, it is very difficult to test implied vol") print("=====================================") print ("testing at-the-money approximation") assert_close(_approx_implied_vol(option_type="c", fs=100, x=100, t=1, r=.05, b=0, cp=5),0.131757) assert_close(_approx_implied_vol(option_type="c", fs=59, x=60, t=0.25, r=.067, b=0.067, cp=2.82),0.239753) print("testing GBS Implied Vol") assert_close(_gbs_implied_vol('c', 92.45, 107.5, 0.0876712328767123, 0.00192960198828152, 0, 0.162619795863781),0.3) assert_close(_gbs_implied_vol('c', 93.0766666666667, 107.75, 0.164383561643836, 0.00266390125346286, 0, 0.584588840095316),0.2878) assert_close(_gbs_implied_vol('c', 93.5333333333333, 107.75, 0.249315068493151, 0.00319934651984034, 0, 1.27026849732877),0.2907) assert_close(_gbs_implied_vol('c', 93.8733333333333, 107.75, 0.331506849315069, 0.00350934592318849, 0, 1.97015685523537),0.2929) assert_close(_gbs_implied_vol('c', 94.1166666666667, 107.75, 0.416438356164384, 0.00367360967852615, 0, 2.61731599547608),0.2919) assert_close(_gbs_implied_vol('p', 94.2666666666667, 107.75, 0.498630136986301, 0.00372609838856132, 0, 16.6074587545269),0.2888) assert_close(_gbs_implied_vol('p', 94.3666666666667, 107.75, 0.583561643835616, 0.00370681407974257, 0, 17.1686196701434),0.2923) assert_close(_gbs_implied_vol('p', 94.44, 107.75, 0.668493150684932, 0.00364163303865433, 0, 17.6038273793172),0.2908) assert_close(_gbs_implied_vol('p', 94.4933333333333, 107.75, 0.750684931506849, 0.00355604221290591, 0, 18.0870982577296),0.2919) assert_close(_gbs_implied_vol('p', 94.39, 107.75, 0.917808219178082, 0.00337464630758452, 0, 18.9397688539483),0.2876) print("Testing that GBS implied vol works for integer inputs") assert_close(_gbs_implied_vol('c', fs=100, x=95, t=1, r=1, b=0, cp=14.6711476484), 1) assert_close(_gbs_implied_vol('p', fs=100, x=95, t=1, r=1, b=0, cp=12.8317504425), 1) print("testing American Option implied volatility") assert_close(_american_implied_vol("p", fs=90, x=100, t=0.5, r=0.1, b=0, cp=10.54), 0.15, precision=0.01) assert_close(_american_implied_vol("p", fs=100, x=100, t=0.5, r=0.1, b=0, cp=6.7661), 0.25, precision=0.0001) assert_close(_american_implied_vol("p", fs=110, x=100, t=0.5, r=0.1, b=0, cp=5.8374), 0.35, precision=0.0001) assert_close(_american_implied_vol('c', fs=42, x=40, t=0.75, r=0.04, b=-0.04, cp=5.28), 0.35, precision=0.01) assert_close(_american_implied_vol('c', fs=90, x=100, t=0.1, r=0.10, b=0, cp=0.02), 0.15, precision=0.01) print("Testing that American implied volatility works for integer inputs") assert_close(_american_implied_vol('c', fs=100, x=100, t=1, r=0, b=0, cp=13.892), 0.35, precision=0.01) assert_close(_american_implied_vol('p', fs=100, x=100, t=1, r=0, b=0, cp=13.892), 0.35, precision=0.01) ``` ===================================== Implied Volatility Testing ===================================== For options far away from ATM or those very near to expiry, volatility doesn't have a major effect on the price. 
When large changes in vol result in price changes less than the minimum precision, it is very difficult to test implied vol ===================================== testing at-the-money approximation . . testing GBS Implied Vol . . . . . . . . . . Testing that GBS implied vol works for integer inputs . . testing American Option implied volatility . . . . . Testing that American implied volatility works for integer inputs . . ```python # --------------------------- # Testing the external interface if __name__ == "__main__": print("=====================================") print("External Interface Testing") print("=====================================") # BlackScholes(option_type, X, FS, T, r, V) print("Testing: GBS.BlackScholes") assert_close(black_scholes('c', 102, 100, 2, 0.05, 0.25)[0], 20.02128028) assert_close(black_scholes('p', 102, 100, 2, 0.05, 0.25)[0], 8.50502208) # Merton(option_type, X, FS, T, r, q, V) print("Testing: GBS.Merton") assert_close(merton('c', 102, 100, 2, 0.05, 0.01, 0.25)[0], 18.63371484) assert_close(merton('p', 102, 100, 2, 0.05, 0.01, 0.25)[0], 9.13719197) # Black76(option_type, X, FS, T, r, V) print("Testing: GBS.Black76") assert_close(black_76('c', 102, 100, 2, 0.05, 0.25)[0], 13.74803567) assert_close(black_76('p', 102, 100, 2, 0.05, 0.25)[0], 11.93836083) # garman_kohlhagen(option_type, X, FS, T, b, r, rf, V) print("Testing: GBS.garman_kohlhagen") assert_close(garman_kohlhagen('c', 102, 100, 2, 0.05, 0.01, 0.25)[0], 18.63371484) assert_close(garman_kohlhagen('p', 102, 100, 2, 0.05, 0.01, 0.25)[0], 9.13719197) # Asian76(option_type, X, FS, T, TA, r, V): print("Testing: Asian76") assert_close(asian_76('c', 102, 100, 2, 1.9, 0.05, 0.25)[0], 13.53508930) assert_close(asian_76('p', 102, 100, 2, 1.9, 0.05, 0.25)[0], 11.72541446) # Kirks76(option_type, X, F1, F2, T, r, V1, V2, corr) print("Testing: Kirks") assert_close(kirks_76("c", f1=37.384913362, f2=42.1774, x=3.0, t=0.043055556, r=0, v1=0.608063, v2=0.608063, corr=.8)[0],0.007649192) assert_close(kirks_76("p", f1=37.384913362, f2=42.1774, x=3.0, t=0.043055556, r=0, v1=0.608063, v2=0.608063, corr=.8)[0],7.80013583) ``` ===================================== External Interface Testing ===================================== Testing: GBS.BlackScholes . . Testing: GBS.Merton . . Testing: GBS.Black76 . . Testing: GBS.garman_kohlhagen . . Testing: Asian76 . . Testing: Kirks . . 
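For reference, a typical call through the public interface looks like the sketch below (not part of the original notebook): every wrapper returns the tuple (value, delta, gamma, theta, vega, rho) documented above, so individual greeks can be read off by index.

```python
# Hedged usage sketch: price a 2-year call on a dividend-paying stock with the
# Merton wrapper and read selected greeks from the returned tuple
# (value, delta, gamma, theta, vega, rho).
out = merton('c', fs=102, x=100, t=2, r=0.05, q=0.01, v=0.25)
print("premium = {0:.4f}".format(out[0]))
print("delta   = {0:.4f}".format(out[1]))
print("vega    = {0:.4f}".format(out[4]))
```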
## Benchmarking This section benchmarks the output against a 3rd party option pricing library described in the book "The Complete Guide to Option Pricing Formulas" by Espen Haug. *Haug, Espen. The Complete Guide to Option Pricing Formulas. 
McGraw-Hill 1997, pages 10-15* Indexes for GBS Functions: * [0] Value * [1] Delta * [2] Gamma * [3] Theta (annualized, divide by 365 to get daily theta) * [4] Vega * [5] Rho ```python # ------------------ # Benchmarking against other option models if __name__ == "__main__": print("=====================================") print("Selected Comparison to 3rd party models") print("=====================================") print("Testing GBS.BlackScholes") assert_close(black_scholes('c', fs=60, x=65, t=0.25, r=0.08, v=0.30)[0], 2.13336844492) print("Testing GBS.Merton") assert_close(merton('p', fs=100, x=95, t=0.5, r=0.10, q=0.05, v=0.20)[0], 2.46478764676) print("Testing GBS.Black76") assert_close(black_76('c', fs=19, x=19, t=0.75, r=0.10, v=0.28)[0], 1.70105072524) print("Testing GBS.garman_kohlhagen") assert_close(garman_kohlhagen('c', fs=1.56, x=1.60, t=0.5, r=0.06, rf=0.08, v=0.12)[0], 0.0290992531494) print("Testing Delta") assert_close(black_76('c', fs=105, x=100, t=0.5, r=0.10, v=0.36)[1], 0.5946287) assert_close(black_76('p', fs=105, x=100, t=0.5, r=0.10, v=0.36)[1], -0.356601) print("Testing Gamma") assert_close(black_scholes('c', fs=55, x=60, t=0.75, r=0.10, v=0.30)[2], 0.0278211604769) assert_close(black_scholes('p', fs=55, x=60, t=0.75, r=0.10, v=0.30)[2], 0.0278211604769) print("Testing Theta") assert_close(merton('p', fs=430, x=405, t=0.0833, r=0.07, q=0.05, v=0.20)[3], -31.1923670565) print("Testing Vega") assert_close(black_scholes('c', fs=55, x=60, t=0.75, r=0.10, v=0.30)[4], 18.9357773496) assert_close(black_scholes('p', fs=55, x=60, t=0.75, r=0.10, v=0.30)[4], 18.9357773496) print("Testing Rho") assert_close(black_scholes('c', fs=72, x=75, t=1, r=0.09, v=0.19)[5], 38.7325050173) ``` ===================================== Selected Comparison to 3rd party models ===================================== Testing GBS.BlackScholes . Testing GBS.Merton . Testing GBS.Black76 . Testing GBS.garman_kohlhagen . Testing Delta . . Testing Gamma . . Testing Theta . Testing Vega . . Testing Rho . ```python bs = black_scholes('c', fs=60, x=65, t=0.25, r=0.08, v=0.30)[0] bs ``` 2.1333684449162007 ```python ```
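As a final round trip, the premium computed above can be fed back into the public implied-volatility interface to recover the input volatility. This closing sketch is not part of the original notebook; it uses the `euro_implied_vol` wrapper defined earlier, with q=0 for the no-dividend Black-Scholes case.

```python
# Hedged round-trip sketch: recover the 30% volatility used to price the call above.
cp = black_scholes('c', fs=60, x=65, t=0.25, r=0.08, v=0.30)[0]
iv = euro_implied_vol('c', fs=60, x=65, t=0.25, r=0.08, q=0, cp=cp)
print("recovered implied vol = {0:.4f}".format(iv))   # expected to be close to 0.30
```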
5db6c867039996d0976d1801cdf552278f5748af
125,001
ipynb
Jupyter Notebook
GBS.ipynb
SolitonScientific/Option_Pricing
8e1ba226583f3f03a2d978d332696129bafa83cc
[ "MIT" ]
null
null
null
GBS.ipynb
SolitonScientific/Option_Pricing
8e1ba226583f3f03a2d978d332696129bafa83cc
[ "MIT" ]
null
null
null
GBS.ipynb
SolitonScientific/Option_Pricing
8e1ba226583f3f03a2d978d332696129bafa83cc
[ "MIT" ]
null
null
null
50.607692
1,117
0.549836
true
29,582
Qwen/Qwen-72B
1. YES 2. YES
0.887205
0.831143
0.737394
__label__eng_Latn
0.930563
0.551545
# Denmark - Infer parameters ```python %%capture ## compile PyRoss for this notebook import os owd = os.getcwd() os.chdir('../../') %run setup.py install os.chdir(owd) ``` ```python %matplotlib inline import numpy as np from matplotlib import pyplot as plt import matplotlib.image as mpimg import pyross import time import pandas as pd ``` We use the Denmark age structure and contact matrix as well as the real data to infer parameters from the different model implemented in Pyross. **Summary:** 1. We briefly summarize each model starting with the SIIR model. 2. We load the age structure and contact matrix for Denmark. The contact matrix is given as \begin{equation} C = C_{H} + C_{W} + C_{S} + C_{O}, \end{equation} where the four terms denote the number of contacts at home, work, school, and all other remaining contacts. 3. We are using real data (the number of case in Denmark) to fit parameters from each model. ## Import all the relevant Denmark data ```python my_data = np.genfromtxt('../data/age_structures/Denmark-2019.csv', delimiter=',', skip_header=1) aM, aF = my_data[:, 1], my_data[:, 2] Ni0=aM+aF; M=16 ## number of age classes Ni = Ni0[:M] N=np.sum(Ni) print("Age groups are in brackets of 5 (i.e. 0-4, 5-9, 10-14, .. , 75-79).") print("Number of individuals in each bracket:") print(Ni.astype('int')) print("Total number of individuals: {0}".format(np.sum(Ni.astype('int')))) ``` Age groups are in brackets of 5 (i.e. 0-4, 5-9, 10-14, .. , 75-79). Number of individuals in each bracket: [302353 305513 338779 341219 379522 395469 342443 320132 366147 385944 422585 381360 338039 319145 346572 220374] Total number of individuals: 5505596 ```python Ni ``` array([302353., 305513., 338779., 341219., 379522., 395469., 342443., 320132., 366147., 385944., 422585., 381360., 338039., 319145., 346572., 220374.]) ```python # Get individual contact matrices CH, CW, CS, CO = pyross.contactMatrix.Denmark() # By default, home, work, school, and others contribute to the contact matrix C = CH + CW + CS + CO # Illustrate the individual contact matrices: fig,aCF = plt.subplots(2,2); aCF[0][0].pcolor(CH, cmap=plt.cm.get_cmap('GnBu', 10)); aCF[0][1].pcolor(CW, cmap=plt.cm.get_cmap('GnBu', 10)); aCF[1][0].pcolor(CS, cmap=plt.cm.get_cmap('GnBu', 10)); aCF[1][1].pcolor(CO, cmap=plt.cm.get_cmap('GnBu', 10)); ``` C is the sum of contributions from contacts at home, workplace, schools and all other public spheres. Using superscripts $H$, $W$, $S$ and $O$ for each of these, we write the contact matrix as $$ C_{ij} = C^H_{ij} + C^W_{ij} + C^S_{ij} + C^O_{ij} $$ We read in these contact matrices from the data sets provided in the paper *Projecting social contact matrices in 152 countries using contact surveys and demographic data* by Prem et al, sum them to obtain the total contact matrix. We also read in the age distribution of UK obtained from the *Population pyramid* website. 
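As a quick summary of the matrices plotted above, the sketch below (not part of the original analysis) combines the four settings and reports a population-weighted mean number of contacts per person, using only the `CH`, `CW`, `CS`, `CO` and `Ni` arrays already defined.

```python
# Hedged summary sketch: total contact matrix and a population-weighted mean
# number of contacts, using the arrays defined above.
C_total = CH + CW + CS + CO                 # contacts at home + work + school + other
contacts_by_age = C_total.sum(axis=1)       # average contacts of an individual in age group i
mean_contacts = np.sum(contacts_by_age * Ni) / np.sum(Ni)
print("Population-weighted mean contacts per person: {0:.1f}".format(mean_contacts))
```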
```python denmark_age_structured_case = pd.read_csv('../../denmark_cases_by_age.csv') case = denmark_age_structured_case.sum(axis=0)[1:] ``` ```python ## infective people - remove pp 80 years old + - no contact matrices nbday = denmark_age_structured_case.shape[1]-1 nbgroup = denmark_age_structured_case.shape[0] I = np.array(np.zeros([nbday, nbgroup-2])) for i in range(1,nbday+1): I[i-1] = np.array([denmark_age_structured_case.iloc[:,i][:nbgroup-2]]) ``` ```python # Get the latest data from Johns Hopkins University !git clone https://github.com/CSSEGISandData/COVID-19 ``` fatal: destination path 'COVID-19' already exists and is not an empty directory. ```python cases = pd.read_csv('COVID-19/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv') cols = cases.columns.tolist() case = cases.loc[94,][4:] ``` ```python # march 13 - 52 ``` ```python ## adding to the data the 11 previous day to have consistent increasing cases number I1 = np.array(np.zeros([nbday+11, 8])) I1[0]=np.array([[ 0., 0., 1., 2., 0., 1., 0., 0.]]) I1[1] = np.array([[ 0., 0., 1., 3., 0., 2., 0., 0.]]) I1[2] =np.array([[ 0., 1., 2., 3., 1., 2., 1., 0.]]) I1[3] =np.array([[ 0., 1., 2., 3., 1., 2., 1., 0.]]) I1[4] =np.array([[ 1., 2., 4., 5., 4., 4., 2., 1.]]) I1[5] =np.array([[ 1., 2., 4., 5., 4., 4., 2., 1.]]) I1[6] =np.array([[ 2., 3., 4., 7., 6., 6., 4., 3.]]) I1[7] =np.array([[ 4., 9., 16., 19., 15., 12.,12., 3.]]) I1[8] =np.array([[ 6., 13., 42., 60., 75., 39.,24., 3.]]) I1[9] =np.array([[ 7., 18., 78., 98., 120., 79.,38., 4.]]) I1[10] =np.array([[ 9., 28., 101., 121., 187., 119.,46., 4.]]) I1[11:] = I I = I1 ## we don't want a cum sum I[1:] -= I[:-1].copy() # weird mistake , negative number I[14,0] = 1 ``` ```python ## duplicate for each subgroup of age M = 16 Is = np.array(np.zeros([nbday+11, M])) for i in range(I.shape[0]): Is[i] = np.array([val for val in I[i]/2 for _ in (0, 1)]) ## take only integer values, loosing cases ! for j in range(Is.shape[0]): Is[j] = np.array([int(i) for i in Is[j]]) ``` ## SIIR model In this notebook, we consider the SIR model with symptomatically and asymptomatically infected. 
We are trying to infer the parameters of the model * $\alpha$ (fraction of asymptomatic infectives), * $\beta$ (probability of infection on contact), * $\gamma_{I_a}$ (rate of recovery for asymptomatic infected individuals), and * $\gamma_{I_s}$ (rate of recovery for symptomatic infected individuals) when given **real data** from Denmark for : * $S$ (number of susceptible individual), * $Is$ (number of symptomatic infected individual until May 5th), * $Ia$ (number of asymptomatic individual consider equal to Is), ```python from IPython.display import Image Image('SIIR.jpg') ``` ### 1) Let's create 3 groups ```python ## young 0-20 years ## medium 20-50 years ## senior 50-80 years ``` ```python M = 3 Ismod = np.array(np.zeros([nbday+11, M])) for i in range(nbday+11): Ismod[i,0] = sum(Is[i,0:5]) Ismod[i,1] = sum(Is[i,4:10]) Ismod[i,2] = sum(Is[i,11:16]) ``` ## contact matrix for 2 groups C1 = np.array(np.zeros([M, M])) C1[0,0] = int(sum(C[0:5,0:5]).sum()) C1[0,1] = int(sum(C[0:5,11:16]).sum()) C1[1,0] = int(sum(C[11:16,0:5]).sum()) C1[1,1] = int(sum(C[11:16,11:16]).sum()) ```python ## contact matrix C1 = np.array(np.zeros([M, M])) C1[0,0] = int(sum(C[0:5,0:5]).sum()) C1[0,1] = int(sum(C[0:5,4:10]).sum()) C1[0,2] = int(sum(C[0:5,11:16]).sum()) C1[1,0] = int(sum(C[4:10,0:5]).sum()) C1[1,1] = int(sum(C[4:10,4:10]).sum()) C1[1,2] = int(sum(C[4:10,11:16]).sum()) C1[2,0] = int(sum(C[11:16,0:5]).sum()) C1[2,1] = int(sum(C[11:16,4:10]).sum()) C1[2,2] = int(sum(C[11:16,11:16]).sum()) ``` ```python C = C1 ``` ```python Nimod = np.array(np.zeros([M])) Nimod[0] = sum(Ni[0:5]) Nimod[1] = sum(Ni[4:10]) Nimod[2] = sum(Ni[11:16]) ``` ```python Ni = Nimod ``` ```python N=np.sum(Ni) N ``` 5462533.0 #### 1)a) Run one simulation We generate a test trajectory on a population with two ages groups. ```python beta = 0.01 # infection rate gIa = 1./3. # recovery rate of asymptomatic infectives gIs = 1./3. # recovery rate of symptomatic infectives alpha = 0.4 # fraction of asymptomatic infectives fsa = 0.3 # initial conditions Is0 = np.array([2, 2, 2]) Ia0 = np.array([2, 2, 2]) R0 = np.zeros((M)) S0 = Ni - (Ia0 + Is0 + R0) ``` ```python ## matrix with all events def contactMatrix(t): return C ``` ```python fi = Ni/sum(Ni) ``` ```python # fraction of population in Age group Ni = N*fi ``` #### 1)b) Model ```python print('M :', M) print('Ni :', Ni) print('N :', N) print('C :', C) print('Ia0 :', Ia0) print('Is0 :', Is0) print('S0 :', S0) print('R0 :', R0) ``` M : 3 Ni : [1667386. 2189657. 1605490.] N : 5462533.0 C : [[44. 26. 2.] [26. 59. 5.] [ 9. 16. 14.]] Ia0 : [2 2 2] Is0 : [2 2 2] S0 : [1667382. 2189653. 1605486.] R0 : [0. 0. 0.] 
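Before running the simulation, it is useful to get a rough feel for how infectious these parameters are. The sketch below is not part of the original notebook; it assumes the force of infection used by the pyross SIR model has the form lambda_i = beta * sum_j C_ij (Ia_j + fsa*Is_j) / N_j, in which case the next-generation matrix reduces to a simple multiple of the contact matrix.

```python
# Hedged sketch: rough basic reproduction number implied by the parameters above,
# assuming the force of infection lambda_i = beta * sum_j C_ij (Ia_j + fsa*Is_j)/N_j.
# Under that assumption the next-generation matrix gives
#     R0 = beta * (alpha/gIa + (1 - alpha)*fsa/gIs) * rho(C),
# where rho(C) is the spectral radius of the aggregated contact matrix.
# (Named R0_est to avoid clashing with the recovered-compartment array R0 above.)
rho_C = np.max(np.abs(np.linalg.eigvals(C)))
R0_est = beta * (alpha / gIa + (1 - alpha) * fsa / gIs) * rho_C
print("Rough R0 estimate for these parameters: {0:.2f}".format(R0_est))
```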
```python Tf = 150 Nf = Tf+1 parameters = {'alpha':alpha, 'beta':beta, 'gIa':gIa, 'gIs':gIs,'fsa':fsa} true_parameters = {'alpha':alpha, 'beta':beta, 'gIa':gIa, 'gIs':gIs,'fsa':fsa} # use pyross stochastic to generate traj and save sto_model = pyross.stochastic.SIR(parameters, M, Ni) data = sto_model.simulate(S0, Ia0, Is0, contactMatrix, Tf, Nf) data_array = data['X'] np.save('SIR_sto_traj.npy', data_array) ``` ```python fig = plt.figure(num=None, figsize=(10, 8), dpi=80, facecolor='w', edgecolor='k') plt.rcParams.update({'font.size': 22}) t = data['t'] plt.fill_between(t, 0, np.sum(data_array[:, :M], axis=1), alpha=0.3) plt.plot(t, np.sum(data_array[:, :M], axis=1), '-', label='S', lw=2) plt.fill_between(t, 0, np.sum(data_array[:, M:2*M], axis=1), alpha=0.3) plt.plot(t, np.sum(data_array[:, M:2*M], axis=1), '-', label='Ia', lw=2) plt.fill_between(t, 0, np.sum(data_array[:, 2*M:3*M], axis=1), alpha=0.3) plt.plot(t, np.sum(data_array[:, 2*M:3*M], axis=1), '-', label='Is', lw=2) plt.legend(fontsize=26) plt.grid() plt.xlabel(r'time') plt.autoscale(enable=True, axis='x', tight=True) ``` ```python y_plot = np.sum(data_array[:, 2*M:3*M], axis=1) lw=2 fig,ax = plt.subplots(1,1,figsize=(8,5)) ax.plot(t,y_plot,lw=lw,) ax.axvline(20, color='crimson',lw=lw, label='Beginning of lockdown',ls='--') ax.axvline(55, color='limegreen',lw=lw, label='Schools re-opened',ls='--') ax.plot(np.array(case), 'o-', lw=4, color='#348ABD', ms=5, label='data', alpha=0.5) plt.plot(t, np.sum(data_array[:, M:2*M], axis=1), '-', label='Ia', lw=2) plt.plot(t, np.sum(data_array[:, 2*M:3*M], axis=1), '-', label='Is', lw=2) ax.set_xlim(0,Tf) fs=20 ax.legend(loc='best',fontsize=8,framealpha=1) ax.set_xlabel('time since first infection [days]',fontsize=fs) ax.set_ylabel('Number of known active cases',fontsize=fs) # plt.show(fig) plt.savefig('denmarkSIR.png') ``` The model is predicting too many infected individuals. #### 1)c) Inference We take the first real points and use them to infer the parameters of the model. We assume the same number of symptomatic and asymptomatic individuals. 
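Before overwriting the synthetic trajectory with the Danish case counts, it helps to recall how the trajectory array is laid out. The quick check below is not part of the original notebook: the saved array has 3*M columns ordered as the S block, then the Ia block, then the Is block, which is why the real data is written into the last block in the cells that follow.

```python
# Hedged illustration: column layout of the saved trajectory is [S | Ia | Is],
# each block containing one column per age group.
x_check = np.load('SIR_sto_traj.npy').astype('float')
print(x_check.shape)                       # (Nf, 3*M); here (151, 9)
S_block  = x_check[:, 0:M]
Ia_block = x_check[:, M:2*M]
Is_block = x_check[:, 2*M:3*M]
print(S_block.shape, Ia_block.shape, Is_block.shape)
```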
```python Tf_inference = 64 Nf_inference = Tf_inference+1 ``` ```python # load the oberved data x = np.load('SIR_sto_traj.npy').astype('float') x.shape # x = (x/N)[:Nf_inference] ``` (151, 9) ```python dayobs = nbday+11 ``` ```python x[1:dayobs, 2*M:3*M] = Ismod[1:dayobs ,] # x[1:dayobs, M:2*M] = Ismod[1:dayobs ,] # x[1:dayobs,0:M] = Ni - x[1:dayobs, M:2*M] - x[1:dayobs, 2*M:3*M] x = (x/N)[:Nf_inference] steps = 101 ``` ## Is non cumulative - 52 days from the data - overwrite dayobs = 65 x[:dayobs,M:2*M] = Ismod[:dayobs ,] x[:dayobs,2*M:3*M] = Ismod[:dayobs ,] x[:dayobs,0:M] = Ni - 2*Ismod[:dayobs ,] # Compare the deterministic trajectory and the stochastic trajectory with the same # initial conditions and parameters x0=x[0] det_model = pyross.deterministic.SIR(parameters, int(M), fi) xm = estimator.integrate(x[0], 0, Tf_inference, Nf_inference, det_model, contactMatrix) fig = plt.figure(num=None, figsize=(10, 8), dpi=80, facecolor='w', edgecolor='k') plt.rcParams.update({'font.size': 22}) plt.plot(np.sum(xm[:, M:], axis=1), label='deterministic I') plt.plot(np.sum(x[:Nf_inference, M:], axis=1), label='stochastic I') plt.legend() plt.savefig('denmarkSIRdeterministic-stochastic.png') plt.show() ```python # initialise the estimator estimator = pyross.inference.SIR(parameters, M, fi, int(N), steps) # compute -log_p for the original (correct) parameters start_time = time.time() parameters = {'alpha':alpha, 'beta':beta, 'gIa':gIa, 'gIs':gIs,'fsa':fsa} logp = estimator.obtain_minus_log_p(parameters, x, Tf_inference, Nf_inference, contactMatrix) end_time = time.time() print(logp) print(end_time - start_time) ``` 367954.8124224418 0.34119391441345215 ```python # Define the prior (Gamma prior around guess of parameter with defined std. deviation) alpha_g = 0.3 beta_g = 0.02 gIa_g = 0.1 gIs_g = 0.1 fsa_g= 0.1 # compute -log_p for the initial guess parameters = {'alpha':alpha_g, 'beta':beta_g, 'gIa':gIa_g, 'gIs':gIs_g, 'fsa':fsa_g} logp = estimator.obtain_minus_log_p(parameters, x, Tf_inference, Nf_inference, contactMatrix) print(logp) ``` 827796.99617319 ```python # the names of the parameters to be inferred eps = 1e-4 keys = ['alpha', 'beta', 'gIa', 'gIs'] # initial guess guess = np.array([alpha_g, beta_g, gIa_g, gIs_g]) # error bars on the initial guess alpha_std = 0.2 beta_std = 0.1 gIa_std = 0.1 gIs_std = 0.1 stds = np.array([alpha_std, beta_std , gIa_std, gIs_std]) # bounds on the parameters bounds = np.array([(eps, 0.8), (eps, 0.2), (eps, 0.6), (eps, 0.6)]) # Stopping criterion for minimisation (realtive change in function value) ftol = 1e-6 start_time = time.time() params = estimator.infer_parameters(keys, guess, stds, bounds, x, Tf_inference, Nf_inference, contactMatrix, global_max_iter=20, local_max_iter=200, global_ftol_factor=1e3, ftol=ftol, verbose=True) end_time = time.time() print(params) # best guess print(end_time - start_time) ``` Starting global minimisation... 
(8_w,16)-aCMA-ES (mu_w=4.8,w_1=32%) in dimension 4 (seed=2212580848, Wed Jun 3 17:14:50 2020) Exception ignored in: 'pyross.inference.SIR_type.log_cond_p' ValueError: Cov has negative determinant Iterat #Fevals function value axis ratio sigma min&max std t[m:s] 1 16 3.721331160275696e+05 1.0e+00 8.44e-01 7e-02 2e-01 0:03.1 2 32 3.218666597190848e+05 1.2e+00 8.37e-01 7e-02 2e-01 0:12.0 3 48 3.435732811043398e+05 1.5e+00 6.39e-01 5e-02 1e-01 0:21.7 4 64 3.809416067551735e+05 1.8e+00 6.74e-01 4e-02 1e-01 0:30.2 5 80 3.435990883578743e+05 2.3e+00 5.14e-01 3e-02 1e-01 0:35.7 6 96 2.844296902744054e+05 2.4e+00 6.84e-01 4e-02 2e-01 0:41.0 8 128 2.080415888665728e+05 5.4e+00 1.12e+00 4e-02 3e-01 0:49.8 10 160 1.084220272698482e+05 6.0e+00 1.29e+00 4e-02 3e-01 0:57.9 13 208 7.413868906968871e+04 9.4e+00 1.55e+00 3e-02 2e-01 1:07.1 17 272 6.757444267493341e+04 1.2e+01 1.34e+00 2e-02 1e-01 1:18.7 Optimal value (global minimisation): 56184.18359142253 Starting local minimisation... Optimal value (local minimisation): 40912.14976427028 [0.8 0.01677197 0.6 0.59992088] 128.52436304092407 ```python # compute log_p for best estimate start_time = time.time() new_parameters = estimator.fill_params_dict(keys, params) logp = estimator.obtain_minus_log_p(new_parameters, x, Tf_inference, Nf_inference, contactMatrix) end_time = time.time() print(logp) print(end_time - start_time) ``` 40903.60708449021 0.5751838684082031 ```python print("True parameters:") print(true_parameters) print("\nInferred parameters:") print(new_parameters) ``` True parameters: {'alpha': 0.4, 'beta': 0.01, 'gIa': 0.3333333333333333, 'gIs': 0.3333333333333333, 'fsa': 0.3} Inferred parameters: {'alpha': 0.8, 'beta': 0.01677196896424667, 'gIa': 0.6, 'gIs': 0.5999208831891778, 'fsa': 0.1} ```python x = np.load('SIR_sto_traj.npy').astype('float')/N Nf = x.shape[0] Tf = Nf-1 det_model = pyross.deterministic.SIR(new_parameters, int(M), fi) x_det = estimator.integrate(x[0], 0, Tf, Nf, det_model, contactMatrix) fig = plt.figure(num=None, figsize=(10, 8), dpi=80, facecolor='w', edgecolor='k') plt.rcParams.update({'font.size': 22}) plt.plot(np.sum(x_det[:, :M], axis=1), label='Inferred S') plt.plot(np.sum(x[:, :M], axis=1), label='True S') plt.plot(np.sum(x_det[:, M:2*M], axis=1), label='Inferred Ia') plt.plot(np.sum(x[:, M:2*M], axis=1), label='True Ia') plt.plot(np.sum(x_det[:, 2*M:3*M], axis=1), label='Inferred Is') plt.plot(np.sum(x[:, 2*M:3*M], axis=1), label='True Is') plt.xlim([0, Tf]) plt.axvspan(0, Tf_inference, label='Used for inference', alpha=0.3, color='dodgerblue') plt.legend() plt.show() ``` April 15, Denmark reopen schools
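The note above points to an obvious refinement: rather than the constant contact matrix used throughout this notebook, a piecewise-constant one that is reduced when the lockdown starts and partially restored after schools reopened on April 15. The sketch below is not part of the original analysis; the day thresholds reuse the day-20 and day-55 markers from the earlier plot, while the 0.3 and 0.5 scale factors are purely illustrative assumptions.

```python
# Hedged sketch: a piecewise-constant contact matrix for the 3-group model.
# Thresholds reuse the lockdown (day 20) and school-reopening (day 55) markers
# from the plot above; the 0.3 / 0.5 reduction factors are illustrative only.
def contactMatrix_timed(t):
    if t < 20:        # pre-lockdown
        return C
    elif t < 55:      # lockdown
        return 0.3 * C
    else:             # partial reopening after April 15
        return 0.5 * C

# It can be passed to the simulator in place of the constant-matrix function, e.g.
# data_timed = sto_model.simulate(S0, Ia0, Is0, contactMatrix_timed, Tf, Nf)
```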
f3c1c8b737f4cae165006e53aa59cda877ca8136
402,360
ipynb
Jupyter Notebook
examples/inference/SIRinference_Denmark.ipynb
ineskris/pyross
2ee6deb01b17cdbff19ef89ec6d1e607bceb481c
[ "MIT" ]
null
null
null
examples/inference/SIRinference_Denmark.ipynb
ineskris/pyross
2ee6deb01b17cdbff19ef89ec6d1e607bceb481c
[ "MIT" ]
null
null
null
examples/inference/SIRinference_Denmark.ipynb
ineskris/pyross
2ee6deb01b17cdbff19ef89ec6d1e607bceb481c
[ "MIT" ]
null
null
null
405.604839
211,417
0.94149
true
5,897
Qwen/Qwen-72B
1. YES 2. YES
0.749087
0.692642
0.518849
__label__eng_Latn
0.529158
0.04379
```python %%capture ## compile PyRoss for this notebook import os owd = os.getcwd() os.chdir('../../') %run setup.py install os.chdir(owd) %matplotlib inline ``` ```python import numpy as np import matplotlib.pyplot as plt import pyross ``` In this notebook we consider a control protocol consisting of a lockdown. For our numerical study we generate synthetic data using the deterministic SEkIkIkR model. While we use the UK age structure and contact matrix, we emphasise that, with the exception of $\beta$, **the model parameters considered in this notebook have NOT been obtained from real data, but rather are chosen ad-hoc**. **Outline of this notebook:** 1. We briefly summarize the SEkIkIkR model (still to do). 2. We load the age structure and contact matrix for UK. The contact matrix is generally given as \begin{equation} C = C_{H} + C_{W} + C_{S} + C_{O}, \end{equation} where the four terms denote the number of contacts at home, work, school, and all other remaining contacts. 3. We define the other model parameters of the SEkIkIkR model **(again, these are not fitted to any real data, but rather chosen ad-hoc)**. 4. We define a "lockdown-protocol": 1. After a fixed time, a lockdown is imposed. The contact matrix is reduced to \begin{equation} C = C_{H} + 0.5\cdot C_W + 0.4 \cdot C_O, \end{equation} i.e. the "home" contacts $C_H$, and 50% of the "work" as well as 40% of the "other" contacts are reainted. The latter two model that at the beginning of lockdown, a significant amount of people still goes to work, and might be lenient in following the lockdown advice. 2. 2 Days after the initial lockdown, the 40% "social interaction" part of the contact matrix is removed, the work matrix is reduced to 10%, so that we have \begin{equation} C = C_{H} + 0.1\cdot C_W. \end{equation} This models that people take the lockdown seriously and seize to interact socially. 5. We run a deterministic simulation of this lockdown protocol, and compare the results to the UK data. ## 2. Load UKage structure and contact matrix ```python my_data = np.genfromtxt('../data/age_structures/UK.csv', delimiter=',', skip_header=1) aM, aF = my_data[:, 1], my_data[:, 2] Ni0=aM+aF; M=16 ## number of age classes Ni = Ni0[:M] N=np.sum(Ni) print("Age groups are in brackets of 5 (i.e. 0-4, 5-9, 10-14, .. , 75-79).") print("Number of individuals in each bracket:") print(Ni.astype('int')) print("Total number of individuals: {0}".format(np.sum(Ni.astype('int')))) ``` Age groups are in brackets of 5 (i.e. 0-4, 5-9, 10-14, .. , 75-79). Number of individuals in each bracket: [3951046 4114237 3884626 3684534 4120269 4510345 4683636 4519933 4270785 4353894 4674918 4463447 3799485 3406990 3329496 2343961] Total number of individuals: 64111602 ```python # Get individual contact matrices CH, CW, CS, CO = pyross.contactMatrix.UK() # By default, home, work, school, and others contribute to the contact matrix C = CH + CW + CS + CO # Illustrate the individual contact matrices: fig,aCF = plt.subplots(2,2); aCF[0][0].pcolor(CH, cmap=plt.cm.get_cmap('GnBu', 10)); aCF[0][1].pcolor(CW, cmap=plt.cm.get_cmap('GnBu', 10)); aCF[1][0].pcolor(CS, cmap=plt.cm.get_cmap('GnBu', 10)); aCF[1][1].pcolor(CO, cmap=plt.cm.get_cmap('GnBu', 10)); ``` The above contact matrices illustrate the interactions at home (upper left), work (upper right), school (lower left), and the other remaining contacts (lower right). x- and y- axes denote the age groups, a darker color indicates more interaction. ## 3. 
Define model parameters **Note: These have not been fitted to real data.** ```python alpha=0.3 # fraction of symptomatics who self-isolate beta = 0.05984224 # probability of infection on contact gE = 1/2.72 # recovery rate of exposeds kI = 10; # # of stages of I class kE = 10; # # of stages of E class gIa = 1./7 # recovery rate of infectives gIs = 1./17.76 fsa = 0.5 # We start with one symptomatic infective in each of the age groups 6-13 S0 = np.zeros(M) I0 = np.zeros((kI,M)); E0 = np.zeros((kE,M)); for i in range(kI): I0[i, 6:13]= 1 for i in range(M) : S0[i] = Ni[i] - np.sum(I0[:,i]) - np.sum(E0[:,i]) I0 = np.reshape(I0, kI*M)/kI; E0 = np.reshape(E0, kE*M)/kE; ``` ## 4. Define events for protocol ```python # Dummy event for initial (standard) contact matrix events = [lambda t: 1] contactMatrices = [C] T_offset = 19 # see fitParamBeta notebook where the beta parameter of the SEkIkIkR model is fitted # After 20 days, start lockdown lockdown_threshold_0 = T_offset+19 def event0(t,rp): return t - lockdown_threshold_0 events.append(event0) contactMatrices.append( CH + 0.5*CW + .4*CO ) # for a short time, # people still have some social contacts # After 2 days, decrease contacts further lockdown_threshold_1 = T_offset+21 def event1(t,rp): return t- lockdown_threshold_1 events.append(event1) contactMatrices.append( CH + 0.2*CW) ''' # After 70 days, add 50% of school contacts to contact matrix lockdown_threshold_2 = T_offset+260 def event2(t,rp): return t - lockdown_threshold_2 events.append(event2) contactMatrices.append( CH + 0.2*CW + 0.5*CS ) # everybody in lockdown '''; ``` ## 5. Simulate protocol and analyse results #### Initialise pyross.control, run and plot a single test simulation ```python # duration of simulation Tf=500; Nf=Tf+1; # intantiate model parameters = {'beta':beta, 'gE':gE, 'gIa':gIa, 'gIs':gIs, 'kI':kI, 'kE' : kE, 'fsa':fsa, 'alpha':alpha} model = pyross.control.SEkIkIkR(parameters, M, Ni) # run model once data=model.simulate(S0, E0, 0*I0, I0, events,contactMatrices, Tf, Nf) ``` ```python # Load data my_data = np.genfromtxt('../data/covid-cases/uk.txt', delimiter='', skip_header=7) day, cases = my_data[:,0], my_data[:,1] ``` ```python # Plot result t = data['t']; # to compare with the UK dataset, we need the total number of known cases. 
# We assume that this is given by # (All known cases) = (Total population) - (# of suspectibles) + (# of asymptomatics), Ia = (model.Ia(data)) summedAgesIa = Ia.sum(axis=1) S = (model.S(data)) summedAgesS = S.sum(axis=1) trajectory = N - summedAgesS + summedAgesIa index = T_offset # determined in notebook fitParamBeta, where beta was fitted to the data lw=3 fig,ax = plt.subplots(1,1,figsize=(8,5)) ax.plot(t[index:]-t[index],trajectory[index:],lw=lw, label='SEkIkIkR model') ax.axvline(data['events_occured'][0][0]-t[index], color='crimson',lw=lw, label='Beginning of lockdown',ls='--') ax.plot(cases,marker='o',ls='', label='UK data') #ax.set_xlim(0,Tf) ax.set_xlim(0,40) ax.set_ylim(0,5e4) fs=20 ax.legend(loc='best',fontsize=15,framealpha=1) ax.set_xlabel('time since first infection [days]',fontsize=fs) ax.set_ylabel('Total number of cases',fontsize=fs) plt.show() plt.close() fig,ax = plt.subplots(1,1,figsize=(8,5)) ax.plot(t[index:]-t[index],trajectory[index:],lw=lw, label='SEkIkIkR model') ax.axvline(data['events_occured'][0][0]-t[index], color='crimson',lw=lw, label='Beginning of lockdown',ls='--') ax.plot(cases,#marker='o', lw=lw,ls='--', label='UK data') ax.set_xlim(0,100) ax.set_ylim(0,5e5) fs=20 ax.legend(loc='best',fontsize=fs,framealpha=1) ax.set_xlabel('time since first infection [days]',fontsize=fs) ax.set_ylabel('Total number of cases',fontsize=fs) fig.savefig('ex7_SEkIkIkR_lockdown.pdf',bbox_inches='tight') plt.show() plt.close() print("Number of symptomatic infectives (to get a feeling for 'active cases'):") Is = (model.Is(data)) summedAgesIs = Is.sum(axis=1) fig,ax = plt.subplots(1,1,figsize=(8,5)) ax.plot(t[index:]-t[index],summedAgesIs[index:],lw=lw, label='SEkIkIkR model') ax.axvline(data['events_occured'][0][0]-t[index], color='crimson',lw=lw, label='Beginning of lockdown',ls='--') fs=20 ax.legend(loc='best',fontsize=fs,framealpha=1) ax.set_xlabel('time since first infection [days]',fontsize=fs) ax.set_ylabel('Total number of cases',fontsize=fs) plt.show() plt.close() ``` (The reader is encouraged to play around with the contact matrices in the lockdown protocol. Even if after 20 days, only the home contact matrix remains, the curvature remains positive and the total number of active cases "explodes".) **(We emphasise again that the model used here is, except for the parameter $\beta$, not fitted to the UK data!)** ```python ``` ```python ```
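For readers who want to experiment with the protocol, one simple modification is to make the second lockdown stage stricter by swapping in a smaller contact matrix before re-running the simulation. A minimal sketch, reusing the objects defined above (the 5% figure is arbitrary, purely for illustration):

```python
# replace the final stage of the protocol with a stricter contact matrix:
# home contacts plus only 5% of work contacts
contactMatrices[-1] = CH + 0.05 * CW

# re-run the deterministic simulation with the modified protocol
data = model.simulate(S0, E0, 0*I0, I0, events, contactMatrices, Tf, Nf)
```

The plotting cells above can then be re-executed to compare the modified trajectory with the UK data.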
87460f04ac6fba7c2c187fb3cf903c8416f25adb
127,462
ipynb
Jupyter Notebook
examples/control/ex08 - SEkIkIkR - UK - lockdown.ipynb
ineskris/pyross
2ee6deb01b17cdbff19ef89ec6d1e607bceb481c
[ "MIT" ]
null
null
null
examples/control/ex08 - SEkIkIkR - UK - lockdown.ipynb
ineskris/pyross
2ee6deb01b17cdbff19ef89ec6d1e607bceb481c
[ "MIT" ]
null
null
null
examples/control/ex08 - SEkIkIkR - UK - lockdown.ipynb
ineskris/pyross
2ee6deb01b17cdbff19ef89ec6d1e607bceb481c
[ "MIT" ]
null
null
null
297.808411
37,116
0.92142
true
2,590
Qwen/Qwen-72B
1. YES 2. YES
0.685949
0.7773
0.533188
__label__eng_Latn
0.925038
0.077105
We've been working on a [conference paper](https://github.com/gilbertgede/idetc-2013-paper) to demonstrate the ability to do multibody dynamics with Python. We've been calling this workflow [PyDy](http://pydy.org), short for Python Dynamics. Several pieces of the puzzle have come together lately to really demonstrate the power of the scientific python software packages to handle complex dynamic and controls problems (i.e. IPython notebooks, matplotlib animations, python-control, and our software package mechanics which is a part of SymPy). After writing the draft of our paper, which uses a general n-link pendulum as its main example, I came across this [blog post by Wolfram](http://blog.wolfram.com/2011/03/01/stabilized-n-link-pendulum/) demonstrating their ability to symbolically derive the equations of motion for the n-link pendulum and stabilize it with an LQR controller. It inspired me to replicate the example as I realized that it was relatively easy to do with all free and open source software! In this example problem we will derive the equations of motion of an n-link pendulum on a laterally sliding cart and then develop a controller to stabilize it. Balancing a single inverted pendulum is a classic problem that is often a student's first experience with non-linear dynamics and control. The problem here is extended to a general n-link pendulum, and as we will see, the equations of motion quickly get messy with more than 2 links. The diagram below shows the general description of the problem. ``` from IPython.display import SVG SVG(filename='n-pendulum-with-cart.svg') ``` I used these software versions for the following computations: - IPython: 0.13.1.rc2 - matplotlib: 1.1.1 - NumPy: 1.6.2 - SciPy: 0.10.1 - SymPy: 0.7.2 - python-control: 0.6d Equations of Motion =================== We'll start by generating the equations of motion for the system with SymPy **[mechanics](http://docs.sympy.org/dev/modules/physics/mechanics/index.html)**. The functionality that mechanics provides is much more in depth than Mathematica's functionality. In the Mathematica example, Lagrangian mechanics were implemented manually with Mathematica's symbolic functionality. **mechanics** provides an assortment of functions and classes to derive the equations of motion for arbitrarily complex (i.e. configuration constraints, nonholonomic motion constraints, etc) multibody systems in a very natural way. First we import the necessary functionality from SymPy. ``` %pylab inline from sympy import symbols from sympy.physics.mechanics import * ``` Welcome to pylab, a matplotlib-based Python environment [backend: module://IPython.zmq.pylab.backend_inline]. For more information, type 'help(pylab)'. Now specify the number of links, $n$. I'll start with 5 since the Wolfram folks only showed four. ``` n = 5 ``` **mechanics** will need the generalized coordinates, generalized speeds, and the input force, which are all time dependent variables, and the bob masses, link lengths, and acceleration due to gravity, which are all constants. Time, $t$, is also made available because we will need to differentiate with respect to time.
``` q = dynamicsymbols('q:' + str(n + 1)) # Generalized coordinates u = dynamicsymbols('u:' + str(n + 1)) # Generalized speeds f = dynamicsymbols('f') # Force applied to the cart m = symbols('m:' + str(n + 1)) # Mass of each bob l = symbols('l:' + str(n)) # Length of each link g, t = symbols('g t') # Gravity and time ``` Now we can create an inertial reference frame $I$ and define the point, $O$, as the origin. ``` I = ReferenceFrame('I') # Inertial reference frame O = Point('O') # Origin point O.set_vel(I, 0) # Origin's velocity is zero ``` Secondly, we define the first point of the pendulum as a particle which has mass. This point can only move laterally and represents the motion of the "cart". ``` P0 = Point('P0') # Hinge point of top link P0.set_pos(O, q[0] * I.x) # Set the position of P0 P0.set_vel(I, u[0] * I.x) # Set the velocity of P0 Pa0 = Particle('Pa0', P0, m[0]) # Define a particle at P0 ``` Now we can define the $n$ reference frames, particles, gravitational forces, and kinematical differential equations for each of the pendulum links. This is easily done with a loop. ``` frames = [I] # List to hold the n + 1 frames points = [P0] # List to hold the n + 1 points particles = [Pa0] # List to hold the n + 1 particles forces = [(P0, f * I.x - m[0] * g * I.y)] # List to hold the n + 1 applied forces, including the input force, f kindiffs = [q[0].diff(t) - u[0]] # List to hold kinematic ODE's for i in range(n): Bi = I.orientnew('B' + str(i), 'Axis', [q[i + 1], I.z]) # Create a new frame Bi.set_ang_vel(I, u[i + 1] * I.z) # Set angular velocity frames.append(Bi) # Add it to the frames list Pi = points[-1].locatenew('P' + str(i + 1), l[i] * Bi.x) # Create a new point Pi.v2pt_theory(points[-1], I, Bi) # Set the velocity points.append(Pi) # Add it to the points list Pai = Particle('Pa' + str(i + 1), Pi, m[i + 1]) # Create a new particle particles.append(Pai) # Add it to the particles list forces.append((Pi, -m[i + 1] * g * I.y)) # Set the force applied at the point kindiffs.append(q[i + 1].diff(t) - u[i + 1]) # Define the kinematic ODE: dq_i / dt - u_i = 0 ``` With all of the necessary point velocities and particle masses defined, the `KanesMethod` class can be used to derive the equations of motion of the system automatically. ``` kane = KanesMethod(I, q_ind=q, u_ind=u, kd_eqs=kindiffs) # Initialize the object fr, frstar = kane.kanes_equations(forces, particles) # Generate EoM's fr + frstar = 0 ``` The equations of motion are quite long as can be seen below. This is the general nature of most non-simple multibody problems. That is why SymPy is so useful; no more mistakes in algebra, differentiation, or copying of hand-written equations.
``` fr ``` [ -f(t)] [g*l0*m1*cos(q1(t)) + g*l0*m2*cos(q1(t)) + g*l0*m3*cos(q1(t)) + g*l0*m4*cos(q1(t)) + g*l0*m5*cos(q1(t))] [ g*l1*m2*cos(q2(t)) + g*l1*m3*cos(q2(t)) + g*l1*m4*cos(q2(t)) + g*l1*m5*cos(q2(t))] [ g*l2*m3*cos(q3(t)) + g*l2*m4*cos(q3(t)) + g*l2*m5*cos(q3(t))] [ g*l3*m4*cos(q4(t)) + g*l3*m5*cos(q4(t))] [ g*l4*m5*cos(q5(t))] ``` frstar ``` [ -l0*m1*u1(t)**2*cos(q1(t)) - l0*m2*u1(t)**2*cos(q1(t)) - l0*m3*u1(t)**2*cos(q1(t)) - l0*m4*u1(t)**2*cos(q1(t)) - l0*m5*u1(t)**2*cos(q1(t)) - l1*m2*u2(t)**2*cos(q2(t)) - l1*m3*u2(t)**2*cos(q2(t)) - l1*m4*u2(t)**2*cos(q2(t)) - l1*m5*u2(t)**2*cos(q2(t)) - l2*m3*u3(t)**2*cos(q3(t)) - l2*m4*u3(t)**2*cos(q3(t)) - l2*m5*u3(t)**2*cos(q3(t)) - l3*m4*u4(t)**2*cos(q4(t)) - l3*m5*u4(t)**2*cos(q4(t)) - l4*m5*u5(t)**2*cos(q5(t)) - l4*m5*sin(q5(t))*Derivative(u5(t), t) + (-l3*m4*sin(q4(t)) - l3*m5*sin(q4(t)))*Derivative(u4(t), t) + (-l2*m3*sin(q3(t)) - l2*m4*sin(q3(t)) - l2*m5*sin(q3(t)))*Derivative(u3(t), t) + (-l1*m2*sin(q2(t)) - l1*m3*sin(q2(t)) - l1*m4*sin(q2(t)) - l1*m5*sin(q2(t)))*Derivative(u2(t), t) + (-l0*m1*sin(q1(t)) - l0*m2*sin(q1(t)) - l0*m3*sin(q1(t)) - l0*m4*sin(q1(t)) - l0*m5*sin(q1(t)))*Derivative(u1(t), t) + (m0 + m1 + m2 + m3 + m4 + m5)*Derivative(u0(t), t)] [-l0*l1*m2*(-sin(q1(t))*cos(q2(t)) + sin(q2(t))*cos(q1(t)))*u2(t)**2 - l0*l1*m3*(-sin(q1(t))*cos(q2(t)) + sin(q2(t))*cos(q1(t)))*u2(t)**2 - l0*l1*m4*(-sin(q1(t))*cos(q2(t)) + sin(q2(t))*cos(q1(t)))*u2(t)**2 - l0*l1*m5*(-sin(q1(t))*cos(q2(t)) + sin(q2(t))*cos(q1(t)))*u2(t)**2 - l0*l2*m3*(-sin(q1(t))*cos(q3(t)) + sin(q3(t))*cos(q1(t)))*u3(t)**2 - l0*l2*m4*(-sin(q1(t))*cos(q3(t)) + sin(q3(t))*cos(q1(t)))*u3(t)**2 - l0*l2*m5*(-sin(q1(t))*cos(q3(t)) + sin(q3(t))*cos(q1(t)))*u3(t)**2 - l0*l3*m4*(-sin(q1(t))*cos(q4(t)) + sin(q4(t))*cos(q1(t)))*u4(t)**2 - l0*l3*m5*(-sin(q1(t))*cos(q4(t)) + sin(q4(t))*cos(q1(t)))*u4(t)**2 + l0*l4*m5*(sin(q1(t))*sin(q5(t)) + cos(q1(t))*cos(q5(t)))*Derivative(u5(t), t) - l0*l4*m5*(-sin(q1(t))*cos(q5(t)) + sin(q5(t))*cos(q1(t)))*u5(t)**2 + (l0*l3*m4*(sin(q1(t))*sin(q4(t)) + cos(q1(t))*cos(q4(t))) + l0*l3*m5*(sin(q1(t))*sin(q4(t)) + cos(q1(t))*cos(q4(t))))*Derivative(u4(t), t) + (l0*l2*m3*(sin(q1(t))*sin(q3(t)) + cos(q1(t))*cos(q3(t))) + l0*l2*m4*(sin(q1(t))*sin(q3(t)) + cos(q1(t))*cos(q3(t))) + l0*l2*m5*(sin(q1(t))*sin(q3(t)) + cos(q1(t))*cos(q3(t))))*Derivative(u3(t), t) + (l0*l1*m2*(sin(q1(t))*sin(q2(t)) + cos(q1(t))*cos(q2(t))) + l0*l1*m3*(sin(q1(t))*sin(q2(t)) + cos(q1(t))*cos(q2(t))) + l0*l1*m4*(sin(q1(t))*sin(q2(t)) + cos(q1(t))*cos(q2(t))) + l0*l1*m5*(sin(q1(t))*sin(q2(t)) + cos(q1(t))*cos(q2(t))))*Derivative(u2(t), t) + (l0**2*m1 + l0**2*m2 + l0**2*m3 + l0**2*m4 + l0**2*m5)*Derivative(u1(t), t) + (-l0*m1*sin(q1(t)) - l0*m2*sin(q1(t)) - l0*m3*sin(q1(t)) - l0*m4*sin(q1(t)) - l0*m5*sin(q1(t)))*Derivative(u0(t), t)] [ -l0*l1*m2*(sin(q1(t))*cos(q2(t)) - sin(q2(t))*cos(q1(t)))*u1(t)**2 - l0*l1*m3*(sin(q1(t))*cos(q2(t)) - sin(q2(t))*cos(q1(t)))*u1(t)**2 - l0*l1*m4*(sin(q1(t))*cos(q2(t)) - sin(q2(t))*cos(q1(t)))*u1(t)**2 - l0*l1*m5*(sin(q1(t))*cos(q2(t)) - sin(q2(t))*cos(q1(t)))*u1(t)**2 - l1*l2*m3*(-sin(q2(t))*cos(q3(t)) + sin(q3(t))*cos(q2(t)))*u3(t)**2 - l1*l2*m4*(-sin(q2(t))*cos(q3(t)) + sin(q3(t))*cos(q2(t)))*u3(t)**2 - l1*l2*m5*(-sin(q2(t))*cos(q3(t)) + sin(q3(t))*cos(q2(t)))*u3(t)**2 - l1*l3*m4*(-sin(q2(t))*cos(q4(t)) + sin(q4(t))*cos(q2(t)))*u4(t)**2 - l1*l3*m5*(-sin(q2(t))*cos(q4(t)) + sin(q4(t))*cos(q2(t)))*u4(t)**2 + l1*l4*m5*(sin(q2(t))*sin(q5(t)) + cos(q2(t))*cos(q5(t)))*Derivative(u5(t), t) - l1*l4*m5*(-sin(q2(t))*cos(q5(t)) + 
sin(q5(t))*cos(q2(t)))*u5(t)**2 + (l1*l3*m4*(sin(q2(t))*sin(q4(t)) + cos(q2(t))*cos(q4(t))) + l1*l3*m5*(sin(q2(t))*sin(q4(t)) + cos(q2(t))*cos(q4(t))))*Derivative(u4(t), t) + (l1*l2*m3*(sin(q2(t))*sin(q3(t)) + cos(q2(t))*cos(q3(t))) + l1*l2*m4*(sin(q2(t))*sin(q3(t)) + cos(q2(t))*cos(q3(t))) + l1*l2*m5*(sin(q2(t))*sin(q3(t)) + cos(q2(t))*cos(q3(t))))*Derivative(u3(t), t) + (l1**2*m2 + l1**2*m3 + l1**2*m4 + l1**2*m5)*Derivative(u2(t), t) + (-l1*m2*sin(q2(t)) - l1*m3*sin(q2(t)) - l1*m4*sin(q2(t)) - l1*m5*sin(q2(t)))*Derivative(u0(t), t) + (l0*l1*m2*(sin(q1(t))*sin(q2(t)) + cos(q1(t))*cos(q2(t))) + l0*l1*m3*(sin(q1(t))*sin(q2(t)) + cos(q1(t))*cos(q2(t))) + l0*l1*m4*(sin(q1(t))*sin(q2(t)) + cos(q1(t))*cos(q2(t))) + l0*l1*m5*(sin(q1(t))*sin(q2(t)) + cos(q1(t))*cos(q2(t))))*Derivative(u1(t), t)] [ -l0*l2*m3*(sin(q1(t))*cos(q3(t)) - sin(q3(t))*cos(q1(t)))*u1(t)**2 - l0*l2*m4*(sin(q1(t))*cos(q3(t)) - sin(q3(t))*cos(q1(t)))*u1(t)**2 - l0*l2*m5*(sin(q1(t))*cos(q3(t)) - sin(q3(t))*cos(q1(t)))*u1(t)**2 - l1*l2*m3*(sin(q2(t))*cos(q3(t)) - sin(q3(t))*cos(q2(t)))*u2(t)**2 - l1*l2*m4*(sin(q2(t))*cos(q3(t)) - sin(q3(t))*cos(q2(t)))*u2(t)**2 - l1*l2*m5*(sin(q2(t))*cos(q3(t)) - sin(q3(t))*cos(q2(t)))*u2(t)**2 - l2*l3*m4*(-sin(q3(t))*cos(q4(t)) + sin(q4(t))*cos(q3(t)))*u4(t)**2 - l2*l3*m5*(-sin(q3(t))*cos(q4(t)) + sin(q4(t))*cos(q3(t)))*u4(t)**2 + l2*l4*m5*(sin(q3(t))*sin(q5(t)) + cos(q3(t))*cos(q5(t)))*Derivative(u5(t), t) - l2*l4*m5*(-sin(q3(t))*cos(q5(t)) + sin(q5(t))*cos(q3(t)))*u5(t)**2 + (l2*l3*m4*(sin(q3(t))*sin(q4(t)) + cos(q3(t))*cos(q4(t))) + l2*l3*m5*(sin(q3(t))*sin(q4(t)) + cos(q3(t))*cos(q4(t))))*Derivative(u4(t), t) + (l2**2*m3 + l2**2*m4 + l2**2*m5)*Derivative(u3(t), t) + (-l2*m3*sin(q3(t)) - l2*m4*sin(q3(t)) - l2*m5*sin(q3(t)))*Derivative(u0(t), t) + (l0*l2*m3*(sin(q1(t))*sin(q3(t)) + cos(q1(t))*cos(q3(t))) + l0*l2*m4*(sin(q1(t))*sin(q3(t)) + cos(q1(t))*cos(q3(t))) + l0*l2*m5*(sin(q1(t))*sin(q3(t)) + cos(q1(t))*cos(q3(t))))*Derivative(u1(t), t) + (l1*l2*m3*(sin(q2(t))*sin(q3(t)) + cos(q2(t))*cos(q3(t))) + l1*l2*m4*(sin(q2(t))*sin(q3(t)) + cos(q2(t))*cos(q3(t))) + l1*l2*m5*(sin(q2(t))*sin(q3(t)) + cos(q2(t))*cos(q3(t))))*Derivative(u2(t), t)] [ -l0*l3*m4*(sin(q1(t))*cos(q4(t)) - sin(q4(t))*cos(q1(t)))*u1(t)**2 - l0*l3*m5*(sin(q1(t))*cos(q4(t)) - sin(q4(t))*cos(q1(t)))*u1(t)**2 - l1*l3*m4*(sin(q2(t))*cos(q4(t)) - sin(q4(t))*cos(q2(t)))*u2(t)**2 - l1*l3*m5*(sin(q2(t))*cos(q4(t)) - sin(q4(t))*cos(q2(t)))*u2(t)**2 - l2*l3*m4*(sin(q3(t))*cos(q4(t)) - sin(q4(t))*cos(q3(t)))*u3(t)**2 - l2*l3*m5*(sin(q3(t))*cos(q4(t)) - sin(q4(t))*cos(q3(t)))*u3(t)**2 + l3*l4*m5*(sin(q4(t))*sin(q5(t)) + cos(q4(t))*cos(q5(t)))*Derivative(u5(t), t) - l3*l4*m5*(-sin(q4(t))*cos(q5(t)) + sin(q5(t))*cos(q4(t)))*u5(t)**2 + (l3**2*m4 + l3**2*m5)*Derivative(u4(t), t) + (-l3*m4*sin(q4(t)) - l3*m5*sin(q4(t)))*Derivative(u0(t), t) + (l0*l3*m4*(sin(q1(t))*sin(q4(t)) + cos(q1(t))*cos(q4(t))) + l0*l3*m5*(sin(q1(t))*sin(q4(t)) + cos(q1(t))*cos(q4(t))))*Derivative(u1(t), t) + (l1*l3*m4*(sin(q2(t))*sin(q4(t)) + cos(q2(t))*cos(q4(t))) + l1*l3*m5*(sin(q2(t))*sin(q4(t)) + cos(q2(t))*cos(q4(t))))*Derivative(u2(t), t) + (l2*l3*m4*(sin(q3(t))*sin(q4(t)) + cos(q3(t))*cos(q4(t))) + l2*l3*m5*(sin(q3(t))*sin(q4(t)) + cos(q3(t))*cos(q4(t))))*Derivative(u3(t), t)] [ l0*l4*m5*(sin(q1(t))*sin(q5(t)) + cos(q1(t))*cos(q5(t)))*Derivative(u1(t), t) - l0*l4*m5*(sin(q1(t))*cos(q5(t)) - sin(q5(t))*cos(q1(t)))*u1(t)**2 + l1*l4*m5*(sin(q2(t))*sin(q5(t)) + cos(q2(t))*cos(q5(t)))*Derivative(u2(t), t) - l1*l4*m5*(sin(q2(t))*cos(q5(t)) - 
sin(q5(t))*cos(q2(t)))*u2(t)**2 + l2*l4*m5*(sin(q3(t))*sin(q5(t)) + cos(q3(t))*cos(q5(t)))*Derivative(u3(t), t) - l2*l4*m5*(sin(q3(t))*cos(q5(t)) - sin(q5(t))*cos(q3(t)))*u3(t)**2 + l3*l4*m5*(sin(q4(t))*sin(q5(t)) + cos(q4(t))*cos(q5(t)))*Derivative(u4(t), t) - l3*l4*m5*(sin(q4(t))*cos(q5(t)) - sin(q5(t))*cos(q4(t)))*u4(t)**2 + l4**2*m5*Derivative(u5(t), t) - l4*m5*sin(q5(t))*Derivative(u0(t), t)] Simulation ========== Now that the symbolic equations of motion are available we can simulate the pendulum's motion. We will need some more SymPy functionality and several NumPy functions, and most importantly the integration function from SciPy, `odeint`. ``` from sympy import Dummy, lambdify from numpy import array, hstack, zeros, linspace, pi from numpy.linalg import solve from scipy.integrate import odeint ``` First, define some numeric values for all of the constant parameters in the problem. ``` arm_length = 1. / n # The maximum length of the pendulum is 1 meter bob_mass = 0.01 / n # The maximum mass of the bobs is 10 grams parameters = [g, m[0]] # Parameter definitions starting with gravity and the first bob parameter_vals = [9.81, 0.01 / n] # Numerical values for the first two for i in range(n): # Then each mass and length parameters += [l[i], m[i + 1]] parameter_vals += [arm_length, bob_mass] ``` Mathematica has a really nice `NDSolve` function for quickly integrating their symbolic differential equations. We have plans to develop something similar for SymPy but haven't found the development time yet to do it properly. So the next bit isn't as clean as we'd like but you can make use of SymPy's lambdify function to create functions that will evaluate the mass matrix, $M$, and forcing vector, $\bar{f}$ from $M\dot{u} = \bar{f}(q, \dot{q}, u, t)$ as a NumPy function. We make use of dummy symbols to replace the time varying functions in the SymPy equations a simple dummy symbol. ``` dynamic = q + u # Make a list of the states dynamic.append(f) # Add the input force dummy_symbols = [Dummy() for i in dynamic] # Create a dummy symbol for each variable dummy_dict = dict(zip(dynamic, dummy_symbols)) kindiff_dict = kane.kindiffdict() # Get the solved kinematical differential equations M = kane.mass_matrix_full.subs(kindiff_dict).subs(dummy_dict) # Substitute into the mass matrix F = kane.forcing_full.subs(kindiff_dict).subs(dummy_dict) # Substitute into the forcing vector M_func = lambdify(dummy_symbols + parameters, M) # Create a callable function to evaluate the mass matrix F_func = lambdify(dummy_symbols + parameters, F) # Create a callable function to evaluate the forcing vector ``` To integrate the ODE's we need to define a function that returns the derivatives of the states given the current state and time. ``` def right_hand_side(x, t, args): """Returns the derivatives of the states. Parameters ---------- x : ndarray, shape(2 * (n + 1)) The current state vector. t : float The current time. args : ndarray The constants. Returns ------- dx : ndarray, shape(2 * (n + 1)) The derivative of the state. """ u = 0.0 # The input force is always zero arguments = hstack((x, u, args)) # States, input, and parameters dx = array(solve(M_func(*arguments), # Solving for the derivatives F_func(*arguments))).T[0] return dx ``` Now that we have the right hand side function, the initial conditions are set such that the pendulum is in the vertical equilibrium and a slight initial rate is set for each speed to ensure the pendulum falls. 
The equations can then be integrated with SciPy's `odeint` function given a time series. ``` x0 = hstack(( 0, pi / 2 * ones(len(q) - 1), 1e-3 * ones(len(u)) )) # Initial conditions, q and u t = linspace(0, 10, 1000) # Time vector y = odeint(right_hand_side, x0, t, args=(parameter_vals,)) # Actual integration ``` Plotting ======== The results of the simulation can be plotted with matplotlib. ``` lines = plot(t, y[:, :y.shape[1] / 2]) lab = xlabel('Time [sec]') leg = legend(dynamic[:y.shape[1] / 2]) ``` ``` lines = plot(t, y[:, y.shape[1] / 2:]) lab = xlabel('Time [sec]') leg = legend(dynamic[y.shape[1] / 2:]) ``` Animation ========= matplotlib now includes very nice animation functions for animating matplotlib plots. First we import the necessary functions for creating the animation. ``` from numpy import zeros, cos, sin, arange, around from matplotlib import pyplot as plt from matplotlib import animation from matplotlib.patches import Rectangle ``` The following function was modeled from Jake Vanderplas's [post on matplotlib animations](http://jakevdp.github.com/blog/2012/08/18/matplotlib-animation-tutorial/). ``` def animate_pendulum(t, states, length, filename=None): """Animates the n-pendulum and optionally saves it to file. Parameters ---------- t : ndarray, shape(m) Time array. states: ndarray, shape(m,p) State time history. length: float The length of the pendulum links. filename: string or None, optional If true a movie file will be saved of the animation. This may take some time. Returns ------- fig : matplotlib.Figure The figure. anim : matplotlib.FuncAnimation The animation. """ # the number of pendulum bobs numpoints = states.shape[1] / 2 # first set up the figure, the axis, and the plot elements we want to animate fig = plt.figure() # some dimesions cart_width = 0.4 cart_height = 0.2 # set the limits based on the motion xmin = around(states[:, 0].min() - cart_width / 2.0, 1) xmax = around(states[:, 0].max() + cart_width / 2.0, 1) # create the axes ax = plt.axes(xlim=(xmin, xmax), ylim=(-1.1, 1.1), aspect='equal') # display the current time time_text = ax.text(0.04, 0.9, '', transform=ax.transAxes) # create a rectangular cart rect = Rectangle([states[0, 0] - cart_width / 2.0, -cart_height / 2], cart_width, cart_height, fill=True, color='red', ec='black') ax.add_patch(rect) # blank line for the pendulum line, = ax.plot([], [], lw=2, marker='o', markersize=6) # initialization function: plot the background of each frame def init(): time_text.set_text('') rect.set_xy((0.0, 0.0)) line.set_data([], []) return time_text, rect, line, # animation function: update the objects def animate(i): time_text.set_text('time = {:2.2f}'.format(t[i])) rect.set_xy((states[i, 0] - cart_width / 2.0, -cart_height / 2)) x = hstack((states[i, 0], zeros((numpoints - 1)))) y = zeros((numpoints)) for j in arange(1, numpoints): x[j] = x[j - 1] + length * cos(states[i, j]) y[j] = y[j - 1] + length * sin(states[i, j]) line.set_data(x, y) return time_text, rect, line, # call the animator function anim = animation.FuncAnimation(fig, animate, frames=len(t), init_func=init, interval=t[-1] / len(t) * 1000, blit=True, repeat=False) # save the animation if a filename is given if filename is not None: anim.save(filename, fps=30, codec='libx264') ``` Now we can create the animation of the pendulum. This animation will show the open loop dynamics. 
``` animate_pendulum(t, y, arm_length, filename="open-loop.ogv") animate_pendulum(t, y, arm_length, filename="open-loop.mp4") ``` ``` from IPython.display import HTML h = \ """ """ HTML(h) ``` Controller Design ================= The n-link pendulum can be balanced such that all of the links are inverted above the cart by applying the correct lateral force to the cart. We can design a full state feedback controller based off of a linear model of the pendulum about its upright equilibrium point. We'll start by specifying the equilibrium point and parameters in dictionaries. ``` equilibrium_point = hstack(( 0, pi / 2 * ones(len(q) - 1), zeros(len(u)) )) equilibrium_dict = dict(zip(q + u, equilibrium_point)) parameter_dict = dict(zip(parameters, parameter_vals)) ``` The `KanesMethod` class has method that linearizes the forcing vector about generic state and input perturbation vectors. The equilibrium point and numerical constants can then be substituted in to give the linear system in this form: $M\dot{x}=F_Ax+F_Bu$. The state and input matrices, $A$ and $B$, can then be computed by left side multiplication by the inverse of the mass matrix: $A=M^{-1}F_A$ and $B=M^{-1}F_B$. ``` # symbolically linearize about arbitrary equilibrium linear_state_matrix, linear_input_matrix, inputs = kane.linearize() # sub in the equilibrium point and the parameters f_A_lin = linear_state_matrix.subs(parameter_dict).subs(equilibrium_dict) f_B_lin = linear_input_matrix.subs(parameter_dict).subs(equilibrium_dict) m_mat = kane.mass_matrix_full.subs(parameter_dict).subs(equilibrium_dict) # compute A and B from numpy import matrix A = matrix(m_mat.inv() * f_A_lin) B = matrix(m_mat.inv() * f_B_lin) ``` Now that we have a linear system, the python-control package can be used to design an optimal controller for the system. ``` import control from numpy import dot, rank from numpy.linalg import matrix_rank ``` First we can check to see if the system is, in fact, controllable. ``` assert matrix_rank(control.ctrb(A, B)) == A.shape[0] ``` The control matrix is full rank, so now we can compute the optimal gains with a linear quadratic regulator. I chose identity matrices for the weightings for simplicity. ``` K, X, E = control.lqr(A, B, ones(A.shape), 1); ``` The gains can now be used to define the required input during simulation to stabilize the system. The input $u$ is simply the gain vector multiplied by the error in the state vector from the equilibrium point, $u(t)=K(x_{eq} - x(t))$. ``` def right_hand_side(x, t, args): """Returns the derivatives of the states. Parameters ---------- x : ndarray, shape(2 * (n + 1)) The current state vector. t : float The current time. args : ndarray The constants. Returns ------- dx : ndarray, shape(2 * (n + 1)) The derivative of the state. """ u = dot(K, equilibrium_point - x) # The controller arguments = hstack((x, u, args)) # States, input, and parameters dx = array(solve(M_func(*arguments), # Solving for the derivatives F_func(*arguments))).T[0] return dx ``` Now we can simulate and animate the system to see if the controller works. ``` x0 = hstack(( 0, pi / 2 * ones(len(q) - 1), 1 * ones(len(u)) )) # Initial conditions, q and u t = linspace(0, 10, 1000) # Time vector y = odeint(right_hand_side, x0, t, args=(parameter_vals,)) # Actual integration ``` The plots show that we seem to have a stable system. 
``` lines = plot(t, y[:, :y.shape[1] / 2]) lab = xlabel('Time [sec]') leg = legend(dynamic[:y.shape[1] / 2]) ``` ``` lines = plot(t, y[:, y.shape[1] / 2:]) lab = xlabel('Time [sec]') leg = legend(dynamic[y.shape[1] / 2:]) ``` ``` animate_pendulum(t, y, arm_length, filename="closed-loop.ogv") animate_pendulum(t, y, arm_length, filename="closed-loop.mp4") ``` ``` from IPython.display import HTML h = \ """ """ HTML(h) ``` The video clearly shows that our controller can balance all $n$ of the pendulum links. The weightings in the LQR design can be tweaked to give different performance if needed. This example shows that the free and open source scientific Python tools for dynamics are easily comparable in ability and quality to a commercial package such as Mathematica. Besides the current installation hurdles for Python, I'd like to claim that it may be better than commercial packages, due to our much more robust SymPy **mechanics** package and the fact that all of the code is liberally licensed for reuse and hacking. The IPython notebook for this example can be downloaded from https://github.com/gilbertgede/idetc-2013-paper/blob/master/n-pendulum-control.ipynb. You can try out different $n$ values. I've gotten the equations of motion to compute, and an open loop simulation to run, for 10 links. My computer ran out of memory when I tried to compute for $n=50$. The controller weightings and initial conditions will probably have to be adjusted for better performance for $n>5$, but it should work. Let me know the results if you play with it.
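For larger $n$, one knob worth trying is the LQR weighting. A minimal sketch of recomputing the gains with a heavier state penalty (the factor of 10 and the identity weighting are arbitrary choices for illustration, not values from the original example):

```python
from numpy import eye

# penalize deviations of the state 10 times more heavily than the control effort
Q = 10 * eye(A.shape[0])
R = 1
K, X, E = control.lqr(A, B, Q, R)
```

Re-running the closed-loop simulation with the new `K` then shows how the weighting changes the response.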
b422c5de43a5e25c7884c11d21aceb59ad620e17
312,307
ipynb
Jupyter Notebook
examples/npendulum/n-pendulum-control.ipynb
nouiz/pydy
20c8ca9fc521208ae2144b5b453c14ed4a22a0ec
[ "BSD-3-Clause" ]
1
2019-06-27T05:30:36.000Z
2019-06-27T05:30:36.000Z
examples/npendulum/n-pendulum-control.ipynb
nouiz/pydy
20c8ca9fc521208ae2144b5b453c14ed4a22a0ec
[ "BSD-3-Clause" ]
null
null
null
examples/npendulum/n-pendulum-control.ipynb
nouiz/pydy
20c8ca9fc521208ae2144b5b453c14ed4a22a0ec
[ "BSD-3-Clause" ]
1
2016-10-02T13:43:48.000Z
2016-10-02T13:43:48.000Z
224.036585
89,808
0.806328
true
8,524
Qwen/Qwen-72B
1. YES 2. YES
0.787931
0.808067
0.636701
__label__eng_Latn
0.850765
0.317601
# Week 2 # Lecture 3 - Aug 31 ## Least Squares by Gradient Descent We left off last week needing to minimize a loss function for linear regression, i.e. the minimization problem below. $$\min\limits_w\,L(w)=\min\limits_w\,\|Xw-y\|^2$$ We will use the method of **gradient descent** to find an approximate solution. Gradient descent is a very quick method exploiting some pretty simple ideas from multivariable calculus. While it will only find an approximate answer at best, it is practically good enough in most cases. The benefits are that it is computationally cheap and can be used for many other useful loss functions. Gradient descent (and its sped-up version, **stochastic gradient descent** or **SGD**) is *heavily* used in machine learning. Along with backpropagation, SGD is the primary method used for training neural networks. In fact, the first practical neural networks were written about in the machine learning literature under a class of methods called "gradient-based learning" due to the primacy of SGD and related methods. ## Some Ideas from Multivariate Calculus First, we need just a few ideas from multivariate calculus. This is quite minimal, but you can learn more details about these topics in sections 4.3 (partial derivatives), 4.6 (gradients), and 4.7 (multivariate optimization) of <a href="https://openstax.org/details/books/calculus-volume-3">*Calculus Volume 3*</a> by Strang. The ideas we need for gradient descent include: * If we have a differentiable function of several variables, like our loss function $L(w) = L(w_0, ..., w_d)$, we can define the **partial derivatives** with respect to each of these variables as $$L_{w_i}=\frac{\partial L}{\partial w_i}=\lim\limits_{h\to 0}\frac{L(w+he_i) - L(w)}{h},$$ where $e_i$ is a $(d+1)$-vector with all 0s except for a 1 in the $i$th component. Geometrically, this partial derivative is the slope of $L$ if we go in the direction of $e_i$. * To **minimize a multivariable function** by hand, we need to find critical points, which are points $w$ where *all* partial derivatives are 0, and compare which ones give the lowest outputs. In numerical algorithms, we must settle for approximations that are "nearly" critical points. * If we collect these $d+1$ partial derivatives into a vector, we call that vector the **gradient**, which we denote $$ \nabla L(w) =\begin{pmatrix} L_{w_0}(w) \\ \vdots \\ L_{w_d}(w) \end{pmatrix} $$ * The **directional derivative** of a function in the direction of a unit vector $u$ starting from a point $w$ is $$D_u L(w) = \lim\limits_{h\to 0}\frac{L(w+hu) - L(w)}{h},$$ which is the slope in the direction of the vector $u$. Since $L$ is differentiable, this directional derivative will be defined for all directions leaving from $w$. In 1D, there are just two directions: left or right. In 2D, we have directional derivatives at every angle in a circle around the point. * A common theorem says the **directional derivative is maximized in the direction of the gradient** at each point, so the gradient gives the direction of the *steepest ascent* in the function $L$. Similarly, the direction of the *steepest descent* is $-\nabla L(w)$, the opposite direction. ### The Geometry of Gradient Descent We will discuss the geometry of gradient-based methods in class, but let's discuss a general outline of how gradient descent works, setting aside the stochastic version for now.
The goal of gradient descent is to approximately solve the minimization problem $$\min\limits_{w}\,L(w)$$ by finding (approximate) critical values by making a guess for the location of a critical value, taking a small step in the opposite direction as the gradient, and repeating this over and over until, hopefully, we reach a minimum value. The steps are: 0. Make a guess for the critical value -- $w^0$ 1. Compute the gradient of $L$ at $w^0$ 2. Take a small step to $w^1 = w^0 - \alpha\nabla L\left(w^0\right)$ 3. Compute the gradient of $L$ at $w^1$ 4. Take a small step to $w^2 = w^1 - \alpha\nabla L\left(w^1\right)$ 5. (repeat until the gradient gets close to $(0, ..., 0)$) This $\alpha>0$ is a number that will be used in the algorithm as a multiplier of the steps the method will take. This is called the **learning rate**. This idea seems plausible from the calculus ideas above because we just keep switching directions and making a step in the direction of the steepest downward path--the opposite direction as the gradient--until we reach a good place. This is a "greedy" algorithm because it just picks the quickest step in each iteration, which is fast, but it is likely to land in the first minimum it finds, which may or may not be optimal. If you had two parameters, $L$ would be like a 3D curved surface. A nice visual to have in mind is a rain drop falling on a huge leaf. The droplet of water will move in the steepest downward direction due to gravity--but this direction *changes* as the drip follows the contours of the leaf. This is what gradient descent does. Will the drip land in the physically lowest altitude part of the leaf? Maybe, but maybe not. If the rain drop lands on the edge, it will probably just roll off the edge. If the leaf has a few different "sinks," different initial locations of the rain drop might cause it to land in these different ones, some of which have lower altitudes than others. Now, if there is heavy rain and lots of rain drops land on the leaf, we can be pretty sure *some* of them will reach the lowest-altitude sink. From this analogy, you might get the idea that we can make several initial guesses and run it to be more confident we will find the global minimum and not just a local minimum. In the end, if some of our initial guesses are good choices, the step size $\alpha$ is not too big or too small, and the loss function is pretty well-behaved, the method will converge approximately to a local minimum. ### Implementing Gradient Descent Before writing code for gradient descent, let's import some libraries. ```python import numpy as np from sklearn.metrics import mean_squared_error from sklearn.metrics import mean_absolute_error from sklearn.metrics import r2_score from sklearn.model_selection import train_test_split from sklearn.preprocessing import normalize from sklearn.preprocessing import scale import matplotlib.pyplot as plt import pandas as pd # increase the width of boxes in the notebook file (this is only cosmetic) np.set_printoptions(linewidth=180) ``` Now, let's implement gradient descent and test it. 
Gradient descent will need a few inputs: * A function to minimize $f$ * A starting point $x_0$ * A learning rate $\alpha$ * A small number $h$ (the variable that goes to 0 in the definition of the derivative) * A small positive value that we can use for a stopping condition for the derivative being sufficiently small (the tolerance) * A maximum number of iterations ```python # estimate the gradient def computeGradient(f, x, h): n = len(x) gradient = np.zeros(n) for counter in range(n): xUp = x.copy() xUp[counter] += h gradient[counter] = (f(xUp) - f(x))/h return gradient # run gradient descent and return the approximate minimizer def gradientDescent(f, x0, alpha, h, tolerance, maxIterations): # set x equal to the initial guess x = x0 # take up to maxIterations number of steps for counter in range(maxIterations): # update the gradient gradient = computeGradient(f, x, h) # stop if the norm of the gradient is near 0 if np.linalg.norm(gradient) < tolerance: print('Gradient descent took', counter, 'iterations to converge') print('The norm of the gradient is', np.linalg.norm(gradient)) # return the approximate critical value x return x # if we do not converge, print a message elif counter == maxIterations-1: print("Gradient descent failed") print('The gradient is', gradient) # return x, sometimes it is still pretty good return x # take a step in the opposite direction as the gradient x -= alpha*gradient ``` Let's test it on some simple functions. ```python f = lambda x : x[0]**2 x = gradientDescent(f,[2],0.4,0.4,0.000001,10000) print(x, f(x)) ``` Gradient descent took 10 iterations to converge The norm of the gradient is 4.5055999998641627e-07 [-0.19999977] 0.03999990988805076 ```python f = lambda x: np.sin(x) x = gradientDescent(f,[2],0.5,0.5,0.0001,10000) print(x, f(x)) ``` Gradient descent took 18 iterations to converge The norm of the gradient is 6.221664077488143e-05 [4.46232611] [-0.96889687] Let's implement a class for linear regression using gradient descent to estimate $w$.
```python class LeastSquaresGradient: # fit the model to the data def fit(self, X, y, w0, alpha, h, tolerance, maxIterations): self.n = X.shape[0] self.d = X.shape[1] self.h = h self.alpha = alpha self.initialGuess = w0 # save the training data self.data = np.hstack((np.ones([self.n, 1]), X)) # save the training labels self.outputs = y # find the w values that minimize the sum of squared errors via gradient descent X = self.data L = lambda w: ((X @ w).T - y.T) @ (X @ w - y) self.w = self.gradientDescent(L, self.initialGuess, self.alpha, self.h, tolerance, maxIterations) # predict the output from testing data def predict(self, X): # initialize an empty matrix to store the predicted outputs yPredicted = np.empty([X.shape[0],1]) # append a column of ones at the beginning of X X = np.hstack((np.ones([X.shape[0],1]), X)) # apply the function f with the values of w from the fit function to each testing datapoint (rows of X) for row in range(X.shape[0]): yPredicted[row] = self.w @ X[row,] return yPredicted # run gradient descent to minimize the loss function def gradientDescent(self, f, x0, alpha, h, tolerance, maxIterations): # set x equal to the initial guess x = x0 # take up to maxIterations number of steps for counter in range(maxIterations): # update the gradient gradient = self.computeGradient(f, x, h) # stop if the norm of the gradient is near 0 if np.linalg.norm(gradient) < tolerance: print('Gradient descent took', counter, 'iterations to converge') print('The norm of the gradient is', np.linalg.norm(gradient)) # return the approximate critical value x return x # if we do not converge, print a message elif counter == maxIterations-1: print("Gradient descent failed") print('The gradient is', gradient) # return x, sometimes it is still pretty good return x # take a step in the opposite direction as the gradient x -= alpha*gradient # estimate the gradient def computeGradient(self, f, x, h): n = len(x) gradient = np.zeros(n) for counter in range(n): xUp = x.copy() xUp[counter] += h gradient[counter] = (f(xUp) - f(x))/h return gradient ``` Let's try it! ```python X = np.array([[6], [7], [8], [9], [7]]) y = np.array([1, 2, 3, 3, 4]) # instantiate an least squares object, fit to data, predict data model = LeastSquaresGradient() print('Fitting the model...\n') model.fit(X, y, [0, 0], alpha = 0.001, h = 0.001, tolerance = 0.01, maxIterations = 100000) predictions = model.predict(X) # print the predictions print('\nThe predicted y values are', predictions.T[0]) # print the real y values print('The real y values are', y) # print the w values parameters = model.w print('The w values are', parameters) # plot the training points plt.scatter(X, y, label = 'Data') # plot the fitted model with the data xModel = np.linspace(6,10,100) yModel = parameters[0] + parameters[1]*xModel # write a string for the formula lineFormula = 'y={:.3f}+{:.3f}x'.format(parameters[0], parameters[1]) # plot the model plt.plot(xModel, yModel, 'r', label = lineFormula) # add a legend plt.legend() # return quality metrics print('\nThe r^2 score is', r2_score(y, predictions)) print('The mean squared error is', mean_squared_error(y, predictions)) print('The mean absolute error is', mean_absolute_error(y, predictions),'\n') ``` Clearly, the model seems to work. It's not a great fit, although the data is not really linear, so the model cannot fit it well. 
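As a sanity check on the gradient-descent weights, the same least squares problem can be solved exactly with NumPy's built-in solver, since ordinary least squares has a closed-form solution. A minimal sketch on the same toy data (the variable names below are new, introduced just for this check):

```python
import numpy as np

X = np.array([[6], [7], [8], [9], [7]])
y = np.array([1, 2, 3, 3, 4])

# design matrix with a leading column of ones for the intercept w_0
A = np.hstack((np.ones((X.shape[0], 1)), X))

# solve min_w ||A w - y||^2 directly
wExact, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)
print('Exact least squares weights:', wExact)
```

The weights from `LeastSquaresGradient` should agree with these up to the tolerance used in the stopping condition; a large gap usually means the learning rate, `h`, or tolerance needs adjusting.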
## Cross-Validation: Train, Dev, and Test Datasets (see the lecture notes: in Python, we can use the `train_test_split` function from the `scikit-learn` library to randomly assign datasets to training and testing sets. If we need a dev set, we can run it twice!) ## Example: High School Graduate Rates in US States Let's try to use least squares on a real dataset. The CSV file in `../data/US_State_data.csv` contains data from each U.S. state. We would like to predict the output variable included, the high school graduation rate, from some input variables: including the crime rate (per 100,000 persons), the violent crime rate (per 100,000 persons), average teacher salary, student-to-teacher ratio, education expenditure per student, population density, and median household income. This means we have 50 examples (one for each state), 7 input (predictor) variables, and one output (response) variable. In order to use the formula we derived above to attack the problem with least squares, we need to find the matrices $X$ and $y$. ```python # import the data from the csv file to an numpy array data = pd.read_csv('../data/US_State_Data.csv', sep=',').to_numpy() X = np.array(data[:,1:8], dtype=float) y = np.array(data[:,8], dtype=float) # split the data into training and test sets (trainX, testX, trainY, testY) = train_test_split(X, y, test_size = 0.25, random_state = 1) trainX = scale(trainX) testX = scale(testX) # instantiate a least squares model model = LeastSquaresGradient() # fit the model to the training data (find the w parameters) print('Fitting the model...\n') model.fit(trainX, trainY, [0, 0, 0, 0, 0, 0, 0, 0], alpha = 0.001, h = 0.001, tolerance = 0.01, maxIterations = 100000) # return the predicted outputs for the datapoints in the training set trainPredictions = model.predict(trainX) # print the coefficient of determination r^2 print('\nThe r^2 score is', r2_score(trainY, trainPredictions)) # print quality metrics print('\nThe mean absolute error on the training set is', mean_absolute_error(trainY, trainPredictions)) # return the predicted outputs for the datapoints in the test set predictions = model.predict(testX) # print the predictions print('\nThe predicted y values for the test set are', np.round(predictions.T[0],0)) # print the real y values print('The real y values for the test set are ', testY) # print the weights print('\nThe weights are', model.w) # print quality metrics print('\nThe mean absolute error on the test set is', mean_absolute_error(testY, predictions), '\n') ``` Fitting the model... Gradient descent took 834 iterations to converge The norm of the gradient is 0.009954123437901363 The r^2 score is 0.40352475296449164 The mean absolute error on the training set is 3.6674122809135645 The predicted y values for the test set are [82. 80. 89. 85. 81. 81. 85. 90. 78. 84. 83. 80. 83.] The real y values for the test set are [70. 83. 83. 81. 76. 87. 89. 89. 78. 78. 84. 80. 79.] The weights are [82.97247297 -3.76322781 0.79836794 -1.79592293 -1.16423823 -1.61290262 2.114449 0.85703448] The mean absolute error on the test set is 4.004933566374094 ## Comments on Gradient Descent * We must be careful with the $h$ and tolerance hyperparameters to be sure gradient descent will converge. * Gradient descent in our implementation above does not actually require any derivatives since we only used approximate derivatives. * If we knew formulas for the derivatives, we could compute them exactly to let the step size be exactly proportional to $\nabla L$. 
This would drastically reduce the number of times we compute the loss function and increase speed. * We will use exact derivatives as well as other approaches to speed up gradient descent when we start to build huge neural networks. * Gradient descent and related methods are the main driver of many machine learning problems that are based on minimizing a loss function (least squares and neural networks, among others). ## Regularization **Regularization** is an approach to adjusting a model in various ways to deal with collinearity, among other problems. Another important effect of regularization is that it can sometimes reduce overfitting, a problem where we fit the model to the training data too strongly, causing it to perform badly on test data. For example, in the image below, a regularized model would look more like the green curve, which is much simpler and *probably* captures the real dynamics of the system generating the data better than the blue curve, even though they both fit the training data (the red dots) perfectly. In general, regularization typically sacrifices some training accuracy through approaches that reduce the dimension of the parameter space, shrink some parameters, and adjust loss functions in a way that improves how well the model generalizes to test data and other unknown data. We will want to add regularization to our toolbox for reducing overfitting whenever we learn by minimizing a loss function, which is what neural networks do. There are a few common regularization methods used in linear regression, which mirror their usage in neural networks, where they are commonly referred to as **weight decay**, which we will discuss below. ## Ridge Regression One issue that can cause overfitting is when parameters get far larger than other parameters, which can over-emphasize certain features. **Ridge regression** (also called **Tikhonov regularization** or $L^2$ **regularization**) offers a partial remedy for the problems of overfitting by adjusting the loss function to "encourage" the $w_i$'s to be smaller by **penalizing** large $w_i$'s. To do this, instead of the ordinary least squares loss function $$L(w)=\sum\limits_{i=1}^n \left(\hat{f}(x_i)-y_i\right)^2= \|Xw-y\|_2^2,$$ we add another term to the loss function to get $$L_{\text{ridge}}(\lambda,w)=\sum\limits_{i=1}^n \left(\hat{f}(x_i)-y_i\right)^2 + \lambda\sum\limits_{i=1}^dw_i^2=\|Xw-y\|_2^2+\lambda\|w\|_2^2,$$ for some constant $\lambda\geq 0$. So, here, if $w_i$ is large, the loss will be larger. Therefore, when we minimize the loss function, it will push the $w_i$'s toward 0 unless there is a really good reason not to do so. That's why we say we *penalize* large $w$ values. The value $\lambda$ is a hyperparameter of ridge regression. As $\lambda\to 0$, ridge regression approaches ordinary least squares. If $\lambda$ is very large, optimizing the loss function will force the $w$ values to be small. So, the larger we make $\lambda$, the more pressure we put on the $w$ parameters to shrink. ### Note For ridge regression, you typically should normalize the data before minimizing the loss function. Otherwise, different scaling will cause minimization to penalize some variables more than others.
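It is also worth noting that, because the ridge loss is still a smooth quadratic, it has an exact minimizer $w = (X^TX+\lambda I)^{-1}X^Ty$, which is handy for checking a gradient-descent implementation. A minimal sketch (this version penalizes every entry of $w$, including the intercept, which is a simplification of the loss written above):

```python
import numpy as np

def ridgeClosedForm(X, y, lam):
    """Return w minimizing ||Xw - y||^2 + lam * ||w||^2."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# tiny usage example with random data and a leading column of ones
rng = np.random.default_rng(0)
X = np.hstack((np.ones((20, 1)), rng.normal(size=(20, 3))))
y = rng.normal(size=20)
print(ridgeClosedForm(X, y, lam=1.0))
```

As $\lambda\to 0$ this recovers ordinary least squares (when $X^TX$ is invertible), and as $\lambda$ grows the weights shrink toward 0.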
## LASSO Regression and Elastic Net Regression **LASSO** (least absolute shrinkage and selection operator) regression is very similar to ridge regression, but it is also called **$L^1$ regularization** because it adds an $L^1$ penalty to the size of the $w$ parameters, so the loss function is $$L_{\text{lasso}}(\lambda,w)=\sum\limits_{i=1}^n \left(\hat{f}(x_i)-y_i\right)^2 + \lambda\sum\limits_{i=1}^d |w_i|=\|Xw-y\|_2^2+\lambda\|w\|_1.$$ **Elastic net** regression combines both $L^1$ and $L^2$ regularization as a linear combination. So, the elastic net loss function is $$L_{\text{elastic}}(\lambda_1,\lambda_2,w)=\sum\limits_{i=1}^n \left(\hat{f}(x_i)-y_i\right)^2 + \lambda_1\sum\limits_{i=1}^d |w_i| + \lambda_2\sum\limits_{i=1}^d w_i^2=\|Xw-y\|_2^2+\lambda_1\|w\|_1+\lambda_2\|w\|_2^2,$$ so it contains two hyperparameters $\lambda_1\geq 0$ and $\lambda_2\geq 0$ controlling how large the $L^1$ and $L^2$ penalties are. ## Ridge vs. LASSO vs. Elastic Net Regression Ridge, LASSO, and elastic net regression aim to accomplish most of the same tasks: 1. Predict outputs when there are linearly dependent variables 2. Reduce the dimension of the data (practically speaking) 3. Improve how well a model can generalize to test data and beyond So, which one should we use? That's not really an easy question to answer because the usefulness of a model depends on how well it can generalize to unknown datapoints, but they are unknown, so... we don't know much about them! There are some differences between $L^1$ and $L^2$ penalties that can guide our testing, although only testing is likely to tell us what will work better, practically speaking. * A loss function with an $L^1$ penalty is NOT differentiable when any $w_i=0$, so we have to be careful with gradient-based methods. A method called soft-thresholding is often used to send parameters directly to 0 if the gradient method brings a $w_i$ sufficiently close to 0. * A loss function with an $L^2$ penalty will not cause parameters to go to 0, but $L^1$ can. (We will talk about some of the geometry of why this is true in class. It's also discussed in some videos about <a href="https://www.youtube.com/watch?v=5asL5Eq2x0A">ridge</a> and <a href="https://www.youtube.com/watch?v=jbwSCwoT51M">LASSO</a> regression.) * There is no simple formula for $w$ minimizing the loss functions in LASSO or elastic net regression, so numerical optimization, such as gradient descent, must be used. * There is a matrix expression for $w$ minimizing the loss function in ridge regression, but gradient descent can be used as well. This means $L^1$ can totally eliminate variables, which can be good or bad depending on what we are modeling. If the output is well-predicted by only a few variables, this is good. If we need lots of variables to predict the output, this is bad. These notes only provide a short intro to regularization techniques, but you can read more from the (free) classic book <a href="https://web.stanford.edu/~hastie/ElemStatLearn/">*Elements of Statistical Learning*</a> by Hastie et al., in section 3.4. ## Implementing Regularized Linear Regression To implement these methods, we simply adjust the loss function, and gradient descent will still work.
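(Before looking at the loss-function changes below, here is a minimal sketch of the soft-thresholding operator mentioned above, which is what allows $L^1$-penalized methods to set weights exactly to 0. In practice the threshold would depend on the penalty $\lambda_1$ and the step size, so the number used here is purely illustrative.)

```python
import numpy as np

def softThreshold(w, threshold):
    # shrink each weight toward 0 by the threshold, and set it exactly
    # to 0 if its magnitude is below the threshold
    return np.sign(w) * np.maximum(np.abs(w) - threshold, 0.0)

# small weights are zeroed out entirely, large ones are shrunk
w = np.array([0.8, -0.05, 0.0, -1.2, 0.02])
print(softThreshold(w, 0.1))   # approximately [0.7, 0, 0, -1.1, 0]
```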
Previously, we had `L = lambda w: ((X @ w).T - y.T) @ (X @ w - y)` For ridge regression, simply add a hyperparameter $\lambda_2$ times the sum of squares of the weights: `L = lambda w: ((X @ w).T - y.T) @ (X @ w - y) + l2 * w.T @ w` For LASSO regression, simply add a hyperparameter $\lambda_1$ times the sum of absolute values of the weights: `L = lambda w: ((X @ w).T - y.T) @ (X @ w - y) + l1 * np.sum(np.abs(w))` For elastic net, add both parts: `L = lambda w: ((X @ w).T - y.T) @ (X @ w - y) + l1 * np.sum(np.abs(w)) + l2 * w.T @ w` # Lecture 4 - Sept 2 ### Classification Problems **Classification problems** are problems where we would like to take datapoints and assign them to an appropriate **class** based on some examples that have known classes. ### Examples * If we have a dataset of medical records, where each datapoint has a single patient's age, weight, blood pressure, status as a smoker or not, and other information, and each patient is known to either have or not have kidney disease, we might want to take data from a new patient and predict if he or she is likely to develop kidney disease. * If we have a dataset of labeled images of cats and dogs, we might want to take a new image and classify whether it has a dog or a cat. (How does Google image search know how to find pictures of what you search?) * If we have a dataset of audio files, each of which is a jazz, classical, rock, pop, or hip-hop song, we might want to predict the genre of a new audio file. (Spotify does this kind of analysis to recommend songs based on your listening history.) * If we have a dataset of traffic logs on a network, some known to be infected by a specific virus and some not, we might want to use this information to classify a new traffic log as likely to be infected or not. * If we have a dataset of sounds of people speaking along with transcripts of the words, we might want to classify the words spoken into a microphone. (Think Siri!) In all of these cases, the information can be represented as a point in the $n$-dimensional real space. * The medical records would have numbers for age, weight, and blood pressure and a binary digit for non-smoker or smoker. * The dog/cat images might have three channels for a picture, meaning three numbers for each pixel (the red, green, and blue levels) like the bird picture below. * The audio files might have numbers specifying the type of sound for the song many times per second. * The network traffic logs might have numbers of packets transferred, file size, ports, addresses, the content of the packets, etc. * The audio files might have numbers specifying the type of sound for a word many times per second. The blue lines indicate the beginning of a word and the red lines indicate the ends of words. In all mature applications, there are likely preprocessing steps done before the classification is done. I chose these applications to demonstrate two things: (1) classification problems are interesting and useful in almost every area of study and (2) a huge class of classification problems has much in common mathematically. All apply to datapoints, although some types of data may have far more dimensions than others--a medical record may only have 10 to 12 numbers, but a 12-megapixel photo from the latest iPhone would have $4,000\times 3,000\times 3=36,000,000$ numbers (3 for each pixel).
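To make the point that all of these examples are just vectors concrete, here is a minimal sketch of encoding one of the medical-record examples above as a point in $\mathbb{R}^4$ (the features and values are made up for illustration):

```python
import numpy as np

# age (years), weight (kg), systolic blood pressure (mmHg), smoker (1) or non-smoker (0)
patient = np.array([54.0, 81.0, 132.0, 1.0])

print(patient.shape)   # (4,) -- one datapoint, i.e. one point in R^4
```

A real application would also involve preprocessing (scaling, encoding categorical fields, and so on), as noted above.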
### The Math of a Classification Problem To exploit the similarities, let's abstract away the specifics of the applications for now and think about how to describe a classification problem mathematically. Consider a $d$-dimensional point, or vector, $x_1\in\mathbb{R}^d$ and denote $x_1=(x_{11},x_{12},...,x_{1d})$. $x_1$ is a member of one of $k$ classes $C=\{c_1, c_2, ..., c_k\}$. We call the point $x_1$ an **example** and we call the class the **label** of $x_1$. The goal of a classification problem is to find a function $M:\mathbb{R}^d\to C$ that maps each example $x_1$ to its class $y_1=M(x_1)$ and will generalize to successfully classify new, unlabeled datapoints with high accuracy. This will segment the space $\mathbb{R}^d$ into sets $X_j=\{x_1\in\mathbb{R}^d | M(x_1)=c_j\}$ corresponding to each class. In the image below, for example, the space $\mathbb{R}^2$ is partitioned into three sets colored red, blue, and green. The colored points are labeled examples and the $\mathbb{R}^2$ space is colored by the class to be assigned to points in different regions. (image from Wikipedia) ### Classification Algorithms There are many algorithms used for classification. Some of the most popular include * Bayes and naive Bayes * $k$-nearest neighbors * decision trees * logistic regression * support vector machines * neural networks (many types) as well as tree-based ensemble algorithms that systematically combine different classifiers. Any of the methods above are good choices and there are pros and cons of each, but we only consider tiny neural networks and logistic regression this week. An array of classifiers are used on a few datasets in the code below from [scikit-learn](https://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html). I ask you not to focus on the code, but look at the diagram it generates. The diagram shows how different classifiers come to quite different results at classifying 2D points into the red and blue classes.
```python print(__doc__) # Code source: Gaël Varoquaux # Andreas Müller # Modified for documentation by Jaques Grobler # License: BSD 3 clause import numpy as np import matplotlib.pyplot as plt from matplotlib.colors import ListedColormap from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler from sklearn.datasets import make_moons, make_circles, make_classification from sklearn.neural_network import MLPClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.svm import SVC from sklearn.gaussian_process import GaussianProcessClassifier from sklearn.gaussian_process.kernels import RBF from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier from sklearn.naive_bayes import GaussianNB from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis h = .02 # step size in the mesh names = ["Nearest Neighbors", "Linear SVM", "RBF SVM", "Gaussian Process", "Decision Tree", "Random Forest", "Neural Net", "AdaBoost", "Naive Bayes", "QDA"] classifiers = [ KNeighborsClassifier(3), SVC(kernel="linear", C=0.025), SVC(gamma=2, C=1), GaussianProcessClassifier(1.0 * RBF(1.0)), DecisionTreeClassifier(max_depth=5), RandomForestClassifier(max_depth=5, n_estimators=10, max_features=1), MLPClassifier(alpha=1, max_iter=1000), AdaBoostClassifier(), GaussianNB(), QuadraticDiscriminantAnalysis()] X, y = make_classification(n_features=2, n_redundant=0, n_informative=2, random_state=1, n_clusters_per_class=1) rng = np.random.RandomState(2) X += 2 * rng.uniform(size=X.shape) linearly_separable = (X, y) datasets = [make_moons(noise=0.3, random_state=0), make_circles(noise=0.2, factor=0.5, random_state=1), linearly_separable ] figure = plt.figure(figsize=(27, 9)) i = 1 # iterate over datasets for ds_cnt, ds in enumerate(datasets): # preprocess dataset, split into training and test part X, y = ds X = StandardScaler().fit_transform(X) X_train, X_test, y_train, y_test = \ train_test_split(X, y, test_size=.4, random_state=42) x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5 y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5 xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) # just plot the dataset first cm = plt.cm.RdBu cm_bright = ListedColormap(['#FF0000', '#0000FF']) ax = plt.subplot(len(datasets), len(classifiers) + 1, i) if ds_cnt == 0: ax.set_title("Input data") # Plot the training points ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright, edgecolors='k') # Plot the testing points ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, alpha=0.6, edgecolors='k') ax.set_xlim(xx.min(), xx.max()) ax.set_ylim(yy.min(), yy.max()) ax.set_xticks(()) ax.set_yticks(()) i += 1 # iterate over classifiers for name, clf in zip(names, classifiers): ax = plt.subplot(len(datasets), len(classifiers) + 1, i) clf.fit(X_train, y_train) score = clf.score(X_test, y_test) # Plot the decision boundary. For that, we will assign a color to each # point in the mesh [x_min, x_max]x[y_min, y_max]. 
if hasattr(clf, "decision_function"): Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()]) else: Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1] # Put the result into a color plot Z = Z.reshape(xx.shape) ax.contourf(xx, yy, Z, cmap=cm, alpha=.8) # Plot the training points ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright, edgecolors='k') # Plot the testing points ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, edgecolors='k', alpha=0.6) ax.set_xlim(xx.min(), xx.max()) ax.set_ylim(yy.min(), yy.max()) ax.set_xticks(()) ax.set_yticks(()) if ds_cnt == 0: ax.set_title(name) ax.text(xx.max() - .3, yy.min() + .3, ('%.2f' % score).lstrip('0'), size=15, horizontalalignment='right') i += 1 plt.tight_layout() plt.show() ``` ### Measuring Success of Classifiers For a specific class labeled 1, we denote $c_{00}$ is the number of test examples in class 0 classified in class 0 (true negatives) $c_{01}$ is the number of test examples in class 0 classified in class 1 (false positives) $c_{10}$ is the number of test examples in class 1 classified in class 0 (false negatives) $c_{11}$ is the number of test examples in class 1 classified in class 1 (true positives) These numbers are sometimes included in what is called a **confusion matrix** $$ \begin{align} \begin{pmatrix} c_{00} & c_{01} \\ c_{10} & c_{11} \end{pmatrix} \end{align} $$ There are a few different measures of success in common usage. Each one can be computed for each class. * Accuracy $$ \begin{align} \text{Accuracy}=\frac{\text{# of true classifications}}{\text{# of test examples}}=\frac{c_{00}+c_{11}}{c_{00}+c_{01}+c_{10}+c_{11}} \end{align} $$ * Precision $$ \begin{align} \text{Precision}=\frac{\text{# of true positives}}{\text{# of positive classifications}}=\frac{c_{11}}{c_{01}+c_{11}} \end{align} $$ * Recall $$ \begin{align} \text{Recall}=\frac{\text{# of true positives}}{\text{# of positive examples}}=\frac{c_{11}}{c_{10}+c_{11}} \end{align} $$ * F1 Score $$ \begin{align} F_1=\frac{2\text{(precision)(recall)}}{\text{precision + recall}} \end{align} $$ It is convenient to use the <a href="https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html">classification_report</a> function from the <a href="https://scikit-learn.org/stable/modules/classes.html#module-sklearn.metrics">sklearn.metrics</a> module with inputs of the test set labels and then the labels predicted by the classifier. This will return the precision and recall for each class, accuracy, and some additional quality metrics. ## A One-Node Neural Network for Binary Classification Next, we show how classification can be done for a binary classification problem with a neural network of just one node, similar to the way we represented linear regression as a one-node neural network. The method is conventionally known as **logistic regression** if a very specific loss function is used, as we will see later. In a binary classification problem, each datapoint is assumed to belong to class 0 or it belongs to class 1. Therefore, for a datapoint $x_i$ will be mapped to $y_i=0$ or $y_i=1$. 
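Before setting up the model, here is a minimal sketch (not part of the original notes) showing the evaluation metrics defined above computed with `sklearn.metrics` on a pair of made-up label vectors; the values in `y_true` and `y_pred` are invented purely for illustration.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix, classification_report)

# hypothetical ground-truth labels and classifier predictions (classes 0 and 1)
y_true = np.array([0, 0, 0, 1, 1, 1, 1, 0, 1, 0])
y_pred = np.array([0, 1, 0, 1, 1, 0, 1, 0, 1, 0])

# the confusion matrix [[c00, c01], [c10, c11]] described above
print(confusion_matrix(y_true, y_pred))

# each metric can also be computed directly
print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1       :", f1_score(y_true, y_pred))

# classification_report bundles precision, recall, and F1 for every class
print(classification_report(y_true, y_pred))
```

The same functions will be used below to evaluate the classifier we build by hand.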
As usual, suppose we have a modified data matrix, labels, and weights:

$$
X=\begin{pmatrix}
1 & x_{11} & x_{12} & \cdots & x_{1d}\\
1 & x_{21} & x_{22} & \cdots & x_{2d}\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
1 & x_{n1} & x_{n2} & \cdots & x_{nd}
\end{pmatrix}
\hspace{2cm}y=\begin{pmatrix}
y_1 \\ y_2 \\ \vdots \\ y_n
\end{pmatrix}
\hspace{2cm}w=\begin{pmatrix}
w_0 \\ w_1 \\ \vdots \\ w_d
\end{pmatrix}
$$

We hope to train our model to map $x_i$ values to their corresponding targets $y_i$.

With linear regression, the predicted $y$ values ($\hat{y}$) were computed as $Xw$, which can be any real number, depending on the input data $X$ and weights $w$. This was good for regression, where we tried to predict a numerical value. With classification, however, we will predict the probability that a datapoint belongs to class 1, which must be a number between 0 and 1, and then round it to the nearest integer to predict the class.

So, the question becomes: how can we "squish" $Xw$ into $[0,1]$?

The **sigmoid** or **logistic** function does precisely this: its domain is $\mathbb{R}$, but its range is just $[0,1]$. It has some other nice properties that will help us in the future. The sigmoid function is defined as

$$\sigma(z)=\frac{1}{1+e^{-z}}$$

Therefore, we can attempt to solve the classification problem using a minimization problem very similar to the one we used with linear regression, except that our predicted probabilities will be computed as $\sigma(Xw)$, with $\sigma$ applied elementwise:

$$L(w)=\|\hat{y}-y\|_2^2=\|\sigma(Xw)-y\|_2^2$$

We will write a simple Python implementation of the method using gradient descent below. First, let's import some libraries.

```python
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sn

from sklearn import datasets
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
```

Next, let's write a class for the logistic classifier. We only need to lightly modify the `LinearRegressionGradient` class from our last class session to use the sigmoid function.
```python class LogisticClassifierGradient: # fit the model to the data def fit(self, X, y, w0, alpha, h, tolerance, maxIterations): self.n = X.shape[0] self.d = X.shape[1] self.h = h self.alpha = alpha self.initialGuess = w0 # standardize the data X = self.standardize(X) # save the training data and add a column of 1s to it self.data = np.hstack((np.ones([self.n, 1]), X)) # save the training labels self.outputs = y # find the w values that minimize the sum of squared errors via gradient descent X = self.data L = lambda w: (self.sigmoid((X @ w)).T - y.T) @ (self.sigmoid(X @ w) - y) self.w = self.gradientDescent(L, self.initialGuess, self.alpha, self.h, tolerance, maxIterations) # predict the output from testing data def predict(self, X): # standardize the data X = self.standardize(X) # initialize an empty matrix to store the predicted outputs yPredicted = np.empty([X.shape[0],1]) # append a column of ones at the beginning of X X = np.hstack((np.ones([X.shape[0],1]), X)) # apply the function f with the values of w from the fit function to each testing datapoint (rows of X) for row in range(X.shape[0]): yPredicted[row] = np.round(self.sigmoid(self.w @ X[row,])) return yPredicted # run gradient descent to minimize the loss function def gradientDescent(self, f, x0, alpha, h, tolerance, maxIterations): # set x equal to the initial guess x = x0 # take up to maxIterations number of steps for counter in range(maxIterations): # update the gradient gradient = self.computeGradient(f, x, h) # stop if the norm of the gradient is near 0 if np.linalg.norm(gradient) < tolerance: print('Gradient descent took', counter, 'iterations to converge') print('The norm of the gradient is', np.linalg.norm(gradient)) # return the approximate critical value x return x # if we do not converge, print a message elif counter == maxIterations-1: print("Gradient descent failed") print('The norm of the gradient is', np.linalg.norm(gradient)) # return x, sometimes it is still pretty good return x # take a step in the opposite direction as the gradient x -= alpha*gradient # estimate the gradient def computeGradient(self, f, x, h): n = len(x) gradient = np.zeros(n) fx = f(x) for counter in range(n): xUp = x.copy() xUp[counter] += h gradient[counter] = (f(xUp) - fx)/h return gradient def sigmoid(self, z): return 1.0 / (1 + np.exp(-z)) def standardize(self, X): n = X.shape[0] # normalize all the n features of X. for i in range(n): X = (X - X.mean(axis=0))/X.std(axis=0) return X ``` ### Example: Breast Cancer Detection ```python breastcancer = datasets.load_breast_cancer() print(breastcancer['DESCR']) ``` .. _breast_cancer_dataset: Breast cancer wisconsin (diagnostic) dataset -------------------------------------------- **Data Set Characteristics:** :Number of Instances: 569 :Number of Attributes: 30 numeric, predictive attributes and the class :Attribute Information: - radius (mean of distances from center to points on the perimeter) - texture (standard deviation of gray-scale values) - perimeter - area - smoothness (local variation in radius lengths) - compactness (perimeter^2 / area - 1.0) - concavity (severity of concave portions of the contour) - concave points (number of concave portions of the contour) - symmetry - fractal dimension ("coastline approximation" - 1) The mean, standard error, and "worst" or largest (mean of the three worst/largest values) of these features were computed for each image, resulting in 30 features. For instance, field 0 is Mean Radius, field 10 is Radius SE, field 20 is Worst Radius. 
- class: - WDBC-Malignant - WDBC-Benign :Summary Statistics: ===================================== ====== ====== Min Max ===================================== ====== ====== radius (mean): 6.981 28.11 texture (mean): 9.71 39.28 perimeter (mean): 43.79 188.5 area (mean): 143.5 2501.0 smoothness (mean): 0.053 0.163 compactness (mean): 0.019 0.345 concavity (mean): 0.0 0.427 concave points (mean): 0.0 0.201 symmetry (mean): 0.106 0.304 fractal dimension (mean): 0.05 0.097 radius (standard error): 0.112 2.873 texture (standard error): 0.36 4.885 perimeter (standard error): 0.757 21.98 area (standard error): 6.802 542.2 smoothness (standard error): 0.002 0.031 compactness (standard error): 0.002 0.135 concavity (standard error): 0.0 0.396 concave points (standard error): 0.0 0.053 symmetry (standard error): 0.008 0.079 fractal dimension (standard error): 0.001 0.03 radius (worst): 7.93 36.04 texture (worst): 12.02 49.54 perimeter (worst): 50.41 251.2 area (worst): 185.2 4254.0 smoothness (worst): 0.071 0.223 compactness (worst): 0.027 1.058 concavity (worst): 0.0 1.252 concave points (worst): 0.0 0.291 symmetry (worst): 0.156 0.664 fractal dimension (worst): 0.055 0.208 ===================================== ====== ====== :Missing Attribute Values: None :Class Distribution: 212 - Malignant, 357 - Benign :Creator: Dr. William H. Wolberg, W. Nick Street, Olvi L. Mangasarian :Donor: Nick Street :Date: November, 1995 This is a copy of UCI ML Breast Cancer Wisconsin (Diagnostic) datasets. https://goo.gl/U2Uwz2 Features are computed from a digitized image of a fine needle aspirate (FNA) of a breast mass. They describe characteristics of the cell nuclei present in the image. Separating plane described above was obtained using Multisurface Method-Tree (MSM-T) [K. P. Bennett, "Decision Tree Construction Via Linear Programming." Proceedings of the 4th Midwest Artificial Intelligence and Cognitive Science Society, pp. 97-101, 1992], a classification method which uses linear programming to construct a decision tree. Relevant features were selected using an exhaustive search in the space of 1-4 features and 1-3 separating planes. The actual linear program used to obtain the separating plane in the 3-dimensional space is that described in: [K. P. Bennett and O. L. Mangasarian: "Robust Linear Programming Discrimination of Two Linearly Inseparable Sets", Optimization Methods and Software 1, 1992, 23-34]. This database is also available through the UW CS ftp server: ftp ftp.cs.wisc.edu cd math-prog/cpo-dataset/machine-learn/WDBC/ .. topic:: References - W.N. Street, W.H. Wolberg and O.L. Mangasarian. Nuclear feature extraction for breast tumor diagnosis. IS&T/SPIE 1993 International Symposium on Electronic Imaging: Science and Technology, volume 1905, pages 861-870, San Jose, CA, 1993. - O.L. Mangasarian, W.N. Street and W.H. Wolberg. Breast cancer diagnosis and prognosis via linear programming. Operations Research, 43(4), pages 570-577, July-August 1995. - W.H. Wolberg, W.N. Street, and O.L. Mangasarian. Machine learning techniques to diagnose breast cancer from fine-needle aspirates. Cancer Letters 77 (1994) 163-171. 
```python
breastcancer = datasets.load_breast_cancer()

# find the data and labels
X = breastcancer.data
Y = breastcancer.target

# split the data into train and test sets
trainX, testX, trainY, testY = train_test_split(X, Y, test_size = 0.25)

# build the logistic classifier
model = LogisticClassifierGradient()

# fit the logistic classifier to the training data
model.fit(trainX, trainY, [0] * (X.shape[1] + 1), alpha = 0.001, h = 0.001, tolerance = 0.001, maxIterations = 10000)

# predict the labels of the training set
predictedY = model.predict(trainX)

# print quality metrics
print('\nTrain Classification Report:\n\n', classification_report(trainY, predictedY))

# predict the labels of the test set
predictedY = model.predict(testX)

# print quality metrics
print('\nTest Classification Report:\n\n', classification_report(testY, predictedY))

print('\nTest Confusion Matrix:\n')
sn.heatmap(confusion_matrix(testY, predictedY), annot = True)
```

Even though our model is a tiny neural network, it still makes quite good predictions! It correctly classified 99% of the training tumor data as benign or malignant. Even more impressively, it correctly classifies 96% of the test data, which is data the model was not allowed to see during training. The model successfully **generalized** to data it has never seen!
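As an optional cross-check (not part of the original lab), the same train/test split can be fed to scikit-learn's built-in `LogisticRegression`, which fits a similar model with a proper logistic loss; the exact scores will vary from run to run because the split above is random.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# reuse trainX/trainY/testX/testY from the cell above; standardize inside a pipeline
sk_model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
sk_model.fit(trainX, trainY)

print('Train accuracy:', sk_model.score(trainX, trainY))
print('Test accuracy: ', sk_model.score(testX, testY))
```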
4680baadd352f9eda74b94831707054eaf297c87
896,631
ipynb
Jupyter Notebook
Week-2-Gradient-Descent-Classification/Week2.ipynb
grivasleal/Fall-2021-Neural-Networks
980d00b28a1733cc298b2a044487a1e45b984326
[ "MIT" ]
null
null
null
Week-2-Gradient-Descent-Classification/Week2.ipynb
grivasleal/Fall-2021-Neural-Networks
980d00b28a1733cc298b2a044487a1e45b984326
[ "MIT" ]
null
null
null
Week-2-Gradient-Descent-Classification/Week2.ipynb
grivasleal/Fall-2021-Neural-Networks
980d00b28a1733cc298b2a044487a1e45b984326
[ "MIT" ]
null
null
null
665.650334
816,040
0.938343
true
12,114
Qwen/Qwen-72B
1. YES 2. YES
0.843895
0.83762
0.706863
__label__eng_Latn
0.992183
0.480612
# CSE 330 Numerical Analysis Lab

### Lab 8: LU Decomposition

Let a system of equations be,

\begin{equation}
2\boldsymbol{x}_1 - \boldsymbol{x}_{2}+3\boldsymbol{x}_3 = 4
\end{equation}
\begin{equation}
4\boldsymbol{x}_1 + 2\boldsymbol{x}_{2}+\boldsymbol{x}_3 = 1
\end{equation}
\begin{equation}
-6\boldsymbol{x}_1 - \boldsymbol{x}_{2}+2\boldsymbol{x}_3 = 2
\end{equation}

We can write the whole thing in matrix form,

\begin{equation}
\begin{pmatrix}
2 & -1 & 3 \\
4 & 2 & 1 \\
-6 & -1 & 2 \\
\end{pmatrix}
\begin{pmatrix}
\boldsymbol{x}_1 \\ \boldsymbol{x}_2 \\ \boldsymbol{x}_3
\end{pmatrix}
=
\begin{pmatrix}
4 \\ 1 \\ 2
\end{pmatrix}
\end{equation}

In the system of equations, there are 3 equations and 3 unknowns. Therefore, it can be solved for $\boldsymbol{x}_1, \boldsymbol{x}_2, \boldsymbol{x}_3$ unless any two of the equations are linearly dependent (parallel).

Gaussian elimination is a commonly used technique for solving a system of equations. In this method, we create an augmented matrix from the set of equations and then eliminate everything below the diagonal terms. First, let's create the augmented matrix.

\begin{equation}
\begin{pmatrix}
2 & -1 & 3 & \qquad 4 \\
4 & 2 & 1 & \qquad 1\\
-6 & -1 & 2 & \qquad 2 \\
\end{pmatrix}
\end{equation}

Now, if we multiply row 1 by 2 and subtract it from row 2, we get,

\begin{equation}
\begin{pmatrix}
2 & -1 & 3 & \qquad 4 \\
0 & 4 & -5 & \qquad -7\\
-6 & -1 & 2 & \qquad 2 \\
\end{pmatrix}
\end{equation}

Additionally, multiply row 1 by -3 and subtract it from row 3,

\begin{equation}
\begin{pmatrix}
2 & -1 & 3 & \qquad 4 \\
0 & 4 & -5 & \qquad -7\\
0 & -4 & 11 & \qquad 14 \\
\end{pmatrix}
\end{equation}

Finally, multiply row 2 by -1 and subtract it from row 3,

\begin{equation}
\begin{pmatrix}
2 & -1 & 3 & \qquad 4 \\
0 & 4 & -5 & \qquad -7\\
0 & 0 & 6 & \qquad 7 \\
\end{pmatrix}
\end{equation}

Now, the simplified equations are very easy to solve,

\begin{equation}
6\boldsymbol{x}_3 = 7 \quad or, \quad \boldsymbol{x}_3 = 7/6
\end{equation}
\begin{equation}
4\boldsymbol{x}_2 - 5\boldsymbol{x}_3 = -7 \quad or, \quad 4\boldsymbol{x}_2 - 35/6 = -7 \quad or, \quad \boldsymbol{x}_2 = -7/24
\end{equation}
\begin{equation}
2\boldsymbol{x}_1 - \boldsymbol{x}_{2}+3\boldsymbol{x}_3 = 4 \quad or, \quad 2\boldsymbol{x}_1 + 7/24 + 21/6 = 4 \quad or, \quad \boldsymbol{x}_1 = 5/48
\end{equation}

For a system $Ax=B$, the complexity of this whole process is roughly $O(n^3)$. In many matrix problems you not only have to solve $Ax=B$, but also $Ax=C$, $Ax=D$, $Ax=E$, etc., and repeating the same process again and again becomes expensive. Therefore, a clever approach is to drop the right side of the equation at first and decompose the matrix into a lower part $\boldsymbol{L}$ and an upper part $\boldsymbol{U}$, where $\boldsymbol{U}$ is the simplified matrix and $\boldsymbol{L}$ holds the multipliers. The LU matrices can then be reused to solve multiple systems of equations, which is much cheaper. The common structure of these matrices is,

\begin{equation}
L =
\begin{pmatrix}
1 & 0 & 0 \\
L_{21} & 1 & 0 \\
L_{31} & L_{32} & 1 \\
\end{pmatrix}
, U =
\begin{pmatrix}
U_{11} & U_{12} & U_{13} \\
0 & U_{22} & U_{23} \\
0 & 0 & U_{33} \\
\end{pmatrix}
\end{equation}

Now, let's drop the right side of the original system of equations and isolate the left side,

\begin{equation}
\begin{pmatrix}
2 & -1 & 3 \\
4 & 2 & 1 \\
-6 & -1 & 2 \\
\end{pmatrix}
\end{equation}

Let's calculate the LU decomposition for this.
``` from sympy import Matrix from sympy import init_printing A = Matrix([[2,-1,3], [4,2,1], [-6,-1,2]]) L, U, _ = A.LUdecomposition() init_printing(use_latex='matplotlib') L ``` ⎡1 0 0⎤ ⎢ ⎥ ⎢2 1 0⎥ ⎢ ⎥ ⎣-3 -1 1⎦ ``` U ``` ⎡2 -1 3 ⎤ ⎢ ⎥ ⎢0 4 -5⎥ ⎢ ⎥ ⎣0 0 6 ⎦ Using the symbolic python library we can directly compute the LU decomposition for any matrix. Our task for today will be to manually calculate the LU decomposition matrices. LU decompoition has different variants which involve pivoting and partial pivoting in order to avoid 0 on the diagonal and to reduce rounding error. For now, we'll avoid those and calculate in the simplest way possible using gaussian elimination. An important factor to note: LU decomposition is not a unique value. There might be multiple LU decomposition for the same matrix. Therefore, your manually calculated LU decomposition and the values returned by python libraries may not match. ``` import numpy as np def LUDecomposition(M): #Initializing the Upper matrix that has to be simplified U = M #Initializing the Lower matrix as an identity matrix (all diagonals are 1) L = np.identity(U.shape[0]) ###Use Gaussian elimination to populate both L and U##### print("L and U of Given Matrix b: \n",L,"\n\n",U,"\n\n") pivot = 0 for i in range(1, len(U)): print(f"\n#### Iteration = {i}") print("#################################################") for row in range(i, len(U)): print(f"\n## row = {row}") print(f"\npivot = {pivot}, U[{pivot}][{pivot}] = {U[pivot][pivot]}") multiplier = U[row][pivot] / U[pivot][pivot] L[row][pivot] = multiplier print(f"\nmultiplier = {multiplier}\n") for col in range(len(U)): print(f"col = {col}, U[{row}][{col}] = {U[row][col]}, U[{row-1}][{col}] = {U[row-1][col]}") print(f"{U[pivot][col]} - {U[pivot][col]} * {multiplier} = {U[pivot][col] * multiplier}\n") U[row][col] -= U[pivot][col] * multiplier print(U) pivot += 1 return L, U b = np.array([[2,-1,3], [4,2,1], [-6,-1,2]]) L, U = LUDecomposition(b) print("\n\nL and U of Matrix b: \n",L,"\n\n",U,"\n\n") ``` L and U of Given Matrix b: [[1. 0. 0.] [0. 1. 0.] [0. 0. 1.]] [[ 2 -1 3] [ 4 2 1] [-6 -1 2]] #### Iteration = 1 ################################################# ## row = 1 pivot = 0, U[0][0] = 2 multiplier = 2.0 col = 0, U[1][0] = 4, U[0][0] = 2 2 - 2 * 2.0 = 4.0 col = 1, U[1][1] = 2, U[0][1] = -1 -1 - -1 * 2.0 = -2.0 col = 2, U[1][2] = 1, U[0][2] = 3 3 - 3 * 2.0 = 6.0 [[ 2 -1 3] [ 0 4 -5] [-6 -1 2]] ## row = 2 pivot = 0, U[0][0] = 2 multiplier = -3.0 col = 0, U[2][0] = -6, U[1][0] = 0 2 - 2 * -3.0 = -6.0 col = 1, U[2][1] = -1, U[1][1] = 4 -1 - -1 * -3.0 = 3.0 col = 2, U[2][2] = 2, U[1][2] = -5 3 - 3 * -3.0 = -9.0 [[ 2 -1 3] [ 0 4 -5] [ 0 -4 11]] #### Iteration = 2 ################################################# ## row = 2 pivot = 1, U[1][1] = 4 multiplier = -1.0 col = 0, U[2][0] = 0, U[1][0] = 0 0 - 0 * -1.0 = -0.0 col = 1, U[2][1] = -4, U[1][1] = 4 4 - 4 * -1.0 = -4.0 col = 2, U[2][2] = 11, U[1][2] = -5 -5 - -5 * -1.0 = 5.0 [[ 2 -1 3] [ 0 4 -5] [ 0 0 6]] L and U of Matrix b: [[ 1. 0. 0.] [ 2. 1. 0.] [-3. -1. 
1.]] [[ 2 -1 3] [ 0 4 -5] [ 0 0 6]] Let's go back to the original matrix form, \begin{equation} \begin{pmatrix} 2 & -1 & 3 \\ 4 & 2 & 1 \\ -6 & -1 & 2 \\ \end{pmatrix} \begin{pmatrix} \boldsymbol{x}_1 \\ \boldsymbol{x}_2 \\ \boldsymbol{x}_3 \end{pmatrix} = \begin{pmatrix} 4 \\ 1 \\ 2 \end{pmatrix} \end{equation} If it is in the form $Ax=B$ then, \begin{equation} A= \begin{pmatrix} 2 & -1 & 3 \\ 4 & 2 & 1 \\ -6 & -1 & 2 \\ \end{pmatrix} ,B= \begin{pmatrix} 4 \\ 1 \\ 2 \end{pmatrix} \end{equation} We already know that the LU decompositon of A is, \begin{equation} L= \begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ -3 & -1 & 1 \\ \end{pmatrix} , U= \begin{pmatrix} 2 & -1 & 3 \\ 0 & 4 & -5 \\ 0 & 0 & 6 \\ \end{pmatrix} \end{equation} Now, Use $L$, $U$ and $B$ to solve the original system of equations. We have to solve y for $Ly=B$ Or, in this case, \begin{equation} \begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ -3 & -1 & 1 \\ \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \\ y_3 \\ \end{pmatrix} = \begin{pmatrix} 4 \\ 1 \\ 2 \end{pmatrix} \end{equation} It can be solved by forward substitution. It is an iterative process that can be implemented with a nested loop. \begin{equation} \boldsymbol{y}_1 = 4 \end{equation} \begin{equation} 2\boldsymbol{y}_1 + \boldsymbol{y}_2 = 1 \quad or, \quad 8 + \boldsymbol{y}_2 = 1 \quad or, \quad \boldsymbol{y}_2 = -7 \end{equation} \begin{equation} -3\boldsymbol{y}_1 - \boldsymbol{y}_{2}+\boldsymbol{y}_3 = 2 \quad or, \quad -12 + 7 + \boldsymbol{y}_3 = 2 \quad or, \quad \boldsymbol{y}_3 = 7 \end{equation} After solving $y$, in order to calculate $x$ we need to solve, $Ux = y$ or, \begin{equation} \begin{pmatrix} 2 & -1 & 3 \\ 0 & 4 & -5 \\ 0 & 0 & 6 \\ \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ \end{pmatrix} = \begin{pmatrix} y_1 \\ y_2 \\ y_3 \\ \end{pmatrix} \Longrightarrow \begin{pmatrix} 2 & -1 & 3 \\ 0 & 4 & -5 \\ 0 & 0 & 6 \\ \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ \end{pmatrix} = \begin{pmatrix} 4 \\ -7 \\ 7 \\ \end{pmatrix} \end{equation} Which can be done through backward substitution. If you find it hard to iterate backwards, you can just flip $U$, $x$ and $y$, then it results into, \begin{equation} \begin{pmatrix} 6 & 0 & 0 \\ -5 & 4 & 0 \\ 3 & -1 & 2 \\ \end{pmatrix} \begin{pmatrix} x_3 \\ x_2 \\ x_1 \\ \end{pmatrix} = \begin{pmatrix} 7 \\ -7 \\ 4 \\ \end{pmatrix} \end{equation} Solving which would be the same as the forward substitution! The forward substitution part is done for you. Complete the backward substitution part for correct output. 
```
import numpy as np

L = np.array([[1,0,0],
              [2,1,0],
              [-3,-1,1]])
U = np.array([[2,-1,3],
              [0,4,-5],
              [0,0,6]])
B = np.array([4,1,2])

#Forward Substitution process
B_L = np.zeros(B.shape[0])
for i in range(L.shape[0]):
    summation=0
    for j in range(L.shape[0]):
        if i == j:
            B_L[j] = B[j] - summation
            B_L[j] = B_L[j]/L[i,j]
            break
        else:
            summation = summation + L[i,j]*B_L[j]

# Backward Substitution, your task is to complete this part
#Flip the U and B_L matrices if necessary using np.flip(array, axis) method
U = np.flip(U, 0)
U = np.flip(U, 1)
B_L = np.flip(B_L)

B_LU = np.zeros(B.shape[0])

#Use U and B_L to populate B_LU, just like how L and B were used to populate B_L in forward substitution
#Completed solution below (a nested loop, mirroring the forward substitution)
for i in range(U.shape[0]):
    summation=0
    for j in range(U.shape[0]):
        if i == j:
            B_LU[j] = B_L[j] - summation
            B_LU[j] = B_LU[j]/U[i,j]
            break
        else:
            summation = summation + U[i,j]*B_LU[j]
################################################################

final_result = np.flip(B_LU)
print(final_result)
```

[ 0.10416667 -0.29166667  1.16666667]

Both the forward and the backward substitution methods accomplish the task with $O(n^2)$ complexity!
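As a quick sanity check (not part of the original lab), the hand-computed factors and the solution can be verified with NumPy and SciPy. Note that `scipy.linalg.lu` performs a *pivoted* decomposition, returning $P$, $L$, $U$ with $A = PLU$, so its factors need not match the ones computed above, but the reconstructed matrix and the solution should agree.

```
import numpy as np
from scipy.linalg import lu

A = np.array([[2.,-1.,3.], [4.,2.,1.], [-6.,-1.,2.]])
B = np.array([4., 1., 2.])
L = np.array([[1.,0.,0.], [2.,1.,0.], [-3.,-1.,1.]])
U = np.array([[2.,-1.,3.], [0.,4.,-5.], [0.,0.,6.]])

# the hand-computed factors should multiply back to A
print(np.allclose(L @ U, A))

# a direct solve should reproduce x = (5/48, -7/24, 7/6)
print(np.linalg.solve(A, B))

# SciPy's pivoted LU satisfies A = P @ L2 @ U2 (factors may differ from ours)
P, L2, U2 = lu(A)
print(np.allclose(P @ L2 @ U2, A))
```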
b39046a7312bd7d1922200cda8c623c3d0bff989
13,581
ipynb
Jupyter Notebook
LU Decomposition.ipynb
sheikhmishar/Numerical-Analysis-Python
03a737ba38b372fb52ad773f52cd029f7da2b307
[ "MIT" ]
null
null
null
LU Decomposition.ipynb
sheikhmishar/Numerical-Analysis-Python
03a737ba38b372fb52ad773f52cd029f7da2b307
[ "MIT" ]
null
null
null
LU Decomposition.ipynb
sheikhmishar/Numerical-Analysis-Python
03a737ba38b372fb52ad773f52cd029f7da2b307
[ "MIT" ]
null
null
null
13,581
13,581
0.593918
true
4,156
Qwen/Qwen-72B
1. YES 2. YES
0.939025
0.939025
0.881768
__label__eng_Latn
0.863077
0.886975
```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import pandas_datareader.data as web
import datetime as dt

from statsmodels.stats.diagnostic import acorr_ljungbox
from statsmodels.tsa.stattools import acf, pacf, adfuller
```

```python
start_time, end_time = dt.datetime(2016,1,1), dt.datetime(2017,4,1)
TOPIX_df = web.DataReader("INDEXTOPIX:TOPIX", 'google', start_time, end_time)['Close'].rename(columns={'Close': 'P(t)'})
TOPIX_price_df = pd.concat([TOPIX_df, TOPIX_df.shift(1)], axis = 1).rename(columns={0:'P(t)', 1: 'P(t-1)'})[1:]
TOPIX_price_df['price_return'] = TOPIX_price_df['P(t)'] / TOPIX_price_df['P(t-1)'] - 1
TOPIX_price_df[:2]
```

<div>
<table border="1" class="dataframe">
  <thead>
    <tr style="text-align: right;">
      <th></th>
      <th>P(t)</th>
      <th>P(t-1)</th>
      <th>price_return</th>
    </tr>
    <tr>
      <th>Date</th>
      <th></th>
      <th></th>
      <th></th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <th>2016-01-05</th>
      <td>1504.71</td>
      <td>1509.67</td>
      <td>-0.003285</td>
    </tr>
    <tr>
      <th>2016-01-06</th>
      <td>1488.84</td>
      <td>1504.71</td>
      <td>-0.010547</td>
    </tr>
  </tbody>
</table>
</div>

```python
plt.figure(figsize=(15, 6))
TOPIX_price_df['P(t)'].plot()
plt.title('price'); plt.show();
```

```python
plt.figure(figsize=(15, 6))
TOPIX_price_df['price_return'].plot()
plt.title('price return'); plt.show();
```

# ACF (autocorrelation function)

```python
close_acf = acf(TOPIX_price_df['price_return'])
print(close_acf[:5])

plt.figure(figsize=(15, 6))
plt.bar(range(len(close_acf)), close_acf, width = 0.01)
plt.show()
```

# pACF (partial autocorrelation function)

```python
close_acf = pacf(TOPIX_price_df['price_return'])
print(close_acf[:5])

plt.figure(figsize=(15, 6))
plt.bar(range(len(close_acf)), close_acf, width = 0.01)
plt.show()
```

# Ljung–Box test

Reference: https://en.wikipedia.org/wiki/Ljung%E2%80%93Box_test

The Ljung–Box test (named for Greta M. Ljung and George E. P. Box) is a type of statistical test of whether any of a group of autocorrelations of a time series are different from zero.

## The Ljung–Box test may be defined as

- H0: The data are **independently distributed** (i.e. the correlations in the population from which the sample is taken are 0, so that any observed correlations in the data result from randomness of the sampling process).
- Ha: The data are **not independently distributed**; they **exhibit serial correlation**.

## Statistics

\begin{equation}
Q = n\left(n+2\right)\sum_{k=1}^h\frac{\hat{\rho}^2_k}{n-k}
\end{equation}

For significance level $\alpha$, the critical region for rejection of the hypothesis of randomness is

\begin{equation}
Q > \chi_{1-\alpha,h}^2
\end{equation}

```python
def test_ljungbox(price_df, lag):
    _, pvalue = acorr_ljungbox(price_df['price_return'], lags=lag);
    print("P-value = %s on lag = %s"%(pvalue, lag) )

test_ljungbox(TOPIX_price_df, 1)
test_ljungbox(TOPIX_price_df, 2)
test_ljungbox(TOPIX_price_df, 3)
```

P-value = [ 0.31528089] on lag = 1
P-value = [ 0.31528089  0.57674997] on lag = 2
P-value = [ 0.31528089  0.57674997  0.4448136 ] on lag = 3

# Dickey-Fuller test

Reference: https://en.wikipedia.org/wiki/Dickey%E2%80%93Fuller_test

The Dickey–Fuller test tests the null hypothesis that a unit root is present in an autoregressive model.

### Tests for checking the presence of a unit root

1. The test statistic itself, i.e. the Dickey–Fuller test (DF test)
2. A significance test (F-test) of whether the root is greater than or equal to 1
3. The Phillips–Perron test (PP test)
4. The Dickey–Pantula test

```python
_, p_value, _, _, _, _ = adfuller(TOPIX_price_df['price_return']);
print(p_value)
```

2.19916281361e-30

At the 1% significance level, this p-value means the null hypothesis is rejected. Therefore, the TOPIX return series does not have a unit root. The source text expressed this as "**since the series does not look like it is diverging, the returns satisfy stationarity**". Really?
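As a small sanity check (not in the original notebook), the same tests can be applied to simulated series where the answer is known in advance. The exact p-values depend on the random seed, and recent versions of statsmodels return a DataFrame from `acorr_ljungbox` rather than a tuple.

```python
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox
from statsmodels.tsa.stattools import adfuller

np.random.seed(0)
white_noise = np.random.randn(300)              # i.i.d. noise: no serial correlation, no unit root
random_walk = np.cumsum(np.random.randn(300))   # cumulative sum of noise: has a unit root

# Ljung-Box on white noise: p-values should be large, so H0 (independence) is not rejected
print(acorr_ljungbox(white_noise, lags=[5]))

# ADF p-value should be small for white noise (reject unit root) and large for the random walk
print('ADF p-value, white noise :', adfuller(white_noise)[1])
print('ADF p-value, random walk :', adfuller(random_walk)[1])
```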
22e6c46f813404e661c5da556dd13c4db68c8552
129,497
ipynb
Jupyter Notebook
TimeSeries.ipynb
Hitoshi-Nakanishi/TimeSeries
d97e64d74e45c7db2840e0368a52ae465bd24c2e
[ "MIT" ]
null
null
null
TimeSeries.ipynb
Hitoshi-Nakanishi/TimeSeries
d97e64d74e45c7db2840e0368a52ae465bd24c2e
[ "MIT" ]
null
null
null
TimeSeries.ipynb
Hitoshi-Nakanishi/TimeSeries
d97e64d74e45c7db2840e0368a52ae465bd24c2e
[ "MIT" ]
null
null
null
403.417445
58,046
0.92494
true
1,383
Qwen/Qwen-72B
1. YES 2. YES
0.817574
0.79053
0.646317
__label__eng_Latn
0.527024
0.339943
# Supervised Learning: Neural Networks

```python
%matplotlib inline
import warnings
warnings.filterwarnings("ignore")

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style='ticks')

import tensorflow as tf
from scipy import optimize
from ipywidgets import interact
from IPython.display import SVG
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import scale
```

## McCulloch and Pitts Neuron

In 1943, McCulloch and Pitts introduced a mathematical model of a neuron. It consisted of three components:

1. A set of **weights** $w_i$ corresponding to synapses (inputs)
2. An **adder** for summing input signals; analogous to a cell membrane that collects charge
3. An **activation function** for determining when the neuron fires, based on accumulated input

The neuron model is shown schematically below. On the left are input nodes $\{x_i\}$, usually expressed as a vector. The strength with which the inputs are able to deliver the signal along the synapse is determined by their corresponding weights $\{w_i\}$. The adder then sums the inputs from all the synapses:

$$h = \sum_i w_i x_i$$

The parameter $\theta$ determines whether or not the neuron fires given a weighted input of $h$. If it fires, it returns a value $y=1$, otherwise $y=0$. For example, a simple **activation function** uses $\theta$ as a fixed threshold:

$$y = g(h) = \left\{
\begin{array}{l}
1, \text{if } h \gt \theta \\
0, \text{if } h \le \theta
\end{array}
\right.$$

In general, the activation function may take any of several forms, such as a logistic function.

A single neuron is not interesting, nor useful, from a learning perspective. It cannot learn; it simply receives inputs and either fires or not. Only when neurons are joined as a **network** can they perform useful work.

Learning takes place by changing the weights of the connections in a neural network, and by changing the parameters of the activation functions of neurons.

## Perceptron

A collection of McCulloch and Pitts neurons, along with a set of input nodes connected to the inputs via weighted edges, is a perceptron, the simplest neural network.

Each neuron is independent of the others in the perceptron, in the sense that its behavior and performance depends only on its own weights and threshold values, and not on those of the other neurons. Though they share inputs, they operate independently.

The number of inputs and outputs are determined by the data. Weights are stored as an `N x K` matrix, with N inputs and K neurons, with $w_{ij}$ specifying the weight connecting the *i*th input to the *j*th neuron.

In order to use the perceptron for statistical learning, we compare the outputs $y_j$ from each neuron to the observed target $t_j$, and adjust the input weights when they do not correspond (*e.g.* if a neuron fires when it should not have).

$$t_j - y_j$$

We use this difference to update the weight $w_{ij}$, based on the input and a desired **learning rate**. This results in an update rule:

$$w_{ij} \leftarrow w_{ij} + \eta (t_j - y_j) x_i$$

After an incremental improvement, the perceptron is shown the training data again, resulting in another update. This is repeated until the performance no longer improves. Having a learning rate less than one results in more stable learning, though this stability is traded off against having to expose the network to the data multiple times. Typical learning rates are in the 0.1-0.4 range.
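To make the update rule above concrete, here is a tiny worked example (not from the original notes): a single neuron with two inputs, a zero threshold, one training point, and one application of $w_{ij} \leftarrow w_{ij} + \eta (t_j - y_j) x_i$. All of the numbers are made up purely for illustration.

```python
import numpy as np

eta = 0.25                        # learning rate
x = np.array([0.5, 1.0])          # one input vector
w = np.array([0.1, -0.2])         # current weights, one per input
t = 1                             # target output for this example

h = np.dot(w, x)                  # weighted sum computed by the adder (-0.15 here)
y = int(h > 0)                    # threshold activation with theta = 0, so y = 0
w_new = w + eta * (t - y) * x     # one application of the update rule
print(h, y, w_new)                # weights move toward firing on this input
```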
An additional input node is typically added to the perceptron model, which is a constant value (usually -1, 0, or 1) that acts analogously to an intercept in a regression model. This establishes a baseline input for the case when all inputs are zero. ## Learning with Perceptrons 1. Initialize weights $w_{ij}$ to small, random numbers. 2. For each t in T iterations * compute activation for each neuron *j* connected to each input vector *i* $$y_j = g\left( h=\sum_i w_{ij} x_i \right) = \left\{ \begin{array}{l} 1, \text{if } h \gt 0 \\ 0, \text{if } h \le 0 \end{array} \right.$$ * update weights $$w_{ij} \leftarrow w_{ij} + \eta (t_j - y_j) x_i$$ This algorithm is $\mathcal{O}(Tmn)$ ### Example: Logical functions Let's see how the perceptron learns by training it on a couple of of logical functions, AND and OR. For two variables `x1` and `x2`, the AND function returns 1 if both are true, or zero otherwise; the OR function returns 1 if either variable is true, or both. These functions can be expressed as simple lookup tables. ```python AND = pd.DataFrame({'x1': (0,0,1,1), 'x2': (0,1,0,1), 'y': (0,0,0,1)}) AND ``` First, we need to initialize weights to small, random values (can be positive and negative). ```python w = np.random.randn(3)*1e-4 ``` Then, a simple activation function for calculating $g(h)$: ```python g = lambda inputs, weights: np.where(np.dot(inputs, weights)>0, 1, 0) ``` Finally, a training function that iterates the learning algorithm, returning the adapted weights. ```python def train(inputs, targets, weights, eta, n_iterations): # Add the inputs that match the bias node inputs = np.c_[inputs, -np.ones((len(inputs), 1))] for n in range(n_iterations): activations = g(inputs, weights); weights -= eta*np.dot(np.transpose(inputs), activations - targets) return(weights) ``` Let's test it first on the AND function. ```python inputs = AND[['x1','x2']] target = AND['y'] w = train(inputs, target, w, 0.25, 10) ``` Checking the performance: ```python g(np.c_[inputs, -np.ones((len(inputs), 1))], w) ``` Thus, it has learned the function perfectly. Now for OR: ```python OR = pd.DataFrame({'x1': (0,0,1,1), 'x2': (0,1,0,1), 'y': (0,1,1,1)}) OR ``` ```python w = np.random.randn(3)*1e-4 ``` ```python inputs = OR[['x1','x2']] target = OR['y'] w = train(inputs, target, w, 0.25, 20) ``` ```python g(np.c_[inputs, -np.ones((len(inputs), 1))], w) ``` Also 100% correct. ### Exercise: XOR Now try running the model on the XOR function, where a one is returned for either `x1` or `x2` being true, but *not* both. What happens here? ```python # Write your answer here ``` Let's explore the problem graphically: ```python AND.plot(kind='scatter', x='x1', y='x2', c='y', s=50, colormap='winter') plt.plot(np.linspace(0,1.4), 1.5 - 1*np.linspace(0,1.4), 'k--'); ``` ```python OR.plot(kind='scatter', x='x1', y='x2', c='y', s=50, colormap='winter') plt.plot(np.linspace(-.4,1), .5 - 1*np.linspace(-.4,1), 'k--'); ``` ```python XOR = pd.DataFrame({'x1': (0,0,1,1), 'x2': (0,1,0,1), 'y': (0,1,1,0)}) XOR.plot(kind='scatter', x='x1', y='x2', c='y', s=50, colormap='winter'); ``` The perceptron tries to find a separating hyperplane for the two response classes. 
Namely, for any two points $\mathbf{x}_1$ and $\mathbf{x}_2$ lying on the decision boundary, the weights satisfy:

$$\mathbf{x_1}\mathbf{w}^T=0$$

and:

$$\mathbf{x_2}\mathbf{w}^T=0$$

Hence,

$$\begin{aligned}
\mathbf{x}_1\mathbf{w}^T &= \mathbf{x}_2\mathbf{w}^T \\
\Rightarrow (\mathbf{x}_1 - \mathbf{x}_2) \mathbf{w}^T &= 0
\end{aligned}$$

This means that either the norms of $\mathbf{x}_1 - \mathbf{x}_2$ or $\mathbf{w}$ are zero, or the cosine of the angle between them is equal to zero, due to the identity:

$$\mathbf{a}\cdot\mathbf{b} = \|a\| \|b\| \cos \theta$$

Since there is no reason for the norms to be zero in general, we need the two vectors to be at right angles to one another. So, we need a weight vector that is perpendicular to the decision boundary.

Clearly, for the XOR function, the output classes are not linearly separable. So, the algorithm does not converge on an answer, but simply cycles through two incorrect solutions.

## Multi-layer Perceptron

The solution to fitting more complex (*i.e.* non-linear) models with neural networks is to use a more complex network that consists of more than just a single perceptron. The take-home message from the perceptron is that all of the learning happens by adapting the synapse weights until prediction is satisfactory. Hence, a reasonable guess at how to make a perceptron more complex is to simply **add more weights**. There are two ways to add complexity:

1. Add backward connections, so that output neurons feed back to input nodes, resulting in a **recurrent network**
2. Add neurons between the input nodes and the outputs, creating an additional ("hidden") layer to the network, resulting in a **multi-layer perceptron**

The latter approach is more common in applications of neural networks.

How to train a multilayer network is not intuitive. Propagating the inputs forward over two layers is straightforward, since the outputs from the hidden layer can be used as inputs for the output layer. However, the process for updating the weights based on the prediction error is less clear, since it is difficult to know whether to change the weights on the input layer or on the hidden layer in order to improve the prediction.

Updating a multi-layer perceptron (MLP) is a matter of:

1. moving forward through the network, calculating outputs given inputs and current weight estimates
2. moving backward updating weights according to the resulting error from forward propagation.

In this sense, it is similar to a single-layer perceptron, except it has to be done twice, once for each layer (in principle, we can add additional hidden layers, but without sacrificing generality, I will keep it simple).

### Error back-propagation

We update the weights in an MLP using **back-propagation** of the prediction errors, which is essentially a form of gradient descent, as we have used previously for optimization.

First, for the multi-layer perceptron we need to modify the error function, which in the single-layer case was a simple difference between the predicted and observed outputs. Because we will be summing errors, we have to avoid having errors in different directions cancelling each other out, so a sum of squares error is more appropriate:

$$E(t,y) = \frac{1}{2} \sum_i (t_i - y_i)^2$$

It is on this function that we will perform gradient descent, since the goal is to minimize the error. Specifically, we will differentiate with respect to the weights, since it is the weights that we are manipulating in order to get better predictions.
Recall that the error is a function of the threshold function:

$$E(\mathbf{w}) = \frac{1}{2} \sum_i (t_i - y_i)^2 = \frac{1}{2} \sum_i \left(t_i - g\left[ \sum_j w_{ij} a_j \right]\right)^2$$

So, we will also need to differentiate that. However, the threshold function we used in the single-layer perceptron was discontinuous, making it non-differentiable. Thus, we need to modify it as well. An alternative is to employ some type of sigmoid function, such as the logistic, which can be parameterized to resemble a threshold function, but varies smoothly across its range.

$$g(h) = \frac{1}{1 + \exp(-\beta h)}$$

```python
logistic = lambda h, beta: 1./(1 + np.exp(-beta * h))

@interact(beta=(-1, 25))
def logistic_plot(beta=5):
    hvals = np.linspace(-2, 2)
    plt.plot(hvals, logistic(hvals, beta))
```

This has the advantage of having a simple derivative:

$$\frac{dg}{dh} = \beta g(h)(1 - g(h))$$

Alternatively, the hyperbolic tangent function is also sigmoid:

$$g(h) = \tanh(h) = \frac{\exp(h) - \exp(-h)}{\exp(h) + \exp(-h)}$$

```python
hyperbolic_tangent = lambda h: (np.exp(h) - np.exp(-h)) / (np.exp(h) + np.exp(-h))

@interact(theta=(-1, 25))
def tanh_plot(theta=5):
    hvals = np.linspace(-2, 2)
    h = hvals*theta
    plt.plot(hvals, hyperbolic_tangent(h))
```

Notice that the hyperbolic tangent function asymptotes at -1 and 1, rather than 0 and 1, which is sometimes beneficial, and its derivative is simple:

$$\frac{d \tanh(x)}{dx} = 1 - \tanh^2(x)$$

Performing gradient descent will allow us to change the weights in the direction that optimally reduces the error. The next trick will be to employ the **chain rule** to decompose how the error changes as a function of the input weights into the change in error as a function of changes in the inputs to the weights, multiplied by the changes in input values as a function of changes in the weights.

$$\frac{\partial E}{\partial w} = \frac{\partial E}{\partial h}\frac{\partial h}{\partial w}$$

This will allow us to write a function describing the activations of the output weights as a function of the activations of the hidden layer nodes and the output weights, which will allow us to propagate error backwards through the network.

The second term in the chain rule simplifies to:

$$\begin{align}
\frac{\partial h_k}{\partial w_{jk}} &= \frac{\partial \sum_l w_{lk} a_l}{\partial w_{jk}} \\
&= \sum_l \frac{\partial w_{lk} a_l}{\partial w_{jk}} \\
& = a_j
\end{align}$$

where $a_j$ is the activation of the jth hidden layer neuron.

For the first term in the chain rule above, we decompose it as well:

$$\frac{\partial E}{\partial h_k} = \frac{\partial E}{\partial y_k}\frac{\partial y_k}{\partial h_k} = \frac{\partial E}{\partial g(h_k)}\frac{\partial g(h_k)}{\partial h_k}$$

The second term of this chain rule is just the derivative of the activation function, which we have chosen to have a convenient form, while the first term simplifies to:

$$\frac{\partial E}{\partial g(h_k)} = \frac{\partial}{\partial g(h_k)}\left[\frac{1}{2} \sum_k (t_k - y_k)^2 \right] = -(t_k - y_k)$$

Combining these, and assuming (for illustration) a logistic activation function, we have the gradient:

$$\frac{\partial E}{\partial w} = -(t_k - y_k) y_k (1-y_k) a_j$$

Taking a gradient descent step, $w_{jk} \leftarrow w_{jk} - \eta \frac{\partial E}{\partial w}$, the two minus signs cancel and we recover an update with the same form as the one we saw in the single-layer perceptron:

$$w_{jk} \leftarrow w_{jk} + \eta (t_k - y_k) y_k (1-y_k) a_j$$

Note that we are still *subtracting* the gradient, since we are doing gradient descent; it is the minus sign inside the gradient itself that turns the update into an addition.

We can now outline the MLP learning algorithm:

1. Initialize all $w_{jk}$ to small random values
2. For each input vector, conduct forward propagation:
    * compute activation of each neuron $j$ in hidden layer (here, sigmoid):
    $$h_j = \sum_i x_i v_{ij}$$
    $$a_j = g(h_j) = \frac{1}{1 + \exp(-\beta h_j)}$$
    * when the output layer is reached, calculate outputs similarly:
    $$h_k = \sum_j a_j w_{jk}$$
    $$y_k = g(h_k) = \frac{1}{1 + \exp(-\beta h_k)}$$
3. Calculate loss for resulting predictions:
    * compute error at output:
    $$\delta_k = (t_k - y_k) y_k (1-y_k)$$
4. Conduct backpropagation to get partial derivatives of cost with respect to weights, and use these to update weights:
    * compute error of the hidden layers:
    $$\delta_{hj} = \left[\sum_k w_{jk} \delta_k \right] a_j(1-a_j)$$
    * update output layer weights:
    $$w_{jk} \leftarrow w_{jk} + \eta \delta_k a_j$$
    * update hidden layer weights:
    $$v_{ij} \leftarrow v_{ij} + \eta \delta_{hj} x_i$$

Return to (2) and iterate until learning completes. Best practice is to shuffle input vectors to avoid training in the same order.

It's important to be aware that because gradient descent is a hill-climbing (or descending) algorithm, it is liable to be caught in local minima with respect to starting values. Therefore, it is worthwhile training several networks using a range of starting values for the weights, so that you have a better chance of discovering a globally-competitive solution.

One useful performance enhancement for the MLP learning algorithm is the addition of **momentum** to the weight updates. This is just a coefficient on the previous weight update that increases the correlation between the current weight and the weight after the next update. This is particularly useful for complex models, where falling into local minima is an issue; adding momentum will give some weight to the previous direction, making the resulting weights essentially a weighted average of the two directions. Adding momentum, along with a smaller learning rate, usually results in a more stable algorithm with quicker convergence. When we use momentum, we lose the guarantee that every step moves directly along the current gradient direction, but this is generally seen as a small price to pay for the improvement momentum usually gives.

A weight update with momentum looks like this:

$$w_{jk} \leftarrow w_{jk} + \eta \delta_k a_j + \alpha \Delta w_{jk}^{t-1}$$

where $\alpha$ is the momentum (regularization) parameter and $\Delta w_{jk}^{t-1}$ the update from the previous iteration.

The multi-layer perceptron is implemented below in the `MLP` class. The implementation uses the scikit-learn interface, so it is used in the same way as other supervised learning algorithms in that package.
```python
# row-normalize; applied to exponentiated scores this yields the softmax probabilities
softmax = lambda a: a / np.sum(a, axis=1, keepdims=True)

class MLP:

    def __init__(self, alpha=0.01, eta=0.01, n_hidden_dim=25):
        self.alpha = alpha
        self.eta = eta
        self.n_hidden_dim = n_hidden_dim

    # Helper function to evaluate the total loss on the dataset
    def calculate_loss(self, X, y):
        num_examples = X.shape[0]
        # Forward propagation to calculate our predictions
        z1 = X.dot(self.w1) + self.b1
        a1 = np.tanh(z1)
        z2 = a1.dot(self.w2) + self.b2
        exp_scores = np.exp(z2)
        probs = softmax(exp_scores)
        # Calculating the loss
        data_loss = -np.log(probs[range(num_examples), y]).sum()
        # Add regularization term to loss (optional)
        data_loss += self.alpha/2 * (np.square(self.w1).sum() + np.square(self.w2).sum())
        return 1./num_examples * data_loss

    # Helper function to predict an output (0 or 1)
    def predict(self, x):
        # Forward propagation
        z1 = x.dot(self.w1) + self.b1
        a1 = np.tanh(z1)
        z2 = a1.dot(self.w2) + self.b2
        exp_scores = np.exp(z2)
        probs = softmax(exp_scores)
        return np.argmax(probs, axis=1)

    def fit(self, X, y, num_passes=20000, print_loss=False, seed=42):

        num_examples, nn_input_dim = X.shape
        nn_output_dim = len(set(y))

        # Initialize the parameters to random values. We need to learn these.
        np.random.seed(seed)
        self.w1 = np.random.randn(nn_input_dim, self.n_hidden_dim) / np.sqrt(nn_input_dim)
        self.b1 = np.zeros((1, self.n_hidden_dim))
        self.w2 = np.random.randn(self.n_hidden_dim, nn_output_dim) / np.sqrt(self.n_hidden_dim)
        self.b2 = np.zeros((1, nn_output_dim))

        # Gradient descent. For each batch...
        for i in range(num_passes):

            # Forward propagation
            z1 = X.dot(self.w1) + self.b1
            a1 = np.tanh(z1)
            z2 = a1.dot(self.w2) + self.b2
            exp_scores = np.exp(z2)

            # Backpropagation
            delta3 = softmax(exp_scores)
            delta3[range(num_examples), y] -= 1
            dw2 = (a1.T).dot(delta3)
            db2 = np.sum(delta3, axis=0, keepdims=True)
            delta2 = delta3.dot(self.w2.T) * (1 - np.power(a1, 2))
            dw1 = np.dot(X.T, delta2)
            db1 = np.sum(delta2, axis=0)

            # Add regularization terms (b1 and b2 don't have regularization terms)
            dw2 += self.alpha * self.w2
            dw1 += self.alpha * self.w1

            # Gradient descent parameter update
            self.w1 += -self.eta * dw1
            self.b1 += -self.eta * db1
            self.w2 += -self.eta * dw2
            self.b2 += -self.eta * db2

            # Optionally print the loss.
            # This is expensive because it uses the whole dataset, so we don't want to do it too often.
            if print_loss and i % 1000 == 0:
                print("Loss after iteration %i: %f" %(i, self.calculate_loss(X, y)))
```

Let's initialize an MLP classifier with its default settings.

```python
mlp = MLP()
```

Now we can confirm that it solves a non-linear classification problem, using the simple XOR example.

```python
X = XOR[['x1','x2']].values
y = XOR['y'].values
```

```python
mlp.fit(X, y, num_passes=100)
```

```python
mlp.predict(X)
```

For a somewhat more sophisticated example, we can use scikit-learn to simulate some data with a non-linear boundary.
```python # Generate a dataset and plot it np.random.seed(0) X, y = datasets.make_moons(200, noise=0.20) X = X.astype(np.float32) y = y.astype(np.int32) plt.scatter(X[:,0], X[:,1], s=40, c=y, cmap=plt.cm.Set1) ``` ```python clf = MLP(n_hidden_dim=3) clf.fit(X, y) ``` ```python def plot_decision_boundary(pred_func, X=X, y=y): # Set min and max values and give it some padding x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5 y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5 h = 0.01 # Generate a grid of points with distance h between them xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) # Predict the function value for the whole gid Z = pred_func(np.c_[xx.ravel(), yy.ravel()]) Z = Z.reshape(xx.shape) # Plot the contour and training examples plt.contourf(xx, yy, Z, cmap=plt.cm.summer_r) plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Greens) ``` ```python plot_decision_boundary(lambda x: clf.predict(x)) plt.title("Decision Boundary for hidden layer size 3") ``` Since we used the scikit-learn interface, its easy to take advantage of the `metrics` module to evaluate the MLP's performance. ```python X, y = datasets.make_moons(50, noise=0.20) X = X.astype(np.float32) y = y.astype(np.int32) ``` ```python from sklearn.metrics import accuracy_score, confusion_matrix accuracy_score(y, clf.predict(X)) ``` ```python confusion_matrix(y, clf.predict(X)) ``` # Varying the hidden layer size In the example above we picked a hidden layer size of 3. Let's now get a sense of how varying the hidden layer size affects the result. ```python plt.figure(figsize=(16, 32)) n_hidden_dim = [1, 2, 3, 4, 5, 20, 50] for i, h in enumerate(n_hidden_dim): plt.subplot(5, 2, i+1) plt.title('Hidden Layer size %d' % h) model = MLP(n_hidden_dim=h) model.fit(X, y) plot_decision_boundary(lambda x: model.predict(x)) ``` ### Neural network specification The MLP implemented above uses a single hidden layer, though it allows a user-specified number of hidden layer nodes (defaults to 25). It is worth considering whether it is useful having **multiple hidden layers**, and whether more hidden nodes is desirable. Unfortunately, there is no theory to guide the choice of hidden node number. As a result, we are left to experiment with this parameter, perhaps in some systematic fashion such as cross-validation. Adding additional layers presents only additional "bookkeeping" overhead to the user, with the weight updating becoming more complicated as layers are added. So, we don't want to add more hidden layers if it does not pay off in performance. It turns out that two or three layers (including the output layer) can be shown to approximate almost any smooth function. Combining 3 sigmoid functions allows local responses to be approximated with arbitrary accuracy. This is sufficient for determining any decision boundary. ### Neural network validation Just as with other supervised learning algorithms, neural networks can be under- or over-fit to a training dataset. The degree to which a network is trained to a particular dataset depends on how long we train it on that dataset. Every time we run the MLP learning algorithm over a dataset (an **epoch**), it reduces the prediction error for that dataset. Thus, the number of epochs should be tuned as a hyperparameter, stopping when the testing-training error gap begins to widen. Note that though we can also use cross-validation to tune the number of hidden layers in the network, there is no risk of overfitting by having too many layers. 
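One simple way to see this in practice (a sketch, not part of the original notebook) is to hold out a validation set and compare training and validation accuracy as the amount of training grows, reusing the `MLP` class defined above on synthetic data. The variable names (`X_tr`, `X_val`, etc.) are arbitrary, and each call to `fit` retrains from scratch, so this is only a rough probe of training length.

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# hold out a validation set from synthetic two-class data
Xm, ym = make_moons(300, noise=0.25, random_state=1)
ym = ym.astype(np.int32)
X_tr, X_val, y_tr, y_val = train_test_split(Xm, ym, test_size=0.3, random_state=1)

# retrain with an increasing number of passes and watch where validation accuracy flattens
for n_passes in (100, 500, 1000, 5000, 20000):
    model = MLP(n_hidden_dim=10)
    model.fit(X_tr, y_tr, num_passes=n_passes)
    print(n_passes,
          accuracy_score(y_tr, model.predict(X_tr)),
          accuracy_score(y_val, model.predict(X_val)))
```

The point at which the validation score stops improving suggests a reasonable amount of training; the exercise below asks for a more careful, cross-validated version of this idea.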
### Exercise: Epoch tuning for MLPs

The dataset `pima-indians-diabetes.data.txt` in your data folder contains eight measurements taken from a group of Pima Native Americans in Arizona, along with an indicator of the onset of diabetes. Use the MLP class to fit a neural network classifier to this dataset, and use cross-validation to examine the optimal number of epochs to use in training.

1. Number of times pregnant
2. Plasma glucose concentration at 2 hours in an oral glucose tolerance test
3. Diastolic blood pressure (mm Hg)
4. Triceps skin fold thickness (mm)
5. 2-Hour serum insulin (mu U/ml)
6. Body mass index (weight in kg/(height in m)^2)
7. Diabetes pedigree function
8. Age (years)
9. Class variable (0 or 1)

```python
pima = pd.read_csv('../data/pima-indians-diabetes.data.txt', header=None)
pima.head()
```

```python
# Write your answer here
```

## Example: Multilayer perceptron in TensorFlow

TensorFlow is designed to evaluate expressions efficiently, and one of its key features is that it automatically differentiates expressions. This saves us from having to code a gradient function by hand.

For this, we will use the MNIST dataset, which we will introduce more formally later.

```python
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
```

```python
# Parameters
learning_rate = 0.001
training_epochs = 15
batch_size = 100
display_step = 1

# Network Parameters
nn_hdim = 256 # hidden layer number of neurons
nn_input_dim = 784 # MNIST data input (img shape: 28*28)
nn_output_dim = 10 # MNIST total classes (0-9 digits)
```

## Defining the Computation Graph in TensorFlow

The first thing we need to do is define our computations using TensorFlow. We start by defining our input data matrix `X` and our training labels `y`:

```python
# Our data vectors
X = tf.placeholder('float', [None, nn_input_dim], name='X')
Y = tf.placeholder('float', [None, nn_output_dim], name='y')
```

Remember, we have not assigned any values to `X` or `y`. All we have done is defined mathematical expressions for them. We can use these expressions in subsequent calculations.

If we want to evaluate an expression, we can call its `eval` method. For example, to evaluate the expression `Y * 2` for a given value of `Y` we could do the following:

```python
tf.InteractiveSession()
```

This creates a TensorFlow `Session` that can be used interactively. More on this later.

```python
(Y * 2).eval(feed_dict={Y : np.random.randn(10)[None, :]})
```

Recall that Theano has shared variables, which have values associated with them; TensorFlow's analogue is the `tf.Variable`. A variable's value is kept in memory and can be shared by all operations that use it. Variables can also be updated, and TensorFlow includes low-level optimizations that make updating them very efficient, especially on GPUs.

Our network parameters $W_1, b_1, W_2, b_2$ are constantly updated using gradient descent, so they can be represented by variables:

```python
# Store layers weight & bias
weights = {
    'W1': tf.Variable(tf.random_normal([nn_input_dim, nn_hdim])),
    'W2': tf.Variable(tf.random_normal([nn_hdim, nn_hdim])),
    'out': tf.Variable(tf.random_normal([nn_hdim, nn_output_dim]))
}
biases = {
    'b1': tf.Variable(tf.random_normal([nn_hdim])),
    'b2': tf.Variable(tf.random_normal([nn_hdim])),
    'out': tf.Variable(tf.random_normal([nn_output_dim]))
}
```

We can now build our forward propagation step for the perceptron in TensorFlow. Note that we do not need to code the backpropagation step explicitly.
TensorFlow knows that our gradients depend on our predictions from the forward propagation and it will handle all the necessary calculations for us. It does everything it needs to update the values. ```python def multilayer_perceptron(x): # Hidden fully connected layer with 256 neurons layer_1 = tf.add(tf.matmul(x, weights['W1']), biases['b1']) # Hidden fully connected layer with 256 neurons layer_2 = tf.add(tf.matmul(layer_1, weights['W2']), biases['b2']) # Output fully connected layer with a neuron for each class out_layer = tf.matmul(layer_2, weights['out']) + biases['out'] return out_layer ``` Rather than calling the `eval` method to evaluate our TF expressions, we can instead define operators for expressions we want to evaluate. For example, to calculate the loss, we need to know the values for $X$ and $y$, and pass them to our loss function. ```python # Construct model logits = multilayer_perceptron(X) # Define loss and optimizer loss_op = tf.reduce_mean(tf.losses.softmax_cross_entropy( logits=logits, onehot_labels=Y)) optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate) train_op = optimizer.minimize(loss_op) # Initializing the variables init = tf.global_variables_initializer() ``` ```python with tf.Session() as sess: sess.run(init) # Training cycle for epoch in range(training_epochs): avg_cost = 0. total_batch = int(mnist.train.num_examples/batch_size) # Loop over all batches for i in range(total_batch): batch_x, batch_y = mnist.train.next_batch(batch_size) # Run optimization op (backprop) and cost op (to get loss value) _, c = sess.run([train_op, loss_op], feed_dict={X: batch_x, Y: batch_y}) # Compute average loss avg_cost += c / total_batch # Display logs per epoch step if epoch % display_step == 0: print("Epoch:", '%04d' % (epoch+1), "cost={:.9f}".format(avg_cost)) print("Optimization Finished!") # Test model pred = tf.nn.softmax(logits) # Apply softmax to logits correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(Y, 1)) # Calculate accuracy accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float")) print("Accuracy:", accuracy.eval({X: mnist.test.images, Y: mnist.test.labels})) ``` --- ## References [The TensorFlow Playground](https://playground.tensorflow.org/) T. Hastie, R. Tibshirani and J. Friedman. (2009) [Elements of Statistical Learning: Data Mining, Inference, and Prediction](http://statweb.stanford.edu/~tibs/ElemStatLearn/), second edition. Springer. X. Glorot, A. Bordes and Y. Bengio (2011). [Deep sparse rectifier neural networks (PDF)](http://proceedings.mlr.press/v15/glorot11a/glorot11a.pdf). AISTATS. S. Marsland. (2009) [Machine Learning: An Algorithmic Perspective](http://seat.massey.ac.nz/personal/s.r.marsland/MLBook.html). CRC Press. D. Rodriguez. (2013) [Basic [1 hidden layer] neural network on Python](http://danielfrg.com/blog/2013/07/03/basic-neural-network-python/). D. Britz. (2015) [Implementing a Neural Network from Scratch](http://www.wildml.com/2015/09/implementing-a-neural-network-from-scratch/)
8f77073df43d493ff2f0388427aa5517bcdfa4cf
42,672
ipynb
Jupyter Notebook
notebooks/Day5_2-Neural-Networks.ipynb
fonnesbeck/cqs_machine_learning
0e82dbde2e09a255d2e6e374db6a3737d2b64e36
[ "MIT" ]
5
2018-07-26T20:05:02.000Z
2019-08-14T05:04:36.000Z
notebooks/Day5_2-Neural-Networks.ipynb
noisyoscillator/cqs_machine_learning
0e82dbde2e09a255d2e6e374db6a3737d2b64e36
[ "MIT" ]
null
null
null
notebooks/Day5_2-Neural-Networks.ipynb
noisyoscillator/cqs_machine_learning
0e82dbde2e09a255d2e6e374db6a3737d2b64e36
[ "MIT" ]
17
2018-08-03T17:08:36.000Z
2022-03-16T15:03:42.000Z
39.148624
786
0.595683
true
7,931
Qwen/Qwen-72B
1. YES 2. YES
0.948917
0.855851
0.812132
__label__eng_Latn
0.991118
0.725187
``` %matplotlib inline from IPython.display import display from sympy import * from sympy.abc import x, a, n k = Symbol("k", positive=True, integer=True) init_printing() ``` ``` n = 6 tj = [2*pi*j/n for j in range(n)] display(tj) ``` $$\begin{bmatrix}0, & \frac{\pi}{3}, & \frac{2 \pi}{3}, & \pi, & \frac{4 \pi}{3}, & \frac{5 \pi}{3}\end{bmatrix}$$ ``` fj = [expand((t - pi)*t*(t-2*pi)) for t in tj] display(fj) ``` $$\begin{bmatrix}0, & \frac{10 \pi^{3}}{27}, & \frac{8 \pi^{3}}{27}, & 0, & - \frac{8 \pi^{3}}{27}, & - \frac{10 \pi^{3}}{27}\end{bmatrix}$$ ``` ak = [2/n * sum([fj[j]*cos(k*tj[j]) for j in range(n)]) for k in range(4)] display(ak) ``` $$\begin{bmatrix}0, & 0, & 0, & 0\end{bmatrix}$$ ``` zk = [[simplify(fj[j]*sin(k*tj[j])) for j in range(n)] for k in range(1, 4)] display(zk) bk = [nsimplify(2/n * sum(zk[k])) for k in range(0, 3)] display(bk) ``` $$\begin{bmatrix}\begin{bmatrix}0, & \frac{5 \sqrt{3}}{27} \pi^{3}, & \frac{4 \sqrt{3}}{27} \pi^{3}, & 0, & \frac{4 \sqrt{3}}{27} \pi^{3}, & \frac{5 \sqrt{3}}{27} \pi^{3}\end{bmatrix}, & \begin{bmatrix}0, & \frac{5 \sqrt{3}}{27} \pi^{3}, & - \frac{4 \sqrt{3}}{27} \pi^{3}, & 0, & - \frac{4 \sqrt{3}}{27} \pi^{3}, & \frac{5 \sqrt{3}}{27} \pi^{3}\end{bmatrix}, & \begin{bmatrix}0, & 0, & 0, & 0, & 0, & 0\end{bmatrix}\end{bmatrix}$$ $$\begin{bmatrix}\frac{2 \sqrt{3}}{9} \pi^{3}, & \frac{2 \sqrt{3}}{81} \pi^{3}, & 0\end{bmatrix}$$ ``` plot1 = plot((x-pi)*x*(x-2*pi),(x, 0, 2*pi), show=False) plot1.extend(plot(bk[0]*sin(x)+bk[1]*sin(2*x), (x, 0, 2*pi), show=False, line_color='r')) plot1.extend(plot(bk[0]*sin(x), (x, 0, 2*pi), show=False, line_color='g')) plot1.extend(plot(bk[1]*sin(2*x), (x, 0, 2*pi), show=False, line_color='y')) plot1.show() ``` ``` simplify(integrate(1/pi * x * (x-pi) * (x-2*pi) * cos(k*x), (x, 0, 2*pi))) ``` ``` simplify(integrate(1/pi * x * (x-pi) * (x-2*pi) * sin(k*x), (x, 0, 2*pi))) ``` ``` [N(bk[j]) for j in range(3)] ``` $$\begin{bmatrix}11.9343214586261, & 1.32603571762512, & 0\end{bmatrix}$$ ``` [12/(j**3) for j in range(1, 4)] ``` $$\begin{bmatrix}12.0, & 1.5, & 0.4444444444444444\end{bmatrix}$$
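As a quick numerical cross-check of the symbolic results above, the same discrete coefficients can be computed with plain NumPy and compared against the exact Fourier sine coefficients $12/k^3$ (a small sketch; no symbolic algebra involved):

```python
import numpy as np

n = 6
t = 2 * np.pi * np.arange(n) / n          # the nodes t_j = 2*pi*j/n used above
f = t * (t - np.pi) * (t - 2 * np.pi)     # samples of f(t) = (t - pi) t (t - 2 pi)

a = [2 / n * np.sum(f * np.cos(k * t)) for k in range(4)]     # cosine coefficients
b = [2 / n * np.sum(f * np.sin(k * t)) for k in range(1, 4)]  # sine coefficients

print(np.round(a, 12))                 # numerically zero: f is odd about t = pi
print(b)                               # approximately [11.934, 1.326, 0]
print([12 / k**3 for k in (1, 2, 3)])  # exact coefficients 12/k^3 for comparison
```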
caa622776a3e73d680d12fc53ace120f12546661
34,334
ipynb
Jupyter Notebook
Aufgabe 23).ipynb
bschwb/Numerik
dcd178847104c382474142eae3365b6df76d8dbf
[ "MIT" ]
null
null
null
Aufgabe 23).ipynb
bschwb/Numerik
dcd178847104c382474142eae3365b6df76d8dbf
[ "MIT" ]
null
null
null
Aufgabe 23).ipynb
bschwb/Numerik
dcd178847104c382474142eae3365b6df76d8dbf
[ "MIT" ]
null
null
null
125.765568
24,723
0.831275
true
978
Qwen/Qwen-72B
1. YES 2. YES
0.914901
0.83762
0.766339
__label__kor_Hang
0.091004
0.618795
```python from sympy import * init_printing(use_latex='mathjax') Re,theta_r,D,rho,L_x,lam,tau,k,x = symbols('Re theta_r D rho L_x lambda tau k x', positive=True) C0 = symbols('C0') ``` ```python rho = solve(Re - rho*sqrt(D/rho/theta_r)*L_x/D,rho)[0] # density from Reynolds number Re V_p = sqrt(D/rho/theta_r) # velocity scale - wave velocity ``` ```python H = C0*exp(-lam*tau*V_p/L_x)*sin(pi*k*x/L_x) q = C0*exp(-lam*tau*V_p/L_x)*cos(pi*k*x/L_x) ``` ```python eq1 = rho*diff(H,tau) + diff(q,x) # mass balance eq2 = theta_r*diff(q,tau) + q + D*diff(H,x) # momentum balance ``` ```python disp = diff(eq1,tau)*theta_r - diff(eq2,x) + eq1 disp = expand(disp) disp = expand(simplify(disp/disp.coeff(lam**2))) disp ``` $\displaystyle - Re \lambda + \pi^{2} k^{2} + \lambda^{2}$ ```python sol = solve(disp,lam) disc = discriminant(disp,lam) Re_opt = solve(disc,Re)[0].subs(k,1) Re_opt ``` $\displaystyle 2 \pi$ ```python from sympy import maximum, lambdify import numpy as np import warnings warnings.filterwarnings('ignore') lam = [re(s.subs(k,1)) for s in sol] lamf = lambdify(Re,lam,"numpy") Re1 = np.linspace(0,2*float(Re_opt),1000) lam1 = np.stack(lamf(Re1),axis=0) lam1 = np.nanmin(lam1,axis=0) ``` ```python %matplotlib notebook from matplotlib import pyplot as plt plt.rcParams.update({"text.usetex": True, "font.size": 14}) f = plt.figure(figsize=(5,5)) ax = plt.subplot() ax.plot(Re1,lam1) ax.axvline(x=Re_opt.evalf(),ls='--',c='k',linewidth=2) ax.set_xlabel("$Re$"); ax.set_ylabel("$\mathrm{min}\{\Re(\lambda_k)\}$") ax.set_xticks([0,np.pi,2*np.pi,3*np.pi,4*np.pi]) ax.set_xticklabels(["$0$","$\pi$","$2\pi$","$3\pi$","$4\pi$"]) ax.set_xlim(0,4*np.pi) ax.grid() ``` <IPython.core.display.Javascript object> ```python ```
c6c9f411c1bf47625bd7e1f8312eeed8708ba627
196,802
ipynb
Jupyter Notebook
dispersion_analysis/dispersion_analysis_stationary_diffusion1D.ipynb
PTsolvers/PseudoTransientDiffusion.jl
07b3e2e52d04a3f3f9e8fb724bca740ef57249df
[ "MIT" ]
1
2021-12-06T19:25:10.000Z
2021-12-06T19:25:10.000Z
dispersion_analysis/dispersion_analysis_stationary_diffusion1D.ipynb
PTsolvers/PseudoTransientDiffusion.jl
07b3e2e52d04a3f3f9e8fb724bca740ef57249df
[ "MIT" ]
null
null
null
dispersion_analysis/dispersion_analysis_stationary_diffusion1D.ipynb
PTsolvers/PseudoTransientDiffusion.jl
07b3e2e52d04a3f3f9e8fb724bca740ef57249df
[ "MIT" ]
null
null
null
172.784899
147,871
0.847801
true
643
Qwen/Qwen-72B
1. YES 2. YES
0.907312
0.782662
0.710119
__label__eng_Latn
0.158003
0.488176
# Gaussian Processes In this exercise, you will implement Gaussian process regression and apply it to a toy and a real dataset. We use the notation used in the paper "Rasmussen (2005). Gaussian Processes in Machine Learning" linked on ISIS. Let us first draw a training set $X = (x_1,\dots,x_n)$ and a test set $X_\star = (x^\star_1,\dots,x^\star_m)$ from a $d$-dimensional input distribution. The Gaussian Process is a model under which the real-valued outputs $\mathbf{f} = (f_1,\dots,f_n)$ and $\mathbf{f}_\star = (f^\star_1,\dots,f^\star_m)$ associated to $X$ and $X_\star$ follow the Gaussian distribution: \begin{equation*} \left[ \begin{array}{c}\mathbf{f}\\ \mathbf{f}_\star\end{array} \right] \sim \mathcal{N} \left( \left[ \begin{array}{c} \boldsymbol{0}\\ \boldsymbol{0} \end{array} \right] , \left[ \begin{array}{cc} \Sigma & \Sigma_\star\\ \Sigma_\star^\top & \Sigma_{\star\star} \end{array} \right] \right) \end{equation*} where \begin{align*} \Sigma &= k(X,X)+\sigma^2 I\\ \Sigma_\star &= k(X,X_\star)\\ \Sigma_{\star\star} &= k(X_\star,X_\star)+\sigma^2 I \end{align*} and where $k(\cdot,\cdot)$ is the Gaussian kernel function. (The kernel function is implemented in `utils.py`.) Predicting the output for new data points $X_\star$ is achieved by conditioning the joint probability distribution on the training set. Such conditional distribution called posterior distribution can be written as: \begin{equation} \mathbf{f}_\star | \mathbf{f} \sim \mathcal{N} ( \underbrace{\Sigma_\star^\top \Sigma^{-1} \mathbf{f}}_{\boldsymbol{\mu}_\star} ~,~ \underbrace{\Sigma_{\star\star} - \Sigma_\star^\top \Sigma^{-1} \Sigma_\star}_{C_\star} ) \end{equation} Having inferred the posterior distribution, the log-likelihood of observing for the inputs $X_\star$ the outputs $\mathbf{y}_\star$ is given by evaluating the distribution $\mathbf{f}_\star | \mathbf{f}$ at $\mathbf{y}_\star$: \begin{equation} \log p(\mathbf{y}_\star | \mathbf{f}) = -\frac1{2} (\mathbf{y}_\star - \boldsymbol{\mu}_\star)^\top C^{-1}_\star (\mathbf{y}_\star - \boldsymbol{\mu}_\star) - \frac1{2}\log|C_\star| - \frac{m}{2}\log2\pi \end{equation} where $|\cdot|$ is the determinant. Note that the likelihood of the data given this posterior distribution can be measured both for the training data and the test data. ## Part 1: Implementing a Gaussian Process (30 P) **Tasks:** * **Create a class `GP_Regressor` that implements a Gaussian process regressor and has the following three methods:** * **`def __init__(self,Xtrain,Ytrain,width,noise):`** Initialize a Gaussian process with noise parameter $\sigma$ and width parameter $w$. The variable `Xtrain` is a two-dimensional array where each row is one data point from the training set. The Variable `Ytrain` is a vector containing the associated targets. The function must also precompute the matrix $\Sigma^{-1}$ for subsequent use by the method `predict()` and `loglikelihood()`. * **`def predict(self,Xtest):`** For the test set $X_\star$ of $m$ points received as parameter, return the mean vector of size $m$ and covariance matrix of size $m \times m$ of the corresponding output, that is, return the parameters $(\boldsymbol{\mu}_\star,C_\star)$ of the Gaussian distribution $\mathbf{f}_\star | \mathbf{f}$. * **`def loglikelihood(self,Xtest,Ytest):`** For a data set $X_\star$ of $m$ test points received as first parameter, return the loglikelihood of observing the outputs $\mathbf{y}_\star$ received as second parameter. 
```python # -------------------------- import utils import numpy as np import math class GP_Regressor: def __init__(self,Xtrain,Ytrain,width,noise): self.width=width self.noise=noise self.X=Xtrain self.Y=Ytrain self.kXX=utils.gaussianKernel(self.X,self.X,self.width)+self.noise**2*numpy.identity(len(self.X)) self.kXXI=numpy.linalg.inv(self.kXX) def predict(self,Xtest): self.kZZ=utils.gaussianKernel(Xtest,Xtest,self.width)+self.noise**2*numpy.identity(len(Xtest)) self.kXZ=utils.gaussianKernel(self.X,Xtest,self.width) self.kZX=self.kXZ.T mean=self.kZX.dot(self.kXXI.dot(self.Y)) cov=self.kZZ-self.kZX.dot(self.kXXI).dot(self.kXZ) return mean,cov def loglikelihood(self,Xtest,Ytest): mean,cov=self.predict(Xtest) cov1=np.linalg.inv(cov) q=-0.5*cov1.dot(Ytest-mean).dot(Ytest-mean) q1=-0.5*np.linalg.slogdet(cov)[1] q2=-0.5*Xtest.shape[0]*numpy.log(2*numpy.pi) return q+q1+q2 #---------------------------- ``` * **Test your implementation by running the code below (it visualizes the mean and variance of the prediction at every location of the input space) and compares the behavior of the Gaussian process for various noise parameters $\sigma$ and width parameters $w$.** ```python import utils,datasets,numpy import matplotlib.pyplot as plt %matplotlib inline # Open the toy data Xtrain,Ytrain,Xtest,Ytest = utils.split(*datasets.toy()) # Create an analysis distribution Xrange = numpy.arange(-3.5,3.51,0.025)[:,numpy.newaxis] f = plt.figure(figsize=(18,15)) # Loop over several parameters: for i,noise in enumerate([2.5,0.5,0.1]): for j,width in enumerate([0.1,0.5,2.5]): # Create Gaussian process regressor object gp = GP_Regressor(Xtrain,Ytrain,width,noise) # Compute the predicted mean and variance for test data mean,cov = gp.predict(Xrange) var = cov.diagonal() # Compute the log-likelihood of training and test data lltrain = gp.loglikelihood(Xtrain,Ytrain) lltest = gp.loglikelihood(Xtest ,Ytest ) # Plot the data p = f.add_subplot(3,3,3*i+j+1) p.set_title('noise=%.1f width=%.1f lltrain=%.1f, lltest=%.1f'%(noise,width,lltrain,lltest)) p.set_xlabel('x') p.set_ylabel('y') p.scatter(Xtrain,Ytrain,color='green',marker='x') # training data p.scatter(Xtest,Ytest,color='green',marker='o') # test data p.plot(Xrange,mean,color='blue') # GP mean p.plot(Xrange,mean+var**.5,color='red') # GP mean + std p.plot(Xrange,mean-var**.5,color='red') # GP mean - std p.set_xlim(-3.5,3.5) p.set_ylim(-4,4) ``` ## Part 2: Application to the Yacht Hydrodynamics Data Set (20 P) In the second part, we would like to apply the Gaussian process regressor that you have implemented to a real dataset: the Yacht Hydrodynamics Data Set available on the UCI repository at the webpage http://archive.ics.uci.edu/ml/datasets/Yacht+Hydrodynamics. As stated on the web page, the input variables for this regression problem are: 1. Longitudinal position of the center of buoyancy 2. Prismatic coefficient 3. Length-displacement ratio 4. Beam-draught ratio 5. Length-beam ratio 6. Froude number and we would like to predict from these variables the residuary resistance per unit weight of displacement (last column in the file `yacht_hydrodynamics.data`). **Tasks:** * **Load the data using `datasets.yacht()` and partition the data between training and test set using the function `utils.split()`. 
Standardize the data (center and rescale) so that each dimension of the training data and the labels have mean 0 and standard deviation 1 over the training set.** * **Train several Gaussian processes on the regression task using various combinations of width and noise parameters.** * **Draw two contour plots where the training and test log-likelihood are plotted as a function of the noise and width parameters. Choose suitable ranges of parameters so that the best parameter combination for the test set is in the plot. Use the same ranges and contour levels for the training and test plots. The contour levels can be chosen linearly spaced between e.g. 50 and the maximum log-likelihood value** ```python # -------------------------- # TODO: Replace by your code # -------------------------- from matplotlib import pyplot as plt Xtrain,Ytrain,Xtest,Ytest = utils.split(*datasets.yacht()) xmean, xstd = Xtrain.mean(axis=0),Xtrain.std(axis=0) ymean, ystd = Ytrain.mean(), Ytrain.std() Xtrain = (Xtrain-xmean) / xstd Ytrain = (Ytrain-ymean) / ystd Xtest = (Xtest -xmean) / xstd Ytest = (Ytest-ymean) / ystd noises = np.linspace(0.005,0.04,num=24) widths = np.linspace(0.05,2.0,num=24) PX,PY=numpy.meshgrid(widths,noises) lltrain = np.zeros([len(noises),len(widths)]) lltest = np.zeros([len(noises),len(widths)]) for i,noise in enumerate(noises): for j,width in enumerate(widths): gp = GP_Regressor(Xtrain,Ytrain,width,noise) lltrain[i,j] = gp.loglikelihood(Xtrain,Ytrain) lltest[i,j] = gp.loglikelihood(Xtest,Ytest) m=max(lltrain.max(),lltest.max()) f=plt.figure(figsize=(12,6)) p=f.add_subplot(1, 2, 1) p.set_title('logP(train | GP posterior)') p.set_xlabel('width') p.set_ylabel('noise') CS=p.contour(PX,PY,lltrain,levels=numpy.arange(50,m,20)) p.clabel(CS,inline=1,fontsize=10) p=f.add_subplot(1, 2, 2) p.set_xlabel('width') p.set_ylabel('noise') p.set_title('logP(test | GP posterior)') CS=p.contour(PX,PY,lltest,levels=numpy.arange(50,m,20)) p.clabel(CS,inline=1,fontsize=10) ``` ```python ```
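One practical note on the implementation above: forming $\Sigma^{-1}$ explicitly with `numpy.linalg.inv` can become numerically fragile for larger training sets or small noise levels. A common alternative is to factor $\Sigma$ with a Cholesky decomposition and solve against the factors. Below is a minimal, self-contained sketch; the inline squared-exponential kernel is an assumption and may not match the exact parameterization of `utils.gaussianKernel`, so treat it as illustrative rather than a drop-in replacement.

```python
import numpy as np

def sq_exp_kernel(A, B, width):
    """Squared-exponential kernel exp(-||a - b||^2 / (2 width^2)) -- an assumed form."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width**2))

def gp_predict_cholesky(Xtrain, Ytrain, Xtest, width, noise):
    """Posterior mean and covariance using a Cholesky solve instead of an explicit inverse."""
    K = sq_exp_kernel(Xtrain, Xtrain, width) + noise**2 * np.eye(len(Xtrain))
    Ks = sq_exp_kernel(Xtrain, Xtest, width)
    Kss = sq_exp_kernel(Xtest, Xtest, width) + noise**2 * np.eye(len(Xtest))
    L = np.linalg.cholesky(K)                                 # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, Ytrain))  # K^{-1} y
    v = np.linalg.solve(L, Ks)                                # L^{-1} K_*
    mean = Ks.T @ alpha                                       # Sigma_*^T Sigma^{-1} f
    cov = Kss - v.T @ v                                       # Sigma_** - Sigma_*^T Sigma^{-1} Sigma_*
    return mean, cov
```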
8f8ff4142ddc13e4215eceb9efcb127afdb3d1b3
359,385
ipynb
Jupyter Notebook
Ex09 - Gaussian Process/sheet09-programming.ipynb
qiaw99/Machine-Learning-1
ababda51d8fa3cdad1548bf8225991335b912eaf
[ "Apache-2.0" ]
null
null
null
Ex09 - Gaussian Process/sheet09-programming.ipynb
qiaw99/Machine-Learning-1
ababda51d8fa3cdad1548bf8225991335b912eaf
[ "Apache-2.0" ]
null
null
null
Ex09 - Gaussian Process/sheet09-programming.ipynb
qiaw99/Machine-Learning-1
ababda51d8fa3cdad1548bf8225991335b912eaf
[ "Apache-2.0" ]
null
null
null
1,099.036697
183,060
0.950493
true
2,585
Qwen/Qwen-72B
1. YES 2. YES
0.893309
0.79053
0.706188
__label__eng_Latn
0.897972
0.479043
<a href="https://colab.research.google.com/github/ragnariock/LNU_Ostap_Salo_Kiberg_Arima/blob/master/Arima.ipynb" target="_parent"></a> ```python import math import matplotlib.pyplot as plt from sympy import symbols, diff import re import numpy as np import pandas as pd from statsmodels.tsa.arima_model import ARIMA import statsmodels.api as sm ``` ```python # Data : # Продаж солодкої води : x = [i for i in range(1,37)] y = [9,10,10,11,12,18,26,40,39,28,20,13, 8,9,11,9,13,15,33,45,45,25,18,10, 7,9,10,11,13,15,31,40,35,26,19,13] ``` ```python plt.plot(x, y) plt.axis([0, 50, 0, 50]) plt.show() ``` ```python ## Stage find regressions : def Cov(a,b): a = list(a) b = list(b) avr_a = 0 for i in range(len(a)): avr_a +=a[i] avr_a /=len(a) avr_b = 0 for i in range(len(b)): avr_b += b[i] avr_b /= len(b) znamenik = 0 for i in range(len(a)): znamenik += (a[i] - avr_a) * (b[i] - avr_b) chiselnik = len(a) - 1 cov = 0 cov = znamenik / chiselnik return cov ``` ```python print('кореляція = ', Cov(x,y)) ``` кореляція = 18.62857142857143 ```python def standart_error(a,b): # Дисперсії : avr_a = 0 for i in range(len(a)): avr_a += a[i] avr_a /=len(a) avr_b = 0 for i in range(len(b)): avr_b += b[i] avr_b /= len(b) d_a = 0 # dispersia for i in range(len(a)): d_a += math.pow((a[i] - avr_a),2) d_b = 0 # dispersia for i in range(len(b)): d_b += math.pow((b[i] - avr_b),2) # Стандартне відхилення : std_a = 0 std_b = 0 std_a = math.pow(d_a,(0.5)) std_b = math.pow(d_b ,(0.5)) return std_a,std_b def rxy(a,b): std_a,std_b = standart_error(a,b) # cof_korell : cov = Cov(a,b) cof_korell = cov / (std_a * std_b) return cof_korell print('Кофіцієнт кореляції :',rxy(x,y)) ``` Кофіцієнт кореляції : 0.00429572464444998 ```python print('Кофіцієнт детермінації : ',math.pow(rxy(x,y),(0.5))) ``` Кофіцієнт детермінації : 0.06554177785542577 ```python def firstRegressor(a,b): std_a,std_b = standart_error(a,b) cov = Cov(a,b) # Регресор : first = (std_a / std_b) * cov return first firstRegressor(x,y) ``` 16.688890243688178 ```python def secondRegressor(a,b): avr_a = 0 for i in range(len(a)): avr_a += a[i] avr_a /= len(a) avr_b = 0 for i in range(len(b)): avr_b += b[i] avr_b /= len(b) first = firstRegressor(a,b) second = avr_b - (first * avr_a) return second secondRegressor(x,y) ``` -289.1333583971202 ```python ``` ```python ``` ```python def minMNK(a,b): function = 0 w0, w1 = symbols('x y', real=True) for i in range(len(a)): function +=(w1*a[i] + w0 - b[i]) ** 2 #print(function) function1W0 = diff(function,w0) function2W1 = diff(function,w1) #print(function1W0) # print(function2W1) function1W0 = str(function1W0) function2W1 = str(function2W1) result1 = re.findall(r'\d+',function1W0) result2 = re.findall(r'\d+',function2W1) for i in range(len(result1)): result1[i] = int(result1[i]) for i in range(len(result2)): result2[i] = int(result2[i]) matrix = [result1[0:2],result2[0:2]] matrix_result = [result1[2],result2[2]] #print('\n') #for i in range(2): # print(matrix[i],' \t= ',matrix_result[i]) #print(matrix) #print(matrix_result) resultAll = np.linalg.solve(matrix,matrix_result) new_w0, new_w1 = resultAll return new_w0, new_w1 minMNK(x,y) ``` (16.506349206349206, 0.16782496782496775) ```python def linerRegression(a,b,x): w0,w1 = minMNK(a,b) y = w0 + (w1*x) return y ``` ```python print('Результат лінійної регресі на 37 місяць : ', linerRegression(x,y,37)) ``` Результат лінійної регресі на 37 місяць : 22.715873015873015 ```python ``` ## DegreeData : ```python data = pd.read_csv('/content/sample_data/full_external_temperatures.csv',header=0, 
parse_dates=[0], index_col=0, squeeze=True) ``` ```python x = [] temp_x = [] temp_x.extend(data['dateTime']) for i in range(2000): x.append(temp_x[i]) x_data = [] for i in range(200): x_data.append(i) ``` ```python y = [] temp_y = [] temp_y.extend(data['data']) for i in range(200): y.append(temp_y[i]) ``` ```python test_y = [] temp_y = [] temp_y.extend(data['data']) for i in range(2003): test_y.append(temp_y[i]) ``` ```python # Автокорегресія : def auto_reg(a,b,p=1): w0,w1 = minMNK(a,b) function = w0 #print(w0) result_array = [] for i in range(p): len_arr = len(a) - i -1 w0,w1 = minMNK(a[0:len_arr],b[0:len_arr]) #print('liner - ',res_liner_regr, 'b - ',b[len_arr]) function += b[len_arr] * w1 result_array.append(function) return function auto_reg(x_data,y,p=5) ``` 8.931993883482486 ```python # Скільзьке середнє : def SMA(b,p=3): SMA_res = [] sum_b = 0 num = 0 for i in range(len(b) - p): for j in range(p): sum_b += b[i+j] SMA_res.append(sum_b / p) sum_b = 0 return SMA_res def MA_foresee(b,p): SMA_res = 0 for i in range(p): sum_b = len(b) - i -1 SMA_res += b[sum_b] SMA_res /= p return SMA_res def MAGrath(b,p=3): SMA_res = [] sum_b = 0 num = 0 for i in range(len(b) - p): for j in range(p): sum_b += b[i+j] SMA_res.append(sum_b / p) sum_b = 0 return SMA_res MA_foresee(y,2) ``` 8.254999999999999 ```python MA_arr = [0,0,0] temp_arr = SMA(y) for i in range(len(temp_arr)): MA_arr.append(temp_arr[i]) ``` ```python ``` ```python plt.plot(x_data,MA_arr,'r') plt.plot(x_data, y,'g') plt.axis([0, 50, 0, 50]) plt.show() ``` ```python test_y = [] temp_y = [] temp_y.extend(data['data']) for i in range(2003): test_y.append(temp_y[i]) ``` ```python print(test_y[201]) ``` 10.29 ```python def ARIMA(x,y,my_x,p,q,d): new_y = [0] new_y1 = [] MA_plust_ar = 0 if d == 0: MA_res = MA_foresee(y,q) AR_res = auto_reg(x_data,y,p=p) print("result MA = ", type(MA_res)) print("result AR = ", type(AR_res)) MA_plust_ar = MA_res + AR_res arr_MA = [0,0,0] temp_MA = MAGrath(y) for i in range(len(temp_MA)): tr = temp_MA[i] arr_MA.append(tr) plt.plot(x_data,arr_MA,'r') plt.plot(x_data,y,'b') plt.axis([-10, 350, -10, 20]) plt.show() elif d == 1: for i in range(len(y)): num = y[i] - y[i-1] new_y.append(int(num)) num = 0 MA_res = MA_foresee(new_y,q) AR_res = auto_reg(x_data,new_y,p=p) print("result MA = ", MA_res) print("result AR = ", AR_res) MA_plust_ar = MA_res + AR_res elif d == 2: for i in range(len(y)): num = y[i] - y[i-1] new_y.append(int(num)) num = 0 for i in range(len(new_y)): num = new_y[i] - new_y[i-1] new_y1.append(num) num = 0 arr_MA = SMA(new_y1,p) res_MA = 0 for i in range(len(arr_MA)): res_MA += arrMA[i] arr_AR = auto_reg() return float(MA_plust_ar) print(ARIMA(x_data,y,my_x=1,p=1,q=7,d=0)) print("True value",test_y[201]) ``` ```python print(ARIMA(x_data,y,my_x=201,p=2,q=6,d=0)) ``` ```python def ArimaForForecast(x,y,p,q,d): for i in range(1,3): x_temp = 200 + i my_x = x_temp y_temp = ARIMA(x_data,y,my_x,p,q,d) x_data.append(x_temp) y.append(y_temp) #print(len(x_data)) #print(len(y)) ARIMA(x_data,y,my_x,p,q,d) temp = ARIMA(x_data,y,my_x,p,q,d) print(temp) print("True value",test_y[my_x]) ``` ```python ArimaForForecast(x_data,y,p=2,q=6,d=0) ``` ```python ``` ```python ``` ```python ``` ```python ``` ```python ``` ```python ```
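As a sanity check on the hand-rolled `ARIMA` function above, the `statsmodels` estimator imported at the top of the notebook can be fit to the same series. A minimal sketch follows; note that in recent statsmodels releases the class lives in `statsmodels.tsa.arima.model` rather than `statsmodels.tsa.arima_model`, and the exact return type of `forecast` differs between the two APIs, so this is illustrative rather than version-exact.

```python
import numpy as np
from statsmodels.tsa.arima_model import ARIMA  # newer versions: from statsmodels.tsa.arima.model import ARIMA

series = np.asarray(y[:200], dtype=float)  # the first 200 temperature readings used above

# order = (p, d, q): AR lags, differencing order, MA window, mirroring ARIMA(..., p=2, q=6, d=0)
model = ARIMA(series, order=(2, 0, 6))
fitted = model.fit()

# forecast a few steps ahead and compare with the held-out values test_y[200:203]
print(fitted.forecast(steps=3))
print(test_y[200:203])
```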
2a3c890640c09837190337d17538da0b5e3ef2c7
165,000
ipynb
Jupyter Notebook
FES-31/Ostap/ArimaColaboratory.ipynb
artiiblack/rcs-research
02aecbbeccc7559ee8b0c2b1a81567e0f236cd3a
[ "MIT" ]
2
2019-09-18T09:49:41.000Z
2019-10-01T16:20:46.000Z
FES-31/Ostap/ArimaColaboratory.ipynb
artiiblack/rcs-research
02aecbbeccc7559ee8b0c2b1a81567e0f236cd3a
[ "MIT" ]
null
null
null
FES-31/Ostap/ArimaColaboratory.ipynb
artiiblack/rcs-research
02aecbbeccc7559ee8b0c2b1a81567e0f236cd3a
[ "MIT" ]
8
2019-07-16T12:35:18.000Z
2019-12-04T12:07:53.000Z
139.593909
17,090
0.847333
true
2,861
Qwen/Qwen-72B
1. YES 2. YES
0.868827
0.743168
0.645684
__label__kor_Hang
0.115236
0.338472
# Physics 256 ## Simple Harmonic Oscillators ```python import style style._set_css_style('../include/bootstrap.css') ``` ## Last Time ### [Notebook Link: 15_Baseball.ipynb](./15_Baseball.ipynb) - motion of a pitched ball - drag and the Magnus force - surface roughness of a projectile ## Today - The simple harmonic pendulum ## Setting up the Notebook ```python import matplotlib.pyplot as plt import numpy as np %matplotlib inline plt.style.use('../include/notebook.mplstyle'); %config InlineBackend.figure_format = 'svg' ``` ## Equation of Motion The equation of motion for a simple linear pendulum of length $\ell$ and mass $m$ is given by: $$ m \frac{d \vec{v}}{d t} = \vec{F}_{\rm g} = -m g \hat{y}$$ Measuring $x$ and $y$ from the equilibrium position we have \begin{align} x &= \ell \sin \theta \\ y &= \ell (1-\cos\theta) \end{align} The kinetic and potential energy are: \begin{align} T &= \frac{1}{2} m \dot{r}^2 \\ &= \frac{1}{2} m (\dot{x}^2 + \dot{y}^2) \\ &= \frac{1}{2} m \ell^2 \dot{\theta}^2 \end{align} \begin{equation} V = m g \ell (1-\cos\theta). \end{equation} Thus, the Lagrangian is: \begin{align} \mathcal{L} &= T - V \\ &= \frac{1}{2} m \ell^2 \dot{\theta}^2 - m g \ell (1-\cos\theta) \end{align} and the equation of motion is given by the Euler-Lagrange formula \begin{align} \frac{\partial \mathcal{L}}{\partial \theta} - \frac{d}{dt} \frac{\partial \mathcal{L}}{\partial \dot{\theta}} &= 0 \\ -m g \ell \sin \theta - \frac{d}{dt} (m\ell^2 \dot{\theta}) &= 0 \end{align} which yields the familiar equation: \begin{equation} \ddot{\theta} = -\frac{g}{\ell} \sin\theta . \end{equation} To solve this analytically, we are used to considering only small angle oscillations allowing us to replace $\sin\theta \simeq \theta$ for $\theta \ll 1$. For $\theta(0) = \theta_0 \ll 1$ and $\dot{\theta}(0) = 0$ it can be integrated to give $$ \theta(t) = \theta_0 \cos \left( \sqrt{\frac{g}{\ell}} t \right).$$ <div class="span alert alert-success"> <h2> Programming challenge </h2> Use the Euler method to directly integrate the full equation of motion and compare with the analytical expression for $\theta_0 = \pi/12$ and $\dot{\theta}(0) =0$ for $\ell = 0.25$ m. \begin{align} \theta_{n+1} &= \theta_n + \omega_n \Delta t \\ \omega_{n+1} &= \omega_n - \frac{g}{\ell} \sin\theta_n \Delta t \\ \end{align} </div> <!-- θ[n+1] = θ[n] + ω[n]*Δt ω[n+1] = ω[n] -(g/ℓ)*np.sin(θ[n])*Δt --> ```python from scipy.constants import pi as π from scipy.constants import g # constants and initial conditions ℓ = 0.25 # m Δt = 0.001 # s t = np.arange(0.0,5.0,Δt) θ,ω = np.zeros_like(t),np.zeros_like(t) θ[0] = π/12.0 # rad for n in range(t.size-1): pass # the small angle solution plt.plot(t, θ[0]*np.cos(np.sqrt(g/ℓ)*t), label='Small angle solution') # the Euler method plt.plot(t,θ, label='Euler method') plt.legend(loc='lower left') plt.xlabel('Time [s]') plt.ylabel('θ(t) [rad]') ``` ## What went wrong? The oscillations are **growing** with time! This is our first encounter with a numerical procedure that is **unstable**. Let's examine the total energy of the system where we can approximate $\cos\theta \simeq 1 - \theta^2/2$: \begin{align} E &= \frac{1}{2} m \ell^2 \omega^2 + m g \ell (1-\cos\theta) \\ &\simeq \frac{1}{2}m \ell^2 \left(\omega^2 + \frac{g}{\ell}\theta^2 \right).
\end{align} Writing things in terms of our Euler variables: \begin{align} E_{n+1} &= \frac{1}{2}m\ell^2 \left[\left(\omega_n - \frac{g}{\ell}\theta_n \Delta t\right)^2 + \frac{g}{\ell}\left(\theta_n + \omega_n\Delta t\right)^2 \right] \\ &= E_{n} + \frac{1}{2}mg \ell \left(\omega_n^2 + \frac{g}{\ell} \theta_n^2\right) \Delta t^2. \end{align} This tells us the origin of the problem: **the energy is increasing without bound, regardless of the size of $\Delta t$**. ### Question: Why didn't we encounter this problem previously? <!-- With the exception of constant acceleration, we always had it, we just never noticed it on the timescales we were interested in. --> ### How do we fix it? We can consider alternative higher-order ODE solvers (as described in Appendix A of the textbook). However, there is a very simple fix that works here: ### Euler-Cromer Method Looking at our original discretized equations: \begin{align} \theta_{n+1} &= \theta_n + \omega_n \Delta t \\ \omega_{n+1} &= \omega_n - \frac{g}{\ell} \sin\theta_n \Delta t \end{align} we can make the simple observation that we can replace the order of evaluation and use the updated value of $\omega$ in our calculation of $\theta$. \begin{align} \omega_{n+1} &= \omega_n - \frac{g}{\ell} \sin\theta_n \Delta t \\ \theta_{n+1} &= \theta_n + \omega_{n+1} \Delta t \end{align} This leads to the energy being *approximately* conserved at each step: \begin{equation} E_{n+1} = E_{n} + \frac{1}{2}m g \left(\omega_n^2 - \frac{g}{\ell}\theta_n^2 \right)\Delta t^2 + \mathrm{O}(\Delta t^3). \end{equation} ```python from scipy.constants import pi as π from scipy.constants import g # constants and initial conditions ℓ = 0.25 # m Δt = 0.001 # s t = np.arange(0.0,4.0,Δt) θ,ω = np.zeros_like(t),np.zeros_like(t) θ[0] = π/12.0 # rad for n in range(t.size-1): pass # the small angle solution plt.plot(t, θ[0]*np.cos(np.sqrt(g/ℓ)*t), label='Small angle solution') # the Euler-Cromer method plt.plot(t,θ, label='Euler Cromer method') plt.legend(loc='lower left',frameon=True) plt.xlabel('Time [s]') plt.ylabel('θ(t) [rad]') ``` ## There are still some noticeable deviations, thoughts? <!--Non-linear corrections. --> ## Turning on Non-Linearity An analytical solution exists without the small-angle approximation, but it is considerably more complicated: \begin{eqnarray} \theta(t) &=& 2 \sin^{-1} \left\{ k\, \mathrm{sn}\left[K(k^2)-\sqrt{\frac{g}{\ell}} t; k^2\right] \right\} \newline k &=& \sin \frac{\theta_0}{2} \newline K(m) &=& \int_0^1 \frac{d z}{\sqrt{(1-z^2)(1-m z^2)}} \end{eqnarray} <!-- # the exact solution plt.plot(t,non_linear_θ(ℓ,θ[0],t), label='Exact solution') --> ```python def non_linear_θ(ℓ,θₒ,t): '''The solution for θ for the non-linear pendulum.''' # use special functions from scipy import special k = np.sin(θₒ/2.0) K = special.ellipk(k*k) (sn,cn,dn,ph) = special.ellipj(K-np.sqrt(g/ℓ)*t,k*k) return 2.0*np.arcsin(k*sn) ``` ```python # the small angle solution plt.plot(t, θ[0]*np.cos(np.sqrt(g/ℓ)*t), label='Small angle solution') # the Euler-Cromer method plt.plot(t,θ,label='Euler Cromer method') # the exact solution in terms of special functions plt.plot(t,non_linear_θ(ℓ,θ[0],t), label='Exact', alpha=0.5) plt.legend(loc='lower left',frameon=True) plt.xlabel('Time [s]') plt.ylabel('θ(t) [rad]') ``` ```python ```
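For reference, one possible fill-in for the time-stepping loops left as a challenge above. The explicit Euler and Euler-Cromer schemes differ only in whether the freshly updated angular velocity is used to advance the angle; this sketch shows the Euler-Cromer ordering and notes the Euler variant in a comment.

```python
import numpy as np
from scipy.constants import g, pi as π

ℓ = 0.25                      # pendulum length [m]
Δt = 0.001                    # time step [s]
t = np.arange(0.0, 4.0, Δt)
θ, ω = np.zeros_like(t), np.zeros_like(t)
θ[0] = π / 12.0               # initial angle [rad]

for n in range(t.size - 1):
    # Euler-Cromer: update ω first, then advance θ with the *new* ω
    ω[n + 1] = ω[n] - (g / ℓ) * np.sin(θ[n]) * Δt
    θ[n + 1] = θ[n] + ω[n + 1] * Δt
    # plain (unstable) Euler would instead use: θ[n + 1] = θ[n] + ω[n] * Δt
```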
811b3dac5a0ace19252fe35e35ba81502524a9c0
11,004
ipynb
Jupyter Notebook
4-assets/BOOKS/Jupyter-Notebooks/Overflow/16_SimpleHarmonicMotion.ipynb
impastasyndrome/Lambda-Resource-Static-Assets
7070672038620d29844991250f2476d0f1a60b0a
[ "MIT" ]
null
null
null
4-assets/BOOKS/Jupyter-Notebooks/Overflow/16_SimpleHarmonicMotion.ipynb
impastasyndrome/Lambda-Resource-Static-Assets
7070672038620d29844991250f2476d0f1a60b0a
[ "MIT" ]
null
null
null
4-assets/BOOKS/Jupyter-Notebooks/Overflow/16_SimpleHarmonicMotion.ipynb
impastasyndrome/Lambda-Resource-Static-Assets
7070672038620d29844991250f2476d0f1a60b0a
[ "MIT" ]
1
2021-11-05T07:48:26.000Z
2021-11-05T07:48:26.000Z
30.065574
270
0.514995
true
2,289
Qwen/Qwen-72B
1. YES 2. YES
0.857768
0.882428
0.756918
__label__eng_Latn
0.798603
0.596907
# Funciones de forma unidimensionales Las funciones de forma unidimensionales sirven para aproximar los desplazamientos: \begin{equation} w = \alpha_{0} + \alpha_{1} x + \cdots + \alpha_{n} x^{n} = \sum_{i = 0}^{n} \alpha_{i} x^{i} \end{equation} ## Elemento viga Euler-Bernoulli Los elementos viga soportan esfuerzos debido a flexión. ### Elemento de dos nodos Para un elemento de dos nodos y cuatro grados de libertad: \begin{equation} w = \alpha_{0} + \alpha_{1} x + \alpha_{2} x^{2} + \alpha_{3} x^{3} = \left [ \begin{matrix} 1 & x & x^{2} & x^{3} \end{matrix} \right ] \left [ \begin{matrix} \alpha_{0} \\\ \alpha_{1} \\\ \alpha_{2} \\\ \alpha_{3} \end{matrix} \right ] \end{equation} La deformación angular: \begin{equation} \theta = \frac{\partial{w}}{\partial{x}} = \alpha_{1} + 2 \alpha_{2} x + 3 \alpha_{3} x^{2} \end{equation} Reemplazando los valores nodales en coordenadas naturales: \begin{eqnarray} \alpha_{0} + \alpha_{1} (-1) + \alpha_{2} (-1)^{2} + \alpha_{3} (-1)^{3} &=& w_{1} \\\ \alpha_{1} + 2 \alpha_{2} (-1) + 3 \alpha_{3} (-1)^{2} &=& \theta_{1} \\\ \alpha_{0} + \alpha_{1} (1) + \alpha_{2} (1)^{2} + \alpha_{3} (1)^{3} &=& w_{2} \\\ \alpha_{1} + 2 \alpha_{2} (1) + 3 \alpha_{3} (1)^{2} &=& \theta_{2} \end{eqnarray} Evaluando: \begin{eqnarray} \alpha_{0} - \alpha_{1} + \alpha_{2} - \alpha_{3} &=& w_{1} \\\ \alpha_{1} - 2 \alpha_{2} + 3 \alpha_{3} &=& \theta_{1} \\\ \alpha_{0} + \alpha_{1} + \alpha_{2} + \alpha_{3} &=& w_{2} \\\ \alpha_{1} + 2 \alpha_{2} + 3 \alpha_{3} &=& \theta_{2} \end{eqnarray} En forma matricial: \begin{equation} \left [ \begin{matrix} 1 & -1 & 1 & -1 \\\ 0 & 1 & -2 & 3 \\\ 1 & 1 & 1 & 1 \\\ 0 & 1 & 2 & 3 \end{matrix} \right ] \left [ \begin{matrix} \alpha_{0} \\\ \alpha_{1} \\\ \alpha_{2} \\\ \alpha_{3} \end{matrix} \right ] = \left [ \begin{matrix} w_{1} \\\ \theta_{1} \\\ w_{2} \\\ \theta_{2} \end{matrix} \right ] \end{equation} Resolviendo el sistema: \begin{equation} \left [ \begin{matrix} \alpha_{0} \\\ \alpha_{1} \\\ \alpha_{2} \\\ \alpha_{3} \end{matrix} \right ] = \left [ \begin{matrix} \frac{1}{2} & \frac{1}{4} & \frac{1}{2} & -\frac{1}{4} \\\ -\frac{3}{4} & -\frac{1}{4} & \frac{3}{4} & -\frac{1}{4} \\\ 0 & -\frac{1}{4} & 0 & \frac{1}{4} \\\ \frac{1}{4} & \frac{1}{4} & -\frac{1}{4} & \frac{1}{4} \end{matrix} \right ] \left [ \begin{matrix} w_{1} \\\ \theta_{1} \\\ w_{2} \\\ \theta_{2} \end{matrix} \right ] \end{equation} Reemplazando: \begin{equation} w = \left [ \begin{matrix} 1 & x & x^{2} & x^{3} \end{matrix} \right ] \left [ \begin{matrix} \alpha_{0} \\\ \alpha_{1} \\\ \alpha_{2} \\\ \alpha_{3} \end{matrix} \right ] = \left [ \begin{matrix} 1 & x & x^{2} & x^{3} \end{matrix} \right ] \left [ \begin{matrix} \frac{1}{2} & \frac{1}{4} & \frac{1}{2} & -\frac{1}{4} \\\ -\frac{3}{4} & -\frac{1}{4} & \frac{3}{4} & -\frac{1}{4} \\\ 0 & -\frac{1}{4} & 0 & \frac{1}{4} \\\ \frac{1}{4} & \frac{1}{4} & -\frac{1}{4} & \frac{1}{4} \end{matrix} \right ] \left [ \begin{matrix} w_{1} \\\ \theta_{1} \\\ w_{2} \\\ \theta_{2} \end{matrix} \right ] = \left [ \begin{matrix} \frac{1}{4} (x+2) (x-1)^{2} & \frac{1}{4} (x+1) (x-1)^{2} & -\frac{1}{4} (x-2) (x+1)^{2} & \frac{1}{4} (x-1) (x+1)^{2} \end{matrix} \right ] \left [ \begin{matrix} w_{1} \\\ \theta_{1} \\\ w_{2} \\\ \theta_{2} \end{matrix} \right ] \end{equation} Reescribiendo $w$: \begin{equation} w = \Big [ \frac{1}{4} (x+2) (x-1)^{2} \Big ] w_{1} + \Big [ \frac{1}{4} (x+1) (x-1)^{2} \Big ] \theta_{1} + \Big [ -\frac{1}{4} (x-2) (x+1)^{2} \Big ] w_{2} + \Big [ \frac{1}{4} (x-1) (x+1)^{2} \Big ] \theta_{2} = H_{01} 
w_{1} + H_{11} \theta_{1} + H_{02} w_{2} + H_{12} \theta_{2} \end{equation} ### Elemento de tres nodos Para un elemento de tres nodos y seis grados de libertad: \begin{equation} w = \alpha_{0} + \alpha_{1} x + \alpha_{2} x^{2} + \alpha_{3} x^{3} + \alpha_{4} x^{4} + \alpha_{5} x^{5} = \left [ \begin{matrix} 1 & x & x^{2} & x^{3} & x^{4} & x^{5} \end{matrix} \right ] \left [ \begin{matrix} \alpha_{0} \\\ \alpha_{1} \\\ \alpha_{2} \\\ \alpha_{3} \\\ \alpha_{4} \\\ \alpha_{5} \end{matrix} \right ] \end{equation} La deformación angular: \begin{equation} \theta = \frac{\partial{w}}{\partial{x}} = \alpha_{1} + 2 \alpha_{2} x + 3 \alpha_{3} x^{2} + 4 \alpha_{4} x^{3} + 5 \alpha_{5} x^{4} \end{equation} Reemplazando los valores nodales en coordenadas naturales: \begin{eqnarray} \alpha_{0} + \alpha_{1} (-1) + \alpha_{2} (-1)^{2} + \alpha_{3} (-1)^{3} + \alpha_{4} (-1)^{4} + \alpha_{5} (-1)^{5} &=& w_{1} \\\ \alpha_{1} + 2 \alpha_{2} (-1) + 3 \alpha_{3} (-1)^{2} + 4 \alpha_{4} (-1)^{3} + 5 \alpha_{5} (-1)^{4} &=& \theta_{1} \\\ \alpha_{0} + \alpha_{1} (0) + \alpha_{2} (0)^{2} + \alpha_{3} (0)^{3} + \alpha_{4} (0)^{4} + \alpha_{5} (0)^{5} &=& w_{2} \\\ \alpha_{1} + 2 \alpha_{2} (0) + 3 \alpha_{3} (0)^{2} + 4 \alpha_{4} (0)^{3} + 5 \alpha_{5} (0)^{4} &=& \theta_{2} \\\ \alpha_{0} + \alpha_{1} (1) + \alpha_{2} (1)^{2} + \alpha_{3} (1)^{3} + \alpha_{4} (1)^{4} + \alpha_{5} (1)^{5} &=& w_{3} \\\ \alpha_{1} + 2 \alpha_{2} (1) + 3 \alpha_{3} (1)^{2} + 4 \alpha_{4} (1)^{3} + 5 \alpha_{5} (1)^{4} &=& \theta_{3} \end{eqnarray} Evaluando: \begin{eqnarray} \alpha_{0} - \alpha_{1} + \alpha_{2} - \alpha_{3} + \alpha_{4} - \alpha_{5} &=& w_{1} \\\ \alpha_{1} - 2 \alpha_{2} + 3 \alpha_{3} - 4 \alpha_{4} + 5 \alpha_{5} &=& \theta_{1} \\\ \alpha_{0} &=& w_{2} \\\ \alpha_{1} &=& \theta_{2} \\\ \alpha_{0} + \alpha_{1} + \alpha_{2} + \alpha_{3} + \alpha_{4} + \alpha_{5} &=& w_{3} \\\ \alpha_{1} + 2 \alpha_{2} + 3 \alpha_{3} + 4 \alpha_{4} + 5 \alpha_{5} &=& \theta_{3} \end{eqnarray} En forma matricial: \begin{equation} \left [ \begin{matrix} 1 & -1 & 1 & -1 & 1 & -1 \\\ 0 & 1 & -2 & 3 & -4 & 5 \\\ 1 & 0 & 0 & 0 & 0 & 0 \\\ 0 & 1 & 0 & 0 & 0 & 0 \\\ 1 & 1 & 1 & 1 & 1 & 1 \\\ 0 & 1 & 2 & 3 & 4 & 5 \end{matrix} \right ] \left [ \begin{matrix} \alpha_{0} \\\ \alpha_{1} \\\ \alpha_{2} \\\ \alpha_{3} \\\ \alpha_{4} \\\ \alpha_{5} \end{matrix} \right ] = \left [ \begin{matrix} w_{1} \\\ \theta_{1} \\\ w_{2} \\\ \theta_{2} \\\ w_{3} \\\ \theta_{3} \end{matrix} \right ] \end{equation} Resolviendo el sistema: \begin{equation} \left [ \begin{matrix} \alpha_{0} \\\ \alpha_{1} \\\ \alpha_{2} \\\ \alpha_{3} \\\ \alpha_{4} \\\ \alpha_{5} \end{matrix} \right ] = \left [ \begin{matrix} 0 & 0 & 1 & 0 & 0 & 0 \\\ 0 & 0 & 0 & 1 & 0 & 0 \\\ 1 & \frac{1}{4} & -2 & 0 & 1 & -\frac{1}{4} \\\ -\frac{5}{4} & -\frac{1}{4} & 0 & -2 & \frac{5}{4} & -\frac{1}{4} \\\ -\frac{1}{2} & -\frac{1}{4} & 1 & 0 & -\frac{1}{2} & \frac{1}{4} \\\ \frac{3}{4} & \frac{1}{4} & 0 & 1 & -\frac{3}{4} & \frac{1}{4} \end{matrix} \right ] \left [ \begin{matrix} w_{1} \\\ \theta_{1} \\\ w_{2} \\\ \theta_{2} \\\ w_{3} \\\ \theta_{3} \end{matrix} \right ] \end{equation} Reemplazando: \begin{equation} w = \left [ \begin{matrix} 1 & x & x^{2} & x^{3} & x^{4} & x^{5} \end{matrix} \right ] \left [ \begin{matrix} \alpha_{0} \\\ \alpha_{1} \\\ \alpha_{2} \\\ \alpha_{3} \\\ \alpha_{4} \\\ \alpha_{5} \end{matrix} \right ] = \left [ \begin{matrix} 1 & x & x^{2} & x^{3} & x^{4} & x^{5} \end{matrix} \right ] \left [ \begin{matrix} 0 & 0 & 1 & 0 & 0 & 0 \\\ 0 & 0 & 0 & 1 & 0 & 0 \\\ 1 & 
\frac{1}{4} & -2 & 0 & 1 & -\frac{1}{4} \\\ -\frac{5}{4} & -\frac{1}{4} & 0 & -2 & \frac{5}{4} & -\frac{1}{4} \\\ -\frac{1}{2} & -\frac{1}{4} & 1 & 0 & -\frac{1}{2} & \frac{1}{4} \\\ \frac{3}{4} & \frac{1}{4} & 0 & 1 & -\frac{3}{4} & \frac{1}{4} \end{matrix} \right ] \left [ \begin{matrix} w_{1} \\\ \theta_{1} \\\ w_{2} \\\ \theta_{2} \\\ w_{3} \\\ \theta_{3} \end{matrix} \right ] = \left [ \begin{matrix} \frac{1}{4} x^{2} (3x+4) (x-1)^{2} & \frac{1}{4} x^{2} (x+1) (x-1)^{2} & (x-1)^{2} (x+1)^{2} & x (x-1)^{2} (x+1)^{2} & -\frac{1}{4} x^{2} (3x-4) (x+1)^{2} & \frac{1}{4} x^{2} (x-1) (x+1)^{2} \end{matrix} \right ] \left [ \begin{matrix} w_{1} \\\ \theta_{1} \\\ w_{2} \\\ \theta_{2} \\\ w_{3} \\\ \theta_{3} \end{matrix} \right ] \end{equation} Reescribiendo $u$: \begin{equation} w = \Big [ \frac{1}{4} x^{2} (3x+4) (x-1)^{2} \Big ] w_{1} + \Big [ \frac{1}{4} x^{2} (x+1) (x-1)^{2} \Big ] \theta_{1} + \Big [ (x-1)^{2} (x+1)^{2} \Big ] w_{2} + \Big [ x (x-1)^{2} (x+1)^{2} \Big ] \theta_{2} + \Big [ -\frac{1}{4} x^{2} (3x-4) (x+1)^{2} \Big ] w_{3} + \Big [ \frac{1}{4} x^{2} (x-1) (x+1)^{2} \Big ] \theta_{3} = H_{01} w_{1} + H_{11} \theta_{1} + H_{02} w_{2} + H_{12} \theta_{2} + H_{03} w_{3} + H_{13} \theta_{3} \end{equation} ## Elementos viga Euler-Bernoulli de mayor grado polinomial Los elementos de mayor grado pueden obtenerse mediante polinomios de Hermite: \begin{eqnarray} H_{0i} &=& [1 - 2 \ \ell_{(x_{i})}^{\prime} (x - x_{i})] [\ell_{(x)}]^{2} \\\ H_{1i} &=& (x - x_{i}) [\ell_{(x)}]^{2} \end{eqnarray} ### Elemento de dos nodos Usando la fórmula para polinomios de Lagrange: \begin{eqnarray} \ell_{1} &=& \frac{x - 1}{-1 - 1} = \frac{1}{2} - \frac{1}{2} x \\\ \ell_{1}^{\prime} &=& - \frac{1}{2} \\\ \ell_{2} &=& \frac{x - (-1)}{1 - (-1)} = \frac{1}{2} + \frac{1}{2} x \\\ \ell_{2}^{\prime} &=& \frac{1}{2} \end{eqnarray} Usando la fórmula para polinomios de Hermite: \begin{eqnarray} H_{01} &=& \Big \\{ 1 - 2 \Big [ -\frac{1}{2} \Big ] [x - (-1)] \Big \\} \Big ( \frac{1}{2} - \frac{1}{2} x \Big )^{2} = \frac{1}{4} (x+2) (x-1)^{2} \\\ H_{11} &=& [x - (-1)] \Big ( \frac{1}{2} - \frac{1}{2} x \Big )^{2} = \frac{1}{4} (x+1) (x-1)^{2} \\\ H_{02} &=& \Big [ 1 - 2 \Big ( \frac{1}{2} \Big ) (x - 1) \Big ] \Big ( \frac{1}{2} + \frac{1}{2} x \Big )^{2} = -\frac{1}{4} (x-2) (x+1)^{2} \\\ H_{12} &=& (x - 1) \Big ( \frac{1}{2} + \frac{1}{2} x \Big )^{2} = \frac{1}{4} (x-1) (x+1)^{2} \end{eqnarray} ### Elemento de tres nodos Usando la fórmula para polinomios de Lagrange: \begin{eqnarray} \ell_{1} &=& \frac{x - 0}{-1 - 0} \frac{x - 1}{-1 - 1} = -\frac{1}{2} x + \frac{1}{2} x^{2} \\\ \ell_{1}^{\prime} &=& -\frac{1}{2} + x \\\ \ell_{2} &=& \frac{x - (-1)}{0 - (-1)} \frac{x - 1}{0 - 1} = 1 - x^{2} \\\ \ell_{2}^{\prime} &=& - 2 x \\\ \ell_{3} &=& \frac{x - 0}{1 - 0} \frac{x - 1}{1 - 1} = \frac{1}{2} x + \frac{1}{2} x^{2} \\\ \ell_{3}^{\prime} &=& \frac{1}{2} + x \end{eqnarray} Usando la fórmula para polinomios de Hermite: \begin{eqnarray} H_{01} &=& \Big \\{ 1 - 2 \Big [ -\frac{1}{2} + (-1) \Big ] [x - (-1)] \Big \\} \Big ( -\frac{1}{2} x + \frac{1}{2} x^{2} \Big )^{2} = \frac{1}{4} x^{2} (3x+4) (x-1)^{2} \\\ H_{11} &=& [x - (-1)] \Big ( -\frac{1}{2} x + \frac{1}{2} x^{2} \Big )^{2} = \frac{1}{4} x^{2} (x+1) (x-1)^{2} \\\ H_{02} &=& \Big \\{ 1 - 2 \Big [ - 2 (0) \Big ] (x - 0) \Big \\} \Big ( 1 - x^{2} \Big )^{2} = (x-1)^{2} (x+1)^{2} \\\ H_{12} &=& (x - 0) \Big ( 1 - x^{2} \Big )^{2} = x (x-1)^{2} (x+1)^{2} \\\ H_{03} &=& \Big \\{ 1 - 2 \Big [ \frac{1}{2} + (1) \Big ] \Big \\} \Big ( \frac{1}{2} x + \frac{1}{2} 
x^{2} \Big )^{2} = -\frac{1}{4} x^{2} (3x-4) (x+1)^{2} \\\ H_{13} &=& [x - (1)] \Big ( \frac{1}{2} x + \frac{1}{2} x^{2} \Big )^{2} = \frac{1}{4} x^{2} (x-1) (x+1)^{2} \end{eqnarray} ``` ``` ``` ```
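As a quick symbolic check of the two-node element, the Hermite formulas $H_{0i} = [1 - 2\,\ell^{\prime}_{(x_i)}(x - x_i)]\,[\ell_{(x)}]^2$ and $H_{1i} = (x - x_i)\,[\ell_{(x)}]^2$ can be generated with sympy and compared against the shape functions obtained above by inverting the $4 \times 4$ system (a minimal sketch):

```python
from sympy import symbols, Rational, expand, simplify

x = symbols('x')

# Lagrange basis on the natural coordinates {-1, +1}
l1 = (x - 1) / (-1 - 1)
l2 = (x + 1) / (1 + 1)

# Hermite shape functions H_{0i}, H_{1i} built from the formula above
H01 = expand((1 - 2 * l1.diff(x) * (x - (-1))) * l1**2)
H11 = expand((x - (-1)) * l1**2)
H02 = expand((1 - 2 * l2.diff(x) * (x - 1)) * l2**2)
H12 = expand((x - 1) * l2**2)

# Compare against the closed forms obtained by inverting the 4x4 system
assert simplify(H01 - Rational(1, 4) * (x + 2) * (x - 1)**2) == 0
assert simplify(H11 - Rational(1, 4) * (x + 1) * (x - 1)**2) == 0
assert simplify(H02 + Rational(1, 4) * (x - 2) * (x + 1)**2) == 0
assert simplify(H12 - Rational(1, 4) * (x - 1) * (x + 1)**2) == 0
print(H01, H11, H02, H12)
```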
722056b9edbcc72ba8e617d1684b52f173b00d83
20,779
ipynb
Jupyter Notebook
Funciones de forma/funciones forma viga.ipynb
ClaudioVZ/Teoria-FEM-Python
8a4532f282c38737fb08d1216aa859ecb1e5b209
[ "Artistic-2.0" ]
1
2021-09-28T00:23:45.000Z
2021-09-28T00:23:45.000Z
Funciones de forma/funciones forma viga.ipynb
ClaudioVZ/Teoria-FEM-Python
8a4532f282c38737fb08d1216aa859ecb1e5b209
[ "Artistic-2.0" ]
null
null
null
Funciones de forma/funciones forma viga.ipynb
ClaudioVZ/Teoria-FEM-Python
8a4532f282c38737fb08d1216aa859ecb1e5b209
[ "Artistic-2.0" ]
3
2015-12-04T12:42:00.000Z
2019-10-31T21:50:32.000Z
29.183989
467
0.367486
true
5,540
Qwen/Qwen-72B
1. YES 2. YES
0.957278
0.882428
0.844729
__label__yue_Hant
0.311866
0.800921
<table> <tr align=left><td> <td>Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Kyle T. Mandli</td> </table> ```python %matplotlib inline import numpy import matplotlib.pyplot as plt ``` # Root Finding and Optimization **GOAL:** Find where $f(x) = 0$. ### Example: Future Time Annuity When can I retire? $$ A = \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ] $$ $P$ is payment amount per compounding period $m$ number of compounding periods per year $r$ annual interest rate $n$ number of years to retirement $A$ total value after $n$ years If I want to retire in 20 years what does $r$ need to be? Set $P = \frac{\$18,000}{12} = \$1500, ~~~~ m=12, ~~~~ n=20$. ```python def total_value(P, m, r, n): """Total value of portfolio given parameters Based on following formula: A = \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ] :Input: - *P* (float) - Payment amount per compounding period - *m* (int) - number of compounding periods per year - *r* (float) - annual interest rate - *n* (float) - number of years to retirement :Returns: (float) - total value of portfolio """ return P / (r / float(m)) * ( (1.0 + r / float(m))**(float(m) * n) - 1.0) P = 1500.0 m = 12 n = 20.0 r = numpy.linspace(0.05, 0.1, 100) goal = 1e6 fig = plt.figure() axes = fig.add_subplot(1, 1, 1) axes.plot(r, total_value(P, m, r, n)) axes.plot(r, numpy.ones(r.shape) * goal, 'r--') axes.set_xlabel("r (interest rate)") axes.set_ylabel("A (total value)") axes.set_title("When can I retire?") axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1)) plt.show() ``` ## Fixed Point Iteration How do we go about solving this? Could try to solve at least partially for $r$: $$ A = \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ] ~~~~ \Rightarrow ~~~~~$$ $$ r = \frac{P \cdot m}{A} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ] ~~~~ \Rightarrow ~~~~~$$ $$ r = g(r)$$ or $$ g(r) - r = 0$$ ```python def g(P, m, r, n, A): """Reformulated minimization problem Based on following formula: g(r) = \frac{P \cdot m}{A} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ] :Input: - *P* (float) - Payment amount per compounding period - *m* (int) - number of compounding periods per year - *r* (float) - annual interest rate - *n* (float) - number of years to retirement - *A* (float) - total value after $n$ years :Returns: (float) - value of g(r) """ return P * m / A * ( (1.0 + r / float(m))**(float(m) * n) - 1.0) P = 1500.0 m = 12 n = 20.0 r = numpy.linspace(0.00, 0.1, 100) goal = 1e6 fig = plt.figure() axes = fig.add_subplot(1, 1, 1) axes.plot(r, g(P, m, r, n, goal)) axes.plot(r, r, 'r--') axes.set_xlabel("r (interest rate)") axes.set_ylabel("$g(r)$") axes.set_title("When can I retire?") axes.set_ylim([0, 0.12]) axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1)) plt.show() ``` Guess at $r_0$ and check to see what direction we need to go... 1. $r_0 = 0.0800$, $g(r_0) - r_0 = -0.009317550125425428$ 1. $r_1 = 0.0850$, $g(r_1) - r_1 = -0.00505763375972$ 1. 
$r_2 = 0.0875$, $g(r_2) - r_2 = -0.00257275331014$ A bit tedious, we can also make this algorithmic: ```python r = 0.09 for steps in xrange(10): print "r = ", r print "Residual = ", g(P, m, r, n, goal) - r r = g(P, m, r, n, goal) print ``` ### Example 2: Let $f(x) = x - e^{-x}$, solve $f(x) = 0$ Equivalent to $x = e^{-x}$ or $x = g(x)$ where $g(x) = e^{-x}$ Note that this problem is equivalent to $x = -\ln x$. ```python x = numpy.linspace(0.2, 1.0, 100) fig = plt.figure() axes = fig.add_subplot(1, 1, 1) axes.plot(x, numpy.exp(-x), 'r') axes.plot(x, x, 'b') axes.set_xlabel("x") axes.set_ylabel("f(x)") x = 0.4 for steps in xrange(7): print "x = ", x print "Residual = ", numpy.exp(-x) - x x = numpy.exp(-x) print axes.plot(x, numpy.exp(-x),'o',) plt.show() ``` ### Example 3: Let $f(x) = \ln x + x$ and solve $f(x) = 0$ or $x = -\ln x$. ```python x = numpy.linspace(0.1, 1.0, 100) fig = plt.figure() axes = fig.add_subplot(1, 1, 1) axes.plot(x, -numpy.log(x), 'r') axes.plot(x, x, 'b') axes.set_xlabel("x") axes.set_ylabel("f(x)") axes.set_ylim([0.0, 1.5]) x = 0.5 for steps in xrange(3): print "x = ", x print "Residual = ", numpy.log(x) + x x = -numpy.log(x) print axes.plot(x, -numpy.log(x),'o',) plt.show() ``` These are equivalent problems! Something is awry... ## Analysis of Fixed Point Iteration *Theorem*: Existence and uniqueness of fixed point problems Assume $g \in C[a, b]$, if the range of the mapping $y = g(x)$ satisfies $y \in [a, b]~~~ \forall~~~ x \in [a, b]$ then $g$ has a fixed point in $[a, b]$. ```python x = numpy.linspace(0.0, 1.0, 100) # Plot function and intercept fig = plt.figure() axes = fig.add_subplot(1, 1, 1) axes.plot(x, numpy.exp(-x), 'r') axes.plot(x, x, 'b') axes.set_xlabel("x") axes.set_ylabel("f(x)") # Plot domain and range axes.plot(numpy.ones(x.shape) * 0.4, x, '--k') axes.plot(numpy.ones(x.shape) * 0.8, x, '--k') axes.plot(x, numpy.ones(x.shape) * numpy.exp(-0.4), '--k') axes.plot(x, numpy.ones(x.shape) * numpy.exp(-0.8), '--k') axes.set_xlim((0.0, 1.0)) axes.set_ylim((0.0, 1.0)) plt.show() ``` ```python x = numpy.linspace(0.1, 1.0, 100) fig = plt.figure() axes = fig.add_subplot(1, 1, 1) axes.plot(x, -numpy.log(x), 'r') axes.plot(x, x, 'b') axes.set_xlabel("x") axes.set_ylabel("f(x)") axes.set_xlim([0.1, 1.0]) axes.set_ylim([0.1, 1.0]) # Plot domain and range axes.plot(numpy.ones(x.shape) * 0.4, x, '--k') axes.plot(numpy.ones(x.shape) * 0.8, x, '--k') axes.plot(x, numpy.ones(x.shape) * -numpy.log(0.4), '--k') axes.plot(x, numpy.ones(x.shape) * -numpy.log(0.8), '--k') plt.show() ``` Additionally, suppose $g'(x)$ is defined for $x \in [a,b]$ and $\exists K < 1$ s.t. $|g'(x)| \leq K < 1 ~~~ \forall ~~~ x \in (a,b)$, then $g$ has a unique fixed point $P \in [a,b]$ ```python x = numpy.linspace(0.4, 0.8, 100) fig = plt.figure() axes = fig.add_subplot(1, 1, 1) axes.plot(x, -numpy.exp(-x), 'r') axes.set_xlabel("x") axes.set_ylabel("f(x)") plt.show() ``` *Theorem 2*: Asymptotic convergence behavior of fixed point iterations $$x_{k+1} = g(x_k)$$ Assume that $\exists ~ x^*$ s.t. 
$x^* = g(x^*)$ $$x_k = x^* + e_k ~~~~~~~~~~~~~~ x_{k+1} = x^* + e_{k+1}$$ $$x^* + e_{k+1} = g(x^* + e_k)$$ Using a Taylor expansion we know $$g(x^* + e_k) = g(x^*) + g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2}$$ $$x^* + e_{k+1} = g(x^*) + g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2}$$ Note that because $x^* = g(x^*)$ these terms cancel leaving $$e_{k+1} = g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2}$$ So if $|g'(x^*)| \leq K < 1$ we can conclude that $$|e_{k+1}| = K |e_k|$$ which shows convergence (although somewhat arbitrarily fast). ### Convergence of iterative schemes Given any iterative scheme where $$|e_{k+1}| = C |e_k|^n$$ If $C < 1$ and - $n=1$ then the scheme is **linearly convergence** - $n=2$ then the scheme exhibits **quadratic convergence** - $n > 1$ the scheme can also be called **superlinearly convergent** If $C > 1$ then the scheme is **divergent** ### Examples Revisited $g(x) = e^{-x}$ with $x^* \approx 0.56$ $$|g'(x^*)| = |-e^{-x^*}| \approx 0.56$$ $g(x) = - \ln x$ with $x^* \approx 0.56$ $$|g'(x^*)| = \frac{1}{|x^*|} \approx 1.79$$ $g(r) = \frac{m P}{A} ((1 + \frac{r}{m})^{mn} - 1)$ with $r^* \approx 0.09$ $$|g'(r^*)| = \frac{P m n}{A} \left(1 + \frac{r}{m} \right)^{m n - 1} \approx 2.15$$ ```python import sympy m, P, A, r, n = sympy.symbols('m, P, A, r, n') (m * P / A * ((1 + r / m)**(m * n) - 1)).diff(r) ``` ## Better ways for root-finding/optimization If $x^*$ is a fixed point of $g(x)$ then $x^*$ is also a *root* of $f(x^*) = g(x^*) - x^*$ s.t. $f(x^*) = 0$. $$f(r) = r - \frac{m P}{A} \left [ \left (1 + \frac{r}{m} \right)^{m n} - 1 \right ] =0 $$ or $$f(r) = A - \frac{m P}{r} \left [ \left (1 + \frac{r}{m} \right)^{m n} - 1 \right ] =0 $$ ## Classical Methods - Bisection (linear convergence) - Newton's Method (quadratic convergence) - Secant Method (super-linear) ## Combined Methods - RootSafe (Newton + Bisection) - Brent's Method (Secant + Bisection) ### Bracketing and Bisection A *bracket* is an interval $[a,b]$ s.t. it contains the zero or minima/maxima of interest. In the case of a zeros the bracket should satisfy $\text{sign}(f(a)) \neq \text{sign}(f(b))$. In the case of minima or maxima we need $f'(a)$ and $f'(b)$ to be opposite. **Theorem**: If $f(x) \in C[a,b]$ and $\text{sign}(f(a)) \neq \text{sign}(f(b))$ then there exists a number $c \in (a,b)$ s.t. $f(c) = 0$. (proof uses intermediate value theorem) ```python P = 1500.0 m = 12 n = 20.0 A = 1e6 r = numpy.linspace(0.05, 0.1, 100) f = lambda r, A, m, P, n: A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0) fig = plt.figure() axes = fig.add_subplot(1, 1, 1) axes.plot(r, f(r, A, m, P, n), 'b') axes.plot(r, numpy.zeros(r.shape),'r--') axes.set_xlabel("r (%)") axes.set_ylabel("f(r)") axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1)) a = 0.075 b = 0.095 axes.plot(a, f(a, A, m, P, n), 'ko') axes.plot([a, a], [0.0, f(a, A, m, P, n)], 'k--') axes.plot(b, f(b, A, m, P, n), 'ko') axes.plot([b, b], [f(b, A, m, P, n), 0.0], 'k--') plt.show() ``` ### Bisection Algorithm Given a bracket $[a,b]$ and a function $f(x)$ - 1. Initialize with bracket 2. Iterate 1. Cut bracket in half and check to see where the zero is 2. 
Set bracket to new bracket based on what direction we went ```python P = 1500.0 m = 12 n = 20.0 A = 1e6 r = numpy.linspace(0.05, 0.11, 100) f = lambda r, A=A, m=m, P=P, n=n: A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0) # Initialize bracket a = 0.07 b = 0.10 # Setup figure to plot convergence fig = plt.figure() axes = fig.add_subplot(1, 1, 1) axes.plot(r, f(r, A, m, P, n), 'b') axes.plot(r, numpy.zeros(r.shape),'r--') axes.set_xlabel("r (%)") axes.set_ylabel("f(r)") # axes.set_xlim([0.085, 0.091]) axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1)) axes.plot(a, f(a, A, m, P, n), 'ko') axes.plot([a, a], [0.0, f(a, A, m, P, n)], 'k--') axes.plot(b, f(b, A, m, P, n), 'ko') axes.plot([b, b], [f(b, A, m, P, n), 0.0], 'k--') # Algorithm parameters TOLERANCE = 1e-4 MAX_STEPS = 100 # Initialize loop f_a = f(a) f_b = f(b) delta_x = b - a # Loop until we reach the TOLERANCE or we take MAX_STEPS for step in xrange(MAX_STEPS): c = a + delta_x / 2.0 f_c = f(c) if numpy.sign(f_a) != numpy.sign(f_c): b = c f_b = f_c else: a = c f_a = f_c delta_x = b - a # Plot iteration axes.text(c, f(c), str(step)) # Check tolerance - Could also check the size of delta_x if numpy.abs(f_c) < TOLERANCE: break if step == MAX_STEPS: print "Reached maximum number of steps!" else: print "Success!" print " x* = %s" % c print " f(x*) = %s" % f(c) print " number of steps = %s" % step ``` #### Convergence of Bisection $$|e_{k+1}| = C |e_k|^n$$ $$e_k \approx \Delta x_k$$ $$e_{k+1} \approx \frac{1}{2} \Delta x_k$$ $$|e_{k+1}| = \frac{1}{2} |e_k|$$ $\Rightarrow$ Linear convergence ### Newton's Method (Newton-Raphson) - Given a bracket, bisection is guaranteed to converge linearly to a root - However bisection uses almost no information about $f(x)$ beyond its sign at a point **Basic Idea**: Given $f(x)$ and $f'(x)$ use a linear approximation to $f(x)$ "locally" and use x-intercept of the resulting line to predict where $x^*$ might be. Given current location $x_k$, we have $f(x_k)$ and $f'(x_k)$ and form a line through the point $(x_k, f(x_k))$: Form equation for the line: $$y = f'(x_k) x + b$$ Solve for the y-intercept value $b$ $$f(x_k) = f'(x_k) x_k + b$$ $$b = f(x_k) - f'(x_k) x_k$$ and simplify. $$y = f'(x_k) x + f(x_k) - f'(x_k) x_k$$ $$y = f'(x_k) (x - x_k) + f(x_k)$$ Now find the intersection of our line and the x-axis (i.e. 
when $y = 0$) and use the resulting value of $x$ to set $x_{k+1}$ $$0 = f'(x_k) (x_{k+1}-x_k) + f(x_k)$$ $$x_{k+1} = x_k-\frac{f(x_k)}{f'(x_k)}$$ ```python P = 1500.0 m = 12 n = 20.0 A = 1e6 r = numpy.linspace(0.05, 0.11, 100) f = lambda r, A=A, m=m, P=P, n=n: \ A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0) f_prime = lambda r, A=A, m=m, P=P, n=n: \ -P*m*n*(1.0 + r/m)**(m*n)/(r*(1.0 + r/m)) \ + P*m*((1.0 + r/m)**(m*n) - 1.0)/r**2 # Algorithm parameters MAX_STEPS = 100 TOLERANCE = 1e-4 # Initial guess x_k = 0.06 # Setup figure to plot convergence fig = plt.figure() axes = fig.add_subplot(1, 1, 1) axes.plot(r, f(r), 'b') axes.plot(r, numpy.zeros(r.shape),'r--') # Plot x_k point axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--') axes.plot(x_k, f(x_k), 'ko') axes.text(x_k, -5e4, "$x_k$", fontsize=16) axes.text(x_k, f(x_k) + 2e4, "$f(x_k)$", fontsize=16) axes.plot(r, f_prime(x_k) * (r - x_k) + f(x_k), 'k') # Plot x_{k+1} point x_k = x_k - f(x_k) / f_prime(x_k) axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--') axes.plot(x_k, f(x_k), 'ko') axes.text(x_k, 1e4, "$x_k$", fontsize=16) axes.text(0.089, f(x_k) - 2e4, "$f(x_k)$", fontsize=16) axes.set_xlabel("r (%)") axes.set_ylabel("f(r)") axes.set_title("Newton-Raphson Steps") axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1)) plt.show() ``` ### Algorithm 1. Initialize $x_k$ 1. Begin loop and calculate what $x_{k+1}$ 1. Check stopping criteria ```python P = 1500.0 m = 12 n = 20.0 A = 1e6 r = numpy.linspace(0.05, 0.11, 100) f = lambda r, A=A, m=m, P=P, n=n: \ A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0) f_prime = lambda r, A=A, m=m, P=P, n=n: \ -P*m*n*(1.0 + r/m)**(m*n)/(r*(1.0 + r/m)) \ + P*m*((1.0 + r/m)**(m*n) - 1.0)/r**2 # Algorithm parameters MAX_STEPS = 100 TOLERANCE = 1e-4 # Initial guess x_k = 0.06 # Setup figure to plot convergence fig = plt.figure() axes = fig.add_subplot(1, 1, 1) axes.plot(r, f(r), 'b') axes.plot(r, numpy.zeros(r.shape),'r--') for n in xrange(1, MAX_STEPS + 1): axes.text(x_k, f(x_k), str(n)) x_k = x_k - f(x_k) / f_prime(x_k) if numpy.abs(f(x_k)) < TOLERANCE: break if n == MAX_STEPS: print "Reached maximum number of steps!" else: print "Success!" print " x* = %s" % x_k print " f(x*) = %s" % f(x_k) print " number of steps = %s" % n axes.set_xlabel("r (%)") axes.set_ylabel("f(r)") axes.set_title("Newton-Raphson Steps") axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1)) plt.show() ``` ### Example: $$f(x) = x - e^{-x}$$ $$f'(x) = 1 + e^{-x}$$ $$x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)} = x_k - \frac{x_k - e^{-x_k}}{1 + e^{-x_k}}$$ ### Asymptotic Convergence of Newton's Method For a simple root (non-multiplicative) - Let $g(x) = x - \frac{f(x)}{f'(x)}$, then $$x_{k+1} = g(x_k)$$ Definitions of errors and iteration: $$x_{k+1} = x^* + e_{k+1} ~~~~~ x_k = x^* + e_k$$ General Taylor expansion: $$x^* + e_{k+1} = g(x^* + e_k) = g(x^*) + g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2!} + \ldots$$ Note that as before $x^*$ and $g(x^*)$ cancel: $$e_{k+1} = g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2!} + \ldots$$ What about $g'(x^*)$ though: $$g'(x) = 1 - \frac{f'(x)}{f'(x)} + \frac{f(x)f''(x)}{f'(x)^2}$$ which simplifies when evaluated at $x = x^*$ to $$g'(x^*) = \frac{f(x^*)f''(x^*)}{f'(x^*)^2} = 0$$ The expansion then simplifies to $$e_{k+1} = \frac{g''(x^*) e_k^2}{2!} + \ldots$$ leading to the conclusion that $$|e_{k+1}| = \left | \frac{g''(x^*)}{2!} \right | |e_k|^2$$ Newton's method is therefore quadratically convergent where the constant is controlled by the second derivative. For a multiple root (e.g. 
$f(x) = (x-1)^2$) the case is not particularly rosy, unfortunately.

### Example: $f(x) = \sin (2 \pi x)$

$$x_{k+1} = x_k - \frac{\sin (2 \pi x_k)}{2 \pi \cos (2 \pi x_k)} = x_k - \frac{1}{2 \pi} \tan (2 \pi x_k)$$

```python
x = numpy.linspace(0, 2, 1000)
f = lambda x: numpy.sin(2.0 * numpy.pi * x)
f_prime = lambda x: 2.0 * numpy.pi * numpy.cos(2.0 * numpy.pi * x)

fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)

axes.plot(x, f(x),'b')
axes.plot(x, f_prime(x), 'r')
axes.set_xlabel("x")
axes.set_ylabel("y")
axes.set_title("Comparison of $f(x)$ and $f'(x)$")
axes.set_ylim((-2,2))
axes.set_xlim((0,2))
axes.plot(x, numpy.zeros(x.shape), 'k--')

x_k = 0.3
axes.plot([x_k, x_k], [0.0, f(x_k)], 'ko')
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes.plot(x, f_prime(x_k) * (x - x_k) + f(x_k), 'k')

x_k = x_k - f(x_k) / f_prime(x_k)
axes.plot([x_k, x_k], [0.0, f(x_k)], 'ko')
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')

plt.show()
```

```python
x = numpy.linspace(0, 2, 1000)
f = lambda x: numpy.sin(2.0 * numpy.pi * x)
x_kp = lambda x: x - 1.0 / (2.0 * numpy.pi) * numpy.tan(2.0 * numpy.pi * x)

fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)

axes.plot(x, f(x),'b')
axes.plot(x, x_kp(x), 'r')
axes.set_xlabel("x")
axes.set_ylabel("y")
axes.set_title("Comparison of $f(x)$ and the Newton update map $x_{k+1}(x)$")
axes.set_ylim((-2,2))
axes.set_xlim((0,2))
axes.plot(x, numpy.zeros(x.shape), 'k--')

plt.show()
```

#### Other Issues

 - Need to supply both $f(x)$ and $f'(x)$, which could be expensive
 - Example: FTV equation $f(r) = A - \frac{m P}{r} \left[ \left(1 + \frac{r}{m} \right )^{m n} - 1\right]$
 - Can use symbolic differentiation (`sympy`)

### Secant Methods

Is there a method with the convergence of Newton's method but without the extra derivatives?  Maybe something that calculates the derivative rather than expects it?

Given $x_k$ and $x_{k-1}$ represent the derivative as

$$f'(x) \approx \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}}$$

Combining this with the basic approach of Newton leads to

$$x_{k+1} = x_k - \frac{f(x_k) (x_k - x_{k-1}) }{f(x_k) - f(x_{k-1})}$$

This leads to superlinear convergence (the order of convergence is the golden ratio, $\approx 1.618$).

Alternative interpretation: fit a line through two points and see where they intersect the x-axis.
$$(x_k, f(x_k)) ~~~~~ (x_{k-1}, f(x_{k-1}))$$

$$y = \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x - x_k) + b$$

$$b = f(x_{k-1}) - \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x_{k-1} - x_k)$$

$$ y = \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x - x_k) + f(x_k)$$

Now solve for $x_{k+1}$ which is where the line intersects the x-axis ($y=0$)

$$0 = \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x_{k+1} - x_k) + f(x_k)$$

$$x_{k+1} = x_k - \frac{f(x_k) (x_k - x_{k-1})}{f(x_k) - f(x_{k-1})}$$

```python
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.11, 100)
f = lambda r, A=A, m=m, P=P, n=n: \
        A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)

# Initial guess
x_k = 0.07
x_km = 0.06

fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')

axes.plot(x_k, 0.0, 'ko')
axes.plot(x_k, f(x_k), 'ko')
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes.plot(x_km, 0.0, 'ko')
axes.plot(x_km, f(x_km), 'ko')
axes.plot([x_km, x_km], [0.0, f(x_km)], 'k--')

axes.plot(r, (f(x_k) - f(x_km)) / (x_k - x_km) * (r - x_k) + f(x_k), 'k')

x_kp = x_k - (f(x_k) * (x_k - x_km) / (f(x_k) - f(x_km)))
axes.plot(x_kp, 0.0, 'ro')
axes.plot([x_kp, x_kp], [0.0, f(x_kp)], 'r--')
axes.plot(x_kp, f(x_kp), 'ro')

axes.set_xlabel("r (%)")
axes.set_ylabel("f(r)")
axes.set_title("Secant Method")
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))

plt.show()
```

#### Algorithm

Given $f(x)$, given bracket $[a,b]$, a `TOLERANCE`, and a `MAX_STEPS`

1. Initialize $x_1 = a$, $x_2 = b$, $f_1 = f(x_1)$, and $f_2 = f(x_2)$
2. Loop until either `MAX_STEPS` is reached or `TOLERANCE` is achieved
    1. Calculate the new update $x_{k+1}$
    2. Check for convergence and break if reached
    3. Update parameters $x_1$, $x_2$, $f_1 = f(x_1)$ and $f_2 = f(x_2)$
3. Celebrate

```python
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.11, 100)
f = lambda r, A=A, m=m, P=P, n=n: \
        A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
f_prime = lambda r, A=A, m=m, P=P, n=n: \
        -P*m*n*(1.0 + r/m)**(m*n)/(r*(1.0 + r/m)) \
        + P*m*((1.0 + r/m)**(m*n) - 1.0)/r**2

# Algorithm parameters
MAX_STEPS = 100
TOLERANCE = 1e-4

# Initial guess
x_k = 0.07
x_km = 0.06

# Setup figure to plot convergence
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')

for n in xrange(1, MAX_STEPS + 1):
    axes.plot(x_k, f(x_k), 'o')
    x_kp = x_k - f(x_k) * (x_k - x_km) / (f(x_k) - f(x_km))
    x_km = x_k
    x_k = x_kp
    if numpy.abs(f(x_k)) < TOLERANCE:
        break

if n == MAX_STEPS:
    print "Reached maximum number of steps!"
else:
    print "Success!"
    print " x* = %s" % x_k
    print " f(x*) = %s" % f(x_k)
    print " number of steps = %s" % n

axes.set_xlabel("r (%)")
axes.set_ylabel("f(r)")
axes.set_title("Secant Method")
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
plt.show()
```

#### Comments

 - Secant method as shown is equivalent to linear interpolation
 - Can use higher order interpolation for higher order secant methods
 - Convergence is not quite quadratic
 - Not guaranteed to converge
 - Does not preserve the bracket
 - Almost as good as Newton's method if your initial guess is good.

### Hybrid Methods

Combine attributes of methods with others to make one great algorithm to rule them all (not really)

#### Goals

1. Robustness: Given a bracket $[a,b]$, maintain the bracket
1. Efficiency: Use superlinear convergent methods when possible

#### Options

 - Methods requiring $f'(x)$
   - NewtSafe (RootSafe, Numerical Recipes)
   - Newton's Method within a bracket, Bisection otherwise
 - Methods not requiring $f'(x)$
   - Brent's Algorithm (zbrent, Numerical Recipes)
   - Combination of bisection, secant and inverse quadratic interpolation
 - `scipy.optimize` package

## Optimization (finding extrema)

I want to find the extrema of a function $f(x)$ on a given interval $[a,b]$.  A few approaches:

 - Bracketing Algorithms: Golden-Section Search (linear)
 - Interpolation Algorithms: Repeated parabolic interpolation
 - Hybrid Algorithms

### Bracketing Algorithm (Golden Section Search)

Given $f(x) \in C[a,b]$ that is convex over an interval $x \in [a,b]$, reduce the interval size until it brackets the minimum.

Note that we no longer have the $x=0$ help we had before, so bracketing and doing bisection is a bit more tricky in this case.  In particular, choosing your initial bracket is important!

#### Golden Section Search - Picking Intervals

We also may want to choose the search points $c$ and $d$ so that the distance between $a$ and $d$, say $\Delta_{ad}$, and $b$ and $c$, say $\Delta_{bc}$, is carefully chosen.  For Golden Section Search we require that these are equal.  This tells us where to put $d$ but not $c$.

The Golden Section Search also requires that $b$ should be chosen so that the spacing between the points has the same proportion as $(a, c, d)$ and $(c, d, b)$.

Ok, that's weird.  Also, why are we calling this thing "Golden"?

Mathematically:

If $f(d) > f(c)$ then

$$\frac{\Delta_{cd}}{\Delta_{ca}} = \frac{\Delta_{ca}}{\Delta_{bc}}$$

If $f(d) < f(c)$ then

$$\frac{\Delta_{cd}}{\Delta_{bc} - \Delta_{cd}} = \frac{\Delta_{ca}}{\Delta_{bc}}$$

Eliminating $\Delta_{cd}$ leads to the equation

$$\left( \frac{\Delta_{cb}}{\Delta_{ca}} \right )^2 = \frac{\Delta_{cb}}{\Delta_{ca}} + 1$$

Solving this leads to

$$ \frac{\Delta_{cb}}{\Delta_{ca}} = \frac{1 + \sqrt{5}}{2} \approx 1.618,$$

the golden ratio!  The algorithm and code below place the interior points using its reciprocal

$$\varphi = \frac{2}{1 + \sqrt{5}} = \frac{\sqrt{5} - 1}{2} \approx 0.618.$$

#### Algorithm

1. Initialize bracket $[a,b]$ and compute $f_a = f(a)$ and $f_b = f(b)$, $\Delta x = b-a$
1. Initialize points $c = b - \varphi \cdot (b - a)$ and $d = a + \varphi \cdot (b - a)$
1. Loop
    1. Evaluate $f_c$ and $f_d$
    1. If $f_c < f_d$ then we pick the left interval for the next iteration, and otherwise pick the right interval
    1. Check size of bracket for convergence: $\Delta_{cd} <$ `TOLERANCE`

```python
# New Test Function!
def f(t):
    """Simple function for minimization demos"""
    return -3.0 * numpy.exp(-(t - 0.3)**2 / (0.1)**2) \
           + numpy.exp(-(t - 0.6)**2 / (0.2)**2) \
           + numpy.exp(-(t - 1.0)**2 / (0.2)**2) \
           + numpy.sin(t) \
           - 2.0

t = numpy.linspace(0, 2, 200)

fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, f(t))
axes.set_xlabel("t (days)")
axes.set_ylabel("People (N)")
axes.set_title("Decrease in Population due to SPAM Poisoning")
plt.show()
```

```python
phi = (numpy.sqrt(5.0) - 1.0) / 2.0

TOLERANCE = 1e-4
MAX_STEPS = 100

a = 0.2
b = 0.5
c = b - phi * (b - a)
d = a + phi * (b - a)

t = numpy.linspace(0, 2, 200)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, f(t))
axes.set_xlabel("t (days)")
axes.set_ylabel("People (N)")
axes.set_title("Decrease in Population due to SPAM Poisoning")

success = False
for n in xrange(1, MAX_STEPS + 1):
    axes.plot(a, f(a),'ko')
    axes.plot(b, f(b),'ko')

    fc = f(c)
    fd = f(d)

    if fc < fd:
        b = d
        d = c
        c = b - phi * (b - a)
    else:
        a = c
        c = d
        d = a + phi * (b - a)

    if numpy.abs(b - a) < TOLERANCE:
        success = True
        break

if success:
    print "Success!"
    print " t* = %s" % str((b + a) / 2.0)
    print " f(t*) = %s" % f((b + a) / 2.0)
    print " number of steps = %s" % n
else:
    print "Reached maximum number of steps!"

plt.show()
```

### Interpolation Approach

Successive parabolic interpolation - similar to the secant method

Basic idea: Fit a polynomial to the function using three points, find its minimum, and guess new points based on that minimum

#### Algorithm

Given $f(x)$ and $[a,b]$

1. Initialize $x = [a, b, (a+b)/2]$
1. Loop
    1. Evaluate function $f(x)$
    1. Use a polynomial fit to the function:
    $$p(x) = p_0 x^2 + p_1 x + p_2$$
    1. Calculate the minimum:
    $$p'(x) = 2 p_0 x + p_1 = 0 ~~~~ \Rightarrow ~~~~ x = -p_1 / (2 p_0)$$
    1. Calculate new interval
    1. Check tolerance

```python
MAX_STEPS = 100
TOLERANCE = 1e-4

a = 0.5
b = 0.2
x = numpy.array([a, b, (a + b) / 2.0])

t = numpy.linspace(0, 2, 200)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, f(t))
axes.set_xlabel("t (days)")
axes.set_ylabel("People (N)")
axes.set_title("Decrease in Population due to SPAM Poisoning")
axes.plot(x[0], f(x[0]), 'ko')
axes.plot(x[1], f(x[1]), 'ko')

success = False
for n in xrange(1, MAX_STEPS + 1):
    axes.plot(x[2], f(x[2]), 'ko')

    poly = numpy.polyfit(x, f(x), 2)
    axes.plot(t, poly[0] * t**2 + poly[1] * t + poly[2], 'r--')

    x[0] = x[1]
    x[1] = x[2]
    x[2] = -poly[1] / (2.0 * poly[0])

    if numpy.abs(x[2] - x[1]) / numpy.abs(x[2]) < TOLERANCE:
        success = True
        break

if success:
    print "Success!"
    print " t* = %s" % x[2]
    print " f(t*) = %s" % f(x[2])
    print " number of steps = %s" % n
else:
    print "Reached maximum number of steps!"

axes.set_ylim((-5, 0.0))
plt.show()
```

## Scipy Optimization

SciPy contains a lot of tools for optimization!

```python
import scipy.optimize as optimize
optimize.golden(f, brack=(0.2, 0.25, 0.5))
```

```python

```
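As a follow-up to the `scipy.optimize` mentions above, here is a minimal sketch (an addition, not part of the original notebook) that hands the same two problems to library routines: Brent's method for the FTV root and a bounded scalar minimization for the SPAM-poisoning test function. The constants and brackets are the ones used in the cells above; the names `f_rate` and `f_spam` are introduced here only to avoid clashing with the `f` defined earlier.

```python
import numpy
from scipy import optimize

# FTV rate equation from the root-finding examples above
P, m, n, A = 1500.0, 12, 20.0, 1e6
f_rate = lambda r: A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)

# Brent's method: keeps a bracket (robust) but converges superlinearly
r_star = optimize.brentq(f_rate, 0.07, 0.10)
print("r* = %s, f(r*) = %s" % (r_star, f_rate(r_star)))

# Bounded minimization of the test function used in the optimization examples
def f_spam(t):
    return (-3.0 * numpy.exp(-(t - 0.3)**2 / (0.1)**2)
            + numpy.exp(-(t - 0.6)**2 / (0.2)**2)
            + numpy.exp(-(t - 1.0)**2 / (0.2)**2)
            + numpy.sin(t) - 2.0)

result = optimize.minimize_scalar(f_spam, bounds=(0.2, 0.5), method='bounded')
print("t* = %s, f(t*) = %s" % (result.x, result.fun))
```

Both routines implement the hybrid ideas discussed above: `brentq` maintains the bracket while using secant/inverse-quadratic steps, and `minimize_scalar(method='bounded')` keeps the search confined to the stated interval.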
2379d90f2d3890ddfd84f1752b81ad70a9ad5ab8
47,613
ipynb
Jupyter Notebook
05_root_finding_optimization.ipynb
antoniopradom/Intro-numerical-methods
c177ccec215df8c3c6b6bb8df68d2527fb5ef2cc
[ "CC0-1.0" ]
null
null
null
05_root_finding_optimization.ipynb
antoniopradom/Intro-numerical-methods
c177ccec215df8c3c6b6bb8df68d2527fb5ef2cc
[ "CC0-1.0" ]
null
null
null
05_root_finding_optimization.ipynb
antoniopradom/Intro-numerical-methods
c177ccec215df8c3c6b6bb8df68d2527fb5ef2cc
[ "CC0-1.0" ]
null
null
null
26.854484
453
0.459769
true
10,299
Qwen/Qwen-72B
1. YES 2. YES
0.872347
0.887205
0.773951
__label__eng_Latn
0.748359
0.636479
# Incremental control example Åström & Wittenmark Problem 5.3 We have plant model $$ H(z) = \frac{z+0.7}{z^2 - 1.8z + 0.81} $$ and controller $$ F_b(z) = \frac{s_0z^2 + s_1z + s_2}{(z-1)(z + r_1)} $$ Want closed-loop characteristic polynomial $A_c(z) = z^2 - 1.5z + 0.7$ and observer poles in the range $0<\alpha<a$. ### Diophantine equation $$ (z^2 - 1.8z + 0.81)(z-1)(z+r_1) + (z+0.7)(s_0z^2 + s_1z + s_2) = (z-\alpha)^2(z^2-1.5z+0.7) $$ \begin{align} (z^2 - 1.8z + 0.81)(z^2 +(r_1-1)z - r_1) + s_0z^3 + s_1z^2 + s_2z + 0.7s_0z^2 + 0.7s_1z + 0.7s_2 &= \\ \qquad\qquad\qquad\qquad\qquad\qquad (z^2 -2\alpha z + \alpha^2)(z^2 - 1.5z + 0.7)\\ (z^4 - 1.8z^3 + 0.81z^2 + (r_1-1)z^3 - 1.8(r_1-1)z^2 + 0.81(r_1-1)z - r_1z^2 + 1.8r_1z - 0.81r_1 + \\ \qquad\qquad\qquad s_0z^3 + s_1z^2 + s_2z + 0.7s_0z^2 + 0.7s_1z + 0.7s_2 = \\ \qquad\qquad\qquad\qquad\qquad\qquad z^4 - 2\alpha z^3 + \alpha^2 z^2 -1.5z^3 + 3\alpha z^2 - 1.5\alpha^2 z + 0.7z^2 - 1.4\alpha z + 0.7\alpha^2 \end{align} Resulting equations when setting coefficients equal \begin{align} \begin{cases} z^3: & r_1 + s_0 = 1.8 + 1 -1.5 -2\alpha\\ z^2: & -2.8 r_1 + 0.7s_0 + s_1 = -0.81 -1.8 +0.7 + \alpha^2 + 3\alpha\\ z^1: & 2.61r_1 + 0.7s_1 + s_2 = 0.81 -1.5\alpha^2 - 1.4\alpha\\ z^0: & -0.81r_1 + 0.7s_2 = 0.7\alpha^2 \end{cases} \end{align} ### Feedforward part of controller $$T(z) = t_0A_o(z) = t_0(z-\alpha)^2$$ $$ G_c(z) = \frac{T(z)B(z)}{A_o(z)A_c(z)} = \frac{t_0 B(z)}{A_c(z)}, \quad \text{want}\, G_c(1)=1$$ $$t_0 = \frac{A_c(1)}{B(1)} = \frac{1 - 1.5+0.7}{1+0.7} = \frac{2}{17}$$ ```python import numpy as np import matplotlib.pyplot as plt import sympy as sy import control.matlab as cm import ipywidgets as widgets %matplotlib widget ``` ## Symbolic solution ```python sy.init_printing() alphaa, hh, r1, s0, s1,s2 = sy.symbols('alpha, h, r1, s0, s1,s2', real=True) zz = sy.symbols('z', real=False) ``` ```python A = zz**2 - 1.8*zz + 0.81 B = zz+0.7 R = (zz-1)*(zz+r1) S = s0*zz**2 + s1*zz + s2 LHS = sy.Poly(A*R + B*S, zz) LHS ``` ```python RHS = sy.Poly((zz-alphaa)**2*(zz**2 - 1.5*zz + 0.7), zz) Dioph = LHS-RHS coeffs = Dioph.all_coeffs() coeffs ``` ```python sol=sy.solve(coeffs, [r1,s0, s1,s2]); ``` ## Effect of observer pole ```python h = 1 # The plant H = cm.tf([1, 0.7], [1, -1.8, 0.81], h) # set up plot fig, ax = plt.subplots(figsize=(8, 4)) #ax.set_ylim([-.1, 4]) ax.grid(True) def bode_S_T_and_Gc(alpha): """ Returns the bode plot data for S and T given observer pole """ r1n = float(sol[r1].subs({alphaa: alpha})) s0n = float(sol[s0].subs({alphaa: alpha})) s1n = float(sol[s1].subs({alphaa: alpha})) s2n = float(sol[s2].subs({alphaa: alpha})) B = [1, 0.7] Ac = [1, -1.5, 0.7] Rp = np.convolve([1, r1n],[1,-1]) Fb = cm.tf([s0n, s1n, s2n], Rp , h) Ff = np.sum(Ac)/np.sum(B) * cm.tf(np.convolve([1, -alpha], [1, -alpha]), Rp, h) #Gc = cm.minreal(Ff * cm.feedback(H, Fb)) Gc = Ff * cm.feedback(H, Fb) Ss = cm.feedback(1, H*Fb) Ts = cm.feedback(H*Fb, 1) Gcbode = cm.bode(Gc, omega_limits=[0.01, 3], Plot=False) Sbode = cm.bode(Ss, omega_limits=[0.01, 3], Plot=False) Tbode = cm.bode(Ts, omega_limits=[0.01,3], Plot=False) return (Sbode, Tbode, Gcbode) @widgets.interact(alpha=(0, 0.9, 0.02)) def update(alpha = 0.4): """Remove old lines from plot and plot new one""" [l.remove() for l in ax.lines] [l.remove() for l in ax.lines] Sb, Tb, Gb = bode_S_T_and_Gc(alpha) sh = ax.loglog(Sb[2], Sb[0], color='C0', label='Ss') th = ax.loglog(Tb[2], Tb[0], color='C1', label='Ts') gh = ax.loglog(Gb[2], Gb[0], color='C2', label='Gc') ax.legend(loc='lower center') ``` ```python h = 1 # 
The plant H = cm.tf([1, 0.7], [1, -1.8, 0.81], h) # set up plot fig, ax = plt.subplots(3,1,figsize=(8, 6), sharex=True ) ax[0].set_ylabel('Resp to pert') ax[1].set_ylabel('Resp to noise') ax[2].set_ylabel('Resp to setpoint') def step_S_T_and_Gc(alpha): """ Returns the bode plot data for S and T given observer pole """ r1n = float(sol[r1].subs({alphaa: alpha})) s0n = float(sol[s0].subs({alphaa: alpha})) s1n = float(sol[s1].subs({alphaa: alpha})) s2n = float(sol[s2].subs({alphaa: alpha})) B = [1, 0.7] Ac = [1, -1.5, 0.7] Rp = np.convolve([1, r1n],[1,-1]) Fb = cm.tf([s0n, s1n, s2n], Rp , h) Ff = np.sum(Ac)/np.sum(B) * cm.tf(np.convolve([1, -alpha], [1, -alpha]), Rp, h) #Gc = cm.minreal(Ff * cm.feedback(H, Fb)) Gc = Ff * cm.feedback(H, Fb) Ss = cm.feedback(1, H*Fb) Ts = cm.feedback(H*Fb, 1) NN = 50 uu = np.ones(NN) uu[24:] = 0 tt = np.arange(NN) Gcstep = cm.lsim(Gc, uu, tt) Sstep = cm.lsim(Ss, uu, tt) Tstep = cm.lsim(Ts, uu, tt) return (Sstep, Tstep, Gcstep) @widgets.interact(alpha=(0, 0.9, 0.02)) def update(alpha = 0.4): """Remove old lines from plot and plot new one""" [[l.remove() for l in ax_.lines] for ax_ in ax] [[l.remove() for l in ax_.lines] for ax_ in ax] Sb, Tb, Gb = step_S_T_and_Gc(alpha) sh = ax[0].stem(Sb[1], Sb[0], markerfmt='C0o', label='Ss') th = ax[1].stem(Tb[0], markerfmt='C1o', label='Ts') gh = ax[2].stem(Gb[0], markerfmt='C2o', label='Gc') ``` ```python ?plt.subplots ``` ```python ``` ```python h = 1 # The plant H = cm.tf([1, 0.7], [1, -1.8, 0.81], h) # set up plot fig, ax = plt.subplots(figsize=(8, 4)) #ax.set_ylim([-.1, 4]) ax.grid(True) def bode_S_T_and_Gc(alpha): """ Returns the bode plot data for S and T given observer pole """ r1n = float(sol[r1].subs({alphaa: alpha})) s0n = float(sol[s0].subs({alphaa: alpha})) s1n = float(sol[s1].subs({alphaa: alpha})) s2n = float(sol[s2].subs({alphaa: alpha})) B = [1, 0.7] Ac = [1, -1.5, 0.7] Rp = np.convolve([1, r1n],[1,-1]) Fb = cm.tf([s0n, s1n, s2n], Rp , h) Ff = np.sum(Ac)/np.sum(B) * cm.tf(np.convolve([1, -alpha], [1, -alpha]), Rp, h) #Gc = cm.minreal(Ff * cm.feedback(H, Fb)) Gc = Ff * cm.feedback(H, Fb) Ss = cm.feedback(1, H*Fb) Ts = cm.feedback(H*Fb, 1) Gcbode = cm.bode(Gc, omega_limits=[0.01, 3], Plot=False) Sbode = cm.bode(Ss, omega_limits=[0.01, 3], Plot=False) Tbode = cm.bode(Ts, omega_limits=[0.01,3], Plot=False) return (Sbode, Tbode, Gcbode) @widgets.interact(alpha=(0, 0.9, 0.02)) def update(alpha = 0.4): """Remove old lines from plot and plot new one""" [l.remove() for l in ax.lines] [l.remove() for l in ax.lines] Sb, Tb, Gb = bode_S_T_and_Gc(alpha) sh = ax.semilogy(Sb[2], Sb[0], color='C0', label='Ss') th = ax.semilogy(Tb[2], Tb[0], color='C1', label='Ts') gh = ax.semilogy(Gb[2], Gb[0], color='C2', label='Gc') ax.legend(loc='lower center') ``` FigureCanvasNbAgg() interactive(children=(FloatSlider(value=0.4, description='alpha', max=0.9, step=0.02), Output()), _dom_classes… ```python ```
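A small numerical cross-check of the pole-placement design is sketched below; this is an addition to the notebook, not part of the original. For a sample observer pole $\alpha = 0.4$ (an assumption made only for this check), the four coefficient equations above are solved as a linear system in $(r_1, s_0, s_1, s_2)$ and the Diophantine identity $A(z)R(z) + B(z)S(z) = (z-\alpha)^2 A_c(z)$ is verified with `numpy` polynomial products.

```python
import numpy as np

alpha_check = 0.4  # sample observer pole, chosen for this check only

# Coefficient equations from the expansion above, written as M @ [r1, s0, s1, s2] = v
M = np.array([[ 1.0,  1.0, 0.0, 0.0],
              [-2.8,  0.7, 1.0, 0.0],
              [ 2.61, 0.0, 0.7, 1.0],
              [-0.81, 0.0, 0.0, 0.7]])
v = np.array([1.8 + 1 - 1.5 - 2*alpha_check,
              -0.81 - 1.8 + 0.7 + alpha_check**2 + 3*alpha_check,
              0.81 - 1.5*alpha_check**2 - 1.4*alpha_check,
              0.7*alpha_check**2])
r1n, s0n, s1n, s2n = np.linalg.solve(M, v)

# Verify A(z) R(z) + B(z) S(z) = (z - alpha)^2 (z^2 - 1.5 z + 0.7)
A_poly = [1, -1.8, 0.81]
B_poly = [1, 0.7]
R_poly = np.polymul([1, -1], [1, r1n])   # (z - 1)(z + r1)
S_poly = [s0n, s1n, s2n]
lhs = np.polyadd(np.polymul(A_poly, R_poly), np.polymul(B_poly, S_poly))
rhs = np.polymul(np.polymul([1, -alpha_check], [1, -alpha_check]), [1, -1.5, 0.7])
print(np.allclose(lhs, rhs))  # expect True
```

The separate `*_poly` names are used so the sympy symbols `r1, s0, s1, s2` defined earlier are not overwritten.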
e3c7da65b4e5357e03a9d859f8363eba3485eff9
43,143
ipynb
Jupyter Notebook
polynomial-design/notebooks/A-and-W-5.3.ipynb
kjartan-at-tec/mr2007-computerized-control
16e35f5007f53870eaf344eea1165507505ab4aa
[ "MIT" ]
2
2020-11-07T05:20:37.000Z
2020-12-22T09:46:13.000Z
polynomial-design/notebooks/A-and-W-5.3.ipynb
kjartan-at-tec/mr2007-computerized-control
16e35f5007f53870eaf344eea1165507505ab4aa
[ "MIT" ]
4
2020-06-12T20:44:41.000Z
2020-06-12T20:49:00.000Z
polynomial-design/notebooks/A-and-W-5.3.ipynb
kjartan-at-tec/mr2007-computerized-control
16e35f5007f53870eaf344eea1165507505ab4aa
[ "MIT" ]
1
2021-03-14T03:55:27.000Z
2021-03-14T03:55:27.000Z
95.238411
13,892
0.79707
true
3,018
Qwen/Qwen-72B
1. YES 2. YES
0.779993
0.766294
0.597704
__label__eng_Latn
0.215352
0.226996
# 7. Bandit Algorithms

**Recommender systems** are a subclass of information filtering systems that seek to predict the 'rating' or 'preference' that a user would give to an item.

**k-armed bandits** are one way to solve this recommendation problem. They can also be used in other similar contexts, such as clinical trials and (financial) portfolio optimization.

## General concepts

The **(instantaneous) regret** is the difference between the $\mu^{*}$ of the best arm and the $\mu_i$ of the arm we pick.

The **total regret** is the sum of the instantaneous regrets over time. $i_1, \dots, i_T$ are the decisions we made over time.

\begin{equation}
R_T = \sum_{t = 1}^T r_t = \sum_{t = 1}^T \mu^{*} - \mu_{i_t}
\end{equation}

Ideally, we pick the perfect arm from the beginning, and we get a total regret of 0.

In practice, if the regret goes to zero over time, we're happy enough, and an algorithm with this property is called **no regret**, even though there might be a little bit of regret.

## Stochastic k-armed bandits

Like in online supervised learning, the algorithm runs over time, and at every time point we need to make a decision.

k arms to pull, each arm can win ($y = $ reward $ = 1$) with probability $\mu_i$ (unknown). Note that the reward is always constant; it's just the probability which varies.

However, unlike online learning, we only receive information about the action we choose, i.e. we only have one single "try".

In e.g. OCP, at any time point we get a new $x$, and we can try a plethora of different ways to update our model ($w$) (hypothetically) and see how good the new potential model is. With bandits, you don't know how good your choice is until you commit to it and do it (pull the arm), by which time you can no longer change it.

## $\epsilon$-greedy algorithm

At every time, pick a random arm with probability $\epsilon$, and pick the current best arm otherwise.

This works surprisingly well (it's no-regret), but could be better.

## Hoeffding's inequality

> [...] provides an upper bound on the probability that the sum of random variables deviates from its expected value.
>
> -- Wikipedia

Let $X_1,\dots,X_m$ be i.i.d. random variables taking values in $[0, 1]$.

The real mean $\mu = \mathbb{E}[X]$ is unknown.

We have an empirical estimate based on our $m$ trials:

\begin{equation}
\hat{\mu}_m = \frac{1}{m} \sum_{i = 1}^{m} X_i
\end{equation}

Then we have:

\begin{equation}
P\left(\left|\mu - \hat{\mu}_m\right| \ge b\right) \le 2 \exp\left(-2b^2m \right) = \delta_m
\end{equation}

That is, the probability of our estimate being more than $b$ off the real value is smaller than a computable threshold.

We just want the $b$ that corresponds to any given fixed probability bound. So we fix the probability bound to, say, $\delta_m$, and then compute the corresponding $b$, as a function of $\delta_m$ and $m$.

For a fixed upper probability bound, we fix $\delta_m$ and get $b = \sqrt{\frac{1}{2m} \log{\frac{2}{\delta_m}}}$.

All we need now is to decide what $\delta_m$ should be.

Now we also want to set an upper bound on the probability not just for a single $m$, but for all $m$. That is, we want an upper bound on $P(E_m)$ for every $m$, where $E_m$ is the event $\{|\mu - \hat{\mu}_m| \ge b\}$, i.e. a lower bound for $P(|\mu - \hat{\mu}_t| \le b, \> \forall t)$.
So we get:

\begin{equation}
\begin{aligned}
P(|\mu - \hat{\mu}_t| \le b, \> \forall t) & = 1 - P(E_1 \cup E_2 \cup \dots) \\
& \ge 1 - \sum_{t=1}^{\infty} P(E_t) \\
& \ge 1 - \sum_{t=1}^{\infty} \delta_t \\
& \ge 1 - \delta \> \text{with} \> \sum_{t=1}^{\infty}\delta_t \le \delta
\end{aligned}
\end{equation}

The 2nd row is the union bound: the sum we're subtracting is at least as big as the probability of the union. The 3rd row holds because every $\delta_t$ is an upper bound on the corresponding $P(E_t)$, as given by Hoeffding's inequality itself.

We therefore want a sum of $\delta_t$ which is bounded. Setting $\delta_t = \frac{c}{t^2}$ works well, since the corresponding sum converges, so the upper bound $\delta$ exists and is finite.

We now have a good heuristic: at any time step $t$, our upper bound should be $\delta_t = \frac{c}{t^2}$.

Recall that we want to express $b$ as a function of $\delta_t$, since $b$ is *the value which actually defines our upper confidence bound*.

(Note that this probability shrinks quadratically over time, so it keeps getting tighter and tighter for all arms, as we keep playing.)

## UCB1

All we need to do now is shove our $\delta_t$ into the formula for $b$, and we get (setting $c := 2$):

\begin{aligned}
\operatorname{UCB}(i) & = \hat{\mu}_i + \sqrt{\frac{1}{2 n_i} \ln \left( 2 \frac{t^2}{2} \right) } \\
& = \hat{\mu}_i + \sqrt{\frac{\ln{t^2}}{2 n_i}} \\
& = \hat{\mu}_i + \sqrt{\frac{\ln{t}}{n_i}}
\end{aligned}

We can plug this formula right into a program! See `bonus/tutorial-bandits.ipynb` for an implementation.

This is an algorithm which is much smarter than $\epsilon$-greedy about what it explores.

The classical UCB1 bonus is $\sqrt{\frac{2 \ln t}{n_i}}$; it corresponds to the slightly more conservative choice $\delta_t = \frac{2}{t^4}$, and the constant does not change the overall analysis.

It can be shown that UCB is a **no-regret** algorithm. ($R_T / T \rightarrow 0 \> \text{as} \> T \rightarrow \infty$)

## Applications of bandit algorithms

Non-DM:

 * Clinical trials (give the best possible cure to a patient, while also working on improving the accuracy of our diagnostics)
 * Matching markets (TODO: more info)
 * Asset pricing
 * Adaptive routing
 * Go

DM:

 * Advertising
 * Optimizing relevance (e.g. news article recommendations)
 * Scheduling web crawlers
 * Optimizing user interfaces (e.g. smart A/B testing)

## Contextual bandits

Also incorporate some info about every arm and every user. Useful when e.g. recommending articles, since it takes users' topic preferences into account.

We still use **cumulative (contextual) regret** as a metric, $R_T = \sum_{t=1}^{T}r_t$.

Can achieve *sublinear regret* by learning the **optimal mapping** from contexts (e.g. (user, article features) tuples) to actions.

### Outline

 * Observe context: $z_t \in \mathcal{Z}$, and, e.g. $\mathcal{Z} \subseteq \mathbb{R}^{d}$
 * Pick arm from set of possible arms, $x_t \in \mathcal{A}_t$
 * Observe reward $y_t$, which depends on the picked arm and the context, plus some possible noise: $y_t = f(x_t, z_t) + \epsilon_t$
 * Incur regret: $r_t = \max_{x \in \mathcal{A}_t}(f(x, z_t)) - f(x_t, z_t)$ (like before, the difference between the best arm, given the context, and the arm we actually picked)

### Linear recommendations

Want to minimize the regularized square loss

\begin{equation}
\hat{w}_i = \arg \min_w \sum_{t=1}^{m} (y_t - w^T z_t)^2 + \|w\|_2^2
\end{equation}

Note: This model can take features from the user, the article, or both into account. And every article has its own $\hat{w}$.

This is linear regression and it's easy to solve.

Key idea: Want to merge UCB and regression by having an upper confidence bound (UCB) for our $w$s.
Ideally, just as in UCB1, this bound will shrink towards $w$ as time goes on. This is LinUCB. [CHEATSHEET] \begin{aligned} \left| \> \text{estimated reward} - \text{true reward} \> \right| & \le \text{some bound} \quad \text{(with some probability)} \\ \left| \hat{w}^T_i z_t - w^T_i z_t \right| & \le \alpha\sqrt{z^T_t(D^T_i D_i + I)^{-1}z_t}, \> p \ge 1 - \delta \\ \left| \hat{w}^T_i z_t - w^T_i z_t \right| & \le \alpha\sqrt{z^T_t M_i z_t}, \> p \ge 1 - \delta \end{aligned} This holds as long as $\alpha = 1 + \sqrt{\frac{1}{2} \ln \left( \frac{2}{\delta} \right)}$. We set our desired probability bound, compute $\alpha$ and we have an algorithm! Same as UCB1, but compute an arm's UCB as: \begin{aligned} M_x \in \mathbb{R}^{d \times d}, b_x \in \mathbb{R}^{d} & \quad \text{(the arm's model)} \\ \hat{w} = M_x^{-1} b & \quad \text{(the model used for the primary payoff prediction)} \\ \operatorname{UCB}_x = \hat{w}_x^T z_t + \alpha \sqrt{z_t^T M_t^{-1} z_t} & \quad \text{(arm UCB given z)} \end{aligned} Not storing $M_x$ and $b_x$ together because we need $M_x^{-1}$ to compute the upper confidence bound of our predicted payoff. LinUCB is also no-regret (i.e. regret sub-linear in T). ### Learning from $y_t$ If the payoff $y_t > 0$ (see *rejection sampling*): * $M_x \leftarrow M_x + z_tz_t^T$ (outer product) * $b_x \leftarrow b_x + y_t z_t$ ### Problem with linear recommendations No shared effect modeling. We optimize every arm separately based on what users like it, but there's no way to directly exploit the fact that similar users may like similar articles. Use hybrid models! ## Hybrid LinUCB \begin{equation} y_t = w_i^T z_t + \beta^T \phi(x_i, z_t) + \epsilon_t \end{equation} * $\phi(x, z)$ simply flattens (like `numpy.ravel`) the outer product $x_i z_t^T$. * $w_i$ is an arm's model * $\beta$ captures user-article similarity (i.e. user interests). Can also solve this using regularized regression. We also need to compute confidence intervals for this bad boy. The algorithm is fluffy, but it works. ## Practical implementation of contextual bandits Sample case: * 1193 user features, 81 article features * We need to perform dimensionality reduction! ### Extracting feature vectors * Data consists of triplets of form (article_features, user_features, reward): $D = \left\{ (\phi_{a,1}, \phi_{u,1}, y_1), \dots, (\phi_{a,n}, \phi_{u,n}, y_n) \right\}$ * Learn the model parameters $W$ with *logistic regression* (remember that our reward $y_i$ is either 1 or 0, e.g. click or no click). This (super) model now predicts rewards based on both article and user features. It incorporates every arm's model. * Want per-arm models like before * Set: $\psi_{a,i} = \phi^T_{a,i} W$ (vector); in effect, this splits $W$ back to per-arm models; * $\psi_{a,i}$ is still hugely dimensional * k-means cluster $\psi_{a, i}$ our arm models (i.e. over i datapoints, with $i = 1, \dots, n$) * Obtain $j < n$ clusters; the final article features for article $i$ are $x_{i, j} = \frac{1}{Z} \exp{\left( -\| \psi_{a, i} - \mu_j \|_2^2 \right)}, \> x_{i, j} \in \mathbb{R}^{k}$ * i.e. compute some clusters and model articles relative to them, i.e. every article's feature is its distance to that cluster (and exp + constant, but the principle stays the same). This way we can express our articles and users using much fewer features. ## Evaluating bandit algorithms Gather data with pure exploration (random). Learn from log using **rejection sampling**. Go through log and try to predict the click at every step. 
If we're wrong, we reject the sample (ignore the log line); if we're right, we feed that reward back into the algorithm. Stop when $T$ events have been kept.

This is what we did in the last project!

This approach is **unbiased**, and the expected number of needed events is $kT$, with $k$ being the (post-dim-red) number of article features.

In general, UCB algorithms tend to perform *much* better than greedy algorithms when there isn't a lot of training data. And hybrid LinUCB seems to be the best. [Li et al WWW '10]

## Sharing observations across users

 * Use stereotypes
 * Describe stereotypes in lower-dim space (estimate using PCA/SVD, so dim-reduction).
 * First explore in stereotype subspace, then in the full space (whose exploration is significantly more expensive). This is **coarse to fine bandits**.

## Sets of k recommendations

 * In many cases (ads, news) want to recommend more than one thing at a time.
 * Want to choose a set that's relevant to as many users as possible.
 * $\implies$ optimize **relevance** and **diversity**
 * Want to cover as many users as possible with a (limited) set of e.g. ads.
 * Every article $i$ is relevant to a set of users $S_i$. Suppose this is known.

### This is a maximum (set) coverage problem

 * Define the coverage of a set $A$ of articles:

\begin{equation}
F(A) = \left| \bigcup_{i \in A}S_i \right|
\end{equation}

 * And we want to maximize this coverage: $\max_{|A| \le k} F(A)$
     - the number of candidate sets $A$ grows exponentially in $k$
     - finding the optimal $A$ is NP-hard.
     - Let's try a greedy solution!
     - Start with $A_0$ empty, and always add the article which increases the coverage of $A$ the most (a short code sketch of this selection rule is given at the end of these notes).
     - Turns out, this solution is "good enough" (~63% of optimal)
     - $F(A_{\text{greedy}}) \ge \left( 1 - \frac{1}{e} \right) F(A_{\text{opt}})$
     - $F(\> \text{greedy set of size} \> l \>) \ge \left(1 - e^{-l/k}\right) \max_{|A| \le k}F(A)$
     - this works because F is non-negative monotone and **submodular**

### Submodularity
[EXAM] **Submodularity** is a property of *set functions*.

$F : 2^V \rightarrow \mathbb{R} \> \text{submodular} \iff \forall A \subseteq B, s \not\in B: F(A \cup \{s\}) - F(A) \ge F(B \cup \{s\}) - F(B)$

Adding a set earlier cannot be worse than adding it later. Marginal benefits can never increase, i.e. our delta improvement at every step only gets smaller and smaller.

**Closedness**: A weighted sum of submodular functions is also submodular (positive weights). (Closed under nonnegative linear combinations.)

 * Allows multi-objective optimization with weights, as long as each objective is itself submodular: $F(A) = \sum_i \lambda_i F_i(A)$ is submodular, if $F_1, \dots, F_n$ are submodular

### "Lazy" greedy algorithm

 * First iteration as usual.
 * Keep an ordered list of marginal benefits $\Delta_i$ for every option from the previous iteration (marginal benefit = increase in coverage = # new elements we would get by adding the i<sup>th</sup> set).
 * Re-evaluate $\Delta_i$ only for the top element.
 * If $\Delta_i$ stays on top, use it; otherwise, re-sort.

This works because of submodularity. If $\Delta_i$ is on top, there's no way some other $\Delta_{i'}$ will "grow" in a subsequent step and overtake it. The only thing that can happen is for $\Delta_i$ itself to "drop down".

In practice, this means we can solve greedy problems with submodular objective functions **really fast**. Examples include sensor placement and blog recommendation selection.

General idea for recommending sets of k articles: Select article from pool.
Iterations represent adding additional articles in order to maximize the user interest coverage.

 * Bandit submodular optimization: learn from observing **marginal gains**
 * $F_t(A_t)$ is the feedback at time $t$, given that the set of articles $A_t$ was shown.

So how do we measure user coverage for articles?

## Submodular bandits

### Simple abstract model

 * Given a set of articles $V$, $\lvert V \rvert = n$.
 * Every round $t = 1 : T$ do:
     - $\exists$ an unknown subset of $V$ in which the user is interested: $W_t \subseteq V$
     - recommend a set of articles $A_t \subseteq V$ (how do we pick this? This is part of the challenge.)
     - if we recommended anything in which the user is interested, they click and we get a reward:

\begin{equation}
    F_t(A_t) = \left\{
        \begin{array}{ll}
            1 & \> \text{if } A_t \cap W_t \not= \varnothing \\
            0 & \> \text{otherwise}
        \end{array}
    \right.
\end{equation}

### Algorithm

 * Initialize $k$ multi-armed bandit algorithms for $k$ out of $n$ item selections.
 * In every round $t$, bandit $i$ picks an article and receives as feedback its marginal gain $\Delta_i$: the reward of the articles chosen by bandits $1, \dots, i$ minus the reward of the articles chosen by bandits $1, \dots, i - 1$.

Can show that submodular bandits using semi-bandit feedback have sublinear regret.

SF = Submodular Function

### LSBGreedy

 * Bandit algorithm for context-aware article set recommendations.
 * No-regret

```python

```
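The greedy coverage maximization referenced above can be sketched in a few lines. The code below is an illustration added to these notes (not part of the original), using a made-up toy example of per-article user sets $S_i$; the lazy variant would additionally keep the marginal gains $\Delta_i$ in a priority queue and only re-evaluate the top entry.

```python
def greedy_max_coverage(sets, k):
    """Greedily maximize F(A) = |union of S_i for i in A| subject to |A| <= k.

    sets: dict mapping article id -> set of users the article is relevant to.
    Returns the chosen article ids and the set of covered users.
    """
    covered = set()
    chosen = []
    for _ in range(k):
        # Pick the article with the largest marginal gain Delta_i
        best, best_gain = None, 0
        for i, S_i in sets.items():
            if i in chosen:
                continue
            gain = len(S_i - covered)
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:  # no remaining article adds any coverage
            break
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered


# Toy example (made up): 4 articles, users 1..7
S = {'a': {1, 2, 3}, 'b': {3, 4}, 'c': {4, 5, 6, 7}, 'd': {1, 7}}
print(greedy_max_coverage(S, 2))  # (['c', 'a'], {1, 2, 3, 4, 5, 6, 7})
```

By the submodularity argument above, this simple loop is already within a $(1 - 1/e)$ factor of the best possible coverage for any budget $k$.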
98562941f2d56797f9a964b14c97eb418b72922f
21,408
ipynb
Jupyter Notebook
07-bandits.ipynb
AndreiBarsan/dm-notes
24e5469c4ba9d6be0c8a5da18b8b99968436e69c
[ "Unlicense" ]
2
2016-01-22T14:36:41.000Z
2017-10-17T07:17:07.000Z
07-bandits.ipynb
AndreiBarsan/dm-notes
24e5469c4ba9d6be0c8a5da18b8b99968436e69c
[ "Unlicense" ]
null
null
null
07-bandits.ipynb
AndreiBarsan/dm-notes
24e5469c4ba9d6be0c8a5da18b8b99968436e69c
[ "Unlicense" ]
null
null
null
41.328185
332
0.583754
true
4,491
Qwen/Qwen-72B
1. YES 2. YES
0.907312
0.92523
0.839472
__label__eng_Latn
0.993502
0.788709
# Design of Retaining Wall http://structengblog.com/retaining-wall-analysis-ipython-sympy-possible-bim-integration/ ```python from sympy import * init_printing() ka, q, gs, z = symbols('k_a q gamma_s z') # soil properties and depth gfq, gfg = symbols('gamma_fq gamma_fg') # partial load factors pa, va, ma = symbols('p_a v_a m_a') # force effects pressure = Eq(pa,ka*(gs*z*gfg + q*gfq)) pressure ``` ```python shear = Eq(va, Integral(pressure.rhs, z)) shear ``` ```python shear = shear.doit() shear ``` ```python moment = Eq(ma, Integral(shear.rhs,z)) moment ``` ```python moment = moment.doit() moment ``` ```python wallData = ({ka:0.3, gs:19, gfg:1.35, gfq:1.5, q:5, z:3.4}) designSliding = shear.subs(wallData).rhs print('The ULS sliding force at the base of the wall = {:.2f} kN'.format(designSliding)) ``` The ULS sliding force at the base of the wall = 52.13 kN ```python designMoment = moment.subs(wallData).rhs print('The ULS overturning moment at the base of the wall = {:.2f} kNm'.format(designMoment)) ``` The ULS overturning moment at the base of the wall = 63.41 kNm ```python # geometric properties of the wall hs, ts, lb, tb, toe = symbols('h_s t_s l_b t_b toe') gc = symbols('gamma_c') # unit weight of the wall W, Mst, Ru = symbols('W M_st R_u') # resistances mu = symbols('mu') # coeff of friction to base stemWeight = gc*hs*ts baseWeight = gc*lb*tb soilLength = lb-(toe+ts) soilWeight = hs*soilLength*gs totalWeight = Eq(W,stemWeight+baseWeight+soilWeight) totalWeight ``` ```python slidingResistance = Eq(Ru, totalWeight.rhs*mu) slidingResistance ``` ```python # lever arms to the elements lab = lb/2 las = toe+ts/2 lau = lb - soilLength/2 stabilityMoment = Eq(Mst,stemWeight*las+baseWeight*lab + soilWeight*lau) stabilityMoment ``` ```python wallGeo = ({gc:24, hs:3.15, ts:0.25, tb:0.25, toe:0.75, gs:19, mu:0.5}) slidingStability = slidingResistance.subs(wallGeo) slidingStability ``` ```python baseLength1 = solve(slidingStability.rhs - designSliding, lb)[0] print('The minimum base length to prevent sliding = {:.2f} m'.format(baseLength1)) ``` The minimum base length to prevent sliding = 2.21 m ```python overturningStability = stabilityMoment.subs(wallGeo) overturningStability ``` ```python baseLength2 = solve(overturningStability.rhs - designMoment, lb)[1] print('The minimum base length to resist overturning = {:.2f} m'.format(baseLength2)) ``` The minimum base length to resist overturning = 1.53 m ```python # by inspection of the above solutions adopt the following base length baseLength = 2.3 ``` ```python # Calculate the eccentricity of load from the centroid of the base ec = symbols('e_c') netMoment = stabilityMoment.rhs-moment.rhs xBar = netMoment / totalWeight.rhs eccentricity = Eq(ec,lb/2 - xBar) eccentricity ``` ```python # amend the partial load factors to 1 for the force effectrs wallData = ({ka:0.3, gs:19, gfg:1.0, gfq:1.0, q:5, z:3.4}) wallGeo = ({gc:24, hs:3.15, ts:0.25, tb:0.25, toe:0.75, gs:19, mu:0.5, lb: baseLength}) wallData.update(wallGeo) actEc = eccentricity.subs(wallData) actEc.evalf(3) ``` ```python # The kern of the foundation kern = lb.subs(wallData)/6 kern.evalf(3) ``` ```python if actEc.rhs > kern: print('Tension occurs at the heel') qmax = 2*totalWeight.rhs.subs(wallData)/(3*xBar.subs(wallData)) print('The maximum pressure = {:.2f} kPa'.format(qmax)) else: print('No tension occurs under the base') qmax = actEc.rhs*totalWeight.rhs.subs(wallData)*6/(lb.subs(wallData)**2)\ +totalWeight.rhs.subs(wallData)/lb.subs(wallData) print('The maximum pressure = {:.2f} 
kPa'.format(qmax)) ``` No tension occurs under the base The maximum pressure = 62.00 kPa ```python fck, fy = symbols('f_ck f_y') ec, ecy, ecu = symbols('e_c, e_cy, e_cu') fc = symbols('f_c') conc_str = Eq(fc, 4.0/9*fck*(2*(ec/ecy)-(ec/ecy)**2)) conc_str ``` ```python conc_data = ({fck:25, ecy:0.002, ec:0.0025}) conc_stress = conc_str.rhs.subs(conc_data) conc_stress ``` ```python ```
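As a quick plain-Python sanity check of the sympy results above (an addition to the original notebook), the integrated pressure expressions can be evaluated directly; the printed values should reproduce the ULS sliding force and overturning moment computed earlier.

```python
# Closed-form integrals of p_a = k_a * (gamma_s * z * gamma_fg + q * gamma_fq)
# Note: these plain floats shadow the sympy symbols defined above.
ka, gs, gfg, q, gfq, z = 0.3, 19, 1.35, 5, 1.5, 3.4

va = ka * (gs * gfg * z**2 / 2 + q * gfq * z)           # base shear
ma = ka * (gs * gfg * z**3 / 6 + q * gfq * z**2 / 2)    # base moment
print('V = {:.2f} kN, M = {:.2f} kNm'.format(va, ma))   # ~52.13 kN, ~63.41 kNm
```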
9c68c94142affc728a3d6c20c940faca02684600
49,668
ipynb
Jupyter Notebook
ret_wall.ipynb
satish-annigeri/Notebooks
92a7dc1d4cf4aebf73bba159d735a2e912fc88bb
[ "CC0-1.0" ]
null
null
null
ret_wall.ipynb
satish-annigeri/Notebooks
92a7dc1d4cf4aebf73bba159d735a2e912fc88bb
[ "CC0-1.0" ]
null
null
null
ret_wall.ipynb
satish-annigeri/Notebooks
92a7dc1d4cf4aebf73bba159d735a2e912fc88bb
[ "CC0-1.0" ]
null
null
null
77.124224
7,302
0.783804
true
1,352
Qwen/Qwen-72B
1. YES 2. YES
0.907312
0.817574
0.741795
__label__eng_Latn
0.593274
0.561771
# Content: 1. [Simple example](#1.-Simple-example) 2. [Parametric equations](#2.-Parametric-equations) 3. [Polishing the plot](#3.-Polishing-the-plot) 4. [Contour plot](#4.-Contour-plot) 5. [Beginner-level animation](#5.-Beginner-level-animation) 6. [Intermediate-level animation](#6.-Intermediate-level-animation) ## 1. Simple example ```python import numpy as np import matplotlib.pyplot as plt #=== x-range xmin=-10 xmax=10 xgrids=51 #Make grids along x, discretize x x=np.linspace(xmin, xmax, xgrids) #print(x.shape[0]) #=== variables to plot y=x**2 # here x is a vector (or numpy array) #=== plot plt.plot(x,y) plt.title('Parabola') plt.show() ``` You can change the default style such as line color, line width, etc. Bookmark [Matplotlib documentation](https://matplotlib.org/2.1.1/api/_as_gen/matplotlib.pyplot.plot.html) and refer this page for more details. ```python #=== plot plt.plot(x,y,linestyle='dashed',color='cyan',linewidth=2) plt.title('Parabola') plt.show() ``` ## 2. Parametric equations [Lissajous curves](https://en.wikipedia.org/wiki/Lissajous_curve) are defined by the parametric equations $$ \begin{align} x & = A\sin(at+\pi/2)\\ y & = B\sin(bt) \end{align} $$ ```python import numpy as np import matplotlib.pyplot as plt #=== time grids t_min=-np.pi t_max=np.pi t_grids=501 t=np.linspace(t_min, t_max, t_grids) # PLOT -1 #=== Constants A=1 B=1 a=10 b=12 #=== variables to plot x=A*np.sin(a*t+np.pi/2) y=B*np.sin(b*t) #=== plot plt.plot(x,y,color='orange') # PLOT-2 #=== Constants A=1 B=1 a=1 b=2 #=== variables to plot x=A*np.sin(a*t+np.pi/2) y=B*np.sin(b*t) #=== plot plt.plot(x,y,color='green') plt.title('Lissajous curves') plt.grid() plt.show() #=== To see the current working directory, uncomment the following 2 lines #import os #os.getcwd() ``` ## 3. Polishing the plot ```python import numpy as np import matplotlib.pyplot as plt #=== t-range t_min=-np.pi t_max=np.pi t_grids=501 t=np.linspace(t_min, t_max, t_grids) #=== Constants A=1 B=1 a=10 b=9 #=== variables to plot x=A*np.sin(a*t+np.pi/2) y=B*np.sin(b*t) #=== make it a square plot fig = plt.figure() # comment if square plot is not needed ax = fig.add_subplot(111) # comment if square plot is not needed plt.plot(x,y) ax.set_aspect('equal', adjustable='box') # comment if square plot is not needed #=== labels, titles, grids plt.xlabel("x") plt.ylabel("y") plt.title('Lissajous curves') plt.grid() #=== save in file, you can also use .pdf or .svg plt.savefig('Lissajous.png') #=== display plt.show() ``` ## 4. Contour plot ```python import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt #=== Define a 2D function def f(x,y): #z=(x**2+y**2) z=(1-x**2+y**2) return z x=np.arange(-2.0,2.0,0.1) y=np.arange(-2.0,2.0,0.1) X,Y=np.meshgrid(x,y) Z=f(X,Y) N_iso=np.arange(-2,2.5,0.5) #fig = plt.figure() # comment if square plot is not needed #ax = fig.add_subplot(111) # comment if square plot is not needed CS=plt.contour(Z,N_iso,linewidths=2,cmap=mpl.cm.coolwarm) #ax.set_aspect('equal', adjustable='box') # comment if square plot is not needed plt.clabel(CS, inline=True, fmt='%1.2f', fontsize=10) plt.colorbar(CS) plt.title('A contour plot') plt.show() ``` ## 5. 
Beginner-level animation ```python import os import numpy as np import matplotlib.pyplot as plt import imageio ``` ```python x_min=-10 x_max=10 x_grids=501 x=np.linspace(x_min, x_max, x_grids)## ONE ## def gauss(x,x0): f=np.exp(-(x-x0)**2) return f #=== plot-1 x0=0 y=gauss(x,x0) plt.plot(x,y) plt.title('Moving Gaussian') plt.savefig('_tmp_1.png') plt.show() #=== plot-2 x0=1 y=gauss(x,x0) plt.plot(x,y) plt.title('Moving Gaussian') plt.savefig('_tmp_2.png') plt.show() #=== plot-3 x0=2 y=gauss(x,x0) plt.plot(x,y) plt.title('Moving Gaussian') plt.savefig('_tmp_3.png') plt.show() #=== plot-4 x0=3 y=gauss(x,x0) plt.plot(x,y) plt.title('Moving Gaussian') plt.savefig('_tmp_4.png') plt.show() ``` ```python # Build GIF with imageio.get_writer('mygif1.gif', mode='I') as writer: for filename in ['_tmp_1.png', '_tmp_2.png', '_tmp_3.png', '_tmp_4.png']: image = imageio.imread(filename) writer.append_data(image) ``` Double click the next image and then shift+enter ## 6. Intermediate-level animation ```python import numpy as np import matplotlib.pyplot as plt #=== Particle-in-a-box solutions hbar=1 mass=1 L = 1 #=== Eigenvalues n=1 E1=n**2 * np.pi**2 * hbar**2 / (2.0 * mass * L**2) n=2 E2=n**2 * np.pi**2 * hbar**2 / (2.0 * mass * L**2) x=np.linspace(0,1,101) def psi1(x): # Ground state, n = 1 n=1 L=1 val=np.sqrt(2.0/L)*np.sin(n*np.pi*x/L) return val def psi2(x): # First excited state, n = 2 n=2 L=1 val=np.sqrt(2.0/L)*np.sin(n*np.pi*x/L) return val c1=1.0/np.sqrt(2.0) c2=1.0/np.sqrt(2.0) filenames = [] t=0 it=0 dt=0.01 i=complex(0,1) dx=0.1 #print(np.sqrt(np.dot(psi1(x)*dx,psi1(x)*dx))) # uncomment to check for Normalization while it <= 100: psi=c1*psi1(x)*dx*np.exp(-i*E1*t/hbar) + c2*psi2(x)*dx*np.exp(-i*E2*t/hbar) plt.plot(x,np.real(psi)**2+np.imag(psi)**2) plt.xlim(0, 1) plt.ylim(0, 0.2) plt.xlabel("x") plt.ylabel("$|\psi|^2$(x)") #NOTE: LaTex syntax for psi plt.title('Time evolution of $[\psi_1+\psi_2]/\sqrt{2}$') plt.text(0.2,0.15, r'$time=$ {0:10.3f} [au]'.format(t), fontsize=10) filename='_tmp_'+str(it).zfill(5)+'.png' filenames.append(filename) plt.savefig(filename) plt.close() t=t+dt it=it+1 # build gif with imageio.get_writer('mygif2.gif', mode='I') as writer: for filename in filenames: image = imageio.imread(filename) writer.append_data(image) # Remove tmp files for filename in set(filenames): os.remove(filename) ``` Double click the next image and then shift+enter ```python ```
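An alternative to writing PNG frames and stitching them with `imageio` is matplotlib's own animation machinery. The sketch below is an addition to the notebook: it animates the same moving Gaussian with `FuncAnimation`. Saving as GIF assumes the `pillow` writer is available, and the output filename `mygif3.gif` is arbitrary.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

x = np.linspace(-10, 10, 501)

fig, ax = plt.subplots()
line, = ax.plot(x, np.exp(-x**2))
ax.set_ylim(0, 1.1)
ax.set_title('Moving Gaussian')

def update(frame):
    # shift the Gaussian by 0.1 per frame and redraw only the line
    line.set_ydata(np.exp(-(x - 0.1 * frame)**2))
    return (line,)

anim = FuncAnimation(fig, update, frames=100, interval=50, blit=True)
anim.save('mygif3.gif', writer='pillow')
plt.show()
```

This avoids the temporary `_tmp_*.png` files entirely, since the frames are rendered and written in one pass.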
2be3803bd0222e8334ab058a30767bc83b6fce5c
275,619
ipynb
Jupyter Notebook
notebooks/nm_02_Plotting.ipynb
raghurama123/NumericalMethods
b31737b97e155b0b9b38b0c8bc7a20e90e9c5401
[ "MIT" ]
1
2022-01-01T01:12:51.000Z
2022-01-01T01:12:51.000Z
notebooks/nm_02_Plotting.ipynb
raghurama123/NumericalMethods
b31737b97e155b0b9b38b0c8bc7a20e90e9c5401
[ "MIT" ]
null
null
null
notebooks/nm_02_Plotting.ipynb
raghurama123/NumericalMethods
b31737b97e155b0b9b38b0c8bc7a20e90e9c5401
[ "MIT" ]
5
2022-01-25T03:40:30.000Z
2022-02-22T05:38:21.000Z
475.205172
77,208
0.943037
true
1,985
Qwen/Qwen-72B
1. YES 2. YES
0.879147
0.913677
0.803256
__label__eng_Latn
0.461787
0.704565
# Baseline Model Prior to any machine learning, it is prudent to establish a baseline model with which to compare any trained models against. If none of the trained models can beat this "naive" model, then the conclusion is that either machine learning is not suitable for the predictive task or a different learning approach is needed. Our goal here is to create a *rules-based classifier* that can be used as a baseline to compare against machine learning classifiers. ```python import operator import numpy as np import pandas as pd from sklearn.dummy import DummyClassifier from sklearn.metrics import accuracy_score from sklearn.metrics import classification_report from sklearn.metrics import confusion_matrix from sklearn.metrics import matthews_corrcoef from src.features.features_utils import convert_categoricals_to_numerical from src.features.features_utils import convert_target_to_numerical from src.models.metrics_utils import confusion_matrix_to_dataframe from src.models.metrics_utils import print_matthews_corrcoef ``` ## Reading in the Data First let's read in the training and validation features and target. Also, let's convert the categorical fields to a numerical form that is suitable for performance evaluation. ```python train_features = pd.read_csv('../data/processed/train-features.csv') train_features = convert_categoricals_to_numerical(train_features) train_features.head() ``` ```python train_target = pd.read_csv('../data/processed/train-target.csv', index_col='full_name', squeeze=True) train_target = convert_target_to_numerical(train_target) train_target.head() ``` ```python validation_features = pd.read_csv('../data/processed/validation-features.csv') validation_features = convert_categoricals_to_numerical(validation_features) validation_features.head() ``` ```python validation_target = pd.read_csv('../data/processed/validation-target.csv', index_col='full_name', squeeze=True) validation_target = convert_target_to_numerical(validation_target) validation_target.head() ``` ## Performance Measure Before building a baseline classifier, we first need to address the issue of how to compare and assess the quality of different classifiers. A **performance measure** is clearly needed. But which one? [Accuracy](https://en.wikipedia.org/wiki/Accuracy_and_precision) is affected by the probability of class membership of the target and therefore it is not a suitable metric for this problem, as there are many more non-laureates than laureates. In such situations accuracy can be very misleading. The [Matthews Correlation Coefficient](https://en.wikipedia.org/wiki/Matthews_correlation_coefficient) (MCC) (also known as the [phi coefficient](https://en.wikipedia.org/wiki/Phi_coefficient)) is a suitable performance measure that can be used when there is a class imbalance. It is widely regarded as a balanced measure of binary classification performance. [Predicting Protein-Protein Interaction by the Mirrortree Method Possibilities and Limitations](https://www.researchgate.net/publication/259354929_Predicting_Protein-Protein_Interaction_by_the_Mirrortree_Method_Possibilities_and_Limitations) says that "MCC is a more robust measure of effectiveness of binary classification methods than such measures as precision, recall, and F-measure because it takes into account in a balanced way of all four factors contributing to the effectiveness; true positives, false positives, true negatives and false negatives". 
The MCC can be calculated directly from the [confusion matrix](https://en.wikipedia.org/wiki/Confusion_matrix) using the formula:

\begin{equation}
MCC = \frac{TP \times TN - FP \times FN}{{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}}}
\end{equation}

where TP is the number of [true positives](https://en.wikipedia.org/wiki/True_positive), TN the number of [true negatives](https://en.wikipedia.org/wiki/True_negative), FP the number of [false positives](https://en.wikipedia.org/wiki/False_positive) and FN the number of [false negatives](https://en.wikipedia.org/wiki/False_negative). If any of the four sums in the denominator is zero, the denominator can be arbitrarily set to one; this results in a Matthews correlation coefficient of zero, which can be shown to be the correct limiting value.

The MCC is the [Pearson correlation coefficient](https://en.wikipedia.org/wiki/Pearson_correlation_coefficient) between the observed and predicted binary classifications. It has an upper limit of +1 indicating a perfect prediction, a lower limit of -1 indicating total disagreement between prediction and observation and a mid value of 0 representing a random prediction.

## Baseline Classifier

How should we go about creating this baseline classifier? One idea is a classifier that always predicts the majority class. Let's go ahead and look at the MCC and confusion matrix for such a classifier on the training and validation data.

```python
majority = DummyClassifier(strategy='most_frequent')
majority.fit(train_features, train_target)
majority_train_predict = majority.predict(train_features)
majority_train_predict_mcc = matthews_corrcoef(y_true=train_target, y_pred=majority_train_predict)
print_matthews_corrcoef(majority_train_predict_mcc, 'Majority class classifier', data_label='train')
majority_confusion_matrix_train = confusion_matrix_to_dataframe(
    confusion_matrix(y_true=train_target, y_pred=majority_train_predict))
majority_confusion_matrix_train
```

```python
majority_validation_predict = majority.predict(validation_features)
majority_validation_predict_mcc = matthews_corrcoef(y_true=validation_target, y_pred=majority_validation_predict)
print_matthews_corrcoef(majority_validation_predict_mcc, 'Majority class classifier', data_label='validation')
index = ['Observed non-laureate', 'Observed laureate']
columns = ['Predicted non-laureate', 'Predicted laureate']
majority_confusion_matrix_validation = confusion_matrix_to_dataframe(
    confusion_matrix(y_true=validation_target, y_pred=majority_validation_predict),
    index=index, columns=columns)
majority_confusion_matrix_validation
```

We can see that a classifier which always predicts the negative class is equivalent to random guessing and therefore is completely useless. The runtime warning is screaming this out loud as the sum of TP and FP is zero. Note that if we had instead used accuracy as the performance measure, we would have been completely misled into believing that this is a reasonably good classifier!

```python
print('Majority class classifier accuracy (train):',
      round(accuracy_score(y_true=train_target, y_pred=majority_train_predict), 2))
print('Majority class classifier accuracy (validation):',
      round(accuracy_score(y_true=validation_target, y_pred=majority_validation_predict), 2))
```

Surely we can do better than this classifier. The function below is a brute force approach to creating a baseline classifier. It uses each (binary) feature in turn as the prediction and returns the feature whose predictions score the highest MCC on the validation set.
```python
def find_feature_with_highest_mcc(train_features, train_target, validation_features,
                                  validation_target):
    """Find the feature with the highest Matthews Correlation Coefficient (MCC)
    on the validation set.

    Prints the feature, its MCC values on the training and validation sets
    as well as the confusion matrices.

    Args:
        train_features (pandas.DataFrame): Training features.
        train_target (pandas.Series): Training target.
        validation_features (pandas.DataFrame): Validation features.
        validation_target (pandas.Series): Validation target.

    """
    validation_mccs = {}
    for feature in train_features.columns:
        validation_mccs[feature] = round(matthews_corrcoef(
            y_true=validation_target, y_pred=validation_features[feature]), 2)
    highest_mcc = sorted(validation_mccs.items(), key=operator.itemgetter(1), reverse=True)[0]
    classifier_label = highest_mcc[0] + ' classifier'

    # use the selected feature (not the last loop variable) for the training MCC
    print_matthews_corrcoef(round(matthews_corrcoef(
        y_true=train_target, y_pred=train_features[highest_mcc[0]]), 2),
        classifier_label, data_label='train')
    confusion_matrix_train = confusion_matrix_to_dataframe(
        confusion_matrix(y_true=train_target, y_pred=train_features[highest_mcc[0]]),
        index=index, columns=columns)
    display(confusion_matrix_train)

    print_matthews_corrcoef(highest_mcc[1], classifier_label, data_label='validation')
    confusion_matrix_validation = confusion_matrix_to_dataframe(
        confusion_matrix(y_true=validation_target, y_pred=validation_features[highest_mcc[0]]),
        index=index, columns=columns)
    display(confusion_matrix_validation)
```

```python
find_feature_with_highest_mcc(train_features, train_target, validation_features,
                              validation_target)
```

This classifier is not great, but it's much better than the previous one. The MCCs are low for the training and validation sets; however, they are definitely better than chance-level performance. Examination of the confusion matrices illustrates that this classifier is slightly better than 50-50 at identifying the positive class and is quite good at identifying the negative class. This is also confirmed by looking at the precision, recall and f1-score of the classes.

```python
print(classification_report(y_true=validation_target,
                            y_pred=validation_features.num_workplaces_at_least_2))
```

This classifier is far from perfect, but it's not too bad for a "naive" rules-based classifier. It does seem like a reasonable classifier to use as a benchmark for comparing machine learning classifiers against.
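As a small cross-check of the MCC formula given earlier (this cell is an addition to the notebook), the coefficient can be computed by hand from the confusion-matrix counts of the selected feature and compared with scikit-learn's `matthews_corrcoef`. It relies on the variables and imports already defined above.

```python
import numpy as np

tn, fp, fn, tp = confusion_matrix(
    y_true=validation_target,
    y_pred=validation_features.num_workplaces_at_least_2).ravel()

denominator = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
mcc_by_hand = (tp * tn - fp * fn) / denominator if denominator else 0.0

print('MCC (by hand):', round(mcc_by_hand, 2))
print('MCC (sklearn):', round(matthews_corrcoef(
    y_true=validation_target,
    y_pred=validation_features.num_workplaces_at_least_2), 2))
```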
e863071e68a06d96121e7443aa1e396c7f078e02
13,102
ipynb
Jupyter Notebook
nobel_physics_prizes/notebooks/5.0-baseline-model.ipynb
covuworie/nobel-physics-prizes
f89a32cd6eb9bbc9119a231bffee89b177ae847a
[ "MIT" ]
3
2019-08-21T05:35:42.000Z
2020-10-08T21:28:51.000Z
nobel_physics_prizes/notebooks/5.0-baseline-model.ipynb
covuworie/nobel-physics-prizes
f89a32cd6eb9bbc9119a231bffee89b177ae847a
[ "MIT" ]
139
2018-09-01T23:15:59.000Z
2021-02-02T22:01:39.000Z
nobel_physics_prizes/notebooks/5.0-baseline-model.ipynb
covuworie/nobel-physics-prizes
f89a32cd6eb9bbc9119a231bffee89b177ae847a
[ "MIT" ]
null
null
null
47.129496
1,060
0.676996
true
2,011
Qwen/Qwen-72B
1. YES 2. YES
0.870597
0.875787
0.762458
__label__eng_Latn
0.960626
0.609777
# Quantization of Signals *This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing. ## Introduction [Digital signal processors](https://en.wikipedia.org/wiki/Digital_signal_processor) and general purpose processors can only perform arithmetic operations within a limited number range. So far we considered discrete signals with continuous amplitude values. These cannot be handled by processors in a straightforward manner. [Quantization](https://en.wikipedia.org/wiki/Quantization_%28signal_processing%29) is the process of mapping a continuous amplitude to a countable set of amplitude values. This refers also to the *requantization* of a signal from a large set of countable amplitude values to a smaller set. Scalar quantization is an instantaneous and memoryless operation. It can be applied to the continuous amplitude signal, also referred to as *analog signal* or to the (time-)discrete signal. The quantized discrete signal is termed as *digital signal*. The connections between the different domains are illustrated in the following. ### Model of the Quantization Process In order to discuss the effects of quantizing a continuous amplitude signal, a mathematical model of the quantization process is required. We restrict our considerations to a discrete real-valued signal $x[k]$. The following mapping is used in order to quantize the continuous amplitude signal $x[k]$ \begin{equation} x_Q[k] = g( \; \lfloor \, f(x[k]) \, \rfloor \; ) \end{equation} where $g(\cdot)$ and $f(\cdot)$ denote real-valued mapping functions, and $\lfloor \cdot \rfloor$ a rounding operation. The quantization process can be split into two stages 1. **Forward quantization** The mapping $f(x[k])$ maps the signal $x[k]$ such that it is suitable for the rounding operation. This may be a scaling of the signal or a non-linear mapping. The result of the rounding operation is an integer number $\lfloor \, f(x[k]) \, \rfloor \in \mathbb{Z}$, which is termed as *quantization index*. 2. **Inverse quantization** The mapping $g(\cdot)$, maps the quantization index to the quantized value $x_Q[k]$ such that it constitutes an approximation of $x[k]$. This may be a simple scaling or non-linear operation. The quantization error (quantization noise) $e[k]$ is defined as \begin{equation} e[k] = x_Q[k] - x[k] \end{equation} Rearranging yields that the quantization process can be modeled by adding the quantization error to the discrete signal #### Example - Quantization of a sine signal In order to illustrate the introduced model, the quantization of one period of a sine signal is considered \begin{equation} x[k] = \sin[\Omega_0 k] \end{equation} using \begin{align} f(x[k]) &= 3 \cdot x[k] \\ i &= \lfloor \, f(x[k]) \, \rfloor \\ g(i) &= \frac{1}{3} \cdot i \end{align} where $\lfloor \cdot \rfloor$ denotes the [nearest integer function](https://en.wikipedia.org/wiki/Nearest_integer_function) and $i$ the quantization index. The quantized signal is then given as \begin{equation} x_Q[k] = \frac{1}{3} \cdot \lfloor \, 3 \cdot \sin[\Omega_0 k] \, \rfloor \end{equation} The discrete signals are not shown by stem plots for ease of illustration. 
```python %matplotlib inline import numpy as np import matplotlib.pyplot as plt N = 1024 # length of signal # generate signal x = np.sin(2*np.pi/N * np.arange(N)) # quantize signal xi = np.round(3 * x) xQ = 1/3 * xi e = xQ - x # plot (quantized) signals fig, ax1 = plt.subplots(figsize=(10,4)) ax2 = ax1.twinx() ax1.plot(x, 'r', label=r'signal $x[k]$') ax1.plot(xQ, 'b', label=r'quantized signal $x_Q[k]$') ax1.plot(e, 'g', label=r'quantization error $e[k]$') ax1.set_xlabel('k') ax1.set_ylabel(r'$x[k]$, $x_Q[k]$, $e[k]$') ax1.axis([0, N, -1.2, 1.2]) ax1.legend() ax2.set_ylim([-3.6, 3.6]) ax2.set_ylabel('quantization index') ax2.grid() ``` **Exercise** * Investigate the quantization error $e[k]$. Is its amplitude bounded? * If you would represent the quantization index (shown on the right side) by a binary number, how much bits would you need? * Try out other rounding operations like `np.floor()` and `np.ceil()` instead of `np.round()`. What changes? Solution: It can be concluded from the illustration that the quantization error is bounded as $|e[k]| < \frac{1}{3}$. There are in total 7 quantization indexes needing 3 bits in a binary representation. The properties of the quantization error are different for different rounding operations. ### Properties Without knowledge of the quantization error $e[k]$, the signal $x[k]$ cannot be reconstructed exactly from its quantization index or quantized representation $x_Q[k]$. The quantization error $e[k]$ itself depends on the signal $x[k]$. Therefore, quantization is in general an irreversible process. The mapping from $x[k]$ to $x_Q[k]$ is furthermore non-linear, since the superposition principle does not hold in general. Summarizing, quantization is an inherently irreversible and non-linear process. It potentially removes information from the signal. ### Applications Quantization has widespread applications in Digital Signal Processing. For instance * [Analog-to-Digital conversion](https://en.wikipedia.org/wiki/Analog-to-digital_converter) * [Lossy compression](https://en.wikipedia.org/wiki/Lossy_compression) of signals (speech, music, video, ...) * Storage and transmission ([Pulse-Code Modulation](https://en.wikipedia.org/wiki/Pulse-code_modulation), ...)
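As a follow-up to the exercise, here is a minimal sketch that repeats the quantization of the sine signal with the other rounding operations mentioned above and reports the resulting error; the scaling factor 3 matches the example, everything else is only illustrative.

```python
import numpy as np

N = 1024  # length of signal, as in the example above
x = np.sin(2*np.pi/N * np.arange(N))

# forward quantization with different rounding operations, then inverse quantization
for rounding in (np.round, np.floor, np.ceil):
    xQ = 1/3 * rounding(3 * x)
    e = xQ - x  # quantization error
    print(rounding.__name__, 'max |e[k]| =', np.max(np.abs(e)), ' mean e[k] =', np.mean(e))
```

Nearest-integer rounding keeps the error within half a quantization step and approximately zero-mean, whereas flooring and ceiling produce a one-sided (biased) error of up to one quantization step.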
2c48144b04275ccbe7787d6ba8705d06f80bf974
49,410
ipynb
Jupyter Notebook
Lectures_Advanced-DSP/quantization/introduction.ipynb
lev1khachatryan/ASDS_DSP
9059d737f6934b81a740c79b33756f7ec9ededb3
[ "MIT" ]
1
2020-12-29T18:02:13.000Z
2020-12-29T18:02:13.000Z
Lectures_Advanced-DSP/quantization/introduction.ipynb
lev1khachatryan/ASDS_DSP
9059d737f6934b81a740c79b33756f7ec9ededb3
[ "MIT" ]
null
null
null
Lectures_Advanced-DSP/quantization/introduction.ipynb
lev1khachatryan/ASDS_DSP
9059d737f6934b81a740c79b33756f7ec9ededb3
[ "MIT" ]
null
null
null
257.34375
41,520
0.911799
true
1,394
Qwen/Qwen-72B
1. YES 2. YES
0.865224
0.845942
0.73193
__label__eng_Latn
0.990379
0.53885
# Characterization of Systems in the Time Domain *This Jupyter notebook is part of a [collection of notebooks](../index.ipynb) in the bachelors module Signals and Systems, Communications Engineering, Universität Rostock. Please direct questions and suggestions to [Sascha.Spors@uni-rostock.de](mailto:Sascha.Spors@uni-rostock.de).* ## Impulse Response The response $y(t)$ of a linear time-invariant (LTI) system $\mathcal{H}$ to an arbitrary input signal $x(t)$ is derived in the following. The input signal can be represented as an integral when applying the [sifting-property of the Dirac impulse](../continuous_signals/standard_signals.ipynb#Dirac-Impulse) \begin{equation} x(t) = \int_{-\infty}^{\infty} x(\tau) \cdot \delta(t-\tau) \; d \tau \end{equation} Introducing above relation for the the input signal $x(t)$ into the output signal $y(t) = \mathcal{H} \{ x(t) \}$ of the system yields \begin{equation} y(t) = \mathcal{H} \left\{ \int_{-\infty}^{\infty} x(\tau) \cdot \delta(t-\tau) \; d \tau \right\} \end{equation} where $\mathcal{H} \{ \cdot \}$ denotes the system response operator. The integration and system response operator can be exchanged under the assumption that the system is linear \begin{equation} y(t) = \int_{-\infty}^{\infty} x(\tau) \cdot \mathcal{H} \left\{ \delta(t-\tau) \right\} \; d \tau \end{equation} where $\mathcal{H} \{\cdot\}$ was only applied to the Dirac impulse, since $x(\tau)$ can be regarded as constant factor with respect to the time $t$. It becomes evident that the response of a system to a Dirac impulse plays an important role in the calculation of the output signal for arbitrary input signals. The response of a system to a Dirac impulse as input signal is denoted as [*impulse response*](https://en.wikipedia.org/wiki/Impulse_response). It is defined as \begin{equation} h(t) = \mathcal{H} \left\{ \delta(t) \right\} \end{equation} If the system is time-invariant, the response to a shifted Dirac impulse is $\mathcal{H} \left\{ \delta(t-\tau) \right\} = h(t-\tau)$. Hence, for an LTI system we finally get \begin{equation} y(t) = \int_{-\infty}^{\infty} x(\tau) \cdot h(t-\tau) \; d \tau \end{equation} Due to its relevance in the theory of LTI systems, this operation is explicitly termed as [*convolution*](https://en.wikipedia.org/wiki/Convolution). It is commonly abbreviated by $*$, hence for above integral we get $y(t) = x(t) * h(t)$. In some books the mathematically more precise nomenclature $y(t) = (x*h)(t)$ is used, since $*$ is the operator acting on the two signals $x$ and $h$ with regard to time $t$. In can be concluded that the properties of an LTI system are entirely characterized by its impulse response. The response $y(t)$ of a system to an arbitrary input signal $x(t)$ is given by the convolution of the input signal $x(t)$ with its impulse response $h(t)$. **Example** The following example considers an LTI system whose relation between input $x(t)$ and output $y(t)$ is given by an ordinary differential equation (ODE) with constant coefficients \begin{equation} y(t) + \frac{d}{dt} y(t) = x(t) \end{equation} The system response is computed for the input signal $x(t) = e^{- 2 t} \cdot \epsilon(t)$ by 1. explicitly solving the ODE and by 2. computing the impulse response $h(t)$ and convolution with the input signal. The solution should fulfill the initial conditions $y(t)\big\vert_{t = 0-} = 0$ and $\frac{d}{dt}y(t)\big\vert_{t = 0-} = 0$ due to causality. 
First the ODE is defined in `SymPy` ```python import sympy as sym sym.init_printing() t = sym.symbols('t', real=True) x = sym.Function('x')(t) y = sym.Function('y')(t) ode = sym.Eq(y + y.diff(t), x) ode ``` The ODE is solved for the given input signal in order to calculate the output signal. The integration constant is calculated such that the solution fulfills the initial conditions ```python solution = sym.dsolve(ode.subs(x, sym.exp(-2*t)*sym.Heaviside(t))) integration_constants = sym.solve( (solution.rhs.limit(t, 0, '-'), solution.rhs.diff(t).limit(t, 0, '-')), 'C1') y1 = solution.subs(integration_constants) y1 ``` Lets plot the output signal derived by explicit solution of the ODE ```python sym.plot(y1.rhs, (t, -1, 10), ylabel=r'$y(t)$'); ``` The impulse response $h(t)$ is computed by solving the ODE for a Dirac impulse as input signal, $x(t) = \delta(t)$ ```python h = sym.Function('h')(t) solution2 = sym.dsolve(ode.subs(x, sym.DiracDelta(t)).subs(y, h)) integration_constants = sym.solve((solution2.rhs.limit( t, 0, '-'), solution2.rhs.diff(t).limit(t, 0, '-')), 'C1') h = solution2.subs(integration_constants) h ``` Lets plot the impulse response $h(t)$ of the LTI system ```python sym.plot(h.rhs, (t, -1, 10), ylabel=r'$h(t)$'); ``` As alternative to the explicit solution of the ODE, the system response is computed by evaluating the convolution $y(t) = x(t) * h(t)$. Since `SymPy` cannot handle the Heaviside function properly in integrands, the convolution integral is first simplified. Both the input signal $x(t)$ and the impulse response $h(t)$ are causal signals. Hence, the convolution integral degenerates to \begin{equation} y(t) = \int_{0}^{t} x(\tau) \cdot h(t - \tau) \; d\tau \end{equation} for $t \geq 0$. Note that $y(t) = 0$ for $t<0$. ```python tau = sym.symbols('tau', real=True) y2 = sym.integrate(sym.exp(-2*tau) * h.rhs.subs(t, t-tau), (tau, 0, t)) y2 ``` Lets plot the output signal derived by evaluation of the convolution ```python sym.plot(y2, (t, -1, 10), ylabel=r'$y(t)$'); ``` **Exercise** * Compare the output signal derived by explicit solution of the ODE with the signal derived by convolution. Are both equal? * Check if the impulse response $h(t)$ is a solution of the ODE by manual calculation. Hint $\frac{d}{dt} \epsilon(t) = \delta(t)$. * Check the solution of the convolution integral by manual calculation including the Heaviside functions. **Copyright** This notebook is provided as [Open Educational Resource](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Sascha Spors, Continuous- and Discrete-Time Signals and Systems - Theory and Computational Examples*.
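As a quick numerical check for the first exercise, the two results can be compared at a few time instants. This is a small sketch that assumes the cells above have been executed, so that `t`, `y1` (explicit ODE solution) and `y2` (convolution result) are defined in the session.

```python
# compare the explicit ODE solution y1 with the convolution result y2 for a few t > 0
for tv in [0.5, 1, 2, 5]:
    v1 = y1.rhs.subs(t, tv).evalf()   # value of the explicit solution
    v2 = y2.subs(t, tv).evalf()       # value of the convolution result
    print(tv, v1, v2)
```

For $t > 0$ both values should agree up to numerical precision, confirming that the convolution with the impulse response reproduces the explicit solution of the ODE.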
1c8ffe90b56fe8cc46d9d670b24922e6ac5c5e9a
173,348
ipynb
Jupyter Notebook
systems_time_domain/impulse_response.ipynb
spatialaudio/signals-and-systems-lecture
93e2f3488dc8f7ae111a34732bd4d13116763c5d
[ "MIT" ]
243
2016-04-01T14:21:00.000Z
2022-03-28T20:35:09.000Z
systems_time_domain/impulse_response.ipynb
bagustris/signals-and-systems-lecture
08a8c7ea21f88c20b457daffe77fcca021c53137
[ "MIT" ]
6
2016-04-11T06:28:17.000Z
2021-11-10T10:59:35.000Z
systems_time_domain/impulse_response.ipynb
bagustris/signals-and-systems-lecture
08a8c7ea21f88c20b457daffe77fcca021c53137
[ "MIT" ]
63
2017-04-20T00:46:03.000Z
2022-03-30T14:07:09.000Z
64.827225
13,754
0.636488
true
1,836
Qwen/Qwen-72B
1. YES 2. YES
0.805632
0.803174
0.647063
__label__eng_Latn
0.978205
0.341674
**It would be nice to do a Judea Pearl-type DAG**

Let's say we're interested in predicting a college-football game. What are all the things that influence the outcome? Here's a list of things that come to mind:

* Team A's offensive strength ($A_o$).
* Team B's offensive strength ($B_o$).
* Team A's defensive strength ($A_d$).
* Team B's defensive strength ($B_d$).
* Team A's special-teams strength ($A_s$).
* Team B's special-teams strength ($B_s$).
* Team A's "heart and determination" ($A_h$).
* Team B's "heart and determination" ($B_h$).
* Home-field advantage ($H$).
* Referees ($R$).
* Other influences, which I'll call The X Factor.

Obviously, this list is incomplete: there are missing variables (perhaps each team's previous-week result) and some variables are aggregates of more finely grained variables (for example, offensive ability is a combination of passing ability and rushing ability). But to make things easy, pretend that only these variables determine the outcome of football games and they do so in the following way:

$MOV = (A_o − B_d) − (B_o − A_d) + (A_s − B_s) + (A_h − B_h) + H + R + X$,

where $MOV$ is Team A's margin of victory. $MOV$ can take positive and negative values. A negative $MOV$ means Team B wins. We can use this equation to make predictions. For example, given two equal teams ($A_o = B_o$, $A_d = B_d$, $A_s = B_s$, and $A_h = B_h$) and unbiased refs ($R = 0$), A will win by $H + X$ points.

Equations require consistency of units:

1. To be equal, quantities must have the same units. Five oranges do not equal five apples, and five miles do not equal five miles per hour. This means the left- and right-hand sides of an equation must have the same units. In our football equation, the left-hand side is expressed in points, so the right-hand side must be measured in points too.
2. We can only add and subtract quantities with the same units. Because the left-hand side is measured in points, the right-hand side must be measured in points. And because the right-hand side is a sum, each part of the right-hand side must be measured in points.

We call this precisely defined relationship between the causes (on the right) and the effect (on the left) the **data-generating process**. This equation is easily interpreted:

1. A one-point change in any of these causes changes $MOV$ by one point; for example, increasing $H$ from 2 to 3, holding the other causes constant, increases $MOV$ by 1.
2. $(A_o − B_d)$: A's offensive contribution to $MOV$ depends not only on A's offensive strength but also on B's defensive strength. Each additional point of A's offensive strength increases $MOV$ by one, and each additional point of B's defensive strength decreases $MOV$ by one.
3. $− (B_o − A_d)$: B's offensive contribution to $MOV$ depends not only on B's offensive strength but also on A's defensive strength. Each additional point of B's offensive strength decreases $MOV$ by one, and each additional point of A's defensive strength increases $MOV$ by one.
4. $(A_s − B_s)$: A's special-teams contribution to $MOV$ depends not only on A's special-teams strength but also on B's special-teams strength. Each additional point of A's special-teams strength increases $MOV$ by one, and each additional point of B's special-teams strength decreases $MOV$ by one.
5. $(A_h − B_h)$: A's heart and determination contributes to $MOV$ only as much as it exceeds B's heart and determination. If B's heart exceeds A's heart, this term is negative.
6. $H$: conventional wisdom tells us that this term is positive if A is home and negative if A is away. An $H$ of 3 means that the home team gets the equivalent of an extra field goal by playing at home.
7. $R$: The refs can be biased. A positive $R$ means the refs make calls in favor of A, and a negative $R$ means the refs make calls in favor of B.
8. $X$: This is a catch-all. Many things can affect the outcome of the game, such as weather, injuries, and unlucky bounces. $X$ captures all of these influences. $X$ is positive if, in the aggregate, these things help A and negative if these things hurt A.

# Why Statistics

If we perfectly knew the values for each part of the right-hand side of the equation, we could perfectly predict the result of each game. Unfortunately, we never observe those values directly or without error, so our predictions are necessarily uncertain. Statistics gives us tools for reasoning about that uncertainty, starting with the idea of a random variable.

# Random Variable

A random variable or stochastic variable is, roughly speaking, a variable whose value results from a measurement on some type of random process. It is easy to confuse random variables with algebraic variables, but the two differ. The value of an algebraic variable is deterministic (i.e., the variable can take multiple values, but given inputs to the deterministic process there is only one possible value that the algebraic variable can take) while the value of a random variable is at least partly determined by a random process (i.e., even if a deterministic process underlies a random variable, knowing inputs to the deterministic process is not good enough to know the value of the random variable with certainty). Here are a few examples:

1. **algebraic variable** y: y = 2x + 3

   If we know x, then we know y with certainty. If x = 2, y must equal 7. If x = 1, y cannot equal anything but 5. An algebraic variable like this has a two-way functional relationship: we can calculate x given y (x = (y − 3)/2).

2. **random variable** y: Nature assigns y such that P(y = 1) = .5 and P(y = 0) = .5

   In this example, y can take 0 or 1. There is not a deterministic process that determines whether y will equal 1 or 0, so knowing x (or any other potential inputs) does not tell us with certainty what value y will take; y could still take a 0 or 1. Unless the process assigns an outcome with probability 1, it is random.

3. **random variable** z: z = 2x + 3 + y

   Assume y from the above example. Then knowing x does not tell us with certainty what value z will take. If x = 1, then z could equal 5 or 6 (with equal probability). If x = 4, then z could equal 11 or 12 (with equal probability). Note that a variable that is a function of a random variable is also a random variable.

Often we treat deterministic processes as random because it is simpler to think of them that way. For example, if we knew the exact weight and measurements of a die and the speed, height, rotation, etc. at which it was tossed, we might be able to figure out exactly which side would come up (this has been demonstrated using the coin toss). But getting that information and doing those calculations is a burden, and treating it as random is simpler.

Formally, a random variable is a function from a probability space, typically to the real numbers, that is measurable. (For finite probability spaces, the measurable requirement is superfluous.) Random variables can be classified as either discrete (a random variable that may assume either a finite number of values or an infinite sequence of values) or as continuous (a variable that may assume any numerical value in an interval or collection of intervals).
A random variable's possible values might represent the possible outcomes of a yet-to-be-performed experiment, or the potential values of a quantity whose already-existing value is uncertain (for example, as a result of incomplete information or imprecise measurements).

```python

```

## Power Distribution

$f_X(X=x|k) = cx^{-k}$

Note that $x$ and $k$ need constraints. For example, if $k = -2$ the distribution doesn't integrate:

```python
import numpy
import matplotlib.pyplot as plt
%matplotlib inline

x = numpy.linspace(0.1,10,99)
y = x**2
plt.plot(x,y)
plt.show()
```

To force the density toward 0 for large $x$, $k$ needs to be positive; with $k$ positive we also restrict the support to $x \geq 1$ so that the density does not blow up near 0. Let's find the normalizing constant ($c$):

$1 = \int_{1}^{\infty} c x^{-k} dx$

$1 = c \bigl[ \frac{1}{1-k} x^{1-k} \bigr]_{1}^{\infty}$

$1 = \frac{c}{1-k} \bigl[ 0 - 1 \bigr]$ (for $k > 1$, so that $x^{1-k} \rightarrow 0$ as $x \rightarrow \infty$)

$1 = \frac{c}{k-1}$

$c = k-1$

So the power law density function is

$\begin{equation} f_X(X=x | k)=\begin{cases} (k-1)x^{-k} & \text{if }1 \leq x < \infty \text{ and } k > 1 \\ 0 & \text{otherwise}. \end{cases} \end{equation}$

Here's what $f_X(X=x | k=2)$ looks like:

```python
power_k2 = lambda x: x**-2 if x>=1 else 0

x = numpy.linspace(0.1,10,99)
y = [power_k2(z) for z in x]
plt.plot(x,y)
plt.show()
```

Nassim Taleb offered the following quiz that uses a power distribution. Note his typo ("$q=.07$" should be "$q=.007$"). Using our equation, what is $k$? First integrate the distribution from some $y$ to infinity:

$1-F_X(y|k) = 1 - \int_1^y (k-1)x^{-k} dx$

$1-F_X(y|k) = 1 - \biggl[ (k-1) \bigl[ \frac{1}{1-k} x^{1-k} \bigr]_1^y \biggr]$

$1-F_X(y|k) = 1 - \biggl[ (-1) \bigl[ y^{1-k} - 1 \bigr] \biggr]$

$1-F_X(y|k) = 1 - \biggl[ 1 - y^{1-k} \biggr]$

$1-F_X(y|k) = y^{1-k}$

Then

$.45 = .007^{1-k}$

$\ln{.45} = (1-k) \ln{.007}$

$k = 1 - \frac{ \ln{.45} }{ \ln{.007} }$

$k = .84$

## Convex set of distributions

Is this a mixture distribution? What happens as data move from simple to complex? We look at it using a convex set of a simple distribution (uniform) and a complex distribution (power). First the uniform:

$\begin{equation} f_X(X=x)=\begin{cases} 1 & \text{if }1 \leq x \leq 2 \\ 0 & \text{otherwise}. \end{cases} \end{equation}$

Then the power:

$\begin{equation} g_X(X=x | k)=\begin{cases} (k-1)x^{-k} & \text{if }1 \leq x < \infty \text{ and } k > 1 \\ 0 & \text{otherwise}. \end{cases} \end{equation}$

And the convex set:

$\begin{equation} h_X(X=x | \alpha, k)=\begin{cases} \alpha + (1-\alpha)(k-1)x^{-k} & \text{if }1 \leq x < 2 \text{ and } k > 1 \\ (1-\alpha)(k-1)x^{-k} & \text{if }2 \leq x < \infty \text{ and } k > 1 \\ 0 & \text{otherwise}. \end{cases} \end{equation}$

(No normalizing constant needed because those were included in the input distributions.)

```python
def convex_dist(x,alpha,k):
    if x>=1 and x<2 and k>1:
        return alpha + (1-alpha)*(k-1)*x**-k
    elif x>=2 and k>1:
        return (1-alpha)*(k-1)*x**-k
    else:
        return 0

x = numpy.linspace(0.1,10,99)
y0 = [convex_dist(z,0,2) for z in x]
y5 = [convex_dist(z,0.5,2) for z in x]
y1 = [convex_dist(z,1,2) for z in x]
plt.plot(x,y0)
plt.plot(x,y5)
plt.plot(x,y1)
plt.show()
```
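A quick sanity check (an illustrative sketch that reuses `convex_dist` and the `numpy` import above and assumes `scipy` is available): for any mixing weight $0 \leq \alpha \leq 1$ the convex combination should integrate to 1, since both input densities do. The integral is split at $x = 2$, where the density has a kink.

```python
from scipy.integrate import quad

# check that the convex combination integrates to one for a few values of alpha (k = 2 here)
for alpha in [0, 0.5, 1]:
    total = quad(convex_dist, 1, 2, args=(alpha, 2))[0] + \
            quad(convex_dist, 2, numpy.inf, args=(alpha, 2))[0]
    print(alpha, round(total, 6))
```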
8ad7f57b780d4c98f72e61ca9cf9793144dbe64b
48,914
ipynb
Jupyter Notebook
Introduction.ipynb
jtwalsh0/methods
a7d862c02260fcdf12b5ed08a3e0d9f22aff6624
[ "MIT" ]
null
null
null
Introduction.ipynb
jtwalsh0/methods
a7d862c02260fcdf12b5ed08a3e0d9f22aff6624
[ "MIT" ]
null
null
null
Introduction.ipynb
jtwalsh0/methods
a7d862c02260fcdf12b5ed08a3e0d9f22aff6624
[ "MIT" ]
null
null
null
151.906832
15,574
0.852558
true
2,913
Qwen/Qwen-72B
1. YES 2. YES
0.872347
0.79053
0.689617
__label__eng_Latn
0.998265
0.440543
<!-- dom:TITLE: Demo - Sparse Chebyshev-Petrov-Galerkin methods for differentiation --> # Demo - Sparse Chebyshev-Petrov-Galerkin methods for differentiation <!-- dom:AUTHOR: Mikael Mortensen Email:mikaem@math.uio.no at Department of Mathematics, University of Oslo. --> <!-- Author: --> **Mikael Mortensen** (email: `mikaem@math.uio.no`), Department of Mathematics, University of Oslo. Date: **October 26, 2021** **Summary.** This demo explores how to use sparse Chebyshev-Petrov-Galerkin methods for finding Chebyshev coefficients of the derivatives of smooth functions. We will compare the methods to the more commonly adopted recursion methods that are found in most spectral textbooks. ## Introduction The Chebyshev polynomials of the first kind can be defined as <!-- Equation labels as ordinary links --> <a id="eq:chebTU"></a> $$ \begin{equation} \label{eq:chebTU} \tag{1} T_k(x) = \cos(k\theta), \end{equation} $$ where $\theta = \cos^{-1} x$, $k$ is a positive integer and $x \in [-1, 1]$. The Chebyshev polynomials span the discrete space $S_N = \text{span}\{T_k\}_{k=0}^{N-1}$, and a function $u(x)$ can be approximated in this space as <!-- Equation labels as ordinary links --> <a id="eq:uT"></a> $$ \begin{equation} u_N(x) = \sum_{k=0}^{N-1} \hat{u}_k T_k(x). \label{eq:uT} \tag{2} \end{equation} $$ Consider the expansion of the function $u(x)=\sin(\pi x)$, created in `shenfun` as ``` from shenfun import * import sympy as sp x = sp.Symbol('x') ue = sp.sin(sp.pi*x) N = 16 SN = FunctionSpace(N, 'C') uN = Function(SN, buffer=ue) uN ``` The Python Function `uN` represents the expansion ([2](#eq:uT)), and the printed values represent $\boldsymbol{\hat{u}} = \{\hat{u}_k\}_{k=0}^{N-1}$. The expansion is fairly well resolved since the highest values of $\{\hat{u}_k\}_{k=0}^{N-1}$ approach 0. Note that the coefficients obtained are the discrete coefficients based on interpolation at quadrature points and they do not agree completely with the coefficients truncated from a series $u(x) = \sum_{k=0}^{\infty} \hat{u}_k T_k$. Under the hood the coefficients are found by projection using quadrature for the integrals: find $u_N \in S_N$ such that $$ (u_N-u, v)_{\omega^{-1/2}} = 0, \quad \forall v \in S_N, $$ where $\omega = (1-x^2)$ and the scalar product notation $(a, b)_{\omega^{-1/2}} = \sum_{j=0}^{N-1} a(x_j)b(x_j)\omega_j \approx \int_{-1}^{1} a(x)b(x) \omega(x)^{-1/2} dx$, where $\{\omega_j\}_{j=0}^{N-1}$ are the quadrature weights. The quadrature approach ensures that $u(x_j) = u_N(x_j)$ for all quadrature points $\{x_j\}_{j=0}^{N-1}$. In `shenfun` we compute the following under the hood: insert for $u_N = \sum_{j=0}^{N-1} \hat{u}_j T_j$, $u=\sin(\pi x)$ and $v = T_k$ to get $$ \sum_{j=0}^{N-1}(T_j, T_k)_{\omega^{-1/2}} \hat{u}_j = (\sin(\pi x), T_k)_{\omega^{-1/2}}, $$ This has now become a linear algebra problem, and we recognise the matrix $d^{(0)}_{kj} = (T_j, T_k)_{\omega^{-1/2}}=c_k \pi /2 \delta_{kj}$, where $\delta_{kj}$ is the Kronecker delta function, and $c_0=2$ and $c_k=1$ for $k>0$. The problem is solved trivially since $d^{(0)}_{kj}$ is diagonal, and thus $$ \hat{u}_k = \frac{2}{c_k \pi} (\sin(\pi x), T_k)_{\omega^{-1/2}}, \quad \forall \, k\in I^N, $$ where $I^N = \{0, 1, \ldots, N-1\}$. We can compare this to the exact coefficients, where the integral $(\sin(\pi x), T_k)_{\omega^{-1/2}}$ is computed with high precision. 
To this end we could use adaptive quadrature, or symbolic integration with sympy, but it is sufficient to use a large enough number of polynomials to fully resolve the function. Below we find this number to be 22 and we see that the absolute error in $\hat{u}_{N-1} \approx 10^{-11}$. ``` SM = FunctionSpace(0, 'C') uM = Function(SM, buffer=ue, abstol=1e-16, reltol=1e-16) print(uM[:N] - uN[:N]) print(len(uM)) ``` ## Differentiation Let us now consider the $n$'th derivative of $u(x)$ instead, denoted here as $u^{(n)}$, and attempt to find $u^{(n)}$ in the space $S_N$, i.e., $$ u_N^{(n)} = \sum_{k=0}^{N-1} \hat{u}^{(n)}_k T_k. $$ We note that this is not the same as $(u_N)^{(n)}$, which is $$ (u_N)^{(n)} = \sum_{k=0}^{N-1} \hat{u}_k T^{(n)}_k, $$ where $T^{(n)}_k$ is the $n$'th derivative of $T_k$, a polynomial of order $k-n$. Again use projection to find $u_N^{(n)} \in S_N$ such that $$ (u_N^{(n)}-u^{(n)}, v)_{\omega^{-1/2}} = 0, \quad \forall v \in S_N. $$ Inserting for $u_N^{(n)}$ and $u^{(n)} = (u_N)^{(n)}$ we get <!-- Equation labels as ordinary links --> <a id="_auto1"></a> $$ \begin{equation} \sum_{j=0}^{N-1}(T_j, T_k)_{\omega^{-1/2}} \hat{u}_j^{(n)} = (T_j^{(n)}, T_k)_{\omega^{-1/2}} \hat{u}_j, \label{_auto1} \tag{3} \end{equation} $$ <!-- Equation labels as ordinary links --> <a id="_auto2"></a> $$ \begin{equation} \sum_{j=0}^{N-1} d^{(0)}_{kj} \hat{u}_j^{(n)} = \sum_{j=0}^{N-1} d^{(n)}_{kj} \hat{u}_j, \label{_auto2} \tag{4} \end{equation} $$ where $d^{(n)}_{kj} = (T_j^{(n)}, T_k)_{\omega^{-1/2}}$. We compute $\hat{u}_k^{(n)}$ by inverting the diagonal $d^{(0)_{kj}}$ <!-- Equation labels as ordinary links --> <a id="eq:fhat"></a> $$ \begin{equation} \hat{u}_k^{(n)} = \frac{2}{c_k \pi} \sum_{j=0}^{N-1} d^{(n)}_{kj} \hat{u}_j, \quad \forall \, k \in I^{N}. \label{eq:fhat} \tag{5} \end{equation} $$ The matrix $d^{(n)}_{kj}$ is upper triangular, and the last $n$ rows are zero. Since $d^{(n)}_{kj}$ is dense the matrix vector product $\sum_{j=0}^{N-1} d^{(n)}_{kj} \hat{u}_j$ is costly and also susceptible to roundoff errors if the structure of the matrix is not taken advantage of. But computing it in shenfun is straightforward, for $n=1$ and $2$: ``` uN1 = project(Dx(uN, 0, 1), SN) uN2 = project(Dx(uN, 0, 2), SN) uN1 ``` where `uN1` $=u_N^{(1)} $ and `uN2` $=u_N^{(2)}$. Alternatively, doing all the work that goes on under the hood ``` u = TrialFunction(SN) v = TestFunction(SN) D0 = inner(u, v) D1 = inner(Dx(u, 0, 1), v) D2 = inner(Dx(u, 0, 2), v) w0 = Function(SN) # work array uN1 = Function(SN) uN2 = Function(SN) uN1 = D0.solve(D1.matvec(uN, w0), uN1) uN2 = D0.solve(D2.matvec(uN, w0), uN2) uN1 ``` We can look at the sparsity patterns of $(d^{(1)}_{kj})$ and $(d^{(2)}_{kj})$ ``` %matplotlib inline import matplotlib.pyplot as plt fig, (ax1, ax2) = plt.subplots(1, 2) ax1.spy(D1.diags(), markersize=2, color='r') ax2.spy(D2.diags(), markersize=2, color='b') ``` just to see that they are upper triangular. We now ask is there a better and faster way to get `uN1` and `uN2`? A better approach would involve only sparse matrices, like the diagonal $(d^{(0)}_{kj})$. But how do we get there? Most textbooks on spectral methods use recursive methods to find $\{\hat{u}_N^{(n)}\}$. Here we will show a Galerkin approach. It turns out that a simple change of test space/function will be sufficient. Let us first replace the test space $S_N$ with the Dirichlet space $D_N=\{v \in S_N | v(\pm 1) = 0\}$ using basis functions $v=T_k-T_{k+2}$ and see what happens. 
Because of the two boundary conditions, the number of degrees of freedom is reduced by two, and we need to use a space with $N+2$ quadrature points in order to get a square matrix system. The method now becomes classified as Chebyshev-Petrov-Galerkin, as we wish to find $u_N^{(1)} \in S_N$ such that $$ (u_N^{(n)}-u^{(n)}, v)_{\omega^{-1/2}} = 0, \quad \forall v \in D_{N+2}. $$ The implementation is straightforward ``` SD = FunctionSpace(N+2, 'C', bc=(0, 0)) v = TestFunction(SD) D0 = inner(u, v) D1 = inner(Dx(u, 0, 1), v) uN11 = Function(SN) uN11 = D0.solve(D1.matvec(uN, w0), uN11) print(uN11-uN1) ``` and since `uN11 = uN1` we see that we have achived the same result as in the regular projection. However, the matrices in use now look like ``` fig, (ax1, ax2) = plt.subplots(1, 2) ax1.spy(D0.diags(), markersize=2, color='r') ax2.spy(D1.diags(), markersize=2, color='b') ``` So $(d^{(0)}_{kj})$ now contains two nonzero diagonals, whereas $(d^{(1)}_{kj})$ is a matrix with one single diagonal. There is no longer a `full` differentiation matrix, and we can easily perform this projection for millions of degrees of freedom. What about $(d^{(2)}_{kj})$? We can now use biharmonic test functions that satisfy four boundary conditions in the space $B_N = \{v \in S_N | v(\pm 1) = v'(\pm 1) =0\}$, and continue in a similar fashion: ``` SB = FunctionSpace(N+4, 'C', bc=(0, 0, 0, 0)) v = TestFunction(SB) D0 = inner(u, v) D2 = inner(Dx(u, 0, 2), v) uN22 = Function(SN) uN22 = D0.solve(D2.matvec(uN, w0), uN22) print(uN22-uN2) ``` We get that `uN22 = uN2`, so the Chebyshev-Petrov-Galerkin projection works. The matrices involved are now ``` fig, (ax1, ax2) = plt.subplots(1, 2) ax1.spy(D0.diags(), markersize=2, color='r') ax2.spy(D2.diags(), markersize=2, color='b') ``` So there are now three nonzero diagonals in $(d^{(0)}_{kj})$, whereas the differentiation matrix $(d^{(2)}_{kj})$ contains only one nonzero diagonal. Why does this work? The Chebyshev polynomials and their derivatives satisfy the following orthogonality relation <!-- Equation labels as ordinary links --> <a id="eq:orthon"></a> $$ \begin{equation} \label{eq:orthon} \tag{6} \int_{-1}^{1} T^{(n)}_j T^{(n)}_k \omega^{n-1/2} dx = \alpha^{n}_k \delta_{kj}, \quad \text{for}\, n \ge 0, \end{equation} $$ where <!-- Equation labels as ordinary links --> <a id="_auto3"></a> $$ \begin{equation} \alpha^n_k = \frac{c_{k+n}\pi k (k+n-1)!}{2(k-n)!}. \label{_auto3} \tag{7} \end{equation} $$ So when we choose a test function that is $\omega^n T^{(n)}_k$, we get the diagonal differentiation matrix <!-- Equation labels as ordinary links --> <a id="_auto4"></a> $$ \begin{equation} d^{(n)}_{kj} = \int_{-1}^{1} T^{(n)}_j (\omega^n T^{(n)}_k) \omega^{-1/2} dx = \alpha^{n}_k \delta_{kj}, \quad \text{for}\, n \ge 0. \label{_auto4} \tag{8} \end{equation} $$ The two chosen test functions above are both proportional to $\omega^n T^{(n)}_k$. More precisely, $T_k-T_{k+2} = \frac{2}{k+1} \omega T^{(1)}_{k+1}$ and the biharmonic test function is $T_k-\frac{2(k+2)}{k+3}T_{k+2} + \frac{k+1}{k+3}T_{k+4} = \frac{4 \omega^2T^{(2)}_{k+2}}{(k+2)(k+3)}$. Using these very specific test functions correponds closely to using the Chebyshev recursion formulas that are found in most textbooks. Here they are adapted to a Chebyshev-Petrov-Galerkin method, where we simply choose test and trial functions and everything else falls into place in a few lines of code. ## Recursion Let us for completion show how to find $\hat{u}_N^{(1)}$ with a recursive approach. 
The Chebyshev polynomials satisfy <!-- Equation labels as ordinary links --> <a id="eq:Trec1"></a> $$ \begin{equation} 2T_k = \frac{1}{k+1}T'_{k+1}- \frac{1}{k-1} T'_{k-1}, \quad k \ge 1. \label{eq:Trec1} \tag{9} \end{equation} $$ By using this and setting $u' = \sum_{k=0}^{\infty} \hat{u}^{(1)}_k T_k = \sum_{k=0}^{\infty} \hat{u}_k T'_k$ we get <!-- Equation labels as ordinary links --> <a id="eq:Trec2"></a> $$ \begin{equation} 2k\hat{u}_k = c_{k-1}\hat{u}^{(1)}_{k-1} - \hat{u}^{(1)}_{k+1}, \quad k \ge 1. \label{eq:Trec2} \tag{10} \end{equation} $$ Using this recursion together with $\hat{u}^{(1)}_{N-1} = 0$ we get ([[canuto]](#canuto)) <!-- Equation labels as ordinary links --> <a id="eq:Trec3"></a> $$ \begin{equation} c_k \hat{u}^{(1)}_k = \hat{u}^{(1)}_{k+2} + 2(k+1)\hat{u}_{k+1}, \quad 0 \le k \le N-2, \label{eq:Trec3} \tag{11} \end{equation} $$ which is easily implemented in a (slow) for-loop ``` f1 = np.zeros(N+1) ck = np.ones(N); ck[0] = 2 for k in range(N-2, -1, -1): f1[k] = (f1[k+2]+2*(k+1)*uN[k+1])/ck[k] print(f1[:-1]-uN1) ``` which evidently is exactly the same result. It turns out that this is not strange. If we multiply ([11](#eq:Trec3)) by $\pi/2$ and rearrange a little bit we get <!-- Equation labels as ordinary links --> <a id="_auto5"></a> $$ \begin{equation} c_k \pi/2 \hat{u}^{(1)}_k - \pi/2 \hat{u}^{(1)}_{k+2} = (k+1)\pi \hat{u}_{k+1} \label{_auto5} \tag{12} \end{equation} $$ <!-- Equation labels as ordinary links --> <a id="_auto6"></a> $$ \begin{equation} \underbrace{(c_k \pi/2 \delta_{kj} - \pi/2 \delta_{k, j-2})}_{(D^0)_{kj}} \hat{u}^{(1)}_j = \underbrace{(k+1)\pi \delta_{k,j-1}}_{(D^1)_{kj}} \hat{u}_{j} \label{_auto6} \tag{13} \end{equation} $$ <!-- Equation labels as ordinary links --> <a id="_auto7"></a> $$ \begin{equation} D^0 \boldsymbol{\hat{u}} = D^1 \boldsymbol{\hat{u}} \label{_auto7} \tag{14} \end{equation} $$ <!-- Equation labels as ordinary links --> <a id="_auto8"></a> $$ \begin{equation} \boldsymbol{\hat{u}^{(1)}} = (D^0)^{-1} D^1 \boldsymbol{\hat{u}} \label{_auto8} \tag{15} \end{equation} $$ which is exactly how $\boldsymbol{\hat{u}^{(1)}}$ was computed above with the Chebyshev-Petrov-Galerkin approach (`uN11 = D0.solve(D1.matvec(uN, w0), uN11)`). Not convinced? Check that the matrices `D0` and `D1` are truly as stated above. The matrices below are printed as dictionaries with diagonal number as key (main is 0, first upper is 1 etc) and diagonal values as values: ``` import pprint SD = FunctionSpace(N+2, 'C', bc=(0, 0)) v = TestFunction(SD) D0 = inner(u, v) D1 = inner(Dx(u, 0, 1), v) pprint.pprint(dict(D0)) pprint.pprint(dict(D1)) ``` In conclusion, we have shown that we can use an efficient Chebyshev-Petrov-Galerkin approach to obtain the discrete Chebyshev coefficients for the derivatives of a function. By inspection, it turns out that this approach is identical to the common methods based on well-known Chebyshev recursion formulas. <!-- ======= Bibliography ======= --> 1. <a id="canuto"></a> **C. Canuto, M. Hussaini, A. Quarteroni and J. T. A.**. *Spectral Methods in Fluid Dynamics*, *Scientific Computation*, Springer, 2012.
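As an independent check of the two test-function identities used above (the Dirichlet combination $T_k-T_{k+2}$ and the biharmonic combination), here is a small SymPy sketch; the degree `k = 5` is an arbitrary illustrative choice.

```
import sympy as sp

x = sp.Symbol('x')
k = 5                      # arbitrary illustrative degree
omega = 1 - x**2
T = lambda n: sp.chebyshevt(n, x)

# Dirichlet test function: T_k - T_{k+2} = 2/(k+1) * omega * T'_{k+1}
dirichlet = T(k) - T(k+2) - sp.Rational(2, k+1)*omega*sp.diff(T(k+1), x)

# biharmonic test function:
# T_k - 2(k+2)/(k+3) T_{k+2} + (k+1)/(k+3) T_{k+4} = 4 omega^2 T''_{k+2} / ((k+2)(k+3))
biharmonic = (T(k) - sp.Rational(2*(k+2), k+3)*T(k+2) + sp.Rational(k+1, k+3)*T(k+4)
              - 4*omega**2*sp.diff(T(k+2), x, 2)/((k+2)*(k+3)))

print(sp.simplify(dirichlet), sp.simplify(biharmonic))  # both expected to print 0
```

Both differences should simplify to zero, consistent with the choice of Dirichlet and biharmonic test functions made above.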
c9663b3d7ab82ee13e1e0ad50c2fad9496cb57a5
23,508
ipynb
Jupyter Notebook
content/sparsity.ipynb
mikaem/shenfun-demos
c2ad13d62866e0812068673fdb6a7ef68ecfb7f2
[ "BSD-2-Clause" ]
null
null
null
content/sparsity.ipynb
mikaem/shenfun-demos
c2ad13d62866e0812068673fdb6a7ef68ecfb7f2
[ "BSD-2-Clause" ]
1
2021-09-21T16:10:01.000Z
2021-09-21T16:10:01.000Z
content/sparsity.ipynb
mikaem/shenfun-demos
c2ad13d62866e0812068673fdb6a7ef68ecfb7f2
[ "BSD-2-Clause" ]
null
null
null
28.322892
173
0.511528
true
5,018
Qwen/Qwen-72B
1. YES 2. YES
0.76908
0.766294
0.589341
__label__eng_Latn
0.929198
0.207567
<p style="font-size:32px;text-align:center"> <b>Social network Graph Link Prediction - Facebook Challenge</b> </p> ```python #Importing Libraries # please do go through this python notebook: import warnings warnings.filterwarnings("ignore") import csv import pandas as pd#pandas to create small dataframes import datetime #Convert to unix time import time #Convert to unix time # if numpy is not installed already : pip3 install numpy import numpy as np#Do aritmetic operations on arrays # matplotlib: used to plot graphs import matplotlib import matplotlib.pylab as plt import seaborn as sns#Plots from matplotlib import rcParams#Size of plots from sklearn.cluster import MiniBatchKMeans, KMeans#Clustering import math import pickle import os # to install xgboost: pip3 install xgboost import xgboost as xgb import warnings import networkx as nx import pdb import pickle from pandas import HDFStore,DataFrame from pandas import read_hdf from scipy.sparse.linalg import svds, eigs import gc from tqdm import tqdm ``` ```python #!pip3 install --user networkx ``` # 1. Reading Data ```python if os.path.isfile('data/after_eda/train_pos_after_eda.csv'): train_graph=nx.read_edgelist('data/after_eda/train_pos_after_eda.csv',delimiter=',',create_using=nx.DiGraph(),nodetype=int) print(nx.info(train_graph)) else: print("please run the FB_EDA.ipynb or download the files from drive") ``` Name: Type: DiGraph Number of nodes: 1780722 Number of edges: 7550015 Average in degree: 4.2399 Average out degree: 4.2399 # 2. Similarity measures ## 2.1 Jaccard Distance: http://www.statisticshowto.com/jaccard-index/ \begin{equation} j = \frac{|X\cap Y|}{|X \cup Y|} \end{equation} ```python #for followees def jaccard_for_followees(a,b): try: if len(set(train_graph.successors(a))) == 0 | len(set(train_graph.successors(b))) == 0: return 0 sim = (len(set(train_graph.successors(a)).intersection(set(train_graph.successors(b)))))/\ (len(set(train_graph.successors(a)).union(set(train_graph.successors(b))))) except: return 0 return sim ``` ```python #one test case print(jaccard_for_followees(273084,1505602)) ``` 0.0 ```python #node 1635354 not in graph print(jaccard_for_followees(273084,1505602)) ``` 0.0 ```python #for followers def jaccard_for_followers(a,b): try: if len(set(train_graph.predecessors(a))) == 0 | len(set(g.predecessors(b))) == 0: return 0 sim = (len(set(train_graph.predecessors(a)).intersection(set(train_graph.predecessors(b)))))/\ (len(set(train_graph.predecessors(a)).union(set(train_graph.predecessors(b))))) return sim except: return 0 ``` ```python print(jaccard_for_followers(273084,470294)) ``` 0 ```python #node 1635354 not in graph print(jaccard_for_followees(669354,1635354)) ``` 0 ## 2.2 Cosine distance \begin{equation} CosineDistance = \frac{|X\cap Y|}{|X|\cdot|Y|} \end{equation} ```python #for followees def cosine_for_followees(a,b): try: if len(set(train_graph.successors(a))) == 0 | len(set(train_graph.successors(b))) == 0: return 0 sim = (len(set(train_graph.successors(a)).intersection(set(train_graph.successors(b)))))/\ (math.sqrt(len(set(train_graph.successors(a)))*len((set(train_graph.successors(b)))))) return sim except: return 0 ``` ```python print(cosine_for_followees(273084,1505602)) ``` 0.0 ```python print(cosine_for_followees(273084,1635354)) ``` 0 ```python def cosine_for_followers(a,b): try: if len(set(train_graph.predecessors(a))) == 0 | len(set(train_graph.predecessors(b))) == 0: return 0 sim = (len(set(train_graph.predecessors(a)).intersection(set(train_graph.predecessors(b)))))/\ 
(math.sqrt(len(set(train_graph.predecessors(a))) * len(set(train_graph.predecessors(b))))) return sim except: return 0 ``` ```python print(cosine_for_followers(2,470294)) ``` 0.02886751345948129 ```python print(cosine_for_followers(669354,1635354)) ``` 0 ## 3. Ranking Measures https://networkx.github.io/documentation/networkx-1.10/reference/generated/networkx.algorithms.link_analysis.pagerank_alg.pagerank.html PageRank computes a ranking of the nodes in the graph G based on the structure of the incoming links. Mathematical PageRanks for a simple network, expressed as percentages. (Google uses a logarithmic scale.) Page C has a higher PageRank than Page E, even though there are fewer links to C; the one link to C comes from an important page and hence is of high value. If web surfers who start on a random page have an 85% likelihood of choosing a random link from the page they are currently visiting, and a 15% likelihood of jumping to a page chosen at random from the entire web, they will reach Page E 8.1% of the time. <b>(The 15% likelihood of jumping to an arbitrary page corresponds to a damping factor of 85%.) Without damping, all web surfers would eventually end up on Pages A, B, or C, and all other pages would have PageRank zero. In the presence of damping, Page A effectively links to all pages in the web, even though it has no outgoing links of its own.</b> ## 3.1 Page Ranking https://en.wikipedia.org/wiki/PageRank ```python if not os.path.isfile('data/fea_sample/page_rank.p'): pr = nx.pagerank(train_graph, alpha=0.85) pickle.dump(pr,open('data/fea_sample/page_rank.p','wb')) else: pr = pickle.load(open('data/fea_sample/page_rank.p','rb')) ``` ```python print('min',pr[min(pr, key=pr.get)]) print('max',pr[max(pr, key=pr.get)]) print('mean',float(sum(pr.values())) / len(pr)) ``` min 1.6556497245737814e-07 max 2.7098251341935827e-05 mean 5.615699699365892e-07 ```python #for imputing to nodes which are not there in Train data mean_pr = float(sum(pr.values())) / len(pr) print(mean_pr) ``` 5.615699699365892e-07 # 4. Other Graph Features ## 4.1 Shortest path: Getting the shortest path between two nodes; if the nodes have a direct path, i.e., are directly connected, then we remove that edge and calculate the path. ```python #if has direct edge then deleting that edge and calculating shortest path def compute_shortest_path_length(a,b): p=-1 try: if train_graph.has_edge(a,b): train_graph.remove_edge(a,b) p= nx.shortest_path_length(train_graph,source=a,target=b) train_graph.add_edge(a,b) else: p= nx.shortest_path_length(train_graph,source=a,target=b) return p except: return -1 ``` ```python #testing compute_shortest_path_length(77697, 826021) ``` 10 ```python #testing compute_shortest_path_length(669354,1635354) ``` -1 ## 4.2 Checking for same community ```python #getting weakly connected components from graph wcc=list(nx.weakly_connected_components(train_graph)) def belongs_to_same_wcc(a,b): index = [] if train_graph.has_edge(b,a): return 1 if train_graph.has_edge(a,b): for i in wcc: if a in i: index= i break if (b in index): train_graph.remove_edge(a,b) if compute_shortest_path_length(a,b)==-1: train_graph.add_edge(a,b) return 0 else: train_graph.add_edge(a,b) return 1 else: return 0 else: for i in wcc: if a in i: index= i break if(b in index): return 1 else: return 0 ``` ```python belongs_to_same_wcc(861, 1659750) ``` 0 ```python belongs_to_same_wcc(669354,1635354) ``` 0 ## 4.3 Adamic/Adar Index: The Adamic/Adar measure is defined as the sum of the inverse logarithmic degrees of the common neighbours of the two given vertices.
$$A(x,y)=\sum_{u \in N(x) \cap N(y)}\frac{1}{\log(|N(u)|)}$$ ```python #adar index def calc_adar_in(a,b): sum=0 try: n=list(set(train_graph.successors(a)).intersection(set(train_graph.successors(b)))) if len(n)!=0: for i in n: sum=sum+(1/np.log10(len(list(train_graph.predecessors(i))))) return sum else: return 0 except: return 0 ``` ```python calc_adar_in(1,189226) ``` 0 ```python calc_adar_in(669354,1635354) ``` 0 ## 4.4 Is person following back: ```python def follows_back(a,b): if train_graph.has_edge(b,a): return 1 else: return 0 ``` ```python follows_back(1,189226) ``` 1 ```python follows_back(669354,1635354) ``` 0 ## 4.5 Katz Centrality: https://en.wikipedia.org/wiki/Katz_centrality https://www.geeksforgeeks.org/katz-centrality-centrality-measure/ Katz centrality computes the centrality for a node based on the centrality of its neighbors. It is a generalization of the eigenvector centrality. The Katz centrality for node `i` is $$x_i = \alpha \sum_{j} A_{ij} x_j + \beta,$$ where `A` is the adjacency matrix of the graph G with eigenvalues $$\lambda$$. The parameter $$\beta$$ controls the initial centrality and $$\alpha < \frac{1}{\lambda_{max}}.$$ ```python if not os.path.isfile('data/fea_sample/katz.p'): katz = nx.katz.katz_centrality(train_graph,alpha=0.005,beta=1) pickle.dump(katz,open('data/fea_sample/katz.p','wb')) else: katz = pickle.load(open('data/fea_sample/katz.p','rb')) ``` ```python print('min',katz[min(katz, key=katz.get)]) print('max',katz[max(katz, key=katz.get)]) print('mean',float(sum(katz.values())) / len(katz)) ``` min 0.0007313532484065916 max 0.003394554981699122 mean 0.0007483800935504637 ```python mean_katz = float(sum(katz.values())) / len(katz) print(mean_katz) ``` 0.0007483800935504637 ## 4.6 Hits Score The HITS algorithm computes two numbers for a node. The authorities score estimates the node value based on the incoming links. The hubs score estimates the node value based on outgoing links. https://en.wikipedia.org/wiki/HITS_algorithm ```python if not os.path.isfile('data/fea_sample/hits.p'): hits = nx.hits(train_graph, max_iter=100, tol=1e-08, nstart=None, normalized=True) pickle.dump(hits,open('data/fea_sample/hits.p','wb')) else: hits = pickle.load(open('data/fea_sample/hits.p','rb')) ``` ```python print('min',hits[0][min(hits[0], key=hits[0].get)]) print('max',hits[0][max(hits[0], key=hits[0].get)]) print('mean',float(sum(hits[0].values())) / len(hits[0])) ``` min 0.0 max 0.004868653378780953 mean 5.615699699353278e-07 # 5. Featurization ## 5.
1 Reading a sample of Data from both train and test ```python import random if os.path.isfile('data/after_eda/train_after_eda.csv'): filename = "data/after_eda/train_after_eda.csv" # you uncomment this line, if you dont know the lentgh of the file name # here we have hardcoded the number of lines as 15100030 # n_train = sum(1 for line in open(filename)) #number of records in file (excludes header) n_train = 15100028 s = 100000 #desired sample size skip_train = sorted(random.sample(range(1,n_train+1),n_train-s)) #https://stackoverflow.com/a/22259008/4084039 ``` ```python if os.path.isfile('data/after_eda/train_after_eda.csv'): filename = "data/after_eda/test_after_eda.csv" # you uncomment this line, if you dont know the lentgh of the file name # here we have hardcoded the number of lines as 3775008 # n_test = sum(1 for line in open(filename)) #number of records in file (excludes header) n_test = 3775006 s = 50000 #desired sample size skip_test = sorted(random.sample(range(1,n_test+1),n_test-s)) #https://stackoverflow.com/a/22259008/4084039 ``` ```python print("Number of rows in the train data file:", n_train) print("Number of rows we are going to elimiate in train data are",len(skip_train)) print("Number of rows in the test data file:", n_test) print("Number of rows we are going to elimiate in test data are",len(skip_test)) ``` Number of rows in the train data file: 15100028 Number of rows we are going to elimiate in train data are 15000028 Number of rows in the test data file: 3775006 Number of rows we are going to elimiate in test data are 3725006 ```python df_final_train = pd.read_csv('data/after_eda/train_after_eda.csv', skiprows=skip_train, names=['source_node', 'destination_node']) df_final_train['indicator_link'] = pd.read_csv('data/train_y.csv', skiprows=skip_train, names=['indicator_link']) print("Our train matrix size ",df_final_train.shape) df_final_train.head(2) ``` Our train matrix size (100002, 3) <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>source_node</th> <th>destination_node</th> <th>indicator_link</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>273084</td> <td>1505602</td> <td>1</td> </tr> <tr> <th>1</th> <td>569324</td> <td>1194578</td> <td>1</td> </tr> </tbody> </table> </div> ```python df_final_test = pd.read_csv('data/after_eda/test_after_eda.csv', skiprows=skip_test, names=['source_node', 'destination_node']) df_final_test['indicator_link'] = pd.read_csv('data/test_y.csv', skiprows=skip_test, names=['indicator_link']) print("Our test matrix size ",df_final_test.shape) df_final_test.head(2) ``` Our test matrix size (50002, 3) <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>source_node</th> <th>destination_node</th> <th>indicator_link</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>848424</td> <td>784690</td> <td>1</td> </tr> <tr> <th>1</th> <td>713880</td> <td>167361</td> <td>1</td> </tr> </tbody> </table> </div> ## 5.2 Adding a set of features __we will create these each of these features for both train and test data points__ <ol> <li>jaccard_followers</li> <li>jaccard_followees</li> <li>cosine_followers</li> 
<li>cosine_followees</li> <li>num_followers_s</li> <li>num_followees_s</li> <li>num_followers_d</li> <li>num_followees_d</li> <li>inter_followers</li> <li>inter_followees</li> </ol> ```python #if not os.path.isfile('data/fea_sample/storage_sample_stage1.h5'): if True: #mapping jaccrd followers to train and test data df_final_train['jaccard_followers'] = df_final_train.apply(lambda row: jaccard_for_followers(row['source_node'],row['destination_node']),axis=1) df_final_test['jaccard_followers'] = df_final_test.apply(lambda row: jaccard_for_followers(row['source_node'],row['destination_node']),axis=1) #mapping jaccrd followees to train and test data df_final_train['jaccard_followees'] = df_final_train.apply(lambda row: jaccard_for_followees(row['source_node'],row['destination_node']),axis=1) df_final_test['jaccard_followees'] = df_final_test.apply(lambda row: jaccard_for_followees(row['source_node'],row['destination_node']),axis=1) #mapping jaccrd followers to train and test data df_final_train['cosine_followers'] = df_final_train.apply(lambda row: cosine_for_followers(row['source_node'],row['destination_node']),axis=1) df_final_test['cosine_followers'] = df_final_test.apply(lambda row: cosine_for_followers(row['source_node'],row['destination_node']),axis=1) #mapping jaccrd followees to train and test data df_final_train['cosine_followees'] = df_final_train.apply(lambda row: cosine_for_followees(row['source_node'],row['destination_node']),axis=1) df_final_test['cosine_followees'] = df_final_test.apply(lambda row: cosine_for_followees(row['source_node'],row['destination_node']),axis=1) print("Addition of Jaccard & Cosine Distanc complete") ``` Addition of Jaccard & Cosine Distanc complete ```python def compute_features_stage1(df_final): #calculating no of followers followees for source and destination #calculating intersection of followers and followees for source and destination num_followers_s=[] num_followees_s=[] num_followers_d=[] num_followees_d=[] inter_followers=[] inter_followees=[] for i,row in df_final.iterrows(): try: s1=set(train_graph.predecessors(row['source_node'])) s2=set(train_graph.successors(row['source_node'])) except: s1 = set() s2 = set() try: d1=set(train_graph.predecessors(row['destination_node'])) d2=set(train_graph.successors(row['destination_node'])) except: d1 = set() d2 = set() num_followers_s.append(len(s1)) num_followees_s.append(len(s2)) num_followers_d.append(len(d1)) num_followees_d.append(len(d2)) inter_followers.append(len(s1.intersection(d1))) inter_followees.append(len(s2.intersection(d2))) return num_followers_s, num_followers_d, num_followees_s, num_followees_d, inter_followers, inter_followees ``` ```python #if not os.path.isfile('data/fea_sample/storage_sample_stage1.h5'): if True: df_final_train['num_followers_s'], df_final_train['num_followers_d'], \ df_final_train['num_followees_s'], df_final_train['num_followees_d'], \ df_final_train['inter_followers'], df_final_train['inter_followees']= compute_features_stage1(df_final_train) df_final_test['num_followers_s'], df_final_test['num_followers_d'], \ df_final_test['num_followees_s'], df_final_test['num_followees_d'], \ df_final_test['inter_followers'], df_final_test['inter_followees']= compute_features_stage1(df_final_test) print("Addition of Number of followers/followees complete") #hdf = HDFStore('data/fea_sample/storage_sample_stage1.h5') #hdf.put('train_df',df_final_train, format='table', data_columns=True) #hdf.put('test_df',df_final_test, format='table', data_columns=True) #hdf.close() #else: # 
df_final_train = read_hdf('data/fea_sample/storage_sample_stage1.h5', 'train_df',mode='r') # df_final_test = read_hdf('data/fea_sample/storage_sample_stage1.h5', 'test_df',mode='r') ``` Addition of Number of followers/followees complete ## 5.3 Adding new set of features __we will create these each of these features for both train and test data points__ <ol> <li>adar index</li> <li>is following back</li> <li>belongs to same weakly connect components</li> <li>shortest path between source and destination</li> </ol> ```python #if not os.path.isfile('data/fea_sample/storage_sample_stage2.h5'): if True: #mapping adar index on train df_final_train['adar_index'] = df_final_train.apply(lambda row: calc_adar_in(row['source_node'],row['destination_node']),axis=1) #mapping adar index on test df_final_test['adar_index'] = df_final_test.apply(lambda row: calc_adar_in(row['source_node'],row['destination_node']),axis=1) #-------------------------------------------------------------------------------------------------------- #mapping followback or not on train df_final_train['follows_back'] = df_final_train.apply(lambda row: follows_back(row['source_node'],row['destination_node']),axis=1) #mapping followback or not on test df_final_test['follows_back'] = df_final_test.apply(lambda row: follows_back(row['source_node'],row['destination_node']),axis=1) #-------------------------------------------------------------------------------------------------------- #mapping same component of wcc or not on train df_final_train['same_comp'] = df_final_train.apply(lambda row: belongs_to_same_wcc(row['source_node'],row['destination_node']),axis=1) ##mapping same component of wcc or not on train df_final_test['same_comp'] = df_final_test.apply(lambda row: belongs_to_same_wcc(row['source_node'],row['destination_node']),axis=1) #-------------------------------------------------------------------------------------------------------- #mapping shortest path on train df_final_train['shortest_path'] = df_final_train.apply(lambda row: compute_shortest_path_length(row['source_node'],row['destination_node']),axis=1) #mapping shortest path on test df_final_test['shortest_path'] = df_final_test.apply(lambda row: compute_shortest_path_length(row['source_node'],row['destination_node']),axis=1) print("Addition of Adar Index, follows back and shortest parth complete") #hdf = HDFStore('data/fea_sample/storage_sample_stage2.h5') #hdf.put('train_df',df_final_train, format='table', data_columns=True) #hdf.put('test_df',df_final_test, format='table', data_columns=True) #hdf.close() #else: # df_final_train = read_hdf('data/fea_sample/storage_sample_stage2.h5', 'train_df',mode='r') # df_final_test = read_hdf('data/fea_sample/storage_sample_stage2.h5', 'test_df',mode='r') ``` Addition of Adar Index, follows back and shortest parth complete ## 5.4 Adding new set of features __we will create these each of these features for both train and test data points__ <ol> <li>Weight Features <ul> <li>weight of incoming edges</li> <li>weight of outgoing edges</li> <li>weight of incoming edges + weight of outgoing edges</li> <li>weight of incoming edges * weight of outgoing edges</li> <li>2*weight of incoming edges + weight of outgoing edges</li> <li>weight of incoming edges + 2*weight of outgoing edges</li> </ul> </li> <li>Page Ranking of source</li> <li>Page Ranking of dest</li> <li>katz of source</li> <li>katz of dest</li> <li>hubs of source</li> <li>hubs of dest</li> <li>authorities_s of source</li> <li>authorities_s of dest</li> </ol> #### Weight Features 
In order to determine the similarity of nodes, an edge weight value was calculated between nodes. Edge weight decreases as the neighbor count goes up. Intuitively, consider one million people following a celebrity on a social network then chances are most of them never met each other or the celebrity. On the other hand, if a user has 30 contacts in his/her social network, the chances are higher that many of them know each other. `credit` - Graph-based Features for Supervised Link Prediction William Cukierski, Benjamin Hamner, Bo Yang \begin{equation} W = \frac{1}{\sqrt{1+|X|}} \end{equation} it is directed graph so calculated Weighted in and Weighted out differently ```python #weight for source and destination of each link Weight_in = {} Weight_out = {} for i in tqdm(train_graph.nodes()): s1=set(train_graph.predecessors(i)) w_in = 1.0/(np.sqrt(1+len(s1))) Weight_in[i]=w_in s2=set(train_graph.successors(i)) w_out = 1.0/(np.sqrt(1+len(s2))) Weight_out[i]=w_out #for imputing with mean mean_weight_in = np.mean(list(Weight_in.values())) mean_weight_out = np.mean(list(Weight_out.values())) ``` 100%|██████████| 1780722/1780722 [00:14<00:00, 120154.67it/s] ```python #if not os.path.isfile('data/fea_sample/storage_sample_stage3.h5'): if True: #mapping to pandas train df_final_train['weight_in'] = df_final_train.destination_node.apply(lambda x: Weight_in.get(x,mean_weight_in)) df_final_train['weight_out'] = df_final_train.source_node.apply(lambda x: Weight_out.get(x,mean_weight_out)) #mapping to pandas test df_final_test['weight_in'] = df_final_test.destination_node.apply(lambda x: Weight_in.get(x,mean_weight_in)) df_final_test['weight_out'] = df_final_test.source_node.apply(lambda x: Weight_out.get(x,mean_weight_out)) #some features engineerings on the in and out weights df_final_train['weight_f1'] = df_final_train.weight_in + df_final_train.weight_out df_final_train['weight_f2'] = df_final_train.weight_in * df_final_train.weight_out df_final_train['weight_f3'] = (2*df_final_train.weight_in + 1*df_final_train.weight_out) df_final_train['weight_f4'] = (1*df_final_train.weight_in + 2*df_final_train.weight_out) #some features engineerings on the in and out weights df_final_test['weight_f1'] = df_final_test.weight_in + df_final_test.weight_out df_final_test['weight_f2'] = df_final_test.weight_in * df_final_test.weight_out df_final_test['weight_f3'] = (2*df_final_test.weight_in + 1*df_final_test.weight_out) df_final_test['weight_f4'] = (1*df_final_test.weight_in + 2*df_final_test.weight_out) print("Addition of weights complete") ``` Addition of weights complete ```python #if not os.path.isfile('data/fea_sample/storage_sample_stage3.h5'): if True: #page rank for source and destination in Train and Test #if anything not there in train graph then adding mean page rank df_final_train['page_rank_s'] = df_final_train.source_node.apply(lambda x:pr.get(x,mean_pr)) df_final_train['page_rank_d'] = df_final_train.destination_node.apply(lambda x:pr.get(x,mean_pr)) df_final_test['page_rank_s'] = df_final_test.source_node.apply(lambda x:pr.get(x,mean_pr)) df_final_test['page_rank_d'] = df_final_test.destination_node.apply(lambda x:pr.get(x,mean_pr)) #================================================================================ print("Addition of page rank complete") #Katz centrality score for source and destination in Train and test #if anything not there in train graph then adding mean katz score df_final_train['katz_s'] = df_final_train.source_node.apply(lambda x: katz.get(x,mean_katz)) df_final_train['katz_d'] 
```python
# if not os.path.isfile('data/fea_sample/storage_sample_stage3.h5'):
if True:
    # page rank for source and destination in Train and Test
    # if a node is not in the train graph, impute with the mean page rank
    df_final_train['page_rank_s'] = df_final_train.source_node.apply(lambda x: pr.get(x, mean_pr))
    df_final_train['page_rank_d'] = df_final_train.destination_node.apply(lambda x: pr.get(x, mean_pr))

    df_final_test['page_rank_s'] = df_final_test.source_node.apply(lambda x: pr.get(x, mean_pr))
    df_final_test['page_rank_d'] = df_final_test.destination_node.apply(lambda x: pr.get(x, mean_pr))
    #================================================================================
    print("Addition of page rank complete")

    # Katz centrality score for source and destination in Train and Test
    # if a node is not in the train graph, impute with the mean Katz score
    df_final_train['katz_s'] = df_final_train.source_node.apply(lambda x: katz.get(x, mean_katz))
    df_final_train['katz_d'] = df_final_train.destination_node.apply(lambda x: katz.get(x, mean_katz))

    df_final_test['katz_s'] = df_final_test.source_node.apply(lambda x: katz.get(x, mean_katz))
    df_final_test['katz_d'] = df_final_test.destination_node.apply(lambda x: katz.get(x, mean_katz))
    #================================================================================
    print("Addition of Katz score complete")

    # HITS hub score for source and destination in Train and Test
    # if a node is not in the train graph, impute with 0
    df_final_train['hubs_s'] = df_final_train.source_node.apply(lambda x: hits[0].get(x, 0))
    df_final_train['hubs_d'] = df_final_train.destination_node.apply(lambda x: hits[0].get(x, 0))

    df_final_test['hubs_s'] = df_final_test.source_node.apply(lambda x: hits[0].get(x, 0))
    df_final_test['hubs_d'] = df_final_test.destination_node.apply(lambda x: hits[0].get(x, 0))
    #================================================================================

    # HITS authority score for source and destination in Train and Test
    # if a node is not in the train graph, impute with 0
    df_final_train['authorities_s'] = df_final_train.source_node.apply(lambda x: hits[1].get(x, 0))
    df_final_train['authorities_d'] = df_final_train.destination_node.apply(lambda x: hits[1].get(x, 0))

    df_final_test['authorities_s'] = df_final_test.source_node.apply(lambda x: hits[1].get(x, 0))
    df_final_test['authorities_d'] = df_final_test.destination_node.apply(lambda x: hits[1].get(x, 0))
    #================================================================================
    print("Addition of HITS complete")

    # hdf = HDFStore('data/fea_sample/storage_sample_stage3.h5')
    # hdf.put('train_df', df_final_train, format='table', data_columns=True)
    # hdf.put('test_df', df_final_test, format='table', data_columns=True)
    # hdf.close()
# else:
#     df_final_train = read_hdf('data/fea_sample/storage_sample_stage3.h5', 'train_df', mode='r')
#     df_final_test = read_hdf('data/fea_sample/storage_sample_stage3.h5', 'test_df', mode='r')
```

    Addition of page rank complete
    Addition of Katz score complete
    Addition of HITS complete

## 5.5 Adding a new set of features

__We will create each of these features for both train and test data points:__

<ol>
<li>SVD features for both source and destination</li>
</ol>

```python
def svd(x, S):
    try:
        z = sadj_dict[x]
        return S[z]
    except KeyError:
        # node not present in the train graph: return a zero embedding
        return [0, 0, 0, 0, 0, 0]
```

```python
# for SVD features: map each node to its row index in the SVD factor matrices
sadj_col = sorted(train_graph.nodes())
sadj_dict = {val: idx for idx, val in enumerate(sadj_col)}
```

```python
Adj = nx.adjacency_matrix(train_graph, nodelist=sorted(train_graph.nodes())).asfptype()
```

```python
U, s, V = svds(Adj, k=6)
print('Adjacency matrix Shape', Adj.shape)
print('U Shape', U.shape)
print('V Shape', V.shape)
print('s Shape', s.shape)
```

    Adjacency matrix Shape (1780722, 1780722)
    U Shape (1780722, 6)
    V Shape (6, 1780722)
    s Shape (6,)
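To make the six-dimensional embeddings concrete, here is a toy-sized version of the same decomposition. This is a hypothetical 4-node graph with `k=2` instead of 6, shown for illustration only; it assumes `scipy` and `networkx` are importable.

```python
import networkx as nx
import numpy as np
from scipy.sparse.linalg import svds

# Tiny directed graph and its sparse adjacency matrix.
g = nx.DiGraph([(0, 1), (1, 2), (2, 0), (2, 3)])
adj = nx.adjacency_matrix(g, nodelist=sorted(g.nodes())).astype(np.float64)

# Truncated SVD with k=2: each row of u (and each column of vt) is a low-rank
# embedding of the corresponding node, analogous to the 6-dimensional
# svd_u_* / svd_v_* features built in the next cell.
u, sigma, vt = svds(adj, k=2)
print(u.shape, sigma.shape, vt.shape)   # (4, 2) (2,) (2, 4)
print("left-singular embedding of node 0:", u[0])
```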
```python
# if not os.path.isfile('data/fea_sample/storage_sample_stage4.h5'):
#===================================================================================================
if True:
    df_final_train[['svd_u_s_1', 'svd_u_s_2', 'svd_u_s_3', 'svd_u_s_4', 'svd_u_s_5', 'svd_u_s_6']] = \
        df_final_train.source_node.apply(lambda x: svd(x, U)).apply(pd.Series)
    df_final_train[['svd_u_d_1', 'svd_u_d_2', 'svd_u_d_3', 'svd_u_d_4', 'svd_u_d_5', 'svd_u_d_6']] = \
        df_final_train.destination_node.apply(lambda x: svd(x, U)).apply(pd.Series)
    #===================================================================================================
    df_final_train[['svd_v_s_1', 'svd_v_s_2', 'svd_v_s_3', 'svd_v_s_4', 'svd_v_s_5', 'svd_v_s_6']] = \
        df_final_train.source_node.apply(lambda x: svd(x, V.T)).apply(pd.Series)
    df_final_train[['svd_v_d_1', 'svd_v_d_2', 'svd_v_d_3', 'svd_v_d_4', 'svd_v_d_5', 'svd_v_d_6']] = \
        df_final_train.destination_node.apply(lambda x: svd(x, V.T)).apply(pd.Series)
    #===================================================================================================
    print("Addition of SVD for Train complete")

    df_final_test[['svd_u_s_1', 'svd_u_s_2', 'svd_u_s_3', 'svd_u_s_4', 'svd_u_s_5', 'svd_u_s_6']] = \
        df_final_test.source_node.apply(lambda x: svd(x, U)).apply(pd.Series)
    df_final_test[['svd_u_d_1', 'svd_u_d_2', 'svd_u_d_3', 'svd_u_d_4', 'svd_u_d_5', 'svd_u_d_6']] = \
        df_final_test.destination_node.apply(lambda x: svd(x, U)).apply(pd.Series)
    #===================================================================================================
    df_final_test[['svd_v_s_1', 'svd_v_s_2', 'svd_v_s_3', 'svd_v_s_4', 'svd_v_s_5', 'svd_v_s_6']] = \
        df_final_test.source_node.apply(lambda x: svd(x, V.T)).apply(pd.Series)
    df_final_test[['svd_v_d_1', 'svd_v_d_2', 'svd_v_d_3', 'svd_v_d_4', 'svd_v_d_5', 'svd_v_d_6']] = \
        df_final_test.destination_node.apply(lambda x: svd(x, V.T)).apply(pd.Series)
    #===================================================================================================
    print("Addition of SVD for Test complete")

    # hdf = HDFStore('data/fea_sample/storage_sample_stage4.h5')
    # hdf.put('train_df', df_final_train, format='table', data_columns=True)
    # hdf.put('test_df', df_final_test, format='table', data_columns=True)
    # hdf.close()
```

    Addition of SVD for Train complete
    Addition of SVD for Test complete

```python
# prepared and stored the data for the machine learning models
# please check FB_Models.ipynb
```

```python
df_final_train.columns
```

    Index(['source_node', 'destination_node', 'indicator_link', 'jaccard_followers',
           'jaccard_followees', 'cosine_followers', 'cosine_followees',
           'num_followers_s', 'num_followers_d', 'num_followees_s', 'num_followees_d',
           'inter_followers', 'inter_followees', 'adar_index', 'follows_back',
           'same_comp', 'shortest_path', 'weight_in', 'weight_out', 'weight_f1',
           'weight_f2', 'weight_f3', 'weight_f4', 'page_rank_s', 'page_rank_d',
           'katz_s', 'katz_d', 'hubs_s', 'hubs_d', 'authorities_s', 'authorities_d',
           'svd_u_s_1', 'svd_u_s_2', 'svd_u_s_3', 'svd_u_s_4', 'svd_u_s_5', 'svd_u_s_6',
           'svd_u_d_1', 'svd_u_d_2', 'svd_u_d_3', 'svd_u_d_4', 'svd_u_d_5', 'svd_u_d_6',
           'svd_v_s_1', 'svd_v_s_2', 'svd_v_s_3', 'svd_v_s_4', 'svd_v_s_5', 'svd_v_s_6',
           'svd_v_d_1', 'svd_v_d_2', 'svd_v_d_3', 'svd_v_d_4', 'svd_v_d_5', 'svd_v_d_6'],
          dtype='object')

```python
df_final_test.columns
```

    Index(['source_node', 'destination_node', 'indicator_link', 'jaccard_followers',
           'jaccard_followees', 'cosine_followers', 'cosine_followees',
           'num_followers_s', 'num_followers_d', 'num_followees_s', 'num_followees_d',
           'inter_followers', 'inter_followees', 'adar_index', 'follows_back',
           'same_comp', 'shortest_path', 'weight_in', 'weight_out', 'weight_f1',
           'weight_f2', 'weight_f3', 'weight_f4', 'page_rank_s', 'page_rank_d',
           'katz_s', 'katz_d', 'hubs_s', 'hubs_d', 'authorities_s', 'authorities_d',
           'svd_u_s_1', 'svd_u_s_2', 'svd_u_s_3', 'svd_u_s_4', 'svd_u_s_5', 'svd_u_s_6',
           'svd_u_d_1', 'svd_u_d_2', 'svd_u_d_3', 'svd_u_d_4', 'svd_u_d_5', 'svd_u_d_6',
           'svd_v_s_1', 'svd_v_s_2', 'svd_v_s_3', 'svd_v_s_4', 'svd_v_s_5', 'svd_v_s_6',
           'svd_v_d_1', 'svd_v_d_2', 'svd_v_d_3', 'svd_v_d_4', 'svd_v_d_5', 'svd_v_d_6'],
          dtype='object')
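The commented-out `HDFStore` lines in the cells above sketch a checkpointing step for the engineered features. A minimal version of that pattern, assuming the `data/fea_sample/storage_sample_stage4.h5` path from those comments and a working PyTables install, could look like this:

```python
import os
import pandas as pd

# Path taken from the commented-out code above; adjust to your own layout.
store_path = 'data/fea_sample/storage_sample_stage4.h5'

if not os.path.isfile(store_path):
    # Persist the feature frames once so later runs can skip the expensive featurization.
    with pd.HDFStore(store_path) as hdf:
        hdf.put('train_df', df_final_train, format='table', data_columns=True)
        hdf.put('test_df', df_final_test, format='table', data_columns=True)
else:
    # Reload the previously engineered features instead of recomputing them.
    df_final_train = pd.read_hdf(store_path, 'train_df', mode='r')
    df_final_test = pd.read_hdf(store_path, 'test_df', mode='r')
```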
## Preferential Attachment with followers and followees

```python
# http://be.amazd.com/link-prediction/
# Preferential Attachment: one well-known observation in social networks is that users with many
# friends tend to create more connections in the future. This is because in some networks, as in
# finance, the rich get richer. We estimate how "rich" our two vertices are by multiplying the
# number of friends (|Γ(x)|) or followers each vertex has.
# Note that this similarity index does not require any node-neighbour information;
# therefore, it has the lowest computational complexity.
```

```python
df_final_train['pref_attachment_follower'] = df_final_train['num_followers_s'] * df_final_train['num_followers_d']
df_final_train['pref_attachment_followee'] = df_final_train['num_followees_s'] * df_final_train['num_followees_d']

df_final_test['pref_attachment_follower'] = df_final_test['num_followers_s'] * df_final_test['num_followers_d']
df_final_test['pref_attachment_followee'] = df_final_test['num_followees_s'] * df_final_test['num_followees_d']
```

## Dot product between source node SVD and destination node SVD features

```python
df_final_train.shape
```

    (100002, 57)

```python
df_final_train.columns
```

    Index(['source_node', 'destination_node', 'indicator_link', 'jaccard_followers',
           'jaccard_followees', 'cosine_followers', 'cosine_followees',
           'num_followers_s', 'num_followers_d', 'num_followees_s', 'num_followees_d',
           'inter_followers', 'inter_followees', 'adar_index', 'follows_back',
           'same_comp', 'shortest_path', 'weight_in', 'weight_out', 'weight_f1',
           'weight_f2', 'weight_f3', 'weight_f4', 'page_rank_s', 'page_rank_d',
           'katz_s', 'katz_d', 'hubs_s', 'hubs_d', 'authorities_s', 'authorities_d',
           'svd_u_s_1', 'svd_u_s_2', 'svd_u_s_3', 'svd_u_s_4', 'svd_u_s_5', 'svd_u_s_6',
           'svd_u_d_1', 'svd_u_d_2', 'svd_u_d_3', 'svd_u_d_4', 'svd_u_d_5', 'svd_u_d_6',
           'svd_v_s_1', 'svd_v_s_2', 'svd_v_s_3', 'svd_v_s_4', 'svd_v_s_5', 'svd_v_s_6',
           'svd_v_d_1', 'svd_v_d_2', 'svd_v_d_3', 'svd_v_d_4', 'svd_v_d_5', 'svd_v_d_6',
           'pref_attachment_follower', 'pref_attachment_followee'],
          dtype='object')

```python
df_final_train['svd_u_d_1'].head(5)
```

    0   -2.038018e-11
    1   -1.093444e-14
    2   -1.913205e-11
    3    1.396549e-19
    4   -3.075034e-13
    Name: svd_u_d_1, dtype: float64

```python
# https://stackoverflow.com/questions/28639551/pandas-create-new-dataframe-column-using-dot-product-of-elements-in-each-row
# row-wise dot product between the source and destination left-singular embeddings:
# multiply each component pair element-wise and sum across the six components
df_final_train['dot_svd_u_s_d'] = df_final_train['svd_u_s_1'] * df_final_train['svd_u_d_1'] + \
                                  df_final_train['svd_u_s_2'] * df_final_train['svd_u_d_2'] + \
                                  df_final_train['svd_u_s_3'] * df_final_train['svd_u_d_3'] + \
                                  df_final_train['svd_u_s_4'] * df_final_train['svd_u_d_4'] + \
                                  df_final_train['svd_u_s_5'] * df_final_train['svd_u_d_5'] + \
                                  df_final_train['svd_u_s_6'] * df_final_train['svd_u_d_6']
# df_final_train['dot_svd_u_s_d_1']
```

```python
df_final_train.shape
```

    (100002, 58)

```python
df_final_test.shape
```

    (50002, 63)

```python
# https://stackoverflow.com/questions/28639551/pandas-create-new-dataframe-column-using-dot-product-of-elements-in-each-row
# same row-wise dot product for the test frame
df_final_test['dot_svd_u_s_d'] = df_final_test['svd_u_s_1'] * df_final_test['svd_u_d_1'] + \
                                 df_final_test['svd_u_s_2'] * df_final_test['svd_u_d_2'] + \
                                 df_final_test['svd_u_s_3'] * df_final_test['svd_u_d_3'] + \
                                 df_final_test['svd_u_s_4'] * df_final_test['svd_u_d_4'] + \
                                 df_final_test['svd_u_s_5'] * df_final_test['svd_u_d_5'] + \
                                 df_final_test['svd_u_s_6'] * df_final_test['svd_u_d_6']
```
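An equivalent, fully vectorised way to obtain the same row-wise dot product is to treat the two six-column blocks as NumPy arrays and contract them with `einsum`. This is an alternative formulation, not the notebook's original code, and assumes the `svd_u_*` columns listed above are present.

```python
import numpy as np

u_s_cols = ['svd_u_s_1', 'svd_u_s_2', 'svd_u_s_3', 'svd_u_s_4', 'svd_u_s_5', 'svd_u_s_6']
u_d_cols = ['svd_u_d_1', 'svd_u_d_2', 'svd_u_d_3', 'svd_u_d_4', 'svd_u_d_5', 'svd_u_d_6']

# Row-wise dot product: element-wise product of the two (n_rows, 6) blocks,
# summed over the six embedding components.
dot_train = np.einsum('ij,ij->i',
                      df_final_train[u_s_cols].to_numpy(),
                      df_final_train[u_d_cols].to_numpy())

# Should match the column-by-column sum of products computed in the cell above.
print(np.allclose(dot_train, df_final_train['dot_svd_u_s_d'].to_numpy()))
```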
```python
df_final_train.drop(['svd_u_s_1', 'svd_u_d_1', 'svd_u_s_2', 'svd_u_d_2',
                     'svd_u_s_3', 'svd_u_d_3', 'svd_u_s_4', 'svd_u_s_5', 'svd_u_s_6',
                     'svd_u_d_4', 'svd_u_d_5', 'svd_u_d_6'], axis=1, inplace=True)

df_final_test.drop(['svd_u_s_1', 'svd_u_d_1', 'svd_u_s_2', 'svd_u_d_2',
                    'svd_u_s_3', 'svd_u_d_3', 'svd_u_s_4', 'svd_u_s_5', 'svd_u_s_6',
                    'svd_u_d_4', 'svd_u_d_5', 'svd_u_d_6'], axis=1, inplace=True)
```

```python
y_train = df_final_train.indicator_link
y_test = df_final_test.indicator_link

df_final_train.drop(['source_node', 'destination_node', 'indicator_link'], axis=1, inplace=True)
df_final_test.drop(['source_node', 'destination_node', 'indicator_link'], axis=1, inplace=True)
```

## Using RandomForest

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from tqdm import tqdm

estimators = [10, 50, 100, 250, 450]
train_scores = []
test_scores = []
for i in estimators:
    clf = RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
                                 max_depth=5, max_features='auto', max_leaf_nodes=None,
                                 min_impurity_decrease=0.0, min_impurity_split=None,
                                 min_samples_leaf=52, min_samples_split=120,
                                 min_weight_fraction_leaf=0.0, n_estimators=i,
                                 n_jobs=-1, random_state=25, verbose=0, warm_start=False)
    clf.fit(df_final_train, y_train)
    train_sc = f1_score(y_train, clf.predict(df_final_train))
    test_sc = f1_score(y_test, clf.predict(df_final_test))
    test_scores.append(test_sc)
    train_scores.append(train_sc)
    print('Estimators = ', i, 'Train Score', train_sc, 'test Score', test_sc)

plt.plot(estimators, train_scores, label='Train Score')
plt.plot(estimators, test_scores, label='Test Score')
plt.xlabel('Estimators')
plt.ylabel('Score')
plt.title('Estimators vs score at depth of 5')
```

```python
depths = [3, 9, 11, 15, 20, 35, 50, 70, 130]
train_scores = []
test_scores = []
for i in depths:
    clf = RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
                                 max_depth=i, max_features='auto', max_leaf_nodes=None,
                                 min_impurity_decrease=0.0, min_impurity_split=None,
                                 min_samples_leaf=52, min_samples_split=120,
                                 min_weight_fraction_leaf=0.0, n_estimators=115,
                                 n_jobs=-1, random_state=25, verbose=0, warm_start=False)
    clf.fit(df_final_train, y_train)
    train_sc = f1_score(y_train, clf.predict(df_final_train))
    test_sc = f1_score(y_test, clf.predict(df_final_test))
    test_scores.append(test_sc)
    train_scores.append(train_sc)
    print('depth = ', i, 'Train Score', train_sc, 'test Score', test_sc)

plt.plot(depths, train_scores, label='Train Score')
plt.plot(depths, test_scores, label='Test Score')
plt.xlabel('Depth')
plt.ylabel('Score')
plt.title('Depth vs score at n_estimators = 115')
plt.show()
```

    depth =  3 Train Score 0.8982946928680544 test Score 0.8731735329774337
    depth =  9 Train Score 0.9600342040434067 test Score 0.9201869989211601
    depth =  11 Train Score 0.9626819495866511 test Score 0.9272162525184688
    depth =  15 Train Score 0.9651058995374503 test Score 0.9285128744088281
    depth =  20 Train Score 0.9651807901009178 test Score 0.9284048164414652
    depth =  35 Train Score 0.9653311147411248 test Score 0.9283777249889634
    depth =  50 Train Score 0.9653311147411248 test Score 0.9283777249889634

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import randint as sp_randint
from scipy.stats import uniform

param_dist = {"n_estimators": sp_randint(105, 125),
              "max_depth": sp_randint(10, 15),
              "min_samples_split": sp_randint(110, 190),
              "min_samples_leaf": sp_randint(25, 65)}

clf = RandomForestClassifier(random_state=25, n_jobs=-1)
rf_random = RandomizedSearchCV(clf, param_distributions=param_dist,
                               n_iter=5, cv=10, scoring='f1', random_state=25)
rf_random.fit(df_final_train, y_train)
```

    RandomizedSearchCV(cv=10, error_score=nan,
                       estimator=RandomForestClassifier(bootstrap=True, ccp_alpha=0.0,
                                                        class_weight=None, criterion='gini',
                                                        max_depth=None, max_features='auto',
                                                        max_leaf_nodes=None, max_samples=None,
                                                        min_impurity_decrease=0.0,
                                                        min_impurity_split=None,
                                                        min_samples_leaf=1, min_samples_split=2,
                                                        min_weight_fraction_leaf=0.0,
                                                        n_estimators=100, n_job...
                       'min_samples_leaf': <scipy.stats._distn_infrastructure.rv_frozen object at 0x7fe7cba80668>,
                       'min_samples_split': <scipy.stats._distn_infrastructure.rv_frozen object at 0x7fe904b964e0>,
                       'n_estimators': <scipy.stats._distn_infrastructure.rv_frozen object at 0x7fe7cba87e80>},
                       pre_dispatch='2*n_jobs', random_state=25, refit=True,
                       return_train_score=False, scoring='f1', verbose=0)

```python
# print('mean test scores', rf_random.cv_results_['mean_test_score'])
# print('mean train scores', rf_random.cv_results_['mean_train_score'])
print(rf_random.best_estimator_)
# print(rf_random.cv_results_)
```

    XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
                  colsample_bynode=1, colsample_bytree=1.0, eta=0.1, gamma=0,
                  learning_rate=0.1, max_delta_step=0, max_depth=12,
                  min_child_weight=1, missing=None, n_estimators=9, n_jobs=1,
                  nthread=None, num_boost_round=250, objective='binary:logistic',
                  random_state=0, reg_alpha=0, reg_lambda=1, scale_pos_weight=1,
                  seed=None, silent=None, subsample=0.9, verbosity=1)

```python
clf = RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
                             max_depth=14, max_features='auto', max_leaf_nodes=None,
                             min_impurity_decrease=0.0, min_impurity_split=None,
                             min_samples_leaf=28, min_samples_split=111,
                             min_weight_fraction_leaf=0.0, n_estimators=121,
                             n_jobs=-1, oob_score=False, random_state=25,
                             verbose=0, warm_start=False)
```

```python
clf.fit(df_final_train, y_train)
y_train_pred = clf.predict(df_final_train)
y_test_pred = clf.predict(df_final_test)
```

```python
from sklearn.metrics import f1_score
print('Train f1 score', f1_score(y_train, y_train_pred))
print('Test f1 score', f1_score(y_test, y_test_pred))
randomforest_f1_score_train = f1_score(y_train, y_train_pred)
randomforest_f1_score_test = f1_score(y_test, y_test_pred)
```

    Train f1 score 0.9666737724721096
    Test f1 score 0.9291272344900107

```python
from sklearn.metrics import confusion_matrix

def plot_confusion_matrix(test_y, predict_y):
    C = confusion_matrix(test_y, predict_y)
    A = (((C.T) / (C.sum(axis=1))).T)   # row-normalised counts: recall matrix
    B = (C / C.sum(axis=0))             # column-normalised counts: precision matrix

    plt.figure(figsize=(20, 4))
    labels = [0, 1]
    cmap = sns.light_palette("blue")

    # raw counts
    plt.subplot(1, 3, 1)
    sns.heatmap(C, annot=True, cmap=cmap, fmt=".3f", xticklabels=labels, yticklabels=labels)
    plt.xlabel('Predicted Class')
    plt.ylabel('Original Class')
    plt.title("Confusion matrix")

    # representing B in heatmap format
    plt.subplot(1, 3, 2)
    sns.heatmap(B, annot=True, cmap=cmap, fmt=".3f", xticklabels=labels, yticklabels=labels)
    plt.xlabel('Predicted Class')
    plt.ylabel('Original Class')
    plt.title("Precision matrix")

    # representing A in heatmap format
    plt.subplot(1, 3, 3)
    sns.heatmap(A, annot=True, cmap=cmap, fmt=".3f", xticklabels=labels, yticklabels=labels)
    plt.xlabel('Predicted Class')
    plt.ylabel('Original Class')
    plt.title("Recall matrix")

    plt.show()
```

```python
print('Train confusion_matrix')
plot_confusion_matrix(y_train, y_train_pred)
```

```python
print('Test confusion_matrix')
plot_confusion_matrix(y_test, y_test_pred)
```

```python
from sklearn.metrics import roc_curve, auc

fpr, tpr, ths = roc_curve(y_test, y_test_pred)
auc_sc = auc(fpr, tpr)
plt.plot(fpr, tpr, color='navy', label='ROC curve (area = %0.2f)' % auc_sc)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic with test data')
plt.legend()
plt.show()
```

```python
features = df_final_train.columns
importances = clf.feature_importances_
indices = (np.argsort(importances))[-25:]
plt.figure(figsize=(10, 12))
plt.title('Feature Importances')
plt.barh(range(len(indices)), importances[indices], color='r', align='center')
plt.yticks(range(len(indices)), [features[i] for i in indices])
plt.xlabel('Relative Importance')
plt.show()
```

## Using XGBOOST

```python
import time
# from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

start_time = time.time()

parameters = {
    'num_boost_round': [100, 250, 500],
    'eta': [0.05, 0.1, 0.3],
    'max_depth': [6, 9, 12],
    'subsample': [0.9, 1.0],
    'colsample_bytree': [0.9, 1.0],
    'n_estimators': [3, 5, 7, 9]
}

model = XGBClassifier()
rf_random = RandomizedSearchCV(model, param_distributions=parameters,
                               n_iter=5, cv=10, scoring='f1', random_state=25)
rf_random.fit(df_final_train, y_train)
# grid = GridSearchCV(estimator=model, param_grid=parameters, cv=2, n_jobs=-1)
# grid.fit(df_final_train, y_train)

# Summarize results (time.time() measures seconds, not milliseconds)
print("Execution time: " + str(time.time() - start_time) + ' seconds')
```

    Execution time: 185.5570821762085 seconds

```python
# print("Best: %f using %s" % (rf_random.best_score_, rf_random.best_params_))
print('mean test scores', rf_random.cv_results_['mean_test_score'])
# print('mean train scores', rf_random.cv_results_['mean_train_score'])
print(rf_random.best_estimator_)
```

    mean test scores [0.97141442 0.92808935 0.97475804 0.97434437 0.97151487]
    XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
                  colsample_bynode=1, colsample_bytree=1.0, eta=0.1, gamma=0,
                  learning_rate=0.1, max_delta_step=0, max_depth=12,
                  min_child_weight=1, missing=None, n_estimators=9, n_jobs=1,
                  nthread=None, num_boost_round=250, objective='binary:logistic',
                  random_state=0, reg_alpha=0, reg_lambda=1, scale_pos_weight=1,
                  seed=None, silent=None, subsample=0.9, verbosity=1)

```python
# depth_gbdt = grid.best_params_['max_depth']
# eta_gbdt = grid.best_params_['eta']
# num_boost_rount_gbdt = grid.best_params_['num_boost_round']
# subsample_gbdt = grid.best_params_['subsample']
# colsample_bytree_gbdt = grid.best_params_['colsample_bytree']
# best_n_estimators = grid.best_params_['n_estimators']
```
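The commented-out lines above show the intended pattern for reusing the tuned values. A hedged sketch of that pattern against the fitted `rf_random` from the randomized search above (key names taken from its `parameters` dictionary) is:

```python
# Pull the tuned values from the fitted RandomizedSearchCV object instead of
# hard-coding them; the keys mirror the param_distributions defined above.
best = rf_random.best_params_
depth_gbdt = best.get('max_depth')
eta_gbdt = best.get('eta')
num_boost_round_gbdt = best.get('num_boost_round')
subsample_gbdt = best.get('subsample')
colsample_bytree_gbdt = best.get('colsample_bytree')
best_n_estimators = best.get('n_estimators')
print(best)
```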
```python
depth = [2, 5, 8, 10, 15]
n_est = [5, 15, 25, 35, 50]

sol_train = pd.DataFrame(index=depth, columns=n_est)
for d in tqdm(depth):
    for n in n_est:
        gbdt = XGBClassifier(booster='gbtree', scale_pos_weight=1, objective='binary:logistic',
                             gamma=0.1, eval_metric='auc', seed=100, max_depth=d, n_estimators=n)
        gbdt.fit(df_final_train, y_train.values.reshape(-1, 1))
        y_prob_train = gbdt.predict_proba(df_final_train)
        fpr_train, tpr_train, threshold_train = roc_curve(y_train, y_prob_train[:, 1])
        roc_auc_train = auc(fpr_train, tpr_train)
        sol_train.at[d, n] = roc_auc_train
# https://stackoverflow.com/questions/30485986/type-error-in-visualising-pandas-dataframe-as-heatmap
sol_train = sol_train[sol_train.columns].astype(float)

sol_test = pd.DataFrame(index=depth, columns=n_est)
for d in tqdm(depth):
    for n in n_est:
        gbdt = XGBClassifier(booster='gbtree', scale_pos_weight=1, objective='binary:logistic',
                             gamma=0.1, eval_metric='auc', seed=100, max_depth=d, n_estimators=n)
        gbdt.fit(df_final_train, y_train.values.reshape(-1, 1))
        y_prob_test = gbdt.predict_proba(df_final_test)
        fpr_test, tpr_test, threshold_test = roc_curve(y_test, y_prob_test[:, 1])
        roc_auc_test = auc(fpr_test, tpr_test)
        sol_test.at[d, n] = roc_auc_test
# https://stackoverflow.com/questions/30485986/type-error-in-visualising-pandas-dataframe-as-heatmap
sol_test = sol_test[sol_test.columns].astype(float)
```

    100%|██████████| 5/5 [05:39<00:00, 67.88s/it]
    100%|██████████| 5/5 [05:37<00:00, 67.48s/it]

```python
import seaborn as sn

fig, ax = plt.subplots(1, 2, figsize=(32, 10))
sn.set(font_scale=1)  # label size
sn.heatmap(sol_train, ax=ax[0], cmap='RdYlGn_r', linewidths=0.5, annot_kws={"size": 20}, annot=True)
ax[0].set_xlabel('N_Estimators')
ax[0].set_ylabel('Tree Depth')
ax[0].set_title('ROC AUC HeatMap for Train')
sn.heatmap(sol_test, ax=ax[1], cmap='RdYlGn_r', linewidths=0.5, annot_kws={"size": 20}, annot=True)
ax[1].set_xlabel('N_Estimators')
ax[1].set_ylabel('Tree Depth')
ax[1].set_title('ROC AUC HeatMap for Test')
plt.show()
```

### Conclusion

- Best depth is 5 and best n_estimators is 5; for all other values, the train set overfits.

```python
# xgb_all_models = xgb.XGBRegressor(n_jobs=10, random_state=15)
gbdt = XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
                     colsample_bynode=1, colsample_bytree=1.0, eta=0.1, gamma=0,
                     learning_rate=0.1, max_delta_step=0, max_depth=5,
                     min_child_weight=1, missing=None, n_estimators=5, n_jobs=1,
                     nthread=None, num_boost_round=250, objective='binary:logistic',
                     random_state=0, reg_alpha=0, reg_lambda=1, scale_pos_weight=1,
                     seed=None, silent=None, subsample=0.9, verbosity=1)

gbdt = gbdt.fit(df_final_train, y_train)

y_prob_test = gbdt.predict_proba(df_final_test)
y_prob_train = gbdt.predict_proba(df_final_train)

fpr_train, tpr_train, threshold_train = roc_curve(y_train, y_prob_train[:, 1])
roc_auc_train = auc(fpr_train, tpr_train)
fpr_test, tpr_test, threshold_test = roc_curve(y_test, y_prob_test[:, 1])
roc_auc_test_gbdt = auc(fpr_test, tpr_test)

plt.title('Receiver Operating Characteristic for GBDT (XGBoost)')
plt.plot(fpr_train, tpr_train, 'b', label='TRAIN AUC = %0.2f' % roc_auc_train)
plt.plot(fpr_test, tpr_test, 'r', label='TEST AUC = %0.2f' % roc_auc_test_gbdt)
plt.legend(loc='lower right')
plt.plot([0, 1], [0, 1], 'r--')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
```

```python
y_train_pred = gbdt.predict(df_final_train)
xgboost_f1_score_train = f1_score(y_train, y_train_pred)
print('Train confusion_matrix')
plot_confusion_matrix(y_train, y_train_pred)
```

```python
y_test_pred = gbdt.predict(df_final_test)
xgboost_f1_score_test = f1_score(y_test, y_test_pred)
print('Test confusion_matrix')
plot_confusion_matrix(y_test, y_test_pred)
```

```python
# https://stackoverflow.com/questions/37627923/how-to-get-feature-importance-in-xgboost
from xgboost import plot_importance

# feature_important = gbdt.get_score(importance_type='weight')
# keys = list(feature_important.keys())
# values = list(feature_important.values())
# data = pd.DataFrame(data=values, index=keys, columns=["score"]).sort_values(by="score", ascending=True)
# data.plot(kind='barh')

plot_importance(gbdt, max_num_features=15)
plt.show()
```

## Conclusion
```python
#!pip3 install --user prettytable
```

```python
# http://zetcode.com/python/prettytable/
from prettytable import PrettyTable

x = PrettyTable()
x.field_names = ["Model", "Depth", "N_Estimators", "AUC Score", 'Train F1', 'Test F1']
x.add_row(["Random Forest", 14, 121, auc_sc, randomforest_f1_score_train, randomforest_f1_score_test])
x.add_row(["GBDT using XGBOOST", 5, 5, roc_auc_test_gbdt, xgboost_f1_score_train, xgboost_f1_score_test])
print(x)
```

    +--------------------+-------+--------------+--------------------+--------------------+--------------------+
    |       Model        | Depth | N_Estimators |     AUC Score      |      Train F1      |      Test F1       |
    +--------------------+-------+--------------+--------------------+--------------------+--------------------+
    |   Random Forest    |   14  |     121      | 0.9326465698236833 | 0.9666737724721096 | 0.9291272344900107 |
    | GBDT using XGBOOST |   5   |      5       | 0.9548305346502652 | 0.9295903593936298 | 0.927525731455239  |
    +--------------------+-------+--------------+--------------------+--------------------+--------------------+